OpenAI’s next-gen Orion model is hitting a serious bottleneck, according to a new report – here’s why
In certain areas, next-gen Orion is failing to impress compared to the existing GPT-4 model.
- OpenAI is reportedly having trouble with Orion in certain areas like coding
- Progress is slower than expected due to quality issues with training data
- The next-gen model could also be more expensive
OpenAI is running into difficulties with Orion, the next-gen model powering its AI. The company is struggling in certain areas when it comes to the performance gains realized with the successor to GPT-4.
This comes from a report by The Information, citing OpenAI employees, who claim that the increase in quality seen with Orion is ‘far smaller’ than that witnessed when moving from GPT-3 to GPT-4.
We’re also told that some OpenAI researchers are saying that Orion “isn’t reliably better than its predecessor [GPT-4] in handling certain tasks.” Which tasks would those be? Apparently, coding is a weaker point, with Orion possibly not outdoing GPT-4 in this arena – although it is also noted that Orion’s language skills are stronger.
So, for general-use queries – and for jobs such as summarizing or rewriting text – it sounds like things are going (relatively) well. However, these rumors don’t sound quite as hopeful for those looking to use AI as a coding helper.
So, what’s the problem here?
By all accounts, OpenAI is running into something of a wall when it comes to the data available to train its AI. As the report makes clear, there’s a “dwindling supply of high-quality text and other data” that LLMs (Large Language Models) can work with in pre-release training to hone their powers in solving knottier problems like resolving coding bugs.
These LLMs have chomped through a lot of the low-hanging fruit, and now finding this good-quality training data is becoming a considerably more difficult process – slowing down advancement in some respects.
On top of that, this training will become more intensive in terms of computing resources, meaning that developing (and running) Orion – and further AI models down the line – will become much more expensive. Of course, the user of the AI will end up footing that bill, one way or another, and there’s even talk of more advanced models becoming effectively “financially unfeasible” to develop.
Not to mention the impact on the environment, with bigger data centers whirring away and drawing more power from our grids, all at a time of increasing concern around climate change.
While we need to take this report with an appropriate amount of caution, there are worrying rumblings here, foreshadowing a serious reality check for the development of AI going forward.
The Information further notes that a different approach may be taken in terms of improving AI models on an ongoing basis after their initial training – indeed, this may become a necessity from the sound of things. We shall see.
Orion is expected to debut early in 2025 (and not imminently, as some rumors have hinted), and it may not be called ChatGPT-5, with OpenAI possibly set to change the naming scheme of its AI completely with this next-gen model.