The excitement surrounding cutting-edge artificial intelligence (AI) technology seems to be losing momentum. OpenAI's next-generation language model, reportedly code-named Orion, is said to fall short of the generational leap that GPT-4 delivered over GPT-3. According to reports from Bloomberg and The Information, the model is struggling, particularly on tasks such as coding.
But OpenAI is not alone in facing these challenges. Google's Gemini model and Anthropic's Claude 3.5 Opus are also reportedly falling short of expectations. This pattern of diminishing returns suggests that simply making models bigger and training them on more data may not be a sustainable long-term strategy.
Margaret Mitchell, the chief ethics scientist at Hugging Face, argues that different training methods may be needed to achieve human-like intelligence and adaptability in AI. Relying heavily on scaling (increasing model size and training data) is proving expensive and unsustainable for companies, and sourcing new, high-quality, human-generated data, especially for language tasks, has become a significant challenge.
The cost of developing and running state-of-the-art AI models continues to climb. Anthropic CEO Dario Amodei predicts that by 2027, a single model could cost more than $10 billion to create. With Opus and Gemini showing performance issues and limited progress, the rapid growth of the AI industry appears to be slowing.
As mathematics professor Noah Giansiracusa points out, the era of rapid AI advancements may have been short-lived and unsustainable. The industry now needs to explore new approaches to AI development that can lead to significant breakthroughs without breaking the bank.
In conclusion, the AI industry may be entering a period of slower progress as companies run up against the limits of current scaling strategies. A new direction appears necessary to drive AI innovation forward in a sustainable and cost-effective manner.