
Advancements in AI reasoning models are expected to slow within a year as scaling limits approach, according to research from the nonprofit institute EpochAI. The rapid gains fueled by ever-larger compute budgets are nearing practical and economic ceilings.
EpochAI analyst Josh You projects that current training growth rates are unsustainable, with progress expected to slow as compute scaling levels off at about fourfold per year.
What is reasoning AI?
Reasoning AI refers to models, such as OpenAI’s o3, Google’s Gemini 2.0, DeepSeek-R1, and IBM’s Granite 2.0, that apply logic and inference to analyze data, identify patterns, and make decisions.
The algorithms are trained to receive, recognize, and classify knowledge-based information, and use reasoning techniques, such as inductive, deductive, analogical, spatial, and probabilistic reasoning, to make real-time decisions.
In recent years, progress in the capabilities of reasoning AI models has fueled optimism among AI researchers. Frontier models have posted substantial gains on benchmarks measuring math and programming skills. The open question is how far the reasoning techniques used to train these models can scale; at some point, the rate of improvement will slow.
Why progress in reasoning AI could hit a wall
“If reasoning training continues to scale at 10× every few months, in line with the jump from o1 to o3, it will reach the frontier of total training compute before long, perhaps within a year. At that point, the scaling rate will slow and converge with the overall growth rate in training compute of ~4× per year. Progress in reasoning models may slow down after this point as well,” You wrote in the analysis.
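You’s arithmetic can be illustrated with a toy projection. A minimal sketch: the two growth rates below come from the quote (10× every few months for reasoning training, ~4× per year for total training compute), while the starting gap between the two is an assumed placeholder for illustration, not an Epoch AI figure:

```python
# Toy projection of when reasoning training compute catches up to the
# frontier of total training compute, given the growth rates in the quote.
# The 1,000x starting gap is an assumption for illustration only.

REASONING_GROWTH = 10 ** (1 / 4)  # 10x every 4 months -> monthly growth factor
FRONTIER_GROWTH = 4 ** (1 / 12)   # ~4x per year -> monthly growth factor
INITIAL_GAP = 1_000               # assumed: reasoning compute starts 1,000x below frontier

months = 0
gap = INITIAL_GAP
while gap > 1:
    # The gap shrinks each month because reasoning compute grows faster.
    gap *= FRONTIER_GROWTH / REASONING_GROWTH
    months += 1

print(f"Reasoning compute reaches the frontier in about {months} months")
# prints "Reasoning compute reaches the frontier in about 16 months"
```

Under these assumed numbers, the catch-up happens in well under two years, which is consistent with the "perhaps within a year" framing; after that point, reasoning training can only grow as fast as the frontier itself, i.e. ~4× per year.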
Reasoning AI models are trained with massive datasets and reasoning techniques that enable them to apply logic and inference when analyzing data.
Progress in training AI models has been tied to scaling. However, even as AI companies increasingly adopt reasoning techniques in their models, they typically do not disclose the exact scale of their reasoning model training, making external estimates difficult.
Undisclosed scaling practices cloud the future of AI progress
OpenAI claimed that the reasoning training of its o3 model was scaled up 10 times compared to its o1 model, whose training scale is comparable to DeepSeek-R1’s. Beyond that, little is known about how reasoning training compute is scaling in the latest models.
Top AI developers tend not to disclose the scale of their reasoning models. Industry analysts often rely on indirect indicators and estimations to assess how much further reasoning models can scale.
Companies are not shy about spending billions of dollars to scale up their models for competitive advantage. But once the upper limits of training compute are reached, the rate of scaling is expected to decline.
According to TechCrunch, the analysis reflects broader concerns that AI progress, driven heavily by compute scaling, may face diminishing returns across multiple AI domains, not just reasoning tasks.