The AI industry is undergoing a complex and transitional period, according to Stanford University’s 2025 AI Index, published by the Institute for Human-Centered AI. While AI continues to transform the tech sector, public sentiment remains mixed, underscoring the rapidly shifting nature of the field.
Below are key takeaways from Stanford’s latest findings on the current state of artificial intelligence, both generative and otherwise.
Investment in AI increases
Investment in AI is growing. In the US, private investors poured $109.1 billion into AI. Globally, private investors contributed $33.9 billion to generative AI specifically. The share of businesses reporting that they use AI grew from 55% in 2023 to 78% in 2024.
Most of 2024’s notable AI models were produced in the US, with China and Europe following. While China produced 15 notable models to the US’s 40, China’s models nearly match America’s in quality, and China produces more AI-related patents and publications. The Middle East, Latin America, and Southeast Asia have also produced notable AI launches.
The most advanced AI models are ‘reasoning’ models
Frontier models today typically employ “complex reasoning,” an increasingly competitive part of the field. Even so, Stanford pointed out, reasoning remains a challenge: frontier AI still struggles with complex reasoning benchmarks and logic tasks. Although companies often invoke human-level intelligence, pattern-recognition tasks that are simple for humans still elude the most advanced AI.
SEE: Meta-hallucinations: Anthropic’s Claude 3.7 Sonnet and DeepSeek-R1 don’t always accurately reveal, in their stated reasoning, how they arrived at an answer.
AI benchmark scores improve
Stanford said benchmark scores are steadily improving, with tests like MMMU now considered standard and AI systems scoring high on them. Video generation has also improved: AI-generated videos are now longer, more realistic, and more consistent from moment to moment.
FDA approvals of AI-enabled medical devices increase
In 2023, the FDA approved a growing number of medical devices incorporating AI: 223, compared with 15 in 2015 (these devices don’t necessarily include generative AI). Autonomous vehicles, such as Waymo’s growing fleet, show AI becoming ever more integrated into daily life.
Responsible AI risks need more attention
Generally accepted definitions of how to use AI responsibly have been slow to emerge, Stanford pointed out. “Among companies, a gap persists between recognizing RAI [responsible AI] risks and taking meaningful action,” the researchers wrote. However, global organizations have released frameworks to address this.
SEE: How to Keep AI Trustworthy From TechRepublic Premium
Consumers worry about AI’s drawbacks compared to benefits
Consumer sentiment does not always match business sentiment. Significant proportions of respondents in Canada (40%), the US (39%), and the Netherlands (36%) said that AI would prove more harmful than beneficial. Elsewhere, the public is more on board: majorities in China (83%), Indonesia (80%), and Thailand (77%) believe AI has more benefits than drawbacks.
Confidence that AI companies will protect users’ data fell from 50% in 2023 to 47% in 2024 globally.
Barriers to AI decrease, though environmental impact is still a concern
As with any technology, people gradually learn to produce it faster and more efficiently. According to Stanford’s data, hardware costs declined by 30% annually, while energy efficiency improved by 40% per year.
“Together, these trends are rapidly lowering the barriers to advanced AI,” the researchers wrote.
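To see why these rates lower barriers so quickly, it helps to compound them over a few years. The snippet below is an illustrative back-of-the-envelope calculation (the multi-year figures are our arithmetic, not numbers from the report), applying Stanford’s cited annual rates for three years:

```python
def compounded(rate_per_year: float, years: int) -> float:
    """Return the overall multiplier after `years` of a constant annual rate.

    rate_per_year is the fractional change per year, e.g. -0.30 for a
    30% annual decline or +0.40 for a 40% annual improvement.
    """
    return (1 + rate_per_year) ** years

# Hardware cost after three years of 30% annual declines:
cost_multiplier = compounded(-0.30, 3)        # 0.7 ** 3 ≈ 0.343

# Energy efficiency after three years of 40% annual gains:
efficiency_multiplier = compounded(0.40, 3)   # 1.4 ** 3 ≈ 2.744

print(f"Cost falls to about {cost_multiplier:.1%} of its starting level")
print(f"Efficiency rises to about {efficiency_multiplier:.2f}x")
```

At those rates, running comparable hardware costs roughly a third as much after three years while delivering nearly triple the energy efficiency, which is the compounding effect behind the researchers’ point about rapidly falling barriers.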
Improved energy efficiency does not necessarily mean lower energy use, however. Power consumption has grown faster than efficiency gains can offset, so carbon emissions from frontier models continue to rise.