According to a blog post from Sam Altman, OpenAI's main focus for the coming year will be developing "superintelligence," which he described as AI with greater-than-human capabilities.
Although OpenAI's current products already have a wide range of capabilities, Altman asserted that superintelligence will let users do much more. He highlighted accelerating scientific and medical discovery as the key use case, which he believes could benefit society.
"This sounds like science fiction right now, and somewhat crazy to even talk about it. That's alright; we've been there before and we're OK with being there again," he wrote.
Altman's confidence that his company now knows "how to build AGI as we have traditionally understood it" has motivated the shift in focus. AGI, or artificial general intelligence, is generally defined as a system that matches human abilities, whereas superintelligence exceeds them.
SEE: OpenAI's Sora: All You Need to Know
Altman has been contemplating superintelligence for years, but there are problems.
When discussing the dangers of AI systems and how to align them with human values, OpenAI has been referring to superintelligence for several years. OpenAI announced in July 2023 that it would be hiring researchers to investigate superintelligent AI.
The team would reportedly dedicate 20% of OpenAI's total computing power to training a "human-level automated alignment researcher" to keep future AI products in line. Superintelligent AI raises concerns because it may not be possible to control and may not share human values.
In a blog post at the time, OpenAI Head of Alignment Jan Leike and co-founder and Chief Scientist Ilya Sutskever wrote, "We need scientific and technical breakthroughs to steer and control AI systems much smarter than us."
SEE: OpenAI and Anthropic Sign Deals With U.S. AI Safety Institute
Four months after forming the team, however, another company post revealed that they "still (did) not know how to reliably steer and control superhuman AI systems" and had no way of "preventing (a superintelligent AI) from going rogue."
In May, OpenAI's superintelligence safety team was disbanded, and several senior personnel, including team co-leads Jan Leike and Ilya Sutskever, left over concerns that "safety culture and processes have taken a backseat to shiny products." The team's work was absorbed by OpenAI's other research efforts, according to Wired.
In his blog post, Altman nevertheless stressed the importance of safety to OpenAI. "We continue to believe that the best way to make an AI system safe is by iteratively and gradually releasing it into the world, giving society time to adapt and co-evolve with the technology, learning from experience, and continuing to make the technology safer," he wrote.
"We believe in the importance of being world leaders on safety and alignment research, and in guiding that research with feedback from real world applications."
Superintelligence may still be decades away.
There is debate over how long it will take to achieve superintelligence. It may develop within a generation, according to the November 2023 blog post. However, almost a year later, Altman predicted it might be "a few thousand days" away.
However, Brent Smolinski, IBM VP and global head of Technology and Data Strategy, said this was "completely exaggerated" in a company blog post from September 2024. "I don't think we're even in the right zip code for getting to superintelligence," he said.
AI also requires much more data than humans to learn a new capability, is limited in its range of capabilities, and lacks consciousness or self-awareness, which Smolinski views as a key indicator of superintelligence.
He adds that quantum computing might be the only way to unlock AI that surpasses human intelligence. At the start of the decade, IBM predicted that quantum computing would begin solving real business problems before 2030.
SEE: Quantum Cloud Computing's Breakthrough Ensures Its Security and Privacy
Altman anticipates that AI agents will join the workforce in 2025.
AI agents are semi-autonomous generative AI systems that can chain together tasks or interact with applications to carry out instructions or make decisions in an unstructured environment. For instance, Salesforce uses AI agents to call sales prospects.
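The "chaining" idea behind AI agents can be illustrated with a minimal sketch. Everything here is hypothetical, not any vendor's real API: in a production agent, a language model would choose each tool, whereas this toy hard-codes a two-step policy (call a prospect, then log the result).

```python
# Hypothetical illustration of an agent chaining tool calls.
# The tool names (call_prospect, log_outcome) are invented for this example.

def call_prospect(name: str) -> str:
    """Stand-in for an outreach call; a real agent would hit a telephony API."""
    return f"called {name}"

def log_outcome(result: str) -> str:
    """Stand-in for recording the result in a CRM."""
    return f"logged: {result}"

class Agent:
    """Chains tool calls to fulfil an instruction."""

    def __init__(self) -> None:
        self.tools = {"call": call_prospect, "log": log_outcome}

    def run(self, prospect: str) -> list[str]:
        # A real agent would ask a model which tool to use next;
        # here the policy is fixed: call first, then log the outcome.
        steps = [self.tools["call"](prospect)]
        steps.append(self.tools["log"](steps[-1]))
        return steps

print(Agent().run("Acme Corp"))
# → ['called Acme Corp', 'logged: called Acme Corp']
```

The point of the sketch is the loop structure: each step's output feeds the next tool, which is what lets agents operate with less human supervision than a single prompt-and-response exchange.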
At the end of last year, TechRepublic predicted that AI agents would gain popularity in 2025. Altman agrees, writing that "we may see the first AI agents join the workforce" and that their work will likely materially change the output of companies.
SEE: IBM: Enterprise IT Facing Imminent AI Agent Revolution
Software development will be the first domain where agents take center stage, according to a Gartner research paper. "Existing AI coding assistants gain maturity, and AI agents provide the next set of incremental benefits," the authors wrote.
By 2028, 33% of enterprise software applications will include agentic AI, up from less than 1% in 2024, according to the Gartner paper. By that time, agents will make at least 15% of daily work decisions and handle a fifth of all customer interactions at online storefronts.
"We are beginning to turn our aim beyond that, to superintelligence," Altman wrote. "We are pretty confident that in the next few years, everyone will see what we see, and that the need to act with great care, while still maximizing broad benefit and empowerment, is so important."