International leaders gathered to discuss the global development of artificial intelligence at the AI Seoul Summit, which was co-hosted by the Republic of Korea and the United Kingdom.
Representatives from the governments of 20 nations, the European Commission, prominent academic institutions and civil society groups were present. A number of AI companies also attended, including OpenAI, Amazon, Microsoft, Meta and Google DeepMind.
The meeting, which took place on May 21 and 22, followed on from the AI Safety Summit held in Bletchley Park, Buckinghamshire, U.K., last November.
One of the main objectives was to advance the development of a comprehensive set of international standards and rules for AI safety. To that end, a number of key steps were taken:
- Tech companies committed to publishing safety frameworks for their frontier AI models.
- Nations agreed to form an international network of AI Safety Institutes.
- Nations agreed to collaborate on risk thresholds for frontier AI models that could aid in the development of biological and chemical weapons.
- The U.K. government offered up to £8.5 million in grants for research into protecting society from AI risks.
In a closing statement, U.K. Technology Secretary Michelle Donelan said the world is taking practical steps to become more resilient to the risks of AI, marking the start of Phase Two of the U.K.'s AI safety agenda, which will deepen understanding of the science that will underpin a shared approach to AI safety in the future.
1. Tech companies committed to publishing safety frameworks for their frontier AI models.
Sixteen global AI companies have agreed to new voluntary commitments to implement best practices related to frontier AI safety. Frontier AI is defined as highly capable general-purpose AI models or systems that can perform a wide variety of tasks and match or exceed the capabilities of today's most advanced models.
The companies that signed the commitments are:
- Amazon (USA).
- Anthropic (USA).
- Cohere (Canada).
- Google (USA).
- G42 (United Arab Emirates).
- IBM (USA).
- Inflection AI (USA).
- Meta (USA).
- Microsoft (USA).
- Mistral AI (France).
- Naver (South Korea).
- OpenAI (USA).
- Samsung Electronics (South Korea).
- Technology Innovation Institute (United Arab Emirates).
- xAI (USA).
- Zhipu.ai (China).
The so-called Frontier AI Safety Commitments state that:
- Organizations effectively identify, assess and manage risks when developing and deploying their frontier AI models and systems.
- Organizations are accountable for safely developing and deploying their frontier AI models and systems.
- Organizations' approaches to frontier AI safety are appropriately transparent to external actors, including governments.
As a result of the commitments, these companies will also publish safety frameworks explaining how they will measure the risks of the frontier models they develop. The frameworks will examine the AI's potential for misuse, taking into account its capabilities, safeguards and deployment contexts. The companies must define when severe risks would be "deemed intolerable" and what they will do to ensure thresholds are not exceeded.
SEE: Generative AI Defined: How It Works, Benefits and Dangers
The undersigned companies have agreed to "not develop or deploy (the) model or system at all" if mitigations cannot keep risks within the thresholds. Their thresholds will be released ahead of the AI Action Summit, scheduled for February 2025 in France.
However, some critics argue that these voluntary commitments may not be stringent enough to meaningfully influence the business decisions of these AI giants.
The real test will be whether these companies follow through on their commitments and how transparent they are about their safety practices, according to Joseph Thacker, principal AI engineer at security firm AppOmni. "I didn't see any mention of consequences, and aligning incentives is extremely important."
Fran Bennett, interim director of the Ada Lovelace Institute, told The Guardian, "Companies determining what is safe and what is dangerous, and voluntarily choosing what to do about that, that's problematic.
"It's great to be thinking about safety and establishing standards, but now you need some teeth: you need regulation, and you need some institutions that can draw the line from the viewpoint of the people affected, not those who are creating the things."
2. Nations agreed to form an international network of AI Safety Institutes
World leaders from 10 nations and the E.U. have agreed to collaborate on AI safety research by establishing a network of AI Safety Institutes. Each signed the Seoul Statement of Intent toward International Cooperation on AI Safety Science in response to the technology's unprecedented advances and its impact on our economies and societies.
The nations that signed the statement are:
- Australia.
- Canada.
- European Union.
- France.
- Germany.
- Italy.
- Japan.
- Republic of Korea.
- Republic of Singapore.
- United Kingdom.
- United States of America.
Institutions that form the network will be similar to the U.K.'s AI Safety Institute, which was launched at November's AI Safety Summit. Its three main objectives are to evaluate existing AI systems, conduct fundamental AI safety research and share information with other national and international actors.
SEE: U.K.'s AI Safety Institute Launches Open-Source Testing Platform
The U.S. has its own AI Safety Institute, which was formally established by NIST in February 2024. It was created to work on the priority actions identified in the AI Executive Order of October 2023, which include developing standards for the safety and security of AI systems. In recent months, Singapore, France and South Korea have established similar research bodies.
Donelan attributed the creation of the international network to the "Bletchley effect," stemming from the establishment of the U.K.'s AI Safety Institute at November's AI Safety Summit.
The U.K. and U.S. governments formally agreed in April 2024 to collaborate on advanced AI model safety, largely by sharing research findings from their respective AI Safety Institutes. The new Seoul agreement sees similar institutes established in other nations.
To promote the safe development of AI globally, the research network will:
- Ensure interoperability between technical work and AI safety by using a risk-based approach in the design, development, deployment and use of AI.
- Share information about models, including their limitations, capabilities, risks and any safety incidents they are involved in.
- Share best practices on AI safety.
- Promote socio-cultural, linguistic and gender diversity and environmental sustainability in AI development.
- Collaborate on AI governance.
By the AI Action Summit in France in February 2025, the AI Safety Institutes will need to demonstrate their progress in AI safety testing and evaluation in order to move regulatory discussions forward.
3. The E.U. and 27 countries agreed to work together to establish risk thresholds for frontier AI models that could aid in the development of biological and chemical weapons.
A number of nations have agreed to work together to develop risk thresholds for frontier AI systems that could pose severe threats if misused. They will also agree on when model capabilities could pose "severe risks" without appropriate mitigations.
Such high-risk systems include those that could help malicious actors gain access to biological or chemical weapons and those that can evade human oversight. An AI could potentially achieve the latter through safeguard circumvention, manipulation or autonomous replication.
The signatories will work with AI companies, civil society, and academia to develop their proposals for risk thresholds and present them at the AI Action Summit in Paris.
SEE: NIST Establishes AI Safety Consortium
The Seoul Ministerial Statement, signed by 27 nations and the E.U., also backs the commitments made by the 16 AI companies that signed up to the Frontier AI Safety Commitments. China, notably, did not sign the statement despite being involved in the summit.
The nations that signed the Seoul Ministerial Statement are Australia, Canada, Chile, France, Germany, India, Indonesia, Israel, Italy, Japan, Kenya, Mexico, the Netherlands, Nigeria, New Zealand, the Philippines, Republic of Korea, Rwanda, Kingdom of Saudi Arabia, Singapore, Spain, Switzerland, Türkiye, Ukraine, United Arab Emirates, United Kingdom, United States of America and European Union.
4. The U.K. government offers up to £8.5 million in grants for research into protecting society from AI risks.
Donelan announced that the government will award up to £8.5 million in research grants toward studying ways to mitigate AI risks such as deepfakes and cyberattacks. Grantees will work in the field of so-called "systemic AI safety," which looks at understanding and intervening at the societal level in which AI systems operate, rather than at the systems themselves.
SEE: 5 Deepfake Scams That Threaten Enterprises
Examples of proposals that might be eligible for a Systemic AI Safety Fast Grant include:
- Preventing the spread of fake news and misinformation by intervening on the digital platforms that spread them.
- Preventing AI-enabled cyberattacks on critical infrastructure, like those providing energy or healthcare.
- Monitoring or at least minimizing the potentially harmful secondary effects of AI systems that operate autonomously on digital platforms, such as social media bots.
Eligible projects might also cover ways to help society harness the benefits of AI systems and adapt to the transformations they bring, such as through increased productivity. Applicants must be based in the U.K., but they will be encouraged to collaborate with researchers in other countries, potentially in association with international AI Safety Institutes.
The Fast Grant programme, which expects to offer around 20 grants, is led by the U.K. AI Safety Institute in partnership with U.K. Research and Innovation and The Alan Turing Institute. They are specifically looking for initiatives that "offer concrete, measurable solutions to significant systemic risks from AI." The most promising proposals may be developed into longer-term projects and receive further funding.
U.K. Prime Minister Rishi Sunak also announced the 10 finalists of the Manchester Prize, with each team receiving £100,000 to develop their AI innovations in energy, environment or infrastructure.