The U.K. government has formally agreed to work with the U.S. in developing tests for advanced artificial intelligence models. A Memorandum of Understanding, which is a non-legally binding agreement, was signed on April 1, 2024 by U.K. Technology Secretary Michelle Donelan and U.S. Commerce Secretary Gina Raimondo.
Both countries will now “align their scientific approaches” and work together to “accelerate and rapidly iterate robust suites of evaluations for AI models, systems, and agents.” This is in line with the commitments made at the first global AI Safety Summit in November, when governments from around the world accepted their role in safety testing the next generation of AI models.
What AI initiatives have the U.K. and U.S. agreed on?
Under the MoU, the U.K. and U.S. have agreed on how they will build a common approach to AI safety testing and collaborate on developments. Specifically, this may include:
- Developing a shared process to evaluate the safety of AI models.
- Performing at least one joint testing exercise on a publicly accessible model.
- Collaborating on technical AI safety research, both to advance collective understanding of AI models and to align any new policies.
- Exchanging personnel between the respective institutes.
- Sharing information about all of the activities undertaken at the respective institutes.
- Working with other governments on developing AI standards, including safety.
“Because of our collaboration, our institutes will gain a better understanding of AI systems, conduct more robust evaluations, and issue more rigorous guidance,” Secretary Raimondo said in a statement.
SEE: Learn How to Use AI for Your Business (TechRepublic Academy)
The MoU primarily relates to moving forward on plans made by the AI Safety Institutes in the U.K. and U.S. The U.K.’s institute was launched at the AI Safety Summit with the three primary goals of evaluating existing AI systems, performing foundational AI safety research and sharing information with other national and international actors. Companies including OpenAI, Meta and Microsoft have agreed for their latest generative AI models to be independently evaluated by the U.K. AISI.
Similarly, the U.S. AISI, formally established by NIST in February 2024, was created to work on the priority actions outlined in the AI Executive Order issued in October 2023; these include developing standards for the safety and security of AI systems. The U.S.’s AISI is supported by an AI Safety Institute Consortium, whose members include Meta, OpenAI, NVIDIA, Google, Amazon and Microsoft.
Will this lead to the regulation of AI companies?
While neither the U.K. nor the U.S. AISI is a regulatory body, the results of their combined research are likely to inform future policy changes. According to the U.K. government, its AISI “will provide foundational insights to our governance regime,” while the U.S. institute will “develop technical guidance that will be used by regulators.”
The European Union is arguably still a step ahead, as its landmark AI Act was voted into law on March 13, 2024. The legislation outlines measures intended to ensure that AI is used safely and ethically, among other rules regarding AI for facial recognition and transparency.
SEE: Most cybersecurity professionals expect AI to impact their jobs.
The majority of the big tech players, including OpenAI, Google, Microsoft and Anthropic, are based in the U.S., where there are currently no hardline regulations in place that could curtail their AI activities. October’s Executive Order does provide guidance on the use and regulation of AI, and positive steps have been taken since it was signed; however, it is not legislation. The AI Risk Management Framework, finalized by NIST in January 2023, is also voluntary.
In reality, these major tech companies are largely left to regulate themselves, and last year they launched the Frontier Model Forum to establish their own “guardrails” to mitigate the risk of AI.
What do legal and AI experts think about the safety testing?
AI regulation should be a priority
The establishment of the U.K. AISI was not a universally popular way of keeping a handle on AI in the country. In February, the chief executive of Faculty AI, a company involved with the institute, said that developing robust standards may be a wiser use of government resources than trying to vet every AI model.
“I think it’s important that it sets standards for the wider world, rather than trying to do everything itself,” Marc Warner told The Guardian.
Tech law experts hold a similar position on this week’s MoU. Aron Solomon, legal analyst and chief strategy officer at legal marketing agency Amplify, told TechRepublic in an email that the two countries’ efforts might be better spent on developing hardline regulation rather than research.
“The issue is that very few legislators, especially in the U.S. Congress, have a deep enough understanding of AI to regulate it,” he said.
Solomon continued: “We should be exiting, rather than entering, a period of necessary deep study, where lawmakers really focus their collective minds on how AI works and how it will be used in the future. But, as highlighted by the recent U.S. debacle where lawmakers are trying to outlaw TikTok, they, as a group, don’t understand technology, so they aren’t well-positioned to intelligently regulate it.
“This leaves us in the tough place we are right now. AI is evolving far faster than regulators can regulate. But deferring regulation in favor of anything else at this point is just delaying the inevitable.”
Indeed, the capabilities of AI models are constantly evolving and expanding, so the safety tests performed by the two institutes will need to do the same. Christoph Cemper, chief executive officer of prompt management platform AIPRM, warned in an email that “some bad actors may attempt to circumvent tests or misapply dual-use AI capabilities.” Dual-use refers to technologies that can be employed for both peaceful and hostile purposes.
Cemper said: “While testing can flag technical safety concerns, it does not replace the need for guidelines on ethical, policy and governance questions … Ideally, the two governments will view testing as the initial phase in an ongoing, collaborative process.”
SEE: Generative AI may increase the global ransomware threat, according to a National Cyber Security Centre study.
Research is needed for effective AI regulation
While voluntary guidelines may not be enough to genuinely change the activities of the tech giants, hardline legislation could stifle progress in AI if not properly considered, according to Dr. Kjell Carlsson.
The former ML/AI analyst and current head of strategy at Domino Data Lab told TechRepublic in an email: “There are AI-related areas today where harm is a real and growing threat. These are areas like fraud and cybercrime, where regulation usually exists but is ineffective.
“Unfortunately, few of the proposed AI regulations, such as the EU AI Act, are designed to effectively tackle these threats, as they mostly focus on commercial AI offerings that criminals do not use. As such, many of these regulatory efforts will damage innovation and increase costs, while doing little to improve actual safety.”
Therefore, many experts believe that prioritizing research and collaboration over regulation is the more effective approach in the U.K. and U.S.
Dr. Carlsson said: “Regulation works when it comes to preventing established harm from known use cases. Today, however, most of the use cases for AI have yet to be discovered, and nearly all of the harm is hypothetical. In contrast, there is a real need for research on how to effectively test AI models, mitigate their risks and ensure their safety.
“As such, the establishment and funding of these new AI Safety Institutes, and these international collaboration efforts, are an excellent public investment, not just for ensuring safety, but also for fostering the competitiveness of firms in the U.S. and the U.K.”