Close Menu
Alan C. Moore
    What's Hot

    Tapper’s ‘Exposé’ Of Biden Cover-Up Actually Preserves It By Giving Anonymity To Perpetrators

    May 15, 2025

    Justice Thomas Exposes The Absurdity Of Nationwide Injunctions With One Simple Question

    May 15, 2025

    Biden’s Autopen Pardons May Just Get Invalidated

    May 15, 2025
    Facebook X (Twitter) Instagram
    Trending
    • Tapper’s ‘Exposé’ Of Biden Cover-Up Actually Preserves It By Giving Anonymity To Perpetrators
    • Justice Thomas Exposes The Absurdity Of Nationwide Injunctions With One Simple Question
    • Biden’s Autopen Pardons May Just Get Invalidated
    • Justice Thomas Destroys the Case for Nationwide Injunctions With One Devastating Question
    • Austin Metcalf’s Mom Disrespected by Son’s Killer’s School District
    • JD Vance will head to Rome for Pope Leo XIV’s inaugural Mass
    • New York, Washington, Vermont see most illegal northern border crossers
    • Racial slur charges dropped against Yale conservative leader – ‘cannot be proven’
    Alan C. MooreAlan C. Moore
    Subscribe
    Thursday, May 15
    • Home
    • US News
    • Politics
    • Business & Economy
    • Video
    • About Alan
    • Newsletter Sign-up
    Alan C. Moore
    Home » Blog » U.K. and U.S. Agree to Collaborate on the Development of Safety Tests for AI Models

    U.K. and U.S. Agree to Collaborate on the Development of Safety Tests for AI Models

    April 5, 2024Updated:April 5, 2024 Tech No Comments
    Share
    Facebook Twitter LinkedIn Pinterest Email

    The U.K. government has publicly agreed to collaborate with the United States in creating tests for sophisticated artificial intelligence models. A Memorandum of Understanding, which is a non- legally binding agreement, was signed on April 1, 2024 by the U. K. Technology Secretary Michelle Donelan and U. S. Commerce Secretary Gina Raimondo ( Number A).

    Find A

    U. S. Commerce Secretary Gina Raimondo ( left ) and U. K. Technology Secretary Michelle Donelan ( right ). Source: UK Government. Image: U. K. state

    Both states will presently “align their academic approaches” and work together to “accelerate and speedily iterate strong suites of assessments for AI versions, systems, and agents”. This is in line with the commitments made at the first international AI Safety Summit in November, when governments from all over the world accepted their roles in health testing the newest AI models.

    What AI efforts have the U.K. and U.S. come to a consensus on?

    The U.K. and U.S. have come to terms with the MoU regarding how to create a popular method for testing AI safety and collaborate on improvements. Specifically, this may include:

    • Creating a common method to assess the health of AI models.
    • carrying out at least one shared tests exercise on a platform that is accessible to the general public.
    • Working on complex AI safety study to improve the general understanding of AI versions and align any new policies.
    • Exchanging employees between individual institutes.
    • sharing details about all the activities being conducted at the relevant schools.
    • Working with other institutions on developing AI requirements, including health.

    ” Because of our engagement, our Institutes may gain a better understanding of AI systems, do more powerful assessments, and matter more comprehensive guidance”, Secretary Raimondo said in a statement.

    SEE: Learn how to Use AI for Your Business ( TechRepublic Academy )

    The MoU generally relates to moving forward with the initiatives made by the U.K. and U.S. The U.K.’s research facility was established at the AI Safety Summit with the three main objectives of evaluating existing AI systems, conducting basic AI security analysis, and sharing information with various national and international actors. Several companies, including OpenAI, Meta, and Microsoft, have agreed that the U.K. AISI will individually evaluate their most recent conceptual AI designs.

    Similar to NIST’s formal establishment of the U.S. AISI in February 2024, which works on the priority tasks outlined in the Artificial Executive Order from October 2023, including developing standards for the security and safety of AI techniques. The U. S.’s AISI is supported by an AI Safety Institute Consortium, whose individuals consist of Meta, OpenAI, NVIDIA, Google, Amazon and Microsoft.

    Does this lead to the rules of AI businesses?

    The benefits of their combined studies are likely to influence upcoming policy changes even though neither the United Kingdom nor the United States AISI is a regulatory system. According to the U. K. government, its Sae” will provide basic insights to our leadership regime”, while the U. S. service will” ​develop professional guidance that will be used by regulators”.

    The European Union is arguably still a step ahead of the AI Act, which was signed into law on March 13, 2024. In addition to other regulations governing AI for facial recognition and transparency, the legislation lists measures intended to ensure that AI is used safely and ethically.

    SEE: The majority of cybersecurity professionals anticipate that AI will have an impact on their careers.

    The majority of the big tech players, including OpenAI, Google, Microsoft and Anthropic, are based in the U. S., where there are currently no hardline regulations in place that could curtail their AI activities. October’s EO does provide guidance on the use and regulation of AI, and positive steps have been taken since it was signed, however, this legislation is not law. The NIST finalized AI Risk Management Framework in January 2023 is also voluntarily provided.

    In fact, these major tech companies are primarily in charge of self-regulation, and they founded the Frontier Model Forum last year to create their own “guardrails” to reduce the risk of AI.

    What do legal and AI experts think about the safety testing?

    AI regulation should be a priority

    Not widely accepted as a method of controlling AI in the country, the United Kingdom AISI was founded. The chief executive of Faculty AI, a company associated with the institute, stated in February that creating robust standards might be a wiser use of government resources than attempting to vette every AI model.

    ” I think it’s important that it sets standards for the wider world, rather than trying to do everything itself”, Marc Warner told The Guardian.

    More must- read AI coverage

    Experts in tech law hold a similar position when it comes to this week’s MoU. Aron Solomon, legal analyst and chief strategy officer at the legal marketing agency Amplify, told TechRepublic in an email that” the countries ‘ efforts would be much better spent on developing hardline regulations rather than research.

    ” The issue is that very few legislators, especially in the US Congress, possess the same level of deep understanding of AI as it does how to regulate it,” he said.

    Solomon continued,” We should be departing rather than going through a period of necessary deep analysis, when lawmakers really focus their collective minds on how AI works and how it will be used in the future.” But, as highlighted by the recent U. S. debacle where lawmakers are trying to outlaw TikTok, they, as a group, do n’t understand technology, so they are n’t well- positioned to intelligently regulate it.

    This places us in the difficult place we are right now. AI is developing much more quickly than regulators can control. However, at this point, putting regulations in favor of anything else delays the inevitable.

    Indeed, safety tests conducted by the two institutes will have to follow suit because AI models ‘ capabilities are constantly evolving and expanding. Christoph Cemper, the chief executive officer of prompt management platform AIPRM, warned in an email that” some bad actors may attempt to circumvent tests or misapply dual-use AI capabilities.” Dual- use refers to technologies that can be employed for both peaceful and hostile purposes.

    Cemper said:” While testing can flag technical safety concerns, it does not replace the need for guidelines on ethical, policy and governance questions … Ideally, the two governments will view testing as the initial phase in an ongoing, collaborative process”.

    SEE: A study from the National Cyber Security Centre suggests that generational AI may contribute to the threat of global ransomware.

    Research is needed for effective AI regulation

    Hardline legislation could stifle advancement in AI if not properly considered, according to Dr. Kjell Carlsson, even though voluntary guidelines may not be sufficient to actually change the activities of the tech giants.

    The former ML/A I analyst and Domino Data Lab’s head of strategy stated in an email to TechRepublic that” there are currently AI-related areas where harm is a real and growing threat. These are areas like fraud and cybercrime, where regulation usually exists but is ineffective.

    Unfortunately, few of the proposed AI regulations, such as the EU AI Act, are effective at addressing these threats because they primarily concentrate on commercial AI offerings that criminals do not use. As a result, many of these regulatory initiatives will harm innovation and cause more money, while doing little to enhance actual safety.

    Therefore, many experts believe that putting research and collaboration prior to regulations in the U.K. and U.S. is more effective.

    Dr. Carlsson once said that “regulation works when it comes to preventing established harm from known use cases.” Today, however, most of the use cases for AI have yet to be discovered and nearly all the harm is hypothetical. In contrast, research is absolutely necessary to evaluate how to safely test, mitigate risk, and ensure the safety of AI models.

    ” As such, the establishment and funding of these new AI Safety Institutes, and these international collaboration efforts, are an excellent public investment, not just for ensuring safety, but also for fostering the competitiveness of firms in the US and the UK”.

    Source credit

    Keep Reading

    Innovations in AI Reasoning Models Will Slow Within 1 Year, Warns Analyst

    Innovations in AI Reasoning Models Will Slow Within 1 Year, Warns Analyst

    Sales Teams Struggle With Using AI – A CEO Tells Us Why

    Amazon, AMD Launch Multibillion Dollar AI Projects in Saudi Arabia

    Microsoft Cuts Off Access to Bing Search Data as It Shifts Focus to Chatbots

    The Reason Murderbot’s Tone Feels Off

    Editors Picks

    Tapper’s ‘Exposé’ Of Biden Cover-Up Actually Preserves It By Giving Anonymity To Perpetrators

    May 15, 2025

    Justice Thomas Exposes The Absurdity Of Nationwide Injunctions With One Simple Question

    May 15, 2025

    Biden’s Autopen Pardons May Just Get Invalidated

    May 15, 2025

    Justice Thomas Destroys the Case for Nationwide Injunctions With One Devastating Question

    May 15, 2025

    Austin Metcalf’s Mom Disrespected by Son’s Killer’s School District

    May 15, 2025

    JD Vance will head to Rome for Pope Leo XIV’s inaugural Mass

    May 15, 2025

    New York, Washington, Vermont see most illegal northern border crossers

    May 15, 2025

    Racial slur charges dropped against Yale conservative leader – ‘cannot be proven’

    May 15, 2025

    Brown ends investigation, won’t discipline student journalist for administrative bloat report

    May 15, 2025

    ‘Born in the USA’ singer Bruce Springsteen says Trump is incompetent, ‘running rogue’

    May 15, 2025
    • Home
    • US News
    • Politics
    • Business & Economy
    • About Alan
    • Contact

    Sign up for the Conservative Insider Newsletter.

    Get the latest conservative news from alancmoore.com [aweber listid="5891409" formid="902172699" formtype="webform"]
    Facebook X (Twitter) YouTube Instagram TikTok
    © 2025 alancmoore.com
    • Privacy Policy
    • Terms
    • Accessibility

    Type above and press Enter to search. Press Esc to cancel.