    OpenAI and Anthropic Sign Deals With U.S. AI Safety Institute, Handing Over Frontier Models For Testing

August 30, 2024 | Tech

OpenAI and Anthropic have reached agreements with the U.S. government that allow it to test and evaluate their frontier AI models. The U.S. AI Safety Institute will have access to the technologies “prior to and following their public release,” according to an NIST announcement on Thursday.

The two AI giants have each signed a Memorandum of Understanding, a formal but non-legally binding agreement, allowing the AISI to evaluate the capabilities of their models and to identify and mitigate any safety risks.

The AISI, which was formally established by NIST in February 2024, focuses on the priority actions set out in the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, issued in October 2023. These actions include developing standards for the safety and security of AI systems. The institute is supported by the AI Safety Institute Consortium, whose members include Meta, OpenAI, NVIDIA, Google, Amazon, and Microsoft.

Elizabeth Kelly, director of the AISI, said in the press release: “Safety is essential to fueling breakthrough technological innovation. With these agreements in place, we look forward to beginning our technical collaborations with Anthropic and OpenAI to advance the science of AI safety.

“These agreements are just the start, but they are an important milestone as we work to responsibly steward the future of AI.”

SEE: Generative AI Defined: How It Works, Benefits and Dangers

Jack Clark, co-founder and head of policy at Anthropic, told TechRepublic via email: “Safe, trustworthy AI is crucial for the technology’s positive impact. Our collaboration with the U.S. AI Safety Institute allows them to rigorously test our models before widespread deployment.

“This strengthens our ability to identify and mitigate risks, advancing responsible AI development. We’re delighted to be a part of this crucial work, setting new standards for trustworthy and safe AI.”

Jason Kwon, chief strategy officer at OpenAI, voiced his support for the U.S. AI Safety Institute’s objective, saying via email: “We look forward to working together to inform safety best practices and standards for AI models.

“We believe the institute has a critical role to play in responsibly developing artificial intelligence, and we hope that our work together offers a framework that the rest of the world can build on.”

    AISI to work with the UK AI Safety Institute

The AISI also plans to work with the U.K. AI Safety Institute in providing safety-related feedback to OpenAI and Anthropic. The two countries formally agreed in April to collaborate on developing safety tests for AI models.

That agreement honored commitments made at the first global AI Safety Summit last November, where governments from around the world accepted a role in safety testing the next generation of AI models.

Following Thursday’s news, Jack Clark posted on X: “Third-party testing is a really important part of the AI ecosystem, and it’s been amazing to see governments stand up safety institutes to facilitate this.

“This work with the U.S. AISI will build on earlier work we did this year, where we worked with the UK AISI to conduct a pre-deployment test on Sonnet 3.5.”

Claude 3.5 Sonnet is Anthropic’s latest AI model, released in June.

AI companies and regulators have been at odds over the need for strict AI regulation since ChatGPT was released. Regulators advocate safeguards against risks such as misinformation, while companies contend that overly strict rules could stifle innovation. Silicon Valley’s leading players have pushed for voluntary guidelines to head off stringent government-imposed restrictions on their AI technologies.

At the federal level, the U.S. approach has been comparatively industry-friendly, focusing on voluntary guidelines and collaboration with tech companies, as seen in light-touch initiatives like the AI Bill of Rights and the AI Executive Order. In contrast, the E.U. has taken a stricter regulatory path with the AI Act, which sets legal requirements for transparency and risk management.

Somewhat at odds with this national stance, the California State Assembly passed the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, known as SB-1047 or California’s AI Act, on Wednesday. The state Senate approved it the following day, and it now needs only the signature of Governor Gavin Newsom to be enacted into law.

Silicon Valley heavyweights OpenAI, Meta, and Google have all written letters to California lawmakers outlining their concerns about SB-1047, arguing for a more cautious approach that would not hamper the growth of AI technologies.

    SEE: OpenAI, Microsoft, and Adobe Back California’s AI Watermarking Bill

Upon Thursday’s announcement of his company’s agreement with the U.S. AISI, OpenAI CEO Sam Altman posted on X that he felt it was “important that this happens at the national level,” a sly dig at California’s SB-1047. Unlike a voluntary Memorandum of Understanding, the state-level legislation would carry penalties for violations.

    Meanwhile, the UK AI Safety Institute faces financial challenges

    The U.K. government has undergone a number of notable changes in its approach to AI since the transition from Conservative to Labour leadership in early July.

According to Reuters’ sources, it has abandoned plans for the office it was due to open in San Francisco this summer, which was intended to build ties between the U.K. and the Bay Area’s AI giants. Tech minister Peter Kyle also reportedly dismissed Nitarshan Rajkumar, a senior policy adviser and co-founder of the U.K. AISI.

    SEE: UK Government Announces £32m for AI Projects After Scrapping Funding for Supercomputers

    Kyle plans to cut back on the government’s direct investments in the sector, according to the Reuters sources. Indeed, earlier this month, the government shelved £1.3 billion worth of funding that had been earmarked for AI and tech innovation.

In July, after confirming that public spending was on track to exceed the budget by £22 billion, Chancellor Rachel Reeves announced £5.5 billion in cuts, including to the Investment Opportunity Fund, which supported projects in the digital and tech sectors.

A few days before the Chancellor’s speech, Labour appointed tech entrepreneur Matt Clifford to produce the “AI Opportunities Action Plan,” which will set out how AI can best be used at a national level to boost efficiency and cut costs. His recommendations are due to be published in September.

According to Reuters’ sources, Clifford met with ten leading venture capital firms last week to discuss how the government can use AI to improve public services, support university spinout companies, and make it easier for startups to hire overseas talent.

Behind the scenes, however, there is some unrest; one attendee told Reuters that they were “stressing that they only had a month to turn the review around.”
