
    UK, US, EU Authorities Gather in San Francisco to Discuss AI Safety

    November 22, 2024 | Tech

    This week, officials from the U.K., E.U., U.S., and seven other countries gathered in San Francisco to launch the “International Network of AI Safety Institutes.”

    The conference, which took place at the Presidio Golden Gate Club, addressed managing the risks of AI-generated content, testing foundation models, and conducting risk assessments for advanced AI systems. AI safety institutes from Australia, Canada, France, Japan, Kenya, the Republic of Korea, and Singapore also officially joined the Network.

    In addition to signing a mission statement, members allocated more than $11 million in funding to research into AI-generated content and reviewed the results of the Network’s first joint safety testing exercise. Attendees included government officials, AI developers, academics, and civil society leaders, gathered to advance the discussion on emerging AI challenges and potential safeguards.

    The meeting expanded on the AI Seoul Summit, which took place in May. In light of artificial intelligence’s extraordinary advancements and its impact on our societies and economies, the 10 countries agreed to support “international cooperation and dialogue on its development.”

    According to the European Commission, the “International Network of AI Safety Institutes” will act as a collaborative forum, bringing together technical expertise to address AI safety risks and best practices. In recognition of the importance of cultural and linguistic diversity, the Network will work toward a common understanding of AI safety risks and mitigation strategies.

    Member AI safety institutes will need to demonstrate their progress by the Paris AI Action Summit in February 2025, informing discussions around regulation.

    Key outcomes of the conference

    Mission statement signed

    The mission statement commits the Network members to collaborate in four areas:

    1. Research: Collaborating with the AI safety research community and sharing findings.
    2. Testing: Developing and sharing best practices for testing advanced AI systems.
    3. Guidance: Facilitating shared approaches to interpreting AI safety test results.
    4. Inclusion: Sharing information and technical tools to broaden participation in AI safety science.

    Over $11 million allocated to AI safety research

    Network members and a number of nonprofits announced more than $11 million in funding for research into mitigating the risks of AI-generated content. Child sexual abuse material, non-consensual sexual imagery, and the use of AI for fraud and impersonation were highlighted as key areas of concern.

    Researchers studying digital content transparency techniques and model safeguards to stop the creation and distribution of harmful content will receive funding in this area. Grants will also be considered for scientists developing technical mitigations and social scientific and humanistic evaluations.

    The U.S. institute also released a series of voluntary approaches to mitigate the risks of AI-generated content.
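
    The article does not specify which transparency techniques will be funded. As a rough illustration of the general idea behind digital content transparency, the following is a minimal Python sketch that binds a piece of generated content to a signed provenance record so later edits can be detected. The symmetric demo key and the record layout are hypothetical; production provenance standards such as C2PA use certificate-based signatures and richer manifests.

```python
import hashlib
import hmac
import json

# Hypothetical signing key for this sketch; real provenance systems
# (e.g. C2PA) use asymmetric, certificate-based signatures instead.
SECRET_KEY = b"demo-signing-key"

def attach_provenance(content: bytes, generator: str) -> dict:
    """Build a record binding the content's hash to its declared generator."""
    record = {"sha256": hashlib.sha256(content).hexdigest(), "generator": generator}
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_provenance(content: bytes, record: dict) -> bool:
    """Re-derive the hash and signature; any edit to the bytes breaks both."""
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return (hashlib.sha256(content).hexdigest() == record["sha256"]
            and hmac.compare_digest(expected, record["signature"]))

image = b"...raw bytes of an AI-generated image..."
rec = attach_provenance(image, generator="example-model-v1")
print(verify_provenance(image, rec))         # True: untouched content
print(verify_provenance(image + b"x", rec))  # False: content was altered
```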

    Discussions about the outcomes of a joint testing exercise

    The Network has completed its first-ever joint testing exercise on Meta’s Llama 3.1 405B, examining its general knowledge, multilingual capabilities, and closed-domain hallucinations, where a model provides information from outside the material it was instructed to draw on.

    The exercise raised several considerations for how AI safety testing across languages, cultures, and contexts could be improved; for instance, how subtle methodological differences and model optimisation techniques can affect evaluation results. Broader joint testing exercises will take place ahead of the Paris AI Action Summit.
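
    The article does not describe how the Network graded closed-domain hallucinations. As a toy sketch of what such a check could look like, the snippet below flags answer sentences whose content words are mostly absent from the supplied source text; the `ungrounded_sentences` helper and its overlap threshold are illustrative assumptions, and real evaluations typically rely on human review or model-based judgments rather than lexical overlap.

```python
import re

STOPWORDS = {"the", "a", "an", "is", "are", "was", "were", "of", "to",
             "in", "and", "that", "it", "on", "for", "with", "as", "by"}

def content_words(text: str) -> set:
    """Lower-case word set with common function words removed."""
    return set(re.findall(r"[a-z0-9']+", text.lower())) - STOPWORDS

def ungrounded_sentences(context: str, answer: str, threshold: float = 0.5) -> list:
    """Flag answer sentences mostly unsupported by the context.

    A crude lexical-overlap proxy for closed-domain hallucination:
    a sentence is flagged when more than `threshold` of its content
    words never appear in the source material.
    """
    source = content_words(context)
    flagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", answer.strip()):
        words = content_words(sentence)
        if words and len(words - source) / len(words) > threshold:
            flagged.append(sentence)
    return flagged

context = "The Network launched in San Francisco with ten member countries."
answer = ("The Network launched in San Francisco with ten member countries. "
          "Its annual budget is 40 billion dollars.")
print(ungrounded_sentences(context, answer))  # flags the invented budget claim
```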

    Common framework for risk assessments agreed upon

    The Network has agreed upon a shared scientific basis for AI risk assessments, including that they must be actionable, transparent, comprehensive, multistakeholder, iterative, and reproducible. Members discussed how this framework could be operationalised.
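
    To picture what operationalising the framework might involve, here is a hypothetical review checklist in Python that tracks documented evidence for each of the six agreed properties; the `RiskAssessmentReview` class is an illustrative assumption, not an artifact the Network has published.

```python
from dataclasses import dataclass, field

# The six properties the Network agreed risk assessments must have.
CRITERIA = ("actionable", "transparent", "comprehensive",
            "multistakeholder", "iterative", "reproducible")

@dataclass
class RiskAssessmentReview:
    """Track whether an assessment documents evidence for each criterion."""
    system: str
    evidence: dict = field(default_factory=dict)

    def record(self, criterion: str, note: str) -> None:
        if criterion not in CRITERIA:
            raise ValueError(f"unknown criterion: {criterion}")
        self.evidence[criterion] = note

    def missing(self) -> list:
        return [c for c in CRITERIA if c not in self.evidence]

review = RiskAssessmentReview(system="example-frontier-model")
review.record("reproducible", "Test harness and prompts to be published.")
review.record("transparent", "Methodology released alongside results.")
print(review.missing())  # criteria still lacking documented evidence
```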

    U.S.’s ‘Testing Risks of AI for National Security’ task force established

    Finally, the new TRAINS task force was established, led by the U.S. AI Safety Institute and drawing on experts from other U.S. agencies, including Commerce, Defense, Energy, and Homeland Security. Members will test AI models to manage national security risks in domains such as radiological and nuclear security, chemical and biological security, cybersecurity, critical infrastructure, and military capabilities.

    SEE: Apple Joins Voluntary U.S. Government Commitment to AI Safety

    This underscores how important the relationship between AI and the military is becoming in the United States. In October, the White House published the first-ever National Security Memorandum on Artificial Intelligence, which mandated that the Department of Defense and American intelligence agencies accelerate their adoption of AI in national security missions.


    Speakers addressed the importance of balancing AI innovation with safety.

    U.S. Commerce Secretary Gina Raimondo delivered the keynote speech on Wednesday. She told attendees that “advancing AI is the right thing to do, but advancing as quickly as possible, just because we can, without thinking of the consequences, isn’t the smart thing to do,” according to TIME.

    In recent months, governments and tech companies have been at odds over the advancement and safety of AI. While regulators intend to protect consumers, they risk restricting access to the latest technologies, which could bring real benefits. Google and Meta have both openly criticised European AI regulation, referring to the region’s AI Act and suggesting it will quash innovation.

    Raimondo said that the U.S. AI Safety Institute is “not in the business of stifling innovation,” according to AP. “But here’s the thing. Safety is good for innovation. Safety breeds trust. Trust speeds adoption. Adoption leads to more innovation.”

    She added that governments are “obligated” to take precautions against risks that could harm society, such as unemployment and security breaches. Let’s not let our ambition blind us and allow us to sleepwalk into our own undoing, she said via AP.

    Dario Amodei, the CEO of Anthropic, also delivered a talk stressing the need for safety testing. According to Fortune, he said that while “people laugh when chatbots say something a little unpredictable,” it shows how crucial it is to gain control of AI before it develops more nefarious capabilities.

    Numerous international AI safety institutes have been established over the past year.

    The first meeting of AI authorities took place at Bletchley Park in Buckinghamshire, U.K., about a year ago. It saw the launch of the U.K.’s AI Safety Institute, which has three primary goals:

    • Evaluating existing AI systems.
    • Performing foundational AI safety research.
    • Sharing information with other national and international actors.

    The U.S. has its own AI Safety Institute, formally established by NIST in February 2024, which has been designated the Network’s chair. It was created to carry out the priority actions outlined in the 2023 AI Executive Order, including developing standards for the safety and security of AI systems.

    SEE: OpenAI and Anthropic Sign Deals With U.S. AI Safety Institute

    The U.K. government and the United States formally agreed in April to work together on advanced AI models, primarily by sharing research findings from their respective AI Safety Institutes. Similar institutions were established in other countries as a result of a deal reached in Seoul.

    Clarifying the United States’ position on AI safety at the San Francisco conference was particularly crucial, as domestic support for it is not currently assured. President-elect Donald Trump has pledged to repeal the Executive Order when he returns to the White House. California Governor Gavin Newsom, who was in attendance, also vetoed the controversial AI regulation bill SB 1047 at the end of September.
