Alan C. Moore
    Singapore’s Vision for AI Safety Bridges the US-China Divide

May 7, 2025 | Tech

    The government of Singapore released a blueprint today for global collaboration on artificial intelligence safety following a meeting of AI researchers from the US, China, and Europe. The document lays out a shared vision for working on AI safety through international cooperation rather than competition.

    “Singapore is one of the few countries on the planet that gets along well with both East and West,” says Max Tegmark, a scientist at MIT who helped convene the meeting of AI luminaries last month. “They know that they’re not going to build [artificial general intelligence] themselves—they will have it done to them—so it is very much in their interests to have the countries that are going to build it talk to each other.”

    The countries thought most likely to build AGI are, of course, the US and China—and yet those nations seem more intent on outmaneuvering each other than working together. In January, after Chinese startup DeepSeek released a cutting-edge model, President Trump called it “a wakeup call for our industries” and said the US needed to be “laser-focused on competing to win.”

    The Singapore Consensus on Global AI Safety Research Priorities calls for researchers to collaborate in three key areas: studying the risks posed by frontier AI models, exploring safer ways to build those models, and developing methods for controlling the behavior of the most advanced AI systems.

    The consensus was developed at a meeting held on April 26 alongside the International Conference on Learning Representations (ICLR), a premier AI event held in Singapore this year.

Researchers from OpenAI, Anthropic, Google DeepMind, xAI, and Meta all attended the AI safety event, as did academics from institutions including MIT, Stanford, Tsinghua, and the Chinese Academy of Sciences. Experts from AI safety institutes in the US, UK, France, Canada, China, Japan, and Korea also participated.

    “In an era of geopolitical fragmentation, this comprehensive synthesis of cutting-edge research on AI safety is a promising sign that the global community is coming together with a shared commitment to shaping a safer AI future,” Xue Lan, dean of Tsinghua University, said in a statement.

    The development of increasingly capable AI models, some of which have surprising abilities, has caused researchers to worry about a range of risks. While some focus on near-term harms including problems caused by biased AI systems or the potential for criminals to harness the technology, a significant number believe that AI may pose an existential threat to humanity as it begins to outsmart humans across more domains. These researchers, sometimes referred to as “AI doomers,” worry that models may deceive and manipulate humans in order to pursue their own goals.

    The potential of AI has also stoked talk of an arms race between the US, China, and other powerful nations. The technology is viewed in policy circles as critical to economic prosperity and military dominance, and many governments have sought to stake out their own visions and regulations governing how it should be developed.

DeepSeek’s debut in January compounded fears that China may be catching up with, or even surpassing, the US, despite efforts to curb China’s access to AI hardware through export controls. Now, the Trump administration is mulling additional measures aimed at restricting China’s ability to build cutting-edge AI.

    The Trump administration has also sought to downplay AI risks in favor of a more aggressive approach to building the technology in the US. At a major AI meeting in Paris in 2025, Vice President JD Vance said that the US government wanted fewer restrictions around the development and deployment of AI, and described the previous approach as “too risk-averse.”

    Tegmark, the MIT scientist, says some AI researchers are keen to “turn the tide a bit after Paris” by refocusing attention back on the potential risks posed by increasingly powerful AI.

    At the meeting in Singapore, Tegmark presented a technical paper that challenged some assumptions about how AI can be built safely. Some researchers had previously suggested that it may be possible to control powerful AI models using weaker ones. Tegmark’s paper shows that this dynamic does not work in some simple scenarios, meaning it may well fail to prevent AI models from going awry.

    “We tried our best to put numbers to this, and technically it doesn’t work at the level you’d like,” Tegmark says. “And, you know, the stakes are quite high.”

