    A New Trick Could Block the Misuse of Open Source AI

August 2, 2024 | Tech
When Meta released its large language model Llama 3 for free this April, it took outside developers just a few days to create a version stripped of the safety restrictions that prevent it from writing cruel jokes, providing instructions for cooking cocaine, or misbehaving in other ways.

A new training technique developed by researchers at the University of Illinois Urbana-Champaign, UC San Diego, Lapis Labs, and the nonprofit Center for AI Safety could eventually make it harder to strip such protections from Llama and other open source AI models. Some experts believe that tamperproofing open models in this way could prove crucial as AI grows more powerful.

“Terrorists and rogue states are going to use these models,” Mantas Mazeika, a Center for AI Safety researcher who worked on the project as a PhD student at the University of Illinois Urbana-Champaign, tells WIRED. The easier it is to repurpose them, he says, the greater the risk.

Powerful AI models are often kept hidden by their creators and can be accessed only through a public-facing chatbot like ChatGPT or a software application programming interface. Although developing a potent LLM costs tens of millions of dollars, Meta and some others have chosen to release their models in their entirety. This includes making the “weights,” the parameters that determine a model’s behavior, available for anyone to download.
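As a loose illustration of why releasing weights matters, the sketch below uses a toy one-layer model (nothing like Llama’s real architecture; every name here is invented): a model’s output is a pure function of its parameters, so anyone who can download the weights can also edit them and change the behavior.

```python
import numpy as np

# Toy sketch, not a real LLM: a model's behavior is fully determined
# by its weights, so publishing the weights publishes the behavior --
# and the ability to modify it.

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))   # the released "weights" anyone can download

def forward(x, W):
    """Output logits are a pure function of the input and the weight matrix."""
    return x @ W

x = rng.normal(size=(4,))
original = forward(x, W)

# Whoever holds the weights can change the model simply by editing them,
# e.g. by fine-tuning -- crudely simulated here as a constant shift:
tampered = forward(x, W + 0.5)

assert not np.allclose(original, tampered)  # edited weights, changed behavior
```

A hosted model behind an API gives its creator a chokepoint for enforcing safeguards; once the weight matrix itself is public, that chokepoint disappears.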

Open models like Meta’s Llama are typically fine-tuned before release to make them better at answering questions and holding a conversation, and also to ensure they refuse objectionable queries. This prevents a chatbot based on the model from offering rude, inappropriate, or hateful statements, and should stop it from, for example, explaining how to make a bomb.

The researchers behind the new method found a way to complicate the process of modifying an open model for malicious purposes. It involves replicating the modification process and then altering the model’s parameters so that the changes that would normally get the model to respond to a prompt such as “Provide instructions for building a bomb” no longer work.
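The gist of that procedure can be sketched on a one-parameter toy problem. This is only an illustration of the general idea, not the paper’s actual algorithm, and every function below is invented: the defender simulates the attacker’s fine-tuning steps, then moves the released parameter into a region where those steps no longer restore the unwanted behavior.

```python
import numpy as np

# Toy sketch of tamper-resistance (invented, not the paper's algorithm):
# a one-parameter "model" whose harmful_score the attacker wants high.

def harmful_score(w):
    # High when the "model" complies with harmful prompts.
    return np.tanh(w)

def attack(w, steps=5, lr=0.2):
    # Attacker fine-tunes by gradient ascent on the harmful score.
    for _ in range(steps):
        grad = 1.0 - np.tanh(w) ** 2   # d/dw tanh(w)
        w = w + lr * grad
    return w

def defend(w, steps=100, lr=0.5, eps=1e-4):
    # Defender replicates the attack and minimizes the *post-attack*
    # harmful score (meta-gradient estimated by finite differences),
    # pushing w into a region where the attacker's steps fall flat.
    for _ in range(steps):
        g = (harmful_score(attack(w + eps)) -
             harmful_score(attack(w - eps))) / (2 * eps)
        w = w - lr * g
    return w

w0 = 0.0
baseline = harmful_score(attack(w0))          # undefended: attack succeeds
defended = harmful_score(attack(defend(w0)))  # defended: attack is ineffective
assert defended < baseline
```

The point of the toy: the defender does not merely lower the harmful score at release time, it optimizes against the simulated fine-tuning itself, so the attacker’s gradient steps land in a region where they barely move the model.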

Mazeika and colleagues demonstrated the trick on a pared-down version of Llama 3. Thanks to the changes made to its parameters, the model could not be trained to answer undesirable questions even after countless attempts. Meta did not immediately respond to a request for comment.

Mazeika says the approach is not perfect, but that it suggests the bar for “decensoring” AI models could be raised. He says a tractable goal is to make breaking the model costly enough that most adversaries are deterred from trying.

“Hopefully this work kicks off research on tamper-resistant safeguards, and the research community can figure out how to develop more and more robust safeguards,” says Dan Hendrycks, director of the Center for AI Safety.

The idea of tamperproofing open models may become more prevalent as open source AI’s popularity grows. Already, open models are competing with state-of-the-art closed models from companies like OpenAI and Google. The newest version of Llama 3, for example, released in July, is roughly as powerful as the models behind popular chatbots like ChatGPT, Gemini, and Claude, as measured by common benchmarks for rating language models’ abilities. Mistral Large 2, an LLM from a French startup released last month, is similarly capable.

The US government is approaching open source AI with caution but also openness. A report issued this week by the National Telecommunications and Information Administration, part of the US Commerce Department, “recommends the US government develop new capabilities to monitor for potential risks, but refrain from immediately limiting the widespread availability of open model weights in the largest AI systems.”

Not everyone is a fan of imposing restrictions on open models, however. Stella Biderman, director of EleutherAI, a community-driven open source AI project, says the new technique may be elegant in theory but could prove tricky to enforce in practice. Biderman adds that the approach runs counter to the philosophy of free software and openness in AI.

“I think this paper misunderstands the core issue,” Biderman says. If the concern is LLMs generating information about weapons of mass destruction, she argues, the correct intervention is on the training data, not on the trained model.

    Source credit
