    The Rise of ‘Vibe Hacking’ Is the Next AI Nightmare

June 4, 2025

In the near future, one hacker may be able to unleash 20 zero-day attacks on different systems across the world all at once. Polymorphic malware could rampage across a codebase, using a bespoke generative AI system to rewrite itself as it learns and adapts. Armies of script kiddies could use purpose-built LLMs to unleash a torrent of malicious code at the push of a button.

    Case in point: as of this writing, an AI system is sitting at the top of several leaderboards on HackerOne—an enterprise bug bounty system. The AI is XBOW, a system aimed at whitehat pentesters that “autonomously finds and exploits vulnerabilities in 75 percent of web benchmarks,” according to the company’s website.

    AI-assisted hackers are a major fear in the cybersecurity industry, even if their potential hasn’t quite been realized yet. “I compare it to being on an emergency landing on an aircraft where it’s like ‘brace, brace, brace’ but we still have yet to impact anything,” Hayden Smith, the cofounder of security company Hunted Labs, tells WIRED. “We’re still waiting to have that mass event.”

Generative AI has made it easier for anyone to code. The LLMs improve every day, new models spit out more efficient code, and companies like Microsoft say they’re using AI agents to help write their codebase. Anyone can spit out a Python script using ChatGPT now, and vibe coding—asking an AI to write code for you, even if you don’t have much of an idea how to do it yourself—is popular. But there’s also vibe hacking.
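To make the vibe-coding workflow concrete, here is a minimal sketch in Python of what "asking an AI to write code for you" looks like in practice: a plain-English request sent to an LLM API, with the generated script handed straight back to the user to run. The model name, prompt, file name, and use of the OpenAI Python SDK are illustrative assumptions, not details from the article.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# The entire "programming" step: a plain-English description of the task.
task = "Write a Python script that reads sales.csv and prints total revenue per month."

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[
        {"role": "system", "content": "You are a helpful coding assistant."},
        {"role": "user", "content": task},
    ],
)

# A vibe coder typically copies this output and runs it, often without reading it closely.
print(response.choices[0].message.content)
```

The same loop, pointed at a different kind of request, is what the rest of this piece calls vibe hacking; the only things that change are the prompt and whatever guardrails the model applies to it.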

“We’re going to see vibe hacking. And people without previous knowledge or deep knowledge will be able to tell AI what it wants to create and be able to go ahead and get that problem solved,” Katie Moussouris, the founder and CEO of Luta Security, tells WIRED.

    Vibe hacking frontends have existed since 2023. Back then, a purpose-built LLM for generating malicious code called WormGPT spread on Discord groups, Telegram servers, and darknet forums. When security professionals and the media discovered it, its creators pulled the plug.

WormGPT faded away, but other services that billed themselves as blackhat LLMs, like FraudGPT, replaced it. Those successors had problems of their own. As security firm Abnormal AI notes, many of these apps may have just been jailbroken versions of ChatGPT with some extra code to make them appear to be stand-alone products.

Better, then, if you’re a bad actor, to just go to the source. ChatGPT, Gemini, and Claude are easily jailbroken. Most LLMs have guardrails that prevent them from generating malicious code, but there are whole communities online dedicated to bypassing those guardrails. Anthropic even offers a bug bounty to people who discover new jailbreaks in Claude.

    “It’s very important to us that we develop our models safely,” an OpenAI spokesperson tells WIRED. “We take steps to reduce the risk of malicious use, and we’re continually improving safeguards to make our models more robust against exploits like jailbreaks. For example, you can read our research and approach to jailbreaks in the GPT-4.5 system card, or in the OpenAI o3 and o4-mini system card.”

    Google did not respond to a request for comment.

    In 2023, security researchers at Trend Micro got ChatGPT to generate malicious code by prompting it into the role of a security researcher and pentester. ChatGPT would then happily generate PowerShell scripts based on databases of malicious code.

    “You can use it to create malware,” Moussouris says. “The easiest way to get around those safeguards put in place by the makers of the AI models is to say that you’re competing in a capture-the-flag exercise, and it will happily generate malicious code for you.”

    Unsophisticated actors like script kiddies are an age-old problem in the world of cybersecurity, and AI may well amplify their profile. “It lowers the barrier to entry to cybercrime,” Hayley Benedict, a Cyber Intelligence Analyst at RANE, tells WIRED.

    But, she says, the real threat may come from established hacking groups who will use AI to further enhance their already fearsome abilities.

    “It’s the hackers that already have the capabilities and already have these operations,” she says. “It’s being able to drastically scale up these cybercriminal operations, and they can create the malicious code a lot faster.”

    Moussouris agrees. “The acceleration is what is going to make it extremely difficult to control,” she says.

Hunted Labs’ Smith also says that the real threat of AI-generated code is in the hands of someone who already knows code inside and out and uses it to scale up an attack. “When you’re working with someone who has deep experience and you combine that with, ‘Hey, I can do things a lot faster that otherwise would have taken me a couple days or three days, and now it takes me 30 minutes.’ That’s a really interesting and dynamic part of the situation,” he says.

    According to Smith, an experienced hacker could design a system that defeats multiple security protections and learns as it goes. The malicious bit of code would rewrite its malicious payload as it learns on the fly. “That would be completely insane and difficult to triage,” he says.

    Smith imagines a world where 20 zero-day events all happen at the same time. “That makes it a little bit more scary,” he says.

    Moussouris says that the tools to make that kind of attack a reality exist now. “They are good enough in the hands of a good enough operator,” she says, but AI is not quite good enough yet for an inexperienced hacker to operate hands-off.

    “We’re not quite there in terms of AI being able to fully take over the function of a human in offensive security,” she says.

The primal fear that chatbot code sparks is that anyone will be able to do it, but the reality is that a sophisticated actor with deep knowledge of existing code is much more frightening. XBOW may be the closest thing to an autonomous “AI hacker” that exists in the wild, and it’s the creation of a team of more than 20 skilled people whose previous work experience includes GitHub, Microsoft, and half a dozen assorted security companies.

    It also points to another truth. “The best defense against a bad guy with AI is a good guy with AI,” Benedict says.

    For Moussouris, the use of AI by both blackhats and whitehats is just the next evolution of a cybersecurity arms race she’s watched unfold over 30 years. “It went from: ‘I’m going to perform this hack manually or create my own custom exploit,’ to, ‘I’m going to create a tool that anyone can run and perform some of these checks automatically,’” she says.

    “AI is just another tool in the toolbox, and those who do know how to steer it appropriately now are going to be the ones that make those vibey frontends that anyone could use.”
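As a benign illustration of Moussouris’s “tool that anyone can run and perform some of these checks automatically,” here is a minimal defensive sketch in Python: it fetches a site you own and reports whether a few widely recommended HTTP security headers are present. The header list, URL, and use of the requests library are illustrative assumptions; this is a whitehat spot check, not a substitute for a real scanner.

```python
import sys

import requests  # assumes the requests package is installed


# A handful of widely recommended HTTP security headers to spot-check.
EXPECTED_HEADERS = [
    "Strict-Transport-Security",
    "Content-Security-Policy",
    "X-Content-Type-Options",
    "X-Frame-Options",
]


def check_headers(url: str) -> None:
    """Fetch the URL (only run this against a site you own) and report missing headers."""
    response = requests.get(url, timeout=10)
    for header in EXPECTED_HEADERS:
        status = "present" if header in response.headers else "MISSING"
        print(f"{header}: {status}")


if __name__ == "__main__":
    # Usage: python check_headers.py https://example.com
    check_headers(sys.argv[1] if len(sys.argv) > 1 else "https://example.com")
```

Automating a narrow check like this is exactly the step Moussouris describes in the arms race: the manual craft gets wrapped in a tool anyone can run, for defense or offense alike.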

