Security researchers have discovered a new malicious chatbot advertised on cybercrime forums. GhostGPT generates malware, business email compromise scams, and other material for illegal activities.
The chatbot likely uses a wrapper to connect to a jailbroken version of OpenAI’s ChatGPT or another large language model, the Abnormal Security researchers suspect. Jailbroken models have been instructed to ignore their safeguards, making them more useful to criminals.
What is GhostGPT?
The security researchers found an advertisement for GhostGPT on a cybercrime forum, and the image of a hooded figure as its background is not the only clue that it is intended for malicious purposes. The seller promises fast processing speeds, useful for time-pressured attack campaigns. For instance, attackers deploying ransomware must act quickly once inside a target system before defenses are strengthened.
The ad also claims that user activity is never logged on GhostGPT and that it can be bought through the encrypted messaging app Telegram, likely to appeal to criminals concerned about their anonymity. The chatbot can be used directly within Telegram, so no suspicious software needs to be downloaded onto the user’s device.
Its accessibility through Telegram saves time, too. The attacker does not need to craft a jailbreak prompt or set up an open-source model themselves; instead, they simply pay for access and can get going.
“GhostGPT is marketed for a range of malicious activities, including coding, malware creation, and exploit development,” the Abnormal Security researchers said in their report. “It can also be used to write convincing emails for BEC scams, making it a convenient tool for committing cybercrime.”
It does mention “cybersecurity” as a potential use on the advert, but, given the language alluding to its effectiveness for criminal activities, the researchers say this is likely a “weak attempt to dodge legal accountability.”
To test its capabilities, the researchers gave it the prompt “Write a phishing email from Docusign,” and it responded with a convincing template, including a space for a “Fake Support Number.”
The ad has racked up thousands of views, indicating both that GhostGPT is proving useful and that there is growing interest amongst cyber criminals in jailbroken LLMs. Despite this, research has shown that phishing emails written by humans have a 3% better click rate than those written by AI, and are also reported as suspicious at a lower rate.
However, AI-generated material can be created and distributed more quickly, by almost anyone with a credit card, regardless of technical knowledge. It can also be used for more than just phishing attacks; researchers have found that GPT-4 can autonomously exploit 87% of “one-day” vulnerabilities when provided with the necessary tools.
Jailbroken GPTs have been emerging and actively used for nearly two years
Private GPT models for nefarious use have been emerging for some time. In April 2024, a report from security firm Radware named them among the biggest impacts of AI on the cybersecurity landscape that year.
Creators of such private GPTs tend to offer access for a monthly fee of hundreds to thousands of dollars, making them good business. However, it’s also not insurmountably difficult to jailbreak existing models, with research showing that 20% of such attacks are successful. On average, adversaries need just 42 seconds and five interactions to break through.
SEE: AI-Assisted Attacks Top Cyber Threat, Gartner Finds
Other examples of such models include WormGPT, WolfGPT, EscapeGPT, FraudGPT, DarkBard, and Dark Gemini. In August 2023, Rakesh Krishnan, a senior threat analyst at Netenrich, told Wired that FraudGPT only appeared to have a few subscribers and that “all these projects are in their infancy.” However, in January, a panel at the World Economic Forum, including Secretary General of INTERPOL Jürgen Stock, discussed FraudGPT specifically, highlighting its continued relevance.
There is evidence that criminals are already using AI for their cyber attacks. The number of business email compromise attacks detected by security firm Vipre in the second quarter of 2024 was 20% higher than the same period in 2023, and two-fifths of them were generated by AI. In June, HP intercepted an email campaign spreading malware in the wild with a script that “was highly likely to have been written with the help of GenAI.”
Pascal Geenens, Radware’s director of threat intelligence, told TechRepublic in an email: “The next advancement in this area, in my opinion, will be the implementation of frameworks for agentic AI services. In the near future, look for fully automated AI agent swarms that can accomplish even more complex tasks.”