IT leaders are concerned about the skyrocketing costs of AI-powered cybersecurity tools. Meanwhile, hackers are largely shunning AI, judging by the relatively few discussions about how to use it on cybercrime forums.
According to a study conducted by security company Sophos among 400 IT security decision-makers, 80% of respondents think generative AI will significantly increase the cost of security tools. This chimes with a separate Gartner study that predicts global technology spend will rise by about 10% this year, largely due to AI infrastructure upgrades.
According to the Sophos research, 99% of organizations include AI capabilities in their cybersecurity platform requirements, with the most common motivation being to improve protection. However, only 20% of respondents cited this as their primary reason, indicating a lack of consensus about how essential AI tools are in security.
Three-quarters of the leaders said it is difficult to quantify the extra cost of AI features in their security tools. For example, Microsoft controversially increased the price of Office 365 by 45% this month due to the inclusion of Copilot.
On the other hand, 87% of respondents believe that the cost savings from AI-related efficiencies will outweigh the additional expense, which may be why 65% have already adopted security solutions with AI. The release of the low-cost AI model DeepSeek R1 has generated hope that the price of AI tools will soon decrease across the board.
SEE: HackerOne: 48% of Security Professionals Believe AI Is Risky
But price isn’t the only concern highlighted by Sophos’ researchers. A substantial 84% of security leaders worry that high expectations of AI tools’ capabilities will create pressure to reduce the headcount on their teams. An even higher 89% are concerned that flaws in the tools’ AI capabilities could introduce security threats.
The Sophos researchers warned that “poor quality and poorly implemented AI models can unwittingly introduce significant cybersecurity risk of their own,” and that the adage “garbage in, garbage out” is especially applicable to AI.
Cybercriminals are using AI less than you might assume
According to separate research from Sophos, skepticism may be holding back cybercriminals from adopting AI as much as expected. Despite researchers’ predictions, the company found that AI is not yet widely used in attacks. To determine the prevalence of AI usage within the hacking community, Sophos examined posts on underground forums.
The researchers identified fewer than 150 posts about GPTs or large language models over the past year. For context, they found more than 1,000 posts on cryptocurrency and more than 600 threads on buying and selling network accesses.
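For illustration only, here is a minimal Python sketch of how topic counts like these could be tallied from a corpus of scraped forum posts. The keyword patterns, sample posts, and the count_topic_mentions helper are hypothetical; Sophos has not published its exact methodology.

    # Hypothetical sketch: tally how many forum posts mention each topic.
    # The search terms below are illustrative, not Sophos' actual criteria.
    import re
    from collections import Counter

    TOPICS = {
        "llm": re.compile(r"\b(gpt|chatgpt|llm|large language model)\b", re.I),
        "cryptocurrency": re.compile(r"\b(bitcoin|btc|monero|crypto)\b", re.I),
        "network_access": re.compile(r"\b(rdp|vpn|initial access|accesses?)\b", re.I),
    }

    def count_topic_mentions(posts: list[str]) -> Counter:
        """Count how many posts mention each topic at least once."""
        counts = Counter()
        for post in posts:
            for topic, pattern in TOPICS.items():
                if pattern.search(post):
                    counts[topic] += 1  # count a post once per topic
        return counts

    # Placeholder posts standing in for scraped forum data.
    posts = [
        "Selling RDP access to a mid-size network",
        "Has anyone tried ChatGPT for spam templates?",
        "Payment in bitcoin only",
    ]
    print(count_topic_mentions(posts))
    # Counter({'network_access': 1, 'llm': 1, 'cryptocurrency': 1})

A real study would also need to deduplicate reposts and handle slang and obfuscated spellings, which simple keyword matching like this misses.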
Sophos researchers found no evidence of cybercriminals using generative AI to create new exploits or malware on the majority of the crime forums they examined. Nor did the criminals appear particularly enthusiastic or excited about it.
One Russian-language crime forum has had a dedicated AI section since 2019, but it contains only around 300 threads, compared with more than 700 and 1,700 threads in the malware and network access sections, respectively. The researchers did point out that this might be viewed as “relatively rapid growth for a topic that has only gained traction in the last two years.”
In fact, in one post, a user admitted to talking to a GPT for social reasons rather than to launch a cyberattack. Another user replied that doing so is “bad for your opsec [operational security],” further highlighting the group’s lack of trust in the technology.
Hackers are using AI for spamming, intelligence gathering, and social engineering
Posts and threads that do mention AI apply it to techniques such as spamming, open-source intelligence gathering, and social engineering; the latter includes the use of GPTs to generate phishing emails and spam texts.
Business email compromise attacks increased by 20% in the second quarter of 2024 compared with the same period in 2023, according to security firm Vipre, and AI was responsible for two-fifths of those BEC attacks.
Other posts focus on “jailbreaking,” where models are instructed to bypass their safeguards with a carefully constructed prompt. A number of malicious chatbots designed specifically for cybercrime have emerged since 2023. While models like WormGPT have been around for some time, newer ones such as GhostGPT are still emerging.
Sophos’ research spotted only a few “primitive and low-quality” attempts to generate malware, attack tools, and exploits using AI on the forums. Such incidents are not unheard of elsewhere, though; in June, HP intercepted a malicious email campaign that was “highly likely to have been written with the aid of GenAI.”
Conversations about AI-generated code frequently included sarcasm or criticism. For example, on a post containing allegedly hand-written code, one user responded, “Is this written with ChatGPT or something… this code plainly won’t work.” According to Sophos researchers, there was general agreement that using AI to create malware was for “lazy and/or low-skilled individuals looking for shortcuts.”
Interestingly, some posts discussed creating AI-enabled malware in an aspirational way, indicating that, once the technology becomes available, the posters would like to use it in attacks. One post referenced “the world’s first AI-powered autonomous C2,” though the author acknowledged, “This is still just a product of my imagination for now.”
Some users are also using AI to automate routine tasks, according to the researchers. However, it seems that most don’t trust it with anything more complex.