A recent survey of 500 security professionals by HackerOne, a security research platform, found that 48% believe AI poses the most significant security risk to their organization. Among their top AI-related concerns are:
- Leaked training data (35%).
- Unauthorized usage (33%).
- The hacking of AI models by outsiders (32%).
These fears highlight the need for businesses to reevaluate their AI security strategies before vulnerabilities become real threats.
Security teams find AI tends to produce false positives.
Although the full Hacker Powered Security Report won’t be available until later this fall, additional research from a HackerOne-sponsored SANS Institute report revealed that 58% of security professionals believe security teams and threat actors could find themselves in an “arms race” to use generative AI tactics and techniques in their work.
In the SANS survey, 71% of security professionals reported success using AI to automate time-consuming tasks. The same respondents, however, acknowledged that threat actors could use AI to make their own operations more efficient. In particular, respondents “were most concerned with AI-powered phishing campaigns (79%) and automated vulnerability exploitation (74%)”.
SEE: Security leaders are growing frustrated with AI-generated code.
According to Matt Bromiley, an analyst at the SANS Institute, “security teams must find the best applications for AI to keep pace with adversaries while also considering its existing limitations,” or they risk creating more work for themselves.
The answer? AI implementations should undergo an external review. More than two-thirds of those surveyed (68%) chose external review as the most effective way to identify AI safety and security issues.
“Teams are now more realistic about AI’s current limitations” than they were last year, said HackerOne Senior Solutions Architect Dane Sherrets in an email to TechRepublic. “Humans bring a lot of important context to both defensive and offensive security that AI can’t replicate yet. Problems like hallucinations have also made teams hesitant to deploy the technology in critical systems. However, AI is still great for increasing productivity and performing tasks that don’t require deep context”.
Further findings from the SANS 2024 AI Survey, released this month, include:
- 38% plan to adopt AI within their security strategy in the future.
- 38.6% of respondents said they faced shortcomings when using AI to detect and respond to cyber threats.
- 40% cite legal and ethical implications as a challenge to AI adoption.
- 41.8% of companies have faced pushback from employees who do not trust AI decisions, which SANS says is “due to a lack of transparency”.
- 43% of organizations currently use AI within their security strategy.
- Anomaly detection systems (56%), malware detection (50%), and automated incident response (48%) are the most common uses of AI technology in security operations.
- 58% of respondents said AI systems struggle to detect new threats or respond to outlier indicators, which SANS attributes to a lack of training data.
- Of those who reported shortcomings in using AI to detect and respond to cyber threats, 71% said AI tended to generate false positives.
Anthropic seeks input from security researchers on AI safety measures.
In August, generative AI maker Anthropic expanded its bug bounty program on HackerOne.
Anthropic wants the hacker community to stress-test “the mitigations we use to prevent misuse of our models”, including attempts to break through the guardrails meant to stop AI from providing recipes for explosives or cyberattacks. Anthropic says it will award up to $15,000 to those who successfully identify new jailbreaking attacks and may give HackerOne security researchers early access to its newest safety mitigation system.