According to a report from the Cloud Security Alliance commissioned by Google Cloud, the C-suite is more familiar with AI technologies than their IT and security staff. The report, released on April 3, addressed the benefits and challenges of generative AI, whether IT and security professionals fear their jobs will be replaced, and more.
Of the IT and security professionals surveyed, 63% believe AI will improve security within their organization. Another 24% are neutral on AI’s impact on security measures, while 12% do not believe AI will improve security within their organization. Of the people surveyed, only a very few (12%) predict AI will replace their jobs.
The survey behind the report was conducted worldwide in November 2023, with responses from 2,486 IT and security professionals and C-suite leaders from organizations across the Americas, APAC, and EMEA.
Cybersecurity professionals not in leadership are less clear than the C-suite on possible use cases for AI in cybersecurity, with just 14% of staff (compared to 51% of C-levels) saying they are “very clear.”
“The disconnect between the C-suite and staff in understanding and implementing AI highlights the need for a strategic, unified approach to successfully integrate this technology,” said Caleb Sima, chair of Cloud Security Alliance’s AI Safety Initiative, in a press release.
While some questions in the report specified that the answers should relate to generative AI, others used the term “AI” generally.
The AI knowledge gap in security
C-level professionals face top-down pressure that may have made them more cognizant of AI use cases than security professionals.
Most (82%) C-suite professionals say that their boards of directors and executive leadership are pushing for the adoption of AI. However, the report notes that this approach might lead to implementation problems down the line.
This may indicate a lack of understanding of the difficulties and skills required to adopt and use such a novel and disruptive technology (such as prompt engineering), according to lead author Hillary Baron, senior technical director of research and analytics at the Cloud Security Alliance, and a team of contributors.
There are a few possible causes of this knowledge gap:
- Cybersecurity professionals may not be as knowledgeable about how AI can affect overall strategy.
- Leaders may not realize how challenging it can be to incorporate AI into existing cybersecurity strategies.
The authors of the report point out that some data (Figure A) suggests that respondents are just as familiar with large language models and generative AI as they are with older terms like deep learning and natural language processing.
Figure A
The authors of the report point out that the prevalence of older terms like deep learning and natural language processing might indicate a conflation between generative AI and well-known tools like ChatGPT.
“It’s the difference between being familiar with consumer-grade GenAI tools vs professional/enterprise level, which is more important in terms of adoption and implementation,” said Baron in an email to TechRepublic. She added that this is something she is seeing across the board with security professionals at all levels.
Will jobs in cybersecurity be replaced by AI?
A small percentage of security professionals (12%) predict that over the next five years, AI will completely replace their jobs. Others are more optimistic:
- 30% think AI will help enhance parts of their skill set.
- 28% believe AI will support them overall in their current role.
- 24% believe AI will take over most of their role.
- 5% anticipate that AI will have no impact on their job at all.
Organizations’ goals reflect this: 36% of organizations seek the outcome of AI enhancing security teams’ skills and knowledge.
The report points out an interesting discrepancy: although enhancing skills and knowledge is a highly desired outcome, talent sits at the bottom of the list of challenges. This might mean that day-to-day tasks like identifying threats take priority, while talent is a longer-term concern.
Benefits and challenges of AI in cybersecurity
Survey respondents were divided on whether AI would be more advantageous for defenders or attackers:
- 34% see AI as more beneficial for security teams.
- 31% view it as equally advantageous for both defenders and attackers.
- 25% see it as more beneficial for attackers.
Professionals who are concerned about the use of AI in security cite the following reasons:
- Poor data quality leading to unintended bias and other problems (38%).
- Lack of transparency (36%).
- Skills and expertise gaps when it comes to managing complex AI systems (33%).
- Data poisoning (28%).
Hallucinations, privacy, data leakage or loss, accuracy, and misuse were other options respondents could flag as concerns; each of these received under 25% of the votes in the survey, in which respondents were invited to select their top three concerns.
SEE: The UK National Cyber Security Centre found that generative AI may enhance attackers’ arsenals. (TechRepublic)
Over half (51%) of respondents answered “yes” when asked whether they are concerned about the potential risks of over-reliance on AI for cybersecurity; another 28% were neutral.
Planned uses for generative AI in cybersecurity
Organizations that intend to use generative AI for cybersecurity have a wide variety of applications in mind (Figure B). Common uses include:
- Rule creation.
- Attack simulation.
- Compliance violation monitoring.
- Network detection.
- Reducing false positives.
Figure B
How are teams being organized in the age of AI?
74% of respondents stated that their organizations intend to establish new teams to oversee the use of AI over the next five years. The composition of those teams can vary.
Some organizations working on AI deployment today put it in the hands of their security team (24%). Other organizations give primary responsibility for AI deployment to the IT department (21%), the data science/analytics team (16%), a dedicated AI/ML team (13%) or senior management/leadership (9%). In rarer cases, DevOps (8%), cross-functional teams (6%) or a team that did not fit in any of the categories (listed as “other” at 1%) took responsibility.
SEE: Hiring kit: prompt engineer (TechRepublic Premium)
According to lead author Hillary Baron and the contributors, “AI in cybersecurity is transforming existing roles as well as opening up new specialized positions.”
What kind of positions? Generative AI governance is a growing sub-field, Baron told TechRepublic, as is AI-focused training and upskilling.
“In general, we’re also starting to see job postings that include more AI-specific roles like prompt engineers, AI security architects, and security engineers,” said Baron.