According to a recent survey by ISC2 (the International Information System Security Certification Consortium), the majority of cybersecurity professionals (88%) expect AI to significantly affect their jobs, while only 35% of respondents have already seen AI's effects (Figure A). These figures suggest that professionals are anticipating change more than judging that change as positive or negative. Respondents also raised concerns about deepfakes, disinformation, and social engineering attacks. The survey additionally covered policy, access to AI tools, and regulation.
How AI may impact the work of security professionals
Survey respondents expect AI to make cybersecurity jobs more efficient (82%) and to free up time for higher-value work by taking over other tasks (56%). In particular, respondents said the following parts of security work could be handled by AI and machine learning (Figure B):
- Analyzing user behavior patterns (81%).
- Automating repetitive tasks (75%).
- Monitoring network traffic and detecting malware (71%).
- Predicting where breaches might occur (62%).
- Detecting and blocking threats (62%).
Notably, the survey response "AI will make some parts of my job redundant" is not necessarily framed as a negative; rather, it is presented as a gain in efficiency.
Top AI security concerns
In terms of security attacks, the professionals surveyed were most concerned about:
- Deepfakes (76%).
- Disinformation campaigns (70%).
- Social engineering (64%).
- The current lack of regulation (59%).
- Ethical concerns (57%).
- Privacy invasion (55%).
- The risk of intentional or accidental data poisoning (52%).
Whether AI will ultimately benefit attackers or defenders more was a point of contention among those surveyed. When asked about the statement "AI and ML benefit cybersecurity professionals more than they do criminals," 28% agreed, 37% disagreed, and 32% were unsure.
Some professionals who responded to the survey said they were confident they could definitively tie a rise in cyber threats over the past six months to AI, while 41% said they could not. (Both of these figures come from the 54% of respondents who reported seeing a substantial increase in cyber threats over the previous six months.)
Note: The U.K.'s National Cyber Security Centre has warned that generative AI could increase the volume and impact of attacks over the next two years, although the picture is a little more complicated than that. (TechRepublic)
Threat actors could use generative AI to launch attacks at a speed and scale that would be impossible for even a sizable human team. However, the impact of generative AI on the threat landscape is still unknown.
Organizations' AI policies and access to AI tools are in flux
Only 27% of respondents to the ISC2 survey said their organizations have a formal policy for the safe and ethical use of AI, and just 15% have a formal policy for securing and deploying AI technology (Figure C). Many organizations are still drafting an AI use policy of some kind:
- 39% of respondents' organizations are working on an AI ethics policy.
- 38% of respondents' organizations are working on a policy for the safe and secure deployment of AI.
According to the survey, organizations take very different approaches to giving people access to AI tools:
- My organization has blocked access to all generative AI tools (12%).
- My organization has blocked access to some generative AI tools (32%).
- My organization allows access to all generative AI tools (29%).
- My organization hasn't had internal discussions about allowing or disallowing generative AI tools (17%).
- I don't know my organization's approach to generative AI tools (10%).
Cybersecurity professionals may be at the forefront of thinking about generative AI in the workplace, since it affects both the threats they respond to and the tools they use for their work. AI adoption is still in flux and will undoubtedly keep changing as the industry grows, shrinks, or stabilizes. Only 60% of the security professionals polled said they felt confident they could lead the rollout of AI within their organization.
In a press release, ISC2 CEO Clar Rosso said: "Cybersecurity professionals anticipate both the opportunities and challenges AI presents, and are concerned their organizations lack the expertise and knowledge to introduce AI into their businesses securely. This presents a tremendous opportunity for cybersecurity professionals to take the lead, applying their expertise in secure technology to ensure its safe and ethical use."
How generative AI might be regulated
How generative AI is regulated will depend heavily on the interplay between government regulation and major tech companies. Four out of five survey respondents said they "see a clear need for comprehensive and specific regulations" governing generative AI. How that regulation might be implemented is less clear: 72% of respondents agreed that different types of AI will require different regulations.
- 63% of respondents believed AI regulation should come from collaborative government efforts (ensuring standardization across borders).
- 54% of respondents believed AI regulation should be handled by national governments.
- 61% (polled in a separate question) would like to see AI experts come together to support regulatory efforts.
- 28% support private-sector self-regulation.
- 3% prefer to keep today's unregulated environment as it is.
The ISC2 survey methodology
The survey was fielded between November and December 2023 to an international group of 1,123 cybersecurity professionals who are ISC2 members.
The term "AI" can be ambiguous today. Although the report uses the broad terms "artificial intelligence" and machine learning throughout, its subject is public-facing large language models such as ChatGPT, Google Gemini, or Meta's Llama, commonly known as generative AI.