According to cybersecurity expert Liat Hayun, the AI surge is putting more risks in business information estates and sky environments.
Hayun, Tenable’s vice president of product management and research, advised businesses to prioritize understanding their risk coverage and compassion while focusing on addressing pressing issues like cloud failures and protecting sensitive data in an interview with TechRepublic.
She noted that while businesses are careful, certain risks are being made more acute by AI’s availability. She continued, noting that AI could finally be a powerful instrument for bolstering safety because CISOs nowadays are evolving into business enablers.
How AI is affecting security, data backup
TechRepublic: What is changing in the security setting according to AI?
Liat: First of all, AI has become much more visible to companies. If you consider a decade ago, the only organizations able to create AI were those with PhDs in information technology and statistics. Organizations now have many easier access to AI; It almost feels like introducing a new collection or programming language to their culture. There are so many more organizations, not only big ones like Tenable and people, but also any start-ups that can now use AI to incorporate that into their items.
Notice: Gartner Encourages Australian IT Leaders to acquire AI at their own speed
The second thing: AI requires a lot of files. There are so many more organizations that need to gather and store more data, which can also often have higher levels of sensitivity. Before, my streaming services would have only kept a small amount of information about me. Now, even my geographical matters, because they can make more specific tips based on that, or my time and my sex, and so on. Because they can now use this information for their company purposes, such as generating more revenue, they are now much more determined to keep it in larger volumes and with increasing levels of sensitivity.
TechRepublic: Is that feeding into growing use of the sky?
Liat: If you want to save a lot of information, it ’s significantly easier to do that in the sky. The volume of data you are storing grows with each new type of data you choose to keep. You can install new volumes of data without having to come inside your data centre. You simply log, and voila, you have a fresh data store location. Thus, the sky has made data storage much simpler.
These three elements create a kind of sphere that feeds itself. Because if it ’s easier to store information, you may upgrade more AI functions, and therefore you’re motivated to save even more information, and so on. But what has happened in the world recently, with LLMs becoming a much more attainable, common capability for organizations, posing challenges for all of these three verticals.
recognizing the safety concerns of AI
TechRepublic: Are you seeing certain security challenges rise with AI?
Liat: The use of AI in institutions, unlike the use of AI by personal people across the world, is still in its early stages. Organizations want to make sure that they are introducing it in a way that, in my opinion, does n’t create any unnecessary risk or extreme risk. But in terms of figures, we still only have a some examples, and they are not always a good picture because they’re more empirical.
One instance of a threat is the use of AI to learn sensitive information. That’s something we are seeing. It’s no because companies are not being watchful; It’s because it’s difficult to separate sensitive info from non-sensitive data while maintaining a potent AI system that is trained on the appropriate data set.
The second thing we’re seeing is what we call information poison. So, even if you have an AI broker that is being trained on non-sensitive data, if that non-sensitive data is publicly exposed, as an adversary, as an attacker, I can put my personal data into that publicly exposed, publicly available data storage and had your AI say things that you did n’t want it to claim. It’s not this all-knowing entity. It knows what it ’s seen.
TechRepublic: How should organisations weigh the security risks of AI?
Liat: First, I would ask how organisations can understand the level of exposure they have, which includes the cloud, AI, and data … and everything related to how they use third-party vendors, and how they leverage different software in their organisation, and so on.
SEE: Australia Promotes Required Guardrails for AI.
The second part is, how do you identify the critical exposures? You probably want to address that first if we know it has a high-severity vulnerability and is a publicly accessible asset. But it ’s also a combination of the impact, right? If you have two issues that are very similar, and one can compromise sensitive data and one cannot, you want to address that first [issue ] first.
You should also be aware of the best course of action to take to minimize the impact on the company.
TechRepublic: What are some big cloud security risks you warn against?
Liat: There are three things we usually advise our customers.
The first one is on misconfigurations. Even if you’re in a single cloud environment, but especially if you’re going multi-cloud, the chances of something becoming an issue are still very high just because of the complexity of the infrastructure, the complexity of the cloud, and all the technologies it provides. So that is definitely one thing I would concentrate on, especially when introducing new technologies like AI.
The second one is over-privileged access. Many people believe that their company is completely secure. However, if your home is a fort and you are handing out your keys to everyone in the neighborhood, that still poses a problem. So excessive access to sensitive data, to critical infrastructure, is another area of focus. Even if everything is perfectly set up and no hackers are present in your environment, there is still a risk.
The most important thing that people think about is identifying suspicious or malicious activity as soon as it occurs. This is where AI can be used; because if we use AI tools to monitor a lot of data and make sure that it is done quickly, we can use them to spot suspicious or malicious behavior in an environment. So we can address those behaviors, those activities, as early as possible before anything critical is compromised.
Implementing AI leaves you with too much of a chance.
TechRepublic: How are CISOs approaching the risks you are seeing with AI?
Liat: I’ve been in the cybersecurity industry for 15 years now. What I find fascinating about most security experts and CISOs is that they are not as experienced as they were ten years ago. As opposed to being a gatekeeper, as opposed to saying, “No, we can’t use this because it ’s risky, ” they’re asking themselves, “How can we use this and make it less risky? Which is an amazing trend to see. They’re becoming more of an enabler.
TechRepublic: Are you seeing the good side of AI, as well as the risks?
Liat: Organizations should consider how to introduce AI more than simply believing that AI is too risky at this time. You can’t do that.
Organisations that do n’t adopt AI in the next few years will simply stay in place. It’s an amazing tool that can benefit so many business use cases, internally for collaboration and analysis and insights, and externally, for the tools we can provide our customers. There is simply too good of a chance to pass up. I’ve done my job if I can help organizations adopt the mindset that they can use AI but we still need to take these risks into account. ”