Close Menu
Alan C. Moore
    What's Hot

    Two escapees from immigration detention center recaptured, two still at large

    June 16, 2025

    Israel-Iran Conflict: Netanyahu claims Iran tried to assassinate Donald Trump—is his claim true?

    June 16, 2025

    Private plane enters restricted no-fly zone above G7 venue in Canada, military jets deployed

    June 16, 2025
    Facebook X (Twitter) Instagram
    Trending
    • Two escapees from immigration detention center recaptured, two still at large
    • Israel-Iran Conflict: Netanyahu claims Iran tried to assassinate Donald Trump—is his claim true?
    • Private plane enters restricted no-fly zone above G7 venue in Canada, military jets deployed
    • Some Things About Boelter’s ‘Hit List’ Don’t Make Sense
    • Is Israel About to Do the (Almost) Unthinkable?
    • Torching of 11 NYPD vehicles in Brooklyn ‘connected’ to LA protests, mayor says
    • Trump mulls using defense powers to fund rare-Earth projects
    • Judge approves $69M class action settlement in UnitedHealth 401(k) litigation
    Alan C. MooreAlan C. Moore
    Subscribe
    Monday, June 16
    • Home
    • US News
    • Politics
    • Business & Economy
    • Video
    • About Alan
    • Newsletter Sign-up
    Alan C. Moore
    Home » Blog » Generative AI in Security: Risks and Mitigation Strategies

    Generative AI in Security: Risks and Mitigation Strategies

    October 15, 2024Updated:October 15, 2024 Tech No Comments
    tr microsoft generative ai security risk reduction isc jpg
    tr microsoft generative ai security risk reduction isc jpg
    Share
    Facebook Twitter LinkedIn Pinterest Email

    With the launch of ChatGPT, generated AI quickly became the most infuriating phrase in technology. Microsoft is now using OpenAI base models and responding to customer inquiries about how AI affects the stability scenery two years later.

    Siva Sundaramoorthy, top cloud options security designer at Microsoft, usually answers these questions. A group of security professionals at ISC2 in Las Vegas on October 14 received an overview of conceptual AI, including its benefits and safety risks.

    What safety risks are associated with conceptual AI?

    During his statement, Sundaramoorthy discussed issues about GenAI’s reliability. He emphasized that the technology acts as a predictor, choosing what it believes to be the most probable response, even though other responses may also be true, depending on the circumstances.

    Cybersecurity practitioners should consider Iot use circumstances from three angles: use, program, and software.

    You need to comprehend the utilize event you are attempting to protect, Sundaramoorthy said.

    He added:” A lot of developers and people in companies are going to be in this center bucket]application ] where people are creating applications in it. Each business has a bot or a pre-trained AI in their atmosphere”.

    Notice: AMD revealed its rival to NVIDIA’s heavy-duty AI cards last week as the equipment war continues.

    When the usage, application, and platform are identified, AI may be secured also to other systems— though never completely. With conceptual AI, challenges are more likely to arise than with conventional systems. Sundaramoorthy named seven implementation hazards, including:

    • Bias.
    • Misinformation.
    • Deception.
    • Lack of accountability.
    • Overreliance.
    • Intellectual property rights.
    • Psychological effects.

    A risk map created by AI corresponds to the three sides outlined above:

    • Artificial use in surveillance can lead to disclosure of sensitive data, dark IT from third-party LLM-based apps or plugins, or inside danger risks.
    • Artificial applications in security can open doors for quick shot, data leaks or penetration, or outsider threat risks.
    • Artificial platforms can create safety problems through data poison, denial-of-service attacks on the model, robbery of models, design inversion, or hallucinations.

    To circumvent content filters, attackers can use techniques like prompt converters, which use obfuscation, semantic tricks, or directly malicious instructions, as well as jailbreaking techniques. They may possibly utilize AI systems and poison training data, do quick injection, take advantage of anxious plugin design, launch denial-of-service attacks, or force Artificial models to leak data.

    What happens if the AI connects to an API that can run some code on other systems, as described above? Sundaramoorthy said. Can you deceive the AI into creating a backdoor for you?

    Must-read security coverage

    Security teams must strike a balance between AI’s risks and benefits.

    Sundaramoorthy finds Copilot by Microsoft to be useful for his work frequently. However,” The value proposition is too high for hackers not to target it”, he said.

    Other issues that security teams should be on the lookout for in relation to AI include:

    • The integration of new technology or design decisions introduces vulnerabilities.
    • Users must be trained to adapt to new AI capabilities.
    • New risks are introduced by AI systems for sensitive data access and processing.
    • Throughout the lifecycle of an AI, transparency and control must be established and maintained.
    • The supply chain for AI can introduce malicious or vulnerable code.
    • It’s unclear how to effectively secure AI because there are n’t established compliance standards and the rapid evolution of best practices.
    • Leaders must establish a reliable starting point for generative AI-integrated applications.
    • AI introduces unique and poorly understood challenges, such as hallucinations.
    • The ROI of AI has not yet been demonstrated in the real world.

    Additionally, Sundaramoorthy emphasized that generative AI can fail both benignly and maliciously. An attacker could use a posing as a security researcher to steal sensitive data, such as passwords, to bypass the AI’s security measures in a malicious failure. When biased content unintentionally enters the AI’s output due to poorly filtered training data, a benign failure might occur.

    Trusted ways to secure AI solutions

    There are some tried-and-true methods for reasonably thorough search for AI solutions despite the uncertainty surrounding it. For generative AI, standard organizations like NIST and OWASP provide risk management frameworks. The ATLAS Matrix, a library of well-known strategies and techniques used by attackers against AI, is published by MITRE.

    Furthermore, Microsoft offers governance and evaluation tools that security teams can use to assess AI solutions. Google offers its own version, the Secure AI Framework.

    Organizations should use appropriate data sanitation and cleaning to prevent user data from entering training model data. When fine-tuning a model, they should use the principle of least privilege. When connecting the model to external data sources, strict access control policies should be in place.

    Ultimately, Sundaramoorthy said,” The best practices in cyber are best practices in AI”.

    To use AI — or not to use AI

    What about not using artificial intelligence altogether? At the opening keynote address of the ISC2 Security Congress, author and researcher Janelle Shane noted that one option for security teams is to avoid using AI because of the risks it presents.

    Sundaramoorthy adopted a different strategy. If AI can access documents in an organization that should be insulated from any outside applications, he said,” That is not an AI problem. That is an access control problem”.

    Disclaimer: ISC2 paid for my airfare, accommodations, and some meals for the ISC2 Security Congres event held Oct. 13 – 16 in Las Vegas.

    Source credit

    Keep Reading

    Major Outages Impact Google Cloud, OpenAI, More This Week: What We Know

    $14B Meta Investment in Scale AI Boosts Plans for Superintelligence Lab

    UK Passes Data Bill Without Controversial AI Copyright Clause: ‘Evolution, Not Revolution’

    First Known ‘Zero-Click’ AI Exploit: Microsoft 365 Copilot’s EchoLeak Flaw

    The Meta AI App Lets You ‘Discover’ People’s Bizarrely Personal Chats

    NVIDIA Expands AI Dominance in Europe with Major Partnerships and Infrastructure Deals

    Editors Picks

    Two escapees from immigration detention center recaptured, two still at large

    June 16, 2025

    Israel-Iran Conflict: Netanyahu claims Iran tried to assassinate Donald Trump—is his claim true?

    June 16, 2025

    Private plane enters restricted no-fly zone above G7 venue in Canada, military jets deployed

    June 16, 2025

    Some Things About Boelter’s ‘Hit List’ Don’t Make Sense

    June 16, 2025

    Is Israel About to Do the (Almost) Unthinkable?

    June 16, 2025

    Torching of 11 NYPD vehicles in Brooklyn ‘connected’ to LA protests, mayor says

    June 16, 2025

    Trump mulls using defense powers to fund rare-Earth projects

    June 16, 2025

    Judge approves $69M class action settlement in UnitedHealth 401(k) litigation

    June 16, 2025

    Former MTV VJ Ananda Lewis dies after battle with breast cancer

    June 16, 2025

    Obama on immigration: Former president breaks silence amid ICE arrests, speaks about DACA

    June 16, 2025
    • Home
    • US News
    • Politics
    • Business & Economy
    • About Alan
    • Contact

    Sign up for the Conservative Insider Newsletter.

    Get the latest conservative news from alancmoore.com [aweber listid="5891409" formid="902172699" formtype="webform"]
    Facebook X (Twitter) YouTube Instagram TikTok
    © 2025 alancmoore.com
    • Privacy Policy
    • Terms
    • Accessibility

    Type above and press Enter to search. Press Esc to cancel.