
AIM Security researchers have discovered a significant zero-click risk known as “EchoLeak.” Cybercriminals can steal sensitive data from a person’s corporate environment by simply sending a properly designed email, which is targeted by the AI-enabled Microsoft 365 Copilot.
This is the first known “zero-click” AI vulnerability to affect a significant software like Microsoft 365 Copilot, according to a report released this week. This means that people don’t need to take any action to stop the attack from working.
AIM Security explained that” the chains allow adversaries to instantly exfiltrate sensitive and amazing information from M365 Copilot environment without the user’s knowledge or by relying on any particular target behaviour.”
What experts call a “LLM Scope Violation” makes this possible. Simply put, the weakness tamps up Copilot’s underlying AI, which is based on OpenAI’s GPT designs, by obtaining private user data after reading harmful guidelines that were hidden in plain-looking emails.
How the strike operates
The researchers created a comprehensive, multi-part strike plan that defies Microsoft’s current security measures.
- XPIA pass: Microsoft uses XPIA classifier to look for nefarious prompts. The attacker bypasses these safeguards by writing the message in ordinary, non-technical language that sounds like it’s meant for a man, not an AI.
- Link revision bypass: Usually, additional websites are removed, but AIM Security discovered markdown link evasion techniques. These references return the URL with sensitive information.
- Navigator may be deceived into creating graphic links that send data to the attacker without the user clicking. These images can then trigger automatic browser requests.
- CSP Bypass via Microsoft Services: Despite Microsoft’s security measures in place to prevent inside pictures, hackers have discovered ways to course information through Microsoft Teams and SharePoint, which are permitted regions.
Additionally, the scientists discovered how using a technique known as “RAG spraying,” attackers can increase their chances of success. Instead of sending just one message, hackers either:
- Send numerous brief letters with slightly different subject lines, or send texts with significantly unique wordings.
- Send a lengthy, specifically written message that the AI system breaks down into smaller pieces.
This makes the Artificial use of normal usage to get the malignant message more frequently.
What is in danger?
Microsoft 365 Copilot has access to a variety of business records, including messages, OneDrive data, Teams messages, inside Powerpoint records, and other pertinent information.
Although Copilot is built to adhere to strict permission rules, EchoLeak circumvents these by altering how Copilot interprets and responds to user requests, essentially exposing information it shouldn’t.
The researchers argued that “underprivileged email” should not be able to relate to privileged data, especially when the email’s comprehension is mediated by an LLM.
Microsoft confirms CVE-2025-32711 and mitigates it
Microsoft has confirmed the issue, resolving it with the number CVE-2025-32711, which has a CVSS score of 9.3 out of 10. The official MSMRSA statement states that” AI command injection in M365 Copilot allows an unauthorized attacker to disclose information over a network.”
The company claimed that no customer action is necessary because the vulnerability has already been completely remedied on its end. Aim Labs was also thanked by Microsoft for its sincere disclosure.
Read TechRepublic’s coverage of Patch Tuesday, which Microsoft patched 68 security flaws, including one for targeted espionage, in the news this week.