
A recently discovered flaw in GitLab Duo, the company’s AI-powered programming assistant, has raised serious questions about the security of AI tools embedded in application development processes.
A remote prompt treatment weakness, which was discovered by Legit Security’s security experts, allowed hackers to hack into private projects ‘ private origin code, manipulate AI-generated code recommendations, and leak undisclosed security vulnerabilities.
How the exploit was used
GitLab Duo, powered by Anthropic’s Claude type, was created to aid designers in writing, reviewing, and analyzing script, but researchers found it to be far too cynical about the data it examined.
In accordance with Legit Security’s blog post, hackers were able to install hidden prompts in several GitLab tasks, including merge demand descriptions, committed messages, and issue comments, even inside the source code itself.
The invisible prompts manipulated Duo into performing harmful actions without the consumer realizing it because Duo scans and processes this content and offers helpful Artificial responses.
In the Legit Security report, security scientist Omer Mayraz stated that Duo analyzes the whole context of the page, including comments, descriptions, and the source code, making it resilient to pumped instructions hidden somewhere in that context.
Assailants employed a number of sophisticated strategies to conceal the malignant prompts, including:
- Unicode trafficking to conceal destructive instructions
- Base16 encoding to conceal causes in plain view.
- KaTeX layout in white words to render harmful text obtrusive on the page.
For instance, using KaTeX to embed words in feedback so that Duo can only see it and not the person.
In order to influence Duo’s behavior, hackers had the ability to suggest malicious JavaScript packages or present false URLs as genuine, which could potentially cause phishing websites.
Browser extortion and hacking
Assailants could sneak in fresh HTML, like as <, img>, and keywords because GitLab Duo channels its responses and renders them as they are generated. These keywords may be configured to send HTTP requests to a hacker-controlled server that will contain stolen source code that has been base64 encoded.
Legit Security demonstrated this by creating a fast quick prompting Duo to extract personal source code from a hidden merge request, express it, and put it inside a tag with the tag <, international src=…>. A user’s computer would automatically send the stolen information to the intruder when they saw the answer.
The experts explained that they had discovered the ability to insert raw HTML tags immediately into Duo’s response. The response information is passed into DOMPurify’s” sanitize” functionality, but some HTML tags, such as <, img>, <, form>, and <, a>, aren’t removed by proxy.
GitLab’s comment and update
On February 12, GitLab received a notification of the topic. The business patched the vulnerabilities in both HTML and rapid injection and released a fix for patch duo-ui! 52.
Legit Security claims that the patch then stops Duo from rendering illegal HTML keywords that point to additional domains that are not hosted on GitLab. This brings the type of abuse used in the show into focus.
The analysts praised GitLab’s proactive approach, saying that they appreciated its clarity and quick cooperation throughout the process.
This event raises a wider issue with the rise in AI-enabled application development and other delicate settings.
When fully integrated into growth workflows, AI assistants like GitLab Duo inherit not merely context but also risk, according to Mayraz.