Claude Chrome Extension Flaw Allowed Zero-Click Prompt Injection via Any Website
Cybersecurity researchers have disclosed a vulnerability in Anthropic's Claude Google Chrome extension that could have allowed attackers to inject malicious prompts into the AI assistant simply by getting a victim to visit a web page.
The flaw, dubbed ShadowPrompt, was uncovered by Koi Security researcher Oren Yomtov, who said the issue “allowed any website to silently inject prompts into that assistant as if the user wrote them.” In practical terms, that meant a victim did not need to click anything, approve a permission prompt, or even interact with the page in order for the attack to work.
According to the disclosure, the attack chained together two separate weaknesses. The first was an overly permissive origin allowlist in the Claude extension that trusted any subdomain matching the pattern *.claude.ai to send a prompt for execution. The second was a DOM-based cross-site scripting vulnerability in an Arkose Labs CAPTCHA component hosted on a-cdn.claude.ai.
Because the Arkose component could be abused to run arbitrary JavaScript in the context of a-cdn.claude.ai, an attacker could use it as a stepping stone to issue a prompt directly to the extension. Since that subdomain matched the extension’s trusted allowlist, the injected instruction would appear in Claude’s sidebar as though it were a legitimate user request.
Koi said the attack flow was effectively invisible to the victim. An attacker-controlled page could embed the vulnerable Arkose component inside a hidden iframe, pass the XSS payload using postMessage, and let the injected script fire the prompt into the Claude extension. The end result was a zero-click prompt injection chain triggered by ordinary browsing.
The impact could have been significant. Koi said successful exploitation could allow attackers to steal sensitive data such as access tokens, access conversation history with the AI assistant, and even perform actions on the victim’s behalf, including sending emails impersonating them or using the assistant to request confidential information.
What makes the issue especially important is the broader security model around AI browser assistants. Anthropic’s Chrome extension is described in the Chrome Web Store as a browser-based assistant that can navigate websites, fill forms, extract information, and automate multi-step workflows directly inside the browser. That kind of capability also raises the stakes when trust boundaries or message-passing logic break down.
Anthropic patched the extension after responsible disclosure on December 27, 2025. The fix, released in Chrome extension version 1.0.41, tightened the origin validation logic so that prompts now require an exact match to claude.ai rather than any subdomain under the broader wildcard pattern. Arkose Labs separately fixed the underlying XSS issue on February 19, 2026.
The incident is another reminder that browser-based AI agents should be treated less like ordinary extensions and more like privileged automation systems. Once an assistant can browse pages, inspect user context, and take actions on behalf of the user, any weakness in origin trust, embedded components, or extension messaging can become a path to powerful abuse. That is an inference based on the extension’s capabilities and Koi’s description of the risk.
Koi summarized the larger lesson clearly: the more capable AI browser assistants become, the more valuable they become as attack targets. An autonomous extension that can read credentials, navigate the web, and take user-like actions is only as secure as the weakest domain or component inside its trust boundary.
Reference Links and Sources