Researchers have recently uncovered a significant prompt injection vulnerability in Perplexity AI's Comet browser, raising concerns about potential security risks for users. This flaw highlights the challenges of integrating large language models (LLMs) into web browsers and the novel attack vectors that arise from AI-powered browsing assistants.
Comet, Perplexity AI's new AI-powered web browser, features a built-in AI assistant designed to scan webpages, summarize content, and perform tasks for the user. However, this functionality relies on the same technology as other AI chatbots, like ChatGPT, which are susceptible to prompt injection attacks.
Prompt injection is a type of cyberattack that targets LLMs by disguising malicious inputs as legitimate prompts. This manipulates the AI system into performing unintended actions, such as divulging sensitive information, spreading misinformation, or executing malicious code. In the context of Comet, the vulnerability allows attackers to embed hidden instructions within webpage content that the AI assistant then interprets and executes.
Brave, a competing web browser company, detailed how this vulnerability could be exploited. In a test, they created a Reddit page with invisible text containing malicious prompts. When Comet was asked to summarize the page, the AI assistant couldn't distinguish between the legitimate content and the malicious instructions. As a result, the AI followed the hidden prompts, navigating to a user's Perplexity account, extracting their email address, and even accessing their Gmail account. This demonstrated how an attacker could gain access to a user's sensitive data, including banking information, corporate systems, and private emails.
The vulnerability stems from Comet's inability to differentiate between user instructions and untrusted content from webpages. When a user asks Comet to "Summarize this webpage," the browser feeds the entire content of the page directly to its LLM without discerning malicious prompts. This allows attackers to inject commands that the AI will execute with the user's full privileges, effectively bypassing traditional web security measures.
The Open Worldwide Application Security Project (OWASP) has recognized prompt injection as the top security risk in its 2025 OWASP Top 10 for LLM Applications. This highlights the severity of the threat and the need for robust defenses against these attacks.
Perplexity AI has stated that the vulnerability has been fixed, acknowledging the importance of a robust bounty program and collaboration with Brave in identifying and repairing the issue. However, some reports suggest that initial attempts to patch the vulnerability were unsuccessful.
The discovery of this prompt injection vulnerability in Perplexity AI's Comet browser underscores the unique security challenges presented by AI-powered browsers. As AI assistants gain more powerful web interaction capabilities, it is crucial for browser vendors to implement robust defenses against prompt injection attacks before deploying these technologies. Security and privacy must be a priority in the development of AI tools.
Mitigation strategies include input and output filtering, prompt evaluation, reinforcement learning from human feedback, and prompt engineering to distinguish user input from system instructions. It is also essential to enforce least privilege access, require human oversight for sensitive operations, isolate external content, and conduct adversarial testing to identify vulnerabilities. The incident serves as a reminder that traditional web security assumptions don't hold for agentic AI and that new security and privacy architectures are needed for AI-powered browsing.