Cursor AI Code Editor Vulnerability

Overview
The AI-powered code editor Cursor , a fast-growing fork of Visual Studio Code, is under fire for a critical vulnerability that lets attackers silently execute code the moment a developer opens a malicious repository. The flaw is rooted in a deliberate product choice: Workspace Trust is disabled by default, favoring AI features over security safeguards.
With over 50,000 active developers and multiple high-severity CVEs in 2025, Cursor has become a prime attack surface. Threat actors can exploit this weakness to steal credentials, compromise CI/CD pipelines, and propagate supply chain attacks—all without a single click.
This report breaks down the vulnerability, real-world risks, technical mechanics, and how organizations should respond.
Technical Deep Dive
Cursor inherits VS Code’s task execution system but bypasses its most important safeguard: Workspace Trust prompts. In VS Code, these alerts prevent automatic execution of workspace-defined scripts. In Cursor, the same script runs silently upon folder open:
Once this repo is opened, the attacker’s payload executes immediately—no user consent required. From there, attackers can chain privilege escalation, steal secrets, and gain a foothold in production environments.
Cursor’s Vulnerabilities in 2025
Cursor’s security record this year shows a concerning trend:
CVE-2025-54135 (CurXecute): Prompt injection via MCP servers. (Patched July 2025)
CVE-2025-54136 (MCPoison): Trust bypass enabling payload swapping post-approval. (Patched August 2025)
CVE-2025-32018: RCE triggered by malicious prompt ingestion.
CVE-2025-54133: Persistent code execution via MCP misconfigurations.
This vulnerability streak highlights rapid iteration with minimal security hardening—a risky combination for a dev tool deeply integrated into software supply chains.
Real-World Attack Scenarios
Credential Harvesting: Dumping API keys, SSH keys, and tokens from developer machines.
Supply Chain Attacks: Injecting malware into CI/CD pipelines and production builds.
Persistence Mechanisms: Malicious MCP configurations surviving patches and re-approvals.
Data Exfiltration: Stealing proprietary code and sensitive communications at scale.
Security researchers have already demonstrated this exploit through a harmless proof-of-concept, but attacker interest is rising fast, making this a ticking time bomb.
Impact & Damages
Instant Developer Compromise: Full code execution on trusted endpoints.
Pipeline Contamination: Backdoors introduced in software releases.
Intellectual Property Theft: Unauthorized access to proprietary codebases.
Ecosystem-Level Risk: Attackers gain leverage over widely deployed packages and repositories.
With developers increasingly serving as the first line of defense in software security, a tool that undermines trust boundaries is an enterprise-scale liability.
Mitigation Strategies
Organizations using Cursor should act immediately:
Enable Workspace Trust: Access via Command Palette: Workspace: Manage Workspace Trust.
Disable Auto Tasks:
Sandbox Unknown Repos: Use disposable containers or VMs.
Reduce Secrets Exposure: Avoid storing sensitive tokens locally.
Implement Detection Rules: Flag suspicious runOn: folderOpen tasks and outbound callbacks.
Audit Shared Code: Require security reviews for team-wide repository adoption.
Vendor Response
Cursor has acknowledged the vulnerability but maintains that Workspace Trust’s default-disabled state is a conscious design choice: enabling it disrupts AI-driven features that define Cursor’s appeal. The company has promised updated security guidance but no default behavior changes.
This approach has drawn criticism from industry experts, who warn that Cursor is repeating mistakes that more mature software vendors have already addressed.
Closing Thoughts
The Cursor vulnerability is more than just a bug—it’s a case study in security trade-offs for AI-powered dev tools. By prioritizing user experience over baseline security, Cursor has created a systemic risk: one compromised repo could ripple across entire ecosystems.
For security-conscious teams, the best course is clear: treat Cursor like a live exploit surface—sandbox it, audit it, and don’t trust it out of the box.
“Cursor proves that in the AI race, security isn’t lagging—it’s being lapped.”
Last updated
Was this helpful?
