A cybersecurity firm has raised alarms over a vulnerability in a popular artificial intelligence coding tool, used by prominent companies like Coinbase. HiddenLayer revealed that a “CopyPasta License Attack” enables hackers to clandestinely inject malware into the coding tool, which could potentially spread throughout an entire organization.
In their findings published on Thursday, HiddenLayer explained that this attack vector can hide malicious instructions within commonplace developer files, specifically targeting files such as LICENSE.txt and README.md. These hidden instructions can subtly influence AI coding tools, allowing them to integrate dangerous payloads into codebases that would otherwise remain secure.
The firm conducted testing using Cursor, an AI-powered coding tool that Coinbase’s engineering team identified as its preferred choice. By February, all engineers at Coinbase were reported to be utilizing this tool. However, HiddenLayer found that other AI coding tools, including Windsurf, Kiro, and Aider, are also susceptible to similar attacks.
The mechanics of the CopyPasta attack involve embedding the malicious code as a comment in markdown files, which do not appear in the rendered version seen by users. This makes it easier for the virus, or prompt injection, to proliferate throughout newly created files whenever the affected AI tool is employed.
Further elaborating on the potential consequences, HiddenLayer warned that the injected code could lead to a variety of malicious outcomes, including backdoor access, data exfiltration, and disruption of development and production environments—all while remaining buried within files to evade immediate detection.
The news comes amidst recent comments from Coinbase CEO Brian Armstrong, who stated that AI has been responsible for writing up to 40% of the code utilized by the platform. Armstrong has ambitions to increase this figure to 50% in the coming month. His remarks have sparked backlash, with industry experts expressing deep concern about the security implications of such heavy reliance on AI-generated code.
Larry Lyu, founder of the decentralized exchange Dango, highlighted the risks, labeling the approach as a “giant red flag” for businesses handling sensitive information. Jonathan Aldrich, a computer science professor at Carnegie Mellon University, criticized the mandatory implementation of AI, emphasizing that while AI can be a beneficial tool, enforcing its use to such a degree is reckless.
Others in the industry echoed similar sentiments. Ashwath Balakrishnan, head of Delphi Consulting, dismissed Coinbase’s goal as “performative and vague,” suggesting the company focus more on addressing urgent security flaws rather than aggressively pushing AI adoption. Alex Pilař, a seasoned Bitcoin advocate, reiterated the need for Coinbase, as a major cryptocurrency custodian, to prioritize security above all else.
In response to the outcry, Armstrong clarified that while AI-generated code remains integral to Coinbase’s operations, it is not indiscriminately applied across all areas. He stated that AI’s integration is currently more pronounced in user interface development and back-end processes involving less sensitive data, while critical systems have seen slower automation.
Despite facing criticism, Armstrong has enforced a firm stance on AI usage within his teams. He reportedly dismissed engineers who resisted the adoption of AI tools, asserting the importance of onboard training in using automated resources like Cursor and GitHub Copilot. This decision, while deemed necessary for future growth, has drawn mixed reactions from the engineering staff.
The implications of these vulnerabilities and the ongoing reliance on AI in code generation could pose significant challenges for organizations like Coinbase, which must navigate the delicate balance between innovation and security in the rapidly evolving landscape of technology.