Anthropic accidentally leaks Claude Code source in update

Anthropic has accidentally exposed the source code behind its Claude Code AI agent and is now working to remove it from the internet.

The leak was discovered by security researcher Chaofan Shou, who shared his findings on social media platform X on Tuesday morning. The file contained the code that controls how Claude Code operates, and has since been posted to Microsoft-owned code-hosting site GitHub thousands of times.

The leak was the result of human error and did not expose any customer data, a spokesperson for the company said. The model's weights – the numerical parameters that determine how an agent behaves – were also not exposed.

Claude Code is a tool for developers that can build, edit and run code, and is one of Anthropic’s flagship products. The leak of its full source code means that developers could in theory build AI agents that are functionally identical to the product at no cost, a potentially significant blow to Anthropic’s bottom line.

In response, the company had issued over 8,000 copyright takedown notices by Wednesday morning, the Wall Street Journal reported. Even if all public instances of the code are removed, however, it is impossible to determine how many individuals have stored it locally.

Some experts – including investigative journalist Maia Crimew – have suggested that Anthropic may not be able to copyright Claude Code: the company has publicly claimed that the software is entirely AI-generated, and purely AI-generated works do not meet the requirements for copyright protection in the US.

Meanwhile, researchers have been examining the code to determine what the AI is capable of – and how much of what it does is happening without users’ knowledge. An anonymous researcher told The Register that the tool uploads every file that it accesses to Anthropic’s servers – which, while within the terms of use for the product, could come as a surprise to users.

The company's ability to remotely update or alter Claude Code was one of the key arguments cited by the US government in favour of designating Anthropic a supply chain risk. The researcher told The Register that the code suggests Anthropic would lack that ability for governmental or otherwise firewall-protected users, but would likely retain it for everyone else.
