
Image: BBC Technology
Claude Code users face unexpected token limit issues, prompting Anthropic to prioritize a resolution. What does this mean for developers relying on the service?
In recent weeks, Claude Code, Anthropic's AI-driven coding assistant, has garnered significant attention. However, users are now hitting usage limits much sooner than anticipated, raising concerns among developers who rely on the tool for their coding tasks.
On Reddit, Anthropic acknowledged the growing complaints and stated that they are currently investigating the matter. The company emphasized that resolving this issue is their "top priority." Users purchase tokens to access the AI services, but there seems to be a lack of transparency regarding how many tokens each task consumes, leading to confusion and frustration.
Many Claude Code users took to Reddit to express frustration over token consumption rates. One user reported hitting their token limit sooner on their $100 (£75) monthly subscription than on a free account, a disparity that raises questions about the value and efficiency of paid plans compared to free access.
Another user shared their experience with the service, stating, "One session in a loop can drain your daily budget in minutes." This comment highlights the potential for rapid token depletion during intensive coding sessions, which can severely impact productivity. A third user pointed out that a simple response in a conversation unexpectedly pushed their usage from 59% to 100%, amplifying concerns over the system's unpredictability.
Adding to the confusion, Anthropic recently introduced peak-hour throttling for its services, meaning that token consumption accelerates during high-demand periods. This adjustment is likely intended to manage server load but may inadvertently lead to quicker depletion of tokens for users trying to work during busy times.
For software developers, having a reliable AI coding assistant is crucial for efficiency. Disruptions in service due to token limits can hinder their workflow, making it vital for Anthropic to address these issues swiftly. The company offers various subscription tiers, starting at $20 per month for Claude Pro, with higher usage plans costing up to $200 per month. For larger organizations, tailored business pricing is also available.
In addition to the usage limit issues, Anthropic faced another setback when it accidentally released a portion of its internal source code for Claude Code on GitHub. This incident, attributed to human error, resulted in the exposure of 500,000 lines of code. An Anthropic spokesperson clarified that the release did not involve a security breach, and no sensitive customer data was compromised. However, this incident adds to the scrutiny surrounding the company, especially following a previous code leak in February 2025.
While the source code was already partially accessible due to reverse engineering by independent developers, the recent leak raises additional concerns about the company’s internal processes and data management.
As if the usability problems and code leaks weren't enough, Anthropic is currently embroiled in a legal battle with the U.S. government regarding the usage of its tools by the Department of Defense. This legal scrutiny could further complicate the company's operational landscape, potentially impacting its future development and innovation.
In light of these challenges, Anthropic will need to prioritize user experience and transparency around token usage. Developers depend on tools like Claude Code to streamline their work, and disruptions from unexpected limits can cascade into project timelines and deliverables.
As the company works to resolve the usage limits and refine its throttling measures, with the legal proceedings adding further uncertainty, the development community will be watching closely for signs of improvement, or continued trouble, in the weeks ahead.
