A postmortem of three recent issues (anthropic.com)
193 points by moatmoat 5 hours ago | 21 comments
data-ottawa2 hours ago
With all due respect to the Anthropic team, I think the Claude status page[1] warrants an internal code red for quality. There were 50 incidents in July, 40 incidents in August, and 21 so far in September. I have worked in places where we started approaching half these numbers and they always resulted in a hard pivot to focusing on uptime and quality.

Despite this I'm still a paying customer because Claude is a fantastic product and I get a lot of value from it. After trying the API it became a no brainer to buy a 20x Max membership. The amount of stuff I've gotten done with Claude has been awesome.

The last several weeks have strongly made me question my subscription. I appreciate the openness of this post, but as a customer I'm not happy.

I don't trust that these issues are all discovered and resolved yet, especially the load balancing ones. At least anecdotally I notice that around 12 ET (9AM pacific) my Claude Code sessions noticeably drop in quality. Again, I hope the team is able to continue finding and fixing these issues. Even running local models on my own machine at home I run into complicated bugs all the time — I won't pretend these are easy problems, they are difficult to find and fix.

[1] https://status.anthropic.com/history

ruszki2 hours ago
I don’t know whether they are better or worse than others. One thing is for sure: a lot of companies lie on their status pages. I frequently encounter outages that are not reported on their status pages. Nowadays, I’m more surprised when they self-report a problem. Personally, I haven’t had serious problems with Claude so far, but it’s possible that I was just lucky. From my perspective, it just seems that they are reporting outages more faithfully. But that could be completely coincidental.
willsmith722 hours ago
> Despite this I'm still a paying customer because Claude is a fantastic product and I get a lot of value from it.

Doesn't that say it all? At this point the quality of the AI trumps reliability for the customer (you and me), so even though of course they should (and I'm sure will) focus on it, why would they prioritise reliability over model quality right now?

edoceo1 hour ago
The up-thread complaint is that quality drops, and it draws a line to reliability. They (Anthropic) have two hard problems to solve.
martinald1 hour ago
What makes it even worse is that the status page doesn't capture all the smaller incidents. This is the same for all providers. If they actually provided real-time graphs of token latency, failed requests, tokens/s, etc., I think they'd be pretty horrific.

If you trust this OpenRouter data the uptime record of these APIs is... not good to say the least: https://openrouter.ai/openai/gpt-5/uptime

It's clear to me that every provider is having enormous scale challenges. Claude Code often slows to a crawl and I have to interrupt it and tell it to try again.

This is especially pronounced around 4-6pm UK time (when we have Europe, Eastern US and West Coast US all hammering it).

Even today I was getting 503 "model overloaded" errors from Gemini AI Studio at that time, and nothing on the status page.

I really wonder if it would be worth Claude et al offering a cheaper off peak plan, to try and level out demand. Perhaps the optics of that don't look good though.

Edit to add: I think another potential dimension to this is that GB200s have been a lot slower to come on stream than the industry probably expected. There have been a lot of defects with various hardware and software components, and I suspect the liquid cooling has been difficult to get right (with far more catastrophic failure states!).

lumost2 hours ago
I've become extremely nervous about these sudden declines in quality. Thankfully I don't have a production product using AI (yet), but in my own development experience, the model suddenly becoming dramatically dumber is very difficult to work around.

At this point, I'd be surprised if the different vendors on openrouter weren't abusing their trust by silently dropping context/changing quantization levels/reducing experts - or other mischievous means of delivering the same model at lower compute.

martinald1 hour ago
Openrouter is aware this is happening and flags it now on the UI. It's a real problem.
extr4 hours ago
> Incorrect routing affected less than 0.0004% of requests on Google Cloud's Vertex AI between August 27 and September 16.

Matches my experience. I use CC through our enterprise Vertex AI account and never noticed any degradation.

In general it seems like these bugs, while serious, were substantially less prevalent than anecdotal online reports would have you believe. We are really talking about a ~1-2 week window here where most issues were concentrated, with a relatively small percentage of total requests and total users impacted.

ispeaknumbers4 hours ago
I'm not sure if you can claim these were "less prevalent than anecdotal online reports". From their article:

> Approximately 30% of Claude Code users had at least one message routed to the wrong server type, resulting in degraded responses.

> However, some users were affected more severely, as our routing is "sticky". This meant that once a request was served by the incorrect server, subsequent follow-ups were likely to be served by the same incorrect server.

30% of Claude Code users getting a degraded response is a huge bug.

extr3 hours ago
I don't know about you but my feed is filled with people claiming that they are surely quantizing the model, Anthropic is purposefully degrading things to save money, etc etc. 70% of users were not impacted. 30% had at least one message degraded. One message is basically nothing.

I would have appreciated if they had released the full distribution of impact though.

lmm18 minutes ago
> 30% had at least one message degraded. One message is basically nothing.

They don't give an upper bound though. 30% had at least one message degraded. Some proportion of that 30% (maybe most of them?) had some larger proportion of their messages (maybe most of them?) degraded. That matters, and presumably the reason we're not given those numbers is that they're bad.

dytyruio2 hours ago
> Anthropic is purposefully degrading things to save money

Regardless of whether it’s to save money, it’s purposefully inaccurate:

“When Claude generates text, it calculates probabilities for each possible next word, then randomly chooses a sample from this probability distribution.”

I think the reason for this is that if you were to always choose the most probable next word, you may actually always end up with the wrong answer and/or get stuck in a loop.

They could sandbag their quality or rate limit, and I know they will rate limit because I’ve seen it. But, this is a race. It’s not like Microsoft being able to take in the money for years because people will keep buying Windows. AI companies can try to offer cheap service to government and college students, but brand loyalty is less important than selecting the smarter AI to help you.

andy991 hour ago
> I think the reason for this is that if you were to always choose the highest probable next word, you may actually always end up with the wrong answer and/or get stuck in a loop.

No, it's just the definition of sampling at non-zero temperature. You can set T=0 to always get the most likely token. Temperature trades off consistency for variety. You can set T to zero in the API; I assume the defaults for Claude Code and their chat are nonzero.
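
To make that concrete, here's a minimal sketch (illustrative NumPy, not Claude's actual decoder) of how temperature reshapes the distribution; as T approaches 0, sampling collapses to argmax:

    import numpy as np

    def sample_with_temperature(logits: np.ndarray, temperature: float,
                                rng: np.random.Generator) -> int:
        # T -> 0 degenerates to greedy (argmax) decoding; higher T flattens the distribution.
        if temperature == 0.0:
            return int(np.argmax(logits))            # deterministic: most likely token
        scaled = logits / temperature
        probs = np.exp(scaled - scaled.max())        # numerically stable softmax
        probs /= probs.sum()
        return int(rng.choice(len(logits), p=probs))

    rng = np.random.default_rng(42)
    logits = np.array([2.0, 1.0, 0.2, -1.0])          # toy 4-token vocabulary
    print(sample_with_temperature(logits, 0.0, rng))  # always token 0
    print(sample_with_temperature(logits, 1.0, rng))  # usually 0, sometimes 1-3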

flutas3 hours ago
That 30% is of ALL users, not just users who made a request; important to note the weasel wording there.

How many users forget they have a sub? How many get a sub through work and don't use it often?

I'd bet a large number tbh based on other subscription services.

smca3 hours ago
(I work at Anthropic) It's 30% of all CC users that made a request during that period. We've updated the post to be clearer.
flutas3 hours ago
Thanks for the correction and updating the post.

I typically read corporate posts as cynically as possible, since it's so common to word things in any way to make the company look better.

Glad to see an outlier!

extr3 hours ago
That's a pretty cynical read. My personal impression is that Anthropic has a high level of integrity as an organization. Believe what you want, I'm inclined to give them the benefit of the doubt here and move on.
thousand_nights3 hours ago
i don't trust companies anymore because every time there's a worldwide outage they use softspeak like "we're observing elevated amounts of errors for a small subset of users", hours after some CTO approves changing the status page

imho there's a big market gap for companies that are truly honest with customers instead of corporate gaslighting

edoceo1 hour ago
I'm with you that a market gap for honesty exists - especially on status pages. Making a better product and being honest I'd class as very-very-hard.

I do think an independent service status monitor might be an easier stop-gap and could serve to improve honesty. It's not trivial.

HoyaSaxa3 hours ago
I’m pretty surprised that Anthropic can directly impact the infra for AWS Bedrock as this article suggests. That goes against AWS’s commitments. I’m sure the same is true for Google Vertex, but I haven’t dug in there from a compliance perspective before.

> Our own privacy practices also created challenges in investigating reports. Our internal privacy and security controls limit how and when engineers can access user interactions with Claude, in particular when those interactions are not reported to us as feedback.

Ok makes sense and glad to hear

> It remains particularly helpful for users to continue to send us their feedback directly. You can use the /bug command in Claude Code

Ok, makes sense, and I’d expect that a human can then see the context in that case, although I hope it is still very explicit to the end user (I’m not a Claude Code user so I cannot comment)

> or you can use the "thumbs down" button in the Claude apps to do so

This is pretty concerning. I can’t imagine the average person equates hitting this button with forfeiting their privacy.

l1n3 hours ago
(Anthropic employee, speaking in a personal capacity)

> I’m pretty surprised that Anthropic can directly impact the infra for AWS Bedrock as this article suggests.

We don't directly manage AWS Bedrock deployments today, those are managed by AWS.

> I can’t imagine the average person equates hitting this button with forfeiting their privacy.

We specify

> Submitting this report will send the entire current conversation to Anthropic for future improvements to our models.

in the thumbs down modal. Is there a straightforward way to improve this copy?

crazygringo2 hours ago
Sounds fine to me. I'm assuming it wasn't obvious to readers that there was a confirmation message that appears when thumbs down is clicked.
pluto_modadic2 hours ago
"have a human take a look at this conversation (from {time} to {time})"
_da_3 hours ago
> This is pretty concerning. I can’t imagine the average person equates hitting this button with forfeiting their privacy.

When you click "thumbs down" you get the message "Submitting this report will send the entire current conversation to Anthropic for future improvements to our models." before you submit the report, I'd consider that pretty explicit.

mulmboy17 minutes ago
Big missing piece - what was the impact of the degraded quality?

Was it 1% worse / unnoticeable? Did it become useless? The engineering is interesting but I'd like to see it tied to actual impact

cyanf3 hours ago
> On August 29, a routine load balancing change unintentionally increased the number of short-context requests routed to the 1M context servers. At the worst impacted hour on August 31, 16% of Sonnet 4 requests were affected.

Interesting, this implies that the 1M context servers perform worse at low context. Perhaps this is due to some KV cache compression, eviction, or sparse attention scheme being applied on these 1M context servers?

kiratp3 hours ago
This is due to RoPE scaling.

> All the notable open-source frameworks implement static YaRN, which means the scaling factor remains constant regardless of input length, potentially impacting performance on shorter texts. We advise adding the rope_scaling configuration only when processing long contexts is required. It is also recommended to modify the factor as needed. For example, if the typical context length for your application is 524,288 tokens, it would be better to set factor as 2.0.

https://huggingface.co/Qwen/Qwen3-Next-80B-A3B-Thinking
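
Rough sketch of the static-vs-length-aware distinction that quote describes (context sizes made up for illustration, not Anthropic's or Qwen's actual numbers):

    ORIGINAL_CTX = 262_144      # context length the base model was tuned for (made up)
    EXTENDED_CTX = 1_048_576    # long-context serving limit (made up)

    def static_yarn_factor() -> float:
        # Static YaRN: one fixed RoPE scaling factor applied to every request,
        # even a short chat prompt that doesn't need it.
        return EXTENDED_CTX / ORIGINAL_CTX

    def length_aware_factor(prompt_tokens: int) -> float:
        # Only stretch RoPE when the prompt actually exceeds the original window.
        return max(1.0, prompt_tokens / ORIGINAL_CTX)

    print(static_yarn_factor())             # 4.0 regardless of input length
    print(length_aware_factor(2_000))       # 1.0 -> short prompts keep native RoPE
    print(length_aware_factor(800_000))     # ~3.05 -> long prompts get scaled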

stellalo4 hours ago
Title should be fixed: it’s about Claude models in general, not Claude Code
dantodor2 hours ago
That is a very good start in sharing some level of information with their users, and kudos to the Anthropic team for doing that. However, I don't see any mention of the longstanding issue in CC of API timeout errors. And, at least for me, it's the most frustrating one.
lukasb1 hour ago
I almost never see these. Maybe the issue is your network?
stephen_cagle2 hours ago
I do wonder what a random dip in quality causes in a long running conversation? Does the conversation recover at a later point, or does the introduction of temporary idiocy permanently affect the rest of the conversation?

Statistically, it's likely that the dip occurred at a point that wasn't too important? But what happens if the idiot comes out at a critical point?

Kind of reminds me of the two alternate ways that time travel works in sci-fi. Does the small change to the past explode like a fission reaction, or does history heal itself?

Anywho, if errors do accumulate, I can see being very pissed off even with temporary idiocy from the model, as it means it poisons the context for the entire rest of the conversation.

Wowfunhappy4 hours ago
> On August 25, we deployed a misconfiguration to the Claude API TPU servers that caused an error during token generation. An issue caused by a runtime performance optimization occasionally assigned a high probability to tokens that should rarely be produced given the context, for example producing Thai or Chinese characters in response to English prompts, or producing obvious syntax errors in code. A small subset of users that asked a question in English might have seen "สวัสดี" in the middle of the response, for example.

Can anyone explain to a layperson how this sort of thing is even possible for an LLM?

For normal code, of course stupid bugs happen all the time. You accidentally introduce an off-by-one error in a conditional, for example, or add an extra `goto fail`.

But LLMs aren't written by humans! Models are trained by automated programs over a period of many months across unfathomably massive data centers.

How would a human introduce a bug like the one described in TFA?

blackqueeriroh5 minutes ago
Simple answer: there are two separate processes here, training and inference.

As you discuss, training happens over a long period of time in a (mostly) hands-off fashion once it starts.

But inference? That’s a separate process which uses the trained model to generate responses, and it’s a runtime process - send a prompt, inference runs, response comes back. That’s a whole separate software stack, and one that is constantly being updated to improve performance.

It’s in the inference process where these issues were produced.

Voloskaya3 hours ago
LLMs are still executed by code written by humans. In this case, the model ultimately gives you a probability distribution over each of the ~200k tokens in the vocabulary. It's then up to you to decide how you want to sample the next token: you could, for example, always pick the most likely one, or, to make the output more creative, sample randomly from the top-k tokens. To make it efficient, this top-k sampling is written in XLA and compiled to run directly as a kernel. There was a bug in that kernel, which presumably led to tokens outside the top-k window being selected from time to time.
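
To sketch that last step (a NumPy stand-in for the general technique, not Anthropic's XLA kernel): correct top-k sampling never leaves the top_idx window, and the bug would amount to the final choice escaping it.

    import numpy as np

    def sample_top_k(logits: np.ndarray, k: int, rng: np.random.Generator) -> int:
        # Renormalize over the k highest-logit tokens and sample among them only.
        top_idx = np.argpartition(logits, -k)[-k:]   # indices of the k largest logits
        top_logits = logits[top_idx]
        probs = np.exp(top_logits - top_logits.max())
        probs /= probs.sum()
        return int(rng.choice(top_idx, p=probs))     # stays inside the top-k window

    rng = np.random.default_rng(0)
    vocab_logits = rng.normal(size=200_000)          # ~200k-token vocabulary, as above
    print(sample_top_k(vocab_logits, k=40, rng=rng))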
Centigonal3 hours ago
LLMs produce a probability distribution for what the next token might be. How you pick the actual word that is printed next from that probability distribution is by using a sampling approach[1]. If your sampling approach is "select the next word randomly from among the top 4 possibilities" and you flip a > sign, you could end up with the behavior described in the OP.
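
A toy version of that flipped sign (hypothetical code, not anyone's real kernel): comparing against the cutoff in the wrong direction keeps the unlikely tail instead of the top candidates.

    import numpy as np

    rng = np.random.default_rng(1)
    logits = rng.normal(size=50)            # toy vocabulary of 50 tokens
    k = 4
    cutoff = np.sort(logits)[-k]            # k-th largest logit

    top_k_mask = logits >= cutoff           # intended: keep the 4 most likely tokens
    buggy_mask = logits <= cutoff           # flipped comparison: keeps the unlikely tail

    print(top_k_mask.sum())                 # 4 plausible candidates
    print(buggy_mask.sum())                 # 47 mostly-nonsense candidates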

[1] Here is an example of two common approaches: https://www.reddit.com/r/AIDungeon/comments/1eppgyq/can_some...

jjmarr3 hours ago
The next word can also be selected with weighted randomization and "temperature" is used to control how much weight lower probability tokens get.

I've honestly received the best results in creative writing by ignoring top_k/top_p and simply tuning temperature. Restricting my output to only common words causes everything to feel generic. But Deepseek constantly breaks into Chinese/gibberish/ZALGO! when I go to 1.14.

This isn't related to the "recent issues" but I feel like it's useful advice for anyone trying out AI story creation.

ashdksnndck4 hours ago
There are many layers of human-written code in between you and the weights.
jldugger2 hours ago
The AI kernels are floating point, so it's possible to do some unintuitive math that ends up negative even though it wouldn't be in the Real domain. I wouldn't be surprised if checking for overflow state is disabled for perf reasons and the negative simply becomes really big -- like asking for the -1st item in an array and getting the last.
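
A toy version of that guess (purely speculative, nothing to do with the real kernel): an unchecked negative count silently selects from the wrong end of a sorted candidate list.

    # Speculative illustration only: negative indexing wraps around,
    # so an underflowed count quietly picks from the wrong end.
    sorted_token_ids = [101, 57, 9, 4021, 88]   # most likely first (made-up ids)
    k = -1                                       # imagine this came from buggy float math
    print(sorted_token_ids[:k])                  # [101, 57, 9, 4021] -- silently drops a candidate
    print(sorted_token_ids[k])                   # 88 -> the *least* likely candidate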
vlovich1233 hours ago
Figuring out how to make their LLM serving deterministic might help them track this down. There was a recent paper about how the received wisdom, which kept attributing it to floating-point associativity, actually overlooked the real reasons for non-determinism [1].

[1] https://thinkingmachines.ai/blog/defeating-nondeterminism-in...

ants_everywhere7 minutes ago
network traffic and machine load aren't deterministic. I think for the near term, getting full determinism (e.g. for auditing) is going to only be feasible for batch jobs that are not cost sensitive.

A google search isn't deterministic. Neither is loading upvote count on social media.

It's common advice in distributed systems to have a graceful degradation state instead of becoming unavailable. That wouldn't be possible in a system that's completely deterministic.

mmaunder3 hours ago
Enforcing determinism has a big impact on performance. Which leaves using another model to essentially IQ-test their models, with reporting and alerting.
yomismoaqui1 hour ago
This reminds me of the story [1] about Facebook intentionally breaking parts of its Android app for some users (including crashing or disabling functionality), to see how far it could degrade before users stopped using Facebook.

According to reports, users did not stop coming back even when the app was broken for hours.

A similar thing happened to me when playing some initial version of The Binding of Isaac on Linux, when it was made with Flash. Its performance wasn't the best but I couldn't stop playing.

So if people still return, maybe Anthropic has something great going on with Claude Code.

[1]: https://www.theguardian.com/technology/2016/jan/05/facebook-...

Omnipresent1 hour ago
Which LLM can generate that timeline event graphic from text?
woah2 hours ago
Vibe coding gone wrong?
nojs1 hour ago
bdangubic1 hour ago
80% of Atlassian employees use Jira :)
flutas4 hours ago
And yet no offers of credits to make things right for the users, for what was essentially degraded performance of what you paid for.

I know I'll probably get push back on this, but it left a sour taste in my mouth when I paid for a $200 sub that felt like it was less useful than ChatGPT Plus ($20) at times.

Or to summarize: [south park "we're sorry" gif]

blackqueeriroh2 minutes ago
I’m pretty certain if you check the ToS that Anthropic doesn’t guarantee a level of response quality, and explicitly even says there is zero guarantee, even for paid plans.

So to be fair, you are getting exactly what you paid for - a non-deterministic set of generated responses of varying quality and accuracy.

OGEnthusiast4 hours ago
Seems like Claude is using TPUs a lot more than I thought. For some reason I thought 90%+ of their capacity was from AWS.
zer00eyz3 hours ago
If you are going to run a non-deterministic system on three very different hardware platforms, doesn't it behoove you to tell your users where their experience is coming from?

Calling the platforms A, B and C might help provide us the insight we're missing to spot incongruous behaviors faster than trying to aggregate more generalized feedback.

mvdtnz1 hour ago
I don't believe for one second that response quality dropped because of an infrastructural change and remained degraded, unnoticed, for weeks. This simply does not pass the sniff test.
blackqueeriroh0 minutes ago
Can you provide any proof of what you’re saying? Any examples that would bear out what you’re asserting? Anything at all?

“I refuse to believe what the people who would know the best said, for no real reason except that it doesn’t feel right” isn’t exactly the level of considered response we’re hoping for here on HN. :)

behnamoh3 hours ago
Reminder that Anthropic is the only AI company that has never released any open-source/weight models.
arduanika3 hours ago
Sure, but don't you feel safer that way?
behnamoh3 hours ago
of course, who wants an open-source Sonnet 3... /s
moatmoat5 hours ago
TL;DR — Anthropic Postmortem of Three Recent Issues

In Aug–Sep 2025, Claude users saw degraded output quality due to infrastructure bugs, not intentional changes.

The Three Issues

1. *Context window routing error*
   - Short-context requests sometimes routed to long-context servers.
   - Started small, worsened after load-balancing changes.
2. *Output corruption*
   - TPU misconfigurations led to weird outputs (wrong language, syntax errors).
   - Runtime optimizations wrongly boosted improbable tokens.
3. *Approximate top-k miscompilation*
   - A compiler bug in the TPU/XLA stack corrupted token probability selection.
   - Occasionally dropped the true top token.

Why It Was Hard to Detect

- Bugs were subtle, intermittent, and platform-dependent.
- Benchmarks missed these degradations.
- Privacy/safety rules limited access to real user data for debugging.

Fixes and Next Steps

- More sensitive, continuous evals on production.
- Better tools to debug user feedback safely.
- Stronger validation of routing, output correctness, and token selection.

sebastiennight4 hours ago
> Privacy/safety rules limited access to real user data for debugging.

Do their ToS really limit access to user data (prompt/response)? I don't remember seeing anything to that effect in their terms.

mcintyre19944 hours ago
I’d imagine they have a lot of internal controls, even if ultimately someone at the company can read the data within their terms. It makes sense that the teams debugging stuff wouldn’t have this access immediately.
favorited4 hours ago
I know that when you submit a thumbs up/down rating for a response, you need to opt-in to the whole chat conversation being shared with Anthropic.
bravetraveler4 hours ago
> We don't typically share this level of technical detail about our infrastructure, but the scope and complexity of these issues justified a more comprehensive explanation.

Layered in aggrandizing. You host a service, people give you money.

levocardia4 hours ago
No, what that statement means is "we know that if we just say 'we weren't downgrading performance to save money', you won't believe us, so here is a deep dive on the actual reason it happened"
pluto_modadic2 hours ago
they're big, and we expect proper behavior out of them when they mess up. that includes public details.
bravetraveler3 hours ago
They can still do the deep dive; that is absolutely convincing. They likely did: I got distracted before I could finish [by work, unfortunately - an incident of our own].

My criticism is it's 'puffy'. The 'scope and complexity' for a public postmortem is 'customer-facing'. Otherwise it's a tree/forest scenario.

One might say 'the lady doth protest too much'; this should be routine. It is, elsewhere: see Cloud, Web Hosting, PBX. Pick your decade.

deepdarkforest4 hours ago
Wow. Sneaky. They do not even state the rate of impact for the XLA bug afaik, which affected everyone, not just Claude Code users. Very vague. Interesting.

Claude Code has made almost half a billion so far[1] (>500m in ARR and it's like 9 months old), and 30% of all users have been impacted at least once, just from the first routing bug. Scary stuff.

Their post mortem is basically "evaluations are hard, we relied on vibe checking, now we are going to have even more frequent vibe checking". I believe it was indeed unintentional, but in a future where investors' money won't come down from the skies, serving distilled models will be very tempting. And you cannot be held to any SLA currently; it's just vibes. I wonder how enterprise vendors are going to deal with this going forward, since you cannot just degrade quality without the client or vendor even being able to really prove it.

[1][https://www.anthropic.com/news/anthropic-raises-series-f-at-...]

extr4 hours ago
Is your contention that paying for a service entitles you to zero bugs, ever?
deepdarkforest3 hours ago
Of course not! But usually, you can quantify metrics for quality, like uptime, lost transactions, response time, throughput etc. Then you can have accountability, and remediate. Even for other bugs, you can often reproduce and show clearly the impact. But in this case, other than internal benchmarks, you cannot really prove it. There is no accountability yet
_zoltan_3 hours ago
why would they publish the data you seek? I would not publish it either.

the blog explains what issues they had and how they fixed them. this is good enough.

flutas3 hours ago
If you paid for a streaming service and the HD option only worked for a random subset of users, and not you, would you complain?

It's a material difference in the product, not just "a bug."

dylan6043 hours ago
I'd honestly blame my ISP for traffic shaping my connection as a first assumption, and not immediately blame the streaming platform.
VirusNewbie1 hour ago
They likely don't want to say how much of their inference comes from GCP vs. AWS.