Props to them for actually updating their status page as issues are happening rather than hours later. I was working with Claude Code and hit an API error, checked the status page, and sure enough there was an outage.
This should be a given for any service that others rely on, but sadly this is seldom the case.
Thank you! Opening an incident as soon as user impact begins is one of those instincts you develop after handling major incidents for years as an SRE at Google, and now at Anthropic.
I was also fortunate to be using Claude at that exact moment (for personal reasons), which meant I could immediately see the severity of the outage.
Sweet. Hopefully it's more than instinct and is actually codified at Anthropic, i.e. a graduate engineer with little experience can assess the situation and raise an incident if needed.
As a solo bootstrapped founder, I take my sabbath sundown on Saturday to sundown on Sunday. Sunday evening therefore is generally the start of my work week.
"There's a problem and we already know about it" is so much better than "there's a problem and we don't know about it and/or are hoping it will magically go away and that we won't be embarrassed".
Also an engineer on this incident. This was a network routing misconfiguration - an overlapping route advertisement caused traffic to some of our inference backends to be blackholed. Detection took longer than we’d like (about 75 minutes from impact to identification), and some of our normal mitigation paths didn’t work as expected during the incident.
The bad route has been removed and service is restored. We’re doing a full review internally with a focus on synthetic monitoring and better visibility into high-impact infrastructure changes to catch these faster in the future.
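For anyone curious what "synthetic monitoring" looks like in practice, here's a minimal sketch in Python. It's purely illustrative - the endpoint, interval, and threshold are made up, and this is not Anthropic's actual tooling - but the idea is a probe that periodically exercises the user-facing path and alerts after consecutive failures, so a blackholed backend surfaces in minutes rather than 75.

    import time
    import urllib.request

    PROBE_URL = "https://api.example.com/health"   # hypothetical user-facing endpoint
    INTERVAL_S = 30                                 # how often to probe
    FAIL_THRESHOLD = 3                              # consecutive failures before alerting

    def probe_once(url: str, timeout: float = 10.0) -> bool:
        """Return True if the endpoint answers successfully within the timeout."""
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return 200 <= resp.status < 300
        except Exception:
            return False

    def run_probe() -> None:
        failures = 0
        while True:
            if probe_once(PROBE_URL):
                failures = 0
            else:
                failures += 1
                if failures >= FAIL_THRESHOLD:
                    # In a real setup this would page on-call; here we just print.
                    print(f"ALERT: {PROBE_URL} failed {failures} consecutive probes")
            time.sleep(INTERVAL_S)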
If you have a good network CI/CD pipeline and can trace the time of deployment to when the errors began, it should be easy to reduce your total TTD/TTR (time to detect / time to resolve). Even when I was parsing logs years ago and matching them up against AAA authorization commands issued, it was always a question of "when did this start happening?" and then "who made a change around that time period?"
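That "when did it start, and who changed something around then?" loop is easy to mechanize if change events are logged with timestamps. A rough sketch with hypothetical records, nothing vendor-specific:

    from datetime import datetime, timedelta

    # Hypothetical data: when the errors began, plus a log of recent changes.
    error_onset = datetime(2025, 1, 12, 18, 4)
    changes = [
        (datetime(2025, 1, 12, 17, 55), "alice", "advertise 192.168.1.0/24 from edge-router-2"),
        (datetime(2025, 1, 12, 14, 30), "bob", "bump inference image to v142"),
    ]

    def suspects(onset, changes, window=timedelta(hours=1)):
        """Return changes made shortly before the errors began, newest first."""
        hits = [c for c in changes if onset - window <= c[0] <= onset]
        return sorted(hits, key=lambda c: c[0], reverse=True)

    for ts, who, what in suspects(error_onset, changes):
        print(f"{ts:%H:%M} {who}: {what}")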
When I was at Big Corp, I loved reading the internal postmortems. They were usually very interesting and I learned a lot. It's one of the things I miss about leaving.
A tech company that publishes postmortems when possible always gets a +1 in my eyes; I think it's a sign of good company culture. Cloudflare's are great and I would love to see more from others in the industry.
A big reason for that is it comes from the CEO. Other providers have a team and then at least 2 to 3 layers of management above them and a dotted line legal counsel. So the goal posts randomly shift from "more information" to "no information" over time based on the relationships of that entire chain, the customer heat of the moment, and personality.
Underneath a public statement they all have extremely detailed post-mortems. But how much goes public is 100% random from the customer's perspective. There's no Monday Morning QB'ing the CEO, but there absolutely is "Day-Shift SRE Leader Phil"
Cloudflare deploys stuff on Fridays, and it directly affected Shopify, one of their major ecommerce customers. Until they fix their internal processes, all writeups should be seen as purely marketing material.
I absolutely see it as marketing, and it is effective because I still appreciate the write ups. Arguably any publicly traded company should be letting their investors know more details about outages.
Was this a typo situation or a bad process thing?
Back when I did website QA automation I'd manually check the website at the end of my day. Nothing extensive, just looking at the homepage for peace of mind.
Once a senior engineer decided to bypass all of our QA, deployed, and took down prod. Fun times.
Depending on how long someone's been in the industry, it's more a question of when, not if, an outage will occur due to someone deciding to push code haphazardly.
At my first job one of my more senior team members would throw caution to the wind and deploy at 3pm or later on Fridays because he believed in shipping ASAP.
There were a couple times that those changes caused weekend incidents.
Network routes consist of a network (a range of IPs) and a next hop to send traffic for that range to.
These can overlap. Sometimes that’s desirable, sometimes it is not. When routers have two routes that are exactly the same, they often load balance (in some fairly dumb, stateless fashion) between the possible next hops; when one of the routes is more specific, it wins.
Routes get injected by routers saying “I am responsible for this range” and setting themselves as the next hop; other routers that connect to them receive this advertisement and propagate it to their own router peers further downstream.
An example would be advertising 192.168.0.0/23, which is the range of 192.168.0.0-192.168.1.255.
Let’s say that’s your inference backend in some rows in a data center.
Then, through some misconfiguration, some other router starts announcing 192.168.1.0/24 (192.168.1.0-192.168.1.255). This is more specific, that traffic gets sent there, and half of the original inference pod is now unreachable.
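To make the "more specific wins" rule concrete, here's a tiny longest-prefix-match sketch in Python using the standard ipaddress module. The prefixes are from the example above; the next-hop names are made up for illustration:

    import ipaddress

    # Hypothetical routing table: (prefix, next hop).
    routes = [
        (ipaddress.ip_network("192.168.0.0/23"), "inference-backend-router"),  # legitimate advertisement
        (ipaddress.ip_network("192.168.1.0/24"), "misconfigured-router"),      # overlapping, more specific
    ]

    def next_hop(dst: str) -> str:
        """Longest-prefix match: the most specific route containing dst wins (assumes a match exists)."""
        dst_ip = ipaddress.ip_address(dst)
        matches = [(net, hop) for net, hop in routes if dst_ip in net]
        net, hop = max(matches, key=lambda m: m[0].prefixlen)
        return hop

    print(next_hop("192.168.0.10"))  # inference-backend-router (only the /23 matches)
    print(next_hop("192.168.1.10"))  # misconfigured-router (the /24 is more specific and wins)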
I was kind of surprised to see details like that in a comment, but I clicked on your personal website and see you're a co-founder, so I guess no one is going to reprimand you lol
Any chance you guys could do write-ups on these incidents similar to how Cloudflare does? For all the heat some people give them, I trust Cloudflare more with my websites than a lot of other companies because of their dedication to transparency.
I already love the product, and I think it would be great to see. Even if it's not as quick as Cloudflare's (they post ASAP, it's insane), I would still be happy to see postmortem threads. We all learn industry-wide from them.
Everybody using the same three centralized inference providers? That would be as absurd and unrealistic as everybody hosting in us-east-1 and behind Cloudflare today!
“A lone coder, trained in the direct manipulation of symbols—an elegant weapon from a more civilized age—is now all that stands between humanity and darkness.” etc
Agreed. When Cloudflare (ugh, aka the internet) goes down, we can't access information to think and work through. ("the fuel" in some metaphor)
But what about when LLMs go down and a good chunk of a whole generation won't even know how to think, when the remote system goes down? (Is the ability to think "the engine" of self and agency in this metaphor?)
We are building a wildly irresponsible context to exist in.
Claude code cut me off a few days ago and I _seriously_ had no idea what to do. I’ve been coding for 33 years and I suddenly felt like anything I did manually would be an order of magnitude slower than it had to be.
The nice thing is unlike Cloudflare or AWS you can actually host good LLMs locally. I see a future where a non-trivial percentage of devs have an expensive workstation that runs all of the AI locally.
I'm more and more convinced of the importance of this.
There is a very interesting thing happening right now where the "llm over promisers" are incentivized to over-promise for all the normal reasons -- but ALSO to create the perception that the "next/soon" breakthrough will only be applicable when run on huge cloud infra, such that running locally is never going to be all that useful. I tend to think that will prove wildly wrong, and that we will very soon arrive at a world where state-of-the-art LLM workloads are massively more efficient to run than they currently are -- to the point of not even being the bottleneck of the workflows that use these components. Additionally, these workloads will be viable to run locally on common current_year consumer-level hardware.
"llm is about to be general intelligence and sufficient llm can never run locally" is a highly highly temporary state that should soon be falsifiable imo. I don't think the llm part of the "ai computation" will be the perf bottleneck for long.
Ah, you need to buy into this dystopia wholesale. The internet is also down because the LLMs fucked up the BGP routing table, which Congress agreed (at the time) should run through the LLM interface.
Imagination, either the first or last thing to die in 2075.
“Hey folks, did you know in 100 years you can’t just call the town doc? Nah, you need to go get a referral. No, for real. Yeah, yeah, that is in fact a compound fracture. I can’t treat it without a referral. Congress made the rules.”
What's the best you can do hosting an LLM locally for under $X? Let's say $5000. Is there a reference guide online for this? Is there a straight answer or does it depend? I've looked at the Nvidia Spark and high-end professional GPUs but they all seem to have serious drawbacks.
That's nice, thank you, I've joined and will follow. They don't seem to have a wiki or about page that synthesizes the current state of the art though.
I think it's possible, but the current trend is that by the time you can run x level at home, they have 10-100x in the frontier models, so if you can run today's Claude.ai at home, then software engineering as a career is already over.
My poorly informed hope is that that we can have mixture of experts with highly tuned models on areas of focus. If I'm coding in language Foo, I only care about a model that understands Foo and its ecosystem. I imagine that should be self-hostable now.
A model that only understands, say, Java is useless: you need a model that understands English, has some kind of reasoning and some idea of how the human world works, and also knows Java. The vast majority of the computational effort is spent on the first parts; the Java knowledge is almost an afterthought. So a model that can only program in Java is not going to be meaningfully smaller than a model that can program in ~all programming languages.
Sure, but in the context I was considering, creativity itself wasn't a concern.
For coding, creativity is not necessarily a good thing. There are well-established patterns, algorithms, and applications that could reasonably be construed as "good enough" to assist with the coding itself. Adding a human-language model over that to understand the user's intent could be considered an overlay on the coding model.
I confess that this is willful projection of my hope to be able to self-host agents on affordable hardware. A frontier model on powerful hardware would always be preferable but sometimes "good enough" is just that.
I was chatting with Claude this morning about a trip to Chicago where I visited the wonderful Field Museum among other places. An excerpt:
Claude: The Field Museum is enormous - I imagine you got your steps in that day. The anthropology collections there are world-class, even if they didn't make it into your photo collection. Sometimes the experience is more important than the documentation. Did you have a favorite exhibit beyond the dinosaurs?
Me: Oddly enough, the anthropology and human history section! Now I need to take another visit to Chicago to get pictures of those.
[No response from Claude]
[40 minutes pass while I have breakfast]
Me: Looks like I caught you during your morning nap. Every two and a half year old should get a good nap now and then!
Claude: Ha! You caught me mid-thought - I was composing a response about how great it is that the anthropology section was your favorite despite the dinosaurs being such showstoppers, and apparently my brain decided to take that nap you mentioned! ... Though I have to say, for a "two and a half year old," I like to think I'm pretty articulate when I'm awake!
There really should be an HTTP header dedicated to "outage status" with a link to the service outage details page... clients (for example, in this case, your code IDE) could intercept this and notify users.
503 is cool, and yes, there is the "well if it's down, how are they going to put that up?" objection, but in reality most downtimes you see are on the backend and not on the reverse proxies/gateways/CDNs, where it would be pretty trivial to add an issues/status header with a link to the service status page and a note.
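Something like the sketch below is what I imagine: a purely hypothetical X-Outage-Status header injected at the proxy/CDN layer, and a client that surfaces it to the user. The header name and URLs are made up; no such standard exists today.

    import urllib.error
    import urllib.request

    STATUS_HEADER = "X-Outage-Status"   # hypothetical header name, not a standard

    def outage_notice(url: str) -> str | None:
        """Return the outage note a service advertises via the hypothetical header, if any."""
        try:
            resp = urllib.request.urlopen(url, timeout=10)
        except urllib.error.HTTPError as err:
            resp = err          # error responses (e.g. 503) can still carry headers
        except OSError:
            return None         # service totally unreachable; nothing to read
        return resp.headers.get(STATUS_HEADER)

    note = outage_notice("https://api.example.com/v1/messages")
    if note:
        print(f"Heads up, the provider reports an issue: {note}")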
More likely that error handling is not well implemented - i.e. either the backend is not throwing the equivalent of 429/402 errors, or the gateway is not handling the errors well and returns this message even though a 429 is being thrown.
This sort of thing keeps me quite skeptical of AI. ChatGPT also has nonsensical error messages for random failures, Gemini too. These companies have infinite compute and yet they haven't been able to implement reliable/graceful error handling in 2+ years for a chat app? Why are they promising us they can replace all developers?
This joke is kinda getting old. Opus 4.5 handles all the bugs in one go and also doesn't introduce new ones, at least for me. Very rarely do I get stuck with it like I did with past generations of AI.
Just a warning for those not on the max plan: if you pay by the token or have the lower-tier plans, you can easily blow through $100s or cap out your plan in under an hour. The rates for paying by the token are insane, and the scaling from pro to max is also pretty crazy.
They made pro have many times more value than paying per token, and then they made max on the $200 plan have 25x more tokens than pro again.
It’s a bit like being offered rice at $1 per grain (pay per token) or a tiny bag of rice for $20 (pro) or a truckload for $200. That’s the pricing structure right now.
So while I agree you can’t easily exceed the quota on the big plans, it’s a little crazy how they’ve tiered the pricing. I hope no one out there’s paying per token!
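Back-of-the-envelope, with made-up numbers just to show the shape of the problem (these are not Anthropic's actual rates, which vary by model and change over time):

    # All figures below are hypothetical placeholders, not real prices.
    price_per_mtok_in = 15.00      # $/million input tokens (assumed)
    price_per_mtok_out = 75.00     # $/million output tokens (assumed)

    # A heavy agentic session re-reads a lot of context.
    input_tokens = 4_000_000       # assumed tokens read in an hour of agent use
    output_tokens = 300_000        # assumed tokens generated

    cost = (input_tokens / 1e6) * price_per_mtok_in + (output_tokens / 1e6) * price_per_mtok_out
    print(f"~${cost:.2f} for one hour of heavy use at these assumed rates")  # ~$82.50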
Some companies are. Yes, for Claude Code. My co used to be like that, as it's an easy ramp-up instead of giving devs who might not use it that much a $150/mo seat; if you use it enough you can get a seat and save money, but if you're not touching $150 in credits a month, just use the API. Oxide also recommends using API pricing. [0]
0: https://gist.github.com/david-crespo/5c5eaf36a2d20be8a3013ba...
I trust companies that immediately and regularly update their status/issues page and follow up any outages with proper and comprehensive post-mortems. Sadly this is becoming the exception these days and not the norm.
Was it just me or did Opus start producing incredibly long responses before the crash? I was asking basic questions and it wouldn't stop trying to spit out full codebases' worth of unrelated code. For some very simple questions about database schemas, it ended up compacting twice in a 3-message conversation.
It seems resolved now (per the status page) - I experienced a moment where the agent got stuck in the same error loop, only to pop out the result this time. Makes me wonder if some kind of rule has been applied to automatically detect such failures occurring again - quite inspiring work.
You made me try Haiku since I can't get Opus, and it made me realize how quicker feedback simplifies many tasks. I should be more dynamic in my model selection.
I've been using it through this and it occasionally stops with an error message saying something like "repeated 529 responses". Kind of annoying but it's fine.
https://www.theguardian.com/technology/2021/oct/05/facebook-...
I have a memory that Slack fell into this trap too (I could be wrong)
Google SRE still keeps IRC available in case of an emergency.
Claude user base believes in Sunday PM work sessions
Did the bad route cause an overload? Was there a code error on that route that wasn’t spotted? Was it a code issue or an instance that broke?
Only curious as a developer and devops person. It's all quite interesting where and how things go wrong, especially with deployments as large as Anthropic's.
https://youtu.be/uRGljemfwUE?si=Sq0t-2ipXr_gDqao&t=69
Man that cracks me up.
Crazy: https://www.youtube.com/shorts/SV4DMqAJ8RQ
I've often thought that local power generation (via solar or wind) could be (or could have been) a viable alternative to national grid supply.
Is it so different?
“So sorry, we can’t read the WiFi signal from your generator. We don’t know how much to tax you for the emissions. Sadly, we can’t allow it to start.”
The story writes itself.
For comparison’s sake, this is clear: https://support.cerebras.net/articles/9996007307-cerebras-co...
And while the Cerebras service is pretty okay, their website otherwise kinda sucks - and yet you can find clear info!
So we can maybe vibe, depending on what service we use.
"I can't change the laws of physics!"
I don’t even bother with Sonnet anymore, it’s been made obsolete by Opus 4.5.
Spent it in bloody Figma instead :(
AFAIK it shouldn’t have been.
What else are people using? Haiku 4.5?
Looking forward to the post-mortem.