Every time one of these vibe-coded meme sites gets posted, there are endless comments about how it’s not actually because of load: the GitHub team is shit, their tech stack is shit, Microsoft is shit, Azure is shit, etc.
Just compare the GitHub status page for public GitHub vs the enterprise cloud pages.
Enterprise has much better numbers, and I personally can’t remember the last time there was an outage that prevented me from doing work.
If the problems didn’t revolve around load, I’d expect to see the same uptime problems reflected on the enterprise offering.
> the GitHub team is shit, their tech stack is shit
1) Criticising a failure to deliver service is not a criticism of any individual; it is a fault of the system. You can criticise the system; that's permissible, especially when they have more resources than many countries and some of the best tech talent in the world on staff.
2) Their tech stack is shit, and they've gone on record for years defending it, quite arrogantly in some cases, as if nobody can possibly know anything unless they've worked at GitHub (even if you've built things that scale, or come in with an even larger scale, people on HN will happily say "but it's not GitHub", which is valid but not intellectually curious or open).
Azure is terrible, and it's being foisted on the team: even if they found some technical people to put at the top who say it'll be OK, it is a pretty cruel platform to use.
I've personally had a few conversations about their choice of relational database which were handled pretty defensively, and I think we're all somewhat cognisant of their frontend rewrite.
It's a waste of time to rewrite the UI and push AI tools when you can't even keep the site lit.
I have nothing against the engineers- I don't know why people keep chiming in as if we're punching down at "lowly engineers" when the reality is that it's a management failure of the highest order.
They're a billion-dollar company owned by a trillion-dollar one... it's very hard to "punch down" at this system. Nobody is going after individual engineers; we're punching at the fact that a system that is a de facto monopoly due to network effects is putting new features and pleasing its owners over the core offering. How is that an engineering failure? That's an active choice by management.
This checks out. I once was at a conference where they had a giant booth. A fairly well known person in the community brings me over to talk to his manager who is working the booth. "We should hire him, he's really smart." Within a minute of talking to this manager he says "You're a Linux guy? We do Windows." and physically turns away from me, conversation over. You know, fair enough, was an easy way to find that it wasn't a good fit. But the lack of curiosity about "what do you bring to the table" was pretty stunning.
> It's a waste of time to rewrite the UI and push AI tools when you can't even keep the site lit.
This is a flawed argument. There are many designers and frontend engineers there who have zero role in improving site reliability. They might as well keep doing their jobs, instead of having the CSS wizards and art school grads team up and try to crack Azure.
It's common knowledge that official status pages don't actually reflect downtime: because of SLAs, the status page could be weaponised against the company. So comparing them is useless.
You rarely see "outages" even if that's what happens in reality; in marketing speak it's referred to as 'degraded performance', i.e. the cheque is in the post, your data is in the tubes on its way, it's just slow! A business-oriented lie.
Far more useful are the 'independent status pages' maintained by enthusiasts that are unaffiliated with whomever they are measuring.
'GitHub Enterprise Server' is hosted on your own resources, not their cloud. It makes sense that it wouldn't have the same downtime as their cloud, but that's hardly relevant.
'GitHub Enterprise Cloud' is their offering hosted on their own resources and what I suspect most enterprise customers use. It's what I at $extremelylargecompany use. It follows the same uptime/downtime as their public non-enterprise offering.
No, if you use GitHub Enterprise Cloud with data residency, then you are on separate infrastructure. Here is the status page for the US enterprise cloud data residency: https://us.githubstatus.com/posts/dashboard (which, funnily enough, is reporting an issue at the moment).
You can tell if you are on GitHub Enterprise with data residency because you will access GitHub at a GHE.com domain rather than github.com. It definitely has better uptime than the public cloud, but it is not without its own issues.
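If you're not sure which flavour you're on, the remote URL gives it away. A small sketch of that check; the only assumption is the host-matching rule described above (GHE.com for data residency, github.com for the shared infrastructure):

```python
from urllib.parse import urlparse

def is_data_residency(remote_url: str) -> bool:
    """Heuristic: a *.ghe.com host means data-residency enterprise
    cloud; github.com means the shared public infrastructure."""
    host = urlparse(remote_url).hostname
    if host is None:
        # ssh-style remotes like git@tenant.ghe.com:org/repo.git
        # don't parse as URLs, so pull the host out by hand.
        host = remote_url.split("@")[-1].split(":")[0]
    host = host.lower()
    return host == "ghe.com" or host.endswith(".ghe.com")

print(is_data_residency("https://github.com/org/repo"))      # False
print(is_data_residency("git@tenant.ghe.com:org/repo.git"))  # True
```

Point it at the output of `git remote get-url origin` to check a real checkout.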
What do you mean by enterprise cloud? The default GitHub enterprise cloud plan is hosted on the same infra as “public github.” Do you have a link to what you mean?
It'd make sense if there was a "you get what you pay for" attitude at MS re: public GitHub. It's not a good position for the free users, but, what else are you gonna do? Stand up your own? Retrain yourself on your SDLC in a new platform?
It's a collection of many things. Some of us use a few things, some of us use lots of things.
I, for one, am mostly happy with GitHub and have been for the last 18 years I've been using it.
That said, GitHub Actions and Container Registry have been a bit... unreliable. This isn't to say all of GitHub is unreliable, just that these relatively new additions in GitHub's nearly 20-year history are a bit s** when it comes to uptime. I hope they can figure it out. :)
The pattern recently is to collect every individual service degradation and present them as all equally significant.
Erase the severity and then present them all as “GitHub outages” or reduce it to an uptime graph.
I’m not happy with GitHub’s recent major outages either, but there is an ugly side to the pile-on: vibe-coded, attention-seeking websites and social media posts that blur the line between small service degradations and total site outages to be more dramatic, all to collect upvotes, likes, karma, and attention.
Every time a GitHub outage is posted, I wonder more and more what percentage of Hacker News commenters have actually worked on a system with >10k active hosts, seen what it takes to run them, and seen how internal dashboards are presented. So much of the criticism makes zero sense, especially the third-party uptime pages.
I think it is fair to blame GitHub if they repackage other services. We run a much smaller service than GitHub and have all sorts of fallbacks to different providers and different models.
Yup! When I did an analysis last month, GitHub was up 89.3% on weekdays and 96.5% on weekends. Incidents touch 62% of weekdays and 11% of weekends. Claude shows the same pattern: 92.5% weekday, 97.8% weekend. Tuesday through Thursday is the danger zone. Sunday is practically a different service.
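For what it's worth, a weekday/weekend split like that is simple to compute from an incident log. A sketch; the incident windows below are invented for illustration, not real GitHub data, and it assumes incidents don't span midnight:

```python
from datetime import date, datetime, timedelta

# Hypothetical incident log as (start, end) pairs; real data would come
# from a scraped status-page history.
incidents = [
    (datetime(2026, 1, 5, 10), datetime(2026, 1, 5, 14)),  # Mon, 4 h
    (datetime(2026, 1, 7, 9),  datetime(2026, 1, 7, 12)),  # Wed, 3 h
    (datetime(2026, 1, 10, 8), datetime(2026, 1, 10, 9)),  # Sat, 1 h
]

def uptime_pct(incidents, days):
    """Percent uptime over a set of dates."""
    total = timedelta(days=len(days))
    down = sum((end - start for start, end in incidents
                if start.date() in days), timedelta())
    return 100 * (1 - down / total)

week = {date(2026, 1, 5) + timedelta(d) for d in range(7)}
weekdays = {d for d in week if d.weekday() < 5}  # Mon-Fri
weekends = week - weekdays                       # Sat-Sun

print(f"weekday uptime: {uptime_pct(incidents, weekdays):.1f}%")  # 94.2%
print(f"weekend uptime: {uptime_pct(incidents, weekends):.1f}%")  # 97.9%
```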
I had an occasion recently where I was working a lot of late nights/early mornings with AI use. And I'd be getting these instant, beautiful responses, and then, as soon as the sun started coming in the windows, it would take longer and fail more, and by the time the clock struck 9 AM, every LLM had turned back into a pumpkin.
The missing status page [1] treats any time any component of the system is down as downtime. It calculates overall uptime from the time that doesn't overlap with any individual category outage, and overall downtime as any time overlapping with at least one category outage, to avoid double-counting. They show 24h of minor outage on that date.
I'm guessing that this site is taking the downtime in a given day across all services and adding it up, which would mean the worst possible day has 10 days of downtime (a day of downtime for each major category).
I see a bullet point for "1.0 days of 1.3 days", and when I mouse over the previous day (Wednesday 2025-11-19), I see "7.8 hours of 1.3 days".
I haven't actually checked any sources to confirm there really was downtime on those days, but if we assume those numbers are true 7.8 hours + 1 day is about 1.3 days.
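The gap between the two counting methods is easy to sketch. The outage windows below are made up for illustration: summing per-category downtime can exceed 24 hours in a day, while taking the union of overlapping windows (the approach described above) cannot.

```python
from datetime import datetime, timedelta

# Hypothetical per-category outage windows on one day (start, end).
outages = {
    "git":     [(datetime(2025, 11, 20, 9),  datetime(2025, 11, 20, 17))],
    "actions": [(datetime(2025, 11, 20, 8),  datetime(2025, 11, 20, 12))],
    "pages":   [(datetime(2025, 11, 20, 15), datetime(2025, 11, 20, 18))],
}

def summed_downtime(outages):
    """Naive sum across categories: can exceed 24 h in a single day."""
    return sum((end - start
                for spans in outages.values()
                for start, end in spans), timedelta())

def union_downtime(outages):
    """Merge overlapping windows so concurrent outages count only once."""
    spans = sorted(s for v in outages.values() for s in v)
    merged = []
    for start, end in spans:
        if merged and start <= merged[-1][1]:
            merged[-1][1] = max(merged[-1][1], end)
        else:
            merged.append([start, end])
    return sum((e - s for s, e in merged), timedelta())

print(summed_downtime(outages))  # 15:00:00 (8 h + 4 h + 3 h)
print(union_downtime(outages))   # 10:00:00 (one merged 08:00-18:00 window)
```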
The contrast between the official [0] and third-party [1] status pages is huge. How are their SLA terms of service legal if they are so different from real-world usage of their product? I really like GitHub and their services, but every time it’s broken and their status page is green, something screams inside me.
Their terms of service are legal because their terms of service require YOU, the CUSTOMER, to track their availability against the agreed upon SLA and to pursue credits when they break their SLA.
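As a rough sketch of what pursuing credits against such an SLA looks like in practice: the customer logs downtime and claims a credit percentage from a tier table in the contract. The tiers and numbers below are illustrative, not GitHub's actual terms.

```python
# Illustrative SLA tiers: (minimum uptime %, credit % of the monthly bill).
# These numbers are made up; check your actual contract.
SLA_TIERS = [
    (99.9, 0),   # within SLA: no credit
    (99.0, 10),
    (95.0, 25),
    (0.0, 50),
]

def credit_pct(observed_uptime: float) -> int:
    """First tier whose floor the observed uptime meets wins."""
    for floor, credit in SLA_TIERS:
        if observed_uptime >= floor:
            return credit
    return 0

def monthly_credit(bill: float, downtime_hours: float,
                   month_hours: float = 730) -> float:
    """Credit owed for a month, given customer-logged downtime hours."""
    uptime = 100 * (1 - downtime_hours / month_hours)
    return bill * credit_pct(uptime) / 100

# 15 h of logged downtime in a month is ~97.9% uptime: a 25% credit here.
print(monthly_credit(bill=2000, downtime_hours=15))  # 500.0
```

Note how even a substantial bill yields only hundreds of dollars per month, which matches the anecdotes below about credits not being worth the business's time.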
At a recent gig we experienced many, many GitHub outages that were not tracked on their status page, and we kept a log (i.e. just search in slack). After our business people argued with our account executives at GitHub we got hundreds of dollars of credits.
Then the business people complained because hundreds of dollars of credits is not worth their time. And so GitHub continues to have terrible uptime and nothing is done about it.
This. We talked to our account reps and engineering folks at GitHub - they had no monitoring to track if they had kept their end of the contracted SLA.
They expected us to log any faults and as you say the process wasn't worth it - even with massive outages - just for a few beans in credits.
GitHub has low availability simply because it doesn't cost them anything: they suffer no legal or contractual damages from it.
If a competitor came to me and said, we will _pay_ you damages for the time your developers are offline not able to use our product to do their jobs, we would sign up immediately.
Funnily enough, yesterday, when things were breaking, a coworker linked to the mrshu one, and it showed all green while the official showed issues with actions.
> How their terms of service for SLA are legal if they are so different from real world usage of their product?
Because the SLA likely doesn't cover some features of GitHub, whereas an outage or issue with a single model is counted as a problem on the third-party page.
We just invested a lot migrating 300+ pipelines from Azure DevOps to GitHub Actions. What a bummer timing-wise. Anyone got an alternative to GitHub Actions?
I'd suggest not buying in too hard on any one of these CI systems and just writing shell scripts. Shell scripts are portable, and you can use whatever to trigger them.
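A minimal sketch of that approach: one plain POSIX shell script per repo, with each CI system reduced to a one-line invocation. The step bodies here are placeholders you would swap for your real commands.

```shell
#!/bin/sh
# ci.sh: a hypothetical portable CI entry point. Each step is a plain
# shell function, so GitHub Actions, GitLab CI, Jenkins, or a cron job
# can all run the same logic via something like `sh ci.sh`.
set -eu

lint()  { echo "linting...";  }   # swap in your real linter
build() { echo "building..."; }   # swap in your real build command
tests() { echo "testing...";  }   # swap in your real test runner

# Run the step named on the command line, or everything by default.
case "${1:-all}" in
  lint)  lint ;;
  build) build ;;
  test)  tests ;;
  all)   lint && build && tests ;;
  *)     echo "unknown step: $1" >&2; exit 1 ;;
esac
```

A GitHub Actions job then shrinks to a single `run: sh ci.sh` step, and moving to another CI system later means porting one line per pipeline instead of rewriting 300 YAML files.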
Really? A 10 minute interaction with the platform was enough to inform me that no serious engineer is in charge, and no serious engineer chooses this platform.
It is a platform for CFOs to avoid having another vendor relationship.
$250k can do a good job of easing meme-induced pain.
"Survive underwater", what a joke. Yes, there will be good engineers there who will be sad to see it go this way, but they chose to be there, and they get paid better than 99% of humanity for it.
Interesting. I'm in EU and see these constantly but usually in the afternoon so it bothers me less as I'm already wrapping up, but my US coworkers are getting hit much worse.
I also bet my money on Azure. Someone who allegedly worked there recently posted an article here on the numerous problems with Azure. Sadly I didn’t bookmark it.
I don't really understand why this is happening at this scale, it's not like they just became broke and can't afford a proper server... can someone explain?
Agents are shipping code faster all over the world, in some cases 24 hours a day. Additionally, a significant number of non-developers are now developers, i.e. they are also shipping to GitHub regularly.
This is not limited to just pushing code but all the bells and whistles that github added as features under the assumption of some predictable growth are now exceeding the original plans.
I suspect a lot of their existing systems have to be re-architected for unanticipated scale, and it won't happen overnight for sure.
Pretty damning. It would also be interesting to see the number of commits overlaid. The graph tells a great story about the correlation with MS's takeover, but I wonder whether, at the same time uptime went to shit, MS was shifting large numbers of enterprise contracts over to GitHub. That would be a more complete story IMO.
None of which excuses this. Can you imagine someone's reaction in 2017 if you told them that github would be below 90% uptime in 2026? It would be unimaginable.
That’s nonsense. GitHub didn’t have 100% uptime before 2020. I remember downtime back then. And Microsoft didn’t make changes that fast. The only thing that changed is the accuracy of their status page.
Also go back and look at the unofficial status page from 3 years ago. It’s regularly above 99% and has been dropping steadily since then. Then in the last 3 months has dropped to below 85%.
The faster you move, the more you screw up; almost no company producing software has figured out how to move fast and not screw up. It's so hard that companies even used to boast about how much they didn't care about screwing up, as long as they moved fast.
Add in new "productivity" tools that help you move even faster, with even less regard for how much you screw up (even though the tools could be used to move at the same speed with fewer screw-ups), and an engineering culture that boils down to "Why not?", and you get platforms run by Microsoft that are unable to achieve two nines of reliability.
That doesn’t track, because GitHub Enterprise Cloud has great uptime. This is all load-based: vibe-coded AI slop shipped in record volumes by users who will never convert to paid. The real question is: what are they doing about that?
At least as of when I left the company, GitHub was being deployed roughly once every 60-90 minutes (the frequency of a deploy train/merge queue batch going out), 24 hours a day, at least during weekdays. There are a fair number of international engineers, and deploy trains get crowded during main US business hours, so while fewer PRs go out at odd hours US time, there were typically still some. There are no dedicated releases as such for GitHub-hosted instances: everything you release needs to be gated behind a feature flag or other mechanism if it's not going live immediately, and your code either needs to handle the database in both its pre- and post-migrated state, or you need to run the migration in advance of your code shipping out.
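That gating pattern can be sketched roughly like this; every name here is hypothetical, not GitHub's actual internals:

```python
# Sketch of flag-gated shipping plus a dual-shape database read.

FLAGS = {"new_issue_sidebar": False}  # flipped at runtime, not at deploy time

def flag_enabled(name: str) -> bool:
    return FLAGS.get(name, False)

def render_sidebar(issue: dict) -> str:
    # Code ships dark; the new UI only renders once the flag is flipped.
    if flag_enabled("new_issue_sidebar"):
        return f"[new sidebar] {issue['title']}"
    return f"[old sidebar] {issue['title']}"

def issue_labels(row: dict) -> list:
    # Must handle both the pre- and post-migration row shape, since the
    # schema migration runs ahead of (or behind) any given deploy.
    if "labels_list" in row:                     # post-migration shape
        return row["labels_list"]
    return row.get("labels_csv", "").split(",")  # pre-migration shape

print(render_sidebar({"title": "flaky CI"}))   # [old sidebar] flaky CI
FLAGS["new_issue_sidebar"] = True
print(render_sidebar({"title": "flaky CI"}))   # [new sidebar] flaky CI
print(issue_labels({"labels_csv": "bug,ci"}))  # ['bug', 'ci']
```

The point of the pattern is that any deploy in the train can be rolled forward or back without coordinating with the flag flip or the migration.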
Fun fact: it used to be the case that GitHub was actually _less_ reliable if nobody deployed to it. There were various resource leaks that we didn't see when people were deploying all day, because the constant deploys kept restarting the app. After GitHub went down during a holiday break, we had volunteers deploy GitHub once a day during holiday breaks, until the underlying issues were eventually fixed.
Or even both. In any kind of continuous deployment, you'd expect outages at the point of deployment, or shortly thereafter as the unintended consequences ripple.
Then the load during working days amplifies those ripples into outages.
Most outages are caused by changes made by humans ("actors"?). Very rarely is it "people just dig our stuff so much we can't keep up"; more often it's "we didn't think about this performance drawback when we built thing X, and now it's hurting us". And of course, you get more outages when you try to fix those issues without fully considering the scope and impact.
You can let people organize/filter them into groups based on the stack they use, and provide email/Discord/Slack notifications if any of the services in their groups change their status.
This website has no overused AI-generated animations and... I quite enjoy it. The original website[1] has a fade-in animation, big round cards, shadows: all the jazz you can think of is there.
This site is very readable, very honest and sober. I don't need to sift through buzzwords to figure out tiny details.
Be curious.
Very little discussion of any merit happens on these posts. It’s mostly bandwagoning and repeating the same comments they read on the last iteration.
GitHub is not a mom&pop operation.
I expect the $3T company to handle the load, or at least place a prominent "only for hobby use" warning on top.
Also it's not a fair comparison because it's not necessarily the same code between the public and Enterprise...
> Check GitHub Enterprise Cloud status by region:
> - Australia: au.githubstatus.com
> - EU: eu.githubstatus.com
> - Japan: jp.githubstatus.com
> - US: us.githubstatus.com
They need a competitor.
> Disruption with Grok Code Fast 1 in Copilot
> Incident with Copilot Grok Code Fast 1
> Claude Opus 4 is experiencing degraded performance
It doesn't seem fair to blame Github for this? There's nothing they can do about it?
https://www.aakash.io/tech-chase/github-and-claude-are-down-...
I made this one in January to help slice and dice uptime by incident category.
https://isgithubcooked.com
> Across 170 days with at least one incident · worst day Thu, Nov 20, 2025 (1.1 days)
1.1 days total? How is that possible? Hovering over that day doesn't reveal the math behind the scenes, just a single 1.3-hour bullet point.
Also, Nov 19 has a bullet point for a 1.3-day outage, but its total is 8.1 hours.
1: https://mrshu.github.io/github-statuses/
[0] https://www.githubstatus.com/ [1] https://mrshu.github.io/github-statuses/
Personally my favorite is probably drone-ci.
It could very well be, that GitHub is not running on Azure yet.
Hosting Forgejo is really easy as well. Being a single binary makes it really easy to handle, with almost zero maintenance.
This one is including external LLM services as part of GitHub being “down”.
YOU NEED TO USE MOAR AI!
https://isolveproblems.substack.com/p/how-microsoft-vaporize...
HN thread: https://news.ycombinator.com/item?id=47616242
https://damrnelson.github.io/github-historical-uptime/
They’re making political decisions based on what they sell vs what’s actually useful for their use case.
It’s kind of impossible to find out if this is true though.
https://status-page.org/
Similarly, I see Google releasing advancement after advancement in LLMs, yet I see the Antigravity sub where people are crying all the time.
Or just a combination of both.
It does look like Friday outages were a bit rarer, which could be due to having a "no deployments on Friday" rule.
[0] https://news.ycombinator.com/item?id=22867803
Thank you, OP!