▲I like this more than I expected.
The intensity gradient is a nice touch too.
reply▲Weekends are the untapped frontier. Still room to scale.
reply▲Change is the biggest cause, then?
reply▲This is one of the most creative ideas I've seen this year. Tasteful and clever. Bravo!
reply▲I have to question whether this graph is even accurate.
> Across 170 days with at least one incident · worst day Thu, Nov 20, 2025 (1.1 days)
1.1 days total? How is that possible? Scrolling over that day doesn't reveal the math behind the scenes; there's a single bullet point of 1.3 hours.
Also, Nov 19 has a bullet point for a 1.3-day outage, but its total is 8.1 hours.
reply▲thenewnewguy 4 minutes ago
[-] I see a bullet point for "1.0 days of 1.3 days", and when I mouse over the previous day (Wednesday 2025-11-19), I see "7.8 hours of 1.3 days".
I haven't actually checked any sources to confirm there really was downtime on those days, but if we assume those numbers are true, 7.8 hours + 1 day is about 1.3 days.
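The arithmetic does check out. A quick sanity check, assuming the hover values are the portions of a single incident that fell on each day:

    # Hypothetical check: do the two per-day portions add up to the 1.3-day total?
    nov19_hours = 7.8               # "7.8 hours of 1.3 days" on Wed, Nov 19
    nov20_days = 1.0                # "1.0 days of 1.3 days" on Thu, Nov 20
    total_days = nov20_days + nov19_hours / 24
    print(total_days)               # -> 1.325, i.e. about the 1.3 days shown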
reply▲The missing status page [1] treats it as downtime any time any component of the system is down. It calculates overall uptime from the time that doesn't overlap with any individual category outage, and overall downtime as any time overlapping with at least one category outage, which avoids double-counting. They show 24h of minor outage on that date.
I'm guessing that this site takes the downtime in a given day across all services and adds it up, which would mean the worst possible day has 10 days of downtime (a day of downtime for each major category). A sketch of the two calculations follows the link below.
1: https://mrshu.github.io/github-statuses/
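If that's what's happening, the difference is just union-of-intervals versus sum-of-intervals. A minimal sketch with made-up outage windows (hours since midnight), not the site's actual data:

    # Hypothetical per-service outage windows on one day, as (start, end) hours.
    outages = [(2.0, 6.0),   # service A
               (4.0, 9.0),   # service B, overlaps A
               (4.0, 9.0)]   # service C, same window as B

    # Summing per service double-counts overlap: 4 + 5 + 5 = 14 hours,
    # which is how a single day could show more than 24 hours of "downtime".
    naive_sum = sum(end - start for start, end in outages)

    # Taking the union of the intervals avoids double-counting: 02:00-09:00 = 7 hours.
    union = 0.0
    current_start = current_end = None
    for start, end in sorted(outages):
        if current_end is None or start > current_end:
            if current_end is not None:
                union += current_end - current_start
            current_start, current_end = start, end
        else:
            current_end = max(current_end, end)
    if current_end is not None:
        union += current_end - current_start

    print(naive_sum, union)  # -> 14.0 7.0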
reply▲Far fewer outages during the weekends. Perfect, wasn't gonna do any work then anyway.
reply▲Funny to see this closely match contribution graphs with effectively no downtime on weekends.
reply▲The memes are really painful now. I feel for the team that's trying to survive underwater.
reply▲renegade-otter 42 minutes ago
[-] With management screaming down their necks:
YOU NEED TO USE MOAR AI!
reply▲Set up my self-hosted Forgejo last night. Very pleased so far.
reply▲Yeah, me too. I moved all my public projects to Codeberg and my internal repos to self-hosted Forgejo.
Hosting Forgejo is really easy as well. Being a single binary makes it really easy to handle, with almost zero maintenance.
reply▲debarshri 18 minutes ago
[-] Would be funny if you hosted it on GitHub Pages.
reply▲korrectional 53 minutes ago
[-] I don't really understand why this is happening at this scale; it's not like they suddenly went broke and can't afford a proper server... can someone explain?
reply▲Agents are shipping code faster all over the world, in some cases 24 hours a day. Additionally, a significant number of non-developers are now developers, i.e. they are also shipping to GitHub regularly.
This is not limited to pushing code: all the bells and whistles that GitHub added as features under the assumption of predictable growth are now exceeding the original plans.
I suspect a lot of their existing systems have to be re-architected for unanticipated scale, and that won't happen overnight, for sure.
reply▲Octoth0rpe 22 minutes ago
[-] Pretty damning. Would also be interesting to see the number of commits overlaid. The graph tells a great story about the correlation with MS's takeover, but I wonder whether, at the same time that uptime went to shit, MS was shifting large numbers of enterprise contracts over to GitHub. That would be a more complete story IMO.
None of which excuses this. Can you imagine someone's reaction in 2017 if you told them that GitHub would be below 90% uptime in 2026? It would have been unimaginable.
reply▲Whoa, if that is even remotely accurate then the talk about agents is a complete red herring.
reply▲theolivenbaum 20 minutes ago
[-] If I remember correctly, the status page was not precise before the acquisition, so take the 100% pre-acquisition values with a big grain of salt.
reply▲I suspect it’s because Microsoft is using buggy Microsoft tech instead of the original stack.
They’re making political decisions based on what they sell vs what’s actually useful for their use case.
It’s kind of impossible to find out whether this is true, though.
reply▲By their own words, they’re on track to 30x volume YoY.
reply▲See the previous days' articles. Agentic coding: going from 1B annual commits to an estimated 14B or more from one year to the next.
reply▲embedding-shape 49 minutes ago
[-] The faster you move, the more you screw up; almost no company producing software has figured out how to move fast and not screw up. It's so hard that companies even used to boast about how little they cared about screwing up, as long as they moved fast.
Add in new "productivity" tools that help you move even faster, with even less regard for how much you screw up (even though the tools could be used to move at the same speed, but with fewer screw-ups), and an engineering culture that boils down to "Why not?", and you get platforms run by Microsoft that are unable to achieve two nines of reliability.
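For reference, "two nines" means 99% uptime, which is already a pretty generous downtime budget. A back-of-the-envelope calculation (my own numbers, not from the chart):

    # Allowed downtime per year for N nines of uptime.
    for nines in (1, 2, 3, 4):
        uptime = 1 - 10 ** -nines          # 0.9, 0.99, 0.999, 0.9999
        downtime_hours = (1 - uptime) * 365 * 24
        print(f"{nines} nines = {uptime:.2%} uptime, "
              f"{downtime_hours:.1f} hours/year of downtime")
    # e.g. 2 nines = 99.00% uptime, 87.6 hours/year of downtime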
reply▲I wonder how well this correlates with Azure incidents, especially for the US regions.
reply▲I live in Europe. I've not noticed these constant outages. But I only use GitHub after work.
reply▲I also bet my money on Azure. Someone who allegedly worked there recently posted an article here on the numerous problems with Azure. Sadly I didn’t bookmark it.
reply▲Would be interesting to see if this correlated with their release cycles.
reply▲Well, outages seem to be distributed across all days except weekends. So it seems like people fucking around with stuff is a major factor.
reply▲Surely it just means more people working, resulting in more load, resulting in more outages?
reply▲pwagland 46 minutes ago
[-] Or even both. In any kind of continuous deployment, you'd expect outages at the point of deployment, or shortly thereafter as the unintended consequences ripple out.
Then the load during working days amplifies those ripples into outages.
reply▲embedding-shape 47 minutes ago
[-] Most outages are caused by changes made by humans ("actors"?). Very rarely is it "people just dig our stuff so much we can't keep up"; more often it's "we didn't think about this performance drawback when we built thing X, and now it's hurting us". And of course, there are more outages when you try to fix those issues without fully considering the scope and impact.
reply▲revolution88 24 minutes ago
[-] For the 30th of April, 2026, it shows it was down 1.0 days of 2.6 days (minor incident) :)
reply▲This design is perfect irony. I love it.
reply▲Gigacore 21 minutes ago
[-] It is funny how weekends are almost always up!
reply▲faangguyindia 40 minutes ago
[-] All these companies brag about being hyperscalers, yet they cannot scale GitHub.
Similarly, I see Google releasing advancement after advancement in LLMs, yet in the Antigravity sub I see people crying all the time.
reply▲airstrike 55 minutes ago
[-] Can you correlate this with data on # of commits, actions, etc.?
reply▲Two possible explanations: is the weekend sparseness load-based or GitHub-employee-based?
Or just some combination of both.
reply▲Didn't they blame "AI" for the increased load? I'm not sure why AI usage would be higher during the week than on the weekend, but it could be.
It does look like Friday outages were a bit rarer, which could be due to having a "no deployments on Friday" rule.
reply▲mirekrusin 56 minutes ago
[-] From the chart, it seems they should have a policy of deploying on weekends only.
reply▲ramon156 59 minutes ago
[-] Please tell me this makes sense:
This website has no overused AI-generated animations and... I quite enjoy it. The original website [1] has a fade-in animation, big rounded cards, shadows; all the jazz you can think of, it's there.
This site is very readable, very honest and sober. I don't need to sift through buzzwords to figure out tiny details.
Thank you, OP!
1: https://mrshu.github.io/github-statuses/
reply▲"Good job, Microsoft, amazing uptime."
reply▲Clearly their team needs more LLM usage.
reply