GitHub experiences various partial outages/degradations
131 points
3 hours ago
| 14 comments
| githubstatus.com
| HN
llama052
3 hours ago
[-]
Looks like Azure as a platform just broke VM scale operations, due to an ACL change on a storage account that hosts VM extensions. Wow... We noticed when GitHub Actions went down, then our self-hosted runners, because we can't scale anymore.

Information

Active - Virtual Machines and dependent services - Service management issues in multiple regions

Impact statement: As early as 19:46 UTC on 2 February 2026, we are aware of an ongoing issue causing customers to receive error notifications when performing service management operations - such as create, delete, update, scaling, start, stop - for Virtual Machines (VMs) across multiple regions. These issues are also causing impact to services with dependencies on these service management operations - including Azure Arc Enabled Servers, Azure Batch, Azure DevOps, Azure Load Testing, and GitHub. For details on the latter, please see https://www.githubstatus.com.

Current status: We have determined that these issues were caused by a recent configuration change that affected public access to certain Microsoft‑managed storage accounts, used to host extension packages. We are actively working on mitigation, including updating configuration to restore relevant access permissions. We have applied this update in one region so far, and are assessing the extent to which this mitigates customer issues. Our next update will be provided by 22:30 UTC, approximately 60 minutes from now.

https://azure.status.microsoft/en-us/status
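
For anyone who wants to sanity-check from their side whether public access to an extension-package storage endpoint is back, a rough probe like this works (the blob URL below is a placeholder, not the real Microsoft-managed account; a 403 is roughly what a blocked public ACL looks like):

    # Rough probe for public read access on a storage blob endpoint.
    # NOTE: example.blob.core.windows.net is a placeholder, not the real
    # Microsoft-managed extension account.
    import urllib.error
    import urllib.request

    EXTENSION_PKG_URL = "https://example.blob.core.windows.net/extensions/pkg.zip"

    def probe(url: str) -> str:
        req = urllib.request.Request(url, method="HEAD")
        try:
            with urllib.request.urlopen(req, timeout=10) as resp:
                return f"reachable (HTTP {resp.status})"
        except urllib.error.HTTPError as e:
            # A 403/404/409 here is consistent with public access being cut off
            return f"blocked or missing (HTTP {e.code})"
        except urllib.error.URLError as e:
            return f"network error ({e.reason})"

    print(probe(EXTENSION_PKG_URL))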

reply
bob1029
2 hours ago
[-]
They've always been terrible at VM ops. I never get weird quota limits and errors in other places. It's almost as if Amazon wants me to be a customer and Microsoft does not.
reply
dgxyz
1 hour ago
[-]
Amazon isn't much better there. Wait until you hit an EC2 quota limit and can't get anyone to look at it quickly (even under paid enterprise support) or they say no.

Also had a few instance types that won't spin up in some regions/AZs recently. I assume it's a capacity issue.
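
The workaround we've settled on is just retrying the launch across AZs when capacity errors come back; a rough boto3 sketch, with made-up AMI/instance/AZ values:

    # Retry an EC2 launch across AZs when capacity runs dry.
    # The AMI ID, instance type, and AZ list are placeholders.
    import boto3
    from botocore.exceptions import ClientError

    ec2 = boto3.client("ec2", region_name="us-east-1")

    def launch_with_fallback(ami, itype, azs):
        for az in azs:
            try:
                resp = ec2.run_instances(
                    ImageId=ami,
                    InstanceType=itype,
                    MinCount=1,
                    MaxCount=1,
                    Placement={"AvailabilityZone": az},
                )
                return resp["Instances"][0]["InstanceId"]
            except ClientError as e:
                if e.response["Error"]["Code"] == "InsufficientInstanceCapacity":
                    continue  # try the next AZ
                raise
        raise RuntimeError("no capacity in any AZ")

    # launch_with_fallback("ami-0123456789abcdef0", "m5.large",
    #                      ["us-east-1a", "us-east-1b", "us-east-1c"])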

reply
arcdigital
1 hour ago
[-]
Agreed...I've been waiting for months now to increase my quota for a specific Azure VM type by 20 cores. I get an email every two weeks saying my request is still backlogged because they don't have the physical hardware available. I haven't seen an issue like this with AWS before...
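
While the request sits in the queue, you can at least watch current usage against the limit so you know exactly how squeezed you are; a rough sketch shelling out to the Azure CLI (the location is an example, and it assumes az is installed and logged in):

    # Dump current vCPU usage vs. quota for a region via the Azure CLI.
    import json
    import subprocess

    def vm_quota(location="eastus"):
        out = subprocess.run(
            ["az", "vm", "list-usage", "--location", location, "-o", "json"],
            capture_output=True, text=True, check=True,
        ).stdout
        for item in json.loads(out):
            name = item["name"]["localizedValue"]
            print(f"{name}: {item['currentValue']}/{item['limit']}")

    vm_quota()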
reply
llama052
1 hour ago
[-]
We've run into that issue as well and ended up having to move regions entirely, because nothing was changing in the current region. I believe it was westus1 at the time. It's a ton of fun to migrate everything over!

That was years ago; wild to see they have the same issues.

reply
llama052
1 hour ago
[-]
It's awful. Any other Azure service that relies on those core systems seems to inherit the same issues; I feel for those internal teams.

Ran into an issue upgrading an AKS cluster last week. It completely stalled and broke the entire cluster in a way that left our hands tied, since we can't see the control plane at all...

I submitted a Severity A ticket, and 5 hours later I was told there was a known issue with the latest VM image that would break the control plane, leaving any cluster updated in that window to essentially kill itself and require manual intervention. Did they notify anyone? Nope. Did they stop anyone from killing their own clusters? Nope.

It seems like every time I'm forced to touch the Azure environment I'm basically playing Russian roulette hoping that something's not broken on the backend.
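
These days I at least run a quick state check on the cluster and node pools before touching an upgrade; a rough sketch via the Azure CLI (resource group and cluster name are placeholders, and of course it can't catch an unannounced bad image, which is the whole problem):

    # Pre-upgrade sanity check for an AKS cluster via the Azure CLI.
    # Resource group and cluster name are placeholders.
    import json
    import subprocess

    RG, CLUSTER = "my-rg", "my-aks"

    def az_json(*args):
        out = subprocess.run(["az", *args, "-o", "json"],
                             capture_output=True, text=True, check=True).stdout
        return json.loads(out)

    state = az_json("aks", "show", "-g", RG, "-n", CLUSTER,
                    "--query", "provisioningState")
    pools = az_json("aks", "nodepool", "list", "-g", RG, "--cluster-name", CLUSTER)

    print("provisioningState:", state)
    for p in pools:
        print(p["name"], p["nodeImageVersion"], p["orchestratorVersion"])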

reply
flykespice
1 hour ago
[-]
Their AI probably hallucinated the configuration change
reply
guywithabike
2 hours ago
[-]
It's notable that they blame "our upstream provider" when it's quite literally the same company. I can't imagine GitHub engineers are very happy about the forced migration to Azure.
reply
madeofpalk
1 hour ago
[-]
I would imagine the majority of GitHub engineers there currently joined after the MS acquisition.
reply
fbnszb
2 hours ago
[-]
As an isolated event, this is not great, but when you see the stagnation (if not downward trajectory) of GitHub as a whole, it's even worse in my opinion.

edit: Before someone says something: I do understand that the underlying issue is with Azure.

reply
llama052
2 hours ago
[-]
Sadly, GitHub moving further into Azure will expose the fragility of the cloud platform as a whole. We've been working around these rough edges for years. Maybe it will make someone wake up, but I don't think they have any motivation to.
reply
estimator7292
57 minutes ago
[-]
It really doesn't even matter why it failed. Shifting the blame onto Azure doesn't change the fact that GitHub is becoming more and more unreliable.

I don't get how Microsoft views this level of service as acceptable.

reply
cluckindan
2 hours ago
[-]
> Azure

Which is again even worse.

reply
maddmann
2 hours ago
[-]
This is why I come to Hacker News: a sanity check on why my jobs are failing.
reply
nialv7
1 hour ago
[-]
better luck with your next job :)
reply
bhouston
2 hours ago
[-]
Exactly the same reason I posted. My GitHub Actions jobs were not being picked up.
reply
fishgoesblub
1 hour ago
[-]
Getting the monthly GitHub outage out of the way early, good work.
reply
spooneybarger
1 hour ago
[-]
well played sir. well played.
reply
booi
3 hours ago
[-]
Copilot being down probably increased code quality
reply
falloutx
2 hours ago
[-]
50% of code written by AI, now let the AI handle this outage.
reply
anematode
2 hours ago
[-]
Catch-22, the AI runs on Azure...
reply
maddmann
2 hours ago
[-]
AI deploys itself to AWS, saving GitHub but destroying Microsoft's cloud business. Full circle.
reply
levkk
2 hours ago
[-]
This happens routinely every other Monday or so.
reply
locao
1 hour ago
[-]
I was going to joke "so, it's Monday, right?" but I thought my memory was playing tricks on me.
reply
focusgroup0
1 hour ago
[-]
Will paid users be credited for the wasted Actions minutes?
reply
suriya-ganesh
2 hours ago
[-]
It is always a config problem, somewhere, someplace in the mess of permissioning issues.
reply
jmclnx
3 hours ago
[-]
With LinkedIn down, I wonder if this is an Azure thing? IIRC GitHub is being moved to Azure; maybe the Azure piece was partially enabled?
reply
CubsFan1060
3 hours ago
[-]
It is: https://azure.status.microsoft/en-us/status

"Impact statement: As early as 19:46 UTC on 2 February 2026, we are aware of an ongoing issue causing customers to receive error notifications when performing service management operations - such as create, delete, update, scaling, start, stop - for Virtual Machines (VMs) across multiple regions. These issues are also causing impact to services with dependencies on these service management operations - including Azure Arc Enabled Servers, Azure Batch, Azure DevOps, Azure Load Testing, and GitHub. For details on the latter, please see https://www.githubstatus.com."

reply
re-thc
2 hours ago
[-]
Jobs get stuck. Minutes are being consumed. The problem isn't just that it's unavailable.
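
If you're burning minutes on runs that will never get picked up, you can bulk-cancel the queued ones through the REST API; a rough sketch with a placeholder owner/repo and a token taken from GITHUB_TOKEN:

    # Cancel queued Actions runs so they stop eating minutes.
    # OWNER/REPO are placeholders; the token comes from the environment.
    import json
    import os
    import urllib.request

    OWNER, REPO = "my-org", "my-repo"
    API = f"https://api.github.com/repos/{OWNER}/{REPO}/actions/runs"
    HEADERS = {
        "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
        "Accept": "application/vnd.github+json",
    }

    def gh(url, method="GET"):
        req = urllib.request.Request(url, headers=HEADERS, method=method)
        with urllib.request.urlopen(req) as resp:
            body = resp.read()
            return json.loads(body) if body else {}

    for run in gh(f"{API}?status=queued")["workflow_runs"]:
        print("cancelling", run["id"], run["name"])
        gh(f"{API}/{run['id']}/cancel", method="POST")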
reply
rvz
2 hours ago
[-]
Tay.ai and Zoe AI agents are probably running infra operations at GitHub, still arguing about how to deploy to production without hallucinating a config file and shipping a broken fix to address the issue.

Since there is no GitHub CEO (Satya is not bothered anymore) and human employees aren't looking, Tay and Zoe are at the helm, ruining GitHub with their broken AI-generated fixes.

reply
anematode
21 minutes ago
[-]
Hey, let them cook.
reply
ChrisArchitect
3 hours ago
[-]
reply