This is the only way anything will ever change. GitHub is _easily_ the most unreliable SaaS product. Not a week goes by without us being affected by an outage. Their reputation is mud.
Some of us are stuck using Atlassian and Bitbucket, and it is far worse in every way.
Looking at PRs in GitHub and then toggling to the "Files" tab, only to have it choke or say "I don't want to display this file because it's more than 100 lines", is like: wtf, your whole point is to show me modified files.
Come to think of it, this might be great advice for life in general: do one thing very well, and be modular (aka play well with others)
I don't think "do one thing well" can succeed in this world, which is why Atlassian, Dropbox, etc. keep launching things like office suites even though that makes no sense considering their core competencies. It's the only way not to be steamrolled by FAANG.
I think modern wait times are crushing for productivity; they're really demotivating and wear you down. Either you skip to another task and get overwhelmed by context switches, or you wait and degenerate into a non-thinking troglodyte.
GitHub's problem is that it isn't an SPA. It is a massive Ruby on Rails project that is all server-rendered. Everything you do has to be synchronous, and almost everything requires a reload. A React or Angular app built with great restraint would be dramatically faster at all of this, since viewing a file becomes just an API call, not a page reload. As it stands their hands are tied: loading large data would delay the whole page load, hence the silly limits.
Many things should not be webapps... but an app on the web like this... probably should.
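To make the "viewing a file is just an API call" point concrete, here is a minimal sketch (not anything GitHub's frontend actually does, just an illustration): GitHub's public REST contents endpoint returns file data as JSON, so a client could fetch and render a file without a full page reload. The repo and path below are placeholders.

```python
import base64
import json
import urllib.request

# Illustrative only: fetch one file's contents via GitHub's REST API.
# Owner/repo/path are placeholders, not anything from this thread.
url = "https://api.github.com/repos/octocat/Hello-World/contents/README"

with urllib.request.urlopen(url) as resp:
    payload = json.load(resp)

# File bodies come back base64-encoded in the "content" field.
text = base64.b64decode(payload["content"]).decode("utf-8")
print(text)
```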
This is pretty incorrect; you may want to look into the concept of "partials" in SSR. Maybe you meant that everything requires a roundtrip? But an SPA would not remove most of the roundtrips GitHub needs, given that many interactions in the app require authn/authz checks.
Would you care to go into more detail?
Also, 'old' GitHub was known to be very fast and reliable, and was indeed a Ruby on Rails SSR app. A few years ago GitHub started to introduce React and more client-side logic, and that correlates with more issues and more slowness in the frontend. It only correlates, but still.
There is no excuse for possibly the most used feature of Github to suck so badly.
At least it wasn't Perforce?
The last game I worked on was like 80gb built. The perforce depot was many terabytes large, not something you want to have on every person's workstation. Games companies use Perforce for a very good reason.
I am very much not in this industry, but it seems to me that if the assets and code depend on one another, you'd want to keep them together.
I just approved a PR which added a user to one of our AWS accounts, for example. If GitHub is down, that PR can't be approved, the update can't go out, and the user can't access that account.
Most of the problems I hear about are micromanaging product managers. That's not the fault of the tool itself, per se.
I hope I never again have to explain to someone that you can’t just “restore the code from the weekly database backup” because the code lives in the file system, a fact they had only just absorbed by osmosis.
Probably not worth it for low cost services, but if you’re paying GitHub $x millions per year, maybe it is.
It's great that your specific product does this, but as a whole I still have to monitor the service separately to keep you honest (well, not you specifically; I'm sure you're honest and do as much as you can to be, but not every company is), and of course to monitor the problems I have that you don't detect.
I'm no fan of Microsoft either but when you say ridiculous things it's hard to take you seriously.
Sure it's less impactful than some services since you don't need access to the website at all times, but the reliability is still really bad.
Well there's Claude for starters.
If companies begin to _cancel_ their contracts with MSFT/GH because of a breach of SLAs, then maybe conditions will improve.
Reality: companies are locked into multi-year deals with MSFT, including the MSFT shit-suite and Windows licenses.
Migrating away from it will be expensive. MS knows this, which is why nothing will change.
The way that you change this is by switching to a different forge host, or by self-hosting Gitea.
I do so, and it’s simple and painless and cheap, and this quarter my uptime is better than this multinational’s.
You all should be running a small private git server.
Most teams could get by with something like a tiny instance with snapshots on every commit; a rough sketch is below.
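A minimal sketch of what that can look like, assuming you already have SSH access to some box (the hostname and paths below are made-up placeholders): a "small private git" can literally just be a bare repository you push to over SSH.

```python
import subprocess

# Placeholders: any box you control with git and sshd installed.
SERVER = "git.internal"
REMOTE_PATH = "/srv/git/myproject.git"

# Create the bare repository on the server.
subprocess.run(
    ["ssh", SERVER, "git", "init", "--bare", REMOTE_PATH],
    check=True,
)

# Point the local clone at it and push everything.
subprocess.run(
    ["git", "remote", "add", "internal", f"ssh://{SERVER}{REMOTE_PATH}"],
    check=True,
)
subprocess.run(["git", "push", "internal", "--all"], check=True)
```

Snapshots of that instance (or of /srv/git) then cover the per-commit backup part mentioned above.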
AWS has too much skin in the game to be as unreliable as they used to be.
There's zero reason for a startup to use all these services anymore. The only reason they ever existed was big-government manipulation of the labor market through ZIRP.
It's far more set-and-forget to self-host git than GitHub will ever be.
Still sounds like good advice though.
For the longest time, I thought that there was absolutely no way for some of these cornerstone companies (slash tech) to be toppled. And I’m very impressed with their ability to destroy consumer trust!
At that scale, the price of the IPv4 address is the biggest cost of the VPS.
Here is a list of providers I created back in 2022.
It would be the same as with home NAT. Your device can create TCP connections outbound but can't listen/accept.
It would solve the problem of not being able to communicate with other IPv4 servers, but it prevents you from hosting your own.
https://aws.amazon.com/blogs/aws/new-aws-public-ipv4-address...
Oh, you’re wondering about the air quotes?
Don’t worry about it! Sales told my boss that that feature checkbox has a “tick”.
What is wrong/missing?
… the quick and dirty bullet points are:
- Enabling IPv6 in one virtual network could break managed PaaS services in other peered networks.
- Up until very recently none of the PaaS services could be configured with IPv6 firewall rules.
- Most core managed network components were IPv4 only. Firewalls, gateways, VPNs, etc… support is still spotty.
- They NAT IPv6, which is just gibbering eldritch madness.
- IPv6 addresses are handed out in tiny pools of 16 addresses at a time. No, not a /16 or anything like that.
Etc…
The IPv6 networking in Azure feels like it was implemented by offshore contractors who did as they were told and never stopped to think whether any of it made sense.
References:
- Inbound IPv6 support for App Service was added this week. https://azure.microsoft.com/en-au/updates/?id=499998
- Outbound IPv6 support is "Preview": https://learn.microsoft.com/en-us/azure/app-service/overview...
- Public IP Prefixes support a maximum of 16 consecutive addresses even for IPv6: https://learn.microsoft.com/en-us/azure/virtual-network/ip-s...
- There's an entire page of IPv6 limitations. To understand how nuts this is, just swap IPv6<->IPv4 and see if it still reads like a professional service you'd pay money for! https://learn.microsoft.com/en-us/azure/virtual-network/ip-s...
- You STILL can't use PostgreSQL with IPv6: "Even if the subnet for the Postgres Flexible Server doesn't have any IPv6 addresses assigned, it cannot be deployed if there are IPv6 addresses in the VNet." -- that's just bonkers.
- Just... oh my god:
"Azure Virtual WAN currently supports IPv4 traffic only."
"Azure Route Server currently supports IPv4 traffic only."
"Azure Firewall doesn't currently support IPv6"
"You can't add IPv6 ranges to a virtual network that has existing resource in use."
Yeah! I'm out.
What a complete lack of competence!
But be aware that if you intend to host it, you will need to protect it from the recent wave of AI web scrapers. I use Anubis, but there are other alternatives.
It drives me crazy how slow it is. A lot of operations take minutes. E.g.: I push commits to a branch and open an MR; for 2-3 minutes, the MR will indicate that no changes were found. When I push new changes, it can also take minutes to update, so I can’t quickly check that it all looks correct.
The latest release changed their issues UI, so when you try to open an issue, it’s opened in a floating panel on the right 30% of the screen. I’ve no idea what exotic use case this addresses, but when I click a link, just open it. The browser does it fine. No need to reinvent navigation like this. Now to open an issue, I have to wait for this slow floating UI to load before I can _actually_ navigate to the page. Which will also be extremely slow.
Don’t even get me started on the UI. Buttons are hidden all over the place. Obvious links are behind obscure abstract menus. At this point, I remember where all the basic stuff is, but I can understand why newcomers struggle so much.
Hosting GitLab is also really resource intensive. For a small team of 2-3 people, I don’t think you can get away with “just” 8GB of ram.
---
I do have to admit, GitLab CI is pretty good, assuming that you’re fine with just Docker support and don’t need to support BSD or some exotic platforms.
Having self-hosted Gitea after considering GitLab, I can also say that Gitea's resource consumption is a tiny fraction of GitLab's. I don't get the impression that their employees care about self-hosters except as a gateway for enterprise sysadmins to get it running quickly before doing some big installation.
Codeberg devs have to disable some features (pull mirrors, e.g.; only push is allowed, to prevent abuse) and they run some custom code (abuse mitigation: spam, etc.), but in general you're getting a "test drive" of the latest Forgejo experience, which only gets better when self-hosting, since you can use all the features.
GitLab is great, I really do enjoy working with it. I hate running it.
The real issue with Gitea/Forgejo compared to GitLab is their terrible CI, which is (to some approximation) a clone of GitHub Actions, itself a dumpster fire for those of us proficient with (or preferring) the UNIX command line. You'll probably need a separate CI runner, like Woodpecker or Drone.
Godspeed, IR workers...
I don't think I've really been impacted by any of the outages. Maybe I wait an extra hour to merge a feature or something, in which case I actually get to eat lunch and browse HN. It doesn't feel quite as catastrophic for me as it does for some of you.
> can't even push code hotfixes to production without it. It's a terrible SPOF
GitHub's availability impact is the least of my concerns these days. It'll be a really tough year for society worldwide if we need to rebuild loads of infrastructure after some threat actor gets into GitHub and manages to change key pieces of code without being detected for a couple of years. Having seen how hospitals handle updates, they might get lucky and be old enough not to be affected yet, or have a really tough time recovering due to understaffed IT.
No clue how to even begin solving this, since our OSes are likely all pulling dependencies from GitHub without verification of the developer's PGP key, if the project even has one and applies it correctly. I guess I can only recommend being aware of the problem and doing what you can in your own organization to reduce the impact.
Nobody is preventing devs from just setting up a second "upstream" and pushing to both GitHub and GitLab (for example), or any other service, at the same time.
Maybe everyone here is just using it as an excuse to chatter about forges or about GitHub being down too much, and it has no real impact. But if it does, and people are honestly fretting, they can mirror their repos; a rough sketch of that setup is below. Then no one needs to worry that much (except for their cursed CI setups) the next time it happens.
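For what it's worth, a minimal sketch of the "second upstream" idea, assuming the mirror repo already exists on the other host (the URLs are placeholders): one remote with two push URLs, so a plain push lands on both.

```python
import subprocess

# Placeholder URLs; substitute your own repos on each host.
GITHUB_URL = "git@github.com:example/project.git"
GITLAB_URL = "git@gitlab.com:example/project.git"

def run(*args: str) -> None:
    subprocess.run(["git", *args], check=True)

# The first --push URL replaces the default; the second is added alongside it.
run("remote", "set-url", "--push", "origin", GITHUB_URL)
run("remote", "set-url", "--add", "--push", "origin", GITLAB_URL)

# From now on a plain push goes to both mirrors.
run("push", "origin", "--all")
```

This is stock git (`remote set-url --add --push`); no extra tooling or hosting features required.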
And that’s a benefit of peer-to-peer repos right there.
Git is in a good position to support this.
It seems like there are some teams that have figured out a way to turn GH into a labyrinth of CI/CD actions that allegedly produces a ton of business value, but I don't know how typical this kind of use case is. If the average GH user just needs the essentials, I could easily focus those verticals down and spend more time on things like SAML/OIDC, reporting APIs, performance, etc. I suspect there aren't a whole lot of developers who are finding things like first party AI integration to be indispensable.
I like GH Actions myself, though sometimes it can get a little cumbersome with self-hosted workers on private/enterprise projects. I'm a big fan of automation and like to keep related bits closer together. As opposed to alternatives that have completely detached workflows/deployments.
Codeberg is a cloud site for open source projects that runs Forgejo.
Forgejo is a fork of Gitea, which is another option, especially if you want commercial support, but I haven't tried it yet.
I also kinda like GitLab, both the cloud one and the enterprise on-prem version. And their issue label features work more easily with the board than Forgejo's (automatically moving issues between columns based on scoped labels). Though their pricing tiers have been unfortunate at times (I don't know latest).
I wonder if self-hosting is more reliable. How often does a private, firewalled git* instance actually need updates?
If you want to self-host for more features (CI/CD, PRs, etc.) there's GitLab, Gitea, and forgejo that I'm aware of. I think GitLab is a bit heavy duty for most self-hosting usage myself though. I actually appreciate the online/cloud and commercial options.
When you self-host, it becomes your job to fix it when it breaks.
I'm finding myself liking and using gitlab more and more when I come back to it every 6-8 months.
I don't know how I'd be able to trust only one cloud for my source code and devops/CI/CD. At minimum, I'd want a mirrored setup in a private or hybrid cloud as a failover, one that isn't with the same cloud provider.
At the very least a few backups and mirrors running once I get them syncing.
If you want to self-host a code forge... start here: https://forgejo.org/docs/latest/admin/upgrade
I don't get why people use those.
I understand free software projects that don't want to run any infrastructure, but why do companies push their building and deployment off-premises when all you need is one trusted computer somewhere? Why do people insist on trusting cloud computers more than the ones they can kick?
Git was originally local-only too. People would run their own source code repos; it was trivial to run and maintain for most basic to intermediate use cases.
I had clients who insisted source code (mine or theirs) couldn't be on a public cloud provider. It's not that unreasonable or uncommon.
If I were to set up the same thing again today, I'd add automation to keep it in sync with GitHub, and I'd set up the servers so that they'd attempt to pull from both on every deploy; something like the sketch below.
I share this as an idea for those who need an emergency or backup deploy mechanism.
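A rough sketch of the deploy-side half, assuming two already-configured remotes (the names here are placeholders): try the primary first and fall back to the mirror if it's unreachable.

```python
import subprocess

# Placeholder remote names, e.g. GitHub first, self-hosted mirror second.
REMOTES = ["origin", "mirror"]

def fetch_from_any(remotes: list[str]) -> str:
    """Fetch from the first reachable remote and return its name."""
    for remote in remotes:
        try:
            subprocess.run(["git", "fetch", remote], check=True, timeout=60)
            return remote
        except (subprocess.CalledProcessError, subprocess.TimeoutExpired):
            continue  # remote down or unreachable; try the next one
    raise RuntimeError("no reachable git remote; aborting deploy")

used = fetch_from_any(REMOTES)
print(f"deploying from {used}")
```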
https://news.ycombinator.com/item?id=44874945
I'm not affiliated with Radicle in any way. I am just impressed with the technology.
This is just perfect comedic timing
IMO mixing a dynamically typed language, a framework based on magic conventions and vibe coding sounds like the perfect recipe for a disaster.
I'm mixed on the cultural changes at MS and have historically preferred GH's approach. I'm hoping MS moves closer to GH than the reverse. I'm not working with/at either company and don't really have a lot of insights to offer other than observations from the outside.
Yeah, he can give whatever title he wants to his subordinates, but the "CEO" of GitHub has been a mid-level VP for quite some time.
If one person can move a company off of GitHub in a weekend, you're too small a company for Microsoft to care about.
When are the AI vibe coders going to create a GitHub replacement? With 1000x AI productivity a lean startup should easily outcompete the incumbents, no?
Social features via atprotocol, native Jujutsu support, stacked PRs, interdiffs, lightweight to self-host, Nix-based CI.
What's the consensus on gitlab?
I'm not sure that I would choose it for self-hosting over Gitea, Forgejo, or straight-up ssh+git on a remote system, which works well enough for a personal backup target.
Unless you have some sort of decentralized method of doing CI/CD and pull requests I'm unaware of?