For the past few days I've been playing with Forgejo (from the Codeberg people). It is fantastic.
The biggest difference is memory usage. GitLab is Ruby on Rails plus over a dozen services (GitLab itself, then nginx, postgres, prometheus, etc.). Forgejo is written in Go and ships as a single binary.
I have been running GitLab for several years (for my own personal use only!) and it slowly but reliably creeps up to consume the entirety of the RAM on a 16GB VM. I have only been playing with Forgejo for a few days, but it is using only 300MB of the 8GB of RAM I allocated, and that machine is running both the server and a runner (it is idle, but still...).
I'm really excited about Forgejo and dumping GitLab. The biggest difference I can see is that Forgejo does not have GraphQL support, but the REST API seems, at first glance, to be fine.
EDIT: I don't really understand the difference between Gitea and Forgejo. Can anyone explain? I see lots of directories inside the Forgejo volume when I run it under podman that clearly indicate the two are the same under the hood in many ways.
EDIT 2: Looks like Forgejo is a soft fork created in 2022, when some strange things happened with the governance of the Gitea project: https://forgejo.org/compare-to-gitea/#why-was-forgejo-create...
Our product studio, currently around 50 users who need daily git access, moved to a self-hosted Forgejo nearly 2 years ago.
I really can't overstate the positive effects of this transition. Forgejo is a really straightforward Go service with a very manageable mental model for storage and config. It's been easy and cheap to host and maintain, our team has contributed multiple bugfixes and improvements, and we've built a lot of internal tooling around Forgejo which otherwise would have required a much more elaborate (and slow) integration with GitHub.
Our main instance is hosted on premise, so even in the extremely rare event of our internet connection going offline, our development and CI workflows remain unaffected (Forgejo is also a registry/store for most package managers so we also cache our dependencies and docker images).
Speaking of authentication, it also works as an OpenID provider, meaning every other web application that supports OpenID can authenticate against Forgejo... which in turn can look up users in other sources.
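In case anyone wants to wire this up: Forgejo (like Gitea before it) exposes the standard OAuth2/OIDC endpoints, so a typical client configuration looks roughly like this (git.example.com is a placeholder; confirm the exact paths against your instance's discovery document):

```
issuer:            https://git.example.com/
authorization URL: https://git.example.com/login/oauth/authorize
token URL:         https://git.example.com/login/oauth/access_token
discovery:         https://git.example.com/.well-known/openid-configuration
```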
It also has wikis.
It's an underrated piece of software that uses a ridiculously small amount of computing resources.
Backups are handled by zfs snapshots (like every other server).
We've also had at least 10× lower downtime compared to github over the same period of time, and whatever downtime we had was planned and always in the middle of the night. Always funny reading claims here that github has much better uptime than anything self-hosted from people who don't know any better. I usually don't even bother responding anymore.
Like others, I've switched to Gitea, but whenever I do visit GitLab I can't help but think its design/UX is so much nicer.
Gitea refused to do some perfectly sensible action (I think it had something to do with creating a fork of my own repo). Looking online, there's zero technical reason for this, and the explanation given was "this is how GitHub does things". Immediately uninstalled. I'm not here for this level of disrespect.
ssh example.com 'mkdir repos/my-proj.git && cd repos/my-proj.git && git init --bare .'
Then I just set my remote origin URL to example.com:repos/my-proj.git. The filesystem on example.com is backed up daily. Since I do not need to send myself pull requests for personal projects, and I track my own TODOs and issues via TODO.md, what exactly am I missing? I have been using GitHub for open source projects and work for years, but for projects where I am the only author, why would I need a UI besides git and my code editor of choice?
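For completeness, the whole round trip is just this (example.com and the repos/ path follow the comment above; substitute your own host and branch name):

```shell
# one-time: create a bare repository on the server
ssh example.com 'mkdir -p repos/my-proj.git && git init --bare repos/my-proj.git'

# locally: point an existing repository at it and push
git remote add origin example.com:repos/my-proj.git
git push -u origin main
```

After that, a plain `git push`/`git pull` over SSH is the entire forge.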
-> convenience, collaboration, mobility
Some forges even include(d) instant messaging!
If you ever find yourself wishing for a web UI as well, there's cgit[1]. It's what kernel.org uses[2].
[1]: https://git.zx2c4.com/cgit/ [2]: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/lin...
It's a shame that GitHub won the CI race by sheer force of popularity, propagating its questionable design decisions. I wish more VCS platforms would base their CI systems on GitLab's, which is much, much better than GitHub Actions.
Shared filesystems are all you need to scale/replicate it, and they also make the backup process quite simple.
I'm a little put off by the Google connection, but it seems like it could run rather independently.
It's not focused on all the other stuff that people think of in code forges (hosting the README in a pretty way, arbitrary page hosting, wiki, bug tracking, etc.) but can be integrated with 3rd party implementations of those fairly trivially.
Even browsing the git repos it hosts uses an embedded version of another tool (Gitiles).
https://gerrithub.io/ is a public instance
Having done CI integrations with both: Gerrit's APIs send pre- and post-merge events through the same channel, instead of needing multiple separate listeners as GitHub does.
Self-hosting still leaves a single point of failure, and as for the article's "mirroring" argument... well, that gives you redundancy for reads, but what about writes?
It's an interesting take on a purist problem.
How do people find your online project and know it's you (instead of an impersonator) without relying on an authority, like GitHub accounts or domain names? It is a challenging problem with no good solution. At least now the project is alive again and more resilient than before.
What I did:
* used a custom docker image on my own registry domain with hugo/nodejs and my custom zig app
* no problems
* store artifacts
* required pinning the artifact action to v3 instead of v4 (uses: actions/upload-artifact@v3)
* An example of the subtle differences from GitHub Actions, but IMHO this is a step forward, because GitLab CI YAML is totally different
* can't browse the artifacts like I can on GitLab; it only allows downloading the zip. Not a big deal, but it's nice to be able to verify without littering my Downloads folder.
* Unable to use "forgejo-runner exec" which I use extensively to test whether a workflow is correct before pushing
* Strange error: "Error: Open(/home/runner/.cache/actcache/bolt.db): timeout"
* I think GitLab broke this feature recently as well!
* Getting the runner to work with podman and as a service was a little tricky (but now works)
* Mostly because of the way the docker socket is not created by default on podman
* And the docker_host path is different inside the runner config file.
* There are two config files: one (JSON) is always stored in .runner and contains the auth information and IP; the other is YAML, requires the -c switch to specify it, and holds the runner's configuration (docker options, etc.). It's a bit strange there are two files IMHO.

This will occur if you have a `forgejo-runner daemon` running while you try to use `exec` -- both are trying to open the cache database, and only the first to open it can operate. You can avoid this by changing the daemon's cache directory via `cache.dir` in the config file, or by running the two processes as different users.
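For reference, a minimal sketch of that override in the runner's YAML config (the section name follows the runner's documented config format; the filename and path are placeholders):

```yaml
# config-exec.yml, passed as: forgejo-runner exec -c config-exec.yml
cache:
  enabled: true
  # give exec its own cache dir so it doesn't fight the daemon over bolt.db
  dir: /home/runner/.cache/actcache-exec
```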
> It's a bit strange there are two files IMHO.
The `.runner` file isn't a config file, it's a state file -- not intended for user editing. But yes, it's a bit odd.
I agree with the sentiment, but want to point out that email can be used to turn push into pull, by auto-filtering the respective email notifications into a separate dedicated email folder, which you can choose to only look at when you want.
This is a solution in search of a problem.
The hacker spirit alive and well.
Someone kind enough to explain this to me? What's the difference between push model and pull model? What about push model makes it difficult to work offline?
When you're working with e-mails, you sync the relevant IMAP mailbox locally, pulling all the proposed patches down with it; hence the pull model.
Then you can work through the proposed changes offline, handle them in your local copy, and push the merged changes back online.
It still needs work to match the capabilities of most source forges, but for small closed teams it already works very well.
Like, are they calling the "GitHub pull request" workflow the push model? What is "push" about it, though? I can download all the pull request patches locally and work offline, can't I?
I don't know of a way to download the pull request as a set of patches and work offline; you have to open a branch, merge the PR into that branch, test things, and then merge that branch into the relevant one.
Or you have to download the forked repository, run your tests to see whether the change is relevant/stable and whatnot, and if it works, you can then merge the PR.
---
edit: Looks like you can get the PR as a patch or diff, and it is trivial, but you have to be online again to get it that way. So getting the mails from your mailbox is not enough; you have to fetch every PR as a diff, with a tool or manually, and then organize them. E-mails are a much more unified and simple way to handle all this.
---
In either case, reviewing the changes is not possible when you're offline, plus the pings from PRs are distracting if your project is popular.
Random PR example, https://github.com/microsoft/vscode/pull/280106 has a diff at https://github.com/microsoft/vscode/pull/280106.diff
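So an offline review loop is possible at the cost of one fetch per PR while you're still online. A sketch using the PR from the example above (the .patch URL suffix and `git am` are standard; the branch name is arbitrary):

```shell
# while online: grab the PR as mbox-formatted patches
curl -sL -o pr-280106.patch https://github.com/microsoft/vscode/pull/280106.patch

# later, offline: each commit in the PR becomes a local commit to review
git switch -c review/pr-280106
git am pr-280106.patch
```

`git am` preserves the PR authors' commits, so attribution survives a later push.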
Another thing that surprises some people is that GitHub's forks are actually just "magic" branches, i.e. the commits on a fork exist in the original repo: https://github.com/microsoft/vscode/commit/8fc3d909ad0f90561...
Discoverability in UX seems to have completely died.
It's yet another brick in the wall of the walled garden. It's left there for now, but for how long?
IOW, it's deliberate. Plus, GitHub omits trivial features (e.g. deleting projects, the "add review" button, etc.) while porting their UI.
It feels like they don't care anymore.
Maybe another script to download them all at once and apply each diff to its own branch automatically.
Almost everything about git and github/gitlab/etc. can be scripted. You don't have to do anything on their website if you're willing to pipe some text around the old way.
> Almost everything about git and github/gitlab/etc. can be scripted.
Moving away from GitHub is more philosophical than technical at this point. I also left the site the day they took Copilot to production.
This is due to increasing competition in the source forge space. It’s good that different niches can be served by their preferred choice, even if it will be less convenient for devs who want to contribute a patch on a more obscure platform.
The solutions on the roadmap are not centralized like GitHub. There is a real initiative to promote federation, so we would not need to rely on one entity.
Hah, exactly what we’re attempting with Tangled! Some big announcements to come fairly soon. We’re positioning ourselves to be the next social collab platform—focused solely on indies & communities.
How would you rate email's candidacy here?
At least GitHub adds new features over time.
GitLab has been removing features in favor of more expensive plans, even after explicitly saying they wouldn't do so.
Horses for courses I guess ¯\_(ツ)_/¯
Not as quickly as they add anti-features, imho.
1. Oh! It's "d.i.l.l.o."! I misread that as something else.
2. After reading many comments in this thread, I must admit I am stupefied at the sheer amount of stuff that can go into merely setting up and maintaining a version control system for a project.
3. I have cited every one of the same problems OP enumerates as my argument for switching new projects over to self-hosted Fossil. It also helps a good bit with #2 above when you're a small organization and you're the sole software engineer, sysadmin, and tier >1 support. It's a much simpler VCS that's closer to using Perforce in my experience. YMMV, but it's the kind of VCS that doesn't qualify as a skill on a resume.
4. I also find GH deploy keys frustrating because I can't use the same key for multiple repositories. I have 3 separate applications that each run on 4 machines in my cluster, so I have to configure 12 separate deploy keys on GitHub and in my ~/.ssh/config file.
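Concretely, the workaround is one host alias per key, since a GitHub deploy key can only be attached to a single repository (all names below are placeholders):

```
# ~/.ssh/config
Host github-app1
    HostName github.com
    User git
    IdentityFile ~/.ssh/deploy_app1
    IdentitiesOnly yes
```

The remote for that app then becomes `github-app1:your-org/app1.git`, repeated once per app/key pair.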
The best reason right here.
I've taken to loading projects in github.dev for navigating repos so I pay the js tax just once and it's fine for code reading. But navigating PRs and actions is terrible.
I love this. I used to be a big fan of Linear (because the alternatives were dog water), but this also raises the question: why even have a separate, disconnected tool?
Most of my personal projects have a TODO.md somewhere with a list of things I need to work on. If people really need a frontend for bugs, it wouldn't take more than rendering that markdown on the web.
Well, if your bugs can be specified clearly in plain text and plain text only, then yeah, I'd also advocate for this approach. Unfortunately, that's not really the case in any bigger software project. I need screenshots, video recordings that are 100 megs, cross-issue linking, etc. I hate JIRA (of course), but it gets this right.
The post does not mention CI anywhere else. Are they doing anything with it, keeping it on GitHub, or getting rid of it?
> Furthermore, the web frontend doesn't require JS, so I can use it from Dillo (I modified cgit CSS slightly to work well on Dillo).
That sounds like a bad approach to developing a Web browser, surely it would be better to make Dillo correctly work with the default cgit CSS (which is used by countless projects)?
Dillo is actively developed, and the project of "migrate away from github" is complete, so now other work can be started and completed (like adding the CSS features required to support mainline cgit).
Another social issue on GitHub: you cannot use the "good first issue" tag on a public repository without being subjected to low quality drive-by PRs or AI slop automatically submitted by someone's bot.
I think the issue with centralization is still understated. I know developers who seem to struggle reading code if it's not presented by VS Code or a GitHub page. And then, why not totally capture everyone into developing just with GitHub Codespaces?
This is exactly what well-intentioned folk like to see: it's solving everyone's problems! Batteries included, nothing else is needed! Why use your own machine or software that doesn't ping into a telemetry hell-hole of data collection on a regular basis?
I'm not part of the project at all, but this is the only offline code review system I've found.
And the GitHub frontend developers are aware of these accessibility problems (via the forums and bug reports). They just don't care anymore. They just want the site to appear to work at first glance, which is why index pages are actual text in HTML but nothing else is.
It clearly represents a pretty seismic cultural change within the company. GitHub was my go-to example of a sophisticated application that loaded fast and didn't require JavaScript for well over a decade.
The new React stuff is sluggish even on a crazy fast computer.
My guess is that the "old guard" who made the original technical decisions all left, and since it's been almost impossible since ~2020 to hire a frontend engineer who wasn't a JavaScript/React-first developer, the weight of industry fashion became too much to resist.
But maybe I'm wrong and they made a technical decision to go all-in on heavy JavaScript features that was reasoned out by GitHub veterans and accompanied by rock solid technical justification.
GitHub have been very transparent about their internal technical decisions in the past. I'd love to see them write about this transition.
Relevant quote:
> But beyond accessibility and availability, there is also a growing expectation of GitHub being more app-like.
> The first case of this was when we rebuilt GitHub projects. Customers were asking for features well beyond our existing feature set. More broadly, we are seeing other companies in our space innovate with more app-like experiences.
> Which has led us to adopting React. While we don’t have plans to rewrite GitHub in React, we are building most new experiences in React, especially when they are app-like.
> We made this decision a couple of years ago, and since then we’ve added about 250 React routes that serve about half of the average pages used by a given user in a week.
It then goes on to talk about how mobile is the new baseline and GitHub needed to build interfaces that felt more like mobile apps.
(Personally I think JavaScript-heavy React code is a disaster on mobile since it's so slow to load on the median (Android) device. I guess GitHub's core audience are more likely to have powerful phones?)
Let them choke on their "app-like experience", and if you can afford it, switch over to either one. I cannot recommend it enough after using it "in production" daily for more than five years.
Non-technical incentives steering technical decisions is more common than we'd perhaps like to admit.
no-one cares about the github mobile experience
microsoft making the windows 8 mistake all over again
It's where I interact with notifications about new issues and PRs for one thing. I doubt I'm alone there.
I'd like to see their logs about this.
Seems a small audience to optimise for.
All of six of these commits today were created and shipped from my phone while I was out and about on a nice dog walk: https://github.com/simonw/tools/commits/47b07010e3459adb23e1... - now deployed to https://tools.simonwillison.net
Note that for some of the web pages on GitHub, the data is included as JSON data within the HTML file, although this schema is undocumented and sometimes changes. User scripts (which you might have to maintain due to these changes) can be used to display the data without any additional downloads from the server, and they can be much shorter and faster than GitHub's proprietary scripts.
Using a GPG key to sign the web page and releases is helpful (for the reasons they explain there), although there are some other things that might additionally help (if the conspiracy was not making it difficult to do these things with X.509 certificates in many ways).
I went to GitLab from GitHub due to the Microsoft changes; my needs are very simple, and so far GitLab seems OK.
I also mirror just the current source on sdf.org via gopher. If GitLab causes issues, this could very well become my main site.
Not only did they spend years rewriting the frontend from PJAX to (I think) React, they also managed to lose customers because of it.
[1] https://github.blog/engineering/architecture-optimization/ho...
Git pulling isn't unique to GitHub, and it works over HTTP or SSH.
Demo: https://tools.simonwillison.net/cors-fetch?url=https%3A%2F%2...
https://git.dillo-browser.org/dillo/plain/src/ui.cc?id=29a46...
That being said, there's no reason for tools like it to have those constraints other than pushing users into an ecosystem they prefer (i.e. GitHub instead of other forges).
Sourcehut is hosted in The Netherlands, and Codeberg in Germany.
[0]: https://tangled.org/
I suppose something like this with git and source code exists on Tor.
During the Arab Spring and Hong Kong protests, Bluetooth was used to share messages whilst the internet was cut off.