Caliper: Right-size your CI runners
8 points | 5 days ago | 3 comments | attune.inc | HN
martinald
3 hours ago
[-]
Please try running this on a "desktop" CPU as well, like a Ryzen 9950X. (edit: sorry, didn't realise you shared the script at the end. I will test this myself :))

In my experience CI/CD tasks are more "single thread" bound than people expect. The Epyc CPU cores are _slow_ per core, so trading core count for fewer, faster cores actually works out well.

So if you want fast CI/CD builds, you are much better off using desktop CPU cores than enterprise server CPUs like this.

Wrote some thoughts up on this a while back: https://martinalderson.com/posts/how-i-make-cicd-much-faster... - it's more focused on the move from GitHub-hosted to self-hosted runners, but I'd be interested to see a comparison on desktop-class CPUs.
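To see why per-core speed dominates, you can time a fixed serial, CPU-bound loop on both machines. This is just an illustrative micro-benchmark I'm sketching here, not the script from the article:

```python
# Rough single-core comparison: run the same serial, CPU-bound loop on
# each machine; lower wall time means a faster core. Hypothetical
# benchmark, not the article's Caliper script.
import hashlib
import time

def single_core_score(iterations: int = 500_000) -> float:
    """Time a serial SHA-256 chain; it cannot use more than one core."""
    data = b"caliper"
    start = time.perf_counter()
    for _ in range(iterations):
        data = hashlib.sha256(data).digest()
    return time.perf_counter() - start

print(f"serial hash loop: {single_core_score():.2f}s")
```

Run it on the Epyc box and the 9950X and compare the two numbers directly; a build step that is single-thread bound will scale roughly with this ratio, not with core count.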

reply
mgaunard
4 hours ago
[-]
The main problem is that builds require a variable number of cores depending on what needs to be (re)built. The ideal thing to do is to have the build system itself orchestrate remote builds, since it actually knows how many things need building and how expensive they are.
reply
nixbuild
4 hours ago
[-]
This is what nixbuild.net does, it tracks historic CPU and memory usage of individual builds, and takes that into account when deciding what resources to allocate for new builds. You can configure limits on max/min CPUs on your account or individual builds. Also, if a build runs out of memory we simply restart it with more memory. The client will just see that the build log starts over.
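The restart-on-OOM behaviour boils down to a simple escalation loop. This is a hedged sketch of that idea; run_build, OutOfMemory, and the limits are stand-ins, not nixbuild.net's actual API:

```python
# Sketch of "restart the build with more memory if it OOMs": the caller
# just sees the build log start over. Names and limits are illustrative.
class OutOfMemory(Exception):
    pass

def run_with_escalation(run_build, start_mb=1024, factor=2, max_mb=32768):
    """Retry the build, doubling its memory allocation after each OOM,
    up to a configured ceiling."""
    mem = start_mb
    while True:
        try:
            return run_build(mem)
        except OutOfMemory:
            if mem >= max_mb:
                raise  # already at the ceiling; give up
            mem = min(mem * factor, max_mb)

# Example: a fake build that needs at least 4 GiB.
def fake_build(mem_mb):
    if mem_mb < 4096:
        raise OutOfMemory()
    return f"ok with {mem_mb} MiB"

print(run_with_escalation(fake_build))  # escalates 1024 -> 2048 -> 4096
```

Tracking historic usage per build, as described above, just means seeding start_mb from the last successful run instead of a fixed default.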
reply
mgaunard
3 hours ago
[-]
That's precisely not what I'm describing; Nix doesn't even have access to the build DAG.
reply
Havoc
4 hours ago
[-]
Great idea. Don’t have a personal need for it but imagine many will!
reply