And the key point: "Previous benchmarks we have run (where we rebuilt the entire archive for x86-64-v3) show that most packages show a slight (around 1%) performance improvement and some packages, mostly those that are somewhat numerical in nature, improve more than that."
This takes me back to arguing with Gentoo users 20 years ago who insisted that compiling everything from source for their machine made everything faster.
The consensus at the time was basically "theoretically, it's possible, but in practice, gcc isn't really doing much with the extra instructions anyway".
Then there's stuff like glibc which has custom assembly versions of things like memcpy/etc, and selects from them at startup. I'm not really sure if that was common 20 years ago but it is now.
It's cool that after 20 years we can finally start using the newer instructions in binary packages, but it definitely seems to not matter all that much, still.
Since then we've slowly started accumulating optional extensions again: newer SSE versions, AVX, encryption and virtualization extensions, probably some newfangled AI stuff I'm not on top of. So, very slowly, it might have started to make sense again for an approach like Gentoo to exist**.
* usual caveats apply; if the compiler can figure out that using the instruction is useful etc.
** but the same caveats as back then apply. A lot of software can't really take advantage of these new instructions, because newer instructions have been getting increasingly use-case-specific; and applications that can greatly benefit from them will already have alternative code paths to take advantage of them anyway. Also, a lot of the stuff happening in hardware acceleration has moved to GPUs, which have a feature discovery process independent of the CPU instruction set anyway.
Why is this "clever"? This is pretty much how "fat" binaries are supposed to work, no? At least, such packaging is the norm for Android.
I would guess that these are domain-specific enough that they can also mostly be enabled by the relevant libraries employing function multiversioning.
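For anyone who hasn't seen it, this is roughly what function multiversioning looks like with GCC/Clang's target_clones attribute: the compiler emits one clone per listed target plus a resolver that picks the best one when the program loads. A minimal sketch only; the dot-product function is an invented example, not anything from the Ubuntu work:

    /* Sketch of function multiversioning: target_clones makes the compiler
     * emit one clone per listed target plus an ifunc resolver that selects
     * the best clone at load time. Build with e.g. gcc -O2 mv.c
     * (the file name is arbitrary). */
    #include <stddef.h>
    #include <stdio.h>

    __attribute__((target_clones("default", "avx2", "avx512f")))
    double dot(const double *a, const double *b, size_t n)
    {
        double acc = 0.0;
        for (size_t i = 0; i < n; i++)   /* auto-vectorized per clone */
            acc += a[i] * b[i];
        return acc;
    }

    int main(void)
    {
        double a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
        double b[8] = {8, 7, 6, 5, 4, 3, 2, 1};
        printf("%f\n", dot(a, b, 8));    /* caller never sees the dispatch */
        return 0;
    }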
I used Gentoo a lot, jeez, between 15 and 20 years ago, and the install guide, walking me through partitioning disks, formatting them, unpacking tarballs, editing config files, running grub-install and so on, was so incredibly valuable to me that I have trouble expressing it.
There are lots of software applications out there whose official Docker images or pip wheels or whatever bundle everything under the sun to account for all the optional integrations the application has, and it's difficult to figure out which packages can be easily removed if we're not using the feature and which ones are load-bearing.
I'd agree that the manual Gentoo install process, and those tinkering years in general, gave me experience and familiarity that's come in handy plenty of times when dealing with other distros, troubleshooting, working on servers, and so on.
https://www.shlomifish.org/humour/by-others/funroll-loops/Ge...
The extra issue here is that SIMD (the main optimization) simply sucks to use. Auto-vectorization has been mostly a pipe dream for decades now as the sufficiently-smart compiler simply hasn't materialized yet (and maybe for the same reason the EPIC/Itanium compiler failed -- deterministically deciding execution order at compile time isn't possible in the abstract and getting heuristics that aren't deceived by even tiny changes to the code is massively hard).
Doing SIMD means delving into x86 assembly and all its nastiness/weirdness/complexity. It's no wonder that devs won't touch it unless absolutely necessary (which is why the speedups are coming from a small handful of super-optimized math libraries). ARM vector code is also rather Byzantine for a normal dev to learn and use.
We need a simpler option that normal programmers can easily learn and use. Maybe it's way less efficient than the current options, but some slightly slower SIMD is still going to generally beat no SIMD at all.
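FWIW, GCC/Clang's portable vector extensions are one middle ground that is a lot gentler than raw intrinsics or assembly, and they compile down to whatever SIMD the target -march allows (or get emulated otherwise). A minimal sketch with made-up names, not a claim about how any particular library does it:

    /* GCC/Clang portable vector extensions: plain C syntax, no intrinsics;
     * the compiler lowers the element-wise ops to SIMD where -march permits. */
    #include <stdio.h>

    typedef float v8f __attribute__((vector_size(32)));  /* 8 floats */

    static v8f saxpy8(float a, v8f x, v8f y)
    {
        return a * x + y;   /* element-wise multiply-add */
    }

    int main(void)
    {
        v8f x = {1, 2, 3, 4, 5, 6, 7, 8};
        v8f y = {8, 7, 6, 5, 4, 3, 2, 1};
        v8f r = saxpy8(2.0f, x, y);
        for (int i = 0; i < 8; i++)
            printf("%g ", r[i]);
        putchar('\n');
        return 0;
    }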
As in, are there any common libraries or parts of the system that typically slow things down, or was this more targeting a time when hardware was more limited so improving all would have made things feel faster in general.
If you aren't convinced by your Ubuntu being 1% faster, consider how many servers, VMs and containers run Ubuntu. Millions of servers using a fraction of a percent less energy multiplies out to a lot of energy.
It's not for nothing that some time ago "write once, run everywhere" was a selling proposition (not that it was actually working in all cases, but definitely working better than alternatives).
If you're negotiating deals worth billions of dollars, or even just millions, I'd strongly suggest not doing so with a hangover.
(Not really, I just know somewhere out there is a LinkedInLunatic who has a Business Philosophy based on being hungover.)
- Sun Zoo
...have you met salespeople? Buying lap dances is a legitimate business expense for them. You'd be surprised how much personal rapport matters and facts don't.
In all fairness, I only know about 8 and 9 figure deals, maybe at 10 and 11 salespeople grow ethics...
If you would only accept 10x improvements, I would argue progress would be very small.
Well, one example could be llama.cpp. It's critical for it to use every single extension the CPU has to move more bits at a time. When I installed it I had to compile it.
This might make it more practical to start offering OS packages for things like llama.cpp
I guess people that don't have newer hardware aren't trying to install those packages. But maybe the idea is that packages should not break on certain hardware.
Blender might be another one like that, which really needs the extensions for many things. But maybe you do want to allow it to be used on some oldish hardware anyway, because it still has uses that are valid on those machines.
> Previous benchmarks (...) show that most packages show a slight (around 1%) performance improvement and some packages, mostly those that are somewhat numerical in nature, improve more than that.

I don't think this is a valid argument to make. If you were doing the optimization work then you could argue tradeoffs. You are not, Canonical is.
Your decision is which image you want to use, and Canonical is giving you a choice. Do you care about which architecture variant you use? If you do, you can now pick the one that works best for you. Do you want to win an easy 1% performance gain? Now you have that choice.
> where that 1% is worth any hassle
You'll need context to answer your question, but yes, there are cases.
Let's say you have a process that takes 100 hrs to run and costs $1k/hr. You save an hour and $1k every time you run the process. You're going to save quite a bit. You don't just save the time to run the process, you save real calendar time and everything that that costs (customers, engineering time, support time, etc).
Let's say you have a process that takes 100ns and similarly costs $1k/hr. You now run in 99ns. Running the process 36 million times is going to be insignificant. In this setting even a 50% optimization probably isn't worthwhile (unless you're a high frequency trader or something)
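Back-of-the-envelope version of the two scenarios, using only the numbers already given above:

    /* The two scenarios above, reduced to arithmetic. All inputs are the
     * numbers from the comment; nothing else is assumed. */
    #include <stdio.h>

    int main(void)
    {
        /* 100 hr batch job at $1k/hr, 1% faster */
        double hours = 100.0, rate = 1000.0, gain = 0.01;
        printf("batch job: %.0f hr and $%.0f saved per run\n",
               hours * gain, hours * rate * gain);

        /* 100 ns task, 1 ns saved, run 36 million times */
        double ns_saved = 1.0, runs = 36e6;
        printf("tiny task: %.3f s saved over %.0f runs\n",
               ns_saved * runs / 1e9, runs);
        return 0;
    }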
This is where the saying "premature optimization is the root of all evil" comes from! The "premature" part is often disregarded and the rest of the context goes with it. Here's more context to Knuth's quote[0].
There is no doubt that the holy grail of efficiency leads to abuse. Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil.
Yet we should not pass up our opportunities in that critical 3%. A good programmer will not be lulled into complacency by such reasoning, he will be wise to look carefully at the critical code; but only after that code has been identified.
Knuth said: "Get a fucking profiler and make sure that you're optimizing the right thing". He did NOT say "don't optimize".
So yes, there are plenty of times where that optimization will be worthwhile. The percentages don't mean anything without the context. Your job as a programmer is to determine that context. And not just in the scope of your program, but in the scope of the environment you expect a user to be running on. (i.e. their computer probably isn't entirely dedicated to your program)
[0] https://dl.acm.org/doi/10.1145/356635.356640 (alt) https://sci-hub.se/10.1145/356635.356640
an unobservable benefit is not a benefit.
Perhaps if you're doing CPU-bound math you might see an improvement?
"Well, let's say you can shave 10 seconds off of the boot time. Multiply that by five million users and thats 50 million seconds, every single day. Over a year, that's probably dozens of lifetimes. So if you make it boot ten seconds faster, you've saved a dozen lives. That's really worth it, don't you think?"
I put a lot of effort into chasing wins of that magnitude. Over a huge userbase, something like that has a big positive ROI. These days it also affects important things like heat and battery life.
The other part of this is that the wins add up. Maybe I manage to find 1% every couple of years. Some of my coworkers do too. Now you're starting to make a major difference.
Standard advice: You are not Google.
I'm surprised and disappointed 1% is the best they could come up with, with numbers that small I would expect experimental noise to be much larger than the improvement. If you tell me you've managed a 1% improvement you have to do a lot to convince me you haven't actually made things 5% worse.
At scale marginal differences do matter and compound.
Further, maybe it has not been a focus for compiler vendors to generate good code for these higher-level archs if few are using the feature. So Ubuntu's move could improve that.
I bet you there is some use case of some app or library where this is like a 2x improvement.
Would be nice to know the per app metrics.
x86-64-v3 is AVX2-capable CPUs.
Which unfortunately extends all the way to Intel's newest client CPUs, since they're still struggling to ship their own AVX-512 instructions, which are required for v4. Meanwhile AMD has been on v4 for two generations already.
Having a non-uniform instruction set for one package was a baffling decision.
It's pretty clear that Alder Lake was simply a rush job, and had to be implemented with the E cores they already had, despite never having planned for heterogeneous cores to be part of their product roadmap.
They had two teams designing the two types of cores.
(There is some older text in the Debian Wiki https://wiki.debian.org/ArchitectureVariants but it's not clear if it's directly related to this effort)
No, because those are different ABIs (and a debian architecture is really an ABI)
> the issue of i486 vs. i586 vs. i686 vs. the many varieties of MMX and SSE extensions for 32-bit?
It could be used for this but it's about 15 years too late to care surely?
> (There is some older text in the Debian Wiki https://wiki.debian.org/ArchitectureVariants but it's not clear if it's directly related to this effort)
Yeah that is a previous version of the same design. I need to get back to talking to Debian folks about this.
Intel has reduced its number of employees, and has lost lots of software developers.
So we lost Clear Linux, their Linux distribution that often showcased performance improvements due to careful optimization and utilization of microarchitectural enhancements.
I believe you can still use the Intel compiler, icc, and maybe see some improvements in performance-sensitive code.
"It was actively developed from 2/6/2015-7/18/2025."
Not the same thing, or perhaps an augmentation of Intel performance libraries (which required C++, I believe).
Sure, harmonizing all of this may have suggested that there were too many software teams. But device drivers don't write themselves, and without feedback from internal software developers, you can't validate your CPU designs.
Would love to see which packages benefited the most in terms of percentile gain and install base. You could probably back out a kWh/tons of CO2 saved metric from it.
Does anyone know what the plans are to accomplish this?
But that does not sound like a simple solution for non-technical users.
Anyway, non-technical users using an installation on another, lower-spec computer? That sounds weird.
x86-64-v3 is AVX2-capable CPUs.
And if I have it right, the main advantage should come via the package manager and open-source software, where the compiled binaries would be built in variants that take advantage of newer CPU features.
Still, this would be most noticeable for apps that benefit from those features, such as audio DSP, or, as mentioned, SSL and crypto.
Especially the binaries for the newest variant, since they can entirely drop the conditionals/branching for all the older variants.
If you're doing that sort of work, you also shouldn't use pre-compiled PyPI packages for the same reason: you leave a ton of performance on the table by not targeting the micro-architecture you're running on.
If you do a typical: "cmake . && make install" then you will often miss compiler optimisations. There's no standard across different packages so you often have to dig into internals of the build system and look at the options provided and experiment.
Typically if you compile a C/C++/Fortran .cpp/.c/.fXX file by hand, you have to supply arguments to instruct the use of specific instruction sets. -march=native typically means "compile this binary to run with the maximum set of SIMD instructions that my current machine supports", but you can get quite granular with flags like -msse4.2, -mavx, and -mavx2 (or a level like -march=x86-64-v3) for either compatibility reasons or to try out subsets.
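If you want to see what a given -march actually turned on, the compiler's predefined feature macros are an easy check. A small sketch (the suggested file name in the comment is just an example):

    /* The compiler defines feature macros based on -march, so a quick probe
     * shows what a given flag actually enabled. Compile this file with e.g.
     *   gcc -march=x86-64 probe.c   vs   gcc -march=x86-64-v3 probe.c
     * and compare the output. */
    #include <stdio.h>

    int main(void)
    {
    #ifdef __SSE4_2__
        puts("SSE4.2 enabled at compile time");
    #endif
    #ifdef __AVX__
        puts("AVX enabled at compile time");
    #endif
    #ifdef __AVX2__
        puts("AVX2 enabled at compile time");
    #endif
    #ifdef __AVX512F__
        puts("AVX-512F enabled at compile time");
    #endif
        return 0;
    }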
That said, most of those packages will just read the hardware capability from the OS and dispatch an appropriate codepath anyway. You maybe save some code footprint by restricting the number of codepaths it needs to compile.
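The dispatch those packages do typically looks something like this hand-rolled version. __builtin_cpu_supports is a real GCC/Clang builtin on x86, but the copy_* functions here are made-up placeholders, not anyone's actual memcpy:

    /* Rough sketch of the usual runtime-dispatch pattern: probe the CPU once,
     * then route calls through a function pointer to the best implementation. */
    #include <stdio.h>

    static void copy_generic(void) { puts("generic code path"); }
    static void copy_avx2(void)    { puts("AVX2 code path"); }

    typedef void (*copy_fn)(void);

    static copy_fn pick_copy(void)
    {
        __builtin_cpu_init();   /* harmless here; needed if run before constructors */
        if (__builtin_cpu_supports("avx2"))
            return copy_avx2;
        return copy_generic;
    }

    int main(void)
    {
        copy_fn copy = pick_copy();   /* real libraries cache this or use an ifunc */
        copy();
        return 0;
    }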
There’s also the difference between being able to run and being able to run optimised. At least 5 years ago, the Ubuntu/Debian builds of FFTW didn’t include the parallelised OpenMP library.
In a past life I did HPC support and I recommend the Spack package manager a lot to people working in this area because you can get optimised builds with whatever compiler tool chain and options you need quite easily that way.
> As a result, we’re very excited to share that in Ubuntu 25.10, some packages are available, on an opt-in basis, in their optimized form for the more modern x86-64-v3 architecture level
> Previous benchmarks we have run (where we rebuilt the entire archive for x86-64-v3) show that most packages show a slight (around 1%) performance improvement and some packages, mostly those that are somewhat numerical in nature, improve more than that.
I couldn't run something from NPM on an older NAS machine (HP Microserver Gen 7) recently because of this.
To me hwcaps feels like a very unfortunate feature creep of glibc now. I don't see why it was ever added, given that it's hard to compile only the shared libraries for a specific microarch, and it does not benefit executables. Distros seem to avoid it. All it does is cause unnecessary stat calls when running an executable.
> Description: official repositories compiled with LTO, -march=x86-64-vN and -O3.
Packages: https://status.alhp.dev/
The same will apply to different arm64 or riscv64 variants.
apt (3.1.7) unstable; urgency=medium

  [ Julian Andres Klode ]
  * test-history: Adjust for as-installed testing

  [ Simon Johnsson ]
  * Add history undo, redo, and rollback features
very odd choice of words. "better utilize/leverage" is perhaps the right thing to say here.
"Changes/Optimized Binaries for the AMD64 Architecture v2" (2025) https://fedoraproject.org/wiki/Changes/Optimized_Binaries_fo... :
> Note that other distributions use higher microarchitecture levels. For example RHEL 9 uses x86-64-v2 as the baseline, RHEL 10 uses x86-64-v3, and other distros provide optimized variants (OpenSUSE, Arch Linux, Ubuntu).
All the fuss about Ubuntu 25.10 and later being RVA23 only was about nothing?