What boggles my mind is why Google hasn't gotten more serious about making Android a desktop OS. Pay the money needed to get good hardware support, control the OS, and now you're a Microsoft/Apple competitor for devices. Yes, there is the Chromebook, but ChromeOS is not a real desktop OS, it's a toy. Google could control both the browser market and the desktop computing market if they seriously tried. (But then again, that would require listening to customers and providing support, so never mind.)
What are you talking about? The majority of hardware is supported by only Linux at this point.
Linux with glibc is the complete opposite; there really does exist old Linux software that static-links in everything down to libc, just interacting with the kernel through syscalls—and it does (almost always) still work to run such software on a modern Linux, even when the software is 10-20 years old.
I guess this is why Linux containers are such a thing: you’re taking a dynamically-linked Linux binary and pinning it to a particular entire userland, such that when you run the old software, it calls into the old glibc. Containers work, because they ultimately ground out in the same set of stable kernel ABI calls.
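As a tiny illustration of that stable kernel surface, here's a sketch (assuming x86-64 Linux, since syscall numbers are per-architecture) of invoking the kernel directly through libc's generic syscall(2) entry point:

```python
import ctypes
import os

# Load the process's libc; syscall(2) is the generic kernel entry point.
libc = ctypes.CDLL(None, use_errno=True)

SYS_getpid = 39  # assumption: x86-64 Linux; the number differs on other arches

# The raw syscall and the libc wrapper hit the same stable kernel ABI,
# which is why even ancient static binaries keep working.
print(libc.syscall(SYS_getpid) == os.getpid())
```

This is exactly the interface an old static binary grounds out in, whether it runs bare or inside a container's pinned userland.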
(Which, now that I think of it, makes me wonder how exactly Windows containers work. I’m guessing each one brings its own NTOSKRNL, that gets spun up under HyperV if the host kernel ABI doesn’t match the guest?)
…actually, looks like it’s a bit looser these days. Version matrix incoming: https://learn.microsoft.com/en-us/virtualization/windowscont...
This is not a big problem if it's hard/unlikely enough to write code that accidentally relies on raw syscalls. At least MS's dev tooling doesn't provide an easy way to bypass the standard DLLs.
> makes me wonder how exactly Windows containers work
I guess containers do their syscalls through the standard Windows DLLs like any regular userspace application. If it's a Linux container on Windows, it probably uses the WSL syscalls, which I guess are stable.
Can you give an example where a breaking change was introduced in NT kernel ABI?
(One example: hit "Show" on the table header for Win11, then use the form at the top of the page to highlight syscall 8c)
I argue that NT doesn't break its kernel ABI.
https://thomasvanlaere.com/posts/2021/06/exploring-windows-c...
Turns out that Nix is built against a different version of glibc than SteamOS, and for some reason, that matters. You have to make sure none of Steam's libraries are on the path before the Nix code will run. It seems impractical to expect every piece of software on your computer to be built against a specific version of a specific library, but I guess that's Linux for you.
* Except for non-glibc distributions of course.
Why doesn't glibc use the version tag to do the appropriate mapping?
In other words, the Linux desktop as a whole is a Bazaar, not Cathedral.
Correct me if I'm wrong but I don't think versioned symbols are a thing on Windows (i.e. they are non-portable). This is not a problem for glibc but it is very much a problem for a lot of open source libraries (which instead tend to just provide a stable C ABI if they care).
There are quite a few mechanisms they use for that. The oldest one: call a special API function on startup, like InitCommonControlsEx, and other API functions will resolve DLLs differently or behave differently. A similar tactic: require an SDK-defined magic number as a parameter to some initialization function, with different magic numbers switching symbols from the same library; examples are WSAStartup and MFStartup.
Around Win2k they added side-by-side assemblies, or WinSxS. Include a special XML manifest in an embedded resource of your EXE, and you can request a specific version of a dependent API DLL; the OS keeps multiple versions internally.
Then there are the compatibility mechanisms, both OS-builtin and user-controllable (right-click on an EXE or LNK, Compatibility tab). Compatibility mode is yet another way to control the versions of DLLs used by the application.
Pretty sure there's more that I'm forgetting.
Isn't the oldest one... to have the API/ABI version in the name of your DLL? Unlike Linux, which by default uses a flat namespace, in Windows land imports are nearly always identified by a pair of the DLL name and the symbol name (or ordinal). You can even have multiple C runtimes (MSVCR71.DLL, MSVCR80.DLL, etc.) linked together but working independently in the same executable.
Honestly I might buy a T-shirt with such a quote.
I think glibc is such a pain that it's the reason we have so many vastly different package managers. Non-glibc setups could really simplify the package management approach on Linux. The approach may feel solved, but there are definitely still issues with it, and I think we should all keep looking for ways to solve the problem.
AppImage has some issues/restrictions: an AppImage can't run on an older Linux than the one it was compiled on, so people compile on the oldest machines they can find. There are a few more quirks like that.
AppImages are really good, but zapps are good too. I once tried to build something on top of zapp, but it's a shame that zapp went down the route of crypto/IPFS or something, and I don't really see any development there now. It would be interesting if someone could add zapp's features to AppImage, or pick up the project and build something similar.
At some point I've got to try this. It would be nice to have tools to turn an existing program into a zapp (there are many such tools for making AppImages today).
Looks like you met the right guy because I have built this tool :)
Allow me to show my project, Appseed (https://nanotimestamps.org/appseed): it's a simple fish script that I prototyped (with Claude) some 8-10 months ago to solve exactly this.
There's a YouTube video on the website, and the repository is open source on GitHub too.
It actually worked fantastically on a lot of different binaries I tested. I posted it on Hacker News as well, but nobody really responded; perhaps this might change that :p
What Appseed does is take a binary and convert it into two folders: one holds the dynamic-library part, the other the binary itself.
You can then use something like tar to package it up and run it anywhere. I could of course produce a single ELF64 as well, but I wanted to keep it flexible, so we could experiment with dynamic-library handling, caching, and other ideas, and it kept things simple for me too.
Ldshim sounds like a really good idea too, although I can't quite understand it for the time being; I'll try. I'd really appreciate it if you could tell me more about Ldshim! Perhaps take a look at Appseed too; I think there are some similarities, except I just tried to write a fish script that converts a typical dynamic binary into a static one of sorts.
I just want more people to take ideas like Appseed or zapp and run with them, to make the Linux ecosystem better. I only prototyped it with LLMs to see whether it was possible, since I don't have much expertise in the area. I can only imagine what's possible if people who do have expertise work on it, which is why I created and shared it in the first place.
Let me know if you're interested in discussing anything about Appseed. My memory of how it worked is a little rusty, but I'd love to talk about it if I can be of any help :p
Have a nice new year man! :p
I wasn't directly involved, but the company I worked for created its own set of runtimes too, and I haven't heard any excessive complaints in internal chats, so I don't think it's as arcane as you make it sound either.
The Flatpak ecosystem is problematic in that most packages are granted too many rights by default.
You can still get firefox as a .deb though.
It makes sense. Every distribution wants to be in charge of what set of libraries is available on their platform, and they all have their own way to manage software. Developing applications for Linux that can be widely used across distributions is way more complex than it needs to be. I can just ship a binary for Windows and macOS; for Linux, you need an rpm and a deb and so on.
I use davinci resolve on Linux. The resolve developers only officially support Rocky Linux because anything else is too hard. I use it in Linux mint anyway. The application has no title bar and recording audio doesn’t work properly. Bleh.
We already have an entire platform like that (the Steam Deck), and it's the best Linux development experience around, in my opinion.
Who needs ABI compatibility when your software is OSS? You only need API compatibility at that point.
Because almost certainly someone out there will want to use it. And they should be able to, because that is the entire point of free software: user freedom.
I've had so much trouble with package managers that I'm not even sure they are a good idea to begin with.
Even if we ship as source, even if the user has the skills to build it, even if the makefile supports every version of the kernel, plus all the other material variation, plus who knows how many dependencies, what exactly am I supposed to do when a user reports:
"I followed your instructions and it doesn't run".
Linux Desktop fails because it's not 1 thing, it's 100 things. And to get anything to run reliably on 95 of them you need to be extremely competent.
Distribution as source fails because there are too many unknown and interdependent parts.
Distribution as binary containers (Docker et al.) is popular because it gives the app a fighting chance, while at the same time being a really ugly hack.
I think Rob Pike has the right idea with Go: just statically link everything wherever possible. These days I try to do the same, because so much less can go wrong for users.
People don’t seem to mind downloading a 30 MB executable, so long as it actually works.
Stable ABIs for certain critical pieces of independently-updatable software (libc, OpenSSL, etc.) is not even that big of a lift or a hard tradeoff. I’ve never run into any issues with macOS’s libc because it doesn’t version the symbol for fopen like glibc does. It just requires commitment and forethought.
To be honest, I think OSes are boring, and should have been that way since maybe 1995. The basic notions (multi-processing, context switching, tree-like file systems, multiple users, access privileges) haven't changed since 1970, and the more modern GUI stuff hasn't changed since at least the early '90s. Some design elements, like tree-like file systems, WIMP GUIs, per-user privileges, and the fuzziness of what an "operating system" even is and what its role is, are perhaps even arbitrary, but they can serve as a mature foundation for better-conceived ideas: ZFS (a very well-engineered implementation of tree-like data storage that's been standard since the '60s) can serve as a foundation for Postgres (which implements a better-conceived relational design).
I'm wondering why OSS, which according to one of its acolytes makes all bugs shallow, couldn't make its flagship OS more stable and boring. It has produced an anarchy of packaging systems, breaking upgrades and updates, an unstable glibc, desktop environments that are different and changing seemingly for the sake of it, sound that keeps breaking, power-management iffiness, etc. Why should everything pretend to be a 1970s minicomputer shared by multiple users connected via teletypes?
If there's one good idea in Unix-like systems that should be preserved, IMHO it's independent processes, possibly written in different languages, communicating with each other through file handles. These processes should be isolated from each other, and from access to arbitrary files and devices. But there should be a single privileged process, the "shell" (whether command line, TUI, or GUI), that is responsible for coordinating it all, by launching and passing handles to files/pipes to any other process, under control of the user.
Could be done by typing file names, or selecting from a drop-down list, or by drag-and-drop. Other program arguments should be defined in some standard format so that e.g. a text based shell could auto-complete them like in VMS, and a graphical one could build a dialog box from the definition.
I don't want to fiddle with permissions or user accounts, ever. It's my computer, and it should do what I tell it to, whether that's opening a text document in my home directory, or writing a disk image to the USB stick I just plugged in. Or even passing full control of some device to a VM running another operating system that has the appropriate drivers installed.
But it should all be controlled by the user. Normal programs of course shouldn't be able to open "/dev/sdb", but neither should they be able to open "/home/foo/bar.txt". Outside of the program's own private directory, the only way to access anything should be via handles passed from the launching process, or some other standard protocol.
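That handle-passing model is small enough to sketch; here's a toy version in Python for concreteness (the launcher opens the file and the child only ever sees an inherited descriptor, never a path):

```python
import os
import subprocess
import sys
import tempfile

# The "shell"/launcher creates a file on the user's behalf.
with tempfile.NamedTemporaryFile("w", delete=False) as f:
    f.write("hello from the launcher\n")
    path = f.name

# It opens the file itself and passes only the descriptor to the child.
fd = os.open(path, os.O_RDONLY)
child = subprocess.run(
    [sys.executable, "-c",
     "import os, sys; print(os.read(int(sys.argv[1]), 100).decode(), end='')",
     str(fd)],
    pass_fds=(fd,),          # the child inherits exactly this handle
    capture_output=True, text=True)

print(child.stdout)  # hello from the launcher

os.close(fd)
os.unlink(path)
```

The child never needed permission to open anything; it can only use what it was handed, which is the whole point of the model.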
And get rid of "everything is text". For a computer, parsing text is like for a human to read a book over the phone, with an illiterate person on the other end who can only describe the shape of each letter one by one. Every system-level language should support structs, and those are like telepathy in comparison. But no, that's scaaaary, hackers will overflow your buffers to turn your computer into a bomb and blow you to kingdom come! Yeah, not like there's ever been any vulnerability in text parsers, right? Making sure every special shell character is properly escaped is so easy! Sed and awk are the ideal way to manipulate structured data!
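For contrast, a small Python sketch of what struct-shaped exchange looks like: a fixed binary layout where the format string is the contract, with no quoting or escaping to get wrong (the record layout here is made up for illustration):

```python
import struct

# A typed record: uint32 id, 16-byte name field, float64 value, little-endian.
record = struct.Struct("<I16sd")

blob = record.pack(42, b"sensor-1".ljust(16, b"\0"), 2.5)
ident, raw_name, value = record.unpack(blob)

print(ident, raw_name.rstrip(b"\0").decode(), value)  # 42 sensor-1 2.5
```

There is nothing to mis-parse: the receiver either has the right layout or it doesn't, which is exactly the property text pipelines lack.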
I wish either of those systems had the same hardware & software support. I’d swap my desktop over in a heartbeat if I could.
Breaking between major versions is annoying (2 to 3, 3 to 4), but for the most part it's renaming work and some slight API modifications, reminiscent of the Python 2 to 3 switch, and it only happened twice since 2000.
glibc-based toolchains are ultimately missing a GLIBC_MIN_DEPLOYMENT_TARGET definition that gets passed to the linker so it knows which minimum version of glibc your software supports, similar to how Apple's toolchain lets you target older macOS from a newer toolchain.
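You can at least recover the number such a deployment target would have to name by scanning a binary's ELF version requirements. A sketch (assuming binutils' readelf is on the PATH, and using /bin/ls as a stand-in binary):

```python
import re
import subprocess

def max_glibc_requirement(path):
    """Return the highest GLIBC_x.y version the ELF at `path` requires."""
    out = subprocess.run(["readelf", "-V", path], capture_output=True,
                         text=True, check=True).stdout
    # Collect every GLIBC_x.y version reference in the version sections.
    versions = {tuple(map(int, m.groups()))
                for m in re.finditer(r"GLIBC_(\d+)\.(\d+)", out)}
    return max(versions, default=None)

# Any system shipping at least this glibc version can run the binary.
print(max_glibc_requirement("/bin/ls"))
```

This is the check the toolchain does not do for you at link time; you find out at run time instead, via the loader's version errors.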
The actual practical problem is not glibc but the constant GUI / desktop API changes.
patchelf --set-interpreter /lib/ld-linux-x86-64.so.2 "$APP"
patchelf --set-rpath /lib "$APP"
1. Delete the shared symbol versioning as per https://stackoverflow.com/a/73388939 (patchelf --clear-symbol-version exp mybinary)
2. Replace libc.so with a fake library that has the right version symbol, using a version script, e.g. version.map: GLIBC_2.29 { global: *; };
Build it from an empty fake_libc.c: `gcc -shared -fPIC -Wl,--version-script=version.map,-soname,libc.so.6 -o libc.so.6 fake_libc.c`
3. Hope that you can still point the symbols back to the real libc (either by writing a giant pile of dlsym C code, or some other way, I'm unclear on this part)
Ideally glibc would stop checking the version if it's not actually marked as needed by any symbol; not sure why it doesn't (technically it's the same thing normally, so performance?).
Windows kept bogging down the system trying to download a dozen different language versions of Word (for which I didn't have a licence and didn't want regardless). Steam kept going into a crash-restart cycle. The virus scanner was... being difficult.
Everything just works on Linux except some games on proton have some sound issues that I still need to work out.
Is this 1998? Linux is forever having sound issues. Why is sound so hard?
As always It is Not Linux Fault, but it is Linux Problem.
It's one of the reasons why I moved to OSX + Linux virtual machine. I get the best of both worlds. Plus, the hardware quality of a 128GB unified RAM MacBookPro M4 Max is way beyond anything else in the market.
It doesn’t help that they only officially support Rocky Linux. I use Mint. I assume there are some magic PipeWire / ALSA / PulseAudio commands I could run that would glue everything together properly, but I can't figure it out. It just seems so complicated.
Similarly, Bluetooth on my Thinkpad T14 is slightly wonky, and it sometimes fails to register a Bluetooth mouse on wake-up (I have to switch the mouse off and back on). This mouse registers fine on my other Linux machines. The logs show a report from a kernel driver saying that the BT chip behaved weirdly.
Binary-blob firmware, and physical hardware, do have bugs, and there's little an OS can do about that, Linux or otherwise. Macs have less hardware variety and higher prices, which makes their hardware errata lists shorter, but not empty.
I think it’s a software issue in how resolve uses the Linux audio stack. But I have no idea how to get started debugging it. I’ve never had any problems with the same hardware in windows, or the same software (resolve) on macOS.
FWIW I lost sound completely 3 times in the last 2 months on my works windows laptop and it would only come back after a reboot. I assumed it was a driver crash.
It depends on having a properly good implementation, which will come eventually for most apps.
The fix is:
mkdir -p ~/.config/pipewire/pipewire.conf.d && echo "context.properties = {default.clock.min-quantum = 1024}" | tee ~/.config/pipewire/pipewire.conf.d/pipewire.conf
Basically, just force the quantum to be higher. Often it defaults to 64, which is around 1 ms.
What are some examples?
A recent example is that in San Andreas, the seaplane never spawns if you're running Windows 11 24H2 or newer. All of it due to a bug that's always been in the game, but only the recent changes in Windows caused it to show up. If anybody's interested, you can read the investigation on it here: https://cookieplmonster.github.io/2025/04/23/gta-san-andreas...
I see there are guides on Steam forums on how to get it to run under Windows 11 [0], and they are quite involved for someone not overly familiar with computers outside of gaming.
0: https://steamcommunity.com/sharedfiles/filedetails/?id=29344...
It's a great game, unfortunately right now I am not able to play it anymore :( even though I have the original CD.
Unfortunately, Wine is of no help here :(
Also original Commandos games.
One more popular example is Grid 2, another is Morrowind. Both crash on launch, unless you tweak a lot of things, and even then it won't always succeed.
Need for Speed II: SE is "platinum" on Wine, and pretty much unable to be run at all on Windows 11.
[0] https://learn.microsoft.com/en-us/windows/win32/direct3darti...
I might seriously recommend it to newbies. There's just this love I have for Windows 7: even though I never really used it for much, it's so much more elegant in its own way than Windows 10.
It could be a really fun experiment, and I would be interested to see how it would pan out.
Rough approximations have been possible since the early 2000s, but they’re exactly that: rough approximations. Details matter, and when I boot up an old XP/7 box there are aspects in which they feel more polished and… I don’t know, finished? Complete? Compared to even the big popular DEs like KDE.
Building a DE explicitly as a clone of a specific fixed environment would also do wonders to prevent feature creep and encourage focus on fixing bugs and optimization instead of bells and whistles, which is something that modern software across the board could use an Everest sized helping of.
I think one source of friction, if nothing else, could be ideological: most Linux users love open source and hate Windows, so they might not want to build anything that even replicates its UI.
Listen, I hate Windows just as much as the next guy, but I've got to give props: I feel nostalgic for Windows 7. If something provided both perfect .exe support and perfect Linux binary support, things could be really good. I hope somebody does it, and perhaps even adds it to loss32; that would be an interesting update.
The screenshots could easily fool me into believing it actually is Windows 7 :p
There is also AnduinOS, which I think doesn't try to replicate Windows 7, but it definitely tries to look like Windows 10, or perhaps 11, IIRC.
It would fail, and just be another corpse in the desktop OS graveyard.
https://en.wikipedia.org/wiki/Hitachi_Flora_Prius
https://www.osnews.com/story/136392/the-only-pc-ever-shipped...
https://en.wikipedia.org/wiki/Linspire
Unless you ship your own hardware or get a vendor to ship your OS (see the above), and set it up so the user can actually use it, you have to get users to install it on Windows hardware. So now your company is debugging broken consumer hardware without the help of the OEM, so that hopefully someone will install it on exactly that configuration for free.
This is not a winning business model.
Loss32 is itself a Linux distro, so there should technically be nothing stopping it from shipping everywhere.
I think you assumed I meant creating a whole kernel from scratch or something, but I'm merely asking for a loss32 reskin that looks like Windows 7. That's definitely possible without any company debugging consumer hardware, or even the need for a company at all, since I was proposing an open-source desktop environment that just behaves like Windows 7 by default, as an example.
I don't really understand why we need a winning business model out of it. There isn't really a winning model for niri, Hyprland, sway, KDE, Xfce, LXQt, GNOME, etc.; they're all open-source projects run with the help of donations.
There might be a misunderstanding between us, but I hope this clears it up.
> you were assuming that I meant create a whole kernel from scratch or something
No, making Linux run reliably on random laptops is already a monumental challenge.
Regarding success: well, such distros already exist. ZorinOS looks like Windows 7, or at least has some similarities to it, and it's sort of recommended to beginners, though Linux Mint is usually the most recommended distro.
> No, making Linux run reliably on random laptops is already a monumental challenge.
Not sure about this, but I ran Linux on a 15-year-old Dell Mini like it was no big deal, so I can only assume support has gotten better. In my observation, Linux support is really good for most laptops.
The problem is slapping Linux on some random bit of Windows kit and expecting it to work as though it had shipped with Linux, with support to back it. The more recent, the worse it will be.
If you want to run Linux, buy Linux computers that ship with Linux and have a support number you can call. Just like you'd not expect to be able to slap OSX on some random Dell and have it work.
This is how loss32 works, and I'm just saying: instead of merely using the Win95 design that loss32 uses, perhaps we can modernize the style a little, towards something like Windows 7, as a good balance?
Of course, if you're worried about the software-emulation aspect of things, you're worried about loss32 itself and not my idea of "hey, let's reskin it to look like Win7". We can have a discussion on loss32 itself if you want and weigh some pros and cons. It certainly isn't something I'd use as a daily driver, but since Linux is built on ideas of freedom, having loss32 isn't really that bad. It's an experiment of sorts; people will test it out because they're curious, and we'll hear what the people who try it think.
I love Linux just as much as you do, but I'll admit I never really got into the Windows ecosystem; instead I set out to learn Linux really well and took it as a challenge to conquer (mission accomplished).
Many people won't come with that mindset. They may come with the mindset that Microsoft is treating them really badly, plus moral dilemmas as well, so having something that caters to them isn't bad.
I also want to say that something like this might be good. Yes, people tell others to just use Linux Mint, but I never really found it a good option, not for Gen Z. I think Zorin could be an answer, or perhaps AnduinOS, but we definitely need more young people on Linux, and as a young guy I'll tell you what's happening.
People want the freedom, but they aren't able to articulate it. They're worried about AI, but they can't do anything about it, and to be honest they're right: how much can you or I do about the RAM crisis? Maybe there is something we can do, but we just don't know (like, did you know there's a way to convert laptop RAM to desktop RAM, with its gotchas?).
They simply don't know about the open-source side of things, since they just weren't exposed to it. To us it may be the core feature, but to them it's one word among the other words describing features they actually want to use.
So, pardon me, I don't really understand your side of the discussion, and I'm trying to find a common point.
Do you find an issue with the loss32 architecture itself, or with the idea of a reskin towards Win7?
I presume it's the loss32 architecture, but I don't know what to tell you except that it uses Wine, and Wine just works; so much so that the original title of this thread, I think, was about how Win32 is the most stable ABI even for Linux, and that's only possible thanks to Wine.
Not sure what you meant by support there; perhaps you're a Red Hat user with a company license or similar, and of course this isn't targeted at that sector but at niche users at home who just want to try out "Linux" :) I find the idea of loss32 very interesting, as I had thought of designing something similar, so I'm glad it exists and I'll probably watch it from afar.
I'd love a discussion about it, because I think we're making the same point from different angles. Let me try to rephrase: what I mean is completely open source and all Linux-y, just with Windows applications running easily and a Windows 7-like UI (really similar), and that's it. Everything's Linux, and Wine just translates those Windows programs into POSIX syscalls. Perhaps I'm missing your point of concern, and we can talk about it, since clearly nothing's better than talking about Linux (oh, the joy) with another Linux user! I may be misinterpreting some things, and if so, pardon me, but I'm unable to see how hardware plays a role in Wine or in what I said, and I'd be interested if you could tell me more about it. (Have a nice day! I've used up my quota for the day, or the year, of talking about Linux, haha!)
Or maybe ReactOS - the actual windows clone - gets finished. Rumours put a first release date some time after Hurd.
Pro tip: if someone wants to create their own ISO, they can customize things imperatively in MX Linux, even just by booting it into RAM, and then they have the magnificent option of snapshotting it and converting that into an ISO. So it's definitely possible to create an ISO tweaked down to your exact configuration without any hassle (trust me, it's the best way to create ISOs without too much hassle; if one wants hassle, Nix or bootc seems to be the way to go).
Regarding why it wouldn't take off: I don't know. I already build some of my own ISOs, and I could build one for the Windows look (on the MX Linux principle) and upload it for free on Hugging Face perhaps, but the idea needs mass appeal.
Yes, I can do that, but I would prefer an ISO that could just do it out of the box, one I could share with someone new to Linux. And yes, I could have the new person make the changes themselves, but why? There really is no reason, IMO; this feels like a low-hanging fruit that nobody has picked, which is why I was curious.
But also, as the other comment pointed out: sure, we could build this thing, but there are definitely genuine reasons it hasn't been done; they give some good reasons, and I agree with them overall too.
If you ask me, it would be fun to have more options, especially considering this is Linux, where freedom is celebrated :p
Alternatively, RemObjects makes Elements, also a RAD programming environment in which you can code in Oxygene (their Object Pascal), C#, Swift, Java, Go, or Mercury (VB) and target all platforms: .NET, iOS and macOS, Android, WebAssembly, Java, Linux, Windows.
Wait you can make Android applications with Golang without too much sorcery??
I just wanted to convert some Golang CLI applications into GUIs for Android, and I instead ended up giving up on the project and just started recommending people use Termux.
Please tell me there's a simple method for Golang that can "just work" as basically the Visual Basic-like glue between a CLI and a GUI.
Why don't you try it out: https://www.remobjects.com/elements/gold/
One of the key principles of F-Droid is that apps must be reproducible (I think), or open source and buildable by F-Droid's servers, but I suppose reproducibility would require having this software, which is paid in this case.
We might take it for granted but React-like declarative top-down component model (as opposed to imperative UI) was a huge step forward. In particular that there's no difference between initial render or a re-render, and that updating state is enough for everything to propagate down. That's why it went beyond web, and why all modern native UI frameworks have a similar model these days.
Personally I much rather the approach taken by solidjs / svelte.
React’s approach is very inefficient: the entire view tree is re-rendered when any change happens, then they need to diff the new UI state against the old state and do reconciliation. This works well enough for tiny examples, but it's clunky at scale. And the code to do diffing and reconciliation is insanely complicated. Hello world in React is something like 200 kB of JavaScript (smaller gzipped, but the browser still needs to parse it all at startup). And all of that diffing is pure overhead. It's simply not needed.
The solidjs / svelte model uses the compiler to figure out how variable changes result in changes to the rendered view tree. Those variables are wrapped up as "observed state". As a result, you can just update those variables, and exactly and only the parts of the UI that need to change will be redrawn. No overrendering. No diffing. No virtual DOM and no reconciliation. Hello world in Solid or Svelte is minuscule: 2 kB or something.
Unfortunately, SwiftUI has copied React, and not the superior approach of these newer libraries.
The rust “Leptos” library implements this same fine grained reactivity, but it’s still married to the web. I’m really hoping someone takes the same idea and ports it to desktop / native UI.
That's not true. React only re-renders down from where the update happens. And it skips over stuff that is provably unchanged -- which, fair, involves manual memoization hints. Although with React Compiler it's actually pretty good at automatically adding those so in practice it mostly re-renders along the actually changed path.
>And the code to do diffing and reconciliation is insanely complicated.
It's really not, the "diffing" is relatively simple and is maybe ~2kloc of repetitive functions (one per component kind) in the React source code. Most of complexity of React is elsewhere.
>The solidjs / svelte model uses the compiler to figure out how variable changes result in changes to the rendered view tree.
I actually count those as "React-like" because it's still declarative componentized top-down model unlike say VB6.
But if you liked that, consider that C# was in many ways a spiritual successor to Delphi, and MS still supports native GUI development with it.
The web was a big step backwards for UI design. It was a 30 year detour whose results still suck compared to pre-web UIs.
Maybe one day something like Lazarus or Avalonia will catch up, but today I feel that Electron is best at what it does.
Emulation does not mean that the CPU must be interpreted. For example, the DOSEMU emulator for Linux from the early 90s ran DOS programs natively using the 386's virtual 8086 mode, and reimplemented the DOS API. This worked similarly to Microsoft's Virtual DOS Machine on Windows NT. For a more recent example, the ShadPS4 PS4 emulator runs the game code natively on your amd64 CPU and reimplements the PS4 API in the emulator source code for graphics/audio/input/etc calls.
1. The exact problem with the Linux ABI
2. What causes it (the issues that makes it such a challenge)
3. How it changed over the years, and its current state
4. Any serious attempts to resolve it
I've been on Linux for maybe two decades at this point. I haven't noticed any issues with the ABI so far, perhaps because I use everything from the distro repo or build and install it through the package manager. If I don't understand it, there are surely others who want to know too. (Not trying to brag here - I'm just referring to the time I've spent on it.)
I know that this is a big ask. The best course for me is of course to research it myself. But those who know the whole history tend to have a well organized perspective of it, as well as some invaluable insights that are not recorded anywhere else. So if this describes you, please consider writing it down for others. Blog is probably the best format for this.
Together this means that basically nobody implements applications anymore. For commercial applications that market is too fragmented and it is too much effort. Open-source applications need time to grow and if all the underpinnings get changed all the time, this is too frustrating. Only a few projects survive this, and even those struggle. For example GIMP took a decade to be ported from GTK 2 to 3.
My understanding is that very old statically linked Linux images still run today because paraphrasing Linus: "we don't break user space".
The kernel doesn't break user space. User space breaks on its own.
Also, if you happened to have linked that image in the a.out format, it wouldn't work on a kernel from this year (a.out support was finally removed), but that's probably not the case ;)
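You can poke at that stable layer directly. A sketch in Python - it goes through libc's syscall(2) helper for convenience, but the stable part is the numbers themselves, which are frozen per architecture:

```python
import ctypes
import os
import platform

# getpid's syscall number differs per architecture, but within each
# architecture it is frozen forever - that freeze is the "we don't
# break user space" contract old static binaries rely on.
SYSCALL_GETPID = {"x86_64": 39, "aarch64": 172}.get(platform.machine())

libc = ctypes.CDLL(None, use_errno=True)  # handle to the running libc

if SYSCALL_GETPID is not None:
    # Same answer whether you ask the kernel by raw number or via the
    # regular libc wrapper.
    assert libc.syscall(SYSCALL_GETPID) == os.getpid()
```

A truly static binary does the same thing with a `syscall` instruction and no libc at all, which is exactly why it keeps working decades later.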
Good operating systems should:
1. Allow users to obtain software from anywhere.
2. Execute all programs that were written for previous versions reliably.
3. Not insert themselves as middlemen into user/developer transactions.
Judged from this perspective, Windows is a good OS. It doesn't nail all three all the time, but it gets the closest. Linux is a bad OS.
The answers to your questions are:
(1) It isn't backwards compatible for sophisticated GUI apps. Core APIs like the widget toolkits change their API all the time (GTK 1->2->3->4, Qt also does this). It's also not forwards compatible. Compiling the same program on a new release may yield binaries that don't run on an old release. Linux library authors don't consider this a problem, Microsoft/Apple/everyone else does. This is the origin of the glibc symbol versioning errors everyone experiences sometimes.
(2) Maintaining a stable API/ABI is not fun and requires a capitalist who says "keep app X working or else I'll fire you". The capitalist Fights For The User. Linux is a socialist/collectivist project with nobody playing this role. Distros like Red Hat clone the software ecosystem into a private space that's semi-capitalist again, and do offer stable ABIs, but their releases are just ecosystem forks and the wider issue remains.
(3) It hasn't changed, and it's still bad.
(4) Docker: "solves" the problem on servers by shipping the entire userspace with every app, and being itself developed by a for-profit company. Only works because servers don't need any shared services from the computer beyond opening sockets and reading/writing files, so the kernel is good enough and the kernel does maintain a stable ABI. Docker obviously doesn't help the moment you move outside the server space and coordination requirements are larger.
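On the symbol-versioning point in (1): the "version `GLIBC_2.34' not found" errors happen when a binary built against a newer glibc runs on an older one. A small sketch for checking what your runtime actually provides (assumes a glibc-based Linux; gnu_get_libc_version() is glibc-specific):

```python
import ctypes
import ctypes.util

# Load the running C library; on a glibc system this is libc.so.6.
libc = ctypes.CDLL(ctypes.util.find_library("c") or "libc.so.6")

# glibc-specific introspection call - not part of POSIX.
libc.gnu_get_libc_version.restype = ctypes.c_char_p
version = libc.gnu_get_libc_version().decode()
print("runtime glibc:", version)
```

Any binary whose undefined symbols demand a version newer than this prints will refuse to start with exactly those errors.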
Never happens for me on Arch, which I've run as my primary desktop for 15 years.
And Arch itself also needs manual interventions on package updates every so often, just a few weeks ago there was a major change to the NVidia driver packaging.
> And Arch itself also needs manual interventions on package updates every so often, just a few weeks ago there was a major change to the NVidia driver packaging.
If you're running a proprietary driver on a 12 year old GPU architecture incapable of modern games or AI, yeah... so I actually haven't needed to care about many of these. Maybe 2 or 3 ever...
It's my strong opinion that Windows 2000 Server, SP4 was the best desktop OS ever.
I haven't yet gone more than a decade into the past, so I can't promise forever, and GPU-accelerated things probably still break, but X11 is very backwards compatible.
Source: I reviewed Cutler's lock-free data structure changes in Vista/Longhorn to find bugs in them, failed to find any.
Meanwhile, in 2025, with 64GB RAM and solid state drives, we hear, "Windows 11 Task Manager really, really shouldn't be eating up 15% of my CPU and take multiple seconds to fire up."
(also Microsoft has been heavily embracing Linux and open source in the last decade)
Nowadays, with the Windows team barely able to produce a functional UI, what's happening with the NT kernel? Is it all graybeards back there? When they retire, the stability of Windows is going to be in trouble, and that stability is important for the things that really pull in the money. It'll get real bad, then they'll give up and move to an open source base, just like Edge.
No reason to dump a very good kernel.
This is something that is very much needed to make Linux much more user friendly for new users.
It sure was, if you were already bored by Windows 3.11/95 and were getting into Linux, it was fantastic. You were getting skills at the ground floor which could help keep you in good career for most of the rest of your life.
'unfortunate rough edges that people only tolerate because they use WINE as a last resort'
Whether those rough edges will ever be ironed out is a matter I'll leave to other people. But I love that someone is attempting this just because of the tenacity it shows. This reminds me of projects like asahi and cosmopolitan c.
Now if we're to do something to actually solve GNU/Linux desktops not having a stable ABI, I think one solution would be a compatibility layer like Wine's, but implementing Ubuntu's ABIs. Then as long as the app runs on supported Ubuntu releases, it will run on any system with this layer. I just hope it wouldn't be a buggy mess like Flatpak is.
We have gone through one perceived reason after the other to try and explain why the year of the Linux desktop wasn’t this one.
Uncharitably, Linux is too busy breaking and deprecating itself to ever become more than a server OS, and even that only works due to companies sponsoring most of the testing and code that makes those parts work. Desktop in all its forms is an unmitigated shit show.
With linux, you’re always one kernel/systemd/$sound system/desktop upgrade away from a broken system.
Personal pains: nvidia drivers, oss->alsa, alsa->pulse audio, pulse audio->pipe wire, init.d to upstart to systemd, anything dkms ever, bash to dash, gtk2 to gtk3, kde3 to kde4 (basically a decade?), gnome 2 to gnome 3, some 10 gnome 3 releases breaking plugins I relied on.
It should be blindingly obvious; windows can shove ads everywhere from the tray bar to start menu and even the damned lock screen, on enterprise editions no less, and STILL have users. This should tell you that linux is missing something.
It’s not the install barrier (it’s never been lower, corporate IT could issue linux laptops, linux on laptops exist from several vendors).
It’s also not software, the world has never placed so many core apps in the browser (even office, these days).
It’s not gaming. Though it’s telling that, in the end, the solution from Valve (Proton) incidentally solves two issues - porting (stable) Windows APIs to Linux, and packaging a complete mini-Linux because we can’t interoperate between distros or even releases of the same distro.
I think the complete and utter disdain in Linux for stability - from libraries through subsystems to display servers, UI toolkits, and the very desktops themselves - is the core problem. Distribution through package management, and the ensuing fragmentation across distros, is a close second.
From there, popularity outside the organization is irrelevant, internal support and userbase is for and on some version of Linux.
As this would spread, we would eventually see global usage increase and global popularity become a non-issue.
Wine and Proton should have levelled the playing field. But they haven't. Also, if you've only just started using Linux, I recommend you wait a few years before forming an opinion.
I wanted to be nice and entered a genuine Windows key still in my laptop's firmware somewhere.
As a thank you Microsoft pulled dozens of the features out of my OS, including remote desktop.
As soon as these latest FSR drivers are ported over I will swap to Linux. What a racket, lol.
I used to be a pretty happy Windows camper (I even got through Me without much complaint), but I'm so glad I moved to Linux and KDE for my private desktops before 11 hit.
Things started going downhill after that.
Things definitely went uphill AFTER Windows 2000.
What on earth would cause someone to say Windows 2000 was a good release? It wasn't even a good release when it came out, and it definitely didn't stand the test of time.
But you can use group policy etc. freely. I don't know how Win 11 is though
Competition. In the first half of the 90s Windows faced a lot more of it. Then they didn't, and standards slipped. Why invest in Windows when people will buy it anyway?
Upgrades. In the first half of the 90s Windows was mostly software bought by PC users directly, rather than bundled with the hardware. So if you could make Windows 95 run in 4MB of RAM rather than 8MB, you'd make way more sales on release day. As the industry matured, this model disappeared in favor of one where users got the OS with their hardware purchase and rarely bought upgrades, then never bought them, then never even upgraded when offered them for free. This inverted the incentive to optimize, because now the customer was the OEMs, not the end user. The only new sales of Windows would be on new machines with the newest specs, and OEMs wanted MS to give users reasons to buy new hardware anyway.
UI testing. In the 1990s the desktop GUI paradigm was new and Apple's competitive advantage was UI quality, so Microsoft ran lots of usability studies to figure out what worked. It wasn't a cultural problem because most UI was designed by programmers who freely admitted they didn't really know what worked. The reason the start button had "Start" written on it was because of these tests. After Windows 95 the culture of usability studies disappeared, as they might imply that the professional designers didn't know what they were doing, and those designers came to compete on looks. Also it just got a lot harder to change the basic desktop UI designs anyway.
The web. When people mostly wrote Windows apps, investing in Windows itself made sense. Once everyone migrated to web apps it made much less sense. Data is no longer stored in files locally so making Explorer more powerful doesn't help, it makes more sense to simplify it. There's no longer any concept of a Windows app so adding new APIs is low ROI outside of gaming, as the only consumer is the browser. As a consequence all the people with ambition abandoned the Windows team to work on web-related stuff like Azure, where you could have actual impact. The 90s Windows/MacOS teams were full of people thinking big thoughts about how to write better software hence stuff like DCOM, OpenDoc, QuickTime, DirectMusic and so on. The overwhelming preference of developers for making websites regardless of the preferences of the users meant developing new OS ideas was a waste of time; browsers would not expose these features, so devs wouldn't use them, so apps wouldn't require them, so users would buy new computers to get access to them.
And that's why MS threw Windows away. It simply isn't a valuable asset anymore.
The answer to maintaining a highly functional and stable OS is piles and piles of backwards compatibility misery on the devs.
You want Windows 9? Sorry - code in the wild checks whether the version string starts with "Windows 9" to determine if the OS is Windows 95 or 98.
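The kind of check reportedly behind the skipped name - a hypothetical reconstruction, not actual shipped code:

```python
# Lazy version detection seen in the wild: matching the prefix catches
# both Windows 95 and Windows 98 in one test...
def is_win9x(os_name):
    return os_name.startswith("Windows 9")

assert is_win9x("Windows 95")
assert is_win9x("Windows 98")
assert not is_win9x("Windows 10")
# ...which is why a real "Windows 9" would have been misdetected as 9x:
assert is_win9x("Windows 9")
```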
This is largely true in North America, UK and AUS/NZ, less true in Europe, a mixed bag in the Middle East and mostly untrue everywhere else.
Perhaps that could be mitigated if someone could come up with an awesome OSS machine code translation layer like Apple's Rosetta.
Love this idea. Love where it is coming from.
No, the academic literature makes the difference between the kernel and the OS as a whole. The OS is meant to provide hardware abstractions to both developers and the user. The Linux world shrugged and said 'okay, this is just the kernel for us, everyone else be damned'. In this view Linux is the complete outlier, because every other commercial OS comes with a full suite of user-mode libraries and applications.
https://distrowatch.com/table.php?distribution=gnomeos
https://distrowatch.com/table.php?distribution=kdelinux
The question is if either will catch any interest and if so, what will happen to regular distributions.
systemd/Linux maybe? Lots of things are more significant than GNU, either way.
I might unironically use this. The Windows 2000 era desktop was light and practical.
I wonder how well it performs with modern high-resolution, high-dpi displays.
From a quick glance at the feature lists it looks quite comparable.
Just target Windows, business as usual, and let Valve do the hard work.
But they do test their Windows games on Linux now and fix issues as needed. I read that CDProjekt does that, at least.
How many game studios were bothering with native Linux clients before Proton became known?
That goes back to address the original question of "But would you want to run these Win32 software on Linux for daily use?"
Maybe Valve can play the reverse switcheroo out of Microsoft's playbook and, once enough people are on Linux, force the developers' hand by not supporting Proton anymore.
googles
Ah, no, that was FreeWin95. What on earth is Free95, it feels like history repeating itself…
There is a ton of useful FOSS for Windows and maybe it is a good push to modernize abandoned projects or make Win32 projects cross-compilable.
Your average user might not even know it's Linux.
And failing everything else, Microsoft is in the position to put WSL center and front, and yet again, that is the laptops that normies will buy.
It's not a moving target. Proton and Wine have shown it can be achieved with greater compatibility than even what Microsoft offers.
It is a moving target. Proton is mostly stuck in the Windows XP world, before most new APIs started being a mix of COM and WinRT.
Even if that isn't the case, almost no company would bother with GNU/Linux to develop with Win32, instead of Windows, Visual Studio, business as usual.
It's a start.
This will never work, because it isn't a radical enough departure from Linux.
Linux occupies the bottom of a well in the cartesian space. Any deviation is an uphill battle. You'll die trying to reach escape velocity.
The forcing factors that pull you back down:
1. Battle-testedness. The mainstream Linux distros just have more eyeballs on them. That means your WINE-first distro (which I'll call "Lindows" in honor of the dead OS from 2003) will have bugs that make people consider abandoning the dream and going back to GNOME on Fedora.
2. Cool factor. Nobody wants to open up their riced-out Linux laptop in class and have their classmate look over and go "yo this n** running windows 85!" (So, you're going to have to port XMonad to WINE. I don't make the rules!)
3. Kernel churn. People will want to run this thing on their brand-new gaming laptop. That likely means they'll need a recent kernel. And while the kernel devs "never break userspace" in theory, in practice you'll need a new set of drivers and Mesa and other add-ons that WILL break things - especially things like 3D apps running through WINE (not to mention audio). Google can throw engineers at the problem of keeping Chromium working across graphics stacks. But can you?
If you could plant your flag in the dirt and say "we fork here" and make a radical left turn from mainline Linux, and get a cohort of kernel devs and app developers to follow you, you'd have a chance.
The idea of "fuck it, let's do Windows everywhere" was introduced by Justine Tunney as an April Fools' joke in the Cosmopolitan repository.
That's it. An April Fools' joke.
Better to consider is the Proton verified count, which has been rocketing upwards.
(That and Linux doesn't implement win32 and wine doesn't exclusively run on Linux.)
If you make a piece of software today and want to package it for Linux, it's an absolute mess. I mean, look at Flatpak or Docker: a common solution for this is to ship your own userspace. That's just insane.
It's much more bloated than it should be, but it's the best way to reliably run old/new software on any given Linux.
If it's a choice between downloading a binary that depends on a stable ABI and compiling the source: the way most Linux software gets installed is downloading a binary that has been compiled for your OS version (from repos), and the next most common way is compiling source through a system that figures out the dependencies for you (source-based distros and repos).
Not talking about the cross-platform versions of .NET and VS-Code. I'm specifically talking about the Windows-specific software I mentioned above.
I don't see this happening, despite the fact that by now, these types of porting efforts were supposed to be trivial because of AI. Yeah, I'll wait.