Meta is using the Linux scheduler designed for Valve's Steam Deck on its servers
416 points
4 hours ago
| 10 comments
| phoronix.com
| HN
Fiveplus
4 hours ago
[-]
Valve is practically singlehandedly dragging the Linux ecosystem forward in areas that nobody else wanted to touch.

They needed Windows games to run on Linux, so we got massive Proton/Wine advancements. They needed better display output for the Deck, and we got HDR and VRR support in Wayland. They also needed smoother frame pacing, and we got a scheduler that Zuck is now using to run data centers.

It's funny to think that Meta's server efficiency is being improved because Valve paid Igalia to make Elden Ring stutter less on a portable Linux PC. This is the best kind of open source trickle-down.

reply
cosmic_cheese
24 minutes ago
[-]
One would've expected one of the many desktop-oriented distros (some with considerable funding, even) to have tackled these things already, but somehow desktop Linux has been stuck in the awkward middle ground of "it technically works, just learn to live with the rough edges" until Valve finally took the initiative. Go figure.
reply
iknowstuff
8 minutes ago
[-]
There's far more of that, starting with the lack of a stable ABI in GNU/Linux distros. Eventually Valve or Google (with Android) is going to swoop in with a user-friendly OS that developers can actually target as a single platform.
reply
MarleTangible
3 hours ago
[-]
Over time they're going to touch things that people have been waiting years for Microsoft to do. I don't have an example in mind at the moment, but it's a lot better to make the changes yourself than to wait for the OS or console manufacturer to take action.
reply
asveikau
3 hours ago
[-]
I was at Microsoft during the Windows 8 cycle. I remember hearing about a kernel feature I found interesting. Then I found out Linux had already had it for a few years at that point.

I think the reality is that Linux is ahead on a lot of kernel stuff. More experimentation is happening.

reply
wmf
3 hours ago
[-]
I was surprised to hear that Windows just added native NVMe support, which Linux has had for many years. I wonder if Azure has been paying the SCSI emulation tax this whole time.
reply
stackskipton
2 hours ago
[-]
Probably; most of the stuff you see in Windows Server these days is backported from Azure improvements.
reply
pantalaimon
55 minutes ago
[-]
Afaik Azure is mostly Linux
reply
ndiddy
27 minutes ago
[-]
The user VMs are mostly Linux but Azure itself runs on a stripped down version of Windows Server and all the VMs are hosted inside Hyper-V. See https://techcommunity.microsoft.com/blog/windowsosplatform/a...
reply
athoneycutt
2 hours ago
[-]
It was always wild to me that their installer was just not able to detect an NVMe drive out of the box in certain situations. I saw it a few times with customers when I was doing support for a Linux company.
reply
b00ty4breakfast
1 hour ago
[-]
When the hood is open for anyone to tinker, lots of little weirdos get to indulge their ideas. Sometimes those ideas are even good!
reply
ethbr1
25 minutes ago
[-]
Never underestimate the efficiency and amazing results of autistic focus.

"Now that's curious..."

reply
dijit
3 hours ago
[-]
yeah, but you have IO Completion Ports…

IO_Uring is still a pale imitation :(

reply
asveikau
3 hours ago
[-]
io_uring does more than IOCP. It's more like an asynchronous syscall interface that avoids the overhead of directly trapping into the kernel. This avoids some overheads IOCP cannot. I'm rusty on the details but the NT kernel has since introduced an imitation: https://learn.microsoft.com/en-us/windows/win32/api/ioringap...
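
For anyone who hasn't used it, here's a minimal sketch with liburing, the userspace wrapper (the raw interface is a pair of submission/completion rings shared with the kernel, so many operations can be batched per syscall, or picked up with no syscall at all in SQPOLL mode). Treat it as illustrative; build with -luring.

    /* One async read, one completion, via liburing. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <liburing.h>

    int main(void)
    {
        struct io_uring ring;
        if (io_uring_queue_init(8, &ring, 0) < 0)
            return 1;

        int fd = open("/etc/hostname", O_RDONLY);
        char buf[256];

        /* Queue an async read on the submission ring... */
        struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
        io_uring_prep_read(sqe, fd, buf, sizeof(buf), 0);
        io_uring_submit(&ring);

        /* ...then reap its completion from the completion ring. */
        struct io_uring_cqe *cqe;
        io_uring_wait_cqe(&ring, &cqe);
        printf("read returned %d\n", cqe->res);   /* bytes read, or -errno */
        io_uring_cqe_seen(&ring, cqe);

        io_uring_queue_exit(&ring);
        return 0;
    }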
reply
loeg
3 hours ago
[-]
IOCP is great and was ahead of Linux for decades, but io_uring is also great. It's a different model, not a poor copy.
reply
torginus
2 hours ago
[-]
I think they are a bit different - in the Windows kernel, all IO is asynchronous at the driver level; on Linux, it's not.

io_uring didn't change that, it only got rid of the syscall overhead (which is still present on Windows), so in actuality they are two different technical solutions that affect different levels of the stack.

In practice, Linux I/O is much faster, owing in part to the fact that Windows file I/O requires locking the file, while Linux does not.

reply
IshKebab
1 hour ago
[-]
Yeah, and Linux is waaay behind in other areas. Windows has had a secure attention sequence (ctrl-alt-del to log in) for several decades now. Linux still doesn't.
reply
roblabla
52 minutes ago
[-]
Linux (well, more accurately, X11), has had a SAK for ages now, in the form of the CTRL+ALT+BACKSPACE that immediately kills X11, booting you back to the login screen.

I personally doubt SAK/SAS is a good security measure anyways. If you've got untrusted programs running on your machine, you're probably already pwn'd.

reply
TeMPOraL
20 minutes ago
[-]
The "threat model" (if anyone even called it that) of applications back then was bugs resulting in unintended spin-locks, and the user not realizing they're critically short on RAM or disk space.
reply
dangus
38 minutes ago
[-]
This setup came from the era of Windows running basically everything as administrator or something close to it.

The whole Windows ecosystem had us trained to right-click on any Windows 9X/XP program that wasn't working right and "run as administrator" to get it to work in Vista/7.

reply
ttctciyf
25 minutes ago
[-]
reply
dangus
45 minutes ago
[-]
Is that something Linux needs? I don’t really understand the benefit of it.
reply
ethbr1
19 minutes ago
[-]
The more powerful form is the UAC full privilege escalation dance that Win 7+(?) does, which is a surprisingly elegant UX solution.

   1. Snapshot the desktop
   2. Switch to a separate secure UI session
   3. Display the snapshot in the background, greyed out, with the UAC prompt rendered topmost in that secure session
It avoids any chance of a user-space program faking or interacting with a UAC window.

Clever way of dealing with the train wreck of legacy Windows user/program permissioning.

reply
mikkupikku
32 minutes ago
[-]
It made a lot more sense in the bygone years of users casually downloading and running EXEs to get more AIM "smilies", or putting in a floppy disk or CD and having the system autorun whatever malware the last user of that disk had. It was the expected norm for everybody's computer to be an absolute mess.

These days, things have gotten far more reasonable, and I think we can generally expect a linux desktop user to only run software from trusted sources. In this context, such a feature makes much less sense.

reply
7bit
3 hours ago
[-]
And behind on a lot of stuff. Microsoft's ACLs are nothing short of one of the best-designed permission systems there are.

On the surface, they can be as simple as Linux's UGO/rwx model if you want them to be, but you can really, REALLY dive into the technology and apply super specific permissions.

reply
jandrese
1 hour ago
[-]
> Microsoft's ACLs are nothing short of one of the best-designed permission systems there are.

You have a hardened Windows 11 system. A critical application was brought forward from a Windows 10 box but it failed, probably a permissions issue somewhere. Debug it and get it working. You cannot pass this off to the vendor; it is on you to fix it. Go.

reply
butlike
1 hour ago
[-]
and why is it not on the vendor of the critical application?
reply
jandrese
4 minutes ago
[-]
Because they aren't allowed on the system where it is installed, and also they don't deal with hardened systems.
reply
7bit
1 hour ago
[-]
Procmon.exe. Give me 2 minutes. You make it sound like it's such a difficult thing to do. It literally will not take me more than 2 minutes to tell you exactly where the permission issue is and how to fix it.
reply
roblabla
41 minutes ago
[-]
Procmon won't show you every type of resource access. Even when it does, it won't tell you which entity in the resource chain caused the issue.

And then you get security products that have the fun idea of removing privileges when a program creates a handle (I'm not joking, that's a thing some products do). So when you open a file with write access and then try to write to it, you end up with permission errors during the write (and not the open) and end up debugging for hours on end, only to discover that some shitty security product is doing stupid stuff...

Granted, that's not related to ACLs. But for every OK idea Microsoft had, they have a dozen terrible ideas that make the whole system horrible.

reply
jandrese
2 minutes ago
[-]
Especially when the permission issue is up the chain from the application. Sure, it is allowed to access that subkey, but not the great-great-grandparent key.
reply
nunez
1 hour ago
[-]
The file permission system on Windows allows for super granular permissions, yes; administering those permissions was a massive pain, especially on Windows file servers.
reply
torginus
2 hours ago
[-]
And they work on everything. You can have a mutex, a window handle, or a process protected by an ACL.
reply
bbkane
2 hours ago
[-]
Do you have any favorite docs or blogs on these? Reading about one of the best designed permissions systems sounds like a fun way to spend an afternoon ;)
reply
trueismywork
3 hours ago
[-]
You have ACLs on Linux too.
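
For reference, the usual route is getfacl/setfacl (e.g. `setfacl -m u:alice:r file`), and the same data is reachable from C through libacl. A minimal sketch, assuming the libacl headers are installed (link with -lacl):

    /* Print a file's POSIX access ACL -- the same data getfacl shows. */
    #include <stdio.h>
    #include <sys/acl.h>

    int main(int argc, char **argv)
    {
        if (argc < 2) {
            fprintf(stderr, "usage: %s <path>\n", argv[0]);
            return 1;
        }

        acl_t acl = acl_get_file(argv[1], ACL_TYPE_ACCESS);
        if (!acl) {
            perror("acl_get_file");
            return 1;
        }

        char *text = acl_to_text(acl, NULL);   /* "user::rw-\ngroup::r--\n..." */
        if (text) {
            printf("%s", text);
            acl_free(text);
        }
        acl_free(acl);
        return 0;
    }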
reply
Arainach
2 hours ago
[-]
ACLs in Linux were tacked on later; not everything supports them properly. They were built into Windows NT from the start and are used consistently across kernel and userspace, making them far more useful in practice.

Also, as far as I know Linux doesn't support DENY ACLs, which Windows does.

reply
onraglanroad
2 hours ago
[-]
Yes it does.
reply
112233
2 hours ago
[-]
since when?
reply
onraglanroad
1 hour ago
[-]
Since some of us could be bothered reading docs. Give it a try and see how it works out for you.
reply
112233
1 hour ago
[-]
Some of us can! I certainly enjoy doing it, and according to "man 5 acl" what you assert is completely false. Unless you have a particular commit or document from kernel.org you had in mind?
reply
112233
2 hours ago
[-]
Haha, sure. Sorry, it's not you, it's the ACLs (and my nerves). Have you tried configuring NFSv4 ACLs on Linux? Because the kernel devs are against supporting them, you either use some other OS or have all sorts of "fun". Also, not to be confused with all sorts of LSM-based ACLs... Linux has ACLs in the most ridiculous way imaginable...
reply
7bit
2 hours ago
[-]
Not by default. Not as extensive as in Windows. What's your point?
reply
dabockster
2 hours ago
[-]
Oh yeah for sure. Linux is amazing in a computer science sense, but it still can't beat Windows' vertically integrated registry/GPO based permissions system. Group/Local Policy especially, since it's effectively a zero coding required system.

Ubuntu just recently got a way to automate its installer (recently being during covid). I think you can do the same on RHEL too. But that's largely it on Linux right now. If you need to admin 10,000+ computers, Windows is still the king.

reply
Elv13
2 hours ago
[-]
Debian (and thus Ubuntu) has had full support for automated installs since the '90s. It's been built into `dpkg` since forever. That includes saving or generating answers to install-time questions, PXE deployment, ghosting, cloud-init, and everything else. Then stuff like Ansible/Puppet has been automating deployment for a long time too. They might have added yet another way of doing it, but full-stack deployment automation has been there for as long as Ubuntu has existed.
reply
benterix
1 hour ago
[-]
> Ubuntu just recently got a way to automate its installer (recently being during covid). I think you can do the same on RHEL too. But that's largely it on Linux right now. If you need to admin 10,000+ computers, Windows is still the king.

What?! I was doing kickstart on Red Hat (it wasn't called Enterprise Linux back then) at my job 25 years ago; I believe we were using floppies for that.

reply
cactacea
1 hour ago
[-]
> Ubuntu just recently got a way to automate its installer (recently being during covid).

Preseed is not new at all:

https://wiki.debian.org/DebianInstaller/Preseed

RH has also had kickstart since basically forever now.

I've been using both preseeds and kickstart professionally for over a decade. Maybe you're thinking of the graphical installer?

reply
LeSaucy
2 hours ago
[-]
Still the king but developing/testing/debugging group policy issues is a miserable experience.
reply
7bit
2 hours ago
[-]
I always found it straightforward. Never had an issue, and I've implemented my fair share on thousands of devices and servers.
reply
lll-o-lll
12 minutes ago
[-]
Not an implementer of group policy, more of a consumer. There are 2 things that I find extremely problematic about them in practice.

- There does not seem to be a way to determine which machines in the fleet have successfully applied a policy. If you need a policy to be active before deploying something (via a different method), or things break, what do you do?

- I’ve had far too many major incidents that were the result of unexpected interactions between group policy and production deployments.

reply
max-privatevoid
1 hour ago
[-]
I'm surprised no one has said NixOS yet.
reply
esseph
2 hours ago
[-]
> Ubuntu just recently got a way to automate its installer (recently being during covid). I think you can do the same on RHEL too. But that's largely it on Linux right now. If you need to admin 10,000+ computers, Windows is still the king.

1. cloud-init support was in RHEL 7.2, which was released on November 19, 2015. A decade ago.

2. Checking on Ubuntu, it looks like it was supported in Ubuntu 18.04 LTS in April 2018.

3. For administering tens of thousands of servers, if you're in the RHEL ecosystem you use Satellite and its Ansible integration. That's also been going on for... about a decade. You don't need much integration, though, other than a host list of names and IPs.

There are a lot of people here handling tens of thousands or hundreds of thousands of Linux servers a day (probably a few handling millions).

reply
6r17
2 hours ago
[-]
Tbh I'm starting to think Microsoft won't be able to keep its position in the OS market. With Steam doing all the hard work and having a great market to play with, the vast range of distributions to choose from, and, most importantly, how easy it has become to create an operating system from scratch, they've not only lost all possible appeal, they seem stuck on a really weird fetishism around their taskbar and just haven't given me any kind of reason to be excited about Windows.

Their research department rocks, however, so it's not a full bash on Microsoft at all; I just feel like they are focusing on other, way more interesting stuff.

reply
embedding-shape
1 hour ago
[-]
> Tbh i'm starting to think that I do not see microsoft being able to keep it's position in the OS market

It's a big space. Traditionally, Microsoft has held the multimedia, gaming, and many professional segments, but with Valve making a large push into the first two and Microsoft not even giving it a half-hearted try, it might just be that corporate computers continue using Microsoft, people's home media equipment is all Valve, and hipsters (and others...) keep on using Apple.

reply
Arainach
1 hour ago
[-]
Kernel improvements are interesting to geeks and data centers, but open source is fundamentally incompatible with great user experience.

Great UX requires a lot of work that is hard but not algorithmically challenging. It requires consistency and getting many stakeholders to buy in. It requires spending lots of time on things that will never be used by even 10% of people.

Windows got a proper graphics compositor (DWM) in 2006 and made it mandatory in 2012. macOS had one even earlier. Linux fought against Compiz, and while Wayland feels inevitable, vocal forces still complain about and argue against it. Linux has a dozen incompatible UI toolkits.

Screen readers on Linux are a mess. High contrast is a mess. Setting font size in a way that most programs respect is a mess. Consistent keyboard shortcuts are a mess.

I could go on, but these are problems that open source is not set up to solve. These are problems that are hard, annoying, not particularly fun. People generally only solve them when they are paid to, and often only when governments or large customers pass laws requiring the work to be done and threaten to not buy your product if you don't do it. But they are crucially important things to building a great, widely adopted experience.

reply
einr
1 hour ago
[-]
…and you are implying that Microsoft Windows 11 is a better example of ”great user experience”?
reply
Arainach
43 minutes ago
[-]
If you have anything less than perfect vision and need any accessibility features, yes. If you have a High DPI screen, yes. In many important areas (window management, keyboard shortcuts, etc.), yes.

Here's one top search result that goes into far more detail: https://www.reddit.com/r/linux/comments/1ed0j10/the_state_of...

reply
benoau
3 hours ago
[-]
"It just works" sleep and hibernate.

"Slide left or right" CPU and GPU underclocking.

reply
dijit
3 hours ago
[-]
“it just works” sleep was working, at least on basically every laptop I had the last 10 years…

until the new s2idle stuff that Microsoft and Intel have foisted on the world (to update your laptop while sleeping… I guess?)

reply
dabockster
2 hours ago
[-]
From what I read, it was a lot of the prosumer/gamer brands (MSI, Gigabyte, ASUS) implementing their part of sleep/hibernate badly on their motherboards. Which honestly lines up with my experience with them and other chips they use (in my case, USB controllers). Lots of RGB and maybe overclocking tech, but the cheapest power management and connectivity chips they can get (arguably what usually gets used the most by people).
reply
zargon
1 hour ago
[-]
Sleep brokenness is ecosystem-wide. My Thinkpad crashes/freezes during sleep 3 times a week. Lenovo serviced/replaced it 3 times to no avail.
reply
chocochunks
52 minutes ago
[-]
It never really worked in games even with S3 sleep. The new connected standby stuff created new issues, but sleeping a laptop while gaming was always a roulette wheel. SteamOS and the like actually work; maybe 1 in 100 times I've run into an issue. Windows was 50/50.
reply
pmontra
3 hours ago
[-]
Sleep and hibernate don't just work on Windows unless Microsoft works with laptop and board manufacturers to make Windows play nice with all those drivers. It's inevitable that it's hit and miss on any other OS that manufacturers don't care much about. Apple does nearly everything inside its own walls; that's why it just works.
reply
Insanity
3 hours ago
[-]
“It just works” sadly isn’t true across the Apple Ecosystem anymore.

Liquid Glass ruined multitasking UX on my iPad. :(

Also my macbook (m4 pro) has random freezes where finder becomes entirely unresponsive. Not sure yet why this happens but thankfully it’s pretty rare.

reply
pbh101
3 hours ago
[-]
Regardless of how it must be implemented, if this is a desirable feature then this explanation isn’t an absolution of Linux but rather an indictment: its development model cannot consistently provide this product feature.

(And same for Windows to the degree it is more inconsistent on Windows than Mac)

reply
AnthonyMouse
33 minutes ago
[-]
> its development model cannot consistently provide this product feature.

The real problem is that the hardware vendors aren't using its development model. To make this work you either need a) the hardware vendor to write good drivers/firmware, or b) the hardware vendor to publish the source code or sufficient documentation so that someone else can reasonably fix their bugs.

The Linux model is the second one. Which isn't what's happening when a hardware vendor doesn't do either of them. But some of them are better than others, and it's the sort of thing you can look up before you buy something, so this is a situation where you can vote with your wallet.

A lot of this is also the direct fault of Microsoft for pressuring hardware vendors to support "Modern Standby" instead of rather than in addition to S3 suspend, presumably because they're organizationally incapable of making Windows Update work efficiently so they need Modern Standby to paper over it by having it run when the laptop is "asleep" and then they can't have people noticing that S3 is more efficient. But Microsoft's current mission to get everyone to switch to Linux appears to be in full swing now, so we'll see if their efforts on that front manage to improve the situation over time.

reply
gf000
33 minutes ago
[-]
The feature itself works. There is just hardware that is buggy and doesn't support it properly.

That's a vastly different statement.

reply
spauldo
2 hours ago
[-]
It's not the development model at fault here. It's the simple fact that Windows makes up nearly the entire user base for PCs. Companies make sure their hardware works with Windows, but many don't bother with Linux because it's such a tiny percentage of their sales.
reply
tharkun__
2 hours ago
[-]
Except when it doesn't. I can't upgrade my Intel graphics drivers to any newer version than what came with the laptop or else my laptop will silently die while asleep. Internet is full of similar reports from other laptop and graphics manufacturers and none have any solutions that work. The only thing that reliably worked is to restore the original driver version. Doesn't matter if I use the WHQL version(s) or something else.
reply
mschuster91
2 hours ago
[-]
> Regardless of how it must be implemented, if this is a desirable feature then this explanation isn’t an absolution of Linux but rather an indictment: its development model cannot consistently provide this product feature.

The problem is: the specifications of ACPI are complex, Windows' behavior tends to be pretty much trash and most hardware tends to be trash too (AMD GPUs for example were infamous for not being resettable for years [1]), which means that BIOSes have to work around quirks on both the hardware and software. Usually, as soon as it is reasonably working with Windows (for a varying definition of "reasonably", that is), the ACPI code is shipped and that's it.

Unfortunately, Linux follows standards (or at least, it tries to) and cannot fully emulate the numerous Windows quirks... and on top of that, GPUs tend to be hot piles of dung requiring proprietary blobs that make life even worse.

[1] https://www.nicksherlock.com/2020/11/working-around-the-amd-...

reply
devnullbrain
2 hours ago
[-]
I don't understand this comment in this context. Both of these features work on my Steam Deck. Neither of them have worked on any Windows laptop my employers have foisted upon me.
reply
Krssst
2 hours ago
[-]
On my Framework 13 AMD : Sleep just works on Fedora. Sleep is unreliable on Windows; if my fans are all running at full speed while running a game and I close the lid to begin sleeping, it will start sleeping and eventually wake up with all fans blaring.
reply
tremon
1 hour ago
[-]
That requires driver support. What you're seeing is Microsoft's hardware certification forcing device vendors to care about their products. You're right that this is lacking on Linux, but it's not a slight on the kernel itself.
reply
seba_dos1
2 hours ago
[-]
Both of these have worked fine for the last 15 years or so on all my laptops.
reply
shantara
43 minutes ago
[-]
I’ve heard from several people who game on Windows that Gamescope side panel with OS-wide tweakables for overlays, performance, power, frame limiters and scaling is something that they miss after playing on Steam Deck. There are separate utilities for each, but not anything so simple and accessible as in Gamescope.
reply
mstank
2 hours ago
[-]
Valve... please do Github Actions next
reply
xmprt
2 hours ago
[-]
I wonder what Valve uses for source control (no pun intended) internally.
reply
harrisoned
2 hours ago
[-]
reply
packetlost
2 hours ago
[-]
Kernel level anti-cheat with trusted execution / signed kernels is probably a reasonable new frontier for online games, but it requires a certain level of adoption from game makers.
reply
dabockster
2 hours ago
[-]
This is a part of Secure Boot, which Linux people have raged against for a long time. Mostly because the main key signing authority was Microsoft.

But here's my rub: no one else bothered to step up to be a key signer. Everyone has instead whined for 15 years and told people to disable Secure Boot and the loads of trusted compute tech that depends on it, instead of actually building and running the necessary infra for everyone to have a Secure Boot authority outside of big tech. Not even Red Hat/IBM even though they have the infra to do it.

Secure Boot and signed kernels are proven tech. But the Linux world absolutely needs to pull their heads out of their butts on this.

reply
ndriscoll
2 hours ago
[-]
The goals of the people mandating Secure Boot are completely opposed to the goals of people who want to decide what software they run on the computer they own. Literally the entire point of remote attestation is to take that choice away from you (e.g. because they don't want you to choose to run cheating software). It's not a matter of "no one stepped up"; it's that Epic Games isn't going to trust my secure boot key for my kernel I built.

The only thing Secure Boot provides is the ability for someone else to measure what I'm running and therefore the ability to tell me what I can run on the device I own (most likely leading to them demanding I run malware like the adware/spyware bundled into Windows). I don't have a maid to protect against; such attacks are a completely non-serious argument for most people.

reply
cogman10
1 hour ago
[-]
And all this came from big game makers turning their games into casinos. The reason they want everything locked down is money is on the line.
reply
jpalawaga
1 hour ago
[-]
anti-cheat far precedes the casinoification of modern games.

nobody wants to play games that are full of bots. cheaters will destroy your game and value proposition.

anti-cheat is essentially existential for studios/publishers that rely on multiplayer gaming.

So yes, the second half of your statement is true. The first half--not so much.

reply
cogman10
1 hour ago
[-]
> anti-cheat far precedes the casinoification of modern games.

> nobody wants to play games that are full of bots. cheaters will destroy your game and value proposition.

You are correct, but I think I did a bad job of communicating what I meant. It's true that anti-cheat has been around since forever. However, what's changed relatively recently is anti-cheat integrated into the kernel alongside requirements for signed kernels and secure boot. This dates back to 2012, right as games like Battlefield started introducing gambling mechanics into their games.

There were certainly other games that had some gambling-adjacent aspects to them, but the 2010s are pretty close to when esports, along with in-game gambling, were starting to bud.

reply
codeflo
2 hours ago
[-]
There are plenty of locked down computers in my life already. I don't need or want another system that only runs crap signed by someone, and it doesn't really matter whether that someone is Microsoft or Redhat. A computer is truly "general purpose" only if it will run exactly the executable code I choose to place there, and Secure Boot is designed to prevent that.
reply
mhitza
2 hours ago
[-]
I don't know about the ecosystem overall, but Fedora has been working for me with Secure Boot enabled for a long time.

Having the option to disable Secure Boot was probably due to backlash at the time and antitrust concerns.

Aside from providing protection against "evil maid" attacks (right?), Secure Boot is in the interest of software companies. Just like platform "integrity" checks.

reply
packetlost
2 hours ago
[-]
I'm pro Secure Boot fwiw and have had it working on my Linux systems for a while.
reply
esseph
2 hours ago
[-]
I'm not giving a game ownership of my kernel, that's fucking insane. That will lead to nothing but other companies using the same tech to enforce other things, like what software you can run on your own stuff.

No thanks.

reply
guidopallemans
3 hours ago
[-]
Surely a gaming handheld counts
reply
theLiminator
1 hour ago
[-]
Imagine if windows moved to the linux kernel and then used wine/proton to serve their own userspace.
reply
layer8
1 hour ago
[-]
The Linux kernel and Windows userspace are not very well matched on a fundamental level. I’m not sure we should be looking forward to that, other than for running games and other insular apps.
reply
theLiminator
1 hour ago
[-]
Ah, I was being facetious, I think it would be pretty funny if it happened though.
reply
duped
3 hours ago
[-]
> I don't have an example in mind at the moment

I do: MIDI 2.0. It's not that they're not doing it, just that they're doing it at a glacial pace compared to everyone else. They have reasons for this (a complete rewrite of the Windows media services APIs and internals), but it's taken years and delays to do something that shipped on Linux over two years ago and on Apple more like five (although there were some protocol changes over that time).

reply
bilekas
3 hours ago
[-]
I do agree. It's also thanks to gaming that the GPU industry was in such a good state to be consumed by AI now. Game development used to always be the frontier of software optimisation techniques and ingenious approaches to the constraints.
reply
thdrtol
9 minutes ago
[-]
I have a feeling this will also drag Linux mobile forwards.

Currently almost no one is using Linux for mobile because of the lack of apps (banking, for example) and bad hardware support. When developing for Linux becomes more and more attractive, this might change.

reply
baq
2 hours ago
[-]
I low key hope the current DDR5 prices push them to drag the Linux memory and swap management into the 21st century, too, because hard locking on low memory got old a while ago
reply
the_pwner224
1 hour ago
[-]
It takes a solid 45 seconds for me to enable zram (compressed RAM as swap) on a fresh Arch install. I know that doesn't solve the issue for 99% of people who don't even know what zram is / have no idea how to do it / are trying to do it for the first time, but it would be pretty easy for someone to enable that in a distro. I wouldn't be shocked if it is already enabled by default in Ubuntu or Fedora.
reply
ahepp
44 minutes ago
[-]
what behavior would you like to see when primary memory is under extreme pressure?
reply
baq
34 minutes ago
[-]
See mac or windows: grow swap automatically up to some sane limit, show a warning, give user an option to kill stuff; on headless systems, kill stuff. Do not page out critical system processes like sshd or the compositor.

A hard lock which requires a reboot or god forbid power cycling is the worst possible outcome, literally anything else which doesn’t start a fire is an improvement TBH.

reply
jhasse
30 minutes ago
[-]
The same as Windows does. Instead, the system freezes.
reply
stdbrouw
1 hour ago
[-]
I feel like all of the elements are there: zram, zswap, various packages that improve on default oom handling... maybe it's more about creating sane defaults that "just work" at this point?
reply
GZGavinZhao
1 hour ago
[-]
Next thing I want them to work on is Linux suspend(-to-RAM) support!
reply
captn3m0
3 hours ago
[-]
My favourite is the Windows futex primitives being shipped on Linux: https://lwn.net/Articles/961884/
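
One concrete piece of that is futex_waitv() (Linux 5.16), a vectored "wait on any of these futex words" call added largely so Wine/Proton could implement WaitForMultipleObjects-style waits without burning a thread per handle. A rough sketch of the raw syscall, assuming a kernel and headers new enough to define struct futex_waitv, FUTEX_32 and SYS_futex_waitv:

    #define _GNU_SOURCE
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <time.h>
    #include <unistd.h>
    #include <sys/syscall.h>
    #include <linux/futex.h>

    static uint32_t words[2];   /* two 32-bit futex words */

    int main(void)
    {
        struct futex_waitv waiters[2];
        memset(waiters, 0, sizeof(waiters));
        for (int i = 0; i < 2; i++) {
            waiters[i].uaddr = (uintptr_t)&words[i];
            waiters[i].val   = 0;          /* sleep while the word still equals 0 */
            waiters[i].flags = FUTEX_32;
        }

        struct timespec ts;                /* absolute timeout, 1s from now */
        clock_gettime(CLOCK_MONOTONIC, &ts);
        ts.tv_sec += 1;

        /* Returns the index of the futex that woke us, or -1 with errno set
         * (ETIMEDOUT here, since nothing ever wakes us). */
        long ret = syscall(SYS_futex_waitv, waiters, 2, 0, &ts, CLOCK_MONOTONIC);
        printf("futex_waitv -> %ld\n", ret);
        return 0;
    }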
reply
raverbashing
18 minutes ago
[-]
Let's be honest

Linux (and its ecosystem) sucks at having focus and direction.

They might get something right here and there, especially related to servers, but they're awful at avoiding spinning their wheels.

See how slow Wayland progress has been. See how some distros moved to it only after a lot of kicking and screaming.

See how a lot of "newer" peripherals (sometimes a model that's been 2 or 3 years on the market) only barely work even in a newer distro, or have weird bugs.

"but the manufacturers..." "but the hw producers..." "but open source..." whine

Because Linux lacks a good hierarchy for isolating responsibility, instead going for "every kernel driver can do whatever it wants" together with "interfaces that keep flipping and flopping at every new kernel release" - a notable (good) exception being USB userspace drivers. And don't even get me started on the whole mess that is Xorg drivers.

And then you have a Rube Goldberg machine in the form of udev, D-Bus and whatnot, or whatever newer solution solves half the problems and creates a new collection of bugs.

reply
cosmic_cheese
14 minutes ago
[-]
Honestly I can't see it remaining tenable to keep things like drivers in the kernel for too much longer… both due to the sheer speed at which the industry moves and due to the security implications involved.
reply
irusensei
2 hours ago
[-]
If I'm not mistaken this has been greatly facilitated by the recent BPF-based extension mechanism (sched_ext) that allows developers to go crazy creating schedulers and other functionality through a protected virtual-machine mechanism provided by the kernel.
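
For a sense of what that looks like, here's a rough sketch modelled on the scx_simple example from the sched-ext/scx repo: every runnable task goes onto one global FIFO dispatch queue, and if the BPF scheduler stalls or misbehaves the kernel ejects it and falls back to the built-in scheduler. Helper and macro names have shifted between scx/kernel releases, so treat this as showing the shape rather than something to paste in.

    /* Illustrative sched_ext skeleton in BPF C (not a drop-in). */
    #include <scx/common.bpf.h>

    char _license[] SEC("license") = "GPL";

    /* Put every runnable task on the shared global dispatch queue with
     * the default slice: a single global FIFO. */
    void BPF_STRUCT_OPS(minimal_enqueue, struct task_struct *p, u64 enq_flags)
    {
        scx_bpf_dispatch(p, SCX_DSQ_GLOBAL, SCX_SLICE_DFL, enq_flags);
    }

    /* Register the callbacks; anything not provided falls back to defaults. */
    SEC(".struct_ops.link")
    struct sched_ext_ops minimal_ops = {
        .enqueue = (void *)minimal_enqueue,
        .name    = "minimal",
    };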
reply
delusional
2 hours ago
[-]
> Valve is practically singlehandedly dragging the Linux ecosystem forward in areas that nobody else wanted to touch.

I'm loving what Valve has been doing, and their willingness to shove money into projects that have long been underinvested in, BUT. Please don't forget all the volunteers who developed these systems for years before Valve decided to step up. All of this is only possible because a ton of different people spent decades slowly building a project that for most of its lifetime seemed like a dead-end idea.

Wine as a software package is nothing short of miraculous. It has been monumentally expensive to build, but is provided to everyone to freely use as they wish.

Nobody, and I do mean NOBODY, would have funded a project that spent 20 years struggling to run Office and Photoshop. Valve took it across the finish line into a commercially useful project, but they could not have done that without the decade-plus of work before that.

reply
aeyes
1 hour ago
[-]
Long before Valve there was CrossOver, which sold a polished version of Wine that made a lot of Windows-only enterprise software work on Linux.

I'm sure there have been more commercial contributors to Wine beyond Valve and CodeWeavers.

reply
PartiallyTyped
1 hour ago
[-]
To be fair, Proton is based on DXVK, which started as one guy's project because he wanted to play Nier: Automata on Linux.

The guy is Philip Rebohle.

reply
downrightmike
1 hour ago
[-]
Man, if only meta would give back, oh and also stop letting scammers use their AI to scam our parents, but hey, that accounted for 10% of their revenue this last year, that's $16 BILLION.
reply
ls612
3 hours ago
[-]
Gaben does nothing: Wins

Gaben does something: Wins Harder

reply
7bit
3 hours ago
[-]
He's the person I want to meet the least out of all the people in the world; he is that much of my hero.
reply
dabockster
2 hours ago
[-]
> This is the best kind of open source trickledown.

We shouldn't be depending on trickledown anything. It's nice to see Valve contributing back, but we all need to remember that they can totally evaporate/vanish behind proprietary licensing at any time.

reply
dymk
2 hours ago
[-]
They have to abide by the Wine license, which is basically GPL, so unless they’re going to make their own from scratch, they can’t make the bread and butter of their compat layer proprietary
reply
stavros
2 hours ago
[-]
How? It's GPL.
reply
mikkupikku
4 hours ago
[-]
> SCX-LAVD has been worked on by Linux consulting firm Igalia under contract for Valve

It seems like every time I read about this kind of stuff, it's being done by contractors. I think Proton is similar. Of course that makes it no less awesome, but it makes me wonder about the contractor to employee ratio at Valve. Do they pretty much stick to Steam/game development and contract out most of the rest?

reply
ZeroCool2u
4 hours ago
[-]
Igalia is a bit unique in that it serves as a single corporate entity for organizing a lot of sponsored work on the Linux kernel and open source projects. You'll notice in their blog posts they have collaborations with a number of other large companies seeking to sponsor very specific development work. For example, Google works with them a lot. I think it really just simplifies a lot of the logistics of paying folks to do this kind of work, plus the Igalia employees can get shared efficiencies and savings for things like benefits etc.
reply
chucky_z
3 hours ago
[-]
This isn’t explicitly called out in any of the other comments in my opinion so I’ll state this. Valve as a company is incredibly focused internally on its business. Its business is games, game hardware, and game delivery. For anything outside of that purview instead of trying to build a huge internal team they contract out. I’m genuinely curious why other companies don’t do this style more often because it seems incredibly cost effective. They hire top level contractors to do top tier work on hyper specific areas and everyone benefits. I think this kind of work is why Valve gets a free pass to do some real heinous shit (all the gambling stuff) and maintain incredible good will. They’re a true “take the good with the bad” kind of company. I certainly don’t condone all the bad they’ve put out, and I also have to recognize all the good they’ve done at the same time.

Back to the root point. Small company focused on core business competencies, extremely effective at contracting non-core business functions. I wish more businesses functioned this way.

reply
javier2
1 hour ago
[-]
Yeah, I suppose this workflow is not for everyone. I can only imagine Valve has very specific issues or requirements in mind when they hire contractors like this. When you hire like this, I suspect what one really pays for is a well-known name that will be able to push something important to you into upstream Linux. It's the right way to do it if you want it resolved quickly. If you come in as a fresh contributor, landing features upstream could take years.
reply
smotched
3 hours ago
[-]
What are the bad practices Valve is doing in gambling?
reply
mewse-hn
3 hours ago
[-]
Loot box style underage gambling in their live service games - TF2 hats, counterstrike skins, "trading cards", etc etc
reply
crtasm
3 hours ago
[-]
Their games and systems tie into huge gambling operations on 3rd party sites

If you have 30mins for a video I recommend People Make Games' documentary on it https://www.youtube.com/watch?v=eMmNy11Mn7g

reply
trinsic2
2 hours ago
[-]
Yeah, I'm sorry. Valve is the last company people should be focusing on for this type of behavior. All the other AAA game companies use these mechanics to deliberately manipulate players. IMHO Valve doesn't use predatory practices to keep this stuff going.
reply
heywoods
2 hours ago
[-]
Just because they weren’t the first mover into predatory practices doesn’t mean they can’t say no to said practices. Each actor has agency to make their own operating and business decisions. Is Valve the worst of the lot? Absolutely not. But it was still their choice to implement.
reply
inexcf
1 hour ago
[-]
What makes Valve special is that they were the first mover on practices like loot boxes and game passes... but they never pushed it as far as the competition, where it became predatory.
reply
msh
3 hours ago
[-]
Lootboxes comes to mind.
reply
tayo42
3 hours ago
[-]
I feel like I rarely see contracting out work go well. This seems like an exception.
reply
OkayPhysicist
2 hours ago
[-]
The .308 footgun with software contracting stems from a misunderstanding of what we pay software developers for. The model under which contracting seems like the right move is "we pay software developers because we want a unit of software", like how you pay a carpenter to build you some custom cabinets. If the union of "things you have a very particular opinion about, and can specify coherently" and "things you don't care about" completely covers a project, contracting works great for that purpose.

But most of the time you don't want "a unit of software", you want some amorphous blob of product and business wants and needs, continuously changing at the whims of business, businessmen, and customers. In this context, sure, you're paying your developers to solve problems, but moreover you're paying them to store the institutional knowledge of how your particular system is built. Code is much easier to write than to read, because writing code involves applying a mental model that fits your understanding of the world onto the application, whereas reading code requires you to try and recreate someone else's alien mental model. In the situation of in-house products and business automation, at some point your senior developers become more valuable for their understanding of your codebase than their code output productivity.

The context of "I want this particular thing fixed in a popular open source codebase that there are existing people with expertise in", contracting makes a ton of sense, because you aren't the sole buyer of that expertise.

reply
magicalhippo
3 hours ago
[-]
If you have competent people on both sides who care, I don't see why it wouldn't work.

The problem seems, at least from a distance, to be that bosses treat it as a fire-and-forget solution.

We haven't had any software done by outsiders yet, but we have hired consultants to help us on specifics, like changing our infra and helping move local servers to the cloud. They've been very effective and helped us a lot.

We had talks though so we found someone who we could trust had the knowledge, and we were knowledgeable enough ourselves that we could determine that. We then followed up closely.

reply
stackskipton
2 hours ago
[-]
Most companies that hire a ton of contractors are doing it for business/financial reporting reasons. Contractors don't show up as employees, so investors don't see the employee count rise and the "revenue per employee" metric doesn't get dragged down, and contractors can be cut immediately with no further expenses. Laid-off employees take about a quarter to be truly shed from the books between severance, vacation payouts, and unemployment insurance.
reply
tayo42
3 hours ago
[-]
I think your first 2 sentences are pretty common issues though.
reply
TulliusCicero
2 hours ago
[-]
Valve contracts out to actually competent people and companies rather than giant bodycount consulting firms.
reply
to11mtm
2 hours ago
[-]
I've seen both good and bad contractors in multiple industries.

When I worked in the HFC/Fiber plant design industry, the simple act of "Don't use the same boilerplate MSA for every type of vendor" and being more specific about project requirements in the RFP makes it very clear what is expected, and suddenly we'd get better bids, and would carefully review the bids to make sure that the response indicated they understood the work.

We also had our own 'internal' cost estimates (i.e. if we had the in house capacity, how long would it take to do and how much would it cost) which made it clear when a vendor was in over their head under-bidding just to get the work, which was never a good thing.

And, I've seen that done in the software industry as well, and it worked.

That said, the main 'extra' challenge in IT is that many of the good players aren't going to be the ones beating down your door like the big 4 or a WITCH consultancy will.

But really at the end of the day, the problem is what often happens is that business-people who don't really know (or necessarily -care-) about specifics enough unfortunately are the people picking things like vendors.

And worse, sometimes they're the ones writing the spec and not letting engineers review it. [0]

[0] - This once led to an off-shore body shop getting a requirement along the lines of 'the stored procedures and SQL called should be configurable' and sure enough the web.config had ALL the SQL and stored procedures as XML elements, loaded from config just before the DB call, thing was a bitch to debug and their testing alone wreaked havoc on our dev DB.

reply
WD-42
2 hours ago
[-]
Igalia isn’t your typical contractor. It’s made up of competent developers that actually want to be there and care to see open source succeed. Completely different ball game.
reply
abnercoimbre
3 hours ago
[-]
Nope. Plenty of top-tier contractors work quietly with their clientele and let the companies take the credit (so long as they reference the contractor to others, keeping the gravy train going.)

If you don't see it happening, the game is being played as intended.

reply
tapoxi
4 hours ago
[-]
Valve is actually extremely small, I've heard estimates at around 350-400 people.

They're also a flat organization, with all the good and bad that brings, so scaling with contractors is easier than bringing on employees that might want to work on something else instead.

reply
sneak
36 minutes ago
[-]
300 people isn’t “extremely small” for a company. I don’t work with/for companies over 100 people, for example, and those are already quite big.
reply
hatthew
6 minutes ago
[-]
the implied observation is that valve is extremely small relative to what it does and how big most people would expect it to be
reply
mindcrash
4 hours ago
[-]
Proton is mainly a co-effort between in-house developers at Valve (with support on specific parts from contractors like Igalia), developers at CodeWeavers and the wider community.

For contextual, super specific, super specialized work (e.g. SCX-LAVD, the DirectX-to-Vulkan and OpenGL-to-Vulkan translation layers in Proton, and most of the graphics driver work required to make games run on the upcoming ARM based Steam Frame) they like to subcontract work to orgs like Igalia but that's about it.

reply
everfrustrated
4 hours ago
[-]
Valve is known to keep their employee count as low as possible. I would guess anything that can reasonably be contracted out is.

That said, something like this which is a fixed project, highly technical and requires a lot of domain expertise would make sense for _anybody_ to contract out.

reply
treyd
4 hours ago
[-]
They seem to be doing it through Igalia, which is a company based on specialized consulting for the Linux ecosystem, as opposed to hiring individual contractors. Your point still stands, but from my perspective this arrangement makes a lot of sense while the Igalia employees have better job security than they would as individual contractors.
reply
izacus
4 hours ago
[-]
This is how "Company funding OSS" looks like in real life.

There have been demands on HN lately for more of that. This is what it looks like when it happens: a company paying for OSS development.

reply
koverstreet
2 hours ago
[-]
Speaking for myself, Valve has been great to work with - chill, and they bring real technical focus. It's still engineers running the show there, and they're good at what they do. A real breath of fresh air from much of the tech world.
reply
wildzzz
4 hours ago
[-]
It would be a large effort to stand up a department that solely focuses on Linux development just like it would be to shift game developers to writing Linux code. Much easier to just pay a company to do the hard stuff for you. I'm sure the steam deck hardware was the same, Valve did the overall design and requirements but another company did the actual hardware development.
reply
jvanderbot
4 hours ago
[-]
They probably needed some point expertise on this one, as they build out their teams.
reply
Brian_K_White
3 hours ago
[-]
I don't know what you're trying to suggest or question. If there is a question here, what is it exactly, and why is that question interesting? Do they employ contractors? Yes. Why was that a question?
reply
mikkupikku
2 hours ago
[-]
Wut.
reply
bogwog
2 hours ago
[-]
Valve has a weird obsession with maximizing their profit-per-employee ratio. There are stories from ex-employees out on the web about how this creates a hostile environment, and perverse incentives to sabotage those below you to protect your own job.

I don't remember all the details, but it doesn't seem like a great place to work, at least based on the horror stories I've read.

Valve does a lot of awesome things, but they also do a lot of shitty things, and I think their productivity is abysmal based on what you'd expect from a company with their market share. They have very successful products, but it's obvious that basically all of their income comes from rent-seeking from developers who want to (well, need to) publish on Steam.

reply
redleader55
3 hours ago
[-]
It's worth mentioning that sched_ext was developed at Meta. The schedulers themselves are developed collaboratively by several companies, not just Meta or Valve or Igalia, and the development happens in a shared GitHub repo - https://github.com/sched-ext/scx.
reply
999900000999
4 hours ago
[-]
That's the magic of open source. Valve can't say ohh noes you need a deluxe enterprise license.
reply
senfiaj
4 hours ago
[-]
In this case yes, but on the other hand Red Hat only provides the RHEL source code to those who have the binaries. The GPLv2 license requires you to provide the source code only to those you give the compiled binaries to. In theory Meta can apply its own proprietary patches to Linux and not publish the source code, as long as it only runs that patched Linux on its own servers.
reply
dralley
1 hour ago
[-]
RHEL source code is easily available to the public - via CentOS Stream.

For any individual RHEL package, you can find the source code with barely any effort. If you have a list of the exact versions of every package used in RHEL, you could compose it without that much effort by finding those packages in Stream. It's just not served up to you on a silver platter unless you're a paying customer. You have M package versions for N packages - all open source - and you have to figure out the correct construction for yourself.

reply
cherryteastain
3 hours ago
[-]
Can't anyone get a RHEL instance on their favorite cloud, dnf install whatever packages they want sources of, email Redhat to demand the sources, and shut down the instance?
reply
dfedbeef
3 hours ago
[-]
RHEL specifically makes it really annoying to see the source. You get a web view.
reply
tremon
1 hour ago
[-]
This violates the GPL, which explicitly states that recipients are entitled to the source tree in a form suitable for modification -- which a web view is not.
reply
SSLy
48 minutes ago
[-]
It's not the only way they offer the source, though.
reply
Aperocky
2 hours ago
[-]
Don't forget RH is owned by IBM.
reply
OsrsNeedsf2P
2 hours ago
[-]
Honestly just hearing this makes me want to get all their binaries, request the code, scrape it with OCR and upload it somewhere
reply
dralley
1 hour ago
[-]
But that would be silly, because all of the code and binaries are already available via CentOS Stream. There's nothing in RHEL that isn't already public at some point via CentOS Stream.

There's nothing special or proprietary about the RHEL code. Access to the code isn't an issue, it's reconstructing an exact replica of RHEL from all of the different package versions that are available to you, which is a huge temporal superset of what is specifically in RHEL.

reply
kstrauser
4 hours ago
[-]
I'm more surprised that the scheduler made for a handheld gaming console is also demonstrably good for Facebook's servers.
reply
giantrobot
2 hours ago
[-]
Latency-aware scheduling is important in a lot of domains. Getting video frames or controller input delivered on a deadline is a similar problem to getting voice or video packets delivered on a deadline. Meanwhile housecleaning processes like log rotation can sort of happen whenever.
reply
bigyabai
4 hours ago
[-]
I mean, part of it is that Linux's default scheduler is braindead by modern standards: https://en.wikipedia.org/wiki/Completely_Fair_Scheduler
reply
3eb7988a1663
3 hours ago
[-]
Part of that is the assumption that Amazon/Meta/Google all have dedicated engineers who should be doing nothing but tuning performance for 0.0001% efficiency gains. At the scale of millions of servers, those tweaks add up to real dollar savings, and I suspect little of how they run is stock.
reply
Anon1096
3 hours ago
[-]
This is really just an example of survivorship bias and the power of Valve's good brand value. Big tech does in fact employ plenty of people working on the kernel to make 0.1% efficiency gains (for the reason you state), it's just not posted on HN. Someone would have found this eventually if not Valve.

And the people at FB who worked to integrate Valve's work into the backend and test it and measure the gains are the same people who go looking for these kernel perf improvements all day.

reply
accelbred
4 hours ago
[-]
CFS was replaced by EEVDF, no?
reply
0x1ch
3 hours ago
[-]
I vaguely remember reading when this occurred. It was very recent no? Last few years for sure.

> The Linux kernel began transitioning to EEVDF in version 6.6 (as a new option in 2024), moving away from the earlier Completely Fair Scheduler (CFS) in favor of a version of EEVDF proposed by Peter Zijlstra in 2023 [2-4]. More information regarding CFS can be found in CFS Scheduler.
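
If I remember LWN's EEVDF coverage right (treat the notation as my paraphrase), the core idea is that every runnable task gets a virtual deadline and the scheduler always runs the eligible task whose deadline comes first, roughly:

    V_d(i) = V_e(i) + r_i / w_i

where V_e(i) is when task i becomes eligible (its lag is no longer negative), r_i is the time slice it requests and w_i its nice-derived weight - so a latency-sensitive task can request a shorter slice and get an earlier deadline without receiving more total CPU.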

reply
jorvi
1 hour ago
[-]
Ultimately, CPU schedulers are about choosing which attributes to weigh more heavily. See this[0] diagram from GitHub. EEVDF isn't a straight upgrade over CFS. Nor is LAVD over either.

Just traditionally, Linux schedulers have been rather esoteric to tune and by default they've been optimized for throughput and fairness over everything else. Good for workstations and servers, bad for everyone else.

[0]https://tinyurl.com/mw6uw9vh

reply
phdelightful
4 hours ago
[-]
Parent's article says

> Starting from version 6.6 of the Linux kernel, [CFS] was replaced by the EEVDF scheduler.[citation needed]

reply
jorvi
4 hours ago
[-]
I mean... many SteamOS flavors (and Linux distros in general) have switched to Meta's Kyber I/O scheduler to fix microstutter issues... the knife cuts both ways :)
reply
bronson
3 hours ago
[-]
Kyber is an I/O scheduler. Nothing to do with this article.
reply
Brian_K_White
3 hours ago
[-]
The comment was perfectly valid and topical and applicable. It doesn't matter what kind of improvement Meta supplied that everyone else took up. It could have been better cache invalidation or better usb mouse support.
reply
sintax
1 hour ago
[-]
Well, if you think about it, in this case the license is the 30% cut on every game you purchase on Steam.
reply
Sparkyte
1 hour ago
[-]
I've been using Bazzite Desktop for 4 months now and it has been my everything. Windows is just abandonware now even with every update they push. It is clunky and hard to manage.
reply
loeg
3 hours ago
[-]
Maybe better to go straight to the source and bypass Phoronix blogspam: https://www.youtube.com/watch?v=KFItEHbFEwg
reply
fph
1 hour ago
[-]
Life becomes a lot better the moment you stop considering Youtube videos valid primary sources.
reply
loeg
7 minutes ago
[-]
It’s a recording of a talk. Feel free to point out other sources but there doesn’t seem like much to object to here.
reply
hobobaggins
2 hours ago
[-]
Phoronix is blogspam?!
reply
webdevver
1 hour ago
[-]
Yeah, that's kinda harsh. Phoronix is a good OSS news aggregator at the very least, and the PTS is a huge boon for the "what's the best bang-for-buck LLVM build box" type of question (which is very useful!).
reply
tra3
3 hours ago
[-]
I'm curious how this came to be:

> Meta has found that the scheduler can actually adapt and work very well on the hyperscaler's large servers.

I'm not at all in the know about this, so it would not even occur to me to test it. Is it the case that if you're optimizing Linux performance you'd just try whatever is available?

reply
laweijfmvo
2 hours ago
[-]
almost certainly bottom-up: some eng somewhere read about it, ran a test, saw positive results, and it bubbles up from there. this is still how lots of cool things happen at big companies like Meta.
reply
erichocean
1 hour ago
[-]
Omarchy should adopt the SCX-LAVD scheduler as its default, it helps conserve power on laptops.
reply
binary132
3 hours ago
[-]
I'm struggling to understand what workloads Meta might be running that are _this_ latency-critical.
reply
commandersaki
2 hours ago
[-]
According to the video linked somewhere in this thread, it's WhatsApp's Erlang workers that want sub-ms latency.
reply
Pr0Ger
3 hours ago
[-]
It's definitely for ads auctions
reply
dabockster
2 hours ago
[-]
It's Meta. They always push to be that fast on paper, even when it's costly to do and doesn't really need it.
reply
stuxnet79
3 hours ago
[-]
Meta is a humongous company. Any kind of latency has to have a business impact.
reply
tayo42
3 hours ago
[-]
If you have 50,000 servers for your service and you can reduce that by 0.1 percent, you save 50 servers. Multiply that by maybe $8k per server and you have saved $400k; you've just paid for yourself for a year. With Meta the numbers are probably a bit bigger.
reply
pixelbeat__
2 hours ago
[-]
LOL (I used to work for Meta, so appreciate the facetious understatement)
reply
bongodongobob
1 hour ago
[-]
That's not how it works though. Budgets are annual. A 1% savings of cpu cycles doesn't show up anywhere, it's a rounding error. They don't have a guy that pulls the servers and sells them ahead of the projection. You bought them for 5 years and they're staying. 5 years from now, that 1% got eaten up by other shit.
reply
Anon1096
52 minutes ago
[-]
You're wrong about how services that cost 9+ figures to run annually are budgeted. 1% CPU is absolutely massive and well measured and accounted for in these systems.
reply
tayo42
55 minutes ago
[-]
You don't buy servers once every 5 years. I've done purchasing every quarter and forecasted a year out. You reduce your services budget for hardware by the amount saved for that year.
reply
tayo42
3 hours ago
[-]
Interesting to see server workloads take ideas from other areas. I saw recently that some of the k8s-specific OSes do their updates the way Android devices do.
reply
esseph
1 hour ago
[-]
You mean immutable?
reply