They needed Windows games to run on Linux, so we got massive Proton/Wine advancements. They needed better display output for the Deck, and we got HDR and VRR support in Wayland. They also needed smoother frame pacing, and we got a scheduler that Zuck is now using to run data centers.
It's funny to think that Meta's server efficiency is being improved because Valve paid Igalia to make Elden Ring stutter less on a portable Linux PC. This is the best kind of open source trickle-down.
I think the reality is that Linux is ahead on a lot of kernel stuff. More experimentation is happening.
"Now that's curious..."
io_uring is still a pale imitation :(
io_uring didn't change that, it only got rid of the syscall overhead (which is still present on Windows), so in actuality they are two different technical solutions that affect different levels of the stack.
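To make the "syscall overhead" point concrete, here's a minimal sketch with liburing (illustrative only; the file path, sizes, and lack of error handling are my own simplifications): a batch of reads is queued and handed to the kernel with a single submit call rather than one syscall per operation. Link with -luring.

    // Minimal io_uring sketch using liburing: queue several reads, submit
    // them with a single syscall, then reap the completions.
    // The file path and sizes are placeholders for illustration.
    #include <liburing.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    #define NREADS 4
    #define CHUNK  4096

    int main(void)
    {
        struct io_uring ring;
        if (io_uring_queue_init(8, &ring, 0) < 0)
            return 1;

        int fd = open("/etc/hostname", O_RDONLY);   // placeholder file
        if (fd < 0)
            return 1;

        static char bufs[NREADS][CHUNK];
        for (int i = 0; i < NREADS; i++) {
            struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
            io_uring_prep_read(sqe, fd, bufs[i], CHUNK,
                               (unsigned long long)i * CHUNK);
        }

        // One syscall submits every queued read.
        io_uring_submit(&ring);

        for (int i = 0; i < NREADS; i++) {
            struct io_uring_cqe *cqe;
            io_uring_wait_cqe(&ring, &cqe);
            printf("completion %d: res=%d\n", i, cqe->res);
            io_uring_cqe_seen(&ring, cqe);
        }

        close(fd);
        io_uring_queue_exit(&ring);
        return 0;
    }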
In practice, Linux I/O is much faster, owing in part to the fact that Windows file I/O requires locking the file, while Linux does not.
I personally doubt SAK/SAS is a good security measure anyway. If you've got untrusted programs running on your machine, you're probably already pwn'd.
The whole Windows ecosystem had us trained to right-click on any Windows 9x/XP program that wasn't working right and "Run as administrator" to get it to work in Vista/7.
1. Snapshot the desktop
2. Switch to a separate secure UI session
3. Display the snapshot in the background, greyed out, with the UAC prompt running in the current session and topmost
It avoids any chance of a user-space program faking or interacting with a UAC window. A clever way of dealing with the train wreck of legacy Windows user/program permissioning.
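If you want to poke at the isolated-desktop part yourself, a rough Win32 sketch of the idea looks like this (purely illustrative; the real elevation flow is handled by the OS and also does the snapshot and grey-out, which this skips, and "DemoSecureDesktop" is just a made-up name):

    // Sketch of the "separate desktop" idea: windows on the default desktop
    // cannot send input to or read windows that live on this one.
    // Link with user32.lib.
    #include <windows.h>

    int main(void)
    {
        HDESK original = GetThreadDesktop(GetCurrentThreadId());

        HDESK secure = CreateDesktopW(L"DemoSecureDesktop", NULL, NULL, 0,
                                      GENERIC_ALL, NULL);
        if (!secure)
            return 1;

        // Make the new desktop the visible, input-receiving one and show a
        // stand-in for the elevation prompt there.
        SwitchDesktop(secure);
        SetThreadDesktop(secure);
        MessageBoxW(NULL, L"Pretend this is the elevation prompt.",
                    L"Demo", MB_OK);

        // Switch back and clean up.
        SetThreadDesktop(original);
        SwitchDesktop(original);
        CloseDesktop(secure);
        return 0;
    }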
These days, things have gotten far more reasonable, and I think we can generally expect a Linux desktop user to only run software from trusted sources. In this context, such a feature makes much less sense.
On the surface, they are as simple as Linux UGO/rwx stuff if you want them to be, but you can really, REALLY dive into the technology and apply super specific permissions.
You have a hardened Windows 11 system. A critical application was brought forward from a Windows 10 box but it failed, probably a permissions issue somewhere. Debug it and get it working. You cannot try to pass this off to the vendor; it is on you to fix it. Go.
And then you get security products that have the fun idea of removing privileges when a program creates a handle (I'm not joking, that's a thing some products do). So when you open a file with write access and then try to write to the file, you end up with permission errors during the write (and not the open) and end up debugging for hours on end, only to discover that some shitty security product is doing stupid stuff...
Granted, that's not related to ACLs. But for every OK idea Microsoft had, they have a dozen terrible ideas that make the whole system horrible.
Also, as far as I know Linux doesn't support DENY ACLs, which Windows does.
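For reference, this is roughly what an explicit deny entry looks like via the Win32 security APIs (a hedged sketch, not production code; the file path and the BUILTIN\Guests group are just example targets, and error checking is omitted):

    // Sketch: merge an explicit deny-write ACE for BUILTIN\Guests into a
    // file's DACL. Link with advapi32.lib.
    #include <windows.h>
    #include <aclapi.h>

    int main(void)
    {
        WCHAR path[] = L"C:\\temp\\example.txt";   // example file

        // Well-known SID for BUILTIN\Guests (example trustee).
        SID_IDENTIFIER_AUTHORITY nt = SECURITY_NT_AUTHORITY;
        PSID guests = NULL;
        AllocateAndInitializeSid(&nt, 2, SECURITY_BUILTIN_DOMAIN_RID,
                                 DOMAIN_ALIAS_RID_GUESTS,
                                 0, 0, 0, 0, 0, 0, &guests);

        EXPLICIT_ACCESSW ea = {0};
        ea.grfAccessPermissions = GENERIC_WRITE;
        ea.grfAccessMode        = DENY_ACCESS;      // the explicit deny entry
        ea.grfInheritance       = NO_INHERITANCE;
        ea.Trustee.TrusteeForm  = TRUSTEE_IS_SID;
        ea.Trustee.TrusteeType  = TRUSTEE_IS_GROUP;
        ea.Trustee.ptstrName    = (LPWSTR)guests;

        // Read the existing DACL, merge in the deny entry, write it back.
        PACL oldDacl = NULL, newDacl = NULL;
        PSECURITY_DESCRIPTOR sd = NULL;
        GetNamedSecurityInfoW(path, SE_FILE_OBJECT, DACL_SECURITY_INFORMATION,
                              NULL, NULL, &oldDacl, NULL, &sd);
        SetEntriesInAclW(1, &ea, oldDacl, &newDacl);
        SetNamedSecurityInfoW(path, SE_FILE_OBJECT, DACL_SECURITY_INFORMATION,
                              NULL, NULL, newDacl, NULL);

        LocalFree(newDacl);
        LocalFree(sd);
        FreeSid(guests);
        return 0;
    }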
Ubuntu just recently got a way to automate its installer (recently being during COVID). I think you can do the same on RHEL too. But that's largely it on Linux right now. If you need to admin 10,000+ computers, Windows is still the king.
What?! I was doing kickstart on Red Hat (wasn't called Enterprise Linux back then) at my job 25 years ago; I believe we were using floppies for that.
Preseed is not new at all:
https://wiki.debian.org/DebianInstaller/Preseed
RH has also had kickstart since basically forever now.
I've been using both preseeds and kickstart professionally for over a decade. Maybe you're thinking of the graphical installer?
- There does not seem to be a way to determine which machines in the fleet have successfully applied a policy. If you need a policy to be active before doing deployment of something (via a different method), or things break, what do you do?
- I’ve had far too many major incidents that were the result of unexpected interactions between group policy and production deployments.
1. cloud-init support was in RHEL 7.2 which released November 19, 2015. A decade ago.
2. Checking on Ubuntu, it looks like it was supported in Ubuntu 18.04 LTS in April 2018.
3. For admining tens of thousands of servers, if you're in the RHEL ecosystem you use Satellite and its Ansible integration. That's also been going on for... about a decade. You don't need much integration though, other than a host list of names and IPs.
There are a lot of people on this list handling tens of thousands or hundreds of thousands of Linux servers a day (probably a few in the millions).
Their research department rocks, however, so it's not a full bash on Microsoft at all - I just feel like they are focusing on other, way more interesting stuff.
It's a big space. Traditionally, Microsoft has held the multimedia, gaming, and lots of professional segments, but with Valve doing a large push into the first two and Microsoft not even giving it a half-hearted try, it might just be that corporate computers continue using Microsoft, people's home media equipment is all Valve, and hipsters (and others...) keep on using Apple.
Great UX requires a lot of work that is hard but not algorithmically challenging. It requires consistency and getting many stakeholders to buy in. It requires spending lots of time on things that fewer than 10% of people will ever use.
Windows got a proper graphics compositor (DWM) in 2006 and made it mandatory in 2012. macOS had one even earlier. Linux fought against Compiz, and while Wayland feels inevitable, vocal forces still complain about and argue against it. Linux has a dozen incompatible UI toolkits.
Screen readers on Linux are a mess. High contrast is a mess. Setting font size in a way that most programs respect is a mess. Consistent keyboard shortcuts are a mess.
I could go on, but these are problems that open source is not set up to solve. These are problems that are hard, annoying, not particularly fun. People generally only solve them when they are paid to, and often only when governments or large customers pass laws requiring the work to be done and threaten to not buy your product if you don't do it. But they are crucially important to building a great, widely adopted experience.
Here's one top search result that goes into far more detail: https://www.reddit.com/r/linux/comments/1ed0j10/the_state_of...
"Slide left or right" CPU and GPU underclocking.
until the new s2idle stuff that Microsoft and Intel have foisted on the world (to update your laptop while sleeping… I guess?)
Liquid Glass ruined multitasking UX on my iPad. :(
Also my MacBook (M4 Pro) has random freezes where Finder becomes entirely unresponsive. Not sure yet why this happens, but thankfully it's pretty rare.
(And same for Windows to the degree it is more inconsistent on Windows than Mac)
The real problem is that the hardware vendors aren't using its development model. To make this work you either need a) the hardware vendor to write good drivers/firmware, or b) the hardware vendor to publish the source code or sufficient documentation so that someone else can reasonably fix their bugs.
The Linux model is the second one. Which isn't what's happening when a hardware vendor doesn't do either of them. But some of them are better than others, and it's the sort of thing you can look up before you buy something, so this is a situation where you can vote with your wallet.
A lot of this is also the direct fault of Microsoft for pressuring hardware vendors to support "Modern Standby" instead of (rather than in addition to) S3 suspend, presumably because they're organizationally incapable of making Windows Update work efficiently. They need Modern Standby to paper over that by having it run while the laptop is "asleep", and then they can't have people noticing that S3 is more efficient. But Microsoft's current mission to get everyone to switch to Linux appears to be in full swing now, so we'll see if their efforts on that front manage to improve the situation over time.
That's a vastly different statement.
The problem is: the specifications of ACPI are complex, Windows' behavior tends to be pretty much trash, and most hardware tends to be trash too (AMD GPUs, for example, were infamous for not being resettable for years [1]), which means that BIOSes have to work around quirks in both the hardware and the software. Usually, as soon as it is reasonably working with Windows (for a varying definition of "reasonably", that is), the ACPI code is shipped and that's it.
Unfortunately, Linux follows standards (or at least, it tries to) and cannot fully emulate the numerous Windows quirks... and on top of that, GPUs tend to be hot piles of dung requiring proprietary blobs that make life even worse.
[1] https://www.nicksherlock.com/2020/11/working-around-the-amd-...
https://developer.valvesoftware.com/wiki/Using_Source_Contro...
But here's my rub: no one else bothered to step up to be a key signer. Everyone has instead whined for 15 years and told people to disable Secure Boot and the loads of trusted compute tech that depends on it, instead of actually building and running the necessary infra for everyone to have a Secure Boot authority outside of big tech. Not even Red Hat/IBM even though they have the infra to do it.
Secure Boot and signed kernels are proven tech. But the Linux world absolutely needs to pull their heads out of their butts on this.
The only thing Secure Boot provides is the ability for someone else to measure what I'm running and therefore the ability to tell me what I can run on the device I own (most likely leading to them demanding I run malware like the adware/spyware bundled into Windows). I don't have a maid to protect against; such attacks are a completely non-serious argument for most people.
nobody wants to play games that are full of bots. cheaters will destroy your game and value proposition.
anti-cheat is essentially existential for studios/publishers that rely on multiplayer gaming.
So yes, the second half of your statement is true. The first half--not so much.
> nobody wants to play games that are full of bots. cheaters will destroy your game and value proposition.
You are correct, but I think I did a bad job of communicating what I meant. It's true that anti-cheat has been around since forever. However, what's changed relatively recently is anti-cheat integrated into the kernel alongside requirements for signed kernels and secure boot. This dates back to 2012, right as games like Battlefield started introducing gambling mechanics into their games.
There were certainly other games that had some gambly aspects to them, but the 2010s are pretty close to when esports, along with in-game gambling, were starting to bud.
Having the option to disable Secure Boot was probably due to backlash at the time and antitrust concerns.
Aside from providing protection against "evil maid" attacks (right?), Secure Boot is in the interest of software companies. Just like platform "integrity" checks.
No thanks.
I do: MIDI 2.0. It's not because they're not doing it, just that they're doing it at a glacial pace compared to everyone else. They have reasons for this (a complete rewrite of the Windows media services APIs and internals), but it's taken years and delays to do something that shipped on Linux over two years ago and on Apple more like 5 (although there were some protocol changes over that time).
Currently almost no one is using Linux for mobile because of the lack of apps (banking, for example) and bad hardware support. When developing for Linux becomes more and more attractive, this might change.
A hard lock which requires a reboot or god forbid power cycling is the worst possible outcome, literally anything else which doesn’t start a fire is an improvement TBH.
Linux (and its ecosystem) sucks at having focus and direction.
They might get something right here and there, especially related to servers, but they are awful at not spinning their wheels.
See how Wayland progress is slow. See how some distros moved to it only after a lot of kicking and screaming.
See how a lot of peripherals in "newer" hardware (sometimes a model that's been 2 or 3 years on the market) only barely work in a newer distro, or have weird bugs.
"but the manufacturers..." "but the hw producers..." "but open source..." whine
Because Linux lacks a good hierarchy for isolating responsibility, instead going for "every kernel driver can do all it wants" together with "interfaces that keep flipping and flopping at every new kernel release" - notable (good) exception: USB userspace drivers. And don't even get me started on the whole mess that is Xorg drivers.
And then you have a Rube Goldberg machine in the form of udev, D-Bus, and whatnot, or whatever newer solution that solves half the problems and creates another new collection of bugs.
I'm loving what Valve has been doing, and their willingness to shove money into projects that have long been underinvested in, BUT. Please don't forget all the volunteers that developed these systems for years before Valve decided to step up. All of this is only possible because a ton of different people spent decades slowly building a project that for most of its lifetime seemed like a dead-end idea.
Wine as a software package is nothing short of miraculous. It has been monumentally expensive to build, but is provided to everyone to freely use as they wish.
Nobody, and I do mean NOBODY, would have funded a project that spent 20 years struggling to run Office and Photoshop. Valve took it across the finish line into a commercially useful project, but they could not have done that without the decade+ of work before that.
I'm sure there have been other commercial contributors to Wine besides Valve and CodeWeavers.
The guy is Philip Rebohle.
Gaben does something: Wins Harder
We shouldn't be depending on trickledown anything. It's nice to see Valve contributing back, but we all need to remember that they can totally evaporate/vanish behind proprietary licensing at any time.
It seems like every time I read about this kind of stuff, it's being done by contractors. I think Proton is similar. Of course that makes it no less awesome, but it makes me wonder about the contractor to employee ratio at Valve. Do they pretty much stick to Steam/game development and contract out most of the rest?
Back to the root point. Small company focused on core business competencies, extremely effective at contracting non-core business functions. I wish more businesses functioned this way.
If you have 30 minutes for a video, I recommend People Make Games' documentary on it: https://www.youtube.com/watch?v=eMmNy11Mn7g
But most of the time you don't want "a unit of software", you want some amorphous blob of product and business wants and needs, continuously changing at the whims of business, businessmen, and customers. In this context, sure, you're paying your developers to solve problems, but moreover you're paying them to store the institutional knowledge of how your particular system is built. Code is much easier to write than to read, because writing code involves applying a mental model that fits your understanding of the world onto the application, whereas reading code requires you to try and recreate someone else's alien mental model. In the situation of in-house products and business automation, at some point your senior developers become more valuable for their understanding of your codebase than their code output productivity.
In the context of "I want this particular thing fixed in a popular open source codebase that there are existing people with expertise in", contracting makes a ton of sense, because you aren't the sole buyer of that expertise.
The problem seems, at least from a distance, to be that bosses treat it as a fire-and-forget solution.
We haven't had any software done by outsiders yet, but we have hired consultants to help us on specifics, like changing our infra and helping move local servers to the cloud. They've been very effective and helped us a lot.
We had talks first, though, so we found someone we could trust had the knowledge, and we were knowledgeable enough ourselves that we could determine that. We then followed up closely.
When I worked in the HFC/Fiber plant design industry, the simple act of "Don't use the same boilerplate MSA for every type of vendor" and being more specific about project requirements in the RFP makes it very clear what is expected, and suddenly we'd get better bids, and would carefully review the bids to make sure that the response indicated they understood the work.
We also had our own 'internal' cost estimates (i.e. if we had the in house capacity, how long would it take to do and how much would it cost) which made it clear when a vendor was in over their head under-bidding just to get the work, which was never a good thing.
And, I've seen that done in the software industry as well, and it worked.
That said, the main 'extra' challenge in IT is that many of the good players aren't going to be the ones beating down your door like the big 4 or a WITCH consultancy will.
But really, at the end of the day, the problem is that what often happens is that business people who don't really know (or necessarily -care-) enough about the specifics are unfortunately the ones picking things like vendors.
And worse, sometimes they're the ones writing the spec and not letting engineers review it. [0]
[0] - This once led to an off-shore body shop getting a requirement along the lines of 'the stored procedures and SQL called should be configurable' and sure enough the web.config had ALL the SQL and stored procedures as XML elements, loaded from config just before the DB call, thing was a bitch to debug and their testing alone wreaked havoc on our dev DB.
If you don't see it happening, the game is being played as intended.
They're also a flat organization, with all the good and bad that brings, so scaling with contractors is easier than bringing on employees that might want to work on something else instead.
For contextual, super specific, super specialized work (e.g. SCX-LAVD, the DirectX-to-Vulkan and OpenGL-to-Vulkan translation layers in Proton, and most of the graphics driver work required to make games run on the upcoming ARM based Steam Frame) they like to subcontract work to orgs like Igalia but that's about it.
That said, something like this which is a fixed project, highly technical and requires a lot of domain expertise would make sense for _anybody_ to contract out.
There have been demands to do that more on HN lately. This is what it looks like when it happens - a company paying for OSS development.
I don't remember all the details, but it doesn't seem like a great place to work, at least based on the horror stories I've read.
Valve does a lot of awesome things, but they also do a lot of shitty things, and I think their productivity is abysmal based on what you'd expect from a company with their market share. They have very successful products, but it's obvious that basically all of their income comes from rent-seeking from developers who want to (well, need to) publish on Steam.
For any individual RHEL package, you can find the source code with barely any effort. If you have a list of the exact versions of every package used in RHEL, you could compose it without that much effort by finding those packages in Stream. It's just not served up to you on a silver platter unless you're a paying customer. You have M package versions for N packages - all open source - and you have to figure out the correct construction for yourself.
There's nothing special or proprietary about the RHEL code. Access to the code isn't an issue, it's reconstructing an exact replica of RHEL from all of the different package versions that are available to you, which is a huge temporal superset of what is specifically in RHEL.
And the people at FB who worked to integrate Valve's work into the backend and test it and measure the gains are the same people who go looking for these kernel perf improvements all day.
> The Linux kernel began transitioning to EEVDF in version 6.6 (as a new option in 2024), moving away from the earlier Completely Fair Scheduler (CFS) in favor of a version of EEVDF proposed by Peter Zijlstra in 2023 [2-4]. More information regarding CFS can be found in CFS Scheduler.
Just traditionally, Linux schedulers have been rather esoteric to tune and by default they've been optimized for throughput and fairness over everything else. Good for workstations and servers, bad for everyone else.
> Starting from version 6.6 of the Linux kernel, [CFS] was replaced by the EEVDF scheduler.[citation needed]
> Meta has found that the scheduler can actually adapt and work very well on the hyperscaler's large servers.
I'm not at all in the know about this, so it would not even occur to me to test it. Is it the case that if you're optimizing Linux performance you'd just try whatever is available?