Linux gives up on 6-year LTS kernels, says they’re too much work
274 points
1 year ago
| 21 comments
| arstechnica.com
runeks
1 year ago
[-]
> The other big problem is the burnout from maintainers, which are often unpaid and could use a lot more support from the billion-dollar companies that benefit from using Linux.

This is the crux of the issue: putting the maintenance burden on unpaid volunteers instead of having the burden be carried by the companies that profit from the 6-year LTS.

reply
rlpb
1 year ago
[-]
Canonical volunteered to maintain LTS kernels within the community, but upstream refuses to accept Canonical's contributions - ironically because apparently they "don't work within the community". Source: https://lwn.net/Articles/608917/

Canonical maintained them anyway, though, just not officially. Sources: https://lwn.net/Articles/618862/ https://lwn.net/Articles/667925/ https://lkml.org/lkml/2015/12/15/538. I don't see more recent announcements; I don't know if they stopped because of the continued rejection of their community contributions by upstream, or if they are continuing anyway and I'm simply not aware of it.

Disclosure: I work for Canonical. I'm not authorised to speak for Canonical; any opinions expressed here are my own. [Edit: I should add that I don't work on the kernel, so I feel like I'm as much an outside observer as you probably are.] But I'm not sure I'm even expressing an opinion here - just citing some relevant, publicly verifiable facts.

reply
PH95VuimJjqBqy
1 year ago
[-]
There's a difference between a company volunteering money/employees and a company taking over maintenance of the branch itself.

If Canonical was not willing to do the former, it implies the kernel developers were correct.

reply
pseg134
1 year ago
[-]
It turned into your opinion when you said "ironically" and didn't give the real reason the upstream patches were rejected.
reply
esrauch
1 year ago
[-]
Minor thing: if you look at the citation, it says they don't want to hand over official control of official branches to Canonical because they don't trust them to engage with the community. Given those citations, OP must have meant "accept contributions" as "accept Canonical being in control of older official kernel branches, deciding which patches are accepted (even though no one else will)", rather than upstream not accepting patches from Canonical.
reply
bitcharmer
1 year ago
[-]
Given Canonical's outright user-hostile decisions in the recent past, I don't blame the kernel devs for not trusting them at all.
reply
mardifoufs
1 year ago
[-]
Such as? And they still accept Red Hat contributions, so your point is moot anyway.
reply
goku12
1 year ago
[-]
The LXD - Incus saga with the linuxcontainers community is a very good indicator of how Canonical would behave in a similar situation.
reply
bitcharmer
1 year ago
[-]
Snap
reply
fnordpiglet
1 year ago
[-]
What sort of user hostile actions would one take on a 5 year old kernel version?
reply
rlpb
1 year ago
[-]
I'm just going by the article I cited. I don't know any more than the reason given there. And come on: how is refusing contributions when the cited reason is not contributing not ironic?
reply
gjvc
1 year ago
[-]
Canonical ceased to be relevant with the usability disaster that was the hype-driven Unity desktop.
reply
stavros
1 year ago
[-]
I don't see how the UX of Unity applies to kernel maintenance?
reply
Rexogamer
1 year ago
[-]
Ubuntu is still pretty widely used for a Linux distro?
reply
mhitza
1 year ago
[-]
The Linux Foundation "harvests" a couple hundred million dollars a year [1]. They could easily spend more on maintainers. I can't easily find an exact figure, but Torvalds is paid between 1 and 2 million dollars a year by The Linux Foundation. They could support other volunteer maintainers as well.

[1] https://lunduke.substack.com/p/linux-foundation-spends-just-...

reply
infamouscow
1 year ago
[-]
The Linux Foundation's 990 form for 2022 is available. Page 2.

https://projects.propublica.org/nonprofits/organizations/460...

reply
5Qn8mNbc2FNCiVV
1 year ago
[-]
I have zero issues with Linus being paid 1-2 million dollars per year
reply
red-iron-pine
1 year ago
[-]
If he were paid a flat license fee for every Linux install, I reckon it would be far more than that. Larry Ellison second-boat-level income.
reply
yummypaint
1 year ago
[-]
This would be a good use case for government grants. The system to administer them already exists. It's bureaucratic, but it is free money that could support developers long term. It could probably be argued that the DOE should offer funding, given the national security (etc.) implications of open source maintenance.
reply
INTPenis
1 year ago
[-]
Spot on. I mean, who wants 6-year LTS support? Must be big, old, slow enterprises, and possibly banks and government agencies - so pay up.
reply
cvwright
1 year ago
[-]
In practice, that would require those companies to pay for a distro like Red Hat, which supports a lot of the kernel dev work.

But unfortunately everyone would rather find a way to get RH packages for free, and RH/IBM has only made the situation worse by alienating people.

reply
red-iron-pine
1 year ago
[-]
Aye. The second IBM took over, it was clear it would be going to shit. It hasn't, totally, but I'm not loving the long-term direction.
reply
supertrope
1 year ago
[-]
Windows is supported for 10 years if you upgrade on day one. No special support contract needed.
reply
INTPenis
1 year ago
[-]
Good for them. With that kind of gumption I'm sure we'll see them running on a lot of devices, mars landers and practically the entire infrastructure of the internet in no time.
reply
supertrope
1 year ago
[-]
Mars landers use VxWorks. Servers are definitely Linux.
reply
justincormack
1 year ago
[-]
To be fair, some companies like Red Hat do maintain LTS kernels, but they choose different kernel versions than the upstream LTS ones and backport different things, so there isn't much crossover with these now.
reply
athrun
1 year ago
[-]
As an ex-Novell/SUSE employee this makes sense to me. Upstream is supposed to keep marching onwards.

Backporting is _so_ much work. And it's unfortunately not sexy work either, so it's always going to be hard attracting unpaid contributors for it.

If you need stability and long term support as a customer, you have companies like Red Hat or SUSE whose entire point is providing 10+ years of maintenance on these components.

reply
structural
1 year ago
[-]
Unfortunately, none of these companies providing 10+ years of maintenance are doing so for most embedded devices. We either need to get SoC vendors to update their kernel baselines regularly (this is hard; we've been trying for a decade and haven't seen much progress), or alternatively get them to backport fixes and patches (there's actually been quite a bit of progress here in getting them to take updates from the stable kernel at all! And that's getting thrown away now...)
reply
qznc
1 year ago
[-]
The effect of EU’s Cyber Resilience Act will be interesting. It requires maintenance, at least for security fixes.
reply
athrun
1 year ago
[-]
Exactly. It sounds like currently there's no money to be made supporting old embedded devices (in the consumer space at least), because no one is on the hook for long term maintenance.

Regulations _could_ change the incentives, and create a market for long term servicing. Regulations are hard to get right though...

reply
WJW
1 year ago
[-]
Old devices getting support from the Linux team is only one of the ways this can play out though. Some other ways I can think of:

- Old devices are phased out sooner.

- Moving away from linux towards proprietary embedded OSes that do provide support.

- No doubt some companies will try just ignoring the regulations

reply
danieldk
1 year ago
[-]
Or maybe vendors will be incentivized to actually upstream kernel patches, plus stop making 10 different models every year for weird market segmentation reasons.
reply
bee_rider
1 year ago
[-]
“Old devices are phased out sooner” seems like an OK solution with some caveats.

It is nice that it makes the cost of not supporting things visible to the users. Assuming “phased out” means the device will actually stop operating; “Company X’s devices have a short lifetime” is an easy thing for people to understand.

I suspect consumers will look for brands that don’t have this reputation, which should give those well behaved brands a boost.

Although, if it does turn out that just letting devices die is the common solution, maybe something will need to be done to account for the additional e-waste that is generated.

Moving toward proprietary OSes; hey, if it solves the problem… although, I don’t see why they’d have an advantage in keeping things up to date.

It is possible that companies will just break the law but then, that’s true of any law.

reply
robertlagrant
1 year ago
[-]
They could also provide the devices without an OS, and point you at the recommended open source ISO to download.
reply
TheLoafOfBread
1 year ago
[-]
If it is not standard x86, then good luck.
reply
ladyanita22
1 year ago
[-]
Nobody will move away from Linux, and old devices won't be phased out sooner, at least not in a significant manner.
reply
fnordpiglet
1 year ago
[-]
This won't make more money available for supporting old devices; it'll just make the long-term profitability of any device significantly lower, and therefore mean less competition and innovation.

A smarter regulation would have been to require non-commercial-use firmware source disclosure, allowing non-competitive long-term maintenance by owners.

reply
graemep
1 year ago
[-]
Who is responsible for complying with it? If a Chinese or American manufacturer of an embedded device that does not have a presence in the EU fails to provide updates, what happens?

How many of the companies producing this stuff have the skills to fix kernel security bugs?

It will end up being tickbox regulatory compliance and will create barriers to competition, especially from FOSS: https://pyfound.blogspot.com/2023/04/the-eus-proposed-cra-la...

reply
kelnos
1 year ago
[-]
Not sure who the "we" is that you refer to, but Google (and Samsung, and other Android manufacturers, as well as companies building other Linux-based embedded/IoT devices) could band together and create a "Corporate Embedded Linux Consortium", and pool some money together to pay developers to maintain old kernel versions.

If the mainline kernel devs are uncomfortable allowing those to be official kernel.org releases, that's fine: the CELC can host the new versions themselves and call the versions something like "5.10.95-celc" or whatever.

I don't get why this is so difficult for people to grasp: if you want long-term maintenance of something, then pay people to maintain it long-term. It's a frighteningly simple concept.

But yes, it'd be better for SoC vendors to track upstream more closely, and actually release updates for newer kernel versions, instead of the usual practice of locking each chip to whatever already-old kernel they choose from the start. Or, the golden ideal: SoC vendors should upstream their changes. But fat chance of that happening any time soon.

> (there's actually been quite a bit of progress here in getting them to actually take updates from the stable kernel at all! And that's getting thrown away now...)

I found this statement kinda funny. If the original situation was that they wouldn't take updates from the stable kernels, then what were all those unpaid developers even maintaining them for? It's bad enough that it's (for most people) unrewarding work that they weren't getting paid for... but then few people were actually making use of it? Ouch. No wonder they're giving up, regardless of any progress made with the SoC vendors.

reply
leonheld
1 year ago
[-]
>SoC vendors should upstream their changes. But fat chance of that happening any time soon

I honestly do not understand why SoC vendors don't put the extra 1% of effort into upstreaming their stuff. I've seen (and worked with) vendor software that lags 3 to 4 years behind upstream, and if you diff it against upstream it's like 10 small commits; granted, these commits are generally hot garbage.

reply
nudgeee
1 year ago
[-]
Isn’t this what the Civil Infrastructure Platform (CIP) initiative [0] was also proposing? Maintenance of Linux kernels on the 10+ year horizon aimed at industrial use cases. Has backing from Toshiba, Hitachi, Bosch, Siemens, Renesas, etc, though a marked lack of chip vendors as members. Not really sure how well it is going though.

[0] https://wiki.linuxfoundation.org/civilinfrastructureplatform...

reply
pxc
1 year ago
[-]
> Unfortunately, none of these companies providing 10+ years of maintenance are doing so for most embedded devices.

But why do these devices need special kernels anyway? Isn't that the real problem?

reply
Const-me
1 year ago
[-]
The Linux kernel doesn't have a stable ABI for device drivers. Device manufacturers either can't or won't publish their drivers as part of the Linux kernel; that's why they fork Linux instead.
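
For illustration, a minimal sketch (the hypothetical hello.c below, not any real vendor driver): even a trivial out-of-tree module has to be rebuilt against each target kernel's own build tree, because the in-kernel interfaces it compiles against are free to change between versions.

    /* hello.c - minimal out-of-tree module (illustrative only).
     * Built with the usual obj-m Makefile against the target kernel's
     * build tree (e.g. /lib/modules/<version>/build); the resulting .ko
     * is tied to that specific kernel version and configuration. */
    #include <linux/init.h>
    #include <linux/module.h>
    #include <linux/printk.h>

    MODULE_LICENSE("GPL");
    MODULE_DESCRIPTION("Hello-world module used as an out-of-tree example");

    static int __init hello_init(void)
    {
            pr_info("hello: loaded\n");
            return 0;
    }

    static void __exit hello_exit(void)
    {
            pr_info("hello: unloaded\n");
    }

    module_init(hello_init);
    module_exit(hello_exit);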
reply
worthless-trash
1 year ago
[-]
If they got their fixes upstream, they would be pulled into the same build tree that CentOS and RHEL use.
reply
dcow
1 year ago
[-]
Which really makes you understand why the LGPL was a defeat.
reply
tremon
1 year ago
[-]
What do you mean? The Linux kernel isn't licensed under the LGPL.
reply
neurostimulant
1 year ago
[-]
> If you need stability and long term support as a customer, you have companies like RedHat or SUSE whose entire point is providing 10+ years maintenance on these components.

Is that even feasible for projects like the Android kernel, which distribute their fork to vendors, when Red Hat forbids redistribution of their source code?

reply
throwawaymqsh
1 year ago
[-]
> it's unfortunately not sexy work

What comprises sexy work in programming?

reply
xnorswap
1 year ago
[-]
I don't know what the definition of "sexy work" is, but rewarding work in programming is solving interesting problems.

Backporting fixes isn't generally interesting problem solving.

reply
UncleMeat
1 year ago
[-]
This is the root of so much of our software quality problem. “I want to work on something shiny” outweighs “I have pride in this software and want to keep it healthy.”
reply
xnorswap
1 year ago
[-]
Personally, I love working on legacy software (I actually dislike greenfield projects), but even in the context of legacy software and system maintenance, backporting fixes still wouldn't rate highly or provide much in the way of interesting work for me.
reply
tremon
1 year ago
[-]
I'd say there are enough software developers who enjoy doing the latter. It's mostly the external motivation (both in community standing and in payment) that pushes people toward shiny new things.
reply
rkta
1 year ago
[-]
And so everyone has their own kink. I'd love to backport patches, maintaining some legacy code base.
reply
esrauch
1 year ago
[-]
Are you flagging the word "sexy" or are you asking whether some important projects are fun and exciting and other important projects boring?

Surely maintaining 40 year old bank Cobol code is important but it's not considered fun and exciting. Rewriting half of skia from C++ into Rust is arguably not important at all but it's exciting to the point that it reasonably could make the front page of HN.

reply
samtho
1 year ago
[-]
New features, typically.
reply
1970-01-01
1 year ago
[-]
Going back to 2017, some of you knew this was not a tenable solution: https://news.ycombinator.com/item?id=15428340
reply
jauntywundrkind
1 year ago
[-]
Google showed up and offered to pay a huge amount of money to extend LTS support 3x, IIRC.

At the time, Linux 4.14 was shipping, I think.

Personally it made me a bit sad, because it created real permission for Android to never upgrade kernels. I'd hoped Android would eventually start upgrading kernels, and thought the short LTS would surely force them to become a respectable ecosystem that takes maintenance seriously. Making Super LTS was a dodge; suddenly it was OK that 4.14 straggles along until 2024.

Also an interesting note: there's supposedly a "Civil Infrastructure Platform" that will have some support for 4.14 until 2029! The past never is gone, eh?

Project Treble is supposedly a whole new driver abstraction, mostly in userland I think (maybe?), whose intent is to allow kernel upgrades without having to rewrite drivers (not sure if that is the primary/express goal or just often mentioned). I'm not sure if Android has yet shipped a kernel upgrade to any phones though; anyone know of a model running an upgraded kernel?

reply
yjftsjthsd-h
1 year ago
[-]
> Personally it made me a bit sad, because it created real permission for Android to never upgrade kernels.

I would argue that it was still better than not doing that, because the vendors weren't going to properly keep up with kernels either way; the choice wasn't "support 3.x for longer or move to 4.x", it was "support 3.x for longer or watch Android devices stay on 3.x without patches".

reply
jauntywundrkind
1 year ago
[-]
As the old adage goes, "chips without software is just expensive sand."

Yes, what happened (and happens) is often monstrously unsupportable and terrible. For the past 6 years, Google rolling up with a dump truck full of bills has been justification to keep doing nothing, to keep letting device kernels be bad.

Your history isn't even good or right. Old releases at the time didn't get official longer support. Canonical just opted to maintain a basically Super-LTS 3.16 until 2020, regardless of the Google dump-truck-of-money thing going on here. Old phones got nothing directly from this payoff. Google was just paying for a way forward to keep doing nothing, to justify their ongoing delinquency and inactivity.

Which was unprincipled, terrible and awful before, but which they basically bribed gregkh into suddenly making acceptable, by at least paying for the privilege of being negligent, do-nothing delinquents.

reply
ralferoo
1 year ago
[-]
Some comments are saying that the 6-year LTS is needed to support older Android devices. Also, in practice most vendors don't bother releasing updates to phones after the first 2 years, other than security updates.

One possible nice side effect of not maintaining kernels for so long, and thereby not letting people stay on out-of-date systems, would be to encourage vendors to let users upgrade to newer versions of Android beyond the current 2-year lifespan. They would then be more likely to put pressure on their component vendors to get kernel support for their chipsets into mainline, so they no longer have the excuse that they can't provide updates because the hardware isn't supported by modern firmware.

reply
zshrc
1 year ago
[-]
They are a lot of work. And expensive to maintain. You’re not getting just 10 years of support from Red Hat, but 10 years of kernel support as well.
reply
worthless-trash
1 year ago
[-]
14 years of RHEL 7 for ELS.
reply
TowerTall
1 year ago
[-]
Microsoft is famed for their backwards compatibility. How do they achieve this? By hard work and a lot of "if version == x" spread throughout the code? Or is it because of their development process, or do they plan and design for backwards compatibility from day one?
reply
heavyset_go
1 year ago
[-]
There's a difference between backwards compatibility and backporting. For either, Microsoft can afford to pay engineers to maintain them.

But backwards compatibility isn't what kernel developers are maintaining, they're backporting things like security fixes to older versions of the kernel.

It would be as if a security fix were implemented in Windows 11 and Microsoft also chose to apply the same change to Windows 10. At some point Microsoft decides that older versions of Windows won't get new updates, like how Windows 8.1 stopped receiving them this January.

What kernel developers are deciding is that sufficiently old enough kernel branches will stop receiving backports from newer kernels.

reply
knallfrosch
1 year ago
[-]
Windows 8.1 was released in 2013, with support ending in 2023 - a 10-year support window (backporting security fixes). And Linux cuts from 6 to 2?

I think you just stressed TowerTall's point.

reply
jraph
1 year ago
[-]
I don't think TowerTall was speaking about this.

They are saying: "Recent versions of Windows can run old programs made for old versions of Windows. How?".

The Linux kernel is very good at this because of Linus Torvalds' "Do not break userspace" rule. The usual user space on top of the Linux kernel, not so much.

So yes, backward compatibility and backporting are different matters.

And Windows addresses them both indeed. Your parent commenter is not comparing Windows with Linux.

reply
kelnos
1 year ago
[-]
I think the point is more of a "so what?" Windows' backward compatibility is completely irrelevant and uninteresting here because we're not talking about backward compatibility, we're talking about long-term support.
reply
diffeomorphism
1 year ago
[-]
How do Windows kernel versions like 6.1.7600.20655 work? Maybe my Google-fu is weak, but from looking at

https://superuser.com/questions/296020/windows-kernel-name-v...

it seems that Microsoft does a mix of backports and upgrades:

> 6.1.7600.16385, 6.1.7600.16539, 6.1.7600.16617, 6.1.7600.16695, 6.1.7600.16792, 6.1.7600.20655, 6.1.7600.20738, 6.1.7600.20826, 6.1.7600.20941, 6.1.7601.17514, 6.1.7601.17592, 6.1.7601.21701

So this does not look like 10-year support for the initial version, but rather like switching between different LTS versions over that time. Is there any data from Microsoft itself on support duration, release dates, backports, and how to parse these numbers?

reply
kelnos
1 year ago
[-]
I don't think we can infer all that much from the version numbers without knowing Microsoft's internal processes around this sort of thing, and exactly what those version numbers mean in the context of Microsoft.

To me, though, 6.1.7600.16385 -> 6.1.7601.21701 does sound like long-term support for a single "version" (whatever that word means in this context).

reply
BiteCode_dev
1 year ago
[-]
Linux is not a company.

You pay Microsoft for that support.

You can have 10 years of support for Linux; just pay Red Hat for it like you pay Microsoft.

reply
kelnos
1 year ago
[-]
I don't think any of this is useful to compare like this.

Windows has had three major releases in 11 years. The Linux kernel does one every two months. Windows is an entire OS, with a userland and GUI. The Linux kernel is... a kernel.

The development and support cycles are naturally going to be very different for the two. And regardless, the mainline Linux kernel team is not beholden to anyone for any kind of support. Whatever they do is either voluntary, or done because someone has decided to pay some subset of developers for it to get done. Microsoft employs and pays the people who maintain their old Windows versions.

If no one is paying someone enough to maintain an old Linux kernel for six years, why would they choose to do it? It's mostly thankless, unrewarding work. And given that the pace of development for the Linux kernel is much much faster than that of Windows (or even just the Windows/NT kernel), the job is also much more challenging.

reply
Kipters
1 year ago
[-]
Microsoft does it too: Windows 11 versions have a 2-year support lifecycle. Windows 11 21H2 will reach end of support this coming October.

https://learn.microsoft.com/en-us/lifecycle/products/windows...

reply
Dalewyn
1 year ago
[-]
Windows 11 uses the NT 10.0 kernel that originally released with Windows 10 in 2015. NT 10.0 will be supported for well over a decade at this point, maybe even two.

NT6.1 (Windows 7) was also supported from 2009 to 2020 (11 years!), and NT 5.1 (Windows XP) was supported from 2001 through either 2014 (13 years!) or 2019 (18 years!) depending on support channel.

Microsoft will support a product for a decade if not more, assuming you're keeping up with security updates which they absolutely will backport, sometimes even beyond EOL if the fix is that important. Linux with 2 years is a bad joke, by comparison.

reply
diffeomorphism
1 year ago
[-]
That only tells me something about naming? I have no clue how many LTS or non-LTS versions there were between the one that shipped with Windows 10 and 10.0.22621.900. For all I know, that could be like Linux 2.something spanning all the way from 1996 to 2011, except that Linux 3.something had a major change of "NOTHING. Absolutely nothing." except for a shiny new number (https://en.wikipedia.org/wiki/Linux_kernel).

So, honest question: what does 10.0.22621.900 mean? Is 10.0.X.Y supported for a decade, or is that discontinued at some point and I am forced to upgrade to 10.0.X+10,Y-5?

reply
Dalewyn
1 year ago
[-]
10.0 is the current and latest NT kernel and it's LTS like the rest of Microsoft's Windows offerings. Much like how Linux 6.1 is an LTS.
reply
diffeomorphism
1 year ago
[-]
That does not answer the question? 10.0.22621.900 seems to be the latest and 10.0 just a prefix for all of them.

As an example from https://superuser.com/questions/296020/windows-kernel-name-v... lists

> 6.1.7600.16385, 6.1.7600.16539, 6.1.7600.16617, 6.1.7600.16695, 6.1.7600.16792, 6.1.7600.20655, 6.1.7600.20738, 6.1.7600.20826, 6.1.7600.20941, 6.1.7601.17514, 6.1.7601.17592, 6.1.7601.21701

Are these all just 6.1? Or is 7601 a different version than 7600? Could I choose to stay on 7600 and get backports or do I have to switch to 7601?

reply
Dalewyn
1 year ago
[-]
You could choose to stay on Windows 7, that is NT 6.1, and Microsoft will still backport updates from newer kernels such as NT 6.2 and NT 10.0 for the support life of NT 6.1.
reply
diffeomorphism
1 year ago
[-]
That is nice, but that was not my question?
reply
Dalewyn
1 year ago
[-]
>Are these all just 6.1?

Yes. The numbers after the Major.Minor numbers are just revision and build numbers of little consequence for most people.
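
For what it's worth, the split is purely mechanical; a quick sketch (the field names are just the commonly used ones, nothing official):

    /* Splitting an NT version string into its four fields. */
    #include <stdio.h>

    int main(void)
    {
        const char *ver = "6.1.7601.17514"; /* one of the versions listed above */
        unsigned major, minor, build, rev;

        if (sscanf(ver, "%u.%u.%u.%u", &major, &minor, &build, &rev) == 4)
            printf("kernel %u.%u, build %u, revision %u\n",
                   major, minor, build, rev);
        return 0;
    }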

Are you here for thoughtful conversation or are you just being a Micro$oft Windoze troll? Because I can't tell; I would presume most people here know how to read version numbers.

reply
diffeomorphism
1 year ago
[-]
I had to ask you three(!) times to finally get an answer to a simple question and then you go "major versions are obviously of little consequence; that is why they are called major". Clearly someone is trolling, but it isn't me.
reply
kelnos
1 year ago
[-]
Microsoft maintains their kernels/OSes for that long because people are willing to pay for that support.

It's pretty disrespectful to call Linux's process a "bad joke" when these developers mostly aren't getting paid to maintain major versions for any length of time that you'd consider more reasonable.

Meanwhile, if you do want longer-term support for a specific kernel+OS combo, IBM/Red Hat (among others) will be happy to sell it to you. You may think it's inefficient for each enterprise distro to have their own internal kernel fork that they maintain (rather than all contributing to a centralized LTS kernel), but that's the choice they've all seemingly collectively made. I guess they feel that if they're on the hook to support it, they want full and final say of what goes into it.

Also consider that Windows doesn't sell a kernel: they sell a full OS. In the Windows world, you don't mix and match kernel versions with the rest of the system. You get what Microsoft has tested and released together. With Linux, I can start with today's Debian stable and run it for years, but continue updating to a new major kernel version (self-building it if I want or need) every two months. The development and support cycle for an OS is very different than that of a kernel. You just can't compare the two directly. If you want to, compare Windows with RHEL.

Also-also consider that Windows and Linux are used in very different contexts. Microsoft's customers may largely care about different things than (e.g.) Red Hat's customers.

reply
Kipters
1 year ago
[-]
Even if it still reports 10.0, it's not the same kernel as it was in 2015
reply
heavyset_go
1 year ago
[-]
They asked questions and I answered them. I'm not trying to make a point, and I don't think that the OP was trying to make a point with their questions, either.
reply
13of40
1 year ago
[-]
When I worked in Windows they had entire teams dedicated to backwards compatibility testing and "sustained engineering". At the end of every release cycle there would be a multi-month effort to pack up all of our test automation and hand it off to another team who owned running it for the next several years. Plus SLAs with giant companies that could get you camped out on the floor of a datacenter with a kernel debugger if you pushed out a broken update. It was never a totally perfect system, but they invested a lot of effort (and money) into it.
reply
zik
1 year ago
[-]
A friend who worked at MS tells me that there's a huge amount of "if version" in their code. Apparently it's at the level where it's a big maintenance headache.
reply
spondyl
1 year ago
[-]
A running theory was that Windows 9 was skipped because of all the "version name starts with Windows 9" checks written for 95, 98, etc.

I don't think it was ever anything more than speculation though

reply
GrumpySloth
1 year ago
[-]
It was about code in third-party apps, not in Windows code.
reply
noirscape
1 year ago
[-]
IIRC it was partially confirmed when some Windows 11 beta builds started causing issues with software thinking it was being executed on 1.1.x (whose identifier internally apparently is 11).
reply
somsak2
1 year ago
[-]
Maintenance burden has to be felt somewhere. It's nice when the vendor does it for you.
reply
rcme
1 year ago
[-]
Maybe for a while. But when you add the maintenance burden to the code, it stays there, forever being felt. Over time, this degrades the product for everyone. And indeed, Windows can be unpleasant to use, not least of all because it feels like glued together legacy systems.
reply
xorcist
1 year ago
[-]
Microsoft sunsets their stuff all the time. It's just that they're competing with Google and Apple now, so they're actively trying to push this line to differentiate where they can.

Try to use only a ten year old printer driver sometime. It's a pain. Linux executes 20 year old code with no problem, as long as you kept all the pieces. How do they do it? Never merge anything that breaks known user space. Easy in theory, hard work in practice.

If you want to run applications from the 90s, you're likely to have more success with DOSBox or Wine than with plain Windows. Didn't Microsoft completely give up on backwards emulation a few years ago and start virtualizing it instead, with mixed success?

Of course, if you really want something famous for backwards compatibility, look at OS/400 and z/OS. It's all layers of emulation from the hardware up in order to guarantee that an investment in that platform is future proof. It's all expensive in the end of course, as someone has to pay for it, but they live well on the customers who value such things. Running 50 year old code there is commonplace.

reply
rahen
1 year ago
[-]
IBM i is stellar in design, compatibility, quality, efficiency, reliability, consistency and security. x86 and Linux pale in comparison.

I wish IBM hadn't fenced it so much like a walled garden. Had they issued inexpensive or free licenses for OS/400 targeted to students and developers, maybe also an emulator to develop conveniently on x86, their i platform would probably be more commonplace now, with quite a bit more available software.

What is killing their platform is not the price but mostly the lack of skills and software. And it's probably too late now to change course.

reply
booi
1 year ago
[-]
I'm a long time software engineer and do quite a bit of devops both in cloud but also have significant experience building on-prem and datacenter server clusters.

I have never heard of IBM i until this moment right now.

I assume this is specifically for their Power-series hardware? I've only ever seen Linux on Power hardware...

reply
rahen
1 year ago
[-]
You may have known IBM i under a different name such as iSeries or AS/400, as it has gone through many renamings.

Yes, it currently targets their Power series, although it's fairly hardware independent. As a matter of fact AS/400 binaries don't even care what CPU they run on, as there are several abstraction layers underneath, namely XPF, TIMI and SLIC. It's a bit like a native, hardware-based JVM with the OS being also the SDK. Another peculiarity is that everything is an object in "i", including libraries, programs and files.

But mostly, it requires close to no sysadmin work. Just turn it on, start the services, and leave it alone for years if needed.

reply
jeroenhd
1 year ago
[-]
Microsoft dropped 16-bit support on 64-bit machines, but that was because 16-bit support on 32-bit was already using emulation/virtualisation, and so did 32-bit on 64-bit. Emulating a 16-bit emulator inside the 32-bit emulator would be too much, even for Microsoft.

Microsoft does drop backwards compatibility sometimes, usually because the backwards compatibility layer leaves a huge security risk.

reply
romanovcode
1 year ago
[-]
Not sure about printers but 20+ year old games run surprisingly fine on Windows 11
reply
aerique
1 year ago
[-]
And if they don't, they'll run fine on Wine.
reply
justsomehnguy
1 year ago
[-]
> Didn't Microsoft completely give up on backwards emulation a few years ago

Bwahah! AppCompat traces to Windows 95.

reply
tored
1 year ago
[-]
Yes, old printer drivers for Windows can be a problem, often because of the 32- to 64-bit switch. I have that exact problem with an old printer that still works, but I can't get it to install on 64-bit.

20 year old software is rarely a problem, I'm running Office XP on Windows 10 without problems.

reply
runeks
1 year ago
[-]
They achieve it by paying developers to do it (instead of relying on unpaid volunteers).
reply
phendrenad2
1 year ago
[-]
Microsoft ships an entire OS and apps too. Hard to break an API when the teams using it can walk down the hallway and yell at you.
reply
t0mas88
1 year ago
[-]
They go much further than that, checking and fixing compatibility with all kinds of third party software.
reply
benj111
1 year ago
[-]
There's a story about them getting SimCity working on some new OS.

My Google-fu is failing me though.

reply
knallfrosch
1 year ago
[-]
Search for "SimCity" here

How Microsoft Lost the API War https://www.joelonsoftware.com/2004/06/13/how-microsoft-lost...

reply
worksonmine
1 year ago
[-]
Are you really comparing a multibillion dollar company to an open source project?

Also this has nothing to do with backwards compatibility, it's about supporting older kernels with security fixes and similar. The decision is a pragmatic one to lessen the burden on the unpaid volunteers.

Like others have mentioned, if a company needs a specific kernel, they should pay up. Or use Windows.

Don't compare apples to oranges.

reply
caskstrength
1 year ago
[-]
Did you read the article? It has nothing to do with backward compatibility.
reply
vermaden
1 year ago
[-]
On the contrary, the FreeBSD project offers a stable 'LTS' release for 5 years each.

What I mean by that is each 'major' version with a stable API/ABI has a life span of about 5 years - like 5 years of the 12.x version, 5 years of the 13.x version, etc.

... and all that with only about 1/10 of the Linux contributor count (rough estimate).

reply
phicoh
1 year ago
[-]
A difference is that the FreeBSD project has to maintain two versions most of the time, and sometimes three.

At the moment there are 12.x and 13.x. 14.x is in beta, so soon there will be three. But 12.x is expected to be dropped at the end of this year, so in 2024 it will be back to two versions.

As far as I can tell there are a lot more Linux kernels in LTS at the moment.

reply
throw0101b
1 year ago
[-]
> But 12.x is expected to be dropped […]

Not "expected"; rather, that is the policy: the five years are ending.

* https://www.freebsd.org/security/#sup

December 2018:

* https://www.freebsd.org/releases/12.0R/

* https://www.freebsd.org/releases/

reply
kelnos
1 year ago
[-]
FreeBSD's major release cycle is much longer than that of the Linux kernel (which makes sense, since FBSD has to maintain the entire OS, not just the kernel). Right now they have two active major release series, and there's a new one every 2.5 years or so.

The Linux kernel has a new major release every two months, and it looks like LTS kernels are one out of every five or six major versions, so that's a new LTS kernel every 10-12 months; right now they have six LTS kernels to maintain.
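
Back-of-the-envelope, using those (assumed, approximate) figures:

    /* Rough arithmetic only; cadence and lifetime figures are the
     * approximations from the paragraph above, not official policy. */
    #include <stdio.h>

    int main(void)
    {
        double months_per_release = 2.0;  /* new mainline kernel every ~2 months */
        double releases_per_lts   = 6.0;  /* roughly one LTS pick per year */
        double lts_lifetime_years = 6.0;  /* the old 6-year LTS window */

        double months_per_lts = months_per_release * releases_per_lts;
        double concurrent_lts = lts_lifetime_years * 12.0 / months_per_lts;

        printf("new LTS every ~%.0f months, ~%.0f series maintained at once\n",
               months_per_lts, concurrent_lts);
        return 0;
    }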

Also I expect that the Linux kernel develops at a much more rapid pace than the FreeBSD kernel. That's not a knock on FreeBSD; that's just the likely reality. Is that development pace sufficient to support thousands of different embedded chips? Does the FreeBSD project even want to see the kind of change in their development process that an influx of new embedded developers would likely entail?

reply
pengaru
1 year ago
[-]
> ... and all that with having about only 1/10 of the Linux people count (rough estimate).

It's easy to offer a long LTS release cadence when your project is stagnant.

reply
ktaylora
1 year ago
[-]
It's pretty active, actually. Look at the release notes for FreeBSD major versions. Some folks think the release engineering team is too active and that major versions should be supported for more than ~5 years.
reply
vermaden
1 year ago
[-]
If this [1] is stagnant then I do not know what is not ...

[1] https://cgit.freebsd.org/src/log/

reply
pengaru
1 year ago
[-]
That's all of FreeBSD including userspace, not just the kernel. In Linux land you see more than that level of activity in just the kernel:

https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/lin...

I didn't say FreeBSD was abandonware, its kernel development has just been relatively stagnant for decades vs. Linux. Which shouldn't come as particularly surprising considering how much more adoption and investment there's been surrounding Linux over that time period.

reply
ladyanita22
1 year ago
[-]
Yeah FreeBSD is not happening, sorry.
reply
Gud
1 year ago
[-]
What do you mean? I've witnessed many exoduses in technology, the most obvious ones being MySQL > PostgreSQL. And from PHP to JS & Python.

I don't think it's too far fetched for people clinging to their favorite GNU/Linux distro to switch to FreeBSD, especially on the server side where in my opinion, FreeBSD is the superior choice.

I think the world is better off when there are choices and the Linux near mono culture is not good for the FOSS movement, in my opinion.

reply
PedroBatista
1 year ago
[-]
Good thing it already happened and keeps going.
reply
SuperNinKenDo
1 year ago
[-]
6 years seems like an incredibly long time to me. It's a bit of a shame, but it looks like industry really didn't come out to support the idea.

2 years, on the other hand, seems incredibly short? Am I wrong?

reply
dathinab
1 year ago
[-]
6 years is way too short for some use cases, like phones.

Sadly it's not rare that proprietary drivers for phone hardware are basically written once and then hardly maintained, and in turn only work with a small number of Linux kernel versions.

So for some hardware it might mean that when the phone gets released, you have noticeably less than 6 years of kernel support left.

Between starting to build a phone and releasing it, 2 years can easily pass.

So that means from release you can provide _at most_ 4 years of kernel security patches etc.

But the dates tend not to align that well, so maybe it's just 3 years.

And then you sell your phone for more than one year, right?

In which case less than 2 years can remain between the customer buying your product and you no longer providing kernel updates.

That is a huge issue.

I mean, think about it: if someone is a bit tight on money and buys a slightly older phone, they probably aren't aware that their phone stops getting security updates in a year or so (3 years of software support since release, but it's a 2-year-old phone), at which point using it is a liability.
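
Putting rough numbers on it (the figures are just the assumptions above, not any vendor's actual schedule):

    /* Rough support-window arithmetic for a hypothetical phone. */
    #include <stdio.h>

    int main(void)
    {
        int lts_window_years  = 6; /* upstream LTS support for the chosen kernel */
        int development_years = 2; /* SoC bring-up + phone development before launch */
        int years_still_sold  = 2; /* phone still sold new, or bought slightly used */

        int left_for_late_buyer = lts_window_years - development_years - years_still_sold;
        printf("kernel updates left for a late buyer: ~%d year(s)\n", left_for_late_buyer);
        return 0;
    }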

EDIT: The answer, in my opinion, is not to have even longer LTS, but to properly maintain drivers and in turn be able to do full kernel updates.

reply
Hackbraten
1 year ago
[-]
That's one of the reasons Purism went out of their way to upstream all drivers for the Librem 5. The distribution can upgrade kernels pretty much whenever it wants.

The downside can be painful though: sourcing components with such properties is hard. You basically have to cherry-pick them from all over the world because they're so few and far between.

That's one of the reasons why the Librem 5 is so thick and consumes so much energy.

reply
benj111
1 year ago
[-]
I don't really get how this continues.

Vendor A sells a part with a driver in the mainline kernel.

Vendor B doesn't, so on top of the part you have to spend time bodging an untested driver into a custom kernel.

As a buyer, why would you go for the second?

reply
Hackbraten
1 year ago
[-]
Contributing a driver to mainline Linux takes significant time and effort up front. You can't just throw anything over the Linux fence and expect already-overworked kernel maintainers to keep tending to it for decades to come.

Slapping together a half-working out-of-tree kernel module and calling it a day is not only much cheaper; it also buys you the time you need to write the new driver for next year's hot shit SoC that smartphone vendors demand.

reply
benj111
1 year ago
[-]
Precisely.

What would you want as a buyer? A driver that has already demonstrated it is good enough to be included in the kernel, or one of unknown quality that may need extra work to integrate with the kernel?

I get why suppliers don't want to do the work. I just don't understand why there isn't enough value-add for buyers to justify the premium for the benefits of a mainline driver, and/or why sellers don't try to capture that premium.

reply
kelnos
1 year ago
[-]
I don't think buyers are actually going to pay enough for the sellers to justify the added cost. Remember that the buyers have to pass their costs on to their end customers (e.g. consumer phone purchasers), and those people won't accept all phones becoming $50 more expensive or whatever.

Also consider the cultural context. The culture of hardware manufacturers is much different than that of software vendors. They don't view software as a product, but more a necessary evil to make their hardware work with existing software infrastructure. They want to spend as little time on it as possible and then move onto the next thing.

I'm not endorsing this status quo, merely trying to explain it.

reply
benj111
1 year ago
[-]
The way it seems to me is that a driver takes X hours to make, integrate, etc. It's cheaper for the vendor to spend those X hours than for each individual purchaser to spend those X hours.
reply
noirscape
1 year ago
[-]
The easy answer is that buyers largely don't care. Most people get their phones from their carrier, so that's the main target. They get a data plan that comes bundled with a phone and pay it off over 2 years. After 2 years they get a new plan with a new phone.

Caring about long-term maintenance isn't what most buyers do. Going SIM-only on your data plan is out of the ordinary.

Also, in my experience people largely pick their phones based on surface-level hardware rather than long-term reliability. Hence why Apple keeps putting fancier cameras into every iPhone, even though I'm pretty sure a good chunk of customers don't need a fancy camera. Heck, just getting a phone that fits in my hand was a struggle, because buyers somehow got convinced that bigger phone = better phone, and now most smartphones on the market are just half-size tablets.

That trend at least seems to be somewhat reversing though.

reply
vkoskiv
1 year ago
[-]
The trend is sadly not reversing fast enough. Apple already discontinued their line of slightly too big phones (mini series), and now they only sell oversized phablets. I might not have viable iOS-based hardware options when I upgrade in 2-3 years, and I'm not comfortable switching to an operating system made by an adtech company. I do hope they go back to smaller sizes before then. Kind of baffling to me how Apple otherwise puts a lot of effort into accessibility, but their main line of phones are awkward and uncomfortable to hold even for a fully able-bodied person with average size hands.
reply
circuit10
1 year ago
[-]
I think by “buyer” they mean the phone manufacturer buying parts
reply
kelnos
1 year ago
[-]
I agree, but consider that the buyer must also consider what the end-customer cares about. The buyer is not going to pay the chip manufacturer extra for mainlined (or at least open source) drivers unless their end-customers are asking for that (since those costs will be passed on to the customer). And outside of niche products like Librem's, the vast majority of customers don't even know about chipset drivers, let alone care.
reply
pritambaral
1 year ago
[-]
Sadly, far too often, software support simply nevers enter the picture in sourcing decisions. Back when I was privy to this process at an OEM, the only factors that mattered were:

1. Hit to the BOM (i.e. cost); and

2. Chip suppliability (i.e., can we get enough pieces, by the time we need them, preferably from fewer suppliers).

In the product I was involved in building (full OS, from bootloader to apps), I was lucky that the hardware team (separate company) was willing to base their decisions on my inputs. The hardware company would bear the full brunt of BOM costs, but without software the hardware was DOA and wouldn't even go to manufacturing. This symbiotic relationship, I think, is what made it necessary for them to listen to our inputs.

Even so, I agreed software support wasn't a super strong input because:

1. There's more room for both compromises and making up for compromises, in software; and

2. Estimating level of software support and quality is more nuanced than just a "Has mainline drivers?" checkbox.

For example, RPi 3B vs. Freescale iMX6. The latter had complete mainline support (for our needs) but the former was still out-of-tree for major subsystems. The RPi was cheaper. A lot cheaper.

I okayed RPi for our base board because:

1. Its out-of-tree kernel was kept up-to-date with mainline with a small delay, and would have supported the next LTS kernel by the time our development was expected to finish (a year);

2. Its out-of-tree code was quite easy (almost straightforward) to integrate into the Gentoo-based stack I wanted to build the OS on; and

3. I was already up-and-running with a prototype on RPi with ArchLinuxARM while we were waiting for iMX6 devkits to be sourced. If ArchLinuxARM could support this board natively, I figured it wouldn't be hard to port it to Gentoo; turned out Gentoo already had built-in support for its out-of-tree code.

Of course, not every sourcing decision was as easy as that. I did have to write a driver for an audio chip because its mainline driver did not support the full range of features the hardware did. But even in that case, the decision to go ahead with that chip was only made after I was certain that we could write and maintain said driver.

reply
kelnos
1 year ago
[-]
Yup, exactly. I last worked in this field in 2009, and BOM cost (tempered with component availability) was king. This was also a time when hardware was much less capable, so they usually ran something like vxWorks (or, ::shudder::, uClinux). Building the cheapest product that could get to market fastest (so as to beat competitors to the latest WiFi draft standard) was all that mattered.

Your Raspberry Pi example is IMO even more illustrative than you let on. I'll reiterate that even that platform is not open and doesn't have a full set of mainlined drivers, after a decade of incredibly active development, by a team that is much more dedicated to openness than most other device manufacturers. Granted, they picked a base (ugh, Broadcom) that is among the worst when it comes to documentation and open source, but I think that also proves a point: device manufacturers don't have a ton of choice, and need to strike a balance between openness and practical considerations. The Raspberry Pi folks had price and capability targets to go with their openness needs, and they couldn't always get everything they wanted.

reply
kelnos
1 year ago
[-]
Because you don't have much choice, and each choice has trade offs. If you pick the part from vendor A, you get the mainlined driver, but maybe you get slower performance, or higher power consumption, or a larger component footprint that doesn't work with your form factor.

And most vendors are like vendor B because they're leading the pack in terms of performance, power consumption, and die size (among other things) and have the market power to avoid having to do everything their customers want them to do.

Still, some headway has been made: Google and Samsung have been gradually getting some manufacturers (mainly Qualcomm) to support their chips for longer. It's been a slow process, though.

As for mainlining: it's a long, difficult process, and the vendor-B types just don't care, and mostly don't need to care.

reply
rcxdude
1 year ago
[-]
Because the buyers are consumer hardware companies. This means a) there's an expectation that software works just like their hardware: they put it together once and then throw it onto the market; updating or supporting it is not a particular consideration, unless they re-engineer something significantly to reduce costs. And b) the bean-counters and hardware engineers have more sway than the software engineers: lower cost, better battery life, and features on paper will win out over good software support over the life of the product.
reply
dathinab
1 year ago
[-]
Because the vendor doesn't care about giving the customer long-term software support.

Many consumers aren't aware of the danger an unmaintained/non-updatable software stack introduces, or that their phone (mainly) is unmaintained in the first place.

So the phone vendor buys from B, because A is often just not an option (not available for the hardware you need), and then quietly dumps the problem on the user.

There are some exceptions. E.g. Fairphone is committed to quite long-term software support, so they try to use vendor-A types, or vendor-B types with a contractual long-term commitment to driver maintenance.

But in the phone space (and, implicitly, IoT devices using phone parts), sadly the only available option for the hardware you need is often vendor B, where long-term driver maintenance contracts are just not affordable unless you operate at the scale of a larger phone vendor.

E.g. as far as I remember, Fairphone had to do some reverse engineering/patching to continue supporting the FP3 until today (and, I think, for another 2 or so years), and I vaguely remember they were somewhat lucky that open source driver work for some parts was already ongoing and got some support from some of the vendors. For the FP5 they managed to have closer cooperation with Qualcomm, allowing them to provide a 5-year extended warranty and target 8 years of software support (since the phone's release).

So without phone producers either being legally forced to provide a certain amount of software support (e.g. 3 years after the last first-party sale), or at least being visibly transparent upfront about the amount of software support they provide and informing users when the software is no longer supported, I don't expect to see any larger industry-wide changes there.

Though some countries are considering laws like that.

reply
charcircuit
1 year ago
[-]
Not a lot of software requires a bleeding edge kernel. If vendor B has a superior chip at a viable price it makes sense to go with them.
reply
mrweasel
1 year ago
[-]
> So for some hardware it might mean that when the phone gets released, you have noticeably less than 6 years of kernel support left.

Or they could just upgrade the kernel to a newer version. There's no rule that says the phone needs to run the same major kernel version for its entire lifetime. The issue is that if you buy a sub-€100 phone, how exactly is the manufacturer supposed to finance the development and testing of newer versions of the operating system? It might be cheap enough to just apply security fixes to an LTS kernel, but porting and re-validating drivers for hardware that may no longer even be manufactured quickly becomes unjustifiably expensive for anything but flagship phones.

reply
dathinab
1 year ago
[-]
They often can't.

Proprietary drivers for phone parts are often not updated to support newer kernels.

reply
kelnos
1 year ago
[-]
That's the point: these drivers should get updated. Obviously the low-level component manufacturers don't want to do this, but perhaps we need to find a way to incentivize them to do so. And if that fails, to legally force them.
reply
xvilka
1 year ago
[-]
> Sadly it's not rare that proprietary drivers for phone hardware are basically written once and then hardly maintained, and in turn only work with a small number of Linux kernel versions.

These manufacturers should be punished by the lack of LTS and the need to upgrade, precisely because of that laziness and incompetence.

reply
charcircuit
1 year ago
[-]
Why not blame the kernel developers for being lazy and incompetent for not offering a stable API for driver developers to use?

You don't see Windows driver developers having their drivers broken by updates every few months.

reply
mschuster91
1 year ago
[-]
> You don't see Windows driver developers having their drivers broken by updates every few months.

At the cost of Windows kernel development being a huge PITA because effectively everything in the driver development kit becomes an ossified API that can't ever change, no matter if there are bugs or there are more efficient ways to get something done.

The Linux kernel developers can do whatever they want to get the best (most performant, most energy-saving, ...) system because they don't need to worry about breaking someone else's proprietary code. Device manufacturers can always do the right thing and provide well-written modules upstream - but many don't, because (rightfully) the Linux kernel team demands good code quality, which is expensive AF. Just look at the state of most non-Pixel/Samsung code dumps; if you're dedicated enough, you'll find tons of vulnerabilities and code smells.

reply
charcircuit
1 year ago
[-]
>no matter if there are bugs or there are more efficient ways to get something done.

Stability is worth it. After 30 years of development, the kernel developers should be able to come up with a solid design for a stable driver API that they don't expect to radically change in a way they can't support.

reply
kelnos
1 year ago
[-]
Stability is worth it to you. Others can hold different opinions and make different decisions, and until and unless you -- or someone like minded -- becomes the leader of a major open source kernel project used in billions of devices, the opinions of those others will rule the day.
reply
charcircuit
1 year ago
[-]
All developers love stability of the platform they are building on. Good platforms recognize this.
reply
kelnos
1 year ago
[-]
Because the kernel developers are not beholden to chipset manufacturers who want to spend the shortest possible time writing a closed-source driver and then forgetting about it. They're there to work on whatever they enjoy, as well as whatever their (paying) stakeholders care about.

The solution to all this is pretty simple: release the source to these drivers. I guarantee members of the community -- or, hell, companies who rely on these drivers in their end-products -- will maintain the more popular/generally-useful ones, and will update them to work with newer kernels.

Certainly the ideal would be to mainline these drivers in the first place, but that's a long, difficult process and I frankly don't blame the chipset manufacturers for not caring to go through it.

Also, real classy to call the people who design and build the kernel that runs on billions of devices around the world "lazy and incompetent". Methinks you just don't know what you're talking about.

reply
charcircuit
1 year ago
[-]
Chipset providers are not interested in showing off their trade secrets to the entire world.

>Also, real classy to call the people who design and build the kernel that runs on billions of devices around the world "lazy and incompetent"

I never did that. The parent comment did call manufacturers that, and I suggested that the kernel developers are partly at fault.

reply
dathinab
1 year ago
[-]
It's less the kernel developers than a certain subset of companies that only provide proprietary drivers.

Most Linux kernel changes are limited enough that updating a driver is not an issue, IFF you have the source code.

That is how a huge number of drivers are maintained in-tree; if maintainers had to make major changes to every driver each time anything changed, they wouldn't really get anything done.

Only if you don't have the source code is driver breakage an issue.

But Linux's approach to proprietary drivers has always been that there is no official support when there is no source code.

reply
charcircuit
1 year ago
[-]
Why stop at kernel space? You might as well break all of user space every so often. If everything is open source, it shouldn't be an issue to fix all the broken Linux software, right?

People don't want you to break their code.

reply
kelnos
1 year ago
[-]
> You might as well break all of user space every so often. If everything is open source it shouldn't be an issue to fix all broken Linux software right?

What an uninformed take.

The Linux kernel has a strict "don't break userspace" policy, because they know that userspace is not released in lockstep with the kernel. Having this policy is certainly a burden on them to get things right, but they've decided the trade-offs make it worth it.

They have also chosen that the trade-offs involved in having a stable driver API are not worth it.
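
A rough sketch of what that distinction means in practice (a toy example of my own, not anything from the kernel tree): the stable side is the userspace syscall ABI, so a program like the one below keeps working across kernel upgrades, while the in-kernel interfaces that out-of-tree modules compile against come with no such promise.

    /* Toy illustration: raw syscall numbers and semantics are part of the
       frozen userspace ABI, so a binary built years ago behaves the same on
       a new kernel. In-kernel APIs used by modules have no such guarantee. */
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    int main(void) {
        long pid = syscall(SYS_getpid);   /* stable across kernel releases */
        printf("pid via raw syscall: %ld\n", pid);
        return 0;
    }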

> People don't want you to break their code.

Then maybe "people" (in this case device manufacturers who write crap drivers) should pony up the money and time to get their drivers mainlined so they don't have to worry about this problem. The Linux kernel team doesn't owe them anything.

reply
charcircuit
1 year ago
[-]
>because they know that userspace is not released in lock step with the kernel

They also know out-of-tree drivers are not released in lockstep with the kernel.

>They have also chosen that the trade offs involved in having a stable driver API are not worth it.

It sucks for driver developers to not have a stable API, regardless of whether the kernel developers think it's worth it or not.

>should pony up the money and time to get their drivers mainlined so they don't have to worry about this problem.

They don't want to reveal their trade secrets.

>The Linux kernel team doesn't owe them anything.

Which is why Google ended up offering a stable API to driver developers.

reply
xvilka
1 year ago
[-]
It happens all the time with glibc, GLib, GTK, Qt, etc.
reply
charcircuit
1 year ago
[-]
Those breaking changes are often years apart, which is much better than what the kernel currently offers.
reply
mschuster91
1 year ago
[-]
> sadly it's not rare that some phone hardware proprietary drivers are basically written once and then hardly maintained and in turn only work for a small number of Linux kernel versions.

The worst part of all of this is: Google could go and mandate that the situation improve via the Google Play Store license - only grant it if the full source code for the BSP is made available and the manufacturer commits to upstreaming the drivers to the Linux kernel. But they haven't, and so the SoC vendors don't feel any pressure to move to sustainable development models.

reply
kelnos
1 year ago
[-]
Google realistically can't do this. "The SoC vendors" is basically Qualcomm (yes, I know there are others, but if Qualcomm doesn't play ball, none of it matters).

Google has tried to improve the situation, and has made some headway: that's why they were able to get longer support for at least security patches for the Pixel line. Now that they own development of their own SoC, they're able to push that even farther. But consider how that's panned out: they essentially hit a wall with Qualcomm, and had to take ownership of the chipset (based off of Samsung's Exynos chip; they didn't start from scratch) in order to actually get what they want when it comes to long-term support. This should give you an idea of the outsized amount of power Qualcomm has in this situation.

Not many companies have the resources to do what Google is doing here! Even Samsung, who designed their own chipset, still uses Qualcomm for a lot of their products, because building a high-performance SoC with good power consumption numbers is really hard. Good luck to most/all of the smaller Android manufacturers who want more control over their hardware.

(Granted, I'm sure Google didn't decide to build Tensor on their own solely because of the long-term support issues; I bet there were other considerations too.)

reply
eimrine
1 year ago
[-]
After your message I started thinking about an LTS with 6 years of support, but with a new LTS every 3 years.
reply
inferiorhuman
1 year ago
[-]
6 years may seem like a long time, but check out what the competition is doing. Oracle is supporting Solaris 10 for 20 years, 11.4 for 16 years (23 years if you lump it in with 11.0). HP-UX 11i versions seem to get around 15 years of support.

It really depends on what you're doing, a lot of industries may not need such long-term support. 6 years seems like a happy medium to me, but then again I'm not the one supporting it. I expect the kernel devs would be singing a different tune if people were willing to pay for that extended support.

https://upload.wikimedia.org/wikipedia/en/timeline/276jjn0uo...

reply
wkat4242
1 year ago
[-]
Are those really competition anymore?

They're just legacy now IMO, and their long-term support requirements are a result of this: companies that haven't gotten rid of them by now aren't likely to do so any time soon.

I hate seeing them go. I wasn't such a fan of Solaris, but I was of HP-UX. But its days are over. It doesn't even run on x86 or x64, and HP has been paying Intel huge money to keep Itanium on life support, which is running out now if it hasn't already.

At least Solaris had an Intel port, but it too is very rare now.

reply
kube-system
1 year ago
[-]
There's a lot of people who haven't gotten rid of old Linux systems these days too. RHEL 6 from 2010 is still eligible for extended support.
reply
structural
1 year ago
[-]
There's still a decent population of RHEL 5 systems in the wild. Last year I was offered an engagement (turned down for a few reasons) to help a company upgrade several hundred systems from RHEL 5 to RHEL 6 and start planning for a future rollout of RHEL 7.

Outside of tech focused companies, 10+ year old systems really are the norm.

reply
SoftTalker
1 year ago
[-]
> Outside of tech focused companies, 10+ year old systems really are the norm.

It's because outside of tech companies, nobody cares about new features. They care about things continuing to work. Companies don't buy software for the fun of exploring new versions, especially when those versions bring frustratingly pointless cosmetic changes that keep hitting their training budgets.

Many companies would be happy with RHEL 5 or Windows XP today from a feature standpoint, if they weren't security liabilities.

reply
wkat4242
1 year ago
[-]
The problem with "things continuing to work" is that many security fixes require architectural updates too. That is really why it's so hard to do LTS. It's not only about wanting new features.
reply
inferiorhuman
1 year ago
[-]
At megacorp (years ago) we were transitioning to CentOS 7 (from 6) and just starting to wind down our 32-bit windows stuff in AWS. I'm sure there are plenty of legacy Linux systems out there, but I wonder how many folks are actually paying for them.

CentOS/RHEL 6 was already pretty long in the tooth, but being the contrarian I am, I was not looking forward to the impending systemd nonsense.

reply
loxias
1 year ago
[-]
> At megacorp (years ago) we were transitioning to CentOS 7 (from 6)...

Today at work, we finally got the OK to stop supporting CentOS 7, for new releases.

reply
baq
1 year ago
[-]
It's a nightmare for developers if you get stuck with infrastructure on such dinosaurs and need to deploy a fresh new project. Anything made in the last 3-5 years likely won't build against the ancient OpenSSL, even if you can get it to compile otherwise. Docker may not run. Postgres may not run. Go binaries? Yeah, those also have issues. It's like putting yourself into a time capsule with unbreakable windows - you can see how much progress has been made and how much easier your life could've been, but you're stuck here.

Old systems are stable, but there’s a fine line between that and stagnation. Tread carefully.

reply
pjmlp
1 year ago
[-]
That is a common workday in enterprise consulting.

Most of our .NET workloads are still on .NET Framework, and only now are we starting to see Java 17 for new projects, and only thanks to projects like Spring pushing for it.

Ah, and C++ will most likely be a mix of C++14 and C++17, for new projects.

reply
worthless-trash
1 year ago
[-]
What feature did they need in el7 that wasn't there in el9? What was their logic?
reply
LinuxBender
1 year ago
[-]
That's 2 years of the upstream LTS kernel. I would expect that major Linux distributions such as Red Hat's RHEL and Canonical's Ubuntu would continue to do their extended patch cycles against one of the upstream snapshots as they have done in the past. I think 2 years for upstream LTS is probably fine if the vendor patching methodology remains true. This also assumes that smaller distributions such as Alpine are more commonly used in very agile environments such as K8s, Docker Swarm, etc. Perhaps that is a big assumption on my part.
reply
kube-system
1 year ago
[-]
Depends on where the computer is at, I guess. On a desk, 6 years is a pretty long time. In an industrial setting, 6 years is not very long of a lifecycle.
reply
dathinab
1 year ago
[-]
consider that it's 6 years after the release of the kernel version

so likely <5 years since the release of the hardware in the US

likely <4 years since the release of the hardware outside of the US

likely <3 years since you bought the hardware

and if you buy older phones, having only a year or so of proper security updates left is not that unlikely

So for phones you would need something more like an 8- or 10-year LTS, or, well, proper driver updates for proprietary hardware. In that case 2 years can be just fine, because in general only drivers are affected by kernel updates.

reply
Ekaros
1 year ago
[-]
It all comes down to the cycle. When do you enter that 6-year LTS? Is there a new LTS every year, or every other year? If you enter 2 years in, or even 4 years in, how much support do you have left?

Do you jump LTS releases, so the one you are on is ending and there is a brand new one available? Or do you go to the one before it and have possibly only 2 or 4 years left...

reply
Gigachad
1 year ago
[-]
What kind of breaking change would take longer than 2 years to deal with? The reality is that people wait out the entire 6-year period and then do the required months of work at the end. If you make the support period 2 years, they will just start working on it sooner.
reply
Retric
1 year ago
[-]
Many, perhaps most, projects don't last 6 years, so punting can save people a great deal of time.
reply
josephcsible
1 year ago
[-]
With the "never break userspace" guarantee, is there ever a reason to want to be on an LTS kernel instead of the latest stable one, other than proprietary kernel modules?
reply
craftkiller
1 year ago
[-]
Yes, ZFS. When new, incompatible kernels are released, the DKMS build for the ZFS kernel modules will fail. By switching to an LTS kernel, I no longer have to worry, because my kernel lags so far behind.

The alternative is using a pre-built package repo whose package dependencies prevent the kernel from updating to an incompatible version. I lived that way for years and it is an awful experience.

reply
XorNot
1 year ago
[-]
As a ZFS user, this really isn't a good enough reason IMO. "Never break userspace" really should be enough for most people.
reply
phone8675309
1 year ago
[-]
This wouldn't be a problem if OpenZFS didn't have a dogshit license.
reply
craftkiller
1 year ago
[-]
The same could be said for Linux. This isn't a problem on FreeBSD.
reply
josephcsible
1 year ago
[-]
> The same could be said for Linux.

No, because the CDDL was intentionally written to be incompatible with the GPL.

reply
craftkiller
1 year ago
[-]
The original intent of the license authors is irrelevant. The aspect of the CDDL that makes it incompatible with the GPL is present in the GPL too. Neither license is more or less "dogshit" than the other; in that respect they are the same. The difference is that the CDDL only applies to code written under the CDDL, whereas the GPL spreads to everything it touches.
reply
xorcist
1 year ago
[-]
If Linux had been under the CDDL, ZFS would have chosen another license. Sun management at the time saw Linux as their primary competitor, and ZFS and DTrace were the crown jewels of Solaris. Just open-sourcing them was reportedly a long internal struggle for the people involved, and there's no chance they would have let the Linux distributors use them for free.

Good or bad, it's the result of another era. Still impressive stuff. It's only recently that things like btrfs and eBPF became usable enough, and not in all situations.

reply
bananapub
1 year ago
[-]
> The original intent of the license authors is irrelevant.

What on earth is that supposed to mean? ZFS is not in the Linux kernel because Sun and then Oracle deliberately decided it would not be, and they continue to want that to be the case. The Linux kernel can't be relicensed; (the Oracle and Sun code in) ZFS could be relicensed in ten minutes if they cared.

> The aspect of the CDDL that makes it incompatible with GPL is present in the GPL too. Neither license is more or less "dogshit" than the other, they are the same. The difference is the CDDL only applies to code written under the CDDL, whereas the GPL spreads to everything it touches.

lol

reply
craftkiller
1 year ago
[-]
> what on earth is that supposed to mean?

It means that the original authors could have intended to write a recipe for chocolate chip cookies and somehow accidentally written the CDDL instead. That wouldn't change a thing, and it wouldn't make the CDDL any better or worse, since it would have exactly the same words. The intent is irrelevant; all that matters is the end result.

> ZFS could be relicenced in ten minutes if they cared.

Indeed, I hope that they do. A copyfree license like the BSD licenses would make ZFS significantly more popular, and I think it would have saved all the effort sunk into btrfs had it been done earlier.

reply
pabs3
1 year ago
[-]
That aspect of the GPL is what any software end-user should want: all the source code, for every part of what you are using.

It is a shame Oracle hasn't released a CDDLv2 that provides GPL compatibility; they could solve the incompatibility quite easily, since CDDLv1 has an auto-update clause by default. I think some of OpenZFS has CDDLv1-only code, but that could probably be removed or replaced.

reply
ghaff
1 year ago
[-]
And different people remember or choose to interpret the original intent differently.
reply
raverbashing
1 year ago
[-]
Contrary to what ZFS users think, very few people care about it.

And on mobile, I'd say the number goes to zero.

reply
kaylynb
1 year ago
[-]
Sometimes breaking changes for ZFS are backported to LTS anyway.
reply
craftkiller
1 year ago
[-]
Oh :-\ Thanks for the warning, I guess I'll have to remain vigilant. Switching to LTS certainly reduces the frequency of incompatibilities significantly, so I'm definitely going to remain on it, but I guess it's not the perfect fix I thought it was.
reply
o11c
1 year ago
[-]
Some kind of breakage is pretty common in random recent kernels. It might not affect you this time, but do you really want to risk it?

So yes - you do want to be on an LTS kernel. But you only need to stay there for about a year until the next one is released and you can test it for a bit before deploying.

reply
josephcsible
1 year ago
[-]
> do you really want to risk it?

That question applies to LTS kernels too. Do you really want to risk that a backport of an important fix won't introduce a problem that mainline didn't have? Do you really want to risk that there are no security vulnerabilities in old kernels that won't get noticed by maintainers since they were incidentally fixed by some non-security-related change in mainline?

reply
sambazi
1 year ago
[-]
So you want to trade the tailored and curated fixes in LTS for accidental and experimental fixes in bleeding-edge.

Go ahead.

reply
josephcsible
1 year ago
[-]
My work uses "tailored and curated fixes in LTS" and at home I use "accidental and experimental fixes in bleeding-edge". I've had way more stuff break because of the former than because of the latter, and not just with the kernel.
reply
xorcist
1 year ago
[-]
Lots of third-party software hooks the kernel for various things, such as drivers for enterprise RAID or proprietary networking, and whatever it is Nvidia does these days. Those go far beyond the userspace interface and depend on binary interfaces staying unchanged. This is for them.

If you are running userspace applications only, on upstream-supported hardware, there is no reason to stay with long-term-support kernels; just follow the regular stable series, which is much easier for everyone.

reply
Jhsto
1 year ago
[-]
GPU applications are often less buggy on LTS (kernel, but also package sets)
reply
josephcsible
1 year ago
[-]
Isn't the reason for kernel-related bugginess in GPU applications usually the out-of-tree proprietary GPU driver kernel module?
reply
baq
1 year ago
[-]
GPU drivers are routinely the biggest chunk of the kernel (both in source and at runtime) and have the most surface area for bugs, regardless of their openness.
reply
sambazi
1 year ago
[-]
came to say the same.

i absolutely despise breakage and random changes in my desktop environment

reply
pmontra
1 year ago
[-]
My phone is running a minor version of 4.4 from March 2023. Kernel 4.4 is originally from 2016. This means that they are still patching the kernel after 7 years, even if it's not an LTS version.
reply
brnt
1 year ago
[-]
Is Google's Android Linux tree something distros could use as LTS?
reply
phh
1 year ago
[-]
Google pays gregkh to maintain the LTS kernels (not all LTS kernels are maintained by him, though), and he works mainline-first, so it's already working that way.
reply
pjmlp
1 year ago
[-]
Hardly, given that the Linux kernel is an implementation detail, Linux drivers are considered legacy (all modern drivers should be in userspace since Treble, written in Java/C++/Rust), and the NDK doesn't expose Linux APIs as a stable interface.

So not something to build GNU/Linux distributions on top of.

reply
redleader55
1 year ago
[-]
The drivers in userspace are part of the GKI initiative [0], not Treble [1]. Treble deals with the separation between Vendor, System and OEM. It establishes a process (CTS & VTS tests) to ensure system components (HALs) stay compatible with whatever updates Google makes to Android, but it deals with the base Android, not the kernel specifically.

[0] - https://source.android.com/docs/core/architecture/kernel/gen...

[1] - https://android-developers.googleblog.com/2017/05/here-comes...

reply
pjmlp
1 year ago
[-]
Historically, Treble predates GKI; GKI was created after OEMs disregarded Treble, since Google had the clever idea of leaving Treble updates optional for OEMs.

> Binderized HALs. HALs expressed in HAL interface definition language (HIDL) or Android interface definition language (AIDL). These HALs replace both conventional and legacy HALs used in earlier versions of Android. In a Binderized HAL, the Android framework and HALs communicate with each other using binder inter-process communication (IPC) calls. All devices launching with Android 8.0 or later must support binderized HALs only.

https://source.android.com/docs/core/architecture/hal

GKI only became a thing in Android 12, to fix Treble adoption issues, as you can also easily check, and GSI was introduced in Android 9, after userspace drivers became a requirement in Android 8, as per the link above.

https://arstechnica.com/gadgets/2017/05/ars-talks-android-go...

https://arstechnica.com/gadgets/2021/11/android-12-the-ars-t...

reply
dathinab
1 year ago
[-]
in my opinion, for anything internet-connected, not updating the kernel is a liability

security patches of an LTS kernel are just as much updates as moving to a newer kernel version

custom out-of-tree drivers are generally an anti-pattern

the kernel interface is quite stable

automated testing tools have come quite a way

===> you should fully update the kernel; LTS isn't needed

the only offenders which make this hard are certain hardware vendors, mostly in phones and IoT, which provide proprietary drivers only and also do not update them

even with LTS kernels this has caused tons of problems over time; maybe the 6-year LTS being abandoned, in combination with some legislatures starting to require security updates for devices for 2-5 years *after they are sold* (i.e. after release), will put enough pressure on vendors to change this for a better approach (whether that's userland drivers, in-tree drivers, or better driver support in general)

reply
sylware
1 year ago
[-]
My opinion is that 6 years is not enough; I would target 10 years.

But I guess the core of the issue is planned obsolescence in Linux internals. Namely, the fear of missing out on features which require significant internal changes, and which, if Linux went without them, could lower its attractiveness in some ways.

It all depends on Linus T.: arbitration of "internals-breaking changes".

reply
worksonmine
1 year ago
[-]
Do you have the resources to achieve that target and still move the project forward?

If I could ask anything of Linus, it would be to be a little more relaxed about the "never break userspace" rule. Allow for some innovation and improvements. There are bugs in the kernel that have become documented features because some userspace program took advantage of them.

reply
sylware
1 year ago
[-]
Arbitration, which is ultimately in the hands of Linus T.

Where ABI stability is paramount for Linus T., ABI bugs will become features.

The glibc/libgcc/libstdc++ projects found a way around it... which ended up even worse: GNU symbol versioning.

Basically, "fixed" symbols get a new version, BUT sourceware binutils always links against the latest symbol version, which makes generating "broad glibc version spectrum compatibility" binaries an abomination (I am trying to stay polite)... because the glibc/libgcc/libstdc++ devs are grossly abusing GNU symbol versioning. Game/game-engine devs are hit super hard. It is actually a real mess; Valve is trying technical mitigations, but they are all shabby and a pain to maintain.

Basically, they are killing native ELF/Linux gaming with this kind of abuse.
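
For anyone who hasn't fought this: the usual workaround is to pin old symbol versions by hand, one symbol at a time. A rough sketch of the trick below; it assumes an x86-64 glibc where GLIBC_2.2.5 is the baseline version of memcpy, and the exact version strings differ per symbol and per architecture, so treat it as illustrative only.

    #include <stdio.h>
    #include <string.h>

    /* Build on a new glibc, run on an old one: explicitly request the old
       versioned symbol instead of the newest one the toolchain links by
       default. (The version string here is an example for x86-64 glibc.) */
    __asm__(".symver memcpy, memcpy@GLIBC_2.2.5");

    int main(int argc, char **argv) {
        char dst[256];
        const char *src = (argc > 1) ? argv[1] : "hello from an old memcpy";
        size_t n = strlen(src);
        if (n >= sizeof dst)
            n = sizeof dst - 1;
        memcpy(dst, src, n);   /* size not known at compile time, so a real call is emitted */
        dst[n] = '\0';
        puts(dst);
        return 0;
    }

Multiply that by every versioned symbol your binary and all of its dependencies pull in, and you can see why game devs end up shipping their own runtimes or building in ancient chroots instead.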

reply
compiler-devel
1 year ago
[-]
Could this be a reaction to Red Hat's latest position on RHEL rebuilders? Did kernel devs see RH as freeloading on the LTS kernels?
reply
tuna74
1 year ago
[-]
Absolutely, especially since RHEL never used the LTS Linux kernels. /s
reply
dingi
1 year ago
[-]
Good move! At least this will push those pesky Android OEMs to make their drivers available upstream. It's time we had a standardized environment for mobile/embedded use cases like the one we have for PCs. All these stupid little devices filling the junkyards because some greedy OEM didn't want to update their drivers is ridiculous.
reply
kashyapc
1 year ago
[-]
(Disclaimer: I work for Red Hat, but I don't work on the kernel. I'm a user-land mammal, and sometimes work with kernel maintainers to debug issues.)

It makes total sense and I support this. I've met some of the upstream Linux maintainers at conferences over the years. Some (many?) of them are really oversubscribed and still plug away. They need relief from the drudgery of LTS stuff at some point.

Anyone here involved in backporting fixes to many stable branches in a user-land project will relate to the problem here. It's time-consuming and tedious work. This kind of work is what "Enterprise Linux" companies get paid for by customers.

reply
blibble
1 year ago
[-]
I figured as much, given the random bugs and policy changes (i.e. breaking ZFS again) that seem to appear in "LTS" kernels on a regular basis.
reply
polskibus
1 year ago
[-]
Won't the new right-to-repair regulations, both in the EU and the US, affect this stance?
reply
ghaff
1 year ago
[-]
Why? You have the right to backport fixes yourself to your heart’s delight. Right to repair doesn’t require someone else to do the work for you for free.
reply
diffeomorphism
1 year ago
[-]
Because the "new right to repair regulations" aim to improve sustainability, and that includes repair, support, and upgrades; e.g. see

https://www.androidauthority.com/eu-smartphone-updates-rules...

> Right to repair doesn’t require someone else to do the work for you for free.

The term maybe not, but the proposed legislation totally does. Same as warranties, consumer protection, not using toxic materials, and so on; none of that is "for free" to the manufacturer, but it is mandatory if you want to be allowed to sell your product.

reply
ghaff
1 year ago
[-]
First of all, this isn't about a product.

But if legislation were to actually require the Linux kernel, say, to have LTS for... every major release? Every point release? It's bad law and should absolutely not exist. If I'm running a community project, I have a big FU for anyone trying to impose support requirements on me. This was actually a rather hot topic at an open source conference I was just at in Europe.

reply
diffeomorphism
1 year ago
[-]
> It’s bad law and should absolutely not exist.

Whatever you came up with in your mind sounds very weird and, yeah, obviously should not exist. That has nothing to do with the actual law though.

> I have a big FU for anyone trying to impose support requirements on me

Nobody talked about any of that?

Product: Samsung phone. Requirement: Samsung needs to keep that device usable for N years.

To meet that requirement Samsung will also need kernel updates. Whether that means doing them in house, paying someone else, making updates more seamless to easily upgrade or ... . The requirement to find a way to make that work is on Samsung, not you.

reply
tzs
1 year ago
[-]
> Product: Samsung phone. Requirement: Samsung needs to keep that device usable for N years.

What does "usable" mean though?

If I were to go and dig my old Apple Centris 650 [1] that I bought around 1994 out of my pile of old electronics, and if the hardware still actually works, it would still be able to do everything it did back when it was my only computer. It is running A/UX [2], which is essentially Unix System V 2.2 with features from System V 3 and 4 and BSD 4.2 and 4.3 added.

Even much of what I currently do on Linux servers at work and on the command line at home with MacOS would work or be easy to port.

So in one sense it is usable, because it still does everything that it could do when I got it.

But it would not be very good for networking, because even though it has Ethernet and TCP/IP, it wouldn't have the right protocols to talk to most things on today's internet, and the browsers it has don't implement things that most websites now depend on.

So in another sense we could say it is not usable, although I think it would be more accurate to say it is no longer useful rather than unusable.

[1] https://en.wikipedia.org/wiki/Macintosh_Quadra_650

[2] https://en.wikipedia.org/wiki/A/UX

reply
ghaff
1 year ago
[-]
I don’t make phones. The question upthread was in the context of LTS for the Linux kernel and by implication open source projects more generally.
reply
diffeomorphism
1 year ago
[-]
Exactly, but Samsung does, and they use the Linux kernel. And this change affects them, and is particularly untimely for them, if laws require future long-term support from them (not from Linux). That was the comment you were replying to.

Of course, Linux can just say "not my problem" -- the law does not affect them directly. The discussion topic is whether, with this change in law, companies like Samsung will be willing to invest lots of money to get sufficiently long LTS versions, and hence lead to a change in position on the Linux side... or a switch to Fuchsia.

reply
ghaff
1 year ago
[-]
So it’s for Samsung to decide if they want to take a different approach or not. And yeah it’s not a kernel.org problem and there’s actually likely no easy mechanism for Samsung to pay for LTS given the work is mostly done by a bunch of people working for many different companies. I think the Linux Foundation only pays the salaries of three maintainers—including Linus.
reply
caskstrength
1 year ago
[-]
Device manufacturers can provide support for the kernel used by their products themselves, pay some distro vendor to do it for them, or contract maintainers directly. You expect Linus or Greg to do it for free because the EU says so, or what?
reply
polskibus
1 year ago
[-]
I didn't say for free. It can be factored into the initial price of the product.
reply
mrintegrity
1 year ago
[-]
How? It's open source and Free Software, literally guaranteeing the right to "repair" your code. Maybe I don't understand your question, but it seems totally unrelated to the concept of an open source support cycle.
reply
diffeomorphism
1 year ago
[-]
https://www.androidauthority.com/eu-smartphone-updates-rules...

https://ec.europa.eu/info/law/better-regulation/have-your-sa...

> Maybe I don't understand your question but it seems totally unrelated to

The discussion title is "Designing mobile phones and tablets to be sustainable" and lists the following aims:

- mobile phones and tablets are designed to be energy efficient and durable

- consumers can easily repair, upgrade and maintain them

- it is possible to reuse and recycle the devices.

This hence includes both repair and support, and, according to the linked article, for N years. Seems very relevant.

reply
phendrenad2
1 year ago
[-]
This is why I like MINIX. The last LTS release was 9 years ago and it's still just as unsupported as ever!
reply
ksec
1 year ago
[-]
And yet most Synology NAS units are still running a 4.x kernel.
reply
tuna74
1 year ago
[-]
Mine runs on 5.10...
reply
userbinator
1 year ago
[-]
It's really disturbing that software is the only thing whose stability seems to be decreasing very significantly over time, and dragging down everything that it's embedded in.

Who wants constant changes and breakage? Who wants software that's in constant need of updating? I'm pretty sure it's not the users.

reply
jupp0r
1 year ago
[-]
The point of LTS kernels is that they do get constant updates, i.e. that security patches are backported. There is no world in which you can avoid updating frequently.
reply
usr1106
1 year ago
[-]
There are many more updates than just security updates coming to LTS kernels all the time.

More often than not, kernel updates are not even categorized in any way. Only for very prominent vulnerabilities is the security impact clear to a larger audience.

reply
bman_kg
1 year ago
[-]
So does it mean that Linux is rolling out updates, but these updates do not consider security? Just curious about this; I just started using Linux and this topic is interesting to me.
reply
elsjaako
1 year ago
[-]
It means that there are bug fixes all the time, but most of the time no one sorts these into "security" and "non-security" categories.

I remember a message (I can't find it right now) where this is explained. Basically the thinking is that a lot of bugs can be used to break security, but sometimes it takes a lot of effort to figure out how to exploit a bug.

So you have some choices:

* Research every bug to find out the security implications, which is additional work on top of fixing the bug.

* Mark only the bugs that have known security implications as security fixes, basically guaranteeing that you will miss some that you haven't researched.

* Consider all bugs as potentially having security implications. This is basically what they do now.
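
A toy illustration of why that last option is the pragmatic one (deliberately buggy code of my own, not anything from the kernel): the "bug" below is just a missing length check, the kind of thing that would normally get a quiet one-line fix, yet it is also a stack buffer overflow, and working out whether it's actually exploitable is a research project of its own.

    #include <stdio.h>
    #include <string.h>

    /* Deliberately buggy example: copies a caller-supplied name into a fixed
       buffer without checking its length. The missing bounds check on tmp
       reads like an ordinary bug, but it is also a memory-safety violation,
       i.e. a potential security hole. */
    static void store_name(char *dst, size_t dst_len, const char *src) {
        char tmp[16];
        strcpy(tmp, src);               /* missing bounds check: the "bug" */
        strncpy(dst, tmp, dst_len - 1);
        dst[dst_len - 1] = '\0';
    }

    int main(void) {
        char name[64];
        store_name(name, sizeof name, "a perfectly ordinary but slightly long input");
        printf("%s\n", name);
        return 0;
    }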

reply
worthless-trash
1 year ago
[-]
Upstream seems downright hostile to classifying patches as security fixes.
reply
mr_toad
1 year ago
[-]
Nearly any bug could be a security risk.
reply
worthless-trash
1 year ago
[-]
Could be, yep; the delicate dance of writing in C.
reply
makeitdouble
1 year ago
[-]
I understand the sentiment, while also looking at our computing devices' market where the fight for repairability is pretty harsh and far from a given.

Same for cars, which are becoming less and less repairable and viable in the long term; medical devices are now vulnerable to external attack vectors to a point the field didn't predict; and I'd assume it's the same in so many other fields.

One could argue those are all the effect of software getting integrated into what were "dumb" devices, but that's also the march of progress society is longing for... Where to draw the line is more and more a tough decision, and the need for regulation kinda puts a lot of the potential improvements into the "when hell freezes over" bucket. I hope I'm deeply mistaken.

reply
titzer
1 year ago
[-]
There's always time to refactor and tinker with incomplete migration, always time to debug and patch and fuss with updates, but never time to sit down, gather all the requirements, redesign, and reimplement.
reply
arcticbull
1 year ago
[-]
Ultimately it's just a question of objectives. Usually software isn't written for its own sake. It's written to achieve a goal, meet a customer need, generate revenue, prove a market, etc. You can achieve those goals without gathering all the requirements, redesigning and re-implementing most of the time. Not in aerospace, or biomedical maybe, where we are willing to pay the outlandish velocity penalties. But most of the things we do aren't that.

Generally, if you want to build a pristine, perfect snowflake, a work of art, then you'll be the only one working on it, on your own time while listening to German electronica, in your house. [1] Nothing wrong with that - I have a few of those projects myself - but I think it's important to remember.

Linux is hoping to adjust the velocity-quality equilibrium a little closer to velocity, and a little further from quality. That's okay too. Linux doesn't have to be everything to everyone. It doesn't have to be flawless to meet the needs of any given person.

[1] https://www.stilldrinking.org/programming-sucks

reply
randmeerkat
1 year ago
[-]
> Not in aerospace, or biomedical maybe, where we are willing to pay the outlandish velocity penalties.

… because we really needed version X of the software out yesterday? This incredible "velocity" that you speak of has created monstrous software systems that are dependency nightmares and are obtuse even for an expert to navigate. In the rush to release, release, release, the tech sector has piled layers of tech debt upon layers of tech debt, all while calling themselves "engineers"… There's nothing to celebrate in the "velocity" of modern software except someone hustling a dollar faster than someone else.

reply
imtringued
1 year ago
[-]
I for one enjoy the fact that AMD GPU drivers have been upstreamed.
reply
titzer
1 year ago
[-]
Gathering requirements doesn't have to take months. It can take hours or a few days. You can even do it in the middle of maintaining a product, e.g. as part of the steps toward addressing a feature request or a refactoring. It doesn't strike me as unreasonable, after 50 years of building UNIX-style kernels and 30+ years of Linux development, to have someone write down some functional requirements somewhere.
reply
userbinator
1 year ago
[-]
The software version of "there's always time to do it twice, but never time to do it right"?
reply
imtringued
1 year ago
[-]
That is how C rolls. In theory you need to formally verify every single C program to ensure that it does not violate memory safety. Yes, that is akin to the same straightjacket that people complain about in that iron oxide language.
reply
gemstones
1 year ago
[-]
If you think about the fact that higher level languages are largely a 1950s innovation, it helps.

Disciplines that are 70 years old are not likely to be stable.

reply