This is the crux of the issue: putting the maintenance burden on unpaid volunteers instead of having the burden be carried by the companies that profit from the 6-year LTS.
Canonical maintained them anyway, though, just not officially. Sources: https://lwn.net/Articles/618862/ https://lwn.net/Articles/667925/ https://lkml.org/lkml/2015/12/15/538. I don't see more recent announcements; I don't know if they stopped because of continuous rejection of their community contributions from upstream, or if they are continuing anyway and I'm not aware.
Disclosure: I work for Canonical. I'm not authorised to speak for Canonical, expressed opinions here are my own. [Edit: I should add that I don't work on the kernel so I feel like I'm as much an outside observer as you probably are]. But I'm not sure I'm even expressing an opinion here - just citing some relevant, publicly verifiable facts.
If Canonical was not willing to do the former, it implies the kernel developers were correct.
But unfortunately everyone would rather find a way to get RH packages for free, and RH/IBM has only made the situation worse by alienating people.
Backporting is _so_ much work. And it's unfortunately not sexy work either, so it's always going to be hard attracting unpaid contributors for it.
If you need stability and long term support as a customer, you have companies like RedHat or SUSE whose entire point is providing 10+ years maintenance on these components.
Regulations _could_ change the incentives, and create a market for long term servicing. Regulations are hard to get right though...
- Old devices get phased out sooner.
- Vendors move away from Linux toward proprietary embedded OSes that do provide support.
- No doubt some companies will just try ignoring the regulations.
It is nice that it makes the cost of not supporting things visible to users, assuming “phased out” means the device will actually stop operating: “Company X’s devices have a short lifetime” is an easy thing for people to understand.
I suspect consumers will look for brands that don’t have this reputation, which should give those well behaved brands a boost.
Although, if it does turn out that just letting devices die is the common solution, maybe something will need to be done to account for the additional e-waste that is generated.
Moving toward proprietary OSes; hey, if it solves the problem… although, I don’t see why they’d have an advantage in keeping things up to date.
It is possible that companies will just break the law but then, that’s true of any law.
A smarter regulation would have been to require non-commercial-use firmware source disclosure, allowing non-competitive long-term maintenance by device owners.
How many of the companies producing this stuff have the skills to fix kernel security bugs?
It will end up being tickbox regulatory compliance and will create barriers to competition, especially from FOSS: https://pyfound.blogspot.com/2023/04/the-eus-proposed-cra-la...
If the mainline kernel devs are uncomfortable allowing those to be official kernel.org releases, that's fine: the CELC can host the new versions themselves and call the versions something like "5.10.95-celc" or whatever.
I don't get why this is so difficult for people to grasp: if you want long-term maintenance of something, then pay people to maintain it long-term. It's a frighteningly simple concept.
But yes, it'd be better for SoC vendors to track upstream more closely, and actually release updates for newer kernel versions, instead of the usual practice of locking each chip to whatever already-old kernel they choose from the start. Or, the golden ideal: SoC vendors should upstream their changes. But fat chance of that happening any time soon.
> (there's actually been quite a bit of progress here in getting them to actually take updates from the stable kernel at all! And that's getting thrown away now...)
I found this statement kinda funny. If the original situation was that they wouldn't take updates from the stable kernels, then what were all those unpaid developers even maintaining them for? It's bad enough that it's (for most people) unrewarding work that they weren't getting paid for... but then few people were actually making use of it? Ouch. No wonder they're giving up, regardless of any progress made with the SoC vendors.
I honestly do not understand why SoC vendors don't put in the extra 1% effort to upstream their stuff. I've seen (and worked with) vendor software lagging 3 to 4 years behind upstream, and if you diff it against upstream it's like 10 small commits. Granted, these commits are generally hot garbage.
But why do these devices need special kernels anyway? Isn't that the real problem?
Is that even feasible for projects like the Android kernel, which distributes its fork to vendors, when Red Hat forbids redistribution of its source code?
What comprises sexy work in programming?
Backporting fixes isn't generally interesting problem solving.
Surely maintaining 40 year old bank Cobol code is important but it's not considered fun and exciting. Rewriting half of skia from C++ into Rust is arguably not important at all but it's exciting to the point that it reasonably could make the front page of HN.
At the time, Linux 4.14 was shipping I think.
Personally it made me a bit sad, because it created real permission for Android to never upgrade kernels. I'd hoped Android would eventually start upgrading kernels; I thought the short LTS would surely force them to become a respectable ecosystem that takes maintenance seriously. Making Super LTS was a dodge; suddenly it was OK that 4.14 straggles along until 2024.
Also an interesting note: there's apparently a "Civil Infrastructure Platform" that will have some support for 4.14 until 2029! The past is never gone, eh?
Supposedly Project Treble is a whole new driver abstraction, I think mostly in userland (maybe?), whose intent is to allow kernel upgrades without having to rewrite drivers (not sure if that is the primary/express goal or was just often mentioned). I'm not sure if Android has yet shipped a kernel upgrade to any phones, though; anyone know of a model running an upgraded kernel?
I would argue that it was still better than not doing that, because the vendors weren't going to properly keep up with kernels either way; the choice wasn't "support 3.x for longer or move to 4.x", it was "support 3.x for longer or watch Android devices stay on 3.x without patches".
Yes, what happened & happens is often monstrously unsupportable & terrible. For the past 6 years, Google rolling up with a dump truck full of bills has been justification to keep doing nothing, to keep letting device kernels be bad.
Your history isn't even good or right. Old releases at the time didn't get official longer support. Canonical just opted to maintain 3.16 as a de facto Super LTS until 2020, regardless of the Google dump-truck-of-money thing going on here. Old phones got nothing directly from this payoff. Google was just paying for a way forward to keep doing nothing, to justify their ongoing delinquency & inactivity.
Which was unprincipled, terrible, and awful before, but which they basically bribed gregkh into suddenly making acceptable, by at least paying for the privilege of being negligent, do-nothing delinquents.
One possible nice side-effect of not maintaining kernels for so long (and thus not allowing people to stay on out-of-date systems) would be to encourage vendors to let users upgrade to newer versions of Android beyond the current 2-year lifespan. They would then be more likely to pressure their component vendors to get kernel support for their chipsets into mainline, so they lose the excuse that they can't provide updates because the hardware isn't supported by modern firmware.
But backwards compatibility isn't what kernel developers are maintaining, they're backporting things like security fixes to older versions of the kernel.
It would be like if a security fix is implemented in Windows 11, and Microsoft also chose to patch the same change in Windows 10. At some point Microsoft decides that older version of Windows won't get new updates, like how Windows 8.1 stopped receiving them this January.
What kernel developers are deciding is that sufficiently old kernel branches will stop receiving backports from newer kernels.
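To make the mechanics concrete: a stable-series backport is typically a cherry-pick of a mainline fix onto the older branch (with `-x`, so the commit message records which upstream commit it came from). A throwaway sketch in a temporary repo; all branch and commit names here are made up:

```shell
set -e
dir=$(mktemp -d)
cd "$dir"

# A pretend "mainline" branch with a base commit and then a fix.
git init -q -b mainline
git -c user.email=dev@example.com -c user.name=demo commit -q --allow-empty -m "base"
git branch stable-5.10                      # the old LTS branch, forked at "base"
git -c user.email=dev@example.com -c user.name=demo commit -q --allow-empty \
    -m "fix: close use-after-free in foo_release()"
fix=$(git rev-parse HEAD)

# Backport: cherry-pick the mainline fix onto the stable branch.
git checkout -q stable-5.10
git -c user.email=dev@example.com -c user.name=demo cherry-pick -x --allow-empty "$fix"
git log --oneline                           # stable branch now carries the fix
```

Multiply this by thousands of fixes per year, many of which don't apply cleanly to old trees and need manual rework, and you get a sense of why nobody wants to do it unpaid for six years.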
I think you just stressed TowerTall's point.
They are saying: "Recent versions of Windows can run old programs made for old versions of Windows. How?".
The Linux kernel is very good at it because of Linus Torvalds' "do not break userspace" rule. The usual user space on top of the Linux kernel, not so much.
So yes, backward compatibility and backporting are different matters.
And Windows addresses them both indeed. Your parent commenter is not comparing Windows with Linux.
it seems that Microsoft does a mix of backports and upgrades:
> 6.1.7600.16385, 6.1.7600.16539, 6.1.7600.16617, 6.1.7600.16695, 6.1.7600.16792, 6.1.7600.20655, 6.1.7600.20738, 6.1.7600.20826, 6.1.7600.20941, 6.1.7601.17514, 6.1.7601.17592, 6.1.7601.21701
So this does not look like 10-year support for the initial version, but rather like switching between different LTS versions over that time. Is there any data from Microsoft itself on support duration, release dates, backports, and how to parse these numbers?
To me, though, 6.1.7600.16385 -> 6.1.7601.21701 does sound like long-term support for a single "version" (whatever that word means in this context).
You pay Microsoft for that support.
You can have 10 years of support for Linux; just pay Red Hat for it like you pay Microsoft.
Windows has had three major releases in 11 years. The Linux kernel does one every two months. Windows is an entire OS, with a userland and GUI. The Linux kernel is... a kernel.
The development and support cycles are naturally going to be very different for the two. And regardless, the mainline Linux kernel team is not beholden to anyone for any kind of support. Whatever they do is either voluntary, or done because someone has decided to pay some subset of developers for it to get done. Microsoft employs and pays the people who maintain their old Windows versions.
If no one is paying someone enough to maintain an old Linux kernel for six years, why would they choose to do it? It's mostly thankless, unrewarding work. And given that the pace of development for the Linux kernel is much much faster than that of Windows (or even just the Windows/NT kernel), the job is also much more challenging.
NT6.1 (Windows 7) was also supported from 2009 to 2020 (11 years!), and NT 5.1 (Windows XP) was supported from 2001 through either 2014 (13 years!) or 2019 (18 years!) depending on support channel.
Microsoft will support a product for a decade if not more, assuming you're keeping up with security updates which they absolutely will backport, sometimes even beyond EOL if the fix is that important. Linux with 2 years is a bad joke, by comparison.
So honest question: What does 10.0.22621.900 mean? Is 10.0.X.Y supported for a decade or is that discontinued at some point and I am forced to upgrade to 10.0.X+10,Y-5?
As an example, https://superuser.com/questions/296020/windows-kernel-name-v... lists
> 6.1.7600.16385, 6.1.7600.16539, 6.1.7600.16617, 6.1.7600.16695, 6.1.7600.16792, 6.1.7600.20655, 6.1.7600.20738, 6.1.7600.20826, 6.1.7600.20941, 6.1.7601.17514, 6.1.7601.17592, 6.1.7601.21701
Are these all just 6.1? Or is 7601 a different version than 7600? Could I choose to stay on 7600 and get backports or do I have to switch to 7601?
Yes. The numbers after the Major.Minor numbers are just revision and build numbers of little consequence for most people.
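For what it's worth, the dotted fields can be read as major.minor.build.revision; a throwaway sketch (the version string is just one of the examples quoted in this thread):

```shell
# Split a Windows kernel version string into its four fields.
# "6.1" identifies the NT 6.1 (Windows 7) kernel; build and revision
# are the servicing metadata that changes with updates.
v="6.1.7600.16385"
IFS=. read -r major minor build rev <<< "$v"
echo "NT $major.$minor, build $build, revision $rev"
```

So two strings that differ only after the second dot are still "the same kernel" in the sense being discussed here.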
Are you here for thoughtful conversation or are you just being a Micro$oft Windoze troll? Because I can't tell; I would presume most people here know how to read version numbers.
It's pretty disrespectful to call Linux's process a "bad joke" when these developers mostly aren't getting paid to maintain major versions for any length of time that you'd consider more reasonable.
Meanwhile, if you do want longer-term support for a specific kernel+OS combo, IBM/Red Hat (among others) will be happy to sell it to you. You may think it's inefficient for each enterprise distro to have their own internal kernel fork that they maintain (rather than all contributing to a centralized LTS kernel), but that's the choice they've all seemingly collectively made. I guess they feel that if they're on the hook to support it, they want full and final say of what goes into it.
Also consider that Windows doesn't sell a kernel: they sell a full OS. In the Windows world, you don't mix and match kernel versions with the rest of the system. You get what Microsoft has tested and released together. With Linux, I can start with today's Debian stable and run it for years, but continue updating to a new major kernel version (self-building it if I want or need) every two months. The development and support cycle for an OS is very different than that of a kernel. You just can't compare the two directly. If you want to, compare Windows with RHEL.
Also-also consider that Windows and Linux are used in very different contexts. Microsoft's customers may largely care about different things than (e.g.) Red Hat's customers.
I don't think it was ever anything more than speculation though
Try to use only a ten year old printer driver sometime. It's a pain. Linux executes 20 year old code with no problem, as long as you kept all the pieces. How do they do it? Never merge anything that breaks known user space. Easy in theory, hard work in practice.
If you want to run applications from the 90s, you're likely to have more success with dosbox or wine than with plain Windows. Didn't Microsoft completely give up on backwards emulation a few years ago and start virtualizing it instead, with mixed success?
Of course, if you really want something famous for backwards compatibility, look at OS/400 and z/OS. It's all layers of emulation from the hardware up in order to guarantee that an investment in that platform is future proof. It's all expensive in the end of course, as someone has to pay for it, but they live well on the customers who value such things. Running 50 year old code there is commonplace.
I wish IBM hadn't fenced it so much like a walled garden. Had they issued inexpensive or free licenses for OS/400 targeted to students and developers, maybe also an emulator to develop conveniently on x86, their i platform would probably be more commonplace now, with quite a bit more available software.
What is killing their platform is not the price but mostly the lack of skills and software. And it's probably too late now to change course.
I have never heard of IBM i until this moment right now.
I assume this is specifically for their Power-series hardware? I've only ever seen Linux on Power hardware...
Yes, it currently targets their Power series, although it's fairly hardware independent. As a matter of fact AS/400 binaries don't even care what CPU they run on, as there are several abstraction layers underneath, namely XPF, TIMI and SLIC. It's a bit like a native, hardware-based JVM with the OS being also the SDK. Another peculiarity is that everything is an object in "i", including libraries, programs and files.
But mostly, it requires close to no sysadmin. Just turn it on, start the services and leave it alone for years if needed.
Microsoft does drop backwards compatibility sometimes, usually because the backwards compatibility layer leaves a huge security risk.
Bwahah! AppCompat traces to Windows 95.
20 year old software is rarely a problem, I'm running Office XP on Windows 10 without problems.
My Google Fu is failing me though.
How Microsoft Lost the API War https://www.joelonsoftware.com/2004/06/13/how-microsoft-lost...
Also this has nothing to do with backwards compatibility, it's about supporting older kernels with security fixes and similar. The decision is a pragmatic one to lessen the burden on the unpaid volunteers.
Like others have mentioned if a company needs a specific kernel pay up. Or use Windows.
Don't compare apples to oranges.
What I mean by that is each 'major' version with stable API/ABI has a life span of about 5 years - like 5 years of 12.x version, 5 years of 13.x version, etc.
... and all that with having about only 1/10 of the Linux people count (rough estimate).
At the moment there are 12.x and 13.x. 14.x is in beta, so soon there will be three. But 12.x is expected to be dropped at the end of this year, so in 2024 it will be back to two versions.
As far as I can tell there are a lot more Linux kernels in LTS at the moment.
Not "expected": that's the policy. The five years are simply ending.
The Linux kernel has a new major release every two months, and it looks like LTS kernels are one out of every five or six major versions, so that's a new LTS kernel every 10-12 months; right now they have six LTS kernels to maintain.
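The cadence claim above is just arithmetic; a quick sanity check (all numbers are rough approximations from the comment, not official policy):

```shell
# Back-of-the-envelope: a mainline release roughly every two months,
# with an LTS picked roughly every 5th or 6th release.
months_per_release=2
releases_between_lts_min=5
releases_between_lts_max=6
lts_cadence="every $((months_per_release * releases_between_lts_min))-$((months_per_release * releases_between_lts_max)) months"
echo "a new LTS kernel roughly $lts_cadence"
```

With six LTS branches alive at once and each getting multiple years of backports, the maintenance load compounds quickly.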
Also I expect that the Linux kernel develops at a much more rapid pace than the FreeBSD kernel. That's not a knock on FreeBSD; that's just the likely reality. Is that development pace sufficient to support thousands of different embedded chips? Does the FreeBSD project even want to see the kind of change in their development process that an influx of new embedded developers would likely entail?
It's easy to offer a long LTS release cadence when your project is stagnant.
I didn't say FreeBSD was abandonware, its kernel development has just been relatively stagnant for decades vs. Linux. Which shouldn't come as particularly surprising considering how much more adoption and investment there's been surrounding Linux over that time period.
I don't think it's too far fetched for people clinging to their favorite GNU/Linux distro to switch to FreeBSD, especially on the server side where in my opinion, FreeBSD is the superior choice.
I think the world is better off when there are choices and the Linux near mono culture is not good for the FOSS movement, in my opinion.
2 years on the other hand seems incredibly short? Am I wrong?
sadly it's not rare that proprietary drivers for phone hardware are basically written once and then hardly maintained, and in turn only work with a small number of Linux kernel versions.
so for some hardware it might mean that when it gets released you only have noticeably less than 6 years of kernel support
then between starting to build a phone and releasing it 2 years might easily pass
so that means from release you can provide _at most_ 4 years of kernel security patches etc.
but dates tend to not align that great, so maybe it's just 3 years
but then you sell your phone for more than one year, right?
in which case less than 2 years can pass between the customer buying your product and you no longer providing kernel updates
that is a huge issue
I mean, think about it: if someone is a bit tight on money and buys a slightly older phone, they probably aren't aware that their phone stops getting security updates in a year or so (3 years of software support since release, but it's a 2-year-old phone), at which point using it is a liability.
EDIT: The answer is, in my opinion, not even longer LTS but properly maintaining drivers and in turn being able to do full kernel updates.
The downside can be painful though: sourcing components with such properties is hard. You basically have to cherry-pick them from all over the world because they're so few and far between.
That's one of the reasons why the Librem 5 is so thick and consumes so much energy.
vendor A sells a part with a driver in the mainline kernel.
vendor B doesn't, so on top of the part you have to spend time bodging an untested driver into a custom kernel.
as a buyer why would you go for the second?
Slapping together a half-working out-of-tree kernel module and calling it a day is not only much cheaper; it also buys you the time you need to write the new driver for next year's hot shit SoC that smartphone vendors demand.
What would you want as a buyer: a driver that has already demonstrated that it is good enough to be included in the kernel, or one of unknown quality that may need extra work to integrate with the kernel?
I get why suppliers don't want to do the work. I just don't understand why there isn't enough value add for buyers to justify the premium for the benefits of a mainline driver, and/or why sellers don't try and capture that premium
Also consider the cultural context. The culture of hardware manufacturers is much different than that of software vendors. They don't view software as a product, but more a necessary evil to make their hardware work with existing software infrastructure. They want to spend as little time on it as possible and then move onto the next thing.
I'm not endorsing this status quo, merely trying to explain it.
Caring about long-term maintenance isn't what most buyers do. Going SIM-only on your data plan is out of the ordinary.
Also in my experience people largely pick their phones based on the surface level hardware rather than the long-term reliability. Hence why Apple keeps putting fancier cameras into every iPhone even though I'm pretty sure a good chunk of customers don't need a fancy camera. Heck, just getting a phone that fits in my hand was a struggle because buyers somehow got convinced that bigger phone = better phone and now most smartphones on the market are just half-size tablets.
That trend at least seems to be somewhat reversing though.
1. Hit to the BOM (i.e. cost); and
2. Suppliability (i.e., can we get enough pieces, by the time we need them, preferably from fewer suppliers).
In the product I was involved in building (full OS, from bootloader to apps), I was lucky that the hardware team (separate company) was willing to base their decisions on my inputs. The hardware company would bear the full brunt of BOM costs, but without software the hardware was DOA and wouldn't even go to manufacturing. This symbiotic relationship, I think, is what made it necessary for them to listen to our inputs.
Even so, I agreed software support wasn't a super strong input because:
1. There's more room for both compromises and making up for compromises, in software; and
2. Estimating level of software support and quality is more nuanced than just a "Has mainline drivers?" checkbox.
For example, RPi 3B vs. Freescale iMX6. The latter had complete mainline support (for our needs) but the former was still out-of-tree for major subsystems. The RPi was cheaper. A lot cheaper.
I okayed RPi for our base board because:
1. Its out-of-tree kernel was kept up-to-date with mainline with a small delay, and would have supported the next LTS kernel by the time our development was expected to finish (a year);
2. Its out-of-tree code was quite easy (almost straightforward) to integrate into the Gentoo-based stack I wanted to build the OS on; and
3. I was already up-and-running with a prototype on RPi with ArchLinuxARM while we were waiting for iMX6 devkits to be sourced. If ArchLinuxARM could support this board natively, I figured it wouldn't be hard to port it to Gentoo; turned out Gentoo already had built-in support for its out-of-tree code.
Of course, not every sourcing decision was as easy as that. I did have to write a driver for an audio chip because its mainline driver did not support the full range of features the hardware did. But even in that case, the decision to go ahead with that chip was only made after I was certain that we could write and maintain said driver.
Your Raspberry Pi example is IMO even more illustrative than you let on. I'll reiterate that even that platform is not open and doesn't have a full set of mainlined drivers, after a decade of incredibly active development, by a team that is much more dedicated to openness than most other device manufacturers. Granted, they picked a base (ugh, Broadcom) that is among the worst when it comes to documentation and open source, but I think that also proves a point: device manufacturers don't have a ton of choice, and need to strike a balance between openness and practical considerations. The Raspberry Pi folks had price and capability targets to go with their openness needs, and they couldn't always get everything they wanted.
And most vendors are like vendor B because they're leading the pack in terms of performance, power consumption, and die size (among other things) and have the market power to avoid having to do everything their customers want them to do.
Still, some headway has been made: Google and Samsung have been gradually getting some manufacturers (mainly Qualcomm) to support their chips for longer. It's been a slow process, though.
As for mainlining: it's a long, difficult process, and the vendor-B types just don't care, and mostly don't need to care.
many consumers are not aware of the danger an unmaintained/non-updatable software stack introduces, or that their device (mainly their phone) is unmaintained
so the phone vendor buys from B because A is often just not an option (not available for the hardware you need), and then subtly and mostly unnoticeably dumps the problem on the user
there are some exceptions, e.g. Fairphone is committed to quite long-term software support, so they try to use vendor As, or vendor Bs which have contractual long-term commitments for driver maintenance
but in the space of phones (and implicitly IoT using phone parts), sadly sometimes (often) the only available option for the hardware you need is a vendor B, where long-term driver maintenance commitment contracts are just not affordable if you are not operating at the scale of a larger phone vendor
E.g. as far as I remember, Fairphone had to do some reverse engineering/patching to continue support for the FP3 until today (and, I think, another 2 or so years), and I vaguely remember that they were somewhat lucky that some open-source driver work for some parts was already ongoing, with some support from some of the vendors. For the FP5 they managed to have closer cooperation with Qualcomm, allowing them to provide a 5-year extended warranty and target software support for 8 years (since release of the phone).
So without phone producers either being legally forced to provide a certain amount of software support (e.g. 3 years after the last first-party sale), or at least being visibly transparent upfront about how much software support they provide and informing users when the software is no longer supported, I don't expect to see any larger industry-wide changes.
Though some countries are considering laws like that.
Or they could just upgrade the kernel to a newer version. There's no rule that says a phone needs to run the same major kernel version for its entire lifetime. The issue is that if you buy a sub-€100 phone, how exactly is the manufacturer supposed to finance the development and testing of newer versions of the operating system? It might be cheap enough to just apply security fixes to an LTS kernel, but moving and re-validating drivers for hardware that may not even be manufactured anymore quickly becomes unjustifiably expensive for anything but flagship phones.
proprietary drivers for phone parts are often not updated to support newer kernels
These manufacturers should be punished by the lack of LTS and need to upgrade precisely because of that lazyness and incompetence.
You don't see Windows driver developers having their drivers broken by updates every few months.
At the cost of Windows kernel development being a huge PITA because effectively everything in the driver development kit becomes an ossified API that can't ever change, no matter if there are bugs or there are more efficient ways to get something done.
The Linux kernel developers can do whatever they want to get the best (most performant, most energy saving, ...) system because they don't need to worry about breaking someone else's proprietary code. Device manufacturers can always do the right thing and provide well-written modules to upstream - but many don't because (rightfully) the Linux kernel team demands good code quality which is expensive AF. Just look at the state of most non-Pixel/Samsung code dumps, if you're dedicated enough you'll find tons of vulnerabilities and code smell.
Stability is worth it. After 30 years of development the kernel developers should be able to come up with a solid design for a stable api for drivers that they don't expect to radically change in a way they can't support.
The solution to all this is pretty simple: release the source to these drivers. I guarantee members of the community -- or, hell, companies who rely on these drivers in their end-products -- will maintain the more popular/generally-useful ones, and will update them to work with newer kernels.
Certainly the ideal would be to mainline these drivers in the first place, but that's a long, difficult process and I frankly don't blame the chipset manufacturers for not caring to go through it.
Also, real classy to call the people who design and build the kernel that runs on billions of devices around the world "lazy and incompetent". Methinks you just don't know what you're talking about.
>Also, real classy to call the people who design and build the kernel that runs on billions of devices around the world "lazy and incompetent"
I never did that. The parent comment called manufacturers that; I suggested that the kernel developers are at some fault.
Most linux kernel changes are limited enough so that updating a driver is not an issue, IFF you have the source code.
That is how a huge number of drivers are maintained in-tree, if they had to do major changes to all the drivers every time anything changes they wouldn't really get anything done.
Only if you don't have the source code is driver breakage an issue.
But Linux's approach to proprietary drivers was always that there is no official support when there is no source code.
People don't want you to break their code.
What an uninformed take.
The Linux kernel has a strict "don't break userspace" policy, because they know that userspace is not released in lock step with the kernel. Having this policy is certainly a burden on them to get things right, but they've decided the trade offs make it worth it.
They have also chosen that the trade offs involved in having a stable driver API are not worth it.
> People don't want you to break their code.
Then maybe "people" (in this case device manufacturers who write crap drivers) should pony up the money and time to get their drivers mainlined so they don't have to worry about this problem. The Linux kernel team doesn't owe them anything.
They also know out-of-tree drivers are not released in lock step with the kernel.
>They have also chosen that the trade offs involved in having a stable driver API are not worth it.
It sucks for driver developers to not have a stable API, regardless of whether the kernel developers think it's worth it or not.
>should pony up the money and time to get their drivers mainlined so they don't have to worry about this problem.
They don't want to reveal their trade secrets.
>The Linux kernel team doesn't owe them anything.
Which is why Google ended up offering the stable API to driver developers.
The worst part of all of this is: Google could go and mandate that the situation improves by using the Google Play Store license - only grant it if the full source code for the BSP is made available and the manufacturer commits to upstreaming the drivers to the Linux kernel. But they haven't, and so the SoC vendors don't feel any pressure to move to sustainable development models.
Google has tried to improve the situation, and has made some headway: that's why they were able to get longer support for at least security patches for the Pixel line. Now that they own development of their own SoC, they're able to push that even farther. But consider how that's panned out: they essentially hit a wall with Qualcomm, and had to take ownership of the chipset (based off of Samsung's Exynos chip; they didn't start from scratch) in order to actually get what they want when it comes to long-term support. This should give you an idea of the outsized amount of power Qualcomm has in this situation.
Not many companies have the resources to do what Google is doing here! Even Samsung, who designed their own chipset, still uses Qualcomm for a lot of their products, because building a high-performance SoC with good power consumption numbers is really hard. Good luck to most/all of the smaller Android manufacturers who want more control over their hardware.
(Granted, I'm sure Google didn't decide to build Tensor on their own solely because of the long-term support issues; I bet there were other considerations too.)
It really depends on what you're doing, a lot of industries may not need such long-term support. 6 years seems like a happy medium to me, but then again I'm not the one supporting it. I expect the kernel devs would be singing a different tune if people were willing to pay for that extended support.
They're just legacy now IMO, and their long-term support requirements are a result of this; companies that haven't gotten rid of them by now aren't likely to do it any time soon.
I hate seeing them go. I wasn't such a fan of Solaris but I was of HP-UX. But its days are over. It doesn't even run on x86 or x64, and HP has been paying Intel huge money to keep Itanium on life support, which is running out now if it hasn't already.
At least Solaris had an Intel port but it too is very rare now.
Outside of tech focused companies, 10+ year old systems really are the norm.
It's because outside of tech companies, nobody cares about new features. They care about things continuing to work. Companies don't buy software for the fun of exploring new versions, especially frustratingly pointless cosmetic changes that keep hitting their training budgets.
Many companies would be happy with RHEL5 or Windows XP today from a feature standpoint, if it weren't a security vulnerability.
CentOS/RHEL 6 was already pretty long in the tooth, but being the contrarian I am, I was not looking forward to the impending systemd nonsense.
Today at work, we finally got the OK to stop supporting CentOS 7 for new releases.
Old systems are stable, but there’s a fine line between that and stagnation. Tread carefully.
Most of our .NET workloads are still on .NET Framework, and only now are we starting to use Java 17 for new projects, and only thanks to projects like Spring pushing for it.
Ah, and C++ will most likely be a mix of C++14 and C++17, for new projects.
so likely <5 years since release of the hardware in the US
likely <4 years since release of hardware outside of the US
likely <3 years since you bought the hardware
and if you buy older phones having only a year or so of proper security updates is not that unlikely
So for phones you would need something more like an 8- or 10-year LTS, or, well, proper driver updates for proprietary hardware. In which case 2 years can be just fine, because in general only drivers are affected by kernel updates.
Do you jump LTS releases? Say the one you are on is ending and a brand new one is available. Or do you go to the one before it and have possibly only 2 or 4 years left...
The alternative is using a pre-built package repo which will prevent the kernel from updating to an incompatible version using the package dependencies. I lived that way for years and it is an awful experience.
No, because the CDDL was intentionally written to be incompatible with the GPL.
Good or bad, it's the result of another era. Still impressive stuff. It's only recently that things like btrfs and eBPF became usable enough, and not in all situations.
What on earth is that supposed to mean? ZFS is not in the Linux kernel because Sun and then Oracle deliberately decided that, and they continue to want it to be the case. The Linux kernel can't be re-licensed; (the Oracle and Sun code in) ZFS could be relicensed in ten minutes if they cared.
> The aspect of the CDDL that makes it incompatible with GPL is present in the GPL too. Neither license is more or less "dogshit" than the other, they are the same. The difference is the CDDL only applies to code written under the CDDL, whereas the GPL spreads to everything it touches.
It means that the original authors could have originally intended to write a recipe for chocolate chip cookies and somehow accidentally wrote the CDDL. That wouldn't change a thing and it wouldn't make the CDDL any better or worse since it would have exactly the same words. The intent is irrelevant, all that matters is the end result.
> ZFS could be relicensed in ten minutes if they cared.
Indeed, I hope that they do. A copyfree license like the BSD licenses would make ZFS significantly more popular and I think would have saved all the effort sunk into btrfs had it been done earlier.
It is a shame Oracle hasn't released a CDDLv2 that provides GPL compatibility, they could solve the incompatibility quite easily, since CDDLv1 has an auto-update clause by default. I think some of OpenZFS has CDDLv1-only code, but that could probably be removed or replaced.
And in mobile, I'd say the amount goes to zero
So yes - you do want to be on an LTS kernel. But you only need to stay there for about a year until the next one is released and you can test it for a bit before deploying.
That question applies to LTS kernels too. Do you really want to risk that a backport of an important fix won't introduce a problem that mainline didn't have? Do you really want to risk that there are no security vulnerabilities in old kernels that won't get noticed by maintainers since they were incidentally fixed by some non-security-related change in mainline?
If you are running userspace applications only on upstream-supported hardware, there is no reason to stay with long-term supported kernels; just follow the regular stable releases, which is much easier for everyone.
i absolutely despise breakage and random changes in my desktop environment
So not something to build GNU/Linux distributions on top of.
> Binderized HALs. HALs expressed in HAL interface definition language (HIDL) or Android interface definition language (AIDL). These HALs replace both conventional and legacy HALs used in earlier versions of Android. In a Binderized HAL, the Android framework and HALs communicate with each other using binder inter-process communication (IPC) calls. All devices launching with Android 8.0 or later must support binderized HALs only.
GKI only became a thing in Android 12 to fix Treble adoption issues, as you can also easily check, and GSI was introduced in Android 9, after userspace drivers became a requirement in Android 8 as per link above.
security patches of an LTS kernel are as much updates as moving to a newer kernel version
custom non in-tree drivers are generally an anti-pattern
the kernel interface is quite stable
automated testing tools have come quite a way
===> you should fully update the kernel; LTS isn't needed
the only offenders which make this hard are certain hardware vendors, mostly related to phones and IoT, which provide proprietary drivers only and also do not update them
even with LTS kernels this has caused tons of problems over time; maybe the 6-year LTS being abandoned, combined with some legislatures starting to require security updates for devices for 2-5 years *after they are sold* (i.e. later than released), will put enough pressure on vendors to change this for a better approach (whether that is userland drivers, in-tree drivers, or better driver support in general)
But I guess the core of the issue is planned obsolescence in Linux internals. Namely, the fear of missing out on some features which require significant internal changes and which, if Linux went without them, could lower its attractiveness in some ways.
It all depends on Linus T.: arbitration of "internals breaking changes".
If I could ask anything from Linus it would be to be a little more relaxed about the "never break userspace" rule. Allow for some innovation and improvements. There are bugs in the kernel that have become documented features because some userspace program took advantage of that.
Where ABI stability is paramount for Linus T., ABI bugs will become features.
The glibc/libgcc/libstdc++ found a way around it... which did end up even worse: GNU symbol versioning.
Basically, "fixed" symbols get a new version, BUT sourceware binutils always links against the latest symbol version, which makes generating binaries with broad glibc version compatibility an abomination (I am trying to stay polite)... because the glibc/libgcc/libstdc++ devs are grossly abusing GNU symbol versioning. Game/game engine devs are hit super hard. It is actually a real mess; Valve is trying technical mitigations, but they are all shabby and a pain to maintain.
Basically, they are killing native elf/linux gaming with this kind of abuse.
It makes total sense and I support this. I've met some of the upstream Linux maintainers at conferences over the years. Some (many?) of them are really oversubscribed and still plug away. They need relief from the drudgery of LTS stuff at some point.
Anyone here involved in backporting fixes to many stable branches in a user-land project will relate to the problem here. It's time-consuming and tedious work. This kind of work is what "Enterprise Linux" companies get paid for by customers.
> Right to repair doesn’t require someone else to do the work for you for free.
The term maybe not, but the proposed legislation totally does. Same as warranties or customer protection or not using toxic materials or ... ; none of that is "for free" to the manufacturer, but it is mandatory if you want to be allowed to sell your product.
But what if legislation actually required the Linux kernel, say, to have LTS for... every major release? Every point release? It’s bad law and should absolutely not exist. If I’m running a community project I have a big FU for anyone trying to impose support requirements on me. Which was actually a rather hot topic at an open source conference I was just at in Europe.
Whatever you came up with in your mind sounds very weird and, yeah, obviously should not exist. That has nothing to do with the actual law though.
> I have a big FU for anyone trying to impose support requirements on me
Nobody talked about any of that?
Product: Samsung phone. Requirement: Samsung needs to keep that device usable for N years.
To meet that requirement Samsung will also need kernel updates. Whether that means doing them in house, paying someone else, making updates more seamless to easily upgrade or ... . The requirement to find a way to make that work is on Samsung, not you.
What does "usable" mean though?
If I were to go and dig my old Apple Centris 650 that I bought around 1994 out of my pile of old electronics, and if the hardware still actually works, it would still be able to do everything it did back when it was my only computer. It is running A/UX, which is essentially Unix System V 2.2 with features from System V 3 and 4 and BSD 4.2 and 4.3 added.
Even much of what I currently do on Linux servers at work and on the command line at home with MacOS would work or be easy to port.
So in one sense it is usable, because it still does everything that it could do when I got it.
But it would not be very good for networking, because even though it has ethernet and TCP/IP, it wouldn't have the right protocols to talk to most things on today's internet, and the browsers it has don't implement things that most websites now depend on.
So in another sense we could say it is not usable, although I think it would be more accurate to say it is no longer useful rather than unusable.
Of course, Linux can just say "not my problem" -- the law does not affect them directly. The discussion topic is whether, with this change in law, companies like Samsung will be willing to invest lots of money to get sufficiently long LTS versions and hence lead to a change in position on the Linux side. ... or a switch to Fuchsia.
> Maybe I don't understand your question but it seems totally unrelated to
The discussion title is "Designing mobile phones and tablets to be sustainable" and lists the following aims:
- mobile phones and tablets are designed to be energy efficient and durable
- consumers can easily repair, upgrade and maintain them
- it is possible to reuse and recycle the devices.
This hence includes both repair and support and according to the linked article for N years. Seems very relevant.
Who wants constant changes and breakage? Who wants software that's in constant need of updating? I'm pretty sure it's not the users.
More often than not, kernel updates are not categorized in any way. Only for very prominent vulnerabilities is the security impact clear to a larger audience.
I remember a message (I can't find it right now) where this is explained. Basically the thinking is that a lot of bugs can be used to break security, but sometimes it takes a lot of effort to figure out how to exploit a bug.
So you have some choices:
* Research every bug to find out the security implications, which is additional work on top of fixing the bug.
* Mark only the bugs that have known security implications as security fixes, basically guaranteeing that you will miss some that you haven't researched.
* Consider all bugs as potentially having security implications. This is basically what they do now.
Same for cars that are becoming less and less repairable and viable in the long term, medical devices are now vulnerable to external attack vectors to a point the field didn't predict, and I'd assume it's the same in so many other fields.
One could argue those are all the effect of software getting integrated into what were "dumb" devices, but that's also the march of progress society is longing for... where to draw the line is more and more a tough decision, and the need for regulation kinda puts a lot of the potential improvements into the "when hell freezes over" bucket. I hope I'm deeply mistaken.
Generally, if you want to build a pristine, perfect snowflake, a work of art, then you'll be the only one working on it, on your own time while listening to German electronica, in your house.  Nothing wrong with that - I have a few of those projects myself - but I think it's important to remember.
Linux is hoping to adjust the velocity-quality equilibrium a little closer to velocity, and a little further from quality. That's okay too. Linux doesn't have to be everything to everyone. It doesn't have to be flawless to meet the needs of any given person.
… because we really needed X version of software out yesterday? This incredible “velocity” that you speak of has created monstrous software systems, that are dependency nightmares, and are obtuse even for an expert to navigate across. In the rush to release, release, release, the tech sector has layered on layers of tech debt upon layers of tech debt, all while calling themselves “engineers”… There’s nothing to celebrate in the “velocity” of modern software except for someone hustling a dollar faster than someone else.
Disciplines that are 70 years old are not likely to be stable.