> Pham Nuwen spent years learning to program/explore. Programming went back to the beginning of time. It was a little like the midden out back of his father’s castle. Where the creek had worn that away, ten meters down, there were the crumpled hulks of machines—flying machines, the peasants said—from the great days of Canberra’s original colonial era. But the castle midden was clean and fresh compared to what lay within the Reprise’s local net. There were programs here that had been written five thousand years ago, before Humankind ever left Earth. The wonder of it—the horror of it, Sura said—was that unlike the useless wrecks of Canberra’s past, these programs still worked! And via a million million circuitous threads of inheritance, many of the oldest programs still ran in the bowels of the Qeng Ho system. Take the Traders’ method of timekeeping. The frame corrections were incredibly complex—and down at the very bottom of it was a little program that ran a counter. Second by second, the Qeng Ho counted from the instant that a human had first set foot on Old Earth’s moon. But if you looked at it still more closely... the starting instant was actually some hundred million seconds later, the 0-second of one of Humankind’s first computer operating systems.
(Yes, Vinge made a slight error here - it's only about 14 megaseconds from Armstrong to the epoch, not 100.)
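A quick way to sanity-check that number (the 02:56:15 UTC timestamp for the first step is from memory, so treat it as approximate):

    #define _DEFAULT_SOURCE   /* for timegm() in glibc */
    #include <stdio.h>
    #include <time.h>

    int main(void) {
        /* Armstrong's first step, roughly 1969-07-21 02:56:15 UTC */
        struct tm step = {0};
        step.tm_year = 1969 - 1900;
        step.tm_mon  = 7 - 1;
        step.tm_mday = 21;
        step.tm_hour = 2;
        step.tm_min  = 56;
        step.tm_sec  = 15;
        time_t t = timegm(&step);          /* negative: before the Unix epoch */
        printf("%.1f megaseconds\n", -(double)t / 1e6);   /* prints ~14.2 */
        return 0;
    }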
2038 is less than 15 years away and I bet a significant portion of systems will not be upgraded during this period. (As a reference, Windows 7 was released 15 years ago and it still has about a 3% usage share [1].)
What's worse, compared to the Y2K problem, by 2038 far more critical infrastructure will depend on computers (payments, self-driving cars, public transportation, security cameras, and so on). We are doomed.
[1] https://en.wikipedia.org/wiki/Template:Windows_usage_share
Also, strictly speaking, transitioning with the CPU architecture is not enough -- kernel APIs and ABIs gaining a 64-bit time_t is not the only change that's needed. Notably, it has to change in the filesystem too. And at that point millions of people will have on-disk structures with 32-bit timestamps, so it needs to transition in a somewhat backward-compatible way.
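To make the on-disk point concrete, here's a toy sketch (a made-up struct, not any real filesystem's layout) of why the format itself is the hard part and what one back-compatible reinterpretation can look like:

    /* Toy illustration, not a real on-disk format: the field width is baked
     * into the format, so you cannot simply widen it in place. */
    #include <stdint.h>
    #include <time.h>

    struct on_disk_inode {
        uint32_t mtime;   /* seconds since the epoch, fixed at 32 bits on disk */
        /* ... other fields at fixed offsets ... */
    };

    static time_t read_mtime(const struct on_disk_inode *ino)
    {
        /* Naive old reading: (int32_t)ino->mtime wraps to 1901 after
         * 2038-01-19. One back-compatible trick is to reinterpret the same
         * 32 bits as unsigned, which keeps every existing timestamp valid
         * and pushes the limit out to 2106 (assuming time_t itself is
         * 64-bit by then). */
        return (time_t)(uint64_t)ino->mtime;
    }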
No, Linux distro releases are not like Windows; there is little reason to be holding back and running 15-year-old systems. If you are running Debian 12 in 2038, lack of t64 will be the least of your issues. Debian 12 standard LTS ends in 2028 [0] and extended support (if you pay for it) ends in 2033 [1]. Beyond that you're on your own. If you're running servers, you'd be mad to be running any of the current stable and old-stable releases by then.
> I bet a significant portion of systems will not be upgraded
And regardless, nothing Debian does or does not do is going to make a difference to all of the issues of running EOLed OS versions.
Linux kernel 2.6 was still in development and people were using 2.2 or even 2.0.
The latest stable Debian was Woody at that time, using Linux kernel 2.2.
GNU/Linux was still struggling to support POSIX threads. ext3 was merged into the Linux kernel in 2001 and people were still using ext2. ext2/ext3 have the year 2038 problem, but in 2004 nobody took it seriously. Even ext4 had the year 2038 problem and only later got a workaround to prolong its life for another few hundred years.
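For the curious, my understanding of the ext4 workaround mentioned above: newer inodes carry an extra 32-bit word per timestamp, whose low 2 bits extend the seconds field and whose upper 30 bits hold nanoseconds. Roughly like this (simplified from the kernel's ext4_decode_extra_time in fs/ext4/ext4.h, so double-check before relying on it):

    #include <stdint.h>

    /* Two extra epoch bits on top of the classic signed 32-bit seconds
     * field push the limit a few centuries past 2038 (the year 2446, if
     * I remember the docs right). */
    struct ts64 { int64_t sec; uint32_t nsec; };

    static struct ts64 decode_extra_time(uint32_t base, uint32_t extra)
    {
        struct ts64 t;
        t.sec  = (int32_t)base;                 /* classic 32-bit seconds  */
        t.sec += (int64_t)(extra & 0x3) << 32;  /* 2 extra epoch bits      */
        t.nsec = extra >> 2;                    /* remaining 30 bits: nsec */
        return t;
    }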
The UTF-8 standard had only just been finalized in 2003 and nobody thought it would become the default character encoding. No GNU/Linux distro at that time used UTF-8 as the default locale. Most people believed UTF-16 was the future but were reluctant to change, except for Microsoft Windows. Microsoft was an early adopter of Unicode, and they are still paying for the choice of UTF-16, while every other OS, web browser, and everything else later chose UTF-8.
People in 2004 were aware of the year 2038 problem, but there was no way we could have used a 64-bit signed integer as time_t at that time, not even in the filesystem.
Even if we had tried to fix it in 2004, the fix would probably have been a bad choice that needed to be redone now anyway.
Personally, I think that if people are still using Linux kernel 2.2 or Windows CE in their embedded systems in 2038, they have other serious problems as well.
Doubt it. Any disruption will probably be two orders of magnitude less than the COVID-19 pandemic.
Most systems (like self-driving cars or payments) will be 64-bit and unaffected. Anything critical still running 32-bit will be on these kinds of Debian distros or NetBSD or whatever. The stuff that breaks will mostly be flashing-VCR-time impacts that can be worked around (timestamps on security cameras may wrap around to 1970 and be impossible to set to the correct date, but the cameras should still continue working).
There may be stupid companies that go bankrupt because all their devices fail hard and they get sued out of existence, but that kind of impact happens all the time. Just looking at something like the list of largest trading losses in banks:
https://en.wikipedia.org/wiki/List_of_trading_losses
The economy has a very large capacity for dumbassery. Most hard failures will just be Keynesian stimulus for their competitors. Any hard failures that result in losses to the customers (e.g. stores robbed because the security cameras are down) will get absorbed by insurance -- and really, they're already getting robbed so much by smash-and-grabs with stolen vehicles that I doubt it matters.
> This is now less than 15 years away and plenty of systems that will have problems have already been shipped. We should stop adding to the problem. Most computing, especially computing using Debian or its derivatives, is now done on 64-bit hardware where this issue does not arise.
The goal is to support 64 bit time on 32 bit systems.
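Concretely: since glibc 2.34 (IIRC) the switch is opt-in per build on 32-bit targets, via -D_FILE_OFFSET_BITS=64 -D_TIME_BITS=64, and the t64 transition is about rebuilding every library whose ABI carries a time_t with that choice. A minimal way to see the difference (hypothetical file name demo.c):

    /* Build on a 32-bit target:
     *   gcc -m32 demo.c
     *       -> sizeof(time_t) == 4, clock ends in January 2038
     *   gcc -m32 -D_FILE_OFFSET_BITS=64 -D_TIME_BITS=64 demo.c
     *       -> sizeof(time_t) == 8
     * (glibc only accepts _TIME_BITS=64 together with _FILE_OFFSET_BITS=64.) */
    #include <limits.h>
    #include <stdio.h>
    #include <time.h>

    int main(void) {
        printf("sizeof(time_t) = %zu\n", sizeof(time_t));
        if (sizeof(time_t) == 4) {
            time_t last = INT_MAX;  /* 2^31 - 1 s = 2038-01-19 03:14:07 UTC */
            printf("last representable second: %s", ctime(&last));
        }
        return 0;
    }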
> We are doomed.
We really aren't. Y2K was a big nothingburger despite the media scaremongering.
64-bit systems are already nearly everywhere. As TFA states, most distros don't even bother supporting 32-bit hardware anymore. They simply don't.
Are there going to be tiny Internet-of-shitty-insecure-Things devices out there failing in 2038? Very likely. Nuclear powerplants failing, nuclear warheads launched by mistake and lab doors wrongly opened (from labs where they do GoF research on deadly viruses)? Nope. Not gonna happen.
For some reason, a lot of people ignore the warnings
https://wiki.debian.org/DebianUnstable
about unstable and testing, and then complain that Debian is broken.
I have never (yet!) gotten myself into any trouble by following a very simple principle: if aptitude (dselect back in the day) wants to remove or hold more than a handful of packages, STOP. Undo what you just did, and instead of a bulk upgrade go through marking one package at a time for upgrade. When you hit one that needs major changes, if you can't find an upgrade alternative that is sensible, leave it un-upgraded and try again in a few days.
This time_t upgrade has been one of the more challenging ones to track in unstable, but patience, and actually reading the reasons dpkg reports for why it wants to do something, have gotten me through it.
So I wrote a lighthearted theme song for `unstable` updates. I don't recall most of it offhand, but it was to the tune of "Cat's in the Cradle":
o/~ What I'd really like now
is to update your libc.
See you later,
can you apt-get please?
At some point, Linux software stopped improving so quickly, and Debian Stable seemed better. And I've been very happy with it, for almost all startup and personal purposes. Testing and unstable require a bit more experience to drive daily, but neither of them just collapses into a pile of smoking ash with an update. They're pretty stable and usable. I haven't had to reinstall any testing systems since I started using them.
The only thing about Testing and unstable is: "Neither of them receives security updates. They are not intended for internet-facing production systems."
Hopefully that changes once the packages move to stable.
Stable will iron out all of them.
Yes, the probability of being attacked is lower, but it's a false sense of security. Nowadays you download loads of code from the internet just by navigating to a website in your web browser. Are you 100% sure there is no bug in the sandbox? How about all the other protocols on your computer, or all the other computers on the same "internal" network? Are you sure those are 100% resistant too?
Debian's Firefox follows upstream 24-48h behind, unless something big like the t64 transition is happening (which is once every 5 years or so). Unstable currently provides 125.0.3. If you wish, you can use Mozilla's official Firefox builds, whose updates arrive as soon as new versions are released. However, Debian hardens Firefox to its own standards.
Other libraries and everything else follow upstream pretty closely when the distro is not frozen for a release.
Also, Stable has the "patches in 24h even if upstream doesn't provide one" guarantee (which we saw in action during Bash's RCE bug). This is what testing and unstable lack.
So it's not FUD; it's nuanced. As I said before, using testing and unstable needs some experience and an understanding of how a distribution works. Stable is install-and-forget.
> or all the other computers on the same "internal" network.
Considering we manage all the other computers on the same internal network, I'm sure they're fine. They all run prod-ready software. Some BSD, the rest is Linux.
Debian Testing and Unstable are mostly fine if you know what you are doing - you seem to have a clue. Security guidance is usually for people that need it - those that might not understand the full implications. Even for professionals it can be a useful reminder.
Oh, OK. My bad, sorry. Thanks for the clarification.
Given the enormous scope of this transition, I have to wonder how many will be benefiting from this 15 years from now. I don’t question it will have utility, but I wonder about the ROI.
This transition is mainly to benefit 32-bit ARM since it's likely to be used in new systems for many more years.
Besides, maybe there will be some digital (ARM?) dust running 32-bit Debian in the future.
I read the article with my limited knowledge and I couldn't find an explanation for this statement. Can someone elaborate?
Or we can do library transitions for the set of all libraries that reference time_t in their ABI; this is a bit more work for everyone in the short term because it will impact testing migration across all archs rather than being isolated to a single port, but it’s on the order of 500 library packages which is comparable to other ABI transitions we’ve done in the past (e.g. ldbl https://lists.debian.org/debian-devel/2007/05/msg01173.html).
It is Debian-specific because Debian is one of the rare distros that keeps supporting 32-bit systems. The others don't even bother; it's all 64-bit now, where the year 2038 problem doesn't exist at all.
Maybe 20 years ago, but these days you cannot make such a generalization. For example, VC++ uses a 64-bit time_t even in 32-bit builds unless you explicitly define _USE_32BIT_TIME_T. In 64-bit builds, a 32-bit time_t would be an illegal configuration that won't even compile.
> 64-bit architectures are not affected by the y2k38 problem, but they are affected by this transition.
^ surprised by this line. guess it also impacts lib32 packages on amd64
I'm a little surprised it didn't happen sooner, but I guess the 64-bit transition dampened the urgency. Looks like it was waiting on changes in glibc etc. Unfortunately, the remaining 32-bit space is less likely to support upgrades.
The reason for this is that in Debian, package name and ABI are coupled, as that allows you to coinstall multiple versions of libraries with different ABIs. This transition changes the ABI on 32-bit architectures, so the package name of impacted packages also has to change (they get a "t64" suffix). While it's technically possible to have different package names on different architectures, that's a PITA to deal with, so they opted to also change the package names on 64-bit architectures, where it's not strictly required.
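A tiny illustration of why the ABI actually breaks (made-up library, not a real package): anything that exposes time_t in a struct or function signature changes size and layout when time_t doubles on 32-bit architectures, so binaries built against the old headers and libraries rebuilt with 64-bit time_t silently disagree:

    /* Hypothetical libfoo header, built for 32-bit ARM. */
    #include <time.h>

    struct log_entry {
        time_t when;   /* 4 bytes with 32-bit time_t, 8 bytes with 64-bit  */
        int    level;  /* ...so its offset, and the struct size, shift too */
    };

    /* A caller compiled against the old layout passes a struct that the
     * rebuilt library reads with different offsets: classic silent ABI
     * breakage. Renaming the binary package (libfoo1 -> libfoo1t64)
     * ensures the two ABIs can never be mixed by accident. */
    int log_write(const struct log_entry *entry);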
It wasn't graceful. I don't know for sure whether the transition was the cause, but I managed to break network-manager while updating. I had to download the .deb on my phone and transfer it over, since I'm travelling. I managed to force the install but broke KDE in the process. I had to figure out how to connect to the internet from the console and reinstall KDE. That would have been much easier had I not been on the road.
But sometimes I envy Common Lispers: you can livepatch any crap like it's nothing. Also, 9front: system recompilation is mega-fast, so the worst of the issue would be rebooting a machine.
It's amazing how quickly you can solve problems when you're fine breaking user space between releases.
Not that Linux has a perfect record, but it's much harder when you actually try.
Slides on OpenBSD's transition to 64-bit time_t
https://www.openbsd.org/papers/eurobsdcon_2013_time_t/index....
"I have altered the ABI. Pray I do not alter it further." -- Theo de Raadt
But OpenBSD also broke Go binaries with some recent changes, right?
Breaking ABI towards userspace, or breaking user space? What's the difference? Binaries that used to work no longer do.
If you only run packages and ports, then shrug. If all you have is packages, ports, and source, then just upgrade and recompile everything.
I'm not saying it's wrong to break userspace, but it's not inaccurate to call it such.
Cf. Google's monorepo, set up precisely so they don't need to maintain an ABI.
ABIs may or may not be overrated, but dropping that requirement sure makes things easier.
I was thinking more of the end user, not the package maintainers and app developers. A random user upgrades to a new OS release and gets the new software packages which use the new ABI.
Actually that's not even fair to Go. I'm as annoyed as anyone about the many suboptimal technical choices of the Go team, but I would say that Theo saying "I have altered the ABI. Pray I do not alter it further" is quite accurate.
Go in this case made a choice based on one "promise", and OpenBSD changed the deal.
Even Hyrum's law is more about "as implemented". Go actually went with "as designed, in the last, what, four decades?". Syscalls actually did use to be the API, and statically linked libc really did use to be "ok".
OpenBSD let security be done though the heavens fall, and that's fine. But it's MUCH easier than the alternative. So obviously much easier to make progress on.
Won't work well for games and other stuff you can't just recompile on your own.
Going mainframe mode and virtualizing the previous generation has problems in that the older releases don't get security fixes, and may not work as well with tuning and monitoring.
I would not want to run a 10 year old OpenBSD, virtual or not.
E.g. the speculative execution patches would be nontrivial to backport.
But it has valid use cases.
Well, turns out ABI stability was not enough to make gamers use Linux, but it's an example.
A better example would maybe be the Oracle database. You could maybe see them building for OpenBSD, but not if that means signing up to rebuild every six months because the ABI has potentially broken in the new release.
I would not say that this is the reason OpenBSD hasn't yet been the OS of choice for the Oracle sales team, of course.