Debian 64-bit-time transition
157 points | by okl | 13 days ago | 9 comments | wiki.debian.org
dmd
13 days ago
The 64-bit time transition ensures Unix time can be used far, far into the future, as described in Vernor Vinge's "A Deepness in the Sky":

> Pham Nuwen spent years learning to program/explore. Programming went back to the beginning of time. It was a little like the midden out back of his father’s castle. Where the creek had worn that away, ten meters down, there were the crumpled hulks of machines—flying machines, the peasants said—from the great days of Canberra’s original colonial era. But the castle midden was clean and fresh compared to what lay within the Reprise’s local net. There were programs here that had been written five thousand years ago, before Humankind ever left Earth. The wonder of it—the horror of it, Sura said—was that unlike the useless wrecks of Canberra’s past, these programs still worked! And via a million million circuitous threads of inheritance, many of the oldest programs still ran in the bowels of the Qeng Ho system. Take the Traders’ method of timekeeping. The frame corrections were incredibly complex—and down at the very bottom of it was a little program that ran a counter. Second by second, the Qeng Ho counted from the instant that a human had first set foot on Old Earth’s moon. But if you looked at it still more closely... the starting instant was actually some hundred million seconds later, the 0-second of one of Humankind’s first computer operating systems.

(Yes, Vinge made a slight error here - it's only about 14 megaseconds from Armstrong to the epoch, not 100.)
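
For the curious, the arithmetic is easy to check. A minimal sketch, assuming a glibc/BSD system (timegm() is a common extension, not ISO C):

    #define _DEFAULT_SOURCE
    #include <stdio.h>
    #include <time.h>

    /* Seconds from Armstrong's first step (1969-07-21 02:56 UTC)
     * to the Unix epoch (1970-01-01 00:00 UTC). */
    int main(void) {
        struct tm step = {0};
        step.tm_year = 69;  /* years since 1900 */
        step.tm_mon = 6;    /* July; months are 0-based */
        step.tm_mday = 21;
        step.tm_hour = 2;
        step.tm_min = 56;
        time_t t = timegm(&step);                /* negative: before the epoch */
        printf("%.1f megaseconds\n", -t / 1e6);  /* prints ~14.2 */
        return 0;
    }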

blahgeek
13 days ago
It bothers me that they are only doing the transition now. It should have been done 20 years ago.

2038 is less than 15 years away and I bet a significant portion of systems will not be upgraded during this period. (As a reference, Windows 7 was released 15 years ago and it's still on 3% usage share [1].)

What's worse, compared to the Y2K problem, in 2038 much more critical infrastructure will depend on computers (payments, self-driving cars, public transportation, security cameras, etc.). We are doomed.

[1] https://en.wikipedia.org/wiki/Template:Windows_usage_share

Denvercoder9
13 days ago
Note that for most systems this transition already happened with the move to 64-bit architectures in the 2000s. It's only 32-bit architectures to which the current transition applies. Debian couldn't have transitioned them 20 years ago, as kernel support for 64-bit time_t on a 32-bit architecture is fairly recent (Linux 5.6 in 2020).
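
The userspace half of that arrived in glibc 2.34: on a 32-bit target, 64-bit time_t is opt-in via preprocessor macros rather than a new API. A minimal sketch (the macros are the real glibc ones; _TIME_BITS=64 requires _FILE_OFFSET_BITS=64):

    /* Build on e.g. armhf or i386 with:
     *   gcc -D_FILE_OFFSET_BITS=64 -D_TIME_BITS=64 check.c
     * glibc then routes time-related calls to the Linux 5.6+
     * *_time64 syscalls. */
    #include <stdio.h>
    #include <time.h>

    int main(void) {
        /* 4 by default on 32-bit targets, 8 with the macros above;
         * always 8 on 64-bit architectures. */
        printf("sizeof(time_t) = %zu\n", sizeof(time_t));
        return 0;
    }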

asveikau
13 days ago
I remember OpenBSD working on this about 10 years ago.

Also, strictly speaking, transitioning with the CPU architecture is not enough -- the kernel APIs and ABIs are not the only places that need a 64-bit time_t. Notably, it has to change in the filesystem too. And at that point millions of people will have on-disk structures with 32-bit timestamps, so the transition needs to happen in a somewhat backward-compatible way.
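
For what it's worth, the kernel's newer stat interface already reports wide timestamps independently of the userspace time_t width. A sketch using statx(2) (Linux >= 4.11, glibc >= 2.28):

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <fcntl.h>
    #include <sys/stat.h>

    int main(void) {
        struct statx stx;
        /* stx_mtime.tv_sec is a signed 64-bit field even on 32-bit
         * userlands; whether the underlying filesystem can *store*
         * such a timestamp is the separate problem described above. */
        if (statx(AT_FDCWD, ".", 0, STATX_MTIME, &stx) == 0)
            printf("mtime: %lld\n", (long long)stx.stx_mtime.tv_sec);
        return 0;
    }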

phaedrus
13 days ago
Except some communication protocols have a 32-bit timestamp baked into the format.
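
When the wire format can't grow, the usual trick is to reinterpret the 32-bit field relative to a trusted local clock, which is essentially how NTP handles its 2036 era rollover. A hypothetical sketch, not from any particular protocol:

    #include <stdint.h>

    /* Widen a 32-bit wire timestamp by picking the era that places
     * it within +/- 2^31 seconds of a trusted reference time. */
    int64_t widen_timestamp(uint32_t wire, int64_t ref) {
        int64_t era = (ref - (int64_t)wire + (1LL << 31)) >> 32;
        return (era << 32) + (int64_t)wire;
    }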

tomxor
13 days ago
> 2038 is less than 15 years away and I bet a significant portion of systems will not be upgraded during this period.

No, Linux distro releases are not like Windows; there is little reason to be holding back and running 15-year-old systems. If you are running Debian 12 in 2038, lack of t64 will be the least of your issues. Debian 12 standard LTS ends in 2028 [0] and extended support (if you pay for it) ends in 2033 [1]. Beyond that you're on your own. If you're running servers, you'd be mad to be running any of the current stable and old-stable releases by then.

[0] https://wiki.debian.org/LTS

[1] https://wiki.debian.org/LTS/Extended

soupbowl
13 days ago
I worked on ancient Red Hat and Debian Linux equipment recently. That something shouldn't be used, or is "easy" to upgrade, means nothing in the real world. That applies now and it will 15 years from now.

yoyohello13
13 days ago
People willfully/recklessly running systems well beyond eol is not the problem of maintainers.

HeatrayEnjoyer
12 days ago
It will be everyone's problem if critical infrastructure stops working. Nobody gives a flying carp whose fault it is when erroneous air traffic control systems start killing people or bank transactions stop processing.

ranger_danger
12 days ago
Everyone except for the people who voted to EOL those older OSes people still use.

HeatrayEnjoyer
10 days ago
Voting to EOL your video game software will not save you from burning to death in jet fuel.

marcosdumay
13 days ago
Well, if people go with "if it's working, don't break it", they can upgrade when they lose some availability after it breaks.

tomxor
12 days ago
Yes, I have worked on migrating such servers myself once upon a time, but the parent is not arguing about minorities:

> I bet a significant portion of systems will not be upgraded

And regardless, nothing Debian does or does not do is going to make a difference to all of the issues of running EOLed OS versions.

ezoe
12 days ago
Do you really remember 20 years ago? I do. It was the year 2004.

Linux kernel 2.6 was still in development and people were using 2.2 or even 2.0.

The latest stable Debian at that time was Woody, using Linux kernel 2.2.

GNU/Linux was still struggling to support POSIX threads. ext3 was merged into the Linux kernel in 2001 and people were still using ext2. ext2/ext3 have the year 2038 problem, but in 2004 nobody took it seriously. Even ext4 had the year 2038 problem, and later got a workaround that prolongs its life for another few hundred years.
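
That ext4 workaround packs two extra epoch bits into the inode's 32-bit "extra" field (the rest holds nanoseconds). A simplified sketch of the decode, per the kernel's ext4 inode layout documentation:

    #include <stdint.h>

    /* On-disk: a signed 32-bit seconds field, plus an extra field
     * whose low 2 bits extend seconds to 34 bits (good until the
     * year 2446). Simplified; the kernel also handles old inodes
     * that lack the extra field. */
    int64_t ext4_decode_seconds(int32_t ondisk_sec, uint32_t extra) {
        return (int64_t)ondisk_sec + ((int64_t)(extra & 3) << 32);
    }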

The UTF-8 standard was only finalized in 2003, and nobody thought it would become the default character encoding. No GNU/Linux distro at the time used UTF-8 as the default locale. Most people believed UTF-16 was the future but were reluctant to change -- except for Microsoft Windows. Microsoft was an early adopter of Unicode, and they are still paying for the failure of choosing UTF-16 while all the other OSes, web browsers, and everything else later chose UTF-8.

People in 2004 were aware of the year 2038 problem. But there was no way we could have used a 64-bit signed integer as time_t at that time, even for the filesystem.

Even if we had tried to fix it in 2004, it would probably have been a bad fix that needed to be redone right now.

Personally, I think that if people are still using Linux kernel 2.2 or Windows CE in their embedded systems in 2038, they have other serious problems as well.

lamontcg
13 days ago
> We are doomed.

Doubt it. Any disruption will probably be two orders of magnitude less than the COVID-19 pandemic.

Most systems (like self driving cars or payments) will be 64-bit and unaffected. Anything critical still running 32-bit will be on these kinds of debian distros or NetBSD or whatever. The stuff that breaks will mostly be flashing-VCR-time impacts that can be worked around (timestamps on security cameras may wrap around to 1970 and be impossible to set to the correct date, but the cameras should still continue working).
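
The failure mode here is plain signed 32-bit overflow. A minimal sketch of the moment it happens (ctime() formats in local time, so the exact strings vary):

    #include <stdio.h>
    #include <stdint.h>
    #include <time.h>

    int main(void) {
        int32_t last = INT32_MAX;    /* 2038-01-19 03:14:07 UTC */
        int32_t wrapped = (int32_t)((uint32_t)last + 1);

        time_t t = last;
        printf("%s", ctime(&t));     /* the last representable second */
        t = wrapped;                 /* -2^31: back to 1901-12-13, or */
        printf("%s", ctime(&t));     /* 1970 if code treats it as unsigned */
        return 0;
    }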

There may be stupid companies that go bankrupt because all their devices fail hard and they get sued out of existence, but that kind of impact happens all the time. Just looking at something like the list of largest trading losses in banks:

https://en.wikipedia.org/wiki/List_of_trading_losses

The economy has a very large capacity for dumbassery. Most hard failures will just be Keynesian stimulus for their competitors. Any hard failures that result in losses to the customers (e.g. stores robbed because security cameras down) will get absorbed by insurance -- and really they're already getting robbed so much by smash and grabs with stolen vehicles I doubt it matters.

IshKebab
12 days ago
How many 32 bit time-dependent systems running 15 year old Linux do you think there will be in 2038? It's going to be a handful at most.

immibis
12 days ago
I worked on one. Cellphone routers deployed in remote locations, yes, including ones handling 911 calls. They'll probably all crash at the rollover and require a technician to come and upgrade the firmware on site.

TacticalCoder
12 days ago
From TFA:

> This is now less than 15 years away and plenty of systems that will have problems have already been shipped. We should stop adding to the problem. Most computing, especially computing using Debian or its derivatives, is now done on 64-bit hardware where this issue does not arise.

The goal is to support 64-bit time on 32-bit systems.

> We are doomed.

We really aren't. Y2K was a big nothingburger despite the media scaremongering.

64-bit systems are already nearly everywhere. As TFA states, most distros don't even bother supporting 32-bit hardware anymore. They simply don't.

Are there going to be tiny Internet-of-shitty-insecure-Things devices out there failing in 2038? Very likely. Nuclear powerplants failing, nuclear warheads launched by mistake and lab doors wrongly opened (from labs where they do GoF research on deadly viruses)? Nope. Not gonna happen.

immibis
12 days ago
Plenty of single-purpose 32-bit ARM systems are around.

NoahKAndrews
11 days ago
Wasn't it a nothingburger because of the massive effort to get everything important updated in time?

dsr_
13 days ago
Since all of this is happening, properly, in the unstable and testing repos, stable is not affected.

For some reason, a lot of people ignore the warnings

https://wiki.debian.org/DebianUnstable

about unstable and testing, and then complain that Debian is broken.

bgribble
13 days ago
I have used unstable as my daily driver since they named it "unstable" -- I don't even remember when that was, but it's been at least 20 years.

I have never (yet!) gotten myself into any trouble, by following a very simple principle: if aptitude (dselect back in the day) wants to remove or hold more than a handful of packages, STOP. Undo what you just did, and instead of a bulk upgrade go through marking one package at a time for upgrade. When you hit one that needs major changes, if you can't find an upgrade alternative that is sensible, leave it un-upgraded and try again in a few days.

This time_t transition has been one of the more challenging ones to track in unstable, but patience, and actually reading the reasons dpkg reports for why it wants to do something, has gotten me through it.

neilv
13 days ago
Back when I was running Debian `unstable` (Sid?), I was always slightly nervous that an update could make the system unbootable, or otherwise ruin my day, when I had other things to do.

So I wrote a lighthearted theme song for `unstable` updates. I don't recall most of it offhand, but it was to the tune of "Cat's in the Cradle":

    o/~ What I'd really like now
        is to update your libc.
        See you later,
        can you apt-get please?
At some point, Linux software stopped improving so quickly, and Debian Stable seemed better. And I've been very happy with it, for almost all startup and personal purposes.

Grimeton
13 days ago
(S)id (I)s (D)angerous

marcosdumay
13 days ago
It's even named after the movie's villain.

bayindirh
13 days ago
I've been using testing for the last decade, and it's pretty solid. Testing received the bulk of the "-t64" packages last week (~1,200), and the update went pretty smoothly.

Testing and unstable require a bit more experience to drive daily, but neither of them just collapses into a pile of smoking ash with an update. They're pretty stable and usable. I haven't reinstalled any of my testing systems since I started using them.

The only caveat about testing and unstable is that neither of them receives security updates; they are not intended for internet-facing production systems.

binkHN
13 days ago
Did you just run apt update && apt full-upgrade and all went well?

okl
13 days ago
For me, aptitude showed a bunch of conflicts, but when I selected all the required -t64 packages manually, they disappeared. Guess the algorithm is not so great at resolving the changes seamlessly.

Hope that changes before the packages move to stable.

bayindirh
13 days ago
It removed the Bluetooth stack once, because I didn't look at the packages being removed. Other than that, the other two big upgrades (~350 and ~550 packages each) went smoothly. It removes tons of libraries and installs the "t64" versions, so small glitches like this are expected in testing.

Stable will iron out all of them.

binkHN
13 days ago
Yeah, I noticed these t64 versions. Do you know if they will keep the t64 suffix in the package name forever? Or will they one day revert to their previous package names, with the t64 removed?

seba_dos1
13 days ago
I'd expect packages to drop the suffix at their next ABI version bump, which changes the package name anyway.

bayindirh
13 days ago
They'll probably remove the t64 suffix when all packages are migrated and t64 becomes the norm. However, I didn't dig into the list enough to see what they're planning.

josephcsible
13 days ago
That lack of security updates is the only reason that I'm not using Debian testing or unstable as my daily driver at home.

bayindirh
13 days ago
If it's behind a NAT, there's no serious risk since they can't reach you, but if there's nothing between you and the internet, then it becomes a problem.

kaliszad
12 days ago
Please don't spread these false assurances.

Yes, the probability of being attacked is lower, but it is a false sense of security. Nowadays you download loads of code from the internet just by navigating to a website in your browser. Are you 100% sure there is no bug in the sandbox? How about all the other protocols on your computer, or all the other computers on the same "internal" network? Are you sure those are 100% resistant too?

bayindirh
12 days ago
Well, the kernel follows upstream fairly closely, so unless something catastrophic like Spectre and the like happens, you track upstream closely. Currently Testing runs 6.7.12, from April 24.

Firefox follows upstream 24-48h behind, unless something big like the t64 transition is happening (which happens once every 5 years or so). Unstable currently provides 125.0.3. If you wish, you can use Mozilla's official Firefox builds, whose updates arrive as soon as new versions are released. However, Debian hardens Firefox to its own standards.

Other libraries and everything else follow upstream pretty closely when the distro is not frozen for release.

Also, Stable has the "patches in 24h even if upstream doesn't provide them" guarantee (which we saw during Bash's RCE bug). This is what testing and unstable lack.

So, it's not FUD. It's nuanced. As I said before, using testing and unstable needs some experience and an understanding of how a distribution works. Stable is install-and-forget.

> or all the other computers on the same "internal" network.

Considering we manage all the other computers on the same internal network, I'm sure they're fine. They all run prod-ready software. Some BSD, the rest is Linux.

kaliszad
12 days ago
I was addressing the NAT point. NAT itself is not a firewall, and there are different implementations of NAT, e.g. full-cone NAT, that are particularly bad if what you expect is a layer of stateful randomness. And a firewall these days is not perfect either, btw, if you can get your code to run on the client directly in some capacity.

Debian Testing and Unstable are mostly fine if you know what you are doing -- you seem to have a clue. Security guidance is usually for people who need it -- those who might not understand the full implications. Even for professionals it can be a useful reminder.

bayindirh
12 days ago
> I was addressing the NAT point.

Oh, OK. My bad, sorry. Thanks for the clarification.

ComputerGuru
13 days ago
Debian is one of the few distros still offering a tier-1, first-party i686 target in 2024. Even a lot of derivative distros (i.e. where it would be minimal effort to do the same) have dropped i686.

Given the enormous scope of this transition, I have to wonder how many will be benefiting from this 15 years from now. I don't question that it will have utility, but I wonder about the ROI.

agwa
13 days ago
Debian is not switching to 64-bit time_t on their x86 architecture, because they see it as a legacy architecture whose primary use is running legacy binaries.

This transition is mainly to benefit 32-bit ARM since it's likely to be used in new systems for many more years.

pwdisswordfishc
12 days ago
Wow, so the multi-hour package conflict resolution I am going through RIGHT NOW on my 2004 laptop is completely pointless? Good to know.

snvzz
12 days ago
You'd be better off switching to a non-Linux system on that hardware. Ideally something that uses 64-bit offsets and time_t.

pabs3
12 days ago
Debian is dropping i686 too; you won't be able to install it soon. Also, the t64 fixes are not happening on i386, since the ABI change that t64 entails would break proprietary binaries, and Debian cares more about that use case than about people with ancient i386 hardware, who should just upgrade to amd64 anyway.

actionfromafar
13 days ago
Adjusted for coolness points the ROI is pretty good, though. :)

Besides, maybe there will be some digital (ARM?) dust running 32-bit Debian in the future.

WirelessGigabit
13 days ago
> 64-bit architectures are not affected by the y2k38 problem, but they are affected by this transition.

I read the article with my limited knowledge and I couldn't find an explanation for this statement. Can someone elaborate?

mikepavone
13 days ago
On Linux at the kernel level, time_t was historically defined as a 'long'. Since Linux is LP64 this automatically became 64-bit on 64-bit architectures.
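
A quick way to see the LP64 effect, assuming a multilib gcc so the same source can be built for both ABIs:

    /* gcc -m64 sizes.c && ./a.out  ->  long: 8, time_t: 8  (LP64)
     * gcc -m32 sizes.c && ./a.out  ->  long: 4, time_t: 4  (ILP32,
     * without the _TIME_BITS=64 opt-in discussed elsewhere) */
    #include <stdio.h>
    #include <time.h>

    int main(void) {
        printf("long: %zu, time_t: %zu\n", sizeof(long), sizeof(time_t));
        return 0;
    }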

zinekeller
13 days ago
I initially assumed that the problem was due to compatibility concerns (like NFSv3 requiring 32-bit timestamps), but it is instead a rather Debian-specific problem:

  Or we can do library transitions for the set of all libraries that reference time_t in their ABI; this is a bit more work for everyone in the short term because it will impact testing migration across all archs rather than being isolated to a single port, but it’s on the order of 500 library packages which is comparable to other ABI transitions we’ve done in the past (e.g.  ldbl https://lists.debian.org/debian-devel/2007/05/msg01173.html).

TacticalCoder
12 days ago
> but is instead a rather Debian-specific problem

It is Debian-specific because Debian is one of the rare distros which keep supporting 32-bit systems. The others don't even bother; it's all 64-bit now, where the year 2038 problem doesn't exist at all.

Denvercoder9
13 days ago
See my comment elsewhere in this discussion: https://news.ycombinator.com/item?id=40264502.

bun_terminator
13 days ago
(here stood something wrong because I've been working with 32bit code for too long recently)

TonyTrapp
13 days ago
> but in general time_t is commonly an int32 type - doesn't matter what OS it compiles on

Maybe 20 years ago, but these days you cannot make such a generalization. For example, VC++ uses a 64-bit time_t even in 32-bit builds unless you explicitly define _USE_32BIT_TIME_T. On 64-bit builds, a 32-bit time_t would be an illegal configuration that won't even build.
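
A sketch of the MSVC behaviour (64-bit time_t has been the CRT default since Visual C++ 2005; the macro must be defined before the CRT headers, and this assumes a C11-capable MSVC for static_assert):

    #include <assert.h>
    #include <time.h>

    /* Compiles with the default CRT configuration on both x86 and
     * x64 targets; defining _USE_32BIT_TIME_T (x86 only) breaks it. */
    static_assert(sizeof(time_t) == 8, "time_t is 64-bit by default");

    int main(void) { return 0; }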

CamperBob2
13 days ago
In Win32 builds, it's been necessary to request legacy 32-bit time_t structures explicitly with _USE_32BIT_TIME_T for many years, for what that's worth. 64-bit time_t has been the standard for a while.

bun_terminator
13 days ago
Ow, a stark reminder that I've worked with 32-bit code for far too long.

__s
13 days ago
Note this only matters for 32-bit systems. Good to see, though.

> 64-bit architectures are not affected by the y2k38 problem, but they are affected by this transition.

^ surprised by this line. Guess it also impacts lib32 packages on amd64.

A little surprised it didn't happen sooner, but I guess the 64-bit transition dampened the urgency. Looks like it was waiting on changes in glibc etc. Unfortunately, the remaining 32-bit space is less likely to support upgrades.

Denvercoder9
13 days ago
> ^ surprised by this line. guess it also impacts lib32 packages on amd64

The reason for this is that in Debian, package name and ABI are coupled, as that allows you to coinstall multiple versions of libraries with different ABIs. This transition changes the ABI on 32-bit architectures, so the package name of impacted packages also has to change (they get a "t64" suffix). While it's technically possible to have different package names on different architectures, that's a PITA to deal with, so they opted to also change the package names on 64-bit architectures, where it's not strictly required.

juujian
13 days ago
I just went through this transition in Debian testing, I suppose. I wasn't aware it was going to happen, and I hadn't had a chance to install updates for the two weeks prior, which I usually do every day.

It wasn't graceful. I don't know for sure that the transition is the cause, but I managed to break network-manager while updating. I had to load the deb from my phone and transfer it over, since I'm travelling. I managed to force the install but broke KDE in the process. I had to figure out how to connect to the internet from the console and reinstall KDE. That would have been much easier had I not been on the road.

seba_dos1
13 days ago
The golden rule when using testing/unstable is: when apt wants to remove packages, double-check everything before proceeding. Really, it's that simple.

juujian
13 days ago
Yeah, reading comprehension is one of my weaknesses tbh.

bitwize
13 days ago
An old Jargon entry that nicely captures the scope of the problem:

http://www.catb.org/jargon/html/R/recompile-the-world.html

anthk
13 days ago
Less of a problem with ccache, and OpenBSD solved the time_t issue long ago.

But sometimes I envy Common Lispers: you can livepatch any crap like nothing. Also, 9front: megafast system recompilation, so the worst of the issue would be rebooting a machine.

knorker
13 days ago
> OpenBSD solved the time_t issue long ago.

It's amazing how quickly you can solve problems when you're fine breaking user space between releases.

Not that Linux has a perfect record, but it's much harder when you actually try.

sillywalk
13 days ago
I'd prefer to refer to it as 'changing userspace', as opposed to breaking it. The ABI is changed, and userspace/ports are updated to use the new ABI. Users just upgrade as usual, and don't see any 'breakage'. It wouldn't work for Linux, because the kernel and glibc + the rest of userspace are developed separately.

Slides on OpenBSD's transition to 64-bit time_t

https://www.openbsd.org/papers/eurobsdcon_2013_time_t/index....

"I have altered the ABI. Pray I do not alter it further." -- Theo de Raadt

https://marc.info/?l=openbsd-tech&m=157489277318829&w=2

knorker
13 days ago
Linux could probably do it despite glibc, just more carefully.

But OpenBSD also broke Go binaries, with some recent changes, right?

Breaking ABI towards userspace, or breaking user space? What's the difference? Binaries that used to work no longer do.

If you only run packages and ports, then shrug. If all you have is packages, ports, and source, then just upgrade and recompile everything.

I'm not saying it's wrong to break userspace, but it's not inaccurate to call it such.

Cf. Google's monorepo, which avoids the need to maintain an ABI.

ABIs may or may not be overrated, but dropping that requirement sure makes things easier.

sillywalk
12 days ago
On second thought, I think you're right. The term "breaking userspace" is more accurate.

I was thinking more of the end user, not the package maintainers and app developers. A random user upgrades to a new OS release and gets the new software packages, which use the new ABI.

HumanProtractor
12 days ago
Go wasn't properly behaved; the OpenBSD people actually fixed it so it works properly now. It's not broken.

knorker
12 days ago
That's Hyrum's law for you.

Actually that's not even fair to Go. I'm as annoyed as anyone about the many suboptimal technical choices of the Go team, but I would say that Theo saying "I have altered the ABI. Pray I do not alter it further" is quite accurate.

Go in this case made a choice based on one "promise", and OpenBSD changed the deal.

Even Hyrum's law is more about "as implemented". Go actually went with "as designed, in the last, what, four decades?". Syscalls actually did use to be the API, and statically linked libc really did use to be "ok".

OpenBSD lets security be done though the heavens fall, and that's fine. But it's MUCH easier than the alternative. So obviously much easier to make progress on.

anthk
13 days ago
Upgrading is not an issue; now you have vmd, and a lot of mirrors still have the old TGZ sets, ramdisks, kernels, and packages, so you can virtualize old machines just in case.

knorker
13 days ago
I agree that it's a fine use case. I'm just saying it's simpler if you allow yourself a fresh start.

Won't work well for games and other stuff you can't just recompile on your own.

Going mainframe mode and virtualizing the previous generation has problems in that the older releases don't get security fixes, and may not work as well with tuning and monitoring.

I would not want to run a 10 year old OpenBSD, virtual or not.

E.g. the speculative execution patches would be nontrivial to backport.

But it has valid use cases.

anthk
12 days ago
Most games can either be recompiled or run with xnaify or similar Mono/C# wrappers.

knorker
12 days ago
Sure, they're recompilable. If whoever has the source recompiles them.

Well, turns out ABI stability was not enough to make gamers use Linux, but it's an example.

A better example would maybe be the Oracle database. You could maybe see them building for OpenBSD. But not if that means signing up to rebuild every six months because the ABI has potentially broken in the new release.

I would not say that this is the reason OpenBSD hasn't yet been the OS of choice for the Oracle sales team, of course.

shmerl
13 days ago
It got over most of the blockers it seems, but it's really monstrous in scope.