If you look at "date" specifically on https://uutils.github.io/coreutils/docs/test_coverage.html, it looks much worse than the overall graph suggests: 2 tests passing, 3 tests skipped, 3 with errors. Not really reassuring, right?
That's because they added new tests to catch these cases. I recall seeing someone mention in a comment here that coreutils didn't have a test for this either.
So it is reassuring that these things actually get documented and tested.
You can write frivolous tests all you want; bugs are a part of life. They'll occur even if you test thoroughly, especially in new software. It's the response that matters.
First, cases where expected+old+new are identical should go to the regression suite. Then a HUMAN should take a look, in this order: 1. Cases where expected+old are identical, but rust is different. 2. If time allows, cases where expected+rust are identical, but old is different.
TBH, after #1 (expected+old, vs. rust) I'd be asking the GenAI to generate more test cases in these faulty areas.
Usually, the standard a rewrite is held to is "no worse than the original," which is a very high bar.
They tested what original coreutils tested. Until other people put uutils into production, neither had a test for this case.
https://github.com/coreutils/coreutils/blob/master/tests/dat...
Especially because people will not use a pre-compiled binary, but compile the software themselves (e.g., Gentoo users). So there must be no 'secret' tests, to guarantee that whoever compiles the software, as long as the dependencies are met, will produce a binary with the exact same behavior.
In fact, as open-source software, the test suite of the original coreutils is part of the source package. It's in the coreutils maintainers' interest to have the software tested against known edge cases, because one day their project will be picked up by "some lone developer in Iowa" who will add new features. If there were 'secret' test cases, the new developer's additions might break things.
This incident is merely coreutils happening to produce correct results on some edge case that uutils got wrong.
In practice "some lone developer in Iowa" will be held to the standard of quality of the original project if they want to add to it or replace it despite the support they get from the public package. Open-source software is also often not open to being pushed by any random person.
These kinds of "tests" are often enforced as the codebase evolves, by having old guys (e.g., named Torvalds) yell at new guys. They are hard to formalize short of writing a proof.
Same thing that happens with Alpine's shell, or macOS, or the BSDs — I work with shell all the time and often run into scripts that should work on non-bash shells, with non-GNU coreutils, but don't, because nobody cared to test them anywhere besides Ubuntu, which until now at least had the same environment as most other Linux distributions.
More pain incoming, for no technical reason at all. Canonical used to feel like a force for the good, but doesn't anymore.
Working on macOS for years taught me that most people will write code that supports what they can test, even if there's a better way that works across more systems. Heck, even 'sed -i' breaks if you go from macOS to Linux or vice versa (GNU sed takes the backup suffix as an optional part of the flag, while BSD sed requires it as a separate, possibly empty, argument: `sed -i ''`), but if you don't have a Mac you wouldn't know.
Meanwhile, this is a rewrite of `date` (and other coreutils) with the goal of being perfectly compatible with GNU coreutils (even using the coreutils test cases), which means that differences between the two are going to reduce, not expand.
What you're complaining about here is "people only test on one platform" and your solution is that everything should stay the same and never change, and we should have homogeneity across all platforms forever so that you don't have to deal with a shell script that doesn't work. The actual solution is for more people to become aware of these differences and why best practices exist.
Note that recently Ubuntu and Debian switched /bin/sh to dash instead of bash, which then resulted in a lot of people having to fix their /bin/sh scripts to remove bashisms which then improves things for everyone across all platforms. Now Ubuntu switches to uutils and we find they have a bug in `date` because GNU coreutils didn't have a test for that either; now coreutils has a test for it too so that they don't have a regression in the future, and everyone's software gets better.
Not that recent. Ubuntu switched in 2006(!) and Debian followed in 2011.
I mean, all good then.
1. It literally remains stable for less time. Nine months instead of 5+ years, up to 12 if you pay them.
2. They apparently have a history of testing changes in it.
3. They appear to only sell things like livepatch and extended support for LTS editions, and products you pay for are implicitly more stable than products you do not.
Or to use Ubuntu's own terminology: "Interim releases will introduce new capabilities from Canonical and upstream open source projects, they serve as a proving ground for these new capabilities." They also call LTS 'enterprise grade' while interims are merely production-quality. Personally I see these as different levels of stability.
Isn't "stability" in this context a direct reference to feature set which stays stable? When a version is designated stable it stays stable. You're talking about support which can be longer or shorter regardless of feature set.
When they stop adding features, it's stable. Every old xx.04 and xx.10 version of Ubuntu is stable even today, no more features getting added to 12.10. When they stop offering support, it's unsupported. 14.04 LTS became unsupported last year but not less stable.
These are orthogonal. You can offer long-term support for any possible feature combination (if you have the resources), and you can be stable with no support. In reality it's easier to freeze a feature set and support that snapshot for a long time than to chase a moving target.
Applying the word "stable" to things in the unusable region of state space seems technically, but only technically, correct.
If you have a problem with them, 20 other people have had that same problem before you did, two of them have posted on Stackoverflow and one wrote a blog post.
OpenBSD and Illumos may be cool, but you really need to know what you're doing to use them.
That said, the underlying structure is still Ubuntu centered. I also like Ubuntu server, even though I don't use snaps, mostly because the install pre-configures most of the initial changes I make to Debian anyway. Sudo is configured, you get an option to import your public key and preconfigure non-pwd ssh, etc. I mostly install ufw and Docker, and almost everything I run goes under Docker in practice.
Unofficially, any serious user knows to stick to LTS for any production environment. These are by far the most common versions I encounter in the wild and on customer deployments, in my experience.
In fact I don't think I ever saw someone using a non-LTS version.
Canonical certainly has these stats? Or someone operating an update mirror could infer them? I'd be curious what the real-world usage of different Ubuntu versions actually is.
The above is absolutely what happened in this case. The package was at revision 0.1 throughout the release cycle, then 12 days before the final release, 6 weeks after the freeze and after the beta release, they changed the package to 0.2 with no notes other than "new upstream release". Nobody had time to try it, they just shipped it out to everyone literally without even trying it once.
The obvious reason is to have fewer bugs in the long run. A temporary increase during the transition is expected and not ideal, but after that there should be fewer of them.
It's not like the C version didn't have any bugs:
https://bugs.debian.org/cgi-bin/pkgreport.cgi?archive=both;d...
The highest sounds are hardest to hear. Going forward is a way to retreat. Great talent shows itself late in life. Even a perfect program still has bugs.
People start making sudo more secure by replacing it with sudo-rs
You: "why are we rewriting old utilities?"
Looks like a logic bug to me? So rust wouldn't have helped.
Those are exactly the kind of bugs you might introduce when you do a rewrite.
This is a take I never understood. I get being huge, but old? Software doesn't age; when it is older it tends to have fewer bugs, not more.
sudo-rs is designed to be a drop-in replacement for maybe 95-99% of people who have been using sudo.
(I do use doas on my own systems though)
I would have much preferred if Ubuntu went with run0 as the default instead of trying to rewrite sudo in Rust. I like Rust, but the approach seems wrong from the beginning to me. The vast majority of sudo use cases are covered by run0 in a much simpler way, and many of the sudo bugs come from the complex configurations it supports (not to mention a poorly configured sudo, which is also a security hazard and quite easy to end up with). Let people who need sudo install and configure it for themselves, but make something simple the default, especially for a beginner distro like Ubuntu.
sudo-rs can be a drop-in replacement for sudo for at least 95-99% of deployments, without any config changes necessary.
Now the rewrite in Rust is important because it largely prevents the introduction of new memory-safety bugs, which might happen inadvertently if, say, while fixing a logic bug in one of sudo's more complex usages (and thus a less traversed code path), the maintainer introduced a memory bug.
This resistance, IMHO, is moot anyways since the sudo maintainer himself is in support of sudo-rs and actually helped the project in a consultancy capacity (as opposed to directly contributing code).
This is ubuntu, purportedly targeting ease of use, good defaults, and new Linux users. How many Linux newbies are running with custom sudo configurations? By definition, basically none, and of those who do, it's only for passwordless sudo, which I assume can be trivially recreated in run0. For advanced or enterprise users, it is not difficult to install sudo manually or port their configuration over to run0.
> This resistance, IMHO, is moot anyways since the sudo maintainer himself is in support of sudo-rs and actually helped the project in a consultancy capacity (as opposed to directly contributing code).
I'm not categorically against sudo-rs, but use the tool for the job. If all you need is a simple way to get root privilege, sudo is overkill.
Yes, if you ignore all the bugs resulting from features that almost nobody uses.
> along with the rest of the systemd abominations
Not too interested in engaging systemd debates. I have enjoyed using systems with and without systemd, and while I understand the arguments against feature creep, I think you'd be throwing the banana out with the peel to overlook the idea behind run0.
For such a security sensitive piece of software like sudo, reducing complexity is one of the best ways to prevent logic bugs (which, as you mentioned in the sibling, is what the above bug was). If run0 can remove a bunch of unused features that are increasing complexity without any benefit, that's a win to me. Or if you don't like systemd, doas on OpenBSD is probably right up your alley with a similar(ish) philosophy.
For anyone who wants to read more about Lennart's reasoning behind run0: https://mastodon.social/@pid_eins/112353324518585654
However run0 has a property of being a systemd project, which makes it a no go from the inception. And sudo-rs has a similar property of being a virtue signaling project and not a real one. Hence, sudo stays.
> For anyone who wants to read more about Lennart's reasoning
I'm not sure LP is a high-quality source. He has a reputation that makes me want to listen to everyone else but him.
Based off his reputation, I would agree, but after reading a lot of his own words via blog posts, comments in github issues, etc, I wonder how he gained that reputation. He has solid reasoning behind many of his ideas even if you disagree with them, and his comments seem pretty respectful and focused on the technical aspects. Maybe things were different in the past, or maybe some segments of the community just never forgave him for the early buggy systemd implementations, or maybe I just happened to only read things he wrote when he wasn't having a bad day, who knows.
So, not interested in his opinions even on merit.
This is not the same as fixing a bug.
As other threads have mentioned, a more advanced argument parser with detection of parsed-but-unused arguments could have caught this. Of course, there are already complaints about the increase in size for the Rust versions of uutils, mostly offset by a merged binary with separate symlinks. It's a mixed bag.
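To make the parser point concrete, here's a minimal sketch, assuming clap 4.x; the flag names and the fail-loudly guard are hypothetical, not how uutils actually structures this:

    // Flag names mirror `date`'s; declared-but-unhandled options fail
    // loudly instead of being silently parsed and ignored.
    use clap::{Arg, Command};

    fn main() {
        let matches = Command::new("date")
            .arg(Arg::new("date").short('d').long("date"))
            .arg(Arg::new("reference").short('r').long("reference"))
            .get_matches();

        if matches.get_one::<String>("reference").is_some() {
            unimplemented!("--reference is parsed but not handled yet");
        }

        // ... the actual date logic would go here ...
        if let Some(d) = matches.get_one::<String>("date") {
            println!("would print the date described by {d}");
        }
    }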
But, I'm sure you'll be reverting back to Xfree86 now.
How can you be sure that something is "backwards compatible"?
By running tests. And as it happens, the original coreutils did not have a test for this particular edge case.
Now that a divergence of behavior has been observed, all parties -- the coreutils devs and the uutils devs -- have agreed that this is an unacceptable regression and created new test cases to prevent the same misbehavior from happening again.
Backwards compatible means it's a drop-in replacement.
> How can you be sure that something is "backwards compatible"?
You compare the outputs from the same inputs.
> the original coreutils did not have a test for this particular edge case
So? 'man date' shows this argument option. Just because there was no existing unit test for it doesn't mean it's OK to break it. It would have taken literally 10 seconds to compare the outputs of the old and new programs.
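For instance, a rough differential-testing sketch in Rust (the binary paths are assumptions, and the inputs are chosen to be deterministic, since comparing bare `date` output would race against the clock):

    use std::process::Command;

    // Run both implementations with identical arguments and compare
    // stdout, stderr, and exit status.
    fn outputs_match(args: &[&str]) -> bool {
        let gnu = Command::new("/usr/bin/date").args(args).output().expect("run GNU date");
        let uu = Command::new("/usr/local/bin/uu-date").args(args).output().expect("run uutils date");
        gnu.stdout == uu.stdout && gnu.stderr == uu.stderr && gnu.status == uu.status
    }

    fn main() {
        // Deterministic cases, including the -r flag that slipped through.
        let cases: &[&[&str]] = &[
            &["-u", "-d", "@0"],
            &["-r", "/etc/hostname", "+%s"],
        ];
        for case in cases {
            println!("{:?} -> {}", case, if outputs_match(case) { "match" } else { "DIFFER" });
        }
    }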
Isn't this testing what they're doing now, which is what is exposing the bugs that need to be fixed?
Can someone explain this analogy to me? Because I consider sliced bread a decline. But maybe that is a cultural thing.
If cooking is a hobby for you, you're seeking labor. Maybe that makes the obvious unintelligible. If you're poor and have a bunch of hungry kids waiting, you don't want the cutting board covering up half your counter space while you're carefully trying not to screw up eight slices of bread before something on the stove burns.
(yes, I don't really get it either)
There surely would be a more beneficial undertaking somewhere else. If then you’d argue that they may do as they please with their time, fair, but then let’s not pretend this rewrite has any objective value aside from scratching personal itches and learning how cat and co are implemented.
It wasn't "worth it" at all.
If the rest of coreutils is bug-free, cast the first stone.
I do not think reimplementing stuff in Rust is a bad thing. Why? Because reimplementing stuff is a very good way to thoroughly check the original. It is always good to have as many eyeballs on the code as possible.
I’m still shocked by the number of people who seem to believe that the borrow checker is some kind of magic. I can assure you that the coreutils have all already gone through static analysers doing more checks than the Rust compiler.
Some checks are pretty much impossible to do statically for C programs because of the lack of object lifetime annotations, so no, this statement can't be right.
It is true that the borrow checker doesn't prevent ALL bugs though.
Furthermore, the "bug" in this case is due to an unimplemented feature causing a flag to be silently ignored... It's not exactly something that any static analyser (or runtime ones for that matter) can prevent, unless an explicit assert/todo is added to the codepath.
And even without annotations, you can prove safe a lot of constructs by being conservative in your analysis especially if there is no concurrency involved.
Note that I wasn't commenting on this specific issue. It's more about my general fatigue regarding people implying that rewrites in Rust are always better or should always be done. I like Rust, but the trendiness surrounding it is annoying.
Like any trendy language, you've got some people exaggerating the powers of the borrow checker, but I believe Rust did generally bring a lot of good outcomes. If you're writing a new piece of systems software, Rust is pretty much a no-brainer. You could argue for a language like Zig (or Go where you're fine with a GC and a bit more boilerplate), but that puts even more spotlight on the fact that C is just not a viable choice for most new programs anymore.
The rewrites-in-Rust are more controversial, and they are criticized here on HN just as much as they are hyped, but I think many of them brought a lot of good to the table. It's not (just?) because the C versions were insecure, but mostly because a lot of these new Rust tools replaced C programs that had become quite stagnant. Think of ripgrep, exa/eza, sd, nushell, delta and difft, dua/dust, the various top clones. And these are just command line utilities. Rewriting something in Rust is not an inherently bad idea if what you are replacing clearly needs a modern makeover, or if the component is security critical and the code that you are replacing has a history of security issues.
I was always more skeptical about the coreutils rewrite project because the only practical advantage they can bring to the table is more theoretical safety. But I'm not convinced it's enough. The Rust versions are guaranteed to not have memory or concurrency related bugs (unless someone used unverified unsafe code or someone did something very silly like allocating a huge array and creating their own Von Neumann Architecture emulator just to prove you can write unsafe code in Rust). That's great, but they are also more likely to have compatibility bugs with the original tools. The value proposition here is quite mixed.
On the other hand, I think that if Ubuntu and other distros persist in trying to integrate these tools the long-term result will be good. We will get a more maintainable codebase for coreutils in the future.
Where can I see these annotations for coreutils?
True, but "prevents all bugs" is pretty much what the "Rust is better" debate boils down to. So you end up with rewrites of code which introduce errors any programmer in any language can make, and since you are doing a full rewrite, that WILL happen no matter what you do.
If that's acceptable, fine; otherwise not. But you cannot hide from it.
Memory-related bugs are, like, 70% of the bugs found in programs written in C and C++.
So, by rewriting in Rust, you prevent 70% of new bugs from happening, because a whole class of bugs just ceases to exist.
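A trivial illustration of a class disappearing: in safe Rust, an out-of-bounds read is a checked, deterministic outcome rather than silent memory corruption:

    fn main() {
        let buf = vec![0u8; 4];
        let i = 10;
        // The checked accessor returns None instead of reading past the
        // end of the allocation; the indexing form `buf[i]` would panic
        // immediately rather than silently over-read like the C equivalent.
        match buf.get(i) {
            Some(b) => println!("byte: {b}"),
            None => println!("index {i} is out of bounds, caught safely"),
        }
    }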
These are command line utilities meant to be a human porcelain for libc. And last I checked, libc was C.
Ideally these should be developed in tandem, and so should the kernel. This is not the case in Linux for historical reasons, but some of the other utilities such as iputils and netfilter are. The kernel should be Rust long before these porcelain parts are.
1. Unsafe doesn't mean the code is actually unsafe. It only tells you that the compiler itself cannot guarantee its safety.
2. Unsafety tells code reviewers to give a specific section of code more scrutiny. Clippy also has an option that requires the programmer to put a comment explaining how the unsafe code is actually safe in context (see the sketch below).
3. And IF a bug does occur, it minimizes the amount of code you need to audit.
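For the record, the Clippy lint alluded to in point 2 is `undocumented_unsafe_blocks`; a minimal sketch of it in action:

    // With this restriction lint enabled, any `unsafe` block that is
    // not preceded by a `// SAFETY:` comment fails the lint.
    #![deny(clippy::undocumented_unsafe_blocks)]

    fn first_byte(v: &[u8]) -> u8 {
        assert!(!v.is_empty());
        // SAFETY: the assert above guarantees the slice is non-empty,
        // so index 0 is in bounds.
        unsafe { *v.get_unchecked(0) }
    }

    fn main() {
        println!("{}", first_byte(b"abc"));
    }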
You do coverage testing, which would have found the missing date -r path.
You do proper code review, which would have found the missing date -r path.
And many coreutils options will not be implemented at all. ENOFIX
The original coreutils test suite didn't cover the -r path. The same bug would not have been statically discovered in most programming languages, except perhaps the highly functional ones like Haskell.
>You do proper code review, which would have found the missing date -r path.
And in an ideal world there would be no bugs at all. This is pointless -- we all know that we need to do a proper code review, but humans make errors.
And it should most certainly not be possible to declare options and leave them as empty placeholders. That should be flagged just like an unused variable is flagged. That is a problem with whatever option library they chose to use.
That alone should disqualify it from being a replacement yet. We're talking about a stable operating system used in production here. This is simply wrong on so many levels.
"Then any replacement project should not include bugs in their code."
Like I said before, broad statements like these are borderline pointless.
Of course we all know the should, the real problem is how -- how can you realistically make a "better test suite" when your goal is to create a bug-for-bug compatible replacement project?
And given the size of the original project, how should a better test suite be created?
>That is a problem with whatever option library they chose to use.
Instead of being vague, why not show a precise example of what you are talking about?
On the borrow checker: It doesn't prevent logic errors, as is commonly understood. These errors are what careful use of Rust's type system could potentially prevent in many cases, but you can successfully write Rust without leveraging it. The Rust compiler is an impressive problem-avoidance tool, but there are classes of problems even it can't prevent from happening.
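A small sketch of what "careful use of the type system" can look like in practice (the types here are invented for illustration):

    // The borrow checker won't stop you from passing seconds where
    // milliseconds were expected, but a newtype wrapper will.
    #[derive(Clone, Copy, Debug)]
    struct Seconds(u64);
    #[derive(Clone, Copy, Debug)]
    struct Millis(u64);

    impl From<Seconds> for Millis {
        fn from(s: Seconds) -> Self {
            Millis(s.0 * 1000)
        }
    }

    fn sleep_for(d: Millis) {
        println!("sleeping for {:?}", d);
    }

    fn main() {
        let timeout = Seconds(5);
        // sleep_for(timeout);      // compile error: expected `Millis`
        sleep_for(timeout.into()); // explicit, checked conversion
    }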
Let us not fall into the trap of thinking that just because the Rust compiler fails to prevent all issues, we should therefore abandon it. We shouldn't forget our shared history of mistakes rustc would have prevented (excerpt):
- CVE-2025-5278, sort: heap buffer under-read in begfield(), Out-of-bounds heap read in traditional key syntax
- CVE-2024-0684, split: heap overflow in line_bytes_split(), Out-of-bounds heap write due to unchecked buffer handling
- CVE-2015-4042, sort: integer overflow in keycompare_mb(), Overflow leading to potential out-of-bounds and DoS
If we were civil engineers with multiple bridge collapses in our past, and we finally developed a tool that reliably prevents an especially dangerous and common type of bridge collapse, we would be in the wrong profession if we scoffed at the use of the tool. Whether it is Rust or some C checker isn't really the point here. The point is building stable, secure and performant software others can rely on.
Any new method to achieve this more reliably has to be tested. Ideally in an environment where harm is low.
Yes, C programmers can do many more checks. The reality on the ground is -- they do not.
Forcing checks by the compiler seems to be the only historically proven method of making programmers pay more attention.
If you can go out there and make _all_ C code utilize best-in-class static checkers, by all means, go and do so. The world would be a much better place.
Tradeoffs, as always.
Yet, this effort has still found bugs in the upstream project! No codebase is perfect.
I'd be very interested in reading more about this. Could you please explain what are these checks and how they are qualitatively and quantitatively better than the default rustc checks?
Please note that this is about checks on the codebase itself - not system/integration tests which are of course already applicable against alternative implementations.
It shouldn't be magic. It should be routine and boring, like no-nulls, pure functions, static typing and unit tests.
Honestly, Ada is a far better choice than Rust when it comes to the safety/usability ratio, and you can even add SPARK on top, which Rust has no equivalent of. But somehow we are saddled with the oversold fashion machine even when it makes no sense. I mean, look at the discussion. It’s full of people who don’t even know what static analysis is but come explaining to me that I am a C zealot stuck in the past and ignorant of the magnificence of our lord and saviour Rust. I don’t even use C.
I don’t care that people rewrite stuff because they want to be cool. If they maintain it, it’s their responsibility and their time. I do care about distributions replacing things with untested things to sound cool, however. That’s shoddy work.
I also deeply disagree, as an OCaml programmer, that pure functions should be the norm. They have their place but are no panacea either. That’s the Haskell hype instead of the Rust hype this time.
That's why they (still) have users. If it works so well for Microsoft, why wouldn't it work for Ubuntu?
I am trying to read your comments charitably but I am mostly seeing generalizations which makes it difficult to extract useful info from your commentary.
We can start by dropping dismissive language like "trendy" and "magic", "fashion" and "Rust kids". We can also continue by saying that "believing the borrow checker is some kind of magic" is not an interesting thing to say as it does not present any facts or advance any discussion.
What you "assure" us of is also inconsequential.
One fact remains: there are a multitude of CVEs that would not have happened if the programs had been written in Rust. I don't think anyone serious ever claimed that Rust prevents logic bugs. People are simply saying: "Can we just have fewer possible bugs by virtue of the program compiling, please?" -- and Rust gives them that.
What's your objection to that? And let us leave aside your seeming personal annoyance of whatever imaginary fandom you might be seeing. Let us stick to technical facts.
Honestly it's at the point where I see someone complaining about a Rust rewrite and I just go ahead and assume that they're mouthing off about something because they think it's trendy and they think it's cool to hate things people like. I hate being prejudicial about comments but I don't have the energy to spend trying to figure out if someone is debating in good faith or not when it seems to so rarely be the case.
It's really weird, at one point I started asking myself if many comments are just hidden from me.
Then I just shrugged it off and concluded that it's plain old human bias and "mine is good, yours is bad" tribe mentality and figured it's indeed not worth my time and energy to do further analysis on tribal instinctive behaviour that's been well-explained in literature for like a century at this point.
I have no super strong feelings for or against Rust, by the way. I have used it to crushing success exactly where it shines and for that it got my approval. But I also work a lot with Elixir and I would rarely try to make a web app with Rust; multiple PLs have the frameworks that make this much better and faster and more pleasant to do.
But it does make me wonder: what stake do these people have in the whole thing? Why do they keep mouthing off about some imaginary zealots that are nowhere to be found?
If you show me Rust advocates with comments like these I would be happy to agree that there are in fact Rust zealots in this thread.
Like, one zealot stabbing at another HN commenter saying "Biased people like yourself don't belong in tech", because the other person simply did not like the Rust community. Or another zealot trying to start a cancel campaign on HN against a vocal anti-Rust person. Yet another vigorously denied the existence of Rust supremacism, while simultaneously raging on Twitter about Microsoft not choosing Rust for the Typescript compiler.
IMO, the sad part is watching zealots forget. Reality becomes a story in their head; much kinder, much softer to who they are. In their heads, they are an unbiased and objective person, whereas a "zealot" is just a bad word for a bad, faraway person. Evidence can't change that view because the zealot refuses to look & see; they want to talk. Hence, they fail the mirror test of self-awareness.
Well, most of them fail. The ones who don't forget & don't deny their zealotry, I have more respect for.
Neither I, nor anybody else, owes them grace beyond a certain point.
Where do you draw the line when confronted with people who already dislike you because they put you in a camp you don't even belong to but you still tried to reason with them to make them see nuance?
Skewing reality to match your bias makes for boring discussions. But again, I stand behind what I said then. And I refuse to be called a zealot. I don't even use Rust as actively; I use the right tool for the job and Rust was that on multiple projects.
If you're not interested in the context then please don't make hasty conclusions and misrepresent history. If you want to continue that old discussion here, I'm open to it.
EDIT: I would also love it if people just gave up the "zealot" label altogether. It's one of the ways to brand people and make them easier to hate or insult. I don't remember ever calling any opponent from the 'other side' a C/C++ zealot, for what it's worth. And again, if people want to actually discuss, I am all for it. But this is not what I have witnessed, historically.
- the cult-like evangelism from the Rust community that everything written in Rust would be better
- the general notion that rewriting tools should bring clear and tangible benefits. Rewriting something mostly because the new language is safer will provoke irritation and frustration in affected end-users when the end product turns out to introduce new issues
This rewrite project is about corporations escaping from GPL code. It's got nothing to do with security.
If license was the only concern then I'd think that they wouldn't switch the programming language?
And yeah, obviously using Rust will not eliminate all CVEs. It does eliminate buffer overflows and underflows though. Not a small thing.
Also I would not uncritically accept the code of the previous coreutils as good. It got the job done (and has memory safety problems here and there). But is it really good? We can't know for sure.
Problem is, they unironically want to replace coreutils with their toy. And they just did.
One can say a lot about the reason, but "it would not be obvious" is certainly an unlikely one.
I also don't think that Rust itself is the only possible good language to use to write software - someone might invent a language in the future that is even better than Rust, and maybe at some point it will make sense to port rust-coreutils to something written in that yet-undesigned language. It would be good to design software and software deployment ecosystems in such a way that it is simply possible to do rewrites like this, rather than rely so much on the emergent behavior of one C source code collection + build process for correctness that people are afraid to change it. Indeed I would argue that one of the flaws of C, a reason to want to avoid having any code written in it at all, is precisely that the C language and build ecosystem make it unnecessarily difficult to do a rewrite.
That's empty dogma.
C's issue is that C compilers provide very little in terms of safety analysis by default. That doesn't magically turn Rust into a panacea. I will take proven C, or even statically analysed C, above what the borrow checker adds to Rust any day of the week.
I like the semantic niceties Rust adds when doing new development, but that doesn't in any way justify treating all rewrites as improvements by default.
Yes this is precisely a respect in which C is bad. Another respect is that C allows omitting curly braces after an if-statement, which makes bugs like https://www.codecentric.de/en/knowledge-hub/blog/curly-brace... possible. Rust does not allow this. This is not an exhaustive list of ways in which Rust is better than C.
> I will take proven C or even static analysed C above what the borrow checker adds to Rust any day of the week.
Was coreutils using proven or statically analyzed C? If not, why not?
Which is why your first and only example is a bug from over a decade ago, caused by an indentation error that C compilers can trivially detect as well.
You have some theoretical guardrails that aren't used widely in practice, many times even can't be used. If they could just be introduced like that, they'd likely be added to the standard in the first place.
The fact that the previous commenter can even ask the question if someone has analyzed or proven coreutils shows how little this "can detect" really guarantees.
In the end, your "can trivially detect" is useless compared to Rust enforcing these guarantees for everyone, all the time.
And how many C programmers just ignore the warning while thinking, "I have decades of experience the compiler is just blabbering false positives"?
Rust forces some things preventing the bypass of guardrails.
That seems to come from carrying over the meaning of errors and warnings from other languages to C. In other languages an error means there might be some mistake, and a warning is a minor nitpick. For C, a warning is a stern warning. It is the compiler saying "this is horribly broken and I am compiling it to something totally different from what you thought. This will never work, and you should fix it, but I will still do my job and produce the code, because you are the boss." An error is more akin to the compiler not even knowing what that syntax could mean.
Honestly, this is because I like C. I want control.
This is a silly thing to point to, and the very article you linked to argues that the lack of curly braces is not the actual problem in that situation.
In any case, both gcc and clang will give a warning about code like that[1] with just "-Wall" (gcc since 2016 and clang since 2020). Complaining about this in 2025 smells of cargo cult programming, much like people who still use Yoda conditions[2] in C and C++.
C does have problems that make it hard to write safe code with it, but this is not one of them.
And how many C programmers ignore such warnings because they are confident they know better?
People who write C code ignoring warnings are the same people who in Rust will resort to writing unsafe with raw pointers as soon as they hit the first borrow check error. If you can't force them to care about C warnings, how are you going to force them to care about Rust safety?
I've seen this happen; it's not seen at large because the vast majority of people writing Rust code in public do it because they want to, not because they're forced.
I think it works, and quite well even. Defaults matter, a lot, and Rust and its stdlib do a phenomenal job at choosing really good ones, compared to many other languages. Cargo's defaults maybe not so much, but oh well.
In C, sloppy programmers will generally create crashy and insecure code, which can then be fixed and hardened later.
In Rust, sloppy programmers will generally create slower and bloated code, which can then be optimized and minimized later. That's still bad, but for many people it seems like a better trade-off for a starting point.
> In Rust, sloppy programmers will [...]
You're comparing apples to oranges.
Inexperienced people who don't know better will make safe, bloated code in Rust.
Experienced people who simply ignore C warnings because they're "confident they know better" (as the other poster said) will write unsafe Rust code regardless of all the care in the world put in choosing sensible defaults or adding a borrow checker to the language. They will use `unsafe` and call it a day -- I've seen it happen more than once.
To fix this you have to change the process being used to write software -- you need to make sure people can't simply (for example) ignore C warnings or use Rust's `unsafe` at will.
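For what it's worth, Rust does ship one crate-wide lever for exactly that:

    // Forbidding unsafe at the crate root turns every `unsafe` block in
    // the crate into a hard compile error. Unlike `deny`, `forbid`
    // cannot be overridden further down with an `allow`.
    #![forbid(unsafe_code)]

    fn main() {
        // unsafe { /* anything */ }  // would fail to compile
        println!("this crate cannot opt out of the safety checks");
    }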
This dogma is statistically verifiable. We could also replace them with Go counterparts.
> I will take proven C or even static analysed C
This just means you don't understand static analysis as well as you think you do. A rejection of invalid programs by a strict compiler will always net more safety by default than a completely optional step after the fact.
No it isn't. In fact, "Replacing code written in <X> with code written in <Y> is good in and of itself" is a falsehood, for any pair of <X> and <Y>. That kind of unqualified assertion is what the deluded say to themselves, or propagandists (usually <Y> hype merchants) say out loud.
This is what reality looks like: https://www.bankofengland.co.uk/news/2022/december/tsb-fined...
Furthermore, "designing for a future rewrite" is absolute madness. There is already a lot of YAGNI waste work going on. It's fine to design software to be modular, reusable, easily comprehensible, and so on, but designing it so its future rewrite will be easier - WTF? You haven't even built the first version yet, and you're already putting work into designing the second version.
Fashions are fickle. You can't even know what will be popular in the future. Don't try to anticipate it and design for it now.
If software is in fact designed to be modular, reusable, and easily comprehensible, then it should be pretty easy to rewrite it in another language later. The fact that many people are arguing that programmers should not even attempt to rewrite C coreutils, for fear of breaking some poorly understood emergent behavior of the software, is evidence that C coreutils is not in fact modular, reusable, and easily comprehensible. This is true regardless of whether the Rust rewrite (or another language rewrite) actually happens.
It's not. I never said it was. Nor are my bank's systems; I don't want them to fuck them up either. My bank's job is not to rewrite their codebase in shinier, newer languages that look nice on their staff's CVs, their job is to continue to provide reliable banking services. The simplest, cheapest way for them to do that is to not rewrite their software at all.
What I was addressing was two different approaches to "design[ing] software [...] in such a way that it is simply possible to do rewrites"
* One way is evergreen: think about modularity, reusability, and good documentation in the code you're writing today. That will help with any mooted future rewrite.
* The other way, which you implied, is to imagine what the future rewrite might look like, and design for that now. That way lies madness.
you don't really believe this because you said that the only possibly better language would be one that doesn't yet exist
But my point was that there's no reason to think that the specific package of design decisions that Rust made as a language is the best possible one; and there's no reason why people shouldn't continue to create new programming languages including ones intended to be good at writing basic PC OS utils, and it's certainly possible that one such language might turn out to do enough things better than Rust does that a rewrite is justified.
These kinds of bugs might not bug end users much, but when it becomes a fleet-wide problem, it becomes crippling.
I've been debugging a problem on a platform since this morning. At the end of the day, it turned out that the platform is sending things somewhere it's explicitly told not to.
Result? Everything froze, without any errors. System management is hard to begin with. It becomes really hard when the tools you think you can depend on break.
Also, consider what the uproar would be if the programming language were something other than Rust. The developers would be crucified, burned with flamethrowers, reincarnated, and tortured again until they got fed up, left computers, and started raising chickens at an off-grid location.
at Microsoft. /s
https://buildings.honeywell.com/au/en/products/by-category/b...
The reasoning was that the users didn’t own the device. While I personally believe this is not consistent with recent interpretations of the license by the courts, I think they concluded that it was worth the risk of a customer suing to get the source code, as the company could then pull the hardware and leave that customer high and dry. It is unlikely any of their users would risk that outcome.
Edited to add: it would be cool if, instead of the top-most wealth-concentrators F[500:], there was an index of the top-most wealth-spreaders F[:500]. What would that look like? A list of cooperatives?
Assuming this theory is true then, what other GPLv3-licensed "core" software in the distro could be next on their list?
https://packages.ubuntu.com/plucky/rust-coreutils
The dependencies of rust-coreutils list libgcc-s1, which is GPL version 3.
However, it would be a fairly straightforward project to replace the unwinder used directly by Rust binaries with the one from libunwind. Given that this hasn't happened, I'd be surprised if Canonical is actually investing into a migration. Of course there are much bigger tasks for avoiding GPLv3 software, such as porting the distribution (including LLVM itself and its users) from libstdc++ (GCC's C++ standard library that requires GCC to build, but provides support for Clang as well) to libc++ (LLVM's C++ standard library).
https://sfconservancy.org/blog/2021/mar/25/install-gplv2/ https://sfconservancy.org/blog/2021/jul/23/tivoization-and-t... https://events19.linuxfoundation.org/wp-content/uploads/2017...
Perhaps it is also so they can be used in closed source systems (I have uutils installed on my Windows system which works nicely).
This would let them sell to companies who want to only permit signed firmware images to run on their devices, which isn't allowed under GPLv3.
How is this not allowed under GPLv3?

I don't think there’s any serious evidence of it being true, though. All we can see right now is that there are a surprising number of MIT-licensed packages replacing GPL-licensed packages. It could be a coincidence.
We sigh in relief every time we see software that we rely upon change to or add a non-viral license such as MIT, Apache, MPL, BSD, and so on.
This is gonna cause a lot of disappointment down the road.
In the end, a lot of people are willing to write open source just for the sake of having it as it scratches their own need and isn't otherwise monetizable or they just think it should exist. I would never even consider touching a GPLv3 licensed UI library component, for example.
It's not always the most appropriate license and if a developer wants to use a permissive license, they are allowed to. This isn't an authoritarian, communist dictatorship, at least it isn't where I live and to my dying breath won't be.
Choosing licenses due to peer pressure is completely stupid though. If you're not sure, you can just not pick a license at all. Copyright 2025 all rights reserved. If you must pick a license just because, then the reasonable choice is the strongest copyleft license available, simply because it maximizes leverage. The less you give away, the more conditions, the more leverage. It's that simple.
That people are actually feeling "pressure" to pick permissive licenses leads me to conclude this is a psyop. It's a wealth transfer, from well meaning developers straight into the pockets of corporations. It's being actively normalized so that people choose it "by default" without thinking. Yeah, just give it all away! Who cares, right?
I urge people to think about what they are doing.
So the ONLY reasonable choice for me is to release my code with a non-viral license. A copyleft license is TOTALLY UNREASONABLE for me because it limits the reach of my software.
(My license of choice is MPL-2.0)
The problem is that people choose permissive licenses to be "nice" when the truth is they have tons of unwritten rules and hidden assumptions. Magical thinking like "if I publish this open source software then it will come back to me in some way, maybe a job, maybe a sponsorship." No such deal exists. Then they wake up one day with corporations making billions off of their software while they're not making even one cent, and they suddenly have a very public meltdown where they bitterly regret their decisions. I've seen it happen, even with copyleft licenses.
If I'm writing something I intend or might intend to monetize later or otherwise don't want to have privatized, I'll probably reach for GPLv3, AGPL or a different license. The less "whole" a thing is, the more likely I'm going to use a more permissive license than not. Libraries or snippets of code are almost always going to be permissive at least from me. This includes relatively simple CLI utils.
Brave of them to ship a Rust port of sudo as well.
- GNU coreutils (GPLv3)
- uutils coreutils (MIT)
- busybox (GPLv2)
In sort command
Is this the best they could come up with?
But the worst you can do is crash 'sort' with that. Note that uutils also has crashes. Here is one due to unbounded recursion:
$ ./target/release/coreutils mkdir -p `python3 -c 'print("./" + "a/" * 32768)'`
Segmentation fault (core dumped)
Not saying that both issues don't deserve fixing. But I wouldn't really panic over either of them.

This has nothing to do with memory ownership, so the borrow checker is irrelevant. Ubuntu just shipped before that argument's handling was implemented.
https://discourse.ubuntu.com/t/carefully-but-purposefully-ox...
> Performance is a frequently cited rationale for “Rewrite it in Rust” projects. While performance is high on my list of priorities, it’s not the primary driver behind this change. These utilities are at the heart of the distribution - and it’s the enhanced resilience and safety that is more easily achieved with Rust ports that are most attractive to me.
As an obvious example, I sometimes download files from the Internet, then run coreutils sha256sum or the like on those files to verify that they're trustworthy. That means they're untrusted at the time where I use them as input to sha256sum.
If there's an RCE in sha256sum (unlikely, but this is a thought experiment to demonstrate an attack vector), then that untrusted file can just exploit that RCE directly.
If there's a bug in sha256sum which allows a malicious file to manipulate the result, then a malicious file could potentially make itself look like a trusted file and therefore get past a security barrier.
Maybe there's no bug in sha256sum, but I need to base64 decode the file before running sha256sum on it, using the base64 tool from coreutils.
If you use your imagination, I'm sure you yourself can think up plenty more use cases where you might run a program from GNU coreutils against untrusted user input. If it helps, here's a Wikipedia article which lists all commands from GNU coreutils: https://en.wikipedia.org/wiki/GNU_Core_Utilities#Commands
EDIT: To be clear, this comment is only intended to explain what the attack surface is, not to weigh in on whether rewriting the tools in Rust improves security. One could argue that it's more likely that the freshly rewritten sha256sum from uutils has a bug than that GNU sha256sum has a bug. The statement "tools from coreutils are sometimes used to operate on untrusted input and therefore have an attack surface worth exploring" is not the same as the statement "rewriting coreutils in Rust improves security". Personally, I'm excited for the uutils stuff, but not primarily because I believe it alone will directly result in significant security improvements in Ubuntu 25.10.
Rust is not a silver bullet.
Honestly, Rust-related hilarity aside, this project was a terrible, terrible idea. Unix shell environments have always been ad hoc and poorly tested, and anything that impacts compatibility is going to break historical code that may literally be decades old.
See also the recent insanity of GNU grep suddenly tossing an error when invoked as "fgrep". You just don't do that folks.
The 'fgrep' and 'egrep' commands didn't throw errors; they would just send a warning to standard error before behaving as expected.
Those commands were never standardized, and everyone is better off using 'grep -F' and 'grep -E' respectively.
Noted without comment. Except to say that I've had multiple scripts of my own break via "just" discovering garbage in the output streams.
> Those commands were never standardized
"Those commands" were present in v7 unix in 1979!
The only place that doesn't support 'grep -E' and 'grep -F' nowadays is Solaris 10. But if you are still using that, you will certainly run into many other missing options.
[1] https://pubs.opengroup.org/onlinepubs/007908775/xcu/egrep.ht... [2] https://pubs.opengroup.org/onlinepubs/007908775/xcu/fgrep.ht...
The deprecation argument is at least... arguable. It was indeed retired from POSIX. But needless deprecation is itself a smell in a situation where you can't audit all the code that uses it. Don't do that. It breaks stuff. It broke the updates in the linked article too. If you have an API, leave it there absent extremely strong arguments for its removal.
Why?
It's surely not. The question wasn't how to rewrite the shell environment to be more "endorseable", though.
The point is that we have a half century (!) long history of writing code to this admittedly fragile environment, with no way to audit usage or even find all the existing code (literally many of the authors are retired or dead).
So... it's just not a good place to play games with "Look Ma, I rewrote /usr/bin/date and it's safe now!" Mess with your own new environments, not the ones that run the rest of the world please.
Sounds like a plan. Let me know when you're done, and then we can remove fgrep.
EDIT: Just to be clear, I'm otherwise perfectly happy that these experiments are being done, and we should all be better off for it and learn something as a result. Obviously somebody has assessed that this tradeoff has at least a decent probability of being a net positive here in some timeframe, and if others are unhappy about it then I suppose they're welcome to install another implementation of coreutils, or use a different distro, or write their own, or whatever.
F.ex. `sudo-rs` does not support most of what the normal `sudo` does... and it turned out that most people did not need most of `sudo` in the first place.
Less code leads to fewer bugs.
Hence "doas".
OpenBSD has a lot of new stuff throughout the codebase.
No need for adding a bloated dependency (e.g. Rust) just because you want to re-implement "yes" in a "memory-safe language" when you probably have no reasons to.
$ /usr/bin/time date
Fri Oct 24 10:20:17 AM CDT 2025
0.00user 0.00system 0:00.00elapsed 0%CPU (0avgtext+0avgdata 2264maxresident)k
0inputs+0outputs (0major+93minor)pagefaults 0swaps
Imagine how much faster it will be with threading!

What? How??
- Tagged unions so you can easily and correctly return "I have one of these things".
- Generics so you can reuse datastructures other people wrote easily and correctly. And a modern toolchain with a package manager that makes it easy to correctly do this.
- Compile time reference counting so you don't have to worry about freeing things/unlocking mutexes/... (sometimes also called RAII + a borrow checker).
- Type inference
- Things that are changed are generally syntactically tagged as mutable which makes it a lot easier to quickly read code
- Iterators...
And so on and so forth. Rust is in large part "take all the good ideas that came before it and put it in a low level language". In the last 50 years there's been a lot of good ideas, and C doesn't really incorporate any of them.
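A toy example touching two of those points at once, tagged unions and iterators (the names are invented):

    // "I have one of these things": a tagged union the compiler makes
    // you handle exhaustively.
    enum Token {
        Number(i64),
        Word(String),
    }

    fn parse(input: &str) -> Vec<Token> {
        input
            .split_whitespace()
            .map(|s| match s.parse::<i64>() {
                Ok(n) => Token::Number(n),
                Err(_) => Token::Word(s.to_string()),
            })
            .collect()
    }

    fn main() {
        // Iterator adapters instead of hand-rolled index loops.
        let sum: i64 = parse("1 fish 2 fish")
            .iter()
            .filter_map(|t| match t {
                Token::Number(n) => Some(*n),
                Token::Word(_) => None,
            })
            .sum();
        assert_eq!(sum, 3);
    }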
Meanwhile my description doesn't fully capture how it guarantees unique access for writing, while yours does.
You're confusing the borrow checker with RAII.
Dropping the last reference to an object does nothing (and even the exclusive &mut is not an "owning" reference). Dropping the object itself is what automatically frees it. See also Box::leak.
With only RAII you don't get the last reference part.
Yes, there are exceptions; it's a roughly correct analogy, not a precise description.
I didn't invent this way of referring to it, though I don't recall who I stole it from. It's not entirely accurate, but it's a close enough description to capture how rust's mostly automatic memory management works from a distance.
If you want a more literal interpretation of compile time reference counting see also: https://docs.rs/static-rc/0.7.0/static_rc/
It’s just not a good mental model.
For example, with reference counting you can convert a shared reference to a unique reference when you can verify that the count is exactly 1. But converting a `&T` to a `&mut T` is always instantaneous UB, no exceptions. It doesn’t matter if it’s actually the only reference.
Borrows are also orthogonal to dropping/destructors. Borrows can extend the lifetime of a value for convenience reasons, but it is not a general rule that values are dropped when the last reference is gone.
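Both halves of that point, in code; `Rc::get_mut` is the standard library's runtime "count is exactly 1" check:

    use std::rc::Rc;

    fn main() {
        let mut shared = Rc::new(String::from("hello"));

        let second = Rc::clone(&shared);
        // Two live references: no unique access can be handed out.
        assert!(Rc::get_mut(&mut shared).is_none());
        drop(second);

        // Count is 1 again, so the runtime check yields a &mut String.
        if let Some(s) = Rc::get_mut(&mut shared) {
            s.push_str(", world");
        }
        assert_eq!(*shared, "hello, world");

        // Casting a plain `&T` to `&mut T`, by contrast, is undefined
        // behavior even when it really is the only reference:
        // let m = unsafe { &mut *(&*shared as *const String as *mut String) };
    }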
Borrow checking is necessary for dropping and destructors in the sense that without borrow tracking we could drop an owned value while we still have references to it and get a use-after-free. RAII in Rust only works safely because the borrow checker does the reference counting for us, telling us when it's again safe to mutate (including drop) owned values.
Yes, rust doesn't support going from an &T to an &mut T, but it does support going from an <currently immutable reference to T> to a <mutable reference to T> in the shape of going from an &mut T which is currently immutably borrowed to an &mut T which is not borrowed. It can do this because it keeps track of how many shared references there are derived from the mutable reference.
You're right that it's possible to leak the owning reference so that the object isn't freed when the last reference is gone - but it's possible to leak a reference in a runtime reference counted language too.
But yes, it's not a perfect analogy, merely a good one. It's most likely that the implementation doesn't just keep a count of references for instance, but a set of them to enable better diagnostics and more efficient computation.
I work this way and that's why I consider Rust to be a major impediment to my productivity. Same goes for Python with its significant whitespace which prevents freely moving code around and swapping code blocks, etc.
I guess there are people who plan everything in their mind and the coding part is just typing out their ideas (instead of developing their ideas during code editing).
But it's also because of all the things I'm forced to fix while implementing or refactoring, that I would've been convinced were correct. And I was proven wrong by the compiler, so, many, times, that I've lost all confidence in my own ability to do it correctly without this kind of help. It helped me out of my naivety that "C is simple".
I don't think there are, I think Gall's law that all complex systems evolve from simpler systems applies.
I play with code when I program with Rust. It just looks slightly different. I deliberately trigger errors and then read the error message. I copy code into scratch files. I'm not very clever; I can't plan out a nontrivial program without feedback from experiments.
I've written probably tens of thousands of lines each in languages like C, C++, Python, Java and a few others. None other has been as misery-free. I admit I haven't written Haskell, but it still doesn't seem very approachable to me.
I can flash a microcontroller with new firmware and it won't magically start spewing out garbage on random occasions because the compiler omitted a nullptr check or that there's an off-by-one error in some odd place. None. Of. That. Shit.
I'm a bit surprised that you are surprised by this. I sometimes think Rust emphasizes memory safety too much - like some people hear it and just think Rust is C but with memory safety. Maybe that's why you're surprised?
Memory safety is a huge deal - not just for security but also because memory errors are the worst kind of bug to debug. If I never again have to chase a memory safety bug that corrupts some data, but only in release mode... Those bugs take an enormous amount of time to deal with.
But Rust is really a great modern language that takes all the best ideas from ML and C, and adds memory safety.
(Actually multithreading bugs might be slightly worse but Rust can help there too!)
But the fact that program X was written in Rust is, on the other hand, newsworthy? And there is nothing odd in the fact that the first property of the software that is advertised is the fact that it was made in Rust.
Yeah, nothing odd there.
That's not why it is newsworthy though.
"A project reimplementing core OS programs for the sake of reimplementing in the favourite language breaks stable OS", is what makes it newsworthy.
however once software that has been only rewritten for the sake of being written in Rust starts affecting large distributions like Ubuntu, that's a different issue...
however one could argue that Ubuntu picking up the brand new Rust based coreutils instead of the old one is a 2nd order effect of "let's rewrite everything in Rust, whether it makes sense or not"
There's no "however" here. Rewriting anything in Rust has no effect on anybody by itself.
This isn't something that affected Ubuntu. It's something Ubuntu wanted to test in day to day usage.
Which part of that is stupid? The license was chosen because Rust is more static-linkage friendly. Which leaves the exercise part, or the high-compatibility goal.
You might as well say Linux is a stupid rewrite that will never achieve anything circa 1998.
> nice
> we’re releasing coreutils rewritten in a memory safe language for free
> how dare you!
They've added internal features though, like better hardware support in copying and moving files.
But systemd projects and Rust rewrites have this one thing in common: being pure virtue signaling, they absolutely have to be noticed. And what's a better way to get noticed than going for something important and core?
To me, Rust rewrites look like "just stop oil" road blocks - the more people suffer, the better.
PS: Disclaimer: I love Rust. I hate fanboys.
Then blame Canonical? Quit it with the Rust hate.
The thought of rewriting anything as intricate, foundational, and battle-tested as GNU coreutils from scratch scares me. Maybe I'd try it with a mature automatic C-to-Rust translator, but I would still expect years of incompatibilities and reintroduced bugs.
See also the "cascade of attention-deficit teenagers" development model.
It is extremely bad that it's not a relatively straightforward process for any random programmer to rewrite coreutils from scratch as a several week project. That means that the correct behavior of coreutils is not specified well enough, and that it's not easy enough to understand by reading the source code.
For a business it's often fine to stop at a local maximum, they can keep using old versions of coreutils however long they want, and they can still make lots of money there! However we are not talking about a business but a fundamental open source building block that will be around for a very long time. In this setting continuous long term improvement is much more valuable than short term stability. Obviously you don't want to knowingly break stability either, and in this regard I do think Ubuntu's timeline for actually replacing the default coreutils implementation is too ambitious, but that's beside the point—the rewrite itself is valuable regardless of what Ubuntu is doing!
The core bug seems to be that support for `date -r <file>` wasn't implemented at the time Ubuntu integrated it [1, 2].
And the command silently accepted -r before and did nothing (!)
0: https://lwn.net/Articles/1043123/
I have automated a lot of things by executing other utilities as subprocesses, and it's absolutely crazy how many utilities handle CLI flags in a way that seems correct, but isn't really.
If it was added in bulk, with many other still unsupported option names, why does the program not crash loudly if any such option is used?
A fencepost error is a bug. A double-free is a bug. Accepting an unsupported option and silently ignoring it is not, it takes a deliberate and obviously wrong action.
The relevant code before the fix [0]:

let date_source = if let Some(date) = matches.value_of(OPT_DATE) {
    DateSource::Custom(date.into())
} else if let Some(file) = matches.value_of(OPT_FILE) {
    DateSource::File(file.into())
} else {
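    // [annotation, not in the original commit] OPT_REFERENCE (-r) was
    // accepted by the argument parser but never checked here, so it
    // silently fell back to the current time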
    DateSource::Now
};
And after `-r` support was added (among other changes) [1]:

let date_source = if let Some(date) = matches.get_one::<String>(OPT_DATE) {
    DateSource::Human(date.into())
} else if let Some(file) = matches.get_one::<String>(OPT_FILE) {
    match file.as_ref() {
        "-" => DateSource::Stdin,
        _ => DateSource::File(file.into()),
    }
} else if let Some(file) = matches.get_one::<String>(OPT_REFERENCE) {
    DateSource::FileMtime(file.into())
} else {
    DateSource::Now
};
Still the same fallback. Not sure one can discern, from just looking at the code (and without knowing more about the context, in my case), whether the choice of fallback was intentional and handling the flag was simply forgotten about.

[0]: https://github.com/yuankunzhang/coreutils/commit/850bd9c32d9...
[1]: https://github.com/yuankunzhang/coreutils/blob/88a7fa7adfa04...
No, it doesn't. For example, you could have code that recognizes that something "is an option", and silently discards anything that isn't on the recognized list.
That's a deliberate action.
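To make the distinction concrete, here is a rough sketch (hypothetical code using plain std, not uutils code) of the two deliberate choices: silently discarding a recognized-but-unimplemented option versus refusing it loudly:

use std::env;
use std::process::exit;

// The parser *recognizes* -r as an option, so ignoring it would take an
// explicit branch. Failing loudly on anything recognized-but-unimplemented
// is just as easy to write, and is the safer default.
fn main() {
    for arg in env::args().skip(1) {
        match arg.as_str() {
            "-u" | "--utc" => { /* implemented: use UTC */ }
            "-r" | "--reference" => {
                // recognized but not implemented: refuse instead of
                // silently falling back to "now"
                eprintln!("date: option '{}' is not implemented yet", arg);
                exit(1);
            }
            other if other.starts_with('-') => {
                eprintln!("date: unrecognized option '{}'", other);
                exit(1);
            }
            _ => { /* positional argument, e.g. a format string */ }
        }
    }
}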
I'm frankly appalled that an essential feature such as system updates didn't have an automated test that would catch this issue immediately after uutils was integrated.
Nevermind the fact that this entire replacement of coreutils is done purely out of financial and political rather than technical reasons, and that they're willing to treat their users as guinea pigs. Despicable.
This feels like a large corporation, in the bad sense.
> deliberate and wrong decision
Yeah... I hope "we" will not switch to it just because it is written in Rust. There is much more than just the damn language behind it.
https://bugs.launchpad.net/ubuntu/+source/rust-coreutils/+bu...
The last commit[0] is a fix for date parsing to bring it in line with the GNU semantics, which seems like a pretty good candidate.
Edit: Or not, see evil-olive's comment[1] for a more likely candidate.
0: https://github.com/uutils/coreutils/commit/0047c7e66ffb57971...
Seems like if you have a reference implementation, your fuzzer should be able to do some nice white-box validation to ensure you are behaving the same as the old implementation.
Whatever language you're working in there is probably a port of Hypothesis or quickcheck. For Rust I use the `proptest` crate, but for differential testing of a CLI I would probably use the Python Hypothesis package and invoke the commands externally.
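A minimal sketch of that kind of differential test in Rust with the `proptest` crate, shelling out to both binaries (the binary paths and the generated input shape here are assumptions; add `proptest = "1"` as a dev-dependency to run it):

use std::process::Command;

use proptest::prelude::*;

// Run a date binary with a pinned timezone and capture (exit code, stdout).
fn run(bin: &str, args: &[&str]) -> (Option<i32>, String) {
    let out = Command::new(bin)
        .args(args)
        .env("TZ", "UTC") // pin the timezone so both binaries agree
        .output()
        .expect("failed to spawn binary");
    (out.status.code(), String::from_utf8_lossy(&out.stdout).into_owned())
}

proptest! {
    // Feed both implementations the same -d argument (including invalid
    // dates - both should reject those identically too) and require the
    // same exit code and output.
    #[test]
    fn date_matches_reference(spec in "[0-9]{4}-[0-9]{2}-[0-9]{2}") {
        let gnu = run("/usr/bin/date", &["-u", "-d", &spec, "+%s"]);
        let uu = run("/usr/local/bin/uu-date", &["-u", "-d", &spec, "+%s"]);
        prop_assert_eq!(gnu, uu);
    }
}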
How "long term" are we talking about that rewriting battle-tested, mission-critical C utils (which, as other posters noted, in this case often have minimal attack surfaces) actually makes sense?
>> Which is why I'm glad they're doing it! It seems like the kind of thing that one can be understandably scared to ever do, and I say this as one of the folks involved with getting some Rust in the Linux kernel.
Total zealot.
Reminder that one of the uutils devs gave a talk at FOSDEM where he used spurious benchmarks to falsely claim uutils's sort was faster, only for /g/ users to discover it was only because it was locale-unaware, and in fact was much slower:
https://archive.fosdem.org/2025/schedule/event/fosdem-2025-6... (~15 min)
That was a lot of noise about not much. Locale handling was added and performance got even better:
Makes me wonder if putting a similar amount of effort into building up proof/formal verification system for coreutils would have yielded better results security wise.
Especially if you look very long term, as in where the young developers are, you'll see a significant reduction in the amount of people with the ability to write high-quality C. Rust has the benefit that low-quality Rust is fairly close to high-quality Rust, while low-quality C is a far cry from high-quality C.
Choosing Rust does not necessarily require Rust itself to be better for the task. It can also be the result of secondary factors.
I don't know if this applies to coreutils, but C being technically sufficient does not always mean it shouldn't be replaced.
Or they will get bored as soon as a New Awesome Language will be hyped on HN and elsewhere.
> The next Ubuntu release will be called Grateful Guinea-Pig
> Systems with the rust-coreutils package version 0.2.2-0ubuntu2 or earlier have the bug, it is fixed in 0.2.2-0ubuntu2.1 or later.
based on the changelog [0] it seems to be:
> date: use reference file (LP: #2127970)
from there: [1]
> This is fixed upstream in 88a7fa7adfa048dabdffc99451d7aba1d9e6a9b6
which in turn leads to [2, 3]
> Display the date and time of the last modification of file, instead of the current date and time.
this is not the type of bug I was expecting, I assumed it would be something related to a subtle timezone edge case or whatever.
instead, `date -r` is supposed to print the modtime of a given file:
> date --utc -Is -r ~/.ssh/id_ed25519.pub
2025-04-29T19:25:01+00:00
> date --utc -Is
2025-10-23T21:46:47+00:00
and it seems like the Rust version just... silently ignored that expected behavior? Maybe I'm missing something? If not, this seems really sloppy and not at all what I'd expect from a project aiming to replace coreutils with "safer" versions.
0: https://launchpad.net/ubuntu/questing/+source/rust-coreutils...
1: https://bugs.launchpad.net/ubuntu/+source/rust-coreutils/+bu...
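For scale, the core of what `-r` has to do is tiny. A sketch in Rust (hypothetical helper, not the uutils code), printing the equivalent of `date -r FILE +%s`:

use std::fs;
use std::time::UNIX_EPOCH;

fn main() -> std::io::Result<()> {
    // usage: mtime FILE
    let path = std::env::args().nth(1).expect("usage: mtime FILE");
    // read the file's modification time instead of the current time
    let mtime = fs::metadata(&path)?.modified()?;
    let secs = mtime
        .duration_since(UNIX_EPOCH)
        .expect("mtime before the epoch")
        .as_secs();
    println!("{}", secs);
    Ok(())
}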
Neither this issue, which doesn't appear to be a bug at all but merely an unimplemented feature, nor the fact that uutils doesn't (yet) pass the entire test suite, seems to me to be an indictment of the uutils project at all; it is merely a sign that the project is incomplete. Which is hardly surprising when I get the impression it's primarily been a hobby project for a bunch of different developers. It does make me wonder about the wisdom of Ubuntu moving to it.
I don't know what the code coverage of coreutils' test suite is, but my guess is that it's not spectacular.
Ubuntu is likely used by 10s of millions of servers and desktops. I'm not sure why this kind of breakage is considered acceptable. Very confusing.
Users who need stability should use the LTS releases. The interim releases have always been more experimental, and have always been where Canonical introduces the big changes to ensure everything's mature by the time the LTS comes around.
> Every six months between LTS versions, Canonical publishes an interim release of Ubuntu, with 25.10 being the latest example. These are production-quality releases and are supported for 9 months, with sufficient time provided for users to update, but these releases do not receive the long-term commitment of LTS releases.
and this is probably a net positive, there's now an early adopter for the project, the testsuite gets improved, and the next Ubuntu LTS will ship more modern tools
I was expecting that they would be concerned about bugs in the untested parts!
So this is a good thing even for coreutils itself, they will slowly find all of these untested bits and specify behaviour more clearly and add tests (hopefully).
Doesn't look like people who do their homework
but I don't think that should let the uutils authors off the hook - if `--reference` wasn't implemented, that should have been an error rather than silently doing the wrong thing.
after even more Git spelunking, it looks like that problem goes all the way back to the initial "Partial implemantion of date" [1] commit from 2017 - it included support for `--reference` in the argument parsing, including the correct help text, but didn't do anything with it, not even a "TODO: Handle this option" comment like `--set` has.
0: https://github.com/coreutils/coreutils/commit/14d24f7a530f58...
1: https://github.com/uutils/coreutils/commit/41d1dfaf440eabba3...
That brings GNU date(1) line coverage from 79.8% to 87.1%.
More coverage is nice, but the foremost concern should be doing the right thing, not having some tests for it. Some cultures do not put testing first and instead treat tests as a tool for edge cases. Nobody bothered to add a test for -r because they did not think of it as an edge case, but as core behaviour.
Yeah, broad tivoisation and patent clauses make it a problem, because any patent litigation, even on unrelated grounds, could cost you the ability to ship the entire OS.
Canonical is trying to position Ubuntu as a relevant player in the embedded space.
They've lost the plot. I don't mind change if it has meaningful benefits, but forcing unstable and barely-tested coreutils that fail their own tests is madness.
This summer I have migrated all our production and development servers to Debian. Because absolutely and sincerely fuck rust coreutils, sudo-rs, systemd-* and the other virtue signaling projects.
Also Canonical was named and shamed by people trying to get jobs as the poster child of everything wrong with tech recruiting in [current year].
Eventually installed from the PPA but it was an unexpected PITA.
They had me do an automated IQ test, specified I had to do it in my native language, and it turned out it had been machine translated with some tool that was decades old, so I didn't understand anything at all.
I am sure they've also blacklisted me, because I've been getting autorejected ever since.
I'm also a Debian Developer so I don't have any relevant experience that could be useful in working at Canonical.
Where do you see anything wrong with their process?
https://devclass.com/2024/08/19/rust-1-80-0-breaks-existing-...
So what's actually your point here?
Did they change the language? GCC is not meant to change the C or C++ languages (unless the user uses some flag to modify the language), there is an ISO standard that they seek to be compliant with. rustc, on the other hand, only somewhat recently got a specification or something from Ferrocene, and that specification looks lackluster and incomplete from when I last skimmed through it. And rustc does not seem to be developed against the official Rust specification.
That's not what you asked though, these were intentional breakages. Language standard or not.
In any case though, bringing up language specification as an example for maturity is such a massive cop-out considering the amount of UB in C and C++. It's not like it gives you good stability or consistency.
> there is an ISO standard that they seek to be compliant with
You can buy RM 8048 from NIST, is that the "culture" of stability you have in mind?
You are completely wrong, and you ought to be able to see that already.
It makes a world of difference if it is a language change or not. As shown in dtolnay's comment https://github.com/rust-lang/rust/issues/127343#issuecomment... .
If breakage is not due to a language change, and the program is fully compliant with the standard, and there is no issue in the standard, then the compiler has a bug and must fix that bug.
If breakage is due to a language change, then even if a program is fully compliant with the previous language version, and the programmer did nothing wrong, then the program is still the one that has a bug. In many language communities, language changes are therefore handled with care and changing the language version is generally set up to be a deliberate action, at least if there would be breakage in backwards compatibility.
I do not see how it would be possible for you not to know that I am completely right about this and that you are completely wrong. For there is absolutely no doubt that that is the case.
> In any case though, bringing up language specification as an example for maturity is such a massive cop-out considering the amount of UB in C and C++.
Rust is worse when unsafe is involved.
https://materialize.com/blog/rust-concurrency-bug-unbounded-...
There are almost no C programs without UB. So a lot of what you would call "compiler bugs" are entirely permitted by the standard. If you say "no true C program has UB" then of course, congrats, your argument might be in some aspects correct. But that's not really the case in practice, and your language standard provides shit in terms of practical stability and cross-compatibility in compilers.
> I do not see how it would be possible for you not to know that I am completely right about this and that you are completely wrong. For there is absolutely no doubt that that is the case.
Lol, lmao even.
> Rust is worse when unsafe is involved.
It's really not.
If the compiler optimization is compliant with the standard, then it is not a compiler bug. rustc developers have the same expectation when Rust developers mess up using unsafe, though the rules might be less defined for Rust than for C and C++, worsening the issue for Rust.
I don't know where you got the idea that "almost no C programs [are] without UB". Did you get it from personal experience working with C, and from having trouble avoiding UB yourself? Unless you have a clear and reliable statistical source or other good source or argument for your claim, I encourage you to rescind it. C++ should in some cases be easier to avoid UB with than C.
> > Rust is worse when unsafe is involved.
> It's really not.
It definitely, very much is. As just some examples among many, consider aliasing and pinning https://lwn.net/Articles/1030517/ .
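A small sketch of the aliasing point (hypothetical code; tools like Miri flag it): the raw-pointer equivalent is legal C, but in Rust, materializing a second live &mut to the same place is undefined behavior even inside unsafe:

fn main() {
    let mut x = 0i32;
    let p = &mut x as *mut i32;
    unsafe {
        let a = &mut *p; // first mutable reference
        let b = &mut *p; // reborrowing invalidates `a` under the aliasing rules
        *b += 1;         // fine: `b` is the live reference
        *a += 1;         // UB: `a` was invalidated when `b` was created
    }
}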
From the fact that a lot of compilers can and do rely on UB to do certain optimizations. If UB wasn't widespread, they wouldn't have those optimization passes. You not knowing how widespread UB is in C and C++ codebases is very telling.
You're however absolutely free to find me one large project that will not trigger "-fsanitize=undefined" for starters. (Generated codebases do not count though.)
> It definitely, very much is. As just some examples among many, consider aliasing and pinning https://lwn.net/Articles/1030517/ .
Difficult to understand or being unsafe does not make unsafe Rust worse than C. It's an absurd claim.
Your understanding of C, C++, and Rust appears severely flawed. As I already wrote, rustc also uses these kinds of optimizations. And the optimizations do not rely on UB being present in a program, but on UB being absent in a program, and it is the programmer's responsibility to ensure the absence of UB, also in Rust.
Do you truly believe that rustc does not rely on the absence of UB to do optimizations?
Are you a student? Have you graduated anything yet?
> You're however absolutely free to find me one large project that will not trigger "-fsanitize=undefined" for starters. (Generated codebases do not count though.)
You made the claim, the burden of proof is on you. Though your understanding appears severely flawed, and you need to fix that understanding first.
> Difficult to understand or being unsafe does not make unsafe Rust worse than C. It's an absurd claim.
Even the Rust community at large agrees that unsafe is more difficult than C and C++. So you are completely wrong, and I am completely right, yet again.
Relying on absence of UB is not the same as relying on existence of UB. I'm not surprised however that you find this difference difficult to grasp.
> You made the claim, the burden of proof is on you. Though your understanding appear severely flawed, and you need to fix that understanding first.
I gave you such a great opportunity to prove me wrong. Why not take it? Surely if what you say is true this should be easy.
You are spouting complete inanity, instead of fixing your understanding of a subject that you obviously do not have a good understanding of and that I have a much better understanding of than you, as you are well aware.
You are making both yourself and your fellow Rust proponents look extremely bad. Please do not burden others with your own lack of competence and knowledge. Especially in a submission that is about a bug caused by Rust software.
This is a stark difference to back in the early post 1.0 days where many high profile crates needed nightly and everyone was experimenting.
Pros and Cons either way for better or worse depending on your perspective.
Personally while I think Rust is a decent language it would not have caught on with younger devs if C/C++ didn't have such a shitty devex that is stuck 30 years in the past.
Younger people will always be more willing to break things, and messing around with ancient and unfriendly build/dev tooling does not attract that demographic, because why waste time fighting the build env instead of actually getting things done?
One day rust will be the same and the process will start again.
It is so much more fun to cargo-download some stuff and build some new shiny Rust-xyz implementation of Z on your Apple Macbook and even get some HN attention just for this. The problem with all the Rust hype is that people convince themselves that they are actually helping the world by being part of a worthwhile cause to rid the world of old languages, while the main effect is that it draws resources away from much more important efforts and places an even higher burden on the ecosystem - making our main problem - sustainable maintenance of free software - even harder.
No it doesn't. What on earth are you talking about?
Sure, some pre-1.0 libraries in Rust land are actually wildly volatile, but I find that's not especially the norm, out of the crates I've used. That said... 0.4 for EIGHT YEARS is also a pretty darn good sign you've solidified the API by now, and should probably just tag a 1.0 finally...
So you seem to be saying "I don't like how Rust libraries communicate their stability." But that's something wholly different from "Ready to make breaking changes on a whim." Yet your commentary doesn't distinguish these concepts and instead conflates them.
And when most library authors communicate "the API is not stabilized, you must be prepared for breaking changes on a whim", then yeah, of course I am going to perceive that as a lack of stability.
Moreover, it's unclear to me if you're aware that, in the Rust ecosystem, 0.x and 0.(x+1) are treated as semver incompatible releases, while 0.x.y and 0.x.(y+1) are treated as semver compatible releases. While the actual semver specification says "Anything MAY change at any time. The public API SHOULD NOT be considered stable." when the major version is 0, this isn't actually true in the Rust crate ecosystem. For example, if you have `log = "0.4"` in your `Cargo.toml`, then running a `cargo update` will only bump you to semver compatible releases without breaking changes (up to a human's ability to adhere to semver).
Stated more succinctly, in the Rust ecosystem, semver breaking changes are communicated by incrementing the leftmost non-zero version component. In other words, you cannot correctly interpret what version numbers mean in the Cargo crate ecosystem using only the official semver specification. You also need to read the Cargo documentation: https://doc.rust-lang.org/cargo/reference/semver.html#change...
(emphasis mine)
> This guide uses the terms “major” and “minor” assuming this relates to a “1.0.0” release or later. Initial development releases starting with “0.y.z” can treat changes in “y” as a major release, and “z” as a minor release. “0.0.z” releases are always major changes. This is because Cargo uses the convention that only changes in the left-most non-zero component are considered incompatible.
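As a hedged illustration using the `semver` crate (version 1.x, which implements the same matching rules Cargo applies): a `0.4` requirement accepts `0.4.y` point releases but rejects `0.5.0`, exactly the left-most non-zero component convention quoted above.

// Add `semver = "1"` to Cargo.toml to run this sketch.
use semver::{Version, VersionReq};

fn main() {
    // `log = "0.4"` in Cargo.toml means `^0.4`:
    let req = VersionReq::parse("^0.4").unwrap();

    // Compatible: patch bumps within 0.4.x
    assert!(req.matches(&Version::parse("0.4.28").unwrap()));

    // Incompatible: 0.5 counts as a semver-major bump, because the
    // leftmost non-zero component (the `4`) changed.
    assert!(!req.matches(&Version::parse("0.5.0").unwrap()));
}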
So I repeat: you are conflating perception with what actually is reality. This isn't to say that perception is meaningless or doesn't matter or isn't a problem in and of itself. But it is distinct from what is actually happening. That is, "the Rust crate ecosystem doesn't use semver version numbers in a way that can be interpreted using only the official semver specification" is a distinct problem from "the Rust crate ecosystem makes a habit of introducing breaking changes on a whim." They are two distinct concerns and you conflating them is extremely confusing and misleading.
I know about the semver exception Rust uses where 0.x is considered a different "major version" to 0.y for the purpose of automatic updates. That's not really relevant. I'm talking about communication between humans, not communication between human and machine. By releasing log 0.4.4 after 0.4.3, you're communicating to the machine that it should be safe to auto-update to the new release, but by keeping the version number 0.x, you're communicating to the human that you still don't promise any kind of API stability.
>>> Rust still has a very "Ready to make breaking changes on a whim"
>>> reputation
>>
>> No it doesn't. What on earth are you talking about?
>
> I like Rust, but almost all libraries I end up using are on some 0.x version...
That initial complaint is talking about Rust being ready to "make breaking changes on a whim." But this is factually not true. That is something different from what you perceive the version numbers to mean. Just because several important ecosystem crates are still on 0.x doesn't actually mean that "Ready to make breaking changes on a whim" is a true statement.

> you're communicating to the human that you still don't promise any kind of API stability.
A 1.x version number doesn't communicate that either. Because nothing is stopping someone from releasing 2.x the next day. And so on. Of course, it may usually be the case that a 1.x means the cadence of semver incompatible releases has decreased in frequency. But in the Rust ecosystem, that is also true of 0.x too.
> I'm talking about communication between humans, not communication between human and machine.
Yes, but communication between humans is distinct from describing Rust as ready to make breaking changes on a whim. You are confusing communication with a description of the actual stability of the Rust ecosystem.
I read the original message as talking about the Rust community broadly, not just the language. You're right that the language itself is pretty stable and doesn't make breaking changes. Regardless, right now, we're clearly talking about the community and libraries, whether that was the original topic or not.
> A 1.x version number doesn't communicate that either. Because nothing is stopping someone from releasing 2.x the next day. And so on.
If a library has been on version 3.x for the past few years, I have some indication that there's some commitment to API stability. If the library recently released version 157.0 and released version 156.0 last week, I have some indication that the library probably doesn't care that much about API stability.
If a library is on version 0.x, it's communicating to me that it's still in an early development phase where they don't care about API stability. It's more like the library where version 157.0 just released than the library which has been on version 3.x for the past few years.
No, I know. I am talking about community/libraries too.
> If a library has been on version 3.x for the past few years, I have some indication that there's some commitment to API stability.
As you also do if a library has been on version `0.2` for the past few years. As is the case for `libc`. Or `0.4` for `log`.
Notice that you aren't actually using just the version number. You are making an inference about release cadence to draw a conclusion. Contrast that with the essential communicative role of semantic versioning, which is that it communicates something about the content of the change: whether it's breaking or not. That truly can be read straight from the version number exclusively. But "stability" or "maturity" cannot be.
Your comment is making my point for me IMO.
If the library has been on 0.2 for a few years, that tells me nothing other than "the developer probably hasn't committed to a stable API yet". It's completely fair for a library on version 0.2.3 to break the API in major ways in version 0.2.4, since it's still in the early development 0.x phase.
What breaking changes has Rust made "on a whim" ?
I don't know about "on a whim", but this isn't far off in regards to breaking compatibility. And it caused some projects, like Nix, a lot of pain.
https://github.com/rust-lang/rust/issues/127343
https://devclass.com/2024/08/19/rust-1-80-0-breaks-existing-...
Probably not the best way to lead, considering that that phrase is the entire root of the disagreement you're chiming in on!
> but this isn't far off in regards to breaking compatibility.
I think it might be worth elaborating on why you think that change "isn't far off" being made "on a whim". At least to me, "on a whim" implies something about intent (or more specifically, the lack thereof) that the existence of negative downstream impacts says nothing about.
If anything, from what I can tell the evidence suggests precisely the opposite - that the breakage wasn't made "on a whim". The change itself [0] doesn't exactly scream "capricious" to me, and the issue was noticed before Rust 1.80.0 released [1]. The libs team discussed said issue before 1.80.0's release [2] and decided (however (un)wisely one may think) that that breakage was acceptable. That there was at least some consideration of the issue basically disqualifies it from being made "on a whim", in my view.
[0]: https://github.com/rust-lang/rust/pull/99969
[1]: https://github.com/rust-lang/rust/issues/127343
[2]: https://github.com/rust-lang/rust/issues/127343#issuecomment...
Your post itself reinforces the OP's claim.
Edit: Seriously. At this point, it seems clear that the culture around Rust, especially as driven by proponents like you, indirectly has a negative effect on both Rust software and on software security & quality overall, as seen in the bug discussed in the OP. Without your kind of post, would Ubuntu have felt less pressured to make the technical management decisions that allowed for the above bug?
> Your post itself reinforces the OP's claim.
Again, I think it might be worth elaborating precisely what you think "on a whim" means. To me (and I would hope anyone else with a reasonable command of English), making a bad decision is not the same thing as making a decision on a whim, and you have provided no reason to believe the described change falls under the latter category instead of the former.
In C and C++ land, if gcc (as a thought experiment) tried breaking backwards compatibility by changing the language, people would be flabbergasted, complain that gcc made a dialect, and switch to Clang or MSVC or fork gcc. But for Rust, Rust developers just have to suck it up if rustc breaks backwards compatibility. Like Dtolnay's comment in the Github issue I linked indicates. If and once gccrs gets running, that might change.
Though I am beginning to worry, for the Rust specification obtained from Ferrocene might be both incomplete and basically fake, and that might make it easier for rustc and gccrs to drift into separate dialects of Rust. That would be horrible for Rust, and, since in my opinion there should preferably be more viable systems-language options, arguably horrible for the software ecosystem as well. I hope that there are plans for robust ways of preventing dialects of Rust.
You're moving the goalposts. Neither the original claim nor your previous comment in this subthread used such vague and weakening qualifiers to "on a whim".
And even those still don't say anything about what exactly you mean by "on a whim" or how precisely that particular change can be described as such, though at this rate I suppose there's not much hope in actually getting an on-point answer.
> the Rust community is willing to break backwards compatibility
Again, the fact that Rust can and will break backwards compatibility is not in dispute. It's specifically the claim that it's done "on a whim" that was the seed of this subthread.
> appear to not only be unwilling to admit the issues
I suggest you read my comment more carefully.
I also challenge you to find anyone who claims that the changes in Rust 1.80.0 did not cause problems.
> but even directly talk around the issues.
Because once again, the existence of breaking changes and/or their negative downstream impact is not what the original comment you replied to was disputing! I'm not sure why this is so hard to understand.
> In C and C++ land, if gcc (as a thought experiment) tried breaking backwards compatibility by changing the language, people would be flabbergasted, complain that gcc made a dialect, and switch to Clang or MSVC or fork gcc.
No need for a thought experiment. Straight from the GCC docs [0]:
> By default, GCC provides some extensions to the C language that, on rare occasions conflict with the C standard.
> The default, if no C language dialect options are given, is -std=gnu23.
> By default, GCC also provides some additional extensions to the C++ language that on rare occasions conflict with the C++ standard.
> The default, if no C++ language dialect options are given, is -std=gnu++17.
Also from the GCC docs [1]:
> The compiler can accept several base standards, such as ‘c90’ or ‘c++98’, and GNU dialects of those standards, such as ‘gnu90’ or ‘gnu++98’.
So not only has GCC "chang[ed] the language" by implementing extensions that can conflict with the C/C++ standards, GCC has its own dialect and uses it by default. And yet there's no major GCC fork and no mass migration to Clang or MSVC specifically because of those extensions.
And it's not like those extensions go unused either; perhaps the most well-known example is Linux, which only officially supported compilation via GCC for a long time precisely because Linux made (and makes!) extensive use of GCC extensions. It was only after a concerted effort to remove some of those GNU-isms and add support for others into Clang that mainline Clang could compile mainline Linux [2].
> I hope that there are plans for robust ways of preventing dialects of Rust.
This is not a realistic option for any language that anyone is free to implement for what I hope are obvious reasons.
[0]: https://gcc.gnu.org/onlinedocs/gcc/Standards.html
[1]: https://gcc.gnu.org/onlinedocs/gcc/C-Dialect-Options.html
Nope, I am not moving the goalposts, as is perfectly clear to you already. You are well aware that I am completely correct and that you are wrong.
> Again, the fact that Rust can and will break backwards compatibility is not in dispute. It's specifically the claim that it's done "on a whim" that was the seed of this subthread.
And you and the other Rust proponents directly talking around it, as you again are doing here, only worsens the situation.
> No need for a thought experiment. Straight from the GCC docs [0]:
Technically correct, but outside of extensions that have to be explicitly enabled, more or less none of that breaks any backwards compatibility. A program written in pure C or C++ ought to behave exactly the same and compile exactly the same under those default dialects. The default dialects amount to more or less a strict superset that behaves the same, like adding support for C++ "//" comments, or backporting newer C standard changes to previous versions. The only extensions that change behavior significantly, rather than being strict supersets with the same behavior, require flags to be enabled.
Thus, yet again, radically different from what the rustc developers did just last year.
Overall, your posts and the posts of your fellow Rust proponents in this submission both worsen the situation for Rust and for software overall regarding compatibility, security and safety, as the bug of the submission indicates. Imagine being so brazen as to double down on a path that arguably led to a very public bug. I do not believe any responsible software company would want you anywhere near its code if it cared about safety and security.
Turns out you have moved them more than once. I wish I didn't have to spell this out for you, but here goes one last attempt...
The original part of awesome_dude's comment that started this subthread:
> Rust still has a very "Ready to make breaking changes on a whim" reputation
Note the existence and wording of the qualifier here. The claim here is not "Ready to make breaking changes", but "Ready to make breaking changes on a whim".
The relevant response from umanwizard:
> What breaking changes has Rust made "on a whim"?
Again, note the existence and wording of the qualifier. The question here is not "What breaking changes has Rust made", but "What breaking changes has Rust made 'on a whim'".
Your first response:
> I don't know about "on a whim", but this isn't far off in regards to breaking compatibility.
This is the first goalpost move. You're not claiming to have an example of a breaking change "on a whim" (in fact, you explicitly distance yourself from such a claim), but instead you say you have an example of a breaking change that "isn't far off" of being "on a whim". Note that this is not the same unadorned "on a whim" qualifier, as it uses the (slightly) weakening and more vague "isn't far off". How far off and in what way is it not far off? You fail to elaborate on both counts.
Your next response:
> Your post strongly reinforces Rust's reputation as a language whose language designers are willing to break compatibility on a whim.
A second goalpost move. You're not using the "isn't far off" qualifier any more, and are instead using the unadorned "on a whim". Again, you fail to elaborate further on this.
And finally:
> closer to "on a whim" than many like
A third goalpost move, with "on a whim" having grown two qualifiers, neither of which have previously appeared in this subthread! Now it's neither "on a whim" nor "isn't far off" "on a whim", but it's now "closer to" "on a whim" "than many like".
How close is "closer to"? Who falls under "many"? How do these describe the example you provide? Who knows!
> And you and the other Rust proponent's directly talking around it, as you again are doing here, only worsens the situation.
It's not clear to me why it's so hard to understand what this subthread was originally about, nor why you seem so insistent on refusing to actually discuss the original topic.
> Technically correct, but outside of extensions that has to be enabled, more or less none of that breaks any backwards compatibility.
This is moving the goalposts yet again. We go from what is in effect "C/C++ compilers would never break backwards compatibility by adding language extensions!" to "You're correct in that they have done it, but it's mostly not a problem".
> The default dialects amount to more or less just a strict superset that behaves the same, like adding support for C++ "//" comments, or backporting newer C standard changes to previous versions. The only extensions that change behavior significantly and are not only strict supersets with same behavior, require flags to be enabled.
Not only does this claim contradict the snippets I quoted earlier, it also contradicts this other snippet from the docs (emphasis added) [0]:
> On the other hand, when a GNU dialect of a standard is specified, all features supported by the compiler are enabled, even when those features change the meaning of the base standard.
And given that GCC defaults to said GNU dialects, that means that non-strict-superset features are enabled by default.
[0]: https://gcc.gnu.org/onlinedocs/gcc/C-Dialect-Options.html
Do you consider "an old program that previously didn't compile now compiles" to be a serious break in backward compatibility? I think they do not, and neither do I.
That's correct. There are not many of those, but they do exist. These generally mean that the GNU dialect gave syntax a meaning back when it was still forbidden in standard C. Then standard C adopted that feature due to it being implemented in a compiler (that's how language evolution should work), but gave it slightly different semantics. Now GCC has the choice between breaking old existing programs, or not exposing standard semantics by default. They solve that by letting the user choose the language version.
An example is arrays of size 0 at the end of a structure. These used to be used to declare arrays whose size can be arbitrarily large, but became obsolete with the introduction of flexible array members in C99. If GCC implemented only standard C, the correct semantics for any array accessed with an index larger than its declared size would be undefined behaviour. But since GCC gave that construct the semantics that flexible array members now have, before flexible array members existed, it chooses to implement those semantics instead, unless you tell it which C standard you want to use.
Actually, due to its use in popular codebases such as the Linux kernel, this semantics is even applied (based on a heuristic) to trailing arrays with sizes larger than zero.
From https://gcc.gnu.org/onlinedocs/gcc/Zero-Length.html :
> In the absence of the zero-length array extension, in ISO C90 the contents array in the example above would typically be declared to have a single element. Unlike a zero-length array which only contributes to the size of the enclosing structure for the purposes of alignment, a one-element array always occupies at least as much space as a single object of the type. Although using one-element arrays this way is discouraged, GCC handles accesses to trailing one-element array members analogously to zero-length arrays.
(With GNU projects, when you have questions, the best source is the official docs themselves. They are stellar, and are even completely available offline on your computer in the interactive documentation system Info.)
The other extensions are in the sibling chapters, e.g. https://gcc.gnu.org/onlinedocs/gcc/Syntax-Extensions.html or https://gcc.gnu.org/onlinedocs/gcc/Semantic-Extensions.html . One that I quite like is https://gcc.gnu.org/onlinedocs/gcc/Unnamed-Fields.html .
Are you on an OS where using the GNU Info system is an option? I quite like it. Unfamiliar people are often deterred from using it, either because it is in the terminal, or because they think they are looking at a pager. If it's only the latter preventing you, keep in mind that this is not in fact a simple paged document, but an interactive hypertext system that predates the WWW. Documents are generally structured as a tree.

Use the normal cursor movement, Enter to follow links, p for previous node, n for next node, u / backspace for up/parent node; / works for text search, and i searches in the index. Use info info when you want to know more. Pressing h for help also works. (I just discovered that the behaviour of h depends on your terminal size :-) .)

When you look at the GNU onlinedocs, you are looking at an HTML version of that Info document. Using Info directly is nicer, since it has native support for jumping around the doc tree, and instead of relying on an external entity (like Google) to point you to the node that contains your information (often bringing you to another version or document entirely, which can lead to confusion), you can use the built-in index, which is maintained by the document authors, so it will be accurate.
GNU Info is in my opinion the best and fastest way to access documentation that is more than a simple reference sheet, when you don't object to leaving the Web browser. It even has C tutorials and all, completely offline.
> Are you on an OS, where using the GNU Info system is an option?
Technically yes, but I admittedly have basically zero experience with using it.
> Using Info directly is nicer, since it has native support for jumping in the doc tree and instead of relying on an external entity (like Google) to point you to the node that contains your information (Often resulting in bringing you at another version or document entirely, which can lead to confusion.) you can use the built-in index, which is maintained by the document authors, so it will be accurate.
I'm not entirely confident about how helpful it'd be for someone who is less familiar with the subject material like me as opposed to someone who has a general idea of what they're looking for, but I suppose I won't know until I try it.
[0]: https://gcc.gnu.org/onlinedocs/gcc/C-Extensions.html
[1]: https://gcc.gnu.org/onlinedocs/gcc/C_002b_002b-Extensions.ht...
More than anything, the Rust community is hyper-fixated on stability and correctness. It is very much the antithesis to “move fast and break things”.
This is incorrect.
https://devclass.com/2024/08/19/rust-1-80-0-breaks-existing-...
Cargo always picks the newest version of a dependency, even if that version is incompatible with the version of Rust you have installed.
You're like "build this please", and it's like "hey I helpfully upgraded this module! oh and you can't build this at all, your compiler is too old granddad"
They finally addressed this bug -- optionally (the default is still to break the build at the slightest provocation) -- in January this year (which of course, requires you upgrade your compiler again to at least that)
https://blog.rust-lang.org/2025/01/09/Rust-1.84.0/#cargo-con...
What a bunch of shiny-shiny chasing idiots with a brittle build system. It's designed to ratchet forward your dependencies and throw new bugs and less-well-tested code at you. That's absolutely exhausting. I'm not your guinea pig, I want to build reliable, working systems.
gcc -std=c89 for me please.
Also picking C89 over any later iteration is bananas.
PKG_CHECK_MODULES([libfoo], [libfoo >= 1.2.3])
AC_CHECK_HEADER([foo.h], ,[AC_MSG_ERROR([Cannot find foo header])])
AC_CHECK_LIB([foo],[foo_open], ,[AC_MSG_ERROR([Cannot find foo library])])
There are additionally versioning standards for shared objects, so you can have two incompatible versions of a library live side-by-side on a system, and binaries can link to the one they're compatible with.

> PKG_CHECK_MODULES([libfoo], [libfoo >= 1.2.3])
This also picks the newest version that might be incompatible with your compiler, if the newer version uses a newer language standard.
> You're like "build this please", and it's like "hey I helpfully upgraded this module! oh and you can't build this at all, your compiler is too old granddad"
Also possible in the case of your example.
> What a bunch of shiny-shiny chasing idiots with a brittle build system.
Autoconf as an example of non-brittle build system? Laughable at best.
> This also picks the newest version that might be incompatible with your compiler, if the newer version uses a newer language standard.
It doesn't, it just verifies what the user has already installed (with apt/yum/dnf) is suitable. It certainly doesn't connect to the network and go looking for trouble.
The onus is on library authors to write standard-agnostic, compiler-agnostic headers, and that's what they do:
#if __STDC_VERSION__ >= 199901L
/* C99 definitions */
#else
/* pre-C99 definitions */
#endif
For linking, shared objects have their own versioning to allow backwards-incompatible versions to exist simultaneously (libfoo.so.1, libfoo.so.2).

No. You set a bar for Cargo that the solution you picked does not reach either.
> It doesn't, it just verifies what the user has already installed (with apt/yum/dnf) is suitable.
There's no guarantee that that is compatible with your project though. You might be extra unlucky and have to bring in your own copy of an older version. Plus their dependencies.
Perfect example of the pile of flaming garbage that is C dependency "management". We haven't even mentioned cross-compiling! It multiplies all this C pain a hundredfold.
> The onus is on library authors to write standard-agnostic, compiler-agnostic headers, and that's what they do:
You're assuming that the used feature can be represented in older language standards. If it can't, you're forced to at least have that newer compiler on your system.
> [...] standard-agnostic, compiler-agnostic headers [...]

> For linking, shared objects have their [...]
Compiler-agnostic headers that get compiled to compiler-specific calling conventions. If I recall correctly, GCC basically dictates it on Linux. Anyways, I digress.
> shared objects have their own versioning to allow backwards-incompatible versions to exist simultaneously (libfoo.so.1, libfoo.so.2).
Oooh, that one is fun. Now you have to hope that nothing was altered when that old version got built for that new distro. No feature flag changed, no glibc-introduced functional change.
> hey I helpfully upgraded this module! oh and you can't build this at all, your compiler is too old granddad
If we look at your initial example again, Cargo followed your project's build instructions exactly and unfortunately pulled in a package that is for some reason incompatible with your current compiler version. To fix this you have the ability to just specify an older version of the crate and carry on.
Looking at your C example, well, I described what you might have to do and how much manual effort that can be. Being forced to use a newer compiler can be very tedious. Be it due to bugs, stricter standards adherence or just the fact that you have to do it.
In the end, it's not a fair fight comparing dependency management between Rust and C. C loses by all reasonable metrics.
I listed a specific thing -- that Rust's ecosystem grinds people towards newness, even if it goes so far as to actually break things. It's baked into the design.
I don't care that it's hypothetically possible for that to happen with C, I care that practically, I've never seen it happen.
Whereas, the single piece of software I build that uses Rust, _without changing anything_ (already built before, no source changes, no compiler changes, no system changes) -- cargo install goes off to the fucking internet, finds newer packages, downloads them, and tells me the software it could build last week can't be built any more. What. The. Fuck. Cargo, I didn't ask you to fuck up my shit - but you did it anyway. Make has never done that to me, nor has autoconf.
Show me a C environment that does that, and I'll advise you to throw it out the window and get something better.
There have been about 100 language versions of Rust in the past 10 years. There have been 7 language versions of C in the past 40. They are a world apart, and I far prefer the C world. C programmers see very little reason to adopt "newer" C language editions.
It's like a Python programmer, on a permanent rewrite treadmill because the Python team regularly abandon Python 3.<early version> and introduce Python 3.<new version> with new features that you can't use on earlier Python versions, asking how a Perl programmer copes. The Perl programmer reminds them that the one Perl binary supports and runs every version of Perl from 5.8 onwards, simultaneously, and the idea of making all the developers churn their code over and over again to keep up with latest versions is madness, the most important thing is to make sure old code keeps running without a single change, forever. The two people are simply on different planets.
I don't think your anecdotal experience is enough to redeem the disarray that is C dependency management. It's nice to pretend though.
> and tells me the software it could build last week can't be build any more. What. The. Fuck. Cargo, I didn't ask you to fuck up my shit - but you did it anyway. Make has never done that to me, nor has autoconf.
If you didn't get my point in the previous comment, let me put it more frankly - it is your skill issue if you aren't pinning your crates to a specific version but depend on them remaining constant. This is not Cargo's fault.
> Make has never done that to me, nor has autoconf.
Yeah, because they basically guarantee nothing nor allow working around any of the potential issues I've already described.
But you do get to wait for the thousandth time for it to check the size of some types. All those checks are a literal proof of how horrible the ecosystem is.
> There have been about 100 language versions of Rust in the past 10 years
There's actually four editions and they're all backwards-compatible.
> C programmers see very little reason to adopt "newer" C language editions.
Should've stopped at the word "reason".
For C++, there is vcpkg and Conan. While they are overall significantly or much worse options than what Rust offers according to many, in large part due to C++'s cruft and backwards compatibility, they do exist.
But I asked about C.
So after all these decades there's maybe something vaguely tolerable that's also certainly less mature than what even Rust has. Congrats.
https://lwn.net/Articles/1035890/
They did not have an easy time including Rust software, as I read it. Maybe just initial woes, but I have also read other descriptions of distribution maintainers having trouble integrating Rust software. Dynamic linking complaints? I have not looked into it.
dnf or apt, depending on if Fedora/EL or Debian...
I suppose I missed the important case of Yocto though
Sorry, but your argument is incorrect.
Somebody is attempting to characterize the Rust community in general as being similar to other programming communities that value velocity over stability, such as the JS ecosystem and others.
I’m pointing out that incidents such as this are incredibly rare, and extremely controversial within the community, precisely because people care much more about stability than velocity.
Indeed, the design of the Rust language itself is in so many ways hyper-fixated on correctness and stability - it’s the entire raison d’etre of the language - and this is reflected in the culture.
And your post is itself a part of the Rust community, and it is itself an argument against what you claim in it. If you cannot or will not own up to the 1.80 time-crate debacle, or proactively mention it as a black mark that weighs on Rust's conscience, acknowledging that it will take time to rebuild trust and confidence in Rust's stability because of it, then your priorities, understood as the Rust community's priorities, are clear: in practice they do not lie with stability, safety, and security, nor with being forthcoming.
Good luck achieving anything of long-term value this way.