If 32-bit x86 support can be dropped for pragmatic reasons, so can these architectures. If people really, really want to preserve these architectures as ongoing platforms for the future, they need to step up and create a backend for the Rust toolchain that supports them.
There are other languages that are considered acceptable, even desirable, languages to write applications in (e.g., Java, PHP, Go), but Rust is really the first language that competes closely enough with C for people to contemplate adding it to the base-system-languages list. I'd say only Go has ever come close to approaching that threshold, but I've never seen it contemplated for something like systemd.
Interestingly, I wonder if the debates over the addition of C++, Python, and Perl to the base system language set were this acrimonious.
I think any projects that are run by people who see themselves as "X-people" (like Python-people, Perl-people) always have a bit of an "ick" reaction to new languages being added to projects they might see as part of that language's community.
So say you're a C++ developer, have contributed to APT over the years, see all of it linked to the C++ community which you are part of too, and someone wants to start migrating parts of it to Rust/$NewLang. I think for these people it might sometimes affect more than just the code; it might even be "attacking" (strong word perhaps) their sense of identity, for better or worse.
If APT were a hardcore C++ project surely we'd have like adopted namespaces everywhere by now.
I would say that Pythonistas are quite accustomed to "(other) languages being added" to the Python ecosystem. After all, NumPy relies on Fortran, as well as C.
Asserting that kind of "ownership" over code seems rather distasteful to me. Maybe there would be less acrimony if developers got paid for it somehow.
Some communities indeed are better at embracing multiple languages; Python, JavaScript and Java/JVM come to mind, where it isn't uncommon to call out to other languages.
How is language relevant here? If someone just rewrote it in the same language instead of a different one, do you feel the reaction would be significantly better?
Rust has been the tool of choice for stealing GPL3 open source projects that some people have spent all their free time on at some point in their lives.
In your view, was writing a BIOS re-implementation from scratch "stealing" from IBM? Are all of the vaguely Unix-compatible operating systems "stealing" from Unix (ATT/Bell)? Why is the "free time" of the original developer more sacrosanct than the "free time" of the re-implementer?
> Why is the "free time" of the original developer more sacrosanct than the "free time" of the re-implementer?
This has nothing to do with free time. It has to do with the fact that the former actually went through a ton of pain to get the project from its original conception and past bugs & design flaws into its current useful state, and the latter is merely translating the solution right in front of them. And not only is the latter going to deprive the former of the ability to even say "this is my project" (despite them having spent years or decades on its design etc.), but they're also probably going to be able to take the whole thing in a different direction, effectively leaving the original author behind.
Whether you feel this is a good or a bad thing for society aside, it should be obvious why this could be genuinely upsetting in some situations.
I happen to like living in a world where Pipewire can just be a drop-in replacement for PulseAudio that can implement the same APIs and do everything that PulseAudio does but better. Or where I can play my games on Linux because Valve re-implemented DirectX and all the Windows platform libraries. Or where there are half a dozen "vim" clones to pick from if I want to. If there are broad, tangible benefits to rewriting a piece of software, then it ought to be rewritten, hurt feelings notwithstanding.
I don't really understand the argument that the original author gets deprived of anything by a piece of free software being rewritten, certainly not credit. Obviously stealing the name would be a dick move and maybe open you up to legal action but that's not really what happens 99.99% of the time.
This is not like that though; moving from a pro-user license to a pro-business license is the reason for being upset, not just losing control over the product.
With the move, any future improvements run the very real risk of being extended then closed off by tech companies.
It's simply hubris for an entire community of developers to look at an existing working product that got popular due to the goal of being pro-user, then say to themselves "we can replace it with this new thing I created, but in order for my new thing to gain traction it must be immediately favourable to big business to close off".
If you view it in that light, then, yeah, you can understand why the upset people are upset, even if you don't agree with them.
> Many of them were upset enough to engage in decade-long lawsuits. But we'd ultimately be in a much worse place if "avoiding making original creators upset" was the primary factor in development, over things like compatibility, bugs, performance, security etc.
Original creators did frequently get upset, but "some individual forked the code or cloned the product" is very different from "an entire community celebrating a gradual but persistent and constant effort by the same community to move the ecosystem away from a pro-user license".
I hope this gives you some insight into why people are upset, even if you don't agree with them. Most of them aren't articulating it like I did.
[EDIT: I'm thinking of this like an Overton-Window equivalent, shifting from pro-user to pro-business. It seems like an accurate analogy: some people don't want this shift to happen while others are actively pushing for it. This is obviously going to result in conflict.]
Kinda like how the purpose of creating LLVM was to create a more extensible compiler framework, not to create a permissively licensed compiler. As it happens the license of LLVM has not really led to the proprietary hellscape that some people suggested it would, and in any case the technical benefits vastly outweigh the drawbacks. Companies like Apple that do have proprietary compilers based on LLVM are difficult to describe as "freeloaders" in practice because they contribute so much back to the project.
I didn't say it was.
> The purpose of rewriting it is to provide a modernized alternative with some benefits from the choice of language
I did not contend that either.
> and the choice of license is incidental.
This I disagree with - the license choice is not incidental; it is foundational to gain popularity in a hurry, to gain widespread adoption at corporates.
The rewriter's intention is to gain popularity over the incumbent software; using a pro-user license does not gain popularity for all those pro-business use-cases.
The license switch from pro-user to pro-business is not incidental. It's deliberate, and it's to achieve the stated goal of replacing the existing software.
This is one place where I feel that the ends do not justify the means.
Moreover, often these folks have rare expertise in those subjects, and/or rare willingness to work on them as open side projects. If you tell them this is just something they have to deal with, I wouldn't be surprised if you also remove some of the incentives folks have to keep contributing to OSS voluntarily in the first place. Very little money, and massive risk of losing control, recognition, etc. even if you "succeed"... how many people are still going to bother? And then you get shocked, shocked that OSS is losing to proprietary software and dying.
Debian has ongoing efforts to make many shell scripts (like postinst scripts in packages etc.) non-bash-specific.
A minimal Debian installation doesn't contain bash, but rather dash, which doesn't support bash extensions.
Please don't make up wrong facts that would be trivial to check first.
All minimal Debian installations include bash as it is an essential package. Where essential is used in the sense of https://www.debian.org/doc/debian-policy/ch-binary.html#esse...
For clarity, 'sh' is what is softlinked to dash. Not bash.
I don't know if you've tried to get someone else's Python running recently, but it has devolved into a disaster effectively requiring containers to accurately replicate the exact environment it was written in.
Core system applications should be binaries that run with absolutely minimal dependencies outside of default system-wide libraries. Heck, I would go as far as to say applications in the critical path to repairing a system (like apt) should be statically linked since we no longer live in a storage constrained world.
Please show me a project where you believe you "effectively require containers" just to run the code, and I will do my best to refute that.
> since we no longer live in a storage constrained world.
I think you do care about the storage use if you're complaining about containers.
And I definitely care, on principle. It adds up.
For reasons I can only assume have to do with poorly configured CI, pip gets downloaded billions of times annually (https://pypistats.org/packages/pip), and I assume those files get unpacked and copied all the time since there would be no good reason to use uv to install pip. That's dozens of petabytes of disk I/O.
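(For rough scale, and these are my own back-of-the-envelope numbers: assuming pip's roughly 2 MB wheel unpacks to something like 10 MB of files, a couple of billion downloads a year times ~10 MB is on the order of 20 PB, before even counting the copies made into each new virtual environment.)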
I guess GP meant "containers" broadly, including things like pipx, venv, or uv. Those are, effectively, required since PEP 668:
https://stackoverflow.com/questions/75608323/how-do-i-solve-...
This statement makes no sense. First off, those are three separate tools, which do entirely different things.
The sort of "container" you seem to have in mind is a virtual environment. The standard library `venv` module provides the base-line support to create them. But there is really hardly anything to them. The required components are literally a symlink to Python, a brief folder hierarchy, and a five-or-so-line config file. Pipx and uv are (among other things) managers for these environments (which manage them for different use cases; pipx is essentially an end-user tool).
Virtual environments are nowhere near a proper "container" in terms of either complexity or overhead. There are people out there effectively simulating a whole new OS installation (and more) just to run some code (granted this is often important for security reasons, since some of the code running might not be fully trusted). A virtual environment is... just a place to install dependencies (and they do after all have to go somewhere), and a scheme for selecting which of the dependencies on local storage should be visible to the current process (and for allowing the process to find them).
They are all various attempts at solving the same fundamental problem, which I broadly referred to as containerization (dependency isolation between applications). I avoided using the term "virtual environment" because I was not referring to venv exclusively.
Of all the languages, python in the base system has been an unmitigated garbage fire.
It was not their action, nor is it hacked, nor is the message contained within pip.
The system works by pip voluntarily recognizing a marker file, the meaning of which was defined by https://peps.python.org/pep-0668/ — which was the joint effort of people representing multiple Linux distros, pip, and Python itself. (Many other tools ignore the system Python environment entirely, as mine will by default.)
Further, none of this causes containers to be necessary for installing ordinary projects.
Further, it is not a problem unique to Python. The distro simply can't package all the Python software out there available for download; it's completely fair that people who use the Python-native packaging system should be expected not to interfere with a system package manager that doesn't understand that system. Especially when the distro wants to create its tools in Python.
You only notice it with Python because distros aren't coming with JavaScript, Ruby etc. pre-installed in order to support the system.
The fact that users have to keep up with multiple PEPs, error messages, --single-version-externally-managed, --break-system-packages, config files everywhere, stealth packages in .local and uv to paper over all of this shows that Python packaging is completely broken.
There's still quite a bit you can do with the "system Python". Mine includes NumPy, bindings for GTK, QT5 and QT6, Freetype, PIL....
> insofar Python allows that with its __pycache__ spam
This is, to my understanding, precisely why the standard library is pre-compiled during installation (when the process already has sudo rights, and can therefore create the `__pycache__` folders in those locations). This leverages the standard library `compileall` module — from the Makefile:
   @ # Build PYC files for the 3 optimization levels (0, 1, 2)
   -PYTHONPATH=$(DESTDIR)$(LIBDEST) $(RUNSHARED) \
    $(PYTHON_FOR_BUILD) -Wi $(DESTDIR)$(LIBDEST)/compileall.py \
    -o 0 -o 1 -o 2 $(COMPILEALL_OPTS) -d $(LIBDEST) -f \
    -x 'bad_coding|badsyntax|site-packages' \
    $(DESTDIR)$(LIBDEST)
> The fact that users have to keep up with multiple PEPs, error messages, --single-version-externally-managed, --break-system-packages, config files everywhere, stealth packages in .local and uv to paper over all of this shows that Python packaging is completely broken.

Please do not spread FUD.
They don't have to do any of that. All they have to do is make a virtual environment, which can have any name, and the creation of which is explicitly supported by the standard library. Further, reading the PEPs is completely irrelevant to end users. They only describe the motivation for changes like --break-system-packages. Developers may care about PEPs, but they can get a better summary of the necessary information from https://packaging.python.org ; and none of the problems there have anything to do with Linux system Python environments. The config files that developers care about are at the project root.
Today, on any Debian system, you can install an up-to-date user-level copy of yt-dlp (for example) like so, among many other options:
  sudo apt install pipx
  pipx install yt-dlp
You only have to know how one of many options works, in order to get a working system.

Okay so to create a five line script I have to make a virtual environment. Then I have to activate and deactivate it whenever using it. And I have to remember to update the dependencies regularly. For my five line script.
Seems to me the companies managing mloc-codebases pushed their tradeoffs on everyone else.
> Okay so to create a five line script... For my five line script.
I can guarantee that your "five line script" simply does not have the mess of dependencies you imagine it to have. I've had projects run thousands of lines using nothing but the standard library before.
> Then I have to activate and deactivate it whenever using it.
No, you do not. Activation scripts exist as an optional convenience because the original author of the third-party `virtualenv` liked that design. They just manipulate some environment variables, and normally the only relevant one is PATH. Which is to say, "activation" works by putting the environment's path to binaries at the front of the list. You can equally well just give the path to them explicitly. Or symlink them from somewhere more convenient for you (like pipx already does for you automatically).
> And I have to remember to update the dependencies regularly.
No, you do not in general. No more so than for any other software.
Programs do not stop working because of the time elapsed since they were written. They stop working because the world around them changes. For many projects this is not a real concern. (Did you know there is tons of software out there that doesn't require an Internet connection to run? So it is automatically invulnerable to web sites changing their APIs, for example.) You don't have to remember to keep on top of that; when it stops working, you check if an update resolves the problem.
If your concern is with getting security updates (for free, applying to libraries you also got for free, all purely on the basis of the good will of others) for your dependencies, that is ultimately a consequence of your choice to have those dependencies. That's the same in every language that offers a "package ecosystem".
This also, er, has nothing to do with virtual environments.
> Seems to me the companies managing mloc-codebases pushed their tradeoffs on everyone else.
Not at all. They are the ones running into the biggest problems. They are the ones who have created, or leveraged, massive automation systems for containers, virtualization etc. — and probably some of it is grossly unnecessary, but they aren't putting in the time to think about the problem clearly.
And now we have a world where pip gets downloaded from PyPI literally billions of times a year.
If I need a python script, I have to arrange for all the RUN lines to live inside a virtual environment inside the container.
My understanding of the reasoning is that python-based system packages having dependencies managed through pip/whatever present a system stability risk. So they chose this more conservative route, as is their MO.
Honestly if there is one distribution to expect those kinds of shennanigans on it would be Debian. I don't know how anybody chooses to use that distro without adding a bunch of APT sources and a language version manager.
The tricky part is when "users" start using pip to install something because someone told them to.
    > indeed, there's quite a few commenters here who I think would be surprised to learn that not only is C++ on this list, but that it's been on it for at least 25 years
... isn't so surprising.

Pardon?

  $ file `which apt`
  /usr/local/bin/apt: Python script, ASCII text executable

It looks like in trixie, it's libpam-modules that pulls in debconf, which is written in Perl. And libpam-modules is required by util-linux, which is Essential: yes.

  $ file `which apt`
  /usr/bin/apt: ELF 64-bit LSB pie executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, BuildID[sha1]=157631f2617f73dee730273c7c598fd4d17b7284, for GNU/Linux 3.2.0, stripped
You can of course add your own "apt" binary in /usr/local/bin/apt which can be written in any language you like, say COBOL, Java, Common Lisp or Python.
I’ve noticed a lot of that in base OS systems
It’s a curiosity more than anything though
> Critical infrastructure still written in C - particularly code that parses data from untrusted sources - is technical debt that is only going to get worse over time.
But hasn't all that foundational code been stable and wrung out already over the last 30+ years? The .tar and .ar file formats are both from the 70s; what new benefits will users or developers gain from that thoroughly battle-tested code being thrown out and rewritten in a new language with a whole new set of compatibility issues and bugs?
After all the library wasn't designed around safety, we assumed the .debs you pass to it are trusted in some way - because you publish them to your repository or you are about to install them so they have root maintainer scripts anyway.
But as stuff like hosting sites and PPAs came up, we have operators publishing debs for untrusted users, and hence suddenly there was a security boundary of sorts and these bugs became problematic.
Of course memory safety here is only one concern, if you have say one process publishing repos for multiple users, panics can also cause a denial of service, but it's a step forward from potential code execution exploits.
I anticipate the rewrites to be 1 to 1 as close as possible to avoid introducing bugs, but then adding actual unit tests to them.
Not necessarily. The "HTTP signature verification code" sounds like it's invoking cryptography, and the sense I've had from watching the people who maintain cryptographic libraries is that the "foundational code" is the sort of stuff you should run away screaming from. In general, it seems to me to be the cryptography folks who have beat the drum hardest for moving to Rust.
As for other kind of parsing code, the various archive file formats aren't exactly evolving, so there's little reason to update them. On the other hand, this is exactly the kind of space where there's critical infrastructure that has probably had very little investment in adversarial testing either in the past or present, and so it's not clear that their age has actually led to security-critical bugs being shaken out. Much as how OpenSSL had a trivially-exploitable, high criticality exploit for two years before anybody noticed.
You don't want the core cryptography implemented in Rust for Rust's sake when there's a formally verified Assembler version next to it. Formally verified _always_ beats anything else.
The core cryptographic algorithms, IMHO, should be written in a dedicated language for writing cryptographic algorithms so that they can get formally-verified constant-time assembly out of it without having to complain to us compiler writers that we keep figuring out how to deobfuscate their branches.
In contrast, a Rust implementation can be compiled for many architectures easily, and is intrinsically safer than a C version.
Plus cryptography and PKI are constantly evolving, so they can't benefit from the decades-old trusted implementations.
Formally verified in an obscure language where it's difficult to find maintainers does not beat something written in a more "popular" language, even if it hasn't been formally verified (yet?).
And these days I would (unfortunately) consider assembly as an "obscure language".
(At any rate, I assume Rust versions of cryptographic primitives will still have some inline assembly to optimize for different platforms, or, at the very least, make use of compile intrinsics, which are safer than assembly, but still not fully safe.)
Take BLAKE3 as an example. There's asm for the critical bits, but the structural parts that are going to be read most often are written in rust like the reference impl.
It seems that for reasons I don't understand this idea isn't popular and people really like hand rolling assembly.
https://github.com/PLSysSec/FaCT
They struggle to guarantee constant time for subroutines within a non-constant time application, which is how most people want to use cryptography.
(New cryptographic software can also be developed by all sorts of people. In this case I'm not familiar, but we do know that GnuPG worked for the highest profile case imaginable.)
Ironically, it was the urge not to roll your own cryptography that got people caught in GPG-related security vulnerabilities.
Additionally, the fact that this comes across as so abrasive and off-putting is on brand for online Rust evangelicalism.
No: a little less than 5 years ago there was CVE-2020-27350, a memory safety bug in the tar/ar implementations.
Seeing this tone-deaf message from an Ubuntu employee would be funny if I didn’t actually use Ubuntu. Looks like I have to correct that…
In all seriousness though, let me assure you that I plan to take a very considerate approach to Rust in APT. A significant benefit of doing Rust in APT rather than rewriting APT from scratch in Rust means that we can avoid redoing all our past mistakes because we can look at our own code and translate it directly.
https://github.com/keepassxreboot/keepassxc/issues/10725#iss...
- It's not an option for debian core infrastructure until it supports at least the same platforms debian does (arm, riscv, etc) and it currently only supports x86_64.
- It doesn't turn C into a modern language; since it looks like there's active development here, getting the productivity benefits of moving away from C is likely still worth it.
But even so - what price correct & secure software? We all lost a tonne of performance overnight when we applied the first Meltdown and Spectre workarounds. This doesn't seem much different.
> We have an alternative that isn't 10x slower, and comes with many other benefits
Anyone involved with development around a fruity company would say Swift ;)
If all the entry-level jobs are C or C++, do you think companies would have a hard time filling them? Would the unemployed new graduates really shun gainful employment if Rust wasn't part of the equation?
Meanwhile, hiring managers left and right are reporting that within hours of a job being posted, they are flooded with hundreds of applications. And you can't find a single person because of the programming language of your stack? And to remedy this, you're going to rewrite your stack in an unproven language? Have you considered that if you can't find anyone that it might not be a programming language or tech stack problem?
A lot of the C code used in python is calling out to old, battle tested and niche libraries so it is unlikely that someone is going to replace those any time soon but Rust is definitely increasing as time goes on for greenfield work.
From experience with this type of code you typically end up with a load of functions that take in a numpy array and its length/dimensions to a C function that works on that array in place or an output array that was also supplied. In terms of getting this wrong, it’s usually a crash caused by out of bounds memory access which would still be a runtime crash in Rust. So I’m not sure there’s a massive benefit for these types of code other than the fun of learning Rust. Other than that, you’re typically writing C/C++ to interface with C and Fortran libraries that are really battle tested, and for which it will take decades for Rust to have equivalents. So moving to Rust will just cause you to have lots of unsafe statements - not a bad thing necessarily if you are doing a lot of work at the C level in existing code but less of a benefit if you are doing a straight wrap of a library.
On the flip side, things on the web side of Python like uWSGI which is written in C are important for the security aspect but they’re a very small part of the Python ecosystem.
All (current) languages eventually have a compiler/runtime that is memory unsafe. This is basically fine because it's a tiny amount of surface area (relative to the amount of code that uses it) and it exists in a way that the input to is relatively benign so there's enough eyes/time/... to find bugs.
There's also nothing stopping you from re-implementing python/ruby/... in a safer way once that becomes the low hanging fruit to improve computer reliability.
How many type confusion 0 days and memory safety issues have we had in dynamic language engines again? I've really lost count.
My impression is that for the trusted code untrusted input case it hasn't been that many, but I could be wrong.
How is "type confusion" a security issue?
I don't know about that. Look at the code for the COSMIC desktop environment's clock widget (the cosmic-applet-time directory under <https://github.com/pop-os/cosmic-applets>), for example. It's pretty much unreadable compared to a C code base of similar complexity (GNU coreutils, for example: <https://savannah.gnu.org/projects/coreutils/>).
as in that "isn't the style of code you are used too"
I don't think "how well people not familiar with you language can read it" is a relevant metric for most languages.
Also IMHO while C feels readable it isn't when it matters. Because it very often just doesn't include information you need when reading. Like looking at function header doesn't tell you if a ptr is nullable, or if a mut ptr is a changeable input value or instead is a out ptr. which is supposed to point to unitialized memory and if there is an error how that affects the state of the validity of any mutable ptrs passed in. To just name some example (lets not even get started about pre processor macros pretending to be C functions). In conclusion while C seems nice to read it is IMHO often a painful experience to "properly" read it e.g. in context of a code review.
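A hedged sketch of the contrast, with made-up names rather than code from any project discussed here: the questions a C header like `int parse(struct config *out, const char *src)` leaves to comments or convention (can `src` be NULL? does `out` get touched on failure?) end up in the Rust signature itself.

  // Hypothetical types, for illustration only.
  #[derive(Debug)]
  struct Config { retries: u32 }
  #[derive(Debug)]
  struct ParseError;

  fn parse(src: Option<&str>) -> Result<Config, ParseError> {
      let src = src.ok_or(ParseError)?;            // nullability is explicit in the type
      let retries = src.trim().parse().map_err(|_| ParseError)?;
      Ok(Config { retries })                       // a Config exists only on success
  }

  fn main() {
      println!("{:?}", parse(Some("3")));  // Ok(Config { retries: 3 })
      println!("{:?}", parse(None));       // Err(ParseError)
  }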
As a side note: The seemingly verbose syntax of e.g. `chrono::DateTime` comes from there being 2 DateTime types in use in the module, one from the internationalization library (icu) and one from a generic time library (chrono). Same for Sender, etc. That isn't a super common issue, but happens sometimes.
If I wanted to tweak the Rust project, I’d feel pretty confident I was calling the right things with the right params.
Java can potentially have the same problem. But because everyone uses an IDE and because it's rarely really an issue, everyone will simply import `Baz` rather than worry about the Foo::Baz and Bat::Baz collision. It does happen in java code, but I can't stress how infrequently it's actually a problem.
In java, I rarely pay attention to the `import` section (and I know most devs at my company).
You can look up `using namespace std;` in google and you'll find a lot of articles saying it's a bad practice in C++. Everyone recommends writing the full `std::cout` rather than `cout`.
I do think it’s down to personal preference. With the fully qualified names, I can look at the screen and follow the flow without having to mouse over the various names in play. For that matter, I could print it out if I wanted to and still have all the information.
I don’t think you’re objectively wrong. It’s more that we have different approaches to managing the complexity when it gets hairy.
Most of the code in that module is dedicated to the gui maintenance. The parts that do deal with time are perfectly legible.
I disagree. Both seem perfectly readable, assuming you know their preferred coding styles. As a non-C programmer, I absolutely despise running into #ifndef SOME_OBSCURE_NAME and `while (n) { if (g) {` but C (and in the latter case Go) programmers seem to love that style.
Comparing a bunch of small, barely integrated command line programs to a UI + calendar widget doesn't seem "of similar complexity" to me. Looking at a C clock widget (https://gitlab.freedesktop.org/xorg/app/xclock/-/blob/master...) the difference seems pretty minimal to me. Of course, the XClock code doesn't deal with calendars, so you have to imagine the extra UI code for that too.
I beg to differ.
The easiest way to see this is in US locales, which use 12-hour clocks in GNU 'date' but not other implementations:
  $ date -d '13:00'
  Sat Nov  1 01:00:00 PM PDT 2025
  $ uu_date -d '13:00'
  Sat Nov  1 13:00:00 2025
I added a test case for that recently, since it is a nice usability feature [1].

[1] https://github.com/coreutils/coreutils/commit/1066d442c2c023...
And the answer to "why now" is quite simple - Because of the whole Rust in kernel debate, people started scrutinizing the situation.
People who become aware of something only when it’s being used by something huge also aren’t early adopters either. Rust has already been in the Windows kernel for years at this point, with none of this consternation.
Can you provide some evidence to support this? There’s a large body of evidence to the contrary, e.g. from Chrome[1].
> But we have tools to prevent that. The new security issues are supply chain attacks.
Speaking as a “supply chain security” person, this doesn’t really hold water. Supply chain attacks include the risk of memory unsafety lurking in complex dependency trees; it’s not an either-or.
[1]: https://www.chromium.org/Home/chromium-security/memory-safet...
Does it audit third-party code for you?
The average C project has at most a handful of other C dependencies. The average Rust, Go or NodeJS project? A couple hundred.
Ironically, because dependency management is so easy in modern languages, people started adding a lot of dependencies everywhere. Need a leftpad? Just add one line in some yaml file or an "Alt-Enter" in an IDE. Done.
In C? That is a lot more work. If you do that, you do it for stuff you absolutely need for your project. Because it is not easy. In all likelihood you write that stuff yourself.
I think the problem started with the idea of language-level package managers that are just GitHub collections instead of curated distribution-level package managers. So my response to "C has no good package manager" is: it should not have a package manager, and Cargo or npm or the countless Python managers should all not exist either.
Usually the hard bit with C libraries is having dependencies with dependencies all of which use their own complex build systems, a mix of Make, CMake, Autotools, Ninja, etc.
Then within that, a mix of projects using the normal standard names for build parameters and projects using e.g. PROJECTNAME_COMPILER instead of CMAKE_C_COMPILER.
So, yes, you do very often have to figure out how to build and package these things by yourself. There are also no "leftpad" or similar packages in C if you don't want to write something yourself.
In contrast - virtually every software package of any version is available to you in cargo or npm.
But Rust, you know, has one.
No. Rust is not magic, it just forces a discipline in which certain safety checks can be made automatically (or are obviated entirely). In other languages like C, the programmer needs to perform those checks; and it's technical debt if the C code is not coded carefully and reviewed for such issues. If coding is careful and the code is reviewed, there is no technical debt, or perhaps I should say no more than the unsafe parts of a Rust codebase or the standard libraries. And the safety of critical infra code written in C gets _better_ over time, as such technical debt is repaid.
> Rust is explicitly designed to be what you'd get if you were to re-create C knowing what we know now about language design and code safety.
That's not true. First, it's not a well-defined statement, since "what we know now" about language design is, as it has always been, a matter of debate and a variety of opinions. But even regardless of that - C was a language with certain design choices and aesthetics. Rust does not at _all_ share those choices - even if you tack on "and it must be safe". For example: Rust is much richer language - in syntax, primitive types, and standard library - than C was intended to be.
How many decades have we tried this? How many more to see that it just hasn't panned out like you describe?
History shows again and again that this statement is impossible.
Name a large C application that’s widely used, and I’ll show you at least one CVE that’s caused by a memory leak from the project
Separation of concerns solves this because the compiler has minimal impact on the trustedness of the code the Rust compiler generates. Indeed, one would expect that all the ways that the LLVM compiler fails are ways any Rust implementation would fail too - by generating the wrong code which is rarely if ever due to memory safety or thread safety issues. There may be other reasons to write the compiler backend in Rust but I wouldn’t put the trust of compiled Rust code as anywhere near the top of reasons to do that.
IOW, what's your specification?
According to what?
> Rust is explicitly designed
There is no standard. It's accidentally designed.
> knowing what we know now about language design and code safety.
You've solved one class of bugs outside of "unsafe {}". The rest are still present.
Are you really claiming that you can't design a language without an official standard? Not to mention that C itself was designed long before its first ISO standard. Finally, the idea that a standards committee is a precondition for good language design is rather bold, I have to say. The phrase "design by committee" isn't typically used as a compliment...
> You've solved one class of bugs outside of "unsafe {}".
It's "only" the single most important class of bugs for system safety.
This kind of deflection and denialism isn't helping. And I'm saying this as someone who really likes C++.
No, just that it's not 1968 anymore, and if you want to claim your language has learned lessons from the past, then this is one that clearly got missed.
> The phrase "design by committee" isn't typically used as a compliment...
While the phrase "emergent incompatibilities" is only known as a detriment.
> It's "only" the single most important class of bugs for system safety.
Again, I ask for a reference, "according to what?" I understand this is the zeitgeist. Is it actually true? It seems to me this great experiment is actually proving it probably isn't.
> This kind of deflection and denialism isn't helping.
Once again, I asked for proof that the claim was true, you've brought nothing, and instead have projected your shortcomings onto my argument.
> And I'm saying this as someone who really likes C++.
Have you ever pushed for C++ to replace C programs because you assume they would be "better" according to some ill defined and never measured metrics?
> Again, I ask for a reference, "according to what?" I understand this is the zeitgeist.
I think that at this point it is pretty well-established that the majority of security CVEs in C or C++ applications are caused by memory safety bugs. For sources see https://www.cisa.gov/news-events/news/urgent-need-memory-saf.... As a C++ dev this totally makes sense. (I just happen to work in a domain where security doesn't really matter :)
I have been seeing hatred on this forum towards Rust for a long time. Initially it didn't make any kind of sense. Only after actually trying to learn it did I understand the backlash.
It actually is so difficult that most people might never be able to become proficient in it, even if they tried - especially coming from the world of memory-managed languages. This creates push back against any and every use or promotion of Rust. The unknown fear seems to be that they will be left behind if it takes off.
I completed my battles with Rust. I don't even use it anymore (because of lack of opportunities). But I love Rust. It is here to stay and expand. Thanks to the LLMs and the demand for verifiability.
For instance,
  struct Feet(i32);
  struct Meters(i32);
  
  fn hover(altitude: Meters) {
      println!("At {} meters", altitude.0);
  }
  
  fn main() {
      let altitude1 = Meters(16);
      hover(altitude1);
      let altitude2 = Feet(16);
      hover(altitude2);
  }
This fails at build time with:

  12 |     hover(altitude2);
     |     ----- ^^^^^^^^^ expected `Meters`, found `Feet`
     |     |
     |     arguments to this function are incorrect

Guaranteeing that I’ve never mixed units means I don’t have to worry about parking my spacecraft at 1/3 the expected altitude. Now I can concentrate on the rest of the logic. The language has my back on the types so I never have to waste brain cycles on the bookkeeping parts.

That’s one example. It’s not unique to Rust by a long shot. But it’s still a vast improvement over C, where that same signed 32 bit data type is the number of eggs in a basket, the offset of bytes into a struct, the index of an array, a UTF-8 code point, or whatever else.
This really shows up at refactoring time. Move some Rust code around and it’ll loudly let you know exactly what you need to fix before it’s ready. C? Not so much.
  #include <stdio.h>

  typedef struct { int value; } Feet;
  typedef struct { int value; } Meters;

  void hover(Meters altitude) {
      printf("At %i meters\n", altitude.value);
  }

  int main() {
      Meters altitude1 = {.value = 16};
      hover(altitude1);
      Feet altitude2 = {.value = 16};
      hover(altitude2);
  }

This fails to compile with:

  error: passing 'Feet' to parameter of incompatible type 'Meters'
     20 |     hover(altitude2);
Coming from a dynamically typed language (Python, etc), this might seem like a revelation, but its old news since the dawn of programming computers. A C language server will pick this up before compile time, just like `rust-analyzer` does: `argument of type "Feet" is incompatible with parameter of type "Meters"`.
Did you not know this? I feel like a lot of people on message boards criticizing C don't know that this would fail to compile and the IDE would tell you in advance...
In C++ you can even add overloaded operators to make math on such structs ergonomic.
Compilers know of the idiom, and will optimize the struct away.
> An investigation attributed the failure to a measurement mismatch between two measurement systems: SI units (metric) by NASA and US customary units by spacecraft builder Lockheed Martin.[3]
> ... ground controllers ignored a string of indications that something was seriously wrong with the craft's trajectory, over a period of weeks if not months. But managers demanded that worriers and doubters "prove something was wrong," even though classic and fundamental principles of mission safety should have demanded that they themselves, in the presence of significant doubts, properly "prove all is right" with the flight
Dropping units on the NASA side also was problematic but really culture was the cause of the actual crash.
[0] https://spectrum.ieee.org/why-the-mars-probe-went-off-course
But also think of how many libc functions take multiple ints or multiple chars in various orders. You can get carried away with typing, i.e. by having a separate type for everything*. Still, imagine you’re writing, say, a hypothetical IDE device driver and had separate types for BlockNumber and ByteInBlock so that it’s impossible to transpose read(byte_offset,block) instead of read(block,byte_offset), even if those are really the same kind of numbers.
That kind of thing makes a gazillion bugs just vanish into the ether.
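A minimal sketch of that idea, with hypothetical names rather than real driver code: wrap each number in its own type and a transposed call simply doesn't compile.

  struct BlockNumber(u64);
  struct ByteInBlock(u64);

  fn read(block: BlockNumber, offset: ByteInBlock) -> u8 {
      // Stand-in for the actual device access; both fields are "really the
      // same kind of number", but the arguments can no longer be swapped.
      ((block.0 * 512 + offset.0) % 256) as u8
  }

  fn main() {
      let byte = read(BlockNumber(20), ByteInBlock(10));
      // read(ByteInBlock(10), BlockNumber(20)); // error[E0308]: mismatched types
      println!("{byte}");
  }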
There appears to be some tension between different conveniences you might afford yourself. If you have read(offset: offsetTypeForRead, address: addressTypeForRead), you can catch when someone accidentally passes an address where the offset should be and an offset where the address should be.
Or, you can say "hey, I'm always adding the offset to the address; it doesn't matter which one gets passed first" and relieve the programmer of needing to know the order in which two semantically distinct variables get passed to `read`.
But if you do provide that convenience -- and it's not unintuitive at all; there really is only one valid interpretation of a combination of an address and an offset, regardless of the order you mention them in -- you lose some of the safety that you wanted from the types. If your variables are declared correctly, everything is fine. If there's a mistake in declaring them, you'll wave through incorrect calls to `read` that would have been caught before.
    fn sub(a: LeftOp, b: RightOp)

but that seems redundant. There are still plenty of other cases where I could see your idea being useful. Like I always forget whether (in Python) it’s json.dump(file, data) or dump(data, file). Ultimately, should it matter? I’m passing a file handle and an object, and it’s unambiguous how those two args relate to the task at hand.

    ide_read(&(struct ide_loc) { .byte_offset = 10, .block = 20 })
IIUC, rust would NOT let you do a type checked m/s * s => m, so using the type system for these kinds of games is silly and dangerous (I presume you would have to do the dumb thing and typeconvert to the same type -- e.g.
    (m) (speed * ((m/s) seconds))
to do multiplication, which means you're inserting unscientific and reader-confusing type conversions all over the place)

I haven't written rust, but my impression is the benefit is more about deeper introspection of things like lifetimes than basic type safety, which already exists in C/C++ (and is likewise occasionally bypassed for convenience, so I wonder how often the same is done for Rust)
/s
If people from that world complain about Rust, I surely wouldn't want them around a C codebase.
There's nothing wrong with memory-managed languages, if you don't need to care about memory. But being unfamiliar with memory and complaining about the thing that helps you avoid shooting your foot isn't something that inspires trust.
The hardship associated with learning rust isn't going to go away if they do C instead. What's going to happen instead is that bugged code will be written, and they will learn to associate the hardship with the underlying problem: managing memory.
I think this is more true of C than it is of Rust if the bar is "code of sufficient quality to be included in debian"
It might take some people months rather than days, but I think that is a desirable outcome.
Important low level software should be written by competent developers willing to invest the effort.
The problem here is that C is too basic, dated, with inadequate higher-level abstractions, which makes writing robust and secure software extra difficult and laborious. "Learning underlying hardware" doesn't solve that at all.
Debian supports dozens of architectures, so it needs to abstract away architecture-specific details.
Rust gives you as much control as C for optimizing software, but at the same time neither Rust nor C really expose actual underlying hardware (on purpose). They target an abstract machine with Undefined Behaviors that don't behave like the hardware. Their optimisers will stab you in the back if you assume you can just do what the hardware does. And even if you could write directly for every logic gate in your hardware, that still wouldn't help with the fragility and tedium of writing secure parsers and correct package validation logic.
That could also be applied to C and C++ …
Time and time again, theoretically worse solutions that are easily accessible win
> Rust is already a hard requirement on all Debian release architectures and ports except for alpha, hppa, m68k, and sh4 (which do not provide sqv).
Wonder what this means for those architectures then?
It looks like the last machines of each architecture were released:
Alpha in 2007
HP-PA in 2008
m68k in pre-2000 though derivatives are used in embedded systems
sh4 in 1998 (though possible usage via "J2 core" using expired patents)
This means that most are nearly 20 years old or older.
Rust target triples exist for:
m68k: https://doc.rust-lang.org/nightly/rustc/platform-support/m68... and https://doc.rust-lang.org/nightly/rustc/platform-support/m68... both at Tier 3.
(Did not find target triples for the others.)
If you are using these machines, what are you using them for? (Again, genuinely curious)
Everything else is at least i686 and Rust has perfectly adequate i686 support.
Is there any major distro left with pre i686 support?
Either legacy systems (which are most certainly not running the current bleeding-edge Debian) or retro computing enthusiast.
These platforms are long obsolete and there are no practical reasons to run them besides "I have a box in the corner that's running untouched for the last 20 years" and "for fun". I can get a more powerful and power efficient computer (than any of these systems) from my local e-waste recycling facility for free.
Here is one famous example of a dude who’s managed to get PRs merged in dozens of packages, just to make them compatible with ancient versions of nodejs https://news.ycombinator.com/item?id=44831811
You could make this argument for so many usecases but apparently people just enjoy bashing retrocomputing here.
According to the last Steam survey, 3% of players use Linux. Steam has 130 million active players, so that means there are 4 million people playing on Linux. Definitely not "nobody", and way bigger than the whole retrocomputing community.
By the way, I am also one of those retrocomputing guys, I have a Pentium 2 running Windows 98 right here. IMHO, trying to shoehorn modern software on old hardware is ridiculous, the whole point of retro hardware is using retro software.
Well, there are so many things where you could argue about the relevance of a userbase.
If the size of a userbase would be the only argument, Valve could just drop support for the Linux userbase which is just 2-3% of their overall userbase.
The other mentioned architectures hppa, m68k and sh4 are at a similar level.
Cars, airplanes, construction equipment, etc.
you mainly find that with systems needing certification
these are the kinds of situations where having a C language spec isn't enough; you instead need a compiler-version-specific spec of the compiler
similarly, they tend to run the same checkout of the OS with project-specific security updates back-ported to it, instead of doing generic system updates (because every single update needs to be re-certified)
but that is such a huge effort that companies don't want to run a full OS at all. Just the kernel and the most minimal choice of packages you really need and not one more binary than that.
and they might have picked Debian as an initial source for their packages, kernel etc. but it isn't really Debian anymore
John Deere, Caterpillar, etc are leaning heavily into the “connected industrial equipment” world. GE engines on airplanes have updatable software and relay telemetry back to GE from flights.
The embedded world changed. You just might have missed it if your view is what shipped out before 2010.
1) The control network is air gapped, any kind of direct Internet connection is very much forbidden.
2) Embedded real-time stuff usually runs on VxWorks or RTEMS, not Linux. If it is Linux, it is a specialized distro like NI Linux.
3) Anything designed in the last 15 years uses ARM. Older systems use PowerPC. Nobody has used Alpha, HPPA, SH4 or m68k in ages. So if you really want to run Debian on it, just go ahead and use Armbian.
But yeah, those can figure out how to keep their own port
Who is actually _running_ Debian Trixie on these platforms now?
It is counter-intuitive to me that these platforms are still unofficially supported, but 32-bit x86 [edit: and all MIPS architectures!] are not!
I am emotionally sad to see them fall by the wayside (and weirdly motivated to dig out a 68k Amiga or ‘very old Macintosh’ and try running Trixie…) but, even from a community standpoint, I find it hard to understand where and how these ports are actually used.
It’s just a bit annoying that Rust proponents are being so pushy in some cases as if Rust was the solution to everything.
This is not intended to bash you or anyone else who’s working on it - I think it’s a cool project (I have in the recent past got an 86duino ZERO to run Gentoo, just to see if an obscure old-ish piece of hardware can be useful with modern Linux on it - and it can). I do understand the reason a project like Debian might not want to have to spend resources even just to make it easier to do though.
https://sandervanderburg.blogspot.com/2025/01/running-linux-...
I didn't find what Debian version they tried but I think it's implied it's a recent version. They ran into memory issues. They had only 48MB while the recommendations are to use 64MB. It did boot though until it threw errors because of memory constraints.
They got a working system by trying Debian 3.1 though.
It's been somewhat useful for finding weird edge cases in software where for whatever reason, it doesn't reproduce easily on AArch64 or x86, but does there. (Or vice-versa, sometimes.)
I don't know that I'd say that's sufficient reason to motivate dozens of people to maintain support, but it's not purely academic entertainment or nostalgia, for that.
(LLVM even used to have an in-tree DEC Alpha backend, though that was back in 2011 and not relevant to any version of Rust.)
[0] Looks like there is basic initial support but no 'core' or 'std' builds yet. https://doc.rust-lang.org/rustc/platform-support/m68k-unknow... This should potentially be fixable.
yes, from a pure code generation aspect
no, as all conditional-compiled platform specific code is missing.
So using it with #[no_core] should work (assuming the WIP part of the backend isn't a problem). But beyond that you have to first port libcore (should be doable) and then libstd (quite a bunch of work).
sure, some are also paid by a foundation. Which is also paid by companies, but with a degree of decoupling of influence.
and some pay themselves, e.g. fully voluntary work, but most devs can't afford to do so in a long-term, high-time-commitment manner. So a lot of major changes and contributions end up coming from people directly or indirectly "paid" by some company.
and that's pretty common across most "older, larger, sustainable and still developed OSS"
https://mastodon.social/@juliank
>Senior Engineer at Canonical.
They will be rebranded as "retro computing devices"
I’m not in the Debian world, but those do seem to me like the types of systems that could use their own specialized distros rather than being a burden to the mass market ones. It’s not as if you could run a stock configuration of any desktop environment on them anyway.
Does or should debian care? I don't know.
I don’t get the fuss around the “retro computing” verbiage. I doubt anyone is actually running Debian on these devices out of necessity; someone who plays baroque music on reconstructed period instruments won’t balk at being called an “early music” enthusiast.
But I'm not sure. I think the new Rust dependencies are good. In an ideal world, the people who care about niche systems step up to help Rust target those systems.
I’m actually the person who added the m68k target to the Rust compiler and was also one of the driving forces of getting the backend into LLVM.
Generally speaking, getting a new backend into the Rust compiler is not trivial as it depends on LLVM support at the moment which is why asking someone to just do it is a bit arrogant.
Luckily, both rustc_codegen_gcc and gccrs are being worked on, so this problem will be resolved in the future.
I'll try to rephrase: if we never want to give up support for a platform we've supported in the past, then I think we only have two options: (1) never adopt new technology where support for said platforms doesn't come for free, or (2) leave it up to those who care about the niches to ensure support.
Neither is pain-free, but the first seems like a recipe for stagnation.
It's lovely to see the two alternative compiler paths for Rust moving forward though! Thank you!
Here's a thread of them insulting upstream developers & users of the Debian packages. https://github.com/keepassxreboot/keepassxc/issues/10725
Unnecessary drama as usual...
In fact not having it encourages copy and paste which reduces security.
What's next? Strip javascript support from browsers to reduce the attack surface?
I don't get how this is even a discussion. Either he is paid by canonical to be a corporate saboteur or he is completely insane.
The one demanding it is the maintainer of keepassxc. It would’ve been better to just close the issue, saying this is a Debian-only problem and he should install it like that.
now this is separate from being open for discussion if someone has some good arguments (which aren't "you break something which isn't supported and is only niche-used"), and some claim he isn't open for arguments
and tbh. if someone exposes users to actual relevant security risk(1) because the change adds a bit of in depth security(2) and then implicitly denounces them for "wanting crap" this raises a lot of red flags IMHO.
(1): Copy-pasting passwords is a very bad idea; the problem is phishing attacks with "look-alike" domains. Your password manager won't fill them out; your copy-paste is prone to falling for it. In addition there are other smaller issues related to clipboard safety and similar (hence why KC clears the clipboard after a short time).
(2): Removing unneeded functionality which could have vulnerabilities. Except we speak about code from the same source which if not enabled/setup does pretty much nothing (It might still pull in some dependencies, tho.)
but yes very unnecessary drama
As teh64 helpfully pointed out in https://news.ycombinator.com/item?id=45784445 some hours ago, 4ish years ago my position on this was a total 180 and I'd have had the same reaction to now-me's proposal.
Most of the academic research into these sorts of typesafe languages usually returns the null result (if you don't agree, it means you haven't read the research on this topic). That's researcher-speak for "it didn't work and you shouldn't be using these techniques". Security is a process, not a silver bullet, and 'just switch to Rust' is very silvery bullet.
A lot of the Rust rewrites suffer a crucial issue: they want a different license than what they are rewriting and hence rewrite from scratch because they can't look at the code.
But here we're saying: Hey we have this crucial code, there may be bugs hidden in it (segfaults in it are a recurring source of joy), and we'll copy that code over from .cc to .rs and whack it as little as possible so it compiles there.
The problem is much more in the configuration parser, for example, which does in a sense desperately need a clean rewrite, as it's way too sloppy and it's making it hard to integrate.
In an optimal world I'd add annotations to my C++ code and have a tool that does the transliteration to Rust at the end; like when the Go compiler got translated from C to Go. It was glorious.
    It is our responsibility to our users to provide them the most secure option possible as the default.
Removing features is not the most secure option possible. Go all the way then and remove everything; only when your computer cannot do anything will it be 100% secure.
If I have a program that encrypts and decrypts passwords, then the surface area is way smaller than if it also has browser integrations and a bunch of other features. Every feature has the potential to make this list longer: https://keepass.info/help/kb/sec_issues.html which applies to any other piece of software.
At the same time, people can make the argument that software that's secure but has no useful features also isn't very worthwhile. From that whole discussion, the idea of having a minimal package and a full package makes a lot of sense - I'd use the minimal version because I don't use that additional functionality, but someone else might benefit a bunch from the full version.
Security is there to keep the features usable without interruptions or risks.
E.g. plugging the computer off the network is not about security if the service needs to be accessible.
Very concrete example, the whole Log4j vulnerability issue was basically just a direct implication of a feature that allowed for arbitrary code execution. Nearly no user of Log4j intentionally used that feature, they were all vulnerable because Log4j had that feature.
The fix to the CVE was effectively to remove the feature. If someone had the foresight to try to reduce Log4j to only the features that ~everyone actually used, and publish a separate Log4j-maximal for the fringe users that intentionally use that feature, it would have prevented what was arguably the worst vulnerability that has ever happened.
In the case this thread is about, no one seems to deny that there should be 'minimal' and 'full' versions and that the 'minimal' version is going to be more secure. The entire flame war seems to be over whether it's better to take a preexisting package name and have it be the minimal one or the full one.
That is simply a tradeoff between "make preexisting users who don't use ancillary features be as secure as possible by default going forward" or "make preexisting users who do use ancillary features not broken by upgrades".
In this case it is not clear at all whether the feature is obscure. For most people it could be actually essential and the primary requirement for the whole software.
This is literally the same as helping a relative to make their computer more secure by turning it off. Problem solved I guess?
If you made a mistake by shipping insecure defaults you could fix it e.g. by including a banner to use the minimal version to users that don't use the extra features. But simply rug-pulling everybody for "security" and doubling down by insulting the affected users? I really do not understand people that act like this.
We should really hold more value to keeping existing user setups working. Breakages are incredibly damaging and might very well have a bigger impact than insecure defaults.
"All of these features are superfluous and do not really belong in a local password database manager" seems to me like a pretty clear explanation of what is "crap" about them, and it seems pretty clearly not to be about personal taste.
Some people care about modularity.
Yes there are absolutely some obnoxious "you should rewrite this in Rust" folks out there, but this is not a case of that.
And regardless, my point is it would be more sensible to say "I'm going to introduce an oxidized fork of apt and a method to use it as your system apt if you prefer" and then over the next year or so he could say "look at all these great benefits!" (if there are any). At that point, the community could decide that the rust version should become the default because it is so much better/safer/"modern"/whatever.
What you're seeing now is developers who are interested in writing a better version of whatever they're already working on, and they're choosing Rust to do it. It's not a group "Rust enthusiasts" ninjas infiltrating projects. It's more and more developers everywhere adopting Rust as a tool to get their job done, not to play language wars.
Where we disagree is I would not call injecting rust into an established project “writing a better version”. I would love it if they did write a better version, so we could witness its advantages before switching to it.
Can't you see how much more thought and care went into this, than is on display in this Debian email (the "if your architecture is not supported in 6 months then your port is dead" email)?
All officially supported ones. The Debian discussion is not about officially supported Debian ports, it's about unofficial ones.
That's not how open source software development works.
I wasn't asked by Linus whether ipchains should become the default over ipfirewall, nor whether iptables should become the default over ipchains.
I wasn't asked whether GCC should use C++ instead of C as the language to build GCC itself.
I can go on with lots of examples.
Why should APT be different and require the maintainers to fork their own project to introduce changes? Why should an undefined "community" (who is that? apparently not the APT developers...) decide? Does this have to be done for every code change in APT?
They certainly did NOT say "I'm replacing ipfirewall with ipchains in six months, and if your distro can't handle it you should sunset your distro."
It shouldn't be controversial to request a measured approach when making major changes to software lots of people depend on. That's part of the burden of working on important software. Note I'm not against apt or anything moving to rust.
edit: spelling
> Rust is already a hard requirement on all Debian release architectures and ports except for alpha, hppa, m68k, and sh4 (which do not provide sqv).
And just like with the kernel the fallback gets removed eventually.
We didn't talk about your gcc-to-c++ example, but if you read up on it you will know they took the pulse of affected developers, started experimental branches, made presentations, and made sure no architectures were left behind. All of which this Debian developer is failing to do.
Look I don't even disagree with the ultimate result... I don't think Debian needs to indefinitely support every strange port it has built up over the years. The way it's being done, though, doesn't sit right. There are far more mature ways to steer a big ship. Your own examples are showing the way.
* It's becoming increasingly difficult to find new contributors who want to work with very old code bases in languages like C or C++. Some open source projects have said they rewrote to Rust just to attract new devs.
* Reliability can be proven through years in use but security is less of a direct correlation. Reliability is a statistical distribution centered around the 'happy path' of expected use and the more times your software is used the more robust it will become or just be proven to be. But security issues are almost by definition the edgiest edge cases and aren't pruned by normal use but by direct attacks and pen testing. It's much harder to say that old software has been attacked in every possible way than that it's been used in every possible way. The consequences of CVEs may also be much higher than edge case reliability bugs, making the justification for proactive security hardening much stronger.
On your second point: I wonder how the aviation, space, and car industries do it. They rely heavily on tested/proven concepts. What do they do when introducing a new type of material to replace another one, or when a complete assembly workflow gets updated?
The world isn't black or white. Some people write Rust programs with the intent to be drop-in compatible programs of some other program. (And, by the way, that "some other program" might itself be a rewrite of an even older program.)
Yet others, such as myself, write Rust programs that may be similar to older programs (or not at all), but definitely not drop-in compatible programs. For example, ripgrep, xsv, fd, bat, hyperfine and more.
I don't know why you insist on a world in which Rust programs are only drop-in compatible rewrites. Embrace the grey and nuanced complexity of the real world.
There is a ton of new stuff getting written in Rust. But we don't have threads like this on HN when someone announces a new piece of infra written in Rust, only when there's a full or partial rewrite.
Re automotive and other legacy industries, there's heavy process around both safety and security. Performing HARAs and TARAs, assigning threat or safety levels to specific components and functions, deep system analysis, adding redundancy for safety, coding standards like MISRA, etc. You don't get a lot of assurances for "free" based on time-proven code. But in defense there's already a massive push towards memory safe languages to reduce the attack surface.
Because of backwards compatibility. You don’t rewrite Linux from scratch to fix old mistakes, that’s making a new system altogether. And I’m pretty sure there are some people doing just that. But still, there’s value in rewriting the things we have now in a future-proof language, so we have a better but working system until the new one is ready.
And by the way, Rust didn't invent this "rewrite old software" idea. GNU did it long before Rust programmers did.
You also didn't respond to my other rebuttal, which points out a number of counter-examples to your claim.
From my view, your argument seems very weak. You're leaving out critical details and ignoring counterpoints that don't confirm your bias.
uutils/coreutils is MIT-licensed and primarily hosted on GitHub (with issues and PRs there) whereas GNU coreutils is GPL-licensed and hosted on gnu.org (with mailing lists).
EDIT: I'm not expressing a personal opinion, just stating how things are. The license change may indeed be of interest to some companies.
The GPL protects the freedom of the users while MIT-licensed software can be easily rug-pulled or be co-opted by the big tech monopolists.
Using GitHub is unacceptable as it is banning many countries from using it. You are excluding devs around the world from contributing. Plus it is owned by Microsoft.
So we replaced a strong copyleft license and a solid decentralized workflow with a centralized repo that depends on the whims of Microsoft and the US government and that is somehow a good thing?
That is not at all true. If someone were to change the license of a project from MIT to something proprietary, the original will still exist and be just as available to users. No freedom is lost.
MIT is a big joke at the expense of the open-source community.
There is also another crowd that completely aligns with US foreign policy and has the same animosity towards those countries' citizens (I've seen a considerable number of examples of this).
As for the license part, I really don't get the argument: how can a coreutils rewrite get rug-pulled? This is not a hosted service where a MinIO-like situation [1] [2] can happen, and the original utils will always be there if something like that were to happen.
[1] http://news.ycombinator.com/item?id=45665452 [2] https://news.ycombinator.com/item?id=44136108
Whether the rewrite should be adopted to replace the original is certainly a big discussion. But simply writing a replacement isn’t really worth complaining about.
Note that I'm not saying Debian should, I'm saying it is reasonable that they would. I am not a Debian maintainer and so I should not have an opinion on what tools they use, only that adding Rust isn't unreasonable. It may be reasonable to take away a different tool to get Rust in - again this is something I should not have an opinion on but Debian maintainers should.
Furthermore, if these architectures are removed from further debian updates now, is there any indication that, once there's a rust toolchain supporting them, getting them back into modern debian wouldn't be a bureaucratic nightmare?
These architectures aren't being removed from Debian proper now, they already were removed more than a decade ago. This does not change anything about their status nor their ability to get back into Debian proper, which had already practically vanished.
i.e. they are only still around because they haven't caused any major issues and someone bothered to fix them up from time to time in their own free time
so yes, you probably won't get them back in once they are out, as long as a company doesn't shoulder the (work time) bill for it (and by that I mean long-term maintenance more than the cost of getting them in)
but for the same reason they have little to no relevance when it comes to any future changes which might happen to get them kicked out (as long as no company steps up and shoulders the (work time) bill for keeping them maintained)
The GCCRS project can't even build libcore right now, let alone libstd. In addition, it is currently targeting Rust 1.50's feature set, with some additions that the Linux kernel needs. I don't see it being a useful general purpose compiler for years.
What's more likely is that rustc_codegen_gcc, which I believe can currently build libcore and libstd, will be stabilised first.
What I don't get is the burning need for Rust developers to insult others. Kind of the same vibes that we get from systemd folks and LP. Does it mean they have psychological issues and deep down in their heart they know they need to compensate?
I remember C vs Pascal flame back in the day but that wasn't serious. Like, at all. C/C++ developers today don't have any need to prove anything to anyone. It would be weird for a C developer to walk around and insult Rust devs, but the opposite is prevalent somehow.
... where?
I think it’s a combination of religion decreasing in importance and social media driving people mildly nuts. Many undertakings are collecting “true believers”, turning into their religion and social media is how they evangelize.
Rust is a pretty mild case, but it still attracts missionaries.
So, the people are different, Western society’s different and social media’s giving everyone a voice while bringing out the worst in them.
https://github.com/keepassxreboot/keepassxc/issues/10725#iss...
> Rust is a security nightmare. We'd need to add over 130 packages to main for sequoia, and then we'd need to rebuild them all each time one of them needs a security update.
What has changed? Why is 130 packages for a crypto application acceptable?
As for why, probably the same reason the dependency tree for gnupg (generate with `debtree -R -b gnupg` but grepping out all the gcc/mingw dependencies) looks like this: https://static.jeroenhd.nl/hn/gnupg.svg There's probably a good reason why I need libjpeg62, libusb-1.0-0-dev, and libgmp3 to compile gnupg, though they're hidden away from the usual developer docs in the form of transitive dependencies; complex software just tends to include external dependencies rather than reinventing the wheel.
To me, this sounds like "leftpad" but for CS1 data structures.
(*) random number
The dependency explosion is still a problem and I'm not aware of any real solution. It would have been interesting to see why their opinion changed… I'm guessing it's as simple as the perceived benefits overriding any concerns, and no major supply-chain attacks being known so far.
Recently, there was an exploit discovered in an abandoned Rust package that was used by many other Rust projects, many unaware of it due to the sheer number of dependencies. Whether by negligence or malice, having a known vulnerability that permeates significant portions of the ecosystem is on the order of a supply chain attack.
https://edera.dev/stories/tarmageddon
Worse yet, independent research suggests that the state is arguably much worse: https://00f.net/2025/10/17/state-of-the-rust-ecosystem/
Given the projects that claim to be switching to Rust to attract new contributors, it remains to be seen how many of those new contributors can actually be retained.
What's the long-term play for Canonical here?
Open source fundamentally is a do-ocracy (it's in literally all of the licenses). Those who do, decide; and more and more often those who do are just one or two people for a tool used by millions.
The obvious potential motivations are things like making a more reliable product, or making their employees more productive by giving them access to modern tools... I guess I could imagine preparing for some sort of compliance/legal/regulatory battle where it's important to move towards memory-safe tooling, but even there I'd rather imagine that Microsoft is better placed to say that they are, and any move on Canonical's part would be defensive.
Presumably it's rewriting critical parsing code in APT to a memory-safe language.
It's insane that x86 Debian is still compiling all software targeting Pentium Pro (from 1995!).
x64 Debian is a bit more modern, and you must splurge for a CPU from 2005 (Prescott) to get the plethora of features it requires.
Note that Debian no longer supports x86 as of Debian 13.
Debian 13 raised the x86 requirement to Pentium 4 because LLVM required SSE2 and Rust required LLVM.
The target before was not Pentium Pro in my understanding. It was Pentium Pro equivalent embedded CPUs. Servers and desktops since 2005 could use x86-64 Debian.
The cost of supporting this old hardware for businesses or hobbyists isn’t free. The parties that feel strongly that new software continue to be released supporting a particular platform have options here, ranging from getting support for those architectures in LLVM and Rust, pushing GCC frontends for rust forward, maintaining their own fork of apt, etc.
There's a non-negligible amount of "handed-down" refurbished hardware from the developed to the developing world: PCs and servers that are already 5+ years old and out of the market at installation.
(In my second-tier university in my developing country, the Sun workstation hadn't been turned on in years by the late 2000s, and the minicomputer they bought in the 1980s was furniture at the school.)
Edit: As for big businesses, they have support plans from IBM or HP for their mainframes, nothing relevant to Debian.
See a (relatively recent) list of manufacturers here:
https://en.wikipedia.org/wiki/List_of_x86_manufacturers
and scroll down for other categories of x86 chip manufacturers. These have plenty of uses. Maybe in another 30 years' time they will mostly be a hobby, but we are very far from that time.
But you are also completely ignoring limited-capabilities hardware, like embedded systems and micro-controllers. That includes newer offerings from ST Microelectronics, Espressif, Microchip Technology etc. (and even renewed 'oldies' like eZ80's which are compatible with Zilog's 8-bit Z80 from the 1970s - still used in products sold to consumers today). The larger ones are quite capable pieces of hardware, and I would not be surprised if some of them use Debian-based OS distributions.
BTW, today is the Pentium Pro's 30th anniversary.
why not? I still want to run modern software on older machines for security and feature reasons
This doesn't seem like a noteworthy change in how accurate the GNU/Linux name is... though there are lots of things I'd put more importance on than GNU in describing Debian (systemd, for instance).
Edit: Looks like Perl 1.0 was under the following non-commercial license, so definitely not always GPL though that now leaves the question of licensing when debian adopted it, if you really care.
> You may copy the perl kit in whole or in part as long as you don't try to make money off it, or pretend that you wrote it.
https://github.com/AnaTofuZ/Perl-1.0/blob/master/README.orig
But, there are now a lot more replacements for GNU's contributions under non-copyleft licenses, for sure.
More seriously, I think Linux in general could benefit from a bit more pruning of legacy stuff and embracing the new, so I count this as a plus.
It's also "relatively easy" to add a new backend to Rust.
There's a policy document for Rust here: https://doc.rust-lang.org/rustc/target-tier-policy.html
There are a lot of things that can go wrong. You want to be able to test. Being able to test requires that someone has test hardware.
Much of the language used seems to stem from nauseating interactions that have occurred in the kernel world around Rust usage.
I'm not a big fan of rust for reasons that were not brought up during the kernel discussions, but I'm also not an opponent of moving forward. I don't quite understand the pushback against memory safe languages and defensiveness against adopting modern tooling/languages
I haven't seen this from Rust. Obviously lots of us think that Rust is the way forward for us but I think the problem you're talking about is that nobody offered any alternatives you liked better and that's not on Rust.
If Bob is ordering pizza for everybody who wants one, it is not the case that "Pizza is necessarily the way forward", and it's not Bob's fault that you can't have sliders, I think if you want sliders you're going to need to order them yourself and "Pizza is the way forward" is merely the default when you don't and people are hungry.
Dave Abraham's Hylo is an example of somebody offering to order sushi in this analogy. It's not yet clear whether Dave knows a Sushi place that delivers here, or how much Sushi would be but that's what having another way forward could look like.
In C++ they've got profiles, which is, generously, "Concepts of a plan" for a way forward and in C... I mean, it's not your focus, but nobody is looking at this right? Maybe Fil-C is your future? I note that Fil-C doesn't work on these obsolete targets either.
Nobody is being forced out of the community; you can fork and not adopt the changes if you want. That's the real point of free software: you have the freedom to make that choice. The whole point of free software was never that the direction of the software should be free from corporate control in some way; the maintainers of a project have always had the authority to make decisions about their own project, whether individual or corporate or a mix.
What are some concrete cases you can point to where a decision was made with full consensus? Literally everyone agreed? All the users?
I'm not sure many projects have ever been run that way. I'm sure we've all heard of the Benevolent Dictator for Life (BDfL). I'm sure Linus has made an executive decision once in a while.
Requiring full consensus for decisions is a great way to make no decisions.
You describe it that way, but that's not how the world in general works in practice. You do things based on majority.
False claims don't really make the claims about the evils of Rust more believable.
This assumes there wasn't agreement.
And if so, what would 'eventually adopted by the majority' mean. Is this announcement not that?
This hasn’t changed.
Well, what's the alternative? The memory safety problem is real, I don't think there is any doubt about that.
C/C++ is a dead end: the community has thoroughly rejected technical solutions like the Circle compiler, and "profiles" are nothing more than a mirage. They are yet again trying to make a magical compiler which rejects all the bad code and accepts all the good code without making any code changes, which of course isn't going to happen.
Garbage collection is a huge dealbreaker for the people still on C/C++. This immediately rules out the vast majority of memory-safe languages. What is left is pretty much only Zig and Rust. Both have their pros and cons, but Rust seems to be more mature and has better community adoption.
The way I see it, the pro-memory-safety crowd is saying "There's a giant hole in our ship, let's use Rust to patch it", and the anti-Rust crowd yells back "I don't like the color of it, we shouldn't repair the hole until someone invents the perfect solution". Meanwhile, the ship is sinking. Do we let the few vocal Rust haters sink the ship, or do we tell them to shut up or show up with a better alternative?
> Garbage collection is a huge dealbreaker for the people still on C/C++.
The problem is not so much GC itself, but more like pervasive garbage collection as the only memory management strategy throughout the program. Tracing GC is a legit memory management strategy for some programs or parts of a program.
The reason memory safety is interesting in the first place (for practical, not theoretical reasons) is that it is a common cause of security vulnerabilities. But spatial memory safety is a bigger problem than temporal memory safety, and Zig does offer spatial memory safety. So if Rust's memory safety is interesting, then so is the memory safety Zig offers.
I'm a rabid software correctness advocate, and I think that people should acknowledge that correctness, safety (and the reasons behind it) are much more complex than the binary question of what behaviours are soundly disallowed by a language (or ATS advocates would say that from that their vantage point, Rust is just about as unsafe as C, and so is completely uninteresting from that perspective).
The complexity doesn't end with spatial vs temporal safety. For example, code review has been found to be one of the most effective correctness measures, so if a language made code reviews easier, it would be very interesting from a correctness/security perspective.
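To keep the spatial/temporal distinction concrete, here is a minimal Rust sketch (nothing beyond the standard library is assumed): the live code is the spatial case, and the commented-out part is the temporal case the borrow checker refuses to compile.

    fn main() {
        let v = vec![1, 2, 3];

        // Spatial safety: indexing is bounds-checked, so an out-of-bounds access
        // panics (or, with .get(), returns None) instead of reading neighbouring memory.
        assert_eq!(v.get(10), None);

        // Temporal safety: use-after-free is rejected at compile time.
        //
        //     let r = &v[0];
        //     drop(v);          // error[E0505]: cannot move out of `v` because it is borrowed
        //     println!("{r}");
    }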
The whole Rust ecosystem is heavily biased towards prioritising memory safety and "safe by construction".
This is evident in the standard library, in how crates approach API design, what the compilation defaults are, ...
In 6+ years of using Rust the only time I had to deal with segfaults was when working on low level wrappers around C code or JIT compilation.
Zig has some very interesting features, but the way they approach language and API design leaves a lot of surface area that makes mistakes easy.
I’ve written a good chunk of low level/bare metal rust—unsafe was everywhere and extremely unergonomic. The safety guarantees of Rust are also much weaker in such situations so that’s why I find Zig very interesting.
No oob access, no wacky type coercion, no nullptrs solves such a huge portion of my issues with C. All I have to do is prove my code doesn’t have UAF (or not if the program isn’t critical) and I’m basically on par with Rust with much less complexity.
I don’t actually mind Rust when I was able to write in safe user land, but for embedded projects I’ve had a much better time with Zig.
No it is not. We have a lot of amazing and rock solid software written in C and C++. Stuff mostly works great.
Sure, things could be better, but there is no reason why we need to act right now. This is a long-term decision that doesn't need to be rushed.
> What is left is pretty much only Zig and Rust.
We had Ada long before Rust and it is a pretty amazing language. Turns out security isn't that important for many people and C++ is good enough for many projects apparently.
There is also D, Nim, Odin and so on.
> Garbage collection is a huge dealbreaker
It isn't. We had Lisp Machines in the 80s and automatic garbage collection has vastly improved these days. So I wouldn't rule those out either.
In short, no, the ship is not sinking. There are many options to improve things. The problem is that once you depend on Rust it will be hard to remove, so it is better to think things through rather than rushing to adopt it.
> The Debian infrastructure currently has problems with rebuilding packages of types that systematically use static linking. With the growth of the Go and Rust ecosystems it means that these packages will be covered by limited security support until the infrastructure is improved to deal with them maintainably.
My first thought is that it is kind of like talking about gradually improving manual memory allocation in Java. C and C++ are fundamentally memory unsafe; it's part of their design, to offer complete control over memory in a straightforward, direct way.
Really? As opposed to e.g. C or C++ (as the most important languages which Rust is competing with)? Sure, taste plays into everything, but I think a lot of people work with Rust since it's genuinely a better tool.
I hear you on free software being controlled by corporate interests, but that's imo a separate discussion from how good Rust is as a language.
[1] https://github.com/johnperry-math/AoC2023/blob/master/More_D...
It seems like Ada more or less has to have memory safety bolted on -- that is what SPARK does -- and it's not clear that Ada's bias towards OO is better than Rust's bias towards functional programming.
Are you talking about features like type inference (so the Rust code could be less clear, since types are not always written out)?
You see, GP did not speak in relative terms, but absolutely: they believe Rust has problems. They did not suggest that problems with programming languages are basically all fungible, that we should sum up all problems, compare different languages, and see which ones come out on top.
Of course most people aren't smart enough for the language so they have to use inferior algol languages like rust.
This is someone who says things like
>It's important for the project as whole to be able to move forward and rely on modern tools and technologies and not be held back by trying to shoehorn modern software on retro computing devices.
While on company time.
Yes well, glad to hear there’s no one bullying people there!
Elitism is its own form of bullying and needs to be treated as such.
I don't particularly like large swaths of humanity, but I also try hard not to be elitist towards them either. I'm not always successful, but I make a strong effort as my family raised me to be respectful to everyone, even if you don't personally like them.
I'll wait.
Apparently, Rust is part of the "woke agenda"
If you opt into something with as high a barrier to entry and necessary time commitment as a programming language, you naturally also opt into the existing community around that language, because that will be where the potential contributors, people to help you solve issues, and people you have to talk to if you need the language or ecosystem to move in some direction will hail from. In turn, the community will naturally get to impose its own values and aesthetic preferences onto you, whether by proactively using the position of relative power they have over you, or simply by osmosis. As it happens, the community surrounding Rust does largely consist of American progressives, which should not be surprising - after all, the language was created by an American company whose staff famously threatened mutiny when its own CEO turned out to offend progressive sensibilities.
As such, it is natural that bringing Rust into your project would over time result in it becoming more "woke", just like using Ruby would make it more likely that you attract Japanese contributors, or targeting Baikal CPUs would result in you getting pulled into the Russian orbit. The "woke" side themselves recognises this effect quite well, which is why they were so disturbed when Framework pushed Omarchy as a Linux distribution.
Of course, one needs to ask whether it is fair to insinuate premeditation by calling a mere expected effect an "agenda". Considering the endlessly navel-gazing nature of the culture wars, I would find it surprising if there weren't at least some people out there who make the same observation as above, and do think along the lines that driving Rust adoption is [also] a good thing because of it. Thus, Rust adoption does become, in a sense, part of the "woke agenda", just as Rust rejection becomes, perhaps even more clearly so, part of the "chud agenda".
I think this analysis is basically accurate - there's no conspiracy or even deliberate agenda going on, it's just that the community surrounding Rust happens to have (at the moment, anyway) a relatively high number of American progressives, many of whom are openly interested in imposing American progressive ideological norms in spaces they care about (which is basically what we mean by the term "woke").
I think Rust is a good software tool and I would like to see it be as widely adopted and politically-neutral as C is, and used in all sorts of projects run by all sorts of people with all sorts of other agendas, political or otherwise. Consequently, I would like to see people and projects who do not agree with American progressive norms adopt the language and become active users of it, which will help dilute the amount of Rust users who are progressives. I myself am not an American political progressive and I have lots of issues with the stated politics of many well-known Rust developers.
The general temperature of politics in FOSS, I think, is not obviously lower than before: just in terms of things that made it onto HN, in the past month or so alone we have seen the aforementioned kerfuffle about dhh (the leader? founder? of Ruby on Rails), his projects and their detractors, and the wrestling over control between NixOS's board and its community moderators who were known for prosecuting political purges and wanted to assert formal authority over the former.
Personally, I'm simply bothered by the fact that (one of?) the most famous figure of Rust on Linux and Rust Forever consumes and advocates for pornography that's illegal in my country, without being held accountable by the community.
From what I could piece together, the only group who ever cried wolf about this is a forum full of contemptuous little angry men who spend weeks researching people they hate on the internet. No one seems to want to touch the subject for fear of being associated with them.
I'll give it to you, this is not a great time.
I'm pretty suspicious of demands for communities to hold people accountable, especially when the community in question is a loose group of people who mostly communicate online and are united by their shared use of a specific programming technology; and who probably disagree on all sorts of other issues, including contentious ones.
If some form of speech is illegal in your country, it does not automatically mean it should be illegal for the whole world, or that it is wrong, or that the world-wide community should adhere to standards specific to your country. Even if that country is the USA.
In other words, nobody should give a flying f about open source developers porn preferences.
Your abhorrent personal opinion of another individual has no place in a technical discussion.
If you could separate the language from the acolytes it would have seen much faster adoption.
That’s an interesting thought. It would run counter to everything we know about human nature, but interesting nevertheless.
Rust is already pretty successful adoption wise. It’s powering significant parts of the internet, it’s been introduced in 3 major operating systems (Windows, Linux, Android), many successful companies in a variety of domains have written their entire tech stack in it. Adoption as measured by crates.io downloads has doubled every year for the last 10 years.
Now I’m imagining how much more widely Rust would be used if they had adopted your visionary approach of never saying anything positive about it.
No, it's the people who have given rise to the multiple Rust memes over the years.
I'm battling to think of any other about-to-go-mainstream language that had the reputation of a hostile community. Scala? Kotlin? Swift? Zig? None of those languages have built such poor reputations for their communities.
After all, for quite a few years every thread on forums that mentioned C or C++ was derailed by Rust proponents. I didn't see C++ users jumping into Rust threads posting attacks, but there are many examples of Rust users jumping into C++ or C threads, posting attacks.
> That’s an interesting thought. It would run counter to everything we know about human nature, but interesting nevertheless.
Well, the fact that Rust is an outlier in this sample should tell you everything you need to know; other up-and-coming languages have not, in the past, gotten such a reputation.
Because you’re young or you weren't around in 2010 when Go was gaining adoption. Same shit back then. People said “I like the language, it’s quite useful” followed by tirades from people who thought it was the end of human civilisation. It had exactly the reputation you speak of. (“DAE generics???”)
Eventually the haters moved on to hating something else. That’s what the Rust haters will do as well. When Zig reaches 1.0 and gains more adoption, the haters will be out in full force.
I've been working as a programmer since the mid-90s
>> I'm battling to think of any other about-to-go-mainstream language that had the reputation of a hostile community.
> People said “I like the language, it’s quite useful” followed by tirades from people who thought it was the end of human civilisation.
And? That's not the same as having a hostile community. I never saw Go proponents enter C# or Java discussions to make attacks against the programmers using C# or Java, like I constantly saw with Rust proponents entering C or C++ discussions and calling the developers dinosaurs, incompetent, etc.
Hostile according to who? According to the haters, maybe. I’m sure the Go community was called “hostile” by haters back in the day.
Look at the drama created by Linux maintainers who were being insanely hostile, coming up with spurious objections, being absolute asshats - to the point where even Linus said enough was enough. The Rust for Linux members conducted themselves with dignity throughout. The Linux subsystem maintainers acted like kindergarteners.
But of course, haters will read the same emails and confirmation bias will tell them they’re right and Rust is the problem.
Keep hating.
I was there, and no it wasn't. The Go community didn't jump into every programming discussion throwing around accusations of dinosaur, insecurity, etc.
I also notice that these language debates are very much generational. That has a few consequences. First is that older devs have thicker skin. Second, older devs are more wary of the big promises made by Rust. Whether you like it or not, the push for Rust very much comes across as naivete as much as anything to older, more experienced devs who have seen this type of thing before.
You can't write a device driver without manipulating memory directly. An OS kernel has to manipulate memory directly by definition. Most academic research into memory-safe languages is mixed, with a high number of null results (meaning it doesn't work). Yet the Rust folks push it as the 'one true way'. Meanwhile, most Rust open-source projects are currently abandoned.
It's not hate, it's pointing out a track record and avoiding repeating past mistakes, due to painful experiences in our youth. Your determination to repeat past mistakes doesn't come across as the enlightenment you think it is.
The Android team shipped a more secure operating system to billions of people. Their lives are better because of choosing more Rust and Kotlin and less C++.
> You can't write a device driver without manipulating memory directly.
This isn’t the gotcha you think it is. Check out this upstreamed driver - https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/lin...
This is a successful kernel driver that powers all IPC in Android. This is the most load bearing component in Android, especially because it is constantly being attacked by malware. It manipulates memory just fine actually.
In your hurry to dismiss Rust, you haven’t done a technical evaluation of it. If you had you wouldn’t conflate memory safety with a lack of memory manipulation. You’ve taken the intellectually lazy shortcut of dismissing a new thing because no new thing can be as good as the old things.
I write all this not to convince you to change your mind. I don’t think that’s possible. I write it so anyone else reading this will avoid your thought process. I don’t need to convince you, because this industry is moving ahead regardless.
We are all aware of unsafe. We are also aware that all those assurances of safety go away in those circumstances.
This is cherry-picking. I didn't say all research papers, just most. This is a very specific circumstance. Under specific circumstances these ideas will work.
This example is one where the replaced code wasn't that old, on a very specific set of hardware and in a rather specific case. Its basically the ideal set of conditions for a rewrite. But there are plenty of cases where attempts to swap in Rust aren't being done in ideal conditions like this. I doubt switching to Rust is never a good idea. I also doubt switching to Rust is always a good idea.
PS The problem is partly the way you write. I criticize ideas. You criticize me. That's a big part of why you get pushback.
How could anyone possibly think this? It runs on ARMv7, ARM64, x86 and x86-64. What else do you want it to run on? It runs on literally billions of devices, made by hundreds of manufacturers - phones, TVs, Chromebooks, cars. All of these couldn’t be more different.
Windows/MacOS are examples of supporting specific hardware. Android works on incredibly diverse hardware. All of which will be powered by Rust now.
> wasn’t that old
Binder is about 20 years old. Linux is 30 years old, by the way. It was developed across 3 different corporations. There’s a reason the code was incredibly difficult to make changes to.
Could we stick to facts, please? This stuff isn’t hard to google.
> I also doubt switching to Rust is always a good idea.
No one said this, by the way. This is a strawman you’ve constructed.
PS the problem is that none of what you write makes sense. No one is giving me any pushback apart from you. And the only way you’re able to pushback is by completely ignoring facts.
There absolutely are, and have been. You could say it's a reaction. I don't want to argue about who started it.
I agree with you that if the Rust community has gained such a peculiar reputation, it's also due to valid reasons.
I have rarely seen an argument that pushes back against Rust with actual alternative solutions to the problems the rust proponents are trying to solve. It is mostly a bunch of old people letting the perfect be the enemy of the good.
> I didn't see C++ users jumping into Rust threads posting attacks, but there are many examples of Rust users jumping into C++ or C threads, posting attacks.
I've already seen this with Zig. And even without language communities: look at this whole thread. Look into the mirror. Regularly, when Rust is mentioned on HN, the anti-Rust cult comes to complain that there is Rust.
Even if someone just posts "I made this with Rust", this cult comes and complains "why do you need to mention Rust?!". Like, look at yourself. Who hurt you?
Pointing out that the Rust community has gained such a poor reputation while other communities have not requires "looking into the mirror"?
Rust haters seem strangely obsessed.
Well, this is a great example. People complaining about the community are labeled as people complaining about the language.
Do you not see the problem here?
Because it literally says "Rust haters"; not "Rust community haters".
Are you saying that when someone refers to "Rust", they mean the community and not the language?
Maybe. What does that have to do with the Rust community having such a poor reputation compared to other communities?
Good news: you can. And that's why it has had fast adoption.
(those advocating for Rust in "meme-like" ways are not generally the same people actually developing the Rust compiler or the core parts of it's ecosystem)
As far as I read on HN, the only memory safe language discussed on HN is Rust, and mostly with childish pro arguments.
EDIT: from a brief search: it doesn't.
For a language to be memory safe it means there must be no way to mishandle a function or use some object wrong that would result in an "unsafe" operation (for Rust, that means undefined behavior).
That is to say, the default is safe and you are given an escape hatch, while in something like C/C++ the default is unsafe.
I'd also like to add that program correctness is another separate concept from language safety and code safety, since you could be using an unsafe language writing unsafe ub code and still have a correct binary.
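A minimal sketch of that split, with made-up function names purely for illustration:

    // Safe by default: out-of-bounds access is caught (a panic with `[]`, or None with
    // `.get()`), never a silent read past the end of the slice.
    fn third(values: &[u32]) -> Option<u32> {
        values.get(2).copied()
    }

    // The escape hatch: the compiler only accepts this behind `unsafe`, and the caller
    // has to uphold the invariant themselves.
    unsafe fn third_unchecked(values: &[u32]) -> u32 {
        // SAFETY: the caller must guarantee values.len() > 2, otherwise this is UB.
        unsafe { *values.get_unchecked(2) }
    }

    fn main() {
        let v = [10, 20, 30];
        assert_eq!(third(&v), Some(30));
        // Calling the unchecked version requires an `unsafe` block, which marks exactly
        // the spot a reviewer needs to scrutinise.
        assert_eq!(unsafe { third_unchecked(&v) }, 30);
    }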
I think it isn’t reasonable to infer that nobody uses something because you don’t know anybody who uses it in your niche. I know lots of embedded programmers who use Rust.
I think the linked requirement, the hype you see, and Rust's own material are misleading: it's not a memory-safety one-trick lang; it's a nice overall lang and tool set.
Lack of drivers is prohibitive if you are a small/medium team or are using a lot of complicated peripherals or SoCs. Compare to C, where any MCU or embedded SoC or moderately complex peripheral normally comes with C driver code.
In practice I end up rewriting drivers. Which sounds daunting, but often it's much easier than folks think, and the resulting code is usually a quarter the size of the original C code or smaller. If you only implement what you need, sometimes drivers can be less than 100 lines of Rust.
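To make that less abstract, here is a rough sketch of the usual shape of such a driver. The peripheral, register offsets and bit names are invented for illustration, and the "registers" are a local array so the sketch runs on a host; a real driver would use the datasheet's addresses and, more likely, embedded-hal traits.

    use std::ptr::{read_volatile, write_volatile};

    /// Hypothetical UART-ish peripheral: a status register and a data register laid
    /// out as two consecutive 32-bit words.
    struct Uart {
        base: *mut u32,
    }

    impl Uart {
        const STATUS: usize = 0; // word offset of the status register
        const DATA: usize = 1;   // word offset of the data register
        const TX_READY: u32 = 1; // status bit meaning "transmit buffer empty"

        /// SAFETY: the caller must pass a valid, properly aligned register block.
        unsafe fn new(base: *mut u32) -> Self {
            Uart { base }
        }

        fn write_byte(&mut self, byte: u8) {
            unsafe {
                // Spin until the (hypothetical) hardware reports it is ready, then write.
                while (read_volatile(self.base.add(Self::STATUS)) & Self::TX_READY) == 0 {}
                write_volatile(self.base.add(Self::DATA), byte as u32);
            }
        }
    }

    fn main() {
        // Stand-in for the memory-mapped registers: status pre-set to "ready".
        let mut regs = [Uart::TX_READY, 0u32];
        let mut uart = unsafe { Uart::new(regs.as_mut_ptr()) };
        uart.write_byte(b'!');
        assert_eq!(regs[1], b'!' as u32);
    }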
Zig is an example of excelling at C interop--not Rust.
And Cargo is an impediment in the embedded ecosystem rather than a bonus.
Part of why we're getting Rewrite-it-in-Rust everywhere is precisely because the C interop is sufficiently weak that you can't do things easily in a piecemeal fashion.
And let's not talk about Rust compile times and looking at Rust code in a debugger and just how bad Rust code is in debug mode ...
And note: browsers are the pathological case, in terms of build system integrations, global state assumptions, C++, etc.
(Your other complaints have a place, and don't seem unreasonable to me. But they're empirically not impediments to Rust's interop story.)
[1]: https://chromium.googlesource.com/chromium/src/+/refs/heads/...
[2]: https://firefox-source-docs.mozilla.org/build/buildsystem/ru...
[3]: https://www.memorysafety.org/blog/rustls-nginx-compatibility...
One problem: it's tedious going from the pointer-level API bindgen gives you to a high-level Rust API that has references, arrays etc., in that you have to write some boilerplate for each bit of functionality you want. Not a big deal for a specific application, but not ideal if making a general library. And C libs tend to be sloppy with integer types, which works, but is not really idiomatic for Rust. Maybe that could be automated with codegen or proc macros?
I believe the ESP-IDF rust lib is mostly FFI (?); maybe that's a good example. We've been re-inventing the wheel re STM-32 and Nordic support.
https://github.com/rust-embedded/cortex-m
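Roughly, the per-function boilerplate looks like the sketch below. The `sensor_read` name and signature are invented, and the `ffi` module is implemented in Rust purely so the sketch is self-contained; with a real C library it would be bindgen's generated output.

    // Stand-in for what bindgen might emit for a C function like
    //   int sensor_read(uint8_t *buf, size_t len);
    mod ffi {
        use std::slice::from_raw_parts_mut;

        pub unsafe extern "C" fn sensor_read(buf: *mut u8, len: usize) -> i32 {
            if buf.is_null() {
                return -1;
            }
            let out = unsafe { from_raw_parts_mut(buf, len) };
            for (i, b) in out.iter_mut().enumerate() {
                *b = i as u8; // pretend the hardware filled the buffer
            }
            len as i32
        }
    }

    /// The hand-written layer: slices and Results on the outside, raw pointers and
    /// C-style return codes hidden inside. Writing one of these per C function is
    /// the boilerplate being described.
    pub fn sensor_read(buf: &mut [u8]) -> Result<usize, i32> {
        let rc = unsafe { ffi::sensor_read(buf.as_mut_ptr(), buf.len()) };
        if rc < 0 { Err(rc) } else { Ok(rc as usize) }
    }

    fn main() {
        let mut buf = [0u8; 4];
        let n = sensor_read(&mut buf).expect("sensor_read failed");
        assert_eq!((n, buf), (4, [0, 1, 2, 3]));
    }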
Even the embedded world is slowly changing.
That's you. At companies like Microsoft and Google, plenty of people think about and discuss Rust, with some products/features already using Rust.
EC2 (lots of embedded work on servers), IAM, DynamoDB, and parts of S3 have all been using Rust heavily for quite a few years now.
We can move really fast with Rust as compared to C, while still saving loads of compute and memory compared to other languages. The biggest issue we've hit is binary size, which matters in the embedded world.
Linux has added support for Rust now. I don't think Rust's future supremacy over C is doubtful at this point.
AWS might honestly be the biggest on Rust out of all the FAANGs, based on what I've heard too. We employ loads of Rust core developers (incl. Niko, who is a Sr PE here) and have great internal Rust support at this point :). People still use the JVM where performance doesn't matter, but anywhere performance matters, I don't see anyone being okay-ed to use C over Rust internally at this point.
In addition to that, part of the team didn't have fun writing code in Rust.
We trashed the whole tool, which was a massive loss of time for the project.
I dislike the tone of the evangelism and the anti-C attitude, but I'm not anti-Rust. I purchased a computer with an oversized amount of RAM in part so I could experiment with Rust. But determining how to write, edit and compile small programs, from the ground up, without cargo appears exceedingly difficult, and feels like going against the tide.
It stands to reason that the embedded programmer commenting was unable to determine how to avoid using cargo and pulling in unnecessary dependencies. Otherwise he would not have encountered this problem.
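For what it's worth, a single-file program with no external crates does build with plain rustc and no cargo at all; a trivial example, assuming nothing beyond the standard library:

    // hello.rs — a complete program with no Cargo.toml and no crates.io dependencies.
    // Build and run with:
    //   rustc hello.rs -o hello && ./hello world
    fn main() {
        let args: Vec<String> = std::env::args().skip(1).collect();
        let who = if args.is_empty() { "world".to_string() } else { args.join(" ") };
        println!("hello, {who}");
    }

Dependencies are where it stops being simple: you can hand rustc `--extern name=libfoo.rlib` and `-L` search paths yourself, but that bookkeeping is exactly what cargo automates, which is probably why avoiding it feels like going against the tide.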
e.g. Chrome & Fuchsia both build included Rust bits using their existing build system.
Bazel and Buck2 both work well with it, relatively.
One can also just be really disciplined with Cargo and not add superfluous deps and be careful about the ones you do include to monitor their transitive dependencies.
IMHO this is more about crates.io than Cargo, and is the biggest weakness of the language community. A bulk of developers unfortunately I think come from an NPM-using background and so aren't philosophically ... attuned... to see the problem here.
Once you take out cargo, Rust's development environment becomes quite poor.
There is a lot of legacy code out there and I generally feel like many of the rust advocates forget how important it is to play well with legacy project setups.
This is your bias alone. I know tons of people and companies that do. Rust most likely runs on your device.
Most people nowadays who criticize Rust do so on a cultural basis of "there are people who want this so and it changes things therefore it is bad". But never on the merits.
Rust is a good language that contains in its language design some of the lessons the best C programmers have internalized. If you are a stellar C programmer you will manually enforce a lot of the same rules that Rust enforces automatically. That doesn't mean Rust is a cage; you can always opt for unsafe if you feel like it.
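A toy but concrete instance of such a rule, assuming nothing beyond the standard library:

    fn main() {
        let mut scores = vec![1, 2, 3];

        // A rule careful C programmers follow by convention: don't keep a pointer into
        // a container while also mutating it (the mutation may reallocate and leave the
        // pointer dangling). Rust turns that convention into a compile error:
        //
        //     let first = &scores[0];
        //     scores.push(4);        // error[E0502]: cannot borrow `scores` as mutable
        //     println!("{first}");   //               because it is also borrowed as immutable
        //
        // Reordered so the shared borrow ends before the mutation, it compiles and runs:
        let first = scores[0];
        scores.push(4);
        println!("first = {first}, len = {}", scores.len());
    }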
But I know if my life depended on it I would rather write that program in Rust than in C, especially if it involves concurrency or multiprocessing.
Practically, on embedded the issue is that most existing libraries are written in C or C++. That can be a reason not to choose it in daily life, but it is not a rational reason for which a programming language sucks. Every programming language once had only one user. Every programming language once had no dependencies written in it. Rust is excellent at letting you combine it with other languages. The tooling is good. The compiler error messages made other languages realize how shitty their errors were.
Even if nobody programmed in Rust, the good bits of that language lift the quality in the other languages.
In this mindset, arguing against change is an argument on the merits. Because everything you spend time on has the opportunity cost of everything else you could spend time on.
We could now pretend their position is: "Oh, we got this shiny new language that magically makes everything 100% safe and thus we need to rewrite everything." But that is not the position. Most of them are aware that a rewrite is always a trade-off. You could reintroduce old bugs etc.
As I said, I program languages on both sides on the divide and if I had to write and maintain secure software that my life depended on I would certainly prefer to write it in Rust. Memory safety would be just a tiny part of that. The other factors would be the strict type system (can be used to enforce certain guarantees that contributers cannot easily fuck up) and the tooling (the builtin testing is stellar).
The future of tooling is going to be written in the language people of the coming generations like to write. There was a time when C++ or even C was the new shiny thing. Why can't we just write all software in assembly like in the good old days? Because there were some actual tangible benefits to doing it in C and that's the language people with the ability of doing the job chose.
I am not saying a Rust rewrite makes sense in every case, but if you check the past decade of CVEs on a project and half of them would have been prevented by the vanilla Rust compiler maybe that's the rational thing?
Secondly the argument that because you don't use it in your area no one should use it in OS development is nonsensical.
This is entirely the wrong lens. This is someone who wants to use Rust for a particular purpose, not some sort of publicity stunt.
> I know nobody that programms or even thinks about rust. I’m from the embedded world a there c is still king.
Now’s a good time to look outside of your bubble instead of pretending that your bubble is the world.
> as long as the real money is made in c it is not ready
Arguably, the real money is made in JavaScript and Python for the last decade. Embedded roles generally have fewer postings with lower pay than webdev. Until C catches back up, is it also not ready?
Telling people they need to take their ball and go home if they're incapable or unable to maintain an entire compiler back-end seems like a, shall we say, 'interesting' lens for a major distro such as Debian.
Just to parse some files?
It's cool that you can run modern Debian on an Amiga or whatever, but it's not particularly important that that be the case.
People selling slop does not imply much about anything other than the people making the slop
(I similarly have yet to see a single convincing argument to try to fight past the awkward, verbose and frustrating language that is rust).
No changes required. Bringing up the fil-C toolchain on weird ports is probably less work than bringing up the Rust toolchain
It also doesn't help you to attract new contributors. With the changes we made over in Ubuntu to switch to rust-coreutils and sudo-rs, we have seen an incredible uptick in community contributions amongst other things, and it's very interesting to me to try to push APT more into the community space.
At this time, most of the work on APT happens by me staying awake late, or during weekends and my 2-week Christmas break; the second largest chunk is the work I do during working hours, but that's less cool and exciting stuff :D
Adding Rust into APT is one aspect; the other, possibly even more pressing need is rewriting all the APT documentation.
Currently the APT manual pages are split into apt-get and apt-cache and so on, with a summary in apt(8) - we should split them across apt install(8), apt upgrade (8) and so on. At the same time, DocBook XML is not very attractive to contributors and switching to reStructuredText with Sphinx hopefully attracts more people to contribute to it.
Sorry to double-reply, but this is actually a super important point in favor of Fil-C.
If you adopted Fil-C for apt, then you could adopt it optionally - only on ports that had a Fil-C compiler. Your apt code would work just as well in Fil-C as in Yolo-C. It's not hard to do that. I think about half the software I "ported" to Fil-C worked out of the box, and in those cases where I had to make changes, they're the sort of changes you could upstream and maintain the software for both Fil-C and Yolo-C.
So, with Fil-C, there would be no need to ruffle feathers by telling port maintainers to support a new toolchain!
We'll have to see how this plays out but it's not super plug and play.
Some notes about that here: https://cr.yp.to/2025/fil-c.html
That's easily fixable.
> It also doesn't help you to attract new contributors.
I don't understand this point.
as easily as fixing Rust to work on the remaining 4 architectures?
> > It also doesn't help you to attract new contributors.
> I don't understand this point.
C++ doesn't attract a lot of developers, Rust attracts many more. I want more community, particularly _young_ community. I don't wanna work on this alone all the time :D
Easier, because you won't have to port Fil-C to all of the architectures in order to use it on amd64.
> C++ doesn't attract a lot of developers, Rust attracts many more.
C is #2 on TIOBE.
C++ is #3 on TIOBE.
Rust is #16 on TIOBE.
So I don't know what you're talking about
GitHub also just published Octoverse 2025 and Rust still hasn't cracked the top 10: https://github.blog/news-insights/octoverse/octoverse-a-new-...
Meanwhile, C++ is steadfast and even C is on the edges.
Looking at these lists, Go is an interesting option. It's rising in popularity and there is also a young community interested in it. It also integrates much better with existing C projects. Are there requirements for manual memory management? Would porting to Go instead of Rust have noticeable impacts on performance? Thinking about it now, Go seems like a more prudent option than Rust that achieves all of the publicly stated goals.
i guess it's cool for c(++) to have nice tiobe rankings but if they're not contributing how is that relevant?
If he was asking for C/C++ contributors, he'd be asking for help maintaining a mature project. That's less fun. It's mature, grown-up work for serious people. Those serious people probably already have serious jobs. So, fewer people will show up.
And this argument about "young" contributors is the same nonsense that came from your senior management. But you're independent.
Aren't the experienced engineers supposed to be leading the next generation? If you really want to get the young folks on board, drop Ubuntu and call it Gyatt. Instead of LTS, call it Rizz. Just think of all the young who will want to work on Skibidi 26.04!
Rust attracts hype and hype artists. Ask me how I know. Do you want drive-by people or do you want long-term community members? There are many young folk interested in learning C and looking for adequate mentorship along with a project to work on. Wouldn't that be a better use of energy? Have you even put out any outreach to attract others to these projects where you say you're alone?
You are making a mistake and falling on the sword for your bosses at the same time. Tough days are here but maybe hold on for better employment than this.
Rust is the present and the future and it's quite logical that it becomes a key requirement in Linux distributions, but I'm really not convinced by the wording here… This last sentence feels needlessly antagonistic.
A nostalgia-fuelled Linux distro, maybe using a deliberately slimmed down or retro kernel, and chosen software could make a lot more sense than continuing to try to squeeze Debian onto hardware that was already obsolete at the turn of the century, while also promoting Debian as a viable choice for a brand new laptop.
Solved problem:
United States Patent Application 3127321 Date of Patent March 31, 1964 NUCLEAR REACTOR FOR A RAILWAY VEHICLE
alpha, hppa, m68k and sh4
To be fair, lots of people did use Motorola 68xxx CPUs when those were new, it's just that it was 40+ years ago in products like the Commodore Amiga. The SH4 is most popularly connected to the Dreamcast, Sega's video game console from back when Sega made video game consoles.
The Alpha and PA-RISC were seen in relatively recent and more conventional hardware, but in much smaller numbers. By "relatively recent" I mean early this century; these are not products anybody bought five years ago, and when they were on sale they were niche products for a niche that, in practical terms, was eaten by Microsoft.
It's obviously more likely it's just fans of the language with a knee-jerk reaction of "ackshully you're totally definitely wrong, but uh... don't ask me how, you just are" than legitimate talking points.
It aint stupid if its the truth lol
Feel free to stay "ignant" though, making stupid claims and refusing to back them up when asked.
I am asking if the former option is a practical one
For other architectures currently unsupported by Rust, I doubt it'll happen. The CPU architectures themselves are long dead and often only used for industrial applications, so the probability of hobbyists getting their hands on them is pretty slim.
People still using these old architectures for anything but enthusiast hacking will probably not be using Debian Trixie, and if they do, they can probably find a workaround. It's not like the .deb format itself is changing, so old versions of apt and dpkg will keep working for quite a while.
I would consider that passive-aggressive, or an insult
I see the deadline more as a "expect breakages in weird unofficial Debian downstreams that were never supported in the first place" or "ask your weird Debian downstream maintainer if this is going to cause problems now". It's not that Debian is banning unofficial downstreams or semi-proprietary forks, but it's not going to let itself be limited by them either.
And who knows, maybe there are weird Debian downstreams that I don't know of that do have a working Rust compiler. Projects like Raspbian are probably already set but Debian forks for specific boards may need to tweak a few compiler settings to make compilers emit the right instructions for their ARM/MIPS CPUs to work.
I only find the message passive-aggressive or insulting if you're of the opinion you're entitled to Debian never releasing software that doesn't work on the Commodore64.
... This is Debian we're talking about here?
... What distros are recommended for those who intend to continue trying to squeeze utility out of "retro computing devices"?
... And what sort of minimum specifications are we talking about, here?
But for end users on Debian trying to compile rust stuff is a nightmare. They do breaking changes in the compiler (rustc) every 3 months. This is not a joke or exaggeration. It's entirely inappropriate to use such a rapidly changing language in anything that matters because users on a non-rolling distro, LIKE DEBIAN, will NOT be able to compile software written for its constantly moving bleeding edge.
This is an anti-user move to ease developer experience. Very par for the course for modern software.
That is, in fact, a gross exaggeration. Breaking changes to rustc are extremely rare.
The rustc version will be fixed for compatibility at every release, and all Rust dependencies must be ported into apt.
In the debian context, the burden imposed by rust churn and "cargo hell" falls on debian package maintainers.
First, Debian is not a distro where users have to compile their software. The packages contain binaries, the compilation is already done. The instability of Rust would not affect users in any way.
And second, as a developer, I never had a more unpleasant language to work with than Rust. The borrow checker back then was abysmal. Rust is not about developer happiness - Ruby is - but its memory safety makes it a useful option in specific situations. But you can be sure that many developers will avoid it like the plague - and together with the breakage and long compile times that's probably why moves like the one dictated here are so controversial.
Sure it would. Suppose a rust-based package has a security bug. Upstream has fixed it, but that fix depends on some new rust language feature that the frozen version of rust in Debian doesn't have yet.
I would be worried if even C++ dependencies were added for basic system utilities, let alone something like Rust.
Now, granted, I'm not an expert on distro management, bootstrapping etc. so maybe I'm over-reacting, but I am definitely experiencing some fear, uncertainty and doubt here. :-(
This is the status quo and always has been. gcc has plenty of extensions that are not part of a language standard that are used in core tools. Perl has never had a standard and is used all over the place.
For example, IIUC, you can build a perl interpreter using a C compiler and GNU Make. And if you can't - GCC is quite bootstrappable; see here for the x86 / x86_64 procedure:
https://stackoverflow.com/a/65708958/1593077
and you can get into that on other platforms anywhere along the bootstrapping chain. And then you can again easily build perl; see:
https://codereflections.com/2023/12/24/bootstrapping-perl-wi...
apt is so late in the process that these bootstrapping discussions aren’t quite so relevant. My point was that at the same layer of the OS, there are many, many components that don't meet the same criteria posted, including perl.
I don't know if the rust compiler produces bigger binaries, but for a single program, it'll not make a big difference.
    > I find this particular wording rather unpleasant and very unusual to what I'm used to from Debian in the past. I have to admit that I'm a bit disappointed that such a confrontational approach has been chosen.
Ref: https://lists.debian.org/debian-devel/2025/10/msg00286.html
Because that saves a lot of headaches down the line.
We don't want to introduce complex code: copying only the parts that are actually reachable would be silly and would introduce bugs.
But keep in mind valgrind is super buggy and we spend quite a bit of time working around valgrind false positives (outside of amd64)
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=802778
10 years old. It never was a false positive. It was fixed a good few years ago. The fix did not involve suppressing the error.
Valgrind does need a lot of work, especially for missing CPU features and for Darwin. I’m not aware of many memcheck bugs that aren’t relatively obscure corner cases.
If you have encountered bugs please report them to https://bugs.kde.org.
It's quite surprising and it takes days to weeks to debug each of these, going down to the assembler level and verifying that by hand.
Or I guess if you interpret this as a societal scale: we've collectively used C in production a lot, and look at all the security problems. Judgment completed. Quality is low.
By Sequoia, are they talking about replacing GnuPG with https://sequoia-pgp.org/ for signature verification?
I really hope they don't replace the audited and battle-tested GnuPG parts with some new-fangled project like that just because it is written in "memory-safe" rust.
Meanwhile, GnuPG is well regarded for its code maturity. But it is a C codebase with nearly no tests, no CI pipeline(!!), an architecture that is basically a statemachine with side effects, and over 200 flags. In my experience, only people who haven't experienced the codebase speak positively of it.
It exits 0 when the verification failed, it exits 1 when it passed, and you have to ignore it all and parse the output of the status fd to find the truth.
It provides options to enforce various algorithmic constraints but they only work in some modes and are silently ignored in others.
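To make the status-fd point concrete, here's a minimal sketch (in Rust, with placeholder paths and a deliberately simplified success check - a real verifier would also look at VALIDSIG, expiry, key validity, and so on) of parsing the machine-readable status lines instead of trusting the human-readable output:

  use std::io;
  use std::process::Command;

  // Sketch only: `keyring`, `sig`, and `data` are placeholder paths.
  // The machine-readable status lines ("[GNUPG:] ...") are the interface
  // you are supposed to parse, not the normal output.
  fn signature_is_good(keyring: &str, sig: &str, data: &str) -> io::Result<bool> {
      let out = Command::new("gpgv")
          .args(["--status-fd", "1", "--keyring", keyring, sig, data])
          .output()?;
      let status = String::from_utf8_lossy(&out.stdout);
      Ok(status.lines().any(|l| l.starts_with("[GNUPG:] GOODSIG")))
  }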
Does Sequoia-PGP have similar credentials and who funds it?
Loved this statement on the state of modern software using the backbone of C (in linux and elsewhere)
     C              modern C++
  "int foo[5]" -> "array<int,5> foo"
It's easy to criticize simple examples like the one above, since the C++ (or Rust) version is longer than the C declaration, but consider something like this:
  char *(*(**foo[][8])())[];
and the idiomatic Rust equivalent:
  let foo: Vec<[Option<fn() -> Vec<String>>; 8]> = Vec::new();
The latter can be parsed quite trivially by descending into the type declaration. It's also visible at a glance that the top-level type is a Vec, and you can easily spot the lambda and its signature.
Another ergonomic aspect of the Rust syntax is that you can easily copy the raw type, without the variable name:
  Vec<[Option<fn() -> Vec<String>>; 8]>
While the standalone C type looks like this:
  char *(*(**[][8])())[]
which is quite a mess to untangle ;)
Also, I think C# is generally closer to Rust than to C when it comes to the type syntax. A rough equivalent to the previous example would be:
  var foo = new List<Func<List<string>>?[]>();
I can't deny that "?" is more ergonomic than Rust's "Option<T>", but C# also has a way less expressive type system than Rust or C++, so pick your poison.
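Purely as an illustration of the Rust declaration above (all names here are invented), the nested type can be built and called like this, and the call site reads outside-in the same way the type does:

  // Illustrative only; exercises Vec<[Option<fn() -> Vec<String>>; 8]>.
  fn greet() -> Vec<String> {
      vec!["hello".to_string()]
  }

  fn main() {
      let mut foo: Vec<[Option<fn() -> Vec<String>>; 8]> = Vec::new();

      // One group of eight optional callbacks; only slot 0 is filled.
      let mut group: [Option<fn() -> Vec<String>>; 8] = [None; 8];
      group[0] = Some(greet);
      foo.push(group);

      // Vec index, then array index, then the Option, then the call.
      if let Some(f) = foo[0][0] {
          println!("{:?}", f());
      }
  }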
        int[] foo
        |   |  |
        |   |  foo
        |   is an array
        of ints
I still prefer Rust types though..
  > Be careful.  Rust does not support some platforms well.[0]  Anything
  > that is not Tier 1 is not guaranteed to actually work.  And
  > architectures like m68k and powerpc are Tier 3.
  > 
  > [0] <https://doc.rust-lang.org/beta/rustc/platform-support.html>.
[ The rustc book > Platform Support: https://doc.rust-lang.org/beta/rustc/platform-support.html ][ The rustc book > Target Tier Policy: https://doc.rust-lang.org/beta/rustc/target-tier-policy.html... ]
  Thank you for your message.
  Rust is already a hard requirement on all Debian release
  architectures and ports except for alpha, hppa, m68k, and
  sh4 (which do not provide sqv).
Create a plan to add support for {alpha, hppa, m68k, sh4} targets to the Rust compiler
- 2.5pro: "Rust Compiler Target Porting Plan" https://gemini.google.com/share/b36065507d9d :
> [ rustc_codegen_gcc, libcore atomics for each target (m68k does not have support for 64-bit atomics and will need patching to libgcc helper functions), ..., libc, liballoc and libstd (fix std::thread, std::fs, std::net, std::sync), and then compiletest will find thousands of bugs ]
So, CI build hours on those ISAs - emulated at first, then on the actual hardware?
"Google porting all internal workloads to ARM, with help from GenAI" (2025) https://news.ycombinator.com/item?id=45691519
"AI-Driven Software Porting to RISC-V" (2025) https://news.ycombinator.com/item?id=45315314
"The Unreasonable Effectiveness of Fuzzing for Porting Programs" (2025) https://news.ycombinator.com/item?id=44311241 :
> A simple strategy of having LLMs write fuzz tests and build up a port in topological order seems effective at automating porting from C to Rust.
Delusional overconfidence that developer “skill” is all that is needed to overcome the many shortcomings of C is not a solution to the problem of guaranteeing security and safety.
The C programmers I know are certainly not deluded or overconfident. They don't even think "their" language is a perfect one, or even a very good one. They just avoid black-and-white thinking. They take a practical approach to memory issues, seeing them more like any other kind of bug. It's a different aesthetic than you would maybe see from many Rust folks. They want to be productive, want to be in control, and want to understand what their code does. In turn, they accept that in some cases, bugs (possibly memory bugs) creep in, some of which may go unnoticed for some time. They tend not to see that as a huge issue, at least in general, because an issue that has gone unnoticed (or didn't manifest) is often less of a problem than one that is immediately obvious. (In the case of data corruption, it _can_ be a huge issue, and you have to add safeguards to prevent it, and have to accept some residual risk.)
They understand that everything is a trade off and that with experience and practice, good architecture, good tooling etc. you can prevent many bugs early, and detect them early. They have tried many approaches to prevent bugs, including fancy languages and constructs, and have concluded that in many cases, perfect safety is not possible, in particular it's not possible without seriously hurting other requirements, such as productivity.
As to valgrind, I can say that it was a bit of a mixed bag for me. It did help me find bugs a number of times, but I also had to configure it a bit because it was producing a lot of noise for some external libraries (such as libc). I don't really understand the underlying issues.
Btw., could you kindly look at the other issue that I overconfidently waved away as "probably a false positive"?
0: https://rustfoundation.org/media/ferrous-systems-donates-fer...
Do you think that it was made up from whole cloth in the abstract machine and implemented later? No, it was based on the available implementations of its time.
On top of that, languages like Python do not have a specification and yet have multiple implementations.
(Plus, architecture quantity isn’t exactly the thing that matters. Quality is what matters, and Rust’s decision to conservatively stabilize on the subset of LLVM backends they can reliably test on seems very reasonable to me.)
Honestly, I am not even opposed to Rust. It has cool ideas. I do think it should care a lot more about being portable and properly defined and should have done so a lot earlier and I do deeply disagree with the opinion of some core team members that specification is meaningless.
C obviously always was a questionable choice for a tool like apt but Rust seems even worse to me. Apt has absolutely no need to be written in a low level language. At least you could argue that C was chosen because it’s portable but I don’t see what Rust has going for it.
The war is over. ARM and x86 won.
(We detached this subthread from https://news.ycombinator.com/item?id=45782109.)
Rust also has multiple compilers (rustc, mrustc, and gccrs) though only one is production ready at this time.
There is other work on specifying Rust (e.g. the Unsafe Code Guidelines Working Group), but nothing approaching a real spec for the whole language. Honestly, it is probably impossible at this point; Rust has many layers of implementation-defined hidden complexities.
But even if we accept that, it doesn’t seem like a good comparative argument: anybody who has written a nontrivial amount of C or C++ has dealt with compiler-defined behavior or compiler language extensions. These would suggest that the C and C++ standards are “performative” in the same sense, but repeated claims about the virtues of standardization don’t seem compatible with accepting that.
The actual informal semantics in the standard and its successors is written in an axiomatic (as opposed to operational or denotational) style, and is subject to the usual problem of axiomatic semantics: one rule you forgot to read can completely change the meaning of the other rules you did read. There are a number of areas known to be ill-specified in the standard, with the worst probably being the implications of the typed memory model. There have since been formalized semantics of C, which are generally less general than the informal version in the standard and make some additional assumptions.
C++ tried to follow the same model, but C++ is orders of magnitude more complex than C, and thus its standard is overall less well specified than the C standard (e.g. there is still no normative list of all the undefined behavior in C++). It is likely practically impossible to write a formal specification for C++. Still, essentially all of the work on memory models for low-level programming languages originates in the context of C++ (and was then ported back to C and Rust).
Also, the C++ ordering model is defective in the sense that while it offers the orders we actually use it also offers an order nobody knows how to implement, so it's basically just wishful thinking. For years now the C++ standard has labelled this order "temporarily discouraged" as experts tried to repair the definition and C++ 26 is slated to just deprecate it instead. Rust doesn't copy that defect.
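For readers who haven't touched these orderings: Rust exposes the same Relaxed/Acquire/Release/AcqRel/SeqCst orderings as C++ but leaves consume out entirely. A minimal, illustrative sketch of the release/acquire pairing that everyone actually uses:

  use std::sync::atomic::{AtomicBool, AtomicI32, Ordering};
  use std::sync::Arc;
  use std::thread;

  fn main() {
      let data = Arc::new(AtomicI32::new(0));
      let ready = Arc::new(AtomicBool::new(false));

      let (d, r) = (Arc::clone(&data), Arc::clone(&ready));
      let producer = thread::spawn(move || {
          d.store(42, Ordering::Relaxed);   // write the payload
          r.store(true, Ordering::Release); // publish it
      });

      // The Acquire load pairs with the Release store above, so once we
      // observe `ready == true` we are guaranteed to also see `data == 42`.
      while !ready.load(Ordering::Acquire) {
          std::hint::spin_loop();
      }
      assert_eq!(data.load(Ordering::Relaxed), 42);

      producer.join().unwrap();
  }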
> The FLS is not intended to be used as the normative specification of the Rust language
But the people who use the language have an amazing talent to make people on the fence hate them within half a dozen sentences.
They remind me of Christian missionaries trying to convert the savages from their barbarous religions with human sacrifice to the civilised religion with burning heretics.
Rust people for some reason are.
https://www.cvedetails.com/vulnerabilities-by-types.php is a bit more clear. It's XSS, SQL injection, then memory corruption. The first two are not possible to enforce a fix on - you can always make a decision to do something bad with no visible annotation. Even then, rich types like in Rust make safe interfaces easier to produce. But Rust tackles the next class of issues - one that you can either verify to be safe or that requires an explicit "unsafe" around it.
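A tiny sketch of what that visible annotation looks like in practice (illustrative only): safe indexing is bounds-checked, and bypassing the check requires an `unsafe` block that reviewers and tools can grep for.

  fn main() {
      let v = vec![1u8, 2, 3];

      // Safe indexing is bounds-checked: `v[10]` would panic, never silently
      // read or write out of bounds.
      assert_eq!(v[1], 2);

      // Opting out of the check requires a visible annotation.
      let second = unsafe { *v.as_ptr().add(1) };
      assert_eq!(second, 2);
  }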
As for preventing XSS and SQL injections, that's what good web frameworks do. If your framework encourages you to write raw unescaped SQL, or doesn't provide sensible defaults around content policies, then no matter what language it's in, there are going to be issues (and maybe if we called these frameworks "unsafe" then we'd get somewhere with fixing them).
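As a sketch of what a sensible default looks like (using the rusqlite crate purely as an example; any framework with bound parameters works the same way), the point is that the value is bound by the driver instead of being spliced into the statement text:

  use rusqlite::{params, Connection, Result};

  // The name is bound as a parameter, so user input can never change the
  // shape of the statement, unlike string-concatenated SQL.
  fn add_user(conn: &Connection, name: &str) -> Result<usize> {
      conn.execute("INSERT INTO users (name) VALUES (?1)", params![name])
  }

  fn main() -> Result<()> {
      let conn = Connection::open_in_memory()?;
      conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)", [])?;
      // Even a hostile-looking string is just stored verbatim.
      add_user(&conn, "Robert'); DROP TABLE users;--")?;
      Ok(())
  }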
My feeling is that in the specific instance of using Rust in apt, this is most likely a good thing (though I hope existing well-tested Rust libraries are used rather than NIHing them and introducing new bugs), but so far Ubuntu's rustification has not gone smoothly, so I'm more wary of these changes than of, e.g., the improvements to Firefox via Rust.
I think that's much more likely to introduce bugs.
Think of it that way, a lot of the Rust libraries are rewriting existing copyleft libraries in permissive licenses, so they cannot look at the original code, dooming them to repeat the mistakes that were made in the original code and having to fix them all over again on their own (as both go from "oh this is simple" to "oh another corner case").
I just want to translate code 1:1 to Rust, reusing my existing knowledge, design decisions, and tests. It should behave _exactly_ the same as before, just memory safe.
There is no guarantee that other bugs do not flourish in the Rust ecosystem. There are no publicly known code quality checks of Rust programs except a big "trust us" (see Firefox with all its CVEs, despite "Rust"). And combined with the Cargo ecosystem, where every malicious actor can inject malware, that is a big warning sign.
And just an anecdote, Asahi Linux devs said that Rust made it very easy (at least relative to working with C) to write the drivers for the Apple M1 and M2 series, so it seems that the language has its merits, even without the Cargo ecosystem.
Also, Rust will only minimize certain kinds of bugs; others it cannot prevent. A few years ago Microsoft (I believe) said that 70% of the bugs found were memory related [0], which means that Rust would have prevented most of those.
Maybe Rust is not the best answer, but as of now it is the most proven answer to this particular problem; who knows if Zig or another language will replace both C and Rust in the future.
[0] https://www.zdnet.com/article/i-ditched-linux-for-windows-11...
If I got that right, how is "it's still not perfect" an argument?
Agree with the Cargo objection.
Use Rust for evergreen projects by all means, just leave mature tested systems alone, please.
Or maybe Debian should never rely on any software written after 2015?
> There is no guarantee that other bugs do not flourish in the Rust ecosystem.
well, less likely than in C thanks to an advanced type system, e.g. allowing authors of abstractions to make their APIs much more foolproof.
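A hypothetical sketch of what "foolproof by construction" can look like (all names and validation rules here are invented for illustration):

  // The only way to obtain a `PackageName` is through the validating
  // constructor, so every function accepting one can rely on the invariant.
  pub struct PackageName(String);

  impl PackageName {
      pub fn new(s: &str) -> Result<Self, String> {
          let ok = !s.is_empty()
              && s.chars()
                  .all(|c| c.is_ascii_lowercase() || c.is_ascii_digit() || "+-.".contains(c));
          if ok {
              Ok(PackageName(s.to_string()))
          } else {
              Err(format!("invalid package name: {s:?}"))
          }
      }

      pub fn as_str(&self) -> &str {
          &self.0
      }
  }

  // Callers can't accidentally pass an unvalidated string here.
  fn install(pkg: &PackageName) {
      println!("installing {}", pkg.as_str());
  }

  fn main() {
      let pkg = PackageName::new("apt").expect("valid name");
      install(&pkg);
  }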
> where every malicious actor can inject malware, that is a big warning sign.
Very much doubt that is the case...
Firefox is not even close to 100% Rust.
This is a wildly misinformed comment.
Not sure how that’s relevant when CL is basically dead and no one wants to work with it, while Rust is flourishing and delivering value
Citation needed.
Or, what can be asserted without evidence can be dismissed by pointing to ripgrep.
More evidence than you have provided for your claim "Rust isn't delivering value", what did you use to come to that conclusion?
To add on to that, with declarations the programmer can tell the Lisp compiler that (for example) a variable can be stack allocated to help improve performance. The fact that Lisp code is just data is another benefit towards performance as it means macros are relatively easy to write so some computation can be done at compile time. There are also various useful utilities in the spec which can be used to help profile execution of a program to aid in optimization, such as time and trace.
Fast forward five centuries, and it turns out they were in fact pretty successful: South America and central Africa are the places where Catholicism is most active today, far more than in Europe.
The Christian Brothers missions were hell holes across the undeveloped regions.
* https://www.theguardian.com/uk-news/2017/mar/02/child-migran...
* https://www.childabuseroyalcommission.gov.au/case-studies/ca...
Author quite literally said bcachefs-tools "is impossible to maintain in Debian stable", and the reason cited was Rust.
What would really be scary would be a distro that won't even boot unless a variety of LLM's are installed.
Boo!
I struggle to believe that this is really about a call to improve quality when there seem to be some other huge juicy targets.
I struggled over how to lay out the directories in my Git repo. The fact that I want to build from the Git repo is another layer of confusion - as opposed to building from a tarfile. I'm making something unstable for other developers right now, rather than a stable releasable item.
The next bit of extreme confusion is .... where should my package's install target put the binary plugins I built. I'm not going to try to go back and check over this in detail but as far as I remember the docs were very unspecific about that as if it could be anywhere and different user docs on the net seemed to show different things.
I got to the point where I could appear to build the thing on my machine but that's not good enough - the PPA has to be able to do it and then you've got to upload, wait and hope the log explains what's wrong well enough.
I tried looking at other packages - I'm building plugins for GNU make so I tried that - but it was using the build from tar approach (IIRC) and was way overcomplicated for my very simple package which is just a few .so files in a directory.
It took me a couple of weeks of messing around with every possible option to get this far and I just ran out of energy and time. I am not a beginner at programming - only at packaging - so IMO there is a great deal that could be done for the user experience. Don't get me wrong - I'm not picking on .deb. RPM is another incredibly horrible packaging system where every tiny mistake can force a long long long rebuild.
They're obviously complicated because they're trying to offer a lot; e.g. Artix doesn't use SELinux, so there's one misery avoided straight away, but that has a consequence.
IMO the core docs just don't prevent any of this confusion. They seem like a reference for people who already know what they're doing and enough tutorial for a very specific simple case that wasn't mine. People wouldn't bother to write their own tutorials if the docs filled the need.
Back to my original point - I don't think Rust is going to fix this.