To me, as someone who hasn't learned either, it looks like all the concepts and features of 'true' modern C++ (as opposed to C plus a few extra features) spliced with the features and concepts of Haskell.
Yet people seem to like making useful things in it so it must have gotten something right. So I'll probably get around to attempting to really use it again.
[1]: https://github.com/joshtriplett/rfcs/blob/use/text/3680-use....
Asking as someone whose life became much easier after opting not to do any of the above and just write C in C++ ;-)
If src and this are equal and you don't check for it, then you end up destroying src/this resources and the end result would be an empty vector (since the last step is to clear everything out).
The expected behavior is a no-op.
I probably still wouldn't care, unless it's clear that moving to self is even required. Trying to break everything in a thousand pieces and generalizing and perfecting them individually is a lot of busywork.
They have been trying to fix it over the last ~10 years, but they keep adding new clever expert features at the same time, so I kind of see it as a lost cause.
Rust doesn't have that much less complexity if you count all the stable (or, worse, unstable experimental) features it has.
What makes Rust so nice is that, nearly always(1), if you don't use a feature you don't have to know about it (not so in C++); and if you stumble onto a feature you don't know, it's not only usually visible in the syntax, but using it wrongly won't segfault or lead to an RCE due to out-of-bounds writes or the like.
So for C++ you have to learn a lot of the stuff upfront, but are not forced to by the compiler, and getting it wrong can have catastrophic consequences.
But for Rust you can learn it (mostly) bit by bit.
It's not perfect: when you go unsafe you are touching a lot of "modern compiler" complexity which now also needs to map onto Rust's guarantees (though you also have better safeguards/visibility in Rust).
And async doesn't work so well with the bit by bit learning.
But in my personal experience, if you don't have a team consisting only of senior C++ devs, I would stay far away from C++. On the other hand, with the right guidelines/limitations, using Rust with junior engineers is just fine (the guidelines matter so people don't get hung up on the wrong things or over-complicate things).
Complexity is not necessarily an automatic dealbreaker for a Rust language proposal—lots of Rust features are complicated because they solve problems that don't admit simple solutions—but the purpose of this particular feature is to make it easier to write code that uses reference-counted pointers and reduce how much stuff the programmer has to keep track of. It doesn't let you do anything you can't already do, it just makes it less clunky. So if people can't easily understand what it does, then that's bad.
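(For concreteness, a minimal sketch of the clunky status quo being referred to, with made-up names: each closure that wants a reference-counted handle needs an explicitly cloned copy before the `move` capture takes ownership.)

use std::sync::Arc;
use std::thread;

fn main() {
    let config = Arc::new(String::from("shared settings"));

    // Today: every closure that needs `config` gets its own explicitly
    // cloned handle before `move` takes ownership of it.
    let handles: Vec<_> = (0..2)
        .map(|i| {
            let config = Arc::clone(&config); // the boilerplate in question
            thread::spawn(move || println!("worker {i} sees {config}"))
        })
        .collect();

    for h in handles {
        h.join().unwrap();
    }
}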
Complexity is fine. Sometimes we're working on complex problems without simple, straightforward, correct solutions. I do try to avoid complications, though.
I meant that more in the spirit of "oh, while we're talking about this, here are my thoughts on a tangentially related idea".
Often the best way to proceed is to just solve a simpler problem :)
Case in point, my first read on lifetimes just left me confused. So I used Rc<> everywhere and made a perfectly functional program. Picking up lifetimes after mastering the basics made it a lot easier. Most people won't even need to care about lightweight clones.
Curious: did you run into lifetime issues, or did you just start wrapping everything in Rc<> after reading about lifetimes? Wrapping everything in Rc<> isn't even a bad thing; that's what you have to do when you do WASM in a browser.
I still find it confusing sometimes because you're not setting a lifetime, you're just giving it a name so that the compiler can reason about it.
Like, saying something is 'static doesn't make it static; it's supposed to mean "lives until the program closes", but you can totally create things with 'static and free them before the program exits.
IMO this is something that should just be handled by extra runtime code or a magically smarter compiler. Lifetime management feels like something that matters in a microcontroller or hyper-optimized setting, but never when I’m executing code on Apple Silicon for a random application. And yet the language makes simple GC ergonomically painful. I love the language but don’t really need all of that performance. I would gladly take a 1% hit to increment reference counts.
With Rc you're telling the compiler "I don't know the lifetime, or can't describe it to you using the Rust type system. Please figure it out at runtime."
Like `x: Foo<'a>` means "x is constrained by lifetime 'a" as in "x is only guaranteed to be soundly usable if 'a is 'alive'".
With the implicit context of "you are only allowed to use things which are guaranteed to be soundly usable".
And that "moving" is distinct from lifetimes (i.e. if you move x out of scope you just don't have it anymore, while a 'a constraint on x limits where you _can_ "have" it).
Then `'static` basically means "no constraint"/"unconstrained".
So `x: X + 'static` means x has no constraints; as long as you have it, you can use it.
Where `x: X + 'a` (or `X<'a>`) would mean you can have x _at most_ until 'a stops being "alive". (This doesn't mean the data behind X doesn't live longer, just that in that specific place you can "have" it at most while 'a is alive.)
So then if we look at `&'static A`, we have a type which 1) is always copied instead of moved, so you always have it while you can have it, and 2) you can always have it. As a consequence it must live for the whole program execution. Not because `'static` says "until end of program", but because an unconstrained &reference can only be sound if its referent exists until the end of the program!
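(A small illustration of that distinction, with made-up names: a `'static` bound only says "unconstrained by borrows"; it doesn't force the value to live until the program exits.)

// A `'static` bound means "no borrow constrains this value",
// not "this value lives until the program exits".
fn takes_unconstrained<T: 'static>(x: T) {
    drop(x); // the value is freed right here, mid-program
}

fn main() {
    // An owned String contains no borrows, so it satisfies `'static`,
    // and it is happily destroyed inside the call above.
    takes_unconstrained(String::from("hello"));

    // A borrow of a local is constrained by that local's lifetime,
    // so the same call would be rejected:
    let local = String::from("world");
    let r: &str = &local;
    // takes_unconstrained(r); // error: `local` does not live long enough

    // A string literal really is `&'static str`: an unconstrained
    // reference must point at data that lives for the whole program.
    let forever: &'static str = "baked into the binary";
    takes_unconstrained(forever);
    let _ = r;
}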
I guess you don't have that much experience with actual C++ development? There's a plethora of static analysis tools, and any serious IDE comes with refactoring tools on top of that, both assistive and automated, that will suggest fixes as you type. Rust didn't invent anything with clippy, however good the tool might be...
While the Use trait (a better name is proposed [1]) is useful, I don't see how the .use syntax adds any value over .clone(). If the compiler is able to perform those optimizations, it can also lint and ask the user to remove the unnecessary .clone() or change x.clone() to &x. IMO this is better than the compiler doing black magic.
[1]: https://smallcultfollowing.com/babysteps/blog/2025/10/07/the...
And from all the discussions I have seen, this RFC is one of the less promising ones, as it mixes up the concepts of "implicitly doing an operation before moving something into a closure scope" and "lightweight clones" in a confusing, ambiguous way.
So I don't expect it to be accepted/implemented this way, but I expect something similar to happen.
Like, for "lightweight clones", use is a pretty bad name and new syntax isn't needed; if we start shortening `.clone()` to `.use` because it saves 4 letters, then we are doing something wrong. Similarly, if we argue for it for niche optimization reasons instead of improving generic optimizations to get the same outcome, we are doing something wrong IMHO.
And for the scoping/closure aspect (i.e. a Rust equivalent of the capture list `[]` of C++ closures, e.g. [&x](){...}), it also seems a bad solution. First, it's an operation which relates to the closure scope, not the call inside of it, so attaching it to the call inside of it isn't a great idea, especially given that you might pass a clone of x in at multiple places, leading to a lot of ambiguity that isn't highlighted by any examples in the RFC. Secondly, this concept isn't limited to "cheap clones"; sometimes you have the same pattern for not-so-cheap clones (though it mainly matters for cheap clones). And lastly, if we really want to add ways to define captures, why not allow defining captures?
Now, sure, if you have a good solution for a more compact way to handle "not-Copy but still cheap" sharing (cloning of handles/smart pointers), then maybe it could also make sense to allow it to happen implicitly outside of the closure capture scope instead of the invocation scope. But I would argue that's an extension of a not-yet-existing handle/smart-pointer ergonomics improvement and should be done after that improvement.
(yes, I'm aware they use `use` because it's already a keyword, but making a non-zero-cost copy of a handle/smart pointer isn't exactly "using it", it's more like "sharing it" :/)
Obviously I’m not building anything production ready in it but it’s been useful for a few apps that I’d previously been using Python for.
https://smallcultfollowing.com/babysteps/blog/2025/10/07/the...
If your favorite language is assembly and C is too high-level for you then you are probably going to dislike Rust (and Java, Python, and every other modern language).
C++ is incredibly complicated. I mean there's a 278 page book just on initialization [1].
I have seen all sorts of bad multithreaded code that compilers have let someone write. It would've been much harder in Rust but Rust would've forced you to be correct. As an example, I've seen a pool of locks for message delivery where the locks are locked on one thread and unlocked on another. This would get deadlocked every now and again so every half second or so a separate process would just release all the locks.
Rust is definitely on the same order of magnitude of complexity, but you don't have to remember it all. Usually if you forget some complex rule and make a mistake, the compiler will tell you.
That's not true for unsafe Rust, but you rarely need unsafe Rust. Apart from FFI I have yet to use it at all and I've been writing Rust for years.
Async Rust is probably the closest it gets to C++'s "you didn't explicitly co_return in the context of a promise with no return_void? ok carry on but I'm going to crash sometimes! maybe just in release mode, on another OS".
I'm not commenting on Rust, seriously! But I couldn't help but notice that this sentence is a non sequitur. Something right has been developed in PHP and Visual Basic, even in JavaScript and Go; still, the developers who freely choose to use those abominations deserve to be pointed at and made fun of.
Huh, I missed that part. It's a pretty technical point, but I'm happy they made the decision, it held up a lot of discussions.
I just decided to do a good ol' 'find -name "*.rs"' in the kernel tree to get a sense for what all this is about. From what I can tell, there's just an API compatibility layer (found in /rust) and then a smattering of proof-of-concept drivers in tree that appear to just be simple rewrites of existing drivers (with the exception of the incomplete nvidia thing) that aren't even really in use. From what I can tell, even the Android binder Rust rewrite is vestigial.
the whole thing seems kinda cute but like, shouldn't this experiment in programming language co-development be taking place somewhere other than the source tree for the world's most important piece of software?
Redox is a pretty cool experimental piece of software that might be the OS of the future; why not do it there?
""" Normally, when you write a brand new kernel driver as complicated as this one, trying to go from simple demo apps to a full desktop with multiple apps using the GPU concurrently ends up triggering all sorts of race conditions, memory leaks, use-after-free issues, and all kinds of badness.
But all that just… didn’t happen! I only had to fix a few logic bugs and one issue in the core of the memory management code, and then everything else just worked stably! Rust is truly magical! Its safety features mean that the design of the driver is guaranteed to be thread-safe and memory-safe as long as there are no issues in the few unsafe sections. It really guides you towards not just safe but good design. """
https://asahilinux.org/2022/11/tales-of-the-m1-gpu/
> the whole thing seems kinda cute but like, shouldn't this experiment in programming language co-development be taking place somewhere other than the source tree for the world's most important piece of software?
Torvalds seems to disagree with you.
Rust does, successfully, guarantee the lack of data races. It also guarantees the lack of memory-unsafety resulting from race conditions in general (which to be fair largely just means "it guarantees a lack of data races", though it does also include things like "race conditions won't result in a use after free or an out of bounds memory access").
If by address it you mean "show how C/C++ does this"... they don't and this is well known.
If by address it you mean "prove that Rust doesn't do what it says it does"... at that point you're inviting someone to teach you the details of how Rust works down to the nitty gritty in an HN comment. You'd be much better off finding and reading the relevant materials on the internet than someone's offhand attempt at recreating them on HN.
Sadly, I don't know rustlang, so I can't tell if the inability to describe its features in more commonly used terms is due to incompetence or the features being irrelevant to this discussion (see the title of the thread).
As near as I can tell, to give you the answer you're looking for I'd have to explain the majority of Rust to you: how traits work, and auto traits, and unsafe trait impls, and ownership, and the borrow checker, and (for it to make sense as a practical thing) interior mutability. Then I could point you at the standard library concepts of Send and Sync which someone mentioned above and they would actually make sense, and then I could give some examples of how everything comes together to enable memory-safe, efficient, and ergonomic threading primitives.
But this would no longer be a discussion about a rust language feature, but a tutorial on rust in general. Because to properly understand how the primitives that allow rust to build safe abstractions work, you need to understand most of rust.
Send and Sync (mentioned up thread) while being useful search terms, are some of the last things in a reasonable rust curriculum, not the first. I could quickly explain them to someone who already knew rust, and hadn't used them (or threads) at all, because they're simple once you have the foundation of "how the rest of rust works". Skipping the foundation doesn't make sense.
† "Memory safety" was admittedly possibly popularized by rust, but is equivalent to "the absence of undefined behaviour" which should be understandable to any C programmer.
Well, yes, but that's the whole value of Rust: you don't need to use these overly-cautious defensive constructs, (at least not to prevent data races), because the language prevents them for you automatically.
Yes
> I.e. it assumes that all other code is Rust for the purpose of those checks?
Not exactly; it merely assumes that you upheld the documented invariants when you wrote code to call/be-called-from other languages. For example, if I have an `extern "C" fn foo(x: &mut i32)`, the invariants are that (sketched after this list):
- x points to a properly aligned, properly allocated i32 (not null, not the middle of an unallocated page somewhere)
- The only way that memory will be accessed for the duration of the call to `foo` is via `x`. Which is to say that other parts of the system won't be writing to `x` or making assumptions about what value is stored in its memory until the function call returns (Rust is, in principle, permitted to store some temporary value in `x`'s memory even if the code never touches x beyond being passed it, so long as when `foo` returns the memory contains what it is supposed to). Note that this implies that a pointer to the same memory isn't also being passed to Rust some other way (e.g. through a static which doesn't have a locked lock around it)
- foo will be called via the standard "C" calling convention (on x86_64 Linux this means, for instance, that the stack pointer must be 16-byte aligned at the call, which is the type of constraint that is very easy to violate from assembly and next to impossible to violate from C code).
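A minimal sketch of how those obligations typically get written down on the Rust side (the C prototype in the comment is illustrative, not something from this thread):

/// Called from C as: void foo(int32_t *x);
///
/// # Safety (upheld by the caller, i.e. the C side)
/// - `x` is non-null, properly aligned, and points to a live i32 allocation.
/// - Nothing else reads or writes that memory for the duration of the call;
///   `x` is the only way it is accessed until `foo` returns.
/// - The call uses the standard "C" calling convention.
#[no_mangle] // spelled #[unsafe(no_mangle)] on edition 2024
pub unsafe extern "C" fn foo(x: &mut i32) {
    // Inside the function, safe Rust may treat `x` like any other
    // exclusive reference, because the invariants above are assumed.
    *x += 1;
}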
That it's up to the programmer to verify the invariants is why FFI code is considered "unsafe" in rust - programmer error can result in unsoundness. But if you, the programmer, are confident you have upheld the invariants you still get the guarantees about the broader system.
Rust is generally all about local reasoning. It doesn't actually care very much what the rest of the system is, so long as it called us following the agreed-upon contract. It just has a much more explicit definition of what that contract is than C.
This is fairly narrow; C functions often aren't actually safe to call: for example they take a pointer that must be valid, or they have requirements about the relative values of parameters or the state of the wider system that can't be checked by Rust, which again makes them unsafe. But there are cases where this affordance is a nice improvement.
I like the term "checked" and "unchecked" better but not enough to actually lobby to change them, and as a term of art they're fine.
As far as I can tell, ANY guarantee provided by ANY language is "just a language construct" that fails if we assume there is other code executing which is ill-behaved.
The point is rather that it's not. The "trait Send/Sync things" specify whether a value of the type is allowed to be, respectively, moved or borrowed across thread boundaries.
It won't prevent all races, but it might help avoid mistakes in a few of em. And concurrency is such a pain; any such machine-checked guarantees are probably nice to have to those dealing with em - caveat being that I'm not such a person.
I can only tell you: open your mind. Is Rust just a fad? The latest cool new shiny, espoused only by amateurs who don’t have a real job? Or is it something radically different? Go dig into Rust. Compile it down to assembly and see what it generates. Get frustrated by the borrow checker rules until you have the epiphany. Write some unsafe code and learn what “unsafe” really means. Form your own opinion.
Speaking seriously, they surely meant data races, right? If so, what's preventing me from using C++ atomics to achieve the same thing?
Rust inherently models this idea. Read about Rust's "Send" and "Sync" marker traits. e.g. https://doc.rust-lang.org/std/marker/trait.Send.html
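(A minimal illustration of how that modeling surfaces in practice; the names are made up. The compiler rejects sending a non-atomically-refcounted handle to another thread and accepts the atomic equivalent.)

use std::rc::Rc;
use std::sync::Arc;
use std::thread;

fn main() {
    // Rc's reference count is not atomic, so Rc<T> is not Send.
    let not_thread_safe = Rc::new(42);
    // This fails to compile:
    // thread::spawn(move || println!("{not_thread_safe}"));
    // error[E0277]: `Rc<i32>` cannot be sent between threads safely

    // Arc uses atomic counts, is Send + Sync, and works fine.
    let thread_safe = Arc::new(42);
    let handle = thread::spawn(move || println!("{thread_safe}"));
    handle.join().unwrap();

    let _ = not_thread_safe; // keep the sketch warning-free
}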
> Is rust going to synchronize shared memory access for me?
Much better than that. (safe) Rust is going to complain that you can't write the unsynchronized nonsense you were probably going to write, shortcutting the step where in production everything gets corrupted and you spend six months trying to reproduce and debug your mistake...
I can't remember the last time I faced a data race, to be honest.
I guess the real question is: how well does it all hold up when you have teamwork and everything isn't strictly adherent to one specific philosophy.
Spatial memory safety is easy, just check the bounds before indexing an array. Temporal memory safety is easy, just free memory only after you've finished using it, and not too early or too late. As you say, thread safety is easy.
Except we have loads of empirical evidence--from widespread failures of software--that it's not easy in practice. Especially in large codebases, remembering the remote conditions you need to uphold to maintain memory safety and thread safety can be difficult. I've written loads of code that created issues like "oops, I forgot to account for the possibility that someone might use this notification to immediately tell me to shut down."
What these annotations provide is a way to have the compiler bop you in the head when you accidentally screw something up, in the same way the compiler bops you in the head if you fucked up a type or the name of something. And my experience is that many people do go through a phase with the borrow checker where they complain about it being incorrect, only to later discover that it was correct, and the pattern they thought was safe wasn't.
"Just" annotations... that are automatically added (in the vast majority of cases) and enforced by the compiler.
> proper use of mutexes and lock ordering aren't that hard, they just require a little bit of discipline and consistency.
Yes, like how avoiding type confusion/OOB/use-after-free/etc. "just require[s] a little bit of discipline and consistency"?
The point of offloading these kinds of things onto the compiler/language is precisely so that you have something watching your back if/when your discipline and consistency slips, especially when dealing with larger/more complex systems/teams. Most of us are only human, after all.
> how well does it all hold up when you have teamwork and everything isn't strictly adherent to one specific philosophy.
Again, part of the point is that Send/Sync are virtually always handled by the compiler, so teamwork and philosophy generally aren't in the picture in the first place. Consider it an extension of your "regular" strong static type system checks (e.g., can't pass object of type A to a function that expects an unrelated object of type B) to cross-thread concerns.
Rust has real improvements here, like this example from the Fuchsia team of enforcing lock ordering at compile time [0]. This is technically possible in C++ as well (see Alon Wolf's metaprogramming), but it's truly dark magic to do so.
https://a10nw01f.github.io/post/advanced_compile_time_valida...
Given how the committee works and the direction they insist on taking, C++ will never ever become a safe language.
Bit of a fun fact, but as one of the linked articles states the C++ committee doesn't seem to be a fan of stateful metaprogramming so its status is somewhat unclear. From Core Working Group issue 2118:
> Defining a friend function in a template, then referencing that function later provides a means of capturing and retrieving metaprogramming state. This technique is arcane and should be made ill-formed.
> Notes from the May, 2015 meeting:
> CWG agreed that such techniques should be ill-formed, although the mechanism for prohibiting them is as yet undetermined.
No, they are not. You also don't need mutex ordering as much, since mutexes in Rust are a container type: you can only get hold of the inner value as a reference by calling the lock method.
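(For readers who haven't seen it, a small sketch of what "Mutex is a container" means in practice; the names are made up.)

use std::sync::Mutex;

fn main() {
    // The data lives *inside* the Mutex; there is no way to reach it
    // without going through lock(), so "forgot to take the lock" bugs
    // are ruled out by the type system.
    let counter = Mutex::new(0_u32);

    {
        let mut guard = counter.lock().unwrap();
        *guard += 1;
        // The lock is released here, when `guard` goes out of scope.
    }

    println!("counter = {}", *counter.lock().unwrap());
}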
Mutex as a container has no bearing on lock ordering problems (deadlock).
Wow, you must be really smart! I guess you don’t need rust. For the rest of us who find concurrent programming difficult, it is useful.
Rust’s strict ownership model enforces more correct handling of data that is shared or sent across threads.
> Speaking seriously, they surely meant data races, right? If so, what's preventing me from using C++ atomics to achieve the same thing?
C++ is not used in the Linux kernel.
You can write safe code in C++ or C if everything is attended to carefully and no mistakes are made by you or future maintainers who modify code. The benefit of Rust is that the compiler enforces it at a language level so you don’t have to rely on everyone touching the code avoiding mistakes or the disallowed behavior.
Nothing is preventing you from writing correct C++ code. Rust is strictly less powerful (in terms of possible programs) than C++. The problem with C++ is that the easiest way to do anything is often the wrong way to do it. You might not even realize you are sharing a variable across threads and that it needs to be atomic.
Well, pretty close to that, actually! Rust will statically prevent you from accessing the same data from different threads concurrently without using a lock or atomic.
> what's preventing me from using C++ atomics to achieve the same thing
You might forget?
Imagine this C++ code:
class Foo {
    // ...
public:
    void frobFoo();
};
Now, is it okay to call `frobFoo` from multiple threads at once? Maybe, maybe not -- if it's not documented (or if you don't trust the documentation), you will have to read the entire implementation to answer that. Now imagine this Rust code:
struct Foo {
    // ...
}

impl Foo {
    fn frobFoo(&mut self) {
        // ...
    }
}
Now, is `frobFoo` okay to call from multiple threads at once? No, and the language will automatically make it impossible to do so. If we had `&self` instead of `&mut self`, then it might be okay; you can discover whether it's okay by pure local reasoning (looking at the traits implemented by Foo, not the implementation), and if it's not, then the language will again automatically prevent you from doing so (and also prevent the function from doing anything that would make it unsafe).
I don't really care for mindless appeals to authority. Make your own arguments and defend them, or don't bother.
This GPU driver looks pretty cool though. Looks like there's much more to the Rust compatibility layer in the Asahi tree, and it is pretty cool that they were able to ship so quickly. I'd be curious how kernel Rust compares to userspace Rust with respect to bloat. (Userspace Rust is pretty bad in that regard, imo.)
You previously appealed to Linux being the world's most important source tree, and then you choose to eschew the opinion of that project's lead.
Unless you have a specific falsifiable claim that is being challenged or defended, it's not at all a fallacy to assume expert opinions are implicitly correct. It's just wisdom and good sense, even if it's not useful to the debate you want to have.
This isn't an appeal to just any authority, but the authority that defines Linux and is its namesake.
This is incorrect. The android binder rust rewrite is planned to wholly replace the current C implementation.
https://www.phoronix.com/news/Rust-Binder-For-Linux-6.18
And most of the big drivers written for Apple M-series hardware are written in Rust, those are not simple rewrites or proof of concepts.
Vestigial how? The commit message in the version in Linus's tree (from commit eafedbc7c050c44744fbdf80bdf3315e860b7513 "rust_binder: add Rust Binder driver") makes it seem rather more complete:
> Rust binder passes all tests that validate the correctness of Binder in the Android Open Source Project. We can boot a device, and run a variety of apps and functionality without issues. We have performed this both on the Cuttlefish Android emulator device, and on a Pixel 6 Pro.
> As for feature parity, Rust binder currently implements all features that C binder supports, with the exception of some debugging facilities. The missing debugging facilities will be added before we submit the Rust implementation upstream.
----
> shouldn't this experiment in programming language co-development be taking place somewhere other than the source tree for the world's most important piece of software?
Rust for Linux did start as an out-of-tree project. The thing is that no matter how much work you do out of tree if you're serious about trying integration out you'll have to pull in experimental support at some point - which is more or less what is happening now.
It'll go faster once all the bindings are in place and people have more experience with this stuff. I've been greatly looking forward to expanding bcachefs's use of rust, right now it's just in userspace but I've got some initial bindings for bcachefs's core btree API.
Real iterators, closures, better data types, all that stuff is going to be so nice when it can replace pages and pages of macro madness.
The avalanche has already started. It is too late for the pebbles to vote.
[0]: https://blog.regehr.org/archives/1180
[1]: https://blog.regehr.org/archives/1287
> This post is a long-winded way of saying that I lost faith in my ability to push the work forward.
The gem of despair:
> Another example is what should be done when a 32-bit integer is shifted by 32 places (this is undefined behavior in C and C++). Stephen Canon pointed out on twitter that there are many programs typically compiled for ARM that would fail if this produced something besides 0, and there are also many programs typically compiled for x86 that would fail when this evaluates to something other than the original value.
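(For contrast, a small sketch of how Rust surfaces this choice in its integer API instead of leaving the behavior undefined.)

fn main() {
    let x: u32 = 0x8000_0001;

    // Shifting a 32-bit value by 32 is undefined behavior in C/C++.
    // In Rust the programmer picks the semantics explicitly:
    assert_eq!(x.checked_shl(32), None);          // "that's an error, tell me"
    assert_eq!(x.wrapping_shl(32), x);            // "mask the shift amount: 32 % 32 == 0"
    assert_eq!(x.overflowing_shl(32), (x, true)); // "mask, but flag the overflow"

    // A plain `x << 32` would panic in debug builds (and is rejected at
    // compile time when the shift amount is a constant).
}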
Even if it is, though, we don't have it. It seems like Linux should go with the solution we have in hand and can see works, not a solution that hasn't been developed or proved possible and practical.
Nor is memory safety the only thing Rust brings to the table; it also brings a more expressive type system that prevents other mistakes (just not as categorically) and lets you program faster. Supposing we got this memory-safe C that somehow avoided this complexity... I don't think I'd even want to use it over the more expressive memory-safe language that also brings other benefits.
† A memory-safe managed C is possible of course (see https://fil-c.org/), but it seems unsuitable for a kernel.
†† There are some other alternatives to the choices rust made, but not meaningfully less complex. Separately you could ditch the complexity of async I guess, but you can also just use rust as if async didn't exist, it's a purely value added feature. There's likely one or two other similar examples though they don't immediately come to mind.
Can you give an example? One that remained a low level language, and remained ergonomic enough for practical use?
> Second, I do not even believe that memory safety is that important that this trumps other considerations
In your previous comment you stated "a memory safe C would be far more useful. It is sad that not more resources are invested into this". It seems to me that after suggesting that people should stop working on what they are working on and work on memory safe C instead you ought to be prepared to defend the concept of a memory safe C. Not to simply back away from memory safety being a useful concept in the first place.
I'm not particularly interested in debating the merits of memory safety with you, I entered this discussion upon the assumption that you had conceded them.
They can't, of course, because there was no such language. Some people for whatever reason struggle to acknowledge that (1) Rust was not just the synthesis of existing ideas (the borrow checker was novel, and aspects of its thread safety story like Send and Sync were also AFAIK not found in the literature), and (2) to the extent that it was the synthesis of existing ideas, a number of these were locked away in languages that were not even close to being ready for industry adoption. There was no other Rust alternative (that genuinely aimed to replace C++ for all use cases, not just supplement it) just on the horizon or something around the time of Rust 1.0's release. Pretty much all the oxygen in the room for developing such a language has gone to Rust for well over a decade now, and that's why it's in the Linux kernel and [insert your pet language here] is not.
BTW, this is also why people are incentivized to figure out ways to solve complex cases like RCU projection through extensible mechanisms (like the generic field projection proposal) rather than ditching Rust as a language because it can't currently handle these ergonomically. The lack of alternatives to Rust is a big driving factor for people to find these abstractions. Conversely, having the weight of the Linux kernel behind these feature requests (instead of e.g. some random hobbyist) makes it far more likely for them to actually get into the language.
> But it seems kind of random why they picked Rust. I do not think there is anything which makes it particularly good and it certainly has issues.
Like I said, they picked Rust because there was literally no other suitable language. You're avoiding actually naming one because you know this is true. Even among academic languages, very few targeted being able to replace C++ everywhere directly, as C++ itself was deemed unsuitable for verification due to its complexity. People were much more focused on the idea of providing end-to-end verified proofs that C code matched its specification, but that is not a viable approach for a language intended to be used by regular industry programmers. Plenty of other research languages wanted to compete with C++ in specific domains where the problem fit a shape that made the safety problem more tractable, but they were not true general-purpose languages and it was not clear how to extend them to become such (or whether the language designers even wanted to). Other languages might have thought they were targeting the C++ domain but made far too many performance sacrifices to be suitable candidates, or gave up on safety where the problem gets hard (how many "full memory safety" solutions completely give up on data races, for example? More than a few).
As a "C++ guy", Rust was the very first language that gave us what we actually wanted out of a language (zero performance compromises) while adding something meaningful that we couldn't do without it (full memory safety). Even where it fell short on performance or safety, the difference with other languages was that nobody said "well, you shouldn't care about that anyway because it's not that big a deal on modern CPUs" or "well, that's a stupid thing for a user to do, who cares about making that case safe?" The language designers genuinely wanted to see how far we could push things without compromises (and still do). The work to allow even complex Linux kernel concurrent patterns (like RCU or sequence locking) to be exposed through safe APIs, without explicitly hardcoding the safety proofs for the difficult parts into the language, is just an extension of the attitude that's been there since the beginning.
I agree that memory safety is useful, but I think the bigger problem is complexity, and Rust goes in the wrong direction. I also think that any investment into safety features - even if not achieving perfect safety - in C tooling would have much higher return of investment and bigger impact on the open-source ecosystem.
The fact that Rust gets to benefit from the project too is just an added bonus.
The language co-development isn't unique to Rust. There are plenty of features in GCC and Clang that exist specifically for Kernel usage too.
Because people want to use Rust where they use C, right now? Whereas yours is a perfectly fine criticism, it ignores that people want the good stuff, everywhere, in the things they actually use every day. And since this is something the project lead wants to do, this doesn't seem to problem/live issue.
Even figuring out what exactly the Linux kernel API safety rules are is a huge task, much less encoding them in a computer-readable form.
The code there is not about C<->Rust FFI. It's about encoding Linux kernel API properties into a safe Rust API.
The uncertainty of the calling/ordering rules is exactly why kernel C has been hard to write. For VFS locking rules, you pretty much have to simulate Al Viro's brain and replay his whole life experience...
Since this is a Git repo, I'd go with `git ls-files '*.rs'`.
That’s an interesting idea. You should let this guy know you disagree with his technical decision -> torvalds@linux-foundation.org
Note that there's some discussion about the name of that proposal, because "optimization" gives the wrong idea (that it's optional or could depend on the backend).
"Any structures larger than x, or any structures marked with a marker type, are returned by the caller providing an outref to a buffer with correct size, and the callee directly writes the structure into that buffer."
Then you could just write normal code like
fn initialize() -> A {
    // the initialization code
}
and it would just work? And it would reduce unnecessary copies in a lot of other situations too?
Given how much work has been put into this issue, and how much less convenient the proposed solutions are, I feel like I must be missing something.

Essentially the problem is composability. If you are building a large type from a sequence of other large types, and one or more steps is fallible, the normal return ABI breaks down very quickly.
AIUI, that's why MaybeUninit<T> exists. But even if you address the issue of it being unsafe to assert that a MaybeUninit has been initialized (which &out references could in principle solve) there are real problems with this; for example, MaybeUninit<T> has no niches or free-for-use padding even when T does, so you can't just "project" MaybeUninit to individual fields except in special cases. My understanding is that C++ partial initialization has the exact same issues in principle, they just don't come up as often because the standard for code correctness is a lot less rigorous.
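(A minimal sketch of the field-by-field MaybeUninit pattern being described, with a made-up `Big` type; note that the "projection" has to go through raw pointers precisely because MaybeUninit has no safe field projection.)

use std::mem::MaybeUninit;

struct Big {
    header: u64,
    payload: [u8; 64],
}

fn init_big() -> Big {
    let mut slot = MaybeUninit::<Big>::uninit();
    let p = slot.as_mut_ptr();

    // "Projection" to each field is done through raw pointers; there is
    // no safe `&mut slot.header` because the memory is not initialized yet.
    unsafe {
        (&raw mut (*p).header).write(42);
        (&raw mut (*p).payload).write([0u8; 64]);

        // SAFETY: every field has been written above.
        slot.assume_init()
    }
}

fn main() {
    let big = init_big();
    assert_eq!(big.header, 42);
}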
Much like how you don't have the C stdlib when writing kernel code, Rust is used with the no_std option. You do not use cargo and do not have access to crates.
You'd likely have to rewrite half of tokio to use kernel level abstractions for things like sockets and everything else that interacts with the OS.
At first glance, these features look quite general to me and not particularly tied to the kernel, they are important utilities for doing this kind of programming in the real world.
These new features are all about making things that the kernel devs need possible in safe Rust. This often requires support for some quite fancy abstractions, some of which cannot be expressed in current stable Rust.
I am currently working on a fairly involved C & Rust embedded systems project and getting the inter-language interface stable and memory-leak free took a good amount of effort. It probably didn't help that I don't have access to valgrind or gdb on this platform.
Only if you primarily work with `cargo` and want to interact with C from Rust. The other way around has far less support, and `rustc` does not standardize the object generation. This is actively preventing projects like `systemd` from adopting Rust, as an example.
In what way(s) does Rust's C interop depend on cargo?
> The other way around has far less support and `rustc` does not standardize the object generation.
I believe in this context the understanding is that you're going to be using `extern "C"` and/or `#[repr(C)]` in your Rust code, which gives you a plain C interface. I think attempting to use "raw" Rust code from other languages is a rare phenomenon, if it's even attempted at all.
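(A minimal sketch of that "plain C interface" approach; the names here are illustrative, not from the thread.)

/// Matches `struct Point { double x; double y; };` on the C side.
#[repr(C)]
pub struct Point {
    pub x: f64,
    pub y: f64,
}

/// Callable from C as: double point_norm(struct Point p);
#[no_mangle]
pub extern "C" fn point_norm(p: Point) -> f64 {
    (p.x * p.x + p.y * p.y).sqrt()
}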
> This is actively preventing projects like `systemd` to adopt Rust into their project as an example.
Could you point out specific instances from that thread? From a quick glance I didn't see any obvious instances of someone saying that using Rust from C is problematic.
Do rust and cargo allow for multiple interpretations of the same C header file across different objects in the same program? That's how C libraries are often implemented in practice due to preprocessor tricks, though I wish it wasn't normal to do this sort of thing.
However, some people use cargo's build scripts to build C programs, which you can then link into your Rust program. Support then depends on whatever the person wrote in the script, which in my experience usually delegates to whatever build system that project uses. So it should work fine.
If the header files are consumed by C code that is then consumed by Rust then you'll have full support for what C supports because it will be compiled by a C compiler.
I guess looking at that pedantically that's "just" a tooling issue, rather than an issue with the Rust language itself. That's not really a useful distinction from an end user's perspective, though; it's friction either way, and worth addressing.
Writing Rust code to be called from C (but within the same application)? Doable but somewhat painful.
Writing Rust code to act like a C shared library? Quite painful and some pretty important features are missing (proper symbol versioning support being the most obvious one). Theoretically doable if you're willing to compromise.
There's also some aspects of FFI-safety that are very subtle and easy to mess up:
* #[repr(C)] enums still have the same requirements as Rust enums and so C callers can easily trigger UB, so you need to use something like open_enum. Thankfully cbindgen is too dumb to know that #[open_enum] is a proc macro and produces a non-enum type.
* Before io_safety in Rust 1.63, dealing with file descriptors from C without accidentally closing them was horrific (though this was a wider problem in Rust). BorrowedFd is quite nice -- though Rustix will panic if you use negative fds and so you need to add validation and your own type in practice. However, #[repr(transparent)] is very nice for this.
* Lots of reading about unsafe Rust is necessary when doing most non-trivial things with C FFI.
* You need to make use of a lot of compiler internals, build scripts, and other magic to get the output you want.
* Tools like cargo-c and cbindgen are nice and probably work great for 80% of projects, but the 20% really suffer from no useful tooling. I haven't tried to use rustc directly to work around some of the remaining issues, but I suspect it'd be even more painful.
I would say that the C interop with Rust is pretty good, but it has lots of room for improvement, and it feels like very few resources have been spent on it after they got the core stuff working.

Source: I've been writing a Rust library intended to be used primarily via C FFI and have run into a lot of issues...
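(For the #[repr(C)] enum pitfall above, the usual workaround, whether hand-written or via the open_enum crate, is a transparent newtype with associated constants. A rough sketch with made-up names:)

/// An "open enum": any i32 value is representable, so a C caller
/// passing an unexpected value is not instant UB the way it would be
/// with a #[repr(C)] Rust enum.
#[repr(transparent)]
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
pub struct Status(pub i32);

impl Status {
    pub const OK: Status = Status(0);
    pub const NOT_FOUND: Status = Status(2);
}

fn describe(s: Status) -> &'static str {
    match s {
        Status::OK => "ok",
        Status::NOT_FOUND => "not found",
        _ => "unknown value from C", // the catch-all is the whole point
    }
}

fn main() {
    assert_eq!(describe(Status(99)), "unknown value from C");
}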
There is some C++/Rust interop I've worked on in the past that would have enjoyed the arbitrary self types feature, but not particularly because of the C++ part of the equation. In fact, I think if it had been a pure Rust project it would have enjoyed that feature just as much, so... eh... take it for what little it's worth, I guess.
Philopp has been a particular force for change in this area.
As someone who's written a number of userspace applications in many languages as well as embedded firmwares running on bare metal, Rust is a rare gem that excels at both.
I'm particularly fond of how easy it was to make all the controls live in the editor, and editable with changes appearing immediately. imgui would probably provide a similar experience, but I find C++ much more of a pain to work with than Rust.
(Disclaimer: I'm one of the Slint developers.)
I actually wrote a blog post about this exact topic, since it's a common question: https://slint.dev/blog/domain-specific-language-vs-imperativ...
In your article, you mention that imperative code in Rust looks more complicated, but this could be fixed by adding an "object tree" syntax to Rust that allows creating trees of objects and linking them, like this:
VBox margin=10:
    Label text=label_text
    Button label="OK" onclick=on_ok_clicked
This syntax could be used not only for UI, but for describing configuration, database tables, and many other things. I think it would be a better solution than a proprietary language.

Also, I think it would be better if the GUI could be drawn in the editor; it would allow using lower-paid developers without an expensive computer science education to build the UI.
That said, great work! There's plenty of room in the language for more than one solution!
Like the person you’re replying to, I am generally averse to DSLs where the domain is a single project because I associate them with previous bad experiences where I spent time learning only to find it totally misses the mark. There’s also the issue of software where I’m the sole maintainer and look at it rarely. If it’s the only place I use Slint, then I’m going to be worried that I need to relearn the DSL in three months when I want to tweak the UI. Although you can probably say the same about any non-trivial framework’s API whether or not it uses a DSL.
All that said, I’ll be a bit more open to DSLs after reading your post, and if I ever need to write a GUI in Rust I’ll give Slint a shot (although that seems unlikely since I don’t typically write Rust or GUIs).
https://github.com/slint-ui/slint?tab=License-1-ov-file#read...
Beyond that, Dart / Flutter are truly an absolute pleasure to use for that use case.
"There ain't no such thing as a free lunch" as the saying goes.
That too requires a substantial investment of time and resources.
I think, in a more pragmatic sense too, that you can form a very good understanding of whether you're going to have weird special requirements just from looking at what others have done with the same tools in similar situations before you.
> That too requires a substantial investment of time and resources.
The discussion has gotten to be pretty abstract at this point. To get back to concrete examples, the egui RAD builder I've been hacking on worked on day 1, first commit. It's been a joy to put together, and no more difficult than building GUI apps with any other toolkit I've worked with. Which causes me to question your statements about additional complexity. You can dig deep and do dark magic with Rust if you want, but you can also treat it like any high level language and get things done quickly. That's part of what makes it a rare gem to me.
Some folks don't like dealing with strict types, or with the borrow checker, but I find that the errors they illuminate for me would have been similarly serious in other languages which lacked the tooling to highlight them. Which adds to my appreciation of Rust.
Immediate mode has its detractors, but in practice I've found it remarkably flexible, the resulting code relatively clean, and egui gets out of the way when I want to do something like blit fast graphics to a memory-mapped area of the screen. Responses have been positive.
BTW, this happens to almost all languages. Which ACTUALLY good GUI toolkits exist? And which languages ACTUALLY have good integrations or implementations of them?
A good GUI kit AND integration is a bigger task than building an OS or an RDBMS. (And there aren't many good languages for RDBMSes either.)
All of the above also have great ORM libraries and RDBMS standard driver APIs, as doing data entry applications is a common enterprise GUI workflow.
A truly good one? FoxPro. It's very hard to know if you haven't had exposure to what it should be.
I think Swift (and even ObjC) is perfect for AppKit & UIKit. I think those frameworks are pretty good and I like using them. The languages have great integration; Swift was literally made around them. Those toolkits have great integration with macOS.
I find C# a pretty nice language for GUI, I assume it has good (maybe not great) integration with at least one of MS GUI toolkits.
I find Rust good for GUI, but right now the story is meh. Wrapper-style frameworks always suffer from Rust not being whatever it wraps. Pure-Rust frameworks miss a lot of features compared to wrapper frameworks.
I strongly agree with this. In particular the Rust GUI libraries I've looked at have text layout / rendering that is nowhere near what should be considered adequate today (imo).
For example egui:
- It doesn't do any bidi reordering.
- It also doesn't do any font fallback so even if you didn't need bidi you can't render many languages without first acquiring an appropriate font somehow.
- Complex shaping is nowhere to be seen either.
- egui's italics look really terrible and I'm not sure why since I can't believe even synthesized ones have to look that bad.
CSS has been doing this for years and it's been doing it very well. So I am kind of disappointed that we don't have equally powerful tools in the general Rust ecosystem. Even just considering text layout / rendering libraries, only `cosmic-text` has an API that is somewhat in the right direction[1], but even it fails simply because I don't see a way to insert a button (block element) in-between the text[2].
Note that I'm not just hating on egui here, egui is amazing. Afaict it is the most complete GUI library for Rust and it's great to be able to make GUIs this easily. However I can't just not point out that it is, in fact, not perfect and doesn't truly "excel" at GUIs.
Also I have no idea how inline layout looks in other established GUI libraries like GTK and Qt so maybe I'm complaining about something that is not available most places outside a browser. If anyone knows, it would be interesting to learn how well they compare here.
[1] CSS inline layout is so complex that even this is a decent achievement, since it is not reasonable to expect most CSS features out of new Rust libraries with a significant time disadvantage.
[2] This is non-trivial because bidi reordering should happen on the whole paragraph not only on both sides of the button, so the inline layout API must handle non-text blocks in text.
However, it seems like most of the other UI toolkits discussed here, even in other languages, suffer similar issues. Which points to the difficulty of the problems. Consequently, I don't have any problem praising egui and Rust alongside. They're great! And even great can always get better! :)
GUIs are incredibly hard and most languages never get high quality GUI libraries. Rust is still pretty young and a ton of people are working on the problem so that will definitely change.
There is a full-fledged DE written in Rust that uses Iced: https://en.wikipedia.org/wiki/COSMIC_(desktop_environment)
Immediate mode was certainly a different paradigm to wrap my head around, but so far I haven't found anything I couldn't accomplish with egui, including some fairly complex applications like https://timschmidt.github.io/alumina-interface/
This can be true but it can still be the case that a managed language is even better at one of them.
Except for thread safety.
In memory resources shared among threads.
Turns out threads may also share resources like out-of-process files, memory-mapped regions, shared memory, databases, distributed transactions, ... where the Send and Sync traits are of no help.
Also, you happen to forget that Haskell has Software Transactional Memory, Swift has had similar protocols since version 6, and effects are a thing in OCaml, Scala, and Koka, while languages like Dafny, Koka, and Idris also provide similar capabilities via proofs.
That might be true if you're developing a pure application [1], but not if you're writing a library. Have fun integrating a C# library in a Java program, a Java library in a C# program, or either of those in a Python, Node, C, C++, or Rust program. There's something to be said for not requiring a runtime.
[1] Unless you care about easy parallelization with statically guaranteed thread-safety. But hey, who needs this in the age of multicore CPUs?
Message-passing IPC is much, much slower and less efficient than shared-memory communication, and inter-process IPC (both message-passing and shared-memory) is much less convenient than intra-process multi-threading. Rust is the only mainstream language, managed or otherwise, which enables safe and efficient multi-threading.
> Turns out threads also may share resources like out-of-process files, memory mapped regions, shared memory, databases, distributed transactions
For multi-threaded parallelization,
- out-of-process files
- databases
- distributed transactions
aren't really relevant, and Rust directly helps with the other two aspects:
- memory mapped regions
- shared memory
In practice, your "very specific" aspect is the most important one, and the hardest to get right without Rust's Send and Sync traits and their automatic enforcement.

> aren't really relevant, and Rust directly helps with the other two aspects:
Not at all, because Rust code has nothing to say about what other processes do to those resources; the only thing you can do is wrap accesses in an unsafe code block and hope for the best, that nothing was corrupted.
Rust plucks the fruit it can reach, and it mostly stays in its lane while trying to expand here and there (like Rust in Linux and embedded). I too want one ultimate language and one ultimate kernel, but I don't think you and I will live to see it. Maybe our grandchildren won't either.
Ask an LLM to generate Rust code for char, USB, I2C, GPU drivers, then build and test it automatically? Possible?
Or start with other "smaller" projects such as sqlite, apache, nginx, etc - possible?
LLMs seem generally unsuited for the task, because they're fundamentally approximators that won't always get things right, and as a result will introduce subtle bugs. Perhaps if you paired them with some sort of formal methods... I'm not aware of anyone doing that. Tests aren't sufficient - lots of subtle bugs will not be caught by existing test suites.
Your idea of "smaller" projects is not... smaller enough. See the actual success stories for example: https://github.com/immunant/c2rust?tab=readme-ov-file#uses-o...
fn project_reference(r: &MyStruct) -> &Field {
    &r.field
}

unsafe fn project_pointer(r: *mut MyStruct) -> *mut Field {
    unsafe { &raw mut (*r).field }
}

// The equivalent C code would look like this:
struct field *project(struct my *r) {
    return &(r->field);
}
I am a very heavy Rust user. I mostly program in safe Rust while occasionally dipping into unsafe Rust.

IDK, I think Rust should stick with what it is good at and not try to expand into domains that it is clearly not nicely designed for. That is, what if the best way to implement a linked list in Rust is via an array of indices and NOT through RefCell or whatever it is? What if Rust will never ever have a sane way to implement a linked list? What is so wrong with that? I think there should be a very clean divide between C and Rust. Rust stays in the happy Rust world and C stays in the happy C world.
I am not sure I am excited to see something like this
unsafe fn project_pointer(r: *mut MyStruct) -> *mut Field {
    unsafe { &raw mut (*r).field }
}