# [derive(Clone)] Is Broken
111 points | 3 days ago | 14 comments | rgbcu.be | HN
lidavidm
6 hours ago
[-]
Someone on the issue they made explained why Clone is "broken": https://github.com/JelteF/derive_more/issues/490#issuecommen...

Which links to this blog post explaining the choice in more detail: https://smallcultfollowing.com/babysteps/blog/2022/04/12/imp...

reply
samsartor
4 hours ago
[-]
I have a crate with a "perfect" derive macro that generates where clauses from the fields instead of putting them on the generic parameters. It is nice when it works, but yah cyclical trait matching is still a real problem. I wound up needing an attribute to manually override the bounds whenever they blow up: https://docs.rs/inpt/latest/inpt/#bounds
reply
sbt567
6 hours ago
[-]
from Niko's post:

> In the past, we were blocked for technical reasons from expanding implied bounds and supporting perfect derive, but I believe we have resolved those issues. So now we have to think a bit about semver and decide how much explicit we want to be.

reply
loa_in_
6 hours ago
[-]
Automatically deriving Clone is a convenience. You can and should write your own implementation for Clone whenever you need it and automatically derived implementation is insufficient.
reply
eqvinox
9 minutes ago
[-]
The entirety of any programming language is a convenience. You could just write your code in assembly; I don't think your argument is "automatic". Question is, how much does this particular convenience matter?

The fact that people are writing blog posts and opening bugs about it (and others in the comments here recount running into the issue) seems to indicate this particular convenience matters.

reply
josephg
5 hours ago
[-]
But this issue makes it confusing & surprising when an automatically derived clone is sufficient and when it's not. It's a silly extra rule that you have to memorise.

By the way, this issue also affects all of the other derivable traits in std - including PartialEq, Debug and others. Manually implementing all this stuff - especially Debug - is needless pain. Especially as your structs change and you have to (or forget to) keep the implementations in sync.

Elegant software is measured in the number of lines of code you didn't need to write.

reply
jhugo
4 hours ago
[-]
> Elegant software is measured in the number of lines of code you didn't need to write.

Strong disagree. Elegant software is easy to understand when read, without extraneous design elements, but can easily have greater or fewer lines of code than an inelegant solution.

reply
0rzech
4 hours ago
[-]
It's surprising up to the moment the compilation error tells you that all of the members have to implement the derived trait.

Nevertheless, it would be cool to be able to add #[noderive(Trait)] or something to a field for it not to be included in the automatic trait implementation. Especially since foreign types sometimes do not implement some traits, and one has to write lots of boilerplate just to ignore fields of those types.
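
For illustration, here's the kind of boilerplate such a hypothetical #[noderive(...)] attribute would eliminate: a manual Debug impl that skips a field whose foreign type implements nothing (`OpaqueHandle` and `Session` are made-up names for the sketch):

```rust
use std::fmt;

// Stand-in for a foreign type that implements no traits.
struct OpaqueHandle;

struct Session {
    id: u64,
    handle: OpaqueHandle, // blocks #[derive(Debug)] on the whole struct
}

// The boilerplate: reimplement Debug by hand just to skip one field.
impl fmt::Debug for Session {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        f.debug_struct("Session")
            .field("id", &self.id)
            .finish_non_exhaustive() // renders as `Session { id: 1, .. }`
    }
}

fn main() {
    let s = Session { id: 1, handle: OpaqueHandle };
    assert_eq!(format!("{:?}", s), "Session { id: 1, .. }");
}
```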

I know of the Derivative crate [1], but it's yet another dependency in an increasingly NPM-like dependency tree of a modern Rust project.

All in all, I resort to manual trait implementations when needed, just as GP.

[1] https://crates.io/crates/derivative

reply
0rzech
2 hours ago
[-]
Apparently, Derivative is unmaintained [1], but there are Derive_more [2], Educe [3] and Derive-where [4], if anyone is interested.

[1] https://rustsec.org/advisories/RUSTSEC-2024-0388.html

[2] https://crates.io/crates/derive_more

[3] https://crates.io/crates/educe

[4] https://crates.io/crates/derive-where

reply
atoav
4 hours ago
[-]
The same is true for memory allocation. But I do not think it makes sense that everybody has to write memory allocators from scratch because a few special cases require it.
reply
j-pb
4 hours ago
[-]
I disagree; elegant software is explicit. Tbh I wouldn't mind if we got rid of derives tomorrow. Given the ability of LLMs to generate and maintain all that boilerplate for you, I don't see a reason for having "barely enough smarts" heuristic solutions to this.

I'd rather have a simple and explicit language with a bit more typing than a Perl that tries to include 10,000 convenience hacks.

(Something like Uiua is ok too, but their tacitness comes from simplicity not convenience.)

Debug is a great example for this. Is derived debug convenient? Sure. Does it produce good error messages? No. How could it? Only you know what fields are important and how they should be presented. (maybe convert the binary fields to hex, or display the bitset as a bit matrix)

We're leaving so much elegance and beauty in software engineering on the table, just because we're lazy.

reply
zwnow
3 hours ago
[-]
I am sorry but Uiua and LLM generated code? This has to be a shitpost
reply
Intermernet
2 hours ago
[-]
Welcome to the new normal. Love it or hate it, there are now a bunch of devs who use LLMs for basically everything. Some are producing good stuff, but I worry that many don't understand the subtleties of the code they're shipping.
reply
jhugo
4 hours ago
[-]
> we cannot just require all generic parameters to be Clone, as we cannot assume they are used in such a way that requires them to be cloned.

No, this is backwards. We have to require all generic parameters are Clone, as we cannot assume that any are not used in a way that requires them to be Clone.
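
For reference, this is roughly what the derive expands to today (a sketch, not the exact macro output). The bound lands on `T` itself because the generated body calls `.clone()` on fields of type `T`:

```rust
struct Pair<T> {
    a: T,
    b: T,
}

// Sketch of the current derive output: the bound is placed on the
// type parameter, not on the field types that use it.
impl<T: Clone> Clone for Pair<T> {
    fn clone(&self) -> Self {
        Pair {
            a: self.a.clone(), // needs T: Clone here...
            b: self.b.clone(), // ...and here
        }
    }
}

fn main() {
    let p = Pair { a: 1, b: 2 };
    let q = p.clone();
    assert_eq!(q.a + q.b, 3);
}
```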

> The reason this is the way it is is probably because Rust's type system wasn't powerful enough for this to be implemented back in the pre-1.0 days. Or it was just a simple oversight that got stabilized.

The type system can't know whether you call `T::clone()` in a method somewhere.

reply
chrismorgan
1 hour ago
[-]
> The type system can't know whether you call `T::clone()` in a method somewhere.

It’s not about that, it’s about type system power as the article said. In former days there was no way to express the constraint; but these days you can <https://play.rust-lang.org/?gist=d1947d81a126df84f3c91fb29b5...>:

  impl<T> Clone for WrapArc<T>
  where
      Arc<T>: Clone,
  {
      …
  }
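
For completeness, a runnable version of that wrapper, with the body filled in with the obvious refcount-bumping clone (`NotClone` is a stand-in type added for the demo):

```rust
use std::sync::Arc;

// A type that deliberately does not implement Clone.
struct NotClone;

struct WrapArc<T>(Arc<T>);

// Bound on the field's type rather than on T itself.
impl<T> Clone for WrapArc<T>
where
    Arc<T>: Clone, // holds for every T, since cloning an Arc only bumps a refcount
{
    fn clone(&self) -> Self {
        WrapArc(Arc::clone(&self.0))
    }
}

fn main() {
    let a = WrapArc(Arc::new(NotClone));
    let b = a.clone(); // fine even though NotClone does not implement Clone
    assert_eq!(Arc::strong_count(&b.0), 2);
}
```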
reply
enricozb
2 hours ago
[-]
For structs, why couldn't rust check the necessary bounds on `T` for each field to be cloned? E.g. in

    #[derive(Clone)]
    struct Weird<T> {
      ptr: Arc<T>,
      tup: (T, usize)
    }

for `ptr`, `Arc<T>: Clone` exists with no bound on `T`. But for `tup`, `(T, usize): Clone` requires `T: Clone`.
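
Those per-field bounds are already legal Rust when written by hand, so the output of such a hypothetical field-based derive can be sketched directly:

```rust
use std::sync::Arc;

struct Weird<T> {
    ptr: Arc<T>,
    tup: (T, usize),
}

// The bounds a field-based ("perfect") derive would emit,
// written out manually.
impl<T> Clone for Weird<T>
where
    Arc<T>: Clone,     // satisfied for every T
    (T, usize): Clone, // requires T: Clone
{
    fn clone(&self) -> Self {
        Weird {
            ptr: self.ptr.clone(),
            tup: self.tup.clone(),
        }
    }
}

fn main() {
    let w = Weird { ptr: Arc::new(1u8), tup: (2u8, 3) };
    let v = w.clone();
    assert_eq!(v.tup, (2, 3));
}
```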

Same thing for other derives, such as `Default`.

reply
jhugo
2 hours ago
[-]
Because it doesn't know if you're relying on T being Clone in method bodies. The internal behavior of methods is not encoded in the type system.
reply
saghm
1 hour ago
[-]
You can already write method bodies today that have constraints that aren't enforced by the type definition though; it's trivially possible to write a method that requires Debug on a parameter without the type itself implementing Debug[0], for example. It's often even encouraged to define the constraints on impl blocks rather than the type definition. The standard library itself goes out of its way to define types in a way that allow only partial usage due to some of their methods having bounds that aren't enforced on the type definition. Rust's hashmap definition in the standard library somewhat notably doesn't actually enforce that the type of the key is possible to hash, which allows a hashmap of arbitrary types to be created but not inserted into unless the value actually implements Hash[1].
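
A minimal sketch of such per-method bounds (`Wrapper` is a made-up type, not the code behind the playground link):

```rust
use std::fmt::Debug;

// No bounds on the type definition...
struct Wrapper<T> {
    value: T,
}

impl<T> Wrapper<T> {
    fn new(value: T) -> Self {
        Wrapper { value }
    }

    // ...but this one method requires T: Debug.
    fn describe(&self) -> String
    where
        T: Debug,
    {
        format!("{:?}", self.value)
    }
}

struct NoDebug;

fn main() {
    let w = Wrapper::new(42);
    assert_eq!(w.describe(), "42");

    // A Wrapper<NoDebug> can still be constructed and used;
    // only calling describe() on it would fail to compile.
    let _partial = Wrapper::new(NoDebug);
}
```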

[0]: https://play.rust-lang.org/?version=stable&mode=debug&editio...

[1]: https://www.reddit.com/r/rust/comments/101wzdq/why_rust_has_...

reply
RGBCube
2 hours ago
[-]
What?

The way the derives work is they generate code by utilizing the fields and their types. Here is a trivial implementation (of a custom trait rather than Clone; it still holds, though) which will prove you wrong:

<https://github.com/cull-os/carcass/blob/master/dup%2Fmacros%...>

reply
berkes
4 hours ago
[-]
> The type system can't know whether you call `T::clone()` in a method somewhere.

Why not?

reply
jhugo
3 hours ago
[-]
Types don't carry behavioral information about what the method does internally. Everything about a method is known from its signature.

The compiler doesn't introspect the code inside the method and add additional hidden information to its signature (and it would be difficult to reason about a compiler that did).

reply
ninkendo
2 hours ago
[-]
> Types don't carry behavioral information about what the method does internally.

I don’t remember specifics, but I very distinctly remember changing a method in some way and Rust determining that my type is now not Send, and getting errors very far away in the codebase because of it.[0]

If I have time in a bit I’ll try and reproduce it, but I think Send conformance may be an exception to your statement, particularly around async code. (It also may have been a compiler bug.)

[0] It had something to do with carrying something across an await boundary, and if I got rid of the async call it went back to being Send again. I didn’t change the signature, it was an async method in both cases.

reply
kd5bjo
1 hour ago
[-]
`Send`, `Sync`, and `Unpin` are special because they're so-called 'auto traits': The compiler automatically implements them for all compound types whose fields also implement those traits. That turns out to be a double-edged sword: The automatic implementation makes them pervasive in a way that `Clone` or `Debug` could never be, but it also means that changes which might be otherwise private can have unintended far-reaching effects.

In your case, what happens is that async code conceptually generates an `enum` with one variant per await point which contains the locals held across that point, and it's this enum that actually gets returned from the async method/block and implements the `Future` trait.
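
The auto-trait propagation can be shown in a few lines (`assert_send` is a helper invented for the demo):

```rust
use std::rc::Rc;

fn assert_send<T: Send>(_: &T) {}

struct Plain(u32);       // all fields are Send, so Send is auto-implemented
struct HoldsRc(Rc<u32>); // Rc<u32> is !Send, so HoldsRc is automatically !Send

fn main() {
    assert_send(&Plain(1)); // compiles via the auto impl

    let h = HoldsRc(Rc::new(1));
    // assert_send(&h); // error: `Rc<u32>` cannot be sent between threads safely
    let _ = h;
}
```

Swapping a private field to an `Rc` silently flips the auto impl, which is the far-reaching effect described above.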

reply
delta_p_delta_x
2 hours ago
[-]
> Types don't carry behavioral information about what the method does internally.

I was under the impression type inference meant that the implementation of a function directly determines the return type of a function, and therefore its signature and type.

reply
jhugo
2 hours ago
[-]
You can sometimes elide the return type, but what you describe only happens in closures: `|| { 0u32 }` is the same as `|| -> u32 { 0u32 }`. Methods and free functions must always have an explicitly declared return type. In any case, that's not the same thing as what's being described above.

For the existence of any invocation of `<T as Clone>::clone()` in the method body to be encoded in the method signature, we'd either need some wild new syntax, or the compiler would need to be able to encode hidden information into types beyond what is visible to the programmer, which would make it very hard to reason about its behavior.

reply
mryall
2 hours ago
[-]
No, Rust functions have to declare their return types. They cannot be inferred.
reply
almostdeadguy
1 hour ago
[-]
All the #[derive(Clone)] does is generate a trait impl of Clone for the struct, which itself can be bounded by trait constraints. It doesn't have to know that every use of the struct ensures generic parameters have to/don't have to be Clone. It doesn't have to make guarantees about how the struct is used at all.

It only needs to provide constraints that must hold for it to call clone() on each field of the struct (i.e. the constraints that must hold for the generated implementation of the fn clone(&self) method to be valid, which might not hold for all T, in which case a Struct<T> will not implement Clone). The issue this post discusses exists because there are structs like Arc<T> that are cloneable despite T not being Clone itself [1]. In a case like that it may not be desirable to put a T: Clone constraint on the trait impl, because that unnecessarily limits T where Struct<T>: Clone.

[1]: https://doc.rust-lang.org/std/sync/struct.Arc.html#impl-Clon...

reply
moomin
6 hours ago
[-]
Haskell does this. If you derive Eq, it puts a condition on the generic parameter(s) requiring them to be Eq as well. Then if you use it with something that doesn’t implement Eq, your generic type doesn’t either.

It helps if you can express these preconditions in the first place, though.

reply
hardwaresofton
5 hours ago
[-]
Haskell has, like-for-like, a better type system than Rust.

That said, Rust is just enough Haskell to be all the Haskell any systems programmer ever needed, always in strict mode, with great platform support, a stellar toolkit, great governance, a thriving ecosystem and hype.

Lots of overlap in communities (more Haskell -> Rust than the other way IMO) and it's not a surprise :)

reply
01HNNWZ0MV43FF
5 hours ago
[-]
I have respect for Haskell because I saw it long before I saw Rust, and I love the ideas, but I never got around to actually using Haskell.

I think an IO monad and linear types [1] would do a lot for me in a Rust-like

[1] Affine types? Linear types? The ones where the compiler does not insert a destructor but requires you to consume every non-POD value. As someone with no understanding of language theory, I think this would simplify error handling and the async Drop problem

reply
yccs27
4 hours ago
[-]
I can understand why Rust didn't implement an IO monad (even though I'd love it), but linear types seem like they would fit right in with the rest of Rust. Not sure why they didn't include them.

There are actually two different ways that Rust types can always be discarded: mem::forget and mem::drop. Making mem::forget unsafe for some types would be very useful, but difficult to implement without breaking existing code [1].
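
A small demonstration of the difference (`Noisy` and its drop flag are made up for the illustration):

```rust
use std::mem;
use std::sync::atomic::{AtomicBool, Ordering};

static DROPPED: AtomicBool = AtomicBool::new(false);

struct Noisy;
impl Drop for Noisy {
    fn drop(&mut self) {
        DROPPED.store(true, Ordering::SeqCst);
    }
}

fn main() {
    mem::forget(Noisy); // safe, but the destructor never runs: the value leaks
    assert!(!DROPPED.load(Ordering::SeqCst));

    drop(Noisy); // the destructor runs here
    assert!(DROPPED.load(Ordering::SeqCst));
}
```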

Btw, you are correct with linear types - Affine types allow discarding (like Rust already does).

[1] https://without.boats/blog/changing-the-rules-of-rust/

reply
hardwaresofton
4 hours ago
[-]
I used Haskell for a while and eventually switched over (and intentionally target Rust for many new projects, though I suspect I should be doing more Go for really simple things that just don't need more thought).

One large problem with Haskell that pushes people to Rust is strictness, I think -- laziness is a feature of Haskell and is basically the opposite direction from where Rust shines (and what one would want out of a systems language). It's an amazing feature, but it makes writing performant code more difficult than it has to be. There are ways around it, but they're somewhat painful.

Oh there's also the interesting problem of bottom types and unsafety in the std library. This is a HUGE cause of consternation in Haskell, and the ecosystem suffers from it (some people want to burn it all down and do it "right", some want stability) -- Rust basically benefited from starting later, and just making tons of correct decisions the first time (and doing all the big changes before 1.0).

That said, Haskell's runtime system is great, and its threading + concurrency models are excellent. They're just not as efficient as Rust's (obviously) -- the idea of zero cost abstractions is another really amazing feature.

> [1] Affine types? Linear types? The ones where the compiler does not insert a destructor but requires you to consume every non-POD value. As someone with no understanding of language theory, I think this would simplify error handling and the async Drop problem

Yeah, the problem is that Affine types and Linear types are actually not the same thing. Wiki is pretty good here (I assume you meant to link to this):

https://en.wikipedia.org/wiki/Substructural_type_system

Affine is a weakening of Linear types, but the bigger problem here is that Haskell has a runtime system -- it just lives in a different solution-world from Rust.

For Rust, affine types are just a part of the way it handles aliasing and enables a GC-free language. Haskell has the feature almost... because it's cool/powerful. Yes, it's certainly useful in Haskell, but Haskell just doesn't seem as focused on such a specific goal, which makes sense because it's a very research-driven language. It has the best types (and is the most widely adopted ML lang IIRC) because it focuses on being correct and powerful, but the ties to practicality are not necessarily the first or second thought.

It's really something that Rust was able to add something that was novel and useful that Haskell didn't have -- obvious huge credit to the people involved in Rust over the years and the voices in the ecosystem.

reply
evertedsphere
1 hour ago
[-]
that's incorrect

haskell implements the "perfect derive" behaviour that the blog post is complaining about rust's lack of. the constraint is only propagated to the field types when using a haskell derive, not the parameters themselves (as in rust)

so the following compiles just fine:

    data Foo -- not Eq

    data Bar a = Bar 
      deriving Eq

    f :: Eq (Bar Foo) => ()
    f = ()
reply
hyperbrainer
6 hours ago
[-]
> we cannot just require all generic parameters to be Clone, as we cannot assume they are used in such a way that requires them to be cloned.

I don't understand what "used in such a way requires them to be cloned" means. Why would you require that?

reply
xvedejas
6 hours ago
[-]
Right now the derive macro requires `T` be `Clone`, but what we actually want to require is only that each field is `Clone`, including those that are generic over `T`. E.g. `Arc<T>` is `Clone` even though `T` isn't, so the correct restriction would be to require `Arc<T>: Clone` instead of the status quo, which requires `T: Clone`.
reply
nithssh
33 minutes ago
[-]
This is the best explanation I've read of this limitation
reply
Xunjin
17 minutes ago
[-]
Thank you, summarized perfectly.
reply
tinco
6 hours ago
[-]
That's the crux of the article. There's no good reason for this requirement, at least none raised in the article, so the author concludes it must be a mistake.

I think it's a bit cynical that it would take at least 4 years for this change to be admitted into the compiler. If the author is right and there is really no good reason for this rule, and I agree with the author in seeing no good reason, then it seems like something that could be changed quite quickly. The change would allow more code to compile, so nothing would break.

The only reason I could come up with for this rule is that, for some other reason, allowing non-complying type parameters makes the code generation really complex, and they therefore postponed the feature.

reply
the_mitsuhiko
5 hours ago
[-]
> The only reason I could come up with for this rule is that for some other reason allowing non complying type parameters somehow makes the code generation really complex and they therefore postponed the feature.

The history of this decision can be found in details in this blog post: https://smallcultfollowing.com/babysteps//blog/2022/04/12/im...

The key part:

> This idea [of relaxing the bounds] is quite old, but there were a few problems that have blocked us from doing it. First, it requires changing all trait matching to permit cycles (currently, cycles are only permitted for auto traits like Send). This is because checking whether List<T> is Send would require checking whether Option<Rc<List<T>>> is Send. If you work that through, you’ll find that a cycle arises. I’m not going to talk much about this in this post, but it is not a trivial thing to do: if we are not careful, it would make Rust quite unsound indeed. For now, though, let’s just assume we can do it soundly.

> The other problem is that it introduces a new semver hazard: just as Rust currently commits you to being Send so long as you don’t have any non-Send types, derive would now commit List<T> to being cloneable even when T: Clone does not hold.

> For example, perhaps we decide that storing a Rc<T> for each list wasn’t really necessary. Therefore, we might refactor List<T> to store T directly […] We might expect that, since we are only changing the type of a private field, this change could not cause any clients of the library to stop compiling. With perfect derive, we would be wrong.2 This change means that we now own a T directly, and so List<T>: Clone is only true if T: Clone.

reply
josephg
4 hours ago
[-]
Yeah; I think this argument makes sense. With perfect derive, #[derive(Clone)] has a bunch of implicit trait bounds which will change automatically as the struct changes. This has semver implications - and so we might want to be explicit about this rather than implicit.

We could solve this by having developers add trait bounds explicitly into the derive macro.

Currently this:

    #[derive(Clone)]
    struct Foo<T>(Arc<T>);
expands to:

    impl<T> Clone for Foo<T> where T: Clone { ... }
Perfect derive would look at the struct fields to figure out what the trait bounds should be. But it might make more sense to let users set the bound explicitly. Apparently the bon crate does it something like this:

    #[derive(Clone(bounds(Arc<T>: Clone)))]
Then if you add or remove fields from your struct, the trait bounds don't necessarily get modified as a result. (Or, changing those trait bounds is an explicit choice by the library author.)
reply
rocqua
6 hours ago
[-]
A type might have a generic parameter T, but e.g. use it as a phantom marker.

Then even if T isn't cloneable, the type might still admit a perfectly fine implementation of clone.

reply
qwertox
6 hours ago
[-]
LLMs are broken, too:

> "Of course. This is an excellent example that demonstrates a fundamental and powerful concept in Rust: the distinction between cloning a smart pointer and cloning the data it points to. [...]"

Then I post the compiler's output:

> "Ah, an excellent follow-up! You are absolutely right to post the compiler error. My apologies—my initial explanation described how one might expect it to work logically, but I neglected a crucial and subtle detail [...]"

Aren't you also getting very tired of this behavior?

reply
the_mitsuhiko
5 hours ago
[-]
> Aren't you also getting very tired of this behavior?

The part that annoys me definitely is how confident they all sound. However the way I'm using them is with tool usage loops and so it usually runs into part 2 immediately and course corrects.

reply
bt1a
5 hours ago
[-]
Well, they're usually told that they're some unicorn master of * languages, frameworks, skillsets, etc., so can you really fault them? :)
reply
rfoo
5 hours ago
[-]
TBH I'm tired of only the "Ah, an excellent follow-up! You are absolutely right <...> My apologies" part.
reply
IshKebab
5 hours ago
[-]
Yeah they definitely didn't do that in the past. We've lost "as a large language model" and "it's important to remember" but gained "you're absolutely right!"

I would have thought they'd add "don't apologise!!!!" or something like that to the system prompt like they do to avoid excessive lists.

reply
codedokode
4 hours ago
[-]
Languages like Rust and C seem to be too complicated for them. I also asked different LLMs to write a C macro or function that creates a struct and a function to print it (so that I don't have to duplicate the field list), and they generated plausible garbage.
reply
bigfishrunning
1 hour ago
[-]
LLMs only ever produce plausible garbage -- sometimes that garbage happens to be right, but that's down to luck.
reply
ramon156
5 hours ago
[-]
You should check Twitter nowadays, people love this kind of response. Some even use it as an argument
reply
darkwater
5 hours ago
[-]
And this is basically why LLMs are "bad". They have already reached critical-mass adoption; they are right or mostly right most of the time, but they also screw up badly many times. And people will just not know, will trust them blindly, and will spiral ever deeper into a total absence of critical judgement. Yeah, it also happened with Google and search engines back in the day ("I found it on the web so it must be true"), but now with LLMs it is literally tailored to whatever you are asking, for every possible question you can ask (well, minus the censored ones).

I keep thinking the LLM contribution to humanity is/will be a net negative in the long run.

reply
chrismorgan
5 hours ago
[-]
> they are right or mostly right most of the time

It’s times like this when I wonder if we’re even using the same tools. Maybe it’s because I only even try to actively use them when I expect failure and am curious how it will be (occasionally it just decides to interpose itself on a normal search result, and I’m including those cases in my results) but my success rate with DuckDuckGo Assist (GPT-4o) is… maybe 10% of the time success but the first few search results gave the answer anyway, 30% obviously stupidly wrong answer (and some of the time the first couple of results actually had the answer, but it messed it up), 60% plausible but wrong answer. I have literally never had something I would consider an insightful answer to the sorts of things I might search the web for. Not once. I just find it ludicrously bad, for something so popular. Yet somehow lots of people sing their praises and clearly have a better result than me, and that sometimes baffles, sometimes alarms me. Baffles—they must be using it completely differently from me. Alarms—or are they just failing to notice errors?

(I also sometimes enjoy running things like llama3.2 locally, but that’s just playing with it, and it’s small enough that I don’t expect it to be any good at these sorts of things. For some sorts of tasks like exploring word connections when I just can’t quite remember a word, or some more mechanical things, they can be handy. But for search-style questions, using models like GPT-4o, how do I get such consistently useless or pernicious results from them!?)

reply
Ukv
3 hours ago
[-]
Probably depends a lot of the type of questions you're asking. I think LLMs are inherently better at language-based tasks (translate this, reword this, alternate terms for this, etc.) than technical fact-based tasks, and within technical tasks someone using it as their first port of call will be giving it a much larger proportion of easy questions than someone using it only once stumped having exhausted other sources (or, as here, challenging it with questions where they expect failure).

There's a difference in question difficulty distribution between me asking "how do I do X in FFmpeg" because I'm too lazy to check the docs and don't use FFmpeg frequently enough to memorize them, compared to someone asking because they have already checked the docs and/or use FFmpeg frequently but couldn't figure out how to do specifically X (say, cropping videos to an odd width/height, which many formats just don't support). The former probably makes up the majority of my LLM usage, but I have still occasionally been surprised on the latter, where I've come up empty checking docs/traditional search but an LLM pulls out something correct.

reply
chrismorgan
2 hours ago
[-]
A few days ago I tried something along the “how do I do X in FFmpeg” lines, but something on the web platform, I don’t remember what. Maybe something to do with XPath, or maybe something comparatively new (3–5y) in JS-land with CSS connections. It was something where there was a clear correct answer, no research or synthesis required, I was just blanking on the term, or something like that. (Frustratingly, I can’t remember exactly what it was.) Allegedly using two of the search results, one of which was correct and one of which was just completely inapplicable, it gave a third answer which sounded plausible but was a total figment.

It’s definitely often good at finding the relevant place in the docs, but painfully frequently it’s stupendously bad, declaring in tone authoritative how it snatched defeat from the jaws of victory.

The startling variety of people’s experiences, and its marked bimodal distribution, has been observed and remarked upon before. And it’s honestly quite disturbing, because they’re frequently incompatible enough to suggest that at least one of the two perspectives is mostly wrong.

reply
bt1a
5 hours ago
[-]
Yet they're fantastic personal tutors / assistants who can provide a deeply needed 1:1 learning interface for less privileged individuals. I emphasize 'can'. Not saying kids should have them by their side in their current rough-around-the-edges and mediocre intelligent forms. Many will get burned as you describe, but it should be a lesson to curate information from multiple sources and practice applying reasoning skills!
reply
darkwater
4 hours ago
[-]
I agree with your take, and I personally used Claude and ChatGPT to learn better/hone some skills while interviewing to land a new job. They also help me get unstuck when doing small home fixes, because I get a custom-tailored answer to my current doubt/issue, which a normal web search would make much more complicated (I'd have to provide more context about it). But still, they get things wrong and can lead you astray even if you know the topic.
reply
AIPedant
4 hours ago
[-]
My dad taught high school science until retiring this year, and at least in 2024 the LLM tutors were totally useless for honest learning. They were good at the “happy path” but the space of high schoolers’ misconceptions about physics greatly exceeds the training data and can’t be cheaply RLHFed, so they crap the bed when you role play as a dumb high schooler.

In my experience this is still true for the reasoning models with undergraduate mathematics - if you ask it to do your point-set topology homework (dishonest learning) it will score > 85/100, if you are confused about point-set topology and try to ask it an honest (but ignorant) question it will give you a pile of pseudo-mathematical BS.

reply
renewiltord
6 hours ago
[-]
Haha, I encountered the opposite of this when I did a destructive thing recently: I first asked Gemini, then countered it saying it was wrong, and it insisted it was right. So the reality they encountered is probably this: it is either stubbornly wrong or overly obsequious, with no ability to switch.

My friend was a big fan of Gemini 2.5 Pro and I kept telling him it was garbage except for OCR and he nearly followed what it recommended. Haha, he’s never touching it again. Every other LLM changed its tune on pushback.

reply
kzrdude
2 hours ago
[-]
The only thing that needs change with derive(Clone) is to add an option to it so that you easily can customize the bounds. Explicitly.
reply
csomar
5 hours ago
[-]
Derive Clone is not broken. It is basic. I’d say this is a good area for a dependency, but not for core Rust functionality. Keep derive simple and stupid, so people can learn they can derive stuff themselves. It also avoids any surprises.
reply
josephg
5 hours ago
[-]
I disagree. I think the current behaviour is surprising. The first time I ran into this problem, I spent half an hour trying to figure out what was wrong with my code - before eventually realising it's a known problem in the language. What a waste of time.

The language & compiler should be unsurprising. If you have language feature A, and language feature B, if you combine them you should get A+B in the most obvious way. There shouldn't be weird extra constraints & gotchas that you trip over.

reply
MangoToupe
4 hours ago
[-]
I don't see it as a problem, personally. It's consistent behavior that I don't find surprising at all, perhaps because I internalized it so long ago. I can understand your frustration tho

> in the most obvious way.

What people find obvious is often hard to predict.

reply
saghm
50 minutes ago
[-]
The main reason I'm not super fond of the way it currently works is that it can be a bit confusing in code reviews. I've joined several teams over the years working on Rust codebases around a year old where most of the team hadn't used Rust beforehand, with the idea that my Rust experience can help the team grow in their Rust knowledge and mature the codebase over time. I can recall numerous times when I've seen a trait like Debug or Clone manually implemented by someone newer to Rust where the implementation is identical to what would be generated by automatically deriving it, with a roughly equal split between times when they did actually need to manually implement it for the reasons described in this article and times when they totally could have derived the trait but didn't realize. If I can't look at a Clone implementation that just manually clones every field exactly the same way as deriving it would and immediately know whether it would be possible to derive it after over 10 years of Rust experience, I can't possibly expect someone with less than a year of Rust experience to do that, so my code review feedback ends up having to be a question about whether they tried to derive the trait or not (and to try it and keep it like that if it does work) rather than being able to let them know for sure that they can just derive the trait instead.

I guess at a higher level, my issue with the way it currently works is that it's a bit ambiguous with respect to the developer's intent. If it were possible to derive traits in the cases the article describes, a manual implementation would make it immediately clear that this was what the developer chose to write. The way it works right now, I can't tell the difference between "I tried to derive this, but it didn't work, so I had to implement it manually as a fallback" and "I implemented this manually without trying to derive it first". It's a minor issue, but I think small things like this add up in the overall experience of how hard a language is to learn, and I'd argue that it's exactly the type of thing that Rust has benefited from caring about in the past. Rust has a notoriously sharp learning curve, and yet it's grown in popularity quite a lot over the past decade, and I don't think that would have been possible without the efforts of those paying attention to the smaller rough edges in the day-to-day experience of using the language.

reply
Ar-Curunir
3 hours ago
[-]
The solution is to use a derive macro like derivative or educe.
reply
mrbook22
4 hours ago
[-]
skill issue. Arc should allow Clone even if the underlying type is not `impl Clone`.
reply
saghm
48 minutes ago
[-]
Arc _does_ allow Clone without the underlying type being Clone. That's exactly why it's so unexpected that having an `Arc<T>` field in a struct doesn't allow deriving Clone unless `T` is also Clone; the bound doesn't actually matter to the implementation, and you end up writing by hand exactly what the derive would have generated if the compiler allowed it.
reply
DougBTX
5 hours ago
[-]
Agreed, a screwdriver isn't broken just because it's a bad hammer. The title seems misleading; I was expecting a bug, memory unsafety, etc.

Allowing more safe uses seems OK to me, but obviously expanding the functionality adds complexity, so there's a trade-off.

reply
atemerev
4 hours ago
[-]
I mean, how is this not core Rust functionality, when you need Clone for so many things unless you want to fight the borrow checker for yet another day?
reply
m3talsmith
3 hours ago
[-]
Just looking at the examples, you can tell they wouldn't compile: the other structs passed in don't derive the trait either, nor implement it. It's really simple, not broken.
reply
xupybd
1 hour ago
[-]
Amazing site from someone so young
reply
bloppe
5 hours ago
[-]
I don't see how "the hard way" is a breaking change. Anybody got an example of something that works now but wouldn't work after relaxing that constraint?
reply
yuriks
4 hours ago
[-]
It relaxes the contract required for an existing type with derive(Clone) to implement Clone, which might allow types in existing code to be cloned where they couldn't be before. This can matter if precluding those clones is important to the code, e.g. if safety invariants rely on Type<T> only being cloneable when T is Clone.
reply
exfalso
4 hours ago
[-]
Eh. It's a stretch to call it "broken"
reply
tucnak
5 hours ago
[-]
A bit off-topic, but every time I read sophisticated Rust code involving macros, I can't help but think something went wrong at some point. The sheer complexity far outpaces that of C++, and even though I'm sure they'd call out C++ for undefined behaviour (and rightfully so), less of this seems to have to do with memory and thread safety, and more with good old "C++-style" bloat: pleasing all whilst pleasing none. Rust doesn't seem worthwhile to learn if, in a few years' time, C++ gets memory safety proper and I can just use that.

Maybe this is an improvement on templates and precompiler macros, but not really.

reply
junon
5 hours ago
[-]
None of this has to do with the complexity of macros.

And no, sorry, the complexity of C++ templates far outweighs anything in Rust's macros. Templates are a Turing-complete extension of the type system; they are not macros or anything like them.

Rust macro_rules! macros are token-to-token transformers, nothing more. They're also hygienic: expansions MUST form valid syntax, and identifiers inside a macro can't capture or clash with those at the call site the way C macros can.

Proc macros are self-standing crates, marked with a special library type in the crate manifest, and while they're not hygienic like macro_rules! macros, they're still just token-to-token transformers that happen to run Rust code.

Both are useful, both have their place, and only proc macros have the slight developer-experience annoyance of having to expand them to find syntax errors (usually not a problem, though).

reply
codedokode
4 hours ago
[-]
Proc macros are implemented in an unsafe way: they run arbitrary code during compilation and can access arbitrary files. I do not like it.

Also, I think it would be better if they operated on reflection-like structures (functions, classes, methods) rather than raw tokens; they would be easier to write and read.

reply
junon
3 hours ago
[-]
I agree in principle, but there's also a lot of worth in having them do that in certain cases, and build scripts and the like already have that ability anyway.

Achieving a perfect world where build tooling can only touch the things it really needs is less a toolchain problem and more an OS-hardening issue. I'd argue that's outside the scope of the compiler and language teams.

reply
tucnak
1 hour ago
[-]
This is often overlooked. No, I don't want a random cargo package executing arbitrary code on my machine. And there are just so many; in terms of bloat, the dependency trees are huge, rivalled only by Node.js, if I'm being honest. I have to build some Rust stuff once in a while (mostly Postgres extensions), and every time something goes wrong it's a nightmare to sort out.
reply
coldtea
5 hours ago
[-]
>The sheer complexity far outpaces that of C++

Not even close. Rust Macros vs C++ templates is more like "Checkers" vs "3D-chess while blindfolded".

>Rust doesn't seem worthwhile to learn, as in a few years time C++ will get memory safety proper

C++ getting "memory safety proper" is just adding to the problem that is C++: a pile of concepts, features, and incompatible ideas and APIs.

reply
j-krieger
4 hours ago
[-]
I've written both for a decade now and I disagree immensely. If Dante had known the horrors of C++ template compiler errors, he would've added another circle.
reply
bigfishrunning
1 hour ago
[-]
I'm convinced that the only reason anyone's still using C++ is that they refuse to learn any other language, and the features they want may get added eventually. C++ is an absolute mess.
reply
Surac
2 hours ago
[-]
It seems I have a personal dislike for Rust syntax. I think none of the code should compile, because it's just ugly :)
reply