Here's a quote from Bjarne,
> So go back about one year, and we could vote about it before it got into the standard, and some of us voted no. Now we have a much harder problem. This is part of the standard proposal. Do we vote against the standard because there is a feature we think is bad? Because I think this one is bad. And that is a much harder problem. People vote yes because they think: "Oh we are getting a lot of good things out of this.", and they are right. We are also getting a lot of complexity and a lot of bad things. And this proposal, in my opinion is bloated committee design and also incomplete.
Nobody wanted it.
Some imperfect data points on how to judge if a language feature is wanted (or not):
- Discussion on forums about how to use the feature
- Programs in the wild using the feature
- Bug reports showing people trying to use the feature and occasionally producing funny interactions with other parts of the language
- People filing feature requests to do more complex things on top of the initially built feature (showing that there is uptake and people want to do fancier, more advanced things)
The fact that the C++ standard community has been working on Contracts for nearly a decade is something that by itself automatically refutes your claim.
I understand you want to self-promote, but there is no need to do it at the expense of others. I mean, might it be that your implementation sucked?
Also, in my view the committee has been entertaining wider and wider language extensions. In 2016 there was a serious proposal for a graphics API based on (I think) Cairo. My own sense is that it's out of control and the language is just getting stuff added on because it can.
Contracts are great as a concept, and it's hard to separate the wild expanse of C++ from the truly useful subset of features.
There are several things proposed in the early days of C++ that arguably should be added.
Boy, this makes me feel old... oh wait :)
(I agree with your point; early 90s vs. mid-10s are two very different worlds, in this context.)
Not a very fair assumption. However, even if your not-so-friendly point were true, I'd like people who have invented popular languages to "self-promote" more (here dlang). It is great to get comments on HN from people who have actually achieved something nice!
This is actually a good point. Yes, LLMs have saturated the conversation everywhere but contracts help clarify the pre-post conditions of methods well. I don't know how good the implementation in C++ will be but LLMs should be able to really exploit them well.
In my experience, this is absolutely true. I wrote my own metaprogramming frontend for C and that's basically all you need. At this point, I consider the metaprogramming facilities of a language its most important feature, by far. Everything else is pretty much superfluous by comparison.
It should be done at some point. People can always develop languages with more or less things but piling more things on is just not that useful.
It sounds cool in the minds of people that are designing these things but it is just not that useful. Rust is in the same situation of adding endless crap that is just not that useful.
Specifically about this feature, people can just use asserts. Piling things onto the type system of C++ is never going to be that useful since it is not designed to be a type system like Rust's type system. Any improvement gained is not worth piling on more things.
Feels like people that push stuff do it because "it is just what they do".
It is difficult to overstate just how important these features are for high-performance and high-reliability systems software. These features greatly expand the kinds of safety guarantees that are possible to automate and the performance optimizations that are practical. Without it, software is much more brittle. This isn’t an academic exercise; it greatly reduces the amount of code and greatly increases safety. The performance benefits are nice but that is more on the margin.
One of the biggest knocks against Rust as a systems programming language is that it has weak compile-time and metaprogramming capabilities compared to Zig and C++.
Aren’t Rust macros more powerful than C++ template metaprogramming in practice?
Declarative macros deliberately don't share Rust's syntax because they are macros *for* Rust: if they shared the same syntax, everything you do would be escape upon escape sequence, since you want the macro to emit a loop but not to loop itself, etc. But other than the syntax they are pretty friendly; a one-day Rust bootstrap course should probably cover these macros at least enough that you don't use copy-paste to make those seven functions by hand.
However the powerful feature you're thinking of is procedural or "proc" macros and those are a very different beast. The proc macros are effectively compiler plugins, when the compiler sees we invoked the proc macro, it just runs that code, natively. So in that sense these are certainly more powerful, they can for example install Python, "Oh, you don't have Python, but I'm a proc macro for running Python, I'll just install it...". Mara wrote several "joke" proc macros which show off how dangerous/ powerful it is, you should not use these, but one of them for example switches to the "nightly" Rust compiler and then seamlessly compiles parts of your software which don't work in stable Rust...
In the space of language design, everything "more powerful" is not necessarily good. Sometimes less power is better because it leads to more optimisable code, less implementation complexity, less abstraction, and better LSP support. TL;DR: more flexibility and complexity is not always good.
Though I would also challenge the fact that Rust's metaprogramming model is "not powerful enough". I think it can be.
While I agree that, generally, compile time metaprogramming is a tremendously powerful tool, the C++ template metaprogramming implementation is hilariously bad.
Why, for example, is printing the source-code text of an enum value so goddamn hard?
Why can I not just loop over the members of a class?
How would I generate debug vis or serialization code with a normal-ish looking function call (spoiler, you can't, see cap'n proto, protobuf, flatbuffers, any automated dearimgui generator)
These things are incredibly basic and C++ just completely shits all over itself when you try to do them with templates
I understand some of these kinds of features are because Rust is Rust but it still feels useless to learn.
I haven't been following Rust development for about 2 years, so I don't know what the newest things are.
1. We didn't want to give the thing we're returning a name, it does have one, but we want that to be an implementation detail. In comparison the Rust stdlib's iterator functions all return specific named Iterators, e.g. the split method on strings returns a type actually named Split, with a remainder() function so you can stop and just get "everything else" from that function. That's an exhausting maintenance burden, if your library has some internal data structures whose values aren't really important or are unstable this allows you to duck out of all the extra documentation work, just say "It's an Iterator" with RPIT.
2. We literally cannot name this type, there's no agreed spelling for it. For example if you return a lambda its type does not have a name (in Rust or in C++) but this is a perfectly reasonable thing to want to do, just impossible without RPIT.
Blanket trait implementations ("auto impl trait for type that implements other trait") are an important convenience for conversions. If somebody wrote a From implementation then you get the analogous Into, TryFrom and even TryInto all provided because of this feature. You could write them, but it'd be tedious and error prone, so the machine does it for you.
Async/await is just fundamental to making efficient programs, I'm not sure what to mention here. Reading a file from disk, waiting for network I/O, etc are all catastrophically slow in CPU time and having a mechanism to keep a thread doing useful other work is important.
Actively writing code for the others you mentioned generally isn't required in the average program (e.g. you don't need to create your own proc macros, but it can help cut down boilerplate). To be fair though, I'm not sure how someone would know that if they weren't already used to the features. I imagine it must be what I feel like when I see probably average modern C++ and go "wtf is going on here"
A shoutout to Eiffel, the first "modern" (circa 1985) language to incorporate Design by Contract. Well done Bertrand Meyer!
Note that this is not the end of contracts. This is a minimum viable start that they intend to add to, but the missing parts are more complex.
Exactly. People stopped using Ada as soon as they were no longer forced to use it.
In other words on its own merits people don't choose it.
Maybe Ada's bad, but programmer preference isn't a strong enough argument. It's just as likely that newer software is buggier and more unsafe or that this otherwise isn't an apples-to-apples comparison.
But my anecdotal experience aside, it is plain to see that developers had the opportunity to continue with Ada and largely did not once they were no longer required to use it.
So, it is exceedingly unlikely that some conspiracy against C++, motivated by mustache-twirling Ada gurus, is afoot. And even if that were true, knocking C++ down several pegs will not make people go back to Ada.
C#, Rust, and Go all exist and are all immensely more popular than Ada. If there were to be a sudden exodus of C++ developers, these languages would likely be the main beneficiaries.
My original point, that C++ isn't what's standing in the way of Ada being popular, still stands.
From my outside vantage point, there seems to be a few different camps about what is desired for contracts to even be. The conflict between those groups is why this feature has been contentious for... a decade now?
Some of the pushback against this form of contracts is from people who desire contracts, but don't think that this design is the one that they want.
C++ will never, ever be modern and comprehensible because of 1 and 1 reason alone: backward compatibility.
It does not matter what version of C++ you are using, you are still using C with classes.
https://en.cppreference.com/w/cpp/compiler_support.html
Funny how gcc seems to be the top dog now, what happened to clang? Thought their codebase was supposed to be easier and more pleasant to work with? Or maybe just more hardcore compiler devs work on gcc?
Even C++23 is largely usable at this point, though there are still gaps for some features.
I for one can write C++ but I cannot write a single program in C. If the overlap was so vast, I would be able to write good C but I cannot.
I've done things with templates to express my ideas in C++ that I cannot do in other languages, and the behaviour of deterministic destructors is what sets it apart from C. It is comprehensible and readable to me.
I would argue that C++ is modern, since it is in use today. Perhaps your definition of "modern" is too narrow?
Finally, it certainly helps to have a standardized mechanisms instead of everyone rolling their own, especially with multiple libraries.
You are passing in a memory location that can be read or written to.
That’s it.
I worked on a massive codebase where we used Microsoft SAL to annotate all parameters to specify intent. The compiler could throw errors based on these annotations to indicate misuse.
This seems like an extension of that.
But, like modules and concepts, the committee has opted for a staggered implementation. What we have now is effectively syntax sugar over what could already be done with asserts, well-designed types, and exceptions.
An argument can be made that C++26 features like reflection add complexity but I don't follow that argument for contracts.
This should also clarify the complexity issue.
It sounds (and probably is) insane. But if a feature doesn't break backwards compatibility, and can be implemented in a way that doesn't noticeably affect compiler/IDE performance for codebases that ignore it, what's the issue? Specifically, what significant new issues would it cause that C++’s existing bloat hasn’t?
C++20 isn't fully implemented in any one compiler (https://en.cppreference.com/w/cpp/compiler_support.html#C.2B...).
Different TUs can be compiled with different settings for the contract behavior. But can they be binary compatible? In general, no. If a function is defined inline in a header, the compiler may have generated two different versions with different contract behaviors, which violates the ODR.
What happens if the contract check calls a helper function that throws an exception?
The whole thing is really, really complex and I don't assume that I understand it properly. But I can see that there are some valid concerns against the feature as standardized, and that makes me side with the opposition here: this was not baked enough yet.
It sounds like it should solve your problem. At first it seems to work. Then you keep on finding the footguns after it is too late to change the design.
Contracts as they are today won't solve every problem. However they can expand over time to solve more problems. (or at least that is the hope, time will tell - there is already a lot of discussion on what the others should be)
We already have some primitive ways to define preconditions, notably the assert macro and the 'restrict' qualifier.
I don't mind a more structured way to define preconditions which can automatically serve as both documentation and debug invariant checks. Though you could argue that a simpler approach would be to "standardize" a convention of using assert() more liberally at the beginning of functions as precondition checks; that is, a sequence of 'assert's before non-'assert' code should semantically be treated as the function's preconditions by documentation generators etc.
I haven't looked too deep into the design of the actual final contracts feature, maybe it's bad for reasons which have nothing to do with the fundamental idea.
I've been thinking that way for many years now, but clearly I've been wrong. Perhaps C++ is the one language to which the issue of excess complexity does not apply.
I think what you meant to say is popular. If a feature is popular it doesn't matter how bad it turns out in hindsight: you can't remove it without breaking too much code (you can slowly deprecate it over time, I'm not sure how you handle deprecation in D, so perhaps that is what editions give you). However if a great feature turns out not to be used you can remove it (presumably to replace it with a better version that you hope people will use this time, possibly reusing the old syntax in a slightly incompatible way)
So perhaps I think the issue is not committees per se, but how the committees are put together and what are the driving values.
It worked remarkably well. r7rs-small was done in 2013 and is enjoyed by many. The large variant is still not done and may never be done. That's no problem though, the important point was that it created a place to point people with ideas to instead of outright telling them "no".
Offtopic, but this is a problem in the web world, too. Once something is on a standards track, there are almost no mechanisms to vote "no, this is bad, we don't need this". The only way is to "champion" a proposal and add fixes to it until people are somewhat reasonably happy and a consensus is reached. (see https://x.com/Rich_Harris/status/1841605646128460111)
I don't think this opinion is well informed. Contracts are a killer feature that allows implementing static code analysis that covers error handling and verifiable correct state. This comes for free in components you consume in your code.
https://herbsutter.com/2018/07/02/trip-report-summer-iso-c-s...
Asserting that no one wants their code to correctly handle errors is a bold claim.
Modern C++ contracts are being sold as being purely for debugging. You can't rely on contracts like an assert to catch problems, which is an intentional part of the design of contracts.
I’m fascinated by the concept of deciding how much complexity (to a human) a feature has. And then the political process of deciding what to remove when everyone agrees something new needs to be accepted.
> bloated committee design and also incomplete
That's truly in that backdoor alley catching fire
Also almost every feature added to C++ adds a great deal of complexity, everything from modules, concepts, ranges, coroutines... I mean it's been 6 years since these have been standardized and all the main compilers still have major issues in terms of bugs and quality of implementation issues.
I can hardly think of any major feature added to the language that didn't introduce a great deal of footguns, unintended consequences, significant compilation performance issues... to single out contracts is unusual to say the least.
It does have a runtime cost. There's an attribute to opt back into undefined behavior on read and avoid the cost:

    int x [[indeterminate]];
    std::cin >> x;

But if you really, really want to leave it uninitialized, write:

    int x = void;

where you're not writing that by accident.

This requires that there is a default. Several modern languages (such as Go) insist on this; it means your types don't even model reality in this very fundamental way. Who is a person's default spouse? Even where you can imagine a default, it's sometimes undesirable to have one: for example, we already live in a society where somebody decided the default gender is male, and it might look too much like real data; a default birthday of 1 January also matches hundreds of thousands of Americans...
The most likely place you go after "Everything has a default" is the billion dollar mistake because you're inclined to just incorporate "or it's default invalid" into the type definition to get your default, and when you do that everywhere needs to have "check it's valid" code added, even if we already just checked that a moment ago.
Thus the current "erroneous behavior". It means this isn't a bug (compilers used to optimize out code paths where an uninitialized value is read, and this did cause real-world bugs even when it didn't matter what value was read). It also means the compiler is free to put whatever value it wants there. One of the goals was that the various sanitizers that check for use of uninitialized values still need to work, since the vast majority of the time a read of an uninitialized value is a bug in the code.
There are a lot of situations where a compiler cannot tell if a variable would be used uninitialized, so we can't rely on compiler warnings (it sometimes needs solving the halting problem).
...But the change to EB in this case does initialize everything by default?
It's an explicit choice in C++ to always accept correct programs (the alternative being to always reject incorrect programs†). The committee does not have to stick by this bad decision in each C++ version, of course they aren't likely to stop making the same bad choice, but it is possible to do so.
If you're allowed to take the other side, you can of course (Rust and several other languages do this) reject programs where the compiler isn't satisfied that you definitely always initialize the variable before its value is needed. Most obviously (but it's pretty annoying, so Rust does not do this) you could insist on the initialization as part of the variable definition in the actual syntax.
† You can't have both, by Rice's Theorem, Henry Rice got his PhD for figuring out how to prove this, last century, long before C++ was conceived. So you must pick, one or the other.
I'm surprised the "= void;" wasn't discussed. People liked it immediately in D, and other alternatives were not proposed.
1. If there’s no initializer and various conditions are met, then “the bytes have erroneous values, where each value is determined by the implementation independently of the state of the program.”
What does “independently” mean? Are we talking about all zeros? Is the implementation not permitted to use whatever arbitrary value was in memory? Why not?
2. What’s up with [[indeterminate]]? I would expect “indeterminate” to mean that the variable has a value that happens to be arbitrary (and may contain sensitive data, etc), not that it turns back into actual UB.
It can pick whatever value it wants and doesn't have to care what the program is doing.
Also the value has to stay the same until it's 'replaced'.
> Are we talking about all zeros?
It might be, but probably won't be. What makes you bring up all zeroes?
> Is the implementation not permitted to use whatever arbitrary value was in memory? Why not?
It can. What suggests it wouldn't be able to?
> 2. What’s up with [[indeterminate]]? I would expect “indeterminate” to mean that the variable has a value that happens to be arbitrary (and may contain sensitive data, etc), not that it turns back into actual UB.
"has a value that happens to be arbitrary" would be the default without [[indeterminate]]. Well, it can also error out if the compiler wants to do that.
"Whatever value was in memory" would be depending on the (former?) state of the program, wouldn't it?
It means what it says on the tin. Whatever value ends up being used must not depend on the state of the program.
> Are we talking about all zeros?
All zeros is an option, but the intent is to allow the implementation to pick other values as it sees fit:
> Note that we do not want to mandate that the specific value actually be zero (like P2723R1 does), since we consider it valuable to allow implementations to use different “poison” values in different build modes. Different choices are conceivable here. A fixed value is more predictable, but also prevents useful debugging hints, and poses a greater risk of being deliberately relied upon by programmers.
> Is the implementation not permitted to use whatever arbitrary value was in memory?
No, because the value in such a case can depend on the state of the program.
> Why not?
Doing so would defeat the purpose of the change, which is to turn nasal-demons-on-mistake into something with less dire consequences:
> In other words, it is still "wrong" to read an uninitialized value, but if you do read it and the implementation does not otherwise stop you, you get some specific value. In general, implementations must exhibit the defined behaviour, at least up until a diagnostic is issued (if ever). There is no risk of running into the consequences associated with undefined behaviour (e.g. executing instructions not reflected in the source code, time-travel optimisations) when executing erroneous behaviour.
> What’s up with [[indeterminate]]?
The idea is to provide a way to opt into the old full-UB behavior if you can't afford the cost of the new behavior.
> I would expect “indeterminate” to mean that the variable has a value that happens to be arbitrary (and may contain sensitive data, etc), not that it turns back into actual UB.
I believe the spelling matches how the term was used in previous standards. For example, from the C++23 standard [0] (italics in original):
> When storage for an object with automatic or dynamic storage duration is obtained, the object has an indeterminate value, and if no initialization is performed for the object, that object retains an indeterminate value until that value is replaced.
[0]: https://open-std.org/JTC1/SC22/WG21/docs/papers/2023/n4950.p...
Just like when I was learning rust and trying to read some http code but it was impossible because each function had 5 generics and 2 traits.
So I would have to look it up and be very careful about it since I can break something easily in C++.
This just makes things more difficult from the perspective of using/learning the language.
Similar problem with the "unsequenced" and "reproducible" attributes added in C. It sounded cool after I took the time to learn exactly (/s) what it means. But it is not worth the time to learn, and it is not worth the cognitive load it will put on people who read the code later, imo.
const int a = 10; // Just an immutable variable named a
constexpr int b = 20; // Still an immutable variable named b
static constinit int c = 30; // Now it isn't even immutable
For functions, const says "this function promises it doesn't change things"; constexpr says "this function is shiny and modern" and has no other real meaning (hence the "constexpr all the things" memes, you might as well); but consteval does mean that we're promising this must always be evaluated at compile time, so the evaluation is finished before runtime. Only a function can have this label.

Volatile is a mess because what you actually want are volatile intrinsics; indeed you might want more (or fewer) depending on the target. If your target can do single-bit hardware writes, it'd be nice to provide an intrinsic for that, rather than hoping you can write REG |= 0x40 in code and have that write a single bit... on platforms which do not have this single-bit write feature, that's going to compile to an unsynchronized read-modify-write which may cause problems. However, instead of having intrinsics, C's volatile was hacked into the type system, and C++ tries to keep that.
And yeah, it would probably be nice to also have some sane intrinsics to provide memory_order_consume semantics... but what can you do.
I thought constexpr was a hard physical constant, but in reality it's a weird hybrid
this visualisation helped me to wrap my head around it - https://vectree.io/c/c-constness-and-evaluation-qualifiers
Date in Europe: 30/03/2026
Date in China: 2026/03/30
Then you have Little Endian and you have Big Endian.
TL;DR: Some humans like to talk about the specific and then the general and others vice versa.
But here is really why I think the author referred to it as "London, Croydon"
"London, Croydon" communicates "Hey, we had this C++ standards meeting in London, one of the coolest cities in the world. (Be jealous!) We were helping add more complexity to the most complex language in the world in the lovely environment of London, England. Croydon is a piece of irrelevant detail... the meeting was in London, remember that!"
"Croydon, London" communicates "Hey, we had this C++ standards meeting in gritty Croydon... it was in London so I guess it was OK?? Sorry our budgets could not put us up in Westminster, London"
[End of Joke]
How far is Clang on reflection and contracts?
It has been on compiler explorer for a while
This has been used to add reflection to simdjson
It's nice to have new features, but what is really killing C++ is Cargo. I don't think a new generation of developers are going to be inspired to learn a language where you can't simply `cargo add` whatever you need and instead have to go through hell to use a dependency.
In the interest of pedantry, locating source files relative to the crate root is a language-level Rust feature, not something specific to Cargo. You can pass any single Rust source file directly to rustc (bypassing Cargo altogether) and it will treat it as a crate root and locate additional files as needed based on the normal lookup rules.
That being said, I'd argue that the fact that this happens so transparently that people don't really need to know this to use Cargo correctly is somewhat the point I was making. Compared to something like cmake, the amount of effort to use it is at least an order of magnitude lower.
For most crates, yes. But you might be surprised how many crates have a build.rs that is doing more complex stuff under the hood (generating code, setting environment variables, calling a C compiler, make or some other build system, etc). It just also almost always works flawlessly (and the script itself has a standardised name), so you don't notice most of the time.
And Mesons awesome dependency handling:
https://mesonbuild.com/Dependencies.html
https://mesonbuild.com/Using-the-WrapDB.html#using-the-wrapd...
https://nibblestew.blogspot.com/2026/02/c-and-c-dependencies...
I suffered with Java through Ant, Maven and Gradle (the oldest is the best). After reading about GNU Autotools I was wondering why the C/C++ folks still suffer. Right at that time Meson appeared and I skipped the suffering.
* No XML
* Simple to read and understand
* Simple to manage dependencies
* Simple to use options
Feel free to extend WrapDB.

In C++ you don't get lockfiles, you don't get automatic dependency install, you don't get local dependencies, there's no package registry, no version support, no dependency-wide feature flags (this is an incoherent mess in Meson), no notion of workspaces, etc.
Compared to Cargo, Meson isn't even in the same galaxy. And even compared to CMake, Meson is yet another incompatible incremental "improvement" that offers basically nothing other than cute syntax (which in an era when AI writes all of your build system anyway, doesn't even matter). I'd much rather just pick CMake and move on.
> I’m still surprised how people ignore Meson. Please test it :)
I did just that a few years ago and found it rather inconvenient and inflexible, so I went back to ignoring it. But YMMV I suppose.
> After reading about GNU Autotools
Consider Kitware's CMake.
The fact that building C++ is this opaque process defined in 15 different ways via make, autoconf, automake, cmake, ninja, with 50 other toolchains is something that continues to create a barrier to entry.
I still remember the horrors of trying to compile c++ in 2004 on windows without anything besides borland...
Standardizing the build system and toolchain needs to happen. It's a hard problem that needs to be solved.
I agree, and I also think it’s never happening. It requires agreeing on so many things that are subjective and likely change behaviour. C++ couldn’t even manage to get module names to be required to match the file name. That was for a new feature that would have allowed us to figure out exports without actually opening the file…
C and C++ are usually stuck in that antiquated thinking that you should build a module, package it into some libraries, install/export the library binaries and associated assets, then import those in other projects. That makes everything slow, inefficient, and wildly dangerous.
There are of course good ways of building C++, but those are the exception rather than the standard.
In C, ABI = API because the declaration of a function contains the name and arguments, which is all the info needed to use it. You can swap out the definition without affecting callers.
That's why Rust allows a stable C-style ABI; the definition of a function declared in C doesn't have to be in C!
But in a C++-style templated function, the caller needs access to the definition to do template substitution. If you change the definition, you need to recompile calling code i.e. ABI breakage.
If you don't recompile calling code and link with other libraries that are using the new definition, you'll violate the one-definition rule (ODR).
This is bad because duplicate template functions are pruned at link-time for size reasons. So it's a mystery as to what definition you'll get. Your code will break in mysterious ways.
This means the C++ committee can never change the implementation of a standardized templated class or function. The only time they did was a minor optimization to std::string in 2011 and it was such a catastrophe they never did it again.
That is why Rust will not support stable ABIs for any of its features relying on generic types. It is impossible to keep the ABI stable and optimize an implementation.
Rust is interested in having a properly thought out ABI that's nicer than the C ABI which it supports today. It'd be nice to have say, ABI for slices for example. But "freeze everything and hope" isn't that, it means every user of your language into the unforeseeable future has to pay for every mistake made by the language designers, and that's already a sizeable price for C++ to pay, "ABI: Now or never" spells some of that out and we don't want to join them.
The de-facto ABI for slices involves passing/storing pointer and length separately and rebuilding the slice locally. It's hard to do better than that other than by somehow standardizing a "slice" binary representation across C and C-like languages. And then you'll still have to deal with existing legacy code that doesn't agree with that strict representation.
Obviously it's easier to provide a stable ABI for, say, &'static [T] (a reference which lives forever to an immutable slice of T) or Option<NonZeroU32> (either a nonzero 32-bit unsigned integer, or nothing) than for String (amortized growable UTF-8 text) or File (an open file somewhere on the filesystem, whatever that means), and it will never be practical to provide some sort of "stable ABI" for arbitrary things like IntoIterator -- but that's exactly why the C++ choice was a bad idea. In practice, of course, the internal guts of things in C++ are not frozen; that would be a nightmare for maintenance teams. But in theory there should be no observable effect from such changes, and that discrepancy leads to endless bugs where a user found some obscure way to depend on what you'd hidden inside an implementation detail: the letter of the ISO document says your change is fine, but the practice of C++ development says it is a breaking change. The resulting engineering overhead at C++ vendors is made even worse by all the UB in real C++ software.
This is the real reason libc++ still shipped Quicksort as its unstable sort when Biden was President, many years after this was in theory prohibited by the ISO standard.† Fixing the sort breaks people's code, and they'd rather it was technically non-conforming and practically slower than have their crap code stop working.
† Hoare's Quicksort on its own is worse than O(n log n) for some inputs; you should use an introspective comparison sort (introsort) here. Those existed almost 30 years ago, but C++ only began to require O(n log n) worst-case behavior in 2011.
It seems to me the "convenient" options are the dangerous ones.
The traditional method is for third-party code to have a stable API. Newer versions add functions or fix bugs, but existing functions continue to work as before. API mistakes get deprecated and alternatives offered, but newly-deprecated functions remain available for 10+ years. The result is that you can link all applications against any sufficiently recent version of the library, e.g. the latest stable release, which can then be installed via the system package manager, with a manageable maintenance burden because only one version needs to be maintained.
Language package managers have a tendency to facilitate breaking changes. You "don't have to worry" about removing functions without deprecating them because anyone can just pull in the older version of the code. Except the older version is no longer maintained.
Then you're using a version of the code from a few years ago because you didn't need any of the newer features and it hadn't had any problems, until it picks up a CVE. Suddenly you have vulnerable code running in production but fixing it isn't just a matter of "apt upgrade" because no one else is going to patch the version only you were using, and the current version has several breaking changes so you can't switch to it until you integrate them into your code.
First you confuse API and ABI.
Second there is no practical difference between first and third-party for any sufficiently complex project.
Third you cannot have multiple versions of the same thing in the same program without very careful isolation and engineering. It's a bad idea and a recipe for ODR violations.
In any non-trivial project there will be complex dependency webs across different files and subprojects, and humans are notoriously bad at packaging pieces of code into sensible modules, libraries, or packages with well-defined and maintained boundaries. Maintaining ABI compatibility, deprecating things while introducing replacements, etc. is massive engineering work, and it simply makes people much less likely to change the way things are done, even when they are broken or not ideal. That's an effort you'll make for a kernel (and only at specific boundaries) but not for the average program.
I'm not confusing API with ABI. If you don't have a stable ABI then you essentially forfeit the traditional method of having every program on the system use the same copy (and therefore version) of that library, which in turn encourages them to each use a different version and facilitates API instability by making the bad thing easier.
> Second there is no practical difference between first and third-party for any sufficiently complex project.
Even when you have a large project, making use of curl or sqlite or openssl does not imply that you would like to start maintaining a private fork.
There are also many projects that are not large enough to absorb the maintenance burden of all of their external dependencies.
> Third you cannot have multiple versions of the same thing in the same program without very careful isolation and engineering.
Which is all the more reason to encourage every program on the system to use the same copy by maintaining a stable ABI. What do you do after you've encouraged everyone to include their own copy of their dependencies and therefore not care if there are many other incompatible versions, and then two of your dependencies each require a different version of a third?
> In any non-trivial project there will be complex dependency webs across different files and subprojects, and humans are notoriously bad at packaging pieces of code into sensible modules, libraries or packages, with well-defined and maintained boundaries.
This feels like arguing that people are bad at writing documentation, so we should reduce their incentive to write it, instead of coming up with ways to make doing the good thing easier.
If you import some ready made binaries, you have no way to guarantee they are compatible with the rest of your build or contain the features you need. If anything needs updating and you actually bother to do it for correctness (most would just hope it's compatible) your only option is usually to rebuild the whole thing, even if your usage only needed one file.
There is nothing faster and more efficient than building C programs. I'm also not sure what is dangerous about having libraries. C++ is quite different, though.
The same problems exist.
What are the good ways?
You should also avoid libraries, as they reduce granularity and needlessly complicate the logic.
I'd also argue you shouldn't have any kind of declaration of dependencies, and should simply deduce them transparently based on what the code includes, with some logic to map headers to implementation files.
Bazel is certainly not the solution; it's arguably closer to being the problem. The worst build system I have ever seen was Bazel-based.
Really? I'd love a link to even something that works as a toy project
> Bazel is certainly not the solution; it's arguably closer to being the problem. The worst build system I have ever seen was Bazel-based.
I agree
I usually make it so that it's fully integrated with wherever we store artifacts (for CAS), source (to download specific revisions as needed), remote running (which depending on the shop can be local, docker, ssh, kubernetes, ...), GDB, IDEs... All that stuff takes more work for a truly generic solution, and it's generally more valuable to have tight integration for the one workflow you actually use.
Since I also control the build image and toolchain (that I build from source) it also ends up specifically tied to that too.
In practice, I find that regardless of what generic tool you use like cmake or bazel, you end up layering your own build system and workflow scripts on top of those tools anyway. At some point I decided the complexity and overhead of building on top of bazel was more trouble than it was worth, while building it from scratch is actually quite easy and gives you all the control you could possibly need.
Which tool do you use for content-addressable storage in your builds?
>You should also avoid libraries, as they reduce granularity and needlessly complexify the logic.
This isn't always feasible though.
What's the best practice when one cannot avoid a library?
You hash all the inputs that go into building foo.cpp, and then that gives you /objs/<hash>.o. If it exists, you use it; if not, you build it first. Then if any other .cpp file ever includes foo.hpp (directly or indirectly), you mark that it needs to link /objs/<hash>.o.
You expand the link requirements transitively, and you have a build system. 200 lines of code. Your code is self-describing and you never need to write any build logic again; your build system is reliable, builds strictly what it needs while sharing artifacts across the team, and never leads to ODR violations.
It's the same with compilers: there's not one single implementation that is the compiler, and the ecosystem of compilers makes things more interesting.
There should be a happy path for the majority of C++ use cases so that I can make a package, publish it and consume other people's packages. Anyone who wants to leave that happy path can do so freely at their own risk.
The important thing is to get one system blessed as The C++ Package Format by the standard to avoid xkcd 927 issues.
The standard was initially meant to standardize existing practice. There is no good existing practice. Very large institutions depending heavily on C++ systematically fail to manage the build properly despite large amounts of third party licenses and dedicated build teams.
With AI, how you build and integrate together fragmented code bases is even more important, but someone has yet to design a real industry-wide solution.
I'm doing a migration of a large codebase from local builds to remote execution and I constantly have bugs with mystery shared library dependencies implicitly pulled from the environment.
This is extremely tricky because if you run an executable without its shared library, you just get "file not found" with no mention of which file is missing. Even AI doesn't understand this error.
You can also very easily harden this if you somehow don't want to capture libraries from outside certain paths.
You can even build the compiler in such a way that every binary it produces has a built-in RPATH if you want to force certain locations.
(And "absolute" or other adjectives don't qualify "correctness"... it simply is or isn't.)
You cannot cargo add Unreal, LLVM, GCC, CUDA,...
What'll spur adoption is CMake adopting Clang's two-phase compilation model that improves performance.
At that point every project will migrate overnight for the huge build-time win, since it'll avoid redundant preprocessing. Right now, the loss of parallelism hurts adoption too much.
IMO, the modules standard should have aimed to only support headers with no inline code (including no templates). That would be a severe limitation, but at least maybe it might have solved the problem posed by protobuf soup (AFAIK the original motivation for modules) and had a chance of being a real thing.
Yes, you have CPM, vcpkg, and Conan, but those are not really standard, and there is friction involved in getting them to work.
Ah, and the two major compiler frameworks that all those C++ wannabe replacements use as their backend.
Once big companies like Google started pulling out of the committee, they lost their connection to reality and now they're standardizing things that either can't be implemented or no one wants as specced.
Neither of those things require modules as currently defined.
Personally I use them in new projects using XMake and it just works.
There's not a compatible format between different compilers, or even different versions of the same compiler, or even the same versions of the same compiler with different flags.
This seems immediately to create too many permutations of builds for them to be distributable artifacts as we'd use them in other languages. More like a glorified object file cache. So what problem does it even solve?
They have effectively zero use outside of hobby projects. I don’t know that any open source C++ library I have ever interacted with even pretends that modules exist.
Meanwhile, the C++ build system is an abomination. Header files should be unnecessary.
Embedded is such a perfect fit for interface-based programming, but because the compiler can't resolve calls outside of a single source file, EVERYTHING gets vtable'd, which ruins downstream optimizations.
There are some ugly workarounds: CRTP, C-style (common header + different source files). To the person who says "use templates!"... no. I don't like templates. They are verbose and complex, and every time I try to use them I end up foot-gunning myself. Maybe it's a skill issue, but if you designed something that most people can't figure out, I'd argue the design is wrong.
C++ is SOOO close to doing compile-time polymorphism. It just needs a way to determine type across source files, which LTO sorta-kinda-but-not-really does.
I've seen some examples of C++ concepts replacing CRTP, but they used templates, which, again, I'm not a fan of.
This should be your default stack on any small-to-medium sized C++ project.
Bazel, the default pick for very large codebases, also has support for C++20 modules.
But once it works and you've set up the new stuff (I've started a new C++26 project with modules now), it's kinda awesome. I'm certainly never going back. The big compilers are also retroactively adding `import std` to C++20, so support is widening.
[1] https://gitlab.kitware.com/cmake/cmake/-/work_items/27706
Header-only projects are the best to convert to modules because you can put the implementation of a module in a "private module fragment" in that same file and make it invisible to users.
That prevents the compile-time bloat many header-only dependencies add. It also avoids distributing a `.cpp` file that has to be compiled and linked separately, which is why so many projects are header-only.
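As a sketch, a single-file module with a private module fragment might look like this (module name and contents are made up; it needs a module-aware compiler and a separate importing translation unit to be useful):

```cpp
// greeter.cppm -- one file holds both interface and implementation.
// Everything after `module :private;` is invisible to importers, so
// changing the bodies does not force importers to recompile.
export module greeter;

export int answer();  // the only thing importers can see

module :private;  // start of the private module fragment

int answer() { return 42; }  // implementation, hidden from importers
```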
That being said, while it looks better than CMake, for anything professional I need remote execution support to deviate from the industry standard. Zig doesn't have that.
This is because large C++ projects reach a point where they cannot be compiled locally if they use the full language. e.g. Multi-hour Chromium builds.
https://github.com/bazelbuild/remote-apis
Once you get a very large C++ project with several thousand compilation jobs over hundreds of devs, you need to distribute the build across multiple computers and have a shared cache for object files.
Zig doesn't seem to support that.
But headers are perfectly fine to deal with and have been for decades and decades! Next you'll be arguing that contents pages in all books should be removed.
eg. B = std::move(A); // You are worried about touching A when it's in this indeterminate state?
Currently move semantics in C++ requires that A is left in a 'moved from, but valid state' which means that:
1. The compiler must still generate code that calls the destructor.
2. Every destructor has to have some flag and a test in it, like: if (moved_from) { /* do nothing */ } else { free_resources(); }
(Granted, for some simple types the compiler might inline and remove redundant checks so it ends up with no extra code, but that is not guaranteed.)
With destructive moves the compiler can just forget about the object completely: no need to call the destructor, and destructors can be written as normal, caring only about the invariants established in the constructor.
Also, some of the implementation details and semantics do matter in an application dependent way, which makes it a bit of an opinionated feature. I would guess there is a lot of arguing over the set of tradeoffs suitable for a standard, since C++ tends to avoid opinionated designs if it can.
Someone has to write a proposal, bring it to the various meetings, and get it voted through each stage by all the parties involved.
Also WG21 tends to disregard C features that can already be implemented within C++'s type system.
https://www.hpcwire.com/2022/12/05/new-c-sender-library-enab...
Overlapping communication and computation has been a common technique for decades in high-performance computing to "hide latency", which leads to better scaling. Now standard C++ can be used to express parallel algorithms without tying to a specific scheduler.
1. Standardize a `restrict` keyword and semantics for it (tricky for struct/class fields, but should be done).
2. Uniform Function Call Syntax! That is, make the syntax `obj.f(arg)` mean simply `f(obj, arg)`. That would make my life much easier, both as a user of classes and as their author, particularly in my library authoring work. And while we're at it, let us use a class's name as a namespace for static methods, so that the static method Obj::f is simply the method f in namespace Obj.
3. Get compiler makers to have an ABI break, so that we can do things like passing wrapped values in registers rather than going through memory. See: https://stackoverflow.com/q/58339165/1593077
4. Get rid of the current allocators in the standard library, which are type-specific (ridiculous) and return pointers rather than regions of memory. And speaking of memory regions (i.e. with address and size but no element type) - that should be standardized too.
Someone has to bring a written spec to WG21 meetings and push it through.
And like in every open source project that doesn't go the way we like, the work is only done by those that show up.
In many ways, it isn't.
> Someone has to bring a written spec to WG21 meetings and push it through.
That is one way it is not like (most) other FOSS projects. In a typical FOSS project, people file bug reports and feature/change requests; they don't have to write a full paper merely for their idea to be given the time of day, and certainly don't have to appear physically at meetings held elsewhere in the world. Of course, the extent to which ideas and requests from the public are considered seriously and fairly is a spectrum: some FOSS projects give them real attention, others do not. Vis-a-vis WG21, the "public" is, to some extent: compiler author teams, standard library author teams, national bodies, and large corporations using C++. This is perhaps not entirely avoidable, since there are millions of C++ users, but still.
Anyway, what I described isn't just some personal ideas of mine, it is for the most part ideas which have been put forward before the committee, either directly in submitted papers or indirectly via public discussion in fora the committee is aware of.
2. Name lookup and overload resolution are already so complex, though! This will likely never be added because it's so core to C++ and would break so much. imo, it also blurs the line between what's your interface vs. what I've defined.
3. This is every junior C++ engineer's suggestion. Having ABI breaks would probably kill C++, even though it would improve the language long term.
4. Again, you make solid points, and I think a lot of the committee would agree with you. However, the committee's job is to adapt C++ in a backwards-compatible way, not to disrupt its users and APIs each new release.
There are definitely things to fix in C++, and every graduate engineer I've managed has had the same instinct to patch the standard without considering the full picture.
Proper reflection is exciting.
Also, useful: https://gcc.gnu.org/projects/cxx-status.html
The US government still uses C++ widely for new projects. For some types of applications it is actually the language of choice and will remain so for the foreseeable future.
Can you give an example, please? And how does it square with the government's ONCD report and other government docs "recommending" "safe" languages like Rust (noted for its ability to prevent memory-unsafe code), Go, Java, Swift, C#, Ruby, and Ada?
Among other things I design and implement high-performance C++ backends; for some of them I got SOC 2 Type II certification, but I am curious about the future. I do not give a flying fuck about the criteria for military projects, as I would not ever touch one even if given the chance.
An increasingly common requirement is the ability to robustly reject adversarial workloads in addition to being statically correct. Combined with the high-performance/high-scale efficiency requirements, this dictates what the software architecture can look like.
There are a few practical reasons Rust is not currently considered an ideal fit for this type of development. The required architecture largely disables Rust's memory-safety advantages. Modern C++ has significantly better features and capabilities under these constraints, yielding a smaller, more maintainable code base. People worry about supply chain attacks but I don't think that is a major factor here.
Less obvious, C++ has strong compile-time metaprogramming and execution features that can be used to extensively automate verification of code properties with minimal effort. This allows you to trivially verify many correctness properties of the code that Rust cannot. It ends up being a comparison of unsafe Rust versus verification maximalist C++20, which tilts the safety/reliability aspects pretty hard toward C++. Code designed to this standard of reliability has extremely low defect rates regardless of language but it is much easier in some languages than others. I even shipped Python once.
A lot of casual C++ code doesn't bother with this level of verification, though they really should. It has been possible for years now. More casual applications also have more exposure to memory safety issues but those mostly use Java in my experience, at least in government.
Would you be willing to share some more information about this? Interested in learning more since this sort of thing rarely seems to come up in typical situations I work in.
Of course, this is not completely guaranteed safety - but safety has certainly mattered.
Yes, this is what Stroustrup said, and it makes me laugh. IIRC he phrased it with more of a 'we had safety before Rust' attitude. It also misses the point: safety shouldn't be opt-in or require memorizing a rulebook. If safety is that easy in C++, why is everyone still sticking their hand in the shredder?
As for the Core Guidelines - most of them are not about safety; and - they are not to be memorized, but a resource to consult when relevant, and something to base static analysis on.
But C++ projects are usually really boring. I want to go back but glad I left. Has anyone found a place where C++ style programming is in fashion but isn't quite C++? I hope that makes sense.