The only thing that's less great is that this got so many fewer upvotes than all the Safe-C++ languages that never really had a chance to get into production in old codebases.
#if __cplusplus == 202302L

I'm all for C++ making these changes. For a lot of people, adding a bit of safety to the language they're going to use anyway is a big win. But in general, guarding against threading bugs, use-after-free, or the more obscure memory issues requires either expensive GC-like runtime checks (Fil-C has 0.5x-4x performance overhead and a large memory overhead) or compile-time checks. And C++ will never get Rust's extensive compile-time checks.
It's not.
Just reviewing the actual hardening of the standard library: in C++26 an implementation may be designated as hardened, in which case a violated precondition becomes a contract violation, which invokes the contract-violation handler, which may or may not produce a predictable outcome depending on which of four possible "evaluation semantics" is in effect.
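For the curious, a minimal sketch of what a check looks like under the C++26 (P2900) contract syntax; the function and predicate here are my own illustration, not from the standard:

    #include <span>

    // Hardened-library checks behave like this precondition. What happens when it
    // fails (ignore, observe, enforce, or quick_enforce) is decided by the
    // evaluation semantic chosen at build time, not by anything in this code.
    int first_element(std::span<const int> s)
        pre(!s.empty())
    {
        return s[0];
    }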
Oh, and get this... if two different translation units have different evaluation semantics, a situation known as "mixed mode", then you're shit out of luck with respect to any safety guarantees. Per this document [1], mixed-mode applications shall choose arbitrarily among the set of evaluation semantics, and as it turns out the standard library treats one of those evaluation semantics (observe) as undefined behavior. So unless you can get all of your third-party dependencies to use the same evaluation semantic, you have no way to ensure that your application is actually hardened.
So is C++26 adding changes? Yes, it's adding changes. Are these changes actual improvements? It's way too early to tell, but I do know one thing: it's not at all uncommon for C++ to introduce new features that substitute one set of problems for a new set of problems. There's literally a 300-page book that goes over 20 distinct forms of initializing an object [2] (a few are sketched after the references below); many of these forms exist to plug holes introduced by previous forms of initialization! For all we know the same thing might be happening here, where the classical "naive" undefined behavior is being alleviated but in the process C++ is introducing an entire new class of incredibly difficult-to-diagnose issues. And lest you think I'm just spreading FUD, consider this quote from a paper titled "C++26 Contracts are not a good fit for standard library hardening" [3], submitted to the C++ committee regarding this upcoming change, arguing that it risks giving nothing more than the illusion of safety:
>This can result in violations of hardened preconditions being undefined behaviour, rather than guaranteed to be diagnosed, which defeats the purpose of using a hardened implementation.
[1] https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2025/p29...
[2] https://www.amazon.ca/dp/B0BW38DDBK?language=en_US&linkCode=...
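To give a flavor of that initialization zoo, a small, non-exhaustive sketch of distinct forms (my own examples, not taken from the book):

    int a = 1;        // copy initialization
    int b(1);         // direct initialization
    int c{1};         // direct list initialization
    int d = {1};      // copy list initialization
    auto e = 1;       // copy initialization with type deduction
    auto f = int{1};  // initialization from a prvalue temporary
    int g{};          // value initialization (g is 0)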
- P3878 [0] was adopted, so the standard now forbids "observe" semantics for hardened precondition violations. To be fair, the paper doesn't explicitly say how this change interacts with mixed-mode contract semantics, and I'm not familiar enough with what's going on to fill in the gaps myself.
- It appears there is interest in adopting one of the changes proposed in D3911 [1], which introduces a way to mark contracts non-ignorable (example syntax is `pre!()` for non-ignorable vs. the current `pre()` for ignorable). A more concrete proposal will be discussed in the winter meeting, so this particular bit isn't set in stone yet.
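If something like D3911 lands, the distinction might look roughly like this (purely illustrative; the syntax isn't settled yet):

    int f(int i)
        pre(i > 0)   // ignorable: may be compiled out under the "ignore" semantic
    { return 100 / i; }

    int g(int i)
        pre!(i > 0)  // non-ignorable: the proposal's marker for a check that
                     // cannot be silently dropped
    { return 100 / i; }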
Since then, libc++ has categorized its hardening checks by cost, and you can scale them back too.
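For reference, libc++ selects the hardening mode via a macro that has to be consistent across the whole build; a minimal sketch (the mode names are per the libc++ docs, the toy program is mine):

    // Build with e.g.:
    //   clang++ -stdlib=libc++ -D_LIBCPP_HARDENING_MODE=_LIBCPP_HARDENING_MODE_FAST
    // Modes: _LIBCPP_HARDENING_MODE_{NONE,FAST,EXTENSIVE,DEBUG}
    #include <vector>

    int main() {
        std::vector<int> v{1, 2, 3};
        return v[3];  // out of bounds: traps under the hardened modes instead of UB
    }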
Once again C++ people imagining into existence Undefined Behaviour which isn't Security Critical as if somehow that's a thing.
Mostly I read the link because I was intrigued as to how this counted as "at scale", and it turns out that's misleading: the article's main body is about the (at scale) deployment at Google, not the hardening work itself, which wasn't "at scale" in any special way.
The author of TFA actually makes another related assumption:
> A crash from a detected memory-safety bug is not a new failure. It is the early, safe, and high-fidelity detection of a failure that was already present and silently undermining the system.
Not at all? Most memory-safety issues will never even show up on the radar, while with "hardening" you've converted all of them into crashes that definitely will, annoying customers. Surely there must be a middle ground, which leads us back to the "debug mode" that the article fails to criticize.
In fact, most software isn't security critical, at all. If you are writing software which is security critical, then I can understand this confusion; but you have to remember that most people don't.
No one knows what software will be security critical when it's written. We usually only find out after it's already too late.

Language maintainers have no idea what code will be written. The people writing libraries have no idea how their library will be used. The application developers often don't realize the security implications of their choices. Operating systems don't know much about what they're managing. Users may not even realize what software they're running at all, let alone the many differing assumptions about threat model implicitly encoded into different parts of the stack.
Decades of trying to limit the complexity of writing "security critical code" only to the components that are security critical has resulted in an ecosystem where virtually nothing that is security critical actually meets that bar. Take libxml2 as an example.
FWIW, I disagree with the position in the article that fail-stop is the best solution in general, but there's experimental evidence to support it at least. The industry has tried many different approaches to these problems in the past. We should use the lessons of that history.
Unless you're paying them, the people writing the libraries have no obligation to care. The real issue is Big Tech built itself on the backs of volunteer labor and expects that labor to provide enterprise-grade security guarantees. That's entitled and wholly unreasonable.
> Take libxml2 as an example.
libxml2 is an excellent example. I recommend you read what its maintainer has to say [1].
[1] https://gitlab.gnome.org/GNOME/libxml2/-/issues/913#note_243...
But this isn't a conversation limited to the big tech parasitism Nick is talking about. A quick check on my FOSS system implicates the text editor, the system monitor, the office suite, the windowing system, the photo editor, flatpak, the IDEs, the internationalization, a few daemons, etc as all depending on libxml2 and its nonexistent security.
Citation needed? There are all sorts of problems that don't "show up" but are bad. Obvious historical examples would be Heartbleed and Cloudbleed, or this ancient GTA bug [1].
1: https://cookieplmonster.github.io/2025/04/23/gta-san-andreas...
Most people around here don’t have any reason to have strong opinions about safety-critical code.
Most people around here spend the majority of their time trying to make their company money via startup culture, the annals of async web programming, and how awful some type systems are in various languages.
Working on safety-critical code with formal verification is the most intense, exhausting, fascinating work I’ve ever done.
Most people don’t work at a company that either needs or can afford a safety-critical toolchain sufficient for formal, certified verification.
The goal of formal verification and safety critical code is _not_ to eliminate undefined behavior, it is to fail safely. This subtle point seems to have been lost a long time ago on "*-end" developers trying to sell ads, or whatever.
And what makes you think buggy software only causes problems when hackers get in? Memory bugs cause memory corruption and crashes. I don’t want my pacemaker running somebody’s cowboy C++, even if the device is never connected to the internet.
> Your average web app can have security-critical issues but they probably won’t have safety-critical issues.
How many air-gapped systems have you worked on?
I've worked on safety-critical systems with network interfaces you can ping. Some of those systems were also air-gapped or partially isolated from the outside world. A rare few were even developed as safety critical.
> The goal of formal verification and safety critical code is _not_ to eliminate undefined behavior, it is to fail safely.
Software safety cases depend on being able to link the executable semantics of the code to your software safety requirements.

You don't inherently need to eliminate UB to define the executable semantics of your code, but in practice you do. You could instead do binary analysis of the final image; you wouldn't even need a qualified toolchain that way. But the semantics derived would only be valid for that exact build, and validation is one of the most expensive and time-consuming parts of safety-critical development.
Most people instead work at the source-code level and rely on qualified toolchains to translate defined code into binaries with equivalent semantics. Defining the executable semantics of source code inherently requires eliminating UB, because the kind of "unrestricted UB" we're talking about has no executable semantics, nor does any code containing it. Qualified toolchains (e.g. CompCert, Green Hills, GCC qualified via Solid Sands, Diab) don't guarantee correct translation of code without defined semantics, and coding standards like MISRA also require eliminating it.
As a matter of actual practice, safety critical processes "optimistically ignore" some level of undefined behavior, but that's not because it's acceptable from a principled stance on UB.
But undefined behavior is literally introduced as "the compiler is allowed to do anything, including deleting all your files". Of course that's security critical by definition?
Undefined behavior in the (poorly written) spec doesn't mean undefined behavior in the real world. A given compiler is perfectly free to specify the behavior.
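As a concrete illustration (my own sketch): signed integer overflow is undefined per the standard, but GCC and Clang will define it for you if you compile with -fwrapv:

    // UB per the standard when x == INT_MAX; built with -fwrapv, GCC/Clang specify
    // two's-complement wraparound, so the behavior is fully defined for that build.
    int increment(int x) {
        return x + 1;
    }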
Seems like the daily anti-C++ post.
> optional is unsafe in idiomatic use cases? I’d like to challenge that.
    std::optional<int> x(std::nullopt);  // x holds no value
    int val = *x;                        // UB: dereferencing an empty optional
Optional is unsafe by default: the above code is UB.

I also agree with them: I am pro-C++ too, but the current standard is a fucking mess. Go and look at modules if you haven't, for example (don't).
You can't magically make all the member functions on std::vector safe after a move, for example, unless the moved-from vector allocates itself a new (empty) buffer, which kills the performance benefits.
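A sketch of the moved-from situation being described (my example; the standard only promises the moved-from vector is "valid but unspecified"):

    #include <utility>
    #include <vector>

    void demo() {
        std::vector<int> v{1, 2, 3};
        auto w = std::move(v);  // v is left valid but unspecified
        v.push_back(4);         // legal (no preconditions), but v's prior contents
                                // are unspecified, so any "safety" here is on you
    }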
It's all by design.
I beg to differ. Humans are fallible. Static analysis of C++ cannot catch all cases and humans will often accept a change that passes the analyses.
You're ignoring that static analysis can be made to err on the side of safety rather than permissiveness.
Specifically, for optional dereferencing, static analysis can be made to disallow it unless it can prove the optional has a value.
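A sketch of what such an analysis would accept and reject (get() and use() are hypothetical, and the precise rules depend on the analyzer):

    #include <optional>

    std::optional<int> get();  // hypothetical producer
    void use(int);             // hypothetical consumer

    void demo() {
        std::optional<int> o = get();
        if (o.has_value()) {
            use(*o);  // accepted: the dereference is guarded by has_value()
        }
        use(*o);      // rejected: the analyzer cannot prove o holds a value here
    }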
Ho ho ho good one.
Anyway, safety-checked modes are sufficient for many programs; this article claims otherwise, but then contradicts itself by showing that most issues were caught using... safety-checked modes.
No, it won't. https://gcc.godbolt.org/z/Mz8sqKvad
The problem is not nullopt, but that client code can simply dereference the optional instead of being forced to pattern-match. And the next problem, as the other commenter mentioned above, is that you cannot make any claims about what will happen when you do so, because the standard just says "UB". Other languages like Haskell also have things like fromJust, but at least the behaviour is well-defined when the value is Nothing.
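For comparison, C++ does ship a defined-behavior accessor; a quick sketch:

    #include <optional>

    int f() {
        std::optional<int> x;  // empty
        return x.value();      // defined: throws std::bad_optional_access
                               // (writing *x here would be UB instead)
    }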
It's not a pointer.
> The following code for example, simply returns an uninitialized value:
    #include <optional>

    int f() {
        std::optional<int> x(std::nullopt);  // x holds no value
        return *x;  // UB: dereferencing an empty optional
    }

tl;dr: use-after-move, or dereferencing null.