The rest looks very reasonable, like avoiding locale-hell.
Some of it is likely options that sand rough edges off of the standard lib, which is reasonable.
Yea, you encounter this a lot at companies with very old codebases. Don't use "chrono" because we have our own date/time types that were made before chrono even existed. Don't use standard library containers because we have our own containers that date back to before the STL was even stable.
I wonder how many of these (or the Google style guide rules) would make sense for a project starting today from a blank .cpp file. Probably not many of them.
Flashback to last job. Wrote their own containers. Opaque.
You ask for an item from it, you get back a void pointer to the item. You ask for the previous or the next, and you hand back that void pointer (it then goes through the data to find that one again, so it knows where you want the next or previous from) and get a different void pointer. No random access. You had to start with a special function that would give you the first item and go from there.
They screwed up the end, or the beginning, depending on what you were doing, so you wouldn't get back a null pointer if there was no next or previous. You had to separately check for that.
It was called an iterator, but it wasn't one: an iterator is something for iterating over a container, and this thing was the container, and it didn't expose actual iterators either.
When I opened it up, inside there was an actual container. Templated, so you could choose the real inner container; the default was a QList (as in Qt 4.7.4). The million-line codebase contained no other uses; it was always just the default. They took a QList and wrapped it inside a machine that only dealt in void pointers, stripping away almost all functionality, safety, and any ability to use the standard algorithms.
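For the curious, here's a minimal hypothetical reconstruction of what that wrapper's interface must have looked like, pieced together purely from the description above (all names invented, with std::list standing in for the QList):

    #include <list>

    // Hypothetical reconstruction: everything goes through void*, with no
    // random access and no real iterators.
    class OpaqueList {
        std::list<int> items_;  // the real container hidden inside (originally a QList)
    public:
        void add(int value) { items_.push_back(value); }

        // The special function you had to start with.
        void* first() { return items_.empty() ? nullptr : &items_.front(); }

        // You hand back the void* you were given; the wrapper walks the data
        // to find that element again, then returns a pointer to its neighbor.
        // (Per the story above, the real one didn't even reliably return null
        // at the end; callers had to check for that separately.)
        void* next(void* current) {
            for (auto it = items_.begin(); it != items_.end(); ++it) {
                if (static_cast<void*>(&*it) == current) {
                    ++it;
                    return it == items_.end() ? nullptr : static_cast<void*>(&*it);
                }
            }
            return nullptr;
        }
    };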
I suspect but cannot prove that the person who did this was a heavy C programmer in the 1980s. My guess is that they first encountered generic containers that did this sort of thing in C (a search for "generic linked list in C" gives some ideas, for example) and, when they had to move on to C++, learned just enough C++ to recreate what they were used to. And then made it the fundamental container class in millions of lines of code.
The STL makes you pay for ABI stability whether you want it or not. For some use cases this doesn't matter, and there are some "proven" parts of the STL that need a lot of justification to substitute, yada yada std::vector and std::string.
But it's not uncommon to see unordered_map substituted with, say, sparsehash or robin_map, and in C++ libraries creating interfaces that allow for API-compatible alternatives to use of the STL is considered polite, if not necessarily ubiquitous.
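A minimal sketch of that pattern (illustrative names; the library simply takes the map type as a template parameter instead of hard-coding the STL, so any alternative that mirrors unordered_map's member API can be slotted in):

    #include <string>
    #include <unordered_map>

    // Library code parameterized on the container, defaulting to the STL.
    template <class Map = std::unordered_map<std::string, int>>
    class SymbolTable {
        Map table_;
    public:
        void set(const std::string& k, int v) { table_[k] = v; }
        int  get(const std::string& k) const { return table_.at(k); }
    };

    int main() {
        SymbolTable<> syms;  // default: std::unordered_map
        // SymbolTable<tsl::robin_map<std::string, int>> syms;  // drop-in swap,
        // assuming the alternative exposes the same member functions
        syms.set("x", 1);
        return syms.get("x") - 1;
    }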
I know, I know, the long run does not exist in today's investor-dominated scenarios. Code modernization is a fairytale. So far I've seen no exception in my limited set of experiences (but with various codebases going back to the early 90's with patchy upgrades here and there, looking like an old coat mended many, many times with patches of various sizes, materials, and colours).
> Use char and unprefixed character literals. Non-UTF-8 encodings are rare enough in Chromium that the value of distinguishing them at the type level is low, and char8_t* is not interconvertible with char* (what ~all Chromium, STL, and platform-specific APIs use), so using u8 prefixes would obligate us to insert casts everywhere. If you want to declare at a type level that a block of data is string-like and not an arbitrary binary blob, prefer std::string[_view] over char*.
If your codebase has those guarantees, go ahead and use it.
True, but sizeof(char) is defined to be 1. In section 7.6.2.5:
"The result of sizeof applied to any of the narrow character types is 1"
In fact, char and associated types are the only types in the standard where the size is not implementation-defined.
So the only way that a C++ implementation can conform to the standard and have a char type that is not 8 bits is if the size of a byte is not 8 bits. There are historical systems that meet that constraint but no modern systems that I am aware of.
[1] https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/n49...
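Both guarantees are easy to check mechanically (assuming a C++20 compiler for the char8_t line):

    #include <climits>

    static_assert(sizeof(char) == 1, "guaranteed by the standard");
    static_assert(sizeof(char8_t) == 1, "also a narrow character type, so also 1");
    static_assert(CHAR_BIT == 8, "NOT guaranteed; merely true on all modern systems");

    int main() {}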
But instead we get this mess. I guess it's because there's too much Microsoft in the standard, and they're the only ones who still don't have UTF-8 everywhere in Windows.
The C++ standard explicitly says that it has the same size, signedness, and alignment as unsigned char, but it's a distinct type. So it's pretty useless, and badly named.
Have you read the standard? It says: "The result of sizeof applied to any of the narrow character types is 1." Here, "narrow character types" includes char and char8_t. So technically they aren't guaranteed to be 8 bits, but they are guaranteed to be one byte.
Texas Instruments' compiler seems to be celebrating C++14 support: https://www.ti.com/tool/C6000-CGT
CrossCore Embedded Studio apparently supports C++11 if you pass a switch in requesting it, though this FAQ answer suggests the underlying standard library is still C++03: https://ez.analog.com/dsp/software-and-development-tools/cce...
Everything I've found CodeWarrior related suggests that it is C++03-only: https://community.nxp.com/pwmxy87654/attachments/pwmxy87654/...
Aside from that, from what I can tell, those esoteric architectures are being phased out in favor of running DSP workloads on Cortex-M, which is just ARM.
I'd love it if someone who was more familiar with DSP workloads would chime in, but it really does seem that trying to be the language for all possible and potential architectures might not be the right play for C++ in 202x.
Besides, it's not like those old standards or compilers are going anywhere.
ISO/IEC 9899:2024 section 7.30
> char8_t which is an unsigned integer type used for 8-bit characters and is the same type as unsigned char;
The problem is that too many people drank too much Kool-Aid and try to parrot everything to the letter without understanding the bigger picture.
The best example would be Kubernetes. Employed by many orgs that have 20 devs and 50 services.
Reasonable summary. There's some massive NIH syndrome going on.
Another piece is that a lot of stuff that makes sense in the open source world does not make sense in the context of the giant google3 monorepo with however many billions of lines of code all in one pile.
A good decision. I tried to use it once and realized that it can't even handle UTF-8 properly. It's a mystery to me how such a flawed design was standardized at all.
That triggered a flash of feeling extremely old realizing we broke ground on this codebase 20 years ago this year!
https://chromium.googlesource.com/chromium/src/+/main/styleg...
Key takeaway => "Things would probably be different if we had to do it all over again from scratch."
"On their face, the benefits of using exceptions outweigh the costs, especially in new projects. However, for existing code, the introduction of exceptions has implications on all dependent code. If exceptions can be propagated beyond a new project, it also becomes problematic to integrate the new project into existing exception-free code. Because most existing C++ code at Google is not prepared to deal with exceptions, it is comparatively difficult to adopt new code that generates exceptions.
Given that Google's existing code is not exception-tolerant, the costs of using exceptions are somewhat greater than the costs in a new project. The conversion process would be slow and error-prone. We don't believe that the available alternatives to exceptions, such as error codes and assertions, introduce a significant burden.
Our advice against using exceptions is not predicated on philosophical or moral grounds, but practical ones. Because we'd like to use our open-source projects at Google and it's difficult to do so if those projects use exceptions, we need to advise against exceptions in Google open-source projects as well. Things would probably be different if we had to do it all over again from scratch."
For example:
> The <filesystem> header, which does not have sufficient support for testing, and suffers from inherent security vulnerabilities.
Source code should all be UTF-8 natively, letting you directly write UTF-8 text between quotes.
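For illustration, assuming a UTF-8 source and execution character set (e.g. clang's defaults, or MSVC with /utf-8), a plain char literal already carries UTF-8 bytes, while the u8 prefix produces the distinct char8_t type that the Chromium rationale complains about:

    #include <cstring>
    #include <iostream>

    int main() {
        // Plain literal: UTF-8 bytes, directly usable with char* APIs.
        const char* s = "héllo";
        std::cout << std::strlen(s) << "\n";  // 6: 'é' is two UTF-8 bytes

        // u8 literal: the same bytes, but char8_t (C++20) needs a cast
        // before any char*-based API will accept it.
        const char8_t* u = u8"héllo";
        std::cout << (std::strcmp(s, reinterpret_cast<const char*>(u)) == 0) << "\n";
    }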
Exactly their rationale.
These literals are a solution in search of a problem ... which is real but has a much better solution.
C# similarly has old warts that are discouraged now. .NET Framework is a great example (completely different from modern c#, which used to be called "dotnet core"). WPF and MAUI are also examples. Or when "dynamic" was used as a type escape hatch before the type system advanced to not need it. ASP being incompatible with ASP.NET, the list goes on.
They're just languages, there's no reason to pretend they're perfect.
Dynamic is largely unnecessary, and it was unnecessary even when it was introduced.
ASP and ASP.NET are completely unrelated. ASP was designed to allow dynamic webpages to be written in VBScript (like CGI). This is not something you want to do in modern languages.
Almost all of this is incorrect or comparing apples to oranges.
.NET Framework and .NET Core are runtimes and standard library implementations, not languages. C# is a language that can target either runtime or both. Framework is still supported today, and you can still use most modern C# language features in a project targeting it. WPF and MAUI are both still supported and widely used. ASP predates .NET; C# was never a supported language in it. ASP.NET Core has largely replaced ASP.NET, but that's again a library and framework, not a language feature.
Dynamic in C# and the DLR are definitely not widely used, because it's both difficult to use safely and doesn't fit well with the dominant paradigm of the language. If you're looking for standard library warts, BinaryFormatter would have been an excellent example.
I use it when deserializing unknown message types.
Unlike C#, which has both delegates and lambdas, for example. Also finalizers and IDisposable.
I guess the difference is it's rarely "dangerous" or "hard to reason about" using the old features unlike what I see in the C++ list. Java replaces things with better things and momentum shifts behind them kind of naturally because the better things are objectively better.
The instanceof[0] operator is typically banned from use in application code and often frowned upon in library implementations.
0 - https://docs.oracle.com/javase/specs/jls/se7/html/jls-15.htm...
Reflection - unless you really need to do something fancy, it's almost certainly a very bad idea in normal application code.
Other than that it’s either just limiting yourself to a specific JVM version or telling people not to use old syntax/patterns that were replaced with better options long ago.
They’d rather see it done the same way it would’ve been in any other similar language than with a language specific feature.
There are also portability concerns, given that projects like Chromium have to be easily portable across a vast number of platforms (this shows with things like long long, which is also on the list).
Anyway, about these C++ conventions - to each software house its own I guess. I don't think banning exceptions altogether is appropriate; and I don't see the great benefit of using abseil (but feel free to convince me it's really that good.)
They are clearly not against them per se. It simply wasn't practical for them to introduce them into their codebase.
And I think a lot of the cons of exceptions are handled in languages like F#, etc. If f calls g which calls h, and h throws an exception, the compiler will require you to deal with it somehow in g (either handle or explicitly propagate).
> the compiler will require you to deal with it somehow in g
I agree, this is the sensible solution.
Exceptions in high-level languages avoid many of these issues by virtue of being much further away from the metal. It is a mis-feature for a systems language. C++ was originally used for a lot of high-level application code, where exceptions might make sense, that you would never write in C++ today.
I don't think this is true. There is A LOT of C++ for GUI applications, video games, all kinds of utilities, scientific computing, and more. In fact, I find that the transition away from native GUI toolkits in C/C++ to "modern" alternatives has led to a regression in UI performance in general. Desktop programs performed better 20 years ago, when everything was written in Win32, Qt, GTK and the like and people did not rely on bloated Web toolkits for desktop development. Even today you can really feel how much more snappy and robust "old school" programs are compared to Electron and whatnot.
I can assure you: Most C++ SW is not written for low-level.
> exceptions can introduce nasty edge cases that are difficult to detect and reason about.
That's true, except for languages that ensure you can't simply forget that something deep down the stack can throw an exception.
BTW, I'm not saying C++'s exceptions are in any way good. My point is that exceptions are bad in C++, and not necessarily bad in general.
Sometimes it is not safe to unwind the stack. The language is not relevant. Not everything that touches your address space is your code or your process.
Exception handlers must have logic and infrastructure to detect these unsafe conditions and then rewrite the control flow to avoid the unsafety. This both adds overhead to the non-exceptional happy path and makes the code flow significantly uglier.
The underlying cause still exists when you don't use exceptions but the code for reasoning about it is highly localized and usually has no overhead because you already have the necessary context to deal with it cleanly.
This is where garbage-collected languages shine.
Exceptions are more robust, not less.
Everyone (except Go devs) knows that those are the worst. Exceptions are better, but still less reliable than Result.
https://home.expurple.me/posts/rust-solves-the-issues-with-e...
Where people use things like anyhow.[0]
Interestingly, the Microsoft C/C++ compiler does support structured exception handling (SEH). It's used even in the NT kernel and in drivers. I'm not saying it's the same thing as C++ exceptions, since it's designed primarily for handling hardware faults and is simplified, but it still shares some core principles (guarded regions, stack unwinding, etc.). So a limited version of exception handling can work fine even in something like an OS kernel.
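A minimal sketch of a guarded region (MSVC-only; __try/__except are Microsoft extensions, not standard C++):

    #include <windows.h>
    #include <cstdio>

    int main() {
        __try {
            int* p = nullptr;
            *p = 42;  // raises an access violation at the hardware level
        }
        __except (GetExceptionCode() == EXCEPTION_ACCESS_VIOLATION
                      ? EXCEPTION_EXECUTE_HANDLER
                      : EXCEPTION_CONTINUE_SEARCH) {
            std::printf("recovered from an access violation via SEH\n");
        }
        return 0;
    }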
There are two main limitations. First, the compiler currently has no idea what can be safely unwound; you could likely annotate objects to provide this information. Second, there is currently no way to tell the compiler what to do when an object on the call stack cannot be unwound safely.
A lot of error handling code in C++ systems code essentially provides this but C++ exceptions can't use any of this information so it is applied manually.
Google’s reasons for banning exceptions are historical, not technical. Sadly, this decision got enshrined in Google C++ Style Guide. The guide is otherwise pretty decent and is used by a lot of projects, but this particular part is IMO a disservice to the larger C++ ecosystem.
There are things you can't do easily in C++ without using exceptions, like handling errors that happen in a constructor and handling the case when `new` cannot allocate memory. Plus, a lot of the standard library relies on exceptions. And of course there's the stylistic argument of clearly separating error handling from the happy-path logic.
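A short sketch of both cases (illustrative names):

    #include <new>
    #include <stdexcept>
    #include <vector>

    struct Connection {
        std::vector<char> buffer;
        explicit Connection(int size) {
            // A constructor has no return value, so failure can only be
            // reported by throwing (or via a separate factory/is-valid flag).
            if (size <= 0) throw std::invalid_argument("size must be positive");
            buffer.resize(static_cast<std::size_t>(size));
        }
    };

    int main() {
        try {
            Connection c(-1);
        } catch (const std::invalid_argument&) {
            // handle construction failure
        }

        // Plain `new` reports failure via std::bad_alloc; opting out requires
        // the nothrow overload plus an explicit null check.
        char* p = new (std::nothrow) char[1ull << 40];  // likely to fail
        if (p == nullptr) {
            // allocation failed without an exception
        }
        delete[] p;  // fine: delete[] on nullptr is a no-op
    }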
I won't argue that it's popular to ban them, though. And often for good reasons.
In such a scenario there's no error recovery; the software is expected to shut down and raise a loud error.
... That seems like a pretty accurate description of how exception handling mechanisms are implemented under the hood. :)
And - very often, you would _not_ shut down. Examples:
* Failure/error in an individual operation or action does not invalidate all others in the set of stuff to be done.
* Failure/error regarding the interaction with one user does not mean the interaction with other users also has to fail.
* Some things can be retried after failing, and may succeed later: I/O; things involving resource use, etc.
* Some actions have more than one way to perform them, with the calling code unable to know a priori whether all of them are appropriate. So it tries one of them, and if that fails, tries another, etc. (see the sketch below).
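That last pattern, as a tiny self-contained sketch (fetchFromCache/fetchFromNetwork are hypothetical stand-ins):

    #include <iostream>
    #include <stdexcept>
    #include <string>

    // Two hypothetical ways to perform the same action.
    std::string fetchFromCache(const std::string& key) {
        throw std::runtime_error("cache miss for " + key);  // simulated failure
    }
    std::string fetchFromNetwork(const std::string& key) {
        return "value-for-" + key;  // simulated success
    }

    // The caller cannot know a priori which strategy will work, so it tries
    // one and falls back to the other on failure.
    std::string fetch(const std::string& key) {
        try {
            return fetchFromCache(key);
        } catch (const std::runtime_error&) {
            return fetchFromNetwork(key);
        }
    }

    int main() { std::cout << fetch("user:42") << "\n"; }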
I like the idea of an exception as a way to blow out of the current context so that something else can catch it and handle it in a generic manner. I don't like the idea of exceptions used to hide errors or for conditional logic, because then you have to know what is handling it all. Much easier to handle it there and then, or use a type-safe equivalent (like a Maybe or Either monad), or just blow that shit up as soon as you can't recover from the unexpected.
No, that's what assertions or contracts are for.
Most exceptions are supposed to be handled. The alternatives to exceptions in C++ are error codes and `std::expected`. They are used for errors that are expected to happen (even if they may be exceptional). You just shouldn't use exceptions for control flow. (I'm looking at you, Python :)
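A minimal C++23 sketch of the `std::expected` alternative (assumes a standard library that ships <expected>):

    #include <charconv>
    #include <expected>
    #include <iostream>
    #include <string>
    #include <string_view>

    // Parse a TCP port, reporting failure as a value instead of throwing.
    std::expected<int, std::string> parsePort(std::string_view s) {
        int v = 0;
        auto [ptr, ec] = std::from_chars(s.data(), s.data() + s.size(), v);
        if (ec != std::errc{} || ptr != s.data() + s.size())
            return std::unexpected("not a number: " + std::string(s));
        if (v < 1 || v > 65535)
            return std::unexpected("port out of range: " + std::string(s));
        return v;
    }

    int main() {
        auto port = parsePort("8080");
        if (port) std::cout << "port " << *port << "\n";
        else      std::cout << "error: " << port.error() << "\n";
    }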
On banning exceptions: "Things would probably be different if we had to do it all over again from scratch."
https://google.github.io/styleguide/cppguide.html#Exceptions