You make it sound worse than it really is today. Using submodules for everything is the pre-vcpkg, pre-FetchContent, pre-ExternalProject way of including dependencies—more than half a decade out of date, and arguably more appropriate for GNU Autotools.
With CMake and vcpkg, it's not that much harder: add vcpkg as a Git submodule, add a vcpkg.json, and use its CMake toolchain to bootstrap and install packages with find_package(), done. Said vcpkg.json can be as minimal as (taken from my own projects):
{
  "dependencies": [
    "spdlog",
    "vulkan-sdk-components",
    "libpng"
  ]
}
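On the CMake side, a minimal consuming sketch could look like the following (the project name, target name and source file are placeholders of mine, not from the post; spdlog and libpng export the usual spdlog::spdlog and PNG::PNG targets):

    cmake_minimum_required(VERSION 3.21)
    project(my_app CXX)

    # Each find_package() resolves against whatever vcpkg installed from vcpkg.json
    find_package(spdlog CONFIG REQUIRED)
    find_package(PNG REQUIRED)

    add_executable(my_app src/main.cpp)
    target_link_libraries(my_app PRIVATE spdlog::spdlog PNG::PNG)

Configure with something like `cmake -B build -S . -DCMAKE_TOOLCHAIN_FILE=vcpkg/scripts/buildsystems/vcpkg.cmake`; pointing at the toolchain file is what triggers the manifest install.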
- vcpkg: how do you handle dependencies that carry custom patches only relevant to my software, or where maintainers apply patches once every six months, or where the author doesn't do versioning and the correct version to use is git HEAD?
- vcpkg: not sure how I can pass specific flags to dependencies. For instance I need to build LLVM with specific CMake flags.
- CMake FetchContent: how do you handle dependencies that are other repos from your organization, which may well get patches as part of the development of the software? With FetchContent all of those go into build directories and aren't treated as source. I would like to be able to tell CMake "for this build, use source folder /foo for dependency 'foo'" instead of cloning its 300 MB repo again.
- How do you handle dependencies that take ages to build? My software uses Qt, LLVM, libclang, FFmpeg and a fair amount of other things. When I tried with vcpkg, the experience for a new contributor was something like a three-hour build on an average laptop and dozens of gigabytes of space (the software build itself is ~5 minutes with the current precompiled SDK I ship). The space issue is critical: I often get students, interns, OSS contributors, etc. who simply cannot spare 30 GB of free space on cheap laptops with 256 GB SSDs.
Yeah, this is a pain point; maintainers need to be more on-the-ball about updating and maintaining packages and package versioning. Maintainers for large Linux package repos (e.g. apt, yum, pacman) can do it, I don't see why vcpkg maintainers can't.
> vcpkg: not sure how I can pass specific flags to dependencies. For instance I need to build LLVM with specific CMake flags.
Use overlay ports[1] and edit the portfile.cmake to pass in additional variables[2]. If you want this to be really configurable every time, then use `VCPKG_ENV_PASSTHROUGH`[3] in a custom triplet[4]. This Stack Overflow answer[5] (full disclosure: I wrote it) explains why you have to do this.
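As a rough sketch, the overlay port's portfile.cmake could forward extra cache variables like this (the overlay directory layout and the flags below are illustrative picks of mine, not what the stock llvm port does):

    # overlay-ports/llvm/portfile.cmake (fragment; path is just an example)
    vcpkg_cmake_configure(
        SOURCE_PATH "${SOURCE_PATH}"
        OPTIONS
            -DLLVM_ENABLE_PROJECTS=clang      # extra CMake flags forwarded to the dependency's configure step
            -DLLVM_TARGETS_TO_BUILD=X86
    )
    vcpkg_cmake_install()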
> I would like to be able to tell CMake "for this build, use source folder /foo for dependency "foo"
Use `FetchContent_Declare` with `SOURCE_DIR`[6] (the link goes to `ExternalProject`, but every option you can use there you can also use with `FetchContent`), e.g.
FetchContent_Declare(fmt SOURCE_DIR "${CMAKE_CURRENT_LIST_DIR}/../thirdparty/fmt/")
FetchContent_MakeAvailable(fmt)
> How do you handle dependencies that take ages to build?

Link vcpkg's asset and binary caches to a network drive that all developer terminals can access straightforwardly. S3, Azure, GCP, GitHub, and local/network filesystems are all supported[7][8].
[1]: https://learn.microsoft.com/en-gb/vcpkg/concepts/overlay-por...
[2]: https://learn.microsoft.com/en-gb/vcpkg/get_started/get-star...
[3]: https://learn.microsoft.com/en-gb/vcpkg/users/triplets#vcpkg...
[4]: https://learn.microsoft.com/en-gb/vcpkg/concepts/triplets
[5]: https://stackoverflow.com/a/77954891/1654223
[6]: https://cmake.org/cmake/help/latest/module/ExternalProject.h...
[7]: https://learn.microsoft.com/en-gb/vcpkg/users/assetcaching
[8]: https://learn.microsoft.com/en-gb/vcpkg/consume/binary-cachi...
> Use `FetchContent_Declare` with `SOURCE_DIR`
I know about this, but it doesn't solve my problem: I want people who develop the project to be able to point to the source dir elsewhere, but I want e.g. CI and people who just grab it from GitHub to have it point to some commit in the usual FetchContent fashion. And I don't want to have to pass CMake flags for each "co-developed" dependency; that would be a very bad developer experience too. In my experience, every CMake flag I add is another couple of support tickets from junior developers mistyping it.
> a network drive that all developer terminals can access straightforwardly
Not viable for OSS projects developed in the wild
Then set up your CMake logic to use FetchContent_Declare based on the boolean value of an option, set up CMake presets[1] with hard-coded values for each CMake command-line option, and ask your developers/set CI to choose between these presets.
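A rough sketch of that switch, with placeholder names (FOO_LOCAL_SOURCE, the repository URL and the commit are all assumptions of mine):

    # Developers set FOO_LOCAL_SOURCE (e.g. via a preset); CI and casual clones leave it
    # empty and get the pinned commit as usual.
    set(FOO_LOCAL_SOURCE "" CACHE PATH "Path to a local checkout of foo")
    include(FetchContent)
    if(FOO_LOCAL_SOURCE)
      FetchContent_Declare(foo SOURCE_DIR "${FOO_LOCAL_SOURCE}")
    else()
      FetchContent_Declare(foo
        GIT_REPOSITORY https://example.com/org/foo.git                    # placeholder URL
        GIT_TAG        0123456789abcdef0123456789abcdef01234567)          # placeholder commit
    endif()
    FetchContent_MakeAvailable(foo)

If I remember correctly, FetchContent also honours a built-in FETCHCONTENT_SOURCE_DIR_<NAME> cache variable that performs the same redirect without any custom logic, and a preset can set that just as easily.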
You mentioned a precompiled SDK; you can still use vcpkg and overlay ports to redirect to these prebuilt binaries[2]. Replace 'system package manager' with your own paths.
Or, forgo vcpkg altogether if you think the long initial bring-up is a big problem, and just use CMake and your pre-compiled SDK to expose packages, libraries, and headers that your consuming source needs. Not everything is a vcpkg-shaped nail that needs a vcpkg hammer.
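A minimal sketch of that last route, assuming a hypothetical SDK layout under sdk/ next to the project (all paths and names here are made up for illustration):

    # Point CMake at the prebuilt SDK and consume it like any installed package
    list(APPEND CMAKE_PREFIX_PATH "${CMAKE_CURRENT_LIST_DIR}/sdk")
    find_package(Qt6 REQUIRED COMPONENTS Widgets)   # resolved via the SDK prefix

    # For libraries the SDK ships without CMake config files, a hand-rolled imported target works
    add_library(sdk::somelib STATIC IMPORTED)
    set_target_properties(sdk::somelib PROPERTIES
        IMPORTED_LOCATION "${CMAKE_CURRENT_LIST_DIR}/sdk/lib/libsomelib.a"
        INTERFACE_INCLUDE_DIRECTORIES "${CMAKE_CURRENT_LIST_DIR}/sdk/include")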
[1]: https://cmake.org/cmake/help/latest/manual/cmake-presets.7.h...
[2]: https://devblogs.microsoft.com/cppblog/using-system-package-...
Yes... but as great as vcpkg is, it's still not ubiquitous enough that everything is on it, in the same way that everything is available for Rust via Cargo, for Python via pip, or for JavaScript/TypeScript via npm.
So submodules are still used quite a lot.
Completely fair assessment; I wanted to add MIT's krb5 and realised it wasn't on vcpkg.
> So submodules are still used quite a lot
And this is the wrong solution. The correct one would be to put in the leg-work and write a new portfile[1] and submit it as a PR[2].
[1]: https://learn.microsoft.com/en-gb/vcpkg/get_started/get-star...
[2]: https://learn.microsoft.com/en-gb/vcpkg/get_started/get-star...
This is not very realistic. I don't want to become a package maintainer of somebody else's library.
But I 100% agree that submodules are not good. I'm hoping Pijul will handle that sort of thing better but I haven't tried it.
Fair enough; then maybe ask the library developer nicely if they could add vcpkg support. Many C++ libraries de-facto support CMake and vcpkg because these have reached critical mass over the past half-decade or so.
I'd say submodules are worse in every metric—you still have to maintain someone else's library (imagine a CVE patch comes through, for instance); submodules themselves are so fragile that they can break your own repo, requiring a full delete + re-clone, and it leads to 'I have this code, I can now make changes to it' which makes updating even harder.
I wouldn't call `git pull` maintaining a library.
> it leads to 'I have this code, I can now make changes to it'
I would say this is a big advantage! Probably one of the few areas where submodules are better than e.g. crates/PyPI. It's great for fixing bugs, for example. You can fix bugs in Rust crates you use by using a special override in Cargo.toml, and I assume pip/npm support that somehow too, but it is more of a pain.
You're absolutely right that submodules are fragile. But they work well enough that people still use them.
> requiring a full delete + re-clone
I've got pretty close to that, but I've actually been able to get out of every submodule breakage I've gotten into. It did require some magic commands I found in a mailing list somewhere, though, which isn't fun.
I've had zero problems with stability though if you avoid these two features:
* Worktrees, which sucks because worktrees are great
* git checkout --recurse-submodules (or submodule.recurse). This is just fundamentally broken and has been forever.
https://learn.microsoft.com/en-gb/vcpkg/concepts/registries and overlay ports[1], plus the packaging tutorial[2].
In essence:
- Publish your local/private mirror of the 3rd-party dep
- Edit the portfile for each dependency to point to your private repo, usually in `vcpkg_from_git`, `vcpkg_from_github`, `vcpkg_from_gitlab`, or `vcpkg_from_bitbucket` (see the sketch after this list)
- Publish these portfiles to your private vcpkg registry
- Set the registry in vcpkg-configuration.json (or the `vcpkg-configuration` field in vcpkg.json) to your private registry
Done.
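For the 'edit the portfile' step, the result might look roughly like this (the mirror URL and REF are placeholders; vcpkg_from_git and the vcpkg_cmake_* helpers are the real functions):

    # portfile.cmake in your private registry, pointing at your mirror
    vcpkg_from_git(
        OUT_SOURCE_PATH SOURCE_PATH
        URL https://git.internal.example/mirrors/somedep.git   # placeholder mirror URL
        REF 0123456789abcdef0123456789abcdef01234567           # exact commit in the mirror
    )
    vcpkg_cmake_configure(SOURCE_PATH "${SOURCE_PATH}")
    vcpkg_cmake_install()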
> just refers to a git repository, tgz download link or similar decentralized identifier
In the `portfile.cmake`, use the following functions correspondingly:
- vcpkg_from_git: https://learn.microsoft.com/en-gb/vcpkg/maintainers/function...
- vcpkg_download_distfile: https://learn.microsoft.com/en-gb/vcpkg/maintainers/function...
Each documentation link has a bunch of live examples near the bottom of the page.
[1]: https://learn.microsoft.com/en-gb/vcpkg/concepts/overlay-por...
[2]: https://learn.microsoft.com/en-us/vcpkg/get_started/get-star...
Maybe for windows, but I haven't seen it be that popular in the linux world?
I tend to use Meson for C++ because it is much more pleasant to use. Meson is definitely a minority build system, owing in some part to being new-ish, but I see it being used in new projects so it is still growing.
And by playing around, I mean that I now have a non-trivial project (c. 50,000 loc, 20 libs/apps) that can compile with all three build systems. Then again, I'm a fan of 'keep it simple' in the first place, so I don't tend to do 'stupid' things in my build systems that makes it difficult to port.
Then I tried meson for my own projects, after deciding that cmake was just too ugly and aesthetics matter to me. For about 3 months I fought the carefully thought-out constraints but still liked it. After that the meson design clicked and boy howdy the amount of mental energy I put into my build systems has gone down comfortably close to zero. I adore meson.
Ah, I should mention that five or more years ago even I bumped into rough patches in meson, but I can say zero in the last few years. Bugs get fixed.
I don't think it implausible though that cmake seems to be getting better for the projects I dive deep into, in an almost meson way. Maybe the days of a project ~/cmake directory containing tens of files and 1000s of lines of bespoke cmake code are dwindling.
Now that I've drunk the meson koolaid, especially about the syntax not being a full programming language, something natively supporting lua, like xmake seems to, makes me wary.
I guess I'm saying cmake is the "build" system you'd expect for C++.
> After that the meson design clicked and boy howdy the amount of mental energy I put into my build systems has gone down comfortably close to zero. I adore meson.
I'm only a week or so into it, and it's been ridiculously easy to get along with so far.
Search for CMake there.
CMake now has tooling in all the major IDEs, whereas the other build systems generally don't. Meson and Bazel are supported project models in CLion, but that support is relatively new. CMake is the leader at the moment, due to first-mover advantage.
CMake is dominating across all platforms, sadly
I use CLion for its CMake integration, and recently sought out XMake integration for it as well. This works so well that I just don't see myself ever going back to CMake willingly.
It's just a huge difference in the semantics and ontology required to maintain projects with these tools.
I am now annoyed by the claims of those build systems. My own experience is maintaining a Visual Studio project manually and writing a classic makefile for Linux. Those are things I can't avoid learning anyway.
The root of the problem is that Windows and Unix do things differently, but that doesn't change anyone's understanding of how a compiler/linker works.
I am against those build systems: I need to learn how to use them, and I need to understand how they manage to "reunite" different toolchains.
Generally, I would guess I spend less time building something on each system with either Visual Studio or makefiles than with a meta-build system.
Microsoft is at fault for doing things differently, of course, but attempts to reunite the platforms are often quite misguided.
People who write libraries need those build systems, of course, although I don't know what any other build system does better than CMake.
Or that the C++ ecosystem is so fragmented that you have so many different compilers, IDEs, debuggers, build systems and package managers.
Insanity for anyone who has worked in ecosystems like Go, C#, or Rust.
On the other hand, I can easily pick a part of the "C ecosystem" that suits me really well, for there is so much choice! In rust, I'm stuck with some conventions that I must learn to love or else. In my case I like rust the language but cannot stand the ecosystem (most notably, the cargo tool). Similar thing for javascript: the language is OK, but the node ecosystem is nuts.
I have experience with various tech stacks, and I know where my life was way easier when it comes to basic things.
But I don't think we should separate politics, social dynamics and tech. Because everything (unfortunately for us, techno geeks) is politics, as everything is done by people and politics is relationship between groups of people.
Your last sentence is perfectly applicable to politics too: it is way easier to be an adult in a totalitarian society, if you are willing to align yourself with the "main party line", than in a democratic society. You need to make far fewer choices yourself. But if you cannot align with that line, your life becomes miserable.
I think that any monoculture is limited and bad. In tech too.
> tech bro: diversity is important, convenience is totalitarian, everything you do is politics
Sounds like a baby Richard Stallman (I like open source, but still).
In terms of software it helps to have a centralized repository for code as this eases development quite a bit. Imagine you need libraries from multiple repositories. That's a pretty big pain point. Eventually everyone will migrate to just a single source.
Again, these are natural monopolies and primarily driven by the network effect.
But let's look at USB-C alone*. It is all USB-C; every USB-C connector is the same from the user-facing side. But they are different inside! There are a lot of variations, with different PCB footprints, different certifications (aviation- or automotive-grade connectors vs "simple" ones), different weather protection, different mounting options. It is great that we have a LOT of different vendors for USB-C, and A LOT of different USB-C connectors!
USB-C is a standard, with multiple vendors. Like C/C++. One connector. Multiple vendors. Diversity.
[*] And let's close our eyes to the current situation with USB-C, which is worse than ever, with all these functions which may or may not be supported and you can never know what is supported before trying, multiplied by the same problem with a myriad of cables which all look the same but work differently. So, really, USB-C is a bad example. But let's pretend it is a good one.
As a user I couldn’t care less about all the bullshit innards of USB-C. All I care is that I can charge all my Android devices, all my iOS devices and my MacBooks with the same cable. As a developer I couldn’t care less whether it’s CMake, XMake, Gradle, Autotools, Bazel, Webpack, Bash script or something else.
Is it standard? Is it not absolutely awful? Yes - let’s go, no: can you afford to pay devs to put up with learning yet another bullshit tool? Yes - let’s go, no - keep it to your hobby project until it reaches critical mass and becomes standard.
"Diversity" is good in an ecosystem when you have multiple species that complement each other, forming a trophic chain (like plants, herbivores, carnivores, and decomposers).
What you're defending is having two or more species that compete with each other, which, biologically, is a scenario that results in a single victor: https://en.wikipedia.org/wiki/Competitive_exclusion_principl...
The resultant monopoly is good at least initially because the victor, ideally, was the best at the niche it competed for. Having suddenly no competition allows it to stagnate, though, compared to if it had constant competition. Biology solves this with constant, random mutations in a cycle of death and reproduction. Maybe that's what WG21 represents for C++? ;D
In general, though, when you're "trying to do a narrow thing", diversity seems bad all around. Python beat Ruby because it was the totalitarian "there's one right way to do a thing" alternative, which was better. Apple stuff works better because it's a totalitarian design from the hardware on up (I admit this as an Apple hater). Cargo works better than CMake because it's an integrated and opinionated build system.
The "narrow thing" could even be a political agenda and a particular subset of social interactions, in which case diversity can also be viewed as negative politically and socially. See political gridlock and how much strife exists between races, religions, classes, and genders. For example, take the "don't go alone in a room with a woman" spiel that OpenBSD guy blathered about. He seemed to claim that women of low status with political axes to grind would seize clout by claiming false sexual harassment in situations without witnesses. If we suppose that scenario exists, it would impossible were there no diversity of gender/sexuality/politics/status in those environments, which would be better for the "narrow thing" of "have a BSD conference or whatever with no drama".
And the other C/C++ implementations don't so much compete with each other. MSVC++ is Windows-specific, KeilC supports obscure 12-bit microcontrollers, etc.
Even in any economic theory I know of (I admit, I cannot know everything!) monopoly is not praised, but rather considered bad.
There are myriad Linux distributions. Is it bad that they compete with each other? I don't think so.
Kotlin gave Java a big push to become a better language.
There are a lot of examples of productive competition, in tech too.
Became?
Recently, it seems like LLVM's libc++ has stagnated, and clang is basically just copying GCC. If GCC stagnated before, it's nice that competition seems to have rekindled its fire. :p
> monopoly is not praised, but rather considered bad.
Because economic monopolies are not self-competitive like organisms are and use their market capture to extract more value, unlike CMake or Linux (probably).
> There are myriad Linux distributions. Is it bad that they compete with each other?
In that the labor required to make and maintain a good distribution is divided among them, it can be considered bad. In that users have to choose between an array of similar things, it's also bad for them. (If the things are very different, the choice becomes clearer, but then we move away from competition and toward complementing in the ecosystem sense.)
[0] https://github.com/ggerganov/llama.cpp/blob/master/build.zig
I don't think it's an obvious choice for literally everything, at least not yet. But I'd suggest strongly considering it as a build tool for object-code-heavy applications, even if they don't use the Zig language itself.
I can't answer myself because I only have basic theoretical idea of what build tools do, I have always mostly been using IDEs like VisualStudio which do everything behind the scenes in a press of a button.
What a build system needs to offer, IMHO as someone who also wrote one, is the following:
* tasks (incl. figuring out the graph of dependencies between them and caching results).
* dependencies. Each language has its own repository so it needs to be pluggable. It also may need to resolve conflicts (e.g. Java can't handle multiple versions of the same lib, which you can get due to transitive deps).
* custom options for each language. Native projects may choose -static or -dynlib, Java projects which bytecode version to target and so on.
* tests, documentation generation, linting etc.
Of course there's more to it, but this is the minimum you need. To accomplish all of these, you'll probably need a real programming language or something very close to it - and that's where the problem lies: which language??
XMake chose Lua, which is a very good choice for this (it's a small language but high level and powerful enough to do everything you may need). But if you're writing Java or JS, do you really want to write builds in Lua? Or just use the language-specific build system? I don't think the answer should be obvious, but reality says we chose the latter.
I would love it if one day something like XMake (basically, a DSL to build stuff that's based on a simple lingua franca like Lua) became the standard for all communities, but that's not going to happen unless something changes completely in the industry.
Not-invented-here and developers want to do stupid things in build systems. Not normally intentionally or maliciously, it's just that if you can do something and the build system does not complain about it, it's in.
Then you get 'build systems a la carte': tools designed with rigor and experience, such as Bazel and Buck2, and you start seeing that you can no longer do those things, for very good reasons such as guaranteed reproducibility and hermetic builds.
https://www.microsoft.com/en-us/research/uploads/prod/2018/0...
Eventually we got enough of an effort to do the long task of adopting CMake for every platform. It's so much nicer and more consistent than what we had, and it handles most of our platforms out of the box with little custom tweaking for project file generation.
I do like the idea of it using a common language though. Lua, JS, etc. would all be a nice choice. I am not a fan of the custom language just for building.
It's a no-go, as it doesn't work on Windows (no, WSL does not count).
Language ecosystems like Rust and Go, and even Haskell shrink the problem space to a smaller universe to grab dependencies. You can see this by how awkward it would be to build a program containing those three languages. They all have different package repositories and build workflows.
C, Fortran, C++ have very rough tools, but the community wants, and often needs, a wide range of dependency management options. Like mixing languages. Or shipping to PyPI. Or integrating with exotic embedded build environments.
I also agree with the 'why invent yet another tool for the same thing' kind of note. Could it be that fresh graduates think they know much better than those who have worked in this field for a long time, and will revolutionise/disrupt it with a cool new fresh (all the usual adjectives here) utility? Make life better for everyone at last, finding the Holy Grail that all the makers of the brand-new cool fresh (other adjectives) utilities of the recent past - one comment here said half a decade is too old - could not? I don't know, but it is interesting to see all the rage of new tools in quick succession. Yet they are all still bad, so we need to make yet another one!
You need a standard because you don't want everyone to have to learn a new tool every time.
If you just want a script, Make is the one to use. You just write your bash in there and it works.
https://news.ycombinator.com/item?id=19610459
Xmake's biggest advantage: it's a single binary (vs Meson requiring Python)
Meson's biggest advantage (aside from popularity): it's a declarative DSL (vs including a full-blown programming language like Lua in Xmake)
(The second point is subjective, I know some think that including a full programming language is a strength but to my eyes it's a downside that largely outweighs the Python dependency.)
I personally don't consider this an advantage. We've run into the limitations of a DSL way too often at work, and at this point I prefer just having a plain programming language as an interface. This way you don't have to learn a custom DSL with its own quirks, you always have the escape hatch of just writing custom code, and it tends to be less quirky.
It is actually pretty useful to have installed alongside meson, even if just for access to the manpages as documentation.
In the non-Bazel world, that's ccache and distcc
There is (was) already an xmake: the build tool used by XFree86/X.Org before the X.Org modularization. It was very good, much better than the auto*-tools, but it didn't get enough traction outside the XFree86 project.
    curl -fsSL https://xmake.io/shget.text | bash
A system package manager? What is that? Never heard of it.
It is like copying the "src" directory to a new name with each change to the sources, instead of using a version control system.
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
This paradigm is still reckless, though, even if you refused to believe in the security arguments: if you "merely" have a transient error during the script response--which might actually cut off only part of the last line you received!!--you shouldn't execute the fragment :/.
If you insist on doing this and you don't want it to feel like amateur hour to people like me--people who have a strong innate sense of exception safety--you (at least) need to use the bash -c with substitution and add a variable (so you can catch the error with set -e or &&).
(c=$(curl) && bash -c "$c")
In that space, I tend to prefer MacPorts.
xmake is available in many OSes, including Debian
And a script which can fail half-way and leave debris in the filesystem is not good practice.
The longer answer is that there are no declarative build tools because all the build tools aim to support all the insane build requirements of existing C or C++ projects. At that point embedding a Turing-complete scripting language actually makes for simpler build descriptions than some massive declarative system trying to handle all the corner cases.
Of course, that raises the question: why don't any C or C++ build tools limit themselves to some sane subset of build requirements? That, I don't have a good answer for.
1. No, using Turing-complete scripts does not prevent information extraction and meaningful automation. You can ask the program to dump useful information, e.g. targets, compiler flags, etc. That's what rizsotto/Bear does for Makefiles. CMAKE_EXPORT_COMPILE_COMMANDS is another example (see the snippet after this list).
2. I don't really think you can do anything further with declarative build languages, unless it is a really limited one like JSON or XML. Meson (a relatively advanced one in the space) advocates for non-Turing-completeness, but you still cannot, for example, modify meson build files reliably using an external tool other than a text editor.
3. Complex build configuration usually requires non-trivial computation and/or dynamic dependency graph building. Turing-completeness gives you a possibility and you don't need to wait for build tool upgrades.
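For point 1, the CMake example is literally one line in the top-level CMakeLists.txt (or -DCMAKE_EXPORT_COMPILE_COMMANDS=ON on the command line); with the Makefile and Ninja generators it dumps every compile command to compile_commands.json in the build directory, which clangd, Bear-style tools and analysers can consume directly:

    set(CMAKE_EXPORT_COMPILE_COMMANDS ON)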
Comparing Meson to CMake, I find the niceties of Meson are usually not inherently non-TC. That is, you can theoretically reimplement a nicer CMake with all the niceties of Meson, while still being TC.
All a rigidly non-Turing-complete build tool means is that the day will come when you have to throw it away and rewrite everything in a more powerful language, probably at the last minute and with a bunch of fires taking light around you.
https://xmake.io/#/guide/project_examples?id=iosmacos-progra...
Including signing. It still relies on xcode of course, but if you don't actually need to use xcode to configure or build, that's a big plus.
Seems viable as a replacement for things like vcpkg [2] which only builds from source.
I'm still researching this but it seems like rattler [3] is the tool to use to build/publish packages. The supported repos are: prefix.dev's own hosting, anaconda.org, artifactory or a self-hosted server.
--
1: https://github.com/prefix-dev/pixi/blob/main/examples/cpp-sd...
2: https://github.com/microsoft/vcpkg
3: https://prefix-dev.github.io/rattler-build/latest/authentica...
Hmm, interesting. Has anyone tested this and seen how it compares against CMake's latest versions?
1. General named module use: https://www.kitware.com/import-cmake-c20-modules/
2. Importing the std module: https://www.kitware.com/import-std-in-cmake-3-30/
As someone who doesn't program in C++ I've got to ask: why is that?
I doubt it supports modules as well as CMake 3.30 (which is not amazing btw) but you never know.
For clangd I can't do much currently; there is a PR that adds some support. I tried it, but for now it doesn't require build-system intervention.
For example:
add_requires("vulkan-headers", {configs = {modules = true}})
# xmake.lua
target("test-vk")
set_languages("c++latest")
add_files("src/\*.cpp")
set_policy("build.c++.modules", true)
add_packages("vulkan-headers")
// main.cpp
import std;
import vulkan_hpp;
int main() {
    auto vk_context = vk::raii::Context();
    const auto exts = vk_context.enumerateInstanceExtensionProperties();
    for(auto &&ext : exts)
        std::println("{}", std::string{ext.extensionName});
    return 0;
}
// result
> xmake f --toolchain=llvm --runtimes="c++_shared" --yes; xmake b
checking for platform ... windows
checking for architecture ... x64
[ 0%]: <test-vk> generating.module.deps src\main.cpp
[ 0%]: <test-vk> generating.module.deps F:\llvm\bin\..\lib\..\share\libc++\v1\std.cppm
[ 0%]: <test-vk> generating.module.deps F:\packages\xmake\v\vulkan-headers\1.3.275+0\ae359cf8d45b4d049acf1ae7350e6dc3\modules\b028617b\vulkan.cppm
[ 12%]: <test-vk> compiling.module.release std
[ 12%]: <test-vk> compiling.module.release vulkan_hpp
[ 62%]: compiling.release src\main.cpp
[ 75%]: linking.release test-vk.exe
[100%]: build ok, spent 5.64s
> xmake run
VK_KHR_device_group_creation
VK_KHR_external_fence_capabilities
VK_KHR_external_memory_capabilities
VK_KHR_external_semaphore_capabilities
VK_KHR_get_physical_device_properties2
VK_KHR_get_surface_capabilities2
VK_KHR_surface
VK_KHR_win32_surface
VK_EXT_debug_report
VK_EXT_debug_utils
VK_EXT_swapchain_colorspace
VK_KHR_portability_enumeration
VK_LUNARG_direct_driver_loading
I wish more people would embrace it over cmake, I despise cmake so much..
I fully moved to D, however, and the experience here is night and day; the module system alone makes me not miss C/C++ at all.
Example:
dmd -i -Isrc/ src/main.d
and the compiler will automatically grab the files you import; when you need packages you can rely on dub. The compiler is so fast that in most cases you don't need any form of caching: a full rebuild of my game takes less than 1 sec.

Xmake ≈ Make/Ninja + CMake/Meson + Vcpkg/Conan + distcc + ccache/sccache
Some of the wisdom from the recent XZ backdoor incident was that cmake was a contributing factor, due to its internal complexity.
Some of the current trend is to strip out autotools and cmake, and go back to the basics, because modern OS support is a subset of what was relevant 20 years ago.
I'm not convinced it's necessary any more. When a compiler claims to implement c99, it probably does implement c99. Lots of ugly target dependent hackery is now fixed at source by doing more sensible things in the target and/or the target having died off decades ago.
Libuv does horrendous platform specific stuff on targets I don't recognise, in raw C, and it's totally possible to compile it without doing any configure or cmake style nonsense because the platform specific things are in their own source files.
90% of the target specific stuff I can handle by asking the C preprocessor what it's building. The remaining 10% can be specified by editing a header or passing compiler flags or similar.
Right now for C libraries I want to depend on I'm patching the source on import to not need any configure check or magic compiler flags, then building every C file separately into an object with the same invocation, and it is working just fine.