The C-Shaped Hole in Package Management
38 points | 6 hours ago | 8 comments | nesbitt.io
conorbergin
59 minutes ago
[-]
I use a lot of obscure libraries for scientific computing and engineering. If I install one from pacman or manage to get an AUR build working, my life is pretty good. If I have to use a Python library the faff becomes unbearable: make a venv, delete the venv, change Python version, use conda, use uv, try to install it globally, change the Python path, source .venv/bin/activate. This is less true for other languages with local package management, but none of them are as frictionless as C (or Zig, which I use mostly). The other issue is that .venvs, node_modules and equivalents take up huge amounts of disk and make it a pain to move folders around, and no, I will not be using a git repo for every throwaway test.
reply
auxym
51 minutes ago
[-]
uv has mostly solved the Python issue. IME its dependency resolution is fast and just works. Packages are hard-linked from a global cache, which also greatly reduces storage requirements when you work with multiple projects.
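A minimal sketch of the per-project flow, assuming a recent uv (package and file names here are just examples):

    uv init myproj && cd myproj    # creates the project and its pyproject.toml
    uv add numpy scipy             # resolves, then hard-links packages from the global cache
    uv run python analysis.py      # runs inside the project's managed venv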
reply
drowsspa
12 minutes ago
[-]
You still need to compile when those libraries aren't shipped precompiled.
reply
storystarling
30 minutes ago
[-]
uv is great for resolution, but it doesn't really address the build complexity of heavy native dependencies. If you are doing any serious work with torch or local LLMs, you still run into cases where wheels aren't available for your specific CUDA/arch combination. That is usually where I lose time, not waiting for the resolver.
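When a matching wheel does exist, the usual move is to point at the CUDA-specific index; cu121 below is only an example tag, check which index actually matches your driver and toolkit:

    nvidia-smi                       # shows the CUDA version the driver supports
    uv pip install torch --index-url https://download.pytorch.org/whl/cu121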
reply
amluto
39 minutes ago
[-]
uv does nothing to help when you have old, crappy, barely maintained Python packages that don’t work reliably.
reply
megolodan
11 minutes ago
[-]
Compiling an open-source C project isn't time-consuming?
reply
rwmj
3 hours ago
[-]
Please don't. C packaging in distros is working fine and doesn't need to turn into crap like the other language-specific package managers. If you don't know how to use pkgconf then that's your problem.
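For anyone unfamiliar, the pkgconf flow is roughly this (libcurl is just an example; it assumes the library's .pc file is installed):

    pkg-config --cflags --libs libcurl                     # what the library wants you to pass
    cc main.c $(pkg-config --cflags --libs libcurl) -o app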
reply
Joker_vD
1 hour ago
[-]
Well, if you're fine with using 3-year-old versions of those libraries packaged by severely overworked maintainers who at one point seriously considered blindly converting everything into Flatpaks and shipping those, simply because they can't muster enough manpower, sure.

"But you can use 3rd party repositories!" Yeah, and I also can just download the library from its author's site. I mean, if I trust them enough to run their library, why do I need opinionated middle-men?

reply
hliyan
3 hours ago
[-]
When I used to work with C many years ago, it was basically: download the headers and the binary file for your platform from the official website, place them in the header/lib paths, update the linker step in the Makefile, #include where it's needed, then use the library functions. It was a little bit more work than typing "npm install", but not so much as to cause headaches.
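Roughly this, where the library name and paths are placeholders:

    sudo cp foo.h /usr/local/include/
    sudo cp libfoo.so /usr/local/lib/ && sudo ldconfig
    cc main.c -lfoo -o app            # or point at other dirs with -I and -L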
reply
zbentley
2 hours ago
[-]
What do you do when the code you downloaded refers to symbols exported by libraries not already on your system? How do you figure out where those symbols should come from? What if it expects version-specific behavior and you’ve already installed a newer version of libwhatever on your system (I hope your distro package manager supports downgrades)?
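(The manual way to chase that down looks something like the below, assuming GNU binutils; paths are illustrative, and it's exactly the kind of archaeology I mean:)

    ldd ./app                          # which shared libraries it expects, and which are missing
    nm -D --undefined-only ./app       # symbols it needs someone else to provide
    nm -D /usr/lib/x86_64-linux-gnu/libwhatever.so | grep ' T '   # does this library define them?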

These are very, very common problems; not edge cases.

Put another way: y'all know we got all these other package management/containerization/isolation systems in large part because people tried the C-library-install-by-hand/system-package-all-the-things approaches and found them severely lacking, right? CPAN was considered a godsend for a reason. NPM, for all its hilarious failings, even more so.

reply
JohnFen
2 hours ago
[-]
> These are very, very common problems; not edge cases.

Honestly? Over the course of my career, I've only rarely encountered these sorts of problems. When I have, they've come from poorly engineered libraries anyway.

reply
bengarney
31 minutes ago
[-]
Here is a thought experiment (for devs who buy into package managers). Take the hash of a program and all its dependencies. Behavior is different for every unique hash. With package managers, that hash is different on every system, including hashes in the future that are unknowable to you (i.e. future "compatible" versions of libraries).

That risk/QA load can be worth it, but is not always. For an OS, it helps to be able to upgrade SSL (for instance).

In my use cases, all this is a strong net negative. npm-based projects randomly break when new "compatible" versions of libraries install for new devs. C/C++ projects don't build because of include/lib path issues, or lack of installation of some specific version, or who knows what.

If I need you to install the SDL 2.3.whatever libraries exactly, or use react 16.8.whatever to be sure the app runs, what's the point of using a complex system that will almost certainly ensure you have the wrong version? Just check it in, either by an explicit version or by committing the library's code and building it yourself.

reply
tpoacher
51 minutes ago
[-]
You are conflating development with distribution of binaries (a problem which interpreted languages do not have, I hasten to add).

1. The accepted solution to what you're describing, in terms of development, is passing appropriate flags to `./configure`, specifying the path to the alternative versions of the libraries you want to use (a sketch follows after point 2). This is as simple as it gets.

As for where to get these libraries in the event that the distro doesn't provide the right version, `./configure` is basically a script. There's nothing stopping you from printing a couple of FTP mirrors in the output to be used as wget targets.

2. As for the problem of distributing binaries and keeping the related libraries up to date, the appropriate solution is a distro package manager. A C package manager wouldn't come into this equation at all, unless you wanted to compile from scratch to account for your specific circumstances, in which case, goto 1.
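Point 1 in practice looks roughly like this (the paths and the OpenSSL example are illustrative):

    ./configure PKG_CONFIG_PATH=/opt/openssl-1.1/lib/pkgconfig \
                CPPFLAGS=-I/opt/openssl-1.1/include \
                LDFLAGS=-L/opt/openssl-1.1/lib
    make && make install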

reply
fredrikholm
2 hours ago
[-]
And with header-only libraries (like stb) it's even less than that.
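The whole "install" for, say, stb_image is roughly:

    curl -LO https://raw.githubusercontent.com/nothings/stb/master/stb_image.h
    printf '#define STB_IMAGE_IMPLEMENTATION\n#include "stb_image.h"\n' > stb_impl.c
    cc -c stb_impl.c                 # the implementation gets compiled exactly once
    cc main.c stb_impl.o -lm -o app  # everything else just #includes the header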

I primarily write C nowadays to regain sanity from doing my day job, and the fact that there is zero bit rot and setup/fixing/middling to get things running is in stark contrast to the horrors I have to deal with professionally.

reply
krautsauer
1 hour ago
[-]
And then you get some minor detail different from the compiled library and boom, UB, because some struct is laid out differently, or the calling convention is wrong, or you compiled with a different -std, or …
reply
JohnFen
3 hours ago
[-]
I agree entirely. C doesn't need this. That I don't have to deal with such a thing has become a new and surprising advantage of the language for me.
reply
zbentley
2 hours ago
[-]
I mean … it clearly isn’t working well if problems like “what is the libssl distribution called in a given Linux distro’s package manager?” and “installing a MySQL driver in four of the five most popular programming languages in the world requires either bundling binary artifacts with language libraries or invoking a compiler toolchain in unspecified, unpredictable, and failure-prone ways” are both incredibly common and incredibly painful for many/most users and developers.

The idea of a protocol for “what artifacts in what languages does $thing depend on and how will it find them?” as discussed in the article would be incredibly powerful…IFF it were adopted widely enough to become a real standard.

reply
fc417fc802
1 hour ago
[-]
> what is the libssl distribution called in a given Linux distro’s package manager?

I think you're going to need to know that either way if you want to run a dynamically linked binary using a library provided by the OS. A package manager (for example Cargo) isn't going to help here because you haven't vendored the library.

To match the npm or pip model you'd go with nix or guix or cmake and you'd vendor everything and the user would be expected to build from scratch locally.

Alternatively you could avoid having to think about distro package managers by distributing with something like flatpak. That way you only need to figure out the name of the libssl package the one time.

Really, issues shouldn't arise unless you try to use a library that doesn't have a sane build system: you go to vendor it and it's a headache to integrate. I guess there are probably more of those in the C world than elsewhere, but you could maybe just try not using them?

reply
rwmj
2 hours ago
[-]
Assuming that your distro is, say, Debian, then you'll know the answer to that is always libssl-dev, and if you cannot find it then there's a handy search tool (both CLI and web page: https://packages.debian.org) to help you.
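The CLI side of that search, for example:

    apt-cache search libssl          # find the dev package by name
    apt-file search openssl/ssl.h    # or by the header you need (apt-file must be installed and updated)
    sudo apt install libssl-dev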

I'm not very familiar with MySQL, but for C (which is what we're talking about here) I typed mysql here and it gave me a bunch of suggestions: https://packages.debian.org/search?suite=default&section=all... Debian doesn't ship binary blobs, so I guess that's not a problem.

"I have to build something on 10 different distros" is not actually a problem that many people have.

Also, let the distros package your software. If you're not doing that, or if you're working against the distros, then you're storing up trouble.

reply
lstodd
2 hours ago
[-]
Actually "build something on 10 different distros" is not a problem either, you just make 10 LXC containers with those distros on a $20/mo second-hand Hetzner box, sick Jenkins with trivial shell scripts on them and forget about it for a couple years or so until a need for 11th distro arrives, in which case you spend half an hour or so to set it up.
reply
duped
1 hour ago
[-]
> C packaging in distros is working fine

GLIBC_2.38 not found
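(For the record, checking what a binary demands versus what the host provides:)

    objdump -T ./app | grep -o 'GLIBC_[0-9.]*' | sort -uV | tail -n1   # newest glibc symbol version it needs
    ldd --version | head -n1                                           # glibc actually on this machine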

reply
Joker_vD
1 hour ago
[-]
Like, seriously. It's impossible to run Erlang/OTP 21.0 on a modern Ubuntu/Debian because of libssl/glibc shenanigans, so your best bet is to take a container with the userspace of Ubuntu 16 (which somehow works just fine on a modern kernel, what a miracle! Why can't Linux's userspace manage something like that?) and install it in there. Or just listen to the "JuST doN'T rUN ouTdaTED SoftWAre" advice. Yeah, thanks a lot.
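The workaround, roughly (the image tag is an assumption; anything carrying the old userspace works):

    docker run --rm -it -v "$PWD":/work -w /work ubuntu:16.04 bash
    # then install the era-appropriate libssl and Erlang/OTP 21.0 inside the container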
reply
amiga386
1 hour ago
[-]
If you have a distro-supplied binary that doesn't link with the distro-supplied glibc, something is very very wrong.

If you're supplying your own binaries and not compiling/linking them against the distro-supplied glibc, that's on you.

reply
duped
33 minutes ago
[-]
Linking against every distro-supplied glibc to distribute your own software is as unrealistic as getting distributions to distribute your software for you. The model is backwards from what users and developers expect.

But that's not the point I'm making. I'm attacking the idea that they're "working just fine" when the above is a bug that nearly everyone hits in the wild as a user and a developer shipping software on Linux. It's not the only one caused by the model, but it's certainly one of the most common.

reply
aa-jv
3 hours ago
[-]
^ This.

Plus, we already have great C package management. It's called CMake.

reply
bluGill
1 hour ago
[-]
CMake is not a package management tool, it is a build tool. It can be abused to do package management, but that isn't what it is for.
reply
rwmj
3 hours ago
[-]
I hate autotools, but I have Stockholm syndrome, so I still use it.
reply
kergonath
1 hour ago
[-]
I hated autotools until I had to use CMake. Now, I still hate autotools, but I hate CMake more.
reply
aa-jv
3 hours ago
[-]
It's not so hard once you learn it. Of course, you will carry that trauma with you, and rightly so. ;)
reply
CMay
1 hour ago
[-]
I don't trust any language that fundamentally becomes reliant on package managers. Once package managers become normalized and pervasively used, people become less thoughtful and investigative into what libraries they use. Instead of learning about who created it, who manages it, what its philosophy is, people increasingly just let'er rip and install it then use a few snippets to try it. If it works, great. Maybe it's a little bloated and that causes them to give it a side-eye, but they can replace it later....which never comes.

That would be fine if it only affected that first layer, of a basic library and a basic app, but it becomes multiple layers of this kind of habit that then ends up in multiple layers of software used by many people.

Not sure that I would go so far as to suggest these kinds of languages with runaway dependency cultures shouldn't exist, but I will go so far as to say any languages that don't already have that culture need to be preserved with respect like uncontacted tribes in the Amazon. You aren't just managing a language, you are also managing process and mind. Some seemingly inefficient and seemingly less powerful processes and ways of thinking have value that isn't always immediately obvious to people.

reply
krautsauer
59 minutes ago
[-]
Why is Meson's WrapDB never mentioned in these kinds of posts, or even in the HN discussions of them?
reply
josefx
2 hours ago
[-]
> Conan and vcpkg exist now and are actively maintained

I am not sure if it is just me, but I seem to constantly run into broken vcpkg packages with bad security patches that keep them from compiling, CMake scripts that can't find the binaries, missing headers, and other fun issues.

reply
adzm
1 hour ago
[-]
I've never had a problem with vcpkg, surprisingly. Perhaps it is just a matter of which packages we are using.
reply
fsloth
1 hour ago
[-]
C++ community would be better off without Conan.

Avoid at all cost.

reply
duped
1 hour ago
[-]
Missing in this discussion is that package management is tightly coupled to module resolution in nearly every language. It is not enough to merely install dependencies of given versions but to do so in a way that the language toolchain and/or runtime can find and resolve them.

And so when it comes to dynamic dependencies (including shared libraries) that are not resolved until runtime, you hit language-level constraints. With C libraries the problem is not merely that distribution packagers chose to support single versions of dependencies because it is easy, but that the loader (provided by your C toolchain) isn't designed to support anything else.

And if you've ever dug into the guts of glibc's loader, it's 40 years of unreadable cruft. If you want to take a shot at the C-shaped hole, take a look at that, look at decoupling it from the toolchain, and add support for multiple-version resolution and the other basic features of module resolution in 2026.
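For context, the knobs you have today for watching glibc's loader resolve things at runtime (paths illustrative):

    ldd ./app                              # what it will try to load, and from where
    LD_DEBUG=libs ./app 2>&1 | head -n 20  # the search path the loader actually walks
    LD_LIBRARY_PATH=/opt/foo/lib ./app     # the blunt instrument for overriding it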

reply
pif
55 minutes ago
[-]
> And if you've ever dug into the guts of glibc's loader it's 40 years of unreadable cruft.

You meant: it's 40 years of debugged and hardened run-everywhere never-fails code, I suppose.

reply
duped
37 minutes ago
[-]
No, I meant 40 years of unreadable cruft. It's not hard to write a correct loader. It's very hard to understand glibc's implementation.
reply
Piraty
3 hours ago
[-]
reply
manofmanysmiles
2 hours ago
[-]
One of my favorite blog posts. I enjoy it every time I read it.

I've written two C package managers in my life, and they... were fine. The most recent one is mildly better than the one from a decade ago, but still not quite right; it's a genuinely hard thing to get right outside of a niche. If I ever build one I think is good enough I'll share it, only to most likely learn about 50 edge cases I didn't think of :)

reply
smw
1 hour ago
[-]
The fact that the first entry in his table says that apt doesn't have source packages is a good marker of the quality of this post.
reply
xyzsparetimexyz
2 hours ago
[-]
C*** shaped?
reply