https://github.com/mbrock/filnix
It's working. It builds tmux, nethack, coreutils, Perl, Tcl, Lua, SQLite, and a bunch of other stuff.
Binary cache on https://filc.cachix.org so you don't have to wait 40 minutes for the Clang fork to build.
If you have Nix with flakes on a 64-bit Linux computer, you can run
    nix run github:mbrock/filnix#nethack
Fil-C compiled flatpaks might be an interesting target as well for normal desktop users (e.g. running a browser).
I wonder whether GPU graphics are possible in Fil-C land? Perhaps only if the whole Mesa stack is compiled with Fil-C as well, limiting GPU use to open drivers?
How does Python work? Of course I can just add filc.python to my system, but if I `python3 -m pip install whatever`, will it just rebuild any C modules with the Fil-C compiler?
My idea is to move towards defining Fil-C as a "platform", meaning it would have its own ABI value, so it would look like a target called "x86_64-unknown-linux-filc". The magic of Nixpkgs is that it can then instantiate a full realm called "pkgsCross.filc" which automatically contains every package, each one built from your ordinary non-Fil-C platform as a cross compilation.
When that works—I hope it works, I probably need help to get it working—you should be able to use all the Nixpkgs packaging helpers like
    pkgs.mkShell {
      packages = [
        (pkgsCross.filc.python3.withPackages (pypkgs: with pypkgs; [
          pandas
          requests
        ]))
      ];
    }
I should probably try to gather some interest among people who are truly familiar with Nixpkgs cross compilation...
My Fil-C flake already includes like 50 different patches extracted from the upstream port catalogue!
Pizlo seems to have found an astonishingly cheap way to do the necessary pointer checking, which hopefully I will be able to understand after more study. (The part I'm still confused about is how InvisiCaps work with memcpy.)
tialaramex points out that we shouldn't expect C programmers to be excited about Fil-C. The point tialaramex mentions is "DWIM", like, accessing random memory and executing in constant time, but I think most C programmers won't be willing to take a 4× performance hit. After all, if they wanted to be using a slow language, they wouldn't be writing their code in C. But I think that's the wrong place to look for interest: Fil-C's target audience is users of C programs, not authors of C programs. We want the benefits of security and continued usage of existing working codebases, without having to pay the cost to rewrite everything in Rust or TypeScript or whatever. And for many of us, much of the time, the performance hit may be acceptable.
Apple has a memory-safer C compiler/variant they use to compile their boot loaders:
Reason: there's no solution to the security problem. Fil-C doesn't solve security. Does it make things more secure? Yes! But there will always be more security issues. So imagine if I had called it "Secure C" (and then had a sexy compiler), and 10 years from now someone finds a comprehensive solution to the string injection problem in Secure C. What do they call their thing? Securer C? Secure C Pro? Secure Secure C?
A similar criticism applies to "new". Newcastle is named after a castle built 945 years ago. Neuchâtel is named after a castle built 1014 years ago. Xavier (from Basque "etxeberri", "new house") is named after a castle built in the 10th century. Windows NT "new technology", etc.
What is the 64-bit-presenting representation of pointers in Fil-C?
That is, what does %p return and how does that work as a pointer?
I sort of get a sense of what the flight pointers do after reading the linked page with the assembly listings (which on mobile looks like you're discussing hex dumps until one tries side-scrolling, by the way). But my brain would expect something along the lines of "this is the flight pointer, this is how it is managed for local variables, global variables, etc., and this part of it is what your C program sees" at the beginning of the InvisiCaps article, rather than "pointers are 64-bit like in regular C" followed by a discussion of a tuple containing two pointers (at least that's how it reads to me).
Probably just laziness on my part.
InvisiCaps are counterintuitive. Took me a while to come up with them. I still haven’t found a really clean way to explain them to folks
Also I'm really skeptical about your "hundreds of millions" number, even if we're talking about all the code that runs before the kernel starts. How do you figure? The entire Linux kernel doesn't contain a hundred million lines of code, and that includes all the drivers for network cards, SCSI controllers, and multiport serial boards that nobody's made in 30 years, plus ports to Alpha, HP PA-RISC, Loongson, Motorola 68000, and another half-dozen architectures. All of that contains maybe 30 million lines. glibc is half a million. Firefox 140.4.0esr is 33 million. You're saying that the bootloader is six times the size of Firefox?
Are you really suggesting that tens of gigabytes of source code are compiled into the bootloader? That would make the bootloader at least a few hundred megabytes of executable code, probably gigabytes, wouldn't it?
For all the wrong code that assumes long can store a pointer, there's likely a lot more wrong code that assumes long can fit in 64 bits, especially when serializing it for I/O and other-language interop.
Also, 128-bit integers can't fit in standard registers on most architectures, and don't have the full suite of ALU operations either. So you're looking at some serious code bloat and slowdowns for code using longs.
You've also now got no "standard" C type (char, short, int, long, long long) which is the "native" size of the architecture. You could widen int too, but a lot of code also assumes an int can fit in 32 bits.
However, most longs are just numbers that have no metadata. I guess you'd set the metadata portion to all zeroes in that case. This feels like a reified version of Rust's pointer provenance, and I think you would have to expose some metadata-aware operations to the user. In which case, you're inviting some code rewrites anyway.
While not as bad as the register/ALU ops issue, you're still making all code pay a storage size penalty, and still adding some overhead to handle the metadata propagating through arithmetic operations, just to accommodate bad code, plus it complicates alignment and FFI.
And yes, there would still be some overhead for storing and propagating the metadata, and struct alignment would change and FFI wouldn't work with longs.
And it's not just the bounds-checking that's great -- it makes a bunch of C anti-patterns much harder, and it makes you think a lot harder about pointer ownership and usage. Really a great addition to the language, and it's source-compatible with empty macro-definitions (with two exceptions).
I think you’re thinking of something else
"Why Embedded Swift"
Reading Fil-C website's "InvisiCaps by example" page, I see that "Laundering Integers As Pointers" is disallowed. This essentially disqualifies Fil-C for low-level work, which makes for a substantial part of C programs.
(int-to-ptr for MMIO/pre-allocated memory is in theory UB, in practice just fine as long as you don't otherwise break aliasing rules (and lifetime rules in C++), since the compiler will fail to track provenance at least once.)
But that isn't really what Fil-C is aimed at - the value is, as you implied, in hardening userspace applications.
Fil-C already allows memory mapped I/O in the form of mmap.
The only thing missing that is needed for kernel level MMIO is a way to forge a capability. I don’t allow that right now, but that’s mostly a policy decision. It also falls out from the fact that InvisiCaps optimize the lower by having it double as a pointer to the top of the capability. That’s also not fundamental; it’s an implementation choice.
It’s true that InvisiCaps will always disallow int to ptr casts, in the sense that you get a pointer with no capability. You’d want MMIO code to have some intrinsic like `zunsafe_forge_ptr` that clearly calls out what’s happening and then you’d use that wherever you define your memory mapped registers.
    #include <stdio.h>
    int main()
    {
      const char c[] = "Howling\n";
      char *p = (char*)c;
      p[4] = 'o';
      printf("%s", c);
      return 0;
    }
Same results with Debian clang (and clang++) version 14.0.6 with the same options.
Of course, if you change c[] to *c, it will segfault. But it still compiles successfully without warnings.
Laundering your pointer through an integer is evidently not necessary.
Ok that got a chuckle out of me haha
No one person could write a compiler for it, and even if they could they would forget as much in doing so as they could learn.
But if you try to write to a readonly global constant then you’ll panic. And there are a handful of ways to allocate readonly data via Fil-C’s APIs.
Not allowing a cast from integer to pointer is the point of having pointers as capabilities in the first place.
Central in that idea of capabilities is that you can only narrow privileges, never widen them. An intptr_t would in-effect be a capability narrowed to be used only for hashing and comparison, with the right for reading and writing through it stripped away.
BTW, if you would store the uintptr_t then it would lose its notion of being a pointer, and Fil-C's garbage collector would not be able to trace it.
The C standard allows casts both ways, but the [u]intptr_t types are optional. However, C on hardware capability architectures (CHERI, Elbrus; I don't know about AS/400) tends to make the type available anyway, because the one-way cast is so common in real-world code.
If the laundering through integers is syntactically obvious -- obvious that the cast back from int uses an int that obviously came from a pointer -- then I allow it.
It's a concurrent GC.
If I wanted to go to kernel, I'd probably get rid of the GC. I've tweeted about what Fil-C would look like without GC. Short version: use-after-free would not trap anymore, but you wouldn't be able to use it to break out of the capability system. Similar to CHERI without its capability GC.
One interesting feature is that there might be some synergy there. The GC safepoints can be used to implement cooperative multitasking, with capabilities making it safe.
There would still be ways to make it work with a more restricted intrinsic, if you didn't want to open up full pointer forging. At a high level, you're basically just saying "this struct exists at this constant physical address and doesn't need initialisation". I could imagine a `#define UART zmmio_ptr(UART_Type, 0x1234)`, which perhaps requires a compile-time constant address. Alternatively, it's not uncommon for embedded compilers to have a way to force a variable to a physical address; maybe you'd write something like `UART_Type UART __at(0x1234);`. I believe this is technically already possible using sections, it's just a massive pain creating one section per struct for dozens and dozens.
Unfortunately the way existing code does it is pretty much always "#define UART ((UART_Type*)0x1234)". I feel like trying to identify this pattern is probably too risky a heuristic, so source code edits seem required to me.
https://github.com/mbrock/filnix/blob/main/ports/analysis.md
This is still within the userspace application realm but it's good to know that Fil-C does have explicit capability-preserving operations (`zxorptr`, `zretagptr`, etc) to do e.g. pointer tagging, and special support for mapping pointers to integer table indices and back (`zptrtable`, etc).
At a 4x performance hit, you might as well use C# or Go.
> Fil-C's target audience is users of C programs, not authors of C programs.
Sure, but then they don't get it for free. There is a perf penalty from GC. Plus you need all the original sources, right?
> we lose access to that intellectual heritage.
Declining usage of C is going to make you lose intellectual heritage[1]. A language no one can read or write is a dead language, regardless of whether you can translate it to English or not.
[1] And that is outside Rust's or Zig's influence. It's an old language from when people thought you could trust the programmer. Which may well have been the case when the only people using it were Bell Labs engineers. It's got UB up the wazoo, no safety, and no sane package manager.
> There is a perf penalty from GC.
Not really, no. There's a perf penalty from bounds checking and runtime type checking. GC takes a little time but saves you time on free(), although it only becomes a real performance win when you remove the other, less efficient ways of tracking lifetimes, such as reference counting.
> Plus you need all the original sources, right?
Yes, but from my point of view, loss of sources is not a significant problem. I know it happened historically, especially last millennium, but really only for proprietary software running on a single platform such as MS-DOS. Unix software, free software, and software using source control systems have suffered almost no source code loss, except for particular old versions.
> Declining usage of C is going to make you lose intellectual heritage
I think there are more C programmers today than there have ever been, and I doubt that that number will ever fall to zero while there are still humans.
The pervasive use of reference counting that you find in languages like Swift is worse on throughput than typical GC, but can often avoid the memory overhead of GC due to deterministic destruction and typically gives you better worst-case latency, so there isn't a single winner between ARC and GC.
So I'm not sure there isn't a single winner between ARC and GC, but you could be right.
Its not being essential doesn't matter. If you have Fil-C code that terminates on UB and C code that doesn't, you have two provably (subtly) semantically different programs.
Proof: You have a program that halts on UB and one that continues running on UB.
> I think there are more C programmers today than there have ever been, and I doubt that that number will ever fall to zero while there are still humans.
Think that depends on more things than just there being humans. Where my BASIC programmers at?
Old BASIC programmers are mostly working on Pick and other "business BASIC" systems, and I still run into them from time to time. But most of that code is only useful within a single business, so I expect it to eventually die out. (Meanwhile, new BASIC programmers are proliferating in the retrocomputing hobby.)
By contrast, on my system here I have over a thousand libraries written in C or C++. A random sampling reveals libraries for: LevelDB; various JS interpreters; file format handlers for zipfiles, OpenEXR, DjVu, and JPEG; gamepad interaction; a sparse matrix solver (used in Octave); the RIST protocol (used by OBS Studio); simulation with finite element models, which uses a different sparse matrix solver (used by FreeCAD); inspecting and manipulating configuration of PCI devices; the MTP protocol; the Icecast protocol; the protocol FTDI devices speak over USB; and so on.
Nearly all software written today is either written in C, written in C++, or interpreted or compiled by an interpreter or compiler written in C or C++.
Except, uh, you can't use C# or Go to run a program written in C/C++.
Oh, you mean we can solve all our problems by simply rewriting all legacy software? Right, I forgot!
Isn’t that Rust’s raison d'être?
(Just kidding…mostly)
Related funny anecdote, I recently saw a Show HN post title that made sure to mention the thing was written in Rust, but forgot to mention what it actually did. Priorities lol.
I’m sure it sounds like I’m a Rust hater or something, but I’m really not. I like a lot of the new Rust tools being created and use a few myself (ripgrep, fd, and bat immediately come to mind but I’m sure there are more I’m using). I just find the almost religious fervor of the Rust community amusing.
https://people.cs.rutgers.edu/~santosh.nagarakatte/softbound...
CCured was another:
https://people.eecs.berkeley.edu/~necula/Papers/ccured_popl0...
I still try to squirrel away a little time to learn from and contribute to the tech community. Mostly here and on AI subs like r/mlscaling. The best stuff that I can't work on or even curate right now I'm saving in case time or funding open up in the future. Worst case, I can pass it onto bright minds with the skills to build it. Like before in security.
You were doing neat things with satellites and bootstrapping research last I checked. What interesting jobs or tech are you digging into now?
Previous discussion:
2025 Safepoints and Fil-C (87 points, 1 month ago, 44 comments) https://news.ycombinator.com/item?id=45258029
2025 Fil's Unbelievable Garbage Collector (603 points, 2 months ago, 281 comments) https://news.ycombinator.com/item?id=45133938
2024 The Fil-C Manifesto: Garbage In, Memory Safety Out (13 points, 17 comments) https://news.ycombinator.com/item?id=39449500
1. How do we prevent loading a bogus lower through misaligned store or load?
Answer: Misaligned pointer load/stores are trapped; this is simply not allowed.
2. How are pointer stores through a pointer implemented (e.g. `*(char **)p = s`) - does the runtime have to check if *p is "flight" or "heap" to know where to store the lower?
Answer: no. Flight (i.e. local) pointers whose address is taken are not literally implemented as two adjacent words; rather the call frame is allocated with the same object layout as a heap object. The flight pointer is its "intval" and its paired "lower" is at the same offset in the "aux" allocation (presumably also allocated as part of the frame?).
3. How are use-after-return errors prevented? Say I store a local pointer in a global variable and then return. Later, I call a new function which overwrites the original frame - can't I get a bogus `lower` this way?
Answer: no. Call frames are allocated by the GC, not the usual C stack. The global reference will keep the call frame alive.
That leads to the following program, which definitely should not work, and yet does. ~Amazing~ Unbelievable:
    #include <stdio.h>
    
    char *bottles[100];
    
    __attribute__((noinline))
    void beer(int count) {
        char buf[64];
        sprintf(buf, "%d bottles of beer on the wall", count);
        bottles[count] = buf;
    }
    
    int main(void) {
        for (int i=0; i < 100; i++) beer(i);
        for (int i=99; i >= 0; i--) puts(bottles[i]);
    }

There’s nothing about how Fil-C is designed that constrains it to x86_64. It doesn’t strongly rely on x86’s memory model. It doesn’t strongly rely on 64-bit.
I’m focusing on one OS and arch until I have more contributors and so more bandwidth to track bugs across a wider set of platforms.
> the performance overhead of this approach for most programs makes them run about four times more slowly
4x slower isn't the normal case. 4x is at the upper end of the overheads you'll see.
C is immensely powerful, portable and probably as fast as you can go without hand-coding in the architecture-specific assembly. Most of the world's information systems (our cyberstructure) rely directly or indirectly on C. And don't get me wrong, I'm a great enthusiast of the idea of sticking to memory-safe languages like Rust from now on.
The hard truth is that we will live with legacy C code, period. Pizlo's heroic effort bridges the gap, so to speak: it effectively sandboxes userspace C in a way that inherently adds memory safety to legacy code. There are only a few corner cases that can't tolerate any slowdown vis-à-vis unsafe C, and the great majority of code across every industry would benefit much more from the reduced surface of exposure.
This is already quite impressive. Many automatic memory managed languages have more than 4x worst-case slowdown. E.g. Google-backed Golang is ~1.5× to ~4× slower than optimized C++. I suppose there are many ways to further speed-up Fil-C if more resources were given to the project.
You can't really do this without adding the same sorts of annotations as Rust or Safe C++. Especially if you care about keeping modular/separate compilation and code reuse.
However, there are also kernel-like commercial projects in Go, and apparently the related TamaGo fork might eventually get upstreamed into the reference implementation.
https://www.withsecure.com/en/solutions/innovative-security-...
Recompiling existing C software with Fil-C also isn't a great idea, since some modifications are likely needed, at least to fix the bugs found by using Fil-C. And after those bugs are fixed, why continue using Fil-C?
This would ultimately save much of the overhead associated with tracing GC itself.
Because Fil-C might be a good way to debug future code?
Your question reads like, "Why use a debugger?"
Yes, this means that C is one of the slower languages around. That’s fine; computers are fast. If you want to write high-performance code, there are plenty of faster languages to do it with.
"A consequence of this principle is that every occurrence of every subscript of every subscripted variable was on every occasion checked at run time against both the upper and the lower declared bounds of the array. Many years later we asked our customers whether they wished us to provide an option to switch off these checks in the interests of efficiency on production runs. Unanimously, they urged us not to--they already knew how frequently subscript errors occur on production runs where failure to detect them could be disastrous. I note with fear and horror that even in 1980 language designers and users have not learned this lesson. In any respectable branch of engineering, failure to observe such elementary precautions would have long been against the law."
-- C.A.R. Hoare, "The 1980 ACM Turing Award Lecture"
Guess what he means by "1980 language designers and users have not learned this lesson".
That’s why, even and especially if a C program runs in Fil-C with zero changes, you should use the Fil-C version in any context where security matters.