Yeah, I was thinking Nix will probably be one of the first things that can easily adopt Fil-C, since it already packages software in a way that allows packages to be completely independent of each other, so Fil-C's lack of ABI compatibility does not matter. I assume the other targets will mostly be enterprise distros, where the perf hit and source-compatibility issues are less of a concern and memory safety is absolutely critical.
Fil-C compiled flatpaks might be an interesting target as well for normal desktop users (e.g. running a browser).
I wonder if GPU graphics are possible in Fil-C land? Perhaps only if the whole Mesa stack is compiled with Fil-C as well, limiting GPU use to open drivers?
That is super cool, and I will probably start running it on at least a test box shortly.
How does Python work? Of course I can just add filc.python to my system, but if I `python3 -m pip install whatever`, will it just rebuild any C modules with the Fil-C compiler?
Right now Python simply does not work at all. I was trying last night just to get Python to build, but ran into some Fil-C panics at the last moment, when it tries to "freeze" some modules? But it should work eventually.
My idea is to move towards defining Fil-C as a "platform", meaning it would have its own ABI value, so it would look like a target called "x86_64-unknown-linux-filc". The magic of Nixpkgs is that it can then instantiate a full realm called "pkgsCross.filc" which automatically has every package, with everything built from your ordinary non-Fil-C platform as a cross compilation.
When that works—I hope it works, I probably need help to get it working—you should be able to use all the Nixpkgs packaging helpers like
This is already how Nixpkgs can have a pkgsCross.musl that builds everything with Musl as a cross compilation. So it seems reasonable and possible. It should be of wide interest for Nix users, since it would suddenly allow memory safety for whole swaths of the C/C++ packages, it would let NixOS use memory safe builds for core services like OpenSSHd, and so on...
I should probably try to gather some interest among people who are truly familiar with Nixpkgs cross compilation...
Yes, filc as the ABI makes the most sense. Do note that while Fil-C seems to have pretty good source compatibility, some stuff still requires patches (like Python!)
Either Fil-C or a different implementation of the same idea seems essential to me. A great deal of software has been written in C, and without some way of running it, we lose access to that intellectual heritage. But pervasive security vulnerabilities mean that the traditional "YOLO" approach to C compilation is a bad idea for software that has to handle untrusted input, such as Web browsing or email.
Pizlo seems to have found an astonishingly cheap way to do the necessary pointer checking, which hopefully I will be able to understand after more study. (The part I'm still confused about is how InvisiCaps work with memcpy.)
tialaramex points out that we shouldn't expect C programmers to be excited about Fil-C. The point tialaramex mentions is "DWIM", like, accessing random memory and executing in constant time, but I think most C programmers won't be willing to take a 4× performance hit. After all, if they wanted to be using a slow language, they wouldn't be writing their code in C. But I think that's the wrong place to look for interest: Fil-C's target audience is users of C programs, not authors of C programs. We want the benefits of security and continued usage of existing working codebases, without having to pay the cost to rewrite everything in Rust or TypeScript or whatever. And for many of us, much of the time, the performance hit may be acceptable.
Forgive me for not putting in the effort this deserves, but the description of pointers in this thread reads a lot like the current description of Fil-C pointers from the main site.
What is the 64-bit-presenting representation of pointers in Fil-C?
That is, what does %p return and how does that work as a pointer?
You don't even need to reverse it. It's in the public clang, and I'm working on helping my team adopt it in some test cases.
And it's not just the bounds-checking that's great -- it makes a bunch of C anti-patterns much harder, and it makes you think a lot harder about pointer ownership and usage. Really a great addition to the language, and it's source-compatible with empty macro-definitions (with two exceptions).
It or something similar has apparently been upstreamed to clang as -fbounds-safety. I don't know if they're the same, but the RFC on -fbounds-safety does give some credit to the author of Fil-C, who also took credit for firebloom on this thread as well.
Yeah, fat pointers are definitely a viable approach, but a lot of the existing C code that is the main argument for Fil-C assumes it can put a pointer in a long. (Most of the C code that assumed you could put it in an int has been flushed out by now, but that was a big problem when the Alpha came out.) I'm guessing that the amount of existing C code in Apple's bootloader is minimal, maybe 1000 lines, not the billions of lines you can compile with Fil-C.
You’re off by a few orders of magnitude. I’ll grant you, “what is the bootloader” becomes a very complex question. Even if you scope it to just “what is the code physically etched into the chip as the mask ROM” (SecureROM), you’re talking hundreds of thousands of lines. If you’re talking about all the code that runs before the kernel starts executing, you’re talking hundreds of millions.
No, I was only talking about the pre-existing C code that wasn't written for the bootloader, which therefore might have incompatibilities with fat pointers you had to hunt down and fix.
Also I'm really skeptical about your "hundreds of millions" number, even if we're talking about all the code that runs before the kernel starts. How do you figure? The entire Linux kernel doesn't contain a hundred million lines of code, and that includes all the drivers for network cards, SCSI controllers, and multiport serial boards that nobody's made in 30 years, plus ports to Alpha, HP PA-RISC, Loongson, Motorola 68000, and another half-dozen architectures. All of that contains maybe 30 million lines. glibc is half a million. Firefox 140.4.0esr is 33 million. You're saying that the bootloader is six times the size of Firefox?
Are you really suggesting that tens of gigabytes of source code are compiled into the bootloader? That would make the bootloader at least a few hundred megabytes of executable code, probably gigabytes, wouldn't it?
For all the wrong code that assumes long can store a pointer, there's likely a lot more wrong code that assumes long can fit in 64 bits, especially when serializing it for I/O and other-language interop.
Also, 128-bit integers can't fit in standard registers on most architectures, and don't have the full suite of ALU operations either. So you're looking at some serious code bloat and slowdowns for code using longs.
You've also now got no "standard" C type (char, short, int, long, long long) which is the "native" size of the architecture. You could widen int too, but a lot of code also assumes an int can fit in 32 bits.
Maybe so; I haven't tried. Probably a lot less code depends on unsigned long wrapping at 2⁶⁴ than used to depend on unsigned int wrapping at 2¹⁶, and we got past that. But stability standards were lower then. Any code that runs on both 32-bit and 64-bit LP64 systems can't be too dependent on the exact sizeof long, and sizeof long already isn't sizeof int the way it was on 32-bit platforms.
There's a lot of code that makes assumptions about the number of bytes in a long rather than diligently using sizeof ... remember, the whole point here is low quality code.
Also, this is _de facto_ limited to userspace applications on the mainstream OSes, if my understanding is correct.
Reading the Fil-C website's "InvisiCaps by example" page, I see that "Laundering Integers As Pointers" is disallowed. This essentially disqualifies Fil-C for low-level work, which makes up a substantial part of C programs.
(int2ptr for MMIO/pre-allocated memory is in theory UB, in practice just fine as long as you don't otherwise break aliasing rules (and lifetime rules in C++) - as the compiler will fail to track provenance at least once).
But that isn't really what Fil-C is aimed at - the value is, as you implied, in hardening userspace applications.
Fil-C already allows memory mapped I/O in the form of mmap.
The only thing missing that is needed for kernel level MMIO is a way to forge a capability. I don’t allow that right now, but that’s mostly a policy decision. It also falls out from the fact that InvisiCaps optimize the lower by having it double as a pointer to the top of the capability. That’s also not fundamental; it’s an implementation choice.
It’s true that InvisiCaps will always disallow int to ptr casts, in the sense that you get a pointer with no capability. You’d want MMIO code to have some intrinsic like `zunsafe_forge_ptr` that clearly calls out what’s happening and then you’d use that wherever you define your memory mapped registers.
Can you "launder" pointers through integers just to do things like drop `const`? It's a very common pattern to have to drop attributes like `const` due to crappy APIs: `const foo *a = ...; foo *b = (foo *)(uintptr_t)a;`
As kragen already posted, you can cast from const-pointer to non-const directly.
Not allowing a cast from integer to pointer is the point of having pointers as capabilities in the first place.
Central in that idea of capabilities is that you can only narrow privileges, never widen them.
An intptr_t would, in effect, be a capability narrowed to be used only for hashing and comparison, with the right to read and write through it stripped away.
BTW, if you would store the uintptr_t then it would lose its notion of being a pointer, and Fil-C's garbage collector would not be able to trace it.
The C standard allows casts both ways, but the [u]intptr_t types are optional. However, C on hardware capability architectures (CHERI, Elbrus; I dunno about AS/400) tends to make the types available anyway, because the one-way cast is so common in real-world code.
If the laundering through integers is syntactically obvious - obvious that the cast back from int used an int that obviously came from a pointer - then I allow it.
Hopefully Pizlo will correct me if I get this wrong, but I don't think Fil-C's pointer tagging enforces constness, which isn't needed for C in any case. This C code compiles with no warnings and outputs "Howlong\n" with GCC 12.2.0-14 -ansi -pedantic -Wall -Wextra:
Somewhat to my surprise, it still compiles successfully with no warnings as C++ (renaming to deconst.cc and compiling with g++). I don't know C++ that well, since I've only been using it for 35 years, which isn't nearly long enough to learn the whole language unless you write a compiler for it.
Same results with Debian clang (and clang++) version 14.0.6 with the same options.
Of course, if you change c[] to *c, it will segfault. But it still compiles successfully without warnings.
Laundering your pointer through an integer is evidently not necessary.
I'll preface this by saying my experience is with embedded, not kernel, but I can't imagine MMIO is significantly different.
There would still be ways to make it work with a more restricted intrinsic, if you didn't want to open up the ability for full pointer forging. At a high level, you're basically just saying "This struct exists at this constant physical address, and doesn't need initialisation". I could imagine a "#define UART zmmio_ptr(UART_Type, 0x1234)" - which perhaps requires a compile time constant address. Alternatively, it's not uncommon for embedded compilers to have a way to force a variable to a physical address, maybe you'd write something like "UART_Type UART __at(0x1234);". I believe this is technically already possible using sections, it's just a massive pain creating one section per struct for dozens and dozens.
Unfortunately the way existing code does it is pretty much always "#define UART ((UART_Type*)0x1234)". I feel like trying to identify this pattern is probably too risky a heuristic, so source code edits seem required to me.
I'm curious, what's your strategy for integrating the GC with low-level code? I've been thinking about trying to use it for Arduino development. Mostly as a thought experiment for now (as I'm playing with Rust on RP2040).
If I wanted to go to kernel, I'd probably get rid of the GC. I've tweeted about what Fil-C would look like without GC. Short version: use-after-free would not trap anymore, but you wouldn't be able to use it to break out of the capability system. Similar to CHERI without its capability GC.
Arduino is kinda both. You have full control over the execution flow, but then you have to actually exercise that full control. The only major wrinkle is hardware interrupts.
One interesting feature is that there might be some synergy there. The GC safepoints can be used to implement cooperative multitasking, with capabilities making it safe.
This is still within the userspace application realm but it's good to know that Fil-C does have explicit capability-preserving operations (`zxorptr`, `zretagptr`, etc) to do e.g. pointer tagging, and special support for mapping pointers to integer table indices and back (`zptrtable`, etc).
Yes, I think that's reasonable. I imagine you wouldn't have to extend Fil-C very much to sneak some memory-mapped I/O addresses into your program, but maybe having the garbage collector pause the program in the middle of an interrupt handler would have other bad effects. Like, if you were generating a video signal, you'd surely get display glitches.
> but I think most C programmers won't be willing to take a 4× performance hit.
At a 4x performance hit, you might as well use C# or Go.
> Fil-C's target audience is users of C programs, not authors of C programs.
Sure, but then they don't get it for free. There is a perf penalty from GC. Plus you need all the original sources, right?
> we lose access to that intellectual heritage.
Declining usage of C is going to make you lose intellectual heritage[1]. A language no one can read or write is a dead language, regardless if you can translate it to English or not.
[1] And that is outside Rust's or Zig's influence. It's an old language from when people thought you could trust the programmer. Which may well have been fine when the only people using it were Bell Labs engineers. It's got UB up the wazoo, no safety, and no sane package manager.
You absolutely can just transpile any language into any language. You just need to be willing to give up on performance, and Fil-C is more or less doing just that.
I worked with the author of Fil-C at Apple while on the Safari team, and he's easily one of the brightest folks I've had the pleasure of knowing. Fil-C looks extremely cool.
Extraordinary project. I had several questions which I believe I have answered for myself (pizlonator please correct if wrong):
1. How do we prevent loading a bogus lower through misaligned store or load?
Answer: Misaligned pointer load/stores are trapped; this is simply not allowed.
2. How are pointer stores through a pointer implemented (e.g. `*(char **)p = s`) - does the runtime have to check if *p is "flight" or "heap" to know where to store the lower?
Answer: no. Flight (i.e. local) pointers whose address is taken are not literally implemented as two adjacent words; rather the call frame is allocated with the same object layout as a heap object. The flight pointer is its "intval" and its paired "lower" is at the same offset in the "aux" allocation (presumably also allocated as part of the frame?).
3. How are use-after-return errors prevented? Say I store a local pointer in a global variable and then return. Later, I call a new function which overwrites the original frame - can't I get a bogus `lower` this way?
Answer: no. Call frames are allocated by the GC, not the usual C stack. The global reference will keep the call frame alive.
That leads to the following program, which definitely should not work, and yet does. ~Amazing~ Unbelievable:
    #include <stdio.h>

    char *bottles[100];

    __attribute__((noinline))
    void beer(int count) {
        char buf[64];
        sprintf(buf, "%d bottles of beer on the wall", count);
        bottles[count] = buf;
    }

    int main(void) {
        for (int i = 0; i < 100; i++) beer(i);
        for (int i = 99; i >= 0; i--) puts(bottles[i]);
    }
Hmmm ... there's a danger here that people will test their programs compiled with Fil-C and think that they are safe to compile with a "normal" compiler. I would hope for an option to flag any undefined behavior.
Great effort, but I find the whole idea somewhat flawed. If you need speed, you can't use this C implementation, because it's several times slower. If speed isn't important, why not just use a memory-safe language? And if both are important, why not use Rust?
Recompiling existing software written in C with Fil-C also isn't a great idea, since some modifications are likely needed, at least to fix the bugs that Fil-C finds. And after those bugs are fixed, why continue using Fil-C?
You could use Fil-C in combination with a memory-safe language like Rust, allowing both kinds of references to coexist in a single program: "safe" leaf references (potentially using ownership and reference counting to represent more complex allocated objects, but which would not themselves "own" pointers to Fil-C-managed data, thus avoiding the need to trace inside these objects and auto-free them), and general Fil-C-managed pointers with possible cycles (perhaps restricted to some arena/custom address space, to make tracing and collecting more efficient). Due to memory safety, the use of "leaf" references could be guaranteed not to alter the invariants that Fil-C relies on; but managed pointers would nonetheless be available whenever tracing GC could not be avoided.
This would ultimately save much of the overhead associated with tracing GC itself.
Most software works flawlessly without modifications on Fil-C, the performance isn't *that* bad, and there are applications where security is more important than performance (for example, military applications).
Safety isn’t really optional. Up until now, we’ve had no choice but to run C and accept the bugs; now we’re allowed to also run it safely.
Yes, this means that C is one of the slower languages around. That’s fine; computers are fast. If you want to write high-performance code, there are plenty of faster languages to do it with.
There were several options, but the industry decided to bet on UNIX and C as its main inspiration going forward from the 1980s.
"A consequence of this principle is that every occurrence of every subscript of every subscripted variable was on every occasion checked at run time against both the upper and the lower declared bounds of the array. Many years later we asked our customers whether they wished us to provide an option to switch off these checks in the interests of efficiency on production runs. Unanimously, they urged us not to--they already knew how frequently subscript errors occur on production runs where failure to detect them could be disastrous. I note with fear and horror that even in 1980 language designers and users have not learned this lesson. In any respectable branch of engineering, failure to observe such elementary precautions would have long been against the law."
-- C. A. R. Hoare's "The 1980 ACM Turing Award Lecture"
Guess what he means by "1980 language designers and users have not learned this lesson".
There’s nothing about how Fil-C is designed that constrains it to x86_64. It doesn’t strongly rely on x86’s memory model. It doesn’t strongly rely on 64-bit.
I’m focusing on one OS and arch until I have more contributors and so more bandwidth to track bugs across a wider set of platforms.
C is immensely powerful, portable and probably as fast as you can go without hand-coding in the architecture-specific assembly. Most of the world's information systems (our cyberstructure) rely directly or indirectly on C. And don't get me wrong, I'm a great enthusiast of the idea of sticking to memory-safe languages like Rust from now on.
The hard truth is that we will live with legacy C code, period. Pizlo's heroic effort bridges the gap, so to speak: it kind of sandboxes userspace C in a way that inherently adds memory safety to legacy code. There are only a few corner cases that can't tolerate any slowdown vis-a-vis unsafe C, and the great majority of code across every industry would benefit far more from the reduced surface of exposure.
This is already quite impressive. Many automatic memory managed languages have more than 4x worst-case slowdown. E.g. Google-backed Golang is ~1.5× to ~4× slower than optimized C++. I suppose there are many ways to further speed up Fil-C if more resources were given to the project.
Yes, think of something like Inferno, but Limbo now is AOT compiled instead of JITed.
However, there are also kernel-like commercial projects in Go, and apparently the related TamaGo fork might eventually get upstreamed into the reference implementation.
https://github.com/mbrock/filnix
It's working. It builds tmux, nethack, coreutils, Perl, Tcl, Lua, SQLite, and a bunch of other stuff.
Binary cache on https://filc.cachix.org so you don't have to wait 40 minutes for the Clang fork to build.
If you have Nix with flakes on a 64-bit Linux computer, you can run
right now!
Apple has a memory-safer C compiler/variant they use to compile their boot loaders:
https://saaramar.github.io/iBoot_firebloom/
https://fil-c.org/invisicaps_by_example
I think you’re thinking of something else
"Why Embedded Swift"
https://youtu.be/LqxbsADqDI4?t=144
Ok that got a chuckle out of me haha
But if you try to write to a readonly global constant then you’ll panic. And there are a handful of ways to allocate readonly data via Fil-C’s APIs.
It's a concurrent GC.
https://github.com/mbrock/filnix/blob/main/ports/analysis.md
Except, uh, you can't use C# or Go to run a program written in C/C++.
Oh, you mean we can solve all our problems by simply rewriting all legacy software? Right, I forgot!
Isn’t that Rust’s raison d'être?
(Just kidding…mostly)
https://people.cs.rutgers.edu/~santosh.nagarakatte/softbound...
CCured was another:
https://people.eecs.berkeley.edu/~necula/Papers/ccured_popl0...
Previous discussion:
2025 Safepoints and Fil-C (87 points, 1 month ago, 44 comments) https://news.ycombinator.com/item?id=45258029
2025 Fil's Unbelievable Garbage Collector (603 points, 2 months ago, 281 comments) https://news.ycombinator.com/item?id=45133938
2024 The Fil-C Manifesto: Garbage In, Memory Safety Out (13 points, 17 comments) https://news.ycombinator.com/item?id=39449500
1. How do we prevent loading a bogus lower through a misaligned store or load?
Answer: Misaligned pointer load/stores are trapped; this is simply not allowed.
2. How are pointer stores through a pointer implemented (e.g. `*(char **)p = s`) - does the runtime have to check if *p is "flight" or "heap" to know where to store the lower?
Answer: no. Flight (i.e. local) pointers whose address is taken are not literally implemented as two adjacent words; rather the call frame is allocated with the same object layout as a heap object. The flight pointer is its "intval" and its paired "lower" is at the same offset in the "aux" allocation (presumably also allocated as part of the frame?).
3. How are use-after-return errors prevented? Say I store a local pointer in a global variable and then return. Later, I call a new function which overwrites the original frame - can't I get a bogus `lower` this way?
Answer: no. Call frames are allocated by the GC, not the usual C stack. The global reference will keep the call frame alive.
That leads to the following program, which definitely should not work, and yet does. ~Amazing~ Unbelievable:
https://news.ycombinator.com/item?id=45234460
Recompiling existing software written in C with Fil-C isn't a great idea either, since some modifications are likely needed, at least to fix the bugs that Fil-C finds. And after those bugs are fixed, why continue using Fil-C?
This would ultimately save much of the overhead associated with tracing GC itself.
That’s why, even and especially if a C program runs in Fil-C with zero changes, you should use the Fil-C version in any context where security matters.
Yes, this means that C is one of the slower languages around. That’s fine; computers are fast. If you want to write high-performance code, there are plenty of faster languages to do it with.
"A consequence of this principle is that every occurrence of every subscript of every subscripted variable was on every occasion checked at run time against both the upper and the lower declared bounds of the array. Many years later we asked our customers whether they wished us to provide an option to switch off these checks in the interests of efficiency on production runs. Unanimously, they urged us not to--they already knew how frequently subscript errors occur on production runs where failure to detect them could be disastrous. I note with fear and horror that even in 1980 language designers and users have not learned this lesson. In any respectable branch of engineering, failure to observe such elementary precautions would have long been against the law."
-- C.A.R. Hoare's "The 1980 ACM Turing Award Lecture"
Guess what he means by "1980 language designers and users have not learned this lesson".
Because Fil-C might be a good way to debug future code?
Your question reads like, "Why use a debugger?"
There’s nothing about how Fil-C is designed that constrains it to x86_64. It doesn’t strongly rely on x86’s memory model. It doesn’t strongly rely on 64-bit.
I’m focusing on one OS and arch until I have more contributors and so more bandwidth to track bugs across a wider set of platforms.
the performance overhead of this approach for most programs makes them run about four times more slowly
4x slower isn't the normal case. 4x is at the upper end of the overheads you'll see.
C is immensely powerful, portable, and probably as fast as you can go without hand-coding architecture-specific assembly. Most of the world's information systems (our cyberstructure) rely directly or indirectly on C. And don't get me wrong, I'm a great enthusiast of the idea of sticking to memory-safe languages like Rust from now on.
The hard truth is that we will live with legacy C code, period. Pizlo's heroic effort bridges the gap, so to speak: it effectively sandboxes userspace C in a way that inherently adds memory safety to legacy code. Only a few corner cases genuinely can't tolerate any slow-down vis-a-vis unsafe C, and the great majority of code across every industry would benefit far more from the reduced attack surface.
This is already quite impressive. Many automatically memory-managed languages have more than a 4x worst-case slowdown. E.g. Google-backed Golang is ~1.5× to ~4× slower than optimized C++. I suppose there are many ways to speed up Fil-C further if more resources were given to the project.
However, there are also kernel-like commercial projects in Go, and apparently the related TamaGo fork might eventually be upstreamed into the reference implementation.
https://www.withsecure.com/en/solutions/innovative-security-...