Bril: An Intermediate Language for Teaching Compilers
170 points | 1 month ago | 11 comments | cs.cornell.edu | HN
o11c
1 month ago
It's interesting how diametrically opposed this is to my recent explorations ... and yet, this is far better than most if only because it points out some of the decisions that would be bad (or at least could be done differently) in production.

I've ranted extensively [1] about how badly interpreters (among adjacent topics) are often taught, because so often decisions are just abruptly made without explanation of the reasoning (or, often, even that a decision is being made!).

One thing I just added was that there are (at least) 6 "arities" (which really should not be called that) of bytecode/IR:

* stack output, stack input, stack input - though often a few instructions do take arguments (push, jump)

* accumulator output/input, 1 explicit input (usually a local variable or constant)

* belt output, belt input, 1 explicit input. The first argument is always the output of the previous instruction; the explicit argument (if not a constant) specifies the output of some previous instruction. Backwards jumps will require storing many variables to a prior part of the belt, but this is simple and useful for mini expression evaluators.

* explicit output/input, explicit input - common for machine code

* explicit output, explicit input, explicit input - common for machine code

* implicit output, explicit input, explicit input - an easy way to save space for SSA (though you can also use it with mutability), though there's weirdness for phis.
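The first and fourth styles in this list can be contrasted in a few lines. This is an illustrative sketch of my own (the encodings and names are made up, not from the linked gist): the same computation, (a + b) * c, in stack style and in three-address style.

```python
def run_stack(code, env):
    """Stack style: inputs/outputs are implicit; only push takes an argument."""
    stack = []
    for op, *arg in code:
        if op == "push":
            stack.append(env[arg[0]])
        elif op == "add":
            y, x = stack.pop(), stack.pop()
            stack.append(x + y)
        elif op == "mul":
            y, x = stack.pop(), stack.pop()
            stack.append(x * y)
    return stack.pop()

def run_three_address(code, env):
    """Explicit output, explicit input, explicit input: machine-code style."""
    regs = dict(env)
    for op, dst, a, b in code:
        if op == "add":
            regs[dst] = regs[a] + regs[b]
        elif op == "mul":
            regs[dst] = regs[a] * regs[b]
    return regs[dst]        # result of the last instruction

env = {"a": 2, "b": 3, "c": 4}
stack_code = [("push", "a"), ("push", "b"), ("add",), ("push", "c"), ("mul",)]
ta_code = [("add", "t0", "a", "b"), ("mul", "t1", "t0", "c")]
assert run_stack(stack_code, env) == run_three_address(ta_code, env) == 20
```

The trade-off is visible even at this scale: the stack encoding needs no destination names at all, while the three-address encoding makes every dataflow edge explicit.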

[1]: https://gist.github.com/o11c/6b08643335388bbab0228db763f9921...

contificate
1 month ago
The author has mentioned ANF a few times but, from what I can tell, the likeness that they emphasise is really just the usual property of operands being atomic. This is a property used in many IRs, but I don't feel it's enough to describe Bril as being "an ANF language" - especially when you think about how tied ANF is to the functional compiler space.

The original ANF is actually looser than this in that it permits anonymous functions as arguments. In practice, there is no canonical variant of ANF that people really refer to, but most people usually mean a variant of ANF that doesn't permit this (which, to my knowledge, was first published in David Tarditi's PhD thesis). See this table from Appel's "Modern Compiler Implementation in ML" for the comparisons made in the functional IR space: https://i.imgur.com/17nfGMI.png.

Usually what people in the functional compiler space mean when they mention ANF is some variant of ANF (with Tarditi's restriction) that retains nested functions in a tree-like structure. The tree structure is important because it practically necessitates a "join point" extension within that structure (join points act as local, second-class continuations: they avoid duplicating evaluation contexts for multi-way branching constructs, without using functions for this purpose). It just so happens that if you hoist ANF into a bunch of function bodies (which were once nested) and blocks (which were once join points), you can easily construct a control flow graph. However, you could also say that lambda calculus would be "in SSA" throughout all of this (as it is originally, then normalised into ANF, and then hoisted into a control flow graph) - it just isn't usually what people mean when they talk about an SSA IR (they tend to imply the existence of specific provisions for the splitting of live ranges at join points, be it a phi-like pseudo-instruction or block arguments).

All this is to say that ANF is very tied to literature about the compilation of functional languages and its related analysis and comparison with CPS (such as whether it's closed under beta-reduction, for example), such that I think we need to be a bit more precise about the expected shape and properties of IRs to differentiate them, rather than just expecting compiler engineers to know what you're talking about - and, indeed, agree with your description - when you describe something as "an ANF language".

michaelmior
1 month ago
The author does acknowledge in the article that Bril is stricter than ANF.
contificate
1 month ago
My reading of the article is that the author has chosen to use "ANF" to describe a specific property of their IR that is not unique to ANF, whilst ignoring the fact that ANF (and variants of it) is strongly tied to the functional compiler realm where a specific tree-shaped structure - with nested and first-class functions etc. - is expected. The article says "It's an instruction-based, assembly-like, typed, ANF language". I think the usage of "ANF" is a misnomer here: just because it has this atomic arguments property does not make it an "ANF language" (whatever that means).

I normally wouldn't leave such a long - pedantic - comment, but the first comment on this thread was a question about ANF; I don't think much of the article has any relevance to ANF. It's mentioned off-hand and used as an adjective ("Bril is extremely A-normal form"), which suggests we need better terminology here. Most practical CPS IRs share the same atomic argument property, but you wouldn't suggest "Bril is extremely CPS".

mananaysiempre
1 month ago
> Bril’s SSA form needs a full rework, probably including an actual language extension along the lines of MLIR’s basic block arguments.

The linked MLIR documentation, in turn, credits Swift for that idea, but the earliest occurrence of phis as continuation arguments I know is in MLton. It’d be interesting to know where this idea comes from initially, because standard phis really are incredibly awkward.

tomsmeding
1 month ago
I'm not sure if it's the first occurrence of this idea, but one of the main ideas of "SSA is functional programming" by Andrew Appel in 1998 is almost precisely this: https://www.cs.princeton.edu/~appel/papers/ssafun.pdf (TL;DR: see the left column on page 2)
thechao
1 month ago
I really like this idea — I use it in a couple of languages I maintain for work — but have never seen a good write-up of the idea. Do you have any pointers to papers?
JonChesterfield
1 month ago
Basic block arguments are a good thing. Cranelift wrote some stuff on this, and MLton is well documented. Huge semantic simplification in my opinion - the simultaneous evaluation hassle of phi nodes disappears, leaving behind needing to deal with calling conventions across jumps. Which are a hassle, but it's the same thing as a function call, and you've probably got those anyway.
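The "simultaneous evaluation hassle" can be shown in a few lines (a sketch of my own; the encoding and names are hypothetical). Phis at a block entry behave like one parallel copy, so a swap loop that jumps back with its arguments exchanged goes wrong if the phis are executed one at a time, while block arguments bind like a function call and are simultaneous by construction:

```python
def jump_with_block_args(env, params, args):
    """Block arguments: bind like a call; all inputs are read before any write."""
    values = [env[a] for a in args]          # read everything first
    return dict(zip(params, values))

def naive_sequential_phis(env, phis):
    """Buggy reading of phis as 'just instructions', executed in order."""
    for dest, src in phis:
        env[dest] = env[src]                 # later phis see clobbered values
    return env

entry = {"a": 1, "b": 2}

ok = jump_with_block_args(entry, ["a", "b"], ["b", "a"])
assert ok == {"a": 2, "b": 1}                # swapped, as intended

bad = naive_sequential_phis(dict(entry), [("a", "b"), ("b", "a")])
assert bad == {"a": 2, "b": 2}               # the swap is silently lost
```

A real SSA-elimination pass handles this by inserting temporaries to sequentialize the parallel copy; block arguments sidestep the issue at the semantic level.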
rayiner
1 month ago
This is super cool! It would be nice to have a simple C or Pascal frontend that generates Bril. Playing with optimization and code generation is a lot of fun, but it's hard to really dig into it without being able to test things on real-world, non-trivial code. You can use Clang and work in LLVM, of course, but that's an order of magnitude too complicated for an intro-level compilers course, IMHO.

I also like the decision to not have the IR always in SSA form. I think it helps you understand SSA form better to have to construct it, and you're going to have to eliminate SSA form anyway for any of the classic techniques for generating machine code.

fire_lake
1 month ago
I thought that ANF is considered a dead-end?

Now the main choices seem to be CPS (which is seeing a bit of a resurgence!) and SSA.

So why teach ANF?

Vosporos
1 month ago
ANF is very much not considered a dead-end. It is SSA's opposite number from the functional programming perspective. CPS overlaps with it but also has properties, like requiring whole-program transformation, that do not help with predicting the final shape of the program.

See https://pauldownen.com/publications/anf-continued.pdf, https://www.reddit.com/r/ProgrammingLanguages/comments/13w3c... and https://langdev.stackexchange.com/a/2254 (as well as https://www.college-de-france.fr/sites/default/files/media/d... as a bonus)

mrkeen
1 month ago
I attempted both when trying to implement a language, and I couldn't wrap my head around the typing of the terms that CPS introduces, so I ended up going down the ANF road.
fire_lake
1 month ago
Appel has a book on compiling with CPS, "Compiling with Continuations". It's implemented in SML so the types are front and center. Might be of use!
mrkeen
1 month ago
p225:

> However, having a typed CPS language would require the manipulation and reduction of types whenever β-reductions (and other transformations) are performed. Although this is certainly possible to do, it is painful enough that we have decided to avoid this approach.

asplake
1 month ago
“I want students to feel the pain of working with non-SSA programs before the course introduces SSA. This frustration can help motivate why SSA is the modern consensus.”
fn-mote
1 month ago
In case anyone else believed this was sarcasm, it is not. It is a direct and accurate quote from the article. Bril is not (always) SSA.

Here is a follow-up quote from the article.

> Unfortunately, this aftermarket SSA retrofit has been a huge headache. [...] I think my original design is fundamentally flawed; it was a mistake to treat phi semantically as “just another instruction” [...]. Bril’s SSA form needs a full rework [...]. It has been an interesting lesson for me that SSA comes with subtle design implications that are difficult to retrofit onto an existing mutation-oriented IL.

I don't know enough to know what to make of this and the accompanying bug reports. Perhaps just "stay away from Bril SSA"?

smcl
1 month ago
Funny, “Bril” was also the name of the IL used by ADI’s C compiler when I worked in their DSP tools team.
082349872349872
1 month ago
> ...it’s wasteful to serialize and deserialize programs at each stage in a long pipeline. But the trade-off is that the Bril ecosystem comprises a large number of small pieces, loosely joined and infinitely remixable on the command line.

Education often has needs diametrically opposed to production. For example, the early Pascal compilers seem to have been optimised for throughput, not object codegen: after all, production code is compiled rarely and run often, but student code is compiled often and run almost never.

cube2222
1 month ago
In the compilers course at my university, we just had the option to use either LLVM text format as the target, or assembly (for bonus points).

Frankly, I don’t see much point in a “special IR for teaching”, because LLVM text format is just really straightforward, and at the same time teaches the “real deal” (sure, normally you’d likely use bindings, but still).

You can still have your students reimplement the optimizations that LLVM would normally do themselves (like induction variable elimination, constant propagation, dataflow analysis, using phi instead of alloca, etc.).
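For example, a local constant-propagation pass of the kind mentioned can be sketched in a few lines over a made-up three-address IR (this is my own illustration, not LLVM's actual API or pass structure):

```python
def const_propagate(instrs):
    """Replace uses of known-constant variables and fold constant adds."""
    known = {}                                 # var -> constant value, if known
    out = []
    for op, dest, *args in instrs:
        if op == "const":
            known[dest] = args[0]
            out.append((op, dest, *args))
        elif op == "add":
            vals = [known.get(a, a) for a in args]
            if all(isinstance(v, int) for v in vals):
                known[dest] = vals[0] + vals[1]        # fold to a constant
                out.append(("const", dest, known[dest]))
            else:
                known.pop(dest, None)
                out.append(("add", dest, *vals))        # propagate what we can
        else:
            known.pop(dest, None)                       # unknown op clobbers dest
            out.append((op, dest, *args))
    return out

prog = [("const", "x", 4), ("const", "y", 1), ("add", "z", "x", "y")]
assert const_propagate(prog)[-1] == ("const", "z", 5)
```

The same skeleton (a forward pass carrying a fact table) generalizes to the other dataflow exercises mentioned above.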

At least I was really happy I could play with llvm for a university project.

mananaysiempre
1 month ago
It sounds like you didn’t write optimizations operating on that format, though. This seems to have been a large part of the course TFA describes.
saagarjha
1 month ago
What’s wrong with it? If you’re the one generating it, LLVM IR isn’t that bad.
o11c
1 month ago
LLVM IR is a bit too low-level, so a lot of optimizations aren't really possible anymore, particularly around allocation and logical purity.
cube2222
1 month ago
Yeah that’s fair, you only target it, but would have an ad-hoc internal IR of your own making for optimizations.
PartiallyTyped
1 month ago
While life has gotten in the way of me going through this course, the way the IR is defined makes it super easy to just define some schemas and automatically have everything parsed and loaded in whatever language you're comfortable in.

Personally, I just wrote the types, and then used serde.
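The same approach is easy in other languages too. Here is a hedged Python sketch (the JSON field names `functions`, `instrs`, and `op` follow Bril's documented encoding as I recall it; `load_counts` is my own toy helper):

```python
import json

def load_counts(src):
    """Parse a Bril program from its JSON encoding and count instructions per opcode."""
    prog = json.loads(src)
    counts = {}
    for func in prog["functions"]:
        for instr in func["instrs"]:
            if "op" in instr:                    # skip bare labels
                counts[instr["op"]] = counts.get(instr["op"], 0) + 1
    return counts

src = """{"functions": [{"name": "main", "instrs": [
  {"op": "const", "dest": "v", "type": "int", "value": 4},
  {"op": "print", "args": ["v"]}]}]}"""
assert load_counts(src) == {"const": 1, "print": 1}
```

Because the format is plain JSON, any language with a JSON library gets a working Bril frontend for free, which is exactly the "small pieces, loosely joined" property the article describes.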

animal_spirits
1 month ago
Python started out as a language for teaching programming and evolved into what is now one of the most used languages in the world for all sorts of use cases. I wonder if Bril could follow the same trajectory, becoming the common language for creating compilers.
mrkeen
1 month ago
One hopes we can have nice things from the get-go next time around.

Various languages have deliberately shunned green threads, lambdas, generics, and/or compile-time type-checking, only to try to shoe-horn them in later.

v3gas
1 month ago
This is very cool! I wish the IL acronym was explained in the text, though.