Nanopass Framework: Clean Compiler Creation Language
85 points | 4 days ago | 5 comments | nanopass.org
verdagon
2 hours ago
[-]
I'm often skeptical of the desire to create a lot of passes. In the early Vale compiler, and in the Mojo compiler, we were paying a lot of interest on tech debt because features were put in the wrong pass. We often incurred more complexity trying to make a concept work across passes than we would have had with fewer, larger passes. I imagine this also has analogies to microservices in some way. Maybe other compiler people can weigh in here on the correct number/kind of passes.
reply
munificent
19 minutes ago
[-]
I work on Dart. I don't work directly on the compilers much, but I've gathered from talking to my teammates who do that, yeah, generally fewer passes is better.

From a maintenance perspective, it's really appealing to have a lot of small clearly defined passes so that you can have good separation of concerns. But over time, they often end up needing to interact in complex ways anyway.

For example, you might think you can do lexical identifier resolution and type checking in separate passes. But then the language gets extension methods and now it's possible for a bare identifier inside a class declaration to refer to an extension method that can only be resolved once you have some static type information available.
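To make the entanglement concrete, here is a minimal sketch (in Python, purely illustrative; this is not Dart's actual resolver, and the `EXTENSIONS` table and function names are made up) of why a "resolution" pass can't finish before a "type checking" pass once extension methods exist:

```python
# Hypothetical sketch: resolving a bare identifier may require the receiver's
# *static* type, which only a type-checking pass can supply.

# Extension methods registered per (type, name) -- an invented representation.
EXTENSIONS = {("String", "shout"): "<String.shout extension>"}

def resolve(name, local_scope, receiver_type):
    """Resolve a bare identifier: locals first, then extension methods
    declared on the receiver's static type."""
    if name in local_scope:
        return local_scope[name]
    ext = EXTENSIONS.get((receiver_type, name))
    if ext is not None:
        return ext
    raise NameError(name)

# 'shout' is not in any lexical scope, so resolution can only succeed
# once type information has determined the receiver is a String.
print(resolve("shout", {"x": "<local x>"}, "String"))
```

The lexical part of `resolve` could run early, but the extension-method fallback forces it to wait on (or interleave with) type checking, which is exactly how the two "separate" passes end up coupled.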

Or maybe you want to do error reporting separately from resolution and type checking. But in practice, a lot of errors come from resolution failure, so the resolver has to do almost all of the work to report that error, stuff that resulting data somewhere the next pass can get to it, and then the error reporter looks for that and reports it.

Instead of nice clean separate passes, you sort of end up with one big tangled pass anyway, but architected poorly.

Also, the performance cost of multiple passes is not small. This is especially true if each pass is actually converting to a new representation.

reply
pfdietz
1 hour ago
[-]
Yes, and a similar question is the organization of the thing being acted on by the passes. If I understand correctly, this is in Scheme and the things being acted on are trees with pointers. A performance-optimized compiler, on the other hand, will probably use some sort of array-based implementation of trees.

There's also a question of data about the trees (like, a flow graph) being recomputed for each nanopass. Also expensive.
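For contrast, here is a minimal sketch of what an array-based (struct-of-arrays) tree can look like. The field layout is invented for illustration; it is not taken from Nanopass or any real compiler:

```python
# An expression tree stored as parallel index arrays instead of linked nodes.
# Node 0 is (add (const 1) (const 2)).
kind      = ["add", "const", "const"]
value     = [None, 1, 2]
first_kid = [1, -1, -1]   # index of first child, -1 if leaf
next_sib  = [-1, 2, -1]   # index of next sibling, -1 if last

def evaluate(i):
    """Evaluate the expression rooted at node i by walking index arrays."""
    if kind[i] == "const":
        return value[i]
    if kind[i] == "add":
        total, c = 0, first_kid[i]
        while c != -1:
            total += evaluate(c)
            c = next_sib[c]
        return total
    raise ValueError(kind[i])

print(evaluate(0))  # 3
```

The appeal is cache locality and cheap bulk operations over all nodes; the cost is that structural edits (the bread and butter of a tree-rewriting pass) become index bookkeeping rather than pointer swaps.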

reply
soegaard
56 minutes ago
[-]
Nanopass uses structures internally to represent the programs.

The Nanopass DSL just gives the user a nicer syntax to specify the transformations.

reply
onlyrealcuzzo
2 hours ago
[-]
Do you have an article on lessons learned?

I'm creating a language/compiler now, and I'm quite certain that I did not have enough passes initially. I hope I'm at a good spot now, but time will tell.

reply
soegaard
38 minutes ago
[-]
reply
LegNeato
29 minutes ago
[-]
Why do passes anymore when we have invented egraphs?
reply
skybrian
12 minutes ago
[-]
It's the first I've heard of them. Looks like the research goes back to 1980, but good libraries seem fairly new?

https://blog.sigplan.org/2021/04/06/equality-saturation-with...

reply
jnpnj
2 hours ago
[-]
I wonder if there's some implicit wisdom that layering/modularizing incurs some communication cost that can cancel all the benefits.
reply
jasonjmcghee
1 hour ago
[-]
This is a question folks are asking about in terms of organization building too.

Bottlenecks are changing and it's pretty interesting.

reply
s20n
2 hours ago
[-]
I agree with the notion that having multiple passes makes compilers easier to understand and maintain, but finding the right number of passes is the real challenge here.

The optimal number of passes/IRs depends heavily on what language is being compiled. Some languages naturally warrant an architecture with a lot of passes.

Compiling Scheme for instance would naturally entail several passes. It could look something like the following:

Lexer -> Parser -> Macro Expander -> Alpha Renaming -> Core AST (Lowering) -> CPS Transform -> Beta / Eta Reduction -> Closure Conversion -> Codegen
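A couple of those stages can be sketched in miniature. Below is an illustrative pass-per-transformation pipeline over a toy lambda-calculus AST, written in Python for brevity rather than Scheme; the term encoding and function names are invented, and real passes (Nanopass's included) do far more:

```python
import itertools

# Toy terms as tuples: ("var", x), ("lam", x, body), ("app", f, a).
fresh = (f"x{i}" for i in itertools.count())

def alpha_rename(term, env=None):
    """Alpha renaming pass: give every binder a globally unique name."""
    env = env or {}
    tag = term[0]
    if tag == "var":
        return ("var", env.get(term[1], term[1]))
    if tag == "lam":
        _, x, body = term
        x2 = next(fresh)
        return ("lam", x2, alpha_rename(body, {**env, x: x2}))
    return ("app", alpha_rename(term[1], env), alpha_rename(term[2], env))

def free_in(x, term):
    """Is x a free variable of term?"""
    tag = term[0]
    if tag == "var":
        return term[1] == x
    if tag == "lam":
        return term[1] != x and free_in(x, term[2])
    return free_in(x, term[1]) or free_in(x, term[2])

def eta_reduce(term):
    """Eta reduction pass: (lam x (app f (var x))) -> f, if x not free in f."""
    tag = term[0]
    if tag == "lam":
        _, x, body = term
        body = eta_reduce(body)
        if body[0] == "app" and body[2] == ("var", x) and not free_in(x, body[1]):
            return body[1]
        return ("lam", x, body)
    if tag == "app":
        return ("app", eta_reduce(term[1]), eta_reduce(term[2]))
    return term

# Each pass is one small tree-to-tree transformation; the driver just composes them.
def compile_term(term):
    return eta_reduce(alpha_rename(term))

print(compile_term(("lam", "y", ("app", ("var", "f"), ("var", "y")))))  # ('var', 'f')
```

Each pass stays small and independently testable, which is the selling point; the thread's counterargument is that real passes rarely stay this independent.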

reply
presz
1 hour ago
[-]
We really need to get more designers interested in Scheme, because that logo is awful
reply
vmsp
1 hour ago
[-]
Wouldn't this kind of architecture yield a slower compiler, regardless of output quality? Conceptually, implementing the smallest number of passes, with each doing as much work as possible, would make more sense to me.
reply
Mathnerd314
4 hours ago
[-]
The website is not up to date; https://www.youtube.com/watch?v=lqVN1fGNpZw is not on there.
reply