I think this is important, and for a more sophisticated compiler design I find Ghuloum's approach very appealing [1]: build a very simple subset of the language from top to bottom, then flesh it out gradually.
A really great book following this approach that I discovered recently is [2]. Although I find neither C nor x86 the best target for your first compiler, it's still a very good book for writing one.
[1] http://scheme2006.cs.uchicago.edu/11-ghuloum.pdf
[2] https://norasandler.com/2024/08/20/The-Book-Is-Here.html
Compiler courses are structured like that because parsing really was the most important part, but I'd say in the "modern" world once you have a clear idea of how parsing actually works, it's more important to understand how compilers implement language features.
Even if you want to implement a compiler yourself, "Claude, please generate a recursive descent parser for this grammar" is close to working one-shot.
Building a recursive descent parser from scratch was an eye-opener to 17-year-old me: a seemingly very complex problem that I had no idea how to approach could be made simple by breaking it down into the right primitives.
What's blocking me during programming is usually edge cases I had no idea about. It's still hard to find good material on compilers if you're not into reading dry-ass books. That's a me problem though, I simply can't force myself to read boring, facts-only content (one of the reasons I love Beej's guides).
Yes, but with experience that just becomes a matter of recognizing problem and design patterns. When you see a parsing problem, you know that the simplest/best design pattern is to define a Token class representing the units of the language (keywords, operators, etc.), write a NextToken() function that turns characters into tokens, then write a recursive descent parser on top of that (sketched below).
Any language may have its own gotchas and edge cases, but knowing that recursive descent is pretty much always going to be a viable design pattern (for any language you are likely to care about), you can tackle those when you come to them.
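Here's a minimal sketch of that pattern in Python, for a tiny expression grammar (numbers, + - * /, parentheses). The Token class and next_token function mirror the names above; the grammar and everything else are just illustrative, not any particular compiler's code:

    from dataclasses import dataclass

    @dataclass
    class Token:
        kind: str   # "NUM", "OP", "LPAREN", "RPAREN", "EOF"
        text: str

    def tokenize(src):
        """The NextToken() part: turn characters into a list of Tokens."""
        tokens, pos = [], 0
        while pos < len(src):
            ch = src[pos]
            if ch.isspace():
                pos += 1
            elif ch.isdigit():
                start = pos
                while pos < len(src) and src[pos].isdigit():
                    pos += 1
                tokens.append(Token("NUM", src[start:pos]))
            elif ch in "+-*/":
                tokens.append(Token("OP", ch)); pos += 1
            elif ch in "()":
                tokens.append(Token("LPAREN" if ch == "(" else "RPAREN", ch)); pos += 1
            else:
                raise SyntaxError(f"unexpected character {ch!r} at {pos}")
        tokens.append(Token("EOF", ""))
        return tokens

    class Parser:
        """Recursive descent: one method per grammar rule."""
        def __init__(self, tokens):
            self.tokens, self.i = tokens, 0
        def peek(self):
            return self.tokens[self.i]
        def next_token(self):
            tok = self.tokens[self.i]; self.i += 1; return tok
        def expr(self):      # expr := term (("+" | "-") term)*
            value = self.term()
            while self.peek().kind == "OP" and self.peek().text in "+-":
                op = self.next_token().text
                value = value + self.term() if op == "+" else value - self.term()
            return value
        def term(self):      # term := factor (("*" | "/") factor)*
            value = self.factor()
            while self.peek().kind == "OP" and self.peek().text in "*/":
                op = self.next_token().text
                value = value * self.factor() if op == "*" else value / self.factor()
            return value
        def factor(self):    # factor := NUM | "(" expr ")"
            tok = self.next_token()
            if tok.kind == "NUM":
                return int(tok.text)
            if tok.kind == "LPAREN":
                value = self.expr()
                if self.next_token().kind != "RPAREN":
                    raise SyntaxError("expected ')'")
                return value
            raise SyntaxError(f"unexpected token {tok}")

    print(Parser(tokenize("2 * (3 + 4)")).expr())   # 14

(Here the parser evaluates as it goes; returning AST nodes from each rule instead is a small change.)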
Table-driven parsers (built with yacc etc.) used to be emphasized in old compiler-writing books such as Aho & Ullman's famous "dragon book" (named for its front cover). I'm not sure why - maybe partly efficiency on the slower computers of the day, and partly because in the infancy of computing a more theoretical/algorithmic approach seemed more sophisticated and preferable (the canonical table-driven parser construction algorithm is one of Knuth's).
Nowadays it seems that recursive descent is the preferred approach for compilers because it's ultimately more practical and flexible. Table driven can still be a good option for small DSLs and simple parsing tasks, but recursive descent is so easy that it's hard to justify anything else, and LLM code generation now makes that truer than ever!
There is a huge difference in complexity between building a full-blown commercial quality optimizing compiler and a toy one built as a learning exercise. Using something like LLVM as a starting point for a learning exercise doesn't seem very useful (unless your goal is to build real compilers) since it's doing all the heavy lifting for you.
I guess you can argue about how much can be cut out of a toy compiler for it to still be a useful learning exercise in both compilers and tackling complex problems, but I don't see any harm in going straight from parsing to code generation, cutting out AST building and of course any IR and optimization. The problems this direct approach causes for code generation and optimization can themselves be a lesson in why a non-toy compiler has those stages!
A fun approach I used at work once, when I wanted a programmable regression test tool to support a pretty major C subset as its language, was even simpler... Rather than having the recursive descent parser generate code, I just had it generate executable data structures: subclasses of Statement and Expression base classes, with virtual Execute() and Value() methods respectively, so that the parsed program could be run by calling program->Execute() on the top-level object. The recursive descent functions just returned these statement or expression objects directly. To give a flavor of it, the ForLoopStatement subclass held the initialization, test and increment expression pointers, and then the ForLoopStatement::Execute() method could just call testExpression->Value() etc.
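For flavor, here's a rough Python transliteration of that idea (the original was C++ with virtual methods; ForLoopStatement, Execute() and Value() are the names from above, everything else is made up for illustration):

    class Expression:
        def value(self, env):            # the virtual Value() method
            raise NotImplementedError

    class Statement:
        def execute(self, env):          # the virtual Execute() method
            raise NotImplementedError

    class Const(Expression):
        def __init__(self, v): self.v = v
        def value(self, env): return self.v

    class Var(Expression):
        def __init__(self, name): self.name = name
        def value(self, env): return env[self.name]

    class Add(Expression):
        def __init__(self, lhs, rhs): self.lhs, self.rhs = lhs, rhs
        def value(self, env): return self.lhs.value(env) + self.rhs.value(env)

    class LessThan(Expression):
        def __init__(self, lhs, rhs): self.lhs, self.rhs = lhs, rhs
        def value(self, env): return self.lhs.value(env) < self.rhs.value(env)

    class Assign(Statement):
        def __init__(self, name, expr): self.name, self.expr = name, expr
        def execute(self, env): env[self.name] = self.expr.value(env)

    class ForLoopStatement(Statement):
        # holds the init/test/increment parts plus the body, as described above
        def __init__(self, init, test, increment, body):
            self.init, self.test, self.increment, self.body = init, test, increment, body
        def execute(self, env):
            self.init.execute(env)
            while self.test.value(env):      # testExpression->Value()
                self.body.execute(env)
                self.increment.execute(env)

    # the parser would return objects like this directly:
    # for (i = 0; i < 5; i = i + 1) total = total + i;
    program = ForLoopStatement(
        Assign("i", Const(0)),
        LessThan(Var("i"), Const(5)),
        Assign("i", Add(Var("i"), Const(1))),
        Assign("total", Add(Var("total"), Var("i"))),
    )
    env = {"total": 0}
    program.execute(env)
    print(env["total"])   # 0+1+2+3+4 = 10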
https://en.wikipedia.org/wiki/Niklaus_Wirth
From the Publications section of that Wikipedia page:
>The April 1971 Communications of the ACM article "Program Development by Stepwise Refinement",[22][23] concerning the teaching of programming, is considered to be a classic text in software engineering.[24] The paper is considered to be the earliest work to formally outline the top-down method for designing programs.[25][26] The article was discussed by Fred Brooks in his influential book The Mythical Man-Month and was described as "seminal" in the ACM's brief biography of Wirth published in connection to his Turing Award.[27][28]
The initial edition was published in 1976, in German, but the latest version is available online:
https://people.inf.ethz.ch/wirth/CompilerConstruction/Compil...
There are also parser generators like ANTLR (https://en.wikipedia.org/wiki/ANTLR) which take an input not unlike yacc's, but generate an LL parser as explicit code, rather than the table-driven LALR parsing of yacc.
I experimented with parser combinators in Haskell and Rust (nom), then moved on to parser generators in Rust (pest.rs), OCaml (Menhir) and Haskell (Happy), and finally ended up with Python's Lark - the speed of experimenting with different syntaxes/grammars is just insane.
I suppose that one of the pros of using tree-sitter is its portability? For example, I could define my grammar once and use it both to parse my code and to do proper syntax highlighting in the browser, with the same library and the same grammar? Is that correct? It's also used extensively in Neovim to define syntax for languages; otherwise I would have had to modify the grammar slightly for each use.
Yes, portability like that is a huge benefit, though I personally haven't used it for that yet. I just use it as an error-tolerant frontend to my compiler.
As to how errors are reported, tree-sitter creates an ERROR or MISSING node when a particular subtree has invalid syntax. I've found that it never leaves a node in an invalid state: it would never create binaryop(LeftNode(...), Op, ERROR) if RightNode is not optional; instead it creates an ERROR node for the binaryop as a whole. This allows you to safely unwrap known fields. ERROR nodes only really bunch up in repeat()s and optional()s, where you implicitly handle them anyway.
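A small sketch of what consuming that looks like with the Python bindings, assuming you already have a parse tree (grammar loading is omitted, since it varies between binding versions; the function only touches standard Node attributes - type, is_missing, children, start_point):

    def collect_syntax_errors(node, errors=None):
        """Recursively gather ERROR and MISSING nodes from a tree-sitter parse tree."""
        if errors is None:
            errors = []
        if node.type == "ERROR" or node.is_missing:
            errors.append((node.start_point, node.type))
        for child in node.children:
            collect_syntax_errors(child, errors)
        return errors

    # usage, assuming `tree` came from a tree-sitter Parser:
    # for (row, col), kind in collect_syntax_errors(tree.root_node):
    #     print(f"syntax problem ({kind}) at line {row + 1}, column {col + 1}")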
For an example, I can only point you to my own use: https://github.com/pc2/sus-compiler
- tree-sitter-sus has the grammar
- sus-proc-macro has nice proc macros for dealing with it (kind!("binop"), field!("name"), etc.)
- src/flattening/parser.rs has conveniences like iterating over lists
- src/flattening/flatten.rs has the actual conversion from syntax tree to SUS IR
For Rust I used Nom and I didn't mind it all that much although I noticed it's quite baroque. If I had more to write I'd probably make some wrappers or macros of my own for most commonly used Nom snippets.
Is "syntax-directed translation" just another term for a single-pass compiler, e.g. as used by Lua (albeit to generate bytecode instead of assembly / machine code)? Or is it something more specific?
> in the latter parts of the tutorial it starts showing its limitations. Especially once we get to types [...] it's easy to generate working code; it's just not easy to generate optimal code
So, using a single-pass compiler for a statically-typed language makes it difficult to apply type-based optimizations. (Of course, Lua sidesteps this problem because the language is dynamically typed.)
Are there any other downsides? Does single-pass compilation also restrict the level of type checking that can be performed?
I'm actually a big fan of function-by-function dual-pass compilation. You generate IR from the parser in one pass and do codegen right after. Most intermediate state is thrown out (including the AST, for non-polymorphic functions) and you move on to the next function. This gives you an extremely fast, data-oriented baseline compiler with reasonable codegen (much better than something like tcc).
A single-pass compiler can still split the various phases and only do the code generation in the last phase.
Let's Build a Compiler (1988) - https://news.ycombinator.com/item?id=38773049 - Dec 2023 (15 comments)
Let's Build a Compiler (1988) - https://news.ycombinator.com/item?id=36054416 - May 2023 (19 comments)
Let’s Build a Compiler (1995) - https://news.ycombinator.com/item?id=22346532 - Feb 2020 (41 comments)
Let's Build a Compiler - https://news.ycombinator.com/item?id=20444474 - July 2019 (47 comments)
Let's Build a Compiler (1995) - https://news.ycombinator.com/item?id=19890918 - May 2019 (18 comments)
Let’s Build a Compiler (1995) - https://news.ycombinator.com/item?id=6641117 - Oct 2013 (56 comments)
Let’s Build a Compiler (1995) - https://news.ycombinator.com/item?id=1727004 - Sept 2010 (17 comments)
Let’s Build a Compiler (1995) - https://news.ycombinator.com/item?id=232024 - June 2008 (5 comments and already complaining about reposts)
Let's build a compiler (dated, but very good) - https://news.ycombinator.com/item?id=63004 - Oct 2007 (2 comments)
It seems there aren't any (interesting) others? I expected more.
But there is this bonus:
An Interview with Jack Crenshaw, Author of the “Let’s Build a Compiler” - https://news.ycombinator.com/item?id=9502977 - May 2015 (0 comments, but good article!)
Pros:
* uses Python and recursive descent parsing
* separates frontend and backend via an IR
* generates ELF binaries (either x86 or ARM)
* meant for real-world use

Cons:
* more complex
* not written in a tutorial style
When one looks at languages that use LLVM as a backend, there is one consistent property: slow compilation. Because of how widespread LLVM is, we often seem to accept this as a fact of life and that we are forced to make a choice between fast runtime code and a fast compiler. This is a false choice.
Look at two somewhat recent languages that use LLVM as a backend: Zig and Rust. The former has acknowledged that LLVM is an albatross and is in the process of writing its own backends to escape its limitations. The latter is burdened with ridiculous compilation times that will never get meaningfully better so long as it avoids writing its own backend.
Personally, I find LLVM a quite disempowering technology. It creates the impression that its complexity is necessary for quality and performance and makes people dependent on it instead of developing their own skills. This is not entirely dissimilar to another hot technology with almost the same initials.
Which, by the way, is still active: https://compilers.iecc.com/index.phtml
Good summary.
I had no background in compilers or related theory but read Jack Crenshaw's Let's Build a Compiler tutorials some time ago. My main takeaway from reading half a dozen or so of them was that building a simple compiler for a toy language was a small project well within my grasp and ability, not a huge undertaking that required mastery of esoteric prerequisites or a large amount of planning.
I got a lot of enjoyment messing about with toy compiler projects related to Brainfuck.
Why Brainfuck? It's a beautiful little toy language. Brainfuck has 8 instructions, each instruction is 1 character, so parsing reduces to getting a char and switching on it. I guess it depends on what you want to explore. If you want to focus on writing recursive descent parsers, not the best choice!
One initial project could be to compile (transpile) from Brainfuck source to C source. You can do this as a source-to-source compiler without any internal representation by transforming each Brainfuck operation into a corresponding C statement. Brainfuck is specified in terms of a single fixed-length array of bytes and a pointer - an index into that array - that can be moved around, plus basic manipulations of the byte it is pointing at. So on the C side you need two variables: one for the array and a second for the index that serves as the pointer.
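A minimal sketch of that first project in Python, assuming the usual 30,000-cell tape; each of the eight operations becomes one C statement, and anything else in the source is treated as a Brainfuck comment:

    BF_TO_C = {
        ">": "p++;",
        "<": "p--;",
        "+": "tape[p]++;",
        "-": "tape[p]--;",
        ".": "putchar(tape[p]);",
        ",": "tape[p] = getchar();",
        "[": "while (tape[p]) {",
        "]": "}",
    }

    def bf_to_c(source):
        lines = [
            "#include <stdio.h>",
            "int main(void) {",
            "    static unsigned char tape[30000];",   # the array
            "    int p = 0;",                          # the pointer/index
        ]
        for ch in source:
            if ch in BF_TO_C:                          # everything else is a comment
                lines.append("    " + BF_TO_C[ch])
        lines += ["    return 0;", "}"]
        return "\n".join(lines)

    # e.g. write bf_to_c(open("prog.bf").read()) to out.c, then: cc out.c -o prog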
A second project could be compiling from Brainfuck to assembly language, skipping C. You'd need to read a few tutorials/reference docs about your chosen assembly language and learn how to run the assembler to compile tiny assembly programs into native executables. You could explore some examples of what output assembly programs you get when you compile small Brainfuck programs to C and then compile those C programs to assembly. You could write a direct source to source compiler without an internal representation, where each Brainfuck operation is directly mapped to a snippet of assembly instructions. Once you've got this working, you can compile a Brainfuck program into an assembly program, and then use the usual toolchain to assemble that into a native executable and run it.
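And a sketch of the second project, under assumptions the paragraph above doesn't pin down: x86-64, GNU assembler syntax, libc's putchar/getchar for I/O, and the tape pointer kept in the callee-saved %rbx. Loops are the only part that needs bookkeeping (matching start/end labels via a stack):

    BF_TO_ASM = {
        ">": ["addq $1, %rbx"],
        "<": ["subq $1, %rbx"],
        "+": ["addb $1, (%rbx)"],
        "-": ["subb $1, (%rbx)"],
        ".": ["movzbl (%rbx), %edi", "call putchar@plt"],
        ",": ["call getchar@plt", "movb %al, (%rbx)"],
    }

    def bf_to_asm(source):
        out = [".globl main", "main:", "    pushq %rbx", "    leaq tape(%rip), %rbx"]
        loop_stack, next_label = [], 0
        for ch in source:
            if ch in BF_TO_ASM:
                out += ["    " + line for line in BF_TO_ASM[ch]]
            elif ch == "[":
                n = next_label; next_label += 1; loop_stack.append(n)
                out += [f".Lstart{n}:", "    cmpb $0, (%rbx)", f"    je .Lend{n}"]
            elif ch == "]":
                n = loop_stack.pop()
                out += [f"    jmp .Lstart{n}", f".Lend{n}:"]
        out += ["    popq %rbx", "    xorl %eax, %eax", "    ret",
                ".lcomm tape, 30000"]
        return "\n".join(out)

    # write bf_to_asm(src) to prog.s, then assemble and link it with: cc prog.s -o prog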
There are also lots of projects in another direction, treating Brainfuck as the target language. Imagine that your job is to write Brainfuck programs for a CPU that natively executes Brainfuck. Try writing a few tiny Brainfuck programs by hand and savour how trying to do almost anything involves solving horrible little puzzles. Maybe it'd be much easier to do your job if you, the Brainfuck programmer, didn't have to manually track which index of the array is used to store what. You could invent a higher-level language supporting concepts like local variables, where you could add two local variables together and store the result in a third! Maybe you could allow the programmer to define and call their own functions! Maybe you could support `if` blocks and comparisons! You could have a compiler that manages the book-keeping of memory allocation and maps complex high-level abstractions such as integer addition onto native Brainfuck concepts of adding one to things and moving left or right. Projects in this direction let you explore more stuff about parsers (the input syntax for your higher-level language is richer), internal representations, scopes and so on.
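To give a tiny taste of that book-keeping: if such a hypothetical compiler assigns each local variable a fixed tape cell, even "add variable A into variable B" becomes a code-generation problem. An illustrative helper (destructive on the source cell, all names invented here):

    def emit_add_into(src, dst):
        """Emit Brainfuck that adds cell[src] into cell[dst] (zeroing cell[src]),
        assuming the pointer starts at cell 0 and returning it there afterwards."""
        to_src, home_from_src = ">" * src, "<" * src
        to_dst, home_from_dst = ">" * dst, "<" * dst
        return (to_src + "[-" + home_from_src + to_dst + "+" +
                home_from_dst + to_src + "]" + home_from_src)

    print(emit_add_into(0, 2))   # [->>+<<]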
The T3X language is very Pascal-like and fun to use (and portable: DOS/Win/Unix/CP/M...).
Also, as an intro, Zenlisp gives you a mini CS-101 a la SICP or CACS, but simpler and explained in a much more accessible way.
https://github.com/howerj/subleq/
As a bonus you can run eForth on top, which almost writes itself: compiler, interpreter, editor, IDE and a Sokoban, all in a simple VM.
Let's scale. Mu808/n808. Interpreters in C and AWK, a compiler in Python.
https://codeberg.org/luxferre/n808
You have the exact assembly algorithm on the page. What you see is what you get. Now, for real, I'd suggest getting the lvltl (VTL-02) interpreter written in C for a "bigger" language that runs not just under a VM but also on small machines and simulators such as the 6502-based KIM-1 and Apple 1. With that "not quite enough to be called BASIC" language, a Sokoban could be written with a bit of patience.