That put a smile on my face, because I remember that's how LLVM itself was born: out of frustration with GCC.
I don't know how modern GCC and LLVM compare. I remember LLVM was fast but the resulting binaries were not as optimised; once those optimisations were added it became slower. Meanwhile, LLVM was a wake-up call to modernise GCC and make it faster. In the end, competition made both a lot better.
I believe some industries (gaming) used to swear by Visual Studio / the MS compiler / the Intel compiler, or by languages that depend on or prefer the Borland (whatever they are called now) compiler. It's been very long since I last looked; I am wondering if those are still used, or have we mostly all merged onto LLVM / GCC?
Embarcadero owns Borland. Unfortunately, stuff like C++ Builder doesn't seem to attract many people outside big corps, which is a shame given its RAD capabilities and GUI design tooling for C++.
It also has a standard ABI between Delphi and C++ Builder, which allows development workflows similar to what .NET offered later with C#/VB alongside the Managed C++ extensions (later replaced by C++/CLI).
I’ve personally watched the enshittification of too many proprietary tools to ever build something I care about on top of one today, especially something which becomes so fundamental to the design of your application like a GUI toolkit.
I know it sounds crazy, like you’d never bother forking your GUI framework anyway even when it’s open source. But I worked on an application built in C++ Builder, at a company with enough engineering talent and willpower that we would’ve forked the compiler internally to fix its problems and limitations had we been granted access to the source. Instead, I got to watch the product get held back by it year after year, hoping that Borland would care.
This seemed to be simply down to variable alignment; the programs took more memory but ran much faster, particularly multi-core (which was still high end then).
And this was on x86, where Metrowerks weren't really competing, and it was probably accidental. But the programs it compiled were fast.
I'd be surprised if anyone even knew that Metrowerks had a C++ compiler for x86 on Windows. At the time Metrowerks were on the tail end of their domination of Mac compilers, from before the Mac ran on x86.
Because of how the architecture works, LLVM is one of the backends, but it doesn't have to be. Very interesting project: you could do a lot more IR processing before descending to LLVM (if you use it at all), and that way you could give LLVM a lot less to do.
Chris has said LLVM is fast at what it is designed to do - lower IR to machine code. However, because of how convoluted it can get, and the difficulty involved in getting information from some language-specific MIR to LLVM, languages are forced to generate tons upon tons of IR so as to capture every possible detail. Then LLVM is asked to clean up and optimize this IR.
One thing to look out for is the problem of either losing language-specific information when moving from MIR to Low-level IR (be it Tilde or LLVM) or generating too much information, most of it useless.
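To make that concrete, here's a hedged, illustrative example (plain C++ standing in for whatever a front end actually emits; the function and names are mine, not from any particular compiler):

```cpp
// Illustrative only: the shape of IR a safety-checked language hands to LLVM.
// The source language's front end often knows a check like this can never
// fire (e.g. the index was just produced by iterating 0..size), but that
// fact does not survive lowering, so the fully expanded form is emitted and
// LLVM is left to rediscover and delete the dead branch.
#include <cstdlib>
#include <vector>

int checked_get(const std::vector<int> &v, std::size_t i) {
  if (i >= v.size())   // bounds check the front end may already know is dead
    std::abort();      // panic/trap path, still materialized in the IR
  return v[i];
}
```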
From Chris Lattner's descriptions of LLVM vs MLIR in various podcasts, it seems like LLVM is often used as a backend for MLIR, but only because so much work has been put into optimizing in LLVM. It also seems like MLIR is strictly a superset of LLVM in terms of capabilities.
Here's my question: It seems inevitable that people will eventually port all LLVM's smarts directly into MLIR, and remove the need to shift between the two. Is that right?
MLIR does include a couple dozen built-in dialects for common tasks - there's an "scf" dialect which defines loops and conditionals, for example, and a "func" dialect with function call and return ops - and it so happens that one of these built-in dialects offers a 1:1 representation of the operations and types found in LLVM IR.
If you choose to structure your compiler so that all of your other dialects are ultimately lowered into this MLIR-LLVM dialect, you can then pass your MLIR-LLVM output through a converter function to get actual LLVM-IR, which you can then provide to LLVM in exchange for machine code; but that is the extent of the interaction between the two projects.
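A hedged C++ sketch of that single hand-off point (the helper name is mine; the registration/translation calls are from MLIR's C++ API as I remember them, and exact headers vary between releases):

```cpp
#include <memory>

#include "mlir/IR/BuiltinOps.h"
#include "mlir/Target/LLVMIR/Dialect/Builtin/BuiltinToLLVMIRTranslation.h"
#include "mlir/Target/LLVMIR/Dialect/LLVMIR/LLVMToLLVMIRTranslation.h"
#include "mlir/Target/LLVMIR/Export.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Module.h"

// Hedged sketch: assumes `module` contains only ops from the MLIR "llvm"
// dialect, i.e. everything else has already been lowered. This one
// translation call is essentially the whole coupling between the projects.
std::unique_ptr<llvm::Module> lowerToLLVM(mlir::ModuleOp module,
                                          llvm::LLVMContext &llvmCtx) {
  mlir::registerBuiltinDialectTranslation(*module.getContext());
  mlir::registerLLVMDialectTranslation(*module.getContext());
  return mlir::translateModuleToLLVMIR(module, llvmCtx);
}
```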
In the future, it's plausible that dialects for hardware ISAs would be added to MLIR, and thus it would be plausible to completely bypass the LLVM IR layer for optimizations. But the final codegen layer of LLVM IR is not a simple thing (I mean, LLVM itself has three different versions of it), and the fact that GlobalISel hasn't taken over SelectionDAG after even a decade of effort should be a sign of how difficult it is to actually replicate that layer in another system to the point of replacing existing work.
https://2025.cgo.org/details/cgo-2025-papers/39/A-Multi-Leve...
Here's a quick pres from the last dev meeting on how this can be leveraged to compile NNs to a RISC-V-based accelerator core: https://www.youtube.com/watch?v=RSTjn_wA16A&t=1s
The big players have their own frontend, dialects, and mostly use LLVM backends. There’s very little common usable infrastructure that is upstreamed. Some of the upstreamed bits are missing large pieces.
Certainly, there are closed-source downstream dialects; that was one of the actual design goals of the project. But they are rarely as useful as one might think. I'd expect every big company with hardware to have an ISA/intrinsic-level dialect, at least as a prototype, that they won't open source for the same reason they won't open source the ISA.
What I find sad is that end-to-end flows from, e.g., PyTorch to binaries usually live outside of the LLVM project, and often in each company's downstream. There is some slow movement to fix that.
Hmm I wonder what all that stuff is then that's under mlir/lib?
Like what are you even talking about? First of all there are literally three upstream frontends (flang, ClangIR, and torch-mlir). Most people use PyTorch as a frontend (some people use Jax or Triton). Secondly, downstream users having their own dialects... is basically the whole point of MLIR. Core dialects like linalg, tensor, memref, arith are absolutely generically useful. In addition many (not all) MLIR-based production quality compilers are fully open source (IREE, Triton) even if wholly developed at a for-profit.
There was a keynote at the LLVM developer meeting a couple of years ago, from somebody not involved in MLIR, presenting the differences and the likely evolution and giving the lay of the land.
Theoretically, yes -- taking years if not decades for sure. And setting aside (middle-end) optimizations, I think people often forget another big part of LLVM that is (much) more difficult to port: code generation. Again, it's not impossible to port the entire codegen pipeline; it just takes lots of time, and you need to try really hard to justify the advantage of moving over to MLIR, which at the very least means showing that codegen with MLIR brings X% of performance improvement.
Mind you, I've never written a compiler since that uni course, and I last touched LLVM IR a long time ago.
It also has the advantage of being able to parallelize passes
EDIT: just found this podcast where the author gives more information about the project goals and history (at least the beginning of the podcast is interesting): https://www.youtube.com/watch?v=f2khyLEc-Hw
If you have to work on a system that hasn't been updated since 2014 or so (since it's fair enough to avoid the .0 releases), getting support for later C++ standards is significantly more complicated.
Not sure how this relates to my statement. I was talking about an optimizer, not about compilation speed. And I'm not using QBE, but Eigen, for good reasons.
One young developer checked out a couple files to "clean them up" with some refactoring. But because he changed some function interfaces, he needed to check out the files which called those functions. And since he was editing those files he decided to refactor them too. Pretty quickly he had more than half the files locked and everyone was beating on him for doing that. But because he had so many partial edits in progress and things weren't yet compiling and running, he prevented a dozen other people from doing much work for a few days.
The efficient alternative, which I’ve seen used a few times in these cases, is to retcon a high-quality fake history into the source tree after the design has stabilized. This has proven to be far more useful to other engineers than the true history in cases like this.
Incremental commits are nice, but not all types of software development lend themselves to that, especially early in the development process. I've seen multiple cases where trying to force tidy incremental commit histories early in the process produced significantly worse outcomes than necessary.
I agree that one can do good commit messages also early on though. Initial commit can be "project skeleton", then "argument parsing and start of docs", then maybe "implemented basic functionality: it now lists files when run", next "implemented -S flag to sort by size", etc. It's not as specific as "Forbid -H option in POSIX mode" and the commits are going to often be large and touch different components, but I'd say that's expected for young projects with few (concurrent) contributors
```
Big rewrites
* Rewrote X
* Deleted Y
* Refactored Z
```
Done
Spending a minute writing commit messages while prototyping something will break my flow and derail whatever I’m doing.
I can understand this if you are coding for a corporation. But if it's your own project, you should care about it enough to write good commit messages.
Or is your objection that solo devs code up prototypes and toy with ideas in live code instead of just in their mental VM in grooming sessions?
Or is your objection that you don't think early prototypes and demos should be available in the source tree?
Churn is okay. Prototypes are okay. Toying with ideas is okay. They should all be in the source tree. But I would want an explanation for the benefit of future readers, including the future author. Earlier in my life I have more than once run blame on a piece of code only to find that I wrote the line myself, with a commit message that does not explain it adequately. These days it's much rarer because I ask myself to write good commit messages. Furthermore, the act of writing a commit message is also soothing and a nice break from writing for computers.
Explain how requirements have changed. Explain how the prototype didn't work and led to a rewrite. Explain why some idea that was being toyed with turned out to be bad.
Notice that the above are explanations. They do not come with any implied actions. "Why is there churn" is a good question to answer but "how do we avoid churn in the future" is absolutely not. We all know churn is inevitable.
That said, specific fixes or similar can definitely do with good messaging. Though I'd say such things belong in comments rather than commit messages, since comments won't get shadowed over time by unrelated changes.
V8 has been moving away from sea-of-nodes. Here's a video where Ben Titzer talks about V8's reasons for moving away from sea-of-nodes: https://www.youtube.com/watch?v=Vu372dnk2Ak&t=184s. Yasser, the author of Tilde, is also in the video.
Before setting out to implement 1980s-style graph coloring, I would suggest considering SSA-based register allocation instead: https://compilers.cs.uni-saarland.de/projects/ssara/ , I find the slides at https://compilers.cs.uni-saarland.de/projects/ssara/hack_ssa... especially useful.
Graph coloring is a nice model for the register assignment problem. But that's a relatively easy part of overall register allocation. If your coloring fails, you need to decide what to spill and how. Graph coloring does not help you with this, you will end up having to iterate coloring and spilling until convergence, and you may spill too much as a result.
But if your program is in SSA, the special properties of SSA can be used to properly separate these subphases, do a single spilling pass first (still not easy!) and then do a coloring that is guaranteed to succeed.
I haven't looked at LLVM in a while, but 10-15 years ago it used to transform out of SSA form just before register allocation. If I had to guess, I would guess it still does so. Not destroying SSA too early could actually be a significant differentiator to LLVM's "cruft".
What are you doing to make sure Tilde does not end up like this?
* Less cache churn: I'm doing more work per cache line loaded in (rather than rescanning the same function over and over again).
* Combining mutually beneficial optimizations can lead to fewer phase-ordering problems and a better solve (this is why SCCP is better than DCE and constant prop run separately; see the sketch below).
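A hedged toy example of that SCCP point (the function is purely illustrative); the comments walk through why the combined analysis wins:

```cpp
// Constant propagation alone gives up: x merges {1, 2} at the loop header.
// DCE alone gives up: it cannot delete the branch without knowing x == 1.
// SCCP is optimistic: it assumes the `x = 2` edge is never executed, so the
// merge only ever sees 1, which keeps the branch provably dead -- the two
// facts support each other and the pass folds f() to `return 1`.
int f(int n) {
  int x = 1;
  for (int i = 0; i < n; ++i) {
    if (x != 1)   // never true, but only provable once x is assumed constant
      x = 2;
  }
  return x;
}
```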
In a few years when TB is mature, I'd wager I'll have maybe 10-20 real passes for the "-O2 competitive" optimizer pipeline because in practice there's no need to have so many passes.
Essentially, if you say that LLVM's mid-end in particular is slow, I would expect you to present a drop-in replacement for LLVM's mid-end opt tool. You could leave C-to-LLVM-bitcode to Clang. You could leave LLVM-bitcode-to-machine-code to llc. Just like opt, take unoptimized LLVM bitcode as input and produce optimized LLVM bitcode as output. You would get a much fairer apples to apples comparison of both code quality and mid-end compiler speed (your website already mentions that you aren't measuring apples-to-apples times), and you would duplicate much less work.
Alternatively, look into existing Sea of Nodes compilers and see if you can build your demonstrator into them. LibFIRM is such a C compiler: https://libfirm.github.io/ There may be others.
It just seems like you are mixing two things: On the one hand, you are making some very concrete technical statements that integrated optimizations are good and the Sea of Nodes is a great way to get there. A credible demonstrator for this would be very welcome and of great interest to the wider compiler community. On the other hand, you are doing a rite-of-passage project of writing a self-hosting C compiler. I don't mean this unkindly, but that part is less interesting for anyone besides yourself.
EDIT: I also wanted to mention that the approach I suggest is exactly how LLVM became well-known and popular. It was not because of Clang; Clang did not even exist for the first eight years or so of LLVM's existence. Instead, LLVM focused on what it wanted to demonstrate: a different approach to mid-end optimizations compared to the state of the art at the time. Parsing C code was not part of that, so LLVM left that to an external component (which happened to be GCC).
I'm also confused why you believe LLVM didn't start out the exact same way?
I say this as one of the main people responsible for working on combined pass replacements in both GCC and LLVM for things that were reasonable to be combined.
I actually love destroying lots of passes in favor of better combined ones. In that sense, i'm the biggest fan in the world of these kinds of efforts.
But the reason these passes exist is not cruft, or 20 years of laziness; it's because it's very hard to replace them with combined algorithms that are both faster and achieve the same results.
What exactly do you plan on replacing GVN + Simplify CFG + Jump Threading + correlated value prop with?
It took years of cherrypicking research and significant algorithm development to develop algorithms for this that had reasonable timebounds, were faster, and could do better than all of them combined. The algorithm is quite complex, and it's hard to prove it terminates in all cases, actually. The number of people who understand it is pretty small because of the complexity. That's before you get to applied engineering of putting it in a production compiler.
These days, as the person originally responsible for it, i'd say it's not better enough for the complexity, even though it is definitely faster and more complete and would let you replace these passes.
Meanwhile, you seem to think you will mature everything and get there in a few years.
I could believe you will achieve some percent of GCC or LLVM's performance, but that's not the reason these passes exist. They exist because that is what it reasonably takes to achieve LLVM (and GCC's) performance across a wide variety of code, for some acceptable level of algorithm complexity and maintainability.
So if you told me you were only shooting for 80% across some particular subset of code, i could believe 10-20 passes. If you told me you were going to build a different product that targets a different audience, or in a different way, i could maybe believe it.
But for what you say here, I think you vastly underestimate the difficulty and vastly underappreciate the effort that goes into these things. This is hundreds of very smart people working on things for decades. It's one thing to have a healthy disrespect for the impossible. It's another to think you will, in a few years, outdo hundreds of smart, capable engineers on pure technical achievement.
That strikes me as somewhere between hubris and insanity.
People also pretty much stopped building and publishing general purpose compiler optimization algorithms a decade ago, moving towards much more specialized algorithms and ML focused things and whatnot.
This is because in large part, there isn't a lot left worth doing.
So unless you've got magic bullets nobody else has, either you won't achieve the same performance level, or it will be slow, or it will take you a significant amount of algorithm development and engineering well beyond "a few years".
I say this not to dissuade you, but to temper your apparent expectations and view.
Honestly - I wish you the best of luck, and hope you succeed at it.
I read it a few times and as best I can get this is what you're saying:
- You came up with a similar combined replacement pass for LLVM based on years of personal and external research.
- It's faster and has more functionality.
- It's very complex and you're not comfortable that it's possible to achieve various reliability guarantees, so it is unreleased
- Therefore (?) you think the Tilde author also couldn't possibly succeed
AFAICT you also believe that the Tilde author hasn't completed their replacement pass. From their post my take was that it was already done. The part that will mature is additional passes, or maybe optimizations/bugfixes, but not the MVP development.
Your main arguments seem to be probability and appeal to authority (external research, assigned responsibility, industry association). Pretty much all projects and startups fail, but it's because people attempt them that some succeed.
Is the author betting their career on this? Why do their expectations need to be tempered?
I'd be interested in hearing concrete criticisms of their algorithms (have you looked at the code?) or oversights in the design. Maybe the author would too! If you let the author know, maybe you could think of a solution to reduce the complexity or improve the guarantees together.
Lots of these come by, it gets tiring to try to provide detailed critiques of all of them. Feel free to go through comment history and see the last one of these to come along and the detailed critiques there :)
Meanwhile, let me try to help more:
First, the stuff i'm talking about is released. It's been released for years. It is included in LLVM releases. None of this matters; it was an example of what it actually takes, in terms of time and energy, to perform some amount of pass combination for real, something the author gives amazingly short shrift.
I chose the example I did not because i worked on it, but because it's in the list of things the author thinks are possible to combine easily!
Second it's not one combined pass they have to make - they think they will turn 75 passes into 20, with equivalent power to LLVM, but somehow much faster, yet still maintainable, mainly because "it's time for a rewrite" and they will avoid 20 years of cruft.
So they don't have to repeat the example i gave once. They have to repeat it 20-30 times. Which they believe they will achieve and reach maturity of in ... a few years.
They give no particular reason this is possible; i explained why it is remarkably difficult. While you can certainly combine some dataflow optimizations in various ways, doing so is not just hacking around.
It's often hard computer science problems to take two optimization passes, combine them in some way, and prove the result even ever terminates, not even that it actually optimizes any better. Here they are literally talking about combining 3 or 4 at a time.
While there are some basic helpful tools we proved a long time ago about things like composability of monotonic dataflow problems, these will not help you that much here.
Solving these hard problems is what it takes to have it work. It's not just cherry-picking research papers and implementing them, or copying other compiler code or something.
Let's take a concrete example, as you request:
If you want to subsume the various global value numbering passes, which all eliminate slightly different redundancies and prove or otherwise guarantee that you have actually done so, you would need a global value numbering pass you can prove to be complete. Completeness here means that it detects all equivalent values that can be detected.
There is no way around this. Either it subsumes them or it doesn't. If it doesn't, you aren't matching what LLVM's passes do, which the author has stated as the goal. As I said, i could believe a lesser goal, but that's not what we have here.
The limit of value numbering completeness here was proved a long time ago. The best you can do is something called Herbrand equivalences. Anything stronger than that can't be proven to be decidable, and to the degree it can, you can't prove it ever terminates.
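For intuition, a hedged toy example (purely illustrative function) of the kind of equivalence a stronger value numbering has to find and a purely pessimistic, hash-based one misses:

```cpp
// i and j are equal on every iteration, but proving the two loop phis equal
// requires already assuming i == j (each phi's back-edge operand is i+1 vs
// j+1). Pessimistic hash-based GVN therefore keeps them distinct; optimistic
// partitioning-style algorithms (and the complete Herbrand-equivalence ones)
// assume equality first and only retract it if the assumption breaks.
int g(int n) {
  int i = 1, j = 1;
  while (n-- > 0) {
    i = i + 1;
    j = j + 1;
  }
  return i - j;   // always 0, but only a stronger GVN can prove it
}
```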
That has the upside that you only have to achieve this to prove you've done the best you can.
It has the downside that there are very few algorithms that achieve this.
So there are a small number of algorithms that have been proven complete here (about 7), and all but 3 have exponential worst-case time.
The three polynomial algorithms are remarkably complicated and, as far as i know, have never been implemented in any production compiler, anywhere. Two are N^4, and one is N^3.
One of the N^3 ones has some followup papers where people question whether it really works or terminates in all cases.
These are your existing choices if you try to use existing algorithms to combine these 4 out of the 70 passes, into 1 pass.
Otherwise, you get to make your own, from scratch.
The author seems to believe you can still, somehow, do it, and make the result faster than the existing passes, which, because they do not individually try to be complete, are O(N) and in one case, N^2 in the worst case. So combined, they are N^2.
While it is certainly possible to end up with N^3 algorithms that are faster than N^2 algorithms in practice, here, none of the algorithms have also ever been proven practical or usable in a production compiler, and the fastest one has open questions about whether it works at all.
Given all this, i see the onus as squarely on the author to show this is really possible.
Again, this is just one example of what it takes to subsume 4 passes into 1, along the exact lines the author says they think they will do, and it would have to be repeated 30 more times to get down to 20 passes that are as good as LLVM.
That's without saying anything else about the result being faster, less complex, or having less cruft.
As for whether they've accomplished a combined pass of any kind - I've looked at the code in detail - it implements a fairly basic set of optimization passes that nowhere approaches the functionality of any of the existing LLVM passes in optimization power or capability. It's cool work for one person, for sure, but it's not really that interesting, and there are other attempts i would spend my time on before this one. I don't say that to knock the author - i mean it in the literal sense, to answer your question - i.e., it is not interesting in the sense that there is nothing here that suggests the end result will achieve fundamental improvements over LLVM or GCC, as you would hope to see in a case like this. The choices made so far are a set of tradeoffs that have been chosen before in other compilers, and there is nothing (yet) that suggests it will not end up with similar results to those compilers.
It is not any further along, or more well developed, etc., than other attempts have been in the past.
So when I look at it, i view that all as (at least so far) not interesting - nothing here yet suggests a chance of success at the goals given.
As I said, these things come along not infrequently - the ones that are most viable are the ones that have different goals (i.e., fast compilation even if it does not optimize as well, or proven-correct transforms, or ...). Or those folks who think they can do it, but it will take 10-15 years. Those are believable things.
The rest seem to believe there are magic bullets out there - that's cool - show me them.
As for " Pretty much all projects and startups fail, but it's because people attempt them that some succeed."
This is true, but also tautological, as you know - of course things can only succeed if someone attempts them. It is equally true that just because people attempt things does not mean anyone will succeed. While it is true nobody will be able to breathe unassisted in space if nobody tries, that does not mean anyone can or will ever succeed at it no matter how many people try.
This case is not like a startup that succeeds because it built a better product. This is like a startup that succeeds because it proved P=NP.
Those are not the same kind of thing at all, and so the common refrains about startups and such are not really that useful here.
The one you use is useful when arguing that if enough people try to build a better search and outdo Google (or whatever), eventually someone will succeed - this is likely true.
In this case, however, it is closer to arguing that if enough people jump off 500ft cliffs and die, eventually someone will achieve great success at jumping off 500ft cliffs.
Then again, I now can't help but wonder if LLVM (or even GCC) would be fast, if you just turned off all the optimizations ...
(Of course, at this point, I can't help but think "you don't need to worry about the speed of compilation" in things like Common Lisp or Smalltalk, because everything is compiled incrementally and immediately, so you don't have to wait for the entire project to compile before you could test something ...)
It's not the optimizations really, it's the language front ends. Rust and C++ are extremely analysis-heavy. Try generating a comparable binary in C (or a kernel build, which is likely to be much larger!) and see how fast these compilers can be.
> It's been 20 years and cruft has built up, time for a "redo".
Ah... is this one of those "I rewrote it and it's better" things, but when people inevitably discover issues that "cruft" was handling, the author will blame the user?
If we want simple and fast, we can do that, but sometimes it doesn't cover the corner cases that the slow and complicated stuff does -- and as you fix those things, the "simple and fast" becomes "complicated and slow".
But, as others have observed about GCC vs LLVM (with LLVM having had a similar life cycle), the added competition forced GCC to step up their game, and both projects have benefited from that competition -- even if, as time goes on, they get more and more similar to what each can do.
I think all our efforts suffer from the effects of the Second Law of Thermodynamics: "You can't win. You can't break even. And it's the only game in town."
Rewriting LLVM gives you the opportunity to rethink some of its main problems. Of those I think two big ones include Tablegen and peephole optimisations.
The backend code for LLVM is awful, and Tablegen only partially addresses the problem. Most LLVM code for defining instruction opcodes amounts to multiple huge switch statements that stuff every opcode into them; it's disgusting. This code is begging for a more elegant solution; I think a functional approach would solve a lot of the problems.
The peephole optimisation in the InstCombine pass is a huge collection of handwritten rules that has accumulated over time. You probably don't want to try and redo this yourself, but it will also be a big barrier to achieving competitive optimisation. You could try to solve the problem by using a superoptimisation approach from the beginning. Look into the Souper paper, which automatically generates peepholes for LLVM: (https://github.com/google/souper, https://arxiv.org/pdf/1711.04422.pdf).
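For a flavor of what those handwritten rules look like, here's a hedged, schematic sketch using LLVM's PatternMatch helpers (the function is mine, not lifted from InstCombine; real rules number in the thousands and carry far more conditions):

```cpp
#include "llvm/IR/Instruction.h"
#include "llvm/IR/PatternMatch.h"

using namespace llvm;
using namespace llvm::PatternMatch;

// Fold `add X, 0` -> X. Returns the replacement value, or nullptr if the
// pattern does not apply; the caller would replace all uses of I with it.
static Value *foldAddZero(Instruction *I) {
  Value *X;
  if (match(I, m_Add(m_Value(X), m_Zero())))
    return X;
  return nullptr;
}
```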
Lastly as I hate C++ I have to throw in an obligatory suggestion to rewrite using Rust :p
So one of the main problems you run into is that your elegant solution only works about 60-80% of the time. The rest of the time, you end up falling back onto near-unmaintainable, horribly inelegant kludges that end up having to exist because gee, real architectures are full of inelegant kludges in the first place.
Recently, I've been working on a decompiler, and I started out with going for a nice, elegant solution that tries as hard as possible to avoid the nasty pile of switch statements. And this is easy mode--I'm not supporting any ugly ISA extensions, I'm only targeting ancient, simple hardware! And still I ran into the limitations of the elegant solution, and had to introduce ugly kludges to make it work.
The saving grace is that I plan to rip out all of this manual work with a fully automatically-generated solution. Except that's only feasible in a decompiler, since the design of that solution starts by completely ignoring compatibility with assembly (ISAs turn out to be simpler if you think of them as "what do these bytes do" rather than "what does this instruction do")... and I'm worried that it's going to end up with inelegant kludges because the problem space more or less mandates it.
> You could try to solve the problem by using a superoptimisation approach from the beginning. Look into the Souper paper, which automatically generates peepholes for LLVM:
One of the problems that Souper ran into is that LLVM IR is too abstract for superoptimization to be viable. Rather than the promise of an automatic peephole optimizer, it's instead morphed more into "here's some suggestions for possible peepholes". You need a really accurate cost model for superoptimization to work well, and since LLVM IR gets shoved through instruction selection and instruction scheduling, the link between LLVM instructions and actual instructions is just too tenuous to build the kind of cost model a superoptimizer needs (even if LLVM does have a very good cost model for the actual machine instructions!).
This is generally true, though small compiler backends have the luxury of simply refusing to support such use cases. Take QBE and Cranelift, for example: the former lacks x87 support [1], the latter doesn't support varargs [2], which means neither of them supports the full x86-64 ABI for C99.
[1]https://github.com/michaelforney/cproc?tab=readme-ov-file#wh...
What damage would there be if GCC or LLVM decided to not support x87 anymore? It is not much different from dropping an ISA like IA-64. You can still use the older compilers if you need to.
Similarly, what is varargs used for? Pretty much only for C and its unfortunate printf/scanf stdlib calls. If a backend decides not to support C, all this headache goes away. The problem is, of course, that the first thing every new backend designer does is to write a C frontend.
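For reference, "supporting varargs" means the backend must implement the target's va_list lowering (register save area and overflow area on x86-64 SysV), not just special-case printf; even a trivial C-level user like this (hedged example) needs it:

```cpp
#include <cstdarg>

// Sum `count` ints passed variadically. The backend has to materialize the
// ABI-defined va_list machinery for va_start/va_arg/va_end to work.
int sum(int count, ...) {
  va_list ap;
  va_start(ap, count);
  int total = 0;
  for (int i = 0; i < count; ++i)
    total += va_arg(ap, int);
  va_end(ap);
  return total;
}
```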
For starters, you'd break every program using long double on x86.
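A small hedged check of that claim on x86-64 Linux (SysV ABI), where `long double` is the 80-bit x87 extended format, so a backend with no x87 support cannot implement the type or its ABI:

```cpp
#include <cfloat>
#include <cstdio>

int main() {
  long double x = 1.0L / 3.0L;
  // On x86-64 SysV this typically prints 16 (storage, padded) and 64
  // (mantissa bits of the x87 extended format). On Windows x64, long double
  // is just double, which is exactly the kind of ABI divergence at stake.
  std::printf("sizeof(long double) = %zu, LDBL_MANT_DIG = %d, x = %.20Lg\n",
              sizeof(long double), LDBL_MANT_DIG, x);
  return 0;
}
```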
And as far as "complexities of the x86 ISA" goes, x87 isn't really that high on the list. I mean, MMX is definitely more complex (and LLVM recently ripped out support for that). But even more complex than either of those would be anything touching AVX, AVX-512, or now AVX-10 stuff, and all the fun you get trying to build your systems to handle encoding the VEX or EVEX prefixes.
IR = Intermediate Representation
SSA = Static Single Assignment
CFG = Control-Flow Graph (not Context-Free Grammar)
And "sea of nodes" is this: https://en.wikipedia.org/wiki/Sea_of_nodes ... IIANM, that means that instead of assuming a global sequence of all program (SSA) instructions, which respects the dependecies - you only have a graph with the partial order defined by the dependencies, i.e. individual instructions are nodes that "float" in the sea.MIT license (the king of licenses, IMHO. That's not an objective statement though)
Written in easy to understand C.
No Python required to build it. (GCC requires Perl, and I think Perl is way easier to bootstrap than the Python that LLVM needs.)
No Apple. I don't know if you all have seen some of the Apple developers talking with people but some of them are extremely condescending and demeaning towards people of non-formal CS backgrounds. I get it, in a world where you are one of the supreme developers it's easy to be that way but it's also just as bad as having people like Theo or historical Torvalds.
The idea behind LLVM isn't new per se; there have been other similar tools in the past, e.g. the Amsterdam Compiler Kit.
Am I wrong to assume that Cwerg doesn't support 32-bit x86, or is that just implied by "X86-64"?
About using C++:
Cwerg is NOT going "all in" on C++ and tries to use as little STL as possible. There are some warts here and there that C++17 fixes and those are used by Cwerg - nothing major. There is also a lightweight C wrapper for Cwerg Backend.
About not using C:
I do not get the fetishizing of C. String handling is just atrocious, no concept of a span, no namespaces, poorer type system, etc. Cwerg is actually trying to fix these.
If Cwerg was written in C instead of C++, a lot of the constructs would become MACRO-magic.
About Backends:
Currently supported are: 64 bit ARM, 32 bit ARM (no thumb), and 64 bit x86. There are no plans to support 32 bit x86.
It's about dependability and bootstrapping. GCC 4.7 was the last version implemented in C, and it supports C++98/03 and a subset of C++11.
> There are some warts here and there that C++17 fixes [..] nothing major
But it's C++17 and thus requires many more bootstrap cycles until we have a compiler. I think a backend which only supports a subset of the features of a C++17 compiler should not depend on C++17, otherwise its usefulness is restricted by the availability of such a compiler.
> I do not get the fetishizing of C. String handling is just atrocious...
C is just a pretty primitive high-level language (with a very strange syntax) a compiler of which exists on almost any system. The next complexity step is C++98 (or the subset supported by cfront), which solves your issues and is even good enough to build something as complex as Qt and KDE.
> There are no plans to support 32 bit x86
Ok, thanks. The support for ARM32 already enables many use cases.
A self-hosting Cwerg will hopefully be much easier to bootstrap because of its size. But until then, why do you need the (continuous) bootstrapping? You can use a cached version of the bootstrapped C++ compiler, or cross-compile.
QBE, MIR, & IR (php's) are all worth a look too.
Personally I've settled on IR for now because it seemed to match my needs the most closely. It's actively developed, has aarch64 in addition to x64 (looks like TB has just started that?), does x64 Windows ABI, and seems to generate decent code quickly.
In other words, since this alternative to LLVM is coded in plain and simple C, it is shielded against those who still do not see that computer languages with an ultra-complex syntax are not the right way to go if we want sane software.
You also have QBE, which with cproc will give you ~70% of latest gcc speed (in my benchmarks on AMD zen2 x86_64).
And we must not forget that 100% "safe" high-level code should never be trusted to be compiled into 100% safe machine code.
C99+ is just the least bad compromise.
Pascal here meaning compilers people actually use, not ISO Pascal from 1976, people love their C extensions after all.
As for C's simplicity, one just needs to organise a pub quiz, using ISO and key extensions as a source of inspiration.
Nonetheless, it is good to see efforts to reduce UB in the standard.
My complaints apply to C++ as well, naturally.
How would you know? You don't generally find out until a newer compiler release breaks your code.
> I have also never seen a language specification that answers all questions, not even Pascal or Ada.
Maybe, but I haven't seen "upgrade your compiler, get a new security bug" be defended so aggressively in other languages. Probably more cultural than legalistic - obviously "the implementation is the spec" has its problems, but most languages commit to not breaking behaviour that most code relies on, even if that behaviour isn't actually written in the spec, which means that in practice the language (the social artifact) is possible to learn in a way that C isn't.
> I agree that implicit conversions are an unfortunate feature of C, but I think the same about all languages where you can easily assign floating point to integer variables (or vice versa), for example.
So don't use those languages either then?
> Cross-toolchain and cross-platform experiments are a constant activity with all the programming languages I use.
Sounds pretty unpleasant, I practically never need to do that.
Until there is that special library that doesn't care about this target group of developers that want to stay in the past.
Also starting a new project like this in C is an interesting choice.
No serious alternative
They exist, and have truly enviable amounts of time for projects.
Also, don't overestimate compilers (or kernels). They can be much simpler than they might seem! The difficult/tedious parts are optimizations(!) and broad compatibility with bizarre real-world stuff.
(But you probably need to be older to be a good project manager.)
Why, though? It seems that with time we learn a LOT more, and then there are 100 ways of doing things and we get into analysis paralysis. The passion hasn't been the same either.
I'm saying this as someone who uses LLVM daily and wishes that it was written in anything other than C/C++; those languages bring so many cons that it is unreal.
Slow compilation, mediocre tooling (CMake), terrible error messages, etc., etc.
What's the point of starting with tech debt?
Having said that, perhaps Tilde can be a good compiler addition for D, alongside the existing GDC (GCC), LDC (LLVM) and the reference compiler DMD.
It's interesting to note that Odin is implementing its alternative compiler backend on Tilde, maybe due to some unresolved issues and bugs in LLVM [1], [2].
[1] A review of the Odin programming language (2022):
https://graphitemaster.github.io/odin_review/
[2] Understanding the Odin Programming Language (97 comments):
2. If you're using something like C# or Kotlin or whatever, then you're not serious about compilation speed.
3. You will need to provide a C API anyway.
An executable is an executable; there's no magic in C code, it's just a frontend for LLVM IR.
>If you're using something like C# or Kotlin or whatever, then you're not serious about compilation speed.
Do you have any good benchmarks about how big the difference is?