The paradox gains another layer when you consider that their whole mission is to build tools for the JavaScript ecosystem, yet by moving to Rust they are betting that JS-the-language is so broken it cannot even host its own tools. And because JS is still a stronger language than Rust for building UIs, their business strategy now hard-commits them to the bet that JS tools written in JS are a dead end.
You say this like this is the basic requirement for a language. But languages make tradeoffs that make them more appropriate for some domains and not others. There's no shade if a language isn't ideal for developer tools, just like there's no shade if a language isn't perfect for web frontends, web backends, embedded development, safety critical code (think pacemakers), mobile development, neural networks and on and on.
Seriously, go to https://astral.sh and scroll down to "Linting the CPython code base from scratch". It would be easy to look at that and conclude that Python's best days are behind it because it's so slow. In reality Python is an even better language at its core domains now that its developer tools have been rewritten in Rust. It's the same excellent language, but now developers can iterate faster.
It's the same with JavaScript. Just because it's not the best language for linters and formatters doesn't mean it's broken.
Also, the paradox is not really even there. The JS ecosystem largely gave up on JS-based tools a long time ago. Pretty much all major build tools have either already migrated to native code, at least partially, or are in the process of migrating. This has been going on for the last four years or so.
But the key to all of this is that most of these tools still support JS plugins. Rolldown/Vite is compatible with Rollup JS plugins and OXLint has an ESLint-compatible API (it's in preview atm). So it's not really even a bet at all.
Evan Wallace proved it by building esbuild. This is no longer a bet.
> If their open platform were as powerful as it should be, it would be easy to use it to recreate the kinds of experiences they propose to sell.
You would be surprised to know that tech companies may find it cheaper to pay money than to spend developer bandwidth on stuff beyond their core competency.
Dropbox was also considered to be trivially implementable, but end users rarely try to reinvent it.
Another example is the TypeScript compiler being rewritten in Go instead of self-hosting. It's an admission that the language is not performant enough, and moreover, that it can never be enough for building its own tooling. It might be that the tooling situation is the problem, not the language itself, though. I do see hopeful signs that the JavaScript ecosystem is continuing to evolve, like the recent release of MicroQuickJS by Bellard, or Bun, which is fast(er) and really fun to use.
I quite like Roc's philosophy here: https://www.roc-lang.org/faq#self-hosted-compiler. The developers of the language want to build a language that has a high performance compiler, but they don't want to build a language that one would use to build a high performance compiler (because that imposes a whole bunch of constraints when it comes to things like handling memory). In my head, JavaScript is very similar. If you need a high performance compiler, maybe look elsewhere? If you need the sort of fast development loop you can get by having a high performance compiler, then JS is just the right thing.
Off topic but I wonder if this applies to human languages, whether some are more suited for particular purposes - like German to express rigorous scientific thinking with compound words created just-in-time; Spanish for romantic lyrical situations; or Chinese for dense ideographs. People say languages can expand or limit not only what you can express but what you can think. That's certainly true of programming languages.
From what I understand, Vite+ seems like an all-in-one toolchain. Instead of maintaining multiple configurations with various degrees of intercompatibility, you maintain only one.
This has the added benefit that linters and such can share information about your dependency graph, and even ASTs, so your tools don't have to compute them individually. That has real potential to improve your overall pre-merge pipeline. Then, on top of that, caching.
The focus here is of course enterprise customers and looks like it is supposed to compete with the likes of Nx/Moonrepo/Turborepo/Rush. Nx and Rush are big beasts and can be somewhat unwieldy and quirky. Nx lost some trust with its community by retracting some open-source features and took a very long time to (partially) address the backlash.
Vite+ has a good chance to be a contender on the market with clearer positioning if it manages to nail monorepo support.
Doesn’t look super interesting to me tbh.
Let this be a warning: running oxfmt without any arguments recursively scans the directory tree from the current directory for all *.js and *.ts files and silently reformats them.
Thanks to that, a few Allman-formatted JavaScript files I care about got messed up, with no option to format them back from K&R style.
I've got to say this is what I would have expected and wanted to happen. I'd say it is wise not to run tools designed to edit files on files you don't have a backup of (e.g. in Git) without doing a dry run or a small-scope experiment first.
How were you not expecting this? Did you not bother to read anything before installing and running this command on a sensitive codebase?
But most formatters I'm used to absolutely don't do this. For example, `rustfmt` will read input from stdin if no argument is given. It can traverse modules in a project, but it won't start modifying everything under your CWD.
Most unix tools will either wait for some stdin or dump some kind of help when no argument is given. Hell, according to this tool's docs, even `prettier` seems to expect an argument:
> Running oxfmt without arguments formats the current directory (*equivalent to prettier --write .*)
I'm not familiar with prettier, so I may be wrong, but from the above, I understand that prettier doesn't start rewriting files if no argument is given? Looking up prettier's docs, they have this to say:
> --write
This rewrites all processed files in place. *This is comparable to the eslint --fix* workflow.
So eslint also doesn't automatically overwrite everything? So yeah, I can't say this is expected behaviour, even if it's documented.
Try git reset --hard, that should work.
It is bad UX.
A power user can just pass the right params. Besides, it is not that hard to support a `--yolo` parameter for that use case.
`oxfmt` should have done the same and `oxfmt .`, with the desired dir ".", should have been the required usage.
I don't expect `rm` with no argument to trash everything in my CWD. Which it doesn't, see sibling's comment.
new devs should not learn these things the hard way
So maybe we should call it JavaScript style? Modern JS style? Do we have a good name for it?
Also, does anyone know when and why “K&R style” [5] started being used to refer to Java style? Meaning K&R statement block style (“Egyptian braces” [6]) being used for all braces and single statement blocks getting treated the same as multi-statement blocks. Setting aside the eternal indentation question.
1: https://en.wikipedia.org/wiki/Indentation_style#Java
2: https://www.oracle.com/java/technologies/javase/codeconventi...
3: https://www.oracle.com/java/technologies/javase/codeconventi...
4: https://prettier.io/docs/options#tab-width
5: https://ia903407.us.archive.org/35/items/the-ansi-c-programm...
6: https://en.wikipedia.org/wiki/Indentation_style#Egyptian_bra...
We switched a CI pipeline from babel to SWC last year and got roughly 8x improvement. Tried oxc's transformer more recently on the same codebase and it shaved off another 30-40% on top of SWC. The wins compound when you have thousands of files and the GC pressure from all those AST nodes starts to matter.
Not to discredit OP's work of course.
It just takes someone with poor empathy towards their users to ship slow software that they don't use themselves.
Maybe one day we can use wasm or whatever and I can write fast code for the frontend, but not today, and it's a bit unsurprising that others face similar issues.
Also, if I'm building a CLI, maybe I think that 1ms matters. But someone browsing my webpage one time ever? That might matter a lot less to me, you're not "browsing in a hot loop".
- for loops over map/filter
- maps over objects
- .sort() over .toSorted()
- mutable over immutable data
- inline over callbacks
- function over const = () => {}
Pretty much, as if you wrote in ES3 (instead of ES5/6)
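As a small illustration of the first item in that list, here is a hedged sketch (function names invented) contrasting the idiomatic chained style with the allocation-light loop style:

```javascript
// Idiomatic chained style: each of .filter() and .map() allocates a
// fresh intermediate array before .reduce() runs.
function sumOfDoubledEvensChained(nums) {
  return nums
    .filter((n) => n % 2 === 0)
    .map((n) => n * 2)
    .reduce((acc, n) => acc + n, 0);
}

// "ES3 style": one pass, no intermediate arrays, no per-call closures.
function sumOfDoubledEvensLoop(nums) {
  let sum = 0;
  for (let i = 0; i < nums.length; i++) {
    if (nums[i] % 2 === 0) sum += nums[i] * 2;
  }
  return sum;
}
```

Both compute the same result; the loop version just never materializes the intermediate arrays.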
> care so little about the performance of the code they ship to browsers.
> but I'm curious to hear how do you know it for backend code but not frontend code.
Because I find backend languages extremely easy to reason about for performance. It seems to me that when I write in a language like rust I can largely "grep for allocations". I find that hard to see in javascript etc. This is doubly the case because frontend code seems to be extremely framework heavy and abstract, so it makes it very hard to reason about performance just by reading the code.
You think about allocations: JS is a garbage collected language and allocations are "cheap", so extremely common. GC is powerful and in most JS engines quite fast, but not omniscient, and sometimes it needs a hand. (Just like reasoning with any GC language.) Of course the easiest intervention against allocations is to remove allocations entirely; just because it is cheap to over-allocate, and the GC will mostly smooth out the flaws of such approaches, doesn't mean you can ignore the memory complexity of the chosen algorithms. Most browser dev tools today have allocation profilers equal or better than their backend cousins.
You think about threading, concurrency, etc: JS is even a little easier than many backend languages because it is (almost excessively) single-threaded. A lot of concurrency issues cannot exist in current JS designs unless you add in explicit IPC channels to explicitly "named" other threads (Service Workers and Web Workers). On the flipside, JS is a little harder to reason about threading than many backend languages because it is extensively cooperatively threaded. Code has to yield to other code frequently and regularly. Shaving milliseconds off a routine yields more time to other things that need to happen (browser events, user input, etc). That starts to add up. JS encourages you to do things in short, tight "bursts" rather than long-running algorithms. Here again, most browser dev tools today have strong stack trace/flame chart profilers that equal or exceed backend cousins. Often in JS "tall" flames are fine but "wide" flames are things to avoid/try to improve. (That's a bit reversed from some backend languages where shallow is overall less overhead and long-running tasks are sometimes better amortized than lots of short ones.)
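To make the "short, tight bursts" point concrete, here is a minimal sketch (function name invented) of chunking a long task so the event loop can breathe between batches; in a browser you might prefer `requestIdleCallback` or `requestAnimationFrame` to `setTimeout`:

```javascript
// Process a large array in small batches, yielding to the event loop
// between batches so browser events, user input, rendering, etc. can run.
async function processInBursts(items, handleItem, batchSize = 1000) {
  for (let i = 0; i < items.length; i += batchSize) {
    const end = Math.min(i + batchSize, items.length);
    for (let j = i; j < end; j++) handleItem(items[j]);
    // Yield: everything already queued on the event loop runs before we resume.
    await new Promise((resolve) => setTimeout(resolve, 0));
  }
}
```

The total work is the same; it is just spread into bursts instead of one long-running blocking call.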
> But someone browsing my webpage one time ever? That might matter a lot less to me, you're not "browsing in a hot loop".
The heavily event-driven architecture of the browser often means that just sitting on a webpage is "browsing in a hot loop". Browsers have gotten better and better at sleeping inactive tabs and multi-threading tabs to not interfere with each other, but things are still a bit of a "tragedy of the commons" that the average performance of a website still directly and indirectly drags everyone else down. It might not matter to you that your webpage is slow because you only expect a user to visit it once, but you also aren't taking into account that is probably not the only website that user is browsing at that moment. Smart users do directly and indirectly notice when the bad performance of one webpage impacts their experiences of other web pages or crashes their browser. Depending on your business model and what the purpose of that webpage is for, that can be a bad impression that leads to things like lost sales/customers.
> You think about threading, concurrency, etc: JS is even a little easier than many backend languages because it is (almost excessively) single-threaded. A lot of concurrency issues cannot exist in current JS designs unless you add in explicit IPC channels to explicitly "named" other threads (Service Workers and Web Workers).
My issue isn't with being able to write concurrent code that has no bugs, my issue is having access to primitives where I have tight control over concurrency and parallelism. The primitives in JS do not provide that control and are often very heavy in and of themselves.
I think it's perhaps worth noting that I am not saying "it's impossible to write fast code for the browser", I'm saying it is not surprising that people who have developed skillsets for optimizing backend code in languages designed to be fast are not in a great position to do the same for a website.
I still think that's a training/familiarity problem more than a language issue? You can just as easily start with `rg "\bnew\b"` as you can `rg "\.clone"`. The `new` operator is a useful thing to start with in both C++ and C#, too. (Even though JS `new` is technically a different operator than both C++'s and C#'s.) After that, the JSON syntax is a decent start. Something like `rg "\{\s*[\"'.]"` and `rg "\["` are places to start. Curly brackets and square brackets in "data position" are useful in Python and now some of C#, too.
After that the next biggest culprits are common library things like `.filter()` and `.map()` which JS defaults to reified/eager versions for historic reasons. (There are now lazier versions, but migrating to them will take time.) That sort of library allocations knowledge is mostly just enough familiarity with standard library, a need that remains universal in any language.
> JS hardly feels like a language where it's smooth to then fix that
Again, perhaps this is just a familiarity issue, but having done plenty of both, at the end of the day I still see this process as the same: move allocations out of tight loops, use object pools if necessary, examine the O-Notation/Omega-Notation of an algorithm for its space requirements and evaluate alternatives with better mean or worst cases, etc. It mostly doesn't matter what language I'm working in the basics and fundamentals are the same. Everything is as "smooth" as you feel comfortable refactoring code or switching to alternate algorithm implementations.
> frameworks are so common that I doubt I'd be in a position to do so
Do you treat all your backend library dependencies as black boxes as well?
Even if that is the case and you want to avoid profiling your framework dependencies themselves and simply hope someone else is doing that, there's still so much in your control.
I find JS is one of the few languages where you can somewhat transparently profile even all of your dependencies. Most JS dependencies are distributed as JS source and you generally don't have missing symbol files or pre-compiled binary bricks that are inscrutable to inspection. (WASM is changing that, for the worse, but so far there are very few WASM-only frameworks and most of them have other debugging and profiling tools.)
I can choose which frameworks to use based on how their profiler results look. (I can tell you that I don't particularly like Angular and one of the reasons why is I've caught it with truly abysmal profiles more than once, where I could prove the allocations or the CPU clock time were entirely framework code and not my app's business logic.)
I've used profilers to guide building my own "frameworks" and to prove "Vanilla" approaches to other developers over the frameworks in use.
> The primitives in JS do not provide that control and are often very heavy in and of themselves.
Maybe I'm missing what primitives you are looking for. async/await is about the same primitive in JS and Rust and there are very similar higher-level tools on top of them. There's no concurrency/parallelism primitives today in JS because there is no allowed concurrency or parallelism. There are task scheduling primitives somewhat unique to JS for doing things like "fan out" akin to parallelism but relying on cooperative (single) threading. Examples include `requestAnimationFrame` and `requestIdleCallback` (for "this can wait until you next need to draw a frame, including if you need to drop frames" and "this can wait until things are idle" respectively).
> I'm saying it is not surprising that people who have developed skillsets for optimizing backend code in languages designed to be fast are not in a great position to do the same for a website.
I think I'm saying that it is surprising to me that people who have developed skillsets for optimizing backend code in languages designed to be fast seem to struggle applying the same skills to a language with simpler/"slower" mechanics, but also on average much higher transparency into dependencies (fuller top-to-bottom stack traces and metrics in profiles).
To be fair, I get the impulse to want to leave it as someone else's problem. But as a full stack developer who has done performance work in at least a half dozen languages, I feel like if you can profile and performance tune Rust you should be able to profile and performance tune JS. But maybe I've seen "too much of the Matrix" and my "it's all the same" comes from a deep generalist background that is hard for a specialist to appreciate.
But that's fine. Even if we say it's a familiarity problem, that's fine. I'm only saying that it's not reasonable to expect my skills in optimizing backend code to somehow transfer. Obviously many things are the same - reducing allocation, improving algorithmic performance, etc. But that looks very different when you go from the backend to the frontend because the languages can look very different.
> You can just as easily start with `rg \bnew\b` as you can `rg \.clone`.
That's not true though. In Rust you have to have a clone somewhere if you're allocating on the heap, or use one of the pointer types (e.g. `Box::new`). If I pass a struct around it's either cheaply movable (i.e. Copy) or I have to `clone` it. Granted, many APIs will clone "invisibly" within them, but I can always grep to find the clone.
In Javascript, things seem to allocate by default. A new object allocates. A closure allocates. Things are very implicit, you sort of are in an "allocates by default" mode with js, it seems. In Rust I can just do `[u8; n]` or whatever if I want to, I can just do `let x = "foo"` for a static string, or `let y = 5;` etc. I don't really have to question the memory layout much.
Regardless, you can just learn those rules, of course, but you have to learn them. It seems much easier to "trip onto" an allocation, so to speak, in js.
> Again, perhaps this is just a familiarity issue
I largely agree, though I think that js does a lot more allocation in its natural syntax.
> Do you treat all your backend library dependencies as black boxes as well?
No, but I don't really use frameworks in backend languages much. The heaviest dependency I use is almost always the HTTP library, which is reliably quite optimized. Frameworks impose patterns on how code is structured, which, to me, makes it much harder to reason about performance. I now have to learn the details of the framework. Perhaps the only thing close to this in Rust would be tokio.
> I've used profilers to guide building my own "frameworks" and help proven "Vanilla" approaches to other developers over frameworks in use.
I suspect that this is merely an issue of my own biased experience where I have inherited codebases with javascript that are already using frameworks.
> Maybe I'm missing what primitives you are looking for. async/await is about the same primitive in JS and Rust and there are very similar higher-level tools on top of them.
I mean, stack allocation feels like a pretty obvious one, reasoning about mutability, control over locking, the ability to `join` two futures or manage their polling myself, access to operating system threads, access to atomics, access to mutexes, access to pointers, etc. These just aren't available in javascript. async/await in js is only superficially similar to Rust.
I mean, a simple example is that I recently switched to CompactString and foldhash in Rust for a significant optimization. I used Arc to avoid expensive `.clone` calls. I preallocated vectors and reused them, I moved other work to threads, etc. I feel really comfy doing this in Rust where all of this is sort of just... first class? Like, it's not "weird" rust to do any of this. I don't have to really avoid much in the language, it's not like js where I'd have to be like "Okay, I can't write {a: 5} here because it would allocate" or something. I feel like that shouldn't be too contentious? Surely one must learn how to avoid much of javascript if they want to learn how to avoid allocations.
> To be fair, I get the impulse to want to leave it as someone else's problem.
I just reject that framing. People focus on what they focus on. Optimizing their website is not necessarily their interest.
> I feel like if you can profile and performance tune Rust you should be able to profile and performance tune JS.
I probably could but it's definitely not going to feel like second nature to me and I suspect I'd really feel like I'm fighting the language. I mean, seriously, I'd be curious, how do you deal with the fact that you can't stack allocate? I can spawn a thread in Rust and share a pointer back to the parent stack, that just seems very hard to do in javascript if not outright impossible?
> I think I'm saying that it is surprising to me that people who have developed skillsets for optimizing backend code in languages designed to be fast seem to struggle applying the same skills to a language with simpler/"slower" mechanics
Yeah I don't really see it tbh. I mean even if you say "I can do it", that's great, but how is it surprising?
I had alluded to it before, but this is maybe where some additional experience with other garbage collected backend languages like C# or Java could help build some "muscle memory" here.
The typical lens in a GC-based language is value types versus reference types. Value types are generally stack allocated and pass-by-value (copy-by-value; copied from stack frame to stack frame when passed). Reference types are usually heap allocated and pass-by-reference. A reference is generally a "fat pointer", with the qualification that you generally can't dereference one like a pointer without complex GC locks because the GC reserves the right to move the objects pointed to by references (for instance, due to compaction, but can also due to things like promotion to another heap). References themselves follow the same pass-by-value rules generally (stack allocated and copied).
(The lines are often blurry hence "generally" and "usually": a GC language may choose to allocate particularly large value types on the heap and apply copy-on-write semantics in a way to meet the pass-by-value semantics. A GC language is also free to stack allocate small reference types that it believes won't escape a particular part of the stack. I bring up these edge cases not to suggest complexity but to remind that profile-guided optimization is often the best strategy in any language because any good compiler, even a JIT compiler, is trying to optimize what it can.)
In JS, the breakdown is generally that your value types are string, number, boolean, and your reference types are object, array, and function. `const a = 12` is a static, stack allocated number. `const x = 'foo'` is a static, stack allocated string. It will get copied if you pass it anywhere. Though there's one more optimization here that most GC languages use (and goes all the way back to early Lisp) called "string interning". Strings are always treated as immutable and essentially copy-on-write. Common strings and strings passed to a large number of stack frames get "interned" to shared memory (sometimes the heap; sometimes even just reusing the memory of their first compiled instance in the compiled binary). But because of the copy-on-write and how easy it is to trigger, and often those copies start stack allocated, strings are still considered value types, even though with "interning" they sometimes exhibit reference-like behavior and are sort of the "border type".
Of the things to look out for: `+` or `+=` where one of the sides is a string can be a huge allocator due to string byte copying alone, and it should be easy to anticipate when that happens.
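The string `+=` point can be sketched like this (a rule-of-thumb illustration with invented names; modern engines use rope-like string representations that soften the copying, so profile before rewriting):

```javascript
// Strings are immutable: each += conceptually allocates a new string
// and copies both halves, so naive accumulation is quadratic in copied bytes.
function buildNaive(parts) {
  let out = "";
  for (const p of parts) out += p; // conceptually a new string per iteration
  return out;
}

// Collect the pieces, then join: one final allocation for the result.
function buildJoined(parts) {
  return parts.join("");
}
```

Both produce the same string; they differ only in how much intermediate copying the engine may have to do.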
On the reference type side `let x = {a: 5}; let y = x`, the `{a: 5}` part is an object and does allocate to the heap (probably, modulo again things like escape detection by the JIT compiler), but `x` and `y` themselves are stack allocated references. That `let y = x` is only a reference copy.
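The reference-copy behavior is easy to demonstrate (a tiny sketch, values invented):

```javascript
// x and y are two stack-allocated references to one heap object.
const x = { a: 5 };
const y = x;   // reference copy, not an object copy
y.a = 7;       // mutation through y is visible through x: same object

// A value type copies instead:
let n = 5;
let m = n;     // value copy
m = 7;         // n is still 5
```

After this runs, `x.a` is 7 while `n` is untouched, which is exactly the value-type versus reference-type split described above.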
> it's not like js where I'd have to be like "Okay, I can't write {a: 5} here because it would allocate" or something. I feel like that shouldn't be too contentious? Surely one must learn how to avoid much of javascript if they want to learn how to avoid allocations.
Generally, it's not about "avoiding" the easy language constructions because they allocate, it is balancing the trade-offs of when you want to allocate and how much.
Just like you might preallocate a vector before a tight loop, you might preallocate an array or an object, or even an object pool. (Build an array of objects, with a "free" counter, borrow them, mutate them, return them to the "free" section when done.)
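That pool pattern, as a minimal hypothetical sketch (names invented; a production pool needs more lifecycle care):

```javascript
// A tiny fixed-size object pool: preallocate, borrow, reset, return.
function makePool(size, create, reset) {
  const items = Array.from({ length: size }, create);
  let free = size; // items[0..free-1] are available
  return {
    borrow() {
      if (free === 0) return create(); // pool exhausted: fall back to allocating
      return items[--free];
    },
    release(item) {
      reset(item); // scrub state before reuse
      if (free < size) items[free++] = item;
    },
  };
}
```

Borrowed objects are mutated in place inside the hot loop and handed back, so steady-state iterations allocate nothing.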
But some of that is trade-offs; preallocation is sometimes harder to read/reason with. On the other side, the "over-allocation" you are worried about might be caught entirely by the JIT's escape analysis and compiled out. For almost all languages it is best to let a profile or real data guide what to try to optimize (premature optimization is rarely a good idea), but especially for a GC language it can be crucial. Not because the GC language is more complicated or "magic" or "mysterious", but simply because a GC language is tuned for a lot of auto-optimizations that a manually managed memory language doesn't necessarily get "for free". The trade-off for references being much more opaque boxes than pointers is that a JIT compiler has more optimization options, because it can just assume pointer math is off the table. It's between the JIT and the GC where an allocation lives, more often than not, and there are some simple optimization answers such as "the JIT stack allocated that because it doesn't escape this method". It shouldn't feel like a surprise when such things happen, when you get such benefits "for free". The JIT and GC are still maintaining the value-type or reference-type semantics at all times; those are just (intentionally) big, easy "traits" with a lot of useful middle ground and a lot of cross-implementation variation.
> stack allocation feels like a pretty obvious one, reasoning about mutability, access to pointers
A lot of the above should be a decent starting place for learning those tools. `let` versus `const` as maybe a remaining JS piece not explicitly dived into.
References are generally "pointer enough" for most work. The JS GC doesn't have a way to manually lock a reference to dereference it for pointer math today, but that doesn't mean it never will. Parts of WASM GC are applicable here, but mostly restricted to shared array buffers (blocks of bytes).
In other GC languages, C# has been exploring a space for GC-safe stack allocated pointers to blocks of memory that support (range-checked) pointer-like math, called Span<T> and Memory<T>. They are roughly analogous to Rust's slice types, but subtly different, as you would expect for existing in a larger GC environment. As that approach has become very successful in C#, I am starting to expect variations of it in more GC languages in the next few years.
> control over locking, access to atomics, access to mutexes
For the most part JS is single threaded, stack data is copied (value types), and reference-types get auto-locking for "free" from the GC. So locks aren't important for most JS work and there's not much to control.
If you start to share memory buffers from JS to a Service/Web Worker or to a WASM process you may need to do more manual locks. The big family of tools for that is the Atomics global object: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...
But a lot of that is new and rare in JS today.
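To make the Atomics mention concrete, a minimal sketch (this runs in Node as well; in a browser the SharedArrayBuffer would typically be posted to a Worker first):

```javascript
// A SharedArrayBuffer can be viewed by multiple threads at once;
// the Atomics global provides race-free read-modify-write on it.
const sab = new SharedArrayBuffer(4);   // 4 bytes = one Int32 slot
const counter = new Int32Array(sab);

Atomics.add(counter, 0, 1);             // atomic increment
Atomics.add(counter, 0, 1);
const value = Atomics.load(counter, 0); // atomic read
```

Here everything happens on one thread, so the atomicity is not actually exercised; the point is only the shape of the API.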
> the ability to `join` two futures
`Promise.all` and `Promise.any` are the two most common "standard library" combinators. `Promise.all` is the most like Rust `join`.
There are also libraries with even higher-level combinators.
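A quick sketch of the `join`-like usage (a toy example, names invented):

```javascript
// Promise.all settles when every input settles (like joining futures);
// Promise.any settles with the first success instead.
async function fetchBoth() {
  const [a, b] = await Promise.all([
    Promise.resolve("left"),   // stand-ins for real async work
    Promise.resolve("right"),
  ]);
  return a + "+" + b;
}
```

If any input rejects, `Promise.all` rejects immediately, which is the main behavioral difference from combinators that wait for everything regardless (see `Promise.allSettled`).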
> manage their polling myself
Promises don't poll. JS lives in a browser-owned event loop. Superficially you are in a browser-provided "tokio"-like runtime at all times.
There are some "low-level" tricks you can pull, though in that the Promise abstraction is especially thin compared to Rust Futures. The entire "trait" that async/await syntax abstracts is just the "thenable pattern" in JS. All you need to make a new non-Promise Promise-like is create an object that supports `.then(callBack)` (optionally a second parameter for a catchCallback and/or a `.catch(callBack)`). Though the Promise constructor is also powerful enough you generally don't need to make your own thenable, just implement your logic in the closure you provide to the Promise constructor.
Similarly on the flipside if you need a more complex combinator than Promise.all, and the reason that some higher-level libraries also exist, you just have to build the right callbacks to `.then()` and coordinate what you need to.
It's generally recommended to stick with things like Promise.all, but low level tricks exist.
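The thenable pattern is small enough to show whole; a toy sketch (object and values invented):

```javascript
// Any object with a .then(onFulfilled, onRejected) method is a "thenable":
// await and Promise.resolve treat it like a Promise.
const lazyValue = {
  then(resolve /*, reject */) {
    setTimeout(() => resolve(42), 0); // settle asynchronously
  },
};

async function useIt() {
  return await lazyValue; // resolves to 42 via the thenable protocol
}
```

This is the entire "trait" async/await is built on in JS, which is why a plain object can slot in where a Promise is expected.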
> I mean even if you say "I can do it", that's great, but how is it surprising?
I think what continues to surprise me is that it sometimes reads like a lack of curiosity for other languages and for the commonalities between languages. Any GC language is built on the same exact kind of building blocks as "lower level" languages. There is a learning curve involved in reasoning about a GC language, but I don't think it should seem like a steep one. The vocabulary has strong overlaps: value types and stack allocated; reference types and heap allocated; references and pointers. The intuitions of one often benefit the other ("this is a reference type, can I simplify what I need from it inside this loop to a value type or two to keep it stack allocated or would it make more sense to preallocate a pool of them?"). Just because you don't have access to the exact same kinds of low level tools doesn't mean that they don't exist or that you can't learn how to take what you would do with the low level tools and apply them in the higher level space. (Plus tools like C#'s Span<T> and Memory<T> work where the low level tools themselves are also starting to blur more together than ever before.)
It just takes a little bit of curiosity, I think, to ask that next question of "how does a GC language stack allocate?" and allowing that to lead you to more of the vocabulary. Hopefully, I've done an okay job in this post illustrating that.
But to be fair, besides the usual patterns like tree-shaking and DCE, "runtime performance" is really tricky to measure or optimize for.
It's blisteringly fast
We are programmers; we are supposed to like programming. These Rust haters are intolerable.
Compare Go (esbuild) to webpack (JS): it's easily over 100x faster.
For a dev, time matters, but it's relative: waiting 50 sec for a webpack build compared to 50 ms with a Go toolchain is life changing.
But for a dev waiting 50ms or 20ms does not matter. At all.
So the conclusion is that JavaScript devs like hype, flooded into Rust, and built tooling for JS in Rust. They could have used any other compiled language and gotten nearly the same performance computer-time-wise, or exactly the same human-time-wise.
It absolutely does:
https://mail.python.org/pipermail/python-dev/2018-May/153296...
Anyway, you posted about speed, and then followed with a link to some Python-related thing. In Python, speed has never been a key tenet, at least when it comes to pure CPU-bound calculations. How much tooling is built in Python? All the modern Python tooling is mostly Rust-based too. So there's that.
I mean, for a dev working in JS with JS-built tooling, the delay is not in milliseconds but in seconds, even minutes.
I still think my point holds: having a build take in the tens of seconds vs 50 ms is very much good enough for development (the usual frontend save-and-refresh-browser cycle)
C is fine but old
- You need to have a clean architecture, so starting "almost from scratch"
- Knowledge about performance (for Rust and for build tools in general) is necessary
- Enough reason to do so: lack of perf in the competition and users feeling friction
- Time and money (still have to pay bills, right?)
And we can always reach for Scala or F# if we feel like playing with type systems.
Nonsense.
In other words, does it treat comments as syntactic units, or as something that can be ignored since they are not needed by the "next stage"?
The reason to find out what the comments are is of course to make it easy to remove them.
- how does it compare to biome?
- also, biome does all three (linting, formatting, and sorting); why do you want three libraries to do the job biome does alone?
I hate to say it, but biome just works better for me. I found the ox stuff to do weird things to my code when it was in weird edge case states as I was writing it. I'd move something around partially correct, hit save to format it and then it would make everything weird. biome isn't perfect, but has fewer of those issues. I suspect that it is hard to even test for this because it is mostly unintended side effects.
ultracite makes it easy to try these projects out and switch between them.
But I guess it wouldn't be an apples-to-apples comparison, because Bun can also run TypeScript directly.
In text form:
Bundling 10,000 React components (Linux x64, Hetzner)
Bundler Version Time
─────────────────────────────────────────────────────────
Bun v1.3.0 269.1 ms
Rolldown v1.0.0-beta.42 494.9 ms
esbuild v0.25.10 571.9 ms
Farm v1.0.5 1,608.0 ms
Rspack v1.5.8 2,137.0 ms

This is a set of linting tools and a typestripper, a program that removes the type annotations from TypeScript to turn it into pure JavaScript (and turns JSX into document.whateverMakeElement calls). It still doesn't have anything to actually run the program.
Currently it uses .NET and NativeAOT, but support for the Rust backend/ecosystem is being added over the next couple of months. TypeScript for GPU kernels, soon. :)