The C-style interface of WASM is pretty limiting when designing higher-level interfaces, which is why wasm-bindgen is required in the first place.
Luckily, Firefox is pushing an early proposal to expose all the Web APIs directly to WASM through a higher-level interface based on the WASM Component Model proposal.
See https://hacks.mozilla.org/2026/02/making-webassembly-a-first...
I really doubt that web APIs like WebGPU or WebGL would see similar performance improvements, and it's unclear how the much more critical performance problems for accessing WebGPU from WASM would be solved by the WASM Component Model (e.g. WebGPU maps WGPUBuffer content into separate JS ArrayBuffer objects which cannot be accessed directly from WASM without copying the data in and out of the WASM heap).
2. It’s horrible needing so much JS glue code to do anything in wasm. I know most people don’t look at it, but JS glue code is a total waste of everyone’s time when you’re using wasm. It’s complex to generate. It can be buggy. It needs to be downloaded and parsed by the browser. And it’s slow. Like, it’s pure overhead. There’s simply no reason that this glue needs to exist at all. Wasm should be able to just talk to the browser directly.
I’d love to be able to have a <script src=foo.wasm> on my page and make websites like that. JS is a great language, but there’s no reason to make developers bridge everything through JS from other languages. Nobody should be required to learn and use JavaScript to make web software using WebAssembly.
Web APIs are designed for JavaScript, though, which makes this hard. For example, APIs that receive or return JS Typed Arrays, or objects with flags, etc. - wasm can't operate on those things.
You can add a complete new set of APIs which are lower-level, but that would be a lot of new surface area and a lot of new security risk. NaCl did this back in the day, and WASI is another option that would have similar concerns.
There might be a middle ground with some automatic conversion between JS objects and wasm. Say that when a Web API returns a Typed Array, it would be copied into wasm's linear memory. But that copy may make this actually slower than JS.
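To make that copy concrete, here is a sketch of what such glue looks like when hand-written today. The `memory` and `malloc` here are stand-ins for a real module's exports (the names are hypothetical), so the snippet is self-contained:

```javascript
// Stand-ins for a real wasm module's exports (hypothetical names):
// a linear memory and a trivial bump allocator.
const memory = new WebAssembly.Memory({ initial: 1 }); // 1 page = 64 KiB
let heapTop = 0;
const malloc = (n) => { const p = heapTop; heapTop += n; return p; };

// Glue that copies a Typed Array (e.g. one returned by a Web API)
// into the module's linear memory, returning a pointer/length pair.
function copyIntoWasm(bytes) {
  const ptr = malloc(bytes.byteLength);
  new Uint8Array(memory.buffer, ptr, bytes.byteLength).set(bytes);
  return [ptr, bytes.byteLength];
}

const fromApi = Uint8Array.from([1, 2, 3, 4]); // pretend a Web API returned this
const [ptr, len] = copyIntoWasm(fromApi);
console.log(ptr, len); // 0 4

// The wasm side would now read the data out of its own linear memory.
const view = new Uint8Array(memory.buffer, ptr, len);
console.log(view[3]); // 4
```

The copy in `copyIntoWasm` is exactly the overhead in question: for large buffers it can dominate the cost of the API call itself.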
Another option is to give wasm a way to operate on JS objects without copying. Wasm has GC support now so that is possible! But it would not easily help non-GC languages like Rust and C++.
Anyhow, these are the sort of reasons that previous proposals here didn't pan out, like Wasm Interface Types and Wasm WebIDL bindings. But hopefully we can improve things here!
Some of the newer Web APIs would be difficult to port. But the majority have quite straightforward equivalents in any language with a defined struct type (which, admittedly, you do have to define for WASM, and whether that interface ends up being zero-copy depends on the language you are compiling to wasm).
There is no solution without tradeoffs here, but the only reason JS-glue-code is winning out is because the complexity is moved from browsers to each language or framework that wants to work with wasm
Correct, but this has been one of wasm's guiding principles since the start: move complexity from browsers to toolchains.
Wasm is simple to optimize in browsers, far simpler than JavaScript. It does require a lot more toolchain work! But that avoids browser exploits.
This is the reason we don't support the wasm text format in browsers, or wasm-ld, or wasm-opt. All those things would make toolchains easier to develop.
You are right that this sometimes causes duplicate effort among toolchains, each one needing to do the same thing, and that is annoying. But we could also share that effort, and we already do in things like LLVM, wasm-ld, wasm-opt, etc.
Maybe we could share the effort of making JS bindings as well. In fact there is a JS polyfill for the component model, which does exactly that.
It would also enable combining different languages with high-level interfaces rather than having to drop down to c-style interfaces for everything.
IMHO the developer experience should be provided by compiler toolchains like Emscripten or the Rust compiler, and by their (standard) libraries. E.g. keep the complexity out of the browser; the right place for binding-layer complexity is the toolchains, at compile time. The browser is already complex enough as it is and should be radically stripped down instead of throwing more stuff onto the pile.
Web APIs are designed from the ground up for Javascript, and no amount of 'hidden magic' can change that. The component model just moves the binding shim to a place inside the browser where it isn't accessible, so it will be even harder to investigate and fix performance problems.
You're probably already thinking "obviously we just need a hub-and-spoke architecture where there's a common intermediate representation for all these types". That kind of architecture means that each environment only has to worry about conversions to and from the common representation, then you can connect any environment to any other environment, and you only need 2N glue systems instead of N^2. Effectively, you'd be formalizing the prior system of bespoke glue code generation into a standardized interface for interoperation.
That's the component model.
I'm perfectly happy with integers and floats as common interface types (native ABIs also only use integers and floats: pointers are integer-indices into memory, and struct offsets need to be implicitly known and compiled into the caller and callee).
The WASM Component Model looks like a throwback to the 1990s when component object models were all the rage (COM, CORBA, and whatnot).
Most people at least want strings too. And once you add strings, you need to make sure the format is correct (JS uses UTF-16, C uses NULL-termination, etc). So even if you don't allow a complex object model, you would still need N^2 glue systems just for strings.
Then you might as well add arrays too.
Before you know it, you end up with the component model.
Some operating systems might want their strings as UTF-8 encoded, some as UTF-16. It's the job of the caller to provide the strings in the right format before calling the OS function. In the end it's up to the caller and callee to agree on a format for string data. There is no 'middleman' or canonical standard format needed, just an agreement between a specific caller and callee.
The good and important part of such an agreement is that it is unopinionated. As long as caller and callee agree, it's totally fine to pass zero-terminated bytes; other callers and callees might find a pointer/size pair better. This sort of agreement also needs to happen when calling between native Rust and C code (or between any two languages, for that matter). My C code might even prefer to receive string data as pointer/size pairs instead of zero-terminated bytes, if all my string-processing code is built on top of strings as pointer/size pairs (apart from string literals, there is not a single feature in the C language which dictates that strings are zero-terminated bags of bytes - it's mostly just a convention of the ancient C stdlib functions).
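To illustrate what such a caller/callee agreement looks like on the JS side of a wasm boundary, here is a sketch that passes a string as a UTF-8 pointer/size pair; `memory` and `malloc` are stand-ins for a real module's exports (names hypothetical):

```javascript
// Stand-ins for a wasm module's exports (hypothetical names).
const memory = new WebAssembly.Memory({ initial: 1 });
let heapTop = 0;
const malloc = (n) => { const p = heapTop; heapTop += n; return p; };

// Caller side: encode the string as UTF-8 into linear memory and pass it
// as a (ptr, len) pair -- no terminator, no canonical middleman format,
// just the format the callee agreed to accept.
function passString(s) {
  const bytes = new TextEncoder().encode(s);
  const ptr = malloc(bytes.length);
  new Uint8Array(memory.buffer, ptr, bytes.length).set(bytes);
  return [ptr, bytes.length];
}

// Callee side (plain JS here for demonstration): decode the agreed format.
function readString(ptr, len) {
  return new TextDecoder().decode(new Uint8Array(memory.buffer, ptr, len));
}

const [p, n] = passString("héllo"); // "é" takes two UTF-8 bytes
console.log(n);                     // 6
console.log(readString(p, n));      // héllo
```

A callee that wanted zero-terminated bytes instead would just need `passString` to append a 0 byte; the point is that the convention lives in the pair of functions, not in a standard.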
IMHO the WASM Component Model is solving a problem that just isn't all that relevant in practice. System ABIs / calling conventions don't need a 'component model', and neither should WASM.
> The browser is already complex enough as it is and should be radically stripped down
I’d love this too, but I think this ship has sailed. I think the web’s cardinal sin is trying to be a document platform and an application platform at the same time. If I could wave a magic wand, I’d split those two use cases back out. Documents shouldn’t need JavaScript. They definitely don’t need wasm. And applications probably shouldn’t have the URL bar and back and forward buttons; navigation should be up to the developers themselves. If apps were invented today, they would probably be done in pure wasm.
> Web APIs are designed from the ground up for Javascript
Web APIs are already almost all bridged into Rust via web-sys. The APIs are more awkward than we’d like, but they all work today.
WASM DWARF debugging works perfectly fine though?
The 'debugger half' just moved from Chrome into a VSCode debug adapter extension where debugging is much more comfortable than in the browser:
https://marketplace.visualstudio.com/items?itemName=ms-vscod...
I use that all the time when working on web-platform specific code, with this extension WASM debugging in VSCode feels just like native debugging (it actually feels snappier on macOS than debugging a native macOS exe via LLDB).
TL;DR: It works just fine. (Early versions of the DWARF extension had problems catching 'early breakpoints', but that was fixed towards the end of 2024.)
You can integrate external debuggers, like Uno documents here:
https://platform.uno/docs/articles/debugging-wasm.html
I assume that uses some browser extension, but I didn't look into the details.
You can also use an extension to provide additional debugging capability in the browser.
One of the common (mis-)understandings about WASM when it was released was that people could write web applications "in any language" that could output WASM (LLVM-based things, for example).
That was clear over-selling of WASM, as in reality people still needed to additionally learn JS/TS to make things work.
So for the many backend devs who completely abhor JS/TS (there are many), trying out WASM and then finding it was bullshit has not been positive.
If WASM is made a first class browser citizen, and the requirement for JS/TS truly goes away, then I'd expect a lot of positive web application development to happen by those experienced devs who abhor JS/TS.
That being said, that viewpoint is from prior to AI becoming reasonably capable. That may change the balance of things somewhat (tbd).
It adds an incredible amount of complexity and bloat versus writing a proper hybrid C++/JS application where non-trivial work happens in handwritten JS functions instead of hopping across the JS/WASM boundary for every little setter/getter. It takes experience though to find just the right balance of which code should go on either side of the boundary.
Alternatively tunnel through a properly designed C API instead of trying to map C++ or Rust types directly to JS (e.g. don't attempt to pass complex C++/Rust objects across the boundary, there's simply too little overlap between the C++/Rust and JS type systems).
The automatic bindings approach makes much more sense for a C API than for a native API of a language with a huge 'semantic surface' like C++ or Rust.
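As a sketch of that "tunnel through a flat C API" idea: the glue only ever sees integers, floats, and handles, never rich objects. The export names below are hypothetical, and plain JS functions stand in for real wasm exports so the snippet is self-contained:

```javascript
// JS stand-ins for the flat C API exports of a hypothetical wasm module.
// Only scalars cross the boundary: integer handles, floats, void returns.
let nextHandle = 0;
const wasmExports = {
  thing_create: () => ++nextHandle,           // () -> i32 handle
  thing_set_scale: (h, s) => { /* ... */ },   // (i32, f32) -> void
  thing_destroy: (h) => { /* ... */ },        // (i32) -> void
};

// A thin JS wrapper tunnels through the handle-based C API instead of
// trying to marshal a whole C++/Rust object across the boundary.
class Thing {
  constructor() { this.handle = wasmExports.thing_create(); }
  setScale(s) { wasmExports.thing_set_scale(this.handle, s); }
  destroy() { wasmExports.thing_destroy(this.handle); }
}

const t = new Thing();
t.setScale(2.0);
console.log(t.handle); // 1
t.destroy();
```

All the "semantic surface" of the source language (ownership, destructors, generics) stays on its own side; the boundary itself stays trivial.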
Async mutexes in Rust have so many footguns that I've started to consider them a code smell. See for example the one the Oxide project ran into [1]. IME there are relatively few cases where it makes sense to want to await a mutex asynchronously, and approximately none where it makes sense to hold a mutex over a yield point, which is why a lot of people turn to async mutexes despite advice to the contrary [2]. They are essentially incompatible with structured concurrency, but Rust async in general really wants to be structured in order to be able to play nicely with the borrow checker.
`shadow-rs` [3] bears mentioning as a prebuilt way to do some of the build info collection mentioned later in the post.
[1]: https://rfd.shared.oxide.computer/rfd/0609 [2]: https://docs.rs/tokio/latest/tokio/sync/struct.Mutex.html#wh... [3]: https://docs.rs/shadow-rs/latest/shadow_rs/
Yes! Mutexes are much nicer in Rust than a lot of languages, but they're still much too low-level for most use-cases. Ironically Lindsey Kuper was an early contributor to the Rust project and IIRC at roughly the same time started talking about LVars [1]. But we still ended up with mutexes as the primary concurrency mechanism in Rust.
I'm not sure what you mean by ‘more threads than I had cores’, though. Unless you tell it otherwise, Tokio will default to one thread per core on the machine.
Meanwhile the async _code_ itself is just a new(ish), lower-level way of writing code that lets you peek under an abstraction. Traditional ‘blocking’ I/O tries to pretend that I/O is an active, sequential process like a normal function call, and then the OS is responsible for providing that abstraction by in fact pausing your whole process until the async event you're waiting on occurs. That's a pretty nice high-level abstraction in a lot of cases, but sometimes you want to take advantage of those extra cycles. Async code is a bit more powerful and ‘closer to the metal’ in that it exposes to your code which operations are going to result in your code being suspended, and so gives you an opportunity to do something else while you wait.
Of course if you're not spending a lot of time doing I/O then the performance improvements probably aren't worth dropping the nice high-level abstraction — if you're barely doing I/O then it doesn't matter if it's not ‘really’ a function call! But even so async functions can provide a nice way of writing things that are kind of like function calls but might not return immediately. For example, request-response–style communication with other threads.
Very handy to update a serde data structure and see all the TypeScript errors after recompiling.
It was the first time I was writing this sort of thing, but I found the spec very clear and well-written.
Fun fact: I was surprised when the test from a toy parser surfaced a real regression in version 3 of the spec[1], released roughly 4 months before.
WASM takes care of all the GC bits, but in turn you have to use its ref, struct, and array types for everything. Making vtables is straightforward. Fat pointers for interfaces require an object though, since you can't do your own pointer layout.
In web contexts GC should be great because browser GCs can work across DOM, JS, and WASM, so you can hold a reference to a node that you get via JS, and if you remove the node from the DOM and discard the reference, it'll be collected just like in JS.
The big downside is that data transfer over the Wasm boundary requires a lot more copying with GC. Byte arrays like (array i8) are completely opaque to the outside, so you need to vend individual byte access functions to read data. On the WASI side, it only lowers to linear memory, so you need to allocate some scratch space, then copy into GC structs and arrays. Strings are doubly awful, because on top of that you also have to deal with encoding.
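Concretely, "vending individual byte access functions" means glue like the following, assuming the module exports element-wise accessors for its opaque (array i8). The `arr_len`/`arr_get` names are hypothetical, and JS stand-ins simulate them here:

```javascript
// Stand-ins for hypothetical exports of a GC-enabled wasm module whose
// (array i8) is opaque to JS -- only element-wise accessors are exported.
const hidden = [10, 20, 30]; // pretend this lives as a wasm (array i8)
const wasmExports = {
  arr_len: (ref) => hidden.length,
  arr_get: (ref, i) => hidden[i],
};

// The only way to get the data out is an element-by-element copy through
// the accessors: one boundary call per byte.
function copyOut(ref) {
  const len = wasmExports.arr_len(ref);
  const out = new Uint8Array(len);
  for (let i = 0; i < len; i++) out[i] = wasmExports.arr_get(ref, i);
  return out;
}

const out = copyOut(null);
console.log(out[0], out[1], out[2]); // 10 20 30
```

Compare that with a linear-memory module, where the same data is one `Uint8Array` view plus at most one bulk copy.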
GC also doesn't support multi-threading yet.
Some of this is supposed to be fixed by things like https://github.com/WebAssembly/design/issues/1569 and WASI lowering to GC types. stringref would have been great, but that appears to be dead for now.
So I'm on board and optimistic these things will get fixed, but GC is still a bit of a second-class citizen for now.
Outside the browser, the GCs available on JVM and CLR runtimes are much more advanced than WASM GC will ever be.
This was one of the things that has put me off, and ergonomics are still an open topic for the Rust 2026 roadmap.
https://github.com/modelcontextprotocol/modelcontextprotocol...
A tool like wasm2c (or my wasm2go) shows this: there is no huge runtime to carry, just a fairly direct translation of Wasm byte code to C (or Go).
wasm2c: https://github.com/WebAssembly/wabt/blob/main/wasm2c/README....
wasm2go: https://github.com/ncruces/wasm2go
Java Applets were amazing technology IMO. The Windows Java launcher ruined it (as I understand, that was the main security issue).
ps. Java 8 still works :P
I can't tell if this is a jab at WASM itself or whether it should be taken at face value lmao, WASM is the definition of overengineered boondoggle.
It's a great target for a simple language (unless you insist on someone else doing your GC for you, which mandates their design on your language).
And it's also fairly easy to build a Wasm interpreter, or an AOT compiler.
It's also largely useless outside of targeting c/c++ and derivatives. Most code we write cannot target wasm without severe drawbacks.
Not to mention the issue that bundling a GC implementation as part of your web page can be prohibitive in terms of download size.
But.... they would certainly be much more useful architectures and devices if they chose to cater more to actual needs rather than performance under C/C++
The first idea was computation heavy algorithms to be written in a language like C/C++/Rust and to be compiled to wasm.
Now it gets marketed as something to write sandboxed code/components for every language, to be consumed by a wasm runtime.
Then there is the problem of wasm's types. While it was conceived as something to run on the web/browser, its types are much more similar to Rust's. For example, strings in JS are fundamentally UTF-16, while wasm/Rust use UTF-8.
We need to constantly convert between them. I always hoped that wasm would simply allow for faster code on the web, not "here is my program, completely sandboxed from the outside world; you can't interact with other programs on the same machine".
I'd like a toolchain better targeted for the pure acceleration use case though. Emscripten adds a lot of bloat and edges just to serve out of the box posix compatibility. Which is nice for quick demos of "look I can run Doom in the browser"-kind. But less useful for advanced web app usage, where you anyways will want to keep control of such behavior and interact with the browser apis more directly.
It started even much earlier. At first Emscripten compiled to a plain JavaScript subset; after this demonstrated 'usefulness', that JS subset was properly specified as 'asm.js', which browsers could specifically target and optimize for. The next evolutionary step was WASM (which didn't immediately bring any performance improvements over asm.js, but allowed further improvements without having to 'compromise' JavaScript with features that are only useful for a compilation target).
Mozilla, not wanting to jump into Google's boat, came up with asm.js.
This naturally ignoring all the other plugins.
Plus, having to watch talks selling the idea as if no one had ever done it before.
This even ignoring operating systems where the whole userspace was bytecode based.
WASM is "just" another virtual ISA, everything else is just marketing. If you manage expectations (in the sense of "it's just another ISA / bytecode VM") then WASM can be incredibly useful, e.g. all my C/C++ projects are running automatically in browsers thanks to WASM (for instance these home computer emulators: https://floooh.github.io/tiny8bit/).
Things that are a niche will often sooner or later just die - and nobody will even notice. I don't understand the wasm committee. Why design something that is bound to fail because barely anyone uses it?
There are a lot of more niche web standards with a lot less usage that stuck around for a long time (e.g. the recent debate around the removal of XSLT).
(Including a game with online multiplayer! Though only the client, I did the server in TS ;)
Now a disclaimer, I experienced most of these languages for the first time during the jams, which biases me towards confusion and suffering. That being said, there was still a big gradient, and it does give us useful data on "how easy is it to get started with the basics and achieve basic tasks" (my games were like, 1970s level complexity).
Spent the last day of the C++ one dealing with a weird Emscripten bug. 12 hours left in the jam and suddenly the whole thing refused to compile, but only with Emscripten.
Spent most of the Zig jam trying to find up to date libraries and documentation. (Stuff changes fast apparently. This was 2024 though so maybe different now).
The Rust one was the most painful because I kept running into "I know what I want to do, and it's correct, and it works in every single other programming language, but rust is an authoritarian nanny state and won't let me."
(That's not WASM specific, I had similar problems when making native games in Rust. But WASM does make that aspect of the Rust experience worse, unless, I'm told, you go for one of the well supported WASM games libraries, in which case it should be relatively smooth.)
I found this too bad because overall Rust should have been the winner, in terms of cool language features. It's just a bad match for "yes I know what I'm doing, and this over-cautious compiler check genuinely doesn't apply to me, and there's no way to configure it so it actually lets you do your job". Also probably not ideal for game jams where dev speed and flexibility matter more than correctness.
(To beat a dead horse, my favorite part about the Rust jam was going into discord for help, getting even more absurd workarounds than GPT gave me, and then being called a bad person for not prioritizing memory safety on a game jam xD)
Zig was similarly suboptimal for game jams (though I imagine it would be fine for longer projects). Where I had 6 hours left and it was forcing me to think about what kind of division operator I wanted to use xD
Odin, I found surprisingly pleasant, even in the context of a very short jam. Very pleasant syntax. Nicest I've seen by far. (Well, I hear Jai is nice too, being Odin inspired, but no invite from Jon yet ;)
Odin's also very nice for game dev, being batteries included. For wasm there wasn't native support for game libraries at the time, but I found a GitHub repo that let you do it with a C wrapper.
Overall a fun and educational experience (and I definitely want to give Rust another go, in a less time sensitive context — it's the one language you can't just hope to wing it ;).
I did, I must admit, switch back to JS/TS and just use that for the last few games I made, because the truth is you're going to be writing it anyway for web games (unless you're using an engine or heavyweight library / toolchain), and the interop gave me more headaches than the nicer languages solved.
Depending on your scale or timeline, (or deep seated feelings about JavaScript!) WASM may be worth it for you though :)
I'm curious why you didn't use `unsafe`?
In general people are really bad at knowing when the strict safety rules are actually being too strict, but if you're confident they are then using `unsafe` seems like a valid path to explore.
Initially I fell for the hype too. Now it seems the number of people using wasm, is mega-low. It's like the tiniest ever fraction of javascript-knowing folks. And this will probably never change either.
I guess it is time to conclude:
- wasm will remain a small niche, at least for the near future, and probably forever.
If you want it to pay off:

- use Wasm for CPU-bound modules
- batch calls and pass pointers into linear memory to minimize boundary overhead
- use a tiny allocator like wee_alloc
- build with `cargo build --release --target wasm32-unknown-unknown`, then run `wasm-opt -Oz`
- run server or plugin workloads on wasmtime or wasmer with wasm32-wasi, instead of shoehorning it into DOM-heavy front ends
And if using a GC language, JS/TS isn't really that bad.
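The "batch calls and pass pointers into linear memory" advice above boils down to replacing N boundary crossings with one. A sketch, with a JS function standing in for a hypothetical `sum_bytes(ptr, len)` wasm export:

```javascript
const memory = new WebAssembly.Memory({ initial: 1 });

// Stand-in for a hypothetical wasm export that processes a whole
// buffer in a single call.
const sum_bytes = (ptr, len) => {
  const view = new Uint8Array(memory.buffer, ptr, len);
  let s = 0;
  for (let i = 0; i < len; i++) s += view[i];
  return s;
};

// Instead of one boundary call per element, write everything into
// linear memory up front and make a single call.
const data = Uint8Array.from([1, 2, 3, 4, 5]);
new Uint8Array(memory.buffer, 0, data.length).set(data);
const total = sum_bytes(0, data.length);
console.log(total); // 15
```

The per-call overhead is fixed, so amortizing it over a whole buffer rather than per element is usually where Wasm starts to beat equivalent JS.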
I’m still waiting for being able to access the DOM from WebAssembly, so it’s possible to do something useful with it without JavaScript glue code.
To a very casual observer (me) it seems like it ought to be simple, but I expect there are good reasons why it isn't.
The JS shim is still there, but it's hidden away from the programmer.
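For anyone who hasn't seen one, the shim in question is just a table of JS functions passed to the module as imports. A minimal hand-written sketch, with a hypothetical `set_text` import name and a stub object standing in for a DOM node:

```javascript
const memory = new WebAssembly.Memory({ initial: 1 });
const el = { textContent: "" }; // stub standing in for a real DOM node

// The import object IS the shim: wasm calls these functions, and they
// translate (ptr, len) pairs in linear memory into real Web API calls.
const importObject = {
  env: {
    set_text: (ptr, len) => {
      const bytes = new Uint8Array(memory.buffer, ptr, len);
      el.textContent = new TextDecoder().decode(bytes);
    },
  },
};

// Simulate what the wasm side would do: write UTF-8 bytes into linear
// memory, then call the imported function with a pointer/length pair.
const msg = new TextEncoder().encode("hi");
new Uint8Array(memory.buffer, 0, msg.length).set(msg);
importObject.env.set_text(0, msg.length);
console.log(el.textContent); // hi
```

In a real setup, `importObject` would be handed to `WebAssembly.instantiate`; proposals like the one above aim to make the browser generate this layer itself.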
For a more direct approach which entirely avoids going through a JS shim:
Mozilla is starting to experiment with integrating the WASM Component Model into the browser. Personally I'm not a fan of this because apart from string conversion the JS shim is not the performance bottleneck that people think it is, but at the least it will finally shut up all the 'direct DOM access whining' that shows up in each and every HN thread about WASM from people who never actually used WASM ;)
https://hacks.mozilla.org/2026/02/making-webassembly-a-first...
Fair if you mean me - I've at most toyed with it, but the shim stuff was off-putting.
I'm a backender though, I don't pretend my opinion is of any consequence here.
> Personally I'm not a fan of this
You make it sound like the shim layer is actively desirable - why is that?
More flexibility. For instance, even though the 'official' WebGPU shim has the upside that it is compatible with the webgpu.h C API header, it buys that compatibility with some pretty serious performance compromises which can be avoided when using a non-standard JS shim (for instance reading/writing data from/to mapped WebGPU buffers which live in their own ArrayBuffer object; I don't think the WASM Component Model has a solution for such scenarios). The WASM Component Model solution basically has to deal with the exact same problems, but the WebGPU C API will essentially be baked into the browser. I expect though that it will still be possible to write your own specialized JS shim, so it's not too big of an issue. I would prefer if the WASM peeps first focused on solving other problems which provide more bang for the buck, though.
A benchmark in the article you linked shows a 2× slowdown.
Once you build a web app via the DOM any sort of performance doesn't matter anyway because the DOM is slow by design, manipulating the DOM from WASM won't magically make an inherently slow system fast.
I hate JavaScript and don’t want to use it at all. I want WebAssembly to allow me to write “traditional” webpages using a different scripting language.
...it's just another tool in the programmers toolbox? Right tool for the job etc... Also for anything non-trivial just use Typescript which is actually a decent language.
What are you basing this on? Got a source?
“Javascript-knowing folks” aren’t likely to have much overlap with people who have a need for WASM. It’s a bit confusing because of the history of WASM, but the two are pretty separate at this point.
We use WASM to deploy machine learning models in the browser, for example. That’s not something we would have ever considered doing with Javascript.