I'd follow up on that: the earlier you can get this test suite running, the better for the iteration speed and correctness of your project.
It took a bit of time to make everything work, but once we did, we very quickly got to the point of being able to run just about anything. The test suite is certainly incomplete, but it gets you 95% of the way there: https://github.com/WebAssembly/testsuite
I produced https://github.com/peterseymour/winter on the back of it and learnt WASM is not as simple as it should be.
How do you debug an interpreter when you aren't coding for it directly? How far does fuzzing strings of opcodes get you?
How much practical difference is there between a server side WASM engine and a browser-based one? How much work would be involved converting one to the other?
For anyone who wants to check where the meat is: it's mostly in this file: https://github.com/irrio/semblance/blob/main/src/wrun.c
Thinking out loud: I think it would have been a great idea to conform to the Wasm-C-API (https://github.com/WebAssembly/wasm-c-api) as a standard interface for the project. Most Wasm runtimes (Wasmer, V8, wasmi, etc.) have adopted it, the API is already in C, and it would make the project easier to try for developers already familiar with that API.
Note for the author: if you feel familiar enough with Wasm and you would like to contribute to Wasmer, we would also welcome any patches or improvements. Keep up the good work!
> understandable concerns about the fact we, Wasmer, a VC-backed corporation, attempted to trademark the name of a non-profit organization, specifically WebAssembly
Acknowledgement of wrongdoing.
https://wasmer.io/posts/wasmer-and-trademarks-extended
I don't think this is as much of a smoking gun as it is made out to be.
> Installed-Size: 266 MB
What the hell
Most of the size comes from the LLVM backend, which is a bit heavy. Wasmer ships many backends by default; if you were to use Wasmer headless, it would be just a bit under 1 MB.
If you want, you can always customize the build with only the backends that you are interested in using.
Note: I've seen some builds of LLVM under 5-10 MB, but those require heavy customization. It's clear that we still have some work to do to reduce the size of the general build!
They are now in the size realm of Lisp and Smalltalk. Forth may lean towards the lighter side.
tcc is 100KB
https://github.com/bytecodealliance/wasm-micro-runtime
This says 4000 lines
https://github.com/explodingcamera/tinywasm
What are we talking about here? There is obviously no reason a WASM JIT has to be 266 MB.
Wasmer supports most of the Wasm-C-API, with some exceptions for APIs that are not commonly used: finalize, hostref, and threads (tables just had some implementation quirks that we had to polish, but they are generally implemented [1]).
https://github.com/wasmerio/wasmer/blob/main/lib/c-api/tests...
If you are interested in running any of these cases using Wasmer via the Wasm-C-API please let us know... it should be mostly trivial to add support!
[1] https://github.com/wasmerio/wasmer/blob/main/lib/c-api/src/w...
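For anyone who hasn't tried the API yet, an embedding looks roughly like the sketch below. To be clear, this is a generic, hedged example rather than Wasmer-specific code: the `module.wasm` path and the assumption that export 0 is a zero-argument, i32-returning function are invented for illustration, and the exact vec-vs-array signatures in `wasm.h` have shifted a little between API revisions and runtimes.

```c
/* Hedged Wasm-C-API sketch: load a module, instantiate it with no imports,
   and call its first export as a zero-argument function. Error handling is
   mostly omitted; "module.wasm" and the export layout are assumptions. */
#include <stdio.h>
#include "wasm.h"

int main(void) {
    /* Read the module bytes from disk. */
    FILE *file = fopen("module.wasm", "rb");
    fseek(file, 0, SEEK_END);
    long size = ftell(file);
    fseek(file, 0, SEEK_SET);
    wasm_byte_vec_t binary;
    wasm_byte_vec_new_uninitialized(&binary, (size_t)size);
    fread(binary.data, 1, (size_t)size, file);
    fclose(file);

    /* Engine -> store -> module -> instance, the standard pipeline. */
    wasm_engine_t *engine = wasm_engine_new();
    wasm_store_t *store = wasm_store_new(engine);
    wasm_module_t *module = wasm_module_new(store, &binary);
    wasm_byte_vec_delete(&binary);

    wasm_extern_vec_t imports = WASM_EMPTY_VEC;
    wasm_instance_t *instance = wasm_instance_new(store, module, &imports, NULL);

    /* Grab export 0 and call it as a () -> i32 function (assumed layout). */
    wasm_extern_vec_t exports;
    wasm_instance_exports(instance, &exports);
    wasm_func_t *func = wasm_extern_as_func(exports.data[0]);

    wasm_val_t results_val[1] = { WASM_INIT_VAL };
    wasm_val_vec_t args = WASM_EMPTY_VEC;
    wasm_val_vec_t results = WASM_ARRAY_VEC(results_val);
    if (wasm_func_call(func, &args, &results) == NULL)
        printf("result: %d\n", results_val[0].of.i32);

    wasm_extern_vec_delete(&exports);
    wasm_instance_delete(instance);
    wasm_module_delete(module);
    wasm_store_delete(store);
    wasm_engine_delete(engine);
    return 0;
}
```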
The WASM spec people rejected it for being too “high-level”. But the C committee also rejected proposals from Dennis Ritchie. My money is still on Ritchie. Rob Pike's money seems to be on Ritchie's direction as well. Otherwise, why create Golang?
Tail-calls are only high-level if calls are high-level.
--edit--
Oh, and I was also going to suggest using a library like libffi to make calls into C so you can do multiple arguments and whatnot.
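Roughly along these lines; a hedged libffi sketch where `host_add` is just a made-up stand-in for whatever native function the interpreter wants to dispatch to with arguments marshalled at runtime:

```c
/* Minimal libffi sketch: describe a call signature at runtime, then call a
   native function through libffi's generic trampoline. host_add is a
   hypothetical stand-in for a real host/C function. */
#include <ffi.h>
#include <stdio.h>

static int host_add(int a, int b) { return a + b; }

int main(void) {
    ffi_cif cif;
    ffi_type *arg_types[2] = { &ffi_type_sint, &ffi_type_sint };

    /* Describe the call: default ABI, two int args, int return. */
    if (ffi_prep_cif(&cif, FFI_DEFAULT_ABI, 2, &ffi_type_sint, arg_types) != FFI_OK)
        return 1;

    int a = 2, b = 40;
    void *arg_values[2] = { &a, &b };
    ffi_arg result;  /* libffi widens small integer returns to ffi_arg */

    ffi_call(&cif, FFI_FN(host_add), &result, arg_values);
    printf("host_add returned %d\n", (int)result);
    return 0;
}
```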
How do plugin developers debug their code? Is there a way for them to do breakpoint debugging, for example? What happens if their code crashes? Do they get a stack trace?
If anyone is looking for a slightly more accessible way to learn WebAssembly, you might enjoy WebAssembly from the Ground Up: https://wasmgroundup.com
(Disclaimer: I'm one of the authors)
Constraining this with "that's not an option" is a big waste of time; learning this will open up all of the literature written on the subject.
But I've always gotten confused with... it is secure because by default it can't do much.
I don't quite understand how to view WebAssembly. You write in one language, it compiles things like basic math (nothing with network or filesystem) to another and it runs in an interpreter.
I feel like I have a severe lack/misunderstanding. There's a ton of hype for years, lots of investment... but it isn't like any case where you want to add Lua to an app you can add WebAssembly/vice versa?
You can get output by reading the buffer at the end of execution or when receiving callbacks. So, for instance, you pass a few frames' worth of buffers to WASM, WASM renders pixels into the buffers and calls a callback, and the JavaScript reads data from the buffer (sending it to a <canvas> or similar).
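The same pattern from a native embedder's point of view, as a hedged sketch against the Wasm C API (the export layout and the `render_frame(offset, len)` function are hypothetical; in the browser you would read the exported memory's buffer from JavaScript after the call):

```c
/* Hedged sketch of the shared-buffer pattern from a native host. The
   assumption that export 0 is the guest's memory and export 1 is a
   render_frame(offset, len) function is invented for illustration. */
#include <stdint.h>
#include <stdio.h>
#include "wasm.h"

void read_one_frame(wasm_instance_t *instance) {
    wasm_extern_vec_t exports;
    wasm_instance_exports(instance, &exports);

    wasm_memory_t *memory = wasm_extern_as_memory(exports.data[0]);
    wasm_func_t *render_frame = wasm_extern_as_func(exports.data[1]);

    /* Tell the guest where, inside its own linear memory, to render. */
    wasm_val_t args_val[2] = { WASM_I32_VAL(0), WASM_I32_VAL(320 * 240 * 4) };
    wasm_val_vec_t args = WASM_ARRAY_VEC(args_val);
    wasm_val_vec_t results = WASM_EMPTY_VEC;
    wasm_func_call(render_frame, &args, &results);

    /* The guest wrote pixels into its linear memory; the host just reads
       those same bytes back out (to hand to a canvas, GUI, encoder, ...). */
    uint8_t *pixels = (uint8_t *)wasm_memory_data(memory);
    printf("first pixel: %u %u %u %u\n",
           pixels[0], pixels[1], pixels[2], pixels[3]);

    wasm_extern_vec_delete(&exports);
}
```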
The benefit of WASM is that it can't be very malicious by itself. It requires the runtime to provide it with exported functions and callbacks to do any file I/O, network I/O, or spawning new tasks. Lua and similar tools can go deep into the runtime they exist in, altering system state and messing with system memory if they want to, while WASM can only interact with the specific API surface you provide it.
That makes WASM less powerful but more predictable, and in my opinion better for building integrations with, as there is no risk of internal APIs being accessed (which you will be blamed for if they break in an update).
That's not correct: when you embed Lua you can choose which APIs are available; to make the full stdlib available you must explicitly call `luaL_openlibs` [1].
[1] https://www.lua.org/manual/5.3/manual.html#luaL_openlibs
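To make that concrete, here is a minimal sketch against the Lua 5.3 C API that opens only `base` and `string`, so `os`, `io`, etc. simply don't exist inside the sandbox unless the host decides to expose them:

```c
/* Minimal sketch: embed Lua with only a chosen subset of the stdlib.
   Instead of luaL_openlibs(L), open libraries one by one. */
#include <lua.h>
#include <lauxlib.h>
#include <lualib.h>
#include <stdio.h>

int main(void) {
    lua_State *L = luaL_newstate();

    /* Open only the base and string libraries. */
    luaL_requiref(L, "_G", luaopen_base, 1);
    luaL_requiref(L, LUA_STRLIBNAME, luaopen_string, 1);
    lua_pop(L, 2);  /* pop the two library tables left on the stack */

    /* Works: base and string are available. */
    luaL_dostring(L, "print(('hello'):upper())");

    /* Fails: os was never opened, so this raises a runtime error. */
    if (luaL_dostring(L, "os.execute('rm -rf /')") != LUA_OK)
        printf("blocked: %s\n", lua_tostring(L, -1));

    lua_close(L);
    return 0;
}
```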
Re: tty support in container2wasm and fixed 80x25 due to lack of SIGWINCH support in WASI Preview 1: https://github.com/ktock/container2wasm/issues/146
The File System Access API requires granting each app access to each folder.
jupyterlab-filesystem-access only works with Chromium based browsers, because FF doesn't support the File System Access API: https://github.com/jupyterlab-contrib/jupyterlab-filesystem-...
The File System Access API is useful for opening a local .ipynb and .csv with JupyterLite, which builds CPython for WASM as Pyodide.
There is a "Direct Sockets API in Chrome 131" but not in FF; so WebRTC and WebSocket relaying is unnecessary for WASM apps like WebVM: https://news.ycombinator.com/item?id=42029188
> wasi-io, wasi-clocks, wasi-random, wasi-filesystem, wasi-sockets, wasi-cli, wasi-http
But if the module wants to compute on the values in the buffer, at some level it would have to copy the data in/out.
We have a chapter called "What Makes WebAssembly Safe?" which covers the details. You can get a sneak peek here: https://bsky.app/profile/wasmgroundup.com/post/3lh2e4eiwnm2p
Yes. That’s a super accurate description. You’re not confused.
> I don't quite understand how to view WebAssembly. You write in one language, it compiles things like basic math (nothing with network or filesystem) to another and it runs in an interpreter.
Almost. Wasm is cheap to JIT compile and the resulting code is usually super efficient. Sometimes parity with native execution.
> I feel like I have a severe lack/misunderstanding. There's a ton of hype for years, lots of investment... but it isn't like any case where you want to add Lua to an app you can add WebAssembly/vice versa?
It’s definitely a case where the investment:utility ratio is high. ;-)
Here’s the trade off between embedding Lua and embedding Wasm:
- Both have the problem that they are only as secure as the API you expose to the guest program. If you expose `rm -rf /` to either Lua or Wasm, you’ll have a bad time. And it’s surprisingly difficult to convince yourself that you didn’t accidentally do that. Security is hard.
- Wasm is faster than Lua.
- Lua is a language for humans, no need for another language and compiler. That makes Lua a more natural choice for embedded scripting.
- Lua is object oriented, garbage collected, and has a very principled story for how that gets exposed to the host in a safe way. Wasm source languages are usually not GC’d. That means that if you want to expose object oriented API to the guest program, then it’ll feel more natural to do that with Lua.
- The wasm security model is dead simple and doesn’t (necessarily) rely on anything like GC, making it easier to convince yourself that the wasm implementation is free of security vulnerabilities. If you want a sandboxed execution environment then Wasm is better for that reason.
Not quite. WebAssembly isn't a source language; it's a compiler target. So you should be able to write in C, Rust, Fortran, or Lua and compile any of those to WebAssembly.
Except that WebAssembly is a cross-platform assembly language/machine code which is very similar to the native machine code of many/most contemporary CPUs. This means a WebAssembly interpreter can be very straightforward, and could often translate one WebAssembly instruction to one native CPU instruction. Or rather, it can compile a stream of WebAssembly instructions almost one-to-one to native CPU instructions, which it can then execute directly.
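To give a feel for how small the core of a plain interpreter can be, here's a hedged sketch of a stack-machine loop handling just three real opcodes (0x41 i32.const, 0x6A i32.add, 0x0B end); LEB128 immediate decoding, validation, stack-depth checks and everything else is left out:

```c
/* Hedged sketch of a WASM-style stack-machine interpreter loop. The opcode
   values are the real WebAssembly encodings; everything else is simplified
   (the i32.const immediate is read as one unsigned byte instead of a
   proper signed LEB128). */
#include <stdint.h>
#include <stddef.h>

int32_t run(const uint8_t *code, size_t len) {
    int32_t stack[64];
    int sp = 0;
    size_t pc = 0;
    while (pc < len) {
        switch (code[pc++]) {
        case 0x41:                       /* i32.const: push immediate      */
            stack[sp++] = (int32_t)code[pc++];
            break;
        case 0x6A: {                     /* i32.add: pop two, push the sum */
            int32_t b = stack[--sp], a = stack[--sp];
            stack[sp++] = a + b;
            break;
        }
        case 0x0B:                       /* end: return the value on top   */
            return stack[sp - 1];
        }
    }
    return 0;
}
```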
Not necessarily; on AMD64 you can do memory accesses in a single instruction relatively easily by using the CPU's paging machinery for safety checks plus some clever use of address space.
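Roughly, the trick is to reserve the guest's whole 32-bit index space up front with no access rights and commit only the pages that back the module's actual memory, so an out-of-bounds access faults instead of needing an explicit bounds check. A hedged Linux/POSIX sketch (real engines reserve extra guard space beyond 4 GiB so that constant offsets in load/store instructions are covered too, and turn the fault into a trap via a signal handler):

```c
/* Hedged sketch of the "reserve the whole 4 GiB index space" trick on a
   64-bit system: any 32-bit guest address stays inside the reservation,
   and touching an uncommitted PROT_NONE page faults. */
#include <sys/mman.h>
#include <stdint.h>
#include <stdio.h>

#define WASM_INDEX_SPACE (4ULL << 30)   /* 4 GiB of address space */

int main(void) {
    /* Reserve, but do not commit, the full 4 GiB region. */
    uint8_t *base = mmap(NULL, WASM_INDEX_SPACE, PROT_NONE,
                         MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
    if (base == MAP_FAILED) { perror("mmap"); return 1; }

    /* Commit only the pages backing the module's current memory size,
       e.g. one 64 KiB WASM page. */
    if (mprotect(base, 64 * 1024, PROT_READ | PROT_WRITE) != 0) {
        perror("mprotect"); return 1;
    }

    base[0] = 42;          /* in-bounds: a plain one-instruction access    */
    /* base[5 << 20] = 1;     out-of-bounds: would fault in PROT_NONE      */

    munmap(base, WASM_INDEX_SPACE);
    return 0;
}
```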
> branches could mostly be direct _unless_ the runtime has any kind of metering (it should) to stop eternal loops
Even with metering the branches would be direct, you'd just insert the metering code at the start of each basic block (so that's two extra instructions at the start of each basic block). Or did you mean something else?
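Conceptually the instrumentation amounts to something like the sketch below, with plain C standing in for the emitted native code (the costs, fuel counter and trap handler are all made up): each basic block charges a fuel counter up front and traps when it runs dry, while the branches themselves stay direct.

```c
/* Hedged, conceptual sketch of metering instrumentation; not any
   particular runtime's codegen. */
#include <stdint.h>
#include <stdlib.h>

static int64_t fuel = 1000000;

static void trap_out_of_fuel(void) { abort(); }

/* What the start of every compiled basic block conceptually does:
   roughly a subtract plus a conditional branch in native code. */
#define CHARGE(cost)                         \
    do {                                     \
        fuel -= (cost);                      \
        if (fuel < 0) trap_out_of_fuel();    \
    } while (0)

int32_t looping_guest_function(int32_t n) {
    int32_t acc = 0;
    CHARGE(2);                     /* entry block      */
    for (int32_t i = 0; i < n; i++) {
        CHARGE(3);                 /* loop body block  */
        acc += i;
    }
    CHARGE(1);                     /* exit block       */
    return acc;
}
```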
Reserving 4 GB of address space ought to work on any 64-bit machine with a decent OS/paging system though? I was looking into it but couldn't use it in my case, since it needs to cooperate with another VM that already hooks the PF handler (although maybe I should take another stab if there is a way to put in a hierarchy).
But the host machine still can, so it's not as big of an advantage in that regard. If you could somehow deliver a payload of native code and jump to it, it'd work just fine. But the security you get comes from the fact that it's really hard to do that, because there are no WASM instructions that jump to arbitrary memory locations (even if all the host ISAs do have those). Having a VM alone doesn't provide security against attacks.
It's often the case that VMs are used with memory-safe languages, and it's those languages' runtime bounds checks and other features that give them safety, more so than the VM itself. In fact, most bytecode languages provide a JIT (including some WASM deployments), so you're actually running native code regardless.
If you haven't looked at it in a while — we just published a draft of the final technical chapter, and are planning an official launch on March 4. So, might be a good time to dig back in :-)
You're more generous than me; I think it's rubbish.
Would have been easier to read if they had written it more like an ISA manual.
Granted, not many people have, but there’s a reason why it makes sense for it to be written in that style: they want it to be very clear that the verification (typechecking, really) algorithm doesn’t have any holes, and for that it’s reasonable to speak the language of the people who prove that type of thing for a living.
The WASM spec is also the ultimate authoritative reference for both programmers and implementers. That’s different from the goals of an ISA manual, which usually only targets programmers and just says “don’t do that” for certain dark corners of the (sole) implementation. (The RISC-V manual is atypical in this respect; still, I challenge you to describe e.g. which PC value the handler will see if the user code traps on a base RV32IMA system.)
Is there a lot of crossover between those people and people who work with assemblers or code generation? There's even more crossover between those people and people who know how to read a minimal ISA document.