But then you've got to figure out and prevent all the security holes that can be introduced by adding file access, networking, etc. That's what killed the Java write-once, run-anywhere promise. Maybe put the whole thing into a container? Oops, looks like the container wasn't replaced after all (though perhaps it could be simplified).
And the nice thing about that is you can pick which environment a wasm bundle runs in. Want to run it on the browser? Sure! Want to give it r/w access to some particular path? Fine! Or you want to run it "natively", with full access to the host operating system? That can work too!
We just ("just") need wasi, and a good set of implementations which support all the different kinds of sandboxing that people want, and allow wasm containers to talk to each other in all the desirable ways. Make no mistake - this is a serious amount of work. But it looks like a very solvable problem.
I think the bigger adoption problem will be all the performance you leave on the table by using wasm instead of native code. For all its flaws, docker on linux runs native binaries at native speed. I suspect big companies running big cloud deployments will stick with docker because it runs code faster.
The problem is that in WASM-land we're heading towards WASI and WIT-defined components, which is similar to the .NET, COM & IDL ecosystems. While this is actually really cool in terms of component and interface discovery, the downside is that you have to re-invent the world to work with this flavor of runtime.
Meaning... no, I can't really just output WASM from Go or Rust and it'll work, there's more to it, much more to it.
With a RISC-V userland emulator I could compile that to WASM to run normal binaries in the browser, and provide a sandboxed syscall interface (or even just pass-through the syscalls to the host, like qemu-user does when running natively). Meaning I have high compatibility with most of the Linux userland within a few weeks of development effort.
But yes, threads, forking, sockets, lots of edge cases - it's difficult to provide a minimal spoof of a Linux userland that's convincing enough that you can do interesting enough things, but surprisingly it's not too difficult - and with that you get Go, Rust, Zig, C++, C, D etc. and all the native tooling that you'd expect (e.g. it's quite easy to write a gdbserver compatible interface, but ... you usually don't need it, as you can just run & debug locally then cross-compile).
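The heart of such an emulator is the syscall dispatch at the guest's `ecall` trap: emulate, deny, or pass through each call. A minimal sketch in Python (the syscall numbers are real riscv64 Linux ones; the `handle_ecall` shape is hypothetical, not from any particular emulator):

```python
# Host side of a guest `ecall` trap: emulate, deny, or pass through.
# Syscall numbers follow the riscv64 Linux ABI.
SYS_WRITE, SYS_EXIT = 64, 93

def handle_ecall(nr, args, memory):
    """Return the syscall result the guest sees in register a0."""
    if nr == SYS_WRITE:
        fd, addr, length = args[0], args[1], args[2]
        if fd in (1, 2):                       # allow stdout/stderr only
            print(bytes(memory[addr:addr + length]).decode(), end="")
            return length
        return -9                              # -EBADF: deny other fds
    if nr == SYS_EXIT:
        raise SystemExit(args[0])
    return -38                                 # -ENOSYS: unimplemented

mem = bytearray(b"hello\n")
result = handle_ecall(SYS_WRITE, [1, 0, 6], mem)
```

A real pass-through mode (qemu-user style) would forward the call to the host instead of returning -ENOSYS; the deny/allow split is where the sandboxing policy lives.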
At the application level, you're generally going to write to the standards + your embedding. Companies that write embeddings are encouraged/incentivized to write good abstractions that work with standards to reduce user friction.
For example, for making HTTP requests and responding to HTTP requests, there is WASI HTTP:
https://github.com/WebAssembly/wasi-http
It's written in a way that is robust enough to handle most use cases without much loss of efficiency. There are a few inefficiencies in the WIT contracts (that will go away soon, as async lands in p3), but it represents a near-ideal representation of an HTTP request and is easy for many vendors to build on/against.
As far as re-inventing the world goes, luckily this isn't quite true, thanks to projects like wasi-libc:
https://github.com/webassembly/wasi-libc
Networking is actually much more solved in WASI now than it was roughly a year ago -- threads is taking a little longer to cook (for good reasons), but async (without function coloring) is coming this year (likely in the next 3-4 months).
The sandboxing abilities of WASM are near unmatched, along with its startup time and execution speed compared to native.
There are a few niches where standardization of interfaces and discoverability will be extremely valuable for interoperability, and for reducing the development effort to bring up products that deeply integrate with many things. Currently each team has to re-invent the wheel for every end-user product they integrate with; the more ideal alternative is that each product provides its own implementations of the standard interfaces and plugs them in.
But, the reason I'm still on the fence is that I think there's more value in the UNIX style 'discrete commands' model, whether it's WASM or RISC-V I don't think anybody cares, but it's much more about self-describing interfaces with discoverability that can be glued together using whatever tools you have at your disposal.
I think we can at least say WebAssembly + WASI is distinct from DLL hell because at the very least your DLLs will run everywhere, and be intrinsically tied to a version and strict interface.
These are things we've just never had before, which is what makes it "different this time". Having cross-language runnable/introspectable binaries/object files with implicit descriptions of their interfaces that are this approachable is new. You can't ensure semantics are the same but it's a better place than we've been before.
> But, the reason I'm still on the fence is that I think there's more value in the UNIX style 'discrete commands' model, whether it's WASM or RISC-V I don't think anybody cares, but it's much more about self-describing interfaces with discoverability that can be glued together using whatever tools you have at your disposal.
A bit hard to understand here the difference you were intending between discrete commands and a self-describing interface, could you explain?
I'd also argue that WASM + Component Model/WASI as a (virtual) instruction set versus RISC-V are very different!
Really this is walking an already well-trodden path, multiple times over -- you can even see the patches where the grass no longer grows from how often it has been walked.
The sandboxing is the keystone holding up the entire wasm ecosystem; without it no one would be interested, the same way nobody would run javascript in browsers without a sandbox (we used to run unsandboxed code, it was called Flash, and we no longer do).
I am curious why you focus so much on "universal runtimes/compile targets always fail" rather than on their actual strengths, when at least in the case of Java applets they failed because their sandbox sucked (and their startup times).
Additionally, it is a kind of worthless sandbox, given that the way it is designed it doesn't protect against memory corruption, so it is still possible to devise attacks that trigger execution flows leading to internal memory corruption, possibly changing the behaviour of a WASM module.
> Nevertheless, other classes of bugs are not obviated by the semantics of WebAssembly. Although attackers cannot perform direct code injection attacks, it is possible to hijack the control flow of a module using code reuse attacks against indirect calls. However, conventional return-oriented programming (ROP) attacks using short sequences of instructions (“gadgets”) are not possible in WebAssembly, because control-flow integrity ensures that call targets are valid functions declared at load time. Likewise, race conditions, such as time of check to time of use (TOCTOU) vulnerabilities, are possible in WebAssembly, since no execution or scheduling guarantees are provided beyond in-order execution and post-MVP atomic memory primitives. Similarly, side channel attacks can occur, such as timing attacks against modules. In the future, additional protections may be provided by runtimes or the toolchain, such as code diversification or memory randomization (similar to address space layout randomization (ASLR)), or bounded pointers (“fat” pointers).
--> https://webassembly.org/docs/security/
Finally, WASM is only as secure as its implementations, whatever the bytecode promises only matters if the runtimes aren't exploitable themselves.
Could you expand on this? I think everyone would agree with the first two of these - sandboxing is the whole point of WASM, so it would be excellent at that. And startup latency matters a great deal to WASM programs, again not surprised that runtimes have optimised that.
But execution speed compared to native? Are you saying WASM programs execute faster than native? Or even at the same speed?
Separately, it also matters what you consider "native" -- it is possible to write programs in a more efficient language (ex. one without a runtime), apply reasonable optimizations, and with AOT/JIT be faster than what could be reasonably written idiomatically in the host language (e.g. some library that already exists to do X but just does it inefficiently).
This is at least one of the reasons we've been building thin kernel interfaces for Wasm. We've built two now, one for the Linux syscall interface (https://github.com/arjunr2/WALI) and one for Zephyr. A preliminary paper we wrote a year or so back is here (https://arxiv.org/abs/2312.03858), and we have a new one coming up in Eurosys 25.
One of the advantages of a thin kernel interface to something like Linux is really low overhead and low implementation burden for Wasm engines. This makes it easier to then build things like WASI just one level up, compiled against the kernel interface and delivered as a Wasm module. Thus a single WASI implementation can be reused across engines.
Such a low burden that both Google (gVisor) and Microsoft (WSL1) failed at it!
Again, no. The security policies we have in mind can be implemented above the WALI call layer and supplied as an interposition library as a Wasm module. So you can have custom policies that run on any engine, such as implementing the WASI security model as a library. As it is now, all of WASI has to be implemented within the Wasm engine because the engine is the only entity with authority to do so. That's problematic in that engines have N different incompatible, incomplete and buggy implementations of WASI, and those bugs can be memory safety violations that own the entire process.
Thin kernel interfaces separate the engine evolution problem from the system interface evolution problem and make the entire software stack more robust by providing isolation for higher-level interfaces.
This is why gVisor contains a reimplementation of parts of Linux.
[1] This makes the interface per-kernel, not per-kernel x per-engine. It's also not per-kernel x per-kernel; engines would not be required to emulate one kernel on another kernel.
Try writing a seccomp policy for filesystem access (that isn't just 100% yes/no). That's how hard this thing will also be to use.
Obviously, an expert would write the security policies and make them reusable as libraries. Incidentally, that is what WASI is--it's not only a new security model, but a new API that requires rewrites of applications to fit with the new capability design.
> Try writing a seccomp policy for filesystem access
Try implementing an entire new system API (like WASI) in every engine! You have that problem and a whole lot more.
For comparison, implementing WASI preview1 is 6000 lines of C code in libuvwasi -- and that's not even complete. Other engines have their own less complete, buggy versions of WASI p1. And WASI p2 completely upends all of that and needs to be redone all over again in every engine.
Obviously, WASI p1 and p2 should be implemented in an engine-independent way and linked in. Which is exactly the game plan of thin kernel interfaces. In that sense, at the very least thin kernel interfaces is a layering tool for the engine/system API split that enhances security and evolvability of both. Nothing requires the engine to expose the kernel interface, so if you want a WASI only engine then only expose WALI to WASI and call it a day.
More likely, the browser will implement some that make sense there, some browsers will implement more than others, Cloudflare workers will implement a different set, AWS Lambda will implement a different set or have some that don't work the same way... and now you need to write your WASM code to deal with these differing implementations.
Unless the API layer is essentially a Linux OS, or maybe POSIX(?), as it is for Docker -- which I doubt, as that's a completely different level of abstraction from WASM -- I don't have a lot of faith in this being a utopian ideal common API, given that as an industry we've so far failed at almost every opportunity to make those common APIs.
Things are going to change a little bit with the introduction of Preview3 (the flagship feature there is async without function coloring), but you can look at the core interfaces:
https://github.com/WebAssembly/WASI/tree/main/wasip2
This is what people are building on, in the upstream and in the Bytecode Alliance.
You're absolutely right about embeddings being varied, but the standard existing enforces the expectations around a core set of these, and then the carving out of embeddings to support different use cases is a welcome and intended consequence.
WASI started out closer to POSIX, but there is a spectacular opportunity to not repeat some mistakes of the past, so some of those opportunities are taken where they make sense/won't cause too much disruption to people building in support.
Correct me if I’m wrong, but that’s only possible if you separate runtime threads from OS threads, which sounds straightforward but introduces problems relating to stack-lifetimes in continuations so it introduces demands on the compiler and/or significant runtime memory overhead - which kinda defeats the point of trying to avoid blocking OS threads in the first place.
I’m not belittling the achievement there - I’m just saying (again, correct me if I’m wrong) there’s a use-case for function-colouring in high-thread, high-memory applications.
…but if WASI is simply adding more options without taking anything away then my point above is moot :)
Correct -- note that the async implementation does not address parallelism (i.e. threading) -- it's a language +/- runtime level distinction.
The overhead is already in the languages that choose to support it -- tokio in Rust, asyncio in Python, etc. Those that don't want to opt in can keep to synchronous functions + threads (once WASI threads are reimagined, working, and stable!)
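For a concrete picture of what "function coloring" means in those languages, here's the usual asyncio illustration (just a sketch, nothing WASI-specific):

```python
import asyncio

# A "colored" (async) function: calling it from sync code yields a
# coroutine object, not the value -- sync callers can't just call it.
async def fetch() -> str:
    await asyncio.sleep(0)     # stand-in for real async I/O
    return "data"

coro = fetch()                 # no value yet, just a coroutine object
assert asyncio.iscoroutine(coro)
coro.close()                   # never awaited, so clean it up

# Sync code must bridge into the event loop to get the value:
result = asyncio.run(fetch())
```

The WASI p3 design aims to let a guest export a plain synchronous-looking function while the host drives it asynchronously, so this split doesn't leak into every signature.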
You can actually solve this problem with both multiple stacks and a continuation based approach, with different tradeoffs.
> I’m not belittling the achievement there - I’m just saying (again, correct me if I’m wrong) there’s a use-case for function-colouring in high-thread, high-memory applications. > > …but if WASI is simply adding more options without taking anything away then my point above is moot :)
Didn't take it as such! The ability to avoid function coloring does not block the implementations of high-threads/high-memory applications, once an approach to threading is fully reconsidered. And adding more options while keeping existing workflows in place is definitely the goal (and probably the only reasonable path to non-trivial adoption...).
How to do it is quite involved, but there are really smart people thinking very hard about it and trying to find a cross-language optimal approach. For example, see the explainer for Async:
https://github.com/WebAssembly/component-model/blob/main/des...
There are many corners (and much follow up discussion), but it's shaping up to be a pretty good interface, and widely implementable for many languages (Rust and JS efforts are underway, more will come with time and effort!).
I suspect you're right - unless people are careful, it'll be a jungle out there. Just like javascript is at the moment.
That also assumes that everyone involved wants compatibility, and that's unlikely. Imagine a world where every WASM implementation is identical. If one implementation decides to change something to implement an improvement to differentiate themselves in the market, they'll likely win marketshare from the others.
Most companies implementing WASM will tend to want to a) control the spec in their own favour, and b) gain advantages over other implementations. And this is why we can't have nice things.
I think the browser problem is a marketshare/market power problem, and Wasm doesn't have that problem.
Also, I'd argue that compat tests for JS engines and browsers are an overall positive thing -- at least compared to the world where there is no attempt to standardize at all.
> That also assumes that everyone involved wants compatibility, and that's unlikely. Imagine a world where every WASM implementation is identical. If one implementation decides to change something to implement an improvement to differentiate themselves in the market, they'll likely win marketshare from the others.
This is a good thing though -- as long as it happens without breaking compatibility. Users are very sensitive to changes that introduce lock-in/break standards, and the value would have to be outsized for someone to forgo having other options.
> Most companies implementing WASM will tend to want to a) control the spec in their own favour, and b) gain advantages over other implementations. And this is why we can't have nice things.
I think you can see this playing out right now in the Wasm ecosystem, and it isn't working out like you might expect. There are great benefits in building standards because of friction reduction for users -- as long as there is a "standards first" approach, people overwhelmingly pick it if functionality is close enough.
Places that make sense to differentiate are differentiated, but those that do not start to get eaten by standards.
I think organizations that are aware of this problem and attempt to address it directly, like the Bytecode Alliance, are one of the only bulwarks against this.
No, it really isn’t.
For more than the last two decades, every browser bar IE has looked towards compatibility and only included differences as browser-specific extensions.
And even when Microsoft eventually caved and started the Edge project to create a compatible browser, they ended up admitting defeat and pivoted to Chromium themselves.
> And even when Microsoft eventually caved and started the Edge project to create a compatible browser, they ended up admitting defeat and pivoted to Chromium themselves.
This can be interpreted as a problem of marketshare not staying balanced. It may have shifted hands, but the imbalance is the problem -- if Chrome had to deal with making changes that would be incompatible with half the users that visit sites on Chrome, they'd be forced to think a lot more about it.
This doesn't mean they can't add value in the form of non-standardized extensions -- forbidding that isn't desirable, because it would stifle innovation. The point is that if users on browser Y hit a "this site only runs on browser X" message, they're just not going to visit that site, and developers are going to shy away from using that feature. In a world with lopsided marketshare, there's not much incentive for the company with the most marketshare to be interoperable.
And these days the problem is simply that the specifications are so complex and the failure modes so forgiving that it's almost impossible for two different implementations to output entirely the same results across every test suite.
Neither of these are market leader problems. The former is just Microsoft being their typical shitty selves. While the latter is a natural result of complex systems designed for broad use even by non-technical people.
Agree on the other points though, market share is clearly not the only problem!
Yeah - but it's barely a problem today compared to a few decades ago. I do a lot of work on the web, and it's pretty rare these days to find my websites breaking when I test them on a different web browser. That used to be the norm.
I think essentially any time you have multiple implementations of the same API you want a validation test suite. Otherwise, implementation inconsistencies will creep in. It's not a wasm thing. It's just a normal compatibility thing.
Commonmark is a good example of what doing this right looks like. The spec is accompanied by a test suite - which in their case is a giant JSON list containing input markdown text and the expected output HTML. It's really easy to check whether any given implementation is commonmark compliant by just rendering everything in the list to HTML and checking that the output matches.
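A compliance harness along those lines is only a few lines. A sketch (the inline `tests` entries and the toy `render` callable stand in for the spec's real JSON file and a real implementation):

```python
# Each CommonMark spec test pairs input markdown with expected HTML.
# In practice you'd load these entries from the spec's tests JSON file.
tests = [
    {"example": 1, "markdown": "# hi\n", "html": "<h1>hi</h1>\n"},
    {"example": 2, "markdown": "*hi*\n", "html": "<p><em>hi</em></p>\n"},
]

def check_compliance(tests, render):
    """Return the example numbers the implementation gets wrong."""
    return [t["example"] for t in tests
            if render(t["markdown"]) != t["html"]]

# A deliberately broken "implementation" that only handles one heading:
def render(md):
    return "<h1>hi</h1>\n" if md.startswith("#") else md

failures = check_compliance(tests, render)
print(failures)  # the emphasis test fails: [2]
```

Running this against the full spec file gives you a pass/fail score for any implementation, which is exactly what makes "commonmark compliant" a checkable claim rather than a vibe.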
> Most companies implementing WASM will tend to want to a) control the spec in their own favour, and b) gain advantages over other implementations. And this is why we can't have nice things.
Your cynicism seems miscalibrated. We have hundreds of examples of exactly this kind of successful cross-company collaboration in computing. For example, at the IETF you'll find working groups for specs like TCP, HTTP, BGP, Email, TLS and so on. The HTTP working group alone has hundreds of members, from hundreds of companies. WhatWG and the W3C do the same for browser APIs. Then there's hardware groups - who manage specs like USB, PCI / PCIe, Bluetooth, Wifi and so on. Or programming language standards groups.
Compatibility can always be better, but generally it's great. We can have nice things. We do have nice things. WASM itself is an example of that. I don't see any reason to see these sorts of collaborations stopping any time soon.
This is a serious misunderstanding of how containers work.
Containers make syscalls. The Linux kernel serves them. The Linux kernel has features that let one put userspace processes in namespaces where they don't see everything. There is no "routing". There is no "its own filesystem and network", just a namespace where only some of the host's filesystems and networks are visible. There is no second implementation of the syscalls in that scenario.
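You can see this directly on any Linux box: namespaces are just kernel objects a process belongs to, visible under /proc (Linux-only sketch):

```python
import os

# Every process belongs to a set of namespaces; a "container" is just a
# process whose namespace links differ from the host's. Same kernel,
# same syscall implementations -- no second kernel anywhere.
for ns in sorted(os.listdir("/proc/self/ns")):
    print(ns, "->", os.readlink(f"/proc/self/ns/{ns}"))
```

A containerized process prints different inode numbers here than the host shell does, and that difference in visibility is the whole trick.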
For WASM, someone has to implement the server side of "file I/O over WASI", "network I/O over WASI", and so on. And those APIs are likely going to look somewhat different from Linux syscalls, because the whole point of WASM is sandboxing.
Quite far from "pretty easy".
All of this sounds too good to be true. The JVM tried to use one abstraction to cover different processor ISAs, different operating systems, and a security boundary. The security boundary failed completely. As far as I understand, WASM is choosing a different approach here, good. The abstraction over operating systems was a partial failure: it succeeded well enough for many types of server applications, but it was never good enough for desktop applications and system software. The abstraction over the CPU was and is a big success, I'd say.
What exactly makes you think it is easier with WASM as a CPU abstraction to do all the rest again? Even when thinking about use-cases as diverse as in-browser apps and long-running servers.
A big downside of all these super powerful abstraction layers is reacting to upstream changes. What happens when Linux introduces a next-generation network API that has no counterpart in Windows or in the browser? What happens if the next language runtime wants to implement a low-latency GC? Azul first designed a custom CPU and later changed the Linux API for memory management to make that possible for their JVM.
All in all, the track record of attempts to build the one true solution for all our problems is quite bad. Some of these attempts found niches where they are a very good fit, like the JVM; others are a curiosity of history.
They both use Linux kernel features such as control groups and namespaces. When put together this is referred to as a container but the kernel has zero concept of “a container”.
Not easy and certainly not fast.
E.g. here's how I can read the host filesystem even though uname says weird things about the kernel the container is running in:
$ sudo podman run -it --runtime=/usr/bin/runsc_wrap -v /:/app debian:bookworm /bin/bash
root@7862d7c432b4:/# ls /app
bin home lib32 mnt run tmp vmlinuz.old
boot initrd.img lib64 opt sbin usr
dev initrd.img.old lost+found proc srv var
etc lib media root sys vmlinuz
root@7862d7c432b4:/# uname -a
Linux 7862d7c432b4 4.4.0 #1 SMP Sun Jan 10 15:06:54 PST 2016 x86_64 GNU/Linux
gVisor lets one have a strong sandbox without resorting to WASM. https://cloud.google.com/blog/products/serverless/cloud-run-...
Between this and WSL1, trying to reimplement all Linux syscalls might not lead to a good experience for running preexisting software.
Also, startup times are generally better, with the availability of general metering (fuel/epochs), for example. The features of Wasm versus a virtual machine are similar, but there are definitely unique benefits to Wasm.
The closer comparison is probably the JVM -- but with support for many more languages (the list is growing, with upstream support commonplace).
https://en.m.wikipedia.org/wiki/List_of_CLI_languages
https://en.m.wikipedia.org/wiki/IBM_i#TIMI
Really this is only new for those who weren't around; there are many other, even older, examples available.
wasmtime (the current reference runtime implementation) is much more embeddable than these other options were/are, and is trivially embeddable in many languages today, with good performance. On top of being an option, it is being used, and WebAssembly is spreading cross-language, farther than the alternatives ever reached.
These things may look the same, but just like ssh/scp and dropbox, they're not the same once you zoom/dig in to what's different this time.
What we have now is lots of hype, mostly from folks clueless about this history, venturing to sell their cool startup ideas based on WASM.
I don't think there's much of a hype cycle -- most of the air has been sucked out of the room by AI.
There aren't actually that many Wasm startups, but there are companies leveraging it to great success, and some of these cases are known. There is also the usefulness of Wasm as a target, and that is growing -- languages are choosing to build in the ability to generate wasm bytecode, just as they might support a new architecture. That's the most important part that other solutions seemingly never achieved.
The ecosystem is aiming for a least-changes-necessary approach -- integrating in a way that workflows and existing code does not have to change. This is a recipe for success.
I think it's a docker-shaped adoption curve -- most people may not think it is useful now, but it will silently and usefully be everywhere later. At some point, it will be trivial to ship a small WASM binary inside (or independent of) a container, and that will be much more desirable than building a container. The artifact will be smaller, more self-describing, work with language tooling (i.e. a world without Dockerfiles), etc.
IMO in games, developers would prefer something with a reasonable REPL like Lua or JavaScript (a game is already assumed to be heavy, so if the mods are not performance-critical, running V8 should not be a problem). For extensions in generic complex applications (things like VSCode, Blender, Excel, etc.), I would posit that the wasm sandbox could be a really good way to enable granular-permission secure extensions.
Assuming equivalent performance, which I understand might not be the case, is there merit to this idea? Or is there nothing new WASM provides?
That's the idea behind the Extism framework:
uBlock Origin uses WebAssembly in Firefox for better performance:
https://github.com/gorhill/uBlock/wiki/uBlock-Origin-works-b...
https://blog.cloudflare.com/announcing-wasi-on-workers/
I assume the merit for cloudflare is lower overhead cost per worker than if they had done something more like AWS lambda. Explained better than I can, here: https://developers.cloudflare.com/workers/reference/how-work...
It does currently present a lot of restrictions as compared to what you could do in a container. But it's good enough to run lots of real world stuff today.
It does not work with wasi.
I just use a simple driver:
https://github.com/syumai/workers
Wasi is so painful that I just write all my golang using stdio and have a shim per runtime: web browser, Cloudflare, server (with wazero).
The new go:wasmexport might be useful in Go 1.24, but I highly doubt it.
Yes, this is a key value of WebAssembly compared to other approaches, it is a relatively (compared to a container or a full blown VM) lightweight way to package and distribute functionality from other languages, with high performance and fast startup. The artifact is minimal (like a static/dynamic library, depending on how much you've included), and if your language has a way to run WASM, you have a way to tap into that specialized computation.
Contrasted with WASM where you can write in any language and bring the ecosystem with you, since it all compiles down.
Bolstering your point -- smart-and-hardworking people are working on this, which results in:
https://github.com/bytecodealliance/componentize-py/
which inspired
https://github.com/WebAssembly/component-model/blob/main/des...
with some hard work done to make things work:
https://github.com/dicej/wasi-wheels
It's a fun ecosystem -- the challenge is huge but the work being done is really fundamentally clean/high quality, and the solutions are novel and useful/powerful.
In the sense that calling into C from almost any language is easily done without too many problems, what wasm adds is a secure and performant interface with that language.
For example, IIRC one of the first uses of wasm was to sandbox the various codecs that had regular security vulnerabilities. In this, Wasm is neither the first nor the only approach, but with a combination of hype and simplicity it is having good success.
I write Java software on Intel and deploy to an arm device.
The promise seems to work for me.
Meanwhile you won't have much luck running Eclipse or any other large Java program on your custom OS or hardware platform if you just port the JRE.
Since this is an emerging ecosystem, why not take a different spin on security, and instead try e.g. capabilities? Instead of opening a connection to the DB, or a listening socket, you get FDs from your runtime. Instead of a path where you can read/write files, such as assets or local cache, you get a directory FD from openat (not sure right now if that could be bypassed with "..", but you get the idea).
Bonus: you can get hot code reloading for very cheap.
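On the ".." question: a plain openat() does follow "..", so a bare directory FD can be escaped; on Linux, openat2(2) with RESOLVE_BENEATH closes that hole, and that is the kind of mechanism capability-oriented runtimes can rely on. A Linux-only sketch of the escape (paths are made up for the demo):

```python
import os
import tempfile

# Build: root/assets (the "capability" dir) and root/secret.txt outside it.
root = tempfile.mkdtemp()
os.mkdir(os.path.join(root, "assets"))
with open(os.path.join(root, "secret.txt"), "w") as f:
    f.write("outside the sandbox")

dirfd = os.open(os.path.join(root, "assets"), os.O_DIRECTORY)

# Plain openat() resolves ".." and walks right out of the directory FD:
fd = os.open("../secret.txt", os.O_RDONLY, dir_fd=dirfd)
leaked = os.read(fd, 64)
os.close(fd)
os.close(dirfd)
# With openat2() and RESOLVE_BENEATH, the same open fails with EXDEV,
# so the FD behaves like a real capability.
```

So the capability model holds up, but only if the runtime resolves paths with an escape-proof primitive rather than handing the raw FD semantics to the guest.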
You can make the same argument for any compiled language if you call QEMU your runtime.
And this isn't just theoretical. Games written in compiled languages know to bundle all their dependencies. Games written in Java often expect you to have a JRE. And more often than not "a JRE" means the official Sun JRE (maybe even a specific version range) because too many Java applications use non-portable interfaces.
Only if you ship a QEMU-compatible image, and I don't think anyone does. The usability and integration with the host system is too poor.
> Games written in compiled languages know to bundle all their dependencies. Games written in Java often expect you to have a JRE.
You can't get away from having to have some interface between the host system and the program, but so far the JVM is the least bad one. When laptops started shipping with ARM processors, both docker images and games that had compiled in their dependencies broke, while programs that were shipped as JARs worked fine.
Java is cool.
It's a similar but quite different solution.
Java
- was designed with a lot of tight system integration foremost, sandboxing being secondary
- ran a ton of the sandbox enforcement checks in the same VM/code as the code they were supposed to sandbox, i.e. Java bytecode decided whether Java bytecode should be able to access file IO etc.
- Java Applets are, at least by somewhat more modern standards, a complete security nightmare _in their fundamental design_, not just practically due to a long history of failures
- had a security design that was largely Java focused, but the bytecode wasn't limited to only representing possible Java code
- targeted a very different use-case/context than the "WASM replaces containers" blog is speaking about: the blog is mainly about (maybe micro-) services, while Java sandboxing was a lot about desktop applications. I.e. more comparable with Flatpak, whose sandboxing is also (sadly) more about compatibility than security (Snap does that better, but has other issues).
The last point especially is really important, as we are not speaking about WASM replacing sandboxing for e.g. dev tools and the like, but sandboxing for deployment of microservices written with it in mind. In such a context:
1. you (should) always run semi-trusted code, not untrusted code
2. when giving access to other resources (e.g. the file system), it's often in a context where you don't need any form of dynamic access management (like you do on a desktop), which means _all the tech underlying containers can be used with WASI_. There is no reason not to still use cgroups, privilege dropping, and co. integrated into your WASI VM the same way Docker and co. use them.
3. (I kinda think) there is a (subtle/very slow) trend not to rely only on container isolation but to e.g. have a Firecracker micro-VM run multiple closely coupled containers (a pod/sidecar containers) while placing loosely coupled containers in different micro-VMs.
The true challenge isn't WASI, but that it's competing with Docker -> Kubernetes, where Docker is a "one size fits all (badly)" solution which can not only run your services but also all kinds of dev tooling, legacy applications, etc. without requiring any changes to them, and can (badly, but often well enough) simulate your deployment locally with Compose. To make the competition even harder, Kubernetes has become somewhat of a "standard" interface to deployment, especially in the cloud. This might suck, but it also means you use OCI images both locally and in production. And that is what the WASI-for-service-sandboxing use case is competing with: OCI images and the software running them, not just Docker.
> - was designed with tight system integration foremost, sandboxing being secondary
> Java sandboxing was a lot about desktop application
I think this is a false history. Java was designed for interactive television. As in cable television set top boxes receiving apps broadcast over the cable and executing them.
But I could have formulated some thing better:
> sandboxing being secondary
sandboxing for _security_ being secondary; for compatibility it was primary (e.g. like Flatpak). And even if it wasn't, what was seen as acceptable security in the '90s isn't anywhere close to what's acceptable today in most cases.
Also, "tight system integration" was in the context of "highly sandboxed" things, which isn't necessarily quite the same. E.g. in the context of "highly sandboxed" things, Rust's standard library and support for C-API libraries count as tightly system integrated. But if you're speaking in the context of e.g. Windows, you'd need to add official bindings to the various MS-specific system libraries to count as "tightly integrated", and even then you could argue it's not quite there due to the lack of first-class COM support.
Anyway, I think the most important takeaway from Java sandbox security is "never run the code enforcing your sandbox inside the sandbox itself", because a huge number of security issues can be traced back to that (followed by the way Java applets were embedded in the browser, with "privileged" applets being really, really badly designed in a ton of ways).
So an operating system?
Check out the WASI repository. For people who don't understand what WASI is, I always tell them it's something like a reference/specification of cross-platform syscalls that have to be implemented by WASM VMs.
Of course, access to such things always comes with assumptions of control and policies that rely on behavioral analysis. So I hope that something similar to host and web application firewall rules will come out of this, similar to how Deno does it.
I hope for a much more near term bright future for WASM: language interop. The lowest common denominator today is C, and doing FFI manually is stone age. If you can leverage WASM for the FFI boundary, perhaps we can build cross language applications that can enjoy the strengths of all libraries, not just those outside of our language silos.
Containers don't solve that problem. They aren't a particularly good security boundary, and they are much heavier weight, in terms of bytes and startup costs, than WASM binaries, because they are deeply integrated into the OS for networking, etc. However, when what you need to do is ship a binary with a bunch of accoutrements, dependencies, files, etc, and then run multiple processes, multiple threads, and use more of the OS primitives, containers are an ergonomic way to do that, and that suits Infrastructure-as-a-Service much more closely.
One can use something like https://github.com/google/gvisor as a container runtime for podman or docker. It's a good hybrid between VMs and containers. The container is put into a sort of VM via KVM, but it does not supply a kernel; it talks to a fake one. This means the security boundary is almost as strong as a VM's, but mostly everything will work like in a normal container.
E.g. here's I can read host filesystem even though uname says weird things about the kernel container is running in:
$ sudo podman run -it --runtime=/usr/bin/runsc_wrap -v /:/app debian:bookworm /bin/bash
root@7862d7c432b4:/# ls /app
bin home lib32 mnt run tmp vmlinuz.old
boot initrd.img lib64 opt sbin usr
dev initrd.img.old lost+found proc srv var
etc lib media root sys vmlinuz
root@7862d7c432b4:/# uname -a
Linux 7862d7c432b4 4.4.0 #1 SMP Sun Jan 10 15:06:54 PST 2016 x86_64 GNU/Linux
But wasm has also done a lot to enable a portable, easy-to-ship blob/image, and is rapidly tackling the distribution problems whose solutions have been key to containers' success.
I'm not sure what we can really put firmly on containers side here, what makes them distinct or lasting as compared to what wasm is shaping up to be.
The various wasm runtimes already offer an array of low level threading and networking capabilities. Higher level services like http servers can be provided by runtimes with very optimized native backends, that have excellent performance characteristics but still "appear" as regular wasm. There may be niche long term users of containers for these reasons, but it's unclear to me that the systems programming capabilities will be much of a moat for containers: if anything, I think we'll see wasm implementations competing to provide the fastest performance for the same well specified interfaces in a hot war competition.
The main counter-factor against wasm that I see is just inertia. Change is slow. These runtimes are still early. Wasm language targets have been around for a while, but at various quality/polish levels. Wasm components are still effectively brand new, a year and a month after release, and the specs are still undergoing major updates to figure out how exactly async is going to work, and there are very few runtimes keeping pace with this radical upshift. It's happening, and I believe in wasm, but this is a long change and the capabilities aren't really truly here yet.
I too see a future where Wasm is way more widespread, but as of now we are deploying a small app with MySQL in a single docker container. I suspect that we won't be running MySQL compiled to a Wasm container anytime soon*
* I suspect that we could with the right transpilations/interpreters but for sure we won't
I really hope WASM takes over as a plugin mechanism. I don't think it will lead to fragmentation because communities will form around their preferred language, it will just not be enforced anymore. And forcing a plugin language did not work so successfully to prevent fragmentation anyway, see GNU Guile or vimscript.
I'm going through this now, giving a full Cloudflare tech stack a shot, but I'm nearing a limit before I just go back to containers
Containers: server, dev's device
Cmon OP try to keep up
Implementing a FaaS runtime like Lambda is actually quite hard to do in a way that is both safe and multi-language. Compiling down to a safe, sandboxed bytecode, is not a bad idea, regardless of whether you're doing that to run it in a user's browser, or to run a FaaS function on some cloud infra, or to write a server-side plugin to a SaaS product.
The user's device, which can be a smartphone or a laptop, and the developer's device, which can be a 24-hour-online server, are just two completely different devices that cannot be abstracted away.
WASM is a technology that was born for the frontend, and JS is a technology that was born for the frontend. You can for sure metastasize the web frontend towards the back, but then you end up with a frontend bias in the backend (why not build the backend in Swift, or Java for phones, or whatever language will be used for new tech like LLM voice assistants?) and, of course, a backend bias in the frontend.
Not a good look. Let's stop it with the full-stack thing and specialize: half of us do backend, half of you do frontend, and then we keep on specializing. If everyone does everything, we'll cover no ground.
This is true if your wasm code is purely computational, but if it interacts with the outside world it’s a different story. Each V8 runtime has a subtly different interface, so code that runs on cloudflare V8 might not run in Bun or Deno. Not to mention if you want to support WASI too, which is a different set of bindings even though it’s still WebAssembly.
Love or hate Docker, part of its success was that POSIX was already a fairly established standard and there weren’t a lot of vendors driving it in different directions.
Nit: bun uses javascriptcore
> In the year 2030, no one will remember Kubernetes.
So what's going to handle rolling out new versions of your WASM, setting up whatever reverse proxy you pick, and the other stuff involved? A bunch of scripts you wrote to do this for you? https://www.macchaffee.com/blog/2024/you-have-built-a-kubern...
> The promise of DevOps has been eroded by complicated tooling and tight coupling of program-container-linux. In my experience, developers want to write code and ship features to hit their quarterly goals.
Here is why we ended up here: In my experience, developers want to write code and ship features to hit their quarterly goals.
Sure, and my angry PlatformOps team (formerly DevOps (formerly SRE (formerly Ops))) is stuck picking up the pieces, because we get crap while desperately paging you, because we have no clue what "duplicate street key" in your logs means. On top of that, InfoSec dropped us 10 tickets about code-level library vulnerabilities in your docker container, but all the developers' managers got together to convince them it was our problem somehow.
So we are forced to write this bundle of terribly written Ops-type software in an attempt to keep this train on the tracks while you strap rockets to the café car.
WASM replacing containers is just a solution looking for a problem. Containers solved the problem of "How do we run two different versions of PHP on a single server without them colliding?" Most of the container problem space consists of higher-level DevOps problems that we haven't been able to solve, and WASM isn't going to change that. I deal with a team that writes 100% Golang, so their code is like WASM in that it's "ship a binary and done". Yea, they begged for Kubernetes, because it works a ton better than the custom Ansible they wrote to keep their VMs/load balancers in sync.
Instead of spinning up a container on-demand you spin up what is essentially a chrome tab in your V8 instance. Startup time is nil
In terms of solutions looking for a problem, that one seems to have fixed at least one problem for at least one person
It's pretty genius
This is obviously not true when the application itself can take arbitrary amount of time to initialize.
The overhead of WASM startup is small. But Firecracker is also very quick to start the virtualization; a lean compiled app in Firecracker would start quicker than a bloaty interpreted app in WASM. Both approaches can also freeze a pre-initialized application, and even clone those.
Reality is most "serverless" stuff is slow to start because the application itself is slow to start. WASM isn't going to change that, Enterprises are going to Enterprise. The hype field is just very strong.
I, however, don't have that problem. Every environment I've been in has banished Lambdas for most things: tying ourselves into knots to keep the Lambdas going wasn't worth it when just having a small container listening to SQS/Kafka was easier, and if we needed to scale, we could.
I do lament the passing of the age I came up in, where if some hot company down the street had a killer JavaScript dropdown menu, you could just view source and maybe learn a few things. That openness made a great first impression on me.
But yeah, the concept has kind of expanded. I want to say you can write WASM 'plugins' for istio. Which I also think is pretty cool.
Something is going to replace containers, I say let them take a swing at it but I think at the end of the day you get something that ends up looking a lot like containers.
To put it into context, Rust was released in 2012. 8 years later it was stable, had a solid toolchain and plenty of people using it in production. Wasm still feels like a toy compared to that
You likely visited a website using wasm today without realizing it, and major apps like Photoshop have been ported to wasm, which was the original dream behind it all. That has all succeeded.
But if you want to replace containers specifically, as this article wants, then you need more than wasm 1.0 or even 2.0. I don't know when that future will arrive.
Like, how easy is it to write a web application in Wasm? Or how easy is it to compile your average program written for a native platform to Wasm without hand picking your dependencies to work on the platform?
Wasm succeeded at its initial goals, and has been expanding into more use cases like compiling GC languages. Perhaps some day it will be common to write websites in wasm, but personally I doubt it - JavaScript/TypeScript are excellent.
https://thenewstack.io/amexs-faas-uses-webassembly-instead-o...
But the rest of the community prefers using the original implementation. If some library that you want to use doesn’t work, no one is going to help you. You’re using the second class implementation.
And I was able to get things working with an understanding of the whole system. You could instantiate the WASM from either a file, or from a byte array, pass in the byte array that held 'system memory' for the WASM program, call the WASM functions from the JavaScript code, and see results of making the function calls. The WASM binary was under 3KB in size.
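That whole workflow fits in a few lines. Below is a minimal sketch: the byte array is a hand-assembled module exporting an `add(a, b)` function (standard WASM section encoding, not the poster's actual binary), instantiated directly from bytes and called from JavaScript.

```javascript
// Hand-assembled WASM module exporting add(a, b) -> a + b.
// Layout: magic/version, then type, function, export, and code sections.
const wasmBytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // "\0asm" magic + version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type 0: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // func 0 uses type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export "add" -> func 0
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section, 1 body, no locals
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                   // local.get 0/1; i32.add; end
]);

// Instantiate straight from the byte array -- no file, no toolchain.
const wasmModule = new WebAssembly.Module(wasmBytes);
const instance = new WebAssembly.Instance(wasmModule);

console.log(instance.exports.add(2, 3)); // 5
```

The same `WebAssembly.instantiate` path also accepts a fetched file, and an imported `memory` can be passed in via the (optional) import object, which matches the "pass in the byte array that held system memory" setup described above.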
Now once you want to use libraries, everything light and small about WASM goes out the window. You're at the mercy of how efficiently the library code was implemented, and how many dependencies they used while doing so.
I think the "WASM as containers" and "WASM as .jar" approaches are rather silly, but language support is good enough to use the technology if you think it's a match. I don't think it will be for most, but there are use cases where pluggable modules with very limited API access are necessary, and for those use cases WASM works.
Plus, if you want to run any kind of game engine in the browser, you're going to need WASM. While I'm not replacing my Steam install with a browser any time soon, I have found that WASM inside itch.io runs a lot faster and more stable than Winlator when I'm on my Android tablet.
Around the same time Linux is ready for the desktop.
There is, naturally, the version that works: keeping the Linux kernel and replacing the userland with managed language frameworks. I hear they are having huge success in mobile devices and throwaway laptops.
Sleep worked perfectly until Microsoft decided that device manufacturers should replace sleep with overheating in your bag (a much better sleep mode than, y'know, actual SLEEP).
Not sure what "modern UEFI features" means. Whenever something is described as "modern" that screams to me that someone is trying to conflate recentness with quality which is a red flag. UEFI itself has worked fine for as long as it has existed as far as I know?
Why you would replace the userland with "managed language frameworks" is quite beyond me.
That must be why Linux forums are full of VA-API tutorials and how to enable hardware decoding on Chrome then.
> UEFI itself has worked fine for as long as it has existed as far as I know?
Depends on the board, some boards don't play ball with Linux distros.
> Why you would replace the userland with "managed language frameworks" is quite beyond me.
Google has their reasons, as does LG, seems to work quite well in market share.
Meanwhile on Windows I am not exaggerating when I say that every computer I have owned and every peripheral device I have ever used has had serious issues. Wireless headphones randomly disconnect, microphones require frequent unplug-replug cycles, rebooting is often required, reinstalling is common. Mice and keyboards have weird compatibility issues with software drivers. This experience is shared with most people I know that I have discussed it with. People are just used to it.
Maybe it isn't Linux that is the problem. Maybe the problem is that consumer hardware is designed and built on the cheap and is not designed to last, and they get away with it because most people (1) have no idea it could be so much better and (2) have no insight into these issues before buying because they are rarely covered in reviews.
For some reason when this happens on Windows, the hardware is to blame, but when it happens on Linux, Linux is to blame.
Then, of course, I was also curious whether their reasoning was grounded, so I manually enabled acceleration and re-ran the test - and found out that both Chrome and Firefox will inevitably crash within 2-3 hours of active browsing with it enabled, so they disable it for a reason.
As far as "maybe Linux isn't the problem" - you're broadly correct that it's really an issue of hardware quality and/or lack of good first party drivers. But from the end user perspective, if you can't reliably use Linux with popular off-the-shelf hardware, it's not really "ready for the desktop", regardless of where the blame lies. I've been a Linux user for 25 years now, with about a decade of using it as a primary desktop OS, and this exact excuse has been around for as long as I remember (I've used it myself plenty of times way back!). And yet, here we are.
More commonly you have to wait six months for good/full support of new hardware.
With Durable Objects, two clients on either side of the world can both request a websocket connection to an object with the same unique identifier, and all the bytes from those clients will land in one single process somewhere inside a CloudFlare data centre.
I am pretty sure the answer is yes, but the docs seem a bit less direct than CloudFlare's web focused use cases.
“ChatGPT make me sound confident”
Containers package applications that run directly on real hardware (well, directly on a real kernel that is running on real hardware). There is no runtime. I am talking OCI containers here (Docker and Kubernetes). At least they can. Most containers are probably running on a Linux kernel that is running in a virtual machine (in the way that KVM, EC2, and VirtualBox are virtual machines).
WASM needs a runtime. That is, it is going to run inside an application. That application needs to run on a kernel. So, WASM will always be further from the hardware than a container is.
WASM solves the same "portability" problem that the JVM and .NET do. So, maybe WASM wins against those environments.
That is not the problem that containers solve though. Containers bundle applications with their dependencies. They replace "installation and configuration" with instantiation (deployment). WASM does not magically eliminate dependencies or the differences between environments (not even the difference between V8 implementations).
If anything, the technologies are complementary. Maybe, in the future, all our containers will be running WASM applications.
Or maybe we will run a different kind of container that ONLY runs WASM applications and then WASM can replace the Linux kernel running in a VM that hosts all our OCI containers today. Perhaps that is what the author really envisions. Even then, it sounds like more of a complement than a true alternative.
> WASM does not magically eliminate dependencies or the differences between environments (not even the difference between V8 implementations).
Not by itself, and not currently, but I don't find it too much of a leap of faith to assume that that'll be standardized before too long.
Here's a compiler from WASM to native code; it can run AoT as well, not just JIT:
As if you need to learn anything — you get your Dockerfile and that's it; what else is there to learn? Your WASM app still needs Kubernetes to run, so it's not adding any value.
The complexity is not in running your app in Docker, the complexity is running your container somewhere, and WASM does not help at all with that.
WebAssembly is not going anywhere; it's pretty clear it won't grow much in the next 5 years.
It's not trivial to manage a running container or group of, with firewalls and filesystems and whatnot.
My biggest gripe is that it's quite redundant with the OS and tends to reinvent stuff. You end up needing to learn, document, and build against both the OS-layer firewall and the container-layer firewall, for example.
This is represented in some languages better than others -- for many languages you just switch the "target" -- from x86_64-unknown-linux-gnu to wasm32-wasip2 (ex. Rust). Some things won't work, but most/many will (simple file access, etc), as they are covered under WASI[0]
That's not to say that nothing will change, because there are some technologies that cannot be simply ported, but the explicit goal is to not require porting.
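In Rust, for example, the switch really is mostly a target flag. A sketch, assuming a recent toolchain (wasm32-wasip2 is a tier-2 Rust target) and a hypothetical crate name `myapp`:

```shell
# Add the WASI preview 2 target and rebuild the same crate for it.
rustup target add wasm32-wasip2
cargo build --release --target wasm32-wasip2

# The artifact lands under target/wasm32-wasip2/release/ and can be run
# by a WASI-capable runtime, e.g.:
wasmtime run target/wasm32-wasip2/release/myapp.wasm
```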
> My money is on WebAssembly (WASM) to replace containers. It already has in some places. WebAssembly is a true write-once-run-anywhere experience. (Anywhere that can spin up a V8 engine, which is a lot of places these days.)
Luckily a container is a place that can spin up a V8 engine. If you want to bet on WASM my bet would be on containers running WASM.
Can you explain your thoughts here? WebAssembly is sandboxed and effort must be expended to provide a mechanism for getting data through that boundary. How does that differ from “encapsulation?”
Also, just because you could does not mean you should -- most of the time you don't want to inject environment variables or configurations that could contain secrets until runtime.
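Concretely, the boundary works like this (a minimal sketch; the module bytes are hand-assembled for illustration): a wasm instance can only reach host functionality that is explicitly handed to it through its import object, so withholding an import denies that capability by default.

```javascript
// Hand-assembled module that imports env.log(i32) and exports run(),
// which simply calls log(42). It has no other way to touch the host.
const sandboxed = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,             // magic + version
  0x01, 0x08, 0x02, 0x60, 0x01, 0x7f, 0x00, 0x60, 0x00, 0x00, // types: (i32)->(), ()->()
  0x02, 0x0b, 0x01, 0x03, 0x65, 0x6e, 0x76,                   // import "env"
  0x03, 0x6c, 0x6f, 0x67, 0x00, 0x00,                         //   "log", func type 0
  0x03, 0x02, 0x01, 0x01,                                     // own func uses type ()->()
  0x07, 0x07, 0x01, 0x03, 0x72, 0x75, 0x6e, 0x00, 0x01,       // export "run" -> func 1
  0x0a, 0x08, 0x01, 0x06, 0x00, 0x41, 0x2a, 0x10, 0x00, 0x0b, // body: i32.const 42; call log
]);

const mod = new WebAssembly.Module(sandboxed);

// Grant the capability: the host decides exactly what "log" does.
const seen = [];
const instance = new WebAssembly.Instance(mod, { env: { log: (x) => seen.push(x) } });
instance.exports.run(); // seen is now [42]

// Withhold the import and instantiation fails -- deny by default.
let denied = false;
try {
  new WebAssembly.Instance(mod, {});
} catch (e) {
  denied = true;
}
```

That is the "encapsulation" difference in practice: the data crossing is explicit and enumerable, rather than being whatever the kernel happens to expose.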
Most instances of "X will eat the world" and "X will be used anywhere" will be false even for very successful technologies
There's not even a single argument in there to support the clickbait title. We have containers, but "containers are annoying". WASM won't be annoying? Pray tell, how do you surmise that?
Docker too complicated? Build times too long? You believe WASM tools will be simpler and faster... why?
The WASM world doesn't have most of the pieces of that puzzle and WASM itself is quite irrelevant. Say we standardized on a sandbox running x86_64 VMs under Firecracker, with the proper sandboxing that would work just as well as running WASM. You might say that WASM is portable, x86_64 assembler is not, to that I would counter that ARM (and probably RISC-V too) can emulate x86_64 faster than they can run WASM. So what's the point of the WASM piece of the puzzle?
Funny. The most obvious place for WASM is a web browser, and yet WASM STILL cannot access the browser DOM. It's only been, what? At least 8 years of promises about it coming soon and how people are working on it.
Besides, you can already send data from/to the Browser<>WASM context, which seems to solve at least most of the use cases people were imagining for WebAssembly back when it was just asm.js.
WASM will never replace containers. People will be running wasm inside of containers. That's what will happen.
That is, you need a Linux kernel underneath for the containers to run on. More often than not, that Linux kernel is running in a virtual machine.
When you run Docker Desktop on your Windows or macOS machine, how do you think it runs that Alpine Linux container? It works because there is a virtual machine running Linux that all the Docker containers run on top of.
If you are running Linux directly on real hardware, your containers do not need a VM. Everywhere else, they do.
AWS Fargate / Lambda are Firecracker VMs. EC2 are normal VMs.
Turns out only containers is not secure enough.
WASM to an API is essentially the `Fn(…) -> …` type. E.g., you have
POST /some/api
And it can take JSON, but what if it needs to do "something", where something depends on the consumer? Across the board, what APIs/aaS's do is that some PM goes "I think these are the only 2 things anyone will ever want", those get implemented, and you get an enum in the API, the UI, the service, etc. And that's all it is ever capable of, until someone at the company envisions something bigger.
If I could pass WASM, I could just substitute my own logic.
Like webhooks, but I don't have to spin up a whole friggin' HTTP server.
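A minimal sketch of that idea. Everything here is hypothetical: the `applyUserLogic` helper and the hand-assembled "double each value" module stand in for whatever blob a consumer would actually upload.

```javascript
// Hypothetical server-side hook: instead of a fixed enum of behaviours,
// the API applies a caller-supplied wasm transform to the data.
function applyUserLogic(wasmBytes, items) {
  const instance = new WebAssembly.Instance(new WebAssembly.Module(wasmBytes));
  return items.map((x) => instance.exports.transform(x));
}

// A consumer's "custom logic": module exporting transform(i32) -> i32
// that doubles its argument.
const doubler = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // magic + version
  0x01, 0x06, 0x01, 0x60, 0x01, 0x7f, 0x01, 0x7f,       // type 0: (i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // func 0 uses type 0
  0x07, 0x0d, 0x01, 0x09, 0x74, 0x72, 0x61, 0x6e, 0x73, // export "transform"
  0x66, 0x6f, 0x72, 0x6d, 0x00, 0x00,
  0x0a, 0x09, 0x01, 0x07, 0x00, 0x20, 0x00, 0x41, 0x02, // body: local.get 0;
  0x6c, 0x0b,                                           //   i32.const 2; i32.mul; end
]);

console.log(applyUserLogic(doubler, [1, 2, 3])); // [ 2, 4, 6 ]
```

No HTTP server on the consumer's side, and the host still controls what the blob can touch, since it passes in no imports at all.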
What many people don't get between WASM & containers is that containers don't need software developers to make changes to support containers. WASM however relies on software developers to make changes to their apps. Otherwise, you have to emulate an entire architecture in WASM which doesn't perform well. It is the difference between VMs, which emulates physical hardware & containers which doesn't need to emulate the hardware cause it provides the sandboxing using kernel features.
Better sandboxing does not mean completely foolproof sandboxing, and defense-in-depth is a practice for a reason. The idea is a vulnerability in the runtime (your Wasm runtime) would mean system access. A vulnerability in the container runtime underneath (a different layer of security) would mean system access, and then a VM (if there was one underneath) is another layer to break through. This means to get to "root access" on a machine, there are now 3 layers of security to escape.
Running a single Wasm process offers better isolation because it is deny-by-default; running a program as a plain process, or as a container with cgroups + namespaces, has a wide attack surface. You can achieve greater density of applications with Wasm than with containers because of its lighter footprint than a userspace process.
Containers are a hack that packages the assumption of an operating system and a bunch of other files and dependencies into essentially a tarball to make apps run. You must deal with isolation at the OS level (seccomp, etc). Wasm gives you greater control -- you don't have the "same access" for every app, you can vary access infinitely and dynamically, without worrying about OS primitives much more easily with WebAssembly.
It's OK if you think this is silly -- no one is forcing you to adopt the technology, it'll either come around or it doesn't.
> What many people don't get between WASM & containers is that containers don't need software developers to make changes to support containers. WASM however relies on software developers to make changes to their apps. Otherwise, you have to emulate an entire architecture in WASM which doesn't perform well. It is the difference between VMs, which emulates physical hardware & containers which doesn't need to emulate the hardware cause it provides the sandboxing using kernel features.
This is not necessarily true -- WebAssembly support is being added in languages upstream, and the goal (and reality for some programs today) is that compiling to WebAssembly does not require drastic changes. It's not perfect, but this is a stated goal, and is what is playing out in reality. The WebAssembly ecosystem is working very hard both internally and with upstreams to work with use cases/patterns that exist today, and make using WebAssembly close to a no-op/config change.
Any sysadmin/devops person can tell you that the move to containers was/is not pain free. I'm not promising Wasm will be pain free either, but the idea here is that change is happening upstream -- the ecosystem is working to make it pain free. It will be more like changing a few flags (e.g. building for ARM rather than x86) and following the errors. Some languages will be easier to do this in than others.
You'll just wake up one day and your python toolchain will be able to compile to WebAssembly natively with no extra tooling if you want. Maybe you don't have a stack that can make use of that yet, and maybe Django won't be fully supported early on, but Flask more likely will be.
Also, I think most uses of containers lose the advantage of WASM. WASM is about running on any platform, which is great for browsers and serverless. But containers usually run in controlled environments where you can compile once and not pay the penalty of compiling each time.
I wonder if someone could make a decent cross-platform GUI toolkit to save us from the horribly slow Electron-hell we've carved out for ourselves.
I'm no wasm expert, but I find it just fantastically unlikely that they're going to beat the decades of research that have gone into the JIT in the JVM anytime soon. But, I guess if the objective is just "run this bytecode in every browser on Earth," that ship has sailed and I will look forward to more copies of node infiltrating my machines
Probably not, but that's sort of orthogonal to my point.
Java started as "write once run anywhere", but it has almost become the opposite of that: "write once, run it on your specific server".
"Portability" is not nearly the same concern with Java as it was thirty years ago; I don't have direct numbers on this, but I would guess that a vast majority of Java code written in 2025 is either running on a server or running an Android app, neither of which is nearly as "portable" as what was kind of promised in the 90's, at least not on the desktop.
I'm sure you can list any number of programs that are written in Java, but it certainly has not been the cross-platform standard that everyone was promised; it feels like Electron has more or less taken its mantle in the world of desktop land.
Funny, how people forget that a "specific server" can be running Linux on bare metal ARM or a x86 container, or maybe Windows or even MacOS.
For a server environment, I set all the parameters I want and then I code around them. Obviously there's a lot of variation between different servers, but you typically develop your server code for a specific set of servers.
How convenient is that, huh?
Aww, man, they nixed my chances of trying that out on _both_ of my local machines in one fell swoop (still on macOS 12.7 because it works fine)
Does this thing, really, seriously, need the most bleeding edge Darwin toys?! Come to think of it, I bet $1 it's because GHA only goes down to 13 <https://docs.github.com/en/actions/using-github-hosted-runne...> It seems GL is in the same boat <https://docs.gitlab.com/ee/ci/runners/hosted_runners/macos.h...>
x86 and ARM compilation don't go away. WebAssembly is an additional option.
As a sys admin, no, we will have to remember. Once a system is in place and "functions", they tend to stay for a long time.
Also does objects nowadays.
Additionally, given all the AI prompts being used nowadays writing long English texts instead of proper programming, COBOL was actually a language ahead of its time.
> WebAssembly is a true write-once-run-anywhere experience.
Except not. The wasm ISA is really quite limited in the types of operations. A full-blown RISC/CISC ISA will have way more opportunities for optimization. To say nothing of multithreading and pipelining. JITing also has overhead.
> You can compile several languages into WebAssembly already.
But if you can compile them, why not just compile to container, and get free performance?
Wasm will have a hard time with anything low level: networking, I/O, GPU, multimedia codecs.
Personally, I don't think Cloudflare is the best provider for Wasm at the edge, as everything needs to go through a JavaScript layer that eventually hurts performance and prevents further optimization, but it's a strong one nonetheless (note: take everything with a grain of salt; even though I try hard not to be biased, I'm also the founder of Wasmer).
> "The main thing holding back wider adoption is a lack of system interfaces. File access, networking, etc. But it's just a matter of time before these features get integrated."
Wasmer launched WASIX [1] a few years ago which fulfills the vision that the article describes. With WASIX you can have sandboxed access to:
1. Filesystem
2. Networking / Sockets
3. Processes
4. Threads
[1] https://wasix.org/
What I found was the JS version was a bit faster than the compiled WAT. Yikes.
EDIT: I think I'll try debugging it more
The performance is very poor, perhaps 100x worse than native. It's bad enough that we only use SQLite for trivial queries. All joins, sorting, etc. are done in JavaScript.
Profiling shows the slowdown is in the JS <-> WASM interop. This is exacerbated by the one-row-at-a-time "cursor" API in SQLite, which means at least one FFI round-trip for each row.
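To make the arithmetic concrete, here's a toy sketch of why the crossing count dominates (the `ffiCall` wrapper just stands in for one JS <-> WASM boundary crossing; it is not the real sqlite-wasm API):

```typescript
// Stand-in for one JS <-> WASM boundary crossing; purely illustrative.
let crossings = 0;
function ffiCall<T>(fn: () => T): T {
  crossings++;
  return fn();
}

const rows = Array.from({ length: 1000 }, (_, i) => i);

// Cursor-style API: one crossing per row fetched.
crossings = 0;
for (let i = 0; i < rows.length; i++) {
  ffiCall(() => rows[i]);
}
const cursorCrossings = crossings; // 1000 crossings for 1000 rows

// Batched API: copy a chunk of rows per crossing.
crossings = 0;
const batchSize = 100;
for (let i = 0; i < rows.length; i += batchSize) {
  ffiCall(() => rows.slice(i, i + batchSize));
}
const batchedCrossings = crossings; // 10 crossings for the same 1000 rows
```

The per-crossing cost is roughly fixed, so the cursor loop pays it a thousand times where a batched API would pay it ten.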
I heard WebSQL, which used SQLite, was at least 10x faster than WASM SQLite. Probably even more.
Containers are much more than that. They're service providers for small installations, short-running VeryFatBinaries, and close-to-the-metal yet isolated complex applications in HPC environments.
WASM may be all well and good, and it'll be a glorified CGI, maybe even a good one, we'll see, but it can't snipe containers with one clean headshot and be done with it.
It'll replace Kubernetes? Don't be silly. Not everyone is running K8S for the same reason, and I'm saying that as a person who doesn't like K8S.
It would be immensely silly to run full x86 emulators in WebAssembly and go through 2 layers of transpiling / interpreting for what can run natively on the host's CPU.
This argument always reminds me of "Square Hole!" video: https://www.youtube.com/watch?v=6pDH66X3ClA (just because you can make it fit, it doesn't mean you should do it).
That's 5 years from now.
Which is a pretty good time frame for getting enterprisey things on to kubernetes.
And maybe a bit short for getting vendors to add a new sandboxed VM type to their supported platforms.
At their heart, modern containers are a clever way to create something that looks like a Linux VM without the overhead of actual virtualization. Your application still executes "natively," just inside of a Potemkin environment (modulo whatever holes you poke in the veneer.) The latter bit is why we use containers.
WASM is a bytecode format. It doesn't carry around the environment it needs to execute correctly like a container does. In fact, it (by definition) needs an environment with certain properties (interpreter/JIT present) to work!
A big constraint here is the memory model. Languages include a specification for memory allocation, deallocation, lifetime, and garbage collection, and WASM has its own, engine-dependent way of going about that. The performance lost from reimplementing the memory model within WASM could only be regained by going back to something that looks like containers.
I mean I don't enjoy Docker either, but I think it's more that there are many problems k8s + Docker help you solve, and WASM alone wouldn't solve a lot of them.
If this was done in a way that works in mechanical sympathy with a wide range of languages I think WASM would be more successful. Making an API available inside a sandbox is painful. You have to manually roll something that can deal with memory, marshal calls and parameters etc. I'm not suggesting it is easy. I'm merely pointing out that this is what I see as the main obstacle to adoption.
I am not interested in "generic" access to system resources (filesystem, network etc) at all. In fact, I can't think of any scenario where I'd actually want to do that. I want to provide APIs that deal with external resources in narrowly defined ways.
It is much easier to do this securely when you explicitly have to provide functionality rather than say "here's the filesystem" and then try to clamp down access.
I want to use WASM on servers. Primarily in Go. I want to be able to create a bunch of APIs that provide narrow access to persistence and communication and project those APIs into the sandbox. In a manner that is interoperable and easy to use from various languages. I don't want to have to craft some application specific marshalling scheme.
If we could project APIs into sandboxes in a (mostly) language-agnostic way that is natural, safe, and easy to use, it'd be easy enough to write system interfaces. I have no idea why so few people see this as important, since I feel it should be self-evident.
(And yes, being able to offer concurrency would be nice, but that's a much smaller issue than the problem of there being no good way to marshal APIs into WASM containers.)
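For illustration, the kind of "narrow projection" I mean, sketched in plain TypeScript (every name here is made up; in the real thing the guest would be a WASM module and these calls would be marshalled across the boundary):

```typescript
// Sketch of "project narrow APIs into the sandbox" instead of handing over
// a whole filesystem. All names are illustrative.
type HostApi = {
  loadDocument(id: string): string;              // narrow, read-only access
  saveDocument(id: string, body: string): void;  // narrow, one keyspace
};

// The host decides exactly which operations exist; the guest never sees
// the backing store, only the projected API.
function makeHostApi(store: Map<string, string>): HostApi {
  return {
    loadDocument: (id) => store.get(id) ?? "",
    saveDocument: (id, body) => { store.set(id, body); },
  };
}

// Imagine this compiled to WASM: it can only do what HostApi offers.
function guestMain(api: HostApi): string {
  api.saveDocument("greeting", "hello");
  return api.loadDocument("greeting");
}
```

Securing this is easy because the host only ever handed over two verbs; there is no filesystem to clamp down afterwards.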
It needs something to make it a must have for some area of adoption. I just don't see it yet.
I've grown less young since then, and I can probably count numerous other claims for some great idea to replace "all this crap very soon." Turns out, "one-size-fits-all" solutions are almost always hype, sometimes not even backed up with pragmatic arguments. There are simply no silver bullets in our industry, especially in relation to web tech. Guess what? Some websites are still being built with jQuery, and maybe there's nothing wrong with that.
No, WASM is not going to replace containers. At best, it will likely find its specific niches rather than becoming a universal solution. That's all.
And author loves shiny new things [1]: "...use 10% of your time writing code (or telling AI to write code)..."
[1] https://creston.blog/stop-writing-code/
What about old, not shiny things?
For example, SQL is Turing-complete and expresses simulation in the data-flow paradigm [2]. It is effectively executable on current massively parallel hardware and even scales horizontally, enabling safe transactional "microservices." Why should we embrace WASM, but not SQL?
The API construct that lets a worker call another worker (in the same process, in fact, in the same thread) is a Service Binding: https://blog.cloudflare.com/service-bindings-ga/
This is one type of "binding" or "live environment variable" or "capability". You configure it at deploy time, and then at runtime you can just do `env.SOME_SERVICE.fetch(request)` to call to your other worker: https://blog.cloudflare.com/workers-environment-live-object-...
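In spirit it looks something like this (the binding name and types here are just a sketch, not the actual Workers type definitions):

```typescript
// Sketch of a Worker calling another Worker through a Service Binding.
// The binding name SOME_SERVICE comes from deploy-time config; outside the
// Workers runtime we model it as anything with the same fetch() shape.
interface ServiceBinding {
  fetch(request: Request | string): Promise<Response>;
}
interface Env {
  SOME_SERVICE: ServiceBinding;
}

const worker = {
  async fetch(request: Request | string, env: Env): Promise<Response> {
    // Same process, same thread: no network hop between the two Workers.
    return env.SOME_SERVICE.fetch(request);
  },
};
```

Because the binding is just "anything with a fetch()", you can stand in a mock for it in tests.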
There's a fancy RPC system, although it's (for now) centered on JavaScript so not as relevant to Wasm users: https://blog.cloudflare.com/javascript-native-rpc/
(I'm the tech lead for Cloudflare Workers.)
Thank you!!
Right now I can run containers and WASM workloads in the same k8s clusters. I don't even have to think about it with runtimes like crun/youki or wasmedge. The OCI image format is the universal package format: an image always carries everything it needs, and the tooling is broad and mature.
With containers I can put basically any app in any language in there and it just runs. WASM support is far from that; there is migration toil. Containers are, and will remain, more flexible than WASM.
I feel like this prologue missed an important point. Kubernetes abstracts data centers: networking, storage, workload, policy. Containers implement workloads. They're at different layers.
And back to the article's point: WASM may well replace containerized workloads, indeed there are already WASM node runtimes for Kubernetes. Something else may well replace Kubernetes in five years, but it won't be something at the workload layer.
I highly doubt that. Maybe there will be an evolution to k8s but fundamentally it solves a whole host of challenges around defining the environment an application runs in.
I sure hope "developing on Cloudflare" is not "what the future looks like".
There are many, many VMs and programming languages that are more or less easy to compile to many architectures and/or possible to run straight on a hypervisor or bare metal: JavaScript, Python, Lua, V, and so on. None of them are seen as container competitors.
https://github.com/firedancer-io/firedancer/blob/main/src/ut...
He researched ChromeOS process virtualization, moby (the docker upstream), and Chrome tab isolation. At its core, it's all done with eBPF magic on top of seccomp.
Maybe not, but one can dream at least.
What do we need containers for, if we can build a service in a binary that you can throw on any Linux of the past 20 years and it just starts to serve network requests?
What do we need to support other platforms if the server world is one big Linux monoculture and the next best platforms are just a cross-compile away?
Why wouldn't I compile everything statically if containers don't share libraries anyway?
I don't doubt that wasm has potential, but personally I imagine more esoteric use cases as the go to than necessarily the replacement for containers (where my money is more on unikernel).
The purpose of Docker is like that of a VM: to simulate running two or more server machines, each with its own OS, on one machine.
wasm is like java.
imo the real question is: at what scale/complexity does k8s overhead get amortized by its management benefits? For a number of services, I suspect it never does. I will dutifully accept all my downvotes now.
Also, the container runtime (containerd by default, I believe) can be switched out for micro-VMs like Firecracker (never done this though; not sure how painful it is).
And that is great: thanks to it being turtles all the way down, I can have my plugins back, now running on WebAssembly. That is the only thing I care about.
- Containers are wrappers for binaries. Any binary can be contained, and when run, it gets a constrained (fake) view of the kernel.
- WASM defines a portable binary format. WASM is intermediate-representation, in the same vein as Java byte-code.
You could reasonably put WASM binaries inside containers.
We'd use a Dockerfile, install nodejs or use a source image to build from. How does that work for wasm? Does it have layers to reuse?
I'm curious what's holding it back?
It seems like despite its technical advancements, WASM hasn't captured the public's interest in the same way some other technologies have. Perhaps it's a lack of easily accessible learning resources, or maybe the benefits haven't been clearly articulated to a broader audience. There's also the possibility that developers haven't fully embraced WASM due to existing toolchains and workflows.
as a Dylibso employee, I am wondering what made you think that :D at Dylibso we advocate for Wasm for software extensions, rather than an alternative to containers!
>> I am a software developer for a top 100 app. I've written code you've probably used. That makes me qualified to say stuff.
Ah... that explains a bunch of it.
If someone tacks on file system access to WASM, the whole system becomes worthless.
WASI Design Principles
Capability-based security
WASI is designed with capability-based security principles, using the facilities provided by the Wasm component model. All access to external resources is provided by capabilities.
There are two kinds of capabilities:
Handles, defined in the component-model type system, dynamically identify and provide access to resources. They are unforgeable, meaning there's no way for an instance to acquire access to a handle other than to have another instance explicitly pass one to it.
Link-time capabilities, which are functions which require no handle arguments, are used sparingly, in situations where it's not necessary to identify more than one instance of a resource at runtime. Link-time capabilities are interposable, so they are still refusable in a capability-based security sense.
WASI has no ambient authorities, meaning that there are no global namespaces at runtime, and no global functions at link time.
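A toy way to picture those handles outside the spec (my own sketch in TypeScript, not WASI's actual API):

```typescript
// Toy model of a WASI-style handle: the only way to touch a resource is
// through a value you were explicitly handed. All names are illustrative.
type DirHandle = {
  readFile(name: string): string;
};

// The host mints handles; there is no global "open anything" function
// visible to the guest (no ambient authority).
function preopenDir(files: Record<string, string>): DirHandle {
  return { readFile: (name) => files[name] ?? "" };
}

// The guest receives only the handles it is granted, and cannot forge
// a handle to any directory it wasn't given.
function guest(config: DirHandle): string {
  return config.readFile("app.conf");
}
```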
Source: https://github.com/WebAssembly/WASI/blob/main/README.md
Is there any compelling reason not to just compile TypeScript to WASM? Why should it have JavaScript as its compile target?
The draw of WASM is being able to have your code run in a browser tab exactly as it runs on your local hardware, or as it runs embedded in your own application, with the only thing changing being the custom syscalls between the three.
The biggest limitation of the JVM was that it's closed.
You can spin up your own WASM interpreter and integrate it anywhere you like. It wouldn't be an impossible bridge to cross, it's RISC, it's open, there's many open implementations. Is it even possible to write your own JVM from scratch?
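Embedding really is that small. Here's a complete module with a single exported `add` function, hand-assembled and instantiated from raw bytes, no toolchain involved (the byte layout follows the WASM binary format):

```typescript
// A complete WASM module, hand-encoded: exports add(a: i32, b: i32) -> i32.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // magic "\0asm" + version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // one function of that type
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export it as "add"
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section, one body, no locals
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                   // local.get 0; local.get 1; i32.add; end
]);

const { instance } = await WebAssembly.instantiate(bytes);
const add = instance.exports.add as (a: number, b: number) => number;
```

The same bytes run unchanged in a browser, in Node, or in any standalone runtime; only the host imports you wire up differ.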
Servers were the future of code after mainframes: simplicity and write-once, run-anywhere code without all the complexity. They just needed networking and storage solutions similar to what mainframes had to be viable first.
Virtual machines would be the future of bare metal servers, allowing code to be written once and run anywhere, eliminating the complexity of bare metal servers. VMs just needed better networking and storage first to be viable.
Containers would replace the complexity of VMs and finally allow code to be written once and run anywhere, once orchestration for storage and networking was figured out.
Serverless would replace containers and allow code to be…
You get the idea.
The only thing holding back code from truly being “write-once, run anywhere” is literally everything that keeps it safe and scalable. The complexity is the solution. WebAssembly will start with the same promise of a Golden Path for development, until all the ancillary tooling ruins the joy (because now it’s no longer a curiosity, but a production environment with change controls and SOPs) and someone else comes along with an alternative that’s simpler and easier to use, because it lacks the things that make it truly work.
I don’t particularly care how it’s packaged in the end, so long as it runs and has appropriate documentation. Could be a container, or a VM template, or an OS package, or a traditional installer, or a function meant for a serverless platform.
Just write good code and support it. Everything else is getting lost in the forest.
If you want a vision of the future, imagine a bare metal hypervisor hosting Linux hosting K8S hosting V8 hosting a WASM-based IBM mainframe emulator running COBOL.
No oversimplification to see here.
Firefox also lets add-ons run WebAssembly modules which is something that uBlock Origin has made use of for a long time:
https://github.com/gorhill/uBlock/wiki/uBlock-Origin-works-b...
They compiled their rust libraries to wasm and it allows reuse of those rust bits across everything
The other famous example I know of, but haven't used, is Figma https://www.figma.com/blog/webassembly-cut-figmas-load-time-...
Who wants everyone to be forced to use the same BigTech slop?
no way someone would engineer their way out of not over-engineering a kubernetes stack.
The other bit of this is that FNaaS pricing is crazy expensive. Unless someone goes an order of magnitude cheaper than cloudflare's wrangler offering on the edge, I don't see it happening. You get none of the portability by writing yourself into a FNaaS cage like cloudflare or fastly.
Companies start off with business logic that solves a problem. Then they scale it (awkwardly or not) as it balloons with customers. Then they try to optimize it (sometimes by going to the cloud, sometimes by going back to their own VMs). VCs might not like "ending growth", but once you're on a stable customer base you can breathe, understand your true optimal compute, and make a commitment to physical hardware (which is cheaper than cloud scaling, and FAR cheaper than FNaaS).
The piece that might travel the entire way with you? Containers and Kubernetes.
It's been about 3 years since I dug into WASM and was bewildered about how to use it.
Is it time to dig back in?
That’s double server costs.
It's Golang, so maybe that's why?
Containers, VM's, physical servers, WASM programs, Kubernetes, and countless other technologies fill niches. They will become mature, boring technologies, but they'll be around, powering all the services we use. We take mature technologies, like SQL or HTTP for granted, but once upon a time, they were the new hotness and people argued about their suitability.
It sounds like another way to distribute software has entered the chat, and it'll be useful for some people, and not for others.
Truly buzzword compliant, this article.
The author's argument is "because it is easier today and will be as powerful as containers in the future".
Well, what if it gets as powerful but three times more complex? Frankly, I find it quite messy to develop WASM in C++ without downloading an emscripten ...container. Yeah, AFAIK, there is no WASM compiler in WASM.
Oh, and there is the *in the browser* part, also. Yeah, but the truth of the matter is that most WASM frameworks have mediocre performance compared to JS (because of the memory isolation).
In this job we love new projects. We like them so much that we keep forgetting that the vast majority of them fail.
What would you do with one if you had it? Run it on your wasm OS?
So, I'd use it the same way we use compilers in containers (i.e: one single download, no installation) and would run it with a runtime like Wasmer, Wasmtime, Wasmedge, etc.
Or else I could run it sandboxed in a browser as a PWA. Then you could build things in Chromebook, phone, etc.
$ du -hs $HOMEBREW_CELLAR/gcc/*
527M /usr/local/Cellar/gcc/14.2.0_1
(nod)
The problem with modern containers is not the container (well, it kinda is); it's the thing that works out how to run and connect that container.
Kubernetes is a shit to configure, insecure unless you spend effort tracing and managing permissions, and expensive to run.
But it is cool.
Apart from the networking. That's just fucking batshit.
TLDR: Containers aren't the problem; it's the stupid orchestration layers people insist on building.
https://bandysc.github.io/AvaloniaVisualBasic6/
https://github.com/BAndysc/AvaloniaVisualBasic6
WebAssembly brings all languages to the browser and that's a good thing.
I can write applications for the desktop in any language, I should be able to do the same thing in the browser.
WebAssembly makes that possible.
All third party browser plugins failed eventually.
This is demonstrably false.
On a serious note. What's missing in containers? Maybe they could be a bit more Nixxy? (Just use Nix to make the container)
From a consumer/user perspective, almost none of the benefits of containers are available to the end user. Feature sets vary wildly by operating system. macOS doesn't even have real containerization, and Apple has not signaled moving in that direction (not even going to bother taking Windows seriously). Jails in FreeBSD work in a completely different way from cgroups. Our phones should effectively be containerizing apps so we can e.g. control who is allowed to contact the internet, but no such functionality is offered to the user. Apps instead are simply not allowed to look at each other, but they can contact whoever they want. (Maybe a rooted Android has a slightly better feature set in this regard, but that sounds miserable to me to have to figure out.)
For writing services, yes, they're quite useful. We've only tapped a tiny part of the potential though. These could be easily repurposed to allow the end-user who uses graphical interfaces to lock down their computer.