There’s been a flurry of articles over the past few weeks (here and here) about WebAssembly (often shortened to Wasm) and its future in cloud computing. This may come as a surprise to those who know Wasm only from its initial incarnation as a browser technology, or from its second main use case, gaming. Both of those uses are flourishing, but there’s a third: Wasm as a server technology.
WebAssembly was originally designed to do a number of things, most notably to provide a fast, safe compilation target in the browser where JavaScript alone fell short, but one of the key reasons for its rising popularity is as a cross-platform abstraction layer. When developers write some code and compile it to WebAssembly (which can be done from many different programming languages), it will run – without any changes – on any platform that has runtime support for it. Initially this meant guaranteed binary compatibility across multiple browsers, on multiple operating systems, on multiple chipsets, but as WebAssembly matured, it gained a new model: WASI, the WebAssembly System Interface.
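As a sketch of what that portability looks like in practice, the following trivial Rust program compiles natively, and – assuming the wasm32-wasi target has been installed (e.g. via `rustup target add wasm32-wasi`) – also compiles to a single `.wasm` binary that a WASI runtime such as wasmtime can execute unchanged on any host:

```rust
// A trivial, portable program. Built with
// `cargo build --target wasm32-wasi`, the resulting .wasm binary
// runs unchanged under any WASI runtime (wasmtime, wasmer, etc.);
// the same source also compiles and runs natively.
fn greet(name: &str) -> String {
    format!("Hello, {name}!")
}

fn main() {
    println!("{}", greet("Wasm"));
}
```

The point is that the build target, not the source code, changes: the same function runs on macOS, Linux, or Windows, on x86 or Arm, once compiled to Wasm.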
WASI, supported by the Bytecode Alliance open source foundation, is “headless”: a server-oriented, non-GUI flavor of WebAssembly (the core WebAssembly specification is a W3C standard, and WASI is being standardized within the W3C WebAssembly community). It allows you to run server applications on different operating systems and different chipsets. So the same Wasm executable will run – again, without changes – on an Apple silicon (M2) Mac, a Linux distribution on AMD hardware, a Windows machine with an Intel CPU, or an Arm-based Raspberry Pi. Here at Profian, we’ve extended that to let you run Wasm applications in Trusted Execution Environments, supporting Confidential Computing on Intel SGX and AMD SEV-SNP platforms – all using the same executable.
Running WebAssembly workloads with Kubernetes
What does this have to do with Kubernetes?
Well, Kubernetes allows you to run applications on various platforms (mainly Linux-based) in Linux containers. Linux containers (which used to be known as Docker containers and now tend to go simply by the name “containers”) allow you to run executable code in runtimes that meet OCI (Open Container Initiative) standards. To that extent, WebAssembly is somewhat analogous to containers, in that each allows runtimes to execute a standardized executable format. You could write the same microservice to run in a container or compile it to run as a WebAssembly executable.

This, however, is where some major differences creep in. Whereas containers are essentially abstractions of the Linux operating system, WebAssembly provides a very different target layer: it is essentially a virtual machine, presenting a standardized CPU across all platforms. This means that the approach you need to take to make your microservice available as a container is very different from that for WebAssembly (though automation is emerging that should allow the choice to be made with little effort, and later in the development process). Another difference is that the set of dependencies required to run a WebAssembly executable is generally much smaller (the runtime and language dependencies make up a few tens of megabytes) than for containers (where a typical set of dependencies runs to several hundred megabytes or more). WebAssembly runtimes can also be optimized for performance and JIT compilation in ways that are generally more difficult, or unachievable, for containers. Finally, WebAssembly comes with security guarantees around isolation and capability granting that are much harder to provide with containers.
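The capability point is worth illustrating. Under WASI, a module has no ambient authority: file access, for example, only works if the host runtime explicitly grants a directory (with wasmtime, via a flag such as `--dir=.`). A minimal sketch in Rust follows; the file name is illustrative, and the code also compiles and runs natively:

```rust
use std::fs;

// Under a WASI runtime, this read succeeds only if the host has
// granted access to the containing directory, e.g.
//   wasmtime --dir=. app.wasm
// Without that explicitly granted capability, the open fails.
// There is no ambient filesystem authority, in contrast to a
// default container, which sees its whole (image) filesystem.
fn read_config(path: &str) -> std::io::Result<String> {
    fs::read_to_string(path)
}

fn main() {
    match read_config("config.txt") {
        Ok(text) => println!("config: {text}"),
        Err(e) => eprintln!("no capability or no file: {e}"),
    }
}
```

The same binary therefore carries its sandbox with it: what it may touch is decided per-invocation by the host, not baked into the image.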
Where does Kubernetes come into the story? Kubernetes allows you to orchestrate and deploy containers across multiple distributed systems, where those systems provide appropriate runtime environments. It’s really good at this, and while it can be complex to set up and run, it provides a great way to deploy and manage distributed applications across multiple systems on one or more clouds. In fact, it has become so popular for containers that it’s now possible to use Kubernetes to deploy other types of workloads as well, including virtual machines and serverless applications. And, given how popular WebAssembly is becoming, people are beginning to add support for deploying Wasm workloads with Kubernetes, too. It’s arguable that Wasm workloads are a perfect fit for Kubernetes, but you could say the same of virtual machines and serverless applications. Kubernetes is pretty versatile these days, and, importantly for this article, no equivalent orchestration platform yet exists for Wasm.
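As a concrete sketch of what that support can look like, some setups run Wasm workloads through a containerd Wasm shim (such as those from the runwasi project), selected with a Kubernetes RuntimeClass. The handler name and image reference below are illustrative assumptions, not a definitive configuration:

```yaml
# Illustrative sketch: a RuntimeClass that maps pods to a containerd
# Wasm shim (assumed here to be registered as "wasmtime" on the node),
# and a pod that opts into it. Names and image are hypothetical.
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: wasmtime
handler: wasmtime
---
apiVersion: v1
kind: Pod
metadata:
  name: wasm-demo
spec:
  runtimeClassName: wasmtime
  containers:
    - name: app
      image: registry.example.com/wasm-demo:latest  # OCI artifact wrapping the .wasm
```

From the scheduler’s point of view this is just another pod; only the runtime handler on the node knows it is executing a Wasm module rather than a Linux container.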
WebAssembly on the server is the future of computing
People who are interested in deploying and running WebAssembly workloads, then, are thinking about using Kubernetes to do so. But why use containers in the first place? Solomon Hykes, one of the initial inventors of Docker, tweeted in 2019 that “If WASM+WASI existed in 2008, we wouldn’t have needed to created [sic] Docker. That’s how important it is. WebAssembly on the server is the future of computing. A standardized system interface was the missing link. Let’s hope WASI is up to the task!”
But WebAssembly is nowhere near as mature as containers are and certainly can’t yet match the amazing ecosystem that has grown up around containers and their deployment – including the Kubernetes community and the CNCF (Cloud Native Computing Foundation).
It seems likely that containers are going to be around for a while, and that Kubernetes will continue to thrive as the way people deploy their container workloads. We can, however, expect to see WebAssembly grow unabated, and Kubernetes be used to deploy both types of workloads – and others – for the foreseeable future.
So, while WebAssembly may be set to replace containers in the medium term, it’s far from clear that anything is set to replace Kubernetes.