Misadventures in Docker and WASM Benchmarking

August 10, 2025
4 min read
Lynett Simons

I may be a sales lead, but I am also a programmer (a pretty bad one). My only experience with compiling to WebAssembly before this was playing around with Hyperware, so I thought, "why not try it out?"

Recently, I came across wasmtime, a fast and secure runtime for WebAssembly on the server, which sent me down a path of researching WebAssembly.

"If WASM+WASI existed in 2008, we wouldn't have needed to create Docker. That's how important it is. WebAssembly on the server is the future of computing." - Solomon Hykes

Specs

We'll be running these benchmarks inside a VM on my server running Proxmox, which should have similar CPU performance to Altivox's Budget VPS line.

  • OS: Ubuntu 22.04
  • CPU: 8 vCPUs (Intel Xeon E5-2660v3)
  • RAM: 32GB DDR4-2133

This benchmark will be using Docker version 28.1.1, build 4eba377. Docker has (beta) support for Wasm workloads through containerd shims for runtimes like wasmedge and wasmtime!

Initial setup

We'll be benchmarking three different environments: Docker (musl, distroless), standalone wasmtime, and standalone native (musl). I'm going with musl because static linking is what the distroless image wants, plus it's faster. The benchmark programs are written in Rust <3

(I could not get Docker + WASM working.)

There will be two benchmark programs:

  • Recursive Fibonacci (sketched just after this list)
  • HTTP webserver
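
Neither program is shown in full in this post, but the Fibonacci one is just the classic naive recursion, something along these lines (the exact n I ran isn't stated, so treat the number below as a placeholder):

// Naive recursive Fibonacci. The value of n is a placeholder, not the one actually benchmarked.
fn fib(n: u64) -> u64 {
    if n < 2 { n } else { fib(n - 1) + fib(n - 2) }
}

fn main() {
    println!("fib(46) = {}", fib(46));
}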

I'll be running the Fibonacci program 10 times for each setup using hyperfine. The webserver will be stress-tested using bombardier.
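
Reconstructed from the hyperfine output further down, the invocation looks roughly like this:

hyperfine --runs 10 \
  'docker run --rm fibonacci-regular' \
  './target/x86_64-unknown-linux-musl/release/fib' \
  'wasmtime ./target/wasm32-wasip2/release/fib.wasm'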

The setup

The setup is a kludgy mess, but it works. I decided to target the new wasm32-wasip2 / WASI v0.2 target rather than wasm32-wasip1 (formerly wasm32-wasi).
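
Getting the toolchain ready for that target (plus musl for the native build) is just a rustup/cargo affair, roughly:

rustup target add wasm32-wasip2 x86_64-unknown-linux-musl
cargo build --release --target wasm32-wasip2
cargo build --release --target x86_64-unknown-linux-musl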

First, I prepared Docker's WebAssembly support. A quick web search didn't turn up anything for Docker Engine (most of the documentation covers Docker Desktop), so I went ahead and asked Perplexity. It told me to add the following to /etc/docker/daemon.json:

{
  "features": {
    "containerd-snapshotter": true
  }
}

Then I restarted the Docker daemon. But my troubles didn't end there: see, enabling this hides all of your preexisting images (as far as Docker is concerned, they're gone), so you have to rebuild or re-pull them. I had built the images beforehand (uh oh):

docker buildx build -t fibonacci-regular .
docker buildx build -t fibonacci-wasm . -f wasm.dockerfile
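
The wasm.dockerfile itself isn't reproduced in this post; as a sketch, a Docker + Wasm image is usually nothing more than the prebuilt .wasm dropped onto scratch, along these lines:

# wasm.dockerfile (sketch): package the prebuilt .wasm into an otherwise empty image
FROM scratch
COPY ./target/wasm32-wasip2/release/fib.wasm /fib.wasm
ENTRYPOINT [ "/fib.wasm" ]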

And after that, I needed to install the shim. Unfortunately, there doesn't seem to be any good way other than downloading from https://github.com/containerd/runwasi/releases and dropping the binaries into /bin or some other folder in PATH.
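
In practice that meant something like the following; the archive name here is illustrative, so check the releases page for the file matching your architecture:

# archive name is illustrative; grab the real one from the runwasi releases page
tar xzf containerd-shim-wasmtime-x86_64-linux-musl.tar.gz
sudo install -m 755 containerd-shim-wasmtime-v1 /usr/local/bin/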

Oops. The shims (wasmtime and wasmedge) for some reason simply did not work: they would just hang without outputting anything, and I had to kill them with three ^Cs.

Giving up

Okay. I just ran docker run --rm --runtime=io.containerd.wasmedge.v1 --platform=wasi/wasm secondstate/rust-example-hello:latest to see if it was an issue with my program, and nope, same hanging.

Docker itself still works, though: docker run --rm hello-world printed a beautiful little hello-world message. Drats. Whatever, we can scrap the Docker shim...

Finally, the gosh-darned results

As expected, when it came to raw compute time, the native binary outperformed the code running in Docker, which in turn outperformed the WASM build. However, it's not really a big difference: native was only about 1.04x faster than Docker and about 1.5x faster than the WASM build.

Fibonacci

Benchmark 1: docker run --rm fibonacci-regular
  Time (mean ± σ):     16.324 s ±  0.469 s    [User: 0.045 s, System: 0.048 s]
  Range (min … max):   15.749 s … 17.393 s    10 runs
 
Benchmark 2: ./target/x86_64-unknown-linux-musl/release/fib
  Time (mean ± σ):     15.698 s ±  0.935 s    [User: 15.690 s, System: 0.006 s]
  Range (min … max):   14.561 s … 17.415 s    10 runs
 
Benchmark 3: wasmtime ./target/wasm32-wasip2/release/fib.wasm
  Time (mean ± σ):     23.533 s ±  0.769 s    [User: 23.535 s, System: 0.045 s]
  Range (min … max):   22.184 s … 25.009 s    10 runs
 
Summary
  ./target/x86_64-unknown-linux-musl/release/fib ran
    1.04 ± 0.07 times faster than docker run --rm fibonacci-regular
    1.50 ± 0.10 times faster than wasmtime ./target/wasm32-wasip2/release/fib.wasm

Webserver

Compiling the webserver, which is written in Actix Web, to WASM did not work due to a Tokio compile error; a webserver needs Tokio's networking support, which isn't in the short list of features Tokio supports on this target:

error: Only features sync,macros,io-util,rt,time are supported on wasm.
   --> /home/lyn/.cargo/registry/src/index.crates.io-1949cf8c6b5b557f/tokio-1.45.1/src/lib.rs:475:1
    |
475 | compile_error!("Only features sync,macros,io-util,rt,time are supported on wasm.");
    | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

:(
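
For context, the server being benchmarked natively and in Docker is a bare-bones Actix Web app. The real code isn't shown in this post, but it's roughly this shape:

use actix_web::{get, App, HttpResponse, HttpServer, Responder};

#[get("/")]
async fn hello() -> impl Responder {
    HttpResponse::Ok().body("Hello, world!")
}

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    // Bind to port 3030 to match the bombardier runs below.
    HttpServer::new(|| App::new().service(hello))
        .bind(("0.0.0.0", 3030))?
        .run()
        .await
}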

I used an HTTP stress tester called Bombardier to benchmark the webserver. I originally worried that the bottleneck would be the stress tester itself, as it's written in Go, but for lack of a better option I went through with it.

docker run --rm -p 3030:3030 webserver-regular

./bombardier -c 200 -d 10s http://localhost:3030
Bombarding http://localhost:3030 for 10s using 200 connection(s)
[=========================================================================================================================================================================] 10s
Done!
Statistics        Avg      Stdev        Max
  Reqs/sec     38065.93    4459.31   49631.32
  Latency        5.25ms     1.77ms    85.75ms
  HTTP codes:
    1xx - 0, 2xx - 380270, 3xx - 0, 4xx - 0, 5xx - 0
    others - 0
  Throughput:     9.17MB/s

./target/x86_64-unknown-linux-musl/release/webserver

./bombardier -c 200 -d 10s http://localhost:3030
Bombarding http://localhost:3030 for 10s using 200 connection(s)
[=========================================================================================================================================================================] 10s
Done!
Statistics        Avg      Stdev        Max
  Reqs/sec     62399.35    8079.63   80232.54
  Latency        3.20ms     0.94ms    44.35ms
  HTTP codes:
    1xx - 0, 2xx - 623764, 3xx - 0, 4xx - 0, 5xx - 0
    others - 0
  Throughput:    15.05MB/s

To conclude

Well, this was fun. It sated my curiosity, and now I know a little more about WebAssembly. Though WebAssembly on the server doesn't feel production-ready yet (especially with all the random issues I ran into, like the hanging Docker shims and the Tokio compile error), it's still an interesting idea!

Though containers compute at near-native speeds (the Fibonacci run in Docker was within about 4% of native), the Docker networking stack added a lot of overhead even for a simple port binding: the containerized webserver handled around 38k requests/s versus around 62k running natively.

Could WebAssembly runtimes beat Docker on the networking aspect? I don't know, but my guess is yes. Anyhow, til next time! I've got to get back to work.


Ready to try self-hosting for yourself? Coolify offers an excellent self-hosted PaaS experience, combined with a solid server like those provided by Altivox!

About the Author

Lynett Simons

Lyn is a programmer forced to be a vibe designer and sales lead. She rants about the evils of serverless on the Altivox Networks Twitter account.