The "running WASM on the server" use case is pretty fun and applies whenever you want to execute untrusted user code and limit its capabilities/resources, e.g. for plugins. For example, we use it in Seafowl for user-defined functions, and so does ScyllaDB. Cloudflare Workers lets you deploy arbitrary WASM code at the edge and pay, at fine granularity, just for the amount of time your function spends executing (e.g. 10ms is the limit for the free tier).
In addition, you can limit the number of instructions a certain subroutine is allowed to use up before halting.
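Real runtimes like wasmtime call this "fuel". A minimal sketch of the idea, using a made-up toy stack machine rather than any real runtime's API:

```python
# Toy sketch of instruction metering ("fuel"): every instruction costs one
# unit, and execution traps when the budget runs out. All names here are
# illustrative, not a real WASM runtime API.

class OutOfFuel(Exception):
    pass

def run_metered(instructions, fuel):
    """Run (op, arg) instructions on a tiny stack machine, charging
    one unit of fuel per instruction and trapping at zero."""
    stack = []
    for op, arg in instructions:
        if fuel == 0:
            raise OutOfFuel("instruction budget exhausted")
        fuel -= 1
        if op == "push":
            stack.append(arg)
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
    return stack[-1] if stack else None

prog = [("push", 2), ("push", 3), ("add", None)]
print(run_metered(prog, fuel=10))  # plenty of fuel: prints 5
```

The host picks the budget up front, so a runaway guest loop halts deterministically instead of hogging the CPU.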
Also, WASM subroutines can only call out to functions (including system calls) that you explicitly allow (they're sandboxed otherwise). By default, they can't open files or print to stdout unless the module imports the relevant functions (which WASI provides).
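A rough sketch of that capability-style linking, with hypothetical names (a real runtime resolves a module's declared imports against host exports at instantiation time in much the same spirit):

```python
# Sketch of capability-based imports: the guest can only call what the
# host explicitly provides. Unsatisfied imports fail at link time, so a
# module that never receives a file-open capability simply cannot open
# files. Names are illustrative, not a real WASM runtime API.

class MissingImport(Exception):
    pass

def instantiate(required_imports, host_exports):
    """Link a 'module' against host functions; any unsatisfied
    import aborts instantiation."""
    table = {}
    for name in required_imports:
        if name not in host_exports:
            raise MissingImport(f"host does not provide '{name}'")
        table[name] = host_exports[name]
    return table

# Host chooses to expose logging, but NOT file access.
host = {"log": print}
env = instantiate(["log"], host)       # fine
try:
    instantiate(["open_file"], host)   # denied: capability never granted
except MissingImport as e:
    print(e)
```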
It's the rebirth of Java without the drawback of necessarily needing a runtime: write once in whatever (supported) language and run everywhere, provided other platforms like OSes embrace the standard. The closest thing I've found is Flutter, but that's Dart-only.
Another use case: imagine a single program mixing and matching many different languages because they all compile down to WASM as a universal format. This already happens on the frontend web with so-called microfrontends, which basically let you write various components in React, Vue, Angular, etc., and it all compiles down to JS.
I personally think that WASM images will eventually displace (or merge with?) OCI images in server-side use cases. It remains to be seen whether this will happen by the current OCI-focused stuff (Kubernetes mainly) adding support for WASM apps, or by the WASM ecosystem developing its own parallel tooling of the same type (somebody coming up with "Kubernetes for WASM"). I think the latter could end up much cleaner, but there is a lot of momentum behind the former.
I think a big use case is running kernels/UDFs directly inside databases or other data infrastructure systems. Imagine you need a custom function in Postgres: instead of implementing it in C and shipping a shared library, you would implement it in WASM.
Not if you care about side channels. There really aren't any good solutions here that let you safely run untrusted code in a shared address space. Yes, you can slap barriers behind every single branch and after every single store (memory-ordering violations can cause misspeculation in straight-line code!), but that comes with an enormous performance cost (academics regularly brag about inventing new mitigations with "only" a 20% performance overhead, lmao). The only option we've really found is to make interesting secrets inaccessible (site isolation in Chrome, for example). Trying to shove all user applications into a flat address space would mean giving any application the ability to read arbitrary memory, which is not great.
I think they just worded that poorly. I suspect their suggestion is not that you run with the MMU _off_ (doing so would trash your perf anyway, since everything becomes uncacheable!) but rather that you don't need to context-switch the page tables, which can lead to some pretty decent performance gains given that you can (on some platforms) avoid TLB flushes. Nowadays, though, I seriously would not consider the page-table switch to be a significant cost, since (on ARMv8 anyway) you have ASIDs, so switching tables is just a single msr+isb.
Nope, I meant what I wrote.
You don't need an MMU, because you have no need for virtual memory or similar.
Checking that an integer is within some bounds is a much simpler problem, and in some cases the check can be elided entirely through static analysis of the code.
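Concretely, the safety check on a guest memory access is just one comparison against the size of the module's linear memory. A toy sketch (illustrative names, not a real runtime):

```python
# Sketch of a WASM-style linear-memory bounds check: every load/store into
# the module's flat byte array is a single range comparison, and an
# out-of-bounds access traps instead of reading the host's memory.

class Trap(Exception):
    pass

def load_u8(memory: bytearray, addr: int) -> int:
    """Checked one-byte load from linear memory."""
    if not (0 <= addr < len(memory)):
        raise Trap(f"out-of-bounds access at {addr}")
    return memory[addr]

mem = bytearray(b"hello")
print(load_u8(mem, 1))    # 101, i.e. ord('e')
try:
    load_u8(mem, 99)      # trapped, not a wild read
except Trap as e:
    print(e)
```

When the compiler can prove the address is in range (a constant index, or a loop bound already checked), it can drop the comparison entirely, which is the elision mentioned above.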
Unfortunately, then, you'll need new CPUs. You don't _need_ memory protection, but CPUs today are built on the assumption that you're using it and have thus stapled many critical attributes of memory (such as cacheability and shareability) to it.