Wasm 2.0 Completed - https://webassembly.org/news/2025-03-20-wasm-2.0/
> …here is the summary of the additions in version 2.0 of the language:
Vector instructions: With a massive 236 new instructions — more than the total number Wasm had before — it now supports 128-bit wide SIMD (single instruction, multiple data) functionality of contemporary CPUs, like Intel’s SSE or ARM’s SVE. This helps speeding up certain classes of compute-intense applications like audio/video codecs, machine learning, and some cryptography.
Bulk memory instructions: A set of new instructions allows faster copying and initialization of regions of memory or ranges of tables.
Multi-value results: Instructions, blocks, and functions can now return more than one result value, sometimes supporting faster calling conventions and avoiding indirections. In addition, block instructions now also can have inputs, enabling new program transformations.
Reference types: References to functions or pointers to external objects (e.g., JavaScript values) become available as opaque first-class values. Tables are repurposed as a general storage for such reference values, and new instructions allow accessing and mutating tables in Wasm code. In addition, modules now may define multiple tables of different types.
Non-trapping conversions: Additional instructions allow the conversion from float to integer types without the risk of trapping unexpectedly.
Sign extension instructions: A new group of instructions allows directly extending the width of signed integer value. Previously that was only possible when reading from memory.
> Instructions, blocks, and functions can now return more than one result value, sometimes supporting faster calling conventions and avoiding indirections.
Unfortunately, despite being "enabled", Rust+LLVM don't take advantage of this because of ABI compatibility mess. I don't know whether the story on Clang's side is similar.
Inside functions, there is perhaps a 1-3% code size opportunity at best (https://github.com/WebAssembly/binaryen?tab=readme-ov-file#b...), and no performance advantage.
Between functions there might be a performance advantage, but as wasm VMs do more things like runtime inlining (which becomes more and more important with wasm GC and the languages that compile to it), that benefit goes away.
I figured out a way to get multi-value results on GCC for 32-bit ARM: use a union to pack two 32-bit values into a 64-bit value, return the 64-bit value, then use a union to split the 64-bit value back into two 32-bit values. I haven't tested it on other 32-bit architectures though.
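The same packing trick shows up at the Wasm/JS boundary. A minimal sketch of the consumer side, assuming a hypothetical export `pair` that packs two u32 results into a single i64 (which surfaces in JS as a BigInt):

// Hypothetical export returning two u32s packed into one i64 (a BigInt in JS).
const packed = instance.exports.pair();
const lo = Number(packed & 0xffffffffn);          // low 32 bits
const hi = Number((packed >> 32n) & 0xffffffffn); // high 32 bits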
"As a result there is no longer any possible method of writing a function in Rust that returns multiple values at the WebAssembly function type level."
They also say at the end, "In a future post we will take a look at Wasm 3.0, which is already around the corner at this point!" So I suppose Wasm 3.0 is coming very soon?
You can have an ISA sufficiently generic to run on any CPU, or one sufficiently specific to efficiently exploit SIMD on a particular CPU. Never both. That's why some platforms provide higher-level operations, like element-wise multiplication of packed arrays. I can't see whether the actual WASM2 SIMD instructions are sufficiently generic because apparently I'm rate-limited on GitHub (???) and therefore can't see the spec.
WebAssembly Specification - Release 3.0 (Draft 2024-11-07): https://webassembly.github.io/spec/versions/core/WebAssembly... (PDF)
Source: https://github.com/WebAssembly/spec
Values are hardwired to 128 bits which can be i8x16/i16x8/i32x4/i64x2 or f32x4/f64x2, so that already limits the 'feature surface' drastically.
IMHO as long as it covers the most common use cases (e.g. vec4 / mat4x4 floating point math used in games and a couple of common ALU and bit-twiddling operations on integers) that's already quite a bit better than having to fall back to scalar math.
They were sufficient for me to implement most of `string.h` and get speedups between 4 and 16x vs “portable musl C code,” including sophisticated algorithms such as this one:
http://0x80.pl/notesen/2016-11-28-simd-strfind.html
I posted about my efforts here: https://news.ycombinator.com/item?id=43935284
Or, if you wanna jump to the code: https://github.com/ncruces/go-sqlite3/blob/main/sqlite3/libc...
> apparently I'm rate-limited on GitHub (???) and therefore can't see the spec.
Are you also on Firefox? I've been getting those 429s a lot over the past week or so. I haven't changed my configuration other than I'm religious about the "check for updates" button, but I cannot imagine a world in which my release-branch browser is a novelty. No proxies, yes I run UBO but it is disabled for GH
The WebAssembly spec is quite approachable, but for anyone who is interested in learning Wasm and doesn't want to read the spec —
WebAssembly from the Ground Up (https://wasmgroundup.com/) an online book to learn Wasm by building a simple compiler in JavaScript. It starts with handcrafting bytecodes in JS, and then slowly builds up a simple programming language that compiles to Wasm.
There's a free sample available: https://wasmgroundup.com/book/contents-sample/
(Disclaimer: I'm one of the authors)
const MIN_U32 = 0;
const MAX_U32 = 2 ** 32 - 1;
function u32(v) {
if (v < MIN_U32 || v > MAX_U32) {
throw Error(`Value out of range for u32: ${v}`);
}
return leb128(v);
}
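For context, the snippet leans on a `leb128` helper defined elsewhere; a minimal sketch of the unsigned LEB128 encoder it implies (the book's actual version may differ):

// Encode a non-negative integer (up to 2^32 - 1) as unsigned LEB128 bytes.
function leb128(v) {
  const bytes = [];
  do {
    let byte = v & 0x7f;        // take the low 7 bits
    v = v >>> 7;                // unsigned shift handles the full u32 range
    if (v !== 0) byte |= 0x80;  // set the continuation bit
    bytes.push(byte);
  } while (v !== 0);
  return bytes;
}

// u32(624485) => [0xe5, 0x8e, 0x26]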
I love Ada, because you can do this:
subtype U32 is Interfaces.Unsigned_64 range 0 .. 2 ** 32 - 1;
I know. That said, I keep mentioning Ada because it is widely used in mission critical systems, and because it supports contracts (yes, I know, so does Eiffel), and you can do formal verification using Ada / SPARK, meaning that it could be used in place of Rust, whereas Pascal probably could not.
Is it possible to "instrument" the WASM code to enable in-process debugging? In other words, would it be possible to generate WASM based off some input string (my custom language) on-the-fly, and then run it with breakpoints and memory inspection, all within the same JavaScript script hosted on, say, a web page?
That still requires the usage of dev tools and linear source code mapping between the original and the generated WASM, correct? Would it be possible to avoid dev tools and implement the debugger in JavaScript? Or does the WASM technology not provide such an opportunity?
I'd like to break on breakpoints, jump back into JavaScript, unroll it into the original location in the source code, and display it all in an IDE-like window, all within a browser page and without involvement of dev tools (which can't handle non-linear conversions between source code and generated JS/WASM).
Yes, if you use bytecode rewriting then all the offsets are changed and you need a mapping. This is one of the advantages of engine-side instrumentation; bytecode offsets don't change. It'll be some time before we can get engines to agree on a standard interface for instrumentation, but there have been some discussions.
Whamm can inject arbitrary instrumentation logic, so you could, e.g. inject calls to imports that are implemented in JS. You'll have some heavy lifting to do on the JS side.
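One sketch of that heavy lifting, for the pausing part specifically: run the instrumented module in a Worker and implement the injected `breakpoint` import so it blocks until the main thread resumes it. (All names here are assumptions; Atomics.wait is only allowed off the main thread, and SharedArrayBuffer needs the page to be cross-origin isolated.)

// Inside the Worker: block on an injected breakpoint until the main thread
// (which received `flag` via postMessage) tells us to continue.
const flag = new Int32Array(new SharedArrayBuffer(4));
postMessage({ type: "init", flag });
const imports = {
  env: {
    breakpoint: (offset) => {
      postMessage({ type: "paused", offset }); // let the UI show where we stopped
      Atomics.wait(flag, 0, 0);                // sleep while flag[0] == 0
      Atomics.store(flag, 0, 0);               // re-arm for the next breakpoint
    },
  },
};
// Main thread resumes with: Atomics.store(flag, 0, 1); Atomics.notify(flag, 0);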
Visual Studio supports debugging C# compiled to WASM when your page is made with Blazor.
Granted, you're debugging in another window that isn't a browser; but overall the debugger gives you about 80% of what you get when debugging a .NET process running outside the browser.
I'm not sure I totally understand what you mean by "in-process" here. But you could have some JavaScript that compiles some code in your custom language to WebAssembly and then execute it, and you can use the browser dev tools to set breakpoints the Wasm, inspect the memory, etc.
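A minimal sketch of that flow, where `compileMyLang` stands in for your own string-to-bytes compiler (hypothetical name), and the module is assumed to export `main` and its `memory`:

// (Run inside an async function or a module.)
const bytes = compileMyLang(sourceString);   // Uint8Array of Wasm bytecode
const { instance } = await WebAssembly.instantiate(bytes, {
  env: { log: (x) => console.log("from wasm:", x) }, // optional host hooks
});
instance.exports.main();
// Linear memory is an ArrayBuffer, inspectable from JS at any time:
const mem = new Uint8Array(instance.exports.memory.buffer);
console.log(mem.slice(0, 16));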
In the book, we don't cover source maps, but it would also be possible to generate source maps so that you can set breakpoints in (and step through) the original source code in your custom language, rather than debugging at the Wasm instruction level.
Does that answer your question?
Sadly, no. I'd like to write a ~Prolog interpreter (a compiler into WASM that would dynamically replace parts of the implementation as source code evolves), and have the debugger and WASM memory inspector as part of the web page written in JavaScript, which was used to compile the code in the first place.
That is, would it be possible to implement a debugger and memory inspector in JavaScript without resorting to dev tools? Prolog doesn't map 1:1 to WASM/JavaScript via source maps, making it nearly impossible to properly debug it in dev tools.
re: "dynamically replace parts of the implementation as source code evolves" — there is a technique for this, I have a short write-up on it here: https://github.com/pdubroy/til/blob/main/wasm/2024-02-22-Run...
About the debugging and inspecting —
Inspecting Wasm memory is easy from JS, but to be able to do the debugging, you'd probably either need to rewrite the bytecode (e.g., inserting a call out to JS between every "real" instruction) or a self-hosted interpreter like wasm3 (https://github.com/wasm3/wasm3).
(Or maybe there are better solutions that I'm not thinking of.)
I like the idea of WASM but it often feels like DAPPs: this kinda fun idea that nothing is actually based on. Maybe I'm wrong.
I've been working on WebAssembly runtimes for the last year or so, specifically on spec compliance and performance. I came to it as a neophyte, but I've become quite fond of the specification. It's a bit hard to get started with the notation, but it's refreshing to have such a thoroughly specified language. There is a reference interpreter generated directly from the spec, which is very useful for resolving questions about what a runtime should do in a given situation.
The provided specification tests allow implementers to be confident that their runtime conforms to the spec.
Overall I think it's an impressive specification and is worth studying.
> A contraction of “WebAssembly”, not an acronym, hence not using all-caps.
There aren't any particular rules about contractions and intermediate capitalization so we are free to choose. WAsm is more awkward than Wasm so the latter seems better.
It’s one thing to take an acronym and “demote” it to a common noun if it’s being used often by wide public (not unlike how proper nouns become common nouns), it’s another thing to randomly pretend that a regular noun is an acronym. I’m looking at you, photographers shouting “RAW” in all caps whenever the topic comes up. “WASM” rubs me wrong for the same reason.
I admit to being guilty of this and mimicking whatever form I encounter first, but then I'd switch once I look it up. I don't quite understand why anyone would do otherwise.
Great release with many welcome features. As a nit, I'm rather disappointed at the inclusion of fixed-size SIMD (128-bit wide) instead of adaptive SIMD instructions letting the compiler maximize SIMD width depending on host capabilities, similar to how ARM SVE works.
Personally I prefer fixed-size SIMD, mainly because it enables more usages than usual vector instructions, while vector instructions can be rather trivially lowered to fixed-size SIMD instructions. I'd call them "opportunistic" usages, because those are perfectly fine without SIMD or vector units but only get vectorized due to the relatively small size of SIMD registers. Those usages are significant enough that I see them as useful even in the presence of vector instructions.
If you have variable length SIMD, you can always treat them as fixed-size SIMD types.
New x86 processors don't execute 128-bit SIMD any faster; the vector ALUs are all wider now, and 128-bit and 256-bit instructions have the same throughput and latency.
Also, do you have an example for such "opportunistic" usages?
I suppose mainly things the SLP vectorizer can usually do already (in compiled languages, I'm not sure how good the JIT is these days).
I worry that we may now end up in a world where "hand optimized SIMD" in WASM ends up slower than autovectorization, because you can't use the wider SIMD instructions, leaving 2x (Zen 4) to 4x (Zen 5) of the performance on the table.
> Also, do you have an example for such "opportunistic" usages?
The simplest example would be copying a small number of bytes (like, copying structs). Vector instructions generally have a higher setup cost (like setting the vector length), so they can't really be used for this purpose. Maybe future vector instructions will have no such caveats and can be used just like SIMD, but AFAIK that's not yet the case even for RISC-V's V extension.
Premature optimization is the root of all evil, and this SIMD mess could have been implemented so much more elegantly if they had just followed the general variable-size wasm flexible-vectors proposal:
https://github.com/WebAssembly/flexible-vectors
What's truly missing for Wasm and WASI to be an alternative to POSIX is dynamic instantiation, so that a Wasm program/component can start another Wasm program/component by providing the bytecode at runtime. So far I don't think anyone is working on that.
If the host provides the guest wasm module, via imports, a function to create and run a module from an array of bytes, then it can be done today (if I understand you correctly).
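On the JS host side, such an import could look roughly like this (a sketch with invented names: `guest` is the already-running instance, `hostImports` its import object; note browsers cap synchronous compilation size on the main thread):

function spawnFromGuest(ptr, len) {
  // Copy the bytecode out of the guest's linear memory,
  const bytes = new Uint8Array(guest.exports.memory.buffer, ptr, len).slice();
  // compile and instantiate it synchronously,
  const module = new WebAssembly.Module(bytes);
  const child = new WebAssembly.Instance(module, hostImports);
  // and run its entry point, if it has one.
  child.exports._start?.();
}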
Yes, this can be done today, though it needs some gluing together. On the Web (and in Node etc.) you can use JavaScript to create and link the modules, which is how dynamic linking support for wasm works there:
https://emscripten.org/docs/compiling/Dynamic-Linking.html
Here's some related content: https://github.com/pdubroy/til/blob/main/wasm/2024-02-22-Run...
https://github.com/pannous/wasp
Most have been for some time now. As the announcement post says:
> the Wasm Community and Working Groups had reached consensus and finished the specification in early 2022. All major implementations have been shipping 2.0 for even longer.
> In a future post we will take a look at Wasm 3.0, which is already around the corner at this point!
Features in 3.0 are presumably also mostly implemented already, some maybe just kept behind feature flags.
Wasm 2.0 is complete in a handful of engines, whereas 3.0 is less well-supported.
Wizard is almost done with 3.0; only memory64 and relaxed-simd are incomplete.
Is there anywhere I could look at the "changelogs" between 1.0 and 2.0, and 2.0 and 3.0? All I could find are different versions of the spec that don't seem very keen on spelling out what changed.
A bytecode for the web was a dream for a very long time.
As a C# developer who appreciates Blazor being on the cutting edge with WASM from early on, I'm looking forward to WASM 2.0's changes being added. .NET has a massive jump on this and I think it's one of the better bets they've taken.
It's more accurate to say those do the boilerplate of memory access necessary for complex types for you. You're still basically limited to integers and floats.
With the component model's WIT you get higher-level types like enums, option, result and generics: https://component-model.bytecodealliance.org/design/wit.html
But when you think about it, isn't that basically true for native languages?
> WebAssembly provides no ambient access to the computing environment in which code is executed. Any interaction with the environment, such as I/O, access to resources, or operating system calls, can only be performed by invoking functions provided by the embedder and imported into a WebAssembly module.
more info here:
- https://webassembly.org/docs/security/
- https://www.w3.org/TR/wasm-core-1/#design-goals%E2%91%A0
TL;DR: depending on your use case and definition of "safe", in most general-purpose cases each can be assumed to be, in general, as safe as the other.
For the sandbox itself it's hard to say; let's just say that for most considerations they can be treated as equally safe.
But many vulnerabilities have been in APIs interacting with external resources, I/O, etc. And currently in the browser that generally goes through JS, so some would say JS is more secure.
But it's not that a WASM engine can't provide such APIs to WASM without going through JS (e.g., see WASI), and whether it's WASM or JS, they semantically only have access to these APIs through their engine, which can guard/filter/limit/etc. the APIs however it wants (i.e., you can't call the system's libc functions directly or anything like that).
So in general I would say the question of whether one is "safer" than the other is meaningless.
Especially if we compare a custom JS-only vs. WASM-only sandbox which doesn't have the DOM and all the old JS browser APIs. Though with those APIs in play, you could say WASM is slightly safer.
There are also some other interesting considerations: e.g., in JS you have eval (and the DOM to do eval in a roundabout way), while in WASM you have memory safety issues (depending on the source language; though due to WASM's design they are much, much less abusable than in native C, they can still involve vulnerabilities that affect program behavior in security-relevant ways, e.g. an overflow overwriting a "valid" flag or similar).
Anyway, if asked "in general", I think there is no meaningful answer other than to treat them as the same.
But if you have specific use-cases/needs things might differ.
Without direct browser support for WASM with DOM access (and no need for a JavaScript "shim"), all this is futile.
I did 5 game jams in WebAssembly last year and found it quite painful overall.
Emscripten is very bloated, but it's the best option from what I can tell.
I lost a whole day of a 3 day game jam to a weird Emscripten bug. It ended up being that adding a member to a class blew up the whole thing.
The alternative (and the only option, if you want it to be as light as possible) is to do the bindings yourself, which is fun, depending on how much your concept of fun involves JavaScript, and having half your code in a different programming language.
I'm told the Rust situation is pretty nice, although my attempt didn't get anywhere — apparently I tried to use it in exactly the opposite way that it was intended.
I had a pretty nice time with Odin. Someone put raylib wasm bindings for Odin on GitHub, and it worked really well for me.
(Odin syntax is really nice, but you don't realize just how nice, until you port your game to another language!)
Zig was cool, but a bit pedantic for a jam, and a bit unstable (kept finding out of date docs). Didn't see much in the way of game libs, but I was able to ship a Zig game with WASM-4.
I ended up switching to TS, which I'm not happy with, but since you (usually) need JS anyway, the benefit of having a single language, and a reliable toolchain, is very high, especially under time pressure. The "my language is nice" benefits do not in my experience outweigh the rest of the pain.
https://janpfeifer.github.io/hiveGo/www/hive/
Probably everything JS and DOM is better supported from TS, but I have to say, I was never blocked on my small project.
What do you mean, DOA? It's been in active use for years now.
As far as I know, "2.0" is just a marketing term batching several extensions standardized since 1.0 (and simplifying feature queries "are extensions X,Y,Z supported" to "is 2.0 supported"), not unlike what Vulkan does with their extensions.
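In the same spirit, individual extensions can still be probed by validating a tiny module that uses one of their instructions. A sketch with hand-written bytes: this returns true iff the engine knows the 2.0 sign-extension opcode i32.extend8_s.

// Probe for one 2.0 feature: validate a module whose body uses i32.extend8_s.
const hasSignExt = WebAssembly.validate(new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00, // magic + version
  0x01, 0x06, 0x01, 0x60, 0x01, 0x7f, 0x01, 0x7f, // type section: (i32) -> i32
  0x03, 0x02, 0x01, 0x00,                         // one function of type 0
  0x0a, 0x07, 0x01, 0x05, 0x00,                   // code section: one body, no locals
  0x20, 0x00, 0xc0, 0x0b,                         // local.get 0; i32.extend8_s; end
]));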
On the contrary, the browser is the only place where it makes sense.
Outside of the browser it's only VC-backed companies pretending that bytecode-based distribution isn't something that has existed since 1958, with wins and losses. Many of those formats were polyglot; supporting languages like C in bytecode was already done in 1989 with the Architecture Neutral Distribution Format, among many other examples.
If you're talking about WASI, well personally I'm not interested in it and we're just using plain wasm in the browser. However, nothing in this linked post is about WASI specifically.
The HN crowd has just always been terminally myopic about this because it has "web" in the name.
> A common misconception is that WebAssembly can only run in web browsers. Although "Web" is part of its name, WebAssembly is not limited to browsers. It's designed to be a platform-independent technology that can run in various environments, including IoT devices, edge computing, artificial intelligence, game development, backend services, or cloud services. Its portable binary format allows it to execute efficiently across different platforms and architectures.
[0] https://learn-wasm.dev/tutorial/introduction/what-webassembl...
I am not going to lie, I thought the same because of the name, too.
Think of it like German names, where people are often named for where they came from. Berliner, Münchner, etc. WebAssembly is so named because it came from the web :)
We already have lots of bytecode formats.
Which were mostly tied down to specific languages and GC'd runtimes. You seem to have a big problem with Wasm just because bytecode runtimes have been done before.
https://www.youtube.com/@wasmio
According to those, likely to replace containers and likely to be integrated in more and more systems.
It seems like it's exploding in popularity and usage because it solves some very real problems.
"More than 20 programming tools vendors offer some 26 programming languages — including C++, Perl, Python, Java, COBOL, RPG and Haskell — on .NET."
https://news.microsoft.com/source/2001/10/22/massive-industr...
Ah, it isn't portable, maybe 1989?
"The Architecture Neutral Distribution Format (ANDF) in computing is a technology allowing common "shrink wrapped" binary application programs to be distributed for use on conformant Unix systems, translated to run on different underlying hardware platforms. ANDF was defined by the Open Software Foundation and was expected to be a "truly revolutionary technology that will significantly advance the cause of portability and open systems",[1] but it was never widely adopted."
"The Amsterdam Compiler Kit (ACK) is a retargetable compiler suite and toolchain written by Andrew Tanenbaum and Ceriel Jacobs, since 2005 maintained by David Given.[1] It has frontends for the following programming languages: C, Pascal, Modula-2, Occam, and BASIC."
Furthermore, “I recognize this technology as a rehash of something that already exists” is just extremely uninteresting conversation without getting into the specifics. Clearly there are differences. Let’s talk about the merits and demerits of the thing.
Personally I don't find it that painful to write the little JS code to send browser input to wasm. I'm having a lot of fun with it. The SIMD stuff alone, to speed up whatever you're working with, often makes it worth writing a C version of things.
If you want to run code written in other languages in the browser, you could just as well compile to JavaScript.
All Wasm brings to the table is a bit of a speed improvement.
We do high performance computer vision in the browser (at 30/60 fps) that's an order of magnitude faster in WASM than JS. It simply would not be fast enough without WASM.
What types of operations are 10x faster in Wasm than in JS? Why can't the JIT compiler compile JS to the same native code as your Wasm gets compiled to?
Basically everything numerical. I have similar experience: I translated some JS code to wasm (a simple template matching algorithm, basically doing the same thing: looping over an ArrayBuffer and computing some sums) and it was 10x faster.
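To make that concrete, the hot loop in template matching is roughly this shape (a hypothetical sketch, assuming `image` and `template` are Uint8Array views over the buffers):

// Sum of absolute differences between a template and an image patch.
function sad(image, imgWidth, ox, oy, template, tw, th) {
  let sum = 0;
  for (let y = 0; y < th; y++) {
    for (let x = 0; x < tw; x++) {
      sum += Math.abs(image[(oy + y) * imgWidth + ox + x] - template[y * tw + x]);
    }
  }
  return sum; // wasm wins here: a tight integer loop with no JS dynamic-type checks
}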
We do for some elements of our pipeline. WebGPU will give us more opportunity for this in the future too, but right now we need broader device support unfortunately.
Without more details on their exact use case, their algorithms, and their data movement patterns, you have no way of knowing this. Doing stuff on the GPU isn't automatically faster than doing it on the CPU.
For physics it's in some cases way easier to implement on the CPU. For ML, using wasm means the model loads way faster (the user doesn't have to wait, and sometimes the performance is about the same). For some small models wasm can be faster. MediaPipe by Google, for instance, gets better performance and better latency with their wasm model than the GPU one.
Not having the JS GC randomly pausing your process unpredictably.
Sandboxing untrusted code, i.e. you sell a SaaS and you also want clients to be able to run untrusted plugins from a marketplace.
If JS had native SIMD, probably. But it doesn't, and it won't (because it's complex to do in JS, and you can just use Wasm instead), so it really can't compete because of just how much faster Wasm can be these days.
It's not for lack of trying either: there was a JS SIMD proposal which got pretty far along, but then everyone came to their senses and scrapped it to focus on WASM SIMD instead.
https://github.com/tc39/ecmascript_simd
That's a bit of a trick question: the typical use case for each is different, and you don't use them interchangeably, so the comparison doesn't make that much sense.
Typical use case for JS is, let's say, glue between the network and the DOM, where it doesn't really do much; most of that work is done by the browser anyway. If you add wasm to that, you'll just add one more indirection through the wasm sandbox, and it'll probably be slower in many cases, because you have to copy data.
Typical use case for Wasm is either porting existing native programs or something compute heavy. Figma uses this for their native layer; I used it for some image processing use cases and for a board game solver backend. Doing that in JS is slower because JS semantics are not straightforward to optimize, even for basic numerical operations. I found that a 3-10x speedup for this kind of code is pretty common, but it depends on what it is doing: whether JS can represent the types and operations well.
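That boundary copy is literally a memcpy into and out of linear memory on the JS side. A sketch, where `instance`, the exported `alloc`/`process` functions, and the result layout (written in place, length returned) are all assumptions:

// Copy input into Wasm linear memory, call, then copy the result back out.
const ptr = instance.exports.alloc(input.length);
new Uint8Array(instance.exports.memory.buffer).set(input, ptr);
const outLen = instance.exports.process(ptr, input.length);
// Re-create the view: the old buffer detaches if the call grew memory.
const out = new Uint8Array(instance.exports.memory.buffer).slice(ptr, ptr + outLen);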
> If you want to run code written in other languages in the browser, you could just as well compile to JavaScript.
That's how asm.js was conceived. You can compile other languages to a subset of JS that is super JIT-friendly and performs stupidly well.
I think WASM came to be due to a desire to run that resulting code in a heavily sandboxed environment, with much more limited access to certain APIs than the rest of your JavaScript code.
Unrelated, but it's not clear to me why you're getting downvoted. Your point sounded genuine and didn't look like flamewar bait.
Same, it still feels like too much of a grift for VC monies to me.
Not a hater, though it's fun to run Doom in a browser tab... just can't see any business value in 99% of its ecosystem, especially with the drift away from web (the only niche where it made sense).
Just because 95% of web apps are CRUD DOMs doesn’t mean such a technology is not important.