I think it's a huge exaggeration to say "the support for DOM stuff is ready now."
When writing most non-web software, you can usually write it quickly in a high-level language (with a rich standard library and garbage collection), but you can get better performance (with more developer effort) by writing your code in a lower-level language like C or Rust.
What developers are looking for is a way to take UI-focused DOM-heavy web apps, RIIR, and get a performance improvement in browsers.
That is not ready now. It's not even close. It might literally never happen.
What is ready now is a demo project where you can write WASM code against a DOM-like API running in Node.js.
What you have is an interesting demo, but that's not what we mean when we ask when WASM will get "DOM support."
> What developers are looking for is a way to take UI-focused DOM-heavy web apps, RIIR, and get a performance improvement in browsers.
Could you expand a bit on what you expect would make this possible? What would you say are the most important blockers stopping people from getting there right now?
Honestly, I can't believe you're asking me this! You're so far in the weeds of the WASM component model that you have no idea what the problem is. It's really silly to ask what the "most important blockers" are, as if it's a list of bugs that can be fixed.
But, sure, in good faith, here's the problem.
Today, if you take a UI-focused DOM-heavy web app that makes lots and lots of DOM API calls (i.e. most JS web apps ever written) and try to rewrite it in Rust, you'll have to cross the boundary between JS and WASM over and over again, every time you use a DOM API. Every time you add/remove/update an element, or its styles, or handle a click event, you'll cross the boundary.
The boundary is slow because every time you touch a JS string (all CSS styles are strings!), you'll have to serialize it (in JS) into a byte array and send it into WASM land, do your WASM work, transfer back a byte array, and deserialize it into a JS string. (In the bad old days you had to copy the byte array in and out to do any I/O, but at least we have reference types now.)
And it's not just strings. All JS objects that you need to do actual work with have to be serialized/deserialized in this way, because WASM only knows about bytes, arrays, and opaque pointers. DOM elements/attributes, DOM style properties, DOM events (click events, keyboard events, etc.), all of them get slow when you transfer them in and out of WASM land, even with reference types, because of serialization/deserialization.
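To make this concrete, here's a rough sketch of what that inner loop looks like with today's tooling (using the `wasm-bindgen`/`web-sys` crates, and assuming the relevant `web-sys` features like `Window`, `Document`, `Element`, and `Node` are enabled -- exact glue costs vary by toolchain and browser):

```rust
use wasm_bindgen::prelude::*;
use web_sys::window;

#[wasm_bindgen]
pub fn render_rows(count: u32) -> Result<(), JsValue> {
    // Every call below leaves WASM, runs through JS glue, and re-enters the engine.
    let document = window().unwrap().document().unwrap();
    let list = document.get_element_by_id("list").unwrap();

    for i in 0..count {
        // The tag name is copied out of linear memory into a JS string.
        let row = document.create_element("div")?;
        // Another string copy/re-encode across the boundary.
        row.set_class_name("row");
        // The formatted Rust String is copied out and turned into a JS string.
        row.set_text_content(Some(&format!("row {i}")));
        // And appending the node is yet another round trip through the glue.
        list.append_child(&row)?;
    }
    Ok(())
}
```

That's four boundary crossings per iteration before the engine does any actual layout work.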
WASM interface types will make it easier to call JS from WASM, but as long as you're still calling JS in the end, rewriting in Rust will never make a DOM-heavy web app faster than writing it in JS.
That's why this sucks! Rewriting a Node.js app in Rust (or Go or Zig, etc.) normally yields huge performance gains (at huge developer effort), but rewriting a JS DOM-heavy web app in Rust just slaps Rust on top of JS; it usually makes it slower.
The only fix, as Daniel's article explains, would be to standardize a low-level DOM API, one that didn't assume that you can use JS strings, objects+properties, exceptions, promises, etc. This would be an unimaginably large standardization project.
You couldn't use WebIDL at all; you'd need to start by defining a new "low-level WebIDL." Then, you'd start standardizing the entire DOM API, all over again (or at least the most important parts) in low-level WebIDL, and then browser vendors could start implementing the low-level DOM API.
Then WASM could start calling that API directly. And maybe then you could rewrite web apps in Rust and have them get faster.
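To give a flavor of what such an API might look like (a purely hypothetical sketch -- none of these imports exist or are proposed in this form), the surface would be integer handles and pointer/length pairs rather than JS strings and objects:

```rust
// Hypothetical "low-level DOM" imports: integer handles instead of JS objects,
// (pointer, length) pairs instead of JS strings, status codes instead of exceptions.
type NodeHandle = u32;

extern "C" {
    fn dom_create_element(tag_ptr: *const u8, tag_len: usize) -> NodeHandle;
    fn dom_set_attribute(
        node: NodeHandle,
        name_ptr: *const u8,
        name_len: usize,
        value_ptr: *const u8,
        value_len: usize,
    ) -> i32; // status code instead of a thrown JS exception
    fn dom_append_child(parent: NodeHandle, child: NodeHandle) -> i32;
}
```

Multiply that by every interface in the DOM spec and you get a sense of the size of the standardization job.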
Until then, WASM is only faster for CPU-intensive tasks with I/O at the beginning/end, and otherwise it's only good for legacy code, where you don't have time to make it faster by rewriting it in JS.
(It should sound insane to anyone that taking C++ and rewriting it in JS would make it faster, but that's how it is on the web, because of this WASM boundary issue.)
So, what's the most important blocker? (gesture toward the universe) All of it??
Thanks for sharing your thoughts -- I think I have a clearer grasp of what you were getting at now.
> Honestly, I can't believe you're asking me this! You're so far in the weeds of the WASM component model that you have no idea what the problem is. It's really silly to ask what the "most important blockers" are, as if it's a list of bugs that can be fixed.
This is a pretty unproductive way to converse -- I asked because I wanted to know what you thought the blockers were. Problems can be broken down into smaller ones, and getting on the same page with regards to the problem is key to producing better software and meeting user needs.
We can't solve a problem if we can't enumerate its pieces, and what people want varies.
> Today, if you take a UI-focused DOM-heavy web app that makes lots and lots of DOM API calls (i.e. most JS web apps ever written) and try to rewrite it in Rust, you'll have to cross the boundary between JS and WASM over and over again, every time you use a DOM API. Every time you add/remove/update an element, or its styles, or handle a click event, you'll cross the boundary.
At present, you actually have to cross two boundaries: one out of WASM into JS, and another from JS into the engine.
> And it's not just strings. All JS objects that you need to do actual work with have to be serialized/deserialized in this way, because WASM only knows about bytes, arrays, and opaque pointers. DOM elements/attributes, DOM style properties, DOM events (click events, keyboard events, etc.), all of them get slow when you transfer them in and out of WASM land, even with reference types, because of serialization/deserialization.
> WASM interface types will make it easier to call JS from WASM, but as long as you're still calling JS in the end, rewriting in Rust will never make a DOM-heavy web app faster than writing it in JS.
This is untrue: Wasm interface types make it easier to call any language from Wasm. That includes the languages browsers are written in, namely C++, Rust, etc. This opens a path to optimization and possibly direct use of the same underlying APIs that JS uses today.
Also, as a side note -- raw performance wins on DOM operations are not the only goal worth pursuing. Performance parity with JS is also a worthy goal, because, taken as a whole, Wasm has other benefits as a compute option.
> The only fix, as Daniel's article explains, would be to standardize a low-level DOM API, one that didn't assume that you can use JS strings, objects+properties, exceptions, promises, etc. This would be an unimaginably large standardization project.
>
> You couldn't use WebIDL at all; you'd need to start by defining a new "low-level WebIDL." Then, you'd start standardizing the entire DOM API, all over again (or at least the most important parts) in low-level WebIDL, and then browser vendors could start implementing the low-level DOM API.
Introducing a completely new low-level DOM API might be ideal, but it's naive in practice. WebIDL is already the lingua franca for browser integration, at least for the one language that actually has that integration today.
Standards are hard to create (especially from scratch), and we can't control how fast browsers implement them or what browser vendors prioritize.
> Then WASM could start calling that API directly. And maybe then you could rewrite web apps in Rust and have them get faster.
Embedders of WebAssembly components (i.e. browser engines written in C++/Rust/etc) may choose to fulfill the imports of WebAssembly components however they'd like. It is indeed possible that a browser engine, upon instantiating a WebAssembly component that does DOM work, fulfills the WebIDL contract with more direct bindings to the underlying operations.
There will always be at least one hop across a boundary -- WebAssembly. That said, it is possible for that one hop to be optimized by embedding engines.
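As a hypothetical sketch of that separation (the module name "dom" and the function here are made up, not from any real proposal): the compiled module only sees an import, and the embedder decides whether that import is satisfied by JS glue (as today) or by a direct in-engine binding.

```rust
// Hypothetical import: nothing here is a real or proposed interface.
#[link(wasm_import_module = "dom")]
extern "C" {
    fn set_text_content(node: u32, text_ptr: *const u8, text_len: usize);
}

pub fn set_label(node: u32, text: &str) {
    // The guest-side call site is identical either way; only the host-side
    // fulfillment of the import changes.
    unsafe { set_text_content(node, text.as_ptr(), text.len()) }
}
```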
For any such optimization to even be possible (with or without the component model), standards bodies and browser vendors must agree on the interface they are going to speak. They have already agreed on WebIDL, so porting that is a good place to start, independent of whether creating a completely new standard is feasible.
> So, what's the most important blocker? (gesture toward the universe) All of it??
Concretely, what needs to be optimized is DOM calls. They're not impossible; they're just inefficient at present. That is a problem that can be worked on, and I don't think it's productive to say "all of it". There are many discrete steps between here and cross-browser, consistently performant DOM support for components, and it's reductive to assume there is only one step, or that it all gets done at once.
A realistic path toward more performant interaction with the underlying browser engine primitives is working with the interfaces those engines already have. WebIDL-to-WIT support (i.e. cross-browser and cross-platform, since it's WebAssembly native) is a step in that direction, and as such is forward progress. If we can't agree there, we'll have to agree to disagree.
At this point it's a matter of working with browser vendors on their embeddings, and getting whatever bindings we agree on to be performant inside the relevant browser engine.