This article is intended to discuss the future of WebGPU, specifically in the context of the recent Google debacle. Google was paying Mozilla (the developer of Firefox) roughly 80% of Mozilla's annual revenue to make Google the default search engine, and those payments are set to stop from this year onward. Mozilla has already been in a relatively shaky position in recent years, and Firefox development has been quite slow.
For those not aware, WebGPU is the next-generation graphics API for browsers, and what three.js is transitioning towards.
For us, as developers and users of three.js, to move forward and get access to the new and shiny WebGPU features, we need these features to be shipped by all the major browsers. Right now Chrome ships WebGPU, but that’s about it.
Where it says “Nightly”, it essentially means the beta branch, not the main release.
My guess is that Firefox will fade into obscurity and become irrelevant in the coming years, and we will not have to care. Does anyone know whether WebGPU in Firefox is still coming, and/or what the future of Firefox is at this point?
Every browser vendor running on the same core would ease the transition, imo.
So from a dev standpoint, we win. From a user standpoint, it would unironically create an even greater monopoly. It's sad if Mozilla suffers from this; we need diversity and competition.
Side note about WebGPU: it's pure delight. The three.js team's effort to blend things together is also amazing, and a complete success in my eyes. I've seen some criticism of the choice of TSL, but it comes from people who aren't regular three.js users; it's an outsider's perspective on an exotic language. In practice, the moment you start to grasp the possibilities, you don't want to go back to WebGL/ShaderChunks anymore.
An “in-the-wild” user is savvy enough to mitigate Firefox 3D slippage. Tech emerges from “stealth mode” after generational growing pains. A restricted period helps address internal conflict and demand. With respect to the browser horizon:
AI-driven scripting
thin client
modular Win12 + WSL
Web3, Web4, Web6, Inrupt
OpenWeb regulation, extension fuzzing
Getting “wheels on” the collective rollout? WebGPU-critical projects should use a default webview in Node.js before publishing to Steam. Or bootstrap a new browser.
In 1997, Apple was on the verge of bankruptcy and was saved by Microsoft's investment of $150 million in non-voting shares, along with other deals between the companies. Not selflessly, but as a move in the monopoly war! The US Department of Justice had concerns in the “browser war” with Netscape, as Microsoft had bundled Internet Explorer with Windows. If the Mac had disappeared from the market, Microsoft would have had an operating-system monopoly, and a break-up was on the cards.
Safari for Windows has not been maintained by Apple for a long time.
There are currently monopoly proceedings again: a judge in Washington ruled that Google holds a monopoly in internet search - and has been defending it against competitors by unfair means.
This is the truth. The node-based system for writing shaders blows my mind. It is really amazing. It’s one of those “I can’t believe this is even possible” things.
I may be one of those critics. Why do you feel this way? Is it really that much easier to write JavaScript that gets transpiled(?) than to write an actual shading language?
One of the “promises” of Three is to provide an abstraction layer on top of what Khronos/W3C are cooking. In that regard, shaders have always felt like unfinished business, disconnected from the rest of the library.
It's right to develop WebGPU and WGSL with a wider scope that doesn't limit itself to JS. The web is way more than that.
But the Three maintainers are also right to “be wrong” and hack it, to provide their own JavaScript-oriented solution. Because that's what they do; that's Three.js.
Unless I'm missing something, I think the point of TSL is that it automatically transpiles to either GLSL or WGSL depending on the capabilities of the browser/device. E.g., if WebGPU is available, the corresponding renderer plus transpilation to WGSL is served; if it is not, and only WebGL is available, the fallback renderer plus transpilation to GLSL is served to the user automatically…
@Oxyn imo it seems like less of a hack and more of a robust solution for handling both processing paths while in the eye of the storm of a mass transition/migration to WebGPU support across the board - a win-win in terms of maintaining a stable platform for the broadest set of users/device capabilities while using a common syntax for both…
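A minimal sketch of that idea, assuming a recent three.js build that ships the three/webgpu and three/tsl entry points (versions and details aside, not a definitive setup):

```js
import * as THREE from 'three/webgpu';
import { color, sin, time } from 'three/tsl';

// WebGPURenderer picks a backend at init time: WebGPU where available,
// otherwise a WebGL 2 fallback. The same node material is then compiled
// to WGSL or GLSL accordingly; no separate shader source is needed.
const renderer = new THREE.WebGPURenderer({ antialias: true });
await renderer.init();

const material = new THREE.MeshBasicNodeMaterial();

// One TSL graph (a color pulsing over time); the renderer emits WGSL or
// GLSL from it depending on the backend it ended up with.
material.colorNode = color(0xff8800).mul(sin(time).mul(0.5).add(0.5));
```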
That's what I don't quite understand. How do you transpile a compute shader to WebGL? Are there other things that are not compatible?
TBH I don't know much about this topic, but aren't most languages compiled/transpiled? E.g., with TypeScript I get a whole different syntax, but I get JavaScript in the end. With GLSL, it already gets compiled to different shaders on the GPU, and all I provide is a string. I don't understand why this has to be in JavaScript. Is it possible to do it in reverse, e.g. if I have a GLSL shader, to generate TSL?
In my opinion, the benefits are not so much the transpiling, but rather the ability to write a node somewhere in JavaScript code, save it in some “global” scope, and use it all over the place. You could also write functions that dynamically build nodes, like getNode() { ... }, save nodes to JavaScript arrays, and generally take advantage of JavaScript's native features. Directly writing a shader, on the other hand, requires you to call .toString() on everything from the JavaScript world, because ultimately a shader is just one big string. Without the node system, everything is very inflexible and requires a lot more forethought and planning, which is a waste of brain cycles when the node system is right there, ready to be used.
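A small sketch of that pattern, again assuming the three/webgpu and three/tsl entry points; the names wobble, uAmplitude, and frequency are purely illustrative:

```js
import * as THREE from 'three/webgpu';
import { Fn, uniform, positionLocal, sin } from 'three/tsl';

// A node kept in module ("global") scope, shared by anything that wants it.
const uAmplitude = uniform(0.25);

// A plain JavaScript factory that builds a displacement node on demand.
const wobble = Fn(([frequency]) => {
  return positionLocal.add(sin(positionLocal.y.mul(frequency)).mul(uAmplitude));
});

const materialA = new THREE.MeshStandardNodeMaterial();
const materialB = new THREE.MeshStandardNodeMaterial();

// The same graph, parameterized per material with ordinary function calls.
materialA.positionNode = wobble(4.0);
materialB.positionNode = wobble(12.0);

// Animate by mutating plain JavaScript state; no string concatenation involved.
uAmplitude.value = 0.5;
```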
But I don't see any types or anything in JavaScript; it may be worth mentioning that I haven't written JavaScript in years, only TypeScript. I know three.js is a JS-exclusive library, but I imagine there is some subset of users who like types. To me, the shader output is actually more readable, except for the mangled names.
I'm not trying to be combative, if certain people read this; I am genuinely just trying to understand the approach/philosophy. Please don't ban me.
To me, “nodes” evoke the same feeling as “no code”. I have a great interest in this topic, and I've made a (broken) tool for this myself - https://youtu.be/FwBhpUgy9Ss?si=KW5FcHgm4rPrhRuU. But I don't understand why this would be done in code.
For example, why isn't it JSON? Or, by extension, some other language? Why does it have to be this imperative, calling specific functions in JavaScript?
This all happens at runtime, right? You have to load your node code and three's node code, compile them to a shader, and then have the machine compile your shader. If it were done in some editor where you end up with a compact artifact and no node runtime, it would make more sense to me for a web engine.
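For contrast, a hypothetical sketch of the two styles. The JSON schema in the first half is invented purely for illustration (three.js ships nothing like it); the second half uses actual TSL calls:

```js
import { color, uniform } from 'three/tsl';

// Hypothetical declarative form (invented schema, illustration only):
const graphAsData = {
  colorNode: { op: 'mul', args: [{ color: '#ff8800' }, { uniform: 'intensity' }] },
};

// TSL's imperative form: ordinary function calls build the graph at runtime,
// so plain JavaScript (loops, conditionals, modules) can shape it directly.
const intensity = uniform(1.0);
const colorNode = color(0xff8800).mul(intensity);
```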
This makes sense!!! This is how I imagined this to work. I'm not sure I would choose JS as the output, but I would save the other type of shader somewhere.