Shade - WebGPU graphics

@mrdoob

I’ve had a look at all of those, didn’t see the SSRNode before though. I do have some thoughts.

WebGPURenderer

I think in general the approach is sound, and I like how minimalist a lot of the code is. It’s very few lines for the benefit that the API provides. I admire that.

To me, the issue is in the overall architecture. Three.js is going for a fairly traditional rendering architecture with WebGPU. Which is fine, and you can build other architectures on top of it, but that’s not what I wanted to build.

My goal was to build a GPU-resident renderer, something that can render millions of instances in real-time without much CPU overhead.

Let me paint a rough picture:
In 2015, DirectX 12 was released (it was announced in 2014), providing a much lower-level graphics API.
Vulkan came soon after, and now we also have Metal. The primary problem these APIs set out to address was this:

pushing commands to the GPU is slow; let’s make it faster.

To do that, command queues were introduced and we were given the ability to record command buffers on different threads. This was huge: you could now push almost two orders of magnitude more commands to the GPU each frame. The “bottleneck” of CPU ↔ GPU communication was widened.

At the time, it was believed that this widening would solve the issue for good, but GPUs kept getting faster and CPUs didn’t, really.

As AI adopted GPUs and started to take off, money started to pour into the manufacturers’ hands from clients that wanted to do general-purpose compute on the GPU. They didn’t need “shaders”; they had compute. So compute shaders started to develop and, with time, dominate.

Graphics programmers saw this happening, so they started to move more and more traditionally CPU-based workloads to the GPU. A lot of that trend was driven by the console market, as console hardware incorporated compute more readily, and the architectures were more… fluid: you had a bunch of not-quite-CPU cores that could do a lot of work, but required a special programming model, just like compute shaders.

Let’s fast-forward to today: a lot of graphics engines are idling on the CPU side; there’s little for them to do there, as most of the work is happening on the GPU. All you do is build descriptors of change, and sometimes even that is handled on the GPU.

With this, the paradigm of drawing one object at a time and sending a bunch of commands to the GPU for each object is dying. I don’t think it’s dead; graphics APIs like DirectX, OpenGL, Vulkan, etc. are still centered around the concepts of attributes, indices and traditional draw calls. But these are getting less and less use.
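To make the “descriptors of change” idea concrete, here is a minimal sketch in plain JavaScript (names and object shape are hypothetical; the 4-u32 layout matches WebGPU’s indirect draw argument buffer): instead of one CPU draw call per object, a GPU-resident renderer packs draw arguments for all objects into a single buffer and issues one indirect draw.

```javascript
// WebGPU's drawIndirect expects 4 u32s per draw:
// [vertexCount, instanceCount, firstVertex, firstInstance].
function buildIndirectArgs(objects) {
  const args = new Uint32Array(objects.length * 4);
  objects.forEach((obj, i) => {
    // firstVertex is left at 0 in this toy example.
    args.set([obj.vertexCount, obj.instanceCount, 0, obj.firstInstance], i * 4);
  });
  return args;
}

// Two hypothetical objects: a cube drawn 1000 times, a prism 500 times.
const args = buildIndirectArgs([
  { vertexCount: 36, instanceCount: 1000, firstInstance: 0 },
  { vertexCount: 24, instanceCount: 500, firstInstance: 1000 },
]);
```

In a real GPU-resident renderer this packing (and the culling that decides which objects survive) would itself run in a compute shader, so the CPU never touches per-object data at all.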

Granted, a GPU-resident renderer is much harder to build: there’s more complexity, there’s less educational material, and while the APIs allow you to do it, they don’t really encourage or help you in any specific way.

So, long story short: WebGPURenderer the way it is now, and the direction it’s going, is not compatible with what I’m doing. On a pretty fundamental level.

Three.js is fast enough, it’s much quicker to get into, and the programming model is very clear and straightforward. I’m creating something, arguably, not for today but for 2+ years from now.

NodeMaterial

I like the idea. I think node-based languages are really powerful. I wrote a few in my day. Heck, meep has a few node-based languages inside of it.

The problem I see with node-based languages is the user interface. A node-based language can be useful as an API, but only if it is sufficiently high-level; a low-level node-based API is just a pain with no gain. You can sort of see it in Unreal and Unity: they have node-based shaders, but they offer very complex nodes for you to use there. I have used Unity’s shader editor a fair bit, and I came to realize that it’s a massive pain, because it doesn’t offer a good user interface: the UI is slow, there’s no search, grouping is non-existent, etc. So, in my view, a node-based language without a great UI is a bad investment.

Can you write an SSR shader in a node-based language? Yes, as evidenced by @Mugen87’s work that you linked. But:

  • is it clearer than GLSL?
  • is it more concise than GLSL?
  • does it offer lower system complexity?
  • is it easier to learn?
  • does it compile faster?

I know I’m cherry picking here, but I hope this clarifies my view a bit.

Do I think GLSL is great, or even WGSL? No, I think they are pretty bad languages. WGSL especially is a massive pain. But it’s a standardized pain, with a lot of reference material. I was missing a module system in WGSL, so I wrote a bare-bones one like so:

    /**
     * @param {string} code
     * @param {CodeChunk[]} dependencies
     * @returns {CodeChunk}
     */
    static from(code, dependencies = []) { /* ... */ }
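To illustrate how such chunks compose, here is a hypothetical, self-contained re-implementation (the real CodeChunk in meep may look quite different): dependencies are walked depth-first, and each chunk is emitted exactly once before the code that depends on it.

```javascript
class CodeChunk {
  constructor(code, dependencies) {
    this.code = code;
    this.dependencies = dependencies;
  }

  static from(code, dependencies = []) {
    return new CodeChunk(code, dependencies);
  }

  // Concatenate this chunk's dependencies (depth-first, deduplicated)
  // followed by its own code.
  compile(seen = new Set(), out = []) {
    if (!seen.has(this)) {
      seen.add(this);
      for (const dep of this.dependencies) dep.compile(seen, out);
      out.push(this.code);
    }
    return out.join('\n');
  }
}

// Hypothetical WGSL fragments: the lighting helper is emitted before
// the fragment entry point that uses it.
const lighting = CodeChunk.from('fn lambert() -> f32 { return 1.0; }');
const main = CodeChunk.from('fn fs_main() -> f32 { return lambert(); }', [lighting]);
const wgsl = main.compile();
```

The payoff of even a bare-bones scheme like this is deduplication: a helper shared by several chunks is emitted once, no matter how many chunks depend on it.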

This isn’t perfect, but it’s good enough for me. I looked at a bunch of different language abstractions for WGSL in particular (or, let’s say, SPIR-V); the problem is that they all sacrifice expressiveness and specificity for the sake of compatibility. TypeScript, say, is dominant because it only compiles to one language, JavaScript, so it’s able to capture every aspect of JavaScript perfectly. As soon as you start to target multiple languages in translation, you’re playing a losing game.

Here’s a simple example: Unreal Engine can target WebGL as a compile target. And it works! It looks like :poop: though. Why is that? Do the folks at Epic not know how to use a graphics API? No: the compiler is forced to target the lowest common denominator. It doesn’t know about WebGL in particular, not really; it targets OpenGL ES 3.0, and then disables all features that don’t translate directly to WebGL.

There are some successful examples out there, such as C, or LLVM. But they had the benefit of C not being a compile target itself at the time, while actually providing more conciseness and expressiveness, and of LLVM not being oriented towards programmers.

I think the NodeMaterial concept is not bad, and it can be great, but it needs to come with an amazing set of tooling, specifically a UI.

TSL - I don’t dislike it, but I don’t love it. It’s WGSL with extra steps, and you’re using it as a declarative language written in a functional language (JavaScript), which makes it awkward, on top of what I mentioned before.

SSRNode

I’m not sure how valid this point is, but SSRNode is a toy. If you read the original code, you can see the step-wise ray marching through the depth buffer and the basic denoising with the edge-preserving blur pass, but it’s not a practical tool.

The traversal is way too slow, and it treats every surface as a perfect mirror. You can use it to produce some pretty pictures, but it’s not physically grounded in the slightest: it doesn’t respect StandardMaterial and it’s not energy-preserving.
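For illustration, here is the step-wise depth-buffer march reduced to a toy 1D version in plain JavaScript (names and parameters are made up; the real SSRNode marches a 2D depth texture inside a shader): the ray advances in fixed steps and reports a hit the first time it falls behind the depth buffer.

```javascript
// depthBuffer: per-pixel scene depth along one screen row.
// The ray starts at (startX, startDepth) and advances by
// (dirX, dirDepth) each step. Returns the hit pixel, or -1.
function marchRay(depthBuffer, startX, startDepth, dirX, dirDepth, maxSteps) {
  let x = startX, d = startDepth;
  for (let i = 0; i < maxSteps; i++) {
    x += dirX;
    d += dirDepth;
    const px = Math.floor(x);
    if (px < 0 || px >= depthBuffer.length) return -1; // ray left the screen
    if (d >= depthBuffer[px]) return px; // ray went behind the scene: hit
  }
  return -1; // no intersection within the step budget
}
```

Even in this toy form you can see why the technique is expensive: cost scales with the step count, and shrinking the step size to reduce banding multiplies the number of depth-buffer reads per pixel.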

I don’t think that’s a problem; it’s a good teaching tool, and there are use cases where everything is glossy and it behaves close to “realistic”.
