Understanding WebGPURenderer

Hi everyone,

I’m exploring THREE.WebGPURenderer in Three.js and trying to understand how it compares to WebGLRenderer.

From what I’ve read, I don’t need to explicitly use WebGPU-only features (like compute shaders or TSL). Instead, if I just switch to:

const renderer = new THREE.WebGPURenderer();

instead of

const renderer = new THREE.WebGLRenderer();

then I should automatically get some advantages from WebGPU, such as:

  • Better performance (more efficient rendering pipeline)
  • Lower CPU overhead (reduced driver overhead)
  • Better memory management
  • Access to more modern GPU features
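
One practical caveat I ran into: the swap may not be a pure one-liner, because WebGPURenderer initializes asynchronously. A minimal sketch, assuming a recent three.js build where the renderer ships via the 'three/webgpu' entry point (exact imports can vary by release):

import * as THREE from 'three/webgpu';

const renderer = new THREE.WebGPURenderer( { antialias: true } );
await renderer.init(); // wait for the GPU adapter/device before the first frame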

My question is: is this understanding correct?

Take this example from the three.js examples: it uses WebGPURenderer. I modified the example to use WebGLRenderer instead, and I got roughly the same performance, so I don't see the use case for WebGPURenderer here.
Is WebGPURenderer used just to showcase how to use it, or is there a specific performance benefit in this example?

1 Like

My guess is:
Examples aren't only for learning; they're also a "test bench" for new/future releases, to check that supported features don't break along the way. This is typically how it's been done since the dawn of three.

In this specific case the main features used are the canvas-based FlakesTexture.js and clearcoatNormalMap (clearcoat is probably the only part requiring changes). There is very little chance of a performance gap. The strong side of WebGPU is more obvious when shaders are involved in manipulating meshes and textures, enabling previously unthinkable moves (like massive instancing and procedural generation). So yes, you're 100% right.

1 Like

I would not expect a performance improvement simply by swapping WebGPURenderer in for WebGLRenderer today. See: three.js - WebGPU vs WebGL - rendering tens of thousands polygons - Stack Overflow.

But WebGPU does provide access to more modern GPU features like compute. Those, and TSL, do give you better tools to improve performance in your application. But you’ll need to adapt the tools to your application and its specific performance challenges.
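
For illustration, here's a rough sketch of what a TSL compute pass looks like. This isn't from the example above; the names follow the 'three/tsl' module used in the official compute demos, and the exact API may differ between releases:

import { Fn, instanceIndex, instancedArray, vec3 } from 'three/tsl';

const count = 10000;
const positions = instancedArray( count, 'vec3' ); // GPU-side storage buffer

// build a compute node that nudges every element upward, entirely on the GPU
const updatePositions = Fn( () => {
  const p = positions.element( instanceIndex );
  p.addAssign( vec3( 0, 0.01, 0 ) );
} )().compute( count );

// then, once per frame: renderer.computeAsync( updatePositions );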

2 Likes

Okay, understood — my initial thought was that simply switching to WebGPURenderer would give me extra performance. But from what you explained, it makes more sense to use WebGPURenderer only when there’s a clear need or a specific use case for it.

In my case, I’m rendering thousands of cylinders. When I use instanced meshes, I get very good performance, but I can’t rely on instancing here because each cylinder has different geometry. My scene is also not static — I need to update each mesh dynamically (rotation, transparency, outlines, etc.).

With individual geometries and MeshStandardMaterial, I hit a performance penalty when drawing around 10k shapes. That’s why I tried WebGPURenderer, thinking it might give me a higher FPS. In practice, I didn’t see any significant difference in frame rate. The only notable improvement was in memory consumption, which was about 30% lower with WebGPU.

So my conclusion is: to really get the benefits of WebGPU, you need to make use of WebGPU-specific features (like compute shaders or TSL) that WebGL doesn’t offer. Just swapping renderers won’t automatically give a big performance boost.

That's a great example. Both WebGPU and WebGL are APIs to execute instructions on the GPU, but the cost of sending 10,000 draw calls to the GPU is significant regardless of the API. Very possibly, as THREE.WebGPURenderer and browser implementations improve, the overhead of draw calls will be lower in WebGPU, but batching and/or instancing will remain important.

THREE.BatchedMesh might be a good option for the case you’re describing. It’s also possible to implement something very similar using a large BufferGeometry (containing all objects’ vertex data) with custom shaders or TSL to control object color or pos/rot/scale.
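
A minimal BatchedMesh sketch for the cylinder case, assuming the usual scene and material setup (BatchedMesh's API has changed between releases, so check the docs for your version):

const material = new THREE.MeshStandardMaterial();

// reserve rough budgets up front: max instances, total vertices, total indices
const batched = new THREE.BatchedMesh( 10000, 512 * 10000, 1024 * 10000, material );

const matrix = new THREE.Matrix4();

for ( let i = 0; i < 10000; i ++ ) {

  // unlike InstancedMesh, every cylinder can have its own dimensions
  const geometry = new THREE.CylinderGeometry( 0.3 + Math.random() * 0.4, 0.3 + Math.random() * 0.4, 1 + Math.random() * 2 );
  const geometryId = batched.addGeometry( geometry );
  const instanceId = batched.addInstance( geometryId );

  matrix.setPosition( ( i % 100 ) * 2, 0, Math.floor( i / 100 ) * 2 );
  batched.setMatrixAt( instanceId, matrix ); // per-object transform, one draw call total

}

scene.add( batched );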

1 Like

My impression was that the WebGPU renderer was at times slower than the WebGL one. This is probably because it is in active development. It may also have more bugs than the WebGL renderer, which has been developed for some 15 years.

It’s still the same graphics card that runs your code.

There was a thread recently where someone experienced a TSL shader recompiling every frame, which isn’t the most performant thing to do.

With all this being said, I don't think it's a magic bullet; just swapping it in doesn't make everything go faster. It probably will in a few years.

My unpopular opinion is that most people don’t even really need the things that webgpu allows for.

Btw how did you measure the memory consumption? I’m really curious as to where the savings are in your case.

2 Likes

My unpopular opinion is that most people don’t even really need the things that webgpu allows for.

Yes, that makes sense. We're currently evaluating what's possible with web graphics technologies like Babylon.js, Three.js, and HOOPS (paid). The goal is to port a desktop application to the web, and before doing that, we want to understand the rendering capabilities and built-in features of each framework so we can give the client solid advice. That's why I've been investigating WebGPU, but I've realized that jumping straight to it feels more like a solution in search of a problem.

Btw how did you measure the memory consumption? I’m really curious as to where the savings are in your case.

It wasn’t a very deep dive, but mainly through the browser performance monitor and FPS stats. I noticed that memory usage was consistently about 30% lower with WebGPURenderer, even though FPS was about the same.
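
For reference, roughly what I looked at (renderer.info is built into three.js; performance.memory is a non-standard, Chrome-only API, so treat both as coarse indicators):

console.log( renderer.info.render.calls, renderer.info.render.triangles );
console.log( renderer.info.memory.geometries, renderer.info.memory.textures );

if ( performance.memory ) { // non-standard, Chrome only
  console.log( ( performance.memory.usedJSHeapSize / 1048576 ).toFixed( 1 ) + ' MB JS heap' );
}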

There are several programs in the official three.js examples that compare WebGL2 and WebGPU performance.

I believe the real increases in speed come once you start attaching your buffers and shaders to the GPU, especially the compute shader, which is not available in WebGL2. Reducing calls to the GPU is also an important factor (i.e., having the GPU perform all the tasks without round-tripping back to the CPU).

WebGPU is eventually going to replace WebGL2 in three.js, so it certainly doesn't hurt to learn how to use it.