What's going on with WebGL2? WebGPU?

That’s a choice, I guess. I tend to do that too every now and then. I would not say that Chrome dev tools have no value though, as that’s where I get most of my debugging done.

Another aspect is the cycle: you change your shader to see what happens, then compile, run, see the result, process it, update your mental model, make another change, and so on. A formal set of debugging tools largely cuts out the “change, compile, run” part, making the whole process a lot faster. But that’s me trying to sell you on the idea of a debugger at this point :smiley:

That looks pretty cool. Personally, I see two issues with the current renderer:

  • The API is largely non-existent. You can throw a thing at it to render, but that’s about it.
  • No deferred shading (please don’t try to point out demos, I’m aware of those, I mean production-ready stuff)

I think if you’re not addressing at least one of those, it’s an exercise in learning more than the creation of something meaningfully different from what already exists. That’s my point of view, though; those are my biggest pains with the library as it stands.


^on the latter part, yes, and I agree that the deferred shading is not really supported (even the example renderer was deleted now I think). Whether deferred shading could be supported without increasing the complexity of the rendering code by 2X — which would be a non-starter, I think — is unproven at this point.

Code paths for WebGPU are going to make this interesting, too, I’d add that to the list. Don’t know if REGL tries to support that, probably not yet.

Unproven - sure. And it will not be until it’s done. But based on my limited knowledge, and having read through multiple deferred rendering engine sources, I believe it would be simpler for the most part, provided the right initial approach.

The reason why WebGLRenderer is so complex/complicated right now is mostly legacy. @mrdoob has done a great job over the years of making it more modular and more structured, but it is what it is - a legacy application. If you were to write WebGLRenderer3 today, you would probably take the lessons of the past and modularize the heck out of it, as well as building it as a 2-tier interface.

I even think that a deferred rendering pipeline has a few advantages when it comes to performance, as you need fewer re-compilations and generally smaller shaders. It’s a less straight-forward (hehe, forward) approach to understand overall, but I dare anyone unfamiliar with three.js to learn and explain to me how color management works, from the color of a pixel in a JPEG image to the color of a pixel on the screen as rendered by three.js; it is a convoluted process.
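As a rough illustration of that convoluted path, here is a sketch of one color channel’s journey in plain JavaScript. The sRGB transfer functions follow the sRGB spec; the Reinhard operator stands in for whichever tone mapping mode the renderer is actually configured with, so treat the exact operators (and the `displayValue` helper) as illustrative assumptions, not three.js internals:

```javascript
// Sketch of the pipeline: sRGB-encoded texel -> linear working space
// -> shading -> tone mapping -> sRGB encoding for the display.

// sRGB electro-optical transfer function (decode to linear light)
function srgbToLinear(c) {
  return c <= 0.04045 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
}

// inverse: encode linear light back to sRGB for the display
function linearToSrgb(c) {
  return c <= 0.0031308 ? c * 12.92 : 1.055 * Math.pow(c, 1 / 2.4) - 0.055;
}

// simple Reinhard tone mapping of an HDR linear value
function reinhard(c) {
  return c / (1 + c);
}

// one channel of a JPEG texel on its way to the screen (hypothetical helper)
function displayValue(texel, lightIntensity = 1.0) {
  const linear = srgbToLinear(texel) * lightIntensity; // shading in linear space
  return linearToSrgb(reinhard(linear));               // tone map, then re-encode
}
```

Even this stripped-down version has three distinct non-linear steps, which is part of why the real thing is hard to explain end to end.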

What I am trying to say is that I don’t see any particular reason why a deferred shading renderer would be more complex than the current forward one.

1 Like

Sure, I don’t mean to say it can’t be done — just a rather big project.

That’s a choice, I guess.

I’m not trying to judge other people’s debugging practices or tell everyone to debug the way I do. I know there’s value in a debugger and stepping through statements, and I use it myself. I use console logs quite a bit because they work everywhere I have to run my code, so I just meant that it’s a familiar paradigm.

Another aspect is the cycle: you change your shader to see what happens, then compile, run, see the result, process it, update your mental model, make another change, and so on. A formal set of debugging tools largely cuts out the “change, compile, run” part, making the whole process a lot faster. But that’s me trying to sell you on the idea of a debugger at this point :smiley:

I mentioned above there’s value to debugging tools like this that streamline this process (which includes realtime debugging and changes) but they can be built with WebGL right now. It doesn’t need a software rendering implementation.

Regarding a DeferredRenderer I’m optimistic! I expect it can be built on top of the existing WebGLRenderer and can use MRT once it’s available. Maybe I’m overlooking something related to the complexity of it? I think the most complex part would be reconfiguring all the existing shader code and writing the output final shader that reads from all the other buffers.

1 Like

About the MRT, it’s a bit more than that. You have your G buffer, that’s all good and well, but you have your specular, diffuse, emissive buffers as well as separately stored lighting information. This means that shadow mapping is done outside of the material shader as well. Tone mapping will have to be moved too.

It’s not about new stuff as much as about moving existing stuff around, but it would be a fair amount of restructuring.
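To make that restructuring concrete, here is a minimal WebGL2 (GLSL ES 3.00) fragment shader writing into a hypothetical G-buffer via multiple render targets. The attachment layout, varying names, and packing below are illustrative assumptions, not anything three.js actually ships:

```glsl
#version 300 es
precision highp float;

in vec3 vNormal;   // view-space normal from the vertex shader
in vec2 vUv;

uniform sampler2D diffuseMap;
uniform vec3 emissive;

// one output per color attachment enabled with gl.drawBuffers()
layout(location = 0) out vec4 gAlbedo;   // base color, roughness in alpha
layout(location = 1) out vec4 gNormal;   // normal packed into [0, 1]
layout(location = 2) out vec4 gEmissive; // emissive term

void main() {
  gAlbedo   = vec4(texture(diffuseMap, vUv).rgb, 0.5);
  gNormal   = vec4(normalize(vNormal) * 0.5 + 0.5, 1.0);
  gEmissive = vec4(emissive, 1.0);
}
```

Lighting, shadow resolution, and tone mapping would then happen in later passes that read these buffers, which is exactly the “moving existing stuff around” part: logic that currently lives inside material shaders has to be pulled out into screen-space passes.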

TBH, I doubt that deferred rendering will ever be part of the engine’s core. The resources of the project are limited and there are a lot of other issues which have a higher priority.

I personally think that WebGL 2 as well as WebGPU are a bit overrated. You can already build great applications with the existing technologies. A way more important feature request than, let’s say, a WebGPURenderer is a more advanced scene editor. This would greatly expand the potential user base, since there are a lot of creative designers and game devs out there who just don’t want to develop an entire JavaScript app.

From my point of view, a scene editor similar to what the Godot Engine offers (I would not use Unity or other commercial game engines as a benchmark) is much more important for content creators than WebGL 2 features. The three.js editor is a good start, but it lacks many features. I would love to see more progress on this part of the project in the coming years.


I think that’s a valid point of view. But it also depends on who you see as three.js’s main audience. I’m a game developer, and modern graphics at a reasonable performance level matter a lot to me, as does a low-level API. I’m not the only one. Your statement can be interpreted as saying that three.js is not for me. I’m not trying to start anything here; it’s just interesting to see how vastly different our perspectives are.

I do not believe that three.js is a viable graphics engine for a video game with modern graphics, not in its current form. I also see the three.js editor as being vastly inferior to Blender; why would I use it when I can model anything I like in Blender and just export it as glTF?

I guess I’m just a bit puzzled by what this means. Is three.js a graphics engine? Is it a game engine? Is it a “3d environment”, whatever that means. I’m cool with any of those, to be clear. I’m saddened by the apparent lack of common understanding, which in turn results in a lack of focus.

That is quite off-topic. To get back on-track. Why do you think that WebGL2 and WebGPU are overrated, in the context of three.js? What are the new features that those would bring, and why do you believe those features to be of little significance? I hope I’m not putting words in your mouth with this phrasing.


I’d like to double dare here :smiley:

1 Like

Could you elaborate on this, what’s considered more important than issues of future APIs? Is what you’re describing regarding editors one such issue (lack of editors)?

I think this nails it, as I’ve had the exact same feeling. If this is true, I don’t understand why a game engine or some kind of content creation tool has to be three.js; why can’t it be built using three.js? I think there was a plethora of companies that tried to do this over the years, though. The meetups in San Francisco around 2013-2014 seemed to feature one new startup per month that did exactly this.

Jack of all trades, master of none, but I think what you’re mentioning should be more straightforward. I wrote a lengthy philosophical piece on this topic, and still don’t know the answer:

Even though this is related, it might be a bit of a digression, the core question is, what @usnul already posted, what is considered more important than WebGL 2 and WebGPU?

1 Like

Well, according to your comments in the past, I actually think BabylonJS is a better fit for you^^.

A scene editor is not the same as a DCC tool like Blender. Are you actually familiar with workflows in Unity or Godot? The idea is that devs can implement three.js apps by authoring their scene in the editor. That means implementing scripts, custom shaders or arranging imported assets. And finally exporting the app.

From github: The aim of the project is to create an easy to use, lightweight, 3D library with a default WebGL renderer.

So no game engine, no 3D environment.


I know this will sound weird, but what if there is no scene to author? I’m not sure if it can be put this way, because you always have a scene, but what if it’s some sort of data viz or otherwise very dynamic, not loading some gltfs etc. Is the expectation that a scene editor would help in this case?

Could these two possibly be at odds? Unity is not a library, far from it, it’s a game engine. Why would one be comparing the workflow of using a library which should be:

import library

in any language, to something as specific as a game engine? I think I’ve seen some examples of three being used for 2D / video editing and such.

To be clear, the workflow that I’m familiar with from Unity when it comes to using libraries is:

using System;
using UnityEngine;

I’ve only seen it being declared in text editors; it doesn’t seem to be usable within Unity’s GUI. using is analogous to import or require in the JavaScript workflow, or, more archaically per three’s examples, <script src="library.js">.

1 Like

Check out how the following water effect is developed with the scene editor of the Godot engine (it has nothing to do with game-related logic):

Unfortunately, something like this is currently not possible with the three.js editor.

The only question I could follow it up with is:
“what is a 3D library?”

Though, I will not. I have asked this in the past, I don’t want to start anything. As I said. This was not intended as an interrogation.

So three.js editor is a game editor? Again, not here to fight :slight_smile:. But yes, I’m familiar with these.

I’m more interested in your thought on WebGL2 and WebGPU.

Can you point out specific things we should look out for?

A lot of this, I think, you get when you set up some environment with webpack, glslify, webpack-dev-server and such. I’ve never had to use the three.js editor, but I often had to integrate code doing something with some canvas and three.js with something like React for UI.

I opened a topic for this digression, What is three.js?

I’m curious how this Godot video relates to WebGL2 and WebGPU. A cursory glance at the Godot wiki mentions Vulkan; is that so much more advanced than WebGL?

Regarding WebGPU

I was planning on focusing on WebGPU in the second half of this year.


Looks like Chrome will switch to 4x MSAA by default: https://bugs.chromium.org/p/chromium/issues/detail?id=1063437&q=component%3ABlink>WebGL

Is there still a good way to use 8x MSAA with WebGL2 and three.js?

@arpu I think Looeee shared a good method of implementing 8xMSAA here: Advantages/disadvantages of using WebGL2
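For reference, here is a sketch of how 8x MSAA can be requested directly through the WebGL2 API, independent of three.js: render into a multisampled renderbuffer, then resolve it with blitFramebuffer. This is a browser-only sketch, not runnable standalone; `gl` (a WebGL2 context), `width`, `height`, and the actual draw calls are assumed to exist elsewhere, and the driver may clamp the sample count below 8:

```js
// pick the sample count, clamped to what the implementation supports
const samples = Math.min(8, gl.getParameter(gl.MAX_SAMPLES));

// multisampled color storage
const rb = gl.createRenderbuffer();
gl.bindRenderbuffer(gl.RENDERBUFFER, rb);
gl.renderbufferStorageMultisample(gl.RENDERBUFFER, samples, gl.RGBA8, width, height);

// framebuffer that the scene is drawn into
const msaaFbo = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, msaaFbo);
gl.framebufferRenderbuffer(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.RENDERBUFFER, rb);

// ... draw the scene here ...

// resolve the multisampled result to the default framebuffer
gl.bindFramebuffer(gl.READ_FRAMEBUFFER, msaaFbo);
gl.bindFramebuffer(gl.DRAW_FRAMEBUFFER, null);
gl.blitFramebuffer(0, 0, width, height, 0, 0, width, height,
                   gl.COLOR_BUFFER_BIT, gl.NEAREST);
```

Checking `gl.MAX_SAMPLES` first matters because the spec only guarantees a minimum of 4; on hardware that reports 8 or more, this gives you the 8x path regardless of what the default framebuffer was created with.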