Advantages/disadvantages of using WebGL2

As stated in the How to use WebGL2 doc:

By default three.js always uses a WebGL 1 context when creating an instance of WebGLRenderer. If you want to use a WebGL 2 context, please have a look at the following workflow.

However, it’s easy to set things up so that an app uses WebGL2 by default and falls back to WebGL1 if it’s not available, e.g. using WebGL.js:

import { WebGLRenderer } from 'three';
import { WEBGL } from 'three/examples/jsm/WebGL.js';

const canvas = document.createElement( 'canvas' );
let context;

if ( WEBGL.isWebGL2Available() ) {

	context = canvas.getContext( 'webgl2', { antialias: true } );

} else {

	context = canvas.getContext( 'webgl', { antialias: true } );

}

const renderer = new WebGLRenderer( { canvas, context } );

I guess doing this will complicate matters if you are writing your own shaders, but aside from that, are there any caveats to watch out for now, or in the future as more WebGL2 features are supported?


Apart from custom shaders, there is no disadvantage to using WebGL2. However, right now it only makes sense if you are actually using one of the new WebGL 2 features like 3D textures. The problem is that three.js has not yet integrated things like Uniform Buffer Objects, MRT or multiview support (useful for XR), so there is no performance benefit if you use WebGL2. Besides, multisampled render targets are not yet used automatically in post-processing, so you don’t see any improvement in rendering quality when using WebGL2.

Things might change over time, so this post is only valid for three.js R104.
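For reference, this is roughly what using one of those features (a 3D texture) looks like. A minimal sketch only; the exact DataTexture3D setup here is an assumption based on the API around that release:

import { DataTexture3D, RedFormat, UnsignedByteType, LinearFilter } from 'three';

const size = 32;
const data = new Uint8Array( size * size * size );

// simple gradient along x so the volume contains visible data
for ( let i = 0; i < data.length; i ++ ) {

	data[ i ] = ( ( i % size ) / size ) * 255;

}

// requires a WebGL2 context; 3D textures are not available in WebGL1
const texture = new DataTexture3D( data, size, size, size );
texture.format = RedFormat;
texture.type = UnsignedByteType;
texture.minFilter = LinearFilter;
texture.magFilter = LinearFilter;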


Actually, it seems to be an advantage in a few cases, such as when WebGL2 is available and a feature is part of the core spec while the corresponding WebGL1 extension is missing. I have no confirmation of this case, since there was no further reply: Creating Texture From Float32Array

But as you said, the mentioned features will only give a performance benefit once they are implemented.
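As an illustration of that extension gap (a sketch only; the float texture case is the one from the linked topic):

// if this is null on a WebGL1 device, float texture uploads are unavailable,
// while a WebGL2 context supports them as part of the core spec
const gl = document.createElement( 'canvas' ).getContext( 'webgl' );
const floatTexturesSupported = gl !== null && gl.getExtension( 'OES_texture_float' ) !== null;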

However, it only makes sense right now if you are actually using one of the new WebGL 2 features like 3D textures.

Even if you are using 3D textures, you wouldn’t want to fall back to WebGL 1, correct?
So there’s really no scenario, as of now, that would make sense to swap context based on WebGL 2 availability.


Yes, that’s right.

Yes, I understand that. I’m using the WebGLMultisampleRenderTarget to get cheap AA with post-processing, when available.

My current setup is to check if WebGL2 is available and then use a WebGLMultisampleRenderTarget in post-processing; if only WebGL1 is available, I fall back to a WebGLRenderTarget and add a final FXAA or SMAA pass.

I was just wondering if doing this would lead to complications later, when things like UBOs, MRT etc. get integrated.

So there’s really no scenario, as of now, that would make sense to swap context based on WebGL 2 availability.

Except for this one :grin:


Here’s my code for setting this up BTW:


import {
	WebGLRenderer,
	WebGLMultisampleRenderTarget,
	Vector2,
	RGBFormat
} from 'three';
import { WEBGL } from 'three/examples/jsm/WebGL.js';
import { EffectComposer } from 'three/examples/jsm/postprocessing/EffectComposer.js';

const canvas = document.createElement( 'canvas' );
let context;

if ( WEBGL.isWebGL2Available() ) {

	context = canvas.getContext( 'webgl2', { antialias: false } ); // disable built-in AA when using post-processing

} else {

	context = canvas.getContext( 'webgl', { antialias: false } );

}

const renderer = new WebGLRenderer( { canvas, context } );

let composer;

if ( renderer.capabilities.isWebGL2 ) {

	const size = renderer.getDrawingBufferSize( new Vector2() );

	const parameters = {
		format: RGBFormat,
		stencilBuffer: false
	};

	const renderTarget = new WebGLMultisampleRenderTarget( size.width, size.height, parameters );
	renderTarget.samples = 8; // default is 4, but 8 or 16 samples are needed to match the built-in AA quality

	composer = new EffectComposer( renderer, renderTarget );

} else {

	composer = new EffectComposer( renderer );

}

EDIT: I’m doing a bit more testing with this, and in general it seems to be working well.
However, passes that create their own render targets internally (e.g. Outline) also need those targets converted to WebGLMultisampleRenderTarget; otherwise the main scene will have AA, but the post effect will not.

EDIT2: I asked over on the Khronos forum. As expected, there’s no magic solution here; however, it should be OK to render once to a WebGLMultisampleRenderTarget at the start and then immediately downsample. That would save a bunch of memory and means you only need one multisample target. All the post effects can then be done on a normal render target.
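Roughly, that approach would look like this. A sketch only: scene and camera are whatever you normally render, TexturePass is one way to feed the resolved texture into the composer, and the resolve behaviour (three.js doing the blitFramebuffer when switching away from the multisample target) is my understanding of the internals, not something from the Khronos thread:

import { EffectComposer } from 'three/examples/jsm/postprocessing/EffectComposer.js';
import { TexturePass } from 'three/examples/jsm/postprocessing/TexturePass.js';

// draw the scene once into the multisample target...
renderer.setRenderTarget( renderTarget );
renderer.render( scene, camera );
renderer.setRenderTarget( null ); // switching away triggers the downsample (blitFramebuffer)

// ...then run the whole post chain on normal render targets,
// starting from the resolved texture
const composer = new EffectComposer( renderer );
composer.addPass( new TexturePass( renderTarget.texture ) );
// add the usual effect passes after this
composer.render();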


The ultimate goal is that the renderer only uses features like UBOs when they are available; otherwise, fallbacks are used. Of course, this logic should be transparent to app-level code.


Resuscitating this discussion after several months. Mugen87: you said above, “Apart from custom shaders, there is no disadvantage …”. Does this imply that with WebGL 2 I can’t write a custom shader? Also, has this changed, given that I’m on r113?

No. The problem is that if you write GLSL 1 shader code for WebGL 1 with ShaderMaterial, the code is not necessarily GLSL 3 conformant (e.g. it might use reserved keywords). In that case you have to upgrade your shader code if you want to use the latest GLSL features.
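For example (a hypothetical sketch; the variable name is purely illustrative). As far as I understand, under a WebGL2 context three.js builds ShaderMaterial code as GLSL ES 3.00, where sample is a reserved word:

import { ShaderMaterial } from 'three';

// valid GLSL ES 1.00, but fails to compile once the shader is
// built as GLSL ES 3.00, because 'sample' is reserved there
const material = new ShaderMaterial( {

	fragmentShader: `
		void main() {

			float sample = 0.5; // rename (e.g. to 's') to stay GLSL 3 conformant
			gl_FragColor = vec4( vec3( sample ), 1.0 );

		}
	`

} );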


One big advantage of WebGL2 is real MSAA using blitFramebuffer, which works on mobile phones (WebGL 1 AA doesn’t work on most phones; rendering at a higher resolution and then downscaling with CSS works, but it’s expensive AF). I built a website some time ago which uses WebGL2 (with a fallback to WebGL1) for this purpose. It’s basically ShaderToy displayed on fonts: http://fonted.io


@rockclimber correct me if I’m wrong, but doesn’t iOS Safari still lack WebGL2 support? How did you get MSAA to work on mobile phones?


MSAA requires two buffers and the context’s antialias option set to false. It’s the same code for desktop browsers and mobile phones: https://stackoverflow.com/a/55976760
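The core of it in raw WebGL2 looks something like this (a sketch based on that answer’s technique, not code from the site; width and height are placeholders for your drawing buffer size):

const gl = canvas.getContext( 'webgl2', { antialias: false } );

// multisampled renderbuffer as the draw target
const msFbo = gl.createFramebuffer();
const msColor = gl.createRenderbuffer();
gl.bindRenderbuffer( gl.RENDERBUFFER, msColor );
gl.renderbufferStorageMultisample( gl.RENDERBUFFER, 4, gl.RGBA8, width, height );
gl.bindFramebuffer( gl.FRAMEBUFFER, msFbo );
gl.framebufferRenderbuffer( gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.RENDERBUFFER, msColor );

// ...draw the scene into msFbo...

// resolve into the default framebuffer (the second buffer) with a blit
gl.bindFramebuffer( gl.READ_FRAMEBUFFER, msFbo );
gl.bindFramebuffer( gl.DRAW_FRAMEBUFFER, null );
gl.blitFramebuffer( 0, 0, width, height, 0, 0, width, height, gl.COLOR_BUFFER_BIT, gl.NEAREST );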

If neither WebGL2 nor WebGL1 antialiasing is supported, it renders the frame at a higher resolution and scales it down using CSS. Not the most performant approach, but it’s widely compatible and looks good.
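Roughly like this (a sketch; displayWidth and displayHeight stand in for your layout size):

// draw at 2x the display size and let CSS scale the canvas back down
const scale = 2;
canvas.width = displayWidth * scale;
canvas.height = displayHeight * scale;
canvas.style.width = displayWidth + 'px';
canvas.style.height = displayHeight + 'px';
gl.viewport( 0, 0, canvas.width, canvas.height );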