Render multiple views

I am trying to render multiple views of a scene. Some views use an orthographic camera and a wireframe override material. I've set autoClear, autoClearColor, autoClearDepth and autoClearStencil to false. The only way I have been able to get this working is to trick the renderer into thinking each camera is an ArrayCamera:

camera.cameras.forEach( cam => {
	//renderer.setViewport( cam.viewport );
	if ( cam.isOrthographicCamera ) {
		// orthographic views: flat background colour and wireframe override
		scene.background = col;
		scene.overrideMaterial = mat;
	} else {
		scene.background = envmap;
		scene.overrideMaterial = null;
	}
	if ( !cam.isArrayCamera ) {
		// workaround to use viewports
		cam.isArrayCamera = true;
		cam.cameras = [ cam ];
	}
	renderer.render( scene, cam );
} );
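For completeness, the clearing setup described above is just the stock WebGLRenderer flags (a sketch of what the post implies):

renderer.autoClear = false;
renderer.autoClearColor = false;
renderer.autoClearDepth = false;
renderer.autoClearStencil = false;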

This nearly works, but the orthographic views do not display the correct background colour:
https://niklever.com/mycourses/threejs-cookbook/complete/cameras/orthographic.html
Any advice?

I added

renderer.setClearColor(col, 1);
renderer.clearColor();

outside the forEach, and it's working fine now. But is this the best way to control multiple views?
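For reference, the whole per-frame sequence now looks roughly like this (a sketch; col, mat and envmap are as above):

renderer.setClearColor( col, 1 );
renderer.clearColor(); // clear the whole canvas once, before the per-view loop

camera.cameras.forEach( cam => {
	// ...background/override and viewport setup as above...
	renderer.render( scene, cam );
} );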

I use several containers with separate cameras and renderers.

See these from the Collection of examples from discourse.threejs.org:
ConstructionBasis
and Construction of frames with contour/profile.

In another project I use several containers:


const containerA = document.querySelector( '.containerA' );
const containerB = document.querySelector( '.containerB' );
const sliderAB = document.querySelector( '.sliderAB' );
const containerC = document.querySelector( '.containerC' );
const containerD = document.querySelector( '.containerD' );
// scenes 
const sceneA = new THREE.Scene( );
sceneA.background = new THREE.Color( 0xdedede );

const sceneB = new THREE.Scene( );
sceneB.background = new THREE.Color( 0xeeeeee );

const sceneC = new THREE.Scene( );
sceneC.background = new THREE.Color( 0xf6f6f6 );

const sceneD = new THREE.Scene( );
sceneD.background = new THREE.Color( 0xf6f6f6 );
// cameras
const cameraA = new THREE.PerspectiveCamera( 55, containerA.clientWidth / containerA.clientHeight, 0.001, 10000 ); 
cameraA.position.set( 0.5, 2.5, 3.5 );

let widthB = 0.95 * containerB.clientWidth;
let heightB = 0.95 * containerB.clientHeight;
if ( widthB > 2 * heightB ) { widthB = 2 * heightB; } else { heightB = widthB / 2; } // keep a 2:1 aspect
const aspectB = widthB / heightB;
const cameraB = new THREE.OrthographicCamera( -aspectB, aspectB, 1, -1, 0.01, 0.2 );
cameraB.position.set( 0, 0, 0.1 );

const leftC = widthB / 4 + 10; // +10 to the right
const topC = 0;
const widthC = heightB;
const heightC = heightB;
const aspectC = widthC / heightC;
const cameraC = new THREE.OrthographicCamera( -aspectC, aspectC, 1, -1, 0.01, 0.2 );
cameraC.position.set( 0, 0, 0.1 );

const leftD = leftC + widthC + 30;
const topD = 0;
const widthD = widthC / 10;
const heightD = heightC;
const aspectD = widthD / heightD;
const cameraD = new THREE.OrthographicCamera( -aspectD, aspectD, 1, -1, 0.01, 0.2 );
cameraD.position.set( 0, 0, 0.1 );
// renderers
const rendererA = new THREE.WebGLRenderer( { antialias: true } );
rendererA.setSize( containerA.clientWidth, containerA.clientHeight );
rendererA.setPixelRatio( window.devicePixelRatio );
containerA.appendChild( rendererA.domElement );

const rendererB = new THREE.WebGLRenderer( { antialias: true } );
rendererB.setSize( widthB, heightB );
rendererB.setPixelRatio( window.devicePixelRatio );
containerB.appendChild( rendererB.domElement );

const rendererC = new THREE.WebGLRenderer( { antialias: true } );
rendererC.setSize( widthC, heightC );
rendererC.setPixelRatio( window.devicePixelRatio );
containerC.appendChild( rendererC.domElement );

const rendererD = new THREE.WebGLRenderer( { antialias: true } );
rendererD.setSize( widthD, heightD );
rendererD.setPixelRatio( window.devicePixelRatio );
containerD.appendChild( rendererD.domElement );

// one animation loop is enough: render() already draws all four renderers
rendererA.setAnimationLoop( ( ) => { render( ); } );

function render( ) {

	rendererA.render( sceneA, cameraA );
	rendererB.render( sceneB, cameraB );
	rendererC.render( sceneC, cameraC );
	rendererD.render( sceneD, cameraD );

}

Nice idea. Multiple renderers are probably the best solution.

You can also render the wireframe with a single renderer by using the mesh's onBeforeRender and onAfterRender callbacks and naming the wireframe camera 'wireframe':

camera.name = 'wireframe';

mesh.onBeforeRender = (renderer, scene, camera) => {
  if (camera.name === 'wireframe') {
    mesh.material.wireframe = true;
  }
};

mesh.onAfterRender = ( renderer, scene, camera ) => {
  mesh.material.wireframe = false;
};
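Typical usage would be rendering the same scene twice with one renderer (a sketch; mainCamera, wireframeCamera, width and height are placeholder names):

// the hooks above switch wireframe on only for the camera named 'wireframe'
renderer.setViewport( 0, 0, width / 2, height );
renderer.render( scene, mainCamera );

renderer.setViewport( width / 2, 0, width / 2, height );
renderer.render( scene, wireframeCamera ); // wireframeCamera.name === 'wireframe'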

imo multiple renderers are slow, you can't share data, everything gets double/triple/quadruple loaded, it's a massive memory sinkhole, and the browser has an arbitrary limit on canvas contexts; if that limit is reached, it kills the tab. the clean solution is gl.scissor, cutting one canvas into fragments, but that will get complex fast, so much that i'm not sure it would be feasible in oop vanilla. for instance, establishing isolation and syncing complex app logic is going to be rough, as is dealing with events properly, dealing with controls, etc.
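For context, the scissor approach in plain three.js looks roughly like this (a sketch, assuming a views array of { left, bottom, width, height, camera } with the rectangle given as fractions of the canvas; setViewport/setScissor take CSS pixels, since three multiplies by the pixel ratio internally):

function renderViews( renderer, scene, views ) {

	const width = renderer.domElement.clientWidth;
	const height = renderer.domElement.clientHeight;

	renderer.setScissorTest( true ); // only pixels inside the scissor rect get touched

	for ( const view of views ) {

		const x = Math.floor( view.left * width );
		const y = Math.floor( view.bottom * height );
		const w = Math.floor( view.width * width );
		const h = Math.floor( view.height * height );

		renderer.setViewport( x, y, w, h );
		renderer.setScissor( x, y, w, h );
		renderer.render( scene, view.camera );

	}

	renderer.setScissorTest( false );

}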

this is a problem that is solved in the react ecosystem for three, if that is a possibility. this is one canvas with view fragments. each is like a canvas in isolation: they can have their own cameras, environments, controls and scenes, and events naturally work. syncing is no problem at all because they are all the outcome of the same state.

the component is called drei/View.


It certainly depends on the use case.

I have not tested this in comparison to other solutions; it depends on the application. There are no problems with my construction grids. The user himself is the slowest part, clicking and dragging the markers.

To what extent?

In the last project, I combine the data from containers B, C and D to create a geometry in container A. This works quickly and without any problems.

it's generally not a good idea; the simplest reason is that the browser will kill your app, and there will be overhead and double loading because some (most? …) assets are tied to their renderer instance, three's cache et al. with gl.scissor/setViewport you can re-use everything: environments, textures, models, materials, shaders, geometries, …

plain three examples for this are three.js examples and three.js examples: the first one would not be technically doable with multiple canvases (it would crash), and the second would live on the edge and perform slower.

Don’t use multiple renderers if you can avoid it. They can’t easily share data and GL contexts are pretty heavyweight.

You want to use renderer.setViewport for this…

https://threejs.org/docs/#api/en/renderers/WebGLRenderer.setViewport

https://threejs.org/examples/webgl_multiple_views.html

I came to my construction step by step (see my first post here) through the following things.

Compare three.js scenes with overlay slider

The now-missing codepen from @looeee is available as a copy in the collection; see scene comparison.

At the time, @Mugen87 thought it was cool.

It may be that something has changed internally in three.js in the meantime. I don't have an overview of this; my thing is “crazy” geometries: hofk (Klaus Hoffmeister) · GitHub

I need sub-areas with a user interface that can be moved, resized, shown and hidden by the user: partly as a construction grid with an orthographic camera, partly as a 3D representation with a perspective camera. The data from the construction grids is used to generate various geometries.

Is this possible in a similarly simple way with viewport, multiple views? Are there any open source examples?


I didn't think setViewport was working, but then I removed * window.devicePixelRatio and all was well.

You can all get multiple canvases/renderers to work, but you might be leaving performance on the table, especially if each one is running its own requestAnimationFrame.
If you're only occasionally rendering on demand, the performance difference might not be too bad, but you will still need to load your assets into each context separately.
If instead you use one big canvas and just render to different regions of it, you can still update the regions on demand, but they can share resources and hang off a single requestAnimationFrame.

The ultimate example of this would be a 3D editor with multiple viewports rendered on the same canvas. In that scenario you definitely want to use renderer.setViewport to control the drawing.

It can feel tricky to get setViewport working, because of one quirk you might overlook in the docs:

https://threejs.org/docs/#api/en/renderers/WebGLRenderer.setViewport

Namely:

“Sets the viewport to render from (x, y) to (x + width, y + height).
(x, y) is the lower-left corner of the region.”

So it's basically flipped vertically from what you'd expect coming from the DOM, since in OpenGL, Y increases from the bottom of the screen up…
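A minimal sketch of that flip, converting a DOM element's rectangle into setViewport coordinates (domRectToViewport is a hypothetical helper; values stay in CSS pixels because three.js applies the pixel ratio internally):

function domRectToViewport( element, canvas ) {

	const rect = element.getBoundingClientRect();
	const canvasRect = canvas.getBoundingClientRect();

	return {
		x: rect.left - canvasRect.left,
		// flip: DOM y grows downwards, GL y grows upwards from the bottom-left corner
		y: canvasRect.bottom - rect.bottom,
		width: rect.width,
		height: rect.height
	};

}

// const v = domRectToViewport( viewDiv, renderer.domElement );
// renderer.setViewport( v.x, v.y, v.width, v.height );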


to add to what manthrax said: theoretically all canvases could be globbed into a single rAF loop, but the main overhead stems from GPU interfacing. THREE.WebGLRenderer caches geometries, materials, textures, etc.; for instance, two MeshBasicMaterials count against a single compiled instance of the shader. this is probably not the case across multiple renderers, since the caching happens in here: https://github.com/mrdoob/three.js/blob/99fd53fbadc511febcc11eab6273f47bf02a329e/src/renderers/WebGLRenderer.js#L320-L352
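one way to see that caching in action (a sketch; renderer.info.programs lists the compiled shader programs, and geometry/scene/camera are whatever you already have):

const red = new THREE.MeshBasicMaterial( { color: 0xff0000 } );
const blue = new THREE.MeshBasicMaterial( { color: 0x0000ff } );

scene.add( new THREE.Mesh( geometry, red ) );
scene.add( new THREE.Mesh( geometry, blue ) );

renderer.render( scene, camera );

// both materials resolve to the same compiled program,
// so the count stays lower than the material count
console.log( renderer.info.programs.length );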

at work, for a rather big codebase, we did try to use multiple renderers once and performance suffered a lot. gl.scissor can get complicated to get right, but it's worth it.

I set the renderer's autoClear to false and use multiple viewports. Before rendering I clear the renderer's color buffer, and in a loop, before rendering each viewport, I clear the depth buffer:

this.renderer.clearColor();
// Primary viewports must be rendered in reverse order:
// the topmost visible one is rendered last
for ( let i = this.visibleViewportCameras.length - 1; i >= 0; i-- ) {
	const camera = this.visibleViewportCameras[ i ];
	this.renderer.setViewport( camera.viewport.x, camera.viewport.y, camera.viewport.width, camera.viewport.height );
	this.renderer.clearDepth();
	this.renderer.render( this.scene, camera.activeProjection ); // render the scene for this viewport
}

It works with both perspective and orthographic projection cameras. Check this:

https://www.dei.isep.ipp.pt/~jpp/threeJS/Thumb_Raiser%20-%20Merged/Thumb_Raiser_Loquitas_10x10.html


Very interesting.

You use controls and UI elements in fixed positions.
Can these also be linked to individual areas at the bottom as I need them to be? (see my posts above)

Yes. Controls on the top right are using lil-gui API. I define their positions this way:

gui.domElement.style.position = "absolute";
gui.domElement.style.right = "0.5vw";
gui.domElement.style.top = "1.0vh";

If you want to put it at the bottom, I guess you’ll need to define the bottom property as well, e.g.:

gui.domElement.style.top = "50.0vh";
gui.domElement.style.bottom = "1.0vh";

The remaining GUI elements are plain HTML elements. I’m configuring them in the style section of the HTML file, but you can do it in a CSS file or in JavaScript instead:

    #views-panel {
        position: absolute;
        left: -50.0vmin;
        top: -49.0vh;
        width: 100.0vmin;
        font-size: 1.5vmin;
        color: white;
    }

    #subwindows-panel {
        position: absolute;
        left: -49.5vw;
        bottom: -49.0vh;
        font-size: 1.5vmin;
        color: white;
    }

I placed elements inside HTML tables so that they look neater.

Check the code here:

https://www.dei.isep.ipp.pt/~jpp/threeJS/Thumb_Raiser%20-%20Merged/