WebXR issue with UVs / Viewport when rendering into a WebGLRenderTarget that is not matching screen res

Hi, I've been banging my head against this one for a while and I'm hoping someone can help.

Firstly, my assumption is that using the full screen quad shader technique I lifted from here: three.js webgl - shader [Monjori], I'm able to draw into a WebGLRenderTarget of any resolution and it will fill the render target texture. The simple shader below just draws the UVs as color to confirm the UVs are coming through as expected, as values between 0 and 1.

const fsGeom = new THREE.PlaneGeometry( 2, 2 );
const fsMat = new THREE.ShaderMaterial({
  uniforms: {
    //
  },
  vertexShader: `
    varying vec2 vUv;
    void main(){ 
      vUv = uv;
      gl_Position = vec4( position, 1.0 );
    }
  `,
  fragmentShader: `
    varying vec2 vUv;
    
    void main(){
      gl_FragColor = vec4(vUv.xy, 0.0, 1.0);
    }
  `,
});
const fsMesh = new THREE.Mesh(fsGeom, fsMat);
fsMesh.frustumCulled = false;
const fsCamera = new THREE.OrthographicCamera( - 1, 1, 1, - 1, 0, 1 );
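
For completeness, drawing the quad into the target is just a plain render-to-target pass (same pattern as the render() function in the full test code further down):

renderer.setRenderTarget( renderTarget );
renderer.render( fsMesh, fsCamera ); // position is already in clip space, so the quad fills the whole target
renderer.setRenderTarget( null );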

The above works for me across all browsers, mobile etc.
But it gets weird when switching to WebXR (AR on a Google Pixel): the UVs are no longer between 0 and 1… (the aforementioned weirdness).
My test WebGLRenderTarget resolution is 512 x 512, and the UVs now come through as (512 / window.innerWidth, 512 / window.innerHeight).

This seems to suggest the viewport is always assuming the fullscreen resolution instead of the custom resolution of the WebGLRenderTarget?
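
One way to sanity check that would be to log the raw GL viewport straight after the render-to-target pass (rough sketch, not part of my test code, using the renderer / render target / fullscreen quad from the full test code below):

const gl = renderer.getContext();

renderer.setRenderTarget( renderTarget );
renderer.render( fsMesh, fsCamera );

// Outside XR this logs [0, 0, 512, 512]; if the viewport really is being driven by the
// XR framebuffer while presenting, it will log that size instead of the target size.
console.log( 'GL viewport:', gl.getParameter( gl.VIEWPORT ) );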

Full test code below:

import * as THREE from 'three';
import { OrbitControls } from 'three/examples/jsm/controls/OrbitControls.js';
import { ARButton } from 'three/examples/jsm/webxr/ARButton.js';

const w = window.innerWidth;
const h = window.innerHeight;
const r = window.devicePixelRatio;

const container = document.createElement( 'div' );
document.body.appendChild( container );

const scene = new THREE.Scene();

const camera = new THREE.PerspectiveCamera( 50, w / h, 0.1, 10 );
camera.position.set( 0, 0, 3 );

const controls = new OrbitControls( camera, container );
controls.minDistance = 0;
controls.maxDistance = 8;

const renderer = new THREE.WebGLRenderer( { antialias: true, alpha: true } );
renderer.setPixelRatio( r );
renderer.setSize( w, h );
renderer.outputEncoding = THREE.sRGBEncoding;
renderer.xr.enabled = true;
renderer.xr.addEventListener( 'sessionstart', onXRSessionStart );
renderer.xr.addEventListener( 'sessionend', onXRSessionEnd );
container.appendChild( renderer.domElement );

const renderTarget = new THREE.WebGLRenderTarget(512, 512, {
  wrapS: THREE.ClampToEdgeWrapping,
  wrapT: THREE.ClampToEdgeWrapping,
  magFilter: THREE.LinearFilter,
  minFilter: THREE.LinearFilter,
  generateMipmaps: false,
  format: THREE.RGBAFormat,
  type: THREE.UnsignedByteType,
  anisotropy: 1,
  encoding: THREE.LinearEncoding,
  depthBuffer: true,
  stencilBuffer: true,
  samples: 0,
});

// fullscreen quad - draws a quad full screen to the render target viewport.
// based on this threejs example - https://threejs.org/examples/webgl_shader.html

const fsGeom = new THREE.PlaneGeometry( 2, 2 );
const fsMat = new THREE.ShaderMaterial({
  uniforms: {
    //
  },
  vertexShader: `
    varying vec2 vUv;
    void main(){ 
      vUv = uv;
      gl_Position = vec4( position, 1.0 );
    }
  `,
  fragmentShader: `
    varying vec2 vUv;
    
    void main(){
      gl_FragColor = vec4(vUv.xy, 0.0, 1.0);
    }
  `,
});
const fsMesh = new THREE.Mesh(fsGeom, fsMat);
fsMesh.frustumCulled = false;
const fsCamera = new THREE.OrthographicCamera( - 1, 1, 1, - 1, 0, 1 );

// 1x1 plane which uses the render target as a texture to show the UV values.

const planeGeom = new THREE.PlaneGeometry(1, 1);
const planeMat = new THREE.MeshBasicMaterial({
  color: 0xffffff,
  map: renderTarget.texture,
});
const planeMesh = new THREE.Mesh(planeGeom, planeMat);
scene.add( planeMesh );

document.body.appendChild( ARButton.createButton( renderer, {
  requiredFeatures: ['hit-test'],
}));

window.addEventListener( 'resize', onWindowResize );

renderer.setAnimationLoop( render );

function onXRSessionStart() {
  planeMesh.position.z = -1.5;
}

function onXRSessionEnd() {
  camera.position.set(0, 0, 5);
  camera.lookAt(0, 0, 0);
  camera.updateMatrixWorld();
}

function onWindowResize() {
  camera.aspect = window.innerWidth / window.innerHeight;
  camera.updateProjectionMatrix();

  renderer.setSize( window.innerWidth, window.innerHeight );
}

function render() {
  const renderTargetSaved = renderer.getRenderTarget();

  renderer.setRenderTarget(renderTarget);
  renderer.clear();
  renderer.render(fsMesh, fsCamera);

  renderer.setRenderTarget(renderTargetSaved);

  renderer.render( scene, camera );
}

I've localised the issue to this bit of code in the WebGLRenderer render function:

	this.render = function ( scene, camera ) {

		...

		if ( xr.enabled === true && xr.isPresenting === true ) {

			if ( xr.cameraAutoUpdate === true ) xr.updateCamera( camera );

			camera = xr.getCamera(); // use XR camera for rendering

		}

		...
}

The issue appears to be that the camera passed into the render function is overridden and replaced by the XR camera every time render is called. This is fine when you just want to render the scene, but it falls over when other cameras need to be used for rendering to WebGLRenderTargets.

Not sure what the solution could be atm? It's probably a much bigger API discussion.
Maybe a new XR camera class/object, or a property on the camera object to earmark it as an XR camera, so that in the render function we can discern which one is which and handle the different use cases?

Here is my dirty little hack that gets around this issue.

When creating the default camera used for rendering the XR scene, mark it with the name xrCamera:

const camera = new PerspectiveCamera(20, screenWidth / screenHeight, 0.01, 1000);
camera.name = 'xrCamera';

In the render function, only execute the XR camera code when the camera is marked as the XR camera:

		if ( xr.enabled === true && xr.isPresenting === true && camera.name === 'xrCamera' ) {

			if ( xr.cameraAutoUpdate === true ) xr.updateCamera( camera );

			camera = xr.getCamera(); // use XR camera for rendering

		}

So when rendering into a WebGLRenderTarget that is not the same size as the canvas framebuffer, use a different camera that is not named xrCamera. Likewise for post-processing effects: use an OrthographicCamera that is not named xrCamera and it should work as expected. Only use the xrCamera name when rendering the XR scene to the default canvas framebuffer, or to a WebGLRenderTarget that is exactly the same size as the canvas framebuffer.
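
To make that concrete, with the patched renderer the test code from the top of the post only needs its cameras marked like this (sketch):

// Scene camera: earmarked, so it still gets swapped for xr.getCamera().
const camera = new THREE.PerspectiveCamera( 50, w / h, 0.1, 10 );
camera.name = 'xrCamera';

// Render target / post-processing camera: left unnamed, so it bypasses the XR override.
const fsCamera = new THREE.OrthographicCamera( - 1, 1, 1, - 1, 0, 1 );

function render() {
  renderer.setRenderTarget( renderTarget );
  renderer.clear();
  renderer.render( fsMesh, fsCamera ); // fsCamera used as-is, viewport stays 512 x 512

  renderer.setRenderTarget( null );
  renderer.render( scene, camera ); // camera replaced by the XR camera as before
}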

Would love to help figure out a proper solution with others more experienced in three.js / XR.

Thank you for narrowing this down; I faced similar trouble using a render target texture in WebXR.

Another workaround that doesn’t require editing Three.js source is to temporarily disable XR before rendering the texture, similar to how you save and restore the render target.

// Save
const renderTargetSaved = renderer.getRenderTarget();
const xrEnabledSaved = renderer.xr.enabled;

// Render with XR temporarily disabled
renderer.xr.enabled = false;
renderer.setRenderTarget(renderTarget);
renderer.render(scene, camera);

// Restore
renderer.setRenderTarget(renderTargetSaved);
renderer.xr.enabled = xrEnabledSaved;
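
For reference, here's roughly what that looks like dropped into the render() function from the original post (untested sketch):

function render() {
  // Save current state.
  const renderTargetSaved = renderer.getRenderTarget();
  const xrEnabledSaved = renderer.xr.enabled;

  // Draw the quad into the 512 x 512 target with XR temporarily off,
  // so fsCamera is used as-is and the viewport matches the target.
  renderer.xr.enabled = false;
  renderer.setRenderTarget( renderTarget );
  renderer.clear();
  renderer.render( fsMesh, fsCamera );

  // Restore state and render the scene normally (XR camera, XR framebuffer).
  renderer.xr.enabled = xrEnabledSaved;
  renderer.setRenderTarget( renderTargetSaved );
  renderer.render( scene, camera );
}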