I’m getting a flipped texture when rendering to a render target using the WebGPU version of Three.js, even with flipY disabled

Hi everyone!

I’m trying to apply some effects to a texture using shaders. To do this, I use a full screen quad whose shader, for each fragment, takes the fragment’s UV coordinates, samples the texture at those coordinates, performs some calculations, and writes the result as the fragment color. I render into a render target so that the result ends up in a separate texture. As a simplified example, I just copy the color from the input texture without any modification, so I expect the original texture and the one in the render target to be identical.
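
To make that concrete, the pass looks something along these lines with the WebGPU renderer and TSL (a simplified sketch; inputTexture is a placeholder for the actual texture being processed):

import * as THREE from 'three/webgpu';
import { uv, texture } from 'three/tsl';

const renderer = new THREE.WebGPURenderer();
await renderer.init();

// Full screen quad: a 2x2 plane with an orthographic camera spanning
// NDC, so it covers the viewport regardless of camera settings.
const fsqCamera = new THREE.OrthographicCamera(-1, 1, 1, -1, 0, 1);
const fsqMaterial = new THREE.MeshBasicNodeMaterial();
const fsqMesh = new THREE.Mesh(new THREE.PlaneGeometry(2, 2), fsqMaterial);
const fsqScene = new THREE.Scene();
fsqScene.add(fsqMesh);

// Copy pass: sample the input texture at the fragment's UV and output
// it unchanged. inputTexture stands in for the real texture.
fsqMaterial.colorNode = texture(inputTexture, uv());

const rt = new THREE.RenderTarget(window.innerWidth, window.innerHeight);
renderer.setRenderTarget(rt);
renderer.render(fsqScene, fsqCamera);
renderer.setRenderTarget(null); // rt.texture now holds the result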

However, in practice, when rendering the texture from the render target to the canvas, it appears flipped along the Y-axis. This only happens when using the WebGPU version of Three.js and only when rendering into a render target. I created a similar example using the “classic” version of Three.js, and there it works as expected. The flipY setting on the texture has no effect on the final result in either version.

At first I thought this might be a bug, but since rendering to a render target is part of many rendering techniques, I’m inclined to think that I’m doing something wrong. I understand that I can “fix” the WebGPU version by simply inverting the uv.y coordinate in the shader, but the problem is that I don’t know exactly when the flipping happens: during rendering into the render target, or when rendering the texture from the render target to the canvas.

And since I plan to use this technique in a fairly complex multi-pass rendering pipeline, this could lead to difficult-to-diagnose issues. Also, not understanding the cause is keeping me up at night.

Here are two minimal examples I created. One uses WebGPU and NodeMaterial and exhibits the issue; the other uses the “classic” Three.js renderer. Both examples do the same thing (a condensed code sketch follows the list):

  • Create a full screen quad (FSQ) that covers the entire visible area regardless of camera settings
  • In the shader for the FSQ, compute the color of each fragment as vec4(vec3(uv.y), 1), which should produce a gradient from black to white going from bottom to top
  • Render the FSQ into a render target
  • Then assign a shader to the same FSQ that samples the texture from the render target using the fragment’s UV coordinates and outputs the sampled color
  • Render the FSQ to the canvas
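
Condensed into code, the two passes look roughly like this (TSL; this reuses the renderer, FSQ, and render target from the sketch above):

import { uv, texture, vec4, vec3 } from 'three/tsl';

// Pass 1: render the gradient into the render target.
fsqMaterial.colorNode = vec4(vec3(uv().y), 1);
fsqMaterial.needsUpdate = true;
renderer.setRenderTarget(rt);
renderer.render(fsqScene, fsqCamera);

// Pass 2: sample the render target texture and draw it to the canvas.
fsqMaterial.colorNode = texture(rt.texture, uv());
fsqMaterial.needsUpdate = true; // the node graph changed
renderer.setRenderTarget(null);
renderer.render(fsqScene, fsqCamera);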

When using the WebGPU version, the gradient is flipped — it goes from top to bottom, contrary to the expected result.

Both examples include a constant called RENDER_DIRECTLY_TO_CANVAS. When set to true, the render-to-texture step is skipped, and the scene is rendered directly to the canvas. In that case, both examples behave identically.
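
The toggle itself is just a branch like this (constant name as in the examples, reusing the setup above):

const RENDER_DIRECTLY_TO_CANVAS = true;

// With the render-to-texture step skipped, the gradient goes straight
// to the canvas, and both renderers produce the same image.
if (RENDER_DIRECTLY_TO_CANVAS) {
  fsqMaterial.colorNode = vec4(vec3(uv().y), 1);
  renderer.setRenderTarget(null);
  renderer.render(fsqScene, fsqCamera);
}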

WebGPU version
WebGL version


Maybe try oneMinus()

// rt.texture.flipY = false
// ...
fsqMesh.material.colorNode = vec4(vec3(uv().y.oneMinus()), 1);

and in your GLSL version

fragmentShader: `
  varying vec2 vUv;
  void main() {
    gl_FragColor = vec4(vec3(1.0 - vUv.y), 1.0);
  }
`,

Thank you. Yes, this helps, and I mentioned it in my question, but it doesn’t answer why this happens in WebGPU (even when using the forceWebGL flag) and not in WebGL. Nor does it clarify at which exact stage the flipping occurs: during rendering into the render target, or when rendering the texture from the render target to the canvas. In a complex rendering pipeline that relies on this technique, that uncertainty can easily lead to hard-to-diagnose bugs.
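
To narrow down the stage myself, my next idea is to read the render target back and check where the dark end of the gradient lands in memory. As far as I can tell, the WebGPU renderer exposes readRenderTargetPixelsAsync, so a sketch like this should show whether the data inside the render target is already flipped; if it is not, the flip has to happen when sampling it:

// Read back the first and last rows of the render target and compare
// the red channel: the gradient pass writes uv.y, so one row should be
// near zero and the other near the maximum value.
const w = rt.width;
const h = rt.height;
const firstRow = await renderer.readRenderTargetPixelsAsync(rt, 0, 0, w, 1);
const lastRow = await renderer.readRenderTargetPixelsAsync(rt, 0, h - 1, w, 1);
console.log('row 0:', firstRow[0], 'row h-1:', lastRow[0]);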