How do I render directly from GPU to a 3DRenderTarget?

I’m trying to do some GPU-side calculations, and I want to render my output to a 3D texture instead of the standard 2D texture by using WebGL3DRenderTarget.

In a regular 64x64 2D texture, I can query the XY fragment coordinates with gl_FragCoord.

void main() {
	vec3 uvw = vec3(gl_FragCoord.x, gl_FragCoord.y, 0.0) / 64.0;
	gl_FragColor.rgb = uvw;
	gl_FragColor.a = 1.0;
}

When I render to my Target, then sample this texture, I get red/green gradients as expected:

Question 1:
How would I write color to the desired XYZ fragment in a 3D texture? I’ve tried the following:

// gl_FragCoord.z always yields 0.5
vec3 uvw = vec3(gl_FragCoord.x / 64.0, gl_FragCoord.y / 64.0, gl_FragCoord.z);

// gl_FragDepth always yields 0
vec3 uvw = vec3(gl_FragCoord.x / 64.0, gl_FragCoord.y / 64.0, gl_FragDepth );

Question 2:
I’m using this trick from the official examples to sample values from a 3D texture. However, I only see the red/green gradient when my 3D coordinate is vec3(uv.x, uv.y, 0.0); anything above 0.0 on the z-axis and the texture turns black, which leads me to believe only the front “slice” is being rendered to, as in the example below. Is this connected to my issue in Question 1?

I’ve looked through the official WebGL2 examples and they all use a JavaScript array to create a Data3DTexture. However, I’m trying to compute my 3D texture on the GPU side.


I’ve created a minimal working example of this issue.

  • On the left side, I’m sampling a Data3DTexture that’s created via JavaScript. The blue channel smoothly transitions as I move up and down the z-axis.
  • On the right side, I’m sampling a WebGL3DRenderTarget texture written from a fragment shader. As you can see, color only renders to the texture when the z coordinate is 0.0; all the other “slices” are black.


How can I compute the rest of the 3DTexture when the z-coordinate is > 0.0? Any help or pointers would be greatly appreciated.

I’ve opened a question with a bounty on Stack Overflow, if anybody is interested in helping me tackle this issue. :wink:

Just answered on SO, but this example shows how to do it for array render targets:

https://threejs.org/examples/?q=array#webgl2_rendertarget_texture2darray

Basically you can set the layer of the render target to render into. The docs should be updated to note this:

renderer.setRenderTarget( target3d, layer );
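
In practice that means one draw call per z-slice, something like the loop below (a sketch, not runnable as-is: `target3d`, `material`, `quadScene`, `quadCamera`, and the `layer` uniform are placeholder names for your own fullscreen-quad setup):

```javascript
// Assumes: a 64³ WebGL3DRenderTarget `target3d`, a fullscreen quad in
// `quadScene`/`quadCamera`, and a ShaderMaterial `material` whose fragment
// shader reads a `layer` uniform as its normalized z coordinate.
const SIZE = 64;
for ( let layer = 0; layer < SIZE; layer ++ ) {
	material.uniforms.layer.value = layer / ( SIZE - 1 ); // normalized z for this slice
	renderer.setRenderTarget( target3d, layer );          // bind one z-slice as the color attachment
	renderer.render( quadScene, quadCamera );             // fill that slice
}
renderer.setRenderTarget( null ); // restore the default framebuffer
```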

Thank you so much for taking the time!

I was afraid it might require multiple draw calls. I’m a little bummed; I thought WebGL2 would let me tap into some super-efficient way to render a 64x64x64 texture in a single draw call. I guess computing onto a 64x64x64 3D texture is less efficient than a “tiled” 4096x64 2D texture that gets a similar result.
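
For context, by “tiled” I mean laying the 64 z-slices side by side along the x-axis of a 4096x64 2D texture. The index math is simple either way (an illustrative helper, not a three.js API):

```javascript
// Map between a voxel in a 64x64x64 volume and a texel in a 4096x64
// "tiled" 2D texture where slice z occupies columns [z*64, z*64+63].
const SIZE = 64;

function voxelToTiled( x, y, z ) {
	return { u: z * SIZE + x, v: y };
}

function tiledToVoxel( u, v ) {
	return { x: u % SIZE, y: v, z: Math.floor( u / SIZE ) };
}
```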

Yeah, as far as I know it’s not possible. I think in newer OpenGL / DirectX APIs you might be able to, and certainly with compute shaders in WebGPU you should be able to write randomly into the 3D texture’s contents. Have you done testing on performance differences in what you’re trying to do, though?

There are other benefits to 3D textures as well, such as being able to make a 256^3 texture (the tiled equivalent would be a 65536x256 2D texture, which exceeds the maximum texture size APIs allow), and they afford Z interpolation and faster sampling in shaders.
