Is it possible to render a non-FloatType 3D texture?

I have successfully adapted the texture3d example for my application.

From the source code, the texture is expected to be THREE.FloatType, with the data in the red channel. My source data can come in various formats (uint8, int16, int32 or float32). If I convert the buffer data to float first, the rendering works as expected. However, if I do not convert the buffer and just set texture.type to the corresponding format (THREE.UnsignedByteType or THREE.UnsignedShortType), the MIP rendering gives me all zeros.

In order to render textures of other data types, is there anything else I need to change in the shaders besides the texture.type line highlighted above?

I don’t think there is anything you need to change. As long as you’re outputting 0 to 1 from the shader, it will map to 0 to 255 for UnsignedByteType or 0 to 65535 for UnsignedShortType. I’m not sure what happens if you output values outside the 0 to 1 range… I’d guess they get clamped.

In WebGL 1 we didn’t even have float buffers, and we had to pack floats into 8-bit RGBA and unpack them in the shader if we wanted to do something fancy.

Maybe helpful for debugging:

Example:

import { TextureHelper } from 'three/examples/jsm/helpers/TextureHelper.js';

const helper = new TextureHelper( texture );
scene.add( helper );

Thanks for both of your comments.

I made a reproducer of the issue I mentioned.

If I directly assign a Uint8Array buffer to the THREE.DataTexture3D object and set texture.type = THREE.UnsignedByteType, the rendering is completely empty.

However, if I convert the buffer to float32 first (I use numjs to handle the ND array buffer) and set texture.type = THREE.FloatType, the texture renders correctly.

The diff between the two files is shown below:

$ diff uint8_texture.html float_texture.html
1384c1384
<   lastvolumedata=volume.transpose().flatten();
---
>   lastvolumedata=nj.array(volume.transpose().flatten().selection.data, 'float32');
1388c1388
<   texture.type = dtype[lastvolumedata.dtype];
---
>   texture.type = THREE.FloatType;
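
For reference, here is a minimal sketch of the texture setup in the two cases, following the texture3d example (variable names such as uint8Data and the dimensions are placeholders, not the exact code from my reproducer):

// Uint8Array case: upload the raw bytes and declare the matching type
const texture = new THREE.DataTexture3D( uint8Data, xDim, yDim, zDim );
texture.format = THREE.RedFormat;
texture.type = THREE.UnsignedByteType;   // matches the buffer
texture.minFilter = texture.magFilter = THREE.LinearFilter;
texture.unpackAlignment = 1;
texture.needsUpdate = true;

// Float32Array case: convert the buffer first, then declare FloatType
const floatData = Float32Array.from( uint8Data );
const floatTexture = new THREE.DataTexture3D( floatData, xDim, yDim, zDim );
floatTexture.format = THREE.RedFormat;
floatTexture.type = THREE.FloatType;
floatTexture.minFilter = floatTexture.magFilter = THREE.LinearFilter;
floatTexture.unpackAlignment = 1;
floatTexture.needsUpdate = true;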

My shaders were copied from the three.js texture3d example.

Is there anything else I need to change in addition to the texture.type flag?

Update: my Uint8Array test example above actually works, except that one must drag the “Upper-bound” slider all the way to the left.

I am not entirely sure why this is the case - the volume is a binary mask with 0/1 uint8 values. I thought that setting uniform["u_clim"] to [0, 1] in the shader would be enough to show the texture.

The colormap is computed by this function in the shader code:

		'		vec4 apply_colormap(float val) {',
		'				val = (val - u_clim.x) / (u_clim.y - u_clim.x);',
		'				return texture2D(u_cmdata, vec2(val, 0.5));',
		'		}',

where uniform["u_clim"] = [0, 1] defines the min and max of the input data, and val is the MIP value computed along the ray (defined as a float). I expected that when the ray passes through the 1-valued voxels of the mask, the colormap above would return the color at the far end of the x-axis. However, for some reason the float-valued MIP does this correctly, while the uint8-valued MIP fails to reach the highest color in the colormap.

I spent a bit of time yesterday looking into this, and I believe I understand why.

The 3D texture example on the three.js website uses sampler3D as the sampler type for the input data.

This works only for float data. If I assign the texture a uint8 buffer, one needs to use usampler3D for unsigned integer types; similarly, isampler3D for signed integer types.

I found that if I use sampler3D to read a Uint8Array buffer, I can still use the shader if I multiply the readout by 255. Unfortunately this does not work if I read a uint16 buffer. I did not test int32/uint32, but I assume they won’t work either.

Anyway, the current demo code only works for FloatType. One can assign a non-float buffer, but the shader code must use the proper sampler to read the data for rendering.
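
To illustrate, here is a sketch of the two sample-function variants I mean. The uniform name u_data and the sample1 signature follow the example's shader; the usampler3D variant is only meaningful if the texture is uploaded with an integer internal format (e.g. THREE.RedIntegerFormat with NearestFilter), which the example does not do:

// Variant 1: normalized sampler - an UnsignedByteType / RedFormat texture reads
// back as a float in [0, 1], so rescale to recover the original 0..255 values
uniform sampler3D u_data;
float sample1( vec3 texcoords ) {
	return texture( u_data, texcoords.xyz ).r * 255.0;
}

// Variant 2: unsigned integer sampler - only works with an integer internal
// format; the readout is the raw integer value, converted to float here
uniform usampler3D u_data;
float sample1( vec3 texcoords ) {
	return float( texture( u_data, texcoords.xyz ).r );
}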

Uint8Arrays have a 0 to 255 range, not 0 to 1. Is that what you’re hitting?
If you use sampler3D you’ll get a 0 to 1 range out for the 0 to 255 you put in.
If you use one of those integer-type samplers you’ll get back 0 to 255…
Not positive on this… just speculating…

This works only for float data. If I assign the texture a uint8 buffer, one needs to use usampler3D for unsigned integer types; similarly, isampler3D for signed integer types.

This isn’t right. Byte textures can be sampled as [0, 1] float values on the GPU from 3D textures the same way it works for 2D textures. There must be another issue, like the input data not being in the right domain. You should produce a minimal example of the problem if you’d like more exact help, though.

This being marked as a solution will be misleading to others who find the post later.

Yeah!! That’s what I thought… The isampler3D/usampler3D types are just for when you want to read integer values out of the texture. The regular sampler returns floats, etc. They work interchangeably with the input type. I’m guessing the OP was passing 0 and 1 values in a uint8 buffer instead of 0 to 255.
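
If that is the case, here is a minimal sketch of the two possible fixes (purely illustrative; uint8Data is a placeholder, and the u_clim uniform is assumed to be a THREE.Vector2 as in the example). A stored value of 1 reads back as 1/255 ≈ 0.004 from a normalized byte texture, which would also explain why dragging the “Upper-bound” slider almost all the way down made it show up:

// Option 1: stretch the binary mask so a value of 1 becomes 255 before upload
for ( let i = 0; i < uint8Data.length; i ++ ) uint8Data[ i ] *= 255;

// Option 2: keep the 0/1 data and move the window's upper bound to 1/255 instead
material.uniforms[ 'u_clim' ].value.set( 0.0, 1.0 / 255.0 );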
