Data3DTexture where each pixel is 16 bits precision

Hi,
I want to use Data3DTexture where each pixel is 16 bits precision.
I rewrote the sample program “webgl2_volume_perlin.html” as follows, but it does not work.

// Texture
const size = 128;
const data = new Uint16Array( size * size * size );

let i = 0;
const perlin = new ImprovedNoise();
const vector = new THREE.Vector3();

for ( let z = 0; z < size; z ++ ) {
	for ( let y = 0; y < size; y ++ ) {
		for ( let x = 0; x < size; x ++ ) {
			vector.set( x, y, z ).divideScalar( size );
			const d = perlin.noise( vector.x * 6.5, vector.y * 6.5, vector.z * 6.5 );
			data[ i ++ ] = d * 32768 + 32768;
		}
	}
}

const texture = new THREE.Data3DTexture( data, size, size, size );
texture.format = THREE.RedFormat;
texture.type = THREE.UnsignedShortType;
texture.minFilter = THREE.LinearFilter;
texture.magFilter = THREE.LinearFilter;
texture.unpackAlignment = 2;
texture.needsUpdate = true;

The following warning is displayed in the console.
Please let me know the correct way to do this.
[screenshot of the console warning]

Maybe it's because RedFormat is a 1-channel format.

I want 16 bits precision on the red component.
So I think RedFormat is correct.

In WebGL2, the RED format only supports the internal formats R8, R8_SNORM, R16F and R32F.

With RedFormat and UnsignedShortType you end up with R16UI, which is an integer internal format, so it would be necessary to use RedIntegerFormat. However, that means using an integer texture, which is probably not what you want.
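For completeness, the integer-texture route would look roughly like this (a sketch, not a recommendation: integer textures cannot be linearly filtered, and the shader would have to sample the texture through a `usampler3D` uniform instead of `sampler3D`):

```javascript
// Sketch: unsigned-integer 3D texture (R16UI), assuming the same
// `data` Uint16Array and `size` as above. Integer formats disallow
// linear filtering, so NearestFilter is mandatory.
const texture = new THREE.Data3DTexture( data, size, size, size );
texture.format = THREE.RedIntegerFormat;
texture.type = THREE.UnsignedShortType;
texture.internalFormat = 'R16UI'; // pin the WebGL2 internal format
texture.minFilter = THREE.NearestFilter;
texture.magFilter = THREE.NearestFilter;
texture.needsUpdate = true;
```

In the shader, the sampled values would arrive as raw integers in [0, 65535] and have to be divided by 65535.0 manually.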

I suggest you use R16F instead. Try it with:

const size = 128;
const data = new Uint16Array( size * size * size );

let i = 0;
const perlin = new ImprovedNoise();
const vector = new THREE.Vector3();

for ( let z = 0; z < size; z ++ ) {

	for ( let y = 0; y < size; y ++ ) {

		for ( let x = 0; x < size; x ++ ) {

			vector.set( x, y, z ).divideScalar( size );

			const d = perlin.noise( vector.x * 6.5, vector.y * 6.5, vector.z * 6.5 );

			data[ i ++ ] = THREE.DataUtils.toHalfFloat( d );

		}

	}

}

const texture = new THREE.Data3DTexture( data, size, size, size );
texture.format = THREE.RedFormat;
texture.type = THREE.HalfFloatType;
texture.minFilter = THREE.LinearFilter;
texture.magFilter = THREE.LinearFilter;
texture.needsUpdate = true;

The documentation says:

RedFormat discards the green and blue components and reads just the red component.

RedIntegerFormat discards the green and blue components and reads just the red component. The texels are read as integers instead of floating point.

Thanks for the reply.

I tried your code (it is probably what I want to achieve), but the execution result is as follows:

[screenshot: result of the half-float code]

I was expecting the same execution result as the original code, so I am wondering why the result is different.

Result of the original code:

[screenshot: result of the original code]

I have made a small change to your code: since perlin.noise() returns values in [-1, 1], I remap them to [0, 1] before converting to half float, matching the range the original normalized Uint8 code produced.
This is exactly the code I was looking for. Thank you!

// Texture
const size = 128;
const data = new Uint16Array( size * size * size );

let i = 0;
const perlin = new ImprovedNoise();
const vector = new THREE.Vector3();

for ( let z = 0; z < size; z ++ ) {
	for ( let y = 0; y < size; y ++ ) {
		for ( let x = 0; x < size; x ++ ) {
			vector.set( x, y, z ).divideScalar( size );
			const d = perlin.noise( vector.x * 6.5, vector.y * 6.5, vector.z * 6.5 );
			data[ i ++ ] = THREE.DataUtils.toHalfFloat( ( d * 128 + 128 ) / 256 ); // map noise from [-1, 1] to [0, 1]
		}
	}
}

const texture = new THREE.Data3DTexture( data, size, size, size );
texture.format = THREE.RedFormat;
texture.type = THREE.HalfFloatType;
texture.minFilter = THREE.LinearFilter;
texture.magFilter = THREE.LinearFilter;
texture.needsUpdate = true;
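The difference between the two results comes down to value range: the original example stored the noise in an unsigned normalized texture, so the shader always saw values in [0, 1], whereas `toHalfFloat( d )` stored the raw signed values in [-1, 1]. The remap `( d * 128 + 128 ) / 256` used above is algebraically just `( d + 1 ) / 2`. A quick plain-JavaScript check (no three.js needed):

```javascript
// The remap used above, and its algebraic simplification.
const remap = ( d ) => ( d * 128 + 128 ) / 256;
const simplified = ( d ) => ( d + 1 ) / 2;

// Both map -1 → 0, 0 → 0.5, 1 → 1.
console.log( remap( -1 ), remap( 0 ), remap( 1 ) ); // 0 0.5 1

for ( const d of [ -1, -0.5, 0, 0.37, 1 ] ) {
	if ( remap( d ) !== simplified( d ) ) throw new Error( 'mismatch' );
}
```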

While converting uint16 to half floats works, r184 will add the EXT_texture_norm16 texture formats, which allow using (u)int16 textures that are normalized to [0, 1] or [-1, 1], respectively.

I might add a usage example once r184 is available (planned for April 2026).

Just to add a bit from my experience working with large procedural worlds and data-driven environments in Three.js.

If you are targeting WebGL2, the combination of UnsignedShortType or HalfFloatType with RedFormat usually works, but the real limitation often comes from the internal texture format chosen by the renderer: WebGL2 only accepts specific internalformat/format/type combinations, such as R16F with RedFormat + HalfFloatType, or R16UI with RedIntegerFormat + UnsignedShortType.

So even if the typed array is correct, the GPU driver may complain because the internal format selected by Three.js does not map exactly to what the hardware expects.
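When that happens, three.js lets you pin the internal format yourself via the texture's `internalFormat` property (a sketch; the string must agree with the chosen format and type, otherwise WebGL rejects the upload):

```javascript
// Sketch: making the internalformat/format/type triple explicit,
// assuming `data` and `size` as in the earlier posts.
const texture = new THREE.Data3DTexture( data, size, size, size );
texture.format = THREE.RedFormat;
texture.type = THREE.HalfFloatType;
texture.internalFormat = 'R16F'; // must match format + type
texture.minFilter = THREE.LinearFilter;
texture.magFilter = THREE.LinearFilter;
texture.needsUpdate = true;
```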

Another thing worth checking is GPU compatibility. Some mobile GPUs behave differently with half floats or 16 bit integer textures, especially when linear filtering is enabled. On some devices you may need to:

  • use NearestFilter instead of LinearFilter

  • verify support for half float filtering extensions

  • test with smaller volumes first such as 64³
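A runtime probe along those lines could look like this (a sketch; `renderer.extensions.has()` is the three.js way to query WebGL extensions, and `renderer`/`texture` are assumed to exist already):

```javascript
// Sketch: check float-related extensions before committing to a
// filtering mode. OES_texture_float_linear gates linear filtering
// of 32-bit float textures; EXT_color_buffer_half_float gates
// rendering into half-float targets.
const hasFloatLinear = renderer.extensions.has( 'OES_texture_float_linear' );
const hasHalfFloatRT = renderer.extensions.has( 'EXT_color_buffer_half_float' );

if ( ! hasFloatLinear ) {

	// 32-bit float textures cannot be linearly filtered on this
	// device; fall back to nearest filtering (or to HalfFloatType).
	texture.minFilter = THREE.NearestFilter;
	texture.magFilter = THREE.NearestFilter;

}
```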

I work a lot with procedural environments and simulation systems and I ran into similar issues when building volumetric data structures and terrain density fields. In practice I found that:

  • HalfFloatType is often more portable than UnsignedShortType

  • but integer precision textures can be better when exact values matter

So the solution posted using DataUtils.toHalfFloat() is actually a good compromise when you want more precision but still want to keep GPU compatibility.

Also if the goal is volumetric rendering or density fields, another approach is to encode the data across multiple channels or pack values manually depending on what the shader expects.
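As a concrete illustration of manual packing (plain JavaScript; `pack16`/`unpack16` are hypothetical helper names): a 16-bit value can be split across the two 8-bit channels of an RG texture and reassembled in the shader.

```javascript
// Split a 16-bit unsigned value into high/low bytes suitable for
// the R and G channels of an 8-bit RG texture.
function pack16( value ) {

	return [ ( value >> 8 ) & 0xff, value & 0xff ]; // [ high, low ]

}

// Inverse operation, for verifying the round trip on the CPU.
function unpack16( high, low ) {

	return ( high << 8 ) | low;

}

// In GLSL, decoding a normalized RG texel back to [0, 1] would be:
//   float v = ( texel.r * 255.0 * 256.0 + texel.g * 255.0 ) / 65535.0;

console.log( unpack16( ...pack16( 51234 ) ) ); // 51234
```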

Just sharing this in case it helps others running into the same issue with Data3DTexture.