Data3DTexture where each pixel is 16 bits precision

Hi,
I want to use Data3DTexture where each pixel is 16 bits precision.
I rewrote the sample program “webgl2_volume_perlin.html” as follows, but it does not work.

// Texture
const size = 128;
const data = new Uint16Array( size * size * size );

let i = 0;
const perlin = new ImprovedNoise();
const vector = new THREE.Vector3();

for ( let z = 0; z < size; z ++ ) {
	for ( let y = 0; y < size; y ++ ) {
		for ( let x = 0; x < size; x ++ ) {
			vector.set( x, y, z ).divideScalar( size );
			const d = perlin.noise( vector.x * 6.5, vector.y * 6.5, vector.z * 6.5 );
			data[ i ++ ] = d * 32768 + 32768;
		}
	}
}

const texture = new THREE.Data3DTexture( data, size, size, size );
texture.format = THREE.RedFormat;
texture.type = THREE.UnsignedShortType;
texture.minFilter = THREE.LinearFilter;
texture.magFilter = THREE.LinearFilter;
texture.unpackAlignment = 2;
texture.needsUpdate = true;

The following warning is displayed in the console.
Please let me know the correct way to do this.

Maybe it's because RedFormat is a 1-channel format.

I want 16 bits precision on the red component.
So I think RedFormat is correct.

The RED format only supports R8, R8_SNORM, R16F and R32F.

You end up with R16UI, which is an integer format, so it would be necessary to use RedIntegerFormat. However, that means using an integer texture, which is probably not what you want.

I suggest you use R16F instead. Try it with:

const size = 128;
const data = new Uint16Array( size * size * size );

let i = 0;
const perlin = new ImprovedNoise();
const vector = new THREE.Vector3();

for ( let z = 0; z < size; z ++ ) {

	for ( let y = 0; y < size; y ++ ) {

		for ( let x = 0; x < size; x ++ ) {

			vector.set( x, y, z ).divideScalar( size );

			const d = perlin.noise( vector.x * 6.5, vector.y * 6.5, vector.z * 6.5 );

			data[ i ++ ] = THREE.DataUtils.toHalfFloat( d );

		}

	}

}

const texture = new THREE.Data3DTexture( data, size, size, size );
texture.format = THREE.RedFormat;
texture.type = THREE.HalfFloatType;
texture.minFilter = THREE.LinearFilter;
texture.magFilter = THREE.LinearFilter;
texture.needsUpdate = true;

The documentation says:

RedFormat discards the green and blue components and reads just the red component.

RedIntegerFormat discards the green and blue components and reads just the red component. The texels are read as integers instead of floating point.

Thanks for the reply.

I tried your code (it is probably what I want to achieve), but the execution result is as follows.

I was expecting the same result as with the original code, so I am wondering why it is different.

Result of the original code:

I have made a few changes to your code.
This is exactly the code I was looking for. Thank you!

// Texture
const size = 128;
const data = new Uint16Array( size * size * size );

let i = 0;
const perlin = new ImprovedNoise();
const vector = new THREE.Vector3();

for ( let z = 0; z < size; z ++ ) {
	for ( let y = 0; y < size; y ++ ) {
		for ( let x = 0; x < size; x ++ ) {
			vector.set( x, y, z ).divideScalar( size );
			const d = perlin.noise( vector.x * 6.5, vector.y * 6.5, vector.z * 6.5 );
			data[ i ++ ] = THREE.DataUtils.toHalfFloat( (d * 128 + 128) / 256 );
		}
	}
}

const texture = new THREE.Data3DTexture( data, size, size, size );
texture.format = THREE.RedFormat;
texture.type = THREE.HalfFloatType;
texture.minFilter = THREE.LinearFilter;
texture.magFilter = THREE.LinearFilter;
texture.needsUpdate = true;