How to render coloured volume with Data3DTexture and RGBAFormat?

I have RGBA volume data and want to color each voxel accordingly. I am using the code from the three.js volume rendering example.

However, the shader in this example uses only the “red” channel together with a colormap. I would like to modify the shader to show the actual color from the RGBA texture data.

I tried to replace apply_colormap(max_val) with texture(u_data, loc.xyz) in the shader code, but so far I have had no success. I have no experience with shader code, so I would appreciate your help.
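
This is roughly the change I attempted (just a sketch, not the full shader; u_data, loc and max_val are the names used in the example's cast_mip() function):

    // Original resolve step in cast_mip():
    //     gl_FragColor = apply_colormap(max_val);
    // My attempt: read the RGBA color directly from the 3D texture instead
    gl_FragColor = texture(u_data, loc.xyz);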

I use the following texture format:

    texture.format = THREE.RGBAFormat;
    texture.type = THREE.UnsignedByteType;

Post your code in a Glitch or a CodePen, or at least link the relevant files containing the shader you’re trying to modify.

This is like asking a mechanic how to fix your car, via text.

Thanks for your reply. Below is the code for creating the volume object:

        // Imports assumed by this snippet:
        // import { Data3DTexture, RGBAFormat, UnsignedByteType, LinearFilter,
        //   UniformsUtils, ShaderMaterial, BackSide, BoxGeometry, Mesh } from 'three';
        // import { VolumeRenderShader1 } from 'three/examples/jsm/shaders/VolumeShader.js';

        const volume = {
          data: dataArray, // RGBA Uint8Array (width * height * depth * 4 values)
          xLength: width,
          yLength: height,
          zLength: depth,
        };
        // Shader source:
        // https://github.com/mrdoob/three.js/blob/841d2e791d3e8a2463322c5ca31b16956828b91c/examples/jsm/shaders/VolumeShader.js
        const shader = VolumeRenderShader1;
        const uniforms = UniformsUtils.clone(shader.uniforms);

        const texture = new Data3DTexture(
          volume.data,
          volume.xLength,
          volume.yLength,
          volume.zLength
        );
        texture.format = RGBAFormat;
        texture.type = UnsignedByteType;
        texture.minFilter = texture.magFilter = LinearFilter;
        texture.unpackAlignment = 1;
        texture.needsUpdate = true;

        uniforms['u_data'].value = texture;
        uniforms['u_size'].value.set(
          volume.xLength,
          volume.yLength,
          volume.zLength
        );
        // not required as volume data is already adjusted
        // uniforms['u_clim'].value.set(config.clim1, config.clim2);
        // uniforms['u_cmdata'].value = this.cmTextures[config.colormap];
        // uniforms['u_renderthreshold'].value = config.isothreshold; // For ISO renderstyle
        uniforms['u_renderstyle'].value = config.renderstyle === 'mip' ? 0 : 1; // 0: MIP, 1: ISO

        const material = new ShaderMaterial({
          uniforms: uniforms,
          vertexShader: shader.vertexShader,
          fragmentShader: shader.fragmentShader,
          side: BackSide, // The volume shader uses the backface as its "reference point"
        });

        const geometry = new BoxGeometry(
          volume.xLength,
          volume.yLength,
          volume.zLength
        );
        geometry.translate(
          volume.xLength / 2 - 0.5,
          volume.yLength / 2 - 0.5,
          volume.zLength / 2 - 0.5
        );

        const mesh = new Mesh(geometry, material);

As said, I would like to modify the fragmentShader to show each voxel in its corresponding RGB color from the volume data.

I changed the sample1 function to return the luminance of the RGB value instead of the intensity of the “red” channel. I also replaced apply_colormap(....) with texture(u_data, loc.xyz) in the fragment shader below:

				precision highp float;
				precision mediump sampler3D;

				uniform vec3 u_size;
				uniform int u_renderstyle;
				uniform float u_renderthreshold;
				uniform vec2 u_clim;

				uniform sampler3D u_data;
				uniform sampler2D u_cmdata;

				varying vec3 v_position;
				varying vec4 v_nearpos;
				varying vec4 v_farpos;

				// The maximum distance through our rendering volume is sqrt(3).
				const int MAX_STEPS = 887;	// 887 for 512^3, 1774 for 1024^3
				const int REFINEMENT_STEPS = 4;
				const float relative_step_size = 1.0;
				const vec4 ambient_color = vec4(0.2, 0.4, 0.2, 1.0);
				const vec4 diffuse_color = vec4(0.8, 0.2, 0.2, 1.0);
				const vec4 specular_color = vec4(1.0, 1.0, 1.0, 1.0);
				const float shininess = 40.0;

				void cast_mip(vec3 start_loc, vec3 step, int nsteps, vec3 view_ray);
				void cast_iso(vec3 start_loc, vec3 step, int nsteps, vec3 view_ray);

				float sample1(vec3 texcoords);
				vec4 apply_colormap(float val);
				vec4 add_lighting(float val, vec3 loc, vec3 step, vec3 view_ray);


				void main() {
						// Normalize clipping plane info
						vec3 farpos = v_farpos.xyz / v_farpos.w;
						vec3 nearpos = v_nearpos.xyz / v_nearpos.w;

						// Calculate unit vector pointing in the view direction through this fragment.
						vec3 view_ray = normalize(nearpos.xyz - farpos.xyz);

						// Compute the (negative) distance to the front surface or near clipping plane.
						// v_position is the back face of the cuboid, so the initial distance calculated in the dot
						// product below is the distance from near clip plane to the back of the cuboid
						float distance = dot(nearpos - v_position, view_ray);
						distance = max(distance, min((-0.5 - v_position.x) / view_ray.x,
																				(u_size.x - 0.5 - v_position.x) / view_ray.x));
						distance = max(distance, min((-0.5 - v_position.y) / view_ray.y,
																				(u_size.y - 0.5 - v_position.y) / view_ray.y));
						distance = max(distance, min((-0.5 - v_position.z) / view_ray.z,
																				(u_size.z - 0.5 - v_position.z) / view_ray.z));

						// Now we have the starting position on the front surface
						vec3 front = v_position + view_ray * distance;

						// Decide how many steps to take
						int nsteps = int(-distance / relative_step_size + 0.5);
						if ( nsteps < 1 )
								discard;

						// Get starting location and step vector in texture coordinates
						vec3 step = ((v_position - front) / u_size) / float(nsteps);
						vec3 start_loc = front / u_size;

						// For testing: show the number of steps. This helps to establish
						// whether the rays are correctly oriented
						// gl_FragColor = vec4(0.0, float(nsteps) / 1.0 / u_size.x, 1.0, 1.0);
						// return;

						if (u_renderstyle == 0)
								cast_mip(start_loc, step, nsteps, view_ray);
						else if (u_renderstyle == 1)
								cast_iso(start_loc, step, nsteps, view_ray);

						if (gl_FragColor.a < 0.05)
								discard;
				}

				float sample1(vec3 texcoords) {
						/* Sample float value from a 3D texture. Assumes intensity data. */
						// return texture(u_data, texcoords.xyz).r;
						/* Luminance of texture RGB value */
						return dot(texture(u_data, texcoords.xyz), vec4(0.2125, 0.7154, 0.0721, 1));
				}

				vec4 apply_colormap(float val) {
						val = (val - u_clim[0]) / (u_clim[1] - u_clim[0]);
						return texture2D(u_cmdata, vec2(val, 0.5));
				}


				void cast_mip(vec3 start_loc, vec3 step, int nsteps, vec3 view_ray) {

						float max_val = -1e6;
						int max_i = 100;
						vec3 loc = start_loc;

						// Enter the raycasting loop. In WebGL 1 the loop index cannot be compared with
						// non-constant expression. So we use a hard-coded max, and an additional condition
						// inside the loop.
						for (int iter=0; iter<MAX_STEPS; iter++) {
								if (iter >= nsteps)
										break;
								// Sample from the 3D texture
								float val = sample1(loc);
								// Apply MIP operation
								if (val > max_val) {
										max_val = val;
										max_i = iter;
								}
								// Advance location deeper into the volume
								loc += step;
						}

						// Refine location, gives crispier images
						vec3 iloc = start_loc + step * (float(max_i) - 0.5);
						vec3 istep = step / float(REFINEMENT_STEPS);
						for (int i=0; i<REFINEMENT_STEPS; i++) {
								max_val = max(max_val, sample1(iloc));
								iloc += istep;
						}

						// Resolve final color						
						//gl_FragColor = apply_colormap(max_val);
						/* RGB value from texture data */
						gl_FragColor = texture(u_data, iloc.xyz);
				}


				void cast_iso(vec3 start_loc, vec3 step, int nsteps, vec3 view_ray) {

						gl_FragColor = vec4(0.0);	// init transparent
						vec4 color3 = vec4(0.0);	// final color
						vec3 dstep = 1.5 / u_size;	// step to sample derivative
						vec3 loc = start_loc;

						float low_threshold = u_renderthreshold - 0.02 * (u_clim[1] - u_clim[0]);

						// Enter the raycasting loop. In WebGL 1 the loop index cannot be compared with
						// non-constant expression. So we use a hard-coded max, and an additional condition
						// inside the loop.
						for (int iter=0; iter<MAX_STEPS; iter++) {
								if (iter >= nsteps)
										break;

								// Sample from the 3D texture
								float val = sample1(loc);

								if (val > low_threshold) {
										// Take the last interval in smaller steps
										vec3 iloc = loc - 0.5 * step;
										vec3 istep = step / float(REFINEMENT_STEPS);
										for (int i=0; i<REFINEMENT_STEPS; i++) {
												val = sample1(iloc);
												if (val > u_renderthreshold) {
														gl_FragColor = add_lighting(val, iloc, dstep, view_ray);
														return;
												}
												iloc += istep;
										}
								}

								// Advance location deeper into the volume
								loc += step;
						}
				}


				vec4 add_lighting(float val, vec3 loc, vec3 step, vec3 view_ray)
				{
					// Calculate color by incorporating lighting

						// View direction
						vec3 V = normalize(view_ray);

						// calculate normal vector from gradient
						vec3 N;
						float val1, val2;
						val1 = sample1(loc + vec3(-step[0], 0.0, 0.0));
						val2 = sample1(loc + vec3(+step[0], 0.0, 0.0));
						N[0] = val1 - val2;
						val = max(max(val1, val2), val);
						val1 = sample1(loc + vec3(0.0, -step[1], 0.0));
						val2 = sample1(loc + vec3(0.0, +step[1], 0.0));
						N[1] = val1 - val2;
						val = max(max(val1, val2), val);
						val1 = sample1(loc + vec3(0.0, 0.0, -step[2]));
						val2 = sample1(loc + vec3(0.0, 0.0, +step[2]));
						N[2] = val1 - val2;
						val = max(max(val1, val2), val);

						float gm = length(N); // gradient magnitude
						N = normalize(N);

						// Flip normal so it points towards viewer
						float Nselect = float(dot(N, V) > 0.0);
						N = (2.0 * Nselect - 1.0) * N;	// ==	Nselect * N - (1.0-Nselect)*N;

						// Init colors
						vec4 ambient_color = vec4(0.0, 0.0, 0.0, 0.0);
						vec4 diffuse_color = vec4(0.0, 0.0, 0.0, 0.0);
						vec4 specular_color = vec4(0.0, 0.0, 0.0, 0.0);

						// note: could allow multiple lights
						for (int i=0; i<1; i++)
						{
								// Get light direction (make sure to prevent zero division)
								vec3 L = normalize(view_ray);	//lightDirs[i];
								float lightEnabled = float( length(L) > 0.0 );
								L = normalize(L + (1.0 - lightEnabled));

								// Calculate lighting properties
								float lambertTerm = clamp(dot(N, L), 0.0, 1.0);
								vec3 H = normalize(L+V); // Halfway vector
								float specularTerm = pow(max(dot(H, N), 0.0), shininess);

								// Calculate mask
								float mask1 = lightEnabled;

								// Calculate colors
								ambient_color +=	mask1 * ambient_color;	// * gl_LightSource[i].ambient;
								diffuse_color +=	mask1 * lambertTerm;
								specular_color += mask1 * specularTerm * specular_color;
						}

						// Calculate final color by componing different components
						vec4 final_color;
						//vec4 color = apply_colormap(val);
						/* RGB value from texture data */
						vec4 color = texture(u_data, loc.xyz);
						final_color = color * (ambient_color + diffuse_color) + specular_color;
						final_color.a = color.a;
						return final_color;
				}


With these changes it now shows the colors, but there are some dark artifacts. Furthermore, in ISO mode the threshold no longer has any effect:

This is without modification of the shader:

With the modification, but also in grey (for easier comparison):

I guess it is the luminance calculation that needs some further adjustment in the shader code. Maybe somebody can help me at this point.
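
For example, I am not sure whether the alpha channel belongs in the dot product at all; in my current sample1 the texel's alpha is weighted with 1.0, so it is added to every sampled value. A variant that only weights the RGB channels would be (just a sketch of my guess, using the same luminance weights as above):

    float sample1(vec3 texcoords) {
        /* Luminance from the RGB channels only; the alpha channel is ignored
           so it does not offset the value used in the MIP/ISO comparisons. */
        vec3 rgb = texture(u_data, texcoords.xyz).rgb;
        return dot(rgb, vec3(0.2125, 0.7154, 0.0721));
    }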