Depth order lost with shader material in Volume Rendering

Hello everyone, I’m trying to implement volume rendering of medical images (DICOM & NIfTI) using a custom ShaderMaterial. Everything works fine, but as soon as I add other objects (ROIs) to the scene, the depth order is lost. Do you have any suggestions? Thanks in advance.

Here is an example where I’ve just added a box to check that…

Here is the correct result with surface rendering (without the ShaderMaterial):

Here are the relevant pieces of my code:
this._renderer = new THREE.WebGLRenderer({
canvas: this.canvas,
antialias: true,
logarithmicDepthBuffer: true,
preserveDrawingBuffer: true,
});

this._shaderMaterial = new THREE.ShaderMaterial({
uniforms: currUniforms,
vertexShader: vertex,
fragmentShader: fragments,
side: THREE.FrontSide,
});

Here are my shaders:
vertex:
// Reference: three.js/src/renderers/shaders/ShaderChunk (dev branch, neeh/three.js on GitHub)
export const vertex = `

#include <common>
#include <logdepthbuf_pars_vertex>
struct Ray {
vec3 origin;
vec3 dir;
};

out vec3 _normal;
out Ray vRay;
out mat4 _modelViewProjectionMatrix;
uniform vec3 uVolScale;
#include <clipping_planes_pars_vertex>
void main()
{
  vec3 pos1 = position * uVolScale;
  _modelViewProjectionMatrix = projectionMatrix * modelViewMatrix; // = projectionMatrix * (viewMatrix * modelMatrix)
  gl_Position = _modelViewProjectionMatrix * vec4(pos1, 1.0);
  vec3 eyePos = cameraPosition / uVolScale;
  vRay.dir = position - eyePos;
  vRay.origin = eyePos;
  _normal = vec3(normal);
  #include <logdepthbuf_vertex>
}`;

fragment:

void main()
{
  #include <logdepthbuf_fragment>
  Ray ray;
  ray.origin = vRay.origin;
  ray.dir = normalize(vRay.dir);
  vec2 bounds = computeNearFar(ray);
  if (bounds.x > bounds.y) discard;
  bounds.x = max(bounds.x, 0.0);
  float near = bounds.x;
  float far = bounds.y;
  vec3 rayStart = ray.origin + near * ray.dir;
  vec3 rayStop = ray.origin + far * ray.dir;

  // Transform from object space to texture coordinate space:
  rayStart = 0.5 * (rayStart + 1.0);
  rayStop = 0.5 * (rayStop + 1.0);
  vec3 dir = rayStop - rayStart;
  float len = length(dir);
  dir = normalize(dir);
  vec3 deltaDir = dir * uStepSize;
  float offset = wang_hash(int(gl_FragCoord.x + 640.0 * gl_FragCoord.y));
  vec3 samplePos = rayStart + deltaDir * offset;

  vec4 acc = vec4(0.0);
  switch(uRenderStyle){
    case 0: acc = CalculateOpacityShadingModelNone(samplePos,deltaDir,len);break;
    case 1: acc = CalculateOpacityShadingModelIllustrative(samplePos, deltaDir, len,  dir); break;
    case 2: acc = CalculateOpacityShadingModelMIP(samplePos, deltaDir,  len, dir); break;
  }
  if (acc.a < 0.1)
    discard;
  if (acc.a < 1.0)
   acc.rgb = mix(uClearColor, acc.rgb, acc.a);
  gl_FragColor.rgba = acc;
}

As far as the GL layer is concerned, the depth value written is the depth of the primitive that contains your raymarching shader (presumably a plane or a cube).

You’re not doing anything in your raymarching code to also produce a depth value that is consistent with the sample you are outputting.

afaik, you will need to compute a depth value and output it via gl_FragDepth, or manually generate a depth buffer in a separate pass.
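
For the gl_FragDepth route, here is a minimal sketch of what the end of the fragment shader could look like. It assumes you keep track of the object-space position of the first sample that contributes opacity during the march (firstHitPos is a hypothetical variable name, scaled the same way as the vertex position) and reuses the _modelViewProjectionMatrix varying you already pass from the vertex shader:

// Sketch only: `firstHitPos` is assumed to hold the object-space position of the
// first contributing sample, scaled like the vertex position (e.g. by uVolScale).
vec4 clipPos = _modelViewProjectionMatrix * vec4(firstHitPos, 1.0);
float ndcDepth = clipPos.z / clipPos.w;   // clip space -> normalized device coordinates
gl_FragDepth = 0.5 * ndcDepth + 0.5;      // NDC [-1, 1] -> window depth [0, 1]

Note that with logarithmicDepthBuffer enabled, the built-in chunks write a logarithmic gl_FragDepth, so a value computed this way would only compare correctly against other objects if it uses the same encoding (or if the logarithmic depth buffer is disabled).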

Have a look at the recently added Data3DTexture class; it could give some insights.
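
Not your exact setup, but roughly how a scalar volume can be uploaded with Data3DTexture and handed to a raymarching ShaderMaterial; the size, format and the uVolume uniform name below are placeholders:

// Minimal sketch with placeholder sizes and names: upload the voxel data as a
// 3D texture and expose it to the material as a uniform.
const size = 128;                                // assumed cubic volume
const data = new Uint8Array(size * size * size); // fill with your voxel values

const volumeTexture = new THREE.Data3DTexture(data, size, size, size);
volumeTexture.format = THREE.RedFormat;
volumeTexture.type = THREE.UnsignedByteType;
volumeTexture.minFilter = THREE.LinearFilter;
volumeTexture.magFilter = THREE.LinearFilter;
volumeTexture.unpackAlignment = 1;
volumeTexture.needsUpdate = true;

material.uniforms.uVolume = { value: volumeTexture }; // uniform name is assumed

The webgl2_materials_texture3d example in the three.js repository uses this class for a similar raymarched volume, if you want a working reference.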