For example, say I add a cat model to the real space (a phone camera frame, etc.).
The cat model is then always rendered on top of everything in the real scene.
That breaks the immersion.
Suppose the cat model needs to be placed behind a vase in the real space.
What I want is to hide the part of the cat model that the vase covers and show the rest.
Currently, I can extract 3D point data from the camera images and render the contours of the real space.
I need a way to make only specific parts of a model transparent/hidden using Three.js.
In other words, I want to hide the cat model within the outline of the vase, so that the real vase is not covered by the cat model.
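To illustrate what I mean, here is a hypothetical sketch of the direction I imagine: if I had a mesh of the vase reconstructed from the extracted 3D points (vaseGeometry below is just a stand-in), I could render it as an invisible occluder that only writes depth, so the cat gets cut out behind it. I'm not sure whether this is the right approach for arbitrary real-world geometry:

```js
import * as THREE from 'three';

// Stand-in: geometry of the vase reconstructed from the extracted 3D points.
const vaseGeometry = new THREE.CylinderGeometry(0.1, 0.1, 0.3);

// Invisible occluder: writes to the depth buffer but not to the color buffer,
// so the camera image stays visible while the cat is hidden behind it.
const occluderMaterial = new THREE.MeshBasicMaterial({
  colorWrite: false, // keep the real camera image visible
  depthWrite: true,  // but still occlude virtual objects behind it
});

const vaseOccluder = new THREE.Mesh(vaseGeometry, occluderMaterial);
vaseOccluder.renderOrder = -1; // draw before the cat so the depth buffer is filled first
scene.add(vaseOccluder);      // `scene` is my existing THREE.Scene with the cat model
```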
If you have a depth map from a depth camera, it is essentially the same as the depth buffer used in 3D rendering to determine which pixel is in front of another. You can use that texture either by writing it into the depth buffer in a first pass, or by patching the materials with onBeforeCompile to do the depth test yourself.
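Untested sketch of the first option, assuming the depth map is a single-channel float texture holding metric depth (meters) and is already aligned with the three.js camera; realDepthTexture and the near/far values are placeholders you'd swap for your own:

```js
import * as THREE from 'three';

// Placeholder: your depth map as a texture, one float per pixel, in meters.
const realDepthTexture = /* e.g. a THREE.DataTexture updated every frame */ null;

// Full-screen quad rendered first: it writes the real-world depth into the
// depth buffer (no color), so virtual fragments behind it fail the depth test.
const depthWriteMaterial = new THREE.ShaderMaterial({
  uniforms: {
    tDepth: { value: realDepthTexture },
    cameraNear: { value: 0.1 },  // must match your THREE.PerspectiveCamera
    cameraFar: { value: 20.0 },
  },
  vertexShader: /* glsl */ `
    varying vec2 vUv;
    void main() {
      vUv = uv;
      gl_Position = vec4(position.xy, 0.0, 1.0); // quad directly in clip space
    }
  `,
  fragmentShader: /* glsl */ `
    uniform sampler2D tDepth;
    uniform float cameraNear;
    uniform float cameraFar;
    varying vec2 vUv;
    void main() {
      float meters = texture2D(tDepth, vUv).r; // assumed: metric depth
      float viewZ = -meters;                   // camera looks down -Z
      // view-space Z -> [0,1] perspective depth (standard, non-reversed buffer)
      float depth = ((cameraNear + viewZ) * cameraFar) / ((cameraFar - cameraNear) * viewZ);
      gl_FragDepth = clamp(depth, 0.0, 1.0);
      gl_FragColor = vec4(0.0); // never shown, colorWrite is off
    }
  `,
  colorWrite: false, // keep the camera image, only fill the depth buffer
  depthTest: true,
  depthWrite: true,
});

const depthQuad = new THREE.Mesh(new THREE.PlaneGeometry(2, 2), depthWriteMaterial);
depthQuad.frustumCulled = false;
depthQuad.renderOrder = -1; // before the cat model and other virtual objects
scene.add(depthQuad);
```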
Either way, a few lines of shader code have to be added, and you need to know how the depth values are encoded and the near/far range in order to translate them into scene depth. Depending on the camera quality, I could imagine you get some jitter and flickering; a lot of apps implement this without any depth camera, using AI depth estimation, which has recently become incredibly accurate and robust.
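And a sketch of the second option, patching the cat's material with onBeforeCompile so each fragment linearizes its own depth back to meters and discards itself when the real-world depth is closer. As above, realDepthTexture, the resolution and the near/far values are placeholders, and your depth map's encoding may differ:

```js
// catMaterial is the material of the cat model; realDepthTexture as above.
catMaterial.onBeforeCompile = (shader) => {
  shader.uniforms.tRealDepth = { value: realDepthTexture };
  shader.uniforms.uResolution = { value: new THREE.Vector2(window.innerWidth, window.innerHeight) };
  shader.uniforms.uCameraNear = { value: 0.1 };
  shader.uniforms.uCameraFar = { value: 20.0 };

  shader.fragmentShader = shader.fragmentShader.replace(
    'void main() {',
    /* glsl */ `
    uniform sampler2D tRealDepth;
    uniform vec2 uResolution;
    uniform float uCameraNear;
    uniform float uCameraFar;
    void main() {
      // assumes a full-window canvas with pixel ratio 1
      vec2 screenUv = gl_FragCoord.xy / uResolution;
      float realMeters = texture2D(tRealDepth, screenUv).r; // assumed: metric depth
      // linearize this fragment's depth buffer value back to meters
      float ndcZ = gl_FragCoord.z * 2.0 - 1.0;
      float virtualMeters = (2.0 * uCameraNear * uCameraFar) /
        (uCameraFar + uCameraNear - ndcZ * (uCameraFar - uCameraNear));
      if (realMeters < virtualMeters) discard; // the real object is in front, hide this pixel
    `
  );
};
```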