There is a main camera and a depth camera.
readRenderTargetPixels introduces huge latency and drops the frame rate considerably on mobile (Android), making it completely unwatchable in WebXR mode, which is the mode I actually care about, even with the lightest scene.
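For reference, this is roughly the readback pattern I mean. It is a minimal sketch, not my exact demo code: the target size, material setup, and pixel coordinates are illustrative, and the depth encoding depends on which three.js depth packing is used.

```js
import * as THREE from 'three';

// The depth camera renders into an off-screen float target,
// then a single pixel is copied back to the CPU every frame.
const depthTarget = new THREE.WebGLRenderTarget(256, 256, {
  type: THREE.FloatType, // float target so depth survives the readback
});
const depthMaterial = new THREE.MeshDepthMaterial(); // BasicDepthPacking
const pixel = new Float32Array(4);

function readDepthAtCenter(renderer, scene, depthCamera) {
  scene.overrideMaterial = depthMaterial;
  renderer.setRenderTarget(depthTarget);
  renderer.render(scene, depthCamera);
  renderer.setRenderTarget(null);
  scene.overrideMaterial = null;

  // Synchronous GPU -> CPU copy: the CPU blocks until the GPU has
  // finished the frame. This stall is the latency described above.
  renderer.readRenderTargetPixels(depthTarget, 128, 128, 1, 1, pixel);
  return pixel[0]; // depth in the red channel (encoding per depth packing)
}
```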
I have provided a functional demo on Stack Overflow that measures depth and shows a ray and a point against a rotating cube. It runs in both normal and WebXR mode and displays min/max millisecond stats on screen for the last 10 frames in both modes. For convenience I have also provided QR codes, with and without that function (there is no need to enter WebXR on mobile to see the huge latency).
The goal is to “cure” that huge latency and get an efficient depth measurement, either from the depth camera or via some other method. I haven’t tried raycasting yet (sketched below), but I remember it being considered heavier.
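For comparison, this is roughly what I understand the raycasting alternative to look like: it avoids the GPU readback entirely by intersecting a ray with the scene geometry on the CPU. A minimal sketch, assuming the measurement is taken at the center of the view:

```js
import * as THREE from 'three';

// CPU-side alternative: no render target and no GPU stall,
// but the intersection cost grows with scene triangle count.
const raycaster = new THREE.Raycaster();
const center = new THREE.Vector2(0, 0); // NDC center of the view

function measureDepth(camera, scene) {
  raycaster.setFromCamera(center, camera);
  const hits = raycaster.intersectObjects(scene.children, true);
  // hits[0].distance is the metric distance from the camera
  // to the nearest intersected surface along the ray.
  return hits.length > 0 ? hits[0].distance : null;
}
```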
I’m sure many people are interested in finding the most efficient method for something like this.
As it is, that latency renders it useless for me on mobile, especially in VR.
If any guru can help, it would be greatly appreciated. Thanks.