I’ve been experimenting with WebXR through Three.js, and I’m trying to understand who renders the background image (the “live camera”), and when. Is it rendered (by three?) before any scene rendering?
I want to do two things, if possible: 1) access this texture inside three (as a live OpenGL texture) and 2) drop shadows onto the real world by doing some blend operation.
My idea for the shadowing (assume an object casting a shadow onto a plane anchored to the floor) is to have a white plane receive the shadow, and then blend that plane with some blending combination so that the dark shadow “darkens” the background image. But this relies on the background image being rendered first and being part of the scene, which I don’t think is how it actually works…
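To make the idea concrete, here is a rough sketch of the shadow-catcher plane using THREE.ShadowMaterial (which stays transparent except where a shadow falls); `scene`, `renderer` and the shadow-casting `arObject` are placeholders for things that already exist in my setup:

```javascript
import * as THREE from 'three';

// "Shadow catcher" plane: ShadowMaterial is transparent except where a shadow
// falls, so only the darkened shadow region is composited over the camera feed.
const floor = new THREE.Mesh(
  new THREE.PlaneGeometry(4, 4),
  new THREE.ShadowMaterial({ opacity: 0.4 }) // how dark the shadow appears
);
floor.rotation.x = -Math.PI / 2; // lay flat on the (assumed) detected floor
floor.receiveShadow = true;
scene.add(floor);

// A light that casts the shadow of the virtual object onto the plane.
const light = new THREE.DirectionalLight(0xffffff, 1);
light.position.set(1, 4, 2);
light.castShadow = true;
scene.add(light);

arObject.castShadow = true;        // placeholder: the virtual object in the scene
renderer.shadowMap.enabled = true; // shadow maps must be enabled on the renderer
```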
Hi,
In all the AR examples, the background is the live camera feed with some graphics rendered on top.
Something is responsible for rendering this image first and then the graphics on top. I’m wondering how I can access this texture in three, and how I can do blending under the assumption that it is rendered first, behind everything else.
This is not possible. In general, you can only work with the virtual content, which is aligned and composited with the real-world environment. Blending is automatically controlled by WebXR depending on the display technology of the device and the respective blend mode.
Check out the official specification for more details.
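For example, with three.js you can see which blend mode WebXR selected once the session starts; a minimal sketch (the addon import path depends on your three.js version):

```javascript
import * as THREE from 'three';
import { ARButton } from 'three/addons/webxr/ARButton.js';

// three.js only renders the virtual content; the camera feed is composited
// behind it by the browser / XR compositor.
const renderer = new THREE.WebGLRenderer({ antialias: true, alpha: true });
renderer.xr.enabled = true;
document.body.appendChild(renderer.domElement);
document.body.appendChild(ARButton.createButton(renderer));

renderer.xr.addEventListener('sessionstart', () => {
  const session = renderer.xr.getSession();
  // e.g. 'alpha-blend' for handheld/passthrough AR, 'additive' for optical
  // see-through headsets; decided by the device, not by the page.
  console.log('environmentBlendMode:', session.environmentBlendMode);
});
```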
I can now see that if I change the alpha of a fragment then it will blend properly with the background.
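In case it helps, this is roughly what I mean, a minimal sketch assuming the renderer was created with `alpha: true` and `THREE` is imported as above:

```javascript
// A material whose fragment alpha controls how strongly the virtual pixel
// is composited over the camera feed (in an 'alpha-blend' environment).
const material = new THREE.ShaderMaterial({
  transparent: true,
  uniforms: {
    color:    { value: new THREE.Color(0x000000) },
    strength: { value: 0.5 },
  },
  fragmentShader: /* glsl */ `
    uniform vec3 color;
    uniform float strength;
    void main() {
      // alpha < 1.0 lets the real-world background show through; a dark
      // color with partial alpha effectively darkens it.
      gl_FragColor = vec4(color, strength);
    }
  `,
});
```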
I was hoping to use the video texture for some sort of ambient light estimation (in a shader) or even environment mapping, but that seems to be impossible at the moment. I wonder if there are other means for that: maybe there is a parallel way to acquire the camera frame into WebGL/three? Or will it be available in WebXR soon?
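For the light-estimation part, one avenue I’m looking at is the XREstimatedLight addon, which sits on top of the WebXR Light Estimation module and exposes an estimated light plus environment map without giving access to the raw camera frame. A rough sketch, assuming the browser supports the optional 'light-estimation' feature and reusing `renderer`, `scene` and `ARButton` from above:

```javascript
import { XREstimatedLight } from 'three/addons/webxr/XREstimatedLight.js';

// Ask for the (optional) light-estimation feature when the AR session is created.
document.body.appendChild(
  ARButton.createButton(renderer, { optionalFeatures: ['light-estimation'] })
);

const xrLight = new XREstimatedLight(renderer);

xrLight.addEventListener('estimationstart', () => {
  scene.add(xrLight);
  // The estimated environment map could serve as a rough env map.
  if (xrLight.environment) scene.environment = xrLight.environment;
});

xrLight.addEventListener('estimationend', () => {
  scene.remove(xrLight);
  scene.environment = null;
});
```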