I’ve read through the example code here and have mostly pieced together the overall idea, but I have some questions. From my understanding, the overall process is as follows:
- Render the scene first to a render target. Then we have target.texture (the normal color rendering of the scene) and target.depthTexture (the scene’s depth buffer as a texture).
- Then pass target.depthTexture into the second rendering pass, which renders straight to the website’s actual canvas (so this is the pass whose output we actually see). I’ve sketched my understanding of this in code below.
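If it helps, here’s a rough sketch of my mental model of those two passes, based on the example (I’m assuming the usual renderer/scene/camera setup already exists, and postScene/postCamera are the quad scene and orthographic camera from the example):

```js
import * as THREE from 'three';

// Assuming renderer, scene, and camera are already set up as usual.

// Pass 1: render the scene into a render target that also captures depth.
const target = new THREE.WebGLRenderTarget(window.innerWidth, window.innerHeight);
target.depthTexture = new THREE.DepthTexture();

renderer.setRenderTarget(target);
renderer.render(scene, camera);
// Now target.texture holds the color image and target.depthTexture the depth buffer.

// Pass 2: render the quad scene straight to the canvas, with target.depthTexture
// bound as a shader uniform so the second pass can read the depth values.
renderer.setRenderTarget(null);
renderer.render(postScene, postCamera);
```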
Why exactly are we using an orthographic camera + plane/quad scene setup for the canvas render? Is the plane/quad basically emulating or lining up with the camera’s view, so that when we render a texture onto it, it just looks like a normal full-screen image? And is there a way to get a depth buffer rendering without using this orthographic + plane/quad setup?
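For reference, this is the setup I’m asking about, as I understand it from the example (postVert and postFrag here are placeholders for the example’s pass-through vertex shader and depth-sampling fragment shader):

```js
// Orthographic camera whose frustum is exactly the -1..1 clip-space cube.
const postCamera = new THREE.OrthographicCamera(-1, 1, 1, -1, 0, 1);

const postMaterial = new THREE.ShaderMaterial({
  vertexShader: postVert,   // placeholder: pass-through vertex shader
  fragmentShader: postFrag, // placeholder: shader that samples tDepth
  uniforms: {
    tDiffuse: { value: target.texture },
    tDepth: { value: target.depthTexture }
  }
});

// A 2x2 plane spans -1..1 in x and y, so it fills the ortho frustum
// edge to edge, i.e. it covers the whole canvas like a full-screen quad.
const postPlane = new THREE.PlaneGeometry(2, 2);
const postScene = new THREE.Scene();
postScene.add(new THREE.Mesh(postPlane, postMaterial));
```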
Any insight on this second step, i.e. why it’s done this way and whether it can be done other ways, would be greatly appreciated.