DepthTexture Example Clarification

I’ve read through the example code here and have somewhat pieced together the overall idea, but had some questions. From my understanding, the overall process is as follows:

  1. Render the scene first to a render target. Then we have target.texture (the normal color rendering of the scene) and target.depthTexture (the depth buffer of that same render).
  2. Now, pass target.depthTexture to a second rendering pass, which renders straight to the page’s actual canvas (so this is the result we actually see).
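For reference, the two passes above can be sketched roughly like this (a minimal sketch, not the exact example code; `renderer`, `scene`, and `camera` are assumed to be the usual three.js setup, and names like `postScene`/`postCamera` are just illustrative):

```javascript
import * as THREE from 'three';

// Pass 1 target: color ends up in target.texture, depth in target.depthTexture.
const target = new THREE.WebGLRenderTarget(window.innerWidth, window.innerHeight);
target.depthTexture = new THREE.DepthTexture(window.innerWidth, window.innerHeight);

// Pass 2 setup: a full-screen quad in its own scene, viewed by an ortho camera.
const postCamera = new THREE.OrthographicCamera(-1, 1, 1, -1, 0, 1);
const postMaterial = new THREE.ShaderMaterial({
  uniforms: {
    tDiffuse: { value: target.texture },
    tDepth: { value: target.depthTexture },
  },
  // vertexShader / fragmentShader that sample tDepth go here...
});
const postScene = new THREE.Scene();
postScene.add(new THREE.Mesh(new THREE.PlaneGeometry(2, 2), postMaterial));

function animate() {
  renderer.setRenderTarget(target);  // pass 1: render the scene into the target
  renderer.render(scene, camera);

  renderer.setRenderTarget(null);    // pass 2: render the quad to the canvas
  renderer.render(postScene, postCamera);

  requestAnimationFrame(animate);
}
```

The ortho camera spans exactly [-1, 1] and the plane is 2×2 units, so the quad covers the viewport edge to edge, which is why the texture on it looks like a normal full-screen render.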

Why exactly are we using an orthographic camera + plane/quad scene setup for our canvas render? Is the plane/quad basically lining up with the camera, so that when we render a texture on it, it just looks like a normal full-screen image? Is there a way to get a depth buffer rendering without using this orthographic + plane/quad setup?

Any insight into this second step (why it’s done this way, and whether it can be done in other ways) would be greatly appreciated.

Short answer: no. Long answer: yes, but…

Sorry, could you clarify anything I may be misunderstanding?

You can emit the vertex position from the vertex shader directly without transforming it, skipping the camera matrices entirely… but you still need a camera for the renderer.render call. You would then also have to set .frustumCulled = false on the plane mesh, since an arbitrary camera’s frustum test would otherwise cull it. Altogether that’s kind of a hassle, and you might as well just use the ortho-camera setup.
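The trick described above might look something like this (a sketch under the same assumptions as before: `renderer` and `target` already exist; the vertex shader emits clip-space coordinates directly, so the camera passed to renderer.render never actually affects the quad):

```javascript
import * as THREE from 'three';

const material = new THREE.ShaderMaterial({
  vertexShader: /* glsl */ `
    varying vec2 vUv;
    void main() {
      vUv = uv;
      // Emit the plane's vertices as clip-space coordinates directly,
      // ignoring modelViewMatrix and projectionMatrix entirely.
      gl_Position = vec4(position.xy, 0.0, 1.0);
    }
  `,
  fragmentShader: /* glsl */ `
    uniform sampler2D tDepth;
    varying vec2 vUv;
    void main() {
      // Visualize the depth buffer as grayscale.
      gl_FragColor = vec4(vec3(texture2D(tDepth, vUv).r), 1.0);
    }
  `,
  uniforms: { tDepth: { value: target.depthTexture } },
});

const quad = new THREE.Mesh(new THREE.PlaneGeometry(2, 2), material);
// Without this, the dummy camera's frustum test could cull the quad,
// since three.js doesn't know the shader ignores the camera.
quad.frustumCulled = false;

const postScene = new THREE.Scene();
postScene.add(quad);

// Any camera works here; it exists only to satisfy renderer.render's signature.
const dummyCamera = new THREE.PerspectiveCamera();
renderer.render(postScene, dummyCamera);
```

So it works, but you trade one small orthographic-camera setup for a custom shader plus the frustumCulled footgun, which is why the examples stick with the ortho quad.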