tl;dr: does the three.js shadow rendering pipeline do anything that might adversely affect our multi-view + postprocessing rendering?
When I render for the Looking Glass display (an autostereoscopic display), I first render a grid, as in the ArrayCamera example. To do this I simply loop through the camera positions, change the viewport, and render to a RenderTarget.
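For reference, the grid pass is roughly the following. The tile math is a minimal sketch with a hypothetical `tileViewport` helper; the real loop may instead use per-camera viewports as in the ArrayCamera example:

```javascript
// Hypothetical helper: viewport rectangle for tile i in a cols x rows grid.
// Tiles fill the grid left-to-right, bottom-to-top (matching GL's origin).
function tileViewport(i, cols, rows, tileW, tileH) {
  return {
    x: (i % cols) * tileW,
    y: Math.floor(i / cols) * tileH,
    width: tileW,
    height: tileH,
  };
}

// Sketch of the render loop (three.js calls shown as comments for context):
// renderer.setRenderTarget(gridTarget);
// for (let i = 0; i < cols * rows; i++) {
//   const v = tileViewport(i, cols, rows, tileW, tileH);
//   gridTarget.viewport.set(v.x, v.y, v.width, v.height);
//   moveCameraToView(i);            // position camera for this view
//   renderer.render(scene, camera); // draw this view into its tile
// }
```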
Once we have this grid of images, we apply a post-processing shader to that RenderTarget which samples from the grid for every subpixel on the Looking Glass display.
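For context, the core of that sampling step is a grid lookup: each subpixel gets assigned a view index (via the Looking Glass lenticular calibration, omitted here), and the shader maps that view's local UV into the grid texture. A minimal JS sketch of just the lookup math, with a hypothetical `gridUV` name:

```javascript
// Hypothetical sketch: map a view index plus local UV (0..1 within one view)
// to UV coordinates inside the grid texture laid out as cols x rows tiles.
function gridUV(view, u, v, cols, rows) {
  const tileX = view % cols;
  const tileY = Math.floor(view / cols);
  return {
    u: (tileX + u) / cols,
    v: (tileY + v) / rows,
  };
}
```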
I finally turned on shadows for the first time, and our postprocessing shader completely broke.
The render-to-grid step works perfectly fine:
But now our postprocessing output looks like this… I can't tell exactly what's happening, but the sampling is off somehow:
When it should look more like this (though this scene has shadows disabled):
So I'm curious: does the three.js shadow pipeline modify renderer state (the current RenderTarget, viewport, scissor region, etc.) in a way that might adversely affect our postprocessing step? The postprocessing step reuses the same WebGLRenderer to render a full-screen quad in its own scene with its own camera, with the RenderTarget from the grid pass bound as a shader uniform.
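One workaround I'm considering is to explicitly reset the renderer state before the post pass, on the theory that the shadow pass renders into its own shadow-map targets and may leave viewport/scissor state behind. A sketch using real WebGLRenderer methods (setRenderTarget, setScissorTest, setViewport), but with hypothetical names for everything else:

```javascript
// Hypothetical helper: force the renderer back to a known state
// (given target or default framebuffer, full viewport, no scissor test)
// before running a render callback.
function withResetState(renderer, target, width, height, renderFn) {
  renderer.setRenderTarget(target); // null = default framebuffer
  renderer.setScissorTest(false);   // earlier passes may have left this on
  renderer.setViewport(0, 0, width, height);
  renderFn();
}

// Usage sketch for the post pass:
// withResetState(renderer, null, canvas.width, canvas.height, () => {
//   renderer.render(postScene, postCamera); // quad sampling the grid texture
// });
```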
I found this three.js issue from a few years ago that might be related, but I can't figure out whether it applies to our issue.
Here's a live demo on Glitch if you want to check it out. Toggle params.holoplay to switch between the grid view and the final postprocessed view.