I use a WebGLRenderTarget to create a texture, which I then use in the final render to the canvas. I would like to run that texture through EffectComposer to post-process it.
The renderTarget argument accepted by the EffectComposer constructor seems to serve a completely unrelated purpose (although it “kinda” works: EffectComposer renders into it, but no other pass gets applied).
Does anyone know whether this is intended behavior, and how to get it right?
Non-working pseudo-code: an attempt to add a GlitchPass. It kinda works, the world gets rendered into renderTarget, but the GlitchPass is not applied. Hopefully it shows the intent clearly enough.
initialization:
import { WebGLRenderer, WebGLRenderTarget } from 'three'
import { EffectComposer } from 'three/examples/jsm/postprocessing/EffectComposer.js'
import { RenderPass } from 'three/examples/jsm/postprocessing/RenderPass.js'
import { GlitchPass } from 'three/examples/jsm/postprocessing/GlitchPass.js'

const renderer = new WebGLRenderer()
const renderTarget = new WebGLRenderTarget(512, 512)
// expectation: composer renders the post-processed result into renderTarget
const composer = new EffectComposer(renderer, renderTarget)
composer.addPass(new RenderPass(world, camera))
composer.addPass(new GlitchPass())
Um, I don’t think creating a custom render target is necessary. Just use the same setup as in the webgl_postprocessing_glitch example. You should be able to access the final composed image via composer.readBuffer (which is also a render target).
BTW: If the composer should not render the final pass to the screen, you have to set composer.renderToScreen to false.
The composer.outputBuffer.texture is written to the material map, but without the DepthOfFieldEffect applied. Is there a known way to achieve the same in R3F / postprocessing?