Post-processing of WebGLRenderTarget

Should this work?

I use a WebGLRenderTarget to create a texture, which I then use in the final render to canvas. I would like to use EffectComposer to post-process that texture.

The render target accepted by the EffectComposer constructor seems to serve a completely unrelated purpose (although it “kinda” works: EffectComposer renders into it, but no other pass seems to get applied).

Does anyone know whether this is intended behaviour, and how to get it right?

Can you elaborate a bit more? Post-processing is usually done on render targets only; I’m not sure there is any other way to do it.

Can you also please explain what you mean by this sentence?

Hello, thanks for the quick replies. It seems I created more confusion than I should have… yes, that happens to me quite often!

I want to do the following:

  1. render something to WebGLRenderTarget
  2. apply post-processing to the rendered image
  3. use the rendered and post-processed image as a texture to render a different scene

(doesn’t sound too complicated, does it?)

Working code - without post-processing:

initialization:

import { WebGLRenderer, WebGLRenderTarget } from 'three'

const renderer = new WebGLRenderer()
const renderTarget = new WebGLRenderTarget(512, 512)

each frame:

renderer.setRenderTarget(renderTarget)
renderer.render(world, camera)            // render the first scene into the render target
renderer.setRenderTarget(null)
....
myShader.uniforms.texture.value = renderTarget.texture   // use the rendered image as a texture
renderer.render(mainScene, mainCamera)

Non-working pseudo-code: an attempt to add a GlitchPass. It kinda works: world gets rendered into renderTarget, but the GlitchPass is not applied. Hopefully it shows the intent clearly enough, though.

initialization:

import { WebGLRenderer, WebGLRenderTarget } from 'three'
import { EffectComposer } from 'three/examples/jsm/postprocessing/EffectComposer.js'
import { RenderPass } from 'three/examples/jsm/postprocessing/RenderPass.js'
import { GlitchPass } from 'three/examples/jsm/postprocessing/GlitchPass.js'

const renderer = new WebGLRenderer()
const renderTarget = new WebGLRenderTarget(512, 512)
const composer = new EffectComposer(renderer, renderTarget)
composer.addPass(new RenderPass(world, camera))
composer.addPass(new GlitchPass())

each frame:

composer.render()                                        // world ends up in renderTarget, but the GlitchPass does not
myShader.uniforms.texture.value = renderTarget.texture
renderer.render(mainScene, mainCamera)

The question is whether my pseudo-code can be expressed in actual code…

Um, I don’t think creating a custom render target is necessary. Just use the same setup as in the webgl_postprocessing_glitch example. You should be able to access the final composed image via composer.readBuffer (which is also a render target).

BTW: If the composer should not render the final pass to the screen, you have to set composer.renderToScreen to false.
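
For reference, a minimal sketch of that setup (based on the suggestion above; world, camera and myShader are the names from the earlier snippets):

initialization:

const composer = new EffectComposer(renderer)  // no custom render target needed
composer.renderToScreen = false                // keep the final pass off the screen
composer.addPass(new RenderPass(world, camera))
composer.addPass(new GlitchPass())

each frame:

composer.render()
myShader.uniforms.texture.value = composer.readBuffer.texture   // final composed image
renderer.render(mainScene, mainCamera)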

I just tried your suggestion and it works, thank you.


In R3F, if we import three’s native EffectComposer, we can use it to write the combination of all effect passes to the map of any object…

import { EffectComposer } from 'three/examples/jsm/postprocessing/EffectComposer.js';
<meshBasicMaterial attach="material" map={composer.writeBuffer.texture} />
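
For context, the surrounding wiring might look roughly like this (a sketch only, not from the original post; fxScene and fxCamera are assumed names for the scene being composed):

import { useMemo } from 'react'
import { useFrame, useThree } from '@react-three/fiber'
import { EffectComposer } from 'three/examples/jsm/postprocessing/EffectComposer.js'
import { RenderPass } from 'three/examples/jsm/postprocessing/RenderPass.js'

function ComposedMap({ fxScene, fxCamera }) {
  const { gl } = useThree()                        // R3F's WebGLRenderer
  const composer = useMemo(() => {
    const c = new EffectComposer(gl)
    c.renderToScreen = false                       // keep the composed image in the buffers
    c.addPass(new RenderPass(fxScene, fxCamera))
    return c
  }, [gl, fxScene, fxCamera])
  useFrame(() => composer.render())                // run the passes each frame
  return <meshBasicMaterial attach="material" map={composer.writeBuffer.texture} />
}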

But if we import the EffectComposer from postprocessing, the same thing does not work, e.g.…

import { EffectComposer, DepthOfFieldEffect } from 'postprocessing'
<meshBasicMaterial attach="material" map={composer.outputBuffer.texture} />

The composer.outputBuffer.texture is written to the material map, but without the DepthOfFieldEffect applied. Is there a known way to achieve the same in R3F / postprocessing?
