[Solved] How can I render a shader to a render target?

I need to post-process the first render target and output the result to another one. Therefore no triangles, no scene, and no camera are needed, so how can I set up the renderer?

There are three passes:

First pass:
Render a scene with a camera to render_target_A.

Second pass:
Process render_target_A and output to render_target_B.

Third pass:
Process render_target_A and render_target_B and output to screen.

My question is about the second pass.

So since I didn’t get any responses, I presume this is not supported by Three.js and my only option is to “hack” the standard methodology by replacing or extending some Three.js functions with pure WebGL, right?

It depends on what you mean by “process”.

Normally, you use the texture render target A as a texture for a full-screen quad or triangle and then perform a render pass with an orthographic camera. The shader for this render pass represents the processing logic. The result is saved in a second render target.

Yes, that’s what I “normally” often do, but in this particular case I need an intermediate shader pass of targetA as input and targetB as output, instead of the screen.
Such a pass is usually required in some advanced cases of image processing for increased efficiency.

Here is a WebGL example I found that does this (albeit for different reasons), and I’m trying to figure out how to adapt that information to Three.js, but it’s not easy:
WebGL Image Processing Continued

P.S. I’ve done this before in Flash (Stage3D) successfully, but that was many years ago and in a very different environment…

EDIT: I misread you; you probably mean I should render the 2nd pass like the 3rd one, but assign a render target… OK, I’m trying this.


Thanks Mugen87, it works (finally)!
I didn’t think of this simple solution, as I often take long breaks from Three.js code to develop other parts in JavaScript only, which can take months or even more than a year. Also, it’s rather counter-intuitive to use an orthographic camera to render from texture to texture; I was subconsciously expecting something else… and couldn’t find anyone who clearly mentioned this (4 days lost)…


also it’s rather counter-intuitive to use an orthographic camera to render from texture to texture

You don’t need to render from an orthographic camera. You don’t even need to set up your camera.

```javascript
const pg = new PlaneBufferGeometry(2, 2, 1, 1)
const m = new ShaderMaterial({
  uniforms: { texture: { value: null } },
  vertexShader: `
    varying vec2 vUv;
    void main(){ vUv = uv; gl_Position = vec4(position.xy, 0., 1.); }`,
  fragmentShader: `
    uniform sampler2D texture;
    varying vec2 vUv;
    void main(){ gl_FragColor = texture2D( texture, vUv ); }`
})

const myProcess = new Mesh(pg, m)
myProcess.frustumCulled = false
```

You just need to do a texture read and a texture write (screen, target, whatever). There might be a million and one ways to do this; maybe the orthographic thing was the easiest to explain. You don’t even need to use two triangles, you can use one, or a million.

OK, but your code is missing the step where rendering is actually started without a camera, and a render target is assigned.

If the “cameraOrtho” from the Three.js instructions below could be eliminated, increasing efficiency without adding too much complexity, then this could be a great alternative.

renderer.render(sceneRT2TB, cameraOrtho);
```javascript
const myMainCamera = new MyFancyCamera()
const myScene = new Scene()

const procCamera = new PerspectiveCamera()
const procScene = new Scene()
procScene.add( thingFromPreviousPost )

function render(){
  renderer.setRenderTarget(someTarget) // "render to this target"
  renderer.render(myScene, myMainCamera)
  thingFromPreviousPost.setSource(someTarget) // "some state of some node"

  renderer.setRenderTarget(someOtherTarget) // "render to this target"
  renderer.render(procScene, procCamera) // "some graph with some nodes with some state"
  // because of specifics of thingFromPreviousPost, ANY camera would give the exact same results
  //renderer.render(procScene, someoneElsesCamera)
  //more stuff
}
```