How do you get multiple outputs from a fragment shader in a ShaderMaterial?

I am trying to implement a Lattice-Boltzmann fluid simulation in three.js. This requires each point in the grid to hold 9 values representing the fluid's distribution functions. Since all of that information cannot fit in a single texture, I tried using three textures and storing the 9 values in their RGB channels. I can pass the textures to the shader as uniforms, but I could not figure out how to get the shader to output 3 textures back to JavaScript so I could feed them into the shader for subsequent iterations of the simulation.
I am aware that Multiple Render Targets are an option, but WebGLMultipleRenderTargets is marked as deprecated, and I could not find any documentation on doing the same thing with a regular WebGLRenderTarget.
Could someone explain how to give a shader multiple inputs and get multiple outputs back from it? Any help would be greatly appreciated!

You have to use GLSL3 and restructure your shader a bit.

Here’s some untested GPT-generated code as an example:

<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Three.js Multi-RenderTarget Example</title>
<style>
  body { margin:0; overflow:hidden; }
</style>
</head>
<body>
<script type="module">
import * as THREE from 'https://cdn.jsdelivr.net/npm/three@0.157.0/build/three.module.js';

const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

// Check for WebGL2 support (required for multiple render targets):
if (renderer.capabilities.isWebGL2 === false) {
  console.warn("WebGL2 not supported.");
}

// Create a render target with 2 color attachments
const width = 256, height = 256;
const renderTarget = new THREE.WebGLMultipleRenderTargets(width, height, 2);

// Simple full-screen quad
const geometry = new THREE.PlaneGeometry(2, 2);
const material = new THREE.RawShaderMaterial({
  glslVersion: THREE.GLSL3,
  vertexShader: `
  // No explicit "#version 300 es" here: three.js prepends the version
  // directive automatically when glslVersion is set to THREE.GLSL3.
  in vec3 position;
  void main() {
    gl_Position = vec4(position, 1.0);
  }`,
  fragmentShader: `
  precision highp float;

  layout(location = 0) out vec4 outColor0;
  layout(location = 1) out vec4 outColor1;

  void main() {
    // Output red to the first target
    outColor0 = vec4(1.0, 0.0, 0.0, 1.0);
    // Output green to the second target
    outColor1 = vec4(0.0, 1.0, 0.0, 1.0);
  }`
});

const quad = new THREE.Mesh(geometry, material);
const scene = new THREE.Scene();
scene.add(quad);

// The camera is only needed for the render call; the vertex shader ignores its matrices
const camera = new THREE.Camera();

// Render to the multiple render targets
renderer.setRenderTarget(renderTarget);
renderer.render(scene, camera);
renderer.setRenderTarget(null);

// At this point, renderTarget.texture[0] contains the red image,
// and renderTarget.texture[1] contains the green image.
// You can use these textures elsewhere as needed.

console.log("First target texture:", renderTarget.texture[0]);
console.log("Second target texture:", renderTarget.texture[1]);
</script>
</body>
</html>
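
For an iterative simulation like yours, you would typically ping-pong between two of these render targets, sampling the previous step’s textures while writing the next step’s. A rough, untested sketch of that loop (the prevTex0/prevTex1 uniform names are placeholders you would declare in the fragment shader as sampler2D uniforms):

// Untested ping-pong sketch: each step reads what the previous step wrote.
let readTarget  = new THREE.WebGLMultipleRenderTargets(width, height, 2);
let writeTarget = new THREE.WebGLMultipleRenderTargets(width, height, 2);

function step() {
  // Feed the previous step's outputs back in as inputs (placeholder names).
  material.uniforms.prevTex0.value = readTarget.texture[0];
  material.uniforms.prevTex1.value = readTarget.texture[1];

  renderer.setRenderTarget(writeTarget);
  renderer.render(scene, camera);
  renderer.setRenderTarget(null);

  // Swap roles so the next step reads what this step just wrote.
  [readTarget, writeTarget] = [writeTarget, readTarget];
}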

Hey, the code you provided uses WebGLMultipleRenderTargets, which is deprecated; there is no longer any documentation for it.
The WebGLRenderTarget documentation (WebGLRenderTarget – three.js docs) mentions a count parameter for defining the number of render targets, but I’ve been unsuccessful in using it. Have you been able to use the count property successfully?

I have, I think. In newer releases you pass a count option to a regular WebGLRenderTarget, and the color attachments show up on its textures array.
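
A minimal, untested sketch of that pattern (assuming a release recent enough to support count; the shader side is the same GLSL3 material with multiple out variables as above):

// One render target, two color attachments via the count option.
const width = 256, height = 256;
const renderTarget = new THREE.WebGLRenderTarget(width, height, {
  count: 2,
  type: THREE.FloatType // float storage is typical for simulation data
});

renderer.setRenderTarget(renderTarget);
renderer.render(scene, camera); // same fullscreen quad as before
renderer.setRenderTarget(null);

// With count > 1 the attachments are exposed as an array:
console.log(renderTarget.textures[0]); // written by outColor0
console.log(renderTarget.textures[1]); // written by outColor1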

I would do the calculations in a compute shader. A compute shader can write to multiple storage textures. Here is a sketch of that pattern as an example (simplified and untested; it follows the TSL compute API, and the gradient values are placeholders for real simulation logic):
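
import * as THREE from 'three/webgpu';
import { Fn, textureStore, instanceIndex, uvec2, vec4, float } from 'three/tsl';

const width = 256, height = 256;

// Two storage textures stand in for the distribution-function textures.
const texA = new THREE.StorageTexture(width, height);
const texB = new THREE.StorageTexture(width, height);

const computeStep = Fn(() => {

  // Recover the 2D texel coordinate from the 1D invocation index.
  const x = instanceIndex.modInt(width);
  const y = instanceIndex.div(width);
  const uv = uvec2(x, y);

  // Write placeholder gradients; a real kernel would write simulation values.
  textureStore(texA, uv, vec4(float(x).div(width), 0.0, 0.0, 1.0)).toWriteOnly();
  textureStore(texB, uv, vec4(0.0, float(y).div(height), 0.0, 1.0)).toWriteOnly();

})().compute(width * height);

// const renderer = new THREE.WebGPURenderer();
// await renderer.init();
// await renderer.computeAsync(computeStep);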

Alternatively, multiple storage buffers can also be used. Note that I’m talking about three.webgpu.js (the WebGPU renderer), not the WebGL one. A rough sketch of that variant (same caveats as above):
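
import * as THREE from 'three/webgpu';
import { Fn, instanceIndex, storage, vec4 } from 'three/tsl';

const count = 256 * 256;

// One storage buffer per group of distribution functions (vec4 each);
// StorageInstancedBufferAttribute keeps the data on the GPU between steps.
const f0Attr = new THREE.StorageInstancedBufferAttribute(count, 4);
const f0 = storage(f0Attr, 'vec4', count);

const initCompute = Fn(() => {
  // Placeholder initialization; a real kernel would write
  // equilibrium distribution values here.
  f0.element(instanceIndex).assign(vec4(0.0));
})().compute(count);

// await renderer.computeAsync(initCompute);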
Since you are talking about fluid simulation, my repo might fit your question even better.