How to set up this post-processing pipeline with render targets?

Hi all!

I’m trying to figure out the best approach to a post-processing pipeline setup problem in three.js, and I’m wondering if anyone here has any input or ideas for me.

I’m pretty new to this, so I’m sorry if my domain-specific vocabulary is lacking.

Context:
I’m using three.js for a school project where I’m implementing the algorithm from this paper: https://hal.inria.fr/hal-03140647/file/nm-EG2021-local_light_alignment.pdf

Problem outline/pipeline overview:

Stage 1:

First I’m rendering the scene containing my geometry using a custom shader to get a texture (the normals in world space) as well as a depth texture. Let’s call this the Normals-pass (rough sketch below).

Input: Scene with geometry
Output: Normals-texture[0] & Depth texture
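
Here’s a rough sketch of what I mean for Stage 1. This is simplified and the names are just illustrative, not my actual code:

```js
import * as THREE from 'three';

// Usual setup (simplified): renderer, the scene with my geometry, and a camera.
const renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(50, window.innerWidth / window.innerHeight, 0.1, 100);

// Render target for the world-space normals, with a depth texture attached.
const normalsTarget = new THREE.WebGLRenderTarget(window.innerWidth, window.innerHeight, {
  minFilter: THREE.NearestFilter,
  magFilter: THREE.NearestFilter,
  type: THREE.FloatType,
});
normalsTarget.depthTexture = new THREE.DepthTexture(window.innerWidth, window.innerHeight);

// Custom material that writes world-space normals into the color attachment.
// (mat3(modelMatrix) is fine here as long as nothing is non-uniformly scaled.)
const normalsMaterial = new THREE.ShaderMaterial({
  vertexShader: /* glsl */ `
    varying vec3 vWorldNormal;
    void main() {
      vWorldNormal = normalize(mat3(modelMatrix) * normal);
      gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
    }
  `,
  fragmentShader: /* glsl */ `
    varying vec3 vWorldNormal;
    void main() {
      gl_FragColor = vec4(normalize(vWorldNormal), 1.0);
    }
  `,
});

// The Normals-pass: scene in, Normals-texture[0] + depth texture out.
scene.overrideMaterial = normalsMaterial;
renderer.setRenderTarget(normalsTarget);
renderer.render(scene, camera);
scene.overrideMaterial = null;
```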

Stage 2:
Next I want to produce an array of successively filtered versions of that texture using another shader pass (a bilateral filter). Each pass takes the output of the previous pass as input. The result should be an array of textures where each one is a filtered version of the previous one (sketch below).

Input: Normals-texture[n] & Depth texture
Output: Normals-texture[n+1]
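
And a rough sketch of the Stage 2 filter chain, reusing normalsTarget and the renderer from the Stage 1 sketch above. The bilateral filter itself is replaced by a pass-through placeholder:

```js
// One filter material, rendered onto a fullscreen quad; each pass reads the previous level.
const filterMaterial = new THREE.ShaderMaterial({
  uniforms: {
    tNormals: { value: null }, // Normals-texture[n]
    tDepth:   { value: null }, // depth texture from Stage 1
  },
  vertexShader: /* glsl */ `
    varying vec2 vUv;
    void main() {
      vUv = uv;
      gl_Position = vec4(position.xy, 0.0, 1.0);
    }
  `,
  fragmentShader: /* glsl */ `
    uniform sampler2D tNormals;
    uniform sampler2D tDepth;
    varying vec2 vUv;
    void main() {
      // the actual bilateral filter goes here; pass-through shown as a placeholder
      gl_FragColor = texture2D(tNormals, vUv);
    }
  `,
});

const filterQuad = new THREE.Mesh(new THREE.PlaneGeometry(2, 2), filterMaterial);
filterQuad.frustumCulled = false;
const filterScene = new THREE.Scene();
filterScene.add(filterQuad);
const filterCamera = new THREE.OrthographicCamera(-1, 1, 1, -1, 0, 1);

const NUM_FILTER_PASSES = 4; // hard-coded for now; this is the number I want to make dynamic

// normalsTargets[0] is the Stage 1 output; each further entry is one filter level.
const normalsTargets = [normalsTarget];
for (let n = 0; n < NUM_FILTER_PASSES; n++) {
  const target = new THREE.WebGLRenderTarget(window.innerWidth, window.innerHeight, {
    type: THREE.FloatType,
  });
  filterMaterial.uniforms.tNormals.value = normalsTargets[n].texture;
  filterMaterial.uniforms.tDepth.value = normalsTarget.depthTexture;
  renderer.setRenderTarget(target);
  renderer.render(filterScene, filterCamera);
  normalsTargets.push(target);
}
```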

Stage 3:
The final shader pass is the actual algorithm this is all centered around, and it takes as input the entire array of textures, starting with the unfiltered one from the Normals-pass, followed by each successively filtered version (sketch below).

Input: Normals-textures[0…x], where x is the number of filtering passes run in Stage 2
Output: final render
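
A rough sketch of how I currently feed the whole stack into the final pass, building on the sketches above (the algorithm itself is omitted; NUM_LEVELS is just my way of sizing the GLSL sampler array to match the JS array):

```js
// NUM_LEVELS is a define so the GLSL sampler array size matches the JS texture array.
const finalMaterial = new THREE.ShaderMaterial({
  defines: { NUM_LEVELS: normalsTargets.length },
  uniforms: {
    tNormals: { value: normalsTargets.map((target) => target.texture) },
    tDepth:   { value: normalsTarget.depthTexture },
  },
  vertexShader: /* glsl */ `
    varying vec2 vUv;
    void main() {
      vUv = uv;
      gl_Position = vec4(position.xy, 0.0, 1.0);
    }
  `,
  fragmentShader: /* glsl */ `
    uniform sampler2D tNormals[NUM_LEVELS];
    uniform sampler2D tDepth;
    varying vec2 vUv;
    void main() {
      // the local light alignment algorithm goes here; level 0 shown as a stand-in
      gl_FragColor = texture2D(tNormals[0], vUv);
    }
  `,
});

// Final render goes to the screen.
filterQuad.material = finalMaterial;
renderer.setRenderTarget(null);
renderer.render(filterScene, filterCamera);
```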

My problem:

I’ve managed to get this setup working using a different render target for each pass, with a hard-coded number of filter passes in Stage 2. However, I would like this number to be dynamic so that I can adjust the number of passes from the GUI.
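
To illustrate what I mean by “dynamic”, this is roughly the kind of GUI hook I’m after (using lil-gui; rebuildFilterChain is just a placeholder for re-running the Stage 2 setup with a new count, not something I have working):

```js
import GUI from 'lil-gui';

// Placeholder: would dispose the old targets and redo the Stage 2 loop with `count` levels.
function rebuildFilterChain(count) {
  /* reallocate render targets / textures for `count` filter levels */
}

const params = { numFilterPasses: 4 };

const gui = new GUI();
gui.add(params, 'numFilterPasses', 1, 16, 1).onChange((count) => {
  rebuildFilterChain(count);
  // + 1 because the stack also contains the unfiltered Normals-texture[0].
  finalMaterial.defines.NUM_LEVELS = count + 1;
  finalMaterial.needsUpdate = true; // the changed define forces a shader recompile
});
```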

My initial thought was that my array of textures needs to be some sort of 3D texture if I want to keep the depth dynamic. I looked at Data3DTexture and DataArrayTexture. However, since each pass depends on the previous one, and I’m pretty sure I can’t use the same texture as both input and output to a shader, I ran into a wall and don’t really know where to go from here.

The simplest approach I thought of was to just ping-pong between two buffers and then copy the texture into a 3D texture in between renders. However, I haven’t gotten this to work. Whenever I try to clone() a texture and then render it I get an overflow resolution error, and the documentation for clone() says that it’s not a deep copy anyway, so I can’t see how it would work.

I’ve been reading the documentation and looking at the examples for MultipleRenderTargets, thinking that I could maybe devise a solution using a ping-pong approach with multiple render targets: I would render from bufferA to both bufferB and 3Dtexture[0], then from bufferB to both bufferA and 3Dtexture[1], and so on. However, it’s not apparent to me how I would set up (if it’s even possible) a MultipleRenderTarget that contains both regular textures and a 3D texture.

I have a feeling I’m maybe making this more complicated than it needs to be, and that someone with more experience could perhaps point me in the right direction. Am I on to something or am I way off the mark? Thankful for all input!

This is my first post here. Please tell me if there’s anything missing or if I’m being unclear. Also, if you have any ideas for a better title for this topic, please do tell.

Thank you for your attention!