Passing updated vertex buffers between shader passes on GPU

Picture this scenario:
I have a PLY file containing vertex data loaded into a scene. I then add vertex noise based on a flowfield, affecting the positions/particle sizes via a GPGPU texture, which I pass as a uniform to a THREE.Points() / ShaderMaterial object to render as my first RenderPass in an EffectComposer composition.
The next step is that I now want to create an SDF of my scene at this point (with the altered vertices). The code is pretty monstrous right now, so please excuse the lack of a fiddle/CSB link, but what I’m looking for is a way to share an updated vertex buffer between shader passes, keeping everything on the GPU for efficiency. Is this something that a VAO or a VBO would solve? I haven’t used either much before and was hoping for advice.

Two options to consider here:

  • (A) Write your shaders such that vertex positions are dynamically written to a texture. This requires allocating a render target with at least as many pixels as you have vertices, drawing to it, and sampling from it in later materials. Existing gpgpu examples take this approach.
  • (B) Write to a vertex buffer using WebGL 2 “transform feedback” APIs. I’m not aware of any official API to use transform feedback in three.js currently, but you can get the general idea from this PR.
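For (A), a minimal sketch of the bookkeeping involved, assuming a square float render target holding one position per texel. The helper names, the `positionUV` attribute, and the `positionTexture` uniform are illustrative choices, not a three.js API:

```javascript
// Smallest square texture that fits `count` vertices:
function positionTextureSize(count) {
  return Math.ceil(Math.sqrt(count));
}

// UV at which vertex `i` should sample its position (texel centers):
function vertexUV(i, size) {
  return [
    ((i % size) + 0.5) / size,
    (Math.floor(i / size) + 0.5) / size,
  ];
}

// In the later material, each vertex carries its UV as a buffer attribute
// and reads its displaced position from the GPGPU output texture:
const vertexShader = /* glsl */ `
  uniform sampler2D positionTexture;
  attribute vec2 positionUV;
  void main() {
    vec3 displaced = texture2D( positionTexture, positionUV ).xyz;
    gl_Position = projectionMatrix * modelViewMatrix * vec4( displaced, 1.0 );
  }
`;
```

Any later pass (your SDF pass included) can bind the same render target texture as `positionTexture`, so the displaced positions never leave the GPU.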

Thanks Don,
This is very helpful indeed, especially the PR. From what I can see in the roadmap, it looks like this is going to make it into v165 of three.js. Do you (or anyone else) know if that’s likely? If so, I might wait, as the submitted code looks like a much more solid implementation than anything I might hack together for a single render pass on my personal project.
Again thanks for the help!

Sorry, just another follow-up question about your first suggestion.
I’m already using a GPGPU computation render pass to produce my flowfield transformation texture, so I could easily capture the transformed geometry at that point in the process, but I’d need another texture to do so.
Reading this pretty useful article from Babylon.js, it looks like this is possible, as you mentioned, but not yet supported by three.js. I was wondering whether there’s a workaround by outputting/inputting multiple textures? From the docs it looks like there can only be one texture output from the GPUComputationRenderer. Is this correct, or is there a way of hacking it so that one render process can output two textures without my having to run another render call? My project already has a pretty resource-intensive render chain, and I want to save as much gas as possible to increase my particle count 🙂
I know I could just double the size of the single texture and use a buffer offset where my second texture begins, but I’ve been led to believe that multiple smaller textures are always better than one big one… maybe that’s an incorrect assumption.

No, I’m afraid that’s unlikely. The milestone generally indicates that a feature has positive interest from maintainers, and could be merged when/if work is complete. But it does not indicate that anyone has committed time to complete the work by that deadline, and the large majority of tasks on the r165 milestone will simply be moved into the r166 milestone after the next release.

Reading this pretty useful article from Babylon.js, it looks like this is possible, as you mentioned, but not yet supported by three.js. I was wondering whether there’s a workaround by outputting/inputting multiple textures? …

I don’t think VAOs are going to help you much here; they’re just a way of organizing vertex buffers. Writing to a vertex buffer from the shader will still require transform feedback. Alternatively, you can read/write vertex positions in textures instead; three.js supports read/write with textures, as in the gpgpu examples.
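On the two-textures question: a sketch assuming the `GPUComputationRenderer` helper from the three.js examples. As in the gpgpu birds example, it lets you register more than one variable; each variable is backed by its own texture, and all of them are updated in a single `compute()` call, so you shouldn’t need a second render pass or an offset-packed double-size texture. The shader body and names below are illustrative:

```javascript
// Fragment shader for one variable. GPUComputationRenderer injects the
// `resolution` define and a sampler for each registered variable
// (here `texturePosition`) automatically:
const positionShader = /* glsl */ `
  void main() {
    vec2 uv = gl_FragCoord.xy / resolution.xy;
    vec4 pos = texture2D( texturePosition, uv );
    // ... apply flowfield displacement here ...
    gl_FragColor = pos;
  }
`;

// Setup sketch (assumes:
// import { GPUComputationRenderer } from
//   'three/addons/misc/GPUComputationRenderer.js'):
//
// const gpuCompute = new GPUComputationRenderer( SIZE, SIZE, renderer );
// const posVar  = gpuCompute.addVariable( 'texturePosition', positionShader, posTex );
// const sizeVar = gpuCompute.addVariable( 'textureSize', sizeShader, sizeTex );
// gpuCompute.setVariableDependencies( posVar, [ posVar ] );
// gpuCompute.setVariableDependencies( sizeVar, [ sizeVar, posVar ] );
// gpuCompute.init();
//
// Per frame, one call updates both textures:
// gpuCompute.compute();
//
// Then read each output separately in your render materials:
// material.uniforms.positionTexture.value =
//   gpuCompute.getCurrentRenderTarget( posVar ).texture;
```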

I’ve been lead to believe that multiple smaller textures are always better than one big one…

Maybe true in some specific cases, but I don’t think I’d agree with this in general. There is a limit on texture size, though, particularly on mobile devices; I’d probably choose a limit of 2048px or 4096px (max width and max height) and stay under that.
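As a rough guard for that limit, a sketch where `fitsInTexture` is an illustrative helper (in three.js the actual device limit is exposed as `renderer.capabilities.maxTextureSize`):

```javascript
// Does a square GPGPU texture holding one particle per texel stay
// under a conservative maximum dimension?
function fitsInTexture(particleCount, maxSize = 4096) {
  return Math.ceil(Math.sqrt(particleCount)) <= maxSize;
}

// A 4096x4096 texture already holds ~16.7 million particles:
fitsInTexture(16_000_000);       // true  (4000 <= 4096)
fitsInTexture(17_000_000, 4096); // false (4124 > 4096)

// At runtime, prefer the real device limit:
// const maxSize = renderer.capabilities.maxTextureSize;
```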

related: https://www.youtube.com/watch?v=OYYZQ1yiXOE