Does it ever make sense to send vec3 vertex positions to the GPU as an image?

I’ve seen techniques like frame buffer objects (FBOs) wherein one sends and stores vertex positions on the GPU via images rather than vec3 arrays. Does that allow one to pass the data to the GPU in a more compact form?

More generally, if one just needs to pass a massive amount of vertex data to the GPU, are there advantages to using an image rather than a vec3 array? Any suggestions or insights are welcome!

1 Like

The first thing that comes to mind is that you can access your neighbors arbitrarily, but I’ll let someone more experienced answer this.

1 Like

That’s how THREE.Skeleton.boneTexture works, for example. It contains the matrices of the skeleton’s bones, which are used to deform the base mesh’s vertices.

The term “vertex position” usually means a vertex attribute, though… if you really have one vec3 of data per vertex you should probably be using a custom attribute? Maybe I’ve misunderstood the question?
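For the attribute route, it’s only a few lines. Here is a minimal sketch, assuming you just have one xyz triple per point; the positions array and the scene are made-up placeholders:

```js
import * as THREE from 'three';

// Made-up example data: one xyz triple per point.
const positions = new Float32Array([
   0, 0, 0,
   1, 2, 0,
  -1, 0, 3
]);

const geometry = new THREE.BufferGeometry();
// One vec3 per vertex, uploaded as an ordinary attribute buffer.
// Any extra per-vertex vec3 can be added the same way under a custom name.
geometry.setAttribute('position', new THREE.BufferAttribute(positions, 3));

const points = new THREE.Points(geometry, new THREE.PointsMaterial({ size: 0.1 }));
scene.add(points); // assumes an existing `scene`
```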

1 Like

In some cases, yes. Textures may be updated in blocks. I once made an FFT history visualizer that used texSubImage2D to update only one row of the texture with the latest data (unaltered!) from an AudioContext (AnalyserNode etc.). This trick may also be possible with an attribute, using some obscure WebGL calls.
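For what it’s worth, the row update itself is only a handful of calls. A rough sketch with raw WebGL2, where gl, analyser, historyTexture and rows are placeholders, and historyTexture is assumed to have been allocated as an R8 texture that is frequencyBinCount texels wide and rows texels tall:

```js
const bins = analyser.frequencyBinCount;
const latest = new Uint8Array(bins);
let rowIndex = 0;

function pushSpectrum() {
  analyser.getByteFrequencyData(latest);   // newest FFT data, unaltered
  gl.bindTexture(gl.TEXTURE_2D, historyTexture);
  gl.texSubImage2D(
    gl.TEXTURE_2D, 0,                      // target, mip level
    0, rowIndex,                           // x / y offset inside the texture
    bins, 1,                               // overwrite a single row
    gl.RED, gl.UNSIGNED_BYTE,              // must match the R8 allocation
    latest
  );
  rowIndex = (rowIndex + 1) % rows;        // wrap around to scroll the history
}
```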

If the GPU can receive compressed texture formats and decompress them in hardware, and you accept the small inaccuracies that follow, that is another case where an image will positively affect performance.

In another discussion here, we also considered encoding the animation of a very large number of vertices in a video texture.

2 Likes

Yes, I just have some single-vertex Point() objects, each with a vec3 attribute to specify its position. I was just wondering if combining all these vec3 attributes into an image and passing that image to the GPU would offer any advantages over just passing a buffer with all the vec3 data. I hope that helps clarify my line of thought (I still find it hard to speak clearly about some aspects of three.js).

All sorts of good stuff in here!

Do you happen to have a minimal example of using texSubImage2D in Three.js? I think that will be the key that unlocks a good deal of performance for my team!

I also wanted to ask if you know of any examples of video textures in Three.js. That sounds like a pretty fascinating idea, but I haven’t seen any examples yet…

texSubImage2D has quite recently been exposed through WebGLRenderer.copyTextureToTexture.
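A sketch of how that can look with a DataTexture; note that the signature of copyTextureToTexture has changed between three.js releases, and this assumes the older (position, srcTexture, dstTexture) form:

```js
import * as THREE from 'three';

const width = 1024, height = 256; // e.g. 1024 FFT bins, 256 rows of history
const history = new THREE.DataTexture(
  new Uint8Array(width * height), width, height,
  THREE.RedFormat, THREE.UnsignedByteType
);
history.needsUpdate = true;

// A one-row source texture holding the newest data.
const rowData = new Uint8Array(width);
const rowTexture = new THREE.DataTexture(
  rowData, width, 1, THREE.RedFormat, THREE.UnsignedByteType
);

let rowIndex = 0;
function pushRow(renderer, latest /* Uint8Array of length `width` */) {
  rowData.set(latest);
  rowTexture.needsUpdate = true;
  // Copy the one-pixel-tall source into row `rowIndex` of the history texture.
  renderer.copyTextureToTexture(new THREE.Vector2(0, rowIndex), rowTexture, history);
  rowIndex = (rowIndex + 1) % height;
}
```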

Here (turn sound down before going there!) is an example of VideoTexture, though only for postprocessing of video.
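Creating the VideoTexture itself is short. A minimal sketch, assuming a muted, autoplaying video element with id "clip" and an existing scene:

```js
import * as THREE from 'three';

const video = document.getElementById('clip');
video.muted = true;
video.play();

const texture = new THREE.VideoTexture(video);
const screen = new THREE.Mesh(
  new THREE.PlaneGeometry(16, 9),
  new THREE.MeshBasicMaterial({ map: texture })
);
scene.add(screen);

// For data use (e.g. encoding vertex animation), switch to NearestFilter so
// neighbouring texels are not blended:
// texture.minFilter = texture.magFilter = THREE.NearestFilter;
```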

1 Like

Absolutely. I once made a demo that used a simplex-noise shader to power the movement of vertices:

http://dyadstudios.com/playground/flexus/

You can see the bottom-left box shows the RG values that I then used to move the vertex XY positions. You can play with the controls on the top-right drawer to get a feel for what the texture does.

I rendered the “motion shader” to a WebGLRenderTarget, then passed the resulting texture as a uniform to the plane to read the updated values. The best part of this approach is that you can calculate thousands of vertex positions simultaneously with the power of the GPU, instead of doing it sequentially with CPU arrays.
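In outline, the setup looks something like this; it’s a sketch rather than the demo’s actual code, and motionMaterial (the shader that writes the values) and displayMaterial (with a uPositions uniform) are stand-ins:

```js
import * as THREE from 'three';

const size = 256; // 256 x 256 texels = 65,536 vertices' worth of data
const target = new THREE.WebGLRenderTarget(size, size, {
  type: THREE.FloatType,            // keep full-precision values
  minFilter: THREE.NearestFilter,
  magFilter: THREE.NearestFilter
});

// A full-screen quad in its own scene runs the "motion shader" each frame.
const simScene = new THREE.Scene();
const simCamera = new THREE.OrthographicCamera(-1, 1, 1, -1, 0, 1);
simScene.add(new THREE.Mesh(new THREE.PlaneGeometry(2, 2), motionMaterial));

function updatePositions(renderer) {
  renderer.setRenderTarget(target);
  renderer.render(simScene, simCamera);
  renderer.setRenderTarget(null);
  // The display material samples this texture in its vertex shader.
  displayMaterial.uniforms.uPositions.value = target.texture;
}
```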

7 Likes

A few things come to mind. The first is probably a normal map. Normal maps are super common in computer graphics, and they are basically 2D/3D vectors packed into an image where each pixel represents a separate vector. You might think of your 1024x1024 normal map differently if you consider that it’s a collection of roughly a million separate 3D vectors.

For my own usage, I pack various vectors (2-, 3- and 4-dimensional) into image textures to represent particle emitter parameters in my particle engine (Particular). This way I can use a single shader and a single texture for all particle emitters, which saves a lot of texture switches and avoids having to compile unique shaders per emitter.
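As a sketch of the packing idea (the emitter fields here are invented for illustration and are not Particular’s actual data layout):

```js
import * as THREE from 'three';

const emitters = [
  { position: new THREE.Vector3(0, 1, 0), spawnRate: 200 },
  { position: new THREE.Vector3(5, 0, 2), spawnRate: 50 }
];

// One RGBA texel per emitter: xyz position in rgb, spawn rate in alpha.
const data = new Float32Array(emitters.length * 4);
emitters.forEach((e, i) => {
  data.set([e.position.x, e.position.y, e.position.z, e.spawnRate], i * 4);
});

const paramTexture = new THREE.DataTexture(
  data, emitters.length, 1, THREE.RGBAFormat, THREE.FloatType
);
paramTexture.needsUpdate = true;
// In the shader, emitter i fetches its parameters with
// texelFetch(uParams, ivec2(i, 0), 0), so a single texture serves every emitter.
```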

5 Likes

A similar example would be vertex animation textures (VATs) in Houdini: https://github.com/keijiro/HdrpVatExample

2 Likes