Does it ever make sense to send vec3 vertex positions to the GPU as an image?

A few things come to mind. The first is probably a normal map: normal maps are super common in computer graphics, and they are basically 2d or 3d vectors packed into an image, where each pixel represents a separate vector. You might think of your 1024x1024 normal map differently if you consider that it's a collection of roughly 1 million separate 3d vectors.
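To make the "pixels are vectors" idea concrete, here's a minimal sketch of decoding one texel of an ordinary 8-bit normal map back into the vector it encodes (the `decodeNormal` helper is just for illustration, not from any particular library):

```ts
// Hypothetical sketch: one texel of an 8-bit normal map is a vec3 with each
// channel remapped from [0, 255] to [-1, 1].
function decodeNormal(r: number, g: number, b: number): [number, number, number] {
  const toSigned = (c: number) => (c / 255) * 2 - 1;
  return [toSigned(r), toSigned(g), toSigned(b)];
}

// A 1024x1024 normal map stores 1024 * 1024 = 1,048,576 such vectors,
// one per pixel.
console.log(decodeNormal(128, 128, 255)); // roughly [0, 0, 1], i.e. "straight up"
```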

For my own usage, I pack various vectors (2, 3 and 4 dimensions) into image textures to represent particle emitter parameters in my particle engine (Particular). This way I can use a single shader and a single texture for all particle emitters, which saves a lot of texture switches and avoids having to compile unique shaders per emitter. A rough sketch of that general technique is below.
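This isn't the engine's actual code, just a minimal Three.js sketch of the general idea: pack one vec4 of parameters per emitter into a float `DataTexture`, then have the vertex shader look up its emitter's row by index. The names (`emitterParams`, `emitterIndex`, the parameter meanings) are assumptions for illustration.

```ts
import * as THREE from 'three';

const EMITTER_COUNT = 256;

// One RGBA texel per emitter, e.g. (spawnRate, lifetime, size, speed).
const data = new Float32Array(EMITTER_COUNT * 4);
for (let i = 0; i < EMITTER_COUNT; i++) {
  data[i * 4 + 0] = Math.random() * 100;   // spawn rate
  data[i * 4 + 1] = 1 + Math.random() * 4; // lifetime (s)
  data[i * 4 + 2] = Math.random();         // size
  data[i * 4 + 3] = Math.random() * 10;    // speed
}

// A 256x1 float texture: each pixel is one emitter's parameter vec4.
const paramTexture = new THREE.DataTexture(
  data, EMITTER_COUNT, 1, THREE.RGBAFormat, THREE.FloatType
);
paramTexture.needsUpdate = true;

// One shader for every emitter; the per-emitter data lives in the texture.
const material = new THREE.ShaderMaterial({
  uniforms: { emitterParams: { value: paramTexture } },
  vertexShader: /* glsl */ `
    uniform sampler2D emitterParams;
    attribute float emitterIndex; // which emitter this particle belongs to

    void main() {
      // Sample the center of texel 'emitterIndex' in the 256x1 texture.
      vec2 uv = vec2((emitterIndex + 0.5) / ${EMITTER_COUNT}.0, 0.5);
      vec4 params = texture2D(emitterParams, uv);

      // Use the packed parameters however the system needs; here we just
      // scale the point size by the packed "size" value.
      gl_PointSize = params.z * 10.0;
      gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
    }
  `,
  fragmentShader: /* glsl */ `
    void main() { gl_FragColor = vec4(1.0); }
  `,
});
```

Each particle only needs a single `emitterIndex` attribute to find its parameters, so all emitters can share one geometry batch, one texture bind and one compiled program.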
