The question I’m about to ask is somewhat, but not entirely, similar to the one here. In my case, I already have a mesh (spherical geometry, with a video material based on a customized NASA clip made of shades of grey from black to white, corresponding to a month of “weather” playing in a loop) that I use to display and animate the clouds in a 3D representation of the Earth, keeping the video playback in sync with the rest of the movements and objects in the scene.
The system works well, albeit obviously somewhat more intensive than rendering without it, but having recently built my volumetric atmosphere, I was wondering how it would be possible to turn said video (or the frames in it) into a volumetric representation of the clouds, and then possibly integrate it into the existing atmosphere.
Since the video is not made up of volumetric images (where each “page” in the image represents a cross-section at a certain “depth”, as far as I understand), I imagine I’d have to draw each frame onto a temporary canvas, displace the vertices based on color similarly to how it’s done for height maps to create some semblance of “shapes”, and then use the shader to create the volumetric representation based on depth, view direction, sun rays and so on.
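To make the idea above concrete, here is a minimal sketch of the “extrude the frame like a height map” step, not an actual implementation. It assumes (hypothetically) that the grayscale frame has already been read back as a flat `Uint8Array` of luminance values (e.g. via `getImageData()` on a temporary canvas), and it builds a z-major stack of density slices that a ray-marching shader could sample as a 3D texture (in three.js, the layout `Data3DTexture` expects):

```javascript
// Turn one grayscale video frame into a stack of density slices.
// luma: flat Uint8Array of luminance values (0..255), width*height long.
// Returns a width*height*depth Uint8Array, slice after slice (z-major).
function frameToDensityVolume(luma, width, height, depth) {
  const volume = new Uint8Array(width * height * depth);
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      const coverage = luma[y * width + x] / 255; // 0 = clear sky, 1 = thick cloud
      // Treat brightness as vertical extent, like a height map:
      // brighter pixels fill more slices of their voxel column.
      const topSlice = Math.round(coverage * (depth - 1));
      for (let z = 0; z <= topSlice; z++) {
        // Fade density toward the cloud top so the edges look soft.
        const falloff = 1 - z / depth;
        volume[z * width * height + y * width + x] =
          Math.round(coverage * falloff * 255);
      }
    }
  }
  return volume;
}

// Tiny example: a 2x1 frame (one white pixel, one black), 4 slices deep.
const vol = frameToDensityVolume(new Uint8Array([255, 0]), 2, 1, 4);
```

The resulting buffer could be uploaded as a 3D texture and regenerated (or updated on the GPU) each time the video advances a frame; the per-column falloff is just one arbitrary density profile, and a real cloud shader would likely shape it with noise instead.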
So, is there a standard way to approach this, or perhaps a different way to handle it altogether? I can probably take care of the shader part in a similar way to the atmosphere, but how would I turn an essentially 2D video frame into something that can be used for a 3D purpose like this? How would you approach it? No need for code, beyond perhaps some hypothetical examples of techniques I might not yet be aware of; just a basic idea or methodology, if by any chance you have thought about this or have experience with such a scenario.