Hello everyone,
I’m currently researching and implementing a type of animation similar to what’s shown in the following video:
- It's an animation effect that spawns particles on the surface of a model, specifically along the edges of a dissolve effect applied to that surface.
My approach for a simple case (from a front-facing camera perspective):
1. Round 1:
I render an FBO containing the model's dissolve effect: a gradient from black to white. The closer a value is to white, the closer that region is to a dissolve edge. These bright regions are where particles will be spawned.
2. Round 2:
I output another FBO containing the depth map, corresponding to the same camera view.
3. Round 3 (particle position pass):
a. I sample the Round 1 texture at different mipmap levels to randomly distribute points within the bright (near-white) areas, based on screen UV coordinates. This gives me each particle's X and Y position in screen space.
b. Then I sample the depth map at that UV to get the corresponding depth. At this point each particle has a screen-space (x, y) position and a depth value, but not yet a 3D world-space position.
c. Finally, I convert the screen-space coordinates plus depth into 3D world-space positions using the inverse of the camera's view-projection matrix.
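To illustrate step 3c, here is a minimal Python sketch of the unprojection math for the orthographic case described in the question. Everything here is hypothetical (the function name `unproject_ortho`, its parameters, and the assumption that the depth texture stores linear depth in [0, 1] between the near and far planes); a perspective camera would instead require multiplying by the full inverse view-projection matrix and doing a perspective divide.

```python
# Hypothetical sketch: map screen UV + sampled depth to a world-space
# particle position for an orthographic camera.
# Assumes: depth is linear in [0, 1] between near and far planes,
# and UV (0, 0) is the bottom-left corner of the screen.

def unproject_ortho(uv, depth, cam_pos, right, up, forward,
                    half_width, half_height, near, far):
    """Return the world-space position for a screen UV and depth sample."""
    # UV in [0, 1] -> NDC in [-1, 1]
    x_ndc = uv[0] * 2.0 - 1.0
    y_ndc = uv[1] * 2.0 - 1.0
    # Linear [0, 1] depth -> distance along the view direction
    z_view = near + depth * (far - near)
    # For an orthographic camera, the X/Y offsets do not depend on depth
    return tuple(
        cam_pos[i]
        + right[i] * (x_ndc * half_width)
        + up[i] * (y_ndc * half_height)
        + forward[i] * z_view
        for i in range(3)
    )

# Example: camera at z = 5 looking down -Z; a particle at the screen
# center with depth 0.5 lands on the view axis between near and far.
p = unproject_ortho(
    uv=(0.5, 0.5), depth=0.5,
    cam_pos=(0.0, 0.0, 5.0),
    right=(1.0, 0.0, 0.0), up=(0.0, 1.0, 0.0), forward=(0.0, 0.0, -1.0),
    half_width=2.0, half_height=2.0, near=0.1, far=10.0,
)
print(p)  # roughly (0.0, 0.0, -0.05)
```

In a real implementation this would run on the GPU (e.g. in the particle position pass), but the mapping from UV + depth to world space is the same.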
The problem I’m facing:
- To cover the whole model, I need this pair of FBOs for at least 6 camera views: orthographic views along the positive and negative directions of the X, Y, and Z axes.
- And I’ll likely need even more camera angles to reconstruct accurate positions from depth, especially for complex geometries.
- This approach becomes very expensive, and although some optimization might be possible, it seems impractical for complex models.
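To make the cost concrete, here is a small sketch of the render-target budget implied by the points above. The view names and per-view FBO count are my assumptions based on the pipeline described (one dissolve-gradient FBO plus one depth FBO per view).

```python
# Sketch of the render-target budget for the 6-view approach.
# Each axis-aligned orthographic view needs both a dissolve gradient
# FBO and a depth FBO (structure and names are hypothetical).
views = {
    "+X": (1, 0, 0), "-X": (-1, 0, 0),
    "+Y": (0, 1, 0), "-Y": (0, -1, 0),
    "+Z": (0, 0, 1), "-Z": (0, 0, -1),
}
fbos_per_view = 2  # dissolve gradient + depth
total_fbos = len(views) * fbos_per_view
print(total_fbos)  # 12
```

That is 12 render targets per frame before adding any extra angles for concave or complex geometry, which is where the cost becomes hard to justify.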
Has anyone tried this kind of approach before? I’d really appreciate any advice or suggestions you may have!