A method to sample exact positions on the surface of a model

Hello everyone,
I’m currently researching and implementing a type of animation similar to what’s shown in the following video:

  • This is an animation effect that generates particles on the surface of a model, specifically around the edges of a dissolve effect applied to that surface.

My approach for a simple case (from a front-facing camera perspective):

1. Round 1:
I render the model's dissolve effect into an FBO: a gradient ranging from black to white, where values closer to white indicate edge regions. These bright regions are where particles will be spawned.

2. Round 2:
I output another FBO containing the depth map, corresponding to the same camera view.

3. Round 3 (the particle-position pass):
a. I read Texture 1 using mipmap levels to randomly distribute points within the bright (near-white) areas, based on screen UV coordinates. This gives me the X and Y positions.
b. Then I sample the depth map to get the corresponding depth. At this point each particle has a screen-space position (X and Y) plus a Z value from the depth map, but is not yet mapped to 3D.
c. Finally, I convert the screen-space coordinates into 3D world-space positions using the inverse of the projection matrix and the camera's world matrix (see the sketch after this list).
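A minimal sketch of step 3c in Three.js, assuming the depth value sampled from the depth texture is the standard non-linear depth in [0, 1] and the screen UV is in [0, 1] (the function name is just illustrative):

```js
import * as THREE from 'three';

// Reconstruct a world-space position from a screen UV and a sampled depth value.
function screenToWorld(uv, depth, camera) {
  // Map UV [0, 1] and depth [0, 1] into normalized device coordinates [-1, 1].
  const p = new THREE.Vector3(uv.x * 2 - 1, uv.y * 2 - 1, depth * 2 - 1);

  // Vector3.unproject applies the inverse projection matrix (including the
  // perspective divide), then the camera's world matrix:
  // NDC -> view space -> world space.
  return p.unproject(camera);
}
```

Note that it is the inverse of the projection matrix that is needed here; `Vector3.unproject` handles that internally via `camera.projectionMatrixInverse`.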

The problem I’m facing:

  • I need at least 6 sets of FBOs, one per camera view, corresponding to orthographic views from the positive and negative directions of the X, Y, and Z axes.
  • And I’ll likely need even more camera angles to reconstruct accurate positions from depth, especially for complex geometries.
  • This approach becomes very expensive, and although some optimization might be possible, it seems impractical for complex models.

Has anyone tried this kind of approach before? I’d really appreciate any advice or suggestions you may have!


For optimisation, if the model is static you can precompute each particle's position and the time at which it must start flying away, and save them to a file.
Another solution (maybe a bad one): splat (spread) points over each triangle, saving the triangle's normal at each point. Then place the camera at a point's position, offset it along that normal, and read the pixel color; if it is white, save the time to an array and delete the point from the check list. Repeat this for the other points.
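A rough sketch of that second idea, assuming the points have already been splatted over the triangles with their normals saved, and that a `renderer` and a `scene` containing only the dissolve-mask version of the model exist (all names here are placeholders):

```js
import * as THREE from 'three';

// A 1x1 target is enough: we only care about the single pixel under the point.
const maskTarget = new THREE.WebGLRenderTarget(1, 1);
const probeCamera = new THREE.OrthographicCamera(-0.01, 0.01, 0.01, -0.01, 0.001, 1);
const pixel = new Uint8Array(4);

// Check one splatted point at the given animation time; returns true when the
// dissolve mask under the point has turned white, so the caller can record the
// fly-away start time and delete the point from the check list.
function checkPoint(renderer, scene, point, time, results) {
  // Place the camera a little off the surface, looking back along the normal.
  probeCamera.position.copy(point.position).addScaledVector(point.normal, 0.1);
  probeCamera.lookAt(point.position);
  probeCamera.updateMatrixWorld();

  renderer.setRenderTarget(maskTarget);
  renderer.render(scene, probeCamera);
  renderer.readRenderTargetPixels(maskTarget, 0, 0, 1, 1, pixel);
  renderer.setRenderTarget(null);

  if (pixel[0] > 250) {            // near-white: the edge has reached this point
    results.push({ point, time }); // save the fly-away start time
    return true;
  }
  return false;
}
```

Since this reads pixels back on every call, it only makes sense as an offline bake, e.g. stepping `time` and saving the results to a file as suggested above.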


A great tutorial and demo on this exact effect can be found here: Implementing a Dissolve Effect with Shaders and Particles in Three.js | Codrops. It may give some insight into a performant implementation.


This method basically works fine, but if I want randomly generated points in that area, I need a really big MeshSurfaceSampler dataset to get a more random feeling, and a lot of points in a small area for the effect to look better. If the MeshSurfaceSampler count is low, I will only see sparse particles generated exactly from each active vertex, i.e. basically per-vertex particle placement. I am not sure UE uses this method :thinking::thinking:
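For reference, this is the usual MeshSurfaceSampler pattern in Three.js, where the number of samples is chosen freely and is independent of the vertex count (`mesh` is assumed to exist; the import path is the current addons layout):

```js
import * as THREE from 'three';
import { MeshSurfaceSampler } from 'three/addons/math/MeshSurfaceSampler.js';

// Build the sampler once; sampling is uniform over surface area, not vertices.
const sampler = new MeshSurfaceSampler(mesh).build();

const position = new THREE.Vector3();
const normal = new THREE.Vector3();
const samples = [];

// A dense precomputed set gives the random feel; a sparse one looks sparse.
for (let i = 0; i < 100000; i++) {
  sampler.sample(position, normal);
  samples.push({ position: position.clone(), normal: normal.clone() });
}
```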

I think they are using the color from the FBO as a base for the location sample. This method seems very reasonable and optimal for the whole SkinnedMesh.

Ok, I finished my test.

The problem is also simpler than it looks: the approach works with low-vertex-count models, and on both a SkinnedMesh and a static Mesh.

We just store the positions in a texture, then read that texture at a higher resolution.

One little problem to face: we need to sort, or use mipmaps to find the influence area, or, maybe simpler, store the UV texture at a higher resolution. In general, some preparation is needed for it to work best.
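If it helps anyone, here is a minimal sketch of that position-texture bake for a static mesh (a SkinnedMesh would additionally need the skinning vertex-shader chunks): the mesh is drawn unwrapped into UV space, and each texel stores the world-space position of the surface at that UV. Reading this texture at arbitrary, e.g. higher-resolution, dissolve-masked UVs then yields surface positions directly, with no camera views or depth reconstruction at all:

```js
import * as THREE from 'three';

// Float target so positions are not clamped to [0, 1].
const positionTarget = new THREE.WebGLRenderTarget(1024, 1024, {
  type: THREE.FloatType,
  minFilter: THREE.NearestFilter,
  magFilter: THREE.NearestFilter,
});

const bakeMaterial = new THREE.ShaderMaterial({
  side: THREE.DoubleSide,
  vertexShader: /* glsl */ `
    varying vec3 vWorldPos;
    void main() {
      vWorldPos = (modelMatrix * vec4(position, 1.0)).xyz;
      // Unwrap: place each vertex at its UV coordinate in clip space.
      gl_Position = vec4(uv * 2.0 - 1.0, 0.0, 1.0);
    }
  `,
  fragmentShader: /* glsl */ `
    varying vec3 vWorldPos;
    void main() {
      gl_FragColor = vec4(vWorldPos, 1.0); // store position instead of color
    }
  `,
});
```

Render the mesh once with `bakeMaterial` into `positionTarget`, then sample `positionTarget.texture` wherever the dissolve gradient is bright.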
