Reuse model animation with samples of its geometry

I was wondering if I can:

  • load a model
  • sample some particles off the geometry (using MeshSurfaceSampler);
  • use the model's animation with the particles.

This works nicely when I create particles from the vertices (recomputing their positions on each animation frame), but I have no idea how to “randomize” them, other than throwing out some of the vertices :stuck_out_tongue:

I thought of a dictionary mapping each vertex to its closest sample, but that seems like unnecessary overkill, and it would probably be quite imprecise.

Is there a way to keep the relation between the geometry and the samples, or should I approach this with a shader instead?

I have a model → “scattered particles” fiddle (which doesn’t properly handle the skinned mesh matrix, but that’s not the issue here). For me it’s more of a conceptual problem: I don’t know where to go from here, other than abandoning this approach and doing it with a shader.

Is the model animated with skinning, or some other method? And when you say “use the model’s animation with the particles”, do you mean the particles should “stick” to the surface of the animated model? Or do something else, like flying outward on a trajectory determined by the motion of the surface?

With skinning or TRS animation I think you could make a few tweaks to MeshSurfaceSampler and get there… if the animation is coming from a custom shader you will likely need something pretty custom.


Thanks for the quick response :slight_smile:
Yeah, I meant animation with skinning and particles sticking to the surface.

I haven’t looked at the sampler’s source code yet. Do you mean I could make the samples deterministic (i.e. created at the same places on the same geometry)?

I mentioned shaders only because I thought it could be easier to make random dots with a transparent background. But I’ll look into MeshSurfaceSampler.


I hoped that making the randomFunction fixed and not random would do the trick (as it seemed to be the only variable), but it still looks cool.
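As an aside, a common way to make the sampler’s randomFunction deterministic is to swap in a seeded PRNG, so the same seed and geometry always yield the same sample points. This is just a sketch, not from the thread; mulberry32 is one well-known tiny generator, and assigning it to the sampler is based on the randomFunction property mentioned above:

```javascript
// mulberry32: a tiny seedable PRNG returning floats in [0, 1),
// a drop-in stand-in for Math.random.
function mulberry32(seed) {
  return function () {
    seed |= 0;
    seed = (seed + 0x6d2b79f5) | 0;
    let t = Math.imul(seed ^ (seed >>> 15), 1 | seed);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

// Same seed -> same sequence, so samples land on the same spots each run:
const rngA = mulberry32(42);
const rngB = mulberry32(42);
// rngA() and rngB() produce identical values on every call.
// With MeshSurfaceSampler you would then do something like:
//   sampler.randomFunction = mulberry32(42);
// before calling sampler.sample(...).
```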


Looks like you’ve got it working! The approach you have above will be something like O(n * log(n)) time complexity, and seems fine for that number of particles.

I’m thinking there would be at least two ways to bring it down to O(n) or less, both requiring some customization of MeshSurfaceSampler:

  1. Currently sample(...) gives you a position, normal, and color. Modify it to also return the indices of the three vertices nearest to your sample point, along with the barycentric coordinates (i.e. weights) of those vertices. sampleFace computes both but does not return them. With those, you can use the skinnedMesh.boneTransform(...) function to update each particle’s position on every frame as the animation plays: compute the skinned positions of the three base vertices and blend them according to the weights.
  2. The method above should work well for smaller numbers of particles, but it’s not feasible to apply skinning transforms to many thousands of vertices on the CPU. To get around that, you could put your particles into a new SkinnedMesh, bound to the same bones as the base mesh, and modify the sample(...) method to return the interpolated skin indices and weights at the sampled position. More performant, but it requires that the particles go into a SkinnedMesh.
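The per-frame blending step in option 1 can be sketched as follows. This is just an illustration with hypothetical names (blendBarycentric, pA/pB/pC); plain [x, y, z] arrays stand in for Vector3, and in a real three.js scene each of pA/pB/pC would be filled per frame via skinnedMesh.boneTransform(vertexIndex, target):

```javascript
// Blend three (skinned) vertex positions into one particle position
// using barycentric weights (u + v + w should equal 1).
function blendBarycentric(pA, pB, pC, u, v, w) {
  return [
    pA[0] * u + pB[0] * v + pC[0] * w,
    pA[1] * u + pB[1] * v + pC[1] * w,
    pA[2] * u + pB[2] * v + pC[2] * w,
  ];
}

// Example: a sample at the centroid of a triangle.
const p = blendBarycentric([0, 0, 0], [3, 0, 0], [0, 3, 0], 1 / 3, 1 / 3, 1 / 3);
// p is [1, 1, 0]
```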


That was me… :sweat_smile: the idea was to future-proof the API so it could later support SkinnedMesh or morph targets, but those features never got added. (MeshSurfaceSampler: Accept Mesh as parameter, not BufferGeometry by donmccurdy · Pull Request #18219 · mrdoob/three.js · GitHub)


Option 1 sounds like a nice challenge, but I like option 2, and I’ll try it out for sure!
Although I’ve never dug deep into how skinning works :thinking: looking at boneTransform, the skin indices and weights are each a Vector4. Do I have to interpolate them like:

// first vector component - x
face.a.skinIndex.x with face.b.skinIndex.x with face.c.skinIndex.x
// second vector component - y
face.a.skinIndex.y with face.b.skinIndex.y with face.c.skinIndex.y
// .. etc for the 3rd and 4th

also factoring in the actual offsets between the vertices and the sampled position?

That was me…

I hope I didn’t come off as rude :sweat_smile:, I was a bit confused since I wasn’t sure if i had missed something there.

  • skinIndex: the integer index of each bone that influences a vertex.
  • skinWeight: the weight (0–1) corresponding to each bone in the skinIndex list.

For a given sample influenced by 3 vertices, with each vertex influenced by up to 4 bones, you’d need to sum the weights of up to 3 × 4 bones, where each bone’s weight is the product of its weight for that vertex and the weight of that vertex on the sample point. Keep the 4 bone indices with the highest total weights and discard the rest; that gives you skinWeight (4 × float) and skinIndex (4 × int) for the new sample/particle.
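That merging step could look something like the sketch below. The function name and shapes are hypothetical: plain arrays stand in for the skinIndex/skinWeight Vector4 attributes, and renormalizing the four kept weights to sum to 1 is my own assumption, not something stated above.

```javascript
// Merge the bone influences of 3 face vertices into a single
// skinIndex/skinWeight pair for a sampled point.
// vertices: [{ skinIndex: [i0..i3], skinWeight: [w0..w3] }, ...] (3 entries)
// baryWeights: barycentric weights of the sample w.r.t. those vertices.
function mergeSkinInfluences(vertices, baryWeights) {
  const totals = new Map(); // boneIndex -> accumulated weight
  vertices.forEach((v, i) => {
    const bary = baryWeights[i];
    for (let k = 0; k < 4; k++) {
      const bone = v.skinIndex[k];
      // Bone weight for the sample = (bone weight at vertex) * (vertex weight on sample)
      const w = v.skinWeight[k] * bary;
      totals.set(bone, (totals.get(bone) || 0) + w);
    }
  });
  // Keep the 4 strongest bones, discard the rest; pad with zeros if fewer.
  const top = [...totals.entries()].sort((a, b) => b[1] - a[1]).slice(0, 4);
  while (top.length < 4) top.push([0, 0]);
  // Renormalize so the kept weights sum to 1 (my assumption).
  const sum = top.reduce((s, [, w]) => s + w, 0) || 1;
  return {
    skinIndex: top.map(([bone]) => bone),
    skinWeight: top.map(([, w]) => w / sum),
  };
}
```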

I hope I didn’t come off as rude :sweat_smile:, I was a bit confused since I wasn’t sure if i had missed something there.

All good, it doesn’t use the Mesh right now anyway! :slight_smile: