Reuse model animation with samples of its geometry

Looks like you’ve got it working! The approach you have above has roughly O(n log n) time complexity, which seems fine for that number of particles.

I’m thinking there are at least two ways to bring it down to O(n) or better, both requiring some customization of MeshSurfaceSampler:

  1. Currently sample(...) gives you a position, normal, and color. You could modify it to also return the indices of the sampled face’s three vertices and their barycentric coordinates (i.e. weights); sampleFace computes both but does not return them. With those, you can use the skinnedMesh.boneTransform(...) function to update each particle’s position on each frame, as the animation plays, by computing the skinned positions of the three base vertices and blending them according to the weights (see the first sketch after this list).
  2. The method above should work well for smaller numbers of particles, but applying skinning transforms to many thousands of vertices on the CPU is not feasible. To get around that, you could put your particles into a new SkinnedMesh bound to the same bones as the base mesh, and modify the sample(...) method to return the interpolated skin indices and weights at the sampled position (see the second sketch below). More performant, but it requires that the particles go into a SkinnedMesh.
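
For option 1, a minimal sketch of the per-frame update, assuming each particle has already captured `indices` (its face’s three vertex indices) and `weights` (the barycentric coordinates) from a modified sampler — the particle data structure and the `updateParticles` helper are illustrative, not part of three.js:

```js
import * as THREE from 'three';

const _skinned = new THREE.Vector3();
const _blended = new THREE.Vector3();

// particles: [{ indices: [a, b, c], weights: [wa, wb, wc] }, ...]
// captured once at sampling time (hypothetical structure).
function updateParticles(skinnedMesh, particles, positionAttribute) {
  for (let i = 0; i < particles.length; i++) {
    const { indices, weights } = particles[i];
    _blended.set(0, 0, 0);
    for (let j = 0; j < 3; j++) {
      // boneTransform() reads the base vertex at this index, applies the
      // current bone matrices, and writes the skinned position to _skinned.
      skinnedMesh.boneTransform(indices[j], _skinned);
      _blended.addScaledVector(_skinned, weights[j]);
    }
    positionAttribute.setXYZ(i, _blended.x, _blended.y, _blended.z);
  }
  positionAttribute.needsUpdate = true;
}
```

You’d call this once per frame after the animation mixer updates, e.g. `updateParticles(skinnedMesh, particles, points.geometry.attributes.position)`.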
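And a sketch of option 2, assuming a hypothetical `sampleSkinned(...)` variant of sample() that also exposes the face’s vertex indices and barycentric weights, plus an existing `baseMesh` (the source SkinnedMesh), `material`, and `scene`. Since skin indices can’t be blended numerically, this simplification copies the skinIndex/skinWeight of the vertex with the largest barycentric weight rather than truly interpolating:

```js
import * as THREE from 'three';

const count = 10000;
const positions = new Float32Array(count * 3);
const skinIndices = new Uint16Array(count * 4);
const skinWeights = new Float32Array(count * 4);

const base = baseMesh.geometry;
const _position = new THREE.Vector3();

for (let i = 0; i < count; i++) {
  // Hypothetical extension of sample() returning the sampled face's
  // vertex indices and barycentric weights along with the position.
  const { indices, weights } = sampler.sampleSkinned(_position);
  _position.toArray(positions, i * 3);

  // Take skinning data from the vertex with the largest weight.
  const nearest = indices[weights.indexOf(Math.max(...weights))];
  for (let j = 0; j < 4; j++) {
    skinIndices[i * 4 + j] = base.attributes.skinIndex.getComponent(nearest, j);
    skinWeights[i * 4 + j] = base.attributes.skinWeight.getComponent(nearest, j);
  }
}

const geometry = new THREE.BufferGeometry();
geometry.setAttribute('position', new THREE.BufferAttribute(positions, 3));
geometry.setAttribute('skinIndex', new THREE.BufferAttribute(skinIndices, 4));
geometry.setAttribute('skinWeight', new THREE.BufferAttribute(skinWeights, 4));

// Bind the particle mesh to the same skeleton as the base mesh, so the
// GPU skinning shader moves the particles along with the animation.
const particleMesh = new THREE.SkinnedMesh(geometry, material);
particleMesh.bind(baseMesh.skeleton, baseMesh.bindMatrix);
scene.add(particleMesh);
```

This only sketches the binding; actually rendering the sampled points through a SkinnedMesh would still need a suitable geometry/material setup for point-like output.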

WHY DOES THE SAMPLER NEED A MESH IF IT USES A GEOMETRY :PPPPP

That was me… :sweat_smile: The idea was to future-proof the API in case SkinnedMesh or morph target support was added later, but those features never landed. (MeshSurfaceSampler: Accept Mesh as parameter, not BufferGeometry by donmccurdy · Pull Request #18219 · mrdoob/three.js · GitHub)