Procedural animation and skeletons

I’m working on a project for which I want to animate some models/meshes procedurally. Should I bother using the three.js Skeleton class?

I’m currently working on collision detection. I’m trying to detect collisions among the components of my models, and I realized recently that using a Skeleton to animate the models makes this somewhat difficult. For one, the skeleton bones in a sense only affect the model vertices, not the model objects. Because I’m not (currently) binding my model objects’ vertices to multiple bones, or even binding vertices in the same object to different bones, I can fairly easily map any model object to a unique bone anyway. Still, calculating the world position of model objects isn’t that straightforward.

Maybe I should create my own bespoke ‘skeleton’ code. Being able to move the model objects directly and thus easily determine, e.g. their world position, seems like it might be worth replacing Skeleton with my own version.

TBH, I don’t understand this sentence. What do you mean by “model objects”?

It’s hard to make any suggestions based on the information you’ve provided so far. Your problem is not clear to me.


Thanks for the reply; sorry for the confusion!

A ‘model’ is just a thing in my game, say a humanoid character controlled by the user. In my code that uses three.js, the model is a hierarchy of three.js BoxGeometry objects, e.g. one for the right forearm, another for the right upper arm, etc. Each of the BoxGeometry objects is a ‘model object’.

[What’s the standard or common terminology for this?]

I’m procedurally animating the humanoid character by manipulating the skeleton bones. That works fine in terms of producing the expected visual effect of the character moving.

I’d like to calculate collisions with the character and, furthermore, with specific model objects, e.g. the BoxGeometry object for the character’s right forearm.

My initial problem was that, if I moved the character’s right forearm by manipulating the relevant bone, the resulting motion wasn’t applied to the right forearm BoxGeometry object. I wrote:

For one, the skeleton bones in a sense only affect the model vertices, not the model objects.

Obviously the skeleton bones visually affect the “model objects”. What I was initially surprised to discover was that I couldn’t find anything in the model objects’ data indicating that they had been affected by the skeleton bones, i.e. the position of the BoxGeometry object for the character’s right forearm wasn’t changing even when it was obviously (visually) being animated on screen.

[Based on my cursory reading of the three.js code, that’s because the effects of ‘skinning’ simply aren’t tracked in the geometry objects; the ‘skinned’ vertex positions are calculated by lower-level rendering code (in the vertex shader).]

So, my code is moving the character’s right forearm by manipulating a bone. That works. But now I want to calculate potential collisions between the forearm object and other objects. How do I determine where in world space the character’s forearm is? From what I can tell, I have to walk the hierarchy of the character objects and apply the transformations for each of the relevant bones. I think I can do that pretty easily, but only by simplifying the possibilities that the three.js ‘skinning’ features allow.
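For example, something like this is what I have in mind, assuming each box is rigidly bound to exactly one bone (forearmBone and forearmBindPosition are placeholder names of mine):

// If the forearm box is rigged rigidly to a single bone, its current
// world-space position should be its bone-local bind position transformed
// by that bone's world matrix. (This ignores skinnedMesh.bindMatrix and the
// bone inverses, so it's only a sketch.)
scene.updateMatrixWorld(true); // ensure bone world matrices are current
const forearmWorldPosition = forearmBindPosition // THREE.Vector3, bone-local
  .clone()
  .applyMatrix4(forearmBone.matrixWorld);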

Again:

For one, the skeleton bones in a sense only affect the model vertices, not the model objects.

This is still probably confusing. My bad!

the skeleton bones in a sense only affect the model vertices

By this, I was just referring to how each vertex in a geometry object has to be associated, via its skinIndices and skinWeights properties, with specific bones. Currently, I could associate each vertex with up to four different bones (and each vertex can be associated with different bones).
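[In recent three.js versions, those per-vertex bindings are the skinIndex and skinWeight BufferGeometry attributes, four slots per vertex. A minimal sketch of the single-bone rigging I’m describing, assuming an existing geometry:]

// Bind every vertex of 'geometry' fully to bone 0 (the single-bone case).
// Each vertex gets four bone slots; the unused slots get weight 0.
const vertexCount = geometry.attributes.position.count;
const skinIndices = [];
const skinWeights = [];
for (let i = 0; i < vertexCount; i++) {
  skinIndices.push(0, 0, 0, 0);
  skinWeights.push(1, 0, 0, 0);
}
geometry.setAttribute('skinIndex', new THREE.Uint16BufferAttribute(skinIndices, 4));
geometry.setAttribute('skinWeight', new THREE.Float32BufferAttribute(skinWeights, 4));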

… not the model objects

By this, I was referring to the point above about the bones being associated with vertices and also to the point that the “model objects”, e.g. the BoxGeometry objects in my humanoid character, are not directly associated with any bones.

So, in general, there isn’t any obvious way to determine how transformations made to bones should be applied to an entire BoxGeometry object. Certainly one could, e.g. calculate the position of the box as the centroid (geometric average) of all of its vertices.

But, in my case, AFAICT, I only need or want to use one bone at a time. Thus I can ‘simplify’ (or bypass) the need to solve the general case described above and just assume that, e.g. only one bone is ever ‘rigged’ for any one geometry object. I should then be able to pick any vertex, determine what bone it’s rigged to, and apply the bone’s transformation to the object itself.
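A sketch of that, assuming every vertex of the mesh is rigged to the same bone (using the attribute names from current three.js):

// Assuming all of this SkinnedMesh's vertices are rigged to one bone, read
// that bone from the first vertex's skinIndex and use its world matrix.
scene.updateMatrixWorld(true); // ensure bone world matrices are current
const boneIndex = mesh.geometry.attributes.skinIndex.getX(0);
const bone = mesh.skeleton.bones[boneIndex];
// Transform a bone-local point (here the bone's origin) into world space.
const objectWorldPosition = new THREE.Vector3().applyMatrix4(bone.matrixWorld);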

Given all of the above, it seems like I might be better off simply not using the three.js Skeleton features and, e.g. animating the humanoid character some other way (“bespoke ‘skeleton’ code”).

You may find some of the links in this IK thread helpful:


Also note that for any bone you can do things like:

// Get world position of a single bone.
const v = new THREE.Vector3();
bone.getWorldPosition(v); // warning: expensive
console.log(v);

// Not sure this is quite right, may need to account for skinnedMesh.bindMatrix,
// but something like this should be workable:
const boxPosition = box.getWorldPosition(new THREE.Vector3()); // box's position in world space
handBone.worldToLocal(boxPosition); // now expressed in the bone's local space
console.log(boxPosition);

As @donmccurdy said, you can easily retrieve the transformation of a single bone. But I don’t think that BoxGeometry or a mesh is the right entity for collision detection; they are primarily used for rendering. What you actually need is a bounding volume that understands orientation, for example an Oriented Bounding Box (OBB). Normally you would create a hierarchy of such bounding volumes, each representing a single part of your character (arms, legs, etc.). Of course, you would keep them in sync so they always represent the bounds of your model, even when animations are applied.

The problem is that three.js does not provide an OBB or the corresponding SAT (separating axis theorem) implementation for intersection tests. I’ve also never seen a plugin for this. That is somewhat understandable, since a real OBB implementation that calculates the oriented minimum bounding box with good time complexity is advanced computational geometry.

You can try to build a solution based on THREE.Box3 (AABB), but this bounding volume won’t have optimal tightness.
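For example, a rough sketch based on Box3 (the function and mesh names are placeholders). Note that Box3.setFromObject() derives the box from the object’s transform and geometry, not from skinned vertex positions, so this fits best when the parts are ordinary meshes moved via their own transforms:

// One world-space AABB per body part, refreshed every frame and tested
// pairwise against other parts or obstacles.
const partBox = new THREE.Box3();
const otherBox = new THREE.Box3();

function partsCollide(partMesh, otherMesh) {
  partBox.setFromObject(partMesh); // recompute from the current world transform
  otherBox.setFromObject(otherMesh);
  return partBox.intersectsBox(otherBox);
}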


Thanks; that’s about what I figured I’d have to do.

Thanks for the reply.

I understand the utility of bounding boxes generally, i.e. they’re much cheaper to test, e.g. for collisions, than the full, potentially complicated geometry.

But in my case, for the specific demo I’m working on, the full geometry is just a bunch of boxes. I’m guessing that BoxGeometry is still more complicated than the minimum object needed to encapsulate just the relevant bounding-box info, but, in my specific case, it doesn’t seem worth optimizing right now.

If the geometry of my characters were significantly more complicated, what are the downsides of using the three.js geometry or mesh objects directly to calculate collisions? Is there some way, perhaps by writing a three.js plugin, that I could maintain bounding boxes more cheaply?

What about my idea of implementing my own ‘bespoke’ skeleton animations? The idea is that if I use the regular three.js transformations, then I don’t need to apply bone transformations separately to perform, e.g. collision calculations.
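For instance, a rough sketch of what I mean (the sizes and joint offsets are made up):

// A 'bespoke skeleton': a plain Object3D hierarchy with one mesh per body
// part, rotated directly at joint pivots instead of via bones.
const material = new THREE.MeshNormalMaterial();
const upperArm = new THREE.Group(); // pivot at the shoulder
const forearm = new THREE.Group();  // pivot at the elbow
forearm.position.y = -0.3;          // made-up elbow offset below the shoulder
forearm.add(new THREE.Mesh(new THREE.BoxGeometry(0.1, 0.3, 0.1), material));
upperArm.add(forearm);

// Animate by rotating the groups; world positions stay trivially queryable.
forearm.rotation.x = Math.PI / 4;
const elbowWorldPosition = forearm.getWorldPosition(new THREE.Vector3());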

An OBB implementation that calculates minimum bounding boxes does sound like a lot of work. I’m going to pass on that for the indefinite future!

But the SAT intersection algorithm seems totally feasible to implement! Thanks for the pointer.
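For the record, here’s roughly the SAT test I have in mind for two oriented boxes, each described by a center, half-extents, and three orthonormal axes (an ad hoc structure of my own, not a three.js type):

// SAT overlap test for two OBBs. Each box is { center: Vector3,
// halfExtents: Vector3, axes: [Vector3, Vector3, Vector3] } with unit axes.
function obbsIntersect(a, b) {
  // Candidate separating axes: the 3 face normals of each box plus the 9
  // pairwise edge cross products (15 axes total).
  const axes = [...a.axes, ...b.axes];
  for (const ax of a.axes) {
    for (const bx of b.axes) {
      const cross = new THREE.Vector3().crossVectors(ax, bx);
      if (cross.lengthSq() > 1e-8) axes.push(cross.normalize()); // skip near-parallel pairs
    }
  }
  const d = new THREE.Vector3().subVectors(b.center, a.center);
  for (const axis of axes) {
    // Radius of each box's projection onto the candidate axis.
    const ra = a.halfExtents.x * Math.abs(axis.dot(a.axes[0]))
             + a.halfExtents.y * Math.abs(axis.dot(a.axes[1]))
             + a.halfExtents.z * Math.abs(axis.dot(a.axes[2]));
    const rb = b.halfExtents.x * Math.abs(axis.dot(b.axes[0]))
             + b.halfExtents.y * Math.abs(axis.dot(b.axes[1]))
             + b.halfExtents.z * Math.abs(axis.dot(b.axes[2]));
    if (Math.abs(d.dot(axis)) > ra + rb) return false; // found a separating axis
  }
  return true; // no separating axis exists, so the boxes overlap
}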