Object bounds not updated with animation

I’m using an animated player to generate icons from its different actions (idle, running, jumping, sliding, etc.). Using three.js r95.

To do this, I place the player in the corresponding action pose, fit the player into the camera bounds, render the scene and get the image from the canvas.
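For reference, the “fit the player into the camera bounds” step boils down to placing a perspective camera far enough away that the object’s bounding sphere fills the vertical field of view. A plain-math sketch of that distance (not three.js API, just the trigonometry):

```javascript
// Distance at which a sphere of the given radius exactly fits a perspective
// camera's vertical field of view (fovDegrees). Plain math, no three.js.
function fitDistanceForSphere(radius, fovDegrees) {
  const halfFov = (fovDegrees * Math.PI) / 180 / 2;
  return radius / Math.sin(halfFov);
}
```

This is exactly why a stale bounding box breaks the fit: the radius fed into this formula comes from the default pose, not the animated one.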

Everything works fine, except when the player is in a pose other than the default one (e.g. jumping or sliding). The player is not fitted into the camera bounds and gets cropped in the image.

Investigating a bit, it seems that the bounding box values are not updated with the animation, so the player can’t be fit properly into the camera bounds.

[screenshot: player cropped in the rendered image]

I’ve created a codepen for testing, using the marine from the three.js examples. By the way, my player is in glTF format and the marine is in JSON format, so the problem is not related to the 3D object itself.

https://codepen.io/Eketol/pen/oMJYxM

I’ve tried several things like force-updating the matrix, computing the geometry’s bounding box, etc., but no success. Not sure if I’m doing something wrong or if it is a bug.

  • Is it possible to get the animated player’s bounds while he is in a pose other than the default one?

  • Perhaps it is not a bug, but a feature (because it could be a performance issue). Maybe the bounds could be updated on demand by calling ‘updateBoundingBox()’ or with some flag like ‘updateBoundsOnAnimation = true’?


Skeletal animation, as a form of vertex displacement, happens on the GPU (in the vertex shader). The bounding box is calculated on the CPU. three.js does not support updating the bounding box with animated vertex information.

To solve this issue, you would have to apply the vertex transformation per frame to the geometry via JavaScript and then calculate the bounding box. However, there is no code in the official repository that demonstrates this approach.
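A sketch of that idea in plain JavaScript (simple arrays rather than three.js types, and a hypothetical `computeSkinnedAABB` helper; the `boneMatrices` would be each bone’s world matrix multiplied by its inverse bind matrix, exactly as in the vertex shader):

```javascript
// Apply a 4x4 column-major matrix to a point [x, y, z].
function applyMatrix4(m, p) {
  const [x, y, z] = p;
  return [
    m[0] * x + m[4] * y + m[8]  * z + m[12],
    m[1] * x + m[5] * y + m[9]  * z + m[13],
    m[2] * x + m[6] * y + m[10] * z + m[14],
  ];
}

// Linear-blend skinning on the CPU: each vertex is the weighted sum of its
// position transformed by up to four bone matrices; the AABB accumulates
// the skinned results.
// vertices: array of [x,y,z]; skinIndices/skinWeights: per-vertex arrays of
// four bone indices and weights; boneMatrices: array of 4x4 matrices.
function computeSkinnedAABB(vertices, skinIndices, skinWeights, boneMatrices) {
  const min = [Infinity, Infinity, Infinity];
  const max = [-Infinity, -Infinity, -Infinity];
  for (let v = 0; v < vertices.length; v++) {
    const skinned = [0, 0, 0];
    for (let i = 0; i < 4; i++) {
      const w = skinWeights[v][i];
      if (w === 0) continue;
      const t = applyMatrix4(boneMatrices[skinIndices[v][i]], vertices[v]);
      skinned[0] += t[0] * w;
      skinned[1] += t[1] * w;
      skinned[2] += t[2] * w;
    }
    for (let k = 0; k < 3; k++) {
      min[k] = Math.min(min[k], skinned[k]);
      max[k] = Math.max(max[k], skinned[k]);
    }
  }
  return { min, max };
}
```

Running this every frame over every vertex is what makes the approach expensive; the GPU does the same math in parallel.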

BTW: Raycasting against a skinned mesh is also not supported for the same reason, see

Too sad to hear the news. I guess it means I can’t do drag and drop or have some accurate collision detection. I will have to invest some time in the workarounds. Thanks for the quick response.

three.js does not support to update the bounding box with animated vertex information.

Does any real-time 3D engine support this?

Yes. Unity is one example: https://docs.unity3d.com/ScriptReference/SkinnedMeshRenderer-localBounds.html


I’ve actually needed this functionality before - not per frame updating, but just creating a relatively conservative pre-computed bounding box that encompasses animations.

Good to know it’s possible, it would be a useful function to add to the examples.


This is what we are looking for too. Adding my vote for an example function for it, if possible.

@Mugen87 sent me here from another thread.

I was able to solve this for dddance.party with some help from the illustrious Vince Mckelvie. I remember it being an extremely annoying challenge to solve…

I am using animated DAE files, but hopefully it will work for any format, since it only depends on the SkeletonHelper?

Here is the code from when the animated DAE (object) is loaded:

var avatar = object.scene;
mixer = new THREE.AnimationMixer( avatar );
mixer.clipAction( animations[ 0 ] ).play();
scene.add( avatar );

// the SkeletonHelper tracks the animated bone positions on the CPU
var skeleton = new THREE.SkeletonHelper( avatar );

// box_helper is just lines, used only to follow the skeleton's bounds
var box_helper = new THREE.BoxHelper( skeleton, 0xffffff );
box_helper.material.visible = false;
box_helper.update();
scene.add( box_helper );

// box_mesh is a transparent box that gets resized to the bounds each frame
var geo = new THREE.BoxBufferGeometry( 1, 1, 1 );
var mat = new THREE.MeshBasicMaterial( { color: 0xeeeeee } );
var box_mesh = new THREE.Mesh( geo, mat );
box_mesh.material.visible = false;
box_mesh.material.transparent = true;
box_mesh.visible = true;

Here is the code running every animation loop:

for ( var d = 0; d < dancers.length; d++ ) {
	var box = dancers[ d ].box_helper;
	box.update();
	// center and size of the helper's bounding box
	var cc = getCenterPoint( box );
	var ww = Math.abs( box.geometry.boundingBox.max.x - box.geometry.boundingBox.min.x );
	var hh = Math.abs( box.geometry.boundingBox.max.y - box.geometry.boundingBox.min.y );
	var dd = Math.abs( box.geometry.boundingBox.max.z - box.geometry.boundingBox.min.z );
	// move and scale the invisible box mesh to match
	dancers[ d ].box_mesh.position.set( cc.x, cc.y, cc.z );
	dancers[ d ].box_mesh.scale.set( ww, hh, dd );
}

The get center function:

function getCenterPoint( mesh ) {
	var geometry = mesh.geometry;
	geometry.computeBoundingBox();
	// pass a target vector instead of relying on the no-argument form
	var center = new THREE.Vector3();
	geometry.boundingBox.getCenter( center );
	mesh.localToWorld( center );
	return center;
}

I hope this helps :)))


Hehe, calculating a bounding box based on SkeletonHelper is a smart idea. Unfortunately, it’s not a 100% accurate solution since you do not calculate the bounding box based on animated vertices.

When you look closely, you can see that certain parts of your character’s geometry extend outside the resulting bounding box.

[screenshot: geometry extending outside the computed bounding box]

So it might work for your application but it’s important to highlight that no exact bounding volume is produced. As an approximation, it might be sufficient.

It’s pretty damn close though ;)))
Unless your model is extremely overweight…
What is the context in which you would need 100% accuracy for this?


E.g. exact collision detection or raycasting.

Well, as you said, it depends on the model. The hotdog character shows the problem more obviously.

Since I currently work at a project with animated characters, I’ve implemented a function which computes the current AABB based on the transformed vertices. As you can see in the following fiddle, the red AABB is the result if you use Box3.setFromObject() with a skinned mesh. The green box is calculated per frame and represents the real bounds of the object. It’s actually the same code as in the vertex shader but now implemented in JavaScript.

https://jsfiddle.net/qosuLyhf/

The next fiddle computes, immediately after the animated character has loaded, an AABB that represents its maximum bounds. Something like this is useful as the first step of a ray/intersection test with a skinned mesh. However, I’m not sure if there is a better way to process the animation data. The problem is that an animation clip usually consists of multiple tracks which can have different amounts of keyframes. This makes it hard to iterate over all keyframes and calculate the current transformation, since you would have to insert missing keyframes in certain tracks. Right now, the code just samples the animation clip at a given number of points. The greater the value, the more exact the produced AABB, but processing also takes more time.

https://jsfiddle.net/q4sjbLuk/
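The sampling idea can be sketched in plain JavaScript. Here `computePoseAABB` is a hypothetical callback standing in for whatever computes the box for a single pose (e.g. a CPU-skinning routine); the clip’s box is simply the union of the per-sample boxes:

```javascript
// Evenly spaced sample times across a clip of the given duration.
function sampleTimes(duration, steps) {
  const times = [];
  for (let i = 0; i <= steps; i++) times.push((duration * i) / steps);
  return times;
}

// Union of two AABBs of the form { min: [x,y,z], max: [x,y,z] }.
function unionBoxes(a, b) {
  return {
    min: a.min.map((v, i) => Math.min(v, b.min[i])),
    max: a.max.map((v, i) => Math.max(v, b.max[i])),
  };
}

// Sample the clip at `steps + 1` points and union the per-pose boxes into
// one conservative box covering the whole animation.
function computeClipAABB(clipDuration, steps, computePoseAABB) {
  let box = {
    min: [Infinity, Infinity, Infinity],
    max: [-Infinity, -Infinity, -Infinity],
  };
  for (const t of sampleTimes(clipDuration, steps)) {
    box = unionBoxes(box, computePoseAABB(t));
  }
  return box;
}
```

The result is only as tight as the sampling: a pose between two sample times can still poke outside the box, which is the accuracy/time trade-off described above.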

But be aware that computing the AABB in this way is expensive. It’s not surprising that vertex blending is computed in parallel on the GPU^^.


Can you describe how a bounding box is used in raycasting? It seems like if you are raycasting against just the box, there would be a lot of times where your raycast would hit the box but not the character.

Sorry for the delay.

It seems like if you are raycasting against just the box, there would be a lot of times where your raycast would hit the box but not the character.

Yes, that’s correct. There are two options: first, you just use the bounding box for performance reasons and accept the disadvantages of a coarse bounding volume. That means you might detect intersections even though the ray only hits the AABB but not the geometry of your model. The other option is to perform a second, more detailed raycast against the animated geometry, but this is a relatively expensive test (it depends on the complexity of the model). An alternative is to use GPU picking, but that does not work in all use cases, for example in shooters. If you want to detect whether a bullet hits an animated character, you need tighter bounding volumes, like a hierarchy of oriented bounding boxes (OBBs), or you actually test on the geometry level.
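The broad-phase half of that scheme is a plain ray/AABB test. A minimal sketch using the classic slab method (plain arrays, independent of three.js, which has its own `Ray.intersectBox`):

```javascript
// Broad-phase test: does a ray hit an AABB?
// origin/dir are [x,y,z]; box is { min: [x,y,z], max: [x,y,z] }.
// The direction does not need to be normalized.
function rayIntersectsAABB(origin, dir, box) {
  let tmin = -Infinity;
  let tmax = Infinity;
  for (let i = 0; i < 3; i++) {
    if (dir[i] === 0) {
      // Ray parallel to this slab: miss unless the origin lies inside it.
      if (origin[i] < box.min[i] || origin[i] > box.max[i]) return false;
    } else {
      let t1 = (box.min[i] - origin[i]) / dir[i];
      let t2 = (box.max[i] - origin[i]) / dir[i];
      if (t1 > t2) [t1, t2] = [t2, t1];
      tmin = Math.max(tmin, t1);
      tmax = Math.min(tmax, t2);
      if (tmin > tmax) return false;
    }
  }
  return tmax >= 0; // intersection in front of (or at) the origin
}
```

Only rays that pass this cheap test need the expensive narrow-phase check against the animated geometry.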


Hi @Mugen87.

Your example links are not working at the moment, could you update them please?
I need collision detection with animated characters, and it looks like I need this solution, because I need exact collision detection (to know the place where a sword strikes the dress/body).

I’ve tested both fiddles and they seem to work.

Yes, now yes, thank you.

Please tell me, can I use this logic for skinned meshes after I have updated the skeleton? (I got the skinned meshes after importing an FBX model.) I can’t currently apply new vertices (Vector3) to the old skinned mesh…

I had a similar problem and solved it by:

mesh.geometry.boundingBox = new THREE.Box3().expandByObject(mesh);

Thanks @Mugen87.
Can I implement a vertex normal helper with that code?
I mean like the video below

VertexNormalsHelper currently does not honor animations via morph targets or skinning.
