Since I currently work on a project with animated characters, I’ve implemented a function that computes the current AABB from the transformed vertices. As you can see in the following fiddle, the red AABB is the result of using Box3.setFromObject()
with a skinned mesh. The green box is recalculated per frame and represents the real bounds of the object. It’s essentially the same code as in the vertex shader, but implemented in JavaScript.
https://jsfiddle.net/qosuLyhf/
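The per-frame computation boils down to applying linear blend skinning on the CPU. Here is a minimal sketch using plain arrays instead of three.js buffer attributes (the function names and array layouts are my own for illustration): each vertex is transformed by the weighted sum of its four bone matrices, and the box is the min/max over the results.

```javascript
// Multiply a column-major 4x4 matrix with a point (w = 1).
function applyMatrix4(m, x, y, z) {
  return [
    m[0] * x + m[4] * y + m[8] * z + m[12],
    m[1] * x + m[5] * y + m[9] * z + m[13],
    m[2] * x + m[6] * y + m[10] * z + m[14],
  ];
}

// positions: flat [x, y, z, ...]; skinIndices/skinWeights: flat groups of
// four per vertex; boneMatrices: array of column-major 4x4 matrices
// (bone world matrix * inverse bind matrix).
function computeSkinnedAABB(positions, skinIndices, skinWeights, boneMatrices) {
  const min = [Infinity, Infinity, Infinity];
  const max = [-Infinity, -Infinity, -Infinity];

  for (let i = 0; i < positions.length / 3; i++) {
    const x = positions[i * 3];
    const y = positions[i * 3 + 1];
    const z = positions[i * 3 + 2];
    const skinned = [0, 0, 0];

    // Vertex blending: weighted sum over the four influencing bones,
    // same math the skinning vertex shader performs on the GPU.
    for (let j = 0; j < 4; j++) {
      const w = skinWeights[i * 4 + j];
      if (w === 0) continue;
      const m = boneMatrices[skinIndices[i * 4 + j]];
      const p = applyMatrix4(m, x, y, z);
      skinned[0] += p[0] * w;
      skinned[1] += p[1] * w;
      skinned[2] += p[2] * w;
    }

    // Expand the box by the skinned vertex.
    for (let k = 0; k < 3; k++) {
      min[k] = Math.min(min[k], skinned[k]);
      max[k] = Math.max(max[k], skinned[k]);
    }
  }

  return { min, max };
}
```

In the fiddle the equivalent data comes from the skinned mesh’s geometry attributes and `skeleton.boneMatrices`; the loop above just makes the underlying math explicit.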
The next fiddle computes, immediately after the animated character has loaded, an AABB that represents its maximum bounds. Something like this is useful as the first step of a ray intersection test with a skinned mesh. However, I’m not sure whether there is a better way to process the animation data. The problem is that an animation clip usually consists of multiple tracks, which can have different numbers of keyframes. This makes it hard to iterate over all keyframes and compute the current transformation, since you would have to insert missing keyframes into certain tracks. Right now, the code simply samples the animation clip at a given rate: the higher the value, the more accurate the produced AABB, but the longer the processing takes.
https://jsfiddle.net/q4sjbLuk/
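The sampling idea can be sketched independently of the animation system. In this sketch, `sampleAABBAt` is a hypothetical callback that poses the skeleton at time `t` (e.g. by updating an AnimationMixer and running the per-frame AABB code above) and returns the box at that instant; the maximum bounds are then the union of all sampled boxes.

```javascript
// Sample the clip at evenly spaced times and union the resulting boxes.
// More samples produce a tighter maximum AABB but take longer to process.
// sampleAABBAt(t) is assumed to return { min: [x, y, z], max: [x, y, z] }.
function computeMaxAABB(clipDuration, samples, sampleAABBAt) {
  const min = [Infinity, Infinity, Infinity];
  const max = [-Infinity, -Infinity, -Infinity];

  for (let s = 0; s <= samples; s++) {
    const t = (s / samples) * clipDuration;
    const box = sampleAABBAt(t);

    // Union: expand the accumulated box by the sampled one.
    for (let k = 0; k < 3; k++) {
      min[k] = Math.min(min[k], box.min[k]);
      max[k] = Math.max(max[k], box.max[k]);
    }
  }

  return { min, max };
}
```

Note that a pose between two samples can still poke outside the resulting box, so for a conservative bound you may want to pad it slightly.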
But be aware that computing the AABB this way is expensive. It’s no surprise that vertex blending is normally performed in parallel on the GPU^^.