Question about Exporting Skinned Mesh in different poses

I have a question about exporting a 3D model that has been posed, so that it retains that new pose for 3D printing.

Right now we have a single animation clip that contains all the different poses, and we can move between all of them. It is my understanding that animation in three.js happens in shaders… so while the character looks like it is in the right pose, all of that processing happens on the GPU, and on the CPU the model is still in the rest pose.

How do I make sure that the vertices actually move, so that when I export the model it is actually in its new posed form?

Is there a discrete number of static poses you need to export? Or does the animation blend between poses, and you need to be able to stop the animation at any point and still get the data (on the CPU) for that in-between state?

There are a discrete number of static poses and there isn't any blending. I use that custom PR you submitted for splitting a single animation into multiple clips. Each pose is a single frame, and I stop the animation once it is in that pose. So I know it would be possible to get the data, but I'm not sure how to get it into a 3D-printable export format (like .obj).

This is related to another problem I'm having, where I want to be able to select clothing items for coloring once they are on a model. But since all the objects are skinned meshes, raycasting doesn't work: the CPU thinks the objects are in a different place than where they are being rendered on screen.

@Mugen87 suggested using bounding boxes for collision detection, but I don’t know if that will give me the fidelity I need for selecting the right object.

Corresponding post: Raycaster intersection with morphTarget - #4 by Mugen87

If you can bake your poses into morph targets (or “shape keys” in Blender) then I think you should have a relatively easy time replacing the base geometry of the mesh on the CPU, using the morph targets. See:

Applying skinning on the CPU is also possible, but much more complex.


We are currently baking the animations in Blender. Are you saying that this should be sufficient, or do I need to handle the animations differently?

Let me ask that a different way: does this mean that, instead of using AnimationClips/AnimationActions, I need to execute the animations by applying the morph targets to the base geometry manually?

It sounds like you’re going to have to apply either a skinned animation or a morph target to the base geometry. Since both are normally computed on the GPU, it will require some manual work to do so on the CPU and get the resulting data back out. Doing this with morph targets will be significantly easier than with skinned animation, so, if you can bake your animations into shape keys in Blender I think that would be best.
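To make that concrete, here is a minimal sketch of baking a morph target into the base geometry on the CPU. Plain arrays stand in for `geometry.attributes.position.array` and `geometry.morphAttributes.position[ i ].array`, and it assumes relative morph targets (`geometry.morphTargetsRelative === true`); the function name is illustrative, not a three.js API.

```javascript
// Bake one (relative) morph target into a copy of the base positions:
// newPosition = basePosition + influence * morphDelta, per component.
function bakeMorphTarget( basePositions, morphDeltas, influence ) {
  const baked = new Float32Array( basePositions.length );
  for ( let i = 0; i < basePositions.length; i ++ ) {
    baked[ i ] = basePositions[ i ] + influence * morphDeltas[ i ];
  }
  return baked;
}

// Example: a single triangle whose morph target raises every vertex by 1 on y.
const base  = new Float32Array( [ 0, 0, 0,  1, 0, 0,  0, 0, 1 ] );
const delta = new Float32Array( [ 0, 1, 0,  0, 1, 0,  0, 1, 0 ] );
const baked = bakeMorphTarget( base, delta, 1.0 );
```

In three.js terms you would then write the baked array back into the position attribute (and set `needsUpdate`) before exporting; for absolute morph targets you would lerp toward the target values instead of adding deltas.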

The alternative, and what you’re probably doing now, is baking the animation into (more) keyframes. This can solve certain issues but isn’t relevant here.


I have a couple questions before I go down this path.

Will this mess with current skinning? i.e. if I use shape keys to pose the model, will that also move the armature and all other bound objects somehow? I’m thinking about how to get gloves to fit a hand correctly.

Also, it seems like this might make the file size huge, since as far as I can tell file size scales linearly with the number of morph targets. Is there a way to keep file size down when doing this, other than reducing the poly count?

Each shape key will be the same size as the base geometry, yeah. If your model’s size is primarily geometry, that could be an issue.

On a closer look, it might not be as much work as I thought to implement skinning on the CPU. See this thread and this PR for some example code that can compute the (new) position of any vertex from the skeleton pose. If that still works, it may just be a matter of looping over all the vertices overwriting them with new values.
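For reference, the per-vertex math involved is just a weighted blend of bone transforms: skinnedPosition = Σ weightⱼ × boneMatrixⱼ × position. Here is a standalone sketch with plain arrays, where each matrix is column-major (the `Matrix4.elements` layout) and is assumed to already include the bind-matrix terms; newer three.js releases also ship a helper along these lines (`boneTransform`, later renamed `applyBoneTransform`, on SkinnedMesh), if your version has it.

```javascript
// Transform a point by a column-major 4x4 matrix (Matrix4.elements layout).
function applyMat4( m, x, y, z ) {
  return [
    m[ 0 ] * x + m[ 4 ] * y + m[ 8 ]  * z + m[ 12 ],
    m[ 1 ] * x + m[ 5 ] * y + m[ 9 ]  * z + m[ 13 ],
    m[ 2 ] * x + m[ 6 ] * y + m[ 10 ] * z + m[ 14 ]
  ];
}

// Skin one vertex: blend its position through up to four bones,
// mirroring the skinIndex/skinWeight attributes of a SkinnedMesh.
function skinVertex( pos, skinIndices, skinWeights, boneMatrices ) {
  const out = [ 0, 0, 0 ];
  for ( let j = 0; j < 4; j ++ ) {
    const w = skinWeights[ j ];
    if ( w === 0 ) continue;
    const p = applyMat4( boneMatrices[ skinIndices[ j ] ], pos[ 0 ], pos[ 1 ], pos[ 2 ] );
    out[ 0 ] += w * p[ 0 ];
    out[ 1 ] += w * p[ 1 ];
    out[ 2 ] += w * p[ 2 ];
  }
  return out;
}

// Example: half weight on an identity bone, half on a bone translated 2 units on x.
const identity  = [ 1, 0, 0, 0,  0, 1, 0, 0,  0, 0, 1, 0,  0, 0, 0, 1 ];
const translate = [ 1, 0, 0, 0,  0, 1, 0, 0,  0, 0, 1, 0,  2, 0, 0, 1 ];
const skinned = skinVertex( [ 1, 0, 0 ], [ 0, 1, 0, 0 ], [ 0.5, 0.5, 0, 0 ], [ identity, translate ] );
```

Looping that over every vertex of the position attribute (and then recomputing normals) gives you geometry you can hand straight to an exporter.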


I’ve implemented the bounding box solution described here, but the problem I’m running into has to do with async code. Because the pose changes are discrete, I really just want to update the AABB once per pose change. If I call the function to update the AABB in the pose function, the async function doesn't complete prior to the call to the update function, so the updated AABB is always one pose change behind.

Any ideas how to call the AABB function after the pose change is complete?

Also, how do you raycast against a box3?
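(On the Box3 question: a Raycaster exposes its ray, and three.js's Ray has `intersectBox( box, target )` / `intersectsBox( box )` for exactly this. For reference, here is a standalone sketch of the slab test that kind of check is based on, with origin, direction, min, and max as plain `[ x, y, z ]` arrays.)

```javascript
// Ray vs. axis-aligned box (the classic slab method): intersect the ray's
// parameter range with each pair of axis-aligned planes in turn.
function rayIntersectsBox( origin, dir, boxMin, boxMax ) {
  let tmin = - Infinity, tmax = Infinity;
  for ( let axis = 0; axis < 3; axis ++ ) {
    const invD = 1 / dir[ axis ];
    let t0 = ( boxMin[ axis ] - origin[ axis ] ) * invD;
    let t1 = ( boxMax[ axis ] - origin[ axis ] ) * invD;
    if ( invD < 0 ) { const tmp = t0; t0 = t1; t1 = tmp; }
    tmin = Math.max( tmin, t0 );
    tmax = Math.min( tmax, t1 );
    if ( tmax < tmin ) return false; // the slabs no longer overlap
  }
  return tmax >= 0; // false when the box is entirely behind the origin
}

// A ray fired down +z from ( 0, 0, -5 ) hits the unit box; one aimed up +y misses.
const hit  = rayIntersectsBox( [ 0, 0, -5 ], [ 0, 0, 1 ], [ -1, -1, -1 ], [ 1, 1, 1 ] );
const miss = rayIntersectsBox( [ 0, 0, -5 ], [ 0, 1, 0 ], [ -1, -1, -1 ], [ 1, 1, 1 ] );
```

In practice you would just call `raycaster.ray.intersectsBox( aabb )` on each candidate box and, on a hit, fall back to a finer test if the fidelity isn't enough.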

If I call the function to update the AABB in the pose function, the async function doesn't complete prior to the call to the update function, so the updated AABB is always one pose change behind.

Calling mixer.update( delta ) should apply your animation synchronously.

I’m not finding this to be the case. If I set updateBoundingBox to true in the pose function and then try to update it once following the call to mixer.update, I get the same behavior as before. If I update the AABB every frame, I get the desired behavior; I’d just prefer not to do that.

    animate = () => {
        requestAnimationFrame( this.animate );
        const delta = this.clock.getDelta();
        if ( this.mixer != null ) { // covers both null and undefined
            this.mixer.update( delta );
            // this.updateAllAABB();
            if ( this.updateBoundingBox ) {
                this.updateBoundingBox = false;
                this.updateAllAABB();
            }
        }
    };

I think mixer.update starts all animations at the same time, but I don’t think the animation has completed by the time I call the update function, so it is still behind.

I have temporarily fixed this by building in a 50 ms delay before it tries to update the bounding box. I’m not sure whether this will work well on slower machines.
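A more deterministic alternative to a fixed delay is to drive both steps from the same call, since AnimationMixer.update applies the pose synchronously. Here is a sketch with stub objects standing in for the mixer and the AABB rebuild (the names mirror this thread, not a real API):

```javascript
// Stand-ins for the thread's objects: `mixer.update()` is synchronous in
// three.js, so applying the pose and rebuilding the AABB back to back in
// the same call avoids the one-pose-behind race (no 50 ms delay needed).
const mixer = {
  time: 0,
  update( delta ) { this.time += delta; return this; } // synchronous, like AnimationMixer.update
};

let aabbBuiltAtTime = -1;
function updateAllAABB() { aabbBuiltAtTime = mixer.time; } // recompute boxes from the current pose

function setPose( poseTime ) {
  mixer.update( poseTime - mixer.time ); // 1. apply the pose now
  updateAllAABB();                       // 2. read it back in the same tick
}

setPose( 2 ); // the AABB now reflects the pose at t = 2, not the previous one
```

In real code you may also need to call `updateMatrixWorld( true )` on the model root between the two steps, so the bone world matrices the boxes depend on are current before the rebuild.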