Limit of 8 BlendShapes in the FBXLoader

Hello everybody

Is there a way to run the FBXLoader with 12 instead of just 8 BlendShapes?

This is a limitation of three.js, not the loader. See this post for more info:

@looeee, thanks for the information. It works with JSON, so I thought it was up to the loader itself.

What do you mean by that? Just to be clear, loaders in general can parse an arbitrary number of morph targets. The limitation is that a maximum of eight morph targets can influence the final position of a vertex at the same time.
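For illustration, influences are per-target weights on the mesh (a minimal sketch; the target name `smile` is hypothetical):

```js
// Each morph target has an entry in morphTargetInfluences, but only targets
// with a non-zero influence count towards the per-vertex limit of eight.
const index = mesh.morphTargetDictionary['smile']; // look up a target by name
mesh.morphTargetInfluences[index] = 0.75;          // blend this shape in at 75%
```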

Yes, I realize that. In my JSON variant, I use up to 15 Shapes and it works. Maybe I’m lucky and only 8 will be used at the same time.

Depends on how you intend to use them, like temporary changes or realtime animation.

For character customization I'm using 24 different morph targets while the model itself only uses one; I mix them all on the CPU into that single GPU target.

Care to share your approach for using them for temporary changes that get pushed down into only one morph target? That is exactly what I am trying to accomplish but have run up against the limit of 8. Thanks!

Hi, it’s a bit more complicated because I’m using something that does not actually exist elsewhere. I have developed my own expression data recorder, which uses a form of control data that I designed specifically for it, so I’m dealing with a much smaller amount of data. The whole thing is based on BVH, but differs from it fundamentally, and it still works. In addition, I also drive the shapes with different combinations; there are quite a few, depending on the character.

With JSON we are now at the limit because of the textures, so I’m hoping for more from FBX, but then the work on the control starts again from the beginning. With JSON you can at least see all the shapes; with FBX only 8 are displayed. @Mugen87 and I have debated a lot about a recorder; it now runs with its own control data concept.

At the moment I still have problems controlling the FBX character. That’s why I’m using dat.GUI first, so I’m sure it runs before I connect it to the other controller.

@Fyrestar this is something I’ve wondered about previously - currently in the FBXLoader, when more than 8 morph targets are found we just log a warning and ignore the rest of the targets. Would it be better to parse these targets as well? Does adding more than 8 morph targets to a buffer geometry cause problems?

I don’t know about the FBXLoader, but vertex attributes are limited (around 16, I guess, depending on the card).

To avoid this limit on the GPU, the states can be stored in a float texture atlas instead of vertex attributes. I just took a look at my implementation and I actually use a texture instead of the attribute approach of three.js, since with a composed morph target buffer attribute you would require a geometry per mesh.

For the composed approach this requires one composition texture per mesh (per character for example).

Edit: how many attributes are supported can be seen in renderer.capabilities.maxAttributes; on my TitanX it’s 16, low-end cards might have fewer, I suppose 8.
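A rough sketch of the atlas idea (simplified, not my exact implementation; `renderer` is your WebGLRenderer and the numbers are just examples):

```js
// Query how many vertex attributes the current GPU supports.
const maxAttributes = renderer.capabilities.maxAttributes;
console.log('max vertex attributes:', maxAttributes); // 16 on most desktop cards

// Pack the relative displacements of every morph state into one float texture.
// With 4 channels and a 64x64 block per state, each state covers up to 4096 vertices.
const stateCount = 24; // e.g. the 24 customization shapes
const size = 64;
const data = new Float32Array(size * size * 4 * stateCount);
// ... fill `data` with the displacements of each state ...

const morphAtlas = new THREE.DataTexture(
  data, size, size * stateCount, THREE.RGBAFormat, THREE.FloatType
);
morphAtlas.needsUpdate = true;

// A custom vertex shader then samples this texture per vertex and adds
// the weighted displacements to the base position.
```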

There are working examples with 30-50 morph targets using JSON and glTF in the three.js examples; everything on https://mrdoob.github.io/rome-gltf/ works that way. The key thing is that only ~2 morph targets are active at a given time; the animation clips cycle through them gradually. So I don’t think there should be any reason for FBXLoader to drop morph targets, as the renderer will be smart about playing them back.

Animation clips?

Morph targets are composed of multiple states, as in https://threejs.org/examples/webgl_morphtargets.html. Composing them in realtime requires all states to be on the GPU.

Yep! You can design an AnimationClip such that it cycles through the morph target states with only a few active at once, and there is some logic in WebGLMorphTargets to ensure that only active morph targets are taking up one of the 16 slots. I’m not sure how that is accomplished in WebGL APIs to be honest, but would be curious to learn… In any case, it works:

Horse.glb (177.5 KB)
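For reference, this is roughly how the attached file can be played back (a minimal sketch; `scene` and `clock` are assumed to exist as in the standard examples):

```js
const loader = new THREE.GLTFLoader();
loader.load('Horse.glb', (gltf) => {
  scene.add(gltf.scene);

  // The clip stored in the file animates morphTargetInfluences over time,
  // so only a couple of targets are non-zero on any given frame.
  const mixer = new THREE.AnimationMixer(gltf.scene);
  mixer.clipAction(gltf.animations[0]).play();

  // in the render loop: mixer.update(clock.getDelta());
});
```

If you want to build such a clip yourself, THREE.AnimationClip.CreateFromMorphTargetSequence can generate one from a sequence of morph targets.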

Switching the attributes doesn’t really solve the limitation in the sense of using them all at once. For animations, of course, this seems like a good approach, even though it’s still limited for mixing between them.

But skeletal animations are better suited for this kind of animation; morph targets are rather used for per-vertex animations such as facial expressions.

In my case the character customization uses a single human geometry mixing gender, age, size, weight and various body proportions. Those don’t need to be animated (even though composing 24 targets on the CPU for a 3-4k vertex geometry still works smoothly), so using one composition texture the size of one set of positions isn’t too much. The composition uses relative displacements, so if the precision allows it, a cheaper RGB8 texture can be used instead of RGB16 (64x64 for up to 4096 vertices).
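Roughly, the CPU composition looks like this (a simplified sketch, not the exact implementation; `targets` is assumed to hold each shape's relative displacements and its current weight):

```js
// Mix many morph targets on the CPU into the mesh's single position attribute.
function composeMorphTargets(mesh, basePositions, targets) {
  const position = mesh.geometry.attributes.position;
  const out = position.array;

  // start from the undeformed base positions
  out.set(basePositions);

  for (const target of targets) {
    if (target.weight === 0) continue;
    for (let i = 0; i < out.length; i++) {
      // displacements are stored relative to the base mesh
      out[i] += target.displacements[i] * target.weight;
    }
  }

  position.needsUpdate = true; // re-upload the composed positions to the GPU
}
```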

True, but I think it’s relevant to the point that FBXLoader shouldn’t remove morph targets beyond the first eight. The loader doesn’t know whether the targets will be applied simultaneously.

Yes, if they are sequential animation frames there isn’t really an issue, and as your example shows it already tries to use as few as possible at once. I don’t know about the FBX format internally right now, but if those morph targets somehow hint at being meant for vertex keyframe animations, then it could skip the removal.

It would also make the models much more comfortable to use: if you need certain morph targets that are not among the first eight, you would not need to rebuild your model, which is the case now.

@looeee wouldn’t it be better to optionally implement morph targets on a texture atlas like I described? Bones basically do the same, storing their matrices in an atlas; that is optional too, but used by default.

If more attributes, such as normals, are used per target, it even cuts the possible slots in half, and the regular attributes already take up 4.

OK, I’ll change the behaviour to return all the morph targets. Do you think it should still display a warning?

Perhaps; I’m not sure if there are any caveats to doing it that way. Maybe you could open an issue on GH and suggest this, or even better, make a PR so we can test both approaches.