If you don’t have motion capture gloves, my advice is to look at Google’s Mediapipe.
Use it to record hand positions, then apply that data to any IK-rigged avatar.
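For reference, here is a minimal capture sketch with the MediaPipe Tasks API for JavaScript. The model URL follows Google's hosted samples, and the `recording` buffer is just a placeholder for however you want to store the data:

```js
import { FilesetResolver, HandLandmarker } from "@mediapipe/tasks-vision";

// Load the WASM runtime and the hand landmark model.
const vision = await FilesetResolver.forVisionTasks(
  "https://cdn.jsdelivr.net/npm/@mediapipe/tasks-vision@latest/wasm"
);
const landmarker = await HandLandmarker.createFromOptions(vision, {
  baseOptions: {
    modelAssetPath:
      "https://storage.googleapis.com/mediapipe-models/hand_landmarker/hand_landmarker/float16/1/hand_landmarker.task",
  },
  runningMode: "VIDEO",
  numHands: 2,
});

// Per video frame: detect and store the 21 landmarks of each hand.
const recording = [];
function captureFrame(video) {
  const result = landmarker.detectForVideo(video, performance.now());
  // result.landmarks holds one array of 21 {x, y, z} points per detected
  // hand, normalized to the frame; feed these to your IK solver as targets.
  recording.push({ t: performance.now(), hands: result.landmarks });
}
```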
Thanks, we do indeed use Mediapipe to capture gestures. For the animation part, we are still evaluating a technique to create the hand gestures on the fly.
Maybe I am missing something, but if you can create a gesture “vocabulary” with Mediapipe,
then you can apply it to an avatar after speech recognition, yes?
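Something like this rough sketch, I mean (the words, clip names, and helpers are all hypothetical; `mixer` and `clips` would come from your loaded avatar):

```js
import * as THREE from "three";

// Hypothetical gesture vocabulary: recognized word -> animation clip name.
const gestureVocabulary = {
  hello: "Gesture_Wave",
  yes: "Gesture_Nod",
  thanks: "Gesture_Bow",
};

// After speech recognition, look up and play the matching clip on the avatar.
function playGestureFor(word, mixer, clips) {
  const clipName = gestureVocabulary[word.toLowerCase()];
  if (!clipName) return; // no gesture mapped for this word
  const clip = THREE.AnimationClip.findByName(clips, clipName);
  if (clip) mixer.clipAction(clip).reset().play();
}
```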
Yes, that’s one of the ways we thought of, and it’s in the pipeline, along with applying it to avatars.
At the moment we are looking for a three.js developer who can do this animation for us.
Just dropping in the idea that gestures can be done without sophisticated tools. Here is my attempt to model, from scratch, fingerspelling in Japanese Sign Language. You may click to see it live (but you need hiragana characters in your fonts to see the moras):
I have a suggestion for your situation. You can author the animations on the model in an Autodesk tool or Blender, export to glTF, and then play them back in Three.js. This lets you control aspects such as timing, repetition, and more.
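Along those lines, a minimal playback sketch: the file name `avatar.glb` and clip name `Wave` are placeholders, the clip is assumed to be baked into the glTF export, and `scene`, `camera`, and `renderer` are assumed to exist already:

```js
import * as THREE from "three";
import { GLTFLoader } from "three/addons/loaders/GLTFLoader.js";

const clock = new THREE.Clock();
let mixer; // created once the model has loaded

new GLTFLoader().load("avatar.glb", (gltf) => {
  scene.add(gltf.scene);
  mixer = new THREE.AnimationMixer(gltf.scene);

  // Pick an exported clip by name and control timing/repetition.
  const clip = THREE.AnimationClip.findByName(gltf.animations, "Wave");
  const action = mixer.clipAction(clip);
  action.setLoop(THREE.LoopRepeat, 3); // play three times, then stop
  action.timeScale = 1.25;             // speed it up slightly
  action.play();
});

function animate() {
  requestAnimationFrame(animate);
  if (mixer) mixer.update(clock.getDelta());
  renderer.render(scene, camera);
}
animate();
```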