Hello, I am planning to build a 3D talking head with animated lips (i.e. lip sync) driven by user input. My idea is to find a model head on a website such as https://sketchfab.com/ and then customize it in Blender. Once that's complete, I would export it in glTF format and load it into three.js. From there I could morph the vertices on the lips using morphTargetInfluences. Is this a reasonable approach? Is there a better way?
Using morph targets is definitely a valid approach for facial animations.
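For example, here is a minimal sketch of driving a shape key from three.js once the glTF is loaded. The file name `head.glb` and the shape-key name `mouthOpen` are placeholders for whatever you set up in Blender:

```js
import * as THREE from 'three';
import { GLTFLoader } from 'three/addons/loaders/GLTFLoader.js';

const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(45, window.innerWidth / window.innerHeight, 0.1, 100);
camera.position.set(0, 0, 2);
scene.add(new THREE.AmbientLight(0xffffff, 1.5));

let head; // the mesh that carries the shape keys exported from Blender

new GLTFLoader().load('head.glb', (gltf) => {
  gltf.scene.traverse((obj) => {
    if (obj.isMesh && obj.morphTargetDictionary) head = obj;
  });
  scene.add(gltf.scene);
});

renderer.setAnimationLoop((time) => {
  if (head) {
    // Look up the influence index by shape-key name and animate it
    // between 0 (rest pose) and 1 (fully applied).
    const i = head.morphTargetDictionary['mouthOpen'];
    head.morphTargetInfluences[i] = 0.5 + 0.5 * Math.sin(time * 0.005);
  }
  renderer.render(scene, camera);
});
```

The same pattern scales to a full set of visemes: export one shape key per mouth shape and blend their influences every frame.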
I'm still waiting for https://github.com/mrdoob/three.js/pull/17649 to be merged; it looks like a more optimized solution for morph targets, built on https://github.com/zeux/meshoptimizer.
Hi @shane,
I am struggling to implement lip sync using the Rhubarb Lip Sync library, but I am unable to drive the animation from the exported JSON. Can you please suggest something?
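For reference, Rhubarb's JSON export is a flat list of `mouthCues`, each with a `start` time, an `end` time, and a mouth-shape letter (A–F, plus the extended shapes G and H, and X for silence). Here is a rough sketch of mapping those cues onto morph targets each frame; the shape-key names in `VISEME_TO_MORPH` are placeholders for whatever your model actually has:

```js
// Map Rhubarb's mouth shapes onto your model's shape-key names.
// These target names are placeholders; match them to your Blender export.
const VISEME_TO_MORPH = {
  A: 'mouth_closed', B: 'mouth_slightly_open', C: 'mouth_open',
  D: 'mouth_wide',   E: 'mouth_round',         F: 'mouth_puckered',
  G: 'mouth_f_v',    H: 'mouth_l',             X: 'mouth_rest',
};

let cues = [];
fetch('speech.json') // the JSON that Rhubarb exported
  .then((r) => r.json())
  .then((data) => { cues = data.mouthCues; });

// Call once per frame with the audio's current playback time in seconds,
// e.g. applyLipSync(head, audioElement.currentTime).
function applyLipSync(head, t) {
  const cue = cues.find((c) => t >= c.start && t < c.end);
  if (!cue) return;
  // Zero all viseme influences, then switch on the active one.
  for (const name of Object.values(VISEME_TO_MORPH)) {
    const i = head.morphTargetDictionary[name];
    if (i !== undefined) head.morphTargetInfluences[i] = 0;
  }
  const active = head.morphTargetDictionary[VISEME_TO_MORPH[cue.value]];
  if (active !== undefined) head.morphTargetInfluences[active] = 1;
}
```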
B"H
Yeah, just use getUserMedia to get the microphone, or preload an audio file, then look up how to analyze audio data in real time, grabbing the least amount of data you can. Basically, read the audio data in the main update loop and check the volume level to determine whether someone is talking. If they are, play some kind of sprite sheet of mouths moving, unless you want more complicated mouth animations; basic movement should be good enough.
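A minimal sketch of the microphone version, assuming a `head` mesh with a `mouthOpen` shape key (both names are placeholders for whatever your model has):

```js
// Feed microphone input into an AnalyserNode and read a rough volume
// level each frame.
const audioCtx = new AudioContext(); // note: browsers require a user gesture before audio starts
const analyser = audioCtx.createAnalyser();
analyser.fftSize = 256; // small buffer: we only need a coarse level

navigator.mediaDevices.getUserMedia({ audio: true }).then((stream) => {
  audioCtx.createMediaStreamSource(stream).connect(analyser);
});

const samples = new Uint8Array(analyser.fftSize);

// Call this from the main update loop.
function updateMouth(head) {
  analyser.getByteTimeDomainData(samples);
  let sum = 0;
  for (const s of samples) {
    const v = (s - 128) / 128; // 128 is silence for byte time-domain data
    sum += v * v;
  }
  const volume = Math.sqrt(sum / samples.length); // RMS, roughly 0..1

  const i = head.morphTargetDictionary['mouthOpen'];
  // Gate on a noise floor so the mouth only moves while talking.
  head.morphTargetInfluences[i] = volume > 0.02 ? Math.min(volume * 8, 1) : 0;
}
```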
Does anyone know: what is the most widely used standard for 3D model lip-sync data these days?
What kind of lips does it have: shape keys, or a 2D image? If real-time performance matters more than precision, just get the audio levels of whatever audio is playing and use that real-time data to move the lips randomly, as in the sketch below.
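A quick sketch of that idea for a playing audio file rather than the microphone; `speech.mp3` and the `mouthOpen` shape key are placeholder names:

```js
// Drive random lip movement from the level of a playing audio file.
const audio = new Audio('speech.mp3');
const ctx = new AudioContext();
const analyser = ctx.createAnalyser();
ctx.createMediaElementSource(audio).connect(analyser);
analyser.connect(ctx.destination); // keep the audio audible

const bins = new Uint8Array(analyser.frequencyBinCount);

// Call once per frame after audio.play().
function updateLips(head) {
  analyser.getByteFrequencyData(bins);
  const level = bins.reduce((a, b) => a + b, 0) / bins.length / 255; // 0..1
  const i = head.morphTargetDictionary['mouthOpen'];
  // Random jitter scaled by the current volume.
  head.morphTargetInfluences[i] = level * (0.5 + 0.5 * Math.random());
}
```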
Thinking about 3D, of course (for three.js 3D models; 2D is probably not as useful).