Advice needed on a project

I’ve been working on this project for some time. I can outline it in three steps:

  1. Use a deep learning model to extract motion from a video (or a web camera, in progress), retarget it to a humanoid model, and play the motion on the humanoid model in three.js. Save the motion data when it's done.
  2. Edit the extracted animation in a web editor, e.g. the rotation angles of a joint. Similar to the feature in Blender or Cascadeur, but much, much simpler.
  3. Once the animation is ready (typically a workout routine or a dance), the user can play it on a humanoid model. At the same time, if they turn on their web camera, their pose is retargeted to a second humanoid model in the browser in real time, and the program checks whether they are following the animation closely.
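The follow-along check in step 3 could work something like this sketch. It is only an illustration of one possible approach (the names and data shapes are my own assumptions, not the project's actual code): compare each pose as a set of bone direction vectors, which sidesteps differences in body size between the user and the reference performer.

```python
import math

# Hypothetical sketch: score how closely a live pose follows one frame of
# the reference animation. Each pose is a dict mapping bone names to 3D
# direction vectors (e.g. shoulder -> elbow); comparing directions rather
# than raw positions makes the score independent of the user's body size.

def normalize(v):
    """Return the unit vector of v (a 3-tuple)."""
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def pose_similarity(reference, live):
    """Average cosine similarity over the bones both poses share.

    1.0 means the bone directions match exactly; values near zero or
    below mean the user is far off the reference motion.
    """
    shared = reference.keys() & live.keys()
    if not shared:
        return 0.0
    total = 0.0
    for bone in shared:
        a, b = normalize(reference[bone]), normalize(live[bone])
        total += sum(x * y for x, y in zip(a, b))
    return total / len(shared)

# Example: the left forearm matches the reference, the right forearm
# is 90 degrees off, so the average score is (1.0 + 0.0) / 2 = 0.5.
ref = {"l_forearm": (0, -1, 0), "r_forearm": (0, -1, 0)}
live = {"l_forearm": (0, -1, 0), "r_forearm": (1, 0, 0)}
score = pose_similarity(ref, live)
```

A real implementation would also need to tolerate timing offsets (the user lagging a beat behind), e.g. by scoring against a small window of reference frames and taking the best match.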

You can take a look at it here

Please share your comments on it. What areas do you see for improvement? Do you think this project could be useful? Did you see a project similar to this one?

I also welcome any kind of collaboration.


The tech looks awesome - the design and rendering could use some improvement, but that's about it :eyes::100:

For the current direction you're taking - not sure. When working out, looking at the screen is most of the time impossible / inconvenient / dangerous.

But it could be awesome for stuff like in-house motion capture and export to a three.js-compatible format - finally letting us drop Mixamo and just do some animations with a webcam.

Also for VTubers, something like Kalido.

Nice one!

Could you add WebXR camera features? Also, what ML stack or engine do you use to capture the data?

Is it possible to open source your project?


Thanks! I will start focusing on extracting better, more accurate animations.

“WebXR camera” - I don’t know, I'm not really familiar with it. The deep learning model is MediaPipe Pose landmarks. They have solutions for both web and Python.
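For anyone curious how landmarks turn into rig data, here is a hedged sketch of one retargeting step: converting three MediaPipe-style 3D landmarks (shoulder, elbow, wrist) into an elbow angle a humanoid rig could consume. The coordinates below are made up for illustration, and this is just the standard three-point angle formula, not the project's actual pipeline.

```python
import math

# Hypothetical sketch: derive a joint angle from three 3D landmark
# positions, the kind of per-joint value you would feed to a humanoid
# rig. MediaPipe Pose outputs 33 such landmarks per frame.

def joint_angle(a, b, c):
    """Angle at point b (in degrees) between segments b->a and b->c."""
    ba = tuple(x - y for x, y in zip(a, b))
    bc = tuple(x - y for x, y in zip(c, b))
    dot = sum(x * y for x, y in zip(ba, bc))
    norm = math.dist(a, b) * math.dist(c, b)
    # Clamp to guard against floating-point drift outside [-1, 1].
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

# A fully extended arm: shoulder, elbow, wrist in a straight line -> 180.
straight = joint_angle((0, 0, 0), (0.3, 0, 0), (0.6, 0, 0))
# A right-angle bend at the elbow -> 90.
bent = joint_angle((0, 0, 0), (0.3, 0, 0), (0.3, -0.3, 0))
```

A full retarget would compute rotations (quaternions) per joint rather than scalar angles, and smooth them across frames before writing keyframes, but the landmark-to-joint mapping idea is the same.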

I can’t open source it at the moment - maybe in a few months. I will post my repo here when I can.
