Hi. I can perform human 3D pose estimation from a video. I have human “animation” data stored as 3D per-joint coordinates for each frame of the input video sequence. In other words, I can obtain the spatial location of each joint in a human skeleton at any moment of the video. From the docs I can see that to render an animation with three.js I can use one of the supported animation formats. Any idea how I could convert my 3D joint data to one of those formats, e.g. BVH/FBX/glb?
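One alternative that may avoid a file-format conversion entirely: three.js can build an `AnimationClip` at runtime from `KeyframeTrack`s, one track per joint. A minimal sketch of flattening per-frame joint positions into the `(times, values)` arrays that a `THREE.VectorKeyframeTrack` expects (the joint names and fps below are placeholders, not from your data):

```javascript
// Sketch: convert (numFrames, numJoints, 3) joint data into per-joint
// keyframe tracks (times + flat xyz values), the layout that
// THREE.VectorKeyframeTrack expects. The joint-name order is an assumption.
function buildJointTracks(frames, fps, jointNames) {
  const numFrames = frames.length;
  // one timestamp per frame, in seconds
  const times = Float32Array.from({ length: numFrames }, (_, f) => f / fps);
  return jointNames.map((name, j) => {
    // flatten this joint's xyz across all frames
    const values = new Float32Array(numFrames * 3);
    for (let f = 0; f < numFrames; f++) {
      values[f * 3 + 0] = frames[f][j][0];
      values[f * 3 + 1] = frames[f][j][1];
      values[f * 3 + 2] = frames[f][j][2];
    }
    return { name: `${name}.position`, times, values };
  });
}

// Usage: 2 frames, 2 joints (tiny stand-in for your real data)
const tracks = buildJointTracks(
  [
    [[0, 1, 0], [0, 2, 0]],
    [[0, 1.1, 0], [0, 2.1, 0]],
  ],
  30,
  ['hips', 'neck']
);
// Each entry could then become
//   new THREE.VectorKeyframeTrack(t.name, t.times, t.values)
// and the tracks combined into new THREE.AnimationClip('pose', -1, keyframeTracks).
```

Note that this drives bone *positions* directly; most rigged characters are animated via joint rotations instead, so you would still need a retargeting/IK step to animate a skinned mesh, but it works for point/skeleton-line visualization.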
I guess it’s easier for the community to provide feedback if you share a sample of your 3D joint data (so it is clearer what your data looks like).
So the data comes in an arbitrary format (it’s basically the output of a neural network) and is stored in a JSON file. I can describe it as follows: the NN estimates the 3D coordinates of 30 human joints (hips, neck, spine, elbows, legs, etc.) for each frame of an input video. I don’t think sharing it would be helpful, since it is basically just a 3D array of numbers with shape (number_of_frames, number_of_joints, number_of_coordinates), e.g. (100 frames, 30 joints, 3 xyz coords).
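For what it’s worth, a JSON file with that shape parses directly into nested arrays, so a quick sanity check of the dimensions is easy (the inline literal here is a tiny stand-in; in practice it would come from something like `JSON.parse(fs.readFileSync('joints.json', 'utf8'))`, and that file name is my assumption):

```javascript
// Tiny stand-in for the real (100, 30, 3) data: 2 frames, 3 joints, xyz each.
const data = [
  [[0, 0, 0], [0, 1, 0], [0, 2, 0]],
  [[0, 0.1, 0], [0, 1.1, 0], [0, 2.1, 0]],
];

// Return [numFrames, numJoints, numCoords], verifying the nesting
// is rectangular (every frame has the same joints, every joint xyz).
function shapeOf(frames) {
  const numFrames = frames.length;
  const numJoints = frames[0].length;
  const numCoords = frames[0][0].length;
  const rectangular = frames.every(
    (frame) =>
      frame.length === numJoints &&
      frame.every((joint) => joint.length === numCoords)
  );
  if (!rectangular) throw new Error('ragged joint data');
  return [numFrames, numJoints, numCoords];
}

console.log(shapeOf(data)); // [2, 3, 3]
```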