Visualizing joint detection data with bones and skinned mesh

I have a 3D model of a human consisting of a skeleton and skinned meshes. My AI model detects the joints and extracts their coordinates, which I want to use to animate the model. Initially I considered converting these coordinates into bone rotations and applying those, but that approach proved too challenging. I am now exploring alternatives, such as using the X, Y, and Z coordinates directly as bone positions. However, I ran into difficulties when trying to update the positions of the skinned meshes attached to the joints. I suspect I am missing some Three.js knowledge needed to solve this properly, and I would greatly appreciate any suggestions or advice.
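To illustrate the direct-position idea: a bone's `position` in Three.js is expressed in its parent's local space, so detected world-space joint coordinates have to be converted before assignment (in Three.js itself you would use `bone.parent.worldToLocal(vector)` and then `bone.position.copy(vector)`). Below is a minimal self-contained sketch of that conversion, under the simplifying assumption that the parent transform is a pure translation; the joint names and coordinate values are hypothetical placeholders, not from my actual detector output.

```javascript
// Sketch: convert a detected joint's world-space coordinates into the
// local space of its parent bone. This assumes the parent has no
// rotation or scale, so the conversion is a plain offset subtraction.
// With a real Three.js skeleton you would instead do:
//   bone.position.copy(bone.parent.worldToLocal(new THREE.Vector3(x, y, z)));
function worldToBoneLocal(jointWorld, parentWorld) {
  return {
    x: jointWorld.x - parentWorld.x,
    y: jointWorld.y - parentWorld.y,
    z: jointWorld.z - parentWorld.z,
  };
}

// Hypothetical detected joints (placeholder names and values).
const detected = {
  hips:  { x: 0.0, y: 1.0,  z: 0.0 },
  spine: { x: 0.0, y: 1.25, z: 0.05 },
};

// The spine bone is parented to the hips bone, so its local position
// is the offset of the spine joint from the hips joint.
const spineLocal = worldToBoneLocal(detected.spine, detected.hips);
console.log(spineLocal); // → { x: 0, y: 0.25, z: 0.05 }
```

Once every bone's local position is set this way, the skinned mesh should follow automatically, since skinning deforms the mesh from the bone transforms; the mesh vertices are not moved directly.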