Body:
The model is a single merged mesh with no submeshes, no bones, and no named parts. Raycasting returns hits only on the mesh surface, but I need to classify the clicked point into regions such as head, torso, legs, back, eyes, etc.
Constraints:
• No bones to map against
• No separate geometries
• Cannot rely on primitive hitboxes because they introduce false positives around curved areas
• Must detect only the exact triangle hit from Raycaster
One option: in Blender or 3ds Max, assign a different material to each body part (head, torso, legs, …), export, and then read `face.materialIndex` from the raycast hit. To keep draw calls down you can afterwards collapse everything back to a single material but preserve `geometry.groups`; each group records a range of the index buffer, so after an intersection you take `faceIndex` and check which group contains that face — that group is your part.
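A minimal sketch of that lookup, independent of three.js itself. It assumes the exporter produced `geometry.groups` entries of the usual `{ start, count, materialIndex }` shape (with `start`/`count` in index units, as three.js uses for indexed geometry) and that you keep your own table mapping material index to a part name — the `PART_NAMES` table and the group values below are made up for illustration:

```javascript
// Hypothetical part table: materialIndex -> part name (an assumption,
// matching however you assigned materials in the DCC tool).
const PART_NAMES = ['head', 'torso', 'legs'];

// Map a raycast hit's faceIndex to a part via geometry.groups.
// Each face spans three entries of the index buffer, so face f
// covers index positions [3f, 3f + 3).
function partForFace(groups, faceIndex) {
  const i = faceIndex * 3;
  const g = groups.find(g => i >= g.start && i < g.start + g.count);
  return g ? PART_NAMES[g.materialIndex] : null;
}

// Illustrative groups (start/count values are invented):
const groups = [
  { start: 0,    count: 300, materialIndex: 0 }, // head
  { start: 300,  count: 900, materialIndex: 1 }, // torso
  { start: 1200, count: 600, materialIndex: 2 }  // legs
];

console.log(partForFace(groups, 150)); // prints "torso"
```

In a real scene you would feed it the actual hit, something like `const hit = raycaster.intersectObject(mesh)[0];` followed by `partForFace(mesh.geometry.groups, hit.faceIndex)`. Because the lookup uses the exact triangle index from `Raycaster`, there are no false positives from approximate hitboxes.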
Alternatively, MediaPipe, BlazePose, SAM, and TensorFlow.js + BodyPix can all segment humans in the browser. You could run one of those models on the rendered frame to classify which screen pixels belong to which body part, then map the click position into that segmentation.
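The lookup step for that approach can be sketched without the model itself. This assumes a per-pixel part map of the shape BodyPix's part segmentation produces (`{ width, height, data }`, where `data` holds one part id per pixel and `-1` marks background), and that the map has the same dimensions as the canvas you receive clicks from — both assumptions, and the tiny 2×2 map below is fake test data:

```javascript
// Classify a click against a per-pixel part-segmentation map.
// seg: { width, height, data } with data[y * width + x] = part id, -1 = background.
function partIdAtClick(seg, clickX, clickY) {
  const x = Math.floor(clickX);
  const y = Math.floor(clickY);
  if (x < 0 || y < 0 || x >= seg.width || y >= seg.height) return null;
  const id = seg.data[y * seg.width + x];
  return id === -1 ? null : id; // null when the click misses the person
}

// Fake 2x2 map: part ids 0 and 1, one background pixel.
const seg = { width: 2, height: 2, data: [0, 1, -1, 1] };
console.log(partIdAtClick(seg, 1, 1)); // prints 1
```

Note this is a screen-space classification, so it drifts from the mesh surface when the model occludes itself; the `geometry.groups` route stays exact per triangle.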