I am implementing a WebXR-based application where the user can grab various objects in the scene with both hands/controllers.
The functionality is as follows:
- The user can grab an object with either hand/controller individually, then rotate and translate it.
- The user can grab an object with one hand and then use the other hand in tandem to scale/rotate it.
The proof of concept I have implemented works well for three.js primitives like BoxGeometry or SphereGeometry. The position of these objects corresponds directly to the center of their bounding volume, so when I rotate them with both hands the pivot point is always at the center of the sphere/box.
Now, we create our 3D models in Blender and use them in our app (glTF format). These models don't behave like the primitives: the pivot points seem arbitrary (rotation happens around a seemingly random point rather than the center of the model), and scaling also does not happen from the center of the bounding volume. I tried a few solutions from @WestLangley on StackOverflow but could not fix this issue.
Also, I found this feature request on GitHub for the same. Is it related?
How do we dynamically change the pivot point of an object at runtime? Is this even possible?
Any help would be appreciated.