It’s a wonderful library that you have developed! Kudos!
I have been looking at this library for a week and have a couple of questions I have been pondering.
In the case of a WebXR-based VR application, how would I use this library to continuously check for intersections between the gamepad controllers and a few "interactive" grabbable objects in the scene?
Let’s say my application has a few objects the user can interact with, while the rest are static and form part of the environment. The only objects of interest are the interactable ones, which are spread across the scene.
In that case, should I cast multiple rays from each gamepad controller (to get better intersection coverage than a single direction would give), or should I check the controller’s bounding box/sphere against the BVH of every interactable object?
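To make the second option concrete, here is a minimal, framework-free sketch of the bounding-sphere idea I have in mind: clamp the sphere center to each object’s AABB and compare squared distances as a cheap broad phase, before any finer per-triangle/BVH query. All names here (`findGrabbable`, `grabRadius`, the shape of `interactables`) are hypothetical, just to illustrate the approach; in a real three.js app this would presumably use `Sphere`/`Box3` or the library’s own BVH queries instead.

```javascript
// Sphere-vs-AABB test: clamp the sphere center to the box, then
// compare the squared distance to the squared radius.
function sphereIntersectsAABB(center, radius, boxMin, boxMax) {
  let distSq = 0;
  for (const axis of ["x", "y", "z"]) {
    const clamped = Math.min(Math.max(center[axis], boxMin[axis]), boxMax[axis]);
    const d = center[axis] - clamped;
    distSq += d * d;
  }
  return distSq <= radius * radius;
}

// Hypothetical per-frame broad phase: test the controller's grab sphere
// against each interactable's bounds; only the survivors would need a
// finer BVH-level intersection query.
function findGrabbable(controllerPos, grabRadius, interactables) {
  return interactables.filter(obj =>
    sphereIntersectsAABB(controllerPos, grabRadius, obj.min, obj.max)
  );
}
```

My assumption is that with only a handful of interactables, this broad phase alone is nearly free each frame, and the more expensive BVH query only runs for objects whose bounds the controller actually overlaps.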
Any guidance on this would be greatly appreciated, as I have yet to settle on an approach.