Hello! I wanted to share the three-mesh-bvh package I maintain which, among other things, can improve raycasting performance against highly complex geometry by multiple orders of magnitude and enables a variety of spatial queries against a mesh's triangles. It’s designed to be memory conscious, fast, and able to be generated asynchronously in a web worker.
It started off as a project to learn about spatial data structures and, through need, suggestions, and contributions, has grown into a very capable and well-tested package for querying detailed geometry like medical, CAD, or photogrammetry models, or even moderately detailed geometry in more constrained environments like WebVR or mobile phones.
If you check out the examples listed in the README you can see additional use-case demos, including basic ball and character controller physics, lasso triangle selection, geometry intersection detection, triangle painting, and others! There are some other examples I have in the works which I’ll be excited to share here once they’re added.
Bug reports, feature suggestions, PRs, and other contributions welcome! And feel free to share projects if you wind up using it. I always appreciate seeing how my work is used!
I have a case where I have 80,000 meshes with relatively few polygons each. Would this improve raycasting in this case, or is the library only designed for high-poly meshes (rather than many meshes)?
One question: is there a standalone JS or JSM bundle file?
There is a UMD version of the library in the /umd folder which you can use as a global script include; it adds all of the exports to a global MeshBVHLib variable. There’s no bundled JSM version yet, but you should be able to make one easily by modifying this rollup config to output a module rather than UMD. I probably won’t be able to get to it for a bit myself, but if you want to make a PR that adds a JSM build alongside the UMD one I’d be happy to provide some guidance and get it merged!
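For reference, a global-script setup might look something like this (the script paths are assumptions that depend on how you host the files; the `MeshBVHLib` global is the one the UMD build provides):

```html
<script src="./three.min.js"></script>
<script src="./three-mesh-bvh/umd/index.js"></script>
<script>
  // Everything the module exports is available on the MeshBVHLib global
  THREE.BufferGeometry.prototype.computeBoundsTree = MeshBVHLib.computeBoundsTree;
  THREE.BufferGeometry.prototype.disposeBoundsTree = MeshBVHLib.disposeBoundsTree;
  THREE.Mesh.prototype.raycast = MeshBVHLib.acceleratedRaycast;
</script>
```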
I have a case where I have 80,000 meshes with relatively few polygons each. Would this improve raycasting in this case, or is the library only designed for high-poly meshes (rather than many meshes)?
This library is intended to speed up raycasting against individual meshes with complex geometry rather than against many objects. If your objects are static, though, you can merge them into one mesh and build a BVH from that for raycasting.
It’s a wonderful library that you have developed! Kudos!
I have been looking at this library for a week and have a couple of questions that I have been pondering over.
In the case of a WebXR based VR application, how would I use this library if I had to check constantly for the intersection of "gamepad controllers" and a few “interactive” grabbable objects in the scene?
Let’s consider that in my application I have a few objects that I can interact with. Other objects are static and make a part of the environment. The only objects of interest are the ones that the users can interact with which are spread across the scene.
In that case, should I cast multiple rays from the gamepad controllers (to get better intersection results rather than checking only a single direction), or should I check for the intersection of the gamepad’s bounding box/sphere with the BVH of every interactable object?
Any help regarding this will be greatly valued, as I have yet to decide on an approach.
@gkjohnson I had a need for spatial indexing in a little physics engine I’m making for a game. I knew about your lib but I’m using three.js math primitive classes for collision, not meshes, so I assumed I couldn’t use three-mesh-bvh.
I coded my own makeshift (and very suboptimal, I’m sure) binary tree, but now I’m wondering if I should have asked you before if I could use your lib for this. Is it possible to index objects such as OBB and Sphere and speed up collision detection between these objects?
In that case, should I cast multiple rays from the gamepad controllers (to get better intersection results rather than checking only a single direction), or should I check for the intersection of the gamepad’s bounding box/sphere with the BVH of every interactable object?
Either one would work – it really depends on what you want. Raycasting with a visual indicator can be good for interacting with objects at a distance, while for closer-range interaction something like a sphere attached to the controller for detecting intersections can work well, too.
The project doesn’t intend to dictate or suggest interactions as much as it enables interactions that may not have been possible or performant previously. I think the best thing to do is experiment and find what you like best for your project!
I coded my own makeshift (and very suboptimal, I’m sure) binary tree, but now I’m wondering if I should have asked you before if I could use your lib for this. Is it possible to index objects such as OBB and Sphere and speed up collision detection between these objects?
No, the project is built to index the triangles of a mesh with a small memory footprint by rearranging the index buffer data and storing bounds information in an array buffer, which means it’s not well suited to storing other types of dynamic objects (though static point and line geometry may be supported more fully at some point).
I haven’t had a need myself but I would think there’s a space for an efficient, standalone dynamic spatial index of scene objects for uses like physics, other geometric spatial queries, or optimized raycasting in scenes with lots of objects, as @prominent mentioned above. A spatial scene index like that could be used in conjunction with this one, too, with the dynamic index handling broad queries / detection while three-mesh-bvh handles individual triangle intersection.
Thanks to a recent contribution, an example showing how to use the library to improve point cloud raycasting performance by multiple orders of magnitude has been added! Right now it works by creating a separate mesh of degenerate triangles which can be treated as points, which means some extra index memory and a bit of custom raycasting logic are required. In the future a more robust, built-in implementation could be added, but it will require some reworking of the indexing functions.
I just recently released v0.4.0, which made some API improvements and added a couple of new features to enable a bit more flexibility in the BVH. I also added a new geometry sculpting demo inspired by tools like SculptGL and ZBrush. There are still some improvements to be made to performance and features, but it was a fun little demo to make. WebXR sculpting would be a cool addition.
Hello @gkjohnson! Will your library work well on non-manifold meshes, or does it require “clean” geometry to function correctly? I’m thinking about the “shape intersection demo” as a use case, but with dodgy imported/converted meshes.
Will your library work well on non-manifold meshes? Or does it require “clean” geometry to function correctly?
The package doesn’t make any structural assumptions about the mesh. There are some things to keep in mind listed in the gotchas section, but generally a “triangle soup” model will work just fine.
I’m thinking about the “shape intersection demo” as a use case, but with dodgy imported/converted meshes.
That should work – the geometry intersection demo works by detecting intersection of triangles between the two meshes.
I just added a new demo showing how the BVH can be used to speed up extraction of edges along clip planes for things like CAD applications. Using the BVH the clipped edges can be generated in real time on the 2 million polygon demo model.
There have been a couple of new releases recently that add some new features and demos!
InterleavedBufferAttributes are now supported for geometry position attributes, meaning glTF models generated from the gltf-transform and MeshOpt projects will now work out of the box without having to transform them.
And with the latest version the Surface Area Heuristic (SAH) BVH generation strategy has (finally) been made viable and typically produces a more optimal BVH with a smaller memory footprint. From the following demo it seems raycasting is improved by 10-20% overall, and it would probably improve further in some worst-case scenarios on models that require deep hierarchy traversals. You pay for it in generation cost, though, as it can take several times longer to produce compared to the CENTER strategy.
I’ve put together this demo to help show the impact of different MeshBVH options and hopefully help people understand a bit more about how the whole project works!
I came across Peter Shirley’s Ray Tracing in One Weekend series and figured it would be a neat use case for the project and a good opportunity to learn about path tracing, so I’ve just added a single-threaded CPU path tracing demo to the repo. I used a number of other papers, open path tracer implementations, and the PBR Book to help wrap my head around everything, but I think it came out pretty well. The demo includes some more advanced features like a GGX roughness model, direct area light sampling, and multiple importance sampling. Next up I’ll be looking into enabling raycasting / raytracing in a shader. Check out the new demo below!