BVH scene usage

I’m getting started with three-mesh-bvh but I’m not sure what to do with it. I have a large scene with tens of thousands of meshes. I already merge them and that’s what I render, but I discard the merged geometry on the CPU. I keep the original geometries and each mesh instance around for raycasting.

I see that I need to patch the geometry and mesh to work with the structure, but it’s not clear to me how they relate afterwards. I.e., am I better off applying this to my merged geometry, or to the (invisible) individual meshes?

The example that seemed the most relevant merges the world into one mesh using a helper from the package. I imagine I can skip that step if I already have my own version of the merged geometry.

I believe it should not matter: it creates a bounding volume hierarchy over the model (or models) and knows which segments rays fall into, so it no longer has to go through all the vertices. Merging is generally better for GPU performance because of draw calls. Keeping the old unmerged geometry around isn’t bad either, because it lets you do per-mesh/per-face highlighting. We do this for CAD at work: BVH + merged geometry + unmerged meshes for selection/highlighting, as in the sketch below.
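Roughly like this minimal sketch (assuming a recent three.js where BufferGeometryUtils exports mergeGeometries — older releases call it mergeBufferGeometries; sourceMeshes and material stand in for your own objects, and the range map is hand-rolled, not a library feature):

    import * as THREE from 'three';
    import { mergeGeometries } from 'three/addons/utils/BufferGeometryUtils.js';
    import { MeshBVH, acceleratedRaycast } from 'three-mesh-bvh';

    THREE.Mesh.prototype.raycast = acceleratedRaycast;

    // Bake every source mesh into world space, merge for rendering,
    // and remember which triangle range came from which source mesh.
    const ranges = [];
    let start = 0;
    const parts = sourceMeshes.map( ( mesh ) => {
      const g = mesh.geometry.clone().applyMatrix4( mesh.matrixWorld );
      const count = ( g.index ? g.index.count : g.attributes.position.count ) / 3;
      ranges.push( { mesh, start, end: start + count } );
      start += count;
      return g;
    } );

    const merged = mergeGeometries( parts );
    merged.boundsTree = new MeshBVH( merged );
    const renderMesh = new THREE.Mesh( merged, material );

    // Map a hit's faceIndex on the merged mesh back to the original
    // mesh, e.g. for selection/highlighting.
    function sourceOf( faceIndex ) {
      const r = ranges.find( ( r ) => faceIndex >= r.start && faceIndex < r.end );
      return r ? r.mesh : null;
    }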

Hi, I am writing a library to handle scene-level BVH, if you are interested.
It is still under development, but it’s not far from release.

Repo + example (raycasting + frustum culling of 100k different meshes): StackBlitz

Three-Mesh-BVH is a great library. It breaks geometry down into a bounding volume hierarchy (BVH), which is helpful for detecting interactions with the geometry and its triangles without using too many resources.

For instance, take the example given on the GitHub main page: it casts 500 rays against an 80,000-polygon model at 60 fps. Done directly without a BVH, that would either take too much time, make the page lag, or even crash the browser.

It can be used to detect collisions between geometries and for better, faster raycasting, and it can even find the triangles being intersected by other geometry using shapecast.
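For example, a small sketch of a shapecast query (the intersectsBounds / intersectsTriangle callback names come from the library docs; the sphere-vs-triangle test itself is hand-rolled here using three.js core methods):

    import * as THREE from 'three';
    import { MeshBVH } from 'three-mesh-bvh';

    const geometry = new THREE.TorusKnotGeometry( 1, 0.4, 100, 16 );
    geometry.boundsTree = new MeshBVH( geometry );

    // Collect every triangle that a sphere touches.
    const sphere = new THREE.Sphere( new THREE.Vector3( 1, 0, 0 ), 0.5 );
    const hitTriangles = [];
    const closest = new THREE.Vector3();

    geometry.boundsTree.shapecast( {
      // Broad phase: skip whole subtrees whose bounds the sphere misses.
      intersectsBounds: ( box ) => box.intersectsSphere( sphere ),
      // Narrow phase: test the surviving leaf triangles one by one.
      intersectsTriangle: ( tri, triIndex ) => {
        tri.closestPointToPoint( sphere.center, closest );
        if ( closest.distanceTo( sphere.center ) <= sphere.radius ) {
          hitTriangles.push( triIndex );
        }
        return false; // returning false keeps the traversal going
      },
    } );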

There are a lot of benefits to using the lib. Also, I just saw the example given by agargaro (not @-mentioning him for privacy) and I have to say it works really smoothly. Great stuff; I hope to see it as a full project soon.

But that’s what I’m considering. I could have a first tree be all instanced cubes acting as bounding boxes. Once I hit one of those, I can accelerate through the triangles.

I don’t know how much the instanced cubes would help cull… since that’s effectively what the BVH is doing already, you would just be doing the same thing twice… the B in BVH is already those cubes/boxes. :smiley:
I just throw my whole world at the BVH and use its accelerated raycast, and it “just works”.

I might be missing some basic understanding about BVHs, but this example is confusing me:

If it indeed “just worked” as @manthrax and @drcmda are suggesting, I don’t think this code would be needed:


    // from the example source; assumes
    // import { StaticGeometryGenerator, MeshBVH } from 'three-mesh-bvh';

    // collect all geometries to merge
    environment.updateMatrixWorld( true );

    const staticGenerator = new StaticGeometryGenerator( environment );
    staticGenerator.attributes = [ 'position' ];

    // bake the whole environment into one geometry and build a BVH over it
    const mergedGeometry = staticGenerator.generate();
    mergedGeometry.boundsTree = new MeshBVH( mergedGeometry );

My understanding is that it’s more like this?

    // intent: one BVH per mesh, tested mesh by mesh
    scene.traverse( ( o ) => {
      if ( o instanceof Mesh && o.geometry.boundsTree ) {
        // myRay would have to be in the mesh's local space here
        o.geometry.boundsTree.raycast( myRay );
      }
    } );
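Or, in real code, a sketch of the documented patching approach (firstHitOnly is a three-mesh-bvh extension on the Raycaster):

    import * as THREE from 'three';
    import { computeBoundsTree, disposeBoundsTree, acceleratedRaycast } from 'three-mesh-bvh';

    // Patch three.js once; any Mesh whose geometry has a boundsTree
    // is then raycast through its BVH automatically.
    THREE.BufferGeometry.prototype.computeBoundsTree = computeBoundsTree;
    THREE.BufferGeometry.prototype.disposeBoundsTree = disposeBoundsTree;
    THREE.Mesh.prototype.raycast = acceleratedRaycast;

    scene.traverse( ( o ) => {
      if ( o.isMesh ) o.geometry.computeBoundsTree();
    } );

    const raycaster = new THREE.Raycaster();
    raycaster.firstHitOnly = true; // return only the nearest hit
    const hits = raycaster.intersectObjects( scene.children, true );

Which, as far as I can tell, is still linear over the meshes themselves; only each mesh’s triangle test is accelerated. Hence my question.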

Otherwise the merge would be redundant? And

the B in BVH are already those cubes/boxes. :smiley:

this isn’t actually happening, since we’d just be doing boxes.forEach over those first boxes in linear time.

What would a three-scene-bvh package do?

I think the example specifically merges the environment into one mesh to make a collider. It’s not even using it to optimize rendering; the environment is put in unmerged. If it worked magically out of the box, I don’t think any of this would be needed. It would be something like myMesh.bvh.layers.set( Static )

:neutral_face: :neutral_face: :neutral_face:

What is the point of using ChatGPT for this?

I totally didn’t see this reply. So my conclusions are correct?

Here’s all I had to do:


    import * as meshbvh from "./three-mesh-bvh.js";

    geometry.boundsTree = new meshbvh.MeshBVH( geometry );

and then used the accelerated raycast that patches the three.js Mesh raycast:

    THREE.Mesh.prototype.raycast = meshbvh.acceleratedRaycast;

and it Just Works. Raycasts are instant.
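E.g. a typical pick against that mesh (pointer being the mouse position in normalized device coordinates):

    const raycaster = new THREE.Raycaster();
    raycaster.setFromCamera( pointer, camera );
    const hits = raycaster.intersectObject( mesh ); // goes through the BVH via acceleratedRaycast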

Do you have this online somewhere? Are all these boxes unique meshes and draw calls? @agargaro what are you working on then?

The buildings are all one BufferGeometry. I don’t have it shared publicly yet.

The accelerated raycast override will also create the BVHs automagically, I think, when it hits a node without one, but I’m not sure.
I remember seeing some option about “lazy generation” somewhere…

In that sample, I explicitly only cast against the mesh itself.

This supports my assumption then? You accelerated raycasting through a single mesh. That makes sense.

What you could do is: keep rendering the single buffer geometry, but don’t raycast it. Instead, make another scene with all the boxes as meshes and try to raycast that. If it worked like magic, it would be equally fast. But if it doesn’t, you’ll just keep raycasting thousands of boxes in linear time. Once you actually find which box to raycast, the triangles are irrelevant, since there are like 12 of them.
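Something like this, with illustrative names (boxScene would hold thousands of individual 12-triangle box meshes with no BVH; mergedMesh is one mesh over the merged geometry with a boundsTree):

    const raycaster = new THREE.Raycaster();
    raycaster.setFromCamera( pointer, camera );

    console.time( 'linear over box meshes' );
    raycaster.intersectObjects( boxScene.children, true );
    console.timeEnd( 'linear over box meshes' );

    console.time( 'BVH over one merged mesh' );
    raycaster.firstHitOnly = true;
    raycaster.intersectObject( mergedMesh );
    console.timeEnd( 'BVH over one merged mesh' );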

All the examples do in fact raycast many rays against single large objects. None of them raycast against many small meshes.

I don’t see how raycasting against an InstancedMesh of boxes is going to be faster than raycasting against the bounding boxes of a bunch of BVHed smaller meshes.

You’re still colliding the instanced box mesh in software, yeah? (Unless you’re doing that in a shader || WebGPU; then yeah, maybe you could squeeze some speed out of the broad phase.)

I didn’t use ChatGPT on that, I wrote it myself. I don’t know why you think I used GPT, but still.

To be honest GPT is trash :smile:

I’ll be honest, I am now even more confused: what prompted you to write an essay on “What is three-mesh-bvh”?

Say you have, how many, 20k of these boxes in your image? So it’s only really 240k triangles, not much.

Now imagine you’ve got the Stanford bunny inside each one of those boxes: 20k × the bunny’s triangle count. It’s a pretty big soup.

It would make sense to me to first look at bounding boxes, without any triangles, and once suitable ones are found, proceed to intersect the mesh. I’m not sure, but maybe even different heuristics would make sense, e.g. a BVH on the scene and a kd-tree or something else on the triangles.
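A two-phase sketch of what I mean (names illustrative; assumes each mesh already has a boundsTree and the acceleratedRaycast patch applied):

    // Broad phase: world-space bounding boxes only, no triangles.
    const box = new THREE.Box3();
    const candidates = [];

    for ( const mesh of meshes ) {
      if ( mesh.geometry.boundingBox === null ) mesh.geometry.computeBoundingBox();
      box.copy( mesh.geometry.boundingBox ).applyMatrix4( mesh.matrixWorld );
      if ( raycaster.ray.intersectsBox( box ) ) candidates.push( mesh );
    }

    // Narrow phase: BVH-accelerated triangle tests on the few survivors.
    const hits = raycaster.intersectObjects( candidates, false );

Though this broad phase is itself still linear over the boxes; a scene-level BVH over them would make it logarithmic, which I assume is what a scene-BVH package would provide.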

I think we best wait for @agargaro to explain why three-bvh-scene is needed.

Not instanced. The first pass would be exactly what you are doing: cast against a big scene-sized triangle soup. Everything in the scene is in that soup, it’s one mesh, hence three-MESH-bvh.

But in the next pass you say, “ok, building 1.234.567 has the Stanford bunny model with 20k triangles,” and then you do the mesh BVH again.

Raycasting checks against geometry.boundingSphere (if it exists) and/or geometry.boundingBox (if it exists) first… then only drills into the geometry if that passes.
So that’s functionally equivalent to casting against an array of box instances, is all I’m saying.
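Something like this, paraphrased from memory of three.js’ Mesh.raycast (not the verbatim source; _sphere is a scratch variable):

    raycast( raycaster, intersects ) {
      const geometry = this.geometry;
      if ( geometry.boundingSphere === null ) geometry.computeBoundingSphere();
      _sphere.copy( geometry.boundingSphere ).applyMatrix4( this.matrixWorld );
      if ( ! raycaster.ray.intersectsSphere( _sphere ) ) return; // cheap early-out
      // ...only now transform the ray into local space and test triangles
    }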

Right, but you’re saying it’s unnecessary? This library already does that under the hood, so to speak?