I load a GLTF, and I merge all of the meshes found in it into one. It’s a very simple GLTF, basically a list of, say, 20k unique geometries.
I add another buffer to hold an index indicating which triangle belongs to which entity from the GLTF. So I end up with this giant bag of triangles.
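For reference, this is roughly what that merge step looks like in my worker. A minimal sketch, assuming the geometries have already been pulled out of the parsed GLTF, and assuming the per-triangle index is stored as a per-vertex `entityIndex` attribute (the attribute name and helper are just placeholders):

```js
import * as THREE from 'three';
import { mergeGeometries } from 'three/addons/utils/BufferGeometryUtils.js';

// `geometries` is the flat list of ~20k BufferGeometry objects from the GLTF.
function mergeWithEntityIndex(geometries) {
  // Tag every vertex of every source geometry with the id of the entity it came
  // from, so after merging I can still tell which triangle belongs to which object.
  geometries.forEach((geometry, entityId) => {
    const vertexCount = geometry.getAttribute('position').count;
    const ids = new Float32Array(vertexCount).fill(entityId);
    geometry.setAttribute('entityIndex', new THREE.BufferAttribute(ids, 1));
  });

  // One giant geometry: a single buffer per attribute, plus one big index.
  return mergeGeometries(geometries, false);
}
```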
I do this in a worker thread. I send these buffers to the main thread via transferables.
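The hand-off itself is just a `postMessage` with a transfer list, something like the following (the message shape is made up for illustration, and `merged` is the geometry produced by the merge step above):

```js
// worker.js -- send the merged buffers to the main thread without copying them.
const position = merged.getAttribute('position').array;
const entityIndex = merged.getAttribute('entityIndex').array;
const index = merged.index.array;

self.postMessage(
  { position, entityIndex, index },
  // Listing the underlying ArrayBuffers here transfers ownership instead of
  // copying; after this call the worker can no longer touch them.
  [position.buffer, entityIndex.buffer, index.buffer]
);
```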
On the main thread I create a BufferGeometry along with some DataTextures. The data textures live on the main thread, and I update their buffers frequently. At the moment it’s an expensive, fixed-size operation: I simply update the entire texture any time any object changes.
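The per-object data texture is currently the brute-force version. A sketch of what I mean by “update the entire texture”, with made-up names and sizes:

```js
import * as THREE from 'three';

// One RGBA float texel per entity; `maxEntities` is a fixed upper bound (~20k here).
const maxEntities = 20000;
const width = 4096;
const height = Math.ceil(maxEntities / width);
const data = new Float32Array(width * height * 4);

const entityTexture = new THREE.DataTexture(
  data, width, height, THREE.RGBAFormat, THREE.FloatType
);
entityTexture.needsUpdate = true;

// Whenever any object changes, I rewrite its texel and re-upload everything.
function updateEntity(entityId, r, g, b, a) {
  data.set([r, g, b, a], entityId * 4);
  entityTexture.needsUpdate = true; // re-uploads the full texture, not just this texel
}
```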
The geometry I upload to the GPU and keep a reference to, so that I can dispose of it later, but I send the buffer data back to the worker via a transferable.
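That “upload once, then hand the bytes back” part looks roughly like this. It’s a sketch under the assumption that, after the first render, nothing on the main thread ever reads the CPU-side arrays again (no `needsUpdate`, no raycasting):

```js
// Main thread, after building `geometry` from the received buffers.
// Compute bounds while the data is still here, so frustum culling keeps
// working after the arrays have been transferred away.
geometry.computeBoundingSphere();
geometry.computeBoundingBox();

// The first render uploads the attribute arrays to the GPU.
renderer.render(scene, camera);

// Hand the same ArrayBuffers back to the worker; the main-thread typed arrays
// become detached, but the GPU copies stay valid as long as I never flag
// needsUpdate on these attributes again.
const position = geometry.getAttribute('position').array;
const index = geometry.index.array;
worker.postMessage({ position, index }, [position.buffer, index.buffer]);
```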
I now need to render only specific objects from this giant bag. I can’t do that efficiently, since the only control I have is to mask out any triangles that don’t belong to my selected objects, which means I still have to render all the many millions of them.
I want to do this via BatchedMesh, and I want to understand, theoretically, what will happen.
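What I picture it looking like, as a sketch, assuming a recent three.js where `addGeometry` and `addInstance` are separate calls (the counts, `geometries`, `scene`, and `selected` are placeholders):

```js
import * as THREE from 'three';

const material = new THREE.MeshBasicMaterial();

// Reserve room up front: max instance count, max vertex count, max index count.
const batched = new THREE.BatchedMesh(20000, 2_000_000, 6_000_000, material);

const instanceIds = [];
for (const geometry of geometries) {
  const geometryId = batched.addGeometry(geometry);
  const instanceId = batched.addInstance(geometryId);
  batched.setMatrixAt(instanceId, new THREE.Matrix4()); // identity: transforms already baked in
  instanceIds.push(instanceId);
}
scene.add(batched);

// Per-object visibility without touching the big buffers.
function showOnly(selected) {
  instanceIds.forEach((id, entityId) =>
    batched.setVisibleAt(id, selected.has(entityId))
  );
}
```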
Can I achieve this while keeping my single buffer? I could create many BufferGeometry objects, each with a different offset and count, but each one will have its own dispose handle. What is that disposing of? Is it sufficient to call dispose on one instance to free the blob of memory behind this single buffer that I’m passing between threads and hoping to copy to the GPU only once?
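By the offset-and-count alternative I mean something like this: a sketch of several BufferGeometry objects sharing one set of BufferAttributes, each restricted to a slice of the index via drawRange (`positionArray` and `indexArray` are the buffers described above; what exactly dispose frees here is precisely my question):

```js
import * as THREE from 'three';

// One shared set of attributes backed by the single big buffer.
const positionAttr = new THREE.BufferAttribute(positionArray, 3);
const indexAttr = new THREE.BufferAttribute(indexArray, 1);

// A lightweight "view" per object: same attributes, different slice of the index.
function makeView(start, count) {
  const view = new THREE.BufferGeometry();
  view.setAttribute('position', positionAttr);
  view.setIndex(indexAttr);
  view.setDrawRange(start, count); // only these indices get drawn
  return view;
}

const viewA = makeView(0, 3000);
const viewB = makeView(3000, 4500);

// Each view has its own dispose(); my question is whether calling it on one of
// them frees the GPU copy of the shared attributes out from under the others.
viewA.dispose();
```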
Will this solve my problem? Will I be able to select a sparse, smaller set from this single buffer and render it efficiently?