I have been tasked with replicating an effect made in a non-realtime tool. The effect was achieved by setting up a 200x200 grid of rays, casting them onto an animated mesh, and placing a sprite at each ray's first hit point.
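For reference, the brute-force CPU equivalent of that setup looks roughly like the sketch below (three.js assumed; the grid extent, ray direction, and sprite pool are placeholders of mine, not the original tool's values):

```ts
import * as THREE from 'three';

// Brute-force version of the effect as described: a 200x200 grid of parallel rays
// cast along -Z, with one sprite per ray moved to that ray's first hit point.
const GRID = 200;
const EXTENT = 2; // world-space width/height covered by the ray grid (placeholder)

const raycaster = new THREE.Raycaster();
const direction = new THREE.Vector3(0, 0, -1);
const origin = new THREE.Vector3();

const spriteMaterial = new THREE.SpriteMaterial();
export const sprites = new THREE.Group();
for (let i = 0; i < GRID * GRID; i++) {
  const sprite = new THREE.Sprite(spriteMaterial);
  sprite.scale.setScalar(0.01);
  sprites.add(sprite);
}

// ~40,000 synchronous Raycaster calls per frame is the part that doesn't scale.
export function updateSprites(mesh: THREE.Mesh) {
  for (let y = 0; y < GRID; y++) {
    for (let x = 0; x < GRID; x++) {
      const sprite = sprites.children[y * GRID + x];
      origin.set(
        (x / (GRID - 1) - 0.5) * EXTENT,
        (y / (GRID - 1) - 0.5) * EXTENT,
        5 // start in front of the mesh and cast towards -Z
      );
      raycaster.set(origin, direction);
      const hit = raycaster.intersectObject(mesh, false)[0];
      sprite.visible = hit !== undefined;
      if (hit) sprite.position.copy(hit.point);
    }
  }
}
```

Add `sprites` to the scene once and call `updateSprites(mesh)` after each animation update.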
I know about BVHs, but because the mesh is animated, the BVH needs to be refitted or rebuilt every frame as well.
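To make that maintenance part concrete, the per-frame refit I have in mind looks roughly like this. It assumes a CPU BVH library along the lines of three-mesh-bvh (its StaticGeometryGenerator and MeshBVH.refit()); the wrapper function and names are mine:

```ts
import * as THREE from 'three';
import { MeshBVH, StaticGeometryGenerator } from 'three-mesh-bvh';

// Bake the skinned mesh's current pose into a plain BufferGeometry, build a BVH on it once,
// then re-bake and refit that same BVH every frame after the animation has been updated.
// (API usage assumed from three-mesh-bvh; treat this as a sketch rather than a recipe.)
export function createAnimatedMeshBVH(skinnedMesh: THREE.SkinnedMesh) {
  const generator = new StaticGeometryGenerator(skinnedMesh);
  generator.attributes = ['position']; // positions are all the raycasts need

  const bakedGeometry = generator.generate();
  const bvh = new MeshBVH(bakedGeometry);

  return {
    bvh,
    bakedGeometry,
    // Call once per frame, after the AnimationMixer has posed the bones.
    update() {
      generator.generate(bakedGeometry); // rewrite the deformed vertex positions in place
      bvh.refit();                       // update node bounds without rebuilding the tree
    },
  };
}
```

Setting the baked geometry's `boundsTree` to the returned BVH (and patching `Mesh.prototype.raycast` with three-mesh-bvh's `acceleratedRaycast`) should let the grid cast above hit the current pose, but it is still tens of thousands of CPU raycasts per frame.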
I am looking for ideas I might have missed. I think I will have to come up with another solution that can run in parallel on the GPU, and maybe change the scope of the task.
For any kind of close-to-realtime BVH generation for a complex mesh like this, you're probably going to want to build the BVH on the GPU and then likely raycast on the GPU as well.
That kind of generation isn't supported by the project yet, but WebGPU compute shaders could enable it. I'd welcome a contribution from anyone interested in adding support for it - I can provide any guidance needed.
For a little more information: in this project, the 200x200 casts are lined up in an orthographic grid. That meant I could render a 200x200-pixel depth map from an orthographic camera and then read the per-pixel depth value in the vertex shader that places the sprites.
It looks like this depth map may be able to meet the required outcome.
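In case it is useful to anyone else, the setup looks roughly like this (a minimal three.js sketch using Points as a stand-in for the sprites; the camera framing, point size, and uniform names are all mine):

```ts
import * as THREE from 'three';

// 200x200 depth pass from an orthographic camera; one point per texel,
// repositioned in the vertex shader by unprojecting (uv, depth) back to world space.
const SIZE = 200;
const orthoCam = new THREE.OrthographicCamera(-1, 1, 1, -1, 0.1, 10); // frame this around the mesh
orthoCam.position.set(0, 0, 5);
orthoCam.lookAt(0, 0, 0);

const depthTarget = new THREE.WebGLRenderTarget(SIZE, SIZE);
depthTarget.depthTexture = new THREE.DepthTexture(SIZE, SIZE);

// Each vertex only carries its grid UV; its real position comes from the depth map.
const gridUvs = new Float32Array(SIZE * SIZE * 2);
for (let y = 0, i = 0; y < SIZE; y++) {
  for (let x = 0; x < SIZE; x++, i += 2) {
    gridUvs[i] = (x + 0.5) / SIZE;
    gridUvs[i + 1] = (y + 0.5) / SIZE;
  }
}
const gridGeometry = new THREE.BufferGeometry();
gridGeometry.setAttribute('position', new THREE.BufferAttribute(new Float32Array(SIZE * SIZE * 3), 3)); // dummy, required by three.js
gridGeometry.setAttribute('gridUv', new THREE.BufferAttribute(gridUvs, 2));

const pointsMaterial = new THREE.ShaderMaterial({
  uniforms: {
    uDepth: { value: depthTarget.depthTexture },
    uProjInverse: { value: orthoCam.projectionMatrixInverse },
    uCamWorld: { value: orthoCam.matrixWorld },
  },
  vertexShader: /* glsl */ `
    attribute vec2 gridUv;
    uniform sampler2D uDepth;
    uniform mat4 uProjInverse;
    uniform mat4 uCamWorld;
    varying float vDepth;
    void main() {
      vDepth = texture2D( uDepth, gridUv ).r;            // linear in [0,1] for an ortho camera
      vec3 ndc = vec3( gridUv, vDepth ) * 2.0 - 1.0;
      vec4 viewPos = uProjInverse * vec4( ndc, 1.0 );    // no perspective divide needed (ortho)
      vec4 worldPos = uCamWorld * viewPos;               // the "first hit" point in world space
      gl_Position = projectionMatrix * viewMatrix * worldPos;
      gl_PointSize = 4.0;
    }
  `,
  fragmentShader: /* glsl */ `
    varying float vDepth;
    void main() {
      if ( vDepth > 0.9999 ) discard;                    // grid cell saw only background
      gl_FragColor = vec4( 1.0 );
    }
  `,
});
const hitPoints = new THREE.Points(gridGeometry, pointsMaterial);
hitPoints.frustumCulled = false; // positions are generated in the shader

// Per frame: depth pass from the ortho camera, then the normal render.
export function render(renderer: THREE.WebGLRenderer, scene: THREE.Scene, camera: THREE.Camera) {
  if (hitPoints.parent !== scene) scene.add(hitPoints);
  hitPoints.visible = false;
  renderer.setRenderTarget(depthTarget);
  renderer.render(scene, orthoCam);
  renderer.setRenderTarget(null);
  hitPoints.visible = true;
  renderer.render(scene, camera);
}
```

Grid cells where the orthographic camera sees only background read a depth of ~1.0, which is why the fragment shader discards them.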