I’ll look into tetrahedral meshes at some point, as a grid can be a limitation sometimes.
Did you implement everything from scratch (apart from Three.js), or did you start from existing work? Does your solution use WebGPU entirely, or a mix with WebGL2?
I’ll move to WebGPU some day, but I’ve had some requests to keep the code in WebGL for a bit; at the very least, WebGPU wasn’t an option when I started. The worst part about WebGL has been browser regressions and compilation times for complex shaders, more than anything. Compute shaders will provide a performance boost and some of the new data constructs will be nice, but otherwise, once the texture sampling is abstracted away, the fundamental path tracing logic is the same, and that’s the bulk of the complexity imo.
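For what it’s worth, here is a minimal sketch of what that abstraction amounts to; the helper is illustrative, not code from the actual project:

```ts
// On WebGL, structured data ("buffers") live in float textures, and you
// recover buffer[i] by converting a linear index into texel coordinates.
// Once all reads go through a helper like this (or its GLSL equivalent),
// the core path tracing logic doesn't care which backend it runs on.
function texelCoordFromIndex(index: number, textureWidth: number): [number, number] {
  return [index % textureWidth, Math.floor(index / textureWidth)];
}
```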
Yeah, I totally get it. I thought about doing it in a similar way, and toyed around with your codebase. I thought you did an amazing job, but the compile times, the data transformation, and the shader limits together make it a practical nightmare.
Actual ray tracing (well, path tracing) is trivial by comparison. That said, I appreciate that the majority of the complexity in what you created is on that end. It’s just that those other pesky bits are such an enormous pain.
I have written a few systems that run things on WebGL that don’t fit the API well, and I have more or less developed an apathetic attitude towards it. I get the end result, but you’re never fully satisfied with the solution; there are too many insurmountable limits.
Say I want to use this tech on WebGL: I would have to convert all geometries to textures, atlas all the textures, and so on (see the sketch below). By that point, a lot of time has passed for larger scenes, and I might not even have succeeded, as there are practical limits on how big a texture can be in WebGL on a given device. Then I have to wait for the shaders to compile, which likely takes a few seconds on a lower-end device, and only then can I actually use the thing. And all the while, moving data out of GPU memory is a pain as well.

All of the points above can be addressed, but it just feels dirty at that point: workarounds on top of workarounds.
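To make the geometry-to-texture step concrete, here is roughly what the conversion looks like in Three.js; this is a sketch, not code from either project, and it only handles positions:

```ts
import * as THREE from 'three';

// Pack vertex positions into an RGBA float texture, one texel per vertex.
function packPositionsIntoTexture(
  geometry: THREE.BufferGeometry,
  renderer: THREE.WebGLRenderer
): THREE.DataTexture {
  const positions = geometry.getAttribute('position');

  // The practical limit mentioned above: max texture size is device-dependent.
  const maxSize = renderer.capabilities.maxTextureSize;
  const width = Math.min(maxSize, Math.ceil(Math.sqrt(positions.count)));
  const height = Math.ceil(positions.count / width);
  if (height > maxSize) {
    throw new Error('Geometry too large for a single texture on this device');
  }

  const data = new Float32Array(width * height * 4);
  for (let i = 0; i < positions.count; i++) {
    data[i * 4 + 0] = positions.getX(i);
    data[i * 4 + 1] = positions.getY(i);
    data[i * 4 + 2] = positions.getZ(i);
    data[i * 4 + 3] = 1; // unused padding channel
  }

  const texture = new THREE.DataTexture(data, width, height, THREE.RGBAFormat, THREE.FloatType);
  texture.needsUpdate = true;
  return texture;
}
```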
I implemented a visibility term in my solution. That is, there is a sort of depth map for each probe, and we use those maps to determine whether a probe is visible from a given point in space. If it’s not visible, it doesn’t contribute (a minimal sketch of the test is below). This essentially removed light leaks, at least in theory; in practice I found that there are sampling errors because of the low resolution of the map.
That said, results are still very good.
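Reduced to its essence, the test looks something like this; a minimal sketch assuming the depth map lookup has already happened, not the actual shader code:

```ts
// Compare the actual distance from the probe to the shaded point against the
// depth the probe recorded in that direction. If the recorded depth is
// shorter, something occludes the probe, so it should not contribute.
function probeVisible(
  distanceToPoint: number,
  storedDepth: number, // sampled from the probe's depth map toward the point
  bias = 0.01          // tolerance for the low-resolution sampling errors noted above
): boolean {
  return distanceToPoint <= storedDepth + bias;
}
```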
I was curious how close my solution is to “reality”, so I recreated the same scene in Blender (almost the same; minor differences due to laziness) and did a few renders at 4096 samples per pixel with the same 7-bounce light settings. Here are the results: the first picture is the Blender Cycles render, and the second is the light probes in-engine.
No, the implementation is not open. I know it’s a massive pain to create a tetrahedral mesh on the web. There are the CGAL and TetGen libraries out there, written in C/C++, but they are massively complex and aren’t easily portable to the web. You can try, though: cut the code down as much as possible and export it via Emscripten as a WASM bundle.
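If you go that route, the JS side ends up looking something like the sketch below. Fair warning: the module name, the exported tetrahedralize function, and its signature are all made up for illustration; only `_malloc`/`_free` and the HEAP views are standard Emscripten machinery:

```ts
import createTetModule from './tetgen-wasm.js'; // hypothetical Emscripten output

// Feed flat xyz point data to a hypothetical WASM tetrahedralizer and read
// back the tetrahedra as 4 vertex indices each.
async function tetrahedralize(points: Float32Array): Promise<Uint32Array> {
  const mod = await createTetModule();

  // Copy the input into the WASM heap.
  const inPtr = mod._malloc(points.byteLength);
  mod.HEAPF32.set(points, inPtr / 4);

  // Call the (hypothetical) export; it returns a pointer to the tet indices
  // and writes the tet count through an out-parameter.
  const countPtr = mod._malloc(4);
  const tetsPtr = mod._tetrahedralize(inPtr, points.length / 3, countPtr);
  const tetCount = mod.HEAPU32[countPtr / 4];

  // Copy results out of the heap before freeing.
  const tets = new Uint32Array(mod.HEAPU32.buffer, tetsPtr, tetCount * 4).slice();
  mod._free(inPtr);
  mod._free(countPtr);
  return tets;
}
```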
Note my follow-up a few comments later – I think these existing JS implementations could be OK if we clean up the degenerate triangles at the edges, which shouldn’t be a big deal. But it’s possible I missed something; I hadn’t (and still haven’t) pushed an implementation as far as @Usnul has done here. Beautiful work!
Aside – I have been thinking it would be nice to have a MeshVolumeSampler, similar to MeshSurfaceSampler, but one that samples 3D points within a volume, using a tetrahedral mesh generated from the source, rather than sampling from the surface. That’s much simpler than an entire light probe volume implementation and would still probably be helpful to many people.
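The core of such a sampler is pleasantly small once the tets exist: pick a tetrahedron with probability proportional to its volume, then pick a uniform point inside it via sorted barycentric coordinates. A rough sketch of the idea; the class name and the tet input format are assumptions, not an existing three.js API:

```ts
import * as THREE from 'three';

type Tet = [THREE.Vector3, THREE.Vector3, THREE.Vector3, THREE.Vector3];

// Sketch of the proposed MeshVolumeSampler: samples points uniformly inside
// the volume described by a tetrahedral mesh.
class MeshVolumeSampler {
  private cdf: number[] = [];

  constructor(private tets: Tet[]) {
    // Cumulative distribution over tet volumes, so bigger tets are
    // picked proportionally more often.
    let total = 0;
    for (const [a, b, c, d] of tets) {
      const ab = new THREE.Vector3().subVectors(b, a);
      const ac = new THREE.Vector3().subVectors(c, a);
      const ad = new THREE.Vector3().subVectors(d, a);
      total += Math.abs(ab.dot(ac.clone().cross(ad))) / 6; // tet volume
      this.cdf.push(total);
    }
  }

  sample(target: THREE.Vector3): THREE.Vector3 {
    // Binary-search the CDF to pick a tet.
    const r = Math.random() * this.cdf[this.cdf.length - 1];
    let lo = 0, hi = this.cdf.length - 1;
    while (lo < hi) {
      const mid = (lo + hi) >> 1;
      if (this.cdf[mid] < r) lo = mid + 1; else hi = mid;
    }
    const [a, b, c, d] = this.tets[lo];

    // Sorted uniforms give uniform barycentric coordinates over the simplex.
    const [s, t, u] = [Math.random(), Math.random(), Math.random()].sort((x, y) => x - y);
    return target
      .copy(a).multiplyScalar(s)
      .addScaledVector(b, t - s)
      .addScaledVector(c, u - t)
      .addScaledVector(d, 1 - u);
  }
}
```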
I followed that discussion for a while. From my own experience: I tried porting tetgen to JS and got pretty far, but gave up, if I recall correctly because of the pointer mess, which made the whole thing either super slow or in need of a substantial redesign.
I tried porting CGAL as well, but it has super tight coupling to the rest of the framework, and it felt like pulling on one thread just takes you on a merry ride through the entire codebase. I tried paying a C++ guy to port it for me, but that went nowhere; it turns out you need pretty specialized knowledge to understand the code, as well as knowing JS well enough to do a decent port.
I have ~30 papers on the subject in my research “library” for this. In short: it was a massive pain.
Tet meshes are really cool though, even if I haven’t seen any commercial success with them myself.
My experience with tet mesh sampling in a shader is that it’s kind of slow, mainly because of the search for the containing tetrahedron. It’s fine in a vertex shader, but that means you can only sample irradiance at per-vertex resolution; depending on your models, that can be bad.
In my implementation for meep I use a relatively low-resolution 3D lookup texture to speed up the search, and support both per-pixel and per-vertex sampling, but it’s still somewhat slow. That is to say, it’s not for integrated GPUs. (The lookup idea is sketched below.)
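Roughly, the lookup works by stamping tet indices into a coarse grid on the CPU, so the shader can start its search from the tet stored in the cell containing the sample point rather than searching from scratch. A simplified sketch of the CPU-side build, with illustrative names; the real thing has to handle cells overlapped by several tets, neighbor walking in the shader, and so on:

```ts
// Build a coarse grid where each cell holds the index of one tetrahedron
// whose bounding box overlaps it (-1 where none do). Uploaded as a 3D
// integer texture (e.g. THREE.Data3DTexture), it gives the shader a cheap
// starting guess for the containing-tet search.
function buildLookupGrid(
  tetBounds: { min: number[]; max: number[] }[], // per-tet AABBs
  sceneMin: number[],
  cellSize: number,
  res: number // grid is res^3 cells
): Int32Array {
  const grid = new Int32Array(res * res * res).fill(-1);
  for (let t = 0; t < tetBounds.length; t++) {
    const { min, max } = tetBounds[t];
    // Clamp the tet's bounds to cell indices and stamp its index in.
    const c0 = min.map((v, i) => Math.max(0, Math.floor((v - sceneMin[i]) / cellSize)));
    const c1 = max.map((v, i) => Math.min(res - 1, Math.floor((v - sceneMin[i]) / cellSize)));
    for (let z = c0[2]; z <= c1[2]; z++)
      for (let y = c0[1]; y <= c1[1]; y++)
        for (let x = c0[0]; x <= c1[0]; x++)
          grid[x + y * res + z * res * res] = t;
  }
  return grid;
}
```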
As an anecdote, it ran a scene with 26,000 probes at ~30 FPS on an M2 MacBook Air. There were some 130 lights in the scene as well as a single directional shadow map, so I’m not sure how useful that data point is. I did find that the probes were the biggest perf hog, though.