Gentle Light Probing

Thanks for your insight.

I’ll look into tetrahedral meshes at some point, as a grid can be a limitation sometimes.

Did you implement everything from scratch (apart from Three.js), or did you start from existing work? Does your solution use WebGPU entirely, or a mix with WebGL2?

Thanks again, can’t wait to see the demo.

1 Like

I found an old paper about tetrahedral probe meshes in Unity that I read a long time ago.

You certainly already know about it, but it’s a good intro for people who want to discover this subject.

1 Like

I’ll move to WebGPU some day, but I’ve had some requests to keep the code in WebGL for a bit, at least :sweat_smile: It wasn’t an option when I started. The worst part about WebGL has been browser regressions and compilation times for complex shaders more than anything. Compute shaders will provide a performance boost and some of the new data constructs will be nice, but otherwise, once the texture sampling is abstracted away, the fundamental path tracing logic is the same, which is the bulk of the complexity imo.

5 Likes

Yeah, I totally get it. I thought about doing it in a similar way, and toyed around with your codebase. I thought you did an amazing job, but the compile times, the data transformation, and the shader limits all make it a practical nightmare.

Actual raytracing (well, path tracing) is trivial by comparison. That said, I appreciate that the majority of the complexity is on that end in what you created. It’s just that those other pesky bits are such an enormous pain.

I have written a few systems that run things on WebGL that don’t fit the API well, and I have more or less developed an apathetic attitude towards it. I get the end result, but I’m never fully satisfied with the solution; there are too many insurmountable limits.

Say I want to use this tech on WebGL: I would have to convert all geometries to textures, atlas all the textures, and so on. At that point, a lot of time has passed for larger scenes and I might not even have succeeded, as there are practical limits on how big a texture can be in WebGL on a given device. Then I have to wait for the shaders to compile, which likely takes a few seconds on a lower-end device, and only then may I be able to use it. And all the while, moving data out of GPU memory is a pain as well.
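To make the “geometry into textures” step concrete, here is a rough sketch of what packing vertex positions into a float texture can look like with Three.js (the function name and layout are purely illustrative, not from any particular codebase):

```js
import * as THREE from 'three';

// Pack the vertex positions of a BufferGeometry into an RGBA float texture,
// so a fragment shader can fetch them by index (one texel per vertex).
function packPositionsToTexture( geometry ) {
  const positions = geometry.attributes.position; // 3 floats per vertex
  const count = positions.count;

  // Pick a roughly square texture that can hold one texel per vertex.
  const width = Math.ceil( Math.sqrt( count ) );
  const height = Math.ceil( count / width );

  const data = new Float32Array( width * height * 4 );
  for ( let i = 0; i < count; i ++ ) {
    data[ i * 4 + 0 ] = positions.getX( i );
    data[ i * 4 + 1 ] = positions.getY( i );
    data[ i * 4 + 2 ] = positions.getZ( i );
    data[ i * 4 + 3 ] = 1.0; // padding, unused
  }

  const texture = new THREE.DataTexture( data, width, height, THREE.RGBAFormat, THREE.FloatType );
  texture.needsUpdate = true;
  return texture;
}
```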

All of the points above can be addressed, but it just feels dirty at that point, workarounds on top of workarounds :sweat_smile:

I implemented a visibility term into the solution. That is, there is a sort of a depth map for each probe, and we use those maps to determine whether a probe is visible from a given point in space or not. If it’s not visible, it doesn’t contribute. This essentially removed light leaks, at least in theory; in practice I found that there are sampling errors because of the low resolution of the map.
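To illustrate what such a visibility term can look like (not necessarily what’s used here; this is a Chebyshev-style test in the spirit of DDGI, assuming the probe depth map stores mean and mean-squared distance per direction):

```js
// Weight for a single probe's contribution at a shaded point, based on the
// probe's depth map. 'mean' and 'meanSq' are assumed to be sampled from the
// depth map in the direction from the probe towards the point (hypothetical).
function probeVisibilityWeight( distanceToProbe, mean, meanSq ) {
  if ( distanceToProbe <= mean ) return 1.0; // closer than the average occluder

  // Chebyshev upper bound on the probability that the probe sees the point.
  const variance = Math.max( meanSq - mean * mean, 1e-6 );
  const d = distanceToProbe - mean;
  return variance / ( variance + d * d );
}
```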

That said, results are still very good.

I was curious how close my solution is to “reality”, so I recreated the same scene in Blender (almost the same, minor errors due to laziness) and did a few renders at 4096 samples per pixel with the same 7-bounce light settings. Here are the results: the first picture is the Blender Cycles render, and the second is the light probes in the engine.

I’m really happy with how close it is already.

10 Likes

Hello, is your implementation for tetrahedralization open? I can’t find a good algorithm that supports sparse placement :frowning:

1 Like

Hi @Meetem ,

No, the implementation is not open. I know it’s a massive pain to create a tetrahedral mesh on the web. There are the CGAL and TetGen libraries out there, written in C/C++, but they are massively complex and aren’t easily portable to the web. You can try, though: cut the code down as much as possible and export it via Emscripten as a WASM bundle.

1 Like

Related discussion:

Note my follow-up a few comments later – I think these existing JS implementations could be OK if we clean up the degenerate triangles at the edges, which shouldn’t be a big deal. But it’s possible I missed something; I had not (and still haven’t) pushed an implementation as far as @Usnul has done here. Beautiful work!

Aside – I have been thinking it would be nice to have a MeshVolumeSampler, similar to MeshSurfaceSampler, but sampling 3D points within a volume (using a tetrahedral mesh generated from the source) rather than sampling from the surface. That’s much simpler than an entire light probe volume implementation and would still probably be helpful to many people.
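The core of such a sampler is small once a tetrahedral mesh exists: pick a tetrahedron with probability proportional to its volume, then pick a uniformly distributed point inside it. A sketch of the per-tetrahedron part (MeshVolumeSampler itself is only a proposal, so all names here are hypothetical):

```js
import * as THREE from 'three';

// Uniform random point inside a tetrahedron (a, b, c, d are THREE.Vector3),
// using the standard "folded cube" trick to get uniform barycentric weights.
function samplePointInTetrahedron( a, b, c, d, target = new THREE.Vector3() ) {
  let s = Math.random();
  let t = Math.random();
  let u = Math.random();

  if ( s + t > 1 ) { s = 1 - s; t = 1 - t; }
  if ( t + u > 1 ) {
    const tmp = u; u = 1 - s - t; t = 1 - tmp;
  } else if ( s + t + u > 1 ) {
    const tmp = u; u = s + t + u - 1; s = 1 - t - tmp;
  }

  // target = a + s * (b - a) + t * (c - a) + u * (d - a)
  return target
    .copy( a ).multiplyScalar( 1 - s - t - u )
    .addScaledVector( b, s )
    .addScaledVector( c, t )
    .addScaledVector( d, u );
}

// Tetrahedron volume, used to pick which tetrahedron to sample from.
function tetrahedronVolume( a, b, c, d ) {
  const ab = new THREE.Vector3().subVectors( b, a );
  const ac = new THREE.Vector3().subVectors( c, a );
  const ad = new THREE.Vector3().subVectors( d, a );
  return Math.abs( ab.cross( ac ).dot( ad ) ) / 6;
}
```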

5 Likes

I followed that discussion for a while. From my own experience: I tried porting TetGen to JS and got pretty far, but gave up, if I recall correctly due to the pointer mess, which made the whole thing super slow or would have required a substantial redesign.

I tried porting CGAL as well, but it has super tight coupling to the rest of the framework, and it felt like pulling on one thread just takes you on a merry ride through the entire framework. I tried paying a C++ developer to port it for me, but that went nowhere; it turns out you have to have pretty specialized knowledge to understand the code, as well as know JS well enough, to do a decent port.

I have ~30 papers on the subject in my research “library” for this. In short - it was a massive pain :smiley:

Tet meshes are really cool though, even if I myself haven’t seen any commercial success with them.

My experience with tet mesh sampling in a shader is that it’s kind of slow, mainly because of the search. It’s fine in a vertex shader, but that means you’re only able to sample irradiance at per-vertex resolution; depending on your models, that can be bad.

In my implementation for meep I use a relatively low-resolution 3D lookup texture to help speed up the search, and I support both per-pixel and per-vertex sampling, but it’s still somewhat slow. That is to say, it’s not for integrated GPUs.
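The post doesn’t describe the exact layout, but the general idea of such a lookup texture can be sketched like this: build a coarse voxel grid over the volume’s bounds, store a nearby tetrahedron ID per voxel, and let the shader start its search from there (the findContainingOrNearestTet helper is hypothetical):

```js
import * as THREE from 'three';

// Build a coarse voxel grid over the volume's bounding box (a THREE.Box3).
// Each voxel stores the ID of a tetrahedron near its center, used as the
// starting cell for the walk in the shader.
function buildTetLookupGrid( tetMesh, bounds, resolution ) {
  const cellIds = new Int32Array( resolution * resolution * resolution );
  const size = new THREE.Vector3().subVectors( bounds.max, bounds.min );
  const center = new THREE.Vector3();

  for ( let z = 0; z < resolution; z ++ ) {
    for ( let y = 0; y < resolution; y ++ ) {
      for ( let x = 0; x < resolution; x ++ ) {
        center.set(
          bounds.min.x + ( ( x + 0.5 ) / resolution ) * size.x,
          bounds.min.y + ( ( y + 0.5 ) / resolution ) * size.y,
          bounds.min.z + ( ( z + 0.5 ) / resolution ) * size.z
        );
        const index = x + y * resolution + z * resolution * resolution;
        cellIds[ index ] = findContainingOrNearestTet( tetMesh, center ); // hypothetical helper
      }
    }
  }

  // cellIds can then be uploaded as a 3D texture (e.g. THREE.Data3DTexture).
  return cellIds;
}
```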

As an anecdote, it ran a scene with 26,000 probes at ~30 FPS on an M2 MacBook Air. There were some 130 lights in the scene as well as a single directional shadow map, so I’m not sure how useful that data is. I did find that the probes were the biggest perf hog though.

2 Likes

Really cool work!

I’ve been doing something similar, though no light probes so far – only precomputed radiance transfer, storing spherical harmonics coefficients per vertex, with one set of coefficients for the infinitely far environment.

Masked Environment Coefficients only:

With albedo (and way too high bounce intensity haha)
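For anyone unfamiliar with the setup above: with per-vertex PRT the shading reduces to a dot product between the baked per-vertex transfer coefficients and the environment’s SH coefficients. A minimal order-2 (9-coefficient) sketch, with all names purely illustrative:

```js
import * as THREE from 'three';

// Per-vertex PRT: irradiance ≈ sum over the 9 SH bands of
// transferCoeffs[i] * envCoeffs[i], done per color channel.
// transferCoeffs: Float32Array(9) baked for this vertex.
// envCoeffs: array of 9 THREE.Vector3 holding the RGB environment coefficients.
function shadePrtVertex( transferCoeffs, envCoeffs, target = new THREE.Vector3() ) {
  target.set( 0, 0, 0 );
  for ( let i = 0; i < 9; i ++ ) {
    target.addScaledVector( envCoeffs[ i ], transferCoeffs[ i ] );
  }
  return target; // linear RGB irradiance at this vertex
}
```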

I wanted to take the next step now and enhance this with local coefficients through light probes, and I’ve been wondering how you’re doing the shader lookup, @Usnul? I was initially considering splitting up the mesh and providing a set of local coefficients as uniforms, but it seems that might be unnecessary?

Cheers

5 Likes

I managed to compile CGAL for the web some time ago while searching for subdivision methods. Here: GitHub - ademola-lou/3DLibrary

2 Likes

There are 3 ways:

  1. Cache nearby probes on the mesh
  2. Cache probes in screen space
  3. Cache probes in world space

To make this make sense, let’s step back for a second and understand what an “irradiance volume” is.

We have lightmaps, which encode irradiance on the surface. That is, given a point on the surface of a mesh, we store the light contribution.
Volumes are different: we store irradiance for a point in space instead of at a surface.

Volumes have the benefit of not caring about the surfaces. That is, we can resample them for novel things and the irradiance will be correct… -ish; the larger the things we introduce into a baked irradiance volume, the more “wrong” the volume will become.

A volume also doesn’t care about the number of meshes or their complexity. Conversely, a lightmap will need some space for each mesh, and vertex-encoded lighting will need to store some data for each vertex of each mesh instance.

So, back to caching. Irradiance volumes are cool for their interpolation property. When we “cache”, we want to cache a volume “cell”, not just a single sample. If our volume is a tetrahedral mesh, a single cell will point to 4 probes (making up a tetrahedron). If your volume is some kind of a regular grid (3D texture), a cell will be the nearest voxel ID of the grid.

If your volume is a grid - I’d say just don’t bother to cache, as computing the cell is trivial.

For a tetrahedral mesh, we can store the nearest cell ID at some level of granularity. We can store 1 cell for the entire instance, or we can store the nearest cell on a per-vertex basis. Note that whatever you choose, you have to consider that you need to update that cache when the mesh moves or animates. If you don’t, the usefulness of the cache will deteriorate and it will offer less and less benefit.

How do we go from our cached cell to the actual set of probes when shading a texel? We start from the cached cell and a point in world space, and we walk our tetrahedral mesh until we land in a cell that definitely contains our world point.

This is fairly trivial; Unity has a good explanation on this topic.

  1. You essentially compute barycentrics for a tetrahedron
  2. If the barycentrics (u, v, w, c) are all positive and they add up to 1, you’re in the right cell
  3. If your barycentrics are off, move to the neighbour that lies in the direction of the barycentric vector
  4. Repeat the process until you arrive at the correct cell, or until you reach some number of steps, in which case you give up

This seems dumb, but it works really well, and you typically find the right cell in just a handful of steps given a cached starting point; a rough sketch of the walk is below.
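Here is that walk as CPU-side JavaScript for clarity (in practice this lives in a shader; the cell/neighbour layout and helper names are assumptions, not any particular implementation):

```js
import * as THREE from 'three';

// Barycentric coordinates of point p with respect to tetrahedron (a, b, c, d).
// Solves p = a + u*(b-a) + v*(c-a) + w*(d-a); the weight of 'a' is 1-u-v-w.
function tetBarycentric( p, a, b, c, d ) {
  const m = new THREE.Matrix3().set(
    b.x - a.x, c.x - a.x, d.x - a.x,
    b.y - a.y, c.y - a.y, d.y - a.y,
    b.z - a.z, c.z - a.z, d.z - a.z
  );
  const r = new THREE.Vector3().subVectors( p, a ).applyMatrix3( m.invert() );
  return [ 1 - r.x - r.y - r.z, r.x, r.y, r.z ]; // weights for a, b, c, d
}

// Walk the tetrahedral mesh from a cached starting cell until we find the
// cell containing 'point'. Each cell is assumed to store its 4 vertex
// positions and the 4 neighbour cell IDs opposite each vertex (-1 = boundary).
function findContainingCell( cells, startCellId, point, maxSteps = 32 ) {
  let cellId = startCellId;
  for ( let step = 0; step < maxSteps; step ++ ) {
    const cell = cells[ cellId ];
    const bary = tetBarycentric( point, cell.a, cell.b, cell.c, cell.d );

    // The most negative barycentric coordinate tells us which face we left through.
    let minIndex = 0;
    for ( let i = 1; i < 4; i ++ ) {
      if ( bary[ i ] < bary[ minIndex ] ) minIndex = i;
    }

    // All weights non-negative: the point is inside this tetrahedron.
    if ( bary[ minIndex ] >= 0 ) return cellId;

    // Otherwise step into the neighbour opposite the offending vertex.
    const next = cell.neighbours[ minIndex ];
    if ( next < 0 ) return cellId; // left the volume: clamp to the boundary cell
    cellId = next;
  }
  return cellId; // gave up after maxSteps, return the best guess
}
```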

There are 3 ways here as well:

  1. We can resolve at the mesh origin, that’s just 1 point for the entire mesh
  2. We can resolve in vertex space and pass the resolved value on to the pixel shader. We can be a bit more clever here and resolve into some kind of directional format, such as spherical harmonics, which we can interpolate in the pixel shader to capture extra directional information.
  3. We can resolve in pixel shader.

You can probably find it easy to believe that 1 is the fastest, as we do only 1 resolve, but also the least accurate, and that 3 is the slowest but captures the stored volume information perfectly.

The screen-space caching is basically just doing a very low-resolution render pass with your depth buffer and finding the nearest volume cells for each pixel. This gets you quite far generally.
If you don’t have a deferred rendering pipeline, you can also do this at the end of the frame and accept a 1-frame discrepancy, sampling your cache from the previous frame. It’s pretty good most of the time.

3D caching is basically the same as screen-space, but instead of building a 2D texture, you build a 3D texture in frustum space. A “froxel” texture, if you will. This one has the advantage of capturing information for things that don’t have depth, such as volumetric effects, particles, and transparent/translucent surfaces.
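Purely as an illustration of the indexing into such a froxel grid (the actual slicing scheme is an assumption; linear depth slices are used here for simplicity, log slices are also common):

```js
// Map an NDC position plus view-space depth to froxel (frustum voxel) grid
// coordinates, e.g. for looking up a cached cell ID in a 3D texture.
function froxelCoords( ndcX, ndcY, viewDepth, near, far, gridSize ) {
  const fx = Math.min( gridSize.x - 1, Math.floor( ( ndcX * 0.5 + 0.5 ) * gridSize.x ) );
  const fy = Math.min( gridSize.y - 1, Math.floor( ( ndcY * 0.5 + 0.5 ) * gridSize.y ) );
  const t = ( viewDepth - near ) / ( far - near ); // linear slice, for illustration only
  const fz = Math.min( gridSize.z - 1, Math.max( 0, Math.floor( t * gridSize.z ) ) );
  return { x: fx, y: fy, z: fz };
}
```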

I didn’t bother implementing mesh-level caches, but I’ve tried all of the other options.

I also made a BVH-based lookup, where each cell is given an AABB and we use a BVH to do resolution; this makes resolution O(log(n)) instead of linear. This is what I’m currently using, as it appears to offer the best scalability. Your mileage may vary though.
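The post doesn’t say which BVH is used; just to make the idea concrete, an illustrative query might look like this, with each leaf storing tetrahedron (cell) IDs and each node storing an AABB (all names and the pointInTetrahedron helper are assumptions):

```js
// Walk a BVH whose leaves reference tetrahedra (cells) by ID. Candidate cells
// whose node AABB contains the point are confirmed with an exact inside-test.
function queryCellBVH( root, point ) {
  const stack = [ root ];
  while ( stack.length > 0 ) {
    const node = stack.pop();
    if ( ! node.box.containsPoint( point ) ) continue; // node.box: e.g. a THREE.Box3

    if ( node.isLeaf ) {
      for ( const cellId of node.cellIds ) {
        if ( pointInTetrahedron( point, cellId ) ) return cellId; // hypothetical helper
      }
    } else {
      stack.push( node.left, node.right );
    }
  }
  return -1; // the point lies outside the volume
}
```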

2 Likes

Thanks so much for the detailed explanation, that’s very helpful! :pray: :slight_smile:

1 Like