Gentle Light Probing

I've been trying to find a solution for baking light probes. There is the option of using the cube-map approach that @donmccurdy used. However, it has a couple of problems:

  1. It hammers the read-back interface - the part that copies pixels from the GPU to the CPU. This is so slow that it really kills performance, and we have to do it 6 times per probe, since each cube face is essentially a separate render (see the sketch after this list).
  2. And this one is a deal-breaker - there is no light bounce. Light probes are intended as a global illumination solution, but if those probes are baked using standard rendering with no light bounce, they do very little. A probe essentially adds a single light bounce, by virtue of being rendered from a different position than the pixel we plan to shade, but that's it.
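
To make problem 1 concrete, here's roughly what the cube-map path looks like in three.js. This is a minimal sketch of the general approach, not @donmccurdy's actual code, and `bakeProbeViaCubeCamera` is my own hypothetical name:

```js
import * as THREE from 'three';

// Hypothetical sketch of the cube-map baking path (my naming, not
// @donmccurdy's actual code), just to show where the read-back cost lives.
function bakeProbeViaCubeCamera(renderer, scene, position, size = 64) {
  const target = new THREE.WebGLCubeRenderTarget(size);
  const cubeCamera = new THREE.CubeCamera(0.1, 1000, target);
  cubeCamera.position.copy(position);

  // 6 separate renders, one per cube face.
  cubeCamera.update(renderer, scene);

  // The painful part: 6 synchronous GPU -> CPU read-backs, each of which
  // stalls until the GPU finishes and the pixels cross the bus.
  const faces = [];
  for (let face = 0; face < 6; face++) {
    const pixels = new Uint8Array(size * size * 4);
    renderer.readRenderTargetPixels(target, 0, 0, size, size, pixels, face);
    faces.push(pixels);
  }
  return faces; // SH projection from these pixels then happens on the CPU
}
```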

So… I was looking for a raytracing solution that would work well with the rest of my tech. I had a fairly deep look into @gkjohnson 's gpu-path-tracer, and honestly it's really awesome - I'm even more impressed now, after having studied the code for a while. But it doesn't really suit me, for two key reasons:

  1. It packs everything in the scene into a texture, which means there's a hard limit on how large a scene can be - and that limit bites especially on lower-end devices. I didn't want to accept that limitation; it's too "hard" of a limitation.
  2. You can't easily pack spherical harmonic data on the GPU. It can be done by using one render target per coefficient, which requires 9 render targets for a 3-band harmonic (the arithmetic is sketched below). Not good. Alternatively, you can do multiple passes and write out just the coefficient you're interested in - this is stupid-wasteful, as you're essentially throwing away very heavy GPU work on each pass. This problem would be solved if we had random writes, but we don't - this is WebGL, baby :face_with_diagonal_mouth: .
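
For reason 2, a quick sketch of the arithmetic behind that 9-render-target figure:

```js
// The arithmetic behind the "9 render targets" figure: an n-band SH
// expansion has n^2 basis coefficients (l = 0..n-1, with 2l+1 values of m).
const bands = 3;
const coefficients = bands * bands;      // 9
const floatsPerProbe = coefficients * 3; // 27: one coefficient set per RGB channel
// A fragment shader writes to fixed outputs only, so storing one RGB
// coefficient per target needs 9 MRT attachments (which can exceed a
// device's MAX_DRAW_BUFFERS limit) - or 9 full passes.
console.log(coefficients, floatsPerProbe); // 9 27
```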

There are a couple of other problems in there, but if the two issues above were solved - I’d go with @gkjohnson 's tech, it’s really good and it saves sooo much time.

Then I thought, hold on - I already have a ton of spatial code, raytracing for game queries, multiple BVH implementations - how hard can it be to write your own path tracer from scratch? Turns out: pretty hard.

So that's where I am at right now: writing a path tracer from scratch, more or less. I followed some of the old references I used in the past when working on related tech:

  - smallpt - a path tracer in 99 lines of C++
  - embree - Intel's awesome open-source gift to humanity. The code is seriously amazing: well structured, commented, and without too much overengineering, so it's relatively easy to understand and follow

I don't actually want to write a path tracer in the sense that most projects aim for. Most projects want to render a scene from the perspective of a camera, painting some viewport. I don't need that: I'm going to be tracing rays from a point and collecting a fairly small number of samples to construct SH coefficients. However, having no prior experience with path tracing or ray tracing from a rendering perspective, I quickly ended up with bugs in my code and no idea where or why.
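
For anyone curious, the probe-side math is just a Monte Carlo projection of radiance samples onto the SH basis. A minimal sketch, assuming uniform sphere sampling; `traceRadiance` is a hypothetical stand-in for the actual tracer:

```js
// A minimal sketch of projecting radiance samples onto a 3-band SH basis.
// `traceRadiance` is a hypothetical stand-in for the path tracer.

// Uniform sampling of the unit sphere.
function uniformSphereSample() {
  const z = 2 * Math.random() - 1;
  const phi = 2 * Math.PI * Math.random();
  const s = Math.sqrt(1 - z * z);
  return { x: s * Math.cos(phi), y: s * Math.sin(phi), z };
}

// Real-valued SH basis, first 3 bands (9 coefficients); constants are the
// standard L2 values from the SH lighting literature.
function shBasis({ x, y, z }) {
  return [
    0.282095,                    // l=0
    0.488603 * y,                // l=1
    0.488603 * z,
    0.488603 * x,
    1.092548 * x * y,            // l=2
    1.092548 * y * z,
    0.315392 * (3 * z * z - 1),
    1.092548 * x * z,
    0.546274 * (x * x - y * y),
  ];
}

function bakeProbeSH(position, sampleCount, traceRadiance) {
  const sh = new Float32Array(9 * 3); // 9 coefficients x RGB
  for (let i = 0; i < sampleCount; i++) {
    const dir = uniformSphereSample();
    const L = traceRadiance(position, dir); // -> { r, g, b }
    const basis = shBasis(dir);
    for (let c = 0; c < 9; c++) {
      sh[c * 3 + 0] += L.r * basis[c];
      sh[c * 3 + 1] += L.g * basis[c];
      sh[c * 3 + 2] += L.b * basis[c];
    }
  }
  // Monte Carlo estimator: scale by sphere solid angle / sample count.
  const w = (4 * Math.PI) / sampleCount;
  for (let i = 0; i < sh.length; i++) sh[i] *= w;
  return sh;
}
```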

So, to hunt those bugs down, I decided to build a small harness on top of the path tracer, to do exactly what I just spent a minute telling you I don't need: a viewport renderer. The logic is pretty simple though - when you don't have good intuition about what your code is doing, visualizing that work can help a great deal. The human brain has a huge visual cortex, so we can process a very large amount of visual information with ease. Plus, I know enough about path tracing to identify various types of issues visually.

Anyway, all that writing, just to be able to show a silly image. Here is the current state:

This is essentially just occlusion, drawn with false colors; I used a color gradient to be able to represent a wider range of values. So for now I've got ray bounces sorted out, along with BVH integration and geometry support. If you look closely, you'll see segmentation on the spheres - that's not a bug; the spheres are actual THREE.SphereBufferGeometry instances, just flat-shaded.
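
The false-color mapping itself is nothing fancy; something along these lines (the exact ramp in the image may differ):

```js
// Roughly the idea behind the false-color pass: map a scalar in [0, 1]
// through a color ramp. My actual ramp may differ; this is just the trick.
function falseColor(t) {
  t = Math.min(1, Math.max(0, t));
  const stops = [
    [0, 0, 255],   // cold: low values
    [0, 255, 0],
    [255, 0, 0],   // hot: high values
  ];
  const f = t * (stops.length - 1);
  const i = Math.min(stops.length - 2, Math.floor(f));
  const k = f - i;
  return stops[i].map((a, c) => Math.round(a + (stops[i + 1][c] - a) * k));
}
```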

I noticed that waiting for a render takes a while, and the page basically hangs, which is not pleasant. So I put the path tracer on meep's job system; the bar you see at the top of the image shows progress toward the fully rendered frame. Now the main thread isn't frozen anymore and I get a reasonable idea of how long to wait. All that for something that's essentially a one-off debug tool :woman_facepalming:
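
For the curious, the gist of the time-slicing looks something like this - this is not meep's actual job API, just the general shape of the idea:

```js
// The gist of the time-slicing (not meep's actual job API): shade pixels
// in small batches, yield back to the browser, repeat until done.
function renderProgressively(width, height, shadePixel, onProgress, budgetMs = 8) {
  const total = width * height;
  let cursor = 0;
  function step() {
    const start = performance.now();
    // Work until the time budget for this frame is spent.
    while (cursor < total && performance.now() - start < budgetMs) {
      shadePixel(cursor % width, (cursor / width) | 0);
      cursor++;
    }
    onProgress(cursor / total); // drives the progress bar
    if (cursor < total) requestAnimationFrame(step);
  }
  requestAnimationFrame(step);
}
```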

Is it slow as sin? - yes. Is it my baby? - also yes. Will it provide a good solution to light probe baking? - remains to be seen :mag:

Now I just need to implement support for BRDFs and lights. Just. Ha.
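
The obvious starting point there is Lambertian diffuse with cosine-weighted hemisphere sampling - essentially what smallpt does. A sketch of that sampling step, with the vector helpers inlined so it's self-contained:

```js
// Tiny vector helpers so the sketch is self-contained.
const cross = (a, b) => ({
  x: a.y * b.z - a.z * b.y,
  y: a.z * b.x - a.x * b.z,
  z: a.x * b.y - a.y * b.x,
});
const normalize = (v) => {
  const l = Math.hypot(v.x, v.y, v.z);
  return { x: v.x / l, y: v.y / l, z: v.z / l };
};

// Cosine-weighted hemisphere sample around `normal`. With this pdf
// (cos(theta) / pi), the Lambertian BRDF and the cosine term cancel, so
// the path throughput just gets multiplied by the surface albedo.
function sampleLambertian(normal) {
  const r1 = 2 * Math.PI * Math.random();
  const r2 = Math.random();
  const r2s = Math.sqrt(r2);
  // Orthonormal basis around the normal (same trick smallpt uses).
  const w = normal;
  const a = Math.abs(w.x) > 0.1 ? { x: 0, y: 1, z: 0 } : { x: 1, y: 0, z: 0 };
  const u = normalize(cross(a, w));
  const v = cross(w, u);
  // dir = u*cos(r1)*r2s + v*sin(r1)*r2s + w*sqrt(1 - r2)
  return normalize({
    x: u.x * Math.cos(r1) * r2s + v.x * Math.sin(r1) * r2s + w.x * Math.sqrt(1 - r2),
    y: u.y * Math.cos(r1) * r2s + v.y * Math.sin(r1) * r2s + w.y * Math.sqrt(1 - r2),
    z: u.z * Math.cos(r1) * r2s + v.z * Math.sin(r1) * r2s + w.z * Math.sqrt(1 - r2),
  });
}
```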

The idea is to be able to throw this inside a worker and bake probes progressively. Based on some speculative research, ~1000 rays should be sufficient to get decent-looking probes. Right now my tracer runs anywhere between 20k and 200k paths per second on moderately interesting toy scenes. At ~1000 paths per probe, that would let me do 20 to 200 light probes per second, essentially.
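
The worker part should be mostly plumbing. Something along these lines, where the file name, message shape, and probe positions are all placeholders:

```js
// main thread: hand probe positions to a baking worker and apply results
// as they stream back. File name and message shape are placeholders.
const baker = new Worker('probe-baker.js');
const probePositions = [[0, 1, 0], [4, 1, 0]]; // wherever probes live
baker.postMessage({ probes: probePositions, raysPerProbe: 1000 });
baker.onmessage = (event) => {
  const { probeIndex, sh } = event.data; // sh: 27 floats (9 coeffs x RGB)
  // feed the coefficients back into the scene's probe volume here
  console.log('probe', probeIndex, 'done', sh);
};
```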

To me that's okay. There's also the option of running this across multiple workers. For comparison, a single light probe bake using a cube camera and the THREE.js renderer takes whatever 6 renders of your scene take. If you have a game world that's optimized for 60 FPS rendering, that would be around 16 * 6 = 96 ms, assuming you're not fill-rate bound (it's actually less if you are fill-rate bound, since the cube faces are much smaller than the main viewport). In my limited experiments with a couple of simple meshes and no shadows, a probe was rendered in ~6 ms. So if I can do even 10 probes per second, the approach is already fast enough to compete with the cube-camera approach - and I believe I'll manage that.

For reference, here's a rendering of Lucy (a 100,000-polygon mesh):

It was rendered at ~15k paths per second, which is pretty slow, especially considering that the majority of the image is basically empty space. Yeah, there's a lot of room for improvement.