Volumetric lighting in WebGPU

A while ago I played around with atmospheric scattering in my WebGPU renderer.

The results were generally positive, but I didn’t like the traditional approach of marching rays through the atmosphere for every pixel. It works, but it’s a lot of calculations.

At the time, I read a paper by Sébastien Hillaire of Epic Games called “A Scalable and Production Ready Sky and Atmosphere Rendering Technique”.

There was a small blurb in there about creating a LUT for volumetrics. In the grand scheme of things it’s not the most important part of the sky model.

A few weeks ago I came back to it, with the goal of adding both the atmosphere and volumetric lighting to the engine.

What I mean by that are things like:

  • light shafts
  • volumetric fog
  • local light scattering events


Somewhat unsurprisingly, this was achieved years ago. The earliest reference I could find dates back to 2014: a SIGGRAPH presentation titled “Volumetric Fog: Unified compute shader based solution to atmospheric scattering” by Bartlomiej Wronski of Ubisoft.

The paper/presentation had basically the same goals as mine.

The funny thing to me is that I found this paper in reverse order. I read everything from the 2020s, then papers from 2017-19, then 2016, and only later did I get to this one.

I highly recommend the paper; it’s quite easy to read and has pretty much everything you need. The tech has changed surprisingly little since then.

The first thing I went for was light shafts (aka “God Rays”), and with a bit of effort, here’s what I got:




The basic idea is quite simple: you pre-integrate scattering and transmittance into a froxel texture (a 3D texture aligned to the view frustum in NDC space; “froxel” = FRustum vOXEL).

The tricky part for me was the physics, but after a bit of reading it turns out to be way simpler than it seems at first glance: it all comes down to scattering and transmittance.
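To make that a bit more concrete, here’s a heavily simplified WGSL sketch of what the integration pass does. The froxelWorldPos / sigma* / sunVisibility / phaseFunction stubs are placeholders for the engine-specific pieces, and the slices are spaced uniformly just to keep the sketch short:

```wgsl
// Froxel integration sketch: one thread per X/Y column of the 3D texture,
// marching front to back and storing accumulated in-scattering (rgb)
// and mean transmittance (a) into every slice.

@group(0) @binding(0) var froxelTex : texture_storage_3d<rgba16float, write>;

struct Params {
  sunDirection : vec3<f32>, // direction towards the sun
  sliceCount   : u32,
  sunRadiance  : vec3<f32>,
  maxDistance  : f32,       // far end of the froxel grid in metres
  cameraPos    : vec3<f32>,
  _pad         : f32,
};
@group(0) @binding(1) var<uniform> params : Params;

// --- Placeholders for the engine-specific pieces ----------------------------
fn froxelWorldPos(coord : vec3<u32>) -> vec3<f32> { return vec3<f32>(coord); }
fn sigmaScattering(pos : vec3<f32>) -> vec3<f32>  { return vec3<f32>(1e-2); }
fn sigmaExtinction(pos : vec3<f32>) -> vec3<f32>  { return vec3<f32>(1.2e-2); }
fn sunVisibility(pos : vec3<f32>) -> f32          { return 1.0; } // shadow map lookup
fn phaseFunction(cosTheta : f32) -> f32           { return 1.0 / (4.0 * 3.14159265); }
// -----------------------------------------------------------------------------

@compute @workgroup_size(8, 8, 1)
fn integrate(@builtin(global_invocation_id) id : vec3<u32>) {
  let dims = textureDimensions(froxelTex);
  if (id.x >= dims.x || id.y >= dims.y) { return; }

  var transmittance = vec3<f32>(1.0);
  var inscatter     = vec3<f32>(0.0);
  let dt = params.maxDistance / f32(params.sliceCount); // uniform slices for simplicity

  for (var slice = 0u; slice < params.sliceCount; slice++) {
    let coord   = vec3<u32>(id.xy, slice);
    let pos     = froxelWorldPos(coord);
    let viewDir = normalize(pos - params.cameraPos);

    let sigmaS = sigmaScattering(pos);
    let sigmaT = max(sigmaExtinction(pos), vec3<f32>(1e-5));

    // Light scattered towards the camera inside this segment.
    let light = params.sunRadiance * sunVisibility(pos)
              * phaseFunction(dot(viewDir, params.sunDirection)) * sigmaS;

    // Analytic integration over the segment (see Hillaire 2015), attenuated
    // by the transmittance of everything in front of it.
    let segT = exp(-sigmaT * dt);
    inscatter += transmittance * (light - light * segT) / sigmaT;
    transmittance *= segT;

    textureStore(froxelTex, vec3<i32>(coord),
                 vec4<f32>(inscatter, dot(transmittance, vec3<f32>(1.0 / 3.0))));
  }
}
```

Each slice ends up holding the in-scattered light accumulated up to that depth plus how much of the background still gets through, which is what makes it so cheap to apply later.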

For now I’ve put it on pause, as I’ve hit the point where denoising and the other unpleasant parts of turning theory into a production-ready technique become necessary. But I’m super excited about it in general. The whole thing takes under 1 ms to compute even on 10-year-old hardware, and during shading it’s basically free (1 texture lookup).
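That lookup is roughly this at shading time (sketch; depthToFroxelW is a placeholder for whatever depth-to-slice mapping the grid uses, a linear one here):

```wgsl
// Applying the froxel LUT during shading (sketch).
@group(1) @binding(0) var froxelTex     : texture_3d<f32>;
@group(1) @binding(1) var froxelSampler : sampler;

// Placeholder: maps view-space depth to the froxel W coordinate in [0, 1].
// In practice this must match whatever slice distribution the integration pass used.
fn depthToFroxelW(viewDepth : f32, maxDistance : f32) -> f32 {
  return clamp(viewDepth / maxDistance, 0.0, 1.0);
}

// uv is the fragment position in [0, 1], viewDepth its view-space depth.
fn applyVolumetrics(color : vec3<f32>, uv : vec2<f32>,
                    viewDepth : f32, maxDistance : f32) -> vec3<f32> {
  let w = depthToFroxelW(viewDepth, maxDistance);
  let v = textureSampleLevel(froxelTex, froxelSampler, vec3<f32>(uv, w), 0.0);
  // rgb = in-scattering accumulated up to this depth, a = transmittance.
  return color * v.a + v.rgb;
}
```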

Another interesting paper I would recommend:

  • “Physically Based and Unified Volumetric Rendering” by Sébastien Hillaire, from SIGGRAPH 2015, back when he was at EA

This is a recurring theme for me, both in graphics research and other technical areas:

Something doesn’t work and I don’t understand why, so I read a dozen papers, write a few prototypes, and suddenly it all makes sense.


Thanks for sharing!


Made a Demo


It’s quite dark, because the atmosphere density is cranked way up to exaggerate the scattering effects. It’s still physically based, just a different type of atmosphere: think something like Venus, or perhaps Jupiter. So transmittance is quite low, similar to a very foggy day, a snowstorm or a dust storm here on Earth.

I integrated better noise into the sampling part, so there are fewer aliasing artifacts. The integration part is still a bit noisy though; I haven’t touched that yet.
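Schematically, the jitter is just a per-pixel offset of where each froxel takes its sample along the ray; interleaved gradient noise is one cheap option for that (illustrative sketch, not necessarily the exact noise I ended up with):

```wgsl
// Cheap per-pixel noise (interleaved gradient noise, Jimenez 2014).
fn interleavedGradientNoise(pixel : vec2<f32>) -> f32 {
  return fract(52.9829189 * fract(dot(pixel, vec2<f32>(0.06711056, 0.00583715))));
}

// Offset the sample position inside each slice so neighbouring froxels don't
// all sample at the same depth; this trades visible banding for noise.
fn jitteredSliceDepth(slice : u32, sliceCount : u32,
                      maxDistance : f32, pixel : vec2<f32>) -> f32 {
  let jitter = interleavedGradientNoise(pixel); // 0..1 within the slice
  return (f32(slice) + jitter) / f32(sliceCount) * maxDistance;
}
```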

What we have here:

  • Directional light integration
  • Mie scattering + absorption
  • Rayleigh scattering (practically no absorption in the physical model)
  • Ozone absorption (no scattering); these three combine into the medium coefficients as sketched below
  • Multiple scattering integration for the sun (see Hillaire 2020)
  • Visibility taken into account (shadow maps)
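For reference, the Rayleigh/Mie/ozone terms above get combined into a single pair of scattering/extinction coefficients per sample point. The sketch below uses approximately the standard sea-level Earth values from Hillaire 2020 (per metre); the demo scales the density way up, so treat the numbers purely as an illustration:

```wgsl
// Composing the participating-medium coefficients (sketch, Earth-like values).

const RAYLEIGH_SCATTERING : vec3<f32> = vec3<f32>(5.802e-6, 13.558e-6, 33.1e-6);
const MIE_SCATTERING      : f32       = 3.996e-6;
const MIE_ABSORPTION      : f32       = 0.444e-6;
const OZONE_ABSORPTION    : vec3<f32> = vec3<f32>(0.650e-6, 1.881e-6, 0.085e-6);

struct Medium {
  scattering : vec3<f32>, // sigma_s: Rayleigh + Mie
  extinction : vec3<f32>, // sigma_t: sigma_s + all absorption
};

fn sampleAtmosphere(heightMeters : f32) -> Medium {
  // Density falloff with altitude: exponential for Rayleigh and Mie,
  // a tent function around ~25 km for the ozone layer.
  let rayleighDensity = exp(-heightMeters / 8000.0);
  let mieDensity      = exp(-heightMeters / 1200.0);
  let ozoneDensity    = max(0.0, 1.0 - abs(heightMeters - 25000.0) / 15000.0);

  let scattering = RAYLEIGH_SCATTERING * rayleighDensity
                 + vec3<f32>(MIE_SCATTERING) * mieDensity;
  // Rayleigh has essentially no absorption, ozone has no scattering,
  // Mie has a bit of both.
  let absorption = vec3<f32>(MIE_ABSORPTION) * mieDensity
                 + OZONE_ABSORPTION * ozoneDensity;
  return Medium(scattering, scattering + absorption);
}
```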

As a separate point: I noticed that three.js recently merged volumetric lighting, with an example.

The effect is done as a post-process as far as I can tell; you can see that from the aliasing of the effect along the edges.


The problem with this approach is threefold:

  1. Raymarching is expensive, and you’re doing it for every pixel of your post-process pass. In this case it runs at 1/4 of the original resolution, so for a 1024x1024 frame your pass resolution would be 256x256, and you have to march every one of those pixels for 12 steps each.

    That’s 256 x 256 x 12 = 786,432 evaluations in total. Taking only 12 steps also means that your Z resolution is going to be incredibly low.
  2. Aliasing. The resolution mismatch means you have to upscale the result somehow. The most basic way would be to just stretch the image, which is basically what happens here. You end up with some jaggies and there’s noise in the image. So let’s slap some blur on it:

    Looks pretty, even if it destroys the edges.
  3. Bandwidth. You have to do a separate compositing pass. This is a standard cost of post-processing passes, and the resolution is relatively low, but the cost is still there.

The reason the AAA industry doesn’t use this approach is pretty much those first two points. The modern approach is to build a 3D lookup table separately, and then just sample it during the draw.

The demo I posted used a 128x128x64 froxel grid, which amounts to slightly more samples: 1,048,576, about 33% more to be exact. But we’re getting over 5x the resolution along the Z axis (64 slices vs 12 steps).

Using a 3D texture you don’t have to worry about edges either; here’s an example:




By contrast, even if you crank up the post-process resolution to 1/2 of the original (which is already 4x the pixels) and push the blur radius to the max on the slider, you still have aliasing:


When I say “aliasing”, I mean that the edges of the “fog” do not align with the edges of the geometry; you can see that the silhouette has been completely destroyed.

It might sound harsh, and it is, but I’m still blown away by the work that @sunag and @Mugen87 did here. Very impressive, despite the limitations. I especially like the density injection part via a TSL function; very elegant.

I’m guessing that volumetrics here are a bit of a stretch: the system was designed for post-processing volumes, like blurring parts of an image or applying toon shaders, so the architecture wasn’t designed specifically for volumetric lighting.


Added support for local lights.
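Conceptually, each local light just adds one more term per froxel: the light’s radiance at that point (with inverse-square falloff and whatever visibility/shadow data it has), times the phase function for the angle between the light direction and the view direction, times the scattering coefficient. A rough sketch, with the PointLight layout and visibility lookup as placeholders:

```wgsl
// Local (point) light contribution to the per-froxel in-scattering (sketch).

struct PointLight {
  position : vec3<f32>,
  range    : f32,
  color    : vec3<f32>, // radiant intensity
  _pad     : f32,
};
@group(0) @binding(2) var<storage, read> lights : array<PointLight>;

// Placeholders: per-light shadow/visibility term and a phase function
// (isotropic here; HG / Cornette-Shanks versions are sketched further down).
fn lightVisibility(lightIndex : u32, pos : vec3<f32>) -> f32 { return 1.0; }
fn phase(cosTheta : f32) -> f32 { return 1.0 / (4.0 * 3.14159265); }

fn localLightInscattering(pos : vec3<f32>, viewDir : vec3<f32>,
                          sigmaS : vec3<f32>) -> vec3<f32> {
  var result = vec3<f32>(0.0);
  for (var i = 0u; i < arrayLength(&lights); i++) {
    let toLight = lights[i].position - pos;
    let dist    = length(toLight);
    if (dist > lights[i].range) { continue; }

    let l       = toLight / dist;
    let falloff = 1.0 / max(dist * dist, 1e-2); // inverse-square attenuation
    result += lights[i].color * falloff
            * phase(dot(viewDir, l))
            * lightVisibility(i, pos)
            * sigmaS;
  }
  return result;
}
```

This term simply gets added to the sun’s contribution before the transmittance attenuation in the integration loop.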

Also played around with different phase functions for Mie scattering.

If anyone is interested, there’s a paper by NVIDIA from 2023:

It offers a very different parametrization: instead of the anisotropy parameter G, it uses a physical particle-size parameter, which I thought was quite neat. It also appears to be a much better fit to the ground-truth Mie phase function shape; the authors boast a 95% fit.

HG seems to be the standard (Henyey and Greenstein, from 1941), as it’s relatively simple and it’s all over the existing code bases.

I discovered the Cornette-Shanks approximation (CS) a while ago, which offers a better fit than HG in a way, as it provides a stronger back-scattering component.
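For reference, the two classic ones look like this (normalized so they integrate to 1 over the sphere; watch the sign convention of cosTheta, which differs between references):

```wgsl
// Henyey-Greenstein and Cornette-Shanks phase functions.
// g in (-1, 1): 0 = isotropic, positive = forward scattering, negative = backward.
// cosTheta = cosine of the angle between the view direction and the light direction.

const PI : f32 = 3.14159265358979;

fn phaseHG(cosTheta : f32, g : f32) -> f32 {
  let g2    = g * g;
  let denom = 1.0 + g2 - 2.0 * g * cosTheta;
  return (1.0 - g2) / (4.0 * PI * denom * sqrt(denom));
}

fn phaseCornetteShanks(cosTheta : f32, g : f32) -> f32 {
  let g2    = g * g;
  let denom = 1.0 + g2 - 2.0 * g * cosTheta;
  return 3.0 * (1.0 - g2) * (1.0 + cosTheta * cosTheta)
       / (8.0 * PI * (2.0 + g2) * denom * sqrt(denom));
}
```

The HG+D fit from the NVIDIA paper plugs into the same slot; I’m not reproducing their fitted coefficients here.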

Here’s a plot from the NVIDIA paper to show what I mean

The HG+D is the function from that paper. The “Mie” is a plot from the simulator (ground truth).

I found that if your medium is largely homogeneous and there is little density variation, it’s hard to tell much of a difference between these functions. But still, maybe someone will find this useful.

Here is what I got with local lights (all light types supported, that is):


and without the volumetrics





One more thing on the post-processing approach versus the 3D texture (froxels).

The post-processing approach doesn’t support transparencies. Here’s a shot with a glowing crystal.

There is a large spherical light source in the crystal

And the crystal itself is barely transparent; you can see through it a little.

Here it is close up to prove the point:

You can see a bit of the background through it

Why is this important? Post-processing does not support this: you’re running a post-process on top of everything, so transparencies can’t participate; they don’t write depth, so there is no ray length to integrate to.

You could make it work by running a separate post-process for every triangle, but that’s not a realistic option.


demo seems to be broken

It probably takes a while to load. The Sponza scene is about 80 MB, and the server I host it on is a bit slow as well :sweat_smile:

Anything in the console by any chance?
