B"H

I’m looking to use WebGPU to create super realistic sun rays like this^.

There is this question about realistic lighting, but it predates (and doesn’t consider) the advantages of the WebGPU API.

The same goes for this question; it’s not asking about WebGPU-specific solutions.

Nowadays, with WebGPU, one can do all of the same things that standard desktop applications can do.

I am aware of the G-drays three.js example, but it’s nowhere near the desired effect, and I want to use WebGPU specifically. I couldn’t find any examples to work with.

How can I get started with this?

I am able to make a WebGPU scene in general from the other examples on the three.js examples page, but I’m not able to figure out how to even start on that kind of sun rays.

I was considering using particles and some kind of volumetric light, but I have no idea how to start on this, or whether that’s even the correct approach.

I would do this with (postprocessing, depthTexture, raymarching).

I create a realistic atmosphere and clouds in a similar way. Volumetric light, an atmosphere, and clouds all have no surface but scatter light off the particles in their volume. But you don’t use particle systems for that!

The mechanism is the same for all three. You use the depthTexture to reconstruct the world coordinates in postprocessing. With these you have, for each fragment (pixel), the ray from the camera. This ray is divided into discrete steps. The step size is a free parameter, although many steps (i.e. small step sizes) are more computationally intensive. At each division point you then create a secondary ray toward the light source. These secondary rays, whose number depends on how you divided the camera ray, are also divided into discrete steps, and their count is another free parameter.
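The depth-to-world reconstruction can be sketched in plain JavaScript. This is only an illustration of the math (the function name is mine, and the matrix is assumed to be column-major like three.js `Matrix4.elements`); in practice you would do this in the postprocessing shader:

```javascript
// Reconstruct a world-space position from a depthTexture sample.
// uv: [0,1] screen coordinates; depth: the sampled depth value;
// invViewProj: inverse of (projectionMatrix * viewMatrix),
// column-major like three.js Matrix4.elements.
function worldPosFromDepth(uv, depth, invViewProj) {
  // Screen/depth values mapped to normalized device coordinates in [-1, 1]
  const ndc = [uv[0] * 2 - 1, uv[1] * 2 - 1, depth * 2 - 1, 1];
  const e = invViewProj;
  // 4x4 column-major matrix * vector
  const clip = [
    e[0] * ndc[0] + e[4] * ndc[1] + e[8]  * ndc[2] + e[12] * ndc[3],
    e[1] * ndc[0] + e[5] * ndc[1] + e[9]  * ndc[2] + e[13] * ndc[3],
    e[2] * ndc[0] + e[6] * ndc[1] + e[10] * ndc[2] + e[14] * ndc[3],
    e[3] * ndc[0] + e[7] * ndc[1] + e[11] * ndc[2] + e[15] * ndc[3],
  ];
  // Perspective divide gives the world-space position
  return [clip[0] / clip[3], clip[1] / clip[3], clip[2] / clip[3]];
}
```

Subtracting the camera position from this reconstructed point gives you the per-pixel camera ray that gets subdivided.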

So if you divide your camera ray into 10 steps, you get 9 interior points. If from these 9 points you cast 9 secondary rays, which you usually subdivide somewhat more coarsely to limit the computation, e.g. into 8 steps (7 points each), that makes 9 × 7 = 63 points in total. At each of these points you evaluate your light-scattering model (atmosphere, clouds, fog, smoke, …) and accumulate the scattered light of all points back to the camera.
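The two nested subdivisions can be sketched as a plain JavaScript loop. The `density` callback and the Beer–Lambert extinction stand in for whatever scattering model you actually use, and the secondary step length is fixed at 1 unit purely for illustration:

```javascript
// March from the camera toward the reconstructed surface point; at each
// primary sample, march a secondary ray toward the light to estimate how
// much light survives the medium on the way in (Beer-Lambert extinction).
// `density(point)` is a placeholder for a real model (atmosphere, cloud, fog).
function marchScattering(camPos, worldPos, lightDir, density,
                         primarySteps = 10, secondarySteps = 8) {
  const dir = worldPos.map((v, i) => v - camPos[i]);
  let accumulated = 0;
  for (let i = 1; i < primarySteps; i++) {        // 9 interior points
    const p = camPos.map((v, k) => v + dir[k] * (i / primarySteps));
    let transmittance = 1;
    for (let j = 1; j < secondarySteps; j++) {    // 7 points toward the light
      const q = p.map((v, k) => v + lightDir[k] * j); // unit step, illustrative
      transmittance *= Math.exp(-density(q));     // extinction along the ray
    }
    accumulated += transmittance * density(p);    // in-scattering at p
  }
  return accumulated / primarySteps;
}
```

With the default parameters this evaluates exactly the 9 × 7 = 63 secondary sample points described above, per pixel, which is where the cost comes from.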

Now you can also see why raymarching/scattering is so computationally intensive: all of this has to be done for every pixel. But the effects are worth it.

To describe this in detail would be a longer story.

This is how I create this atmospheric effect:

There is more atmosphere in the canyons because the air density is higher there, and toward the horizon the sky is redder because the path through the atmosphere is longer and red light gets through to the camera better, since blue light is scattered more strongly. It is precisely this strong scattering of blue light that makes the sky blue, makes the white sun look yellow to us, and makes the horizon red in the morning and evening.
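The wavelength dependence behind this is the Rayleigh 1/λ⁴ law. A tiny numeric illustration (the wavelengths and the normalization to blue are my choices for the example):

```javascript
// Rayleigh scattering strength scales as 1/lambda^4, so short (blue)
// wavelengths scatter far more strongly than long (red) ones.
// Wavelengths in nanometers; normalized so blue (450 nm) = 1.
const rayleighStrength = (lambdaNm) => Math.pow(450 / lambdaNm, 4);

const blue = rayleighStrength(450); // 1 by construction
const red  = rayleighStrength(650);
// blue / red = (650/450)^4, roughly 4.35: blue scatters about 4x more,
// which is why long horizon paths leave mostly red light for the camera.
```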

I create clouds in a similar way, only with a physical cloud model.

So that’s the theory. The better you simulate the physics behind it, the better you can create such effects. I tend to handle things like this very physically because I’m a physicist. There may be easier ways, but I admit I don’t know them; maybe others here know a simpler way.

I hope that I was able to give you a little more clarity with the basic description.