Light probes (r104)

In version 104 a LightProbe class was introduced. I was curious and searched a bit about the topic. If I understand correctly, light probes can be used to extract light information from a scene and use it “offline”, like a lightmap. It’s not in the documentation yet, so I inspected the source directly and found that the LightProbe class has a toJSON function, which leads me to think that you can export your lighting information to a JSON file and use it offline (meaning without the lights). Is that correct?
An official LightProbe example is available, but the toJSON function is never used. Instead, the example uses copy, like the following:

lightProbe.copy( THREE.LightProbeGenerator.fromCubeTexture( cubeTexture ) )

My guess is that the probe is storing light information from the cube texture… but what about the directional light? Is it storing that light information too?
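
For context, here’s roughly the round trip I’m imagining (just a sketch; I’m assuming the serialized SH array ends up at json.object.sh and can be restored with SphericalHarmonics3.fromArray):

// Bake the cube texture into a probe, then serialize it.
const lightProbe = new THREE.LightProbe();
lightProbe.copy( THREE.LightProbeGenerator.fromCubeTexture( cubeTexture ) );
const json = lightProbe.toJSON();
// json.object.sh should now hold 27 floats (9 RGB SH coefficients)

// Later, “offline”: rebuild the probe without any lights or textures.
const sh = new THREE.SphericalHarmonics3().fromArray( json.object.sh );
scene.add( new THREE.LightProbe( sh, json.object.intensity ) );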

Could someone explain a bit about light probes in general and on this example in particular and which could be the use cases in three.js?

Thanks in advance (ツ)_.\m/

2 Likes

Once the API settles down a bit, we really need to write some documentation for the lighting workflow. Not just for lightprobes, but for all of the lighting methods available.

Currently we have:

Indirect lighting

  • AmbientLight
  • HemisphereLight

Direct Lighting

These are required if you want to cast dynamic shadows, although shadows haven’t been implemented for RectAreaLight yet. (A minimal code sketch combining indirect and direct lights follows these lists.)

  • SpotLight
  • DirectionalLight
  • PointLight
  • RectAreaLight

Image Based Lighting

  • Lightmaps and ambient occlusion maps (these require a second set of UVs). IMO the documentation and examples for lightmaps in particular are very much lacking at the moment. Lightmaps are potentially very powerful; for example, Unreal uses them almost exclusively for lighting static objects. They cannot generally be used for moving objects, though, unless you are generating them on the fly.
  • Environment maps (MeshStandardMaterial and cube textures only, I think). Other material types support environment maps as “reflection maps”, not as a source of light.
  • PMREM (WebGL-based Prefiltered, Mipmapped Radiance Environment Map): basically an enhanced version of environment maps. I won’t get into the details here, but using these with an HDR environment map currently gives the best results for static objects, I think. See this Clara.io example, and see this issue on GitHub for some explanation. Just as with lightmaps, the documentation here is lacking a lot, as is the tooling; in particular, we need an HDR -> RGBM16 converter.
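
For reference, here’s a minimal sketch combining the indirect and direct categories above (all names and values are arbitrary):

// Indirect: a soft sky/ground fill, no dynamic shadows.
const hemiLight = new THREE.HemisphereLight( 0xbbddff, 0x444422, 0.6 );
scene.add( hemiLight );

// Direct: the only category that can cast dynamic shadows.
const dirLight = new THREE.DirectionalLight( 0xffffff, 1.0 );
dirLight.position.set( 5, 10, 2 );
dirLight.castShadow = true;
scene.add( dirLight );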

Actually, all lighting is either “direct” or “indirect”, but I’m not sure which category these IBL techniques fall into.

NEW !! Lightprobes :rocket:

The API is probably going to change a lot over the next couple of months so there’s not much point in writing documentation yet. But I guess these will fall into the Indirect Lighting category?

3 Likes

In their current form (r104) light probes may be used as very efficient sources of indirect light, which can vary by direction more than AmbientLight or HemisphereLight. This comparison, by WestLangley, is a good example:

AmbientLight:

[image: scene lit with AmbientLight]

LightProbe:

[image: same scene lit with a LightProbe]

To create the spherical harmonics, you would take an environment map that produces the lighting you want and ‘bake’ it to a more efficient SH representation (just 27 floats!). In doing so, all reflectance information is lost, so this is mostly useful for non-metallic objects. For metallic objects you still need an environment map, and the light probe benefits you less.
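
Concretely, the baking step looks something like this (a sketch, assuming cubeTexture is an already-loaded cube texture):

// Project the environment map onto third-order spherical harmonics.
const probe = THREE.LightProbeGenerator.fromCubeTexture( cubeTexture );

// 9 coefficients × RGB = 27 floats; all higher-frequency (specular)
// detail in the environment is discarded by this projection.
console.log( probe.sh.toArray().length ); // 27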

As @looeee says, this API is going to evolve with future releases. I expect that something similar to what I describe above will still work, but you’ll also be able to use multiple light probes scattered throughout a scene to get nice lighting effects on moving objects. See https://github.com/mrdoob/three.js/issues/16228 for background.

4 Likes

That example looks so good! It almost looks like it’s using an AOMap.

Hmm… so an alternative to the PMREM workflow would be to load up an environment map (preferably HDR) and use it to generate a LightProbe. Then use the environment map on metallic objects and the LightProbe on non-metallic objects. Is that right?

How does that compare performance/quality wise with PMREM? And can LightProbes be used with moving objects easily?

this API is going to evolve with future releases. I expect that something similar to what I describe above will still work, but you’ll also be able to use multiple light probes scattered throughout a scene to get nice lighting effects on moving objects.

I guess this is where the real strength of light probes comes into play, making them great for high-quality lighting on moving objects, for example when moving from outdoors to indoors in a game level.

However, one of the main uses of three.js is displaying static objects that need to be extremely high quality, for example photo-realistic product displays. I’m curious whether LightProbes will be useful here, or whether we’ll still be better off using PMREM.

For some use cases (mobile, AR/VR, games) you might not load an environment map or use PMREM at runtime at all. You’d load them once in development to compute SH, or use a tool like cmgen. Then you can literally paste the resulting 27-float array into your code, and avoid all the overhead of loading the HDR environment map and PMREM in your deployed application.
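
Something like this (a sketch; the coefficient values below are placeholders, not real baked data):

// 9 RGB spherical harmonics coefficients (27 floats), computed once
// offline with LightProbeGenerator or cmgen and pasted in.
const shCoefficients = [
   0.80,  0.78,  0.92, // L00
   0.21,  0.20,  0.31, // L1-1
  -0.02, -0.01,  0.03, // L10
   0.05,  0.04,  0.06, // L11
   0.01,  0.01,  0.02, // L2-2
  -0.06, -0.05, -0.04, // L2-1
   0.00,  0.00,  0.01, // L20
   0.03,  0.02,  0.02, // L21
  -0.01, -0.01,  0.00  // L22
];

const sh = new THREE.SphericalHarmonics3().fromArray( shCoefficients );
scene.add( new THREE.LightProbe( sh ) );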

In other cases like a model viewer, you want both – SH for diffuse indirect, and a cubemap for specular indirect lighting. See https://computergraphics.stackexchange.com/questions/233/how-is-a-light-probe-different-than-an-environmental-cube-map.
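
In code, that combination is roughly (a sketch, assuming both assets are already loaded):

// The SH probe supplies diffuse indirect light for the whole scene.
scene.add( lightProbe );

// The cubemap supplies specular indirect light (reflections) per material.
const material = new THREE.MeshStandardMaterial( {
  metalness: 1.0,
  roughness: 0.1,
  envMap: cubeTexture
} );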

Then on metallic objects, use the environment map, and non-metallic objects, use the LightProbe. Is that right?

I don’t think it’s this simple, but I don’t know the answer. :sweat_smile:

And can LightProbes be used with moving objects easily?

They are very good for this, with the caveat that it requires multiple light probes, and we still need to figure out the workflow for baking all of them. You could reasonably scatter 100–1000 probes throughout a scene, bake them, and get subtle effects where an object moving close to a red brick wall is tinted red on that side, as though light were bouncing off it.

However, one of the main uses of three.js is displaying static objects that need to be extremely high quality, for example photo-realistic product displays. I’m curious whether LightProbes will be useful here, or whether we’ll still be better off using PMREM.

From An Efficient Representation for Irradiance Environment Maps, SH can get you very, very close for the diffuse indirect light:

…the average error over all surface orientations can be shown to be under 3% for any physical input lighting distribution

IMO, that error is small enough to be negligible even for very high-quality rendering. However, I believe you still want cubemaps and PMREM for specular indirect lighting for best results. Note that there is some discussion of creating another type of probe, a ReflectionProbe, to help with the latter.

2 Likes

Thank you guys for the useful info. If I understand correctly, you can use light probes only to store indirect light information (ambient, hemisphere) and light from the environment (cube texture). You can’t use them to bake light data from point lights, for example, right?

Some other useful info about probe lighting here: https://unity3d.com/learn/tutorials/topics/graphics/probe-lighting

1 Like

Technically that’s possible with light probes, but the math is a bit complicated… maybe someone will implement it later. I’m also not totally sure what the advantages would be to that, but I have heard of it being done. :thinking:

I wonder if this would work with a high-polish plastic material. You know, something that’s not metal, but highly reflective…

Well, I was thinking that since probes can store light information in space, they could also be used as a lightweight lighting system for non-static objects. If I understand correctly, the probes divide space into tetrahedral regions, and if an object is in a specific region it will receive light information from the probes in that region in the form of coefficients. So if probes could also store PointLight data, they could fake dynamic light… It’s just a conjecture, correct me if I’m wrong.
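
Something like this hypothetical two-probe blend is what I have in mind (made-up names, just a sketch; a real system would interpolate across the four corners of a tetrahedron):

// Blend the SH of the two nearest probes by the object's position.
const blended = new THREE.LightProbe();
scene.add( blended );

function updateBlendedProbe( object, probeA, probeB, posA, posB ) {
  const t = THREE.Math.clamp(
    object.position.distanceTo( posA ) / posA.distanceTo( posB ), 0, 1 );
  blended.sh.copy( probeA.sh ).lerp( probeB.sh, t );
}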

1 Like

Hey guys, it’s been almost a year now. Any chance for LightProbe documentation? :innocent:

@DolphinIQ Do you mind asking this question on GitHub? Maybe the OP stopped working on it.

1 Like