When I render the scene, should the lights still be in it?
Or am I getting “double” lighting?
const rt = new THREE.WebGLCubeRenderTarget(512);
const cubeCamera = new THREE.CubeCamera(1, 20000, rt);
scene.add(cubeCamera);

const pmremGen = new THREE.PMREMGenerator(renderer);
scene.environment = rt.texture;

let frame = 0;

function render() {
  // ...
  if (frame % 360 === 0) {
    // re-capture the environment from the camera's position
    cubeCamera.position.copy(cameraPosition);
    cubeCamera.update(renderer, scene);
    // prefilter the capture for PBR roughness lookups
    scene.environment = pmremGen.fromCubemap(rt.texture).texture;
  }
  frame++;
  renderer.render(scene, camera);
}
I use CubeCamera with CubeRenderTarget for the env map. Then render the full scene.
The scene is a large, natural environment in an open-world game with a day/night cycle, using a DirectionalLight, a HemisphereLight, PBR materials, and Cascaded Shadow Maps (CSM).
I've read that IBL is much better than regular lighting, so I'm moving to all-PBR materials and an environment map.
If you have general advice on configuring this, I'd appreciate it.
Setting these up is not straightforward in a dynamic environment, even after hours of reading:
light.intensity
hemiLight.intensity
scene.environmentIntensity
toneMappingExposure
For example, which factor should change when the sun sets?
Do I need to set any .mapping property on the cube texture?
Many unknowns remain.
Short answer, you probably want both… but less is more.
IBL has much more “texture” to it, since it’s a capture of real light (an image, with a wide intensity range when using HDR environment maps) rather than a mathematical single-color point, directional, or spot light, and the coloring varies across the reflecting surface, which makes materials appear much more realistic.
But some things in your scene may be dynamic and also emit light.
And you may want some control over the look of light in different places, so for these situations you can add other lights… for instance torches, and glowing objects.
But keep in mind that lights incur a cost over all the pixels they touch, and shadow-casting lights incur an extra scene render per shadow-casting light, so these should be used as sparingly as possible.

So: you can use image-based light to cast the basic light and give character to the scene, perhaps a dimmer shadow-casting directional or spot light to simulate the sun and provide some dynamic shadows, and then perhaps a few non-shadow-casting point lights to highlight certain areas. Some applications manage a “pool” of dynamic lights that fade in and out as the user moves through the scene. (Even more advanced setups use different environment maps in different locations; these are generally referred to as local light probes.)
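As a rough sketch of that kind of light budget, layered on top of the IBL environment (the positions, colors, and intensity values below are made-up placeholders, not from your setup):

// Assumes scene.environment is already set (e.g. the CubeCamera / PMREM capture above).

// One fairly dim, shadow-casting "sun"; the IBL carries most of the ambient look.
const sun = new THREE.DirectionalLight(0xfff2e0, 1.0);
sun.position.set(100, 200, 50);
sun.castShadow = true;
scene.add(sun);
scene.add(sun.target);

// A few cheap, non-shadow-casting accents (torches, glowing props, ...).
const torch = new THREE.PointLight(0xffa040, 2.0, 10); // a short distance keeps the cost local
torch.position.set(4, 2, -3);
scene.add(torch);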
There are both local and global settings for how much the environment IBL contributes to the scene.
scene.environmentIntensity and material.envMapIntensity… so you can use those to dial back the IBL and give yourself some headroom for the additional lights you add.
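For example (the values are arbitrary, just for illustration; scene.environmentIntensity needs a recent three.js release):

// Global knob: scale how much the IBL contributes to the whole scene.
scene.environmentIntensity = 0.6;

// Per-material knob: pull the IBL back further on specific surfaces.
// "mesh" is a placeholder for any object with a MeshStandardMaterial.
mesh.material.envMapIntensity = 0.4;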
There are no exact rules for how to set this up; it’s very situational, and deciding how and when to use them is somewhat of an art. You’re balancing performance, realism, and aesthetics to make something that looks good.
Definitely keep an eye on how performance changes when adding lights / changing their effect radii etc., and test on slower/older hardware… because something that runs fine on an NVIDIA desktop GPU may run terribly on integrated chipsets. Also, changing the lighting environment too dynamically can trigger shader recompilations, which cause stuttering.
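One way to sidestep those recompilations is a sketch of the “pool” idea mentioned above: keep a fixed number of lights in the scene and fade their intensities instead of adding/removing lights, since changing the light count forces new shader programs. The emitter objects and distances here are invented for the example:

// Fixed-size pool: the light count never changes, so materials keep their compiled shaders.
const POOL_SIZE = 4;
const lightPool = [];
for (let i = 0; i < POOL_SIZE; i++) {
  const light = new THREE.PointLight(0xffffff, 0, 8); // intensity 0 = effectively "off"
  scene.add(light);
  lightPool.push(light);
}

// Each frame, assign the pool to the nearest emitters and fade intensity with distance.
// "emitters" is a hypothetical array of { position, color, strength } describing torches etc.
function updateLightPool(emitters, playerPosition) {
  const nearest = emitters
    .slice()
    .sort((a, b) => a.position.distanceTo(playerPosition) - b.position.distanceTo(playerPosition))
    .slice(0, POOL_SIZE);

  lightPool.forEach((light, i) => {
    const emitter = nearest[i];
    if (!emitter) {
      light.intensity = 0;
      return;
    }
    light.position.copy(emitter.position);
    light.color.set(emitter.color);
    const fade = 1 - emitter.position.distanceTo(playerPosition) / 30; // fade out past ~30 units
    light.intensity = THREE.MathUtils.clamp(fade, 0, 1) * emitter.strength;
  });
}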
Make sure that shadow-casting lights are kept to an absolute minimum, and set up their shadow maps to use the smallest sizes and coverage areas that still produce the shadows you need. If you’re using a 4k shadow map, you’re in dangerous territory.
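If you’re managing the sun’s shadow yourself (rather than letting a CSM helper do it), that tuning looks roughly like this, using the hypothetical sun light from the earlier sketch; the sizes and bounds are placeholders:

// Keep the shadow map small and the shadow camera tight around the area that needs shadows.
sun.shadow.mapSize.set(1024, 1024); // prefer 512/1024 over 2k/4k where you can get away with it
sun.shadow.camera.left = -50;
sun.shadow.camera.right = 50;
sun.shadow.camera.top = 50;
sun.shadow.camera.bottom = -50;
sun.shadow.camera.near = 1;
sun.shadow.camera.far = 200;
sun.shadow.camera.updateProjectionMatrix();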
So test a lot!
all good advice! thank you
the challenge for me is figuring out if I’m getting a ‘double’ lighting effect by having my lights in the render of the env map (every 30 frames), and also keeping the lights in the main render of the scene.
If I remove the lights from the second render call, I’d lose cast shadows. And I can’t just set intensity to 0 on the second render for the same reason.
The latest versions of three.js use physically based lighting methods to replicate the real-world properties of light. Using such a renderer effectively opens the developer up to creatively understanding and applying the laws of real-world optics (as closely as they can currently be matched digitally in a renderer such as three.js).
The answer to this would be yes: if there are two light sources, they will overlap and add to each other in terms of intensity. However, in terms of real-world optical effects and environmental “vibe”, this is typically a go-to setup…
Although it comes down to the creative balance of lighting within a scene, many lights can be used, and it can be somewhat confusing to use both a directional light for shadows and environmental lighting for visual enhancement on top of that… Again, it’s really down to the “scientifically” creative developer to embrace this pipeline for now… (until environments can cast shadows, or something of that nature arrives).
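As one illustrative way to keep that balance through a day/night cycle (the curves and values are made up, and sun, hemiLight, and the factor computation are assumed to exist elsewhere), both the direct light and the IBL can be faded together so their combined intensity stays plausible:

// sunFactor: 1.0 at midday, 0.0 after sunset (computed by the game's day/night cycle).
function applyDayNight(sunFactor) {
  sun.intensity = 2.0 * sunFactor;                      // direct light and its shadows fade out
  hemiLight.intensity = 0.3 + 0.4 * sunFactor;          // sky/ground fill dims but stays present
  scene.environmentIntensity = 0.2 + 0.8 * sunFactor;   // IBL dims toward a faint night ambience
}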
This thread on optimising point lights might be useful.