Out of curiosity, I wanted to create an example of objects affected by lights.
Here it is: https://jsfiddle.net/prisoner849/2w0stdfp/
There are two icosahedrons and two lights in the scene. One icosahedron and one light belong to layer 0; the other icosahedron and light belong to layer 1.
The camera is enabled to “see” both layers with camera.layers.enable(1).
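For reference, here is a minimal sketch of the setup described above (variable names such as mesh0 and light1 are my own placeholders; the actual fiddle may differ in details):

```js
import * as THREE from 'three';

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(60, innerWidth / innerHeight, 0.1, 100);
camera.position.set(0, 0, 8);
camera.layers.enable(1); // camera now sees layer 0 (the default) and layer 1

const renderer = new THREE.WebGLRenderer();
renderer.setSize(innerWidth, innerHeight);
document.body.appendChild(renderer.domElement);

// icosahedron + light on layer 0 (the default layer)
const mesh0 = new THREE.Mesh(
  new THREE.IcosahedronGeometry(1),
  new THREE.MeshStandardMaterial({ color: 0xffffff })
);
mesh0.position.x = -2;
scene.add(mesh0);

const light0 = new THREE.PointLight(0xff0000);
light0.position.set(-2, 2, 2);
scene.add(light0); // light0 stays on layer 0

// icosahedron + light on layer 1
const mesh1 = new THREE.Mesh(
  mesh0.geometry,
  new THREE.MeshStandardMaterial({ color: 0xffffff })
);
mesh1.position.x = 2;
mesh1.layers.set(1);
scene.add(mesh1);

const light1 = new THREE.PointLight(0x00ff00);
light1.position.set(2, 2, 2);
light1.layers.set(1);
scene.add(light1);

renderer.render(scene, camera);
```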
The expected result is to have two icosahedrons, each of them affected only by the light on its own layer.
The actual result is quite confusing: both icosahedrons are affected by both lights.
It looks like I’m missing something simple and basic, or misunderstanding the concept of layers.
Any hints, advice or suggestions?
The layer setting of a light only determines whether it will be in the scene or not. If a light is in a scene, it will illuminate all objects in that scene; it doesn’t matter which layer the object or the light is on.
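A quick illustration of that rule (hypothetical light/mesh/camera variables):

```js
light.layers.set(1); // light lives on layer 1
mesh.layers.set(0);  // mesh lives on layer 0

camera.layers.enable(1);
renderer.render(scene, camera);
// The light passes the camera's layer test, so it is collected into the
// scene-wide light state and illuminates the mesh despite the layer mismatch.

camera.layers.disable(1);
renderer.render(scene, camera);
// Now the light fails the test and is excluded from the render entirely;
// the mesh receives no illumination from it.
```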
Layer testing takes place here:
Lights are set up for the whole scene here, just once per render.
An object’s layer is only compared with the camera’s layers, as in the code above.
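In other words, the renderer’s per-object check boils down to something like this (a paraphrase of the three.js source, not a verbatim quote):

```js
// Layers is a 32-bit mask; Layers.test() checks for any overlap:
//   test( layers ) { return ( this.mask & layers.mask ) !== 0; }
if (object.layers.test(camera.layers)) {
  // meshes that pass are drawn;
  // lights that pass are added to the single per-scene light list
  // that every material in the frame is shaded with
}
```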
Unfortunately, as implemented, it seems to negatively affect performance (at least when tested on my phone). If anyone wants to help move this forward, they could do some performance testing with that PR. If you can demonstrate that it doesn’t reduce performance, at least for people not using selective lighting, that would be a good start.