You might be able to assign objects to layers, and then toggle them in one shot by enabling/disabling the relevant layers on the camera.
You could also make a separate scene just for the sky rendering… but then you'd have to move the sky object itself in and out of it, so I'm not sure that's better.
(Also, you probably want to keep a single PMREMGenerator instance and reuse it instead of creating one every time, since it has a bit of startup cost, iirc.)
Thanks. I was wondering more whether there's a way to use the Sky shader directly as scene.environment, but maybe that's silly. I'm not sure, I just want something fast!
Another option to your code above would be to render the sky shader to a render target and use that texture as the scene's environment. There's a good example of how to do that in Mugen87's post here: Using shader material as texture - #2 by Mugen87. It's pretty much what you're doing already, but without having to traverse the scene to hide all other objects and then restore their visibility every frame; instead you just use a render target to get the sky alone as a texture output.
That's a useful technique, but it isn't rendering to a cubemap, afaik?
That's what the CubeCamera approach does.
Additionally… you still have to run the result through the PMREM generator to get proper image-based lighting (roughness, etc.).
I don't think there are many shortcuts if you want to use that complex sky shader, short of rewriting it to output PMREM directly.
Using layers reduces the "traverse the scene to hide all objects" part to a single camera.layers.enable()/disable().
All that said, I wouldn't necessarily recommend doing it every frame… that sky shader is already pretty expensive, and the PMREM generation isn't cheap either.
If I had to do something somewhat performant, I might consider rendering periodic key frames and blending between them… perhaps doing the render-to-cubemap on one frame and the PMREM generation on the next (say, once a second), then crossfading between the two samples per frame for the environment map.
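One way to sketch that amortization is a tiny per-frame scheduler. The phase names and interval here are assumptions; you'd wire them to CubeCamera.update and PMREMGenerator.fromCubemap, and the crossfade itself would need a custom shader (or two environment probes to blend), since scene.environment only takes one texture:

```javascript
// Spread the expensive steps across frames: render the cubemap on one
// frame, run PMREM on the next, then just blend until the next interval.
function makeEnvScheduler(intervalFrames = 60) { // ~once a second at 60fps
  let frame = 0;
  return function nextStep() {
    const phase = frame % intervalFrames;
    frame += 1;
    if (phase === 0) return 'renderCubemap'; // cubeCamera.update(renderer, skyScene)
    if (phase === 1) return 'generatePMREM'; // pmrem.fromCubemap(cubeRT.texture)
    return 'crossfade';                      // blend previous/current env maps
  };
}
```

Each frame you'd call `nextStep()` and do only that step's work, so no single frame pays for both the sky render and the PMREM pass.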
Oh, I assumed that since the Sky class extends Mesh and calls super with a BoxGeometry, its internal shader would already be computing output in cubemap format? I'm probably mistaken, as I haven't actually tried this; either way it doesn't sound "cheap" performance-wise.
It does render volumetrically, but rendering it to a plane + render target (or cube + render target) will only get you whatever direction the camera is pointing at… that's why you need a CubeCamera: to render all 6 faces of a cubemap. The CubeCamera goes inside the box and renders all 6 faces at once to a render target in cubemap layout, iirc. Once you have that cubemap, it will "work" as the environment, but reflections will only have perfect sharpness, since no roughness mipmaps have been calculated… which is why you need the PMREM step.
I've done the CubeCamera approach, rendering to a target that gets used as the environment map, if the code would help you at all. But if you're looking for something simple, and the initial post calls the block of code you shared 'crazy', then perhaps this isn't what you're looking for.
You might want to check on the status of SSILVB, as it seemed like that was making its way into the core library at some point, and could help with indirect lighting the way scene.environment does.