you do not need pmrem.fromScene, just a regular cubecamera that films a virtual THREE.Scene into a THREE.WebGLCubeRenderTarget. i say virtual because it’s not part of the normal scene you render out. the cube-cam-scene contains light formers, softboxes and so on. the texture result of that rendertarget can plug right into your main scene.background and/or .environment.
now every time you call cubecam.update() it will render into the texture again, and if you do that every frame you have a realtime environment map. a realtime env doesn't perform well on all platforms, which is why all these r3f demos have <PerformanceMonitor>, which makes it static if it detects the framerate dropping.
import * as THREE from 'three'

// example values; these would normally be parameters of your setup
const resolution = 256              // cube map resolution per face
const near = 1, far = 1000          // cube camera clipping planes
let frames = Infinity               // Infinity = re-render every frame, 1 = render once and stay static

// this is the virtual scene that receives the lightformers
const virtualScene = new THREE.Scene()
const fbo = new THREE.WebGLCubeRenderTarget(resolution)   // cube render target holding the env map
fbo.texture.type = THREE.HalfFloatType
const cam = new THREE.CubeCamera(near, far, fbo)          // films the virtual scene into the target

// the result plugs straight into your main scene
scene.environment = fbo.texture

let count = 0
function loop() {
  // keep re-rendering the cube map while frames remain
  if (frames === Infinity || count < frames) {
    cam.update(gl, virtualScene)    // gl is your THREE.WebGLRenderer
    count++
  }
}
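to make that concrete, here's one way to drive loop() from the render loop and freeze the env map when the framerate tanks — just a sketch of the fallback idea, not drei's actual implementation, and it assumes gl, scene and camera are your renderer, main scene and main camera:

let last = performance.now()
gl.setAnimationLoop((time) => {
  // crude version of the perf fallback: if a frame took longer than ~50ms,
  // stop re-rendering the env map and keep whatever we have (it becomes static)
  if (frames === Infinity && time - last > 50) frames = Math.max(count, 1)
  last = time
  loop()                     // re-renders the cube map while frames remain
  gl.render(scene, camera)   // normal render of the main scene
})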
btw this is a good opportunity to hash out a generic structure for loop access, resize and overall reactivity. maybe something like a base class from which all your primitives inherit.
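something in that direction could look like the sketch below — class and method names (Primitive, onFrame, onResize) are made up, it's just to show the shape:

import * as THREE from 'three'

// hypothetical base class: every primitive gets a per-frame tick, a resize
// callback and a dispose hook, so the app can drive them all the same way
class Primitive {
  onFrame(delta) {}           // called once per rendered frame
  onResize(width, height) {}  // called when the canvas size changes
  dispose() {}                // free GPU resources
}

// the environment from above, rewritten as such a primitive
class EnvironmentPrimitive extends Primitive {
  constructor(gl, scene, { resolution = 256, frames = Infinity } = {}) {
    super()
    this.gl = gl
    this.frames = frames
    this.count = 0
    this.virtualScene = new THREE.Scene() // holds the lightformers
    this.fbo = new THREE.WebGLCubeRenderTarget(resolution)
    this.fbo.texture.type = THREE.HalfFloatType
    this.cubeCamera = new THREE.CubeCamera(1, 1000, this.fbo)
    scene.environment = this.fbo.texture
  }
  onFrame() {
    if (this.frames === Infinity || this.count < this.frames) {
      this.cubeCamera.update(this.gl, this.virtualScene)
      this.count++
    }
  }
  dispose() {
    this.fbo.dispose()
  }
}

the render loop then just calls onFrame() on everything it owns, and a single resize observer can fan out onResize() the same way.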