Since scene.environment now always uses PMREMGenerator under the hood, I was wondering if the result of that could be reused for other purposes (for example, to display as blurred background).
WebGLRenderer seems to keep its CubeUV maps internal and inaccessible, but maybe there’s another way?
For reference, here’s what I’m currently doing to get a blurry background from the environment - requires another duplicate PMREMGenerator right now:
scene.environment = texture;

// TODO can the PMREM from scene.environment be reused?
let pmremGenerator = new THREE.PMREMGenerator( renderer );
let rt = pmremGenerator.fromEquirectangular( texture );
scene.background = rt.texture;

// Workaround: can we get a specific mip level from the PMREMGenerator as a new texture?
// Patch envmap_fragment.glsl.js to sample a higher (blurrier) mip level for the background
THREE.ShaderChunk.envmap_fragment = THREE.ShaderChunk.envmap_fragment.replace(
	`vec4 envColor = textureCubeUV( envMap, reflectVec, 0.0 );`,
	`vec4 envColor = textureCubeUV( envMap, reflectVec, 1.0 );`
);
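For completeness, here is a minimal sketch of how the duplicate generator pass can at least be consolidated by assigning the same prefiltered render target texture to both `scene.environment` and `scene.background` (this assumes `renderer`, `scene`, and an equirectangular `texture` already exist; untested sketch, not a definitive fix for the mip-level question):

```javascript
import * as THREE from 'three';

// Assumes `renderer`, `scene`, and an equirectangular `texture` are set up elsewhere.
// PMREMGenerator prefilters the equirect map into a CubeUV render target once.
const pmremGenerator = new THREE.PMREMGenerator( renderer );
pmremGenerator.compileEquirectangularShader();

const rt = pmremGenerator.fromEquirectangular( texture );

// The same prefiltered texture drives both lighting and the background,
// so no second PMREMGenerator is needed.
scene.environment = rt.texture;
scene.background = rt.texture;

// The generator (not the render target) can be disposed once done.
pmremGenerator.dispose();
texture.dispose(); // the source equirect texture is no longer needed
```

The background still samples mip 0 unless the shader chunk is patched as above, so this only removes the duplicate prefiltering work, not the need for the workaround.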
Thanks! I’ve seen that PR, looking forward to that getting merged. Looks pretty clean!
Also thanks for verifying that there’s no way to access the already generated cube maps.
The user should actually not worry about this kind of stuff.
Some users do worry about this kind of stuff (rendering performance, avoiding duplicate work, and so on), so I’m not sure you can generalize like that.
Side note: pre-convolved environment maps are sometimes used for blurred backgrounds (e.g. in Unity). They actually work pretty well for that use case; in many cases they yield a much smoother result than regular mipmaps, exactly as for their intended purpose of roughness mipmapping. Whether that additional smoothness is wanted is a design choice.
At least from working with Unity I can say that the “regular” mipmaps for cube textures are always too blocky, as mipmaps are not really meant to be viewed enlarged - quite the contrary, they aim to represent a texture at a mapping near 1:1 between “screen pixel” and “mipmap texel”. In contrast, at least from my understanding, PMREM is designed to provide a smooth appearance under enlargement (or, phrased differently, when accessing a lower mip than what would be chosen based on texel size).
It might of course be that the best results would be achieved with another convolution style (not PMREM, but “create mipmaps so that they can be used for different levels of background blurriness”). I’d argue PMREM is closer to that goal than regular mipmaps, but I can see that it could also be argued the other way around.
Not entirely true. WebGLRenderer.properties is public, and a reference to the prefiltered environment map is stored there for all standard/physical materials. So you can create a standard/physical material, render it with an envMap, and then access the prefiltered envMap via renderer.properties.get( material ).envMap
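A minimal sketch of that trick, assuming a scene, camera, and renderer are already set up and that at least one frame has been rendered (the internal prefiltering only happens during rendering; untested):

```javascript
import * as THREE from 'three';

// Assumes `renderer`, `scene`, `camera`, and an environment `texture` exist.
// Render a standard material with envMap set so the renderer prefilters it:
const material = new THREE.MeshStandardMaterial( { envMap: texture } );
const mesh = new THREE.Mesh( new THREE.SphereGeometry( 1, 32, 16 ), material );
scene.add( mesh );

renderer.render( scene, camera ); // triggers the internal CubeUV prefiltering

// renderer.properties is public; the cached per-material entry now holds
// the prefiltered (CubeUV) version of the environment map.
const prefilteredEnvMap = renderer.properties.get( material ).envMap;
```

Note that this relies on renderer internals rather than a documented API, so the stored property layout may change between three.js releases.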