Non-color maps are not sRGB-encoded, even when they're stored as PNG or JPEG files. By "non-color" we mean that the RGB channels are being used to represent data (normal vectors, opacity, and PBR properties) rather than visible color. It's also worth noting that while PNG and JPEG files do contain color space metadata (usually claiming to be sRGB), for these files that metadata is inaccurate more often than not.
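In three.js (r152 and later), this distinction is expressed through the texture's `colorSpace` property. A minimal sketch — the file names are placeholders:

```javascript
// Color maps carry visible color and should be tagged as sRGB.
const colorMap = new THREE.TextureLoader().load('diffuse.png');
colorMap.colorSpace = THREE.SRGBColorSpace;

// Non-color maps (normal, roughness, metalness, AO) carry data and
// must stay untagged — NoColorSpace is already the texture default,
// shown here only for clarity.
const normalMap = new THREE.TextureLoader().load('normal.png');
normalMap.colorSpace = THREE.NoColorSpace;
```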
Also note: Blender has the same concept of non-color textures, and exports them correctly.
When using an EffectComposer to render to a render target/texture, is the resulting texture in Linear-sRGB color space?
You can choose the color space for a render target when using THREE.UnsignedByteType storage. With float or half-float storage, the render target cannot be sRGB; it always holds linear values. This choice is mostly unrelated to the renderer's output color space.
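As a sketch, using the render target options available in r152 and later (the sizes are placeholders):

```javascript
// An sRGB-encoded render target requires 8-bit storage.
const srgbTarget = new THREE.WebGLRenderTarget(1024, 1024, {
  type: THREE.UnsignedByteType,
  colorSpace: THREE.SRGBColorSpace,
});

// Float and half-float targets always hold Linear-sRGB values.
const linearTarget = new THREE.WebGLRenderTarget(1024, 1024, {
  type: THREE.HalfFloatType,
  colorSpace: THREE.LinearSRGBColorSpace, // the default
});
```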
Why is a float render target more important when working in linear space, and should render targets always use FloatType?
UnsignedByteType (8 bits per channel) is generally not enough to represent Linear-sRGB colors accurately. FloatType or HalfFloatType should be fine – if you still see banding with these choices, I suspect something else is wrong. Using UnsignedByteType with the sRGB color space is also an option to reduce banding, but note there is a related Chromium bug.
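To see why 8 bits per channel are too coarse for linear storage, consider the standard sRGB transfer function. In plain JavaScript (no three.js required), the very first 8-bit step above black in linear storage already displays at roughly 5% brightness, so the dark end of a gradient gets far too few levels:

```javascript
// The sRGB transfer function: converts a Linear-sRGB component to sRGB.
function linearToSRGB(x) {
  return x <= 0.0031308 ? 12.92 * x : 1.055 * Math.pow(x, 1 / 2.4) - 0.055;
}

// UnsignedByteType stores values in steps of 1/255. In linear storage,
// the first step above black already displays quite bright:
const step = 1 / 255;
const firstStep = linearToSRGB(step); // ≈ 0.0498, i.e. ~5% display brightness
```

Storing sRGB-encoded values in 8 bits instead spends those 256 levels perceptually, which is why the sRGB + UnsignedByteType combination reduces banding.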
How did three.js 0.151.0 and earlier handle color spaces, assuming all default encoding values were used? Was the entire render chain using non-linear sRGB space?
Because the default render chain did no conversions, this depends on what you fed into it. But I think in most cases, the entire render chain used sRGB color space.
However, loaders like GLTFLoader and FBXLoader have been marking their colors with the correct conversions since at least 2019, so if you've been using these loaders, you would have needed to at least update the renderer's output color space to get correct results.
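For reference, in r151 and earlier that setting was the renderer's `outputEncoding` property (replaced by `outputColorSpace` in r152). A sketch:

```javascript
// three.js r151 and earlier: opt in to sRGB output explicitly,
// since the default was THREE.LinearEncoding (no conversion).
const renderer = new THREE.WebGLRenderer();
renderer.outputEncoding = THREE.sRGBEncoding;
```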