I am currently struggling with a render target/EffectComposer issue with the new color management changes in r152. I noticed that when rendering to a texture, there are subtle color differences from the regular scene. This is consequential when designing something like a mirror.
JSFiddle: Color Management Issue with Effect Composer
The POC is very simple. The left plane is the regular scene. The dark portion is not part of the image - it is a black plane with 0.5 opacity. I am using an EffectComposer and RenderPass to render the left plane to a texture and display it on the right plane. The difference is most noticeable in the dark portion, where the rendered texture is slightly brighter.
I believe what’s happening here is the difference between alpha blending in sRGB space vs. alpha blending in Linear-sRGB space.
When drawing into the EffectComposer’s render target, the image in the back is drawn in Linear-sRGB, the panel is blended on top in Linear-sRGB, the result is stored to the render target, and when we render to screen it is converted from Linear-sRGB to sRGB as required for display.
When drawing directly to the canvas, each material does the Linear-sRGB to sRGB conversion in its own fragment shader, before alpha blending occurs. When the transparent panel is drawn, its color is blended with the (already sRGB) pixels behind it, and we get a slightly different result.
The first is “correct” for purposes of PBR rendering. The second is an unfortunate necessity in WebGL, although there are situations where alpha blending in sRGB might be preferred. Using post-processing for the final render to screen would allow blending to be done in Linear-sRGB, and both sides should match, but that is a bit more costly.
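The two blending paths described above can be reproduced numerically. This is an illustrative sketch, not three.js API: it applies the standard sRGB transfer functions by hand to a 50%-opacity black panel over a mid grey, once blending in Linear-sRGB (the render-target path) and once blending directly in sRGB (the direct-to-canvas path).

```javascript
// sRGB <-> Linear-sRGB transfer functions (IEC 61966-2-1)
function srgbToLinear(c) {
  return c <= 0.04045 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
}
function linearToSrgb(c) {
  return c <= 0.0031308 ? c * 12.92 : 1.055 * Math.pow(c, 1 / 2.4) - 0.055;
}

const background = 0.5; // sRGB-encoded grey behind the panel
const panel = 0.0;      // black panel
const alpha = 0.5;      // panel opacity

// Path 1: what happens in the EffectComposer's render target - blend in
// Linear-sRGB, then convert the final result to sRGB once, for display.
const linearBlend = linearToSrgb(
  alpha * srgbToLinear(panel) + (1 - alpha) * srgbToLinear(background)
);

// Path 2: what happens when rendering directly to the canvas - each fragment
// is converted to sRGB first, then blended against the already-sRGB pixels.
const srgbBlend = alpha * panel + (1 - alpha) * background;

console.log(linearBlend, srgbBlend); // ≈ 0.36 vs 0.25
```

The linear-blended result comes out brighter, which matches the observation that the rendered texture looks slightly brighter in the dark portion.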
This makes sense. I agree that the issue must be alpha blending. I didn’t make a JSFiddle for this, but I noticed that if I “bake” the panel into the image using Photoshop, the colors are the same.
I am writing a UI library on top of three.js and need to make a design decision. Would it be wise to always use an EffectComposer instead of the default renderer output, even if it perhaps isn’t needed? Is there any downside to that other than a performance hit?
I believe that’s probably the best choice, visually. I don’t know of any downside other than some performance cost from the additional passes and render targets.
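For reference, a minimal sketch of that setup, assuming three.js r152+ and the examples/jsm addons: route all output through an EffectComposer so blending happens in Linear-sRGB, with the sRGB conversion applied once by a final OutputPass.

```javascript
import * as THREE from 'three';
import { EffectComposer } from 'three/addons/postprocessing/EffectComposer.js';
import { RenderPass } from 'three/addons/postprocessing/RenderPass.js';
import { OutputPass } from 'three/addons/postprocessing/OutputPass.js';

const renderer = new THREE.WebGLRenderer({ antialias: true });
const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(50, innerWidth / innerHeight, 0.1, 100);

const composer = new EffectComposer(renderer);
composer.addPass(new RenderPass(scene, camera));
// OutputPass performs tone mapping and the Linear-sRGB -> sRGB conversion as
// the last step, so all intermediate render targets stay linear.
composer.addPass(new OutputPass());

// Render through the composer instead of renderer.render(scene, camera).
renderer.setAnimationLoop(() => composer.render());
```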
Many image editing tools either default to this or provide an option (sometimes also called “blending colors with gamma 1.0”). Linear blending is also required if trying to keep >8 bit precision in the effect stack. See the list in the OBS wiki article on Linear Color Blending.