Wrong colors in transparent materials when using EffectComposer (post-processing)

Hi all!

I’m trying to make my scenes look exactly the same with and without post-processing, as I need to match the exact sRGB hexadecimal values and transparencies the design team gives me.

I’ve managed to make almost everything work correctly, but I have a huge problem with objects that have a transparent material.

Here are two comparison images, without and with post-processing (I added a big dummy cube to make it even more obvious). Notice all the colors are the same except for the transparent platforms and the cube:


I’m using THREE.ColorManagement.enabled = true; and renderer.outputColorSpace = THREE.SRGBColorSpace; as recommended in the color management guide (which, by the way, I need to keep set exactly like that so the colors are correctly represented from their hex values in the other scenes not affected by the EffectComposer).

In case it helps, this is what the transparent material looks like:

const cubeMat = new THREE.MeshLambertMaterial({
     color: new THREE.Color("#9505D9"),
     transparent: true,
     opacity: 0.5,
});

How can I fix it? Am I doing something wrong?

Thx!

You’re only seeing this problem for transparent materials, right? If it’s affecting opaque materials too, it’s a different issue than I’ll describe below.

I believe the difference you’re seeing here is that by default, three.js does blending in sRGB space. With post-processing enabled, blending occurs in Linear-sRGB space instead. The latter is “usually better,” but it simply isn’t possible without post-processing. You might mention this to your design team; their software might have options to use a linear blend space to match.
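To make the difference concrete, here’s a tiny standalone sketch (plain JS, assuming a 50%-opacity #9505D9 surface over a black background, which may not match your exact scene) comparing the two blend spaces:

// sRGB <-> Linear-sRGB transfer functions (the standard piecewise sRGB curve)
const toLinear = (c) => (c <= 0.04045 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4));
const toSRGB = (c) => (c <= 0.0031308 ? c * 12.92 : 1.055 * Math.pow(c, 1 / 2.4) - 0.055);

const src = 0x95 / 255; // red channel of #9505D9
const dst = 0;          // hypothetical black background
const alpha = 0.5;

// Blending directly in sRGB (the default drawing-buffer behaviour without post-processing)
const blendedSRGB = src * alpha + dst * (1 - alpha); // ≈ 0.29

// Blending in Linear-sRGB and encoding afterwards (what happens with post-processing)
const blendedLinear = toSRGB(toLinear(src) * alpha + toLinear(dst) * (1 - alpha)); // ≈ 0.42

console.log(blendedSRGB, blendedLinear); // same inputs, visibly different results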

It’s possible to change post-processing to do blending in sRGB instead, by configuring your render targets with {type: UnsignedByteType, colorSpace: SRGBColorSpace} (edit: fixed typo), but this will introduce some possibly-unwanted issues with lit scenes and tone mapping.

@donmccurdy Yes, all the rest of the materials are perfectly fine; it only happens with the transparent ones.

I’ve tried your suggestion, but with that combination I get a WebGL error. I’ve tried a different type and the error goes away, but I still see the wrong color. Here is how I create the EffectComposer:

addEffectComposer() {
    // We check how many samples the native canvas MSAA is using so we can mimic it in the post-processing pipeline
    const gl = this.renderer.getContext();
    const samples = gl.getParameter(gl.SAMPLES);

    const renderTarget = new THREE.WebGLRenderTarget(window.innerWidth, window.innerHeight, {
        colorSpace: THREE.SRGBColorSpace,
        type: THREE.UnsignedByteType
    });

    this.composer = new EffectComposer( this.renderer, renderTarget );
    this.composer.renderTarget1.samples = samples;
    this.composer.renderTarget2.samples = samples;

    this.renderPass = new RenderPass( this.graphScene, this.graphCamera );
    this.composer.addPass( this.renderPass );

    this.outputPass = new OutputPass();
    this.composer.addPass( this.outputPass );
}

Hm, this looks correct to me. Would it be possible to create a simple demo, perhaps with just the cube in your screenshot?

@donmccurdy

Sure, it took me quite some time, but I think I managed to replicate it in a minimal working codepen simulating most of my setup :slight_smile:

First a little explanation:
The codepen has two scenes: the one called graphScene is the one intended to be affected by post-processing (though not always, which is why I really need it to look the same whether post-processing is on or off), and another one called guiScene that will never be affected by post-processing and will always be rendered ‘normally’.

The cube on the left is the one rendered in graphScene, and the one on the right is the one rendered in guiScene.

At the start no post-processing is turned on; click the top-right checkbox to activate/deactivate post-processing for graphScene (and you will see how the cube on the left loses the intended color pretty heavily):

My apologies, it seems I was incorrect about how WebGL does blending in an sRGB framebuffer. :frowning: I’d assumed it would do blending in sRGB, like the drawing buffer does when not using post-processing, but it turns out it decodes to Linear-sRGB first, then does the blending, then encodes back to sRGB. Technical thread here:

https://groups.google.com/g/angleproject/c/5ITMg4_m8Ug/m/p0x8--KYAQAJ

I suppose that is a more correct behavior, but I’d just assumed it matched the drawing buffer.

That being the case, I don’t have an easy way around this. The proposal at WebGLRenderTarget: Allow opt-in for output transforms / image formation · Issue #29429 · mrdoob/three.js · GitHub might offer a way forward, or you could patch your shaders to do sRGB encoding in the fragment shader for materials regardless of the renderer settings. But if the designs can be adjusted to account for linear blending, that is much easier.


Hi, could this be the same issue as I’m experiencing in this thread?

@donmccurdy Thx a lot for all your insights, I think I finally understand it.

Correct me if I’m wrong: basically, when using the ‘normal’ render, the colorspace_fragment chunk in each built-in shader gets executed for all materials. If your renderer is set with outputColorSpace = THREE.SRGBColorSpace, then sRGBTransferOETF, which transforms from linear to sRGB, gets executed in the fragment shader, and finally blending is done, resulting in the expected sRGB result.

But when using post-processing, the scene is instead rendered to a RenderTarget, skipping your main render’s colorSpace setting, so there is no transform to sRGB in each material before blending. In the OutputPass (or by running the sRGBTransferOETF function or an equivalent yourself in a custom pass/shader) the linear-to-sRGB transform is applied to the whole texture, but the blending has already happened in linear space, so the result for the transparent objects is messed up (no problem with anything else that is opaque, as it was not affected by the blending and the pixel value is fine).

Now, if I understood everything correctly, the colorSpace setting in the RenderTarget that you can pass to the EffectComposer, instead of just using the default, is basically ignored. By the way, changing it to linear changes absolutely nothing. I find this VERY unexpected, and wrong in my opinion, unless I’m missing something…

What I would expect from post-processing is for it to grab the exact same texture I’m seeing in the canvas/screen and apply some effects to it. Just as you would run any shader on a texture: your input texture may be in sRGB (which is basically the web standard), the shader will transform it to linear, work with it in linear, and then transform the result back to sRGB.

The analogy I’m trying to establish is that if I’m telling the EffectComposer that the RenderTarget is sRGB, I want it to render as the ‘normal’ render would with sRGB output, then transform to linear in the RenderPass, run all the effects needed in linear space, and convert back to sRGB in the final OutputPass.

Sorry for the long post, but I think it’s worth a thought :thinking:.
Wouldn’t it be worth ‘respecting’ the RenderTarget colorSpace that the user specifically set? I guess the change would make sense and wouldn’t introduce such a big breaking change, right?

Thx for reading! :slightly_smiling_face:

You’re correct in your description of what’s happening in the direct and post-processed rendering pipelines, perhaps except for the “expected sRGB result” vs. “messed up” descriptors. Blending in sRGB is generally incorrect in a lit render pipeline — but it’s impossible to fix that without post-processing, so it’s a long-standing bug.

… the colorSpace setting in the RenderTarget that you can pass to EffectComposer , instead of just using the default, is basically ignored. By the way changing it to linear changes absolutely nothing … Wouldn’t it be worth to ‘respect’ the RenderTarget colorSpace that the user specifically set?

The colorSpace setting is respected — it’s the color space stored in the framebuffer. The fragment shader outputs Linear-sRGB, and WebGL converts that automatically to sRGB. This has important implications for color precision, avoiding banding, and sample interpolation. Notably, an UnsignedByteType framebuffer does not have enough precision to store linear color values without banding, so you need the sRGB color space to prevent that.
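To put a rough number on the precision point, here’s a small standalone sketch (plain JS) counting how many of the 256 possible 8-bit codes land in the darkest ~10% of the displayed (sRGB) range, depending on what the bytes store:

const toSRGB = (c) => (c <= 0.0031308 ? c * 12.92 : 1.055 * Math.pow(c, 1 / 2.4) - 0.055);

let linearCodes = 0;
let srgbCodes = 0;

for (let i = 0; i < 256; i++) {
    const v = i / 255;
    if (toSRGB(v) <= 0.1) linearCodes++; // the byte stores a Linear-sRGB value
    if (v <= 0.1) srgbCodes++;           // the byte stores an sRGB value
}

console.log(linearCodes, srgbCodes); // 3 vs. 26: dark tones get far fewer steps when stored linearly, hence banding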

I think the disconnect here is that the .colorSpace property of the render target defines the storage — not the blending space. Whether compositing into an sRGB or linear framebuffer, WebGL will use linear space for blending.

To force the result you are asking for, we’d need to tell WebGL it’s a linear framebuffer, but then “secretly” go ahead and encode values to sRGB in the fragment shader before writing to the framebuffer. I believe #29429 would enable that. I will caution that this isn’t usually what you want, but it can be helpful as a workaround.


@electric.cicada if your issue is happening only with transparency (alpha blending) enabled then it might be the same. Since your demo appears to use custom shaders everywhere, you could try explicitly doing a linear-srgb to srgb conversion at the end of the fragment shader, rather than including the tonemapping and color space encoding chunks. At least to test if that makes the results match.

Can you explain this a bit more? I’m seeing that when ColorManagement.enabled is set to false and WebGLRenderer.outputColorSpace is set to LinearSRGBColorSpace, both the effect-composer and normally rendered cubes are the same colors. These settings should result in colors properly reflecting their hex codes.
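For reference, the configuration I’m describing looks roughly like this (a sketch; adapt it to your own setup):

import * as THREE from 'three';

const renderer = new THREE.WebGLRenderer({ antialias: true });

// Unmanaged color: values pass through untouched, so blending happens on the raw
// sRGB numbers both with and without the EffectComposer (at the cost of lighting accuracy).
THREE.ColorManagement.enabled = false;
renderer.outputColorSpace = THREE.LinearSRGBColorSpace;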

I agree it’s not an ideal solution since it will impact lighting calculations but it might be okay for the kind of rendering you’re showing above.


I am also having problems with EffectComposer, but my issue is that it simply doesn’t work if I want to add multiple View elements with only one Canvas (because I need multiple scenes for my project).

There should be a bright light with the Bloom properties I added, but only a standard material is displayed with no glowing effect, and it is the View that causes it, because I tried without it and it works fine.

Thank you for the suggestion, I did try it. I have to be honest, I asked my local codestral about it because I was not sure how to do this exactly, and it suggested this:

mix(12.92 * color, 1.055 * pow(color, vec3(1.0/2.4)) - 0.055, step(0.0031308, color));

I don’t see any difference (as in, it’s still not correct). Is this what you meant, or did I totally botch it? :slight_smile:

@gkjohnson Thx a lot for taking a look at it, Garrett.

Well, the codepen I put together is a very simplified demo of the problem; my actual app heavily relies on having ColorManagement.enabled set to true so colors are properly converted from their sRGB/CSS values (and, though I’m not 100% sure, probably also several loaded textures and some CanvasTextures I create on the fly).

After the feedback from @donmccurdy I think I understood the problem with alpha blending better, so to go forward I just pushed my design team to try not to use things that force me to use materials with opacity < 1… :smile:

I hope the issue he raised here gets traction. My intuition is that if the ‘normal’ renderer outputs whatever to the framebuffer, then if you want to apply some kind of post-processing (let’s say just a blur), none of the colors should change, no matter what. But hey, as I said in my previous posts, I’m nowhere near as expert as you guys and I may be missing very important nuances; my two cents are just that, as a user, I would expect it to remain the same :slight_smile:.

my actual app heavily relies on having ColorManagement.enabled set to true so colors are properly converted from their sRGB/CSS values (and, though I’m not 100% sure, probably also several loaded textures and some CanvasTextures I create on the fly).

I’m asking because your description of what happens when ColorManagement is disabled does not align with what is supposed to happen. Provided hex codes etc. will match their CSS color values whether or not ColorManagement is set to true. I’m seeing that there is a color change, though, even when outputColorSpace = LinearSRGBColorSpace, which shouldn’t be happening, nor should it change when using post-processing. I’ve made a few new issues to look into this.

I hope the issue he raised here gets traction. My intuition is that if the ‘normal’ renderer outputs whatever to the framebuffer, then if you want to apply some kind of post-processing (let’s say just a blur), none of the colors should change, no matter what.

I agree the colors shouldn’t change but it’s technically complicated. I’m hoping that using unmanaged color can be a solution if you need blending to occur in sRGB but it seems there may be some more changes needed to achieve that.

You can see how three.js does this here …

… though your code also looks reasonable to me, and may be fine.

Perhaps it’s better to continue the investigation of your app in the other thread (Weird EffectComposer result - #5 by electric.cicada); I think it is a different issue, and I’ll follow up there.
