TSL color mismatch

Hi, everybody.

I’m having some strange behaviour with colors in shaders.
I first noticed it while trying to implement some sort of highlight functionality in my shader.

I was mixing the texture color (using mix) with a constant color vec4(0, 0.5, 1, 1).
I expected this color to be rgba(0, 127, 255, 1), but it rendered as rgba(0, 188, 255, 1).

Just to point it out:

127 / 255 = 0.4980... ~= 0.5
188 / 255 = 0.7372... ~= 0.74
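
(For what it’s worth, 0.7372 is almost exactly what the standard sRGB transfer function gives for 0.5. A quick plain-JS check, with the formula written out by hand, not a three.js API:)

const linearToSrgb = (x) =>
    x <= 0.0031308 ? 12.92 * x : 1.055 * x ** (1 / 2.4) - 0.055;

console.log(linearToSrgb(0.5));                   // ~0.7354
console.log(Math.round(linearToSrgb(0.5) * 255)); // 188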

I also noticed that this behaviour is the same whether the color is read from a texture or defined as a constant in the code.

Where things change is when you load the texture using SRGBColorSpace.
In that case, the color loaded from the texture is correct.

In this example you can notice the difference between the correct color loaded from the texture with SRGBColorSpace and the wrong color defined as a constant in the code.

They should reference the same color:

rgba(0, 127, 255, 1) = vec4(0, 0.5, 1, 1)

Last but not least…

Even if you load the correct color from the texture, processing it before returning it (for example, mixing it with another color using mix) again produces a discrepancy from the expected color.

In this example, I’m mixing it with vec4(0, 0, 0, 1) with a t value of 0.5.
The starting color is correctly rgba(0, 127, 255, 1), and the final color of the mix should be rgba(0, 63, 127, 1).

mix is a linear interpolation (lerp), so:

R: (0 * (1 - 0.5)) + (0 * 0.5) = 0
G: (0 * (1 - 0.5)) + (127 * 0.5) = 63.5 ≈ 63
B: (0 * (1 - 0.5)) + (255 * 0.5) = 127.5 ≈ 127
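
(The same expectation as a tiny plain-JS check, with the lerp written out by hand, mirroring what mix does per channel:)

const lerp = (a, b, t) => a * (1 - t) + b * t;

console.log([0, 127, 255].map((c) => Math.floor(lerp(0, c, 0.5)))); // [0, 63, 127]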

But the actual rendered color is: rgba(0, 92, 188, 1).


This can’t be a bug: it’s too serious for anyone not to have noticed it before me.

So, what am I missing?
Why is this happening?

Thanks. :face_holding_back_tears:

Maybe that discussion will give you some answers…


Hi, @Lukasz_D_Mastalerz! :upside_down_face:
Thanks for your reply.

So… If I understand correctly…
If the texture is sRGB, all the constant colors are interpreted as sRGB as well. Is that right?

What’s the solution you suggest for this?
How do other developers avoid these problems?

I’m no expert in this topic at all, mate… But I think constant colors in shaders are assumed to be in linear space and need manual conversion to represent sRGB colors, as Alfonse clarifies.

Some helpful links:

For official information on color management, check the Color Management documentation.

For additional questions and discussion about the updates to color management introduced in three.js r152, see this thread.

In short – the shader is using “Linear-sRGB” colors, and interpolating in Linear-sRGB space. By annotating the texture with .colorSpace = SRGBColorSpace you’re ensuring that the texture is properly decoded from sRGB (source data) to Linear-sRGB (needed in the shader).

After everything else in the shader, Linear-sRGB values are converted back to sRGB (as required by the WebGL canvas) for display. Your “expected” value would be the result only if interpolating in sRGB space, which you could do by converting before the mix, but this isn’t the default.
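
(To make that concrete, here’s the default pipeline reproduced in plain JS for the mix-with-black example above; the transfer functions are written out by hand, they’re not three.js APIs. It lands exactly on the observed rgba(0, 92, 188, 1).)

// standard sRGB decode / encode, hand-written:
const srgbToLinear = (x) =>
    x <= 0.04045 ? x / 12.92 : ((x + 0.055) / 1.055) ** 2.4;
const linearToSrgb = (x) =>
    x <= 0.0031308 ? 12.92 * x : 1.055 * x ** (1 / 2.4) - 0.055;

const texel = [0, 127 / 255, 255 / 255]; // sRGB-encoded rgb(0, 127, 255)

const out = texel
    .map(srgbToLinear)   // decode (what .colorSpace = SRGBColorSpace enables)
    .map((c) => c * 0.5) // mix(black, color, 0.5), in Linear-sRGB
    .map(linearToSrgb);  // automatic encode for the canvas

console.log(out.map((c) => Math.round(c * 255))); // [0, 92, 188]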


Ok, I kinda got it…

Here’s something that seems to work:


What I’m curious about now is the correct algorithm to do this…
What I actually did is:

  1. I had to completely disable .colorSpace = SRGBColorSpace or it wouldn’t work.
    I’ve commented out the entire line.
  2. I defined the toSrgb function as follows:
    const toSrgb = (color) => {
        // NB: despite the name, this is actually the sRGB *decode*
        // (sRGB -> Linear-sRGB); it works here because it cancels the
        // automatic Linear -> sRGB encode at the end of the pipeline.
        // step, mix and vec4 come from 'three/tsl'.
        const value = color.rgb;
        const alpha = color.a;
    
        // piecewise sRGB curve: x / 12.92 below the 0.04045 threshold,
        // ((x + 0.055) / 1.055) ^ 2.4 above it
        const a = value.div(12.92);
        const b = value.add(0.055).div(1.055).pow(2.4);
        const t = step(0.04045, value);
    
        return vec4(mix(a, b, t), alpha);
    };
    
  3. Before returning a color value, I always wrap it with the toSrgb function.

All I can say is that it works fine…
Even with different or weird textures…

BUT…
Something seems wrong with my process, compared to your words, @donmccurdy

Am I doing it right?
I understand why I had to disable .colorSpace = SRGBColorSpace, but I have the feeling that it isn’t the right way…

Thanks!


Oh, a little doubt…
Is it possible that there’s no node, function, or other kind of helper already bundled in Three.js to convert a color to sRGB?

Do you think it would make sense to open a PR on this?

What I actually find strange about this whole thing is…

According to the previous links posted by @Fennec, using and processing colors in an sRGB color space brings numerous benefits.

So what I don’t understand is how I should be handling these colors…

To achieve the above result, what I basically did was to ‘step out’ of the sRGB color space, process the colors as I would in a traditional linear space, and then convert them back into sRGB again. This contradicts the earlier claims, which said that processing colors in an sRGB space brings benefits… Or am I wrong? It feels like I’m doing the opposite!

So, am I approaching this the wrong way?
And yet, how can I write a shader in an sRGB color space that simply blends the starting color at 50% with black?

Should I just accept that, in this color space, the output color will be different from what I’d usually expect? Is it justified by the benefits mentioned above?

I don’t quite understand this…
And I’m not sure if I managed to express my doubt clearly…

There’s a built-in node with three.js >= r170:

import { convertColorSpace } from 'three/webgpu';

// 'inputColor' is whatever color node you want to convert
// ('in' is a reserved word in JavaScript):
const out = convertColorSpace(
  inputColor,
  THREE.LinearSRGBColorSpace,
  THREE.SRGBColorSpace
);

On these comments …

… using and processing colors in an sRGB color space brings numerous benefits.
… what I basically did was to ‘step out’ of the sRGB color space, …

… I think you mean “Linear-sRGB” and not “sRGB”. Your hex colors and textures (any 8-bit color, really) are going to be sRGB-encoded to begin with. Rendering with sRGB-encoded colors doesn’t work well, so we convert from sRGB inputs to Linear-sRGB for rendering; everything in the shader should generally be Linear-sRGB, and then three.js automatically converts back to sRGB before writing to the canvas.

Adding .colorSpace = SRGBColorSpace tells three.js what the texture is to begin with, so it knows to decode it. When you remove .colorSpace = SRGBColorSpace, three.js won’t know to decode it to Linear-sRGB, so any shader using that texture gets an undecoded sRGB input instead of Linear-sRGB. Then the final step of sRGB encoding when drawing to the canvas adds a second level of sRGB encoding onto the already sRGB-encoded color, which will produce entirely the wrong color.
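
(To put a number on “entirely the wrong color”: the texel already stores the sRGB value 127/255, and the canvas encode then encodes it a second time. Hand-written transfer function, not a three.js API:)

const linearToSrgb = (x) =>
    x <= 0.0031308 ? 12.92 * x : 1.055 * x ** (1 / 2.4) - 0.055;

console.log(Math.round(linearToSrgb(127 / 255) * 255)); // 187, instead of 127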

Note that colors whose components are all 0 or 1 (like 1 0 0 or 0 0 0) are the same in both color spaces; you’ll only see a difference for components strictly between 0 and 1.
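
(Quick check of those fixed points, same hand-written function:)

const linearToSrgb = (x) =>
    x <= 0.0031308 ? 12.92 * x : 1.055 * x ** (1 / 2.4) - 0.055;

console.log(linearToSrgb(0), linearToSrgb(1)); // 0 1 (unchanged, up to float rounding)
console.log(linearToSrgb(0.5));                // ~0.735 (only in-between values shift)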


Ok, so…
Getting to the point.

If I get the correct value that I expect by omitting .colorSpace = SRGBColorSpace and then converting the color before returning it… exactly like I did in my last shared CodePen…
Is that the correct approach? Am I doing it right?

I’m afraid not. I would suggest creating a simplified example without the color mixing, with an opaque texture, and using the built-in convertColorSpace node. I’m not sure what output you expect for this CodePen.

Your texture is already sRGB-encoded, and you’ve prevented three.js from decoding it to the linear working color space (by not annotating it with .colorSpace). Then you’ve applied an sRGB encoding again. Then, automatically, three.js does another sRGB encoding. Applying sRGB encoding to the same color three times is not going to produce anything predictable.


If you keep the .colorSpace annotation then by default the texture is decoded to Linear-sRGB and your operations in the shader are in Linear space. If you prefer to do color operations in sRGB then you’d usually want to convert to sRGB space before the operation, and convert back to Linear-sRGB after.

const a = convertColorSpace(inputColor, LinearSRGBColorSpace, SRGBColorSpace);
const b = // ... do some operations on 'a' ...
const out = convertColorSpace(b, SRGBColorSpace, LinearSRGBColorSpace);

Ok, so…

I’m not sure what output you expect for this CodePen.

You’re perfectly right. My bad.
All I have to do is halve the color rgba(0, 127, 255, 1), obtaining its correct half: rgba(0, 63, 127, 1).

Exactly what you would expect by mixing it with pure black with t = 0.5.

That’s it.
Nothing less. Nothing more.


I would suggest creating a simplified example without the color mixing, with an opaque texture, and using the built-in convertColorSpace node.

I’ve already done it and it worked perfectly, indeed! I’m very happy about it, to be honest!
The problem seems to begin once I start processing that color, by using mix and other functions.


Here…
Take again my CodePen as reference…

I’m loading this texture:

Then, to simplify the process, for every uv() coordinate I always pick the same spot: vec2(0.125, 0.333), which is the blue-ish tile at (0; 2).

It is a rgba(0, 127, 255, 0.25) color.
Let’s get rid of the alpha part of this color; in my CodePen, I only use the rgb part.

Then, I simply take that color (stored in the color variable) and perform a mix with a pure black color, as follows:

const result = mix(vec3(0, 0, 0), color, 0.5);

Looks pretty easy and straightforward to me, honestly!
What I would expect as output is a rendered color equal to rgba(0, 63, 127, 1).


Now…
To achieve this, I had to do 2 different things.

  1. I’ve commented out .colorSpace = SRGBColorSpace.
  2. Before returning the color, I wrapped it in convertColorSpace(result, SRGBColorSpace, LinearSRGBColorSpace).

Then…
There are all the other combinations and scenarios:

  • If I only enable again .colorSpace = SRGBColorSpace I get: rgba(0, 27, 127, 1).
  • If I don’t wrap the result I get: rgba(0, 137, 188, 1).
  • If I both enable again .colorSpace = SRGBColorSpace and don’t wrap the result I get: rgba(0, 92, 188, 1).

You can try it by yourself by editing the CodePen, if you want…
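
(For anyone who’d rather not open the pen, here’s the same table reproduced in plain JS; the two booleans mimic the two changes above, and the transfer functions are written out by hand, not three.js APIs.)

const decode = (x) => (x <= 0.04045 ? x / 12.92 : ((x + 0.055) / 1.055) ** 2.4);
const encode = (x) => (x <= 0.0031308 ? 12.92 * x : 1.055 * x ** (1 / 2.4) - 0.055);

// sRGB texel rgb(0, 127, 255), mixed 50% with black:
const pipeline = (decodeTexture, wrapResult) =>
    [0, 127 / 255, 1]
        .map((c) => (decodeTexture ? decode(c) : c)) // .colorSpace = SRGBColorSpace?
        .map((c) => c * 0.5)                         // mix(black, color, 0.5)
        .map((c) => (wrapResult ? decode(c) : c))    // convertColorSpace wrap?
        .map((c) => encode(c) * 255);                // automatic canvas encode

console.log(pipeline(false, true));  // ≈ [0,  63.5, 127.5] -> rgba(0, 63, 127)
console.log(pipeline(true,  true));  // ≈ [0,  27.1, 127.5] -> rgba(0, 27, 127)
console.log(pipeline(false, false)); // ≈ [0, 136.7, 187.5] -> rgba(0, 137, 188)
console.log(pipeline(true,  false)); // ≈ [0,  91.6, 187.5] -> rgba(0, 92, 188)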


If you prefer to do color operations in sRGB then you’d usually want to convert to sRGB space before the operation, and convert back to Linear-sRGB after.

const a = convertColorSpace(inputColor, LinearSRGBColorSpace, SRGBColorSpace);
const b = // ... do some operations on 'a' ...
const out = convertColorSpace(b, SRGBColorSpace, LinearSRGBColorSpace);

I also tried what you suggest (converting the color before mixing it) and, yeah, it worked pretty well!

Now… I understood all of this, and I would consider my issue solved…

BUT… Now I’m concerned about performance and code readability.

I achieve the same result in two different ways:

Method 1:

// map.colorSpace = SRGBColorSpace;  // texture left sRGB-encoded (no decode)

material.colorNode = Fn(() => {
    const picked = texture(map, vec2(0.125, 0.333));
    const color = vec4(picked.rgb, 1);
    const result = mix(vec4(0, 0, 0, 1), color, 0.5);

    // one decode here cancels the automatic encode to the canvas
    return convertColorSpace(result, SRGBColorSpace, LinearSRGBColorSpace);
})();

Method 2:

map.colorSpace = SRGBColorSpace; // texture decoded to Linear-sRGB

material.colorNode = Fn(() => {
    const picked = texture(map, vec2(0.125, 0.333));
    const color = vec4(picked.rgb, 1);
    // hop into sRGB space, mix there, then hop back to Linear-sRGB
    const converted = convertColorSpace(color, LinearSRGBColorSpace, SRGBColorSpace);
    const result = mix(vec4(0, 0, 0, 1), converted, 0.5);

    return convertColorSpace(result, SRGBColorSpace, LinearSRGBColorSpace);
})();

Isn’t the 2nd method more computationally intensive?
Isn’t there at least one more color conversion?

Why should I pick the 2nd method, if it requires more effort and is less readable than the first one?
Am I missing something?


Here’s another couple of CodePens about it…

Method 1:

Method 2:

So just to be very explicit — this means you are trying to do a 50/50 blend of rgb(0,0,0) and rgb(0, 127, 255), where the two input colors are sRGB, and you want the blending also done in sRGB space. Blending these two colors in sRGB space will produce rgb(0, 63, 127) (sRGB). Converting them to Linear-sRGB, blending, and then converting back to sRGB will not.

To achieve this, I had to do 2 different things.

  1. I’ve commented out .colorSpace = SRGBColorSpace.
  2. Before returning the color, I wrapped it in convertColorSpace(result, SRGBColorSpace, LinearSRGBColorSpace).

I’d consider this a fine solution, if you feel comfortable with “why” it works. Step (1) keeps the texture in its original sRGB space by preventing three.js from decoding it. Then step (2) applies an sRGB decode step right before the automatic three.js encode step, so the two cancel out and have no net effect. As a result, the colors basically remain in sRGB space throughout your render pipeline, all the way from input to the canvas.
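
(A quick way to see the cancellation, with both transfer functions written out by hand:)

const decode = (x) => (x <= 0.04045 ? x / 12.92 : ((x + 0.055) / 1.055) ** 2.4);
const encode = (x) => (x <= 0.0031308 ? 12.92 * x : 1.055 * x ** (1 / 2.4) - 0.055);

// decode (step 2) followed by encode (the automatic final step) is a no-op:
console.log(encode(decode(0.5))); // ~0.5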


Comparing methods (1) and (2), I don’t think a single color conversion is much to worry about performance-wise. If mixing colors in sRGB space is all you need to do, then either should be fine! If you need to also do some lighting or PBR, then keeping the colors mostly in the default Linear-sRGB space and switching to sRGB for specific operations (2) would generally be the best option.


Ok. Got it. :face_holding_back_tears:

Thanks for your time.
It makes so much more sense, now! :upside_down_face:
