When using ACESFilmicToneMapping, patterns on textures become ‘washed out’, with very low contrast and low color saturation. Here’s a fiddle, with the texture on the right for comparison. Any suggestions about how to improve contrast and color saturation on examples like this? I have played with a postprocessing effect to increase the contrast, but that doesn’t really feel like the right approach to me (and messes up other things).
Yes, ACES Filmic tone mapping leads to a hue shift. However, I’m not sure why you use tone mapping in your demo. Tone mapping is intended for HDR to LDR conversion. You are not using HDR rendering in your scene.
It seems to me what you really want to use is color mapping (also known as color grading).
I know this scene doesn’t use HDR, that’s just to make the sample program simpler.
To elaborate on that: I was under the impression that the ‘standard/modern’ pipeline uses linear color space internally, and that you then map that to sRGB at the end, using the tone mapping setting. So input textures need texture.encoding = THREE.sRGBEncoding (which instructs three.js to convert the sRGB JPEGs to linear color space first), the renderer needs renderer.outputEncoding = THREE.sRGBEncoding because it outputs to the screen, and to specify how to get from the linear internal space to that final sRGB, you set renderer.toneMapping = THREE.ACESFilmicToneMapping (or one of the others). That makes sense independent of whether you use any HDR assets in your scene (or more precisely, the internal three.js representation is always HDR).
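A minimal sketch of the setup described above, assuming a three.js build from the era discussed in this thread (newer releases later renamed outputEncoding to outputColorSpace); the texture filename is illustrative:

```javascript
import * as THREE from 'three';

// Renderer: output sRGB, and use ACES Filmic for the HDR -> LDR step.
const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.outputEncoding = THREE.sRGBEncoding;        // final encode to screen
renderer.toneMapping = THREE.ACESFilmicToneMapping;  // HDR -> LDR reduction
renderer.toneMappingExposure = 1.0;

// Color texture: tell three.js the JPEG is sRGB so it is decoded to linear first.
const texture = new THREE.TextureLoader().load('pattern.jpg');
texture.encoding = THREE.sRGBEncoding;

const material = new THREE.MeshStandardMaterial({ map: texture });
```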
In that case, I would hope that there would be some way to get those sRGB input textures out in rendered form with more or less the same hue/contrast as they went in.
Is my understanding incorrect perhaps? Thanks!
Sorry, but this assumption is not correct. It seems you are mixing up color space conversion with tone mapping. Color textures are usually defined in sRGB color space. Hence, you set THREE.sRGBEncoding. That is necessary so the shader is able to correctly decode the color values, since lighting equations are implemented in linear color space. You can then define via WebGLRenderer.outputEncoding in what color space the linear color values should be output to the screen. Tone mapping has nothing to do with this process.
BTW: Tone mapping in three.js always happens in linear color space. So in the fragment shader, color space conversion happens after tone mapping. In other words: tone mapping is nothing else than a gamut reduction. The subsequent color space conversion is a gamut transformation (expecting LDR color input values in the range [0, 1]).
Ok, fair enough, strictly speaking the renderer.outputEncoding setting determines which output color space we want. Internally, three.js works with HDR. Those HDR values need to be brought down to LDR. That is what tone mapping does. Once you are in LDR range, you may still need to do something to get the values to be sRGB-encoded colors. The renderer.outputEncoding only says that we want sRGB, not how the reduction from HDR to LDR takes place; that’s what renderer.toneMapping determines.
(And I want that tone mapping done, because in my real app the dynamic range is large, so if I use THREE.NoToneMapping I’ll get loads of clipped values (over/under exposure).)
However, if I take an sRGB-encoded image, transform it to linear space, then bring that HDR representation back down to LDR with tone mapping, and then convert to sRGB space with a gamut transformation, I would hope to get the same hue/contrast out. I can see how that could fail (e.g. the HDR-to-LDR tone mapping process has to throw away information), but I was hoping for some hint about how to reduce that loss of information. Can I bump up hue/contrast of my texture after it has been converted from sRGB to linear, for example?
Sorry for the waste of bandwidth if this is simply not easy to do.
Well, the problem is that you are comparing a textured PBR cube (using MeshStandardMaterial) with an image included in an HTML file. Lighting computations will affect the final rendering in a way that makes it nearly impossible to achieve a 1:1 match. The only thing you can try in this context is to play with different tone mapping settings (like a different exposure). Otherwise you can try to use post-processing and apply color correction with an additional shader pass (e.g. by using ColorCorrectionShader).
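A sketch of that post-processing route; the import paths follow the examples/jsm layout of that era, and the uniform values are illustrative only, not a drop-in fix:

```javascript
import { EffectComposer } from 'three/examples/jsm/postprocessing/EffectComposer.js';
import { RenderPass } from 'three/examples/jsm/postprocessing/RenderPass.js';
import { ShaderPass } from 'three/examples/jsm/postprocessing/ShaderPass.js';
import { ColorCorrectionShader } from 'three/examples/jsm/shaders/ColorCorrectionShader.js';

const composer = new EffectComposer(renderer);
composer.addPass(new RenderPass(scene, camera));

// ColorCorrectionShader applies a per-channel power/multiply/add.
const colorPass = new ShaderPass(ColorCorrectionShader);
colorPass.uniforms.powRGB.value.set(1.1, 1.1, 1.1); // > 1 deepens the mids
colorPass.uniforms.mulRGB.value.set(1.2, 1.2, 1.2); // boosts overall intensity
composer.addPass(colorPass);

// In the render loop, call composer.render() instead of renderer.render(scene, camera).
```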
In any event, what you are looking for can be done more easily by using an unlit material like MeshBasicMaterial.
WebGLRenderer.outputEncoding was introduced with r112. You are using r105 in your fiddle. I’ve updated to the latest version (r116.1) in my fiddle, so setting this property actually shows an effect.
I know I shouldn’t expect a 1:1 match (sorry, the question was a bit misleading on that), and I also know how to do that if I want it. I was just surprised to see the contrast and hue get so washed out (I have various other examples where the texture has a clear pattern, but the rendering just shows an essentially homogeneous, washed-out color). I know that exposure, light settings and material parameters influence the rendering, but so far I haven’t been able to find any combination that makes these patterns come out more pronounced.
I’ll play with
To expand on this a little. There are two conversions happening:
Linear -> sRGB (lossless)
HDR -> LDR (tone mapping, lossy)
Suppose the renderer has created two bright red colors, in linear HDR space, say (4,0,0) and (6,0,0).
If you don’t do tone mapping (using either NoToneMapping or LinearToneMapping), these will be converted to sRGB:
(4,0,0) -> (1.82, 0, 0)
(6,0,0) -> (2.17,0,0)
Next, they are dumped onto a normal LDR screen. However the screen can’t handle values >1, so they both get clamped:
(1.82, 0, 0) -> (1,0,0)
(2.17,0,0) -> (1,0,0)
The result is that all detail in bright areas of your scene is lost. Tone mapping operators (ACES Filmic, Reinhard, etc., but not linear) remap these colors to LDR space (so all color components are <= 1) before the Linear -> sRGB conversion happens.
That matches my understanding, thanks. For this particular fiddle, if I use LinearToneMapping, the contrast is fine. That suggests that the linear HDR image of three.js’s internal representation is fine. ACESFilmicToneMapping somehow feels the need to squash the colors of my texture together, making the pattern wash out. The question is ‘why?’. It is obviously not necessary, as no values were clipped when doing LinearToneMapping. Is there anything I can do to make ACES keep the colors of my texture further apart? Do I need to change the fragment shader to ‘artificially’ bump up the contrast on my material, or is there a simpler way?
It is obviously not necessary, as no values were clipped when doing LinearToneMapping.
It is necessary. You are trying to fit a lot more color information (HDR) into a more limited range (LDR) while keeping at least some contrast in the darks and lights that would otherwise clip. In order to have that resolution to create contrast for more colors, you have to “make room”, which means compressing the original LDR range into a narrower band, and therefore you lose color/contrast.
LinearToneMapping does nothing unless you’ve changed the WebGLRenderer’s toneMappingExposure to something other than 1. It also uses a completely different technique, so it’s not a good thing to compare against.
Here’s the ACES filmic tone mapping curve taken from this article:
You can see that the original LDR range is squished down into something like [0.1, 0.8], which is significant. Unfortunately, that’s the price to pay for displaying more range.
Is there anything I can do to make ACES keep the colors of my texture further apart?
You can do something like a sharpness filter to increase local contrast across the scene or you could use a color LUT to do color grading and map certain color ranges to what you want – neither of those are built into three.js, though. You could look into the three.js adaptive tone mapping pass, which adapts tonemapping per pixel over time or write your own type of tone mapping that crunches values into a range you are happier with.
LinearToneMapping is not actually tone mapping. As far as I have been able to figure out, three.js took the name from Substance Painter (or maybe they both come from some previous work). It’s a linear exposure control function. However, it is not a tone mapping function, so the name is unfortunate.