As far as I have been able to find out, tone mapping should always be the final step in a linear rendering pipeline. This means that the correct approach when using post-processing is to set renderer.toneMapping = NoToneMapping, and then add a tone mapping pass as the final pass in the EffectComposer chain.
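In code, I believe the setup would look roughly like this (AdaptiveToneMappingPass is just one stand-in for a final tone mapping pass; the exact pass and parameters are placeholders, not taken from any example):

```js
import * as THREE from 'three';
import { EffectComposer } from 'three/examples/jsm/postprocessing/EffectComposer.js';
import { RenderPass } from 'three/examples/jsm/postprocessing/RenderPass.js';
import { AdaptiveToneMappingPass } from 'three/examples/jsm/postprocessing/AdaptiveToneMappingPass.js';

// Keep the renderer's output linear; tone mapping happens in the composer instead.
renderer.toneMapping = THREE.NoToneMapping;

const composer = new EffectComposer( renderer );
composer.addPass( new RenderPass( scene, camera ) );

// ...HDR post-processing passes (bloom etc.) would go here...

// Tone map last, collapsing HDR values down to display range.
composer.addPass( new AdaptiveToneMappingPass( true, 256 ) );
```

(Depending on the three.js version, you may also need to pass EffectComposer a float-type render target so the intermediate passes actually preserve HDR values.)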
However, the official tone mapping example demonstrates the various built-in tone mapping algorithms in combination with the EffectComposer. There’s no actual post-processing taking place (just a CopyPass), so nothing looks incorrect. But if this example is used as a template, any post-processing passes added will run on a render target with reduced dynamic range (i.e. an LDR target rather than an HDR target).
Is the example demonstrating an incorrect process? Or am I making an incorrect assumption somewhere (for example, are three.js post-processing passes designed to work in LDR)?
I’m not sure why EffectComposer is used in this example. webgl_tonemapping should be an example of inline tone mapping, meaning tone mapping is implemented directly in the material shader.
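That is, something like this, with no composer involved at all (the exposure value is just an example):

```js
// Inline tone mapping: applied at the end of each material's fragment shader.
renderer.toneMapping = THREE.ACESFilmicToneMapping;
renderer.toneMappingExposure = 1.0; // example value
```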
BTW: The default render mode is “Renderer”, which means EffectComposer is only used if you select it. We might want to remove this option and refer to webgl_shaders_tonemapping instead, since that example performs tone mapping via post-processing.
BTW, I’ve done a bit more research: while most post-processing passes are generally done in HDR, some are done in LDR (after the tone mapping pass). Generally the ones that are a final artistic tweak, such as color correction or brightness/contrast, are done in LDR.
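As a rough sketch of that ordering (the specific passes and uniform values here are only illustrative, not a recommendation):

```js
import { ShaderPass } from 'three/examples/jsm/postprocessing/ShaderPass.js';
import { UnrealBloomPass } from 'three/examples/jsm/postprocessing/UnrealBloomPass.js';
import { BrightnessContrastShader } from 'three/examples/jsm/shaders/BrightnessContrastShader.js';

// HDR pass: runs before tone mapping so bright values are still available.
composer.addPass( new UnrealBloomPass( new THREE.Vector2( width, height ), 1.0, 0.4, 0.85 ) );

// Tone mapping pass goes here (see the earlier sketch).

// LDR passes: final artistic tweaks applied after tone mapping.
const bcPass = new ShaderPass( BrightnessContrastShader );
bcPass.uniforms[ 'brightness' ].value = 0.05; // example values
bcPass.uniforms[ 'contrast' ].value = 0.1;
composer.addPass( bcPass );
```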