R3F / Drei -> Image component with videoTexture. How do I fix brightness/tone mapping issue?

This one is a doozy. I feel like I’m missing something small.

I have decided to use Drei's Image component for video because it lets me give the component any size while the videoTexture behaves like CSS background: cover — it fills the entire area while maintaining its aspect ratio. This functionality is mandatory.
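To illustrate the cover behavior I'm relying on, here's the aspect-ratio math in plain JS (coverUv is a made-up helper name for illustration, not Drei's actual code, though its shader does something very similar):

```javascript
// CSS background: cover style fit, mirroring the UV math in the Image
// shader: scale the sampled UV window so the texture always covers the
// plane while keeping its aspect ratio, centered on the overflow axis.
function coverUv(u, v, planeW, planeH, imgW, imgH) {
  const rs = planeW / planeH; // plane (target) aspect ratio
  const ri = imgW / imgH;     // texture (source) aspect ratio
  // Shrink the sampled window along the axis where the texture overflows
  const scale = rs < ri ? [rs / ri, 1] : [1, ri / rs];
  // Center the window inside the texture
  const offset = [(1 - scale[0]) / 2, (1 - scale[1]) / 2];
  return [u * scale[0] + offset[0], v * scale[1] + offset[1]];
}

// A 2:1 video on a square plane samples only the middle half horizontally:
// coverUv(0, 0, 1, 1, 2, 1) -> [0.25, 0]
```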

That being said, when I do this I run into the same issue no matter what: whether or not I set toneMapped to false on the material, or set the encoding on the videoTexture or the gl renderer to sRGB, it renders way too bright.

Here is a sandbox: Video textures - Drei Image Bug - CodeSandbox

And here is a screenshot of the brightness/difference:

Any ideas? Seriously any help is much appreciated, I think I’m losing my mind.

it doesn’t seem to make much sense; it’s these two includes: drei/Image.tsx at 555ea0748a8b7f35744e69efe2c6ea598c7299ee · pmndrs/drei · GitHub

    #include <tonemapping_fragment>
    #include <encodings_fragment>

if i remove them it’s OK, but that cannot be correct; shaders must have these includes. @donmccurdy is VideoTexture perhaps an outlier?

threejs is legacy = off
encoding = sRGB
tonemapping = filmic
the texture has encoding=sRGB and toneMapped=false
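in plain three.js terms, that setup corresponds to something like this (property names from the three release current at the time; newer versions renamed outputEncoding to outputColorSpace and legacyMode to ColorManagement.enabled):

```javascript
// renderer-level color setup matching the settings above
THREE.ColorManagement.legacyMode = false;           // legacy = off
renderer.outputEncoding = THREE.sRGBEncoding;       // encoding = sRGB
renderer.toneMapping = THREE.ACESFilmicToneMapping; // tonemapping = filmic

// texture- and material-level flags
videoTexture.encoding = THREE.sRGBEncoding;
material.toneMapped = false;
```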

i wouldn’t have imagined so… the official three examples use standard material types to render the VideoTexture, whereas drei/Image.tsx uses a shader material that does not explicitly use a toneMapped uniform… i’m not entirely sure, but it looks like the ShaderMaterial needs to be re-jigged a little to give access to the toneMapped uniform inside the <tonemapping_fragment>. though, i would have thought #include automatically propagated the uniforms into the shader material

i believe only RawShaderMaterial is missing that uniform? ShaderMaterial should have it. i don’t think i have ever seen this before, and i use ShaderMaterial often. normally toneMapped={false} takes care of the pale colors.


when i add #include <tonemapping_pars_fragment>
it complains about a re-definition. i think that confirms it: the uniforms should be there.


@drcmda @Lawrence3DPK thanks a ton for looking into this today guys. I seriously appreciate it, this one left me scratching my head.

@drcmda ah yes, I do see that removing those includes seems to do the trick, but admittedly I’m a little new to the WebGL/Three.js world and want to understand the pitfalls of doing this, even for a standalone component (you mentioned shaders must have these includes).

If there’s any other ideas that follow the Three.js “way of doing things”, do let me know, and thanks again for all the help with this.

Edit: Not to throw another wrench in things, but after trying your fragment-shader adjustment, @drcmda, I did notice it worked. However, if <EffectComposer /> is added from @react-three/postprocessing, it turns bright again. Dunno if this sparks an idea of what it may be. Here’s a sandbox illustrating what I’m talking about.

In most cases, the output of a video-driven shader material should be sRGB-encoded and not tone mapped. Use tone mapping if the video drives an albedo, emissive, or other PBR material input, but that is rare.

That said… excluding these fragments doesn’t feel like the right way to achieve the fix; it’ll break other things later (post-processing comes to mind). If excluding “encodings_fragment” improves the result, then it probably means the video was never decoded from sRGB to Linear-sRGB to begin with. Currently that needs to be done in a shader …

… because we can’t use WebGL’s hardware sRGB decoding for videos right now. I think Drei’s shader material will require a similar decoding block for the video texture here, so that the encoding block starts from the expected color space.
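For reference, the decode in question is the standard sRGB transfer function, applied per channel. In JS it would look like this (function name is mine):

```javascript
// sRGB -> Linear-sRGB decode (the piecewise sRGB EOTF), per channel.
// This is the conversion a shader would need to apply to video texels
// so that the later encoding block starts from the expected color space.
function sRGBToLinear(c) {
  return c <= 0.04045 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
}
```

Mid-gray 0.5 decodes to roughly 0.214, which is about the brightness jump people see when this decode is skipped.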


thank you, as always, for the in-depth explanation! the Image component is not specifically for video content; mostly it’s been used for images. i think this confirms what i thought: we’ve been treating it like any other shader, adding the required includes. it seems that video is the outlier, and the component will have to be copied into userland and stripped of the encoding segment.


@drcmda @donmccurdy thank you for looking into this. I appreciate the digging.

@drcmda even when the encoding segment is stripped, if <EffectComposer /> is rendered in the Canvas we run into the same issue. The beautiful hack I ended up with: I replaced the current fragment shader on the <Image/> component with the following. I mix the final output RGB values with black through a soft-light blending mode and return the frag color. This works with the includes at the bottom, and the output is nearly identical to the original video.

// mostly from https://gist.github.com/statico/df64c5d167362ecf7b34fca0b1459a44
  varying vec2 vUv;
  uniform vec2 scale;
  uniform vec2 imageBounds;
  uniform vec3 color;
  uniform sampler2D map;
  uniform float zoom;
  uniform float grayscale;
  uniform float opacity;
  const vec3 luma = vec3(0.299, 0.587, 0.114);

  vec4 toGrayscale(vec4 color, float intensity) {
    return vec4(mix(color.rgb, vec3(dot(color.rgb, luma)), intensity), color.a);
  }

  vec2 aspect(vec2 size) {
    return size / min(size.x, size.y);
  }

  // This is the most important fucking thing here.
  // Without this we run into the tone-mapping issue,
  // and if EffectComposer is active the output turns bright.
  // Taken from https://github.com/mattdesl/glsl-blend-soft-light/blob/fb229db060063fe2a1b298a102930e527704877b/index.glsl
  vec3 blendSoftLight(vec3 base, vec3 blend) {
    return mix(
      sqrt(base) * (2.0 * blend - 1.0) + 2.0 * base * (1.0 - blend),
      2.0 * base * blend + base * base * (1.0 - 2.0 * blend),
      step(base, vec3(0.5))
    );
  }

  void main() {
    vec2 s = aspect(scale);
    vec2 i = aspect(imageBounds);
    float rs = s.x / s.y;
    float ri = i.x / i.y;
    vec2 new = rs < ri ? vec2(i.x * s.y / i.y, s.y) : vec2(s.x, i.y * s.x / i.x);
    vec2 offset = (rs < ri ? vec2((new.x - s.x) / 2.0, 0.0) : vec2(0.0, (new.y - s.y) / 2.0)) / new;
    vec2 uv = vUv * s / new + offset;
    vec2 zUv = (uv - vec2(0.5, 0.5)) / zoom + vec2(0.5, 0.5);

    // get the final return value of the original shader
    vec4 finalReturnValue = toGrayscale(texture2D(map, zUv) * vec4(color, opacity), grayscale);

    // blend it with black to darken it; without this it is too light
    vec3 blendedColor = blendSoftLight(finalReturnValue.rgb, vec3(0.0));

    // return the blended rgb with the original alpha
    gl_FragColor = vec4(blendedColor, finalReturnValue.a);
    #include <tonemapping_fragment>
    #include <encodings_fragment>
  }

I apologize in advance to anyone who is passionate about R3F, or shaders, or solving things the correct way.
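For what it's worth, I think I can see why blending with black helps: plugging blend = 0.0 into that soft-light formula reduces it to base * base for channels at or below 0.5 (and 2.0 * base - sqrt(base) above), which behaves like a rough gamma-2 decode, approximately undoing the extra sRGB encode. Here is my own JS transcription of the GLSL, per channel:

```javascript
// Per-channel port of the GLSL blendSoftLight above. With blend = 0 it
// darkens like an approximate gamma-2 (sRGB-ish) decode, which is why
// the hack cancels the double encode.
function blendSoftLight(base, blend) {
  return base <= 0.5
    ? 2 * base * blend + base * base * (1 - 2 * blend)
    : Math.sqrt(base) * (2 * blend - 1) + 2 * base * (1 - blend);
}

// blendSoftLight(0.25, 0) -> 0.0625  (i.e. 0.25 squared)
// blendSoftLight(1, 0)    -> 1       (white stays white)
```

So the blend-with-black is effectively a crude linearization, which is why the result lands so close to the properly decoded video.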


It sounds like we may be able to make the proper fix, now that an upstream issue has been fixed in Chrome… Reconsider removal of inline sRGB decode · Issue #23803 · mrdoob/three.js · GitHub … if that works out, then video textures could be treated like other textures.


@donmccurdy Thank you so much for helping with this :pray: