Black Video Texture for Chroma Key Video in Safari (Only Happens Intermittently)

Hello! I’m struggling with an issue that appears to only happen in Safari, and it’s difficult to reproduce consistently. I’m loading one (or more) chroma key videos in our three.js scene, and occasionally the video(s) get stuck as a black texture.

I would post code, but I’m working with proprietary code and the source file is 2000+ lines long. It works most of the time in Safari, and 100% of the time on Chrome/Android, so I have a feeling we’re loading the videos correctly… but something in Safari/WebKit is tripping us up.

I am hoping someone here has prior experience working with three.js in Safari and can share tips on how to load chroma key videos in Safari with a 100% success rate. Perhaps I’m missing a WebKit-specific operation in my JavaScript? Thanks, everyone!

Here are some images to demonstrate what I’m talking about.


My only experience with Safari is that WebGL support is random/abysmal, and you can burn an infinite number of brain cells trying to figure something out, only for it to change/be fixed/move on with the next update.

If the behavior is sporadic, though, that’s sometimes a sign of resource exhaustion: preloading too much, say, or blowing out some internal cache.

Thanks for the input! I’ll keep that in mind and relay it to my colleagues. I agree with your first point, but unfortunately my bosses have emphasized that our clients will be using iPhones 90% of the time (and based on our current data, that has been true), so I’m stuck with Safari for the foreseeable future.

Sadly, that sounds like it’s out of my hands, unless there’s a way for us to manually pace how our resources are loaded (I think all our assets are loaded asynchronously with promises). Beyond sync vs. async loading, I don’t believe I have much power over how Safari loads resources (or any browser, for that matter). It would be a different story if I were working with something like C or C++, but alas, this is web development haha.
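You actually do have some control here: if pacing turns out to matter, promises can be awaited one at a time instead of firing everything in parallel. A minimal sketch, assuming each loader is a function that returns a promise (the names here are placeholders, not your actual code):

```javascript
// Load assets strictly one at a time instead of all at once.
// `loaders` is an array of functions that each return a promise.
async function loadSequentially(loaders) {
  const results = [];
  for (const load of loaders) {
    // Awaiting inside the loop forces each load to finish
    // before the next request starts.
    results.push(await load());
  }
  return results;
}
```

Compared to `Promise.all(loaders.map((fn) => fn()))`, this trades total load time for a much gentler peak on memory and network, which is the kind of pressure Safari seems most sensitive to.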

Any other comments and perspectives are still welcome!

Chrome preloads resources (via Google’s proxy/CDN) and may serve them combined/minified. Chrome also has different rules for https/http than Safari, and applies progressive/fallback technology. Beyond that, web hosts (servers) are known to throttle bandwidth; I have had useless paid hosting plans.

Maybe, if the videos are self-hosted, you could upload them to YouTube, Vimeo, or another professional video provider. Or at least log your headers and video metadata state (on multiple devices) and see what you actually get (file size?). For instance:

  • does the poster display, or
  • does an alternate source load, or
  • does a preset codec for encoding/quality change anything
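To make the “log your video meta state” idea concrete, here’s a hedged sketch of a helper that snapshots the fields worth comparing across devices (the field meanings follow the HTMLMediaElement spec; the `suspicious` heuristic is my own guess at what correlates with a black texture):

```javascript
// Snapshot the state of a <video> element for logging/telemetry.
// Works on anything video-shaped, so it can also be unit-tested with a stub.
function videoDiagnostics(video) {
  return {
    readyState: video.readyState,     // 0..4; < 2 means no frame decoded yet
    networkState: video.networkState, // 3 = NETWORK_NO_SOURCE
    videoWidth: video.videoWidth,     // 0 often means metadata never arrived
    videoHeight: video.videoHeight,
    currentSrc: video.currentSrc,
    error: video.error ? video.error.code : null,
    // Guess: a black VideoTexture usually correlates with one of these.
    suspicious: video.readyState < 2 || video.videoWidth === 0,
  };
}
```

Logging this object on every device where the bug appears (and one where it doesn’t) should tell you whether Safari is failing to fetch, failing to decode, or something else entirely.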

P.S. Why just the spin wheel video, and not the main video?

That’s great to know! I had no idea Chrome and Safari loaded resources differently like that; I assumed that kind of low-level plumbing was standardized across browsers, but obviously I need to manage my expectations about the black-box stuff done by Google vs. Apple vs. Mozilla. I’m still a fairly young web dev, so this is all great info.

My team has discussed making all of our videos conform to a single MP4 configuration (encoding, bitrate, file size, etc.) using a third-party converter API. Perhaps our loading issue might naturally get fixed if all our video files are uniformly smaller (based on your feedback about throttling, I’m hoping this is the case, but who knows until we try). Also, we host our videos in Firebase Storage; I’m unaware of any throttling it might do when serving our videos.

The “spin wheel” video is a chroma key video (main difference comes down to the material configuration that gets loaded when creating the mesh). I’ll post some code for context.

// import { VideoTexture, LinearFilter, RGBAFormat, SRGBColorSpace, sRGBEncoding,
//   Color, MeshBasicMaterial, ShaderMaterial, DoubleSide, PlaneGeometry, Mesh } from 'three';

const videoElement = asset.ref;
const videoTexture = new VideoTexture(videoElement);
videoTexture.minFilter = LinearFilter;
videoTexture.magFilter = LinearFilter;
videoTexture.format = RGBAFormat;

// `colorSpace` is the current API (r152+); `encoding` is the legacy name for the
// same setting and was removed in newer releases, so keep whichever one matches
// your three.js version rather than setting both
videoTexture.colorSpace = SRGBColorSpace;
videoTexture.encoding = sRGBEncoding;

AppLog('scale', asset.asset.coords.s);
const scale = asset.asset.coords.s;
const relscale = scale * 0.6;
const width = videoElement.videoWidth;
const height = videoElement.videoHeight;
const videoAspectRatio = width / height;
const planeWidth = videoAspectRatio * relscale;
const planeHeight = relscale;
const planeAspectRatio = planeWidth / planeHeight;

let material = new MeshBasicMaterial({ map: videoTexture });
if (asset.asset.chromaKeyColor) {
  const color = new Color(asset.asset.chromaKeyColor);
  material = new ShaderMaterial({
    uniforms: {
      tex: { value: videoTexture },
      keyColor: { value: color },
      texWidth: { value: width },
      texHeight: { value: height },
      similarity: { value: 0.01 },
      smoothness: { value: 0.18 },
      spill: { value: 0.1 },
    },
    vertexShader: VERTEX_SHADER,
    fragmentShader: FRAGMENT_SHADER,
    transparent: true,
  });
}
material.shadowSide = DoubleSide;

// (a vwidth/vheight fallback branch was removed here due to an eslint warning)
AppLog('videoAspectRatio', videoAspectRatio);
AppLog('planeAspectRatio', planeAspectRatio);
let textureOffsetX = 0;
let textureOffsetY = 0;
// Fit the video to the plane by cropping via repeat/offset
if (videoAspectRatio > planeAspectRatio) {
  // Video is wider (more landscape) than the plane: crop horizontally
  const repeatX = planeAspectRatio / videoAspectRatio;
  videoTexture.repeat.set(repeatX, 1);
  textureOffsetX = (1 - repeatX) / 2; // center horizontally
} else {
  // Video is taller (more portrait) than the plane: crop vertically
  const repeatY = videoAspectRatio / planeAspectRatio;
  videoTexture.repeat.set(1, repeatY);
  textureOffsetY = (1 - repeatY) / 2; // center vertically
}

videoTexture.offset.set(textureOffsetX, textureOffsetY);
const vidplane = new PlaneGeometry(planeWidth, planeHeight);
const mesh = new Mesh(vidplane, material);
mesh.scale.set(scale, scale, scale);
mesh.position.set(asset.asset.coords.p[0], asset.asset.coords.p[1], asset.asset.coords.p[2]);
mesh.rotation.set(asset.asset.coords.r[0], asset.asset.coords.r[1], asset.asset.coords.r[2]);
mesh.name = id;
mesh.castShadow = true;

For anyone curious, here is the shader code for VERTEX_SHADER and FRAGMENT_SHADER


// All "#include" and "precision" directives were added to address a z-layering problem when using logarithmicDepthBuffer in the three.js config
// See more info here: https://discourse.threejs.org/t/shadermaterial-render-order-with-logarithmicdepthbuffer-is-wrong/49221/3

const VERTEX_SHADER = `
// Set the precision for data types used in this shader
precision highp float;
precision highp int;

#include <common>
#include <logdepthbuf_pars_vertex>
varying vec2 vUv;

void main() {
  vUv = uv;
  gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );
  #include <logdepthbuf_vertex>
}
`

const FRAGMENT_SHADER = `

precision highp float;
precision highp int;

uniform sampler2D tex;
uniform float texWidth;
uniform float texHeight;

uniform vec3 keyColor;
uniform float similarity;
uniform float smoothness;
uniform float spill;

#include <common>
#include <logdepthbuf_pars_fragment>

varying vec2 vUv;

// From https://github.com/libretro/glsl-shaders/blob/f3dc75a3bb57ac83801b3873617feecfb34d6c78/nnedi3/shaders/rgb-to-yuv.glsl
vec2 RGBtoUV(vec3 rgb) {
  return vec2(
    rgb.r * -0.169 + rgb.g * -0.331 + rgb.b *  0.5    + 0.5,
    rgb.r *  0.5   + rgb.g * -0.419 + rgb.b * -0.081  + 0.5
  );
}

vec4 ProcessChromaKey(vec2 texCoord) {
  vec4 rgba = texture2D(tex, texCoord);
  float chromaDist = distance(RGBtoUV(rgba.rgb), RGBtoUV(keyColor));

  float baseMask = chromaDist - similarity;
  float fullMask = pow(clamp(baseMask / smoothness, 0., 1.), 1.5);
  rgba.a = fullMask;

  float spillVal = pow(clamp(baseMask / spill, 0., 1.), 1.5);
  float desat = clamp(rgba.r * 0.2126 + rgba.g * 0.7152 + rgba.b * 0.0722, 0., 1.);
  rgba.rgb = mix(vec3(desat, desat, desat), rgba.rgb, spillVal);

  return rgba;
}

void main(void) {
  #include <logdepthbuf_fragment>
  vec2 texCoord = vUv;
  gl_FragColor = ProcessChromaKey(texCoord);
}
`

export {
  VERTEX_SHADER,
  FRAGMENT_SHADER,
}
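For debugging off the GPU, the mask math above ports directly to plain JavaScript; feeding it known RGB values shows whether the alpha it produces is sane. This is a sketch that mirrors only `RGBtoUV` and the baseMask/fullMask steps (not the spill pass), with colors as `[r, g, b]` in 0..1:

```javascript
// CPU port of the shader's chroma mask, for sanity-checking values.
function rgbToUV([r, g, b]) {
  return [
    r * -0.169 + g * -0.331 + b * 0.5 + 0.5,
    r * 0.5 + g * -0.419 + b * -0.081 + 0.5,
  ];
}

function chromaAlpha(rgb, keyColor, similarity = 0.01, smoothness = 0.18) {
  const [u1, v1] = rgbToUV(rgb);
  const [u2, v2] = rgbToUV(keyColor);
  const chromaDist = Math.hypot(u1 - u2, v1 - v2);
  const baseMask = chromaDist - similarity;
  // clamp(baseMask / smoothness, 0, 1) ^ 1.5, as in the fragment shader
  return Math.pow(Math.min(Math.max(baseMask / smoothness, 0), 1), 1.5);
}
```

A pixel equal to the key color should come back with alpha 0, and a clearly different color with alpha 1; if real frame pixels land in between, the `similarity`/`smoothness` window is what needs tuning.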

Maybe video would be bulletproof if WebComponents and R3F had better interop. :skateboard: If I were hunting this bounty, I would aim at a few off-ramps:

  • possibly your shader samples an RGB value and converts it to a bad result. Debug the pixel input/output/alpha and see whether a number goes out of range: string, NaN, undefined, -Infinity… Even a rounding error from downsampled alias precision can break your entire shader. JavaScript can treat math equalities poorly.
  • (continued) to reiterate: is the source video bad, or is it actually the alpha mask? Perhaps your suppliers are on different shifts. “Ha ha.” I’m not avoiding a direct code response (see above), but is the actual problem consistency, encoding, the server, or the alpha sample(s)?
  • verify the supplied video(s) have a consistent alpha channel, or pipe them through an encoder.
  • objectively… since the video is a 2D orthographic overlay, use a non-three UI layer and put it in the corner. Alter the design and avoid the import.
  • objectively… use CSS3D, or build the wheel as geometry components in the native three.js ecosystem.
Sorry for the iffy grind, my undisclosed sponsor was taping a segment.

Why would JavaScript math equalities be relevant to a shader operation?


I guess I’m just used to TypeScript, where everything “just works”.