What's the fastest way to copy a webcam frame to a WebGLRenderTarget?

Greetings to the gurus,
I’ve been struggling with this one for too long…

I need to post-process the device camera frames with a custom shader, so copying the actual image data to the render target must be as cheap as possible, to leave CPU room for the heavy post-processing.

So far I have failed using `grabFrame()`, `Blob`, and `VideoTexture`.

For instance, this doesn’t work (`CamTex` is the render target):

```javascript
imageCapture.grabFrame().then(function (imageBitmap) {
  // ...tried to copy imageBitmap into CamTex here
});
```

Neither does this:

```javascript
CamTex.texture.getContext('2d').drawImage(imageBitmap, 0, 0);
```

Perhaps the answer is in the official example three.js webgl - materials - video - webcam, where the webcam feed is rendered to a VideoTexture that is used in a scene, but… it’s a texture, not a render target, and whatever I tried there failed too. Obviously I’m not doing it right in any of the above approaches.

I want to avoid drawing the frame(s) on a canvas and then copying from the canvas, etc.
A video frame is a lot of bytes, not …peanuts(!), so I presume there has to be an efficient way, right? :wink:

You don’t need a render target for this. You can use a video texture as the input for your shader.
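A minimal sketch of that approach, assuming a `getUserMedia` camera stream; the uniform name `tDiffuse` and the shader bodies are illustrative placeholders, not from this thread:

```javascript
// Feed a <video> element from the device camera, then wrap it in a
// THREE.VideoTexture and sample it directly in a custom shader.
const video = document.createElement('video');
video.playsInline = true;
video.muted = true;

navigator.mediaDevices.getUserMedia({ video: { facingMode: 'environment' } })
  .then(function (stream) {
    video.srcObject = stream;
    video.play();
  });

// VideoTexture re-uploads the current video frame whenever it is rendered,
// so no intermediate canvas copy is needed.
const camTexture = new THREE.VideoTexture(video);
camTexture.minFilter = THREE.LinearFilter;
camTexture.magFilter = THREE.LinearFilter;

const material = new THREE.ShaderMaterial({
  uniforms: { tDiffuse: { value: camTexture } },
  vertexShader: `
    varying vec2 vUv;
    void main() {
      vUv = uv;
      gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
    }`,
  fragmentShader: `
    uniform sampler2D tDiffuse;
    varying vec2 vUv;
    void main() {
      gl_FragColor = texture2D(tDiffuse, vUv); // custom post-processing goes here
    }`
});
```

Apply the material to a full-screen quad (or any mesh) and render as usual; the GPU does the frame upload.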

I’m trying this; I’m not getting an error, but I’m not getting an image rendered either…
The image displays in the hidden HTML video element if I unhide it, but I get a black screen from the shader.

Got it! I was passing the video texture to the shader as CamTex.texture, as we do with render targets. When I changed that to just CamTex, it worked. Thank you very much, Mugen87!
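For anyone hitting the same black screen, the distinction looks like this (the uniform name `tDiffuse` is illustrative):

```javascript
// A WebGLRenderTarget wraps a framebuffer; the sampleable texture
// lives on its .texture property:
material.uniforms.tDiffuse.value = renderTarget.texture;

// A VideoTexture IS already a texture, so it is passed directly.
// Reading .texture on it yields undefined, which samples as black:
material.uniforms.tDiffuse.value = camTexture;            // works
// material.uniforms.tDiffuse.value = camTexture.texture; // undefined → black screen
```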


The size parameters in renderer.setSize() are used by the renderer to set the viewport only when rendering to the screen.

When the renderer renders to an offscreen render target, the size of the texture rendered to is given by the parameters renderTarget.width and renderTarget.height.

So the answer to your question is that it is OK to use the same renderer for both; there is no inefficiency.
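A short sketch of the two paths sharing one renderer; the target size and the scene/camera names are illustrative:

```javascript
// Screen pass: the viewport comes from renderer.setSize().
renderer.setSize(window.innerWidth, window.innerHeight);

// Offscreen pass: the viewport comes from the render target itself.
const target = new THREE.WebGLRenderTarget(1024, 512);
renderer.setRenderTarget(target);
renderer.render(offscreenScene, offscreenCamera); // rasterized at 1024×512

// Back to the canvas, at the setSize() dimensions again.
renderer.setRenderTarget(null);
renderer.render(scene, camera);
```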