Greetings to the gurus,
I've been struggling with this one for far too long…
I need to post-process the device camera frames with a custom shader, so copying the actual image data into the render target must be as cheap as possible, to leave CPU room for the heavy post-processing.
So far I have failed using `ImageCapture.grabFrame()`, a `Blob`, and a `VideoTexture`.
For instance, this doesn't work (`CamTex` is the render target):

```javascript
CamTex.texture.getContext('2d').drawImage(imageBitmap, 0, 0);
```
Perhaps the answer is in this example, three.js webgl - materials - video - webcam, where the webcam is rendered into a `VideoTexture` that is used in a scene, but… that's a texture, not a render target, and whatever I tried there failed too. Obviously, I'm not doing it right in any of the above approaches.
I want to avoid drawing the frame(s) to a canvas and then copying from the canvas, etc.
A video frame is a lot of bytes, not …peanuts(!), so I presume there has to be an efficient way to do this, right?