How to get the result of a rendered scene as an RGBA buffer in memory?

Once I’ve rendered a scene, is there a way to get the resulting bitmap as an RGBA buffer in JavaScript?

I’m creating my renderer like this:

const renderer = new THREE.WebGLRenderer({
    antialias: true,
    alpha: true,
    preserveDrawingBuffer: true
});

I would like to get the rendered bitmap as a Uint8Array of size width * height * 4. Is this possible with three.js? Also, performance is a concern for me, so I’d prefer a solution that doesn’t involve copying.

You are rendering on the GPU, so some copying is unavoidable, and reading pixels back from a WebGLRenderTarget is slow and blocking, not suited for realtime. If you only need a screenshot, you can also just draw your WebGL canvas into a regular canvas with drawImage and get the pixel data with getImageData. That way antialiasing keeps working, and the buffer you get is a Uint8ClampedArray of size width * height * 4.
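For reference, here is a minimal sketch of that approach, assuming renderer is the WebGLRenderer created above and the code runs in a browser page:

const glCanvas = renderer.domElement; // the canvas three.js renders into

// Draw the WebGL canvas into a regular 2D canvas of the same size
const copyCanvas = document.createElement('canvas');
copyCanvas.width = glCanvas.width;
copyCanvas.height = glCanvas.height;
const ctx = copyCanvas.getContext('2d');
ctx.drawImage(glCanvas, 0, 0);

// RGBA pixel data, a Uint8ClampedArray of size width * height * 4
const pixels = ctx.getImageData(0, 0, copyCanvas.width, copyCanvas.height).data;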

Thanks, @Fyrestar.

To explain what I’m trying to accomplish: I’m trying to render a 3D scene on top of a realtime feed, and the scene itself changes over time as well. The catch is that the feed exists on a server (most likely I’ll need a headless browser - I have not tested this part yet). So what I’m trying to use three.js for is to render a scene for each frame of the feed and overlay it on top of that frame. What I need is a way to render the scene into an RGBA buffer and then use a library (like OpenCV) to merge the rendered scene into the live feed frame.

I explained all that to say that it’s not just one screenshot. And it needs to execute fast enough to keep up with the frame rate. Basically, I want to use three.js as a render engine.

BTW, having only recently been introduced to three.js, I’m not familiar with the term “regular canvas”. Could you please explain what you mean?

Thanks again.

That’s what it is :cat:

I just meant a canvas with a 2D context instead of a WebGL one (not related to THREE); you can pass a canvas to ctx.drawImage.

What kind of feed is that? A video stream? You could then also use it as a video texture in THREE, which would be by far the fastest approach. If that’s not possible, it depends on what you mean by “merge” specifically.
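For what it’s worth, the video texture route would look roughly like this (a sketch only, assuming video is an HTMLVideoElement that is already playing the stream and scene is your existing THREE.Scene):

const texture = new THREE.VideoTexture(video); // updates from the playing video
const material = new THREE.MeshBasicMaterial({ map: texture });
const screen = new THREE.Mesh(new THREE.PlaneGeometry(16, 9), material);
scene.add(screen);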

If you meant streaming the canvas to the client, there is an official HTML5 API to stream sources such as a canvas.
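That API is HTMLCanvasElement.captureStream; roughly, using the renderer from above:

// Capture the WebGL canvas as a MediaStream at ~30 fps
const stream = renderer.domElement.captureStream(30);
// The stream can then be sent via WebRTC or recorded with a MediaRecorder.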

OK, back to the problem definition.

I have a webcam in a Node.js process from which I read frames (images) periodically. So far, there’s no sign of a browser; it’s all in Node.js memory (Buffers and Uint8Arrays). Next, I work on the frame and manipulate it, then I write it to a fake webcam. Again, no browser so far.

As part of the manipulation step mentioned above, I want to render a 3D scene with an alpha channel and put it on top of the webcam frame (that’s what I mean by “merge”), like adding a hat. I might need a headless browser for rendering the 3D scene, but only if that can be fast enough. That doesn’t mean I can use the browser for the webcam as well, though: the webcam feed needs to be read by Node.js, since there are other components that work on the frame and those cannot be moved into a browser.
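Just to make the “merge” step concrete: it’s basically a source-over alpha composite of two RGBA buffers of the same size. In practice OpenCV (or a similar library) would handle it; the sketch below is only to illustrate the idea, and the names frame/overlay and the straight-alpha assumption are mine:

// frame: webcam image, overlay: rendered scene with alpha,
// both Uint8Arrays of size width * height * 4
function compositeOver(frame, overlay) {
    for (let i = 0; i < frame.length; i += 4) {
        const a = overlay[i + 3] / 255; // overlay alpha for this pixel
        frame[i]     = overlay[i]     * a + frame[i]     * (1 - a);
        frame[i + 1] = overlay[i + 1] * a + frame[i + 1] * (1 - a);
        frame[i + 2] = overlay[i + 2] * a + frame[i + 2] * (1 - a);
        // keep the webcam frame's alpha (usually 255)
    }
    return frame;
}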

To me, it sounds like the only solution is to move the rendering of the scene into Node.js alongside the other components, so they can all work on the frame. For that, I need to render the scene into an RGBA array of pixels within Node.js.
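From what I gather, the closest thing in three.js is the WebGLRenderTarget readback path that was mentioned as slow/blocking earlier. As a rough sketch (assuming renderer, scene and camera exist, and width/height are the frame dimensions), it yields exactly the width * height * 4 Uint8Array I’m after:

const target = new THREE.WebGLRenderTarget(width, height);
const pixels = new Uint8Array(width * height * 4);

renderer.setRenderTarget(target);
renderer.render(scene, camera);
renderer.readRenderTargetPixels(target, 0, 0, width, height, pixels);
renderer.setRenderTarget(null); // restore rendering to the canvas

// pixels now holds RGBA values; note that WebGL readback is bottom-to-top,
// so rows may need to be flipped vertically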

Correct me if I’m wrong, but when you talk about a regular canvas, that means I’m not using three.js to render the scene. If that’s so, what is going to render my scene? I mean, I can’t just draw the scene by hand; it’s pretty complicated.