Using a WebGLTexture as texture for Three.JS

I’m trying to load a raw WebGLTexture as a Texture for a Material. I’m generating this texture on the fly with raw WebGL, and I want to use it directly with three.js. (Reading the pixels back and writing a new image out of them would defeat the purpose; the idea is to stay on the GPU and keep things optimized.)

I tried what is stated in this StackOverflow question: javascript - Use WebGL texture as a Three.js Texture map - Stack Overflow

const forceTextureInitialization = function() {
    const material = new THREE.MeshBasicMaterial();
    const geometry = new THREE.PlaneBufferGeometry();
    const scene = new THREE.Scene();
    scene.add(new THREE.Mesh(geometry, material));
    const camera = new THREE.Camera();

    return function forceTextureInitialization(texture) {
        material.map = texture;
        renderer.render(scene, camera);
    };
}();

const texture = new THREE.Texture();
forceTextureInitialization(texture);

const texProps = renderer.properties.get(texture);
texProps.__webglTexture = myWebGLTexture;

But the texture I created is not showing up; for some reason it shows the render result of my own scene as the texture, which causes it to repeat infinitely. Funny glitch.

Any ideas how to achieve it? Thanks in advance.

There is currently no official way to use a custom WebGLTexture object. The code at StackOverflow is considered a hack since it accesses a private variable of the renderer.

The management of WebGLTexture objects inside the renderer is not trivial, since we want to share texture data whenever possible. So hacking in WebGL resources from outside is not something I would recommend, since it bypasses internal optimizations. I would rather think about a new GLTexture class (similar to GLBufferAttribute) that enables the usage of a custom WebGLTexture object. But frankly, I have a hard time understanding why it’s really necessary to use WebGLTexture objects at the app level. I tend to believe it’s better not to work with the raw WebGL context and to migrate the code to three.js entities instead.
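As a hedged illustration of staying within three.js entities: procedurally generated pixel data can be handed to THREE.DataTexture, letting the renderer own the underlying WebGLTexture. This is only a sketch; it assumes THREE is available (via a script tag or import), and `fillChecker` is a hypothetical helper, not part of three.js.

```javascript
// Hypothetical helper: fill an RGBA buffer with an 8x8-pixel checkerboard.
function fillChecker(width, height) {
    const data = new Uint8Array(width * height * 4);
    for (let y = 0; y < height; y++) {
        for (let x = 0; x < width; x++) {
            const i = (y * width + x) * 4;
            const v = ((x >> 3) + (y >> 3)) % 2 === 0 ? 255 : 0;
            data[i] = v; data[i + 1] = v; data[i + 2] = v; data[i + 3] = 255;
        }
    }
    return data;
}

// Wrap the CPU-side data in a DataTexture; three.js manages the GL upload.
function makeDataTexture(width, height) {
    const texture = new THREE.DataTexture(fillChecker(width, height), width, height);
    texture.needsUpdate = true; // ask the renderer to (re)upload the data
    return texture;
}
```

The resulting texture can then be used as a regular `material.map`, with no private renderer state involved.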

It’s not always possible to do everything in three.js. For a long, long time, three.js didn’t support any stencil operations. Does this mean that users should not use stenciling with WebGL?

I don’t think so; it’s a hardware-level operation for accelerated graphics, and a user should be able to use that feature. So rather than waiting 10 years for someone to randomly implement it, you could have used it with the raw WebGL context.

A lot of your posts seem “opinionated”? Do you mind if I ask why you often feel so strongly about particular topics? This approach may come across as offensive, since it sounds like you’re assuming that all users don’t know what they’re doing.

Sharing WebGL state between three.js and userland code is an absolute nightmare scenario for project maintainability. People do it, and that’s OK, but we’re always going to try to steer you to choices we can more reliably support. There’s really no way we can anticipate and deal with all of the ways that internal renderer updates might affect GL state, and if we start providing hacks to work around the first layer of issues, there are just going to be that many more issues to deal with later.

That being the case, I think it’s fair for @Mugen87 to be clear about our not recommending this path. If you’d like to describe problems that cannot be solved currently, I think we are happy to talk through possible solutions as long as we’re starting with the problem first.

Could WebGLRenderTarget be used instead here?
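A minimal sketch of the WebGLRenderTarget suggestion, assuming THREE is available as a global along with an existing `renderer`, and that the procedural content lives in its own scene/camera pair (`proceduralScene` and `proceduralCamera` are assumed names):

```javascript
// Draw the generated content into a render target on the GPU, then use its
// .texture like any other map — no pixel readback involved.
function renderToTarget(renderer, proceduralScene, proceduralCamera, width, height) {
    const target = new THREE.WebGLRenderTarget(width, height);
    renderer.setRenderTarget(target);   // redirect rendering into the target
    renderer.render(proceduralScene, proceduralCamera);
    renderer.setRenderTarget(null);     // back to the default framebuffer
    return target.texture;              // usable as material.map
}
```

This keeps all GL state inside three.js, which is exactly the maintainability point raised above.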


Thanks for the input, that actually makes sense. If there is a way to do something without touching the raw WebGL state, that will always be better, since it should be guaranteed to play nicely with other things changing the state.


So, the thing is… I’m using a library to generate the WebGLTexture, so I’d need to modify the library in order to make it produce the type of data three.js accepts.

But, good news: I found a way to have it rendered into a <canvas> element and then use it with CanvasTexture. It looks fast, but I don’t know if I’m creating a bottleneck doing it this way.
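For reference, the canvas route can be sketched roughly like this; it assumes THREE is available as a global and that the library redraws `sourceCanvas` (an assumed name) each frame:

```javascript
// Wrap the library's output canvas in a texture three.js can use directly.
function makeCanvasTexture(sourceCanvas) {
    return new THREE.CanvasTexture(sourceCanvas);
}

// Call once per frame after the library has redrawn the canvas, so three.js
// re-uploads the canvas contents to the GPU on the next render.
function updateCanvasTexture(texture) {
    texture.needsUpdate = true;
}
```

The per-frame cost here is the canvas-to-GPU upload triggered by `needsUpdate`, which is usually cheaper than a CPU readback but not free.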

I’ll keep this post updated if it becomes problematic to do it that way, or if I ever find a better solution.


If you’re doing it once in a while, it shouldn’t be that huge of a bottleneck (I think).

One use case I just faced was trying to get the WebXR AR background into MediaPipe for hand-gesture analysis.
As I understand it, I can only get a WebGLTexture from the camera-access feature (WebXR Raw Camera Access Module). MediaPipe currently only accepts an image, a video, or a canvas, not yet a WebGLTexture.
So in order to convert that texture to canvas, I do the following:

  • bind a framebuffer with that texture
  • call gl.readPixels
  • call canvasContext.putImageData
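The steps above can be sketched as follows (`gl`, `webglTexture`, the dimensions, and the 2D canvas context `ctx2d` are assumed to exist; note that `gl.readPixels` returns rows bottom-up, so they usually need flipping before `putImageData`):

```javascript
// Pure helper: flip image rows vertically (readPixels is bottom-up,
// putImageData expects top-down).
function flipRows(pixels, width, height) {
    const flipped = new Uint8ClampedArray(pixels.length);
    const rowBytes = width * 4;
    for (let y = 0; y < height; y++) {
        const src = y * rowBytes;
        const dst = (height - 1 - y) * rowBytes;
        flipped.set(pixels.subarray(src, src + rowBytes), dst);
    }
    return flipped;
}

// Sketch of the three listed steps.
function textureToCanvas(gl, webglTexture, ctx2d, width, height) {
    // 1. Bind a framebuffer with the texture as its color attachment.
    const fb = gl.createFramebuffer();
    gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
    gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
                            gl.TEXTURE_2D, webglTexture, 0);

    // 2. Read the pixels back to the CPU (the slow, pipeline-stalling step).
    const pixels = new Uint8Array(width * height * 4);
    gl.readPixels(0, 0, width, height, gl.RGBA, gl.UNSIGNED_BYTE, pixels);
    gl.bindFramebuffer(gl.FRAMEBUFFER, null);
    gl.deleteFramebuffer(fb);

    // 3. Write them into the 2D canvas.
    const flipped = flipRows(new Uint8ClampedArray(pixels.buffer), width, height);
    ctx2d.putImageData(new ImageData(flipped, width, height), 0, 0);
}
```

Step 2 is the killer: `readPixels` forces the GPU to finish all pending work before the data can be copied back, which is why the FPS drops.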

See the (very dirty) code here:

And, as expected, those steps decimate the FPS. While this can surely be improved, I don’t think I could avoid accessing the gl context directly. So maybe in this case having a GLTexture class could help?