Hey everyone
I’m currently building a Matterport-like 3D viewer where panoramic photos are projected onto a mesh.
Everything works fine with 512px face textures, and even with 1K faces.
But when I start assigning 2K or 4K face textures, I get a noticeable performance spike (sometimes a GPU spike together with a frame drop, sometimes only the frame drop) each time a new face texture is uploaded to the GPU.
This happens even though I’ve already optimized the loading pipeline as much as possible (condensed sketch after the list):
Using createImageBitmap() for decoding.
Uploading with gl.texImage2D() and renderer.initTexture() inside a requestAnimationFrame callback.
Loading faces one by one, not all at once.
Even moving all image loading into a Web Worker (off the main thread).
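For reference, here’s roughly what that pipeline looks like, condensed (the worker wiring, the textures map, and the message shape are simplified placeholders for my real code):

```js
// worker.js: fetch + decode off the main thread
self.onmessage = async (e) => {
  const blob = await (await fetch(e.data.url)).blob();
  const bitmap = await createImageBitmap(blob);
  // transfer the bitmap back zero-copy
  self.postMessage({ id: e.data.id, bitmap }, [bitmap]);
};

// main thread: at most one upload per rAF tick
const pending = [];
worker.onmessage = (e) => pending.push(e.data);

function tick() {
  const next = pending.shift();
  if (next) {
    gl.bindTexture(gl.TEXTURE_2D, textures[next.id]);
    // this one call is where the 2K/4K spike shows up
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, next.bitmap);
    next.bitmap.close();
  }
  requestAnimationFrame(tick);
}
requestAnimationFrame(tick);
```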
Despite all that, the GPU spike still happens the moment a 2K/4K texture is assigned to the shader. You can literally see it in the Performance Monitor.
I know that Matterport and similar systems use tiling (splitting each face into smaller sub-images), but implementing that seems tricky: WebGL2 only guarantees 16 active sampler2D uniforms per shader (a sampler2DArray can sidestep that, though layer limits depend on device capabilities), while a single 4K cube face can need dozens of tiles and the full cube hundreds (24–384 depending on tile size!).
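The one escape hatch I can think of for the sampler limit is a sampler2DArray, where many tiles sit behind a single uniform. A minimal sketch, assuming 512px tiles (so an 8x8 grid of 64 layers covers one 4K face; names and sizes are just examples):

```js
const TILE = 512, LAYERS = 64; // 8x8 grid of 512px tiles = one 4096px face
const tiles = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D_ARRAY, tiles);
gl.texStorage3D(gl.TEXTURE_2D_ARRAY, 1, gl.RGBA8, TILE, TILE, LAYERS);

// upload one decoded tile into a layer (a much smaller transfer than a full face)
function uploadTile(bitmap, layer) {
  gl.bindTexture(gl.TEXTURE_2D_ARRAY, tiles);
  gl.texSubImage3D(gl.TEXTURE_2D_ARRAY, 0, 0, 0, layer,
                   TILE, TILE, 1, gl.RGBA, gl.UNSIGNED_BYTE, bitmap);
}
// in GLSL: uniform sampler2DArray uTiles; texture(uTiles, vec3(uv, layer));
```

But I have no idea whether that’s what Matterport actually does.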
So my question is:
How do systems like Matterport manage smooth transitions and high-res tiled panos without hitting GPU upload stalls or uniform limits?
I’ve also tried atlases, but they end up too large, and updating them causes the same spikes.
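Roughly what my atlas attempt looked like (the 8192px size and the 2x2 slot math are illustrative): one big immutable texture, updated per-face with texSubImage2D, and that sub-upload stalls just like the full texImage2D did.

```js
const atlas = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, atlas);
// 8192x8192 RGBA8 is ~256 MB of VRAM, which is the "too large" part
gl.texStorage2D(gl.TEXTURE_2D, 1, gl.RGBA8, 8192, 8192);

// drop a decoded 4K face into one of the 2x2 slots
function placeFace(bitmap, slotX, slotY) {
  gl.bindTexture(gl.TEXTURE_2D, atlas);
  // this single 4K sub-upload spikes just like texImage2D did
  gl.texSubImage2D(gl.TEXTURE_2D, 0, slotX * 4096, slotY * 4096,
                   gl.RGBA, gl.UNSIGNED_BYTE, bitmap);
}
```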
Matterport’s view:
https://discover.matterport.com/space/9gCmRSBt3qa
My current viewer’s performance drop at the moments when it switches to high-res textures:
Any insights or best practices on the following would be much appreciated:
Progressive GPU uploads (chunked texSubImage2D? see the sketch after this list)
Async or incremental texture binding
Efficient tile management (beyond the 16-sampler limit)
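To be concrete about what I mean by “progressive uploads”: the best idea I have so far is pre-allocating the texture and streaming it in as horizontal bands, one small texSubImage2D per frame, something like this (the band height is a guess, not a tested value):

```js
// assumes the texture was allocated once up front, e.g.
// gl.texStorage2D(gl.TEXTURE_2D, 1, gl.RGBA8, bitmap.width, bitmap.height);
async function uploadProgressively(gl, tex, bitmap, bandH = 256) {
  for (let y = 0; y < bitmap.height; y += bandH) {
    const h = Math.min(bandH, bitmap.height - y);
    // crop one horizontal band out of the already-decoded bitmap
    const band = await createImageBitmap(bitmap, 0, y, bitmap.width, h);
    gl.bindTexture(gl.TEXTURE_2D, tex);
    // a 4096x256 band is 1/16 the data of a full 4K upload
    gl.texSubImage2D(gl.TEXTURE_2D, 0, 0, y, gl.RGBA, gl.UNSIGNED_BYTE, band);
    band.close();
    // yield a frame before the next band so rendering can keep up
    await new Promise(requestAnimationFrame);
  }
}
```

But I don’t know if this is what real-world viewers actually do, or whether the driver still stalls on the first draw call that samples the texture.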