Optimising many hundreds of texture images

Hi all,
First post here. I'm working on a visualisation based on this project: https://github.com/chrisrzhou/react-globe.

Here’s what it looks like atm:

This react-globe library positions interactive geo-located markers on a globe. My project ultimately needs to display close to 1000 images/markers. Progress has been pretty good so far: I've customised the module a fair bit to generate all the line markers (cone geometry) for each image as a single instanced mesh, using instanced buffer attributes for the white/yellow colour variation.
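Roughly equivalent to what I'm doing, expressed with THREE.InstancedMesh (simplified sketch; `markerPositions` and `isHighlighted` stand in for my real per-marker data):

```js
import * as THREE from 'three';

// One instanced mesh for all cone markers, with a per-instance colour
// for the white/yellow variation. markerPositions and isHighlighted
// are placeholders for the real per-marker data.
const markerCount = 800;
const cones = new THREE.InstancedMesh(
  new THREE.ConeGeometry(0.5, 2, 8),
  new THREE.MeshBasicMaterial(),
  markerCount
);

const dummy = new THREE.Object3D();
const colour = new THREE.Color();
for (let i = 0; i < markerCount; i++) {
  dummy.position.copy(markerPositions[i]); // lat/lon projected onto the globe
  dummy.lookAt(0, 0, 0);                   // point the cone at the globe centre
  dummy.updateMatrix();
  cones.setMatrixAt(i, dummy.matrix);
  cones.setColorAt(i, colour.set(isHighlighted[i] ? 0xffff00 : 0xffffff));
}
scene.add(cones);
```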
I then create a PlaneBufferGeometry + MeshBasicMaterial mesh for each of the different images (loaded via TextureLoader; each plane mesh is sized to its image's specific aspect ratio - they vary and aren't powers of two).
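The per-image planes look roughly like this (sketch; `baseHeight` is just an arbitrary world-space scale factor):

```js
// One mesh per image, sized to the image's aspect ratio once the
// texture arrives.
const loader = new THREE.TextureLoader();

function createImagePlane(url, baseHeight = 1) {
  const material = new THREE.MeshBasicMaterial({ transparent: true });
  const mesh = new THREE.Mesh(new THREE.PlaneBufferGeometry(1, 1), material);
  loader.load(url, (texture) => {
    material.map = texture;
    material.needsUpdate = true;
    const aspect = texture.image.width / texture.image.height;
    mesh.scale.set(baseHeight * aspect, baseHeight, 1); // match the jpeg's aspect
  });
  return mesh;
}
```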
These plane meshes then get positioned correctly on top of their corresponding instanced cones. There's even some logic to group the markers by geo-location proximity so that they can be evenly spaced vertically (instead of randomly), keeping them viewable/selectable instead of tightly stacked. The react-globe module tweens to the clicked marker image and displays a tooltip and link, so keeping the images selectable is important.

Performance with 800 images (~800 draw calls) isn't actually too bad: fine on a GTX 980 and surprisingly smooth on my Samsung S9 Android, less so on a MacBook Pro, but that's down to the increased resolution.

I'm after advice on what I can do next to improve performance. Draw calls are ~800 for the separate plane meshes, and all the images are plain JPEGs of around 600px. Memory shoots up pretty quickly to 1.5GB, and I guess there's some GPU memory leakage, as I need to restart the tab or refreshes start dropping frames.

I tried a Basis texture conversion on the clouds PNG texture supplied with the react-globe lib (5MB down to 1MB), which worked really well, but running the Basis conversion on all the image JPEGs (converted to PNGs first) resulted in a lot of black images and loss of saturation or darkening. Is Basis worth pursuing as a format to greatly lower my memory consumption here, or should I be looking to try a texture atlas/array (if I can reliably generate a usable 800-image x 600px texture atlas/array)?
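For reference, this is roughly how I'm loading the converted files (sketch, assuming the BasisTextureLoader from the three.js examples; paths are placeholders):

```js
import * as THREE from 'three';
import { BasisTextureLoader } from 'three/examples/jsm/loaders/BasisTextureLoader.js';

const basisLoader = new BasisTextureLoader();
basisLoader.setTranscoderPath('libs/basis/'); // transcoder .js/.wasm files
basisLoader.detectSupport(renderer);          // pick a GPU-supported compressed format
basisLoader.load('textures/marker-0001.basis', (texture) => {
  texture.encoding = THREE.sRGBEncoding; // wrong colour space is one common cause of darkened output
  material.map = texture;
  material.needsUpdate = true;
});
```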

Ultimately the draw calls are still too high though (even with frustum culling), and I'm not sure whether instancing the plane meshes and using some shader cleverness to get all the individual image textures in place would still let me keep the per-image interactivity?


I suggest using a texture atlas; if you have too many images, I suggest some kind of virtual texture approach.

You can build a static texture atlas using many different tools. You can also build a texture atlas at runtime, using meep for example.

For disclosure: I'm the author of meep.

With a texture atlas you can have 1 draw call, provided that you use a single geometry. Even if you still have 800 geometries and 800 draw calls, avoiding texture switching will boost your performance by quite a bit.
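The runtime idea is basically this (illustrative sketch only - naive fixed-size grid packing for brevity; meep's atlas does proper rectangle packing):

```js
// Paint the images into one canvas and record each image's UV rectangle.
function buildAtlas(images, cell = 512) {
  const cols = Math.ceil(Math.sqrt(images.length));
  const size = cols * cell;
  const canvas = document.createElement('canvas');
  canvas.width = canvas.height = size;
  const ctx = canvas.getContext('2d');

  const uvRects = images.map((img, i) => {
    const x = (i % cols) * cell;
    const y = Math.floor(i / cols) * cell;
    ctx.drawImage(img, x, y, cell, cell);
    // flipY is true by default for CanvasTexture, so v counts from the bottom
    return { u: x / size, v: 1 - (y + cell) / size, w: cell / size, h: cell / size };
  });

  return { texture: new THREE.CanvasTexture(canvas), uvRects };
}
```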


Thanks for this, a very interesting project that I'll need to look at closely.

I started looking at TexturePacker first as a quicker way to learn how to use a texture atlas. I still need to find time to make progress, but I think I ultimately do need a runtime-generated texture atlas for a smaller download size, since the textures I need will change depending on the selected data.

I've been reading, however, that texture atlases can take up more memory on the GPU (ultimately I still need the same number of pixels from 700 unique images on the GPU).

Is the reduced texture switching with a texture atlas also going to reduce memory somehow?

A texture atlas can take more space than a bunch of individual textures - that is true. With an atlas you usually end up with some empty space, due to padding and simply some unused areas.

Most modern GPUs have a lot of memory though, so you shouldn't have to worry about that too much; as long as the atlas fits within the texture size constraints, it's not an issue.
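You can query that limit at runtime and size the atlas accordingly (sketch, using three.js):

```js
const maxSize = renderer.capabilities.maxTextureSize; // e.g. 4096 on many mobile GPUs
const atlasSize = Math.min(8192, maxSize);
```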

GPUs are fast, CPUs are slow. When you draw your 700 objects, here's what happens 700 times each frame:

1 - load the material shader for the object (probably the same one for all, so this step is done just once)
2 - load all uniforms to the GPU, including textures - so here a new picture is bound to the shader, basically
3 - load the geometry
4 - request a draw

Each of these operations is quite slow, by computing standards that is.

If you use an atlas, you can drop that down to just 1 cycle by merging all the geometries and using a single texture atlas. I'm not a prophet, but I bet your performance problem would basically go away at that point.
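In three.js terms the merge could look something like this (sketch; assumes `uvRects` from whatever atlas builder you use, and the BufferGeometryUtils helper from the three.js examples):

```js
import { mergeBufferGeometries } from 'three/examples/jsm/utils/BufferGeometryUtils.js';

// Remap each plane's UVs into its atlas rectangle, then merge everything
// into one geometry, so one material + one draw call.
function mergePlanes(planes, uvRects) {
  const geometries = planes.map((plane, i) => {
    const g = plane.geometry.clone();
    g.applyMatrix4(plane.matrixWorld); // bake the plane's world transform in
    const rect = uvRects[i];
    const uv = g.attributes.uv;
    for (let j = 0; j < uv.count; j++) {
      uv.setXY(j, rect.u + uv.getX(j) * rect.w, rect.v + uv.getY(j) * rect.h);
    }
    return g;
  });
  return mergeBufferGeometries(geometries);
}
```

Interactivity can survive the merge, too: a raycast against the merged mesh returns a face index, and since each plane contributes a fixed number of triangles you can map that index back to the original image.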

As far as switching textures goes, it's complicated. The basic theory is this:
when you want to use a texture, you load it into the GPU. This requires transferring the texture's data from the CPU to the GPU, which can be slow depending on the texture size, and it also incurs synchronization penalties.

Basically, if you can avoid switching data on the GPU, you really want to.

Situations vary, and some use cases mandate large volumes of data upload, but yours is not one of those.

Hope this clears things up.

Hi @Usnul, the idea of creating atlas textures on the fly is super interesting, but I'm trying to figure out the logic here. I tried to look at the code, but not being a developer myself, it would be nice to see something in a demo (maybe a side-by-side comparison to see the effect of meep).

But fundamentally, in a scene with, say, 10 objects each having their own textures, how exactly would meep work, and what kind of "savings" are we looking at?
.basis is nice, but I understand we have to install their basisu tool and do a manual conversion of textures from .png to .basis, which is kind of inconvenient at scale.
Thanks for sharing more about meep.

J.

You can check this out as well to build texture atlases at runtime. : )


I don't have numbers for that. In my own case, all I know is that tens of different textures being used for various particles on screen in my game cause virtually no performance overhead, in part due to the dynamic atlas. How much exactly? I don't have a figure. I designed the engine with performance as a first-class priority, so unless there is a problem I don't tend to analyse performance, and this has not been an issue for me ever since the particle system was initially written.

Atlas building did become an issue at one point, so I invested a chunk of time analysing it. I found that the atlas couldn't fit all of the particle textures I was using in the game, so it was sometimes being re-built completely. This caused a significant performance spike, around 30ms, which could happen several times in a single frame. So I re-designed the atlas to allow incremental packing and editing: now "patches" can be added and removed, and you only pay for packing/painting new patches, which tends to be well below 1ms for my use cases.

This approach does come at a cost: I have to keep packing metadata around, and it uses extra memory. Due to the kind of packing method I use, this metadata is fairly small though. There is also the design cost - it took several weeks to finalize this solution.

The method that I use has 3 levels of updates:

  1. write a single patch into available empty space
  2. if no sufficiently large empty space exists - re-pack the atlas from scratch and see if there would be enough space then; if so, do step 1 and mark all existing patches for re-painting, since their positions will have changed as a result of re-packing
  3. if re-packing didn't help - enlarge the atlas and try steps 1-2 again

It's a crude approximation of what actually happens, but the idea is to use the least amount of space and work. If you have only a single patch (texture) to pack that's 1x1 pixels, your atlas can be 1x1 too, so there's basically no extra cost for you. It also ensures that you generally end up with the least wasted space.
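In pseudo-code, the three levels look like this (just an illustration; `tryPack`, `repackFromScratch` and `enlarge` are stand-ins for the real packer's operations):

```js
function addPatch(atlas, patch) {
  if (atlas.tryPack(patch)) return;  // 1: fits into existing empty space
  atlas.repackFromScratch();         // 2: defragment; all patches need re-painting
  if (atlas.tryPack(patch)) return;
  atlas.enlarge();                   // 3: grow the atlas, then try again
  addPatch(atlas, patch);
}
```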

Hi Usnul,
thanks for the explanations. I think I finally understand the benefit of a texture atlas with batched draw calls - some amazing optimisations you have there.

Oguzeroglu - I was actually looking at your TextureMerger earlier, looking at how I can use it for creating multiple atlases from a single array of 700 textures, as it's not far from what I need.

It throws the 'try smaller textures' error when the texturesObj list is too long for MAX_TEXTURE_SIZE. Do you think it's possible to modify it to return out of the TextureMerger function with the list of remaining, non-inserted textures, in order to pass them to a new TextureMerger instance? Or is this too inefficient time-wise with such a long list of textures?
As long as I can use the textureName with my object IDs, I can match them up to the UVs later, I guess. Testing with my 640px textures and a 4096 max atlas size, I'll end up with around 15-20 texture atlases, maybe.

I could estimate the number of my textures that will fit in MAX_TEXTURE_SIZE by their average size and limit them that way, but that seems inefficient, as it wouldn't fully fill each atlas?
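Something like this chunking is what I had in mind (untested sketch; it assumes the constructor throws when the textures don't fit, which seems to be the case):

```js
// Fill one TextureMerger per chunk, halving the chunk whenever the
// constructor throws the size error, then continue with the remaining
// textures in a fresh atlas.
function mergeAll(texturesObj) {
  const names = Object.keys(texturesObj);
  const atlases = [];
  let start = 0;
  while (start < names.length) {
    let end = names.length;
    for (;;) {
      const chunk = {};
      names.slice(start, end).forEach((name) => { chunk[name] = texturesObj[name]; });
      try {
        atlases.push(new TextureMerger(chunk)); // keep it around for its UV ranges
        break;
      } catch (e) {
        const half = Math.floor((end - start) / 2);
        if (half === 0) throw e; // a single texture alone doesn't fit
        end = start + half;
      }
    }
    start = end;
  }
  return atlases;
}
```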

@munrocket posted another texture atlas script here:

https://pastebin.com/rCKsqVnQ


Texture atlases only work as long as your smaller textures can fit into the atlas. Beyond that point, they pretty much break down.

You can use multiple atlases - that would still be an optimization, generally. Even packing, say, 4 textures into a single atlas might be useful; beyond 16 or so, I'd guess it would be a win on most platforms/GPUs.

I mentioned this earlier, but you may wish to look into virtual textures; that technique scales a lot better. It would take a fair amount of effort to implement though - as far as I know, there is no decent open-source virtual texture implementation for WebGL. There's enough information out there to implement one, but it would be a pretty large task.


Interesting solution for dynamic download.


You can try using multiple texture atlases for that many textures. The thing is, even if you modify the max size in TextureMerger, you won't be able to target many devices, as not every device supports textures greater than 8192x8192.

Hello @Usnul: do you refer to something like this?

Not “like”, exactly that :slight_smile:

It’s a bit of work to implement this well though.