.basis textures and multi-texture optimisation techniques (array textures, ShaderMaterial, etc.)

Happy new year all,

Looking for some more advice on optimising my multi-image visualisation (a set of up to 1200 dynamically loaded images): https://youtu.be/lyB7avzkOew. Quite early on in the project I switched from JPG/WebP to .basis files and saved 50% of GPU memory doing so - 600 MB instead of 1200 MB for the JPGs. I’ve also merged the plane geometry for each of these images, so I’m now at a single program but 1200 draw calls for the individual texture materials. The important attributes for this project are fast load times and optimal performance on most modern devices; the main caveat is that I’m a shader noob, so I struggle to visualise the various techniques.
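For context, the .basis loading is along these lines (BasisTextureLoader from three’s examples; paths and names here are placeholders):

import * as THREE from 'three';
import { BasisTextureLoader } from 'three/examples/jsm/loaders/BasisTextureLoader.js';

const renderer = new THREE.WebGLRenderer();

const basisLoader = new BasisTextureLoader();
basisLoader.setTranscoderPath('libs/basis/'); // folder holding the basis transcoder .js/.wasm
basisLoader.detectSupport(renderer); // picks a compressed GPU format the device supports

basisLoader.load('textures/image-0001.basis', (texture) => {
  // one plane per image, each with its own material - hence the 1200 draw calls
  const material = new THREE.MeshBasicMaterial({ map: texture });
  // ...assigned to that image's plane mesh
});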

When I posted earlier in the year, texture atlases seemed like a good bet, but after some research and looking at/trying the available examples (a lot of them admittedly over my head) I’ve kind of ruled them out for the reasons below. I’m happy to be challenged on any of these points, of course, as this is just from my limited understanding:

  • images are 500px+, so there’s not a huge benefit with a 2048/4096 atlas limit (a 4096² atlas fits at most an 8×8 grid of 512px tiles, i.e. 64 images, so 1200 images would still need ~19 pages). I’m happy with the current compromise of resolution vs quality in the scene and wouldn’t want to go lower res.

  • the collection of images is user-set and varies - there’s no point loading a 1000-image atlased set if only 10 images are needed. This largely ruled out offline atlases for me.

  • dynamic atlas generation is slow and/or takes up too much memory when working with many high-resolution images, and I’m unsure whether it can even be set up with loaded .basis compressed textures.

Since then I’ve been looking into array textures and multi-texture ShaderMaterial approaches, in particular https://douglasduhaime.com/posts/visualizing-tsne-maps-with-three-js.html and examples like this http://jsfiddle.net/1zy35g7b/2/ that pass a list of textures directly to the shaders. My confusion comes from the differences between these approaches: passing textures directly to the shader vs ShaderMaterials, etc.

WebGL2’s array textures seemed like the clear way forward to me (targeting WebGL2 only is no problem): I can have all my images in a single material (the 2048 texture-layer limit is fine for me), I can work around the 2-3 different PoT image sizes I have by just having 2-3 ShaderMaterials, and the simple z-index mapping might be easier for a novice like me. However, the post “Is it possible to have a texture array of compressed textures?” seems to rule the approach out for now, as I’m committed to .basis because of memory usage(?).
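For anyone following along, the array-texture setup I had in mind is roughly this: an uncompressed DataTexture2DArray with one layer per image, sampled as a sampler2DArray in a ShaderMaterial. Untested sketch with my own names - it shows the idea but not the .basis side, and it leans on three upgrading the shader for a WebGL2 context (newer releases have a glslVersion: THREE.GLSL3 option with in/out syntax instead):

import * as THREE from 'three';

// data: a Uint8Array with every image's RGBA bytes stacked layer by layer
const texArray = new THREE.DataTexture2DArray(data, 512, 512, imageCount);
texArray.format = THREE.RGBAFormat;
texArray.needsUpdate = true;

const material = new THREE.ShaderMaterial({
  uniforms: { uImages: { value: texArray } },
  vertexShader: `
    attribute float layer; // per-vertex: which image this quad shows
    varying vec2 vUv;
    varying float vLayer;
    void main() {
      vUv = uv;
      vLayer = layer;
      gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
    }
  `,
  fragmentShader: `
    precision highp sampler2DArray;
    uniform sampler2DArray uImages;
    varying vec2 vUv;
    varying float vLayer;
    void main() {
      // the layer is part of the texture coordinate, so it can vary per quad
      gl_FragColor = texture(uImages, vec3(vUv, vLayer));
    }
  `
});

Each plane’s vertices would then carry the layer attribute, e.g. via geometry.setAttribute('layer', ...).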

Main questions for me to continue progress/research:

  • as per the post title, what multi-texture techniques are applicable to compressed textures like .basis?

  • if I’m unable to use array textures with basis, could I still pass 2D textures to a ShaderMaterial? I believe the max for that is 16 (texture units)? If so that’s still a worthwhile saving in draw calls - but maybe 60+ shader programs (instead of 1 program with 1000 draw calls) will give me different performance problems. (Rough sketch of what I mean below.)
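To illustrate that second question: the jsfiddle-style approach passes an array of sampler2D uniforms to one ShaderMaterial and picks one per fragment. A trimmed-down, untested sketch (my names; texture count hard-coded at 3 for brevity):

const material = new THREE.ShaderMaterial({
  uniforms: { uTextures: { value: [tex0, tex1, tex2] } }, // bounded by the ~16 texture units
  vertexShader: `
    attribute float texIndex; // per-vertex: which texture this quad uses
    varying vec2 vUv;
    varying float vTexIndex;
    void main() {
      vUv = uv;
      vTexIndex = texIndex;
      gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
    }
  `,
  fragmentShader: `
    uniform sampler2D uTextures[3];
    varying vec2 vUv;
    varying float vTexIndex;
    void main() {
      // GLSL can't index a sampler array with a varying, hence the branch chain
      if (vTexIndex < 0.5) gl_FragColor = texture2D(uTextures[0], vUv);
      else if (vTexIndex < 1.5) gl_FragColor = texture2D(uTextures[1], vUv);
      else gl_FragColor = texture2D(uTextures[2], vUv);
    }
  `
});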

An offline atlas set seems like the only sure bet to get those draw calls down at the moment.
I could generate a 16K and a 4096 set - pack the JPGs via TexturePacker/ImageMagick etc. and then convert to .basis atlas files. Apparently basis has atlas/packing support, but I haven’t seen any documentation.

It’s just a lot of web-load compromise compared to the other runtime approaches I’d hoped to try. However, if basis can compress the atlases well enough, loading the full atlas set once and serving it from browser cache could work.
It’s still going to be a large learning curve either way, as I’ve never done any sort of mapping - hence this post, to make sure I’m taking a good approach.

thanks.

I recommend virtual textures for this use case. There are no shortcuts, generally: you need some kind of caching solution once you go beyond a certain level of memory usage. Basis doesn’t help you so much there; basis is a lossy block-compression format, meaning you lose some quality, but it’s about the same as JPG. So you reduce memory usage by, say, 4x or so - that’s nice, but if your image set keeps growing, that only delays the problem.

Virtual textures, megatextures, sparse virtual textures, or whatever else they are called, are a generic solution to the problem of needing a huge texture space that would not fit in graphics memory, as well as to mip-mapping. Have a google search on the subject, but if you decide to go down this route - you’ll need to learn shaders.

Thanks Usnul, I know you mentioned virtual textures when I last posted about this a few months back, but it sounded so far beyond me that I didn’t spend much time looking into it; I’ll try to understand the concept at least.
I was just about to update my post to say what you saw straight away: ultimately this isn’t going to scale very far with any of the approaches I mentioned, due to the texture data involved.

I guess I’ll still try the offline atlas approach for now, as that’s still going to be useful to me. It’s just a learning/UI demo, but I’m keen to get it as usable as possible; the image set for this particular use case will only grow by ~100 per year, so the concept could still have some use.

For what it’s worth (I’m not sure if this necessarily solves the problem you’re describing), the Basis format does allow array textures. three.js doesn’t currently have a way to represent compressed texture arrays, though - see Add support for compressed versions of DataTexture2DArray and DataTexture3D. · Issue #19676 · mrdoob/three.js · GitHub.

Thought I’d give an update on this and ask a couple more questions, if that’s OK @usnul @donmccurdy.

I managed to implement a ‘standard’ offline texture atlas setup (“Free texture packer - sprite sheets for games and sites” was pretty useful), using simple atlas tiling and faceVertexUvs on lots of plane mesh geometry (walk before you run :slight_smile: ), e.g.

import { Vector2 } from 'three';

// UVs for one atlas tile: (xOffset, yOffset) is the tile's lower-left corner
// and wRatio/hRatio its width/height, all in 0..1 atlas space.
let tile = [
  new Vector2(xOffset, yOffset + hRatio),          // top-left
  new Vector2(xOffset + wRatio, yOffset + hRatio), // top-right
  new Vector2(xOffset + wRatio, yOffset),          // bottom-right
  new Vector2(xOffset, yOffset)                    // bottom-left
];

and

// Remap the plane's two triangles to the tile's corners.
(mesh.geometry as Geometry).faceVertexUvs[0][0] = [ tile[3], tile[0], tile[2] ];
(mesh.geometry as Geometry).faceVertexUvs[0][1] = [ tile[0], tile[1], tile[2] ];

before merging them into a single BufferGeometry.
This doubled the FPS (300 to 600), even though there are still hundreds of draw calls - far less texture swapping, as you said @Usnul.
The still-high draw call count surprised me, though; I thought 15 textures/materials and 1 merged geometry would = 15 draw calls?
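For reference, the merge step itself is just this (BufferGeometryUtils from three’s examples; tileGeometries/atlasMaterial are my placeholder names, and the method is renamed mergeGeometries in newer three releases):

import { BufferGeometryUtils } from 'three/examples/jsm/utils/BufferGeometryUtils.js';

// tileGeometries: one plane BufferGeometry per image, UVs already remapped
// to its atlas tile as above; merged once per atlas texture
const merged = BufferGeometryUtils.mergeBufferGeometries(tileGeometries);
scene.add(new THREE.Mesh(merged, atlasMaterial));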

Any idea how much more performance it would likely give to generate the geometry ‘manually’ instead of merging 1000+ PlaneGeometry like this?

Also, I’m now getting the dreaded atlas seams on the borders of the meshes, despite 16px (overkill, I know) atlas padding. Is an extrude (say 2px) on the atlased tiles the best bet to improve this? I’ve not had any luck with any of the usual texture settings, so everything is at the defaults.
You can see them best on the white images; it’s not too bad, except for some aliasing of the borders at certain distances.
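In the meantime I’m also trying a half-texel inset on each tile’s UVs, so bilinear filtering at a tile edge can never reach the neighbouring tile - an untested sketch using the same tile variables as above:

// Shrink a tile's UV rect by half a texel on each side.
function insetTile(xOffset: number, yOffset: number, wRatio: number, hRatio: number, atlasSize: number) {
  const half = 0.5 / atlasSize; // half a texel in 0..1 UV space
  return {
    xOffset: xOffset + half,
    yOffset: yOffset + half,
    wRatio: wRatio - 2 * half,
    hRatio: hRatio - 2 * half
  };
}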

I’ve yet to convert the JPG atlases to .basis, but I’m hoping this shouldn’t cause any problems.

Thanks,

Try timing how long it takes to merge all of the geometries — in the best case, that’s what using pre-generated geometry could eliminate from your loading time. Generating the same merged geometry manually will not improve framerate though.
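e.g. (same placeholder names as your snippets above):

console.time('merge');
const merged = BufferGeometryUtils.mergeBufferGeometries(tileGeometries);
console.timeEnd('merge'); // roughly what pre-generated geometry could save at load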

I was looking for something like this! Thank you for making it!

In the case of loading 1000+ images, how do you guys recommend rendering them in three.js?
Should I go for sprites?

@iosonosempreio - it depends what you want them to look like: sprites always face the camera, so they look kind of 2D, whereas meshes are 3D.
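A sprite is only a couple of lines, if that helps (assuming an already-loaded texture and x/y coordinates):

const sprite = new THREE.Sprite(new THREE.SpriteMaterial({ map: texture }));
sprite.position.set(x, y, 0);
scene.add(sprite);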

Sprites:

planeMesh geometry:

I’m going to try to write up this experiment as an article soon; it will cover the geometry instancing/merging and the atlas approach used to get the performance out of it.

It is good for them to look 2D in my case. Does it come with better performance too?

I have to make a 2D environment; I plan to use an orthographic camera!

Thanks!