Texture Atlasing Material Maps

I’ve read in Nvidia’s GPU Gems Vol. 1 that texture atlasing can be used to increase performance. I can see this is obviously the case when it’s used to consolidate materials in order to reduce draw calls, but I have two questions about other uses.

  1. Is there any performance benefit to combining an object’s multiple material maps (diffuse, normal, specular, emissive, etc.) onto a single texture, or am I just doing weird stuff for no practical reason?

  2. I have some meshes that use the same normal/specular maps but have different diffuse maps. I was hoping I could share the same material for these meshes and just set a unique uniform per object somehow. According to WestLangley in several posts over at SO, the only way to do this is by cloning the material, which shares a reference to the shader program. Does this play nicely with material.onBeforeCompile, since the program’s shader code is modified prior to compilation?

This demo utilizes both mentioned techniques.

I’m running a fairly decent PC, so I personally cannot see much of a difference performance-wise.

https://jsfiddle.net/titansoftime/9udjmzgL/

You also need a lot in your scene to notice a difference.

  1. Yes, you save texture switches and reads as well as memory. For example, when you store roughness, specular, ambient occlusion and metalness in a single texture, you only need one read for all of them; I patched THREE for this too (see the sketch after this list). But just packing maps into one texture isn’t a texture atlas; it only becomes one once you store texture tiles that you need to address in tile coordinates.

  2. Cloning a material in order to share the program was probably necessary in some earlier versions. I recently checked this and saw that the renderer compares the generated code strings of new materials with existing ones, so materials end up sharing programs if the resulting code is the same.
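
To illustrate point 1 with stock three.js (a minimal sketch, not the patch mentioned above; the file paths are made up): MeshStandardMaterial already samples ambient occlusion from the red channel, roughness from the green channel and metalness from the blue channel, so a single packed texture can back all three map slots.

```js
import * as THREE from 'three';

const loader = new THREE.TextureLoader();

// one packed "ORM" texture: AO in .r, roughness in .g, metalness in .b
const ormMap = loader.load( 'textures/character_orm.png' ); // hypothetical path

const material = new THREE.MeshStandardMaterial( {
	map: loader.load( 'textures/character_diffuse.png' ), // hypothetical path
	aoMap: ormMap,        // read from the red channel (older revisions need a uv2 attribute on the geometry)
	roughnessMap: ormMap, // read from the green channel
	metalnessMap: ormMap  // read from the blue channel
} );
```

One texture is bound and sampled instead of three.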

Thanks for your response!

  1. I set the tile uv coordinates as uniforms, primarily so I can specify which alternate diffuse texture to use. Is this correct (optimal)?

  2. Hmm, I’m not sure how that works when using onBeforeCompile. My goal is really to share a single material for all objects of the same geometry that may have alternate diffuse textures. Is this possible (or even the correct thing to do)?

  1. You may have issues with mipmaps. Imagine an all-purple normal map sitting next to something like a gray brick tile in the atlas. If these tiles repeat, you could zoom out and see the purple bleeding into the gray.

  2. WestLangley’s articles and advice could possibly be taken with a grain of salt. I don’t know which specific approach you’re referring to, but this, for example, I think is straight-up false. I think it’s fairly straightforward to achieve what you want.

The bugs with onBeforeCompile would actually make sure that the renderer caches only one program. In most cases this gets in the way of the feature, but in your case it ensures you can’t do anything wrong. Make a thousand materials and give them the same spec map. Attach a repeat transform (say, a matrix) to the userData of each material. Give each material the same diffuse map atlas. Give each material the same onBeforeCompile patch doing whatever it needs to do to the diffuse map. Render. Win.
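
A rough sketch of how that recipe could be wired up (the uniform name diffuseUvTransform, the userData layout and the texture paths are all assumptions, and the exact varying names inside map_fragment differ between three.js revisions):

```js
import * as THREE from 'three';

const loader = new THREE.TextureLoader();
const diffuseAtlas = loader.load( 'textures/diffuse_atlas.png' );     // hypothetical path
const sharedSpecMap = loader.load( 'textures/shared_specular.png' );  // hypothetical path

// One patch function shared by every material. Because the source string is
// identical, the renderer caches a single program; only the uniform values differ.
function patchDiffuseAtlas( shader ) {

	// onBeforeCompile is invoked as a method on the material, so `this` is the material
	shader.uniforms.diffuseUvTransform = this.userData.diffuseUvTransform;

	shader.fragmentShader = 'uniform vec4 diffuseUvTransform;\n' + shader.fragmentShader.replace(
		'#include <map_fragment>',
		// offset (xy) and scale (zw) the diffuse lookup into this material's atlas tile
		`
		vec4 texelColor = texture2D( map, vUv * diffuseUvTransform.zw + diffuseUvTransform.xy );
		diffuseColor *= texelColor;
		`
	);

}

function makeTileMaterial( offsetX, offsetY, scaleX, scaleY ) {

	const material = new THREE.MeshPhongMaterial( { map: diffuseAtlas, specularMap: sharedSpecMap } );
	material.userData.diffuseUvTransform = { value: new THREE.Vector4( offsetX, offsetY, scaleX, scaleY ) };
	material.onBeforeCompile = patchDiffuseAtlas;
	return material;

}

// two materials, two atlas tiles, one shader program
const matA = makeTileMaterial( 0.0, 0.0, 0.5, 0.5 );
const matB = makeTileMaterial( 0.5, 0.0, 0.5, 0.5 );
```

Each material keeps its own uniform value, but they all resolve to the same compiled program.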

Thanks for your response as well!

  1. Indeed, I noticed this when I attempted it with repeating terrain textures. Fortunately, in this particular case I am only using non-repeating materials for in-game characters/objects.

  2. Yeah, that’s what I assumed would happen after reading Fyrestar’s response. The trouble I am having is figuring out how to have unique per-object uniforms using a single material. I was hoping it would be straightforward, but as shown in my fiddle, I’ve really only got this working by cloning the material.

I honestly do not even know how this is working with (in this case two) materials (the original and its clone) sharing a program when both are modified using onBeforeCompile. I would figure that since the actual shader code is being modified on each cloned material instance, they would have to be separate programs. I know the cloned materials will share a program since the shaders have not been compiled yet at the point of cloning, but even after onBeforeCompile runs they at least appear to still be sharing the same program.

The only way I can think of to get this to work (which is a terrible solution) is to add a BufferAttribute to each mesh containing the UV coordinates of the proper diffuse texture. I would not be able to share geometry this way, so I am definitely not doing it.

This may be straightforward, but I am struggling =/

I guarantee it. It’s the same program, as long as onBeforeCompile.toString() is the same.

For repeating textures it requires a trick with padding, but with non-repeating ones it will be fine; if the UVs come close to the tile borders, a little padding helps as well. For repeating textures, texture arrays are the optimal solution, but they require WebGL 2.
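
For context, the kind of lookup that makes the padding necessary (a sketch of the general idea, with made-up uniform names, not necessarily the exact trick meant above): to repeat a tile that lives inside an atlas you have to wrap the local UV yourself, and the seam that fract() introduces, combined with mipmapping, is why each tile needs a gutter of duplicated border pixels.

```js
// GLSL chunk for sampling a repeating tile out of an atlas (hypothetical uniform names).
// tileOffset/tileSize describe the tile's rectangle in atlas UV space; the atlas
// is assumed to have a small padded gutter of duplicated border pixels per tile.
const atlasTileSample = /* glsl */`
	vec2 tiledUv = tileOffset + fract( localUv ) * tileSize;
	vec4 texel = texture2D( atlasMap, tiledUv );
`;
```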

Indeed you are correct. I was completely ignoring the fact that the onBeforeCompile I use changes the shader code in the exact same way, only the uniforms are different. I’m just going to go ahead and blame that oversight on lack of sleep… =] Thanks!

To be clear though, this will still have the CPU-side overhead of setProgram on the renderer, correct? Too bad, as that is one of my final CPU hotspots (after hacking away at updateMatrixWorld).

@Fyrestar I am very much looking forward to using texture arrays. Does three.js have an API for this at the moment, or is it for raw shaders only?

Not even that; the cache key is literally the function body converted to a string, i.e. myOnBeforeCompile.toString().

HOW the function modifies the shader is irrelevant. You could have a million different branches based on, say, a million different userData properties on your material; three.js will, by design:

  1. generate the code for you, i.e. actually apply your logic and produce some GLSL, a million times
  2. throw that away 999,999 times
  3. cache the first occurrence of the code under yourFunction.toString()

I thought this was a bug, but unfortunately it’s a feature.

There’s probably some better way to use this that I’m unaware of; it’s not documented properly.
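
To make that concrete (a hypothetical example with a made-up userData flag): if one shared function generates different GLSL depending on the material, every material still ends up with whichever variant was compiled first, because the cache key is just the function source.

```js
// Anti-pattern: same function source, different generated code per material.
function patch( shader ) {

	// made-up flag, purely to show the pitfall
	const tintLine = this.userData.tinted ? 'diffuseColor.rgb *= vec3( 1.0, 0.5, 0.5 );' : '';

	shader.fragmentShader = shader.fragmentShader.replace(
		'#include <map_fragment>',
		'#include <map_fragment>\n' + tintLine
	);

}

// matA.userData.tinted = true, matB.userData.tinted = false:
// both key the program cache with patch.toString(), so both render with
// whichever of the two GLSL variants happened to be compiled first.
```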

Hmm, interesting read. It really is unfortunate that serious refactoring of materials is not going to happen pending the release of NodeMaterial (which has been pending for years now). Oh well.

There is an API now; just use THREE.DataTexture2DArray. It even has the benefit of not wasting memory on the unused slots of a power-of-two atlas if you have a fixed number of textures.
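
A minimal sketch of how it can be used (loosely modelled on the official webgl2_materials_texture2darray example; the sizes, layer count and shader here are made up, it requires a WebGL2 context, and the exact shader setup depends on the three.js revision):

```js
import * as THREE from 'three';

const size = 256, layers = 4;

// all layers packed back to back: width * height * 4 bytes per RGBA layer
const data = new Uint8Array( size * size * 4 * layers );
data.fill( 255 ); // placeholder texels; fill with real image data

const arrayTexture = new THREE.DataTexture2DArray( data, size, size, layers );
arrayTexture.format = THREE.RGBAFormat;
arrayTexture.type = THREE.UnsignedByteType;
arrayTexture.needsUpdate = true;

// Sampling needs a custom shader; the `layer` uniform picks the slice.
const material = new THREE.ShaderMaterial( {
	uniforms: {
		diffuse: { value: arrayTexture },
		layer: { value: 2 }
	},
	vertexShader: /* glsl */`
		varying vec2 vUv;
		void main() {
			vUv = uv;
			gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );
		}
	`,
	fragmentShader: /* glsl */`
		precision highp sampler2DArray;
		uniform sampler2DArray diffuse;
		uniform int layer;
		varying vec2 vUv;
		void main() {
			gl_FragColor = texture( diffuse, vec3( vUv, layer ) );
		}
	`
} );
```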

This is extremely important to me! I have a lot of wasted slots, especially with grouped-together meshes that have alternate diffuse textures.

Very nice, I will need to study this. Thank you.

But note that texture arrays require all maps to be the same size, which is why they are a perfect fit for repeating textures: a texture array is essentially a stack of equally sized textures.

That works for me, my current setup has the same requirement =]