InstancedMesh - can it be used to draw hundreds of planes with different textures?

Hi everyone,

I have the following scenario:
I have over 250 different textures (each 256x256), and over 3000 objects to render, each of which uses one of the 250 textures. I need each object to be able to individually

  • change its color
  • change its position
  • change its rotation

I also need the ability to freely remove any one of the 3000 objects at runtime, and to add new ones (also at runtime).

Right now I’m rendering the objects with fairly basic logic: a single PlaneBufferGeometry (reused for every object), a separate material per object (so I can modify their colors individually), and 3000 plain THREE.Mesh instances.

The FPS gets quite low of course, so I’m wondering what I could do about it. I guess merging the geometries is not a good idea, because the individual position/orientation updates need to happen in real time.

Using InstancedMesh would help, right? If so, can someone show me an example that I could use as a starting point? One with different textures would be great.

Also, any other performance-improving hints would be welcome for this scenario. :slight_smile:

Thanks in advance

Yes, but it’s hard to map it onto your use case because of the massive number of different textures. Can you really use 250 texture objects in your app without crashing? TBH, that sounds a bit extreme. I would suggest merging all textures into a single texture atlas and using it as the diffuse map for a single material. You can then use an additional instanced attribute to define texture offsets per instance. Then, and only then, does instanced rendering make sense.
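As a rough sketch of the data side of that idea (plain JS; the atlas layout, tile size, and attribute name `uvOffset` are my own assumptions, not from the post): pack the 256x256 tiles into a 4096x4096 atlas (16x16 = 256 tiles) and store each instance’s tile offset in a per-instance attribute, which the shader then adds to the plane’s UVs.

```javascript
const TILES_PER_SIDE = 16;          // 4096 / 256
const UV_TILE = 1 / TILES_PER_SIDE; // size of one tile in UV space

// Map a tile index (0..255) to its UV offset in the atlas.
function tileToUvOffset(tileIndex) {
  const col = tileIndex % TILES_PER_SIDE;
  const row = Math.floor(tileIndex / TILES_PER_SIDE);
  return [col * UV_TILE, row * UV_TILE];
}

// Build one (u, v) offset pair per instance. In three.js this array
// would be wrapped as an instanced attribute, roughly:
//   geometry.setAttribute('uvOffset',
//     new THREE.InstancedBufferAttribute(offsets, 2));
function buildUvOffsets(tileIndices) {
  const offsets = new Float32Array(tileIndices.length * 2);
  tileIndices.forEach((tile, i) => {
    const [u, v] = tileToUvOffset(tile);
    offsets[i * 2] = u;
    offsets[i * 2 + 1] = v;
  });
  return offsets;
}
```

In the vertex shader you would then compute something like `vUv = uv * uvTileSize + uvOffset;` so each instance samples its own tile.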

When using InstancedMesh, it’s easy to change position and rotation per instance. Since the color attribute is not defined by default, you have to define it yourself and enhance the respective shader code. Fortunately, there is an example that demonstrates this workflow:
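To illustrate the data side of the per-instance color attribute mentioned above (plain JS; the attribute name `instanceColor` and the helper are my own, not part of the three.js API described in the post):

```javascript
// Build one RGB triplet per instance from hex colors. In three.js this
// array would become an instanced attribute, roughly:
//   geometry.setAttribute('instanceColor',
//     new THREE.InstancedBufferAttribute(colors, 3));
// plus a small shader patch that multiplies the diffuse color by it.
function buildInstanceColors(hexColors) {
  const colors = new Float32Array(hexColors.length * 3);
  hexColors.forEach((hex, i) => {
    colors[i * 3]     = ((hex >> 16) & 0xff) / 255; // red
    colors[i * 3 + 1] = ((hex >> 8)  & 0xff) / 255; // green
    colors[i * 3 + 2] = ( hex        & 0xff) / 255; // blue
  });
  return colors;
}
```

Note that newer three.js releases added `InstancedMesh.setColorAt()`, which handles this for you; the manual attribute is only needed on older versions or for custom shaders.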

It’s no problem to add and remove instances; however, you can’t exceed the instance count defined when creating your InstancedMesh. So you need to know the maximum number of instances right from the beginning.
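One common pattern for add/remove under a fixed maximum (a sketch in plain JS; the pool structure and names are my own): size the buffers for the maximum, track a live `count`, and on removal swap the last live instance into the freed slot so the live range stays contiguous. With a real `THREE.InstancedMesh` you would move the matrices the same way and then lower `mesh.count`.

```javascript
const MAX = 3000; // must match the count passed to new THREE.InstancedMesh

// A pool of flat per-instance data (e.g. stride 2 for UV offsets,
// stride 3 for colors, stride 16 for matrices).
function createPool(stride) {
  return { data: new Float32Array(MAX * stride), stride, count: 0 };
}

function addInstance(pool, values) {
  if (pool.count >= MAX) throw new Error('instance pool full');
  pool.data.set(values, pool.count * pool.stride);
  return pool.count++; // index of the new instance
}

function removeInstance(pool, index) {
  const last = pool.count - 1;
  if (index !== last) {
    // Copy the last live instance's data into the removed slot.
    const src = pool.data.subarray(last * pool.stride, (last + 1) * pool.stride);
    pool.data.set(src, index * pool.stride);
  }
  pool.count = last; // with InstancedMesh: mesh.count = pool.count
}
```

After such an edit you would also set `needsUpdate = true` on the affected instanced attributes so the GPU buffers get refreshed.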


GPU instancing means that the objects must share a material, and a single material can only sample a limited number of textures (typically 8–16 texture units, depending on the device, I think). So merging the textures into an atlas (2K–4K, maybe) and using UV offsets for each instance is the way to do this.

Fortunately, there is an example that demonstrates this workflow…

I think this approach is easier, you don’t need to change the shader:


Thanks guys, great ideas, I appreciate it :slight_smile:

It’d be neat if this were simple from a high level. For someone who just wants to make a 3D scene, “many objects, each with a different texture” could be the same thing as “many *instanced* objects, each with a different texture”.

Ultimately, whether the “textures” should all live in one “texture” (or actually in “multiple textures with an atlas”), or whether all objects share “one geometry” or use “multiple geometries”, is something that I believe should be abstracted away for high-level users who just want to focus on the experience they are making rather than on the implementation details of the engine.

The ideal high-level concept should just be “many objects, each with a different texture”; instancing the geometries or atlasing the “textures” should just be a performance optimization under the hood (with optional controls for those who do wish to get into the details).

@Mugen87 @donmccurdy When I describe these things to a designer who knows nothing about code, after I am done describing it, they ask “so… what’s the difference?”.

Most of the time, these “implementation details” don’t matter to a purely conceptual designer, and they shouldn’t matter.

It’d be great for 3D libs like Three.js or similar to make it easy for high level users not to worry about these details, but at the same time make it intuitive for technical people to have flexibility in configuring the underlying behavior.

Maybe, for example, a tree traversal in WebGLRenderer could detect meshes that share the same geometry and material and automatically apply instancing as well as atlasing, while higher-level classes like InstancedMesh and InstancedMeshGroup could help tinkerers tune the otherwise automatic behavior. (I’d like to get back to that InstancedMeshGroup idea sometime.)


I agree with these goals. However, by the time you’ve loaded a model and sent it to the renderer, it’s already too late to automatically perform many important optimizations. Packing a texture atlas is slow, and often requires human input. This task is better done offline.

To me, the problem is that the best optimization tools are (currently) built into native game engines, which can compile and optimize assets as long as they want. There are almost no tools for doing similar optimizations that aren’t tied to a particular engine. gltfpack is one (it batches models with shared materials). If TexturePacker3D ever gets released, and supports formats we can use on the web, that might be another.

There is some amount of optimization that WebGLRenderer could do for you, and doesn’t yet, but probably less opportunity than there is for offline tools to become more powerful and easier to use.


I did something similar for runtime mesh packing: it merges all geometries, even those with different materials, into one geometry, one material, and optimized atlases. The primary reason was instancing.

You can fit 256 tiles in an atlas, but your resolution per tile shrinks; with 64 tiles in a 4096x4096 atlas you still get 512x512 per tile. Depending on your target devices, though, you might run into texture size limits. You also need to deal with bleeding, which is considerably easier with WebGL2.
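One common workaround for the bleeding issue mentioned above, when texture arrays/WebGL2 are not available (a sketch; the function and its parameters are my own, not from the post): inset each tile’s UV rectangle by half a texel so linear filtering never samples a neighbouring tile.

```javascript
// Compute the UV rectangle of a tile at (col, row), shrunk by half a
// texel on every side to avoid sampling bleed from adjacent tiles.
function tileUvRect(atlasSize, tileSize, col, row) {
  const halfTexel = 0.5 / atlasSize;     // half a texel in UV units
  const uv = tileSize / atlasSize;       // tile extent in UV units
  return {
    u0: col * uv + halfTexel,
    v0: row * uv + halfTexel,
    u1: (col + 1) * uv - halfTexel,
    v1: (row + 1) * uv - halfTexel,
  };
}
```

Mipmapping still bleeds across tiles at small mip levels, so in practice people also pad tiles with duplicated border pixels or clamp the mip range per tile.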


@Fyrestar Oooooh, that sounds awesome! Do you have a live example? @hofk can add it to his example index.