What is the recommended way to do sprite or mesh batching?

If I want to add a bunch of objects like sprites or meshes, the rendering slows down quickly as more are added. More than a few hundred individual sprites cause large FPS drops. If I use a BufferGeometry, it’s possible to render tens of thousands of shapes no problem.

Adding sprites is really easy to set up, and I can easily give each instance different properties.
Adding a BufferGeometry is hard to set up, and it’s difficult to give each submesh different properties like materials, textures, dynamic position, rotation, etc.

What would be the recommended route for efficiently rendering a group of distinct meshes with different materials/textures? I saw an example of an instanced skinned mesh here:

While those are varied animations, it looks like they all have the same texture.

One example I’m thinking of is drawing dynamic text where the characters are packed into an image. They can share the same texture, but each shape/quad would be able to show a different character, and it has to not leak memory on text changes.

What I’d like to be able to do is create meshes separately and assign them materials, transforms and UVs and then just batch them together for an efficient render.

  • input text “Hello World”
  • split text, create sprites/quads/meshes for each character
  • position, rotate each character sprite
  • batch draw the sprites/quads/meshes
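The steps above can be sketched in plain JS (the layout math only; the quad size and advance values are assumptions, and the resulting array would back a single BufferGeometry position attribute in three.js):

```javascript
// Sketch: lay out one quad (two triangles) per character of a string,
// packing all positions into a single Float32Array that could back
// one BufferGeometry 'position' attribute. Assumed values: each
// character quad is 1 unit wide/high, with an advance of 1 unit.
function buildTextQuads(text, charWidth = 1, charHeight = 1) {
  const positions = new Float32Array(text.length * 6 * 3); // 6 verts * xyz
  let offset = 0;
  for (let i = 0; i < text.length; i++) {
    const x = i * charWidth; // per-character position (rotation could go here too)
    const corners = [
      [x, 0, 0], [x + charWidth, 0, 0], [x + charWidth, charHeight, 0], // tri 1
      [x, 0, 0], [x + charWidth, charHeight, 0], [x, charHeight, 0],    // tri 2
    ];
    for (const [cx, cy, cz] of corners) {
      positions[offset++] = cx;
      positions[offset++] = cy;
      positions[offset++] = cz;
    }
  }
  return positions;
}

const positions = buildTextQuads('Hello World');
// In three.js this would become one draw call:
// geometry.setAttribute('position', new THREE.BufferAttribute(positions, 3));
```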

Correct, because all instances share the same material. When using InstancedMesh (or instanced rendering in general), there is no way around this.

First of all, three.js does not support auto-batching, so applications have to do this task themselves. You can start with BufferGeometryUtils.mergeBufferGeometries(), merging all geometries that share a common material. That way, you can render the merged geometry with a single draw call, which can noticeably improve performance.
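A sketch of that approach, grouping by material before merging (the grouping is plain JS; the merge itself would use BufferGeometryUtils.mergeBufferGeometries() from three.js examples):

```javascript
// Sketch: group meshes by material so each group can be merged into one
// geometry and drawn with a single draw call. Only the grouping is shown
// as runnable code; the merge would use BufferGeometryUtils (imported
// from 'three/examples/jsm/utils/BufferGeometryUtils.js').
function groupByMaterial(meshes) {
  const groups = new Map(); // material -> array of geometries
  for (const mesh of meshes) {
    if (!groups.has(mesh.material)) groups.set(mesh.material, []);
    groups.get(mesh.material).push(mesh.geometry);
  }
  return groups;
}

// Usage sketch (three.js):
// for (const [material, geometries] of groupByMaterial(meshes)) {
//   const merged = BufferGeometryUtils.mergeBufferGeometries(geometries);
//   scene.add(new THREE.Mesh(merged, material)); // one draw call per material
// }
```

Note that each geometry would need to be baked into world space first (e.g. geometry.applyMatrix4(mesh.matrixWorld)) before merging, since the merged mesh has a single transform.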

Thanks, is mergeBufferGeometries efficient enough to call every frame?

For example, could I maintain a list of sprites and animate their UVs, scale, rotation and call merge on them each frame to draw them?

Does BufferGeometry allow for different opacities per submesh?

In Pixi, I can render a few thousand sprites separately, give each sprite a separate transform and opacity, and it runs at 60 FPS. When I try to do the same in three.js, it drops to 30 FPS with a few hundred sprites. Both are using WebGL. I’ll check their source code to see how Pixi is doing sprite rendering; I expect it must be possible to render the same way in three.js.

Not really, because the related overhead is just too high. You would be creating new WebGL buffers on every update step, which is something you normally want to avoid (the idea is to create this stuff once and then reuse it).
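A sketch of that "create once, reuse" pattern: preallocate one array for a maximum sprite count and rewrite it in place each frame, instead of re-merging (MAX_SPRITES and the unit-quad layout are assumptions here):

```javascript
// Sketch: allocate one Float32Array up front for the maximum sprite
// count, then overwrite quads in place each frame instead of
// rebuilding WebGL buffers. MAX_SPRITES is an assumed cap.
const MAX_SPRITES = 1000;
const positions = new Float32Array(MAX_SPRITES * 6 * 3); // 6 verts/quad, xyz

function updateSprite(index, x, y) {
  // Overwrite the six vertices of quad `index` (a unit quad at x, y).
  const quad = [
    x, y, 0,  x + 1, y, 0,  x + 1, y + 1, 0,
    x, y, 0,  x + 1, y + 1, 0,  x, y + 1, 0,
  ];
  positions.set(quad, index * 18);
}

updateSprite(0, 5, 2);
// In three.js, after writing into the array each frame:
// geometry.attributes.position.needsUpdate = true;
// geometry.setDrawRange(0, liveSpriteCount * 6); // if not all quads are used
```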

Not sure I understand this question. Opacity is defined at the material level.

three.js is not optimized for sprite rendering. The problem is that even if textures share a common image (e.g. a sprite sheet), the engine currently creates for each instance of Texture a WebGLTexture object which can cause really high memory allocation. Next to draw calls, this is the most often performance problem in context of sprite rendering.

However, we are trying to fix this issue. Related PR: WebGLTextures: Avoid unnecessary texture uploads. by Mugen87 · Pull Request #17949 · mrdoob/three.js · GitHub


Not sure I understand this question. Opacity is defined at the material level.

If a BufferGeometry had a single material with a texture, and the texture had varying levels of transparency, each submesh could have different UVs and would display with different levels of transparency.
Is there a way to set this transparency with a variable rather than UVs, like vertex alpha? Maybe I could have a separate alpha map with its own UVs? A custom shader might be the way to go.
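For reference, a sketch of the vertex-alpha idea: give each quad's six vertices the same alpha in a custom attribute and multiply it in with a ShaderMaterial (the attribute name `aAlpha` and the shader code are illustrative, not a three.js built-in):

```javascript
// Sketch: per-quad alpha via a custom vertex attribute ('aAlpha' is a
// made-up name). Each quad's six vertices get the same alpha value,
// which a custom shader can multiply into the fragment color.
function buildAlphaAttribute(alphasPerQuad) {
  const alphas = new Float32Array(alphasPerQuad.length * 6);
  alphasPerQuad.forEach((a, i) => alphas.fill(a, i * 6, i * 6 + 6));
  return alphas;
}

const alphas = buildAlphaAttribute([1.0, 0.5, 0.25]); // three quads

// three.js side (sketch):
// geometry.setAttribute('aAlpha', new THREE.BufferAttribute(alphas, 1));
// const material = new THREE.ShaderMaterial({
//   transparent: true,
//   uniforms: { map: { value: atlasTexture } },
//   vertexShader: `
//     attribute float aAlpha;
//     varying float vAlpha; varying vec2 vUv;
//     void main() {
//       vAlpha = aAlpha; vUv = uv;
//       gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
//     }`,
//   fragmentShader: `
//     uniform sampler2D map;
//     varying float vAlpha; varying vec2 vUv;
//     void main() {
//       vec4 c = texture2D(map, vUv);
//       gl_FragColor = vec4(c.rgb, c.a * vAlpha);
//     }`,
// });
```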

three.js is not optimized for sprite rendering. The problem is that even if textures share a common image (e.g. a sprite sheet), the engine currently creates for each instance of Texture a WebGLTexture object which can cause really high memory allocation. Next to draw calls, this is the most often performance problem in context of sprite rendering.
However, we are trying to fix this issue. Related PR: WebGLTextures: Avoid unnecessary texture uploads. by Mugen87 · Pull Request #17949 · mrdoob/three.js · GitHub

That’s great you are working on the memory issue, thanks for that, it will be a big help. The performance issue I’ve seen with multiple sprites happened even when using the same texture instance, for example:

texture = new THREE.Texture(…);

for (let i = 0; i < 1000; i++) {
  sprite = new THREE.Sprite(new THREE.SpriteMaterial({ map: texture }));
}

I had a quick look at the Pixi renderer code, it looks like they have an automatic batching system:

“This renderer works by automatically managing WebGLBatches”

It seems to build a pool of draw calls. It builds a shader that “colors each vertex based on a “aTextureId” attribute that points to an texture in “uSampler”. This enables the objects with different textures to be drawn in the same draw call.”

It looks like they iterate over the scene and run checks on which geometry/textures can be batched together and then render those batches.
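That batching pass can be sketched like this (a simplified take on the idea, not Pixi's actual code; the 16-slot limit stands in for gl.MAX_TEXTURE_IMAGE_UNITS):

```javascript
// Sketch of Pixi-style texture batching: walk the sprite list in draw
// order, assign each unique texture a slot in the current batch, and
// start a new batch when the slot limit is reached. Each vertex would
// get its slot written into an aTextureId attribute so the shader can
// index into the uSampler texture array.
function buildBatches(sprites, maxTextures = 16) {
  const batches = [];
  let current = { textures: [], sprites: [] };
  for (const sprite of sprites) {
    let slot = current.textures.indexOf(sprite.texture);
    if (slot === -1) {
      if (current.textures.length === maxTextures) {
        batches.push(current); // batch is full: flush and start a new one
        current = { textures: [], sprites: [] };
      }
      slot = current.textures.push(sprite.texture) - 1;
    }
    current.sprites.push({ sprite, textureId: slot });
  }
  if (current.sprites.length) batches.push(current);
  return batches;
}
```

Each batch then becomes one draw call, with its texture list bound to consecutive texture units.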

It is most optimal to do that batching on the entire scene, but usually there are just a few heavy objects that need to be rendered. I think I’ll start by making a wrapper around a BufferGeometry and update the geometry/UVs. Maybe use a custom shader to be able to use multiple textures and alpha values.

Hi,

I am faced with the exact same problem … having (a few hundred) text labels for a t-SNE cloud, three.js is getting really slow when using a “regular” Sprite. @adevart did you finally create your wrapper around BufferGeometry?

No, unfortunately I never got round to making a wrapper; I just stuck to using under 100 sprites for what I was doing, and the performance was OK. My plan was to create an array of meshes/sprites in the scene, iterate over all those meshes and make vertices in a buffer, then mirror the vertex transforms whenever I modified the positions of the separate meshes.

There were a lot of things to work out, like placing the buffer in global space and getting the correct global transforms of the vertices of the separate objects; how to handle transparency if the separate objects aren’t all transparent; and how to handle adding/removing objects from the buffer (just hiding removed ones offscreen, or rebuilding the buffer).
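The "mirror the transforms" part boils down to multiplying each local vertex by the mesh's world matrix and writing the result into the shared buffer. A sketch, assuming column-major 4x4 elements as in THREE.Matrix4.elements:

```javascript
// Sketch: bake a mesh's world transform into a shared position buffer.
// `m` is a column-major 4x4 matrix (the layout of THREE.Matrix4.elements);
// (x, y, z) is a local-space vertex; the world-space result is written
// at offset `o` of the batched geometry's position array.
function bakeVertex(m, x, y, z, out, o) {
  out[o]     = m[0] * x + m[4] * y + m[8]  * z + m[12];
  out[o + 1] = m[1] * x + m[5] * y + m[9]  * z + m[13];
  out[o + 2] = m[2] * x + m[6] * y + m[10] * z + m[14];
}

// In three.js, the matrix would come from the source mesh:
// mesh.updateWorldMatrix(true, false);
// bakeVertex(mesh.matrixWorld.elements, x, y, z, positions, offset);
// ...then geometry.attributes.position.needsUpdate = true;
```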

I found a library for fonts that did something like this for just single blocks of text:


In their update function, they take a series of text glyphs, calculate positions and UVs from the separate sprites and then set those arrays as attributes for the TextGeometry class, which extends the BufferGeometry class.

For a cloud of text, instead of just using letter/glyph positions, it would need word positions too and when the position attributes are computed and passed to the buffer, it would take the word positions into account.

Words could be set up as groups, with letters added to the groups; then get the world positions of the sprite corners and use those in the buffer. If you set the unbuffered group of words to invisible, three.js won’t render it, and toggling it back to visible helps with debugging letter/word positions.

@adevart I will have a look at three-bmfont-text, although I am not sure if in my case this will be the way to go as my application should be able to display text in every possible language and I will have to load different font faces.

Thanks for your detailed update, much appreciated 🙂

If you want lots of sprites, you could use Points, or maybe even InstancedMesh with a billboard shader.

Yeah, bmfont won’t work well for multiple languages, since bitmap fonts have a limited character set; it’s best to render the text into a canvas texture for different languages. If it’s a separate texture per word, that would mean a different material per word, which I don’t think can be batched. You could put multiple words in the same texture, but that would run out of space quickly.

There will be a limited number of characters in the cloud animation. If each character is given 128 x 128 of texture space, a 2k texture will allow for 256 unique characters. Each character can be rendered into the 2k canvas texture, and that single texture can be assigned to a buffered mesh. The buffered mesh’s vertices would then form the letter quads, with UVs that match the characters in the canvas texture.
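The atlas math works out like this (a sketch with the sizes from above; slot indices are assigned however characters get added to the canvas):

```javascript
// Sketch: a 2048px canvas split into 128px cells gives a 16x16 grid,
// i.e. 256 character slots. Given a slot index, return the normalized
// UV rectangle for a character quad's texture coordinates.
const ATLAS_SIZE = 2048;
const CELL_SIZE = 128;
const CELLS_PER_ROW = ATLAS_SIZE / CELL_SIZE; // 16

function cellToUV(index) {
  const col = index % CELLS_PER_ROW;
  const row = Math.floor(index / CELLS_PER_ROW);
  const size = CELL_SIZE / ATLAS_SIZE; // 0.0625 of the atlas per cell
  return {
    u0: col * size,
    v0: row * size,
    u1: (col + 1) * size,
    v1: (row + 1) * size,
  };
}

// Note: when the atlas is a canvas texture, mind texture.flipY in
// three.js (true by default) when mapping canvas rows to V coordinates.
```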

If the space runs out in the texture, it would either need a bigger texture, smaller letters, or a separate buffered mesh, maybe with multiple materials. A separate mesh or material would mean an extra draw call, but two draw calls will perform far better than a few hundred.


This approach would scale to other languages better I think: Troika-3d-text: library for SDF text rendering

Troika looks like a nice library; it uses the method described above, caching the letters used in a block of text into a texture atlas. It defaults to 64 x 64 sized letters, which allows 1024 characters in a 2k spritesheet. The signed distance field shaders (the bmfont library uses these too) will help keep the text sharp even though the letters are smaller.

It uses an InstancedBufferGeometry for the characters, parses glyphs from the font files into Uint buffers, and puts them into the 2k texture. I find it easier to render characters into canvas textures, as that doesn’t require loading fonts manually (it handles fonts the same way as CSS) and you get lots of things included, like stroke, italics, bold, drop shadow, etc.

To have words in the same buffer at different positions, there would need to be a word position value to offset the groups, but given that this method uses a single texture atlas and material, it would be easier to generate the words separately and merge them into a single geometry. Using a position offset would give more control, though.
