Multiple geometries in one draw call

Is there a standard way to do this? Currently I have a bunch of instances for things like trees, vegetation and primitives, all using the same texture atlas and shader but separate geometry buffers. I’m guessing I could hack this in the vertex shader by merging the geometry buffers and then passing a draw range value as an attribute, but that feels a tad innovative. Any hint towards a better way to do it would make me happy.

Check out how multiple geometries are merged in the following example. Keep in mind that this approach only works if all geometries share the same material.
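
Roughly like this (an untested sketch; depending on your three.js version the helper is called mergeGeometries or mergeBufferGeometries, and the import path may differ):

```js
import * as THREE from 'three';
// In recent releases the helper lives in the addons folder and is called
// mergeGeometries; older releases export it as mergeBufferGeometries.
import { mergeGeometries } from 'three/addons/utils/BufferGeometryUtils.js';

// Two different shapes that should end up in a single draw call.
const box = new THREE.BoxGeometry(1, 1, 1);
const cone = new THREE.ConeGeometry(0.5, 1, 8);

// Bake each shape's placement into its vertices before merging,
// since the merged result is static.
cone.translate(2, 0, 0);

// One merged geometry + one material = one draw call.
const merged = mergeGeometries([box, cone]);
const mesh = new THREE.Mesh(merged, new THREE.MeshStandardMaterial());

const scene = new THREE.Scene();
scene.add(mesh);
```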

Of course, you can also use instanced rendering to draw multiple geometries with a single draw call. There are several examples in the repo, like: https://threejs.org/examples/webgl_buffergeometry_instancing.html
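
A minimal InstancedMesh sketch (the random placement is just for illustration):

```js
import * as THREE from 'three';

const count = 100;
const geometry = new THREE.ConeGeometry(0.2, 1, 6);                  // one "tree" shape
const material = new THREE.MeshStandardMaterial({ color: 0x448844 });

// One InstancedMesh = one draw call for all `count` copies of the geometry.
const trees = new THREE.InstancedMesh(geometry, material, count);

const dummy = new THREE.Object3D();
for (let i = 0; i < count; i++) {
  dummy.position.set(Math.random() * 20 - 10, 0, Math.random() * 20 - 10);
  dummy.rotation.y = Math.random() * Math.PI * 2;
  dummy.updateMatrix();
  trees.setMatrixAt(i, dummy.matrix);                                // per-instance transform
}
trees.instanceMatrix.needsUpdate = true;

const scene = new THREE.Scene();
scene.add(trees);
```

Instances can also be moved later by calling setMatrixAt again each frame and setting instanceMatrix.needsUpdate = true.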

I’m looking for the version where some geometries are triangles and others are boxes in that one call. I don’t think I’ve seen such an example?

Something like this? https://jsfiddle.net/f2Lommf5/6189/

Yes! And then how to animate them independently of each other.

Well, when the geometries are merged, it’s hard to do this. You normally use this approach for static geometry.

Indeed, this is where I was considering passing the vertex index range of the respective shape as an attribute, to dig up the relevant vertices in the shader. My guess is that this is doable, but I’m not sure? Or there may be a more clever way to do it…

I’m not sure I understand. Can you show a simple live example that illustrates your idea?

I don’t really have one. But imagine a large text built from separate models for each letter, and then the ability to move the letters around as if they were particles. Or a vegetation system that supports different shapes for flowers and grass and responds to changes in the terrain to which they are attached.

I already do both of these, but only by changing the texture UV offsets on shared geometries, which in practice means all the shape information comes from the texture data. I’m thinking that vertices are just buffer data, the same as texture texels. The big difference is that the shader has a special function to sample texture data by coordinates. It would make sense for vertex shaders to be able to sample vertex buffer data selectively in a similar manner.
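
Roughly what I do today, as a simplified sketch (the 4×4 atlas, the tile size and atlas.png are just placeholders):

```js
import * as THREE from 'three';

const count = 16;                       // number of instances
const tile = 0.25;                      // 4x4 atlas, so each tile covers 0.25 of the texture

// One shared quad; every instance picks a different atlas tile via a uv offset.
const geometry = new THREE.InstancedBufferGeometry();
geometry.copy(new THREE.PlaneGeometry(1, 1));
geometry.instanceCount = count;

const offsets = new Float32Array(count * 3);      // per-instance world offset
const uvOffsets = new Float32Array(count * 2);    // per-instance atlas tile
for (let i = 0; i < count; i++) {
  offsets.set([(i % 4) * 1.5, Math.floor(i / 4) * 1.5, 0], i * 3);
  uvOffsets.set([(i % 4) * tile, Math.floor(i / 4) * tile], i * 2);
}
geometry.setAttribute('offset', new THREE.InstancedBufferAttribute(offsets, 3));
geometry.setAttribute('uvOffset', new THREE.InstancedBufferAttribute(uvOffsets, 2));

const material = new THREE.ShaderMaterial({
  uniforms: { atlas: { value: new THREE.TextureLoader().load('atlas.png') } },
  vertexShader: /* glsl */ `
    attribute vec3 offset;
    attribute vec2 uvOffset;
    varying vec2 vUv;
    void main() {
      vUv = uv * 0.25 + uvOffset;       // 0.25 = one tile of the 4x4 atlas
      gl_Position = projectionMatrix * modelViewMatrix * vec4(position + offset, 1.0);
    }
  `,
  fragmentShader: /* glsl */ `
    uniform sampler2D atlas;
    varying vec2 vUv;
    void main() {
      gl_FragColor = texture2D(atlas, vUv);
    }
  `,
});

const mesh = new THREE.Mesh(geometry, material);  // still a single draw call
```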

A draw call with the given vertex information forms a task which is automatically decomposed by the GPU. The resulting units of work are processed in a highly parallel fashion, independently of each other. This design does not allow arbitrary access to attribute data in the shaders.

So no, processing vertex data is something totally different from sampling textures. Because of this, I’m afraid your intended approach will not work.

In this case you’d probably instance each letter and then build words out of the instances.
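
For example (the glyph geometries are placeholder boxes here; in practice they could come from TextGeometry or loaded letter models):

```js
import * as THREE from 'three';

// Placeholder glyph geometries, one per distinct letter.
const glyphs = {
  H: new THREE.BoxGeometry(1.0, 1, 0.2),
  I: new THREE.BoxGeometry(0.3, 1, 0.2),
};
const material = new THREE.MeshStandardMaterial();
const word = 'HIHI';

// Count how often each distinct letter occurs.
const counts = {};
for (const ch of word) counts[ch] = (counts[ch] || 0) + 1;

// One InstancedMesh per distinct glyph: the draw call count equals the number
// of unique letters, not the number of letters in the word.
const scene = new THREE.Scene();
const meshes = {};
const cursor = {};
for (const ch in counts) {
  meshes[ch] = new THREE.InstancedMesh(glyphs[ch], material, counts[ch]);
  cursor[ch] = 0;
  scene.add(meshes[ch]);
}

// Lay the word out left to right; each occurrence becomes an instance.
const dummy = new THREE.Object3D();
[...word].forEach((ch, i) => {
  dummy.position.set(i * 1.5, 0, 0);
  dummy.updateMatrix();
  meshes[ch].setMatrixAt(cursor[ch]++, dummy.matrix);
});
for (const ch in meshes) meshes[ch].instanceMatrix.needsUpdate = true;
```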

Yes, that seems reasonable.

Okay, thanks for the detailed explanation 🙂

I’m not inclined to give up just yet, though, and I have two other candidate approaches.

One that I think may work would be to encode a normalized version of the vertex buffer into a texture. (Let’s say each model has 256 or fewer vertices and doesn’t need very high resolution.) Now each vPosition can be offset by the RGB values from the texture. I think I saw something similar used for a terrain shader some time ago, but in that case the shader only manipulated the elevation axis of the vertex buffer.
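
Something along these lines for the encoding side (a sketch assuming at most 256 vertices per model and a known bounding box for the normalization; buildPositionTexture is just a name I made up):

```js
import * as THREE from 'three';

// Pack the normalized vertex positions of one model into a 256x1 float texture.
// `bounds` is the THREE.Box3 used for normalization (e.g. geometry.boundingBox).
function buildPositionTexture(geometry, bounds) {
  const pos = geometry.attributes.position;
  const data = new Float32Array(256 * 4);                         // one RGBA texel per vertex
  const size = new THREE.Vector3().subVectors(bounds.max, bounds.min);

  for (let i = 0; i < pos.count && i < 256; i++) {
    data[i * 4 + 0] = (pos.getX(i) - bounds.min.x) / size.x;      // normalized to [0, 1]
    data[i * 4 + 1] = (pos.getY(i) - bounds.min.y) / size.y;
    data[i * 4 + 2] = (pos.getZ(i) - bounds.min.z) / size.z;
    data[i * 4 + 3] = 1;
  }

  const texture = new THREE.DataTexture(data, 256, 1, THREE.RGBAFormat, THREE.FloatType);
  texture.minFilter = THREE.NearestFilter;                        // exact texels, no interpolation
  texture.magFilter = THREE.NearestFilter;
  texture.needsUpdate = true;
  return texture;
}

// Usage: geometry.computeBoundingBox(); buildPositionTexture(geometry, geometry.boundingBox);
```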

Sounds like a displacement map?

Right, a displacement map with a small tweak to displace in all directions might do the trick.
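
A rough sketch of that tweak, assuming the RGB position texture from the sketch above plus a per-vertex index attribute (this uses a custom ShaderMaterial instead of the built-in displacementMap, which only pushes along the normal):

```js
import * as THREE from 'three';

// Give each vertex an index so it can look up "its" texel in the 256x1 texture.
function addVertexIndex(geometry) {
  const count = geometry.attributes.position.count;
  const indices = new Float32Array(count);
  for (let i = 0; i < count; i++) indices[i] = i;
  geometry.setAttribute('vindex', new THREE.BufferAttribute(indices, 1));
}

const material = new THREE.ShaderMaterial({
  uniforms: {
    positionTexture: { value: null },   // the texture from buildPositionTexture()
    amplitude: { value: 1.0 },          // how far to displace
  },
  vertexShader: /* glsl */ `
    uniform sampler2D positionTexture;
    uniform float amplitude;
    attribute float vindex;
    void main() {
      // Sample the texel that belongs to this vertex (texture is 256x1).
      vec2 coord = vec2((vindex + 0.5) / 256.0, 0.5);
      vec3 offset = texture2D(positionTexture, coord).rgb;        // values in [0, 1]
      // Displace along all three axes instead of only along the normal.
      vec3 displaced = position + (offset - 0.5) * amplitude;
      gl_Position = projectionMatrix * modelViewMatrix * vec4(displaced, 1.0);
    }
  `,
  fragmentShader: /* glsl */ `
    void main() { gl_FragColor = vec4(1.0); }
  `,
});

// Usage: addVertexIndex(geometry); material.uniforms.positionTexture.value = buildPositionTexture(...);
```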