Three.js Instancing - How does it work?

On a base WebGL level, how do instanced meshes work? Maybe I should be more specific. Geometries use vertex array objects (VAOs). I looked far enough into the three.js code to know that. But how do you get instancing to work? You would need to put the buffer data into the geometry's VAO, but you can use the geometry in multiple meshes, which means you should be able to use the geometry in multiple instanced meshes. So, how would that work? Do you have safeguards in place to prevent that? Do you re-upload the data per frame (per object, even if there are multiple objects with the same geometry)? Somehow I don't think that second answer is what you do, but I don't know. That's why I am asking.

Each Mesh has a geometry and a material. Usually, a Mesh has a specific geometry representing, e.g., a single character, a chest, a car, etc. Rendering such a Mesh requires moving its vertices and material to the GPU - and each Mesh has to be handled separately (i.e. each mesh causes a "draw call").

Now imagine you have 10 models of the same character. Instead of creating 10 separate meshes and 10 separate draw calls, create a single mesh and duplicate its vertices 10 times. Then, to each of these duplicates (which all live within a single geometry), apply a separate transformation matrix - making it look as if there are 10 independent meshes.

That way you get the effect of having multiple meshes within a single draw call.
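
As a minimal sketch of what that looks like with three.js's InstancedMesh (assuming a scene already exists - the counts and positions here are just placeholders):

// one geometry + one material, drawn 10 times in a single draw call
const geometry = new THREE.BoxGeometry();
const material = new THREE.MeshLambertMaterial();
const mesh = new THREE.InstancedMesh(geometry, material, 10);

const dummy = new THREE.Object3D();
for (let i = 0; i < 10; i++) {
  dummy.position.set(i * 2, 0, 0);      // give each copy its own transform
  dummy.updateMatrix();
  mesh.setMatrixAt(i, dummy.matrix);    // per-instance transformation matrix
}
mesh.instanceMatrix.needsUpdate = true;
scene.add(mesh);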

I get what instancing is used for; what I meant was that you could use the geometry in multiple instanced meshes. Like, I know that instancing is used to reduce draw calls by putting the matrices in an attribute and setting the attribute divisor, but if you have two different instanced meshes using the same geometry, you would have to re-upload the data to the geometry per object per frame. It would be easier to just make a different geometry for the second instanced mesh and call it a day, but what I was wondering is whether that is what people are forced to do, or whether there are other things you do in that event. So, for example, say you have 60 boxes, all the same size, which means you could share the geometry. If 30 of them have the same material, then you can create an instanced mesh for those 30, and if the other 30 share a different material, then you can make a second instanced mesh. What I was wondering about is what happens if you share the geometry between the two instanced meshes.

We encourage sharing the same BufferGeometry for reuse with different Mesh and Material instances. This is totally fine and should not cause any duplicate GPU uploads.

That’s quite different from “GPU instancing”, however. A single BufferGeometry reused by 60 THREE.Mesh instances will result in 60 draw calls. To draw them all in a single draw call you must use the THREE.InstancedMesh interface.
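
To make that concrete, here is a rough sketch of the two approaches (the counts and material are just placeholders, and a scene is assumed to exist):

const geometry = new THREE.BoxGeometry();
const material = new THREE.MeshLambertMaterial();

// option A: one shared BufferGeometry across 60 plain Meshes
// -> the vertex data is uploaded once, but you still pay 60 draw calls
for (let i = 0; i < 60; i++) {
  const box = new THREE.Mesh(geometry, material);
  box.position.x = i * 2;
  scene.add(box);
}

// option B: the same geometry in a single InstancedMesh
// -> one draw call for all 60 boxes (per-instance matrices set via setMatrixAt)
const instanced = new THREE.InstancedMesh(geometry, material, 60);
scene.add(instanced);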

Again, you guys misunderstand what I am saying, unless I am just not being very clear, because if that is the case, then I am sorry. Let me try to explain myself better and write out in code what I am trying to say…

// set up renderer, scene, camera, controls...

var geometry = new THREE.BoxGeometry();
var material1 = new THREE.MeshLambertMaterial();
var material2 = new THREE.MeshPhongMaterial();

// both instanced meshes share the same BufferGeometry
var iMesh1 = new THREE.InstancedMesh(geometry, material1, 30);
var iMesh2 = new THREE.InstancedMesh(geometry, material2, 30);

// set each instance's matrix with setMatrixAt, add both meshes to the scene,
// set up the rendering loop, lights, and other stuff

Would the code above work? You have the geometry being shared between the two instanced meshes, but you can't combine them into one because the materials are different. So, would it be better to make a new geometry for the second instanced mesh, or would the code above work? Because the way I see it, on a base WebGL level, you need to submit buffer info to the GPU. That is unavoidable if you want decent performance from applications. The buffer info is stored in the VAO assigned to the buffer geometry. So, one would conclude that the buffer info for the instanced matrices and instanced colors would also be stored in the VAO assigned to the buffer geometry. But the geometry is part of two meshes, so you would need to constantly re-submit the data to the GPU.
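
To make the WebGL-level concern concrete, here is roughly the kind of setup I mean - plain WebGL 2, not three.js code, and the buffer/location variables are made up for illustration:

var vao = gl.createVertexArray();
gl.bindVertexArray(vao);

// regular per-vertex attribute
gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer);
gl.enableVertexAttribArray(positionLoc);
gl.vertexAttribPointer(positionLoc, 3, gl.FLOAT, false, 0, 0);

// per-instance attribute (one vec4 column of the instance matrix)
gl.bindBuffer(gl.ARRAY_BUFFER, instanceMatrixBuffer);
gl.enableVertexAttribArray(matrixLoc);
gl.vertexAttribPointer(matrixLoc, 4, gl.FLOAT, false, 64, 0);
gl.vertexAttribDivisor(matrixLoc, 1); // advance once per instance, not per vertex

gl.bindVertexArray(null);

// later, each frame:
gl.bindVertexArray(vao);
gl.drawArraysInstanced(gl.TRIANGLES, 0, vertexCount, instanceCount);

The instance attribute's pointer and divisor end up as part of whatever VAO is bound at the time, which is why I am asking what happens when two instanced meshes share one geometry's VAO.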

Yep, can't see a reason it wouldn't. Three things are sent to the GPU for instanced meshes - the geometry (just duplicated vertices with no transforms), the material, and the transformation matrices. The transformations are then applied in the vertex shader, so they don't affect the geometry on the CPU in any way.

It will work, but only with the assumption that both of these InstancedMeshes have the same (or at least a similar) number of instances. Otherwise you'd be rendering more vertices than necessary / get z-fighting issues.

(I'm nowhere near the level to answer the VAO concerns tho 🥲)

Dang school computer… can't open the link. Oh well… I tried it for myself (I had to re-download the library, but at least I now have the most up-to-date version - I got rid of it earlier because I am writing my own rendering engine modeled very closely after three.js, so I didn't want too much influence from its code), and it works. I brought the count down to 10 so I could actually count them, and when counting, I found 20 boxes.
The reason I thought it wouldn't work is that all buffer data (including the matrix attribute and color attribute) would have to be stored in the buffer geometry's VAO, simply because of how the rendering works. You input all of the buffer data once, and you only have to re-input it when the needsUpdate flag is flipped. But the instanced attributes would also have to be stored in the geometry's VAO, because when you bind it in WebGL, the buffer data gets associated with that VAO and stored there. Now, this is a personal thing, but while VAOs are nice for performance because they cause fewer WebGL calls, they are worse for memory usage because you need to create new buffers. That's just personal preference, and a bit of a tangent. Anyway, you would also store the instancing buffers in the VAO because that is how the data is put into WebGL. So, you would have to re-upload the data to the GPU per object - unless you don't do GPU instancing, in which case I am completely wrong here. But I think you guys do do GPU instancing, because why else would the ANGLE_instanced_arrays extension be in the three.js WebGLRenderer? Sorry, just putting down some things for myself to think about in the future.
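
Something else to check later - and this is just my assumption from skimming the docs, not something I have verified in the source: InstancedMesh seems to keep its per-instance data on the mesh itself rather than on the shared geometry. instanceMatrix (and the optional instanceColor) is an InstancedBufferAttribute hanging off the InstancedMesh, so if that is right, the two meshes in my example would own separate instance buffers even though they point at the same BufferGeometry. A quick way to poke at it from the console:

var iMesh1 = new THREE.InstancedMesh(geometry, material1, 30);
var iMesh2 = new THREE.InstancedMesh(geometry, material2, 30);

// the shared BufferGeometry holds only the vertex data...
console.log(iMesh1.geometry === iMesh2.geometry);              // true
// ...while each mesh carries its own per-instance matrix attribute
console.log(iMesh1.instanceMatrix === iMesh2.instanceMatrix);  // false
console.log(iMesh1.instanceMatrix.isInstancedBufferAttribute); // true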

Ok, so after some testing, it doesn't matter how many more instances one has than the other. Here is a fiddle so you can… fiddle?.. with the values. So… it seems that you do re-upload the data per object… I dunno… I would have to look further into the code, and to be honest, it's hard to do that with the three.js source code. No offense to you guys, but searching through everything to try and find the code you use for uploading the buffer data was hard and laborious enough. I mean, I would be willing to root through the code, but it is easier to just ask you guys - and testing it myself seems to have confirmed what I thought. Thanks for your guys' help!