InstancedMesh from GLTF doesn't work?!


InstancedMesh ([1] in the code) doesn’t work when the geometry is loaded from glTF. Is this a bug, or am I doing something wrong? When using Mesh ([2] in the code) everything works fine, but InstancedMesh doesn’t work at all…

the code:

loaderGLTF.setPath( 'models/gltf/DamagedHelmet/glTF/' );
loaderGLTF.load( 'DamagedHelmet.gltf', function ( gltf ) {

	let _chr = gltf.scene;
	let geometry = new THREE.InstancedBufferGeometry();	// [1]
	// let geometry = new THREE.BufferGeometry();		// [2]

	_chr.traverse( function ( child ) {

		if ( child.isMesh ) {

			geometry = child.geometry.clone(); // same result either way

			let mchr = new THREE.InstancedMesh( geometry, new THREE.MeshBasicMaterial( { color: 0xff0000 } ), 1 );	// [1]
			// let mchr = new THREE.Mesh( geometry, new THREE.MeshBasicMaterial( { color: 0xff0000 } ) );		// [2]

			scene.add( mchr );

		}

	} );

} );

Is this a bug? All attributes are equal.

You don’t have to use InstancedBufferGeometry when using InstancedMesh.

I’ve tried both InstancedBufferGeometry and BufferGeometry; the result is the same. I’ve also tried BufferGeometryUtils.mergeBufferGeometries 🙂

By the way, everything works fine even with InstancedBufferGeometry when I build the geometry manually, but with glTF something weird happens (I mean, nothing happens 🙂).

P.S. What is InstancedBufferGeometry needed for, if not for InstancedMesh? There is no information about this in the docs section for InstancedBufferGeometry.

InstancedBufferGeometry was the first API for instanced rendering. InstancedMesh came later. If only the transformation is different, using InstancedMesh is way more user-friendly since built-in shader code is automatically enhanced for instanced rendering.

The following example just uses InstancedBufferGeometry.

You have to remember that you normally can’t just extract the geometry of a glTF asset and create a new mesh from it. That’s because a glTF asset is always represented as a hierarchy of 3D objects that might have important 3D transformations. So if you just extract a mesh in the middle of a hierarchy, it could be too big or too small or even rotated and positioned in the wrong way.

But everything works fine when I do exactly the same with THREE.Mesh (see [2] in the code above): the red helmet shows up, with the same geometry-extraction procedure. When using THREE.InstancedMesh (see [1] in the code above) it doesn’t show. That’s the difference, and the weird issue. So, is this a bug?

No. I’m afraid you are not using the API correctly. You have to configure the instance transformation at least once.


Thank you very much for a detailed explanation!
With these lines the red helmet appeared:

var dummyMatrix = new THREE.Matrix4();
mchr.setMatrixAt( 0, dummyMatrix );

Didn’t know that after creating an InstancedMesh it is necessary to call setMatrixAt at least once, even with a dummy Matrix4…

There is an example of loading a glTF model and putting its parts into InstancedMesh instances, here:


First of all, thank you for the additional clarity in understanding what’s going on under the hood in a glTF container.

I tried expanding your fiddle example by increasing the InstancedMesh count and then positioning the helmets a bit to make sure the instancing was occurring, but when I ran the following, I got a cumulative draw call count (25) rather than just the one I was expecting. Is there anything in particular that I’m missing?

Thanks in advance for your help!

import * as THREE from "three";
import { OrbitControls } from "three/examples/jsm/controls/OrbitControls.js";
import { GLTFLoader } from "three/examples/jsm/loaders/GLTFLoader.js";
import { RGBELoader } from "three/examples/jsm/loaders/RGBELoader.js";

var container, controls;
var camera, scene, renderer;

init();

function init() {

  const api = { count: 5 };

  container = document.createElement("div");
  document.body.appendChild(container);

  camera = new THREE.PerspectiveCamera(60, window.innerWidth / window.innerHeight, 0.01, 1000);
  camera.position.set(10, 15, -150);

  scene = new THREE.Scene();

  renderer = new THREE.WebGLRenderer({ antialias: true });
  renderer.setSize(window.innerWidth, window.innerHeight);
  renderer.toneMapping = THREE.ReinhardToneMapping;
  renderer.toneMappingExposure = 1;
  renderer.outputEncoding = THREE.sRGBEncoding;
  container.appendChild(renderer.domElement);

  var pmremGenerator = new THREE.PMREMGenerator(renderer);

  new RGBELoader()
    .load("/textures/environmentMaps/royal_esplanade_1k.hdr", function (texture) {

      var envMap = pmremGenerator.fromEquirectangular(texture).texture;

      // scene.background = envMap;
      scene.environment = envMap;

      let dummy = new THREE.Object3D();

      var loader = new GLTFLoader();
      loader.load("/models/glTF/DamagedHelmet.glb", function (gltf) {

        gltf.scene.traverse(function (child) {

          if (child.isMesh) {

            let xDistance = 20;
            let zDistance = 60;
            // let xOffset = -100;

            for (let i = 0; i < api.count; i++) {
              for (let j = 0; j < api.count; j++) {

                let instancedMesh = new THREE.InstancedMesh(child.geometry, child.material, api.count);
                instancedMesh.position.x = (xDistance * i); // + xOffset;
                instancedMesh.position.z = (zDistance * j);
                instancedMesh.setMatrixAt(0, dummy.matrix);
                instancedMesh.scale.set(10, 10, 10);
                instancedMesh.rotation.y = THREE.Math.degToRad(-180);
                scene.add(instancedMesh);

              }
            }

          }

        });

        render();

      });

    });

  const axesHelper = new THREE.AxesHelper(30);
  scene.add(axesHelper);

  controls = new OrbitControls(camera, renderer.domElement);
  controls.addEventListener("change", render); // use if there is no animation loop
  controls.minDistance = 5;
  controls.maxDistance = 1000;
  controls.target.set(0, 0, 0);
  controls.update();

  window.addEventListener("resize", onWindowResize, false);

}

function onWindowResize() {
  camera.aspect = window.innerWidth / window.innerHeight;
  camera.updateProjectionMatrix();
  renderer.setSize(window.innerWidth, window.innerHeight);
  render();
}

function render() {
  renderer.render(scene, camera);
}

Each InstancedMesh is a single draw call. You’ll want to create one, and then set the position/rotation/scale of the nth instance inside your loop:

const instancedMesh = new THREE.InstancedMesh( child.geometry, child.material, api.count * api.count );
const dummyObject = new THREE.Object3D();

for ( let i = 0; i < api.count; i ++ ) {

	for ( let j = 0; j < api.count; j ++ ) {

		dummyObject.position.x = xDistance * i;
		dummyObject.position.z = zDistance * j;
		dummyObject.scale.set( 10, 10, 10 );
		dummyObject.updateMatrix(); // bake position/scale into dummyObject.matrix

		instancedMesh.setMatrixAt( i + j * api.count, dummyObject.matrix );

	}

}

instancedMesh.instanceMatrix.needsUpdate = true;
scene.add( instancedMesh );


Thank you, Don, I really appreciate the assistance, but unfortunately I could not get this to work. I can create a single instance, but trying to make those instances spread out (and eventually be placed with translation data from an endpoint) seems much more difficult. The iteration above was my feeble attempt to get that to work, but unfortunately I stumble up against disappearing instances. The draw calls are what they are (two in my case: one for the helmet and one for the axes helper), but none of the instances appear after iterating. I can tell that the code is instancing by the increase in the triangle count, but the instances simply do not render. I can only assume that there is some matrix (or material) magic that I do not fully understand, or some additional missing piece of the puzzle?

BTW, I’ve also tried to reverse engineer (strip away, really) this example, but in my opinion that example takes this to a whole other level of complexity with the particle emission from the mesh. I really wish I could find a simpler, more tangible example of instancing a glTF file, shown spread out (so as to confirm that the instances are being rendered), but I’ve unfortunately been unable to find one. If there’s a simpler, more understandable example that you can point me to, I would really appreciate it!