Cannot load a highly detailed 3D model

As a non-specialist designer, I don't know every technical specification of 3D environments precisely; for work purposes we ran various experiments with importing glTF assets.

In three.js we used the beginner's package, which loads a specific object from your project's folder and provides some mouse controls to see how it feels.

We realized that in three.js:

- Loading 384,149 KB, 4,208,360 triangles → it's OK.
- Loading 856,085 KB, 9,258,394 triangles → the page is displayed, but the model doesn't seem to appear in the scene, or isn't loaded at all. (With the smaller object we had 48 fps; with the bigger model we get 60 fps and no display at all.)
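As a back-of-the-envelope sketch of the scale involved (my own estimate, with assumptions stated in the comments): OBJLoader produces non-indexed geometry, so every triangle uploads three vertices, each typically carrying a position and a normal (6 floats = 24 bytes).

```javascript
// Sketch: rough GPU-memory estimate for a non-indexed mesh.
// Assumes 3 vertices per triangle, each with position + normal
// (6 x 4-byte floats = 24 bytes). UVs or indexing change the figure.
function estimateMeshBytes(triangles) {
  const BYTES_PER_VERTEX = 24;
  return triangles * 3 * BYTES_PER_VERTEX;
}

// 9,258,394 triangles -> about 667 MB of vertex data.
console.log((estimateMeshBytes(9258394) / 1e6).toFixed(0) + ' MB');
```

So the bigger model alone is in the hundreds of megabytes of vertex data before textures, which is the kind of size where browsers and GPUs start to struggle.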

We don't really know what happens during the process.

We tried to find technical information, but there isn't any precise documentation on importing huge assets into three.js. We learned that the browser has its own limits too, but still found no precise information…

Do you have any idea what happens?
Is there a "max size" specification somewhere?
Or is it some kind of internal instability?
Is there any way to get technical information? And last but not least:
Is there any other way to import huge assets into three.js?

Our minds are open to any information out there…

Thanks in advance,

  1. Print the scene to see if it’s actually getting loaded or not.

  2. Are you loading these models individually or simultaneously? There might be something wrong with the model itself.

  3. While loading that specific model, does the loader throw any errors?

                loader.load( url,
                (model) => {/* Your code here. */},
                (xhr) => {},
                (error) => { console.log(error) })
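For point 1, a sketch of how you might walk the scene graph and list what actually made it into the scene (in three.js you would call `scene.traverse(...)` directly; here a minimal duck-typed graph, objects with `.children` and `.isMesh`, stands in for it):

```javascript
// Sketch: collect the names of all mesh nodes under a scene-graph root.
// Mirrors what three.js's scene.traverse() visits; isMesh flags
// renderable meshes, as in three.js.
function listMeshes(root) {
  const found = [];
  (function traverse(node) {
    if (node.isMesh) found.push(node.name || '(unnamed mesh)');
    (node.children || []).forEach(traverse);
  })(root);
  return found;
}
```

If this list is empty after the loader reports 100%, the model was parsed but never added to the scene.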

Can we see your code?

I think a rule of thumb for "sizes" is to keep the total number of triangles below 3,000,000 (3 million)…

the number of individual mesh + material combinations below 300… and texture sizes at or below 4096×4096. And you will have to test on different hardware configurations!

Ideally you keep the numbers lower than these values to get a decent framerate.
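As a rough way to check a model against that triangle budget (a sketch; `triangleCount` is a made-up helper, and the geometry object mirrors the layout of three.js's `BufferGeometry`):

```javascript
// Sketch: estimate the triangle count of a BufferGeometry-like object.
// Indexed geometry draws index.count / 3 triangles; non-indexed
// geometry draws position.count / 3 (three vertices per triangle).
function triangleCount(geometry) {
  if (geometry.index) return geometry.index.count / 3;
  return geometry.attributes.position.count / 3;
}

// In a real scene you would sum this over every mesh, e.g.:
// let total = 0;
// scene.traverse((node) => { if (node.isMesh) total += triangleCount(node.geometry); });
```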


Here is the code:

```javascript
// Parameters.
var objectPath = 'Models/Lucy67percent/Lucy67percent.obj';
var cameraPosition = [0, 5.537, 9.286];
var cameraXangle = -20;
var objectScale = 0.007;
var rotationSpeed = 0.5;

// Import three.js.
import * as THREE from 'three';
import { OBJLoader } from 'three/addons/loaders/OBJLoader.js';

// Add the stats module (mainly for the FPS counter).
(function () {
  var script = document.createElement('script');
  script.onload = function () {
    var stats = new Stats();
    document.body.appendChild(stats.dom);
    requestAnimationFrame(function loop() {
      stats.update();
      requestAnimationFrame(loop);
    });
  };
  script.src = 'stats.js-master/build/stats.min.js';
  document.head.appendChild(script);
})();

// Create the scene.
const scene = new THREE.Scene();

// Create the camera.
const camera = new THREE.PerspectiveCamera(75, window.innerWidth / window.innerHeight, 0.05, 4000);

// Create a directional light (like a sun), and attach it to the camera.
const directionalLight = new THREE.DirectionalLight(0xffffff, 1);
camera.add(directionalLight);

// Position and orientation of the camera (values were found using Godot).
camera.position.set(cameraPosition[0], cameraPosition[1], cameraPosition[2]);
var xAngleInRad = cameraXangle * (Math.PI / 180.0);
camera.rotation.set(xAngleInRad, 0, 0);

// Add the camera to the scene (because the camera has a light attached to it).
scene.add(camera);

// Create a renderer and attach it to the DOM.
const renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

// Define variables for the grid of cubes.
var RowColumnCount = 100;
var CubeList = [];

// Instantiate a loader.
const loader = new OBJLoader();
var importedObject;

// Load the resource.
loader.load(
  // Resource URL.
  objectPath,
  // Called when the resource is loaded.
  function (object) {
    object.scale.set(objectScale, objectScale, objectScale);
    importedObject = object;
    scene.add(object);
  },
  // Called while loading is in progress.
  function (xhr) {
    console.log((xhr.loaded / xhr.total * 100) + '% loaded');
  },
  // Called when loading has errors.
  function (error) {
    console.log('An error happened', error);
  }
);

var clock = new THREE.Clock();

// Animation function.
function animate() {
  requestAnimationFrame(animate);

  // Rotate the object around the y axis.
  if (importedObject) {
    var rotationSpeedRad = rotationSpeed * (Math.PI / 180.0);
    importedObject.rotation.y += rotationSpeedRad * clock.getDelta();
  }

  renderer.render(scene, camera);
}
animate();
```

The HTML is a minimal page that links the JavaScript file:

```html
<!DOCTYPE html>
<html>
  <head>
    <title>My first three.js app</title>
    <style>body { margin: 0; }</style>
  </head>
  <body>
    <script type="module" src="RotatingObj.js"></script>
  </body>
</html>
```

As far as I remember, most of the assets we used don't have any problem, and only a single mesh is loaded at a time. We used some well-known demonstration models such as Lucy, Suzanne, the Stanford Dragon, Stanford Lucy, …

None of them seems to have a problem; it is more likely the size of the asset that creates the problem, but we don't know exactly whether something breaks somewhere.

On the console, with the progress and error callbacks in place, everything seems OK:

```text
0.14204199816295085% loaded
RotatingObj.js:112 0.22427683920465924% loaded
RotatingObj.js:112 99.9975667067174% loaded
RotatingObj.js:112 100% loaded
```

We have already tested different "sizes" of objects. We don't have precise comparisons for textures, but it seems the mesh was more impactful than the texture, which only follows the mesh's complexity.

In the current situation we can't really test with different types of computers, even though the hardware is a good candidate to test…

Is "3 million" a practical observation, or is it quoted from a specification or something like that?

Because the file is so big, have you tried converting it to .glb/glTF and then applying Draco compression? You can use gltf-pipeline for this after you convert the file to .glb/glTF.
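A possible conversion pipeline (a sketch; it assumes Node.js is installed, that `obj2gltf` and `gltf-pipeline` are the npm package names, and the file name is just an example):

```shell
# Install the converters (npm package names assumed).
npm install -g obj2gltf gltf-pipeline

# 1. Convert the OBJ to binary glTF.
obj2gltf -i Lucy67percent.obj -o Lucy67percent.glb

# 2. Apply Draco mesh compression (-d enables Draco).
gltf-pipeline -i Lucy67percent.glb -o Lucy67percentDraco.glb -d
```

Note that Draco-compressed files then need a DRACOLoader configured on the GLTFLoader when you load them in three.js.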

Also regarding big model sizes refer to these two posts
Source 1
Source 2

Thanks for sharing these.

So, based on your sources, it seems that the loading "problem" is close to a technical limit;
this is what I was looking for. I suppose it isn't very "official" documentation based on proper testing?

It's a ballpark. High-end graphics cards can handle 10× that much, low-end cards much less.