How much memory (in GB) is my renderer/scene holding?


I am creating an application using three.js where users can upload many STL models, but they can see only one at a time in the viewer. I control the visibility of models by setting object3D.visible to true or false. Let’s say a user uploaded 10 parts of 100 MB each and is viewing only one model; the remaining 9 are hidden.

I want to know the maximum amount of memory the renderer/scene can hold. I believe the memory limit per tab in Chrome is 4 GB. I want to track how much memory my renderer is holding, and if it exceeds my threshold (say, 700 MB), I want to delete previously loaded hidden objects from the scene.

Can anyone help me achieve this?

Look at the documentation here…


But in the WebGLRenderer’s info property I only get the number of geometries and textures. It doesn’t include the size of each geometry. I want to know the size of objects in MB.

OK, you won’t find that there, then. The file size may be hidden for security reasons in JS, although I’m not 100% sure on that. What you may be able to do is get the object3D’s Blob reference and use .size, maybe?


I created a Blob reference like this:

var Bb = new Blob([object], {type: "object"});

And I get a size of 15, which I guess is in bytes. I uploaded an 11 MB file but I get only 15 bytes here. (Presumably the Blob constructor just stringified the object to "[object Object]", which is 15 characters, so this doesn’t measure the geometry at all.)

I am thinking of trying the following approach: get the position and normal attributes via geometry.attributes.position and geometry.attributes.normal. position gives the number of vertices and normal gives the normals.

I am assuming one pixel is required to display one vertex, and that to display one pixel the system takes 24 bytes. We are not displaying the normals; the system holds them as a Float32Array, so each value takes 4 bytes.

My final calculation:

total memory = position.count * 24 + normal.length * 4

Example: position.count = 356838, normal.length = 1070514, so total memory = 356838 * 24 + 1070514 * 4 = 12846168 bytes.

This is around 12 MB. I didn’t find any direct solution while searching through platforms. Is my calculation correct? Am I missing anything?
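The estimate above can be written as a small helper. Note the constants (24 bytes per vertex, 4 bytes per normal value) are the assumptions from this post, not figures taken from three.js:

```javascript
// Rough memory estimate using the assumptions above:
// 24 bytes per displayed vertex (position.count) and 4 bytes per
// normal array element (Float32Array). These constants are
// assumptions from this thread, not official three.js values.
function estimateGeometryBytes(positionCount, normalLength) {
  const BYTES_PER_VERTEX = 24; // assumed cost per displayed vertex
  const BYTES_PER_FLOAT = 4;   // Float32Array element size
  return positionCount * BYTES_PER_VERTEX + normalLength * BYTES_PER_FLOAT;
}

// The numbers from the example above:
const total = estimateGeometryBytes(356838, 1070514);
console.log(total);                    // 12846168 bytes
console.log((total / 1e6).toFixed(1)); // ~12.8 MB
```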

For geometry:

BufferGeometryUtils.estimateBytesUsed( geometry : BufferGeometry )

the code snippet from BufferGeometryUtils
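For reference, here is a simplified standalone sketch of what estimateBytesUsed computes. It duck-types on the BufferGeometry shape (attributes with count, itemSize, and a typed array), so treat it as an adaptation rather than the exact library source:

```javascript
// Simplified sketch of BufferGeometryUtils.estimateBytesUsed:
// sum bytes over every attribute, plus the index buffer if present.
// Works on anything shaped like a BufferGeometry (duck typing).
function estimateBytesUsed(geometry) {
  let mem = 0;
  for (const name in geometry.attributes) {
    const attr = geometry.attributes[name];
    mem += attr.count * attr.itemSize * attr.array.BYTES_PER_ELEMENT;
  }
  const index = geometry.index;
  if (index) {
    mem += index.count * index.itemSize * index.array.BYTES_PER_ELEMENT;
  }
  return mem;
}

// Mock geometry: 3 vertices, position + normal as Float32Arrays.
const mockGeometry = {
  attributes: {
    position: { count: 3, itemSize: 3, array: new Float32Array(9) },
    normal:   { count: 3, itemSize: 3, array: new Float32Array(9) },
  },
  index: null,
};
console.log(estimateBytesUsed(mockGeometry)); // 72 bytes (2 attributes * 9 floats * 4)
```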


This works for me. Thanks a lot.

Is deleting the objects from the scene the only way to handle performance? Ideally, I want to keep the invisible objects in the scene itself, because if I delete a part and load it again using the loader, it takes time and I don’t want to wait. Even though I make the previous parts invisible while loading a new part, I can see the memory of my application increasing when I take a heap snapshot each time.

If there is a way to stop my application from counting the memory of the invisible parts as well, it would be really helpful. If there is no way, I will go with deleting the objects.


If your objects are loaded and cached in the browser’s VRAM, you need to “garbage collect” them yourself by calling dispose() on geometries, materials, and textures to have them unloaded from memory. Think of it like loading a plane with steel mid-flight: if the plane is too heavy, it will eventually crash, and if you want to load another resource, say gold, you need to offload some steel to fit the gold in without being overweight.

What file format are the models? You have the option to use Draco compression if they are in glTF format.

Otherwise, I don’t know what the outcome would be, but you could potentially save the position and normal attributes of each object’s geometry as their own array variables and dynamically rebuild the geometry when needed. I’m not sure whether that would save memory or not, though…


I’m using STL files; I cannot change the file format. I wonder, then, how browser-based games manage memory when they need to handle many more 3D objects and animations. I believe the memory limit per tab in Chrome is 4 GB, and games will definitely cross this limit. What will happen if a three.js game page crosses 4 GB? Will it crash?

Can you please tell me how I can recreate the object if I store geometry.attributes.position and geometry.attributes.normal?


Oof, I’ve had an iPhone GPU completely die on an early test scene I made, by accidentally pumping too much geometry and too many 4K textures into the viewport. That said, it was an old iPhone 6s, but it was 100% the scene that sent it to its grave.

I think with pre-compiled, closed-circuit systems like Unity and Unreal there’s a level of optimisation, baking both reflections and light maps (consciously making web-ready assets with optimised geometry and textures), that lets you get away with the illusion of a lot more effects and objects in a scene. You can achieve very similar results with three.js if you apply similar rules to your code structure, model and texture optimisation, and use instancing where possible. If you are merely chucking huge unoptimised models into your scene, you will always be limited in what you can do.

Regarding rebuilding… this was a random thought and may not be a lead at all. It would involve loading all your models and storing the position and normal attributes in an array of JS objects, [{type: obj1, positions: floatArray, normals: floatArray}, …], then feeding each “type” to a function that builds a new BufferGeometry from that type’s positions and normals, applies a material, and returns the resulting mesh as an object to be added to / removed from the scene where needed. Again, I don’t know much; this is a theory and may be useless!
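A sketch of that idea. The cache itself is plain JS; the three.js rebuild calls (BufferGeometry, BufferAttribute, Mesh) are shown in comments so the snippet stays self-contained. All names here are made up for illustration:

```javascript
// Sketch: keep only the raw typed arrays, rebuild meshes on demand.
// Cache keyed by a user-chosen name; stores position/normal arrays.
const geometryCache = new Map();

function cacheAttributes(name, geometry) {
  geometryCache.set(name, {
    positions: geometry.attributes.position.array, // Float32Array
    normals: geometry.attributes.normal.array,     // Float32Array
  });
}

function rebuildMesh(name /*, material */) {
  const entry = geometryCache.get(name);
  if (!entry) return null;
  // With three.js this would become:
  //   const geo = new THREE.BufferGeometry();
  //   geo.setAttribute('position', new THREE.BufferAttribute(entry.positions, 3));
  //   geo.setAttribute('normal', new THREE.BufferAttribute(entry.normals, 3));
  //   return new THREE.Mesh(geo, material);
  return entry; // placeholder so the sketch runs standalone
}
```

Note that this would free GPU memory once the built geometry is disposed, but the typed arrays themselves still occupy the JS heap, so it may not reduce what a heap snapshot reports.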

In @orion_prime’s solution we calculate memory for geometry. Should we calculate memory for materials and textures as well?

In the case of standard 3d objects, you can make them appear and disappear by adding or removing them from a visible parent. They are retained in memory, but are not part of the scene.

However, this option does not appear to be available with a loaded glTF file, which, I believe, is saved as a “group”. You might be able to isolate individual objects in your glTF file and “repackage” them as 3D objects which you can add and remove. But I have not tried that, so I don’t know for sure.

And I am not sure if, from a performance standpoint, the add/remove option is significantly better (or worse) than using the visible flag.

I’ve implemented @orion_prime’s suggestion. Thank you all for the wonderful support :+1:

For textures, according to this page of the three.js manual:

Memory Usage

Textures are often the part of a three.js app that uses the most memory. It’s important to understand that, in general, textures take width * height * 4 * 1.33 bytes of memory.

Shadows: 512×512 px → 20 MB, 1024 → 30 MB, 2048 → 60 MB, 4096 → 180 MB, 8192 → 690 MB

128×128 = 0.084 MB; 10 pieces → 0.84 MB; 100 pieces → 8.4 MB
256×256 = 0.348 MB; 10 pieces → 3.48 MB; 100 pieces → 34.8 MB
512×512 = 1.396 MB; 10 pieces → 13.96 MB; 100 pieces → 139.6 MB
1024×1024 = 5.56 MB; 10 pieces → 55.6 MB; 100 pieces → 556 MB
2048×2048 = 20.96 MB; 10 pieces → 209.6 MB; 100 pieces → 2.096 GB
4096×4096 = 88.08 MB; 10 pieces → 880.8 MB; 100 pieces → 8.808 GB
8192×8192 = 356.48 MB; 10 pieces → 3.5648 GB; 100 pieces → 35.648 GB
16384×16384 = 1.428 GB; 10 pieces → 14.28 GB; 100 pieces → 142.8 GB
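The manual’s rule of thumb (width * height * 4 bytes, times roughly 1.33 for mipmaps) is easy to turn into a helper; the function name and the decimal-MB formatting are my own:

```javascript
// Rule of thumb from the three.js manual: RGBA is 4 bytes per pixel,
// and mipmaps add roughly one third on top (hence the ~1.33 factor).
function estimateTextureBytes(width, height, mipmaps = true) {
  const base = width * height * 4;
  return mipmaps ? base * 1.33 : base;
}

const bytes = estimateTextureBytes(1024, 1024);
console.log((bytes / 1e6).toFixed(2) + ' MB'); // ~5.58 MB for one 1024x1024 texture
```

Multiplying by the texture count then gives the totals in the list above (allowing for rounding).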