Is it expensive to create threejs classes?

Hello,

As the title says: is it expensive to create geometries, vectors and other class instances? I don’t add anything to the scene; I only want to use them for calculations.

I am talking about thousands of instances.

Best

If you can re-use, it’s better; it’s often not needed to create that many objects. Just keep a vector around in global space and use it in your calculations. Creating lots of objects at runtime will cause GC problems, which has actual performance implications, and creating tons of objects in global space for no reason will increase the memory footprint.
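For example, a minimal sketch of that kind of re-use (the `_tmp` vector and the `squaredDistanceTo` helper are just illustrative names, not an existing API):

```js
import * as THREE from 'three';

// One scratch vector kept at module scope and re-used by every call,
// instead of allocating a new Vector3 on each invocation.
const _tmp = new THREE.Vector3();

// Hypothetical helper: squared distance from an object's position to a point.
function squaredDistanceTo(object, point) {
  // copy() writes into the existing vector, so no allocation happens here
  return _tmp.copy(object.position).sub(point).lengthSq();
}
```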

I am currently working on creating objects based on a topological order, like a node graph. At each step I save the resulting geometry so I can continue from that point if relevant values change. For this reason, I can’t use just one geometry or vector.

But the problem should be solved if I always dispose of the old instance before creating a new one, correct?

It’s never a good idea to create and dispose of a geometry continuously, or to create just any kind of object in a realtime procedure. There are various ways to get around this without ending up with page crashes or GC hiccups.

For objects, you either use them as a singleton outside the hot scope or pool them; for geometries, it depends a lot on your scenario.
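A basic pool could look something like this (just a sketch; the `Vector3Pool` class and its `acquire`/`release` names are made up for illustration):

```js
import * as THREE from 'three';

// Minimal illustrative pool: acquire() hands out a vector, release() returns it,
// so hot code paths stop allocating once the pool has warmed up.
class Vector3Pool {
  constructor() {
    this.free = [];
  }
  acquire() {
    return this.free.pop() || new THREE.Vector3();
  }
  release(v) {
    this.free.push(v.set(0, 0, 0));
  }
}

const pool = new Vector3Pool();

function computeSomething(a, b) {
  const v = pool.acquire();
  const length = v.copy(a).add(b).length();
  pool.release(v); // hand the vector back instead of leaving it to the GC
  return length;
}
```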

Here is a small game you can try to see if it breaks your browser’s GC.
Every time you click the button, it creates a new mesh, with a new geometry and a new material.
I clicked it 200 times, and it still didn’t break my browser’s GC, exhaust the memory, or slow the frame rate.
How high can you go?
Source Code : Break The GC Game

Thanks for your reply!

I know that it is much more computationally intensive to create the scene procedurally, but I want to make it dynamic. If we refer to a node graph, such as geometry nodes in Blender, each step has certain inputs and outputs. In order to continue through changes from that node, all inputs for each node must be stored and all outputs must be passed to subsequent nodes.

For example, let’s use the nodes (A) Cube, (B) Transform and (C) Output:

  1. (A) creates a mesh with cube geometry and stores it as output.
  2. For (B), the mesh is an input. This mesh is cloned, transformed and stored as output.
  3. In (C), the transformed mesh is then stored as input and rendered.

In (B), I don’t have the option to transform the same mesh created in (A), because with more complex node structures there are dependencies that refer back to the original mesh.
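Roughly, my structure looks something like this (a simplified sketch; the node classes and the evaluate() method are just how I think of it, not an existing API):

```js
import * as THREE from 'three';

// (A) Cube node: owns the source geometry and caches it as its output.
class CubeNode {
  evaluate() {
    this.output ??= new THREE.BoxGeometry(1, 1, 1);
    return this.output;
  }
}

// (B) Transform node: clones its input so downstream changes
// never touch the original geometry cached in (A).
class TransformNode {
  constructor(input, matrix) {
    this.input = input;
    this.matrix = matrix;
  }
  evaluate() {
    this.output = this.input.evaluate().clone().applyMatrix4(this.matrix);
    return this.output;
  }
}

// (C) Output node: wraps the final geometry in a mesh for rendering.
class OutputNode {
  constructor(input, material) {
    this.input = input;
    this.material = material;
  }
  evaluate() {
    return new THREE.Mesh(this.input.evaluate(), this.material);
  }
}
```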

How would you solve the problem with a singleton or pooling?

The graph structure you are describing sounds very similar to how I’ve designed polygonjs. My conclusion is that it remains performant to clone geometries once per operation. You can even have your operations clone their inputs at each frame, provided that they are low-resolution enough.
But in the end, you need to give users ways to design their graphs and decide when the input is cloned or not, as this will always be network dependent.
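Something along these lines (just a sketch; the `ScaleNode` class and the `cloneInput` option are invented for illustration, not polygonjs or three.js API):

```js
// Let the graph author decide per node whether the input geometry is cloned.
class ScaleNode {
  constructor(input, factor, { cloneInput = true } = {}) {
    this.input = input;
    this.factor = factor;
    this.cloneInput = cloneInput;
  }
  evaluate() {
    const source = this.input.evaluate();
    // Clone only when the original geometry must stay untouched;
    // otherwise transform it in place and save the allocation.
    const geometry = this.cloneInput ? source.clone() : source;
    return geometry.scale(this.factor, this.factor, this.factor);
  }
}
```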

That’s not exactly a representative test scenario. The material you create will always use the same program, a box geometry is cheap, and creating thousands of meshes is totally normal.

Even millions. GC only comes into play when you drop references.

The rate of creation by clicking is nothing the GC would struggle with (if you drop the objects right after). The problem is when you create objects in realtime inside an animation frame callback, which is where creating Vector3 and similar (when a lot are created per frame) can already become an issue.

If you re-create multiple complex geometries per frame and dispose of them per frame, you won’t only struggle with the GC but also with WebGL, as you might get a context loss. Some also go as far as disposing materials for no reason, which additionally costs recompiling them every time.
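For example, instead of disposing and rebuilding each frame, you create the geometry and material once and only update the attribute data in the loop (a sketch with a fixed vertex count; scene and renderer setup omitted):

```js
import * as THREE from 'three';

// Create geometry and material once; only the vertex data changes per frame.
const MAX_POINTS = 1000;
const geometry = new THREE.BufferGeometry();
const positions = new Float32Array(MAX_POINTS * 3);
geometry.setAttribute('position', new THREE.BufferAttribute(positions, 3));
const material = new THREE.PointsMaterial({ size: 0.05 });
const points = new THREE.Points(geometry, material);

function animate(time) {
  // Update the existing buffer in place instead of disposing and
  // re-creating the geometry (and re-compiling the material) each frame.
  for (let i = 0; i < MAX_POINTS; i++) {
    positions[i * 3 + 1] = Math.sin(time * 0.001 + i * 0.1);
  }
  geometry.attributes.position.needsUpdate = true;
  requestAnimationFrame(animate);
}
requestAnimationFrame(animate);
```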
