The best approach for rendering a lot of meshes that can be transformed

I’m new to computer graphics, so I want to apologize ahead of time: the question touches multiple topics. I would really appreciate at least some hints or links to resources. I’m stuck conceptually.

My task is to render a lot of meshes (100,000 - 400,000) in 2D. All of them are rather simple - rectangles, ellipses, stars, polygons, curves. The scene would be a space of 20,000px x 20,000px, with a “camera” observing only a part of it, e.g. 1000x1000 (I don’t know how to build that yet, but it’s not the point of my question). These shapes can be modified - scaled, dragged, filled with a certain color, given an outline, a changed border radius, etc. The closest analogue is Figma.

I was using WebGL and Three.js. I tried a test: render 100,000 rectangles onto the screen and animate the movement of one of them. (As I understand it, if we move even a single mesh, we trigger a full redraw cycle of all the meshes.)

  1. WebGL - I’ve created 100,000 instances of a custom class Rectangle that holds 6 vertices and a translation vector - translation = [0, 0].
let positionLocation = gl.getAttribLocation(program, "a_position");
// uniform name is assumed from the (not shown) vertex shader
let translationLocation = gl.getUniformLocation(program, "u_translation");
let positionBuffer = gl.createBuffer();

let rects = [...] // 100000 new Rectangle()
for (let rect of rects) {
	// upload this rectangle's vertices, then issue a separate draw call for it
	gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer);
	gl.bufferData(
		gl.ARRAY_BUFFER,
		new Float32Array(rect.vertices),
		gl.STATIC_DRAW
	);

	drawRect(rect.translation);
}

function drawRect(translation_vec) {
	// gl.clear(gl.COLOR_BUFFER_BIT);

	gl.useProgram(program);

	gl.enableVertexAttribArray(positionLocation);
	gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer);

	gl.vertexAttribPointer(positionLocation, 2, gl.FLOAT, false, 0, 0);

	gl.uniform2fv(translationLocation, translation_vec);

	// 6 vertices per rect
	gl.drawArrays(gl.TRIANGLES, 0, 6);
}

This approach doesn’t let me translate a rectangle smoothly (e.g. to drag and drop it to a different position), because it takes 1-2 seconds to redraw everything. Is there a better, more performant way of doing this?

  2. Three.js - I’ve tried to accomplish the same goal. My first approach was to use the PlaneGeometry class to create a Mesh for each rectangle.
for (let plane of meshes) {
  // plane is a Three.js Mesh instance with a material and geometry
  scene.add(plane)
}

At 10,000 meshes my browser lagged and then closed.
I’ve come across a couple of suggestions while trying to figure out how to render a large number of 2D meshes. One is to use InstancedMesh, which renders all these shapes at once, in one draw call; however, as I understood it, that takes away the option of having meshes of different sizes - they can be positioned anywhere in the world, but they all have to be the same.
The other is to use BufferGeometry, which puts me back at problem 1: I don’t know how to make rendering and transforming more performant. I had hoped that Three.js could provide some optimized abstraction.

Sorry this is so long. I would appreciate any advice. Maybe the approach I’m describing is conceptually wrong, and you can suggest what to do in this situation, given that the goal still needs to be accomplished.


By using InstancedMesh you can position, rotate and scale each instance. See the three.js docs.
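Something like this, for example - an untested sketch; it assumes an existing scene, and the counts and sizes are made up for illustration:

import * as THREE from 'three';

// untested sketch - `scene` is assumed to exist; counts and sizes are made up
const COUNT = 100000;
const mesh = new THREE.InstancedMesh(
  new THREE.PlaneGeometry(1, 1),
  new THREE.MeshBasicMaterial({ color: 0xff00f0 }),
  COUNT
);
scene.add(mesh);

// a throwaway Object3D used only to build each instance matrix
const dummy = new THREE.Object3D();
for (let i = 0; i < COUNT; i++) {
  dummy.position.set(Math.random() * 20000, Math.random() * 20000, 0);
  dummy.scale.set(10 + Math.random() * 100, 10 + Math.random() * 100, 1); // per-instance size
  dummy.updateMatrix();
  mesh.setMatrixAt(i, dummy.matrix);
}
mesh.instanceMatrix.needsUpdate = true;

// later, to move or resize a single instance, update only its matrix
function moveInstance(i, x, y) {
  mesh.getMatrixAt(i, dummy.matrix);
  dummy.matrix.decompose(dummy.position, dummy.quaternion, dummy.scale);
  dummy.position.set(x, y, 0);
  dummy.updateMatrix();
  mesh.setMatrixAt(i, dummy.matrix);
  mesh.instanceMatrix.needsUpdate = true; // flags the instance matrix buffer for re-upload
}

All 100,000 instances are still drawn in one call; only the per-instance matrix data changes.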


UPD: I actually figured out that in WebGL this is called instanced drawing. I guess this is what Three.js uses under the hood. I haven’t looked closely yet, but it looks like what I need.
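From skimming the docs, raw WebGL2 instancing seems to look roughly like this (an untested sketch; the attribute names a_position / a_translation and the shader they belong to are made up, and WebGL1 would need the ANGLE_instanced_arrays extension instead):

// untested sketch - assumes a WebGL2 context `gl` and a compiled `program`
// whose vertex shader has attributes a_position and a_translation (names made up)
const NUM = 100000;
const positionLoc = gl.getAttribLocation(program, "a_position");
const translationLoc = gl.getAttribLocation(program, "a_translation");

// shared unit-quad geometry: two triangles, 6 vertices
const quad = new Float32Array([0, 0,  1, 0,  1, 1,   0, 0,  1, 1,  0, 1]);
const quadBuffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, quadBuffer);
gl.bufferData(gl.ARRAY_BUFFER, quad, gl.STATIC_DRAW);
gl.enableVertexAttribArray(positionLoc);
gl.vertexAttribPointer(positionLoc, 2, gl.FLOAT, false, 0, 0);

// one vec2 translation per instance, in a second buffer
const translations = new Float32Array(NUM * 2); // filled elsewhere
const instanceBuffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, instanceBuffer);
gl.bufferData(gl.ARRAY_BUFFER, translations, gl.DYNAMIC_DRAW);
gl.enableVertexAttribArray(translationLoc);
gl.vertexAttribPointer(translationLoc, 2, gl.FLOAT, false, 0, 0);
gl.vertexAttribDivisor(translationLoc, 1); // advance this attribute once per instance

gl.useProgram(program);
// all 100,000 rectangles in a single draw call
gl.drawArraysInstanced(gl.TRIANGLES, 0, 6, NUM);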

I appreciate it a lot, it helped. But I still haven’t figured some things out - maybe you can clarify a bit more. I’m trying to learn, but some things are hard to grasp, and I need to achieve a certain result by Monday.
I found an example on CodePen that was using InstancedMesh.

let rects = [];
let rects_num = 500;
for (let i = 0; i < rects_num; i++) {
        // a plain Object3D used only as a transform holder for one instance
        let body = new Object3D();
        body.scale.set(0.5, 0.5, 1);
        scene.add(body);
        rects.push(body);
}

let gBody = new PlaneGeometry(1, 1);
let body = new InstancedMesh(gBody, new MeshBasicMaterial({
        color: 0xff00f0 }), rects_num);

scene.add(body);

let sign1 = Math.random() > 0.5 ? -1 : 1;
let sign2 = Math.random() > 0.5 ? -1 : 1;

rects.forEach((tr, trIdx) => {
        tr.position.set(Math.random() * 90 * sign1, Math.random() * 60 * sign2, 0);
        sign1 = Math.random() > 0.5 ? -1 : 1;
        sign2 = Math.random() > 0.5 ? -1 : 1;

        tr.updateMatrixWorld(true);
        body.setMatrixAt(trIdx, tr.matrixWorld);
});

body.instanceMatrix.needsUpdate = true;

I have two questions.

  1. In the first for loop, where let body = new Object3D() is assigned: the author was using cylinders instead of PlaneGeometry. Would Object3D() work fine in my case, and is it convenient, given that I don’t need anything except 2D shapes? Or is there another class or container that can achieve the same goal?

  2. With this approach, can I translate and scale a single mesh without redrawing the whole screen? If that assumption is correct, what is the approach to achieve the same thing using bare WebGL? Because what I’ve gathered so far from tutorials and examples is that if I want to move a single mesh, I have to redraw all of them. From my newbie point of view, there should be a way to use some kind of “indexing” into the buffer (since we know the index and the length) and ask to update only that place in memory.
    (It takes 2 seconds for my browser to render 100,000 rectangles, but when I move one of them with InstancedMesh, I get 20-30 fps. That wouldn’t be possible if the whole screen were rendered from scratch - or I don’t understand something.)
    So I thought, when doing something like this:

// 12 floats = 6 vec2 vertices = 2 triangles
let arr = new Float32Array([
        -0.4,  0.3,   -0.4,  0.0,    0.3, -0.2,   // first triangle
        -0.5, -0.25,
         0.7,  0.25,
         0.9, -0.7                                 // second triangle
]);
let position_buffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, position_buffer);
gl.bufferData(gl.ARRAY_BUFFER, arr, gl.STATIC_DRAW);

gl.vertexAttribPointer(position_location, 2, gl.FLOAT, false, 0, 0);
gl.enableVertexAttribArray(position_location);

gl.drawArrays(gl.TRIANGLES, 0, 6);

is it possible to somehow reach the “second” triangle in the GPU buffer and change only it, without redrawing the first one?
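Something like this is what I imagine - a rough sketch; I don’t know if gl.bufferSubData is the right tool here, and as far as I can tell the draw call afterwards still rasterizes both triangles:

// overwrite only the second triangle's 6 floats (vertices 3..5) without
// re-uploading the whole buffer; the byte offset is 3 vertices * 2 floats * 4 bytes = 24
// (the moved vertex value below is made up)
const secondTriangle = new Float32Array([
        -0.5, -0.25,
         0.7,  0.35,   // one vertex moved up a bit
         0.9, -0.7
]);
gl.bindBuffer(gl.ARRAY_BUFFER, position_buffer);
gl.bufferSubData(gl.ARRAY_BUFFER, 3 * 2 * 4, secondTriangle);

// the draw call itself still draws both triangles every frame
gl.drawArrays(gl.TRIANGLES, 0, 6);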

I appreciate it a lot. By the way, I was reading some of your posts - they are great, and what you are doing looks beautiful (:

I’m no expert, and I think with the traditional approach it’s not fundamentally possible.

However, with a different approach it may be theoretically possible. I’ve never tried it myself, but here’s how it might work.
I’d be really curious to see if it works.

Make 2 scenes: a primary scene (scene1) and a secondary scene (scene2).

  • Primary scene, used to render all elements
  • Secondary scene, used to render a specific element

Make 4 render targets:

  • renderTarget1 - to store render data for scene1
  • renderTarget2 - to store a copy of renderTarget1
  • renderTarget3 - to store render data for scene2 (must have an alpha channel)
  • renderTarget4 - to store the blend of renderTarget2 and renderTarget3

Case 1: when no specific object is being targeted, i.e. all objects need to be rendered.

  • render scene1 to renderTarget1 and display it on screen

Case 2: when a specific object needs to be transformed / updated.

1. for the first frame (heavy computation):

  • move the specific object to scene2
  • render scene1 to renderTarget1 (this renders all objects except the one being transformed)
  • copy renderTarget1 data to renderTarget2
  • render scene2 to renderTarget3 (this renders the specific object and nothing else)
  • blend renderTarget3 with renderTarget2 into renderTarget4
  • display renderTarget4 on screen

2. for the second frame (lightweight computation):

  • copy renderTarget1 data to renderTarget2
  • render scene2 to renderTarget3
  • blend renderTarget3 with renderTarget2 into renderTarget4
  • display renderTarget4 on screen

3. for the third / fourth / … nth frame:

  • repeat what was done for the second frame.

4. on the last frame (when the transformation / update has ended):

  • move the specific object back to scene1 and apply case 1.
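If it helps, here is a rough, untested sketch of this idea in three.js. I’ve folded renderTarget2 / renderTarget4 into a single fullscreen blend pass, and the camera and sizes are just placeholders:

import * as THREE from 'three';

const renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

// placeholder camera - real values depend on the 20,000 x 20,000 scene setup
const camera = new THREE.OrthographicCamera(-500, 500, 500, -500, 0.1, 10);
camera.position.z = 1;

const scene1 = new THREE.Scene(); // all static objects
const scene2 = new THREE.Scene(); // only the object being transformed

const renderTarget1 = new THREE.WebGLRenderTarget(window.innerWidth, window.innerHeight);
const renderTarget3 = new THREE.WebGLRenderTarget(window.innerWidth, window.innerHeight); // RGBA, so it has alpha

// fullscreen quad that blends the cached background with the live layer
const blendScene = new THREE.Scene();
const blendCamera = new THREE.OrthographicCamera(-1, 1, 1, -1, 0, 1); // only needed because render() wants a camera
const blendMaterial = new THREE.ShaderMaterial({
  uniforms: {
    tBackground: { value: renderTarget1.texture },
    tForeground: { value: renderTarget3.texture },
  },
  vertexShader: `
    varying vec2 vUv;
    void main() {
      vUv = uv;
      gl_Position = vec4(position.xy, 0.0, 1.0); // fullscreen quad, ignore the camera
    }
  `,
  fragmentShader: `
    uniform sampler2D tBackground;
    uniform sampler2D tForeground;
    varying vec2 vUv;
    void main() {
      vec4 bg = texture2D(tBackground, vUv);
      vec4 fg = texture2D(tForeground, vUv);
      gl_FragColor = mix(bg, fg, fg.a); // foreground over background
    }
  `,
});
blendScene.add(new THREE.Mesh(new THREE.PlaneGeometry(2, 2), blendMaterial));

// heavy pass - run only when the static content changes
function renderBackground() {
  renderer.setRenderTarget(renderTarget1);
  renderer.setClearColor(0x202020, 1); // opaque clear for the cached layer
  renderer.render(scene1, camera);
}

// light pass - run every frame while one object is being transformed
function renderFrame() {
  renderer.setRenderTarget(renderTarget3);
  renderer.setClearColor(0x000000, 0); // transparent, so only the moving object shows
  renderer.render(scene2, camera);

  renderer.setRenderTarget(null);
  renderer.render(blendScene, blendCamera);
}

renderBackground() would only be called when the static objects change; during a drag only renderFrame() runs, so the 100,000 static shapes are never re-rasterized.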

Thank you! It sounds interesting - I thought there was a one-to-one correspondence between the canvas and the scene. I will definitely try it and give feedback. Thanks once again.


Just to add: I realized the transformed target object would always remain on top of the other objects, as it’s rendered last.