Jerky camera movement with many objects

In Circular control used for walkable areas I have described a procedure for moving a person through an environment.

This works very well in the very spartan test environment; one can try it out there. The article also shows a picture of a somewhat more elaborate showroom that is under development. Another room is still incomplete.

The room contains floors, ceilings, and walls with openings and fillings (self-defined non-indexed BufferGeometries) with textures, including video. There are also loaded OBJ and glTF objects, a few CylinderBufferGeometry and BoxBufferGeometry instances, and mirrors, which work again in r116 with Scene.background; there was a bug before:
Fix rendering bug when using Scene.background. #18973 (@Mugen87)
Remove recursion parameter. #19078 (@Mugen87)

All meshes are added to the room:
room.add( …Meshes[ i ] );

For each type… there is an array that is processed during generation.

The more I equip the room, the more jerky the movement realized with the help of the control becomes.

At a certain point, this is no longer practicable.
I do not know whether there is a practicable solution for this.

Is there any benefit in creating all non-indexed BufferGeometries as a single one when defining them?

So only one
construction.setAttribute( 'position', new THREE.BufferAttribute( positions, 3 ) );
instead of the many individual ones? And only one positions = new Float32Array( positionCount ); ?

This would require considerable conversion effort and would make the program code less clear, but it may be feasible.
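To make the question concrete, here is a minimal sketch of what such a single-buffer construction could look like (all names here are illustrative assumptions, not taken from my code): the individual position arrays are copied into one Float32Array, and a { start, count, materialIndex } record is kept per part so each part can still get its own material via groups.

```javascript
// Sketch: concatenate per-part position arrays into ONE Float32Array
// and record one group per source part. Illustrative names, not the
// original code.
function mergePositions( parts ) { // parts: plain number arrays of x,y,z triplets

	let total = 0;
	for ( const p of parts ) total += p.length;

	const positions = new Float32Array( total );
	const groups = [];

	let offset = 0;
	for ( let i = 0; i < parts.length; i ++ ) {

		positions.set( parts[ i ], offset );
		// start and count are given in vertices, i.e. array length / 3
		groups.push( { start: offset / 3, count: parts[ i ].length / 3, materialIndex: i } );
		offset += parts[ i ].length;

	}

	return { positions, groups };

}

// one quad (two triangles) and one extra triangle
const { positions, groups } = mergePositions( [
	[ 0,0,0, 1,0,0, 1,1,0,  0,0,0, 1,1,0, 0,1,0 ],
	[ 2,0,0, 3,0,0, 2,1,0 ]
] );
```

The merged result would then feed a single construction.setAttribute( 'position', new THREE.BufferAttribute( positions, 3 ) ), with one geometry.addGroup( start, count, materialIndex ) call per record.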

Or is the AreaControl.js approach possibly unsuitable for this application? :thinking:
I do not want a camera movement on a predefined path.

I’m grateful for any hint.

I ran some tests based on Circular control used for walkable areas. For this I added a lot of meshes in an example, many more than my room has. I also added .toNonIndexed() to the BufferGeometries as a test, because my room elements are mostly of this kind.

This test arrangement runs quite smoothly. Only when the mirror is in the camera’s field of view does it get a bit jerky. Try it there.

In my room I took out the mirror - no improvement.
The strange thing is that the jerking differs from place to place, sometimes strongly, even where only two or three walls plus floor and ceiling are close to the camera.

One guess is that the many .addGroup( …, 3, … ) calls “fragment” the geometry. However, this does not fit the behavior when only two or three walls with two triangles and one material each are in the camera’s field of view.

Combining the material groups for my room is very complex and only worthwhile if it is really a solution.

What is the background to the fact that the material groups are created using plain JavaScript objects { start: Integer, count: Integer, materialIndex: Integer } and not via BufferAttributes?

Still baffled. :thinking:

You perform draw calls in the range of 100–500 depending on the camera position and orientation. On top of that, you render almost three million triangles in some extreme cases. It’s not surprising that the FPS drops on certain hardware under these conditions.

I’m not sure I understand your reference to BufferGeometry.groups. In general, using multiple materials results in more render items and thus more draw calls. Avoiding such setups can be an important performance improvement if you are having performance problems.
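A simplified way to picture this (an illustrative counting model of my own, not three.js source code): each material group of a mesh becomes one render item, i.e. one draw call, and a mesh without groups becomes exactly one. Under that model, one mesh with N groups costs as much as N single-material meshes; the savings come from reducing the total number of materials/groups that reach the render list, not from the grouping itself.

```javascript
// Simplified model (an assumption for illustration, not actual three.js
// internals): draw calls = sum over meshes of max( groups.length, 1 ).
function countDrawCalls( meshes ) {

	let calls = 0;
	for ( const mesh of meshes ) {

		calls += ( mesh.groups && mesh.groups.length > 0 ) ? mesh.groups.length : 1;

	}
	return calls;

}

// one mesh with three material groups ...
const grouped = [ { groups: [ { materialIndex: 0 }, { materialIndex: 1 }, { materialIndex: 2 } ] } ];
// ... versus three single-material meshes without groups
const split = [ {}, {}, {} ];
```

In this model both setups yield three draw calls, which matches the observation that grouping alone does not reduce the call count.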

Thanks for pointing that out.


In my room I have at most about 350 calls and only 200 to 1200 triangles.

This is considerably less than in the test example AreaControl_TEST_01 with the many cylinders and boxes.

I have to keep checking.

After some time away from it, I reprogrammed the showroom.

The new approach is to do without groups completely and instead create a separate geometry and mesh for each material used. This only makes sense if the parts are immobile after creation, which is the case for the floors, ceilings, walls, frames, etc. of a showroom.

The arrays for the vertices and UVs are filled with .push():

let vt = []; // vertices
let uv = []; // uv's

for ( let i = 0; i < m.length; i ++ ) {

	m[ i ].unshift( i ); // insert index at the front of the material definition, used as vt, uv index
	vt.push( [] ); // vertices array for each material
	uv.push( [] ); // uv's array for each material

}

When the individual components are created, they are broken down by material. For this purpose the material index is determined:


const matIdx = getMatIndex( cp, i, mi, mNo ); // determine the material index once

vt[ matIdx ].push( x1,y1,z1, x2,y2,z2, x3,y3,z3,  x1,y1,z1, x3,y3,z3, x4,y4,z4 );

uv[ matIdx ].push( u1,v1, u2,v2, u3,v3,  u1,v1, u3,v3, u4,v4 );
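The push calls above split a quad with corners 1–4 into the triangles (1,2,3) and (1,3,4). As a minimal self-contained sketch of this step (helper name and argument layout are my assumptions, not the original code):

```javascript
// Illustrative helper: push one quad as two triangles into the vertex
// and uv buckets of material index matIdx.
function pushQuad( vt, uv, matIdx, corners, uvs ) {

	const [ c1, c2, c3, c4 ] = corners; // four [ x, y, z ] corners
	const [ t1, t2, t3, t4 ] = uvs;     // four [ u, v ] pairs

	vt[ matIdx ].push( ...c1, ...c2, ...c3,  ...c1, ...c3, ...c4 );
	uv[ matIdx ].push( ...t1, ...t2, ...t3,  ...t1, ...t3, ...t4 );

}

const vt = [ [] ], uv = [ [] ]; // buckets for one material

pushQuad( vt, uv, 0,
	[ [ 0, 0, 0 ], [ 1, 0, 0 ], [ 1, 1, 0 ], [ 0, 1, 0 ] ],
	[ [ 0, 0 ], [ 1, 0 ], [ 1, 1 ], [ 0, 1 ] ] );
```

After one call the bucket holds 6 vertices (18 position values, 12 uv values), i.e. two non-indexed triangles.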

The fully defined geometries and meshes are then created in a loop:

let geometries = [];
let meshes = [];

for ( let i = 0; i < m.length; i ++ ) {

	if ( vt[ i ].length > 0 ) { // only for defined materials, not for material Empty

		geometries[ i ] = new THREE.BufferGeometry();
		geometries[ i ].setAttribute( 'position', new THREE.BufferAttribute( new Float32Array( vt[ i ] ), 3 ) );
		geometries[ i ].setAttribute( 'uv', new THREE.BufferAttribute( new Float32Array( uv[ i ] ), 2 ) );
		geometries[ i ].computeVertexNormals();
		meshes[ i ] = new THREE.Mesh( geometries[ i ], getMaterial( m[ i ] ) );
		scene.add( meshes[ i ] );

	} else {

		geometries[ i ] = null;
		meshes[ i ] = null;

	}

}

The effort was obviously worth it; the showroom now runs much better with many more elements and more materials. The renderer reaches its maximum values when glTF/OBJ models are displayed in the mirror:

calls: 45, triangles about 42000, points: 0, lines: 84

Unfortunately, I can’t explain it in detail because of my very limited overview of the internals of three.js/WebGL, but it obviously has something to do with the grouping, as I suspected. I have even extended the number of materials a bit, currently 21 materials, so I now have 21 meshes with one material each.

At the showroom itself there are still some finer details to be done.

