Normals wrong when Indexed Geometry shares Vertices

I hope I'm wrong here and that I've simply done something wrong. According to the BufferGeometry.index docs, this feature:

"Allows for vertices to be re-used across multiple triangles"

However, in practice I'm not having any luck. I'm using a simple tetrahedron for this example: 4 faces, 4 vertices. As unindexed geometry, this results in arrays of 36 entries for both positions and normals.

positions[36] = 4 (faces) * 3 (vertices each) * 3 (x,y,z)

But optimally indexed geometry needs only 4 unique vertices, so the positions array and the index array each need only 12 entries.

positions[12] = 4 (vertices) * 3 (x,y,z)
index[12] = 4 (faces) * 3 (positions)

This optimal indexing re-uses each vertex 3 times. Running computeVertexNormals() creates an array of 36, which is what I expected, since the normal for each vertex differs from face to face. But it doesn't render properly.
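For reference, here is a minimal sketch of the indexed tetrahedron described above (the coordinates, winding, and names are illustrative, not from my actual test):

const positions = Float32Array.from([
     1,  1,  1,   // v0
    -1, -1,  1,   // v1
    -1,  1, -1,   // v2
     1, -1, -1    // v3
]);
const tetIndex = Uint16Array.from([
    0, 2, 1,   // 4 faces, 3 index entries each,
    0, 1, 3,   // so every vertex is referenced
    0, 3, 2,   // exactly 3 times
    1, 2, 3
]);
const tet = new THREE.BufferGeometry();
tet.setAttribute('position', new THREE.BufferAttribute(positions, 3));
tet.setIndex(new THREE.BufferAttribute(tetIndex, 1));
tet.computeVertexNormals(); // the step in question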

Unindexed:

[Screenshots of the unindexed result]

Optimal indexing:

[Screenshots of the optimally indexed result, showing the shading defects]

If I add indexing but do not attempt to re-use vertices, I get the same representation as the unindexed version. I verified this by perturbing the index array to confirm it was actually being used.
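That is, a trivial index along these lines renders identically to the unindexed version (a sketch; vertall is the unrolled position list from the code further down):

// one index entry per vertex, no sharing - behaves exactly like no index
const trivial = Uint32Array.from({ length: vertall.length / 3 }, (_, i) => i);
unindexed.setIndex(new THREE.BufferAttribute(trivial, 1));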

So my question is this: can you really share vertices? And if so, are there constraints or another step I’m missing? Or is this a bug?

Thanks!!

Here is a movie and the code example that produced it. You can clearly see the normal defects on the bottom box. The top box uses the same geometry but not indexed.

let camera, scene, renderer, controls;
let material, mesh;
let vertices, indices;
let unindexed, indexed;

function init() {
    scene = new THREE.Scene();
    camera = new THREE.PerspectiveCamera(50, window.innerWidth / window.innerHeight, 1, 10000);
    camera.position.z = 15;

    scene.add(camera);
    scene.add(new THREE.AmbientLight(0xffffff, 0.2));
    scene.add(light(0.2,  30,   0,   0));
    scene.add(light(0.2, -30,   0,   0));
    scene.add(light(0.2,   0,  30,   0));
    scene.add(light(0.2,   0, -30,   0));
    scene.add(light(0.2,   0,   0,  30));
    scene.add(light(0.2,   0,   0, -30));

    vertices = Float32Array.from([
        -2, -2,  -2,
        -2,  2,  -2,
         2,  2,  -2,
         2, -2,  -2,
        -2, -2,   2,
        -2,  2,   2,
         2,  2,   2,
         2, -2,   2,
    ]);

    indices = Uint32Array.from([
        0, 1, 2, // bottom
        0, 2, 3,
        6, 5, 4, // top
        7, 6, 4,
        0, 4, 1, // side 1
        1, 4, 5,
        2, 6, 3, // side 2
        3, 6, 7
    ]);

    // generate full vertices list (non-indexed)
    let vertall = [];
    for (let i of indices) {
        vertall = vertall.concat([...vertices].slice(i*3,i*3+3));
    }
    vertall = Float32Array.from(vertall);

    material = new THREE.MeshPhongMaterial({
        side: THREE.DoubleSide,
        shininess: 100,
        specular: 0x404040,
        transparent: false,
        color: 0xffff00,
        opacity: 0.5
    });

    // non-indexed geometry
    unindexed = new THREE.BufferGeometry()
        .setAttribute('position', new THREE.BufferAttribute(vertall, 3));
    unindexed.computeVertexNormals();

    // mesh with non-indexed geometry
    let meshU = new THREE.Mesh(unindexed, material);
    meshU.add(new THREE.Mesh(
        unindexed.clone(), new THREE.MeshBasicMaterial({
            wireframe:  true,
            color: 0xffff00
        })
    ));
    scene.add(meshU);
    meshU.position.y = 2.5;

    // indexed geometry
    indexed = new THREE.BufferGeometry()
        .setAttribute('position', new THREE.BufferAttribute(vertices, 3))
        .setIndex(new THREE.BufferAttribute(indices, 1));
    indexed.computeVertexNormals();

    // mesh with indexed geometry
    let meshI = new THREE.Mesh(indexed, material);
    meshI.add(new THREE.Mesh(
        indexed.clone(), new THREE.MeshBasicMaterial({
            wireframe:  true,
            color: 0xffff00
        })
    ));
    scene.add(meshI);
    meshI.position.y = -2.5;

    renderer = new THREE.WebGLRenderer({
        antialias: true,
        preserveDrawingBuffer: true,
        logarithmicDepthBuffer: true
    });

    controls = new THREE.OrbitControls(camera, renderer.domElement);
    controls.update();

    renderer.setSize(window.innerWidth, window.innerHeight);
    document.body.appendChild(renderer.domElement);

    animate();
}

function light(i, x, y, z) {
    let l = new THREE.PointLight(0xffffff, i);
    l.position.set(x, y, z);
    return l;
}

// standard render loop and kickoff (not shown in the original excerpt)
function animate() {
    requestAnimationFrame(animate);
    renderer.render(scene, camera);
}

init();

And here’s the codepen. Sorry I didn’t post that first. https://codepen.io/grid-space/pen/KKXROjW

The secret to the observed behaviour lies buried within the

indexed.computeVertexNormals();

function. I added a VertexNormalsHelper to meshI, which shows what's happening. After all, we are in the visualization business, aren't we?
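For anyone following along, attaching the helper looks roughly like this (a sketch assuming the module build; with the legacy script build it is THREE.VertexNormalsHelper):

import { VertexNormalsHelper } from 'three/examples/jsm/helpers/VertexNormalsHelper.js';

// draw a short red line along each vertex normal of the indexed mesh
scene.add(new VertexNormalsHelper(meshI, 1, 0xff0000));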


[Screenshot: the indexed cube with its vertex normals visualized by the helper.]

Seen from the bottom. I didn't care to make the vertex numbering match your data - apologies for this shortcut.

Note how vertex 1 is shared by 4 triangle faces, and vertex 2 by 2 triangle faces. These are symmetrical situations. Hence we see the vertex normals as exact extensions of the (screen space) diagonal lines 5 - 1 and 6 - 2, respectively.

Conversely, vertex 4 is shared by 3 triangles, just like vertex 3. These are asymmetrical situations, having one triangle on one cube face and two triangles on the adjacent cube face. Hence we see a clockwise slanted deviation of the normals at vertices 4 and 3 from their respective (screen space) diagonal lines 4 - 8 and 3 - 7.

Why are the normals slanted in asymmetrical situations?
To understand that, you need to look at how BufferGeometry.computeVertexNormals() works. I did that for you (a simplified code sketch follows the lists):

The index array is processed in chunks of three (one triangle at a time).
For each triangle:

  • get the position of each vertex a, b, c
  • compute two in-plane vectors a - b, a - c
  • compute the cross product of the two in-plane vectors; this gives you the face normal of that triangle

For each vertex of that triangle:

  • retrieve whatever value the vertex normal holds at that time
  • add the current face normal vector to it
  • write back the result as the new vertex normal; over the whole loop this sums the face normals of all faces sharing that vertex, and a final normalization turns the sum into an average direction

Note that the length of the cross product is proportional to the triangle's area, so larger faces pull the shared normal more strongly.
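In code, the indexed branch boils down to roughly this (a simplified sketch of the idea, not the verbatim three.js source; it assumes a zero-filled normal attribute already exists):

const index = geometry.getIndex();
const position = geometry.getAttribute('position');
const normal = geometry.getAttribute('normal'); // pre-filled with zeros

const pA = new THREE.Vector3(), pB = new THREE.Vector3(), pC = new THREE.Vector3();
const cb = new THREE.Vector3(), ab = new THREE.Vector3();

for (let i = 0; i < index.count; i += 3) {
    const vA = index.getX(i), vB = index.getX(i + 1), vC = index.getX(i + 2);

    pA.fromBufferAttribute(position, vA);
    pB.fromBufferAttribute(position, vB);
    pC.fromBufferAttribute(position, vC);

    // face normal = cross product of two in-plane edge vectors
    cb.subVectors(pC, pB);
    ab.subVectors(pA, pB);
    cb.cross(ab);

    // accumulate the face normal into each of the three (shared) vertex normals
    for (const v of [vA, vB, vC]) {
        normal.setXYZ(v, normal.getX(v) + cb.x, normal.getY(v) + cb.y, normal.getZ(v) + cb.z);
    }
}

geometry.normalizeNormals(); // finally, scale every vertex normal to unit length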

Going back to the asymmetrical situation at vertex 4:
you have two face normals (of triangles 4-7-8 and 4-3-7) “pulling” down, while only one face normal (of triangle 4-8-1) pulls left. Hence the averaged vertex normal “leans” towards the bottom face.
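The arithmetic of that “pull”, assuming unit face normals for simplicity (the real sum is additionally area-weighted):

// Two bottom-face triangles each contribute (0, -1, 0); one side-face
// triangle contributes (-1, 0, 0). The sum (-1, -2, 0) normalizes to
// about (-0.45, -0.89, 0) instead of the symmetric (-0.71, -0.71, 0).
const n = new THREE.Vector3(0, -1, 0)
    .add(new THREE.Vector3(0, -1, 0))
    .add(new THREE.Vector3(-1, 0, 0))
    .normalize();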

Bottom line:
While the BufferGeometry.computeVertexNormals() function certainly has its merits, it also has its built-in limitations.


In my self-made geometries I sometimes calculate all or only certain normals myself.

Example from the Collection of examples from discourse.threejs.org

Curved2Geometry

from line 735



	g.computeVertexNormals( );
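	// smoothEdge( v1, v2 ), defined earlier in the example (not shown here),
	// recalculates the normals of the two seam vertices so the seam shades smoothly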
	
	if ( connected ) { // calculate new normals at mantle seam
		
		for( let i = 0; i <= g.vertical; i ++ ) {
			
			smoothEdge( ( g.radial + 1 ) * i, ( g.radial + 1 ) * i + g.radial );
			
		}
		
	}
	
	if ( wTop && !flatTop ) { // calculate new normals at top seam
		
		for( let j = 0; j <= g.radial; j ++ ) {
			
			smoothEdge( ( g.radial + 1 ) * ( g.vertical + 1 ) + 1 + j, j );
			
		}
		
	}
	
	if ( wBtm && !flatBtm ) { // calculate new normals at bottom seam
		
		for( let j = 0; j <= g.radial ; j ++ ) {
			
			const offs = wTop ? + g.radial + 2 : 0;
			
			smoothEdge( ( g.radial + 1 ) * g.vertical + j, ( g.radial + 1 ) * ( g.vertical + 1 ) + offs + 1 + j );
			
		}
		
	}
	
	g.attributes.normal.needsUpdate = true;

Thank you for this reply. I had looked at the BufferGeometry.computeVertexNormals() code and not understood why indexed faces were treated differently. This seems like an obvious deficiency in the implementation, because it doesn't actually work for indexed geometry. IOW, what are the indexed cases where it does work? Why not “unroll” the positions to generate the normals, so the same code path can handle both indexed and unindexed geometry?

With indexed faces you have only 1 (one) normal per shared vertex.

With non-indexed faces (triangle soup) there are no shared vertices. Some vertices just happen to occupy the same position in space, but they are in fact separate entities, each of which comes with its own individual normal. There is no such thing as one (1) vertex having two (or more) normals.

Maybe you re-visit the BufferGeometry.computeVertexNormals() code and look at it in that light.
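You can see this directly in the attribute sizes of the cube from the code above (8 unique vertices, 8 triangles):

indexed.computeVertexNormals();
console.log(indexed.attributes.normal.count); // 8  - one normal per unique vertex

const soup = indexed.toNonIndexed();          // unroll into a triangle soup
soup.computeVertexNormals();
console.log(soup.attributes.normal.count);    // 24 - one normal per face corner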


Take, for instance, LatheGeometry(), one of my recent favorites. :wink:

I also expect working indexed geometries in

  • ConeGeometry,
  • CylinderGeometry,
  • ExtrudeGeometry,
  • LatheGeometry,
  • PlaneGeometry,
  • RingGeometry,
  • SphereGeometry,
  • TorusGeometry,
  • TubeGeometry.
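For what it's worth, a quick check that at least some of these really are indexed (geometry.index is non-null for indexed geometries):

for (const G of [THREE.LatheGeometry, THREE.PlaneGeometry, THREE.SphereGeometry]) {
    console.log(G.name, new G().index !== null); // all log true
}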
1 Like

Thanks again. Then this is my misunderstanding of the normals attribute. I thought that the normals tracked face vertices (one for each use of a vertex on a face), not unique vertices. That impression may have come from a prior use of BufferGeometryUtils.mergeVertices(), which seemed to generate this from triangle soup. That was days ago, so perhaps I misremembered.

So my understanding of why this works for those geometries is that their surfaces are all curved, without sharp edges. And PlaneGeometry, obviously, because building an object out of a bunch of planes is the same as unindexed geometry. I see now that in some of the test cases I did not share, it is “smoothing” curved surfaces and only looks disjoint at sharp edges.

I appreciate your patience answering my questions.

@s0a
Is there anything stopping you from using geometry.toNonIndexed(), calculating vertex normals, and then re-indexing the geometry?

Is there anything stopping you from using geometry.toNonIndexed(), calculating vertex normals, and then re-indexing the geometry?

My new understanding from this thread is that you only get one normal per vertex when indexed, so a normals array generated for non-indexed geometry would not fit an indexed one. And if you want to mix and match, you need to duplicate the vertices around sharp edges so you can apply different normals to them.
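For the record, that round trip can be done with BufferGeometryUtils.mergeVertices(), which only merges vertices whose attributes all match, so corners with differing normals keep their duplicates. A sketch, assuming the examples module path:

import * as BufferGeometryUtils from 'three/examples/jsm/utils/BufferGeometryUtils.js';

let geo = indexed.toNonIndexed();             // one vertex per face corner
geo.computeVertexNormals();                   // corner normals may now differ per face
geo = BufferGeometryUtils.mergeVertices(geo); // re-index; only identical corners collapse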

I would rephrase that a bit:

It’s not the lack of “sharp” edges that allows this to work.

These are all procedurally generated geometries, and the knowledge of the proper procedure includes knowledge of the geometry’s topology: which faces are adjacent to each other and which faces share the same vertex. In that way, these are “special” geometries.

For a “generic” geometry, you probably get the best “programming effort vs. result quality” ratio by using the computeVertexNormals() function “as is”. You can assist it in delivering its best results by avoiding abrupt size changes between adjacent faces (remember: larger faces pull harder on a shared normal) and by avoiding “open ended” structures where some neighbouring faces are missing.
