OBJLoader wrongly loads concave faces (which seem to be allowed in OBJs)

I noticed today that when I load an OBJ containing concave faces, the Three.js OBJLoader generates geometry where those faces are triangulated incorrectly.

It always creates a triangle fan, which does not work for concave polygons.
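To see concretely why a fan fails on concave input, here is a small self-contained sketch (plain JS, no three.js; the L-shaped polygon and function names are made up for illustration). Fanning from the first vertex produces one triangle that is wound backwards and lies outside the polygon:

```javascript
// An L-shaped polygon, listed counter-clockwise. The reflex corner at
// (1, 1) blocks vertex 0's view of part of the polygon.
const polygon = [ [ 2, 1 ], [ 1, 1 ], [ 1, 2 ], [ 0, 2 ], [ 0, 0 ], [ 2, 0 ] ];

// Signed area of a 2D triangle: positive = counter-clockwise.
function signedArea( a, b, c ) {
	return 0.5 * ( ( b[ 0 ] - a[ 0 ] ) * ( c[ 1 ] - a[ 1 ] ) - ( c[ 0 ] - a[ 0 ] ) * ( b[ 1 ] - a[ 1 ] ) );
}

// Triangle fan from vertex 0, exactly like OBJLoader's loop.
const fan = [];
for ( let j = 1; j < polygon.length - 1; j ++ ) {
	fan.push( [ polygon[ 0 ], polygon[ j ], polygon[ j + 1 ] ] );
}

const areas = fan.map( t => signedArea( ...t ) );
console.log( areas ); // [ -0.5, 0.5, 2, 1 ]
```

The first fan triangle has negative area: it is flipped and sits outside the polygon, while another triangle overlaps it. The signed areas still sum to the polygon's area (3), but the total absolute area is 4, so the mesh double-covers part of the shape.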

// Draw an edge between the first vertex and all subsequent vertices to form an n-gon

const v1 = faceVertices[ 0 ];

for ( let j = 1, jl = faceVertices.length - 1; j < jl; j ++ ) {

	const v2 = faceVertices[ j ];
	const v3 = faceVertices[ j + 1 ];

	state.addFace(
		v1[ 0 ], v2[ 0 ], v3[ 0 ],
		v1[ 1 ], v2[ 1 ], v3[ 1 ],
		v1[ 2 ], v2[ 2 ], v3[ 2 ]
	);

}

As far as I know, ShapeUtils can already triangulate concave polygons, so the library seems to have everything needed for a more robust solution. Was there a specific reason or historical decision why OBJLoader does not use this more advanced triangulation method?

I think because OBJ is a relatively braindead format, you want to keep the loader braindead as well.
It was also created before a lot of the other utilities that we have now… so applying proper triangulation just wasn’t part of the library at implementation time.

The format has lots of quirks.

For one.. the vertex indexing is 1-based. Ouch.

Here is a summary of other weirdnesses from Gemini:

🛠️ Common Quirks of the Wavefront OBJ Format

The Wavefront OBJ format is the “Grandfather” of 3D assets. Designed in the 1980s for human readability, it contains several legacy behaviors that can break modern parsers or GPU-centric pipelines.


1. 1-Based & Negative Indexing

Unlike almost every modern programming language (C++, JS, Python), OBJ indices start at 1.

  • Absolute: f 1 2 3 refers to the first three vertices in the file.
  • Relative (Negative): f -3 -2 -1 refers to the three vertices defined immediately before the face declaration. This is often used when merging files to avoid recalculating global index counts.

Developer Note: Always remember to subtract 1 from indices when mapping to a Uint32Array or IndexBuffer.
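A hedged sketch of that index fixup (the function name is my own, not from any loader), covering both the absolute and the relative form:

```javascript
// Resolve a raw OBJ face index to a 0-based array index.
// `index` is the integer from the f-line; `count` is the number of
// vertices (or vt/vn entries) parsed so far.
function resolveObjIndex( index, count ) {
	if ( index > 0 ) return index - 1;     // absolute: 1-based -> 0-based
	if ( index < 0 ) return count + index; // relative: counts back from the end
	throw new Error( 'OBJ indices are never 0' );
}

console.log( resolveObjIndex( 1, 8 ) );  // first vertex -> 0
console.log( resolveObjIndex( -1, 8 ) ); // most recently defined vertex -> 7
```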


2. Face Syntax Variations

Faces (f) are defined using a “shorthand” that changes based on available data (Position, UVs, Normals).

  • f v1 v2 v3 — Position indices only.
  • f v1/vt1 v2/vt2 v3/vt3 — Position + Texture (vt) indices.
  • f v1//vn1 v2//vn2 v3//vn3 — Position + Normal (vn) indices (note the double slash).
  • f v1/vt1/vn1 ... — All three: Position, Texture, and Normal.

The Gotcha: The indices for v, vt, and vn do not have to match. You might use the 10th vertex position but the 2nd texture coordinate.
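All four variants can be handled by one small tokenizer that treats the vt and vn slots as optional. A minimal sketch (function name is illustrative, not from OBJLoader):

```javascript
// Split one face-vertex token ("v", "v/vt", "v//vn", or "v/vt/vn")
// into its three optional indices. Missing slots come back as null.
function parseFaceVertex( token ) {
	const toIndex = ( s ) => ( s ? parseInt( s, 10 ) : null ); // '' and undefined -> null
	const [ v, vt, vn ] = token.split( '/' );
	return { v: toIndex( v ), vt: toIndex( vt ), vn: toIndex( vn ) };
}

console.log( parseFaceVertex( '5//9' ) );  // { v: 5, vt: null, vn: 9 }
console.log( parseFaceVertex( '10/2' ) ); // { v: 10, vt: 2, vn: null }
```

Note the second example: the position index (10) and the texture index (2) differ, which is exactly the gotcha above.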


3. “NGons” vs. Triangles

Modern graphics APIs strictly require triangles, but the OBJ spec allows polygons with any number of vertices (e.g., f 1 2 3 4 5).

  • The Problem: You cannot simply upload OBJ face data to a GPU.
  • The Fix: You must implement a triangulation strategy (like “Ear Clipping” or a basic Fan-Triangulation for convex faces) during the import process.

4. Separate Material Files (.mtl)

The OBJ file contains no color or texture data itself. It relies on an external companion file.

  • mtllib filename.mtl — Declares the library.
  • usemtl material_name — Applies a material to all subsequent faces.
  • The Quirk: Path issues are rampant. If the OBJ contains an absolute path from the artist’s local machine (e.g., C:\Users\Artist\...), your loader will fail to find the textures.
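One common defensive trick is to throw away whatever directory prefix the artist's machine left behind and resolve only the file name against the model's own folder. A minimal sketch (assuming the texture actually sits next to the .obj/.mtl; the function name is made up):

```javascript
// Drop any directory part that came from the artist's machine and keep
// only the file name, so the texture is resolved relative to wherever
// the .obj/.mtl actually lives.
function sanitizeTexturePath( rawPath ) {
	// Handles both Windows ("C:\Users\Artist\wood.png") and POSIX paths.
	const parts = rawPath.split( /[\\/]/ );
	return parts[ parts.length - 1 ];
}

console.log( sanitizeTexturePath( 'C:\\Users\\Artist\\textures\\wood.png' ) ); // "wood.png"
```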

5. Unofficial Vertex Colors

While not part of the official 1980s spec, many modern tools (ZBrush, MeshLab) append RGB values directly to the vertex line:

  • v 1.0 2.0 3.0 1.0 0.0 0.0 (XYZ + RGB)
  • The Quirk: A strict parser expecting exactly 3 floats will crash or skip these lines. If you’re building a tool for high-poly sculpting assets, you need to check the element count on every v line.
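A tolerant `v`-line parser therefore branches on the float count rather than assuming exactly three. A small sketch (the function name is illustrative):

```javascript
// Parse a `v` line that may carry 3 floats (XYZ) or 6 (XYZ + RGB).
function parseVertexLine( line ) {
	const parts = line.trim().split( /\s+/ ).slice( 1 ).map( Number );
	const result = { position: parts.slice( 0, 3 ) };
	if ( parts.length >= 6 ) result.color = parts.slice( 3, 6 );
	return result;
}

console.log( parseVertexLine( 'v 1.0 2.0 3.0 1.0 0.0 0.0' ) );
// { position: [ 1, 2, 3 ], color: [ 1, 0, 0 ] }
```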

6. Smooth Shading Groups (s)

The s tag (e.g., s 1 or s off) controls normal interpolation across faces.

  • This is a legacy feature from the fixed-function pipeline era.
  • The Quirk: In modern PBR workflows, we usually rely on the explicit vertex normals (vn) provided. However, if vn data is missing, you are expected to use these groups to manually calculate and average your normals.
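The usual fallback when vn data is missing can be sketched like this: accumulate each triangle's (unnormalized) face normal onto its three vertices, then normalize, so shared vertices end up with a smoothed average. This ignores the per-group splitting that real s-group handling needs; it is the all-smooth case only, and the function name is made up:

```javascript
// positions: flat [x,y,z,...]; indices: flat triangle indices.
// Returns flat per-vertex normals.
function computeSmoothNormals( positions, indices ) {
	const normals = new Array( positions.length ).fill( 0 );
	for ( let i = 0; i < indices.length; i += 3 ) {
		const [ a, b, c ] = [ indices[ i ] * 3, indices[ i + 1 ] * 3, indices[ i + 2 ] * 3 ];
		const abx = positions[ b ] - positions[ a ], aby = positions[ b + 1 ] - positions[ a + 1 ], abz = positions[ b + 2 ] - positions[ a + 2 ];
		const acx = positions[ c ] - positions[ a ], acy = positions[ c + 1 ] - positions[ a + 1 ], acz = positions[ c + 2 ] - positions[ a + 2 ];
		// Face normal = ab x ac, left unnormalized so larger faces weigh more.
		const nx = aby * acz - abz * acy, ny = abz * acx - abx * acz, nz = abx * acy - aby * acx;
		for ( const k of [ a, b, c ] ) {
			normals[ k ] += nx; normals[ k + 1 ] += ny; normals[ k + 2 ] += nz;
		}
	}
	for ( let i = 0; i < normals.length; i += 3 ) {
		const len = Math.hypot( normals[ i ], normals[ i + 1 ], normals[ i + 2 ] ) || 1;
		normals[ i ] /= len; normals[ i + 1 ] /= len; normals[ i + 2 ] /= len;
	}
	return normals;
}
```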

7. Line Continuations

For extremely complex faces or long lists of data, OBJ supports the backslash (\) to split a single logical line across multiple physical lines.

Example:
f 1/1/1 2/2/2 \
3/3/3 4/4/4

Your parser needs to check for this character before processing the line, or you’ll end up with “missing” data.
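The simplest fix is a pre-pass that glues continued lines back together before the real parser runs. A minimal sketch (function name is made up):

```javascript
// Join physical lines ending in a backslash into one logical line.
function joinContinuations( text ) {
	const logical = [];
	let pending = '';
	for ( const line of text.split( /\r?\n/ ) ) {
		if ( line.endsWith( '\\' ) ) {
			// Drop the backslash, keep accumulating into the same logical line.
			pending += line.slice( 0, - 1 ).trimEnd() + ' ';
		} else {
			logical.push( pending + line );
			pending = '';
		}
	}
	if ( pending !== '' ) logical.push( pending.trimEnd() ); // dangling continuation
	return logical;
}

console.log( joinContinuations( 'f 1/1/1 2/2/2 \\\n3/3/3 4/4/4' ) );
// [ 'f 1/1/1 2/2/2 3/3/3 4/4/4' ]
```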

So, in summary.. It’s an old format. It has a lot of weird “opinions” about how it will be consumed. It’s bloated. It has that weird separate material+mesh data file thing.

On the plus side.. you can probably have gemini/claude puke out a more robust importer if you need it.

If you’re planning on using this for delivering to a user over the web, I strongly encourage you to go with GLTF. It’s modern. Binary or text, can pack objects/materials/textures into a single file.. supports compression, both for meshes and textures.. preserves metadata on nodes. Just a superior format on all axes.


Thanks a lot for your fast answer and for sharing your experience with the problems the OBJ format seems to have!!

My app is basically a web client in front of a server that meshes the 3d object and does FEM calculations on it. Users can either model the object online or import a 3d file. I use gmsh to do the meshing and gmsh does not support gltf but it does support OBJ and STL, so I started with them. GLTF also seemed like overkill because I only need to save one geometry, not a whole scene. I don’t even need a render material.

I decided against STL and for OBJ as the internal data format because it can not only store concave faces but even use curves as edges in those faces. As gmsh can read those, I thought it would be nice to be able to save precise curves and not quantize them too early and lose precision.

Funny thing is I also found this sentence in my docs: “you should always try to establish a glTF based workflow first”. It is the answer to the very first question in the three.js FAQ.

I will have another look at gltf as well.
Thanks again!!

Ahh fair enough. And yeah.. being able to support curves as edges seems valuable if it’s useful in your workflow.
I don’t know how you would support that for rendering though… you’d have to come up with some kind of tessellation for it.. and GLTF doesn’t support it afaik.


My guess is: simplicity.

OBJ format is very rudimentary and outdated, although it is still heavily used. There is no strict standard, as many vendors made custom extensions to the format. Maybe the loader just handles some basic common type of OBJ, without support for advanced features like splines. My guess is that the initial intention was to provide a loader that could load the majority of then-existing OBJ files.

I can see a few ways to continue:

  • use another file format (I second @manthrax’s recommendation for GLTF/GLB)
  • triangulate the model in the software that created the model
  • use third-party tool to preprocess OBJ faces to triangles
  • improve the OBJLoader class (for own use or as PR)

But if storing curves/surfaces is important, GLTF/GLB is not good for this (its purpose is to transmit data ready to be poured into the GPU).


Thanks @PavelBoytchev for your opinion on this as well.

For now I will improve the OBJLoader, as it seems like an easy fix. Also, if I want to replace obj as the internal format I need to change some more parts, especially my manifold-checker. This part reads the obj, builds a graph of the faces, edges, points and tells the user about problems in their model (holes, intersections,…).

But I keep your thoughts in mind for future decisions/changes.


In case somebody needs it.

Replace the else if block in the original OBJLoader that reads the face data with the code below. Besides the usual OBJLoader scope, it needs Vector3 and Earcut in scope, plus two module-scope temporary Vector3 instances _ab and _cb used for the normal calculation.

} else if ( lineFirstChar === 'f' ) {

				const lineData = line.slice( 1 ).trim();
				const vertexData = lineData.split( _face_vertex_data_separator_pattern );
				const faceVertices: string[][] = [];

				// Parse the face vertex data into an easy to work with format

				for ( let j = 0, jl = vertexData.length; j < jl; j ++ ) {

					const vertex = vertexData[ j ];

					if ( vertex.length > 0 ) {
						/** In OBJ, every vertex of a face can consist of multiple ids separated by /.
						 * The first id is the vertex id; the following ids are for texture coordinates and vertex normals. */
						const vertexParts = vertex.split( '/' );
						faceVertices.push( vertexParts );
					}

				}

				const nVertices = faceVertices.length;

				if (nVertices === 3) {

					// Standard Triangle: Fast path
					const v1 = faceVertices[0];
					const v2 = faceVertices[1];
					const v3 = faceVertices[2];

					state.addFace(
						v1[0], v2[0], v3[0],
						v1[1], v2[1], v3[1],
						v1[2], v2[2], v3[2]
					);

				} else {

					/**
					 * With the help of gemini v1.0.0 2026-01-22
					 * N-Gon (Concave or Convex): Use Earcut for robust triangulation.
					 * * Logic:
					 * 1. Retrieve actual 3D coordinates for all vertices in the face.
					 * 2. Calculate a normal vector for the face to determine the best projection plane.
					 * 3. Project 3D points to 2D (flat array) by dropping the dominant axis.
					 * 4. Triangulate the 2D data using Earcut.
					 * 5. Map triangulation indices back to original OBJ face data and flip if necessary.
					 */

					const positions: Vector3[] = [];
					const vLen = state.vertices.length;

					// 1. Get 3D positions
					for (let j = 0; j < nVertices; j++) {

						// Note: parseVertexIndex returns the index in the flat array (vIndex * 3)
						const index = state.parseVertexIndex(faceVertices[j][0], vLen);

						// We create a temporary Vector3 for calculation
						positions.push(new Vector3(
							state.vertices[index],
							state.vertices[index + 1],
							state.vertices[index + 2]
						));

					}

					// 2. Calculate face normal using the first three points (assuming non-collinear)
					// We use _cb and _ab from module scope as temporary vectors to avoid allocation
					_cb.subVectors(positions[2], positions[1]);
					_ab.subVectors(positions[0], positions[1]);
					_cb.cross(_ab).normalize();

					const contour: number[] = [];
					const nx = Math.abs(_cb.x);
					const ny = Math.abs(_cb.y);
					const nz = Math.abs(_cb.z);

					// We define a flag to check if we need to flip the triangulation result
					let reverse = false;

					if (nz > nx && nz > ny) {
						// Dominant Z: Project x, y
						// Normal (0,0,1) corresponds to standard CCW in 2D.
						// If original normal is negative Z, we need to flip.
						reverse = _cb.z < 0;
						
						for (let j = 0; j < nVertices; j++) {
							contour.push(positions[j].x, positions[j].y);
						}
					} else if (nx > ny && nx > nz) {
						// Dominant X: Project y, z
						// Normal (1,0,0) corresponds to standard CCW (Y -> Z).
						// If original normal is negative X, we need to flip.
						reverse = _cb.x < 0;

						for (let j = 0; j < nVertices; j++) {
							contour.push(positions[j].y, positions[j].z);
						}
					} else {
						// Dominant Y: Project x, z
						// CAUTION: Mathematical Cross Product of X and Z axes (x, 0, z) is -Y.
						// Meaning: Standard CCW here produces a DOWNWARD (-Y) facing normal.
						// So if the original normal points UP (+Y), we must flip it (because the default is down).
						// If the original normal points DOWN (-Y), we keep it (because default is down).
						reverse = _cb.y > 0;

						for (let j = 0; j < nVertices; j++) {
							contour.push(positions[j].x, positions[j].z);
						}
					}

					// 3. Triangulate using Earcut directly 
					// (returns a flat array of our original vertex ids in faceVertices)
					const triangles = Earcut.triangulate(contour, undefined, 2);

					// 4. Add faces
					for (let i = 0; i < triangles.length; i += 3) {
						
						// The face normal of all new triangles need to look in the
						// same direction as the original ngon
						// Apply the flip by swapping the second and third index
						// [0, 1, 2] -> [0, 2, 1]
						const idx0 = triangles[i];
						const idx1 = reverse ? triangles[i + 2] : triangles[i + 1];
						const idx2 = reverse ? triangles[i + 1] : triangles[i + 2];

						const v1 = faceVertices[idx0];
						const v2 = faceVertices[idx1];
						const v3 = faceVertices[idx2];

						state.addFace(
							v1[0], v2[0], v3[0],
							v1[1], v2[1], v3[1],
							v1[2], v2[2], v3[2]
						);
					}

				}

			}

This is my first working version, no guarantees.

Only tested with this test file yet:
744f4c81c2b8f7eed49da9b900b6a36f_concave.obj (498 Bytes)
