GLTF maximum buffer size for vertices, 65536, is too low

Hi,

There are some C++ exporters, such as the Microsoft glTF-SDK (https://github.com/microsoft/glTF-SDK), that generate GLBs containing buffers with more than 65536 elements for storing vertices. So far so good.

However, in GLTFLoader.js an index value appears to be limited to 65535, see

three.js/GLTFLoader.js (master branch of mrdoob/three.js), around line 2226:

const WEBGL_COMPONENT_TYPES = {
5120: Int8Array,
5121: Uint8Array,
5122: Int16Array,
5123: Uint16Array,
5125: Uint32Array,
5126: Float32Array
};

where 5123 is the component type used for indices. Uint16 covers 0 to 2^16 − 1 = 65535.

Therefore index values that refer to a vertex at a position higher than 65535 wrap around and start again from zero, i.e. index 66372 becomes 66372 − 65536 = 836, which results in scrambled meshes.
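The wraparound is easy to reproduce in plain JavaScript (a minimal sketch, using the number from the example above):

const wrapped = new Uint16Array( [ 66372 ] );
console.log( wrapped[ 0 ] ); // 836, because a Uint16Array stores values modulo 65536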

Is there a particular reason that an index value should be Uint16 and not Uint32?

In this respect, I think the Khronos specification should be more specific, as it leaves it to the developer to decide which structure the indices should be loaded into:

The `indices` accessor MUST NOT contain the maximum possible value for the component type used (i.e., 255 for unsigned bytes, 65535 for unsigned shorts, 4294967295 for unsigned ints).

(taken from Section 3.7.2.1 of the glTF 2.0 Specification)

Best,
Dimitrios

The glTF format (and three.js and THREE.GLTFLoader) supports both uint16 and uint32 indices. So you can choose, and will either have a limit of 2^16 – 1 = 65,535 or 2^32 – 1 = 4,294,967,295 vertices per draw call, accordingly. The – 1 is there because the last value is reserved for a primitive restart value in certain graphics APIs.

Thanks for the quick response. Yes, I know it works, but I had to change it myself.
The choice between Uint16 and Uint32 does not seem to be stored anywhere inside the JSON part of the GLB, so there is nothing that could be used to switch "5123: Uint16Array," to "5123: Uint32Array" in GLTFLoader.js. The JSON only says "SCALAR", but there is a difference between a Uint16 scalar and a Uint32 scalar.

Am I missing it somewhere?

Best,
Dimitrios

Under the mesh’s primitives list, you should see an “indices” property containing an integer. This points to an “accessor”, and the “componentType” of that accessor would specify whether it’s a uint16 or uint32. But you can’t just modify the JSON to change this — it’s indicating what the underlying binary data actually is, and so a change would just make it incorrect.
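For example, with the JSON of the asset parsed into a plain object (a minimal sketch; json is simply my name for whatever JSON.parse gives you from the .gltf file or the GLB's JSON chunk):

const primitive = json.meshes[ 0 ].primitives[ 0 ];
const accessor = json.accessors[ primitive.indices ];

// 5123 = UNSIGNED_SHORT (uint16), 5125 = UNSIGNED_INT (uint32)
console.log( accessor.componentType );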

If you want to convert a file from using uint16 to uint32 I can help with that… But if the file already uses uint16 for binary storage, it won’t contain any index values > 2^16, so conversion to uint32 is not going to change the visual result. Is that what you mean?
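If you only need the widened values in memory, the idea is just this (a rough sketch; indices16 stands for a Uint16Array view over the existing index bufferView, and a real file conversion would also have to rewrite the binary buffer and the bufferView byteLength/byteOffset fields):

const indices32 = Uint32Array.from( indices16 ); // same values, 4 bytes per index
accessor.componentType = 5125;                   // UNSIGNED_INT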

Let us assume that we have a cube, such as the one below, generated by Blender and exported with the glTF Embedded option.

{
    "asset" : {
        "generator" : "Khronos glTF Blender I/O v3.3.27",
        "version" : "2.0"
    },
    "scene" : 0,
    "scenes" : [
        {
            "name" : "Scene",
            "nodes" : [
                0
            ]
        }
    ],
    "nodes" : [
        {
            "mesh" : 0,
            "name" : "Cube"
        }
    ],
    "meshes" : [
        {
            "name" : "Cube",
            "primitives" : [
                {
                    "attributes" : {
                        "POSITION" : 0
                    },
                    "indices" : 1
                }
            ]
        }
    ],
    "accessors" : [
        {
            "bufferView" : 0,
            "componentType" : 5126,
            "count" : 8,
            "max" : [
                1,
                1,
                1
            ],
            "min" : [
                -1,
                -1,
                -1
            ],
            "type" : "VEC3"
        },
        {
            "bufferView" : 1,
            "componentType" : 5123,
            "count" : 36,
            "type" : "SCALAR"
        }
    ],
    "bufferViews" : [
        {
            "buffer" : 0,
            "byteLength" : 96,
            "byteOffset" : 0,
            "target" : 34962
        },
        {
            "buffer" : 0,
            "byteLength" : 72,
            "byteOffset" : 96,
            "target" : 34963
        }
    ],
    "buffers" : [
        {
            "byteLength" : 168,
            "uri" : "data:application/octet-stream;base64,AACAvwAAgL8AAIA/AACAvwAAgD8AAIA/AACAvwAAgL8AAIC/AACAvwAAgD8AAIC/AACAPwAAgL8AAIA/AACAPwAAgD8AAIA/AACAPwAAgL8AAIC/AACAPwAAgD8AAIC/AAABAAMAAAADAAIAAgADAAcAAgAHAAYABgAHAAUABgAFAAQABAAFAAEABAABAAAAAgAGAAQAAgAEAAAABwADAAEABwABAAUA"
        }
    ]
}

There are only 8 vertices:

V[0] = [-1.0, -1.0,  1.0]
V[1] = [-1.0,  1.0,  1.0]
V[2] = [-1.0, -1.0, -1.0]
…
V[7] = [ 1.0,  1.0, -1.0]

The indices point to the vertices.

(BufferView 1 contains the corner indices of the 12 triangles, so it holds 36 indices; the snippet after the triangle list below decodes this buffer directly.)

Triangle 1
index[0] = 0
index[1] = 1
index[2] = 3

Triangle 2
index[3] = 0
index[4] = 3
index[5] = 2

…

Triangle 12
index[33] = 7
index[34] = 1
index[35] = 5
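All of this can be verified in the browser console (a quick sketch; it assumes the JSON above has been parsed into a variable named json, which is my own naming):

const base64 = json.buffers[ 0 ].uri.split( ',' )[ 1 ];
const bytes = Uint8Array.from( atob( base64 ), ( c ) => c.charCodeAt( 0 ) );

// bufferView 0: 96 bytes of positions (8 vertices × 3 floats × 4 bytes)
const positions = new Float32Array( bytes.buffer, 0, 8 * 3 );

// bufferView 1: 72 bytes of indices (36 × uint16, componentType 5123)
const indices = new Uint16Array( bytes.buffer, 96, 36 );

console.log( positions.slice( 0, 3 ) ); // first vertex
console.log( indices.slice( 0, 3 ) );   // first triangle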

Let’s go high now.

Imagine now that we have 100,000 vertices and a complicated mesh built out of these 100,000 points.

How can an index point to vertex 70,000 if the index is stored in Uint16 format? It simply wraps around to zero after 65535, and that causes the bug, i.e.

Command: get index 50, which points to vertex 70000, i.e.

index[50] = 70000

The CPU says: hey, you store this in Uint16, you cannot do that, I will wrap it so that it fits into Uint16, so I give you back:

index[50] = 70000 − 65536 = 4464

which scrambles the mesh for every triangle that involves a vertex at position 65536 or beyond.

In conclusion:

Your three.js glTF importer should recognize whether a single buffer contains more than 65535 vertices. For the cube case, 8 is well below the Uint16 limit, so the Uint16 format is fine. If a mesh had more vertices than fit in that range, Uint32 should be used instead.

// Proposed change: pick the mapping for 5123 based on the largest
// number of vertices found across the buffers of the asset.
const WEBGL_COMPONENT_TYPES = ( maxNumberOfVerticesAcrossBuffers <= 65535 )
	? {
		5120: Int8Array,
		5121: Uint8Array,
		5122: Int16Array,
		5123: Uint16Array,
		5125: Uint32Array,
		5126: Float32Array
	}
	: {
		5120: Int8Array,
		5121: Uint8Array,
		5122: Int16Array,
		5123: Uint32Array,
		5125: Uint32Array,
		5126: Float32Array
	};

I am studying the issue in greater detail and will come back with my conclusions.

If you have more than 2^16 – 1 vertices then you must store your index in Uint32 format, rather than Uint16, within the glTF file itself. The Blender exporter will do this automatically and THREE.GLTFLoader will recognize when the glTF file it’s given contains Uint32 indices, uploading to the GPU accordingly.

GLTF importer should recognize if a single buffer contains more than 65536 vertices.

See above. The indices are already stored as uint16 or uint32 in the file. GLTFLoader supports both, no change is needed here.
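The mechanism is roughly the following (a simplified sketch of the idea, not the actual GLTFLoader source; accessorDef, arrayBuffer, byteOffset and itemSize stand in for values the loader already has at that point):

const WEBGL_COMPONENT_TYPES = {
	5120: Int8Array,
	5121: Uint8Array,
	5122: Int16Array,
	5123: Uint16Array,
	5125: Uint32Array,
	5126: Float32Array
};

// The accessor's componentType, written by the exporter, selects the view,
// so a file with uint32 indices is read with Uint32Array automatically.
const TypedArray = WEBGL_COMPONENT_TYPES[ accessorDef.componentType ];
const array = new TypedArray( arrayBuffer, byteOffset, accessorDef.count * itemSize );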

Personally I think 65536 vertices should be enough for everyone. When I was growing up, we would get 10-50 vertices per model on PlayStation 1 and we were happy to have them.


There is no issue: the GLTFLoader of three.js adapts automatically to more than 65536 vertices. It was my mistake to rush to report it. The problem was in my own exporter, which uses Microsoft's glTF-SDK serializer. In three.js, although the glTF exporter says uint16 for indices, it is not strict; in JS the bytes of any buffer can be viewed through any of the uint typed arrays. In C++ it is not the same; things are strict there.
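That last point is easy to see in plain JavaScript (a small sketch; the buffer size and value are made up for the demonstration):

const buffer = new ArrayBuffer( 8 );

// The same bytes can be viewed as uint16 or uint32 at any time;
// the element type belongs to the view, not to the storage.
const asU16 = new Uint16Array( buffer );
const asU32 = new Uint32Array( buffer );

asU32[ 0 ] = 70000;
console.log( asU16[ 0 ], asU16[ 1 ] ); // 4464 1  (70000 = 1 * 65536 + 4464, little-endian)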