In WebGL2 the so-called “provoking vertex” of a triangle is its final vertex, and a flat varying takes its value from that vertex.
Given that mesh.geometry has 128 vertices, we can attach an attribute “vertexId” with values [0...127]. Then we can use a flat varying to pass the provoking vertex id into the fragment shader.
// vertex shader
attribute int vertexId;   // per-vertex id, 0..127
flat varying int vId;     // flat: the value comes from the provoking (last) vertex
// ...
vId = vertexId;

// fragment shader
flat varying int vId;     // == vertexId of the current triangle's provoking vertex
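For context, the geometry side might look something like this (a rough sketch, not code from the thread; the variable names are mine):

import * as THREE from 'three';

const count = mesh.geometry.getAttribute('position').count; // e.g. 128
const ids = new Int32Array(count);
for (let i = 0; i < count; i++) ids[i] = i;

const vertexIdAttr = new THREE.Int32BufferAttribute(ids, 1);
// depending on your three.js version you may need to mark it as an integer attribute:
// vertexIdAttr.gpuType = THREE.IntType;
mesh.geometry.setAttribute('vertexId', vertexIdAttr);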
But can we assume distinct triangles have a distinct provoking vertex?
It holds for every glTF model I have exported from Blender so far.
This property is very helpful for uv-mapping, i.e. for each triangle I can provide a uv offset whenever I want to partially change the skin.
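For illustration, a hypothetical fragment-shader sketch of that idea (the uniform name and size are made up, and it assumes each triangle really does have a distinct provoking vertex):

// fragment shader
flat varying int vId;
varying vec2 vUv;
uniform sampler2D map;
uniform vec2 uvOffsets[128];   // one uv offset per vertex id; only provoking ids are read

void main() {
  vec2 uv = vUv + uvOffsets[vId];   // per-triangle offset, selected by the provoking vertex
  gl_FragColor = texture2D(map, uv);
}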
Cheers, Rob.
Thanks @manthrax that’s helpful, but I don’t think it solves the problem.
If we force the triangles to be disjoint (no two triangles share a vertex) then we can certainly infer the current triangle from vertexId.
However, my triangles are not disjoint, e.g. a cube exports with 8 × 3 = 24 vertices (each corner split three ways, one per face normal) rather than 3 × 12 = 36 (three vertices per triangle).
No guarantee of this, unless you unweld vertices with toNonIndexed or similar.
You can construct a mesh such that all triangles share the same last vertex, or even share all three vertices in common. I expect that Blender is splitting vertices (on a cube, for example) only when required where faces have distinct UVs or normals.
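For example, a hypothetical indexed fan where vertex 4 is the last index of every triangle, so all triangles share one provoking vertex:

import * as THREE from 'three';

const geometry = new THREE.BufferGeometry();
geometry.setAttribute('position', new THREE.Float32BufferAttribute([
  0, 0, 0,    // 0
  1, 0, 0,    // 1
  1, 1, 0,    // 2
  0, 1, 0,    // 3
  0.5, 0.5, 0 // 4 – apex, provoking vertex of every triangle below
], 3));
geometry.setIndex([0, 1, 4,  1, 2, 4,  2, 3, 4]);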
Related: modeling - How to stop blender from generating duplicate vertices on export? - Blender Stack Exchange
What is the “provoking vertex” useful for? This is the first I’ve heard of it…
Thanks, much appreciated.
Saves me going down a dark alley.
I also didn’t know about toNonIndexed, which avoids having to un-weld inside Blender.
I only stumbled across the concept myself.
In WebGL2 it would permit me to determine the current triangle in a fragment shader using fewer vertices. That at least works when the model is a union of cubes (24 instead of 36 vertices per cube, a 2/3 factor).
I’m guessing this video provides a better use-case:
https://www.youtube.com/watch?v=l6PEfzQVpvM&ab_channel=ThinMatrix
I think you can generate terrains with a single quad and a heightmap (DataTexture) using InstancedBufferGeometry.
You use the gl_InstanceID to figure out which heightmap pixel you’re on, and deform the vertices of the quad accordingly. You can do another 2 samples of the heightmap at each vertex to compute the normal if you need smooth shading.
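A rough GLSL ES 3.00 sketch of that idea (not the actual code behind the demo — the uniform names, the unit-quad layout and the square-map assumption are all mine):

#version 300 es
precision highp float;

uniform mat4 projectionMatrix;
uniform mat4 modelViewMatrix;
uniform sampler2D heightMap;   // DataTexture, one height per texel
uniform float cellSize;        // world-space size of one grid cell
uniform float heightScale;

in vec3 position;              // unit quad in the XZ plane, corners at 0/1

void main() {
  int size = textureSize(heightMap, 0).x;                         // assume a square map
  ivec2 cell = ivec2(gl_InstanceID % size, gl_InstanceID / size); // which cell this instance covers
  // each corner samples its own texel so neighbouring quads line up
  ivec2 texel = clamp(cell + ivec2(position.xz), ivec2(0), ivec2(size - 1));
  float h = texelFetch(heightMap, texel, 0).r * heightScale;
  vec3 p = vec3(float(texel.x) * cellSize, h, float(texel.y) * cellSize);
  gl_Position = projectionMatrix * modelViewMatrix * vec4(p, 1.0);
}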
Re: using the provoking vertex as a flat-shaded normal…
It doesn’t seem accurate to just pick one vertex’s normal arbitrarily?
Hey again, it would be accurate if the last vertex in each triangle determines the triangle. Shame we don’t have gl_PrimitiveID in WebGL.
Hmm I still don’t get it. You would get the normal for that vertex, not the face normal of the triangle yea?
It would be close to correct, but not the true face normal… unless you’re preprocessing the data and storing the face normal explicitly in that vertex?
Trippy stuff…
Maybe not a good example.
Suppose you had a specific vertex ordering on your model, where certain ranges corresponded to certain parts of the model.
In Blender you can show the vertex id via an overlay, and manipulate the ordering via separate / join.
Then the provoking vertex id could be sufficient to determine the range you are in, and e.g. hide or color as appropriate in the fragment shader.
That way you could avoid unwelding and keep the vertex count down.
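A hypothetical fragment-shader sketch of that range check (the range values are made up):

flat varying int vId;

void main() {
  // say vertex ids 64..127 make up the part we want to hide
  if (vId >= 64 && vId < 128) discard;
  gl_FragColor = vec4(0.8, 0.8, 0.8, 1.0);   // or pick a color per range instead
}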
One other note, if you want to do the equivalent of .toNonIndexed() offline (so it’s not a runtime cost) then …
gltf-transform unweld in.glb out.glb
… will do the job. It would also be possible to modify the implementation to detach only the 3rd vertex of each triangle, so the vertex count wouldn’t increase as much.
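A rough sketch of that partial unweld in three.js (my own hypothetical helper, not part of gltf-transform; it assumes an indexed, non-interleaved BufferGeometry and ignores morph targets):

import * as THREE from 'three';

// Duplicate only the 3rd vertex of triangles whose last vertex is already
// used as a provoking vertex, so every triangle ends up with a unique one.
function unweldProvokingVertices(geometry) {
  const index = Array.from(geometry.getIndex().array);
  const vertexCount = geometry.getAttribute('position').count;
  const used = new Set();
  const extra = [];                        // source vertex ids to duplicate

  for (let t = 0; t < index.length; t += 3) {
    const last = index[t + 2];
    if (used.has(last)) {
      index[t + 2] = vertexCount + extra.length;   // point at the new copy
      extra.push(last);
    } else {
      used.add(last);
    }
  }

  // append the duplicated vertices to every attribute
  for (const name of Object.keys(geometry.attributes)) {
    const attr = geometry.getAttribute(name);
    const array = new attr.array.constructor((vertexCount + extra.length) * attr.itemSize);
    array.set(attr.array);
    extra.forEach((src, i) => {
      for (let c = 0; c < attr.itemSize; c++) {
        array[(vertexCount + i) * attr.itemSize + c] = attr.array[src * attr.itemSize + c];
      }
    });
    geometry.setAttribute(name, new THREE.BufferAttribute(array, attr.itemSize, attr.normalized));
  }

  geometry.setIndex(index);
  return geometry;
}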
For fun I made a heightmap renderer that uses a single instanced quad to render a 1k × 1k heightmap (~2 million triangles):
(code:Glitch :・゚✧)