Using .mergeBufferGeometries() with geometries that have different attributes?

I have multiple BufferGeometry objects that I want to combine into one single geometry.

However, some geometries have skin weights and skin indices while others don’t; in some cases a geometry might have a normal map while others don’t, and so on.

Using BufferGeometryUtils.mergeBufferGeometries() fails with the error:

Make sure all geometries have the same number of attributes.

Is there another way to combine multiple BufferGeometries that have different attributes into a single mesh?

Not with BufferGeometryUtils. You would have to remove the skinning attribute data from the geometries in order to use mergeBufferGeometries().
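
A minimal sketch of that, assuming `geometries` is an array of THREE.BufferGeometry and a three.js version that still ships mergeBufferGeometries() (newer releases renamed it to mergeGeometries()):

```js
import { mergeBufferGeometries } from 'three/examples/jsm/utils/BufferGeometryUtils.js';

// Strip the skinning data so every geometry ends up with the same
// attribute set. deleteAttribute() is a no-op if the attribute is absent.
for ( const geometry of geometries ) {
	geometry.deleteAttribute( 'skinIndex' );
	geometry.deleteAttribute( 'skinWeight' );
}

const merged = mergeBufferGeometries( geometries );
```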

The existence of a normal map does not say anything about the geometry data. What matters is whether the geometry has texture coordinates or not.

BTW: The mentioned check in BufferGeometryUtils.mergeBufferGeometries() exists because it normally makes no sense to combine incompatible geometries. Every attribute of the merged geometry needs a value for every vertex, so you would have to define default data for all geometries that have no skinning data. This is not something the engine wants to do for you.

Thanks,

What would be the workflow to normalize all geometries?

To make the geometries compatible, you’ll need to know why they’re incompatible, then decide what a “normal” geometry should be. There is no right answer for all cases: if you don’t need normals, you should remove them from all geometries; if you do need normals, you’ll have to compute them for every geometry before merging. The same goes for:

  • normals
  • tangents
  • uv1
  • uv2
  • color
  • skinIndex / skinWeights

Some of these are easy to fill in: color = #FFFFFF for every vertex will not change anything. Some are hard: there’s no obvious “empty” value for uv1 and uv2. BufferGeometryUtils expects the geometries to already be compatible because it can’t guess which attributes you want to keep and which you don’t.
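
A rough sketch of filling in the “easy” defaults before merging (again assuming `geometries` is an array of THREE.BufferGeometry):

```js
import * as THREE from 'three';

for ( const geometry of geometries ) {

	// Missing normals can be recomputed from the triangles.
	if ( geometry.attributes.normal === undefined ) {
		geometry.computeVertexNormals();
	}

	// White vertex colors (1, 1, 1) multiply to no visible change.
	if ( geometry.attributes.color === undefined ) {
		const count = geometry.attributes.position.count;
		const colors = new Float32Array( count * 3 ).fill( 1 );
		geometry.setAttribute( 'color', new THREE.BufferAttribute( colors, 3 ) );
	}

}
```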


What a great insight, thank you!

For uvs, normals, etc. as I understand it’s fairly straightforward, but to take skinWeights as an example - is it possible to apply the weights to a BufferGeometry mesh and make the mesh static? In other words, I’d like to “bake” the transformations into the geometry and leave the mesh “as-is” without having any weight attributes. I need the weights prior to my rendering process to manipulate the objects, but during rendering the objects are static and I have no need to keep the weights or indexes.

UVs are only straightforward if the geometries already use the same textures. If not, and the merged geometry is going to have a single material, you have to “pack” the textures into one and adjust each geometry’s UVs into the right part of the packed texture. This is easier to do in a tool like Blender than at runtime in three.js.
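
For illustration, the UV-adjustment half of that could look like this sketch, where `remapToAtlasQuadrant`, `col`, and `row` are hypothetical names and the UVs are assumed to lie in [0, 1]:

```js
// Squeeze a geometry's UVs into one quadrant of a 2x2 texture atlas.
// `col` and `row` are 0 or 1 and pick the quadrant.
function remapToAtlasQuadrant( geometry, col, row ) {
	const uv = geometry.attributes.uv;
	for ( let i = 0; i < uv.count; i ++ ) {
		uv.setXY( i, ( uv.getX( i ) + col ) * 0.5, ( uv.getY( i ) + row ) * 0.5 );
	}
	uv.needsUpdate = true;
}
```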

If you don’t want a geometry to be affected by skinning, then skinWeight = (0, 0, 0, 0) for each vertex might work, or you could weight each vertex to a single stationary bone. Or just delete those attributes from all merged geometries.
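
A sketch of the stationary-bone option, assuming `geometry` is a non-skinned THREE.BufferGeometry and that bone 0 of the shared skeleton never moves (note the attribute names three.js actually uses are skinIndex and skinWeight):

```js
import * as THREE from 'three';

// Bind every vertex fully to bone 0, assumed to be a stationary "anchor"
// bone, so skinning leaves these vertices in place.
const count = geometry.attributes.position.count;
const indices = new Uint16Array( count * 4 ); // all zeros: bone 0
const weights = new Float32Array( count * 4 );
for ( let i = 0; i < count; i ++ ) weights[ i * 4 ] = 1; // full weight on bone 0

geometry.setAttribute( 'skinIndex', new THREE.Uint16BufferAttribute( indices, 4 ) );
geometry.setAttribute( 'skinWeight', new THREE.Float32BufferAttribute( weights, 4 ) );
```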

I do want the objects to be affected by skinning; however, when rendered, the objects can be static, since no transformations occur from that point on. I only need to rig and manipulate the objects during the initial loading process.

As merging geometries with different attribute types isn’t possible, my question now is: how do I apply skinning, weight, and morphAttribute transformations to an object and then remove those attributes, keeping the now “posed” object static? At that point I could use BufferGeometryUtils.mergeBufferGeometries() to merge all geometries together and save client-side resources when rendering those objects.

Edit: at this point I’m not concerned about normals, UVs, textures, colors, etc.; I can deal with those in a different way, as long as I can join the geometries together into one object.

You’d need to apply the pose to every vertex in the geometry. As a starting point, SkinnedMesh.boneTransform can apply the current skeleton pose to a single vertex: it takes the index of the source vertex in the mesh’s geometry and a target Vector3 into which it writes the transformed vertex position. You’d need to repeat that for every vertex and store the result in the new geometry.
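
Putting that together, a minimal sketch (assumes `skinnedMesh` is a THREE.SkinnedMesh already set to the desired pose with its bones’ world matrices up to date, and a three.js version where the method is still named boneTransform; newer releases call it applyBoneTransform):

```js
import * as THREE from 'three';

// Bake the current skeleton pose into a plain, static copy of the geometry.
function bakePose( skinnedMesh ) {
	// Refresh boneMatrices from the bones' current world transforms.
	skinnedMesh.skeleton.update();

	const baked = skinnedMesh.geometry.clone();
	const position = baked.attributes.position;
	const target = new THREE.Vector3();

	for ( let i = 0; i < position.count; i ++ ) {
		// Applies the current pose to vertex i and writes the result into target.
		skinnedMesh.boneTransform( i, target );
		position.setXYZ( i, target.x, target.y, target.z );
	}

	position.needsUpdate = true;

	// The pose is baked in, so the skinning attributes can go. Normals are
	// stale after the deformation; recompute them if you need them.
	baked.deleteAttribute( 'skinIndex' );
	baked.deleteAttribute( 'skinWeight' );
	baked.computeVertexNormals();

	return baked;
}
```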

This is slow, and shouldn’t be done in a render loop for non-trivial vertex counts.