How Are Transparent Meshes in a GLTF File Treated?

Several of the models I create in Blender have many parts, some of which have a transparent texture. When I export this model as a glb file, each part of the model is saved as a separate mesh. Thus, despite having saved a single model, three.js does not treat the loaded file as a single 3D object, but more like a group of separate meshes. (I discovered that when I tried to make the loaded file a child of a 3D object.)

I understand that, in general, three.js treats meshes with transparent textures differently than meshes with opaque textures. In the case above, does that mean that three.js will single out the mesh with the transparent texture for special treatment? Or is the whole collection singled out for special treatment?

What happens if a part has both opaque and transparent textures?


It’s a bit hard to generalize without knowing what you’re exporting from Blender, but a few points that might be helpful:

  • Each GLB is loaded as a THREE.Group and could have any number of Object3D, Mesh, SkinnedMesh, Light, Camera, etc. objects within it.
  • GLTFLoader never creates multi-material meshes, so if you have more than one material (or parts with different textures) those will always be loaded as different meshes.
  • The three.js renderer sorts and draws opaque and transparent meshes in two separate passes for graphics pipeline reasons, so (a) each mesh must be sorted into one of those two passes (or a third, not relevant here), and (b) it is almost always a good idea to keep opaque and semi-transparent materials separate. If you only need “mask” transparency (all or nothing), use material.alphaTest (Blender’s Alpha Clip Blend Mode) instead.
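The two-pass idea above can be sketched in plain JavaScript. This is not three.js source, just an illustration of how a renderer might partition meshes into opaque and transparent queues before drawing; the mesh/material objects here are hypothetical stand-ins for the real classes.

```javascript
// Sketch (not three.js internals): split a scene's meshes into the two
// render queues based on a `transparent` flag, mirroring material.transparent.
function buildRenderQueues(meshes) {
  const opaque = [];
  const transparent = [];
  for (const mesh of meshes) {
    (mesh.material.transparent ? transparent : opaque).push(mesh);
  }
  return { opaque, transparent };
}

// Hypothetical meshes, as if loaded from a multi-part GLB:
const meshes = [
  { name: "hull",  material: { transparent: false } },
  { name: "glass", material: { transparent: true } },
  { name: "deck",  material: { transparent: false } },
];
const queues = buildRenderQueues(meshes);
// queues.opaque holds "hull" and "deck"; queues.transparent holds "glass"
```

Because GLTFLoader never merges materials into one mesh, each part lands cleanly in one queue or the other.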

That’s exactly the kind of general information I needed.
However, I’m not 100% sure what your last sentence means.
After sorting opaque objects, meshes, etc., does three.js separately sort transparent objects, meshes, etc. based on distance?

I do have a specific question regarding a conflict between a transparent object and an emitter that uses transparent smoke textures. The transparent object is showing through the smoke - apparently without regard for distance.

But before I post that question, are there some special rules regarding handling of emitters that use transparent textures? In general, emitters do not appear to be Mesh objects, but Points objects. And this particular emitter uses a ShaderMaterial that references a custom shader.

After sorting opaque objects, meshes, etc., does three.js separately sort transparent objects, meshes, etc. based on distance?

All opaque objects are drawn first, nearest-to-furthest. Then all transparent objects are drawn, furthest-to-nearest.

The transparent object is showing through the smoke - apparently without regard for distance.

It’s often a good idea to set depthWrite = false on any transparent materials. GLTFLoader does this by default. This solves common issues, but alpha blend transparency is always a bit prone to sort-related edge cases.
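In three.js you would typically apply this with scene.traverse() after loading. Here is a dependency-free sketch of the same walk over a hypothetical node tree, so the shape of the fix is clear; the node/material fields are stand-ins for the real Object3D/Material properties.

```javascript
// Sketch: walk a loaded scene graph and disable depth writes on every
// transparent material, as you would with scene.traverse() in three.js.
function disableTransparentDepthWrite(node) {
  if (node.material && node.material.transparent) {
    node.material.depthWrite = false;
  }
  (node.children || []).forEach(disableTransparentDepthWrite);
}

// Hypothetical two-mesh scene: one opaque part, one transparent part.
const scene = {
  children: [
    { material: { transparent: false, depthWrite: true }, children: [] },
    { material: { transparent: true,  depthWrite: true }, children: [] },
  ],
};
disableTransparentDepthWrite(scene);
// only the transparent material ends up with depthWrite === false
```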

…are there some special rules regarding handling of emitters that use transparent textures?

three.js does not have a separate concept of emitters; it’s safe to assume that one mesh is treated like another mesh (within the opaque/transparent render queues). Note that individual triangles within a single mesh are not sorted.


Thanks! Very helpful.

One last question, regarding nearest and farthest: as I was rotating the camera, I noticed that the drawing order varied. This made me wonder if my definition of nearest and farthest for the purposes of drawing order is incorrect.

I had always assumed that distance for the purpose of z-buffering does not change when you rotate the camera. However, perhaps I am wrong. To illustrate: suppose there is an object A with an object center that is 100 units in front of the camera, and an object B with an object center that is 125 units away, but is way off to the side so that the forward distance is only 25 units. Would three.js treat object B as nearer or farther?

Here is a crude illustration (the camera is pointed towards A):


The answer makes a difference because I have sometimes tried to nail down drawing order by making sure that the object center for the object in back is further away than the object center for the object in front. If rotating the camera changes the z-buffer distance for the object centers, it looks like I would have to split the object up so that the center of the portion of the object being drawn is always roughly in front of the camera.


Although I was not looking for a solution to my problem du jour, this discussion appears to have given me the tools to solve that problem. I had a model of an island with a volcano and a smoke emitter on top. To get a good coastline and allow shadows, I had broken the island into two meshes - one with an opaque texture (for shadows) and the coastline with a transparent texture. The result was this problem:

So, as you suggested, I set mesh.material.depthWrite = false for the coastline mesh (I had already done so for the emitter). I then moved the island center to be right under the smoke emitter. So, as I am looking down (which is where the problem appears), the center for the emitter will almost always be higher (closer) than the center for the island. (Regardless of which definition of depth you use.)

This appears to be an advantage with creating scenery for a flight simulation: Since you are mostly looking down, you can “layer” your terrain. Flat ground first, objects for hills, islands and vehicles second, trees and smoke third, etc.


I believe sorting is based on each object’s depth as seen from the camera: its position projected through the camera’s view and projection matrices, rather than plain Euclidean distance between points.


Yes, and it appears that the Z-direction is determined by the direction the camera is pointed. So, in the example above, the Z-distance for B is only 25 units, even though the actual distance from the camera is 125 units. Apparently that is standard practice for Z-buffers - something I had forgotten.
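The A/B example from earlier in the thread can be checked numerically. Assume the camera sits at the origin looking down the −Z axis (the three.js convention); the positions below are hypothetical, chosen to match the example's numbers.

```javascript
// A is straight ahead of the camera; B is 125 units away by straight-line
// distance, but only 25 units "forward" along the view direction.
const A = { x: 0, z: -100 };
const lateral = Math.sqrt(125 * 125 - 25 * 25); // ~122.47 units off to the side
const B = { x: lateral, z: -25 };

const euclidean = p => Math.hypot(p.x, p.z); // straight-line distance
const viewDepth = p => -p.z;                 // depth along the view direction

// By Euclidean distance, B (125) is further away than A (100),
// but by view-direction depth, B (25) is much nearer than A (100).
```

This is why B’s drawing order relative to A can flip as the camera rotates: rotating changes each object’s view-direction depth even though the Euclidean distances stay fixed.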

It is interesting to find that opaque objects are drawn front to back. The 3D program I created decades ago drew them back to front so that more distant objects would be covered by nearer objects. For front to back, I assume the program creates some kind of mask to keep track of which parts of the screen have already been drawn on. As it goes from front to back, there are fewer “free” areas until, at the end, the remaining open areas are filled with background. Very clever!


Interesting! I was not aware of that z-buffer behavior. It makes sense for an ortho camera, but I’m curious about perspective and cube cameras.

The reason for drawing opaque objects front-to-back is to avoid overdraw: the GPU can skip shading a fragment when the depth buffer already holds a closer depth at that pixel.
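The saving is easy to see with a toy one-pixel depth buffer. Drawing the same three fragments front-to-back shades only the nearest one; back-to-front shades all three, because each new fragment passes the depth test.

```javascript
// Toy one-pixel depth buffer: a fragment is shaded only if it is closer
// than what the buffer already holds (the depth test); otherwise it is
// rejected before any shading cost is paid.
function drawFragments(depths) {
  let depthBuffer = Infinity;
  let shaded = 0;
  for (const z of depths) {
    if (z < depthBuffer) {
      depthBuffer = z; // record the new nearest depth
      shaded++;        // this fragment's color actually gets computed
    }
  }
  return shaded;
}

const frontToBack = drawFragments([10, 30, 50]); // shades 1 fragment
const backToFront = drawFragments([50, 30, 10]); // shades all 3 fragments
```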

Technically, I did not use Z-buffering in my simulation since that involves mapping a lot of points into a buffer. Instead, I used a “Z-buffering approach” - I was using object centers to determine distance and then drawing each object onto the screen starting with the most distant object. The similarity is that with both methods, the Z axis extends straight along the direction the viewer is looking. The XY axes are perpendicular to that - in the direction of my outstretched arms.

If you want a perspective camera, you convert the XY coordinates to screen XY coordinates by dividing each by Z (or some similar distance-related factor). For an ortho camera, you don’t divide by Z. I’m not sure about a cube camera.
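The divide-by-Z step described above can be sketched directly. The focal-length scale `f` here is a hypothetical stand-in for whatever distance-related factor the projection uses; the point is that perspective divides by depth and orthographic does not.

```javascript
// Perspective: screen coordinates come from dividing view-space x/y by
// depth z (scaled by a focal length f), so distant points shrink.
function projectPerspective(p, f) {
  return { x: (f * p.x) / p.z, y: (f * p.y) / p.z };
}
// Orthographic: no divide, so size does not change with depth.
function projectOrtho(p) {
  return { x: p.x, y: p.y };
}

const near = projectPerspective({ x: 20, y: 10, z: 100 }, 100); // { x: 20, y: 10 }
const far  = projectPerspective({ x: 20, y: 10, z: 200 }, 100); // { x: 10, y: 5 }
const orth = projectOrtho({ x: 20, y: 10, z: 200 });            // { x: 20, y: 10 }
```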

The Z-buffering approach works best with a more limited field of view - which was fine in the days before widescreen monitors. (If you used a camera with a field of view that extended to 90 degrees on each side, and moved forward, the objects at the far side of the screen would be moving straight outwards as you passed them.) In contrast, something like raycasting appears to be more accurate - since it measures actual distance to objects - and thus gives more of a panoramic view.

From what I am seeing, it appears that three.js uses something similar to my Z-buffering approach. Technically, if I turn my head (or rotate my camera), the distance to objects does not change. Closer objects will always remain closer. However, with a Z-buffering approach, the Z distance of objects changes and, even within a limited field of view, a farther object can become closer. That appears to be happening with my transparent objects since they are switching drawing order as I rotate my camera.

(I hope I am not repeating things that you already know. If I am, let me know and I can delete the extraneous material.)