Questions on three.js example ----- webgl_loader_gltf_transmission.html

Considering the various problems associated with getting transparency to work nicely (in the general case), I am wondering how this near-perfect transparent glass cover was rendered.

Does it come from using transmission along with physically-based materials?

Are the usual material properties…
.opacity
.transparent
.depthWrite
etc being used in special ways?

Or is it just that the glass cover geometric model is very well behaved?

OLDMAN

It has a thickness map. I don't think this is trivial to make, and that model is the only example of it I have seen so far. It does look nice!

Check out this thread: How to bake thicknessMap for gltf transmission volume?

The properties “opacity”, “transparent”, and “depthWrite” are all related to a type of transparency called alpha blending. That is the most common and usually the cheapest type of transparency. It simply fades things out, and a surface with opacity=0 cannot have any reflection.

Transmission is an entirely different method, using none of those properties. Fully-transmissive materials like glass can still have a realistic reflection. See the .transmission property of MeshPhysicalMaterial, along with related properties like .thickness, .attenuationColor, .attenuationDistance, and others. The model also uses some careful choice of IOR and specular values, I believe.
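As a minimal sketch (the property values here are illustrative guesses, not the ones baked into the demo model), a glass-like material using these properties might look like:

```javascript
import * as THREE from 'three';

// Hypothetical glass-like material. Note that .transparent stays false:
// transmission is a separate mechanism from alpha blending.
const glass = new THREE.MeshPhysicalMaterial( {
	color: 0xffffff,
	transmission: 1.0,        // fully transmissive, unlike opacity-based fading
	roughness: 0.05,
	metalness: 0.0,
	ior: 1.5,                 // index of refraction typical of glass
	thickness: 0.5,           // refraction volume thickness, in world units
	attenuationColor: new THREE.Color( 0x88ccff ),
	attenuationDistance: 1.0  // how quickly light is absorbed inside the volume
} );
```

Transmission also depends on an environment map (scene.environment) to have something to refract and reflect.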

Transmission has different limitations than alpha blending, and is much more expensive.

A few other models use transmission as well.

Thanks for those inputs. Here are my comments:

My application renders an assembly of components. Each component is a collection of visually (not topologically) edge-connected three.js meshes arranged in a domain-sensible way. The need is for visually "clear" renderings: ones that convey the structure and inter-relationships among the components on screen. Real-time and photorealism are low priority. I am prepared to wait several seconds for a good rendering of this nature. Perhaps that opens up additional implementation choices?

So far I have been using very basic stuff in three.js: opacity values less than 1.0, the depthWrite setting, the transparent setting. I have a feeling there are more three.js settings and switches to use advantageously: alphaHash, alphaToCoverage, changing various blending settings from their defaults, etc. But I don't understand them well enough to do so.

I need a general solution - not one finetuned to a few pre-defined scenes. Thus, I don’t think renderOrder helps me.

I have been using double-sided materials (side: THREE.DoubleSide) throughout my work. Could that be creating a problem?

Here is a specific example: two components, green inside the red.

I make the outside component transparent so as to expose the inside component within.
The rendering has aberrations: some caused by Z-fighting among coincident faces, and others as indicated in the picture. I consider the Z-fighting problem to be due to "bad" modeling, but the others are due to the specific rendering three.js has produced with my current switches and settings.

A view-dependent solution is quite acceptable: meaning the rendering looks perfect in the rendered view, but degrades when the camera is moved.

Please suggest what more I can do to improve the rendering and make it more like that lovely glass cover in the transmission example.

Thanks much,

OLDMAN

Unfortunately I don’t think transmission is what you’ll want for nested transparency like that.

Alpha hashing (three.js examples) is definitely worth a try here, especially combined with TAA as in the example.
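A rough sketch of the alpha-hashing setup (values illustrative; the key point is that the material stays in the opaque pass, so no sorting or depthWrite tricks are needed):

```javascript
import * as THREE from 'three';

// Alpha hashing: a stochastic per-fragment coverage test instead of blending.
// The material is NOT marked transparent; opacity sets the target coverage.
const material = new THREE.MeshStandardMaterial( {
	color: 0xff0000,
	alphaHash: true,
	opacity: 0.5
} );
// The result is noisy/grainy by design; temporal anti-aliasing (e.g. the
// TAARenderPass addon, as used in the official demo) averages several
// camera-jittered frames to smooth it out.
```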

I do think alpha-blended transparency and render order would have a shot, though. If you have a lot of nesting, you could try manually sorting objects by the distance from the camera to their bounding box, rather than to their center as three.js usually does. That may help with nested objects.
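The manual-sorting idea can be sketched like this. I've used plain objects so the geometry math is explicit; with three.js you would build the box via new THREE.Box3().setFromObject( mesh ), use box.distanceToPoint( camera.position ), and write the result into mesh.renderOrder:

```javascript
// Squared distance from a point to an axis-aligned bounding box
// (0 if the point is inside the box).
function distSqToBox( p, box ) {
	let d = 0;
	for ( const axis of [ 'x', 'y', 'z' ] ) {
		const v = p[ axis ];
		if ( v < box.min[ axis ] ) d += ( box.min[ axis ] - v ) ** 2;
		else if ( v > box.max[ axis ] ) d += ( v - box.max[ axis ] ) ** 2;
	}
	return d;
}

// Assign renderOrder so farther boxes draw first (back-to-front),
// which is what alpha blending wants.
function sortByBoxDistance( objects, cameraPos ) {
	return objects
		.slice()
		.sort( ( a, b ) => distSqToBox( cameraPos, b.box ) - distSqToBox( cameraPos, a.box ) )
		.map( ( obj, i ) => ( { ...obj, renderOrder: i } ) );
}
```

For a nested pair, the inner (smaller) box is farther from the camera than the enclosing box's near face, so it gets drawn first, which is the behavior center-based sorting can get wrong.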

Thanks for the response.
Here are some questions on the published example/demo for alphaHash that you pointed out:

(1) I notice that alphaHash is a property of the base Material class. Therefore, I expect it would work with MeshStandardMaterial (as used in the demo) and also others like MeshPhongMaterial. Correct? If so, I would like to use MeshPhongMaterial for all my geometry (opaque and transparent). So I actually tried making this one-liner change to the demo code and the screen went blank. Is that because I have no lighting in the scene?

(2) I need alphaHash meshes to coexist with regular meshes in my scene. The former uses multi-pass rendering. The latter doesn’t. So how to make this work?

(3) What does sampleLevel mean in the demo? Is it the number of camera-jittered renderings that are done before composing the final image?

OLDMAN

  1. Yes, .alphaHash should work with any material type. If that’s not the case it could be a bug. Just switching to MeshPhongMaterial in the alpha hash demo broke it?

  2. .alphaHash doesn’t require multiple passes (transmission does). The post-processing effect TAA is just to reduce the noise/grain. That post-processing effect will not do any harm with non-alpha-hashed materials in the scene, it’s just another type of anti-aliasing.

  3. Exactly yes, higher numbers cost more but will (up to a point) reduce the grain further.

That’s right. All I did was replace

				material = new THREE.MeshStandardMaterial( {
					color: 0xffffff,
					alphaHash: params.alphaHash,
					opacity: params.alpha
				} );

with

				material = new THREE.MeshPhongMaterial( {
					color: 0xffffff,
					alphaHash: params.alphaHash,
					opacity: params.alpha
				} );

and that made the screen go blank.

OLDMAN

This demo relies on scene.environment (IBL) for lighting, which MeshPhongMaterial does not support. If you add other lights to the scene you’ll see the material.
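For example, assuming an existing scene, adding a couple of analytic lights (intensities here are illustrative) would make the Phong material visible:

```javascript
import * as THREE from 'three';

// MeshPhongMaterial ignores scene.environment (IBL), so add lights directly.
scene.add( new THREE.AmbientLight( 0xffffff, 0.3 ) );

const dirLight = new THREE.DirectionalLight( 0xffffff, 2.0 );
dirLight.position.set( 5, 10, 7 );
scene.add( dirLight );
```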