Crossfade two materials on a single mesh?

What’s the easiest way to crossfade between two materials on a mesh?

For example, suppose I want to crossfade between MeshPhysicalMaterial and MeshNormalMaterial, or between MeshPhysicalMaterial and MeshPhongMaterial, or even between MeshPhysicalMaterial and a custom ShaderMaterial.

You can modify a material. :thinking:
Specifically for the crossfade to MeshNormalMaterial, only minor changes are needed.

Demo: https://codepen.io/prisoner849/full/MWMBGMK
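For reference, a minimal sketch of the idea (assumptions: this is not the demo’s exact code, uMix is a made-up uniform name, and a recent three.js where #include <dithering_fragment> is the last line of main()) - patching a MeshPhysicalMaterial via onBeforeCompile so its lit result can fade toward the MeshNormalMaterial look:

const material = new THREE.MeshPhysicalMaterial({ color: 'red' });

material.onBeforeCompile = (shader) => {
    shader.uniforms.uMix = { value: 0.5 }; // 0 = fully physical, 1 = fully normal-colored

    shader.fragmentShader = shader.fragmentShader
        .replace('void main() {', 'uniform float uMix;\nvoid main() {')
        .replace(
            '#include <dithering_fragment>',
            [
                '#include <dithering_fragment>',
                '// The color MeshNormalMaterial would output: the packed view-space normal.',
                'vec3 normalColor = normalize( vNormal ) * 0.5 + 0.5;',
                'gl_FragColor.rgb = mix( gl_FragColor.rgb, normalColor, uMix );',
            ].join('\n')
        );

    material.userData.shader = shader; // keep a handle so uMix can be animated later
};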

4 Likes

Without TSL there’s not much automation that can be done there. To blend two entirely different types of materials, you’d have to manually copy the code of each into a single shader and combine them by hand - things overlap in way too many places, and there’s no #ifdef / #ifndef anywhere in the chunks, like here for example - so you’d get redefinition errors every 10-20 lines if you merged the shaders blindly, unfortunately :smiling_face_with_tear:


But as an alternative, if you’re not going to do it for every single mesh in the scene, you can do a UV-cast onto a plane for each material and blend the results - example. That way you can skip the shader merging entirely:

Lines 50-75 - you’d need a simple wrapper like that for every material type you’d be blending. Each FBO holds a light-accurate final rendering of the mesh using a different material - all information included: normals, roughness, alpha, etc. The position buffer in the mesh is unwrapped to match the UVs (technically you should even be able to do that on the CPU without a custom shader at all - just un-index all child geometries and swap the position / UV buffers).
Then all you need to do is blend the two FBO textures in a 3-LoC blend shader (lines 30-44).
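To illustrate just the blend step, a rough sketch (rtA and rtB are assumed WebGLRenderTargets already holding the two UV-unwrapped renders - the names are made up here, not taken from the example):

const blendMaterial = new THREE.ShaderMaterial({
    uniforms: {
        texA: { value: rtA.texture },
        texB: { value: rtB.texture },
        mixAmount: { value: 0.5 }, // 0 = all texA, 1 = all texB
    },
    vertexShader: `
        varying vec2 vUv;
        void main() {
            vUv = uv;
            gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );
        }
    `,
    fragmentShader: `
        uniform sampler2D texA, texB;
        uniform float mixAmount;
        varying vec2 vUv;
        void main() {
            gl_FragColor = mix( texture2D( texA, vUv ), texture2D( texB, vUv ), mixAmount );
        }
    `,
});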

3 Likes

Ah yeah, MeshNormalMaterial was too easy. I suppose crossfading physical and phong like that would be more of a challenge.

I’m imagining node materials make this a lot easier, but the system I’m working with is currently all built with the classic materials on WebGL.

That’s interesting. It is really slow on my phone after a short while (fast at first). Does it work with any arbitrary geometry? I’m not familiar with UV unwrapping yet.

let firstMaterial = new THREE.MeshStandardMaterial({
    color: 'red',
});
let secondMaterial = new THREE.MeshBasicMaterial({
    color: 'blue',
    transparent: true,           // Need this to blend with the material behind...
    depthFunc: THREE.EqualDepth, // Need this or the depth test will reject the second material
    opacity: 0.5,                // Fade 50% between the materials...
});
let thirdMaterial = new THREE.MeshPhongMaterial({
    color: 'green',
    transparent: true,           // Need this to blend with the material behind...
    depthFunc: THREE.EqualDepth, // Need this or the depth test will reject the second material
    opacity: 0.5,                // Fade 50% between the materials...
});
let bx = new THREE.Mesh(new THREE.SphereGeometry(16, 16, 16));
scene.add(bx);
// One group per material, each spanning the whole geometry,
// so the mesh is drawn once per material in the array.
bx.geometry.addGroup(0, Infinity, 0);
bx.geometry.addGroup(0, Infinity, 1);
bx.geometry.addGroup(0, Infinity, 2);
bx.material = [firstMaterial, secondMaterial, thirdMaterial];
setInterval(() => {
    bx.material[(Math.random() * 3) | 0].opacity = Math.random();
});

You should be able to crossfade any number/types of materials this way.
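For an actual crossfade between two of the materials (instead of the random flicker above), you’d just drive one opacity from the render loop - a trivial sketch, assuming the variables above plus your own renderer / scene / camera:

function setCrossfade(t) {
    // 0 = all firstMaterial, 1 = all secondMaterial
    secondMaterial.opacity = t;
}

// e.g. ping-pong over time:
renderer.setAnimationLoop((timeMs) => {
    setCrossfade((Math.sin(timeMs / 1000) + 1) / 2);
    renderer.render(scene, camera);
});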

5 Likes

That’s interesting. That’s almost the same as drawing three separate meshes positioned in exactly the same place, right? The materials array on that “single mesh” becomes three separate draw calls, one per material, last I checked - as if drawing one mesh per material.

Although that’ll be fine for many cases, I’m keen to make it happen in a single draw call (single material) for the case where I have lots of objects (e.g. with 30 objects, 30 draw calls would be better than 90).

Also, something tells me that the depthFunc and transparent requirements are going to further reduce the cases where this works well, and will otherwise cause unexpected sorting/transparency issues in the overall scene.

Here’s an example of the same method with separate meshes - something like the following (untested, but it will have the same issues):

let firstMaterial = new THREE.MeshStandardMaterial({
    color: 'red',
});
let secondMaterial = new THREE.MeshBasicMaterial({
    color: 'blue',
    transparent: true,           // Need this to blend with the material behind...
    depthFunc: THREE.EqualDepth, // Need this or the depth test will reject the second material
    opacity: 0.5,                // Fade 50% between the materials...
});
let thirdMaterial = new THREE.MeshPhongMaterial({
    color: 'green',
    transparent: true,           // Need this to blend with the material behind...
    depthFunc: THREE.EqualDepth, // Need this or the depth test will reject the second material
    opacity: 0.5,                // Fade 50% between the materials...
});

const geom = new THREE.SphereGeometry(16, 16, 16);

let bx1 = new THREE.Mesh(geom, firstMaterial);
let bx2 = new THREE.Mesh(geom, secondMaterial);
let bx3 = new THREE.Mesh(geom, thirdMaterial);

scene.add(bx1, bx2, bx3);

const bxs = [bx1, bx2, bx3];

setInterval(() => {
    bxs[(Math.random() * 3) | 0].material.opacity = Math.random();
});

I’m starting to come to the conclusion that migrating from these “legacy” materials to TSL materials is the key to future possibilities, including blending effects together (basically what I’m asking about in this thread) without introducing sorting/transparency issues.

With TSL, effects can be blended within a single material, regardless of the material’s opacity, depth testing, transparency, etc. The resulting effect can effectively become the color of the material without affecting anything else.

I believe my goal now is to port my libs and frameworks to TSL, then move forward without using the “classic” (or “legacy”) materials.

depthFunc = EqualDepth - it’s actually a pretty common and clean approach to material blending.

You can’t always predict which material combos you will need (when precompiling shaders), and the depth func has no major side effects if you think about it. depth=Equal is fast and precise, and there is ~zero overhead if the variant materials aren’t in the .material array. It’s also more efficient than a cloned mesh, since the draw calls happen right after each other - no overhead of computing new sets of matrices etc., only the material params themselves need to be re-bound. Also, since they are still separate materials, the handling of transparent vs. opaque is done for you by the engine.
This is something you can actually lose when mashing materials together. For instance, rendering an oily sheen on top of a standard material: the StandardMaterial is still “opaque” as far as the rendering pipeline is concerned - it’s just your oil sheen that is transparent.
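A hypothetical sketch of the “~zero overhead when unused” point (helper names made up here): keep the mesh single-material normally and only switch to the material-array form while a blend is active:

function beginCrossfade(mesh, variantMaterial) {
    const base = mesh.material;
    // Two groups spanning the whole geometry: one draw call per material.
    mesh.geometry.addGroup(0, Infinity, 0);
    mesh.geometry.addGroup(0, Infinity, 1);
    mesh.material = [base, variantMaterial];
}

function endCrossfade(mesh) {
    const base = mesh.material[0];
    mesh.geometry.clearGroups();
    mesh.material = base;
}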

Precompiling your variations into the shader, on the other hand, can incur a branching penalty if you’re not always using the variations, and it increases the complexity of material handling, since client code needs to Know that you’re doing weird stuff to the materials themselves and how to re-parameterize them (unless you create custom wrappers). You also lose out on material settings - i.e. your variations will all have the same parameters as the first material (color/roughness/opacity etc.), especially since three.js does a Lot of stuff behind the scenes to support the primary named material params, like color (+ colorspace conversion) and .map handling (assigning UV channels, texture rotation params, etc.).

If you “solve” this complexity, you may end up with some funky material with .variant1color, .variant2color params etc., the imposed mental burden of “knowing” which params belong to which variant, and you also give up external control over layering order.

There are definitely cases where you do want to extend the material to augment its functionality… but you want to consider all these caveats when deciding whether it’s appropriate.
(I think the scenario where it’s most appropriate is extending to add new global behaviors.)

Another pitfall of depthFunc == Equal is that the materials need to use the exact same math to transform vertices, or Equal depth won’t work and you’ll get z-fighting.
I ran into this in older versions of three.js that used slightly different vertex transforms… like
modelView * projection * vertex in phong, vs.
model * view * projection * vertex in standard
(which yields very slightly different results due to the precision being shifted around), so that’s something to be aware of as well.
Other downsides include… not being able to export multi-material slots in most formats - most formats assume a single material per mesh. But then you’re also not going to be exporting your custom mashed material either.

TL;DR: I hack materials via onBeforeCompile all the time - it’s totally my jam… but if I were trying to future-proof something, or extending the three.js multiverse, I would consider all these aspects.

1 Like

Yeah, I get a lot of that, but I know from experience that flipping transparent on something that shouldn’t be transparent can easily break an app. That option alone is fragile, let alone the others.

I’m willing to take a perf hit first to see what can be achieved visually without risking visual bugs, and then decide whether it’s within budget or not. I think TSL will really help here: make effects, see if they’re in budget, then optimize or swap in alternatives if needed.

But anywho, I’m new to TSL. The first time I tried to import it I ran into some difficulty. Once I’m past that, time for some fun!

1 Like

Ok, I think I see how to do it with TSL, but it isn’t quite working yet. Basically this:

const material = new THREE.MeshPhysicalNodeMaterial({
    metalness: 0.7,
    roughness: 0.1,
    envMap: new THREE.TextureLoader().load('https://assets.codepen.io/191583/iss-interior-equirect.jpg'),
});
const material2 = new THREE.MeshPhongNodeMaterial({ color: 'cornflowerblue' });

material.fragmentNode =
    material.fragmentNode.mul(0.5).add(material2.fragmentNode.mul(0.5));

Seems super simple! But the thing is, fragmentNode is null by default.

So the question is, how do we get the fragmentNodes to be populated by the default implementations first?

Once we do that, mixing them should be super simple!

1 Like

I see that after material.setup(builder) is called, all the nodes (e.g. .fragmentNode) are still null.

Looks like the node properties are user inputs that three.js does not set, which makes it possible to easily override any of them from outside. If only we could get the fragment color node from one material, then we could override the fragment on another material…

Do you know how, by any chance, @sunag?

I think I’m close, but no luck yet. Here we monkey-patch setupOutput on two materials to get their output nodes; however, no change happens in the rendered output when I try to blend the nodes together:

const material2 = new THREE.MeshPhongNodeMaterial({ color: 'cornflowerblue' });
let material2OutputNode;
{
    const setupOutput = material2.setupOutput;

    material2.setupOutput = function (builder, outputNode) {
        outputNode = setupOutput.call(this, builder, outputNode);
        material2OutputNode = outputNode; // stash material2's output node for material1 to use
        console.log('setupOutput 2', outputNode);
        return outputNode;
    };

    // Zero-size mesh, added only so material2 gets compiled (and setupOutput runs).
    scene.add(new THREE.Mesh(new THREE.BoxGeometry(0, 0, 0), material2));
}

const material1 = new THREE.MeshPhysicalNodeMaterial({
    metalness: 0.7,
    roughness: 0.1,
    envMap: new THREE.TextureLoader().load('https://assets.codepen.io/191583/iss-interior-equirect.jpg'),
});
{
    const setupOutput = material1.setupOutput;

    material1.setupOutput = function (builder, outputNode) {
        outputNode = setupOutput.call(this, builder, outputNode);
        console.log('setupOutput 1', outputNode.mul);
        outputNode = outputNode.mul(0.1).add(material2OutputNode.mul(0.9));
        return outputNode;
    };
}

const geometry = new THREE.BoxGeometry(120, 120, 120);
const mesh = new THREE.Mesh(geometry, material1);
scene.add(mesh);

This pen shows what I have so far; only material1’s output is visible, but I expected it to be closer to material2:

Here’s an example that shows that replacing the outputNode with a vec4 makes it have a flat color:

	material1.setupOutput = function(builder, outputNode) {
		outputNode = setupOutput.call(this, builder, outputNode)
		outputNode = THREE.vec4(THREE.color(0xff6600), 1) // REPLACE
		return outputNode
	}

So the merging of the two outputNodes is what isn’t working yet. Hmmm.

I don’t think this will be easy without two draw calls. A node is not exactly a material - the material prepares the call flow to execute the nodes at the input - so if you create a node that reproduces the lighting model, this will work correctly.

I’ve created a simple example below where I blend one material with the other using overlay(); there’s also a commented-out line using a simple mix - you could use any blend mode here.

This approach prioritizes effect over performance; if you’re thinking of adding a lot of objects with this effect, I’d suggest something else.

The PhongMaterial is the gradient part, and the checker is in the BasicMaterial.

import { vec4, overlay, output, viewportSharedTexture, color, checker, uv, mix } from 'three/tsl';

const sphereGeo = new THREE.SphereGeometry();
sphereGeo.groups[ 0 ] = { start: 0, count: Infinity, materialIndex: 0 };
sphereGeo.groups[ 1 ] = { start: 0, count: Infinity, materialIndex: 1 };

const material1 = new THREE.MeshBasicNodeMaterial();
material1.colorNode = checker( uv().mul( 50 ) ).mul( color( 0x00ff00 ) ); // basic checker texture effect

const material2 = new THREE.MeshPhongNodeMaterial();
// material2.outputNode = mix( output, viewportSharedTexture(), 0.5 ); // simple mix alternative
material2.outputNode = vec4( overlay( output, viewportSharedTexture() ) );

const mesh = new THREE.Mesh( sphereGeo, [ material1, material2 ] );
mesh.position.set( 0, 1, 0 );
scene.add( mesh );

2 Likes

Hey, thanks for that sample. What does the viewportSharedTexture node do? How exactly does it result in blending the two materials? Does this have the same transparency issues (like rendering two objects instead of one, with transparency)?

I’m new to TSL and node materials, but what I was imagining is that I would be able to connect two nodes together with a mix node to create the new color. E.g. in my mind I was visualizing something like the following (imagine a node-based shader editor UI), with the result being a single graph (single material, single draw call):

"Material" 1:

             GPU
              |
              |
          outputNode
              /\   <------- e.g. combined with mix(), mul(), etc.
             /  \
            /    \
 SomeColorNode  OtherColorNode




"Material" 2:

             GPU
              |
              |
          outputNode
              /\
             /  \
            /    \
 SomeColorNode  OtherColorNode




 New "Material" by connecting nodes together:


                             GPU
                              |
                        newOutputNode
                              |             <------ e.g. using mix() here
                              |
              +---------------+------------------+
              |                                  |
              |                                  |
          outputNode                         outputNode
              /\                                 /\
             /  \                               /  \
            /    \                             /    \
 SomeColorNode  OtherColorNode      SomeColorNode  OtherColorNode

It seems like that’s what the power of nodes is for. If that’s not possible yet, what might be needed? Maybe I can help once I get more familiar.
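(For what it’s worth, within a single NodeMaterial this kind of wiring does already seem to work at the colorNode level - a speculative sketch using the same 'three/tsl' imports as above; note it mixes two color sub-graphs, not two complete lighting models:)

import { mix, color, checker, uv, uniform } from 'three/tsl';

const fade = uniform( 0.5 ); // 0 = first color graph, 1 = second

const material = new THREE.MeshPhysicalNodeMaterial();
material.colorNode = mix(
    color( 0xff0000 ),                                  // "SomeColorNode"
    checker( uv().mul( 10 ) ).mul( color( 0x0000ff ) ), // "OtherColorNode"
    fade
);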