Convert custom shader to MeshBasicNodeMaterial

I’m rephrasing a previous post. What is required to convert this shader code to MeshBasicNodeMaterial? Is it a colorNode? I need this for WebGPU support and it’s incredibly difficult to figure out.

It seems I need to get the rgb of a texture sample, calculate the alpha value from it, then use a color uniform for the output color.
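Those steps are the standard MSDF text math: take the median of the three texture channels as the signed distance, turn that into an antialiased alpha, then tint with a uniform color. A minimal sketch of just the math in plain JS (not three.js API; `screenPxRange` is a stand-in for the GPU-only `fwidth()` derivative):

```javascript
// Minimal sketch (plain JS): the per-fragment MSDF alpha math.
// screenPxRange stands in for fwidth(), which has no CPU equivalent.
function msdfAlpha( r, g, b, screenPxRange, opacity ) {

	// median of the three channels = signed distance to the glyph edge
	const sigDist = Math.max( Math.min( r, g ), Math.min( Math.max( r, g ), b ) );

	// remap so 0.5 lands on the edge, then clamp to [0, 1]
	const alpha = Math.min( Math.max( ( sigDist - 0.5 ) / screenPxRange + 0.5, 0 ), 1 );

	return alpha * opacity;

}

console.log( msdfAlpha( 1, 1, 1, 0.1, 0.5 ) ); // 0.5 -> fully inside the glyph
console.log( msdfAlpha( 0, 0, 0, 0.1, 0.5 ) ); // 0   -> fully outside
```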

I’ve looked through all the examples but it’s not clear how to do it.

I may have figured something out, but it only works with WebGPURenderer; the color uniform breaks with WebGL. So I need duplicated code for both renderers. It would be nice if there were a clear example of how to create a shader function for some of the math, like in WebGL. I thought the point of tslFn was to support both renderers.

material = new MeshBasicNodeMaterial({ map: videoTexture, color: new THREE.Color( 0x0066ff ), opacity: 0.5 });

// console.log(uniform(material.color));

const colorShaderNode = tslFn( ( input ) => {

    const tex = texture( input.texture );
    const color = uniform( input.color );
    const opacity = uniform( input.opacity );

    // median of the three MSDF channels gives the signed distance
    const sigDist = max( min( tex.r, tex.g ), min( max( tex.r, tex.g ), tex.b ) );

    const alpha = clamp( sigDist.div( fwidth( sigDist ) ).add( 0.5 ), 0.0, 1.0 );

    return vec4( color, alpha.mul( opacity ) );

} );

const opacityShaderNode = tslFn( ( input ) => {

    const tex = texture( input.texture );
    const opacity = uniform( input.opacity );

    const sigDist = max( min( tex.r, tex.g ), min( max( tex.r, tex.g ), tex.b ) );

    const alpha = clamp( sigDist.div( fwidth( sigDist ) ).add( 0.5 ), 0.0, 1.0 );

    return alpha.mul( opacity );

} );

material.colorNode = colorShaderNode( { texture: videoTexture, color: material.color, opacity: material.opacity } );
//material.opacityNode = opacityShaderNode( { texture: videoTexture, color: material.color, opacity: material.opacity } );

Maybe there is some #define you can #ifdef off of?

What is the define for? I’m not sure this will convert back to WebGL; it broke on the color uniform for some reason. I’m trying to integrate it to get a demo going, so I might have to try WGSL instead and support both methods: one using ShaderMaterial, the other for WebGPU. The tricky thing is that WebGPU also falls back to WebGL internally.

@danrossi Are you referring to incompatibility with WebGLRenderer or WebGPURenderer over WebGLBackend?

It’s when configuring THREE.WebGLRenderer. So I guess it needs another method to set up WebGLRenderer using the backend? WebGLRenderer is still needed for WebXR support, for instance, as WebXR in WebGPU is still theoretical. So I can’t rely on falling back to that yet; I need to specify WebGL.

I’m still on a steep learning curve trying to figure out this new system, to set up a demo for the bmfont text rendering for now and see if it works. I think my TSL function may be doing what I need to replicate the shader.

I just realised the nodes system requires the WebGLBackend, which you can’t specify with WebGPURenderer. And it doesn’t have the same WebXR support as WebGLRenderer. Therefore it still requires two shader setups. But something like this can create a specific WebGLBackend:

class WebGLRenderer extends Renderer {

	constructor( parameters = {}, useWebGl = false ) {

		let BackendClass;

		if ( WebGPU.isAvailable() && ! useWebGl ) {

			BackendClass = WebGPUBackend;

		} else {

			BackendClass = WebGLBackend;

			console.warn( 'THREE.WebGPURenderer: WebGPU is not available, running under WebGL2 backend.' );

		}

		const backend = new BackendClass( parameters );

		//super( new Proxy( backend, debugHandler ) );
		super( backend );

		// flags can only be set after super()
		this.isWebGPURenderer = ( BackendClass === WebGPUBackend );
		this.isWebGLRenderer = ( BackendClass === WebGLBackend );

	}

}
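The selection branch in the subclass above, extracted as a plain-JS sketch so it can be sanity-checked without three.js (hypothetical helper name; `webgpuAvailable` stands in for `WebGPU.isAvailable()`):

```javascript
// Minimal sketch (plain JS, hypothetical helper): the backend-selection
// logic from the subclass above, free of three.js imports.
function pickBackend( webgpuAvailable, forceWebGL ) {

	return ( webgpuAvailable && ! forceWebGL ) ? 'WebGPUBackend' : 'WebGLBackend';

}

console.log( pickBackend( true, false ) );  // 'WebGPUBackend'
console.log( pickBackend( true, true ) );   // 'WebGLBackend' -> forced, e.g. for WebXR
console.log( pickBackend( false, false ) ); // 'WebGLBackend' -> no WebGPU support
```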
Setting up separate shaders should work for now, until the API is finalised, by checking for renderer.isWebGPURenderer. It seems WebGPU now doesn’t like the indices setup for the custom geometry, so I need to figure that out.

Index range (first: 0, count: 192, format: IndexFormat::Uint16) does not fit in index buffer size (204).
- While encoding [RenderPassEncoder].DrawIndexed(192, 1, 0, 0, 0).

I fixed that problem; a drawRange was being set that caused it. I found that when trying to use WebGLBackend directly, it doesn’t render the same as WebGLRenderer. Using WebGPU, the text and texture are all skewed, which seems to be custom geometry uv/indices related, so it’s still a porting issue. If I can get it working in WebGLBackend it should hopefully work in WebGPU.
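For anyone hitting the same validation error: WebGPU checks that the requested index range fits in the bound index buffer, which a stale drawRange can easily violate. A minimal sketch (plain JS, hypothetical helper) of the arithmetic behind the error message above:

```javascript
// Minimal sketch (plain JS, hypothetical helper): WebGPU's index-range
// validation. A drawRange larger than the bound buffer triggers the error.
function indexRangeFits( indexCount, bufferByteLength, bytesPerIndex = 2 /* Uint16 */ ) {

	return indexCount * bytesPerIndex <= bufferByteLength;

}

// The error above: 192 indices * 2 bytes = 384 bytes, but the buffer is 204 bytes.
console.log( indexRangeFits( 192, 204 ) ); // false -> validation error
console.log( indexRangeFits( 102, 204 ) ); // true  -> 204 / 2 = 102 indices fit
```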

Yes, your shader would be something for the colorNode. I recreated your code here in wgsl. If you have a WebGPU compatible browser you will be able to see something. I don’t yet know how I can activate transparency in a MeshBasicNodeMaterial.

In any case, the wgsl shader is running and you can adjust the opacity. But as I see, you’ve already figured the whole thing out yourself and recreated it in tsl

Thanks, I’ll have a look. I worked out that I need to do the color and opacity separately, as it tries to transform opacity afterwards with a uniform I don’t recognise. I had to debug the generated WebGL shader code to figure it out.

However, rendering with a forced WebGL backend is correct, while with WebGPU the textures and text are skewed. I think it doesn’t like something in the geometry code; not sure.

This is what WebGPU generates:

// Three.js r157dev - NodeMaterial System

// uniforms
@binding( 1 ) @group( 0 ) var nodeUniform2_sampler : sampler;
@binding( 2 ) @group( 0 ) var nodeUniform2 : texture_2d<f32>;
struct NodeUniformsStruct {
	nodeUniform1 : vec3<f32>,
	nodeUniform3 : mat3x3<f32>,
	nodeUniform4 : f32,
	nodeUniform5 : f32
};
@binding( 3 ) @group( 0 )
var<uniform> NodeUniforms : NodeUniformsStruct;

// structs

// codes

fn threejs_lessThanEqual( a : vec3<f32>, b : vec3<f32> ) -> vec3<bool> {

	return vec3<bool>( a.x <= b.x, a.y <= b.y, a.z <= b.z );

}

fn main( @location( 0 ) nodeVarying0 : vec3<f32>,
	@location( 2 ) nodeVarying2 : vec2<f32> ) -> @location( 0 ) vec4<f32> {

	// vars

	var TransformedNormalView : vec3<f32>;
	var DiffuseColor : vec4<f32>;
	var nodeVar2 : vec4<f32>;
	var nodeVar3 : f32;
	var Output : vec4<f32>;
	var nodeVar5 : vec4<f32>;
	var nodeVar6 : vec4<f32>;

	// flow
	// code

	TransformedNormalView = normalize( nodeVarying0 );
	DiffuseColor = vec4<f32>( NodeUniforms.nodeUniform1, 1.0 );
	nodeVar2 = textureSample( nodeUniform2, nodeUniform2_sampler, ( NodeUniforms.nodeUniform3 * vec3<f32>( nodeVarying2, 1.0 ) ).xy );
	nodeVar3 = ( max( min( nodeVar2.x, nodeVar2.y ), min( max( nodeVar2.x, nodeVar2.y ), nodeVar2.z ) ) - 0.5 );
	DiffuseColor.w = ( DiffuseColor.w * ( clamp( ( ( nodeVar3 / fwidth( nodeVar3 ) ) + 0.5 ), 0.0, 1.0 ) * NodeUniforms.nodeUniform4 ) );

	if ( ( DiffuseColor.w <= NodeUniforms.nodeUniform5 ) ) {

		discard;

	}

	nodeVar5 = vec4<f32>( DiffuseColor.xyz, DiffuseColor.w );
	nodeVar6 = vec4<f32>( mix( ( ( pow( nodeVar5.xyz, vec3<f32>( 0.41666 ) ) * vec3<f32>( 1.055 ) ) - vec3<f32>( 0.055 ) ), ( nodeVar5.xyz * vec3<f32>( 12.92 ) ), vec3<f32>( threejs_lessThanEqual( nodeVar5.xyz, vec3<f32>( 0.0031308 ) ) ) ), nodeVar5.w );
	Output = nodeVar6;

	// result
	return nodeVar6;

}

I have WebGPU rendering SDF text finally, so tslFn supports both backends, but RawShaderMaterial is still needed with WebGLRenderer for WebXR support. For WebGPU, even though the texture has flipY set, I can’t compensate for flipY in the geometry uv; that only works for WebGL, and I’m not sure what is different there. Flipping the uv skews the texture in WebGPU, and if I disable the uv flip for the WebGL backend, the texture is skewed there the same way.
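For reference, the uv-side flipY workaround discussed above amounts to inverting every V coordinate by hand. A minimal sketch (plain JS, hypothetical helper) of that transform on a flat uv array like the one stored in a BufferAttribute:

```javascript
// Minimal sketch (plain JS, hypothetical helper): invert the V coordinate
// of a flat [u0, v0, u1, v1, ...] uv array, i.e. a manual flipY.
function flipUvsV( uvs ) {

	const out = uvs.slice();

	for ( let i = 1; i < out.length; i += 2 ) {

		out[ i ] = 1 - out[ i ];

	}

	return out;

}

console.log( flipUvsV( [ 0, 0, 1, 0, 1, 1 ] ) ); // [ 0, 1, 1, 1, 1, 0 ]
```

Whether this flip is needed depends on the backend: texture.flipY is applied on upload in WebGL, while WebGPU copies the image without flipping, which is why the same uvs can look correct on one backend and skewed on the other.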