How do I need to modify my WGSL compute shader to use it with the node system?

I want to create and save a texture with a compute shader. Then I want to load this texture into another shader.

First I wrote a compute shader and a fragment shader in plain WGSL:

// compute shader
@group(0) @binding(0) var outputTexture: texture_storage_2d<rgba8unorm, write>;

@compute
@workgroup_size(8, 8, 1)
fn computeTexture(@builtin(global_invocation_id) id: vec3<u32>) {

    // one invocation per texel; the texture is assumed to be 512 x 512,
    // hence the normalization by 511, the largest texel index
    var texelCoord: vec2<u32> = id.xy;
    var uv: vec2<f32> = vec2<f32>(f32(texelCoord.x) / 511.0, f32(texelCoord.y) / 511.0);
    var color: vec4<f32> = vec4<f32>(uv.x, uv.y, 0.5, 1.0);

    textureStore(outputTexture, texelCoord, color);
}
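
For scale: with workgroup_size(8, 8, 1) and a 512 x 512 texture, the dispatch works out to 64 x 64 workgroups, one invocation per texel. In raw WebGPU (outside the node system) that step would look roughly like this; computePipeline and bindGroup are assumed to be set up elsewhere:

// raw WebGPU sketch, not node-system code
const pass = commandEncoder.beginComputePass();
pass.setPipeline( computePipeline );
pass.setBindGroup( 0, bindGroup );
pass.dispatchWorkgroups( 512 / 8, 512 / 8, 1 ); // 64 x 64 workgroups
pass.end();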


// fragment shader
@group(0) @binding(0) var inputTexture: texture_2d<f32>;
@group(0) @binding(1) var textureSampler: sampler;

@fragment
fn main(@location(0) uv: vec2<f32>) -> @location(0) vec4<f32> {
    return textureSample(inputTexture, textureSampler, uv);
}
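
The connection between the two shaders is ultimately just one texture that the compute pass writes and the fragment shader samples. In raw WebGPU it would need both usage flags; a minimal sketch (device assumed to exist):

// raw WebGPU sketch: the shared texture needs storage AND sampling usage
const sharedTexture = device.createTexture( {
	size: [ 512, 512 ],
	format: 'rgba8unorm',
	usage: GPUTextureUsage.STORAGE_BINDING | GPUTextureUsage.TEXTURE_BINDING
} );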

I understand how WGSL shaders are used together with the colorNode, because there is a nice example among the WebGPU materials examples. I’ve modified it a bit here:

// imports (r155): texture, uv and wgslFn come from 'three/nodes'
const textureLoader = new THREE.TextureLoader();
const uvTexture = textureLoader.load( './textures/uv_grid_opengl.jpg' );
const textureNode = texture( uvTexture );

const textureSampleParams = {
	inputTexture: textureNode,
	textureSampler: textureNode,	// the texture node is passed for the sampler too, as in the materials example
	uv: uv()
};

const getWGSLTextureSample = wgslFn(`
	fn getWGSLTextureSample( 
		inputTexture: texture_2d<f32>, 
		textureSampler: sampler, 
		uv: vec2<f32> 
	) -> vec4<f32> {

		return textureSample( inputTexture, textureSampler, uv );
	}
`);

const material = new MeshBasicNodeMaterial();
material.colorNode = getWGSLTextureSample(textureSampleParams);

The analogies to my pure WGSL fragment shader are clearly recognizable, as are the differences. Now I want to replace the loaded texture and the textureNode with the output of a compute shader.

From what I see in the WebGPU compute example, I assume that I need the shaderNode and the storage node for my compute shader. But I still have no idea how to use them for my purpose.
Does anyone know more than I do and can show how to create a texture with a compute shader, store it, and load it into another shader?

In WebGL2, textures like this always have to be rendered out via render targets. If I need six render targets because each shader depends on the result of the previous one, that consumes a lot of resources. I now want to do this in WebGPU with six compute shaders.
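
To make the WebGL2 comparison concrete, this is the kind of render target chain I mean (passes and quadScene are placeholder names; each material is a ShaderMaterial with an inputTexture uniform):

// WebGL2-style chain: each pass samples the previous pass's render target
let input = null;

for ( const { material, target } of passes ) { // six materials, six render targets

	material.uniforms.inputTexture.value = input;
	renderer.setRenderTarget( target );
	renderer.render( quadScene, camera );
	input = target.texture;
}

renderer.setRenderTarget( null );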

Update 08/23/2023:
I made a copy of the WebGPU compute example on CodePen (r155) to experiment with.

Thanks to sunag I now know how to set the workgroup_size. Is one of the node system developers around who can tell me whether my idea for the compute shader is on the right track?

In addition, I still have no idea what the initialization of the outputTexture could look like. The storage node will certainly be important.

const WGSLTextureStore = wgslFn(`
	fn WGSLTextureStore(
		outputTexture: texture_storage_2d<rgba8unorm, write>,
		@builtin(global_invocation_id) id: vec3<u32>
	) {	// no return type: textureStore writes straight into the texture

		var texelCoord: vec2<u32> = id.xy;
		var uv: vec2<f32> = vec2<f32>(f32(texelCoord.x) / 511.0, f32(texelCoord.y) / 511.0);
		var color: vec4<f32> = vec4<f32>(uv.x, uv.y, 0.5, 1.0);

		textureStore(outputTexture, texelCoord, color);
	}
`);
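
A related open point: how does the node system feed the global_invocation_id builtin into a wgslFn parameter? In the WebGPU compute example, instanceIndex seems to play that role, so my assumption is that the texel coordinate could also be derived from a linear index on the node side:

// assumption, based on the compute example's instanceIndex:
// derive the texel coordinate from a linear invocation index
const posX = instanceIndex.remainder( 512 );
const posY = instanceIndex.div( 512 );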


// compute params
const outputTexture = ??? ;	// the open question: how do I initialize this?
const workgroup_size = [ 8, 8, 1 ];

const computeShaderNode = new ShaderNode( () => {

	WGSLTextureStore( { outputTexture } );
} );

// compute (my assumption: the first argument is the invocation count,
// i.e. one invocation per texel of the 512 x 512 texture)
computeNode = computeShaderNode.compute( 512 * 512, workgroup_size );
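
Once the outputTexture question is answered, I would expect the rest of the chain to look like this; renderer.compute comes from the WebGPU compute example, while wrapping outputTexture in a texture() node for sampling is my assumption:

// dispatch the compute pass (as in the WebGPU compute example)
renderer.compute( computeNode );

// then sample the texture the compute shader wrote, reusing the
// wgslFn fragment function from above
const material = new MeshBasicNodeMaterial();
material.colorNode = getWGSLTextureSample( {
	inputTexture: texture( outputTexture ),
	textureSampler: texture( outputTexture ),
	uv: uv()
} );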