I’m looking for functionality analogous to out/in from GLSL shaders, but for WGSL shaders. In pure WGSL you can do this with structs. I need something like this to be able to send variables from my positionNode WGSL shader to the colorNode WGSL shader. I have displaced vertex positions in my positionNode, and I need exactly these displaced vertex positions in the colorNode WGSL shader.
However, I don’t yet know how this works in conjunction with the node system.
Position transformations are represented in the positionLocal node. Maybe material.colorNode = positionLocal can solve this?
Thanks sunag, I’ll do that this evening. Here now with more surface structure.
The positionLocal sounds good. Does this correspond to the vertex positions that the positionNode outputs? If so, it should work and I can activate the morphing again without disturbing the textures.
The wave generator now works perfectly. Let’s see if I can upload a first version to Github in December.
I must correct myself: unfortunately, the positionLocal node cannot help. Why not? The vertex shader, i.e. the wgslFn that is passed to the positionNode, outputs the displaced vertices, and that is correct because this creates the waves in the wireframe. In the wgslFn that is assigned to the colorNode I also use attribute(position), but these are the undisplaced positions of the wireframe. This is also correct at first, because the normal texture, which is created together with the displacement texture at every interval, brings the movement of the normal vectors into play in the colorNode shader. I generate the normal vectors directly and precisely from the wave equations through derivatives and save them at the same time as the displacement textures. I load the displacement textures into the positionNode wgslFn and the normal textures into the colorNode wgslFn. Since the normal textures provide the movement of the normal vectors, I also have to use attribute(position); so far so good. In the vertex shader I perform two different shifts: first the LOD morphing for perfectly seamless LOD transitions, and only then do I displace the vertices with the displacement textures. The positionNode needs the end result of both, the morphed and displaced vertices.
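Schematically, the two shifts in my positionNode wgslFn look something like this (heavily simplified; the names, the texture size and the morph step are only placeholders):
fn mainVertex(
    position: vec3<f32>,
    displacementTexture: texture_2d<f32>,
    index: u32
) -> vec3<f32> {
    // 1) LOD morphing for seamless LOD transitions (details omitted)
    var morphedPosition = position; // ...morph towards the coarser LOD here
    // 2) Displacement with the precomputed displacement texture
    let texel = vec2<i32>(i32(index % 512u), i32(index / 512u));
    let displacement = textureLoad(displacementTexture, texel, 0).xyz;
    // the positionNode needs the morphed AND displaced result
    return morphedPosition + displacement;
}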
But the colorNode only needs the morphed vertices, not the morphed and displaced vertices, which would lead to ugly errors. I checked node.js to see if there was a varying node, but I suspect that the one I see is not intended for WGSL.
To be absolutely sure, in my WebGL2 test version I sent the unmorphed positions to the fragment shader while I morphed in the vertex shader, and I was able to reproduce exactly the same error as in WebGPU. I didn’t expect anything different, but it doesn’t hurt to test it to be sure that this is exactly the cause. The reason for the display error on the screen is that if I morph but use the unmorphed positions from attribute(position), the fragment interpolations are wrong. That’s why I deliberately deactivated morphing, which then leads to the gaps in the LODs.
This means I can now at least work on the ocean surface. The IFFT system works very efficiently.
I was thinking about how I could solve this elementary problem of not being able to send parameters from the positionNode shader (vertex shader) to the colorNode shader (fragment shader).
A varying node would then have to create a structure automatically. If there are several varyings, the second one would see that a structure has already been created by the first varying and simply add its own member. Roughly like this:
//VertexOutput struct (not visible) is created by the first varying node that is used for this shader in 3js.
struct VertexOutput {
	@builtin(position) position: vec4<f32>, //always there (default); the positionNode always sees this
	@location(0) vPosition: vec4<f32>, //this would then be added by the varying node
}
//PositionNode shader
wgslFn(`fn mainVertex(position: vec4<f32>) -> VertexOutput {
	var output: VertexOutput;
	output.position = position; //or modified position
	output.vPosition = position; //or modified position
	return output;
}`);
//ColorNode shader
wgslFn(`fn mainFragment(fragmentInput: VertexOutput) -> vec4<f32> {
	return vec4<f32>(abs(normalize(fragmentInput.vPosition.xyz)), 1.0); //just something
}`);
Now that was quite a lot. But I don’t want to just say something isn’t working. If my idea doesn’t make sense, you can say so directly.
Have you tried using varying( myNode )? In this case myNode will be built and processed in the vertex stage, i.e.:
const myFn = wgslFn( `fn myFunction( position: vec3f ) -> vec3f {
	return position;
}` );
const myVertexNode = varying( myFn( positionGeometry ) );
//material.positionNode = myVertexNode;
material.colorNode = myVertexNode;
The WGSL myFunction will only be created in the vertex stage, and its result will be passed in a varying that can be used in other inputs such as colorNode.
Continuing…
This could be done at a lower level, but I don’t remember creating this feature. Maybe something like:
(draft)
const myVarying = varyingProperty( 'vec3', 'myVarying' );
const vertexFn = wgslFn( `fn mainVertex ( ... ) -> ... {
NodeVaryingsStruct.myVarying = vec3f( 0.0 );
} `, [ myVarying ] );
material.positionNode = vertexFn();
material.colorNode = myVarying;
There is something similar in Line2NodeMaterial, but I believe I will revise it to something closer to this soon.
No, I admit that I haven’t tried it because I didn’t know about it before. I tried your second example because it seems the most obvious to me. I naively tried to integrate a varying and a property node via my params, just like I do with the uniform, attribute and texture nodes. That also works, but I don’t know yet whether it is correct, because my attempts to use it within the shader have resulted in the shader no longer working. I imagined that I would pass it to the shader as a parameter like the uniform and attribute nodes, then assign something to it in the positionNode shader and read it out in the colorNode shader, because I use the same parameter set for both. I will continue to test it and also study Line2NodeMaterial.js a bit. It is quite extensive and will take some time before I understand how it works.
I have many more parameters, but for the sake of clarity I have reduced this to one uniform, attribute, texture and the varying node. In the positionNode shader I only read the parameters intended for it, and likewise in the colorNode shader:
const wgslShaderParams = {
time: uniform(0),
position: attribute("position"),
noise: texture(noiseTexture),
morphedPosition: varying( vec3(), 'morphedPosition'),
}
this.oceanMaterial = new MeshBasicNodeMaterial();
this.oceanMaterial.positionNode = positionWGSL(wgslShaderParams);
this.oceanMaterial.colorNode = colorWGSL(wgslShaderParams);
I got to know the node system essentially through the two wgslFn and TslFn examples in the materials and compute examples that existed up to r154, and a note from you in r155. That helped me a lot. I hadn’t seen the varying node before, except in node.js from 3js. The WebGPU examples in 3js have now become very extensive and are very suitable for getting to know the node system. Most people look at the examples far more often than at the documentation anyway.
If there could be a varying example in r159 that is based on a parameter list, the positionNode and the colorNode like mine, that would be useful. I didn’t intend the analogy to WebGL:
(wgslShaderParams → uniforms),
(MeshBasicNodeMaterial → shaderMaterial),
(positionNode → vertexShader),
(colorNode → fragmentShader).
It turned out that way because I found it clear. The schema in one example could also make the transition from webgl to webgpu easier for others because the analogies can be clearly seen. The node system can do much more. And exactly what it can do more is well highlighted in the examples. An example that illustrates the analogy to the classic WebGL world, as it happened to me by chance, would round off the examples.
I’ve been thinking for the last few days about why my specular light isn’t right and why I’m not happy with my Fresnel either.
Of course it’s clear that both depend on my:
var viewDir = normalize(position - cameraPosition);
And currently I use the position from:
//parameters
position: attribute("position"),
instead of the morphed and displaced one, as would be correct.
I admit that I still don’t understand from your code example how I use the varyingNode to communicate a customPosition from the vertexStage wgslFn to the fragmentStage wgslFn.
const vertexFn = wgslFn( `fn mainVertex ( ... ) -> ... {
...myVarying = vec3f(0); //set varying ?
}`);
const fragmentFn = wgslFn( `fn mainFragment ( ... ) -> ... {
...myVarying; //get varying ?
}`);
material.positionNode = vertexFn();
material.colorNode = fragmentFn();
There are no wgslFn calls in Line2NodeMaterial.js that would allow me to understand how this works. I took a look at the varying node code, but in order to understand it I would probably have to learn a lot more about the node system. But I don’t want to go that deep into it right now.
Is what I imagine and want currently possible in r158 with the varyingNode? Or should I be patient and wait for the next release?
I think this should solve your problem. It’s worth remembering that we can use material.vertexNode if necessary.
Hi Sunag, I was just experimenting with the TSL editor in the hope of understanding how the corresponding nodes are converted to WGSL. I saw that there is an OutputStructNode. Then I thought this might help me.
In any case, I’m looking forward to your extension in r159.
I see in your extension that you assign the varyingProperty directly to the colorNode.
const myVarying = varyingProperty( 'vec3', 'myVarying' );
...
material.colorNode = myVarying;
Is it possible to use the varyingProperty like this?
const myVarying = varyingProperty( 'vec3', 'myVarying' );
const vertexFn = wgslFn( `
fn mainVertex (
position: vec3<f32>,
...
) -> vec3<f32> {
varyings.myVarying = position * 2.0; //a very simple modification of the position
...
return position;
}
` , [ myVarying ] );
const fragmentFn = wgslFn( `
fn mainFragment (
...
) -> vec4<f32> {
var myVarying = varyings.myVarying;
...
return color;
}
`, [ myVarying ] );
const shaderParams = {
position: attribute("position"),
...
}
const material = new MeshBasicNodeMaterial();
material.positionNode = vertexFn(shaderParams);
material.colorNode = fragmentFn(shaderParams);
Logically, I would think it should work.
It may be necessary to add it to the function parameters for the fragment shader, e.g.:
material.colorNode = fragmentFn( { color: myVarying } );
const fragmentFn = wgslFn( `fn mainFragment ( myVarying: vec3<f32> ) -> ... {
...myVarying;
}`);
material.colorNode = fragmentFn( { myVarying } );
This is even more elegant
Hi sunag,
First of all, this is not necessary now, but it would be useful for the future. At the end, when one is done with all the computes, it would be useful to be able to create mipmaps for the generated textures, in order to use THREE.LinearMipMapLinearFilter and THREE.LinearFilter.
I simply wrote my own bilinear filter for my wgslFn’s because no samplers can be used in the vertex stage anyway.
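The idea is roughly this (a simplified sketch of such a filter with textureLoad, not my exact code; the texture size is assumed to be passed in):
fn sampleBilinear(tex: texture_2d<f32>, uv: vec2<f32>, size: vec2<f32>) -> vec4<f32> {
    let pos = uv * size - 0.5;            // texel space, centered on texel centers
    let base = vec2<i32>(floor(pos));
    let f = fract(pos);                   // interpolation weights
    let maxCoord = vec2<i32>(size) - 1;
    let t00 = textureLoad(tex, clamp(base, vec2<i32>(0), maxCoord), 0);
    let t10 = textureLoad(tex, clamp(base + vec2<i32>(1, 0), vec2<i32>(0), maxCoord), 0);
    let t01 = textureLoad(tex, clamp(base + vec2<i32>(0, 1), vec2<i32>(0), maxCoord), 0);
    let t11 = textureLoad(tex, clamp(base + vec2<i32>(1, 1), vec2<i32>(0), maxCoord), 0);
    return mix(mix(t00, t10, f.x), mix(t01, t11, f.x), f.y);
}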
Hi sunag,
For a few days I had the feeling that something was wrong with the normal vectors in my wave generator. I have a clear idea in my head of what they should look like, and it just didn’t work.
The reason is this:
var posX = f32(index) % width;
var posY = f32(index) / height;
I followed your examples here.
In WebGL I used a floor for posY, because posY = 45.678 is still the texel with posY = 45. I inserted the floor in all of my compute shaders:
var posX = f32(index) % width;
var posY = floor(f32(index) / height);
and the difference looks like this
without floor:
with floor:
Troubleshooting can be quite a Sisyphean task.
Now the Jacobian determinant, which is a measure of the curvature, also looks much better.
So it’s worth adding the missing floor in the examples. I think I can create a very realistic ocean. I have to look for a server that can be configured for “cross origin isolation” so that I can share it live.
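As far as I understand, cross origin isolation just means the server has to send these two response headers (I still have to verify this with my hoster):
Cross-Origin-Opener-Policy: same-origin
Cross-Origin-Embedder-Policy: require-corp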
I believe that in the examples this would already happen, because the index is a u32 instead of f32( index )?
I didn’t see any typing for posX and posY in the examples. They will then probably inherit the type from index. Are decimal places automatically rounded down when a u32 type is divided by width?
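For comparison, this is how I picture the two variants (the names are only for illustration; width and height are the texture dimensions):
// with a u32 index, integer division truncates, so no floor is needed
var posXu = index % u32(width);
var posYu = index / u32(height);
// with f32 the explicit floor is required
var posXf = f32(index) % width;
var posYf = floor(f32(index) / height);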
On this occasion, is there anything special to note about the WebGPU depthTexture?
The WebGPU depthTexture example is very simple. There’s almost nothing that can go wrong. However, I get errors when I pass the depthTexture material, like all the other materials, just for testing to a ShaderPass that I pass to the composer. In itself this is just routine. I’ll investigate it further. With a depthTexture of the ground, or of everything below the surface, I can take scattering and refraction of transmitted light into account.
Hi sunag,
As I already wrote in the developer forum, r159 works wonderfully with the extensions.
That’s a big milestone to now also have the varyings.
Have you changed anything in the composer? It doesn’t work anymore. That doesn’t really hurt me because at the moment I only use it as a debugger to check the many compute textures and they are all correct.
This is the message I get:
WebGPUBackend.js:535 Uncaught TypeError: Cannot read properties of null (reading 'createView')
at WebGPUBackend.clear (WebGPUBackend.js:535:46)
at WebGPURenderer.clear (Renderer.js:606:16)
at RenderPass.render (RenderPass.js:67:13)
at EffectComposer.render (EffectComposer.js:126:9)
at ThreeJSController.Render (threejs-component.js:67:19)
at main.js:192:31
I simply deactivated the composer. But it will be important for effects later. I used post-processing to create the realistic-looking atmosphere of my Mars and Earth.
I didn’t integrate EffectComposer; I think nodes can offer us more possibilities. What would be the desired effect here? Maybe we can translate it to the node system.
I was wondering why there is no PPNodeMaterial or PostFXNodeMaterial or something like that. The desired effect is completely normal post-processing: a material placed over the rendered 3D world.
Like here with the atmosphere. I made a physical model of the atmosphere in a shader. I reconstructed the 3D world in the shader with a depthTexture, but these are unimportant details.
Postprocessing is a very powerful tool. For water, I could later simulate the underwater effect.
At the moment I only use post-processing for debugging; I use it to inspect the many compute shader textures.
I can also do this with a plane and orthographic camera. The composer was simply elegant because it didn’t need a plane and a camera.
I imagine it would be more elegant with a PostFXNode, because in WebGL I always needed three different things (EffectComposer, RenderPass, ShaderPass).
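For comparison, this is roughly the classic WebGL setup I mean (a sketch from memory; atmosphereShader is just a placeholder):
import { EffectComposer } from 'three/addons/postprocessing/EffectComposer.js';
import { RenderPass } from 'three/addons/postprocessing/RenderPass.js';
import { ShaderPass } from 'three/addons/postprocessing/ShaderPass.js';

const composer = new EffectComposer( renderer );
composer.addPass( new RenderPass( scene, camera ) );    // render the 3D world
composer.addPass( new ShaderPass( atmosphereShader ) ); // post-process it with my shader
// per frame: composer.render();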
I created the backdrop node for this kind of effect; using backdrop you don’t need to create passes.
https://threejs.org/examples/?q=backdrop#webgpu_backdrop_area