Calculating new normals in a TSL shader

That is the idea, but it’s EXTREMELY low level. I know WebGL, but I don’t understand the modern Metal/WebGPU-style APIs. It’s more like: you still need to express the same idea, what you want computed on the GPU and so on, but instead of making 5 calls, the modern API lets you do it in one. But again, it’s pretty complicated.

This is where a shader (be it TSL, GLSL, or whatever) should help. You don’t need to compute the normals sequentially, meaning your CPU goes into a loop and does them all one by one. You make a shader and compute these values from the first texture, because a texture does allow for an arbitrary read. You would write the shader so that at every point of the terrain (your height map) it looks at, say, two neighboring points, and uses this information (the difference in heights) to make a normal map.

You can put floats into the height texture, but it would be 4 times larger than using 8 bits, in which case you get your height in the 0–1 range. You would then have to scale this to, say, 2037 meters to get the correct normals.
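As a minimal TSL sketch of this idea (assuming a recent three.js with imports from 'three/tsl', a hypothetical 256x256 DataTexture heightTex holding 0–1 heights, and a hypothetical 10 m step between neighbouring samples):

import { texture, uv, vec2, vec3, float, normalize } from 'three/tsl';

const texel = float( 1 / 256 ); // one-pixel step in UV space for a 256x256 texture
const scale = float( 2037 );    // metres represented by the 0-1 height range
const step  = float( 10 );      // hypothetical metres between neighbouring samples

// arbitrary reads: this sample plus its two neighbours
const h  = texture( heightTex, uv() ).r.mul( scale );
const hx = texture( heightTex, uv().add( vec2( texel, 0 ) ) ).r.mul( scale );
const hy = texture( heightTex, uv().add( vec2( 0, texel ) ) ).r.mul( scale );

// heightfield normal from finite differences, up to scale: ( -dh/dx, -dh/dy, 1 );
// the space normalNode expects depends on the material, so treat this as a sketch
material.normalNode = normalize( vec3( h.sub( hx ), h.sub( hy ), step ) );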

This way you avoid uploading your normals (very slow) and updating them one by one (in a loop), since in the shader it’s all done in parallel.

For this, I don’t think you would see benefits. You can make a Mesh for each of these sections and just show/hide them.

Thank you again @dubois and @Mugen87. It’s super helpful to have some guidance. The subject is large and it’s easy to find yourself just completely ‘lost’.

3D programming seems harder to me as (again, correct me if I’m wrong) debugging is more difficult. Often, you can’t just drop in a console.log and see what’s going on… so I’m frequently left thinking ‘hmm, something is wrong. But what?’

I have plenty to be working on here though. I will investigate data textures and see what that gets me. Once I have something to report, I’ll do so here, just for completeness.

All the best
B

The issue with using a GPU from a browser is that when processing a vertex, it only has the data about that vertex. If a normal vector is not provided, it has to calculate one, and to do this it needs access to two neighbouring non-collinear vertices. But it has no access to them.

Some solutions are mentioned above, but I’ll add one more idea, at the bottom:

  • pass an array for elevation, and a second array for slope/normals (best option; see the sketch after this list)
  • pass only an array for elevation, then use derivatives to calculate normals (good only if you are OK with flat shading and visible facets)
  • pass elevation and slope as textures, instead of arrays (good option, if the textures have sufficient precision)
  • pass elevation as a texture; a vertex can read any part of the texture, so it can “access” the texture values of neighbouring vertices (next-to-good option, if the texture has sufficient precision; also prone to staircase artefacts)
  • pass the elevation array, an x-shifted elevation array and a y-shifted elevation array, so when you process one vertex you have the elevations of its two neighbour vertices (almost no JS preparation work, all the hard work is done by the GPU, but I have never tested something like this)
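For the first option, here is a minimal sketch of the CPU-side preparation, assuming a hypothetical flat Float32Array height of rows × cols elevations in metres and a grid spacing step in metres:

function buildNormals( height, rows, cols, step )
{
	const normals = new Float32Array( rows * cols * 3 );
	for( let y = 0; y < rows; y++ )
		for( let x = 0; x < cols; x++ )
		{
			const i = y * cols + x;
			// clamp at the edges so every vertex has a neighbour to difference against
			const hx = height[ y * cols + Math.min( x + 1, cols - 1 ) ];
			const hy = height[ Math.min( y + 1, rows - 1 ) * cols + x ];
			// unnormalised heightfield normal ( h - hx, h - hy, step ), then normalised
			const nx = height[ i ] - hx, ny = height[ i ] - hy, nz = step;
			const len = Math.hypot( nx, ny, nz );
			normals.set( [ nx / len, ny / len, nz / len ], i * 3 );
		}
	return normals; // e.g. new THREE.BufferAttribute( normals, 3 )
}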

How would you perform the shifting? I think this requires quite significant JS work to make the alignments. It’s basically making a vertex that has two other vertices embedded in it. You’d have to duplicate them at the edges (whereas textures can simply be clamped when sampled). I wouldn’t say there is almost no JS work (if I understood correctly).

Exactly!

Two ways: either during the phase of filling up the array, or post factum.

Here is how a post-factum shifting can be done:

// C=columns; R=rows; a=flat array

// horizontal shift: each element takes its left neighbour's value;
// iterating backwards keeps values from being overwritten before they are read
for( let i=R*C-1; i>=0; i-- ) if( i%C>0 ) a[i]=a[i-1]

// vertical shift: each element takes the value one row above it
for( let i=R*C-1; i>=0; i-- ) if( i>C-1 ) a[i]=a[i-C]

Here is a snapshot of the console (C=8, R=5):

Full code
var C = 8; // columns
var R = 5; // rows
var a = []; // flat array

function show(what)
{
	console.log(what)
	for( let y=0; y<R; y++ )
		console.log( a.slice( C*y, C*(y+1) ) )
}


// populate
for( let y=0; y<R; y++ )
	for( let x=0; x<C; x++ )
		a.push( 10*y+x+11 )

show('original')


// horizontal shift
for( let i=R*C-1; i>=0; i-- ) if( i%C>0 ) a[i]=a[i-1]

show('x-shifted')


// vertical shift
for( let i=R*C-1; i>=0; i-- ) if( i>C-1 ) a[i]=a[i-C]

show('and also y-shifted')

To make the normals good at the edges, it is enough either to provide one additional row and column of extra data, or to fill the edge row and column with fresh data (not found in the original array).
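A hedged sketch of the first approach, reusing C and R from above; sample( x, y ) is a hypothetical function returning the elevation at grid position ( x, y ):

// build the grid one row and one column larger, starting at -1, so the
// x- and y-shifts pull real data into the original edge cells
var padded = [];
for( let y=-1; y<R; y++ )
	for( let x=-1; x<C; x++ )
		padded.push( sample( x, y ) );
// padded has (R+1)*(C+1) values; shift it with the loops above, using C+1 columns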


I wouldn’t call this “almost no JS preparation work” 🙂

This now has to be organized in 3 different attribute buffers?

Yeah, it is a matter of perspective - shifting values is almost nothing compared to calculating normal vectors or creating a normal map. In any case, this is just an idea, different from the rest. One can never be sure beforehand whether (or when) some idea may come in handy.


Excellent ideas, thank you @PavelBoytchev. Great question from @dubois about edges.

I had been wondering about the idea of creating shifted textures. Though I have not actually implemented it, I was thinking about swapping the direction of the shift halfway through, so that the edges are doable without additional data. Not sure if this will work though.

For this project, I can do some pre-processing. I have the terrain data (well, I’m working on that too), so I can create and store textures for each map. But these terrains will be stacked one on top of the other, with the heights becoming cumulative, so there will be ‘some’ work for JS up front.

I planned to store all the data (shifted and not shifted) in one texture. So, R channel for height, then G and B for the normals. But maybe it will have to be separate textures?
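A minimal sketch of that single-texture packing, assuming hypothetical height, nx and ny arrays of size*size values already mapped to 0–255:

import * as THREE from 'three';

const size = 256; // hypothetical map resolution
const data = new Uint8Array( size * size * 4 );
for( let i = 0; i < size * size; i++ )
{
	data[ i * 4 + 0 ] = height[ i ]; // R: elevation
	data[ i * 4 + 1 ] = nx[ i ];     // G: normal x
	data[ i * 4 + 2 ] = ny[ i ];     // B: normal y (z can be rebuilt in the shader)
	data[ i * 4 + 3 ] = 255;         // A: unused
}
const tex = new THREE.DataTexture( data, size, size, THREE.RGBAFormat );
tex.needsUpdate = true;

At 8 bits per channel the precision caveat from the list above applies, though; separate float textures would avoid it at the cost of memory.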

Thank you everyone for your help, this thread alone has been like a month or more of learning time.

All the best

B


Ok, let’s do an exercise; I’m finally near my computer:

This should explain the first step where you’re getting confused. A texture can be sampled at any point; vUv is that point. vUv.x + 1/8 is the next pixel over, since the texture is 8x8 in size. The stretching is due to how the texture is configured, not the shader, because it’s set to ClampToEdge by default. Meaning, it’s the result of these systems working together in this case, but you could also implement this edge case in the shader.

So observe the black pixel in the bottom-left corner: if you move along x by one texel you get red, if you move along y by one you get green, and if you move along both (the diagonal) you get blue.
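In TSL terms, that one-texel step might look like this (a hedged sketch; tex stands for the hypothetical 8x8 texture of the exercise):

import { texture, uv, vec2 } from 'three/tsl';

const here  = texture( tex, uv() );                          // this pixel
const right = texture( tex, uv().add( vec2( 1 / 8, 0 ) ) );  // one texel along x
const up    = texture( tex, uv().add( vec2( 0, 1 / 8 ) ) );  // one texel along y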

Amazing, thank you @dubois! I ‘think’ I get what’s happening here, though it’s making my head hurt a bit. And it highlights a problem.

Trying to make this into TSL, I have not been able to figure out how to get the DataTexture into material.positionNode.

In this new attempt, around line 80, I try to assign it like this:

material.colorNode = texture(vTex);

This doesn’t work, nor do any of the other combinations I’ve tried. Would someone please put me out of my misery?

Thanks!

B

When using the TSL function texture(), you have to pass in a texture and not a uniform. A TextureNode (the type that is returned by texture()) is already a UniformNode.
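In other words, a minimal sketch (assuming heightData is the DataTexture instance itself, not something wrapped in uniform()):

import { texture } from 'three/tsl';

// pass the texture object straight in; the returned TextureNode already acts as a uniform
material.colorNode = texture( heightData );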


Hallelujah! Thank you again @Mugen87 It’s starting to take shape.

All the best

Mark

Well, it seems like I’ve spent a long time just moving stuff around and now I’m completely tied up in knots.

I have something similar to the exercise posted by @dubois. Thank you for that. Very educational. colorNode seems to work OK. And positionNode.

And for a while there it looked like dataTexture would allow me to create a getHeight() function in emulation of the original procedural terrain example. This looks close to me (line 79) but doesn’t result in functional normals… it could be wildly out.

My eyes are about to fall out of my head and roll under the sofa so I’m giving up for the day.

Thanks all for your help getting me to this point. The point where this feels like it might work, not eyes falling out.

All the best
B

I’m a little late to this discussion, so this may not be necessary or responsive.

I have an ocean module which uses WebGPU and compute shaders (but not TSL) to create a display of moving waves. It uses a series of compute shaders to compute the 3D position of the waves, with the final step being to compute the resulting normals. This last part computes the normals taking into account all three dimensions - not just the vertical.

If you are interested, I could send you a link to my GitHub repo. The code is actually fairly simple and could be easily stripped down and adapted to something where you only need to make a single computation and then compute the normals.

Could you send me the link? Or actually better yet - could you post it here?

It’s on my GitHub repo in the jsm directory. The file name is Ocean4t2.js.

It is set up in the form of a Class. If you are not familiar with what that involves - the first part saves all the variables and returns a reference that you use whenever you run the program.

In the Initialize Class section, the first part initializes the variables that will be used when you run the program. The next part initializes 7 buffers, including 5 used to compute the values, a buffer that stores the final values, and a buffer that holds the normal values. The next part defines those buffers. The next part computes the values for the first buffer.

The Shaders section includes a series of Shaders which perform the calculations. The last shader computes the Normal values.

The Initialize Class (cont.) section links all the buffers and shaders together and exports links to the displacement and normal buffers.

The Update Class section runs all the computations.

The neat thing about WebGPU is that these displacement and normal buffers can be loaded directly into the material as textures, as follows:

normalNode: normalMap(texture(grd_.Nrm)), // 3D values
positionNode: positionLocal.add(texture(grd_.Dsp)),  // 3D values

So if you are just going to compute static values once, you only need 3 buffers and 3 shaders: a compute buffer, a displacement buffer and a normal buffer. Furthermore, you can compute 3D values, not just Y displacements.

And if you are going to recompute values over and over, you can use as many compute buffers as you want to compute 3D values.

It is likely that TSL will eventually be expanded to do some or all of these things (and maybe it already has). But I found that for complex math operations, it is easier to work with wgslFn shaders.


This is amazing, thank you @phil_crowther. You really are not kidding when you say ‘complex math operations’. Looking at your code is a humbling experience. If you don’t mind me asking, how long did this take to write?

Eventually, I will be animating these terrains, in essentially the same way as the original example which I started from.

So I’m hoping that I won’t need to delve directly into the murky world of wgslFn shaders. We shall see though…

In the meantime, I had a minor breakthrough with the TSL version, which I’ll post later today.

All the best
B


That code - especially the content of the shaders - is an adaptation of code written by others whom I credit. My main contribution was to update it from three.js r79 to r150. Attila Schroeder helped update it for use with WebGPU. He understands the underlying math and has a longer version that has more features.

I was merely offering it as a non-TSL template just in case you wanted to compute 3D vertex values (aka “vector displacement”) and need to compute the resulting 3D normals. The classic example of vector displacement is converting a flat plane to show a mushroom, as shown here. As far as I know, TSL cannot currently compute the normals for these kinds of values.

I look forward to seeing what you come up with in TSL since that is an increasingly powerful language that vastly simplifies a lot of complex tasks. I find wgslFn shaders easier to use for long computations since I have been using them for years. But I will definitely not complain if TSL converts them to a simple function.
