Calculating new normals in TSL shader

Hello everyone, I’m making some progress with TSL but, as always, there’s a new bump in the road to deal with. If anyone could point me in the right direction I’d be very grateful.

I have imported map data into my TSL shader and deformed the vertices using position.y.addAssign( heightNode2 ), where heightNode2 is a bufferAttribute (thanks to @Mugen87 in this post).
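
For context, the relevant part of my setup looks roughly like this (simplified from the fiddle; hArray is my elevation array with one value per vertex, and I'm using a MeshStandardNodeMaterial):

import * as THREE from 'three/webgpu';
import { Fn, positionLocal, bufferAttribute } from 'three/tsl';

// one elevation per vertex, in the same order as the plane's vertices
const heightNode2 = bufferAttribute( new THREE.Float32BufferAttribute( hArray, 1 ) );

const material = new THREE.MeshStandardNodeMaterial();

material.positionNode = Fn( () => {

    const position = positionLocal.toVar();
    position.y.addAssign( heightNode2 ); // shift each vertex up by its own height

    return position;

} )();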

The problem now is that normals/shadows seem broken. The shading seems to be completely flat. I’ve tried lots of different things but this is the best outcome so far.

So, the question is, what am I missing? Do I need something completely other than recalculating the normals? Or am I just doing this wrong?

Here’s a picture of the problem:

And here is a JSFiddle.

Thank you in advance

B

1 Like

There is a ‘transformNormalToView’ method

Or you could write it yourself. Sample 3 heights in a clockwise order around the point where you want the normal. Make a vec3 from it and then normalize.
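
Something along these lines, roughly (untested sketch; getHeight is a placeholder for however you compute or sample the elevation at a given x/z, and shift is the distance to the sampled neighbours):

import { Fn, positionLocal, vec3, float, sin, cross, transformNormalToView } from 'three/tsl';

// placeholder height function - swap in whatever gives you the elevation at a given x/z
const getHeight = Fn( ( [ xz ] ) => sin( xz.x ).mul( sin( xz.y ) ).mul( 0.5 ) );

const shift = float( 0.01 ); // distance to the sampled neighbours

material.normalNode = Fn( () => {

    // the displaced vertex itself
    const position = positionLocal.xyz.toVar();
    position.y.addAssign( getHeight( position.xz ) );

    // two neighbours, each displaced by their own height
    const neighbourA = positionLocal.xyz.add( vec3( shift, 0, 0 ) ).toVar();
    neighbourA.y.addAssign( getHeight( neighbourA.xz ) );

    const neighbourB = positionLocal.xyz.add( vec3( 0, 0, shift.negate() ) ).toVar();
    neighbourB.y.addAssign( getHeight( neighbourB.xz ) );

    // cross the two tangent vectors to get the new normal
    const toA = neighbourA.sub( position ).normalize();
    const toB = neighbourB.sub( position ).normalize();

    return transformNormalToView( cross( toA, toB ) );

} )();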

3 Likes

Thank you very much @seanwasere

This is the method I’m actually trying to use, and it looks pretty much identical to your example.
So perhaps I’m doing something wrong with the values going into the elevation of the NeighbourA/B vertices?


// elevations
    neighbourA.y.addAssign(heightNode2);
    neighbourB.y.addAssign(heightNode2);

I shall investigate further.

All the best
B

If you are doing this once and not animating the vertices/changing the map, a far better solution would be to actually deform the vertices without the shader and calculate the normals once there. Otherwise you will be reading the textures far too many times unnecessarily, and it could slow things down.

Thank you @dubois, that’s an interesting point. I may actually implement the vertices-first approach for when the map isn’t animated. But the plan is for it to be animated, much like the map in the TSL procedural terrain example, only based on actual map data rather than random noise.

At this point though, I’d settle for just being able to get the static model to respond correctly to the lights!

All the best
B

Hello again, apologies for restating the question but I’m a bit lost.

I’ve spent quite a while trying to work out why my normal calculations aren’t working, and I think it might be because I’ve assigned the incoming elevations array to the material’s position with a bufferAttribute.

This means I’m giving the same value to the neighbouring vertices in the normal calculation:


const heightNode2 = bufferAttribute( new THREE.Float32BufferAttribute( hArray, 1 ) );

position.y.addAssign( heightNode2);
neighbourA.y.addAssign( heightNode2);
neighbourB.y.addAssign( heightNode2);

If I change this for a function which calculates a height and returns a value, it all works as expected. Just like the TSL procedural terrain example.

To be honest though, after what seems like a long time trying to understand this problem, I’m still basically clueless.

If anyone had time to look at the code, and just give me a read on what looks wonky I would be very grateful. It’s here. Or point me at some examples related to calculating normals, or docs for the bufferAttribute?

Thank you!

B

I hope this does not get interpreted the wrong way, since I’ve been a bit critical of TSL.

I understand that there is a TSL example that you are trying to follow. However, TSL is very new and specific to three.js, while shading languages like GLSL are generic. This means that in order to get help with this, someone has to be familiar both with shaders in general and with the TSL syntax/system, which is quite prone to change. Sure, a lot of people seem to be experimenting with it, but GLSL has been around for more than a decade.

Would it make sense for you to try to do this with ShaderMaterial and WebGLRenderer first and then move to the more modern approach? There should be examples of how to do that. I looked at your code, but unfortunately I don’t understand any of it, while in theory I know how this can be done :frowning:

For starters, I wouldn’t touch buffers and neighbors on the mesh itself. I would process the height texture to create a normal map. At least that’s where my mind is at, as this could also be done less frequently than your render rate. I.e., rather than looking up neighbors every frame, you would look them up once, when you receive the texture. I imagine that there is a texture involved, since I don’t see any other way to do this.
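
Something in this direction, maybe (very rough sketch; size and strength are assumptions, and depending on your UVs you may need to flip the green channel):

// build a tangent-space normal map from a square grid of heights, once, when the data arrives
function heightsToNormalMap( heights, size, strength ) {

    const data = new Uint8Array( size * size * 4 );
    const clamp = ( v ) => Math.min( size - 1, Math.max( 0, v ) );
    const h = ( x, y ) => heights[ clamp( y ) * size + clamp( x ) ];

    for ( let y = 0; y < size; y ++ ) {
        for ( let x = 0; x < size; x ++ ) {

            // central differences, packed with the usual bump-to-normal formula
            const n = new THREE.Vector3(
                ( h( x - 1, y ) - h( x + 1, y ) ) * strength,
                ( h( x, y - 1 ) - h( x, y + 1 ) ) * strength,
                1
            ).normalize();

            const i = ( y * size + x ) * 4;
            data[ i ] = ( n.x * 0.5 + 0.5 ) * 255;
            data[ i + 1 ] = ( n.y * 0.5 + 0.5 ) * 255;
            data[ i + 2 ] = ( n.z * 0.5 + 0.5 ) * 255;
            data[ i + 3 ] = 255;

        }
    }

    const normalMap = new THREE.DataTexture( data, size, size, THREE.RGBAFormat );
    normalMap.needsUpdate = true;
    return normalMap;

}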

edit

I’ve been able to assign vNormal to your output, and I got green, so it looks like all your normals are 0,1,0. But yikes, I don’t understand anything else in the code.

edit2


position.y.addAssign( heightNode2);
neighbourA.y.addAssign( heightNode2);
neighbourB.y.addAssign( heightNode2);

You most likely do want to change this. I’m not sure what the expectation here is, but it’s probably not doing what you want, since the same value is being assigned to the neighbors as to your point of interest. But again, this is all very confusing. You probably don’t want heightNode2 to be assigned to both neighbors. If using buffers, you probably want to create a new buffer that has the neighbor information for the point. But this seems awkward and inefficient, unless there is something in WebGPU that I’m unfamiliar with. If you do read these from a texture though, then it becomes easier: for the point itself, you would read the texture at that point; for the neighbors, you would read the texture at neighboring points.

FWIW you aren’t actually using shaders to generate the terrain mesh. You are kind of extracting a part of the mesh, into a separate buffer, so that you could paint it.

So why don’t you try this - make your terrain mesh like a peasant would - just fill x,y and z. X and Z would be your grid, Y would be the height you put in the buffer.

Then, instead of reading this single height buffer, just read a component from the mesh position, in this case z. Plug that into your color functions and the result should be the same. Run the simple, peasant method on the geometry like computeVertexNormals() and see how far that gets you.

You can also avoid shaders altogether by making a texture and mapping it via UVs. It won’t be as crisp as the step function but it might be easier to reason about before you dive into this advanced stuff.
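
For example, something like this (rough sketch; the size is a guess, and sampleHeight is a placeholder for however you read a normalized 0..1 height out of your data):

// bake the colour bands into a texture once, then use it as a plain map
const size = 512;
const data = new Uint8Array( size * size * 4 );

for ( let i = 0; i < size * size; i ++ ) {

    const h = sampleHeight( i % size, Math.floor( i / size ) ); // normalized height, 0..1

    let c = [ 230, 40, 97 ];              // sand  (#e62861)
    if ( h > 0.1 ) c = [ 133, 213, 52 ];  // grass (#85d534)
    if ( h > 0.3 ) c = [ 191, 189, 141 ]; // rock  (#bfbd8d)
    if ( h > 0.5 ) c = [ 255, 255, 255 ]; // snow  (#FFFFFF)

    data.set( [ ...c, 255 ], i * 4 );

}

const map = new THREE.DataTexture( data, size, size, THREE.RGBAFormat );
map.colorSpace = THREE.SRGBColorSpace;
map.needsUpdate = true;

const material = new THREE.MeshStandardMaterial( { map } ); // normals come from computeVertexNormals()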

This is the basic idea:


The peasant shader version:

  const fragmentShader = `
    varying vec3 vNormal;
    varying float vHeight;
    uniform vec3 uGrassColor;
    uniform vec3 uRockColor;
    uniform vec3 uSnowColor;
    uniform vec3 uSandColor;

    void main(){
      // pick a colour band based on the normalized height
      vec3 color = uSandColor;
      color = mix(color, uGrassColor, step(0.1, vHeight));
      color = mix(color, uRockColor, step(0.3, vHeight));
      color = mix(color, uSnowColor, step(0.5, vHeight));
      // very naive lambert shading with a fixed light direction
      float nDotL = dot( normalize(vNormal), normalize(vec3(1.)));
      gl_FragColor = vec4(color*nDotL, 1.);
    }
  `
  const vertexShader = `
    varying vec3 vNormal;
    varying float vHeight;
    uniform float uMaxHeight;
    void main(){
      vNormal = normal;                // comes from geometry.computeVertexNormals()
      vHeight = position.y/uMaxHeight; // normalized height used by the colour bands
      gl_Position = projectionMatrix * modelViewMatrix * vec4(position,1.);
    }
  `
  const maxH = 8

  const material = new THREE.ShaderMaterial({
    vertexShader,
    fragmentShader,
    uniforms:{
      uSandColor: { value: new THREE.Color('#e62861')},
      uGrassColor: { value: new THREE.Color('#85d534')},
      uSnowColor: { value: new THREE.Color('#FFFFFF')},
      uRockColor: { value: new THREE.Color('#bfbd8d')},
      uMaxHeight: {value: maxH*0.5}
    }
  })

The not-fancy, peasant-simple mesh version (to be fair, the fancy shiny new version doesn’t do much else; it just can’t compute the normals).

  const geometry2 = new THREE.PlaneGeometry(7, 7, curve_points, curve_points)
  const {position} = geometry2.attributes
  for ( let i = 0 ; i < position.count ; i++){
     const i3 = i*3
     position.array[i3+2] = hArray[i] // height goes into the local z component of the plane
  }
  geometry2.computeVertexNormals()

The problem with this is that I hardcoded a very naive and simple Lambert lighting model into the shader; if you want to use MeshStandardMaterial you will run into the problem of extending materials, which TSL aims to solve. But if you ignore that last line (nDotL) it should be obvious what needs to be done. You take the height of the mesh and then you map these colors to it, that’s it, no more, no less. You never really did anything in the shader to affect the geometry, so you might as well just generate it on the CPU…

Thank you so much @dubois, it’s very good of you to work through this with me. I’m going to do exactly as you suggest and put together a peasant version. Then see if that process gives me any inspiration.

My thought for using TSL/GPU was that, eventually, these maps may be quite large and multi-layered, like a geological diagram, and then animated. It seemed a good idea to be able to process all that data in parallel, since I’ll have the maps available as large arrays of elevation numbers ahead of time. Perhaps I’m being optimistic?

But as you rightly point out, the version I have here doesn’t do what I need.

The problem definitely is in this part:

position.y.addAssign( heightNode2);
neighbourA.y.addAssign( heightNode2);
neighbourB.y.addAssign( heightNode2);

Though, if it works to assign the bufferAttribute to position, this must mean that somewhere there’s a look-up which connects the correct element of heightNode2 with the position value.

Like: position[0] => heightNode2[0]. But from position.xyz you have a location, not an element in an array. So (brain explodes) I’m not sure what to do about that.

Anyway, thank you once again. I’ll report back in this thread once I have some progress.

All the best

B

1 Like

Well, after quite a long time banging my head against this problem I have abandoned the TSL approach for now.

A quick restatement of the problem

I want to create a custom terrain/mesh using a TSL shader to deform a Plane.
I am not generating the terrain elevation data with a function but with an incoming array of heights.

  • Assigning the elevation data to positionLocal.y as a bufferAttribute works an absolute dream to shift the vertices, so I have the surface, which is great.

  • But the normals cannot be generated because (as far as I can tell) there seems to be no way to ask, ‘hey, what’s the elevation at point A?’

It is super frustrating.

Anyway, I’ve cleaned up the JS Fiddle, and added some comments. The problem is around line 147.

It could well be that there’s a simple solution and I’m just not well enough versed in three.js, TSL, or anything else to find it.

Anyone with a big enough brain to figure this out wins my heartfelt admiration forever.

Thanks
B

The problem with this is that it’s not just shaders that are needed to achieve this; you have several systems working in conjunction. It’s not even three’s fault here, it’s just how graphics work, and that is a big topic.

I still don’t quite get what your understanding of this is. Was this merely some example that you were trying to follow?

Why do you need that special buffer, how do you understand this? What do you think is different in my example? Going into the nitty-gritty of how GPUs work, it’s actually better to lump it all into one buffer, but even though I understand some of it, I don’t quite understand how this works; it’s got to do with memory alignment and GPU architecture and such.

There is a bit of a balance here: uploading the data is costly, and odds are that your render call is not being fully utilized (meaning you could compute the normals yourself and asking the GPU to draw would still be more expensive than the drawing itself).

But to me, it feels like you don’t understand this problem well yet, and TSL alone is not a magic bullet that can solve this. I think it’s made more complicated by the fact that this is WebGPU, which I understand to be much more complex than WebGL, meaning managing those different systems together is harder.

This is done by sampling textures. I don’t know how it’s done with TSL but most shading languages have something like readTexture(texture, yourPointA)

I don’t know much about this, but I think transform feedback could be used to treat a vertex buffer the same as a texture (or rather vice versa), but I may be wrong. Putting data into textures and then doing reads from different locations is how this has traditionally been done in graphics programming.

@dubois I think you’re right. It’s my lack of understanding that is at the root of the error.

This is for a larger project which will allow the user to explore a real terrain, in essentially the same way as is possible with this procedural example.

I did consider sending the terrain data as a texture. I’ll take a look at that next. For now I’m using your peasant method and this works. Haven’t looked at how this will work once I want to allow the user to scroll around the map. I expect it will be horrible.

My naive hope was that TSL would mean the various calculations (e.g. what is the height at point A?) would be done all at once rather than one after the other, and thus faster. Maybe that isn’t how it works? Looks like I have some research to do…

Anyway, thanks again for taking the time.

All the best
B

Hm, if it’s some sort of a tile, that can be computed on a worker and sent via transferables at no cost. You can keep a bunch of tiles in a cache and evict some. Either way you are going to be paying the penalty of actually uploading that to the GPU. I guess, to be fair, maybe splitting into different buffers could be faster as you wouldn’t have to change the plane itself (all tiles would be using the same buffer for that), but again, with how GPUs like everything to be in groups of 4, maybe that wouldn’t really work.

If this were some kind of water, that’s moving and has waves and whatnot, it would make sense to compute the normals in a shader. Otherwise I don’t think it does.

Computing normals from displaced vertices in the shader is tricky. There is one way to do this with derivative functions but you end up with flat normals:
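
In a fragment shader this looks something like the following (vViewPosition being a varying that carries the displaced view-space position; the winding/sign may need flipping):

vec3 flatNormal = normalize( cross( dFdx( vViewPosition ), dFdy( vViewPosition ) ) );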

You can’t compute smooth normals in the shader since that requires weighting all connected face normals of a vertex, which you can’t do. Normally, when displacing vertices via a displacement map, you always use an accompanying normal map to circumvent this issue.
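
I.e., in its simplest form (sketch; heightTexture and normalTexture are placeholder names, both assumed to be built from the same elevation data):

const material = new THREE.MeshStandardMaterial( {
    displacementMap: heightTexture,  // elevations packed into a texture
    displacementScale: 10,
    normalMap: normalTexture         // normals precomputed from the same heights
} );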

Another thing: Using VertexNormalsHelper won’t show you a valid result since it only visualizes the geometry data, not the analytical normals in the shader.

If you don’t need to update the terrain each frame, I suggest you generate it in JS and just maintain a single position and normal buffer attribute with already displaced values.

1 Like

Thank you @Mugen87,

I will be updating the terrain each frame eventually. I started with a ‘simple’ case. I am trying to make something which is almost identical to the TSL procedural terrain example, but from existing map elevation data.

Current thinking is that the best way might be to generate data textures for normal and height, then read from these. Do you think that might work?

All the best
B

Yes, if the normal map corresponds to the height/displacement map that would be an ideal approach.

Uhhh no, if you generate a data texture then it’s pretty much the same thing as calculating the “mesh” normals. But instead of them being in the vertex buffer, you end up with more logic and less performance in comparison, because of the texture fetch.

Reading textures is slow. I think that uploading a texture costs the same as uploading a vertex buffer.

So why would you have this extra level of complexity (reading normals from a texture) when you can just read them from a vertex buffer, i.e. “render an ordinary mesh”?

In order to fill the DataTexture you basically have to do something very similar to computeVertexNormals, if not the exact same thing. Then you would upload the texture (why not upload the normals in a vertex buffer?), then you would read from the texture, which has a cost.

Instead, you could use a WebGLRenderTarget: you would upload your heights as a DataTexture, then instead of computeVertexNormals you would do renderer.render(heightToNormals, camera) (like, with an EffectComposer or something).
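
Very roughly (sketch only, assuming a WebGLRenderer for the moment; heightTexture is your heights as a DataTexture, terrainMaterial is whatever material your terrain mesh uses, and the strength/orientation of the resulting normals will need tweaking):

const size = 512;
const normalTarget = new THREE.WebGLRenderTarget( size, size );

// a tiny "heightToNormals" pass: read the height texture, write packed normals
const bakeMaterial = new THREE.ShaderMaterial({
    uniforms: {
        uHeightMap: { value: heightTexture },
        uTexel: { value: 1 / size },
        uStrength: { value: 10.0 }
    },
    vertexShader: `
        varying vec2 vUv;
        void main(){
            vUv = uv;
            gl_Position = vec4( position.xy, 0.0, 1.0 ); // full-screen quad, camera is irrelevant
        }
    `,
    fragmentShader: `
        uniform sampler2D uHeightMap;
        uniform float uTexel;
        uniform float uStrength;
        varying vec2 vUv;
        void main(){
            float hL = texture2D( uHeightMap, vUv - vec2( uTexel, 0.0 ) ).r;
            float hR = texture2D( uHeightMap, vUv + vec2( uTexel, 0.0 ) ).r;
            float hD = texture2D( uHeightMap, vUv - vec2( 0.0, uTexel ) ).r;
            float hU = texture2D( uHeightMap, vUv + vec2( 0.0, uTexel ) ).r;
            vec3 n = normalize( vec3( ( hL - hR ) * uStrength, ( hD - hU ) * uStrength, 1.0 ) );
            gl_FragColor = vec4( n * 0.5 + 0.5, 1.0 ); // packed like a normal map
        }
    `
});

const bakeScene = new THREE.Scene();
bakeScene.add( new THREE.Mesh( new THREE.PlaneGeometry( 2, 2 ), bakeMaterial ) );
const bakeCamera = new THREE.OrthographicCamera( - 1, 1, 1, - 1, 0, 1 );

renderer.setRenderTarget( normalTarget );
renderer.render( bakeScene, bakeCamera ); // run once, or whenever the heights change
renderer.setRenderTarget( null );

terrainMaterial.normalMap = normalTarget.texture;
terrainMaterial.needsUpdate = true; // material had no normal map before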

This way you would compute the normals once for this terrain.

Alternatively, you could run this exact same logic inside your mesh shader, but then you would be (re)computing the normals each frame. If the height data did not change, that would be redundant.

Just look at what goes inside a DataTexture: it also has xyzw and can be a Float32Array or Int8Array, just like a BufferAttribute.

Thank you both @dubois and @Mugen87.

My hope for TSL/WebGPU was that it would speed things up by enabling all (or some of) the loops involved in 3D processing to be done ‘all at once’ rather than ‘one at a time’. Have I misunderstood this? Very possibly. I’m trying to learn both 3D and threejs as I build.

I was thinking of trying the texture route, imagining it worked as follows:

  1. Turn the Height arrays into a texture for the entire map, let’s say 512x512 in size
  2. Do the same for the normal calculations. So I have two textures that correspond to the same points.
  3. Then pass some section of these textures (current position+128x128) to the shader as the current map to render
  4. So, the values for vertex position and normal are just looked up, not calculated, at each frame (see the sketch after this list).
  5. Also, these lookups can be done in parallel, so much quicker.
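
For step 4, I’m imagining something like this in TSL (very rough and quite possibly wrong; heightMap and normalMap would be DataTextures for the current window, with the plane’s UVs spanning that window):

import { Fn, texture, uv, positionLocal, transformNormalToView } from 'three/tsl';

material.positionNode = Fn( () => {

    const position = positionLocal.toVar();
    position.y.addAssign( texture( heightMap, uv() ).r ); // height is looked up, not calculated

    return position;

} )();

material.normalNode = Fn( () => {

    const packed = texture( normalMap, uv() ).xyz;
    return transformNormalToView( packed.mul( 2 ).sub( 1 ) ); // unpack 0..1 to -1..1

} )();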

Have I imagined this process in an unrealistic fashion?

@dubois This sounds like an interesting route. Though I don’t fully understand terms like RenderTarget or EffectComposer yet. I will research, thank you!

Instead, you could use a WebGLRenderTarget: you would upload your heights as a DataTexture, then instead of computeVertexNormals you would do renderer.render(heightToNormals, camera) (like, with an EffectComposer or something).

Apologies if I appear ignorant. There is a phase with most learning where you know enough to begin seeing how little you actually know. I’m at that point.

All the best

B

Yes, at least in the case with vertex normals you expect a level of automatism that no 3D engine can provide.

1 Like