MeshPhongMaterial displacementMap / normalMap

Hello. I’m trying to use PlaneBufferGeometry to create terrain chunks by altering vertex positions using a height map. I’d like to do this in the shader so that my scene only needs one geometry for all chunks instead of a unique geometry for each chunk.

I’ve been experimenting with the MeshPhongMaterial displacementMap and normalMap properties to enable this. Ultimately I’d like to use MeshLambertMaterial and patch in similar functionality using onBeforeCompile so it’s more performant, but I’ve encountered some problems just with MeshPhongMaterial. I’ve created a test case here: https://codepen.io/kuxazoso/pen/GVYMda?editors=0010
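
For reference, the GPU approach is just the built-in material properties on a single shared geometry, along these lines (heightTexture and normalTexture stand in for the canvas textures generated in the pen):

```js
// One shared geometry for every chunk; the per-chunk shape comes from
// the material's displacementMap, so no per-chunk vertex data is needed.
const geometry = new THREE.PlaneBufferGeometry(100, 100, 64, 64);
geometry.rotateX(-Math.PI / 2); // lie flat so displacement pushes along +Y

const material = new THREE.MeshPhongMaterial({
  displacementMap: heightTexture, // placeholder: the chunk's height map
  displacementScale: 10,
  normalMap: normalTexture,       // placeholder: normals derived from the height map
  normalScale: new THREE.Vector2(1, 1)
});

scene.add(new THREE.Mesh(geometry, material));
```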

In the test, I’ve created four meshes. The first two are examples of MeshPhongMaterial and MeshLambertMaterial with the vertex positions manually altered on the CPU: I loop over them and adjust each one according to the height map you can see at the top left. The third mesh is the one I’m currently having problems with.
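
The CPU version does something along these lines (simplified; heightData is assumed to be an ImageData whose dimensions match the plane’s vertex grid, and displacementScale is a placeholder):

```js
// CPU displacement: sample the height map once per vertex, write the
// height into the position attribute, then rebuild the vertex normals.
const position = geometry.attributes.position;
const segments = 64; // must match the PlaneBufferGeometry segment counts

for (let i = 0; i < position.count; i++) {
  const col = i % (segments + 1);
  const row = Math.floor(i / (segments + 1));
  const texel = (row * (segments + 1) + col) * 4;
  const height = heightData.data[texel] / 255; // red channel, 0..1
  position.setY(i, height * displacementScale); // plane is rotated flat
}

position.needsUpdate = true;
geometry.computeVertexNormals(); // this is what the GPU path has to match
```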

The displacementMap property works as expected and displaces the vertices as desired. However, the normalMap property is not working. There are two problems. The shadows that are created are very faint and can only really be noticed when increasing the normalScale parameter, but increasing it results in shadows that are too dark. The perhaps more obvious problem is that the shadows are pixelated rather than smooth like in the CPU-displaced examples.

The fourth mesh is my work in progress on adding the displacementMap / normalMap properties to MeshLambertMaterial. As in the GPU Phong example, the normal map doesn’t have the desired effect. The shadows of the boxes are also incorrect, which is what I was initially working to fix by patching the depth material’s shader. Obviously I’m not at that stage yet, as I haven’t even managed to get the normal maps working for MeshPhongMaterial.
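
The Lambert patch I’m experimenting with is roughly along these lines (a simplified sketch; groundTexture, heightMap and heightScale are my own names, not three.js built-ins):

```js
const material = new THREE.MeshLambertMaterial({ map: groundTexture });

material.onBeforeCompile = (shader) => {
  shader.uniforms.heightMap = { value: heightTexture };
  shader.uniforms.heightScale = { value: 10.0 };

  shader.vertexShader =
    'uniform sampler2D heightMap;\nuniform float heightScale;\n' +
    shader.vertexShader.replace(
      '#include <begin_vertex>',
      [
        '#include <begin_vertex>',
        '// displace along the object-space normal, mimicking displacementmap_vertex',
        'transformed += normalize( objectNormal ) * texture2D( heightMap, uv ).r * heightScale;'
      ].join('\n')
    );
};
```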

To clarify: I want the GPU Phong mesh (vertices adjusted in shader via GPU with displacementMap / normalMap properties set) to look the same as the CPU Phong mesh (vertices adjusted in JavaScript). Ultimately I want GPU Lambert to look the same as CPU Lambert.

Many thanks for any assistance anyone can provide.

> The perhaps more obvious problem is that the shadows are pixelated rather than smooth like in the CPU-displaced examples.

I’ve managed to improve this somewhat by setting magFilter and minFilter to LinearFilter, but it still doesn’t look like the CPU-adjusted mesh. I’ve included some screenshots to show the difference.

[Screenshots comparing the GPU (Phong) and CPU (Phong) meshes]
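
For reference, the change was just this (heightTexture standing in for the generated textures; DataTexture, for example, defaults to NearestFilter, which explains the blocky sampling):

```js
heightTexture.magFilter = THREE.LinearFilter;
heightTexture.minFilter = THREE.LinearFilter;
heightTexture.needsUpdate = true; // required after changing filter settings
```
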
I think the difference occurs because you execute geometry.computeVertexNormals() after the vertex displacement. This will result in different vertex normal data compared to your GPU (Phong) demo.

If you remove the line and use the normal map from GPU (Phong) for CPU (Phong), both planes look identical.

That makes sense. I suppose my question then is whether it’s possible to get the same result that geometry.computeVertexNormals() produces in JavaScript by using a normal map in GLSL?

At the moment I have a 9x9 grid of terrain chunks. If I have to do everything on the CPU I’ll have 81 distinct chunk geometries, whereas if I can do everything in the shader I’ll only have 1 chunk geometry. That’s the ideal situation, but the GPU normal map approach doesn’t look as good as the CPU computeVertexNormals() approach. Is this just a limitation of using a normal map versus whatever’s going on in computeVertexNormals()?

> Is this just a limitation of using a normal map versus whatever’s going on in computeVertexNormals()?

No, it just seems your current normal map does not represent the right data. However, achieving an equal visual result is tricky. Vertex normals are interpolated in the fragment shader, whereas normal maps are sampled directly. Besides, even with (tangent-space) normal maps, the interpolated vertex normal is still part of the computation of the final normal in the fragment shader. So there are various factors that determine the final outcome. I’m not sure which approach is right for your use case.
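
If you want the normal map to encode the same data that computeVertexNormals() derives from the displaced vertices, one option is to generate it from the height map with central differences. A rough sketch (the green channel sign may need flipping depending on your UV orientation):

```js
// Build a tangent-space normal map from a height map (ImageData) using
// central differences to approximate the surface gradient.
function heightToNormalMap(heightData, strength) {
  const { width, height, data } = heightData;
  const out = new Uint8Array(width * height * 4);

  const sample = (x, y) => {
    x = Math.min(width - 1, Math.max(0, x));
    y = Math.min(height - 1, Math.max(0, y));
    return data[(y * width + x) * 4] / 255; // red channel
  };

  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      const dx = (sample(x + 1, y) - sample(x - 1, y)) * strength;
      const dy = (sample(x, y + 1) - sample(x, y - 1)) * strength;
      const invLen = 1 / Math.sqrt(dx * dx + dy * dy + 1);
      const i = (y * width + x) * 4;
      out[i]     = (-dx * invLen * 0.5 + 0.5) * 255; // normal x -> red
      out[i + 1] = (-dy * invLen * 0.5 + 0.5) * 255; // normal y -> green
      out[i + 2] = (invLen * 0.5 + 0.5) * 255;       // normal z -> blue
      out[i + 3] = 255;
    }
  }

  const texture = new THREE.DataTexture(out, width, height, THREE.RGBAFormat);
  texture.magFilter = THREE.LinearFilter;
  texture.minFilter = THREE.LinearFilter;
  texture.needsUpdate = true;
  return texture;
}
```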

In any event, are you sure it’s better to perform the displacement on the GPU? If you do it with JavaScript, you do it once and just render the data. When using a displacement map, you have to do it over and over again (per frame) in the vertex shader. It seems to me that vertex displacement on the GPU makes more sense if the displacement is dynamic, e.g. it varies over time.

I want the GPU to do as much of the calculation as possible to free up the CPU for things like collision detection and game logic. Using displacement / normal maps also appeals because I can render textures and chunk data in a layered approach:

The outermost chunks hold the height map, which is used to generate normals. In the next ring of chunks the edge normals are corrected so the chunks are seamless. A Voronoi diagram is calculated, then biomes are added, then the biomes are used to place trees, plants, rocks, etc. The advantage is that I can calculate each layer of data a bit at a time and store it for later use when loading / unloading chunks around the camera position.

I was incorrect earlier when I said it was a 9x9 grid - it’s actually 7x7 (starts from the fourth ring), which makes for 49 chunks. I’m also using three levels of detail, which would make for 147 unique geometries, and that seems like a lot. That was the main reason I was concerned about the number of geometries and why I’m looking for a shader-based solution, so I can have just three planes, one for each LOD, as sketched below. In a previous version of this application I was doing everything on the CPU and it chugged a lot, so I thought I’d try reducing what appeared to be an obvious bottleneck.
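
With the shader approach, the whole grid would then only need something like this (sizes and segment counts are placeholders):

```js
// One shared plane per LOD level; every chunk is just a Mesh that
// references one of them plus its own per-chunk textures.
const lodGeometries = [
  new THREE.PlaneBufferGeometry(100, 100, 128, 128), // near
  new THREE.PlaneBufferGeometry(100, 100, 64, 64),   // mid
  new THREE.PlaneBufferGeometry(100, 100, 32, 32)    // far
];
lodGeometries.forEach((g) => g.rotateX(-Math.PI / 2));

function createChunk(lod, heightTexture, normalTexture, x, z) {
  const material = new THREE.MeshPhongMaterial({
    displacementMap: heightTexture,
    normalMap: normalTexture
  });
  const mesh = new THREE.Mesh(lodGeometries[lod], material);
  mesh.position.set(x, 0, z);
  return mesh;
}
```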

On the flip side, I also want to create a basic version with just a single chunk so I can experiment with adjusting parameters such as the displacement scale, elevation / moisture data, noise frequency, etc. in real time. That particular application would definitely benefit from calculating everything on the GPU so it responds to user input smoothly as uniforms are changed.
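
Something like this, for example (assuming a dat.GUI instance; a property write is all that’s needed, with no geometry rebuild):

```js
// Tweaking a displacement parameter at runtime is just a property /
// uniform write when the displacement happens in the shader.
const gui = new dat.GUI();
const params = { displacementScale: 10 };

gui.add(params, 'displacementScale', 0, 50).onChange((value) => {
  material.displacementScale = value; // takes effect on the next frame
});
```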

Hope that provides a bit more context. Thanks for your help so far, it’s appreciated!