Help on texture and renderer encoding

Hi, I am trying to generate terrain with an image as a texture, and I have an issue regarding encoding or color space, or maybe something else I'm not aware of.

When I use sRGBEncoding the image comes out sharp but very yellowish and dark, whereas with the default encoding the image is not as sharp.

my renderer:

renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.gammaInput = true;
renderer.gammaOutput = true; 
renderer.gammaFactor = 2.4;
renderer.shadowMap.enabled = true;
renderer.setPixelRatio(window.devicePixelRatio);
renderer.setSize(window.innerWidth, window.innerHeight);

and when i do

texture.encoding = THREE.sRGBEncoding;

full texture loading code:

  loader.load(
    "./resources/images/satellite.png",
    function (texture) {
      texture.needsUpdate = true;
      texture.encoding = THREE.sRGBEncoding;
      texture.generateMipmaps = false;
      texture.magFilter = THREE.LinearFilter;
      // ...rest of the callback (material setup etc.)
    }
  );

whereas without texture.encoding defined, which defaults to

THREE.LinearEncoding

the output is washed out and not as sharp.

Both renders use the same lighting, i.e.

    const ambientLight = new THREE.AmbientLight(0x808080); // mid-gray ambient light
    scene.add(ambientLight);

    const dirLight = new THREE.DirectionalLight(0xffffff, 0.7);
    dirLight.color.setHSL(0.1, 1, 0.95);
    dirLight.position.set(1000, 5000, -29000);
    dirLight.position.multiplyScalar(2);
    scene.add(dirLight);

    dirLight.castShadow = true;

    dirLight.shadow.mapSize.width = 64;
    dirLight.shadow.mapSize.height = 64;

    const d = 500;

    dirLight.shadow.camera.left = -d;
    dirLight.shadow.camera.right = d;
    dirLight.shadow.camera.top = d;
    dirLight.shadow.camera.bottom = -d;

    dirLight.shadow.camera.far = 3500;
    dirLight.shadow.bias = -0.0001;

This is after making the lighting softer, by setting the lights as:


    const ambientLight = new THREE.AmbientLight(0x202020); // dim gray ambient light
    scene.add(ambientLight);

    const dirLight = new THREE.DirectionalLight(0xffffff, 0.5);
    dirLight.color.setHSL(0.1, 1, 0.95);
    dirLight.position.set(1000, 25000, -40000);
    dirLight.position.multiplyScalar(2);
    scene.add(dirLight);

    dirLight.castShadow = true;

    dirLight.shadow.mapSize.width = 64;
    dirLight.shadow.mapSize.height = 64;

    const d = 500;

    dirLight.shadow.camera.left = -d;
    dirLight.shadow.camera.right = d;
    dirLight.shadow.camera.top = d;
    dirLight.shadow.camera.bottom = -d;

    dirLight.shadow.camera.far = 3500;
    dirLight.shadow.bias = -0.0001;

The size of this mesh is 205200 × 92800, with vertex heights ranging from 0 to 1000.

I want the sharpness of the first mesh and the illumination of the second image, and I believe I am doing something wrong. The first image is very yellowish; can anyone please give me any kind of help?

PS: I am using a custom shader with:

 uniforms1 = {
        diffuseTexture: { type: "t", value: texture },
        heightScale: { type: "f", value: _scaleHeight },
      };

      uniforms1 = THREE.UniformsUtils.merge([
        THREE.UniformsLib["lights"],
        uniforms1,
      ]);

      console.log(uniforms1);

      _meshMaterial = new THREE.RawShaderMaterial({
        uniforms: uniforms1,
        vertexShader: terrainShader._VS,
        fragmentShader: terrainShader._FS,
        lights: true,
        side: THREE.DoubleSide,
      });

and this is my fragment shader:

const _FS = `#version 300 es
precision highp sampler2DArray;
precision highp float;
precision highp int;

uniform mat4 modelMatrix;
uniform mat4 modelViewMatrix;
uniform vec3 cameraPosition;
uniform sampler2D diffuseTexture;

struct DirectionalLight {
  vec3 direction;
  vec3 color;
};

uniform DirectionalLight directionalLights[NUM_DIR_LIGHTS];
uniform vec3 ambientLightColor;

vec3 addedLights;
vec3 _vNormalView ; 

in vec3 vNormal;
in vec3 vNormalView;
in vec3 vPosition;

out vec4 out_FragColor;

vec3 blendNormal(vec3 normal){
	vec3 blending = abs(normal);
	blending = normalize(max(blending, 0.00001));
	blending /= vec3(blending.x + blending.y + blending.z);
	return blending;
}

vec3 triplanarMapping (sampler2D tex, vec3 normal, vec3 position) {
  vec3 normalBlend = blendNormal(normal*normal);
  vec3 xColor = texture(tex, position.yz).rgb;
  vec3 yColor = texture(tex, position.xz).rgb;
  vec3 zColor = texture(tex, position.xy).rgb;
  // return normalBlend;
  return (xColor * normalBlend.x + yColor * normalBlend.y + zColor * normalBlend.z);
}

void directionalLightEffect(){
  for (int i = 0; i < NUM_DIR_LIGHTS; i++) {
    vec3 dirVector = directionalLights[i].direction;
    vec3 _lightDir = normalize(dirVector);
    float lambertian = max(dot(_vNormalView, _lightDir), 0.0);
    addedLights.rgb += directionalLights[i].color*lambertian;
  }
} 

void main(){
    _vNormalView = normalize(vNormalView);
    addedLights  = ambientLightColor;
    vec3 color = triplanarMapping(diffuseTexture, vNormal, vPosition);
    directionalLightEffect();  
    out_FragColor = vec4(color.rgb * addedLights, 1.0);
}
`;
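For reference, the blend weights computed by blendNormal/triplanarMapping above can be reproduced in plain JavaScript (just a sketch of the math, with a hypothetical helper name; note the shader squares the normal before blending):

```javascript
// Reproduce the shader's triplanar blend weights on the CPU.
// The shader calls blendNormal(normal * normal), so each component is
// squared before being clamped and normalized. The normalize() in the
// shader only rescales the vector, so dividing by the component sum
// yields the same final weights.
function triplanarWeights(nx, ny, nz) {
  const bx = Math.max(nx * nx, 0.00001);
  const by = Math.max(ny * ny, 0.00001);
  const bz = Math.max(nz * nz, 0.00001);
  const sum = bx + by + bz;
  // Weights for the x-, y- and z-projected texture lookups, summing to 1
  return [bx / sum, by / sum, bz / sum];
}

// A face pointing straight up samples almost only the y (top-down) projection:
console.log(triplanarWeights(0, 1, 0)); // ≈ [0, 1, 0]
```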

Since you’re using a custom shader, you might want to implement gamma encoding in your code.

You can do a simple test: use your shader on a flat-color quad with

out_FragColor = vec4(0.5, 0.5, 0.5, 1.0);

and see what color is rendered on the screen. If it’s 128/128/128, then your color (0.5) is being treated as a human-perception level.

Renderer settings do not know what you mean by color (0.5) in your code.

Physics formulas for diffusion (like Lambertian) give you the number of photons radiated in a particular direction. So if you get a color of 0.5, that’s half of the maximum number of photons, and it needs to be gamma-encoded into human-perception levels before rendering. That would be:

float gamma = 2.2;
vec3 diffusion_color = pow(current_diffusion, vec3(1.0 / gamma));

Only the diffusion output should be encoded, not the texels, if you read some.
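To put numbers on that test (a plain-JavaScript sketch using the gamma-2.2 approximation, not the exact sRGB curve):

```javascript
// Gamma-2.2 approximation of the Linear-sRGB -> sRGB output encoding.
function gammaEncode(linear, gamma = 2.2) {
  return Math.pow(linear, 1.0 / gamma);
}

// A linear 0.5 written straight to the framebuffer reads back as 128/255...
console.log(Math.round(0.5 * 255)); // 128
// ...but gamma-encoded first, it reads back noticeably brighter, ~186/255.
console.log(Math.round(gammaEncode(0.5) * 255)); // 186
```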


The effect of marking texture.encoding = sRGBEncoding is to cause three.js to decode the texture from sRGB to Linear-sRGB, so that the texture data is provided to the fragment shader in linear space. This is required for correct lighting calculations.

However, as @tfoller suggests, you do not want the final output of the fragment shader to be linear! It’s raw data, and does not look like a properly-formed image to our eyes. Adding a Linear-sRGB to sRGB encoding step at the end of the fragment shader is the minimum here; including tone mapping can improve the image further.

If you skip both the input and output encoding, you’ll end up close to the correct image, but the lighting may be a little off. If you skip just one or the other, things will look obviously quite wrong.

I would recommend not worrying about sharpness and contrast until after the color reproduction is correct, and perhaps just using a solid white AmbientLight to ensure the colors are right. It is easier to fix the lighting and contrast after the color management is set up, without fiddling with color spaces.
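To make the in/out round trip concrete, here is a small plain-JavaScript sketch of the two exact sRGB transfer functions involved (not three.js code; the decode is what texture.encoding = THREE.sRGBEncoding requests on the way in, and the encode is the step to add at the end of the fragment shader):

```javascript
// Exact piecewise sRGB transfer functions, per channel, values in [0, 1].
function srgbToLinear(c) {
  return c <= 0.04045 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
}
function linearToSrgb(c) {
  return c <= 0.0031308 ? c * 12.92 : 1.055 * Math.pow(c, 1 / 2.4) - 0.055;
}

// Decode a texel, "light" it in linear space, then re-encode.
// With a full-white light the round trip returns the original texel,
// which is why skipping BOTH steps still looks roughly correct.
const texel = 200 / 255;
const lit = srgbToLinear(texel) * 1.0; // lighting happens in linear space
console.log(linearToSrgb(lit));        // ≈ texel
```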

Thank you so much for the response. I’m sorry, I’m a little confused here. Does this mean I don’t have to do

texture.encoding = THREE.sRGBEncoding;

If so: when I remove this line, the image has a very faint color compared to what I get when I have the above code.

It’s hard to tell without seeing a live working example of your code (which you can create on jsfiddle or codepen and post here). Judging by the snippets you provided, you can either remove all these:

renderer.gammaInput = true;
renderer.gammaOutput = true; 
renderer.gammaFactor = 2.4;
texture.encoding = THREE.sRGBEncoding;

and after

float lambertian = max(dot(_vNormalView, _lightDir), 0.0);

add

float gamma = 2.2;
lambertian = pow(lambertian, 1.0 / gamma);

OR

keep all the above settings and lambertian as is and change the code at the end

float gamma = 2.2;
vec3 sRGB = pow(color.rgb * addedLights, vec3(1.0 / gamma));
out_FragColor = vec4(sRGB, 1.0);
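A side note on why both options work: under the pure gamma-2.2 approximation they produce the same final pixel, because the power law distributes over the product. A quick plain-JavaScript check (not three.js code; with the exact piecewise sRGB curve the two would differ very slightly):

```javascript
const gamma = 2.2;

// Option 1: leave the texel in sRGB, gamma-encode only the light term.
function option1(texelSrgb, lambertian) {
  return texelSrgb * Math.pow(lambertian, 1 / gamma);
}

// Option 2: decode the texel to linear, light it, encode the result.
function option2(texelSrgb, lambertian) {
  const texelLinear = Math.pow(texelSrgb, gamma); // what sRGBEncoding decode does
  return Math.pow(texelLinear * lambertian, 1 / gamma);
}

// pow(a * b, k) === pow(a, k) * pow(b, k), so both paths agree:
console.log(option1(0.6, 0.3));
console.log(option2(0.6, 0.3));
```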

I just logged the material to the console, and I see this as the ambient light color,

whereas I have the ambient light as

    const ambientLight = new THREE.AmbientLight(0xffffff); 

and passing it into shader as

uniforms1 = {
        diffuseTexture: { type: "t", value: texture },
        heightScale: { type: "f", value: _scaleHeight },
      };

      uniforms1 = THREE.UniformsUtils.merge([
        THREE.UniformsLib["lights"],
        uniforms1,
      ]);

      console.log(uniforms1);

      _meshMaterial = new THREE.RawShaderMaterial({
        uniforms: uniforms1,
        vertexShader: terrainShader._VS,
        fragmentShader: terrainShader._FS,
        lights: true,
        side: THREE.DoubleSide,
      });

Why is it showing 3.14, whereas I was expecting 1.0?

Hi, yes, this looks awesome; the change is very noticeable and the output is much better.

Can you please help me attain this kind of improvement in the rendered output in my case?