HDR Texture.image data format

Hi, I’d like to understand how to properly interpret the Texture.image data of a loaded .hdr texture.

I see that I have a total of 8388608 UInt16 values, which, divided by the image width and height, gives me 4 values per pixel. Is that correct?
If so, I guess the first three are R, G, and B, but what about the fourth?
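
(For reference, a quick sanity check along these lines confirms the 4-values-per-pixel figure; 2048 × 1024 is just an assumption that matches the total count:)

const channelsPerPixel = texture.image.data.length / (texture.image.width * texture.image.height);
console.log(channelsPerPixel); // 8388608 / (2048 * 1024) = 4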

Basically, the .hdr texture I’m loading is being used as a THREE.EquirectangularReflectionMapping environment texture for IBL.
I want to scan the texture values in order to find the highest-luminosity point of the texture.
But to do so I need to understand how to interpret the values in the UInt16Array.
Thanks in advance for any help.

EDIT: does this apply somehow?
Color#fromBufferAttribute – three.js docs (threejs.org)

Something like this?

const bufferAttribute = new THREE.BufferAttribute(texture.image.data, 4, true);
const color = new THREE.Color().fromBufferAttribute(bufferAttribute, 0);
console.log(color);

The fourth is the alpha channel, for opacity or transparency; it’s usually ignored with HDRI.

You can convert the color to HSL (Hue, Saturation, Lightness) and work with the L (lightness) value.

Generally speaking, this isn’t specific to HDRI. (It’s just raw color data, so you can always hack your way with it.)
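
For example (a minimal sketch, assuming a THREE.Color built with fromBufferAttribute as in your snippet):

const bufferAttribute = new THREE.BufferAttribute(texture.image.data, 4, true);
const color = new THREE.Color().fromBufferAttribute(bufferAttribute, 0);

// getHSL() fills the target object with hue, saturation and lightness
const hsl = { h: 0, s: 0, l: 0 };
color.getHSL(hsl);
console.log(hsl.l); // lightness of the first pixel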

Thanks so much!
I suspected it was alpha, but having an alpha channel in an HDR image sounded useless to me.

This seems to work (I still need to check):

let co = 0;
let max = { x: 0, y: 0, l: 0};

for (let x = 0; x < texture.image.width; x++){
    for (let y = 0; y < texture.image.height; y++){
        const r = texture.image.data[co++] / 65535,
              g = texture.image.data[co++] / 65535,
              b = texture.image.data[co++] / 65535,
              l = (Math.max(r, g, b) + Math.min(r, g, b)) / 2;
        
        co++; // skip alpha

        if (l > max.l)
            max = { x, y, l}
    }
}

console.log(max);

That’s an elegant implementation! It should effectively get you the highest-luminosity point.

If it’s RGBE, try the following:

let co = 0;
let max = { x: 0, y: 0, l: 0 };

const data = new Uint16Array(texture.image.data);

for (let y = 0; y < texture.image.height; y++) {
  for (let x = 0; x < texture.image.width; x++) {
    const r = data[co];
    const g = data[co + 1];
    const b = data[co + 2];
    const e = data[co + 3];

    const scaleFactor = Math.pow(2, e - 128); // Calculate the scaling factor based on the exponent

    const rNormalized = r * scaleFactor / 65535;
    const gNormalized = g * scaleFactor / 65535;
    const bNormalized = b * scaleFactor / 65535;

    const l = Math.max(rNormalized, gNormalized, bNormalized);

    co += 4; // Increment by 4 to move to the next set of RGBE values

    if (l > max.l) {
      max = { x, y, l };
    }
  }
}

console.log(max);
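
(Side note: in the raw Radiance .hdr format the mantissas are actually 8-bit bytes with a shared 8-bit exponent, so the canonical decode is usually written per byte; a minimal sketch, assuming a Uint8Array of RGBE bytes:)

// classic Radiance decode: value = mantissa * 2^(e - 136), i.e. (mantissa / 256) * 2^(e - 128)
const decodeRGBE = (r, g, b, e) => {
  const scale = e === 0 ? 0 : Math.pow(2, e - 136);
  return [r * scale, g * scale, b * scale];
};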

OK, I found it! It’s not RGBE, it’s actually RGBA but HalfFloatType, so I have to convert each 16-bit entry into a float. Also, I had inverted the x and y loops.
Here is the working code:

const decodeFloat16 = (h: number) => {
    const s = (h & 0x8000) >> 15; // sign bit
    const e = (h & 0x7C00) >> 10; // 5-bit exponent
    const f = h & 0x03FF;         // 10-bit fraction

    if (e === 0) {
        // subnormal numbers
        return (s ? -1 : 1) * Math.pow(2, -14) * (f / Math.pow(2, 10));
    } else if (e === 0x1F) {
        // Infinity / NaN
        return f ? NaN : ((s ? -1 : 1) * Infinity);
    }

    // normal numbers
    return (s ? -1 : 1) * Math.pow(2, e - 15) * (1 + (f / Math.pow(2, 10)));
}

let co = 0;
let max = { x: 0, y: 0, l: 0};

const data = texture.image.data;

for (let y = 0; y < texture.image.height; y++){
    for (let x = 0; x < texture.image.width; x++){
        const r = decodeFloat16(data[co++]),
              g = decodeFloat16(data[co++]),
              b = decodeFloat16(data[co++]),
              l = (Math.max(r, g, b) + Math.min(r, g, b)) / 2;

        co++; // skip alpha
        if (l > max.l)
            max = { x, y, l }
    }
}

console.log(max);
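
(As an alternative to the manual decodeFloat16, recent three.js builds ship a half-float helper in DataUtils; if your version has it, something like this should give the same values:)

import { DataUtils } from 'three';

const data = texture.image.data;

// lightness of the first pixel (RGBA layout, 4 entries per pixel)
const r = DataUtils.fromHalfFloat(data[0]),
      g = DataUtils.fromHalfFloat(data[1]),
      b = DataUtils.fromHalfFloat(data[2]),
      l = (Math.max(r, g, b) + Math.min(r, g, b)) / 2;

console.log(l);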