Specular aliasing with textures

I’m running into what seems like specular aliasing, and I’m wondering whether this is expected and whether there’s any obvious way to mitigate it.

I have a shiny material with a normal map, which has some high-intensity, high-frequency details that represent “microbevels”.

This looks fine when there are enough pixels, but when viewed from afar (low-density sampling), I’m getting a lot more sparkly aliasing than I expect.

The aliasing can be mitigated effectively by increasing roughness, or by overriding the texture2D call to use a mipmap bias. Either approach, though, has the unwanted side effect of reducing detail in non-problematic areas of the texture.
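
Here’s a minimal sketch of the mipmap-bias workaround (the uniform names are my own invention, not a ThreeJS API):

```glsl
// The optional third argument to texture2D() biases mip selection; a
// positive bias picks a blurrier level, which stabilises the sparkles
// but blurs the normal map everywhere, including healthy areas.
uniform sampler2D normalMap;
uniform float mipBias; // e.g. somewhere around 1.0 to 2.0
varying vec2 vUv;

vec3 sampleBiasedNormal() {
    // Standard tangent-space unpack from [0, 1] to [-1, 1].
    return texture2D( normalMap, vUv, mipBias ).xyz * 2.0 - 1.0;
}
```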

Ideally, the “roughness” would automatically increase only at the edges, where the normal changes rapidly, and only when the pixel density requires it. I could maybe craft a roughnessMap and modulate roughness based on these conditions.
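
For illustration, a hedged sketch of what that condition might look like in a fragment shader (names are invented; dFdx/dFdy need the standard-derivatives extension on WebGL1):

```glsl
uniform sampler2D normalMap;
uniform float baseRoughness;
varying vec2 vUv;

void main() {
    // Screen-space derivatives reveal where the sampled normal changes
    // faster than the pixel grid can resolve; boost roughness only there.
    vec3 mapN = texture2D( normalMap, vUv ).xyz * 2.0 - 1.0;
    vec3 dxy = max( abs( dFdx( mapN ) ), abs( dFdy( mapN ) ) );
    float normalDelta = max( max( dxy.x, dxy.y ), dxy.z );
    float roughness = min( baseRoughness + normalDelta, 1.0 );
    // Visualise the boosted roughness for debugging.
    gl_FragColor = vec4( vec3( roughness ), 1.0 );
}
```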

But that got me wondering whether ThreeJS already does this calculation. Does ThreeJS take the normal delta into account when deciding how incoming light is calculated, or is each pixel modelled as a “facet”?

Would creating a simple LOD solve the issue? Just assign a material without the glittery detail whenever the object is further away.

> Would creating a simple LOD solve the issue?

That’s a helpful tip, but in this case, no. Distance from the camera isn’t the only thing that alters sampling density; other factors, like rotation, cause this too. Creating LODs for every case where this happens would also mean a significant amount of extra work (authoring and testing) that I’d like to avoid if possible.

I’ve added a bit more detail to the original post above. I’m interested in pragmatic, systematic solutions, but I’m also interested in discussion about:

  1. what is the root cause,
  2. whether ThreeJS intends to handle this edge case,
  3. whether ThreeJS (or any realtime engine) can reasonably be expected to handle this.

In my limited testing, these sparkles seem to happen no matter the light source type (point, directional, rectarea, envmap).

Here’s my hand-wavy working explanation. I’d love to be corrected.

Even though the normal map is sampled with mipmapping enabled, each pixel/fragment still receives only a single normal value (i.e. the return value of texture2D()). Mipmapping ensures that low-density sampling reflects the detail of the original “level 0” texture, but the single vec3 each fragment samples carries no information about the variation among the four higher-detail texels it was filtered down from.

When each pixel calculates its lighting, it therefore assumes that the entire pixel has that one normalMap value, effectively modelling each pixel as a pixel-sized facet, kinda like a disco ball. So when there are very sharp changes in the normal, you get little disco sparkles.

If this is correct, I was wondering whether ThreeJS might be able to somehow detect the “spread” of a mipmap-sampled normalMap and factor that in when deciding how bright the incoming light oughta be, effectively widening and flattening the BRDF by the appropriate amount.

There is an NVIDIA paper (Mipmapping Normal Maps, 2006) about how the spread of the “upstream” normal texels can be inferred from how much the box filter has shortened the normal. I don’t know if that could be brought to bear, but I thought it was a cool idea.

I have (just now) confirmed that the length of the vec3 value of texels in normal-map mipmaps drops significantly below 1.0 when the “upstream” texels have a lot of spread.
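
That matches the arithmetic: box-filtering the two unit normals (sin θ, 0, cos θ) and (−sin θ, 0, cos θ) averages to (0, 0, cos θ), whose length cos θ shrinks as the spread angle θ grows. A quick way to visualise the effect as a mask (my own debug sketch, not anything built into ThreeJS):

```glsl
uniform sampler2D normalMap;
varying vec2 vUv;

void main() {
    // Unpack the mip-filtered normal and measure how "squashed" it is.
    vec3 mapN = texture2D( normalMap, vUv ).xyz * 2.0 - 1.0;
    // Faithful texels stay unit length (squash = 0.0); texels averaged
    // from dissimilar normals come out shorter (squash > 0.0).
    float squash = 1.0 - min( length( mapN ), 1.0 );
    gl_FragColor = vec4( squash, 0.0, 0.0, 1.0 );
}
```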

Separately, I also notice that lights_physical_fragment.glsl.js contains a geometryRoughness calculation, which seems to “rough up” the material when the geometry normals are “bendy”, and that makes a lot of sense. Can anyone confirm whether there is an equivalent mechanism for textures?
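
For reference, the geometryRoughness block looks roughly like this in recent revisions (quoted from memory, so treat it as approximate):

```glsl
// From lights_physical_fragment.glsl.js: derivatives of the un-perturbed
// surface normal set a roughness floor wherever the geometry curves fast.
vec3 dxy = max( abs( dFdx( nonPerturbedNormal ) ), abs( dFdy( nonPerturbedNormal ) ) );
float geometryRoughness = max( max( dxy.x, dxy.y ), dxy.z );

material.roughness = max( roughnessFactor, 0.0525 );
material.roughness += geometryRoughness;
material.roughness = min( material.roughness, 1.0 );
```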

I did a proof of concept for this, and it does seem promising. Please let me know if this has already been invented and/or implemented.

By using the “squashed” normal effect (above) as a mask to boost roughness before the lighting calculations (similar to how geometryRoughness works), the specular aliasing is significantly reduced, at least in this case. Because it only tweaks roughness, this approach just piggybacks on the hard work others have already contributed to Three’s energy-conserving shading.
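
The change itself is tiny. A hedged reconstruction of the patch (textureRoughnessFactor is an invented name; mapN and material come from the surrounding shader chunk), injected right after the geometryRoughness lines quoted earlier:

```glsl
// How much the mip box filter shortened the sampled normal: 0.0 for
// faithful texels, approaching 1.0 where dissimilar normals collapsed.
float textureRoughness = 1.0 - min( length( mapN ), 1.0 );

// Mirror the geometryRoughness pattern: widen the lobe only where needed.
material.roughness += textureRoughnessFactor * textureRoughness;
material.roughness = min( material.roughness, 1.0 );
```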

Here’s what I found…

Top row: by software-downsampling a difficult high-resolution render, we get something like an “ideal” low-resolution image to compare against. On the right, the “texture roughness mask” is shown in red, indicating where the length of the normal texels has been “squashed” by dissimilar normals being averaged together by the mipmap box filter. The red mask shows something like 1.0 - length(normalMap).

Bottom row: the leftmost image is the current render in r169. Note the undesirable aliasing/sparkling. The next three images show the material.roughness value being increased by textureRoughness multiplied by an arbitrary factor. To my eye, 4 looks best here, but maybe 6 is more accurate if we trust our “ideal” reference above.

Importantly, pixels that don’t “need help” are not affected. Full-detail texels (mip level 0), and mipmapped texels that faithfully represent their upstream (high-res) texels, have a length of 1.0, which means no extra texture roughness is added and rendering proceeds as normal.

Thoughts?