Seams / Artifacts between texture tiles when using Mipmaps

Hello.
I’ve implemented a TileMap / SpriteMap component for my Three.js application.

At the moment, I’ve configured the loaded Texture object to use NearestFilter for both the magFilter and minFilter properties, to avoid the following problem:

As you can see, whenever I use LinearFilter for those properties, some seams / artifacts appear all around the tiles.
I’ve been investigating this issue a lot and it seems to be a well-known problem but -to be honest- I wasn’t able to find a comprehensive technical explanation of why this happens and -even though everyone else seems to have solved it- how to practically fix it.

Could you help me with this?


Just to let you know…

I’m trying to build a solid component…
Not only for my application, but also to eventually open a PR on the Three.js project.

I don’t know if this is something you’d be interested in, but it could be really valuable to offer such a component to the community.
If there’s no interest in it, I’ll publish it as a standalone library for anyone who wants to use it anyway.

It’s not quite obvious what you’re trying to achieve, given that you’re using a less-than-self-explanatory sample bitmap from
https://files.byloth.dev/grid.png

That said, I found changing the following lines

const MAP_WIDTH = 512;   // made that an exact power of 2 (even though your bitmap is actually 520 by 520)
const MAP_HEIGHT = 512;  // ditto

and also those:

const TILE_INNER_WIDTH = 118; // (MAP_WIDTH / STAGE_TILES_X) - 2 * STAGE_WIDTH
const TILE_INNER_HEIGHT = 118; // (MAP_HEIGHT / STAGE_TILES_Y) - 2 * STAGE_HEIGHT

makes the result less “cluttered”.

It also helps to set the delay between frames to 1000 milliseconds, so you can appreciate each frame a little:

setInterval(() =>
{
    // pick a random slot in the rendered stage…
    const index = (Random.Integer(STAGE_TILES_Y) * STAGE_TILES_X) + Random.Integer(STAGE_TILES_X);

    const element = tileBuffer.array[index];

    // …and point it at a random tile of the atlas (the +5 presumably skips the tile's 5 px margin)
    element.x = Random.Integer(TILES_X) * TILE_OUTER_WIDTH + 5;
    element.y = Random.Integer(TILES_Y) * TILE_OUTER_HEIGHT + 5;

}, 1000);

The most straightforward way to solve the issue is using array textures for the tiles. You put each tile of your atlas into an array texture layer, and it will solve all issues with linear filtering, mipmaps, offsets, rotation or wrapping. That’s what they’re made for. The downside is: you can’t use them out of the box in Three, because you need an additional dimension in the geometry’s uv attribute to select the texture layer for each tile, and you also have to patch/reimplement the material/shaders for sampler2DArray support.
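To make that last part a bit more concrete, here’s a rough WebGL2 sketch of such a material (along the lines of the official array-texture example; makeTileMaterial is a made-up helper, tileTexture is assumed to be a DataArrayTexture you’ve already built, and the geometry is assumed to carry a custom uvw attribute -none of this drops into your WebGPU/TSL setup as-is):

import * as THREE from 'three';

// Sketch only: `tileTexture` is a THREE.DataArrayTexture and the geometry carries
// a custom 3-component `uvw` attribute whose w component is the layer index.
function makeTileMaterial(tileTexture)
{
    return new THREE.ShaderMaterial({

        glslVersion: THREE.GLSL3,
        uniforms: { map: { value: tileTexture } },

        vertexShader: /* glsl */`
            in vec3 uvw;
            out vec3 vUvw;

            void main()
            {
                vUvw = uvw;
                gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
            }
        `,

        fragmentShader: /* glsl */`
            precision highp sampler2DArray;

            uniform sampler2DArray map;

            in vec3 vUvw;
            out vec4 outColor;

            void main()
            {
                // every tile lives in its own layer, so linear filtering and mipmaps
                // can never bleed across tile boundaries
                outColor = texture(map, vUvw);
            }
        `
    });
}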

Doing it the old-fashioned way is possible, but there will always be edge cases where the result is imperfect, some things you can’t do at all, and it requires additional steps when preparing the tile atlas, namely adding 1-texel padding and margins to the tiles. Proper mipmapped rendering without artifacts is going to be very difficult, maybe even impossible to achieve.
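And if you stick with a classic 2D atlas instead, the edge-padding step could look something like this -a minimal sketch that assumes your grid.png layout (130 px outer tiles, 120 px inner tiles, 5 px margins, 4 × 4 tiles) and skips the corner texels for brevity:

import * as THREE from 'three';

// Assumed atlas layout, derived from your pen.
const TILE = 120, MARGIN = 5, OUTER = 130, TILES_X = 4, TILES_Y = 4, PAD = 1;

function buildPaddedAtlas(image)
{
    const cell = TILE + 2 * PAD;
    const canvas = document.createElement('canvas');

    canvas.width = TILES_X * cell;
    canvas.height = TILES_Y * cell;

    const ctx = canvas.getContext('2d');

    for (let y = 0; y < TILES_Y; y++)
    {
        for (let x = 0; x < TILES_X; x++)
        {
            const sx = x * OUTER + MARGIN;   // inner region of the source tile
            const sy = y * OUTER + MARGIN;
            const dx = x * cell + PAD;
            const dy = y * cell + PAD;

            // the tile itself
            ctx.drawImage(image, sx, sy, TILE, TILE, dx, dy, TILE, TILE);

            // 1-texel borders duplicated from the outermost rows / columns
            ctx.drawImage(image, sx, sy, TILE, 1, dx, dy - PAD, TILE, PAD);              // top
            ctx.drawImage(image, sx, sy + TILE - 1, TILE, 1, dx, dy + TILE, TILE, PAD);  // bottom
            ctx.drawImage(image, sx, sy, 1, TILE, dx - PAD, dy, PAD, TILE);              // left
            ctx.drawImage(image, sx + TILE - 1, sy, 1, TILE, dx + TILE, dy, PAD, TILE);  // right
        }
    }

    // when sampling, the per-tile UVs should target the inner TILE region of each padded cell
    return new THREE.CanvasTexture(canvas);
}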


I’m not sure what the correct use of an array texture should look like…
I tried following some of the examples on the Three.js Examples page…

Did you mean something like that?

Despite all the changes, the final result hasn’t changed much.


However…
I came across the fwidth function online and -trying to better understand what it does- I used it in my project, getting this curious result:

It seems like this value can track exactly where the seams / artifacts are going to be generated.
Honestly, I don’t understand what it does exactly.

Browsing the Net a bit more, I found someone who used this function together with the smoothstep function and -after some attempts- I got this result:

I’m trying to figure out how to replace those blank pixels with the actual tile colors…
But if I use -again- texture(map, tileUv) I get exactly the same initial result.

Any ideas?
Am I on the wrong track? :thinking:

For texel-accurate texture mapping you have to provide accurate UV coordinates. PlaneGeometry generates them to go from (0, 0) to (1, 1), but that’s not correct in this case.

What you actually want is (.5 / TILE_INNER_WIDTH, .5 / TILE_INNER_HEIGHT) to (1 - .5 / TILE_INNER_WIDTH, 1 - .5 / TILE_INNER_HEIGHT) because UV coordinates address the center of texels.

I’m not sure what you’re suggesting…
Something like this?

// half a texel, assuming a [0, 1] range over a single tile
const _halfOverSize = div(0.5, innerTileSize).toVar();

// repeat the plane's UVs, remap them into the atlas, then clamp
const tileUv = vUv.fract()
    .mul(innerTileRatio)
    .add(tileOffset)
    .clamp(_halfOverSize, sub(1, _halfOverSize));

Note that chaining .mul(innerTileRatio) with .add(tileOffset) already keeps the UVs above 0.03 and below 0.96.
This is because every tile has a total resolution of 130 × 130, but during my research someone online suggested rendering only a smaller portion of each one. So the actual rendered resolution of each tile is 120 × 120, leaving a margin all around that matches the tile’s look & colors (to make sure to avoid any clipping issues).

Anyway… This is the result, after the change you suggested:

Something has changed, indeed…
But some artifacts are still there… :thinking:

Sorry man, I don’t know this material API, and I don’t have time or patience to learn what’s going on in your code.

Suppose you want to draw only one single tile, then you would do this to adjust the texture coordinates:

    const halfU = .5 / TILE_INNER_WIDTH;
    const halfV = .5 / TILE_INNER_HEIGHT;
    const uv = geometry.getAttribute('uv');
    // inset each corner by half a texel, so sampling happens at texel centers
    uv.setX(0, halfU);
    uv.setY(0, 1 - halfV);
    uv.setX(1, 1 - halfU);
    uv.setY(1, 1 - halfV);
    uv.setX(2, halfU);
    uv.setY(2, halfV);
    uv.setX(3, 1 - halfU);
    uv.setY(3, halfV);
    uv.needsUpdate = true; // required if the geometry has already been rendered once

Try this with a single tile, it should look correct then.

Oh, ok… No problem.

Yeah, I know.
I started this project using WebGPU so I have to write shaders using the new TSL language, which is quite odd to me too.

So what you’re suggesting isn’t exactly possible the way you described it; I approximated it as best I could in my last playground…

I think… I’m not sure…


Don’t worry, though…
I’ll wait for someone who might be able to help me with this problem; in the meantime, I’ll go ahead without mipmaps.

Thanks for your time and your dedication. Appreciated! :face_holding_back_tears:

Why wouldn’t it be possible? I have implemented this more than once in raw WebGL and native OpenGL-based frameworks. Not in Three.js though or in WebGPU (which I can’t run), but I don’t see a reason why it wouldn’t be possible.

PlaneGeometry uses indexed geometry, which is another possible reason why your mapping is incorrect. You can’t have shared vertices in a tile map.
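To illustrate that last point (a quick, untested sketch; the segment counts are placeholders):

import * as THREE from 'three';

// An indexed PlaneGeometry with N × M segments shares the inner corner vertices
// between neighbouring quads, so one vertex would need four different UVs at once.
const shared = new THREE.PlaneGeometry(8, 8, 8, 8);

// De-indexing duplicates the vertices per triangle: every tile corner gets its own
// vertex and therefore its own texture coordinates.
const perTile = shared.toNonIndexed();

console.log(shared.getAttribute('position').count, perTile.getAttribute('position').count);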

«Not in Three.js though or in WebGPU (which I can’t run)»

Wait… So you can’t see my Codepen?
I can share screenshots or videos if it helps! :pleading_face:


«Why wouldn’t it be possible?»

Well…
Let’s start by saying that I’m not an expert in Three.js, WebGL or even WebGPU; I know the basics and I’m developing this project partly to learn and improve my knowledge of these technologies.

So… What I understood (which could be wrong) is that when you use TSL you don’t have control over attributes in the way you described.

Or, at least… I don’t know how to do it.
The documentation isn’t very detailed on that… :smiling_face_with_tear:


«You can’t have shared vertices in a tile map.»

This is very helpful information! I didn’t know that! :exploding_head:

As I said in my first message: «I wasn’t able to find a comprehensive technical explanation of why this happens» so -if you know it or have any source that I may have missed- I’m here to listen and gladly to learn! :face_holding_back_tears:


«I have implemented this more than once […]»

No matter what language… No matter what technology… No matter how long ago…
If you’ve done this in the past, I’ll be forever grateful if you can share it…
A repository… An excerpt… An example… Whatever you can share will do!

It will then be my job to read it, understand it, convert it and make it work.

Thank you! :face_holding_back_tears:

I can see the Codepen, but my browser does not support WebGPU. I don’t have any tile rendering code that I can share freely, sorry. I do have TypeScript code that live-patches some of Three’s shaders to make them work with array textures and uvw coordinates. I can share that if it helps you.

All the pieces to solve this are there:

  1. Use array textures for efficient map rendering and to solve the filtering issues: array layers don’t bleed into each other when scaling, scrolling, rotating, filtering, etc. like a 2D sheet would. Your grid.png has borders around the tiles; make sure they do not end up in the array texture. You only want the actual tile pixels in there. OUTER_WIDTH/HEIGHT should be irrelevant once you’ve created the ArrayTexture, which should be of size INNER_WIDTH x INNER_HEIGHT x NUM_TILES (see the sketch after this list).

  2. Accurate uv coordinates: make sure you specify coordinates at the center of texels to achieve pixel-perfect rendering, without getting artifacts caused by interpolation/wrapping/clamping at the edges. Also note that for a simple map you should end up with all tiles having an identical set of uv coordinates, with only the texture layer/depth being different.

  3. No shared vertices between tiles: in indexed geometry, a shared vertex in the middle of the map belongs to 4 tiles. That won’t work because that one vertex can’t be 4 tile corners at once, each with different texture coordinates.
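Here’s the sketch mentioned in point 1: building the array texture from grid.png, keeping only the inner pixels of each tile. The sizes are assumptions based on your pen, and I haven’t run this against it:

import * as THREE from 'three';

// Assumed layout of grid.png: 4 × 4 tiles, 130 px outer, 120 px inner, 5 px margin.
const INNER = 120, OUTER = 130, MARGIN = 5, TILES_X = 4, TILES_Y = 4;

function buildTileArrayTexture(image)
{
    const canvas = document.createElement('canvas');
    canvas.width = canvas.height = INNER;

    const ctx = canvas.getContext('2d', { willReadFrequently: true });
    const layers = TILES_X * TILES_Y;
    const data = new Uint8Array(INNER * INNER * 4 * layers);

    for (let layer = 0; layer < layers; layer++)
    {
        const sx = (layer % TILES_X) * OUTER + MARGIN;
        const sy = Math.floor(layer / TILES_X) * OUTER + MARGIN;

        // copy only the inner 120 × 120 pixels of the tile; the borders stay out
        ctx.drawImage(image, sx, sy, INNER, INNER, 0, 0, INNER, INNER);
        data.set(ctx.getImageData(0, 0, INNER, INNER).data, layer * INNER * INNER * 4);
    }

    const texture = new THREE.DataArrayTexture(data, INNER, INNER, layers);

    // heads-up: data textures are not Y-flipped on upload, so your V coordinates may need flipping
    texture.format = THREE.RGBAFormat;
    texture.minFilter = THREE.LinearMipmapLinearFilter;
    texture.magFilter = THREE.LinearFilter;
    texture.generateMipmaps = true;
    texture.needsUpdate = true;

    return texture;
}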

I see you’re using an array of uniforms for the tile map. I usually use geometry or instance attributes, or extended uvw texture coordinates for layer selection, so the grid can be rendered without special logic in the shader. If you expect a lot of updates to the map, then your approach might be better.

Something like this should be possible even when using WebGPU, but the material/shaders must match the new definition of course:

		geom.deleteAttribute('uv')
		geom.setAttribute('uvw', new Float32BufferAttribute(uvw, 3)) // the w component selects the layer

It’s what I’m doing in my project, but I’m doing it to accelerate rendering of a large number of meshes, not tiles.
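Filling such a uvw buffer for a grid of non-indexed quads could look roughly like this (hypothetical helper; adjust it to however you actually build your geometry and vertex order):

// Two triangles = six vertices per tile; w is the array-texture layer for that tile.
function buildUvw(tilesX, tilesY, layerForTile)
{
    const uvw = [];

    // one (u, v) corner per vertex; the order must match your position buffer,
    // and per point 2 you'd additionally inset u and v by half a texel
    const corners = [[0, 1], [1, 1], [0, 0], [1, 1], [1, 0], [0, 0]];

    for (let y = 0; y < tilesY; y++)
    {
        for (let x = 0; x < tilesX; x++)
        {
            const layer = layerForTile(x, y);

            for (const [u, v] of corners) { uvw.push(u, v, layer); }
        }
    }

    return new Float32Array(uvw);
}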

Note that I’m talking about WebGL 2, and I don’t know jack about TSL and WebGPU.


Got it! :exploding_head:

I finally understand what the problem actually is and why this happens!
Thanks again to everyone who took the time to help me with my question.

Really appreciated! :heart:

TL;DR


So…
If you’re interested -like me- in understanding what’s happening here and -more importantly- WHY it happens… You’re -of course- in the right place. :wink:

It all started with this CodePen, which I’d already shared previously:

At first, it wasn’t clear to me how the fwidth function could perfectly detect and highlight where these artifacts were appearing.
It obviously wasn’t a coincidence; it had to be part of or somehow related to the problem, I thought.

And I was right.

Setting aside the details of how fwidth works and the complex math behind it…
I found out that the values this function works with are also the main ingredients of the internal computation responsible for deciding which mipmap level to use for each fragment.

Take this example:

Here I’m rendering the texture as-is, without any additional processing…
If you move the camera around with your mouse, you’ll probably notice that the more you view the plane at an angle, the blurrier the texture gets.
That’s -of course- due to mipmaps!

But HOW does the computer know which level to apply to each texel?

Try the same thing with this:

See the pattern? :smirk:
The higher the fwidth value, the blurrier the mipmap becomes. That’s it.


But again… Why these artifacts, then?

When you render a texture “the simple way” (texture(map, uv);), as I did in my second CodePen, you leave it to the computer (or rather: to the GPU) to automatically pick the correct mipmap level.
To do so, it calculates these internal values (the same ones fwidth is built on) from the difference between the UV coordinates you pass to texture() for a given fragment and the ones passed for its adjacent fragments -in other words, the screen-space derivatives of the UVs.

When tiling is used -of course- every fragment on a tile border sees a huge UV difference compared to its neighbours, because the wrapped and offset UVs jump discontinuously right there.
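To make this concrete, here’s -very roughly, and much simplified compared to the real specs- the kind of estimate the GPU derives from those derivatives. It’s plain JavaScript, just to show the idea; the numbers assume a 520 × 520 texture like mine:

// du/dx, dv/dx, du/dy, dv/dy: how much the UVs change between this fragment and its
// right / bottom neighbours (the same derivatives fwidth is built on:
// fwidth(uv) = abs(dFdx(uv)) + abs(dFdy(uv))).
function mipLevel(dudx, dvdx, dudy, dvdy, textureWidth, textureHeight)
{
    // scale the UV derivatives to texel units
    const dx = Math.hypot(dudx * textureWidth, dvdx * textureHeight);
    const dy = Math.hypot(dudy * textureWidth, dvdy * textureHeight);

    // the classic OpenGL-style estimate: log2 of the biggest footprint
    return Math.max(0, Math.log2(Math.max(dx, dy)));
}

// A fragment in the middle of a tile: UVs change smoothly => low LOD => sharp mipmap.
console.log(mipLevel(1 / 520, 0, 0, 1 / 520, 520, 520)); // ~0

// A fragment right on a tile border: the wrapped UVs jump => huge derivative => one of
// the tiniest, most "averaged" mipmap levels gets sampled. That's the seam.
console.log(mipLevel(0.9, 0, 0, 1 / 520, 520, 520)); // ~8.9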

THAT’S WHY in my first CodePen, fwidth highlights those texels!
And that’s WHY these artifacts appear!

Usually, the smaller the mipmap, the more the colors blend together.
That’s why the artifacts have always looked like an average color of the entire texture: right at the seams, the GPU ends up sampling one of the tiniest mipmap levels!


«Mmmh… Ok… So how do you solve this?»

There are two different ways.

The first one is to force a specific mipmap level (called “Level of Detail” or “LOD”).
The lower this number is, the sharper the mipmap level that gets used.

Here’s an example:

The “problem” with this solution is that it essentially disables mipmapping altogether, much like my original NearestFilter workaround.

The second one -however- is to compute the LOD (or rather, the UV derivatives it is based on) yourself -from the continuous, un-wrapped UVs- and hand it to the GPU explicitly, instead of letting it derive it from the seam-crossing tile UVs.

Here’s an example:

As you can see, in this case, when you move the camera around, you’ll still see mipmaps in action.
With this example texture, the result may not look as ideal as it could.

In my case, where tiles are more similar and uniform, the result is simply stunning!
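My project uses TSL, but the idea behind both solutions is easier to show as plain GLSL, so here’s a sketch (not my actual code; vUv, innerTileRatio, tileOffset and map follow the names from my earlier snippets):

// GLSL sketch of both fixes, kept as a string just for reference.
const fragmentSnippet = /* glsl */`
    vec2 tileUv = fract(vUv) * innerTileRatio + tileOffset;

    // 1) Force a fixed LOD: no more seams, but effectively no mipmapping either.
    vec4 forced = textureLod(map, tileUv, 0.0);

    // 2) Keep mipmapping, but feed the GPU derivatives taken from the CONTINUOUS vUv
    //    (before fract() and the per-tile offset), so they never "jump" at a tile
    //    border; the LOD is then computed from smooth values.
    vec4 smoothMips = textureGrad(map, tileUv, dFdx(vUv * innerTileRatio),
                                               dFdy(vUv * innerTileRatio));
`;

If TSL exposes equivalents for textureLod / textureGrad in your three.js revision, the same idea should translate directly.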

For a scenario like this, I’d suggest writing an even smarter shader to achieve a better result.
Maybe I’ll work on this in the future.

I’d probably start by merging this logic, in some way, as well:


If you’ve read this far, thank you for your time and patience in following my thoughts and my less-than-ideal English.

Thanks again, everyone. :upside_down_face: