PBR from one image

Hello,

A good-looking PBR material needs four textures, and that’s a lot of data to transfer. In Blender, you can use just one texture and play with different nodes, as shown in the following image.

[image: node composition]

I have faked bump/metal/roughness maps with just two nodes (ColorRamp and Bump). Is it possible to do this in Three.js? How?

I already used ShaderFrog but it doesn’t seem to be PBR oriented. Any other tool? Any idea?

Thank you!


You can combine a roughness, metalness and ambient occlusion map into one texture for MeshStandardMaterial and MeshPhysicalMaterial. The rule is:

  • the red channel represents ambient occlusion
  • the green channel represents roughness
  • the blue channel represents metalness

Other combinations are not supported. I’m not sure about the support, but there might be a glTF exporter that can automatically merge these textures during export. Otherwise, you have to do this yourself.
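To make the packing rule concrete, here is a minimal pure-JavaScript sketch (the function name `packORM` and the sample values are my own, not part of three.js) that interleaves three grayscale maps into one RGB buffer following that convention:

```javascript
// Pack three grayscale maps into one RGB buffer following the
// three.js convention: R = ambient occlusion, G = roughness, B = metalness.
// In a real pipeline you would do this offline (image editor or exporter).
function packORM(ao, roughness, metalness) {
  const pixelCount = ao.length;
  const packed = new Uint8Array(pixelCount * 3);
  for (let i = 0; i < pixelCount; i++) {
    packed[i * 3 + 0] = ao[i];        // red channel: ambient occlusion
    packed[i * 3 + 1] = roughness[i]; // green channel: roughness
    packed[i * 3 + 2] = metalness[i]; // blue channel: metalness
  }
  return packed;
}

// Example: two pixels
const packed = packORM(
  Uint8Array.from([255, 128]), // AO
  Uint8Array.from([64, 200]),  // roughness
  Uint8Array.from([0, 255])    // metalness
);
// packed = [255, 64, 0, 128, 200, 255]
```

The packed texture would then be assigned to the material’s `aoMap`, `roughnessMap` and `metalnessMap` properties at once.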

/cc @donmccurdy


Thank you, Mugen87. I read about that before posting and I think it is a good option.

However, I don’t understand why the AO occupies a channel when, IMHO, it could easily be blended into the base/albedo map, which would free a black-and-white channel for a bump map. Sure, bump maps are not as good as normal maps, but they are good enough. It’s not about the best results but about efficiency.

I know you’re knowledgeable, so this may mean there is no other option.

I know that, by default, you cannot export the node tree in the image. Maybe there’s a trick, but I would not mind doing it in JavaScript… I am exploring possibilities.

There is an issue when baking the AO into the diffuse texture: many professional assets put every surface color of an asset into a single texture. That means certain parts of the texture are sampled multiple times at different places on the geometry. However, the ambient occlusion is often different at these places. In order to decouple the pure surface color from the lighting data, you use two different maps (diffuse map and ambient occlusion map) with two different sets of texture coordinates.

Blender 2.8 or Substance Painter should be able to do this.

IMHO it can easily be blended to the base/albedo map, which would free a black/white channel for a bump.

You can do this for a cheap effect, but it doesn’t look as good. The renderer will be simulating ambient and direct lighting on a dark surface, which looks quite different than a brightly colored surface that’s actually receiving less ambient light. Ambient occlusion is supposed to affect only indirect light, not point lights etc. Baking it to the color map is OK in some cases (especially unlit materials), but often looks dirty instead of shadowed and isn’t correct for PBR.
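For what it’s worth, “baking AO into the color map” boils down to a per-pixel multiply of the base color by the occlusion value. A minimal sketch (the function name and sample values are illustrative, not from any library) shows why this darkens the surface under all lighting rather than only ambient light:

```javascript
// Straight per-pixel multiply of base color by AO. The darkening applies
// under ALL lighting (direct and ambient alike), which is why baked AO
// often reads as dirt rather than as occluded ambient light.
function bakeAOIntoAlbedo(albedoRGB, ao) {
  const out = new Uint8Array(albedoRGB.length);
  for (let i = 0; i < ao.length; i++) {
    const occlusion = ao[i] / 255; // 0 = fully occluded, 1 = unoccluded
    out[i * 3 + 0] = Math.round(albedoRGB[i * 3 + 0] * occlusion);
    out[i * 3 + 1] = Math.round(albedoRGB[i * 3 + 1] * occlusion);
    out[i * 3 + 2] = Math.round(albedoRGB[i * 3 + 2] * occlusion);
  }
  return out;
}

// One bright red pixel, 50% occluded:
const baked = bakeAOIntoAlbedo(Uint8Array.from([255, 0, 0]), Uint8Array.from([128]));
// baked = [128, 0, 0]
```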

glTF doesn’t support bump maps by design; it’s better practice to use a normal map. But you can always bring in a bump map separately, of course.


glTF doesn’t support bump maps, by design, it’s better practice to use a normal map

Not sure I agree with this. You’ll definitely get better quality with a normal map, but they have some drawbacks too:

  1. They are much harder to create
  2. They don’t compress well. Generally, you should store normal maps as PNGs, which means you have to download much more data.

You usually don’t need a metalness map. Materials are either metal or not metal, so you can set material.metalness to 1 or 0 accordingly and avoid this map. If you’re making a single material for an object that’s part metal and part non-metal, the map will be pure black and white (with maybe a pixel or two of grey padding between) and will compress to a tiny size. The only common exception is something like rusty metal, where you’ll need to ‘cheat’ with a greyscale metalness map.

Do you know how JPEG compression will affect a combined map like this? JPEG is highly tuned for lossy compression that looks good visually. From what I’ve heard, it’s not a good idea to JPEG-compress images that store other kinds of data, such as normal maps.

I agree with everything that has been said.

  • Normal maps are better, but more expensive and difficult to create
  • Blending AO onto the albedo may lead to artifacts
  • glTF is built this way, and good luck changing that.

The idea behind my original post was that, in my day-to-day job, I have scenes with 20+ materials, and transferring them to the web is challenging in terms of bandwidth. Web users don’t have an Nvidia RTX 2080 Ti like I do; they may run on an integrated GPU, and they still expect the page to be usable within 3 seconds at most.

I am a 3D artist, not a programmer, although I’m learning Lit-Element/Polymer because of its small footprint and lazy-loading capabilities. Downloading and loading 20 images or 60 is not the same, and I’m trying to find a balance, because staring at a loading bar for 30 seconds is not sexy.

IMHO, Three.js holds a different place than UE4 (HTML export footprint is 120 MB), Godot (footprint may be 20 MB), or even the upcoming Google Stadia platform (which will be costly). Three.js’s strength? It makes 3D ubiquitous; it feels like just another img/text element on the page. That’s why I’m trying to make my 3D scenes lightweight, and I won’t be using a 20 MB Hero Tree.

If you don’t know how I could replicate the node tree shown in the screenshot, it’s ok. It may be a good feature request. Anyway, thank you for your interest in this topic.


That happens because many compression algorithms like JPG do not treat color channels individually but as a whole. That works well for color information, but not for packed data like normals. Of course, how visible the compression artifacts are going to be depends highly on the compression ratio.

In any event, certain texture compression formats like ASTC provide special modes or pre-defined encoding settings in order to produce an optimized output for data-source textures (like normal maps). For example -normal_psnr or -normal_percep.

Since the usage of texture compression formats in WebGL is a bit inconvenient, I would still compress normal maps with JPG rather than PNG. The difference in size is still worth it.

By hand in Photoshop, yes. But baking a bump map to a normal map is easy in a program like Blender. In general, glTF focuses on what is best practice for the renderer and requires export tools to do more up front. And bump maps are pretty old-school these days for games and such. From the Unity docs:

Modern realtime 3D graphics hardware rely on Normal Maps, because they contain the vectors required to modify how light should appear to bounce off the surface. Unity can also accept Height Maps for bump mapping, but they must be converted to Normal Maps on import in order to use them.


It depends. For hard surfaces you might get better compression with PNG. For organic surfaces probably not. The formats handle compression differently, and with large areas of solid colors PNG can be better.

And of course glTF will eventually have proper compressed texture support.

Since the usage of texture compression formats in WebGL is a bit inconvenient, I would still compress normal maps with JPG rather than PNG. The difference in size is still worth it.

Try different things of course, but the artifacts can be very noticeable on JPG normal maps…


tl;dr it’s complicated, and experimenting with your textures to find the right choice is probably a good idea. Related: https://squoosh.app/.


It’s pretty depressing that there are no good free tools for creating texture atlases. :disappointed:

Well at least there’s a cool new app for combining textures :grin:


Squoosh.app is great, I’ve been running that on all my textures for the last few months and I haven’t found any other compression utility that beats it.

For hard surfaces you might get better compression with PNG

It’s true; I was thinking about this in ‘worst case’ mode. In general, if your texture is mainly single blocks of color with a couple of lines (tiled walls, for example), then compression is not a problem for you anyway. Maybe PNG will be better, but when it’s 10 kB vs 15 kB that’s not really important. I’m thinking about high-frequency normals (lots of tiny details), where a 1024×1024 PNG may be >1 MB and JPG compression will kill the details.

Still, I’m willing to accept that bump maps are a thing of the past and we’ll have to accept these annoyances :grin:

Looking forward to the time when we can use proper compressed textures on the web. @Mugen87 I think calling the current situation ‘a bit inconvenient’ is quite an understatement!

I have faked bump/metal/roughness maps with just two nodes (ColorRamp and Bump). Is it possible to do this in Three.js? How?

Back to the original question :grin:
I’d say that you could do this fairly easily for the color ramp, since that’s just a simple gradient mask. You could write a custom function in JS to achieve this.

As I mentioned above, for a wood texture you don’t need a metalness map; you just set metalness to zero. Or you could create a 1×1 black texture if you really want that map!

That just leaves the bump node. It looks like Blender’s bump node is creating a normal map from a height map, the same as Unity does per @Mugen87’s comment above. I have no idea how hard that would be to implement, but I guess Sobel operators are going to feature somewhere. Blender is open source, so you could check out what they’re doing there. Check out these links too:

https://www.gamedev.net/forums/topic/475213-generate-normal-map-from-heightmap-algorithm/
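For reference, here is one possible height-map → normal-map sketch in plain JS using Sobel gradients, as mentioned above. This is an assumption-laden illustration, not Blender’s actual algorithm: edge pixels are simply clamped, and the green-channel orientation depends on the normal-map convention (flip `ny` for DirectX-style maps).

```javascript
// Convert a grayscale height map (Uint8Array, row-major) into an RGB
// normal map using Sobel filters. `strength` scales the bumpiness.
function heightToNormal(height, width, heightPx, strength = 1) {
  const normals = new Uint8Array(width * heightPx * 3);
  // Sample the height map with clamped (edge-repeating) coordinates.
  const at = (x, y) => {
    x = Math.min(width - 1, Math.max(0, x));
    y = Math.min(heightPx - 1, Math.max(0, y));
    return height[y * width + x] / 255;
  };
  for (let y = 0; y < heightPx; y++) {
    for (let x = 0; x < width; x++) {
      // Sobel gradients in x and y
      const dx = (at(x + 1, y - 1) + 2 * at(x + 1, y) + at(x + 1, y + 1))
               - (at(x - 1, y - 1) + 2 * at(x - 1, y) + at(x - 1, y + 1));
      const dy = (at(x - 1, y + 1) + 2 * at(x, y + 1) + at(x + 1, y + 1))
               - (at(x - 1, y - 1) + 2 * at(x, y - 1) + at(x + 1, y - 1));
      // Normal = normalize(-dx * strength, -dy * strength, 1)
      const nx = -dx * strength, ny = -dy * strength, nz = 1;
      const len = Math.hypot(nx, ny, nz);
      const i = (y * width + x) * 3;
      // Encode components from [-1, 1] into [0, 255]
      normals[i + 0] = Math.round((nx / len * 0.5 + 0.5) * 255);
      normals[i + 1] = Math.round((ny / len * 0.5 + 0.5) * 255);
      normals[i + 2] = Math.round((nz / len * 0.5 + 0.5) * 255);
    }
  }
  return normals;
}

// A flat 2×2 height map yields the "up" normal everywhere:
const flat = heightToNormal(Uint8Array.from([100, 100, 100, 100]), 2, 2);
// every pixel encodes (128, 128, 255)
```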

And if you do end up implementing any of this, please share it here. I’d be particularly interested in a height → normal map plugin for three.js.

Thank you @looeee, I’ve put that on my list of things to learn, because a few procedural nodes could go a long way in curbing bandwidth requirements. I’ll keep that in mind!


There are two approaches you could take here: either you could pre-process the textures in JS, which I guess would be easier to implement but less flexible, or you could try adding these as nodes in the (currently experimental) node-based material system. I’m not sure how much processing is required for the height -> normal map; it may not be feasible in real time. But the color ramp should work well as a node.

Or, maybe this one already does bump -> normal?
