Using array textures with stock materials

In my project I have a few hundred textures (color maps and corresponding bump maps), all of identical size. I want to put all of them in two array textures, add an extra dimension to BufferGeometry’s uv attribute which will take the texture layer index, and render the geometry using MeshPhongMaterial.
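For reference, the setup is roughly this (a simplified sketch; images, vertexCount and geometry stand in for my actual data):

import * as THREE from 'three';

// pack the same-sized RGBA maps into one DataArrayTexture, one image per layer
const width = 256, height = 256, depth = images.length;
const data = new Uint8Array( 4 * width * height * depth );
images.forEach( ( pixels, layer ) => data.set( pixels, 4 * width * height * layer ) );

const colorMaps = new THREE.DataArrayTexture( data, width, height, depth );
colorMaps.format = THREE.RGBAFormat;
colorMaps.needsUpdate = true;

// uv attribute with three components per vertex: ( u, v, layer )
const uvw = new Float32Array( vertexCount * 3 );
// ... fill u, v and the layer index for every vertex ...
geometry.setAttribute( 'uv', new THREE.BufferAttribute( uvw, 3 ) );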

I keep getting shader errors because apparently the material can’t handle the array texture. The log below is from MeshBasicMaterial, but the error is the same with MeshPhongMaterial.

THREE.WebGLProgram: Shader Error 0 - VALIDATE_STATUS false

Material Name: 
Material Type: MeshBasicMaterial

Program Info Log: Must have a compiled vertex shader attached:
SHADER_INFO_LOG:
ERROR: 0:338: 'uvundefined' : undeclared identifier
ERROR: 0:338: 'constructor' : not enough data provided for construction
VERTEX

ERROR: 0:338: 'uvundefined' : undeclared identifier
ERROR: 0:338: 'constructor' : not enough data provided for construction

  333: void main() {
  334: #if defined( USE_UV ) || defined( USE_ANISOTROPY )
  335: 	vUv = vec3( uv, 1 ).xy;
  336: #endif
  337: #ifdef USE_MAP
> 338: 	vMapUv = ( mapTransform * vec3( MAP_UV, 1 ) ).xy;
  339: #endif

Note the uvundefined identifier in the error message.

My question is: do I have to write my own shaders to make this work with array textures or is there an easier way?

Is the question too difficult?

Is there any way to make MeshPhongMaterial work with 3d texture coords? Or do I have to implement the material from scratch?

You could use a custom shader, or inject the custom functionality into an instance of MeshPhongMaterial via onBeforeCompile.
onBeforeCompile is called with a shader object that carries the vertex and fragment shader source, which you can edit or regex to add the functionality you need.
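For example, a rough sketch (the chunk name and the replacement GLSL are placeholders for whatever your array-texture logic needs):

const material = new THREE.MeshPhongMaterial( { map: arrayTexture } );

material.onBeforeCompile = ( shader ) => {
    // shader.vertexShader and shader.fragmentShader are plain strings here,
    // so you can swap whole chunks or regex individual statements
    shader.vertexShader = shader.vertexShader.replace(
        '#include <uv_vertex>',
        myCustomUvVertexGLSL // your own GLSL string
    );
    // extra uniforms can be added the same way
    shader.uniforms.layerOffset = { value: 0 };
};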

What does your texture array look like?
You get the error message because your uv coordinates are undefined.

I don’t know if you can use a textureArray with MeshPhongMaterial. If you need to do this with your own shader, I have this example here:

Thanks for the responses!

I’ve had a look at onBeforeCompile but alas, the shader code at this stage is just a bunch of #include directives. No way to replace the sampler that way. So I have to find the relevant parts in the ShaderChunks somehow and modify those.

The uv coordinates are not undefined in the geometry; I generate 3d coords explicitly. It all works with standard 2d textures and breaks when I switch to array textures, with everything else the same. The uvundefined looks like an identifier that’s generated somewhere in three’s code?

Your example looks beautiful. As for what the texture array looks like, I don’t really understand the question; it’s a Uint8Array containing pixel data.

The uvundefined identifier is generated in renderers/webgl/WebGLProgram.js: the parameters.mapUv field is undefined there because material.map.channel is undefined when material.map is a DataArrayTexture. This is probably a bug, I don’t know.

I have decided to implement my own shaders instead, because patching the chunks seems like a path to madness.

Patching the shaders isn’t actually that bad.

The THREE.ShaderChunk object contains all the chunk strings themselves.

You can modify these chunks directly at init time if you want to inject global behavior into everything, or you can use onBeforeCompile to expand/edit/modify these chunks on a per-material-instance basis.
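For the global option, a minimal sketch (the exact search string depends on your three.js revision, so check the chunk source first):

// every material compiled after this point picks up the modified chunk
THREE.ShaderChunk.uv_vertex = THREE.ShaderChunk.uv_vertex.replace(
    'vUv = vec3( uv, 1 ).xy;',
    'vUv = uv;' // placeholder edit, just to show the mechanism
);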

You can use this site: three-shaderlib-skim
to see the unrolled source code for the actual material shaders, and use that to determine which chunks you may need to modify/remove/add.

For instance, I just had to write a generic method that patches built-in materials to pull their positions from a different data source, allowing me to render proper shadows on GPU-generated particles rendered with the built-in materials.

For your specific uvundefined problem, this may be due to not having a .map assigned to the material. If the .map is not assigned, the material compiler will omit the defines that define the texture uniform and uv channels… so if your custom material needs to interact with UVs, you will have to wrap that in:

#if defined( USE_UV ) 
    vec2 myUv = vUv;  // In here we can safely access vUv
#endif

Without this approach, every material would have to be compiled with the full code for any possible combination of features that it could support.

Here’s a method that expands the shader source, replacing every #include directive with the corresponding chunk from THREE.ShaderChunk. You can call this inside onBeforeCompile with:
shader.vertexShader = expandShaderSource( shader.vertexShader );
shader.fragmentShader = expandShaderSource( shader.fragmentShader );

const expandShaderSource = ( source ) => {
    // split on every "#include <"; each piece after the first starts with a chunk name
    const parts = source.split( '#include <' );
    for ( let i = 1; i < parts.length; i ++ ) {
        const part = parts[ i ];
        const name = part.slice( 0, part.indexOf( '>' ) );
        // replace "name>" with that chunk's GLSL source from THREE.ShaderChunk
        parts[ i ] = THREE.ShaderChunk[ name ] + part.slice( part.indexOf( '>' ) + 1 );
    }
    return parts.join( '\n' );
};

uv_vertex.glsl.js has statements like this for each possible map:

vUv = vec3( uv, 1 ).xy;

This seems unusual. Why is it done this way instead of simply vUv = uv? What am I missing?

I want to replace these with

vUvw = vec3(uv, layer);

where layer is a per-instance attribute.
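For context, the JS side of that looks roughly like this (instanceCount and instancedGeometry are placeholders for my actual setup):

// one texture layer index per instance
const layers = new Float32Array( instanceCount );
// ... fill in the layer for every instance ...
instancedGeometry.setAttribute( 'layer', new THREE.InstancedBufferAttribute( layers, 1 ) );

// the vertex shader additionally gets: attribute float layer; varying vec3 vUvw;
// and in the fragment shader the map becomes a sampler2DArray sampled with texture( map, vUvw )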

That seems to work as intended, but I’m unsure if I might be missing side effects of those weird assignments.

Should I do

vUvw = vec4(uv, layer, 1).xyz;

instead?

That seems like a no-op. Not sure what’s going on there. It might be a typo/something someone overlooked. I can’t imagine it having any side effects. I’d think the compiler just optimizes it out.
It might just be written that way to match the look of the other cases where the uv is transformed by a matrix as a real vec3, sort of as a hint that this case is doing the same vertex setup as the other cases but just happens not to transform it.