Three.js Shaders - Why one for each material?

So, I have recently been writing my own 3D renderer modeled after three.js. I had made a game wrapper for three.js and ammo.js, entirely in JavaScript, and wanted to get rid of the three.js dependency. For my renderer, I made a master shader that takes all of the attributes and uniforms and does all of the math I need. Since there are different materials, the shader branches on which material is being drawn and runs the correct math for it. So, my question is: why does three.js use different shaders for different materials? When I tried that approach in my engine, giving each material its own shader, I only got about 10 fps with around 30 objects. Yet looking at three.js's code, I see that the different materials do have different shaders, and it still runs smoothly with hundreds of objects. So I guess my question is really two questions: 1, why a different shader for each material, and 2, how does three.js keep that fast?
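
Roughly, the master-shader idea looks like this: one fragment shader that branches on a material-type uniform. This is a simplified sketch with made-up names, not my actual code:

const masterFragmentSource = `#version 300 es
precision mediump float;
uniform int uMaterialType; // 0 = unlit, 1 = lambert, 2 = phong
uniform vec4 uMatCol;
in float vDiffuse;  // lighting terms computed in the vertex shader
in float vSpecular;
out vec4 fragColor;
void main(void){
    if (uMaterialType == 0) {
        fragColor = uMatCol;
    } else if (uMaterialType == 1) {
        fragColor = uMatCol * vec4(vec3(vDiffuse), 1.0);
    } else {
        fragColor = uMatCol * vec4(vec3(vDiffuse + vSpecular), 1.0);
    }
}`;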


The different shaders follow different approaches and cover different use cases, having different features or rendering something else entirely than triangles (lines, points). For example, Lambert does per-vertex lighting (which is cheap), while Phong does per-fragment lighting, and PBR gets more demanding the more features are used. There are also other materials that do specific things, such as rendering the normals or the depth of the scene.
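
To make the cost difference concrete, here is roughly where the same diffuse term lives in each approach (illustrative snippets, not the actual three.js shader chunks):

// Lambert: the dot product runs once per vertex and the result is
// interpolated across the triangle
const lambertVertexChunk = `
    vDiffuse = max(dot(normalize(transformedNormal), lightDir), 0.0);
`;

// Phong: the interpolated normal is re-normalized and the dot product
// runs again for every fragment the triangle covers
const phongFragmentChunk = `
    float diffuse = max(dot(normalize(vNormal), lightDir), 0.0);
`;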

Using a “master” shader is generally a bad idea for code maintenance alone, unless it’s basically just a final standard-material shader and your app is small enough that this is all you need. Some things will also require other materials anyway, such as depth materials for shadows.

Internally, shader programs are also shared between materials as long as their configuration is the same. Having a normal map or not is such a configuration, as is the alphaTest value (a compile-time constant), while color or opacity only affect the configuration in terms of whether they are defined or not (they are uniforms).
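
A sketch of how that sharing can work (the general technique, not three.js’s actual code): build a cache key from everything that changes the generated GLSL, and keep plain uniform values like color and opacity out of it:

const programCache = new Map();

function getProgram(gl, material) {
    // everything in the key changes the compiled source
    const key = [
        material.map ? 'USE_MAP' : '',
        material.normalMap ? 'USE_NORMALMAP' : '',
        'ALPHATEST ' + material.alphaTest, // baked in as a constant
    ].join('|');

    let program = programCache.get(key);
    if (!program) {
        // buildSource and compileProgram are hypothetical helpers
        program = compileProgram(gl, buildSource(material));
        programCache.set(key, program);
    }
    return program;
}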

You can check how many programs are in use with renderer.info.programs. To avoid creating new configuration variations of a material, avoid using many different values for constant properties like alphaTest, and don’t declare maps sometimes and not other times.
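
For example, assuming renderer is your WebGLRenderer:

console.log(renderer.info.programs.length); // distinct compiled programs
console.log(renderer.info.render.calls);    // draw calls in the last frame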

We need more info about your specific scene; a live example (codepen, jsfiddle, etc.) would make this far clearer. The performance drop likely comes from something other than just an accidental 30 programs.


It’s worth noting that “if” statements and branching are often things you want to minimize in a shader, as they can lead to performance problems, especially when the conditions are not completely static: opengl - Do conditional statements slow down shaders? - Stack Overflow
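
In practice the distinction is between branches resolved when the shader is compiled and branches decided at run time (illustrative GLSL held in JS strings):

// resolved at compile time: the untaken side does not exist in the
// final program at all
const staticBranch = `
    #ifdef USE_MAP
        color *= texture(uMap, vUv);
    #endif
`;

// decided at run time: a branch on a uniform is usually cheap because
// every invocation in the draw call takes the same path, but a branch
// on a varying or a texture read can force the GPU to run both sides
const dynamicBranch = `
    if (uUseMap) {
        color *= texture(uMap, vUv);
    }
`;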

But also for maintenance, as Fyrestar says. PBR shaders are more than complicated enough by themselves without trying to mix other shading models in. :slight_smile:


So, what I want to do is: rather than doing the conditional in the shader, do it in JavaScript/the render function to pick the correct shader. Got it.

As it turns out, I was compiling the shaders every frame :man_facepalming:. That would explain why it was running so slowly: instead of compiling once, I compiled each material’s vertex and fragment shaders whenever it needed to be rendered, without storing the result anywhere. So with 10 objects, I compiled the shaders for all 10 objects in one frame, then did the same thing the next frame, and so on. And sorry I couldn’t post a live example; I had already fixed the issue a while earlier by implementing the master vertex/fragment shader.
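
The general pattern that avoids this, whether with one master shader or one per material, is to compile at creation time, not at draw time (hypothetical names, not my exact code):

function createMaterial(gl, params) {
    return {
        params,
        // compiled exactly once, here
        program: compileProgram(gl, buildSource(params)),
    };
}

function drawMesh(gl, mesh) {
    gl.useProgram(mesh.material.program); // per frame: just bind, never compile
    // ...set uniforms, bind buffers, issue the draw call...
}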

OK, so after looking more thoroughly through the Stack Overflow question, I saw that another person said that if the conditional is driven by a uniform, then don’t bother optimizing it away. And that’s what my master shader does: a branching conditional, but each branch is only activated through a uniform, which means it shouldn’t hurt performance, unless I’m understanding that completely wrong.

On a side note, when trying to implement the multiple different shaders, I got something that looks completely wrong from the Lambertian lighting calculation.

I get this:

[screenshot: broken lighting]

instead of this:

[screenshot: expected result]

I don’t know if it’s a problem with the shaders, but I have checked and the math seems right, so I don’t know what’s going on.

#version 300 es
in vec4 aVertex_position;
in vec3 aVertex_normal;
in vec4 aVertex_color;
in vec2 aVertex_uv;

uniform mat4 uProjectionMatrix;
uniform mat4 uModelViewMatrix;
uniform mat4 uNormalMatrix;
uniform vec4 uMatCol;
uniform bool uVertCol;
uniform vec3 uLightPosition;
uniform vec4 uAmbientLight;

out mediump vec3 vVertPos;
out mediump vec4 vMatCol;
out mediump vec2 vTextCoord;

void main(void){
    // per-vertex color if enabled, otherwise the flat material color
    lowp vec4 color = uVertCol ? aVertex_color : uMatCol;
    gl_Position = uProjectionMatrix * uModelViewMatrix * aVertex_position;
    vVertPos = (uModelViewMatrix * aVertex_position).xyz;
    // transforms the normal with w = 1.0, so any translation in
    // uNormalMatrix is added to it, and the result is not re-normalized
    lowp vec4 tN = uNormalMatrix * vec4(aVertex_normal, 1.0);
    lowp vec3 dir = normalize(uLightPosition);
    // Lambert diffuse term
    lowp float b = max(dot(tN.xyz, dir), 0.0);
    vMatCol = color * (vec4(b, b, b, 1.0) + uAmbientLight);
    vTextCoord = aVertex_uv;
}

is the vertex shader; all the fragment shader does is set the color to vMatCol and do some fog/texture calculations. I already checked whether those were the problem, and they aren’t. Any help would be appreciated, as I’m still kind of new to 3D rendering.
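
One thing that stands out in the shader above, though this is a guess since the rest of the engine isn’t shown: the normal is transformed with w = 1.0, so if uNormalMatrix carries any translation it gets added straight into the normal, and the result is never re-normalized. Either of those will distort the Lambert term. Transforming the normal as a direction would look something like this (a sketch against the shader above):

// hypothetical fix: treat the normal as a direction (w = 0.0) and
// re-normalize it before the dot product
const normalTransformFix = `
    mediump vec3 n = normalize((uNormalMatrix * vec4(aVertex_normal, 0.0)).xyz);
    lowp float b = max(dot(n, normalize(uLightPosition)), 0.0);
`;

Using lowp for positions and normals can also cause visible artifacts on some mobile GPUs, so mediump is the safer default there.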