The models I’m working on at the moment use a layered material: a material composed of one, two or three layers, each with its own texCoords, blending function and texture source (both opaque and transparent textures), with the ability to animate each layer (texture offset, size and rotation) independently.
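The per-layer blending just described boils down to per-channel math once each layer has been sampled. A minimal plain-JS sketch (the helper names and the layer structure are my own invention; in a real engine this math would live in the fragment shader):

```javascript
// Per-channel blend functions mirroring the semantics of
// THREE.MultiplyBlending / THREE.AdditiveBlending, applied to
// already-sampled layer colors ([r, g, b] in 0..1).
const multiplyBlend = (dst, src) => dst.map((c, i) => c * src[i]);
const additiveBlend = (dst, src) => dst.map((c, i) => Math.min(1, c + src[i]));

// Compose layers over a base color; each layer carries its sampled color
// and the blend function to apply (this structure is hypothetical).
function composeLayers(base, layers) {
  return layers.reduce((acc, layer) => layer.blend(acc, layer.rgb), base);
}

composeLayers([1, 1, 1], [
  { rgb: [1, 0, 0], blend: multiplyBlend },       // tint layer
  { rgb: [0.1, 0.1, 0.1], blend: additiveBlend }, // glow layer
]); // → [1, 0.1, 0.1]
```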
Searching GitHub and StackOverflow, you can find some references to a THREE.MeshLayerMaterial:
This is the kind of material system I would like to implement some day:
```javascript
var material = new THREE.MeshLayerMaterial( [
	new THREE.LambertShading( 0xff0000 ),
	new THREE.UVMapping( texture, THREE.MultiplyBlending ),
	new THREE.UVMapping( aoTexture, THREE.MultiplyBlending, 1 ), // uv channel: 1
	new THREE.SphericalReflectionMapping( texture, THREE.AdditiveBlending )
] );
```
The proposed answer is NodeMaterial, which, according to its author, is:
> still quite experimental, the API is evolving…
This framework is supposed to abstract away writing GLSL, which should ideally be done with a visual editor. Because it basically gives you the ability to construct any GLSL, anything is possible, including the thing you describe. IMHO, this is as much a solution to your problem as saying:
> THREE.ShaderMaterial is a solution to all of this, you can write GLSL to achieve any effect you desire.
To put things into context, this API has been in development for almost three years and still doesn’t look production ready. It adds roughly a third of three.js’s own size in additional code, and you can achieve practically everything it does with THREE.ShaderMaterial.
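For example, the layered material from the question fits in a few lines of GLSL under THREE.ShaderMaterial. A sketch of a two-layer multiply blend with an animated UV offset (the uniform names here are my own, not an existing three.js convention):

```javascript
// Fragment shader for a two-layer material: a base texture multiplied by a
// second texture (e.g. an AO map), with an animatable UV offset on layer 2.
const fragmentShader = /* glsl */ `
  uniform sampler2D baseMap;
  uniform sampler2D layerMap;
  uniform vec2 layerOffset; // animated per frame from JS
  varying vec2 vUv;
  void main() {
    vec4 base  = texture2D( baseMap, vUv );
    vec4 layer = texture2D( layerMap, vUv + layerOffset );
    gl_FragColor = vec4( base.rgb * layer.rgb, base.a ); // MultiplyBlending
  }
`;

const vertexShader = /* glsl */ `
  varying vec2 vUv;
  void main() {
    vUv = uv;
    gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );
  }
`;

// Usage (requires three.js and two loaded textures):
// const material = new THREE.ShaderMaterial( {
//   uniforms: {
//     baseMap: { value: texture },
//     layerMap: { value: aoTexture },
//     layerOffset: { value: new THREE.Vector2() },
//   },
//   vertexShader,
//   fragmentShader,
// } );
```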
The reason I believe it’s actually not an answer to your question in any shape or form is this:
IMO this refers to specific terminology from “render passes”. Before you issue a draw call, you set a blending function. In the shader there are no blending functions; there is just math you can write to describe one. A shader can blend arbitrary things arbitrarily, while in the context of draw calls “blending” refers specifically to the blending function that combines src pixels with dst pixels.
To reiterate: you can mimic “blending” of anything in GLSL, because GLSL executes arbitrary mathematical operations on arbitrary pieces of data. You can take N variables and “blend” them. In the context of WebGL’s state management and rendering, however, the blending function is a very specific thing.
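To make the distinction concrete: the fixed-function blending stage combines the fragment the shader just produced (src) with the pixel already in the framebuffer (dst) using the configured blend function. A plain-JS emulation of the classic `gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA)` setup:

```javascript
// Emulates the WebGL blend equation
//   out = src * srcFactor + dst * dstFactor
// for the standard alpha-blending configuration
//   gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA).
// src and dst are [r, g, b, a]. This is the only kind of "blending" the
// draw-call state controls; everything inside the shader is just math.
function alphaBlend(src, dst) {
  const a = src[3];
  return dst.map((d, i) => src[i] * a + d * (1 - a));
}

alphaBlend([1, 0, 0, 0.5], [0, 0, 1, 1]); // → [0.5, 0, 0.5, 0.75]
```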
Unity allows you to write a single multi-pass material within one shader file. The syntax is not shader-like at all; it hints at what state the underlying GL API should set.
All the terms mentioned there (culling, ztest, polygonOffset…) are render state, not shader concepts, and the shader is all NodeMaterial deals with.
You need multiple passes in order to be able to set state like this in between them. NodeMaterial would not allow you to do this; it would only allow you to apply arbitrary mathematical operations at arbitrary stages of the shader.
However, I think this would run into problems. Say you want to blend 5 materials, each using 8 textures. With an API like NodeMaterial, I think the draw calls would fail, since you would be trying to access 40 different textures within the same call. Rendering this in 5 separate passes would have no such problem.
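The limit in question is queryable at run time with `gl.getParameter(gl.MAX_TEXTURE_IMAGE_UNITS)`; the WebGL2 spec guarantees only 16 fragment texture units, and many GPUs expose exactly that. A quick back-of-the-envelope check (the helper is hypothetical, and 16 is assumed rather than queried):

```javascript
// Can a set of layered materials be drawn in a single call without
// exceeding the fragment-shader texture-unit limit? In a browser you
// would read the real limit with gl.getParameter(gl.MAX_TEXTURE_IMAGE_UNITS);
// here we assume the common value of 16.
const MAX_TEXTURE_IMAGE_UNITS = 16;

function fitsInOneDrawCall(materialCount, texturesPerMaterial) {
  return materialCount * texturesPerMaterial <= MAX_TEXTURE_IMAGE_UNITS;
}

fitsInOneDrawCall(5, 8); // → false: 40 samplers needed in one call
fitsInOneDrawCall(1, 8); // → true: 5 passes of 8 textures each are fine
```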
With all of this said, you can probably achieve something like this already, either with mesh.onBeforeCompile or the drawGroup mechanism, which I haven’t used. I believe it is possible to assign an array of materials to a mesh and have the mesh render itself several times. This could also be achieved by manually managing the render order through the scene graph:
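A sketch of that scene-graph approach (plain objects stand in for THREE.Mesh so the idea is self-contained; with three.js you would create one `THREE.Mesh` per material sharing the same geometry):

```javascript
// Build one renderable per material over the same geometry, ordered so each
// "pass" draws on top of the previous one. The field names mirror three.js:
// renderOrder is honored by the renderer, depthWrite/transparent are
// material flags. The pass layout itself is my own convention.
function makeMultiPassMeshes(geometry, materials) {
  return materials.map((material, pass) => {
    material.depthWrite = pass === 0; // only the first pass writes depth
    material.transparent = pass > 0;  // later passes blend over earlier ones
    return { geometry, material, renderOrder: pass };
  });
}

const passes = makeMultiPassMeshes('sharedGeometry', [{}, {}, {}]);
// passes[0] draws first and fills the depth buffer; passes[1] and passes[2]
// then blend on top, in order.
```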
When NodeMaterial becomes available, it will allow you to write, instead of this:
```glsl
float foo = a + b / c;
```

this:

```javascript
const a = new THREE.FloatNode( 1 )
const b = new THREE.FloatNode( 2 )
const c = new THREE.FloatNode( 3 )
const foo = new THREE.OperatorNode(
	a,
	new THREE.OperatorNode( b, c, THREE.OperatorNode.DIV ),
	THREE.OperatorNode.ADD
)
```
(I think I got it right, at least; there is absolutely no documentation.)
IMO this API should not be supported by the core of three.js. Conceptually it may not make sense to include something like this on a medium such as the web, because it introduces disproportionately more overhead for producing shaders. On top of the driver’s compilation and the string manipulation, this whole graph has to be traversed and logic applied to it. There are probably lots of use cases (like games) where you don’t want this happening at run time and would prefer a more compact shader representation, closer to what WebGL actually consumes (a string).
Besides these technical challenges, there’s the issue of shutting a fair number of three.js contributors out.
If you take a look at the first page of three.js open PRs, you’ll notice that they range from a few additional lines to a few dozen. NodeMaterial PRs touch over 100 files and 10,000 (ten thousand) lines of code. I don’t consider myself a very experienced engineer, but I have some experience, and that experience tells me that large diffs are hard to read.
Unfortunately the standard is much higher:
> If you do not have the skills to do that by studying webgl_materials_nodes.html and reading the source, you should not be a developer on this project.
This API sets the bar extremely high, IMO. If you are unable to follow what is going on across these 103 files and almost 10k lines of code, you won’t be able to contribute to anything material-related. I’m not an expert, but I know some GLSL and I can read the shaders in three.js. Obfuscating them behind this API makes that a lot harder and may close that door.
Note that the above is just part 1 of the diff; there is another one with 8,500 lines changed:
It is being aggressively pushed as a cure for many of the issues three.js suffers from. I can’t do anything to stop this, but I can try to argue against it wherever this push is made. I am also aware of how careful one has to be here, since this issue has prompted threats of bans from both the repo and this forum. So I mean no offense to anyone, and I am trying to be as constructive and objective (albeit blunt) as possible. Thank you.