I’ve been experimenting with node materials under the assumption that a MeshBasicNodeMaterial could be used interchangeably with a regular MeshBasicMaterial. When I tried to use it with an alphaMap texture in this example, it didn’t work as expected.
Upon inspecting the source code, I noticed that MeshBasicNodeMaterial inherits from ShaderMaterial and only copies properties from MeshBasicMaterial, lacking a proper shader extension. The same applies to MeshStandardNodeMaterial and MeshPhysicalNodeMaterial.
I have come across many successful three.js extended-material addons. However, in my opinion, the best approach here might be to treat NodeMaterial as a distinct class rather than attempting to extend it to mimic a regular material. Since a NodeMaterial can be dynamically mutated depending on how it is extended with other nodes, it cannot be reduced to a conventional material class.
I acknowledge that I may be making certain assumptions, and I apologize in advance for any misunderstandings. Please feel free to correct me if I’m mistaken. (I’m gonna blame it on the lack of documentation anyway)
@dubois, Thank you for your reply. I completely agree with your point. The issue I was highlighting is that the current implementation of NodeMaterials doesn’t fully inherit from their regular counterparts. In the example I shared, it becomes apparent that certain properties, such as alphaMap or any other property unrelated to ShaderMaterial, cannot be utilized.
I don’t believe NodeMaterial subclasses are intended to have a backward-compatible API with the older Material subclasses. Rather than separate “opacity” and “alphaMap” properties, you can assign the “opacityNode” socket as a float node, a texture node, or any subgraph of nodes representing a more complex expression. Things can get fancy:
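To make the "socket" idea concrete, here is a toy illustration (this is NOT three.js's actual API; every name below is invented for the sketch): a node is anything that can be evaluated, and a material socket simply holds a node, whether it is a constant, a texture lookup, or a larger subgraph composed of other nodes.

```javascript
// Toy sketch of the node-socket idea. NOT three.js code: floatNode,
// mulNode and ToyMaterial are invented for illustration only.

const floatNode = (value) => ({ evaluate: () => value });

const mulNode = (a, b) => ({ evaluate: () => a.evaluate() * b.evaluate() });

class ToyMaterial {
  constructor() {
    this.opacityNode = floatNode(1); // default: fully opaque
  }
  resolveOpacity() {
    return this.opacityNode.evaluate();
  }
}

const material = new ToyMaterial();

// A plain float node...
material.opacityNode = floatNode(0.5);
console.log(material.resolveOpacity()); // 0.5

// ...or a subgraph: e.g. an opacity driven by some other node.
const timeNode = floatNode(0.25);
material.opacityNode = mulNode(floatNode(0.5), timeNode);
console.log(material.resolveOpacity()); // 0.125
```

The point is that the socket does not care what shape of graph it holds, which is exactly why a separate "alphaMap" property becomes unnecessary in this paradigm.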
@donmccurdy, Thank you, this is the answer I was hoping for. If backward compatibility isn’t the priority, then having MeshBasicNodeMaterial, MeshStandardNodeMaterial, and MeshPhysicalNodeMaterial could potentially lead to confusion.
It might be more appropriate to emphasize the use of NodeMaterial as a standalone entity rather than a direct replacement for the older Material subclasses. (Sorry for this part, I clearly don’t know what I’m talking about)
For now, it seems that I can’t use NodeMaterial on its own; it only works with one of the Mesh subclasses. Please correct me if I’m mistaken. Here is the codesandbox
And thanks again for the great demo; it’s indeed fantastic and showcases the potential of NodeMaterials.
Alright, after further experimentation with this example, it turns out that the shaders are compiled at a later stage, not during the initial class creation as I had assumed.
While I’m still in the process of understanding their inner workings, the key notes here are:
1. They are not fully interchangeable with the regular material classes.
2. I still need to dive deeper to fully grasp their inner workings.
3. They are obscure and underrated.
4. The lack of documentation is really unfortunate and makes things more challenging.
I would just add to that list — they are still a work in progress. Documentation will be added at a later stage.
THREE.NodeMaterial cannot be used on its own, just as THREE.Material cannot be used on its own; it is the base class for all related material types. The keywords Basic/Standard/Physical etc. describe the shading model used by that material, so MeshBasicNodeMaterial and MeshBasicMaterial have the same shading model (unlit / shadeless) but different APIs.
I have never used node materials, but I can say with 100% certainty that they end up being a ShaderMaterial. All the other materials are also a ShaderMaterial under the hood. The only caveat is that the renderer writes some shader code between you creating the material and it being compiled.
Thank you for your prompt response! I want to clarify that my previous posts were not intended as criticism in any way. Quite the opposite, they were driven solely by a sincere curiosity and genuine interest in exploring this remarkable masterpiece.
My mistake was approaching NodeMaterials with a conventional workflow in mind. NodeMaterials operate under a different paradigm. You have to think from a node editor perspective, even when working programmatically.
Updating things is also not the same as we are accustomed to with regular materials. The resulting shaders are hardcoded, meaning you can’t simply update individual Vector3 or Color values within an animation loop.
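A toy sketch of why that is (illustrative only; this is not three.js's actual code generation): a plain JS value that flows into the graph can be inlined into the generated shader string, at which point mutating the original variable does nothing, whereas a uniform stays a named slot the CPU can write every frame.

```javascript
// Toy illustration of "baked-in" values vs. uniforms. NOT three.js's
// real codegen; the two helper functions below are invented.

// A plain number can be inlined into the generated GLSL string:
const bakeConstant = (value) => `
  void main() {
    gl_FragColor = vec4( vec3( ${value} ), 1.0 );
  }`;

// Once built, the string is frozen; changing `value` later does nothing.
const shaderA = bakeConstant(0.5);

// A uniform, by contrast, stays a named slot that can be updated per frame:
const bakeUniform = (name) => `
  uniform float ${name};
  void main() {
    gl_FragColor = vec4( vec3( ${name} ), 1.0 );
  }`;

const shaderB = bakeUniform('uOpacity');

console.log(shaderA.includes('0.5'));     // true: the literal is baked in
console.log(shaderB.includes('uniform')); // true: an updatable slot remains
```

This is why the answer to "how do I animate a value" ends up being "route it through a uniform node", as discussed below in the thread.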
If you don’t mind, I have a few more questions:
_ How do I expose a uniform? I’m guessing this is the only way to update the material externally.
_ How do I get the resulting vertexShader and fragmentShader GLSL code? Console-logging the material returns the default shaders, and toJSON doesn’t seem to expose them.
Once again, thank you for your valuable contributions and for being responsive to the community feedback. Your work is highly appreciated!
Yes, exactly. The creation of TSL (Three.js Shading Language) simplifies this process, as it resembles a conventional shader programming language:
In this way the user only needs to worry about the algorithm, using JavaScript; the node system takes care of optimizing the shader, for example creating variables only when necessary, avoiding unnecessary bindings (such as a texture used in more than one input), organizing hashing with type precedence to better reuse programs, and more… In fact this is being explored further in WebGPURenderer.
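The program-reuse idea can be sketched in miniature (a toy cache, not the real implementation; the hashing here is deliberately simplistic): materials whose node graphs hash to the same key share one compiled program instead of triggering a second compilation.

```javascript
// Toy sketch of program reuse via graph hashing. NOT three.js's actual
// caching code; hashGraph and getProgram are invented for illustration.

let compileCount = 0;
const programCache = new Map();

// Stand-in for the real hashing: here the key is just the graph's shape.
const hashGraph = (graph) => JSON.stringify(graph);

function getProgram(graph) {
  const key = hashGraph(graph);
  if (!programCache.has(key)) {
    compileCount++; // a real renderer would compile GLSL/WGSL here
    programCache.set(key, { key });
  }
  return programCache.get(key);
}

const a = getProgram({ type: 'basic', colorNode: 'uniform:float' });
const b = getProgram({ type: 'basic', colorNode: 'uniform:float' });
const c = getProgram({ type: 'basic', colorNode: 'texture' });

console.log(a === b);      // true: same graph shape, program reused
console.log(compileCount); // 2: only two distinct programs were compiled
```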
Compilation is done at render time, so you will need to add a console.log() here for now.
You can do like this using TSL:
import { uniform, MeshBasicNodeMaterial } from "three/examples/jsm/nodes/Nodes.js";
const myUniform = uniform( 1 ); // accepts a number, Vector*, Color and others
const nodeMaterial = new MeshBasicNodeMaterial();
nodeMaterial.colorNode = myUniform;
// updating later, e.g. in the animation loop:
// myUniform.value = 0.5;
Oh, thanks for the words. I’ve already read and received a lot of emails from people asking for documentation, I think your message reflects their desire too. Thanks
What exactly is inputs? I’m sure I’m not the only one developing solely in TypeScript, but I can’t see how a type system could work with this to provide proper type-checking and/or documentation while editing.
When using ShaderNode-instances inside other nodes, you reference them like so:
DFGApprox.call( { dotNV, roughness } );
Am I correct to assume that dotNV and roughness are required inputs for the DFGApprox in this example? If so, how are we type-validating these type of semantics? For example, how are we giving developers in-editor feedback when they use the wrong types for inputs?
This is, in the end, just JS; it doesn’t have features that can help you with type safety. I believe the only type safety comes from an independent type-definitions project that is maintained in parallel to three.js.
three.js should throw runtime errors if you pass the wrong type. Often it also warns you in the console that you’re doing something wrong.
However, I don’t agree with the way the “userland” API is designed; it basically fights against the possibility of type safety altogether. For example, why the .call() invocations, rather than a factory function?
Speaking solely for myself here, but if I had to use this system in a large-scale application, I would rather reinvent the wheel in a type-safe manner than use this, especially when working in a team. If your IDE can warn you about possible mistakes, wouldn’t you prefer that over your code breaking at runtime (and possibly in a production environment)?
I would much rather have my code not compile at all, than have the possibility of breaking a runtime environment.
I honestly think that the API for the node system should be at least built in such a way that it allows for the maintainers of the type-definitions of three to work with this, rather than having users of three develop against an any-type.
Anyway, I don’t think this is the topic for this discussion per se, but hopefully it will make some people think about the architecture of the nodes API before this system leaves the “example” directories and makes it into core. Or maybe I’m (like others) just missing documentation, and there really isn’t any problem…
I think in the context of TypeScript we could use interfaces for inputs and outputs, but I agree with facilitating type determination whenever possible. The functions written here in ShaderNode could, in the vast majority of cases, be replaced by traditional JS functions or by the TSL fn. ShaderNode’s main objective is to provide a stack and a builder: the stack helps to edit code line by line, including if/else and for statements, and the builder can be useful for defining the structure of nodes across renderers. In terms of class design, ShaderNode represents the function (the code), and call( { ...inputs } ) represents the function call. This is exactly what happens at a lower level in the system; this design will also help cache snippets of entire functions and make debugging easier, but that isn’t done yet. However, to facilitate this process, I’m in favor of moving to traditional functions whenever possible, and I’ll see what I can improve soon.