TSL considered harmful

I’ve been thinking about TSL for a long time on and off, and I want to like it. But I don’t.

And so, I wanted to put a few words on paper as to why, as I hope that this can serve to improve TSL. You can think of it as a review of the current state of TSL.

First off, what’s TSL? Three.js Shading Language. It’s something @sunag has been working on for the better part of a decade. It was originally referred to as “Node-based shaders”, and it still very much is the same thing, but it has evolved to serve a new function, specifically: abstracting the underlying graphics API (WebGL / WebGPU).

Who the heck am I to talk about languages, graphics and node-based techniques?

  • Over the years I have shipped a number of node-based frameworks, there are a few in Meep even.

  • Similarly, I have designed a number of languages in my career, both textual and programmatic, as well as various file formats. I am proficient in a number of very distinct programming languages such as C, C++, Java and C#, and can comfortably read many more. In the domain of graphics, I’m very familiar with different versions of GLSL, HLSL, SPIR-V (not exactly a language), WGSL and Metal.

  • As for graphics itself, it has been my main professional area for close to a decade now, and I have been working with it for much longer.

All of that is to give my arguments a bit of extra weight.


So, with the obligatory disclaimers out of the way.

What’s wrong with TSL?

A language needs to be helpful, first and foremost. We could just bang instructions directly into computer’s memory if that were not the case.

What does TSL accomplish? Well, it does two nice things:

  1. It abstracts the API (WebGPU vs WebGL)
  2. It allows code reuse

Code reuse is typically achieved with some form of modules in a language, such as the import statement in JavaScript. TSL achieves code reuse through composable nodes, which can in turn be packaged into JS modules, et voilà: code reuse.
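To make the idea concrete, here is a toy illustration of node-based composition (a deliberately simplified stand-in, not TSL’s actual API — names like `float`, `add`, `brighten` are made up for this sketch):

```javascript
// Toy stand-in for a TSL-style composable node system (NOT the real three/tsl API).
// Each "node" is just a function that emits a shader-code string when built.
const float = (v) => () => v.toFixed(1);
const add = (a, b) => () => `( ${a()} + ${b()} )`;
const mul = (a, b) => () => `( ${a()} * ${b()} )`;

// A reusable "module": a composed node exported like any other JS value.
const brighten = (color, amount) => add(color, mul(color, amount));

const out = brighten(float(0.5), float(0.2));
console.log(out()); // → ( 0.5 + ( 0.5 * 0.2 ) )
```

Because nodes are plain JS values, packaging `brighten` in an ES module and importing it elsewhere gives you the code reuse the post describes.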

At this point, unfortunately, we have reached the end of the list of good things about TSL.

Allow me to go on a small tangent. I have taught many junior to mid-level developers in the past, and it’s a common trait of ours, engineers that is, to want to abstract.

You start with a very simple task and you want to give your solution some structure; you want your solution to not only solve this one simple task, but to be able to solve other similar tasks. Why stop there? We can abstract the solution to solve tasks that are more and more different from the original goal. This is something I’m guilty of, for sure. I wrote an entire fully-featured game engine when I was working on a specific game. It doesn’t get much closer than that to the definition of “Over-Engineered”.

And that’s what I see TSL as - “Over-Engineered”.

TSL is:

  • poorly documented
  • slow to compile
  • slow to execute
  • painful to debug
  • feature-limiting
  • awkward to write
  • saddled with a massive API surface

Let’s go through them one by one.

Poorly documented

But Alex, there is the documentation right there: three.js docs

Let’s take a look at this documentation. To avoid cherry-picking, I’ll go with the very first entry for TSL

TSL function for creating a Break() expression.

Well - that’s clear as mud. And you may think:

You said no cherry-picking and yet you picked, you cherried! you dirty picker of cherries!

Sadly, most of it is like that. And even if it were not the case, this would still be a problem. Consider the sheer size of the list of entries in the documentation under TSL.

It’s 561 entries. Let that sink in. The entire WGSL spec has only about 150 operations. Yes, you could say

But Alex, WGSL has a lot more than that, you have attributes and generic types and compute shaders and WebGPU stuff like pipelines and bind groups oh my!

And you’d be totally right, but… TSL has the same things. So the comparison is pretty fair, sadly.

Let’s acknowledge the fact that not all of what’s documented in the three.js docs is “strictly” TSL; a lot of it consists of things built from basic building blocks, like triplanarTexture, to pick a random example. But that’s also a point against the documentation: it doesn’t make that distinction.

It is my belief that TSL cannot succeed until it can be learned and understood. Currently, neither the documentation in code, nor the error messages from the compiler, nor the official documentation on the website are particularly helpful with that.

Slow to compile

Let’s get the obvious out of the way. TSL cannot be faster than WGSL or GLSL when it comes to compilation. TSL is an abstraction on top of these, so there will always be WGSL/GLSL emitted from TSL, and that will take time to compile. So TSL is already in a losing position from the start.

But it gets much worse. I don’t know the current state of TSL’s code, but when I last reviewed it a few years ago, it wasn’t a well-written compiler. When I say this, I say it as someone who has written their fair share of compilers and who is intimately familiar with a number of large mainstream compilers.

Somewhat anecdotally: I tried TSL some years ago, and I remember being amazed that a node-based standard PBR material took over a second to compile to GLSL on a top-of-the-line desktop CPU. Recently I had the pleasure of working with TSL once again, and I can confirm that it still takes a long time.

And this is before we get to compiling the emitted WGSL. Now imagine you have 10 materials instead of just the one. And imagine you trigger a material re-compilation, or a new object appears in view that didn’t have a material compiled yet. Unreal Engine has recently been in hot water for its “stutter struggle”, but TSL is that on steroids.

I don’t mean to diminish @sunag 's work. I’m guessing he’s not a compiler engineer by trade, and TSL doesn’t look like it was designed for fast compile times to begin with. And yet… here we are…

Slow to execute

A compiler is a complex system. Compiling the code is actually the easy part. The hard part is the optimization and analysis.

TSL appears to be a simple substitution-based compiler. This is pretty much the most basic you can get, and there’s nothing wrong with that honestly; I’ve written a bunch of these and they are popular for a reason. However, when you want to produce optimal code as a result, it’s nowhere near enough.

You need a good AST/CST to perform analysis, and you need a robust tree transformation system in place to implement various optimization rules.

As it stands, here’s the result of using the depth node:

( ( ( render.cameraNear + v_positionView.z ) * render.cameraFar ) / ( ( render.cameraFar - render.cameraNear ) * v_positionView.z ) )

And here it is used twice

( ( ( render.cameraNear + v_positionView.z ) * render.cameraFar ) / ( ( render.cameraFar - render.cameraNear ) * v_positionView.z ) )

( ( ( render.cameraNear + v_positionView.z ) * render.cameraFar ) / ( ( render.cameraFar - render.cameraNear ) * v_positionView.z ) )

There is no common-subexpression elimination. Over the course of a typical material shader this adds up to death by a thousand wasted ALU operations. You get the picture.
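For the curious, the core of common-subexpression elimination is conceptually simple: count how often each expression appears and hoist the repeats into variables. A toy sketch of the idea in plain JS (real compilers do this over an AST/SSA form, not strings; nothing here reflects TSL’s internals):

```javascript
// Toy common-subexpression elimination over string expressions.
function cse(expressions) {
  // Count occurrences of each expression.
  const counts = new Map();
  for (const e of expressions) counts.set(e, (counts.get(e) || 0) + 1);

  // Hoist anything seen more than once into a named variable.
  const vars = new Map(); // expression -> variable name
  const decls = [];
  for (const [e, n] of counts) {
    if (n > 1) {
      const name = `t${vars.size}`;
      vars.set(e, name);
      decls.push(`let ${name} = ${e};`);
    }
  }
  // Rewrite use sites to reference the hoisted variables.
  const body = expressions.map((e) => vars.get(e) ?? e);
  return { decls, body };
}

const depth = "( ( near + p.z ) * far ) / ( ( far - near ) * p.z )";
const { decls, body } = cse([depth, depth]);
console.log(decls); // one declaration: computed once
console.log(body);  // [ 't0', 't0' ]: reused twice
```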

But Alex, this optimization whatcha-ma-call-it-thing you described, we can haz it!

That is a very well thought out argument, and I agree. We can indeed haz it. But it would likely take significant engineering effort to allow for such optimizations in the first place, and then… well, there’s a reason V8 (Chrome’s JS runtime) doesn’t optimize code immediately, there’s a reason it does JIT and there’s a reason why optimizations are progressive. Optimizing code is slow. In fact, the better optimization techniques are the slowest ones.

Heck, consider why WGSL takes a long time to compile: it’s not the translation to SPIR-V, and it’s not the GPU driver converting the SPIR-V to instructions. It’s the optimization that the driver does under the hood. But let’s not get into all that; let’s stay on topic.

To get well-performing WGSL out of TSL, you need optimizations to be done as part of the compilation process. Let’s agree on that. And optimizations are slow as a rule, in every compiler; I’m not singling TSL out here.

Therefore, TSL is slow to execute and slow to compile. And even if we make it faster to execute, it will still be slow to compile. For now we have the worst of both worlds.

Painful to debug

When something goes wrong, TSL doesn’t notice half the time. It happily emits WGSL, which doesn’t compile.

Now you’re in a situation where you have a bunch of WGSL code in front of you, that doesn’t look remotely similar to what you were writing in TSL, and the WGSL compiler is complaining at you about concepts you haven’t been operating with.

Is this hypothetical?

  • No, it happened to me last week.

Let’s consider another situation: the TSL compiler does detect that you did an oopsie, and it blows up. The error message is not helpful. You are not told where you made a mistake, you are not told what the causal chain was that led to this point; you are just told something like “value is not a texture”.

… very helpful TSL… thank you.

If you try to set a breakpoint on uncaught exceptions - it becomes marginally more helpful, as you can use the debugger’s execution stack and evaluation context to figure out where you are in the node tree, but this is far from acceptable. If you believe it is acceptable - I don’t know what to tell you… you deserve better..?

Feature-limiting

Let’s get back to that depth example from earlier. This is not the true depth value that comes as a fragment shader input. And if you use the screenCoord node, you get the .xy, but no .z (depth).

You can’t use pointers in WGSL via TSL.

You can’t disable uniformity checks in WGSL, which makes many algorithms impossible to implement.

You can’t use workgroup storage space.

These are just a few, and there can be an argument made

Man, TSL is, like, it can do anything… man

You can indeed write a piece of WGSL code and ask TSL to wrap it for you. But at that point TSL becomes an obstacle, as I can write WGSL just fine without it. And by using this technique, one of the benefits of TSL, the cross-compatibility with WebGL, disappears.

WGSL may seem like manna from the gods, but the truth is it’s already a limited API; WGSL doesn’t expose access to nearly as many GPU features as something like HLSL or Vulkan. You’re already limited before you get to TSL. What value is being traded in return for these extra limitations?

And you might say

Well, Alex, akshually… these limitations will disappear in time, and you’re just being unfair and picky in a very cherry way

But as someone who has worked on cross-compilers, I can say that sadly, this is more of a law than a suggestion. If you have 3 languages:

  • A
  • B
  • C

And C needs to translate into both A and B, then C cannot be any more expressive than the intersection of the expressiveness of A and B.
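Stated as a formula, if $E(X)$ denotes the set of programs expressible in language $X$, the constraint is simply:

```latex
E(C) \subseteq E(A) \cap E(B)
```

Anything C can express must survive translation into both targets, so C is bounded by whatever A and B have in common.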

Now, this framing is a little unfair, because the commonality between something like GLSL and WGSL is massive; there’s probably something like 20% of WGSL that’s outside of GLSL and something like 5% or so of GLSL that’s outside of WGSL. But the point stands.

It’s a law. You can’t fight it, you can try and you might get around one or two limitations in a sort-of okay way, but it’s an uphill struggle that usually isn’t worth the trouble. Your language would need to invent more abstract and complex concepts for this to be possible, and let me tell you - designers of WGSL are not fools, you’re unlikely to beat them at their own game.

Awkward to write

Suppose you want to write an If statement, here’s what you have to do:

import { If } from 'three/tsl'; // 1. import the function

// ...

If( some_node, () => {
    // then clause
}, () => {
    // else clause
} );

This is awkward. It’s awkward not only because you need to import every keyword you use, but also because the syntax is a lot more convoluted. Let’s compare it to a JS if statement the way a lexer would:

if( some_condition ){

}else{

}

For JS we have 9:

IF
LPAREN
IDENTIFIER       // "some_condition"
RPAREN
LBRACE
RBRACE
ELSE
LBRACE
RBRACE

For TSL we have 16:

IF
LPAREN
IDENTIFIER       // "some_node"
COMMA
LPAREN
RPAREN
ARROW
LBRACE
RBRACE
COMMA
LPAREN
RPAREN
ARROW
LBRACE
RBRACE
RPAREN

And you might say

But alex, they look about the same

And to that I’d say, first of all, I resent the lower-case in my name, and second: it’s the cognitive load. We don’t write code exactly the way a lexer parses it, but we do break it down into logical pieces, tokens, if you will. And those extra tokens absolutely cost you extra WPU cycles (Wet Processing Unit).

The functional notation for operations is also cumbersome. Instead of saying a + b, you essentially have + a b. And if you’re a fan of Scheme, you ain’t a friend of mine. Maybe that’s why I don’t have friends. You all just love Scheme so much :cry:
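To be fair, the chaining style is forced by the host language: JavaScript has no operator overloading, so an embedded DSL has to build its expression tree through method calls. A toy sketch of the pattern (the `Num` class is made up for illustration, not TSL’s actual node class):

```javascript
// Why an embedded JS DSL writes a.mul(b) instead of a * b: JS cannot
// overload operators, so the expression tree is built via method chaining.
class Num {
  constructor(src) { this.src = src; }
  add(other) { return new Num(`( ${this.src} + ${other.src} )`); }
  mul(other) { return new Num(`( ${this.src} * ${other.src} )`); }
}

const a = new Num("a");
const b = new Num("b");

// The math "a * 10 + b" has to be spelled out as a chain of calls:
const expr = a.mul(new Num("10.0")).add(b);
console.log(expr.src); // → ( ( a * 10.0 ) + b )
```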

Has a massive API surface

It was just node bro…

The intention was good. And it always is I think.

Initially, if you wanted to have a NodeMaterial you would have 2 nodes:

  • one for vertex shader
  • one for fragment shader

But now we have a whole catalogue of node types; the list is too long to reproduce here.

You thought I was going to stop there? :sweat_smile:

There is more on top of that.

And that’s just the case for now. Now let’s consider what a shader looks like in WebGPU:

  • You have the bind group layout (your uniforms/textures/buffers etc.)
  • You have the vertex shader
  • You have the fragment shader

That’s it.

What now?

I think TSL is a bad language by itself, that much is probably clear. However, it does have something going for it that can make it useful.

Here are a few things I believe can make it so:

  • Offline compilation. Have your TSL shaders, compose them, re-use code, compile them to WGSL or GLSL with maximum level of optimization, but ship the WGSL/GLSL. Keep TSL offline so you don’t force the users to pay the compilation cost and damage the experience with stutter.
  • Visual tooling. TSL is node-based. Having a fully-featured node-based graphical shader editor would make it worth the trouble. So far we don’t have that. Oh, sure, there is a prototype editor somewhere, and there are toys here and there, but nothing solid, nothing that you could use end-to-end to build shaders seriously.
  • Emulation. This would single-handedly make TSL one of the best languages for graphics debugging. You could run the TSL on the CPU without translation to WGSL/GLSL, and this would enable you to step through the code, inspect variables and have the best-in-class debugging experience.

If not TSL then what?

I said so many mean things about TSL, but what is the alternative?

Well, for compatibility’s sake, you could use HLSL as a source language, or you could use WGSL and translate that to GLSL. You could use SPIR-V, or you could use Slang (from Nvidia). All of those languages (except for SPIR-V) were actually designed to be written by humans.

As for node-based, I think you’d be better served by having a node-based WGSL/GLSL targeting compiler, instead of a compiler that targets materials or specific shaders. Yes, there would be a need for an extra layer to glue the two halves together, but it would be additive, not either-or, as is the case today.


Disclaimer 2.0

This is not meant as an attack on three.js, or any of the three.js developers. I love @sunag, even if he already has someone :sad_but_relieved_face: and I have an immense respect for the work that went into TSL.



8 Likes

I wrote a bit in the TSL spec about the reasons and benefits, and there are many examples that show, at the very least, that it would be impossible to achieve the level of abstraction that you get with Nodes using a string-based language.

What you’re suggesting about bringing in a string-based language has already been done, and it’s called WebGLRenderer. In fact, for over a decade, maintainers have had difficulty fitting in features that cater to different audiences using string-based shader solutions. One of the classic problems is that whenever there’s a modification to the core, many custom shaders need to be rewritten. It’s easy to think of string-based languages as a general solution when all a person has in mind is a fixed pipeline. If a person doesn’t understand why we have colorNode or metalnessNode, for example, they don’t understand why users have so much difficulty creating custom shaders that remain alive through three.js updates. WebGLRenderer had problems with reusing custom shader code for over a decade due to internal changes in the core; with TSL we don’t have that, and that’s why almost all examples have custom shaders and transitions are smooth.

The benefits go far beyond what you’re mentioning, and I don’t think I could list everything here :slight_smile:
Some people who have used this seriously have also given their opinions.

There’s actually a lot more; if you search with the right keywords you’ll find it.

There are certainly things to improve in TSL; thinking about ways to improve performance and syntax is great, and improving support for WGSL too. But as a general analysis, I think it’s superficial and cherry-picked, and the reason might be a lack of in-depth knowledge about it.

For example, I don’t understand how you arrive at some conclusions, like TSL being substitution-based, since it uses an AST on top of the Node abstraction, with separate builders for WGSL and GLSL. Or I could mention that compilations benefit from the cache key of the node flow, so not everything is recompiled, or nodes like range, or the backdrop-related features…

I think that when people compare WGSL or GLSL with TSL, it’s because they don’t understand TSL very well; at the very least they would know that TSL handles the CPU + renderer + GPU program, while WGSL or GLSL only handle the GPU program.

6 Likes

My 2 cents.

I would avoid considering any technology harmful, only because it is harmful to me.

As for TSL and some of the things labeled “poor” - I tend to agree with some, but also tend to disagree with others.

  • Poorly documented - here I tend to agree; some explanations just retell the name of the entity. Of course, it is a matter of balance between being a reference and being a tutorial, but a concept that is not native to Three.js should be better explained. Example: the description of .barrier is “TSL function for creating a barrier node”, which is somewhat pointless.
  • Slow to execute - the example of inlined expressions is something that the user can control. Example: when I want an expression to be calculated once, I use .toVar or .toConst
  • Painful to debug: agree, but the reason for this is that TSL has no own lexer and parser, so it has no reasonable way to map JS source to nodes, additionally the TSL code is linear by text, but non-linear by execution. But I agree that a better error reporting would make things much easier.
  • Feature-limiting - TSL is a beautiful counterexample to this, as it has compute shaders, that are available in WebGPU, but not available in WebGL2.
  • Awkward to write - I have experience in LISP (Scheme, Common Lisp) and in postfix languages (like Forth), so I feel perfectly OK with the syntax - and this, unfortunately, removes me from your list of friends (as you rule out such people). The reason for this syntax is that TSL uses JS as a backbone, and JS does not allow things like operator overloading or custom control statements.
  • Has a massive API surface - I consider this an advantage. Instead of forcing users to define whole shaders in a tree of nodes, TSL builds the tree, exposes most nodes and allows users to modify just one node, leaving all the rest as they are.
  • Conditional compilation - this is not found in the original list, but TSL can use JS as a preprocessor, mixing both languages, and achieves things like conditional code for shaders. Example: mixing if and If in a single function.

In a nutshell, I think TSL is not suitable for people whose skills and experience are well above the average. You simply do not need it; it would hold you back. As for others, less smart people like me, it is useful, and it helps me do things faster than if I tried to do them with only GLSL or WGSL.

3 Likes

Ouch @sunag :cry:

You will pretty much always be right here. TSL is a massive thing. Which is not a criticism against it, but unless I provide an in-depth review of the entire thing - I will always be “picking” and you will always be “right”.

We both know I will not review the system in its entirety, as that would be both pointless for the project at this stage and a massive undertaking for me.


Here’s a snippet directly from the GLSL part of TSL.

I’m sorry, but this looks like replacement of a node with a string via a template. Maybe we’re splitting hairs along the semantic lines here though.


I don’t believe that I’m saying that. I think that the old “chunk”-based system was neat in many ways, but I never thought that it was a good solution. I fully agree with you that it was very hard to extend and it did not promote code reuse.


I’m sorry @sunag, I don’t know if we’re on completely different wavelengths, but this argument reads to me as

You can’t have a certain level of abstraction in JavaScript

or more generically

You can’t have a certain level of abstraction in a programming language

Node-based languages are languages, and despite syntactical differences languages are generally equivalent in their expressive power.

I don’t wish to put words in your mouth, but text-based languages have been around for a long time, and as long as TSL relies on JavaScript, it will not be any more inherently expressive.


I disagree fundamentally, you’re talking about extensibility, and I don’t believe that textual languages can’t have extensibility built into them. Heck, we’ve had a concept of hooks and extension points for close to as long as programming languages have existed.


I do understand that the entire system handles more than just the language translation, and I don’t actually have an issue with that part. But I also don’t think that the language part and the automatic binding / recompilation systems are indivisible. You can absolutely have one without the other.


@sunag, perhaps you’re seeing this as an attack, but I assure you, for the third time, it is not.

You know TSL better than I, there is probably no person on this planet that knows TSL better than you.

1 Like

Here I thought I was being clever with the “X considered harmful” title. But you’re right, and I fully agree with that statement.

So I’ll go a step further and state why it is harmful, in my view:

With the points I have outlined earlier, three.js pushes hard on TSL.

What language should you use to create shaders when you use three.js?

  • The answer is: TSL

I hope this is not a contentious statement.

Three.js, for better or for worse has a massive audience. It is something that has succeeded at being approachable and a learning ramp for 3d graphics for many. I will readily admit that most of my learning of 3d graphics happened in this space as well.

Then, what would one call pushing someone towards learning a bad language, if not harmful?

If one learns WGSL or GLSL or HLSL or any other standardized language - they can take that and use it outside of three.js. TSL is not such a tool, as such I believe it is fair to hold it to a much higher standard.

At this point in time I believe it doesn’t succeed on a level playing field, let alone being superior.


Would you not agree that this is a problem? The code like

const x = Fn(...);

x( depth, depth );

Looks perfectly fine to me, and yet it would be unfolded to

x( ( ( ( render.cameraNear + v_positionView.z ) * render.cameraFar ) / ( ( render.cameraFar - render.cameraNear ) * v_positionView.z ) ), ( ( ( render.cameraNear + v_positionView.z ) * render.cameraFar ) / ( ( render.cameraFar - render.cameraNear ) * v_positionView.z ) ))

Perhaps I am being uncharitable, but this is the exact kind of thing that a node-based system is supposed to address.

Honestly - if you’re on AMD or Nvidia, this code will probably be fine, because the vendor compiler will detect this garbage and clean it up for you, but you’re not making things easier and if anything you’re doing 2 bad things here:

  1. Slow down compilation by making more work for the driver
  2. Rely on the driver to fix bad code, which it may or may not do

I agree, and I think we’re on the same page. However, again, the fact that “this is the best you can do in JS” would not be a very good argument, you’re not making it, but it’s easy to view it as such for others and excuse poor usability on that basis.


Yep, I totally get it. Let’s say you want to play basketball, and you’re short. Well - that’s genetics for you, we all want a few extra inches.

So what now? do we say that the hoop needs to be lower, otherwise it’s unfair, or do you need to suck it up and learn to be much better than the other guys at other things to compete?

TSL, like just about any language that doesn’t have its own syntax and is built on top of another language, starts at an inherent disadvantage.

It has to bring more to the table because of that. Is that fair? - no.


Picture this: the year is 2026, you’re 18 and you decide to give this 3d thing a try. You pick three.js and try making a shader with TSL…

The year is now 2048 - you’re 40 and you know that 3d graphics is not for you.

I jest, but having a large API surface without any tiers is something that works for experts only.

Fully agreed. I think that this is a departure from three.js philosophy, though. I know that three.js is not static and its philosophy changes over time. But it’s something that aims to be beginner-friendly, trading many other goals for that.

3 Likes

Yes, this might be confusing.

In your example if depth is a TSL expression, you will see it twice in cases like x(depth,depth) because expressions are inlined. If depth is a TSL variable, you will see its expression stored once in a variable, and x(depth,depth) will use the precalculated value from the variable.

AFAIK currently users have to explicitly define what expression must be precalculated and stored as a variable.
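A toy illustration of the two behaviors described above, inlining versus explicit hoisting (plain JS mimicking the idea; `expr` and `toVar` here are made-up stand-ins, not TSL’s real implementation):

```javascript
// Mimics expression inlining vs explicit hoisting ("toVar") in a toy codegen.
let varCount = 0;
const decls = [];

const expr = (src) => ({ src, emit: () => src });
const toVar = (node) => {
  // Hoist the expression into a named variable, emitted once up front.
  const name = `v${varCount++}`;
  decls.push(`let ${name} = ${node.src};`);
  return { src: name, emit: () => name };
};

const depth = expr("( near + p.z ) * far / ( ( far - near ) * p.z )");

// Inlined: the full expression text appears at every use site.
const inlined = `x( ${depth.emit()}, ${depth.emit()} )`;

// Hoisted: computed once, only the variable name is repeated.
const d = toVar(depth);
const hoisted = `x( ${d.emit()}, ${d.emit()} )`;

console.log(inlined);            // expression duplicated twice
console.log(decls.join("\n"));   // single declaration
console.log(hoisted);            // → x( v0, v0 )
```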

Could this be avoided by comparing all TSL expressions for duplicates? Most likely – yes, but at the cost of slowing down the processing time - maybe O(n^2) or, if you play it smart, O(n log n).

Could it be avoided by automatically converting all expressions and subexpressions into variables? Most likely – yes, but at the cost of hard faceplanting against low-end mobile GPUs, which have a lot of limitations.

Could it be avoided by automatically declaring only JS variables containing a node to become TSL variables? Most likely – yes, but at the cost of figuring out how to do this, or whether it is even possible, because TSL cannot see the JS variables and how they are used.

Could it be avoided by employing some smart, fast and relatively easy to implement solution? Most likely – yes, but at the cost of discovering/explaining/implementing this solution.

1 Like

I’m sorry, but this looks like replacement of a node with a string via a template. Maybe we’re splitting hairs along the semantic lines here though.

I think code replacement is very different from generation. If we analyze it using such subjective criteria, we could actually say anything about anything.

I disagree fundamentally, you’re talking about extensibility, and I don’t believe that textual languages can’t have extensibility built into them. Heck, we’ve had a concept of hooks and extension points for close to as long as programming languages have existed.

This is not my opinion, but rather the experience of the maintainers. Examples: Add TSL VFX Tornado by brunosimon · Pull Request #29020 · mrdoob/three.js · GitHub

I think the impact is different when you have a library with such diverse users.

I do understand that the entire system handles more than just the language translation, and I don’t actually have an issue with that part. But I also don’t think that the language part and the automatic binding / recompilation systems are indivisible. You can absolutely have one without the other.

In TSL, you can manipulate the renderer or create buffers directly in a TSL function, for example. You’ll see many examples using particles and post-processing. You can create a material.colorNode = pass( scene, camera ) on a television in the scene, all in one line, and manipulate it because it’s a node - adding a Gaussian blur, for example: gaussianBlur( pass( scene, camera ), 2 ), or anything else. The same code works for post-processing. You should know that this type of abstraction isn’t achievable with just a GPU program language like WGSL or GLSL, because it requires manipulating the renderer. In addition, the level of syntax required in WGSL is much higher, and if you later want to chain things as simply as TSL does, you’ll still have to resort to a node system.

@sunag , perhaps you’re seeing this as an attack, but I assure you, for the third time - is not.

You know TSL better than I, there is probably no person on this planet that knows TSL better than you.

I’m sure that’s not the case, and I don’t want you to think the same.

I’m sure TSL isn’t created for everyone; it’s for the majority of users and use cases. I have already added the target audiences to the wiki. Even if three.js did everything, some would want to create everything from scratch again. What we can do in that regard is improve support for WGSL, although everything said here was already on the roadmap. But three.js is open-source and is always open to new PRs as well, in case you want to help us.

1 Like

TSL has been a godsend for a game engine project I have been working on for the past ~2 years. A portion of the engine needs to pass a significant amount of data between JavaScript and WGSL, and without TSL, it would have been extremely cumbersome to come up with some abstraction to manage that myself. Dozens and dozens of uniforms? No thanks. Manual string concatenation? Also no thanks.

So far, I have not encountered a situation where TSL was generating excessive code. Like @PavelBoytchev mentioned earlier in this thread, you can easily “pre-calculate” a node by using .toVar().

The only thing I would add to TSL however, is for the nodes to do type checking on arguments you pass to them. For example, if I accidentally pass a string for x here: node.add(x). Right now (if I remember correctly), TSL does not catch that mistake, and instead sometimes gives a vague error message that can be hard to track down.
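Such checks are cheap to add at node-construction time, so the failure points at the actual mistake instead of surfacing later as a vague build error. A hedged sketch of the idea (toy code under made-up names, not a patch against three.js):

```javascript
// Toy argument validation at node-construction time: fail fast with a
// descriptive message instead of emitting shader code that won't compile.
class ToyNode {
  constructor(src) { this.src = src; }
  add(other) {
    if (!(other instanceof ToyNode)) {
      throw new TypeError(
        `add() expected a node, got ${typeof other} (${JSON.stringify(other)})`
      );
    }
    return new ToyNode(`( ${this.src} + ${other.src} )`);
  }
}

const n = new ToyNode("a");
try {
  n.add("oops"); // a string slips in; caught immediately with a clear message
} catch (e) {
  console.log(e.message); // names the method, the expected and the actual type
}
```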

Other than that, @sunag TSL is a brilliant idea that I feel has an extremely bright future. Just keep up the great work.

1 Like

The only thing I find really bad about TSL is the shader syntax. We need a consistent syntax—specifically, mathematical syntax, not literal syntax for math. The current approach makes no sense to me. I have no idea why it was designed this way, but I can confidently say that not many new developers will get into shaders with this syntax.

Shaders are already hard as they are; this just makes them feel almost impossible.

Other than that, it’s great. I have no complaints. Thank you!

1 Like

I’m not sure TSL would work if it were done any other way than it is currently. We have to be able to manage “lazily-executed” nodes, and since we are doing this in JavaScript, we have to chain function calls together. The only other way to do this would be to come up with a new language that we pass to a tslEval('some proprietary string') that accomplishes what you are asking, but then we are just coming up with a new shader language and we might as well be directly writing GLSL or WGSL anyway.

1 Like

What would make TSL better is if JavaScript could do operator overloading.

I.e.,

const p = sin(q.mul(10).add(time))

could become,

const p = sin(q * 10 + time)

But that is first a problem that the developers of JavaScript would need to solve. Python can do it.

The closest I came to simulating operator overloading in JavaScript is AST’ing a string of roman numerals.


https://editor.sbcode.net/595fcb556527fe2ffbf71c112a667aca690a0f10#L228-L228

1 Like

The best analogy I can use for TSL is the following:

TSL is to WGSL as Entity Framework is to SQL.

In some cases, writing raw SQL for more advanced database tasks would be more performant for your application. But if you embrace Entity Framework and take advantage of its syntax for the trivial SQL tasks (such as simple SELECT or UPDATE statements), you become 10x more productive. This applies to even the most advanced SQL writers.

Let me shoot myself in the foot (or the head). I played a little with Error.captureStackTrace. At least in my toy demo, it is possible:

  • to construct simple node trees (with deliberate errors, like 5+undef or nan+3)
  • then traverse the node trees (e.g. during a compilation or building phase)
  • and find the bad nodes and report the location in the JS file where they were constructed.

Here is the toy demo, look in the console, and read the comments in the code:

https://codepen.io/boytchev/pen/vEKproM?editors=0011
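Roughly, the trick works like this (a simplified sketch with invented names; see the pen for the actual demo). Error.captureStackTrace is a V8-specific API available in Chromium browsers and Node.js.

```javascript
// Sketch: record the construction site of each node, so a later
// build/compile phase can point errors back at the offending line.
class TracedNode {
  constructor(value) {
    this.value = value;
    if (Error.captureStackTrace) {
      // Capture the stack at construction time, hiding the constructor frame itself.
      const holder = {};
      Error.captureStackTrace(holder, TracedNode);
      this.createdAt = holder.stack;
    }
  }
}

function validate(node) {
  if (typeof node.value !== 'number' || Number.isNaN(node.value)) {
    // Report where the bad node was built, not where validation runs.
    throw new Error(`Bad node value: ${node.value}\nconstructed at:\n${node.createdAt}`);
  }
}
```

The key point is that the stack is captured eagerly at construction, so even though validation happens much later, the error message can still name the original file and line.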

1 Like

Yes, this is what I was complaining about — math syntax should remain math.

1 Like

You’re not seeing things correctly. Imagine writing a full shader with complex math in literal syntax… I know I can write the shader in GLSL and translate it into TSL, but that’s an extra step, and personally, I don’t want to do that.

Compare this.

TSL:

const p2 = sin(p.mul(10).add(time).mul(0.5).add(sin(p.mul(3).sub(time)))).div(1.2);

GLSL:

float p2 = sin(
      (p * 10.0 + time) * 0.5 +
      sin(p * 3.0 - time)
) / 1.2;

And this is just a single line of code… for a full shader, the TSL version is unreadable.

Even with this simple example, the TSL line makes no sense, mathematically speaking.

You will need to take your complaint first to the developers of JavaScript.

3 Likes

The last thing JavaScript needs is the ability to overload operators.

You could write your example like this in TSL which is a little easier to read:

const p2 = sin(add(
  p.mul(10).add(time).mul(0.5),
  sin(p.mul(3).sub(time))
)).div(1.2);

Yes, we can adapt, but still, math should be math… operators should be symbols like *, /, +, not text…

1 Like

Omg, first I want to thank you for starting this thread. This topic has been a bane of my mental health, and the main reason why @pailhead is banned :frowning:

I feel it’s very constructive to have these topics, and I’m grateful that someone with your experience and knowledge (and a very damn good writing style) wrote all of this.

I’ve been a huge opponent of nodes and TSL within three.js, though not of the approach in general. I was a bit surprised to see how much it had grown when I took a peek at the documentation. Being able to write several passes in a single shader reminded me of Unity, and in general of a game engine, not a rendering library.

There is a lot to unpack here, but I’m so excited that I started responding before reading the entire thread.

First, I want to explain where I come from:

  • I think my article was, for some time, the only tutorial describing how to “extend” the built-in materials.
  • I was a huge opponent of onBeforeCompile, but at the same time I wrote the only documentation for it (at the time, around 2017/2018).
  • Three arbitrarily locked you out of some basic WebGL features; @usnul mentions these patterns in the OP, and they seem to be worse now.
  • I’ve been advocating for a leaner three.js.

There is a ton to unpack here; I’m not even sure where to start. **Maybe by reiterating that criticism is valuable?**

I would disagree with this, though with limited knowledge. Before I started learning three.js, I worked a bit with Unity. You could write HLSL or some such shading language, but then you would have additional instructions, like declaring that some part of the same shader (or at least the same file) needs to execute in a later stage. It was still a shading language, though. I think the #pragma whatever directives (used to say the code runs in some post-processing or deferred stage) were not strictly HLSL, but this is similar to how #include <chunk> was handled: it was not GLSL, but it kinda looked like it, and you as a human could write it.
So rather than “it hasn’t been done before and was called WebGLRenderer”, it’s more like: it has been done before, and many engines did it?

What I want to say, constructively, is that there was more to be said about ShaderMaterial.
I’m not claiming I know what the right answer is, but I think I had some valid ideas.

Just look at this file

It exists solely to copy values from the material object into the uniforms objects. Meanwhile, ShaderMaterial just works :trade_mark:. So, ten years ago, three.js could have looked into this, since it was suggested; but in my experience it wasn’t open to it, because of the investment in the whole nodes system.
So an application that does robotics visualization still supports a toon shader. Maybe that’s useful, maybe it’s not, but the point is that three’s renderer knows about all of this; it handles all of this. Imagine if people had been more creative, or if there had been a nodes tool ten years ago: there could be hundreds of material variations. Would the renderer have included all of them? Meanwhile, with just ShaderMaterial the renderer would have worked fine for all those ten years: this entire file goes away, there is no risk of missing a uniform update, less maintenance burden, and more time to work on other things.
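The contrast can be sketched without three.js at all (illustrative names; the real file is far longer). Built-in materials need a hand-written copy step from material properties into uniforms for every property of every material type, while a ShaderMaterial-style object simply owns its uniforms, and the renderer uploads whatever is in them.

```javascript
// Built-in-material style (illustrative): the renderer must know every
// property of every material type and copy it into uniforms each frame.
function refreshUniforms(uniforms, material) {
  uniforms.opacity.value = material.opacity;
  uniforms.color.value = material.color;
  // ...one line per property, per material type, maintained forever.
}

// ShaderMaterial style: the user owns the uniforms object outright; the
// renderer just uploads whatever values are in it. No copy code needed.
const shaderMaterial = {
  uniforms: {
    opacity: { value: 1.0 },
    color: { value: 0xff0000 },
  },
};
shaderMaterial.uniforms.opacity.value = 0.5; // updates flow through directly
```

In the second pattern there is nothing for the renderer to forget, which is exactly the "no risk of missing a uniform update" point above.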

I hope I’m not coming off the wrong way. Again, having seen how much TSL has evolved, a lot of my points have become invalid. But it took a long time to get here, and in the meantime the old system didn’t get much love.

Essentially, I think that this is unfair. I think a fairer thing to say would be:

WebGLRenderer had problems reusing shader code traditionally because very little was invested in that

If we are being fair, Bruno Simon has made upwards of 5 million dollars from selling his course: 50 thousand people times a hundred bucks, and even with discounts and sales events, it adds up. I’ve been reading on these forums that he is working on adding a TSL chapter. This could be seen as a bit of a conflict of interest: if there is hype for TSL, and it’s not documented well, more people will buy his course.

I don’t think that this is a valid argument. Yes, there are people who use this seriously and also give their opinion, AND (as we can see here) there are people who use this seriously, also gave their opinion, and it’s different :slight_smile:

My argument is, and has always been:

this does not belong in three.js

It belongs in a different project: based on three.js, built for three.js, but not three.js itself. Now granted, I am basing this purely on my peasant-level, limited knowledge of how WebGL works, and I can see a world where manipulating shader strings after the fact is even worse in WebGPU land.

But suppose there were anything like an analogue of ShaderMaterial in WebGPU land (I know there isn’t; in three, it’s all nodes; but suppose there could be).

You would be generating shaders ahead of time and downloading them as assets.

Basically, you would either write a TSL program in JavaScript, or use some GUI, since nodes are much better suited to GUIs. What you would get in the end are uniforms and shader strings: basically the object you unfortunately have to manipulate through onBeforeCompile today. So it could be a simple JSON asset, very familiar to web people.
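Such an asset could look something like this. This is a hypothetical shape, not an existing three.js format; the field names are invented for illustration.

```json
{
  "vertexShader": "void main() { gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0); }",
  "fragmentShader": "uniform vec3 color; void main() { gl_FragColor = vec4(color, 1.0); }",
  "uniforms": {
    "color": { "type": "vec3", "value": [1, 0, 0] }
  }
}
```

The node editor (or a build step running TSL) would emit a file like this, and the engine at runtime would only ever see plain uniforms and shader strings.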

This is how I remember using these tools many years ago in game engines. You open a full-fledged node editor and build your nodes, but at the end of the day you still only have a handful of inputs that the engine sees. You manipulate these as you would any other object in the engine.

1 Like