I'm on TC39. Would Three.js be interested in "tree-shakable methods" syntax?

There are plenty of people using threejs with no build tooling (it’s one of my favorite ways of working with the library). Right now, you can make a single double-clickable HTML page that shows threejs content by hotlinking to threejs directly (bad manners) or just pointing to npm.
Just include the script/importmap, and go.

This proposal would break that for some platforms, since it assumes these apps can be recompiled/rebuilt/transpiled. Then there are the knock-on effects: doing the actual conversion, debugging it, maintaining it, and educating the community about the what and why of it.

Threejs resisted module tooling for a long time for similar reasons, until importmap made it less of an issue.

imo breaking changes should be limited to changes that fix critical bugs.

If you want threejs with blackjack and hookers, that’s what forks are for! :smiley:
Maybe three-lite or something?

If that proves to be popular, people will use it, and that will create momentum to either integrate it into the main library or endorse it (see r3f/babylon/aframe/etc.)

In short.. if you really think this would be an improvement.. implement it in a fork and let the public decide! :smiley:

(full disclosure.. I’m not in charge of any of this decision making.. this is all just my 2c)

2 Likes

I think you’re new here :wink:

1 Like

The dynamic, polymorphic and typeless nature of JS is what makes it stand out. This gives a kind of expressiveness and creativity that is impossible with (re)strict(ed) languages.

Edit:

Generally, I’m not against changes that don’t break anything existing. So adding new operators to JS is OK if it doesn’t force anyone (besides browser developers) to update their code base. As for Three.js, I’m not part of the team, but would the proposal mean the following: “If you want Three.js code to be tree-shaken better, the entire Three.js core must be rewritten/refactored in a specific way”?

I’m still somewhat lost in the whole idea. Imagine I want to make a Three.js that is perfectly tree-shakable. I could rewrite it in the old C style with plain functions, and then put an OOP wrapper on top. Users who need a small file size would use the function API, while the others would use the OOP API. None of this would need new operators. Thus, small size is possible with current JS if Three.js were rewritten in an OOP-less style. Would such an approach be clearer and more natural than adding operators?
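The "OOP-less core plus OOP wrapper" idea above can be sketched roughly like this. All names here (`vecAdd`, `vecLength`, `Vec3`) are hypothetical illustrations, not actual Three.js API:

```javascript
// Tree-shakeable core: plain data + free functions.
// A bundler can drop any function that is never imported.
function vecAdd(a, b) {
  return { x: a.x + b.x, y: a.y + b.y, z: a.z + b.z };
}
function vecLength(v) {
  return Math.hypot(v.x, v.y, v.z);
}

// Thin OOP wrapper for users who prefer method chaining.
// Users who only import the function API never pay for this class.
class Vec3 {
  constructor(x = 0, y = 0, z = 0) {
    this.x = x; this.y = y; this.z = z;
  }
  add(other) {
    const r = vecAdd(this, other);
    this.x = r.x; this.y = r.y; this.z = r.z;
    return this;
  }
  length() {
    return vecLength(this);
  }
}

// Function-style user:
const len = vecLength(vecAdd({ x: 3, y: 0, z: 0 }, { x: 0, y: 4, z: 0 }));
// Class-style user, same result:
const len2 = new Vec3(3, 0, 0).add(new Vec3(0, 4, 0)).length();
```

Both call paths end up in the same core functions; the only question is which surface the user imports.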

Edit 2:

It looks like you are looking for arguments to support the proposal. I’m not quite sure which you need more: technical arguments or conceptual arguments. It might also help to have a much bigger example of something using your proposal alongside the same thing written without it. That would allow a much fairer comparison and a clearer picture of the impact. The info about the proposal on GitHub shows only small code fragments, and I’m not sure how the change would scale up to projects with thousands of lines of code.

3 Likes

tl;dr

Tree shaking isn’t the reason my revolutionary, world-class, ground-breaking threejs example gets shares & likes.


I just did a test using a minimal threejs scene that demonstrates THREE.Vector3.set()

vite & typescript before optimisations

the bundled js --> 578.88 kB │ gzip: 131.28 kB

~94 packets to transmit 131.28 KB over TLS with MTU 1500

vite & typescript after vite specific tree shaking optimisations

the bundled js --> 494.19 kB │ gzip: 120.62 kB

~87 packets to transmit 120.62 KB over TLS with MTU 1500


So with the currently supported Vite-specific tree-shaking optimisations, I saved ~7 packets.
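For what it’s worth, those packet counts line up with a rough back-of-the-envelope estimate, assuming roughly 1400 usable payload bytes per 1500-byte-MTU packet after IP/TCP/TLS overhead (the exact per-packet overhead varies in practice):

```javascript
// Rough packet-count estimate: payload size divided by ~1400
// usable bytes per 1500-MTU packet (assumed overhead; varies).
const packets = (kb) => Math.ceil((kb * 1000) / 1400);

console.log(packets(131.28)); // ~94 packets before optimisation
console.log(packets(120.62)); // ~87 packets after
```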

At this point, I can’t rule out OOD (optimisation obsessive disorder).

Especially since I can still demonstrate THREE.Vector3.set() using the non-tree-shaken import-map approach, along with lots of extra HTML editor code, in less than a second anyway.

Proof (start your stopwatch and click) —> https://editor.sbcode.net/5b2606

This is perhaps better left to the developers of bundlers that support strongly typed JS. tsc already parses the TS into an AST before emitting JS, so the AST stage is where such an optimisation could perhaps be more easily ascertained and applied.

Maybe the bundler developers can compete against each other for users on the basis of possibly saving several more TCP packets.

This is a problem waiting for the day our governments tax the number of TCP packets our internet-connected devices log.


For ref, here are the Vite specific tree shaking optimisations I tried. Webpack has its own options too.

vite.config.ts

import { defineConfig } from 'vite';

export default defineConfig({
  build: {
    // Emit modern syntax; avoids down-leveling that can defeat tree shaking
    target: 'esnext',

    // Use Terser instead of the default esbuild minifier
    minify: 'terser',

    terserOptions: {
      module: true,
      compress: {
        passes: 3,           // run compression multiple times
        pure_getters: true,  // assume property reads have no side effects
        unsafe: true,        // enable spec-bending "unsafe" transforms
        unsafe_arrows: true,
        unsafe_methods: true,
        dead_code: true,
        unused: true,
        drop_console: true   // strip console.* calls
      },
      mangle: true
    },

    rollupOptions: {
      // Aggressive Rollup tree shaking: assume no side effects anywhere
      treeshake: {
        moduleSideEffects: false,
        propertyReadSideEffects: false,
        tryCatchDeoptimization: false,
        unknownGlobalSideEffects: false
      }
    }
  }
});

tsconfig.json

{
  "compilerOptions": {
    // Emit native ES modules so the bundler can tree-shake
    "module": "ESNext",
    "target": "ESNext",
    "moduleResolution": "Bundler",

    // Note: these two are deprecated since TypeScript 5.0
    // in favor of "verbatimModuleSyntax"
    "importsNotUsedAsValues": "remove",
    "preserveValueImports": false,

    "isolatedModules": true,
    "esModuleInterop": false
  },
  "include": ["src"]
}

package.json (add these fields to your existing package.json)

{
  "type": "module",
  "sideEffects": false
}
4 Likes

This is incorrect. The current API is easy to maintain alongside the new API. Everyone who doesn’t have a build setup can continue to use classes as they do today. They could also switch to the new API; they wouldn’t get the tree-shaking benefits, but they would still get the runtime execution benefits I mentioned.

To stress this point: this can be done without breaking changes to existing users.

This is exactly what we’re proposing. If the Three.js library were written with more C-style static functions, it would become extremely easy to tree-shake.

The reason we’re thinking about new operators is to make these functions more ergonomic to use. C-style functions break “left-to-right fluency”, and developers really seem to like the method chaining that provides that fluency:

// Class-method fluency
vec
  .add(other)
  .normalize()

// C-style breaks left-to-right fluency
normalize(add(vec, other))

// Pipeline operator restores fluency
vec
  |> add(##, other)
  |> normalize(##)

The pipeline uses the same C-style functions, but does it in a way that reads more like the class-style chaining.
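For comparison, a similar left-to-right flow is already possible in today’s JS with a small helper and no new syntax. This is just an illustration; `pipe`, `add`, and `normalize` here are stand-ins, not Three.js or proposal API:

```javascript
// Minimal pipe() helper: threads a value through a list of functions.
const pipe = (value, ...fns) => fns.reduce((acc, fn) => fn(acc), value);

// Hypothetical C-style vector functions operating on plain objects.
const add = (a, b) => ({ x: a.x + b.x, y: a.y + b.y, z: a.z + b.z });
const normalize = (v) => {
  const len = Math.hypot(v.x, v.y, v.z);
  return { x: v.x / len, y: v.y / len, z: v.z / len };
};

const vec = { x: 3, y: 0, z: 0 };
const other = { x: 0, y: 4, z: 0 };

// Reads top-to-bottom like the chained version above.
const result = pipe(
  vec,
  (v) => add(v, other),
  normalize
);
// result is the normalized sum: { x: 0.6, y: 0.8, z: 0 }
```

The arrow-function wrappers are the ergonomic cost the pipeline operator’s topic token is meant to remove.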

I think we’re trying to gauge community reaction. If we introduce pipelines but libraries don’t switch to exposing a tree-shakeable function API, then we didn’t make any progress toward removing dead code. If libraries did switch but users refused because they don’t like it, then we’ve still failed.

It would be great if we could build support beforehand. But even if you hate it, it’s valuable to get that feedback.

1 Like

It feels like trading better tree shaking (good!) for strange, more verbose, and newer syntax (not great!).

And wouldn’t it also break polymorphism? So, like,
I couldn’t swap out object.position with a RangeCheckedVector3() that does range checking or something?
And existing code that uses polymorphism would be forced to re-implement that logic, or be stuck on the last non-pipe version of threejs?
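To make the polymorphism concern concrete, here’s a rough sketch (hypothetical names, not the actual Three.js classes): a method call dispatches dynamically, so a subclass override is picked up at existing call sites, while a free function resolves statically and bypasses the override:

```javascript
class Vector3 {
  constructor(x = 0, y = 0, z = 0) { this.x = x; this.y = y; this.z = z; }
  set(x, y, z) { this.x = x; this.y = y; this.z = z; return this; }
}

// A subclass override: any existing code calling .set() on the
// object picks this up via dynamic dispatch.
class RangeCheckedVector3 extends Vector3 {
  set(x, y, z) {
    if (![x, y, z].every(Number.isFinite)) {
      throw new RangeError('Vector3 components must be finite');
    }
    return super.set(x, y, z);
  }
}

const pos = new RangeCheckedVector3();
pos.set(1, 2, 3); // ok, the range check runs

// A free function resolves statically: swapping in the subclass
// cannot change which implementation runs, so the check is skipped.
function setVec(v, x, y, z) { v.x = x; v.y = y; v.z = z; return v; }
setVec(pos, 1, NaN, 3); // no range check happens
```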

I think the discussion is somewhat academic though. If someone wants to fork the library and do the conversion, the public will decide if they want the old crusty object style threejs, or the new slim tree shaken pipey version. That’s kinda how these things shake out.. :wink:

Proof (start your stopwatch and click) —> https://editor.sbcode.net/5b2606

FWIW I got to 13 seconds on my high end cell phone in the US between clicking that link and seeing everything rendered.

click it again. it’s much faster the second time.

1 Like

Exactly, and most of the time, nobody maintains forks long enough.
The origin of three.js’s success is not that it’s optimized (hell no). It’s that it was written to be understood by everyone. It’s an invitation to dive into the “demo world”.

Nobody cares about gaining a few milliseconds on loading vec3. We have megabytes of 3D models and textures to load after this anyway. What are we debating here? Bundlers created a problem… and now three.js needs to solve it? Why? What the hell are we talking about, why is this three.js’s problem? Can’t bundlers solve this shit by themselves, the problem they created?

7 Likes