I wonder how many trees get burnt down and kids get neglected from three.js churn

Every time three.js updates and breaks things, it means thousands, tens of thousands, maybe even hundreds of thousands of developers end up spending time migrating. That means a lot of computer time, and so energy used. Even if it’s only 30 minutes, 30 minutes * 10000 devs = 5000 hours of work, or roughly 2.5 man-years. Collectively that’s $$$$$$$ and probably several barrels of oil or some other measure of energy.

It also means those devs get work added to their plates: time they could have spent shipping their product instead of dealing with migration issues, or time they could have spent with friends and family.

Not only that, but many live samples, Stack Overflow answers, online tutorials, YouTube videos, etc. all become useless. An expert might understand the changes, but the typical inexperienced programmer Googling for how to use three.js, stumbling on some three-month-old tutorial, will have no idea why things are broken.

I’m sure the responses will be things like “so don’t upgrade” or “if you don’t like the churn don’t use three.js”

But, just pointing out: the changes have real impacts on real people. Maybe weigh those impacts a little higher? Somehow the majority of libraries of all types, for all languages, manage not to have this much churn. I’m not sure what makes three.js special. It’s not like other 3D engines churn this much.

Anyway I didn’t post this to debate or even get feedback. I only posted it to raise awareness that there are real impacts of shipping breaking changes every month. I don’t expect anything will change but given all the other libraries out there that don’t have this issue, even 3D libraries, there really isn’t much of an excuse. It’s a choice to cause this much extra work for thousands of developers.


Hey @samj,

Welcome to the community.

There are two sides to any project, the left and the right, the progress and the conservation.

You’re for conservation - I get that. I think that’s important.

If you believe that three.js is not sufficiently backwards compatible, you may wish to do something about it. This is, after all, an open-source project; you have just as much power to influence the project as anyone else. Put forward good code, and I’m sure it will be accepted, provided the collective sees it as a net positive.

As far as progress versus conservation goes, I have a few points from my own experience:

  • Maintaining backwards compatibility is hard, especially when architecture and interfaces change. Imagine Old English: can you provide a 1:1 translator from, say, 16th-century English to modern English with words like LOL? I’m sure you would see the difficulty, not only in the actual effort to implement such a thing, but in the mental effort required to think up a mechanism.
  • Backwards compatibility holds back a lot of new features. You say “hey, let’s just add light probe interpolation, I even know how to, I’ll write all the code”, but the voices for conservation point out that you would have to not only rewrite the entire lighting code path, but also ensure that the old ways still work. Do you get paid for doing this? Nope. So you throw your hands up and say “screw this, I’m out”.
  • Backwards compatibility hurts performance. Every time an interface is used - you have to figure out if it’s an old way or the new way, and do appropriate conversion as necessary.
  • Backwards compatibility makes code complicated, not complex. You end up with a lot of your code base simply doing the conversion.
  • Backwards compatibility makes code larger, which is not ideal for the web.
  • Backwards compatibility puts up a barrier to learning. Which way is the right way? Old? New? How am I supposed to know; they all seem to work the same way.
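The conversion and warning cost described in the bullets above can be sketched roughly like this. This is an illustrative example with hypothetical property names, not three.js’s actual legacy-shim code, though the library has historically used a similar getter/setter pattern:

```javascript
// Sketch of a backwards-compatibility shim (hypothetical names):
// the old string-valued `.shading` property is kept alive on top of
// the newer boolean `.flatShading`, so every legacy access pays for
// a warning plus a value conversion.
class Material {
  constructor() {
    this.flatShading = false; // the "new" API
  }

  // The "old" API, preserved so legacy code keeps working.
  get shading() {
    console.warn('Material: .shading is deprecated, use .flatShading instead.');
    return this.flatShading ? 'flat' : 'smooth';
  }
  set shading(value) {
    console.warn('Material: .shading is deprecated, use .flatShading instead.');
    this.flatShading = (value === 'flat');
  }
}

const m = new Material();
m.shading = 'flat';         // old code still works...
console.log(m.flatShading); // ...but every such access adds warning noise and branching
```

Multiply this pattern across dozens of renamed properties and methods and you get the size, performance, and complication costs listed above.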

From the perspective of a paying customer, you are within your rights to demand that the product evolve and become better: more features, more performance, more stability, better documentation, and all with minimum burden on you, ideally zero.

The question is: who is the paying customer here?


The release management of three.js has already been discussed several times, and we know not everyone is happy with it. Yes, monthly releases with breaking changes are problematic for certain teams. However, it’s currently the only way for us to manage the project, since the support, development and review effort is already high.

Are you aware of the official migration guide? It lists all breaking changes for each release, so it’s easier for developers to get an overview of the migration tasks.

Besides, we always try to ensure that all official examples, the documentation and the editor reflect the latest three.js API. The example code in particular is a good resource when a code migration is necessary.

In any event, three.js, like all other libraries and frameworks, is not perfect. There are many conceptual issues in the engine that we are trying to resolve over time. As @Usnul mentioned, we could try to keep these things in the library, but then a lot of other users would complain about too little progress. It’s really hard for a software project to find a good compromise between backwards compatibility and new features.

I highly doubt that statement. Please do not claim such things without proper facts; otherwise such statements belong in the category of “fake news”.


Recently, I’ve been working with Autodesk Forge, a 3D viewer built on top of three.js.

This is a large scale application created by a huge company and used professionally by thousands of people. It is used for displaying industrial and architectural models, amongst other things.

They use three.js R71, and it works just fine.

In short, the rapid release cycle of three.js does not add to developers’ work. The desire to always have the newest and shiniest version of three.js is what does that.

Instead, try this: pick the latest version of three.js when you start a project, and only update when you need a new feature or do a complete refactor of the project. You’ll save a lot of time and frustration and lose nothing.
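In npm terms, that advice amounts to pinning an exact version in your `package.json` rather than using a `^` range that floats to newer releases. A minimal sketch (the revision number here is just an example; pick whatever is current when you start your project):

```json
{
  "dependencies": {
    "three": "0.112.1"
  }
}
```

With no `^` or `~` prefix, `npm install` will keep resolving exactly that release until you deliberately change it, e.g. when you need a new feature.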

If you keep chasing the latest version of three.js (or any software/hardware) without a clear reason for doing so, the only person to blame for wasted time is you.

EDIT: and by the way, I say this as someone not that fond of the rapid release schedule. As a person writing learning material for three.js, it does make my life harder. But there are plenty of benefits too; the most important is lowering the burden of development, as pointed out by @Mugen87.



In order to not break things I would have to not spend any time with friends and family myself. And, to be honest, I do not spend enough time with them already :confused:

Anyway, would be helpful to know what changes affected you.


Counterargument: You’re leaving out of the equation all the man years Three.js has saved the web dev community by evolving into the library that it is today. How many hours does it take to build something from scratch with the vanilla WebGL API? Think about the huge barrier to entry devs would have to overcome if they had to write their own GPU shader code. Think about how much money people, companies, and organizations save because it’s free.

Claiming Three.js is responsible for neglected children and burnt trees is downright malicious and defamatory.


Come on, we’re software engineers here, above average intelligence is assumed, and sarcasm is the highest form of intelligence (albeit the driest form of wit, or something like that). This isn’t malicious, unless the US military intervenes and invades the three.js community :slight_smile: I found the OP’s post funny, but then again, yall know me :slight_smile:

I too am curious about the reasoning behind this release schedule. I’d be curious to see how much of R71 lives in its original form in Autodesk Forge, @looeee. Regardless of whether it’s heavily modified or not, it still tells me that R71 was a perfectly good candidate for building web-based CAD applications.

I’d like to understand why this wasn’t three v1.0.17 or v1.4.37, i.e. various incarnations of, say, the “non-VR version of three”. Three 2.0.13 or 2.6.29 could have been “the PBR version of three” or the “shaderChunk version of three”.

I wonder what the interface is, i.e. what the breaking changes are for the average user of three, if such a designation even exists. The little bit of data that I have is fairly anecdotal. Some of it was mentioned by the OP: tutorials become outdated soon. A lot of the SO questions are fairly basic, and/or go unanswered. A lot of conclusions can be drawn from observations like these, which I believe are worthy of a discussion.

My high-level thoughts are that three’s interface has changed about as much as the example code has. Historically you create:

  • Scene
  • WebGLRenderer (for wgl)
  • Camera
  • Object3D
  • Geometry (optional, maybe you just want sprites)
  • ‘Light’ (optional, maybe you are working with meshbasic material)
  • Material

And the methods my imagined average user would use have been pretty consistent for years, things like:

  • add()
  • remove()
  • setSize()
  • updateProjectionMatrix()
  • .color, .roughness, .transparent
  • etc.

In my inexperience and imagination, I imagine some of these being locked as THREE.SceneGraph v1.16.66, a stable version after a bunch of patches. Another one is THREE.WebGLRenderer v2.1.6. All I know is that this is the far other extreme, and that people are moving away from something like this in favor of monorepos.

The other stuff that is prone to change is IMHO what causes frustration like the OP’s. I had a similar experience where I was not allowed to modify the “official” three.js version, which is why I asked for more flexibility in the core. I mean, yeah, sure, you personally can always fork three and play with it, but that’s not always possible in real life.

I don’t like that everything is “making it in”; I’d much rather have a stable core and a bunch of modules / plugins / examples.


Surprisingly little has changed. At least, it has added very little to my workload. I just downloaded the R71 release, and whenever a method is not working I search through the source; I can usually figure out the old method name, or the old way of doing things, within a couple of minutes.

The main exception to this was the camera, which used to be a target camera: setting the rotation worked in much the same way as we now have to do with DirectionalLight.


THREE does a really great job of not breaking with legacy versions, with deprecation warnings staying in place for months or years when something changes. Every new feature or change tries to integrate smoothly without breaking API changes; improvements and new features simply require some changes. It isn’t like some automatic forced Windows update that breaks everything. I have never experienced an update that required more than a couple of minutes to adapt to, and I use it in rather large projects.

Aside from the changes being minor: welcome to the life of a developer. You have to stay up to date; things change, and they change quickly. This applies to any library or software in active development, except WinRAR.

I would better understand your complaint if it were like Drupal and its jump from 7 to 8, which is practically an entirely new framework, hard to migrate to, if at all, no matter whether you are an expert or a user, with every extension for it having to migrate first as well. And that project has many more businesses depending on it, some completely.

With all your money, energy, oil and hours calculations, you probably forgot it’s open source, made by developers who aren’t paid for it, while you use it for something commercial.

I rather wonder how many trees got burnt down in Australia :crying_cat_face:


I was under the impression a few people are paid to work on three.js by Google and Mozilla. Maybe I’m wrong though.


Where can I sign up? ^^

I admit I got that impression too, observing the amount of time spent. But even so, it’s probably more of a donation/sponsorship arrangement?

But anyway, many large open-source projects receive donations or get sponsored. They still have no commercial goals, and only a minority of the community receives anything, especially not those who aren’t even involved in the source directly, such as people contributing examples, plugins or learning material, or just helping others here and in other places.

To be honest, I would love for contributing the projects I develop for THREE to be paid in a sponsorship way instead of being forced to go a commercial route, just to keep the lights on.



I wonder if anyone has successfully used Patreon for open source development? I know a lot of webcomic creators make an ok living that way.

For artists it works well, as their audience are consumers, and on Patreon you can publish paid posts that are unlocked once someone becomes a patron. With the audience being consumers and the content being relatively quick to make, it keeps subscribers entertained and advertises more to come.

For developers it’s harder: development phases can be long, with no huge visual progress to show, if any at all. Shadertoy, for example, has a Patreon too, but it’s quite popular and more of an end-user project; for a framework that doesn’t really offer “entertainment”, this might be much harder.


I think GitHub is experimenting with a new sponsorship program that allows people to support individual developers, or entire projects. It’s still in its early phases, but it could go a long way to help: https://github.com/sponsors/


I ask myself :thinking: a question (which has perhaps already been answered somewhere?): what are the technical reasons for the roughly monthly release cycle?

I could imagine that if the changes were collected over 3 or 6 months, things would get confusing, and any still-existing errors would possibly be more difficult to correct. Would the total cost be considerably greater?

Surely the experts who always need the very latest features can work with the files available on GitHub at any given time?

Two years ago I tried “raycast for multi material”. It didn’t work then. But it could be made usable beside the official revision. See [SOLVED] Raycaster - MultiMaterial - #5 by hofk

There I used Modify indexed BufferGeometry (mouse or input)


It’s not only about sponsorship. Even if more developers were active on GitHub, the core team would still have to review all those changes. The problem is that we are not always sure whether a change goes in the right direction. Maybe there are better solutions, maybe not. But as soon as @mrdoob approves a change and includes it in the library, it becomes visible to all users.

I think it’s important to develop the project in a controlled way while still giving interested developers on GitHub the opportunity to contribute to three.js. I sometimes have the feeling that certain people expect project and release management similar to a commercial project’s, but I’m afraid this won’t happen.


Why is this your conclusion? I mean, does it have to be similar, and if so, why is that bad / unattainable?

  • Why is monthly a good pace for releases? Why not every 2 months, why not every two weeks?
  • Why can’t versioning be guided by something other than the core contributors’ and owner’s intuition? (“The problem is that we are not always sure...”)

I’ve learned to program with three.js, but unfortunately it stops just there: programming. Architecture design, SDLC, and management processes don’t exist in this project, and it would be nice to have those around as well.

Babylon.js is an example of a similar project that uses a longer release cycle; I’m not sure, but maybe every six months or so. However, I’ve heard the devs say that the result is that everyone just uses the dev branch in production apps, which seems way more chaotic than the three.js way.


Maybe it’s not a good example then :slight_smile: What about things like Angular or React?

What about the semver versioning spec? It’s recommended by npm, and it helps people understand the extent of the changes in a given version. For instance, three could publish patch releases every month, and keep minor or major changes for a release every 3 months.
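The contract semver communicates can be sketched in a few lines. The version numbers below are hypothetical (three.js’s current `0.xxx` revision scheme doesn’t follow semver), but the decision a consumer can make from the number alone is the point:

```javascript
// Minimal sketch of what a semver version bump tells a consumer
// (hypothetical version numbers for illustration).
function parseSemver(version) {
  const [major, minor, patch] = version.split('.').map(Number);
  return { major, minor, patch };
}

// What kind of change to expect when moving from `from` to `to`
// under the semver contract.
function changeKind(from, to) {
  const a = parseSemver(from);
  const b = parseSemver(to);
  if (b.major !== a.major) return 'breaking'; // migration work expected
  if (b.minor !== a.minor) return 'feature';  // additive, backwards compatible
  if (b.patch !== a.patch) return 'patch';    // bug fixes only
  return 'none';
}

console.log(changeKind('1.4.2', '2.0.0')); // → breaking
console.log(changeKind('1.4.2', '1.5.0')); // → feature
console.log(changeKind('1.4.2', '1.4.3')); // → patch
```

Under this scheme, a developer can take every patch and minor release without reading a migration guide, and budget time only for major releases.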