What (exactly) is three.js for?

I’ve been using three.js every day for almost two years now, and one of the things I often wonder is whether my goals and the goals of the authors/maintainers of three.js are really in alignment. Or to put it another way, if I were going to seek venture funding to build and publish a 3D web application, would my choice to depend on three.js be considered a risk factor or not? The answer, from what I can tell, is “maybe”.

Compare the situation with Unity, Unreal, or even Babylon.js. You go to those websites and immediately know who the target audience is and what kinds of apps are being targeted - games, VR apps, scientific visualization and so on. There’s an implicit promise that these kinds of things will be supported now and in the future.

Now, of course you can do all those things in three.js, but that’s not the point. When choosing a tool for a commercial-grade game or app, it’s not enough that you can “force” the tool to do what you want. A senior software engineer working in a commercial setting, when deciding which tool or library to use, is also expected to make reasonable predictions as to where that package is going, what its ultimate goals are, and what its velocity is. Otherwise you risk placing a bet on something that later turns out to be a bad fit.

When I look at the three.js website, the documentation, the way the examples are coded, the source code on GitHub and all of the available information, it’s not actually clear to me what the authors’ goals are. Looking at this stuff, one could just as easily draw the conclusion that the primary purpose of three.js is not to build games and VR apps, but rather to compete in the demo scene - to make cool demos that show off the latest rendering techniques and algorithms.

In my case, I picked three.js over Babylon.js for one reason - because so much of my game world is procedurally generated at runtime, I found that Babylon’s more opinionated approach led to slower performance. That one factor outweighed every other concern.

At the same time, however, I often wonder whether or not I made the right choice.


Unlike you, I don’t have much experience with Three.js - in fact, I just started learning it two weeks ago. So I’m not sure my opinion carries much weight.

But I personally see Three.js as one layer above the plain WebGL API that gives you a model so you can create interactive 3D experiences in the browser more easily. I don’t consider it a full-fledged game engine like Unity or Unreal, even though you could use Three.js as a layer in a game engine for the web.

I think this is a project that goes in this direction: https://rogueengine.io/


Three.js is a lightweight rendering tool. It doesn’t presume to be a robust commercial-grade game engine.

It sounds like you’re frustrated that it doesn’t do more, but it never promised to.


Let me be clear, I don’t need - or want - feature parity with Unity, Unreal or Babylon. I’m perfectly able to build my own game engine on top of three.js. What I am more interested in is whether I can depend on three.js to evolve in a direction that better suits my needs.

Here’s an example: most web game developers are going to want to load GLTF models and do various things with skinning - these are critical components, I would never choose any library where I had to implement skinning or GLTF loading myself. I want to work at a higher level and not have to worry about this stuff.
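To be clear, using the stock loader is simple enough - a minimal sketch, assuming a placeholder model path:

```typescript
import { Scene } from 'three';
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';

const scene = new Scene();
const loader = new GLTFLoader();

// 'models/character.glb' is a placeholder path, not a real asset
loader.load('models/character.glb', (gltf) => {
  scene.add(gltf.scene);          // the loaded Object3D graph
  const clips = gltf.animations;  // AnimationClips, e.g. for skinned meshes
});
```

My concern isn’t that this is hard to use - it’s about where this class lives and how it’s maintained.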

However, the fact that in three.js these classes are relegated to an “examples” directory is kind of a red flag for me, and signals that these components are of secondary priority and don’t receive the same level of attention as other parts of the library. Also, the fact that (at least in the past) the TypeScript type definitions for the examples were badly out of date has been a pain point.

There are other things I could mention, but this is an example of what I mean.

Here’s a YouTube video that shows the kind of stuff I am doing in three.js: Game Engine Demo Reel (Nov 2021) - YouTube




You can’t expect that out of an open-source library where people volunteer their time and expertise for free. Contributors aren’t here to suit your needs, so no, I wouldn’t depend on that if I were you.

The glTF format is relatively new, made available to you at no cost, with 68 improvements over the last year (that’s about one improvement every 5 days), and you’re upset that its code is in the /examples folder?! Come on now…


One layer above plain WebGL would be something like this, I think:

Three.js is many many layers above WebGL in general.


This is just the nature of three, unfortunately. Any example submitted is part of three until it is removed. At one point, something as basic as OrbitControls was an “example”.

I think the examples are more like a module/plugin ecosystem - kinda like npm - than anything else. There could be many reasons why it’s called “examples” and lives in the repo, but I think in summary it’s just a different ecosystem.

If you consider something like React - you have the main repo and the library and they use a versioning system. That is basically a contract that solves this:

So yes, as a senior software engineer, you can expect this library at least to be stable. Mind you, you don’t have to be in a commercial setting - this is beneficial for open source as well, since you can develop against a stable version. This is why there are thousands upon thousands of React components hosted on npm that are guaranteed to work with some version of React. More precisely, the React developers guarantee that by not introducing breaking changes within a major version.

Can you expect the same from three.js - no.

Technically, three.js publishes what amounts to a new major version every month, so with each release you can expect something to break. If you take a look at the issues on GitHub you could make a better guess, but it would still be a guess. For example, there are issues that were present across 5 different versions of three before anyone noticed - I think there are lots of these.
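To make that concrete: npm’s caret ranges treat 0.x minor bumps as breaking, so every monthly three.js release falls outside a `^0.x.y` range. A simplified sketch of that rule (not the full semver grammar, just the caret cases):

```typescript
// Simplified caret-range check (real semver handles prereleases, wildcards, etc.)
function caretSatisfies(
  range: [number, number, number],
  version: [number, number, number]
): boolean {
  const [rMaj, rMin, rPat] = range;
  const [vMaj, vMin, vPat] = version;
  if (rMaj !== 0) {
    // ^1.2.3 admits any 1.x.y >= 1.2.3
    return vMaj === rMaj && (vMin > rMin || (vMin === rMin && vPat >= rPat));
  }
  // ^0.2.3 admits only 0.2.y with y >= 3: minor bumps count as breaking
  return vMaj === 0 && vMin === rMin && vPat >= rPat;
}

console.log(caretSatisfies([0, 139, 0], [0, 139, 2])); // true: patch update allowed
console.log(caretSatisfies([0, 139, 0], [0, 140, 0])); // false: next monthly release is out of range
```

So even if you declare `"three": "^0.139.0"`, npm will never move you to 0.140 - each upgrade is a deliberate, potentially breaking step.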

So a three example is basically the same thing as some React library, say “size-me”. Except, if you host your example outside of the three repo, it may break within less than a month; with React it could be years. If your example is in the repo, however, it may be updated along with the library if it is important enough. It may also be removed, or moved into the core.

What I think works in most professional settings is to just pick a version of three and stick with it. If you were unfortunate enough to have picked one that was more broken, then you do an upgrade, but you try to keep that to a minimum.
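In practice “pick a version and stick with it” just means pinning an exact version instead of a range in package.json (the version number here is only an example):

```json
{
  "dependencies": {
    "three": "0.139.0"
  }
}
```

With no `^` or `~` prefix, a fresh install will never silently pull in next month’s release.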

If you look at products around the web that use three, I believe you may find a bunch of ancient versions. I also think that three has been VR-centric for the past 4 years or so; it’s not necessarily aiming to become a more robust library that could be a reliable part of other software - I feel like it’s more of a jQuery for VR.

Whoa, why not? I feel that React is a perfect example of this. BTW, I thought the three.js project is sponsored?

i have raised some of these concerns myself. gltf, orbitcontrols - they aren’t examples, they are critical. three still struggles with its ecosystem: on one hand it doesn’t see itself as a caretaker for one, on the other hand it takes in “examples” that are absolutely being used as ecosystem modules. and while it does maintain these modules, it must be exhausting for a few people to worry about 500 dispersed classes, and that leads to many of these modules catching dust.

i believe three itself is on a good path, but the ecosystem around it is pretty weak, or even dying, due to lack of community, lack of semver and upstream breaking changes. there are critical libraries that haven’t been maintained for many years, discoverability is weak, maintainers take it on for a year or two and eventually bail out. the libraries they leave behind are outdated in a matter of weeks, usually when the next threejs “minor” is out.

we tried to fix some of these concerns with Poimandres on GitHub, which invites developers with a stake to care about the libraries they rely on.


A number of thoughtful comments, thank you.

Another way of asking my question: Is it likely that there will be any expansion of scope for three.js in the future, or is the library basically in caretaker mode at this point? This speaks to the earlier point about stability - stability is great, but it’s also nice to have velocity :slight_smile:

The way I tend to think of it, part of the job of a senior engineer is “dependency curation”. You need to solve a problem X and there’s a dozen open-source libraries to pick from which claim to offer a solution. Which one do you choose?

For some kinds of problems you want a stable solution, but for others, you want a library that isn’t static but is being actively improved. Of course, no one can predict the future perfectly, but you can get a reasonable picture by looking at clues - how many GitHub stars, rates of commits, unit test coverage levels, whether they are using lint or other quality-enhancing tools, responsiveness on issue trackers and so on. All of these factors help judge the level of risk of adoption - whether or not you want to bet your company or (if you’re a hobbyist) several years of your free time on a given solution.

One important clue is simply the mission statement for the project; another is the target audience. If those are misaligned with your own goals, then chances are you are going to be unhappy down the road.

Every open-source project has a target audience, even if it’s not written down - otherwise there’s no logical basis for accepting or rejecting a new feature request.

You look at a library like React and it’s pretty clear where it’s going, and who is served by it. Similarly, even though you might not know what features Unreal or Unity will release in the next two years, you can pretty much predict which subset of developers will be happier in two years than they are today, because that’s an explicit part of their mission.


yea, I get this, but I’d say it is safe as long as you don’t hack too deep into it. If you only use their high-level API and don’t rely on fancy shaders, your web app will not suffer when they drop WebGL support in the coming months, for example. I mean, you don’t even have to know that they are switching to WebGPU, right? As long as you stick to the high-level API, this change will be completely transparent to you. I’m sure there will be a small minority of users who will notice, but they can just stick to the last three.js supporting WebGL (r140); it will still work as long as browsers support WebGL. So I’d say there’s really not much risk involved :wink: go ahead and use it as much as you want!

edit: I see a number of people above had their code broken - I guess these are the hackers I was talking about. Just stop hacking, guys, and if you can’t, just stop upgrading. You don’t have to upgrade just because there is a new three.js version, just like you don’t have to throw away your iPhone N when they start selling iPhone N + 1. Come on and try not to. It is not that hard, I promise :kissing_heart:


Take a look at the video link I posted above. The animated sea foam is a custom shader, the black outlines are a custom shader, the character’s hair shine is a custom shader - just about everything is a custom shader in fact. This is normal in high-end game development, or at least it was back when I was doing this stuff professionally.

Also, the portal code is making direct calls to WebGL stencil APIs - oh, and I patched InstancedMesh to allow for frustum culling :slight_smile: The game I am building would be impossible if I limited myself to only calling the high level APIs.


Three.js is not a game engine (and never will be). However, I find it to be in a really sweet spot of abstraction where you can build one on top of it. Pushing changes like frustum culling of InstancedMesh might not be in the interest of the majority using this library, but it’s at a point where it’s easy to extend it to allow that frustum culling in your app.
In fact, Three.js does a great job at providing basic-level features, which you can extend later (or import someone else’s modules and extensions). For example:

  • we have Points, but it’s up to you to build your own particle system on it (or import someone else’s)
  • we have MRT, but it’s up to you to implement it into a deferred rendering pipeline
  • we have only rendering, without forcing you onto a physics system, so you can use any JS physics lib you want. There are also ready-made examples of this
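As a concrete case of the first bullet, a bare-bones particle cloud on top of Points takes only a few lines (the counts and sizes here are arbitrary):

```typescript
import { BufferGeometry, Float32BufferAttribute, Points, PointsMaterial } from 'three';

// Scatter 1000 particles in a 10-unit cube; spawning, velocity,
// and lifetime logic are left to your own particle system.
const count = 1000;
const positions = new Float32Array(count * 3);
for (let i = 0; i < positions.length; i++) {
  positions[i] = (Math.random() - 0.5) * 10;
}

const geometry = new BufferGeometry();
geometry.setAttribute('position', new Float32BufferAttribute(positions, 3));

const particles = new Points(geometry, new PointsMaterial({ size: 0.05 }));
// scene.add(particles);
```

From there you own the update loop - mutate the position attribute each frame and set its `needsUpdate` flag.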

In other words Three.js is just a library for making 3D things faster and easier. Once you get a little used to it you can extend it for more specific use-cases of your app. Simply pick a version and stick to it if you require core changes.

As for “examples”, it’s just a case of wording. Yes, some are used way more often than others, but that doesn’t mean everyone wants them in their apps, so they’re kept outside the core. It does not mean, however, that they’re not maintained: every release checks whether the examples got broken, so you can be certain glTF support is there to stay and keeps being upgraded all the time :smiley_cat:


Me too… Me too…


Regarding feature velocity, I would tend to say it’s still quite high: just look at the 0.139 versioning - every month is a breaking change, which means you have to pay attention if you upgrade, but generally speaking the upgrades are in fact quite smooth with much attention given to not breaking existing users and generally giving a lot of warning.

As for glTF (and all loaders) being examples, I think that actually harkens back to a day before three.js handled tree shaking. It’s always been paramount to keep the library as small as possible for loading performance, so everything not needed by everyone was separated to examples as a kind of manual tree-shake. Probably not necessary anymore, but it’s also not hurting anything. We depend on GLTFLoader heavily for <model-viewer> and it is very well maintained.

As with all open source projects, they are the best dependency when you either pin the version and forget about it, or you keep up and contribute.


Is this true? That’s a good thing but I didn’t know that was the case.

Yeah, every time you visit the three.js examples section, they’re all using the latest build, which means they need to be updated and maintained to keep up with any changes.

In my years of using the examples as a resource, I’ve only seen a broken demo once.


Your point about tree-shaking is well taken. There are many npm libraries that are designed in a way that specifically takes advantage of tree-shaking, and the prevalence of tools that support tree-shaking has an impact on how those libraries are coded. For myself, I always import three.js symbols in a way that is tree-shakable:

import { Scene, PerspectiveCamera, WebGLRenderer } from 'three'; // named imports (symbols here are just an example), never a namespace import

The question for a library author is not whether they should support tree-shaking - obviously they should if possible - but whether they should rely on it being present. This goes back to the “target audience” issue - it’s less work for a library maintainer if they can assume that users of their library will be doing tree-shaking, since they no longer have to do weird tricks to keep the code size down - but it also means that developers who are not using tree-shaking are going to pay a penalty. So the library maintainer must make that judgment call at some point, and contributors must be informed as to what the official policy is so that they don’t violate the constraint.

And given my earlier point, I suspect that most game devs and application writers will be bundling using Webpack or (my preferred option these days) Vite - whereas someone who is writing a one-page demo is more likely to download the three.js bundle from a CDN and not bother with bundling (maybe).

One issue I have had with the examples directory is this:

import * as SkeletonUtils from 'three/examples/jsm/utils/SkeletonUtils.js';

// I have to cast SkeletonUtils to 'any' to get this to compile under tsc:
this.armature = (SkeletonUtils as any).clone(armature) as Object3D;

I don’t like having to use any types if I can avoid it. I know that SkeletonUtils has TypeScript type bindings, but I haven’t been able to get any of the following combinations to work:

import SkeletonUtils from 'three/examples/jsm/utils/SkeletonUtils';

this.armature = SkeletonUtils.clone(armature) as Object3D;


import { clone } from 'three/examples/jsm/utils/SkeletonUtils';

this.armature = clone(armature) as Object3D;

Now, I sort of assumed - given that the three.js core developers have already stated previously that they don’t want to be responsible for maintaining TypeScript type bindings - that the type bindings for the examples directory would be even lower down on the priority stack than the core code. Thus, I assumed that my problem wasn’t really a bug so much as a triage decision.

Think of it this way: normally when something is marked as an “example”, the implicit message is that it’s a starting point for your own implementation, which will likely as not be heavily modified from the original - not something that you would be using verbatim in your project.

Of course, if code size sans tree-shaking is really a concern, the “right” answer (IMAO) is to make these “examples” separate npm packages and let people install what they need. That doesn’t mean the packages will drift or get out of sync - it can still be a monorepo where everything gets unit-tested together.

Is this Reddit?

Your opinion of three.js is very extensive already; why do you need the authors’ opinion too?
Would that really change anything?