What is three.js?

I remember there was a person on GitHub who edited auto-instancing into the WebGLRenderer, but the PR was rejected, basically for the reasons you mentioned.
He then proposed introducing Renderer Middleware - something that would allow such optimizations to be optional, modular add-ons, without adding all the extra size to the core. To me it makes sense, although I'm not sure how plausible it is; if possible, it would probably require a lot of work & thought about how to define a standard for such add-ons.

However, it is sweet to think about a future where users can import the optimization add-ons best suited to their app, without having to build them themselves or bloating the library size with things they don't need :slight_smile:

I wonder what others think about it?


Or a future where they just use an alternative WebGL renderer. This could be a very near future, too. All it takes is writing one :slight_smile:

In case it’s not obvious yet, I’m more of an artist mindset than an engineer.


I started building the library because I needed it to express myself.
I needed it to create these kinds of things.

While creating I saw that there were a lot of computer graphics problems to solve, so I tried to solve them and shared my “solutions” so other creative web devs didn’t have to.

It’s also good to know that, before that, I was using Adobe Flash. Even if Flash was great, relying on Adobe was not. I didn’t like depending on a corporation to express myself, so in the transition to HTML5/WebGL I saw an opportunity to build a toolset that would free creative web devs from having to give their email/password to a corporation in order to get new features. Let alone paying a subscription fee.

I just didn’t expect it was going to take me 10 years, and counting…

So you can think of it as a collection of open solutions for computer graphics problems that work in harmony (more or less).

WebGL2, WebGPU and the editor

It’s easy to get lost adding feature after feature to make graphics more advanced: order-independent transparency, deferred rendering, … These are a bit use-case specific and hard to develop and maintain.

I’m personally more interested in solving problems that allow people to express themselves. OIT and DR do not add “that much” value. People are not going to feel happier, sadder or angrier when playing your game, but it sure will increase the maintenance burden on the library side (unless someone solves it nicely).

Look at what people are doing with DreamsPS4. I’m more interested in facilitating that kind of output and experimentation. I’m more interested in reducing the friction of creation.

But, at the same time, I’m trying to keep an open mind, trying to empathize with the library users and accommodate use cases I could never imagine, and then I look at the maintenance cost of each feature to see if it’s viable or if it could damage the traction of the project.

Having said that, I was planning on working on WebGPU on the second half of this year.

What is Three.js?

Going back to my artist mindset… I see the project more and more like a painting. And between all of us, we paint different parts.

Then you step back and try to explain the painting. A painting means different things depending on the viewer. Depending on your past experiences. Depending on your use case.

Unfortunately, I can’t say what Three.js is. I just know that it helps me create things without too much friction. It seems to help others too, but the things they create are very different from the things I create.

I hope this makes things a bit more clear.

PS: So, yes @makc3d… I definitely see the Rick resemblance :sweat_smile:


It is a tool to help you do 3D graphics in JavaScript, by providing a higher-level abstraction over the otherwise intolerably nasty WebGL.

For me it is the core interface for the Future of all Computing.

JavaScript is the winning language. [TypeScript is just a distraction to help old-school coders and big-corporation thinking make the switch. They unfortunately lose the advantages of the typeless logic that helped JavaScript win - and, look, Python is similar in that way.] Alas, that is another topic, but the point is that JavaScript, with Node.js on the server side, leaves you wanting a JavaScript-based solution for all your needs.
When it comes to the next wave of VR and AR, we can and ultimately will cast aside the old 2D & CSS way of thinking (though not entirely - surface mapping and overlays, etc. remain).

The momentum that the open-source community and Mr. Doob’s early efforts have built is why I still believe THREEjs is the future and the most important thing to do in computing today.

Sure, we all have our differences, but the project is mostly inclusive and open to contributions, with a firm, simple vision from its core visionary. This is why it is not so definable. Anyway, if the future, as I believe, lies in community and consensus, then you will need to adopt a “gray area” understanding of the world to understand its components. This project is a perfect reflection of this newer, modern way forward for humanity.
It is not always B&W; we live in color now. Take your stenciled, one-color, fax-able logo and maybe extinguish it, or not, but ultimately bring it to life with an interactive 3D color version via THREEjs! And yes, I meant that as a metaphor for perceiving the world in general.

Anyway I was glad to see the question and read the thread. Mr. Doob’s comments were insightful and I related to many of his comments.

I hope you now understand, in simple B&W words: “THREEjs is a library tool kit to help you do 3D in JavaScript.” In technicolor, it’s the future of computer interfaces and a reflection of the new paradigm of open-source community consensus: an inclusive, versatile JS library tool kit that enables 3D and better visual experiences in a language that has, over the history of computing, distilled the best of all languages into a superior hybrid. So too it can be said that 3JS fills a much-needed niche of enhanced 3D visualization.

Look into Smalltalk creator Alan Kay’s wisdom about long-term thinking: anything transformative should have a longer timeline by design. Thinking short term, even quarterly, is a degenerative, stagnant habit of the modern corporate mentality that is hopefully being phased out. Let’s think ahead another 10 years and ask what 3JS will be then. Alas, that would require another thread, but it is really an ongoing discussion; as was said, no one knew at the time it would take 10+ years. I often ask myself what Alan would do today if we gave him a billion dollars. The answer I get is, in part, THREEjs, but it is only a critical, forward-facing UI part of something much bigger. What will it be in another 10 years? A robust UI to something distilling out of computing at the cloud server level. There are also good things coming from the inventor of HTML with the Solid pods etc., which is the most public variation of the decentralized-grid ideas that I see happening today.


Thanks for the explanation. It does make your position clearer, even if it doesn’t really answer the core question. It’s still helpful to know.

I think that creation is better facilitated with other tools, some of which are built on top of three.js.

I always saw three.js as a thing that is low-level, fast, and is an enabler for other tech. You want to build an app and want to visualize 3d stuff? - go for it, you want to build a game and need a rendering engine? - great.

I never saw it as something that a complete newbie in 3d could pick up and run far with. The reason I feel at odds with the project’s direction is that, as a fast and low-level rendering library, I feel three.js is a failure. It has little in the way of optimization, and it rejects or fails to accept features that would make it better as a low-level rendering library.

I feel this is both a really relatable statement and one with little meaning. Why do I want deferred rendering? Is it because the game will look more slick? or is it because it will let me add dynamic decals into the game?

Do dynamic decals make the player “feel” something? Well, according to your logic - no. But they are valuable for gameplay, to mark certain things in the game world. Can I do without? - sure. Do I even have to have 3d graphics? Can my game be done using pen and paper instead? - maybe. Is there a point to this kind of reductionist line of thinking?.. I think not.

Take a game like “The Last of Us”; it offered a ton of beautiful things, a lot of which were only possible due to rendering tech. And that beauty? I don’t know about you, but it definitely did make me feel something.

Let’s say I want to tell a story about a group of 30 people. Without instancing, showing 30 detailed models on-screen might be hard for some hardware. This means that tech limits my ability to express myself. That PR with auto-instancing you shut down? It would enable creative people who do not understand instancing to express themselves in ways they currently can’t.

I started using three.js because it was pushing the envelope, because it impressed me with its technological capabilities. I feel that your attitude now and my impression from years ago are at odds. Let’s say that I’m a useless person to this community and the development of this project, but I do believe I am representative of a larger group, if not in my expression, then at least in my attitude towards the project. I would suggest that you take that into consideration.

Why walk when you can crawl, why take flight, why innovate? after all, it doesn’t…

Would I pick three.js for rendering if I were to start a graphically demanding project today, knowing what I know about your attitude and given the state of the “library”? No.

It is a rendering library, a nice one, but it’s not one that faces the technological future.


@Usnul With all due respect, I feel you have misunderstood the aim of the project. It was always an important (maybe the most important) aspect of three.js to support developers with less experience in computer graphics in creating web-based 3D content. Of course, it still requires a certain amount of know-how, especially when implementing more complex applications. But videos like the following show that even the youngest can work with three.js :blush:.

Providing more high-level, editor-like tools will open up the user base even more. That said, I think it’s not fair to state that three.js does “not face the technological future”. Last year, for example, the project migrated the entire code base to ES6 modules and added TypeScript support. As a next step, the development group is trying to figure out how to remove legacy code (examples/js) and add more ES6 features like classes.

Besides, as mentioned by @mrdoob, expect to see some movement on WebGPU and WebGL 2 this year. I’m sure stuff like MRT will land in the core over time. I’m currently trying to improve the support of multisampled FBOs. In this context, I’d like to highlight that our activities actually revealed a bug in Chrome’s multisampled FBO implementation. I would have expected that other engines like BabylonJS, which already provide broad WebGL 2 support, would have revealed such browser implementation issues a bit earlier. But obviously not…


I disagree with this. I had never done any 3D work before I came to three.js (well, besides a few days failing to accomplish anything with Unity). I found that three.js inspired me to learn both 3D and web development.

The main issue at the time was that the docs were terrible, and that’s been largely fixed now.

Well, perhaps the question itself doesn’t have much meaning. three.js is a toolkit, it excels in some areas and is lacking in others, just like every other toolkit.

It’s also an open-source, unfunded project, and there’s a limit to how fast development can proceed when every decision must be carefully considered by one person.

It sounds like what you mean is that it’s not facing the future you want.


I agree with this, but I’m not convinced that these should exist in the three.js repo. A full-featured editor is a huge project in itself and the repo is noisy enough already.


Same thing here.


I disagree with this. I had never done any 3D work before I came to three.js (well, besides a few days failing to accomplish anything with Unity)

Haha, that’s exactly me :smiley:


While I understand the maintainers’ goals, it does sadden me that Babylon might be better suited for big-game development :frowning_face:

AFAICT, no tool on the web is suitable for this use case so far…


The same goes for me :slight_smile: :beers:


I think that’s great. And I believe three.js will be better for it. I do believe that three.js is a nice 3d library ™. For me, it’s more a question of architecture and API here. You’d have the same fixed forward rendering pipeline on WebGL2 and on WebGPU; as long as the rendering pipeline remains simplistic and has little in the way of extensibility, I believe it will remain behind the curve. That’s not to say that the library won’t advance, just not anywhere near the edge of what’s possible.

I feel like every time I proverbially open my mouth on the internet - I end up misrepresenting myself.

I never said that it’s not a goal. And I never said that a newbie can’t use three.js; I (attempted to) say that, in my view, you can’t get far with three.js without a decent understanding of 3d. Can it be used to learn? Sure. I have learned the bulk of what I know today while using three.js in that area. So yeah, yeah, me too.

The operative word is “far”. Let’s say you pick three.js and you have little to no understanding of 3d: you will quickly get stuck on simple things like rotating a thing, changing a color, changing geometry, etc. Solving those would teach you a lot, and you might have a ton of fun along the way. But, heck, how did we get to the teaching/learning aspect from talking about expression? Damned if I know.

You disagree with a version of your version of an interpretation of a misreading of my statement. Whatever that version is - I think you’re a smart guy, so I probably fully agree with you. :+1:

I am in 100% agreement with these statements. The pace of development was never brought into this discussion… until now. We can talk about that; it’s an interesting point, though perhaps out of scope of the OT. To clarify: my point was about the intention, the priorities, and the direction. Hope that’s cleared up now.

I thought that was clear from my earliest statements in the same post you have quoted. But yes, you’re 100% on the mark.

When statements that strongly disagree with my views were voiced by prominent members of the community - I wanted to make my voice heard, since I believe that I am representative of a portion of this community. I believe that I have achieved that.

I strive to be a better person each successive day. I try to not get petty, not to say things I will feel ashamed of later and not to offend people without a good reason. I do not always succeed, but I appreciate the patience and compassion, that I have been privileged to experience here so far.


@Usnul since you went as far as making PRs for those complex features, it would not be a big problem for you to maintain your own renderer, now would it? There will still be stuff you get for free like math, and scenegraph, and parsers, and to some extent materials. Not sure how this would play in the long term, but should be enough for your game, no?


Why? I think alternatives are good. Specially if they’re free and open :ok_hand:


It would also have opened a can of worms as we would have to keep track of objects that change and update the instance transform. It was one of those features that could have made the renderer harder to maintain.

And, to be honest, the renderer is pretty hard to maintain already. Very few people venture into doing improvements to it.

I feel it was better to do an InstancedMesh API so the intent is explicit and the user has more control.


You may be right on that one. We’re definitely giving more priority to cross-browser/cross-device support than adding future tech.

In general, we prioritise making things work everywhere and making things easy, rather than adding cool new features that do not work everywhere and only advanced devs know how to use.

Step by step.


I fully agree with this kind of thinking. However, I feel it’s conflicting with what you have stated about expressive power.

Perhaps it would be useful to have a philosophy of three.js written down somewhere? Even if it’s just a few sentences. Like:

  • Expressive power is the main goal; all else is in service of that goal.
  • Maintainability. New features will only be introduced if they clearly fit in with the rest of the system and are understandable enough to be maintained by other members of the community.
  • Clear and deterministic API. Every method shall have clearly defined and deterministic behavior.
  • Clarity over performance. When a conflict arises between clarity of implementation and performance, clarity takes precedence. Three.js should strive to facilitate learning, for both current and future maintainers.

As we should. The whole point of web technology is that it works everywhere and that users don’t have to think about what OS/browser/hardware they are using. When it comes to 3D, of course, there are some limitations - you can’t render 10 million polys on a cheap smartphone. But the limits should lie only in the hardware, not because three.js doesn’t support some GPU/browser combination.

There’s limits to that statement - we shouldn’t support outdated browsers that are a security risk, for example, and we have to balance backward compatibility against being a part of the modern JS ecosystem.

I like this idea. But no matter how clearly our goals are defined, some things will still come down to a judgment call. I don’t think this would have helped much with the auto-instancing PR, for instance. At some point, @mrdoob just had to call it.

Yeah… I look in there whenever I need to figure out how something works, but I always come away feeling that everything is so intertwined that it’s hard for someone who didn’t help to build it to get involved - at least without dedicating a few weeks to understanding it. I mean no disrespect to the people who put hard work into building it, it’s simply a result of evolving over time with many people working on it.

My hope is that when it comes to building the WebGPURenderer we’ll be able to create something with a cleaner, more modular structure. Or, maybe it’s just in the nature of renderers to be complex like this.

Well I agree with your disagreement of my misreading of your agreeing with my disagreement… wait, did I just call myself stupid? :thinking:

Your disagreements are always welcome. In this case, maybe it’s because you are doing something different than most of us, in creating full-featured, steam-worthy games. Your need is quite different - most of my projects are expected to load in a few seconds as soon as someone enters the URL. When it comes to games, people have the expectation that they’ll need specific hardware, and that loading will take some time, and so on. Websites, even 3D websites, have to appeal to a different demographic who may be less interested in “future graphics tech” than gamers, but who will leave if the site takes more than 6 seconds to load.