Long story short: I’m adding ThreeJS as a renderer in Blender, and there are all kinds of interesting things that could be done once this linkage is complete.
My question to the community is, what would you like to see?
I’ve built a few engines in ThreeJS and BabylonJS and was always frustrated by the art workflow of saving and checking assets. It’s just damn slow to save something as GLTF thousands of times while you tweak a scene and perfect a look, especially if you’re playing with transmission and are curious how something might render, or what performance you should expect (which is important when optimizing for the web).
To summarize how this actually works in Blender:
Blender renderers can be considered separate “applications” or “modules” that are fed all the data structures and variable changes that happen in the Blender viewport, so the renderer can create the image.
Adding renderers to Blender 3.x is pretty easy because of how well everything has been abstracted. Essentially, you register your renderer with Blender, and then you can provide a texture that it will paint into the viewport window (your renderer announces what resolution it wants, and so on).
ThreeJS is a browser technology, which is where this gets complicated. It doesn’t have a nice C API we can interface with to hook into Blender. So I have done the legwork to encapsulate an instance of ThreeJS in the Chromium Embedded Framework. I have shared memory sections (this works on Windows right now, but will work on Mac/Linux too in time): Chromium renders into a shared memory section that is read by the Blender render Python script and fed into the OpenGL context.
I have the remote rendering done already, meaning I can see ThreeJS rendering in Blender, but there is still work to do on serializing objects so we can render what’s supposed to be there.
But once this is done, you should be able to preview a scene, or at least whatever is compatible with the ThreeJS renderer, the same way you can in EEVEE or Cycles. But the question should be asked: what’s next? It seems like this has the potential for a tool ecosystem, but I’m curious to hear what you all think.
I’ve seen people complaining that what they see in Blender is not what they see later on in Three.js. With your tool it will be easier to distinguish the two cases:
- the difference is because Blender and Three.js use incompatible (or different) properties
- the difference is because the intermediate 3D model file format has restrictions
I’m somewhat curious about animations and also about incompatible features/properties between Blender and Three.js → will you ignore them, or will you try to mimic them as much as possible?
Disclaimer: I’m not a good (or even an average) Blender user. I have used it on several occasions, but usually my time with Blender is spent like this: 10% doing what I need, 90% reading tutorials and watching videos on how to do what I need.
I should clarify that I’m planning to support only the Principled BSDF material shader, because it is the only one that is compatible with the GLTF exporter, and it has pretty close parity on quite a few properties. Properties that aren’t part of what GLTF supports I’m going to ignore for now.
TL;DR: the whole point is optimizing the GLTF model/scene creation workflow, so I plan to support at least anything you can save into GLTF out of Blender.
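For concreteness, here is a rough parity table (written as a plain JS structure) between Principled BSDF inputs, the glTF material fields the Blender exporter typically writes them to, and the three.js properties they land on via GLTFLoader. This is my own summary, not from the post, and exact behavior depends on the exporter and three.js versions.

```javascript
// Rough property parity: Blender Principled BSDF input → glTF material
// field → three.js MeshStandardMaterial / MeshPhysicalMaterial property.
// Illustrative only; exporter versions may map things slightly differently.
const parity = [
  // [Principled BSDF input, glTF field, three.js property]
  ["Base Color",   "pbrMetallicRoughness.baseColorFactor", "color / map"],
  ["Metallic",     "pbrMetallicRoughness.metallicFactor",  "metalness"],
  ["Roughness",    "pbrMetallicRoughness.roughnessFactor", "roughness"],
  ["Emission",     "emissiveFactor",                       "emissive"],
  ["Alpha",        "alphaMode / baseColorFactor[3]",       "transparent / opacity"],
  ["Normal",       "normalTexture",                        "normalMap"],
  ["Transmission", "KHR_materials_transmission",           "transmission (MeshPhysicalMaterial)"],
];
```

Anything outside this overlap (e.g. Blender-only shader inputs) is what the post proposes to ignore for now.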
Extra notes: regarding displaying animations, the animations wouldn’t actually be sent into ThreeJS as animations; rather, position updates are sent to the renderer as scene updates while in the editor (as far as I know). But to be transparent, I haven’t dug into the animation side of it yet, and it will come after getting the rendering working correctly.
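The update-stream idea described above could be sketched like this on the renderer side. All names and message shapes here are hypothetical, not taken from the actual project: each frame, the Blender side would send the evaluated transforms as plain messages, and the ThreeJS side would apply them to the matching mirrored objects.

```javascript
// Hypothetical renderer-side mirror of the Blender scene: instead of playing
// back an exported AnimationClip, we just apply per-frame transform snapshots.
const sceneObjects = new Map(); // name -> mirrored object

function registerObject(name) {
  const obj = { position: [0, 0, 0], rotation: [0, 0, 0], scale: [1, 1, 1] };
  sceneObjects.set(name, obj);
  return obj;
}

// One message per changed object per frame, e.g. driven by a depsgraph
// update handler on the Blender/Python side (assumed, not confirmed).
function applySceneUpdate(msg) {
  const obj = sceneObjects.get(msg.name);
  if (!obj) return; // object not (yet) mirrored on the renderer side
  if (msg.position) obj.position = msg.position;
  if (msg.rotation) obj.rotation = msg.rotation;
  if (msg.scale) obj.scale = msg.scale;
}

// Example: a snapshot of an animated cube's transform arrives mid-playback.
registerObject("Cube");
applySceneUpdate({ name: "Cube", position: [0, 2.5, 0] });
```

In a real implementation the mirrored objects would be three.js `Object3D` instances and the updates would arrive over the shared-memory/CEF bridge; plain objects are used here so the sketch stands alone.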
My thoughts are: since GLTF can be extended so easily to carry additional properties on any node, it could be possible to configure and save custom settings in future versions of the ecosystem.
Imagine a menu in Blender, specific to ThreeJS GLTF scenes, that allows you to select meshes and set their render order, add custom flags, or really anything else you can think of. As long as there’s some code in place to handle these extra parameters in the GLTF loader, you can add features to expand the scope of what you can author inside of Blender.
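Such a loader hook could look something like the sketch below. The property names `threeRenderOrder` and `threeFrustumCulled` are invented for illustration; what is real is that three.js’s GLTFLoader exposes glTF `extras` as `object.userData`. A tiny stand-in traverse is included so the snippet runs without three.js; on a real `gltf.scene` you would use the built-in `.traverse()`.

```javascript
// Minimal recursive walk standing in for Object3D.traverse().
function traverse(node, cb) {
  cb(node);
  (node.children || []).forEach((c) => traverse(c, cb));
}

// Post-load pass: read custom per-node settings authored in Blender
// (carried through glTF "extras" into userData) and apply them.
function applyAuthoredSettings(root) {
  traverse(root, (o) => {
    const d = o.userData || {};
    if ("threeRenderOrder" in d) o.renderOrder = d.threeRenderOrder;
    if ("threeFrustumCulled" in d) o.frustumCulled = d.threeFrustumCulled;
  });
}

// Example scene graph as plain objects standing in for loaded glTF nodes.
const scene = {
  userData: {},
  children: [
    { userData: { threeRenderOrder: 2 }, children: [] },
    { userData: { threeFrustumCulled: false }, children: [] },
  ],
};
applyAuthoredSettings(scene);
```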
I don’t want to personally promise this, but I imagine a world in the near future where my project is the first brick toward Blender becoming a Web 3D IDE: a sandbox where you could code and create applications while leveraging the mature power of Blender as a tool.
You can use the Custom Properties panel on both materials and objects to add whatever extra properties you need in Blender. Make sure to check “Custom Properties” in the Export as GLTF dialog (under “Include”, I think).
Then, in your GLTF loader, add a traverse loop that takes every entry in the o.userData object and applies it to o itself.
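A minimal version of that loop might look like the following. Since properties like reflectivity and envMapIntensity live on the material rather than the object, this sketch applies matching userData entries to the object’s material; that adaptation is mine, not from the post. A stand-in traverse lets the snippet run without three.js; with a real load result you’d call `gltf.scene.traverse(...)` instead.

```javascript
// Copy userData entries (exported from Blender's Custom Properties panel)
// onto the object's material, but only for keys the material actually has,
// e.g. reflectivity or envMapIntensity on MeshStandardMaterial.
function applyUserDataToMaterials(root) {
  const visit = (o) => {
    if (o.material && o.userData) {
      for (const [key, value] of Object.entries(o.userData)) {
        if (key in o.material) o.material[key] = value;
      }
    }
    (o.children || []).forEach(visit);
  };
  visit(root);
}

// Stand-in for a loaded mesh whose material should pick up envMapIntensity.
const mesh = {
  userData: { envMapIntensity: 1.5, unknownProp: 7 },
  material: { envMapIntensity: 1.0, reflectivity: 0.5 },
  children: [],
};
applyUserDataToMaterials({ userData: {}, children: [mesh] });
```

Filtering on `key in o.material` keeps stray custom properties from polluting the material; you could instead whitelist the keys you know you authored.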
I’ve been using this to apply things like reflectivity and envMapIntensity in my materials in Blender.
The only caveat to this, obviously, is that you don’t actually see the effects of these in Blender but only in Three.
So to that point a Three renderer for Blender would be very welcome.
@trusktr I’ve been out of town for another project recently, but I am back now. I’m trying to get some free time so I can finish the first release of this. It’s on GitHub, and I will release the public repo and make an announcement when I’ve got the first revision working completely.
This is precisely my thought as well
I posted here on the three.js forum because that’s the first integration I’m planning, but the code that powers this integration would work for any 3D platform on the web.