Creating a tool to get Blender procedural geometry to ThreeJS without exporting

Hi. I need advice on a project I’m hoping to start.
I come from a game development background, where I’ve used Houdini Engine to create procedural levels in Unreal and Unity. I’ve also worked for a company that creates models in 3ds Max and then exports them to a ThreeJS website.

Introduction

What I’m thinking of is a tool to load Blender objects made with geometry nodes into ThreeJS while keeping the parameters of the geometry nodes intact, so as to be able to adjust those parameters from within ThreeJS.

Houdini engine parallel

Keeping it analogous to Houdini Engine, an instance of Blender would run in the background. The webpage would send parameters to this instance, which would evaluate the geometry nodes and generate geometry for consumption by ThreeJS.
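
Concretely, the flow I have in mind would look something like this from the ThreeJS side. This is only a sketch of the idea: the local endpoint, the parameter names and the response shape are all hypothetical, since no such add-on exists yet.

```ts
// Hypothetical flow: POST geometry-node parameters to a local Blender add-on,
// get back flat vertex data, and rebuild a BufferGeometry from it.
import * as THREE from 'three';

interface EvaluatedMesh {
  positions: number[]; // flat x, y, z triplets
  indices: number[];   // triangle indices
}

async function evaluateGeometryNodes(
  params: Record<string, number>
): Promise<THREE.BufferGeometry> {
  // Endpoint and payload shape are invented for illustration.
  const response = await fetch('http://localhost:8080/evaluate', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(params),
  });
  const mesh: EvaluatedMesh = await response.json();

  const geometry = new THREE.BufferGeometry();
  geometry.setAttribute('position', new THREE.Float32BufferAttribute(mesh.positions, 3));
  geometry.setIndex(mesh.indices);
  geometry.computeVertexNormals();
  return geometry;
}

// Usage: tweak a node parameter from the page and rebuild the mesh.
// const geometry = await evaluateGeometryNodes({ radius: 2.0, count: 12 });
// mesh.geometry.dispose();
// mesh.geometry = geometry;
```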

Bridge mode

An easier way would be to create a bridge between Blender and ThreeJS, where you could edit meshes in Blender and have them sent instantly to ThreeJS. You would adjust the parameters and the mesh in Blender and see the result reflected in ThreeJS without having to go through exports.

If someone could give me their two cents on this matter I would highly appreciate it.

Without exporting? Even if you manage to get the editing phase working with Blender in the background, or with some extension that runs ThreeJS inside Blender, how would you ship the final result to the masses? Would you ask every end user to install Blender?

Let’s say you managed to isolate Blender’s node processor and run it as a WASM worker. You’d then need a huge JavaScript layer of abstraction to have any control over the resulting geometries with ThreeJS. What would be the size of the final WASM + JS files? Would it be as performant? We’re talking JavaScript here. Would it be compatible with every device/browser? …

I’m not trying to discourage you, I’m just saying this is a huge project that requires a lot of work and a deep understanding of computer graphics, C, C++, JavaScript, WASM, Blender and ThreeJS. And even if you have all it takes, if you are not surrounded by a team of equally skilled engineers then, unless you hate yourself, don’t even think of doing it alone.

2 Likes

Like Fennec already stated, this is a huge undertaking to develop. I think the only real benefit this may have is to have a ready-to-go editor/user-interface in which content (levels?) can be created.

Assuming your intention is to build a game, i.e. using Blender (or other tools) as a “real-time” procedural level editor, the main architectural question I would ask myself first is:

Which part acts as the server, and who is the client?

If your tooling software (e.g. Blender) acts as a server that your browser can connect to, then it needs one of three possible ways of communicating: plain HTTP requests (which is tedious), WebSockets (plugins may be available for Blender, or one could be written in Python I suppose), or WebRTC. All you would need to do is intercept these calls and act upon them. It would be up to you to implement some kind of protocol for this. If the software gives you the opportunity to act as a “server” using one of the aforementioned methods, then you could theoretically do everything in a browser.
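
To make that first option concrete, here is a rough sketch of the browser side of such a bridge. The port, the message shape and the Blender add-on serving it are all assumptions; the point is only that the protocol can be as simple as pushing JSON over a WebSocket.

```ts
// Sketch: receive mesh updates pushed by a hypothetical Blender add-on
// over a WebSocket and swap them into an existing Three.js mesh.
import * as THREE from 'three';

interface MeshUpdate {
  object: string;      // Blender object name (assumed message field)
  positions: number[]; // flat x, y, z list
  indices: number[];
}

function connectToBlenderBridge(mesh: THREE.Mesh): WebSocket {
  const socket = new WebSocket('ws://localhost:8765'); // assumed port

  socket.onmessage = (event: MessageEvent) => {
    const update: MeshUpdate = JSON.parse(event.data);

    // Replace the geometry wholesale; a real bridge would likely diff updates.
    const geometry = new THREE.BufferGeometry();
    geometry.setAttribute('position', new THREE.Float32BufferAttribute(update.positions, 3));
    geometry.setIndex(update.indices);
    geometry.computeVertexNormals();

    mesh.geometry.dispose();
    mesh.geometry = geometry;
  };

  return socket;
}
```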

If your tooling software can only act as a client (e.g. Houdini connecting to Unreal Engine, rather than the other way around), then you’ll have to open up a whole different “can of worms”. To implement an effective server, you will most likely have to create your own “application”, which probably can’t run inside a browser. Depending on what you’re trying to build, this may be a blocking factor. If your app is something like an Electron app (which is basically just NodeJS with a Chromium layer on top), then you could implement a socket server that Blender or other apps can connect to. However, there will be additional overhead due to the needed translation between the NodeJS(?) process and the browser. You’ll effectively be introducing a piece of middleware that needs to do an extra translation step, which will be a performance factor depending on the IO.
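
As a rough sketch of that middleware idea (not a definitive design), assume the tooling side connects out over a plain TCP socket while browsers connect over WebSockets; the ports and the newline-delimited framing are arbitrary choices here.

```ts
// Relay sketch: forward newline-delimited messages from a tool (e.g. Blender)
// connected over TCP to every browser connected over WebSocket.
import * as net from 'node:net';
import { WebSocketServer, WebSocket } from 'ws';

const browserServer = new WebSocketServer({ port: 8080 }); // browsers connect here
const browsers = new Set<WebSocket>();

browserServer.on('connection', (ws) => {
  browsers.add(ws);
  ws.on('close', () => browsers.delete(ws));
});

// The tooling side connects here; every complete line it sends is relayed.
// This is the extra translation step mentioned above.
const toolServer = net.createServer((socket) => {
  let buffer = '';
  socket.on('data', (chunk) => {
    buffer += chunk.toString('utf8');
    let newline = buffer.indexOf('\n');
    while (newline !== -1) {
      const message = buffer.slice(0, newline);
      buffer = buffer.slice(newline + 1);
      for (const ws of browsers) ws.send(message);
      newline = buffer.indexOf('\n');
    }
  });
});

toolServer.listen(9000); // the tool (acting as a client) connects to this port
```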

Once you can answer the above question, you’ll have a direction to walk towards. By that point, you may come to the realization that building your own editor interface is in fact less effort, unless the tools you use (e.g. Blender) give you the ability to connect to them directly from a browser.

It won’t be trivial either way.

2 Likes

Thank you, that is sound advice. I was hoping there’d be something like a standalone for geometry nodes. The users who develop websites using ThreeJS would have to install either this standalone or load a .blend file with the required setup, and the end users would interact with the website as is, similar to Houdini Engine. But as you’ve said, this is a huge undertaking involving a lot of translating back and forth. I was hoping there’d be a standard representation of geometry nodes that I could parse in Three (they have to be stored somewhere in the .blend, no?). But as I’m finding out, there’s no easy way to get at that representation.
Since there are multiple moving parts to this, I might try one of the smaller parts, like the standalone geometry node parser with an add-on.
Again, thank you for your input.

1 Like

As you’ve pointed out, the conversion will be a bottleneck, yes. I doubt that much data can be transferred between Blender and the browser at a satisfactory rate.
There are huge moving parts to this project if I decide to make it.
But then again, what would be the point of the project? SideFX does have academic licenses for their products and any company that wants procedural assets pretty much has the commercial cost covered. It won’t really add much value to anything.


6 Likes

The server would be whatever’s running Blender, no? Because what the user would need is to send the parameters with a request and get the resulting model back in the response.

1 Like

Hell yeah bestie. “What’s the worst that could happen” mentality.

1 Like

There are too many unresolved variables in your question for it to be properly answered, in my opinion.

If I understand your question correctly, you want your website to be able to “request” some form of resource, based on a set of arbitrary “parameters”, from a server (Blender), and the server returns geometry/materials that Three can understand? If this is indeed the case, you could:

  • Write a Blender plugin that runs an HTTP(S) / WebSocket server that your website can connect to. Some questions that need to be answered before that:

    • How are you going to host this? Running Blender on a server isn’t very efficient, I think, especially if this service is responsible for all the heavy lifting (e.g. generating procedural models for every individually connected user). I don’t think that would scale very well unless you throw a ton of money at the infrastructure.
    • Can such plugins be built in Blender? If so, what does the protocol look like? E.g. what information is sent from your website to this server, and how must Blender interpret it?
  • Write an HTTP API or WebSocket client that can interface with your Blender plugin.

    • What is the expected traffic? Is each individual user going to connect to this Blender server, or should there be an intermediate server sitting in the middle for caching or load-balancing purposes (see the sketch after this list)?
    • What is the expected output from the server? Do you still need to generate geometry/materials on the client side based on incoming data, or does the Blender plugin / caching server do this for you?
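
To illustrate the intermediate caching idea mentioned above, a minimal sketch of such a proxy could look like the following. The Blender endpoint, the ports and the choice of using the raw parameter payload as the cache key are all assumptions made for illustration.

```ts
// Caching proxy sketch: serve repeated parameter sets from memory instead of
// asking the (hypothetical) Blender server to re-evaluate the node tree.
import * as http from 'node:http';

const BLENDER_URL = 'http://localhost:8080/evaluate'; // assumed add-on endpoint
const cache = new Map<string, string>();              // parameters -> JSON payload

const proxy = http.createServer(async (req, res) => {
  let body = '';
  for await (const chunk of req) body += chunk;

  const cached = cache.get(body);
  if (cached !== undefined) {
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(cached);
    return;
  }

  // Cache miss: forward the parameters to Blender and remember the result.
  const upstream = await fetch(BLENDER_URL, { method: 'POST', body });
  const payload = await upstream.text();
  cache.set(body, payload);

  res.writeHead(200, { 'Content-Type': 'application/json' });
  res.end(payload);
});

proxy.listen(3000); // browsers talk to the proxy instead of Blender directly
```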

I don’t think anyone can answer these questions but yourself. I’m not trying to discourage you, but if you are aiming to go this route for a website that has a decent amount of traffic, you should think very carefully about the architecture of this “stack” and the infrastructure required to make it scalable. If it were up to me, I’d lean towards the option where most of the heavy lifting is done on the client side - maybe in worker threads.

If this entire project is aimed at a development pipeline, e.g. something similar to using Houdini with Unreal Engine on your local machine to build worlds, then it’s a whole different story and you could definitely make something like that - if it all runs locally. But if you want to have this type of architecture on a website, that’s a whole different can of worms :sweat_smile:

As for the server and the host, there should be no problem at all. As I understand it, Blender and the browser run on the same computer, so the browser just sends a request to localhost, where Blender runs the server (Python can run an HTTP server out of the box; at work I use a server started with the Python bundled in the Blender folder). I think it would have to be some kind of Blender extension.
Regarding the exchange of information, I think you can use an ordinary “suspended” (long-polling) HTTP request, which the plugin does not complete until it appends a string with the identifier of the geometry that changed. The script in the browser then sends a separate GET request with the name of that geometry and receives a blob with its vertices in response. By the way, if no vertices were added or deleted and only one or a small number of vertices were moved, the plugin can simply send a list of the changed vertices, so the script does not have to reload the entire changed geometry.
Maybe this is an amateur approach, though; I think it would not be much more difficult to program a full-fledged WebSocket connection instead.
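
A client-side sketch of that scheme, assuming a hypothetical add-on that exposes a long-poll endpoint which resolves with a list of moved vertices whenever something changes in Blender. The endpoint and the response shape are invented here; only the BufferAttribute patching is standard Three.js.

```ts
// Long-poll for small edits and patch only the moved vertices
// instead of reloading the whole geometry.
import * as THREE from 'three';

interface VertexDelta {
  index: number;                       // vertex index in the position attribute
  position: [number, number, number];  // new x, y, z
}

async function pollForChanges(mesh: THREE.Mesh): Promise<void> {
  // The request "hangs" until the add-on reports a change (assumed endpoint).
  const response = await fetch('http://localhost:8080/changes');
  const { changed }: { changed: VertexDelta[] } = await response.json();

  const positions = mesh.geometry.getAttribute('position') as THREE.BufferAttribute;
  for (const delta of changed) {
    positions.setXYZ(delta.index, ...delta.position);
  }
  positions.needsUpdate = true;          // re-upload just this attribute
  mesh.geometry.computeVertexNormals();  // cheap enough for small edits

  return pollForChanges(mesh);           // immediately re-issue the long poll
}
```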

1 Like

Have you seen threegn? I’d look into that repo, as it’s quite close to what you’re asking for; you could probably build off of it.

3 Likes