I’m launching a node-based 3D editor, which I’ve used successfully for client work, and am now making accessible to beta testers.
It is inspired by visual effects apps like Houdini or Nuke, if you’re familiar with them. What makes those apps so efficient is that they are node-based, which lets people try out ideas easily, building scenes step by step and making variations just by copy/pasting nodes. It’s an approach I haven’t seen often in web-based tools, or only partially implemented.
I’ve also started writing documentation: https://doc.polygonjs.com/ with some short videos and examples (more will be available in the coming days). I have quite a few plans to improve the app, like adding a GPU particle system (by extending the current shader builder) and adding custom nodes for threejs developers who want to add their own functionality (since the app is built on top of threejs).
Would any of you be interested in beta testing it? I’d certainly be curious to hear your thoughts.
You can’t export at the moment (like you would on PlayCanvas, if I understood correctly how they operate), but you can still embed your scene on your website, like you would with Sketchfab.
I’ve debated a lot with myself (and still do) about the pros and cons of both approaches, and I can certainly see why exporting the scene would be really convenient.
My thinking so far is that I could either:
1 - open source the engine, and allow people to export the threejs scene so you can work with it however you see fit. I’d then have to charge for the editor to sustain the project (which is what PlayCanvas does).
2 - keep the engine closed source, but allow embedding via an iframe, with a high-level JavaScript API (which already exists, but isn’t documented yet). Commercial embeds would be paid, allowing the editor itself to be free (which is what Sketchfab does).
3 - go fully open source and rely on donations/client work.
So far I’ve opted for 2. I think it’s more interesting for everybody if there is no cost to trying the app out, experimenting with it, or using it to showcase your work, and you only pay if you’re doing commercial work with it.
In a way, open sourcing is also very tempting, as more people could extend it and learn from it. But that wouldn’t allow me to focus on it and grow it as much as I’d like. I’m investigating ways to make the app extensible without open sourcing it. More to come on this.
You probably went through similar questions for your book, I believe? I’d love to hear if you have counter-arguments to this, or if you can see other options I haven’t considered yet.
Actually, while you can’t export a full scene, you can export geometries and shaders, which I’m hoping should still be helpful if you want to run some tests. Writing GLSL and ensuring that the input geometry has the right data can be time consuming, so a visual tool is certainly useful for my own work.
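To give an idea of the kind of boilerplate that saves you, here is a rough sketch in plain threejs (not Polygonjs code; the `amplitude` attribute is just an example name I made up): the shader expects a custom attribute, and the geometry has to provide matching data by hand, with the right name and item size.

```js
import * as THREE from 'three';

// The vertex shader expects a per-vertex "amplitude" attribute;
// if the geometry doesn't provide it, the displacement silently becomes 0.
const vertexShader = /* glsl */ `
  attribute float amplitude;
  void main() {
    vec3 displaced = position + normal * amplitude;
    gl_Position = projectionMatrix * modelViewMatrix * vec4(displaced, 1.0);
  }
`;
const fragmentShader = /* glsl */ `
  void main() {
    gl_FragColor = vec4(1.0, 0.5, 0.2, 1.0);
  }
`;

const geometry = new THREE.SphereGeometry(1, 64, 32);
const count = geometry.attributes.position.count;

// The matching data has to be built and attached manually.
const amplitude = new Float32Array(count);
for (let i = 0; i < count; i++) amplitude[i] = Math.random() * 0.1;
geometry.setAttribute('amplitude', new THREE.BufferAttribute(amplitude, 1));

const mesh = new THREE.Mesh(geometry, new THREE.ShaderMaterial({ vertexShader, fragmentShader }));
```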
Since I mentioned earlier that there would be a particle system, here is a tutorial on how it works. Each video is only a few minutes long.
It is based on the GPUComputationRenderer, which is great, but still requires you to code one or several GLSL shaders. If you have several attributes you want to update on the particles, it becomes a little tricky. With Polygonjs, the advantage is that you don’t have to write any GLSL shaders: they are created on the fly simply by connecting nodes.
And as I explained in my previous post, you can easily export those GLSL shaders if you prefer.
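For comparison, here is roughly what driving the GPUComputationRenderer by hand looks like for a single position attribute (a minimal sketch, assuming you already have a three.js WebGLRenderer called `renderer` and a points material with a `texturePosition` uniform; the shader string and wiring below are the parts a node graph would generate for you):

```js
import { GPUComputationRenderer } from 'three/addons/misc/GPUComputationRenderer.js';

const SIZE = 256; // SIZE * SIZE particles

// One hand-written GLSL compute shader per attribute you want to update each frame.
const positionShader = /* glsl */ `
  void main() {
    vec2 uv = gl_FragCoord.xy / resolution.xy;
    vec4 pos = texture2D(texturePosition, uv);
    pos.y += 0.01; // move every particle up a little each frame
    gl_FragColor = pos;
  }
`;

const gpuCompute = new GPUComputationRenderer(SIZE, SIZE, renderer);

// Seed the initial positions into a data texture.
const positionTexture = gpuCompute.createTexture();
const data = positionTexture.image.data;
for (let i = 0; i < data.length; i += 4) {
  data[i + 0] = (Math.random() - 0.5) * 10; // x
  data[i + 1] = 0;                          // y
  data[i + 2] = (Math.random() - 0.5) * 10; // z
  data[i + 3] = 1;                          // w
}

const positionVariable = gpuCompute.addVariable('texturePosition', positionShader, positionTexture);
gpuCompute.setVariableDependencies(positionVariable, [positionVariable]);

const error = gpuCompute.init();
if (error !== null) console.error(error);

// In the render loop: run the simulation and feed the result to the points material.
function updateParticles(pointsMaterial) {
  gpuCompute.compute();
  pointsMaterial.uniforms.texturePosition.value =
    gpuCompute.getCurrentRenderTarget(positionVariable).texture;
}
```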
Hi! I gave it a try, and I have a few comments regarding the user interface.
So I guess you’re a fan of Blender, aren’t you? I can tell, because in absolutely no other software I’ve ever used are you supposed to pan with roll-click. Also, I clicked on the boxes to select them like a maniac with all the mouse buttons, until I gave up, looked for this thread, watched the video, and discovered that it is NOT possible to select one with a click: you must box-select. Also, if you expect people with little coding skill to use your software, you’d better make each node’s purpose more explicit than one word of text. Even for experienced users, as soon as you have many boxes in the scene, it becomes hard to differentiate them at a glance.
My suggestions:
left-click: select
right-grab: pan
zoom: only roll
color the nodes: one color for meshes, one for geometries, one for materials, etc.
context box when hovering over a node: how to use it, inputs, outputs?
icons on the nodes, with similar icons in the "add node" menu
Overall this is a good tool, and I’m sure I could find a use for it, especially the particle system and the shaders without GLSL.
Thanks for checking it out and the feedback, that’s great!
My background is actually Houdini and Nuke, with a bit of Maya. I did open Blender a couple of times, but haven’t managed to grasp the interface, so I didn’t push further. But I do plan to get past that, to potentially use its renderer on the backend and provide light baking. And it’s interesting to learn it has things in common with Houdini, I had no idea.
As for your feedback:
left-click: select. Very good point. I think I’m just so used to doing box select, but that’s just me. So I’ve just added it.
right-grab: pan and zoom: only roll. I’m not 100% sure about this one. Let me see if more people request it, in which case I’ll switch to that. Which apps do you know that behave like this?
color the nodes: yes, I absolutely agree. That’s on my list.
context box when hovering over a node: Some nodes do have that, but not all, so it should definitely be improved. I’ve actually added a page to the doc listing all the various ways you can get info on how things work. In short, as in the screenshots below, you can:
icons on the nodes: Yes, you’re right that it’s needed. Some nodes do have an icon (like the merge SOP), but it always takes more time than I’d like to pick an icon that makes sense and looks good. So that unfortunately drops to the bottom of the priority list quickly, but it’s good to hear, I’ll move it up.
Thanks a lot again, it’s super useful. If you have any questions or more feedback, please don’t hesitate.
right-grab: pan and zoom: only roll. I’m not 100% sure about this one. Let me see if more people request it, in which case I’ll switch to that. Which apps do you know that behave like this?
Well, I don’t use that many programs after all, but I do use Grasshopper, and because of the similarity of your software with Grasshopper I expected it to work the same way.
Grasshopper controls:
left-click: select
left-grab: box-select, or move box(es) if there was a box at the start of your grab
roll: zoom
roll-click: contextual menu with frequent functions (Find, Bake, Enable/Disable component…)
roll-grab: nothing
right-click: contextual menu again, but specialised if you clicked on a box
right-grab: pan in the box scene
It’s very intuitive (at least for me), so I never gave it much thought; I just started playing with the functions the second I started learning it. It may be because of my background with Rhino though, and obviously they designed the controls according to Rhino’s standards.
I didn’t know that Houdini used roll-click to pan, I’m surprised! I cannot get used to Blender’s roll-click, though it may just be personal… It just feels uncomfortable because in everyday computer use (like the Windows or iOS user interface, or everyday software like email clients), the basic controls are on the left and right buttons. You’re usually not supposed to use roll-click in those contexts, so I find it somewhat disturbing to change my mindset every time I open Blender.
No idea if it’s personal, just wait until you get more feedback.
Interesting, thank you for detailing this. I may be very biased too, with my own habits; it’s definitely a personal thing. And all my colleagues who were using Houdini/Nuke wanted to customize the app a lot, so that’s very likely something I’ll offer in the not too distant future.
And I used Rhino in the past, to model robots, but didn’t know Grasshopper (link in case someone else also learns about it), so I’m glad to hear about it.
I’ll publish here shortly other tutorials regarding Polygonjs, to show different examples than just particles.
And for the curious, I’ve added another tutorial. This one shows how to:
Add geometries to Mapbox.
Import data from an API and convert it to geometries to be added to a map.
Create a custom material which will display properties from the API.
You will see 2 examples:
Bicycle accidents in Canberra, where each accident is displayed as a bicycle.
Air quality in North America, displayed as a 2D and 3D heatmap.
If you’re a researcher, statistician, cartographer or simply a map enthusiast, this may be useful to you. If adding information to a map has ever felt like too much work, this shows it can be done in a few minutes.
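For the technically curious, here is a rough sketch of the kind of integration this relies on: plain Mapbox GL JS plus threejs, following Mapbox’s custom layer pattern, rather than the Polygonjs nodes used in the tutorial. The token, coordinates and data records are placeholders.

```js
import * as THREE from 'three';
import mapboxgl from 'mapbox-gl';

mapboxgl.accessToken = 'YOUR_MAPBOX_TOKEN'; // placeholder

const map = new mapboxgl.Map({
  container: 'map',
  style: 'mapbox://styles/mapbox/light-v11',
  center: [149.13, -35.28], // Canberra
  zoom: 12,
  pitch: 60,
});

// A custom layer that renders a three.js scene on top of the map.
const customLayer = {
  id: 'data-3d',
  type: 'custom',
  renderingMode: '3d',

  onAdd(map, gl) {
    this.map = map;
    this.camera = new THREE.Camera();
    this.scene = new THREE.Scene();

    // Pretend each API record has a lng/lat: convert it to mercator space
    // and add a marker mesh (a real version would fetch and parse the data first).
    const records = [{ lng: 149.13, lat: -35.28 }];
    for (const { lng, lat } of records) {
      const coord = mapboxgl.MercatorCoordinate.fromLngLat({ lng, lat }, 0);
      const size = coord.meterInMercatorCoordinateUnits() * 50; // ~50m box
      const mesh = new THREE.Mesh(
        new THREE.BoxGeometry(size, size, size),
        new THREE.MeshBasicMaterial({ color: 0xff3333 })
      );
      mesh.position.set(coord.x, coord.y, coord.z);
      this.scene.add(mesh);
    }

    // Reuse the map's WebGL context so both render into the same canvas.
    this.renderer = new THREE.WebGLRenderer({ canvas: map.getCanvas(), context: gl });
    this.renderer.autoClear = false;
  },

  render(gl, matrix) {
    // The provided matrix maps mercator coordinates to clip space.
    this.camera.projectionMatrix = new THREE.Matrix4().fromArray(matrix);
    this.renderer.resetState();
    this.renderer.render(this.scene, this.camera);
    this.map.triggerRepaint();
  },
};

map.on('load', () => map.addLayer(customLayer));
```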
The next tutorial series I’m working on is about the JavaScript API, which will allow you to edit your scenes programmatically (to change any parameter or add nodes based on any outside event) or to create your own custom nodes (so you have full control over your scenes).
As mentioned in my previous post in this thread, here is an update with a new addition to Polygonjs: the JavaScript API.
There are 2 distinct APIs:
the Custom Node API, which allows you to create your own nodes with JavaScript. This is useful if you want to integrate a custom library while still benefitting from the Polygonjs engine. In this video, you can see how the code live-reloads without any compilation needed, just as if you were working locally:
the Scene API, which allows you to interact with your scene from the embedding webpage. This can be used to create configurators like this one:
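To give a feel for what wiring a configurator to the Scene API could look like, here is a purely hypothetical sketch: the import, function and node names (`loadScene`, `scene.node`, `p.color`) are placeholders for illustration only, since the API isn’t documented yet.

```js
// HYPOTHETICAL sketch: every name below is a placeholder for illustration,
// not the documented Polygonjs Scene API.
import { loadScene } from 'polygonjs-engine'; // placeholder import

async function setupConfigurator() {
  // Load the scene created in the editor into a container element (placeholder call).
  const scene = await loadScene({ sceneName: 'chair_configurator', container: '#app' });

  // Wire a color picker on the embedding page to a material parameter (placeholder paths).
  const picker = document.querySelector('#seat-color');
  picker.addEventListener('input', () => {
    const material = scene.node('/MAT/seat');
    material.p.color.set(picker.value);
  });
}

setupConfigurator();
```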