I’m developing a loader for the 3dm format used in Rhino3d:
In Rhino, we have the concept of ‘blocks’, which allow a user to define some objects and reference them around the model without adding more geometry to the model. This is a pretty common concept in 3d modelling software.
I’d like to take advantage of the instancing capabilities in three.js in order to map rhino3dm types:
- “InstanceDefinitions” (the object which contains the base geometry, not visualized in the 3d model)
- “InstanceReferences” (the objects which contain a reference to the definition in addition to a transformation matrix, visualized in the 3d model).
In Rhino, the user can edit any of the “InstanceReferences” (which in the background just edits the original InstanceDefinition) and the rest will be updated.
I don’t think the mapping has to be exact, and I don’t expect users to want to update things. I am mostly interested in performance.
We have some useful objects in three.js for this: `InstancedMesh` and `InstancedBufferGeometry`.

Some reasons these are problematic for this case:
- Instances should be able to contain other kinds of geometry besides Mesh (Lines, Points, etc).
- References are individual objects which transform the definition object geometry.
I see the types available in three.js working from top down: there is a base object which contains all of the instancing information (number of instances, transform, etc).
I’d ideally like something like this:
- DefinitionObject (type = Object3D)
  - children[]
    - Object3D with BufferGeometry
    - Object3D with BufferGeometry
    - Object3D with BufferGeometry
    - ...
- ReferenceObject (type = Object3D)
  - children[]
    - Object with reference to the BufferGeometry
    - Object with reference to the BufferGeometry
    - Object with reference to the BufferGeometry
    - ...
But maybe that isn’t a good design, or even technically possible. Looking for suggestions. Maybe I just need to switch to a more three.js-friendly way of organizing this, like the following:
- DefinitionObject (type = Object3D)
  - children[]
    - Object3D with InstancedBufferGeometry x number of instances
    - Object3D with InstancedBufferGeometry x number of instances
    - Object3D with InstancedBufferGeometry x number of instances
    - ...
I think it depends on what you want to provide —
- If your goal is to create a loader that constructs objects that behave like normal three.js objects, then `THREE.InstancedMesh` is probably what you want to use, even though it collapses the scene graph. Perhaps you could have an option on the loader to either create InstancedMesh groups, or many individual objects (for more control but less optimization).
- If your goal is to provide a custom suite of tools really specific to Rhino workflows, you could certainly build that on top of three.js. For example, create some “RhinoReferenceMesh” objects that don’t render geometry themselves, but that write their positions into a specific index of an InstancedMesh. That goes far beyond a loader of course, but would give you more control.
> Instances should be able to contain other kinds of geometry besides Mesh (Lines, Points, etc).
Do you mean that a single instance might contain a mix of all of these? Or that there might be instanced line or instanced point groups, in addition to instanced mesh groups, separately?
Thanks for your suggestions!
Yes. An Instance Reference object can be made up of many kinds of geometry like meshes, lines, points, etc. So in Rhino, you might have something like this:
- Instance Definition
  - Objects
    - Line A
    - Mesh A
    - Line B
    - Line C
    - Point A
    - PointCloud A
    - ...
- Instance Reference
  - Transform matrix
  - Objects
    - ref to Line A
    - ref to Mesh A
    - ref to Line B
    - ref to Line C
    - ref to Point A
    - ref to PointCloud A
    - ...
At the moment, I’m just looking to bring the objects into three.js in a manner that does not duplicate / clone the same geometry across instances (which is what the first version of the loader does). I would like to avoid collapsing the scene graph, but would be willing to consider it if it means we can actually take advantage of instancing.
Another option might be to start by reusing the same BufferGeometry instances many times, but attaching them to unique Mesh instances. That’s still going to cost N draw calls, but the memory footprint is lower. And in theory it would be easier for application code to optimize the draw calls later, since it can more easily gather up the meshes that share references to the same geometries.
> Objects
>
> - Line A
> - Mesh A
> - Line B
> - Line C
> - Point A
> - PointCloud A
> - ...
Blender has this structure too — lines, points, triangles, and n-gons can all be jumbled together in the same Mesh. For the purposes of rendering it all has to be separated out: WebGL needs primitives grouped together coherently, and it can’t render different types of primitives in the same draw call. My opinion (and the approach of formats like glTF) is to do that separation earlier rather than later… trying to keep the DCC tool’s edit-friendly layout is likely to create inefficiencies for realtime rendering.