I’m currently working on a project where I need to render around 30,000 objects, each consisting of approximately 1,000 triangles. These objects are procedurally generated from data passed to them. Each object is essentially a cylinder whose shape curves to fit the data, with certain parts (within a single cylinder) thicker than the rest. This data will be passed to me from the backend.
To optimize performance, I’m considering using InstancedBufferGeometry (and maybe stealing some code from CatmullRomCurve3 to set up the data), then handling all transformations in the vertex shader.
Here’s the approach I’m thinking of implementing:
- Instanced Rendering: Generate a single InstancedBufferGeometry for the objects, all with the same geometry positioned at (0,0,0) initially.
- Vertex Shader: Pass transformation data (position, rotation, scale, offset, width, etc.) as instance buffer attributes to the vertex shader and handle all transformations there (see the sketch below).
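Roughly what I have in mind, as a minimal sketch (the `instanceOffset` attribute is a placeholder name; the real data would also carry rotation, scale, width, and so on):

```js
import * as THREE from 'three';

// Base geometry shared by every instance, centered at the origin.
const base = new THREE.CylinderGeometry(1, 1, 1, 16, 32);
const geometry = new THREE.InstancedBufferGeometry();
geometry.index = base.index;
geometry.setAttribute('position', base.getAttribute('position'));
geometry.setAttribute('normal', base.getAttribute('normal'));

// One offset per instance; in practice this would be filled from the backend data.
const COUNT = 30000;
const offsets = new Float32Array(COUNT * 3);
geometry.setAttribute('instanceOffset', new THREE.InstancedBufferAttribute(offsets, 3));
geometry.instanceCount = COUNT;

const material = new THREE.ShaderMaterial({
  vertexShader: /* glsl */ `
    attribute vec3 instanceOffset; // advances once per instance
    void main() {
      // All per-instance transformation happens here.
      gl_Position = projectionMatrix * modelViewMatrix * vec4(position + instanceOffset, 1.0);
    }
  `,
  fragmentShader: /* glsl */ `
    void main() { gl_FragColor = vec4(1.0); }
  `,
});

const mesh = new THREE.Mesh(geometry, material);
```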
My questions are as follows:
- Is this a good approach for optimizing the rendering of such a large number of objects?
- Are there any potential pitfalls or limitations I should be aware of when using this method?
- Are there other optimization techniques or best practices that would work better than this?
Any insights, advice, or examples from those who have tackled similar challenges would be greatly appreciated!
Thank you in advance for your help. (apologies for being a bit vague, don’t wanna breach any NDA)
Most consumer GPUs are gonna max out around 2 million triangles per frame at interactive framerates.
30k instances * 1000 triangles is 30 million triangles.
No amount of instancing will overcome this fundamental limit.
InstancedBufferGeometry and custom shaders shouldn’t be necessary, and they won’t actually help you much here.
You will want to use InstancedMesh for all identical instances… and for unique instances that are static, you will want to merge them into a single BufferGeometry.
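Roughly like this, assuming a recent three.js where BufferGeometryUtils exports mergeGeometries (older versions call it mergeBufferGeometries); the geometries and transforms here are stand-ins:

```js
import * as THREE from 'three';
import { mergeGeometries } from 'three/addons/utils/BufferGeometryUtils.js';

// Identical instances: one InstancedMesh, one draw call, a matrix per instance.
const count = 1000;
const instanced = new THREE.InstancedMesh(
  new THREE.CylinderGeometry(1, 1, 1, 16),
  new THREE.MeshStandardMaterial(),
  count
);
const m = new THREE.Matrix4();
for (let i = 0; i < count; i++) {
  m.setPosition(Math.random() * 100, 0, Math.random() * 100);
  instanced.setMatrixAt(i, m);
}
instanced.instanceMatrix.needsUpdate = true;

// Unique but static geometry: bake each transform in, then merge into one
// geometry so it all renders in a single draw call.
const uniqueGeometries = [new THREE.BoxGeometry(), new THREE.SphereGeometry()]; // stand-ins
const transforms = uniqueGeometries.map((_, i) => new THREE.Matrix4().setPosition(i * 2, 0, 0));
const merged = mergeGeometries(uniqueGeometries.map((g, i) => g.clone().applyMatrix4(transforms[i])));
const mergedMesh = new THREE.Mesh(merged, new THREE.MeshStandardMaterial());
```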
They are all static (non-moving matrices), but they will have updating uniforms.
I’ll try my best to provide an example of what I’m trying to do.
"points": [
{
"x": 575137.25,
"y": 92.8,
"z": -5652322.35
},
{
"x": 575153.3465567377,
"y": 17.9230636279824,
"z": -5652344.763952367
},
{
"x": 575143.7880401737,
"y": 37.77237619091095,
"z": -5652343.338936069
},
...]
I’ll get some data like this, which are points along a curve (ignore the large numbers, I localize them so they’re close to (0,0,0)).
The data above is an example of one cylinder; I will have up to 30,000 of these, and I will need to build the cylinders dynamically based off this data.
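Something like this is what I mean (note that TubeGeometry uses a constant radius, so the varying thickness would still need custom geometry or a per-vertex attribute; the points here are small localized stand-ins for the data above):

```js
import * as THREE from 'three';

// Localized stand-ins for the points shown above.
const raw = [
  { x: 0, y: 92.8, z: 0 },
  { x: 16.1, y: 17.9, z: -22.4 },
  { x: 6.5, y: 37.8, z: -21.0 },
];
const curve = new THREE.CatmullRomCurve3(raw.map(p => new THREE.Vector3(p.x, p.y, p.z)));

// Base shape only: (path, tubularSegments, radius, radialSegments, closed).
const geometry = new THREE.TubeGeometry(curve, 64, 0.5, 12, false);
```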
@manthrax you suggest that we should not use instancing, so would I create all the geometry and merge it then? Could I still have unique data at the per-vertex level?
I feel like I need a way to have per-instance attribute data, so I’m a bit confused here.
I didn’t say you shouldn’t use instancing. I said you should use InstancedMesh instead of the lower-level InstancedBufferGeometry.
You can add per-instance attribute data with InstancedMesh.
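For example, something along these lines (the `blockId` attribute name is made up, and the onBeforeCompile string patching is version-sensitive, so treat it as a sketch):

```js
import * as THREE from 'three';

const count = 100;
const geometry = new THREE.CylinderGeometry(1, 1, 1, 16);

// One float per instance; InstancedBufferAttribute advances once per instance.
const blockIds = new Float32Array(count);
geometry.setAttribute('blockId', new THREE.InstancedBufferAttribute(blockIds, 1));

// Patch the built-in shader to read the custom attribute.
const material = new THREE.MeshStandardMaterial();
material.onBeforeCompile = (shader) => {
  shader.vertexShader = shader.vertexShader
    .replace('#include <common>', '#include <common>\nattribute float blockId;\nvarying float vBlockId;')
    .replace('#include <begin_vertex>', '#include <begin_vertex>\nvBlockId = blockId;');
  // vBlockId can then drive visibility/thickness in a similarly patched fragment shader.
};

const mesh = new THREE.InstancedMesh(geometry, material, count);
```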
I don’t think you read the part about pushing 30 million triangles being impossible.
So, I’m hesitant to give you further recommendations for what to do, without knowing what you’re trying to do.
Can you give me an example of some existing product or site doing what you want? Or any more information?
Good call, apologies for not showing an image; I’ve been hesitant to share data.
I’m recreating these procedurally based off user-uploaded data.
This image shows a bunch of colors for one cylinder, but mine would only show one color across all cylinders at a time. Imagine you could toggle which color you wanted to focus on (via uniforms) and only that color would be visible; for the non-visible parts you would still see the line, but thinner, like so
I imagined that each cylinder (InstancedMesh) would have data for each “color block”, and the uniform would select that part of the array.
Each mesh would have the same number of horizontal vertices, but in different locations. Kinda like this
Here’s what I’m thinking of storing on a per-vertex level. The row/column would point at which array to use in the HoleIdData.
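A rough sketch of the shader side of that idea (plain ShaderMaterial on a single mesh for brevity; `blockId` and `uActiveBlock` are made-up names):

```js
import * as THREE from 'three';

const material = new THREE.ShaderMaterial({
  uniforms: { uActiveBlock: { value: 0 } },
  vertexShader: /* glsl */ `
    attribute float blockId;    // per-vertex: which HoleIdData entry this ring belongs to
    uniform float uActiveBlock; // selected block, updated from the UI
    void main() {
      // Inactive blocks keep their path but collapse toward the centerline,
      // leaving a thin line; the active block keeps full thickness.
      float r = abs(blockId - uActiveBlock) < 0.5 ? 1.0 : 0.15;
      vec3 p = position;
      p.xz *= r; // assumes the cylinder's axis runs along local Y
      gl_Position = projectionMatrix * modelViewMatrix * vec4(p, 1.0);
    }
  `,
  fragmentShader: /* glsl */ `
    void main() { gl_FragColor = vec4(1.0); }
  `,
});
```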
I did read that 30 million triangles is way past the limit. I think I was confused about which approach I should use (instancing vs. merging).
I had a client in India doing protein synthesis, and they used PointsMaterial. Their users clicked individual molecular strands to rebuild a DNA sequence chain. The client compared the GPU usage to raycasting with BVH enabled.