BufferGeometry and performance comparison

Hello!

While trying to extend my knowledge of how BufferGeometry works, I came across these two examples: webgl_gpgpu_birds and webgl_cubes.

When diving into the code, I realized that they use pretty different approaches, even though both rely on a single BufferGeometry with a ShaderMaterial.

What I noticed is that the ‘cubes’ example runs very smoothly (at approximately 50 fps with more than a million triangles), while the frame rate in the ‘birds’ example drops as soon as I use more than 16,000 triangles.

Can someone explain what causes such a difference? I imagined it might have to do with mouse interactions or with the use of GPUComputationRenderer (which I don’t really understand yet), but I honestly don’t know.

Thanks!

The birds example has a much higher computational complexity in its shaders since it simulates a flocking steering behavior. The velocity computation has a runtime complexity of O(n²), because each bird’s velocity is derived from the positions of all other birds. That’s why the implementation scales badly: the per-frame cost grows quadratically, so performance degrades faster and faster as more birds are added to the scene.
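To make the O(n²) concrete, here is a minimal GLSL sketch of what such a velocity pass can look like. This is not the actual shader from webgl_gpgpu_birds; the texture names, the TEX_SIZE constant and the steering rules are simplified assumptions. The structural point is the nested loop: a pass like this runs once per bird per frame, and each invocation reads the position of every other bird.

```glsl
// Simplified sketch of a flocking velocity pass (assumed names, not the
// real webgl_gpgpu_birds shader). Each fragment represents one bird whose
// state lives in a TEX_SIZE x TEX_SIZE data texture.

uniform sampler2D texturePosition;
uniform sampler2D textureVelocity;
uniform float delta;

const float TEX_SIZE = 32.0; // assumed texture side: 32 * 32 = 1024 birds

void main() {

	vec2 uv = gl_FragCoord.xy / TEX_SIZE;
	vec3 selfPosition = texture2D( texturePosition, uv ).xyz;
	vec3 velocity = texture2D( textureVelocity, uv ).xyz;

	// Every fragment (= every bird) walks over the whole position texture,
	// i.e. over every other bird: O(n) work per bird, O(n^2) per frame.
	for ( float y = 0.0; y < TEX_SIZE; y += 1.0 ) {

		for ( float x = 0.0; x < TEX_SIZE; x += 1.0 ) {

			vec2 ref = ( vec2( x, y ) + 0.5 ) / TEX_SIZE;
			vec3 otherPosition = texture2D( texturePosition, ref ).xyz;

			vec3 toOther = otherPosition - selfPosition;
			float dist = length( toOther );
			if ( dist < 0.0001 ) continue; // skip self

			// Toy steering rules: separate from close birds, drift toward far ones.
			if ( dist < 20.0 ) {
				velocity -= ( toOther / dist ) * delta;        // separation
			} else {
				velocity += ( toOther / dist ) * 0.05 * delta; // cohesion
			}

		}

	}

	gl_FragColor = vec4( velocity, 1.0 );

}
```

Doubling the number of birds therefore roughly quadruples the work of this pass, which is why the frame rate collapses so much earlier than in the cubes example, where the per-vertex work only grows linearly with the amount of geometry.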

I don’t think it makes sense to refer to BufferGeometry in the title of the topic since it’s unrelated to the performance difference.

No, in the context of these examples the processing of user input has no relevant performance impact.


Well, thank you for the explanation, that indeed makes sense!

I wrote BufferGeometry because that seemed like the most obvious thing the two examples had in common, but I understand now that it is unrelated. I will edit the title to make it more relevant.
