GPU birds example - Changing shape

Hello!

First of all, I apologize for my evident lack of knowledge about GLSL shaders: I just began learning this language and I don’t understand 90% of what I’m doing…

So I’m trying to modify this example that shows how to use GPU computing to generate birds flying around. What I would like to do seems pretty simple: I want the birds to be simple tetrahedron shapes…

To do that, I just changed the way each “bird” was generated, modifying this part of the code to define a tetrahedron shape with 4 faces:

for (var f = 0; f < birdsNumber; f++) {

        // Face 1
        verts_push(
            0, 12, 0,
            0, 0, 12,
            -12, 0, -12
        );

        // Face 2
        verts_push(
            0, 12, 0,
            -12, 0, -12,
            12, 0, -12
        );

        // Face 3
        verts_push(
            0, 12, 0,
            12, 0, -12,
            0, 0, 12
        );

        // Face 4 (base)
        verts_push(
            0, 0, 12,
            -12, 0, -12,
            12, 0, -12
        );

}

But instead of tetrahedrons, I get independent triangular faces floating around… as if the tetrahedron was never told which faces belong to it.

I think it has something to do with the “references” array defined by this formula:

var i = ~~(v / 3);
references.array[ v * 2     ] = (i % WIDTH) / WIDTH;
references.array[ v * 2 + 1 ] = ~~(i / WIDTH) / WIDTH;
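To illustrate what that formula computes, here is a minimal, self-contained sketch (plain JavaScript; `WIDTH` and the array sizes mirror the example, but the snippet is illustrative, not the actual source). `~~` truncates toward zero, so `i` is the index of the texel that vertex `v` should sample, and the two values written per vertex are that texel's normalized (u, v) coordinates in a WIDTH × WIDTH data texture:

```javascript
// Illustrative sketch (not the actual example source): for each vertex v,
// compute which texel of a WIDTH x WIDTH data texture it should sample.
var WIDTH = 4;                      // data texture is WIDTH x WIDTH texels
var points = WIDTH * WIDTH * 3;     // here, 3 consecutive vertices share a texel
var references = new Float32Array(points * 2);

for (var v = 0; v < points; v++) {
  var i = ~~(v / 3);                             // ~~ truncates: texel index
  references[v * 2]     = (i % WIDTH) / WIDTH;   // u: column, normalized to [0,1)
  references[v * 2 + 1] = ~~(i / WIDTH) / WIDTH; // v: row, normalized to [0,1)
}

// Vertices 0, 1 and 2 share texel 0; vertices 3, 4 and 5 share texel 1, etc.
console.log(references[0], references[1]);   // first vertex  -> texel (0, 0)
console.log(references[6], references[7]);   // fourth vertex -> texel (0.25, 0)
```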

Indeed, this array is passed to the vertex shader using THREE.BufferAttribute like so:

var references = new THREE.BufferAttribute( new Float32Array( points * 2 ), 2 );
this.addAttribute( 'reference', references );

And in the vertex shader:

attribute vec2 reference;
uniform sampler2D texturePosition;

vec4 tmpPos = texture2D( texturePosition, reference );

I absolutely don’t understand how this array is supposed to affect the shape of the bird. I have tried changing its values, but the effect seems random no matter what I do…

If anyone can explain the purpose of the “reference” array in this particular case, I would really appreciate it. And if you have any clue about how I can achieve my little goal, I’m listening :slight_smile:

Thanks in advance, and sorry for my laborious explanation!

Well, you picked one of the most complex examples of three.js :wink:. That’s indeed a tough start.

It’s important to understand that the steering behavior of the birds is computed in multiple shader programs. Certain interim results like position and velocity are saved in so-called DataTextures and then used in the final vertex shader for the calculations. The workflow is managed by GPUComputationRenderer, a special helper class of three.js.

In this context, the reference attribute is used to look up the correct position and velocity values for a particular vertex of a bird in the respective data textures. So think of it like texture coordinates. If you change the shape of the birds, the code in webgl_gpgpu_birds has to adjust the reference attribute accordingly: all vertices of one bird have to sample the same texel, otherwise its faces move independently.
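Concretely, for a tetrahedron bird with 4 triangles (12 vertices), the loop could look like the sketch below. The names are assumptions that mirror the example, not the exact source; the point is to divide the vertex index by the number of vertices per bird, so that all 12 vertices of one bird read the same texel:

```javascript
// Sketch (assumed names, not the exact example source): fill the reference
// attribute so that every vertex of one tetrahedron bird samples the SAME
// texel of the WIDTH x WIDTH position/velocity textures.
var WIDTH = 4;                       // WIDTH * WIDTH birds
var VERTS_PER_BIRD = 12;             // 4 triangles * 3 vertices
var points = WIDTH * WIDTH * VERTS_PER_BIRD;
var references = new Float32Array(points * 2);

for (var v = 0; v < points; v++) {
  var bird = ~~(v / VERTS_PER_BIRD); // bird index, shared by all 12 vertices
  references[v * 2]     = (bird % WIDTH) / WIDTH;
  references[v * 2 + 1] = ~~(bird / WIDTH) / WIDTH;
}

// All 12 vertices of bird 0 now read texel (0, 0), so the four faces
// move together instead of floating apart.
```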

I guess you have to share your code with a live example so it’s easier to see what’s going on in your app.

BTW: It is also necessary to adjust the shader programs, since they make some assumptions about the geometry, see

Yeah, I figured out that it was not the simplest of examples! I like abstract code, but this one is maybe too abstract for me.

Ok, I THINK I understand its main purpose. But I’m starting to realize that GPU computing may be too complicated, at least at my level, for what I’m trying to achieve (which is basically shapes flying around according to mouse movements).

Now I’m thinking about using THREE.InstancedBufferGeometry(), like in this example.

I have two questions about it:

  • Would it suit my needs? Would I be able to set up instances that react to mouse movements?
  • What about performance? I chose GPU computing because it seemed really well optimized for generating hundreds of shapes, but is the difference really significant?

It will be difficult to share a live example since my implementation of webgl_gpgpu_birds is more complicated than it seems, but I’ll try to edit my post if I have time.

Anyway, thank you very much for your time and your answers!

Yes. But if you’re new to shaders, I would implement the code without instancing first and verify that everything works. After that, you can still migrate the code to instanced rendering.

BTW: Instanced rendering only reduces the number of draw calls, not the computational overhead in the shaders.

This question is somewhat confusing. The basic data of your shapes are always defined on the CPU side, in your JavaScript code. This is where you create your attribute data. The GPGPU example just shows how you can move as many computations as possible from the CPU to the GPU. This can be useful for stuff like scientific simulations, but you don’t necessarily need GPUComputationRenderer. The following demo illustrates this: most of the animation-related code of the particle effect is implemented in the vertex shader:
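As a rough illustration of that idea (a hand-written sketch, not code from the linked demo, and the uniform/attribute names are made up): the CPU uploads static attributes once and updates a couple of uniforms per frame, while the per-vertex motion lives entirely in the vertex shader, with no GPGPU pass involved.

```glsl
// Sketch only (assumed names, not the demo's actual shader): animate vertices
// entirely in the vertex shader, driven by two uniforms updated per frame
// from JavaScript. position, projectionMatrix and modelViewMatrix are the
// built-ins three.js provides to a ShaderMaterial.
uniform float uTime;     // elapsed time, set from JS each frame
uniform vec2 uMouse;     // normalized mouse position, set from JS each frame

attribute float aOffset; // static per-vertex phase, created once on the CPU

void main() {
  vec3 p = position;

  // simple oscillation plus a drift towards the mouse
  p.x += sin(uTime + aOffset) * 2.0;
  p.y += cos(uTime + aOffset) * 2.0;
  p.xy += uMouse * 10.0;

  gl_Position = projectionMatrix * modelViewMatrix * vec4(p, 1.0);
}
```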


Hey @Mugen87, I’ve noticed the WebGPU birds example uses vertex colors to generate the colors of the mesh. Would there be a simple fix you know of in order to apply the original imported glb’s texture map to the instances?

Um, there is no such WebGPU example. What are you referring to?

Probably they mean GPGPU, not WebGPU.


Yes, my mistake @looeee, I do mean GPGPU. Would you know how to preserve the original texture map on a glTF model I’ve imported to replace the birds?