Calculation on GPU side

Hi everyone :waving_hand:

I’m trying to run some computations on the GPU. Things seemed to work well until one of our clients started working with larger point counts.

I managed to narrow down my issue. Here is the simplest code sample I could write that highlights the problem I’m facing:

import { StorageBufferAttribute, WebGPURenderer } from 'three/webgpu'; // Version 0.177.0
import {
  Fn,
  If,
  Return,
  atomicAdd,
  atomicStore,
  instanceIndex,
  storage,
} from 'three/tsl';

(async () => {
  const renderer = new WebGPURenderer({});

  const count = storage(new StorageBufferAttribute(1, 1), 'uint', 1).setPBO(true).toAtomic();

  for (const total of [10_000_000, 100_000_000, 1_000_000_000]) {
    // Reset count
    await renderer.computeAsync(
      Fn(() => {
        atomicStore(count.element(0), 0);
      })().compute(1),
    );

    await renderer.computeAsync(
      Fn(() => {
        If(instanceIndex.greaterThanEqual(total), () => {
          Return();
        });
        atomicAdd(count.element(0), 1);
      })().compute(total),
    );

    const countBuffer = new Int32Array(
      await renderer.getArrayBufferAsync(
        count.value,
      ),
    );

    if (total !== countBuffer[0]) {
      console.log(`Count mismatch: expected ${total}, got ${countBuffer[0]}`);
    }
  }
})();

This code sample logs two mismatches:

Count mismatch: expected 100000000, got 99999996
Count mismatch: expected 1000000000, got 999999968
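One clue I noticed (this is an assumption on my part, not something I’ve confirmed in the three.js source): the missing counts line up exactly with 32-bit float rounding. If `instanceIndex` gets converted to a float32 for the `greaterThanEqual(total)` comparison, the last few indices round up to `total` and take the early `Return()`. A quick check in Node with `Math.fround`, which rounds a number to the nearest float32:

```javascript
// Hypothesis: indices near `total` round up to `total` when cast to float32,
// so they hit the early Return() and are never counted.

// For total = 100_000_000 the float32 spacing (ulp) is 8, and the
// observed count was 99_999_996 — the first index that rounds up:
console.log(Math.fround(99_999_995)); // stays below 100_000_000
console.log(Math.fround(99_999_996)); // rounds up to 100_000_000

// For total = 1_000_000_000 the ulp is 64, observed count 999_999_968:
console.log(Math.fround(999_999_967)); // stays below 1_000_000_000
console.log(Math.fround(999_999_968)); // rounds up to 1_000_000_000
```

If that hypothesis is right, the number of “lost” adds (4 and 32) is exactly half a float32 ulp at each magnitude, which would explain why 10,000,000 (below 2^24, where all integers are exact in float32) passes.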

I would expect the computation on the GPU to return the expected value. Any suggestion for how to still run this code in parallel and get the right count?

Thanks in advance,
David

no idea but.. that sounds like it could be a threading thing maybe?

can you:

await (async () => {

and see if that changes anything?

(im a complete noob w webgpu)