Access "external" texture from GPUComputationRenderer shader

Hi,

I’ve been playing with the great gpgpu birds example for a while, modifying and adding some behaviour rules in the velocity fragment shader with success.

Now I’m trying to add a new rule based on a grayscale texture, which I’d like to use as a grid of attractors for the birds. All my code seems to be working (i.e. I can fake some attractors by hardcoding them in the shader), except that I can’t get the pixel values right. I’m wondering whether my approach is even correct, as I haven’t been able to find a similar one after days of browsing.
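
For reference, this is roughly the kind of hardcoded rule that does work for me in the velocity shader (the attractor position and strength below are made up, just to illustrate what I mean by “faking” attractors):

vec3 attractorPos = vec3( 50.0, 20.0, 0.0 ); // hardcoded attractor, illustrative values
vec3 toAttractor = attractorPos - selfPosition;
float dist = length( toAttractor );
if ( dist > 0.0 ) {
  // pull the bird toward the attractor, scaled by the frame delta
  velocity += normalize( toAttractor ) * delta * 5.0;
}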

See relevant parts of the code below.

main JS script:

async function initComputeRenderer() {

  /* ... */

  velocityUniforms[ 'separationDistance' ] = { value: 1.0 };
  velocityUniforms[ 'alignmentDistance' ] = { value: 1.0 };
  velocityUniforms[ 'cohesionDistance' ] = { value: 1.0 };
  velocityUniforms[ 'freedomFactor' ] = { value: 1.0 };
  velocityUniforms[ 'predator' ] = { value: new THREE.Vector3() };
  velocityVariable.material.defines.BOUNDS = BOUNDS.toFixed( 2 );

  velocityVariable.wrapS = THREE.RepeatWrapping;
  velocityVariable.wrapT = THREE.RepeatWrapping;
  positionVariable.wrapS = THREE.RepeatWrapping;
  positionVariable.wrapT = THREE.RepeatWrapping;

  // modified code here :

  return new Promise((resolve, reject) => {
    new THREE.TextureLoader().load(config.attractorsBitmapPath, tex => {
      velocityUniforms[ 'attractors' ] = { value: tex };
      velocityUniforms[ 'attractorsDimensions' ] = {
        value: new THREE.Vector2(tex.image.width, tex.image.height)
      };
       
      const error = gpuCompute.init();

      if (error !== null) {
        console.error(error);
        reject(error);
        return;
      }

      resolve();
    });
  });
}

fragmentShaderVelocity script:

uniform float time;
uniform float testing;
uniform float delta; // about 0.016
uniform float separationDistance; // 20
uniform float alignmentDistance; // 40
uniform float cohesionDistance; //
uniform float freedomFactor;
uniform vec3 predator;
uniform sampler2D attractors;
uniform vec2 attractorsDimensions;

/* ... */

void main() {

  /* ... */

  float xTexCoord = selfPosition.x * attractorsDimensions.x;
  float yTexCoord = selfPosition.y * attractorsDimensions.y;
  float l = floor(xTexCoord);
  float r = ceil(xTexCoord);
  float b = floor(yTexCoord);
  float t = ceil(yTexCoord);

  // Here I'm trying to get the normalized lightness of the four pixels enclosing
  // the bird's position. Even if the math below isn't quite right yet, I would
  // expect to at least see a difference between black and non-black pixels:

  float lt = length(texture2D(attractors, vec2(l,t) / attractorsDimensions).rgb);
  float rt = length(texture2D(attractors, vec2(r,t) / attractorsDimensions).rgb);
  float lb = length(texture2D(attractors, vec2(l,b) / attractorsDimensions).rgb);
  float rb = length(texture2D(attractors, vec2(r,b) / attractorsDimensions).rgb);

  /* ... */
}

As this is the first time I’ve gone this far into GLSL, I can’t tell why it isn’t working. My hypotheses are either that the texture isn’t loading properly, or that I need to somehow specify the right colorspace for it.
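
In case it helps, this is roughly how I’d try to rule those two hypotheses out; I’m not sure these are the right settings to touch, so treat it as a guess rather than something I know is needed:

new THREE.TextureLoader().load(config.attractorsBitmapPath, tex => {
  console.log(tex.image.width, tex.image.height); // does the bitmap actually load?
  tex.flipY = false;                   // in case the vertical flip is the problem
  tex.generateMipmaps = false;
  tex.minFilter = THREE.NearestFilter; // avoid filtering between grid cells
  tex.magFilter = THREE.NearestFilter;
  // recent three.js versions expose `tex.colorSpace` (older ones use `tex.encoding`);
  // I don't know which value, if any, is needed here
  tex.needsUpdate = true;
  velocityUniforms[ 'attractors' ] = { value: tex };
});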

I also wonder whether the fact that all the textures in a GPUComputationRenderer must share the same dimensions means this texture has to match them too (although I doubt it).

Any insights would be very welcome!
Thanks in advance.

Hi again. I’ve made some progress: I was able to use a texture created with GPUComputationRenderer (so not an “external” one), just like the position and velocity textures and therefore of the same dimensions. I add it as a variable to the GPUComputationRenderer after filling it with the following snippet:

  async fillAttractorsTexture(texture, bitmapPath) {
    return new Promise((resolve, reject) => {
      new ImageLoader().load(
        bitmapPath,
        img => {
          // texture.image.data is the typed array backing the DataTexture
          // created with GPUComputationRenderer (WIDTH x WIDTH, RGBA float)
          const theArray = texture.image.data;
          const $canvas = document.createElement('canvas');
          $canvas.width = $canvas.height = WIDTH;
          const ctx = $canvas.getContext('2d');
          ctx.drawImage(img, 0, 0);
          const imageData = ctx.getImageData(0, 0, WIDTH, WIDTH);

          // copy the bitmap into the float texture, inverted and normalized to [0, 1]
          for (let i = 0; i < imageData.data.length; i += 4) {
            theArray[i + 0] = 1 - imageData.data[i + 0] / 255;
            theArray[i + 1] = 1 - imageData.data[i + 1] / 255;
            theArray[i + 2] = 1 - imageData.data[i + 2] / 255;
            theArray[i + 3] = 1 - imageData.data[i + 3] / 255;
          }

          resolve();
        },
        undefined,
        err => reject(err),
      );
    });
  }

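For context, this is roughly how I wire that texture in (simplified from my actual code; attractorsVariable and attractorsFragmentShader are just my own names for things):

const attractorsTexture = gpuCompute.createTexture(); // WIDTH x WIDTH RGBA float texture
await this.fillAttractorsTexture(attractorsTexture, config.attractorsBitmapPath);

// attractorsFragmentShader only copies the previous frame through, so the data stays static:
//   void main() {
//     vec2 uv = gl_FragCoord.xy / resolution.xy;
//     gl_FragColor = texture2D( textureAttractors, uv );
//   }
const attractorsVariable = gpuCompute.addVariable(
  'textureAttractors', attractorsFragmentShader, attractorsTexture );

gpuCompute.setVariableDependencies( attractorsVariable, [ attractorsVariable ] );
gpuCompute.setVariableDependencies(
  velocityVariable, [ positionVariable, velocityVariable, attractorsVariable ] );
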
Then I’m able to read the pixel values from the velocity shader.
It’s nice to get something working, but I’d like to use a higher-resolution texture.
Does that sound possible? My first approach never worked, and I have no idea why…