GPGPU on point geometry

I need to do a bunch of parallel computations, each takes up to 4 inputs and produces up to 4 outputs.

The idea is to build a vertex grid and render 2×2 point primitives to fill in the output data, one computation's output per point.

The vertex grid serves to split the input data array into attribute chunks.

Then I render all the points to a texture and use that texture as a uniform in the custom shaders for the normal scene.
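To make the layout concrete, here is a minimal CPU-side sketch of how the grid could be built: one point per computation, each point positioned on its own texel of the output texture and carrying up to 4 input floats as an attribute. The function name and 4-floats-per-computation packing are assumptions for illustration, not the exact code from my project.

```javascript
// Split a flat input array (4 floats per computation) into per-point
// attribute data plus clip-space positions on a gridW x gridH vertex grid,
// so each point lands exactly on its output texel.
function buildComputeGrid(inputs, gridW, gridH) {
  const count = gridW * gridH;
  if (inputs.length !== count * 4) {
    throw new Error('expected 4 input floats per computation');
  }
  const positions = new Float32Array(count * 2); // clip-space xy per point
  const data = new Float32Array(count * 4);      // up to 4 inputs per point
  for (let i = 0; i < count; i++) {
    const x = i % gridW;
    const y = Math.floor(i / gridW);
    // Center each point on its texel in clip space [-1, 1]
    positions[i * 2]     = ((x + 0.5) / gridW) * 2 - 1;
    positions[i * 2 + 1] = ((y + 0.5) / gridH) * 2 - 1;
    data.set(inputs.subarray(i * 4, i * 4 + 4), i * 4);
  }
  return { positions, data };
}
```

These two arrays would then back a `position`-style attribute and a custom input attribute on a `THREE.Points` mesh rendered into a render target.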

In the example below, I randomly generate points on a sphere, then move them around in the GPGPU pass and render them using instanced geometry.
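For reference, the initial sphere points can be generated with the standard uniform-sampling trick (uniform z plus uniform angle); this is a sketch of one way to do it, not necessarily the exact code from the demo, and the function name is illustrative.

```javascript
// Uniformly sample `count` points on the unit sphere, returning a flat
// xyz array suitable for an instanced attribute or a data texture.
function randomSpherePoints(count) {
  const out = new Float32Array(count * 3);
  for (let i = 0; i < count; i++) {
    const z = Math.random() * 2 - 1;           // uniform in [-1, 1]
    const theta = Math.random() * Math.PI * 2; // uniform azimuth
    const r = Math.sqrt(1 - z * z);            // radius of the z-slice
    out[i * 3]     = r * Math.cos(theta);
    out[i * 3 + 1] = r * Math.sin(theta);
    out[i * 3 + 2] = z;
  }
  return out;
}
```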

A couple of questions:

  1. In this situation, is this a reasonable way to do things?

  2. I can see that the GPGPU part of the frame brings my GPU up to 20% usage; when I add scene rendering, it goes up to 70%. Am I doing the THREE part correctly and not impeding THREE's performance somehow?

  3. Can this solution be improved? For now, I'd like to keep the GPGPU shaders WebGL 1.0 with no extensions.

Thank you!