Morphing between geometries of a Points system on the GPU

Hi there,
I have seen examples of using morph targets, or of simply manipulating a Geometry/BufferGeometry directly, which I fully understand. However, I want to know if there is an established way to interpolate between multiple point cloud models without either looping through vertices on the CPU or using prefabbed morph targets/influences.

For example, say I have a model with 50k points which builds some geometry, and then I inject another 50k points/vertices into the buffer like so:

modelGeometry = SOME_MODEL_I_JUST_LOADED.geometry;
modelBufferGeometry = new BufferGeometry();
// copy the loaded geometry's positions into the buffer geometry
modelBufferGeometry.setAttribute('position',
  new Float32BufferAttribute(new Float32Array(modelGeometry.attributes.position.array), 3));
// removed the normals and UV creation code from here so as not to confuse

// fill a second, equally sized attribute with random target positions
const len = modelGeometry.attributes.position.count;
const morphPositionArray = new Float32Array(len * 3);
for (let i = 0; i < len * 3; i++) {
  morphPositionArray[i] = (Math.random() - 0.5) * 20;
}
modelBufferGeometry.setAttribute('morphposition',
  new Float32BufferAttribute(morphPositionArray, 3));
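For reference, this is the kind of vertex shader I have in mind — a minimal sketch assuming a ShaderMaterial with a hypothetical `progress` uniform running from 0 to 1 (three.js binds the `morphposition` attribute automatically because its name matches the geometry attribute):

```glsl
// Minimal sketch for a ShaderMaterial. `progress` and `size` are
// hypothetical uniforms set from JavaScript; `position`,
// `modelViewMatrix` and `projectionMatrix` are injected by three.js.
uniform float progress;       // 0.0 = original model, 1.0 = morph target
uniform float size;           // point size in pixels

attribute vec3 morphposition; // bound automatically from the geometry attribute

void main() {
  // interpolate between the two point clouds entirely on the GPU
  vec3 morphed = mix(position, morphposition, progress);
  vec4 mvPosition = modelViewMatrix * vec4(morphed, 1.0);
  gl_PointSize = size;
  gl_Position = projectionMatrix * mvPosition;
}
```

Animating would then just mean tweening the single `progress` uniform per frame, rather than touching 50k vertices.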

At this stage I have one structured model's geometry, plus an equally sized set of morph positions in the same buffer geometry, which I would ideally mix() between purely in the shader, on the GPU.

Theoretically, if I had another model whose geometry had the same vertex count, I could add further buffer attributes as well. However, using PointsMaterial with an onBeforeCompile injection, I cannot seem to access those attributes from the shader without manually passing an attribute for each vertex through to the shader, which seems somewhat resource-heavy.


On my material instantiation:

pointsShaderMaterial = new THREE.PointsMaterial({
  blending: THREE.AdditiveBlending,
  onBeforeCompile: function (shader) {
    shader.uniforms.time = uniforms.time;
    shader.uniforms.speed = uniforms.speed;
    // attempting to hand the attribute over as if it were a uniform:
    shader.uniforms.morphposition = modelBufferGeometry.attributes.morphposition;
    shader.vertexShader = vShader;
    shader.fragmentShader = fShader;
  }
});

then using gl_VertexID in the vertex shader to look up the corresponding morph vertex position.
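For what it's worth, gl_VertexID only exists in GLSL ES 3.0 (WebGL2), so trying it with three.js would mean setting `glslVersion: THREE.GLSL3` on a ShaderMaterial and packing the morph positions into a float DataTexture that is fetched per vertex — roughly (texture name and width uniform are hypothetical):

```glsl
// GLSL ES 3.0 only (WebGL2): gl_VertexID does not exist in GLSL ES 1.0.
// Assumes the morph positions were packed into an RGB float DataTexture
// `morphTex` of width `texWidth`.
uniform sampler2D morphTex;
uniform int texWidth;
uniform float progress;

void main() {
  // map the running vertex index to a texel in the data texture
  ivec2 texel = ivec2(gl_VertexID % texWidth, gl_VertexID / texWidth);
  vec3 target = texelFetch(morphTex, texel, 0).rgb;
  vec3 morphed = mix(position, target, progress);
  gl_PointSize = 2.0;
  gl_Position = projectionMatrix * modelViewMatrix * vec4(morphed, 1.0);
}
```

That said, a plain per-vertex attribute (as above) avoids the texture indirection entirely.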

But this does not seem to work, and it appears to be even slower than the CPU for-loop approach.

Is there a smarter way to do this?

I have seen this already:
morphTargetsTween ← using morph targets

Closing this. I seem to have spent a lot of time redeveloping functionality that I can achieve much more easily and robustly using morph targets.