Overcoming Morph Target limitations

How would I go about keeping the changes made to a model with morph targets while still clearing out the memory on the GPU, so that I can apply more than four? I’m not asking anyone to write the code for me… I just need a general direction.

Thanks… Harvey

You would have to apply the vertex displacement once on the CPU via JavaScript. You can then modify the already displaced geometry again in the shader.

Mesh.raycast() already contains the code that shows how to do this.
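
For illustration, here is a minimal sketch of that CPU bake (my own code, not the raycast source), assuming a BufferGeometry with relative morph attributes (geometry.morphTargetsRelative === true); Mesh.raycast() also shows how to handle absolute targets:

```js
// Minimal sketch: bake the current morphTargetInfluences into the base
// position attribute on the CPU, then drop the morph attributes so the
// GPU no longer has to hold them. Assumes relative morph targets.
function bakeMorphTargets(mesh) {
  const geometry = mesh.geometry;
  const position = geometry.attributes.position;
  const morphs = geometry.morphAttributes.position;
  const influences = mesh.morphTargetInfluences;

  for (let i = 0; i < morphs.length; i++) {
    const influence = influences[i];
    if (influence === 0) continue;

    // Add this morph's relative displacement, scaled by its influence
    for (let v = 0; v < position.count; v++) {
      position.setXYZ(
        v,
        position.getX(v) + morphs[i].getX(v) * influence,
        position.getY(v) + morphs[i].getY(v) * influence,
        position.getZ(v) + morphs[i].getZ(v) * influence
      );
    }
  }

  position.needsUpdate = true;

  // Free the GPU-side morph data; any further morphing happens in the shader
  delete geometry.morphAttributes.position;
  mesh.morphTargetInfluences = [];
}
```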

Thank you… I will play with this and see how far I get.

For character customization in an MMO I’m working on, I bake different body-proportion morphs, mixed together, into a texture; there is no limit on the number of targets.

The only limitation is that it’s obviously not suited for animation. But instead of one geometry per avatar, there is only a texture for each. Additionally, this can be made cheaper by using half floats, since each morph is only a relative displacement.

For animated morphs I use a texture as well, but a texture atlas with all morph states. Technically this is similar to how THREE avoids bone limitations, since uniforms are limited as well.
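
For reference, a rough sketch of that baking idea (names and sizes are mine, not from the post above): sum the weighted relative displacements into one displacement texture per avatar. Using THREE.HalfFloatType instead of FloatType would roughly halve the memory, since each morph is only a relative displacement:

```js
// Hypothetical sketch: mix several body-proportion morphs into a single
// per-avatar displacement texture, one RGBA texel per vertex.
function bakeMorphTexture(geometry, weights, size) {
  const morphs = geometry.morphAttributes.position;
  const count = geometry.attributes.position.count;
  const data = new Float32Array(size * size * 4); // assumes size * size >= count

  for (let v = 0; v < count; v++) {
    let x = 0, y = 0, z = 0;
    // Sum each morph's relative displacement, scaled by its weight
    for (let m = 0; m < morphs.length; m++) {
      if (weights[m] === 0) continue;
      x += morphs[m].getX(v) * weights[m];
      y += morphs[m].getY(v) * weights[m];
      z += morphs[m].getZ(v) * weights[m];
    }
    data.set([x, y, z, 0], v * 4);
  }

  const texture = new THREE.DataTexture(data, size, size, THREE.RGBAFormat, THREE.FloatType);
  texture.needsUpdate = true;
  return texture; // sampled by vertex index in a custom vertex shader
}
```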

That is an interesting approach. Do you bake the morphs in Blender? For character customization I would think that you are applying dat.GUI controls to the model. Or is that what you mean by not being suitable for animation? How are you generating the texture atlas?

Thanks…

You can just use the buffer attributes Blender generates and remove them from the geometry, using them to bake a single final geometry or transfer them to an atlas.

I made a tool as a bridge between the DCC tool and THREE to apply various automated processing steps and features before saving into a format that contains everything prepared, rather than being limited to a standard format or the corresponding exporter. The atlases are generated in this step: you just need to find the smallest power-of-two (POT) size for the number of vertices of your model and the number of poses, then write them from the attribute buffers to the texture array.
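
As a small illustration of that sizing step (a hypothetical helper, not from the tool itself):

```js
// Smallest power-of-two square texture that fits one texel per vertex per pose
function smallestPOTSize(vertexCount, poseCount) {
  let size = 1;
  while (size * size < vertexCount * poseCount) size *= 2;
  return size;
}

// e.g. 10,000 vertices × 16 poses = 160,000 texels → a 512×512 texture
console.log(smallestPOTSize(10000, 16)); // 512
```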

You will also need to make changes in the vertex shader for any texture approach; if you only bake all states into one morph, as Mugen described, you don’t have to make changes.


Thanks, you both have given me much to think about.

I am following an issue on GitHub which seems to have an attractive option using a shader:

//////////////////////////////////

@imgntn For sure. Code is a bit messy right now, needs optimizing, and references specific model objects. Note this is using R100 of Three.

So on load of the model we create a data texture with the size 4096 by 4096 and load that up with geometry data for every blendshape as such:

```js
let size = 4096 * 4096;
let data = new Float32Array(3 * size);
let vertIndexs = new Float32Array(this.model.children[2].geometry.attributes.position.count);

// Loop over each original vertex
let stride = 0;
let vertexId = 0;
for (let v = 0; v < this.model.children[2].geometry.attributes.position.array.length; v += 3) {
  // Loop over all blendshapes
  for (let i = 0; i < this.model.children[2].geometry.morphAttributes.position.length; i++) {
    let morphAttr = this.model.children[2].geometry.morphAttributes.position[i];
    // Copy x, y, and z for the given vertex
    data[stride] = morphAttr.array[v];
    data[stride + 1] = morphAttr.array[v + 1];
    data[stride + 2] = morphAttr.array[v + 2];

    stride += 3;
  }

  // Store the vertex index itself as an attribute value
  vertIndexs[vertexId] = vertexId;
  vertexId++;
}
this.model.children[2].geometry.addAttribute('vertIndex', new THREE.BufferAttribute(vertIndexs, 1));

// CREATE DATA TEXTURE AND PLACE ON SHADER MAT
let dataTexture = new THREE.DataTexture(data, 4096, 4096, THREE.RGBFormat, THREE.FloatType);
dataTexture.needsUpdate = true;

let uni = {
  texture0: { type: 't', value: dataTexture },
  influences: { value: this.model.children[2].morphTargetInfluences },
  mainTexture: { type: 't', value: texture }
};
let shaderMat = new THREE.ShaderMaterial({ uniforms: uni });

this.model.children[2].material = shaderMat;
```

Then in the vertex shader we take the uniform data, as well as the vertex index as an attribute, and modify all the vertex positions based on the blendshape influences passed in. Note that it is hardcoded to 136 blendshapes; this will obviously change based on how many blendshapes you have on the model. This is also unoptimized. From the passed-in data, you can calculate the texture coordinate that contains the vertex data for any given vertex on the model at 100 percent blendshape influence. You then take the difference between the current vertex position and that data, multiply it by the actual influence of the blendshape for that frame, and subtract the result from the vertex position. That is how we were able to get 136 blendshapes firing in Three.js!

```glsl
// Data texture
uniform sampler2D texture0;
// Blendshape influences
uniform float influences[136];
// Current vertex index
attribute float vertIndex;

varying vec2 vUv;

void main() {
  vUv = uv;
  vec4 transformed = vec4(position, 1.0);

  // Offset used for finding the x/y coordinates on the data texture
  float offset = vertIndex * 136.;
  // Loop over every blendshape
  for (int i = 0; i < 136; i++) {
    float iFloat = float(i);
    // If influence is 0, let's not waste GPU processing, move on
    if (influences[i] == 0.) {
      continue;
    }
    // Find the x and y position of the vertex data based on vertex index and blendshape index
    float x = mod(offset + iFloat, 4096.);
    float y = (offset + iFloat) / 4096.;

    // Grab the data at x and y
    vec2 texCoord = vec2(x / 4096., y / 4096.);
    vec4 data = texture2D(texture0, texCoord);

    // Modify the current vertex position with the data found in the texture and the current blendshape influence
    transformed.x -= (position.x - data.x) * influences[i];
    transformed.y -= (position.y - data.y) * influences[i];
    transformed.z -= (position.z - data.z) * influences[i];
  }

  gl_Position = projectionMatrix * modelViewMatrix * transformed;
}
```
For us, we were able to use one 4096 by 4096 texture to populate the blendshape data of a model with 100k verts and 136 blendshapes. The texture has room for 16,777,216 data points, which was more than enough for this model. If you need more, from what I understand you can pass up to 8 textures to a shader.

///////////////////////////////////////

I know little about shaders so pardon my naivety…

The reference “texture” here:

this.model.children[2].morphTargetInfluences}, mainTexture: {type:'t', value: texture}};

Is this a texture previously loaded with TextureLoader?

This is creating the shader material here, and the uniforms are passed in as “uniforms”. Is this correct?

let shaderMat = new THREE.ShaderMaterial({uniforms: uni});

How are we passing new transforms in this code, or is it just loading and playing a model in which morph targets are preset?

Thank you…

Do you even need an animated morph, or just a mixed morph state? 136 texture fetches, and in a loop at that, looks like a nightmare.

I certainly wouldn’t have that many… I’m trying to find the best practice for achieving many morph targets and influencing them…

If you need them animated and there aren’t too many, this approach is basically what I described with an atlas.

If you don’t need them animated, you can do it on the CPU and hand just one final geometry to the GPU.

OK, thank you. I’m exploring possibilities, but your approach is the easiest to understand. I am trying to understand what you are saying here: “For animated morphs I use a texture as well, but a texture atlas with all morph states.” Let’s talk about just one morph. Are you saying that I create a texture for the beginning state and another for the end state and put those into one atlas? Are you baking those as well? I have no idea how this approach would work. Is there an example you can point me at?

I want to say that I finally figured it out… this code was very instrumental in my discoveries… doable in just a few lines of code…