How Does the Talking Head Three.js Project Apply Morph Targets to GLB Models, and How Can I Replicate This in SwiftUI?

I’m working on integrating a feature similar to the Talking Head JavaScript project I found on GitHub (link to repository) into my SwiftUI project. That project uses readyplayer.me models in GLB format to create animated talking heads by applying sets of predefined morph targets for facial expressions.

For instance, for the grinning emoji, the project drives a specific set of morph targets such as browInnerUp and jawOpen, as shown in the snippet below:

```javascript
'😀': { dt: [300, 2000], vs: {
  browInnerUp: [0.6], jawOpen: [0.1],
  mouthDimpleLeft: [0.2], mouthDimpleRight: [0.2], mouthOpen: [0.3],
  mouthPressLeft: [0.3], mouthPressRight: [0.3], mouthRollLower: [0.4],
  mouthShrugUpper: [0.4], mouthSmile: [0.7],
  mouthUpperUpLeft: [0.3], mouthUpperUpRight: [0.3],
  noseSneerLeft: [0.4], noseSneerRight: [0.4]
}},
```
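
From reading the Three.js documentation, my understanding is that morph targets like these are addressed by name through each mesh's `morphTargetDictionary` and weighted through `morphTargetInfluences`. Here is my own rough reconstruction of the application step (not the project's actual code; `applyExpression` and the single-keyframe handling are my assumptions):

```javascript
// My reconstruction of how a "vs" template might be applied with Three.js;
// this is a sketch, not the TalkingHead project's actual code.
function applyExpression(root, vs) {
  root.traverse((node) => {
    // Only meshes with morph targets carry these two properties.
    if (node.isMesh && node.morphTargetDictionary) {
      for (const [name, values] of Object.entries(vs)) {
        const index = node.morphTargetDictionary[name]; // name -> influence index
        if (index !== undefined) {
          node.morphTargetInfluences[index] = values[0]; // weight in [0, 1]
        }
      }
    }
  });
}

// e.g. applyExpression(gltf.scene, { mouthSmile: [0.7], jawOpen: [0.1] });
```

Is that roughly what happens, with dt describing the timing of the transitions?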

However, when inspecting the model (example model), I couldn’t find any reference to these morph targets, such as browInnerUp or mouthShrugUpper, within the model itself. The names are still missing after converting the GLB model to USDZ, which makes me wonder whether the conversion process strips them.
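
From what I’ve read about the glTF format, morph target names are not stored on the targets themselves but conventionally under each mesh's `extras.targetNames`, so I may simply have been looking in the wrong place. This is the small Node.js script I would use to check (a sketch; 'avatar.glb' is a placeholder path):

```javascript
// Print the conventional morph target names from a GLB's embedded glTF JSON.
// Exporters typically store them under mesh.extras.targetNames.
import { readFileSync } from 'node:fs';

const buf = readFileSync('avatar.glb');   // placeholder path
const jsonLength = buf.readUInt32LE(12);  // first chunk header starts at byte 12
const gltf = JSON.parse(buf.toString('utf8', 20, 20 + jsonLength));

for (const mesh of gltf.meshes ?? []) {
  console.log(mesh.name, mesh.extras?.targetNames ?? '(no targetNames)');
}
```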

I am trying to understand how these morph targets are defined and applied to the model in the original project's Three.js visualization. My questions are:

  1. Do the GLB models from readyplayer.me inherently contain these morph targets (browInnerUp, mouthShrugUpper, etc.), and if so, how can they be accessed? (See the enumeration sketch after this list.)
  2. If these morph targets are not part of the GLB model, how does the Talking Head project apply them to create facial animations? Specifically, I’m looking to understand the mechanism or the source within the JavaScript code where these morph targets are defined or referenced.
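
For question 1, this is how I would expect to enumerate the available names at runtime with Three.js (a minimal sketch; 'avatar.glb' is a placeholder path), though I haven’t confirmed it surfaces the names above:

```javascript
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';

// Log every morph target name each mesh in the avatar exposes.
new GLTFLoader().load('avatar.glb', (gltf) => {
  gltf.scene.traverse((node) => {
    if (node.isMesh && node.morphTargetDictionary) {
      console.log(node.name, Object.keys(node.morphTargetDictionary));
    }
  });
});
```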

I would appreciate any insights or guidance on understanding and implementing similar functionality in my project using SwiftUI.