Update vertices based on useFrame

Hi there,
I get all the vertices from the first render of an animated model and draw a points cloud based on those vertices. But the point cloud doesn't stay in sync while the animation is running. How can I safely update it?

const getVertices = (mesh: THREE.Mesh) => {
    const position = mesh.geometry.getAttribute('position')
    const vertices: THREE.Vector3[] = []

    // position.count is already the number of vertices,
    // so there is no need to divide by itemSize
    for (let i = 0; i < position.count; i++) {
      vertices.push(new THREE.Vector3(position.getX(i), position.getY(i), position.getZ(i)))
    }

    // note: setting needsUpdate on an attribute we only read from has no effect;
    // DynamicDrawUsage only matters on a buffer we intend to mutate
    return vertices
  }

useFrame((state, delta) => {
    for (const mixer of mixers) {
      mixer.update(delta)
    }
    model?.traverse((node) => {
      if (node instanceof THREE.Mesh) {
        const verticesBone = getVertices(node)
        const pointsGeometry = new THREE.BufferGeometry().setFromPoints(verticesBone)
        // THREE.Points expects a PointsMaterial, not a LineBasicMaterial
        const points = new THREE.Points(
          pointsGeometry,
          new THREE.PointsMaterial({ color: 0xff0000 })
        )

        points.geometry.computeBoundingBox()
        points.geometry.computeBoundingSphere()
        scene.add(points)
      }
    })
  })

My hypothesis* is:

Skeletal animation happens on the GPU. Reading coordinates from the position attribute gives you the values that are sent to the GPU, i.e. still unmodified by the animation. To get the animated values you either have to read them back from the GPU or replicate the skinning calculation in JavaScript. Neither option is pleasant to implement.


* hypothesis – (noun) (1) a supposition or proposed explanation made on the basis of limited evidence as a starting point for further investigation; (2) a proposition made as a basis for reasoning, without any assumption of its truth.

this looks very strange to me. useFrame is your ticker, it can run 120 times per second. why would you create a buffergeometry in there? that is beyond wasteful and will blow up the GC. not to mention that you should never imperatively add/remove stuff: there is no scene.add/scene.remove in react, it never comes to that. you mount elements by returning them.

there is an article in the three.js docs ("How to update things") that teaches you how to update things:

  1. return the buffer geometry declaratively
  2. mutate the point positions directly in the buffer
  3. set needsUpdate to true
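Steps 2 and 3 might look roughly like this. This is a minimal sketch in plain javascript so it stands alone: `pointCount` and the per-frame wiggle are made-up placeholders, and `positionAttribute` is a stand-in for `geometry.attributes.position` (in a real r3f app the geometry from step 1 would be mounted declaratively by returning it from the component).

```javascript
// Sketch: mutate an existing position buffer in place, then flag it
// for re-upload. `positions` stands in for the Float32Array behind
// geometry.attributes.position.array.
const pointCount = 4 // placeholder size
const positions = new Float32Array(pointCount * 3) // x, y, z per point

// stand-in for geometry.attributes.position, which wraps the array
const positionAttribute = { array: positions, needsUpdate: false }

function updatePoints(t) {
  for (let i = 0; i < positions.length; i += 3) {
    positions[i + 1] = Math.sin(t + i) // move each point on the y axis
  }
  positionAttribute.needsUpdate = true // tells three.js to re-upload the buffer
}

updatePoints(0.5)
console.log(positionAttribute.needsUpdate) // true
```

The key point is that the buffer itself is reused every frame; only its contents change, so nothing is allocated and nothing is added to or removed from the scene.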

Thanks @drcmda ,
Because I am drawing the vertex normals for a model, but the vertices don't stay in sync with the animated model.

yes, but the amount of vertices doesn’t change just because the model moves. all you need to do is create points once, and then update the vertices every frame, not create 120 buffer geometries per second.

Thanks. I created the points. Could you please point me to a guide on updating the vertices every frame?

Hi @drcmda,
Thanks so much for that, but I can't see where the vertices are updated per frame in this demo. Sorry if I missed it.

  useFrame((state) => {
    const t = state.clock.elapsedTime
    // mutate the position array in place, one scalar at a time
    positions.forEach((p, i) => (positions[i] += Math[i % 2 ? 'sin' : 'cos'](1000 * i + t) / 100))
    // flag the attribute so three.js re-uploads the buffer
    points.current.geometry.attributes.position.needsUpdate = true
  })

according to the three.js docs that's the only way you should update buffer geometries: mutate the array, then set needsUpdate to true.

Thanks so much.