Using face-landmarks-detection with a 640x480 webcam, I get keypoints like:
// sample face
const FACE = {
  "keypoints": [
    {"x":356.2804412841797,"y":295.1960563659668,"z":-23.786449432373047,"name":"lips"},
    {"x":354.8859405517578,"y":264.69520568847656,"z":-36.718435287475586},
    {"x":355.2180862426758,"y":275.3360366821289,"z":-21.183712482452393},
    ...
  ]
}
// from https://github.com/tensorflow/tfjs-models/blob/a8f500809f5afe38feea27870c77e7ba03a6ece4/face-landmarks-detection/demos/shared/triangulation.js
const TRIANGULATION = [127, 34, 139, 11, 0, 37, 232, 231, 120, 72, 37, 39, 128, 121, 47, 232, ... ]
These keypoints are my face projected onto the webcam image: I’d like to “unproject” them back into the three.js world basis.
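To make the “unprojection” concrete: each keypoint’s x/y are pixel coordinates in the 640x480 image, so the first step is the standard pixel→NDC mapping. A minimal sketch in plain JS (no three.js); note z is left untouched, since mapping it to NDC depth is exactly the open question:

```javascript
// Pixel -> NDC for a 640x480 image (three.js NDC: x,y in [-1,1], y up).
const WIDTH = 640;
const HEIGHT = 480;

function screenToNdc({ x, y, z }) {
  return {
    x: (x / WIDTH) * 2 - 1,   // 0..640 -> -1..1
    y: 1 - (y / HEIGHT) * 2,  // 0..480 -> 1..-1 (image y is down, NDC y is up)
    z,                        // left as-is: its units are not NDC depth
  };
}

// the image center maps to the NDC origin
screenToNdc({ x: 320, y: 240, z: 0 }); // -> { x: 0, y: 0, z: 0 }
```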
My approach would be:
- create a perspective camera for the webcam (I assume I know the webcam’s fov/near/far values):

const webcam = new THREE.PerspectiveCamera(85, 640 / 480, 0.2, 3);
webcam.updateProjectionMatrix();
webcam.updateMatrixWorld(true);
- create a geometry:

const geom = new THREE.BufferGeometry();
// Vertices
const arr = [];
FACE.keypoints.forEach(({ x, y, z }) => arr.push(x, y, z));
const vertices = new Float32Array(arr);
geom.setAttribute("position", new THREE.BufferAttribute(vertices, 3));
// Indices
geom.setIndex(TRIANGULATION);
- compute a screen2ndcMatrix:

const screen2ndcMatrix = new THREE.Matrix4();
screen2ndcMatrix.set(
  // NEED HELP
);
- apply screen2ndcMatrix, then webcam.projectionMatrixInverse, then webcam.matrixWorld to my geometry:

geom
  .applyMatrix4(screen2ndcMatrix)               // screen to ndc
  .applyMatrix4(webcam.projectionMatrixInverse) // ndc to camera
  .applyMatrix4(webcam.matrixWorld);            // camera to world
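The NDC→camera half of that chain can be sanity-checked without three.js: build the same perspective matrix a THREE.PerspectiveCamera(85, 640/480, 0.2, 3) would produce, project a camera-space point (with the perspective divide), and invert the projection analytically. A sketch, assuming three.js’s WebGL conventions (camera looks down -z, NDC z in [-1, 1]):

```javascript
// Round-trip check: camera space -> NDC -> camera space, in plain JS.
const fov = (85 * Math.PI) / 180;
const aspect = 640 / 480;
const near = 0.2;
const far = 3;
const f = 1 / Math.tan(fov / 2);

// Entries of the standard WebGL perspective matrix (same convention as
// THREE.PerspectiveCamera.projectionMatrix, NDC z in [-1, 1]).
const P22 = -(far + near) / (far - near);
const P23 = (-2 * far * near) / (far - near);

function project([x, y, z]) {
  // clip = P * [x, y, z, 1], then perspective divide by w_clip = -z
  const w = -z;
  return [((f / aspect) * x) / w, (f * y) / w, (P22 * z + P23) / w];
}

function unproject([nx, ny, nz]) {
  // Invert analytically: nz * (-z) = P22 * z + P23 gives z first,
  // then x and y follow once w_clip = -z is known.
  const z = -P23 / (nz + P22);
  const w = -z;
  return [(nx * w * aspect) / f, (ny * w) / f, z];
}

const p = [0.1, -0.2, -1.0];             // a point in camera space
const roundTrip = unproject(project(p)); // ~ [0.1, -0.2, -1.0]
```

If the round trip is the identity, the remaining question is only how to get the keypoints into valid NDC in the first place.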
I need help with step 3… I have tried:
// ndc -> screen
screen2ndcMatrix.set(
  640 / 2, 0, 0, 640 / 2,
  0, -480 / 2, 0, 480 / 2,
  0, 0, 0.5, 0.5,
  0, 0, 0, 1
);
// screen -> ndc
screen2ndcMatrix.invert();
but without success.
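For reference, the inverse can also be checked by hand: the direct screen→NDC matrix for a 640x480 viewport (note the y row must use the height, 480 — not 320), applied to the image center. A plain-JS sketch; the z row assumes depth in [0, 1], which the landmark z is not, and that assumption may be part of the problem:

```javascript
// Direct screen -> NDC matrix for a 640x480 viewport (row-major):
// the hand-computed inverse of the ndc -> screen matrix above.
const screen2ndc = [
  [2 / 640, 0,        0, -1],
  [0,       -2 / 480, 0,  1],
  [0,       0,        2, -1],
  [0,       0,        0,  1],
];

// Apply a row-major 4x4 to a point (w stays 1, so no divide needed here).
function apply(m, [x, y, z]) {
  const v = [x, y, z, 1];
  return m.map((row) => row.reduce((s, mij, j) => s + mij * v[j], 0)).slice(0, 3);
}

apply(screen2ndc, [320, 240, 0.5]); // image center -> ~[0, 0, 0]
```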