I need to make a virtual clothing fitting application for practice. I've already built the 3D model selection with three.js and the body tracking with PoseNet. But I don't know how to implement it so that the clothes aren't just rendered in front of the body, but actually look like they are being worn. If someone can help, please write, I really need help, I've been stuck on this for a long time.
Depends - what are you fitting the clothes on - what's the source of the underlying model? Can you share a preview of how the application is displaying the clothes?
I can give you a link to the project on Vercel, or just send a video of how it looks.
Yes, sharing a video / screenshot of how the app works is pretty much a must in this case, since the solution depends on where you get the actor info and what data you're working with.
If it's just a webcam feed, then you need some way to estimate the depth (e.g. using a simple AI depth estimator).
If the feed is always against a green screen / consistent background, you could run it through a shader that marks all green pixels as "far away" and all other pixels as "near-by". Then compare that fake depth with the clothing in its fragment shader, and discard all fragments that are (1) back-facing the camera (i.e. the fragments of the inside of the shirt etc.) and (2) overlaying the "near-by" pixels.
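Very roughly, that idea could look something like the sketch below in three.js. The webcam element lookup, the chroma-key threshold, the resolution handling, and the shirt colour are all placeholder assumptions, so treat it as a sketch rather than working code:

```ts
import * as THREE from 'three';

// Sketch of the "fake depth from a green screen" idea. The <video> element,
// the chroma-key threshold and the flat shirt colour are assumptions.
const video = document.querySelector('video') as HTMLVideoElement;
const videoTexture = new THREE.VideoTexture(video);

// Should match the size of the render target / drawing buffer.
const resolution = new THREE.Vector2(window.innerWidth, window.innerHeight);

const clothingMaterial = new THREE.ShaderMaterial({
  side: THREE.DoubleSide, // keep back faces so we can discard them selectively
  uniforms: {
    videoTexture: { value: videoTexture },
    resolution: { value: resolution },
  },
  vertexShader: /* glsl */ `
    void main() {
      gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
    }
  `,
  fragmentShader: /* glsl */ `
    uniform sampler2D videoTexture;
    uniform vec2 resolution;

    void main() {
      // Webcam pixel sitting "behind" this clothing fragment, in screen space.
      vec2 screenUv = gl_FragCoord.xy / resolution;
      vec3 cam = texture2D(videoTexture, screenUv).rgb;

      // Crude chroma key: green pixels = background = "far away",
      // everything else = the person = "near-by".
      bool nearBy = !(cam.g > 0.4 && cam.g > cam.r * 1.4 && cam.g > cam.b * 1.4);

      // Discard fragments that are (1) back-facing (inside / back of the
      // shirt) and (2) overlaying a "near-by" pixel, so the body appears
      // to occlude them.
      if (!gl_FrontFacing && nearBy) discard;

      gl_FragColor = vec4(0.2, 0.3, 0.8, 1.0); // placeholder shirt colour
    }
  `,
});
```

The important bits are `side: THREE.DoubleSide` (so back faces are rendered at all) and the `discard` line that hides back-facing fragments wherever a non-green, "near-by" pixel sits behind them.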
In theory, this application will be used on a phone, but for now the algorithm is being developed on the computer and will be adapted for the phone later. Can I also write to you on a social network for help, just in case?
The way I've done this in the past is to use BlazePose to position a virtual 3D avatar, attach the clothing to that avatar, and make the avatar body invisible so the camera image shows through. It was hard on multiple levels.
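For what it's worth, the "invisible but still occluding" part of that approach can be done in three.js with a material that skips colour writes but still writes to the depth buffer. A minimal sketch, where `avatar` and `clothing` are stand-ins for the BlazePose-driven body mesh and the garment:

```ts
import * as THREE from 'three';

// Stand-ins for the real objects: the BlazePose-driven avatar mesh and the
// garment mesh attached to it.
declare const avatar: THREE.Object3D;
declare const clothing: THREE.Object3D;

// The avatar is never drawn (colorWrite: false) but still writes depth,
// so clothing fragments behind the body are hidden and the camera image
// shows through instead.
const occluderMaterial = new THREE.MeshBasicMaterial({
  colorWrite: false,
  depthWrite: true,
});

avatar.traverse((node) => {
  if ((node as THREE.Mesh).isMesh) {
    (node as THREE.Mesh).material = occluderMaterial;
  }
});

// Render the occluder before the clothing so its depth values are already
// in the buffer when the clothes are drawn.
avatar.renderOrder = 0;
clothing.renderOrder = 1;
```

Because the avatar writes depth but no colour, any clothing fragment behind the body fails the depth test, and whatever is behind it (the webcam feed) stays visible.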
Yes, it's challenging. The issue isn't depth but transformation (the matrix: position, scale, and orientation). Relying on another AI to compute the transformation would slow things down. About five or six years ago, I worked on FaceMesh. If I remember right, I solved it by using an "idle" model as the source of truth, then compared it with the new points to calculate the transformation and applied it to the target model.
The main idea is to set a reference plane or triangle using points that are most likely to be fixed. For the face, I used a triangle formed by the eyes and nose. The transformation was calculated by comparing the idle model with the new transformed points, and it was definitely a tough problem.
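A rough sketch of that reference-triangle idea in three.js terms; the landmark choice, the uniform-scale assumption, and the helper names are made up for illustration:

```ts
import * as THREE from 'three';

// Build a local frame (centroid + orthonormal basis + scale) from three
// stable landmarks. Compute it once for the idle model and once per frame
// for the tracked points, then derive the transform mapping idle -> tracked.
function frameFromTriangle(a: THREE.Vector3, b: THREE.Vector3, c: THREE.Vector3): THREE.Matrix4 {
  const origin = a.clone().add(b).add(c).multiplyScalar(1 / 3); // triangle centroid
  const x = b.clone().sub(a).normalize();                       // first edge = X axis
  const n = new THREE.Vector3()
    .crossVectors(b.clone().sub(a), c.clone().sub(a))
    .normalize();                                               // triangle normal = Z axis
  const y = new THREE.Vector3().crossVectors(n, x);             // completes the basis
  const scale = b.clone().sub(a).length();                      // edge length as a uniform scale

  return new THREE.Matrix4()
    .makeBasis(x.multiplyScalar(scale), y.multiplyScalar(scale), n.multiplyScalar(scale))
    .setPosition(origin);
}

// idleFrame: computed once from the idle ("source of truth") model.
// trackedFrame: computed every frame from the newly detected landmarks.
function applyTrackedTransform(
  garment: THREE.Object3D,
  idleFrame: THREE.Matrix4,
  trackedFrame: THREE.Matrix4,
): void {
  // The transform that carries the idle frame onto the tracked frame.
  const delta = trackedFrame.clone().multiply(idleFrame.clone().invert());
  delta.decompose(garment.position, garment.quaternion, garment.scale);
}
```

Using a single edge length as a uniform scale is the simplest option; a non-uniform fit would need to compare all three edges.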
Thank you for the tip on where to go next.
ml5.js made some updates that make skeletal pose estimation possible with a device's camera. You may be able to use it as a layer that calculates the pose, and then apply that to your three.js object transformations. There's a good article on the process here: How to Implement Pose Estimation with ml5.js? - GeeksforGeeks
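If it helps, here is a very rough sketch of feeding pose keypoints into three.js transforms from TypeScript. The keypoint names, video size, confidence threshold, and the shirt's native width are all assumptions; check the ml5.js docs for the exact shape its detection callback returns:

```ts
import * as THREE from 'three';

// Assumed video resolution; should match the actual camera stream.
const VIDEO_W = 640;
const VIDEO_H = 480;

// Map a keypoint in video pixel coordinates onto a three.js plane that
// covers the camera feed (origin at the centre, Y pointing up).
function keypointToWorld(kp: { x: number; y: number }): THREE.Vector3 {
  return new THREE.Vector3(kp.x - VIDEO_W / 2, VIDEO_H / 2 - kp.y, 0);
}

// Position, scale and roll a shirt object from the two shoulder keypoints.
// Keypoint names and the confidence threshold are assumptions.
function attachShirtToShoulders(
  shirt: THREE.Object3D,
  keypoints: Array<{ name: string; x: number; y: number; confidence: number }>,
): void {
  const left = keypoints.find((k) => k.name === 'left_shoulder');
  const right = keypoints.find((k) => k.name === 'right_shoulder');
  if (!left || !right || left.confidence < 0.5 || right.confidence < 0.5) return;

  const l = keypointToWorld(left);
  const r = keypointToWorld(right);

  shirt.position.copy(l.clone().add(r).multiplyScalar(0.5)); // midpoint of the shoulders
  shirt.scale.setScalar(l.distanceTo(r) / 100);              // 100 = shirt's native shoulder width (assumed)
  shirt.rotation.z = Math.atan2(r.y - l.y, r.x - l.x);        // roll to match the shoulder line
}
```

The pose-detection callback would then just pass its keypoints array to `attachShirtToShoulders` every frame.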
In my project stack I need to use TypeScript, and I'm not sure about ml5.js, because as far as I remember it's JS only.