I am making a texture-painting web app. How would I go about projecting an image from screen space into UV space? I have code set up that finds the intersected triangles and their respective UVs from the mouse position and brush size, then fills those faces with a single color. It works well for filling a single color, but not for drawing images.
Here is what I was thinking: first render the projected model to a texture whose pixels store the model's UV coordinates. Then do a second pass that takes this generated UV buffer, the model's original texture, and the screen-space image, and copies each screen pixel onto the UV map at the UV coordinates recorded for that pixel. But I suspect this will have problems when the surface is curved, since several texels can map to the same screen pixel and would be left unfilled.
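To make the second pass concrete, here is a rough CPU sketch of the per-pixel copy I have in mind (a minimal numpy illustration, not my actual WebGL code; the array names and the nearest-texel rounding are just assumptions for the sketch):

```python
import numpy as np

def splat_screen_to_uv(uv_buffer, mask, screen_img, texture):
    """Scatter each covered screen pixel into the UV texture.

    uv_buffer:  (H, W, 2) floats, per-pixel UVs from the first pass
    mask:       (H, W) bools, which screen pixels hit the model
    screen_img: (H, W, 3) the image being painted, in screen space
    texture:    (Th, Tw, 3) the model's texture, modified in place
    """
    th, tw = texture.shape[:2]
    ys, xs = np.nonzero(mask)
    u = uv_buffer[ys, xs, 0]
    v = uv_buffer[ys, xs, 1]
    # Nearest-texel write. On curved or oblique surfaces, many texels
    # map to one screen pixel, so texels between writes are left as-is
    # (this is the gap problem I am worried about).
    tx = np.clip(np.round(u * (tw - 1)).astype(int), 0, tw - 1)
    ty = np.clip(np.round(v * (th - 1)).astype(int), 0, th - 1)
    texture[ty, tx] = screen_img[ys, xs]
    return texture
```

In the real app this would run as a fragment shader over the UV buffer, but the scatter (and its holes) is the same idea.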
Is there a better way to do this?