Smart people,
What I want to do is create an “augmented reality” overlay for the video feed of a static webcam, displaying virtual representations of “real life” objects based on their geolocation. Those geolocations and the position of the camera are known very accurately.
I have managed to transform the geolocations into three.js space and to get the y rotation of the camera right. What I’m struggling with, however, is to get a good optical match between the real objects and their virtual representations.
I have created a codesandbox to illustrate my challenge.
I expect this can be achieved by applying the proper configuration to the camera (and the scene?), but I haven’t been able to get it right. Any thoughts, any suggestions?
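For context, here is roughly what I have tried so far: deriving the vertical field of view of the virtual camera from the physical camera’s optics, since a PerspectiveCamera’s `fov` must match the real lens for the overlay to line up. This is only a sketch; the focal length and sensor height below are hypothetical placeholder values, not my actual webcam’s specs.

```javascript
// Vertical FOV (degrees) from the pinhole camera model:
// fov = 2 * atan(sensorHeight / (2 * focalLength))
// Both arguments must be in the same unit (e.g. millimeters).
function verticalFovDegrees(sensorHeightMm, focalLengthMm) {
  return (2 * Math.atan(sensorHeightMm / (2 * focalLengthMm)) * 180) / Math.PI;
}

// Hypothetical example: 24 mm sensor height, 35 mm focal length.
const fov = verticalFovDegrees(24, 35);
console.log(fov.toFixed(2)); // ~37.85 degrees

// This value would then be passed to the three.js camera, e.g.:
// const camera = new THREE.PerspectiveCamera(fov, width / height, 0.1, 10000);
```

Even with the FOV set this way, I suspect lens distortion or an off-center principal point in the real webcam could still cause a mismatch, but I haven’t gotten that far.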