Hi!
I am developing a demo project that will use device orientation and geolocation together. Users will see virtual representations of real things as they move and rotate with their device.
- It will place certain real-world things, by their geolocation, into a virtual world, which is a three.js scene. These "certain things" are not everything on Earth, just some predefined things at predefined geolocations.
- And the user will be placed and oriented in the same manner. The virtual camera, a perspective camera, will be the user's virtual eyes, moved and oriented according to the device's geolocation and orientation (see the orientation sketch below).
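For the orientation half, here is roughly what I have in mind, adapted from the quaternion math used by three.js's (now removed) `DeviceOrientationControls`. `camera` is my `THREE.PerspectiveCamera`, and the null fallbacks are my own assumptions; the browser reports `alpha`/`beta`/`gamma` in degrees:

```js
import * as THREE from 'three';

// Reusable temporaries for the conversion.
const zee = new THREE.Vector3(0, 0, 1);
const euler = new THREE.Euler();
const q0 = new THREE.Quaternion();
const q1 = new THREE.Quaternion(-Math.sqrt(0.5), 0, 0, Math.sqrt(0.5)); // -PI/2 around X

// Convert DeviceOrientationEvent angles (in radians) plus the screen
// orientation angle into a camera quaternion.
function setCameraQuaternion(quaternion, alpha, beta, gamma, screenOrient) {
  euler.set(beta, alpha, -gamma, 'YXZ');            // device gives intrinsic Z-X'-Y'' angles
  quaternion.setFromEuler(euler);
  quaternion.multiply(q1);                          // camera looks out of the back of the device
  quaternion.multiply(q0.setFromAxisAngle(zee, -screenOrient)); // compensate screen rotation
}

window.addEventListener('deviceorientation', (e) => {
  const deg = THREE.MathUtils.degToRad;
  setCameraQuaternion(
    camera.quaternion,
    deg(e.alpha ?? 0), deg(e.beta ?? 0), deg(e.gamma ?? 0),
    deg(window.screen.orientation?.angle ?? 0)
  );
});
```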
My dilemma is how to place things in the virtual world.
What are my options?
The options I can think of are as follows:
1- Should I project the spherical coordinates from the Geolocation API* onto the xz-plane properly, i.e., by considering the arc-length distance between points?
2- Should I project the spherical coordinates from the Geolocation API* onto the xz-plane improperly, by NOT considering the arc-length distance between points (i.e., using degrees directly as world units)?
3- Should I directly use the spherical coordinates from the Geolocation API* on a sphere? (I would need Vector3.setFromSphericalCoords(radius, phi, theta).) I sketch all three options in code after this list.
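To make the options concrete, here is a rough sketch of all three placements. The function names, the ORIGIN reference point, and the axis conventions are placeholders of mine; note that three.js measures phi down from the +Y axis, so phi = 90° − latitude:

```js
import * as THREE from 'three';

const EARTH_RADIUS = 6371000; // mean Earth radius in meters
const deg = THREE.MathUtils.degToRad;

// Option 1: "proper" local projection onto the xz-plane around a chosen
// origin, scaling degree differences to meters of arc length. Longitude
// arcs shrink with cos(latitude), so they are scaled accordingly.
const ORIGIN = { lat: 52.52, lon: 13.405 }; // assumed reference point
function projectProper(lat, lon) {
  const x = EARTH_RADIUS * deg(lon - ORIGIN.lon) * Math.cos(deg(ORIGIN.lat));
  const z = -EARTH_RADIUS * deg(lat - ORIGIN.lat); // north mapped to -z (my convention)
  return new THREE.Vector3(x, 0, z);
}

// Option 2: "improper" projection, degrees used directly as world units.
// Distances are distorted: 1° of longitude is not 1° of latitude in meters.
function projectNaive(lat, lon) {
  return new THREE.Vector3(lon, 0, -lat);
}

// Option 3: place points on an actual sphere.
function toSphere(lat, lon, altitude = 0) {
  return new THREE.Vector3().setFromSphericalCoords(
    EARTH_RADIUS + altitude, // radius
    deg(90 - lat),           // phi: polar angle measured from +Y
    deg(lon)                 // theta: azimuth around Y
  );
}
```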
The last option seems the most realistic and geographically correct. It would also let me present a globe overview in the future, since everything would already be on a sphere: I could zoom out to a view from space and show the entire globe, as Google Earth does.
*The Geolocation Web API gives spherical coordinates: latitude and longitude (which map onto phi and theta), and altitude (so that radius = Earth radius + altitude).
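For completeness, here is how I would feed live positions into option 3 (`toSphere` is the helper sketched above; `enableHighAccuracy` and the null-altitude fallback are my assumptions):

```js
// Track the device and move the camera to its geodetic position.
const watchId = navigator.geolocation.watchPosition(
  (pos) => {
    const { latitude, longitude, altitude } = pos.coords;
    // altitude may be null on devices without an altimeter or GPS fix
    camera.position.copy(toSphere(latitude, longitude, altitude ?? 0));
  },
  (err) => console.error('geolocation error:', err),
  { enableHighAccuracy: true }
);

// Later, to stop tracking: navigator.geolocation.clearWatch(watchId);
```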