Convert external coords to three.js coords

Hello, this is my first question, thanks in advance.
In my experiment I take some photos with a spherical camera (equirectangular images), then I take the positions and estimate yaw, pitch, and roll.
My file looks like this

Now I set up an equirectangular image (as in the three.js pano example), and I would like to place a point on the image that links to the next image, based on their coordinates.

I extract the x, y, z position by subtracting the relative position on each pano load, but I think this coordinate system does not match three.js's.

Any idea?

Is it possible to provide a live example that illustrates what you are trying to achieve?

BTW: Where do these data come from?

The data can come from GPS or photogrammetry software.
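If the positions come from GPS, note that the numbers in your file look like (longitude, latitude, altitude): subtracting them directly mixes degrees with metres, so the resulting vector has no consistent scale. A minimal sketch of converting such a point into a local east/north/up offset in metres, using the equirectangular approximation (the function name and the WGS84 radius constant are my own assumptions, not from your code):

```javascript
// Convert (lon, lat, alt) to a local east/north/up offset in metres
// relative to a reference point, using the equirectangular approximation.
const EARTH_RADIUS = 6378137; // metres, WGS84 equatorial radius (assumed)

function gpsToLocalMetres(point, reference) {
  const toRad = Math.PI / 180;
  const dLon = (point.lon - reference.lon) * toRad;
  const dLat = (point.lat - reference.lat) * toRad;
  return {
    east: dLon * Math.cos(reference.lat * toRad) * EARTH_RADIUS,
    north: dLat * EARTH_RADIUS,
    up: point.alt - reference.alt
  };
}

// The two pano positions from the post, read as (lon, lat, alt):
const first = { lon: 2.026018, lat: 41.529049, alt: 263.135916 };
const second = { lon: 2.148812, lat: 41.34341, alt: 262.941184 };
const offset = gpsToLocalMetres(second, first);
```

For points a few hundred metres apart this approximation is fine; for larger distances a proper geodesic library would be safer. Note that if these two positions really are degrees, the panos are tens of kilometres apart, which is worth double-checking.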

```javascript
// 1. Load the first image as a pano
var geometry = new THREE.SphereBufferGeometry( 100, 60, 40 );
// invert the geometry on the x-axis so that all of the faces point inward
geometry.scale( -1, 1, 1 );
var material = new THREE.MeshBasicMaterial( {
    map: new THREE.TextureLoader().load( './R0010354.jpg' )
} );
mesh = new THREE.Mesh( geometry, material );
scene.add( mesh );

var firstimagepos = new THREE.Vector3( 2.026018, 41.529049, 263.135916 );
var secondimagepos = new THREE.Vector3( 2.148812, 41.34341, 262.941184 );

// 2. Get the relative position of the second image
var relative_pos_second = secondimagepos.clone().sub( firstimagepos );

// 3. Try to add a simple box at the relative position (then we can use it as a hotspot)
var c_r = 0.5;
var c_geometry = new THREE.BoxBufferGeometry( c_r, c_r, c_r );
var c_material = new THREE.MeshLambertMaterial( { color: new THREE.Color( "rgb(40, 0, 0)" ) } );
var c_mesh = new THREE.Mesh( c_geometry, c_material );
c_mesh.position.copy( relative_pos_second ); // place the box at the relative position
c_mesh.receiveShadow = true;
c_mesh.castShadow = true;
scene.add( c_mesh );
```
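One likely reason the box is invisible: relative_pos_second is built from raw file units (degrees for x/y, metres for z), so its direction and length mean nothing in the scene; the box ends up essentially at the origin, on top of the camera. Assuming the offset has first been converted to metres in an east/north/up frame (the metre values below are hypothetical), a sketch of mapping it into three.js axes and projecting it onto the pano sphere so it stays visible:

```javascript
// three.js default axes: +x right, +y up, -z into the screen. A natural
// mapping from an east/north/up offset (metres) is therefore
// east -> +x, up -> +y, north -> -z (an assumption; flip signs if your
// scene is oriented differently).
function enuToThreeCoords(east, north, up) {
  return { x: east, y: up, z: -north };
}

// The pano sphere has radius 100, so project the hotspot direction onto a
// slightly smaller sphere; viewed from inside, only the direction matters.
function projectToSphere(p, radius) {
  const len = Math.hypot(p.x, p.y, p.z);
  return { x: p.x * radius / len, y: p.y * radius / len, z: p.z * radius / len };
}

const world = enuToThreeCoords(10233, -20667, -0.19); // hypothetical metre offsets
const onSphere = projectToSphere(world, 95);
// then: c_mesh.position.set( onSphere.x, onSphere.y, onSphere.z );
```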

The box geometry does not appear on the pano.

@nopaixx please take the time to format code blocks in your comments correctly. Use the “preformatted text” button.

done! sorry!


Since you are using MeshLambertMaterial, make sure you have actually added lights to the scene. Besides, I don’t understand step two of your previous post. Why don’t you use the position data (secondimagepos) directly for c_mesh? Why are you performing a vector subtraction?
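On the lights point: MeshLambertMaterial is a lit material, so without any light in the scene the box renders black against the pano and can look as if it were missing. A quick sketch of either fix, assuming the scene and hotspot setup from the code above:

```javascript
// Option A: keep MeshLambertMaterial and add a light so the box is shaded.
scene.add( new THREE.AmbientLight( 0xffffff, 1.0 ) );

// Option B: use an unlit material for the hotspot; it needs no lights at all.
var c_material = new THREE.MeshBasicMaterial( { color: 0x880000 } );
```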