Hello, this is my first question; thanks in advance.
In my experiment I take some photos with a spherical camera (equirectangular images), then I record the positions and estimate yaw, pitch, and roll.
My file looks like this:
IMAGE|X_est|Y_est|Z_est|Yaw_est|Pitch_est|Roll_est|
1|2.026018|41.529049|263.135916|238.950694|4.652574|39.268939|
2|2.148812|41.34341|262.941184|234.779664|-2.004237|2.05646|
3|2.065184|41.469367|262.847203|238.213313|-1.622865|-82.31293|
4|2.026123|41.529455|262.96785|239.248018|-0.083333|-139.416828|
5|1.921099|41.687054|263.119355|233.84976|-0.87989|39.84895|
6|1.854345|41.788576|263.198491|234.537904|0.025119|47.317144|
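Not part of the question itself, but as a sketch: assuming the file is plain text with one pipe-delimited record per line (as shown above), a minimal parser could look like this. `parsePoseLine` is a hypothetical helper name, not something from the original post.

```javascript
// Hypothetical parser for one pipe-delimited pose record, e.g.
// "1|2.026018|41.529049|263.135916|238.950694|4.652574|39.268939|"
function parsePoseLine(line) {
  // Split on "|" and drop the empty field produced by the trailing delimiter.
  const f = line.split("|").filter(s => s.length > 0);
  return {
    image: f[0],
    x: parseFloat(f[1]),
    y: parseFloat(f[2]),
    z: parseFloat(f[3]),
    yaw: parseFloat(f[4]),
    pitch: parseFloat(f[5]),
    roll: parseFloat(f[6])
  };
}

const pose = parsePoseLine("1|2.026018|41.529049|263.135916|238.950694|4.652574|39.268939|");
console.log(pose.x); // → 2.026018
```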
Now I set up the equirectangular image (as in the three.js panorama example), and I would like to place a point on the image that links to the next image, using the coordinate systems.
I extract the x, y, z position by subtracting the relative positions on each pano load, but I think this coordinate system does not match the one three.js uses.
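One likely source of the mismatch: three.js uses a right-handed, Y-up frame, while survey/photogrammetry poses are often recorded as X-east, Y-north, Z-up. Assuming that convention for the pose file (an assumption; check your capture software), the remap would keep X, move the file's Z (up) into three.js Y, and map the file's Y (north) to -Z. A plain-object sketch, without the three.js dependency:

```javascript
// ASSUMPTION: the pose file is X-east, Y-north, Z-up.
// three.js is Y-up and right-handed, so:
//   three.x = survey x (east), three.y = survey z (up), three.z = -survey y (north)
function surveyToThree(p) {
  return { x: p.x, y: p.z, z: -p.y };
}

// First two positions from the pose file above.
const first  = { x: 2.026018, y: 41.529049, z: 263.135916 };
const second = { x: 2.148812, y: 41.34341,  z: 262.941184 };

// Relative offset in the survey frame, then remapped into the three.js frame.
const rel = {
  x: second.x - first.x,
  y: second.y - first.y,
  z: second.z - first.z
};
const relThree = surveyToThree(rel);
console.log(relThree);
```

The resulting `relThree` is what you would copy into `c_mesh.position` instead of the raw subtraction result.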
var geometry = new THREE.SphereBufferGeometry( 100, 60, 40 );
// invert the geometry on the x-axis so that all of the faces point inward
geometry.scale( -1, 1, 1 );
var material = new THREE.MeshBasicMaterial( {
map: new THREE.TextureLoader().load( './R0010354.jpg' )
} );
mesh = new THREE.Mesh( geometry, material );
scene.add( mesh );
// 1.-Positions of the first two panoramas (from the pose file)
var firstimagepos = new THREE.Vector3( 2.026018, 41.529049, 263.135916 );
var secondimagepos = new THREE.Vector3( 2.148812, 41.34341, 262.941184 );
// 2.-Get the position of the second image relative to the first
var relative_pos_second = secondimagepos.clone().sub( firstimagepos );
// 3.-Add a simple box at the relative position (it can later serve as a hotspot)
var c_r = 0.5;
var c_geometry = new THREE.BoxBufferGeometry( c_r, c_r, c_r );
var c_material = new THREE.MeshLambertMaterial( { color: new THREE.Color("rgb(40, 0, 0)")} );
var c_mesh = new THREE.Mesh( c_geometry, c_material );
c_mesh.position.copy(relative_pos_second);
c_mesh.receiveShadow = true;
c_mesh.castShadow = true;
scene.add( c_mesh );
Since you are using MeshLambertMaterial, make sure you have actually added lights to the scene; otherwise the box will render black. Besides, I don't understand step 2 of your post: why don't you use the position data (secondimagepos) directly for c_mesh? Why are you performing a vector subtraction?