I am trying to create a 360° scene using a panorama image; along with that, I have some other data to create the floor and walls in that scene.
I am trying to figure out how to calculate the geometry of the panorama image so that everything fits correctly with the wall & floor meshes. We can't update the floor and wall mesh data.
The room's panorama image can be accessed with the key roomImg from the object.
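For context, a minimal sketch of that kind of setup in three.js, assuming `roomImg` holds an equirectangular image (everything besides that key is illustrative):

```js
import * as THREE from 'three';

const data = { roomImg: 'room.jpg' }; // placeholder; roomImg is the key mentioned above

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(75, innerWidth / innerHeight, 0.1, 1000);
const renderer = new THREE.WebGLRenderer();
renderer.setSize(innerWidth, innerHeight);
document.body.appendChild(renderer.domElement);

// Panorama on the inside of a sphere; the geometry is inverted on x so the
// faces point inward (the usual three.js equirectangular pattern).
const geometry = new THREE.SphereGeometry(50, 60, 40);
geometry.scale(-1, 1, 1);
const texture = new THREE.TextureLoader().load(data.roomImg);
const panoSphere = new THREE.Mesh(geometry, new THREE.MeshBasicMaterial({ map: texture }));
scene.add(panoSphere);

renderer.setAnimationLoop(() => renderer.render(scene, camera));
```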
What I have done so far is create meshes based on the data, each with a unique color; now I'm trying to figure out how to fit them correctly with the panorama image.
I have also tried Blender to work out the geometry side, but couldn't find a way to load everything correctly there.
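Roughly, the surface creation looks like this (a sketch continuing from the setup above; the field names are assumptions about the data shape):

```js
// One plane per surface record, each with a unique color so they are easy
// to tell apart. `surfaces`, `width`, `height`, `position` and `rotation`
// are assumed names for the floor/wall data described above.
function makeSurface(rec, color) {
  const mesh = new THREE.Mesh(
    new THREE.PlaneGeometry(rec.width, rec.height),
    new THREE.MeshBasicMaterial({ color, side: THREE.DoubleSide })
  );
  mesh.position.set(rec.position.x, rec.position.y, rec.position.z);
  mesh.rotation.set(rec.rotation.x, rec.rotation.y, rec.rotation.z);
  return mesh;
}

surfaces.forEach((rec, i) => {
  const color = new THREE.Color().setHSL(i / surfaces.length, 1.0, 0.5);
  scene.add(makeSurface(rec, color));
});
```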
I don't really get what you are trying to solve here. By hand or automatically? In three.js, or outside and then loaded as an external model? Single-pano calibration, or multiple panos in a shared coordinate system?
If you just need to obtain the geometry, you are safe as far as the images go, since they seem to depict a very regular scene. First you would have to know the camera position and orientation; from there it would be fairly straightforward to project the geometry in Blender by hand.
For a quick demo, I converted the panostrip image to an equirectangular projection and fiddled around to get something like this:
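As an aside, if the panostrip is just six square cube faces in a row, you could skip the equirectangular conversion in three.js and slice it into a CubeTexture directly. A sketch, where the face order (+x, −x, +y, −y, +z, −z) and the file name are guesses to adapt to the actual asset:

```js
new THREE.ImageLoader().load('roomStrip.jpg', (image) => {
  const size = image.height; // each face assumed to be size × size
  const faces = [];
  for (let i = 0; i < 6; i++) {
    const canvas = document.createElement('canvas');
    canvas.width = canvas.height = size;
    // Copy the i-th tile of the strip into its own square canvas.
    canvas.getContext('2d').drawImage(image, i * size, 0, size, size, 0, 0, size, size);
    faces.push(canvas);
  }
  const cubeTexture = new THREE.CubeTexture(faces); // order: +x, -x, +y, -y, +z, -z
  cubeTexture.needsUpdate = true;
  scene.background = cubeTexture; // no equirect conversion needed
});
```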
Things done so far, with a little help from a previous project's work and the continuous support of @manthrax:
Created surface meshes based on the available data (position and rotation values are available for each surface). We have actual data for the floor & wall positions, and we created those surfaces from it: Glitch demo
We need to place the room's panorama image so it fits correctly with the created surfaces. We're trying to figure out how to position it in the scene so that each surface mesh reads as part of the actual room; later we may change the surfaces' textures dynamically (see the sketch below).
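In three.js terms, the open question boils down to two unknowns, roughly like this (a sketch continuing from the code above; both values are placeholders to be tuned or solved for):

```js
// Two unknowns: where the panorama was captured inside the room, and how it
// is rotated about the vertical axis relative to the fixed surfaces.
const capturePos = new THREE.Vector3(1.2, 1.6, -0.8); // guessed capture position
const yawOffset = THREE.MathUtils.degToRad(35);       // guessed heading

panoSphere.position.copy(capturePos); // pano must be centered on the capture point
panoSphere.rotation.y = yawOffset;
camera.position.copy(capturePos);     // viewer has to sit at the same point

// Later, a surface can be retextured without touching its geometry:
function setSurfaceTexture(mesh, url) {
  mesh.material.map = new THREE.TextureLoader().load(url);
  mesh.material.needsUpdate = true;
}
```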
OK, now I get it: you are trying to align / pose the panorama according to known surfaces. That is an interesting challenge in itself to code.
Just remember that with only planes, I think the solution will not be precise enough. You will get much higher precision from known points in 3D space that correlate to x,y positions in image space than from known planes that you would somehow have to translate to image space (which is not trivial).
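For reference, the forward model for that kind of correspondence in an equirectangular image could look like this (a sketch; the atan2 longitude convention and yaw handling are assumptions, so flip axes if the image seam lands elsewhere):

```js
// World-space point -> (u, v) in the equirectangular image, given a guessed
// capture position and yaw. Fitting capturePos and yaw so that picked image
// pixels match known 3D points is the pose-solving step.
function worldToEquirectUV(point, capturePos, yaw) {
  const d = point.clone().sub(capturePos).normalize();
  const lon = Math.atan2(d.x, d.z) - yaw; // longitude around the vertical axis
  const lat = Math.asin(d.y);             // latitude, +up
  const u = THREE.MathUtils.euclideanModulo(lon / (2 * Math.PI) + 0.5, 1);
  const v = 0.5 - lat / Math.PI;          // 0 at the top row, 1 at the bottom
  return new THREE.Vector2(u, v);
}
```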
In any case, you can always place the known planes in Blender and try eyeballing it by hand with some sort of projection rig. In the following image, you can see that the bottom edges of both opposite walls lie on the ground surface. Nevertheless, to get a good enough camera solve, you will need to properly identify which image pixels those surfaces correspond to.