Matterport features for THREEjs?

Is there a way to replicate the dollhouse feature that Matterport has in three.js? I can display panoramic views, transition to the next one, and add “hotspots”, but how do they achieve the dollhouse view? Do they project a flattened panoramic view onto some planes?

I would also like to know how I can replicate their measuring tool in panoramic view. Does anyone know how to deal with perspective, since things further away measure smaller than they are? Is there some trick I can use?

Any information / tips gratefully received.

Thanks in advance!


What is this? Can you link to some examples?

In order to replicate this feature (actually it’s a collection of features), you need to understand the individual steps involved.
In my view, the basic building block is creating equirectangular spherical panoramas from various locations inside a scene. Note the hardware they have on offer to support you with that. Basically it involves a rigid positioning device like a tripod, on top of which you have a mount which allows a stepped rotation of a camera about the vertical axis. The stepping angle is related to the optical properties of your camera, such that the resulting individual shots have sufficient overlap. If you want a full 180°×360° spherical panorama, you may have to take several vertically overlapping “bands” of shots. There exist several commercial software packages for stitching individual shots into a panoramic one, like PTGui Pro, which also has a full-featured free trial version that watermarks any output though. [Disclaimer: I own a copy of PTGui Pro but am not in any way affiliated with them]
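
To make the stepping-angle remark concrete, here is a tiny back-of-the-envelope sketch (illustrative numbers, not tied to any particular camera or mount): given the camera’s horizontal field of view and a desired overlap between adjacent shots, you can estimate how many rotation steps one full 360° band needs.

```javascript
// Rough sketch of the stepping-angle idea: with a desired overlap
// between adjacent shots, each rotation step only advances by the
// non-overlapping part of the field of view.
function shotsPerBand(hFovDeg, overlapFraction) {
  const effectiveStepDeg = hFovDeg * (1 - overlapFraction);
  return Math.ceil(360 / effectiveStepDeg);
}

// A 60° lens with 30% overlap advances 42° per step,
// so one band needs ceil(360 / 42) = 9 shots:
console.log(shotsPerBand(60, 0.3)); // 9
```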

Flickr also have a huge section on equirectangular pictures. These are of course all rectangular bitmaps, which are warped in such a way that a projection onto the inside of a sphere gives you the impression of being immersed in that very location.
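
That warp is just the equirectangular mapping: longitude becomes the horizontal texture coordinate, latitude the vertical one. A minimal sketch of the mapping in plain JavaScript (the function name is my own; in three.js itself you would typically just put the image on the inside of a `SphereGeometry`, e.g. with `side: THREE.BackSide`):

```javascript
// Illustrative sketch: map a 3D view direction to equirectangular
// texture coordinates (u, v), both in [0, 1]. Longitude drives u,
// latitude drives v. This is the warp that makes a flat bitmap feel
// like the inside of a sphere once projected back.
function dirToEquirectUV(x, y, z) {
  const len = Math.hypot(x, y, z);
  const lon = Math.atan2(z / len, x / len); // -PI .. PI around the vertical axis
  const lat = Math.asin(y / len);           // -PI/2 (down) .. PI/2 (up)
  return {
    u: (lon + Math.PI) / (2 * Math.PI),
    v: (lat + Math.PI / 2) / Math.PI,
  };
}

// Looking along +x lands dead centre of the image:
console.log(dirToEquirectUV(1, 0, 0)); // { u: 0.5, v: 0.5 }
```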

You’ll have to repeat that process from as many different positions within a scene as you need to get your (commercial or other) point across to your customer.

For the dollhouse effect you’ll have to create a 3D (or at least 2.5D) model of your object and texture it with the respective photographs. NYT (see below) are also working on deriving a 3D model from 2D shots of the same object, taken from different angles.

Matterport have a pretty detailed video which explains how they work.

Also see this really cool description by the New York Times of how they streamlined the process of creating immersive journalism.


Here is an example:


Thanks for the links, I’ll have a look. I can do the projection onto a sphere, I have some panoramic views that work well, and I can add hotspots etc. with basic information. I don’t have such a camera to capture them myself, but I have found sites like the ones you mention that have them. It was the dollhouse effect I was interested in; it looked like they flatten the image and project it onto planes, but it seemed like a bit more than just a simple projection onto a plane.

No idea how they get the measurements from a perspective view, unless they are doing it from the 3D model, but even that looks difficult.

Thanks for the links and info.

I believe their device captures depth.

Yeah, that makes sense, they must store a LOT of information about the image. I have seen photogrammetry (or however you say it), but it seems that creates a 3D model from photos. Is that the same as what they are doing for their dollhouse view? It seems more like a flat surface than a 3D model as such.


It definitely is more than “projecting images onto some planes”. In their video you can clearly see that they also have 3D furniture inside individual rooms, which obscures (or not, depending on perspective) parts of the interior. That requires an underlying 3D model. I see two ways to achieve that:

  • Starting from a 2D floor plan, pulling up walls to the desired height, excluding windows, doors etc. along the way.
  • Using photogrammetry, the process of extracting 3D information from 2D photographs.

In a very(!) abstract sense, it is ultimately indeed lots of textured planes, as you said. At the showcased level of complexity, I’d expect on the order of thousands of textured planes. And both the planes and the textures must be individually provided. It seems Matterport have streamlined this process for efficiency.

Yes, it’s certainly interesting, but perhaps a bit too ambitious for me to tackle, especially when I don’t have a fancy camera.

How do you suppose they do the measurements with depth and perspective? Do they have some fancy trick for this too, or do you think the information is also included in the image when they turn it into 3D space?

I watched the video you linked and a few more, and it seems it is not just a simple case of setting up a fancy camera, running out of the rooms and voilà, a few minutes later, a full-on 3D scene. It looks like there are a number of tweaks to do behind the scenes which, as a “viewer”, we don’t get to see.


When you use a camera to take a snapshot, that’s pretty much the same as what we do in a 3D graphics projection: condense 3D information onto a 2D plane, which results in a 2D image file. Information gets lost in the process. (Turning the photograph around is not going to show you the back side of the scene, for instance.) It’s important to understand that there exists an infinite number of 3D realities which would result in the same projected 2D image.

If you want to reverse that process, that is, derive 3D information from a 2D image, you need to assist in resolving inevitable ambiguities. A proven way of doing this is taking several shots of the same 3D scene from different positions/angles. By cross-referencing matching details between different shots, you can in fact resolve those ambiguities. It helps if you record the details (positions, angles, frustum parameters like fov, etc.) of the contributing 2D shots. So a solid amount of planning and recording helps in the process.

When creating a 3D model using photogrammetry, photographs should be taken from different points in the scene. This difference in apparent position is called parallax, which allows the photogrammetry software to calculate depth, making it possible to render a 3D model.

Emphasis mine. Image and image subtitle taken from:

I do not believe that actual “depth” gets recorded along with any single 2D shot. Which implies that any camera will in principle be capable of taking those shots. Also, I don’t believe that there is “some fancy trick” involved, unless you’re willing to consider solid math/trigonometry a “fancy trick”.
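
The simplest worked instance of that trigonometry is rectified stereo: two shots taken a known distance apart see the same point shifted horizontally by a pixel “disparity”, and similar triangles turn that shift into depth. A hedged sketch with made-up numbers (real photogrammetry pipelines solve a much more general version of this across many shots):

```javascript
// Parallax in its simplest (rectified stereo) form: depth follows
// from similar triangles between the camera baseline and the pixel
// disparity. baselineM in metres, focalPx/disparityPx in pixels.
function depthFromDisparity(baselineM, focalPx, disparityPx) {
  return (baselineM * focalPx) / disparityPx;
}

// A point shifted 10 px between two shots taken 0.2 m apart with a
// 1000 px focal length lies 20 m away:
console.log(depthFromDisparity(0.2, 1000, 10)); // 20
```

Note how nearby points produce large disparities and faraway points tiny ones, which is exactly why several well-separated shots are needed to pin down distant geometry.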


Photogrammetry is a new concept to me. Obviously, to turn the photos into a 3D object it needs depth information as well as other information, so you could grab the required info from that. I guess the trick, maybe not so fancy to those who already know about it, is the photogrammetry bit itself; I didn’t realise it takes so many pictures and builds a scene from them.

Very interesting topic, I must look into it more. I have seen some mobile phone apps that use this concept, but I don’t think my phone supports it. Since looking into it more, I have noticed a few more sites that allow you to measure from a 3D-scanned object.

With an iPhone 12/13 and apps like Polycam you can get good results. As said, nytimes is a very good resource and has examples made with three.js and an iPhone.

Yeah, I still rock a Samsung S6, so I don’t have the hardware to do that kind of thing, and the cameras are way too expensive for me. I have seen some models on a website, which I fail to recall now, that has a load of scanned models in it; however, none of them look like the dollhouse view. But then it looks like you have a lot of tooling on the Matterport side to create the geometry. The dollhouse is almost like a textured/materialised floor plan.

A little suggestion from my own research on this: just remember that the main point in the Matterport viewer is how smooth the transition looks between locations in walking mode. This is due to the fact that there is a 3D model underneath, onto which a mix of env maps is being projected. From time to time this question pops up here; in this link you can see a post regarding how this mix gets calculated.

Considering that, your questions can be further narrowed:

  • Dollhouse view is basically this rough group of simple surfaces, textured with a spherical-projection shader, in turn fed with equirectangular images. Three.js does provide the main building blocks, but you will have to write a couple of things yourself (i.e. that spherical shader)

  • Measuring is done in a very conventional way: raycasting points onto the meshes and computing the distance between them. Just remember that there is no need to render those surfaces directly, so for instance in walking mode the user could be presented with nothing but the panoramas, although behind them there are 3D faces with positions and normal directions (… and not a single UVMap :wink: )

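The measuring point above can be sketched without any rendering at all: cast a ray from the viewer through the picked pixel, intersect the invisible geometry, and take the Euclidean distance between two such hits. In three.js this is what `Raycaster` and `Vector3.distanceTo` do for you; the bare math below (a single plane standing in for the mesh, helper names my own) shows why perspective foreshortening stops mattering once you measure in 3D space rather than in the image:

```javascript
// Intersect a ray (origin + t * dir) with a plane given by a point on
// it and its normal. Returns the hit point, or null if the ray is
// parallel to the plane or the hit is behind the origin.
function rayPlaneHit(origin, dir, planePoint, planeNormal) {
  const dot = (a, b) => a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
  const sub = (a, b) => [a[0] - b[0], a[1] - b[1], a[2] - b[2]];
  const denom = dot(dir, planeNormal);
  if (Math.abs(denom) < 1e-9) return null; // ray parallel to plane
  const t = dot(sub(planePoint, origin), planeNormal) / denom;
  if (t < 0) return null; // plane is behind the viewer
  return [origin[0] + t * dir[0], origin[1] + t * dir[1], origin[2] + t * dir[2]];
}

// Euclidean distance between two 3D points: this is the actual
// measurement, independent of how far away the wall looked on screen.
function distance(a, b) {
  return Math.hypot(a[0] - b[0], a[1] - b[1], a[2] - b[2]);
}

// Two picked rays hitting a wall at z = -5:
const wallPoint = [0, 0, -5], wallNormal = [0, 0, 1];
const p1 = rayPlaneHit([0, 0, 0], [0, 0, -1], wallPoint, wallNormal);     // [0, 0, -5]
const p2 = rayPlaneHit([0, 0, 0], [0.6, 0, -0.8], wallPoint, wallNormal); // ≈ [3.75, 0, -5]
console.log(distance(p1, p2)); // ≈ 3.75
```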


Thanks Antonio, very useful information. Do you know if there are any three.js examples online that show off something similar?
I have seen a few other websites that offer a similar feature, but they do not use three.js.

Thanks again for the useful information!

Sorry @mrticklespot for not responding on time (swapping phones here).
Answering both questions:

  • Regarding the dollhouse view, there are some interesting implementations of this on the 360-tours side. Everpano is one of them; here is a demo of the dollhouse view. Another one is krpano, with their recent addition of depth maps, which allows rendering a dollhouse view based on both 360 imagery and a textured 3D model; here is a demo of this functionality (remember to press the ‘dollhouse’ button). Although closed/proprietary, these two products are based on three.js.

  • In relation to distance measuring, our friend @hofk shared an excellent working demo right here, including adding a helper line between the two points being measured; you can see the demo and the discussion post.

Hope it helps