I understand that dimensions in Three.js are on a unitless scale, and that when modelling to a real dimension I can choose whatever scale best fits my object sizes.
For example I can make 1 unit = 1 m, 100 units = 1 m, or 1000 units = 1 m (i.e. 1 unit = 1 mm).
Is there any reason - memory, screen-pixel conversion, hardware unit conversion, or otherwise - that would make working with a particular range/scale better or worse?
A more practical situation to consider.
- Take a higher-resolution image, 2000 x 2000 px
- Apply image to a 2000 x 2000px 2D canvas context, separate from the Three.js 3D canvas
- Convert this 2D canvas into a texture that can be applied to a plane in Three.js space
- Apply this texture to plane
I can make my three.js plane 1 x 1, or the plane 2000 x 2000, and by scaling the camera position accordingly, the result appears to be equivalent.
Is there any reason to upscale or downscale in this approach to putting an image plane into a Three.js scene?
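For what it's worth, the "scaling the camera position accordingly" equivalence can be made concrete with a small helper (the function name is made up for illustration; this assumes a perspective camera and that the plane's height should fill the vertical field of view):

```javascript
// Hypothetical helper: the camera distance at which a plane of the given
// height exactly fills a perspective camera's vertical field of view.
function fitDistance(planeHeight, fovDegrees) {
  const fovRadians = (fovDegrees * Math.PI) / 180;
  return planeHeight / 2 / Math.tan(fovRadians / 2);
}

// A 1 x 1 plane viewed from fitDistance(1, 50) and a 2000 x 2000 plane
// viewed from fitDistance(2000, 50) fill the frame identically -
// the required distance just scales linearly with the plane size.
```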
Nope.
Nope.
Nope - but keep hardware floating-point precision in mind. GPUs typically compute in 32-bit floats, so precision degrades long before you get anywhere near 1.7976931348623157e+308 - as long as your coordinates stay in a sane range, you should be good though.
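To illustrate the precision point (a quick sketch; JavaScript's Math.fround rounds a number to the nearest 32-bit float, which is roughly what the GPU works with):

```javascript
// At large magnitudes, 32-bit floats can no longer represent small offsets:
// around 1e8 the spacing between adjacent float32 values is already 8 units.
const big = 1e8;
const nudged = Math.fround(big + 3); // the +3 is lost in float32
console.log(nudged === Math.fround(big)); // true - the offset vanished
```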
Yes - in some cases. Unless you have a good reason not to, it's a good idea to keep "normal sized" objects in your scene as close to 0.1-1.0 units
as possible, and scale everything else appropriately (i.e. it's ok for some far-away mountain to be 2000x2000x2000, but don't make a hero cube that's right in front of the camera this size.) While there's no hard limitation here, it's good to keep your scene around this size because:
- Default values are assuming that kind of sizing - for example shadow maps. If objects in front of your camera are 1x1x1, you can just turn on shadowMap and it will work right away. If your objects and world scale is 1000x bigger, you’ll need to adjust shadow bias and shadow resolution defaults to make it work. And it can be lengthy and troublesome.
- Some postprocessing effects like SSAO also have defaults listed for the 1x1x1 kind-of-scale. So you’ll just be able to copy-paste values from docs and it’ll work out of the box.
- Your code will be cleaner, will contain fewer `*= 2000.0`s, and you'll be able to work more easily with things like normalization (ex. normalizing a position will put things at predictable distances, normalizing a scale will let you scale things to-and-from a 1x1x1 size, etc.)
The GPU doesn’t see absolute size (besides camera near / far clipping) - only size relative to other objects in the scene matters, and these will be rendered exactly the same.
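The normalization point above can be sketched like this (the helper name is made up; in Three.js you'd feed it the size of a `Box3` computed with `setFromObject`):

```javascript
// Hypothetical helper: uniform scale factor that brings an object's
// largest bounding-box dimension down (or up) to 1 unit.
function computeUnitScale(width, height, depth) {
  const maxDim = Math.max(width, height, depth);
  return maxDim > 0 ? 1 / maxDim : 1;
}

// Applied to a Three.js object it would look something like:
// const size = new THREE.Box3().setFromObject(object).getSize(new THREE.Vector3());
// object.scale.multiplyScalar(computeUnitScale(size.x, size.y, size.z));
```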
To add to @mjurczyk’s list of considerations:
- When developing for VR/AR/XR, 1 unit is 1 meter
RectAreaLight also assumes 1 unit = 1 m.
The physically-based lights and materials in three.js (e.g. `MeshPhysicalMaterial.prototype.thickness`) assume 1 scene unit = 1 meter. But you can reduce or disable decay on lights if you don’t want that.
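As a rough illustration of why the unit assumption matters (a sketch of inverse-square falloff, the physically-correct `decay = 2` behaviour of point lights; the function name is made up and this is not the renderer's exact code):

```javascript
// Physically-based (decay = 2) light attenuation falls off with the
// square of the distance - which implicitly assumes meters.
function inverseSquareFalloff(distance) {
  return 1 / (distance * distance);
}

// At 2 "meters" a light keeps a quarter of its intensity; if your scene
// units are millimeters, the same lamp 2000 units away is effectively
// invisible unless you crank intensity or disable decay.
```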
This would be a good thread to summarize and have in the getting started section on threejs.org.
I see people using crazy units all the time. 