I’m currently working on a 2D plot using three.js, and I use an OrthographicCamera to get a mapping from plot coordinates to the three.js scene:
renderer = new THREE.WebGLRenderer();
renderer.setSize( myWidth, myHeight );
// OrthographicCamera( left, right, top, bottom, near, far )
camera = new THREE.OrthographicCamera( 0, myWidth, 0, myHeight, -1000, 1000 );
...
renderer.render( ..., camera );
I deliberately use the same width (myWidth) and height (myHeight) for the renderer dimensions and for the camera's frustum extents. That way the scene's X/Y coordinate system coincides with browser-viewport pixels (with top = 0 and bottom = myHeight, Y even grows downward like in CSS). This seems to work.
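To make the mapping I'm relying on explicit, here is a sketch in plain JavaScript of the math an orthographic projection plus the viewport transform perform for the X axis (the helper name is mine, not three.js API):

```javascript
// An orthographic projection maps x in [left, right] linearly to NDC [-1, 1];
// the viewport transform then maps NDC back to [0, width] pixels.
function plotToPixelX(x, left, right, width) {
  const ndc = 2 * (x - left) / (right - left) - 1; // orthographic projection
  return ((ndc + 1) / 2) * width;                  // viewport transform
}

// With left = 0, right = myWidth, and the renderer width also myWidth,
// the round trip is the identity: plot coordinate 200 lands on pixel 200.
const myWidth = 800;
console.log(plotToPixelX(200, 0, myWidth, myWidth)); // 200
```

The same reasoning applies to Y, so as long as frustum extents and renderer size agree, coordinates pass through unchanged.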
The problem is that the rendered content gets distorted. Using Points with a PointsMaterial results in rectangles that are wider than they are high (the width of my renderer is bigger than its height), and I honestly don't understand why. As soon as I make the renderer's width and height equal, the distortion is gone. However, I want to keep my requirement that the two coordinate systems stay identical.
I assume I'm still misunderstanding some parameter when initializing the renderer or the camera. What is wrong?