There is this application that uses three.js.
Does anyone know the workflow for creating PDFs like this? For the first one I know it's an orthographic camera, but how did they draw it onto a PDF?
And for the second image, how did they add the elevation measurements?
I would like help with the workflow: if I do use the orthographic camera, how can I translate that view into a PDF, and also add the elevation measurements? If possible, I would like to know how to do this without a library.
What renderer.render does is just create an image (rendering 60 images per second gives the illusion of movement, but it's still just images).
So what you'd do in this case is call renderer.render only once, render to a canvas (or an in-memory render target), then convert the contents of that canvas into base64 and put it in the PDF as an image.
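A minimal sketch of that flow. The `renderer`, `scene`, `orthoCamera`, and jsPDF's `doc.addImage` parts are assumptions shown only as comments (they need a browser); the aspect-ratio helper is plain math and works anywhere:

```javascript
// Fit an image of (w × h) pixels onto a page of (pageW × pageH) mm,
// preserving aspect ratio. Pure math, no DOM or WebGL needed.
function fitToPage(w, h, pageW, pageH) {
  const scale = Math.min(pageW / w, pageH / h);
  return { width: w * scale, height: h * scale };
}

// In the browser, the render-once-to-PDF flow would be roughly:
//   const renderer = new THREE.WebGLRenderer({ preserveDrawingBuffer: true });
//   renderer.setSize(2480, 3508);                    // ~A4 at 300 dpi
//   renderer.render(scene, orthoCamera);             // one frame, no loop
//   const dataURL = renderer.domElement.toDataURL('image/png'); // base64 PNG
//   const { width, height } = fitToPage(2480, 3508, 210, 297);  // A4 in mm
//   doc.addImage(dataURL, 'PNG', 0, 0, width, height);          // e.g. jsPDF
```

Note `preserveDrawingBuffer: true`: without it, the drawing buffer may already be cleared by the time you call `toDataURL`, so either set that flag or call `toDataURL` synchronously right after `render`.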
You can use Lines and Text in WebGL right next to your model, for example (keep in mind you cannot use the CSS2DRenderer or CSS3DRenderer in this case, since they don't actually render to the canvas; they just use HTML overlays, which you can't easily translate into an image).
Alternatively, you can also render the frame, put it into a separate `<canvas>` element, then draw on it using the Canvas API's `.fillText` and `.lineTo`.
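A sketch of that second option. To place a label next to a world-space point you need to map it to canvas pixels; for an orthographic camera that mapping is just a linear remap of the frustum bounds. The `orthoToScreen` helper below is hypothetical (not a three.js API) and assumes no camera rotation; the 2D drawing calls are shown as comments:

```javascript
// Map a world-space (wx, wy) point, seen through an orthographic camera
// with frustum bounds {left, right, top, bottom}, to canvas pixels.
// Assumes the camera looks straight at the XY plane with no rotation.
function orthoToScreen(wx, wy, cam, width, height) {
  const x = ((wx - cam.left) / (cam.right - cam.left)) * width;
  const y = ((cam.top - wy) / (cam.top - cam.bottom)) * height; // y flipped
  return { x, y };
}

// Browser usage (illustrative):
//   const ctx = overlayCanvas.getContext('2d');
//   ctx.drawImage(renderer.domElement, 0, 0);        // copy the 3D frame
//   const p = orthoToScreen(0, 12.5, cam, ctx.canvas.width, ctx.canvas.height);
//   ctx.beginPath();
//   ctx.moveTo(p.x - 20, p.y);
//   ctx.lineTo(p.x, p.y);
//   ctx.stroke();                                    // tick / leader line
//   ctx.fillText('12.50 m', p.x + 4, p.y);           // elevation label
```

(For a general camera you'd instead use three.js's `Vector3.project(camera)`, which returns normalized device coordinates you can scale to pixels the same way.)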
Both will give pretty much the same result; the first option is a bit easier, since you're working in the same 3D scene as your model.
Of course, each mesh also has its geometry information, even without embedded user data. I guess @Dieter_Banaag is seeking some sort of automatic mesh geometry analysis, which I assume is rather difficult if it's supposed to go beyond trivial bounding-box information.
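For what it's worth, the "trivial bounding box" part is easy, and for elevation annotations it may already be enough (the top and bottom Y values of the box give you the elevations to label). A pure sketch of what `THREE.Box3.setFromBufferAttribute` does conceptually, over a flat `[x, y, z, ...]` position array:

```javascript
// Compute the axis-aligned bounding box of a flat [x,y,z, x,y,z, ...]
// vertex array, as found in a BufferGeometry position attribute.
function boundingBox(positions) {
  const min = [Infinity, Infinity, Infinity];
  const max = [-Infinity, -Infinity, -Infinity];
  for (let i = 0; i < positions.length; i += 3) {
    for (let axis = 0; axis < 3; axis++) {
      min[axis] = Math.min(min[axis], positions[i + axis]);
      max[axis] = Math.max(max[axis], positions[i + axis]);
    }
  }
  return { min, max };
}
```

In three.js itself you'd just use `new THREE.Box3().setFromObject(mesh)`; anything beyond that (detecting floors, edges, or features to measure) is where the real analysis work starts.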