I’m creating a product configurator, which uses three.js combined with CSG to visualise the products. Those products can be pretty simple shapes, think rectangles with a corner cut off,…
I’m saving all the data that I need in the front-end to my database, via an API (PHP/Laravel). So the next time a user visits the application, I can draw the same product again on my canvas.
The problem, however, is that I have to create a PDF with the product specifications and 2D drawings of said products (which I visualise in 3D with three.js).
Solutions I have thought of:
Make a screenshot of the canvas with JavaScript
I don’t like this, as the screenshots have to be made while the canvas is live, and I have to create images of the products individually but also when they are grouped on the canvas. So I would have to draw them one by one, take a screenshot, then draw them together and take a screenshot again, and finally send all those screenshots to my backend,…
Draw the 2D images in the backend
I have all the information I need, so I could just draw them in my backend with PHP’s drawing functions. But that is a lot of hassle for drawings I actually already have.
Has anyone ever struggled with this problem, or does anyone know another path I could take?
Have you considered using a Node.js setup that allows you to run a headless Chromium, e.g. puppeteer?
In this way, you can render WebGL scenes with three.js in the backend.
BTW: three.js uses this approach for its E2E regression tests (meaning a script takes screenshots of the official examples and compares them with predefined reference images).
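A minimal sketch of that idea, assuming puppeteer is installed and the configurator page is reachable at some URL (the URL and the `canvas` selector are placeholders you would adapt to your app):

```javascript
// Sketch only: the URL and selector are assumptions, not part of the
// original setup. Requires `npm install puppeteer`.
async function screenshotConfigurator(url, outPath) {
  const puppeteer = require('puppeteer');
  const browser = await puppeteer.launch(); // headless Chromium by default
  const page = await browser.newPage();
  // Wait until the page (and the three.js scene it sets up) has loaded.
  await page.goto(url, { waitUntil: 'networkidle0' });
  // Screenshot only the three.js canvas element, not the whole page.
  const canvas = await page.$('canvas');
  await canvas.screenshot({ path: outPath });
  await browser.close();
}
```

Because the page runs your real front-end code, the rendering logic stays in one place; the backend only drives the browser.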
I didn’t know about puppeteer, that could be an option. But for this to work I would need to copy-paste my code for positioning,… from my front-end to my back-end, which could be very tedious whenever I make changes to the front-end three.js code. Or am I wrong on that part?
I mean images of the configuration that the user made. That way I can add these images to an offer that the user can print out.
The canvas is shown in the configuration step of the process. But when the user has ordered the configuration and wants to print out a pdf the canvas won’t be shown anywhere on the page, so it won’t be “live”. (“live” is indeed not that clear)
When creating the WebGLRenderer, set preserveDrawingBuffer to true; then you can call toDataURL() or toBlob() on the canvas element, which returns the image either base64-encoded or as a binary blob.
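In code, that could look like the sketch below (the `scene` and `camera` are assumed to exist elsewhere in the configurator; only the renderer option and the capture call matter here):

```javascript
// Pass the three.js module in; `scene` and `camera` come from the
// existing configurator code.
function createCaptureRenderer(THREE) {
  return new THREE.WebGLRenderer({
    antialias: true,
    // Without this flag the drawing buffer is cleared after each frame,
    // and toDataURL() would return a blank image.
    preserveDrawingBuffer: true,
  });
}

// Render one frame, then read the canvas contents as a PNG data URL.
function captureFrame(renderer, scene, camera) {
  renderer.render(scene, camera);
  return renderer.domElement.toDataURL('image/png');
}

// Strip the "data:image/png;base64," prefix so only raw base64 remains.
function dataUrlToBase64(dataUrl) {
  return dataUrl.replace(/^data:image\/\w+;base64,/, '');
}
```

Note that preserveDrawingBuffer has a small performance cost, so it is usually only enabled when you actually need to read the canvas back.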
This could be the solution! I made a quick prototype and it seems to be working like I want it to. The only downside is that the code for creating the other contents of the PDF is already written in the backend. So that means I have to send the images to the backend before I can create the PDF.
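Sending the captured image to the backend could be a simple JSON POST; a sketch, where the endpoint (`/api/product-images`) and field names are hypothetical and would map onto a Laravel route on the PHP side:

```javascript
// Hypothetical endpoint and field names: adapt to the existing Laravel API.
function buildImagePayload(dataUrl, productId) {
  // Strip the data-URL header so the backend stores plain base64.
  const base64 = dataUrl.replace(/^data:image\/\w+;base64,/, '');
  return JSON.stringify({ product_id: productId, image: base64 });
}

async function uploadCapture(dataUrl, productId) {
  const res = await fetch('/api/product-images', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: buildImagePayload(dataUrl, productId),
  });
  if (!res.ok) throw new Error(`Upload failed: ${res.status}`);
}
```

On the PHP side, `base64_decode()` on the `image` field would then yield the PNG bytes to embed in the PDF.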
But this beats writing PHP to draw everything again in 2D any day, so thank you!