Use a three.js app as a backend to serve renders

Hi there,

I have built a configurator with r3f that can save screenshots for the user. Is there a way to use my r3f app as a server to retrieve images from it? Basically I want to create an API to request images based on different configurations.
Does anyone have any tips or has anyone done this before?
I have read about server-side rendering, but in my understanding that is not the right way to address this, right?

yes, this would be relatively easy using the vite js api and puppeteer: buerli-starter/packages/with-solid-puppeteer at main · awv-informatik/buerli-starter · GitHub

/cli.js starts the client app with vite, same as if you’d start it locally. then it uses puppeteer to navigate a headless browser to it, and from then on you can make snapshots. the await page.waitForSelector('.complete') in particular is used to await the rendered result. await writeFile(`${file}.png`, await page.screenshot()) creates the png. you can communicate to the client app via url parameters: ?foo=bar...
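condensed, the whole loop is something like this (just a sketch of the flow above, not the literal cli.js — the vite port and file name are assumptions):

```js
// sketch of the snapshot flow; port and file name are assumptions
import { writeFile } from 'node:fs/promises'
import puppeteer from 'puppeteer'

const browser = await puppeteer.launch({ headless: true })
const page = await browser.newPage()

// the configuration travels to the client app as url parameters
await page.goto('http://localhost:5173/?foo=bar')

// wait until the app signals that the scene has finished rendering
await page.waitForSelector('.complete')

// capture the viewport and write it to disk
const file = 'snapshot'
await writeFile(`${file}.png`, await page.screenshot())

await browser.close()
```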

if you look into src/App.js it adds an empty <div className="complete" /> to signal it’s ready. forget all the CAD stuff, you can use the client app as usual.

don’t forget <Canvas gl={{ preserveDrawingBuffer: true }}> or else you can’t make snapshots of the canvas.
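put together, the client side could look roughly like this (a sketch, not the actual src/App.js — the Ready helper is made up):

```jsx
import { Suspense, useEffect, useState } from 'react'
import { Canvas } from '@react-three/fiber'

// hypothetical helper: its effect only fires once everything inside
// the same Suspense boundary has resolved
function Ready({ onReady }) {
  useEffect(() => onReady(), [onReady])
  return null
}

export default function App() {
  const [ready, setReady] = useState(false)
  return (
    <>
      {/* preserveDrawingBuffer is required for canvas snapshots */}
      <Canvas gl={{ preserveDrawingBuffer: true }}>
        <Suspense fallback={null}>
          {/* your configurator scene goes here */}
          <Ready onReady={() => setReady(true)} />
        </Suspense>
      </Canvas>
      {/* puppeteer waits for this selector */}
      {ready && <div className="complete" />}
    </>
  )
}
```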


ps: SSR is something else. some frameworks do that in order to generate the html on the server; the client is merely hydrated and doesn’t need runtimes and such, nor does the client generate the markup any longer. it’s like serving an html page in 1990, at the same speed, but with all the benefits of modern frameworks.


does this mean that the whole browser window gets captured? At the moment I only use the canvas for screenshots.

so basically I call my app with url parameters that set my desired configuration, and then the screenshotting takes place?

this looks promising, I’ll try that!! Thank you

it captures the browser view. puppeteer has no bars, buttons, frames and so on, so it is just the canvas if it fills 100vw/100vh.
yes, you configure the page with url params, threejs/fiber renders, you wait for the ready-signal, and capture the screen. this can all run on the server in response to http requests.
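on the client side that just means reading the params and feeding them into the scene, e.g. (param name made up):

```jsx
// sketch: the client reads its configuration from the url
const params = new URLSearchParams(window.location.search)
const color = params.get('color') ?? 'hotpink'

function Configured() {
  return (
    <mesh>
      <boxGeometry />
      <meshStandardMaterial color={color} />
    </mesh>
  )
}
```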

I think you can also simply use OffscreenCanvas and render to that, skipping puppeteer. I use it in web workers; not 100% sure if it would work in node, you can look it up.
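A minimal sketch of that idea, assuming a browser or web-worker context (whether it runs in node is unverified, as said):

```js
import * as THREE from 'three'

// render into an OffscreenCanvas instead of a DOM canvas
const width = 800, height = 600
const canvas = new OffscreenCanvas(width, height)
const renderer = new THREE.WebGLRenderer({ canvas })
renderer.setSize(width, height, false) // false: OffscreenCanvas has no style to update

const scene = new THREE.Scene()
const camera = new THREE.PerspectiveCamera(50, width / height, 0.1, 100)
camera.position.z = 5
scene.add(new THREE.Mesh(new THREE.BoxGeometry(), new THREE.MeshNormalMaterial()))

renderer.render(scene, camera)

// pull the pixels back out as a png blob
const blob = await canvas.convertToBlob({ type: 'image/png' })
```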


You might also need mesa drivers.

Question: how can I start /cli.js? I haven’t set up a server myself yet. I saw that express is a dependency, so I tried to start it like

npm cli.js

but it only shows the flags that I could use (what’s in the meow config).

I’ll look into that, thanks for the hint!

it’s just an example. you can start it locally with node foo.js. with express you don’t need a cli, you use vite and puppeteer inside the request handler. the first step is to get a server running; once you have that, the rest will fall into place.
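a rough sketch of such a handler (endpoint name, port and app url are assumptions; it expects the client app to already be served):

```js
import express from 'express'
import puppeteer from 'puppeteer'

const APP_URL = 'http://localhost:5173' // wherever vite serves the client app
const app = express()

app.get('/render', async (req, res) => {
  const browser = await puppeteer.launch({ headless: true })
  try {
    const page = await browser.newPage()
    // forward the query string so the client can configure itself
    const query = new URLSearchParams(req.query).toString()
    await page.goto(`${APP_URL}/?${query}`)
    await page.waitForSelector('.complete')
    res.type('png').send(Buffer.from(await page.screenshot()))
  } finally {
    await browser.close()
  }
})

app.listen(3000)
```

something like curl 'http://localhost:3000/render?color=red' > out.png would then hand you back the image.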

Does this mean that the server should have a GPU?
Because the server calls the configurator (with the specific configurations the client sends) through puppeteer, right?

I am asking because some of the configurations have 500+ draw calls…

No, that’s why I mentioned mesa software emulation drivers…

https://www.mesa3d.org/

Can you explain how this works? In my understanding, the configuration always has to be rendered first to generate an image from it. And if I want to do that on a server, the server’s hardware has to handle the rendering.
Thanks

Well, in this case the server’s hardware will be the CPU, not the GPU. It will be slow, but WebGL takes care of all of that; it should just work. Behind the drivers can sit a very fast gaming GPU, an integrated laptop GPU… or a software-emulated one.
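If you want to check which implementation you actually ended up with, you can ask WebGL itself from inside puppeteer (a sketch; on a GPU-less server the reported renderer should be a software one such as llvmpipe):

```js
import puppeteer from 'puppeteer'

const browser = await puppeteer.launch({ headless: true })
const page = await browser.newPage()

// ask the page which implementation actually backs WebGL
const glRenderer = await page.evaluate(() => {
  const gl = document.createElement('canvas').getContext('webgl')
  if (!gl) return 'webgl unavailable'
  const info = gl.getExtension('WEBGL_debug_renderer_info')
  return info ? gl.getParameter(info.UNMASKED_RENDERER_WEBGL) : gl.getParameter(gl.RENDERER)
})

console.log(glRenderer)
await browser.close()
```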

Thanks @drcmda,

that works beautifully!

@dubois I’ll dive into mesa3d now as well. For now it works without mesa, so I guess mesa would be a performance improvement?
