While WebGLRenderer uses a canvas to render 3D objects, CSS2DRenderer doesn't actually render anything; it uses CSS to reposition HTMLElements relative to your scene/canvas. That means the screenshot you are performing can only capture what is drawn on the WebGLRenderer's canvas, so no HTMLElements.
There is still hope; you can use one of the following options:
1_ html2canvas: it's straightforward and easy to implement, but depending on the complexity of your labels you may get inconsistent results. Also, I'm not sure it can capture the content of a WebGL canvas; you may need to somehow combine both capture methods.
2_ getDisplayMedia: it works well and captures both the HTML and the WebGL canvas content (it's literally a screenshot), but the integration is not as straightforward; it has some extra layers of complexity and requires user permission.
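For option 1, a minimal sketch of combining the two capture methods (assumptions: html2canvas is loaded globally, the element names are illustrative, and the WebGL canvas still holds its pixels when read — either create the renderer with `{ preserveDrawingBuffer: true }` or call `renderer.render()` right before capturing):

```javascript
// Sketch only: composite the WebGL canvas and the CSS2D label layer
// into a single PNG data URL.
async function captureSceneWithLabels(glCanvas, labelContainer) {
  // Rasterize the HTML label layer; a null background keeps it
  // transparent so the 3D content underneath stays visible.
  const labelCanvas = await html2canvas(labelContainer, { backgroundColor: null });

  // Composite both layers onto an offscreen canvas.
  const out = document.createElement('canvas');
  out.width = glCanvas.width;
  out.height = glCanvas.height;
  const ctx = out.getContext('2d');
  ctx.drawImage(glCanvas, 0, 0);
  ctx.drawImage(labelCanvas, 0, 0, out.width, out.height);

  // Export as a PNG data URL.
  return out.toDataURL('image/png');
}
```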
Thanks very much.
I need to show the labels in a consistent way, so I think the 2nd one should be my choice.
Is there any way to switch to a higher resolution when capturing, then restore the original afterwards?
Can the permission be granted once and then used forever?
This is a custom-tailored OA system, so I think the permission/privacy concern doesn't matter.
Sometimes a user has a low-resolution screen but needs the screenshot in high resolution, but that is not the main problem.
Hi, I tried to use this API but have more questions.
These demos all show the screen contents as a video, in a <video> element. How can I capture a picture as a .png or .jpeg? And could I choose an HTML element to capture, such as a <div> or <canvas> showing my 3D models?
That's what I meant: getDisplayMedia returns a video stream; you can render the stream into a canvas and then capture from the canvas as a PNG or JPEG.
You can get the (x, y, width, height) coordinates of your element and then crop the canvas, or use this relatively new feature, Region Capture.
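A sketch of those two steps together (names are illustrative; note that the captured stream is in physical pixels while `getBoundingClientRect()` is in CSS pixels, so the rectangle has to be scaled by `devicePixelRatio`, and if the user shares the whole screen rather than the tab, browser chrome will offset the coordinates — which is exactly the problem Region Capture solves):

```javascript
// Pure helper: CSS-pixel bounds -> stream-pixel rectangle.
function cropRect(bounds, dpr) {
  return {
    x: Math.round(bounds.x * dpr),
    y: Math.round(bounds.y * dpr),
    width: Math.round(bounds.width * dpr),
    height: Math.round(bounds.height * dpr),
  };
}

// Grab one frame from getDisplayMedia, crop it to the element's
// on-screen rectangle, and export it as a PNG data URL.
async function captureElementAsPNG(element) {
  const stream = await navigator.mediaDevices.getDisplayMedia({ video: true });
  const video = document.createElement('video');
  video.srcObject = stream;
  await video.play(); // the video never needs to be attached to the DOM

  const r = cropRect(element.getBoundingClientRect(), window.devicePixelRatio);
  const canvas = document.createElement('canvas');
  canvas.width = r.width;
  canvas.height = r.height;
  canvas
    .getContext('2d')
    .drawImage(video, r.x, r.y, r.width, r.height, 0, 0, r.width, r.height);

  stream.getTracks().forEach((t) => t.stop()); // release the screen share
  return canvas.toDataURL('image/png');
}
```

Since the `<video>` element is never attached to the page, the capture can run invisibly; only the permission prompt is shown to the user, and that prompt has to be answered for each capture session — the grant is not persistent.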
There is another solution: don't use CSS2DRenderer, and render your labels with Sprites instead. The sprites need textures, preferably high resolution; if a label is simple enough (just text and a background color) you can auto-generate them with a canvas, something like this:
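A minimal sketch of that idea (assumes `THREE` is in scope, e.g. from `import * as THREE from 'three'`; the fonts, sizes, and colors are illustrative defaults):

```javascript
// Generate a text-label texture with a 2D canvas and put it on a Sprite.
function makeTextSprite(text, { font = '48px sans-serif', pad = 16, bg = '#222', fg = '#fff' } = {}) {
  const canvas = document.createElement('canvas');
  const ctx = canvas.getContext('2d');
  ctx.font = font;
  canvas.width = Math.ceil(ctx.measureText(text).width) + pad * 2;
  canvas.height = 64 + pad * 2; // rough line height for a 48px font
  ctx.font = font; // resizing the canvas resets the context state
  ctx.fillStyle = bg;
  ctx.fillRect(0, 0, canvas.width, canvas.height);
  ctx.fillStyle = fg;
  ctx.textBaseline = 'middle';
  ctx.fillText(text, pad, canvas.height / 2);

  const sprite = new THREE.Sprite(
    new THREE.SpriteMaterial({ map: new THREE.CanvasTexture(canvas) })
  );
  // Match the sprite's aspect ratio to the canvas so text isn't stretched.
  sprite.scale.set(canvas.width / canvas.height, 1, 1);
  return sprite;
}
```

One nice property: a THREE.Sprite always faces the camera automatically, so no extra orientation work is needed.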
Does it mean we must show the video synchronized with the 3D models? Or can the video be output in a hidden mode, so we can do all these steps transparently? I mean, this way may interfere with users.
Maybe Sprite is a good choice, but I may need to use a long-focus camera; I will try it. The reason I use CSS2D is that it shows the same font size whatever the model's distance. I don't think an OrthographicCamera is reasonable here, because I need to show the perspective effect.
And I think Sprites will need more work to keep them facing the screen directly; I need to learn more and will try it.
To be honest, I don't like the getDisplayMedia solution. It has its places, but not in your use case: it's too restrictive, you have little to no control over the resolution, and it requires a lot of post-processing. I think you should go with the sprite solution.
If you want this effect you can watch the camera zoom through the OrbitControls 'change' event and set the sprite scale accordingly.
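One way to compute the scale (a standard formula, not from this thread): with a PerspectiveCamera, the visible world height at distance d is 2·d·tan(fov/2), so you can solve for the scale that makes the label cover a fixed number of screen pixels:

```javascript
// World-space height a sprite needs in order to appear pixelHeight
// pixels tall at the given distance from a PerspectiveCamera.
function spriteScaleForPixels(pixelHeight, distance, fovDegrees, viewportHeightPx) {
  const fovRad = (fovDegrees * Math.PI) / 180;
  const worldHeightAtDistance = 2 * distance * Math.tan(fovRad / 2);
  return (pixelHeight / viewportHeightPx) * worldHeightAtDistance;
}

// Hypothetical wiring inside the OrbitControls 'change' handler:
// controls.addEventListener('change', () => {
//   const d = camera.position.distanceTo(sprite.position);
//   const s = spriteScaleForPixels(32, d, camera.fov, renderer.domElement.clientHeight);
//   sprite.scale.set(s * labelAspect, s, 1); // labelAspect = texture width / height
// });
```

For example, with a 90° fov, a 1000px-tall viewport, and a sprite 1 unit away, a 100px label needs a world-space scale of about 0.2.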
Do you have any suggestions on how to calculate the scale value?
Sorry for the misunderstanding. I'm a beginner in 3D, with about two weeks of experience; all my knowledge of 3D comes from the three.js demos. I only found two ways to keep the font size constant, CSS2D and OrbitControls, so that's why I said that.