I want to display 12k images facing the camera; the images are all different and have varying aspect ratios.
Right now I have 12k sprites with 12k unique sprite materials, and the performance is not bad, but it could be better.
I made a single mesh with points and a single material, and of course the performance is much better (1 object, 1 material), but will it still be good with 12k unique materials? Is that doable?
I need the materials to be unique, since I have to modify (highlight) them in groups that overlap.
It should be possible using BufferGeometry’s “groups” property and assigning the Mesh an array of materials. But 12.5k materials on one mesh is still 12.5k draw calls.
That’s way too many draw calls; you can get it down to a single one if you know how to patch the material.
If you only need to highlight groups, it is simple. Since all your vertices live in a single geometry, you only need to add another buffer attribute and use that attribute in the shader to decide whether to highlight each point, basically like this (not tested), assuming you added a BufferAttribute called selected:
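Since the snippet itself isn't reproduced here, a minimal sketch of the idea (my reconstruction, untested; `buildSelectedAttribute` and the GLSL chunk names are illustrative, not three.js API):

```javascript
// Per-point "selected" flag: 1.0 = highlighted, 0.0 = normal.
// One float per point, stored in a buffer attribute.
function buildSelectedAttribute(count, selectedIndices) {
  const flags = new Float32Array(count); // all zeros by default
  for (const i of selectedIndices) flags[i] = 1.0;
  return flags;
}

// GLSL to splice into the material's shaders (e.g. via onBeforeCompile).
// The vertex stage forwards the attribute; the fragment stage tints.
const vertexPatch = `
  attribute float selected;
  varying float vSelected;
  // in main(): vSelected = selected;
`;
const fragmentPatch = `
  varying float vSelected;
  uniform vec3 highlightColor;
  // in main(): gl_FragColor.rgb = mix(gl_FragColor.rgb, highlightColor, vSelected);
`;

// In three.js you would then do something like (not executed here):
// geometry.setAttribute('selected', new THREE.BufferAttribute(flags, 1));
// material.onBeforeCompile = (shader) => { /* splice patches into shader source */ };
```

Toggling a group's highlight then only means rewriting the flag array and setting the attribute's `needsUpdate`, with no extra draw calls.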
We had to do something similar for the Visions of King project, where we needed a mosaic of 100 x 100 unique portraits and we knew we couldn’t do 10,000 drawcalls. The site is no longer live, sadly, but you can see videos of the result through the link above.
We ended up subdividing a single PlaneBufferGeometry and used custom shaders to apply a depthmap, animations, and mouseover states on a single ShaderMaterial so the whole thing is rendered in 1 drawcall. The portraits were all in a single 8192x8192 texture.
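As an illustration of the atlas idea (my own sketch, not the project's actual code): with an n × n grid atlas like the 100 × 100 portrait mosaic, each cell's UV rectangle can be computed like this:

```javascript
// UV rectangle of cell (col, row) in an n x n grid atlas,
// for sampling one sub-image out of the big texture.
function atlasCellUV(col, row, n) {
  const size = 1 / n; // cell size in UV space
  return {
    u0: col * size,       // left
    v0: row * size,       // bottom
    u1: (col + 1) * size, // right
    v1: (row + 1) * size, // top
  };
}
```

For a 100 × 100 grid in an 8192 × 8192 texture, each cell is about 81 × 81 pixels, which is why the atlas approach trades per-image resolution for a single draw call.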
I’m not entirely sure what your situation is, but you should definitely try to use a single material with custom uniforms to tell it to behave differently for the “highlight” state. 12k materials will crash a lot of mobile browsers. What’s the highlight state you’re planning? Is it a color change, position change, scale…?
@Fyrestar Definitely, your approach sounds like the way it should be done. To completely understand your answer I need to learn more about handling shaders. Excuse my ignorance; I am used to working with offline render engines, where 20M objects and 200 materials are not a problem if you can wait several hours for one image, heheh. Learning THREE is definitely hard, but a lot of fun too.
@marquizzo Your work looked amazing. Making it run on mobile makes it even more impressive.
The images should be able to be highlighted (by adding a color tint) by topic, year, artist, country… (still defining). I tried atlas images before, and they worked great, but I was not able to apply that to a geometry that always faces the camera, like sprites or points, while keeping the image aspect ratio.
That is why I am trying the brute-force approach now. Like I said to titansoftime, the scene is going to be used locally on a workstation.
Thank you guys for your support and fast answers!!!
@titansoftime Yes, you are right, it is a lot of draw calls. The good news is that the scene is intended to be used locally on a nice workstation. I will give it a try, since I am flexible on performance.
Since I am a newbie on the forum, I could not mention three people in the same answer, titansoftime.
I tried atlas images before, and they worked great, but I was not able to apply that to a geometry that always faces the camera, like sprites or points, while keeping the image aspect ratio.
The fragment shader can use both fragment coordinates and varyings. Each vertex (point) can carry two extra attributes holding the UVs of the subimage's corners, forwarded through varyings. In the fragment shader, you can derive the aspect scaling from those two UV varyings and use the point coordinates to interpolate UVs within the subimage.
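A sketch of that fragment-stage math in plain JavaScript (untested, names are my own; in GLSL, `pc` would be `gl_PointCoord`, `uv0`/`uv1` the two varyings, and returning `null` corresponds to `discard`):

```javascript
// Letterbox the subimage inside the square point so its aspect ratio
// is preserved. uv0/uv1 are the subimage's corner UVs in the atlas;
// pc is the per-fragment point coordinate in [0,1]^2.
// Assumes the atlas preserves pixel aspect in UV space.
function subimageUV(pc, uv0, uv1) {
  const w = uv1.u - uv0.u;
  const h = uv1.v - uv0.v;
  const aspect = w / h; // subimage aspect ratio
  let x = pc.x, y = pc.y;
  if (aspect > 1) {
    // wider than tall: full width, shrink the visible vertical band
    y = (y - 0.5) * aspect + 0.5;
  } else {
    // taller than wide: full height, shrink the visible horizontal band
    x = (x - 0.5) / aspect + 0.5;
  }
  // fragments outside the band show nothing (discard in GLSL)
  if (x < 0 || x > 1 || y < 0 || y > 1) return null;
  // interpolate into the subimage's UV rectangle
  return { u: uv0.u + x * w, v: uv0.v + y * h };
}
```

The same two corner attributes give you both the sampling rectangle and the aspect ratio, so nothing extra needs to be uploaded per point.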
That should be perfect indeed. Right now I am not very familiar with the vertex/fragment shaders so I’ll dig in and try to make a little test tonight and be back with the results.
@prisoner849 That is really useful!! Even with fiddle samples.
I am glad I'm not the only one who thinks shaders are “abstract and complex”, hehe.
@Nomte I also needed to show ~100K images in a webgl scene. I combined all the images into one actual .jpg file, then loaded that .jpg into a texture, then sampled from that texture with GL_POINTS (using the Points mesh in three.js).
That was before I learned to write custom shaders, though. With custom shaders I was able to render 1M points at 60 FPS. The experimental branch in yaledhlab/pixplot has some shader code that could point you in the right direction…
@duhaime haha Of course I know! I am following your work and it is the inspiration for the development I am trying to achieve! I’ve learnt a lot through your post about how pix-plot works. Thank you for sharing. BTW the 1M points video is awesome.
Actually I wanted to contact you as soon as I had a prototype working, but due to this lucky encounter, if you don’t mind, I would like to send you an email with the explanation about how pix-plot could improve how we work in the visualization industry. Nice meeting you!
To avoid derailing this thread I will stick to the technical question in the topic. Sorry for the little off-topic.
Currently I am struggling with shaders. In the meantime I tried the brute-force solution (buffer point geometry + 12k unique materials with 256px textures), which turned out to be manageable for an average computer once the textures are loaded into memory (before that, it is a sure crash). Another helpful aspect of unique materials is that I can easily highlight or hide single elements based on a boolean filter, and this is an important feature.
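For what it's worth, the same boolean-filter feature carries over to the single-geometry approach: instead of touching 12k materials, you rewrite one per-point state attribute. A sketch (my own, with an arbitrary encoding; `applyFilter` is not a three.js API):

```javascript
// Per-point state driving a patched shader:
// 0 = hidden, 1 = normal, 2 = highlighted (encoding is an arbitrary choice).
function applyFilter(stateArray, items, predicate) {
  for (let i = 0; i < items.length; i++) {
    stateArray[i] = predicate(items[i]) ? 2 : 1;
  }
  return stateArray;
  // in three.js afterwards: stateAttribute.needsUpdate = true;
}
```

One attribute upload per filter change replaces 12k material updates, and the draw-call count stays at one.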
With your help and advice I learned that custom shaders are the right way to do this, and sooner or later I will have to implement them in the project. Thank you all for the support and guidance!!
@Nomte thanks for your kind words! Please feel free to send me an email any time (douglas.duhaime at gmail). I’d be happy to chat about any aspect of data visualization…
With this setup, once all sprites with their unique materials are visible on screen (11,303 of them), I get 18 fps; it is smoother when fewer sprites are visible. It takes around 40 seconds to display all objects on screen.
The app is intended to run locally on an HP workstation with a Xeon processor, 64 GB RAM and a GeForce GTX 1080 Ti, so I expect an improvement over this early test.
That's what I was wondering; with a 1080 Ti and that kind of setup you might be able to do this. The reason you get higher frame rates with fewer sprites visible is that the off-screen ones are being culled, so you're not issuing those draw calls.
In the example I mentioned above, I changed the number of rows and columns (110 × 110), so I've got 12,100 particles. It gives me a stable 75 fps on my laptop (i7, 12GB RAM, GTX 1060). https://jsfiddle.net/prisoner849/nc5jv7k2/
The number of callbacks is usually 60 times per second, but will generally match the display refresh rate in most web browsers as per W3C recommendation.