40M cubes with individual opacity and color

Hi all,

First, I want to say thank you for such a great library. It, together with all the examples online, made the learning curve of programming in WebGL very easy to climb =)

I am developing the online monitoring for a new type of medical PET scanner, and I have already achieved a nice result, as shown in this and (zoomed in) this link.

What you see are the lines-of-response (a simple line connecting the two 3D positions of the two detected photons, emitted from the single point where a positron-electron annihilation occurred) rasterised into a 3D volume (where the patient would be). This volume is discretised into 340 × 340 × 340 voxels/cubes of 100 µm.

Right now this is an InstancedMesh and, therefore, all cubes share the same material and opacity.
If we add AdditiveBlending to the material, the position of the positron source becomes more distinguishable where multiple lines-of-response cross the same region in space.
This by itself is already amazing. My first question is:
1 - Would it be possible to make some custom blending, or some post-processing effect, that would not only increase the brightness where the blending happens, but also add a color scale (as in this image)?

Following this… my real need is to control the opacity of each cube/voxel independently, which is not possible if I use the InstancedMesh. (well… there is this repo, but I would like to stay within the threejs main/master version).
My goal is to have a voxel grid where the color and opacity depends on the amount of lines-of-response going through each voxel (as in this event display shown here, done using CERN’s data analysis framework ROOT). Therefore, the question is…
2 - How can I achieve this efficiently without exploding the draw calls, given the millions of cubes to be rendered?

Many thanks for any help! =)
Cheers,
Mateus

I made a post about it some time ago: you can add individual opacities to instanced geometry, but it requires modifying a few lines of code in the library (not sure if there has been any progress regarding instanced geometry since then):

You can first render your image in black and white into a render target and then add another pass that maps brightness to a color using some formula, and render that onto the canvas (using the render target as an input texture). That's pretty easy to do and probably adds one extra draw call. You can write a simple ShaderMaterial for that purpose.
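Something along these lines (just a sketch; `accumTarget` stands for whatever render target you rendered the lines into, and the ramp is an arbitrary placeholder for your color scale):

```js
// Full-screen pass: read the accumulated black-and-white image and map its
// brightness onto a color ramp.
const colorMapMaterial = new THREE.ShaderMaterial({
  uniforms: {
    tAccum: { value: accumTarget.texture }, // the black-and-white render target
  },
  vertexShader: /* glsl */ `
    varying vec2 vUv;
    void main() {
      vUv = uv;
      gl_Position = vec4( position.xy, 0.0, 1.0 ); // full-screen quad, camera matrices not needed
    }
  `,
  fragmentShader: /* glsl */ `
    uniform sampler2D tAccum;
    varying vec2 vUv;
    void main() {
      float b = texture2D( tAccum, vUv ).r;                              // accumulated brightness
      vec3 c = mix( vec3( 0.0, 0.0, 1.0 ), vec3( 1.0, 0.0, 0.0 ), b );   // blue -> red
      c = mix( c, vec3( 0.0, 1.0, 0.0 ), 1.0 - abs( 2.0 * b - 1.0 ) );   // pass through green mid-range
      gl_FragColor = vec4( c, 1.0 );
    }
  `,
  depthTest: false,
});

// Draw it as a full-screen quad on top of the canvas.
const quad = new THREE.Mesh( new THREE.PlaneGeometry( 2, 2 ), colorMapMaterial );
const quadScene = new THREE.Scene();
quadScene.add( quad );
const quadCamera = new THREE.OrthographicCamera( -1, 1, 1, -1, 0, 1 );

renderer.setRenderTarget( null );
renderer.render( quadScene, quadCamera );
```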


https://threejs.org/examples/?q=blending#webgl_materials_blending
https://threejs.org/examples/?q=blending#webgl_materials_blending_custom


The issue here is that you would have to sort your objects, which may be insanely expensive given the count. You could look into "depth peeling"; I've seen a working example on this forum.


I would use post-processing, and think of this in two steps. First, render to a frame buffer with simple additive blending, to get a linear sum. If you need more precision you can go to a half-float buffer. Then apply your color scale in a second pass — THREE.LUTPass would be a good fit, or a short custom shader.
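A rough sketch of that first step, assuming `linesScene` holds only the lines-of-response and `lineMaterial` is their shared material (both names are placeholders):

```js
// Step 1: accumulate the lines-of-response with additive blending into a
// half-float render target, so the linear sum doesn't clip at 1.0.
const accumTarget = new THREE.WebGLRenderTarget( width, height, {
  type: THREE.HalfFloatType,
  format: THREE.RGBAFormat,
} );

lineMaterial.blending = THREE.AdditiveBlending;
lineMaterial.transparent = true;
lineMaterial.depthWrite = false; // let overlapping lines add up instead of occluding each other

renderer.setRenderTarget( accumTarget );
renderer.setClearColor( 0x000000, 1 );
renderer.clear();
renderer.render( linesScene, camera );
renderer.setRenderTarget( null );

// Step 2: map the accumulated values to a color scale in a second pass,
// e.g. THREE.LUTPass (examples/jsm/postprocessing) or a small custom ShaderMaterial.
```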


As others have mentioned, you can use InstancedMesh to reduce draw calls. Adding opacity requires a bit of GLSL modification but it’s not a big deal — at least not compared with the scale of data you have to manage here.

While the draw calls can be reduced by instancing, I'd expect that rendering 40M * 8 = 320M vertices is still going to bring your GPU to a crawl. Maybe a high-end GPU could handle it but mine sure can't. And as @dubois mentions, those instances don't get depth sorted for free; maybe you can incrementally sort them rather than sorting all at once.

I’d say if there’s anything you can possibly do to reduce the quantity of data (larger voxels in the distance, smaller near the camera?) and to subdivide those voxels into chunks (rather than one big block of 40M) that is going to help. Even so this is a lot of data…


Hi @tfoller @Chaser_Code @dubois @donmccurdy, hi all,
Thank you for taking the time and posting here!
I took some time to read everything, do some more research online, and digest everything…

@tfoller, thanks for sharing your library mod… I had a look at it but in the end, as I will mention below, I don’t think an InstancedMesh will be a suitable solution for me…
Regarding the post-processing, the render-target-and-texture method indeed sounds like a reasonable way to achieve what I mentioned.
Since I might also be displaying the CAD model of the scanner, which doesn't need such a color scale, would it be possible to skip the post-processing for that mesh?
My issue with this post-processing technique is that the rendered result depends on the position of the camera looking at the mesh (for example, as one can see here, when the camera is inside a line-of-response, all of its voxels get blended together), while the "physical data" I want to display is constant.

@Chaser_Code, I already had a look at the links you shared but, to be honest, I find it quite difficult to find what I need in these examples.
I think that what I need is additive blending. What I would like to know is whether it is possible to add a "logarithmic" weight to the additive blending, instead of linearly summing up the "overlapping meshes"… This way I should get better contrast between the regions where only a few voxels blend and those where many do (indicating the position of the simulated radioactive source).
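From what I could gather, the blending hardware itself only sums linearly, so the logarithm would have to be applied afterwards, for example in the color-mapping pass suggested above. Something like this sketch, I suppose, where `uMaxCount` would be a uniform holding the largest expected number of overlapping lines:

```js
// Fragment shader for the color-mapping pass: compress the linear sum with a
// logarithm before it is turned into a color.
const logMapFragment = /* glsl */ `
  uniform sampler2D tAccum;   // linear sum produced by the additive-blending pass
  uniform float uMaxCount;    // largest expected number of overlapping lines
  varying vec2 vUv;
  void main() {
    float sum = texture2D( tAccum, vUv ).r;
    float b = log( 1.0 + sum ) / log( 1.0 + uMaxCount ); // logarithmic contrast in 0..1
    gl_FragColor = vec4( vec3( b ), 1.0 );               // or feed b into a color ramp
  }
`;
```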

@dubois, I have found this post on depth peeling and indeed it seems to solve the depth issue. This depth needs to be calculated for each position of the camera, right? It indeed sounds like a Herculean task…
Can I take advantage of the fact that the mesh is static?

@donmccurdy, thanks for confirming that post-processing is one way to go. I think this will be the first thing I try. But, again, I think this post-processing can lead to a wrong interpretation of my data, so I will mainly use it to show the potential of such a display while I build the real deal.
Regarding adding opacity to individual instances of an instanced mesh via custom shaders… I still don't understand how a single (shader) material allows modification for individual instances… could you please elaborate?
Modifying the size of the voxels would not be a nice idea from the scientific point of view of the data visualisation… but, indeed, grouping the voxels is an idea we already have (a kind of octree partitioning)… although this would be a second-order optimisation for us for now…

Regarding computing power, as this application is meant to work with a medical device ($$$$$), we decided to get a good computer ($$$) to guarantee that rendering such a large amount of data would not be a bottleneck…
So, to run this application, we have a Ryzen 9 5950X + RTX 3080 12 GB.

Still, as @donmccurdy mentions, rendering 40M cubes is not easy.
In fact, I was not able to create a single InstancedMesh with 40M instances. I get the error “Array buffer allocation failed”.
When creating an InstancedMesh with 32M cubes, I got 7 or 8 FPS. 1M cubes run smoothly at 75 FPS…
320M vertices might indeed be way too much… and when I think about it, I realise that all neighboring/touching cubes have overlapping vertices, multiplying the number of vertices to be rendered by roughly 8!
Therefore, I no longer think this would be a nice approach…

I wonder if it would save me some computing power to build a single geometry of a 3D grid, in a way still defining the 40M voxels, but with only ~40M (shared) vertices…
Would a vertex shader containing the position of each vertex of this 3D grid, together with some color/alpha attributes, be a way to do this?
I don't have enough experience to know if this would work… More specifically, I don't know how (or whether it is possible) to modify the color (in the fragment shader?) of a single cube defined by a given set of 8 vertices (in the vertex shader)…

On top of all of this… I discovered Texture3D… This example is fascinating, and this looks very interesting…
The visualisation looks weird to me sometimes… here, for example, depending on the number of steps(?) I configure, when I rotate the "data" (or volume) I see weird effects/features (like the one indicated here by the white arrow) which I can't physically explain from the data, but which I guess I can avoid at the cost of using many steps through the texture…
Could you please let me know what you think about using a Texture3D to visualise such data?
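For reference, this is how I understand the data would be packed into a 3D texture (a rough sketch based on the example; `counts` would be a Uint8Array of length 340³ holding the per-voxel line counts, already scaled to 0–255):

```js
// Pack the per-voxel line-of-response counts into a Data3DTexture.
// `counts` is assumed to be a Uint8Array of length 340*340*340 (x-fastest ordering).
const size = 340;
const texture = new THREE.Data3DTexture( counts, size, size, size );
texture.format = THREE.RedFormat;
texture.type = THREE.UnsignedByteType;
texture.minFilter = THREE.LinearFilter;
texture.magFilter = THREE.LinearFilter;
texture.unpackAlignment = 1;
texture.needsUpdate = true;

// A volume shader (as in the webgl_texture3d example) then ray-marches this
// texture; more steps means fewer slicing artefacts but more fragment work.
```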

As a final note… being a bunch of physicists having to learn this (currently in the dark), we would be willing to pay for private classes targeting this application, or even for the services of an extra pair of hands to help us set all of this up… (DM me if you have any availability :grin: )

Thank you all again! This is all very useful!

(OMG… what a huge reply… :sweat_smile: sorry for this but, without any previous knowledge, this forum is our only source of trustworthy knowledge :slightly_smiling_face: Thank you again!)

Yes, you only render the lines of response into the render target and then apply the filter; the rest of the scene, including the device, is rendered directly onto the canvas, as you normally would.
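A minimal sketch of that separation using layers (assuming the lines-of-response live in a single object, here called `linesMesh`, and `accumTarget` is the render target from before; note that layers are not inherited by children, so if you use a group you need to set the layer on each child):

```js
// Put the lines-of-response on their own layer so they can be rendered separately.
linesMesh.layers.set( 1 );

// Pass 1: only layer 1 (the lines) into the accumulation target.
camera.layers.set( 1 );
renderer.setRenderTarget( accumTarget );
renderer.clear();
renderer.render( scene, camera );

// Pass 2: everything on the default layer (CAD model, etc.) straight to the canvas.
camera.layers.set( 0 );
renderer.setRenderTarget( null );
renderer.render( scene, camera );

// Finally, draw the color-mapped full-screen quad on top, or composite the two
// images in one last pass.
```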

I’d say the simplest way to represent the data is point cloud, so, instead of instanced cubes, you can use Point geometry, this will reduce the number vertices 8 times. You don’t seem to benefit much from the fact that it’s a cube shape and not anything else.

You can draw any shape you like on each point, starting from a single pixel, and they can have additive blending. The only possible inconvenience is that points won't be sorted by distance from the camera, but that might not be needed in the simple case of them overlapping.
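A sketch of what that could look like (`positions` and `colors` are assumed to be Float32Arrays you build from the rasterised lines-of-response, three floats per occupied voxel each):

```js
// One vertex per occupied voxel, with a per-point color.
const geometry = new THREE.BufferGeometry();
geometry.setAttribute( 'position', new THREE.BufferAttribute( positions, 3 ) );
geometry.setAttribute( 'color', new THREE.BufferAttribute( colors, 3 ) );

const material = new THREE.PointsMaterial( {
  size: 0.0001,                    // ~100 µm in your scene units
  vertexColors: true,
  blending: THREE.AdditiveBlending,
  transparent: true,
  depthWrite: false,               // so blended points don't occlude each other
} );

const points = new THREE.Points( geometry, material );
scene.add( points );
```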

Another way to reduce the geometry and draw lines is to… draw lines (or polygons), instead of a series of cubes or points that essentially lie on a line. There is plenty of material on this forum about drawing thick transparent lines in 3D space. These can also have blending and post-processing filters.


The idea would be to put a per-instance color into an InstancedBufferAttribute, instanceColor. So this is a separate vec4 color available to each instance. If you don’t need different colors, the first three components would just be white, and the 4th component is the alpha. Then the GLSL shader requires a bit of modification to use that 4th alpha component A, rather than just using the first three, RGB.

The final color of each instance would be the product of {material color} × {texture color} × {vertex or instance color}.
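One way to approximate this without editing the library itself is to keep the stock RGB instance color (via setColorAt) and add a separate instanced attribute for the alpha, patching the built-in shader with onBeforeCompile. A sketch (the attribute name `instanceOpacity` is just an example, not a three.js built-in):

```js
// Per-instance RGB via the built-in setColorAt(), per-instance alpha via a
// custom instanced attribute that is multiplied into the material's opacity.
const mesh = new THREE.InstancedMesh( boxGeometry, material, count );

const opacities = new Float32Array( count );              // fill with one 0..1 value per instance
mesh.geometry.setAttribute(
  'instanceOpacity',
  new THREE.InstancedBufferAttribute( opacities, 1 )
);

material.transparent = true;
material.onBeforeCompile = ( shader ) => {
  shader.vertexShader = shader.vertexShader
    .replace( '#include <common>',
      'attribute float instanceOpacity;\nvarying float vInstanceOpacity;\n#include <common>' )
    .replace( '#include <begin_vertex>',
      'vInstanceOpacity = instanceOpacity;\n#include <begin_vertex>' );
  shader.fragmentShader = shader.fragmentShader
    .replace( '#include <common>',
      'varying float vInstanceOpacity;\n#include <common>' )
    .replace( 'vec4 diffuseColor = vec4( diffuse, opacity );',
      'vec4 diffuseColor = vec4( diffuse, opacity * vInstanceOpacity );' );
};
```

The transparency-sorting caveat mentioned earlier in the thread still applies, of course.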
