Hi @tfoller @Chaser_Code @dubois @donmccurdy, hi all,
Thank you for taking the time and posting here!
I took some time to read everything, do some more research online, and digest everything…
@tfoller, thanks for sharing your library mod… I had a look at it but in the end, as I will mention below, I don’t think an InstancedMesh will be a suitable solution for me…
Regarding the post-processing, the render-to-texture method indeed sounds like a reasonable way to achieve what I mentioned.
Since I might also be displaying the CAD model of the scanner, which doesn't need such a color scale, would it be possible to skip this post-processing for that mesh?
My issue with this post-processing technique is that the rendered result depends on the position of the camera looking at the mesh (for example, as one can see here, when the camera is inside a line-of-response all of its voxels get blended as well), while the "physical data" I want to display is constant.
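Just to check that I understood the suggestion correctly, here is roughly what I picture: render only the voxels additively into a float render target, map the accumulated values to a color scale in a full-screen pass, and keep the CAD model on its own layer so it skips the post-processing. This is only a sketch of my understanding; `voxelMesh`, `cadMesh`, `colormapQuadScene` and `colormapCamera` are placeholders from my own setup, and I may well be misusing the API:

```js
import * as THREE from 'three';

// Float target so many overlapping voxels don't saturate at 1.0
const accumTarget = new THREE.WebGLRenderTarget(window.innerWidth, window.innerHeight, {
  type: THREE.FloatType,
  format: THREE.RGBAFormat,
});

// Voxels just accumulate
voxelMaterial.blending = THREE.AdditiveBlending;
voxelMaterial.transparent = true;
voxelMaterial.depthWrite = false;

// Layer 1 = voxels, layer 0 = CAD model (so the CAD skips the post-processing)
voxelMesh.layers.set(1);
cadMesh.layers.set(0);

renderer.autoClear = false;

function render() {
  // 1) accumulate voxel contributions into the float target
  camera.layers.set(1);
  renderer.setRenderTarget(accumTarget);
  renderer.clear();
  renderer.render(scene, camera);

  // 2) full-screen pass mapping the accumulated values through a color scale
  renderer.setRenderTarget(null);
  renderer.clear();
  renderer.render(colormapQuadScene, colormapCamera);

  // 3) draw the CAD model normally on top of the color-mapped voxels
  camera.layers.set(0);
  renderer.clearDepth();
  renderer.render(scene, camera);
}
```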
@Chaser_Code, I already had a look at the links you shared but, to be honest, I find it quite difficult to pick out what I need from those examples.
I think what I need is additive blending. What I would like to know is whether it is possible to apply a "logarithmic" weight to the additive blending, instead of linearly summing up the "overlapping meshes"… That way I should get better contrast between regions where only a few voxels overlap and regions where many do (the latter indicating the position of the simulated radioactive source).
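From what I understand, the blend equation itself can only sum linearly, so my guess is that the log would have to be applied afterwards, in the color-scale pass over the accumulated texture. Again just a sketch of what I imagine, with made-up names (`tAccum`, `maxCount`) and a grey ramp standing in for a real color scale:

```js
// Full-screen pass: accumulate linearly on the GPU (accumTarget from the
// sketch above), then compress with a log before mapping to a color.
const colormapMaterial = new THREE.ShaderMaterial({
  uniforms: {
    tAccum:   { value: accumTarget.texture },
    maxCount: { value: 1000.0 },   // placeholder normalisation constant
  },
  vertexShader: /* glsl */ `
    varying vec2 vUv;
    void main() {
      vUv = uv;
      // assumes a 2x2 full-screen PlaneGeometry
      gl_Position = vec4(position.xy, 0.0, 1.0);
    }
  `,
  fragmentShader: /* glsl */ `
    uniform sampler2D tAccum;
    uniform float maxCount;
    varying vec2 vUv;
    void main() {
      float count = texture2D(tAccum, vUv).r;            // linear sum of voxel contributions
      float w = log(1.0 + count) / log(1.0 + maxCount);  // "logarithmic weight"
      gl_FragColor = vec4(vec3(w), 1.0);                 // grey ramp as a stand-in color scale
    }
  `,
});
```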
@dubois, I have found this post on depth peeling and it does seem to solve the depth issue. The peeling needs to be recomputed for every camera position, right? It does sound like a Herculean task…
Can I take advantage of the fact that the mesh is static?
@donmccurdy, thanks for confirming that post-processing is one way to go. I think this will be the first thing I try. But, again, I think this post-processing can lead to a wrong interpretation of my data, so I will mainly use it to show the potential of such a display while I work on the real deal.
Regarding adding opacity to individual instances of an instanced mesh via custom shaders… I still don't understand how a single (shader) material allows modifications per instance… could you please elaborate?
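My (possibly wrong) guess of what you mean is something along these lines: a per-instance attribute that the shared material reads in its shaders. The attribute name and the `onBeforeCompile` replacements below are just my attempt, so please correct me:

```js
import * as THREE from 'three';

const count = 1000000;                          // placeholder instance count
const alphas = new Float32Array(count);         // one alpha value per instance
// ... fill `alphas` from the physical data ...

const geometry = new THREE.BoxGeometry(1, 1, 1);
geometry.setAttribute('instanceAlpha', new THREE.InstancedBufferAttribute(alphas, 1));

const material = new THREE.MeshBasicMaterial({ transparent: true, depthWrite: false });
material.onBeforeCompile = (shader) => {
  // pass the per-instance value from the vertex to the fragment stage
  shader.vertexShader = shader.vertexShader
    .replace('#include <common>',
      'attribute float instanceAlpha;\nvarying float vInstanceAlpha;\n#include <common>')
    .replace('#include <begin_vertex>',
      '#include <begin_vertex>\nvInstanceAlpha = instanceAlpha;');
  // multiply the material's opacity by the per-instance alpha
  shader.fragmentShader = shader.fragmentShader
    .replace('#include <common>',
      'varying float vInstanceAlpha;\n#include <common>')
    .replace('#include <color_fragment>',
      '#include <color_fragment>\ndiffuseColor.a *= vInstanceAlpha;');
};

const voxels = new THREE.InstancedMesh(geometry, material, count);
// then voxels.setMatrixAt(i, matrix) per instance as usual, and scene.add(voxels)
```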
Modifying the size of the voxels would not be a good idea from the scientific point of view of the data visualisation… but, indeed, grouping the voxels (a kind of octree partitioning) is an idea we already have, although that would be a second-order optimisation for us for now…
Regarding computing power, since this application is meant to work with a medical device ($$$$$), we decided to get a good computer ($$$) to guarantee that rendering such a large amount of data would not be a bottleneck…
So, to run this application, we have a Ryzen 9 5950X + RTX 3080 12 GB.
Still, as @donmccurdy mentioned, rendering 40M cubes is not easy.
In fact, I was not able to create a single InstancedMesh with 40M instances; I got the error "Array buffer allocation failed".
With an InstancedMesh of 32M cubes I got 7 or 8 FPS, while 1M cubes ran smoothly at 75 FPS…
320M vertices might indeed be far too much… and when I think about it, I realise that all neighboring/touching cubes have their corner vertices duplicated on top of each other, which is what makes the vertex count 8-fold (40M cubes × 8 corners = 320M) instead of ~40M shared grid points!
Therefore, I no longer think this would be a good approach…
I wonder if it would save some computing power to build a single geometry for the whole 3D grid, effectively creating the 40M voxels but with only ~40M (shared) vertices…
Would a vertex shader that takes the position of each vertex of this 3D grid, together with some color/alpha attributes, be a way to do this?
I don't have enough experience to know whether this would work… More specifically, I don't know how (or even whether it is possible) to set the color, in the fragment shader, of a single cube defined by 8 given vertices in the vertex shader…
On top of all this… I discovered Texture3D… This example is fascinating, and this looks very interesting…
The visualisation sometimes looks odd to me… here, for example, depending on the number of steps(?) I configure, when I rotate the "data" (or volume) I see strange effects/features (like the one indicated here by the white arrow) that I can't physically explain from the data, but which I guess I can avoid at the cost of using a high number of steps…
Can you please let me know what you think about using a Texture3D to visualise such data?
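In case it helps to clarify what I am after, this is how I imagine packing the simulation output into a 3D texture, based on that example (the grid dimensions are placeholders for our ~40M-voxel grid, and I believe the class is called DataTexture3D in older three.js releases):

```js
import * as THREE from 'three';

// Placeholder grid: 400 * 400 * 250 = 40M voxels
const nx = 400, ny = 400, nz = 250;
const data = new Float32Array(nx * ny * nz);
// ... fill data[x + y*nx + z*nx*ny] with the voxel intensities ...

const texture = new THREE.Data3DTexture(data, nx, ny, nz);
texture.format = THREE.RedFormat;        // single scalar channel
texture.type = THREE.FloatType;
texture.minFilter = THREE.LinearFilter;
texture.magFilter = THREE.LinearFilter;
texture.unpackAlignment = 1;
texture.needsUpdate = true;

// The material would then raymarch this texture, as in the three.js
// texture3d example (color map + number of steps as parameters).
```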
As a final note… being a bunch of physicists having to learn this (currently in the dark), we would be willing to pay for private classes targeted at this application, or even for the services of an extra pair of hands to help us set all of this up… (DM me if you have any availability)
Thank you all again! This is all very useful!