Need some expert guidance

I need a little bit of expert guidance. I am working with a dataset that represents the inverse solution of scalp-based EEG recordings. The solution estimates what is going on in the volume space of the brain from the data collected at the scalp. The dataset consists of approximately 2400 voxels whose positions in space are known. I have built a three.js app (my first one!) to display these voxels (as BoxBufferGeometries) with transparency so that one can get a sense of the electrical activity of the brain in three dimensions. You can view my progress so far here.

Now to my question… I would like to soften/blend the edges of the voxels so that the model does not have such a “lego block” look. I have tried subdividing the voxels and interpolating the color between neighboring voxels, but this turns out to be too computationally expensive to be practical. The most I can do is subdivide each side into 3 segments before the app becomes too slow to respond, and at that level of subdivision the result is better but still has the same block-like effect.

I am aware that the cube geometry can be subdivided into a larger number of triangles, but I do not know how to use that subdivision to fade colors across the faces of the cube. I have tried cubes with rounded edges, but that does not look right either.

Is this a job for some sort of custom shader? Do I need to build a Blender model and then color that? As I said in the beginning, I am looking for guidance on which direction to turn to achieve my goal.

I am hopeful that I can take it from there.

Thanks in advance

Are the voxels colored with different materials? Or the same material, and each voxel has different vertex colors? The second would probably be better, because you could change each vertex’s colors to the average of itself and its neighbors… that might not be perfectly smooth (you could see some visual banding) but at least the hard edges at voxel boundaries would be gone.
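The neighbor-averaging idea above can be sketched in plain JavaScript before wiring it into a vertex color attribute. This is a hypothetical sketch, assuming the voxels sit on an integer grid and are stored as `{ x, y, z, color }` objects (names invented for illustration, not from the app):

```javascript
// Blend each voxel's color toward the average of its face-adjacent
// neighbors, as a preprocessing step before writing vertex colors.
// `voxels` is assumed to be an array of { x, y, z, color: [r, g, b] }.
function blendWithNeighbors(voxels) {
  // Index voxels by their integer grid position for O(1) neighbor lookup.
  const byPos = new Map();
  for (const v of voxels) byPos.set(`${v.x},${v.y},${v.z}`, v);

  return voxels.map((v) => {
    const sum = [...v.color];
    let count = 1;
    // The six face-adjacent neighbor offsets.
    const offsets = [[1,0,0],[-1,0,0],[0,1,0],[0,-1,0],[0,0,1],[0,0,-1]];
    for (const [dx, dy, dz] of offsets) {
      const n = byPos.get(`${v.x + dx},${v.y + dy},${v.z + dz}`);
      if (!n) continue;
      for (let i = 0; i < 3; i++) sum[i] += n.color[i];
      count++;
    }
    return { ...v, color: sum.map((c) => c / count) };
  });
}
```

The blended per-voxel colors would then be written into a `color` BufferAttribute, with the material's `vertexColors` enabled.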

I should be able to use the second option. The voxels all share the same material: a simple MeshLambertMaterial with a color and with the transparent and opacity properties set appropriately.

Will that option work even though the voxels are separate objects? And can you point me toward some documentation?

Thanks for your reply.

It will work with separate objects, although your app’s performance would be better if you could merge them (regardless of what color solution you use).

See this example, maybe?

For each vertex, you’ll need to find the colors of the neighboring voxels and blend the colors.
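Concretely, each cube corner is shared in space by up to eight voxels, so one way to blend is to accumulate every voxel color that touches a given corner and average them. A hypothetical sketch, again assuming unit-grid voxels stored as `{ x, y, z, color }`:

```javascript
// Compute a blended color for every cube corner by averaging the colors
// of all voxels that touch that corner (up to eight on a unit grid).
// Voxels are assumed to be { x, y, z, color: [r, g, b] }, where (x, y, z)
// is the minimum corner of the unit cube.
function cornerColors(voxels) {
  const acc = new Map(); // corner key -> { sum: [r, g, b], count }
  const corners = [[0,0,0],[1,0,0],[0,1,0],[0,0,1],[1,1,0],[1,0,1],[0,1,1],[1,1,1]];
  for (const v of voxels) {
    for (const [dx, dy, dz] of corners) {
      const key = `${v.x + dx},${v.y + dy},${v.z + dz}`;
      const entry = acc.get(key) ?? { sum: [0, 0, 0], count: 0 };
      for (let i = 0; i < 3; i++) entry.sum[i] += v.color[i];
      entry.count++;
      acc.set(key, entry);
    }
  }
  // Average the accumulated sums into one color per corner.
  const colors = new Map();
  for (const [key, { sum, count }] of acc) {
    colors.set(key, sum.map((c) => c / count));
  }
  return colors;
}
```

Looking up each of a cube's eight corners in the returned map gives the per-vertex colors to write into the geometry, and adjacent cubes then agree at their shared corners, which is what removes the hard edges.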

Thank you. I’ll look into that example.

By merging the voxels, do you mean creating something like a single 3-D matrix of points and then indexing into it based on where each voxel is located?

Could you provide explanatory image(s) of the desired result?
Maybe it makes sense to use instancing? Clouds of cubes

Instead of N meshes, each with the triangles of one cube, you would have a single mesh containing the triangles of all N cubes. Since the colors are stored per vertex rather than on the material, you can keep the colors distinct. BufferGeometryUtils.mergeBufferGeometries( ... ) can help with that.
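In spirit, merging geometries just concatenates each attribute array and offsets the indices by the number of vertices already merged. A minimal sketch with plain flat arrays (in the actual app you would call `BufferGeometryUtils.mergeBufferGeometries` on the BoxBufferGeometries instead):

```javascript
// Merge several indexed geometries into one. Each geometry is assumed to
// be { position, color, index }, with position/color as flat per-vertex
// arrays ([x, y, z, ...] and [r, g, b, ...]) and index as vertex indices.
function mergeGeometries(geoms) {
  const merged = { position: [], color: [], index: [] };
  let vertexOffset = 0;
  for (const g of geoms) {
    merged.position.push(...g.position);
    merged.color.push(...g.color);
    // Indices must be shifted past the vertices already merged.
    merged.index.push(...g.index.map((i) => i + vertexOffset));
    vertexOffset += g.position.length / 3;
  }
  return merged;
}
```

One merged mesh means one draw call instead of ~2400, which should also help with the subdivision slowdown described earlier in the thread.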