Cuboid face adjacency detection and grouping, voxels with harmonics concept

Hello,

I’m working on an efficient method to sort through a large number of cuboids, each defined by:
index, minX, minZ, minY, maxX, maxZ, maxY. The goal is to detect groups of cuboids with adjacent faces, discard loose ones with no adjacent face, and ignore adjacency at edges and vertices (only full face contact counts).
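The face-adjacency rule above can be sketched as a small predicate (a sketch with hypothetical names, not the actual pipeline): two axis-aligned cuboids are face-adjacent exactly when they touch on one axis and their projections on the other two axes overlap with positive area, which automatically excludes edge and vertex contact.

```javascript
// Sketch: exact face-adjacency test for two axis-aligned cuboids
// shaped like {minX, minY, minZ, maxX, maxY, maxZ}.
const AXES = [["minX", "maxX"], ["minY", "maxY"], ["minZ", "maxZ"]];

// Strict 1D overlap: the shared interval must have positive length.
function overlaps(a, b, axis) {
  const [lo, hi] = AXES[axis];
  return Math.min(a[hi], b[hi]) > Math.max(a[lo], b[lo]);
}

// Face-adjacent iff the cuboids touch on one axis (one's max equals the
// other's min) and strictly overlap on the remaining two axes.
function facesAdjacent(a, b) {
  for (let i = 0; i < 3; i++) {
    const [lo, hi] = AXES[i];
    if (a[hi] === b[lo] || b[hi] === a[lo]) {
      const [j, k] = [0, 1, 2].filter(n => n !== i);
      if (overlaps(a, b, j) && overlaps(a, b, k)) return true;
    }
  }
  return false;
}
```

Edge-only contact (e.g. two unit cubes meeting along one edge) fails the strict-overlap check on one of the perpendicular axes, so it is rejected as required.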

Big picture and future concept:
It seemed not too exciting at low cuboid counts, but as the extent grows to 100k–1M cuboids, things change a lot. It opens a very interesting possibility: developing Gaussian voxels with normal vectors and logarithmic encoding of harmonics, which would allow baking physically accurate reflection and refraction, a physical definition of specular, and many applications such as non-Euclidean volumetrics, voxels with logarithmic normals and harmonic sub-pixels, physical Fourier transforms, synthetic aperture, physical camera simulation, and photo-studio-quality DOF pretty much for free. A bit like Indigo Renderer, but running in a browser on baked sets of voxel harmonics, with geometry baked as log normals for edge hardness.

Actual task:
For face adjacency detection and grouping within the cuboid set, I ended up using workers, early sorting, log-grid splits per axis plane, and WASM for the heavy detection. After trying many approaches, I’ve landed at around 1.5M cuboids/s, which includes the entire process from fetching data to the 3D scene and render.
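For readers who want a baseline to compare against: the sort-and-bucket idea can be sketched in plain single-threaded JavaScript (hypothetical names; the actual pipeline uses workers and WASM). Cuboids are bucketed per axis by the plane coordinate of their max face, so only cuboids whose min face lies on the same plane are tested, and matches are merged with a union-find:

```javascript
// Sketch: plane-bucketing + union-find grouping of face-adjacent cuboids.
// Assumes every cuboid has positive extent on each axis.
const AX = [["minX", "maxX"], ["minY", "maxY"], ["minZ", "maxZ"]];

// Strict overlap on the two axes perpendicular to `touchAxis`.
function faceContact(a, b, touchAxis) {
  return [0, 1, 2].every(n => {
    if (n === touchAxis) return true;
    const [lo, hi] = AX[n];
    return Math.min(a[hi], b[hi]) > Math.max(a[lo], b[lo]);
  });
}

function groupByFaceAdjacency(cuboids) {
  // Union-find over cuboid indices, with path compression.
  const parent = cuboids.map((_, i) => i);
  const find = i => (parent[i] === i ? i : (parent[i] = find(parent[i])));
  const union = (i, j) => { parent[find(i)] = find(j); };

  for (let axis = 0; axis < 3; axis++) {
    const [lo, hi] = AX[axis];
    // Plane coordinate of the max face -> indices of cuboids on that plane.
    const byPlane = new Map();
    cuboids.forEach((c, i) => {
      const list = byPlane.get(c[hi]);
      if (list) list.push(i); else byPlane.set(c[hi], [i]);
    });
    // A cuboid's min face can only touch max faces on the same plane.
    cuboids.forEach((c, i) => {
      for (const j of byPlane.get(c[lo]) || []) {
        if (faceContact(cuboids[j], c, axis)) union(i, j);
      }
    });
  }

  // Collect connected groups; singletons are loose cuboids and are discarded.
  const groups = new Map();
  cuboids.forEach((_, i) => {
    const root = find(i);
    if (!groups.has(root)) groups.set(root, []);
    groups.get(root).push(i);
  });
  return [...groups.values()].filter(g => g.length > 1);
}
```

Because candidates are restricted to exact plane matches, the result is exact (no tolerance or approximation), and the per-axis bucketing keeps the pairwise tests close to linear for typical scenes.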

Are there better ways to sort and detect more efficiently that always output exact results? I haven’t explored GPU-based ones; perhaps there are already shader or rasterizer components that partly do this process.

Have a great 2026 and appreciate input,
Bartolome

[screenshot: 1.5M cuboids]


That sounds like an interesting problem to solve.

For the sake of mutual understanding (and for helping future readers), let me translate this.

mhm, rephrasing to:

“Instead of indexing cuboids directly, we could convert them into a smooth volumetric/surface representation with efficient storing that allows high-quality physical rendering.”

Now, back to the OP.
What you’re describing addresses how the scene is represented, not how it is efficiently queried or sorted. Those are two different layers of the problem.

Having said that, I think Gaussian voxels are a great idea. It’s just that they are only a representation: they define what exists in space (geometry, orientation, material/lighting response). Even with normals and harmonic data, they don’t by themselves provide fast lookup, culling, or ordering when you have a large number of primitives. If the goal is efficient sorting / traversal of many cuboids, you still need an acceleration structure (some form of spatial indexing) on top of that representation. As you may know, typical choices are:

  • Object-based structures (BVH, KD-tree): group primitives by bounding volumes, good for ray queries, intersection tests, and object-centric operations.

  • Space-based structures (octrees, grids): subdivide space itself, good for sparse scenes, density queries, and locality-based sorting.

The distinction matters because it lets you combine the strengths of both: Gaussian voxels (with normals / harmonics) give you smooth geometry and physically meaningful shading, while BVHs or octrees make queries, sorting, and traversal scale to large datasets.

In other words, Gaussian voxels are compatible with, but not a substitute for, BVHs or octrees. A general approach would be to wrap those voxels inside an object- or space-partitioning structure if performance and scalability are the goal.
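To make the space-based option concrete, here is a minimal uniform-grid sketch (hypothetical class and cell size; octrees and BVHs refine the same idea hierarchically). Each cuboid is hashed into every cell its bounds overlap, so a query only tests cuboids sharing a cell instead of scanning the whole set:

```javascript
// Sketch: uniform grid as a space-based acceleration structure.
// Cell size is a tuning parameter chosen to match typical cuboid size.
class UniformGrid {
  constructor(cellSize) {
    this.cellSize = cellSize;
    this.cells = new Map(); // "cx,cy,cz" -> array of stored items
  }
  // Inclusive range of cell coordinates covered by an AABB.
  _cellRange(box) {
    const s = this.cellSize;
    return {
      x0: Math.floor(box.minX / s), x1: Math.floor(box.maxX / s),
      y0: Math.floor(box.minY / s), y1: Math.floor(box.maxY / s),
      z0: Math.floor(box.minZ / s), z1: Math.floor(box.maxZ / s),
    };
  }
  insert(box, item) {
    const r = this._cellRange(box);
    for (let x = r.x0; x <= r.x1; x++)
      for (let y = r.y0; y <= r.y1; y++)
        for (let z = r.z0; z <= r.z1; z++) {
          const key = `${x},${y},${z}`;
          const list = this.cells.get(key);
          if (list) list.push(item); else this.cells.set(key, [item]);
        }
  }
  // Candidate items whose cells overlap the query box. This is a broad
  // phase: callers still run an exact test (adjacency, intersection)
  // on the returned candidates.
  query(box) {
    const r = this._cellRange(box);
    const out = new Set();
    for (let x = r.x0; x <= r.x1; x++)
      for (let y = r.y0; y <= r.y1; y++)
        for (let z = r.z0; z <= r.z1; z++)
          for (const item of this.cells.get(`${x},${y},${z}`) || []) out.add(item);
    return [...out];
  }
}
```

The grid narrows candidates but never decides adjacency itself, which is exactly the representation-vs-index split described above: the voxels say what is there, the index says where to look.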

Many thanks for your post; very inspiring and stimulating!