How to Create a 3D Data Texture with Depth Using Slices of Positions and Normals from a Model?

Hi everyone,

I’m currently working on a project where I need to generate a 3D data texture (volumetric texture) from a 3D model in Three.js. The goal is a texture that contains both position and normal data, structured so that it can be used for volumetric rendering in shaders. Here’s what I’m trying to achieve, step by step:

  1. Voxelization of the Model:
  • The 3D model (in a format like glTF or PLY) needs to be voxelized into a grid of defined resolution (e.g., 64x64x64).
  • I aim to sample the position and normal data at each voxel point (rough sketch of what I mean just after this list).
  2. Creating Image Slices:
  • Generate 64 image slices:
    • The first 32 slices should encode position data (XYZ as RGB).
    • The next 32 slices should encode normal data (XYZ as RGB).
  3. Exporting and Using the Texture:
  • Save the slices as PNG images and combine them into a KTX2 texture for efficient use in volumetric rendering.
  • This texture should be usable in a shader to visualize the depth and surface features of the model volumetrically.
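To make step 1 concrete, here is roughly what my sampling pass looks like. This is only a sketch, not working code: `sampleModel` is a hypothetical helper, and I’m assuming a depth resolution of 32 so the result lines up with the 32 + 32 slice layout above. It casts a ray through each (x, y) column of the grid and records the hit point and world-space normal for the voxel containing each hit:

```js
import * as THREE from 'three';

// Grid: 64x64 per slice, 32 depth slices, so that 32 position slices
// + 32 normal slices add up to the 64 images described above.
const GRID = { x: 64, y: 64, z: 32 };

// Hypothetical helper: returns positions[z][i] and normals[z][i]
// (i = y * GRID.x + x), each entry a THREE.Vector3 or null for empty voxels.
function sampleModel(root) {
  root.updateMatrixWorld(true);

  const bbox = new THREE.Box3().setFromObject(root);
  const size = new THREE.Vector3();
  bbox.getSize(size);

  const raycaster = new THREE.Raycaster();
  const dir = new THREE.Vector3(0, 0, 1);
  const origin = new THREE.Vector3();
  const normalMatrix = new THREE.Matrix3();

  const positions = [];
  const normals = [];
  for (let z = 0; z < GRID.z; z++) {
    positions.push(new Array(GRID.x * GRID.y).fill(null));
    normals.push(new Array(GRID.x * GRID.y).fill(null));
  }

  for (let y = 0; y < GRID.y; y++) {
    for (let x = 0; x < GRID.x; x++) {
      // One ray per (x, y) column, marching along +Z through the bounding box.
      origin.set(
        bbox.min.x + ((x + 0.5) / GRID.x) * size.x,
        bbox.min.y + ((y + 0.5) / GRID.y) * size.y,
        bbox.min.z - 1.0
      );
      raycaster.set(origin, dir);

      for (const hit of raycaster.intersectObject(root, true)) {
        if (!hit.face) continue;
        // Map the hit's world-space z to a depth-slice index.
        const z = Math.floor(((hit.point.z - bbox.min.z) / size.z) * GRID.z);
        if (z < 0 || z >= GRID.z) continue;
        const i = y * GRID.x + x;
        if (positions[z][i]) continue; // keep only the first hit per voxel
        positions[z][i] = hit.point.clone();
        // face.normal is object-space; convert it to world space.
        normalMatrix.getNormalMatrix(hit.object.matrixWorld);
        normals[z][i] = hit.face.normal.clone().applyMatrix3(normalMatrix).normalize();
      }
    }
  }

  return { bbox, size, positions, normals };
}
```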

Issues I’m Facing:

  • The voxelization process doesn’t seem to accurately capture the depth and structure of the model in the slices.
  • Positions and normals sometimes appear mixed or incorrectly ordered, resulting in a KTX2 texture that doesn’t visually represent the original 3D model.
  • The output doesn’t resemble the model when used in shaders, and I suspect the slicing process or data encoding might be incorrect.

What I’ve Tried:

  • Used a raycasting approach to sample positions and normals from the model on a regular grid (essentially the sketch above).
  • Attempted to save the slices via THREE.DataTexture and convert them into PNGs for the KTX2 step (my current packing logic is sketched just after this list).
  • Followed online resources, but I’m still struggling to achieve a usable volumetric texture.
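
For reference, here is roughly how I’m packing the sampled data, again only a sketch: `packVolume` is a hypothetical helper that takes the output of `sampleModel` above, and the 0–31 / 32–63 slice split mirrors step 2. I’m building a single THREE.Data3DTexture here for in-browser testing; the per-slice PNG / KTX2 export would read the same buffer slice by slice. If the ordering or the normalization below is wrong, that could explain the scrambled output I’m seeing:

```js
import * as THREE from 'three';

// Pack 32 position slices followed by 32 normal slices into one RGBA volume
// and wrap it in a Data3DTexture for quick inspection.
function packVolume({ bbox, size, positions, normals }) {
  const W = 64, H = 64, D = 64; // 32 position slices + 32 normal slices
  const data = new Uint8Array(W * H * D * 4);

  for (let z = 0; z < 32; z++) {
    for (let i = 0; i < W * H; i++) {
      const p = positions[z][i];
      const n = normals[z][i];

      // Slices 0..31: positions, remapped from the bounding box to [0, 255].
      let o = (z * W * H + i) * 4;
      if (p) {
        data[o + 0] = 255 * (p.x - bbox.min.x) / size.x;
        data[o + 1] = 255 * (p.y - bbox.min.y) / size.y;
        data[o + 2] = 255 * (p.z - bbox.min.z) / size.z;
        data[o + 3] = 255; // alpha marks occupied voxels
      }

      // Slices 32..63: normals, remapped from [-1, 1] to [0, 255].
      o = ((z + 32) * W * H + i) * 4;
      if (n) {
        data[o + 0] = 255 * (n.x * 0.5 + 0.5);
        data[o + 1] = 255 * (n.y * 0.5 + 0.5);
        data[o + 2] = 255 * (n.z * 0.5 + 0.5);
        data[o + 3] = 255;
      }
    }
  }

  const texture = new THREE.Data3DTexture(data, W, H, D);
  texture.format = THREE.RGBAFormat;
  texture.type = THREE.UnsignedByteType;
  texture.minFilter = THREE.LinearFilter;
  texture.magFilter = THREE.LinearFilter;
  texture.needsUpdate = true;
  return texture;
}
```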

What I’m Looking For:

  • Guidance on the best approach to generate a 3D data texture from a 3D model.
  • Suggestions for voxelizing the model and sampling positions and normals efficiently.
  • Help with structuring the slices to correctly represent the depth and features of the model.
  • Any examples, libraries, or code snippets that might help with this process.
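
To make the last point concrete, this is the kind of shader lookup I’m aiming for once the texture is correct. It’s a minimal sketch following the RawShaderMaterial + GLSL3 pattern from the three.js webgl2_materials_texture3d example; `volumeTexture` stands in for the Data3DTexture above (or, eventually, the texture loaded via KTX2Loader), and the `z * 0.5` / `+ 0.5` lookups assume the 32 + 32 slice layout from step 2:

```js
import * as THREE from 'three';

// volumeTexture: the Data3DTexture built above (assumption; swap in the
// KTX2Loader result once the export works). Rendered on a unit cube so the
// local position maps directly to [0, 1] volume coordinates.
const material = new THREE.RawShaderMaterial({
  glslVersion: THREE.GLSL3,
  uniforms: { uVolume: { value: volumeTexture } },
  vertexShader: /* glsl */ `
    in vec3 position;
    uniform mat4 modelViewMatrix;
    uniform mat4 projectionMatrix;
    out vec3 vUvw;
    void main() {
      vUvw = position + 0.5; // unit cube [-0.5, 0.5] -> texture coords [0, 1]
      gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
    }
  `,
  fragmentShader: /* glsl */ `
    precision highp float;
    precision highp sampler3D;
    uniform sampler3D uVolume;
    in vec3 vUvw;
    out vec4 outColor;
    void main() {
      // Position slices occupy z in [0, 0.5), normal slices z in [0.5, 1).
      vec4 pos = texture(uVolume, vec3(vUvw.xy, vUvw.z * 0.5));
      vec4 nrm = texture(uVolume, vec3(vUvw.xy, vUvw.z * 0.5 + 0.5));
      outColor = vec4(nrm.rgb, pos.a); // crude check: normals masked by occupancy
    }
  `,
  side: THREE.BackSide,
  transparent: true,
});

const volumeMesh = new THREE.Mesh(new THREE.BoxGeometry(1, 1, 1), material);
```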

I’d appreciate any advice, code examples, or even just pointers in the right direction. Thanks in advance!