I’m trying to create a 3D texture with depth using KTX2 files for volume rendering in Three.js. I have a KTX2 file that, when extracted using ktx extract --all x.ktx2, produces 64 slices named output_depth*.png, each of size 64x64.
The slices appear to represent some baked model or pattern, and the texture has depth when used as a 3D texture. However, I can’t figure out how to generate similar slices from my own model or texture.
Here’s what I’ve tried so far:
Converted my slices into a KTX2 file using available tools, but the results don’t match the depth or structure of the original KTX2.
Explored various methods to bake my model or texture to generate similar slices, but without success.
I’ve attached the KTX2 file and extracted slices for reference. Can anyone guide me on:
How these slices might have been created?
The exact process or tools I can use to generate slices like this from my model or texture?
Best practices for creating a KTX2 file suitable for 3D textures with depth in Three.js?
I’d greatly appreciate any advice, examples, or resources that could point me in the right direction. Thanks in advance!
Original ktx2 file : peach.ktx2 (817.0 KB)
I’ve used the command below to produce KTX2 3D textures…
… this assumes the file names share a prefix, followed by ascending numbers in the suffix.
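A toktx invocation along these lines (from KTX-Software; file names hypothetical, and not necessarily my exact command) packs numbered 2D slices into a single 3D KTX2 texture, provided the glob expands in ascending order:

toktx --t2 --depth 64 output.ktx2 slice_*.png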
Note, though, that THREE.KTX2Loader doesn’t currently load compressed 3D textures. PRs would be welcome, or you might be able to use array textures instead if those suit your goals.
Thanks, @donmccurdy! I really appreciate your detailed reply and the examples provided.
The issue I’m facing isn’t with creating 3D textures per se, as I’m familiar with converting slices into a KTX2 volume texture. The challenge is the initial step: how to generate those slices from a 3D model so that the resulting KTX2 file accurately represents the model’s shape and appearance.
From what I understand, the KTX2 file I shared uses a texture-based volume rendering approach. The developer who created it seems to have employed a slicing technique to capture depth, and the slices have baked-in information—potentially depth, position, or normals. When I view the provided slices, the images clearly resemble the model’s shape, with features like the Twitter logo (“X”) encoded in them.
In contrast, when I attempt to generate slices, I struggle with these key points:
Generating slices with correct baking information: How do I encode depth, normals, or other data in the slices to match the original model’s representation? Are there specific tools or pipelines to do this?
Replicating the developer’s method: The slices in the shared KTX2 seem to contain rich, meaningful information about the model. How can I replicate this kind of “baking” process to ensure my KTX2 textures capture the model accurately?
For reference, you can view the slices of the KTX2 file I mentioned here: developer slices. The pattern and information baked into each slice are critical, but I’m not sure how to achieve this effect with my current methods.
Any guidance or pointers to tools, techniques, or articles about slicing 3D models for texture-based volume rendering would be incredibly helpful. This would not only help me but could also benefit others exploring similar workflows. Thank you again!
I can say that the texture contains pixels encoded under the model KHR_DF_MODEL_RGBSDA, representing red, green, blue, stencil, depth, and alpha. There’s basic information in the KTX2 spec explaining these channels.
I’m not aware of resources explaining how to produce a 3D texture like this from (for example) a 3D model. The ktx-parse library might be of some help either in deconstructing your existing KTX2 file, or constructing a new one given the proper RGBSDA pixel data. But in terms of generating the RGBSDA pixel data, you may need to dig into that one channel at a time.
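For instance, a minimal sketch with ktx-parse that reads a KTX2 file and prints its dimensions, format, and Data Format Descriptor (the file name here is just an example):

import { read } from 'ktx-parse';
import { readFileSync } from 'fs';

// Parse the KTX2 container from raw bytes.
const container = read(new Uint8Array(readFileSync('./peach.ktx2')));

console.log('dimensions:', container.pixelWidth, container.pixelHeight, container.pixelDepth);
console.log('vkFormat:', container.vkFormat);
console.log('mip levels:', container.levels.length);

// Each sample in the basic DFD block describes one channel; for
// RGBSDA that's red, green, blue, stencil, depth, and alpha.
console.log(JSON.stringify(container.dataFormatDescriptor, null, 2));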
Thank you so much for your detailed explanation and the reference link. It’s very helpful to know about the KHR_DF_MODEL_RGBSDA model and its channels.
I’d like to ask if you could provide an example of how to “dig into one channel at a time” using a GLB model and generate slices from it. I’m trying to figure out how to extract depth and other channel data to recreate the volumetric texture. I suspect they might have used Blender or a similar tool to slice the model along its depth, baking the information for each slice. However, I’m not sure how this was done or how to proceed with encoding the RGBSDA channels.
If you have any example code, steps, or resources on generating these slices or handling the RGBSDA pixel data, I’d greatly appreciate it.
@donmccurdy From what I gather, the 3D texture is essentially a stack of 2D textures (slices) that together create a volumetric representation. Each slice represents a cross-section of the model along a specific axis, and the pixel data includes depth and other information.
@donmccurdy Not sure if I did exactly what you suggested about digging into each RGBA channel—please correct me if anything doesn’t make sense.
import * as THREE from 'three';
import { KTX2Container, write } from 'ktx-parse';
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';
import { readFileSync, writeFileSync } from 'fs';

// GLTFLoader expects a browser-like global scope when run under Node.
global.self = global;

const modelBuffer = readFileSync('./public/model/ghost4.glb');

// Parse the GLB model, passing only this file's bytes as an ArrayBuffer.
const loader = new GLTFLoader();
loader.parse(
  modelBuffer.buffer.slice(modelBuffer.byteOffset, modelBuffer.byteOffset + modelBuffer.byteLength),
  '',
  (gltf) => {
    voxelizeModel(gltf.scene);
  },
  (error) => {
    console.error('Error loading GLB model:', error);
  }
);

// Build a gridSize^3 voxel grid spanning the model's bounding box.
function voxelizeModel(model) {
  const voxelGrid = [];
  const gridSize = 64;
  const box = new THREE.Box3().setFromObject(model);
  const size = new THREE.Vector3();
  box.getSize(size);

  for (let x = 0; x < gridSize; x++) {
    for (let y = 0; y < gridSize; y++) {
      for (let z = 0; z < gridSize; z++) {
        voxelGrid.push({
          position: new THREE.Vector3(
            box.min.x + (x / gridSize) * size.x,
            box.min.y + (y / gridSize) * size.y,
            box.min.z + (z / gridSize) * size.z
          ),
          color: new THREE.Color(),
        });
      }
    }
  }

  generatePixelData(voxelGrid, gridSize, box, size);
}

// Generate RGBA pixel data for KTX2, one slice per layer of the grid.
// Positions are normalized against the bounding box so each channel
// fits into 0–255.
function generatePixelData(voxelGrid, gridSize, box, size) {
  const slices = [];
  for (let sliceIndex = 0; sliceIndex < gridSize; sliceIndex++) {
    const slice = new Uint8Array(gridSize * gridSize * 4); // RGBA data
    for (let i = 0; i < gridSize; i++) {
      for (let j = 0; j < gridSize; j++) {
        const voxel = voxelGrid[sliceIndex * gridSize * gridSize + i * gridSize + j];
        const offset = (i * gridSize + j) * 4;
        slice[offset + 0] = ((voxel.position.x - box.min.x) / size.x) * 255; // R
        slice[offset + 1] = ((voxel.position.y - box.min.y) / size.y) * 255; // G
        slice[offset + 2] = ((voxel.position.z - box.min.z) / size.z) * 255; // B
        slice[offset + 3] = 255; // A
      }
    }
    slices.push(slice);
  }
  createKTX2File(slices, gridSize);
}

// Write the slices into a KTX2 container, pushing each slice as a level.
function createKTX2File(slices, gridSize) {
  const ktx2Container = new KTX2Container();
  ktx2Container.pixelWidth = gridSize;
  ktx2Container.pixelHeight = gridSize;
  ktx2Container.pixelDepth = gridSize;
  ktx2Container.vkFormat = KTX2Container.VK_FORMAT_R8G8B8A8_UNORM;
  ktx2Container.levelCount = 1;

  for (const slice of slices) {
    ktx2Container.levels.push({
      levelData: slice,
      uncompressedByteLength: slice.byteLength,
    });
  }

  ktx2Container.dataFormatDescriptor = [
    {
      vendorId: 0,
      descriptorType: 0,
      versionNumber: 1,
      descriptorBlockSize: 24,
      bytesPlane0: gridSize * gridSize * 4,
    },
  ];

  const ktx2File = write(ktx2Container);
  writeFileSync('output.ktx2', Buffer.from(ktx2File));
  console.log('KTX2 file created successfully!');
}
I managed to generate the KTX2 file, but when I run the command:
ktx extract --all output3.ktx2 ./textures
I encounter the following errors:
error-3016: Too many mip levels.
levelCount is 64 but for the largest image dimension which is 64 it is too many level.
error-6027: Invalid sample count in basic DFD block. The sample count must be non-zero for non-supercompressed textures with VK_FORMAT_UNDEFINED.
DFD block #1 sample count in basic DFD block is 0 but non-supercompressed VK_FORMAT_UNDEFINED textures must have sample information.
error-6023: Invalid bytesPlane0 in basic DFD block. BytesPlane0 must be non-zero for non-supercompressed VK_FORMAT_UNDEFINED textures.
DFD block #1 bytesPlane0 in basic DFD block is 0 but it must be non-zero for non-supercompressed VK_FORMAT_UNDEFINED textures.
I’m not aware of examples or resources to help create a 3D texture of this sort, sorry.
One note is that “levels” in the KTX2 file are mipmap levels, not slices of a 3D texture. I believe the entire 3D texture should be stored in a single level, represented as a W × H × D array. THREE.KTX2Exporter can write a THREE.Data3DTexture to KTX2, which might be helpful either for reuse or as an example:
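A minimal sketch of that approach, assuming RGBA/UnsignedByteType voxel data like your script produces (the fill step is left as a placeholder, and exact API details may vary by three.js version):

import * as THREE from 'three';
import { KTX2Exporter } from 'three/examples/jsm/exporters/KTX2Exporter.js';
import { writeFileSync } from 'fs';

// The whole 3D texture lives in one mip level, laid out as a
// width * height * depth array of RGBA texels.
const gridSize = 64;
const data = new Uint8Array(gridSize * gridSize * gridSize * 4);
// ... fill `data`: index = ((z * gridSize + y) * gridSize + x) * 4 ...

const texture = new THREE.Data3DTexture(data, gridSize, gridSize, gridSize);
texture.format = THREE.RGBAFormat;
texture.type = THREE.UnsignedByteType;
texture.needsUpdate = true;

// KTX2Exporter returns the serialized KTX2 file as a Uint8Array.
const ktx2Bytes = new KTX2Exporter().parse(texture);
writeFileSync('output.ktx2', ktx2Bytes);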
They’re using VDB files, rather than models, as the source of the textures that form the particles into shapes. The “x” you’re seeing in my first post is data extracted from one of those files.
I understand it’s not easy to replicate, but I really appreciate your help. Thanks again!