How can I convert a glTF 3D model to particles properly?

I am just a beginner with three.js and react-three-fiber, so I may make some basic mistakes. Sorry if I do.

When I retrieved an object from the scene using getObjectByName, the object was not a Mesh; its children were the Mesh objects instead.

Additionally, when I tried to convert the first Mesh to particles by extracting its positions into a BufferGeometry, the result looked like only a part of the whole model.

So I decided to merge all the mesh geometries, but another issue occurred: the positions of the merged geometry were NaN.

Here is the code I wrote.

// particle3D component
"use client";

import { MeshSurfaceSampler } from 'three/addons/math/MeshSurfaceSampler.js';
import * as BGU from 'three/addons/utils/BufferGeometryUtils.js';
import { useGLTF } from '@react-three/drei';
import * as THREE from 'three';
import { useEffect, useState } from 'react';

export default function Particle3D({ url, particleCount = 100000 }: { url: string; particleCount?: number }) {
  const { scene } = useGLTF(url);
  const [geometry, setGeometry] = useState<THREE.BufferGeometry | null>(null);

  useEffect(() => {
    const geometries: THREE.BufferGeometry[] = [];

    const obj = scene.getObjectByName("3d-modelobj");

    obj?.children.forEach((child) => {
      if (child instanceof THREE.Mesh) {
        geometries.push(child.geometry);
      }
    });

    const combinedGeometry = BGU.mergeGeometries(geometries, false);

    const sampler = new MeshSurfaceSampler(new THREE.Mesh(combinedGeometry)).build();

    const positions = new Float32Array(particleCount * 3);
    const temp = new THREE.Vector3();

    for (let i = 0; i < particleCount; i++) {
      sampler.sample(temp);
      positions[i * 3] = temp.x;
      positions[i * 3 + 1] = temp.y;
      positions[i * 3 + 2] = temp.z;
    }

    const geo = new THREE.BufferGeometry();
    geo.setAttribute("position", new THREE.BufferAttribute(positions, 3));

    const center = new THREE.Vector3();
    geo.computeBoundingBox(); 
    geo.boundingBox?.getCenter(center);
    geo.translate(-center.x, -center.y, -center.z);

    setGeometry(geo);
  }, [scene, particleCount]);

  if (!geometry) return null;

  return (
    <points geometry={geometry} position={[0, 0, 0]} rotation={[-Math.PI / 10, 0, 0]}>
      <pointsMaterial size={0.01} color="white" sizeAttenuation />
    </points>
  );
}

I confirmed that the glTF model can be loaded with a primitive component, so the URL path is also correct.

Take a look here

Thank you for replying.
Certainly, the code you mentioned works, but it renders only a part of the model. I think this is because a single child of the scene, even if it is a Mesh, is not the entire model.
When I extract the object from the scene with getObjectByName, it is not a Mesh; it probably acts as a group that holds the parts of the model.
That's why I tried to merge all the Mesh children of that object.

Or maybe the structure of the glTF model is different in my case?

Instead of forEach, try using the traverse callback, which visits all descendants.

scene.updateMatrixWorld(true); // make sure matrixWorld is up to date

obj.traverse((child) => {
  if (child instanceof THREE.Mesh) {
    // Bake the mesh's world transform into the cloned geometry.
    const cloned = child.geometry.clone();
    cloned.applyMatrix4(child.matrixWorld);
    geometries.push(cloned);
  }
});
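For context on why this matters: `children` is only one level deep, while `traverse` visits every descendant, so deeply nested meshes get picked up too. A minimal sketch of the idea in plain TypeScript (no three.js; the `Node` type here is just a stand-in for `Object3D`):

```typescript
interface Node { name: string; children: Node[] }

// Visit every descendant depth-first, the way Object3D.traverse does.
function traverse(node: Node, cb: (n: Node) => void): void {
  cb(node);
  node.children.forEach((c) => traverse(c, cb));
}

const root: Node = {
  name: "root",
  children: [
    { name: "group", children: [{ name: "mesh-deep", children: [] }] },
    { name: "mesh-top", children: [] },
  ],
};

const visited: string[] = [];
traverse(root, (n) => visited.push(n.name));
// visits root, group, mesh-deep, mesh-top — root.children.forEach
// alone would never reach "mesh-deep".
```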

Thank you for replying again. I appreciate your patience.
After I examined the code and checked which geometry had NaN values in its position attribute, I realized that merging the geometries had failed. This may be because the geometries' attributes are not exactly the same.
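As a side note, a quick way to locate bad values is to scan the position array for the first NaN. A small helper is enough (the name `findFirstNaN` is hypothetical, not a three.js API):

```typescript
// Return the index of the first NaN in a numeric array, or -1 if none.
function findFirstNaN(arr: ArrayLike<number>): number {
  for (let i = 0; i < arr.length; i++) {
    if (Number.isNaN(arr[i])) return i;
  }
  return -1;
}

// A merge that failed can leave NaN in the output positions:
const good = new Float32Array([0, 1, 2]);
const bad = new Float32Array([0, NaN, 2]);
console.log(findFirstNaN(good)); // -1
console.log(findFirstNaN(bad));  // 1
```

Running this over each source geometry's `attributes.position.array` narrows down which mesh (or which merge step) introduced the NaN.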

I struggled to combine the geometries, so as a workaround I tried combining only the positions, like this.

"use client";

import { useGLTF } from '@react-three/drei';
import * as THREE from 'three';
import { useEffect, useState } from 'react';

export default function Particle3D({ url, particleCount = 100000 }: { url: string; particleCount?: number }) {
  const { scene } = useGLTF(url);
  const [geometry, setGeometry] = useState<THREE.BufferGeometry | null>(null);


  useEffect(() => {
    const positionArray: number[] = [];

    const obj = scene.getObjectByName("3d-modelobj");

    obj?.children.forEach((child) => {
      if (child instanceof THREE.Mesh) {
        const cloned = child.geometry.clone();
        const arr = cloned.attributes.position.array as Float32Array;
        positionArray.push(...arr);
      }
    });

    const sparklesGeometry = new THREE.BufferGeometry();
    sparklesGeometry.setAttribute("position", new THREE.Float32BufferAttribute(positionArray, 3));

    sparklesGeometry.computeBoundingBox();
    sparklesGeometry.center();

    console.log(sparklesGeometry.attributes.position.array);

    setGeometry(sparklesGeometry);
  }, [scene, particleCount]);

  if (!geometry) return null;

  
  return (
    <points geometry={geometry} scale={0.01} >
      <pointsMaterial size={0.1} color="white" />
    </points>
  );
}

The 3D model appears, but the particles don't spread evenly.
The question of how to merge the geometries properly remains.
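One way to sidestep the merge entirely: sample each mesh with its own MeshSurfaceSampler and concatenate the sampled positions, splitting the particle budget across meshes in proportion to their surface areas so the points spread evenly. The area-weighted split can be sketched in plain TypeScript (the helper name `splitBudget` is hypothetical; the weights would come from the actual geometries' surface areas):

```typescript
// Split `total` samples across meshes proportionally to their weights
// (e.g. surface areas), so small meshes don't get oversampled.
function splitBudget(weights: number[], total: number): number[] {
  const sum = weights.reduce((a, b) => a + b, 0);
  const counts = weights.map((w) => Math.floor((w / sum) * total));
  // Hand out the rounding remainder one sample at a time.
  let used = counts.reduce((a, b) => a + b, 0);
  for (let i = 0; used < total; i = (i + 1) % counts.length, used++) {
    counts[i]++;
  }
  return counts;
}

console.log(splitBudget([1, 1, 2], 100)); // [25, 25, 50]
```

With a per-mesh sample count in hand, each sampler's output can be written into one shared Float32Array, which also avoids mismatched-attribute merge failures since only positions are ever combined.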

Thank you for advising.
If you have knowledge about this and don't mind, please tell me :slight_smile: