Model detection problem

Hello, everyone.
I’ve been working on a small-scale project that displays various cabinet types in a room using R3F (React Three Fiber). The project includes a sidebar on the right side, where users can drag and drop cabinet images in sidebar into the 3D scene. When an image is dropped, the corresponding cabinet model should appear in the scene. However, if there’s already an existing model in that spot, the new one shouldn’t be placed to avoid overlap.

I’ve set up raycasting to detect where the new model should be placed, but I’m running into an issue. Specifically, I’m able to detect all models when the ray is cast from the mouse drop point, but only certain models are detected by the raycasters placed at the left and right sides of the new cabinet being dropped. These side raycasters aren’t detecting models as expected, causing models to overlap in the scene.

To be more precise, I’ve set up three raycasters: one at the mouse drop point and two on the left and right sides of the cabinet being dropped. The raycaster at the drop point works as expected, detecting all objects, but the others (left and right) fail to detect models. As a result, the new cabinet can overlap existing ones.

If you’ve encountered a similar issue or have any suggestions, I’d really appreciate your insights. If my approach isn’t suitable, please suggest a better way to detect models. Thank you in advance!

Hi, I can’t be sure that this will work for your case, but if the primary raycast works well you could try the raycaster threshold (link to docs); with a threshold big enough, maybe it will solve your issue.

If this doesn’t work, you can try calculating a collision between the old objects and the new one, even using a simplified collider, or checking all the distances and not adding the new one if it is too close.
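The distance-check idea above can be sketched without any library at all. This is a minimal illustration (all names are made up for the example); in three.js you would compare `THREE.Vector3` positions with `distanceTo()` instead of computing the norm by hand:

```javascript
// Skip placement when the drop point is within minDistance of the center
// of any already-placed object. Centers are plain {x, y, z} objects here.
function canPlace(dropPoint, placedCenters, minDistance) {
  return placedCenters.every((c) => {
    const dx = c.x - dropPoint.x;
    const dy = c.y - dropPoint.y;
    const dz = c.z - dropPoint.z;
    return Math.sqrt(dx * dx + dy * dy + dz * dz) >= minDistance;
  });
}
```

Note that a pure center-distance test ignores object size, so `minDistance` has to be tuned to roughly the cabinet width.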

Your approach sounds reasonable. My guess is that the raycasters for left/right might not be set up correctly. If you post some code here we might be able to triage it.

For the drop point I presume you are casting from mouse cursor to floor?

and then for left/right you are casting rays from floor + some height offset, to a direction of + or - X ?
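If that is the setup, a side ray boils down to a ray along +X or −X tested against an axis-aligned box, which is essentially what three.js does internally in `Ray.intersectBox`. A library-free sketch of that test (function and box shapes are illustrative, not from the original code):

```javascript
// Test whether a ray starting at `origin` and travelling along the X axis
// (dirX is +1 or -1) hits an axis-aligned box within maxDist.
function rayHitsBoxX(origin, dirX, box, maxDist) {
  // Since the ray only moves in X, it can hit the box only if the origin's
  // Y and Z already lie inside the box's Y/Z extents.
  if (origin.y < box.min.y || origin.y > box.max.y) return false;
  if (origin.z < box.min.z || origin.z > box.max.z) return false;
  const t1 = (box.min.x - origin.x) / dirX;
  const t2 = (box.max.x - origin.x) / dirX;
  const tNear = Math.min(t1, t2);
  const tFar = Math.max(t1, t2);
  return tFar >= 0 && tNear <= maxDist;
}
```

One thing this makes obvious: if the side-ray origin's height is above or below a model's bounding box, the ray misses it entirely, which is a common reason side rays detect only some models.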

Thank you very much for your reply. I thought like you at first but I have tested so many and there was this kind of error for two models only. Not all. I post my base code here. I expect your answer. Thank you!

The checks for intersects.length<3 are suspicious.

The "validObjects = scene.children.filter" line looks promising, but then you’re not raycasting against just that set of objects… which is why you have that extra logic of num hits < 3.

Instead I would keep an array of placed objects and cast explicitly against that.
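The bookkeeping for that is tiny. A plain-JS sketch of the registry (names are illustrative); in three.js you would then pass this array straight to `raycaster.intersectObjects(placedObjects, true)` instead of casting against `scene.children` and filtering hits afterwards:

```javascript
// Keep an explicit list of placed cabinets; add on drop, remove on delete.
const placedObjects = [];

function registerPlaced(obj) {
  placedObjects.push(obj);
}

function unregisterPlaced(obj) {
  const i = placedObjects.indexOf(obj);
  if (i !== -1) placedObjects.splice(i, 1);
}
```

With this, any hit at all means the spot is occupied, so no `intersects.length < 3` heuristic is needed.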

SetCorrectRotPos looks suspicious as well… since it appears to return a single float? and then you are perturbing the ray with that somehow? I’m not clear on how that works with the camera potentially being at any orientation…

I also see that you’re raycasting from the camera each time.
I was imagining a scheme where you cast from camera to ground first…

If that hit is clear… then cast left / right from that intersection point… to make sure its clear.
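A library-free sketch of that two-phase test, assuming the floor is the XZ plane (box format and helper names are made up for the example): check the floor hit point itself, then two probe points offset left/right by half the new cabinet’s width, against the bounding boxes of the placed objects.

```javascript
// 2D point-in-box test on the floor plane (only X and Z matter here).
function pointInBox(p, box) {
  return p.x >= box.min.x && p.x <= box.max.x &&
         p.z >= box.min.z && p.z <= box.max.z;
}

// The drop is clear when neither the hit point nor its two side probes
// fall inside any placed object's footprint box.
function dropIsClear(hitPoint, halfWidth, placedBoxes) {
  const probes = [
    hitPoint,
    { x: hitPoint.x - halfWidth, z: hitPoint.z },
    { x: hitPoint.x + halfWidth, z: hitPoint.z },
  ];
  return probes.every((p) => placedBoxes.every((b) => !pointInBox(p, b)));
}
```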

These schemes can kinda work but will fail for instance with a table that is on skinny legs.

Perhaps a better approach is to keep an array of Mesh(BoxGeometry) the size of each placed object, and use those box meshes as your raycasting targets… it will be more precise, and faster as well since you’re only casting against a simple box.
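The same box-proxy idea can be done with an overlap test instead of rays, which avoids the probe-point gaps entirely. A plain-JS sketch (box shapes are illustrative); in three.js, `Box3.setFromObject()` builds the box for a placed model and `Box3.intersectsBox()` performs this exact test:

```javascript
// Standard axis-aligned box overlap: boxes overlap iff their intervals
// overlap on all three axes.
function boxesOverlap(a, b) {
  return a.min.x <= b.max.x && a.max.x >= b.min.x &&
         a.min.y <= b.max.y && a.max.y >= b.min.y &&
         a.min.z <= b.max.z && a.max.z >= b.min.z;
}

// Allow the drop only when the new cabinet's box misses every placed box.
function canDrop(newBox, placedBoxes) {
  return placedBoxes.every((b) => !boxesOverlap(newBox, b));
}
```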

If you want a super robust solution, you might also consider using three-mesh-bvh on your scene, and then performing a box “shapecast” against the scene.

I tested with a collider, but I have another problem! The collision event callback doesn’t fire when the user drops the model. I use a map function to add the models, and each item has a collision event handler. Here is the base code:

<Select enabled={clickOutline[modelIndex]}>
  <group onClick={handleClick} {...props}>
    <RigidBody canSleep={false} type="kinematicPosition" ref={pointer} />
    <RigidBody
      ref={obj}
      {...(id != selectedID ? { type: "fixed" } : { type: "dynamic" })}
      ccd
      canSleep={false}
      colliders={false}
      enabledRotations={[false, false, false]}
      onCollisionEnter={() => {
        // if (isDropped === true) {
        console.log("collision is entered");
        setDroppedModel(props.droppedModel.filter((item: any) => item.id !== draggedModel));
        setDraggedModel("");
        setIsDropped(false);
        // }
      }}
      restitution={0.1}
      friction={0.5}
      angularDamping={0.9}
      linearDamping={0.9}
    >
      {children}
    </RigidBody>
  </group>
</Select>

Thanks manthrax. One thing I’m concerned about is that it fails on only some models. In my case, with the current logic, this only happens on 2 out of 11 models. What can you deduce from that? Here is a reference video.

Sorry I’m not really an r3f person.

That syntax doesn’t really make much sense to me…
I see one “rigidbody” being declared there, but empty… followed by another rigidbody… but either the syntax coloring got confused, or the code is malformed.
Not sure what “physics” library is being employed there… or how it relates to threejs raycasting or really anything about the setup behind what you’re using.

Yes, that’s right. There is an empty RigidBody in front of the main RigidBody. It was for drag and drop, but it doesn’t make sense right now since we’re using PivotController.
Let’s ignore that RigidBody. Do you think it could be the reason?

If I were doing this in vanilla, I would either use an array of boxes representing each object (not actually added to the scene, just for raycasting against) and raycast against that…

Or I would throw everything into three-mesh-bvh and use the box shapeCast to see if a box is colliding.

https://gkjohnson.github.io/three-mesh-bvh/example/bundle/shapecast.html

You could also try asking for help in one of the r3f discords:


If the problem is only with some models, maybe the issue is in their geometry or in the way the meshes are organized in those specific files. If you are able to edit the models, I suggest adding something there directly to use as a collider: it could be an invisible cube, slightly larger than the object, placed directly at the root of the tree. That way you have something that is identical across all the models and can rule out other problems.


Thank you for your input. I’ve considered your perspective, but I believe the proposed solution isn’t quite right for this problem. The issue lies in how the model is detected when raycasting from the dropped point. In reality, the raycaster at the dropped position does detect the model, but the raycasts checking the boundaries on both sides of the model miss the existing model.
The raycast from the dropped point is based on the mouse event, but the side raycasts require manually setting the ray origin and direction for the side positions.
I will wait for your advice.
Thank you very much!