How to not use a transparent plane during mouse events?

I want to improve the performance of my app, and while looking for tips on how to do so, I found many recommendations to reduce the amount of transparent materials in my scene. My scene is full of transparent planes, which I use to catch the raycast coordinates during mouse events. For example, I have a few geometries that I want to be able to click and drag around the floor; in order to do that, I raycast onto a transparent plane and move the object to that position. Other times I want my objects to move relative to the camera, so I use a transparent sprite in the same way.

Here is a simplified example of what I mean:

const DraggableCube = (props) => {
      const [isDragging, setIsDragging] = useState(false)
      const cubeRef = useRef()
      const planeRef = useRef()

      const onPointerMove = (e) => {
           const point = raycaster.intersectObject(planeRef.current)[0].point
           cubeRef.current.position.copy(point)
           //...
      }

      //...

      return (
         <>
           <Plane ref={planeRef} onPointerMove={onPointerMove} ... >
             <meshBasicMaterial transparent opacity={0.0} depthWrite={false}/>
           </Plane>
           <Cube ref={cubeRef}  ... />
         </>
      )
}

Is there a way to do this without the intermediate plane? Or something that would be more performant and wouldn't require these transparent planes that I don't even need to see in my scene? My main goal here is to improve performance: this solution works well, but when my geometries become numerous it gets slow. Or would you say the performance impact of these planes is negligible and I should just focus on something else? Any help is appreciated, thank you!

Instead of a transparent material, try using just visible={false}. Setting transparent={true} and opacity={0.0} will still trigger the shading step (with each pixel being evaluated and calculated as usual in the shaders), only with the outcome being an empty pixel.

In comparison - visible={false} will just remove that object from the rendering entirely, so it will never reach the GPU, never feel the warmth of a shader’s touch (raycasting will still work, since it’s done on the CPU and doesn’t care if the object is visible or not.)
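In the snippet from the question, that would just mean hiding the plane itself instead of tweaking the material. A minimal sketch based on that snippet:

// the renderer skips the plane entirely, but R3F pointer events
// (CPU raycasting) still hit it
<Plane ref={planeRef} visible={false} onPointerMove={onPointerMove} />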

Plus - as a general optimisation - try to restructure your code so that there's only one of these intersection planes at a given time. There's only one mouse cursor; in most cases it won't be in multiple places at once.
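For example, a single hidden plane at the root of the scene could forward its intersection point to whichever object is currently being dragged. A rough sketch (DragPlane and activeObject are made-up names for illustration):

// one shared, invisible intersection plane for the whole scene
const DragPlane = ({ onDrag }) => (
  <Plane
    args={[100, 100]}
    rotation={[-Math.PI / 2, 0, 0]}
    visible={false}
    onPointerMove={(e) => onDrag(e.point)}
  />
)

// usage: the parent tracks which object is being dragged and moves only that one
<DragPlane onDrag={(point) => activeObject.current?.position.copy(point)} />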


The soul of a poet. Chapeau!


why do you use transparent planes at all i wonder. it's just <Cube onPointerMove={…} />, given that Cube spreads its props over its first mesh or group. you don't need to intersect or use the raycaster, pointer events are built in for everything. you also don't need transparent planes to move stuff around. tbh i don't understand what this is all for, could you explain?

like, this code makes no sense

      const onPointerMove = (e) => {
           const point = raycaster.intersectObject(planeRef.current)[0].point
           cubeRef.current.position.copy(point)
           //...
      }

      return <Plane ref={planeRef} onPointerMove={onPointerMove} ... >

onPointerMove already raycasts for you; you end up in the callback when the mouse moves over the mesh. Why would you raycast again to fetch the point when you already know the mouse is over it and you already have the point?

<Plane onPointerMove={(e) => ref.current.position.copy(e.point)} ... >

a wild guess, you are using planes because once you move the cursor off the model you lose the events for dragging. but this is what pointer capture is for: Events - React Three Fiber. once you have captured the pointer it can even leave the window, the object will still receive events.
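something along these lines (a sketch of the pointer-capture pattern from the linked docs, applied to the cube from the question):

// capture the pointer on the cube itself on pointer down; until it is
// released, the cube keeps receiving pointer events even if the cursor
// leaves the mesh (or the window)
<Cube
  onPointerDown={(e) => e.target.setPointerCapture(e.pointerId)}
  onPointerUp={(e) => e.target.releasePointerCapture(e.pointerId)}
  onPointerMove={(e) => {
    // move the object here while the pointer is captured
  }}
/>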

this might help, dragging stuff around on a grid https://codesandbox.io/p/sandbox/ecstatic-resonance-r36rg4 it also shows you how to use instancing, so no matter how many cubes there are in this example (could be 100,000), it's one single draw call.
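for reference, the gist of instancing is one instancedMesh whose per-instance transforms live in a single matrix buffer, so any number of cubes is still one draw call. a minimal sketch (not the code from the sandbox; the layout here is arbitrary):

import * as THREE from 'three'
import { useEffect, useMemo, useRef } from 'react'

const Cubes = ({ count = 1000 }) => {
  const ref = useRef()
  const dummy = useMemo(() => new THREE.Object3D(), [])

  useEffect(() => {
    // write one matrix per instance into the shared buffer
    for (let i = 0; i < count; i++) {
      dummy.position.set((i % 100) - 50, 0, Math.floor(i / 100) - 50)
      dummy.updateMatrix()
      ref.current.setMatrixAt(i, dummy.matrix)
    }
    ref.current.instanceMatrix.needsUpdate = true
  }, [count, dummy])

  return (
    <instancedMesh ref={ref} args={[undefined, undefined, count]}>
      <boxGeometry />
      <meshStandardMaterial />
    </instancedMesh>
  )
}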


Thank you so much! Wow, you're right, there's no need to raycast again. My bad. I had that because I was originally capturing onPointerClick on the cube and then capturing the movement in the whole window, but I changed the code a bit and didn't realize the raycasting had become obsolete.

The reason I was using the planes was that I want to move my object along that plane, as if it were glued to it. Raycasting into a plane seemed like the simplest way of getting the coordinates I needed, especially because I want to be able to drag on different planes, not just ones that are parallel to the xz plane. But looking at your grid example I see there might be alternatives, though I need more precision than a grid. And thank you for the pointer capture recommendation, that's super useful and I had no idea it existed, it'll definitely help! Thanks again!
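(For anyone finding this later: the drag plane doesn't even have to be a mesh. Combined with the pointer capture above on the dragged object, the event's ray can be intersected with a mathematical THREE.Plane directly; a rough sketch, where dragPlane is an arbitrary, made-up example plane:)

// no helper mesh at all: intersect the event's ray with a math plane
const dragPlane = new THREE.Plane(new THREE.Vector3(0, 1, 1).normalize(), 0)
const target = new THREE.Vector3()

const onPointerMove = (e) => {
  // intersectPlane returns null if the ray doesn't hit the plane
  if (e.ray.intersectPlane(dragPlane, target)) {
    cubeRef.current.position.copy(target)
  }
}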