Mouse Event Optimisation

Hey, guys.

To solve this problem, I made a small implementation that calculates the distance between two points. When I switch on the measurement mode, a marker appears on the surface of the model and follows it wherever the raycast hits the model. I would like to know how to optimise this process further, as it is very costly.
For example, this is the model without the measurement tool: 240 fps, with 71k triangles.

But when I move the mouse over the model, the frame rate drops to 80-160 fps.

For optimisation ideas I first looked at the open-source 3dViewer project. I suspect my slowdown may be caused by the React component architecture, which adds a layer of abstraction compared to vanilla JS.

Nevertheless, here is my version of how I did it.

import React, {
	useRef,
	useLayoutEffect,
	useState,
	useCallback,
	useMemo,
} from 'react'
import {
	Vector3,
	Mesh,
	Box3,
	Sphere,
	Object3D,
	Intersection,
	Matrix4,
	Plane,
	Group,
} from 'three'

import { useThree, useFrame } from '@react-three/fiber'
import { Text } from '@react-three/drei'


const RAYCAST_THROTTLE = 16 // ~60fps
const BIG_EPS = 1e-6
const MARKER_SEGMENTS = 32
const LINE_THRESHOLD = 0.01

interface MeasurementPoint {
	id: string
	position: Vector3
	normal: Vector3
	intersection: Intersection
	timestamp: number
}

interface MeasurementResult {
	point1: MeasurementPoint
	point2: MeasurementPoint
	pointsDistance: number
	parallelFacesDistance: number | null
	facesAngle: number
}

interface MeasurementToolProps {
	measureMode: boolean
	modelRef: React.RefObject<Object3D>
}

// Utility functions for the measurement calculations
const isEqualEps = (a: number, b: number, eps: number = BIG_EPS): boolean => {
	return Math.abs(a - b) < eps
}

const getFaceWorldNormal = (intersection: Intersection): Vector3 => {
	const normalMatrix = new Matrix4()
	intersection.object.updateWorldMatrix(true, false)
	normalMatrix.extractRotation(intersection.object.matrixWorld)

	const faceNormal = intersection.face?.normal?.clone() || new Vector3(0, 1, 0)
	faceNormal.applyMatrix4(normalMatrix)
	faceNormal.normalize()

	return faceNormal
}

const calculateMeasurementValues = (
	point1: MeasurementPoint,
	point2: MeasurementPoint
): {
	pointsDistance: number
	parallelFacesDistance: number | null
	facesAngle: number
} => {
	const pointsDistance = point1.position.distanceTo(point2.position)
	const facesAngle = point1.normal.angleTo(point2.normal)

	let parallelFacesDistance: number | null = null

	// Check for parallel faces (angle ~0° or ~180°)
	if (
		isEqualEps(facesAngle, 0.0, BIG_EPS) ||
		isEqualEps(facesAngle, Math.PI, BIG_EPS)
	) {
		const plane = new Plane().setFromNormalAndCoplanarPoint(
			point1.normal,
			point1.position
		)
		parallelFacesDistance = Math.abs(plane.distanceToPoint(point2.position))
	}

	return {
		pointsDistance,
		parallelFacesDistance,
		facesAngle,
	}
}

// Marker component (optimized)
const MeasurementMarker = React.memo(
	({
		position,
		normal,
		radius = 0.05,
		color = 0xff0000,
		opacity = 1,
	}: {
		position: Vector3
		normal: Vector3
		radius?: number
		color?: number
		opacity?: number
	}) => {
		const groupRef = useRef<Group>(null!)

		// Create the geometry only once
		const circleGeometry = useMemo(() => {
			const points = []
			for (let i = 0; i <= MARKER_SEGMENTS; i++) {
				const angle = (i / MARKER_SEGMENTS) * Math.PI * 2
				points.push(Math.cos(angle) * radius, Math.sin(angle) * radius, 0)
			}
			return new Float32Array(points)
		}, [radius])

		const crossGeometry = useMemo(
			() => ({
				horizontal: new Float32Array([-radius, 0, 0, radius, 0, 0]),
				vertical: new Float32Array([0, -radius, 0, 0, radius, 0]),
			}),
			[radius]
		)

		useLayoutEffect(() => {
			if (groupRef.current && normal) {
				groupRef.current.position.copy(position)
				groupRef.current.lookAt(position.clone().add(normal))
			}
		}, [position, normal])

		return (
			<group ref={groupRef}>
				<line>
					<bufferGeometry>
						<bufferAttribute
							attach='attributes-position'
							args={[circleGeometry, 3]}
						/>
					</bufferGeometry>
					<lineBasicMaterial
						color={color}
						transparent={opacity < 1}
						opacity={opacity}
						depthTest={false}
					/>
				</line>
				<line>
					<bufferGeometry>
						<bufferAttribute
							attach='attributes-position'
							args={[crossGeometry.horizontal, 3]}
						/>
					</bufferGeometry>
					<lineBasicMaterial
						color={color}
						transparent={opacity < 1}
						opacity={opacity}
						depthTest={false}
					/>
				</line>
				<line>
					<bufferGeometry>
						<bufferAttribute
							attach='attributes-position'
							args={[crossGeometry.vertical, 3]}
						/>
					</bufferGeometry>
					<lineBasicMaterial
						color={color}
						transparent={opacity < 1}
						opacity={opacity}
						depthTest={false}
					/>
				</line>
			</group>
		)
	}
)

// Component for the measurement line
const MeasurementLine = React.memo(
	({
		point1,
		point2,
		measurementResult,
	}: {
		point1: MeasurementPoint
		point2: MeasurementPoint
		measurementResult: ReturnType<typeof calculateMeasurementValues>
	}) => {
		const midPoint = useMemo(
			() => point1.position.clone().add(point2.position).multiplyScalar(0.5),
			[point1.position, point2.position]
		)

		const lineGeometry = useMemo(
			() =>
				new Float32Array([
					point1.position.x,
					point1.position.y,
					point1.position.z,
					point2.position.x,
					point2.position.y,
					point2.position.z,
				]),
			[point1.position, point2.position]
		)

		return (
			<group>
				<line>
					<bufferGeometry>
						<bufferAttribute
							attach='attributes-position'
							args={[lineGeometry, 3]}
						/>
					</bufferGeometry>
					<lineBasicMaterial color={0x00ff00} linewidth={2} depthTest={false} />
				</line>

				<Text
					position={midPoint}
					fontSize={0.05}
					color='white'
					anchorX='center'
					anchorY='middle'
					renderOrder={999}
					material-depthTest={false}
				>
					{`${measurementResult.pointsDistance.toFixed(3)}${
						measurementResult.parallelFacesDistance !== null
							? `\n↔ ${measurementResult.parallelFacesDistance.toFixed(3)}`
							: ''
					}${
						measurementResult.facesAngle
							? `\n∠ ${((measurementResult.facesAngle * 180) / Math.PI).toFixed(1)}°`
							: ''
					}`}
				</Text>
			</group>
		)
	}
)

export const OptimizedMeasurementTool: React.FC<MeasurementToolProps> = ({
	measureMode,
	modelRef,
}) => {
	const { gl, camera, raycaster, pointer } = useThree()

	const handleClickRef = useRef<(e: MouseEvent) => void>(() => {})

	const [measurementPoints, setMeasurementPoints] = useState<
		MeasurementPoint[]
	>([])

	const [tempMarker, setTempMarker] = useState<{
		position: Vector3
		normal: Vector3
		intersection: Intersection
	} | null>(null)
	const [boundingSphere, setBoundingSphere] = useState<Sphere | null>(null)
	const [markerRadius, setMarkerRadius] = useState<number>(0.05)
	const [isInitialized, setIsInitialized] = useState<boolean>(false)

	const measurementResult = useMemo(() => {
		if (measurementPoints.length === 2) {
			return calculateMeasurementValues(
				measurementPoints[0],
				measurementPoints[1]
			)
		}
		return null
	}, [measurementPoints])

	const pickablesRef = useRef<Mesh[]>([])
	const lastRaycastTime = useRef<number>(0)
	const tempMarkerVisibleRef = useRef<boolean>(false)

	// Initialization of the measurement tool
	useLayoutEffect(() => {
		if (!measureMode) {
			setIsInitialized(false)
			return
		}

		// Check if model is loaded
		if (!modelRef.current) {
			setIsInitialized(false)
			return
		}

		// Check if model has children
		if (modelRef.current.children.length === 0) {
			setIsInitialized(false)
			return
		}

		try {
			// Search for mesh objects in the model
			const meshes: Mesh[] = []
			modelRef.current.traverse(obj => {
				if ((obj as Mesh).isMesh) meshes.push(obj as Mesh)
			})

			if (meshes.length === 0) {
				console.warn('No mesh objects to measure')
				setIsInitialized(false)
				return
			}

			pickablesRef.current = meshes

			const box = new Box3()
			try {
				box.setFromObject(modelRef.current)

				// Check if bounding box is valid
				if (box.isEmpty()) {
					console.warn('Bounding box is empty')
					setIsInitialized(false)
					return
				}

				const sphere = new Sphere()
				box.getBoundingSphere(sphere)

				setBoundingSphere(sphere)
				setMarkerRadius(Math.max(sphere.radius / 20, 0.01)) 
				setIsInitialized(true)
			} catch (boxError) {
				console.error('Error while creating bounding box:', boxError)
				setIsInitialized(false)
				return
			}
		} catch (error) {
			console.error('Error initializing measurement tool:', error)
			setIsInitialized(false)
		}
	}, [measureMode, modelRef.current])

	// Reset measurement state when measureMode changes
	useLayoutEffect(() => {
		if (!measureMode) {
			setMeasurementPoints([])
			setTempMarker(null)
			setIsInitialized(false)
		}
	}, [measureMode])

	// Optimized raycasting 
	useFrame(() => {
		if (!measureMode || !isInitialized || pickablesRef.current.length === 0)
			return

		const now = performance.now()
		if (now - lastRaycastTime.current < RAYCAST_THROTTLE) {
			return
		}
		lastRaycastTime.current = now

		try {
			raycaster.setFromCamera(pointer, camera)
			const intersects = raycaster.intersectObjects(pickablesRef.current, false)

			if (intersects.length > 0) {
				const intersection = intersects[0]
				const normal = getFaceWorldNormal(intersection)

				if (!tempMarkerVisibleRef.current) {
					tempMarkerVisibleRef.current = true
				}

				setTempMarker({
					position: intersection.point.clone(),
					normal: normal.clone(),
					intersection,
				})
			} else {
				if (tempMarkerVisibleRef.current) {
					setTempMarker(null)
					tempMarkerVisibleRef.current = false
				}
			}
		} catch (error) {
			console.error('Raycast error:', error)
		}
	})

	// Click handler for adding measurement points
	const handleMouseEvent = useCallback(
		(event: MouseEvent) => {
			if (!measureMode || !tempMarker || !isInitialized) return

			event.stopPropagation()
			event.preventDefault()

			setMeasurementPoints(prev => {
				const next = prev.length >= 2 ? [] : [...prev]
				next.push({
					id: `pt_${Date.now()}`,
					position: tempMarker.position.clone(),
					normal: tempMarker.normal.clone(),
					intersection: tempMarker.intersection,
					timestamp: Date.now(),
				})
				return next
			})
		},
		[measureMode, tempMarker, isInitialized]
	)

	useLayoutEffect(() => {
		handleClickRef.current = handleMouseEvent
	}, [handleMouseEvent])

	useLayoutEffect(() => {
		if (!measureMode || !isInitialized) return

		const canvas = gl.domElement
		const listener = (e: MouseEvent) => {
			handleClickRef.current(e)
		}
		canvas.addEventListener('dblclick', listener, { passive: false })
		return () => {
			canvas.removeEventListener('dblclick', listener)
		}
	}, [measureMode, isInitialized, gl.domElement])

	if (!measureMode || !isInitialized) return null

	return (
		<>
			{measurementPoints.map(point => (
				<MeasurementMarker
					key={point.id}
					position={point.position}
					normal={point.normal}
					radius={markerRadius}
					color={0x00ff00}
					opacity={1}
				/>
			))}
			{measurementPoints.length === 2 && measurementResult && (
				<MeasurementLine
					point1={measurementPoints[0]}
					point2={measurementPoints[1]}
					measurementResult={measurementResult}
				/>
			)}
			{tempMarker && (
				<MeasurementMarker
					position={tempMarker.position}
					normal={tempMarker.normal}
					radius={markerRadius}
					color={0xffff00}
					opacity={0.7}
				/>
			)}
		</>
	)
}

I’ve heard of BVH, but I’m not sure whether it will help me.

Thank you for the answer. I would like to clarify: do you mean a BVH cannot be used while the model is moving, or also when the model is not playing any animations at the moment but has animations somewhere in its files? In my case the measurement happens while the model is in a static position; outside of this component I disable all animations and return the model to its original position on load.

The only thing I don’t know for sure about BVH, although I’ve looked at some examples like this one, is that apparently no raycasting is involved there, just animation optimisation or something like that.


Oh, I’m sorry… I replied to the wrong conversation! Apologies…


I would start by putting console.time/console.timeEnd calls around just the intersectObjects invocation and measuring that time in milliseconds. Turn off all the display/rendering of the measurement points for a moment: how much time does intersectObjects alone take?
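Something like this minimal helper, for instance (the function name and parameters are just for illustration):

import { Camera, Intersection, Mesh, Raycaster, Vector2 } from 'three'

// Hypothetical helper: run one intersection test and log how long it takes.
const timeRaycast = (
	raycaster: Raycaster,
	pointer: Vector2,
	camera: Camera,
	meshes: Mesh[]
): Intersection[] => {
	raycaster.setFromCamera(pointer, camera)
	console.time('intersectObjects')
	const hits = raycaster.intersectObjects(meshes, false)
	console.timeEnd('intersectObjects') // e.g. "intersectObjects: 2.3 ms"
	return hits
}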

Assuming that time is non-trivial (and for >70K vertices it probably is), I think you’d need to consider either raycasting against lower-detail geometry than you’re rendering, or using three-mesh-bvh to optimize it. Note that the BVH approach works well for static geometry, but constantly rebuilding the BVH index for 70K vertices would still be expensive.
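If you do try that route, the usual three-mesh-bvh pattern is to patch three.js once and build the BVH a single time after the model loads, then reuse it for every hit test. A rough sketch (assuming a recent three-mesh-bvh and its TypeScript augmentations; check the library docs for the current API):

import { BufferGeometry, Mesh, Object3D } from 'three'
import {
	acceleratedRaycast,
	computeBoundsTree,
	disposeBoundsTree,
} from 'three-mesh-bvh'

// Patch three.js once at startup so raycasts use a BVH when one exists.
BufferGeometry.prototype.computeBoundsTree = computeBoundsTree
BufferGeometry.prototype.disposeBoundsTree = disposeBoundsTree
Mesh.prototype.raycast = acceleratedRaycast

// Build the BVH a single time for a static model and reuse it afterwards.
const buildBvhOnce = (model: Object3D) => {
	model.traverse(obj => {
		const mesh = obj as Mesh
		if (mesh.isMesh && !mesh.geometry.boundsTree) {
			mesh.geometry.computeBoundsTree()
		}
	})
	// Optionally set raycaster.firstHitOnly = true (a three-mesh-bvh extension)
	// so intersectObjects stops at the nearest hit instead of sorting them all.
}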

It is also entirely possible that React renders or something else are dragging your performance down too. You could check that by skipping the actual raycast and just returning a dummy ‘hit’ result every time, seeing if it’s still slow, and then running a performance profile in Chrome.
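For example (an illustrative stub, not code from your component), you could swap the intersectObjects call for something like this and see whether the FPS drop survives:

import { Intersection, Object3D, Vector3 } from 'three'

// Hypothetical stub: always report the same hit. If the app is still slow with
// this in place of raycaster.intersectObjects, the cost is elsewhere (React
// re-renders, marker updates, etc.), not the raycast itself.
const makeDummyHit = (object: Object3D): Intersection => ({
	object,
	distance: 1,
	point: new Vector3(0, 0, 0),
})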

As an aside, a 240 Hz refresh rate seems like a lot. Is this a gaming monitor or something specialized? At 240 Hz your total budget per frame (for rendering, game logic, raycasting, everything) is just ~4 ms. On AR/VR headsets, which are pretty framerate-sensitive, I’d usually be targeting only 90–120 FPS.


Yup! A large share of browser users are limited to 60 fps. They don’t have the hardware (and don’t need it). You can go higher for experimentation and fun, but keep in mind that it concerns maybe 2-3% of the user base: video professionals and gaming enthusiasts.


(I ran the profiler on this app and the BVH regeneration is taking about 150 ms.) If that happened every frame it would mean roughly 6 fps, before accounting for any other rendering.


Thank you for your reply, you are indeed right.
Regarding the FPS: I’m just testing on a gaming monitor. Regarding the slowdown: it is possibly caused by the reactive components. After all, I based my approach on the 3dviewer.com project.
After analyzing the 3dviewer.com approach, I identified several key architectural differences that likely explain the performance gap:

1. Raycast Strategy:

  • My approach: Frame-driven raycast (~60fps) with throttling
  • 3D Viewer: Event-driven raycast only on mouse movement

2. React Overhead:

  • Multiple React hooks, states, and effects create unnecessary re-renders
  • Complex dependency chains in useMemo/useEffect
  • 3D Viewer uses simple class properties and direct Three.js calls

3. Object Management:

  • My code recreates geometries in useMemo on each change
  • 3D Viewer creates geometry once and reuses materials/objects

4. Event Handling:

  • My complex handleClickRef + useLayoutEffect system
  • 3D Viewer uses direct Click()/MouseMove() methods

The main issue seems to be that React Three Fiber adds abstraction layers that aren’t needed for measurement tools. An event-driven approach with simple classes appears much more efficient than frame-driven React components.
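To make the comparison concrete, here is a rough sketch of what the event-driven variant could look like as a small hook; every name is illustrative, and this is not 3dViewer’s actual code:

import { useLayoutEffect } from 'react'
import { Camera, Intersection, Mesh, Raycaster, Vector2 } from 'three'

// Sketch of event-driven picking: raycast only when the pointer actually
// moves, instead of on every frame. Pass stable callbacks (useCallback) so
// the effect does not re-subscribe on every render.
const usePointerMoveRaycast = (
	enabled: boolean,
	canvas: HTMLCanvasElement,
	raycaster: Raycaster,
	camera: Camera,
	getMeshes: () => Mesh[],
	onHit: (hit: Intersection | null) => void
) => {
	useLayoutEffect(() => {
		if (!enabled) return

		const onPointerMove = (e: PointerEvent) => {
			// Convert the pointer position to normalized device coordinates.
			const rect = canvas.getBoundingClientRect()
			const ndc = new Vector2(
				((e.clientX - rect.left) / rect.width) * 2 - 1,
				-((e.clientY - rect.top) / rect.height) * 2 + 1
			)
			raycaster.setFromCamera(ndc, camera)
			const hits = raycaster.intersectObjects(getMeshes(), false)
			onHit(hits.length > 0 ? hits[0] : null)
		}

		canvas.addEventListener('pointermove', onPointerMove)
		return () => canvas.removeEventListener('pointermove', onPointerMove)
	}, [enabled, canvas, raycaster, camera, getMeshes, onHit])
}

In my component above this would replace the useFrame raycast, with onHit updating the temporary marker, ideally through a ref or an imperative position update rather than setState, so moving the mouse does not re-render React components.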

So I was hoping to get by with a relatively optimised variant without a BVH. Here is an example of how differently the same model behaves.

My project:

A similar model, but in 3dViewer:

And here you can clearly see that there is no stutter, even without the FPS tracker. I will certainly cap the frame rate; maybe the drop just feels sharper (more noticeable) when a high FPS falls to a low value. But what interested me was how to optimise this process, and whether it is possible, without reducing the complexity of the mesh, to reach the result of 3dViewer, which, by the way, does not use a BVH.

But at the moment I’m inclined to think that my React ecosystem adds its own complications for this kind of application.


Yeah it’s a really nice demo!


bvh regeneration is taking about 150 msec…

It would definitely be important to make sure the BVH updates rarely, if at all: preferably once before rendering starts, with the same BVH reused for all hit tests after that. Then only the raycast time will matter, which should be much less. @Oleksandr-Fedorov I would strongly recommend measuring that number (“intersectObjects”), it’s pretty important. I would expect that with a BVH, a raycast on a 70K mesh should take < 0.1 ms; without a BVH it might be higher, or maybe that’s not your bottleneck at all…


Beyond that, yeah, it’s kind of necessary to handle state differently from the usual React conventions: at 60+ FPS you can’t afford to re-render React components much, so any state that changes continuously needs to live in refs or outside React state. The R3F Performance Pitfalls documentation is a good source on this. It can be solved, but even on teams with multiple experienced React developers it’s hard (in my experience) to get this right every time. I don’t really have a better suggestion other than to choose a convention that works for you, stick to it, and measure timing often. For me:

  1. I put console.count('doThing') ticks in my components when they render, or do other expensive tasks; if any counter starts running “continuously” (for any reason) stop and fix it. Can get fancier about this but console.count is enough.
  2. Use state management that doesn’t have to re-render components. For simple state, useRef might be enough. If more complex, consider something like Zustand and the subscribe pattern…
	const fooRef = useRef(useAppStore.getState().foo);

	useEffect(
		() =>
			// Note: the selector form of subscribe needs Zustand's
			// subscribeWithSelector middleware.
			useAppStore.subscribe(
				(state) => state.foo,
				(foo) => (fooRef.current = foo),
			),
		[],
	);

… mouse events assign refs or Zustand state but don’t do any work themselves; that happens later.
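A tiny sketch of that split, assuming R3F’s useFrame is available (all names here are mine): the pointer event only flips a flag in a ref, and the expensive work runs at most once per frame.

import { useEffect, useRef } from 'react'
import { useFrame } from '@react-three/fiber'

// Sketch: pointer events only mark work as pending in a ref (no setState,
// so no React re-render); the expensive part runs later in the frame loop.
const useDeferredPointerWork = (
	canvas: HTMLCanvasElement,
	doWork: () => void
) => {
	const dirtyRef = useRef(false)

	useEffect(() => {
		const markDirty = () => {
			dirtyRef.current = true
		}
		canvas.addEventListener('pointermove', markDirty)
		return () => canvas.removeEventListener('pointermove', markDirty)
	}, [canvas])

	useFrame(() => {
		if (!dirtyRef.current) return
		dirtyRef.current = false
		doWork() // e.g. raycast and move the temp marker imperatively
	})
}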

But if you’re finding it too difficult to reason about what’s happening, debug, and make improvements in the React code, then maybe it isn’t serving you well for this particular project, and that’s OK too!


This doesn’t sound right. When used correctly, useMemo is meant to prevent the recreation of resources.
