Hello everyone,
I’m developing a virtual stenography keyboard using React Three Fiber and I’ve hit a wall with a tricky multi-touch issue. I’m hoping to get some advice from the community on how to build a more reliable solution.
The main challenge is that a single finger (one `pointerId`) might be large enough to press multiple keys at once. The keyboard needs to track all keys under all fingers and then register the final "chord" when the user lifts their fingers off the screen.
You can see a live version here, and the source code is here.
## Current Implementation
My approach is to give each individual key its own gesture handler.
- **Component Structure:** The keyboard is composed of many small `Hexagon` meshes. The main `StenoKeyboard` component manages the overall state.
- **State Management:** In `StenoKeyboard`, I use a `Map` to track the state of all active pointers:

  ```jsx
  const [pressedKeys, setPressedKeys] = useState(new Map());
  ```

  The `Map`'s keys are the `pointerId` for each touch, and the values are a `Set` of the `keyId`s being pressed by that specific finger.
- **Per-Key Event Handling:** This is the crucial part. Each individual `Hexagon` component has its own `useDrag` gesture handler (from `@use-gesture/react` via a custom hook).
  - When a finger touches down or drags over a hexagon, that hexagon's `useDrag` handler fires.
  - It then calls a shared `updatePressedKeys` function to add its own `keyId` to the `Set` associated with the current `pointerId`.
  - When the finger is lifted (`last: true` in the gesture state), it clears the `Set` for that `pointerId`.
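To make the intended state transitions concrete, here is a minimal sketch of that Map-of-Sets bookkeeping as pure functions. The names `addKeyToPointer` and `releasePointer` are illustrative, not from the actual repo, and the copy-on-write style assumes the React convention of never mutating state in place:

```javascript
// Hypothetical helpers for the pressedKeys state: Map<pointerId, Set<keyId>>.

// Add a keyId to the Set tracked for a given pointerId.
function addKeyToPointer(pressedKeys, pointerId, keyId) {
  const next = new Map(pressedKeys);                // copy so React sees a new reference
  const keys = new Set(next.get(pointerId) ?? []); // existing keys for this finger, if any
  keys.add(keyId);
  next.set(pointerId, keys);
  return next;
}

// On pointer-up, drop the pointer's entry entirely.
function releasePointer(pressedKeys, pointerId) {
  const next = new Map(pressedKeys);
  next.delete(pointerId); // idempotent: safe even if already cleared
  return next;
}

// One finger (pointer 7) dragging across two adjacent hexagons:
let state = new Map();
state = addKeyToPointer(state, 7, 'S-');
state = addKeyToPointer(state, 7, 'T-');
// state.get(7) now holds both keyIds: the chord for that finger.
```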
Here's a look at the `Hexagon` component:
```jsx
// In components/Hexagon.js
const Hexagon = ({ geometry, name, pressedKeys, updatePressedKeys, ...props }) => {
  const keyId = name;
  // This custom hook wraps a useDrag handler
  const dragProps = useDrag({ keyId, pressedKeys, updatePressedKeys });
  return (
    <group {...dragProps} {...props}>
      <mesh userData={{ keyId }}>
        {/* ...material and geometry */}
      </mesh>
    </group>
  );
};
```
The rationale for this design is to support multi-key presses from a single finger. If a finger is large enough to touch two hexagons at once, both of their `useDrag` handlers should fire for the same pointer event, adding both of their `keyId`s to the state.
## The Problem: "Stuck" Keys
The keyboard works, but it’s not reliable. Frequently, keys get “stuck” in the pressed state.
My theory is that with so many adjacent event handlers, I'm running into race conditions or dropped events. For example, if a finger lifts up precisely on the boundary between two hexagons, or moves very quickly, one of the hexagons might miss the `pointerup` or `pointerleave` event. Its `useDrag` handler never gets the `last: true` signal, so it never clears its `keyId` from the state map, leaving the key "stuck."
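One mitigation I've been considering is a window-level safety net: regardless of which hexagon misses its event, a `pointerup` or `pointercancel` anywhere releases that `pointerId`. This is a sketch under assumptions — `clearPointer` and `makeGlobalCleanup` are hypothetical names standing in for the real state update in `StenoKeyboard`:

```javascript
// Hypothetical safety-net cleanup: drop a pointer's entry no matter
// which per-key handler missed the release event.
function clearPointer(pressedKeys, pointerId) {
  const next = new Map(pressedKeys);
  next.delete(pointerId); // idempotent: safe even if a key handler already cleared it
  return next;
}

// Build a handler bound to the component's state getter/setter.
function makeGlobalCleanup(getState, setState) {
  return (event) => {
    setState(clearPointer(getState(), event.pointerId));
  };
}

// Wiring (browser only): listen in the capture phase so no child
// handler can swallow the event before cleanup runs.
if (typeof window !== 'undefined') {
  // const onRelease = makeGlobalCleanup(getPressedKeys, setPressedKeys);
  // window.addEventListener('pointerup', onRelease, true);
  // window.addEventListener('pointercancel', onRelease, true);
}
```

Listening for `pointercancel` as well matters on touch devices, where the browser can cancel a touch (e.g. when a system gesture takes over) without ever firing `pointerup`.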
## The Core Question
How can I reliably track multiple keys per finger without the fragility of per-key event listeners?
- **Is there a better pattern?** A centralized event handler on a single large plane seems more robust for tracking pointers, but a single raycast from that handler would only detect one key at a time. How could a centralized approach be adapted to find all keys within a certain radius of the pointer's location?
- **Improving the current approach:** If I stick with per-key listeners, are there techniques to make the state cleanup more foolproof, ensuring a `pointerup` event anywhere on the screen correctly cleans up the state for its `pointerId`?
- **Alternative Ideas:** Are there other `drei` helpers or `three.js` features that could solve this more elegantly?
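For the centralized idea, the part I can already sketch is the "all keys within a radius" step: once the single raycast against the backing plane yields a hit point, a plain distance test over precomputed key centers finds every key a fat finger could cover. Everything here is illustrative — `keyCenters`, the coordinates, and `fingerRadius` are assumptions; in the real scene the point would come from the raycast hit on the plane:

```javascript
// Hypothetical centralized lookup: which key centers fall within a
// "finger radius" of the pointer's hit point on the keyboard plane?
function keysWithinRadius(keyCenters, point, fingerRadius) {
  const hit = [];
  for (const [keyId, center] of keyCenters) {
    const dx = center.x - point.x;
    const dy = center.y - point.y;
    if (Math.hypot(dx, dy) <= fingerRadius) hit.push(keyId);
  }
  return hit;
}

// Example: two hexagons 1 unit apart; a finger landing between them hits both.
const keyCenters = new Map([
  ['S-', { x: 0, y: 0 }],
  ['T-', { x: 1, y: 0 }],
  ['K-', { x: 5, y: 0 }],
]);
const touched = keysWithinRadius(keyCenters, { x: 0.5, y: 0 }, 0.75);
// touched → ['S-', 'T-']
```

Because this runs in one handler per pointer event, there is exactly one place that adds keys and one place that removes them, which sidesteps the dropped-event problem that per-key listeners have at hexagon boundaries.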
I feel like I’m fighting the framework a bit here and would be grateful for any insights or suggestions on a better architecture. Thank you!