Seeking a robust multi-touch pattern for a keyboard with multi-key presses per finger

Hello everyone,

I’m developing a virtual stenography keyboard using React Three Fiber and I’ve hit a wall with a tricky multi-touch issue. I’m hoping to get some advice from the community on how to build a more reliable solution.

The main challenge is that a single finger (one pointerId) might be large enough to press multiple keys at once. The keyboard needs to track all keys under all fingers and then register the final “chord” when the user lifts their fingers off the screen.

You can see a live version here, and the source code is here.

Current Implementation

My approach is to give each individual key its own gesture handler.

  1. Component Structure: The keyboard is composed of many small Hexagon meshes. The main StenoKeyboard component manages the overall state.

  2. State Management: In StenoKeyboard, I use a Map to track the state of all active pointers:

const [pressedKeys, setPressedKeys] = useState(new Map());

The Map’s keys are the pointerId for each touch, and the values are a Set of the keyIds being pressed by that specific finger.
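For illustration, the update logic can be sketched as plain, framework-free helpers (the names `addKeyToPointer` and `clearPointer` are mine, not the project's; the real `updatePressedKeys` presumably does something equivalent via `setPressedKeys`):

```javascript
// Hypothetical helpers mirroring the described state shape:
// Map<pointerId, Set<keyId>>. New Map/Set instances are returned so
// React can detect the state change.
const addKeyToPointer = (pressedKeys, pointerId, keyId) => {
  const next = new Map(pressedKeys);
  const keys = new Set(next.get(pointerId) ?? []);
  keys.add(keyId);
  next.set(pointerId, keys);
  return next;
};

// Drop the whole entry when the finger lifts.
const clearPointer = (pressedKeys, pointerId) => {
  const next = new Map(pressedKeys);
  next.delete(pointerId);
  return next;
};
```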

  3. Per-Key Event Handling: This is the crucial part. Each individual Hexagon component has its own useDrag gesture handler (from @use-gesture/react via a custom hook).

- When a finger touches down on or drags over a hexagon, that hexagon’s useDrag handler fires.
- It then calls a shared updatePressedKeys function to add its own keyId to the Set associated with the current pointerId.
- When the finger is lifted (last: true in the gesture state), it clears the Set for that pointerId.
Here’s a look at the Hexagon component:

// In components/Hexagon.js
const Hexagon = ({ geometry, name, pressedKeys, updatePressedKeys, ...props }) => {
  const keyId = name;
  // This custom hook wraps a useDrag handler
  const dragProps = useDrag({ keyId, pressedKeys, updatePressedKeys });

  return (
    <group {...dragProps} {...props}>
      <mesh userData={{ keyId }}>
        {/* ... material and geometry */}
      </mesh>
    </group>
  );
};

The rationale for this design is to support multi-key presses from a single finger. If a finger is large enough to touch two hexagons at once, both of their useDrag handlers should fire for the same pointer event, adding both of their keyIds to the state.

The Problem: “Stuck” Keys

The keyboard works, but it’s not reliable. Frequently, keys get “stuck” in the pressed state.

My theory is that with so many adjacent event handlers, I’m running into race conditions or dropped events. For example, if a finger lifts up precisely on the boundary between two hexagons, or moves very quickly, one of the hexagons might miss the pointerup or pointerleave event. Its useDrag handler never gets the last: true signal, so it never clears its keyId from the state map, leaving the key “stuck.”
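One idea I have considered (sketched below with hypothetical names, and written against a generic EventTarget; it is not in my current code) is a single global listener that clears a pointer's state on any release, no matter which hexagon missed its event:

```javascript
// Hypothetical safety net: even if an individual hexagon misses its
// pointerup/pointerleave, a target-level listener always sees the
// release and can clear that pointerId's Set.
const attachPointerCleanup = (target, clearPointer) => {
  const onRelease = (event) => clearPointer(event.pointerId);
  target.addEventListener('pointerup', onRelease);
  target.addEventListener('pointercancel', onRelease);
  // Return a detach function, e.g. for a useEffect cleanup.
  return () => {
    target.removeEventListener('pointerup', onRelease);
    target.removeEventListener('pointercancel', onRelease);
  };
};
```

In the app this would attach to window inside a useEffect, with clearPointer deleting the pointerId entry from the pressedKeys Map.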

The Core Question

How can I reliably track multiple keys per finger without the fragility of per-key event listeners?

  1. Is there a better pattern? A centralized event handler on a single large plane seems more robust for tracking pointers, but a single raycast from that handler would only detect one key at a time. How could a centralized approach be adapted to find all keys within a certain radius of the pointer’s location?
  2. Improving the current approach: If I stick with per-key listeners, are there techniques to make the state cleanup more foolproof, ensuring a pointerup event anywhere on the screen correctly cleans up the state for its pointerId?
  3. Alternative Ideas: Are there other drei helpers or three.js features that could solve this more elegantly?

I feel like I’m fighting the framework a bit here and would be grateful for any insights or suggestions on a better architecture. Thank you!

I see your problem: the hexagons overlap. Even if you click the mouse between two keys, both are pressed.

I’m not very familiar with React, but I know that React Three Fiber can catch events directly on each mesh. I think that method would be much more accurate.

In a stenography keyboard, the ability to press multiple keys with a single finger is an important feature. For example, as there is no “I” key, that letter is represented by the combination of “E” and “U” pressed together. Additionally, I wanted the ability to drop a key when dragging out of it, and ultimately to allow a piano-style slide.

The behaviour you noticed happens because I implemented multiple raycasters per touch (5 at the moment) to simulate a 2D finger press; see touchscreen-stenography-keyboard/src/components/hooks/useDrag.js in CosmicDNA/touchscreen-stenography-keyboard.
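The idea can be sketched roughly like this (the offsets and names below are illustrative; the real code in useDrag.js may differ):

```javascript
// Illustrative sketch: approximate a finger's contact area by casting
// from the touch centre plus four points around it (5 rays total, as
// in the project; the exact layout here is an assumption).
const fingerSamplePoints = (x, y, radius) => {
  const points = [{ x, y }]; // centre of the touch
  for (let i = 0; i < 4; i++) {
    const angle = (i * Math.PI) / 2; // right, up, left, down
    points.push({
      x: x + radius * Math.cos(angle),
      y: y + radius * Math.sin(angle),
    });
  }
  return points; // one raycast per point; the union of hit keys is pressed
};
```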

Generally, I feel this would be optimally handled closer to the pointerdown event handler itself, with the visual state set in response in your graphics loop: maintain an array of pressed buttons and their positions in screen space; then, on pointerdown/pointermove, quickly compare the pointer to the current screen positions and mark those states pressed or not pressed. Then let the rendering layer react to it next frame. This is much faster than raycasting (it’s just a distance check from the pointer position to each hex’s screen position), especially with a lot of keys, and it doesn’t restrict you to the hex boundary: you can find the closest buttons within a certain distance of each hit. But maybe that’s overkill, idk.
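A minimal sketch of that screen-space check (all names, and the fingerRadius parameter, are illustrative):

```javascript
// keyCentres is a Map of keyId -> projected screen-space centre;
// fingerRadius approximates the finger's size in pixels. Returns every
// key within the radius, so one finger can press several keys at once.
const keysUnderFinger = (keyCentres, pointer, fingerRadius) => {
  const hits = [];
  for (const [keyId, centre] of keyCentres) {
    if (Math.hypot(centre.x - pointer.x, centre.y - pointer.y) <= fingerRadius) {
      hits.push(keyId);
    }
  }
  return hits;
};
```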

Could it be an idea to place invisible “overlap” mesh boxes between any two keys that could be pressed at the same time? These could extend the full width of the keys and overlap something like a quarter or a third of each key in breadth. If this theoretical overlap mesh is intersected, both keys would register as pressed; if not, only the individually intersected key would register. Just an idea..
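A sketch of how the lookup might work (mesh and key names below are made up):

```javascript
// Hypothetical mapping from an invisible "bridge" mesh to the pair of
// keys it sits between; a hit on a normal key maps to just that key.
const overlapPairs = new Map([
  ['overlap-E-U', ['E', 'U']], // example bridge between E and U
]);

const keysForHit = (meshName) => overlapPairs.get(meshName) ?? [meshName];
```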
