Blendshape animation for visemes is slow

I get the BlendShapes from Microsoft's Speech API, where each frame is an array of 55 facial positions represented as decimal values between 0 and 1, and apply them in the useFrame loop of my react-three-fiber project.

{
    "FrameIndex":0,
    "BlendShapes":[
        [0.021,0.321,...,0.258],
        [0.045,0.234,...,0.288],
        ...
    ]
}

This is how I receive the frames and queue them, together with the viseme IDs:

    synthesizer.visemeReceived = function (s, e) {
        const visemeId = e.privVisemeId;
        const timestamp = e.privAudioOffset / 10000; // Convert 100-ns ticks to milliseconds
        setVisemeIDs(prevVisemeIDs => [...prevVisemeIDs, { id: visemeId, timestamp }]);
        // Parse the animation payload (the JSON shown above) and queue its BlendShapes frames
        const parsedAnimation = JSON.parse(e.animation);
        setFrameQueue((prevQueue) => [...prevQueue, ...parsedAnimation.BlendShapes]);
    };


    const [frameQueue, setFrameQueue] = useState([]);
    
    useFrame(() => {
        if (frameQueue.length > 0) {
            // Apply the oldest queued frame to the mesh's morph targets, then drop it
            const currentFrame = frameQueue[0];
            nodes.mesh.morphTargetInfluences = currentFrame;
            setFrameQueue(prevQueue => prevQueue.slice(1));
        }
    });

On desktop and mobile it runs smoothly (there is an offset between the audio and the viseme animation, since I couldn't make use of privAudioOffset, but that is not the focus of this question). On the Meta Quest 2, however, it is very laggy (hello motion sickness!). I suspect that the way I receive the BlendShapes and pre-save them in a temporary array for later use in useFrame is causing a lot of the problems, since this example runs very smoothly on the Quest 2: three.js webgl - morph targets - face

Any ideas on how I could optimize my code?

Using stateful hooks like useState() will cause the React component to re-render. It's very important that React components do not re-render on every frame, so calling state setters inside useFrame or other continuously running callbacks must be avoided. See:

https://docs.pmnd.rs/react-three-fiber/advanced/pitfalls#avoid-setstate-in-loops
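
To make that concrete, here is a minimal sketch of the same queue driven by a useRef instead of useState. The names useVisemeFrames, synthesizer and headMesh are placeholders for the pieces you already have (headMesh standing in for your nodes.mesh), and the event handling is copied from your snippet. Because the queue lives in a ref, pushing frames in visemeReceived and consuming one per frame in useFrame never triggers a React re-render.

    import { useEffect, useRef } from 'react';
    import { useFrame } from '@react-three/fiber';

    // Sketch: synthesizer is the Speech SDK synthesizer, headMesh the mesh
    // whose morphTargetInfluences you are already driving.
    function useVisemeFrames(synthesizer, headMesh) {
        const frameQueueRef = useRef([]);

        useEffect(() => {
            synthesizer.visemeReceived = (s, e) => {
                const parsedAnimation = JSON.parse(e.animation);
                // Mutate the ref instead of calling a state setter,
                // so there is no re-render per viseme event.
                frameQueueRef.current.push(...parsedAnimation.BlendShapes);
            };
        }, [synthesizer]);

        useFrame(() => {
            const frame = frameQueueRef.current.shift();
            if (!frame || !headMesh) return;
            // Copy the values into the existing influences array rather than
            // replacing it, so three.js keeps using the same buffer.
            for (let i = 0; i < frame.length; i++) {
                headMesh.morphTargetInfluences[i] = frame[i];
            }
        });
    }

Note that this still advances the queue once per rendered frame, so on a 72/90 Hz headset the animation will play back faster than on a 60 Hz monitor. If that becomes noticeable, you could pick the frame by elapsed time instead, e.g. using the delta argument that useFrame passes to its callback.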