Positional audio stuttering (Using Omnitone & Resonance Audio SDK Ambisonics libraries)

Hello,

I’m building a virtual space with 3D audio integration, using the Resonance Audio SDK library (built on top of Omnitone). It uses Ambisonic audio, which provides more precise localization and better quality. I’m having problems with the sound, which stutters/pops severely when the listener moves. It happens both when rotating the camera and when moving the listener, and it gets worse the closer you get to the objects (which have audio files attached to them).

I tried lowering the sample volume, thinking it might be clipping, but it doesn’t help. I’ve been struggling with this for days/weeks: I tried several libraries and decomposed the camera matrix to apply some easing to the position update, but nothing helps. I looked for solutions online; some people seem to have the same problem, but it isn’t a widely known issue.
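For reference, the easing I tried was roughly along these lines (just a sketch: the smoothing factor is arbitrary, and updateAudioListener is a placeholder for whichever library-specific listener update is used):

import * as THREE from 'three';

// Smoothed pose, persisted between frames.
const smoothedPos = new THREE.Vector3();
const smoothedQuat = new THREE.Quaternion();
const tmpPos = new THREE.Vector3();
const tmpQuat = new THREE.Quaternion();
const tmpScale = new THREE.Vector3();

// Called once per frame: decompose the camera matrix, ease towards it,
// then feed the smoothed pose to the audio library.
function easeListener(camera, updateAudioListener, alpha = 0.15) {
  camera.updateMatrixWorld();
  camera.matrixWorld.decompose(tmpPos, tmpQuat, tmpScale);

  smoothedPos.lerp(tmpPos, alpha);
  smoothedQuat.slerp(tmpQuat, alpha);

  // Placeholder: replace with the actual listener update call of your library.
  updateAudioListener(smoothedPos, smoothedQuat);
}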

I’m asking for help here now because I believe this problem could be linked to the use of Three.js rather than the audio libraries (I’ve tried three different ones). I found a demo project online using A-Frame which had the same problem, but I also found some that didn’t share it… And I know this issue has already been addressed, and resolved a few times, in the Three.js GitHub repository.

Here is the link to the GitHub repo. If you have any familiarity with this issue, I hope you’ll be willing to spend some time to give me some insight. Thank you anyway if you took the time to read this.

The code: GitHub - polar0/metaverse: This experiment is an example of 3D Audio integration in a virtual space with Blockchain interaction, that could be described as a "metaverse".
You can get to the demo from the ReadMe.

Maybe I can make it easier with a few hints about the organization of this repo:

  • The code for audio is in src/World/audio
  • ‘main.js’ only initializes the main functions; the interesting part is in ‘positioned.js’. It’s a very basic setup, with the audio position updated on each frame (you can see it in ‘src/World/World.js’). A minimal sketch of that kind of per-frame update is shown right after this list.
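Here is that sketch (not the exact code from the repo, just the same idea, assuming a ResonanceAudio scene and a source created with createSource(); both setListenerFromMatrix() and setFromMatrix() accept a Three.js-style Matrix4):

function updateAudio(resonanceScene, camera, source, mesh) {
  // Make sure the world matrices are up to date before reading them.
  camera.updateMatrixWorld();
  mesh.updateMatrixWorld();

  // Resonance Audio reads the pose from the matrix, so a THREE.Matrix4
  // can be passed directly.
  resonanceScene.setListenerFromMatrix(camera.matrixWorld);
  source.setFromMatrix(mesh.matrixWorld);
}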

I don’t really know what else to try, but I believe some more experienced developers can give me some hints. If you know of a project that uses spatial audio with such libraries without this specific issue, please tell me, as looking through its code could help me!

For instance, this project makes the same use of 3D audio with A-Frame, and it works great! But the code is quite complex and spread across many files… VR music – Trick the Ear

Here are some issues I believe could be related to mine:

Please tell me if you need any more information!

Thanks,
polarzero.

To ensure it’s a three.js-related issue, it would be important to test the same audio input with another 3D engine like BabylonJS, or with plain WebGL/WebXR.

In any event, PositionalAudio breaks in webxr when moving the user · Issue #22884 · mrdoob/three.js · GitHub could be related, and I would definitely apply the respective patch (WebXRManager: Fix XRCamera Local Space Behavior by Reverting #21964 by zalo · Pull Request #22362 · mrdoob/three.js · GitHub) if you experience sound glitches in WebXR. However, keep in mind that this issue is only relevant for WebXR; it does not affect non-WebXR apps.

I’ve applied this patch, and I still have the issue. I will try with other engines. But with other sounds (from this demo) it works better. I guess the sound files are part of the problem, but that makes it even weirder…

I believe it might be an issue related to the browser’s update rate (10-15 ms), since it now happens only when rotating the camera fast enough.

Thanks for your advice. :slight_smile:

Hey polarzero, have you found any solutions to the stuttering sound?
I’ve just tried Resonance Audio and it’s not very pleasant.
Also, do you know if I can set up multiple rooms? Or how do you approach audio with multiple rooms?

Hi! Sorry for the late response, I got caught up in some work lately.

I eventually ended up using Atmoky, which I think is much better suited and in active development, and it’s not limited to web audio only. You will need to request access on their website; then you can use the documentation to get started. There are a few specifics related to using it with Three.js, but you will find the overall process much more pleasant. See this code sample for adapting the coordinate system of Three.js to Atmoky:

updateListenerPosition: (obj) => {
    const { renderer } = get();
    if (!renderer) return;

    // Convert Three.js -> Atmoky coordinate system
    const convertedPos = [
      -obj.position.z, // x
      -obj.position.x, // y
      obj.position.y, // z
    ];
    const convertedRotFromQuat = [
      obj.quaternion.w, // w
      -obj.quaternion.z, // x
      -obj.quaternion.x, // y
      obj.quaternion.y, // z
    ];

    // Update listener
    renderer.listener.setPosition(...convertedPos);
    renderer.listener.setRotationQuaternion(...convertedRotFromQuat);
  }

See my repository on GitHub (0xpolarzero/metaverse/blob/main/src/stores/Atmoky.js) for some code samples of the integration process (sorry, it’s not well documented yet; I’m working on a more detailed repo).

To implement multiple rooms, I would update the room parameters depending on the position of the user. You can create a function that runs on each frame and returns the room the user is currently in, based on its boundaries, then update the parameters (e.g. reverberation) if needed; see the sketch below. Atmoky also lets you set up occlusion (e.g. when there is a wall between the source and the listener), and you can use a similar per-frame function for that. I’m sure there will soon be more convenient ways to implement all of this.
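Here’s a minimal sketch of what I mean (the room bounds and the applyAcoustics callback are placeholders, not actual Atmoky API; the point is just the per-frame room lookup):

import * as THREE from 'three';

// Hypothetical room definitions: an axis-aligned box per room, plus whatever
// acoustic settings you want to apply (placeholder values, not Atmoky parameters).
const rooms = [
  {
    name: 'lobby',
    bounds: new THREE.Box3(new THREE.Vector3(-10, 0, -10), new THREE.Vector3(10, 4, 10)),
    acoustics: { reverb: 0.2 },
  },
  {
    name: 'hall',
    bounds: new THREE.Box3(new THREE.Vector3(10, 0, -20), new THREE.Vector3(40, 8, 20)),
    acoustics: { reverb: 0.8 },
  },
];

let currentRoom = null;

// Call on each frame with the listener position (e.g. camera.position).
// `applyAcoustics` stands in for the library-specific call that updates
// the room/reverb parameters.
function updateRoom(listenerPosition, applyAcoustics) {
  const room = rooms.find((r) => r.bounds.containsPoint(listenerPosition)) || null;
  if (room !== currentRoom) {
    currentRoom = room;
    if (room) applyAcoustics(room.acoustics);
  }
}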

Feel free to create a new thread and tag me there if you run into any issue when setting this up!