Hi everyone,
I’m currently working on a WebXR scene using Three.js, and I’m struggling with spatial audio (using THREE.PositionalAudio) in VR.
Setup
I’m using THREE.PositionalAudio attached to 3D objects in the scene.
I update the audio positions with .getWorldPosition() inside the animation loop.
The camera and listener are correctly linked, and in animate(), I update the listener position and orientation like this:
if (renderer.xr.isPresenting) {
  const xrCamera = renderer.xr.getCamera(camera);
  listener.position.copy(xrCamera.position);
  listener.quaternion.copy(xrCamera.quaternion);
}
The problem I have
In VR mode, the spatial audio doesn't behave correctly: I hear weird distortions or "crackling" sounds, especially when I'm close to the mesh (the mesh is moving).
It works fine in non-VR mode, but is completely wrong once I enter WebXR.
What I’ve tried
Verified that the listener is a child of the camera (tried both with and without parenting).
Tried HRTF
Tested different audio files
In the end I switched back to a basic THREE.Audio and wrote some code to update the volume based on the distance from the object, but it's not as good as real spatial sound (roughly the sketch below).
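Roughly the kind of thing I hacked together for that fallback (a simplified sketch, not my exact code; the 10 m max distance and the linear falloff are just examples):

// plain THREE.Audio fallback: fake distance attenuation by hand
const listenerPos = new THREE.Vector3();
const sourcePos = new THREE.Vector3();

function updateFallbackVolume(audio, sourceMesh, camera, maxDistance = 10) {
  camera.getWorldPosition(listenerPos);
  sourceMesh.getWorldPosition(sourcePos);
  const distance = listenerPos.distanceTo(sourcePos);
  // simple linear falloff, clamped to [0, 1]
  audio.setVolume(THREE.MathUtils.clamp(1 - distance / maxDistance, 0, 1));
}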
Thanks a lot for your tips.
I’m not certain that the following info is related to your specific case, but generally, I have seen behavior like this when the audio system is overloaded.
You may want to check that your audio files aren't using a super high bitrate or stereo. I think something like 44 kHz / 16-bit (ogg/wav/mp3) works best, and if you're streaming background audio, limit it to 1 or 2 channels of mp3 max.
For emitters, it's good practice to keep a pool of ~8 or so and recycle them based on distance, with rules like: a new sound only plays if one of the emitters is free, or if one of the emitters is much further away than the one that wants to play. It's generally good practice to limit the number of emitters anyway, because mixing too many sounds makes the audio muddy, and the user won't be able to differentiate between them.
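A rough sketch of that idea (the SoundPool name, the pool size, and the "twice as far" stealing rule are just placeholders to adapt):

// distance-based emitter pool: recycle a fixed set of PositionalAudio emitters
class SoundPool {
  constructor(listener, size = 8) {
    this.emitters = Array.from({ length: size }, () => new THREE.PositionalAudio(listener));
  }

  play(buffer, object3d, listenerPos) {
    const newDist = object3d.getWorldPosition(new THREE.Vector3()).distanceTo(listenerPos);

    // prefer an emitter that isn't playing
    let emitter = this.emitters.find((e) => !e.isPlaying);

    // otherwise steal the farthest one, but only if it is much farther than the new sound
    if (!emitter) {
      let farthest = null, farthestDist = -Infinity;
      for (const e of this.emitters) {
        const d = e.getWorldPosition(new THREE.Vector3()).distanceTo(listenerPos);
        if (d > farthestDist) { farthestDist = d; farthest = e; }
      }
      if (farthestDist > newDist * 2) { farthest.stop(); emitter = farthest; }
    }

    if (!emitter) return; // everything is busy with closer sounds, skip this one

    emitter.setBuffer(buffer);
    object3d.add(emitter); // reparent the emitter onto the new source object
    emitter.play();
  }
}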
You also referenced HRTF, which I think is way more costly than just simple spatial audio emitters.
VR makes these issues more prominent since the renderer is already doing ~2x the CPU workload generating an image for each eye, so if you have lots of emitters I could see the CPU getting bogged down, with the audio system taking second place to the renderer when trying to finish all the work in a single frame.
In your case I would wrap the audio system in a manager class of some sort, and see if it helps to limit the number of emitters and background audio streams, and to use only mid-range quality samples (and mono for emitters, especially since spatialization doesn't apply to stereo sounds).
Hi, and thanks for your reply.
In my case, I'm only using a single THREE.PositionalAudio emitter with a .mp3 file, and I'm already experiencing the issue (crackling), but only in VR mode; everything sounds fine in non-VR.
When you mention stereo: do you mean the audio file itself should be mono to avoid these issues in VR?
I haven’t tested with a mono file yet, since it worked perfectly outside VR, but that might indeed be part of the problem.
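I suppose I can at least check what the decoded buffer actually contains (assuming sound is my PositionalAudio emitter):

// how many channels did the decoded file end up with?
console.log('channels:', sound.buffer.numberOfChannels);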
Could the issue come from the way I’m manually forcing the listener’s position and orientation every frame?
if (isVRActive && renderer.xr.isPresenting && audioManager) {
  const xrCamera = renderer.xr.getCamera(camera);
  // cameraPosition / cameraQuaternion are THREE.Vector3 / THREE.Quaternion temps allocated once, outside the loop
  xrCamera.getWorldPosition(cameraPosition);
  xrCamera.getWorldQuaternion(cameraQuaternion);

  // copy the pose onto the THREE.AudioListener
  audioManager.listener.position.copy(cameraPosition);
  audioManager.listener.quaternion.copy(cameraQuaternion);
  audioManager.listener.updateMatrixWorld(true);

  // also push the pose directly to the underlying WebAudio listener (AudioParam API)
  const ctx = audioManager.listener.context;
  const listener = ctx.listener;
  if (listener.positionX) {
    listener.positionX.value = cameraPosition.x;
    listener.positionY.value = cameraPosition.y;
    listener.positionZ.value = cameraPosition.z;

    const forward = new THREE.Vector3(0, 0, -1).applyQuaternion(cameraQuaternion);
    const up = new THREE.Vector3(0, 1, 0).applyQuaternion(cameraQuaternion);
    listener.forwardX.value = forward.x;
    listener.forwardY.value = forward.y;
    listener.forwardZ.value = forward.z;
    listener.upX.value = up.x;
    listener.upY.value = up.y;
    listener.upZ.value = up.z;
  }
}
I don't know... I've tried so many solutions. It's very frustrating when it works perfectly fine on the desktop and not in VR.
Anyway, thanks a lot.
I think the way you're positioning things is probably fine, but it looks somewhat verbose compared to just .add()ing the listener to the camera.
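i.e. something like this minimal setup (the file path and the refDistance value are just example placeholders), which also avoids the manual per-frame listener update:

// let three.js drive the listener from the camera transform, also in WebXR
const listener = new THREE.AudioListener();
camera.add(listener);

const sound = new THREE.PositionalAudio(listener);
sound.setRefDistance(1); // example value, tune for your scene scale

new THREE.AudioLoader().load('sounds/engine_mono.ogg', (buffer) => {
  sound.setBuffer(buffer);
  sound.setLoop(true);
  sound.play();
});

mesh.add(sound); // the emitter follows the mesh automatically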
It could be that the mp3 you are using has a high decoding or playback overhead. I would try with a simple, lower bitrate mono .wav or .ogg file. How are you loading/playing the file?
You can use the free Audacity tool to convert your audio to different formats for testing.
I asked ChatGPT for optimal lowest-common-denominator formats for WebVR audio (take it with a grain of salt). In summary:
Engine-noise loop (spatialized effect): mono, as mentioned above.

Soundtrack / music loop (non-spatialized):
- Channels: stereo
- Sample rate: 44,100 Hz
- Bitrate: 128–192 kbps (MP3/AAC) or 96 kbps (Ogg Vorbis)
- Format: MP3 stereo @ 128 kbps (widest play-everywhere)
(Also, if you haven't seen it yet, there's this audio example: three.js examples. You might try it on your device to see if you get the same crackle issue, and/or use its audio files to test in your app.)
Feel free to DM me if you need someone to test on a Quest 2.