Is there a detailed explanation of the webxr_vr_video.html WebXR example?

Hi all, I’m very interested in expanding my knowledge of WebXR, especially the use of 3D video. The provided example is an excellent start, but there are parts of it I don’t understand, specifically the manipulation of the mesh UV array in these lines:

const uvs2 = geometry2.attributes.uv.array;
for ( let i = 0; i < uvs2.length; i += 2 ) {
    uvs2[ i ] *= 0.5;
    uvs2[ i ] += 0.5;
}
Is there a more detailed explanation or tutorial anyone knows of that explains what these manipulations are doing?

The purpose for this code is explained here: https://github.com/mrdoob/three.js/pull/20571#issuecomment-717846128
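To unpack what those lines do: the `uv` attribute stores coordinates interleaved as `[u0, v0, u1, v1, …]`, so stepping by 2 touches only the `u` (horizontal) values and leaves `v` alone. The example’s left-eye sphere only multiplies (squeezing `u` into `[0, 0.5]`, the left half of the side-by-side video), while the right-eye sphere multiplies and then adds 0.5 (shifting `u` into `[0.5, 1]`, the right half). A standalone sketch of both remaps on a tiny hand-made array (the three-vertex data here is illustrative, not from the example):

```javascript
// Interleaved UVs for three vertices: [u0, v0, u1, v1, u2, v2]
const uvsLeft  = new Float32Array([0.0, 0.0, 0.5, 0.5, 1.0, 1.0]);
const uvsRight = new Float32Array([0.0, 0.0, 0.5, 0.5, 1.0, 1.0]);

// Left eye: u in [0, 1] -> [0, 0.5], i.e. the left half of the video
for (let i = 0; i < uvsLeft.length; i += 2) {
    uvsLeft[i] *= 0.5;
}

// Right eye: u in [0, 1] -> [0.5, 1], i.e. the right half of the video
for (let i = 0; i < uvsRight.length; i += 2) {
    uvsRight[i] *= 0.5;
    uvsRight[i] += 0.5;
}

console.log(Array.from(uvsLeft));  // [0, 0, 0.25, 0.5, 0.5, 1]
console.log(Array.from(uvsRight)); // [0.5, 0, 0.75, 0.5, 1, 1]
```

Note that the `v` values (indices 1, 3, 5) never change; only the horizontal texture coordinate is remapped.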

Thanks. Pulling that quote in here:

What the example does is create two spheres and modify their UVs so each sphere renders a different part of the video. It then puts each sphere in a different layer, so the cameras inside the WebXRManager each see a different object.
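The layer half of that quote can be emulated with plain bit logic: three.js layers are 32-bit masks, and an object is rendered by a camera only when their masks overlap. The sketch below reimplements just that membership test (it is not the real `THREE.Layers` class; the layer numbers 1 and 2 are the ones the example happens to use, and layer 0 is the default every object and camera starts on):

```javascript
// Emulation of the three.js layer-mask visibility test.
function layerMask(n) {
    return 1 << n;
}

const leftSphereMask  = layerMask(1); // like mesh1.layers.set(1)
const rightSphereMask = layerMask(2); // like mesh2.layers.set(2)

// Per-eye XR cameras keep default layer 0 and enable one extra layer.
const leftEyeCameraMask  = layerMask(0) | layerMask(1);
const rightEyeCameraMask = layerMask(0) | layerMask(2);

// An object is drawn by a camera only if the masks share a bit.
function visible(cameraMask, objectMask) {
    return (cameraMask & objectMask) !== 0;
}

console.log(visible(leftEyeCameraMask, leftSphereMask));   // true
console.log(visible(leftEyeCameraMask, rightSphereMask));  // false
console.log(visible(rightEyeCameraMask, rightSphereMask)); // true
console.log(visible(rightEyeCameraMask, leftSphereMask));  // false
```

So each eye’s camera simply never draws the other eye’s sphere, which is how the two differently remapped UV sets end up routed to the correct eyes.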

How does looping through the UVs and multiplying every other one by 0.5 work? And why is 0.5 added to every other one for the right eye only?

Just guessing, but does the 0.5 refer to the dividing point between the left and right eyes in the side-by-side (SBS) 3D video? Layer 1 (left) somehow ignores the right half of the video via uvs2[ i ] *= 0.5 on every other UV (I still don’t get the every-other part), and the right eye does the same transformation but also adds 0.5 to every other UV.

Are there any resources that can help me understand what UVs are and how they’re used? I’m missing the foundation here.

Thanks for your help.

In this case, it’s best to study some basic literature on real-time rendering. E.g.
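In the meantime, here is a minimal, framework-free illustration of the core idea. Each vertex carries a (u, v) pair in [0, 1]² that addresses a position in the texture, with u running across the width and v across the height (the v origin convention varies between APIs). The `sampleNearest` helper below is something I wrote for this post, not a three.js API:

```javascript
// Nearest-neighbour texture lookup: map a (u, v) pair to a texel.
function sampleNearest(pixels, width, height, u, v) {
    const x = Math.min(width - 1, Math.floor(u * width));
    const y = Math.min(height - 1, Math.floor(v * height));
    return pixels[y * width + x];
}

// A 2x2 "texture" stored row by row, v = 0 row first.
const pixels = ['red', 'green', 'blue', 'yellow'];

console.log(sampleNearest(pixels, 2, 2, 0.1, 0.1)); // 'red'
console.log(sampleNearest(pixels, 2, 2, 0.9, 0.1)); // 'green'
console.log(sampleNearest(pixels, 2, 2, 0.9, 0.9)); // 'yellow'
```

The UV edits in the video example are exactly this: by rescaling every vertex’s u value, the same texture lookup is steered toward only the left or only the right half of the video frame.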