Synchronized rendering over JPEGs generated with Google Earth Studio

I’d like to use a sequence of images from Google Earth Studio as the background for my three.js scene. What I’ve accomplished so far is:

I have created my own scene containing a lower-res planet Earth with some additional elements rendered on it.
I can make the Earth invisible and enable alpha on the renderer so that the additional elements will be blended over the background.
I can export a series of frames (.jpeg files) from Google Earth Studio.
I can export the Google Earth Studio camera’s 3D tracking data (as a .json file), then import it and use it to precisely control the camera in the three.js scene.
I have working code which takes control of the three.js renderer, captures a series of frames from my scene, and saves them to disk.

What’s left to do…
I need to load the .jpegs from Google Earth Studio and use them as the background of my scene. There needs to be an exact one-to-one mapping between the input background frame numbers and captured output frame numbers.

The problem…
Unfortunately, I haven’t been able to figure out how to load each image in turn into the background (and wait for it to finish loading) before rendering the scene. All I’ve been able to find so far is code that loads images asynchronously and uses a callback.

I believe that all I need is code that directly loads a .jpeg without using a callback, but I can’t seem to find a function that will do that. The documentation I can find on this topic describes techniques that I think are too complicated, such as using “await” (didn’t work), promises, loading managers, or cryptic incantations such as image.src = 'data:image/png;base64,' + imageDataInBase64. None of these were of much help, since I’m just not enough of a JavaScript genius.

I think I just need a function - if it exists - that resembles …

loader.loadAndYouHadBetterNotDamWellReturnUntilTheImageIsGoodAndReady('images/GoogleEarthImages_001.jpeg')

Is there a straightforward way to do this?

I tried to make two functions:

  • loadAndYouHadBetterNotDamWellReturnUntilTheImageIsGoodAndReady( URL )
  • loadAndReturnImmediatelyAndLetTheImageLoadAsynchronouslyForTheTimeBeing( URL )

The left square is using the first function (so it waits for the loading to complete) and that’s why the image is shown. The right square loads the texture asynchronously and the rendering happens before the loading is complete. That’s why it is black.

It is not exactly what you need, but is it close enough?

https://codepen.io/boytchev/full/oNOOBRP
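For reference, the two functions in the codepen can be sketched roughly like this (a sketch only, with THREE assumed as a global, as in the other snippets here; the waiting variant is just a thin wrapper over `TextureLoader.loadAsync` and must itself be called from async code):

```javascript
// Waiting variant: resolves only once the image has fully loaded.
// It must itself be awaited from inside an async function.
async function loadAndYouHadBetterNotDamWellReturnUntilTheImageIsGoodAndReady(url) {
  const loader = new THREE.TextureLoader();
  return loader.loadAsync(url);
}

// Non-waiting variant: returns a THREE.Texture object immediately,
// but its image data arrives later (hence the black square at render time).
function loadAndReturnImmediatelyAndLetTheImageLoadAsynchronouslyForTheTimeBeing(url) {
  const loader = new THREE.TextureLoader();
  return loader.load(url);
}
```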


A totally different approach (actually preferred on many sites) is to convert your JPEG sequence into a video. That way you can 1) preload it at the very beginning, avoiding having to manage loading at runtime, 2) keep the video frame rate (e.g. 30 fps) in sync with a fixed render loop, giving a one-to-one correspondence between the two sources, and 3) effectively dispose of it from memory when you no longer need it.
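As a rough sketch of that sync idea (assuming `HTMLVideoElement.requestVideoFrameCallback` is available, e.g. in Chromium-based browsers; `renderFrameAt` is a hypothetical callback that renders the three.js scene for a given frame):

```javascript
// Map the video's presentation time to a frame index, so each rendered
// frame can be matched to exactly one video frame.
function frameIndexFromTime(currentTime, fps) {
  return Math.round(currentTime * fps);
}

// Preload the whole clip up front, then drive the render loop from the
// decoded video frames instead of managing image loads at runtime.
function playSynced(video, fps, renderFrameAt) {
  video.preload = 'auto';
  video.requestVideoFrameCallback(function step(now, metadata) {
    renderFrameAt(frameIndexFromTime(metadata.mediaTime, fps));
    video.requestVideoFrameCallback(step);
  });
  video.play();
}
```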


It looks like you reproduced the same issue that I’m having. However, I tried loadAsync but it wouldn’t compile for me. The error I’m getting is: ‘await’ expressions are only allowed within async functions and at the top levels of modules. My code needs to be inside renderFrame(). Can you see any way to use the await … loadAsync approach inside renderFrame()?

I’m hoping to render a sequence of high quality 4K images. If the video is preloaded, will it be stored in memory as JPEG-style compressed frames or as already decoded RGB images? If the latter, then this is going to require a lot of memory. But if the former, this approach may be promising. I’d like to learn more about how to implement this so that the one-to-one correspondence between the two sources is guaranteed.

I’m not sure I understand your question.
Video frames are accessible through the html5 video API, and when the browser preloads the video, it reserves the amount of memory it needs - meaning as a video.

About 7 years ago Jam3 studio developed a small module to overlay three.js content on top of a video clip, synchronizing both sources and matching the 3d camera from AE, here’s a tweet demonstrating a demo in action

Here is a codepen from that time, although using three.js version 70-ish and unable to display the video due to CORS issues. If you manage to solve that, you should see something like this:

With a dedicated scrubbing mechanism, you should be able to display a given frame in tandem with the rendering of a specific moment in time.
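Such a scrubbing mechanism might be sketched like this (hypothetical `seekToFrame` helper; it relies only on the standard `seeked` event and `currentTime` property of HTML5 video):

```javascript
// Seek the video to an exact frame and resolve only after the browser
// has finished the seek, so the decoded frame is ready to sample.
function seekToFrame(video, frameIndex, fps) {
  return new Promise((resolve) => {
    video.addEventListener('seeked', () => resolve(frameIndex), { once: true });
    video.currentTime = frameIndex / fps; // e.g. frame 30 at 60 fps -> t = 0.5 s
  });
}

// Usage inside an async render loop might look like:
// await seekToFrame(video, nextFrame, 60);
// renderer.render(scene, camera);  // background VideoTexture now shows that frame
```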

Oh, and a couple of other ideas:
i) unless the output screen is 3840 × 2160 px (four times as many pixels as full HD 1080p), I would consider decreasing the video size,
ii) after a long search, here is Jam3’s original post about the challenges to consider when developing this kind of thing

I took a look and the demos are cool. I searched for a way to control the HTML5 video playback at the frame-by-frame level, but did not spot any way to do this. I think that if I were to implement this technique, the playback will occur asynchronously to the render loop. So the registration of the rendered components with the background scenery will not be guaranteed. It will look like the rendered components are jittering relative to the background. Jam3 apparently observed this as he said, “Our final result still has some slight jitter, where ThreeJS does not match each After Effects frame exactly, so this is one area which requires more work.”
My capture code is able to capture frames and it periodically stalls the render loop to dump them to the hard drive. Unfortunately, I didn’t write that code so I’m not familiar with how it works. I just want to implement the opposite function - that is, code that loads images and stalls the render loop as needed to accomplish this.

Thanks for sharing these insights, mate; I found them very useful and informative.

I’m assuming you want to load an image before you start rendering.
You could have an init function like this:

async function init() {
  const loader = new THREE.TextureLoader();
  const texture = await loader.loadAsync('path/to/asset.png');
  const material = new THREE.MeshBasicMaterial( { map:texture } );
  const mesh = new THREE.Mesh(new THREE.PlaneGeometry(4, 4), material);
  scene.add(mesh);
  // Begin rendering, assuming that renderFrame calls
  // requestAnimationFrame(renderFrame) or similar
  renderFrame();
}

To start you could do any of:

init();
// or:
init().then(() => { /** do something */ });
// or inside another async function:
await init();

Typically, I will need to load about 20 seconds’ worth of 60 fps images, each 3840×2160. My plan was to load these one at a time inside the render loop. I suppose that theoretically I could buy a graphics card with lots of RAM (~40 GB) and preload them before I start rendering. For fun, I did implement loading one image with await and loadAsync at the top level, and that does work. But the solution I’m really looking for is one that simply loads the images on the fly. That is, a load command that returns an image, not a promise.

But, since your answer made me give this some more thought, it occurred to me that I should be able to preload a few images and display them sequentially in renderFrame. I implemented this to test the entire flow end-to-end using a short low-res clip. This test established that the Google Earth Studio camera-control methodology is working.
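For reference, the on-the-fly loop I have in mind would look roughly like this (a sketch only; `frameUrl` and `renderSequence` are hypothetical names, and it assumes the render loop itself can be made async):

```javascript
// Hypothetical helper matching the numbered-file pattern of the exported frames.
function frameUrl(i) {
  return `./textures/googleEarthImages/NewZealandLaunchSite_${String(i).padStart(3, '0')}.jpeg`;
}

// Sketch of an on-the-fly loop: await each background frame, render,
// capture, then dispose of the frame that is no longer needed.
async function renderSequence(renderer, scene, camera, backgroundMaterial, frameCount) {
  const loader = new THREE.TextureLoader();
  let previous = null;
  for (let frame = 0; frame < frameCount; frame++) {
    const texture = await loader.loadAsync(frameUrl(frame)); // waits for this frame only
    backgroundMaterial.map = texture;
    renderer.render(scene, camera);
    // ...capture the canvas to disk here...
    if (previous) previous.dispose();
    previous = texture;
  }
}
```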

You could try something like this:

const urls = ['path/foo', 'path/bar', 'path/baz'];

/** @type {{ [url: string]: THREE.Texture }} */
const textureCache = {};
const textureLoader = new THREE.TextureLoader();

urls.forEach(url => textureLoader.loadAsync(url).then((tex) => {
  textureCache[url] = tex;
  // Do something when this texture is loaded...
}));

The images will start loading in parallel. You can then assign those textures to pre-existing Meshes, which previously could have had a fallback texture.

Finally, the underlying HTMLImageElement is available as tex.source.data, e.g. in case you want to draw them to a single CanvasTexture instead.

For testing purposes I tried (and it worked)…

// In init...
const backgroundTextureLoader = new THREE.TextureLoader()
let backgroundTexture = []
for (let i = 0; i < 400; i++) {
  backgroundTexture[i] = await backgroundTextureLoader.loadAsync(`./textures/googleEarthImages/NewZealandLaunchSite_${i.toString().padStart(3, '0')}.jpeg`)
}
// In renderFrame...
// First frame...
backgroundMaterial = new THREE.MeshBasicMaterial( { map: backgroundTexture[0] } )

// Subsequent frames...
backgroundMaterial.map = backgroundTexture[nextFrame]

Cool. Just FYI, because you use await you’re loading those textures one at a time. If you want to load many in parallel, but not necessarily all of them, you could use e.g.

promise-limit - npm
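A sketch of how that might look (assuming promise-limit’s documented API, where `promiseLimit(n)` returns a wrapper that queues promise-returning jobs with at most n in flight; `loadBackgroundFrames` is a hypothetical name):

```javascript
// Only the background-image loads go through the limiter; any other
// textures can still be loaded directly, with no concurrency cap.
function loadBackgroundFrames(urls, loader, concurrency) {
  const promiseLimit = require('promise-limit');
  const limit = promiseLimit(concurrency);
  return Promise.all(urls.map((url) => limit(() => loader.loadAsync(url))));
}

// Usage sketch:
// const textures = await loadBackgroundFrames(urls, new THREE.TextureLoader(), 4);
```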

Is this example closer to what you need? Loading is executed by calling the texture-loading function in a loop inside an event-handler function invoked by clicking a button, so it could be considered not top-level. The console shows what happens with the loading and when the image data is available. Nothing is rendered on the screen.

The left snapshot is with traditional loading (without waiting). You can see that the data becomes available later on.

The right snapshot is with waiting. The data becomes available within the function call, but the browser waits for each image to load. If you have a slow internet or huge images, the browser would be blocked while the image is loading. Users tend to hate when the browser is blocked for too long.

https://codepen.io/boytchev/pen/WNWWZKa?editors=0011

PS. To be honest, I think it is better to reorganize the data/control flow in your program, instead of forcing JavaScript to act counterjavascriptish. For example, in desktop graphics the programmer can directly control the animation. In a browser, you have to split the animation into chunks and give the browser one chunk per frame. Similarly, it might be better to change the logic of your code so that the code that relies on the images also works chunk by chunk: you do one chunk whenever the next image is loaded.
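A minimal sketch of that chunk-by-chunk idea (hypothetical `processSequence`; in a real program the callback would render and capture the frame before moving on):

```javascript
// Process each image inside its load callback, then kick off the next
// load, so the browser is never blocked waiting for the whole sequence.
function processSequence(urls, index = 0) {
  if (index >= urls.length) return; // all chunks done
  new THREE.TextureLoader().load(urls[index], (texture) => {
    // ...render / capture the frame that uses this texture here...
    texture.dispose();
    processSequence(urls, index + 1); // next chunk when this one is loaded
  });
}
```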


Your codepen does a nice job of illustrating the nature of the problem. It uses async function testWait(), which would be the equivalent of my renderFrame call. I’m not sure what the side effects might be of putting async in front of renderFrame in a three.js application. What are your thoughts on that?

Concerning the “PS”, the architecture of three.js (and perhaps JavaScript) makes certain assumptions about which requirements the user or developer wants to prioritize, such as “they don’t want the browser to be blocked for too long”. I think we are exploring territory where the requirements are prioritized differently. In this case, what the user wants is for the behavior to be deterministic and for the code to not be unwieldy. Blocking to achieve determinism is the desired behavior. I think it would be best for users to be able to make this prioritization choice without having to invent and debug novel data/control flows that depart from the templates used in most of the three.js examples.

Interesting tool. Can it limit some promises and not others? In general, I need to load lots of different textures. I just want the sequence of background images to be limited.

I didn’t read the whole thread, but yeah… Chrome has top-level await now if you’re loading your code as a type=‘module’ script tag…
and then you can just
thing = await loader.loadAsync(

but if they are large loads… they will throw warnings in the browser because they are kinda locking up the main thread… and you’re also by definition loading them all serially…

I use something like a

let thingsArray = await Promise.all( [loader.loadAsync( urlA ), loader.loadAsync( urlB )])
or:
Promise.all( [loader.loadAsync( urlA ), loader.loadAsync( urlB )]).then(thingsArray=>{

})

pattern and that seems to work ok? ymmv… fun stuff.