From Three.js to WebGPU: My Insane Journey Building an Infinite 3D Engine

Hey guys,

This isn’t a rant about “why Three.js is bad” or a promotional “please use my library” post. It’s more about my insane journey and addiction to showcasing grand scale—millions of objects, rendering 100k+ items with the coolest post-processing filters. I even had one guy mistake my library video for movie VFX, which was pretty cool!

Hold on, this is going to be an epic story.

It started before the AI era. Back when I was a novice in Three.js, I did my fair share of tutorial projects. Then one day, I visited the NASA Eyes website and got completely captivated by its beauty. I thought: no way… could I do this too? Since this was before GPT, I had it rough. But I did pretty well! I rendered the stars and our planets as points using circle geometry, and the sun as a sphere. It was awesome. With OrbitControls, it almost looked like 0.001% of NASA Eyes. I was so happy… until I decided to add labels to the planets and got completely stuck. In theory, putting the text in a group meant it would follow its planet, but the problem came when I clicked on a planet to lerp toward it: some mystery bug kept snapping the sun to the center of the screen. After spending days trying to fix it, I gave up, shoved the project in the closet, forgot about it, and moved on.

Then AI came along, and boy, did I remember this project. Out of nowhere, I decided to let AI take a crack at it, and voila—it solved it! I kept improving the project from that point on. It’s a long story, but here’s what I did in order:

  1. Got LabelManagers and CameraManagers working, so clicking takes you to the planet just like the NASA Eyes website.

  2. Added support for wormhole portals—a mini-game-like layer where we can fly around the solar system.

  3. Made it so you can cross wormholes to see other star systems (SceneRegistry).

  4. Moved away from circular orbits to elliptical orbits following Kepler’s laws.

  5. Parsed NASA exoplanet data so it could be consumed by the Three.js layer (pulled 4,100 star systems!).

  6. Modularized it so I only needed JSONs to create a single star system (even created fictional ones with this).

  7. Added solar system textures, procedural planets, and post-processing, keeping the spaceship mode separate from the game layer.

  8. Added support for multi-star systems and a 3I/Atlas mode to capture that hype.
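Step 4 above (moving from circular to Keplerian orbits) mostly boils down to solving Kepler’s equation each frame. Here’s a minimal sketch in plain JavaScript — not the project’s actual code — where `a` is the semi-major axis, `e` the eccentricity, and `M` the mean anomaly:

```javascript
// Solve Kepler's equation M = E - e*sin(E) for the eccentric anomaly E
// via Newton's method, then map E to a position on the ellipse.
function keplerPosition(a, e, M) {
  let E = M; // initial guess (good enough for small-to-moderate e)
  for (let i = 0; i < 10; i++) {
    E -= (E - e * Math.sin(E) - M) / (1 - e * Math.cos(E));
  }
  // Position in the orbital plane, with the focus (the star) at the origin
  const x = a * (Math.cos(E) - e);
  const y = a * Math.sqrt(1 - e * e) * Math.sin(E);
  return { x, y };
}

// A circular orbit (e = 0) at mean anomaly 0 sits at (a, 0)
console.log(keplerPosition(1, 0, 0)); // { x: 1, y: 0 }
```

Feed the result into each planet’s position every frame and the ellipses (and the speed-up near periapsis) come for free.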

End result: Grabbed 500k views on Reddit with mentions in a couple of “Site of the Day” galleries (orionrealms.com).

After this, I pondered and researched a lot: How can I make a walkable planet layer? The conclusion: it might be possible if we use offscreen rendering with a Data-Oriented Design (DOD) approach and try WebGPU.

Enter the creation of the Axion Engine. At its beginning, it had two web workers running alongside the main thread (using R3F and Three/WebGPU). It worked fabulously for its early iterations, rendering a million objects via InstancedMesh and animating 100k objects by supplying transferable arrays.
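The transferable-array part of that setup looks roughly like this — a sketch of the idea, not the engine’s actual code; `structuredClone` stands in for a worker round-trip so it runs anywhere:

```javascript
// Pack per-object transforms into one flat Float32Array so the whole
// batch can be handed off between threads without copying.
const COUNT = 100_000;
const positions = new Float32Array(COUNT * 3); // x, y, z per object

for (let i = 0; i < COUNT; i++) {
  positions[i * 3 + 0] = Math.random() * 1000;
  positions[i * 3 + 1] = Math.random() * 1000;
  positions[i * 3 + 2] = Math.random() * 1000;
}

// In a browser you'd transfer ownership instead of copying:
//   renderWorker.postMessage({ positions }, [positions.buffer]);
// After the transfer the sender's buffer is detached (byteLength === 0),
// which is what makes the handoff essentially free. structuredClone
// demonstrates the same semantics without spinning up a worker:
const received = structuredClone(positions, { transfer: [positions.buffer] });
```

One detached send per frame beats 100k per-object `postMessage` payloads by a wide margin, which is what makes animating that many objects across threads viable.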

I thought, Wow, this is great, a walkable planet is possible! But then came the biggest boss: Cell-based origin rebasing architecture. It’s an absolute must if I want this dream to land. I sketched out my sim worker and render worker again, until I got hit with another problem—specifically with R3F. In origin rebasing, there’s an operation where we change the position for all objects in the visible 3x3x3 grid. This is the minimum baseline, but doing this shakes the R3F tree, causing scene rebuilds that drop the frame rate to 1 FPS!

I was like, okay, this is horrible, and I figured out the exact reason why. Next step: ditch React and R3F, and do it in vanilla Three.js.

It took some time to migrate the code, but finally… wow. It worked like seeing magic for the first time. I could cross a grid without triggering endless scene rebuilds in any direction. Just like those big open-world games where you can spend days inside, lol. I was thinking, heck yes, I’m going to be the one to bring this to the web! I might go down in history books like mrdoob! Well, until I hit more fundamental problems:

  1. I couldn’t put a new material into the scene without dropping frame rates.

  2. In 10 minutes of gameplay, a player could get hit with Garbage Collection (GC) stutters, ruining the gameplay.

  3. I tried a lot of caching strategies—cached lights, materials, and geometry—and thought, okay, this works and looks cool. If I ignored the initial lag and occasional frame drops, it worked like magic: I could walk endlessly in any direction and keep seeing new objects and lights, because they reused cached materials and geometries.

  4. Enter the breaking point: InstancedMesh. I literally never would have guessed that a thing designed for optimization and large scale would be the exact reason why I had to create my own renderer in the end.

  5. You might ask, “Why exactly is this a problem?” As I said, I was caching geometries and materials, and it worked like magic until InstancedMesh got involved. The reason is that InstancedMesh allocates a fixed block of memory with a fixed instance count. Adding or updating an item guarantees a scene rebuild (not to be confused with animating existing instances, which works fine). In an origin-rebasing scenario, you want a setup that takes items dynamically and removes them from the scene to maintain the illusion of an infinite world.
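The usual workaround for that fixed count — over-allocate once and recycle slots instead of resizing — can be sketched as a free-list pool. This is a hypothetical illustration, not what Axion Engine actually ships:

```javascript
// Free-list instance pool: allocate capacity once up front, recycle
// slots as objects stream in and out, and "hide" unused slots by
// zeroing their matrix (a degenerate, invisible instance).
class InstancePool {
  constructor(capacity) {
    this.capacity = capacity;
    this.free = [];
    for (let i = capacity - 1; i >= 0; i--) this.free.push(i);
    // One 4x4 matrix per slot, mirroring InstancedMesh.instanceMatrix
    this.matrices = new Float32Array(capacity * 16);
  }
  acquire() {
    return this.free.length ? this.free.pop() : -1; // -1: pool exhausted
  }
  release(slot) {
    // Collapse the instance so it no longer renders, then recycle the slot
    this.matrices.fill(0, slot * 16, slot * 16 + 16);
    this.free.push(slot);
  }
}
```

The mesh’s `count` never changes, so adds and removes become matrix writes plus one `instanceMatrix.needsUpdate = true` — no reallocation, no scene rebuild. The trade-off is that you pay for the full capacity up front, which was exactly the kind of constraint that pushed me toward my own renderer.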

If anyone wants to see the magic of the Axion Engine—infinite objects—and can be patient through the initial lag and the first 2-3 grid jumps (the time it takes to fully cache materials and geometries), you can check it out here: https://axion-engine.web.app/

Now, not to bore you too much, let me tell you about my custom WebGPU renderer and what I achieved with it:

  1. It’s a fully DOD library that deals exclusively in ArrayBuffers (I believe that’s the only thing that actually makes objects move at the end of the day).

  2. It’s a minimal wrapper so I can experiment freely with it.

  3. I called it “Null-Graph” because of the absence of a scene graph. It does “null” things and has “zero” features out of the box, but in the hands of the right person, you can make your dreams come true with it.
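To make “deals exclusively in ArrayBuffers” concrete, here’s one way such a layout can look — a structure-of-arrays world in a single allocation. This is my own hypothetical example, not Null-Graph’s actual memory format:

```javascript
// Structure-of-arrays over a single ArrayBuffer: one allocation holds
// positions and velocities for n objects, with typed-array views into it.
function makeWorld(n) {
  const FLOATS_PER_OBJECT = 6; // pos xyz + vel xyz
  const buf = new ArrayBuffer(n * FLOATS_PER_OBJECT * 4);
  return {
    count: n,
    pos: new Float32Array(buf, 0, n * 3),
    vel: new Float32Array(buf, n * 3 * 4, n * 3),
    buffer: buf, // the whole world is one transferable, uploadable block
  };
}

// Integrate every object in one tight loop over flat memory: no
// per-object JS allocations per frame, so the GC has nothing to collect.
function step(world, dt) {
  const { pos, vel } = world;
  for (let i = 0; i < pos.length; i++) pos[i] += vel[i] * dt;
}
```

The same buffer can be transferred to a worker or written straight into a GPU buffer, which is what I mean by ArrayBuffers being the only thing that actually makes objects move in the end.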

Why not integrate it with Axion Engine yet, you might ask? Well, the renderer right now is highly coupled with Three/WebGPU, so it’s going to take some time to migrate. For now, I’ve just been playing with my custom renderer, and man, it is sick AF. I posted it on LinkedIn and got random researchers, CEOs, CTOs, and PhD grads liking my posts—it’s super cool. It renders a lot of physics and math papers as part of the experiments.

Also, yes, I used AI to build my projects. In my defense, I feel like I obviously know how to code, but with AI, I save a ton of time that instead gets spent on building intuition for 3D. I might have full-blown AI psychosis, lol, but I guess it’s a ‘pick your poison’ kind of thing in the end. I could try building it all by myself without AI, reading through documentation like the old days, and reach a mediocre result—or I could do it with AI and reach the exact same place in half the time.

What do you guys think about my wild ideas, techniques, and shamelessness when it comes to pushing AI, GPUs, and CPUs to their absolute limits?

That’s a wild journey and honestly pretty inspiring. The progression from a simple Three.js solar system experiment to building your own WebGPU renderer is exactly the kind of obsession that drives engine development. The part about hitting architectural limits like InstancedMesh and scene graph rebuilds is something a lot of people eventually run into when they try to scale toward truly large worlds.

Your idea of moving toward a data-oriented approach and minimizing the scene graph makes a lot of sense for infinite environments. When you start doing origin rebasing, large-scale streaming, and millions of objects, the traditional object-oriented scene structure becomes the bottleneck rather than the GPU.

I’ve been experimenting with similar large scale environments but more focused on persistent social worlds and cities rather than planetary scale systems. The idea is building cities where multiple users can explore the same environment in real time, whether it’s a heritage reconstruction or a modern megacity.

Here’s a small prototype environment I’ve been working on:
https://theneoverse.web.app/#threeviewer&&crateria

It’s nowhere near the scale of your engine experiments, but the goal is similar in spirit: exploring large worlds on the web and making them interactive. Your approach with WebGPU and ArrayBuffer-driven systems is really interesting for pushing that scale much further.

Curious to see where your Axion Engine and Null-Graph renderer go next. If you manage to combine infinite streaming worlds with stable performance in the browser, that would be huge.


I actually visited your project about a month ago; if I recall, I was researching a lot at the time to see whether other people had implemented this. The loading time felt quite high, but overall your approach is cool. I think it’s more centered around multiplayer or shared-exploration activities in a common environment. Honestly, it’s pretty cool, and it somewhat aligns with my end goal for this project.


Thanks for checking it out earlier, I really appreciate you taking the time to explore it and share feedback!

You’re absolutely right about the loading time. That’s something I’ve been actively improving. The project is designed to run even in unstable or low-bandwidth environments, so I’ve been experimenting with aggressive optimization techniques like compressed assets, progressive loading, and lightweight rendering to reduce startup time as much as possible.

And yes, your observation about multiplayer or shared exploration environments is very much aligned with the direction I’m exploring. I’m really interested in the idea of real-time social experiences directly in the browser, where people can interact in a shared 3D space without needing installs or powerful hardware.

It’s awesome that your project is moving toward a similar goal. The web still has a lot of untapped potential for collaborative 3D environments, especially with technologies like Three.js and WebGL pushing things forward.

Also, if you’re ever interested in the technical side of my approach, I’ve been developing extremely fast-loading web systems designed to operate even in unstable conditions. You can check some of that work here: https://theneoverse.web.app/#services

Would love to see how your project evolves as well!
