Tesseract - Open World Planetary Engine

I’d like to introduce one of my spare-time projects I’m working on. It has been going for about a year now and is slowly becoming complete. I’ll post progress updates as it proceeds.

About a year ago I started working on a terrain engine, mainly for volumetric terrain. Half a year ago I started another one, focused on rendering real-scale planets with a very low memory footprint and reasonable performance on low-end machines and mobile devices. At first it was more of a prototype to see how practically this could be done for real-world applications/games, and since I didn’t use papers, it was also a fun challenge to solve the issues you face when developing a planetary renderer.

Some dev clips. The game I’m actually using it for has a simple toon style; I’ll set up a realistic demo environment soon. Culling is disabled for tests, and several post effects and quick motions made the recording tool pixelate/struggle a little too.

Being fast, scalable and flexible is one of the main goals, as it should run on low-end machines as well as mobile devices while requiring as little workload as possible, leaving room for the actual game mechanics. From the resolution of datasets, topology quality, geometry detail and view distance to the LOD techniques, everything adapts to the required targets.

The LOD depth basically has no limit, from meters down to centimeter resolution; it depends on the use case. A low data resolution with a higher LOD depth performs and scales better. Sources can hit precision limits depending on their operating area, which is why they are localized. A 400 km source probe, for example, can already give very high precision, while global sources are mostly relevant for things like weather or rough continents. The example video above uses a global source for the continent shapes.
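As a rough back-of-the-envelope illustration of the precision issue (my numbers, not engine output): single-precision floats get coarser the larger the coordinates are, which is why keeping a source’s operating area small keeps its samples precise.

```javascript
// Spacing between adjacent representable float32 values near magnitude x.
// Float32 has a 24-bit mantissa, so the step is roughly x / 2^23.
function float32Spacing(x) {
  return Math.abs(x) / Math.pow(2, 23);
}

// A localized 400 km source keeps coordinates small and precise:
const localStep = float32Spacing(400e3);   // ≈ 0.048 m, sub-decimeter
// Global coordinates at Earth scale (~6,371 km) are much coarser:
const globalStep = float32Spacing(6371e3); // ≈ 0.76 m
```

That’s why global sources stay reserved for coarse data like continents or weather.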

I’ll describe a few things further below…

Procedurally designed worlds

The goal of this engine is to create universe-scale game worlds based on both procedurally generated and handcrafted datasets. The design tools range from procedural brushes, simulation brushes and clone stamps to more advanced structure-pattern design, which allows the designer to control the look and characteristics of entire real-scale landscapes, making them believable for the desired style or theme within a reasonable timeframe.

Volumetric terrain

Caves, cliffs and overhangs are done via dual contouring for efficient adaptive geometry generation. 3D features are localized: such batches are only processed where volumetric sources exist, forming a hybrid of fast GPU-based 2D terrain and volumetric features. Those features require CPU processing, while the GPU terrain is instant even for fast movements or immediate jumps.

CPU & GPU Rendering

I’ve implemented a CPU renderer which transpiles GLSL to optimized JS shaders. Its main purpose is to pick single samples or to render in workers or server environments. It is also used for raycasting in cases where collision buffers aren’t available; since 2D geometry lives on the GPU, only close terrain is read back for precise face collisions.

Multi-Threading for tasks

Tile and data generation is always GPU-accelerated and works asynchronously by distributing the workload across render cycles. Models and their attachments can be replicated to threads, and the components implement their own optimized synchronization; this is useful for more CPU-intensive tasks.
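A minimal sketch of the render-cycle distribution idea (illustrative only, not the engine’s actual scheduler): each frame gets a fixed time budget, and pending generation tasks run until it’s used up, so heavy work never stalls a single frame.

```javascript
// Spread queued tasks across frames with a per-frame time budget.
class FrameTaskQueue {
  constructor(budgetMs = 4) {
    this.budgetMs = budgetMs;
    this.tasks = [];
  }
  push(task) {
    this.tasks.push(task); // task: () => void
  }
  // Call once per render cycle, e.g. from the render loop.
  run() {
    const deadline = performance.now() + this.budgetMs;
    while (this.tasks.length && performance.now() < deadline) {
      this.tasks.shift()();
    }
  }
}
```

`queue.run()` would be called once per render cycle; tasks that don’t fit into the budget simply continue in the next frame.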

GPU accelerated multi-layer content generation

The IndexedVolume has a layers-and-chunks architecture: layers are basically separate octrees for different kinds of content, while chunks are used to dynamically load/stream and save larger portions of the world’s content. By default there is a dynamic layer for actively changing objects such as players, enemies or user-created content, and a static layer that is purely used by static assets such as automatically placed foliage.

Content generation is fully GPU-accelerated, from distribution and selection to placement. The asset repository is automatically indexed, creating a stub LUT of the IDs and their biome values, so they can be selected naturally by the biome data.

Multiple layers with different chunk levels (sizes) and ranges/cache ranges can be defined; tags are given to build the layer’s specific stub LUT with all assets available to it. A layer may use volume-indexing (inserting the objects into the static layer of the IndexedVolume) or layer-indexing, which is the fastest option, culling only by the tile that cast it. For the highest-LOD layers, such as grass and smaller foliage, insertion is instant.
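A hypothetical layer setup could look like this (property names are my own illustration of the description above, not the engine’s published API):

```javascript
// Two layers: a volume-indexed one for larger static assets, and a
// fast layer-indexed one for grass and small foliage.
const layers = [
  {
    name: 'vegetation',
    tags: ['tree', 'bush'],        // selects assets for this layer's stub LUT
    chunkLevel: 4,                 // larger chunks, larger range
    range: 2000, cacheRange: 3000, // view / cache distance
    indexing: 'volume'             // inserted into the IndexedVolume's static layer
  },
  {
    name: 'grass',
    tags: ['grass', 'small-foliage'],
    chunkLevel: 1,                 // small chunks, short range
    range: 120, cacheRange: 200,
    indexing: 'layer'              // culled only by the casting tile, instant insertion
  }
];
```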


These localized or global puppies actually feed the generator to write tiles of datasets. They can perform blending operations with each other, use a falloff, and access other datasets to generate their own, like depth or normals. Sources can be locally restricted, so each tile only renders the sources that contain it, and the falloff mask additionally reduces the areas a source doesn’t contribute to.

The same goes for pixel density: for example, sources describe continents with millions of smaller details, which are streamed (I/O) based on the LOD. This is similar to how landscapes are brushed, which in turn is similar to how world-map systems like Google Maps work.
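For reference, this is the standard slippy-map tile addressing that systems like Google Maps/OSM use (the generic formula, not engine code): each zoom level z divides the world into 2^z × 2^z tiles, so only the tiles in view at the current LOD need to be streamed.

```javascript
// Convert longitude/latitude to web-map tile coordinates at zoom z.
function lonLatToTile(lon, lat, z) {
  const n = Math.pow(2, z);                    // tiles per axis at this zoom
  const x = Math.floor(((lon + 180) / 360) * n);
  const latRad = (lat * Math.PI) / 180;        // Web-Mercator projection for y
  const y = Math.floor(
    ((1 - Math.log(Math.tan(latRad) + 1 / Math.cos(latRad)) / Math.PI) / 2) * n
  );
  return { x, y, z };
}
```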

Using satellite data

I haven’t had time yet to use the Mapbox API for some experiments, but basically it’s quite easy to use satellite data. I looked into the Mapbox API, as it is free up to a certain traffic limit and provides depth and normal tiles. I’ll get to that sooner or later.


Datasets are defined for the tile manager and are used for any kind of spatial information, such as depth for heightmap data, biomes, or simulating weather. The atlases are automatically managed and merged for multi-framebuffers, and use a chunk size matching what the GPU works well with, for example.

Scene management

The scene is managed and rendered through the IndexedVolume component. It provides interfaces for a lot of use cases, like efficient object access and collisions, but will also hierarchically cull nodes of objects. An impostor renderer turns distant objects into 3D impostors, and a dataset module renders implied massive dense areas of assets, like forests, by using their average color.

Images from development

A massive flattening probe, to flatten and level an area to build a town on.

Dynamic weather component with realtime clouds, also using the atmosphere system below. The idea is to make it an all-in-one solution that also enables hybrid usage, such as semi-volumetric from the bottom transitioning into fully volumetric when approaching the clouds. The clouds seamlessly soft-fade against intersecting mountains or anything else in the scene.

Automatic global biomes, a globally configurable probe.


A new fast, physically abstracted atmosphere; the atmosphere is rendered to a cubemap which is meant to be used as an environment map.

The map maintains its own state/tiles with different settings, but shares the underlying contents such as content and probe attachments, data-layer setup, materials, biomes etc. The max depth is cut in half to reduce detail; it directly uses the original data-layer pipeline and a game-specific stylized shader. It’s also going to render roads/paths and display labels and symbols of contents depending on the zoom level.

2018 / 07 / 06

The recent half year I was rather busy with work and other things, but I also worked on content and collision feedback, which is a quite complex task if you want precise polygon collisions done efficiently.

Besides some more optimizations, I’ll implement a weather module, including simulation of wind and temperature. I’m also adding volumetric clouds similar to the game “Horizon Zero Dawn” and generally realistic materials and atmosphere by default. But I’d rather focus on performance than on making everything physically correct.

One important aspect of the engine is having global information, such as using wind to influence objects, or objects influencing water with waves or grass, as well as for rather simple but effective global illumination.

2019 / 06 / 13

Recently I’ve been working on improving near details and restoring CSM (cascaded shadow maps), but I also want to fix shimmering and transitions for it. For large scenes, CSM is simply a requirement.

Using texture arrays also improved the quality of terrain materials; the default atlas technique will only be used as a fallback.

I’m currently adding a default system for generating paths/roads, including crossings. Profiles with materials can be defined and assigned to segments. I figured out a backface depth-buffer technique that prevents z-fighting and geometry conflicts even with flat plane profiles on the terrain.

I’m also experimenting with instanced chains/parts, since it saves the entire extra geometry buffers and generation cost, being instantly available.

Also added parallax occlusion as a terrain material option. It doesn’t require any more data than the depth/bump map and works out of the box, but is quite a bit more costly in terms of performance. I’m also thinking about adding a modular way to add custom shaders; this way raymarched materials could be used, such as for grass. It would split the affected patches into a separate draw call.

2021 / 06 / 4
I’ve been quite busy recently, and the project that uses the engine also consumes time. I might start a Patreon soon so I can spend more time on it, as it’s a free-time project; with a recent milestone it shouldn’t be too much longer.

I recently finished a couple more technical things, such as a tech I made years ago but couldn’t get to work at that time. It creates a tiny 8-bit map of assets for “Volume Hull Impostors”, which are 3D impostors that have the same visual appearance as the original model but don’t consume a huge amount of memory, which allows every larger asset to be rendered as such.

Here’s an example: the lantern with the axis helper is rendered as a mesh up close, and as an impostor once the Y axis is visible.

And here in another test (the trees in the distance are impostors), another feature I added to the static component is a sort of density-based ambient occlusion, since even though every tree casts shadows, it wouldn’t give the feel of a real dark forest, even in daylight.

The terrain material also has improved depth-based splatting now; with the technique enabling up to 256 materials, it was a bit more complicated than I thought :sweat_smile:
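For anyone curious, here’s the core of depth-based splatting in a nutshell (a JS sketch of the widely used technique, not my actual shader): each material’s height biases the blend, so only the top band of the two height profiles mixes, letting e.g. stones poke through sand instead of a mushy linear fade.

```javascript
// Blend two material colors using their heightmaps and the splat weight
// of the second material. Only the top `depth` band contributes.
function depthBlend(color1, height1, color2, height2, weight2, depth = 0.2) {
  const h1 = height1 + (1 - weight2); // height biased by splat weight
  const h2 = height2 + weight2;
  const ma = Math.max(h1, h2) - depth; // cutoff below the highest surface
  const b1 = Math.max(h1 - ma, 0);
  const b2 = Math.max(h2 - ma, 0);
  const sum = b1 + b2;
  return color1.map((c, i) => (c * b1 + color2[i] * b2) / sum);
}
```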

Materials with decoration sets, such as grass, will also derive their tint from the ground they blend with.

The paths/roads system will also use the bump/depth texture to mask with the terrain, but it won’t compete with the terrain’s materials; rather, height influences can alter parts to be more or less visible, giving more variation, or let open paths blend out.


So coool!!


This is awesome! :grin:

Can’t wait to see where you go with this.


If you want to try using satellite data, I’d recommend looking at the terrain tiles on AWS.
There are no usage limits as far as I know, and no API auth is needed.


This is simply great!:+1::+1::+1:
Can’t wait to see final result :astonished:


Thanks, I’ll look into it, though I doubt it’s really free :thinking:

This looks amazing. Very well done!

I have to ask though, how did you achieve this fog effect?

I’ve been looking for a (post-processing) shader that allows me to create this effect, but have been unsuccessful so far. Would you care to share this or point me in the right direction?

Thanks in advance!


The fog is a low-res RenderTarget; I use the height of the terrain (contribution increasing with distance) and the depth buffer in the post shader to blend it with the atmosphere, which is rendered in a separate, smaller target. So you basically render the fog into a mask to blend the scene with the background, if any.

To extend the default fog in the shaders without any postprocessing, you could just use the Y axis for the height, if that already fits your needs.
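As a rough sketch of that blend (my own simplification, with made-up constants): fog contribution grows with scene depth and thins with height above the terrain, and the resulting factor masks the scene toward the atmosphere color.

```javascript
// 0 = pure scene color, 1 = pure atmosphere/background color.
function fogFactor(depth, heightAboveTerrain, density = 0.0015, heightFalloff = 0.01) {
  const distanceFog = 1 - Math.exp(-depth * density);            // thicker with distance
  const heightFade = Math.exp(-Math.max(heightAboveTerrain, 0) * heightFalloff); // thins with altitude
  return Math.min(Math.max(distanceFog * heightFade, 0), 1);
}

// Final color per pixel would then be mix(sceneColor, atmosphereColor, fogFactor(...)).
```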

It would require some changes to use this with the THREE.EffectComposer, sorry. I’m using a custom framework on top of THREE, not the EffectComposer, for example.


I’ve recently added a new model component, the LODPlaneStatic, which is used for finite or infinite chunked landscapes based on static data instead of procedural sources. Its purpose is high-performance vast landscapes without the cost of procedural generation/composition of sources. The terrain is instant, without generations/allocations, regardless of motion or immediate jumps. The terrain is hierarchically culled and rendered in a single draw call, for minimal cost. As a side note, “renderedObjects” means visible objects; the trees in the scene are just 1-2 draw calls with auto-instancing.

It uses the same scalable splatting technique supporting up to 256 materials, with normal, roughness, displacement and all features of the MeshStandardMaterial, plus per-pixel depth-based blending instead of only fading between materials.

Like the LODSphere model used for planets, LODPlaneStatic supports attachments; for example, another one can be added for water with its own LOD for geometric waves.

The pixel-to-world-unit ratio can be stretched without artifacts using bicubic interpolation, also larger than terrains in Unity or Unreal, depending on the resolution used and the required detail control. It uses an additional modulator map with different prefabbed noise sources and composes a more detailed and varied surface in the sub-pixel range. The default tessellation already suffices for displacement details such as on rocks.
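The interpolation could be sketched like this with a Catmull-Rom kernel (a common bicubic choice; I’m not claiming it’s the exact kernel used here). Unlike bilinear filtering, it yields smooth gradients when one heightmap texel covers many world units, avoiding faceted terrain.

```javascript
// 1D Catmull-Rom spline through p1..p2, with p0/p3 as neighbors, t in [0,1].
function catmullRom(p0, p1, p2, p3, t) {
  return 0.5 * (
    2 * p1 +
    (p2 - p0) * t +
    (2 * p0 - 5 * p1 + 4 * p2 - p3) * t * t +
    (3 * p1 - 3 * p2 + p3 - p0) * t * t * t
  );
}

// Bicubic sample of a 4x4 neighborhood `patch[row][col]`, tx/ty in [0,1].
function bicubic(patch, tx, ty) {
  const rows = patch.map(r => catmullRom(r[0], r[1], r[2], r[3], tx));
  return catmullRom(rows[0], rows[1], rows[2], rows[3], ty);
}
```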

Additionally, there is an attachment for this component which decorates the near surroundings around the camera, faded away with distance. These decorations are added per material and automatically generated (threaded) according to the material weights. A distribution function can be implemented per decoration. The decorations are processed in blocks and are either instanced or single objects. In the picture above, grass across the entire landscape is added this way. The fading technique can be custom, scale and/or opacity, and dynamic features such as wind can be added.


This object boosts large scenes, for rendering as well as for accessing their content. It takes over the projection of the renderer and renders the scene with hierarchical and distance culling. matrixAutoUpdate is disabled by default; the idea is to touch as few objects as possible per frame. The updating logic is performed outside, and an explicit update call will efficiently reindex the object and update its matrix.

I’ll post a video soon to demonstrate the difference from a linearly rendered scene. The difference can go from a few FPS / a crash without it, to a stable 60 FPS with the IndexedVolume.

IndexedVolume: Auto Instancing

Geometries or materials can enable dynamic auto-instancing, which reduces the draw calls per asset to 1 while culling is still performed. Two approaches are supported: the cheaper offset+orientation, or the full matrix. Any material, standard or custom, is supported. Nothing has to be considered, except whether it makes sense to enable instancing for a model.

IndexedVolume: Auto Impostor

For larger assets, relevant at big distances, the autoImpostor flag on the geometry or material will render distant objects as 9-angle impostor billboards; a larger group of objects sharing a node level is batched into an instanced block.

Mesh Packing

This is a function that will not only merge the geometries of a mesh hierarchy into one, but also pack its materials and textures into a single one. It supports all map types, multi-material groups and repeated textures. The result is a single geometry, material and texture per type (diffuse, normal, roughness etc.).

(Sponza model, packed into a single geometry, material and texture)

(CS italy)

Notice that maps/scenes like the above aren’t the best-suited case; they’re just for demonstration. Better suited are models with a lot of smaller parts/materials, which drag down performance if they are kept separate for culling. It additionally has options to optimize texture sizes or generally scale them down. A lower resolution can be generated, for example, to create a proxy-impostor model, like for a larger building.

Besides preprocessing of assets, it can be enabled in an IndexedVolume for runtime conversion, since assets which are instanced should be a single geometry and material; otherwise there would be an instancing stack for each object in their hierarchy.

NITRO (a THREE based engine)

This is an engine I made on top of THREE; I will create an extra thread for it soon. It basically extends THREE with some features and WebGL2, and adds a standard mechanism for tags (controllers), a ROM (render object model), and surfaces automatically allocating and scaling RTs of a ROM pipeline.

It has a unified input system for devices such as keyboard, mouse, gamepad and virtual gamepad. A fast sliding-sphere-to-polygon collider system without a physics engine and a “root” concept for raycasting are just some of the major features of this engine.

I mention it since Tesseract requires some modifications in the core of THREE, for example to take over the projection routine.

I also mention it since it adds support for environment-map fog and global uniforms, to fade with an actual sky/atmosphere instead of a single color. The environment map is rendered as the background, and the environment-map fog blends objects into it, besides its regular job as environmentMap.


this is some super impressive work!


Thank you @pailhead

Some details I missed about the IndexedVolume: it can be used in existing projects with almost no effort; only the updating of moving objects needs the explicit update, and tweaks are optional. It would be nice to test it with some large-world projects.

The updating logic can generally be improved this way, such as simplifying it once the object isn’t visible anymore (every object can tell if it was visible last frame, even if it was already culled by a node branch), by frustum or distance. This is useful for objects that can skip costly updates and resume anytime, like pausing particle emitters, animations, sounds or other costly processes. Especially particles and animations can skip their most costly part, only updating the time state.
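The skipping pattern could look roughly like this (an illustrative component, with `wasVisible` standing in for the per-object visibility flag described above):

```javascript
// Keep cheap state (the clock) current every frame, but only perform
// the costly work while the object was visible in the last frame.
class ParticleEmitterController {
  constructor(object) {
    this.object = object; // assumed to expose `wasVisible` after culling
    this.time = 0;
    this.simulated = 0;   // counts costly simulation steps, for illustration
  }
  update(delta) {
    this.time += delta;                  // cheap: always advance the time state
    if (!this.object.wasVisible) return; // skip simulation while culled
    this.simulated++;                    // costly part: spawn/advance particles here
  }
}
```

When the object becomes visible again, the emitter resumes with a correct time state instead of having run every frame.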

Exporting a scene with this object will create a binary index, so it loads fast without re-indexing its content. Indexing depends on content size and density; it optimizes the spatial tree and will discard smaller objects in larger nodes.


That’s very impressive! I hope to reach your level one day.


Impressive and beautiful work! The rocks have very natural details. (The trees could be less cross-shaped and more “golden ratio Fermat spiral-shaped”, though.)

Will any of this be free or open-source?

So coool!!
It’s cooler if it’s open source.:yum:

This sounds absolutely awesome! Fantastic work.


WHEN is this coming out?

Which one are you referring to? I’ll release the IndexedVolume separately from Tesseract.

@Fyrestar The engine you made

I’ll release the IndexedVolume soon and Tesseract after that, but I can’t give an exact date yet since I can only work on it in my spare time.


I’m very interested in the IndexedVolume. How soon will it be released?
