Open World Planetary Engine

planetary-rendering
open-world
dual-contouring
volumetric-terrain
engine

#1

I’d like to introduce one of my spare-time projects I’m working on. It’s been going for about a year now and is slowly becoming complete. I’ll drop progress updates as it proceeds.

About a year ago I started working on a terrain engine, mainly for volumetric terrain. Half a year ago I then started another one, focusing on rendering real-scale planets with a very low memory footprint and reasonable performance on low-end machines and mobile devices. At first it was more of a prototype to see how practically this could be done for real-world applications/games, and since I didn’t use papers, it was also a fun challenge to solve the issues you face when developing a planetary renderer.

Some dev clips. The game I’m actually using it for has a simple toon style; I’ll set up a realistic demo environment soon. I disable culling for tests, and several post effects and quick motions made the recording tool pixelate/struggle a little, too.

Being fast, scalable and flexible are the main goals, as it should also run on low-end machines as well as mobile devices while requiring as little workload as possible, leaving room for the actual game mechanics. Everything, from dataset resolution, topology quality and geometry detail to view distance and LOD techniques, adapts to the required targets.

The LOD depth basically has no limit, from meter down to centimeter resolution; it depends on the use case. A low data resolution with a higher LOD depth performs and scales better. Sources can hit precision limits depending on their operating area, which is why they are localized. A 400 km source probe, for example, can already give very high precision, while global sources are mostly relevant for things like weather or rough continents. The example video above uses a global source for the continent shapes.
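
To illustrate why localization helps: float32, which the GPU works with, only has about 7 significant digits, so sampling in a frame local to a probe keeps coordinates small. A minimal sketch (the probe/source shapes are just illustrative, not the engine’s actual API):

```js
// Hypothetical sketch: sample a source relative to a local probe origin.
// At planetary distances (~1e7 m) absolute float32 positions lose sub-meter
// precision; a small local offset does not.

const probe = {
  origin: { x: 5250000, y: 0, z: 1830000 }, // placed near the play area
  radius: 400000                            // 400 km operating area
};

function sampleLocalized(source, worldX, worldZ) {
  // Offset into the probe's local frame while still in float64 (JS numbers);
  // the resulting small values are safe to hand to a float32 GPU pipeline.
  const lx = worldX - probe.origin.x;
  const lz = worldZ - probe.origin.z;
  return source.noise2D(lx, lz); // any 2D noise implementation
}
```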

I’ll describe a few things further below…

Procedurally designed worlds

The goal of this engine is to create universe-scale game worlds based on both procedurally generated and handcrafted datasets. The design tools range from procedural brushes, simulation brushes and cloning stamps to more advanced structure-pattern design, which allows the designer to control the look and characteristics of entire real-scale landscapes and make them believable for the desired style or theme, within a reasonable timeframe.

Volumetric terrain

Caves, cliffs and overhangs are done via dual contouring for efficient adaptive geometry generation. 3D features are localized: batches are only processed where volumetric sources exist, giving a hybrid of fast GPU-based 2D terrain and volumetric features. The volumetric features require CPU processing, while the GPU terrain is instant, even for fast movements or immediate jumps.
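
A rough sketch of that hybrid decision, with illustrative names (the bounds are assumed to be THREE.Box3, and the task shape is made up):

```js
// Hypothetical sketch: chunks covered by a volumetric source get queued for
// CPU dual contouring in a worker, everything else stays on the fast
// GPU heightmap path.

function classifyChunk(chunk, volumetricSources, workerQueue) {
  const needsVolume = volumetricSources.some(
    src => src.bounds.intersectsBox(chunk.bounds)
  );
  if (needsVolume) {
    // CPU path: extract an adaptive mesh (dual contouring) off the main thread.
    workerQueue.post({ type: 'dualContour', bounds: chunk.bounds, lod: chunk.lod });
  } else {
    // GPU path: displace a shared grid patch by the heightmap, no allocation.
    chunk.useGpuHeightfield = true;
  }
}
```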

CPU & GPU Rendering

I’ve implemented a CPU renderer which transpiles GLSL to optimized JS shaders. The main purpose is to pick single samples or to render in workers or server environments. It is also used for raycasting in cases where collision buffers aren’t available: since the 2D geometry lives on the GPU, only close terrain is read back for precise face collisions.
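
As a stripped-down illustration of why that’s useful (the transpile call and the GLSL body below are placeholders, not the real transpiler):

```js
// Hypothetical sketch: evaluate on the CPU the same height function the GPU
// shader displaces with, so single samples (e.g. for a raycast) need no readback.

const heightGLSL = `
  float height(vec2 p) {
    return fbm(p * 0.001) * 800.0; // same noise the vertex shader uses
  }
`;

// A transpiler would turn the GLSL above into a plain JS function; here we
// just assume the result for illustration.
const heightJS = transpile(heightGLSL); // -> (x, z) => number

function raymarchTerrain(origin, dir, maxDist, step = 1.0) {
  // Coarse raymarch against the CPU height function.
  for (let t = 0; t < maxDist; t += step) {
    const x = origin.x + dir.x * t;
    const y = origin.y + dir.y * t;
    const z = origin.z + dir.z * t;
    if (y <= heightJS(x, z)) return t; // first sample below the surface
  }
  return -1; // no hit
}
```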

Multi-Threaded

Modules are replicated to workers, and any CPU content-generation task goes there. Content management, as well as AI processing, is also done by workers instead of the main thread.
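
The pattern is the usual worker offload; a minimal sketch with a made-up task shape:

```js
// worker.js — a replicated generation module (illustrative)
self.onmessage = ({ data }) => {
  if (data.type === 'generateTile') {
    const tile = new Float32Array(data.size * data.size);
    // ... fill the tile from the replicated sources here ...
    self.postMessage({ id: data.id, tile }, [tile.buffer]); // transfer, no copy
  }
};

// main thread — round-robin over a small pool
const pool = Array.from({ length: 4 }, () => new Worker('worker.js'));
let next = 0;

function generateTile(size, onDone) {
  const worker = pool[next++ % pool.length];
  const id = Math.random();
  const handler = ({ data }) => {
    if (data.id !== id) return;
    worker.removeEventListener('message', handler);
    onDone(data.tile);
  };
  worker.addEventListener('message', handler);
  worker.postMessage({ type: 'generateTile', id, size });
}
```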

Sources

These localized or global puppies feed the generator, which writes tiles of datasets. Sources can perform basic boolean operations with each other, use a falloff and access other datasets to generate their own, like depth or normals. Sources can be locally restricted, so each tile only renders the sources that contain it; the falloff mask additionally excludes areas the source doesn’t contribute to.

This also goes for pixel density: for example, sources describe continents with millions of smaller details, which are streamed in and out based on the LOD. This is similar to how landscapes are brushed, which in turn is similar to how world map systems like Google Maps work.
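
Put together, per-tile composition looks roughly like this (illustrative names, not the engine’s actual interfaces):

```js
// Hypothetical sketch: each active source contributes through a boolean op
// and a radial falloff mask; global sources always contribute.

function composeTile(tile, sources) {
  const active = sources.filter(
    s => s.global || s.bounds.intersectsBox(tile.bounds)
  );
  for (let i = 0; i < tile.height.length; i++) {
    const [x, z] = tile.worldCoord(i); // assumed helper: index -> world XZ
    let h = 0;
    for (const src of active) {
      const sample = src.sample(x, z); // e.g. noise, stamp, another dataset
      const w = src.global ? 1 : falloff(src, x, z);
      switch (src.op) {
        case 'add':      h += sample * w; break;
        case 'subtract': h -= sample * w; break;
        case 'max':      h = h * (1 - w) + Math.max(h, sample) * w; break;
      }
    }
    tile.height[i] = h;
  }
}

function falloff(src, x, z) {
  // Smooth radial mask: 1 at the source center, 0 at its radius.
  const d = Math.hypot(x - src.center.x, z - src.center.z) / src.radius;
  const t = Math.min(Math.max(1 - d, 0), 1);
  return t * t * (3 - 2 * t); // smoothstep
}
```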

Using satellite data

I haven’t had time yet to run some experiments with the Mapbox API, but basically it’s quite easy to use satellite data. I looked into the Mapbox API since it’s free up to a certain traffic limit and provides depth and normal tiles. I’ll look into that sooner or later.

Datasets

Datasets are defined for the tile manager and are used for any kind of spatial information, such as depth for heightmap data, biomes or weather simulation. The atlases are automatically managed, merged for multi-framebuffers, and use a chunk size the GPU works well with, for example.
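
The atlas bookkeeping itself can be as simple as fixed-size slots; a sketch under that assumption (sizes are examples):

```js
// Hypothetical sketch of a tile atlas: fixed-size chunks packed into one
// texture, so tiles are addressed by a slot index instead of own textures.

const CHUNK = 256;  // GPU-friendly chunk size (power of two)
const ATLAS = 4096; // atlas texture resolution
const SLOTS = (ATLAS / CHUNK) ** 2;

const free = Array.from({ length: SLOTS }, (_, i) => i);

function allocateSlot() {
  if (!free.length) throw new Error('atlas full, evict a least-recently-used tile');
  const slot = free.pop();
  const perRow = ATLAS / CHUNK;
  // UV offset of this chunk inside the atlas, for the sampling shader.
  return {
    slot,
    u: (slot % perRow) * CHUNK / ATLAS,
    v: Math.floor(slot / perRow) * CHUNK / ATLAS
  };
}

function releaseSlot(slot) { free.push(slot); }
```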

Scene management

The scene is managed and rendered through an octree manager (replicated from a worker). It provides an interface for many use cases like efficient object access and collisions, but will also hierarchically cull nodes of objects. An impostor renderer turns distant objects into 3D impostors, and a dataset module renders implied, massively dense areas of assets like forests by using their average color.
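
The hierarchical culling part boils down to rejecting whole subtrees at once; a minimal sketch against plain THREE (the node shape `{ bounds, children, objects }` is assumed):

```js
import * as THREE from 'three';

// Reject a whole subtree as soon as its bounds leave the frustum.
function collectVisible(node, frustum, out) {
  if (!frustum.intersectsBox(node.bounds)) return; // subtree rejected
  for (const obj of node.objects) out.push(obj);   // coarse accept per node
  for (const child of node.children) collectVisible(child, frustum, out);
}

// Per frame:
const frustum = new THREE.Frustum();
const m = new THREE.Matrix4();

function cull(camera, root) {
  m.multiplyMatrices(camera.projectionMatrix, camera.matrixWorldInverse);
  frustum.setFromProjectionMatrix(m);
  const visible = [];
  collectVisible(root, frustum, visible);
  return visible;
}
```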

Current state

At the moment, content layout generation is almost complete. That was one of the more difficult things recently. I’m focusing on editing tools now, as well as setting up a demo with a realistic environment.

I’ll post an update on content generation soon. As progress goes on I can make more informative videos; most of the time I just randomly record things to compare with later changes.

2018 / 07 / 06

The past half year I was rather busy with work and other things, but I also worked on content and collision feedback; efficiently enabling precise polygon collisions is a quite complex task.

Besides some more optimizations, I’ll implement a weather module, including simulation of wind and temperature. I’m also adding volumetric clouds similar to the game “Horizon Zero Dawn”, and generally realistic materials and atmosphere by default. But I’d rather focus on performance than on making everything physically correct.

One important aspect of the engine is having global information, such as using wind to influence objects, or objects influencing water (waves) or grass, as well as for rather simple but effective global illumination.


#2

So coool!!


#3

This is awesome! :grin:

Can’t wait to see where you go with this.


#4

If you want to try using satellite data, I’d recommend looking at the terrain tiles on AWS.
There are no usage limits as far as I know, and no API auth needed.


#5

This is simply great! :+1::+1::+1:
Can’t wait to see the final result :astonished:


#6

Thanks, I’ll look into it, though I doubt it’s really free :thinking:


#7

This looks amazing. Very well done!

I have to ask though, how did you achieve this fog effect?

I’ve been looking for a (post-processing) shader that allows me to create this effect, but have been unsuccessful so far. Would you care to share it or point me in the right direction?

Thanks in advance!


#8

The fog is a low-res RenderTarget: I use the height of the terrain (contribution increasing with distance) and the depth buffer in the post shader to blend it with the atmosphere, which is rendered in a separate, smaller target. So you basically render the fog into a mask to blend the scene with the background, if any.

To extend the default fog in the shaders without any post-processing, you could just use the Y axis for the height, if that already fits your needs.

It would require some changes to use this with the THREE.EffectComposer, sorry. I’m using a custom framework on top of THREE, not the EffectComposer, for example.
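
To give an idea of the height part without my framework, here is a minimal GLSL sketch of distance + height fog (this is not my actual shader, just the concept):

```js
// Minimal sketch: fog thickens with view distance and thins with altitude,
// blended toward a sky/atmosphere color. Uniform names are illustrative.
const heightFogChunk = /* glsl */ `
  uniform vec3 fogColor;      // sampled from / matched to the sky
  uniform float fogDensity;   // distance contribution
  uniform float fogHeight;    // altitude scale of the fog layer
  uniform float fogFalloff;   // how fast fog thins with height

  vec3 applyHeightFog(vec3 color, vec3 worldPos, vec3 cameraPos) {
    float dist = distance(worldPos, cameraPos);
    float distFactor = 1.0 - exp(-dist * fogDensity);
    float heightFactor = exp(-max(worldPos.y, 0.0) / fogHeight * fogFalloff);
    return mix(color, fogColor, clamp(distFactor * heightFactor, 0.0, 1.0));
  }
`;
```

You could inject something like this into a standard material via onBeforeCompile, or use it in a custom ShaderMaterial’s fragment shader.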


#9

I’ve recently added a new model component, the LODPlaneStatic, which is used for finite or infinite chunked landscapes based on static data instead of procedural sources. Its purpose is high-performance vast landscapes without the cost of procedural generation / composition of Sources. The terrain is instant, without generation or allocations, regardless of motion or immediate jumps. The terrain is hierarchically culled and is a single draw call, for minimal cost. As a side note, “renderedObjects” means visible objects; the trees in the scene are just 1-2 draw calls with auto-instancing.

It uses the same scalable splatting technique, supporting up to 256 materials, with normal, roughness, displacement and all features of the MeshStandardMaterial, plus per-pixel depth-based blending instead of only fading between materials.
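
Depth-based splat blending is a known trick: roughly, the material whose height sample “sticks out” the most wins the transition zone instead of a flat crossfade. A sketch of the idea (not my actual shader):

```js
// Illustrative GLSL chunk: w0/w1 are splat weights, h0/h1 are the materials'
// height/displacement samples; the higher surface dominates the blend.
const splatBlendChunk = /* glsl */ `
  vec3 blendByDepth(vec3 c0, float w0, float h0, vec3 c1, float w1, float h1) {
    float d = 0.2; // transition depth
    float ma = max(h0 + w0, h1 + w1) - d;
    float b0 = max(h0 + w0 - ma, 0.0);
    float b1 = max(h1 + w1 - ma, 0.0);
    return (c0 * b0 + c1 * b1) / (b0 + b1);
  }
`;
```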

Like the LODSphere model used for planets, LODPlaneStatic supports attachments; for example, another one can be added for water, with its own LOD for geometric waves.

The pixel-to-world-unit ratio can be stretched without artifacts using bicubic interpolation, even larger than the terrains of Unity or Unreal, depending on the used resolution and the required detail control. It uses an additional modulator map with different prefabbed noise sources and will compose a more detailed and varied surface in the sub-pixel range. The default tessellation already suffices for displacement details such as on rocks.

Additionally, there is an attachment for this component which decorates the near surroundings around the camera, fading away with distance. These decorations are added per material and automatically generated (threaded) according to the material weights. A custom distribution function per decoration can be implemented. The decorations are processed in blocks and are either instanced or single objects. In the picture above, grass across the entire landscape is added this way. The fading technique can be custom, scale and/or opacity, and dynamic features such as wind can be added.

IndexedVolume

This object boosts large scenes, for rendering as well as for accessing their content. It takes over the projection of the renderer and renders the scene with hierarchical and distance culling. matrixAutoUpdate is disabled by default; the idea is to touch as few objects per frame as possible. The updating logic is performed outside, and an explicit update call will efficiently reindex the object and update its matrix.
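
As a usage sketch of that explicit-update idea (the IndexedVolume API below is illustrative, since the engine isn’t public):

```js
import * as THREE from 'three';

const scene = new THREE.Scene();
const volume = new IndexedVolume(); // assumed constructor
scene.add(volume);

const crate = new THREE.Mesh(
  new THREE.BoxGeometry(1, 1, 1),
  new THREE.MeshStandardMaterial()
);
crate.matrixAutoUpdate = false;     // nothing is touched per frame
volume.add(crate);

// In game logic, only when the crate actually moves:
crate.position.set(10, 0, -4);
crate.updateMatrix();               // regular THREE call
volume.update(crate);               // assumed: reindexes it into the spatial tree
```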

I’ll post a video soon to demonstrate the difference compared to a linearly rendered scene. The difference can go from a few FPS (or a crash) without it to a stable 60 FPS with IndexedVolume.

IndexedVolume: Auto Instancing

Geometries or materials can enable dynamic auto-instancing, which reduces the draw calls per asset to 1 while culling is still performed. Two approaches are supported: a cheaper offset + orientation, or a full matrix. Any material, standard or custom, is supported. Nothing has to be considered, except whether it makes sense to enable instancing for a model.

IndexedVolume: Auto Impostor

For larger assets relevant at big distances, the autoImpostor flag on the geometry or material will render distant objects as 9-angle impostor billboards; a larger group of objects sharing a node level is batched into an instanced block.

Mesh Packing

This is a function that will not only merge the geometries of a mesh hierarchy into one, but also pack its materials and textures into a single one. It supports all map types, multi-material groups and repeated textures. The result is a single geometry, material and texture per map type (diffuse, normal, roughness etc.).

(Sponza model, packed into a single geometry, material and texture)

(CS Italy)

Notice that maps/scenes like the ones above aren’t the best-suited case; they’re just for demonstration. Better suited are models with a lot of smaller parts/materials, which drag down performance if they stay separate for culling. It additionally has options to optimize texture sizes or generally scale them down. A lower resolution can be generated, for example, to create a proxy-impostor model, like of a larger building.

Besides preprocessing assets, it can be enabled in an IndexedVolume for runtime conversion, since assets which are instanced should be a single geometry and material; otherwise there will be an instancing stack for each object in their hierarchy.
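
The packer itself isn’t public, but the geometry half can be approximated with three’s own BufferGeometryUtils; the texture atlas packing and UV remapping are the hard parts and are only hinted at here:

```js
import * as THREE from 'three';
import { mergeGeometries } from 'three/addons/utils/BufferGeometryUtils.js';
// (In older three versions the helper is called mergeBufferGeometries.)

// Rough approximation of the geometry half only. Assumes all meshes share the
// same attribute layout; a real packer would also build a texture atlas,
// remap each mesh's UVs into its atlas region and merge the materials.
function naivePack(root) {
  const geometries = [];
  root.updateMatrixWorld(true);
  root.traverse(node => {
    if (!node.isMesh) return;
    const g = node.geometry.clone();
    g.applyMatrix4(node.matrixWorld); // bake the world transform
    geometries.push(g);
  });
  // useGroups = false: one draw range, assuming one shared (packed) material.
  const merged = mergeGeometries(geometries, false);
  return new THREE.Mesh(merged, new THREE.MeshStandardMaterial());
}
```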

NITRO (a THREE-based engine)

This is an engine I made on top of THREE; I will create an extra thread for it soon. It basically extends THREE with some features and WebGL2, and adds a standard mechanism for tags (controllers), a ROM (render object model), and surfaces that automatically allocate and scale the render targets of a ROM pipeline.

It has a unified input system for devices such as keyboard, mouse, gamepad and virtual gamepad. A fast sliding-sphere-to-polygon collider system without a physics engine, and a “root” concept for raycasting, are just some of the major features of this engine.

I mention it since Tesseract requires some modifications in the core of THREE, for example to take over the projection routine.

I also mention it since it adds support for environment-map fog and global uniforms, to fade with an actual sky/atmosphere instead of a single color. The environment map is rendered as the background, and the environment-map fog blends objects into it, besides its regular job as the environmentMap.


#10

This is some super impressive work!


#11

Thank you @pailhead

Some details I missed about the IndexedVolume: it can be used in existing projects with almost no effort; only moving objects need the explicit update, and tweaks are optional. It would be nice to test it with some large-world projects.

The updating logic can generally be improved this way, for example by simplifying it once an object isn’t visible anymore (every object can tell if it was visible in the last frame, even if it was already culled by a node branch), by frustum or by distance. This is useful for objects that can skip costly updates and resume anytime, like pausing particle emitters, animations, sounds or other costly processes. Especially particles and animations can skip their most costly part and only update the time state.
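
For example, a particle emitter under that scheme could look like this (the wasVisible flag stands in for the last-frame visibility described above; the emitter shape is illustrative):

```js
// Skip the expensive part of an update when the object wasn't rendered last
// frame, but keep the time state running so it can resume seamlessly.
function updateEmitter(emitter, dt) {
  emitter.time += dt;                     // cheap: always advance time
  if (!emitter.object.wasVisible) return; // culled last frame: skip the rest
  emitter.spawnParticles(dt);             // costly simulation only when visible
  emitter.updateParticleBuffer();
}
```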

Exporting a scene with this object will create a binary index, so it loads fast without re-indexing its content. Indexing depends on content size and density; it optimizes the spatial tree and will discard smaller objects in larger nodes.


#12

That’s very impressive! I hope to reach your level one day.