I’d like to introduce one of my spare-time projects I’m working on. It’s been going for about a year now and is slowly becoming complete. I’ll drop progress updates as it proceeds.
About a year ago I started working on a terrain engine, mainly for volumetric terrain. Half a year ago I then started another one, focused on rendering real-scale planets with a very low memory footprint and reasonable performance on low-end machines and mobile devices. At first it was more of a prototype to see how practically this could be done for real-world applications/games, and since I didn’t use papers, it was also a fun challenge to solve the issues you face when developing a planetary renderer.
Some dev clips. The game I’m actually using it for has a simple toon style; I’ll set up a realistic demo environment soon. Culling is disabled for the tests, and several post effects and quick motions made the recording tool pixelate/struggle a little too.
Being fast, scalable and flexible is one of the main goals: the engine should run on low-end machines as well as mobile devices while requiring as little workload as possible, leaving room for the actual game mechanics. From dataset resolution, topology quality, geometry detail and view distance to the LOD techniques, everything adapts to the required targets.
The LOD depth basically has no limit; whether it goes down to meter or centimeter resolution depends on the use case. A low data resolution with a higher LOD depth performs and scales better. Sources can hit precision limits depending on their operating area, which is why they are localized. A 400 km source probe, for example, can already give very high precision, while global sources are mostly relevant for things like weather or rough continents. The example video above uses a global source for the continent shapes.
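To illustrate why sources are localized, here is a small sketch (not engine code, names are mine) of how float32 precision degrades with distance from the origin: coordinates at planet-radius scale only resolve in roughly half-meter steps, while a 400 km source probe, whose coordinates stay within ±200 km of its own origin, keeps roughly centimeter steps.

```javascript
// Smallest representable step (ULP) of a float32 at magnitude x.
// float32 has a 23-bit mantissa, so the step doubles with each power of two.
function float32Ulp(x) {
  const exp = Math.floor(Math.log2(Math.abs(x)));
  return Math.pow(2, exp - 23);
}

// Global coordinates at Earth-radius scale (~6371 km):
const globalStep = float32Ulp(6371000); // 0.5 m steps

// Localized 400 km source probe: coordinates stay within ±200 km:
const localStep = float32Ulp(200000);   // ~0.016 m steps
```

The same total area, anchored locally, gains over an order of magnitude of precision, which is exactly the point of localizing sources.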
I’ll describe a few things in more detail below…
Procedural designed worlds
The goal of this engine is to create universe-scale game worlds based on both procedurally generated and handcrafted datasets. The design tools range from procedural brushes, simulating brushes and clone stamps to more advanced structure-pattern design, which allows the designer to control the look and characteristics of entire real-scale landscapes and make them believable for the desired style or theme within a reasonable timeframe.
Caves, cliffs and overhangs are done via dual contouring for efficient adaptive geometry generation. 3D features are localized: batches are only processed where volumetric sources exist, giving a hybrid of fast GPU-based 2D terrain and volumetric features. The volumetric features require CPU processing, while the GPU terrain is instant even during fast movement or immediate jumps.
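A minimal sketch of that localization idea (assumed names, not the engine’s API): only terrain chunks whose bounds overlap a volumetric source (a cave, overhang etc.) are queued for CPU meshing; everything else stays on the fast GPU heightmap path.

```javascript
// Axis-aligned bounding-box overlap test; min/max are [x, y, z] arrays.
function aabbOverlap(a, b) {
  return a.min.every((v, i) => v <= b.max[i]) &&
         b.min.every((v, i) => v <= a.max[i]);
}

// Split chunks into the two pipelines: CPU dual contouring vs. GPU terrain.
function classifyChunks(chunks, volumetricSources) {
  const cpuBatches = [], gpuChunks = [];
  for (const chunk of chunks) {
    const volumetric = volumetricSources.some(
      s => aabbOverlap(chunk.bounds, s.bounds));
    (volumetric ? cpuBatches : gpuChunks).push(chunk);
  }
  return { cpuBatches, gpuChunks };
}
```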
CPU & GPU Rendering
I’ve implemented a CPU renderer which transpiles GLSL to optimized JS shaders. Its main purpose is picking single samples or rendering in workers and server environments. It is also used for raycasting in cases where collision buffers aren’t available: since the 2D geometry lives on the GPU, only close terrain is read back for precise face collisions.
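The idea can be illustrated with a hedged example (the GLSL function and its transpiled output are both invented here): the same height logic that runs as a shader on the GPU can run as a plain JS function, so a single sample, e.g. for a raycast hit or a server-side query, never needs a GPU readback.

```javascript
// A GLSL height function such as
//   float height(vec2 uv) { return 0.5 + 0.5 * sin(uv.x * 10.0); }
// could be transpiled into an equivalent plain JS function:
function heightJS(u, v) {
  return 0.5 + 0.5 * Math.sin(u * 10.0);
}

// Pick a single sample on the CPU, as a worker or server could do:
const h = heightJS(0.25, 0.5);
```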
Modules are replicated to workers, and any CPU content-generation task goes there. Content management, including AI processing, is also handled by workers instead of the main thread.
These localized or global sources actually feed the generator, which writes tiles of datasets. Sources can perform basic boolean operations with each other, use a falloff, and access other datasets to generate their own, such as depth or normals. Sources can be locally restricted, so each tile only renders the sources that contain it; the falloff mask additionally excludes the areas a source doesn’t contribute to.
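A hedged sketch of how such a combination could look (all names and the choice of ops are my assumptions, not the engine’s actual generator): each source contributes a value masked by a radial falloff, sources outside their radius are skipped entirely (the local restriction), and contributions are blended with boolean-style operations.

```javascript
// 1 at the source center, fading linearly to 0 at its radius.
function falloff(dist, radius) {
  return Math.max(0, 1 - dist / radius);
}

// Evaluate one tile sample by blending all sources that contain the point.
function sampleTile(x, y, sources) {
  let value = 0;
  for (const s of sources) {
    const dist = Math.hypot(x - s.x, y - s.y);
    if (dist > s.radius) continue; // local restriction: source skipped
    const contrib = s.sample(x, y) * falloff(dist, s.radius);
    value = s.op === 'subtract'
      ? Math.max(0, value - contrib) // boolean-style subtraction
      : Math.max(value, contrib);    // default: union
  }
  return value;
}
```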
The same goes for pixel density: sources can, for example, describe continents with millions of smaller details, which are streamed in and out based on the LOD. This is similar to how landscapes are brushed, which in turn is similar to how world-map systems like Google Maps work.
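The web-map tiling scheme the comparison refers to is simple enough to sketch: at LOD (zoom) level z the world is split into 2^z × 2^z tiles, so a detail source only needs to stream the tiles its area touches at the LOD the viewer currently needs.

```javascript
// Map a normalized position (u, v in [0, 1)) to its tile at LOD level z,
// the same addressing scheme systems like Google Maps use.
function tileAt(u, v, z) {
  const n = 2 ** z; // tiles per axis at this level
  return { x: Math.floor(u * n), y: Math.floor(v * n), z };
}
```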
Using satellite data
I haven’t had time yet to run experiments with the Mapbox API, but basically it’s quite easy to use satellite data. I looked into the Mapbox API because it’s free up to a certain traffic limit and provides depth and normal tiles. I’ll get to that sooner or later.
Datasets are defined for the tile manager and are used for any kind of spatial information, such as depth for heightmap data, biomes, or weather simulation. The atlases are managed automatically, merged for multi-framebuffers, and use a chunk size the GPU works well with.
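As an assumed illustration of the atlas bookkeeping (the concrete sizes here are my guesses, not the engine’s values): tiles of a fixed, GPU-friendly chunk size are packed row-major into a larger atlas texture, and the manager hands out a pixel offset per slot.

```javascript
const CHUNK = 256;             // chunk size the GPU handles well (assumption)
const ATLAS = 4096;            // atlas texture side length (assumption)
const PER_ROW = ATLAS / CHUNK; // 16 chunks per atlas row

// Pixel offset of a chunk slot inside the atlas texture.
function atlasOffset(slot) {
  return {
    x: (slot % PER_ROW) * CHUNK,
    y: Math.floor(slot / PER_ROW) * CHUNK,
  };
}
```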
The scene is managed and rendered through an octree manager (replicated from a worker). It provides interfaces for many use cases such as efficient object access and collisions, but also hierarchically culls nodes of objects. An impostor renderer turns distant objects into 3D impostors, and a dataset module renders implied, massively dense areas of assets like forests by using their average color.
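Hierarchical culling on an octree can be sketched like this (a minimal sketch with invented names, not the engine’s code): if a node’s bounding sphere lies outside the view, the whole subtree is rejected at once; only visible nodes yield their objects.

```javascript
// Test a node's bounding sphere against view half-spaces
// given as planes [{ n: [x, y, z], d }] with "visible" meaning
// dot(n, center) + d >= -radius for every plane.
function sphereVisible(node, planes) {
  return planes.every(p =>
    p.n[0] * node.center[0] +
    p.n[1] * node.center[1] +
    p.n[2] * node.center[2] + p.d >= -node.radius);
}

// Recursively collect objects, culling whole subtrees in one test.
function collectVisible(node, planes, out = []) {
  if (!sphereVisible(node, planes)) return out; // entire subtree rejected
  if (node.objects) out.push(...node.objects);
  for (const child of node.children ?? []) collectVisible(child, planes, out);
  return out;
}
```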
At the moment content-layout generation is almost complete; that was one of the more difficult things recently. I’m now focusing on editing tools, as well as setting up a demo with a realistic environment.
I’ll post an update on content generation soon. As progress continues I can make more informative videos; most of the time I just randomly record things to compare against later changes.
2018 / 07 / 06
The past half year I was rather busy with work and other things, but I also worked on content and collision feedback, which is a quite complex task if you want to enable precise polygon collisions efficiently.
Besides some more optimizations, I’ll implement a weather module including wind and temperature simulation. I’m also adding volumetric clouds similar to those in the game “Horizon Zero Dawn”, and generally realistic materials and atmosphere by default. But I’d rather focus on performance than make everything physically correct.
One important aspect of the engine is having global information, such as using wind to influence objects, or objects influencing water (waves) or grass, as well as for a rather simple but effective global illumination.
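The shape of that idea can be sketched as one shared world-space field that every system samples at its own position (the analytic gust below is purely a stand-in assumption, as are all the names):

```javascript
// A trivial stand-in wind field: direction +x, gusting over time and space.
function windAt(x, z, t) {
  const gust = 0.5 + 0.5 * Math.sin(t + x * 0.01 + z * 0.01);
  return { x: gust * 5, z: 0 }; // m/s
}

// Any consumer samples the same field; e.g. a grass blade's sway amount
// could scale the local wind by an inverse stiffness.
function grassSway(x, z, t, stiffness) {
  return windAt(x, z, t).x / stiffness;
}
```

Because every consumer reads the same field, wind, waves and swaying vegetation stay consistent with each other without any per-system coupling.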