Thanks, I’ll look into it, though I doubt it’s really free
This looks amazing. Very well done!
I have to ask though, how did you achieve this fog effect?
I’ve been looking for a (post-processing) shader that allows me to create this effect, but have been unsuccessful so far. Would you care to share it or point me in the right direction?
Thanks in advance!
The fog is a low-res RenderTarget. I use the height of the terrain (with its contribution increasing by distance) and the depth buffer in the post shader to blend it with the atmosphere, which is rendered into a separate, smaller target. So you basically render the fog into a mask used to blend the scene with the background, if any.
For extending the default fog in the shaders without any postprocessing, you could use just the Y axis for the height if that already fits your needs.
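The height + distance blend described above can be sketched roughly like this (a minimal illustration with made-up names and constants, not the actual shader, which also involves the depth buffer and a separate atmosphere target):

```javascript
// Sketch of a height/distance fog factor: fog contribution grows with
// view distance and falls off with height above the fog layer.
// `density` and `heightFalloff` are illustrative tuning parameters.
function fogFactor(viewDistance, worldHeight, density = 0.02, heightFalloff = 0.1) {
  // Exponential distance fog, attenuated by height above the fog layer.
  const distanceFog = 1 - Math.exp(-density * viewDistance);
  const heightAtten = Math.exp(-heightFalloff * Math.max(worldHeight, 0));
  return distanceFog * heightAtten; // 0 = no fog, 1 = fully fogged
}

// Blend the scene color toward the (separately rendered) atmosphere color.
function applyFog(sceneColor, atmosphereColor, f) {
  return sceneColor.map((c, i) => c * (1 - f) + atmosphereColor[i] * f);
}
```

In a real post pass the same math would run per fragment in GLSL, reconstructing `viewDistance` and `worldHeight` from the depth buffer.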
It would require some changes to use this with the THREE.EffectComposer, sorry. I’m using a custom framework on top of THREE, not the EffectComposer for example.
I’ve recently added a new model component, the LODPlaneStatic, which is used for finite or infinite chunked landscapes based on static data instead of procedural sources. Its purpose is high-performance vast landscapes without the cost of procedural generation/composition of sources. The terrain is instant, without generations/allocations, regardless of motion or immediate jumps. The terrain is hierarchically culled and rendered in a single draw call for minimal cost. As a side note, “renderedObjects” means visible objects; the trees in the scene are just 1-2 draw calls with auto-instancing.
It uses the same scalable splatting technique supporting up to 256 materials, with normal, roughness, displacement and all features of the MeshStandardMaterial. It also does per-pixel depth-based blending instead of only fading between materials.
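The idea behind depth-based blending, as opposed to a plain linear fade, can be sketched like this (my own naming and a common formulation of the technique, not necessarily the exact one used here): each material’s height/displacement value biases the blend, so the higher surface “wins” within a small transition band.

```javascript
// Blend two materials per pixel using the splat weight of material A plus
// each material's height value. `sharpness` is the width of the
// transition band (illustrative parameter).
function depthBlend(weightA, heightA, heightB, sharpness = 0.2) {
  const a = weightA + heightA;
  const b = (1 - weightA) + heightB;
  const m = Math.max(a, b) - sharpness; // only blend within the band
  const wa = Math.max(a - m, 0);
  const wb = Math.max(b - m, 0);
  return wa / (wa + wb); // final weight of material A, 0..1
}
```

With equal heights this degrades to the plain weight; with unequal heights the transition follows the surface detail (e.g. grass filling the cracks between stones) instead of a uniform cross-fade.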
Like the LODSphere model used for planets, LODPlaneStatic supports attachments; for example, another can be added for water with its own LOD for geometric waves.
The pixel-to-world-unit ratio can be stretched without artifacts thanks to bicubic interpolation, allowing terrains larger than those of Unity or Unreal, depending on the used resolution and required detail control. It uses an additional modulator map with different prefabbed noise sources and will compose a more detailed and varied surface in the sub-pixel range. The default tessellation already suffices for displacement details such as on rocks.
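For reference, this is the kind of cubic filtering that lets a heightmap be stretched without the faceting that bilinear sampling produces. A minimal 1D Catmull-Rom sketch (bicubic is this applied per axis; the actual implementation runs on the GPU and may use a different kernel):

```javascript
// Catmull-Rom cubic interpolation between p1 and p2, using p0/p3 as
// neighbors for the tangents. t is the fractional position in 0..1.
function cubic(p0, p1, p2, p3, t) {
  return p1 + 0.5 * t * (p2 - p0 +
    t * (2 * p0 - 5 * p1 + 4 * p2 - p3 +
    t * (3 * (p1 - p2) + p3 - p0)));
}

// Sample a 1D height row at a fractional coordinate (clamped borders).
function sampleCubic(row, x) {
  const i = Math.floor(x), t = x - i;
  const at = (j) => row[Math.min(Math.max(j, 0), row.length - 1)];
  return cubic(at(i - 1), at(i), at(i + 1), at(i + 2), t);
}
```

On linear data this reproduces the linear result exactly, but across curvature it produces smooth slopes instead of visible texel steps.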
Additionally there is an attachment for this component which will decorate the near surroundings around the camera, faded away in the distance. These decorations are added per material and automatically generated (threaded) according to the material weights. The distribution function per decoration can be implemented. The decorations are processed in blocks and are either instanced or single objects. In the picture above, the grass across the entire landscape is added this way. The fading technique can be custom, scale and/or opacity, and dynamic features such as wind can be added.
This object boosts large scenes, for rendering as well as for accessing their content. It takes over the projection of the renderer and renders the scene with hierarchical and distance culling.
matrixAutoUpdate is disabled by default; the idea is to touch as few objects as possible per frame. The updating logic is performed outside, and an explicit update call will efficiently re-index the object and update its matrix.
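The explicit-update contract might look roughly like this (a hypothetical sketch of the pattern, not the actual IndexedVolume API): static objects cost nothing per frame, and only an object that actually moved pays for one matrix update plus spatial re-index.

```javascript
// Hypothetical object: movement marks it dirty, nothing else happens.
class VolumeObject {
  constructor() { this.position = { x: 0, y: 0, z: 0 }; this.dirty = false; }
  move(x, y, z) { this.position = { x, y, z }; this.dirty = true; }
}

// Hypothetical volume: the explicit update() is a no-op for clean objects.
class IndexedVolumeSketch {
  constructor() { this.updates = 0; }
  update(obj) {
    if (!obj.dirty) return; // untouched objects cost nothing
    this.updates++;         // stand-in for matrix update + spatial re-index
    obj.dirty = false;
  }
}
```

This inverts the default THREE behavior, where matrixAutoUpdate recomputes matrices for every object every frame whether they moved or not.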
I’ll post a video soon to demonstrate the difference compared to a linearly rendered scene. The difference can go from a few FPS or a crash without it, to a stable 60 FPS with IndexedVolume.
IndexedVolume: Auto Instancing
Geometries or materials can enable dynamic auto-instancing, which reduces draw calls per asset to 1 while culling is still performed. Two approaches are supported: a cheaper offset+orientation, or a full matrix. Any material, standard or custom, is supported. Nothing has to be considered, except whether it makes sense to enable instancing for a model.
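The cost difference between the two per-instance encodings is easy to see when packing the instance buffer (an illustrative sketch with made-up names): position + quaternion is 7 floats per instance, a full 4x4 matrix is 16.

```javascript
// Pack per-instance data into a flat buffer, either as a full matrix
// (16 floats) or as the cheaper offset + orientation (3 + 4 floats).
function packInstances(objects, useFullMatrix) {
  const stride = useFullMatrix ? 16 : 7;
  const data = new Float32Array(objects.length * stride);
  objects.forEach((o, i) => {
    const off = i * stride;
    if (useFullMatrix) {
      data.set(o.matrix, off);         // 16 floats, supports scale/shear
    } else {
      data.set([o.x, o.y, o.z], off);  // offset
      data.set(o.quaternion, off + 3); // orientation as quaternion
    }
  });
  return data;
}
```

The offset+orientation form uploads less than half the data and rebuilds the matrix in the vertex shader, at the price of not supporting per-instance scale or shear.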
IndexedVolume: Auto Impostor
For larger assets relevant at big distances, the autoImpostor flag on the geometry or material will render distant objects as 9-angle impostor billboards; a larger group of objects sharing a node level is batched into an instanced block.
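Picking which of the 9 pre-rendered views to show could work roughly like this (an illustrative sketch assuming the views cover the horizontal angle around the object; the actual layout and count handling may differ):

```javascript
// Choose the nearest of `viewCount` pre-rendered impostor views from the
// horizontal angle between camera and object.
function impostorViewIndex(cameraPos, objectPos, viewCount = 9) {
  const dx = cameraPos.x - objectPos.x;
  const dz = cameraPos.z - objectPos.z;
  let yaw = Math.atan2(dz, dx);    // -PI..PI
  if (yaw < 0) yaw += Math.PI * 2; // 0..2PI
  return Math.round(yaw / (Math.PI * 2) * viewCount) % viewCount;
}
```

The shader then samples the matching tile from the impostor atlas, so a whole forest of distant trees collapses into one instanced billboard draw.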
This is a function that will not only merge the geometries of a mesh hierarchy into one, but also pack its materials and textures into a single one. It supports all map types, multi-material groups and repeated textures. The result is a single geometry, material and texture per type (diffuse, normal, roughness etc.).
(Sponza model, packed into a single geometry, material and texture)
Notice that maps/scenes like the one above aren’t the best-suited case; this is just a demonstration. Better suited are models with a lot of smaller parts/materials which drag down performance if they stay separate for culling. It additionally has options to optimize texture sizes or generally scale them down. A lower resolution can be generated, for example, to create a proxy-impostor model, like one of a larger building.
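The tricky part of packing repeated textures into one atlas is the UV remap, which a sketch can illustrate (names and layout are my own, not the actual packer): once a texture lives in a sub-rectangle of the atlas, hardware wrapping no longer works, so repeats have to be wrapped before remapping.

```javascript
// Remap a mesh UV into a tile of the packed atlas.
// tile: { x, y, w, h } in normalized (0..1) atlas coordinates.
function remapUV(u, v, tile) {
  // Wrap repeated UVs into 0..1 first; hardware repeat can't be used
  // inside an atlas tile without bleeding into neighbors.
  const fract = (n) => n - Math.floor(n);
  return [tile.x + fract(u) * tile.w, tile.y + fract(v) * tile.h];
}
```

In practice the wrap has to happen per fragment in the shader (or the geometry must be cut at UV seams), since interpolating pre-wrapped vertex UVs across a repeat boundary produces smearing.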
Besides preprocessing of assets, it can be enabled in an IndexedVolume for runtime conversion, since assets which are instanced should be a single geometry and material; otherwise there will be an instancing stack for each object in their hierarchy.
NITRO (a THREE based engine)
This is an engine I made on top of THREE; I will create an extra thread soon. It basically extends THREE with some features and WebGL2, and adds a standard mechanism for tags (controllers), a ROM (render object model) and surfaces, automatically allocating and scaling the RTs of a ROM pipeline.
It has a unified system for input devices such as keyboard, mouse, gamepad and virtual gamepad. A fast sliding-sphere-to-polygon collider system without a physics engine and a “root” concept for raycasting are just some of the major features of this engine.
I mention it since Tesseract requires some modifications in the core of THREE, for example to take over the projection routine.
I also mention it since it adds support for environment-map fog and global uniforms, to fade with an actual sky/atmosphere instead of a single color. The environment map is rendered as the background, and the environment-map fog will blend objects into it, besides its regular job as environmentMap.
this is some super impressive work!
Thank you @pailhead
Some details I missed about the IndexedVolume: it can be used in existing projects with almost no effort, only the updating of moving objects needs the explicit update, and tweaks are optional. It would be nice to test it with some large-world projects.
The updating logic can generally be improved this way, for example by simplifying it once the object isn’t visible anymore (every object can tell if it was visible last frame, even if it was already culled by a node branch), whether by frustum or distance. This is useful for objects that can skip costly updates and resume anytime, like pausing particle emitters, animations, sounds or other costly processes. Especially particles and animations can skip their most costly part, only updating the time state.
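The "skip the costly part while invisible" idea can be sketched like this (hypothetical names, just the pattern): the cheap time state is always advanced so the effect resumes in sync, while the expensive simulation only runs when the object was visible last frame.

```javascript
// A particle emitter that pauses its costly simulation while culled,
// but keeps its time state in sync so it can resume seamlessly.
class ParticleEmitterSketch {
  constructor() { this.time = 0; this.simulated = 0; }
  update(dt, wasVisibleLastFrame) {
    this.time += dt;                 // cheap: always keep time advancing
    if (!wasVisibleLastFrame) return;
    this.simulated++;                // stand-in for the costly particle step
  }
}
```

The same gate works for animations or sounds: anything whose expensive work can be derived from the current time when visibility returns.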
Exporting a scene with this object will create a binary index, so it will load fast without re-indexing its content. Indexing depends on content size and density; it optimizes the spatial tree and will discard smaller objects in larger nodes.
That’s very impressive! I hope to reach your level one day
Impressive and beautiful work! The rocks have very natural details. (The trees could be less cross-shaped and more “golden ratio Fermat spiral-shaped”, though.)
Will any of this be free or open-source?
It’s cooler if it’s open source.
This sounds absolutely awesome! Fantastic work.
WHEN is this coming out?
Which one are you referring to? I’ll release the IndexedVolume separately from Tesseract.
@Fyrestar The engine you made
I’ll release the IndexedVolume soon, then Tesseract after that, but I can’t tell an exact date yet since I can only work on it in my spare time.
I’m very interested in IndexedVolume. How soon will it be released?
It will come as soon as possible. Unfortunately I’ve been sick for a while now, but you can definitely expect it this year, probably within the next weeks, since there is only some documentation left to finish.
There has also been quite a lot of progress on Tesseract recently, especially the universe model with solar systems and galaxies, as well as advanced fractal functions with erosion and statically composited fractal noise for more complexity at a lower cost. I’ll provide some updates on it soon.
I’m sorry to hear that, and I hope you’ll get well soon. I have already followed you on GitHub. Will you release the IndexedVolume and Tesseract on GitHub?
There is an interesting GDC talk from Sean Murray on generating the worlds for No Man’s Sky. He talks about how he started using satellite data for terrain generation but found it was boring in-game. Lots of great information on procedural generation, and inspiration for engine building too.
NMS was quite an inspiration for me, as well as Star Citizen, but this engine goes a slightly different route in its architecture, which also aims to give the best possible performance for browser engines, so it only takes a minimal slice of the budget, leaving most room for the actual game. I never really heard technical details about how they solved various problems, mostly that they at least take advantage of 64 bit, which isn’t an option for WebGL. I think NMS is mostly based on a DC (dual contouring) implementation for the close area; Tesseract mixes faster GPU tiles with volume terrain features only where required, plus another technique to maintain surface details like mountains in far LOD levels. This also helped with LOD, since caves for example can be visible from a far distance while not actually being generated yet.
Since it aims to be a generic engine, not only controlled procedurally, I had to work on different concepts and combine them without limiting each other. It can be fully automated/procedural, but also mixed with different levels of manual control, starting from parameters, to sources, to hand-crafted data with procedural brushes composed on different levels.
Recently I’ve also focused on adding a faster method of procedural generation, like I mentioned above: statically composited layers of pre-baked terrain types, which give a very detailed and natural result while being very cheap compared to the required noise functions. Basically, when creating sources, it will always be a mix of composed data and purely generated data, since very large-scale sharp features like a several-hundred-kilometer-long river can’t be derived from static data, while all the details and terrain features can.
It’s a lot of fun working out different noise features, but it’s also important to keep performance in mind. For example, a complex, varying but natural landscape can also be achieved with feature blending of static maps and masks with different characteristics, which is very fast and predictable in terms of performance.