Tesseract - Open World Planetary Engine

I don’t know if this is still ongoing, but I really need this. Awesome work! I think this could be the missing piece for threejs to rise in game development.

Aside from this, if anyone knows of an example of dual contouring for volumetric terrains, I would be very grateful.

I could use this in my prototype battle game. I plan to have more very large maps with this open-world concept: Super Soldier Battle - Forest Mountain Open World Concept 3D Game

This is precisely what I am aiming for as well: a 3D universe, designed and renderable at both high and low scales. Amazing job - too bad it’s only present in videos, though I understand the reasons. The objective is commendable, but it involves lots of trade-offs / hacks / tricks to combine the advantages of certain implementations and minimize or eliminate their disadvantages, something I’m sure you’re already aware of. :+1:

One question though: how did you make the atmospheric system and the seamless transition from the outside atmospheric view to the inside one? Is it based on the same principles as this, or is it something else entirely?

I’m currently drawing my atmosphere with a system of 3 meshes stuck together - a front-sided one for the outside view, a double-sided one acting as an invisible barrier, and a back-sided one for the inside view. While they look great by themselves, the switch from one view to the other happens instantly, without the realistic transition present in your videos. You don’t necessarily have to post any code if you don’t want to, but it’d be great to find out how you did it so I can hopefully work it out from there.

It depends on your system in general; there are many ways. The atmosphere, clouds, etc. are component layers on the planet model, while open space is rendered in screen space - not the actual structures like galaxies, though (those only from a distance). The atmosphere is also rendered into a cubemap for global environment lighting.

If you use an atmosphere geometry, its detail needs to be adaptive like the ground’s: even though it’s not a surface, the geometry on a real-scale planet becomes so rough that something like a fixed sphere isn’t sufficient.
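The engine’s actual scheme isn’t shown in the thread, but the idea that shell detail must scale with viewing distance can be sketched roughly like this. The threshold curve, function name and default of 8 levels are purely illustrative assumptions:

```javascript
// Hypothetical sketch: pick a subdivision depth for an atmosphere shell from
// camera altitude, so near-surface views get a finer mesh than a fixed
// sphere would provide. Thresholds and the curve shape are assumptions.
function atmosphereLodLevel(cameraAltitude, planetRadius, maxLevel = 8) {
  // Normalize altitude against the planet radius; clamp to (0, 1].
  const t = Math.min(Math.max(cameraAltitude / planetRadius, 1e-6), 1);
  // Closer to the surface -> deeper subdivision (higher level).
  const level = Math.round(maxLevel * (1 - Math.sqrt(t)));
  return Math.max(0, Math.min(maxLevel, level));
}
```

In a real engine the level would drive how finely the visible part of the shell is re-tessellated (e.g. per quadtree patch), rather than being one number for the whole sphere.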

Not exactly - the weather system is pseudo-volumetric from the ground or outer space; as an optional feature I’m blending between the global clouds and volumetric ones when in the sky. Once you dig into all the possible raymarching cloud solutions, you’ll figure out that in the real world they’re extremely hard to optimize and will still be a costly part of a full game, so you might want to look into multiple or hybrid solutions like this.
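The described blend (global cloud layer on the ground, volumetric only while in the sky) amounts to a weight that ramps in around the cloud band. A minimal sketch, where all heights, the fade width and the function name are my own illustrative assumptions:

```javascript
// Hypothetical sketch: a blend weight between a cheap global cloud layer and
// full volumetric clouds, nonzero only near/inside the cloud band.
function cloudBlendWeight(cameraHeight, layerBottom, layerTop, fade = 500) {
  // smoothstep helper: 0 below a, 1 above b, smooth in between.
  const smooth = (a, b, x) => {
    const t = Math.min(Math.max((x - a) / (b - a), 0), 1);
    return t * t * (3 - 2 * t);
  };
  const up = smooth(layerBottom - fade, layerBottom, cameraHeight);   // fade in from below
  const down = 1 - smooth(layerTop, layerTop + fade, cameraHeight);   // fade out above
  return up * down; // weight for the volumetric pass; 1 - weight for the global layer
}
```

The point of the design is that the expensive raymarched pass only pays its cost while the camera is actually near the clouds.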

And yeah, you will come across things for which you need to figure out a new or better solution over and over again. For some of them there are multiple choices that depend on what you need for your project; I generally bias more toward performance, which in the end is what matters most - not just a solid 60 FPS, but also not being heavy on the GPU, as this engine has a bias toward games/game worlds.

I’ve been integrating it into mine intensively lately, also improving the GPU-accelerated content generator for higher density at lower cost with better selection filtering. I’ll post some updates soon, also regarding the road network system and house building. It’s always a mix of working on the engine, the game, networking, building, other systems, other work in general etc. that consumes just about all spare time.

The building system is also part of the engine: multi-level (hierarchical) areas of political land maps, like continents, countries (empires, kingdoms etc.), towns/villages, private property land etc. You just define as many layers as you want to use; the data is then streamed on the fly, with indexing, caching here and there, etc.

Just a test of the system based on texel IDs mapped into voronoi/wrapped cells that together shape one merged area; it can also span multiple tiles, so it has no boundary limitations.
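One common way to get such tile-independent cells is a jittered grid acting as a cheap Voronoi: every world position resolves to a stable cell ID, and cells freely cross grid (and hence tile) lines. This is a generic sketch of that idea, not the engine’s actual implementation; the hash and jitter scheme are illustrative:

```javascript
// Hypothetical sketch: map a world position to a stable Voronoi-like cell ID
// using a jittered grid, with no dependence on tile boundaries.
function cellId(x, y, cellSize = 64) {
  // Deterministic pseudo-random value in [0, 1) per integer cell coordinate.
  const hash = (i, j) => {
    let h = (i * 374761393 + j * 668265263) | 0;
    h = Math.imul(h ^ (h >>> 13), 1274126177);
    return ((h ^ (h >>> 16)) >>> 0) / 4294967296;
  };
  const gx = Math.floor(x / cellSize);
  const gy = Math.floor(y / cellSize);
  let best = null;
  let bestDist = Infinity;
  // Check the 3x3 neighborhood, so cells can "bleed" across grid lines.
  for (let j = gy - 1; j <= gy + 1; j++) {
    for (let i = gx - 1; i <= gx + 1; i++) {
      const cx = (i + hash(i, j)) * cellSize; // jittered cell center
      const cy = (j + hash(j, i)) * cellSize;
      const d = (x - cx) ** 2 + (y - cy) ** 2;
      if (d < bestDist) { bestDist = d; best = `${i},${j}`; }
    }
  }
  return best; // stable key into per-cell area data (e.g. political layer IDs)
}
```

Because the jitter is derived purely from the integer cell coordinates, any tile that touches a cell reconstructs the same ID, which is what makes merged areas across tiles possible.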

Content zones with revision cells for dynamic changes, since things change when you come back to cells you haven’t been in range of for a while. It’s obviously not a topic for non-multiplayer games, but the general architecture is supposed to be universal for both.

Also, further work on weather and such.

Thanks for answering me; I think I understand what you mean on a general level.

It just occurred to me right before replying that adjusting the alpha of my inside (back-sided) sphere according to the height above ground level might be a solution to the sudden passage from the outside-viewed atmosphere to the inside-viewed one (i.e. what we on Earth call “the sky”), apart from maybe blending the views a bit. I’m already manipulating the alpha in the shader based on the back side’s inverted normals, in order to make the “sky” more opaque in daylight and gradually more transparent toward night, similar to the outside view (including a golden-hour tint between day and night), and that works well - but altitude-based alpha adjustment somehow didn’t pop into my head until now.
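The altitude-based idea above can be sketched as a tiny function: fade the inside-view sphere’s alpha with how much atmosphere remains above the camera, then scale by the existing day/night factor. The atmosphere height, falloff curve and names are all illustrative assumptions, not the poster’s actual shader:

```javascript
// Hypothetical sketch: sky alpha from camera altitude. At ground level the
// sky is fully opaque (times the daylight factor); at the top of the
// atmosphere it has faded to 0, handing off to the outside view.
function skyAlpha(altitude, atmosphereHeight = 100000, daylight = 1) {
  // Fraction of the atmosphere still above the camera, clamped to [0, 1].
  const remaining = Math.min(Math.max(1 - altitude / atmosphereHeight, 0), 1);
  // Thicker remaining air -> more opaque sky; squared for a softer falloff.
  return remaining * remaining * daylight;
}
```

In a shader this would live in the fragment stage of the back-sided sphere’s material, multiplied into the alpha already computed from the inverted normals.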

Other than that, personally I’m not much of a fan of having to resort to all sorts of tricks, trade-offs and combinations of complicated approaches to solve what I believe is a fundamental flaw of most current 3D systems: relying on simplified geometries and materials for surfaces, instead of using volumes of points the way the real world is structured in the first place, and rendering only what’s within the screen area. Yeah, I know such approaches have mostly failed (or will fail) when it comes to an infinite universe or animations, but that doesn’t change the fact that only such a structure would replicate the real world and its detail, since everything in it is a volume and not a surface. Until a system like that becomes viable, you’d still have blocky, cartoonish results either way.

As for procedurally generated terrain, climate, vegetation or weather, I walked that road decades ago. Technology and its fast pace surpassed my poor one-man attempt at it back then, but I still have my old (unfinished, but reasonably working) 2D terrain generator based on triangles in Pascal, and my climate / vegetation / river generation based on distance from oceans, blocking terrain and general atmospheric wind patterns - which should follow the same principles irrespective of the planet (probably subject to gas / liquid densities as well, i.e. mostly nitrogen vs. methane, for example). It’s outdated, of course, but the idea would probably suit any planet. On a micro level, you could probably use semi-random patterns that mostly follow the macro patterns.

In my Three.js project I use “layers” (not the layer objects, though) as well - I already placed latitude and longitude lines and borders on my Earth in multiple variants, my favorite being the texture-based approach because it follows the terrain, both the displaced and the vertex-altered one. Anyway, I share many of your objectives, the only difference being that I would have liked to work with structures that resemble reality more closely than the available ones, in order to avoid all these workarounds for getting both performance and precision in the project. Only then would I be able to properly make my 3D Civilization universe / game dream a reality… :thinking:

This sounds a lot like the “atomic” engine that turned out to be a scam, or at least never happened. There is no point in having such a monstrous amount of data that no client device can handle; 3D always is and always has been about LOD - just as in the latest UE, the key trick is always LOD.

I recently figured out and added a new source type that has a constant, cheap cost regardless of how complex the terrain structure is, as it uses real-world structures that are morphed on the fly into the global composition. Things like local and global erosion can then be applied additionally, without the highly varying base cost.

You always need to consider that no matter what smart system you come up with, it needs to be realistic, as there are relentless hardware limitations that are far lower than you would believe. 60 FPS isn’t everything - you should also take care of how hard the GPU needs to fight to maintain it, or whether it’s super easy for it because you have LOD, caching, balancing and all these kinds of required trade-offs. The cloud example you linked, for instance, would not be realistic to use as a real-scale sky in a real-world game scenario at all without extremely high cost; the optimizations needed on such a thing to become like Rockstar’s RAGE Engine or UE go way beyond that - and even there, it’s still costly.

Pseudo-volumetric clouds, for instance, look exactly the same as fully volumetric ones from the ground at real-scale distances (there are many cloud types), without any banding issues on the horizon. By blending into 3D seamlessly, the cost on the ground is reduced and can be balanced into the sky - that’s why I figured this is the best and prettiest solution so far. If you don’t do things step by step and with somewhat realistic solutions, you end up in a rabbit hole and a never-ending process, or just give up. Which is not an option for me.

Well, I thought about this before I found out that there was something similar, so it couldn’t have been a scam. More likely an approach that looked like the next best thing at first sight and then turned out to be seriously suboptimal when it came to other features - it can happen to anyone, and it doesn’t make the approach a scam, since it’s not intentionally meant to deceive. There are many attempts that start well with teasers about how great the idea is, then unfortunately hit insurmountable obstacles along the way and cool down. :wink:

Fact is, aside from the problems of how to store hypothetically infinite data and compute transformations, you have a limited screen space and a limited number of colors when it comes to rendering, and volumes, not just surfaces, when it comes to replicating the real world. Thus it would make sense to follow the structure you’re trying to replicate instead of faking it via various methods, resting assured that, apart from computation and storage, the cost of rendering would never be greater than the number of pixels that make up the monitor resolution. If that’s not realistic, I don’t know what is. Problems in implementations (storage and computation in this case) always exist… until an optimal solution is found and they don’t exist anymore - so there’s no reason to be stuck in a certain paradigm; an open mind is useful most of the time.

Thanks for the details and insight into the challenges that would come with using something similar to the cloud example, by the way. So far, adjusting the alpha of the ground-based sky based on distance from the planet looks promising; I just need to match colors and transparencies between it and the outside view in order for things to look seamless. I tried a couple of atmospheric shaders, but somehow I feel a bit uncomfortable using hundreds of lines whose workings I barely understand, instead of my own shader code that achieves more or less the same in one or two dozen lines using gradients instead of complex physics approaches. There is also the issue that most of them seem to be focused on representing the sky from Earth, rather than being a solution that can be easily adapted to make a spherical atmosphere. I’m not one to give up either, though, so I’ll see what I can do instead.

Anyway, I’ll stop here with this, I don’t want to divert things off topic too much. Great to hear that you figured out ways to improve performance by morphing real-world structures on the fly. I guess even though terrain complexity is not an issue with this, the number of real-world structures you use, and doing things on the fly instead of statically, could potentially pose another challenge as things evolve.

That being said, in my opinion you (probably unintentionally, while looking for ways to make things more efficient) just hit the jackpot… because that’s exactly how humans are able to visualize and describe multidimensional space (or anything else, for that matter): through association with already-known forms, on the fly, without having to “store” large amounts of data. For example, a human can instantly build a 3D apple knowing that it’s basically a “sphere” - actually a closed ball - “bulged” less downwards by some factor, from which you exclude a curved cone at the top and apply some random surface irregularities. So, with only a couple of known properties / characteristics, you have your shape. And the ball volume is easily described as the set of all points at distance less than or equal to r away from x, without having to store any of those points or surfaces. Light usually accounts for surfaces only, but it can extend to a lack of light for inside points, similar to how occlusion works.

Incidentally, like I mentioned before, this has little to do with geometric surfaces, faces, vertices, normals and such (which don’t even need to be stored or memorized every time, apart from the base “known” shapes), since they’re all volumes and have a base “inside” configuration similarly described via a couple of characteristics. They also represent a potentially optimal way to describe “atomic” structures using just a small set of description data, along with the 3 transformations, instead of storing every point (or vertex, in the case of geometric approaches) in space. That’s what I meant in my previous replies: the current approaches are flawed in that regard.

I would strongly suggest exploring this avenue further, because it’s efficient and can describe any shape whatsoever, since all shapes are just unions, intersections and exclusions of known, simpler volumes. In Three.js terms, every shape can be described as a union of base convex hulls (e.g. spheres, cubes, pyramids, etc., which can easily be stored just once), and only said hulls’ transformations (e.g. scale, position and rotation) need to be accounted for, nothing else. Or, to put it another way, everything is just variations / combinations of instanced base forms. Of course, automatically decomposing a shape into convex hulls is a challenge, and you need a fast on-the-fly morphing algorithm, but that’s a “different” matter… :thinking:
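For what it’s worth, the “shapes as unions/intersections/exclusions of stored-once primitives” idea maps directly onto signed distance functions, where the set operations are just min/max. A minimal sketch (for simplicity the carved piece is a small ball standing in for the excluded cone in the apple description; all names are illustrative):

```javascript
// Hypothetical sketch: CSG over signed distance functions. Each primitive is
// stored once as a formula; instances are just transformed evaluations.
const sphere = (r) => (p) => Math.hypot(p[0], p[1], p[2]) - r;
const translate = (sdf, [tx, ty, tz]) =>
  (p) => sdf([p[0] - tx, p[1] - ty, p[2] - tz]);
const union = (a, b) => (p) => Math.min(a(p), b(p));       // A or B
const subtract = (a, b) => (p) => Math.max(a(p), -b(p));   // A minus B

// An "apple": a unit ball with a smaller ball carved out of the top.
const apple = subtract(sphere(1), translate(sphere(0.35), [0, 1, 0]));

// Negative distance means "inside the volume":
// apple([0, 0, 0])    -> negative (inside the body)
// apple([0, 0.95, 0]) -> positive (inside the carved dimple, outside the shape)
```

Note the whole shape is described by a handful of parameters and transforms rather than stored points, which is exactly the economy the paragraph above argues for; raymarchers render such functions directly without meshes.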

I find this hard to believe: threejs may handle this project’s scale, but the browser’s memory allocation will not - you will encounter memory allocation errors.

The planetary-scale components are actually faster and use less memory (RAM + VRAM) than a static terrain ^^

In fact, it requires 3 times less VRAM than a static heightmap-based terrain at 2048x2048, and 4 times less RAM - basically almost none for data, as only partial data is synced back.

On the JS side it also has a minimal footprint, no larger than having an average glb asset in memory. For draw calls, as mentioned, it’s also only 1 draw call per layer: 1 for the entire terrain, 1 for the entire ocean (also LOD), etc. As I mentioned, the goal and architecture of the engine is to be a part of games, not the centerpiece - unlike a world-map system such as Google Maps that can consume the whole budget.

It also needed to be efficient enough to handle world-map views, which handle separate tiles from those used in 3D, like this - where you can pan around and zoom into areas you’re not actually in in 3D. I also use this map for in-game editing of the world (probes) in realtime.

And yes, you should not approach such a project in a utopian dreamer way without knowing the tech - you need a rough solution for everything you foresee, without red flags. On the web especially we are more limited, and while (with a lot of abstraction work) you can open things up and add a lot of detail, you need to stay within a reasonable scope. Especially if you don’t have decades of experience in developing such things / graphics etc., your dreams might get crushed very fast, only demotivating you. This is harsh these days, especially with kids being blinded by the graphics and games they’re presented, which appear like “the default” and “ez” to them, while they’ve never made anything beyond altering examples, or never coded at all.

The term sounds a bit harsh - I just don’t know a softer one atm - but what I meant was the way it was presented to the public, to non-tech people, while the loudest critique (that there was always just a handful of assets being instanced everywhere, which was the main issue regarding memory consumption) was ignored. I wouldn’t have made this project if I weren’t enthusiastic myself, but you should align your ideas and dreams with what is technically, reasonably possible, as it’s limited by the hardware users have. I’ve also been running into walls here and there, where I had to stop and build a ramp first to get across - but these kinds of limitations need to be considered a lot. Tesseract is supposed to be a part of projects, and that with as small a footprint as possible.

There are many attempts that start well with teasers about how great the idea is and then unfortunately hit insurmountable obstacles along the way and cool down. :wink:

I didn’t know for sure either whether this would work out when I started, and you always need to be prepared to change approaches, improve / reconsider things, not be dogmatic about one way, and accept when something wasn’t good enough, tear it down and do better.

But that’s getting a bit philosophical ^^

Can’t wait to try this. Is there a link yet? :smiley: