Procedural game level generation

This is a continuation of the tech I developed close to 2 years ago. A game level is generated from a fixed seed at load time, and you can move around using the mouse.

:arrow_forward: LINK TO DEMO



Some notable points, in no particular order:

Treasure and enemies are placed around the map; treasure value and enemy strength are assigned based on 1000+ simulations of an AI playing through the level.
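The post doesn't show how the simulation-driven balancing works internally; as a hedged illustration of the general idea only (every name and API below is invented), it could be structured like this:

```js
// Hypothetical sketch of simulation-driven balancing (names are invented).
// simulatePlaythrough(level, seed) is assumed to run one scripted AI playthrough
// and return per-encounter results like { encounterId, playerHpLost }.
function balanceEncounters(level, simulatePlaythrough, runs = 1000) {
  const stats = new Map(); // encounterId -> { hpLost, count }

  for (let run = 0; run < runs; run++) {
    for (const result of simulatePlaythrough(level, run)) {
      const s = stats.get(result.encounterId) ?? { hpLost: 0, count: 0 };
      s.hpLost += result.playerHpLost;
      s.count += 1;
      stats.set(result.encounterId, s);
    }
  }

  // Encounters that cost the AI more HP on average are treated as harder:
  // stronger enemies and proportionally more valuable treasure.
  for (const [id, s] of stats) {
    const avgHpLost = s.hpLost / s.count;
    level.setEnemyStrength(id, avgHpLost);      // hypothetical level API
    level.setTreasureValue(id, avgHpLost * 10); // hypothetical level API
  }
}
```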

Hint markers are placed throughout the level to guide the player from the spawn point to the final challenge:

Tech

1 Like

Good, but low FPS: 5-10.

1 Like

Hey @Chaser_Code, thanks for checking it out. What are your hardware specs?

Also, what’s the zoom level? The game is intended for max zoom around this level:

At higher zoom levels it's going to run into perf issues because of draw calls and various simulations :sweat_smile:

1 Like

8 GB RAM, 4 GB GPU. If your project shows more than 1500 draw calls it's bad for my system, but for other people's systems 20,000 draw calls is not a problem. To reduce draw calls I'm using InstancedBufferGeometry for grass and for particles.
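For anyone who hasn't tried it, here's a minimal three.js sketch of instanced grass: `InstancedMesh` (which uses instanced geometry under the hood) draws all blades in a single call. The blade geometry, counts, and placement are placeholders, and `scene` is assumed to exist:

```js
import * as THREE from 'three';

// One draw call for `count` grass blades via instancing.
const count = 10000;
const bladeGeometry = new THREE.PlaneGeometry(0.05, 0.4); // placeholder blade shape
const bladeMaterial = new THREE.MeshBasicMaterial({ color: 0x3f8f3f, side: THREE.DoubleSide });

const grass = new THREE.InstancedMesh(bladeGeometry, bladeMaterial, count);
const dummy = new THREE.Object3D();

for (let i = 0; i < count; i++) {
  // Scatter blades over a 100x100 area with a random yaw.
  dummy.position.set(Math.random() * 100 - 50, 0, Math.random() * 100 - 50);
  dummy.rotation.y = Math.random() * Math.PI;
  dummy.updateMatrix();
  grass.setMatrixAt(i, dummy.matrix);
}
grass.instanceMatrix.needsUpdate = true;

scene.add(grass); // `scene` assumed to be an existing THREE.Scene
```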

1 Like

That’s really good advice. I do use instanced geometries too, though not in this demo. As far as draw call count goes, it’s more a matter of use cases for me. I’m mainly aiming at a top-down camera perspective, so the goal is to have quite dense environments with a fairly uniform distribution. If you look closely at the screenshots (or in the demo, for that matter) there are a lot of different meshes there: each stone, mushroom, etc. And there are quite a lot of particle effects, each of which is a separate draw call.

For example, here are all 21 particle emitters from just the screenshot above:

That screenshot has around 230 separate draw calls in the main view and close to 1070 in the shadow view (shadow camera). That’s a lot of objects to draw :slight_smile:

With instancing this can go down by about 60%, but it’s still not trivial.

Another thing is the terrain’s geometric density; here’s a wireframe view:

It looks quite nice close up, but this density is completely redundant when zooming way out, and my terrain engine doesn’t do LODing, so you end up with a lot of triangles on screen when you zoom out :slight_smile:

Up close like this, only a portion of the terrain is actually sent to the GPU (the terrain is split into fixed-size chunks, and only chunks in the view frustum are rendered).

That’s also why the maximum zoom in the game is limited: to keep performance predictable.
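For readers who haven't set this up before, here's a minimal sketch of that kind of chunking, not the author's actual code: the terrain is built as a grid of fixed-size chunk meshes, and three.js's per-object frustum culling (driven by each chunk's bounding volume) decides what gets drawn. `buildChunkGeometry` is a hypothetical helper standing in for the heightfield meshing.

```js
import * as THREE from 'three';

// Hypothetical: build a grid of fixed-size terrain chunks.
// buildChunkGeometry(cx, cz, size) is assumed to return a BufferGeometry
// for the heightfield patch covering that chunk.
function buildTerrainChunks(chunkCount, chunkSize, material, buildChunkGeometry) {
  const group = new THREE.Group();
  for (let cx = 0; cx < chunkCount; cx++) {
    for (let cz = 0; cz < chunkCount; cz++) {
      const geometry = buildChunkGeometry(cx, cz, chunkSize);
      geometry.computeBoundingSphere(); // needed for per-chunk frustum culling
      const chunk = new THREE.Mesh(geometry, material);
      chunk.frustumCulled = true; // default in three.js, shown for clarity
      group.add(chunk);
    }
  }
  return group;
}
```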

In some ways, the code that generates a box from its width, height, and depth as arguments is not so different from the code that reads a 3D file plus that 3D file as an argument. So, when you say it is generated from a fixed seed:

it is not generated from a seed alone; it is generated from a seed and the code that, well, generates it. Plus whatever assets you use to support it (like rocks, grass, the heater thingy, tube thingies - are they all code-based here?).

For transparent particles, first of all I update all variants of particles (glass, fire, lights, cloud, smoke, spark): position, rotation, texture animation. Then I put them together into one geometry's attributes, sorted by distance to the camera, with DynamicDrawUsage. Merging with sorting takes 0.6-1 ms for 2000 particles. I'm using one shader whose vertex shader has code for rotating the billboard to the camera, to a point, cylindrical rotation, and rotating like a sprite. The fragment shader has a blending value (more additive or more solid), can change color (including black), and supports several texture arrays, an atlas, and UV offsets for animations. All together it's 2 draw calls, because double-sided and transparent are enabled. But the code is not finished yet.
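Not Chaser_Code's actual code, but a minimal sketch of that kind of merge-and-sort pass, assuming particle records that carry a precomputed list of quad corner offsets (`p.corners`) and a shared geometry whose position attribute was created with `DynamicDrawUsage`. In the approach described above the billboard orientation happens in the vertex shader; here the corner offsets are applied on the CPU just to keep the sketch short.

```js
import * as THREE from 'three';

// Hypothetical: sort live particles back-to-front each frame and write their
// quad vertices into one shared, dynamically updated BufferGeometry.
function updateMergedParticles(particles, merged, camera) {
  // Back-to-front sorting keeps alpha blending correct for transparent quads.
  particles.sort(
    (a, b) =>
      b.position.distanceToSquared(camera.position) -
      a.position.distanceToSquared(camera.position)
  );

  const positions = merged.getAttribute('position'); // created with DynamicDrawUsage
  let v = 0;
  for (const p of particles) {
    // 6 vertices per quad (two triangles), corner offsets precomputed per particle.
    for (const corner of p.corners) {
      positions.setXYZ(
        v++,
        p.position.x + corner.x,
        p.position.y + corner.y,
        p.position.z + corner.z
      );
    }
  }
  positions.needsUpdate = true;
  merged.setDrawRange(0, v); // only draw the vertices written this frame
}
```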

1 Like

> it is not generated from a seed alone; it is generated from a seed and the code that, well, generates it. Plus whatever assets you use to support it (like rocks, grass, the heater thingy, tube thingies - are they all code-based here?).

Ha, that’s true, the actual assets are not “generated”; their placement is. The terrain heights and terrain texturing (where the grass texture should go, where rock should go, etc.) are procedural.

Asset placement is a complex topic, I went into it a lot more here. To keep it short: each stone, mushroom, etc. has a fairly complex set of declarative rules that are satisfied by a solver as part of the generation process.
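The linked post has the real rule system; purely to illustrate the flavor of declarative placement rules plus a solver (the rule names, fields, and thresholds here are all invented), it could look something like this:

```js
// Hypothetical declarative placement rules, evaluated per candidate cell.
// Each rule reads sampled terrain data (slope, moisture, distance to path...)
// and the solver keeps only cells that satisfy every rule for that asset type.
const mushroomRules = [
  (cell) => cell.slope < 0.3,        // flat-ish ground only
  (cell) => cell.moisture > 0.6,     // damp areas
  (cell) => cell.distanceToPath > 2, // keep walkways clear
];

function solvePlacement(cells, rules, density, rng) {
  return cells
    .filter((cell) => rules.every((rule) => rule(cell)))
    .filter(() => rng() < density); // thin out the surviving candidates
}
```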

The actual generator doesn’t understand or know what a 3D mesh is; it places “markers”, which are then consumed in the “theming” step to instantiate assets. It’s a similar thing with terrain: the generator just creates layers of 2D data, akin to a set of DataTextures, and these are read in the theming step to generate both the terrain geometry and the texturing masks.
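A small sketch of the "layers of 2D data" idea, with a three.js `DataTexture` as the container on the theming side. The layer names and single-channel float format are assumptions, not the author's actual format:

```js
import * as THREE from 'three';

// Hypothetical: the generator fills plain Float32Arrays (height, grass mask,
// rock mask...); the theming step wraps them as DataTextures for the terrain shader.
function layerToDataTexture(layer, size) {
  const texture = new THREE.DataTexture(layer, size, size, THREE.RedFormat, THREE.FloatType);
  texture.needsUpdate = true;
  return texture;
}

const size = 256;
const grassMask = new Float32Array(size * size); // filled by the generator
const rockMask = new Float32Array(size * size);  // filled by the generator

// Passed as uniforms to a terrain ShaderMaterial that blends textures by mask.
const uniforms = {
  uGrassMask: { value: layerToDataTexture(grassMask, size) },
  uRockMask: { value: layerToDataTexture(rockMask, size) },
};
```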

The reference to a “fixed seed” is to point out that this demo is stable: it will generate roughly the same output every time, because the seed used to initialize the generator is fixed. Usually you’d expose the seed as a parameter to the user, or just take some random number, like the exact current time in milliseconds or a hash over the last 100 recorded mouse cursor positions.
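To make the fixed-seed point concrete, here's a common pattern (not necessarily the one used in this engine): a tiny seeded PRNG such as mulberry32, so the same seed reproduces the same level, while a user-facing build would derive the seed from something like `Date.now()`:

```js
// mulberry32: small deterministic PRNG, returns values in [0, 1).
function mulberry32(seed) {
  return function () {
    seed |= 0;
    seed = (seed + 0x6d2b79f5) | 0;
    let t = Math.imul(seed ^ (seed >>> 15), 1 | seed);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

const FIXED_SEED = 1337;            // demo: identical level on every load
const rng = mulberry32(FIXED_SEED);
// const rng = mulberry32(Date.now()); // user-facing build: a different level each time
```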

> For transparent particles, first of all I update all variants of particles (glass, fire, lights, cloud, smoke, spark): position, rotation, texture animation. Then I put them together into one geometry's attributes, sorted by distance to the camera, with DynamicDrawUsage. Merging with sorting takes 0.6-1 ms for 2000 particles. I'm using one shader whose vertex shader has code for rotating the billboard to the camera, to a point, cylindrical rotation, and rotating like a sprite. The fragment shader has a blending value (more additive or more solid), can change color (including black), and supports several texture arrays, an atlas, and UV offsets for animations. All together it's 2 draw calls, because double-sided and transparent are enabled. But the code is not finished yet.

I do something similar. In the current iteration I don’t merge geometries of separate emitters, but that’s something I plan on doing later as well. The rest is pretty much the same. There’s only 1 shader that draws all particles, and it uses the same uniforms, so each particle is made distinct through attributes alone.

I guess one main difference is that I use point rendering, so there’s no need to orient quads, but there’s overdraw and lack of perspective distortion for larger particles. Pros and cons :slight_smile:
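For contrast with the merged-quad approach quoted above, here's a minimal sketch of point-rendered particles in three.js (not the engine's actual shader): `THREE.Points` with a `ShaderMaterial`, per-particle attributes, and `gl_PointSize` computed in the vertex shader, so no quad orientation is needed, at the cost of overdraw and screen-aligned sprites. The scene setup and attribute filling are assumed.

```js
import * as THREE from 'three';

const count = 10000;
const geometry = new THREE.BufferGeometry();
geometry.setAttribute('position', new THREE.BufferAttribute(new Float32Array(count * 3), 3));
geometry.setAttribute('size', new THREE.BufferAttribute(new Float32Array(count), 1));
geometry.setAttribute('aColor', new THREE.BufferAttribute(new Float32Array(count * 3), 3));

const material = new THREE.ShaderMaterial({
  transparent: true,
  depthWrite: false,
  vertexShader: /* glsl */ `
    attribute float size;
    attribute vec3 aColor;
    varying vec3 vColor;
    void main() {
      vColor = aColor;
      vec4 mvPosition = modelViewMatrix * vec4(position, 1.0);
      // Size in pixels, attenuated by distance; no quad orientation needed.
      gl_PointSize = size * (300.0 / -mvPosition.z);
      gl_Position = projectionMatrix * mvPosition;
    }
  `,
  fragmentShader: /* glsl */ `
    varying vec3 vColor;
    void main() {
      // Round, soft-edged point sprite.
      float d = distance(gl_PointCoord, vec2(0.5));
      if (d > 0.5) discard;
      gl_FragColor = vec4(vColor, 1.0 - d * 2.0);
    }
  `,
});

const points = new THREE.Points(geometry, material);
scene.add(points); // `scene` assumed to be an existing THREE.Scene
```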

One notable thing is the particle count: my engine aims for sub-1 ms performance for 10k particles, so the performance optimization targets are slightly different. That is also one of the reasons I have been putting off merging all particles into a single geometry: I’m too afraid of a performance hit without investing significant time to get it to work right.

Your particle demo screenshot looks pretty cool, lots of colors :rainbow:

2 Likes