Procedural level generation using meep

Hey guys, I’ve been working on a level generator for my new meep-based game for the past month or so, and I thought I’d share some results so far.

Here’s a small breakdown of tree placement. The following screenshot shows the landscape with pink boxes marking areas where trees will be placed. The goal was to place trees so that they appear natural while respecting various gameplay constraints.

As you can see, the level has many different features: varying altitudes, water, and gameplay objects.

The generator I wrote works on a very simple concept: it places objects in 2d space using a density field. So for tree placement, all that’s required is to describe a density field of where the trees should go.

I have a few restrictions that I want to implement:

  1. A tree shall not be placed on the playable area, where the player can walk and interact with things. This is denoted with the grassy texture on the screenshot.

  2. A tree shall not be placed on the water.

  3. A tree shall not be placed on steep slopes.

Beyond that I want the trees to have a natural look, some areas of the map should have more densely forested areas than others.

To achieve the final result I created a density field for each restriction, as well as a noise-based density field to provide that natural look. Here are the individual fields visualized one by one:

Exclusion of playable areas:


Exclusion of areas that are below the water level:


Exclusion of areas with steep slopes:


For that natural look, a simplex noise function:


The final result is just a multiplication of these densities:
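A minimal sketch of this combination step, assuming each field is stored as a flat `Float32Array` in [0, 1] (the field names and grid layout here are my guesses, not meep’s actual API):

```javascript
// Sketch: combine per-restriction density fields by multiplication.
// Each field is a Float32Array of size width*height with values in [0, 1].
// Exclusion fields hold 0 where placement is forbidden, 1 where allowed.
function combineDensityFields(fields, width, height) {
    const result = new Float32Array(width * height);
    result.fill(1);
    for (const field of fields) {
        for (let i = 0; i < width * height; i++) {
            result[i] *= field[i];
        }
    }
    return result;
}

// Example on a 2x2 grid: playable-area mask, water mask, noise.
const playable = new Float32Array([0, 1, 1, 1]); // top-left cell excluded
const water = new Float32Array([1, 1, 0, 1]);    // one cell under water
const noise = new Float32Array([0.5, 0.8, 0.9, 0.2]);

const density = combineDensityFields([playable, water, noise], 2, 2);
// density is 0 wherever any exclusion field is 0
```

The nice property of multiplication is that any single hard exclusion zeroes the cell out, while the noise field modulates everything that remains.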


I have initially considered implementing vegetation growth simulation to distribute the foliage (such as trees), but in the end I decided that “realism” was not necessary for the kind of game I am making, and I don’t have too many types of trees to juggle.

However, this approach does support some elements of additional realism, such as having a temperature or precipitation layer and sampling those to compute additional density fields.

One more point: this generator uses density as a measure of how much of an area should be covered by objects. With a density of 1 the generator will attempt to pack objects as tightly as it can, whereas with a density of 0.1 it will try to keep only about 10% of the total area occupied.

To facilitate this, objects provide a simple 2d radius, which is used to calculate occupied area. Here’s a version of the above generation process with tree sizes scaled down by 50%:

You will notice that a lot more trees are being placed, as the generator is trying to achieve the same overall coverage density.
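A rough sketch of why that happens, assuming the per-object footprint is just a circle from the 2d radius (the function name and exact budgeting rule are mine, not the generator’s):

```javascript
// Sketch: estimate how many objects fit in a region for a target
// coverage density, given each object's 2d footprint radius.
// Halving the radius quarters the per-object area, so roughly 4x
// as many objects are needed for the same coverage.
function objectBudget(regionArea, density, objectRadius) {
    const objectArea = Math.PI * objectRadius * objectRadius;
    return Math.floor((regionArea * density) / objectArea);
}

const area = 64 * 64;
const full = objectBudget(area, 0.1, 1);    // radius 1
const small = objectBudget(area, 0.1, 0.5); // radius scaled down 50%
// small is roughly 4x full: same coverage, many more trees
```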

I plan to eventually release this generator as part of a future meep update, but there is no ETA on this for now.

If anyone is curious, the entire generation process for this 64x64 map takes about 1-2 seconds.

For this 128x128 map it’s about 4s:

That’s all folks, let me know what you think :slight_smile:

I’m also curious if anyone else has some experience with this kind of thing, what kind of approaches you use for foliage placement.

7 Likes

Added a “ground moisture” model. It’s a convolution of 3 factors:

  1. Diffusion from existing bodies of water, e.g. the ground near a lake being more “moist”.
  2. Accumulation of precipitation: water tends to run down slopes and get trapped on level surfaces or in crevices.
  3. Regional relative precipitation: just a simplex noise function at a very large scale, resulting in very small local variation.
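A minimal sketch of blending those three contributions per cell; the input fields are assumed to already be computed as [0, 1] arrays, and the weights are illustrative guesses rather than the actual model:

```javascript
// Sketch: combine three moisture contributions per cell.
// waterDiffusion, runoffAccumulation and regionalPrecipitation are
// hypothetical precomputed fields in [0, 1]; the weights are guesses.
function groundMoisture(waterDiffusion, runoffAccumulation, regionalPrecipitation) {
    const n = waterDiffusion.length;
    const moisture = new Float32Array(n);
    for (let i = 0; i < n; i++) {
        moisture[i] = Math.min(1,
            0.5 * waterDiffusion[i] +
            0.3 * runoffAccumulation[i] +
            0.2 * regionalPrecipitation[i]
        );
    }
    return moisture;
}
```

The resulting field can then be multiplied into the tree density field like any other restriction.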

Here’s a visualization of that:

You’ll notice that most of the mountains are blue-ish, which means that there’s very little ground moisture.

Here’s a re-generated tree placement with ground moisture taken into account (note the actual trees :deciduous_tree:):

3 Likes

Looks simply cool. Great work! :+1:

1 Like

Awesome ! Is that for Might is Right 2, or are you doing something different this time ?

1 Like

@felixmariotto

This time it’s actually a sci-fi themed game. It’s the same genre though, so I’m using assets from Might is Right as placeholders for now while I work on the tech.

I plan to retrofit this generator into Might is Right as well, so it’s not entirely a waste of effort to use these assets for now.

The way the generator is written, it works in 2 phases. The first generates metadata that has no relation to art/sound/etc. assets; it’s just a collection of markers. The second phase takes that metadata and generates concrete game objects, and that’s where the assets are used.

The entire generation framework is game-agnostic. The screenshots above are me configuring the generator to produce a specific result; that configuration itself is fairly light, only a few hundred lines of declarative code.
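The two-phase split could look something like this in spirit (all names here are hypothetical, not meep’s actual API):

```javascript
// Sketch of the two-phase split: phase one emits asset-agnostic
// markers, phase two maps markers to concrete game objects.
function generateMarkers(densityField, width, height) {
    const markers = [];
    for (let y = 0; y < height; y++) {
        for (let x = 0; x < width; x++) {
            // placeholder rule: any cell dense enough gets a marker
            if (densityField[y * width + x] > 0.5) {
                markers.push({ type: 'tree', x, y });
            }
        }
    }
    return markers;
}

function instantiate(markers, factories) {
    // factories maps marker type -> concrete object constructor,
    // e.g. { tree: m => spawnTreeMesh(m.x, m.y) }
    return markers.map(m => factories[m.type](m));
}
```

Keeping the marker phase asset-free is what makes retrofitting the generator into another game a matter of swapping the factory table.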

[update on the progress]:

1 Like

A small update.


  • Added mushrooms and rocks.
  • Fixed up water shader.
  • Improved texture painting scheme (less rock and more sand).

Here’s just some messing around:




2 Likes

A small update. I worked on a sound engine upgrade for meep and figured I would deploy a build of the level generator with positional sound sources. Each house is a positional sound source; it uses distance culling and a custom attenuation model (linear attenuation sounds really awkward).

Anyway, here’s the link

Controls:
Mouse, Left Button - pan, Right Button - rotate, Wheel - zoom.

PS: about the sound engine. I wanted to be able to add hundreds of positional sound sources to the game, but Chrome said no. So I wrote a culling mechanism and this custom attenuation model; now the browser only deals with sound sources that are audible to the listener, i.e. whose sphere of audibility overlaps the listener’s position. I tested this with 50,000 sound sources playing at the same time in a level, and it worked like a charm.
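The audibility test itself can be a plain sphere-vs-point check; here’s a minimal sketch, assuming each source carries a position and an outer radius beyond which it is silent (the field names are mine):

```javascript
// Sketch: keep only sound sources whose audibility sphere overlaps
// the listener. Sources and listener use {x, y, z}; each source has
// a radius beyond which it is inaudible.
function audibleSources(sources, listener) {
    return sources.filter(s => {
        const dx = s.x - listener.x;
        const dy = s.y - listener.y;
        const dz = s.z - listener.z;
        // compare squared distances to avoid the sqrt
        return dx * dx + dy * dy + dz * dz <= s.radius * s.radius;
    });
}
```

In practice you would run this against a spatial index rather than a flat array, so the 50,000-source case never touches most of the sources at all.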

I implemented something similar a couple of years ago in my spatial index. Another trick was using temporary proxies: basically positional sounds that aren’t real scene objects and never get inserted into the scene graph. With a couple thousand constant sound sources this saves a lot of memory and hierarchy nodes/changes, aside from the actual audio limitations.

It also helps with other mechanisms, such as impact sounds of entities; especially when using a physics engine, detaching sound as an “object” from the scene graph makes it more efficient.

2 Likes

Interesting. I created a bunch of tools for visibility culling along the way, so this was mostly just reuse of existing data structures. Like you, I don’t add/remove things every frame. I have something like a delta update mechanism; it still has to traverse every sound at least once, but because of how my engine is structured it’s pretty much a linear scan through an array, which is quite cheap. Positions are updated in an event-driven way, and attenuation is performed only on sounds that haven’t been culled.

I rely on a concept of a “Visibility Set” in the rendering pipeline; I just extended that a bit to fit the use case with sound emitters.

If anyone’s curious, my attenuation model is based around 2 spheres, inner and outer. The inner sphere is where the sound plays at full volume, and the outer sphere is where its volume reaches 0. The interpolation uses squared distance:

```javascript
volume = 1 - Math.pow( clamp( (distance - min) / (max - min), 0, 1 ), 2 );
```
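For reference, a self-contained version of that formula with `clamp` defined inline (the function names are just for illustration):

```javascript
// Self-contained version of the two-sphere attenuation above.
// min: inner radius (full volume), max: outer radius (silence).
function clamp(v, lo, hi) {
    return Math.min(hi, Math.max(lo, v));
}

function attenuate(distance, min, max) {
    const t = clamp((distance - min) / (max - min), 0, 1);
    return 1 - t * t;
}

// attenuate(min, min, max) === 1, attenuate(max, min, max) === 0,
// with a quadratic falloff in between
```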
1 Like