It is not generated from a seed alone; it is generated from a seed and the code that, well, generates it. Plus whatever assets you use to support it (like the rocks, grass, the heater thingy, the tube thingies - are they all code-based here?).
Ha, that’s true, the actual assets are not “generated”. Their placement is. The terrain heights and terrain texturing (where the grass texture should go, where the rock texture should go, etc.) are procedural.
Asset placement is a complex topic; I went into it a lot more here. To keep it short here: each stone, each mushroom, etc. has a fairly complex set of declarative rules that are satisfied by a solver as part of the generation process.
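As a rough illustration only, a declarative rule plus a naive solver could look something like the sketch below. All of the names (`PlacementRule`, `solvePlacements`, the individual fields) are made up for this example; the engine’s actual rule format and solver are more involved.

```typescript
// Hypothetical shape of a declarative placement rule; the real rules and
// solver are more complex than this sketch.
interface PlacementRule {
  assetTag: string;            // e.g. "stone", "mushroom"
  minSlope: number;            // reject terrain outside this slope range (radians)
  maxSlope: number;
  allowedSurfaces: string[];   // e.g. ["grass", "dirt"]
  minSpacing: number;          // minimum distance between instances of the same tag
  density: number;             // target instances per square metre
}

// A trivial rejection-sampling "solver": try random candidate positions and
// keep the ones that satisfy every constraint. Real solvers are smarter.
function solvePlacements(
  rule: PlacementRule,
  sample: (x: number, z: number) => { slope: number; surface: string },
  area: { w: number; h: number },
  rng: () => number,
): Array<{ x: number; z: number }> {
  const target = rule.density * area.w * area.h;
  const placed: Array<{ x: number; z: number }> = [];
  for (let i = 0; i < target * 10 && placed.length < target; i++) {
    const x = rng() * area.w;
    const z = rng() * area.h;
    const { slope, surface } = sample(x, z);
    if (slope < rule.minSlope || slope > rule.maxSlope) continue;
    if (!rule.allowedSurfaces.includes(surface)) continue;
    if (placed.some(p => Math.hypot(p.x - x, p.z - z) < rule.minSpacing)) continue;
    placed.push({ x, z });
  }
  return placed;
}
```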
The actual generator doesn’t understand or know what a 3D mesh is; it places “markers”, which are then consumed in the “theming” step to instantiate assets. It’s a similar thing with terrain: the generator just creates layers of 2D data, akin to a set of DataTextures, and these are read in the theming step to generate both the terrain geometry and the texturing masks.
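To make the marker/theming split concrete, here is a simplified sketch in three.js terms. The `Marker` record, `layerToTexture` and `theme` are hypothetical names for this example, not the engine’s actual data structures.

```typescript
import * as THREE from 'three';

// Hypothetical marker record: the generator only emits these, it never
// touches meshes. The theming step maps a marker type to an actual asset.
interface Marker {
  type: string;                      // e.g. "stone_small", "mushroom"
  position: [number, number, number];
  rotationY: number;
  scale: number;
}

// One 2D data layer (heights, a texturing mask, ...) wrapped as a
// DataTexture so the theming step and shaders can read it.
function layerToTexture(values: Float32Array, size: number): THREE.DataTexture {
  // Expand single-channel data to RGBA floats for simplicity.
  const rgba = new Float32Array(size * size * 4);
  for (let i = 0; i < size * size; i++) {
    rgba[i * 4 + 0] = values[i];
    rgba[i * 4 + 3] = 1;
  }
  const tex = new THREE.DataTexture(rgba, size, size, THREE.RGBAFormat, THREE.FloatType);
  tex.needsUpdate = true;
  return tex;
}

// Theming step (sketch): instantiate an asset per marker from a lookup table.
function theme(markers: Marker[], assets: Map<string, THREE.Object3D>): THREE.Group {
  const group = new THREE.Group();
  for (const m of markers) {
    const proto = assets.get(m.type);
    if (!proto) continue;
    const instance = proto.clone();
    instance.position.set(...m.position);
    instance.rotation.y = m.rotationY;
    instance.scale.setScalar(m.scale);
    group.add(instance);
  }
  return group;
}
```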
The reference to a “fixed seed” is to point out that this demo is stable: it will generate roughly the same output every time, because the seed that is used to initialize the generator is fixed. Usually you’d expose the seed as a parameter to the user, or just take some random number, like the exact current time in milliseconds or a hash over the last 100 recorded mouse cursor positions.
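For example, with a small deterministic PRNG such as mulberry32 (just an illustration of seeding, not the generator’s actual RNG; the seed value 1337 is arbitrary):

```typescript
// mulberry32: a tiny deterministic PRNG, so the same seed always yields the
// same sequence of numbers in [0, 1).
function mulberry32(seed: number): () => number {
  let a = seed >>> 0;
  return () => {
    let t = (a += 0x6d2b79f5);
    t = Math.imul(t ^ (t >>> 15), t | 1);
    t ^= t + Math.imul(t ^ (t >>> 7), t | 61);
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

// Fixed seed: the demo is stable, every run generates roughly the same world.
const stableRng = mulberry32(1337);

// Alternatively, seed from "some random number", e.g. the current time in ms.
const randomRng = mulberry32(Date.now() % 0xffffffff);
```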
For transparent particles, first of all I update all variants of particles (glass, fire, lights, cloud, smoke, spark): position, rotation, texture animation. Then I put them together into one geometry’s attributes, sorted by distance to the camera, with DynamicDrawUsage. Merging with sorting costs 0.6-1 ms for 2000 particles. I’m using one shader whose vertex shader has code for rotating the billboard toward the camera, toward a point, cylindrical rotation, and rotating like a sprite. The fragment shader has a blending value - more additive or more solid - with color changes (black works too), plus several texture arrays, an atlas, and UV offsets for animations. All together it is 2 draw calls, because double-sided and transparent are enabled. But the code is not finished yet.
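In three.js terms, the sort-and-merge step might look roughly like the sketch below. This is a simplified illustration with hypothetical helper names (`buildMergedGeometry`, `updateMergedGeometry`), not the actual code; the shader variants, rotation modes and texture arrays are omitted.

```typescript
import * as THREE from 'three';

interface Particle {
  position: THREE.Vector3;
  // rotation, texture frame, blend factor etc. would live here too
}

// One shared geometry for all particles; DynamicDrawUsage hints to the driver
// that the attribute buffers are rewritten every frame.
function buildMergedGeometry(maxParticles: number): THREE.BufferGeometry {
  const geometry = new THREE.BufferGeometry();
  const positions = new THREE.BufferAttribute(new Float32Array(maxParticles * 3), 3);
  positions.setUsage(THREE.DynamicDrawUsage);
  geometry.setAttribute('position', positions);
  return geometry;
}

// Every frame: sort all live particles back-to-front (so alpha blending
// composites correctly) and rewrite the shared position attribute.
function updateMergedGeometry(
  geometry: THREE.BufferGeometry,
  particles: Particle[],
  camera: THREE.Camera,
): void {
  const camPos = camera.position;
  particles.sort(
    (a, b) => b.position.distanceToSquared(camPos) - a.position.distanceToSquared(camPos),
  );

  const attr = geometry.getAttribute('position') as THREE.BufferAttribute;
  for (let i = 0; i < particles.length; i++) {
    const p = particles[i].position;
    attr.setXYZ(i, p.x, p.y, p.z);
  }
  attr.needsUpdate = true;
  geometry.setDrawRange(0, particles.length);
}
```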
I do something similar. In the current iteration I don’t merge geometries of separate emitters, but that’s something I plan on doing later as well. The rest is pretty much the same. There’s only 1 shader that draws all particles, and it uses the same uniforms, so each particle is made distinct through attributes alone.
I guess one main difference is that I use point rendering, so there’s no need to orient quads, but there’s overdraw and a lack of perspective distortion for larger particles. Pros and cons.
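For reference, a generic point-sprite setup looks roughly like this (a sketch of the technique, not the engine’s actual shader; the `size` attribute and the 300.0 attenuation factor are arbitrary):

```typescript
import * as THREE from 'three';

// Point-based particles: no quad orientation needed, the GPU rasterizes a
// screen-aligned square per point. Trade-off: point size is attenuated with
// distance, but the square itself is never perspective-distorted, and large
// points cause more overdraw.
const pointMaterial = new THREE.ShaderMaterial({
  transparent: true,
  depthWrite: false,
  vertexShader: /* glsl */ `
    // expects a per-particle 'size' BufferAttribute on the geometry
    attribute float size;
    void main() {
      vec4 mvPosition = modelViewMatrix * vec4(position, 1.0);
      // Scale point size with distance so particles shrink toward the horizon.
      gl_PointSize = size * (300.0 / -mvPosition.z);
      gl_Position = projectionMatrix * mvPosition;
    }
  `,
  fragmentShader: /* glsl */ `
    void main() {
      // Fade toward the edge of the point to fake a soft round sprite.
      float d = length(gl_PointCoord - vec2(0.5));
      float alpha = smoothstep(0.5, 0.0, d);
      gl_FragColor = vec4(1.0, 0.8, 0.4, alpha);
    }
  `,
});
```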
One notable thing is the particle count: my engine aims for sub-1 ms performance with 10k particles, so slightly different performance optimization targets. That is also one of the reasons why I have been putting off merging all particles into a single geometry - too afraid of a performance hit without investing significant time to get it to work right.
Your particle demo screenshot looks pretty cool, lots of colors.