Particle engine

Hey @Aerion, the engine has been released on GitHub. I haven't updated it for a while, but what's there is the product of about 6 years of work.

The documentation is a weak point.


I recently decided to expand the simulation's complexity and added something similar to what Unity does. I have broken up the monolithic simulation process into several smaller steps and added some new ones. Each simulation step can be optionally applied to any emitter in the scene, with per-emitter parameters.

With this it’s possible to apply gravity to some particles, or curl noise for example. Here’s a little demo.

Currently the simulation part is done in JS; I'm thinking of moving it over to the GL side with transform feedback in the future. But for now I'm quite happy with the results, as there's next to zero GC involved, so the simulation is quite fast, and each step can be further optimized by hand if needed.
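To make the idea concrete, here is a rough sketch of what such a per-emitter step pipeline over flat typed arrays could look like. None of these names come from the engine itself; they are made up for illustration.

```javascript
// Hypothetical sketch of a step-based particle pipeline. Particle state
// lives in flat typed arrays so the per-frame loop allocates nothing
// (next to zero GC).
function createEmitter(capacity) {
  return {
    count: 0,
    positions: new Float32Array(capacity * 3),
    velocities: new Float32Array(capacity * 3),
    steps: [] // { step, params } pairs, applied in order each frame
  };
}

// A simulation step is just a function over the emitter's buffers,
// so steps can be mixed per emitter and hand-optimized individually.
function gravityStep(emitter, params, dt) {
  const v = emitter.velocities;
  for (let i = 0; i < emitter.count; i++) {
    v[i * 3 + 1] += params.g * dt; // accelerate along Y
  }
}

function integrateStep(emitter, params, dt) {
  const p = emitter.positions;
  const v = emitter.velocities;
  for (let i = 0; i < emitter.count * 3; i++) {
    p[i] += v[i] * dt;
  }
}

// One pass of the simulation: run each emitter's configured steps.
function simulate(emitters, dt) {
  for (const emitter of emitters) {
    for (const { step, params } of emitter.steps) {
      step(emitter, params, dt);
    }
  }
}
```

Adding something like curl noise would then just be another `(emitter, params, dt)` function pushed onto an emitter's step list.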


soooooooooooo cool!


Can you share how to make a good-looking particle effect, and how to edit the trajectory of particles?

Editing particle trajectories directly is not something you typically do; my particle engine doesn't even support it. The reason is that particles are typically used in large quantities, tens, hundreds, or even thousands at the same time, and to make them look somewhat interesting you generally want some chaos in there. Hand-animating even 10 particles over, say, 10 seconds can be quite a chore, and it gets worse for 100 or 1000 particles. So animation is generally handled procedurally: you define some formula for particle motion and the engine handles the rest.
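As an illustration of what "a formula for particle motion" can mean (a generic example, not something from this engine): give each particle a random phase at spawn, then compute its position as a pure function of age, so a thousand particles are no more work to author than one.

```javascript
// Illustrative procedural motion: an upward-drifting spiral.
// "phase" is randomized per particle at spawn; "age" is seconds alive.
function particlePosition(phase, age) {
  const r = 0.5 + age * 0.2;          // radius grows as the particle ages
  return [
    r * Math.cos(age * 2 + phase),    // x: circle around the Y axis
    age * 0.5,                        // y: steady upward drift
    r * Math.sin(age * 2 + phase)     // z
  ];
}
```

The random phase is the "chaos": every particle follows the same formula, but no two follow the same path.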

My engine offers a fairly simple speed+position physics simulation. Particles can be spawned in volumes; there are 3 volumes provided:

  • point
  • box
  • sphere

Emission can be done from inside the volume, or from the surface of the volume.

Particle emitters have a parameter that controls how long each particle lives; once a particle dies, it is removed from the screen. This "life" is described as a numeric range (min and max), and whenever a particle is created, it is assigned a random life duration between those two.
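A sketch of those two ideas, sampling a spawn position from a sphere (either the volume or just the surface) and rolling a life duration from a min/max range. This is illustrative code, not the engine's actual implementation.

```javascript
// Spawn position inside a sphere, or on its surface.
function sampleSphere(radius, surfaceOnly) {
  let x, y, z, len;
  // Rejection-sample a point in the unit ball; accepted points are
  // uniformly distributed in the volume.
  do {
    x = Math.random() * 2 - 1;
    y = Math.random() * 2 - 1;
    z = Math.random() * 2 - 1;
    len = Math.hypot(x, y, z);
  } while (len === 0 || len > 1);
  // Surface emission: project the point onto the sphere of the given
  // radius. Volume emission: just scale the unit-ball point up.
  const s = surfaceOnly ? radius / len : radius;
  return [x * s, y * s, z * s];
}

// Random life duration within the emitter's [min, max] range.
function sampleLife(min, max) {
  return min + Math.random() * (max - min);
}
```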

As far as making "good-looking" effects goes, I can't help much here, I'm afraid. I don't consider myself an artist; I learned mostly by imitation and by reading tutorials on the internet, so all my knowledge is unstructured and not suitable for teaching. I'm sure there are others here who could provide such advice though.


A small demo, with a scene more appropriate for showing off curl noise.


Can you provide some particle-related demos in the project? To be honest, I can understand your particle engine, but I don't know how to use it. :sob:


Really cool stuff! Are there usage instructions? F.e. suppose someone has an existing project. Now what would they do to bring in the particle engine, and how would they set it up?


I've been thinking of that too, and the answer right now is: pull meep, and use the particle engine either by itself or as part of meep (via the ECS system). Both options will work. Using the particle engine without ECS is more work, but not too much more, and you can always check the ParticleSystem class to see what needs to be done.

Size matters

I think most people wouldn't want to use meep as a whole; the culture in web dev these days is to use a bunch of smaller "middleware" components, and as such, having the particle engine as a separate library would be more useful for the majority of users, I think. Doing that packaging is a bit of a pain for me, but I think I should do it at some point, even if only as a snapshot of the current state.


To clarify why I say it's a pain: the particle engine is made possible by a large number of smaller, powerful components that meep is built out of, such as the packing algorithm the atlas system uses, the automatic atlas, binary utilities, sorting algorithms, material management tools, and the spatial index, to name a few. I work on the engine as a whole; it's imported into my projects as a git module, and ripping out the particle engine would require either making a snapshot that would need to be manually updated every time, or doing some major refactoring in the meep engine to break it into a number of git modules.


The "Particular" (working name) engine has a number of neat things, and it's the most pragmatic engine I know of (feel free to take that with a grain of salt, since I'm the author). But it does have a bunch of limitations currently, chief among them accessibility and ease of use for new users. I made a handy diagram in another thread; here it is, as I suspect it will prove useful to many people who are curious about the engine:


A good tool for managing projects composed of multiple packages is Lerna. It’s practically a standard, used by notable projects.

There are other tools too. For example, NPM 7 just released Workspaces, which can now do a sizeable subset of what Lerna does.

The main thing you get from these tools is bootstrapping (installing dependencies) of all your packages and, most importantly, linking your dependent packages together.

The linking is especially important: as you mentioned, Particular depends on various other pieces. Those other pieces can be packages managed in your repo, and Lerna or competing tools will link them to Particular, and will then link Particular to your root meep package.
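For concreteness, a root package.json using npm 7 workspaces could look something like this (the package names are made up; meep's actual layout would differ):

```json
{
  "name": "meep",
  "private": true,
  "workspaces": [
    "packages/*"
  ]
}
```

Running npm install at the root then installs every workspace's dependencies and symlinks each workspace into node_modules, so, f.e., a hypothetical packages/particular could import its sibling packages as if they were published.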

As an example, I am doing this with LUME:

  • At the root folder is the primary lume package.
  • In the packages/ folder you'll see packages that lume depends on, such as @lume/element in the packages/element/ folder.
  • Those packages are published separately to npm, f.e. see @lume/element.
  • During dev, the packages are symlinked into lume's node_modules to make it easy to iterate on any of them live, and if any packages depend on each other, they are linked into their node_modules as well.
  • My packages are git submodules, but they don't have to be. You could also have one big "monorepo" and still publish each package separately to npm.
  • A command like lerna run build will run each package's build script in the correct order depending on the dependency graph ("topological order"). Similarly, I use lerna run test to run the tests of all packages. Lerna will run them as much in parallel as possible. The build script for @lume/element will always run before the one for lume, for example.

What I like about my particular setup is that the main repo runs tests for all packages at once to ensure everything works in linked mode, plus each package can run its own tests.

Another advantage of git submodules is that each package can be installed directly from git, f.e. npm install @lume/element@lume/element will install it from the GitHub repo (lume/element) instead of from npm, which is useful when needing to install a particular commit, branch, or any git ref, f.e. npm install @lume/element@lume/element#aj6fwo8a or npm install @lume/element@lume/element#some-branch. With a monorepo, there's no way to install one of the packages from git; one must fork the repo, publish the package on npm, then install it aliased to the original package name.

If you want to try multi-package management out with Lerna, I’d be happy to help if you hit any issues.
