Turn animated mesh into smoke

Live Link

Instructions: press K to smoke


For a while I'd had an idea of creating an effect for Might is Right, where a mesh would turn into a bunch of particles.

The idea is quite simple:

  • compute skinned mesh vertices in world-space
  • distribute particles across the mesh surface uniformly

What do you guys think? :slight_smile:

10 Likes

Cool effect! :+1:

1 Like

Looks great! I especially like what happens when you hold down or spam the K button :grin:

1 Like

Wow, such a simple, yet cool-looking effect :smiley:

1 Like

Thanks! Yeah, it's just a little prototyping tool I put together. There's actually an emitter below the terrain, and particles are just snapped to the mesh surface when you press K :smiley:

@DolphinIQ
Totally agree, it was quite simple to code up too. The key problem was getting performance on par; I had to inline a lot of the simple stuff, like matrix transformations.

Super cool!

Do you want to share the code? I would love to learn how this technique is done.

Hey @mcanet,

I can't share the code right now, as I'm in the process of releasing my game on Steam, but in a month or two I will come back to meep and release an update with this feature as well.

The basic idea is to distribute points randomly across the surface of the mesh so that the distribution is as even as possible.

To do this, you first calculate the total surface area of the mesh, then go over each face (triangle), take its surface area, and decide how many points to place on that face based on its relative surface area.

Something like this:

totalPointsToPlace = 100;
totalArea = computeTotalArea(geometry);

for (triangle in geometry) {
    area = computeTriangleArea(triangle);
    relativeArea = area / totalArea;

    pointsToPlace = relativeArea * totalPointsToPlace;

    for (i = 0; i < pointsToPlace; i++) {
        position = computeRandomPointOnTriangle(triangle);
        placePoint(position);
    }
}

Actual code is a bit more complex, but this is a good representation. Hope that helps!
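For anyone who wants something runnable, here is a plain-JavaScript sketch of the same idea, using arrays instead of an engine vector type (helper names are illustrative, not from meep's actual code):

```javascript
// Area of a triangle given three [x, y, z] vertices, via the cross product.
function triangleArea(a, b, c) {
  const abx = b[0] - a[0], aby = b[1] - a[1], abz = b[2] - a[2];
  const acx = c[0] - a[0], acy = c[1] - a[1], acz = c[2] - a[2];
  const cx = aby * acz - abz * acy;
  const cy = abz * acx - abx * acz;
  const cz = abx * acy - aby * acx;
  return Math.sqrt(cx * cx + cy * cy + cz * cz) / 2;
}

// Uniform random point on a triangle (fold the parallelogram sample back in).
function randomPointOnTriangle(a, b, c) {
  let r = Math.random(), s = Math.random();
  if (r + s > 1) { r = 1 - r; s = 1 - s; }
  return [
    a[0] + r * (b[0] - a[0]) + s * (c[0] - a[0]),
    a[1] + r * (b[1] - a[1]) + s * (c[1] - a[1]),
    a[2] + r * (b[2] - a[2]) + s * (c[2] - a[2])
  ];
}

// Distribute roughly `totalPoints` across triangles, proportionally to area.
// `triangles` is an array of [a, b, c] vertex triples.
function distributePoints(triangles, totalPoints) {
  const totalArea = triangles.reduce((sum, t) => sum + triangleArea(t[0], t[1], t[2]), 0);
  const points = [];
  for (const [a, b, c] of triangles) {
    const count = Math.round((triangleArea(a, b, c) / totalArea) * totalPoints);
    for (let i = 0; i < count; i++) points.push(randomPointOnTriangle(a, b, c));
  }
  return points;
}
```

Note that the per-triangle rounding can make the total drift from `totalPoints` by a few; flooring and carrying the fractional remainder across triangles keeps it exact.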

4 Likes

Thanks, this pseudocode is very explanatory. It's a really cool concept for spreading 100 points around.
How many particles are in your demo? It runs super smooth.
Are the particles non-transparent at the beginning? I have the feeling they are all the same size.

Thanks a lot.

In the demo it's 8,000, I think. My code is super-optimized, so even 100,000 particles work without much delay.

At the start particles are… well, they are always somewhat transparent. At the start they have about 0.3 opacity, though that works multiplicatively, so if the sprite has transparent areas (which it does), you end up with some transparency in those areas. I think that's pretty visible though, so maybe I didn't get your question :sweat_smile:

About particle size: they are differently sized, but the variation is only about 33%. Minimum size is 0.2 and maximum is 0.3 (in world-space units, that's about 20 and 30 cm respectively).

here’s the particle definition if you’re interested:

{
    "position": {
        "x": 9.17416,
        "y": -1.90031,
        "z": 12.06082
    },
    "scale": {
        "x": 1,
        "y": 1,
        "z": 1
    },
    "rotation": {
        "x": 0,
        "y": 0,
        "z": 0,
        "w": 1
    },
    "parameters": [
        {
            "name": "scale",
            "itemSize": 1,
            "defaultTrackValue": {
                "itemSize": 1,
                "data": [
                    1
                ],
                "positions": [
                    0
                ]
            }
        },
        {
            "name": "color",
            "itemSize": 4,
            "defaultTrackValue": {
                "itemSize": 4,
                "data": [
                    1,
                    1,
                    1,
                    1
                ],
                "positions": [
                    0
                ]
            }
        }
    ],
    "preWarm": false,
    "readDepth": true,
    "softDepth": true,
    "blendingMode": 0,
    "layers": [
        {
            "imageURL": "data/textures/particle/smokeparticle.png",
            "particleLife": {
                "min": 1,
                "max": 1.6
            },
            "particleSize": {
                "min": 0.2,
                "max": 0.3
            },
            "particleRotation": {
                "min": 0,
                "max": 0
            },
            "particleRotationSpeed": {
                "min": 0,
                "max": 0
            },
            "emissionShape": 1,
            "emissionFrom": 1,
            "emissionRate": 0,
            "emissionImmediate": 4000,
            "parameterTracks": [
                {
                    "name": "color",
                    "track": {
                        "itemSize": 4,
                        "data": [
                            0.6431372549019608,
                            0.6039215686274509,
                            0.5372549019607843,
                            0.3,
                            0.6,
                            0.5764705882352941,
                            0.5215686274509804,
                            0.265,
                            0.7098039215686275,
                            0.6745098039215687,
                            0.615686274509804,
                            0
                        ],
                        "positions": [
                            0,
                            0.7957639171068672,
                            1
                        ]
                    }
                },
                {
                    "name": "scale",
                    "track": {
                        "itemSize": 1,
                        "data": [
                            0.9235456523622187,
                            0.9685843530893341,
                            1.0034300604871111
                        ],
                        "positions": [
                            0,
                            0.31176470588235294,
                            0.7647058823529411
                        ]
                    }
                }
            ],
            "position": {
                "x": 0,
                "y": 0.6,
                "z": 0
            },
            "scale": {
                "x": 0.9,
                "y": 0.5,
                "z": 0.65
            },
            "particleVelocityDirection": {
                "direction": {
                    "x": 0,
                    "y": -1,
                    "z": 0
                },
                "angle": 0
            },
            "particleSpeed": {
                "min": 0.1,
                "max": 1.7
            }
        }
    ]
}

Particle engine itself is already in meep, so you can use that any time you like :wink:

1 Like

Yes, understood: particles should always have some transparency to blend nicely.
The meep engine looks super good.

One question about this triangle calculation relative to area: does it work better with a low-poly model?

In terms of performance, yes, though I don't know by how much. In terms of quality of distribution, I honestly don't know, but I'd guess it makes no difference. Because this method uses the surface and not the volume, it will suffer if your model has dual surfaces or overlapping mesh pieces.

I thought this method would be slow, but for my purposes, and with pretty much maximum optimization, it has almost no impact on performance. My meshes are around 10k polys each, and most of the time is spent on skinning and particle initialization, not on the distribution.

Maybe it’s not clear, but you have to do CPU-side skinning for animated meshes to use this method.
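To make that CPU-side skinning step concrete, here is a minimal linear-blend-skinning sketch in plain JavaScript. It assumes column-major 4x4 matrices (the way three.js stores them) and that each bone matrix already includes the inverse bind matrix; all names here are illustrative, not from any particular engine:

```javascript
// Transform an [x, y, z] point by a 4x4 column-major matrix (w assumed 1).
function applyMatrix4(m, p) {
  const x = p[0], y = p[1], z = p[2];
  return [
    m[0] * x + m[4] * y + m[8]  * z + m[12],
    m[1] * x + m[5] * y + m[9]  * z + m[13],
    m[2] * x + m[6] * y + m[10] * z + m[14]
  ];
}

// Linear blend skinning for one vertex: transform by up to four bones and
// blend the results by the skin weights. `boneMatrices[i]` is assumed to be
// the combined bone * inverse-bind matrix for bone i.
function skinVertex(position, skinIndex, skinWeight, boneMatrices) {
  const out = [0, 0, 0];
  for (let j = 0; j < 4; j++) {
    const w = skinWeight[j];
    if (w === 0) continue;
    const t = applyMatrix4(boneMatrices[skinIndex[j]], position);
    out[0] += t[0] * w;
    out[1] += t[1] * w;
    out[2] += t[2] * w;
  }
  return out;
}
```

In three.js terms, `skinIndex` and `skinWeight` would come from the geometry's attributes and the matrices from the skeleton; running this over every vertex each frame is exactly the skinning cost mentioned above.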

1 Like

Thanks, your explanations helped me a lot. I did some research on the topic:

I found that computing the area is already integrated into THREE.Triangle:

var t = new THREE.Triangle(va,vb,vc);
var area = t.getArea();

Then I found a detailed explanation and code for how to compute a random point on a triangle:

function randomPointInTriangle(vertex1, vertex2, vertex3) {
  var edgeAB = vertex2.clone().sub(vertex1);
  var edgeAC = vertex3.clone().sub(vertex1);
  var r = Math.random();
  var s = Math.random();
  // fold samples from the far half of the parallelogram back into the triangle
  if (r + s >= 1) {
    r = 1 - r;
    s = 1 - s;
  }
  // random point in triangle
  return edgeAB.multiplyScalar(r).add(edgeAC.multiplyScalar(s)).add(vertex1);
}

Others suggest this version for better uniform sampling:

function randomInTriangle(v1, v2, v3) {
  var r1 = Math.random();
  var r2 = Math.sqrt(Math.random());
  var a = 1 - r2;
  var b = r2 * (1 - r1);
  var c = r1 * r2;
  return (v1.clone().multiplyScalar(a)).add(v2.clone().multiplyScalar(b)).add(v3.clone().multiplyScalar(c));
}
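Both snippets actually sample uniformly: the first folds the parallelogram sample back into the triangle, the second uses the square-root trick on the barycentric coordinates. One quick empirical check (my own, not from either source) is to split a triangle at its edge midpoints into four equal-area sub-triangles and verify that roughly 25% of samples land in each:

```javascript
// Uniform barycentric sample via the square-root method, for the 2-D unit
// right triangle with vertices A = (0,0), B = (1,0), C = (0,1).
function samplePoint() {
  const r1 = Math.random();
  const r2 = Math.sqrt(Math.random());
  const b = r2 * (1 - r1);
  const c = r1 * r2;
  return [b, c]; // the weight a = 1 - r2 multiplies A = (0,0), so it drops out
}

// Split the triangle at its edge midpoints into four equal-area pieces and
// count how many samples land in each; a uniform sampler gives ~25% per piece.
function subdivisionCounts(n) {
  const counts = [0, 0, 0, 0];
  for (let i = 0; i < n; i++) {
    const [x, y] = samplePoint();
    if (x + y <= 0.5) counts[0]++;      // corner piece at A
    else if (x >= 0.5) counts[1]++;     // corner piece at B
    else if (y >= 0.5) counts[2]++;     // corner piece at C
    else counts[3]++;                   // central piece
  }
  return counts;
}
```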
1 Like

Bumped into this, it’s for THREE.Geometry() though: https://github.com/mrdoob/three.js/blob/e9f31ad154d2cc314f37b5a8da4bdddd4f1bde7e/examples/js/utils/GeometryUtils.js#L93

And it does the trick:

See the tree, covered with points. A simple conversion of the buffer geometry into a geometry and a call to that utils method gives a set of points distributed on the geometry's surface.

That looks awesome, though it's going to be quite slow. I like the binary search idea: quite elegant, if not fast.

The solution I went with is O(n); this one will be O(n + k*log(n)) because of the binary search. Beyond that, the memory access is going to be pretty bad for large models; that's just the nature of random access plus binary search.
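For context, the log(n)-per-sample term comes from binary-searching a prefix-sum array of face areas. A generic sketch of that pattern (not the actual GeometryUtils code):

```javascript
// Prefix sums of per-face areas; `faceAreas` is a plain array of numbers.
function buildCumulativeAreas(faceAreas) {
  const cumulative = new Float64Array(faceAreas.length);
  let total = 0;
  for (let i = 0; i < faceAreas.length; i++) {
    total += faceAreas[i];
    cumulative[i] = total;
  }
  return cumulative;
}

// Pick a face with probability proportional to its area: draw a random value
// in [0, totalArea) and binary-search for the first prefix sum above it.
function sampleFaceIndex(cumulative) {
  const target = Math.random() * cumulative[cumulative.length - 1];
  let lo = 0, hi = cumulative.length - 1;
  while (lo < hi) {
    const mid = (lo + hi) >> 1;
    if (cumulative[mid] <= target) lo = mid + 1;
    else hi = mid;
  }
  return lo;
}
```

The build step is O(n) and cache-friendly; it's the per-sample search that jumps around memory, which is the access-pattern concern raised above.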

2 Likes

I think @donmccurdy updated that for buffer geometry last week:

1 Like

There are a few problems with this approach as I see it, at least for my use case:

  1. Garbage overhead. You create a sampler that allocates its own buffer for accumulating weights; destroying that object creates garbage, and re-using it is not obvious right now.

  2. No skinning support. This is related to the previous point: you would need to re-build these weights every frame in which you want to do distribution.

  3. Function calls galore. Each sample requires a ton of function calls, each of which hurts your performance.

  4. Memory access patterns. A .sample call will first access the start of the weighted list, then the end, then things in between during the binary search. There are ways to mitigate that, but not to eliminate it completely; that's just the nature of binary search.

That being said, @donmccurdy has made a very nice and easy-to-use solution. I reckon most people will not care as much about performance, especially seeing how simple and intuitive MeshSurfaceSampler is to use.

1 Like

The use pattern is 1 sampler per 1 geometry. I expect that’s more than sufficient for even performance-sensitive applications, unless someone wanted to use multiple samplers as a workaround for the issue mentioned in your next point…

No skinning support…

Agreed, I don’t currently have plans to add this but it would be a nice addition! Perhaps just changing the sampler to accept a Mesh or SkinnedMesh, rather than a BufferGeometry?

Function calls galore

I'd argue the per-sample function calls can't be reduced much more without re-implementing core library functions like triangle.getNormal or vector.fromBufferAttribute. You could re-implement and inline these, I guess, but that has a maintenance/complexity cost that I tend to avoid paying unless and until there's a compelling benchmark proving it's worthwhile for real use cases. JavaScript engines can be pretty good about inlining hot functions automatically, which means these optimizations are not always as effective as one would expect. For example, the old implementation used a recursive binary search, which I replaced here with a for-loop version. I'd expected that to improve performance, but in practice I couldn't measure any consistent difference.

Not to say this code can't be optimized further; I'm sure it could! But I'd want to measure that effect before inlining a bunch of things, and for the use cases I could think of, it wasn't necessary.

2 Likes

To be honest, I’ve never seen this effect be measurable in a JavaScript benchmark. Would be curious to hear if you’ve run into examples!

1 Like

Why focus on the negatives :smiley:

Okay, let me elaborate. You re-create the Float32Array every time build is called; that can be fixed, but it doesn't matter at all in the static-geometry case. So the whole point about garbage is moot with static geometry and no skinning.

About function calls: I agree that inlining does happen, but in my experience it's not consistent. Also, functions are typically marked as "hot" only after somewhere around 10k calls, which means the first time you're building your point set, it's going to be slow as sin.

Again, does that matter in the grand scheme of things? -no. :woman_shrugging:
I’m just trying to iron out any potential FPS spikes in my code, does that make my code better than yours? -no. :rainbow:
Does it mean that I have longer code than yours? - yes, yes it does, my code is longer than yours. :sunglasses:

Let me say this again, in italics this time, if I may: Your code is wonderful, I love it

I'm pretty sure you could. The basic idea is about cache: my implementation only accesses data sequentially, which makes it super cache-friendly. Yours? - not so much. Is that a problem? - see points above.

I’m a flawed human being, and I’m a jaded software engineer, I have my own preferences and baggage from past work. I do value different perspectives and approaches though, to me it’s not about being objectively better, it’s about priorities, what’s important for whom and for which set of use-cases. I think my code is harder to understand, for example, and that’s bad.

Okay, i’ll stop, I love your code, I love your code…

3 Likes

Oops I hope I didn’t come across as defensive here! Your earlier comment didn’t seem negative at all to me. And I’m not remotely bothered that different requirements and experience led you to a different solution. :slight_smile: But since I added the MeshSurfaceSampler class pretty recently I thought I’d follow up, in case some of these suggestions would make it better for use cases I hadn’t considered.

Since the link in the PR broke after I deleted the branch, here’s the demo that goes with it:

webgl / instancing / scatter

If SkinnedMesh support were added, I’m assuming face areas would be computed only once in the default pose, not updated with each frame. Does that seem reasonable?

I’d also considered another sampling method that would be O(1) complexity per sample, but requires a larger memory footprint and some tuning or quantization of the face weights. I stopped working on that after realizing the number of new samples per frame was going to be pretty low for my purposes, but would be open to that in the future.
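A constant-time scheme with a larger memory footprint and quantized weights sounds similar to Walker's alias method; here is a generic sketch of that technique (my reading of the description, not code from the PR):

```javascript
// Walker's alias method: O(n) table build, O(1) per sample, 2n extra numbers.
function buildAliasTable(weights) {
  const n = weights.length;
  const total = weights.reduce((a, b) => a + b, 0);
  const prob = new Float64Array(n);
  const alias = new Uint32Array(n);
  // Scale weights so the average bucket holds exactly 1.
  const scaled = weights.map(w => (w * n) / total);
  const small = [], large = [];
  for (let i = 0; i < n; i++) (scaled[i] < 1 ? small : large).push(i);
  // Pair each under-full bucket with an over-full one.
  while (small.length && large.length) {
    const s = small.pop(), l = large.pop();
    prob[s] = scaled[s];
    alias[s] = l;
    scaled[l] -= 1 - scaled[s];
    (scaled[l] < 1 ? small : large).push(l);
  }
  while (large.length) prob[large.pop()] = 1;
  while (small.length) prob[small.pop()] = 1; // numerical leftovers
  return { prob, alias };
}

// Two uniform draws: pick a bucket, then take it or its alias.
function sampleAlias(table) {
  const i = (Math.random() * table.prob.length) | 0;
  return Math.random() < table.prob[i] ? i : table.alias[i];
}
```

The table has to be rebuilt whenever the face weights change, which matches the tuning caveat: you trade build time and memory for constant-time samples.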

2 Likes