It is based on the GPUComputationRenderer, Tone.js, lots of GLSL, and Polygonjs to build the scene and make the creation of shaders much easier. In the coming days I hope to release a tutorial on how to create scenes like this.
I’ve tested it on quite a few browsers and devices, and it runs fine even on a 7-year-old Android, but I’d be curious to hear how it runs on your devices. Any frame drops or obvious bugs?
One thing that surprised me, for instance, is that on iPhone, even a recent one, I had to set the data type of the GPUComputationRenderer to HalfFloat. I would have expected this not to be necessary, since iOS supports WebGL2. And I have not tried on Windows at all, so I’m curious to hear from anyone how it works there.
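In case it helps anyone, here is a minimal sketch of that workaround, assuming a standard three.js setup (the iOS check is just an illustration; use whatever detection you prefer):

```js
import * as THREE from 'three';
import { GPUComputationRenderer } from 'three/examples/jsm/misc/GPUComputationRenderer.js';

const renderer = new THREE.WebGLRenderer();

// Crude UA-based iOS check, for illustration only (recent iPads may
// report themselves as Macs, so real detection needs more care).
const isIOS = /iPad|iPhone|iPod/.test(navigator.userAgent);

const gpuCompute = new GPUComputationRenderer(256, 256, renderer);

// Even though iOS supports WebGL2, the compute textures seem to need
// half-float precision there.
if (isIOS) {
  gpuCompute.setDataType(THREE.HalfFloatType);
}
```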
A quick summary where I show snippets of what this tutorial is about, namely how to use audio to drive elements like material properties, lights, and forces applied to particles.
Yes, you’re right, thanks a lot for the feedback. I have not yet found a good way to automatically adapt the beat detection to every type of music.
I’ll probably add a “sensitivity” slider to allow fine-tuning (not quite sure when, though).
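Roughly, I imagine something like an energy-based detector where the slider scales the threshold. Just a sketch with hypothetical names, not my actual implementation; the energy value would come from the Tone.js analysis:

```js
// Compare the instantaneous low-band energy against a slowly adapting
// running average; "sensitivity" scales how big the jump must be.
let runningAverage = 0;
const smoothing = 0.98; // closer to 1 = slower adaptation

function detectBeat(lowBandEnergy, sensitivity /* e.g. 1.0–3.0 from a slider */) {
  const isBeat = runningAverage > 0 && lowBandEnergy > runningAverage * sensitivity;
  runningAverage = smoothing * runningAverage + (1 - smoothing) * lowBandEnergy;
  return isBeat;
}
```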
And thank you for the link to that song, lots of good ones in this channel (and plenty of tests cases to use!)
First of all I wanna say great work,
and the problem of the particles flying away could be solved by normalizing all your values between 0 and 1 and multiplying by a certain set of scalars, so that you can offset the range/radius of the particles from your sphere based on that normalizedValue*scalarValue. That way you can add a certain amount of randomness too, without worrying too much about particles flying away.
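To make that suggestion concrete, a rough sketch (with made-up names) could be:

```js
// Because the audio term and the jitter are both bounded, the particle
// can only ever sit a bounded distance from the sphere surface.
function particleRadius(baseRadius, normalizedValue, scalarValue) {
  const jitter = (Math.random() - 0.5) * 0.1; // bounded randomness
  return baseRadius + normalizedValue * scalarValue + jitter;
}
```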
And yes, I definitely agree that normalizing values + adding randomness is very often a good way to control things. Although in this case, there are a couple of gotchas:
First, we need to know what values to normalize. In this case, the root data is the audio analysis, and as I understand it, the output of the FFT depends on frequency as well as volume. The question then is: how do you normalize volume in a realtime analysis? Some songs may have an overall low volume, some may have an overall high volume, and we can’t know that in advance. But we need a pleasant amount of particle movement in both situations (one possible approach is sketched below).
Second, at the moment a force is applied to each particle when a beat is detected, and beats may be detected once every 5 seconds, or several times per second. In the first case, a force is applied at the moment of the beat, and the particles then have time to return to their rest position. In the second case, the applied forces keep accumulating, and this is when the particles tend to fly away. I’m not entirely sure (yet) what data can be normalized here.
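One direction I might explore: normalize against a decaying running peak (so quiet and loud songs both end up filling the [0, 1] range over time), and damp/clamp the velocity so repeated beats cannot accumulate without bound. A rough sketch with hypothetical names; in the actual scene the velocity step would live in the GLSL simulation shader:

```js
// 1) Adaptive volume normalization: track the loudest recent energy and
//    let it decay, so the normalized value uses the full [0, 1] range
//    whatever the song's overall volume.
let peak = 1e-4; // small floor avoids division by zero

function normalizeEnergy(rawEnergy) {
  peak = Math.max(rawEnergy, peak * 0.995); // running peak slowly decays
  return rawEnergy / peak; // in [0, 1] by construction
}

// 2) Damping + clamping (1D for brevity): impulses from rapid-fire
//    beats decay back toward rest instead of accumulating forever.
function stepVelocity(velocity, force, dt, maxSpeed) {
  const v = (velocity + force * dt) * 0.95; // damping
  return Math.max(-maxSpeed, Math.min(maxSpeed, v)); // hard clamp
}
```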