In scientific computing, the solution of a partial differential equation usually produces huge data sets. In my case, it produces a snapshot of thousands of spheres every minute, resulting in 500 snapshots in total stored in files. The positions, colors, and sizes of the spheres change over time. Spheres come and go over time too.
It seems easy enough to visualize one snapshot with three.js. I load a file, create thousands of SphereBufferGeometry objects, and merge them into one to make a mesh. It takes about 30 seconds.
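Roughly, the per-snapshot merging looks like this (a simplified sketch of what I described, not my actual code; the cell record fields are placeholders, and the BufferGeometryUtils import path and export name depend on the three.js version):

```js
import * as THREE from 'three';
// In newer three.js versions this export is named mergeGeometries instead.
import { mergeBufferGeometries } from 'three/examples/jsm/utils/BufferGeometryUtils.js';

// `cells` is assumed to be an array of { position: {x, y, z}, radius } records from one snapshot.
function buildSnapshotMesh(cells) {
  const geometries = cells.map((cell) => {
    const g = new THREE.SphereBufferGeometry(cell.radius, 12, 8); // low-poly sphere per cell
    g.translate(cell.position.x, cell.position.y, cell.position.z);
    return g;
  });
  // Merge thousands of small geometries into one so the snapshot is a single draw call.
  return new THREE.Mesh(mergeBufferGeometries(geometries), new THREE.MeshLambertMaterial());
}
```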
I want to use the snapshots to make a time-lapse animation. Is it possible to achieve that with three.js? If so, could you please offer some suggestions?
Here’s an example with boxes. Nothing is moving in the example, but boxes are being added dynamically.
The example uses a somewhat advanced management system to keep track of the boxes, but you should be able to write a basic version of that to suit your needs.
The key is to adjust drawRange based on how many objects are added or removed from the scene.
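A minimal sketch of that idea (this is not the code from the example above; the sphere counts, attribute handling, and material are simplified assumptions):

```js
// Preallocate one big BufferGeometry and reveal geometry via drawRange as spheres are added.
const MAX_SPHERES = 10000; // assumed upper bound
const sphereTemplate = new THREE.SphereBufferGeometry(1, 8, 6).toNonIndexed(); // non-indexed so drawRange counts plain vertices
const vertsPerSphere = sphereTemplate.attributes.position.count;

const positions = new Float32Array(MAX_SPHERES * vertsPerSphere * 3);
const geometry = new THREE.BufferGeometry();
geometry.setAttribute('position', new THREE.BufferAttribute(positions, 3));
geometry.setDrawRange(0, 0); // draw nothing to start with

// Only positions are filled here; normals and colors would be copied the same way.
const mesh = new THREE.Mesh(geometry, new THREE.MeshBasicMaterial());
scene.add(mesh);

let sphereCount = 0;
function addSphere(center, radius) {
  const src = sphereTemplate.attributes.position;
  const offset = sphereCount * vertsPerSphere * 3;
  for (let i = 0; i < src.count; i++) {
    positions[offset + i * 3 + 0] = src.getX(i) * radius + center.x;
    positions[offset + i * 3 + 1] = src.getY(i) * radius + center.y;
    positions[offset + i * 3 + 2] = src.getZ(i) * radius + center.z;
  }
  sphereCount++;
  geometry.setDrawRange(0, sphereCount * vertsPerSphere); // only the filled vertices get drawn
  geometry.attributes.position.needsUpdate = true;
}
```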
Instancing is one (good) option. Another is adapting THREE.Points to look more like spheres. I have an example at Bouncing points, spatial hashing (Edit: note that rendering is far from the bottleneck in that case, so instancing could have done the same job). It makes the points round, with a color gradient from the center outward. With a texture they can be made to look a bit like lit spheres (see the official three.js examples). I also imagine it is possible to write a custom shader that renders GL points as perfect spheres with correct projection, lighting, and depth, but it may turn out to be prohibitively expensive computationally. I have long wanted to try, though. One day…
Update: Coding jam: Pixel-perfect spheres without high-res geometry (depth is sketchy, though, because of technical limitations).
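As a rough illustration of the texture-based variant (this is not the code from my example; the sprite image and the parameter values are assumptions):

```js
// Round, vaguely sphere-looking points via a sprite texture on PointsMaterial.
const count = 10000;
const positions = new Float32Array(count * 3).map(() => (Math.random() - 0.5) * 100);
const geometry = new THREE.BufferGeometry();
geometry.setAttribute('position', new THREE.BufferAttribute(positions, 3));

// Assumed asset: a small image with a radial "shaded ball" gradient and transparent corners.
const sprite = new THREE.TextureLoader().load('sphere_sprite.png');

const material = new THREE.PointsMaterial({
  size: 2,
  map: sprite,
  alphaTest: 0.5,        // discard the transparent corners of each square point
  sizeAttenuation: true  // points shrink with distance, which acts as a natural LOD
});

scene.add(new THREE.Points(geometry, material));
```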
Our suggestions so far have been about increasing rendering performance. I get the impression that your question is just as much about how to make a video from a number of frames made with equal timesteps. Is that right? Or maybe not a video, but a stored (precomputed) sequence of states that you want to move spheres according to. (One representation of such a precomputed sequence could actually be a vertex video texture.)
I thought of points too. Since @DevelopDaily said “spheres”, I went for the geometry-based recommendation.
I would not recommend shaders, though; for a beginner it’s a tough area to get into. I don’t know how advanced your knowledge is, @DevelopDaily, and I don’t know how critical performance is to you; it’s all about trade-offs.
If you go for billboards (the points solution), you can gain a lot in terms of vertex shader load, but billboards tend to tax fragment shading, since they have a lot of overdraw. You’re always drawing a square, so if you want a circle, you’ll be wasting a lot of cycles drawing nothing to the screen, even with good use of the discard statement.
I still think you will fare better with “points”, because they give you a very natural LOD: smaller billboards simply take up fewer GPU cycles.
You said there are “thousands” of spheres every minute; that’s actually not a lot, depending on how many thousands we are talking about. Let’s say you are talking about a rate of 100,000 objects per minute; at 60 fps that’s only about 28 new objects per frame. If you have fewer than 10,000 objects on the screen at any given moment, I would suggest that going with instances should be more than sufficient. If you have more than that, perhaps it’s worth looking into the point-based solution (billboards).
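For the instancing route, a minimal sketch (assuming a reasonably recent three.js version with InstancedMesh and setColorAt; the snapshot record shape and the helper name are made up for illustration):

```js
// One InstancedMesh holds every sphere in the current snapshot.
const MAX_CELLS = 10000; // assumed upper bound on simultaneous cells
const mesh = new THREE.InstancedMesh(
  new THREE.SphereBufferGeometry(1, 12, 8),
  new THREE.MeshLambertMaterial(),
  MAX_CELLS
);
scene.add(mesh);

const dummy = new THREE.Object3D();
const color = new THREE.Color();

// `snapshot` is assumed to be an array of { position, radius, color } records.
function applySnapshot(snapshot) {
  mesh.count = snapshot.length; // unused instances are simply not drawn
  for (let i = 0; i < snapshot.length; i++) {
    const cell = snapshot[i];
    dummy.position.copy(cell.position);
    dummy.scale.setScalar(cell.radius);
    dummy.updateMatrix();
    mesh.setMatrixAt(i, dummy.matrix);
    mesh.setColorAt(i, color.set(cell.color));
  }
  mesh.instanceMatrix.needsUpdate = true;
  if (mesh.instanceColor) mesh.instanceColor.needsUpdate = true;
}
```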
Billboards come with their own problems:
- they don’t respect lights
- they don’t have shadows: they don’t receive them and don’t cast them (well)
- they have no depth, so when they interact with one another, or anything else, it will be painfully obvious that these are flat images and not “spheres”
If those concessions are not a problem for you to make, I’d say give it a go.
Billboards can respect light; I use billboards with a g-buffer and volumetric impostors with the same features as regular geometry. Spheres are especially easy to make as billboards instead of actual geometry. Shadows only require a depth buffer rendered by the light camera, and receiving shadows requires translating the sphere position instead of the billboard quad.
I wouldn’t take what THREE offers out of the box as the limit; it can’t cover every technique, case, and feature, so a lot should be expected to be implemented yourself rather than to come ready-made. Spheres on billboards, as discussed in this topic, are not something THREE provides in any form.
It isn’t really much extra work either. For the case discussed here, all you have to do is provide the inputs that would usually be taken from the geometry, basically the position and the normal.
Ha, I really admire you. I think you don’t realize how far outside the “normal” scope your knowledge is. Anyway, I think your advice is totally valid, but it does require fairly advanced knowledge, and it requires being comfortable with writing shaders. I just wanted to caution the OP about that.
@Usnul
I won’t use shaders as my first choice. I have studied all the examples on the three.js website and understand them, except the implementation of the loaders, but I don’t think my knowledge is advanced enough.
The “points” approach may well be very useful in some other cases, but I am afraid it won’t suit my purposes. My “spheres” are used to depict human cells and the organelles inside them. I think “spheres” would be the simplest geometries that can maintain the minimum fidelity of a cell structure. Later, I need to clip thousands of the cells to reveal the cross sections of a piece of tissue.
Cells grow and die over time. I meant that the state (position, size, color, etc.) of each of thousands of cells is recorded every minute as a snapshot. For example, in 10 hours I will get 600 snapshots. I want to play those snapshots, frame by frame, as a sort of animation, in an accelerated manner, say, in 1 minute. The goal is to let a biologist watch 10 hours of growth of a piece of artificial tissue in 1 minute.
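In rough code, what I’m after is something like this (just a sketch of the desired behavior, assuming the snapshots are already loaded into an array and something like the applySnapshot helper from the instancing sketch above updates the scene; renderer, scene, and camera are the usual three.js objects):

```js
// Play ~600 preloaded snapshots in about one minute, frame by frame.
const playbackSeconds = 60;
const startTime = performance.now();

function animate() {
  requestAnimationFrame(animate);

  const elapsed = (performance.now() - startTime) / 1000;
  const t = Math.min(elapsed / playbackSeconds, 1); // 0..1 over one minute
  const frame = Math.min(Math.floor(t * snapshots.length), snapshots.length - 1);

  applySnapshot(snapshots[frame]); // assumed helper that repositions/rescales/recolors the cells
  renderer.render(scene, camera);
}
animate();
```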
A (very) simplified idea is to save the positions and radii of the cells from a single snapshot into an RGBA image (rgb = position, a = radius), do this operation for all snapshots (so you’ll have a sequence of images), and make a video of them in a format that is supported by the HTML video element and supports an alpha channel. Use this video as a video texture, reading the data from it in a shader (or shaders) for instanced spheres or for points.
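To illustrate just the data layout (a sketch only; snapshotToTexture, the record fields, and the normalization ranges are assumptions, and the video-encoding step and the shader that reads the texture back are not shown):

```js
// Pack one snapshot into an RGBA texture: rgb = normalized position, a = normalized radius.
function snapshotToTexture(snapshot, bounds, maxRadius) {
  const size = Math.ceil(Math.sqrt(snapshot.length)); // square texture big enough for all cells
  const data = new Uint8Array(size * size * 4);

  for (let i = 0; i < snapshot.length; i++) {
    const cell = snapshot[i];
    data[i * 4 + 0] = 255 * (cell.position.x - bounds.min.x) / (bounds.max.x - bounds.min.x);
    data[i * 4 + 1] = 255 * (cell.position.y - bounds.min.y) / (bounds.max.y - bounds.min.y);
    data[i * 4 + 2] = 255 * (cell.position.z - bounds.min.z) / (bounds.max.z - bounds.min.z);
    data[i * 4 + 3] = 255 * cell.radius / maxRadius;
  }

  const texture = new THREE.DataTexture(data, size, size, THREE.RGBAFormat);
  texture.needsUpdate = true;
  return texture;
}
```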
Sorry, @EliasHasle. I didn’t grasp the significance of the strategy you presented earlier until @prisoner849 enlightened me. Great minds (yours, not mine) think alike :-)
Clipping effects can in principle be accomplished with points, e.g. using 3D textures (or position-dependent procedural generation) and branching execution (or multiple point materials with conditional fragment discarding).
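For the simple case of a planar cross section, three.js’s built-in clipping planes already do conditional fragment discarding without a custom shader (a minimal sketch; the 3D-texture / procedural approach mentioned above would need custom shader code):

```js
// A planar cross section via built-in clipping planes.
// Signed distance below zero is discarded, so this keeps everything with y <= 0.
const clipPlane = new THREE.Plane(new THREE.Vector3(0, -1, 0), 0);

const material = new THREE.MeshLambertMaterial({
  clippingPlanes: [clipPlane],
  clipShadows: true
});

renderer.localClippingEnabled = true; // required, local clipping is off by default
```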