My name is 01iePi. (Pi = π)
This is my first topic.
As we know, rendering a huge number of points with PointsMaterial is very heavy and may even lose the WebGL context.
So I tried to use many Web Workers (each with an OffscreenCanvas), with fewer points per worker.
(points per worker = total point count / worker count)
But many Web Workers are heavy, and eventually it crashes.
A single Web Worker doesn't crash and runs smoothly.
Just an idea (I haven't run your example): browsers have a limit on the number of canvases, e.g. 32 canvases in Chrome last I checked. That means you can have 32 canvases at most, or else they start crashing. And with one worker per canvas, that caps you at 32 workers. There's also a limit on the number of workers (64?).
You are creating an entirely new setup (scene, camera, renderer) per worker, and that's inefficient and slowing things down. The idea is to initialize a single canvas (scene, camera, renderer) and offload the heavy computations to multiple threads/workers, while ideally keeping the data flow between the main thread and the workers (the real bottleneck) to a minimum, or even better eliminating it completely by using a SharedArrayBuffer.
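For what it's worth, the zero-copy part of that idea can be sketched in a few lines. This is only an illustration of the SharedArrayBuffer mechanics; the worker is simulated by a plain function here, and all sizes and names are made up for the example. In a real app it would be a Worker writing its slice of the positions:

```javascript
// One shared position buffer for all points; the main thread and the
// workers view the SAME memory, so no copies cross the thread boundary.
const POINT_COUNT = 1000; // illustrative size
const sab = new SharedArrayBuffer(POINT_COUNT * 3 * 4); // xyz, 4 bytes each
const positions = new Float32Array(sab); // main-thread view

// In a browser you would pass `sab` to each worker, e.g.:
//   worker.postMessage({ sab, start, end });
// Simulated worker: fills its assigned slice in place.
function workerFill(buffer, start, end) {
  const view = new Float32Array(buffer); // worker-side view, same memory
  for (let i = start; i < end; i++) view[i] = i * 0.5;
}
workerFill(sab, 0, POINT_COUNT * 3);

// The main-thread view sees the writes immediately, with no copying.
console.log(positions[2]); // 1
```

The idea would then be to hand that one array to the single renderer's geometry and only flag it for update each frame, assuming the WebGL implementation accepts the shared-memory view.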
Here is an excellent demonstration of what I mean by SimonDev:
Thank you very much.
The following repository is similar to the demonstration.
I ran it, but got 'SharedArrayBuffer is not defined' in Chrome, so I changed it to a plain ArrayBuffer, and then it worked.
This repository is excellent, but I have no idea how to apply this approach to a huge PointsMaterial.
If you understand the concept of pointers in C/C++, for example, a transferable is basically that: you pass a pointer to the object to another thread. This means that you (the sender) can no longer use it until the worker thread gives it back to you.
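You can see that hand-off directly in the buffer's byteLength: once transferred, the sender's buffer is detached. A minimal sketch, using structuredClone's transfer list, which has the same semantics as `worker.postMessage(buf, [buf])` but shows the effect synchronously (assumes a runtime with structuredClone, i.e. modern browsers or Node 17+):

```javascript
// Transferring an ArrayBuffer moves ownership instead of copying.
const buf = new ArrayBuffer(16);
console.log(buf.byteLength); // 16

// In a browser: worker.postMessage(buf, [buf]) transfers it to the worker.
const received = structuredClone(buf, { transfer: [buf] });

console.log(buf.byteLength);      // 0: the sender's buffer is now detached
console.log(received.byteLength); // 16: the receiver owns the memory
```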
In my opinion, it makes no sense to use more workers, or to create more instances of a worker, than you have processor cores. Or to be more precise, not without active worker management that then actively manages the worker queue.
SimonDev from the example above limited his number of workers to 7. He justifies this in his tutorial with the fact that it doesn't get faster with more workers, and he shows a performance table. His main thread takes up one processor core; with 7 workers all running on different CPU cores, that makes a total of 8 CPU cores. What a coincidence.
The 7 in his tutorial is not just a random number, but follows consistently from the hardware as a boundary condition.
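A common way to apply that rule in code is to size the pool from the hardware. A short sketch; `navigator.hardwareConcurrency` reports the logical core count in browsers, and the fallback value here is just an assumption for environments that lack it:

```javascript
// Size the worker pool from the hardware: one core for the main thread,
// the rest for workers. On an 8-core machine this gives exactly the
// 7-worker split described above.
const cores =
  (typeof navigator !== "undefined" && navigator.hardwareConcurrency) || 4;
const workerCount = Math.max(1, cores - 1);

// Browser-only construction, shown for context:
//   const pool = Array.from({ length: workerCount },
//     () => new Worker("worker.js"));
console.log(workerCount);
```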
This is also the reason why GPUs now have 4000 cores or more.
For an image with 1000 × 1000 = 1,000,000 pixels, you would have to wait quite a while with only 8 cores before you saw anything on the screen.
A shader then runs on each GPU core. If you had a geometry with 4000 vertices, such a GPU could execute the vertex shader for every vertex simultaneously (in parallel). This makes it clear that the number of your CPU cores sets the sensible limit for workers that can be used at the same time.
As for workers: if you send large amounts of data back to your main thread, as I do (I send back arrays with millions of entries, and it runs smoothly on my tablet), then use SharedArrayBuffers. With normal ArrayBuffers, copies of your worker's results are created in your main thread. This costs performance and uses up RAM unnecessarily; if the RAM fills up, your app will surely crash. I've experienced all of that. SharedArrayBuffers need cross-origin isolation on the server, or for a local development server this:
With SharedArrayBuffers, your main thread can then directly access your worker's result arrays. This is maximally efficient and RAM-friendly.
If you are using an Apache2 server for development, then maybe this will help you with the cross-origin isolation: