While exploring possible causes, I noticed that there is a type of context called "webgpu". Since I am using WebGPU, I wonder if I should be using that context instead.
I currently use a 2d context to split an image up into 16 textures. Here is the code:
//- Initialize Canvas
let ImgSiz = 512; // !!! Change this for each image
let canvas = document.createElement("canvas");
canvas.width = ImgSiz;
canvas.height = ImgSiz;
let context = canvas.getContext('2d');
context.translate(0, ImgSiz); // Flips vertical for three.js
context.scale(1, -1);
//- Use Canvas to Split Up Image into Multiple Textures
// imagLoader is a THREE.ImageLoader and DifSrc the source image URL, both defined elsewhere
imagLoader.load(DifSrc, function(image) {
    context.drawImage(image, 0, 0, ImgSiz, ImgSiz);
    let idx = 0;
    let siz = ImgSiz / 4;
    for (let z = 0; z < 4; z++) {
        for (let x = 0; x < 4; x++) {
            let ImgDat = context.getImageData(siz * x, siz * z, siz, siz);
            let texture = new THREE.DataTexture(ImgDat.data, siz, siz);
            texture.format = THREE.RGBAFormat;
            texture.magFilter = THREE.LinearFilter;
            texture.minFilter = THREE.LinearMipmapLinearFilter;
            texture.generateMipmaps = true;
            texture.needsUpdate = true;
            grd_.Df0[idx] = texture; // grd_.Df0 is the destination texture array, defined elsewhere
            idx++;
        }
    }
});
When I change the definition of context to the following, the original error goes away.
let context = canvas.getContext('webgpu');
However, I then get error messages saying that the context does not recognize any of the other commands, such as context.translate, context.scale or context.drawImage.
Could this be the problem? Is there an equivalent set of commands for the webgpu context? Or is there a better way (or a more three.js-friendly way) to split up an image without using an HTML canvas?
I am so confused by this thread. Now you want webgpu, which isn't even a graphics API, to take the same commands as the 2d canvas? How do you imagine this should work, and why? WebGPU, if anything, is even lower level than WebGL.
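For contrast, this is roughly all a "webgpu" context can do - a minimal sketch, where gpuDevice stands for a GPUDevice you would have to request through navigator.gpu first:

// The "webgpu" context has no translate/scale/drawImage at all;
// it is only a hook between the canvas and a GPU device.
const gpuContext = canvas.getContext('webgpu');
gpuContext.configure({
    device: gpuDevice, // from navigator.gpu.requestAdapter() / adapter.requestDevice()
    format: navigator.gpu.getPreferredCanvasFormat(),
});
// All actual drawing then goes through command encoders and render passes.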
Regarding the links, did any of you click them and get more than 5 fps?
My whole computer stalls when I open this page. Does the same happen for you? Again, I have played Cyberpunk and Doom Eternal on this machine; the 2080 Ti may be old, but it was the top of the line of its generation.
Both of you might be slightly overthinking this, since WebGPU is experimental in both three.js and browsers.
phil_crowther, Chrome posted a warning, so you should just try its suggestion about enabling the willReadFrequently attribute - that way the warning should go away. The GPUCanvasContext also shows as experimental, but it does have an example of how to use it, so try it and see how that works.
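Something along these lines, if I am reading the warning right:

// Keeps the 2d context, but hints the browser to optimize the canvas
// for repeated getImageData() reads, which is what Chrome suggests:
let context = canvas.getContext('2d', { willReadFrequently: true });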
dubois, see if your 2080ti card can handle this Ocean Demo. If it freezes and slows down your computer, then this card might just have a hard time with WebGPU. Or it might have a hard time with phil_crowther's examples.
Why add extra steps? If you click on the link, you get the same exact view as I do, right? If you don't use your mouse to rotate the screen to show more sky and all that, what frame rate do you get?
If we all land on the same exact view, and the same exact rendering, then the FPS is meaningful, since our different machines should yield different FPS. If we all use our mouse, each one of us is going to show a different amount of sky and ocean, and now you have two variables. My machine could be slow, but I'm seeing more sky than you.
So, if you click on the link, the page opens, and you don't touch anything, what does the fps meter say?
Which makes sense: the ocean is super complex, while the background is just a texture read. So, is it fair to conclude that, for the purposes of a benchmark, a camera at say (0, 100, 0) looking down, without orbit controls, would be the best setup?
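Something like this, as a sketch (the fov and far-plane values are just placeholders):

// A fixed benchmark camera - same view on every machine, so FPS
// differences come only from the hardware:
const camera = new THREE.PerspectiveCamera(45, window.innerWidth / window.innerHeight, 1, 10000);
camera.position.set(0, 100, 0);
camera.lookAt(0, 0, 0); // straight down at the ocean, no OrbitControls attached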
If you use the keyboard keys to lower the speed to 50mph and the altitude to approximately 100ft, then you can have the whole ocean, with the plane and its reflection, on screen running at a steady FPS of 50+.
I don’t really understand your last question about benchmarking a camera.
Thanks for clarifying that GPUCanvasContext is also new and experimental. I had gone to that source but was unable to find the commands I was looking for, and there is very little other guidance online. I can stick with the old context if that still works with the three.js implementation of WebGPU - which it apparently does for others.
I had not done anything about the notice since it seemed to be merely a suggestion aimed at people who are using getImageData during runtime - which I am not. Nevertheless, I could give the fix a try and see what happens.
It is interesting that you would refer to that Ocean Demo. My Ocean Demo uses a simplified version of that wave generator, and the author (Attila_Schroeder) is the programmer who was able to run all my programs in r168 with no problems (at 120 fps). Here is an r168 version of my wave generator that should work perfectly for you and, possibly, dubois.
Sorry for the confusion. I was hoping that those links to older versions might provide you with some examples that worked at a faster speed. It appears that we both need newer GPUs, especially if we are going to work with three.js WebGPU programs.
That is exactly what I see. However, my r167 version shows a solid 60fps and my r168 version shows less than 10fps.
Heh. I just noticed that if I rotate the view to look straight up, I get 60 fps. If I look straight down, I get less than 10fps. So the ocean is definitely causing the problems - which is kind of what I expected the cause to be.
phil_crowther, maybe you could have your app start at low speed and altitude, like 50mph / 100ft, and let users speed up and go higher up if they want to.
The reason for my suggestion is that the FPS was higher and steadier.
Eureka! I found the source of the problem - shadows.
Once I commented out all the lines in the renderer setup relating to shadows, the r168 version was back to 60fps. That may be why looking down reduced the frame rate - not because you are looking at the ocean, but because you are looking at the airplane's shadow.
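For reference, the lines in question were something like the usual shadow switches in a three.js renderer setup (the exact lines in my app may differ slightly):

// Commenting these out restored 60fps:
// renderer.shadowMap.enabled = true;
// renderer.shadowMap.type = THREE.PCFSoftShadowMap;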
I will review the three.js examples to see if I can determine the exact source of the problem. Otherwise, I will start another query.
MORE:
It appears that the problem is that - for whatever reason - I can no longer use a shadow map size of 8192x8192. Reducing the size to 2048x2048 appears to eliminate (or significantly reduce) all of my fps problems. I also realized that the little airplane was not receiving shadows, so I fixed that.
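In code terms, the two fixes amount to something like this (sunLight and airplane are placeholder names for my shadow-casting light and the little plane's mesh):

sunLight.shadow.mapSize.set(2048, 2048); // was 8192x8192 - the key change
airplane.receiveShadow = true;           // the little airplane now receives shadows too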
I will still be interested in seeing if a new GPU eliminates this problem - but, even if it does, this experience has highlighted a problem that I need to consider when making programs work for people with lower performance equipment (like my current GPU).
But just to be sure, I will run this by the three.js developers to see if this was an expected result.
Yes, I am trying to find out if some software change in r168 caused shadow map textures (or something) to be saved in GPU RAM space - if there is such a thing - and overwhelmed the GPU. Perhaps I could get a better idea if I added one of the CPU/GPU usage displays?
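For example, the stats panel that ships with the three.js examples - a minimal sketch (it shows FPS and frame time rather than true CPU/GPU utilization):

import Stats from 'three/addons/libs/stats.module.js';

const stats = new Stats();
document.body.appendChild(stats.dom);

renderer.setAnimationLoop(function () {
    renderer.render(scene, camera);
    stats.update(); // refresh the FPS / ms panel each frame
});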
ADD: I have now updated my r167 and r168 versions (linked in message 7 above) with usage displays. As you can see, something odd is going on with the r168 GPU usage. Not only is it significantly greater, but there is some kind of cycling at work.