iFFT Ocean Wave Generator Module

The stitching works wonderfully. The surface appears as if it were a single mesh; even with fast movements I see nothing at the edges. The surface only looks strange where the differently sized chunks produce different normal vectors. Therefore I have to calculate the normal vectors from the wave function myself, since that varies smoothly across all chunks.
I can specify the minimum size of a chunk and the resolution by parameter, so the quadtrees can be influenced with only two parameters. Here I have a resolution of 48, meaning 48 x 48 squares per chunk, and I set the minimum chunk size to 24.

From orbit:

Close to surface:

Although my waves are just primitive Perlin noise, when I look at them they are almost hypnotic.

Back to the minimum chunk size of 24:
That means the quadtrees are subdivided as long as their edge lengths are greater than 24, in my case meters. So if I get so close that the quadtrees reach this limit, I have a surface resolution of 0.5 m (24 m edge / 48 squares). I could also set the minimum size to 1000 and choose a high or low resolution. However, large chunks with a high resolution are very computationally intensive; it would be as if the quadtree had been switched off and one single large mesh had been created. Conversely, a subdivision down to 1 m with a high resolution of 256 also makes little sense. Although quadtrees are very challenging to program, their parameterization is fortunately very simple.
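To illustrate what those two parameters do, here is a minimal sketch of the subdivision rule (not the code from my module; QuadNode and the distance criterion are simplified placeholders):

import * as THREE from 'three';

class QuadNode {
  constructor(center, size) {
    this.center = center;   // THREE.Vector3, chunk center in world space
    this.size = size;       // edge length in meters
    this.children = [];
  }
  update(cameraPos, minChunkSize) {
    // split while the edge length is above the minimum and the camera is close enough
    if (this.size > minChunkSize && this.center.distanceTo(cameraPos) < this.size * 2) {
      const q = this.size / 4;
      for (const [dx, dz] of [[-q, -q], [q, -q], [-q, q], [q, q]]) {
        const child = new QuadNode(this.center.clone().add(new THREE.Vector3(dx, 0, dz)), this.size / 2);
        child.update(cameraPos, minChunkSize);
        this.children.push(child);
      }
    }
    // leaves become chunks: a grid of resolution x resolution quads covering this.size meters
  }
}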
For the skirts I will add parameters to activate or deactivate them and to adjust their offset. So far it looks like I don't need them, but having them and not needing them is better than needing them later and not having them.

I hope I can create a repository next weekend. Even if I won't be finished by then, you would already have the geometry. But please forgive me if the repository takes me longer. Everything I program is tailored to communicate with the other modules in my code, and I have to change a lot to turn it into a simple repository. And simple is relative :sweat_smile: It is in the nature of quadtrees that they are somewhat complex.

You've infected me with the ocean :grin:


I made a very detailed Black Pearl a few years ago. Unfortunately it is in Cinema 4D, and at the moment I don't know how to import it into Blender because I don't have Cinema 4D anymore.

P.S.

There isn't much about quadtree oceans on the internet. A few nice videos but nothing about the code. There seems to be a big secret being made about it. Well then, let's change that.

I've just started with the repository. Let's see how far I can get until next weekend.

Before starting on clouds, I have been working on incorporating the animated ocean in my "dioramas".
Here is what I have accomplished so far (I still need to animate the pilot's arms and legs like they are in my flight simulation demo).
But this should give people some idea of what air combat in the Pacific was like during WWII - your greatest challenge was navigating the endless sea to find a tiny ship or island. (Some pilots swear that your engine starts running rough the minute you head out over the ocean.)
During the War, my Dad flew in B-29s for thousands of miles across the open ocean. My father-in-law flew in aircraft that were part of a group which flew planes like the one in the diorama from a tiny "Jeep Carrier".
Now I can start adding clouds - perhaps some of those low clouds you often see over the ocean dropping a column of rain.


I like the Vought F4U, Spitfire and Focke Wulf 190. For propeller planes these are still very good machines with better performance data than some new ones. With internal combustion engines, displacement and turbochargers are important in order to get as much air as possible into the engine, and there has not been much improvement in piston engines to date. I live in Germany. What our carmakers improved was cheating (dieselgate). In general, I can only advise against buying a German car and for good reasons.

I haven't dealt with character animations yet, I still have to learn that.
The ocean: I have now managed to calculate the normal vectors from the displacement function and I like the result very much. Here is a picture in which I simply output the normal vectors as color.

Normal vectors:

That looks better than calculating normal vectors for the polygons, because those don't run as smoothly.
There are several different chunks directly in front of the camera and you don't see any transitions. The way I know there are multiple chunks even though I can't see anything is that I've looked under the surface; that's where I can see the skirts, and there are a lot of them.
The screenshots are all from a moving surface.
The enormous advantage of the normal vectors calculated from the displacement function is that they are not only continuous but also just as precise at great distances as they are close up, even though the quadtree geometry is coarse at great distances. Good normal vectors are so important when viewing from above, in order to see the typical sparkling of the sea. With coarse normal vectors that would hardly be possible, and certainly not in the quality I want.
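As a rough sketch of the idea (the height function here is just a placeholder, not my actual displacement function): sample the displacement at neighbouring points and build the normal from the partial derivatives, so it is independent of how coarse the chunk geometry is.

import * as THREE from 'three';

// normal of the surface y = height(x, z, t), built via central differences
function waveNormal(height, x, z, t, eps = 0.05) {
  const dhdx = (height(x + eps, z, t) - height(x - eps, z, t)) / (2 * eps);
  const dhdz = (height(x, z + eps, t) - height(x, z - eps, t)) / (2 * eps);
  return new THREE.Vector3(-dhdx, 1, -dhdz).normalize();
}

// example with a trivial placeholder wave
const h = (x, z, t) => 0.5 * Math.sin(0.2 * x + t) * Math.cos(0.2 * z + t);
console.log(waveNormal(h, 10, 5, 0));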
Now I can devote myself to the repository and then familiarize myself with your beautiful IFFT shader.


Okay, I have tried it out and it looks promising! The visuals are great, so I will have to see what happens to the framerate when I add some to my diorama. I see that you structured this as a "class", rather than a subroutine. Is that an indication of a C++ programming background?

I don't know if it makes much difference, but I get an error notice saying "clouds_2.jpg not found". I looked online, and found lots of files with that name, but none that seem to be the kind you would use in your program.

EDIT
If you do have a C++ background, you may be able to make use of NVIDIA WaveWorks. You can download their package and, apparently, use the shaders in your program. The package includes some demos which indicate that it can be used with a quadtree. I would try it, except that I don't know how to convert the sample programs calling the shaders from C++ to JS. It looks like they finalized the program in 2017. But it has all the features you would want, including cascades and foam.

I only program in an object-oriented manner, because otherwise clarity is very quickly lost in larger programs. The clouds_2.jpg error is harmless, because I don't use that texture in the repository. The texture is loaded in the cloud-model.js module if it exists. Its purpose is to let you define where clouds should be, kind of like a weather map where there are clouds on a black background. In the cloud shader I don't use the weather map but simply one of the texture layers in the Worley array. I wanted to keep the example as simple as possible, because adding a thousand extra things quickly becomes too much.
I'm making good progress with the ocean repository so far. I have to do a lot of things from scratch because my ocean is spherical and huge.
In the cloud repo I use a depth texture that threejs renders in each frame and passes to the cloud shader. In the shader I then reconstruct the complete visible 3D world with the help of the depth texture. This is important to be able to tell whether clouds are covering something or being covered by something.

As soon as I've finished the quadtree repo I'll dedicate myself to the FFT topic and texture cascading.


Does the reconstruction of the entire visible 3D world take a lot of time? When considering the order in which things are drawn, it seemed that (if the drawing order is near to far) I would always draw my aircraft first (because it is at 0,0,0 and always nearest to me) and the sky and the ground last (because nothing is beyond them, except the skybox of course, which you may not use). Clouds would fall in the middle range because they are between the ground and my aircraft. And, if the clouds are separated by enough distance, they will not conflict with each other.

Also, I noticed that you are using WebGL2. Are there advantages to that with what we are doing? I did not update the iFFT wave generator to that because it looked like that would take some effort and it might prevent some people from using it.

No need to respond right away - just curious.

No, the reconstruction of the 3D world in the shader is not a hassle. Threejs saves a depth texture in every frame; thanks to this, the post-processing shader has an easy time. It is done in the shader in the "computeWorldPosition()" function, and as you can see, there isn't much in it. The shader has the uv coordinates anyway; with the depth texture it also has the z-value, and then it only has to convert from clip space to world space, which is just four short lines of code.
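For anyone curious, the reconstruction boils down to something like this (a sketch with placeholder uniform names, not the exact code from my repo):

const computeWorldPositionChunk = /* glsl */ `
  uniform mat4 uProjectionInverse; // camera.projectionMatrixInverse
  uniform mat4 uCameraWorld;       // camera.matrixWorld

  vec3 computeWorldPosition(vec2 uv, float depth) {
    vec4 clip = vec4(vec3(uv, depth) * 2.0 - 1.0, 1.0); // [0,1] range -> clip space
    vec4 view = uProjectionInverse * clip;              // clip space -> view space
    view /= view.w;                                     // perspective divide
    return (uCameraWorld * view).xyz;                   // view space -> world space
  }
`;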
I need WebGL2 for the clouds, because only WebGL2 supports 3D textures. But WebGL2 is easy to integrate; I do that right at the beginning in the module where I configure threejs.
The clouds are not rendered with your models. The clouds are done in post-processing; that's what the composer does.
That's why only the uv coordinates are in the vertex shader of the cloud shader: the 3D world no longer exists in post-processing, so I have to reconstruct it in the fragment shader. For the 3D textures I also need the raw shader.

Here, with this you can experience a 3D texture that I use for the clouds. In the animation, the shader moves in and out of the 3D texture. I created the 3D texture with Worley noise.
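In case someone wants to play with the idea, building such a 3D texture in three.js looks roughly like this (WebGL2 only; I'm using random placeholder data here instead of the actual Worley noise, and depending on the three.js version the class is called Data3DTexture or DataTexture3D):

const size = 64;
const data = new Uint8Array(size * size * size);
for (let i = 0; i < data.length; i++) {
  data[i] = Math.floor(Math.random() * 256); // placeholder values, would be Worley noise
}

const texture = new THREE.Data3DTexture(data, size, size, size);
texture.format = THREE.RedFormat;
texture.minFilter = THREE.LinearFilter;
texture.magFilter = THREE.LinearFilter;
texture.wrapS = texture.wrapT = texture.wrapR = THREE.RepeatWrapping;
texture.unpackAlignment = 1;
texture.needsUpdate = true;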

From my point of view, WebGL2 only offers advantages, and it is easily integrated. I also use texture arrays, and those only work with WebGL2. Well, in the raw shaders I have to do everything myself because there are no built-ins, but that's not a hassle.

P.S.
I'm almost done with the repository :blush: I've done my best to keep it as small as possible without sacrificing performance. But it's still a very complex system, because quadtrees are not a simple thing.

That's why I built the entire ocean code as a module, so that you just have to integrate it into your main and connect its interfaces (camera, scene, clock, ...). There is still a lot more to do, but the seamless quadtree ocean surface is already complete as code. I'll clean up a bit and then upload.

Is something like this interesting for threejs as a module? I have no idea who to ask about it :man_shrugging:

I would love to program all day, but unfortunately my work prevents me from doing so :unamused:

It would make sense for me to write detailed documentation with pictures, because even though the code is very clean, it is, as mentioned, very demanding.


I am now trying to familiarize myself with your IFFT shader. I understand it this way: you use shaders for displacement, reflection, etc., render them in full screen, and transfer the resulting images as textures to a THREE.MeshPhysicalMaterial. Then you assign this material to a plane geometry that you tile four times. So you don't do the displacement in the shader or material of your geometry but via pre-rendered lookup textures, right?
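In three.js terms I picture it roughly like this (just a sketch to check my understanding; the texture names are placeholders for whatever your generator outputs, and scene is the existing THREE.Scene):

const oceanMaterial = new THREE.MeshPhysicalMaterial({
  color: 0x104a6e,
  roughness: 0.25,
  displacementMap: wavesDisplacementTexture, // hypothetical: output of the displacement pass
  displacementScale: 2.0,
  normalMap: wavesNormalTexture              // hypothetical: output of the normal pass
});

const tileSize = 512;
const tileGeometry = new THREE.PlaneGeometry(tileSize, tileSize, 256, 256);
tileGeometry.rotateX(-Math.PI / 2); // lie flat, displace upwards along the normal

for (let i = 0; i < 4; i++) {
  const tile = new THREE.Mesh(tileGeometry, oceanMaterial);
  tile.position.set((i % 2) * tileSize, 0, Math.floor(i / 2) * tileSize);
  scene.add(tile);
}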
One can see that you put a lot of love and energy into your project. I appreciate that.

I still have no idea how many frequencies are necessary to get a good water surface from it through inverse Fourier transformation. I also thought of FBm functions. Let's see...

I'll upload my repo tomorrow or the day after tomorrow at the latest. Went faster than I thought.

I also downloaded and checked out the NVIDIA WaveWorks that you recommended. They use the world coordinates as uv, just like me. Since each chunk of my ocean is basically an object with its own material, classic uv coordinates would be useless anyway, because they only exist within the respective chunk. The world coordinates, on the other hand, are homogeneous and isotropic across all chunks, and that's why I get clean, invisible transitions in the material across all chunks. I really like that now.
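The core of it is tiny. In the vertex shader of each chunk material it is essentially something like this (a simplified sketch for a THREE.ShaderMaterial; uTileSize is a placeholder for the texture repeat length):

const chunkVertexShader = /* glsl */ `
  uniform float uTileSize;
  varying vec2 vWorldUv;

  void main() {
    vec4 worldPos = modelMatrix * vec4(position, 1.0);
    vWorldUv = worldPos.xz / uTileSize; // world coordinates as uv, identical across all chunks
    gl_Position = projectionMatrix * viewMatrix * worldPos;
  }
`;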


The iFFT uses several WebGLRenderTargets to make computations. Like your clouds, these are rendered to a hidden scene, but using an OrthographicCamera. The first buffer is initialized with random values and remains unchanged; that buffer is used as the starting point for the other buffers. Somewhere in the middle is a "ping-pong" computation where you switch back and forth between buffers, which is standard procedure for iFFT computations. The program uses this to compute a displacement map and then a normal map. The original version used those maps to create the final textures. I removed that so that people could instead take advantage of three.js textures. I also removed a huge section that created the mirror effect, since I (correctly) assumed that could be handled using the three.js textures.
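In three.js terms, the ping-pong part boils down to something like this (a rough sketch rather than the actual code from the generator; pingPongMaterial, numPasses, screenQuad and orthoCamera are placeholder names for the hidden-scene objects described above):

const rtA = new THREE.WebGLRenderTarget(512, 512, { type: THREE.FloatType });
const rtB = new THREE.WebGLRenderTarget(512, 512, { type: THREE.FloatType });
let read = rtA;
let write = rtB;

for (let pass = 0; pass < numPasses; pass++) {
  pingPongMaterial.uniforms.uInput.value = read.texture; // read the previous result
  screenQuad.material = pingPongMaterial;
  renderer.setRenderTarget(write);                       // write into the other buffer
  renderer.render(screenQuad, orthoCamera);
  [read, write] = [write, read];                         // swap buffers for the next pass
}
renderer.setRenderTarget(null);
// read.texture now holds the final result of the ping-pong passes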

Please feel free to suggest improvements. This process was a learning experience for me since I was unfamiliar with the process of using RenderTargets.

I think that the method used is somewhat different than the "standard method", which is described in:
Fynn-Jorin Flügge, "Realtime GPGPU FFT Ocean Water Simulation", Hamburg University of Technology (2017). Here is the link:
https://tore.tuhh.de/bitstream/11420/1439/1/GPGPU_FFT_Ocean_Simulations.pdf
(Clicking on this link will immediately load the pdf document. You might even be able to find the original version in German.)

The WaveWorks example might provide some helpful guidance regarding creating cascading maps and foam. Here is a slide show discussion of an earlier version of the NVidia wave generator that uses a Perlin map to avoid tiling.

Phil Crowther and I are working together on IFFT waves.
The project has made good progress.

The detailed coloring of the surface is still missing, but that is not important for me at the moment. First of all, it's about good quality waves and performance. The quadtree surface does a very good job and the performance is very good in terms of geometry.

For better waves, however, we need many renderings in each interval. At the moment there are 5 renderings in each update. That also works quite well, but if we now increase the number to 15, that seems like a lot to me.

I multithreaded the quadtree so that the main thread doesn't waste any valuable resources.
I immediately thought of threading the renderings as well, but then I remembered that workers have no connection to the DOM.

To what extent is threejs dependent on DOM elements?
I really like the idea of doing computationally intensive texture updates on another core.

There is OffscreenCanvas, and it works in workers. Does anyone have experience using threejs with OffscreenCanvas, if that's even possible?

That would make a lot more possible.


Three works just fine inside a worker. The main thing you need to concern yourself with is what kind of data you are going to transfer between the main process and the worker.

The most common bottleneck when working with worker threads is the data transfer between them. A worker is basically a separate, isolated V8 context (if you're on a Chromium-based browser, that is, but the concept stays the same), meaning that you can't simply "share" memory per se, except via SharedArrayBuffers, but I'm not 100% sure about the browser compatibility on that one.

class IFFTBuilderThreadedWorker {
  constructor() {
    // the renderer draws into an OffscreenCanvas, so no DOM element is needed in the worker
    const offscreen = new OffscreenCanvas(512, 512);
    this.threejs_ = new THREE.WebGLRenderer({
      canvas: offscreen,
      antialias: true
    });
    this.textureCamera = new THREE.Camera();
    this.screenQuad = new THREE.Mesh(new THREE.PlaneGeometry(2, 2));

    this.Init();
  }
...
...
...

I set up a renderer in a worker for the first time. So far I have only used threejs vector objects in workers. I almost always use SharedArrayBuffers to pass data from the worker to the main thread.
What I can't tell yet is whether packing the array data into a DataTexture will stress the main thread. It would be 6 textures of 512 x 512 pixels each.
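What I have in mind on the main-thread side is roughly this (a sketch; sharedBuffer, oceanMaterial and the uniform name are placeholders, with oceanMaterial assumed to be a ShaderMaterial): copy the bytes from the SharedArrayBuffer into a typed array and hand them to a DataTexture, so the copy plus the GPU upload are the only main-thread costs.

// sharedBuffer: the SharedArrayBuffer filled by the worker (512 * 512 * 4 bytes, RGBA)
const pixels = new Uint8Array(512 * 512 * 4);
pixels.set(new Uint8Array(sharedBuffer));

const texture = new THREE.DataTexture(pixels, 512, 512, THREE.RGBAFormat, THREE.UnsignedByteType);
texture.needsUpdate = true; // uploaded to the GPU on the next render

oceanMaterial.uniforms.uDisplacement.value = texture; // hypothetical uniform on the ocean material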

Meanwhile, my worker renders. I haven't transferred any data yet. What surprises me is that the main thread feels it: there are moments when the main thread is briefly blocked, and I still don't understand why that happens.

At least I now have the certainty that I can run a renderer in a worker, independent of the main thread. Writing the image data to a Float32Array will also work; I've done that before.

The only thing I don't understand yet, as mentioned, is the sporadic blocking of the main thread, even though no data is returned.
The CPU monitor is in the green zone, but the GPU is heavily loaded. There is no graphics card in my tablet, only an onboard GPU. However, it is not clear to me whether, and if so how much, this depends on free CPU resources. Am I degrading the onboard GPU by using threading resources with workers?

The bottleneck seems to be the rendering. As long as I only do arithmetic operations in the worker, it doesn't affect the main thread. But if I render images in the worker, it has an influence on the main thread.
I suspect this is because rendering in the worker uses the GPU as well, and when the main thread and the worker both render at the same time, the GPU is overloaded. I have 8 CPU cores in my tablet, and that's why I like to use workers. A thread can only use one CPU core, but any thread can use the entire GPU. So even though the worker relieves the main thread by taking over computationally intensive work, I can still slow down the main thread if I use the GPU from the worker. Can anyone confirm my hypothesis?

Your suspicions are correct. You use worker threads to divide CPU-bound tasks over multiple threads, mainly because JavaScript (read: event-loop) is single-threaded. If your GPU is already a bottleneck in the main thread, offloading work to worker-threads to run things on a GPU as well is kind of pointless.

Thanks for the information. It is always important to me to understand the causes as precisely as possible.
However, I would like to free up the main thread as much as possible. I would then deactivate the current IFFT module in the main thread so that only the worker uses the GPU for the IFFT generation. Nothing is saved on the GPU side, but the CPU-side work that manages it all is saved.

class IFFTBuilderThreadedWorker {
  constructor() {
    // keep a reference to the OffscreenCanvas so its pixels can be read back later
    this.offscreen = new OffscreenCanvas(512, 512);

    this.threejs_ = new THREE.WebGLRenderer({
      canvas: this.offscreen,
      antialias: true
    });
    this.textureCamera = new THREE.Camera();
    this.screenQuad = new THREE.Mesh(new THREE.PlaneGeometry(2, 2));

    this.Init();
  }

So far I have always rendered into render targets and then taken the texture from them.
I imagine being able to do without the render target in the worker. After rendering, I have the image in the OffscreenCanvas. Then I would have to get it into a data array this way:

InitialSpectrum() {
	// 1. Initial spectrum: render the full-screen quad with the spectrum material
	if (this.changed) {
		this.screenQuad.material = this.materialInitialSpectrum;
		//this.threejs_.setRenderTarget(this.initialSpectrumFramebuffer);
		this.threejs_.render(this.screenQuad, this.textureCamera);
		this.InitialSpectrumData = this.GetImageDataArray(512, 512);
		//this.threejs_.setRenderTarget(null);
	}
}
GetImageDataArray(width, height) {
	// Note: a '2d' context can only be obtained if the canvas is not already bound
	// to the WebGL renderer; otherwise the pixels have to be read back with
	// gl.readPixels() (or renderer.readRenderTargetPixels() on a render target).
	let context = this.offscreen.getContext('2d');
	let imageData = context.getImageData(0, 0, width, height);

	// imageData.data is already a byte array (RGBA, 4 bytes per pixel),
	// so the SharedArrayBuffer needs exactly imageData.data.length bytes
	const data = new Uint8Array(new SharedArrayBuffer(imageData.data.length));
	data.set(imageData.data, 0);

	context = imageData = null;
	return { data, width, height };
}

On my PC with its Radeon 7900 XTX I still have gigantic reserves. What is a lot for my tablet is a hardly noticeable effort for my PC.

Next steps: multiple IFFT layers, a depth texture to know the exact water depth everywhere (or the distance to whatever is under the surface), raymarching, ...

The most elegant way would be imageStore in the shader, but that function does not yet exist in WebGL2. It would avoid the whole back and forth of renderings between CPU and GPU.
Does anyone have any information about whether such functionality will be available in WebGL2 in the near future?

From Mugen87 (April): "...However, that does not mean the material system takes away the ability to write custom shaders. If you want to develop customized materials with plain WGSL (for whatever reasons), that will be possible."

Is that already possible, or is there only the node-based variant so far?

With WebGL2 I am reaching performance limits due to the need for render targets. That's why I want to use WebGPU now, because then I don't have to render out 5 textures one after the other in order to hand them over to the next shader.
I also need the vertex shader, because the quadtree chunks are seamlessly connected to each other in the vertex shader. I imagine that something like that is too special for a node system.
The skirts turned out to be superfluous as the chunks always fit perfectly. Surface and geometry appear as if they were one.
I imagine morphing the chunks into each other but at the moment I'm busy getting used to WebGPU. Thanks to multithreading, the quadtree hardly uses any resources.
The many CPU <=> GPU back and forth with the many render targets for the wave generator is the weak point. WebGPU is the solution.