I am very happy about the release of r156, and I immediately looked at the compute-texture example. My next step is to bring the IFFT wave generator from WebGL2 to WebGPU and replace the renderTargets with compute shaders to generate significantly better waves.
With WebGPU, a very realistic water surface will be possible thanks to the compute shaders. The geometry is already very satisfying: it is very fast and completely stepless.
This also makes it possible to create huge landscapes that are always in high resolution wherever you are. The LODs are concentric rings around the target (the camera) that morph into each other.
I'm going to replace the old landscape generator that I used to create my Mars. Here is a video link to my Mars running with my old LOD generator. It is also very good, but it uses stitching instead of morphing LOD transitions, so there are likewise no gaps between the LODs.
But my new ocean LOD geometry generator is more efficient.
Phil Crowther inspired me to start this beautiful ocean project.
I have to admit I don't know what the CD in CDLOD means.
It is indeed a chunk-based quadtree LOD; the CD stands for continuous distance-dependent. (On Mars there are several quadtrees, and everything is at the original scale.) Geometry chunks are then created; you can see this in the ocean because new square areas in different colors keep appearing. So there are actually many individual chunks that always fit together perfectly: you don't see any gaps between them or any transitions anywhere on Mars, and the surface appears to be made of a single geometry. But I used stitching on Mars because I hadn't thought about morphing: where a higher-resolution chunk borders a coarser one, it interpolates its intermediate edge points so that no gaps can open up. With the ocean I made a few things more advanced to avoid the complex stitching in the vertex shader, as sketched below.
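Here is a minimal sketch of the stitching idea (the function and names are hypothetical, not my actual worker code): along an edge that borders a coarser chunk, every odd vertex is snapped to the average of its two even neighbors, so the fine edge collapses exactly onto the coarse edge.

```js
// Hypothetical sketch: stitch one border of a fine chunk to a coarser neighbor.
// heights: Float32Array of vertex heights; edgeIndices: ordered indices of the
// vertices along the shared border (even indices coincide with coarse vertices).
function stitchEdge(heights, edgeIndices) {
  for (let i = 1; i < edgeIndices.length - 1; i += 2) {
    const prev = heights[edgeIndices[i - 1]];
    const next = heights[edgeIndices[i + 1]];
    // Interpolate the intermediate point so it lies on the coarse edge.
    heights[edgeIndices[i]] = 0.5 * (prev + next);
  }
}
```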
On Mars I already do the stitching in the workers that generate the new chunks, but on the ocean I have a moving surface and would have to constantly re-stitch. That also works very well, but morphing is more advanced.
For performance reasons I use 7 workers, so that my main thread remains undisturbed while the 7 remaining CPU cores fully take care of generating new chunks. That way I use all 8 CPU cores, and it works so well that everything even runs on my tablet.
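A minimal sketch of such a worker pool (the worker file, the message shape, and addChunkToScene are hypothetical): the main thread only posts jobs and hands finished chunks to the scene, while the pool size keeps one core free for the main thread.

```js
// Hypothetical worker pool sketch: keep one core for the main thread,
// use the rest for chunk generation.
const workerCount = Math.max(1, (navigator.hardwareConcurrency || 8) - 1);
const idle = [];
const queue = [];

for (let i = 0; i < workerCount; i++) {
  const w = new Worker('./chunkWorker.js'); // hypothetical worker file
  w.onmessage = (e) => {
    addChunkToScene(e.data); // hypothetical: build geometry from transferred buffers
    idle.push(w);
    if (queue.length > 0) dispatch(queue.shift());
  };
  idle.push(w);
}

function dispatch(job) {
  const w = idle.pop();
  if (w) w.postMessage(job); // e.g. { lod, originX, originZ } (assumed shape)
  else queue.push(job);
}
```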
With the ocean, I morph the vertices in the vertex shader toward or away from each other depending on the camera distance, so that a smooth transition from LOD to LOD is created. You can no longer recognize any LOD changes on the surface; it is always a perfect surface with smooth LOD transitions.
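The morph idea, shown here as a CPU-side sketch for clarity (in the ocean it runs in the vertex shader, and the parameter names are mine): inside the transition band of an LOD ring, each vertex is blended toward the position it would occupy on the next coarser grid, so both LODs match exactly at the boundary.

```js
// CPU-side sketch of CDLOD vertex morphing (hypothetical parameters).
// Between morphStart and morphEnd, blend a vertex toward its position on
// the next coarser grid so that adjacent LODs meet without gaps or pops.
function morphVertex(x, z, camDist, morphStart, morphEnd, gridStep) {
  const t = Math.min(Math.max((camDist - morphStart) / (morphEnd - morphStart), 0), 1);
  const coarseStep = 2 * gridStep; // vertex spacing of the next coarser LOD
  const snappedX = Math.round(x / coarseStep) * coarseStep;
  const snappedZ = Math.round(z / coarseStep) * coarseStep;
  return [x + (snappedX - x) * t, z + (snappedZ - z) * t];
}
```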
Now I will take care of improving the IFFT generator, because in WebGL2 I had to use 5 renderTargets in each interval. Thanks to r156 I can now do the same thing in WebGPU with compute shaders, which is more efficient.
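For reference, this CPU sketch shows what the GPU passes compute: a radix-2 inverse FFT, where each butterfly stage corresponds to one render-target ping-pong in WebGL2 or one compute dispatch in WebGPU. It is not the shader code, just the underlying algorithm.

```js
// In-place radix-2 inverse FFT; re/im are Float32Arrays, length n = power of two.
// On the GPU, each `len` iteration of the outer butterfly loop is one pass.
function ifft(re, im) {
  const n = re.length;
  // Bit-reversal permutation.
  for (let i = 1, j = 0; i < n; i++) {
    let bit = n >> 1;
    for (; j & bit; bit >>= 1) j ^= bit;
    j ^= bit;
    if (i < j) {
      [re[i], re[j]] = [re[j], re[i]];
      [im[i], im[j]] = [im[j], im[i]];
    }
  }
  // log2(n) butterfly stages.
  for (let len = 2; len <= n; len <<= 1) {
    const ang = (2 * Math.PI) / len; // positive sign: inverse transform
    for (let i = 0; i < n; i += len) {
      for (let k = 0; k < len / 2; k++) {
        const wr = Math.cos(ang * k), wi = Math.sin(ang * k);
        const a = i + k, b = a + len / 2;
        const tr = re[b] * wr - im[b] * wi;
        const ti = re[b] * wi + im[b] * wr;
        re[b] = re[a] - tr; im[b] = im[a] - ti;
        re[a] += tr; im[a] += ti;
      }
    }
  }
  for (let i = 0; i < n; i++) { re[i] /= n; im[i] /= n; } // 1/n normalization
}
```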
This will be a beautiful open source ocean.
For real landscapes like Mars I don't need WebGPU, but I do need other very complex components that I've worked on for a long time. With better height maps, Mars would look much better. For the Earth I have 14,520 maps from NASA with 1 arc-second resolution (30 m).
Then I have it, because that's exactly the paper that inspired me to do it.
Now I am improving the IFFT wave generator, thanks to the compute texture in r156. Then comes raytracing in conjunction with a depth texture, to take objects close to the surface or the seabed into account in the scattering.
Thank you, my planetary generator has now become very extensive. When I started a little over two years ago, I faced numerous hurdles. I use 512 high-resolution individual 16-bit height maps. Unfortunately, there is nothing better for Mars than the material from the MOLA mission. However, there are numerous individual, very high-resolution maps of smaller areas (HiRISE mission); I'll include those too. I only load the higher-resolution maps when approaching, and only in the areas around the camera. On my tablet, RAM is the bottleneck, so I had to be very careful with the maps.
Does GitHub have something like a limit when it comes to data size? All the maps I currently use need around 10 GB, and in addition I have another 150 GB of very high-resolution maps of the Earth.
You definitely will not be able to push the code with those large images. What I've seen other people do is push just the code to GitHub and, in the readme, link to the images hosted on something like AWS or a site designed for storing images, so users know where to download them.
On a side note, doesn't your application crash when using such high-resolution images? You must have a pretty good computer, but you have to think about the stress this will put on a user's machine.
I don’t load all the maps. In total that would be 46k for Mars, 96k for the Moon or 178k for Earth.
As long as I'm far away, I use lower-resolution height maps. When I get closer, I load the higher-resolution maps in the area around the camera. It even runs quite well on my tablet (a Samsung Galaxy Tab S8+), and that only has 3.5 GB of RAM available. The map management was one of the many very big challenges. I would also like to have 178k for Mars, which would be about 16 times the resolution; if there were a 1M map of Mars, I would use it. I developed extra tools in Python, because I had previously spent weeks just cutting and naming maps; now I can quickly convert and create hundreds of maps with my tools.
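A minimal sketch of the streaming idea (loadHeightmap, the tile keys, and the budget are hypothetical): high-resolution tiles are fetched only in a small radius around the camera, and the least recently used tiles are evicted to respect the RAM budget.

```js
// Hypothetical tile-streaming sketch: Map preserves insertion order, so it
// doubles as an LRU queue. MAX_TILES is tuned to the device's RAM budget.
const resident = new Map(); // tileKey -> heightmap data
const MAX_TILES = 64;       // assumed budget for a ~3.5 GB device

async function updateTiles(camTileX, camTileY, radius = 1) {
  for (let dy = -radius; dy <= radius; dy++) {
    for (let dx = -radius; dx <= radius; dx++) {
      const key = `${camTileX + dx}_${camTileY + dy}`;
      if (resident.has(key)) {
        const tile = resident.get(key);
        resident.delete(key);
        resident.set(key, tile); // refresh LRU position
      } else {
        resident.set(key, await loadHeightmap(key)); // hypothetical loader
        if (resident.size > MAX_TILES) {
          const oldest = resident.keys().next().value;
          resident.delete(oldest); // evict least recently used tile
        }
      }
    }
  }
}
```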
However, I can still improve the map management here and there; there is still potential.
The only time I had crashes was when the RAM on my tablet was full; otherwise I didn't have to deal with crashes at all. The software runs very stably.
Once it ran without any problems and the RAM load on my tablet was around 2 GB, I admittedly became a little comfortable and dedicated myself to the ocean.
An Earth without a real ocean just seems off. Clouds are also important. I can create planet-wide clouds on my desktop without any problems, and that looks very good, but for my tablet clouds are very heavy stuff. I use raytracing for the atmosphere and the clouds; I have a simple cloud example on GitHub if you're interested.
Regarding GitHub: I'll finally have to continue with my homepage, then I could make the thousands of maps available for download there.
But first I want to make progress on the ocean. Thanks to r156, I'm now in a development frenzy.
Do you guys use Discord? I am building my own LOD terrain renderer at https://nickvanurk.com/cdlod/ and would love to connect with people who are also into this sort of thing. Mine is Jycerian#9355.
I once had a Discord account; I would have to reactivate it. You also did a CDLOD! I plan to start a project on GitHub where I will upload the ocean. Since I do everything with WebGPU, I can only continue effectively with r157. I recently downloaded r157, integrated it into my project, and also integrated the new StorageTexture class; I think it's great. WebGL2 is simply not well suited for the ocean with its moving surface. I don't yet know how I'll do this with my planets, as the project folder is already 10 GB in size due to the many elevation maps. I will have to do that via my own homepage.
I don't like unfinished things per se, but I could upload my ocean project to GitHub as soon as I've implemented the IFFT mechanism in WebGPU. It won't look like an ocean at the beginning, but it will contain the most important elements, fully functional, such as my multithreaded CDLOD.
Thanks @sunag, with r158 there has been great progress in the development of my WebGPU ocean. Phil has also once again inspired me to make a significant improvement. Unfortunately I can't host this on my site, because for efficiency reasons I use several SharedArrayBuffers, which require cross-origin isolation. What is easy on a local server is unfortunately not possible on a standard hosted server, so for now there are just pictures. The wave behavior is very real; I can stare at the screen for minutes and not get bored. And at the moment it's just a monochrome, opaque, non-reflective surface without raytracing and without foam, just displacement and normal textures. This highly efficient ocean surface is only possible with the three.js WebGPURenderer: dozens of compute shaders are executed in every interval, and it runs on my laptop with a simple onboard graphics chip.
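For anyone trying to host something like this themselves, these are the two response headers that cross-origin isolation (and thus SharedArrayBuffer) requires; the tiny Node server below is only an illustrative sketch, and most standard shared hosts won't let you set these headers.

```js
// Minimal static file server that enables cross-origin isolation.
// (A real server would also set Content-Type per file.)
import http from 'node:http';
import { readFile } from 'node:fs/promises';

http.createServer(async (req, res) => {
  res.setHeader('Cross-Origin-Opener-Policy', 'same-origin');
  res.setHeader('Cross-Origin-Embedder-Policy', 'require-corp');
  try {
    const path = req.url === '/' ? '/index.html' : req.url;
    res.end(await readFile('.' + path));
  } catch {
    res.writeHead(404).end();
  }
}).listen(8080);
```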
If you look closely you will notice gaps in the surface. This is because I had to deactivate LOD morphing; otherwise there would be ugly texture errors. The reason is that I access the position attribute in the colorNode, but I need my morphed positions from the positionNode in the colorNode, and I don't yet know how to pass that through the node system.
In plain WGSL this would work with structs. Alternatively, I could use stitching instead of LOD morphing, but morphing is more elegant because it is stepless.
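One possible route, assuming the node system's varying() helper (from 'three/nodes'); this is a hedged sketch, not a confirmed solution, and computeMorphedPosition stands in for my actual morph logic:

```js
// Sketch: compute the morphed position once, wrap it in a varying, and read
// the interpolated value in the colorNode (the analogue of a WGSL struct member).
import { MeshBasicNodeMaterial, varying, positionLocal, vec3 } from 'three/nodes';

const morphedPosition = computeMorphedPosition(positionLocal); // hypothetical TSL function
const vPosition = varying(morphedPosition); // carried from vertex to fragment stage

const material = new MeshBasicNodeMaterial();
material.positionNode = morphedPosition;                   // displace the vertices
material.colorNode = vec3(0.0, 0.3, 0.5).mul(vPosition.y); // shade from the morphed height
```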
I've now brought myself to integrate a GUI, because otherwise everyone except me would have to search for the parameters in the code.
For me, the ocean is just one part of a much larger project that I have in mind, but since the ocean with its moving surface is something that is only possible with the powerful WebGPURenderer and the new node system in three.js, it has a special appeal.
Here I would like to express my special thanks to @phil_crowther, who provided me with a lot of information material, and @Sunag, who is working on the continuous expansion and improvement of the new node system.
I will upload v1 to GitHub during December. This will still be a long way from the final result that I imagine, because the three.js node system will become much more extensive, and with it the possibilities to significantly improve the ocean. But it is a solid foundation from which everyone can learn a lot about the new node system and the powerful possibilities of the WebGPURenderer.
There is a lot more to adjust, but you can already influence the waves very well with this.
CDLOD resolution and texture resolution are still missing from the GUI, like many other things, but I'm happy to point out where to adjust them if anyone wants to put their computer to the test.
You can control the camera with the arrow keys and WASD.
You may be able to host your example directly from your GitHub repo…
You use the gear icon, enable GitHub Pages, then select the branch/directory containing the app, and it will give you a URL that hosts directly from the repo. Then you can put that link in your readme… Makes it really easy for people to see it without having to duplicate your build environment.
I did something similar here: https://github.com/manthrax/logiclab (a simulation of the Logic Lab device).
That sounds good. I tried it, but unfortunately I only get a black page, and many things in the console say "Failed to load resource: …".
A GitHub live demo would be great.
I downloaded the repo and ran the folder in VS Code; that works, so there must be something wrong with my configuration on GitHub.
I had been avoiding it for a while, but with manthrax's support it now works, and the ocean can be experienced live on GitHub.
I’ll reduce the IFFT wave compute resolution because it’s too computationally intensive at the moment. Visually it wouldn’t look much worse, but the necessary computing power would shrink significantly.
There's still so much to do, but I'm also busy with two other large projects, and these require three extensions in three.js that will come over time. Among them are the depth textures, which are very important for the ocean: to authentically display objects under water, to see the bottom in shallow water, for light scattering, and for the underwater effect when you are below the surface.
Likewise the sky, as it has a strong influence on the ocean color. But I'm now very happy that it finally works with the live server.