Determining whether a point on a three.js globe is water or land from its coordinates

How do I know whether my location is water or land based on x/y/z coordinates, rotation, or some other solution?

My code:

```js
import * as THREE from 'https://unpkg.com/three@0.159.0/build/three.module.js';
// Load OrbitControls from the same three.js version as the core library;
// the original mixed 0.159.0 with 0.121.1, which can break in subtle ways.
// (With r159 addons you may also need an import map for the bare 'three' specifier.)
import { OrbitControls } from 'https://unpkg.com/three@0.159.0/examples/jsm/controls/OrbitControls.js';

const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

const scene = new THREE.Scene();

const camera = new THREE.PerspectiveCamera(75, window.innerWidth / window.innerHeight, 0.1, 100);
camera.position.set(3, 0, 0);
camera.lookAt(0, 0, 0);

const textureLoader = new THREE.TextureLoader();
const texture = textureLoader.load(src); // src: URL of the equirectangular globe texture

const orbit = new OrbitControls(camera, renderer.domElement);

const sphereGeometry = new THREE.SphereGeometry(1, 100, 100);
// vertexColors and opacity: 0 removed: no vertex colors are supplied, and a
// zero opacity (together with transparent: true) would hide the globe entirely.
const sphereMaterial = new THREE.MeshBasicMaterial({ map: texture });
const sphere = new THREE.Mesh(sphereGeometry, sphereMaterial);
scene.add(sphere);

// Small red cube, intended as a position marker on the globe.
const geometry = new THREE.BoxGeometry(0.01, 0.01, 0.01);
const material = new THREE.MeshBasicMaterial({ color: '#E00012', wireframe: true });
const cube = new THREE.Mesh(geometry, material);
scene.add(cube); // the cube was created but never added to the scene

function animate() {
  // OrbitControls already drives the camera, so copying its position and
  // rotation back onto the camera every frame is redundant and was removed.
  orbit.update();
  renderer.render(scene, camera);
  requestAnimationFrame(animate);
}

animate();
```

You could load an image like: https://upload.wikimedia.org/wikipedia/commons/thumb/2/2b/World_elevation_map.png/2560px-World_elevation_map.png

Draw it to a canvas…
Use getImageData to get the pixel data from the canvas…

Take your point, convert it to canvas x/y coordinates…
If you're using the raycaster to get the mouse cursor's hit point on the sphere, the raycast hit has a .uv field, which is the hit coordinate in 0-to-1 space. You convert it to canvas space by multiplying by the canvas width/height; note the canvas y axis runs top-down, so you may need to flip v, i.e. canvasY = (1 - uv.y) * height.

Look up the pixel from the imageData array, take the brightness value of the pixel, and decide based on that whether it's above or below sea level. This won't account for rivers though.
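The steps above could be sketched like this. It's only a sketch: the `pointToCanvasXY` math assumes three.js's default `SphereGeometry` UV layout with an equirectangular texture, and the brightness threshold of 128 is a guess you would tune against the actual elevation map:

```js
// Convert a point on the unit sphere (in the sphere's local space) to canvas
// pixel coordinates, assuming the default SphereGeometry equirectangular UVs.
function pointToCanvasXY(x, y, z, width, height) {
  // Longitude: default SphereGeometry places u = 0 at (-1, 0, 0).
  let u = Math.atan2(z, -x) / (2 * Math.PI);
  if (u < 0) u += 1; // wrap into [0, 1)
  // Latitude: acos(y) runs 0 (north pole) .. PI (south pole), which
  // matches the canvas y axis running top-down.
  const v = Math.acos(Math.min(1, Math.max(-1, y))) / Math.PI;
  return {
    cx: Math.min(width - 1, Math.floor(u * width)),
    cy: Math.min(height - 1, Math.floor(v * height)),
  };
}

// Draw the elevation image to an offscreen canvas once, then sample it.
function makeLandTester(image, threshold = 128) { // threshold is a guess
  const canvas = document.createElement('canvas');
  canvas.width = image.width;
  canvas.height = image.height;
  const ctx = canvas.getContext('2d');
  ctx.drawImage(image, 0, 0);
  const data = ctx.getImageData(0, 0, canvas.width, canvas.height).data;

  return function isLand(x, y, z) {
    const { cx, cy } = pointToCanvasXY(x, y, z, canvas.width, canvas.height);
    const brightness = data[(cy * canvas.width + cx) * 4]; // red channel
    return brightness > threshold;                         // bright = land
  };
}
```

If you already have a raycaster hit, you can skip `pointToCanvasXY` and use its `.uv` directly: `cx = uv.x * width`, `cy = (1 - uv.y) * height`.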


Just to add on top of what @manthrax already said, maybe you can try a texture that accounts for distinguishing between above and below sea level, like NASA bathymetry, or a mix of that and rivers.
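With a colored map like that, one option is to classify by color rather than brightness, e.g. treat a pixel as water when blue clearly dominates red and green. This is purely a sketch; the margin value is my own guess and would need tuning against the actual texture:

```js
// Classify one RGB pixel as water by color rather than brightness.
// The margin of 20 is an assumption; tune it for the texture you use.
function isWaterPixel(r, g, b, margin = 20) {
  return b > r + margin && b > g + margin;
}
```

You would plug this into the same getImageData pixel lookup described earlier, reading all three channels instead of just one.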


I asked ChatGPT first and it suggested this solution. I applied it but it didn't work correctly; I probably applied it wrong. I will try again. Thank you for your help, my friend.


yeah it can be tricky…

One way to try to figure it out is to drop a bunch of cubes on parts that your code “thinks” are water… then you’ll see the outline of what your algorithm thinks is the water… and compare with the actual visual texture on the sphere…

It might be something as simple as it being flipped on Y or something.
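A sketch of that debugging idea, assuming you already have a `scene`, a unit-radius globe, and some `isWater(x, y, z)` classifier; the sample count, cube size, and the helper names here are my own choices:

```js
// Uniform random directions via rejection sampling of the unit ball.
function sampleSpherePoints(count) {
  const points = [];
  for (let i = 0; i < count; i++) {
    let x, y, z, len;
    do {
      x = Math.random() * 2 - 1;
      y = Math.random() * 2 - 1;
      z = Math.random() * 2 - 1;
      len = Math.hypot(x, y, z);
    } while (len < 1e-6 || len > 1); // reject points outside the unit ball
    points.push({ x: x / len, y: y / len, z: z / len });
  }
  return points;
}

// Scatter small cubes over the points the classifier thinks are water,
// so you can compare the marker pattern against the globe texture.
function addWaterMarkers(scene, isWater, count = 2000) {
  const geometry = new THREE.BoxGeometry(0.01, 0.01, 0.01);
  const material = new THREE.MeshBasicMaterial({ color: '#E00012' });
  for (const p of sampleSpherePoints(count)) {
    if (!isWater(p.x, p.y, p.z)) continue;
    const cube = new THREE.Mesh(geometry, material);
    cube.position.set(p.x, p.y, p.z); // on the unit-radius globe surface
    scene.add(cube);
  }
}
```

If the markers trace the continents instead of the oceans, or appear mirrored, that points to an inverted threshold or a flipped texture axis.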