Hi all,
I’m working on a project where I’d like to colour the vertices of a mesh based on their Vertical Sky Component: areas where large parts of the sky are visible would get one colour, and areas that are more obstructed by other elements would get another. See this picture for reference.
The naive approach would be to use a raycaster for this, but it does not scale well: with the large number of vertices in my scene, I would have to cast rays in every direction from every vertex.
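For context, here is roughly what I mean by the raycasting version (just a sketch; `directions` would be some set of sampled sky directions, and in practice the ray origin would need to be nudged off the surface to avoid self-intersections):

```ts
import * as THREE from 'three'

// Naive version: for every vertex, cast a ray toward each sampled sky
// direction and count how many escape unobstructed. The cost grows with
// vertices × directions × scene complexity, which is why it doesn't scale.
const skyVisibility = (
  vertex: THREE.Vector3,
  directions: THREE.Vector3[],
  occluders: THREE.Object3D[]
): number => {
  const raycaster = new THREE.Raycaster()
  let unobstructed = 0
  for (const dir of directions) {
    raycaster.set(vertex, dir)
    if (raycaster.intersectObjects(occluders, true).length === 0) {
      unobstructed++
    }
  }
  return unobstructed / directions.length // fraction of visible sky in [0, 1]
}
```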
What I want to do instead is the following:
- Set up n OrthographicCameras in a hemisphere over my scene (placement sketched after this list).
- Render a depth map for each of the cameras.
- Project each vertex of each mesh onto each depth map and check whether the point is in light or in shadow.
- Count the number of cameras the point is visible to and colour it accordingly.
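For the first step, something like this is what I have in mind (a sketch only; it uses a Fibonacci lattice so the cameras cover the whole hemisphere instead of the single ring my current code produces, and it assumes Z-up like my scene):

```ts
import * as THREE from 'three'

// Hypothetical placement of n orthographic cameras on a Z-up hemisphere,
// using a Fibonacci lattice so the sky directions are sampled evenly.
const makeHemisphereCameras = (n: number, radius: number, frustumSize: number): THREE.OrthographicCamera[] => {
  const cameras: THREE.OrthographicCamera[] = []
  const golden = Math.PI * (3 - Math.sqrt(5)) // golden angle
  for (let i = 0; i < n; i++) {
    const z = (i + 0.5) / n        // elevation in (0, 1), upper hemisphere only
    const r = Math.sqrt(1 - z * z) // radius of the ring at that elevation
    const theta = golden * i
    const camera = new THREE.OrthographicCamera(
      -frustumSize / 2, frustumSize / 2, frustumSize / 2, -frustumSize / 2, 1, 1000
    )
    camera.position.set(radius * r * Math.cos(theta), radius * r * Math.sin(theta), radius * z)
    camera.up.set(0, 0, 1)
    camera.lookAt(0, 0, 0)
    camera.updateMatrixWorld()
    cameras.push(camera)
  }
  return cameras
}
```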
I have run into several problems with this approach, though.
- First of all, I’m not sure whether I should use an OrthographicCamera or a DirectionalLight for this. My idea was to use the OrthographicCamera, since I don’t want to add any new lights to my current scene.
- I also cannot figure out how to determine whether a vertex is seen by a camera or not. I understand that I first need to project it from the camera’s point of view and then compare the depth value of the corresponding pixel to the actual distance to the vertex: if the two are the same, the point is not obstructed by anything. I just can’t figure out how to do it; my best guess so far is sketched below.
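This is my best guess for the depth comparison (untested, and I’m not at all sure the depth unpacking is right): render the scene with a `MeshDepthMaterial` override so the depth ends up packed into the RGBA channels of an ordinary render target, read back the one pixel the projected vertex lands on, and compare:

```ts
import * as THREE from 'three'

// Rendering with this override material packs each fragment's depth into the
// RGBA bytes of the render target, so readRenderTargetPixels can get at it.
const depthMaterial = new THREE.MeshDepthMaterial({ depthPacking: THREE.RGBADepthPacking })

const isVertexVisible = (
  renderer: THREE.WebGLRenderer,
  renderTarget: THREE.WebGLRenderTarget,
  scene: THREE.Scene,
  camera: THREE.OrthographicCamera,
  worldVertex: THREE.Vector3,
  size: number
): boolean => {
  // Render the depth map for this camera. Clearing to white means empty
  // pixels unpack as "far plane" instead of "right in front of the camera".
  renderer.setClearColor(0xffffff, 1)
  scene.overrideMaterial = depthMaterial
  renderer.setRenderTarget(renderTarget)
  renderer.render(scene, camera)
  renderer.setRenderTarget(null)
  scene.overrideMaterial = null

  // Project the vertex into NDC: x, y and z all end up in [-1, 1]
  const p = worldVertex.clone().project(camera)
  if (Math.abs(p.x) > 1 || Math.abs(p.y) > 1) return false // outside the frustum

  // Read the single pixel the vertex projects onto
  const x = Math.min(size - 1, Math.floor((p.x * 0.5 + 0.5) * size))
  const y = Math.min(size - 1, Math.floor((p.y * 0.5 + 0.5) * size))
  const b = new Uint8Array(4)
  renderer.readRenderTargetPixels(renderTarget, x, y, 1, 1, b)

  // Undo three.js's RGBADepthPacking (alpha carries the most significant bits)
  const sceneDepth = (b[0] / 256 / 256 / 256 + b[1] / 256 / 256 + b[2] / 256 + b[3]) / 256
  // Remap the vertex's NDC z from [-1, 1] to the depth buffer's [0, 1]
  const vertexDepth = p.z * 0.5 + 0.5

  // Visible if nothing in the depth map is closer (small bias against acne)
  return vertexDepth <= sceneDepth + 1e-3
}
```

If something like this is the right idea, I assume the depth map should only be rendered and read back once per camera rather than once per vertex, but I wanted to get the comparison itself right first.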
Here is the main method of the code as it looks right now, for reference:
```ts
const placeDirectionalLights = (activeMeshGroup: THREE.Group, scene: THREE.Scene, elevation: number | undefined) => {
  const cameras: THREE.OrthographicCamera[] = []
  const renderer = new THREE.WebGLRenderer()
  renderer.setSize(window.innerWidth, window.innerHeight)
  renderer.setClearColor(0x000000, 0) // clear to black, fully transparent
  const hemisphereRadius = 100 // arbitrary value, set based on scene size
  const hemisphereUp = new THREE.Vector3(0, 0, 1) // Z is up in this scene
  // Create 10 cameras evenly spaced on a ring of the hemisphere
  const frustumSize = 100
  for (let i = 0; i < 10; i++) {
    const angle = (Math.PI * 2 * i) / 10
    const x = hemisphereRadius * Math.sin(angle)
    const y = hemisphereRadius * Math.cos(angle)
    const camera = new THREE.OrthographicCamera(-frustumSize / 2, frustumSize / 2, frustumSize / 2, -frustumSize / 2, 1, 1000)
    camera.matrixAutoUpdate = true
    camera.position.set(x, y, hemisphereRadius)
    camera.lookAt(0, 0, elevation ?? 0)
    // camera.up = hemisphereUp
    camera.updateMatrixWorld()
    cameras.push(camera)
  }
  cameras.forEach((camera) => {
    const helper = new THREE.CameraHelper(camera)
    scene.add(helper)
  })
  // Shared render target for the depth maps, plus a buffer for one RGBA pixel
  const size = 512
  const renderTarget = new THREE.WebGLRenderTarget(size, size)
  const pixel = new Uint8Array(4)
  // Render depth maps and project vertices
  activeMeshGroup.children.forEach((mesh) => {
    if (mesh instanceof THREE.Mesh) {
      const positions = mesh.geometry.attributes.position
      const colors: number[] = []
      for (let i = 0; i < positions.count; i++) {
        // Read one vertex and move it into world space
        const vertex = new THREE.Vector3().fromBufferAttribute(positions, i)
        mesh.localToWorld(vertex)
        let visibleCount = 0
        cameras.forEach((camera) => {
          // Render from the camera's perspective into the render target
          // (I know re-rendering per vertex is wasteful; once per camera should do)
          renderer.setRenderTarget(renderTarget)
          renderer.render(scene, camera)
          renderer.setRenderTarget(null)
          // Project the vertex into the camera's NDC space ([-1, 1] on each axis)
          const vertexProjected = vertex.clone().project(camera)
          // Map NDC to pixel coordinates on the render target
          const x = Math.floor((vertexProjected.x * 0.5 + 0.5) * size)
          const y = Math.floor((vertexProjected.y * 0.5 + 0.5) * size)
          // Read the single pixel the vertex lands on
          renderer.readRenderTargetPixels(renderTarget, x, y, 1, 1, pixel)
          // This is the part I can't get right: this reads the alpha channel of a
          // normal colour render, which is not a depth value at all
          const depth = pixel[3] / 255
          if (depth < vertexProjected.z) {
            visibleCount++
          }
        })
        // Set vertex colour based on visibility (pSBC, startColor and endColor
        // are defined elsewhere in my project)
        const color = pSBC(visibleCount / 10, startColor, endColor)
        const c = new THREE.Color(color ?? '#ffffff')
        colors.push(c.r, c.g, c.b)
      }
      mesh.geometry.setAttribute('color', new THREE.Float32BufferAttribute(colors, 3))
      ;(mesh.material as THREE.Material).vertexColors = true
      ;(mesh.material as THREE.Material).needsUpdate = true
    }
  })
}
```
Any help is really appreciated. Thanks!