[complicated] [please help] How to update UV map dynamically from intersected geometry?

This is sort of a complicated thing, but I’m wondering how to dynamically update a UV map from information in the scene. Forgive these screenshots, they’re from Blender, but treat them as sketches. Say I have a cube (it won’t be a cube in the end; it’ll be something that’s really hard to map circles onto in a normal way, but let’s keep it simple for now).

And I also have a sphere that intersects with that cube (see second screenshot). Is there a way to update the UV map image that belongs to the cube and replace the intersected pixels in the UV map with pixels of a different color? That way the UV map’s colors would appear different only where the sphere has intersected the cube. I imagine the painted region won’t actually appear as a perfect circle on the flat image if the geometry is something other than a cube, though.

I want to use this as a way to apply some kind of random dot texture to a geometry of a much different shape. The shape will be severely displaced, so much so that if I just draw regular circles in the UV map, they appear as streaks. I figured doing some kind of projection this way would solve the problem, though it requires going back and editing the UV map dynamically wherever spheres intersect the target geometry.

I have no idea how to actually implement this, so if anyone has any advice, please let me know. Or if there’s an easier way to apply a random-dots texture to the UV map of an unpredictably shaped geometry, I’d love to hear that too.

I would use shaders, but the renderer I’m using doesn’t support THREE.ShaderMaterial.

Thank you for reading and if anyone has any advice on this, please let me know :smiley:

you could use the DecalGeometry logic to figure out how your implementation might work



This is helpful, though I’m really hesitant to use this because it creates a new mesh and has the potential to create a lot of new draw calls.

I did, however, find Manthrax’s CSG library demo via this post

The thing I’m running into now is actually extracting the coordinates from that intersection, but you can see the desired behavior is totally happening!

That circle intersection is essentially the information I need; now I just need to convert it from that intersected state into UV coordinates, then onto a canvas, and back into the modified texture.

So, basically I still haven’t found a solution, but I’m still thankful for your input :pray:


Hehe, I’m not sure if I fully understand your need, but I did just write this over the last couple of days, and it sounds like it might be relevant to your question?



Internally… it does something similar to what you describe… renders out the UV map to the texture to implement the painting.


oh woah yeah basically this is exactly what I was thinking!!

this is so sick! I was going to use it to draw dots on things, but just drawing a circle on a canvas is wonky because of the way my meshes are shaped, so I’ll look at this in the future. I ended up using your CSG library, which I found through another link on the forum, and kind of hacked together a way to draw perfectly circular dots by creating a new mesh where the original mesh is intersected and giving it a new color (it’s a performance nightmare though)


I made the repo public… you can mine it for ideas… and I describe the algorithm in the readme:


Reading along: is the final texture (via CanvasTexture) accessible as a regular image outside of the shader implementation you use?

It exists during runtime as a rendertarget, since reading the texture back from the GPU is a slow operation, but it can be converted into a CanvasTexture when needed.

(Which I have to do to export the 3D model with textures, since GLTFExporter can’t see rendertargets: it only sees rendertarget.texture, and the texture doesn’t have a reference back to its target to extract the data.)

In my case, I flag all textures with the UUID of the renderTarget they belong to so I can look them up at export time and convert them to CanvasTextures.

this page is just so good that now I feel like I really have to use it :sweat_smile:

I was wondering, on a scale of 1 to 10 (10 being the most difficult) for a mildly experienced three.js dev who doesn’t know much about writing shaders but a lot about most other things in the library,

how difficult do you think it would be to use this tool to generate tens of textured models that are polka-dotted, output in a three.js scene but not rendered with shaders, instead using the exported canvas textures as their final rendered state?

I plan to use this to procedurally texture some procedurally generated 3D models :smile:

I’d say a 5 to 10. I feel like I did some of the hardest bits here…
Once you have the model loaded, you can cast some rays from fixed points inward and draw your dots at those points.
You will, however, have to generate decent UVs for your procgen 3D models, with islands that don’t overlap and that have a ~16 to 32 pixel margin so the texture dilation can fix the filtering seams.
You could do this with some relatively simple techniques if you don’t mind sloppy textures… i.e. just output each triangle to a padded, minimum bounding box on the UV map…

If you check the “tex” checkbox, you can see how the writing to the texture works, and how the islands are dilated with the shader.
Here’s a link to a three.js mentorship Slack channel I help moderate, if you want to talk about it in chat… I can maybe help get you rolling.

