Updating Mesh position is not updating its own geometry

Given a Mesh, we can update its position by calling mesh.position.set(x, y, z); this will "translate" the mesh to the new coordinates.
So, given a plane geometry

const width = 10;
const length = 10;
const plane = new THREE.Mesh(
    new THREE.PlaneGeometry(width, length, 2, 2),
    new THREE.MeshStandardMaterial({
        color: "red",
        side: THREE.DoubleSide,
        wireframe: false,
    })
);
plane.rotateX(-Math.PI / 2);
plane.receiveShadow = true;
plane.castShadow = false;
plane.position.set((width / 2), 0, (length / 2));

scene.add(plane);

That's fine.
What I wish to do, though, is to have the geometry updated as well.
Let's imagine we are moving this plane around: whatever position we set, its geometry position array will never change

console.log(plane.geometry.attributes.position.array);

Question

Is there a native method, either in Object3D, Mesh, or BufferGeometry (somewhere), that will also update the actual geometry from the applied translation?
If not, what's the easiest way to achieve this?

I know this is the expected behaviour, but I want to manipulate the geometry itself of many planes.
Exactly what I need this for doesn't matter, because being able to do this would help in general and for other use cases.

If this is not possible, then at least: how can we get the actual geometry.attributes.position.array of a translated mesh?
That is, the position array would contain different coordinates reflecting the new position.

see three.js docs

But if you apply the same position offset to the geometry, it is going to offset from the origin.
If you keep doing this each frame, it is going to launch into space exponentially. The position attributes are in object space. If you want the world positions of the vertices, you have to run a function to get their coordinates.
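To illustrate what "object space to world space" means per vertex, here is a minimal sketch in plain JavaScript (no three.js dependency, so the names are mine): each object-space position is multiplied by the mesh's column-major 4x4 world matrix, which is what `vertex.applyMatrix4(mesh.matrixWorld)` does in three.js after `mesh.updateMatrixWorld()`.

```javascript
// Apply a column-major 4x4 matrix (laid out like THREE.Matrix4.elements)
// to a flat [x0,y0,z0, x1,y1,z1, ...] position array, returning a new array.
function toWorldPositions(positions, m) {
  const out = new Float32Array(positions.length);
  for (let i = 0; i < positions.length; i += 3) {
    const x = positions[i], y = positions[i + 1], z = positions[i + 2];
    out[i]     = m[0] * x + m[4] * y + m[8]  * z + m[12];
    out[i + 1] = m[1] * x + m[5] * y + m[9]  * z + m[13];
    out[i + 2] = m[2] * x + m[6] * y + m[10] * z + m[14];
  }
  return out;
}

// A pure translation matrix moving everything by (5, 0, 5):
const matrix = [1,0,0,0, 0,1,0,0, 0,0,1,0, 5,0,5,1];
const world = toWorldPositions(new Float32Array([0,0,0, 1,2,3]), matrix);
// world is now [5,0,5, 6,2,8]
```

Note that this returns a new array: the geometry's own attribute stays in object space, which is why repeating the call never "launches into space".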

Thanks for your answer.
I did indeed notice that if you keep updating the geometry's position attribute, it will fly off into another dimension.

Having said that, reading again the documentation you posted: while it is helpful, it does not address what I need.

Let me try to explain in a different way.
Normally, the easier and better way to update the position of a mesh is by calling mesh.position.set(x, y, z). However, this will not update the contents of mesh.geometry.attributes.position.array.
Theoretically, if we want an object to change its position, we could do it the way I just described, or by doing

mesh.geometry.attributes.position.setXYZ(index, x, y, z);
mesh.geometry.attributes.position.needsUpdate = true;

But again, doing it the first way will not update the array.
I am not saying why I need this because I don't think it is necessary (and I am trying to keep the question less "noisy"), but I hope this is clearer now.
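For what it's worth, updating every vertex through the attribute amounts to shifting each (x, y, z) triple in the flat array by the offset, done once. Here is a sketch in plain JavaScript (my own function name; in three.js the equivalent one-liner is `geometry.translate(dx, dy, dz)` followed by setting `position.needsUpdate = true`):

```javascript
// Bake a position offset into a flat [x,y,z, x,y,z, ...] array, in place.
function bakeOffset(positions, dx, dy, dz) {
  for (let i = 0; i < positions.length; i += 3) {
    positions[i]     += dx; // x of vertex i/3
    positions[i + 1] += dy; // y
    positions[i + 2] += dz; // z
  }
}

const pos = new Float32Array([0, 0, 0, 1, 1, 1]);
bakeOffset(pos, 5, 0, 5);
// pos is now [5,0,5, 6,1,6]
```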

I think we should avoid handling the array of a BufferGeometry directly, because it is very costly and could be one reason for low performance.
When the length of the array is over 20K (meaning the model is a normal one, not low poly), it will be difficult to handle this array without reducing performance.

Do you just want the positions of the vertices in world coordinates?
There are many existing questions about that already, much better explained. Search for localToWorld positions.
Here's one

You can also clone geometry and apply the matrix to the whole mesh and get a new array that way

Even in basic GL you don't cache world positions each frame; you multiply the object's matrix by its parent's matrix, and the shader handles the MVP transform to place the vertices in screen space.
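The clone-and-apply idea mentioned above can be sketched without three.js as a dependency (names below are mine; in three.js it would be roughly `mesh.updateMatrixWorld(); const baked = mesh.geometry.clone().applyMatrix4(mesh.matrixWorld);`). The key point is that the copy gets the transform baked in while the original geometry stays untouched, so repeating this never accumulates:

```javascript
// Return a world-space copy of a flat position array, shifted by the
// mesh's position (translation-only stand-in for applyMatrix4).
function bakedWorldCopy(positions, meshPosition) {
  const out = Float32Array.from(positions); // "clone" the attribute array
  for (let i = 0; i < out.length; i += 3) {
    out[i]     += meshPosition.x;
    out[i + 1] += meshPosition.y;
    out[i + 2] += meshPosition.z;
  }
  return out;
}

const original = new Float32Array([0, 0, 0]);
const world = bakedWorldCopy(original, { x: 5, y: 0, z: 5 });
// world is [5, 0, 5]; original is still [0, 0, 0]
```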

And… here we have an instance where trying to give less information, in order to be more focused, is actually counterproductive, because it may give the wrong impression of what someone needs.

I'll try to explain what I am actually doing.
Performance is not a big issue for what I need, unless it really starts to get too laggy. Also, I may not have anywhere close to 20k vertices.

What I'm trying to do is a simple editor to manipulate plane geometry.

Imports

import * as THREE from 'three';

import {TransformControls} from "three/examples/jsm/controls/TransformControls.js";
import * as BufferGeometryUtils from "three/examples/jsm/utils/BufferGeometryUtils.js";

Then assume that I have the scene, renderer, and all that good stuff. Cool, now for what I'm doing.
I basically copied the three.js webgl - transform controls example and am using TransformControls (which works amazingly, I have to say, it's a charm :wink: )

function mergePlanes() {
        const width = 10;
        const length = 10;
        const geometry = new THREE.PlaneGeometry(width, length, 2, 2);

        const randomColors = getRandomColor();

        const planesGeometries = [];
        const planeMeshes = [];
        const planeSectionGeometryMapping = {};
        const columns = 3;
        const rows = 3;
        for (let column = 0; column < columns; column++) {
            const gridPositionX = (width / 2) + (width * column);

            for (let row = 0; row < rows; row++) {
                const gridPositionZ = (length / 2) + (length * row);

                // Create Each Separate Geometry
                let planeSectionGeometry = geometry.clone();
                planeSectionGeometry.rotateX(-Math.PI / 2);
                planeSectionGeometry.translate(gridPositionX, 0, gridPositionZ);
                planesGeometries.push(planeSectionGeometry);

                // Create Separate Mesh
                const color = randomColors("green");
                const plane = new THREE.Mesh(geometry,
                    new THREE.MeshStandardMaterial({
                        color: color,
                        side: THREE.DoubleSide,
                        wireframe: true,
                    })
                );
                plane.rotateX(-Math.PI / 2);
                plane.receiveShadow = true;
                plane.castShadow = false;
                plane.position.set(gridPositionX, 0, gridPositionZ);

                scene.add(plane);
                planeMeshes.push(plane);
                planeSectionGeometryMapping[plane.id] = planeSectionGeometry;

            }
        }

        const useGroups = false;
        const geometryMerged = BufferGeometryUtils.mergeGeometries(planesGeometries, useGroups);
        if (geometryMerged === null) {
            alert("Could not merge plane!!!");
            return;
        }
        geometryMerged.computeBoundingSphere();
        const planeMerged = new THREE.Mesh(
            geometryMerged,
            new THREE.MeshStandardMaterial({
                color: "green",
                side: THREE.DoubleSide,
                wireframe: false,
            })
        );

        scene.add(planeMerged);

        window.geometryMerged = geometryMerged;

        /// Add controls
        const object = planeMeshes[0];  // Here I'm giving the first plane... it's not really important, I'm only playing around
        const matrixBase = new THREE.Matrix4();
        const transformControls = new TransformControls(camera, renderer.domElement);
        transformControls.addEventListener('dragging-changed', function (event) {
            controls.enabled = !event.value;

            // Here is where I'm stuck.
            // This event will be called once you transform the mesh position. This works fine.

            // I want to:
            // - somehow get the current world position of the mesh
            // - apply it to the geometry of the plane
            // - re-build the plane with the merged geometry

            // get the corresponding 'geometry' buffer
            const planeSectionGeometry = planeSectionGeometryMapping[object.id];


            // here... I really don't know what I am doing... I'm honestly just trying random stuff :D
            matrixBase.setPosition(object.position.x, object.position.y, object.position.z);
            // here... it goes into a black hole...
            planeSectionGeometry.translate(matrixBase.elements[12], matrixBase.elements[13], matrixBase.elements[14]);

            // Rebuild main plane with Merged geometries
            let geometryMergedNew = BufferGeometryUtils.mergeGeometries(planesGeometries, useGroups);
            geometryMergedNew.computeBoundingSphere();
            planeMerged.geometry.copy(geometryMergedNew);

        });

        transformControls.attach(object);
        scene.add(transformControls);
    }

I'm stuck on the dragging-changed event listener.
Now, in case it is not clear from the code, this is what I'm doing:

  • Creating a big plane divided into 'smaller' plane sections.
  • Each smaller section has an actual mesh and a 'separate', autonomous geometry.
  • An object maps the mesh ids to their corresponding geometry (planeSectionGeometryMapping).
  • Finally, we merge all the geometries to create a bigger plane… thanks to BufferGeometryUtils.mergeGeometries.

I am then attaching the TransformControls to one mesh so that I can move it freely.
But once I do this, I want to move the geometry that is mapped to it into the same position,
then rebuild the big plane.

So I am stuck on the part where I need to move the geometry into the same position as the mesh.
Why? Well, long story. It doesn't matter, I think, because overall it is a good question and could be useful for other things. I am a noob and this is a learning tool.

Again, performance is not so important; it is a tool and won't be used by an 'end user', just through a 'pseudo' editor.

Thanks, I'll look through all this material over the following days.
Indeed, I may just need some pointers to existing code, APIs or something (or someone to tell me that I'm doing it all wrong).
Anyway, we'll see if this solves the thing I'm doing; it is good for learning other things in any case.

What do I need from this article? You should try to take from it what you need.
I just wanted to tell you my experience because it could be of some help.

I'm reading it. It does need a few more images showing what the change action is: where the origin begins and what it changes to in each delta transform.

But from what I can tell, you want to either:
1: "displace" the geometry with the TransformControls and not change the origin of the object?
OR
2: "displace" the geometry AND, when you let go, the object's origin copies the TransformControls position AND the geometry gets placed where you left it as well.

To start, TransformControls works on Object3Ds.
Mesh is a subclass of Object3D; Geometry is not.

For 1: you just move "something" to get the position and offset the geometry by that displacement.
B - A, in typical vector position math. Or the inverse matrix form, I forget.

For 2: that's two steps. You do 1 to move the geometry, then move the mesh object, which moves the geometry again because it is part of the mesh object, and then apply its matrix to the geometry to push it back, or inverse-apply the matrix, I forget.

If you are moving multiple "Mesh" objects, those are each Object3Ds, each with geometries, to which you can apply either of the two options above.
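Option 2, bake then reset, can be sketched in plain JavaScript (the `positions` field and function names are hypothetical stand-ins; in three.js this would be roughly `geometry.translate(p.x, p.y, p.z)` followed by `mesh.position.set(0, 0, 0)`):

```javascript
// Push the mesh's translation into the vertices, then reset the mesh's
// position to the origin, so nothing visibly moves but the geometry now
// holds the coordinates that used to live in the transform.
function bakeAndReset(mesh) {
  const p = mesh.position;
  const pos = mesh.positions; // flat [x,y,z, ...] array, stand-in for the attribute
  for (let i = 0; i < pos.length; i += 3) {
    pos[i] += p.x; pos[i + 1] += p.y; pos[i + 2] += p.z;
  }
  p.x = 0; p.y = 0; p.z = 0; // undo the transform so the net effect is zero
}

const mesh = { position: { x: 5, y: 0, z: 5 }, positions: new Float32Array([1, 0, 1]) };
bakeAndReset(mesh);
// mesh.positions is [6, 0, 6] and mesh.position is (0, 0, 0)
```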

Mhh… almost, but it is not exactly either "1" or "2".

So yes, I know TransformControls works for Object3D and Mesh; in fact, this works well.

I simply want to move the separate BufferGeometry to where the mesh that I just finished moving with the TransformControls is.

Basically, make the BufferGeometry follow the mesh.

  1. The mesh is moved by TransformControls.
  2. When TransformControls ends the movement, it fires dragging-changed.
  3. Inside the dragging-changed event I need to move the BufferGeometry to the same position as the mesh (so really, all its vertices should change position).

I'm stuck on step 3.
I would expect this to be an easier and more obvious thing to do.
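Step 3 can be sketched in plain JavaScript (names are mine, not three.js API): translate the tracked geometry by the *delta* since the last drag event, rather than by the absolute mesh position, so repeated events don't compound the offset. In the thread's code the translation call would be `planeSectionGeometry.translate(dx, dy, dz)`.

```javascript
// Make a flat position array follow a mesh position without accumulating:
// each call shifts by only the movement since the previous call.
function makeFollower(geometryPositions) {
  let last = { x: 0, y: 0, z: 0 }; // assumes geometry starts at the mesh's origin
  return function onDrag(meshPosition) {
    const dx = meshPosition.x - last.x;
    const dy = meshPosition.y - last.y;
    const dz = meshPosition.z - last.z;
    for (let i = 0; i < geometryPositions.length; i += 3) {
      geometryPositions[i]     += dx;
      geometryPositions[i + 1] += dy;
      geometryPositions[i + 2] += dz;
    }
    last = { x: meshPosition.x, y: meshPosition.y, z: meshPosition.z };
  };
}

const positions = new Float32Array([0, 0, 0]);
const follow = makeFollower(positions);
follow({ x: 5, y: 0, z: 0 }); // vertex moves to (5, 0, 0)
follow({ x: 5, y: 0, z: 0 }); // same position again: delta is zero, no double move
// positions is still [5, 0, 0]
```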

Are you trying to build a big plane made of small planes that you can move around to change the shape of the big plane?

If so, why not save the addresses of each of the small meshes? Then you can position and move each mesh separately. Or are there way too many small planes to make this feasible?

Hi and thank you for your answer.
Almost…
Or well the first statement:

Are you trying to build a big plane made of small planes that you can move around to change the shape of the big plane?

This is correct but the second statement:

If so, why not save the addresses of each of the small meshes?

This is what I’m doing indeed

Then you can position and move each mesh separately

I’m also doing that via TransformControls

Or are there way too many small planes to make this feasible?

No, this is not a problem for the moment. At least to start, I need to be able to do it in a simple way, with a few meshes.

Look, I added the example here: Updating Mesh position is not updating its own geometry - #6 by koubae

When I find a planeSectionGeometry (which is a single THREE.PlaneGeometry, hence really a BufferGeometry), I can't position it where I just moved the single Mesh instance.
Why is this so difficult to do, or even to explain :D?

Look is simple:

  • We have 1 Mesh made of a PlaneGeometry
  • We have 1 copy of the mesh's PlaneGeometry
  • We move the Mesh somehow
  • I want to position the PlaneGeometry exactly where the Mesh is
  • Or at least, get the exact current mesh geometry.attributes.position.array… but as I stated in the question

Is using Transform Controls a requirement?

What I had in mind was something much simpler. For example, here is a CodePen example that allows you to use the arrow keys to move a mesh around.

If you want to move several different meshes around, you could convert the Mesh variable into an array, create more meshes, and add another keydown command to switch between the meshes in the array.

setXYZ is for setting a single vertex. To move all vertices you use geometry.translate(x, y, z).

This should be done sparingly, since it accumulates numeric error each time you do it, unlike setting the mesh position, which applies the transform each frame from scratch.
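The accumulation point can be seen in plain JavaScript: repeatedly adding an offset in place rounds at every step, while computing the result from scratch rounds only once.

```javascript
// Repeated in-place translation (like calling geometry.translate per frame):
let accumulated = 0;
for (let i = 0; i < 1000; i++) accumulated += 0.1;

// Computing the same transform from scratch (like mesh.position.set):
const direct = 1000 * 0.1;

// After 1000 steps the two results differ by a tiny floating-point drift.
```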

Hello, I have a similar question. I have a buffer geometry making a rectangle shape, and I apply drag controls to it. I now need to save the new coordinates so that I can save and load this data when I reload the page. The shape should then be loaded at the dragged position, but I am unable to achieve this functionality.

Is that what you mean?

raycaster - drag and drop

The coordinates are stored in the local storage.