How to create a Motion Vector map

Hi everyone! :wave:
I am trying to generate a motion vector map like the one shown below (second image) based on an image sequence similar to the first image.

Here are the images:

Diffuse texture (first image):

Motion vector map (second image):

Note: The images I provided are taken from buttermax.net for reference.

I want to achieve a motion blur effect or a directional displacement in Three.js using this vector map. I believe Blender can help in generating these motion vectors, but I’m not sure about the exact process (shaders, compositing, etc.).

Could someone guide me on:

  1. Generating the motion vector maps in Blender?

Any tips, tutorials, or workflow examples would be super helpful! :rocket:

Thanks in advance! :blush:

One search term is “optical flow”. Apparently there are plugins for some software that generate them (I think Houdini, After Effects, Blender)… I started going down that rabbit hole a bit but didn’t get too far… even looked into implementing optical flow manually. I’d be curious what you find, because there is a lot of potential to that technique.

3 Likes

@akella has a great tutorial on this

3 Likes

Yeah, but he didn’t mention how to create the motion vector map

There’s also an article on how the original idea was developed, which gives a pretty detailed overview… Motion Vectors for 3D Content

2 Likes

Yeah, but they didn’t explain how to create the motion vector map :pensive:

https://docs.opencv.org/3.4/d4/dee/tutorial_optical_flow.html
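
In case it helps, here is a minimal sketch of that tutorial’s approach (not from the thread; the file names are hypothetical): bake a dense flow map between two frames with OpenCV’s Farneback method, encoding x/y flow into the red/green channels with zero motion at mid-gray.

import cv2
import numpy as np

# Hypothetical input frames from the rendered image sequence.
prev_img = cv2.imread("frame_0001.png", cv2.IMREAD_GRAYSCALE)
next_img = cv2.imread("frame_0002.png", cv2.IMREAD_GRAYSCALE)

# Dense Farneback optical flow: one (dx, dy) pixel offset per pixel.
flow = cv2.calcOpticalFlowFarneback(prev_img, next_img, None,
                                    0.5, 3, 15, 3, 5, 1.2, 0)

# Normalize offsets to 0-255 with zero motion landing at mid-gray.
max_mag = max(np.abs(flow).max(), 1e-6)
encoded = (((flow / max_mag) * 0.5 + 0.5) * 255.0).astype(np.uint8)

out = np.zeros((flow.shape[0], flow.shape[1], 3), dtype=np.uint8)
out[..., 2] = encoded[..., 0]  # red channel = x flow (OpenCV images are BGR)
out[..., 1] = encoded[..., 1]  # green channel = y flow
cv2.imwrite("flow_0001.png", out)

(As noted below, if you have the 3D scene you can skip estimation entirely and use Blender’s exact Vector pass instead.)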

2 Likes

I found something that kind of works, but I think I’m missing something.

Here’s how I managed to create a motion vector map:

  1. Blender Setup:

    • In the View Layer settings, enable the Vector pass in the Data section.
    • Disable Motion Blur in the Render properties.
  2. Compositor:

    • Go to the Compositor and enable Use Nodes.
    • Set up the nodes as shown in the image below:
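
For anyone who prefers scripting, the same toggles via bpy (a minimal sketch; the node setup itself is scripted in full in a reply further down):

import bpy

bpy.context.view_layer.use_pass_vector = True     # View Layer > Passes > Data > Vector
bpy.context.scene.render.use_motion_blur = False  # Render > Motion Blur off
bpy.context.scene.use_nodes = True                # Compositor > Use Nodes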

Here’s the result I achieved by combining the frames in GIMP:


Let me know if there’s anything I might be missing!

2 Likes

That output looks plausible, yeah? Now that you mention it… I am overthinking this with the optical flow stuff. I was thinking these would be generated from still frames… but generating it directly from the 3D animation data, like I think you’re doing there, is more straightforward and accurate.

1 Like

It seems to work in the code as well! The only issue I noticed is that the gap between frames (a 4-degree rotation) is slightly too large for the motion vector map to handle accurately 🥲. I think reducing it to a 3-degree rotation difference between frames would yield better results.

Here’s the GitHub repo if anyone wants to give it a try:
Link : GitHub - AT010303/Image-Sequence-3d
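
For reference, a sketch of how that smaller per-frame delta could be keyframed in Blender (my assumption: an active object and a 16-frame timeline; 45° across frames 1–16 gives exactly 3° per frame when the interpolation is linear):

import bpy
import math

scene = bpy.context.scene
scene.frame_start = 1
scene.frame_end = 16

# Keyframe a 45-degree Z rotation across the 16 frames (3 degrees per frame).
obj = bpy.context.active_object
obj.rotation_euler = (0.0, 0.0, 0.0)
obj.keyframe_insert(data_path="rotation_euler", frame=1)
obj.rotation_euler = (0.0, 0.0, math.radians(45.0))
obj.keyframe_insert(data_path="rotation_euler", frame=16)

# Linear interpolation keeps the per-frame delta constant; the default
# Bezier easing would make the first and last deltas smaller.
for fcurve in obj.animation_data.action.fcurves:
    for keyframe in fcurve.keyframe_points:
        keyframe.interpolation = 'LINEAR'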

2 Likes

This is great! I’d say this is more of a Blender workflow question than a three.js one. However, using Blender’s Python bpy API I set up a few utility scripts to automate this process, very similar to yours, only with an added position pass as well as absolute image output paths…

If you start a new Blender file (I’m using 4.1.1), go to the Scripting tab, and create a new script, you can run the following to set up the compositor nodes for rendering everything necessary for the motion vector map images…

import bpy

# Render with Cycles; the Vector pass is what drives the motion vector map.
bpy.context.scene.render.engine = "CYCLES"

# Enable the needed passes; the Vector pass requires motion blur to be off.
bpy.context.view_layer.use_pass_vector = True
bpy.context.view_layer.use_pass_position = True
bpy.context.scene.render.use_motion_blur = False

# Rebuild the compositor node tree from scratch.
bpy.context.scene.use_nodes = True
tree = bpy.context.scene.node_tree

for node in tree.nodes:
    tree.nodes.remove(node)

render_layers = tree.nodes.new(type="CompositorNodeRLayers")
render_layers.location = (-200, 0)

# Split the Vector pass so X and Y can be normalized independently.
separate_xyz = tree.nodes.new(type="CompositorNodeSeparateXYZ")
separate_xyz.location = (100, 0)

# Normalize maps each frame's min/max values into the 0-1 range.
normalize_x = tree.nodes.new(type="CompositorNodeNormalize")
normalize_x.location = (300, 0)

normalize_y = tree.nodes.new(type="CompositorNodeNormalize")
normalize_y.location = (300, -100)

# Recombine the normalized X/Y into the red/green channels of the map.
combine_color = tree.nodes.new(type="CompositorNodeCombineColor")
combine_color.location = (500, 0)

viewer = tree.nodes.new(type="CompositorNodeViewer")
viewer.location = (700, -125)
viewer.use_alpha = True

# File Output nodes for the position pass and the motion vector map.
position_output = tree.nodes.new(type="CompositorNodeOutputFile")
position_output.location = (100, 125)
position_output.format.file_format = "PNG"
position_output.format.color_mode = "RGB"
position_output.base_path = bpy.path.abspath("//position_output/")

motion_vector_output = tree.nodes.new(type="CompositorNodeOutputFile")
motion_vector_output.location = (700, 0)
motion_vector_output.format.file_format = "PNG"
motion_vector_output.format.color_mode = "RGB"
motion_vector_output.base_path = bpy.path.abspath("//motion_vector_output/")

# Wire everything up.
links = tree.links
links.new(render_layers.outputs["Position"], position_output.inputs[0])
links.new(render_layers.outputs["Vector"], separate_xyz.inputs[0])

links.new(separate_xyz.outputs[0], normalize_x.inputs[0])
links.new(separate_xyz.outputs[1], normalize_y.inputs[0])

links.new(normalize_x.outputs[0], combine_color.inputs[0])
links.new(normalize_y.outputs[0], combine_color.inputs[1])

links.new(combine_color.outputs[0], viewer.inputs[0])
links.new(combine_color.outputs[0], motion_vector_output.inputs[0])

Compositing window result:

I’ve used the same python bpy package for a utility script to automate the rendering of a sequence and packing the outputted images into their respective sprite maps [motion-vector, position, final-render].

EDIT:

For reference, I originally packed the images in a left-to-right, top-to-bottom order, but after some experimentation it looks like the optimal packing is a right-to-left, bottom-to-top pattern (so frame 1 lands in the bottom-right cell and frame 16 in the top-left). I’ve updated the grid below to reflect the updated output of the script that follows…

[16, 15, 14, 13]
[12, 11, 10, 09]
[08, 07, 06, 05]
[04, 03, 02, 01]

Creating a new script in the Scripting tab, we can run the following to automate this with bpy and Pillow (the blend file has to be saved so Blender knows where to put the relative output directories, and you also have to have Pillow pip-installed into Blender’s bundled Python)…
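
If Pillow isn’t installed yet, one way to get it into Blender’s bundled Python (run once from the Scripting tab; assumes Blender 2.92+, where sys.executable points at Blender’s own interpreter):

import sys
import subprocess

# Bootstrap pip, then install Pillow into Blender's bundled interpreter.
subprocess.check_call([sys.executable, "-m", "ensurepip"])
subprocess.check_call([sys.executable, "-m", "pip", "install", "Pillow"])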

EDIT2:
Aside from the packing order above, the resulting packed image needs to be flipped horizontally and rotated 180°; I’ve updated the code below to include this change.

import bpy
import os
from PIL import Image
from PIL import ImageOps

# Render the 16-frame animation; the File Output nodes from the previous
# script write the position and motion vector frames at the same time.
bpy.context.scene.cycles.samples = 32
bpy.context.scene.render.filepath = bpy.path.abspath("//render_output/")
bpy.context.scene.render.image_settings.file_format = "PNG"

bpy.ops.render.render(animation=True)

print("Rendering complete. Now combining images into a grid.")

# Make sure the directory for the packed grids exists before saving.
os.makedirs(bpy.path.abspath("//final_grid_output/"), exist_ok=True)

# Pack the motion vector frames into a 4x4 grid, right to left, bottom to top.
image_folder = bpy.path.abspath("//motion_vector_output/")
image_paths = [os.path.join(image_folder, f"Image{i:04d}.png") for i in range(1, 17)]
images = [Image.open(img_path) for img_path in image_paths]

width, height = images[0].size
grid_width = 4 * width
grid_height = 4 * height
grid_image = Image.new("RGB", (grid_width, grid_height))

for i, img in enumerate(images):
    row = 3 - (i // 4)
    col = 3 - (i % 4)
    x_offset = col * width
    y_offset = row * height
    grid_image.paste(img, (x_offset, y_offset))

# Mirror horizontally, then rotate 180 degrees (together, a vertical flip).
grid_image = ImageOps.mirror(grid_image)
grid_image = grid_image.rotate(180, expand=True)
grid_image.save(bpy.path.abspath("//final_grid_output/motion_vector_grid.png"))

# Pack the final render frames the same way (RGBA to keep transparency).
image_folderA = bpy.path.abspath("//render_output/")
image_pathsA = [os.path.join(image_folderA, f"{i:04d}.png") for i in range(1, 17)]
imagesA = [Image.open(img_path) for img_path in image_pathsA]

widthA, heightA = imagesA[0].size
grid_widthA = 4 * widthA
grid_heightA = 4 * heightA
grid_imageA = Image.new("RGBA", (grid_widthA, grid_heightA))

for i, img in enumerate(imagesA):
    row = 3 - (i // 4)
    col = 3 - (i % 4)
    x_offset = col * widthA
    y_offset = row * heightA
    grid_imageA.paste(img, (x_offset, y_offset))

grid_imageA = ImageOps.mirror(grid_imageA)
grid_imageA = grid_imageA.rotate(180, expand=True)
grid_imageA.save(bpy.path.abspath("//final_grid_output/rendered_grid.png"))

# Pack the position pass frames the same way.
image_folderB = bpy.path.abspath("//position_output/")
image_pathsB = [os.path.join(image_folderB, f"Image{i:04d}.png") for i in range(1, 17)]
imagesB = [Image.open(img_path) for img_path in image_pathsB]

widthB, heightB = imagesB[0].size
grid_widthB = 4 * widthB
grid_heightB = 4 * heightB
grid_imageB = Image.new("RGB", (grid_widthB, grid_heightB))

for i, img in enumerate(imagesB):
    row = 3 - (i // 4)
    col = 3 - (i % 4)
    x_offset = col * widthB
    y_offset = row * heightB
    grid_imageB.paste(img, (x_offset, y_offset))

grid_imageB = ImageOps.mirror(grid_imageB)
grid_imageB = grid_imageB.rotate(180, expand=True)
grid_imageB.save(bpy.path.abspath("//final_grid_output/position_grid.png"))

For the script above to work there must be some sort of keyframed motion on the animation timeline (as a caution, this will begin rendering 16 frames, so it may tie up resources, but you can see the output directories being populated while rendering). Following the original linked resources, the animation should be distributed over 16 frames to output a 4x4 grid, although I’m sure this can be extended to larger grid sizes, as sketched below…
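
On extending to larger grids: a sketch of how the three near-identical packing loops above could be collapsed into one helper parameterized by grid size (my untested assumption is that the same right-to-left, bottom-to-top order holds for any N):

import os
import bpy
from PIL import Image, ImageOps

def pack_grid(folder, name_pattern, grid, out_path, mode="RGB"):
    # Load grid*grid numbered frames from the folder.
    paths = [os.path.join(folder, name_pattern.format(i)) for i in range(1, grid * grid + 1)]
    frames = [Image.open(p) for p in paths]
    width, height = frames[0].size
    sheet = Image.new(mode, (grid * width, grid * height))
    # Right-to-left, bottom-to-top packing, as worked out above.
    for i, frame in enumerate(frames):
        row = (grid - 1) - (i // grid)
        col = (grid - 1) - (i % grid)
        sheet.paste(frame, (col * width, row * height))
    # Same horizontal mirror + 180-degree rotation as the 4x4 version.
    sheet = ImageOps.mirror(sheet)
    sheet = sheet.rotate(180, expand=True)
    sheet.save(out_path)

# e.g. an 8x8 (64-frame) motion vector sheet:
pack_grid(bpy.path.abspath("//motion_vector_output/"), "Image{:04d}.png", 8,
          bpy.path.abspath("//final_grid_output/motion_vector_grid.png"))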

EDIT3:

After further progress on this, it seems the motion vector map essentially needs to be in an EXR format to preserve the precision required from the image; this means the individual EXR images can be relatively small, e.g. 512 x 512.

I haven’t managed to update the code above to output EXR, but I put together a little render using an 8 x 8 grid (64 images)…
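
For anyone wanting to try, the change to the first script is probably just switching the motion vector File Output node’s format (an untested sketch; these are standard ImageFormatSettings properties):

# 32-bit OpenEXR output instead of PNG, to preserve vector precision.
motion_vector_output.format.file_format = "OPEN_EXR"
motion_vector_output.format.color_depth = "32"
motion_vector_output.format.exr_codec = "ZIP"

Note that Pillow has no native EXR support, so the packing script would need a different image library for the EXR frames.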

EDIT4:
Testing further, it seems that if the motion vector map is an EXR sheet of tiled EXR images, each tile can be as small as 256 x 256 with respectable quality maintained in the illusion!

3 Likes