FlowMP: Learning Motion Fields for Robot Planning with Conditional Flow Matching

Table of Contents
  1. Authors
  2. Abstract
  3. Prerequisites
  4. Method Overview
  5. Usage
  6. Citing

Authors

  1. Khang Nguyen
  2. An T. Le
  3. Tien Pham
  4. Manfred Huber
  5. Jan Peters
  6. Minh Nhat Vu

Learning and Adaptive Robotics Lab, University of Texas at Arlington, USA; Intelligent Autonomous Systems Lab, TU Darmstadt, Germany; Cognitive Robotics Lab, University of Manchester, UK; German Research Center for AI (DFKI), SAIROL, Darmstadt, Germany; Hessian.AI, Darmstadt, Germany; Automation & Control Institute (ACIN), TU Wien, Vienna, Austria; Austrian Institute of Technology (AIT) GmbH, Vienna, Austria.

Abstract

Prior flow matching methods in robotics have primarily learned velocity fields to morph one distribution of trajectories into another. In this work, we extend flow matching to capture second-order trajectory dynamics, incorporating acceleration effects either explicitly in the model or implicitly through the learning objective. Unlike diffusion models, which rely on a noisy forward process and iterative denoising steps, flow matching trains a continuous transformation (flow) that directly maps a simple prior distribution to the target trajectory distribution without any denoising procedure. By modeling trajectories with second-order dynamics, our approach ensures that generated robot motions are smooth and physically executable, avoiding the jerky or dynamically infeasible trajectories that first-order models might produce. We empirically demonstrate that this second-order conditional flow matching yields superior performance on motion planning benchmarks, achieving smoother trajectories and higher success rates than baseline planners. These findings highlight the advantage of learning acceleration-aware motion fields, as our method outperforms existing motion planning methods in terms of trajectory quality and planning success.
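The core idea above can be illustrated with a minimal, self-contained sketch of conditional flow matching on a straight-line probability path: a noise trajectory from the prior is interpolated toward an expert trajectory, and the model would regress the target velocity along that path. This is a generic first-order illustration, not the repository's actual implementation; FlowMP additionally captures second-order (acceleration) dynamics, and the names here (`cfm_training_pair`) are hypothetical.

```python
import numpy as np

def cfm_training_pair(x0, x1, t):
    """Conditional flow matching target for a straight-line probability path.

    x0 : sample from the simple prior (e.g. a Gaussian noise trajectory)
    x1 : sample from the expert trajectory distribution
    t  : flow time in [0, 1]

    Returns the interpolated point x_t on the conditional path and the
    target velocity u_t that a learned field would regress to.
    """
    x_t = (1.0 - t) * x0 + t * x1   # point on the conditional path
    u_t = x1 - x0                   # constant target velocity of this path
    return x_t, u_t

# Toy example: a 2D trajectory with 5 waypoints.
rng = np.random.default_rng(0)
x0 = rng.standard_normal((5, 2))        # noise trajectory from the prior
x1 = np.linspace([0, 0], [1, 1], 5)     # "expert" straight-line trajectory
x_t, u_t = cfm_training_pair(x0, x1, t=0.5)
```

Training then amounts to minimizing the squared error between the model's predicted velocity at (x_t, t) and u_t, with no noisy forward process or iterative denoising.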


Prerequisites

Please check the requirements.txt file for dependencies, and install them with:

pip install -r requirements.txt

Method Overview

Motion planning at denoising steps along the time horizon t from 0 to 1, inferred by the trained conditional motion field as outlined in the generate_motion function (Alg. 1). The paths correspond to different task objectives within the same maze environment: Path 1 and Path 2 start from different initial positions but share the same goal, while Path 3 has distinct start and goal points. The start and goal positions are indicated as ● and ✖, respectively, in the same color as their paths. The velocity (blue and orange) and acceleration (green and yellow) profiles of the simultaneously inferred paths are shown accordingly.


FlowMP synthesizes 2D motions while guaranteeing collision-free and dynamic feasibility.
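Conceptually, sampling from a trained flow like the one in the figure amounts to integrating the learned field as an ODE from t = 0 (prior noise) to t = 1. The sketch below is a hypothetical Euler-integration stand-in for the repository's generate_motion function; `field(x, t)` and the toy goal-seeking field are illustrative assumptions, not the actual API.

```python
import numpy as np

def generate_motion(field, x0, n_steps=50):
    """Hypothetical sketch: sample a trajectory from a trained motion field
    by Euler-integrating the flow ODE dx/dt = v(x, t) from t = 0 to t = 1.

    `field(x, t)` stands in for the trained conditional motion field; it is
    NOT the actual interface of this repository.
    """
    x = x0.copy()
    dt = 1.0 / n_steps
    for k in range(n_steps):
        t = k * dt
        x = x + dt * field(x, t)   # one Euler step along the flow
    return x

# Toy field that transports any state toward a fixed target trajectory.
goal = np.linspace([0.0, 0.0], [1.0, 1.0], 5)
field = lambda x, t: goal - x      # velocity pointing at the target
rng = np.random.default_rng(0)
x0 = rng.standard_normal((5, 2))   # initial noise covering the workspace
x1 = generate_motion(field, x0)
```

In practice, a higher-order ODE solver can be substituted for the Euler step, and the second-order formulation also carries velocity and acceleration states through the integration.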

The conditional motion field extends to 3D space, where collision-free paths are denoised along the time horizon t from 0 to 1 within an obstacle map, starting from initial noise covering the workspace. The model is trained on expert motion distributions with various task objectives in the pre-defined 3D space using Alg. 1. Three example motions are sampled with the generate_motion function: Path 1 and Path 3 start from different initial positions but reach the same goal, whereas Path 2 follows a distinct trajectory with unique start and goal points. Velocity (blue, orange, and violet) and acceleration (green, yellow, and apricot) profiles on the three axes are sampled from their respective noise distributions. At t = 1, the trajectories and their derivatives are smooth.


FlowMP synthesizes dynamically-feasible and collision-free 3D motions.

Usage

Data Generation

To generate 2D expert demonstrations using B-splines, run:

python scripts/generate_data/generate_env_trajs.py

or for 3D expert motions:

python scripts/generate_data/generate_3d_env_trajs.py
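For intuition, smooth expert-like demonstrations can be produced by fitting a B-spline through sparse waypoints and sampling it densely. The sketch below uses SciPy's `splprep`/`splev`; the waypoints, function name, and parameters are illustrative assumptions, not the repository's actual generator.

```python
import numpy as np
from scipy.interpolate import splprep, splev

def bspline_demo_trajectory(waypoints, n_samples=100, degree=3):
    """Fit an interpolating cubic B-spline through 2D waypoints and sample
    it densely, yielding a smooth expert-like demonstration.
    Illustrative only; not the repository's actual data generator."""
    # s=0 forces the spline to pass through every waypoint.
    tck, _ = splprep(waypoints.T, k=degree, s=0)
    u = np.linspace(0.0, 1.0, n_samples)
    x, y = splev(u, tck)
    return np.stack([x, y], axis=1)

# Sparse waypoints standing in for a planner's via-points.
waypoints = np.array([[0.0, 0.0], [0.3, 0.8], [0.7, 0.2], [1.0, 1.0]])
traj = bspline_demo_trajectory(waypoints)
```

Because the spline is at least C² for cubic degree, the resulting demonstrations have continuous velocity and acceleration profiles, which matches the second-order supervision the method relies on.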

Training and Inference

To train on the demonstrations and run inference, open the notebook at:

scripts/train_flow/train_env.ipynb

Then, visualize the results by running:

scripts/train_flow/visualize_flows.ipynb

The same workflow applies to the 3D case.

NOTE: If you find this repository useful, please also give kudos to the GitHub repository of VP-STO when using this code in your work!

Citing

@article{nguyen2025flowmp,
    title={FlowMP: Learning Motion Fields for Robot Planning with Conditional Flow Matching},
    author={Nguyen, Khang and Le, An T and Pham, Tien and Huber, Manfred and Peters, Jan and Vu, Minh Nhat},
    journal={arXiv preprint arXiv:2503.06135},
    year={2025}
}

About

[IROS '25] FlowMP: Learning Motion Fields for Robot Planning with Conditional Flow Matching
