Diffusion Policy (DP) has established a new state-of-the-art in robotic manipulation by modeling policies as conditional denoising processes. However, the iterative nature of DP creates a significant latency bottleneck, requiring tens of network evaluations to generate a single action.
SwiftPolicy is a surprisingly simple distillation scheme that accelerates Diffusion Policy, achieving a 50x reduction in the number of neural network function evaluations (NFE).
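For intuition, the sketch below contrasts the two regimes: a teacher that iteratively denoises an action sequence over many network calls versus a distilled student that maps noise to actions in a single forward pass. This is a minimal, hypothetical PyTorch illustration, not SwiftPolicy's actual code; all names (`Denoiser`, `sample_teacher`, `sample_student`) and the toy dimensions are assumptions made for the example.

```python
# Hypothetical sketch: teacher = iterative DDPM-style denoising (many NFE),
# student = distilled one-step sampler (1 NFE). Not the project's real API.
import torch
import torch.nn as nn

ACTION_DIM, OBS_DIM, HORIZON = 7, 32, 16
NUM_TEACHER_STEPS = 50  # 50 teacher NFE vs. 1 student NFE -> 50x reduction

class Denoiser(nn.Module):
    """Toy conditional denoiser eps_theta(a_t, t, obs)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(ACTION_DIM * HORIZON + OBS_DIM + 1, 256),
            nn.ReLU(),
            nn.Linear(256, ACTION_DIM * HORIZON),
        )

    def forward(self, a_t, t, obs):
        t_feat = t.expand(a_t.shape[0], 1)  # broadcast timestep to the batch
        return self.net(torch.cat([a_t, t_feat, obs], dim=-1))

@torch.no_grad()
def sample_teacher(teacher, obs, betas):
    """Ancestral DDPM sampling: one network call per denoising step."""
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)
    a = torch.randn(obs.shape[0], ACTION_DIM * HORIZON)
    for i in reversed(range(NUM_TEACHER_STEPS)):
        t = torch.tensor([[i / NUM_TEACHER_STEPS]])
        eps = teacher(a, t, obs)
        mean = (a - betas[i] / torch.sqrt(1 - alpha_bars[i]) * eps) / torch.sqrt(alphas[i])
        noise = torch.randn_like(a) if i > 0 else torch.zeros_like(a)
        a = mean + torch.sqrt(betas[i]) * noise
    return a

@torch.no_grad()
def sample_student(student, obs):
    """Distilled student: a single network call maps noise to actions."""
    z = torch.randn(obs.shape[0], ACTION_DIM * HORIZON)
    t = torch.ones(1, 1)  # queried once, at the terminal noise level
    return student(z, t, obs)

obs = torch.randn(1, OBS_DIM)
betas = torch.linspace(1e-4, 2e-2, NUM_TEACHER_STEPS)
teacher, student = Denoiser(), Denoiser()
print(sample_teacher(teacher, obs, betas).shape)  # 50 network evaluations
print(sample_student(student, obs).shape)         # 1 network evaluation
```

Collapsing a 50-step teacher into a one-step student is exactly where the 50x NFE reduction quoted above comes from; how the student is trained (score distillation) is covered in the report.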
The full project report is available as a PDF: `MVA_Robots_25.pdf`
*Teacher vs. student policy running within the same time constraint.*
The entire training and evaluation pipeline is self-contained in a Google Colab notebook. You can run the code directly in your browser without any local setup.
Click the badge above to open the notebook.
We are actively working to improve SwiftPolicy. The current roadmap includes:
- Run on a real robot (if you have a robot we can borrow, please reach out!)
- Improve training stability
- Scale to additional datasets
- Duc-Hai Pham (duchai1092002 at gmail dot com) - ENS Paris-Saclay
- Ianis Hammani (ianis.hammani1712 at gmail dot com) - ENS Paris-Saclay
This project was developed as part of the MVA Master's program (Robotics course).
If you find this code or work useful in your research, please cite the following:
```bibtex
@software{Pham_SwiftPolicy_Accelerated_Visuomotor_2025,
  author = {Pham, Duc-Hai and Hammani, Ianis},
  month = jan,
  title = {{SwiftPolicy: Accelerated Visuomotor Policies via Score Distillation Sampling}},
  url = {https://github.com/yourusername/swiftpolicy},
  version = {1.0.0},
  year = {2025}
}
```