[CVPR 2026] EasyOmnimatte: Taming Pretrained Inpainting Diffusion Models for End-to-End Video Layered Decomposition

arXiv Project Page License: MIT

Yihan Hu¹, Xuelin Chen², and Xiaodong Cun¹ 📮

1GVC Lab, Great Bay University, 2Adobe Research


TL;DR: EasyOmnimatte decomposes videos into layers (together with their associated effects) using 2-step diffusion, without any post-optimization.
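The layers produced by an omnimatte-style decomposition are RGBA mattes (each subject plus its effects such as shadows and reflections) that can be alpha-composited back into the original frame. A minimal sketch of that compositing step, assuming float RGBA layers ordered back to front (the function name and array shapes here are illustrative, not part of this repository's API):

```python
import numpy as np

def composite_layers(layers):
    """Back-to-front alpha compositing of RGBA video layers.

    layers: list of float arrays of shape (H, W, 4) with values in [0, 1],
            ordered from background to foreground.
    Returns an (H, W, 3) RGB frame.
    """
    h, w, _ = layers[0].shape
    out = np.zeros((h, w, 3), dtype=np.float64)
    for layer in layers:
        rgb, alpha = layer[..., :3], layer[..., 3:4]
        # "over" operator: the layer's color covers what is already composited
        out = alpha * rgb + (1.0 - alpha) * out
    return out
```

Applied per frame, a faithful decomposition should reproduce the input video when all layers are composited.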



✅ TODO List

We are working on organizing the code. The following items will be released:

  • Release arXiv paper.
  • Release Project Page.
  • Release inference code (Demo).
  • Release pretrained models (Checkpoints).
  • Release training scripts and data preparation guidelines.

🚀 Getting Started (Coming Soon)

The code is currently being organized and will be released upon acceptance. Please stay tuned!

📧 Contact

If you have any questions, please feel free to email 18281128hyh@gmail.com.
