
AnyRecon: Arbitrary-View 3D Reconstruction with Video Diffusion Model

TODO List

  • Upload sparse attention weights.

🛠️ Environment Setup

1. Clone Repository and Setup Environment

git clone https://github.com/OpenImagingLab/AnyRecon.git
cd AnyRecon
conda create -n anyrecon python=3.10 -y
conda activate anyrecon
pip install torch==2.4.1 torchvision==0.19.1 torchaudio==2.4.1 --index-url https://download.pytorch.org/whl/cu118
pip install -r requirements.txt
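After installation, a quick sanity check that the core packages actually resolve can save a failed first run. The sketch below is not part of AnyRecon; it only assumes the package names installed above.

```python
import importlib.util

def check_packages(names):
    """Return the subset of package names that cannot be imported."""
    return [n for n in names if importlib.util.find_spec(n) is None]

# Core packages installed in the steps above
missing = check_packages(["torch", "torchvision", "torchaudio"])
if missing:
    print("Missing packages:", ", ".join(missing))
else:
    print("All core packages found.")
```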

2. Download Models

AnyRecon relies on specific pre-trained weights. Please download the following and place them in the ./checkpoints folder.

  • Base Video Diffusion Model
  • AnyRecon LoRA weights [download]
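Before running inference, you can confirm the expected files are in place. The sketch below checks only for full_attention.ckpt (the LoRA weight referenced in the inference command below); the required list and checkpoint filenames are assumptions — adjust them to whatever the released weights are actually named.

```python
from pathlib import Path

def verify_checkpoints(ckpt_dir="checkpoints", required=("full_attention.ckpt",)):
    """Return the required checkpoint files missing from ckpt_dir."""
    root = Path(ckpt_dir)
    return [name for name in required if not (root / name).exists()]

missing = verify_checkpoints()
if missing:
    print("Missing checkpoints:", ", ".join(missing))
```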

🚀 Quick Start

Inference

You can run inference using the provided test.sh script:

bash test.sh

Or run the Python script directly:

python run_AnyRecon.py \
    --root_dir example/valley \
    --output_dir example/valley \
    --lora_path full_attention.ckpt
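If you have several scene folders, the same invocation can be scripted. This is a hypothetical batch loop, not part of the repo: it assumes each subdirectory of example/ is a scene laid out like example/valley, and simply rebuilds the command above per folder (outputs are written back into each scene directory, matching the example).

```python
import subprocess
from pathlib import Path

def build_command(scene_dir, lora_path="full_attention.ckpt"):
    """Assemble the run_AnyRecon.py call for one scene folder."""
    return [
        "python", "run_AnyRecon.py",
        "--root_dir", str(scene_dir),
        "--output_dir", str(scene_dir),
        "--lora_path", lora_path,
    ]

# Run inference on every scene folder under example/
if Path("example").is_dir():
    for scene in sorted(p for p in Path("example").iterdir() if p.is_dir()):
        subprocess.run(build_command(scene), check=True)
```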

💗 Acknowledgments

Thanks to these great repositories: Wan2.1 and DiffSynth-Studio.

🔗 Citation

If you find our work helpful, please cite it:

@article{
    
}