CzzzzH/FreeArt3D


FreeArt3D: Training-Free Articulated Object Generation using 3D Diffusion

SIGGRAPH Asia 2025

arXiv | Project Page

Official Implementation for FreeArt3D: Training-Free Articulated Object Generation using 3D Diffusion

🔔 Updates

[10/29/2025] We released our code, dataset and results!

🚧 TODO List

  • Release test data on Objaverse dataset

🚀 Quick Start

  1. Setup the environment

    conda create -n fa3d python=3.10
    conda activate fa3d
    bash setup.sh

    Note: If you need a different torch or cudatoolkit version (to build other libraries correctly), please also install a compatible version of kaolin

  2. Download the GIM-DKM checkpoint from Google Drive and put it in gim/weights

  3. Generate an articulated cabinet from the input images in examples/cabinet

    python run_two_parts.py

Then check outputs/cabinet/sds_output/part_meshes and outputs/cabinet/sds_output/states for the articulated mesh results!

You can also check outputs/cabinet/output.urdf for the URDF-format output; we provide a script to parse it in pipelines/urdf.py
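Independently of pipelines/urdf.py, the exported URDF can be inspected with standard XML tooling. A minimal sketch using Python's xml.etree, run here on a hypothetical single-joint URDF in the same spirit as the pipeline's output (the actual file's link/joint names will differ):

```python
import xml.etree.ElementTree as ET

# Hypothetical URDF snippet; the real outputs/cabinet/output.urdf will differ
URDF = """<robot name="cabinet">
  <link name="base"/>
  <link name="door"/>
  <joint name="door_joint" type="revolute">
    <parent link="base"/>
    <child link="door"/>
    <origin xyz="0.3 0.0 0.5"/>
    <axis xyz="0 0 1"/>
    <limit lower="0.0" upper="1.57"/>
  </joint>
</robot>"""

def parse_joints(urdf_text):
    """Return a dict per joint: name, type, parent/child links, and axis."""
    root = ET.fromstring(urdf_text)
    joints = []
    for j in root.iter("joint"):
        axis = j.find("axis")
        joints.append({
            "name": j.get("name"),
            "type": j.get("type"),
            "parent": j.find("parent").get("link"),
            "child": j.find("child").get("link"),
            "axis": tuple(float(v) for v in axis.get("xyz").split())
                    if axis is not None else None,
        })
    return joints

joints = parse_joints(URDF)
print(joints[0]["type"])  # revolute
```

For the real output, replace the inline string with `ET.parse("outputs/cabinet/output.urdf").getroot()`.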

  4. Try other examples by simply changing the input directory and the joint type.

    python run_two_parts.py --input_dir examples/cabinet2 --joint_type revolute
    python run_two_parts.py --input_dir examples/cabinet3

💻 PartNet-Mobility Articulation

  1. Download the preprocessed PartNet-Mobility dataset from Google Drive (144 objects across 12 categories; the objects have been rendered with disks) and put it in the datasets directory. The folder structure should be

    FreeArt3D
    |--datasets
        |-- PartNet
            |-- 100214 
            └-- ...
  2. Then run the same script with the specified input directory and the --partnet option

    python run_two_parts.py --input_dir datasets/PartNet/100214 --partnet # You can use any other object id included in the dataset 
  3. (Optional) Evaluate the metrics of the results

    We provide a script to evaluate several metrics for the PartNet-Mobility results. Note that this is a general evaluation script, so you can also use it to evaluate other objects if you have ground truth for them (simply change the test_id)

    python evaluate_partnet.py --test_id 100214
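The exact metric definitions live in evaluate_partnet.py. Purely as an illustration of one metric commonly reported for articulated objects (not necessarily the script's definition), the angular error between a predicted and a ground-truth joint axis can be computed while ignoring the sign of the axis direction:

```python
import math

def axis_angle_error_deg(pred, gt):
    """Angular error in degrees between two joint axes.
    Uses abs(dot) because a joint axis has no canonical direction."""
    dot = sum(p * g for p, g in zip(pred, gt))
    norm = math.sqrt(sum(p * p for p in pred)) * math.sqrt(sum(g * g for g in gt))
    cos = min(1.0, abs(dot) / norm)  # clamp against floating-point drift
    return math.degrees(math.acos(cos))

print(round(axis_angle_error_deg((0, 0, 1), (0, 1, 1)), 1))  # 45.0
```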
  4. (Optional) Preprocess the PartNet-Mobility dataset

    If you want to test our method on objects not in our preprocessed dataset, you can use the following scripts to process meshes from the original PartNet-Mobility dataset. Note that you need to manually filter the joints you want to use; check configs/partnet_target.json and partnet.json for the format. Here are the steps to preprocess and render the objects according to the object ids in partnet.json

    python preprocess_partnet_joint.py
    blenderproc run preprocess_partnet_mesh.py
    python render_partnet_mesh.py

    The scripts will generate .glb meshes at different qpos in datasets/PartNet/{$test_id}/gt_mesh and renderings with disks in datasets/PartNet/{$test_id}, matching the format of the downloaded preprocessed dataset.

  5. (Optional) Download the PartNet-Mobility results

    We provide the PartNet-Mobility results produced on our end with the latest version of the code, for reference. The results are reproducible with the code, but you might see minor differences: we fix the random seed, but some CUDA implementations (e.g. tiny-cuda-nn) are non-deterministic

🚪 Multi-joint Articulation

  1. Download the processed multi-joint dataset from Google Drive (the objects have been rendered with disks for each joint) and put it in the datasets directory. The folder structure should be

    FreeArt3D
    |--datasets
        |-- multi-joint
            |-- 46180
            |-- ...
            └-- meta.json

    Note: This small dataset includes objects from both PartNet-Mobility and Objaverse. A meta file records the types of joints for each object.
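The meta file can be read with Python's json stdlib. A minimal sketch assuming a hypothetical schema that maps object ids to their per-joint types; check the downloaded meta.json for the actual schema:

```python
import json

# Hypothetical meta.json contents; the real schema may differ
meta_text = '{"46180": {"joint_types": ["revolute", "prismatic"]}}'
meta = json.loads(meta_text)

for obj_id, info in meta.items():
    # Each object lists one joint type per articulated part
    print(obj_id, info["joint_types"])
```

For the real file, replace the inline string with `json.load(open("datasets/multi-joint/meta.json"))`.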

  2. Run the script for multi-joint articulation

    python run_multi_parts.py --input_dir datasets/multi_joint/46180 # You can use any other object id included in the dataset

🌟 Use your own images

If you want to use your own images, you need to segment them first using a tool such as SAM; its web demo is a convenient option.

After segmentation, if your image was not segmented together with a "carpet", you can manually add a virtual disk with the GUI tool we provide

python add_disk.py --input_dir examples/cabinet3_no_disk

This tool helps you add a virtual disk to every input image efficiently. It also supports rescaling the objects to a suitable size for better articulation.
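Conceptually, the virtual disk is just a filled circular region placed under the object in image space. A minimal sketch rasterizing such a disk mask in pure Python, purely for intuition (the actual add_disk.py GUI may work differently, e.g. rendering a 3D disk):

```python
def disk_mask(h, w, cy, cx, r):
    """Binary h-by-w mask with a filled disk of radius r centered at (cy, cx)."""
    return [[1 if (y - cy) ** 2 + (x - cx) ** 2 <= r * r else 0
             for x in range(w)] for y in range(h)]

mask = disk_mask(8, 8, 4, 4, 3)
print(sum(sum(row) for row in mask))  # number of pixels covered by the disk
```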

🤗 Demo

We have deployed a live demo on Hugging Face Spaces. However, due to the long running time of our method, we also recommend running the demo on your local machine:

python app.py

🍀 Acknowledgement

We thank the following repositories, from which we borrow code:

TRELLIS: https://github.com/microsoft/TRELLIS

GIM: https://github.com/xuelunshen/gim

📜 Citation

If you find this repository useful in your project, please consider citing our work :)

@InProceedings{chen2025freeart3d,
  title = {FreeArt3D: Training-Free Articulated Object Generation using 3D Diffusion},
  author = {Chen, Chuhao and Liu, Isabella and Wei, Xinyue and Su, Hao and Liu, Minghua},
  booktitle = {SIGGRAPH Asia 2025 Conference Papers},
  year = {2025}
}
