# Official Implementation for FreeArt3D: Training-Free Articulated Object Generation using 3D Diffusion
**[10/29/2025]** We released our code, dataset, and results!

- Released test data on the Objaverse dataset
- Set up the environment:

  ```bash
  conda create -n fa3d python=3.10
  conda activate fa3d
  bash setup.sh
  ```

  Note: if you need a different torch or cudatoolkit version (to correctly build other libraries), please also install a compatible version of kaolin.
- Download the GIM-DKM checkpoint from Google Drive and put it in `gim/weights`.
- Generate the articulated cabinet from the input images in `examples/cabinet`:

  ```bash
  python run_two_parts.py
  ```

  Then check `outputs/cabinet/sds_output/part_meshes` and `outputs/cabinet/sds_output/states` for the articulated mesh results!
  You can also check `outputs/cabinet/output.urdf` for the URDF-format output; we provide a script to parse it in `pipelines/urdf.py`.
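The contents of `pipelines/urdf.py` are not shown here, but since URDF is plain XML, a minimal sketch of reading joint information with only the Python standard library might look like this (the `load_joints` helper and the sample `<robot>` snippet are illustrative, not part of the repository):

```python
# Illustrative URDF-reading sketch using only the standard library.
# This is NOT the pipelines/urdf.py script itself, just an example of
# how the generated output.urdf could be inspected.
import xml.etree.ElementTree as ET

def load_joints(urdf_text):
    """Return (name, type, axis) tuples for every joint in a URDF string."""
    root = ET.fromstring(urdf_text)
    joints = []
    for joint in root.findall("joint"):
        axis = joint.find("axis")
        axis_xyz = axis.get("xyz") if axis is not None else None
        joints.append((joint.get("name"), joint.get("type"), axis_xyz))
    return joints

# Hypothetical two-part object, similar in spirit to the cabinet example.
example = """
<robot name="cabinet">
  <link name="base"/>
  <link name="door"/>
  <joint name="door_joint" type="revolute">
    <parent link="base"/>
    <child link="door"/>
    <axis xyz="0 0 1"/>
  </joint>
</robot>
"""

print(load_joints(example))  # [('door_joint', 'revolute', '0 0 1')]
```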
- Try other examples by simply changing the input directory and the joint type:

  ```bash
  python run_two_parts.py --input_dir examples/cabinet2 --joint_type revolute
  python run_two_parts.py --input_dir examples/cabinet3
  ```
- Download the preprocessed PartNet-Mobility dataset from Google Drive (144 objects in 12 categories; the objects have been rendered with disks) and put it in the `datasets` directory. The folder structure should be:

  ```
  FreeArt3D
  |-- datasets
      |-- PartNet
          |-- 100214
          └-- ...
  ```
- Then run the same script with the input directory and the PartNet option specified:

  ```bash
  python run_two_parts.py --input_dir datasets/PartNet/100214 --partnet
  # You can use any other object id included in the dataset
  ```
- (Optional) Evaluate the metrics of the results.

  We provide a script to evaluate different metrics for the PartNet-Mobility results. Note that this is a general evaluation script, so you can also use it to evaluate other objects if you have ground truth for them (simply change the `test_id`):

  ```bash
  python evaluate_partnet.py --test_id 100214
  ```
- (Optional) Preprocess the PartNet-Mobility dataset.

  If you want to test our method on objects not in our processed dataset, you can use the following scripts to process meshes from the original PartNet-Mobility dataset. Note that you need to manually filter out the joints you want to use; check `configs/partnet_target.json` and `partnet.json` for the format. Here are the steps to preprocess and render the objects according to the object ids in `partnet.json`:

  ```bash
  python preprocess_partnet_joint.py
  blenderproc run preprocess_partnet_mesh.py
  python render_partnet_mesh.py
  ```
  The scripts will generate .glb meshes at different qpos in `datasets/PartNet/{$test_id}/gt_mesh`, and the renderings with disks in `datasets/PartNet/{$test_id}`, matching the ones in the downloaded preprocessed dataset.
- (Optional) Download the PartNet-Mobility results.

  For reference, we provide the PartNet-Mobility results produced on our end with the latest version of the code. The results are reproducible with the code, but you might see minor differences: we have fixed the random seed, but some CUDA implementations (e.g., tiny-cuda-nn) are non-deterministic.
- Download the processed multi-joint dataset from Google Drive (the objects have been rendered with disks for each joint) and put it in the `datasets` directory. The folder structure should be:

  ```
  FreeArt3D
  |-- datasets
      |-- multi-joint
          |-- 46180
          |-- ...
          └-- meta.json
  ```

  Note: this small dataset includes objects from both PartNet-Mobility and Objaverse. A meta file records the joint types for each object.
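The exact schema of `meta.json` is not documented here; assuming it maps each object id to its list of joint types, reading it could look like this hedged sketch (the schema and the `joint_types` helper are assumptions — check the downloaded `meta.json` for the real format):

```python
# Hypothetical reader for the multi-joint meta file.
# ASSUMPTION: meta.json maps object ids to lists of joint types,
# e.g. {"46180": ["revolute", "prismatic"]}. Verify against the real file.
import json

def joint_types(meta, object_id):
    """meta: the parsed meta.json dict; returns one object's joint types."""
    return meta[object_id]

# Illustrative content only; the real file ships with the dataset.
meta = json.loads('{"46180": ["revolute", "prismatic"]}')
print(joint_types(meta, "46180"))  # ['revolute', 'prismatic']
```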
- Run the script for multi-joint articulation:

  ```bash
  python run_multi_parts.py --input_dir datasets/multi_joint/46180
  # You can use any other object id included in the dataset
  ```
If you want to use your own images, you need to segment them first using tools such as SAM; there are web tools you can refer to for this.

After segmentation, if your image wasn't segmented with a "carpet", you can manually add a virtual disk with a GUI tool we provide:

```bash
python add_disk.py --input_dir examples/cabinet3_no_disk
```

This tool helps you add a virtual disk to every input image efficiently. It also supports rescaling objects to a suitable size for better articulation.
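Segmentation tools like SAM typically output a binary mask; one common way to turn that into a segmented input image is to attach the mask as an alpha channel so the background becomes transparent. The sketch below is a generic illustration of that step (the `apply_mask` helper and the assumption that the pipeline accepts RGBA inputs are ours — check the images under `examples/` for the exact expected format):

```python
# Generic mask-to-RGBA sketch, not part of the FreeArt3D codebase.
# ASSUMPTION: segmented inputs are RGBA images with transparent background;
# verify against the provided example images.
import numpy as np

def apply_mask(rgb, mask):
    """rgb: (H, W, 3) uint8 image; mask: (H, W) bool. Returns (H, W, 4) RGBA
    where pixels outside the mask are fully transparent."""
    h, w, _ = rgb.shape
    rgba = np.zeros((h, w, 4), dtype=np.uint8)
    rgba[..., :3] = rgb
    rgba[..., 3] = np.where(mask, 255, 0)
    return rgba

# Tiny 2x2 demo: a flat gray image with a diagonal mask.
rgb = np.full((2, 2, 3), 128, dtype=np.uint8)
mask = np.array([[True, False], [False, True]])
out = apply_mask(rgb, mask)
print(out[..., 3])  # alpha is 255 where mask is True, 0 elsewhere
```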
We have deployed a live demo on Hugging Face Spaces. However, due to our method's long running time, it is also recommended to run the demo on your local machine:

```bash
python app.py
```

We acknowledge the following repositories, from which we borrow code:
- TRELLIS: https://github.com/microsoft/TRELLIS
- GIM: https://github.com/xuelunshen/gim
If you find this repository useful in your project, please consider citing our work :)
```bibtex
@InProceedings{chen2025freeart3d,
    title     = {FreeArt3D: Training-Free Articulated Object Generation using 3D Diffusion},
    author    = {Chen, Chuhao and Liu, Isabella and Wei, Xinyue and Su, Hao and Liu, Minghua},
    booktitle = {SIGGRAPH Asia 2025 Conference Papers},
    year      = {2025}
}
```

