This repository contains the official implementation of the paper:
"Is Graph Unlearning Ready for Practice? A Benchmark on Efficiency, Utility, and Forgetting"
We introduce a unified benchmark framework that evaluates multiple graph unlearning techniques across diverse datasets, measuring efficiency, utility, and forgetting.
This benchmark provides:
- A standardized evaluation of graph unlearning methods.
- Comparisons on time, memory, accuracy, and forgetting behavior.
- Support for multiple datasets and GNN architectures.
- Python: 3.8.0
- CUDA: Ensure the CUDA version is compatible with your PyTorch installation.
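A quick way to check the compatibility mentioned above is the sketch below (a convenience snippet, not part of the repo); it reports which CUDA build your installed `torch` was compiled against, or a hint if `torch` is missing:

```python
# Compatibility check: report torch's CUDA build, or note that torch is absent.
import importlib.util

def torch_cuda_summary() -> str:
    if importlib.util.find_spec("torch") is None:
        return "torch not installed; install it as shown below"
    import torch
    return (f"torch {torch.__version__}, built for CUDA {torch.version.cuda}, "
            f"CUDA available: {torch.cuda.is_available()}")

print(torch_cuda_summary())
```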
```bash
git clone <REPO-URL>
cd GNN_Unlearning
pip install -e .
```

Example for CUDA 12.1:

```bash
pip install torch==2.2.1 torchvision==0.17.1 torchaudio --index-url https://download.pytorch.org/whl/cu121
```

Required versions:

```
torch==2.2.1
torchvision==0.17.1
```
Example for CUDA 12.x:
```bash
pip install cupy-cuda12x
```

For other CUDA versions, refer to the official CuPy Installation Guide.
```bash
pip install -r requirements.txt
```

If you encounter build errors, install the precompiled wheels from the PyTorch Geometric Installation Guide.
Example for CUDA 12.1:
```bash
pip install torch-scatter -f https://data.pyg.org/whl/torch-2.2.1+cu121.html
pip install torch-sparse -f https://data.pyg.org/whl/torch-2.2.1+cu121.html
pip install torch-geometric
```

For other CUDA versions, replace `cu121` with your version (e.g., `cu118`).
To unlearn a model, run `unlearn_model.sh` or use the following command:

```bash
python GULib-master/main.py --dataset_name cora --base_model GCN --unlearning_methods MEGU --attack False --num_epochs 100 --batch_size 64 --unlearn_ratio 0.1 --num_runs 1 --cal_mem True
```

This command will train, unlearn, and save the unlearned model.
| Argument | Description | Example |
|---|---|---|
| `--cuda <device>` | GPU device to use | `--cuda 0` |
| `--dataset_name <name>` | Graph dataset name | `--dataset_name cora` |
| `--base_model <model>` | Base GNN model architecture | `GCN`, `GAT`, `GIN` |
| `--unlearning_methods <method>` | Unlearning method | `MEGU`, `GIF`, `GraphEraser`, `GUIDE`, `GNNDelete`, `IDEA`, `Projector`, `ScaleGun`, `CGU` |
| `--unlearn_ratio <value>` | Fraction of data to unlearn | `0.1` |
| `--num_epochs <N>` | Number of training epochs | `100` |
| `--batch_size <N>` | Batch size | `64` |
| `--attack <True/False>` | Enable membership inference attack | `True` |
| `--cal_mem <True/False>` | Record time and memory stats | `True` |
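The benchmark's actual argument handling lives in `main.py`; as a rough illustration only (the names mirror the flags above, but the defaults and parsing details are assumptions), the flags could be handled with `argparse` like this. Note that flags written as `--attack True` pass the literal strings `"True"`/`"False"`, so a string-to-bool converter is needed; plain `type=bool` would treat `"False"` as truthy:

```python
# Hypothetical sketch of parsing the CLI flags listed above.
import argparse

def str2bool(v: str) -> bool:
    # Accept common truthy spellings; everything else is False.
    return v.lower() in ("true", "1", "yes")

parser = argparse.ArgumentParser()
parser.add_argument("--dataset_name", default="cora")
parser.add_argument("--base_model", default="GCN")
parser.add_argument("--unlearning_methods", default="MEGU")
parser.add_argument("--unlearn_ratio", type=float, default=0.1)
parser.add_argument("--num_epochs", type=int, default=100)
parser.add_argument("--batch_size", type=int, default=64)
parser.add_argument("--attack", type=str2bool, default=False)
parser.add_argument("--cal_mem", type=str2bool, default=False)

args = parser.parse_args(["--unlearn_ratio", "0.1", "--attack", "False"])
print(args.attack)         # False
print(args.unlearn_ratio)  # 0.1
```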
To record efficiency breakdowns (time and memory usage), pass `--cal_mem True` to `main.py`. Results are stored in `efficiency_stats.txt`.
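As a minimal sketch of what measuring such stats involves (not the repo's actual implementation, which also covers GPU memory), wall-clock time and peak host memory for a step can be captured with `time.perf_counter` and `tracemalloc`:

```python
# Measure wall-clock time and peak Python heap allocation of a callable.
import time
import tracemalloc

def measure(fn, *args):
    tracemalloc.start()
    t0 = time.perf_counter()
    result = fn(*args)
    elapsed = time.perf_counter() - t0
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return result, elapsed, peak

result, secs, peak_bytes = measure(sum, range(100_000))
print(f"time={secs:.4f}s peak_mem={peak_bytes} bytes")
```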
To get the utility stats, run:

```bash
bash utility_stats.sh
```

This computes:
- Accuracy
- Fidelity
- Logit Similarity
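The three metrics above can be sketched as follows, under common definitions (the repo's exact formulas may differ): accuracy against ground-truth labels, fidelity as prediction agreement with a model retrained from scratch, and logit similarity as mean cosine similarity between logit vectors:

```python
# Illustrative utility metrics for an unlearned model vs. a retrained model.
import numpy as np

def utility_stats(unlearned_logits, retrained_logits, labels):
    pred_u = unlearned_logits.argmax(axis=1)
    pred_r = retrained_logits.argmax(axis=1)
    accuracy = (pred_u == labels).mean()          # correctness on true labels
    fidelity = (pred_u == pred_r).mean()          # agreement with retrained model
    cos = np.sum(unlearned_logits * retrained_logits, axis=1) / (
        np.linalg.norm(unlearned_logits, axis=1)
        * np.linalg.norm(retrained_logits, axis=1)
    )
    return accuracy, fidelity, cos.mean()

logits_u = np.array([[2.0, 0.1], [0.2, 1.5], [1.0, 0.9]])
logits_r = np.array([[1.8, 0.2], [0.1, 1.4], [0.8, 1.1]])
labels = np.array([0, 1, 0])
acc, fid, sim = utility_stats(logits_u, logits_r, labels)
print(acc, fid, sim)
```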
To get weight comparison results, run:

```bash
python GULib-master/Weight_comparison.py
```

To evaluate forgetting performance with a given attack, pass the `attack_type` argument to `evaluate_unlearning.py`:

```bash
--attack_type <Attack_Name>
```

where `Attack_Name` is one of `MIattack`, `TrendAttack`, `MRattack`.
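To convey the intuition behind such attacks, here is a toy confidence-threshold membership-inference check (a deliberate simplification; the benchmark's `MIattack`, `TrendAttack`, and `MRattack` are more sophisticated). The idea: if the unlearned model remains more confident on deleted nodes than on genuinely unseen nodes, forgetting is incomplete:

```python
# Toy membership-inference check: classify a node as a "member" if the
# model's confidence on it exceeds a threshold.
import numpy as np

def mi_attack_accuracy(conf_forgotten, conf_unseen, threshold=0.5):
    tp = (conf_forgotten > threshold).sum()  # forgotten nodes still flagged as members
    tn = (conf_unseen <= threshold).sum()    # unseen nodes flagged as non-members
    return (tp + tn) / (len(conf_forgotten) + len(conf_unseen))

# Accuracy near 0.5 means the attack cannot tell forgotten from unseen nodes
# (good forgetting); near 1.0 means the deleted data was not forgotten.
forgotten = np.array([0.9, 0.8, 0.95])
unseen = np.array([0.4, 0.3, 0.45])
print(mi_attack_accuracy(forgotten, unseen))  # 1.0
```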
Note:
- Utility results for GraphEraser and GUIDE are stored automatically during unlearning, in `GraphEraser_utility_stats.txt` and `GUIDE_utility_stats.txt` respectively.
- To obtain forgetting results for these two methods, pass the `attack_type` argument to `main.py`.
- Currently, results for Cognac and ETR are obtained by following their open-source implementations.
Supported graph datasets:
- Cora
- Citeseer
- ogbn-arxiv
- Amazon-ratings
- Roman-empire
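For a sense of scale, the `--unlearn_ratio` flag determines how many elements are marked for deletion. A quick sketch for node unlearning on Cora (which has 2,708 nodes; how the benchmark rounds or samples the set is an assumption here):

```python
# Size of the unlearn set for Cora at the default ratio of 0.1.
num_nodes = 2708        # nodes in Cora
unlearn_ratio = 0.1     # value of --unlearn_ratio
num_unlearn = int(num_nodes * unlearn_ratio)
print(num_unlearn)  # 270
```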
Our benchmark currently supports:
- MEGU
- GIF
- IDEA
- GraphEraser
- GUIDE
- GNNDelete
- Projector
- ScaleGun
- CGU
- Cognac
- ETR
For questions, issues, or contributions, please open a GitHub issue or contact the authors.