
SafeLawBench: Towards Safe Alignment of Large Language Models

ACL 2025 (Findings)

Authors

Chuxue Cao, Han Zhu, Jiaming Ji, Qichao Sun, Zhenghao Zhu, Yinyu Wu, Josef Dai, Yaodong Yang, Sirui Han*, Yike Guo*

Abstract

Abstract coming soon. Please refer to the paper for details.

Links

Paper (arXiv): https://arxiv.org/abs/2506.06636
Code: https://github.com/chuxuecao/SafeLawBench
ACL Anthology: https://aclanthology.org/2025.findings-acl.721/
Author Homepage: https://siruihan.com/
Google Scholar: Sirui Han

Getting Started

The official code implementation is available at https://github.com/chuxuecao/SafeLawBench.

git clone https://github.com/chuxuecao/SafeLawBench
cd SafeLawBench
pip install -r requirements.txt

Please refer to the official repository for detailed instructions on setup, evaluation, and benchmark usage.

Citation

If you find this work useful, please cite our paper:

@inproceedings{cao2025safelawbench,
  title={SafeLawBench: Towards Safe Alignment of Large Language Models},
  author={Cao, Chuxue and Zhu, Han and Ji, Jiaming and Sun, Qichao and Zhu, Zhenghao and Wu, Yinyu and Dai, Josef and Yang, Yaodong and Han, Sirui and Guo, Yike},
  booktitle={Findings of the Association for Computational Linguistics: ACL 2025},
  year={2025}
}

License

This project is released under the MIT License.
