
CLADD

The official source code for RAG-Enhanced Collaborative LLM Agents for Drug Discovery, accepted at AAAI 2026.

Overview

Recent advances in large language models (LLMs) have shown great potential to accelerate drug discovery. However, the specialized nature of biochemical data often necessitates costly domain-specific fine-tuning, posing major challenges. First, it hinders the application of more flexible general-purpose LLMs for cutting-edge drug discovery tasks. More importantly, it limits the rapid integration of the vast amounts of scientific data continuously generated through experiments and research. Compounding these challenges is the fact that real-world scientific questions are typically complex and open-ended, requiring reasoning beyond pattern matching or static knowledge retrieval. To address these challenges, we propose CLADD, a retrieval-augmented generation (RAG)-empowered agentic system tailored to drug discovery tasks. Through the collaboration of multiple LLM agents, CLADD dynamically retrieves information from biomedical knowledge bases, contextualizes query molecules, and integrates relevant evidence to generate responses - all without the need for domain-specific fine-tuning. Crucially, we tackle key obstacles in applying RAG workflows to biochemical data, including data heterogeneity, ambiguity, and multi-source integration. We demonstrate the flexibility and effectiveness of this framework across a variety of drug discovery tasks, showing that it outperforms general-purpose and domain-specific LLMs as well as traditional deep learning approaches.

Requirements

- Python 3.11
- PyTorch 2.5.1
- Torch-Geometric 2.6.1
- RDKit 2023.9.6
- LangGraph 0.2.59

Please use the CLADD.yaml file to set up the environment, e.g. with conda:

conda env create -f CLADD.yaml

External Database Setup

Annotation Database

Please follow the preprocessing instructions from previous work to extract the data from the PubChem database.

After preparing the data, place it in the ./data/PubChemSTM/raw/ directory. The resulting structure should look like this:

data/
└── PubChemSTM
    └── raw
        ├── CID2name_raw.json
        ├── CID2name.json
        ├── CID2text_raw.json
        ├── CID2text.json
        ├── CID2SMILES.csv
        └── molecules.sdf

Then, run ./preprocess_pubchem/preprocess.py to create the annotation database.
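Conceptually, the annotation database joins the CID-keyed SMILES table with the CID-keyed text annotations from the files above. A minimal sketch of that kind of merge (the toy data and the exact merge logic are assumptions, not the actual contents of preprocess.py):

```python
import csv
import io

# Toy stand-ins for CID2SMILES.csv and CID2text.json (the real files live
# under data/PubChemSTM/raw/); values here are illustrative only.
cid2smiles_csv = "CID,SMILES\n2244,CC(=O)Oc1ccccc1C(=O)O\n702,CCO\n"
cid2text = {"2244": ["Aspirin is a salicylate ..."], "702": ["Ethanol is ..."]}

def build_annotation_db(csv_text, cid2text):
    """Join CID->SMILES with CID->text into a SMILES-keyed annotation map."""
    db = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        texts = cid2text.get(row["CID"])
        if texts:  # keep only molecules that actually have annotations
            db[row["SMILES"]] = texts
    return db

db = build_annotation_db(cid2smiles_csv, cid2text)
print(db["CCO"])  # ['Ethanol is ...']
```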

Knowledge Graph

[Step 1] Run ./preprocess_primekg/find2hop.py to find 2-hop drugs in the knowledge graph.

[Step 2] Run ./preprocess_primekg/kg2smiles.py to obtain the SMILES representations of the drugs in the knowledge graph.

External Tool Setup

To generate the MolT5 captions, run ./captioner/molt5.py.

Pre-trained GNNs for Anchor Drug Retrieval

In the planning team, we propose to utilize 3D geometrically pre-trained GNNs to retrieve molecules that are highly structurally similar to the query molecule. We utilize the pre-trained GNNs from Pre-training Molecular Graph Representation with 3D Geometry. You can download the model weights from the original authors' GitHub repository. The downloaded weights should be organized as follows:

pretrained_weights/
└── GraphMVP
    ├── GraphMVP_C
    │   └── model.pth
    └── GraphMVP_G
        ├── model_baseline.pth
        └── model.pth
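Retrieval with these embeddings reduces to a nearest-neighbor search: embed the query molecule with the pre-trained GNN, then rank the database by similarity. A minimal cosine-similarity sketch (the embedding vectors below are made up; in the actual pipeline they would come from the GraphMVP weights above):

```python
import math

# Hypothetical GNN embeddings keyed by SMILES (toy 3-d vectors).
embeddings = {
    "CCO": [0.9, 0.1, 0.0],
    "CCN": [0.8, 0.2, 0.1],
    "c1ccccc1": [0.0, 0.1, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def retrieve_anchors(query_vec, db, k=1):
    """Return the k molecules whose embeddings are most similar to the query."""
    ranked = sorted(db, key=lambda smi: cosine(query_vec, db[smi]), reverse=True)
    return ranked[:k]

print(retrieve_anchors([0.85, 0.15, 0.05], embeddings, k=2))
```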

How to run the model

For property-specific molecular captioning tasks, run the following command:

python ./multi_agent/caption_multi_agent.py --dataset bbbp

For drug-target prediction and drug toxicity prediction tasks, run the following command:

python ./multi_agent/multi_agent.py --dataset hERG

Citation

@article{lee2025rag,
  title={RAG-Enhanced Collaborative LLM Agents for Drug Discovery},
  author={Lee, Namkyeong and De Brouwer, Edward and Hajiramezanali, Ehsan and Biancalani, Tommaso and Park, Chanyoung and Scalia, Gabriele},
  journal={arXiv preprint arXiv:2502.17506},
  year={2025}
}
