A repository for BioNLP (Biomedical Natural Language Processing) classification using BERT models from Hugging Face. It provides scripts for training, evaluating, and running inference with BERT models tailored to BioNLP classification tasks.
- Abstract Dataset: Contains 53,093 paper abstracts. Download here
- Manually Annotated Training Dataset: Contains 605 manually labeled papers. Download here
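
The training and evaluation files used in the commands below are JSONL (one JSON object per line). As a minimal sketch of how to load and inspect such a file; note that the `text` and `label` field names here are assumptions, not the repository's documented schema, so check them against your own data:

```python
import io
import json

def load_jsonl(lines):
    """Parse one JSON object per line, skipping blank lines."""
    return [json.loads(line) for line in lines if line.strip()]

# Two made-up records standing in for train.jsonl
# (the "text"/"label" fields are assumed, not guaranteed by the repo):
sample = io.StringIO(
    '{"text": "TP53 mutations in lung cancer ...", "label": 1}\n'
    '{"text": "A survey of hospital staffing ...", "label": 0}\n'
)
records = load_jsonl(sample)
print(len(records), sorted(records[0]))
```

In practice you would pass an open file handle (`open("train.jsonl", encoding="utf-8")`) instead of the `StringIO` stand-in.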
Train a BERT model on the BioNLP classification task:

```bash
python train.py \
  --model_name microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext \
  --train_data train.jsonl \
  --eval_data test.jsonl \
  --output_dir pubmedbert_bionlp_classification
```

To train with cross-validation:
```bash
python Bert_train_crossValidation.py \
  --model_name microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext \
  --jsonl_file test.jsonl \
  --eval_data test.jsonl \
  --output_dir cross_validation/ \
  --n_splits 5
```

To evaluate a trained model:
```bash
python eval.py \
  --model_path ./pubmedbert_bionlp_classification/checkpoint-2440 \
  --jsonl_file test.jsonl
```

For evaluation with cross-validation:
```bash
python Bert_eval_crossValidation.py \
  --model_path ./pubmedbert_bionlp_classification/checkpoint-2440 \
  --jsonl_file test.jsonl \
  --output_dir results/ \
  --n_splits 5
```

Run inference using a trained model:
```bash
python inference.py \
  --model_path ./pubmedbert_bionlp_classification/checkpoint-2440 \
  --input_jsonl paper_data.jsonl \
  --output_jsonl predictions.jsonl
```

This repository works with biomedical BERT models available on Hugging Face, including:
- microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext
- microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract
- michiyasunaga/BioLinkBERT-base
- michiyasunaga/BioLinkBERT-large
- Other compatible BERT-based models
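
After running inference with any of these models, the predictions can be compared against gold labels in a few lines of Python. This is only a sketch: the `label` and `prediction` field names are assumptions about the JSONL schema, not something the repository guarantees:

```python
import json

def accuracy(gold_lines, pred_lines):
    """Fraction of records where the predicted label matches the gold label.

    Assumes gold records carry a "label" field and prediction records a
    "prediction" field (hypothetical names -- adjust to your files).
    """
    gold = [json.loads(line)["label"] for line in gold_lines if line.strip()]
    pred = [json.loads(line)["prediction"] for line in pred_lines if line.strip()]
    assert len(gold) == len(pred), "record count mismatch between files"
    return sum(g == p for g, p in zip(gold, pred)) / len(gold)

# Toy records standing in for test.jsonl / predictions.jsonl:
gold = ['{"label": 1}', '{"label": 0}', '{"label": 1}']
pred = ['{"prediction": 1}', '{"prediction": 0}', '{"prediction": 0}']
print(accuracy(gold, pred))  # 2 of 3 correct
```

With real files, read the two JSONL files line by line and pass the line iterators to `accuracy`.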