GPT-2 Fine-tuning Project

This project fine-tunes the GPT-2 model from Hugging Face using the WikiText-2 train dataset and evaluates the perplexity on the WikiText-2 test dataset. The repository includes scripts for training (train.py), testing (test.py), and a PDF report summarizing the project.

Getting Started

Prerequisites

  • Python 3 and pip
  • Git, for cloning the repository

Installation

  1. Clone the repository:

    git clone https://github.com/PhotonTec/GM-hw1.git
    cd GM-hw1
    
  2. Install the required packages:

    pip install -r requirements.txt
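
The contents of requirements.txt are not reproduced here; as an assumption, a Hugging Face fine-tuning project like this one typically depends on at least:

    torch
    transformers
    datasets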

Usage

Training

Run the training script to fine-tune the GPT-2 model:

    python train.py --data_path path/to/wikitext-2/train --output_dir path/to/save/checkpoints

  • --data_path: Path to the WikiText-2 training dataset.
  • --output_dir: Directory to save the fine-tuned model checkpoints.
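
For reference, here is a minimal sketch of how such a fine-tuning script could be written with the Hugging Face Trainer. It is an illustration, not the repository's actual train.py; the base model name ("gpt2"), the sequence length, and the hyperparameters are assumptions.

    # Minimal GPT-2 fine-tuning sketch (illustrative, not the actual train.py).
    import argparse

    from datasets import load_dataset
    from transformers import (
        DataCollatorForLanguageModeling,
        GPT2LMHeadModel,
        GPT2TokenizerFast,
        Trainer,
        TrainingArguments,
    )

    parser = argparse.ArgumentParser()
    parser.add_argument("--data_path", required=True)   # plain-text training file
    parser.add_argument("--output_dir", required=True)  # where checkpoints are saved
    args = parser.parse_args()

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
    model = GPT2LMHeadModel.from_pretrained("gpt2")

    # Tokenize the raw text into sequences of at most 512 tokens and drop the
    # empty lines that WikiText-2 contains.
    dataset = load_dataset("text", data_files={"train": args.data_path})["train"]
    dataset = dataset.map(
        lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
        batched=True,
        remove_columns=["text"],
    ).filter(lambda ex: len(ex["input_ids"]) > 0)

    trainer = Trainer(
        model=model,
        args=TrainingArguments(
            output_dir=args.output_dir,
            num_train_epochs=1,
            per_device_train_batch_size=4,
        ),
        train_dataset=dataset,
        # mlm=False yields causal (next-token) language-modeling labels.
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )
    trainer.train()
    trainer.save_model(args.output_dir)
    tokenizer.save_pretrained(args.output_dir)  # so test.py can reload both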

Testing

Evaluate the perplexity of the fine-tuned model on the WikiText-2 test dataset:

    python test.py --model_path path/to/saved/checkpoints --test_data_path path/to/wikitext-2/test

  • --model_path: Path to the saved fine-tuned model checkpoints.
  • --test_data_path: Path to the WikiText-2 test dataset.
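
Perplexity is the exponential of the mean per-token cross-entropy loss over the test set, so lower is better. A minimal sketch of such an evaluation, again an assumption rather than the repository's actual test.py, might look like this. It scores the test file in non-overlapping 1024-token chunks (GPT-2's context window), which slightly overestimates perplexity compared with a sliding-window evaluation.

    # Minimal perplexity-evaluation sketch (illustrative, not the actual test.py).
    import argparse
    import math

    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    parser = argparse.ArgumentParser()
    parser.add_argument("--model_path", required=True)      # saved checkpoint dir
    parser.add_argument("--test_data_path", required=True)  # plain-text test file
    args = parser.parse_args()

    device = "cuda" if torch.cuda.is_available() else "cpu"
    tokenizer = GPT2TokenizerFast.from_pretrained(args.model_path)
    model = GPT2LMHeadModel.from_pretrained(args.model_path).to(device).eval()

    with open(args.test_data_path, encoding="utf-8") as f:
        ids = tokenizer(f.read(), return_tensors="pt").input_ids.to(device)

    max_len = 1024
    total_nll, total_tokens = 0.0, 0
    with torch.no_grad():
        for begin in range(0, ids.size(1), max_len):
            chunk = ids[:, begin : begin + max_len]
            if chunk.size(1) < 2:  # need at least one next-token prediction
                break
            out = model(chunk, labels=chunk)  # out.loss = mean NLL per predicted token
            total_nll += out.loss.item() * (chunk.size(1) - 1)
            total_tokens += chunk.size(1) - 1

    print(f"Perplexity: {math.exp(total_nll / total_tokens):.2f}")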

Project Structure

  • train.py: Script for fine-tuning the GPT-2 model.
  • test.py: Script for evaluating the perplexity on the test dataset.
  • report.pdf: PDF report summarizing the project.

Results

Experimental results, including the test-set perplexity of the fine-tuned model, are summarized in report.pdf.

Contributing

Feel free to open issues or submit pull requests.

License

This project is licensed under the MIT License.

Acknowledgments

  • This is Homework 1 of the Generative Models course.
  • Author: Xu Tianyi (2100013158)
