A repeatable way to containerise PyTorch models: deploy OpenAI's GPT-2 model, expose it over a Flask API, and finally deploy it to AWS Fargate container hosting using CloudFormation.
First, before anything else, download the pre-trained model weights:

```shell
mkdir models
curl --output models/gpt2-pytorch_model.bin https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-pytorch_model.bin
```

Run the following to get started with your local Python environment:
```shell
python3 -m venv ./venv
source venv/bin/activate
pip install --upgrade pip
pip install -r requirements.txt
```

Then run the Flask server:

```shell
cd deployment
python run_server.py
```

Alternatively, build and run it with Docker Compose:

```shell
docker-compose up --build flask
```

Then go to http://localhost:5000.
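Once the server is up, you can exercise the API from Python as well as from the browser. The sketch below builds a request using only the standard library; the route (the server root) and the JSON field name are assumptions, so check `run_server.py` for the endpoint it actually registers:

```python
import json
import urllib.request

def build_generation_request(prompt, host="http://localhost:5000"):
    """Build a POST request for the local Flask server.

    The URL and the "text" field are assumptions about the API shape;
    adjust them to match the route defined in run_server.py.
    """
    payload = json.dumps({"text": prompt}).encode("utf-8")
    return urllib.request.Request(
        host,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Send it with: urllib.request.urlopen(build_generation_request("Once upon a time"))
```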
When you are done, tear down the containers and their volumes:

```shell
docker-compose down -v
```

First, build and push the container to ECR:

```shell
./container_push.sh
```

Set up the CloudFormation stack:

```shell
./cloudformation_deploy.sh
```

Deploy the stack:
```shell
aws cloudformation create-stack \
  --stack-name "gpt-2-flask" \
  --template-body file://cloudformation/deployment.yaml \
  --parameters file://cloudformation/deployment-params.json \
  --capabilities CAPABILITY_IAM
```
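`create-stack` expects the `--parameters` file to be a JSON list of objects, each with a `ParameterKey` and a `ParameterValue`. A minimal sketch that sanity-checks `cloudformation/deployment-params.json` before deploying (the helper name is invented for illustration):

```python
import json

def check_cfn_params(path):
    """Sanity-check a CloudFormation parameters file before create-stack.

    The file must be a JSON list of objects, each carrying ParameterKey
    and ParameterValue -- the format the AWS CLI expects for
    --parameters file://... inputs.
    """
    with open(path) as fh:
        params = json.load(fh)
    if not isinstance(params, list):
        raise ValueError("parameters file must be a JSON list")
    for entry in params:
        if not isinstance(entry, dict):
            raise ValueError(f"entry {entry!r} is not a JSON object")
        missing = {"ParameterKey", "ParameterValue"} - entry.keys()
        if missing:
            raise ValueError(f"entry {entry!r} is missing {sorted(missing)}")
    return params
```

Running this before `create-stack` turns a malformed parameters file into an immediate, readable error instead of a failed stack event.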