SWEN-356 Course Project
The Angry Chef Chatbot (also known as RamsayAI) engages in cooking-related conversation with its users, answering questions about cooking and giving recipe suggestions. It has been programmed to emulate the personality of the celebrity chef Gordon Ramsay: while it delivers its responses angrily, usually accompanied by insults, it answers questions and gives proper responses to the highest extent it can.
Make sure Python 3 is available on your machine - this API uses version 3.11. If you have multiple versions of Python, you can use a Python version manager such as pyenv.
Similarly, make sure Node is available - the frontend uses Node 18 and npm 9.8.1. For more information on installing these, see their docs.
The following steps outline the additional setup needed to work on this project.
Navigate to the backend directory in your shell and install the dependencies using pip:
pip install -r requirements.txt

Next, install ChatterPy, the machine learning chatbot we use. This is a version of chatterbot with continued maintenance for Python 3.11. Learn more about it here.

pip install git+https://github.com/ShoneGK/ChatterPy

You may also need to download the English language model for spaCy, an additional dependency:

python -m spacy download en_core_web_sm
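As a quick sanity check that ChatterPy and the spaCy model installed correctly, you can try a snippet like the one below in a Python shell. This is only an illustration, and it assumes ChatterPy keeps the classic chatterbot import path and ListTrainer API; it is not part of the project code.

```python
# Hypothetical sanity check -- assumes ChatterPy exposes the classic
# ChatterBot API (ChatBot + ListTrainer); adjust names if the fork differs.
from chatterbot import ChatBot
from chatterbot.trainers import ListTrainer

bot = ChatBot("RamsayAI")  # creates a small local database on first run
trainer = ListTrainer(bot)

# Train on a tiny cooking exchange just to verify everything is wired up.
trainer.train([
    "How do I make scrambled eggs?",
    "Whisk the eggs, cook them low and slow, and season at the end. Simple!",
])

print(bot.get_response("How do I make scrambled eggs?"))
```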
Now you are ready to start the FastAPI server! Run the application using Uvicorn (runs on http://localhost:8000/):

uvicorn main:app --reload

OR

python -m uvicorn main:app --reload

Note - in order for the frontend to work with both Gemini and chatterbot, you need to run both servers in separate windows. To do this, run the commands below (also see the Google Gemini setup instructions below):
uvicorn gemini:app --reload --port 8000
uvicorn main:app --reload --port 8001  # In a separate window

OR

python -m uvicorn gemini:app --reload --port 8000
python -m uvicorn main:app --reload --port 8001  # In a separate window
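For context, the main:app and gemini:app arguments follow Uvicorn's module:attribute convention: each backend file defines its own FastAPI app object, which is why the two servers run on separate ports. The sketch below shows the general shape of such a module; the route name and request model are placeholders, not the project's actual endpoints.

```python
# Illustrative only -- the real main.py / gemini.py define their own routes.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()  # the object Uvicorn loads when you run "uvicorn main:app"

class ChatRequest(BaseModel):
    message: str

@app.post("/chat")
def chat(request: ChatRequest) -> dict:
    # A real implementation would hand request.message to the chatbot here.
    return {"response": f"You call that a question?! ({request.message})"}
```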
Alternatively, if you have all packages installed in the respective frontend/ and backend/ directories, you can run start.py from the application base directory. You must first configure the BASE_URL variable to match the path of the repository on your machine (ex. /Users/dummy/Documents/Github/angry-chef) - then you can run:
python start.py

You can access the Swagger Documentation for the API while running the application at http://localhost:8000/docs
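For reference, a launcher script of this kind typically just spawns the backend servers and the frontend dev server from the configured base path. The sketch below is a simplified illustration under that assumption; the project's actual start.py may differ in detail.

```python
# Simplified sketch of a launcher like start.py -- the real script may differ.
# BASE_URL must point at your local clone of the repository.
import subprocess

BASE_URL = "/Users/dummy/Documents/Github/angry-chef"  # edit for your machine

processes = [
    subprocess.Popen(["uvicorn", "gemini:app", "--reload", "--port", "8000"],
                     cwd=f"{BASE_URL}/backend"),
    subprocess.Popen(["uvicorn", "main:app", "--reload", "--port", "8001"],
                     cwd=f"{BASE_URL}/backend"),
    subprocess.Popen(["npm", "run", "dev"], cwd=f"{BASE_URL}/frontend"),
]

for p in processes:
    p.wait()  # keep the script alive until the servers are stopped
```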
Navigate to the frontend directory in a separate shell and install the dependencies using npm:
npm install

Now you are ready to start the frontend server! Run the frontend using npm (runs on http://localhost:5173/):

npm run dev

While both the chatterbot and Gemini APIs are running, chatterbot will send any recipe requests to the Gemini API to increase the quality of its responses while we continue to train it on more recipes. To disable this feature, deactivate the Gemini API.
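The hand-off works along these lines: when the chatterbot backend sees a recipe request, it forwards the prompt to the Gemini backend over HTTP instead of answering from its own training data, and falls back to its local answer if that server is down. The snippet below only illustrates the pattern; the detection logic, route name, and payload shape are assumptions rather than the project's exact code.

```python
# Hedged illustration of the recipe hand-off -- the endpoint name and the
# keyword check are assumptions, not the project's exact implementation.
import requests

GEMINI_URL = "http://localhost:8000/chat"  # hypothetical Gemini route

def answer(message: str, bot) -> str:
    if "recipe" in message.lower():
        try:
            # Forward recipe requests to the Gemini backend for a richer answer.
            reply = requests.post(GEMINI_URL, json={"message": message}, timeout=10)
            reply.raise_for_status()
            return reply.json()["response"]
        except requests.RequestException:
            pass  # Gemini API disabled or unreachable -- fall back below
    # Otherwise (or on failure), let the locally trained chatterbot answer.
    return str(bot.get_response(message))
```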
The following steps and development directions set up an ideal version of our angry chef bot using the comprehensive functionality of Google's Gemini model.
Navigate to the backend directory in your shell and install the dependencies using pip (these are the same dependencies as above):

pip install -r requirements.txt

Next, you need to obtain a Google API key from the Google Cloud Console. Create a .env file in the backend directory and add your Google API key to it:

GOOGLE_API_KEY=your_api_key_here
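The Gemini backend reads this key from the environment at startup. If you want to verify the key outside the app, a minimal check along the lines below works; it uses the python-dotenv and google-generativeai packages, and the model name is chosen for illustration - the project's gemini.py may be configured differently.

```python
# Minimal key check -- package usage and model name are illustrative;
# the project's gemini.py may configure things differently.
import os

from dotenv import load_dotenv
import google.generativeai as genai

load_dotenv()  # reads backend/.env when run from the backend directory
genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

model = genai.GenerativeModel("gemini-pro")
print(model.generate_content("Give me a one-line risotto tip.").text)
```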
Now you are ready to start the FastAPI server! Run the application using Uvicorn (runs on http://localhost:8000/):
uvicorn gemini:app --reload

OR

python -m uvicorn gemini:app --reload

You can access the Swagger Documentation for the API while running the application at http://localhost:8000/docs
The frontend setup and usage remain the same as above.
To observe the most recent test runs, you can navigate to the GitHub Actions test pipeline by clicking the green checkmark next to the most recent commit. The tests are run by the CI/CD pipeline on any push to main and on creation or update of a pull request. Tests should also run locally using the pytest command, though we do not have a refined process to set this up.
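If you want to try a local run anyway, the usual pattern is to install pytest in the backend environment and exercise the FastAPI app through its test client. The example below is a hedged sketch; the route and expected response are placeholders, not the project's actual tests.

```python
# Hypothetical local test -- the route name and payload are placeholders;
# the real test suite lives alongside the backend code.
from fastapi.testclient import TestClient

from main import app  # the same app object Uvicorn serves

client = TestClient(app)

def test_chat_endpoint_responds():
    response = client.post("/chat", json={"message": "How do I sear a steak?"})
    assert response.status_code == 200
```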