angry-chef

SWEN-356 Course Project

Overview

The Angry Chef Chatbot (also known as RamsayAI) engages in cooking-related conversation with its users, answering questions about cooking and giving recipe suggestions. It is programmed to emulate the personality of celebrity chef Gordon Ramsay: while it delivers its responses angrily, usually accompanied by insults, it answers questions and gives proper responses to the highest extent it can.

Prerequisites

Make sure Python 3 is available on your machine - this API uses version 3.11. If you have multiple versions of Python, you can use a Python version manager such as pyenv.

Similarly, make sure Node is available - the frontend uses Node 18 and npm 9.8.1. For more information on installing these, see their docs.

Development

The following steps outline the additional setup needed to work on this project.

Backend

Navigate to the backend directory in your shell and install the dependencies using pip:

pip install -r requirements.txt

Next, install ChatterPy, the machine-learning chatbot we use. It is a fork of ChatterBot with continued maintenance for Python 3.11. Learn more about it here.

pip install git+https://github.com/ShoneGK/ChatterPy

ChatterPy depends on spaCy; you may also need to download its English model:

python -m spacy download en_core_web_sm

Now you are ready to start the FastAPI server! Run the application using Uvicorn (runs on http://localhost:8000/).

uvicorn main:app --reload

OR

python -m uvicorn main:app --reload

Note - in order for the frontend to work with both Gemini and ChatterBot, you need to run both servers in separate windows. To do this, run the commands below (also see the Google Gemini setup instructions below):

uvicorn gemini:app --reload --port 8000
uvicorn main:app --reload --port 8001  # In a separate window

OR

python -m uvicorn gemini:app --reload --port 8000
python -m uvicorn main:app --reload --port 8001  # In a separate window

Alternatively, if you have all packages installed in the respective frontend/ and backend/ directories, you can run start.py from the application's base directory. You must first set the BASE_URL variable to match the path on your machine (e.g. /Users/dummy/Documents/Github/angry-chef); then you can run:

python start.py
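The contents of start.py aren't shown in this README, so the following is only a hypothetical sketch of what such a launcher could look like: it builds the three dev-server commands and starts each from its own directory. BASE_URL, the helper name, and the port layout are assumptions, not the repo's actual code.

```python
# Hypothetical sketch of a launcher like start.py (not the actual script):
# build the backend/frontend commands, then start each from its directory.
import subprocess
import sys
from pathlib import Path

BASE_URL = Path("/Users/dummy/Documents/Github/angry-chef")  # set to your checkout

def build_commands(base: Path):
    """Return (command, working-directory) pairs for every dev server."""
    backend = base / "backend"
    frontend = base / "frontend"
    return [
        ([sys.executable, "-m", "uvicorn", "gemini:app", "--reload", "--port", "8000"], backend),
        ([sys.executable, "-m", "uvicorn", "main:app", "--reload", "--port", "8001"], backend),
        (["npm", "run", "dev"], frontend),
    ]

if __name__ == "__main__":
    # Launch every server and wait; Ctrl-C stops them all.
    procs = [subprocess.Popen(cmd, cwd=cwd) for cmd, cwd in build_commands(BASE_URL)]
    for proc in procs:
        proc.wait()
```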

You can access the Swagger Documentation for the API while running the application at http://localhost:8000/docs

Frontend

Navigate to the frontend directory in a separate shell and install the dependencies using npm:

npm install

Now you are ready to start the frontend server! Run the frontend using npm (runs on http://localhost:5173/):

npm run dev

Chatterbot Text Collection

While both the ChatterBot and Gemini APIs are running, ChatterBot will forward any recipe requests to the Gemini API to improve the quality of its responses while we continue to train it on more recipes. To disable this feature, deactivate the Gemini API.
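As a rough illustration of that hand-off, the routing logic amounts to something like the sketch below. The keyword list, function names, and fallback behavior are illustrative assumptions, not the repo's actual code.

```python
# Hypothetical sketch of the ChatterBot -> Gemini recipe hand-off.
# Keyword list and names are illustrative only.
RECIPE_KEYWORDS = ("recipe", "how do i make", "how do i cook")

def is_recipe_request(prompt: str) -> bool:
    """Crude heuristic: does the prompt look like a recipe request?"""
    lowered = prompt.lower()
    return any(keyword in lowered for keyword in RECIPE_KEYWORDS)

def route(prompt, local_bot, gemini_call):
    """Forward recipe requests to Gemini; use the local ChatterPy bot
    otherwise, or when the Gemini server is unreachable/deactivated."""
    if is_recipe_request(prompt):
        try:
            return gemini_call(prompt)  # Gemini server, e.g. on port 8000
        except Exception:
            pass  # Gemini API deactivated -> fall back to the local bot
    return local_bot(prompt)
```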

Development -- Google Gemini

The following steps and development directions set up an ideal version of our Angry Chef bot using the comprehensive functionality of Google's Gemini model.

Backend

Navigate to the backend directory in your shell and install the dependencies using pip (these are the same dependencies as above):

pip install -r requirements.txt

Next, obtain a Google API key from the Google Cloud Console. Create a .env file in the backend directory and add your Google API key to it:

GOOGLE_API_KEY=your_api_key_here
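The backend presumably reads that key from the environment. A minimal sketch of such a lookup is below; the helper name is hypothetical, and the real code likely loads backend/.env first (for example via python-dotenv) before reading the variable.

```python
import os

def get_google_api_key(env=os.environ):
    """Hypothetical helper: fetch GOOGLE_API_KEY, failing loudly if absent.
    The actual backend may load backend/.env (e.g. with python-dotenv)
    before this lookup runs."""
    key = env.get("GOOGLE_API_KEY")
    if not key:
        raise RuntimeError("GOOGLE_API_KEY is not set - add it to backend/.env")
    return key
```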

Now you are ready to start the FastAPI server! Run the application using Uvicorn (runs on http://localhost:8000/):

uvicorn gemini:app --reload

OR

python -m uvicorn gemini:app --reload

You can access the Swagger Documentation for the API while running the application at http://localhost:8000/docs

Frontend

The frontend setup and usage remain the same as above.

Tests

To view the most recent test runs, navigate to the GitHub Actions test pipeline by clicking the green checkmark next to the bar indicating the most recent commit. The tests run in CI/CD on any push to main and on creation or update of a pull request. Tests should also run locally with the pytest command, though we do not have a refined process for setting this up.
