
Applied ML Team 16

Welcome to the repository for our Applied Machine Learning project.

Our project trains a model to approximate depth data from RGB images using a tiling pattern.
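To illustrate the tiling idea (the function name and tile size here are illustrative assumptions, not the repository's actual implementation), an RGB image can be split into fixed-size non-overlapping tiles like this:

```python
import numpy as np

def tile_image(img: np.ndarray, tile: int) -> np.ndarray:
    """Split an H x W x 3 image into non-overlapping tile x tile patches."""
    h, w, c = img.shape
    h, w = h - h % tile, w - w % tile  # drop edge pixels that don't fit a full tile
    img = img[:h, :w]
    patches = img.reshape(h // tile, tile, w // tile, tile, c)
    return patches.swapaxes(1, 2).reshape(-1, tile, tile, c)

img = np.zeros((64, 96, 3), dtype=np.uint8)
print(tile_image(img, 32).shape)  # → (6, 32, 32, 3)
```

Each tile can then be fed to the model independently and the predicted depth patches reassembled in the same grid order.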

Prerequisites

Make sure you have the following software and tools installed:

  • Conda: Conda is used for dependency management. It is a standard tool for many machine learning libraries and supports pip packages as well. Install the dependencies with "conda install --yes --file conda_requirements.txt".
  • Python 3.11: the Python version this repository is tested against.

Getting Started

General

  1. Clone this repository.
  2. Get the training and validation data and place it in the data folder inside the project_name folder.
    • Data used to train our model
      1. Download from Google Drive: drive
    • Original data
      1. Download from the original source: website
      2. Run subset_maker.py with the sample amount and "val" or "train" to produce a workable data folder.
  3. Create a conda environment with Python 3.11.
  4. Install the packages from "conda_requirements.txt" using the command below.
conda install --yes --file conda_requirements.txt
  5. Install PyTorch using the command from their website.
  6. Download the model from the release page and place it in the root folder of the repository.

Train and validate

Both commands below should be run in the root folder of the repository.

  • Train

python main.py --epochs (number of epochs) --batch-size (batch size) --lr (learning rate) --freeze (amount before freeze) cnn
  • Validate

python main.py evaluate (model file name with extension) --batch-size (batch size)

API

  1. Run the following command in the root directory of the repository.
uvicorn FastAPI:app --reload
  2. In a new terminal tab, run the following, replacing image_path.jpg with the path to the input image:
curl -X 'POST' \
  'http://127.0.0.1:8000/predict_depth/' \
  -H 'accept: application/json' \
  -H 'Content-Type: multipart/form-data' \
  -F 'file=@image_path.jpg;type=image/jpeg' \
  --output output.png
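The same request can also be issued from Python with only the standard library. The endpoint URL and the "file" field name come from the curl command above; the helper names are a sketch, not part of the repository's API:

```python
import urllib.request
import uuid

def encode_multipart(field: str, filename: str, data: bytes,
                     content_type: str = "image/jpeg"):
    """Build a multipart/form-data body and its matching Content-Type header."""
    boundary = uuid.uuid4().hex
    body = (
        f"--{boundary}\r\n"
        f'Content-Disposition: form-data; name="{field}"; filename="{filename}"\r\n'
        f"Content-Type: {content_type}\r\n\r\n"
    ).encode() + data + f"\r\n--{boundary}--\r\n".encode()
    return body, f"multipart/form-data; boundary={boundary}"

def predict_depth(image_path: str,
                  url: str = "http://127.0.0.1:8000/predict_depth/") -> bytes:
    """POST an image to the running API and return the response body (a PNG)."""
    with open(image_path, "rb") as f:
        body, ctype = encode_multipart("file", image_path, f.read())
    req = urllib.request.Request(url, data=body, headers={"Content-Type": ctype})
    with urllib.request.urlopen(req) as resp:
        return resp.read()
```

With the server running, `open("output.png", "wb").write(predict_depth("image_path.jpg"))` mirrors the curl invocation above.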

Streamlit

  1. Run the following command in the root directory of the repository.
streamlit run streamlit_main.py
  2. Follow the instructions on the web demo.

Unit testing

To run all the tests developed using unittest, simply use:

python -m unittest discover tests

If you wish to see additional details, run it in verbose mode:

python -m unittest discover -v tests
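Test modules found by discovery follow the standard unittest shape; a minimal hypothetical example (class and method names are illustrative, not from this repo) looks like:

```python
import unittest

class TestExample(unittest.TestCase):
    """Minimal test case; discovery picks up files matching test*.py by default."""

    def test_addition(self):
        self.assertEqual(1 + 1, 2)
```

Placing a file like this under tests/ is enough for `python -m unittest discover tests` to find and run it.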

Repository map

├───.github
│   └────workflows
│        └──── style.yml
├───project_name
│   ├───data
│   │   ├───val_subset # only there when validating
│   │   ├───train_subset # only there when training
│   │   ├───data_loader.py
│   │   ├───data_test.py
│   │   ├───path_grapper.py
│   │   └───subset_maker.py
│   ├───models
│   │   ├───cnn.py
│   │   └───Preprocessing_class.py
│   └───Training
│       ├───Evaluation
│       │   ├───evaluate.py
│       │   └───validation.py
│       └───model_trainer.py
├───tests
│   ├───data
│   ├───features
│   └───models
├───.gitignore
├───cnn_best.pth # download from release page
├───main.py
├───streamlit_main.py
├───FastAPI.py
├───conda_requirements.txt
└───README.md

About

Forked from JHZ5583233 / Applied-ML-16
