`objectTracking.py` contains all the essential parts of the optical flow operation, including the `getFeatures`, `estimateAllTranslation`, and `applyGeometricTransformation` functions.
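For orientation, here is a rough sketch of how these three helpers are typically chained in a feature-tracking loop. The frame handling and the exact argument lists are assumptions and may differ from the actual code in `objectTracking.py`:

```python
# Illustrative tracking loop; getFeatures, estimateAllTranslation and
# applyGeometricTransformation come from objectTracking.py, but the argument
# lists shown here are assumptions, not the project's exact signatures.
import cv2
from objectTracking import getFeatures, estimateAllTranslation, applyGeometricTransformation

def track_sketch(rawVideo, bboxs):
    cap = cv2.VideoCapture(rawVideo)
    ok, frame = cap.read()
    # Detect good corner features inside the initial bounding boxes
    startXs, startYs = getFeatures(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), bboxs)
    while True:
        ok, next_frame = cap.read()
        if not ok:
            break
        # Optical flow: estimate where each feature moved between frames
        newXs, newYs = estimateAllTranslation(startXs, startYs, frame, next_frame)
        # Fit a geometric transform to the matched features and update boxes
        startXs, startYs, bboxs = applyGeometricTransformation(
            startXs, startYs, newXs, newYs, bboxs)
        frame = next_frame
    cap.release()
    return bboxs
```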
`mrcnn_detect.py` contains all the essential parts of object detection and instance segmentation, making use of the published Mask R-CNN architecture, and it is used by `objectTracking.py`.
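The detection side builds on a published Mask R-CNN implementation; the sketch below assumes the widely used matterport `mrcnn` package, which matches the `mask_rcnn_coco.h5` weights file mentioned later. The inference config values and the example image are assumptions, not the project's exact configuration:

```python
# Minimal Mask R-CNN inference sketch (matterport mrcnn package assumed).
import skimage.io
import mrcnn.model as modellib
from mrcnn.config import Config

class InferenceConfig(Config):
    NAME = "coco_inference"
    NUM_CLASSES = 1 + 80      # COCO: background + 80 object classes
    GPU_COUNT = 1
    IMAGES_PER_GPU = 1        # detect one frame at a time

model = modellib.MaskRCNN(mode="inference", config=InferenceConfig(), model_dir="logs")
model.load_weights("mask_rcnn_coco.h5", by_name=True)

image = skimage.io.imread("frame.jpg")          # assumed example frame (RGB)
r = model.detect([image], verbose=0)[0]
# r["rois"], r["masks"], r["class_ids"], r["scores"] hold the bounding boxes,
# per-instance masks, class labels, and confidences for the detected objects.
```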
`create_output_video.py` runs the full pipeline by calling the `objectTracking` function.
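A minimal sketch of what that top-level wiring might look like; the exact argument list of `objectTracking` and the example file name are assumptions, not the project's actual code:

```python
# create_output_video.py -- illustrative sketch only; the real script may pass
# additional options to objectTracking (output path, drawing flags, etc.).
from objectTracking import objectTracking

rawVideo = "Easy.mp4"     # assumed example name; point this at your input video
objectTracking(rawVideo)  # runs detection + tracking and writes the output video
```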
This directory contains all the input videos used for testing.
This directory contains all the output videos generated for this project.
- Unzip the zipped file
- Change directory to the `Final_Project` folder using `cd` in the command line
- (Optional) Create a virtual environment using `virtualenv venv --python=python3.6`
- Run `pip install -r requirements.txt`. If using CPU only, install CPU-optimized TensorFlow, and make sure the Python version is 3.6 or lower if a virtual environment was not created in the previous step, as Python 3.7 does not yet fully support TensorFlow
- Modify the `rawVideo` variable in `create_output_video.py` to be the name of the input video file
- Run `create_output_video.py`. If `mask_rcnn_coco.h5` (pretrained Mask R-CNN weights) has not been downloaded yet, it will first be downloaded to the current directory and then used by the application (see the sketch after this list)
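The auto-download behavior in the last step is typically implemented with the helper that ships with the matterport Mask R-CNN package; a minimal sketch, assuming that package is the one in use:

```python
import os
from mrcnn import utils

COCO_WEIGHTS_PATH = "mask_rcnn_coco.h5"   # pretrained COCO weights in the current directory
if not os.path.exists(COCO_WEIGHTS_PATH):
    # Fetches the released COCO weights from the Mask R-CNN GitHub releases page
    utils.download_trained_weights(COCO_WEIGHTS_PATH)
```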