Based on output generated by our branch at https://github.com/mmlab-cv/Crowd-Simulator/tree/static-simulation
Report: Computer_Vision_Project.pdf
Scripts:
- detectron_model.py: main file; handles training, inference, and testing
- segMaskToCOCO.py: parses the dataset generated by Crowd Simulator (color images + segmentation masks) and creates a COCO annotation file in JSON
- datasets/coco/bbox.py and datasets/coco/coco.py: scripts used to modify the annotations and create new filtered versions
- datasets/coco/merge.py: set of functions to convert RLE annotations to polygons and to merge two annotation files
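The mask-to-COCO conversion done by segMaskToCOCO.py can be sketched roughly as follows. This is a minimal illustration, not the actual script: the function name `masks_to_coco`, the single "person" category, and the bbox/area computation are all assumptions for the sketch.

```python
import json
import numpy as np

def masks_to_coco(masks, image_ids, out_path="annotations.json"):
    """Build a minimal COCO-style annotation dict from binary instance masks.

    masks: list of (H, W) boolean numpy arrays, one per instance.
    image_ids: parallel list of image ids the masks belong to.
    """
    images, annotations = {}, []
    for ann_id, (mask, img_id) in enumerate(zip(masks, image_ids), start=1):
        ys, xs = np.nonzero(mask)
        if len(xs) == 0:
            continue  # skip empty masks
        # COCO bboxes are [x, y, width, height] in pixel coordinates
        x0, y0 = int(xs.min()), int(ys.min())
        w, h = int(xs.max()) - x0 + 1, int(ys.max()) - y0 + 1
        images[img_id] = {"id": img_id,
                          "height": mask.shape[0], "width": mask.shape[1]}
        annotations.append({
            "id": ann_id,
            "image_id": img_id,
            "category_id": 1,        # single "person" class assumed here
            "bbox": [x0, y0, w, h],
            "area": int(mask.sum()),
            "iscrowd": 0,
        })
    coco = {"images": list(images.values()),
            "annotations": annotations,
            "categories": [{"id": 1, "name": "person"}]}
    with open(out_path, "w") as f:
        json.dump(coco, f)
    return coco
```

The real script additionally has to split the color segmentation masks into per-instance binary masks before a step like this applies.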
runs folder: each run contains
- cfg.yml: file containing the settings used for the run
- some inference examples of the trained model (images beginning with inference_)
- evaluation_result_* for the different test datasets
- metrics.json: training log
- NOTE: .pth trained models are not shared
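Detectron2-style metrics.json training logs are typically JSON-lines (one JSON object per logged iteration), so a run's loss curve can be recovered with a short helper like the following. This is an illustrative sketch, assuming the standard Detectron2 log keys `iteration` and `total_loss`:

```python
import json

def load_metrics(path="metrics.json"):
    """Parse a JSON-lines training log and return (iterations, total_losses).

    Lines without a "total_loss" key (e.g. evaluation records) are skipped.
    """
    iters, losses = [], []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            rec = json.loads(line)
            if "total_loss" in rec:
                iters.append(rec.get("iteration"))
                losses.append(rec["total_loss"])
    return iters, losses
```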
Datasets and annotations are available from the links in datasets/link_drive.txt (a unitn mail account is needed to access the drive)
- train_compressed.7z contains a lossy compression (.png to .jpg) of the original synthetic dataset used in the report
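The annotation-merging step described for datasets/coco/merge.py above can be sketched as follows. The function name `merge_coco` and the id-offsetting strategy are assumptions for illustration, not the script's actual implementation; it also assumes both files share the same category set.

```python
import json

def merge_coco(path_a, path_b, out_path="merged.json"):
    """Merge two COCO annotation files into one.

    Image and annotation ids from the second file are offset past the
    maximum ids of the first file so they do not collide.
    """
    with open(path_a) as f:
        a = json.load(f)
    with open(path_b) as f:
        b = json.load(f)
    img_off = max((img["id"] for img in a["images"]), default=0)
    ann_off = max((ann["id"] for ann in a["annotations"]), default=0)
    for img in b["images"]:
        img["id"] += img_off
    for ann in b["annotations"]:
        ann["id"] += ann_off
        ann["image_id"] += img_off  # keep annotations pointing at re-indexed images
    merged = {"images": a["images"] + b["images"],
              "annotations": a["annotations"] + b["annotations"],
              "categories": a["categories"]}  # assumes identical categories
    with open(out_path, "w") as f:
        json.dump(merged, f)
    return merged
```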
