diff --git a/.gitignore b/.gitignore
new file mode 100644
index 0000000000..6cd11bac87
--- /dev/null
+++ b/.gitignore
@@ -0,0 +1,104 @@
+# Byte-compiled / optimized / DLL files
+__pycache__/
+*.py[cod]
+*$py.class
+
+# C extensions
+*.so
+
+# Distribution / packaging
+.Python
+build/
+develop-eggs/
+dist/
+downloads/
+eggs/
+.eggs/
+lib/
+lib64/
+parts/
+sdist/
+var/
+wheels/
+*.egg-info/
+.installed.cfg
+*.egg
+MANIFEST
+
+# PyInstaller
+# Usually these files are written by a python script from a template
+# before PyInstaller builds the exe, so as to inject date/other infos into it.
+*.manifest
+*.spec
+
+# Installer logs
+pip-log.txt
+pip-delete-this-directory.txt
+
+# Unit test / coverage reports
+htmlcov/
+.tox/
+.coverage
+.coverage.*
+.cache
+nosetests.xml
+coverage.xml
+*.cover
+.hypothesis/
+
+# Translations
+*.mo
+*.pot
+
+# Django stuff:
+*.log
+.static_storage/
+.media/
+local_settings.py
+
+# Flask stuff:
+instance/
+.webassets-cache
+
+# Scrapy stuff:
+.scrapy
+
+# Sphinx documentation
+docs/_build/
+
+# PyBuilder
+target/
+
+# Jupyter Notebook
+.ipynb_checkpoints
+
+# pyenv
+.python-version
+
+# celery beat schedule file
+celerybeat-schedule
+
+# SageMath parsed files
+*.sage.py
+
+# Environments
+.env
+.venv
+env/
+venv/
+ENV/
+env.bak/
+venv.bak/
+
+# Spyder project settings
+.spyderproject
+.spyproject
+
+# Rope project settings
+.ropeproject
+
+# mkdocs documentation
+/site
+
+# mypy
+.mypy_cache/
\ No newline at end of file
diff --git a/code/.ipynb_checkpoints/Rover_Project_Test_Notebook-checkpoint.ipynb b/code/.ipynb_checkpoints/Rover_Project_Test_Notebook-checkpoint.ipynb
deleted file mode 100644
index 5ffcdeff15..0000000000
--- a/code/.ipynb_checkpoints/Rover_Project_Test_Notebook-checkpoint.ipynb
+++ /dev/null
@@ -1,551 +0,0 @@
-{
- "cells": [
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## Rover Project Test Notebook\n",
- "This notebook contains the functions from the lesson and provides the scaffolding you need to test out your mapping methods. The steps you need to complete in this notebook for the project are the following:\n",
- "\n",
- "* First just run each of the cells in the notebook, examine the code and the results of each.\n",
- "* Run the simulator in \"Training Mode\" and record some data. Note: the simulator may crash if you try to record a large (longer than a few minutes) dataset, but you don't need a ton of data, just some example images to work with. \n",
- "* Change the data directory path (2 cells below) to be the directory where you saved data\n",
- "* Test out the functions provided on your data\n",
- "* Write new functions (or modify existing ones) to report and map out detections of obstacles and rock samples (yellow rocks)\n",
- "* Populate the `process_image()` function with the appropriate steps/functions to go from a raw image to a worldmap.\n",
- "* Run the cell that calls `process_image()` using `moviepy` functions to create video output\n",
- "* Once you have mapping working, move on to modifying `perception.py` and `decision.py` to allow your rover to navigate and map in autonomous mode!\n",
- "\n",
- "**Note: If, at any point, you encounter frozen display windows or other confounding issues, you can always start again with a clean slate by going to the \"Kernel\" menu above and selecting \"Restart & Clear Output\".**\n",
- "\n",
- "**Run the next cell to get code highlighting in the markdown cells.**"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {
- "collapsed": true
- },
- "outputs": [],
- "source": [
- "%%HTML\n",
- ""
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 1,
- "metadata": {
- "collapsed": true,
- "outputExpanded": false
- },
- "outputs": [],
- "source": [
- "%matplotlib inline\n",
- "#%matplotlib qt # Choose %matplotlib qt to plot to an interactive window (note it may show up behind your browser)\n",
- "# Make some of the relevant imports\n",
- "import cv2 # OpenCV for perspective transform\n",
- "import numpy as np\n",
- "import matplotlib.image as mpimg\n",
- "import matplotlib.pyplot as plt\n",
- "import scipy.misc # For saving images as needed\n",
- "import glob # For reading in a list of images from a folder\n",
- "import imageio\n",
- "imageio.plugins.ffmpeg.download()\n"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## Quick Look at the Data\n",
- "There's some example data provided in the `test_dataset` folder. This basic dataset is enough to get you up and running but if you want to hone your methods more carefully you should record some data of your own to sample various scenarios in the simulator. \n",
- "\n",
- "Next, read in and display a random image from the `test_dataset` folder"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {
- "collapsed": true,
- "outputExpanded": false,
- "scrolled": true
- },
- "outputs": [],
- "source": [
- "path = '../test_dataset/IMG/*'\n",
- "img_list = glob.glob(path)\n",
- "# Grab a random image and display it\n",
- "idx = np.random.randint(0, len(img_list)-1)\n",
- "image = mpimg.imread(img_list[idx])\n",
- "plt.imshow(image)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## Calibration Data\n",
- "Read in and display example grid and rock sample calibration images. You'll use the grid for perspective transform and the rock image for creating a new color selection that identifies these samples of interest. "
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {
- "collapsed": true
- },
- "outputs": [],
- "source": [
- "# In the simulator you can toggle on a grid on the ground for calibration\n",
- "# You can also toggle on the rock samples with the 0 (zero) key. \n",
- "# Here's an example of the grid and one of the rocks\n",
- "example_grid = '../calibration_images/example_grid1.jpg'\n",
- "example_rock = '../calibration_images/example_rock1.jpg'\n",
- "grid_img = mpimg.imread(example_grid)\n",
- "rock_img = mpimg.imread(example_rock)\n",
- "\n",
- "fig = plt.figure(figsize=(12,3))\n",
- "plt.subplot(121)\n",
- "plt.imshow(grid_img)\n",
- "plt.subplot(122)\n",
- "plt.imshow(rock_img)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## Perspective Transform\n",
- "\n",
- "Define the perspective transform function from the lesson and test it on an image."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {
- "collapsed": true
- },
- "outputs": [],
- "source": [
- "# Define a function to perform a perspective transform\n",
- "# I've used the example grid image above to choose source points for the\n",
- "# grid cell in front of the rover (each grid cell is 1 square meter in the sim)\n",
- "def perspect_transform(img, src, dst):\n",
- " \n",
- " M = cv2.getPerspectiveTransform(src, dst)\n",
- " warped = cv2.warpPerspective(img, M, (img.shape[1], img.shape[0]))# keep same size as input image\n",
- " \n",
- " return warped\n",
- "\n",
- "\n",
- "# Define calibration box in source (actual) and destination (desired) coordinates\n",
- "# These source and destination points are defined to warp the image\n",
- "# to a grid where each 10x10 pixel square represents 1 square meter\n",
- "# The destination box will be 2*dst_size on each side\n",
- "dst_size = 5 \n",
- "# Set a bottom offset to account for the fact that the bottom of the image \n",
- "# is not the position of the rover but a bit in front of it\n",
- "# this is just a rough guess, feel free to change it!\n",
- "bottom_offset = 6\n",
- "source = np.float32([[14, 140], [301, 140], [200, 96], [118, 96]])\n",
- "destination = np.float32([[image.shape[1]/2 - dst_size, image.shape[0] - bottom_offset],\n",
- " [image.shape[1]/2 + dst_size, image.shape[0] - bottom_offset],\n",
- " [image.shape[1]/2 + dst_size, image.shape[0] - 2*dst_size - bottom_offset], \n",
- " [image.shape[1]/2 - dst_size, image.shape[0] - 2*dst_size - bottom_offset],\n",
- " ])\n",
- "warped = perspect_transform(grid_img, source, destination)\n",
- "plt.imshow(warped)\n",
- "#scipy.misc.imsave('../output/warped_example.jpg', warped)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## Color Thresholding\n",
- "Define the color thresholding function from the lesson and apply it to the warped image\n",
- "\n",
- "**TODO:** Ultimately, you want your map to not just include navigable terrain but also obstacles and the positions of the rock samples you're searching for. Modify this function or write a new function that returns the pixel locations of obstacles (areas below the threshold) and rock samples (yellow rocks in calibration images), such that you can map these areas into world coordinates as well. \n",
- "**Hints and Suggestions:** \n",
- "* For obstacles you can just invert the color selection that you used to detect ground pixels, i.e., if you've decided that everything above the threshold is navigable terrain, then everything below the threshold must be an obstacle!\n",
- "\n",
- "\n",
- "* For rocks, think about imposing a lower and upper boundary in your color selection to be more specific about choosing colors. You can investigate the colors of the rocks (the RGB pixel values) in an interactive matplotlib window to get a feel for the appropriate threshold range (keep in mind you may want different ranges for each of R, G and B!). Feel free to get creative and even bring in functions from other libraries. Here's an example of [color selection](http://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_imgproc/py_colorspaces/py_colorspaces.html) using OpenCV. \n",
- "\n",
- "* **Beware However:** if you start manipulating images with OpenCV, keep in mind that it defaults to `BGR` instead of `RGB` color space when reading/writing images, so things can get confusing."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {
- "collapsed": true
- },
- "outputs": [],
- "source": [
- "# Identify pixels above the threshold\n",
- "# Threshold of RGB > 160 does a nice job of identifying ground pixels only\n",
- "def color_thresh(img, rgb_thresh=(160, 160, 160)):\n",
- " # Create an array of zeros same xy size as img, but single channel\n",
- " color_select = np.zeros_like(img[:,:,0])\n",
- " # Require that each pixel be above all three threshold values in RGB\n",
- " # above_thresh will now contain a boolean array with \"True\"\n",
- " # where threshold was met\n",
- " above_thresh = (img[:,:,0] > rgb_thresh[0]) \\\n",
- " & (img[:,:,1] > rgb_thresh[1]) \\\n",
- " & (img[:,:,2] > rgb_thresh[2])\n",
- " # Index the array of zeros with the boolean array and set to 1\n",
- " color_select[above_thresh] = 1\n",
- " # Return the binary image\n",
- " return color_select\n",
- "\n",
- "threshed = color_thresh(warped)\n",
- "plt.imshow(threshed, cmap='gray')\n",
- "#scipy.misc.imsave('../output/warped_threshed.jpg', threshed*255)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## Coordinate Transformations\n",
- "Define the functions used to do coordinate transforms and apply them to an image."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {
- "collapsed": true,
- "outputExpanded": false
- },
- "outputs": [],
- "source": [
- "# Define a function to convert from image coords to rover coords\n",
- "def rover_coords(binary_img):\n",
- " # Identify nonzero pixels\n",
- " ypos, xpos = binary_img.nonzero()\n",
- " # Calculate pixel positions with reference to the rover position being at the \n",
- " # center bottom of the image. \n",
- " x_pixel = -(ypos - binary_img.shape[0]).astype(np.float)\n",
- " y_pixel = -(xpos - binary_img.shape[1]/2 ).astype(np.float)\n",
- " return x_pixel, y_pixel\n",
- "\n",
- "# Define a function to convert to radial coords in rover space\n",
- "def to_polar_coords(x_pixel, y_pixel):\n",
- " # Convert (x_pixel, y_pixel) to (distance, angle) \n",
- " # in polar coordinates in rover space\n",
- " # Calculate distance to each pixel\n",
- " dist = np.sqrt(x_pixel**2 + y_pixel**2)\n",
- " # Calculate angle away from vertical for each pixel\n",
- " angles = np.arctan2(y_pixel, x_pixel)\n",
- " return dist, angles\n",
- "\n",
- "# Define a function to map rover space pixels to world space\n",
- "def rotate_pix(xpix, ypix, yaw):\n",
- " # Convert yaw to radians\n",
- " yaw_rad = yaw * np.pi / 180\n",
- " xpix_rotated = (xpix * np.cos(yaw_rad)) - (ypix * np.sin(yaw_rad))\n",
- " \n",
- " ypix_rotated = (xpix * np.sin(yaw_rad)) + (ypix * np.cos(yaw_rad))\n",
- " # Return the result \n",
- " return xpix_rotated, ypix_rotated\n",
- "\n",
- "def translate_pix(xpix_rot, ypix_rot, xpos, ypos, scale): \n",
- " # Apply a scaling and a translation\n",
- " xpix_translated = (xpix_rot / scale) + xpos\n",
- " ypix_translated = (ypix_rot / scale) + ypos\n",
- " # Return the result \n",
- " return xpix_translated, ypix_translated\n",
- "\n",
- "\n",
- "# Define a function to apply rotation and translation (and clipping)\n",
- "# Once you define the two functions above this function should work\n",
- "def pix_to_world(xpix, ypix, xpos, ypos, yaw, world_size, scale):\n",
- " # Apply rotation\n",
- " xpix_rot, ypix_rot = rotate_pix(xpix, ypix, yaw)\n",
- " # Apply translation\n",
- " xpix_tran, ypix_tran = translate_pix(xpix_rot, ypix_rot, xpos, ypos, scale)\n",
- " # Perform rotation, translation and clipping all at once\n",
- " x_pix_world = np.clip(np.int_(xpix_tran), 0, world_size - 1)\n",
- " y_pix_world = np.clip(np.int_(ypix_tran), 0, world_size - 1)\n",
- " # Return the result\n",
- " return x_pix_world, y_pix_world\n",
- "\n",
- "# Grab another random image\n",
- "idx = np.random.randint(0, len(img_list)-1)\n",
- "image = mpimg.imread(img_list[idx])\n",
- "warped = perspect_transform(image, source, destination)\n",
- "threshed = color_thresh(warped)\n",
- "\n",
- "# Calculate pixel values in rover-centric coords and distance/angle to all pixels\n",
- "xpix, ypix = rover_coords(threshed)\n",
- "dist, angles = to_polar_coords(xpix, ypix)\n",
- "mean_dir = np.mean(angles)\n",
- "\n",
- "# Do some plotting\n",
- "fig = plt.figure(figsize=(12,9))\n",
- "plt.subplot(221)\n",
- "plt.imshow(image)\n",
- "plt.subplot(222)\n",
- "plt.imshow(warped)\n",
- "plt.subplot(223)\n",
- "plt.imshow(threshed, cmap='gray')\n",
- "plt.subplot(224)\n",
- "plt.plot(xpix, ypix, '.')\n",
- "plt.ylim(-160, 160)\n",
- "plt.xlim(0, 160)\n",
- "arrow_length = 100\n",
- "x_arrow = arrow_length * np.cos(mean_dir)\n",
- "y_arrow = arrow_length * np.sin(mean_dir)\n",
- "plt.arrow(0, 0, x_arrow, y_arrow, color='red', zorder=2, head_width=10, width=2)\n",
- "\n"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## Read in saved data and ground truth map of the world\n",
- "The next cell is all set up to read your saved data into a `pandas` dataframe. Here you'll also read in a \"ground truth\" map of the world, where white pixels (pixel value = 1) represent navigable terrain. \n",
- "\n",
- "After that, we'll define a class to store telemetry data and pathnames to images. When you instantiate this class (`data = Databucket()`) you'll have a global variable called `data` that you can refer to for telemetry and map data within the `process_image()` function in the following cell. \n"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {
- "collapsed": true,
- "inputHidden": false,
- "outputHidden": false
- },
- "outputs": [],
- "source": [
- "# Import pandas and read in csv file as a dataframe\n",
- "import pandas as pd\n",
- "# Change the path below to your data directory\n",
- "# If you are in a locale (e.g., Europe) that uses ',' as the decimal separator\n",
- "# change the '.' to ','\n",
- "df = pd.read_csv('../test_dataset/robot_log.csv', delimiter=';', decimal='.')\n",
- "csv_img_list = df[\"Path\"].tolist() # Create list of image pathnames\n",
- "# Read in ground truth map and create a 3-channel image with it\n",
- "ground_truth = mpimg.imread('../calibration_images/map_bw.png')\n",
- "ground_truth_3d = np.dstack((ground_truth*0, ground_truth*255, ground_truth*0)).astype(np.float)\n",
- "\n",
- "# Creating a class to be the data container\n",
- "# Will read in saved data from csv file and populate this object\n",
- "# Worldmap is instantiated as 200 x 200 grids corresponding \n",
- "# to a 200m x 200m space (same size as the ground truth map: 200 x 200 pixels)\n",
- "# This encompasses the full range of output position values in x and y from the sim\n",
- "class Databucket():\n",
- " def __init__(self):\n",
- " self.images = csv_img_list \n",
- " self.xpos = df[\"X_Position\"].values\n",
- " self.ypos = df[\"Y_Position\"].values\n",
- " self.yaw = df[\"Yaw\"].values\n",
- " self.count = 0 # This will be a running index\n",
- " self.worldmap = np.zeros((200, 200, 3)).astype(np.float)\n",
- " self.ground_truth = ground_truth_3d # Ground truth worldmap\n",
- "\n",
- "# Instantiate a Databucket().. this will be a global variable/object\n",
- "# that you can refer to in the process_image() function below\n",
- "data = Databucket()\n"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## Write a function to process stored images\n",
- "\n",
- "Modify the `process_image()` function below by adding in the perception step processes (functions defined above) to perform image analysis and mapping. The following cell is all set up to use this `process_image()` function in conjunction with the `moviepy` video processing package to create a video from the images you saved taking data in the simulator. \n",
- "\n",
- "In short, you will be passing individual images into `process_image()` and building up an image called `output_image` that will be stored as one frame of video. You can make a mosaic of the various steps of your analysis process and add text as you like (example provided below). \n",
- "\n",
- "\n",
- "\n",
- "To start with, you can simply run the next three cells to see what happens, but then go ahead and modify them such that the output video demonstrates your mapping process. Feel free to get creative!"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {
- "collapsed": true
- },
- "outputs": [],
- "source": [
- "\n",
- "# Define a function to pass stored images to,\n",
- "# reading rover position and yaw angle from the csv file\n",
- "# This function will be used by moviepy to create an output video\n",
- "def process_image(img):\n",
- " # Example of how to use the Databucket() object defined above\n",
- " # to print the current x, y and yaw values \n",
- " # print(data.xpos[data.count], data.ypos[data.count], data.yaw[data.count])\n",
- "\n",
- " # TODO: \n",
- " # 1) Define source and destination points for perspective transform\n",
- " # 2) Apply perspective transform\n",
- " # 3) Apply color threshold to identify navigable terrain/obstacles/rock samples\n",
- " # 4) Convert thresholded image pixel values to rover-centric coords\n",
- " # 5) Convert rover-centric pixel values to world coords\n",
- " # 6) Update worldmap (to be displayed on right side of screen)\n",
- " # Example: data.worldmap[obstacle_y_world, obstacle_x_world, 0] += 1\n",
- " # data.worldmap[rock_y_world, rock_x_world, 1] += 1\n",
- " # data.worldmap[navigable_y_world, navigable_x_world, 2] += 1\n",
- "\n",
- " # 7) Make a mosaic image, below is some example code\n",
- " # First create a blank image (can be whatever shape you like)\n",
- " output_image = np.zeros((img.shape[0] + data.worldmap.shape[0], img.shape[1]*2, 3))\n",
- " # Next you can populate regions of the image with various output\n",
- " # Here I'm putting the original image in the upper left hand corner\n",
- " output_image[0:img.shape[0], 0:img.shape[1]] = img\n",
- "\n",
- " # Let's create more images to add to the mosaic, first a warped image\n",
- " warped = perspect_transform(img, source, destination)\n",
- " # Add the warped image in the upper right hand corner\n",
- " output_image[0:img.shape[0], img.shape[1]:] = warped\n",
- "\n",
- " # Overlay worldmap with ground truth map\n",
- " map_add = cv2.addWeighted(data.worldmap, 1, data.ground_truth, 0.5, 0)\n",
- " # Flip map overlay so y-axis points upward and add to output_image \n",
- " output_image[img.shape[0]:, 0:data.worldmap.shape[1]] = np.flipud(map_add)\n",
- "\n",
- "\n",
- " # Then putting some text over the image\n",
- " cv2.putText(output_image,\"Populate this image with your analyses to make a video!\", (20, 20), \n",
- " cv2.FONT_HERSHEY_COMPLEX, 0.4, (255, 255, 255), 1)\n",
- " if data.count < len(data.images) - 1:\n",
- " data.count += 1 # Keep track of the index in the Databucket()\n",
- " \n",
- " return output_image"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## Make a video from processed image data\n",
- "Use the [moviepy](https://zulko.github.io/moviepy/) library to process images and create a video.\n",
- " "
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {
- "collapsed": true,
- "scrolled": false
- },
- "outputs": [],
- "source": [
- "# Import everything needed to edit/save/watch video clips\n",
- "from moviepy.editor import VideoFileClip\n",
- "from moviepy.editor import ImageSequenceClip\n",
- "\n",
- "\n",
- "# Define pathname to save the output video\n",
- "output = '../output/test_mapping.mp4'\n",
- "data = Databucket() # Re-initialize data in case you're running this cell multiple times\n",
- "clip = ImageSequenceClip(data.images, fps=60) # Note: output video will be sped up because \n",
- " # recording rate in simulator is fps=25\n",
- "new_clip = clip.fl_image(process_image) #NOTE: this function expects color images!!\n",
- "%time new_clip.write_videofile(output, audio=False)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "### This next cell should function as an inline video player\n",
- "If this fails to render the video, try running the following cell (alternative video rendering method). You can also simply have a look at the saved mp4 in your `/output` folder."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {
- "collapsed": true
- },
- "outputs": [],
- "source": [
- "\n",
- "from IPython.display import HTML\n",
- "HTML(\"\"\"\n",
- "\n",
- "\"\"\".format(output))"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "### Below is an alternative way to create a video in case the above cell did not work."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {
- "collapsed": true
- },
- "outputs": [],
- "source": [
- "import io\n",
- "import base64\n",
- "video = io.open(output, 'r+b').read()\n",
- "encoded_video = base64.b64encode(video)\n",
- "HTML(data=''''''.format(encoded_video.decode('ascii')))"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {
- "collapsed": true
- },
- "outputs": [],
- "source": []
- }
- ],
- "metadata": {
- "anaconda-cloud": {},
- "kernel_info": {
- "name": "python3"
- },
- "kernelspec": {
- "display_name": "Python 3",
- "language": "python",
- "name": "python3"
- },
- "language_info": {
- "codemirror_mode": {
- "name": "ipython",
- "version": 3
- },
- "file_extension": ".py",
- "mimetype": "text/x-python",
- "name": "python",
- "nbconvert_exporter": "python",
- "pygments_lexer": "ipython3",
- "version": "3.5.2"
- },
- "widgets": {
- "state": {},
- "version": "1.1.2"
- }
- },
- "nbformat": 4,
- "nbformat_minor": 1
-}
diff --git a/code/__pycache__/decision.cpython-35.pyc b/code/__pycache__/decision.cpython-35.pyc
deleted file mode 100644
index 9c4822e9cf..0000000000
Binary files a/code/__pycache__/decision.cpython-35.pyc and /dev/null differ
diff --git a/code/__pycache__/output_images.cpython-35.pyc b/code/__pycache__/output_images.cpython-35.pyc
deleted file mode 100644
index edf3cc9ce4..0000000000
Binary files a/code/__pycache__/output_images.cpython-35.pyc and /dev/null differ
diff --git a/code/__pycache__/perception.cpython-35.pyc b/code/__pycache__/perception.cpython-35.pyc
deleted file mode 100644
index b534161a03..0000000000
Binary files a/code/__pycache__/perception.cpython-35.pyc and /dev/null differ
diff --git a/code/__pycache__/supporting_functions.cpython-35.pyc b/code/__pycache__/supporting_functions.cpython-35.pyc
deleted file mode 100644
index 763fb162c0..0000000000
Binary files a/code/__pycache__/supporting_functions.cpython-35.pyc and /dev/null differ