This is the code for our Master's Thesis, where we use deep learning to teach a network an optimal picking sequence for the classical bin picking problem. In contrast to other methods and approaches, we do not use any depth data for training and feed the network only RGB images.
We have limited our research to medical packaging and a suction gripper.
Below you see our network prediction and affordance map of a picking sequence in a scene with scattered objects.
Using the inference above, we used a UR5e robot for grasping, as seen below.
Full details of our work can be read in our report found here.
We also have full simulation support for this in RobWork. Requirements for compiling the project can be found below.
To run the code with C++, the project requires PyTorch, Open3D, OpenCV, and RobWork. Installation guides for these can be found below.
Torch was installed through conda following ollewelins' guide.
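As a rough sketch, a conda setup along those lines could look like the commands below; the environment name, Python version, and CUDA toolkit version are assumptions and should be matched to the guide and your hardware.

# Sketch of a conda-based PyTorch install (names and versions are assumptions)
conda create -n bin-picking python=3.8
conda activate bin-picking
conda install pytorch torchvision cudatoolkit=11.3 -c pytorch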
Open3D was compiled from source with the following flags:
cmake -DBUILD_EIGEN3=ON -DBUILD_LIBREALSENSE=ON -DBUILD_GLEW=ON -DBUILD_GLFW=ON -DBUILD_JSONCPP=ON -DBUILD_PNG=ON -DGLIBCXX_USE_CXX11_ABI=ON -DPYTHON_EXECUTABLE=/usr/bin/python -DBUILD_UNIT_TESTS=ON ..

OpenCV was a standard install with sudo apt install libopencv-dev or compiled following the guide on the OpenCV official website.
RobWork was a standard install following the guide on the RobWork official website.
The inference is separated from RobWork because of compatibility issues and needs to be compiled separately. The compiled binary file needs to be located in the binary folder. This can be done with the following commands:
cd RobWork/cpp/inference_bin_generator/build
cmake ..
make -j4
cp ../bin

NOTE: This binary should also be copied to physical-implementation/bin to run on the UR5e.
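As a hedged example, assuming the repository layout implied by the paths in this README, that copy could be done from the repository root roughly as below; the exact binary name depends on what the build produced.

# Sketch only: copy the compiled inference binary next to the physical implementation
cp RobWork/cpp/inference_bin_generator/bin/* physical-implementation/bin/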
Now the project can be built with the following commands:
cd RobWork/cpp/build
cmake ..
make -j4

To run the project, the following command can be used:
cd RobWork/cpp/build
./main --model_name <model_name> --file_name <file_name> --folder_name <folder_name> --result_file_name <result_file_name>

The only flag you need to provide is --model_name, which selects the model used for inference. The flags mean the following:
--model_name: Name of the model you want to use. Our final network is called unet_resnet101_10_all_reg_jit.pt and can be found here along with our other models.
--file_name: Prefix for saved files. If empty, the model name is used.
--folder_name: Name of the folder to save images in. If empty, the model name is used.
--result_file_name: Name of the CSV file containing the results.
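For example, a full invocation with every flag spelled out might look like the following sketch; the values for --file_name, --folder_name, and --result_file_name are placeholders, and whether --model_name expects the .pt extension depends on how the loader resolves the model path.

cd RobWork/cpp/build
./main --model_name unet_resnet101_10_all_reg_jit.pt --file_name run01 --folder_name run01_images --result_file_name results.csv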
The images can be found in the images folder.
Same procedure as in the simulation, except you have to be in another directory:

cd physical-implementation/build

Before running, make sure you have a connection to the UR5e and change the IP accordingly.
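Since the procedure mirrors the simulation, a hedged sketch of the full build-and-run sequence is given below; the executable name and flag usage are assumed to match the simulation build.

cd physical-implementation/build
cmake ..
make -j4
./main --model_name unet_resnet101_10_all_reg_jit.pt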
We have generated our own data using Blender on top of the dataset from Zeng et al. found here. You can generate your own data by running generate_image.sh in the blender folder. Run the following commands to see how to call the script:
cd blender
chmod +x generate_image.sh
./generate_image.sh -h

Items can be added or removed via the items collection in synthetic_data_generator.blend, which must be opened with Blender.
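Under the hood, generate_image.sh presumably wraps a headless Blender call on this .blend file; a rough sketch of what such a call can look like is shown below. The Python script name and its arguments are purely hypothetical, and the real options are the ones printed by -h.

# Hypothetical headless Blender invocation; script name and arguments are assumptions
blender --background synthetic_data_generator.blend --python render_scenes.py -- --num-images 100 --output-dir ../images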


