
Research-grade reinforcement learning framework for robot navigation, covering discrete, obstacle-aware, continuous-control, and multi-agent environments with PPO and DQN, full evaluation pipeline, reproducible experiments, and LaTeX paper template for PhD-level research.

# Reinforcement Learning for Robot Navigation

**Author:** Ali Zangeneh
**Email:** engineer.zangeneh@gmail.com

**GitHub:** https://github.com/alizangeneh
**Research Profile:** https://orcid.org/0009-0002-5184-0571

## Project Overview

This repository presents a research-grade reinforcement learning framework for robot navigation. The project is designed as a progressive, unified system that advances through four stages:

1. Discrete single-agent navigation
2. Obstacle-aware environments
3. Continuous control
4. Multi-agent cooperative navigation

The objective is to provide a complete pipeline suitable for:

- PhD-level research preparation
- Robotics and autonomous systems
- Reinforcement learning algorithm benchmarking
- Scientific publication and reproducible experimentation

## Core Research Contributions

This project is not a collection of toy examples. It is structured to demonstrate:

- Reward-shaped navigation
- Obstacle-aware path planning via RL
- Continuous control using policy-gradient methods
- Multi-agent cooperative learning
- Reproducible training with fixed random seeds
- Scientific evaluation with statistical metrics
- Direct integration with LaTeX-based paper writing

## Implemented Learning Paradigms

- **Tabular Reinforcement Learning**
  - Q-Learning on GridWorld
- **Deep Reinforcement Learning**
  - DQN on CartPole
  - PPO on navigation tasks
- **Continuous Control**
  - PPO-based velocity control
- **Multi-Agent Reinforcement Learning**
  - Cooperative navigation with shared team rewards
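For reference, the tabular setting above rests on the Q-learning update rule: Q(s, a) ← Q(s, a) + α [r + γ max_a' Q(s', a') − Q(s, a)]. The following minimal sketch illustrates that rule on a toy 1-D chain; it is not the repository's implementation, and the environment, reward values, and hyperparameters are illustrative assumptions:

```python
import random
from collections import defaultdict

def q_learning_chain(n_states=6, episodes=500, alpha=0.1, gamma=0.95, eps=0.1, seed=0):
    """Tabular Q-learning on a toy 1-D chain: start at cell 0, goal at the last cell."""
    rng = random.Random(seed)
    q = defaultdict(float)                # Q[(state, action)]; action 0 = left, 1 = right
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # epsilon-greedy action selection
            if rng.random() < eps:
                a = rng.choice((0, 1))
            else:
                a = max((0, 1), key=lambda a_: q[(s, a_)])
            s2 = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
            r = 1.0 if s2 == n_states - 1 else -0.01      # goal reward, small step cost
            # temporal-difference update toward r + gamma * max_a' Q(s', a')
            q[(s, a)] += alpha * (r + gamma * max(q[(s2, 0)], q[(s2, 1)]) - q[(s, a)])
            s = s2
    return q

q = q_learning_chain()
# Greedy policy recovered from the learned Q-table (1 = move right, toward the goal)
greedy = [max((0, 1), key=lambda a: q[(s, a)]) for s in range(5)]
print(greedy)
```

After training, the greedy policy moves right in every non-terminal state, which is optimal on this chain.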

## Environments

The project contains four navigation environments built on a shared base interface:

| File | Description |
| --- | --- |
| `robot_nav_env.py` | Discrete, single-agent, no obstacles |
| `robot_nav_env_obstacles.py` | Discrete, single-agent, obstacle-aware |
| `robot_nav_env_continuous.py` | Continuous state and action control |
| `robot_nav_env_multiagent.py` | Multi-robot cooperative navigation |

All environments follow the Gymnasium API standard.
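The Gymnasium API standard means `reset()` returns `(observation, info)` and `step(action)` returns `(observation, reward, terminated, truncated, info)`. The sketch below shows that interaction contract with a toy stand-in environment (not one of the repository's environments; the class and its dynamics are assumptions for illustration):

```python
import random

class ToyNavEnv:
    """Toy stand-in that follows the Gymnasium API shape:
    reset() -> (obs, info); step(a) -> (obs, reward, terminated, truncated, info)."""

    def __init__(self, size=5, max_steps=50):
        self.size, self.max_steps = size, max_steps

    def reset(self, seed=None):
        self._rng = random.Random(seed)
        self.pos, self.steps = 0, 0
        return self.pos, {}                       # observation, info dict

    def step(self, action):                       # action: 0 = left, 1 = right
        self.pos = max(0, min(self.size - 1, self.pos + (1 if action == 1 else -1)))
        self.steps += 1
        terminated = self.pos == self.size - 1    # reached the goal cell
        truncated = self.steps >= self.max_steps  # hit the episode time limit
        reward = 1.0 if terminated else -0.1
        return self.pos, reward, terminated, truncated, {}

# Standard Gymnasium-style rollout loop
env = ToyNavEnv()
obs, info = env.reset(seed=42)
total, done = 0.0, False
while not done:
    action = 1                                    # placeholder policy: always move right
    obs, reward, terminated, truncated, info = env.step(action)
    total += reward
    done = terminated or truncated
print(obs, round(total, 1))
```

Any agent written against this loop works unchanged across all four environments, which is the point of the shared base interface.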

## Algorithms

- Q-Learning
- Deep Q-Network (DQN)
- Proximal Policy Optimization (PPO)
- Multi-Agent PPO
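As a reminder of what PPO optimizes (standard notation from Schulman et al.'s formulation, not repository-specific symbols), the clipped surrogate objective is:

```latex
L^{\mathrm{CLIP}}(\theta)
  = \mathbb{E}_t\!\left[\min\!\big(r_t(\theta)\,\hat{A}_t,\;
    \operatorname{clip}(r_t(\theta),\,1-\epsilon,\,1+\epsilon)\,\hat{A}_t\big)\right],
\qquad
r_t(\theta) = \frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{\mathrm{old}}}(a_t \mid s_t)}
```

where \(\hat{A}_t\) is the advantage estimate and \(\epsilon\) is the clipping range; the clip keeps each policy update close to the data-collecting policy.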

## Training and Evaluation Pipeline

The project supports:

- Deterministic seeding
- TensorBoard logging
- Automated statistical evaluation
- Mean and standard-deviation reward analysis
- Algorithm comparison plots
- Model checkpointing
- Visual rollout demonstrations
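The mean-and-standard-deviation reward analysis can be sketched as follows. This is a generic illustration: `run_episode`, the episode count, and the stand-in policy are assumptions, not the repository's actual API:

```python
import random
import statistics

def evaluate(run_episode, n_episodes=20, seed=0):
    """Roll out n_episodes and report the mean and sample std of episode returns."""
    returns = [run_episode(seed=seed + i) for i in range(n_episodes)]
    return {
        "mean_return": statistics.mean(returns),
        "std_return": statistics.stdev(returns),   # sample standard deviation
        "episodes": n_episodes,
    }

# Stand-in episode function; a real call would roll out a trained policy in an env
def dummy_episode(seed):
    return random.Random(seed).uniform(0.0, 1.0)

report = evaluate(dummy_episode)
print(report["mean_return"], report["std_return"])
```

Reporting mean ± std over many seeded episodes, rather than a single run, is what makes cross-algorithm comparisons statistically meaningful.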

## Reproducibility

All experiments use:

- Fixed random seeds
- Deterministic environment initialization
- Saved evaluation metrics
- Fully reproducible training scripts
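Deterministic seeding typically means setting every random-number source once at startup. A minimal sketch using only the standard library; a real training script would additionally seed the numerical and deep-learning libraries it uses (e.g. `numpy.random.seed` and `torch.manual_seed`):

```python
import random

def set_seed(seed: int) -> None:
    """Seed the stdlib RNG; extend with numpy/torch seeding in a real project."""
    random.seed(seed)

# Two runs with the same seed produce identical random draws
set_seed(42)
run_a = [random.random() for _ in range(3)]
set_seed(42)
run_b = [random.random() for _ in range(3)]
print(run_a == run_b)
```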

## Scientific Reporting

The repository includes a full LaTeX paper template with:

- Abstract
- Introduction
- Related Work
- Method
- Experiments
- Results
- Conclusion

Automated result table generation is also supported.
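Automated table generation usually amounts to serializing evaluation metrics into LaTeX `tabular` rows. A hypothetical sketch; the function name, column layout, input format, and numbers below are illustrative, not the repository's actual interface or results:

```python
def results_to_latex(results):
    """Render {algorithm: (mean, std)} as a LaTeX tabular of mean +/- std returns."""
    lines = [
        r"\begin{tabular}{lc}",
        r"\hline",
        r"Algorithm & Mean Return $\pm$ Std \\",
        r"\hline",
    ]
    for name, (mean, std) in results.items():
        lines.append(rf"{name} & ${mean:.2f} \pm {std:.2f}$ \\")
    lines += [r"\hline", r"\end{tabular}"]
    return "\n".join(lines)

table = results_to_latex({"PPO": (182.4, 11.3), "DQN": (150.7, 25.9)})
print(table)
```

Generating tables directly from saved metrics keeps the paper's numbers in sync with the experiments.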

## Target Audience

This project is intended for:

- PhD applicants in AI, robotics, and control
- Machine learning researchers
- Robotics engineers
- Reinforcement learning practitioners

## License

This project is released under the MIT License.
