This project trains a multi-agent RL model in the MAgent2 battle environment. The code sets up the agents and evaluates them against three opponent types:
- Random Agents
- A Pretrained Agent
- A Final (stronger) Agent
For each setting, performance is measured by total rewards and win rates.
The figure below shows the results when the VDN mixer model plays against random opponents, the pretrained agent, and the final (stronger) pretrained agent.
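The two reported metrics can be sketched as a small aggregation over per-episode results. This is an illustrative helper, not code from the repo; the input lists (one total team reward and one win flag per evaluation episode) are hypothetical:

```python
import numpy as np

def summarize_matchup(episode_rewards, episode_wins):
    """Aggregate per-episode results into mean reward and win rate.

    `episode_rewards`: one total team reward per evaluation episode.
    `episode_wins`: 1 if our team won that episode, else 0.
    (Illustrative inputs, not the notebooks' actual API.)
    """
    rewards = np.asarray(episode_rewards, dtype=float)
    wins = np.asarray(episode_wins, dtype=float)
    return {"mean_reward": rewards.mean(), "win_rate": wins.mean()}

# Example: 5 evaluation episodes against some opponent.
stats = summarize_matchup([12.0, 8.5, 15.2, 9.1, 11.0], [1, 0, 1, 1, 1])
```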
```shell
pip install -r requirements.txt
```
- `dqn_train/dqn.ipynb`: Main training script for the DQN-based agent.
- `dqn_train/dqn_noise_network.ipynb`: Main training script for the DQN agent with a noisy network.
- `vdn_mixer_train/vdn_mixer.ipynb`: Main training script for the VDN agent.
- `test_model/eval_test/test_dqn(others same).ipynb`: Evaluation code against different opponents; the other test notebooks follow the same pattern.

To run a training notebook, use "Copy & Edit" on Kaggle, upload the `red.pt` file to Kaggle, and update its path in the source code. Checkpoints are available in the source at `model/dqn/checkpoints` and `model/vdn_mixer/`.
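As background for the VDN notebook, the core idea of value decomposition can be shown in a few lines: the team Q-value is the sum of the individual agents' Q-values, while execution stays decentralized. This is a minimal numpy sketch with illustrative shapes, not the notebook's implementation:

```python
import numpy as np

def vdn_joint_q(per_agent_qs):
    """VDN mixer: the team Q-value is the sum of individual Q-values.

    `per_agent_qs` is an (n_agents, n_actions) array of each agent's
    Q-values for its current observation (illustrative, not the repo's API).
    """
    per_agent_qs = np.asarray(per_agent_qs, dtype=float)
    # Decentralized greedy execution: each agent argmaxes its own head.
    actions = per_agent_qs.argmax(axis=1)
    # Centralized value: Q_tot = sum_i Q_i(a_i), the quantity VDN trains.
    q_tot = per_agent_qs[np.arange(len(per_agent_qs)), actions].sum()
    return actions, q_tot

# Two agents, three actions each; q_tot sums the chosen entries.
acts, q_tot = vdn_joint_q([[0.1, 0.9, 0.2],
                           [0.5, 0.3, 0.8]])
```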
The report `Survey_on_MAgent2_Battle_Using_DQN_Variants_Report_Group22.pdf` covers:
- Experimental setup
- DQN variant details
- Performance comparison and discussion
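The DQN variants compared in the report all train toward the standard temporal-difference target, y = r + γ·max_a Q_target(s′, a). A minimal numpy sketch of that target (function name and inputs are illustrative, not the repo's code):

```python
import numpy as np

def dqn_td_target(reward, done, next_q_values, gamma=0.99):
    """Standard DQN TD target: y = r + gamma * max_a Q_target(s', a).

    `next_q_values` is the target network's output for s' (illustrative).
    The bootstrap term is masked out on terminal transitions.
    """
    bootstrap = (1.0 - float(done)) * gamma * np.max(next_q_values)
    return float(reward) + bootstrap

y = dqn_td_target(reward=1.0, done=False, next_q_values=[0.2, 0.5, 0.1])
# y ≈ 1.0 + 0.99 * 0.5 = 1.495
```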
For additional demos, see the video folder.
For environment setup and agent interaction details, refer to the MAgent2 documentation.



