A reinforcement learning agent that plays the classic Snake game with real-time rendering and score tracking.
This project implements the classic Snake game with an AI agent trained using reinforcement learning. The agent learns to choose which direction to move the snake so as to maximize its score while avoiding collisions with the walls and its own body.
The system uses a pre-trained model (model_1720984099_77.pth) that has been trained to play the game efficiently.
- 🎮 Real-time Snake game with Pygame rendering
- 💡 Reinforcement learning agent with defined reward structure
- 📊 Score tracking and best score monitoring
- 🧠 Pre-trained model for immediate gameplay
- 🔍 Configurable game dimensions (block size, width, height)
- 📦 Modular architecture with clear separation of concerns
The AI agent uses a reinforcement learning approach with the following reward structure:
| Action | Reward |
|---|---|
| Eats food | +10 |
| Normal movement | -0.01 |
| Game over (collision) | -10 |
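The table above maps directly onto a reward function. Here is a minimal sketch; the constants come from the table, but the function name and signature are hypothetical, not the project's actual API:

```python
# Hypothetical reward function mirroring the reward table;
# the project's real code may structure this differently.
def compute_reward(ate_food: bool, game_over: bool) -> float:
    if game_over:    # collision with a wall or the snake's own body
        return -10.0
    if ate_food:     # the snake reached the food
        return 10.0
    return -0.01     # small step penalty to discourage aimless wandering
```

The small negative reward per step is a common trick: it pushes the agent toward the food along short paths instead of circling indefinitely.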
The agent learns by:
- Observing the game state
- Choosing a direction to move
- Receiving rewards based on the outcome
- Updating its strategy to maximize future rewards
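The observe/act/reward/update cycle above can be sketched with tabular Q-learning. This is an illustration only: the environment here is a toy 1-D walk, not the actual Snake game, and the project's agent may use a neural network rather than a Q-table.

```python
import random

# Toy environment standing in for the game: walk left/right on a line,
# +10 for reaching +3, -10 for reaching -3, -0.01 per step (as in the
# reward table above).
class ToyEnv:
    def __init__(self):
        self.pos = 0

    def reset(self):
        self.pos = 0
        return self.pos

    def step(self, action):  # action: 0 = left, 1 = right
        self.pos += 1 if action == 1 else -1
        done = abs(self.pos) >= 3
        reward = 10.0 if self.pos >= 3 else (-10.0 if self.pos <= -3 else -0.01)
        return self.pos, reward, done

q = {}  # state -> [Q(left), Q(right)]
alpha, gamma, epsilon = 0.5, 0.9, 0.1
env = ToyEnv()

for episode in range(200):
    state, done = env.reset(), False
    while not done:
        q.setdefault(state, [0.0, 0.0])
        # Observe the state, then choose an action (epsilon-greedy)
        if random.random() < epsilon:
            action = random.randrange(2)
        else:
            action = q[state].index(max(q[state]))
        # Receive a reward for the outcome...
        next_state, reward, done = env.step(action)
        q.setdefault(next_state, [0.0, 0.0])
        # ...and update the strategy to maximize future rewards
        q[state][action] += alpha * (
            reward + gamma * max(q[next_state]) - q[state][action]
        )
        state = next_state
```

After training, the Q-values at the start state favor moving right (toward the +10 reward), which is exactly the "update its strategy to maximize future rewards" step in miniature.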
- Install Python 3.7+ (recommended)
- Install Pygame:
```bash
pip install pygame
```

Simply run the main application:

```bash
python main.py
```

This will launch the Snake game with the pre-trained AI agent. The game will automatically:
- Render the snake on screen
- Track score and best score
- Play with the pre-trained model
The current model used is: model_1720984099_77.pth
This model was trained to achieve a best score of 77 in the game environment with the following parameters:
- Block size: 10
- Game width: 40
- Game height: 40
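From these parameters the rendered window size can be derived. This assumes the usual Pygame layout where the pixel size is the block size times the number of blocks; the variable names below are illustrative, not the project's:

```python
# Derived window dimensions from the stated training parameters
# (assumption: window pixels = block size * number of blocks).
block_size, width_blocs, height_blocs = 10, 40, 40
window_w = block_size * width_blocs    # 400 px wide
window_h = block_size * height_blocs   # 400 px tall
```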
You can customize the game parameters in `main.py`:
```python
game = SnakeGame(bloc_size=10, width_bloc=40, height_blocs=40)
```

Change the values to adjust the game size and difficulty.
This project is licensed under the MIT License - see the LICENSE file for details.
Note: This implementation is a research prototype for demonstrating reinforcement learning in a simple game environment. The model and training parameters are specific to this implementation and may not work with other Snake game implementations.