This project is a Chess Engine powered by a Deep Neural Network. It is designed to learn and evaluate chess positions to determine optimal moves, departing from traditional, hand-crafted evaluation functions.
The network can be trained with reinforcement learning (an AlphaZero-style approach that combines Monte Carlo Tree Search (MCTS) with neural policy and value heads) or with supervised learning on expert games. From any board state it predicts both a distribution over moves (policy) and the win probability (value).
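Before either prediction can be made, the board state has to be turned into a tensor the network can consume. A minimal sketch of one common encoding, 12 binary 8×8 planes (one per color/piece-type pair), is shown below; the plane ordering and the `encode_fen` helper are illustrative assumptions, not this project's actual API:

```python
# One 8x8 plane per (color, piece-type) pair: 6 white + 6 black = 12 planes.
# Illustrative encoding; the real project may use extra planes (castling,
# side to move, move counters) as AlphaZero does.
PIECES = "PNBRQKpnbrqk"  # uppercase = white, lowercase = black

def encode_fen(fen: str) -> list:
    """Encode the board field of a FEN string as a 12x8x8 binary tensor."""
    planes = [[[0.0] * 8 for _ in range(8)] for _ in range(12)]
    board = fen.split()[0]
    for rank, row in enumerate(board.split("/")):
        file = 0
        for ch in row:
            if ch.isdigit():
                file += int(ch)  # digit = that many empty squares
            else:
                planes[PIECES.index(ch)][rank][file] = 1.0
                file += 1
    return planes

start = encode_fen("rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1")
print(len(start), len(start[0]), len(start[0][0]))  # → 12 8 8
print(int(sum(v for p in start for row in p for v in row)))  # → 32 pieces
```

Stacking such planes gives the CNN an image-like input, which is why convolutional architectures transfer so naturally to chess.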
## ✨ Key Features

- **Deep Learning Evaluation:** Uses a Convolutional Neural Network (CNN) to evaluate positions instead of hand-crafted heuristics.
- **Move Selection:** Implements a search algorithm (e.g., MCTS or Minimax with Alpha-Beta Pruning) guided by the network's policy and value outputs.
- **Customizable Architecture:** The network architecture is modular and easy to modify for experimentation.
- **Data Generation Pipeline:** Includes scripts for self-play and for processing large datasets of human or engine games into training data.
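The interplay between search and network in the move-selection feature can be sketched with the PUCT rule used by AlphaZero-style MCTS. The numbers and node fields below are toy values, and `c_puct` is an assumed exploration constant, not a setting from this repository:

```python
import math

def puct_score(child_value, child_visits, child_prior, parent_visits, c_puct=1.5):
    """AlphaZero-style PUCT: exploitation term Q plus prior-weighted exploration U."""
    q = child_value / child_visits if child_visits else 0.0
    u = c_puct * child_prior * math.sqrt(parent_visits) / (1 + child_visits)
    return q + u

# Toy example: choose among three candidate moves at a node visited 10 times.
# `prior` comes from the policy head; `value`/`visits` accumulate during search.
children = {
    "e2e4": dict(value=3.0, visits=5, prior=0.5),
    "d2d4": dict(value=2.0, visits=4, prior=0.3),
    "g1f3": dict(value=0.0, visits=0, prior=0.2),  # unvisited: pure exploration
}
best = max(children, key=lambda m: puct_score(
    children[m]["value"], children[m]["visits"], children[m]["prior"],
    parent_visits=10))
print(best)  # → e2e4
```

Note how the unvisited move still gets a nonzero score through its prior: this is exactly how the policy head steers the search toward promising branches it has not yet explored.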