# WindGym: Reinforcement Learning Environment for Wind Farm Control
WindGym provides a dynamic wind farm environment for developing and evaluating reinforcement learning agents for wind farm control. Built on DYNAMIKS and PyWake, it enables agents to learn optimal turbine yaw control strategies for power maximization and load reduction in the presence of complex wake interactions.
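The intuition behind yaw-based wake steering can be sketched with the common cos^p yaw power-loss model. The sketch below is purely illustrative (the exponent and the downstream recovery numbers are assumptions, not WindGym output):

```python
import math

def yawed_power_fraction(yaw_deg: float, p: float = 3.0) -> float:
    """Power retained by a yawed turbine under the common cos^p
    loss model (p is typically fitted somewhere between ~1.8 and 3)."""
    return math.cos(math.radians(yaw_deg)) ** p

# Yawing an upstream turbine 20 degrees costs it ~17% of its own power,
upstream = yawed_power_fraction(20.0)

# but if the deflected wake lets a waked downstream turbine recover from,
# say, 60% to 85% of free-stream power (illustrative numbers), the
# two-turbine farm still comes out ahead:
baseline_farm = 1.00 + 0.60
steered_farm = upstream + 0.85
gain_pct = 100.0 * (steered_farm / baseline_farm - 1.0)
print(f"Farm power gain from wake steering: {gain_pct:+.1f}%")
```

This trade-off, sacrificing power at one turbine to gain more across the farm, is the kind of strategy the RL agents learn.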
📚 [View the full documentation](https://sys.pages.windenergy.dtu.dk/windgym/)
## Features

- Realistic Simulations: High-fidelity wind farm dynamics using DYNAMIKS and PyWake
- Flexible Environments: Support for single-agent and multi-agent scenarios
- Turbulence Modeling: Mann turbulence boxes and realistic wind conditions
- Multiple Turbine Models: From simplified PyWake models to high-fidelity HAWC2 simulations
- Baseline Agents: Pre-built agents including PyWake optimizer, greedy, and random controllers
- Comprehensive Evaluation: Tools for comparing agents across various wind conditions
- Noise & Uncertainty: Built-in support for noisy measurements and uncertainty modeling
- Gymnasium Compatible: Standard RL interface compatible with popular frameworks
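The built-in baseline agents live in `WindGym.Agents`. As a flavour of the `predict`-style interface they expose (this is a hypothetical toy controller, not WindGym's actual base class), a random yaw controller could look like:

```python
import random

class RandomYawController:
    """Toy baseline: one random normalized yaw command per turbine
    each step. Illustrative only -- it mirrors the agent.predict(obs)
    call pattern, not WindGym's actual Agent API."""

    def __init__(self, n_turbines: int):
        self.n_turbines = n_turbines

    def predict(self, obs, deterministic: bool = False):
        # One normalized yaw command in [-1, 1] per turbine.
        action = [random.uniform(-1.0, 1.0) for _ in range(self.n_turbines)]
        return action, None  # (action, state), as SB3-style agents return

agent = RandomYawController(n_turbines=3)
action, _ = agent.predict(obs=None)
```

Returning an `(action, state)` pair keeps such controllers drop-in compatible with evaluation loops written for Stable-Baselines3-style agents.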
## Installation

```bash
git clone https://gitlab.windenergy.dtu.dk/sys/windgym.git
cd windgym
pixi install
pixi shell
```

For detailed installation instructions, see the Installation Guide.
## Quick Start

```python
from WindGym import WindFarmEnv
from py_wake.examples.data.hornsrev1 import V80

# Create a wind farm environment
env = WindFarmEnv(
    turbine=V80(),                  # PyWake turbine model
    x_pos=[0, 500, 1000],           # Turbine x positions (m)
    y_pos=[0, 0, 0],                # Turbine y positions (m)
    config="EnvConfigs/Env1.yaml",  # Configuration file
    n_passthrough=10,               # Number of flow passthroughs
    turbtype="Random",              # Turbulence type
)

# Run a simple episode
obs, info = env.reset()
for _ in range(100):
    action = env.action_space.sample()  # Random action
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        break
env.close()
```

## Evaluating Agents

```python
from WindGym.Agents import PyWakeAgent
from WindGym.FarmEval import FarmEval
from py_wake.examples.data.hornsrev1 import V80

# Turbine positions
x_pos = [0, 500, 1000]
y_pos = [0, 0, 0]

# Create environment with evaluation wrapper
env = FarmEval(
    turbine=V80(),
    x_pos=x_pos,
    y_pos=y_pos,
    config="EnvConfigs/Env1.yaml",
    Baseline_comp=True,  # Enable baseline comparison
)

# Use PyWake optimization as baseline
agent = PyWakeAgent(x_pos=x_pos, y_pos=y_pos, turbine=V80())

# Run evaluation
obs, info = env.reset()
for _ in range(100):
    action, _ = agent.predict(obs)
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        break

# Get results
results = env.get_results()
print(f"Average power: {results['power'].mean():.2f} W")
```

## Examples

See the examples directory for complete demonstrations:
| Example | Description |
|---|---|
| Example 1 | Create and configure environments |
| Example 2 | Evaluate pre-trained RL agents |
| Example 3 | Analyze evaluation results |
| Agent Comparison | Compare multiple agents across wind conditions |
| Noise Examples | Working with measurement uncertainty |
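In the spirit of the result-analysis examples, a minimal sketch of comparing an agent's power trace against a baseline. The traces below are hypothetical stand-ins; the real structure returned by `get_results()` may differ:

```python
from statistics import mean

# Hypothetical per-step farm power traces (W) from an evaluation run
# with baseline comparison enabled -- illustrative numbers only.
agent_power = [4.1e6, 4.3e6, 4.2e6, 4.4e6]
baseline_power = [4.0e6, 4.0e6, 4.1e6, 4.1e6]

gain_pct = 100.0 * (mean(agent_power) / mean(baseline_power) - 1.0)
print(f"Average power: {mean(agent_power) / 1e6:.2f} MW "
      f"({gain_pct:+.1f}% vs baseline)")
```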
## Documentation

- Installation Guide - Setup instructions
- Core Concepts - Understanding WindGym architecture
- Agents - Creating custom control agents
- Evaluation - Testing and comparing agents
- Noise & Uncertainty - Robust agent development
## Contributing

We welcome contributions! Feel free to open a merge request if you have anything you want to add.
## License

WindGym is released under the MIT License.
Copyright (c) 2025 Technical University of Denmark (DTU)
## Links

- Documentation: https://sys.pages.windenergy.dtu.dk/windgym/
- Issues: https://gitlab.windenergy.dtu.dk/sys/windgym/-/issues
- Discussions: https://gitlab.windenergy.dtu.dk/sys/windgym/-/discussions
Developed by the DTU Wind Energy Systems group

