WindGym

Reinforcement Learning Environment for Wind Farm Control

License: MIT · Python 3.7–3.11 · Built with Pixi


Overview

WindGym provides a dynamic wind farm environment for developing and evaluating reinforcement learning agents for wind farm control. Built on DYNAMIKS and PyWake, it enables agents to learn turbine yaw control strategies that maximize power and reduce loads in the presence of complex wake interactions.

📚 View the full documentation here


Key Features

  • Realistic Simulations: High-fidelity wind farm dynamics using DYNAMIKS and PyWake
  • Flexible Environments: Support for single-agent and multi-agent scenarios
  • Turbulence Modeling: Mann turbulence boxes and realistic wind conditions
  • Multiple Turbine Models: From simplified PyWake models to high-fidelity HAWC2 simulations
  • Baseline Agents: Pre-built agents including PyWake optimizer, greedy, and random controllers
  • Comprehensive Evaluation: Tools for comparing agents across various wind conditions
  • Noise & Uncertainty: Built-in support for noisy measurements and uncertainty modeling
  • Gymnasium Compatible: Standard RL interface compatible with popular frameworks

Quick Start

Installation

git clone https://gitlab.windenergy.dtu.dk/sys/windgym.git
cd windgym
pixi install
pixi shell

For detailed installation instructions, see the Installation Guide.
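
To verify the install from inside the pixi shell, a minimal Python smoke test (assuming the environment activated correctly):

# Quick sanity check: the main environment class should import cleanly
from WindGym import WindFarmEnv

print("WindGym imported:", WindFarmEnv.__name__)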

Basic Usage

from WindGym import WindFarmEnv
from py_wake.examples.data.hornsrev1 import V80

# Create a wind farm environment
env = WindFarmEnv(
    turbine=V80(),           # PyWake turbine model
    x_pos=[0, 500, 1000],    # Turbine x positions (m)
    y_pos=[0, 0, 0],         # Turbine y positions (m)
    config="EnvConfigs/Env1.yaml",  # Configuration file
    n_passthrough=10,        # Number of flow passthroughs
    turbtype="Random",       # Turbulence type
)

# Run a simple episode
obs, info = env.reset()
for _ in range(100):
    action = env.action_space.sample()  # Random action
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        break

env.close()
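
Because the environment follows the standard Gymnasium API, it plugs directly into common RL training libraries. A minimal training sketch, assuming stable-baselines3 is installed (it is not shipped with WindGym) and that the default observation space is a flat vector:

from WindGym import WindFarmEnv
from py_wake.examples.data.hornsrev1 import V80
from stable_baselines3 import PPO

# Recreate the environment (the one above was closed)
env = WindFarmEnv(
    turbine=V80(),
    x_pos=[0, 500, 1000],
    y_pos=[0, 0, 0],
    config="EnvConfigs/Env1.yaml",
)

# "MlpPolicy" assumes flat vector observations; dict observations
# would need "MultiInputPolicy" instead.
model = PPO("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=10_000)

# Query the trained policy for an action
obs, info = env.reset()
action, _ = model.predict(obs, deterministic=True)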

Evaluate a Baseline Agent

from WindGym.Agents import PyWakeAgent
from WindGym.FarmEval import FarmEval
from py_wake.examples.data.hornsrev1 import V80

# Turbine positions
x_pos = [0, 500, 1000]
y_pos = [0, 0, 0]

# Create environment with evaluation wrapper
env = FarmEval(
    turbine=V80(),
    x_pos=x_pos,
    y_pos=y_pos,
    config="EnvConfigs/Env1.yaml",
    Baseline_comp=True,  # Enable baseline comparison
)

# Use PyWake optimization as baseline
agent = PyWakeAgent(x_pos=x_pos, y_pos=y_pos, turbine=V80())

# Run evaluation
obs, info = env.reset()
for _ in range(100):
    action, _ = agent.predict(obs)
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        break

# Get results
results = env.get_results()
print(f"Average power: {results['power'].mean():.2f} W")

Examples

See the examples directory for complete demonstrations:

  • Example 1: Create and configure environments
  • Example 2: Evaluate pre-trained RL agents
  • Example 3: Analyze evaluation results
  • Agent Comparison: Compare multiple agents across wind conditions
  • Noise Examples: Working with measurement uncertainty

[Flow field animation]


Documentation

See the full documentation for detailed guides, including the Installation Guide referenced above.

Contributing

We welcome contributions! If you have something to add, feel free to open a merge request.


License

WindGym is released under the MIT License.

Copyright (c) 2025 Technical University of Denmark (DTU)


Support


Developed by the DTU Wind Energy Systems group. This GitHub repository is a mirror of https://gitlab.windenergy.dtu.dk/sys/windgym.
