163 changes: 162 additions & 1 deletion hhw_brick/applications/boiler_cyc/README.md
@@ -2,7 +2,168 @@

## Overview

This application identifies whether boiler is short cycling
This application analyzes whether the boiler is short cycling (i.e., turning on and off repeatedly over a short period of time). Frequent cycling increases equipment wear, reduces efficiency, raises fuel consumption, and can greatly shorten the boiler's lifespan. Typically, the method identifies:
- Periods of low firing rate.
- Supply water temperature fluctuations under low-load conditions.
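The on/off counting idea can be sketched with pandas. The helper below is an illustrative simplification, not the application's actual implementation: it counts off-to-on transitions per day from a hypothetical boiler status series.

```python
import pandas as pd

def count_daily_cycles(status: pd.Series) -> pd.Series:
    """Count off->on transitions per calendar day.

    `status` is a 0/1 boiler on/off series indexed by timestamp.
    """
    starts = (status.diff() == 1).astype(int)  # off -> on edges only
    return starts.groupby(status.index.normalize()).sum()

# Fabricated example: a boiler that starts three times in one day
idx = pd.date_range("2024-01-01", periods=8, freq="15min")
status = pd.Series([0, 1, 0, 1, 0, 1, 1, 0], index=idx)
print(count_daily_cycles(status))  # one row: 3 starts on 2024-01-01
```

A real detector would compare this daily count against a threshold such as `N_CYC_THR`.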

## Features

✅ **Minimum turndown ratio detection**: Estimates minimum turndown ratio if the boiler firing rate is available.
✅ **Daily operation cycle estimation**: Calculates the number of on-off cycles based on the minimum turndown ratio.
✅ **Detect potential short cycling**: Identifies potential short cycling based on available measurements.
✅ **Visualization plots**: Generates scatter plots of operating conditions with potential short cycling labeled, plus histogram summaries of daily operating cycles and firing rates.

## Installation

```bash
pip install -r requirements.txt
```

Main dependencies:
- pandas
- numpy
- matplotlib
- seaborn
- rdflib
- brickschema
- pyyaml

## Quick start

### 1. Check if building qualifies

To run the analysis, the system needs:
- Preferred: boiler firing rate and a known minimum turndown ratio
- Alternative: supply and return water temperature, boiler operating status, and outdoor temperature
- All measurements should be at sufficiently fine resolution (at least 15-minute intervals)

```python
from hhw_brick.applications.boiler_cyc.app import qualify

qualified, qualify_result = qualify('path/to/brick_model.ttl')
```

### 2. Run basic analysis
```bash
python app.py brick_model.ttl timeseries_data.csv
```

### 3. Use custom configuration

```bash
python app.py brick_model.ttl timeseries_data.csv --config config.yaml
```

## Configuration options

### Analysis parameters

- `DET_THR` (default: 3.2): minimum supply-return ΔT (°C)
- `CYC_THR` (default: 3): consecutive on-off transitions counted as cycling
- `SPT_THR` (default: 2): supply-close-to-setpoint threshold (°C)
- `TOUT_MILD` (default: 12): mild outdoor-temperature criterion (°C)
- `N_CYC_THR` (default: 60): daily cycle threshold (~5 cycles/hr over 12 hr)
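As an illustration, these parameters might appear in `config.yaml` like this (the `analysis` section name and exact layout are assumptions; consult the shipped `config.yaml` for the real schema):

```yaml
analysis:
  DET_THR: 3.2   # (°C) minimum supply-return deltaT
  CYC_THR: 3     # consecutive on-off transitions counted as cycling
  SPT_THR: 2     # (°C) supply-close-to-setpoint threshold
  TOUT_MILD: 12  # (°C) mild outdoor-temperature criterion
  N_CYC_THR: 60  # daily cycle threshold (~5 cycles/hr over 12 hr)
```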

### Output settings

- `save_results`: Whether to save analysis results (default: true)
- `output_dir`: Directory for output files (default: './results')
- `export_format`: 'csv'
- `generate_plots`: Whether to generate visualizations (default: true)
- `plot_format`: 'png', 'pdf', or 'svg' (default: 'png')

### Time range

- `start_time`: Start date in 'YYYY-MM-DD' format (null = use all data)
- `end_time`: End date in 'YYYY-MM-DD' format (null = use all data)

## Output files

### Timeseries file
- `fire_*.csv`: flags potential short cycling at each timestep (if firing-rate measurements are available).
- `daily_fire_cycles_*.csv`: summary of the number of cycles estimated for each day.
- `hwst.csv`: flags potential short cycling at each timestep (if firing-rate measurements are not available but supply and return water temperature and boiler operating status are).
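The exported CSVs can be post-processed with pandas. In this sketch the rows are fabricated to mirror the `hwst.csv` schema described above; in practice you would `pd.read_csv` the file from the output directory.

```python
import pandas as pd

# Fabricated rows mirroring the hwst.csv schema
flags = pd.DataFrame({
    "datetime_UTC": pd.to_datetime(["2024-01-01 00:00", "2024-01-01 00:15"]),
    "sup": [70.0, 65.0],     # supply water temperature
    "ret": [60.0, 62.0],     # return water temperature
    "t_out": [10.0, 11.0],   # outdoor air temperature
    "flag_hwst": [0, 1],     # 1 = potential short cycling
})

short_cycling = flags[flags["flag_hwst"] == 1]
print(f"{len(short_cycling)} of {len(flags)} timesteps flagged")  # 1 of 2
```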

### Visualization plots
- `daily_cyc_*.png`: histogram of identified daily operation cycles over the study period.
- `fire_rate_*.png`: histogram showing the firing-rate distribution and estimated minimum turndown ratio.
- `firing_wt_results_*.png` (if firing rate is available) or `hwst_results.png` (if not): scatter plots of hot water plant operating conditions (heating load and outdoor weather conditions).

## Supported sensor types

### Supply temperature sensors
- `Supply_Water_Temperature_Sensor`
- `Leaving_Hot_Water_Temperature_Sensor`
- `Hot_Water_Supply_Temperature_Sensor`

### Return temperature sensors
- `Return_Water_Temperature_Sensor`
- `Entering_Hot_Water_Temperature_Sensor`
- `Hot_Water_Return_Temperature_Sensor`

### Boiler firing rate sensors
- `Firing_Rate_Sensor`

### Boiler operating status
- `Enable_Status`

### Outdoor temperature sensors
- `Outside_Air_Temperature_Sensor`

## Key workflow

```python
import argparse

from hhw_brick.applications.boiler_cyc.app import (
    load_config, load_df, run_hwst_analysis, run_fire_analysis,
)


def main():
    parser = argparse.ArgumentParser(description="Boiler short cycling analysis")
    parser.add_argument("brick_model")
    parser.add_argument("timeseries_data")
    parser.add_argument("--config", default=None)
    parser.add_argument("--output_dir", default=None)
    args = parser.parse_args()

    # Load config
    config = load_config(args.config)
    if args.output_dir:
        config["output"]["output_dir"] = args.output_dir

    # Run analysis
    print(f"\n{'='*60}")
    print("Boiler Short Cycling Analysis")
    print(f"{'='*60}")
    print(f"Brick model: {args.brick_model}")
    print(f"Timeseries: {args.timeseries_data}")
    print(f"{'='*60}")

    print("Running HWST analysis...")
    df, app = load_df(args.brick_model, args.timeseries_data, config)

    if app == 0:
        return "[FAIL] Analysis cannot proceed due to no sensor data."
    elif app == 1:
        run_hwst_analysis(df, config, plot_options=True)
    else:
        # Run the firing-rate analysis once per boiler column
        fire_columns = [col for col in df.columns if 'fire' in col]
        for fire_col in fire_columns:
            sub_df = df[['datetime_UTC', 'sup', 'ret', 't_out', fire_col]].copy()
            sub_df = sub_df.rename(columns={fire_col: 'value'})
            sub_df['boiler'] = fire_col
            run_fire_analysis(sub_df, config, plot_options=True)

    # Process notification
    print(f"\n{'='*60}")
    print("[SUCCESS] Analysis completed successfully!")
    print(f" Results saved to: {config['output']['output_dir']}")
    print(f"{'='*60}\n")
```

## Troubleshooting

### Building Not Qualified

**Error**: "Building NOT qualified - Missing: Supply and return temperature sensors or boiler operating status sensors"

**Fix**: Verify that the Brick model declares the required points listed under "Supported sensor types", then re-run `qualify`.

### No Data Points Found

**Error**: "Failed to map sensors to data columns"

**Fix**: Check that the timeseries CSV contains columns corresponding to the sensors declared in the Brick model.

## License

71 changes: 48 additions & 23 deletions hhw_brick/applications/boiler_cyc/app.py
@@ -196,7 +196,7 @@ def qualify(brick_model_path):
else:
print(f"[FAIL] Building NOT qualified")
print(f" Missing: Boiler firing rate sensor\n")
qualified = qualified and False
qualified = qualified or False
qualified_result.update({})

return qualified, qualified_result
@@ -312,11 +312,11 @@ def run_hwst_analysis(dataframe, config, plot_options=False):
)

# Export csv file
df[['tag', 'datetime_UTC', 'sup', 'sup_stpt', 'ret', 't_out', 'flag_hwst']].to_csv(csv_dir / 'hwst_spt.csv', index=False)
df[['datetime_UTC', 'sup', 'sup_stpt', 'ret', 't_out', 'flag_hwst']].to_csv(csv_dir / 'hwst_spt.csv', index=False)

# Plot
if plot_options:
fig, ax = plt.subplots(figsize=(8, 8), dpi=150)
fig, ax = plt.subplots(figsize=(12, 8), dpi=150)
m = (df["flag_hwst"] == 0)
ax.scatter(df.loc[m, "t_out"], df.loc[m, "deltaT"], s=4, c=LS_COLORS["Others"], alpha=0.2, label="Others")
m = (df["flag_hwst"] == 1)
@@ -343,7 +343,7 @@ def run_hwst_analysis(dataframe, config, plot_options=False):

# Plot
if plot_options:
fig, ax = plt.subplots(figsize=(8, 8), dpi=150)
fig, ax = plt.subplots(figsize=(12, 8), dpi=150)
m = (df["flag_hwst"] == 0)
ax.scatter(df.loc[m, "t_out"], df.loc[m, "deltaT"], s=4, c=LS_COLORS["Others"], alpha=0.2, label="Others")
m = (df["flag_hwst"] == 1)
Expand All @@ -360,7 +360,7 @@ def run_hwst_analysis(dataframe, config, plot_options=False):
plt.close(fig)

# Export csv file
df[['tag', 'datetime_UTC', 'sup', 'sup_stpt', 'ret', 't_out', 'flag_hwst']].to_csv(csv_dir / 'hwst.csv', index=False)
df[['datetime_UTC', 'sup', 'ret', 't_out', 'flag_hwst']].to_csv(csv_dir / 'hwst.csv', index=False)

return df

@@ -436,7 +436,7 @@ def run_fire_analysis(dataframe, config, plot_options=False):
df["cyc"] = np.where(df["value"] < thr, 1, 0)

df["flag_fire"] = np.where((df["value"] > 0) & (df["value"] < thr), 1, 0)
df[['datetime_UTC', 'value', 'flag_fire']].to_csv(csv_dir / f"fire_{boiler}.csv", index=False)
df[['datetime_UTC', 'sup', 'ret', 't_out', 'value', 'flag_fire']].to_csv(csv_dir / f"fire_{boiler}.csv", index=False)

# rle duplicate adjustment (collapse long zero runs)
for dt_val, grp in df.sort_values("datetime_UTC").groupby("dt"):
@@ -455,7 +455,6 @@ def run_fire_analysis(dataframe, config, plot_options=False):
"exceed": 1 if daily_cyc > N_CYC_THR else 0,
})

# import pdb; pdb.set_trace()
daily_cyc_df = pd.DataFrame(daily_cycle_rows)

# Plot histogram of daily cycles
@@ -513,27 +512,54 @@ def load_df(brick_model_path, timeseries_data_path, config):
print(f"{'='*60}\n")

g, df = load_data(brick_model_path, timeseries_data_path)

print(f"[OK] Loaded {len(df)} data points")
print(f"[OK] Time range: {df.index.min()} to {df.index.max()}\n")

# Map sensors to columns
supply_uri = qualify_result["supply"]
return_uri = qualify_result["return"]
oper_uri = qualify_result["oper"]
firing_uri = qualify_result["firing_rate"]
oat_uri = qualify_result["oat"]

sensor_mapping = map_sensors_to_columns(g, [supply_uri, return_uri, oper_uri, oat_uri] + firing_uri, df)
# Map sensors to columns (use safe retrieval in case some keys are missing)
supply_uri = qualify_result.get("supply")
return_uri = qualify_result.get("return")
oper_uri = qualify_result.get("oper")
firing_uri = qualify_result.get("firing_rate", [])
oat_uri = qualify_result.get("oat")

# Normalize firing_uri to a list (it may be absent, a single string, or a list)
if firing_uri is None:
firing_uri = []
elif isinstance(firing_uri, str):
firing_uri = [firing_uri]

# Build the requested sensor list excluding any None values
requested_sensors = []
for uri in (supply_uri, return_uri, oper_uri, oat_uri):
if uri:
requested_sensors.append(uri)
requested_sensors.extend(firing_uri)

sensor_mapping = map_sensors_to_columns(g, requested_sensors, df)
app = 0
# If no sensors were mapped at all, fail early (preserve previous behaviour)
if len(sensor_mapping) == 0:
print("[FAIL] Failed to map sensors to data columns\n")
return None, 0  # no dataframe; app code 0 tells the caller to abort
elif set(firing_uri).issubset(sensor_mapping.keys()):

# Prefer the firing-rate path if all firing sensors are available
try:
firing_set = set(firing_uri)
except Exception:
firing_set = set()

if firing_set and firing_set.issubset(sensor_mapping.keys()):
print(f"[Firing Rate Sensors] Mapped: {firing_uri}\n")
app = 2
else:
# Otherwise check for supply/return/oper/oat sensors
elif all(uri in sensor_mapping for uri in (supply_uri, return_uri, oper_uri, oat_uri)):
print(f"[Supply/Return/Oper/OAT Sensors] Mapped: {supply_uri}, {return_uri}, {oper_uri}, {oat_uri}\n")
app = 1
else:
# Neither full firing-rate nor full plant-sensor set available
print("[WARN] Required sensors for either analysis path are not fully available.\n")
app = 0

# Extract and filter data
df_extracted = extract_data_columns(
@@ -591,20 +617,19 @@ def main():
print("Running HWST analysis...")

df, app = load_df(args.brick_model, args.timeseries_data, config)

# import pdb; pdb.set_trace()

if app == 0:
return(f"[FAIL] Analysis cannot proceed due to no sensor data.")
elif app == 1:
run_hwst_analysis(df, config, plot_options=False)
run_hwst_analysis(df, config, plot_options=True)
else:
# Get all fire columns
fire_columns = [col for col in df.columns if col != 'datetime_UTC']
fire_columns = [col for col in df.columns if 'fire' in col]
for fire_col in fire_columns:
sub_df = df[['datetime_UTC', fire_col]].copy()
sub_df = df[['datetime_UTC', 'sup', 'ret', 't_out', fire_col]].copy()
sub_df = sub_df.rename(columns={fire_col: 'value'})
sub_df['boiler'] = fire_col
run_fire_analysis(sub_df, config, plot_options=False)
run_fire_analysis(sub_df, config, plot_options=True)

# Process notification
print(f"\n{'='*60}")
3 changes: 0 additions & 3 deletions hhw_brick/applications/boiler_cyc/config.yaml
@@ -27,9 +27,6 @@ output:
# Plot format: png, pdf, svg
plot_format: "png"

# Whether to generate interactive HTML visualizations with Plotly
generate_plotly_html: true

# Time range (optional)
time_range:
# Start time in YYYY-MM-DD format (null = use all data)