Merged
29 changes: 29 additions & 0 deletions .github/workflows/docker.yml
@@ -0,0 +1,29 @@
name: Docker

on:
push:
branches:
- "master"
# - "dev-ci"
pull_request:
branches: [ "master" ]

workflow_dispatch:

jobs:

docker:
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@v4

- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3

- name: Build Docker Image
run: docker build -t forefire:latest .

- name: Check ForeFire version inside Docker
run: |
docker run --rm forefire:latest forefire -v
64 changes: 37 additions & 27 deletions .github/workflows/main.yml
@@ -3,45 +3,55 @@ name: Linux
on:
push:
branches:
- "master"
- "dev-ci"
pull_request:
branches: [ "master" ]

workflow_dispatch:

jobs:

build-native:

runs-on: ubuntu-latest

steps:
- name: Checkout repository
uses: actions/checkout@v4
with:
lfs: true # Required: repository assets are stored in Git LFS

- name: Install Dependencies (Build + Test)
run: |
sudo apt-get update -y
# Install build tools
sudo apt-get install -y --no-install-recommends build-essential cmake
# Install NetCDF libraries (C++ legacy for build, C base for Python test)
sudo apt-get install -y --no-install-recommends libnetcdf-dev libnetcdf-c++4-dev
# Install Python and pip
sudo apt-get install -y --no-install-recommends python3 python3-pip
# Keep strace installed for potential future debugging
sudo apt-get install -y --no-install-recommends strace
# === ADD ANY OTHER ForeFire build dependencies (e.g. MPI) ===
# pip install Python test dependencies
pip3 install --no-cache-dir lxml xarray netCDF4

- name: ForeFire build
run: sudo bash ./install-forefire.sh -y

- name: Check ForeFire version
run: ./bin/forefire -v

# - name: Test
# run: cd tests && bash run.bash


# --- KEEPING DIAGNOSTIC STEPS ---
- name: Add Build/Runtime Diagnostics
run: |
echo "--- ForeFire Linkage ---"
ldd ./bin/forefire | grep -i 'netcdf' || echo "Warning: NetCDF library not dynamically linked?"
echo "--- Input data.nc Info ---"
ls -lh tests/runff/data.nc
# Install ncdump tool and check the format kind
sudo apt-get install -y --no-install-recommends netcdf-bin
ncdump -k tests/runff/data.nc || echo "Could not check data.nc format"
# --- END DIAGNOSTIC STEPS ---

- name: Run 'runff' Test Script
run: |
cd tests/runff
bash ff-run.bash # Execute the complete test logic script
41 changes: 41 additions & 0 deletions TESTING.md
@@ -0,0 +1,41 @@
# Testing ForeFire

This document describes how to run the automated tests for the ForeFire wildfire simulator. Tests are located within the `tests/` directory.

## Test Dependencies

Running the test verification scripts requires:

* **Python 3**
* **Python Libraries:** `lxml`, `xarray`, `netCDF4` (and their C dependencies like `libnetcdf-dev`).

Install Python libraries via pip:
```bash
pip3 install lxml xarray netCDF4
```

## Running the Core Test (`runff`)

The primary automated test, validated in our CI pipeline, is located in `tests/runff/`. This test verifies core simulation, save/reload functionality, and NetCDF/KML output generation against reference files.

**To run this test manually:**

1. Ensure ForeFire is compiled (e.g., via `install-forefire.sh`).
2. Navigate to the test directory: `cd tests/runff`
3. Execute the test script: `bash ff-run.bash`

**Test Logic:**

The `ff-run.bash` script:
1. Runs an initial simulation (`real_case.ff`) generating NetCDF output (`ForeFire.0.nc`) and a reload file (`to_reload.ff`).
2. Runs a second simulation (`reload_case.ff`) using the reload file, which generates KML output (`real_case.kml`).
3. Uses Python scripts (`compare_kml.py`, `compare_nc.py`) to compare the generated KML and NetCDF files against reference files (`*.ref`) with numerical tolerance, accounting for minor floating-point variations.
4. Exits with status 0 on success, non-zero on failure.
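
The tolerance rule at the heart of both comparison scripts is Python's `math.isclose` with a relative and an absolute tolerance (the NetCDF script applies the same idea dataset-wide via `xarray.testing.assert_allclose`). A minimal sketch, using the same tolerance values as the scripts:

```python
import math

REL_TOL, ABS_TOL = 1e-5, 1e-8  # same values as in the comparison scripts

def values_match(a, b):
    # True when |a - b| <= max(REL_TOL * max(|a|, |b|), ABS_TOL)
    return math.isclose(a, b, rel_tol=REL_TOL, abs_tol=ABS_TOL)

print(values_match(1.0, 1.0 + 5e-6))  # small relative difference -> True
print(values_match(0.0, 1e-7))        # exceeds absolute tolerance -> False
```

This is why tiny floating-point variations between platforms do not fail the test, while real regressions in the output do.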

## Other Tests

The `tests/` directory contains other subdirectories (`mnh_*`, `python`, `runANN`) that exercise specific features such as coupled simulations and the Python bindings. A main `tests/run.bash` script exists but is not yet fully validated in CI; refer to the individual subdirectories for details.

## Contributing

Please see `CONTRIBUTING.md` for guidelines on contributing to ForeFire, including adding new tests. Report any issues via the repository's issue tracker.
2 changes: 1 addition & 1 deletion install-forefire.sh
@@ -37,7 +37,7 @@ BIN_PATH="$PROJECT_ROOT/bin"
mkdir -p build
cd build
cmake ../
make
make -j"$(nproc)"

echo -e "\n=== ForeFire has been installed to $BIN_PATH ===\n"

60 changes: 60 additions & 0 deletions tests/runff/compare_kml.py
@@ -0,0 +1,60 @@
import sys
import math
import xml.etree.ElementTree as ET
import re

# --- Configuration ---
RELATIVE_TOLERANCE = 1e-5
ABSOLUTE_TOLERANCE = 1e-8
# ---

def parse_coordinates(coord_string):
"""Parses the KML coordinate string into a list of [lon, lat, alt] floats."""
points = []
raw_points = [p for p in re.split(r'\s+', coord_string.strip()) if p]
for point_str in raw_points:
try:
coords = [float(c) for c in point_str.split(',')]
if len(coords) == 2: coords.append(0.0)
if len(coords) == 3: points.append(coords)
else: print(f"Warning: Skipping invalid coordinate tuple: {point_str}", file=sys.stderr)
except ValueError: print(f"Warning: Skipping non-numeric coordinate data: {point_str}", file=sys.stderr)
return points

def compare_coordinate_lists(list1, list2, rel_tol, abs_tol):
"""Compares two lists of coordinates point by point with tolerance."""
if len(list1) != len(list2):
print(f"Error: Coordinate lists have different lengths ({len(list1)} vs {len(list2)})", file=sys.stderr)
return False
for i, (p1, p2) in enumerate(zip(list1, list2)):
if not math.isclose(p1[0], p2[0], rel_tol=rel_tol, abs_tol=abs_tol) or \
not math.isclose(p1[1], p2[1], rel_tol=rel_tol, abs_tol=abs_tol) or \
not math.isclose(p1[2], p2[2], rel_tol=rel_tol, abs_tol=abs_tol):
print(f"Error: Mismatch at coordinate point {i+1}:\n Got: {p1}\n Expected: {p2}", file=sys.stderr)
return False
return True

if __name__ == "__main__":
if len(sys.argv) != 3:
print("Usage: python compare_kml.py <generated_kml_file> <reference_kml_file>", file=sys.stderr)
sys.exit(2)
file1_path, file2_path = sys.argv[1], sys.argv[2]
try:
tree1 = ET.parse(file1_path); root1 = tree1.getroot()
coords1_elements = root1.findall('.//{http://www.opengis.net/kml/2.2}coordinates') or root1.findall('.//coordinates')
coords1_text = "".join([c.text for c in coords1_elements if c.text])
points1 = parse_coordinates(coords1_text)

tree2 = ET.parse(file2_path); root2 = tree2.getroot()
coords2_elements = root2.findall('.//{http://www.opengis.net/kml/2.2}coordinates') or root2.findall('.//coordinates')
coords2_text = "".join([c.text for c in coords2_elements if c.text])
points2 = parse_coordinates(coords2_text)

if not points1 or not points2: print("Error: Could not extract coordinates.", file=sys.stderr); sys.exit(1)
if compare_coordinate_lists(points1, points2, RELATIVE_TOLERANCE, ABSOLUTE_TOLERANCE):
print(f"KML comparison successful: {file1_path} matches {file2_path} within tolerance.")
sys.exit(0)
else: print(f"KML comparison failed: {file1_path} differs from {file2_path}.", file=sys.stderr); sys.exit(1)
except ET.ParseError as e: print(f"Error parsing XML: {e}", file=sys.stderr); sys.exit(1)
except FileNotFoundError as e: print(f"Error: File not found - {e}", file=sys.stderr); sys.exit(1)
except Exception as e: print(f"An unexpected error occurred: {e}", file=sys.stderr); sys.exit(1)
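
For illustration, the behavior of the coordinate parser above can be sketched standalone (a simplified re-implementation without the warning branches, so the snippet runs on its own):

```python
import re

def parse_coordinates(coord_string):
    # Each whitespace-separated token is "lon,lat" or "lon,lat,alt".
    points = []
    for token in re.split(r"\s+", coord_string.strip()):
        if not token:
            continue
        coords = [float(c) for c in token.split(",")]
        if len(coords) == 2:
            coords.append(0.0)  # KML altitude defaults to 0
        points.append(coords)
    return points

print(parse_coordinates("9.1,41.9,0 9.2,42.0"))
# -> [[9.1, 41.9, 0.0], [9.2, 42.0, 0.0]]
```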
52 changes: 52 additions & 0 deletions tests/runff/compare_nc.py
@@ -0,0 +1,52 @@
# tests/runff/compare_nc.py
import sys
import xarray as xr

# --- Configuration ---
# Tolerances for the numerical comparison; loosen only if legitimate
# platform differences are shown to exceed them.
RELATIVE_TOLERANCE = 1e-5
ABSOLUTE_TOLERANCE = 1e-8
# ---

if __name__ == "__main__":
if len(sys.argv) != 3:
print("Usage: python compare_nc.py <generated_nc_file> <reference_nc_file>", file=sys.stderr)
sys.exit(2) # Exit code for usage error

file1_path = sys.argv[1]
file2_path = sys.argv[2]

ds1 = None # Initialize to None
ds2 = None # Initialize to None
try:
# Open datasets using xarray. Ensure they are closed afterwards.
ds1 = xr.open_dataset(file1_path, cache=False)
ds2 = xr.open_dataset(file2_path, cache=False)

# Compare the datasets numerically with tolerance
# This compares all data variables by default.
# Use `equal = ds1.equals(ds2)` for exact bit-wise comparison (usually too strict)
# Use `identical = ds1.identical(ds2)` for exact comparison including attributes etc. (also often too strict)
xr.testing.assert_allclose(ds1, ds2, rtol=RELATIVE_TOLERANCE, atol=ABSOLUTE_TOLERANCE)

print(f"NetCDF comparison successful: {file1_path} matches {file2_path} within tolerance.")
sys.exit(0) # Success

except FileNotFoundError as e:
print(f"Error: NetCDF file not found - {e}", file=sys.stderr)
sys.exit(1)
except AssertionError as e:
# This exception is raised by xr.testing.assert_allclose on failure
print(f"NetCDF comparison failed: {file1_path} differs from {file2_path}.", file=sys.stderr)
print(f"Details from xarray: {e}", file=sys.stderr)
sys.exit(1) # Failure - Datasets are different
except Exception as e:
# Catch other potential errors (e.g., invalid NetCDF format)
print(f"An unexpected error occurred during NetCDF comparison: {e}", file=sys.stderr)
sys.exit(1) # Failure
finally:
# Ensure datasets are closed to release file handles
if ds1 is not None:
ds1.close()
if ds2 is not None:
ds2.close()
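
As a self-contained illustration of the `assert_allclose` pattern the script relies on, here is the check on a pair of toy datasets (assumes `numpy` and `xarray` are installed, as in the test dependencies):

```python
import numpy as np
import xarray as xr

ref = xr.Dataset({"temp": ("x", np.array([300.0, 301.0]))})
gen = xr.Dataset({"temp": ("x", np.array([300.0, 301.0 + 1e-6]))})

# Passes silently: the 1e-6 difference is well inside rtol=1e-5.
xr.testing.assert_allclose(gen, ref, rtol=1e-5, atol=1e-8)
print("datasets match within tolerance")
```

A difference outside the tolerances raises `AssertionError`, which is what the script translates into a non-zero exit code.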
48 changes: 48 additions & 0 deletions tests/runff/ff-run.bash
@@ -0,0 +1,48 @@
#!/bin/bash

set -e

GENERATED_KML="real_case.kml"
REFERENCE_KML="real_case.kml.ref"
GENERATED_NC="ForeFire.0.nc"
REFERENCE_NC="ForeFire.0.nc.ref"
GENERATED_RELOAD="to_reload.ff"

# --- Cleanup previous run artifacts ---
echo "Cleaning up previous test artifacts..."
rm -f "$GENERATED_KML" "$GENERATED_NC" "$GENERATED_RELOAD"

# --- Simulation ---
FOREFIRE_EXE="../../bin/forefire"

echo ""
echo "Running ForeFire simulation (real_case)..."
$FOREFIRE_EXE -i real_case.ff

echo "Running ForeFire simulation (reload_case)..."
if [ ! -f "$GENERATED_RELOAD" ]; then
echo "Error: State file '$GENERATED_RELOAD' needed for second run was not created."
exit 1
fi
$FOREFIRE_EXE -i reload_case.ff # Second run: consumes the reload file written by the first run

# --- Verification ---
echo ""
echo "Verifying KML output..."
# Make sure python3 is available and comparison script exists
if ! command -v python3 &> /dev/null; then echo "Error: python3 required."; exit 1; fi
if [ ! -f compare_kml.py ]; then echo "Error: compare_kml.py not found."; exit 1; fi

python3 compare_kml.py "$GENERATED_KML" "$REFERENCE_KML"
echo "KML verification passed."

echo "Verifying NetCDF output..."
if [ ! -f compare_nc.py ]; then echo "Error: compare_nc.py not found."; exit 1; fi
# Check if the NetCDF file was created (likely by the first run)
if [ ! -f "$GENERATED_NC" ]; then echo "Error: Expected NetCDF output file '$GENERATED_NC' not found."; exit 1; fi

python3 compare_nc.py "$GENERATED_NC" "$REFERENCE_NC"
echo "NetCDF verification passed."

echo "All KML and NetCDF verifications passed for runff test."
exit 0