diff --git a/README.md b/README.md
index a866944..feddbbc 100644
--- a/README.md
+++ b/README.md
@@ -35,10 +35,10 @@
 ## ✨ Key Features
 
-- Modify parameters via the `calibration.cal` file.
-- Run SWAT+ simulations.
-- Perform sensitivity analysis on model parameters using the [SALib](https://github.com/SALib/SALib) Python package, with support for parallel computation.
-- Compute performance metrics using widely adopted indicators and derive Sobol sensitivity indices.
+- Modify model parameters through the `calibration.cal` file.
+- Run SWAT+ simulations seamlessly.
+- Compute performance metrics using widely adopted indicators.
+- Perform sensitivity analysis on model parameters using the [SALib](https://github.com/SALib/SALib) Python package, with support for parallel computation; currently, only Sobol sampling and Sobol indices are supported.
 
 ## 📥 Install pySWATPlus
 
diff --git a/docs/changelog.md b/docs/changelog.md
index dd23909..27dfedf 100644
--- a/docs/changelog.md
+++ b/docs/changelog.md
@@ -2,28 +2,34 @@
 
 ## Version 1.2.0 (Month DD, YYYY, not released yet)
 
-- Introduced the `pySWATPlus.DataManager` class with the following methods to support data processing workflows:
-
-    - `read_sensitive_dfs`: Reads sensitivity simulation data generated by the `simulation_by_sobol_sample` method in the `pySWATPlus.SensitivityAnalyzer` class.
-    - `simulated_timeseries_df`: Moved from the `pySWATPlus.SensitivityAnalyzer` class for improved modularity.
+- All SWAT+ simulations with modified parameters are now configured through the `calibration.cal` file, eliminating the need to read and modify individual input files.
 
 - Introduced the `pySWATPlus.PerformanceMetrics` class to compute performance metrics between simulated and observed values using the following indicators:
 
-  - Nash–Sutcliffe Efficiency
-  - Kling–Gupta Efficiency
-  - Mean Squared Error
-  - Root Mean Squared Error
-  - Percent Bias
-  - Mean Absolute Relative Error
+    - Nash–Sutcliffe Efficiency
+    - Kling–Gupta Efficiency
+    - Mean Squared Error
+    - Root Mean Squared Error
+    - Percent Bias
+    - Mean Absolute Relative Error
+
+- Updated the `pySWATPlus.SensitivityAnalyzer` class:
 
-- Added the `sobol_indices` method to the `pySWATPlus.SensitivityAnalyzer`** class for computing Sobol indices using the available indicators in the `pySWATPlus.PerformanceMetrics` class.
+    - Renamed the method `simulation_by_sobol_sample` to `simulation_by_sample_parameters` to standardize naming and allow different sampling techniques in the future.
+    - Added `parameter_sensitivity_indices` for computing sensitivity indices using the available indicators in the `pySWATPlus.PerformanceMetrics` class.
+
+- Introduced the `pySWATPlus.DataManager` class with methods to support data processing workflows:
+
+    - `read_sensitive_dfs`: Reads sensitivity simulation data generated by the `simulation_by_sample_parameters` method in the `pySWATPlus.SensitivityAnalyzer` class.
+    - `simulated_timeseries_df`: Moved from the `pySWATPlus.SensitivityAnalyzer` class to improve modularity.
 
-- Added new methods to the `pySWATPlus.TxtinoutReader` class:
+- Updated the `pySWATPlus.TxtinoutReader` class:
 
-    - `set_simulation_timestep`: Modifies the simulation timestep in the `time.sim` file.
-    - `set_print_interval`: Modifies the print interval in the `print.prt` file.
+    - Added `set_simulation_timestep` to modify the simulation timestep in the `time.sim` file.
+    - Added `set_print_interval` to modify the print interval in the `print.prt` file.
+ - Added `set_print_period` to modify the print period in the `print.prt` file for recording simulated results. + - Renamed `set_begin_and_end_date` to `set_simulation_period` for better consistency. -- All SWAT+ simulations with modified parameters are now configured through the `calibration.cal` file, eliminating the need to read and modify individual input files. ## Version 1.1.0 (August 26, 2025) diff --git a/docs/index.md b/docs/index.md index dd7809a..5a3eadc 100644 --- a/docs/index.md +++ b/docs/index.md @@ -8,9 +8,10 @@ ## ✨ Key Features -- Modify parameters via the `calibration.cal` file. -- Run SWAT+ simulations. -- Perform sensitivity analysis on model parameters using the [SALib](https://github.com/SALib/SALib) Python package, with support for parallel computation. +- Modify model parameters through the `calibration.cal` file. +- Run SWAT+ simulations seamlessly. +- Compute performance metrics using widely adopted indicators. +- Perform sensitivity analysis on model parameters using the [SALib](https://github.com/SALib/SALib) Python package, with support for parallel computation; currently, only Sobol sampling and Sobol indices are supported. ## 📥 Install pySWATPlus diff --git a/docs/userguide/data_analysis.md b/docs/userguide/data_analysis.md index 6687331..92f96c6 100644 --- a/docs/userguide/data_analysis.md +++ b/docs/userguide/data_analysis.md @@ -32,10 +32,10 @@ print(output) ## Read Sensitivity Simulation Data -The sensitivity analysis performed using the [`simulation_by_sobol_sample`](https://swat-model.github.io/pySWATPlus/api/sensitivity_analyzer/#pySWATPlus.SensitivityAnalyzer.simulation_by_sobol_sample) method generates a file named `sensitivity_simulation.json` within the simulation directory. -This JSON file contains all the information required for Sobol sensitivity analysis, including: +The sensitivity analysis performed using the +[`simulation_by_sample_parameters`](https://swat-model.github.io/pySWATPlus/api/sensitivity_analyzer/#pySWATPlus.SensitivityAnalyzer.simulation_by_sample_parameters) method generates a file named `sensitivity_simulation.json`. This JSON file contains all the information required for sensitivity analysis, including: -- `problem`: Sobol problem definition +- `problem`: Problem definition dictionary - `sample`: List of generated samples - `simulation`: Simulated `DataFrame` corresponding to each sample @@ -43,7 +43,7 @@ To retrieve the selected `DataFrame` for all scenarios, use: ```python output = pySWATPlus.DataManager().read_sensitive_dfs( - sim_file=r"C:\Users\Username\simulation_folder\sensitivity_simulation.json", + sensim_file=r"C:\Users\Username\simulation_folder\sensitivity_simulation.json", df_name='channel_sd_mon_df', add_problem=True, add_sample=True @@ -65,7 +65,7 @@ To compute performance metrics for the desired indicators: ```python output = pySWATPlus.SensitivityAnalyzer().scenario_indicators( - sim_file=r"C:\Users\Username\simulation_folder\sensitivity_simulation.json", + sensim_file=r"C:\Users\Username\simulation_folder\sensitivity_simulation.json", df_name='channel_sd_mon_df', sim_col='flo_out', obs_file=r"C:\Users\Username\observed_folder\discharge_monthly.csv", @@ -76,22 +76,7 @@ output = pySWATPlus.SensitivityAnalyzer().scenario_indicators( ) ``` -## Sobol Indices -The available indicators can also be used to compute Sobol indices (first, second, and total orders) along with their confidence intervals. 
- -```python -output = pySWATPlus.SensitivityAnalyzer().sobol_indices( - sim_file=r"C:\Users\Username\simulation_folder\sensitivity_simulation.json", - df_name='channel_sd_mon_df', - sim_col='flo_out', - obs_file=r"C:\Users\Username\observed_folder\discharge_monthly.csv", - date_format='%Y-%m-%d', - obs_col='discharge', - indicators=['KGE', 'RMSE'], - json_file=r"C:\Users\Username\data_analysis\sobol_indices.json" -) -``` diff --git a/docs/userguide/sensitivity_interface.md b/docs/userguide/sensitivity_interface.md index f8cd6d5..d8472fa 100644 --- a/docs/userguide/sensitivity_interface.md +++ b/docs/userguide/sensitivity_interface.md @@ -1,7 +1,6 @@ -# Sensitivity Analysis +# Sensitivity Interface -Sensitivity analysis helps quantify how variation in input parameters affects model outputs. This tutorial demonstrates how to perform sensitivity analysis on SWAT+ model parameters. -The parameter sampling is handled by the [SALib](https://github.com/SALib/SALib) Python package using [Sobol](https://doi.org/10.1016/S0378-4754(00)00270-6) sampling from a defined parameter space. +Sensitivity interface helps quantify how variation in input parameters affects model outputs. This tutorial demonstrates how to perform sensitivity analysis on SWAT+ model parameters. ## Configuration Settings @@ -15,25 +14,25 @@ import pySWATPlus # Initialize the project's TxtInOut folder txtinout_reader = pySWATPlus.TxtinoutReader( - path=r"C:\Users\Username\project\Scenarios\Default\TxtInOut" + tio_dir=r"C:\Users\Username\project\Scenarios\Default\TxtInOut" ) -# Copy required files to an empty custom directory -target_dir = r"C:\Users\Username\custom_folder" -target_dir = txtinout_reader.copy_required_files( - target_dir=target_dir +# Copy required files to an empty simulation directory +sim_dir = r"C:\Users\Username\custom_folder" +sim_dir = txtinout_reader.copy_required_files( + sim_dir=sim_dir ) -# Initialize TxtinoutReader with the custom directory -target_reader = pySWATPlus.TxtinoutReader( - path=target_dir +# Initialize TxtinoutReader with the simulation directory +sim_reader = pySWATPlus.TxtinoutReader( + tio_dir=sim_dir ) # Disable CSV file generation to save time -target_reader.disable_csv_print() +sim_reader.disable_csv_print() # Disable daily time series in print.prt (saves time and space) -target_reader.enable_object_in_print_prt( +sim_reader.enable_object_in_print_prt( obj=None, daily=False, monthly=True, @@ -41,8 +40,8 @@ target_reader.enable_object_in_print_prt( avann=True ) -# Run a trial simulation to verify expected time series outputs -target_reader.run_swat( +# Set simulation period and run a trial simulation to verify expected time series outputs +sim_reader.run_swat( begin_date='01-Jan-2010', end_date='31-Dec-2012', warmup=1, @@ -51,14 +50,16 @@ target_reader.run_swat( } # enable daily time series for 'channel_sd' ``` -## Sobol-Based Interface +## Sensitivity Simulation -This high-level interface builds on the above configuration to run sensitivity simulations using Sobol sampling. It includes: +This high-level interface builds on the above configuration to run sensitivity simulations using sampling, powered by the [SALib](https://github.com/SALib/SALib) Python package. +Currently, it supports [Sobol](https://doi.org/10.1016/S0378-4754(00)00270-6) sampling from a defined parameter space. 
The interface provides:
+
+- Automatic generation of samples for the parameter space
+- Parallel computation to accelerate simulations
+- Output extraction with filtering options
+- Structured export of results for downstream analysis
 
-- Automatic generation of Sobol samples for the parameter space
-- Parallel computation to speed up simulations
-- Output extraction with filtering options
-- Structured export of results for downstream analysis
 
 ```python
 # Sensitivity parameter space
@@ -78,7 +79,7 @@ parameters = [
 ]
 
 # Target data extraction from sensitivity simulation
-simulation_data = {
+extract_data = {
     'channel_sdmorph_yr.txt': {
         'has_units': True,
         'ref_day': 15,
@@ -97,13 +98,30 @@ simulation_data = {
 
 # Sensitivity simulation
 if __name__ == '__main__':
-    output = pySWATPlus.SensitivityAnalyzer().simulation_by_sobol_sample(
+    output = pySWATPlus.SensitivityAnalyzer().simulation_by_sample_parameters(
         parameters=parameters,
         sample_number=1,
-        simulation_folder=r"C:\Users\Username\simulation_folder",
-        txtinout_folder=target_dir,
-        simulation_data=simulation_data,
-        clean_setup=True
+        sensim_dir=r"C:\Users\Username\simulation_folder",
+        txtinout_dir=sim_dir,
+        extract_data=extract_data
     )
 
     print(output)
+```
+
+## Sensitivity Indices
+
+Sensitivity indices (first, second, and total orders), along with their confidence intervals, are computed using the indicators available in the `pySWATPlus.PerformanceMetrics` class.
+
+
+```python
+output = pySWATPlus.SensitivityAnalyzer().parameter_sensitivity_indices(
+    sensim_file=r"C:\Users\Username\simulation_folder\sensitivity_simulation.json",
+    df_name='channel_sd_mon_df',
+    sim_col='flo_out',
+    obs_file=r"C:\Users\Username\observed_folder\discharge_monthly.csv",
+    date_format='%Y-%m-%d',
+    obs_col='discharge',
+    indicators=['KGE', 'RMSE'],
+    json_file=r"C:\Users\Username\sensitivity_indices.json"
+)
+```
\ No newline at end of file
diff --git a/docs/userguide/swatplus_simulation.md b/docs/userguide/swatplus_simulation.md
index af4185c..7780a98 100644
--- a/docs/userguide/swatplus_simulation.md
+++ b/docs/userguide/swatplus_simulation.md
@@ -12,10 +12,10 @@ Once the `TxtInOut` folder is properly configured with the necessary input files
 import pySWATPlus
 
 # Replace this with the path to your project's TxtInOut folder
-txtinout_folder = r"C:\Users\Username\project\Scenarios\Default\TxtInOut"
+txtinout_dir = r"C:\Users\Username\project\Scenarios\Default\TxtInOut"
 
 txtinout_reader = pySWATPlus.TxtinoutReader(
-    path=txtinout_folder
+    tio_dir=txtinout_dir
 )
 ```
 
@@ -42,26 +42,26 @@ To keep your original `TxtInOut` folder unchanged, it is recommended to run `SWA
 
     ```python
     # Replace this with your desired empty custom directory
-    target_dir = r"C:\Users\Username\custom_folder"
+    sim_dir = r"C:\Users\Username\custom_folder"
 
     # Ensure the required files are copied to the custom directory
     txtinout_reader.copy_required_files(
-        target_dir=target_dir
+        sim_dir=sim_dir
     )
     ```
 
 - Initialize `TxtinoutReader` class for the custom directory
 
     ```python
-    target_reader = pySWATPlus.TxtinoutReader(
-        path=target_dir
+    sim_reader = pySWATPlus.TxtinoutReader(
+        tio_dir=sim_dir
     )
     ```
 
 - Run simulation
 
     ```python
-    target_reader.run_swat()
+    sim_reader.run_swat()
     ```
 
 ## Step-wise Configurations and Simulations
@@ -74,9 +74,9 @@ The following steps demonstrate how to configure parameters in a custom director
 
     ```python
     # Update timeline in `time.sim` file
-    target_reader.set_simulation_period(
+    sim_reader.set_simulation_period(
         begin_date='01-Jan-2012',
-        end_date='31-Dec-2016',
+ end_date='31-Dec-2016' ) ``` @@ -93,7 +93,7 @@ The following steps demonstrate how to configure parameters in a custom director ```python # Ensure simulation outputs for `channel_sd` object in `print.prt` file - target_reader.enable_object_in_print_prt( + sim_reader.enable_object_in_print_prt( obj='channel_sd', daily=False, monthly=True, @@ -102,12 +102,22 @@ The following steps demonstrate how to configure parameters in a custom director ) ``` -- Set output print interval within the simulation period: +- Set print interval within the simulation period: ```python - # Set ouput print every other day - target_reader.set_print_interval( - interval=2 + # Set print interval in `print.prt` file + sim_reader.set_print_interval( + interval=1 + ) + ``` + +- Set print period within the simulation timeline to record result in output files: + + ```python + # Set print period in `print.prt` file + sim_reader.set_print_period( + begin_date='15-Jun-2012', + end_date='15-Jun-2016' ) ``` @@ -121,7 +131,7 @@ The following steps demonstrate how to configure parameters in a custom director 'value': 0.5 } ] - target_reader.run_swat( + sim_reader.run_swat( parameters=parameters ) ``` @@ -148,13 +158,15 @@ parameters = [ # Run SWAT+ simulation from the original `TxtInOut` folder txtinout_reader.run_swat( - target_dir=r"C:\Users\Username\custom_folder", # mandatory + sim_dir=r"C:\Users\Username\custom_folder", # mandatory parameters=parameters, # optional - begin_date='01-Jan-2012', # optional - end_date= '31-Dec-2016', # optional + begin_date='01-Jan-2012', # optional + end_date= '31-Dec-2016', # optional + simulation_timestep=0, # optional warmup=1, # optional print_prt_control={'channel_sd': {'daily': False}}, # optional + print_begin_date='15-Jun-2012', # optional + print_end_date='15-Jun-2016', # optional print_interval=1 # optional ) ``` - diff --git a/pySWATPlus/data_manager.py b/pySWATPlus/data_manager.py index 2ae3792..1322639 100644 --- a/pySWATPlus/data_manager.py +++ b/pySWATPlus/data_manager.py @@ -14,7 +14,7 @@ class DataManager: def simulated_timeseries_df( self, - target_file: str | pathlib.Path, + sim_file: str | pathlib.Path, has_units: bool, begin_date: typing.Optional[str] = None, end_date: typing.Optional[str] = None, @@ -29,7 +29,7 @@ def simulated_timeseries_df( A new `date` column is constructed using `datetime.date` objects from the `yr`, `mon`, and `day` columns. Args: - target_file (str | pathlib.Path): Path to the input file containing time series data generated by + sim_file (str | pathlib.Path): Path to the input file containing time series data generated by the method [`run_swat`](https://swat-model.github.io/pySWATPlus/api/txtinout_reader/#pySWATPlus.TxtinoutReader.run_swat). The file must contain `yr`, `mon`, and `day` columns. 
@@ -72,13 +72,13 @@ def simulated_timeseries_df( ) # Absolute file path - target_file = pathlib.Path(target_file).resolve() + sim_file = pathlib.Path(sim_file).resolve() # DataFrame from input file - skip_rows = [0, 2] if has_units else [0] - df = utils._load_file( - path=target_file, - skip_rows=skip_rows + skiprows = [0, 2] if has_units else [0] + df = utils._df_extract( + input_file=sim_file, + skiprows=skiprows ) # DataFrame columns @@ -92,7 +92,7 @@ def simulated_timeseries_df( ] if len(missing_cols) > 0: raise ValueError( - f'Missing required time series columns "{missing_cols}" in file "{target_file.name}"' + f'Missing required time series columns "{missing_cols}" in file "{sim_file.name}"' ) df[date_col] = pandas.to_datetime( df[time_cols].rename(columns={'yr': 'year', 'mon': 'month'}) @@ -105,9 +105,9 @@ def simulated_timeseries_df( # Fix reference day if ref_day is not None: - if target_file.stem.endswith(('_day', '_subday')): + if sim_file.stem.endswith(('_day', '_subday')): raise ValueError( - f'Parameter "ref_day" is not applicable for daily or sub-daily time series in file "{target_file.name}" ' + f'Parameter "ref_day" is not applicable for daily or sub-daily time series in file "{sim_file.name}" ' f'because it would assign the same day to all records within a month.' ) df[date_col] = df[date_col].apply( @@ -116,9 +116,9 @@ def simulated_timeseries_df( # Fix reference month if ref_month is not None: - if target_file.stem.endswith('_mon'): + if sim_file.stem.endswith('_mon'): raise ValueError( - f'Parameter "ref_month" is not applicable for monthly time series in file "{target_file.name}" ' + f'Parameter "ref_month" is not applicable for monthly time series in file "{sim_file.name}" ' f'because it would assign the same month to all records within a year.' 
) df[date_col] = df[date_col].apply( @@ -128,7 +128,7 @@ def simulated_timeseries_df( # Check if filtering by date removed all rows if df.empty: raise ValueError( - f'No data found between "{begin_date}" and "{end_date}" in file "{target_file.name}"' + f'No data found between "{begin_date}" and "{end_date}" in file "{sim_file.name}"' ) # Filter rows by dictionary criteria @@ -136,18 +136,18 @@ def simulated_timeseries_df( for col, val in apply_filter.items(): if col not in df_cols: raise ValueError( - f'Column "{col}" in apply_filter was not found in file "{target_file.name}"' + f'Column "{col}" in apply_filter was not found in file "{sim_file.name}"' ) if not isinstance(val, list): raise TypeError( - f'Column "{col}" in apply_filter for file "{target_file.name}" must be a list, ' + f'Column "{col}" in apply_filter for file "{sim_file.name}" must be a list, ' f'but got type "{type(val).__name__}"' ) df = df.loc[df[col].isin(val)] # Check if filtering removed all rows if df.empty: raise ValueError( - f'Filtering by column "{col}" with values "{val}" returned no rows in "{target_file.name}"' + f'Filtering by column "{col}" with values "{val}" returned no rows in "{sim_file.name}"' ) # Reset DataFrame index @@ -162,7 +162,7 @@ def simulated_timeseries_df( for col in usecols: if col not in df_cols: raise ValueError( - f'Column "{col}" specified in "usecols" was not found in file "{target_file.name}"' + f'Column "{col}" specified in "usecols" was not found in file "{sim_file.name}"' ) retain_cols = [date_col] + usecols @@ -191,14 +191,15 @@ def simulated_timeseries_df( def read_sensitive_dfs( self, - sim_file: pathlib.Path, + sensim_file: str | pathlib.Path, df_name: str, add_problem: bool = False, add_sample: bool = False ) -> dict[str, typing.Any]: ''' - Read sensitivity simulation data generated by the [`simulation_by_sobol_sample`](https://swat-model.github.io/pySWATPlus/api/sensitivity_analyzer/#pySWATPlus.SensitivityAnalyzer.simulation_by_sobol_sample) - method, and return a dictionary mapping each scenario integer to its corresponding `DataFrame`. + Read sensitivity simulation data generated by the method + [`simulation_by_sample_parameters`](https://swat-model.github.io/pySWATPlus/api/sensitivity_analyzer/#pySWATPlus.SensitivityAnalyzer.simulation_by_sample_parameters), + and return a dictionary mapping each scenario integer to its corresponding `DataFrame`. The returned dictionary may include the following keys: - `scenario` (default): A mapping between each scenario integer and its corresponding DataFrame. @@ -206,7 +207,7 @@ def read_sensitive_dfs( - `sample` (optional): The sample list used in the sensitivity simulation. Args: - sim_file (str | pathlib.Path): Path to the `sensitivity_simulation.json` file generated by `simulation_by_sobol_sample`. + sensim_file (str | pathlib.Path): Path to the `sensitivity_simulation.json` file generated by `simulation_by_sample_parameters`. df_name (str): Name of the `DataFrame` within `sensitivity_simulation.json`. @@ -218,12 +219,24 @@ def read_sensitive_dfs( A dictionary with the following keys: - `scenario` (default): A mapping between each scenario integer and its corresponding DataFrame. - - `problem` (optional): The problem definition. + - `problem` (optional): The definition dictionary passed to sampling. - `sample` (optional): The sample list used in the sensitivity simulation. 
''' + # Check input variables type + validators._variable_origin_static_type( + vars_types=typing.get_type_hints( + obj=self.read_sensitive_dfs + ), + vars_values=locals() + ) + + # Absolute file path + sensim_file = pathlib.Path(sensim_file).resolve() + + # Sensitiivty output data output = utils._retrieve_sensitivity_output( - sim_file=sim_file, + sensim_file=sensim_file, df_name=df_name, add_problem=add_problem, add_sample=add_sample diff --git a/pySWATPlus/performance_metrics.py b/pySWATPlus/performance_metrics.py index 9e0f5db..b1ae16b 100644 --- a/pySWATPlus/performance_metrics.py +++ b/pySWATPlus/performance_metrics.py @@ -48,6 +48,14 @@ def compute_nse( obs_col (str): Name of the column containing observed values. ''' + # Check input variables type + validators._variable_origin_static_type( + vars_types=typing.get_type_hints( + obj=self.compute_nse + ), + vars_values=locals() + ) + # Simulation values sim_arr = df[sim_col].astype(float) @@ -79,6 +87,14 @@ def compute_kge( obs_col (str): Name of the column containing observed values. ''' + # Check input variables type + validators._variable_origin_static_type( + vars_types=typing.get_type_hints( + obj=self.compute_kge + ), + vars_values=locals() + ) + # Simulation values sim_arr = df[sim_col].astype(float) @@ -106,7 +122,7 @@ def compute_mse( obs_col: str ) -> float: ''' - Calculate the Mean Squared Error metric between simulated and observed values + Calculate the `Mean Squared Error` metric between simulated and observed values Args: df (pandas.DataFrame): DataFrame containing at least two columns with simulated and observed values. @@ -116,6 +132,14 @@ def compute_mse( obs_col (str): Name of the column containing observed values. ''' + # Check input variables type + validators._variable_origin_static_type( + vars_types=typing.get_type_hints( + obj=self.compute_mse + ), + vars_values=locals() + ) + # Simulation values sim_arr = df[sim_col].astype(float) @@ -134,7 +158,7 @@ def compute_rmse( obs_col: str ) -> float: ''' - Calculate the Root Mean Squared Error metric between simulated and observed values. + Calculate the `Root Mean Squared Error` metric between simulated and observed values. Args: df (pandas.DataFrame): DataFrame containing at least two columns with simulated and observed values. @@ -144,6 +168,14 @@ def compute_rmse( obs_col (str): Name of the column containing observed values. ''' + # Check input variables type + validators._variable_origin_static_type( + vars_types=typing.get_type_hints( + obj=self.compute_rmse + ), + vars_values=locals() + ) + # computer MSE error mse_value = self.compute_mse( df=df, @@ -163,7 +195,7 @@ def compute_pbias( obs_col: str ) -> float: ''' - Calculate the Percent Bias metric between simulated and observed values. + Calculate the `Percent Bias` metric between simulated and observed values. Args: df (pandas.DataFrame): DataFrame containing at least two columns with simulated and observed values. @@ -173,6 +205,14 @@ def compute_pbias( obs_col (str): Name of the column containing observed values. 
''' + # Check input variables type + validators._variable_origin_static_type( + vars_types=typing.get_type_hints( + obj=self.compute_pbias + ), + vars_values=locals() + ) + # Simulation values sim_arr = df[sim_col].astype(float) @@ -191,7 +231,7 @@ def compute_mare( obs_col: str ) -> float: ''' - Calculate the Mean Absolute Relative Error metric between simulated and observed values + Calculate the `Mean Absolute Relative Error` metric between simulated and observed values Args: df (pandas.DataFrame): DataFrame containing at least two columns with simulated and observed values. @@ -201,6 +241,14 @@ def compute_mare( obs_col (str): Name of the column containing observed values. ''' + # Check input variables type + validators._variable_origin_static_type( + vars_types=typing.get_type_hints( + obj=self.compute_mare + ), + vars_values=locals() + ) + # Simulation values sim_arr = df[sim_col].astype(float) @@ -214,7 +262,7 @@ def compute_mare( def scenario_indicators( self, - sim_file: str | pathlib.Path, + sensim_file: str | pathlib.Path, df_name: str, sim_col: str, obs_file: str | pathlib.Path, @@ -224,20 +272,20 @@ def scenario_indicators( json_file: typing.Optional[str | pathlib.Path] = None ) -> dict[str, typing.Any]: ''' - Compute performance indicators for sample scenarios obtained using - the [`simulation_by_sobol_sample`](https://swat-model.github.io/pySWATPlus/api/sensitivity_analyzer/#pySWATPlus.SensitivityAnalyzer.simulation_by_sobol_sample) method. + Compute performance indicators for sample scenarios obtained using the method + [`simulation_by_sample_parameters`](https://swat-model.github.io/pySWATPlus/api/sensitivity_analyzer/#pySWATPlus.SensitivityAnalyzer.simulation_by_sample_parameters). Before computing the indicators, simulated and observed values are normalized using the formula `(v - min_v) / (max_v - min_v)`, where `min_v` and `max_v` represent the minimum and maximum of all simulated and observed values combined. The method returns a dictionary with two keys: - - `problem`: The definition dictionary passed to Sobol sampling. + - `problem`: The definition dictionary passed to sampling. - `indicator`: A `DataFrame` containing the `Scenario` column and one column per indicator, with scenario indices and corresponding indicator values. Args: - sim_file (str | pathlib.Path): Path to the `sensitivity_simulation.json` file produced by `simulation_by_sobol_sample`. + sensim_file (str | pathlib.Path): Path to the `sensitivity_simulation.json` file produced by `simulation_by_sobol_sample`. df_name (str): Name of the `DataFrame` within `sensitivity_simulation.json` from which to compute scenario indicators. @@ -284,7 +332,7 @@ def scenario_indicators( ) # Observed DataFrame - obs_df = utils._df_observed( + obs_df = utils._df_observe( obs_file=pathlib.Path(obs_file).resolve(), date_format=date_format, obs_col=obs_col @@ -293,7 +341,7 @@ def scenario_indicators( # Retrieve sensitivity output sensitivity_sim = utils._retrieve_sensitivity_output( - sim_file=pathlib.Path(sim_file).resolve(), + sensim_file=pathlib.Path(sensim_file).resolve(), df_name=df_name, add_problem=True, add_sample=False diff --git a/pySWATPlus/sensitivity_analyzer.py b/pySWATPlus/sensitivity_analyzer.py index 1a86b2d..ae13e5a 100644 --- a/pySWATPlus/sensitivity_analyzer.py +++ b/pySWATPlus/sensitivity_analyzer.py @@ -22,12 +22,12 @@ class SensitivityAnalyzer: Provide functionality for sensitivity analyzis. 
''' - def _validate_simulation_data_config( + def _validate_extract_data_config( self, - simulation_data: dict[str, dict[str, typing.Any]], + extract_data: dict[str, dict[str, typing.Any]], ) -> None: ''' - Validate `simulation_data` configuration. + Validate `extract_data` configuration. ''' valid_subkeys = [ @@ -39,19 +39,19 @@ def _validate_simulation_data_config( 'apply_filter', 'usecols' ] - for sim_fname, sim_fdict in simulation_data.items(): + for sim_fname, sim_fdict in extract_data.items(): if not isinstance(sim_fdict, dict): raise TypeError( f'Expected "{sim_fname}" in simulation_date must be a dictionary, but got type "{type(sim_fdict).__name__}"' ) if 'has_units' not in sim_fdict: raise KeyError( - f'Key has_units is missing for "{sim_fname}" in simulation_data' + f'Key has_units is missing for "{sim_fname}" in extract_data' ) for sim_fkey in sim_fdict: if sim_fkey not in valid_subkeys: raise ValueError( - f'Invalid key "{sim_fkey}" for "{sim_fname}" in simulation_data; expected subkeys are {valid_subkeys}' + f'Invalid key "{sim_fkey}" for "{sim_fname}" in extract_data; expected subkeys are {valid_subkeys}' ) return None @@ -104,10 +104,10 @@ def _cpu_simulation( var_array: numpy.typing.NDArray[numpy.float64], num_sim: int, var_names: list[str], - simulation_folder: pathlib.Path, - txtinout_folder: pathlib.Path, + sensim_dir: pathlib.Path, + txtinout_dir: pathlib.Path, params_bounds: list[BoundDict], - simulation_data: dict[str, dict[str, typing.Any]], + extract_data: dict[str, dict[str, typing.Any]], clean_setup: bool ) -> dict[str, typing.Any]: ''' @@ -140,7 +140,7 @@ def _cpu_simulation( # Create simulation directory cpu_dir = f'sim_{track_sim}' - cpu_path = simulation_folder / cpu_dir + cpu_path = sensim_dir / cpu_dir cpu_path.mkdir() # Output simulation dictionary @@ -151,23 +151,23 @@ def _cpu_simulation( # Initialize TxtinoutReader class txtinout_reader = TxtinoutReader( - path=txtinout_folder + tio_dir=txtinout_dir ) # Run SWAT+ model in CPU directory txtinout_reader.run_swat( - target_dir=cpu_path, + sim_dir=cpu_path, parameters=params_sim ) # Extract simulated data - for sim_fname, sim_fdict in simulation_data.items(): - target_file = cpu_path / sim_fname + for sim_fname, sim_fdict in extract_data.items(): + sim_file = cpu_path / sim_fname df = DataManager().simulated_timeseries_df( - target_file=target_file, + sim_file=sim_file, **sim_fdict ) - cpu_output[f'{target_file.stem}_df'] = df + cpu_output[f'{sim_file.stem}_df'] = df # Remove simulation directory if clean_setup: @@ -177,17 +177,17 @@ def _cpu_simulation( def _save_output_in_json( self, - simulation_folder: pathlib.Path, - simulation_output: dict[str, typing.Any] + sensim_dir: pathlib.Path, + sensim_output: dict[str, typing.Any] ) -> None: ''' Write sensitivity simulation outputs to the file `sensitivity_simulation.json` - within the `simulation_folder`. + within the `sensim_dir`. 
''' - # copy the simulation_output dictionary + # copy the sensim_output dictionary copy_simulation = copy.deepcopy( - x=simulation_output + x=sensim_output ) # Modify the copied dictionary @@ -204,7 +204,7 @@ def _save_output_in_json( copy_simulation[key][sub_key][k] = v.to_json() # Path to the JOSN file - json_file = simulation_folder / 'sensitivity_simulation.json' + json_file = sensim_dir / 'sensitivity_simulation.json' # Write output to the JSON file with open(json_file, 'w') as output_write: @@ -212,13 +212,13 @@ def _save_output_in_json( return None - def simulation_by_sobol_sample( + def simulation_by_sample_parameters( self, parameters: BoundType, sample_number: int, - simulation_folder: str | pathlib.Path, - txtinout_folder: str | pathlib.Path, - simulation_data: dict[str, dict[str, typing.Any]], + sensim_dir: str | pathlib.Path, + txtinout_dir: str | pathlib.Path, + extract_data: dict[str, dict[str, typing.Any]], max_workers: typing.Optional[int] = None, save_output: bool = True, clean_setup: bool = True @@ -282,13 +282,13 @@ def simulation_by_sobol_sample( Generates an array of length `2^N * (D + 1)`, where `D` is the number of parameter changes and `N = sample_number + 1`. For example, when `sample_number` is 1, 12 samples will be generated. - simulation_folder (str | pathlib.Path): Path to the folder where individual simulations for each parameter set will be performed. + sensim_dir (str | pathlib.Path): Path to the directory where individual simulations for each parameter set will be performed. Raises an error if the folder is not empty. This precaution helps prevent data deletion, overwriting directories, and issues with reading required data files not generated by the simulation. - txtinout_folder (str | pathlib.Path): Path to the `TxtInOut` folder. Raises an error if the folder does not contain exactly one SWAT+ executable `.exe` file. + txtinout_dir (str | pathlib.Path): Path to the `TxtInOut` directory. Raises an error if the folder does not contain exactly one SWAT+ executable `.exe` file. - simulation_data (dict[str, dict[str, typing.Any]]): A nested dictionary specifying how to extract data from SWAT+ simulation output files. + extract_data (dict[str, dict[str, typing.Any]]): A nested dictionary specifying how to extract data from SWAT+ simulation output files. The top-level keys are filenames of the output files, without paths (e.g., `channel_sd_day.txt`). Each key must map to a non-empty dictionary containing the following subkeys, as defined in [`simulated_timeseries_df`](https://swat-model.github.io/pySWATPlus/api/data_manager/#pySWATPlus.DataManager.simulated_timeseries_df): @@ -305,7 +305,7 @@ def simulation_by_sobol_sample( - `usecols` (list[str]): Optional. List of columns to extract from the simulated file. By default, all available columns are used. ```python - simulation_data = { + extract_data = { 'channel_sd_mon.txt': { 'has_units': True, 'begin_date': '01-Jun-2014', @@ -323,7 +323,7 @@ def simulation_by_sobol_sample( max_workers (int): Number of logical CPUs to use for parallel processing. If `None` (default), all available logical CPUs are used. - save_output (bool): If `True` (default), saves the output dictionary to `simulation_folder` as `sensitivity_simulation.json`. + save_output (bool): If `True` (default), saves the output dictionary to `sensim_dir` as `sensitivity_simulation.json`. 
clean_setup (bool): If `True` (default), each folder created during the parallel simulation and its contents will be deleted dynamically after collecting the required data. @@ -352,7 +352,7 @@ def simulation_by_sobol_sample( - `dir`: Name of the directory (e.g., `sim_`) where the simulation was executed. This is useful when `clean_setup` is `False`, as it allows users to verify whether the sampled values were correctly applied to the target files. The simulation index and directory name (e.g., `sim_`) may not always match one-to-one due to deduplication or asynchronous execution. - - `_df`: Filtered `DataFrame` generated for each file specified in the `simulation_data` dictionary + - `_df`: Filtered `DataFrame` generated for each file specified in the `extract_data` dictionary (e.g., `channel_sd_mon_df`, `channel_sd_yr_df`). Each DataFrame includes a `date` column with `datetime.date` objects. Note: @@ -365,7 +365,7 @@ def simulation_by_sobol_sample( - The output dictionary contains `datetime.date` objects in the `date` column for each `DataFrame` in the `simulation` dictionary. These `datetime.date` objects are converted to `DD-Mon-YYYY` strings when saving the output dictionary to - `sensitivity_simulation.json` within the `simulation_folder`. + `sensitivity_simulation.json` within the `sensim_dir`. - The computation progress can be tracked through the following `console` messages, where the simulation index ranges from 1 to the total number of unique simulations: @@ -373,7 +373,7 @@ def simulation_by_sobol_sample( - `Started simulation: /` - `Completed simulation: /` - - The disk space on the computer for `simulation_folder` must be sufficient to run + - The disk space on the computer for `sensim_dir` must be sufficient to run parallel simulations (at least `max_workers` times the size of the `TxtInOut` folder). Otherwise, no error will be raised by the system, but simulation outputs may not be generated. 
''' @@ -384,35 +384,35 @@ def simulation_by_sobol_sample( # Check input variables type validators._variable_origin_static_type( vars_types=typing.get_type_hints( - obj=self.simulation_by_sobol_sample + obj=self.simulation_by_sample_parameters ), vars_values=locals() ) # Absolute path - txtinout_folder = pathlib.Path(txtinout_folder).resolve() - simulation_folder = pathlib.Path(simulation_folder).resolve() + txtinout_dir = pathlib.Path(txtinout_dir).resolve() + sensim_dir = pathlib.Path(sensim_dir).resolve() - # Check validity of path - validators._path_directory( - path=txtinout_folder + # Check validity of directory path + validators._dir_path( + input_dir=txtinout_dir ) - validators._path_directory( - path=simulation_folder + validators._dir_path( + input_dir=sensim_dir ) - # Check simulation_folder is empty - validators._empty_directory( - path=simulation_folder + # Check sensim_dir is empty + validators._dir_empty( + input_dir=sensim_dir ) - # Validate simulation_data configuration - self._validate_simulation_data_config( - simulation_data=simulation_data + # Validate extract_data configuration + self._validate_extract_data_config( + extract_data=extract_data ) - # Validate unique dictionaries for sensitive parameters - validators._list_contain_unique_dict( + # Validate unique dictionaries for parameters + validators._calibration_list_contain_unique_dict( parameters=parameters ) @@ -421,11 +421,11 @@ def simulation_by_sobol_sample( BoundDict(**param) for param in parameters ] validators._calibration_parameters( - txtinout_path=txtinout_folder, + input_dir=txtinout_dir, parameters=params_bounds ) - # Sobol problem dictionary + # problem dictionary problem = self._create_sobol_problem( params_bounds=params_bounds ) @@ -453,10 +453,10 @@ def simulation_by_sobol_sample( self._cpu_simulation, num_sim=num_sim, var_names=copy_problem['names'], - simulation_folder=simulation_folder, - txtinout_folder=txtinout_folder, + sensim_dir=sensim_dir, + txtinout_dir=txtinout_dir, params_bounds=params_bounds, - simulation_data=simulation_data, + extract_data=extract_data, clean_setup=clean_setup ) @@ -467,13 +467,13 @@ def simulation_by_sobol_sample( futures = [ executor.submit(cpu_sim, idx, arr) for idx, arr in enumerate(unique_array, start=1) ] - for idx, future in enumerate(concurrent.futures.as_completed(futures), start=1): + for future in concurrent.futures.as_completed(futures): # Message for completion of individual simulation for better tracking - print(f'Completed simulation: {idx}/{num_sim}', flush=True) + print(f'Completed simulation: {futures.index(future) + 1}/{num_sim}', flush=True) # Collect simulation results - idx_r = future.result() - cpu_dict[tuple(idx_r['array'])] = { - k: v for k, v in idx_r.items() if k != 'array' + f_r = future.result() + cpu_dict[tuple(f_r['array'])] = { + k: v for k, v in f_r.items() if k != 'array' } # Generate sensitivity simulation output for all sample_array from unique_array outputs @@ -501,25 +501,25 @@ def simulation_by_sobol_sample( } # Sensitivity simulaton output - simulation_output = { + sensim_output = { 'time': time_stats, 'problem': problem, 'sample': sample_array, 'simulation': sim_dict } - # Write output to the file 'sensitivity_simulation_sobol.json' in simulation folder + # Write output to the file 'sensitivity_simulation.json' in simulation folder if save_output: self._save_output_in_json( - simulation_folder=simulation_folder, - simulation_output=simulation_output + sensim_dir=sensim_dir, + sensim_output=sensim_output ) - return 
simulation_output
+        return sensim_output
 
-    def sobol_indices(
+    def parameter_sensitivity_indices(
         self,
-        sim_file: str | pathlib.Path,
+        sensim_file: str | pathlib.Path,
         df_name: str,
         sim_col: str,
         obs_file: str | pathlib.Path,
@@ -529,16 +529,16 @@
         json_file: typing.Optional[str | pathlib.Path] = None
     ) -> dict[str, typing.Any]:
         '''
-        Compute Sobol sensitivy indices for sample scenarios obtained using
-        the [`simulation_by_sobol_sample`](https://swat-model.github.io/pySWATPlus/api/sensitivity_analyzer/#pySWATPlus.SensitivityAnalyzer.simulation_by_sobol_sample) method.
+        Compute parameter sensitivity indices for sample scenarios obtained using
+        the [`simulation_by_sample_parameters`](https://swat-model.github.io/pySWATPlus/api/sensitivity_analyzer/#pySWATPlus.SensitivityAnalyzer.simulation_by_sample_parameters) method.
 
         The method returns a dictionary with two keys:
 
-        - `problem`: The definition dictionary passed to Sobol sampling.
-        - `sobol_indices`: A dictionary where each key is an indicator name and the corresponding value is the computed Sobol sensitivity indices.
+        - `problem`: The definition dictionary passed to sampling.
+        - `sensitivity_indices`: A dictionary where each key is an indicator name and the corresponding value is the computed sensitivity indices.
 
         Args:
-            sim_file (str | pathlib.Path): Path to the `sensitivity_simulation.json` file produced by `simulation_by_sobol_sample`.
+            sensim_file (str | pathlib.Path): Path to the `sensitivity_simulation.json` file produced by `simulation_by_sample_parameters`.
 
             df_name (str): Name of the `DataFrame` within `sensitivity_simulation.json` from which to compute scenario indicators.
@@ -551,7 +551,7 @@
             obs_col (str): Name of the column in `obs_file` containing observed data.
                 All negative and `None` observed values are removed before analysis.
 
-            indicators (list[str]): List of indicators to compute Sobol indices. Available options:
+            indicators (list[str]): List of indicators to compute sensitivity indices. Available options:
 
                 - `NSE`: Nash–Sutcliffe Efficiency
                 - `KGE`: Kling–Gupta Efficiency
@@ -561,23 +561,23 @@
                 - `MARE`: Mean Absolute Relative Error
 
             json_file (str | pathlib.Path, optional): Path to a JSON file for saving the output dictionary where each key is an indicator name
-                and the corresponding value is the computed Sobol sensitivity indices. If `None` (default), the dictionary is not saved.
+                and the corresponding value is the computed sensitivity indices. If `None` (default), the dictionary is not saved.
 
         Returns:
-            Dictionary with two keys, `problem` and `sobol_indices`, and their corresponding values.
+            Dictionary with two keys, `problem` and `sensitivity_indices`, and their corresponding values.
''' # Check input variables type validators._variable_origin_static_type( vars_types=typing.get_type_hints( - obj=self.sobol_indices + obj=self.parameter_sensitivity_indices ), vars_values=locals() ) # Problem and indicators prob_inct = PerformanceMetrics().scenario_indicators( - sim_file=sim_file, + sensim_file=sensim_file, df_name=df_name, sim_col=sim_col, obs_file=obs_file, @@ -588,17 +588,17 @@ def sobol_indices( problem = prob_inct['problem'] indicator_df = prob_inct['indicator'] - # Sobol sensitivity indices - sobol_indices = {} + # Sensitivity indices + sensitivity_indices = {} for indicator in indicators: # Indicator sensitivity indices indicator_sensitivity = SALib.analyze.sobol.analyze( problem=copy.deepcopy(problem), Y=indicator_df[indicator].values ) - sobol_indices[indicator] = indicator_sensitivity + sensitivity_indices[indicator] = indicator_sensitivity - # Save the Sobol indices + # Save the sensitivity indices if json_file is not None: # Raise error for invalid JSON file extension json_file = pathlib.Path(json_file).resolve() @@ -606,7 +606,7 @@ def sobol_indices( json_file=json_file ) # Modify sensitivity index to write in the JSON file - copy_indices = copy.deepcopy(sobol_indices) + copy_indices = copy.deepcopy(sensitivity_indices) write_indices = {} for indicator in indicators: write_indices[indicator] = { @@ -619,7 +619,7 @@ def sobol_indices( # Output dictionary output = { 'problem': problem, - 'sobol_indices': sobol_indices + 'sensitivity_indices': sensitivity_indices } return output diff --git a/pySWATPlus/txtinout_reader.py b/pySWATPlus/txtinout_reader.py index 38388d2..819e577 100644 --- a/pySWATPlus/txtinout_reader.py +++ b/pySWATPlus/txtinout_reader.py @@ -18,13 +18,13 @@ class TxtinoutReader: def __init__( self, - path: str | pathlib.Path + tio_dir: str | pathlib.Path ) -> None: ''' Create a TxtinoutReader instance for accessing SWAT+ model files. Args: - path (str | pathlib.Path): Path to the `TxtInOut` folder, which must contain + tio_dir (str | pathlib.Path): Path to the `TxtInOut` directory, which must contain exactly one SWAT+ executable `.exe` file. 
''' @@ -37,16 +37,16 @@ def __init__( ) # Absolute path - path = pathlib.Path(path).resolve() + tio_dir = pathlib.Path(tio_dir).resolve() # Check validity of path - validators._path_directory( - path=path + validators._dir_path( + input_dir=tio_dir ) # Check .exe files in the directory exe_files = [ - file for file in path.iterdir() if file.suffix == ".exe" + file for file in tio_dir.iterdir() if file.suffix == ".exe" ] # Raise error on .exe file @@ -56,10 +56,10 @@ def __init__( ) # TxtInOut directory path - self.root_folder = path + self.root_dir = tio_dir # EXE file path - self.swat_exe_path = path / exe_files[0] + self.exe_file = tio_dir / exe_files[0] def enable_object_in_print_prt( self, @@ -127,7 +127,7 @@ def enable_object_in_print_prt( ) # File path of print.prt - print_prt_path = self.root_folder / 'print.prt' + print_prt_path = self.root_dir / 'print.prt' # Read and modify print.prt file strings new_print_prt = '' @@ -231,7 +231,7 @@ def set_simulation_period( nth_line = 3 # File path of time.sim - time_sim_path = self.root_folder / 'time.sim' + time_sim_path = self.root_dir / 'time.sim' # Open the file in read mode and read its contents with open(time_sim_path, 'r') as file: @@ -300,7 +300,7 @@ def set_simulation_timestep( nth_line = 3 # File path of time.sim - time_sim_path = self.root_folder / 'time.sim' + time_sim_path = self.root_dir / 'time.sim' # Open the file in read mode and read its contents with open(time_sim_path, 'r') as file: @@ -348,7 +348,7 @@ def set_warmup_year( ) # File path of print.prt - print_prt_path = self.root_folder / 'print.prt' + print_prt_path = self.root_dir / 'print.prt' # Open the file in read mode and read its contents with open(print_prt_path, 'r') as file: @@ -383,7 +383,7 @@ def _enable_disable_csv_print( ''' # File path of print.prt - print_prt_path = self.root_folder / 'print.prt' + print_prt_path = self.root_dir / 'print.prt' # Target line nth_line = 7 @@ -447,7 +447,7 @@ def set_print_interval( ) # File path of print.prt - print_prt_path = self.root_folder / 'print.prt' + print_prt_path = self.root_dir / 'print.prt' # Open the file in read mode and read its contents with open(print_prt_path, 'r') as file: @@ -505,7 +505,7 @@ def set_print_period( end_year = end_dt.year # File path of print.prt - print_prt_path = self.root_folder / 'print.prt' + print_prt_path = self.root_dir / 'print.prt' # Open the file in read mode and read its contents with open(print_prt_path, 'r') as file: @@ -523,14 +523,14 @@ def set_print_period( def copy_required_files( self, - target_dir: str | pathlib.Path, + sim_dir: str | pathlib.Path, ) -> pathlib.Path: ''' - Copy the required file from the input folder associated with the + Copy the required file from the input directory associated with the `TxtinoutReader` instance to the specified directory for SWAT+ simulation. Args: - target_dir (str | pathlib.Path): Path to the empty directory where the required files will be copied. + sim_dir (str | pathlib.Path): Path to the empty directory where the required files will be copied. Returns: The path to the target directory containing the copied files. 
@@ -544,17 +544,17 @@ def copy_required_files( vars_values=locals() ) - # Absolute path of target_dir - target_dir = pathlib.Path(target_dir).resolve() + # Absolute path of sim_dir + sim_dir = pathlib.Path(sim_dir).resolve() - # Check validity of target_dir - validators._path_directory( - path=target_dir + # Check validity of sim_dir + validators._dir_path( + input_dir=sim_dir ) - # Check target_dir is empty - validators._empty_directory( - path=target_dir + # Check sim_dir is empty + validators._dir_empty( + input_dir=sim_dir ) # Ignored files @@ -565,12 +565,12 @@ def copy_required_files( ) # Copy files from source folder - for src_file in self.root_folder.iterdir(): + for src_file in self.root_dir.iterdir(): if src_file.is_dir() or src_file.name.endswith(_ignored_files_endswith): continue - shutil.copy2(src_file, target_dir / src_file.name) + shutil.copy2(src_file, sim_dir / src_file.name) - return target_dir + return sim_dir def _write_calibration_file( self, @@ -580,7 +580,7 @@ def _write_calibration_file( Writes `calibration.cal` file with parameter changes. ''' - outfile = self.root_folder / 'calibration.cal' + outfile = self.root_dir / 'calibration.cal' # If calibration.cal exists, remove it (always recreate) if outfile.exists(): @@ -676,11 +676,11 @@ def _calibration_cal_in_file_cio( add: bool ) -> None: ''' - Add or remove the calibration line to 'file.cio' + Add or remove the calibration line to 'file.cio'. ''' # Path of file.cio - file_path = self.root_folder / 'file.cio' + file_path = self.root_dir / 'file.cio' # Line format fmt = ( @@ -735,30 +735,38 @@ def _apply_swat_configuration( simulation_timestep: typing.Optional[int] = None, warmup: typing.Optional[int] = None, print_prt_control: typing.Optional[dict[str, dict[str, bool]]] = None, - begin_date_print: typing.Optional[str] = None, - end_date_print: typing.Optional[str] = None, + print_begin_date: typing.Optional[str] = None, + print_end_date: typing.Optional[str] = None, print_interval: typing.Optional[int] = None ) -> None: ''' - Set begin and end year for the simulation, the warm-up period, and toggles the elements in print.prt file + Configure and write parameter settings to SWAT+ input files. ''' - validators._ensure_together(begin_date=begin_date, end_date=end_date) - validators._ensure_together(begin_date_print=begin_date_print, end_date_print=end_date_print) + # Ensure both begin and end dates are given + validators._ensure_together( + begin_date=begin_date, + end_date=end_date + ) + + # Ensure both begin and end print dates are given + validators._ensure_together( + print_begin_date=print_begin_date, + print_end_date=print_end_date + ) # Validate dependencies between simulation and print periods - if (begin_date_print or end_date_print) and not (begin_date and end_date): + if (print_begin_date or print_end_date) and not (begin_date and end_date): raise ValueError( - "'begin_date_print'/'end_date_print' cannot be set unless " - "'begin_date' and 'end_date' are also provided." 
+ 'print_begin_date or print_end_date cannot be set unless begin_date and end_date are also provided' ) # Validate date relationships - if begin_date_print and end_date_print and begin_date and end_date: + if print_begin_date and print_end_date and begin_date and end_date: begin_dt = utils._date_str_to_object(begin_date) end_dt = utils._date_str_to_object(end_date) - start_print_dt = utils._date_str_to_object(begin_date_print) - end_print_dt = utils._date_str_to_object(end_date_print) + start_print_dt = utils._date_str_to_object(print_begin_date) + end_print_dt = utils._date_str_to_object(print_end_date) validators._date_within_range( date_to_check=start_print_dt, @@ -801,8 +809,7 @@ def _apply_swat_configuration( for key, val in print_prt_control.items(): if key is None: raise ValueError( - '"None" cannot be used as a key in print_prt_control; ' - 'use the method "enable_object_in_print_prt" with "target_dir=None" for this setting' + 'Use enable_object_in_print_prt method instead of None as a key in print_prt_control' ) elif not isinstance(val, dict): raise TypeError( @@ -826,10 +833,10 @@ def _apply_swat_configuration( **key_dict ) - if begin_date_print and end_date_print: + if print_begin_date and print_end_date: self.set_print_period( - begin_date=begin_date_print, - end_date=end_date_print + begin_date=print_begin_date, + end_date=print_end_date ) if print_interval is not None: @@ -849,8 +856,8 @@ def _run_swat_exe( try: # Run simulation process = subprocess.Popen( - [str(self.swat_exe_path.resolve())], - cwd=str(self.root_folder.resolve()), + [str(self.exe_file.resolve())], + cwd=str(self.root_dir.resolve()), stdout=subprocess.PIPE, stderr=subprocess.PIPE, bufsize=1, @@ -882,15 +889,15 @@ def _run_swat_exe( def run_swat( self, - target_dir: typing.Optional[str | pathlib.Path] = None, + sim_dir: typing.Optional[str | pathlib.Path] = None, parameters: typing.Optional[ModifyType] = None, begin_date: typing.Optional[str] = None, end_date: typing.Optional[str] = None, simulation_timestep: typing.Optional[int] = None, warmup: typing.Optional[int] = None, print_prt_control: typing.Optional[dict[str, dict[str, bool]]] = None, - begin_date_print: typing.Optional[str] = None, - end_date_print: typing.Optional[str] = None, + print_begin_date: typing.Optional[str] = None, + print_end_date: typing.Optional[str] = None, print_interval: typing.Optional[int] = None, skip_validation: bool = False ) -> pathlib.Path: @@ -898,7 +905,7 @@ def run_swat( Run the SWAT+ simulation with optional parameter changes. Args: - target_dir (str | pathlib.Path): Path to the directory where the simulation will be done. + sim_dir (str | pathlib.Path): Path to the directory where the simulation will be done. If None, the simulation runs directly in the current folder. parameters (ModifyType): List of dictionaries specifying parameter changes in the `calibration.cal` file. @@ -966,9 +973,9 @@ def run_swat( } ``` - begin_date_print (str): The start date for printing the output + print_begin_date (str): The start date for printing the output. - end_date_print (str): The end date for printing the output + print_end_date (str): The end date for printing the output. print_interval (int): Print interval within the period. For example, if interval = 2, output will be printed for every other day. 
@@ -987,25 +994,21 @@ def run_swat( ) # TxtinoutReader class instance - if target_dir is not None: - # Absolute path - target_dir = pathlib.Path(target_dir).resolve() - # Check validity of target_dir - validators._path_directory( - path=target_dir + if sim_dir is not None: + sim_dir = pathlib.Path(sim_dir).resolve() + # Check validity of sim_dir + validators._dir_path( + input_dir=sim_dir ) - # Copy files to the target directory run_path = self.copy_required_files( - target_dir=target_dir + sim_dir=sim_dir ) - # Initialize new TxtinoutReader class reader = TxtinoutReader( - path=run_path + tio_dir=run_path ) else: - # Select existing TxtinoutReader class instance reader = self - run_path = self.root_folder + run_path = self.root_dir # Apply SWAT+ configuration changes reader._apply_swat_configuration( @@ -1014,16 +1017,16 @@ def run_swat( simulation_timestep=simulation_timestep, warmup=warmup, print_prt_control=print_prt_control, - begin_date_print=begin_date_print, - end_date_print=end_date_print, + print_begin_date=print_begin_date, + print_end_date=print_end_date, print_interval=print_interval ) # Create calibration.cal file if parameters is not None: - # Validate unique dictionaries for sensitive parameters - validators._list_contain_unique_dict( + # Validate unique dictionaries for calibration parameters + validators._calibration_list_contain_unique_dict( parameters=parameters ) @@ -1034,13 +1037,13 @@ def run_swat( # Check if input calibration parameters exists in cal_parms.cal validators._calibration_parameters( - txtinout_path=reader.root_folder, + input_dir=reader.root_dir, parameters=params ) if not skip_validation: validators._calibration_conditions_and_units( - txtinout_path=reader.root_folder, + input_dir=reader.root_dir, parameters=params ) diff --git a/pySWATPlus/utils.py b/pySWATPlus/utils.py index 670876e..309dca9 100644 --- a/pySWATPlus/utils.py +++ b/pySWATPlus/utils.py @@ -4,8 +4,7 @@ import io import pathlib import typing -from collections.abc import Iterable -from collections.abc import Callable +import collections.abc from .types import ModifyDict @@ -53,55 +52,6 @@ def _date_str_to_object( return get_date -def _clean( - df: pandas.DataFrame -) -> pandas.DataFrame: - ''' - Clean a DataFrame by stripping whitespace from column names and string values. - ''' - - # Strip spaces from column names - df.columns = [str(c).strip() for c in df.columns] - - # Strip spaces from string/object values - obj_cols = df.select_dtypes(include=['object', 'string']).columns - for col in obj_cols: - df[col] = df[col].str.strip() - - return df - - -def _load_file( - path: pathlib.Path, - skip_rows: typing.Optional[list[int]] = None -) -> pandas.DataFrame: - ''' - Attempt to load a dataframe from `path` using multiple parsing strategies. 
- ''' - - if path.suffix.lower() == '.csv': - df_from_csv = pandas.read_csv( - filepath_or_buffer=path, - skiprows=skip_rows, - skipinitialspace=True - ) - return _clean(df_from_csv) - - strategies: list[Callable[[], pandas.DataFrame]] = [ - lambda: pandas.read_csv(path, sep=r'\s+', skiprows=skip_rows), - lambda: pandas.read_csv(path, sep=r'[ ]{2,}', skiprows=skip_rows), - lambda: pandas.read_fwf(path, skiprows=skip_rows) - ] - for attempt in strategies: - try: - df: pandas.DataFrame = attempt() - return _clean(df) - except Exception: - pass - - raise ValueError(f'Error reading the file: {path}') - - def _format_val_field( value: float ) -> str: @@ -127,7 +77,7 @@ def _format_val_field( def _compact_units( - unit_list: Iterable[int] + unit_list: collections.abc.Iterable[int] ) -> list[int]: ''' Compact a 1-based list of unit IDs into SWAT units syntax. @@ -189,7 +139,67 @@ def _parse_conditions( return conditions_parsed -def _df_observed( +def _df_clean( + df: pandas.DataFrame +) -> pandas.DataFrame: + ''' + Clean a DataFrame by stripping whitespace from column names and string values. + ''' + + # Strip spaces from column names + df.columns = [str(c).strip() for c in df.columns] + + # Strip spaces from string/object values + obj_cols = df.select_dtypes(include=['object', 'string']).columns + for col in obj_cols: + df[col] = df[col].str.strip() + + return df + + +def _df_extract( + input_file: pathlib.Path, + skiprows: typing.Optional[list[int]] = None +) -> pandas.DataFrame: + ''' + Extract a DataFrame from `input_file` using multiple parsing strategies. + ''' + + if input_file.suffix.lower() == '.csv': + csv_df = pandas.read_csv( + filepath_or_buffer=input_file, + skiprows=skiprows, + skipinitialspace=True + ) + return _df_clean(csv_df) + + strategies: list[collections.abc.Callable[[], pandas.DataFrame]] = [ + lambda: pandas.read_csv( + filepath_or_buffer=input_file, + sep=r'\s+', + skiprows=skiprows + ), + lambda: pandas.read_csv( + filepath_or_buffer=input_file, + sep=r'[ ]{2,}', + skiprows=skiprows + ), + lambda: pandas.read_fwf( + filepath_or_buffer=input_file, + skiprows=skiprows + ) + ] + for attempt in strategies: + try: + txt_df: pandas.DataFrame = attempt() + return _df_clean(txt_df) + except Exception: + pass + + raise ValueError(f'Error reading the file: {input_file}') + + +def _df_observe( obs_file: pathlib.Path, date_format: str, obs_col: str @@ -237,7 +247,7 @@ def _df_normalize( def _retrieve_sensitivity_output( - sim_file: pathlib.Path, + sensim_file: pathlib.Path, df_name: str, add_problem: bool, add_sample: bool @@ -251,7 +261,7 @@ def _retrieve_sensitivity_output( ''' # Load sensitivity simulation dictionary from JSON file - with open(sim_file, 'r') as input_sim: + with open(sensim_file, 'r') as input_sim: sensitivity_sim = json.load(input_sim) # Dictionary of sample DataFrames diff --git a/pySWATPlus/validators.py b/pySWATPlus/validators.py index 2731f0d..eeaa8c0 100644 --- a/pySWATPlus/validators.py +++ b/pySWATPlus/validators.py @@ -52,31 +52,31 @@ def _variable_origin_static_type( return None -def _path_directory( - path: pathlib.Path +def _dir_path( + input_dir: pathlib.Path ) -> None: ''' - Ensure the input path refers to a valid directory. + Ensure the input directory refers to a valid path. 
''' - if not path.is_dir(): + if not input_dir.is_dir(): raise NotADirectoryError( - f'Invalid target_dir path: {str(path)}' + f'Invalid directory path: {str(input_dir)}' ) return None -def _empty_directory( - path: pathlib.Path +def _dir_empty( + input_dir: pathlib.Path ) -> None: ''' Ensure the input directory is empty. ''' - if any(path.iterdir()): + if any(input_dir.iterdir()): raise FileExistsError( - f'Input directory {str(path)} contains files; expected an empty directory' + f'Input directory {str(input_dir)} contains files; expected an empty directory' ) return None @@ -120,11 +120,11 @@ def _date_within_range( return None -def _list_contain_unique_dict( +def _calibration_list_contain_unique_dict( parameters: list[dict[str, typing.Any]] ) -> None: ''' - Check whether the input list contains only unique dictionaries. + Check whether the input calibration list contains only unique dictionaries of parameters. ''' # Get unique dictionaries @@ -142,7 +142,7 @@ def _list_contain_unique_dict( def _calibration_units( - txtinout_path: pathlib.Path, + input_dir: pathlib.Path, param_change: ModifyDict ) -> None: ''' @@ -156,7 +156,7 @@ def _calibration_units( return cal_parms_df = pandas.read_csv( - filepath_or_buffer=txtinout_path / "cal_parms.cal", + filepath_or_buffer=input_dir / 'cal_parms.cal', skiprows=2, sep=r'\s+' ) @@ -179,7 +179,7 @@ def _calibration_units( ) file = obj_type_files[obj_type] - file_path = txtinout_path / file + file_path = input_dir / file # Open file and check that units id are valid df = pandas.read_csv( @@ -202,7 +202,7 @@ def _calibration_units( def _calibration_conditions( - txtinout_path: pathlib.Path, + input_dir: pathlib.Path, param_change: ModifyDict ) -> None: ''' @@ -222,13 +222,13 @@ def _calibration_conditions( if 'hsg' in conditions: validators['hsg'] = {'A', 'B', 'C', 'D'} if 'texture' in conditions: - df_textures = pandas.read_fwf(txtinout_path / 'soils.sol', skiprows=1) + df_textures = pandas.read_fwf(input_dir / 'soils.sol', skiprows=1) validators['texture'] = set(df_textures['texture'].dropna().unique()) if 'plant' in conditions: - df_plants = pandas.read_fwf(txtinout_path / 'plants.plt', sep=r'\s+', skiprows=1) + df_plants = pandas.read_fwf(input_dir / 'plants.plt', sep=r'\s+', skiprows=1) validators['plant'] = set(df_plants['name'].dropna().unique()) if 'landuse' in conditions: - df_landuse = pandas.read_csv(txtinout_path / 'landuse.lum', sep=r'\s+', skiprows=1) + df_landuse = pandas.read_csv(input_dir / 'landuse.lum', sep=r'\s+', skiprows=1) validators['landuse'] = set(df_landuse['plnt_com'].dropna().unique()) # Validate conditions @@ -251,7 +251,7 @@ def _calibration_conditions( def _calibration_conditions_and_units( - txtinout_path: pathlib.Path, + input_dir: pathlib.Path, parameters: list[ModifyDict] ) -> None: ''' @@ -265,11 +265,11 @@ def _calibration_conditions_and_units( for param_change in parameters: try: _calibration_conditions( - txtinout_path=txtinout_path, + input_dir=input_dir, param_change=param_change ) _calibration_units( - txtinout_path=txtinout_path, + input_dir=input_dir, param_change=param_change ) except ValueError as e: @@ -282,7 +282,7 @@ def _calibration_conditions_and_units( def _calibration_parameters( - txtinout_path: pathlib.Path, + input_dir: pathlib.Path, parameters: list[BoundDict] | list[ModifyDict] ) -> None: ''' @@ -290,7 +290,7 @@ def _calibration_parameters( ''' # Path of cal_parms.cal - file_path = txtinout_path / 'cal_parms.cal' + file_path = input_dir / 'cal_parms.cal' # DataFrame from cal_parms.cal file 
parms_df = pandas.read_csv( @@ -325,20 +325,20 @@ def _json_extension( def _ensure_together(**kwargs: typing.Any) -> None: - """ + ''' Ensure that either all or none of the given arguments are provided (not None). Example: _ensure_together(begin_date=begin, end_date=end) - """ + ''' + total = len(kwargs) provided = [name for name, value in kwargs.items() if value is not None] # If some (but not all) values are provided → inconsistent input if 0 < len(provided) < total: missing = [name for name in kwargs if name not in provided] - all_args = ", ".join(kwargs.keys()) + all_args = ', '.join(kwargs.keys()) raise ValueError( - f"Arguments [{all_args}] must be provided together. " - f"Missing: {missing}, Provided: {provided}" + f'Arguments [{all_args}] must be provided together, but missing: {missing}' ) diff --git a/tests/test_data_manager.py b/tests/test_data_manager.py index 780aef1..1edc5c0 100644 --- a/tests/test_data_manager.py +++ b/tests/test_data_manager.py @@ -17,13 +17,13 @@ def test_simulated_timeseries_df( data_manager ): - # set up TxtInOut folder path - txtinout_folder = os.path.join(os.path.dirname(__file__), 'TxtInOut') + # set up TxtInOut directory path + txtinout_dir = os.path.join(os.path.dirname(__file__), 'TxtInOut') # Pass: time series DataFrame and save output with tempfile.TemporaryDirectory() as tmp_dir: df = data_manager.simulated_timeseries_df( - target_file=os.path.join(txtinout_folder, 'zrecall_yr.txt'), + sim_file=os.path.join(txtinout_dir, 'zrecall_yr.txt'), ref_day=15, ref_month=6, has_units=True, @@ -35,7 +35,7 @@ def test_simulated_timeseries_df( missing_cols = ['mon', 'day'] with pytest.raises(Exception) as exc_info: data_manager.simulated_timeseries_df( - target_file=os.path.join(txtinout_folder, 'basin_carbon_all.txt'), + sim_file=os.path.join(txtinout_dir, 'basin_carbon_all.txt'), has_units=True ) assert exc_info.value.args[0] == f'Missing required time series columns "{missing_cols}" in file "basin_carbon_all.txt"' @@ -43,7 +43,7 @@ def test_simulated_timeseries_df( # Error: invalid begin_date format with pytest.raises(Exception) as exc_info: data_manager.simulated_timeseries_df( - target_file=os.path.join(txtinout_folder, 'zrecall_yr.txt'), + sim_file=os.path.join(txtinout_dir, 'zrecall_yr.txt'), has_units=True, begin_date='2025-01-01' ) @@ -52,7 +52,7 @@ def test_simulated_timeseries_df( # Error: empty DataFrame extracted by date range with pytest.raises(Exception) as exc_info: data_manager.simulated_timeseries_df( - target_file=os.path.join(txtinout_folder, 'zrecall_yr.txt'), + sim_file=os.path.join(txtinout_dir, 'zrecall_yr.txt'), has_units=True, begin_date='01-Jan-1900', end_date='31-Dec-1900' @@ -62,7 +62,7 @@ def test_simulated_timeseries_df( # Error: invalid file for reference day with pytest.raises(Exception) as exc_info: data_manager.simulated_timeseries_df( - target_file=os.path.join(txtinout_folder, 'zrecall_day.txt'), + sim_file=os.path.join(txtinout_dir, 'zrecall_day.txt'), has_units=True, ref_day=6 ) @@ -71,7 +71,7 @@ def test_simulated_timeseries_df( # Error: invalid file for reference month with pytest.raises(Exception) as exc_info: data_manager.simulated_timeseries_df( - target_file=os.path.join(txtinout_folder, 'zrecall_mon.txt'), + sim_file=os.path.join(txtinout_dir, 'zrecall_mon.txt'), has_units=True, ref_month=6 ) @@ -80,7 +80,7 @@ def test_simulated_timeseries_df( # Error: invalid column name to filter rows with pytest.raises(Exception) as exc_info: data_manager.simulated_timeseries_df( - target_file=os.path.join(txtinout_folder, 
'zrecall_yr.txt'), + sim_file=os.path.join(txtinout_dir, 'zrecall_yr.txt'), has_units=True, apply_filter={'unavailable': 1} ) @@ -89,7 +89,7 @@ def test_simulated_timeseries_df( # Error: invalid column value type with pytest.raises(Exception) as exc_info: data_manager.simulated_timeseries_df( - target_file=os.path.join(txtinout_folder, 'zrecall_yr.txt'), + sim_file=os.path.join(txtinout_dir, 'zrecall_yr.txt'), has_units=True, apply_filter={'name': 1} ) @@ -99,7 +99,7 @@ def test_simulated_timeseries_df( val = ['hru007'] with pytest.raises(Exception) as exc_info: data_manager.simulated_timeseries_df( - target_file=os.path.join(txtinout_folder, 'zrecall_yr.txt'), + sim_file=os.path.join(txtinout_dir, 'zrecall_yr.txt'), has_units=True, apply_filter={'name': val} ) @@ -108,7 +108,7 @@ def test_simulated_timeseries_df( # Error: invalid column name in usecols with pytest.raises(Exception) as exc_info: data_manager.simulated_timeseries_df( - target_file=os.path.join(txtinout_folder, 'zrecall_yr.txt'), + sim_file=os.path.join(txtinout_dir, 'zrecall_yr.txt'), has_units=True, usecols=['unavailable_col'] ) @@ -117,7 +117,7 @@ def test_simulated_timeseries_df( # Error: invalid JSON file extension to save the DataFrame with pytest.raises(Exception) as exc_info: data_manager.simulated_timeseries_df( - target_file=os.path.join(txtinout_folder, 'zrecall_yr.txt'), + sim_file=os.path.join(txtinout_dir, 'zrecall_yr.txt'), has_units=True, json_file='ext_invalid.txt' ) diff --git a/tests/test_sensitivity_analyzer.py b/tests/test_sensitivity_analyzer.py index 1bb0351..d00e7eb 100644 --- a/tests/test_sensitivity_analyzer.py +++ b/tests/test_sensitivity_analyzer.py @@ -22,17 +22,17 @@ def performance_metrics(): yield output -def test_simulation_by_sobol_sample( +def test_simulation_by_sample_parameters( sensitivity_analyzer, performance_metrics ): - # set up TxtInOut folder path - txtinout_folder = os.path.join(os.path.dirname(__file__), 'TxtInOut') + # set up TxtInOut directory path + txtinout_dir = os.path.join(os.path.dirname(__file__), 'TxtInOut') # initialize TxtinoutReader class txtinout_reader = pySWATPlus.TxtinoutReader( - path=txtinout_folder + tio_dir=txtinout_dir ) # Sensitivity parameters @@ -45,7 +45,7 @@ def test_simulation_by_sobol_sample( } ] # Target data from sensitivity simulation - simulation_data = { + extract_data = { 'channel_sd_mon.txt': { 'has_units': True, 'ref_day': 1, @@ -55,18 +55,18 @@ def test_simulation_by_sobol_sample( } with tempfile.TemporaryDirectory() as tmp1_dir: - # Copy required files to a target directory - target_dir = txtinout_reader.copy_required_files( - target_dir=tmp1_dir + # Copy required files to a simulation directory + sim_dir = txtinout_reader.copy_required_files( + sim_dir=tmp1_dir ) - # Intialize TxtinOutReader class by target direcotry - target_reader = pySWATPlus.TxtinoutReader( - path=target_dir + # Intialize TxtinOutReader class by simulation direcotry + sim_reader = pySWATPlus.TxtinoutReader( + tio_dir=sim_dir ) # Disable CSV file generation to save time - target_reader.disable_csv_print() + sim_reader.disable_csv_print() # Disable all objects for daily time series file in print.prt to save time and space - target_reader.enable_object_in_print_prt( + sim_reader.enable_object_in_print_prt( obj=None, daily=False, monthly=True, @@ -74,23 +74,23 @@ def test_simulation_by_sobol_sample( avann=True ) # Set begin and end year - target_reader.set_simulation_period( + sim_reader.set_simulation_period( begin_date='01-Jan-2010', end_date='31-Dec-2012' ) # Set warmup 
year - target_reader.set_warmup_year( + sim_reader.set_warmup_year( warmup=1 ) - # Pass: sensitivity simulation by Sobol sample + # Pass: sensitivity simulation by parameter sample with tempfile.TemporaryDirectory() as tmp2_dir: - output = sensitivity_analyzer.simulation_by_sobol_sample( + output = sensitivity_analyzer.simulation_by_sample_parameters( parameters=parameters, sample_number=1, - simulation_folder=tmp2_dir, - txtinout_folder=tmp1_dir, - simulation_data=simulation_data + sensim_dir=tmp2_dir, + txtinout_dir=tmp1_dir, + extract_data=extract_data ) assert 'time' in output assert isinstance(output['time'], dict) @@ -110,7 +110,7 @@ def test_simulation_by_sobol_sample( # Pass: read sensitive DataFrame of scenarios output = pySWATPlus.DataManager().read_sensitive_dfs( - sim_file=os.path.join(tmp2_dir, 'sensitivity_simulation.json'), + sensim_file=os.path.join(tmp2_dir, 'sensitivity_simulation.json'), df_name='channel_sd_mon_df', add_problem=True, add_sample=True @@ -125,10 +125,10 @@ def test_simulation_by_sobol_sample( # Pass: indicator values output = performance_metrics.scenario_indicators( - sim_file=os.path.join(tmp2_dir, 'sensitivity_simulation.json'), + sensim_file=os.path.join(tmp2_dir, 'sensitivity_simulation.json'), df_name='channel_sd_mon_df', sim_col='flo_out', - obs_file=os.path.join(txtinout_folder, 'a_observe_discharge_monthly.csv'), + obs_file=os.path.join(txtinout_dir, 'a_observe_discharge_monthly.csv'), date_format='%Y-%m-%d', obs_col='mean', indicators=indicators, @@ -138,74 +138,74 @@ def test_simulation_by_sobol_sample( assert len(output) == 2 assert len(output['indicator']) == 8 - # Pass: Sobol sensitivity indices - output = sensitivity_analyzer.sobol_indices( - sim_file=os.path.join(tmp2_dir, 'sensitivity_simulation.json'), + # Pass: sensitivity indices + output = sensitivity_analyzer.parameter_sensitivity_indices( + sensim_file=os.path.join(tmp2_dir, 'sensitivity_simulation.json'), df_name='channel_sd_mon_df', sim_col='flo_out', - obs_file=os.path.join(txtinout_folder, 'a_observe_discharge_monthly.csv'), + obs_file=os.path.join(txtinout_dir, 'a_observe_discharge_monthly.csv'), date_format='%Y-%m-%d', obs_col='mean', indicators=indicators, - json_file=os.path.join(tmp2_dir, 'sobol_indices.json') + json_file=os.path.join(tmp2_dir, 'sensitivity_indices.json') ) assert isinstance(output, dict) assert len(output) == 2 - sobol_indices = output['sobol_indices'] - assert isinstance(sobol_indices, dict) - assert len(sobol_indices) == 6 - assert all([isinstance(sobol_indices[i]['S1'][0], float) for i in indicators]) + sensitivity_indices = output['sensitivity_indices'] + assert isinstance(sensitivity_indices, dict) + assert len(sensitivity_indices) == 6 + assert all([isinstance(sensitivity_indices[i]['S1'][0], float) for i in indicators]) with tempfile.TemporaryDirectory() as tmp_dir: - # Error: invalid simulation_data type + # Error: invalid extract_data type with pytest.raises(Exception) as exc_info: - sensitivity_analyzer.simulation_by_sobol_sample( + sensitivity_analyzer.simulation_by_sample_parameters( parameters=parameters, sample_number=1, - simulation_folder=tmp_dir, - txtinout_folder=txtinout_folder, - simulation_data=[] + sensim_dir=tmp_dir, + txtinout_dir=txtinout_dir, + extract_data=[] ) - assert exc_info.value.args[0] == 'Expected "simulation_data" to be "dict", but got type "list"' - # Error: invalid data type of value for key in simulation_data + assert exc_info.value.args[0] == 'Expected "extract_data" to be "dict", but got type "list"' + # Error: 
invalid data type of value for key in extract_data with pytest.raises(Exception) as exc_info: - sensitivity_analyzer.simulation_by_sobol_sample( + sensitivity_analyzer.simulation_by_sample_parameters( parameters=parameters, sample_number=1, - simulation_folder=tmp_dir, - txtinout_folder=txtinout_folder, - simulation_data={ + sensim_dir=tmp_dir, + txtinout_dir=txtinout_dir, + extract_data={ 'channel_sd_yr.txt': [] } ) assert exc_info.value.args[0] == 'Expected "channel_sd_yr.txt" in simulation_date must be a dictionary, but got type "list"' - # Error: missing has_units subkey for key in simulation_data + # Error: missing has_units subkey for key in extract_data with pytest.raises(Exception) as exc_info: - sensitivity_analyzer.simulation_by_sobol_sample( + sensitivity_analyzer.simulation_by_sample_parameters( parameters=parameters, sample_number=1, - simulation_folder=tmp_dir, - txtinout_folder=txtinout_folder, - simulation_data={ + sensim_dir=tmp_dir, + txtinout_dir=txtinout_dir, + extract_data={ 'channel_sd_yr.txt': {} } ) - assert exc_info.value.args[0] == 'Key has_units is missing for "channel_sd_yr.txt" in simulation_data' - # Error: invalid sub_key for key in simulation_data + assert exc_info.value.args[0] == 'Key has_units is missing for "channel_sd_yr.txt" in extract_data' + # Error: invalid sub_key for key in extract_data with pytest.raises(Exception) as exc_info: - sensitivity_analyzer.simulation_by_sobol_sample( + sensitivity_analyzer.simulation_by_sample_parameters( parameters=parameters, sample_number=1, - simulation_folder=tmp_dir, - txtinout_folder=txtinout_folder, - simulation_data={ + sensim_dir=tmp_dir, + txtinout_dir=txtinout_dir, + extract_data={ 'channel_sd_yr.txt': { 'has_units': True, 'begin_datee': None } } ) - assert 'Invalid key "begin_datee" for "channel_sd_yr.txt" in simulation_data' in exc_info.value.args[0] + assert 'Invalid key "begin_datee" for "channel_sd_yr.txt" in extract_data' in exc_info.value.args[0] def test_error_scenario_indicators( @@ -215,7 +215,7 @@ def test_error_scenario_indicators( # Error: invalid indicator name with pytest.raises(Exception) as exc_info: performance_metrics.scenario_indicators( - sim_file='sensitivity_simulation.json', + sensim_file='sensitivity_simulation.json', df_name='channel_sd_mon_df', sim_col='flo_out', obs_file='a_observe_discharge_monthly.csv', diff --git a/tests/test_txtinout_reader.py b/tests/test_txtinout_reader.py index a21bde8..69acdaa 100644 --- a/tests/test_txtinout_reader.py +++ b/tests/test_txtinout_reader.py @@ -9,12 +9,12 @@ @pytest.fixture(scope='class') def txtinout_reader(): - # set up TxtInOut folder path - txtinout_folder = os.path.join(os.path.dirname(__file__), 'TxtInOut') + # set up TxtInOut directory path + tio_dir = os.path.join(os.path.dirname(__file__), 'TxtInOut') # initialize TxtinoutReader class output = pySWATPlus.TxtinoutReader( - path=txtinout_folder + tio_dir=tio_dir ) yield output @@ -26,30 +26,30 @@ def test_run_swat( with tempfile.TemporaryDirectory() as tmp1_dir: - # Intialize TxtinOutReader class by target direcotry - target_dir = txtinout_reader.copy_required_files( - target_dir=tmp1_dir + # Intialize TxtinOutReader class by simulation direcotry + sim_dir = txtinout_reader.copy_required_files( + sim_dir=tmp1_dir ) - target_reader = pySWATPlus.TxtinoutReader( - path=target_dir + sim_reader = pySWATPlus.TxtinoutReader( + tio_dir=sim_dir ) # Error: run SWAT+ in same directory with pytest.raises(Exception) as exc_info: - target_reader.run_swat( - target_dir=target_dir + 
sim_reader.run_swat( + sim_dir=sim_dir ) assert 'expected an empty directory' in exc_info.value.args[0] # Pass: enable CSV print - target_reader.enable_csv_print() - printprt_file = os.path.join(str(target_reader.root_folder), 'print.prt') + sim_reader.enable_csv_print() + printprt_file = os.path.join(str(sim_reader.root_dir), 'print.prt') with open(printprt_file, 'r') as read_output: target_line = read_output.readlines()[6] assert target_line[0] == 'y' # Pass: update all objects in print.prt - target_reader.enable_object_in_print_prt( + sim_reader.enable_object_in_print_prt( obj=None, daily=False, monthly=False, @@ -69,8 +69,8 @@ def test_run_swat( with tempfile.TemporaryDirectory() as tmp2_dir: # Pass: run SWAT+ in other directory - target_dir = target_reader.run_swat( - target_dir=tmp2_dir, + sim2_dir = sim_reader.run_swat( + sim_dir=tmp2_dir, begin_date='01-Jan-2010', end_date='01-Jan-2012', simulation_timestep=0, @@ -79,24 +79,24 @@ def test_run_swat( 'channel_sd': {'daily': False}, 'basin_wb': {} }, - begin_date_print='01-Feb-2010', - end_date_print='31-Dec-2011', + print_begin_date='01-Feb-2010', + print_end_date='31-Dec-2011', print_interval=1 ) - assert os.path.samefile(target_dir, tmp2_dir) + assert os.path.samefile(sim2_dir, tmp2_dir) # Pass: data types are parsed correctly (for example jday must be int) - df = pySWATPlus.utils._load_file( - path=target_dir / 'channel_sd_yr.txt', - skip_rows=[0, 2], + df = pySWATPlus.utils._df_extract( + input_file=sim2_dir / 'channel_sd_yr.txt', + skiprows=[0, 2], ) assert pandas.api.types.is_integer_dtype(df['jday']) # Pass: read CSV file - csv_df = pySWATPlus.utils._load_file( - path=target_dir / 'channel_sd_yr.csv', - skip_rows=[0, 2], + csv_df = pySWATPlus.utils._df_extract( + input_file=sim2_dir / 'channel_sd_yr.csv', + skiprows=[0, 2], ) # Pass: TXT and CSV file DataFrames. They cannot be compared directly due to rounding differences. 
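A short sketch of the `_df_extract` helper exercised above. It is an internal utility, shown here only to illustrate the renamed `input_file`/`skiprows` arguments; the output path is a placeholder for a file produced by `run_swat`.

```python
import pathlib
import pandas
import pySWATPlus

# Internal helper: for text files it tries whitespace-separated, multi-space-separated
# and fixed-width parsers in turn, then strips stray whitespace from the result.
df = pySWATPlus.utils._df_extract(
    input_file=pathlib.Path(r"C:\Users\Username\simulation_folder\channel_sd_yr.txt"),
    skiprows=[0, 2]  # the same header rows skipped in the test above
)
print(pandas.api.types.is_integer_dtype(df['jday']))  # True: numeric columns keep their dtypes
```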
@@ -105,10 +105,10 @@ def test_run_swat( assert all(df.dtypes == csv_df.dtypes) # Pass: adding invalid object with flag - target_reader = pySWATPlus.TxtinoutReader( - path=target_dir + sim2_reader = pySWATPlus.TxtinoutReader( + tio_dir=sim2_dir ) - target_reader.enable_object_in_print_prt( + sim2_reader.enable_object_in_print_prt( obj='my_custom_obj', daily=True, monthly=False, @@ -116,13 +116,14 @@ def test_run_swat( avann=True, allow_unavailable_object=True ) - printprt_file = os.path.join(str(target_reader.root_folder), 'print.prt') + printprt_file = os.path.join(str(sim2_reader.root_dir), 'print.prt') with open(printprt_file, 'r') as f: lines = f.readlines() assert any(line.startswith('my_custom_obj') for line in lines) assert ' y' in lines[-1] + # Pass: disable CSV print - target_reader.disable_csv_print() + sim2_reader.disable_csv_print() with open(printprt_file, 'r') as read_output: target_line = read_output.readlines()[6] assert target_line[0] == 'n' @@ -133,24 +134,24 @@ def test_error_txtinoutreader_class(): # Error: invalid input path type with pytest.raises(Exception) as exc_info: pySWATPlus.TxtinoutReader( - path=1 + tio_dir=1 ) valid_type = ['str', 'Path'] - assert exc_info.value.args[0] == f'Expected "path" to be one of {valid_type}, but got type "int"' + assert exc_info.value.args[0] == f'Expected "tio_dir" to be one of {valid_type}, but got type "int"' # Error: invalid TxtInOut directory invalid_dir = 'nonexist_folder' with pytest.raises(Exception) as exc_info: pySWATPlus.TxtinoutReader( - path=invalid_dir + tio_dir=invalid_dir ) - assert exc_info.value.args[0] == f'Invalid target_dir path: {str(pathlib.Path(invalid_dir).resolve())}' + assert exc_info.value.args[0] == f'Invalid directory path: {str(pathlib.Path(invalid_dir).resolve())}' # Error: no EXE file with tempfile.TemporaryDirectory() as tmp_dir: with pytest.raises(Exception) as exc_info: pySWATPlus.TxtinoutReader( - path=tmp_dir + tio_dir=tmp_dir ) assert exc_info.value.args[0] == 'Expected exactly one .exe file in the parent folder, but found none or multiple' @@ -190,10 +191,10 @@ def test_set_simulation_period( # Pass: modify begin and end date in time.sim with tempfile.TemporaryDirectory() as tmp_dir: - target_dir = txtinout_reader.copy_required_files(tmp_dir) + sim_dir = txtinout_reader.copy_required_files(tmp_dir) # Read original time.sim - with open(target_dir / 'time.sim', 'r') as f: + with open(sim_dir / 'time.sim', 'r') as f: original_lines = f.readlines() # Replace begin and end date in read line @@ -204,15 +205,17 @@ def test_set_simulation_period( parts[3] = str(2012) expected_line = '{: >8} {: >10} {: >10} {: >10} {: >10} \n'.format(*parts) - target_reader = pySWATPlus.TxtinoutReader(target_dir) + sim_reader = pySWATPlus.TxtinoutReader( + tio_dir=sim_dir + ) - target_reader.set_simulation_period( + sim_reader.set_simulation_period( begin_date='15-Mar-2010', end_date='20-Oct-2012' ) # Read the line in time.sim again - with open(target_dir / 'time.sim', 'r') as f: + with open(sim_dir / 'time.sim', 'r') as f: lines = f.readlines() assert lines[2] == expected_line, f'Expected:\n{expected_line}\nGot:\n{lines[2]}' @@ -232,11 +235,11 @@ def test_set_simulation_timestep( # Pass: modify step in time.sim with tempfile.TemporaryDirectory() as tmp_dir: - target_dir = txtinout_reader.copy_required_files(tmp_dir) + sim_dir = txtinout_reader.copy_required_files(tmp_dir) simulation_timestep = 1 # Read original time.sim - with open(target_dir / 'time.sim', 'r') as f: + with open(sim_dir / 'time.sim', 'r') as f: 
original_lines = f.readlines() # Replace simulation timestep in read line @@ -244,13 +247,15 @@ def test_set_simulation_timestep( parts[4] = str(simulation_timestep) # new timestep value expected_line = '{: >8} {: >10} {: >10} {: >10} {: >10} \n'.format(*parts) - target_reader = pySWATPlus.TxtinoutReader(target_dir) - target_reader.set_simulation_timestep( + sim_reader = pySWATPlus.TxtinoutReader( + tio_dir=sim_dir + ) + sim_reader.set_simulation_timestep( step=simulation_timestep ) # Read the line in time.sim again - with open(target_dir / 'time.sim', 'r') as f: + with open(sim_dir / 'time.sim', 'r') as f: lines = f.readlines() # Now just compare @@ -271,10 +276,10 @@ def test_set_print_period( # Pass: modify begin date in print.prt with tempfile.TemporaryDirectory() as tmp_dir: - target_dir = txtinout_reader.copy_required_files(tmp_dir) + sim_dir = txtinout_reader.copy_required_files(tmp_dir) # Read original print.prt - with open(target_dir / 'print.prt', 'r') as f: + with open(sim_dir / 'print.prt', 'r') as f: original_lines = f.readlines() # Replace start date in read line @@ -286,21 +291,23 @@ def test_set_print_period( expected_line = f"{parts[0]:<12}{parts[1]:<11}{parts[2]:<11}{parts[3]:<10}{parts[4]:<10}{parts[5]}\n" - target_reader = pySWATPlus.TxtinoutReader(target_dir) + sim_reader = pySWATPlus.TxtinoutReader( + tio_dir=sim_dir + ) - target_reader.set_print_period( + sim_reader.set_print_period( begin_date='15-Mar-2010', end_date='31-Dec-2021' ) # Read the line in print.prt again - with open(target_dir / 'print.prt', 'r') as f: + with open(sim_dir / 'print.prt', 'r') as f: lines = f.readlines() assert lines[2] == expected_line, f'Expected:\n{expected_line}\nGot:\n{lines[2]}' # Error: begin date earlier than end date with pytest.raises(ValueError) as exc_info: - target_reader.set_print_period( + sim_reader.set_print_period( begin_date='01-Jan-2016', end_date='01-Jan-2012' ) @@ -314,11 +321,11 @@ def test_set_print_interval( # Pass: modify interval in print.prt with tempfile.TemporaryDirectory() as tmp_dir: - target_dir = txtinout_reader.copy_required_files(tmp_dir) + sim_dir = txtinout_reader.copy_required_files(tmp_dir) print_interval = 2 # Read original print.prt - with open(target_dir / 'print.prt', 'r') as f: + with open(sim_dir / 'print.prt', 'r') as f: original_lines = f.readlines() # Replace start date in read line @@ -326,14 +333,16 @@ def test_set_print_interval( parts[5] = str(print_interval) expected_line = f"{parts[0]:<12}{parts[1]:<11}{parts[2]:<11}{parts[3]:<10}{parts[4]:<10}{parts[5]}\n" - target_reader = pySWATPlus.TxtinoutReader(target_dir) + sim_reader = pySWATPlus.TxtinoutReader( + tio_dir=sim_dir + ) - target_reader.set_print_interval( + sim_reader.set_print_interval( interval=print_interval ) # Read the line in print.prt again - with open(target_dir / 'print.prt', 'r') as f: + with open(sim_dir / 'print.prt', 'r') as f: lines = f.readlines() assert lines[2] == expected_line, f'Expected:\n{expected_line}\nGot:\n{lines[2]}' @@ -356,47 +365,47 @@ def test_error_run_swat( ) assert "must be provided together" in exc_info.value.args[0] - # Error: begin_date_print set but no end_date_print + # Error: print_begin_date set but no print_end_date with pytest.raises(ValueError) as exc_info: txtinout_reader.run_swat( - begin_date_print='01-Jan-2010' + print_begin_date='01-Jan-2010' ) assert "must be provided together" in exc_info.value.args[0] - # Error: end_date_print set but no begin_date_print + # Error: print_end_date set but no print_begin_date with 
pytest.raises(ValueError) as exc_info: txtinout_reader.run_swat( - end_date_print='31-Dec-2013' + print_end_date='31-Dec-2013' ) assert "must be provided together" in exc_info.value.args[0] - # Error: begin_date_print and end_date_print set without begin_date and end_date + # Error: print_begin_date and print_end_date set without begin_date and end_date with pytest.raises(ValueError) as exc_info: txtinout_reader.run_swat( - begin_date_print='01-Jan-2010', - end_date_print='01-Jan-2011' + print_begin_date='01-Jan-2010', + print_end_date='01-Jan-2011' ) - assert "'begin_date_print'/'end_date_print' cannot be set unless 'begin_date' and 'end_date' are also provided." == exc_info.value.args[0] + assert 'print_begin_date or print_end_date cannot be set unless begin_date and end_date are also provided' == exc_info.value.args[0] - # Error: begin_date_print out of range + # Error: print_begin_date out of range with pytest.raises(ValueError) as exc_info: txtinout_reader.run_swat( begin_date='01-Jan-2010', end_date='31-Dec-2010', - begin_date_print='31-Dec-2011', - end_date_print='31-Dec-2012' + print_begin_date='31-Dec-2011', + print_end_date='31-Dec-2012' ) assert "must be between" in exc_info.value.args[0] - # Error: end_date_print out of range + # Error: print_end_date out of range with pytest.raises(ValueError) as exc_info: txtinout_reader.run_swat( begin_date='01-Jan-2010', end_date='31-Dec-2010', - begin_date_print='15-Jan-2010', - end_date_print='31-Dec-2012' + print_begin_date='15-Jan-2010', + print_end_date='31-Dec-2012' ) - assert "must be between" in exc_info.value.args[0] + assert 'must be between' in exc_info.value.args[0] # Error: invalid warm-up years with pytest.raises(Exception) as exc_info: @@ -410,12 +419,12 @@ def test_error_run_swat( txtinout_reader.run_swat( print_prt_control={None: {}} ) - assert '"None" cannot be used as a key in print_prt_control' in exc_info.value.args[0] + assert exc_info.value.args[0] == 'Use enable_object_in_print_prt method instead of None as a key in print_prt_control' # Error: invalid sub key value type of print_prt_control with pytest.raises(Exception) as exc_info: txtinout_reader.run_swat( - target_dir=None, + sim_dir=None, print_prt_control={'basin_wb': []} ) assert exc_info.value.args[0] == 'Expected a dictionary for key "basin_wb" in print_prt_control, but got type "list"' @@ -451,14 +460,14 @@ def test_calibration_cal_in_file_cio( ): with tempfile.TemporaryDirectory() as tmp1_dir: - target_dir = txtinout_reader.copy_required_files( - target_dir=tmp1_dir + sim_dir = txtinout_reader.copy_required_files( + sim_dir=tmp1_dir ) - target_reader = pySWATPlus.TxtinoutReader( - path=target_dir + sim_reader = pySWATPlus.TxtinoutReader( + tio_dir=sim_dir ) - file_path = target_reader.root_folder / 'file.cio' + file_path = sim_reader.root_dir / 'file.cio' fmt = ( f"{'{:<18}'}" # chg @@ -476,7 +485,7 @@ def test_calibration_cal_in_file_cio( ) # Pass: adding calibration line - target_reader._calibration_cal_in_file_cio( + sim_reader._calibration_cal_in_file_cio( add=True ) lines = file_path.read_text().splitlines() @@ -486,7 +495,7 @@ def test_calibration_cal_in_file_cio( assert lines[21] == expected_line # Pass: removing calibration line - target_reader._calibration_cal_in_file_cio( + sim_reader._calibration_cal_in_file_cio( add=False ) lines = file_path.read_text().splitlines() @@ -502,11 +511,11 @@ def test_write_calibration_file( ): with tempfile.TemporaryDirectory() as tmp1_dir: # Initialize TxtinOutReader class by target directory - target_dir = 
txtinout_reader.copy_required_files( - target_dir=tmp1_dir + sim_dir = txtinout_reader.copy_required_files( + sim_dir=tmp1_dir ) - target_reader = pySWATPlus.TxtinoutReader( - path=target_dir + sim_reader = pySWATPlus.TxtinoutReader( + tio_dir=sim_dir ) par_change = [ @@ -548,7 +557,7 @@ def test_write_calibration_file( ] # Run the method - target_reader._write_calibration_file(par_change) + sim_reader._write_calibration_file(par_change) # Expected output expected_content = ( @@ -592,7 +601,7 @@ def test_write_calibration_file( ) # Compare file content - cal_file = target_reader.root_folder / 'calibration.cal' + cal_file = sim_reader.root_dir / 'calibration.cal' content = cal_file.read_text() assert content == expected_content diff --git a/tests/test_validators.py b/tests/test_validators.py index dd55f05..f4863e8 100644 --- a/tests/test_validators.py +++ b/tests/test_validators.py @@ -7,12 +7,12 @@ @pytest.fixture(scope='class') def txtinout_reader(): - # set up TxtInOut folder path - txtinout_folder = os.path.join(os.path.dirname(__file__), 'TxtInOut') + # set up TxtInOut direcotry path + tio_dir = os.path.join(os.path.dirname(__file__), 'TxtInOut') # initialize TxtinoutReader class output = pySWATPlus.TxtinoutReader( - path=txtinout_folder + tio_dir=tio_dir ) yield output @@ -27,7 +27,7 @@ def test_calibration_parameters( pySWATPlus.types.ModifyDict(**{'name': 'cn2', 'value': 0.5, 'change_type': 'absval'}) ] pySWATPlus.validators._calibration_parameters( - txtinout_path=txtinout_reader.root_folder, + input_dir=txtinout_reader.root_dir, parameters=parameters ) @@ -37,7 +37,7 @@ def test_calibration_parameters( ] with pytest.raises(ValueError, match='obj_that_doesnt_exist'): pySWATPlus.validators._calibration_parameters( - txtinout_path=txtinout_reader.root_folder, + input_dir=txtinout_reader.root_dir, parameters=parameters ) @@ -49,21 +49,21 @@ def test_calibration_units( # Pass: parameter that supports units param_change = pySWATPlus.types.ModifyDict(name='cn2', change_type='pctchg', value=-50, units=[1, 2, 3]) pySWATPlus.validators._calibration_units( - txtinout_path=txtinout_reader.root_folder, + input_dir=txtinout_reader.root_dir, param_change=param_change ) # Pass: parameter that supports units with range param_change = pySWATPlus.types.ModifyDict(name='cn2', change_type='pctchg', value=-50, units=range(1, 4)) pySWATPlus.validators._calibration_units( - txtinout_path=txtinout_reader.root_folder, + input_dir=txtinout_reader.root_dir, param_change=param_change ) # Pass: parameter that supports units with set param_change = pySWATPlus.types.ModifyDict(name='cn2', change_type='pctchg', value=-50, units={1, 2, 3}) pySWATPlus.validators._calibration_units( - txtinout_path=txtinout_reader.root_folder, + input_dir=txtinout_reader.root_dir, param_change=param_change ) @@ -71,14 +71,14 @@ def test_calibration_units( param_change = pySWATPlus.types.ModifyDict(name='organicn', change_type='pctchg', value=-50, units=[1, 2, 3]) with pytest.raises(Exception) as exc_info: pySWATPlus.validators._calibration_units( - txtinout_path=txtinout_reader.root_folder, + input_dir=txtinout_reader.root_dir, param_change=param_change ) assert 'does not support "units" key' in exc_info.value.args[0] # Error: check units that does not exist df = pandas.read_csv( - filepath_or_buffer=txtinout_reader.root_folder / 'hru-data.hru', + filepath_or_buffer=txtinout_reader.root_dir / 'hru-data.hru', skiprows=1, sep=r'\s+', usecols=['id'] @@ -86,7 +86,7 @@ def test_calibration_units( param_change = 
pySWATPlus.types.ModifyDict(name='cn2', change_type='pctchg', value=-50, units=[len(df) + 1]) with pytest.raises(Exception) as exc_info: pySWATPlus.validators._calibration_units( - txtinout_path=txtinout_reader.root_folder, + input_dir=txtinout_reader.root_dir, param_change=param_change ) assert 'Invalid units for parameter' in exc_info.value.args[0] @@ -99,18 +99,18 @@ def test_calibration_conditions( # Pass: No conditions param_change = pySWATPlus.types.ModifyDict(name="cn2", change_type="pctchg", value=-50) pySWATPlus.validators._calibration_conditions( - txtinout_path=txtinout_reader.root_folder, + input_dir=txtinout_reader.root_dir, param_change=param_change ) # Pass: Supported conditions with valid values - df_textures = pandas.read_fwf(txtinout_reader.root_folder / 'soils.sol', skiprows=1) + df_textures = pandas.read_fwf(txtinout_reader.root_dir / 'soils.sol', skiprows=1) valid_textures = df_textures['texture'].dropna().unique() - df_plants = pandas.read_fwf(txtinout_reader.root_folder / 'plants.plt', sep=r'\s+', skiprows=1) + df_plants = pandas.read_fwf(txtinout_reader.root_dir / 'plants.plt', sep=r'\s+', skiprows=1) valid_plants = df_plants['name'].dropna().unique() - df_landuse = pandas.read_csv(txtinout_reader.root_folder / 'landuse.lum', sep=r'\s+', skiprows=1) + df_landuse = pandas.read_csv(txtinout_reader.root_dir / 'landuse.lum', sep=r'\s+', skiprows=1) valid_landuse = df_landuse['plnt_com'].dropna().unique() conditions = { @@ -122,7 +122,7 @@ def test_calibration_conditions( param_change = pySWATPlus.types.ModifyDict(name='cn2', change_type='pctchg', value=-50, conditions=conditions) pySWATPlus.validators._calibration_conditions( - txtinout_path=txtinout_reader.root_folder, + input_dir=txtinout_reader.root_dir, param_change=param_change ) @@ -134,7 +134,7 @@ def test_calibration_conditions( param_change = pySWATPlus.types.ModifyDict(name='cn2', change_type='pctchg', value=-50, conditions=conditions) with pytest.raises(Exception) as exc_info: pySWATPlus.validators._calibration_conditions( - txtinout_path=txtinout_reader.root_folder, + input_dir=txtinout_reader.root_dir, param_change=param_change ) assert 'is not supported' in exc_info.value.args[0] @@ -147,7 +147,7 @@ def test_calibration_conditions( param_change = pySWATPlus.types.ModifyDict(name='cn2', change_type='pctchg', value=-50, conditions=conditions) with pytest.raises(Exception) as exc_info: pySWATPlus.validators._calibration_conditions( - txtinout_path=txtinout_reader.root_folder, + input_dir=txtinout_reader.root_dir, param_change=param_change ) assert 'has invalid value' in exc_info.value.args[0] @@ -158,7 +158,7 @@ def test_calibration_conditions( param_change = pySWATPlus.types.ModifyDict(name='cn2', change_type='pctchg', value=-50, conditions=conditions) with pytest.raises(Exception) as exc_info: pySWATPlus.validators._calibration_conditions( - txtinout_path=txtinout_reader.root_folder, + input_dir=txtinout_reader.root_dir, param_change=param_change ) assert 'has invalid value' in exc_info.value.args[0] @@ -169,7 +169,7 @@ def test_calibration_conditions( param_change = pySWATPlus.types.ModifyDict(name='cn2', change_type='pctchg', value=-50, conditions=conditions) with pytest.raises(Exception) as exc_info: pySWATPlus.validators._calibration_conditions( - txtinout_path=txtinout_reader.root_folder, + input_dir=txtinout_reader.root_dir, param_change=param_change ) assert 'has invalid value' in exc_info.value.args[0] @@ -180,7 +180,7 @@ def test_calibration_conditions( param_change = 
pySWATPlus.types.ModifyDict(name='cn2', change_type='pctchg', value=-50, conditions=conditions) with pytest.raises(Exception) as exc_info: pySWATPlus.validators._calibration_conditions( - txtinout_path=txtinout_reader.root_folder, + input_dir=txtinout_reader.root_dir, param_change=param_change ) assert 'has invalid value' in exc_info.value.args[0]
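To illustrate what the renamed `input_dir` arguments of the calibration validators expect, here is a minimal sketch using the same helpers as the tests above. The TxtInOut path, units and condition values are illustrative assumptions, and these private helpers are normally invoked by `run_swat` itself.

```python
import pySWATPlus

# Placeholder SWAT+ project directory containing cal_parms.cal, hru-data.hru,
# soils.sol, plants.plt and landuse.lum.
reader = pySWATPlus.TxtinoutReader(tio_dir=r"C:\Users\Username\TxtInOut")

param_change = pySWATPlus.types.ModifyDict(
    name='cn2',
    change_type='pctchg',
    value=-50,
    units=[1, 2, 3],                # 1-based unit ids checked against hru-data.hru
    conditions={'hsg': ['A', 'B']}  # checked against the supported hydrologic soil groups
)

# Parameter names are checked against cal_parms.cal ...
pySWATPlus.validators._calibration_parameters(
    input_dir=reader.root_dir,
    parameters=[param_change]
)
# ... and, unless skip_validation=True, conditions and units are checked as well.
pySWATPlus.validators._calibration_conditions_and_units(
    input_dir=reader.root_dir,
    parameters=[param_change]
)
```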
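Finally, a small standalone illustration of the all-or-none check behind the `must be provided together` errors asserted in `test_error_run_swat`; the date values are arbitrary examples.

```python
import pySWATPlus

# Both values supplied: the check passes silently.
pySWATPlus.validators._ensure_together(
    begin_date='01-Jan-2010',
    end_date='31-Dec-2010'
)

# Only one supplied: a ValueError names the missing argument.
try:
    pySWATPlus.validators._ensure_together(
        begin_date='01-Jan-2010',
        end_date=None
    )
except ValueError as error:
    print(error)
    # Arguments [begin_date, end_date] must be provided together, but missing: ['end_date']
```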