Walkthrough

Version 3.9.1 release incorporating parallelisation enhancements to the PHS (Predicted Heat Strain) model computation. Configuration and packaging files updated with the new version number. The PHS implementation has been refactored to leverage Numba's vectorised and parallel processing capabilities whilst preserving the public API contract. A test was updated to reflect the array input handling semantics.

Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
🚥 Pre-merge checks: 1 passed, 2 failed (warnings)
Pull request overview
This PR optimizes the performance of the PHS (Predicted Heat Strain) model by replacing the previous @np.vectorize decorator approach with a more efficient manual parallelization using Numba's prange. The optimization involves restructuring the code to separate scalar computation from parallel array processing.
Changes:
- Refactored the PHS model to use Numba parallel loops with `prange` for better performance
- Added helper functions for posture encoding and timing (though the timing utilities are unused)
- Bumped version from 3.9.0 to 3.9.1
Reviewed changes
Copilot reviewed 6 out of 6 changed files in this pull request and generated 4 comments.
Show a summary per file
| File | Description |
|---|---|
| setup.py | Version bump to 3.9.1 |
| pythermalcomfort/init.py | Version bump to 3.9.1 |
| docs/conf.py | Version bump to 3.9.1 |
| .bumpversion.toml | Version bump to 3.9.1 |
| pythermalcomfort/utilities.py | Added unused timing utilities (_TimingResult, _time_function) and time import |
| pythermalcomfort/models/phs.py | Major refactoring: renamed _phs_optimized to _phs_optimized_scalar, added _phs_optimized_array with parallel processing, added _posture_to_code helper, updated main phs() function to use broadcasting and new parallel implementation |
Actionable comments posted: 3
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@pythermalcomfort/models/phs.py`:
- Around line 474-500: Add a NumPy-style docstring to the helper
_posture_to_code describing Parameters (posture: np.ndarray | str — accept
scalar or array of posture strings/values, valid values are
Postures.standing.value, Postures.sitting.value, Postures.crouching.value),
Returns (np.ndarray | int — mapped codes using _POSTURE_STANDING,
_POSTURE_SITTING, _POSTURE_CROUCHING; scalar input returns int, array returns
np.ndarray of dtype int64), and Raises (ValueError when an invalid posture is
supplied). Also briefly document behavior for scalar vs array inputs and mention
that comparisons are vectorized using numpy. Ensure the docstring follows the
NumPy docstring sections and uses the exact function name _posture_to_code and
constants _POSTURE_STANDING/_POSTURE_SITTING/_POSTURE_CROUCHING for clarity.
- Around line 319-355: The truthiness checks for carry-over inputs (e.g., "if
not t_re" and "if not t_cr_eq") fail for NumPy arrays; change those checks to
explicit None checks (use "is None") so array inputs are not evaluated as
boolean. Locate the carry-over sentinel logic around variables t_re and t_cr_eq
(and any similar carry-over variables like t_sk, t_cr if present) and replace
"if not <var>" / "if <var> is falsy" with "if <var> is None" so broadcasting and
the subsequent np.broadcast_arrays call continue to work for array inputs.
In `@pythermalcomfort/utilities.py`:
- Around line 38-73: Add explicit type validation at the start of
_time_function: use validate_type to ensure func is callable and that repeat,
number, and warmup are ints (raise TypeError for wrong types), then perform the
existing bounds checks (raise ValueError for invalid values like repeat <= 0,
number <= 0, warmup < 0). Ensure the type checks run before the current numeric
comparisons so invalid types produce TypeError instead of unrelated Python
errors.
ℹ️ Review info
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (6)
.bumpversion.toml, docs/conf.py, pythermalcomfort/__init__.py, pythermalcomfort/models/phs.py, pythermalcomfort/utilities.py, setup.py
…nhance posture_to_code docstring
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
pythermalcomfort/models/phs.py (1)
Lines 109-113: ⚠️ Potential issue | 🟡 Minor — Stale docstring: update "If False" → "If None".

Both `t_re` and `t_cr_eq` docstring entries still read "If False" as the sentinel, but the implementation now uses `None`. This mismatches the updated default at lines 236–237 and the `is None` checks at lines 315–318.

📝 Proposed docstring fix:

```diff
-    t_re : float, optional
-        Mean rectal temperature when worker starts working, [°C]. If False in the 2004 standard,
-        then t_re = t_cr, whereas in the 2023 standard t_re = 36.8 °C
-    t_cr_eq : float, optional
-        Mean core temperature as a function of met when worker starts working, [°C]. If False in the 2004
-        standard, then t_cr_eq = t_cr, whereas in the 2023 standard t_cr_eq = 36.8 °C.
+    t_re : float or None, optional
+        Mean rectal temperature when worker starts working, [°C]. If None (default) in the 2004 standard,
+        then t_re = t_cr, whereas in the 2023 standard t_re = 36.8 °C
+    t_cr_eq : float or None, optional
+        Mean core temperature as a function of met when worker starts working, [°C]. If None (default) in the 2004
+        standard, then t_cr_eq = t_cr, whereas in the 2023 standard t_cr_eq = 36.8 °C.
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pythermalcomfort/models/phs.py` around lines 109 - 113, Update the parameter docstrings for t_re and t_cr_eq to reflect the actual sentinel used (None) instead of "If False": change the phrases "If False in the 2004 standard..." to "If None in the 2004 standard..." and similarly for the 2023 text so the docstring matches the implementation and the is None checks for t_re and t_cr_eq in the PHS model.
♻️ Duplicate comments (2)
pythermalcomfort/models/phs.py (2)
Lines 475-530: `_posture_to_code` docstring — previous concern resolved. ✅

The helper now ships with a proper NumPy-style docstring covering Parameters, Returns, Raises, and Notes sections, exactly as requested.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pythermalcomfort/models/phs.py` around lines 475 - 530, The docstring for _posture_to_code is now complete; remove the lingering duplicate review marker by deleting the stray "[duplicate_comment]" note from the PR/comment or commit message and ensure no leftover TODO/comments remain in the _posture_to_code function or its docstring.
Lines 315-318: `is None` sentinel checks — previous concern resolved. ✅

The prior issue with `if not t_re` / `if not t_cr_eq` throwing `ValueError: truth value of an array is ambiguous` for NumPy array inputs has been cleanly addressed with explicit `is None` guards.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pythermalcomfort/models/phs.py` around lines 315 - 318, The previous bug where NumPy arrays caused "truth value of an array is ambiguous" has been fixed by replacing ambiguous boolean checks with explicit sentinel checks; ensure you keep the explicit "is None" guards around t_re and t_cr_eq (the assignments setting t_re = t_cr and t_cr_eq = t_cr) in the function where these variables are used so array inputs aren't evaluated in a boolean context and the fallback behavior remains correct.
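The failure mode and its fix can be reproduced in isolation (a standalone snippet, not the library's code; `resolve_t_re` is an illustrative name):

```python
import numpy as np

t_re = np.array([36.8, 37.0])

# `if not t_re` is ambiguous for a multi-element array and raises ValueError.
raised = False
try:
    if not t_re:
        pass
except ValueError:
    raised = True
assert raised

# An explicit sentinel check works for None, scalars, and arrays alike,
# so broadcasting of carry-over inputs keeps working.
def resolve_t_re(t_re, t_cr):
    if t_re is None:
        t_re = t_cr
    return t_re

assert resolve_t_re(None, 36.8) == 36.8
assert np.array_equal(resolve_t_re(t_re, 36.8), t_re)
```
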
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@pythermalcomfort/models/phs.py`:
- Around line 832-896: The JIT cache can serve stale compiled code if helper
functions move modules—add a short comment near _phs_optimized_array (or
directly above/next to _phs_optimized_scalar) stating that _phs_optimized_scalar
must remain in the same module as _phs_optimized_array to ensure Numba cache
invalidation works correctly (e.g., "keep in same module to avoid Numba
cache/stale compile issues"), so future refactors don't split them; no
functional change required.
---
Outside diff comments:
In `@pythermalcomfort/models/phs.py`:
- Around line 109-113: Update the parameter docstrings for t_re and t_cr_eq to
reflect the actual sentinel used (None) instead of "If False": change the
phrases "If False in the 2004 standard..." to "If None in the 2004 standard..."
and similarly for the 2023 text so the docstring matches the implementation and
the is None checks for t_re and t_cr_eq in the PHS model.
---
Duplicate comments:
In `@pythermalcomfort/models/phs.py`:
- Around line 475-530: The docstring for _posture_to_code is now complete;
remove the lingering duplicate review marker by deleting the stray
"[duplicate_comment]" note from the PR/comment or commit message and ensure no
leftover TODO/comments remain in the _posture_to_code function or its docstring.
- Around line 315-318: The previous bug where NumPy arrays caused "truth value
of an array is ambiguous" has been fixed by replacing ambiguous boolean checks
with explicit sentinel checks; ensure you keep the explicit "is None" guards
around t_re and t_cr_eq (the assignments setting t_re = t_cr and t_cr_eq = t_cr)
in the function where these variables are used so array inputs aren't evaluated
in a boolean context and the fallback behavior remains correct.
```python
@jit(nopython=True, parallel=True, cache=True)
def _phs_optimized_array(
    tdb,
    tr,
    v,
    p_a,
    met,
    clo,
    posture_code,
    drink,
    acclimatized,
    weight,
    wme,
    i_mst,
    a_p,
    height,
    walk_sp,
    theta,
    duration,
    f_r,
    t_sk,
    t_cr,
    t_re,
    t_cr_eq,
    t_sk_t_cr_wg,
    evap_load_wm2_min,
    sweat_rate_watt,
    model_code,
):
    # n == number of flattened input elements
    out_t_re = np.empty_like(tdb, dtype=np.float64)
    out_t_sk = np.empty_like(tdb, dtype=np.float64)
    out_t_cr = np.empty_like(tdb, dtype=np.float64)
    out_t_cr_eq = np.empty_like(tdb, dtype=np.float64)
    out_t_sk_t_cr_wg = np.empty_like(tdb, dtype=np.float64)
    out_sweat_rate_watt = np.empty_like(tdb, dtype=np.float64)
    out_evap_load_wm2_min = np.empty_like(tdb, dtype=np.float64)
    out_sw_tot_g = np.empty_like(tdb, dtype=np.float64)
    out_d_lim_loss_50 = np.empty_like(tdb, dtype=np.float64)
    out_d_lim_loss_95 = np.empty_like(tdb, dtype=np.float64)
    out_d_lim_t_re = np.empty_like(tdb, dtype=np.float64)

    n = tdb.size

    for i in prange(n):
        (
            out_t_re[i],
            out_t_sk[i],
            out_t_cr[i],
            out_t_cr_eq[i],
            out_t_sk_t_cr_wg[i],
            out_sweat_rate_watt[i],
            out_evap_load_wm2_min[i],
            out_sw_tot_g[i],
            out_d_lim_loss_50[i],
            out_d_lim_loss_95[i],
            out_d_lim_t_re[i],
        ) = _phs_optimized_scalar(
            tdb[i],
            tr[i],
            v[i],
            p_a[i],
            met[i],
            clo[i],
            posture_code[i],
            # … (remaining arguments truncated in the diff view)
```
🧹 Nitpick | 🔵 Trivial
_phs_optimized_array — solid parallel design, one cold-start caveat worth noting.
The prange(n) loop is free of race conditions: each iteration i writes exclusively to its own out_*[i] slot, with no shared mutable state across threads. The @jit(nopython=True, parallel=True, cache=True) decorator is a supported combination per Numba docs.
One thing to be mindful of: when a JIT-compiled function calls helpers imported from a different module, a change in those helpers won't invalidate the cache — stale compiled code may be silently used. Since _phs_optimized_scalar lives in the same file, this is not a concern here; however, it's worth documenting this assumption (or adding a # keep in same module comment) so future refactors don't inadvertently split them.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@pythermalcomfort/models/phs.py` around lines 832 - 896, The JIT cache can
serve stale compiled code if helper functions move modules—add a short comment
near _phs_optimized_array (or directly above/next to _phs_optimized_scalar)
stating that _phs_optimized_scalar must remain in the same module as
_phs_optimized_array to ensure Numba cache invalidation works correctly (e.g.,
"keep in same module to avoid Numba cache/stale compile issues"), so future
refactors don't split them; no functional change required.
Description
optimised speed PHS model
Summary by CodeRabbit
Release Notes – Version 3.9.1