Fixed the issue of Datetime filtering #119 #260
FedericoTartarini merged 15 commits into CenterForTheBuiltEnvironment:development from
Conversation
- Added Global filter UI
- Removed the original Apply filter in the pages
- Added grey fill to the heatmap
…equest Regulation
Important: Review skipped. Auto reviews are disabled on base/target branches other than the default branch. Please check the settings in the CodeRabbit UI.

Walkthrough

This pull request introduces a centralized global month/hour filtering system, replacing scattered per-tab local filter controls. It consolidates filter UI into a Tools Menu, updates callback signatures across eight page modules, renames element IDs for consistency, refactors chart rendering to support filtered-data visualization, and updates CSS assets.

Changes
Sequence Diagram

```mermaid
sequenceDiagram
    participant User as User
    participant UI as Tools Filter UI
    participant GlobalStore as Global Filter Store
    participant Pages as Page Callbacks
    participant Charts as Chart Rendering
    User->>UI: Adjust month/hour/invert settings
    UI->>UI: Local slider/switch state updates
    User->>UI: Click Apply Filter
    UI->>GlobalStore: update_global_filter_state()
    GlobalStore->>GlobalStore: Store filter config
    Pages->>GlobalStore: Get filter state on page input change
    GlobalStore-->>Pages: Return current filter config
    alt Filter Active
        Pages->>Pages: apply_global_month_hour_filter(df)
        Pages->>Pages: Mark filtered rows with _is_filtered
        Pages->>Charts: Pass filtered df
    else Filter Inactive
        Pages->>Pages: Use default month/hour ranges (1-12, 0-24)
        Pages->>Charts: Pass original df
    end
    Charts->>Charts: Check for _is_filtered marker
    alt Has Filtered Marker
        Charts->>Charts: Render two-trace heatmap
        Charts->>Charts: Trace 1: filtered data (grayscale)
        Charts->>Charts: Trace 2: unfiltered data (full color)
    else No Marker
        Charts->>Charts: Render single-trace heatmap
    end
    Charts-->>UI: Display updated visualization
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes

Areas requiring extra attention:
Possibly related PRs
Poem
Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.

@coderabbitai review

✅ Actions performed: Review triggered.
Actionable comments posted: 5
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (8)
pages/lib/charts_sun.py (1)
45-66: customdata length mismatch with x/y (can break hover or raise errors)

customdata is built from all rows in the month, while x/y are hourly medians (24 points). Make customdata length match x/y by repeating the month name for the 24 points.
```diff
@@
     for month_num in range(1, 13):
         col_idx = month_num
         # We only need legend entries for the first pair, since the others repeat.
         is_first = col_idx == 1
@@
-        fig.add_trace(
+        # subset once to get hourly x-values and reuse length for customdata
+        g_x = g_h_rad_month_ave.loc[
+            g_h_rad_month_ave[Variables.MONTH.col_name] == month_num,
+            Variables.HOUR.col_name,
+        ]
+        fig.add_trace(
             go.Scatter(
-                x=g_h_rad_month_ave.loc[
-                    g_h_rad_month_ave[Variables.MONTH.col_name] == month_num,
-                    Variables.HOUR.col_name,
-                ],
+                x=g_x,
                 y=g_h_rad_month_ave.loc[
                     g_h_rad_month_ave[Variables.MONTH.col_name] == month_num,
                     Variables.GLOB_HOR_RAD.col_name,
                 ],
@@
-                customdata=epw_df.loc[
-                    epw_df[Variables.MONTH.col_name] == month_num,
-                    Variables.MONTH_NAMES.col_name,
-                ],
+                customdata=np.repeat(month_lst[month_num - 1], g_x.shape[0]),
@@
+        # subset once to get hourly x-values and reuse length for customdata
+        d_x = dif_h_rad_month_ave.loc[
+            dif_h_rad_month_ave[Variables.MONTH.col_name] == month_num,
+            Variables.HOUR.col_name,
+        ]
         fig.add_trace(
             go.Scatter(
-                x=dif_h_rad_month_ave.loc[
-                    dif_h_rad_month_ave[Variables.MONTH.col_name] == month_num,
-                    Variables.HOUR.col_name,
-                ],
+                x=d_x,
                 y=dif_h_rad_month_ave.loc[
                     dif_h_rad_month_ave[Variables.MONTH.col_name] == month_num,
                     Variables.DIF_HOR_RAD.col_name,
                 ],
@@
-                customdata=epw_df.loc[
-                    epw_df[Variables.MONTH.col_name] == month_num,
-                    Variables.MONTH_NAMES.col_name,
-                ],
+                customdata=np.repeat(month_lst[month_num - 1], d_x.shape[0]),
```

Also applies to: 50-70, 84-85, 90-107, 121-125
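The fix above hinges on Plotly's rule that customdata must have one entry per plotted point. A minimal sketch of the repeat trick, with a hypothetical month label (the real code reads names from month_lst):

```python
import numpy as np

# x-values: 24 hourly medians for one month's trace
hours = np.arange(24)

# Hypothetical label for this trace; the app would use month_lst[month_num - 1].
month_label = "Jan"

# Repeat the label once per point so customdata aligns with x/y.
customdata = np.repeat(month_label, hours.shape[0])

assert customdata.shape[0] == hours.shape[0]  # lengths now match
print(customdata.shape[0])
```

With a mismatched length, Plotly silently truncates or misaligns hover entries, which is exactly the failure mode the comment describes.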
pages/lib/template_graphs.py (3)
531-535: Bug: margin set to None due to dict.update() misuse

dict.update() returns None; Plotly receives margin=None. Use a merged dict.
Apply this diff:
```diff
-    fig.update_layout(
-        template=template,
-        title=title,
-        margin=tight_margins.copy().update({"t": 55}),
-        yaxis_nticks=13,
-    )
+    fig.update_layout(
+        template=template,
+        title=title,
+        margin={**tight_margins, "t": 55},
+        yaxis_nticks=13,
+    )
```
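The pitfall is easy to reproduce in isolation. A minimal sketch (margin values are made up):

```python
tight_margins = {"l": 20, "r": 20, "b": 20, "t": 20}

# Bug: dict.update() mutates in place and returns None,
# so this whole expression evaluates to None, not a dict.
broken = tight_margins.copy().update({"t": 55})
assert broken is None

# Fix: build a merged dict; the later key wins.
merged = {**tight_margins, "t": 55}
assert merged == {"l": 20, "r": 20, "b": 20, "t": 55}
print(merged["t"])
```

Passing the `None` result as `margin=` makes Plotly fall back to its default margins, which is why the override silently never takes effect.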
923-926: X-axis ticks/text mismatch and mislabeled axis
- tickvals uses len(available_months) but ticktext is 12 items. Mismatch causes labeling errors.
- Title says "Day" but the axis is months.
Apply this diff:
```diff
-    # Get available months from filtered data
-    available_months = sorted(new_df[Variables.MONTH.col_name].unique())
+    # Always show 12 months on x-axis, even if some months have zero bars
+    available_months = list(range(12))  # positions 0..11
     fig.update_xaxes(
-        dict(
-            tickmode="array",
-            tickvals=np.arange(0, len(available_months), 1),
-            ticktext=month_lst,
-        ),
-        title_text="Day",
+        dict(
+            tickmode="array",
+            tickvals=np.arange(0, 12, 1),
+            ticktext=month_lst,
+        ),
+        title_text="Month",
         showline=True,
         linewidth=1,
         linecolor="black",
         mirror=True,
     )
```

Also applies to: 939-945
1071-1079: Typo in title: max value missing and unit spacing

Second bound repeats min_val; unit lacks a space.
Apply this diff:
```diff
-    + " and "
-    + str(min_val)
-    + filter_unit
+    + " and "
+    + str(max_val)
+    + " "
+    + filter_unit
```

pages/summary.py (2)
269-277: Wrong output property and payload type for warning alert

Output targets "is-open" on a Stack; function returns a component/None. Change output to children and return alert component accordingly.
Apply this diff:
```diff
-    [
-        Output(ElementIds.DEGREE_DAYS_CHART_WRAPPER, "children"),
-        Output(ElementIds.WARNING_CDD_HIGHER_HDD, "is-open"),
-    ],
+    [
+        Output(ElementIds.DEGREE_DAYS_CHART_WRAPPER, "children"),
+        Output(ElementIds.WARNING_CDD_HIGHER_HDD, "children"),
+    ],
@@
     alert_children = (
         dmc.Alert(
             "WARNING: Invalid Results! The CDD setpoint should be higher than the HDD setpoint!",
             color="yellow",
             variant="filled",
             title="Warning",
             withCloseButton=True,
         )
-        if warning_setpoint
-        else None
+        if warning_setpoint else None
     )
     return chart, alert_children
```

And keep ElementIds.WARNING_CDD_HIGHER_HDD as a container (Stack) as you already do.
Also applies to: 287-290, 367-379
314-331: Potential x/y length mismatch after filtering months

months = df[month_name].unique() may be < 12; arrays are length-12. Use fixed month labels.
Apply this diff:
```diff
-    months = df[Variables.MONTH_NAMES.col_name].unique()
+    from pages.lib.global_scheme import month_lst
+    months = month_lst
```

If you prefer to show only filtered months, also slice hdd_array/cdd_array to the selected months.
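The mismatch is easy to see with a toy frame. A minimal sketch (column name and data are illustrative, not from the app):

```python
import pandas as pd

# After global filtering, some months disappear from the data entirely.
df = pd.DataFrame({"month_names": ["Jan", "Jan", "Mar"]})

# unique() then returns fewer than 12 labels...
months = df["month_names"].unique()
assert len(months) == 2

# ...while the plotted hdd/cdd arrays are still length 12,
# so zip-like pairing of x labels and y values goes out of sync.
month_lst = ["Jan", "Feb", "Mar", "Apr", "May", "Jun",
             "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"]
assert len(month_lst) == 12  # fixed label list keeps x and y aligned
```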
Also applies to: 347-355
pages/t_rh.py (1)
217-261: RH heatmap passes wrong columns; KeyError likely

The RH branch slices df with DBT only, then asks heatmap() to use RH. Include the RH column and handle original column per selected var.
Apply this diff:
```diff
 def update_heatmap(_, global_local, dd_value, global_filter_data, df, meta, si_ip):
@@
     base_columns = [
         Variables.HOUR.col_name,
         Variables.UTC_TIME.col_name,
         Variables.MONTH_NAMES.col_name,
         Variables.DAY.col_name,
     ]
     if "_is_filtered" in df.columns:
         base_columns.append("_is_filtered")
-    if dd_value == dropdown_names[var_to_plot[0]]:
-        if f"_{Variables.DBT.col_name}_original" in df.columns:
-            base_columns.append(f"_{Variables.DBT.col_name}_original")
+    # Determine target variable and optional original column
+    target_var = (
+        Variables.DBT.col_name
+        if dd_value == dropdown_names[var_to_plot[0]]
+        else Variables.RH.col_name
+    )
+    orig_col = f"_{target_var}_original"
+    if orig_col in df.columns:
+        base_columns.append(orig_col)
+
+    if target_var == Variables.DBT.col_name:
         units = generate_units_degree(si_ip)
         return dcc.Graph(
             config=generate_chart_name(
                 TabNames.DRY_BULB_TEMPERATURE_HEATMAP, meta, units
             ),
             figure=heatmap(
-                df[[Variables.DBT.col_name] + base_columns],
-                Variables.DBT.col_name,
+                df[[target_var] + base_columns],
+                target_var,
                 global_local,
                 si_ip,
             ),
         )
     else:
-        if f"_{Variables.DBT.col_name}_original" in df.columns:
-            base_columns.append(f"_{Variables.DBT.col_name}_original")
         units = generate_units(si_ip)
         return dcc.Graph(
             config=generate_chart_name(TabNames.RELATIVE_HUMIDITY_HEATMAP, meta, units),
             figure=heatmap(
-                df[[Variables.DBT.col_name] + base_columns],
-                Variables.RH.col_name,
+                df[[target_var] + base_columns],
+                target_var,
                 global_local,
                 si_ip,
             ),
         )
```

pages/natural_ventilation.py (1)
340-341: Bug: dict.update returns None; margin is not applied.

tight_margins.copy().update(...) returns None, so Plotly gets margin=None. Use a two-step assign.
```diff
-    fig.update_layout(
-        template=template,
-        title=title,
-        yaxis_nticks=13,
-        yaxis=dict(range=(0, 24)),
-        margin=tight_margins.copy().update({"t": 55}),
-    )
+    margins = tight_margins.copy()
+    margins.update({"t": 55})
+    fig.update_layout(
+        template=template,
+        title=title,
+        yaxis_nticks=13,
+        yaxis=dict(range=(0, 24)),
+        margin=margins,
+    )
```
🧹 Nitpick comments (16)
tests/node/cypress/e2e/spec.cy.js (1)
103-103: Make the assertion resilient to copy changes

Match the stable part only (or use a data attribute) to reduce brittleness.
```diff
-cy.contains('The Best Weather Condition is: UTCI No Sun No Wind Categories');
+cy.contains(/UTCI No Sun No Wind Categories/i);
```

Optionally: add a data-cy on the label and assert with cy.get('[data-cy=best-weather]').should('contain', 'UTCI No Sun No Wind Categories').
docs/README.md (1)
37-42: Fix capitalization; consider link consistency
- “Wenshu lyu” → “Wenshu Lyu”.
- Optional: add profile links for all new names for consistency with earlier bullets.
```diff
-* Wenshu lyu: Coding, code maintenance and review
+* Wenshu Lyu: Coding, code maintenance and review
```

docs/contributing/contributing.md (3)
80-99: Choose one formatter to avoid churn

You recommend Ruff "format" and also Black below. That causes needless diffs. Use Ruff as the single formatter, or drop "ruff format" and keep Black—pick one and document it.
```diff
-We use ruff to enforce the code style and code formatting. You can run it with:
+We use Ruff for linting and formatting. You can run it with:
@@
 pipenv run ruff check .
 pipenv run ruff format .
```
100-111: Remove Black section (if Ruff is the formatter of record)

Avoid dual formatters. If you keep Ruff formatting, delete the Black installation/usage steps.
````diff
-Install Black:
-
-```bash
-pipenv install black
-```
-
-Format your code before committing:
-
-```bash
-black .
-```
````
189-196: Standardize the reviewer mention

Use the actual bot handle "@coderabbitai" for automated reviews.
```diff
-- After submitting a Pull Request (PR), please @Coderabbit for review.
-  Check all improvement suggestions provided by Coderabbit before requesting a final review.
+- After submitting a Pull Request (PR), please @coderabbitai for review.
+  Check all improvement suggestions provided by CodeRabbit before requesting a final review.
```
571-579: Inconsistent y-axis offset vs heatmap_with_filter

heatmap_with_filter centers hour labels with y = hour - 0.5, but heatmap() does not. Standardize for consistent alignment.
Apply this diff:
```diff
-    y=df[Variables.HOUR.col_name],
+    y=df[Variables.HOUR.col_name] - 0.5,
 ...
-    y=df[Variables.HOUR.col_name],
+    y=df[Variables.HOUR.col_name] - 0.5,
 ...
-    y=df[Variables.HOUR.col_name],
+    y=df[Variables.HOUR.col_name] - 0.5,
```

Also applies to: 599-607, 624-632
479-482: Colorbar title logic differs between heatmap variants

heatmap_with_filter hides colorbar title for categorical vars, heatmap() always shows var_unit. Unify to avoid inconsistent UX.
Suggest:
```diff
-    colorbar=dict(title=var_unit),
+    colorbar=(dict(title="") if "_categories" in var else dict(title=var_unit)),
```

Also applies to: 620-621
pages/psy-chart.py (1)
187-209: Consider extracting the common global filter pattern.

This global filter logic (lines 187-209) is duplicated across multiple files (sun.py, outdoor.py, wind.py). Consider extracting this into a shared utility function to reduce duplication and ensure consistency.
Example refactor in pages/lib/utils.py:

```python
def apply_and_extract_global_filter(df, global_filter_data, target_columns=None):
    """Apply global filter and return filtered df with time range parameters."""
    if not global_filter_data or not global_filter_data.get("filter_active", False):
        return df, 1, 12, 0, 24
    from pages.lib.layout import apply_global_month_hour_filter, get_global_filter_state
    df = apply_global_month_hour_filter(df, global_filter_data, target_columns)
    filter_state = get_global_filter_state(global_filter_data)
    from pages.lib.utils import determine_month_and_hour_filter
    start_month, end_month, start_hour, end_hour = determine_month_and_hour_filter(
        filter_state["month_range"],
        filter_state["hour_range"],
        filter_state["invert_month"],
        filter_state["invert_hour"],
    )
    return df, start_month, end_month, start_hour, end_hour
```

Then simplify this code to:

```python
df, start_month, end_month, start_hour, end_hour = apply_and_extract_global_filter(
    df, global_filter_data
)
if not global_filter_data or not global_filter_data.get("filter_active", False):
    df = filter_df_by_month_and_hour(df, True, [1, 12], [0, 24], [], [], df.columns)
```
199-213: Clarify the _is_filtered row removal logic.

Lines 212-213 filter out rows marked as _is_filtered specifically for solar radiation calculations. This handling is unique to this callback and not present in other similar callbacks (sun_path_chart, daily, update_heatmap).

Consider adding a comment explaining why solar radiation calculations require excluding filtered rows while other visualizations do not:
```python
# Filter out the filtered rows for solar radiation calculations
# Solar radiation aggregations (monthly totals) should not include
# partially visible data points to maintain accuracy
if "_is_filtered" in df.columns:
    df = df[~df["_is_filtered"]]
```
180-194: Remove redundant string replacements.

Lines 190-191 replace "Sun" with "Sun" and "Wind" with "Wind", which are no-op transformations. These can be safely removed since lines 188-189 already handle the "noSun" → "No Sun" and "noWind" → "No Wind" transformations.
Apply this diff:
```diff
 display_name = display_name.replace("categories", "Categories")
 # Fix specific words that need spaces
 display_name = display_name.replace("noSun", "No Sun")
 display_name = display_name.replace("noWind", "No Wind")
-display_name = display_name.replace("Sun", "Sun")  # Keep Sun as is
-display_name = display_name.replace("Wind", "Wind")  # Keep Wind as is
 display_names.append(display_name)
```
349-369: Add clarifying comment for colorbar index logic.

Line 349 dynamically selects which data trace receives the colorbar configuration. Consider adding a comment explaining why this is necessary:
```python
# Select the appropriate trace for the colorbar
# When filtering is active, there may be a background trace at index 0
# and the heatmap at index 1; otherwise the heatmap is at index 0
colorbar_index = 1 if len(utci_stress_cat["data"]) > 1 else 0
```

pages/natural_ventilation.py (2)
195-205: Global filter wiring looks correct; consider hoisting imports.

Inside-callback imports of pages.lib.layout repeat across callbacks. If there's no circular import, move them to module scope to reduce overhead and improve readability. If cycles exist, keep as-is.
Also applies to: 212-219, 225-244
568-572: Defensive None-handling for checklist values.

enable_dew_point_data_filter assumes a list; Dash may pass None. Handle None to avoid TypeError.
```diff
-def enable_dew_point_data_filter(condensation_enabled):
-    if len(condensation_enabled) == 1:
+def enable_dew_point_data_filter(condensation_enabled):
+    if condensation_enabled and len(condensation_enabled) == 1:
         return True
     else:
         return False
```

pages/explorer.py (3)
374-388: Global filter integration for yearly chart looks correct; minor import note.

Inline import is fine if needed for cycles; otherwise consider module-level import for reuse.
Also applies to: 391-403
436-446: Daily/heatmap global filter path LGTM; remove duplicate docstring.

There are two identical docstrings in update_tab_heatmap. Drop one.
```diff
-def update_tab_heatmap(_, var, global_local, global_filter_data, df, meta, si_ip):
-    """Update the contents of tab size. Passing in the info from the dropdown and the general info."""
-    """Update the contents of tab size. Passing in the info from the dropdown and the general info."""
+def update_tab_heatmap(_, var, global_local, global_filter_data, df, meta, si_ip):
+    """Update the contents of tab size. Passing in the info from the dropdown and the general info."""
```

Also applies to: 470-476
720-737: Table filtering: good use of _is_filtered guard.

This avoids empty columns after global filtering. Consider centralizing this pattern in a helper to reuse across modules.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (21)
- assets/banner.css (0 hunks)
- assets/fonts.css (0 hunks)
- assets/layout.css (0 hunks)
- docs/README.md (1 hunks)
- docs/contributing/contributing.md (3 hunks)
- pages/explorer.py (17 hunks)
- pages/lib/charts_data_explorer.py (1 hunks)
- pages/lib/charts_sun.py (5 hunks)
- pages/lib/global_element_ids.py (5 hunks)
- pages/lib/layout.py (8 hunks)
- pages/lib/template_graphs.py (6 hunks)
- pages/lib/utils.py (1 hunks)
- pages/natural_ventilation.py (6 hunks)
- pages/outdoor.py (6 hunks)
- pages/psy-chart.py (2 hunks)
- pages/select.py (6 hunks)
- pages/summary.py (4 hunks)
- pages/sun.py (6 hunks)
- pages/t_rh.py (4 hunks)
- pages/wind.py (5 hunks)
- tests/node/cypress/e2e/spec.cy.js (1 hunks)
💤 Files with no reviewable changes (3)
- assets/layout.css
- assets/banner.css
- assets/fonts.css
🔇 Additional comments (21)
docs/contributing/contributing.md (1)
112-117: LGTM: concise "Code Simplicity" guidance

pages/lib/charts_data_explorer.py (1)
66-76: No action required—pipeline correctly sets _is_filtered and preserves original values

Verification confirms the upstream pipeline (pages/lib/layout.py) properly handles both requirements:
- df["_is_filtered"] is set at line 651
- Original values are preserved via f"_{target_col}_original" at line 654, before time_filtering mutations occur at line 656
The consuming code in charts_data_explorer.py (lines 70–72) includes defensive fallback logic: if the original column doesn't exist, it uses the current column value. This pattern is consistent across template_graphs.py as well, ensuring the grey overlay renders reliably.
pages/lib/global_element_ids.py (1)
77-93: ID renames and Tools IDs look good; confirm downstream updates

The enum changes are clear and consistent. Please ensure all references across pages/tests are updated and consider keeping temporary aliases if external docs/scripts depend on old IDs.
Also applies to: 114-116, 137-138, 172-173, 232-244
pages/lib/utils.py (1)
271-276: Invert flags now treated as truthy; verify all callers

Truthiness works with []/["invert"]/True, and guards skip full ranges. Confirm all UIs pass expected values and add tests for boundary cases (full range, single month/hour, wrap).
Suggest minimal tests for:
- month=[1,12], invert=True -> no swap
- hour=[0,24], invert=True -> no swap
- month=[3,6], invert=True -> swapped
- hour=[20,6], invert=False -> wrap behavior in time_filtering
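The truthiness claim above is easy to pin down in a few lines. A minimal sketch with a hypothetical normalizer (the helper name is illustrative; the app may inline bool()):

```python
def normalize_invert(flag):
    # Dash checklists yield [] (unchecked) or ["invert"] (checked);
    # stored state may already be a plain bool or None.
    # bool() maps all of these to the intended True/False.
    return bool(flag)

assert normalize_invert([]) is False          # unchecked checklist
assert normalize_invert(None) is False        # missing store value
assert normalize_invert(["invert"]) is True   # checked checklist
assert normalize_invert(True) is True         # already a bool
print("ok")
```

This is why the guard logic can treat `["invert"]` and `True` interchangeably without branching on type.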
pages/select.py (1)
120-130: LGTM: Initializes global filter store consistently

Default payload is consistent across branches; allow_duplicate usage is appropriate.
Also applies to: 161-168, 178-185, 209-216, 224-231, 241-247
pages/t_rh.py (1)
90-116: Global filter integration looks good; ensure apply_global_month_hour_filter supplies the _&lt;col&gt;_original columns as expected

You conditionally add original columns. Please confirm the filter utility creates the corresponding original only for requested target columns.
Also applies to: 152-158, 218-224, 276-286
pages/psy-chart.py (1)
156-186: LGTM! Clean integration of global filter store.

The callback signature has been simplified by replacing local filter parameters with the centralized global_filter_data input from the TOOLS_GLOBAL_FILTER_STORE. This aligns well with the PR's goal of implementing global filtering.
126-135: LGTM! Consistent element ID naming.

The element ID updates from TAB4_* to SUN_* improve code clarity and maintainability by using descriptive names instead of generic tab numbers.
260-278: LGTM! Well-structured target column handling.

The conditional logic properly builds the target columns list for filtering, including all sun position-related columns and conditionally adding the selected variable when it's not "None".
pages/lib/layout.py (5)
42-148: LGTM! Well-structured filter UI components.

The filter UI provides intuitive controls for month and hour range selection with proper constraints (allowCross=False) and invert toggles. The styling and layout are clean and user-friendly.
431-441: LGTM! Appropriate default filter state.

The global filter store initializes with sensible defaults (full year and day ranges, no inversions, filter inactive) and uses session storage to persist state across page navigation.
553-585: LGTM! Correct filter state update logic.

The callback properly updates the global filter state when the Apply button is clicked, handling None values and setting appropriate defaults.
588-604: LGTM! Clean state normalization.

The function correctly normalizes the filter store data, converting the array-based invert flags to booleans and providing sensible defaults when data is missing.
668-689: LGTM! Proper UI synchronization.

The callback correctly synchronizes the slider values and invert switches with the global filter store state, ensuring the UI reflects the current filter settings.
pages/wind.py (2)
241-266: LGTM! Correct handling of filter skip flag.

The skip_time_filter flag at lines 259-260 correctly prevents double-filtering by telling wind_rose to skip its internal time filtering when the global filter has already been applied.
327-442: Verify intentional exclusion of global filter from seasonal graphs.

The update_seasonal_graphs callback doesn't integrate the global filter, while most other wind visualizations do (annual wind rose, wind speed, wind direction, and daily graphs all support global filtering).

Is this intentional? If seasonal graphs should remain fixed to their predefined time ranges (winter, spring, summer, fall), consider adding a comment explaining this design decision. If they should support global filtering, consider adding the integration.
pages/natural_ventilation.py (1)
101-101: Two-column grid change LGTM.Inputs compactness improves; no functional risk.
pages/explorer.py (4)
119-121: Comment-only change acknowledged.

Global month/hour controls moved to sidebar; no code risk.
140-141: Grid/container ID updates LGTM.

Consistent with sidebar migration and ID renames.
Also applies to: 214-215, 250-251
528-553: Time filter info derivation is sound.

Passing [True, month, hour] built from global state aligns with custom_heatmap expectations.
Please confirm custom_heatmap treats time_filter_info=True as “restrict to provided ranges” and won’t double-filter given df is pre-filtered.
654-661: Fallback to local filtering OK; validate filter_df_by_month_and_hour signature.

You pass empty lists for invert flags and df.columns as last arg. Ensure this matches the function's current signature to avoid silently ignoring filters.
```python
        (df[Variables.MONTH.col_name], df[Variables.DAY.col_name]),
        axis=-1,
    ),
    hovertemplate=(
        "<b>Filtered Data</b><br>"
        + "Month: %{customdata[0]}<br>Day: %{customdata[1]}<br>Hour:"
        " %{y}:00<br>"
    ),
    name="filtered",
)
```
Use month names in hover for consistency
Else-branch shows month names; new traces show numeric months. Align to MONTH_NAMES.
```diff
-    customdata=np.stack(
-        (df[Variables.MONTH.col_name], df[Variables.DAY.col_name]),
-        axis=-1,
-    ),
+    customdata=np.stack(
+        (df[Variables.MONTH_NAMES.col_name], df[Variables.DAY.col_name]),
+        axis=-1,
+    ),
@@
-    customdata=np.stack(
-        (df[Variables.MONTH.col_name], df[Variables.DAY.col_name]), axis=-1
-    ),
+    customdata=np.stack(
+        (df[Variables.MONTH_NAMES.col_name], df[Variables.DAY.col_name]),
+        axis=-1,
+    ),
```

Also applies to: 116-131
🤖 Prompt for AI Agents
In pages/lib/charts_data_explorer.py around lines 90-99 (and similarly at 116-131), the hovertemplate for the new traces uses numeric month values from df[Variables.MONTH.col_name], causing inconsistency with the else-branch which shows month names; replace the numeric month with a mapped month name by converting the month column to the MONTH_NAMES lookup (e.g., map df[Variables.MONTH.col_name] through MONTH_NAMES or use .apply(lambda m: MONTH_NAMES[m-1]) before passing to customdata) and ensure the hovertemplate still references the same customdata index so hover shows the month name rather than the numeric month.
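The suggested mapping can be sketched in isolation. A minimal example, assuming a plain 12-name lookup list (the real app has its own MONTH_NAMES source) and illustrative column names:

```python
import numpy as np
import pandas as pd

# Hypothetical lookup; the app would import its own month-name list.
MONTH_NAMES = ["Jan", "Feb", "Mar", "Apr", "May", "Jun",
               "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"]

df = pd.DataFrame({"month": [1, 3, 12], "day": [5, 17, 25]})

# Map numeric months to names before stacking into customdata,
# so the hover shows "Mar" instead of "3" at the same customdata index.
month_names = df["month"].apply(lambda m: MONTH_NAMES[m - 1])
customdata = np.stack((month_names, df["day"]), axis=-1)
print(customdata[1][0])
```

Note that np.stack coerces the mixed columns to a common string dtype here, which is fine for hover text but worth knowing if the values are reused numerically.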
```python
def apply_global_month_hour_filter(df, filter_store_data, target_columns=None):
    filter_state = get_global_filter_state(filter_store_data)

    if not filter_state["filter_active"]:
        df_copy = df.copy()
        df_copy["_is_filtered"] = False
        return df_copy

    month_range = filter_state["month_range"]
    hour_range = filter_state["hour_range"]
    invert_month = filter_state["invert_month"]
    invert_hour = filter_state["invert_hour"]

    start_month, end_month, start_hour, end_hour = determine_month_and_hour_filter(
        month_range, hour_range, invert_month, invert_hour
    )

    df_copy = df.copy()

    if target_columns is None:
        target_columns = [Variables.DBT.col_name]
    elif isinstance(target_columns, str):
        target_columns = [target_columns]

    month_mask = None
    if start_month <= end_month:
        month_mask = (df_copy[Variables.MONTH.col_name] < start_month) | (
            df_copy[Variables.MONTH.col_name] > end_month
        )
    else:
        month_mask = (df_copy[Variables.MONTH.col_name] >= end_month) & (
            df_copy[Variables.MONTH.col_name] <= start_month
        )

    hour_mask = None
    if start_hour <= end_hour:
        hour_mask = (df_copy[Variables.HOUR.col_name] < start_hour) | (
            df_copy[Variables.HOUR.col_name] > end_hour
        )
    else:
        hour_mask = (df_copy[Variables.HOUR.col_name] >= end_hour) & (
            df_copy[Variables.HOUR.col_name] <= start_hour
        )

    df_copy["_is_filtered"] = month_mask | hour_mask

    for target_col in target_columns:
        df_copy[f"_{target_col}_original"] = df_copy[target_col]

        from pages.lib.template_graphs import time_filtering

        time_filtering(
            df_copy, start_month, end_month, Variables.MONTH.col_name, target_col
        )
        time_filtering(
            df_copy, start_hour, end_hour, Variables.MONTH.col_name, target_col
        )

    return df_copy
```
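The exclusion-mask logic in this function can be checked on its own. A minimal sketch mirroring the month branch above (rows outside the kept range get `_is_filtered=True`; a start &gt; end pair marks the middle band instead):

```python
import pandas as pd

def month_exclusion_mask(months, start_month, end_month):
    # Mirrors the function under review: True means "excluded by the filter".
    if start_month <= end_month:
        return (months < start_month) | (months > end_month)
    # inverted/wrapped range: exclude the middle band instead
    return (months >= end_month) & (months <= start_month)

months = pd.Series([1, 4, 7, 11])

# Keep Apr..Sep: Jan and Nov fall outside and are marked excluded.
print(month_exclusion_mask(months, 4, 9).tolist())  # [True, False, False, True]
```

Writing the mask as "excluded" rather than "kept" is what lets the charts grey out the excluded cells instead of dropping rows entirely.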
Critical bug: Wrong column used for hour filtering.
Line 662 incorrectly uses Variables.MONTH.col_name for hour filtering instead of Variables.HOUR.col_name. This will cause the hour-based filtering to fail or produce incorrect results.
Apply this fix:

```diff
     time_filtering(
         df_copy, start_month, end_month, Variables.MONTH.col_name, target_col
     )
     time_filtering(
-        df_copy, start_hour, end_hour, Variables.MONTH.col_name, target_col
+        df_copy, start_hour, end_hour, Variables.HOUR.col_name, target_col
     )
```
)📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
def apply_global_month_hour_filter(df, filter_store_data, target_columns=None):
    filter_state = get_global_filter_state(filter_store_data)

    if not filter_state["filter_active"]:
        df_copy = df.copy()
        df_copy["_is_filtered"] = False
        return df_copy

    month_range = filter_state["month_range"]
    hour_range = filter_state["hour_range"]
    invert_month = filter_state["invert_month"]
    invert_hour = filter_state["invert_hour"]

    start_month, end_month, start_hour, end_hour = determine_month_and_hour_filter(
        month_range, hour_range, invert_month, invert_hour
    )

    df_copy = df.copy()

    if target_columns is None:
        target_columns = [Variables.DBT.col_name]
    elif isinstance(target_columns, str):
        target_columns = [target_columns]

    month_mask = None
    if start_month <= end_month:
        month_mask = (df_copy[Variables.MONTH.col_name] < start_month) | (
            df_copy[Variables.MONTH.col_name] > end_month
        )
    else:
        month_mask = (df_copy[Variables.MONTH.col_name] >= end_month) & (
            df_copy[Variables.MONTH.col_name] <= start_month
        )

    hour_mask = None
    if start_hour <= end_hour:
        hour_mask = (df_copy[Variables.HOUR.col_name] < start_hour) | (
            df_copy[Variables.HOUR.col_name] > end_hour
        )
    else:
        hour_mask = (df_copy[Variables.HOUR.col_name] >= end_hour) & (
            df_copy[Variables.HOUR.col_name] <= start_hour
        )

    df_copy["_is_filtered"] = month_mask | hour_mask

    for target_col in target_columns:
        df_copy[f"_{target_col}_original"] = df_copy[target_col]

        from pages.lib.template_graphs import time_filtering

        time_filtering(
            df_copy, start_month, end_month, Variables.MONTH.col_name, target_col
        )
        time_filtering(
            df_copy, start_hour, end_hour, Variables.HOUR.col_name, target_col
        )

    return df_copy
```
🤖 Prompt for AI Agents
In pages/lib/layout.py around lines 607 to 665, the second call to
time_filtering (line ~662) wrongly passes Variables.MONTH.col_name for hour
filtering; update that call to pass Variables.HOUR.col_name so the hour-based
filtering uses the hour column (i.e., replace the month column argument with
Variables.HOUR.col_name in the second time_filtering invocation).
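The wrap-around mask construction above (flagging rows outside the selected range, with an inverted branch when start > end) can be sketched standalone. This is a minimal illustration using a generic column name rather than the project's `Variables` enum:

```python
import pandas as pd


def month_exclusion_mask(months: pd.Series, start: int, end: int) -> pd.Series:
    """Return True for rows that fall OUTSIDE the selected month range.

    A normal range (start <= end) excludes months before start or after end;
    an inverted range (start > end) excludes the contiguous block between
    end and start, mirroring the wrap-around branch in the function above.
    """
    if start <= end:
        return (months < start) | (months > end)
    return (months >= end) & (months <= start)


months = pd.Series([1, 3, 6, 9, 12])
# Normal range: keep Mar..Sep, so Jan and Dec are masked out
print(month_exclusion_mask(months, 3, 9).tolist())   # [True, False, False, False, True]
# Inverted range (start=10, end=4): months 4..10 are masked out instead
print(month_exclusion_mask(months, 10, 4).tolist())  # [False, False, True, True, False]
```

The resulting boolean Series is exactly what gets stored in the `_is_filtered` column (OR-ed with the hour mask) to drive the grey heatmap overlay.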
if not skip_time_filter:
    start_month = month[0]
    end_month = month[1]
    start_hour = hour[0]
    end_hour = hour[1]
    if start_month <= end_month:
        df = df.loc[
            (df[Variables.MONTH.col_name] >= start_month)
            & (df[Variables.MONTH.col_name] <= end_month)
        ]
    else:
        df = df.loc[
            (df[Variables.MONTH.col_name] <= end_month)
            | (df[Variables.MONTH.col_name] >= start_month)
        ]
    if start_hour <= end_hour:
        df = df.loc[
            (df[Variables.HOUR.col_name] > start_hour)
            & (df[Variables.HOUR.col_name] <= end_hour)
        ]
    else:
        df = df.loc[
            (df[Variables.HOUR.col_name] <= end_hour)
            | (df[Variables.HOUR.col_name] >= start_hour)
        ]
Hour filter off-by-one excludes hour 0; make ranges half‑open
Use [start, end) semantics to include start_hour and exclude end_hour=24. Align wrap logic too.
Apply this diff:
- if start_hour <= end_hour:
- df = df.loc[
- (df[Variables.HOUR.col_name] > start_hour)
- & (df[Variables.HOUR.col_name] <= end_hour)
- ]
+ if start_hour <= end_hour:
+ df = df.loc[
+ (df[Variables.HOUR.col_name] >= start_hour)
+ & (df[Variables.HOUR.col_name] < end_hour)
+ ]
else:
- df = df.loc[
- (df[Variables.HOUR.col_name] <= end_hour)
- | (df[Variables.HOUR.col_name] >= start_hour)
- ]
+ df = df.loc[
+ (df[Variables.HOUR.col_name] < end_hour)
+ | (df[Variables.HOUR.col_name] >= start_hour)
+ ]🤖 Prompt for AI Agents
In pages/lib/template_graphs.py around lines 685 to 709, the hour filtering uses
(>) for start and (<=) for end which excludes hour 0 and uses closed intervals;
change to half‑open [start, end) semantics by replacing the hour comparisons:
when start_hour <= end_hour filter with (df[Variables.HOUR.col_name] >=
start_hour) & (df[Variables.HOUR.col_name] < end_hour), and when wrapping
(start_hour > end_hour) filter with (df[Variables.HOUR.col_name] >= start_hour)
| (df[Variables.HOUR.col_name] < end_hour); ensure this logic handles end_hour
== 24 correctly (24 treated as exclusive endpoint).
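The half-open [start, end) semantics recommended here can be verified on a toy frame. A minimal sketch, assuming hour values 0..23 and an illustrative column name `hour` (not the project's real column):

```python
import pandas as pd


def filter_hours(df: pd.DataFrame, start_hour: int, end_hour: int) -> pd.DataFrame:
    """Keep rows whose hour lies in the half-open interval [start_hour, end_hour).

    With 0..23 hour values, end_hour == 24 acts as an exclusive upper bound,
    so the full-day range (0, 24) keeps every row, including hour 0.
    """
    hours = df["hour"]
    if start_hour <= end_hour:
        mask = (hours >= start_hour) & (hours < end_hour)
    else:  # wrap past midnight, e.g. (22, 6) keeps 22..23 and 0..5
        mask = (hours >= start_hour) | (hours < end_hour)
    return df.loc[mask]


df = pd.DataFrame({"hour": [0, 5, 12, 23]})
print(filter_hours(df, 0, 24)["hour"].tolist())   # [0, 5, 12, 23]  hour 0 is kept
print(filter_hours(df, 22, 6)["hour"].tolist())   # [0, 5, 23]      wraps past midnight
```

With the original closed interval `(> start) & (<= end)`, the first call would silently drop hour 0, which is the off-by-one this review comment points out.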
if global_filter_data and global_filter_data.get("filter_active", False):
    from pages.lib.layout import (
        apply_global_month_hour_filter,
        get_global_filter_state,
    )

    df = apply_global_month_hour_filter(
        df, global_filter_data, Variables.NV_ALLOWED.col_name
    )

    filter_state = get_global_filter_state(global_filter_data)
    month_range = filter_state["month_range"]
    hour_range = filter_state["hour_range"]
    invert_month_global = filter_state["invert_month"]
    invert_hour_global = filter_state["invert_hour"]

    start_month, end_month, start_hour, end_hour = determine_month_and_hour_filter(
        month_range, hour_range, invert_month_global, invert_hour_global
    )
else:
    # Use default values when global filter is not active
    start_month, end_month, start_hour, end_hour = 1, 12, 0, 24
Critical: Month-aligned aggregation can mis-shape arrays and divide by zero.
n_hours_nv_allowed (after dropna) may have fewer months than tot_month_hours (12), causing shape mismatches for division and bar x/y. Also risk of division by zero for months with no selected hours.
Apply a month-indexed, aligned aggregation with safe division and aligned x labels:
- # this should be the total after filtering by time
- tot_month_hours = (
- df.groupby(df[Variables.UTC_TIME.col_name].dt.month)[
- Variables.NV_ALLOWED.col_name
- ]
- .sum()
- .values
- )
+ # Build stable month index [1..12]
+ month_index = np.arange(1, 13)
+ groups = df.groupby(df[Variables.UTC_TIME.col_name].dt.month)[
+ Variables.NV_ALLOWED.col_name
+ ]
+ # Total selected hours per month (after time filtering): count non-NaN
+ tot_month_hours = (
+ groups.count().reindex(month_index, fill_value=0).astype(float).values
+ )
@@
- n_hours_nv_allowed = (
- df.dropna(subset=Variables.NV_ALLOWED.col_name)
- .groupby(df[Variables.UTC_TIME.col_name].dt.month)[
- Variables.NV_ALLOWED.col_name
- ]
- .sum()
- .values
- )
+ # Allowed hours per month after DBT/DPT filters (0/1 where present)
+ n_hours_nv_allowed = (
+ df.groupby(df[Variables.UTC_TIME.col_name].dt.month)[
+ Variables.NV_ALLOWED.col_name
+ ]
+ .sum(min_count=1) # month of all-NaN -> NaN
+ .reindex(month_index, fill_value=0)
+ .astype(float)
+ .values
+ )
@@
- per_time_nv_allowed = np.round(100 * (n_hours_nv_allowed / tot_month_hours))
+ # Safe percentage; avoid division by zero
+ per_time_nv_allowed = np.zeros_like(n_hours_nv_allowed, dtype=float)
+ np.divide(
+ n_hours_nv_allowed * 100.0,
+ tot_month_hours,
+ out=per_time_nv_allowed,
+ where=tot_month_hours > 0,
+ )
+ per_time_nv_allowed = np.round(per_time_nv_allowed, 0)
@@
- go.Bar(
- x=df[Variables.MONTH_NAMES.col_name].unique(),
+ go.Bar(
+ x=[month_lst[i - 1] for i in month_index],
y=n_hours_nv_allowed,
@@
- trace1 = go.Bar(
- x=df[Variables.MONTH_NAMES.col_name].unique(),
+ trace1 = go.Bar(
+ x=[month_lst[i - 1] for i in month_index],
        y=per_time_nv_allowed,

Also fix the margin update here:
- fig.update_layout(
- template=template,
- title=title,
- barnorm="",
- dragmode=False,
- margin=tight_margins.copy().update({"t": 55}),
- )
+ margins = tight_margins.copy()
+ margins.update({"t": 55})
+ fig.update_layout(
+ template=template,
+ title=title,
+ barnorm="",
+ dragmode=False,
+ margin=margins,
+ )Also applies to: 448-477, 479-489, 500-521, 523-531, 532-538
🤖 Prompt for AI Agents
In pages/natural_ventilation.py around lines 425-447 (and similarly for ranges
448-477, 479-489, 500-521, 523-531, 532-538), the aggregation over
n_hours_nv_allowed can produce fewer-than-12 months (after dropna), causing
shape mismatches and potential divide-by-zero when computing ratios and
plotting; reindex all month-aggregated Series/DataFrames to a full 1..12 month
index before any arithmetic, replace missing months with 0 (fillna(0)), and
perform safe division by using numerator.where(denominator!=0, 0) /
denominator.replace(0, np.nan).fillna(1) or use numpy.divide with out and where
to avoid ZeroDivisionError; ensure x-label arrays come from the reindexed month
index so x and y lengths match; finally apply the same
reindex/fillna/safe-division treatment to the related blocks and update the
margin update to target the correct plot element (use
fig.update_layout(margin=dict(...)) on the figure object) so layout changes
apply.
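The reindex-and-safe-divide pattern the bot recommends can be shown in isolation. This sketch uses illustrative column names (`month`, `nv_allowed`), not the project's `Variables` enum:

```python
import numpy as np
import pandas as pd

# Toy data: after time filtering, observations exist for only three months
df = pd.DataFrame({"month": [1, 1, 3, 3, 3, 7], "nv_allowed": [1, 0, 1, 1, 0, 1]})

month_index = np.arange(1, 13)
groups = df.groupby("month")["nv_allowed"]

# Align both aggregates to a stable 12-month axis; missing months become 0,
# so the division and the bar-chart x/y arrays always have matching shapes
tot_month_hours = groups.count().reindex(month_index, fill_value=0).astype(float).values
n_hours_allowed = groups.sum().reindex(month_index, fill_value=0).astype(float).values

# Safe percentage: months with zero selected hours stay at 0 instead of dividing by zero
pct = np.zeros_like(n_hours_allowed)
np.divide(n_hours_allowed * 100.0, tot_month_hours, out=pct, where=tot_month_hours > 0)

print(pct.round().tolist())
# months 1, 3, 7 -> 50%, ~67%, 100%; every other month stays at 0
```

Without the `reindex`, the two `.values` arrays would have lengths 3 and 12 here, which is exactly the shape mismatch flagged in the review.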
@FedericoTartarini At present, regarding the modification suggestions provided by CodeRabbit, we believe they are all stylistic changes and will not have a large effect on the overall project, so we will not apply them for now. You can now review the filter PR.
FedericoTartarini left a comment
@LeoLuosifen amazing work, well done! I have reviewed the code and I have noticed the following issues while testing the application locally on my computer.
- climate summary - the climate profile violin plots are not filtered
- I like that you applied a gray filter to the heatmaps
- something very strange happens if I first filter by month, then by time: the chart on the sun page breaks and some of the gray mask is not shown. This error happens in all the heatmaps. If I apply the hour filter and then the month filter, this does not occur.
- psychrometric page - the layout is not nice, please use a simple grid
- psy chart - poor rendering on small screens
- nat ventilation - please use a better radio button for the filter “avoid condensation”
- outdoor comfort - please review my code and how I improved the layout at the top using a group
- there are many code duplications and complex code. Could you please try to simplify it.
- optimized duplicated code
- removed redundant stack
- fixed filter issues
@FedericoTartarini Hi, I made some modifications and optimizations; could you please review them and let me know if any changes are needed.
@LeoLuosifen could you please very carefully review CodeRabbit's comments. I would like an explanation of whether or not they are relevant. After that I will merge the pull request.
@FedericoTartarini Thanks for taking the time to review. In response to the modification suggestions from CodeRabbit, I have made the fixes.
… ventilation and psy-chart
Merged commit 7ad631e into CenterForTheBuiltEnvironment:development
Summary
This PR resolves the filter issue. Previously, filters applied only within their own page, and refreshing or re-entering a page reset the filter settings. This change implements global filtering, which prevents that issue and improves the user experience.
Change
Results