Fix the issue of code sanitisation (#159) and update contributing.md (#238)

LeoLuosifen wants to merge 38 commits into CenterForTheBuiltEnvironment:development from
Conversation
Walkthrough
Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    actor Developer
    participant GitHub as GitHub Actions
    participant Auth as google-github-actions/auth@v2
    participant GCloud as setup-gcloud@v1
    participant CloudBuild as gcloud builds submit
    Developer->>GitHub: Push to main
    GitHub->>Auth: Authenticate with GCP (credentials_json)
    Auth-->>GitHub: Access token
    GitHub->>GCloud: Setup Cloud SDK (project_id=heat-stress-scale)
    GitHub->>CloudBuild: gcloud builds submit<br/>--project=clima-316917<br/>--substitutions=...
    note right of CloudBuild: Build and deploy executed via substitutions
    CloudBuild-->>GitHub: Build status
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~75 minutes

Possibly related PRs
Actionable comments posted: 20
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (17)
assets/manifest.json (1)
469-490: Duplicate "display" key in manifest; keep a single authoritative entry.
JSON parsers will accept the last occurrence, but duplicate keys are brittle and can confuse tooling. Keep the first "display": "standalone" and drop the later duplicate.

```diff
   "dir": "ltr",
   "prefer_related_applications": false,
-  "display": "standalone",
   "display_override": [
     "window-controls-overlay",
     "standalone",
     "browser"
   ]
```

docs/documentation/tabs-explained/wind/how-to-read-a-wind-rose.md (1)
23-26: Correct seasonal month ranges (typo).
Typical meteorological seasons are: DJF, MAM, JJA, SON. The last bullet should be “September – November,” not “September – December.”

```diff
 * December - February;
 * March - May;
 * June - August;
-* September - December.
+* September - November.
```

pages/lib/template_graphs.py (2)
399-403: Bug: margin becomes None due to dict.update() return value.
dict.update() returns None; update_layout receives None for margin. Use a merged dict instead.

Apply:

```diff
-    fig.update_layout(
-        template=template,
-        title=title,
-        margin=tight_margins.copy().update({"t": 55}),
-        yaxis_nticks=13,
-    )
+    margin_with_title = {**tight_margins, "t": 55}
+    fig.update_layout(
+        template=template,
+        title=title,
+        margin=margin_with_title,
+        yaxis_nticks=13,
+    )
```

Repeat the same fix in thermal_stress_stacked_barchart (lines 678-683).
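For reference, the failure mode is plain-Python behavior, not Plotly-specific. A minimal sketch (the margin dict here is a hypothetical stand-in for `tight_margins`):

```python
tight_margins = {"l": 20, "r": 20, "b": 20}  # hypothetical margin dict

# dict.update() mutates in place and returns None, so chaining it off
# .copy() yields None instead of the merged dict.
broken = tight_margins.copy().update({"t": 55})
print(broken)  # None

# Dict unpacking builds a new merged dict and leaves the original intact.
merged = {**tight_margins, "t": 55}
print(merged["t"])       # 55
print(tight_margins)     # {'l': 20, 'r': 20, 'b': 20}
```

Passing `broken` to `update_layout(margin=...)` silently drops the margin, which is why the merged-dict form is preferred.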
678-683: Same margin bug as above in thermal_stress_stacked_barchart.

Apply:

```diff
-    fig.update_layout(
-        legend=dict(orientation="h", yanchor="bottom", y=1.05, xanchor="right", x=1),
-        barmode="stack",
-        dragmode=False,
-        title=title,
-        margin=tight_margins.copy().update({"t": 55}),
-    )
+    fig.update_layout(
+        legend=dict(orientation="h", yanchor="bottom", y=1.05, xanchor="right", x=1),
+        barmode="stack",
+        dragmode=False,
+        title=title,
+        margin={**tight_margins, "t": 55},
+    )
```

pages/psy-chart.py (2)
188-194: Guard against None and use NaN instead of None for masking.
- Users can clear the number inputs, making min_val or max_val None.
- Assigning None to a DataFrame row coerces dtypes to object; prefer np.nan.
Apply this refactor within the data filter block:
```diff
-    if data_filter:
-        if min_val <= max_val:
-            mask = (df[data_filter_var] < min_val) | (df[data_filter_var] > max_val)
-            df[mask] = None
-        else:
-            mask = (df[data_filter_var] >= max_val) & (df[data_filter_var] <= min_val)
-            df[mask] = None
+    if data_filter and min_val is not None and max_val is not None:
+        if min_val <= max_val:
+            mask = (df[data_filter_var] < min_val) | (df[data_filter_var] > max_val)
+        else:
+            mask = (df[data_filter_var] >= max_val) & (df[data_filter_var] <= min_val)
+        df.loc[mask, :] = np.nan
```
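Both pitfalls named above can be reproduced in isolation; a minimal sketch with toy data (not the real Clima frame):

```python
import numpy as np
import pandas as pd

# A cleared numeric input arrives as None, and Python 3 refuses to order
# None against a number, so the unguarded comparison raises TypeError.
min_val, max_val = None, 30
try:
    min_val <= max_val
    raised = False
except TypeError:
    raised = True
print(raised)  # True

# Masking with np.nan via .loc keeps the column numeric (float64),
# which is the behavior the refactor above relies on.
df = pd.DataFrame({"DBT": [10.0, 25.0, 40.0]})
mask = (df["DBT"] < 15) | (df["DBT"] > 35)
df.loc[mask, :] = np.nan
print(df["DBT"].dtype)         # float64
print(int(df["DBT"].isna().sum()))  # 2
```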
377-395: Hover template shows the variable name twice; missing unit.
When var == "None" the hover template appends the DBT name instead of the unit.

```diff
-            hovertemplate=mapping_dictionary[ColNames.DBT]["name"]
-            + ": %{x:.2f}"
-            + mapping_dictionary[ColNames.DBT]["name"],
+            hovertemplate=mapping_dictionary[ColNames.DBT]["name"]
+            + ": %{x:.2f}"
+            + mapping_dictionary[ColNames.DBT][si_ip]["unit"],
```

pages/t_rh.py (5)
27-28: Use internal keys instead of human-readable labels for var_to_plot.
Downstream callbacks and summary_table_tmp_rh_tab expect column keys (DBT/RH), not labels. Using labels causes df[[..., dd_value, ...]] to KeyError.

```diff
-var_to_plot = ["Dry bulb temperature", "Relative humidity"]
+var_to_plot = [ColNames.DBT, ColNames.RH]
```
41-45: Options/value should use internal keys; let dropdown_names provide labels.
This ensures dd_value is a column key, fixing update_table and simplifying conditionals.

```diff
-    dropdown(
-        id=ElementIds.ID_T_RH_DROPDOWN,
-        className="dropdown-t-rh",
-        options={var: dropdown_names[var] for var in var_to_plot},
-        value=dropdown_names[var_to_plot[0]],
-    ),
+    dropdown(
+        id=ElementIds.ID_T_RH_DROPDOWN,
+        className="dropdown-t-rh",
+        options={var: dropdown_names[var] for var in var_to_plot},
+        value=ColNames.DBT,
+    ),
```
114-129: Fix conditional and pass units as a named argument to generate_chart_name.
- Compare dd_value to ColNames.DBT (key), not a label.
- generate_chart_name(tab_name, meta=None, custom_inputs=None, units=None): passing units positionally assigns it to custom_inputs.
```diff
-def update_yearly_chart(_, global_local, dd_value, df, meta, si_ip):
-    if dd_value == dropdown_names[var_to_plot[0]]:
+def update_yearly_chart(_, global_local, dd_value, df, meta, si_ip):
+    if dd_value == ColNames.DBT:
         dbt_yearly = yearly_profile(df, ColNames.DBT, global_local, si_ip)
         dbt_yearly.update_layout(xaxis=dict(rangeslider=dict(visible=True)))
         units = generate_units_degree(si_ip)
         return dcc.Graph(
-            config=generate_chart_name("DryBulbTemperature_yearly", meta, units),
+            config=generate_chart_name("DryBulbTemperature_yearly", meta, units=units),
             figure=dbt_yearly,
         )
     else:
-        rh_yearly = yearly_profile(df, ColNames.RH, global_local, si_ip)
+        rh_yearly = yearly_profile(df, ColNames.RH, global_local, si_ip)
         rh_yearly.update_layout(xaxis=dict(rangeslider=dict(visible=True)))
         units = generate_units(si_ip)
         return dcc.Graph(
-            config=generate_chart_name("RelativeHumidity_yearly", meta, units),
+            config=generate_chart_name("RelativeHumidity_yearly", meta, units=units),
             figure=rh_yearly,
         )
```
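The positional-binding pitfall can be shown with a stub carrying the signature described above; the function body here is a hypothetical stand-in for the real helper, used only to illustrate argument binding:

```python
# Stub with the signature the review describes; the dict it returns is a
# hypothetical stand-in for the real config object.
def generate_chart_name(tab_name, meta=None, custom_inputs=None, units=None):
    return {"tab": tab_name, "custom_inputs": custom_inputs, "units": units}

# Passing units positionally binds it to custom_inputs...
positional = generate_chart_name("DryBulbTemperature_yearly", None, "degC")
print(positional["custom_inputs"], positional["units"])  # degC None

# ...while the keyword form lands in the intended parameter.
keyword = generate_chart_name("DryBulbTemperature_yearly", None, units="degC")
print(keyword["custom_inputs"], keyword["units"])  # None degC
```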
145-157: Align daily chart conditional and generate_chart_name usage.

```diff
-def update_daily(_, global_local, dd_value, df, meta, si_ip):
-    if dd_value == dropdown_names[var_to_plot[0]]:
+def update_daily(_, global_local, dd_value, df, meta, si_ip):
+    if dd_value == ColNames.DBT:
         units = generate_units_degree(si_ip)
         return dcc.Graph(
-            config=generate_chart_name("DryBulbTemperature_daily", meta, units),
+            config=generate_chart_name("DryBulbTemperature_daily", meta, units=units),
             figure=daily_profile(
                 df[[ColNames.DBT, ColNames.HOUR, ColNames.UTC_TIME, ColNames.MONTH_NAMES, ColNames.DAY, ColNames.MONTH]],
                 ColNames.DBT,
                 global_local,
                 si_ip,
             ),
         )
     else:
         units = generate_units(si_ip)
         return dcc.Graph(
-            config=generate_chart_name("RelativeHumidity_daily", meta, units),
+            config=generate_chart_name("RelativeHumidity_daily", meta, units=units),
             figure=daily_profile(
                 df[[ColNames.RH, ColNames.HOUR, ColNames.UTC_TIME, ColNames.MONTH_NAMES, ColNames.DAY, ColNames.MONTH]],
                 ColNames.RH,
                 global_local,
                 si_ip,
             ),
         )
```
183-206: Align heatmap conditional and generate_chart_name usage.

```diff
-def update_heatmap(_, global_local, dd_value, df, meta, si_ip):
+def update_heatmap(_, global_local, dd_value, df, meta, si_ip):
     """Update the contents of tab three. Passing in general info (df, meta)."""
-    if dd_value == dropdown_names[var_to_plot[0]]:
+    if dd_value == ColNames.DBT:
         units = generate_units_degree(si_ip)
         return dcc.Graph(
-            config=generate_chart_name("DryBulbTemperature_heatmap", meta, units),
+            config=generate_chart_name("DryBulbTemperature_heatmap", meta, units=units),
             figure=heatmap(
                 df[[ColNames.DBT, ColNames.HOUR, ColNames.UTC_TIME, ColNames.MONTH_NAMES, ColNames.DAY]],
                 ColNames.DBT,
                 global_local,
                 si_ip,
             ),
         )
     else:
         units = generate_units(si_ip)
         return dcc.Graph(
-            config=generate_chart_name("RelativeHumidity_heatmap", meta, units),
+            config=generate_chart_name("RelativeHumidity_heatmap", meta, units=units),
             figure=heatmap(
                 df[[ColNames.RH, ColNames.HOUR, ColNames.UTC_TIME, ColNames.MONTH_NAMES, ColNames.DAY]],
                 ColNames.RH,
                 global_local,
                 si_ip,
             ),
         )
```

pages/sun.py (1)
231-239: Pass units as a named parameter in generate_chart_name.
Currently units is the 3rd positional arg, which maps to custom_inputs. Use the named parameter to append units to the export filename correctly.

```diff
-    return dcc.Graph(
-        config=generate_chart_name(
-            "Global_and_Diffuse_Horizontal_Solar_Radiation", meta, units
-        ),
-        figure=monthly,
-    ), dcc.Graph(
-        config=generate_chart_name("cloud_cover", meta, units),
-        figure=cover,
-    )
+    return dcc.Graph(
+        config=generate_chart_name(
+            "Global_and_Diffuse_Horizontal_Solar_Radiation", meta, units=units
+        ),
+        figure=monthly,
+    ), dcc.Graph(
+        config=generate_chart_name("cloud_cover", meta, units=units),
+        figure=cover,
+    )
```

pages/select.py (1)
187-196: Swap error messages: wrong extension vs invalid format.
- If the filename isn’t EPW, use "wrong_extension".
- Parsing failures should return "invalid_format".
This keeps feedback accurate.
```diff
-        else:
-            return (
-                None,
-                None,
-                True,
-                messages_alert["invalid_format"],
-                "warning",
-            )
-    except (ValueError, IndexError, KeyError) as e:
-        print(f"Error parsing EPW file: {e}")
+        else:
+            return (
+                None,
+                None,
+                True,
+                messages_alert["wrong_extension"],
+                "warning",
+            )
+    except (ValueError, IndexError, KeyError) as e:
+        print(f"Error parsing EPW file: {e}")
         return (
             None,
             None,
             True,
-            messages_alert["wrong_extension"],
+            messages_alert["invalid_format"],
             "warning",
         )
```

pages/outdoor.py (1)
246-256: Avoid bare except and value_counts KeyError.
Use value_counts().get(0, 0) to avoid exceptions and drop the bare except.

```diff
-    for col in cols:
-        try:
-            count = df[col].value_counts()[0]  # this can cause error if there is no 0
-            if count > highest_count:
-                highest_count = count
-                cols_with_the_highest_number_of_zero.clear()
-                cols_with_the_highest_number_of_zero.append(col)
-            elif count == highest_count:
-                cols_with_the_highest_number_of_zero.append(col)
-        except:
-            continue
+    for col in cols:
+        vc = df[col].value_counts(dropna=False) if col in df else {}
+        count = vc.get(0, 0)
+        if count > highest_count:
+            highest_count = count
+            cols_with_the_highest_number_of_zero = [col]
+        elif count == highest_count:
+            cols_with_the_highest_number_of_zero.append(col)
```

pages/natural_ventilation.py (3)
502-503: Inconsistent use of ColNames for column references.
While most of the code has been updated to use `ColNames` constants, lines 502-503 still use the string literals "DBT" and "DPT" instead of the constants. This should be updated for consistency.

Apply this diff to fix the inconsistency:

```diff
-    var = "DBT"
-    filter_var = "DPT"
+    var = ColNames.DBT
+    filter_var = ColNames.DPT
```
276-276: String literal "DBT" still referenced in line 276.
Line 276 still references "DBT" as a string literal in the DataFrame access. This should be updated to use `ColNames.DBT`.

Apply this diff:

```diff
-    average_yearly_tmp = (
-        f"Average yearly temperature: {df['DBT'].mean().round(1)} " + tmp_unit
-    )
+    average_yearly_tmp = (
+        f"Average yearly temperature: {df[ColNames.DBT].mean().round(1)} " + tmp_unit
+    )
```
278-285: Multiple string literals "DBT" in temperature calculations.
Lines 279-285 contain multiple references to 'DBT' as string literals which should use `ColNames.DBT`.

Apply this diff:

```diff
-    hottest_yearly_tmp = (
-        f"Hottest yearly temperature (99%): {df['DBT'].quantile(0.99).round(1)} "
-        + tmp_unit
-    )
-    coldest_yearly_tmp = (
-        f"Coldest yearly temperature (1%): {df['DBT'].quantile(0.01).round(1)} "
-        + tmp_unit
-    )
+    hottest_yearly_tmp = (
+        f"Hottest yearly temperature (99%): {df[ColNames.DBT].quantile(0.99).round(1)} "
+        + tmp_unit
+    )
+    coldest_yearly_tmp = (
+        f"Coldest yearly temperature (1%): {df[ColNames.DBT].quantile(0.01).round(1)} "
+        + tmp_unit
+    )
```
🧹 Nitpick comments (59)
cloudbuild.yaml (1)
15-15: YAML lint: add newline at EOF.
YAMLlint flags “no new line character at the end of file”. Add a trailing newline to satisfy linters and avoid noisy CI.

```diff
-  '--cpu', '$_CPU']
+  '--cpu', '$_CPU']
+
```
30-30: Switching to `pipenv install --dev` is the right call. Add a pre-commit hook install step.
Since pre-commit is now a dev dependency, add a note for first-time setup so contributors get hooks:

```sh
pipenv run pre-commit install
# optional: run once across the repo
pipenv run pre-commit run --all-files
```

I can open a quick docs PR snippet if helpful.
29-31: Use shell code fence for command and trim leading space.
Minor presentation tweak for copy-paste/syntax highlighting.

````diff
-```text
- pipenv install --dev
-```
+```sh
+pipenv install --dev
+```
````
11-11: Consistency: “heatmaps” vs “heat maps”.
Pick one form and standardize across Sun & Cloud docs. Many sources use “heatmaps”; consistency helps search and translation.

```diff
-[Heat maps](https://en.wikipedia.org/wiki/Heat_map) allow the intensity of values to be perceived through color palettes. These graphs are very helpful in seeing how magnitudes vary throughout the year.
+[Heatmaps](https://en.wikipedia.org/wiki/Heat_map) allow the intensity of values to be perceived through color palettes. These graphs are very helpful in seeing how magnitudes vary throughout the year.
```
13-13: Potential broken image path (space in filename) and missing alt text.
Spaces in asset paths can break links depending on the renderer. Also, empty alt text hurts accessibility.

```diff
-<figure><img src="../../../.gitbook/assets/Daily sun exa.png" alt=""><figcaption><p>Examples of daily graphs with different variables (from top left to bottom right): global horizontal radiation, global horizontal illuminance, zenith luminance, opaque sky cover</p></figcaption></figure>
+<figure><img src="../../../.gitbook/assets/Daily%20sun%20exa.png" alt="Examples of daily graphs for multiple variables (GHR, illuminance, zenith luminance, opaque sky cover)."><figcaption><p>Examples of daily graphs with different variables (from top left to bottom right): global horizontal radiation, global horizontal illuminance, zenith luminance, opaque sky cover</p></figcaption></figure>
```

If the asset can be renamed, prefer removing spaces in the filename and updating the link accordingly.
docs/documentation/tabs-explained/sun-and-cloud/cloud-coverage.md (1)
5-5: Tighten wording and casing for readability.
Minor copy cleanups will improve clarity and consistency with sentence case used elsewhere.

```diff
-As the Cloud cover is reported in tenths of coverage (i.e. 0 is 0/10 covered. 10 is total coverage) for the purpose of this graph we have simplified the scale as per the table below.
+As cloud cover is reported in tenths (0 means 0/10 covered; 10 means full coverage), for the purpose of this graph we have simplified the scale as per the table below.
```

docs/documentation/tabs-explained/sun-and-cloud/global-and-diffuse-horizontal-solar-radiation/README.md (1)
14-14: Polish punctuation: remove superfluous comma and move period outside link.
This reads more smoothly and avoids including punctuation in the link text.

```diff
-The integral of the curves in the graphs is the total energy (in Wh/m²), supplied by the sun. Be careful in considering the [different types of solar radiation.](global-diffuse-and-normal-solar-radiation-explained.md)
+The integral of the curves in the graphs is the total energy (in Wh/m²) supplied by the sun. Be careful to consider the [different types of solar radiation](global-diffuse-and-normal-solar-radiation-explained.md).
```

docs/README.md (1)
13-21: Nice accessibility win adding “Watch on YouTube”; optionally silence MD034 for the embed.
The added link provides a fallback when embeds don’t render. markdownlint may still flag the embed’s bare URL (MD034). Consider disabling it locally around the embed.

```diff
 ## Video Tutorial
 Learn more about the CBE Clima Tool by watching the following video.
+<!-- markdownlint-disable MD034 -->
 {% embed url="https://www.youtube.com/watch?v=VJ_wOHadVdw" %}
 CBE Clima tool tutorial and overview
 {% endembed %}
+<!-- markdownlint-enable MD034 -->
 [Watch on YouTube](https://www.youtube.com/watch?v=VJ_wOHadVdw)
```

docs/documentation/tabs-explained/sun-and-cloud/README.md (1)
26-28: Good copy tweak; optionally add markdownlint guards for the embed (MD034).
The “Sun and Clouds” wording is consistent with the page title. To keep markdownlint quiet about the bare URL in the embed, you can disable MD034 locally.

```diff
 ### Video Tutorial
 Learn more about the Sun and Clouds tab by watching the following video.
+<!-- markdownlint-disable MD034 -->
 {% embed url="https://youtu.be/VJ_wOHadVdw?si=mB2xNH57MWW_4CRR&t=447" %}
+<!-- markdownlint-enable MD034 -->
```

.pre-commit-config.yaml (1)
1-10: Optional: add core pre-commit hooks (trailing whitespace, EOF, mixed line endings).
These catch common issues before CI and would have prevented the missing-EOF warning.

```diff
 repos:
   - repo: https://github.com/astral-sh/ruff-pre-commit
     # Ruff version.
     rev: v0.12.9
     hooks:
       # Run the linter.
       - id: ruff-check
         args: [--fix]
       # Run the formatter.
       - id: ruff-format
+  - repo: https://github.com/pre-commit/pre-commit-hooks
+    rev: v4.6.0
+    hooks:
+      - id: end-of-file-fixer
+      - id: trailing-whitespace
+      - id: mixed-line-ending
```

docs/documentation/tabs-explained/tab-home.md (2)
7-7: Minor grammar polish for clarity (comma after “EPW file”).
Improves readability without changing meaning.

```diff
-Users can either choose to analyse the climate of the locations displayed on the map or upload a custom EPW file. After loading an EPW file the user can then access the other tabs to generate dynamic visualisations of the data.
+Users can either choose to analyse the climate of the locations displayed on the map or upload a custom EPW file. After loading an EPW file, the user can access the other tabs to generate dynamic visualisations of the data.
```
13-15: Silence markdownlint MD034 for the embed and add an accessible fallback link.
The embed macro triggers MD034 (bare URL). Disabling the rule for this line and adding a fallback link helps both linting and accessibility.

```diff
+<!-- markdownlint-disable-next-line MD034 -->
 {% embed url="https://youtu.be/VJ_wOHadVdw?si=SxvUzaI9rCNIFFs0&t=136" %}
 How to select or upload an EPW file
 {% endembed %}
+*[Watch on YouTube](https://youtu.be/VJ_wOHadVdw?si=SxvUzaI9rCNIFFs0&t=136) if the embed does not load.*
```

.github/workflows/cypress.yml (1)
26-44: Guard against race: wait for the app before launching Cypress.
Starting the server in background without a readiness check can cause flaky runs. Add a wait-on (HTTP 200) before tests.

```diff
       - name: Start Clima
         run: |-
           pip install pipenv
           pipenv install
-          pipenv run python main.py &
+          pipenv run python main.py &
+      - name: Wait for Clima to be ready
+        run: |
+          set -e
+          for i in {1..60}; do
+            if curl -fsS http://localhost:8050 >/dev/null; then
+              echo "Clima is up"
+              exit 0
+            fi
+            sleep 2
+          done
+          echo "Clima did not become ready in time" >&2
+          exit 1
```

pages/lib/layout.py (1)
78-78: Replace hard-coded versions with a single source of truth.
The repository still contains 5 occurrences of the literal “0.9.0” (including the one in pages/lib/layout.py line 78). To avoid future drift whenever you bump the version, each of these should pull from the central version definition rather than being hard-coded.

- pages/lib/layout.py (line 78): `"Version: 0.9.0"`
- Plus 4 other matches for `0.9.0` across the codebase (as reported by your grep)

Action items:

- Introduce or import a single `VERSION` constant (e.g. from your manifest, a config module, or bumpversion metadata).
- Replace each hard-coded `"0.9.0"` with a template or string interpolation that references that constant.
15-15: Fix typo and casing in image alt text.
“deegree” → “degree”; also suggest normalizing country casing to “Italy” for consistency with the rest of the docs.
```diff
- (1).png>)
+ (1).png>)
```

docs/documentation/tabs-explained/tab-summary/README.md (1)
15-16: Keep MD034 suppression; add a fallback link for accessibility and non-embed renderers.
Some renderers (including plain Markdown viewers) won’t show the embed. Add a visible link directly below so users can still access the video.

```diff
 <!-- markdownlint-disable-nextline MD034 -->
 {% embed url="https://youtu.be/VJ_wOHadVdw?si=H-93XRhh5Neuby_b&t=220" %}
+
+Watch on YouTube: https://youtu.be/VJ_wOHadVdw?si=H-93XRhh5Neuby_b&t=220
```

docs/documentation/tabs-explained/tab-summary/clima-dataframe.md (2)
3-3: Fix minor grammar.
“and other we calculate” → “and others we calculate” for subject–verb agreement.

```diff
-**Clima** calculates new variables and creates a new **dataframe** containing the variables already inside the original EPW files and other we calculate. Users can overlay all the variables on the [sun path](../sun-and-cloud/), on the [psychrometric chart](../psychrometric-chart/), and on the customizable graphs in the [data explorer](../data-explorer.md).
+**Clima** calculates new variables and creates a new **dataframe** containing the variables already inside the original EPW files and others we calculate. Users can overlay all the variables on the [sun path](../sun-and-cloud/), on the [psychrometric chart](../psychrometric-chart/), and on the customizable graphs in the [data explorer](../data-explorer.md).
```
21-24: Good consistency pass; consider fixing nearby MD039 lint on untouched items.
These bullets look clean. FYI, markdownlint flags “Spaces inside link text” on adjacent, unchanged items (e.g., lines 19–20 in this file). If you’re touching this section anyway, consider removing the trailing spaces inside the [Link Text ] brackets there to quiet MD039.
pages/lib/global_element_ids.py (2)
5-5: Naming consistency: drop the redundant “ID_” prefix within ElementIds.
Within an enum/class already called ElementIds, the “ID_” prefix is redundant and noisy (e.g., ID_T_RH_DF_STORE). Prefer T_RH_DF_STORE, EXPLORER_DF_STORE, etc., or namespace by page (T_RH__DF_STORE) for readability.
5-201: Readability: group constants by page/feature with section headers.
This is a long list. Consider grouping by page (e.g., T/RH, Explorer, Outdoor, Sun, Wind, Summary…) and add short section headers to reduce mis-wires and improve discoverability.
.github/workflows/python.yml (3)
16-19: actions/setup-python: consider v5 and enable cache for pip/pipenv.
Upgrade to v5 and add caching to speed up CI, especially with pipenv.

```diff
-      - name: Setup Python
-        uses: actions/setup-python@v4
+      - name: Setup Python
+        uses: actions/setup-python@v5
         with:
           python-version: '3.11'
+          cache: 'pip'
```

Optional: cache pipenv’s virtualenv by hashing Pipfile.lock using actions/cache.
21-25: Prefer pipenv sync --dev for reproducible environments.
pipenv install re-solves dependencies. If Pipfile.lock is committed, use sync for deterministic builds.

```diff
-          pip install pipenv
-          pipenv install --dev
+          pip install pipenv
+          pipenv sync --dev
```
34-37: pytest step: consider richer flags and coverage.
Optional: add -q --maxfail=1 for faster feedback and emit coverage for PR annotations.

```diff
-          pipenv run python -m pytest
+          pipenv run pytest -q --maxfail=1 --disable-warnings -r fE
+# If you want coverage (requires pytest-cov):
+# pipenv run pytest -q --maxfail=1 --cov=./ --cov-report=xml
```

docs/contributing/contributing.md (5)
21-24: Fix clone URL placeholder.
Minor clarity/UX fix.

```diff
-git clone https://github.com/Your Account name/clima.git
+git clone https://github.com/<YOUR_GITHUB_USERNAME>/clima.git
```
48-52: Replace hard tabs with spaces (MD010).
Tabs were detected; switch to spaces for markdownlint compliance.

```diff
-> * main
-	remotes/origin/HEAD -> origin/main
-	remotes/origin/development
-	remotes/origin/main
+> * main
+  remotes/origin/HEAD -> origin/main
+  remotes/origin/development
+  remotes/origin/main
```
115-123: Pipenv usage: prefer sync; avoid mixing -r requirements.txt.
If the project uses Pipfile/Pipfile.lock, use sync for reproducible installs. Mixing requirements.txt with pipenv often causes drift.

````diff
-Before submitting a Pull Request, please make sure:
+Before submitting a Pull Request, please make sure that:
@@
-```bash
-npm install
-
-pipenv install -r requirements.txt
-```
+```bash
+npm install
+pipenv sync --dev
+```
````
147-164: Commit taxonomy wording and casing.
Avoid “Master” terminology, and standardize casing (e.g., “CI”, “Perf”, “Docs”). Also remove the trailing comma after “every time,”.

```diff
-- `Main (Master)`: Stable branch, merge code that passes review and CI; merge and release every time,
+- `Main`: Stable branch, merge code that passes review and CI; create releases as needed.
@@
-- `Ci`: CI Configuration Related: Changing Continuous Integration Configurations for Github Actions, Travis, Jenkins, etc.
+- `CI`: Continuous Integration configuration changes (GitHub Actions, Travis, Jenkins, etc.).
@@
-- `Docs/*`, `chore/*`, `refactor/*`, `test/*`: documentation, miscellaneous, refactor, test type branches.
+- `docs/*`, `chore/*`, `refactor/*`, `test/*`: documentation, miscellaneous, refactor, test branches.
```
165-179: Minor grammar/punctuation fixes in “Pull Request Regulation”.
Tighten phrasing.

```diff
-**Time to submit PR:**
+**When to submit a PR:**
@@
-- User requirements/issues have been addressed or discussed in Issue and consensus has been reached.
+- User requirements/issues have been addressed or discussed in an Issue and consensus has been reached.
@@
-- Changes have been minimised (small steps/phased submission) to avoid "mega PRs".
+- Changes have been minimized (small steps/phased submission) to avoid “mega PRs”.
@@
-- **Description:** Provide a brief summary of the changes, related issues, and motivation. List any required dependencies. **Fixes # (issue)**
+- **Description:** Provide a brief summary of the changes, related issues, and motivation. List any required dependencies. Reference issues (e.g., “Fixes #123”).
```

.github/workflows/deploy.yml (3)
17-21: Tighten permissions for OIDC and add minimal permissions block.
Explicit permissions improve security posture and avoid implicit broad scopes.

```diff
 jobs:
   deploy:
     name: Deploying 🚀 Clima
     runs-on: ubuntu-latest
+    permissions:
+      contents: read
+      id-token: write
```
31-31: Newline at EOF.
YAML linters prefer a trailing newline.

```diff
-      --substitutions=_REPO_NAME="clima",_PROJ_NAME="clima-316917",_IMG_NAME="main",_GCR="us.gcr.io",_REGION="us-central1",_MEMORY="4Gi",_CPU="2"
+      --substitutions=_REPO_NAME="clima",_PROJ_NAME="clima-316917",_IMG_NAME="main",_GCR="us.gcr.io",_REGION="us-central1",_MEMORY="4Gi",_CPU="2"
+
```
9-16: Optional: add concurrency to cancel superseded deploys on main.
Prevents overlapping Cloud Build submissions on rapid pushes.

```diff
 jobs:
   deploy:
     name: Deploying 🚀 Clima
     runs-on: ubuntu-latest
+    concurrency:
+      group: deploy-main
+      cancel-in-progress: true
```

docs/documentation/tabs-explained/README.md (1)
11-11: Fix typo: “thy” → “they”.
Minor spelling correction improves clarity.

```diff
-Although there is a logical sequence in the organization of the tabs, thy can be accessed in any order.
+Although there is a logical sequence in the organization of the tabs, they can be accessed in any order.
```

docs/documentation/tabs-explained/psychrometric-chart/psychrometric-chart-explained.md (2)
17-17: Subject–verb agreement.
Plural “processes” requires “are,” or rephrase to singular “process.” Recommend the plural form below.

```diff
-The simplest transformation to be analyzed on the psychrometric diagram is the heating and cooling processes.
+The simplest transformations to analyze on the psychrometric diagram are the heating and cooling processes.
```
31-31: Polish caption grammar and excess spaces.
Lowercase common noun “violin,” remove multiple spaces before “Source,” and keep a single space after period.

```diff
-<figure><img src="../../../.gitbook/assets/675px-Violino_Clisbee.jpg.webp" alt=""><figcaption><p>Stradivari Violin, stored under precise temperature and humidity conditions to prevent the valuable wood from warping. Source: <a href="https://www.frammentirivista.it/il-clisbee-il-prezioso-violino-di-stradivari/">Frammentirivista</a></p></figcaption></figure>
+<figure><img src="../../../.gitbook/assets/675px-Violino_Clisbee.jpg.webp" alt=""><figcaption><p>Stradivari violin, stored under precise temperature and humidity conditions to prevent the valuable wood from warping. Source: <a href="https://www.frammentirivista.it/il-clisbee-il-prezioso-violino-di-stradivari/">Frammentirivista</a></p></figcaption></figure>
```

pages/lib/charts_data_explorer.py (1)
64-65: Make hour/DOY axis ranges data-driven (handle 0–23 hours and leap years).
The y-axis is hard-coded to 1–24 and the x-axis to 1–365. If the dataset uses hours 0–23 or a leap year (DOY=366), parts of the data will be clipped or misaligned.

For lines outside the selected range, consider:

```python
# y-axis: derive from data
y_min = int(np.nanmin(df[ColNames.HOUR.value]))
y_max = int(np.nanmax(df[ColNames.HOUR.value]))
fig.update_layout(yaxis=dict(range=(y_min, y_max)))

# x-axis: handle leap years
x_max = int(np.nanmax(df[ColNames.DOY.value]))
fig.update_layout(xaxis=dict(range=(1, x_max)))
```

If hour should be displayed 1–24 for UX, map 0→24 only in the presentation layer while keeping underlying data intact.
docs/documentation/tabs-explained/psychrometric-chart/README.md (1)
9-9: Tighten phrasing (optional).
Slightly more direct wording reads better in docs.

```diff
-With the first choice in the drop-down list, "None", it is possible to view temperature conditions in the psychrometric diagram over the entire year.
+Selecting "None" in the drop-down shows temperature conditions in the psychrometric diagram for the entire year.
```

pages/lib/utils.py (1)
217-226: Ensure Dash receives plain strings for column IDs; align style condition with ColNames.
Because ColNames is a str Enum, most libraries treat members as strings, but to avoid any serialization edge-cases, cast IDs to str. Also, style_cell_conditional still targets the literal "month". Use ColNames for consistency.

Apply:

```diff
-    df_summary = df_summary.rename(columns={ColNames.MONTH_NAMES: ColNames.MONTH})
+    df_summary = df_summary.rename(columns={ColNames.MONTH_NAMES: str(ColNames.MONTH)})
-    df_sum = df_sum.T.assign(count="Year").rename(columns={"count": ColNames.MONTH})
+    df_sum = df_sum.T.assign(count="Year").rename(columns={"count": str(ColNames.MONTH)})
-    columns=[
-        {"name": i, "id": i} if i == ColNames.MONTH else {"name": f"{i} ({unit})", "id": i}
-        for i in df_summary.columns
-    ],
+    columns=[
+        {"name": str(i), "id": str(i)} if str(i) == str(ColNames.MONTH)
+        else {"name": f"{str(i)} ({unit})", "id": str(i)}
+        for i in df_summary.columns
+    ],
-    style_cell_conditional=[{"if": {"column_id": "month"}, "textAlign": "right"}],
+    style_cell_conditional=[{"if": {"column_id": str(ColNames.MONTH)}, "textAlign": "right"}],
```

Also applies to: 233-236, 242-258
pages/lib/template_graphs.py (3)
149-154: Prefer ColNames for adaptive comfort columns to avoid drift.These keys exist in ColNames; mixing raw strings and ColNames invites future mismatches.
Apply:
```diff
-    lo80 = df.groupby(ColNames.DOY)["adaptive_cmf_80_low"].mean().values
-    hi80 = df.groupby(ColNames.DOY)["adaptive_cmf_80_up"].mean().values
-    rmt = df.groupby(ColNames.DOY)["adaptive_cmf_rmt"].mean().values
+    lo80 = df.groupby(ColNames.DOY)[ColNames.ADAPTIVE_CMF_80_LOW].mean().values
+    hi80 = df.groupby(ColNames.DOY)[ColNames.ADAPTIVE_CMF_80_UP].mean().values
+    rmt = df.groupby(ColNames.DOY)[ColNames.ADAPTIVE_CMF_RMT].mean().values
```
481-491: Use ColNames and boolean indexing instead of query strings; query mixes raw names.Stay consistent with ColNames and avoid string parsing overhead or name drift.
Apply:
```diff
-    if start_month <= end_month:
-        df = df.loc[(df[ColNames.MONTH] >= start_month) & (df[ColNames.MONTH] <= end_month)]
-    else:
-        df = df.loc[(df[ColNames.MONTH] <= end_month) | (df[ColNames.MONTH] >= start_month)]
+    if start_month <= end_month:
+        df = df[(df[ColNames.MONTH] >= start_month) & (df[ColNames.MONTH] <= end_month)]
+    else:
+        df = df[(df[ColNames.MONTH] <= end_month) | (df[ColNames.MONTH] >= start_month)]

-    if start_hour <= end_hour:
-        df = df.loc[(df[ColNames.HOUR] > start_hour) & (df[ColNames.HOUR] <= end_hour)]
-    else:
-        df = df.loc[(df[ColNames.HOUR] <= end_hour) | (df[ColNames.HOUR] >= start_hour)]
+    if start_hour <= end_hour:
+        df = df[(df[ColNames.HOUR] > start_hour) & (df[ColNames.HOUR] <= end_hour)]
+    else:
+        df = df[(df[ColNames.HOUR] <= end_hour) | (df[ColNames.HOUR] >= start_hour)]

-    calm_count = df.query("wind_speed == 0").shape[0]
+    calm_count = (df[ColNames.WIND_SPEED] == 0).sum()
```

Also applies to: 499-509
833-844: Mask logic is correct; minor readability tweak optional.Expressions are correct; consider extracting masks into named vars for readability.
pages/lib/charts_sun.py (1)
116-139: Minor: duplicate filtering of solpos; keep only one.You assign solpos = df[df[APPARENT_ELEVATION] > 0] twice. Remove the first or second to DRY.
Apply:
```diff
-    solpos = df.loc[df[ColNames.APPARENT_ELEVATION] > 0, :]
 ...
-    solpos = df.loc[df[ColNames.APPARENT_ELEVATION] > 0, :]
+    solpos = df.loc[df[ColNames.APPARENT_ELEVATION] > 0, :]
```

(Keep the later assignment after time shifting, or confirm the intended frame and keep only that one.)
pages/lib/extract_df.py (3)
9-9: Use a module logger and avoid bare prints for consistency.You introduced logging but still have a print earlier (Line 39). Prefer a module logger and consistent severity.
Apply:
```diff
+logger = logging.getLogger(__name__)
 ...
-    print("returning none")
+    logger.warning("EPW ZIP fetch returned 404; returning None")
 ...
-    except Exception as e:
-        logging.error(f"Failed to fetch EPW data: {e}")
+    except Exception as e:
+        logger.error("Failed to fetch EPW data", exc_info=e)
```

Also applies to: 47-49
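As a quick, self-contained sanity check of the module-logger pattern (the StringIO handler wiring here is only for demonstration):

```python
import io
import logging

logger = logging.getLogger(__name__)
logger.setLevel(logging.WARNING)

buffer = io.StringIO()
handler = logging.StreamHandler(buffer)
handler.setFormatter(logging.Formatter("%(levelname)s:%(name)s:%(message)s"))
logger.addHandler(handler)

logger.warning("EPW ZIP fetch returned 404; returning None")

try:
    raise ValueError("boom")
except Exception as exc:
    # exc_info attaches the full traceback to the record instead of
    # interpolating the exception into the message string
    logger.error("Failed to fetch EPW data", exc_info=exc)

output = buffer.getvalue()
```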
303-337: Good: adaptive comfort columns now consistently use ColNames.Loop still references epw_df.DOY; consider bracket form for uniformity.
Apply:
```diff
-    for day in epw_df.DOY.unique():
+    for day in epw_df[ColNames.DOY].unique():
 ...
-        epw_df.loc[epw_df.DOY == day, ColNames.ADAPTIVE_CMF_RMT] = rmt
+        epw_df.loc[epw_df[ColNames.DOY] == day, ColNames.ADAPTIVE_CMF_RMT] = rmt
-        epw_df.loc[epw_df.DOY == day, ColNames.ADAPTIVE_COMFORT] = r["tmp_cmf"]
+        epw_df.loc[epw_df[ColNames.DOY] == day, ColNames.ADAPTIVE_COMFORT] = r["tmp_cmf"]
-        epw_df.loc[epw_df.DOY == day, ColNames.ADAPTIVE_CMF_80_LOW] = r["tmp_cmf_80_low"]
+        epw_df.loc[epw_df[ColNames.DOY] == day, ColNames.ADAPTIVE_CMF_80_LOW] = r["tmp_cmf_80_low"]
-        epw_df.loc[epw_df.DOY == day, ColNames.ADAPTIVE_CMF_80_UP] = r["tmp_cmf_80_up"]
+        epw_df.loc[epw_df[ColNames.DOY] == day, ColNames.ADAPTIVE_CMF_80_UP] = r["tmp_cmf_80_up"]
-        epw_df.loc[epw_df.DOY == day, ColNames.ADAPTIVE_CMF_90_LOW] = r["tmp_cmf_90_low"]
+        epw_df.loc[epw_df[ColNames.DOY] == day, ColNames.ADAPTIVE_CMF_90_LOW] = r["tmp_cmf_90_low"]
-        epw_df.loc[epw_df.DOY == day, ColNames.ADAPTIVE_CMF_90_UP] = r["tmp_cmf_90_up"]
+        epw_df.loc[epw_df[ColNames.DOY] == day, ColNames.ADAPTIVE_CMF_90_UP] = r["tmp_cmf_90_up"]
```
374-380: Proactive conversion of adaptive metrics is OK; ensure no double-convert via mapping_json.If mapping_json also specifies conversion functions for these keys, they’ll be converted twice. Consider guarding with a flag in mapping_json or moving these into the mapping for symmetry.
pages/lib/global_column_names.py (1)
1-10: Solid centralization of column names; consider adding minimal docs and type helpers.Two small improvements:
- Add a module docstring describing intent and usage.
- Optionally implement str to make intent explicit and help linters, though str Enum already behaves as desired.
Example:
+""" +Single source of truth for DataFrame column names used across the app. +Use ColNames.X everywhere instead of string literals to avoid drift. +""" from enum import Enum @@ class ColNames(str, Enum): + def __str__(self) -> str: # explicit, helps some linters/type-checkers + return self.valueAlso applies to: 21-27, 33-71
pages/psy-chart.py (2)
292-301: Use ColNames for month null-check.This still hardcodes "month" and may diverge from the centralized constants.
```diff
-    if df.dropna(subset=["month"]).shape[0] == 0:
+    if df.dropna(subset=[ColNames.MONTH]).shape[0] == 0:
```
440-476: Standardize customdata columns via ColNames (where available).customdata references df["h"] and df["t_dp"] while the rest of the file uses ColNames. If these columns are stable, consider adding them to ColNames (e.g., ENTHALPY and DPT_TDP) and using those constants here and in mapping_dictionary for consistency.
pages/t_rh.py (1)
53-57: Nit: public label "Yearly_chart" uses underscore.Consider "Yearly chart" for consistency with other titles.
- text="Yearly_chart", + text="Yearly chart",pages/sun.py (2)
221-229: Optional: adopt ColNames for "tot_sky_cover".To stay consistent with the PR’s goal, consider importing ColNames and replacing "tot_sky_cover" with ColNames.TOT_SKY_COVER.
132-140: Minor: extra spaces after equals in id= ElementIds.*.Cosmetic only; ignore if your formatter will normalize this later.
pages/select.py (3)
216-217: Use ElementIds for State lines-store for consistency.You already export LINES_STORE; avoid mixing literal "lines-store".
```diff
-    [State(ElementIds.ID_SELECT_URL_STORE, "data"), State("lines-store", "data")],
+    [State(ElementIds.ID_SELECT_URL_STORE, "data"), State(ElementIds.LINES_STORE, "data")],
```
219-224: Dead variable: map_json is computed but unused.convert_data returns a converted mapping, but map_json isn’t used afterward. Remove to avoid confusion.
```diff
-    map_json = json.dumps(mapping_dictionary)
-    if si_ip_input == UnitSystem.IP:
-        map_json = convert_data(df, map_json)
+    if si_ip_input == UnitSystem.IP:
+        _ = convert_data(df, json.dumps(mapping_dictionary))  # noop; remove if unnecessary
```
142-161: Modal path uses literal "modal-yes-button" in ctx.triggered.Safe because ElementIds.MODAL_YES_BUTTON.value is "modal-yes-button", but consider using the constant to avoid future ID drift in this equality check.
pages/natural_ventilation.py (1)
1-1: Add blank line after imports for better readability.The standard Python convention is to have two blank lines between import statements and module-level code. Consider adding another blank line after the imports.
pages/explorer.py (1)
405-408: Inconsistent style attribute placement.The style attribute placement for the graph component is inconsistent with the rest of the codebase. Consider keeping attributes on separate lines for better readability when there are multiple attributes.
Apply this diff for consistency:
```diff
-    dcc.Graph(
-        className="full-width",
-        id=ElementIds.CUSTOM_SUMMARY,
-        config=fig_config,
-    ),
+    dcc.Graph(
+        id=ElementIds.CUSTOM_SUMMARY,
+        className="full-width",
+        config=fig_config,
+    ),
```

pages/summary.py (1)
357-357: String literals in query conditions should reference ColNames for column names. While the DataFrame column references have been updated to use `ColNames.DBT`, the query strings still contain "DBT" as a string literal. Consider building the query string using the constant value.

Apply this diff for better maintainability:
```diff
-    query = query_month + str(i) + " and DBT<=" + str(hdd_setpoint)
+    query = query_month + str(i) + f" and {ColNames.DBT}<=" + str(hdd_setpoint)
```

```diff
-    query = query_month + str(i) + " and DBT>=" + str(cdd_setpoint)
+    query = query_month + str(i) + f" and {ColNames.DBT}>=" + str(cdd_setpoint)
```
pages/wind.py (3)
126-126: Fix spacing issue in ID assignment.There's an extra space after
id=which is inconsistent with the rest of the code.Apply this diff:
- id= ElementIds.SUMMER_WIND_ROSE, + id=ElementIds.SUMMER_WIND_ROSE,
514-514: String literals "wind_speed" in query strings.The query strings still use "wind_speed" as a string literal. For consistency and maintainability, consider using the constant value in the query string construction.
Apply this diff pattern for all occurrences:
```diff
-    query_calm_wind = "wind_speed == 0"
+    query_calm_wind = f"{ColNames.WIND_SPEED} == 0"
```

Also applies to: 525-525, 530-530, 637-637
399-399: Consider using a more descriptive name for chart configuration. Using `ColNames.WIND_SPEED` directly as the chart name might not be ideal. The column name constant is "wind_speed", which works, but it might be clearer to use a dedicated string.

Consider keeping the original approach or creating a mapping:
```diff
-    config=generate_chart_name(ColNames.WIND_SPEED, meta, units),
+    config=generate_chart_name("wind_speed", meta, units),
```
```yaml
name: Cypress Testing 🌲
on:
  push:
    branches:
      - development
  pull_request:
    branches:
      - development
```
💡 Verification agent
🧩 Analysis chain
CI won’t run on PRs to main with these triggers.
This PR targets main, but the workflow triggers only on development. End-to-end tests will be skipped for main PRs and pushes to main.
name: Cypress Testing 🌲
on:
- push:
- branches:
- - development
- pull_request:
- branches:
- - development
+ push:
+ branches: [main, development]
+ pull_request:
+ branches: [main, development]
+ workflow_dispatch:

Please re-run this PR after the change to confirm the Cypress job triggers on main.
🏁 Script executed:
#!/bin/bash
# Show the first 20 lines of the Cypress workflow to inspect its triggers
sed -n '1,20p' .github/workflows/cypress.yml
# Highlight all 'branches' entries to verify which branches are included
rg -nP "branches" -C2 .github/workflows/cypress.yml || true

Length of output: 561
Add “main” to the Cypress workflow triggers
The current workflow only runs on pushes and PRs against development, so any PR targeting main (like this one) will skip end-to-end tests. Please update .github/workflows/cypress.yml to include both main and development—and add a manual trigger if desired—so CI runs on main as well.
• File: .github/workflows/cypress.yml
• Replace lines 2–8 with:
on:
- push:
- branches:
- - development
- pull_request:
- branches:
- - development
+ push:
+ branches: [main, development]
+ pull_request:
+ branches: [main, development]
+ workflow_dispatch:

After making this change, re-run the workflow on this PR to confirm that the Cypress job now triggers for main branch changes.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```yaml
name: Cypress Testing 🌲
on:
  push:
    branches: [main, development]
  pull_request:
    branches: [main, development]
  workflow_dispatch:
```
🤖 Prompt for AI Agents
.github/workflows/cypress.yml lines 1 to 8: the workflow only triggers on pushes
and PRs to development so CI will skip PRs targeting main; update the on:
section to include both development and main branches for push and pull_request
and add a manual workflow_dispatch trigger (if desired) so tests can be run
manually; replace the current lines 2–8 with an on: block that lists push and
pull_request with branches: ['development','main'] and also include
workflow_dispatch to allow manual runs, then commit and re-run the workflow on
this PR to verify Cypress triggers for main.
```diff
-      - name: Setting up
-        uses: google-github-actions/setup-gcloud@v0
+      - name: Set up Cloud SDK
+        uses: google-github-actions/setup-gcloud@v1
```
setup-gcloud@v1 is deprecated on new runners; bump to v2
actionlint warns v1 is too old. Upgrade to v2.
- - name: Set up Cloud SDK
- uses: google-github-actions/setup-gcloud@v1
+ - name: Set up Cloud SDK
+ uses: google-github-actions/setup-gcloud@v2

🧰 Tools
🪛 actionlint (1.7.7)
23-23: the runner of "google-github-actions/setup-gcloud@v1" action is too old to run on GitHub Actions. update the action's version to fix this issue
(action)
🤖 Prompt for AI Agents
.github/workflows/deploy.yml around line 23: the workflow uses the deprecated
google-github-actions/setup-gcloud@v1; update the action reference to
google-github-actions/setup-gcloud@v2 to remove the deprecation warning, then
verify and adjust any input keys that changed in v2 (consult the v2 docs for
exact input names such as export_default_credentials or project/credentials
handling) and run actionlint and the workflow to confirm no other syntax or
input errors remain.
```diff
         project_id: heat-stress-scale

-      - name: Building (🏗️)
-        run: |-
+      - name: Building (🏗️) and Deploying (🚀)
+        run: |
           gcloud builds submit \
-            --tag us-docker.pkg.dev/clima-316917/gcr.io/clima
-
-      # Setup gcloud CLI
-      - name: Deploy (🚀)
-        uses: google-github-actions/deploy-cloudrun@v1
-        with:
-          service: clima
-          image: us-docker.pkg.dev/clima-316917/gcr.io/clima
-          region: us-central1
-          credentials: ${{ secrets.GCP_SA_KEY_JSON }}
-          project_id: clima-316917
+            --project=clima-316917 \
+            --substitutions=_REPO_NAME="clima",_PROJ_NAME="clima-316917",_IMG_NAME="main",_GCR="us.gcr.io",_REGION="us-central1",_MEMORY="4Gi",_CPU="2"
```
Project mismatch: setup uses heat-stress-scale, build submits to clima-316917
These should match; otherwise auth/quotas/logs land in different projects. Align both to the intended GCP project (pick one).
- with:
- project_id: heat-stress-scale
+ with:
+ project_id: clima-316917
@@
- --project=clima-316917 \
+ --project=${{ steps.gcloud_setup.outputs.project_id || 'clima-316917' }} \

Or hardcode one project consistently in both places.
📝 Committable suggestion
```yaml
      - uses: google-github-actions/setup-gcloud@v0
        with:
          project_id: clima-316917
      - name: Building (🏗️) and Deploying (🚀)
        run: |
          gcloud builds submit \
            --project=${{ steps.gcloud_setup.outputs.project_id || 'clima-316917' }} \
            --substitutions=_REPO_NAME="clima",_PROJ_NAME="clima-316917",_IMG_NAME="main",_GCR="us.gcr.io",_REGION="us-central1",_MEMORY="4Gi",_CPU="2"
```
🧰 Tools
🪛 YAMLlint (1.37.1)
[error] 31-31: no new line character at the end of file
(new-line-at-end-of-file)
🤖 Prompt for AI Agents
In .github/workflows/deploy.yml around lines 25 to 31, the workflow’s project_id
is set to heat-stress-scale but the gcloud builds submit uses
--project=clima-316917 (mismatched GCP projects); choose the correct project and
make them consistent by updating either project_id to clima-316917 or changing
the --project and any _PROJ_NAME/_REPO_NAME substitutions to heat-stress-scale
so both the workflow metadata and the gcloud invocation use the same project
identifier; ensure any related substitution variables (_PROJ_NAME, _REPO_NAME)
and IAM/secret references are updated accordingly.
```yaml
      - id: ruff-check
        args: [ --fix ]
      # Run the formatter.
      - id: ruff-format
```
🛠️ Refactor suggestion
Fix YAML indentation under hooks and add missing EOF newline
Current indentation under hooks is deeper than pre-commit’s canonical style and flagged by YAMLlint; also there’s no trailing newline.
repos:
- repo: https://github.com/astral-sh/ruff-pre-commit
# Ruff version.
rev: v0.12.9
hooks:
- # Run the linter.
- - id: ruff-check
- args: [ --fix ]
- # Run the formatter.
- - id: ruff-format
+ # Run the linter.
+ - id: ruff-check
+ args: [--fix]
+ # Run the formatter.
+ - id: ruff-format
+

📝 Committable suggestion
```yaml
repos:
  - repo: https://github.com/astral-sh/ruff-pre-commit
    # Ruff version.
    rev: v0.12.9
    hooks:
      # Run the linter.
      - id: ruff-check
        args: [--fix]
      # Run the formatter.
      - id: ruff-format
```
🧰 Tools
🪛 YAMLlint (1.37.1)
[warning] 7-7: wrong indentation: expected 2 but found 4
(indentation)
[error] 10-10: no new line character at the end of file
(new-line-at-end-of-file)
🤖 Prompt for AI Agents
.pre-commit-config.yaml lines 7-10: the list items under hooks are over-indented
and the file is missing a trailing newline; re-indent the hooks block to match
pre-commit canonical style (list items and their keys aligned with two spaces
under the parent mapping, e.g. ensure "- id:" lines are at the correct list
indentation level under "hooks:" and any "args:" stays as a child of the
corresponding list item), and add a final newline (EOF) at the end of the file.
We use GitHub as our main collaboration platform. Please work from the `development` branch, create small feature branches, and open focused pull requests. Follow Conventional Commit messages (e.g., `feat:`, `fix:`, `docs:`), format Python code with Black, and add tests where needed. Never merge your own PR—wait for review and address all comments (including AI reviewer suggestions). Use Issues and Projects to track tasks and discussions.

> This project requires Python 3.11. Do not use Python 3.12 or newer, as it may cause dependency incompatibilities, build failure or runtime errors
🛠️ Refactor suggestion
Align formatter guidance: choose Ruff or Black (docs vs CI mismatch)
CI uses Ruff (check/format). The doc says “format Python code with Black” and later installs Black. This is conflicting. Recommend standardizing on Ruff (formatter) and dropping Black from mandatory steps.
-We use GitHub as our main collaboration platform. Please work from the `development` branch, create small feature branches, and open focused pull requests. Follow Conventional Commit messages (e.g., `feat:`, `fix:`, `docs:`), format Python code with Black, and add tests where needed. Never merge your own PR—wait for review and address all comments (including AI reviewer suggestions). Use Issues and Projects to track tasks and discussions.
+We use GitHub as our main collaboration platform. Please work from the `development` branch, create small feature branches, and open focused pull requests. Follow Conventional Commits (e.g., `feat:`, `fix:`, `docs:`), format Python code with Ruff (`ruff format`), and add tests where needed. Never merge your own PR—wait for review and address all comments (including AI reviewer suggestions). Use Issues and Projects to track tasks and discussions.

📝 Committable suggestion
We use GitHub as our main collaboration platform. Please work from the `development` branch, create small feature branches, and open focused pull requests. Follow Conventional Commits (e.g., `feat:`, `fix:`, `docs:`), format Python code with Ruff (`ruff format`), and add tests where needed. Never merge your own PR—wait for review and address all comments (including AI reviewer suggestions). Use Issues and Projects to track tasks and discussions.

> This project requires Python 3.11. Do not use Python 3.12 or newer, as it may cause dependency incompatibilities, build failure or runtime errors
🤖 Prompt for AI Agents
In docs/contributing/contributing.md around lines 8 to 11, the contributor guide
instructs to "format Python code with Black" which conflicts with the CI that
uses Ruff; update the document to standardize on Ruff as the project formatter,
remove or replace references to Black, and adjust any installation or setup
steps in the docs to show installing/configuring Ruff (and its rules/plugins) so
the docs match CI and avoid telling contributors to use Black.
```python
ID_OUTDOOR_MONTH_HOUR_FILTER = "month-hour-filter"
MONTH_HOUR_FILTER = "month-hour-filter"
```
🛠️ Refactor suggestion
Two names map to the same value; keep one public symbol
ID_OUTDOOR_MONTH_HOUR_FILTER and MONTH_HOUR_FILTER are synonyms. Keep a single canonical name to avoid confusion.
Apply one of the following:
Option A — keep generic:
```diff
-ID_OUTDOOR_MONTH_HOUR_FILTER = "month-hour-filter"
 MONTH_HOUR_FILTER = "month-hour-filter"
```

Option B — keep page-scoped with unique value:
```diff
-ID_OUTDOOR_MONTH_HOUR_FILTER = "month-hour-filter"
-MONTH_HOUR_FILTER = "month-hour-filter"
+OUTDOOR_MONTH_HOUR_FILTER = "outdoor-month-hour-filter"
```

📝 Committable suggestion
Option A:

```python
MONTH_HOUR_FILTER = "month-hour-filter"
```

Option B:

```python
OUTDOOR_MONTH_HOUR_FILTER = "outdoor-month-hour-filter"
```
🤖 Prompt for AI Agents
In pages/lib/global_element_ids.py around lines 62-63, two constants
ID_OUTDOOR_MONTH_HOUR_FILTER and MONTH_HOUR_FILTER are set to the same value;
remove the duplication by keeping a single canonical constant and update code
that imports/uses the removed name. Either: (A) keep the generic
MONTH_HOUR_FILTER and delete ID_OUTDOOR_MONTH_HOUR_FILTER, replacing its usages
with MONTH_HOUR_FILTER; or (B) keep the page-scoped ID_OUTDOOR_MONTH_HOUR_FILTER
but give it a unique string value (e.g., "outdoor-month-hour-filter"), delete
MONTH_HOUR_FILTER, and replace its usages with the retained constant; ensure all
references/imports are updated and tests/linters pass.
```python
SEC1_MONTH_SLIDER = "sec1-month-slider"
SEC1_TIME_FILTER_INPUT = "sec1-time-filter-input"
SEC1_VAR_DROPDOWN = "sec1-var-dropdown"
SDATA_FILTER = "data-filter"
```
Likely typo: SDATA_FILTER
SDATA_FILTER looks accidental beside SEC1_* and SEC2_* variables and duplicates DATA_FILTER’s value. Remove or rename to the intended SEC1_DATA_FILTER (if truly distinct) with a unique ID.
Apply one of:
```diff
-SDATA_FILTER = "data-filter"
+# Removed: duplicate/typo; use DATA_FILTER or a unique SEC1_DATA_FILTER if needed
```

or
```diff
-SDATA_FILTER = "data-filter"
+SEC1_DATA_FILTER = "sec1-data-filter"
```

📝 Committable suggestion
```python
SEC1_DATA_FILTER = "sec1-data-filter"
```
🤖 Prompt for AI Agents
In pages/lib/global_element_ids.py around line 68, the SDATA_FILTER constant is
likely a typo that duplicates DATA_FILTER's value; remove SDATA_FILTER or rename
it to the intended unique constant (e.g., SEC1_DATA_FILTER) with a distinct
string ID, then update all code references/imports to use the new name (or stop
using it if removed) and run tests to ensure no broken references remain.
```python
df[ColNames.NV_ALLOWED] = 1

df = filter_df_by_month_and_hour(
    df, time_filter, month, hour, invert_month, invert_hour, "nv_allowed"
)
```
String literal "nv_allowed" should use ColNames constant.
The string literal "nv_allowed" at line 517 should use ColNames.NV_ALLOWED for consistency with the rest of the refactor.
Apply this diff:
```diff
-    df, time_filter, month, hour, invert_month, invert_hour, "nv_allowed"
+    df, time_filter, month, hour, invert_month, invert_hour, ColNames.NV_ALLOWED
```

📝 Committable suggestion
```python
df, time_filter, month, hour, invert_month, invert_hour, ColNames.NV_ALLOWED
```
🤖 Prompt for AI Agents
In pages/natural_ventilation.py around line 517, replace the string literal
"nv_allowed" with the ColNames constant; update the function call argument from
"nv_allowed" to ColNames.NV_ALLOWED so it matches the refactor and existing
usage patterns, and ensure the ColNames symbol is imported or referenced
correctly in this module.
```diff
 # wind speed
 @callback(
-    Output("wind-speed", "children"),
+    Output(ColNames.WIND_SPEED, "children"),
```
Incorrect Output ID reference.
The output ID should be ElementIds.WIND_SPEED instead of ColNames.WIND_SPEED. This appears to be a typo that would cause a runtime error.
Apply this diff to fix the issue:
```diff
-    Output(ColNames.WIND_SPEED, "children"),
+    Output(ElementIds.WIND_SPEED, "children"),
```

📝 Committable suggestion
```python
Output(ElementIds.WIND_SPEED, "children"),
```
🤖 Prompt for AI Agents
In pages/wind.py around line 381, the Output ID is incorrectly using
ColNames.WIND_SPEED; replace that with ElementIds.WIND_SPEED to reference the
correct element ID. Update the Output(...) call so its first argument is
ElementIds.WIND_SPEED (not ColNames.WIND_SPEED) and ensure the ElementIds
import/enum is available in the file; run tests or lint to verify no further
references to ColNames.WIND_SPEED remain.
```python
    (df[ColNames.HOUR] >= morning_times[0]) & (df[ColNames.HOUR] <= morning_times[1])
]
```
Incorrect DataFrame filtering logic.
The noon_df filtering is using morning_times instead of noon_times, which appears to be a copy-paste error.
Apply this diff to fix the logic:
```diff
-    noon_df = df.loc[
-        (df[ColNames.HOUR] >= morning_times[0]) & (df[ColNames.HOUR] <= morning_times[1])
-    ]
+    noon_df = df.loc[
+        (df[ColNames.HOUR] >= noon_times[0]) & (df[ColNames.HOUR] <= noon_times[1])
+    ]
```

📝 Committable suggestion
```python
noon_df = df.loc[
    (df[ColNames.HOUR] >= noon_times[0]) & (df[ColNames.HOUR] <= noon_times[1])
]
```
🤖 Prompt for AI Agents
In pages/wind.py around lines 632-633, the DataFrame filter for noon_df
mistakenly uses morning_times; change it to use noon_times instead so the
condition reads (df[ColNames.HOUR] >= noon_times[0]) & (df[ColNames.HOUR] <=
noon_times[1]) when building noon_df. Ensure any variable names reference
noon_times and not morning_times in that filtering expression.
Code Sanitisation #159
Contributing.md
We are not sure whether these changes fully meet your needs. If any parts need improvement, please comment and we will make the changes promptly.
Summary by CodeRabbit