
fixing GLC jupyter notebook #398

Open
gunterl wants to merge 2 commits into NCAR:main from gunterl:cpl_hist_GRIS

Conversation


gunterl commented Mar 7, 2026

  • in config.yml, add the option for the init_file
  • in the jupyter notebook, removed the hard-coded dependency on the init_file, and fixed the range for plotting the time series.
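The config.yml change described above might look roughly like this. This is a hedged sketch: only the new init_file key comes from this PR; the section name, sibling keys, and values are invented for illustration.

```yaml
# Hypothetical excerpt of CUPiD's config.yml -- structure and values are
# illustrative; only the new init_file option is described by this PR.
glc_diagnostics:
  parameter_groups:
    none:
      case_name: some_cesm_case
      init_file: /path/to/glc_init_file.nc  # new: no longer hard-coded in the notebook
```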

Description of changes:

  • Please add an explanation of what your changes do and why you'd like us to include them.

All PRs Checklist:

  • Have you followed the guidelines in our Contributor's Guide?
  • Have you checked to ensure there aren't other open Pull Requests for the same update/change?
  • Have you made sure that the pre-commit checks passed (#8 in Adding Notebooks Guide)?
  • Have you successfully tested your changes locally when running standalone CUPiD?
  • Have you tested your changes as part of the CESM workflow?
  • Once you are ready to have your PR reviewed, have you changed it from a Draft PR to an Open PR?

New notebook PR Additional Checklist (if these do not apply, feel free to remove this section):

  • Have you hidden the code cells (#8 in Adding Notebooks Guide) in your notebook?
  • Have you removed any unused parameters from your cell tagged with parameters? These can cause confusing warnings that show up as "DAG build with warnings".
  • Have you moved any observational data that you are using to /glade/campaign/cesm/development/cross-wg/diagnostic_framework/CUPiD_obs_data and ensured that it follows this format within that directory: COMPONENT/analysis_datasets/RESOLUTION/PROCESSED_FIELD_TYPE?

- in config.yml, add the option for the init_file
- in the jupyter notebook, removed the hard coded dependency on the
  init_file, and fixed the range for plotting the time series.
mnlevy1981 commented

@gunterl -- Do you mind if I push some changes to your branch just to get it to pass the automated testing done by github?

mnlevy1981 self-requested a review March 10, 2026 15:09

gunterl commented Mar 10, 2026

@mnlevy1981 not at all, thanks for looking at this.

Needed to pass the CI testing on github
mnlevy1981 left a comment

The update to add init_file is great; I have some comments about how you modified the plot limits, but the only important point is the one about what happens when base_case_name = None.

Comment on lines +561 to +565
"print(f\"avg_smb_case_climo, has shape {np.shape(avg_smb_case_climo)}\")\n",
"print(f\"avg_smb_case_timeseries has shape {np.shape(avg_smb_case_timeseries)}\")\n",
"print(f\"avg_smb_base_case_climo has shape {np.shape(avg_smb_base_case_climo)}\")\n",
"print(f\"avg_smb_base_timeseries has shape {np.shape(avg_smb_base_timeseries)}\")\n",
"print(f\"avg_smb_obs_timeseries has shape {np.shape(avg_smb_obs_timeseries)}\")\n",
mnlevy1981 commented

@gunterl do you want these print statements to remain, or were they just part of some debugging while you set up numpy calls to compute ymin1, ymin2, ymax1, and ymax2?

Comment on lines +567 to +577
"ymin1 = np.array(\n",
" [avg_smb_base_case_climo, avg_smb_base_timeseries, avg_smb_obs_timeseries]\n",
").min()\n",
"ymin2 = np.array([avg_smb_case_climo, avg_smb_case_timeseries]).min()\n",
"ymax1 = np.array(\n",
" [avg_smb_base_case_climo, avg_smb_base_timeseries, avg_smb_obs_timeseries]\n",
").max()\n",
"ymax2 = np.array([avg_smb_case_climo, avg_smb_case_timeseries]).max()\n",
"\n",
"ymin = np.array([ymin1, ymin2]).min() - 50\n",
"ymax = np.array([ymax1, ymax2]).max() + 50\n",
mnlevy1981 commented

I think you're going to run into problems with ymin1 and ymax1 if base_case_name is None; also, if you just want the biggest / smallest values from three (or five) arrays, you can do something like

all_data = np.concatenate([avg_smb_case_climo, avg_smb_case_timeseries, avg_smb_obs_timeseries])
if base_case_name:
    all_data = np.concatenate([all_data, avg_smb_base_case_climo, avg_smb_base_timeseries])

ymin = all_data.min() - 50.
ymax = all_data.max() + 50.
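Run end to end, the suggested pattern looks roughly like this. The array values and the base-case name are stand-ins for illustration; in the notebook they come from the SMB diagnostics.

```python
import numpy as np

# Stand-in data; in the notebook these come from the SMB diagnostics.
avg_smb_case_climo = np.array([120.0, 135.0, 150.0])
avg_smb_case_timeseries = np.array([110.0, 160.0, 140.0])
avg_smb_obs_timeseries = np.array([100.0, 155.0])
avg_smb_base_case_climo = np.array([90.0, 170.0, 130.0])
avg_smb_base_timeseries = np.array([95.0, 165.0])

base_case_name = "some_base_case"  # set to None when there is no base case

# Pool whatever data is actually available, then pad the plot limits by 50.
# np.concatenate also works when the arrays have different lengths, which
# np.array([...]) on a ragged list does not.
all_data = np.concatenate(
    [avg_smb_case_climo, avg_smb_case_timeseries, avg_smb_obs_timeseries]
)
if base_case_name:
    all_data = np.concatenate(
        [all_data, avg_smb_base_case_climo, avg_smb_base_timeseries]
    )

ymin = all_data.min() - 50.0
ymax = all_data.max() + 50.0
print(ymin, ymax)  # 40.0 220.0
```

With base_case_name set to None, the base-case arrays are simply skipped, avoiding the undefined-variable problem the review points out.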

Comment on lines +643 to +645
   "display_name": "NPL 2025a",
   "language": "python",
-  "name": "cupid-analysis"
+  "name": "npl-2025a"
mnlevy1981 commented

I'll probably revert these metadata changes before committing :)

   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
-  "version": "3.11.4"
+  "version": "3.12.8"
mnlevy1981 commented

I'll fix this one, too -- it may end up getting updated, since we're now pinning 3.13.11 in (cupid-analysis)
