
add qmd doctester#650

Merged
d-chambers merged 14 commits into dev from qmd_doctests
Apr 6, 2026
Conversation

@d-chambers
Contributor

@d-chambers d-chambers commented Apr 4, 2026

Description

This PR adds generated pytest coverage for executable Python examples embedded
in hand-written Quarto docs.

Motivation: if a code change breaks one of the executed examples, it can take a long time for the doc build to surface the failure. This PR should help us catch such breakage earlier.

It introduces a script, scripts/generate_autogenerated_doccode_tests.py,
that scans docs/**/*.qmd, extracts executable {python} chunks, and mirrors
them into autogenerated tests under tests/test_autogenerated_doccode. The
generated tests execute each doc chunk in order through a shared runtime so we
can validate documentation examples without building the full docs site.
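As a rough illustration of what "extracts executable {python} chunks" means, here is a minimal standalone sketch; the function name and details are hypothetical, and the real script additionally handles front matter and in-chunk options:

```python
# Illustrative sketch only; the real logic lives in
# scripts/generate_autogenerated_doccode_tests.py and differs in detail.
FENCE = "`" * 3  # the literal ``` fence marker, built programmatically


def extract_python_chunks(qmd_text: str) -> list[tuple[int, str]]:
    """Return (start_line, source) pairs for executable {python} chunks."""
    chunks: list[tuple[int, str]] = []
    chunk_lines: list[str] = []
    chunk_start = 0
    in_chunk = False
    for lineno, line in enumerate(qmd_text.splitlines(), start=1):
        stripped = line.strip()
        if not in_chunk and stripped.startswith(FENCE + "{python"):
            in_chunk = True
            chunk_lines = []
            chunk_start = lineno + 1  # code begins on the next line
        elif in_chunk and stripped == FENCE:
            in_chunk = False
            chunks.append((chunk_start, "\n".join(chunk_lines) + "\n"))
        elif in_chunk:
            chunk_lines.append(line)
    return chunks


qmd_text = "\n".join(
    ["Some prose.", FENCE + "{python}", "x = 1 + 1", "print(x)", FENCE]
)
print(extract_python_chunks(qmd_text))  # [(3, 'x = 1 + 1\nprint(x)\n')]
```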

This PR also wires those generated tests into CI, documents the local workflow
for regenerating and running them, and marks the generated directory as
gitignored so the generated files are treated as build artifacts rather than
committed sources.

Checklist

I have (if applicable):

  • referenced the GitHub issue this PR closes.
  • documented the new feature with docstrings and/or appropriate doc page.
  • included tests. See [testing guidelines](https://dascore.org/contributing/testing.html).
  • added the "ready_for_review" tag once the PR is ready to be reviewed.

Summary by CodeRabbit

  • New Features

    • Automated testing of executable code examples in documentation files.
  • Documentation

    • Added testing workflow guide for running documentation code examples locally.
  • Chores

    • Updated CI/CD pipeline to trigger tests when documentation changes.
    • Updated project ignore list for auto-generated test artifacts.

@coderabbitai
Contributor

coderabbitai bot commented Apr 4, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

📝 Walkthrough


This change introduces automated testing for executable Python code blocks within Quarto documentation files. A new script generates pytest modules from QMD files under docs/, extracts executable {python} blocks, and creates tests that run them with proper isolation. The workflow, configuration, and test utilities support this generation and execution pipeline.

Changes

Cohort / File(s) — Summary

Workflow & Ignores (.github/workflows/runtests.yml, .gitignore): updated the workflow to trigger on docs changes and added a step to generate qmd doc tests; ignored the generated test directory to prevent tracking.

Configuration (pyproject.toml): added a new docs_examples pytest marker for autogenerated documentation tests.

Documentation (docs/contributing/testing.qmd): added workflow documentation explaining how to run generated tests from executable qmd examples, noting the gitignored generated test directory.

Implementation (scripts/generate_doc_code_tests.py): new script that discovers qmd files, extracts executable Python blocks while respecting front matter and in-chunk options, and generates pytest modules under tests/test_autogenerated_doccode/ with a shared conftest for matplotlib configuration and working-directory management.

Tests (scripts/test_generate_doc_code_tests.py): new test module validating extraction rules, output-path mapping, module rendering, and regeneration semantics of the doc-code test generation logic.
🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 inconclusive)

Title check — ❓ Inconclusive. The title "add qmd doctester" is vague and uses a non-descriptive abbreviation that does not clearly convey what the pull request accomplishes. Resolution: consider a more descriptive title such as "Add generated pytest tests for executable Quarto documentation examples" or "Add autogenerated tests for qmd code examples".
✅ Passed checks (2 passed)
Docstring Coverage — ✅ Passed. Docstring coverage is 91.67%, which meets the required threshold of 80.00%.
Description check — ✅ Passed. The PR description clearly explains the motivation, implementation approach, and scope, though the checklist item referencing a GitHub issue is incomplete.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


@coderabbitai coderabbitai bot added documentation Improvements or additions to documentation CI continuous integration labels Apr 4, 2026

@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: d96d3c38e9

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".

inner = cleaned[1:-1].strip()
if not inner:
    return False
language = inner.split(",", 1)[0].strip()


P2 Badge Recognize python fences with space-delimited options

Update the fence parser to handle headers like {python filename="..."} in addition to comma-delimited options. Right now language = inner.split(",", 1)[0] treats the language token as python filename="...", so executable chunks are silently skipped and never mirrored into tests; this already drops multiple runnable blocks in docs/contributing/new_format.qmd, reducing the coverage this feature is intended to provide.


Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🧹 Nitpick comments (2)
docs/contributing/testing.qmd (1)

48-50: Consider formatting commands as code blocks for consistency.

The inline commands are functional but would be more readable and consistent with the rest of the document if formatted as separate code blocks like other command examples.

📝 Suggested formatting
-To validate executable examples in the hand-written documentation without building the full site, generate the mirrored tests with `python scripts/generate_autogenerated_doccode_tests.py` and then run `pytest tests/test_autogenerated_doccode`.
-The `tests/test_autogenerated_doccode` directory is intentionally gitignored and should be regenerated locally rather than committed.
+To validate executable examples in the hand-written documentation without building the full site, first generate the mirrored tests:
+
+<!--pytest-codeblocks:skip-->
+```bash
+python scripts/generate_autogenerated_doccode_tests.py
+```
+
+Then run the generated tests:
+
+<!--pytest-codeblocks:skip-->
+```bash
+pytest tests/test_autogenerated_doccode
+```
+
+The `tests/test_autogenerated_doccode` directory is intentionally gitignored and should be regenerated locally rather than committed.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/contributing/testing.qmd` around lines 48 - 50, The documentation uses
inline backticks for shell commands; update docs/contributing/testing.qmd to
present the commands as fenced code blocks for consistency with other examples
by replacing inline `python scripts/generate_autogenerated_doccode_tests.py` and
`pytest tests/test_autogenerated_doccode` with separate bash code blocks (e.g.,
triple-backtick fences) and add the brief explanatory line "Then run the
generated tests:" before the pytest block; ensure the script name
generate_autogenerated_doccode_tests.py and the test path
tests/test_autogenerated_doccode remain unchanged.
scripts/generate_autogenerated_doccode_tests.py (1)

236-260: Consider preserving error context if rmtree fails.

While unlikely, if shutil.rmtree fails mid-operation (e.g., permission issues), the error will propagate with minimal context. This is acceptable for a build script, but you could add a brief log message if desired.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/generate_autogenerated_doccode_tests.py` around lines 236 - 260, The
call to shutil.rmtree in write_test_tree can raise exceptions with little
context; wrap the rmtree(tests_path) call in a try/except around the start of
write_test_tree, catch Exception as e, log or print a short descriptive message
that includes tests_path and the exception (e) to preserve context, then
re-raise the exception so behavior doesn't silently change; ensure references to
write_test_tree, tests_path, and shutil.rmtree are used so the change is easy to
locate.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@scripts/generate_autogenerated_doccode_tests.py`:
- Line 236: The function signature for write_test_tree is too long; split the
signature across multiple lines to keep each line ≤88 chars. For example, break
after the opening paren and put qmd_files: list[QmdFile], tests_path: Path =
TESTS_PATH, on their own indented lines and keep the return annotation ->
list[Path]: on the final line. Ensure QmdFile, Path, and TESTS_PATH identifiers
remain unchanged.

In `@scripts/test_generate_autogenerated_doccode_tests.py`:
- Line 124: The qmd_file assignment for QmdFile(...) is too long; split its
arguments across multiple lines to satisfy the 88-char linter limit. For
example, create the Chunk instance on its own line or build chunks as a separate
variable (e.g., chunks = (Chunk(start_line=1, source="print(1)\n"),) ) and then
call qmd_file = QmdFile(path=source, chunks=chunks); reference QmdFile, Chunk
and the qmd_file assignment when making the change.

---

Nitpick comments:
In `@docs/contributing/testing.qmd`:
- Around line 48-50: The documentation uses inline backticks for shell commands;
update docs/contributing/testing.qmd to present the commands as fenced code
blocks for consistency with other examples by replacing inline `python
scripts/generate_autogenerated_doccode_tests.py` and `pytest
tests/test_autogenerated_doccode` with separate bash code blocks (e.g.,
triple-backtick fences) and add the brief explanatory line "Then run the
generated tests:" before the pytest block; ensure the script name
generate_autogenerated_doccode_tests.py and the test path
tests/test_autogenerated_doccode remain unchanged.

In `@scripts/generate_autogenerated_doccode_tests.py`:
- Around line 236-260: The call to shutil.rmtree in write_test_tree can raise
exceptions with little context; wrap the rmtree(tests_path) call in a try/except
around the start of write_test_tree, catch Exception as e, log or print a short
descriptive message that includes tests_path and the exception (e) to preserve
context, then re-raise the exception so behavior doesn't silently change; ensure
references to write_test_tree, tests_path, and shutil.rmtree are used so the
change is easy to locate.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 4541e692-1cf2-47f0-89c4-40fd7dfcf93f

📥 Commits

Reviewing files that changed from the base of the PR and between 35d6b82 and d96d3c3.

📒 Files selected for processing (6)
  • .github/workflows/runtests.yml
  • .gitignore
  • docs/contributing/testing.qmd
  • pyproject.toml
  • scripts/generate_autogenerated_doccode_tests.py
  • scripts/test_generate_autogenerated_doccode_tests.py

@codecov

codecov bot commented Apr 4, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 100.00%. Comparing base (7224df8) to head (a62a9b1).
⚠️ Report is 1 commit behind head on dev.

Additional details and impacted files
@@            Coverage Diff            @@
##               dev      #650   +/-   ##
=========================================
  Coverage   100.00%   100.00%           
=========================================
  Files          137       137           
  Lines        12711     12711           
=========================================
  Hits         12711     12711           
Flag: unittests — coverage 100.00% <ø> (ø)

Flags with carried forward coverage won't be shown. Click here to find out more.

☔ View full report in Codecov by Sentry.

@github-actions

github-actions bot commented Apr 4, 2026

✅ Documentation built:
👉 Download
Note: You must be logged in to github and a DASDAE member to access the link.

Contributor

@coderabbitai coderabbitai bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Actionable comments posted: 2

🧹 Nitpick comments (1)
scripts/generate_autogenerated_doccode_tests.py (1)

173-173: Use explicit UTF-8 for text IO in generator outputs.

Using default locale encoding can make generation non-deterministic across environments.

🔧 Proposed fix
-    lines = path.read_text().splitlines()
+    lines = path.read_text(encoding="utf-8").splitlines()
...
-    (tests_path / "__init__.py").write_text("")
+    (tests_path / "__init__.py").write_text("", encoding="utf-8")
...
-    runtime_path.write_text(RUNTIME_MODULE)
+    runtime_path.write_text(RUNTIME_MODULE, encoding="utf-8")
...
-                init_path.write_text("")
+                init_path.write_text("", encoding="utf-8")
...
-        output_path.write_text(render_test_module(qmd_file.path, qmd_file.chunks))
+        output_path.write_text(
+            render_test_module(qmd_file.path, qmd_file.chunks), encoding="utf-8"
+        )

Also applies to: 249-251, 263-266

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/generate_autogenerated_doccode_tests.py` at line 173, Replace any use
of path.read_text() and path.write_text(...) or open() without explicit encoding
in the generator with explicit UTF-8 encoding to ensure deterministic I/O:
change the read at "lines = path.read_text().splitlines()" to use
read_text(encoding="utf-8") (or open(..., encoding="utf-8") if preferred), and
likewise ensure any file writes (the blocks around the occurrences noted at the
other locations) call write_text(..., encoding="utf-8") or open(..., "w",
encoding="utf-8"); update all occurrences referenced (the read at the 'lines'
assignment and the writes in the sections corresponding to the other three
ranges) so all input/output uses UTF-8 explicitly.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@scripts/generate_autogenerated_doccode_tests.py`:
- Around line 184-191: The extracted code chunk start is off-by-one when front
matter exists because you compute enumerate(body, start=body_start_line + 1) and
then set chunk_start = offset + 1; change chunk_start to use offset (i.e.,
chunk_start = offset) so the recorded starting line of the fenced code matches
the actual file line numbers; update the logic inside the loop where in_chunk is
set (referencing body_start_line, in_chunk, _is_python_fence, chunk_lines,
chunk_start and the enumerate over body) to remove the extra +1 adjustment.
- Around line 143-152: The _is_python_fence function only checks the language
and ignores inline fence header options like eval= or execute=, so update the
parsing logic in _is_python_fence (and similarly wherever fence headers are
parsed around the same area) to extract and interpret comma-separated options
inside the braces (e.g., "{python, eval=false}" or "{.python eval=false}"):
after identifying language == "python", parse the remaining tokens/key=val pairs
for keys "eval" and "execute" (accept true/false, yes/no, 1/0) and if either
explicitly disables execution return False (or surface the parsed option so
_chunk_is_executable can respect it); ensure compatibility with existing
body-directive checks in _chunk_is_executable so inline header flags
override/affect execution as intended.

---

Nitpick comments:
In `@scripts/generate_autogenerated_doccode_tests.py`:
- Line 173: Replace any use of path.read_text() and path.write_text(...) or
open() without explicit encoding in the generator with explicit UTF-8 encoding
to ensure deterministic I/O: change the read at "lines =
path.read_text().splitlines()" to use read_text(encoding="utf-8") (or open(...,
encoding="utf-8") if preferred), and likewise ensure any file writes (the blocks
around the occurrences noted at the other locations) call write_text(...,
encoding="utf-8") or open(..., "w", encoding="utf-8"); update all occurrences
referenced (the read at the 'lines' assignment and the writes in the sections
corresponding to the other three ranges) so all input/output uses UTF-8
explicitly.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 05980ff4-7248-4dcb-9b8b-cdc31c885ca6

📥 Commits

Reviewing files that changed from the base of the PR and between d96d3c3 and 688354b.

📒 Files selected for processing (3)
  • docs/contributing/testing.qmd
  • scripts/generate_autogenerated_doccode_tests.py
  • scripts/test_generate_autogenerated_doccode_tests.py
✅ Files skipped from review due to trivial changes (2)
  • docs/contributing/testing.qmd
  • scripts/test_generate_autogenerated_doccode_tests.py

Comment on lines +184 to +191
body_start_line = len(front_matter) + 1 if front_matter else 0
for offset, line in enumerate(body, start=body_start_line + 1):
    stripped = line.strip()
    if not in_chunk and stripped.startswith("```{"):
        in_chunk = True
        should_capture = _is_python_fence(stripped[3:])
        chunk_lines = []
        chunk_start = offset + 1
Contributor


⚠️ Potential issue | 🟡 Minor

Fix front-matter line-number offset in extracted chunks.

When front matter exists, chunk_start is currently shifted by one line. This makes traceback line mapping inaccurate for generated tests.

🔧 Proposed fix
-    body_start_line = len(front_matter) + 1 if front_matter else 0
-    for offset, line in enumerate(body, start=body_start_line + 1):
+    body_start_line = len(lines) - len(body)
+    for offset, line in enumerate(body, start=body_start_line + 1):
         stripped = line.strip()
         if not in_chunk and stripped.startswith("```{"):
             in_chunk = True
             should_capture = _is_python_fence(stripped[3:])
             chunk_lines = []
             chunk_start = offset + 1
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/generate_autogenerated_doccode_tests.py` around lines 184 - 191, The
extracted code chunk start is off-by-one when front matter exists because you
compute enumerate(body, start=body_start_line + 1) and then set chunk_start =
offset + 1; change chunk_start to use offset (i.e., chunk_start = offset) so the
recorded starting line of the fenced code matches the actual file line numbers;
update the logic inside the loop where in_chunk is set (referencing
body_start_line, in_chunk, _is_python_fence, chunk_lines, chunk_start and the
enumerate over body) to remove the extra +1 adjustment.
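The invariant both the proposed fix and the prompt rely on can be demonstrated with a tiny standalone sketch (data hypothetical): however the front matter was split off, the number of lines consumed before the body is `len(lines) - len(body)`.

```python
# Tiny demonstration of the line-mapping invariant behind the fix: the
# first body line sits at file line len(lines) - len(body) + 1, with no
# extra off-by-one adjustment needed.
lines = [
    "---",          # file line 1: front matter opens
    "title: demo",  # file line 2
    "---",          # file line 3: front matter closes
    "prose",        # file line 4: first body line
    "code",         # file line 5
]
body = lines[3:]  # front matter stripped off

body_start_line = len(lines) - len(body)  # lines consumed before the body
first_body_file_line = body_start_line + 1
print(first_body_file_line)  # 4
```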

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (2)
scripts/generate_autogenerated_doccode_tests.py (2)

22-22: Plumb the configurable roots all the way through.

tests_path is configurable, but generated modules still hardcode from tests.test_autogenerated_doccode._runtime import run_qmd_chunks, and get_output_path() still anchors to DOCS_PATH. That means these helpers only produce runnable output for the repo’s default layout, which makes the override API misleading.

Also applies to: 217-223, 237-240

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/generate_autogenerated_doccode_tests.py` at line 22, Generated test
modules and helper imports currently hardcode the repo layout (e.g., "from
tests.test_autogenerated_doccode._runtime import run_qmd_chunks") and
get_output_path() is anchored to DOCS_PATH, so overriding tests_path has no
effect; update the generator to plumb the configurable tests_path and base path
through into the generated module imports and into get_output_path() so they use
the configured tests_path/base_dir instead of hardcoded values—specifically,
replace hardcoded import emission with an import resolved from the generator's
tests_path variable (affecting generation sites around the import at
run_qmd_chunks) and change get_output_path() to compute its root from the
passed-in/configured base (not DOCS_PATH) so produced helpers work with
non-default layouts.

95-102: Consider handling inline YAML comments in execution flags.

_parse_bool() does not handle inline comments after boolean values. While a search found no instances of this pattern in the current documentation (e.g., execute: false # docs-only), the parser would incorrectly return None instead of False if such flags were added. The function currently only recognizes bare true/false values.

Suggested fix
 def _parse_bool(value: str) -> bool | None:
     """Parse a yaml-like bool."""
-    cleaned = value.strip().strip("'\"").lower()
+    cleaned = value.split("#", 1)[0].strip().strip("'\"").lower()
     if cleaned == "true":
         return True
     if cleaned == "false":
         return False
     return None
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/generate_autogenerated_doccode_tests.py` around lines 95 - 102, The
_parse_bool function fails on inline YAML comments like "false # docs-only";
update _parse_bool to strip inline comments before trimming and lowercasing by
splitting the input on the first '#' (e.g., value.split('#', 1)[0]) and then
applying the existing strip().strip("'\"").lower() logic so values like "false #
comment" correctly return False; keep the function name _parse_bool and existing
return behavior (True/False/None).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@scripts/generate_autogenerated_doccode_tests.py`:
- Around line 196-198: The extracted chunk bodies are being joined and then
.strip()ed which doesn't remove common indentation from all lines, causing
IndentationError for indented fenced code; replace the join+strip logic used
before creating Chunk(start_line=chunk_start, source=...) with
textwrap.dedent("\n".join(chunk_lines)) (and ensure a trailing "\n" is
preserved), and add an import for textwrap at the top of the module; update the
call site where chunks.append(Chunk(...)) (referencing chunk_lines and
chunk_start) to use the dedented source instead of .strip().

---

Nitpick comments:
In `@scripts/generate_autogenerated_doccode_tests.py`:
- Line 22: Generated test modules and helper imports currently hardcode the repo
layout (e.g., "from tests.test_autogenerated_doccode._runtime import
run_qmd_chunks") and get_output_path() is anchored to DOCS_PATH, so overriding
tests_path has no effect; update the generator to plumb the configurable
tests_path and base path through into the generated module imports and into
get_output_path() so they use the configured tests_path/base_dir instead of
hardcoded values—specifically, replace hardcoded import emission with an import
resolved from the generator's tests_path variable (affecting generation sites
around the import at run_qmd_chunks) and change get_output_path() to compute its
root from the passed-in/configured base (not DOCS_PATH) so produced helpers work
with non-default layouts.
- Around line 95-102: The _parse_bool function fails on inline YAML comments
like "false # docs-only"; update _parse_bool to strip inline comments before
trimming and lowercasing by splitting the input on the first '#' (e.g.,
value.split('#', 1)[0]) and then applying the existing
strip().strip("'\"").lower() logic so values like "false # comment" correctly
return False; keep the function name _parse_bool and existing return behavior
(True/False/None).
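The dedent point above can be demonstrated standalone (chunk content hypothetical):

```python
import textwrap

# Why textwrap.dedent beats str.strip here: strip() only trims the ends
# of the joined string, so later lines of a uniformly indented chunk keep
# their leading spaces and the chunk fails to compile.
chunk_lines = ["    x = 1", "    y = 2"]

broken = "\n".join(chunk_lines).strip()  # 'x = 1\n    y = 2'
try:
    compile(broken, "<chunk>", "exec")
except IndentationError:
    print("strip() version fails to compile")

fixed = textwrap.dedent("\n".join(chunk_lines)) + "\n"  # trailing \n kept
compile(fixed, "<chunk>", "exec")
print("dedent version compiles")
```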

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 8575118c-1d5d-4d67-9cf7-7c1542215562

📥 Commits

Reviewing files that changed from the base of the PR and between 2fd8779 and a39add1.

📒 Files selected for processing (2)
  • .github/workflows/runtests.yml
  • scripts/generate_autogenerated_doccode_tests.py
🚧 Files skipped from review as they are similar to previous changes (1)
  • .github/workflows/runtests.yml

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🧹 Nitpick comments (1)
scripts/generate_doc_code_tests.py (1)

289-303: Consider adding error context for exec failures.

When a chunk fails during test execution, the traceback will show the compile() filename as "docs/tutorial/example.qmd:12", which is helpful. However, for long documents with many chunks, users might benefit from knowing which chunk (by index) failed.

This is optional since the line number already provides sufficient context.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/generate_doc_code_tests.py` around lines 289 - 303, The generated
inline test in _render_chunk_source should attach chunk identity to exec
failures; modify the emitted chunk_block to either include the chunk index in
the compile filename (e.g.
"{source_path}:{chunk.start_line}:chunk={chunk.index}") or wrap the exec(...) in
a try/except that catches Exception and re-raises (or raises a new RuntimeError)
with additional context including chunk.start_line and a chunk index/identifier;
update _render_chunk_source to reference chunk.index (or add an identifier
field) and ensure the final indented string still uses indent(..., " " * 8).
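The traceback mapping the comment calls sufficient can be sketched in isolation; the path and source below are illustrative, not taken from the repo:

```python
import traceback

# Compiling a chunk with an explicit filename makes any failure's
# traceback point back at the originating doc, so readers can jump
# straight to the broken example.
source = "x = 1\nraise ValueError('boom')\n"
code = compile(source, "docs/tutorial/example.qmd", "exec")
try:
    exec(code, {})
except ValueError:
    tb = traceback.format_exc()
print("docs/tutorial/example.qmd" in tb)  # True
print("line 2" in tb)                     # True
```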
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@scripts/generate_doc_code_tests.py`:
- Around line 77-88: The module currently patches matplotlib globally
(matplotlib.use("Agg", force=True); plt.ioff(); plt.show = ...; Figure.show =
...) which affects all tests; instead move these changes into the
qmd_test_context context manager so the backend and show-noop are applied only
while that context is active. Modify qmd_test_context to call
matplotlib.use("Agg", force=True), set plt.ioff(), and monkeypatch plt.show and
Figure.show to no-op on entry, and restore the original plt.show and Figure.show
and any backend state on exit; ensure symbols referenced are plt.show,
matplotlib.use, plt.ioff, and Figure.show so other tests are not impacted.

In `@scripts/test_generate_doc_code_tests.py`:
- Around line 30-49: The test test_skips_plain_python_fences encodes an
off-by-one expectation for the extracted chunk start line; update the assertion
in this test to expect Chunk(start_line=6, ...) instead of 5 so it matches the
actual line number produced by extract_qmd_file (and the real source line for
"import dascore as dc"), referencing the Chunk class and extract_qmd_file helper
used in the test.

---

Nitpick comments:
In `@scripts/generate_doc_code_tests.py`:
- Around line 289-303: The generated inline test in _render_chunk_source should
attach chunk identity to exec failures; modify the emitted chunk_block to either
include the chunk index in the compile filename (e.g.
"{source_path}:{chunk.start_line}:chunk={chunk.index}") or wrap the exec(...) in
a try/except that catches Exception and re-raises (or raises a new RuntimeError)
with additional context including chunk.start_line and a chunk index/identifier;
update _render_chunk_source to reference chunk.index (or add an identifier
field) and ensure the final indented string still uses indent(..., " " * 8).
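The patch-and-restore shape the matplotlib comment asks `qmd_test_context` to follow can be sketched generically, without importing matplotlib; `Plt` below is a stand-in for `matplotlib.pyplot`:

```python
import contextlib

# Generic patch-and-restore pattern: apply display-suppressing patches
# on entry and restore the originals on exit, so other tests that share
# the process are unaffected.


@contextlib.contextmanager
def patched(obj, name, replacement):
    original = getattr(obj, name)
    setattr(obj, name, replacement)
    try:
        yield
    finally:
        setattr(obj, name, original)  # restored even if the body raises


class Plt:  # stand-in for matplotlib.pyplot
    @staticmethod
    def show():
        return "window opened"


with patched(Plt, "show", staticmethod(lambda: None)):
    print(Plt.show())  # None -> no-op while the context is active
print(Plt.show())      # window opened -> original behavior restored
```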

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: cd3646b5-a0cc-43b3-a213-56bde6d05cd9

📥 Commits

Reviewing files that changed from the base of the PR and between a39add1 and f5a2ffc.

📒 Files selected for processing (4)
  • .github/workflows/runtests.yml
  • docs/contributing/testing.qmd
  • scripts/generate_doc_code_tests.py
  • scripts/test_generate_doc_code_tests.py
✅ Files skipped from review due to trivial changes (1)
  • docs/contributing/testing.qmd
🚧 Files skipped from review as they are similar to previous changes (1)
  • .github/workflows/runtests.yml

@d-chambers d-chambers merged commit 3538f1b into dev Apr 6, 2026
26 of 32 checks passed
@d-chambers d-chambers deleted the qmd_doctests branch April 6, 2026 10:50

Labels

CI (continuous integration), documentation (improvements or additions to documentation)
