Conversation
Co-authored-by: HappyOnigiri <nodemeld+happy-onigiri@gmail.com>
No actionable comments were generated in the recent review. 🎉
ℹ️ Recent review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID:
📒 Files selected for processing (2)
🚧 Files skipped from review as they are similar to previous changes (1)
📝 Walkthrough
The CI workflow is changed, adding a Python-based CI runner and multiple repository maintenance scripts. Type-checker/linter configuration and dependencies are introduced, along with major refactors in …
Changes
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
Possibly related PRs
Suggested labels
🚥 Pre-merge checks | ✅ 1 | ❌ 2
❌ Failed checks (2 warnings)
✅ Passed checks (1 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
✨ Finishing Touches
Actionable comments posted: 7
🧹 Nitpick comments (1)
requirements.txt (1)
4-6: Unpinned CI tools reduce reproducibility.
Adding mypy/ruff/types-PyYAML without pinning means CI results for the same commit can change depending on release timing. Since this PR is aimed at strengthening CI, at minimum the newly added packages should be version-pinned or locked via a constraints.txt. 🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@requirements.txt` around lines 4 - 6, The CI tools listed (mypy, ruff, types-PyYAML) are unpinned and must be locked to ensure reproducible runs; update the requirements input by either pinning each package (e.g., replace entries for mypy, ruff, types-PyYAML with concrete versions using ==) or add a constraints.txt that pins exact versions and install these packages with pip -c constraints.txt; ensure CI job(s) that install dependencies use the constraints file or the pinned requirements so mypy / ruff / types-PyYAML are fixed across runs.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@mypy.ini`:
- Line 5: mypy.ini currently sets ignore_missing_imports = True globally, which weakens type checking. Remove the global setting (or set it back to False) and instead override only the dependencies that lack type stubs (e.g. add a [mypy-yaml] or [mypy-PyYAML] section with ignore_missing_imports = True). Concretely, drop the global ignore_missing_imports from mypy.ini and add [mypy-<module_name>] sections named after the stub-less libraries (such as the one covered by types-PyYAML) so only the required modules are ignored.
In `@scripts/check_non_ascii.py`:
- Around line 40-47: The loop uses rel_path.read_text(encoding="utf-8") which
can raise on a single unreadable file and abort the whole check; change the
per-file read to use rel_path.read_bytes() and then decode or try/except
UnicodeDecodeError around that decode so failures are caught per file and
reported (append an explicit violation like f"{rel_path}: decode error: {e}" to
the violations list). Update the block that iterates _tracked_files() (and the
violations append logic) to handle exceptions per file instead of letting
read_text propagate.
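The suggested per-file guard could be sketched as follows. This is a minimal illustration: `check_files` and the trailing non-ASCII scan are hypothetical stand-ins for the script's real loop over `_tracked_files()`.

```python
from pathlib import Path

def check_files(paths: list[Path]) -> list[str]:
    """Collect per-file violations instead of aborting on the first bad file."""
    violations: list[str] = []
    for rel_path in paths:
        raw = rel_path.read_bytes()
        try:
            text = raw.decode("utf-8")
        except UnicodeDecodeError as e:
            # Report the decode failure as a violation and keep scanning.
            violations.append(f"{rel_path}: decode error: {e}")
            continue
        # Placeholder for the script's real non-ASCII check on `text`.
        for ch in text:
            if ord(ch) > 127:
                violations.append(f"{rel_path}: non-ASCII character {ch!r}")
                break
    return violations
```

With this shape, one unreadable file adds a single violation entry and the rest of the repository is still checked.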
In `@scripts/ci.py`:
- Around line 16-27: The TASKS list currently builds command strings that
neither set PYTHONPATH nor safely handle sys.executable on Windows; update TASKS
entries (notably the "Python-Tests" entry) to ensure PYTHONPATH=src is set for
test runs (or otherwise inject the env var when invoking commands) and change
run_task() to avoid shell=True with raw concatenated strings—pass commands as
lists or properly quote/escape sys.executable (use shlex.quote or build argv
lists) so paths with spaces (e.g., "C:\Program Files\...") work; locate and
modify TASKS, run_task(), and the test invocation for tests/test_auto_fixer.py
to ensure imports of top-level modules succeed by including PYTHONPATH=src in
the environment used to run pytest.
- Around line 50-79: The run_task function can fail on Windows when commands
include sys.executable and are run with shell=True because paths with spaces
need quoting; update run_task to avoid shell=True for such commands: detect when
the provided command string contains sys.executable (or prefer to always)
convert the command into an argv list (e.g., via shlex.split on POSIX or a safe
split for Windows) and call subprocess.Popen/communicate with shell=False and
the argv list, or alternatively ensure any sys.executable insertion is wrapped
using shlex.quote() before calling with shell=True; references: function
run_task and the TASKS string commands that include f"{sys.executable} -m ...".
Ensure stdout/stderr/stdin behavior and creationflags handling remain the same.
In `@src/pr_reviewer.py`:
- Around line 169-177: The function fetch_review_threads lacks defensive JSON
parsing; wrap the json.loads(result.stdout) call in a try/except that catches
json.JSONDecodeError and returns {} (mirroring
fetch_pr_reviews/fetch_pr_review_comments behavior), so if GH's graphql output
is malformed or contains null fields the function won't raise and will instead
log/return an empty dict; update fetch_review_threads to perform this guarded
parse and keep the existing non-zero returncode handling unchanged.
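The guarded parse the review asks for can be factored as a tiny helper. A sketch only: `parse_gh_json` is a hypothetical name, not a function from the codebase.

```python
import json
from typing import Any

def parse_gh_json(stdout: str) -> dict[str, Any]:
    """Parse gh output defensively: malformed or null JSON yields {} instead of raising."""
    try:
        data = json.loads(stdout)
    except json.JSONDecodeError:
        return {}
    # gh's graphql output can contain null or non-object results; normalise those too.
    return data if isinstance(data, dict) else {}
```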
In `@src/state_manager.py`:
- Around line 142-146: The current construction of footer using
accumulated_archived can itself push the assembled state comment past
STATE_COMMENT_MAX_LENGTH even when trimmed_entries is empty; update the state
comment assembly logic (the code that builds footer and returns early when not
trimmed_entries) to enforce the same size budget for the footer: compute
remaining_bytes = STATE_COMMENT_MAX_LENGTH - len(body_so_far) and if the
serialized footer (f"<!-- archived-ids: ... -->") would exceed remaining_bytes,
either trim accumulated_archived (drop oldest IDs or keep only the most recent
ones) to fit or move the excess IDs to an alternate store and append a marker
token in the footer; ensure this check is applied right before returning the
final comment where variables footer, accumulated_archived, trimmed_entries and
STATE_COMMENT_MAX_LENGTH are used.
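A minimal sketch of the footer-budget idea. Assumptions: the `<!-- archived-ids: ... -->` marker format from the review, a drop-oldest-first trim policy, and an illustrative value for STATE_COMMENT_MAX_LENGTH (the real constant lives in state_manager.py).

```python
STATE_COMMENT_MAX_LENGTH = 65536  # assumed limit for illustration

def build_footer(body: str, archived_ids: list[str],
                 max_len: int = STATE_COMMENT_MAX_LENGTH) -> str:
    """Drop the oldest archived IDs until the footer fits the remaining budget."""
    remaining = max_len - len(body)
    ids = list(archived_ids)
    while ids:
        footer = f"<!-- archived-ids: {','.join(ids)} -->"
        if len(footer) <= remaining:
            return footer
        ids.pop(0)  # oldest ID first
    return ""  # nothing fits within the budget
```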
In `@src/summarizer.py`:
- Around line 95-99: The subprocess.run call that invokes the claude CLI (the
call using arguments including "--model", model and f"Read the file
{prompt_path}...") lacks a timeout; add a timeout parameter (e.g. a configurable
constant) to that subprocess.run invocation and wrap the call in a try/except
that catches subprocess.TimeoutExpired; on timeout, ensure you terminate/cleanup
as needed, log a clear error including the timeout and command context, and
either re-raise or return a controlled failure value so callers can handle the
timeout instead of hanging indefinitely.
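A hedged sketch of the timeout handling described above. The env var name REFIX_SUMMARIZER_TIMEOUT_SEC and ClaudeCommandFailedError come from later notes in this conversation; the CLI invocation itself is a placeholder, not the real summarizer code.

```python
import os
import subprocess

class ClaudeCommandFailedError(RuntimeError):
    """Raised when the claude CLI fails or times out."""

def run_claude(cmd: list[str]) -> str:
    # Parse the timeout up front, so a bad env value fails before any
    # temp files or other resources are created.
    timeout_sec = int(os.environ.get("REFIX_SUMMARIZER_TIMEOUT_SEC", "300"))
    try:
        result = subprocess.run(cmd, capture_output=True, text=True,
                                timeout=timeout_sec)
    except subprocess.TimeoutExpired as exc:
        # subprocess.run already killed the child; surface a controlled failure.
        raise ClaudeCommandFailedError(
            f"command timed out after {timeout_sec}s: {exc.cmd}") from exc
    return result.stdout
```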
---
Nitpick comments:
In `@requirements.txt`:
- Around line 4-6: The CI tools listed (mypy, ruff, types-PyYAML) are unpinned
and must be locked to ensure reproducible runs; update the requirements input by
either pinning each package (e.g., replace entries for mypy, ruff, types-PyYAML
with concrete versions using ==) or add a constraints.txt that pins exact
versions and install these packages with pip -c constraints.txt; ensure CI
job(s) that install dependencies use the constraints file or the pinned
requirements so mypy / ruff / types-PyYAML are fixed across runs.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 79719e7f-8ae8-44a2-8326-d4bb438cd96e
📒 Files selected for processing (18)
.github/workflows/test.yml, .ruff.toml, Makefile, mypy.ini, pyrightconfig.json, requirements.txt, scripts/check_non_ascii.py, scripts/ci.py, scripts/fix_newlines.py, src/auto_fixer.py, src/github_pr_fetcher.py, src/pr_reviewer.py, src/state_manager.py, src/summarizer.py, tests/test_auto_fixer.py, tests/test_imports.py, tests/test_state_manager.py, tests/test_summarizer.py
Replace read_text() with read_bytes().decode() wrapped in try/except UnicodeDecodeError so an invalid UTF-8 file adds a violation entry instead of aborting the entire check.
Add try/except json.JSONDecodeError around json.loads() in fetch_review_threads(), consistent with fetch_pr_reviews() and fetch_pr_review_comments().
Add REFIX_SUMMARIZER_TIMEOUT_SEC env var (default 300s) as timeout for the claude CLI call, and handle subprocess.TimeoutExpired by raising ClaudeCommandFailedError to prevent infinite hangs.
Remove the global ignore_missing_imports = True which silently suppressed all missing-stub errors. Add a [mypy-yaml] per-module override as a safety net; yaml stubs are already provided by types-PyYAML in requirements.txt, and python-dotenv ships py.typed.
🤖 Refix Status: list of processed reviews (System Use Only)
Co-authored-by: HappyOnigiri <nodemeld+happy-onigiri@gmail.com>
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/pr_reviewer.py (1)
20-30: ⚠️ Potential issue | 🟠 Major
Only run_gh_command() crashes outright when JSON parsing fails. The json.loads() at Line 30 turns broken stdout straight into an exception, so fetch_pr_details() as a whole exits with an unhandled error. As with the other fetch functions, this should catch JSONDecodeError and either exit explicitly or fall back to a safe default. Suggested fix:
- return json.loads(result.stdout) if result.stdout else {}
+ try:
+     return json.loads(result.stdout) if result.stdout else {}
+ except json.JSONDecodeError as exc:
+     print(f"Error: failed to parse gh output: {exc}", file=sys.stderr)
+     sys.exit(1)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/pr_reviewer.py` around lines 20 - 30, run_gh_command currently calls json.loads(result.stdout) directly which will raise JSONDecodeError and crash fetch_pr_details; wrap the JSON parsing in a try/except that catches json.JSONDecodeError, print a clear error to stderr (include the broken stdout/stderr for context) and call sys.exit(1) or return a safe default like {} consistent with the other fetch_* functions; update the run_gh_command implementation (and adjust callers such as fetch_pr_details if needed) so JSON decode failures are handled rather than letting an unhandled exception propagate.
🧹 Nitpick comments (1)
Makefile (1)
4-4: Making $(PYTHON) an absolute path would make it easier to reuse. The current .venv/bin/python is a relative path, so if a future change moves toward cd src && $(PYTHON) ..., it would look for src/.venv/bin/python and break. Making it absolute here also lets the remaining python invocations be consolidated safely. Suggested diff:
-PYTHON := $(if $(wildcard .venv/bin/python),.venv/bin/python,$(shell command -v python3 || command -v python))
+PYTHON := $(if $(wildcard .venv/bin/python),$(abspath .venv/bin/python),$(shell command -v python3 || command -v python))
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Makefile` at line 4, The PYTHON Makefile variable currently picks up a relative .venv/bin/python which can break when running make from other dirs; update the PYTHON assignment so the .venv/bin/python candidate is converted to an absolute path (e.g. using Makefile's $(abspath ...) around .venv/bin/python) while keeping the existing shell fallback (command -v python3 || command -v python), so references to PYTHON are safe regardless of current working directory.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@scripts/check_non_ascii.py`:
- Around line 27-41: _tracked_files assumes CWD is repo root; change it to run
git against the repository root and return absolute paths anchored to that root.
Compute repo_root (e.g. from Path(__file__).resolve().parents[...] or a
passed-in argument), call git with ["git", "-C", str(repo_root), "ls-files",
"--full-name"] (preserving capture_output/text/encoding/check logic), raise on
non-zero return as before, and return [repo_root / line for line in
result.stdout.splitlines() if line and _is_target_file(line)] so _is_target_file
and ASCII_ENFORCED_FILES continue to match expected repo-root-relative paths.
In `@src/pr_reviewer.py`:
- Around line 14-17: The stdout-encoding fix should avoid accessing
sys.stdout.buffer at import time; move the replacement into main() or guard with
hasattr(sys.stdout, "buffer") before using sys.stdout.buffer, or call
sys.stdout.reconfigure(encoding="utf-8") on Python 3.7+; update the code that
currently runs at import (the block referencing sys.stdout.buffer) so it
executes safely at runtime. Also wrap the json.loads() call inside
run_gh_command() with a try/except that catches json.JSONDecodeError (matching
fetch_pr_reviews() and others) and handle it consistently (e.g., return a clear
error/None or raise a contextual exception) so malformed JSON from gh commands
won't propagate uncaught.
---
Outside diff comments:
In `@src/pr_reviewer.py`:
- Around line 20-30: run_gh_command currently calls json.loads(result.stdout)
directly which will raise JSONDecodeError and crash fetch_pr_details; wrap the
JSON parsing in a try/except that catches json.JSONDecodeError, print a clear
error to stderr (include the broken stdout/stderr for context) and call
sys.exit(1) or return a safe default like {} consistent with the other fetch_*
functions; update the run_gh_command implementation (and adjust callers such as
fetch_pr_details if needed) so JSON decode failures are handled rather than
letting an unhandled exception propagate.
---
Nitpick comments:
In `@Makefile`:
- Line 4: The PYTHON Makefile variable currently picks up a relative
.venv/bin/python which can break when running make from other dirs; update the
PYTHON assignment so the .venv/bin/python candidate is converted to an absolute
path (e.g. using Makefile's $(abspath ...) around .venv/bin/python) while
keeping the existing shell fallback (command -v python3 || command -v python),
so references to PYTHON are safe regardless of current working directory.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: a41f4b29-fbc5-47a5-88b8-583b9e295e6a
📒 Files selected for processing (5)
Makefile, mypy.ini, scripts/check_non_ascii.py, src/pr_reviewer.py, src/summarizer.py
🚧 Files skipped from review as they are similar to previous changes (2)
- mypy.ini
- src/summarizer.py
- summarizer.py: keep model param priority (main) + _timeout feature (HEAD) - auto_fixer.py: accept all new features from main (fetch_issue_comments, _is_coderabbit_login helper, coderabbit_auto_resume config vars, committed_prs tracking, claude_limit_reached guard, model= arg)
Accessing sys.stdout.buffer unconditionally raises AttributeError when sys.stdout is replaced with a stream that lacks a buffer attribute (e.g., io.StringIO in tests).
json.loads() in run_gh_command() was unguarded, causing
fetch_pr_details() to crash on malformed JSON. Now returns {} on
decode failure, consistent with other fetch functions.
Relative path breaks when make is invoked from a subdirectory. $(abspath ...) ensures the path is absolute, while keeping the existing shell fallback for when .venv is not present.
_tracked_files() assumed CWD was the repo root. When called from a subdirectory, git ls-files output was CWD-relative and _is_target_file() matching failed. Now uses git rev-parse to get the repo root and git -C <root> ls-files --full-name for repo-relative paths, then returns absolute paths for safe file access.
Co-authored-by: HappyOnigiri <nodemeld+happy-onigiri@gmail.com>
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/auto_fixer.py`:
- Line 1735: The pyright ignore hides a real type mismatch between
expand_repositories() and process_repo(): update types and remove the "pyright:
ignore[reportGeneralTypeIssues]" on process_repo so the checker can validate;
either change expand_repositories()'s return annotation to list[dict[str, str |
None]] so its output matches process_repo's expected dict[str, str | None], or
broaden process_repo's parameter annotation to dict[str, Any] (and remove the
pyright ignore) and add narrow runtime checks/casts inside process_repo to
handle values safely; ensure you update the function signatures for
expand_repositories and process_repo (and any callers) consistently and run type
checks to confirm the mismatch is resolved.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 0ea79a3c-37d3-416b-ac0a-06d1529c1136
📒 Files selected for processing (9)
Makefile, scripts/check_non_ascii.py, scripts/sync_ruler.py, src/auto_fixer.py, src/pr_reviewer.py, src/state_manager.py, src/summarizer.py, tests/test_auto_fixer.py, tests/test_state_manager.py
🚧 Files skipped from review as they are similar to previous changes (2)
- Makefile
- tests/test_state_manager.py
…ess_repo expand_repositories() returns list[dict[str, Any]], so the repo_info parameter of process_repo() should accept dict[str, Any] rather than the narrower dict[str, str | None]. Runtime validation (isinstance check) is already present. This removes the pyright: ignore suppression.
…ngle_pr Pyright reported "Code is too complex to analyze" on process_repo. Extract the per-PR processing logic (~830 lines) into a new helper function _process_single_pr, reducing process_repo's branch count so pyright can analyze it without hitting its complexity limit.
…_prs support Add coderabbit_resumed_prs parameter to _process_single_pr and pass it through the call site, preserving the global tracking introduced in main.
- Conflict 1: Keep REFIX_MERGED_LABEL_COLOR from main + multi-line FAILED_CI_CONCLUSIONS from HEAD - Conflict 2: Keep _process_single_pr() refactoring from HEAD, discard inlined loop body from main
Actionable comments posted: 1
♻️ Duplicate comments (1)
src/state_manager.py (1)
180-186: ⚠️ Potential issue | 🟠 Major
A code path remains where the archived-ids footer alone can exceed the size limit. After all trimmed_entries have been dropped, the or not trimmed_entries at Line 185 returns unconditionally, so with a large accumulated_archived the result is returned with body + footer still exceeding STATE_COMMENT_MAX_LENGTH. Over long-running operation the GitHub API call in upsert_state_comment() will fail, so the footer also needs to be trimmed to fit the remaining size before returning.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/state_manager.py` around lines 180 - 186, The code currently returns body + footer even when accumulated_archived is so large that the footer alone pushes the result past STATE_COMMENT_MAX_LENGTH; modify the logic in the function that builds footer (using accumulated_archived and footer) so that after trimming entries (trimmed_entries) you iteratively or deterministically truncate accumulated_archived (e.g., drop oldest IDs or limit the join) to fit the remaining allowed size: compute remaining = STATE_COMMENT_MAX_LENGTH - len(body) and build a footer from a sliced/summarized accumulated_archived such that len(body) + len(footer) <= STATE_COMMENT_MAX_LENGTH (or produce an empty footer if nothing fits) before returning; ensure the same code-path that calls upsert_state_comment() never receives an oversized string.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/state_manager.py`:
- Around line 216-217: The username validation regex
r"^[a-zA-Z0-9][a-zA-Z0-9-]{0,38}$" rejects GitHub bot logins like
"coderabbitai[bot]" causing _get_authenticated_github_user() to return None and
later load_state_comment() to raise a RuntimeError; update the validation in
state_manager.py so that the username check either accepts an optional "[bot]"
suffix (e.g. allow a trailing "\[bot\]" group) or stop enforcing this strict
pattern and return the API-provided username directly (adjust the code around
the username variable and the function that performs the check such as
_get_authenticated_github_user()) to ensure bot accounts are treated as valid.
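The relaxed validation might be sketched as below. Assumption: the original 39-character GitHub username rule from the review, plus an optional "[bot]" suffix; `is_valid_github_login` is a hypothetical helper name.

```python
import re

# Original pattern plus an optional "[bot]" suffix for bot-token environments.
_LOGIN_RE = re.compile(r"^[a-zA-Z0-9][a-zA-Z0-9-]{0,38}(\[bot\])?$")

def is_valid_github_login(login: str) -> bool:
    """Accept normal logins and machine accounts like coderabbitai[bot]."""
    return _LOGIN_RE.match(login) is not None
```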
---
Duplicate comments:
In `@src/state_manager.py`:
- Around line 180-186: The code currently returns body + footer even when
accumulated_archived is so large that the footer alone pushes the result past
STATE_COMMENT_MAX_LENGTH; modify the logic in the function that builds footer
(using accumulated_archived and footer) so that after trimming entries
(trimmed_entries) you iteratively or deterministically truncate
accumulated_archived (e.g., drop oldest IDs or limit the join) to fit the
remaining allowed size: compute remaining = STATE_COMMENT_MAX_LENGTH - len(body)
and build a footer from a sliced/summarized accumulated_archived such that
len(body) + len(footer) <= STATE_COMMENT_MAX_LENGTH (or produce an empty footer
if nothing fits) before returning; ensure the same code-path that calls
upsert_state_comment() never receives an oversized string.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 2b2fa8c5-a4ae-4aba-b805-82821fa7d76a
📒 Files selected for processing (4)
src/auto_fixer.py, src/state_manager.py, tests/test_auto_fixer.py, tests/test_state_manager.py
The pattern r"^[a-zA-Z0-9][a-zA-Z0-9-]{0,38}$" rejected bot accounts
like coderabbitai[bot] and github-actions[bot], causing
_get_authenticated_github_user() to return None and load_state_comment()
to raise a RuntimeError. Allow an optional [bot] suffix to support
bot-token environments.
When all entries were trimmed from the visible table, the footer could still push body + footer past the limit if accumulated_archived was large. The previous code returned unconditionally on not trimmed_entries. Now, when no entries remain and the footer still overflows, archived IDs are greedily included up to the remaining budget, preventing GitHub API failures in upsert_state_comment().
Resolved conflict in process_repo loop by keeping HEAD's refactored _process_single_pr call. Updated _process_single_pr to call _prepare_reports_dir and pass report_path to all _run_claude_prompt calls, which is required by the updated function signature.
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/state_manager.py (1)
326-342: ⚠️ Potential issue | 🟠 Major
Use archived_ids for deduplication as well. The seen_ids at Line 335 only looks at state.entries, so if an ID that fell out of the visible table and now survives only in state.archived_ids shows up in new_entries again, that review is resurrected in the state comment. This undermines the point of render_state_comment() moving old IDs into the footer, so it is safer to check against state.processed_ids here. Suggested fix:
 state = load_state_comment(repo, pr_number)
 merged_entries = list(state.entries)
-seen_ids = {entry.comment_id for entry in merged_entries}
+seen_ids = set(state.processed_ids)
 for entry in new_entries:
     if entry.comment_id in seen_ids:
         continue
     seen_ids.add(entry.comment_id)
     merged_entries.append(entry)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/state_manager.py` around lines 326 - 342, upsert_state_comment currently builds seen_ids from state.entries only, which allows IDs present in state.archived_ids to be reintroduced by new_entries; change the dedup logic to include state.archived_ids (or better, state.processed_ids if available) when building seen_ids so that any ID already processed/archived will be skipped, and keep calling render_state_comment(merged_entries, archived_ids=state.archived_ids) unchanged; update references to seen_ids, state.archived_ids, and state.processed_ids in upsert_state_comment to implement this check.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/summarizer.py`:
- Line 95: Move the integer conversion of the environment variable so it happens before the temp file is created, not after: relocate the current _timeout = int(os.environ.get("REFIX_SUMMARIZER_TIMEOUT_SEC", "300")) to before the block that creates the temp file (the try/finally and temp-file creation at the Line 84-89 indicated in the comment), and if the conversion raises ValueError, catch it before any temp file is created and either perform a controlled exit with an error log or explicitly fall back to the default value. Referenced symbols: _timeout and the relevant try/finally block (the temp-file creation site).
---
Outside diff comments:
In `@src/state_manager.py`:
- Around line 326-342: upsert_state_comment currently builds seen_ids from
state.entries only, which allows IDs present in state.archived_ids to be
reintroduced by new_entries; change the dedup logic to include
state.archived_ids (or better, state.processed_ids if available) when building
seen_ids so that any ID already processed/archived will be skipped, and keep
calling render_state_comment(merged_entries, archived_ids=state.archived_ids)
unchanged; update references to seen_ids, state.archived_ids, and
state.processed_ids in upsert_state_comment to implement this check.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: ced47c1a-ae03-45b2-8460-b41faf56f580
📒 Files selected for processing (5)
src/auto_fixer.py, src/state_manager.py, src/summarizer.py, tests/test_auto_fixer.py, tests/test_summarizer.py
🚧 Files skipped from review as they are similar to previous changes (2)
- tests/test_summarizer.py
- tests/test_auto_fixer.py
…ngthening-c363
# Conflicts:
#	src/auto_fixer.py
#	tests/test_auto_fixer.py
…te_comment seen_ids was built only from state.entries, allowing archived IDs to be re-added if they appeared in new_entries again. Use state.processed_ids which includes both active entries and archived_ids.
REFIX_SUMMARIZER_TIMEOUT_SEC was parsed after the temp file was created but outside the finally cleanup block. An invalid env value would raise ValueError, leaving the temp file on disk. Moving the conversion earlier ensures it fails before any temp file is created.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/auto_fixer.py (1)
3700-3711: ⚠️ Potential issue | 🔴 Critical
The arguments in the _update_done_label_if_completed call are invalid and will fail at runtime. This call has the following problems:
- Passing parameters that do not exist: silent, fix_model, summarize_model, ci_log_max_lines are not defined as parameters of _update_done_label_if_completed
- Missing required parameters: pr_number, has_review_targets, review_fix_started, review_fix_added_commits, review_fix_failed, state_saved, commits_by_phase, pr_data, review_comments, issue_comments
When this code path executes, a TypeError is raised. 🐛 Suggested fix:
 _update_done_label_if_completed(
     repo=repo,
+    pr_number=pr_number,
+    has_review_targets=has_review_targets,
+    review_fix_started=review_fix_started,
+    review_fix_added_commits=review_fix_added_commits,
+    review_fix_failed=review_fix_failed,
+    state_saved=state_saved,
+    commits_by_phase=commits_by_phase,
+    pr_data=pr_data,
+    review_comments=review_comments,
+    issue_comments=issue_comments,
     dry_run=dry_run,
-    silent=silent,
     summarize_only=summarize_only,
-    fix_model=fix_model,
-    summarize_model=summarize_model,
-    ci_log_max_lines=ci_log_max_lines,
     auto_merge_enabled=auto_merge_enabled,
     coderabbit_rate_limit_active=bool(active_rate_limit),
     coderabbit_review_failed_active=bool(active_review_failed),
 )
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/auto_fixer.py` around lines 3700 - 3711, The call to _update_done_label_if_completed is passing invalid parameters (silent, fix_model, summarize_model, ci_log_max_lines) and omitting required ones; fix it by updating the call site to only pass the parameters that the function actually accepts (e.g., repo and dry_run) and include all required arguments: pr_number, has_review_targets, review_fix_started, review_fix_added_commits, review_fix_failed, state_saved, commits_by_phase, pr_data, review_comments, issue_comments; also preserve the two boolean flags currently derived from active_rate_limit and active_review_failed (pass coderabbit_rate_limit_active=bool(active_rate_limit) and coderabbit_review_failed_active=bool(active_review_failed)) and remove the invalid named args so the call signature matches the _update_done_label_if_completed definition.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Outside diff comments:
In `@src/auto_fixer.py`:
- Around line 3700-3711: The call to _update_done_label_if_completed is passing
invalid parameters (silent, fix_model, summarize_model, ci_log_max_lines) and
omitting required ones; fix it by updating the call site to only pass the
parameters that the function actually accepts (e.g., repo and dry_run) and
include all required arguments: pr_number, has_review_targets,
review_fix_started, review_fix_added_commits, review_fix_failed, state_saved,
commits_by_phase, pr_data, review_comments, issue_comments; also preserve the
two boolean flags currently derived from active_rate_limit and
active_review_failed (pass coderabbit_rate_limit_active=bool(active_rate_limit)
and coderabbit_review_failed_active=bool(active_review_failed)) and remove the
invalid named args so the call signature matches the
_update_done_label_if_completed definition.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: f2131778-2ac3-4f09-a6d1-9fe9de2993fd
📒 Files selected for processing (2)
src/auto_fixer.py, tests/test_auto_fixer.py
🚧 Files skipped from review as they are similar to previous changes (1)
- tests/test_auto_fixer.py
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/state_manager.py`:
- Around line 187-205: The code in the trimmed_entries branch silently truncates
accumulated_archived into the archived-ids footer which causes
load_state_comment() to lose processed_ids; instead of dropping IDs, detect when
accumulated_archived would be truncated and handle it explicitly: either persist
the overflow to a separate durable store (e.g., call a new helper like
stash_archived_overflow(overflow_ids)) or treat it as a failure (raise an
exception or return an error) so the caller can retry/expand
STATE_COMMENT_MAX_LENGTH; modify the logic around trimmed_entries,
accumulated_archived, and footer construction to compute overflow_ids and then
either persist them or raise an error rather than silently omitting them so
load_state_comment() can always reconstruct processed_ids.
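A sketch of the fail-loudly variant this comment asks for. Hypothetical helper: the real code builds the footer inside render_state_comment(), and the error message wording here is illustrative.

```python
def render_archived_footer(body: str, archived_ids: list[str], max_len: int) -> str:
    """Raise instead of silently dropping archived IDs that do not fit,
    so load_state_comment() can always reconstruct processed_ids."""
    footer = f"<!-- archived-ids: {','.join(archived_ids)} -->"
    if len(body) + len(footer) > max_len:
        raise RuntimeError(
            f"state comment overflow: {len(body) + len(footer)} > {max_len}; "
            "archived IDs would be lost")
    return footer
```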
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: df441db4-06dc-49a7-adf0-cdcff4a17c9f
📒 Files selected for processing (2)
src/state_manager.py, src/summarizer.py
🚧 Files skipped from review as they are similar to previous changes (1)
- src/summarizer.py
…to fix mypy/pyright CI
- Add active_review_failed handling and backfilled_count param to _process_single_pr
- Add coderabbit_review_failed_active to all _update_done_label_if_completed calls in _process_single_pr
- Replace process_repo's 750-line inline PR loop with _process_single_pr calls
- Fixes mypy errors: pr_number Any|None and bad kwargs on _update_done_label_if_completed
- Fixes pyright: "Code is too complex to analyze" on process_repo
… archived IDs When accumulated_archived cannot fully fit into the footer, the previous code silently dropped overflow IDs. load_state_comment() only reconstructs processed_ids from visible entries and the archived-ids footer, so dropped IDs would reappear as unprocessed on the next run and be handled again. Now render_state_comment raises RuntimeError whenever overflow would occur, making the data loss explicit rather than silent.
…TED_LABEL_COLOR from main
Implement an orchestrated Python CI pipeline including ruff, mypy, pyright, and pytest to enhance code quality and maintainability.
Summary by CodeRabbit
New Features
Chores
Refactor
Tests