diff --git a/plugins/sentry-skills/skills/iterate-pr/SKILL.md b/plugins/sentry-skills/skills/iterate-pr/SKILL.md index 8f64b2c..a3db234 100644 --- a/plugins/sentry-skills/skills/iterate-pr/SKILL.md +++ b/plugins/sentry-skills/skills/iterate-pr/SKILL.md @@ -7,133 +7,125 @@ description: Iterate on a PR until CI passes. Use when you need to fix CI failur Continuously iterate on the current branch until all CI checks pass and review feedback is addressed. -**Requires**: GitHub CLI (`gh`) authenticated and available. +**Requires**: GitHub CLI (`gh`) authenticated. -## Process +## Bundled Scripts -### Step 1: Identify the PR +### `scripts/fetch_pr_checks.py` + +Fetches CI check status and extracts failure snippets from logs. ```bash -gh pr view --json number,url,headRefName,baseRefName +python scripts/fetch_pr_checks.py [--pr NUMBER] ``` -If no PR exists for the current branch, stop and inform the user. +Returns JSON: +```json +{ + "pr": {"number": 123, "branch": "feat/foo"}, + "summary": {"total": 5, "passed": 3, "failed": 2, "pending": 0}, + "checks": [ + {"name": "tests", "status": "fail", "log_snippet": "...", "run_id": 123}, + {"name": "lint", "status": "pass"} + ] +} +``` -### Step 2: Check CI Status First +### `scripts/fetch_pr_feedback.py` -Always check CI/GitHub Actions status before looking at review feedback: +Fetches and categorizes PR review feedback using the [LOGAF scale](https://develop.sentry.dev/engineering-practices/code-review/#logaf-scale). ```bash -gh pr checks --json name,state,bucket,link,workflow +python scripts/fetch_pr_feedback.py [--pr NUMBER] ``` -The `bucket` field categorizes state into: `pass`, `fail`, `pending`, `skipping`, or `cancel`. - -**Important:** If any of these checks are still `pending`, wait before proceeding: -- `sentry` / `sentry-io` -- `codecov` -- `cursor` / `bugbot` / `seer` -- Any linter or code analysis checks +Returns JSON with feedback categorized as: +- `high` - Must address before merge (`h:`, blocker, changes requested) +- `medium` - Should address (`m:`, standard feedback) +- `low` - Optional (`l:`, nit, style, suggestion) +- `bot` - Automated comments (Codecov, Sentry, etc.) +- `resolved` - Already resolved threads -These bots may post additional feedback comments once their checks complete. Waiting avoids duplicate work. +## Workflow -### Step 3: Gather Review Feedback +### 1. Identify PR -Once CI checks have completed (or at least the bot-related checks), gather human and bot feedback: - -**Review Comments and Status:** ```bash -gh pr view --json reviews,comments,reviewDecision +gh pr view --json number,url,headRefName ``` -**Inline Code Review Comments:** -```bash -gh api repos/{owner}/{repo}/pulls/{pr_number}/comments -``` +Stop if no PR exists for the current branch. -**PR Conversation Comments (includes bot comments):** -```bash -gh api repos/{owner}/{repo}/issues/{pr_number}/comments -``` +### 2. Check CI Status -Look for bot comments from: Sentry, Codecov, Cursor, Bugbot, Seer, and other automated tools. +Run `scripts/fetch_pr_checks.py` to get structured failure data. -### Step 4: Investigate Failures +**Wait if pending:** If bot-related checks (sentry, codecov, cursor, bugbot, seer) are still running, wait before proceeding—they may post additional feedback. -For each CI failure, get the actual logs: +### 3. Fix CI Failures -```bash -# List recent runs for this branch -gh run list --branch $(git branch --show-current) --limit 5 --json databaseId,name,status,conclusion +For each failure in the script output: +1. 
Read the `log_snippet` to understand the failure
+2. Read the relevant code before making changes
+3. Fix the issue with minimal, targeted changes

-# View failed logs for a specific run
-gh run view --log-failed
-```
-
-Do NOT assume what failed based on the check name alone. Always read the actual logs.
+Do NOT assume what failed based on check name alone—always read the logs.

-### Step 5: Validate Feedback
+### 4. Gather Review Feedback

-For each piece of feedback (CI failure or review comment):
+Run `scripts/fetch_pr_feedback.py` to get categorized feedback.

-1. **Read the relevant code** - Understand the context before making changes
-2. **Verify the issue is real** - Not all feedback is correct; reviewers and bots can be wrong
-3. **Check if already addressed** - The issue may have been fixed in a subsequent commit
-4. **Skip invalid feedback** - If the concern is not legitimate, move on
+### 5. Handle Feedback by LOGAF Priority

-### Step 6: Address Valid Issues
+**Auto-fix (no prompt):**
+- `high` - must address (blockers, security, changes requested)
+- `medium` - should address (standard feedback)

-Make minimal, targeted code changes. Only fix what is actually broken.
+**Prompt user for selection:**
+- `low` - present numbered list and ask which to address:

-### Step 7: Commit and Push
+```
+Found 3 low-priority suggestions:
+1. [l] "Consider renaming this variable" - @reviewer in api.py:42
+2. [nit] "Could use a list comprehension" - @reviewer in utils.py:18
+3. [style] "Add a docstring" - @reviewer in models.py:55

-```bash
-git add -A
-git commit -m "fix: <description>"
-git push origin $(git branch --show-current)
+Which would you like to address? (e.g., "1,3" or "all" or "none")
```

-### Step 8: Wait for CI
+**Skip silently:**
+- `resolved` threads
+- `bot` comments (informational only)

-Use the built-in watch functionality:
+### 6. Commit and Push

```bash
-gh pr checks --watch --interval 30
+git add <files>
+git commit -m "fix: <description>"
+git push
```

-This waits until all checks complete. Exit code 0 means all passed, exit code 1 means failures.
-
-Alternatively, poll manually if you need more control:
+### 7. Wait for CI

```bash
-gh pr checks --json name,state,bucket | jq '.[] | select(.bucket != "pass")'
+gh pr checks --watch --interval 30
```

-### Step 9: Repeat
-
-Return to Step 2 if:
-- Any CI checks failed
-- New review feedback appeared
+### 8. Repeat

-Continue until all checks pass and no unaddressed feedback remains.
+Return to step 2 if CI failed or new feedback appeared.

## Exit Conditions

-**Success:**
-- All CI checks are green (`bucket: pass`)
-- No unaddressed human review feedback
+**Success:** All checks pass, no unaddressed high/medium feedback, user has decided on low-priority items.

-**Ask for Help:**
-- Same failure persists after 3 attempts (likely a flaky test or deeper issue)
-- Review feedback requires clarification or decision from the user
-- CI failure is unrelated to branch changes (infrastructure issue)
+**Ask for help:** Same failure after 3 attempts, feedback needs clarification, infrastructure issues.

-**Stop Immediately:**
-- No PR exists for the current branch
-- Branch is out of sync and needs rebase (inform user)
+**Stop:** No PR exists, branch needs rebase.
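+
+## Example Feedback Output
+
+For reference, `fetch_pr_feedback.py` output is shaped like the sketch below (field names match the script; the PR values are illustrative):
+
+```json
+{
+  "pr": {"number": 123, "author": "dev", "review_decision": "CHANGES_REQUESTED"},
+  "summary": {"high": 1, "medium": 0, "low": 2, "bot_comments": 1, "resolved": 3, "needs_attention": 1},
+  "feedback": {
+    "high": [{"author": "reviewer", "body": "h: this breaks pagination", "path": "api.py", "line": 42}],
+    "low": ["..."]
+  },
+  "action_required": "Address high-priority feedback before merge"
+}
+```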
-## Tips
+## Fallback

-- Use `gh pr checks --required` to focus only on required checks
-- Use `gh run view --verbose` to see all job steps, not just failures
-- If a check is from an external service, the `link` field in checks JSON provides the URL to investigate
+If scripts fail, use `gh` CLI directly:
+- `gh pr checks --json name,state,bucket,link`
+- `gh run view <run-id> --log-failed`
+- `gh api repos/{owner}/{repo}/pulls/{number}/comments`
diff --git a/plugins/sentry-skills/skills/iterate-pr/scripts/fetch_pr_checks.py b/plugins/sentry-skills/skills/iterate-pr/scripts/fetch_pr_checks.py
new file mode 100755
index 0000000..173266c
--- /dev/null
+++ b/plugins/sentry-skills/skills/iterate-pr/scripts/fetch_pr_checks.py
@@ -0,0 +1,217 @@
+#!/usr/bin/env python3
+"""
+Fetch PR CI checks and extract relevant failure snippets.
+
+Usage:
+    python fetch_pr_checks.py [--pr PR_NUMBER]
+
+If --pr is not specified, uses the PR for the current branch.
+
+Output: JSON to stdout with structured check data.
+"""
+
+import argparse
+import json
+import re
+import subprocess
+import sys
+from typing import Any
+
+
+def run_gh(args: list[str]) -> dict[str, Any] | list[Any] | None:
+    """Run a gh CLI command and return parsed JSON output."""
+    try:
+        result = subprocess.run(
+            ["gh"] + args,
+            capture_output=True,
+            text=True,
+            check=True,
+        )
+        return json.loads(result.stdout) if result.stdout.strip() else None
+    except subprocess.CalledProcessError as e:
+        print(f"Error running gh {' '.join(args)}: {e.stderr}", file=sys.stderr)
+        return None
+    except json.JSONDecodeError:
+        return None
+
+
+def get_pr_info(pr_number: int | None = None) -> dict[str, Any] | None:
+    """Get PR info, optionally by number or for current branch."""
+    args = ["pr", "view", "--json", "number,url,headRefName,baseRefName"]
+    if pr_number:
+        args.insert(2, str(pr_number))
+    return run_gh(args)
+
+
+def get_checks(pr_number: int | None = None) -> list[dict[str, Any]]:
+    """Get all checks for a PR."""
+    args = ["pr", "checks", "--json", "name,state,bucket,link,workflow"]
+    if pr_number:
+        args.insert(2, str(pr_number))
+    result = run_gh(args)
+    return result if isinstance(result, list) else []
+
+
+def get_failed_runs(branch: str) -> list[dict[str, Any]]:
+    """Get recent failed workflow runs for a branch."""
+    result = run_gh([
+        "run", "list",
+        "--branch", branch,
+        "--limit", "10",
+        "--json", "databaseId,name,status,conclusion,headSha"
+    ])
+    if not isinstance(result, list):
+        return []
+    # Keep only runs that concluded in failure
+    return [r for r in result if r.get("conclusion") == "failure"]
+
+
+def extract_failure_snippet(log_text: str, max_lines: int = 50) -> str:
+    """Extract relevant failure snippet from log text.
+
+    Looks for common failure markers and extracts surrounding context.
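+
+    For example, a pytest log containing the (illustrative) line
+    "FAILED tests/test_api.py::test_x - AssertionError" yields a snippet
+    beginning a few lines before that marker.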
+    """
+    lines = log_text.split("\n")
+
+    # Case-insensitive markers that indicate failure points. Inline (?i)
+    # flags are avoided here: joining them with "|" puts a global flag
+    # mid-pattern, which Python 3.11+ rejects, so re.IGNORECASE is passed
+    # at compile time instead.
+    failure_patterns = [
+        r"error[:\s]",
+        r"failed[:\s]",
+        r"failure[:\s]",
+        r"traceback",
+        r"exception",
+        r"assert(ion)?.*failed",
+        r"FAILED",
+        r"panic:",
+        r"fatal:",
+        r"npm ERR!",
+        r"yarn error",
+        r"ModuleNotFoundError",
+        r"ImportError",
+        r"SyntaxError",
+        r"TypeError",
+        r"ValueError",
+        r"KeyError",
+        r"AttributeError",
+        r"NameError",
+        r"IndentationError",
+        r"===.*FAILURES.*===",
+        r"___.*___",  # pytest failure separators
+    ]
+
+    combined_pattern = re.compile("|".join(failure_patterns), re.IGNORECASE)
+
+    # Find lines matching failure patterns
+    failure_indices = []
+    for i, line in enumerate(lines):
+        if combined_pattern.search(line):
+            failure_indices.append(i)
+
+    if not failure_indices:
+        # No clear failure point, return last N lines
+        return "\n".join(lines[-max_lines:])
+
+    # Extract context around first failure point
+    # Include some context before and after
+    first_failure = failure_indices[0]
+    start = max(0, first_failure - 5)
+    end = min(len(lines), first_failure + max_lines - 5)
+
+    snippet_lines = lines[start:end]
+
+    # If there are more failures after our snippet, note it
+    remaining_failures = [i for i in failure_indices if i >= end]
+    if remaining_failures:
+        snippet_lines.append(f"\n... ({len(remaining_failures)} more error(s) follow)")
+
+    return "\n".join(snippet_lines)
+
+
+def get_run_logs(run_id: int) -> str | None:
+    """Get failed logs for a workflow run."""
+    try:
+        # check=True is deliberately omitted: we still want whatever output
+        # gh produced even on a nonzero exit
+        result = subprocess.run(
+            ["gh", "run", "view", str(run_id), "--log-failed"],
+            capture_output=True,
+            text=True,
+            timeout=60,
+        )
+        return result.stdout if result.stdout else result.stderr
+    except subprocess.TimeoutExpired:
+        return None
+
+
+def main():
+    parser = argparse.ArgumentParser(description="Fetch PR CI checks with failure snippets")
+    parser.add_argument("--pr", type=int, help="PR number (defaults to current branch PR)")
+    args = parser.parse_args()
+
+    # Get PR info
+    pr_info = get_pr_info(args.pr)
+    if not pr_info:
+        print(json.dumps({"error": "No PR found for current branch"}))
+        sys.exit(1)
+
+    pr_number = pr_info["number"]
+    branch = pr_info["headRefName"]
+
+    # Get checks
+    checks = get_checks(pr_number)
+
+    # Process checks and add failure snippets
+    processed_checks = []
+    failed_runs = None  # Lazy load
+
+    for check in checks:
+        # gh may return `workflow` as a bare name string or as an object
+        workflow = check.get("workflow") or ""
+        if isinstance(workflow, dict):
+            workflow = workflow.get("name", "")
+
+        processed = {
+            "name": check.get("name", "unknown"),
+            "status": check.get("bucket", check.get("state", "unknown")),
+            "link": check.get("link", ""),
+            "workflow": workflow,
+        }
+
+        # For failures, try to get log snippet
+        if processed["status"] == "fail":
+            if failed_runs is None:
+                failed_runs = get_failed_runs(branch)
+
+            # Find matching run by workflow name
+            workflow_name = processed["workflow"] or processed["name"]
+            matching_run = next(
+                (r for r in failed_runs if workflow_name in r.get("name", "")),
+                None
+            )
+
+            if matching_run:
+                logs = get_run_logs(matching_run["databaseId"])
+                if logs:
+                    processed["log_snippet"] = extract_failure_snippet(logs)
+                    processed["run_id"] = matching_run["databaseId"]
+
+        processed_checks.append(processed)
+
+    # Build output
+    output = {
+        "pr": {
+            "number": pr_number,
+            "url": pr_info.get("url", ""),
+            "branch": branch,
+            "base": pr_info.get("baseRefName", ""),
+        },
+        "summary": {
+            "total": len(processed_checks),
+            "passed": sum(1 for c in processed_checks if c["status"] == "pass"),
+            "failed": sum(1 for c in processed_checks if c["status"] == "fail"),
+            "pending": sum(1 for c in processed_checks if c["status"] == "pending"),
+            "skipped": sum(1 for c in processed_checks if c["status"] in ("skipping", "cancel")),
+        },
+        "checks": processed_checks,
+    }
+
+    print(json.dumps(output, indent=2))
+
+
+if __name__ == "__main__":
+    main()
diff --git a/plugins/sentry-skills/skills/iterate-pr/scripts/fetch_pr_feedback.py b/plugins/sentry-skills/skills/iterate-pr/scripts/fetch_pr_feedback.py
new file mode 100755
index 0000000..3d30ae9
--- /dev/null
+++ b/plugins/sentry-skills/skills/iterate-pr/scripts/fetch_pr_feedback.py
@@ -0,0 +1,422 @@
+#!/usr/bin/env python3
+"""
+Fetch and categorize PR review feedback.
+
+Usage:
+    python fetch_pr_feedback.py [--pr PR_NUMBER]
+
+If --pr is not specified, uses the PR for the current branch.
+
+Output: JSON to stdout with categorized feedback.
+
+Categories (using LOGAF scale - see https://develop.sentry.dev/engineering-practices/code-review/#logaf-scale):
+- high: Must address before merge (h:, blocker, changes requested)
+- medium: Should address (m:, standard feedback)
+- low: Optional suggestions (l:, nit, style)
+- bot: Automated comments (Codecov, Sentry bot, etc.)
+- resolved: Already resolved threads
+"""
+
+import argparse
+import json
+import re
+import subprocess
+import sys
+from typing import Any
+
+
+# Known bot usernames and patterns
+BOT_PATTERNS = [
+    r"(?i)bot$",
+    r"(?i)^codecov",
+    r"(?i)^sentry",
+    r"(?i)^dependabot",
+    r"(?i)^renovate",
+    r"(?i)^github-actions",
+    r"(?i)^mergify",
+    r"(?i)^semantic-release",
+    r"(?i)^sonarcloud",
+    r"(?i)^snyk",
+    r"(?i)^cursor",
+    r"(?i)^bugbot",
+    r"(?i)^seer",
+    r"(?i)^copilot",
+    r"(?i)\[bot\]$",
+]
+
+
+def run_gh(args: list[str]) -> dict[str, Any] | list[Any] | None:
+    """Run a gh CLI command and return parsed JSON output."""
+    try:
+        result = subprocess.run(
+            ["gh"] + args,
+            capture_output=True,
+            text=True,
+            check=True,
+        )
+    except subprocess.CalledProcessError as e:
+        print(f"Error running gh {' '.join(args)}: {e.stderr}", file=sys.stderr)
+        return None
+
+    text = result.stdout.strip()
+    if not text:
+        return None
+
+    # `gh api --paginate` concatenates one JSON document per page, which a
+    # single json.loads cannot parse; decode documents sequentially and
+    # merge top-level arrays.
+    decoder = json.JSONDecoder()
+    docs, pos = [], 0
+    while pos < len(text):
+        try:
+            doc, pos = decoder.raw_decode(text, pos)
+        except json.JSONDecodeError:
+            return None
+        docs.append(doc)
+        while pos < len(text) and text[pos].isspace():
+            pos += 1
+
+    if len(docs) == 1:
+        return docs[0]
+    return [x for d in docs for x in (d if isinstance(d, list) else [d])]
+
+
+def get_repo_info() -> tuple[str, str] | None:
+    """Get owner and repo name from current directory."""
+    result = run_gh(["repo", "view", "--json", "owner,name"])
+    if result:
+        return result.get("owner", {}).get("login"), result.get("name")
+    return None
+
+
+def get_pr_info(pr_number: int | None = None) -> dict[str, Any] | None:
+    """Get PR info, optionally by number or for current branch."""
+    args = ["pr", "view", "--json", "number,url,headRefName,author,reviews,reviewDecision"]
+    if pr_number:
+        args.insert(2, str(pr_number))
+    return run_gh(args)
+
+
+def is_bot(username: str) -> bool:
+    """Check if username matches known bot patterns."""
+    for pattern in BOT_PATTERNS:
+        if re.search(pattern, username):
+            return True
+    return False
+
+
+def get_review_comments(owner: str, repo: str, pr_number: int) -> list[dict[str, Any]]:
+    """Get inline code review comments via API."""
+    result = run_gh([
+        "api",
+        f"repos/{owner}/{repo}/pulls/{pr_number}/comments",
+        "--paginate",
+    ])
+    return result if isinstance(result, list) else []
+
+
+def get_issue_comments(owner: str, repo: str, pr_number: int) -> list[dict[str, Any]]:
+    """Get PR conversation comments (includes bot comments)."""
+    result = run_gh([
+        "api",
+        f"repos/{owner}/{repo}/issues/{pr_number}/comments",
+        "--paginate",
+    ])
+    return result
if isinstance(result, list) else [] + + +def get_review_threads(owner: str, repo: str, pr_number: int) -> list[dict[str, Any]]: + """Get review threads with resolution status via GraphQL.""" + query = """ + query($owner: String!, $repo: String!, $pr: Int!) { + repository(owner: $owner, name: $repo) { + pullRequest(number: $pr) { + reviewThreads(first: 100) { + nodes { + id + isResolved + isOutdated + path + line + comments(first: 10) { + nodes { + id + body + author { + login + } + createdAt + } + } + } + } + } + } + } + """ + try: + result = subprocess.run( + [ + "gh", "api", "graphql", + "-f", f"query={query}", + "-F", f"owner={owner}", + "-F", f"repo={repo}", + "-F", f"pr={pr_number}", + ], + capture_output=True, + text=True, + check=True, + ) + data = json.loads(result.stdout) + threads = data.get("data", {}).get("repository", {}).get("pullRequest", {}).get("reviewThreads", {}).get("nodes", []) + return threads + except (subprocess.CalledProcessError, json.JSONDecodeError): + return [] + + +def detect_logaf(body: str) -> str | None: + """Detect LOGAF scale markers in comment body. + + LOGAF scale (https://develop.sentry.dev/engineering-practices/code-review/#logaf-scale): + - l: / [l] / low: → low priority (optional) + - m: / [m] / medium: → medium priority (should address) + - h: / [h] / high: → high priority (must address) + + Returns 'high', 'medium', 'low', or None if no marker found. + """ + # Check for LOGAF markers at start of comment (with optional whitespace) + logaf_patterns = [ + # h: or [h] or high: patterns + (r"^\s*(?:h:|h\s*:|high:|\[h\])", "high"), + # m: or [m] or medium: patterns + (r"^\s*(?:m:|m\s*:|medium:|\[m\])", "medium"), + # l: or [l] or low: patterns + (r"^\s*(?:l:|l\s*:|low:|\[l\])", "low"), + ] + + for pattern, level in logaf_patterns: + if re.search(pattern, body, re.IGNORECASE): + return level + + return None + + +def categorize_comment(comment: dict[str, Any], body: str) -> str: + """Categorize a comment based on content and author. + + Uses LOGAF scale: high (must fix), medium (should fix), low (optional). 
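+
+    For example (illustrative comments): "nit: rename this" from a human
+    reviewer is low, "must fix: this breaks prod" is high, and an unmarked
+    comment defaults to medium.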
+    """
+    # author (GraphQL) or user (REST) can be null, e.g. for deleted accounts
+    author = (comment.get("author") or {}).get("login", "") or (comment.get("user") or {}).get("login", "")
+
+    if is_bot(author):
+        return "bot"
+
+    # Check for explicit LOGAF markers first
+    logaf_level = detect_logaf(body)
+    if logaf_level:
+        return logaf_level
+
+    # Look for high-priority (blocking) indicators
+    high_patterns = [
+        r"(?i)must\s+(fix|change|update|address)",
+        r"(?i)this\s+(is\s+)?(wrong|incorrect|broken|buggy)",
+        r"(?i)security\s+(issue|vulnerability|concern)",
+        r"(?i)will\s+(break|cause|fail)",
+        r"(?i)critical",
+        r"(?i)blocker",
+    ]
+
+    for pattern in high_patterns:
+        if re.search(pattern, body):
+            return "high"
+
+    # Look for low-priority (suggestion) indicators
+    low_patterns = [
+        r"(?i)nit[:\s]",
+        r"(?i)nitpick",
+        r"(?i)suggestion[:\s]",
+        r"(?i)consider\s+",
+        r"(?i)could\s+(also\s+)?",
+        r"(?i)might\s+(want\s+to|be\s+better)",
+        r"(?i)optional[:\s]",
+        r"(?i)minor[:\s]",
+        r"(?i)style[:\s]",
+        r"(?i)prefer\s+",
+        r"(?i)what\s+do\s+you\s+think",
+        r"(?i)up\s+to\s+you",
+        r"(?i)take\s+it\s+or\s+leave",
+        r"(?i)fwiw",
+    ]
+
+    for pattern in low_patterns:
+        if re.search(pattern, body):
+            return "low"
+
+    # Default to medium for non-bot comments without clear indicators
+    return "medium"
+
+
+def extract_feedback_item(
+    body: str,
+    author: str,
+    path: str | None = None,
+    line: int | None = None,
+    url: str | None = None,
+    is_resolved: bool = False,
+    is_outdated: bool = False,
+) -> dict[str, Any]:
+    """Create a standardized feedback item."""
+    # Truncate long bodies for summary
+    summary = (body[:200] + "...") if len(body) > 200 else body
+    summary = summary.replace("\n", " ").strip()
+
+    item = {
+        "author": author,
+        "body": summary,
+        "full_body": body,
+    }
+
+    if path:
+        item["path"] = path
+    if line:
+        item["line"] = line
+    if url:
+        item["url"] = url
+    if is_resolved:
+        item["resolved"] = True
+    if is_outdated:
+        item["outdated"] = True
+
+    return item
+
+
+def main():
+    parser = argparse.ArgumentParser(description="Fetch and categorize PR feedback")
+    parser.add_argument("--pr", type=int, help="PR number (defaults to current branch PR)")
+    args = parser.parse_args()
+
+    # Get repo info
+    repo_info = get_repo_info()
+    if not repo_info:
+        print(json.dumps({"error": "Could not determine repository"}))
+        sys.exit(1)
+    owner, repo = repo_info
+
+    # Get PR info
+    pr_info = get_pr_info(args.pr)
+    if not pr_info:
+        print(json.dumps({"error": "No PR found for current branch"}))
+        sys.exit(1)
+
+    pr_number = pr_info["number"]
+    pr_author = (pr_info.get("author") or {}).get("login", "")
+
+    # Get review decision
+    review_decision = pr_info.get("reviewDecision", "")
+
+    # Categorized feedback using LOGAF scale
+    feedback = {
+        "high": [],  # Must address before merge
+        "medium": [],  # Should address
+        "low": [],  # Optional suggestions
+        "bot": [],
+        "resolved": [],
+    }
+
+    # Process reviews for overall status
+    reviews = pr_info.get("reviews", [])
+    for review in reviews:
+        if review.get("state") == "CHANGES_REQUESTED":
+            author = (review.get("author") or {}).get("login", "")
+            body = review.get("body", "")
+            if body and author != pr_author:
+                item = extract_feedback_item(body, author)
+                item["type"] = "changes_requested"
+                feedback["high"].append(item)
+
+    # Get review threads (inline comments with resolution status)
+    threads = get_review_threads(owner, repo, pr_number)
+
+    for thread in threads:
+        if not thread.get("comments", {}).get("nodes"):
+            continue
+
+        first_comment = thread["comments"]["nodes"][0]
+        # author can be null for deleted accounts
+        author = (first_comment.get("author") or {}).get("login", "")
+        body = first_comment.get("body", "")
+
+        # Skip if author is PR author (self-comments)
+        if author == pr_author:
+            continue
+
+        # Skip empty or very short comments
+        if not body or len(body.strip()) < 3:
+            continue
+
+        is_resolved = thread.get("isResolved", False)
+        is_outdated = thread.get("isOutdated", False)
+
+        item = extract_feedback_item(
+            body=body,
+            author=author,
+            path=thread.get("path"),
+            line=thread.get("line"),
+            is_resolved=is_resolved,
+            is_outdated=is_outdated,
+        )
+
+        if is_resolved:
+            feedback["resolved"].append(item)
+        elif is_bot(author):
+            feedback["bot"].append(item)
+        else:
+            category = categorize_comment(first_comment, body)
+            feedback[category].append(item)
+
+    # Get issue comments (general PR conversation)
+    issue_comments = get_issue_comments(owner, repo, pr_number)
+
+    for comment in issue_comments:
+        author = (comment.get("user") or {}).get("login", "")
+        body = comment.get("body", "")
+
+        # Skip if author is PR author
+        if author == pr_author:
+            continue
+
+        # Skip empty comments
+        if not body or len(body.strip()) < 3:
+            continue
+
+        item = extract_feedback_item(
+            body=body,
+            author=author,
+            url=comment.get("html_url"),
+        )
+
+        if is_bot(author):
+            feedback["bot"].append(item)
+        else:
+            category = categorize_comment(comment, body)
+            feedback[category].append(item)
+
+    # Build output
+    output = {
+        "pr": {
+            "number": pr_number,
+            "url": pr_info.get("url", ""),
+            "author": pr_author,
+            "review_decision": review_decision,
+        },
+        "summary": {
+            "high": len(feedback["high"]),
+            "medium": len(feedback["medium"]),
+            "low": len(feedback["low"]),
+            "bot_comments": len(feedback["bot"]),
+            "resolved": len(feedback["resolved"]),
+            "needs_attention": len(feedback["high"]) + len(feedback["medium"]),
+        },
+        "feedback": feedback,
+    }
+
+    # Add actionable summary based on LOGAF priorities
+    if feedback["high"]:
+        output["action_required"] = "Address high-priority feedback before merge"
+    elif feedback["medium"]:
+        output["action_required"] = "Address medium-priority feedback"
+    elif feedback["low"]:
+        output["action_required"] = "Review low-priority suggestions - ask user which to address"
+    else:
+        output["action_required"] = None
+
+    print(json.dumps(output, indent=2))
+
+
+if __name__ == "__main__":
+    main()
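+
+# Example (hypothetical) invocation from the iterate-pr workflow:
+#   python scripts/fetch_pr_feedback.py --pr 123 | jq '.summary.needs_attention'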