diff --git a/plugins/docs-tools/.claude-plugin/plugin.json b/plugins/docs-tools/.claude-plugin/plugin.json index 0c762e59..416ed14e 100644 --- a/plugins/docs-tools/.claude-plugin/plugin.json +++ b/plugins/docs-tools/.claude-plugin/plugin.json @@ -1,6 +1,6 @@ { "name": "docs-tools", - "version": "0.0.47", + "version": "0.0.48", "description": "Documentation review, writing, and workflow tools for Red Hat AsciiDoc and Markdown documentation.", "author": { "name": "Red Hat Documentation Team", diff --git a/plugins/docs-tools/skills/commit-analyst/SKILL.md b/plugins/docs-tools/skills/commit-analyst/SKILL.md new file mode 100644 index 00000000..ba60bf5d --- /dev/null +++ b/plugins/docs-tools/skills/commit-analyst/SKILL.md @@ -0,0 +1,282 @@ +--- +name: commit-analyst +description: >- + Analyzes a git commit, PR/MR, or branch diff to determine documentation + impact. Produces structured requirements with acceptance criteria, grades + impact, identifies affected documentation, and recommends next steps. + Optionally enriches with JIRA context when a ticket is provided. Use when + triaging code changes for documentation needs or driving a commit-based + documentation workflow. +argument-hint: "--commit <ref> [--ticket <key>] [--repo <path>] [--base-branch <branch>]" +allowed-tools: Read, Write, Bash, Grep, Glob, Skill, WebSearch, WebFetch +--- + +# Commit Analyst + +Analyze code changes (commits, PRs, MRs, branch diffs) for documentation impact. Produce structured requirements compatible with the docs-orchestrator pipeline. + +## Parse arguments + +Extract from `$ARGUMENTS`: + +- `--commit <ref>` — **required**. The code change to analyze. Accepts: + - GitHub PR URL: `https://github.com/org/repo/pull/42` + - GitLab MR URL: `https://gitlab.com/org/repo/-/merge_requests/5` + - Commit SHA: `abc1234` + - Branch name: `feature-branch` +- `--ticket <key>` — optional. JIRA ticket for context enrichment +- `--repo <path>` — optional. 
Path to local source repository (required for commit SHA and branch refs) +- `--base-branch <branch>` — optional. Base branch for comparison (default: main) +- `--base-path <path>` — optional. Output directory (pipeline mode) + +If `--base-path` is provided, set output path: + +```bash +OUTPUT_DIR="${BASE_PATH}/requirements" +OUTPUT_FILE="${OUTPUT_DIR}/requirements.md" +mkdir -p "$OUTPUT_DIR" +``` + +If `--base-path` is NOT provided (standalone mode), write a summary to stdout only — do not create files. + +## Step 1: Run gather_changes.py + +```bash +python3 ${CLAUDE_SKILL_DIR}/scripts/gather_changes.py \ + --commit <ref> \ + [--repo <path>] \ + [--base-branch <branch>] +``` + +Capture the JSON output. If the script exits non-zero, report the stderr message and stop. + +Read the JSON output and note: +- `ref_type` — how the ref was interpreted (pr, commit, branch) +- `pr_metadata` — title, description, labels, linked issues +- `summary` — files changed, lines added/removed +- `categories` — files grouped by type (source_code, documentation, tests, config, ci_cd, build) +- `signals` — new files, API changes, config changes, breaking indicators, docs already modified +- `key_diffs` — diff excerpts for high-signal files +- `code_context` — import/reference context for key files +- `docs_scan` — affected doc files and coverage gaps in the docs repo (cwd) + +## Step 2: Load impact grading framework + +Read [reference/impact-criteria.md](reference/impact-criteria.md). + +## Step 3: Grade impact + +Apply the grading criteria to the gathered data: + +1. Map each signal to its minimum grade using the signal-to-grade table +2. Cross-reference `docs_scan.affected_files` — if existing docs reference changed code, grade is at least MEDIUM +3. Check `docs_scan.coverage_gaps` — new features without doc coverage push toward HIGH +4. Check `signals.breaking_indicators` — any breaking change is HIGH +5. 
The overall grade is the **highest individual grade** found + +Identify documentation action categories from the impact-criteria reference (new feature, enhancement, breaking change, API change, config change, etc.). + +## Step 4: If NONE — write brief report and stop + +If the overall impact grade is NONE, write a brief explanation: + +```markdown +# Documentation Requirements + +**Source**: <ref> +**Date**: <date> +**Overall documentation impact**: NONE + +## Summary + +No documentation impact detected. <one-sentence reason>. + +Files analyzed: N (N source, N test, N config, N other) +``` + +If `--base-path` is set, write this to `OUTPUT_FILE`. Otherwise, print to the user. + +**Stop here.** Do not proceed to JIRA enrichment or web search for NONE-grade changes. + +## Step 5: JIRA enrichment (optional) + +Run this step when: +- `--ticket` was explicitly provided, OR +- `pr_metadata.linked_issues` contains JIRA keys (e.g., `PROJ-456`) discovered from the PR description/commits + +For each JIRA key: + +```bash +python3 ${CLAUDE_PLUGIN_ROOT}/skills/jira-reader/scripts/jira_reader.py --issue <key> +``` + +For the primary ticket (from `--ticket`), also traverse the ticket graph: + +```bash +python3 ${CLAUDE_PLUGIN_ROOT}/skills/jira-reader/scripts/jira_reader.py --graph <key> +``` + +Extract: ticket description, acceptance criteria, priority, status, related tickets. + +**If JIRA access fails** (missing token, 403, timeout), log a warning and continue with diff-only analysis. JIRA is enrichment, not a hard dependency. + +## Step 6: Web search expansion + +Build 1-3 targeted search queries from: +- Features identified in the diff (e.g., "user preferences API <product>") +- Technologies introduced (e.g., "OAuth PKCE implementation guide") +- Breaking changes that need migration context (e.g., "<product> v2 API migration") + +Use WebSearch for each query. Evaluate results for relevance to documentation. Save key findings (URL, title, relevance note) for the report. 
**Do not include raw search queries in the final output** — only curated, relevant references. + +## Step 7: Article extraction (when relevant URLs found) + +If web search or PR description references specific documentation pages (upstream API docs, RFC pages, vendor docs), extract their content for reference: + +```bash +python3 ${CLAUDE_PLUGIN_ROOT}/skills/article-extractor/scripts/article_extractor.py \ + --url <url> --format markdown +``` + +Use extracted content to inform requirements — do not copy it verbatim into the output. + +## Step 8: Write structured requirements report + +Write the report to `OUTPUT_FILE` (pipeline mode) or present it to the user (standalone mode). The format MUST be compatible with the planning step (docs-planner agent). + +```markdown +# Documentation Requirements + +**Source**: <ref> +**JIRA**: <key> +**Date**: <date> +**Overall documentation impact**: HIGH | MEDIUM | LOW + +## Summary + +- Files analyzed: N (N source, N test, N config, N docs, N other) +- Documentation impact grade: HIGH | MEDIUM | LOW +- Impact categories: new_feature, api_change, config_change, breaking_change, enhancement, bug_fix +- New modules needed: N +- Existing modules to update: N +- Breaking changes requiring docs: N + +## Change analysis + +### Source code changes + + + +### Signals detected + + + +### Existing documentation affected + + + +### Coverage gaps identified + + + +## Requirements by priority + +### Critical + +#### REQ-001: <title> +- **Source**: <PR/commit> | <JIRA key> | <file path> +- **Summary**: +- **User impact**: +- **Documentation action**: + - [ ] Create `module-name.adoc` (PROCEDURE) + - [ ] Update `existing-module.adoc` — add new parameter section +- **Acceptance criteria**: + - [ ] <criterion> + - [ ] <criterion> +- **References**: + - <PR/commit link>: Source change + - `src/api/preferences.py:12-45`: Implementation + - [Upstream API docs](https://...): Reference + +### High + + +### Medium + + +### Low + + +## Documentation scope + +### New documentation needed + +| Requirement | Module type | Description | +|---|---|---| +| REQ-001 | 
Concept + Procedure + Reference | <description> | + +### Existing documentation to update + +| Requirement | File | What changed | +|---|---|---| +| REQ-002 | `modules/ref-api-endpoints.adoc:45` | API endpoint modified | + +### Breaking changes + +| Change | Migration needed | Deprecation notice | References | +|---|---|---|---| +| <change> | Yes/No | <notice> | <links> | + +## Related context + +### JIRA context + + + +### Web search findings + + + +### Existing documentation references + + + +## Sources consulted + +### Code changes +- <PR/commit url>: <what changed> +- `src/file.py:lines`: <what was found> + +### JIRA tickets +- [PROJ-123](url): <summary> + +### Pull requests / Merge requests +- [PR #42](url): <title> + +### External references +- [RFC 7636](url): <relevance> +- [Upstream API docs](url): <relevance> + +### Web search findings +- [Result title](url): <relevance> +``` + +### Report quality rules + +1. **Every requirement must have acceptance criteria** — specific, verifiable conditions the documentation must satisfy +2. **Every requirement must have references** — link to source code, PRs, JIRA tickets, or external docs that support it +3. **Documentation actions must be concrete** — name specific files to create or update, with module types (CONCEPT, PROCEDURE, REFERENCE) +4. **Omit empty sections** — if no breaking changes, omit the Breaking changes table. If no JIRA context, omit that subsection. +5. **Priority is based on user impact**, not code complexity. A one-line config default change that silently breaks user workflows is Critical. + +## Step 9: Reference tracking + +Track all URLs and file paths consulted throughout the analysis. Include in the "Sources consulted" section: +- PR/MR/commit URLs +- JIRA ticket URLs +- Code file paths with line numbers +- Web search result URLs (with relevance notes) +- Extracted article URLs +- Existing documentation file paths + +Same standard as requirements-analyst: every claim in the report should be traceable to a source. 
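The Step 3 aggregation rule above (map each detected signal to its minimum grade, then take the highest) can be sketched in a few lines of Python. This is an illustrative sketch, not shipped plugin code: the signal names mirror the `signals` dict that `gather_changes.py` emits, and `SIGNAL_MIN_GRADE` is an abbreviated subset of the signal-to-grade table in `reference/impact-criteria.md`.

```python
# Sketch of the "highest individual grade wins" aggregation from Step 3.
# SIGNAL_MIN_GRADE is an abbreviated, illustrative subset of the
# signal-to-grade table; it is not part of the plugin.
GRADE_ORDER = ["NONE", "LOW", "MEDIUM", "HIGH"]

SIGNAL_MIN_GRADE = {
    "api_surface_changes": "HIGH",
    "breaking_indicators": "HIGH",
    "new_files": "MEDIUM",
    "config_changes": "MEDIUM",
    "schema_changes": "MEDIUM",
}

def overall_grade(signals: dict) -> str:
    """Return the highest minimum grade triggered by any non-empty signal."""
    triggered = [
        SIGNAL_MIN_GRADE[name]
        for name, hits in signals.items()
        if hits and name in SIGNAL_MIN_GRADE
    ]
    return max(triggered, key=GRADE_ORDER.index, default="NONE")

# A config-only change grades MEDIUM; an API-surface file lifts it to HIGH.
print(overall_grade({"config_changes": ["config/app.yaml"]}))    # MEDIUM
print(overall_grade({"config_changes": ["config/app.yaml"],
                     "api_surface_changes": ["api/users.py"]}))  # HIGH
```

In this shape, a single HIGH signal dominates regardless of how many NONE-grade files the change also touches, which is exactly the behavior the aggregate grading section calls for.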
diff --git a/plugins/docs-tools/skills/commit-analyst/reference/impact-criteria.md b/plugins/docs-tools/skills/commit-analyst/reference/impact-criteria.md new file mode 100644 index 00000000..ee622b5c --- /dev/null +++ b/plugins/docs-tools/skills/commit-analyst/reference/impact-criteria.md @@ -0,0 +1,64 @@ +# Documentation Impact Grading Framework + +Grade each code change to determine documentation impact level and required actions. + +## Impact grades + +| Grade | Criteria | Examples | +|-------|----------|----------| +| **HIGH** | Major new features, architecture changes, new APIs, breaking changes, new user-facing workflows, new integrations | New operator install method, API v2 migration, new UI dashboard, new authentication provider | +| **MEDIUM** | Enhancements to existing features, new configuration options, changed defaults, deprecations, performance changes with user-visible impact | New CLI flag, updated default timeout, deprecated parameter, new supported platform | +| **LOW** | Minor UI text changes, small behavioral tweaks, additional supported values, error message improvements | New enum value, updated error message text, minor UX adjustment | +| **NONE** | Internal refactoring, test-only changes, CI/CD changes, dependency bumps, code cleanup, linting fixes | Test coverage increase, linter fixes, internal module rename, build script update | + +## Special handling + +- **QE/testing issues**: Grade as NONE unless they reveal user-facing behavioral changes +- **Security fixes (CVEs)**: Grade as HIGH if they require user action (config change, upgrade steps); MEDIUM if the fix is automatic +- **Bug fixes**: Grade based on whether the fix changes documented behavior or introduces new workarounds +- **Deprecations**: Grade as MEDIUM minimum — users need migration guidance even for soft deprecations + +## Signal-to-grade mapping + +Map signals from code analysis to minimum impact grades: + +| Signal | Minimum grade | Rationale | 
+|--------|--------------|-----------| +| New public API endpoint or route | HIGH | Users need to know how to call it | +| New file in api/, controllers/, handlers/ | HIGH | New user-facing surface area | +| Breaking change indicator (BREAKING, removed, migration) | HIGH | Users must take action | +| New configuration parameter or environment variable | MEDIUM | Users may need to set it | +| Changed default value | MEDIUM | Existing behavior changes silently | +| Deprecated function/parameter/endpoint | MEDIUM | Users need migration path | +| New file in source code (non-API) | MEDIUM | May indicate new capability | +| Schema change (protobuf, GraphQL, OpenAPI) | MEDIUM-HIGH | Contract change for consumers | +| Existing docs reference changed code | MEDIUM | Docs may be inaccurate | +| Config file structure change | MEDIUM | Users with custom configs affected | +| Test-only changes | NONE | No user-facing impact | +| CI/CD pipeline changes | NONE | Internal tooling | +| Dependency updates (no API change) | NONE | Transparent to users | +| Internal refactoring (no behavior change) | NONE | Implementation detail | + +## Documentation action categories + +Based on impact grade, determine what documentation actions are needed: + +| Change type | Typical actions | +|------------|----------------| +| New feature | Create Concept + Procedure + Reference modules | +| New API | Create Reference module (endpoints, parameters, responses) + Procedure (usage example) | +| Enhancement | Update existing Procedure or Reference module | +| Breaking change | Update existing modules + add migration Procedure | +| Deprecation | Update existing modules with deprecation notice + migration guidance | +| Config change | Update Reference module (parameters table) | +| Bug fix (behavior change) | Update Procedure or Concept if documented behavior changes | +| New integration | Create Concept (overview) + Procedure (setup) + Reference (configuration) | + +## Aggregate grading + +When a 
change spans multiple files and categories, the overall grade is the **highest individual grade** found. A single HIGH-signal file makes the entire change HIGH impact, even if other files are NONE. + +However, consider the change holistically: +- A new test file alongside a new API file confirms the API is intentional, not experimental +- Documentation files already modified in the same change may reduce the urgency (docs are being updated in-flight) +- Changes to generated code (`.pb.go`, `.gen.go`) inherit the grade of the schema that generated them, not their own diff --git a/plugins/docs-tools/skills/commit-analyst/scripts/gather_changes.py b/plugins/docs-tools/skills/commit-analyst/scripts/gather_changes.py new file mode 100755 index 00000000..ed7d2cd2 --- /dev/null +++ b/plugins/docs-tools/skills/commit-analyst/scripts/gather_changes.py @@ -0,0 +1,652 @@ +#!/usr/bin/env python3 +"""Extract code changes for documentation impact analysis. + +Runs deterministic extraction (diffs, file categorization, signal detection, +docs repo scanning) so the LLM can focus on judgment. Outputs JSON to stdout. + +Usage: + python3 gather_changes.py --commit <url-or-sha-or-branch> \ + [--repo <path>] [--base-branch <branch>] [--docs-root <path>] + +Exit codes: + 0 — success (JSON on stdout) + 1 — error (message on stderr) +""" + +import argparse +import json +import os +import re +import subprocess +import sys +from pathlib import Path +from urllib.parse import quote +from urllib.request import Request, urlopen + +SCRIPT_DIR = Path(__file__).resolve().parent +GIT_PR_READER = ( + SCRIPT_DIR / ".." / ".." 
/ "git-pr-reader" / "scripts" / "git_pr_reader.py" +).resolve() + +GITHUB_PR_RE = re.compile( + r"https?://github\.com/([^/]+/[^/]+)/pull/(\d+)" +) +GITLAB_MR_RE = re.compile( + r"https?://([^/]+)/(.+?)/-/merge_requests/(\d+)" +) +GITLAB_COMMIT_RE = re.compile( + r"https?://([^/]+)/(.+?)/-/commit/([0-9a-f]{7,40})" +) +SHA_RE = re.compile(r"^[0-9a-f]{7,40}$") +JIRA_KEY_RE = re.compile(r"\b([A-Z][A-Z0-9]+-\d+)\b") +GITHUB_ISSUE_RE = re.compile(r"(?<![/\w])#(\d+)\b") +BREAKING_RE = re.compile( + r"\b(BREAKING|deprecated?|removed?|migrat(?:e|ion)|obsolete)\b", re.IGNORECASE +) + +CATEGORIES = { + "documentation": [ + r"\.adoc$", r"\.rst$", r"^docs/", r"^documentation/", + r"^README", r"^CONTRIBUTING", r"^CHANGELOG", + ], + "tests": [ + r"^tests?/", r"/__tests__/", r"_test\.", r"\.test\.", + r"\.spec\.", r"Test\.(java|py|go|ts|js)$", + r"^testdata/", r"^fixtures?/", + ], + "config": [ + r"\.ya?ml$", r"\.toml$", r"\.ini$", r"\.cfg$", r"\.conf$", + r"\.env", r"^config/", r"\.properties$", + ], + "ci_cd": [ + r"^\.github/", r"\.gitlab-ci", r"[Jj]enkinsfile", + r"^\.travis", r"^\.circleci/", r"^Makefile$", r"^Tiltfile$", + ], + "build": [ + r"^Dockerfile", r"^docker-compose", r"^Containerfile", + r"go\.(mod|sum)$", r"package\.json$", r"requirements\.txt$", + r"Pipfile", r"Gemfile", r"Cargo\.(toml|lock)$", + r"pom\.xml$", r"build\.gradle", + ], +} + +API_PATTERNS = [ + r"^(?:src/)?api/", r"controllers?/", r"routes?/", r"handlers?/", + r"endpoints?/", r"\.proto$", r"\.graphql$", r"openapi", r"swagger", +] +CONFIG_SIGNAL_PATTERNS = [ + r"defaults?\.(ya?ml|json|toml)$", r"\.env(\.\w+)?$", + r"settings\.", r"config\.(ya?ml|json|toml|py|ts|js)$", +] + +MAX_DIFF_LINES = 100 +MAX_DOC_MATCHES_PER_FILE = 5 +MAX_CODE_CONTEXT_FILES = 5 + + +def _run(cmd, **kwargs): + """Run a command and return stdout, or None on failure.""" + try: + r = subprocess.run(cmd, capture_output=True, text=True, timeout=60, **kwargs) + return r.stdout if r.returncode == 0 else None + except 
(subprocess.TimeoutExpired, FileNotFoundError): + return None + + +def _err(msg): + print(f"Error: {msg}", file=sys.stderr) + sys.exit(1) + + +# ---- Ref type detection ---- + +def detect_ref_type(ref): + if GITHUB_PR_RE.match(ref): + return "github_pr" + if GITLAB_MR_RE.match(ref): + return "gitlab_mr" + if GITLAB_COMMIT_RE.match(ref): + return "gitlab_commit" + if SHA_RE.match(ref): + return "commit" + return "branch" + + +# ---- File categorization ---- + +def categorize_file(path): + for cat, patterns in CATEGORIES.items(): + for p in patterns: + if re.search(p, path, re.IGNORECASE): + return cat + return "source_code" + + +# ---- Extraction: PR/MR ---- + +def extract_pr(ref): + if not GIT_PR_READER.exists(): + _err(f"git_pr_reader.py not found at {GIT_PR_READER}") + + reader = str(GIT_PR_READER) + + info_raw = _run(["python3", reader, "info", ref, "--json"]) + info = json.loads(info_raw) if info_raw else {} + + files_raw = _run(["python3", reader, "files", ref, "--json"]) + raw_files = json.loads(files_raw) if files_raw else [] + + files = [ + { + "path": f.get("path", ""), + "status": f.get("status", "modified"), + "added": f.get("additions", 0), + "removed": f.get("deletions", 0), + } + for f in raw_files + ] + + diff_text = (_run(["python3", reader, "diff", ref]) or "").strip() + + metadata = _build_pr_metadata(ref, info) + return files, diff_text, metadata + + +def _build_pr_metadata(ref, info): + metadata = { + "title": info.get("title", ""), + "description": info.get("body", ""), + "labels": [], + "linked_issues": [], + "milestone": None, + } + + if GITHUB_PR_RE.match(ref): + out = _run(["gh", "pr", "view", ref, "--json", "labels,milestone"]) + if out: + data = json.loads(out) + metadata["labels"] = [ + l.get("name", "") for l in data.get("labels", []) + ] + ms = data.get("milestone") + if ms: + metadata["milestone"] = ms.get("title") + + text = f"{metadata['title']} {metadata['description']}" + metadata["linked_issues"] = 
sorted(set(JIRA_KEY_RE.findall(text))) + gh_refs = [f"#{n}" for n in GITHUB_ISSUE_RE.findall(text)] + metadata["linked_issues"].extend(gh_refs) + + return metadata + + +# ---- Extraction: GitLab MR (via git-pr-reader) ---- + +def extract_gitlab_mr(ref): + """Extract files, diff, and metadata from a GitLab MR URL via git-pr-reader.""" + if not GIT_PR_READER.exists(): + _err(f"git_pr_reader.py not found at {GIT_PR_READER}") + + reader = str(GIT_PR_READER) + + info_raw = _run(["python3", reader, "info", ref, "--json"]) + info = json.loads(info_raw) if info_raw else {} + + files_raw = _run(["python3", reader, "files", ref, "--json"]) + raw_files = json.loads(files_raw) if files_raw else [] + + files = [ + { + "path": f.get("path", ""), + "status": f.get("status", "modified"), + "added": f.get("additions", 0), + "removed": f.get("deletions", 0), + } + for f in raw_files + ] + + diff_text = (_run(["python3", reader, "diff", ref]) or "").strip() + + metadata = _build_pr_metadata(ref, info) + return files, diff_text, metadata + + +# ---- Extraction: GitLab commit URL (REST API) ---- + +def _gitlab_api_get(host, endpoint): + """GET a GitLab REST API endpoint. 
Returns parsed JSON or None.""" + url = f"https://{host}/api/v4{endpoint}" + token = os.environ.get("GITLAB_TOKEN") or os.environ.get("GITLAB_PRIVATE_TOKEN") + req = Request(url) + if token: + req.add_header("PRIVATE-TOKEN", token) + try: + with urlopen(req, timeout=30) as resp: + return json.loads(resp.read()) + except Exception: + return None + + +def _parse_gitlab_diffs(diff_data): + """Convert GitLab diff JSON array into files list and unified diff text.""" + files = [] + diff_lines = [] + for d in diff_data: + path = d.get("new_path", d.get("old_path", "")) + old_path = d.get("old_path", "") + + if d.get("new_file"): + status = "added" + elif d.get("deleted_file"): + status = "deleted" + elif d.get("renamed_file"): + status = "renamed" + else: + status = "modified" + + patch = d.get("diff", "") + added = sum(1 for ln in patch.split("\n") if ln.startswith("+") and not ln.startswith("+++")) + removed = sum(1 for ln in patch.split("\n") if ln.startswith("-") and not ln.startswith("---")) + + files.append({ + "path": path, "status": status, + "added": added, "removed": removed, + }) + + diff_lines.append(f"diff --git a/{old_path} b/{path}") + diff_lines.append(patch) + + return files, "\n".join(diff_lines) + + +def extract_gitlab_commit(ref): + """Extract files, diff, and metadata from a GitLab commit URL.""" + m = GITLAB_COMMIT_RE.match(ref) + host, project_path, sha = m.group(1), m.group(2), m.group(3) + encoded = quote(project_path, safe="") + + commit_info = _gitlab_api_get(host, f"/projects/{encoded}/repository/commits/{sha}") + diff_data = _gitlab_api_get(host, f"/projects/{encoded}/repository/commits/{sha}/diff") or [] + + files, diff_text = _parse_gitlab_diffs(diff_data) + + msg = (commit_info or {}).get("message", "") + metadata = { + "title": msg.split("\n")[0] if msg else "", + "description": msg, + "labels": [], + "linked_issues": sorted(set(JIRA_KEY_RE.findall(msg))), + "milestone": None, + } + + return files, diff_text, metadata + + +# ---- 
Extraction: local commit / branch ---- + +def extract_local(ref, repo, base_branch, ref_type): + if not repo: + _err("--repo is required for local commits and branches") + + if ref_type == "commit": + diff_out = _run(["git", "-C", repo, "show", "--format=", ref]) + numstat_out = _run( + ["git", "-C", repo, "show", "--numstat", "--format=", ref] + ) + msg_out = _run(["git", "-C", repo, "log", "--format=%B", "-1", ref]) + else: + range_spec = f"{base_branch}..{ref}" + diff_out = _run(["git", "-C", repo, "diff", range_spec]) + numstat_out = _run(["git", "-C", repo, "diff", "--numstat", range_spec]) + msg_out = _run(["git", "-C", repo, "log", "--oneline", range_spec]) + + diff_text = (diff_out or "").strip() + files = _parse_numstat(numstat_out or "") + desc = (msg_out or "").strip() + + metadata = { + "title": desc.split("\n")[0] if desc else "", + "description": desc, + "labels": [], + "linked_issues": sorted(set(JIRA_KEY_RE.findall(desc))), + "milestone": None, + } + + return files, diff_text, metadata + + +def _parse_numstat(text): + files = [] + for line in text.strip().split("\n"): + if not line.strip(): + continue + parts = line.split("\t") + if len(parts) != 3: + continue + added_s, removed_s, path = parts + try: + added = int(added_s) if added_s != "-" else 0 + removed = int(removed_s) if removed_s != "-" else 0 + except ValueError: + added, removed = 0, 0 + + # Clean rename syntax: {old => new} + clean_path = re.sub(r"\{[^}]* => ([^}]*)\}", r"\1", path) + status = "renamed" if "{" in path and "=>" in path else "modified" + files.append({ + "path": clean_path, "status": status, + "added": added, "removed": removed, + }) + return files + + +# ---- Analysis ---- + +def build_categories(files): + cats = {} + for name in list(CATEGORIES.keys()) + ["source_code"]: + cats[name] = {"files": [], "total_added": 0, "total_removed": 0} + + for f in files: + cat = categorize_file(f["path"]) + cats[cat]["files"].append(f) + cats[cat]["total_added"] += f.get("added", 0) + 
cats[cat]["total_removed"] += f.get("removed", 0) + return cats + + +def detect_signals(files, diff_text): + signals = { + "new_files": [], "deleted_files": [], "renamed_files": [], + "api_surface_changes": [], "config_changes": [], + "schema_changes": [], "docs_modified": [], + "breaking_indicators": [], + } + + for f in files: + path, status = f["path"], f.get("status", "modified") + + if status == "added": + signals["new_files"].append(path) + elif status == "deleted": + signals["deleted_files"].append(path) + elif status == "renamed": + signals["renamed_files"].append(path) + + for p in API_PATTERNS: + if re.search(p, path, re.IGNORECASE): + signals["api_surface_changes"].append(path) + break + + for p in CONFIG_SIGNAL_PATTERNS: + if re.search(p, path, re.IGNORECASE): + signals["config_changes"].append(path) + break + + if re.search(r"\.(proto|graphql|gql|avro|thrift)$", path, re.IGNORECASE): + signals["schema_changes"].append(path) + + if categorize_file(path) == "documentation": + signals["docs_modified"].append(path) + + if diff_text: + for line in diff_text.split("\n"): + if line.startswith("+") and not line.startswith("+++"): + if BREAKING_RE.search(line): + signals["breaking_indicators"].append(line[:200].strip()) + + for k in signals: + signals[k] = list(dict.fromkeys(signals[k])) + return signals + + +def _split_diff_by_file(diff_text): + sections = {} + current_file = None + current_lines = [] + + for line in diff_text.split("\n"): + if line.startswith("diff --git"): + if current_file: + sections[current_file] = "\n".join(current_lines) + match = re.search(r" b/(.+)$", line) + current_file = match.group(1) if match else None + current_lines = [line] + else: + current_lines.append(line) + + if current_file: + sections[current_file] = "\n".join(current_lines) + return sections + + +def extract_key_diffs(files, diff_text, signals): + high_signal = set( + signals.get("new_files", []) + + signals.get("api_surface_changes", []) + + 
signals.get("config_changes", []) + + signals.get("schema_changes", []) + ) + if not high_signal: + return [] + + per_file = _split_diff_by_file(diff_text) + key_diffs = [] + + for path in high_signal: + if path not in per_file: + continue + excerpt = per_file[path] + lines = excerpt.split("\n") + if len(lines) > MAX_DIFF_LINES: + excerpt = ( + "\n".join(lines[:MAX_DIFF_LINES]) + + f"\n... ({len(lines) - MAX_DIFF_LINES} more lines)" + ) + + if path in signals.get("new_files", []): + reason = "new_file" + elif path in signals.get("api_surface_changes", []): + reason = "api_change" + elif path in signals.get("config_changes", []): + reason = "config_change" + else: + reason = "schema_change" + + key_diffs.append({"file": path, "reason": reason, "excerpt": excerpt}) + + return key_diffs + + +def extract_code_context(files, repo, signals): + if not repo: + return [] + + targets = ( + signals.get("new_files", []) + + signals.get("api_surface_changes", []) + )[:MAX_CODE_CONTEXT_FILES] + + contexts = [] + for path in targets: + basename = Path(path).stem + grep_out = _run([ + "grep", "-rn", "-l", + "--include=*.py", "--include=*.ts", "--include=*.js", + "--include=*.go", "--include=*.java", "--include=*.rb", + basename, repo, + ]) + if not grep_out: + continue + importers = [ + ln.strip() for ln in grep_out.strip().split("\n") + if ln.strip() and path not in ln + ][:5] + if importers: + contexts.append({ + "file": path, + "context_type": "imports", + "detail": f"Referenced by: {', '.join(importers)}", + }) + return contexts + + +def scan_docs(files, docs_root, signals): + if not docs_root or not Path(docs_root).is_dir(): + return None + + find_out = _run([ + "find", docs_root, "-type", "f", + "(", "-name", "*.adoc", "-o", "-name", "*.md", ")", + "!", "-path", "*/.git/*", + "!", "-path", "*/node_modules/*", + "!", "-path", "*/.claude/*", + "!", "-path", "*/.work/*", + ]) + if not find_out: + return { + "docs_root": docs_root, "total_doc_files": 0, + "affected_files": [], 
+        "coverage_gaps": [],
+    }
+
+    doc_files = [f.strip() for f in find_out.strip().split("\n") if f.strip()]
+
+    # Build search terms from high-signal files
+    interesting = (
+        signals.get("api_surface_changes", [])
+        + signals.get("new_files", [])
+        + signals.get("config_changes", [])
+    )[:15]
+
+    search_terms = set()
+    skip_generic = {"src", "lib", "pkg", "cmd", "internal", "main", "app", "index"}
+    for path in interesting:
+        stem = Path(path).stem
+        if len(stem) > 3:
+            search_terms.add(stem)
+        for part in Path(path).parts:
+            if len(part) > 3 and part.lower() not in skip_generic:
+                search_terms.add(part)
+
+    if not search_terms:
+        return {
+            "docs_root": docs_root, "total_doc_files": len(doc_files),
+            "affected_files": [], "coverage_gaps": [],
+        }
+
+    affected = {}
+    for term in sorted(search_terms)[:10]:
+        grep_out = _run([
+            "grep", "-rn", "--include=*.adoc", "--include=*.md",
+            "-i", "-w", term, docs_root,
+        ])
+        if not grep_out:
+            continue
+        for line in grep_out.strip().split("\n")[:20]:
+            parts = line.split(":", 2)
+            if len(parts) < 3:
+                continue
+            doc_file = parts[0]
+            try:
+                line_num = int(parts[1])
+            except ValueError:
+                continue
+            text = parts[2].strip()[:200]
+
+            if doc_file not in affected:
+                affected[doc_file] = {"doc_file": doc_file, "matches": []}
+            if len(affected[doc_file]["matches"]) < MAX_DOC_MATCHES_PER_FILE:
+                affected[doc_file]["matches"].append({
+                    "line": line_num, "text": text, "matched_term": term,
+                })
+
+    matched_terms = {
+        m["matched_term"]
+        for info in affected.values()
+        for m in info["matches"]
+    }
+    gaps = []
+    for path in signals.get("new_files", []) + signals.get("api_surface_changes", []):
+        stem = Path(path).stem
+        if stem not in matched_terms and len(stem) > 3:
+            kind = (
+                "New file" if path in signals.get("new_files", [])
+                else "API change"
+            )
+            gaps.append({
+                "feature": stem,
+                "signal": f"{kind}: {path}",
+                "existing_coverage": "none",
+            })
+
+    return {
+        "docs_root": docs_root,
+        "total_doc_files": len(doc_files),
+        "affected_files": list(affected.values())[:20],
+        "coverage_gaps": gaps[:10],
+    }
+
+
+def main():
+    parser = argparse.ArgumentParser(
+        description="Extract code changes for documentation impact analysis"
+    )
+    parser.add_argument(
+        "--commit", required=True,
+        help="PR/MR URL, commit SHA, or branch name",
+    )
+    parser.add_argument("--repo", help="Path to local source repository")
+    parser.add_argument(
+        "--base-branch", default="main",
+        help="Base branch for comparison (default: main)",
+    )
+    parser.add_argument(
+        "--docs-root", default=os.getcwd(),
+        help="Docs repo root for scanning (default: cwd)",
+    )
+    args = parser.parse_args()
+
+    ref_type = detect_ref_type(args.commit)
+
+    if ref_type == "github_pr":
+        files, diff_text, metadata = extract_pr(args.commit)
+    elif ref_type == "gitlab_mr":
+        files, diff_text, metadata = extract_gitlab_mr(args.commit)
+    elif ref_type == "gitlab_commit":
+        files, diff_text, metadata = extract_gitlab_commit(args.commit)
+    else:
+        files, diff_text, metadata = extract_local(
+            args.commit, args.repo, args.base_branch, ref_type,
+        )
+
+    cats = build_categories(files)
+    signals = detect_signals(files, diff_text)
+    summary = {
+        "files_changed": len(files),
+        "lines_added": sum(f.get("added", 0) for f in files),
+        "lines_removed": sum(f.get("removed", 0) for f in files),
+    }
+    key_diffs = extract_key_diffs(files, diff_text, signals)
+    code_context = extract_code_context(files, args.repo, signals)
+    docs_scan = scan_docs(files, args.docs_root, signals)
+
+    output_ref_type = "pr" if ref_type in ("github_pr", "gitlab_mr") else ref_type
+    output = {
+        "ref_type": output_ref_type,
+        "ref": args.commit,
+        "repo": args.repo,
+        "pr_metadata": metadata,
+        "summary": summary,
+        "categories": cats,
+        "signals": signals,
+        "key_diffs": key_diffs,
+        "code_context": code_context,
+        "docs_scan": docs_scan,
+    }
+
+    json.dump(output, sys.stdout, indent=2)
+    print()
+
+
+if __name__ == "__main__":
+    main()
diff --git a/plugins/docs-tools/skills/docs-orchestrator/SKILL.md b/plugins/docs-tools/skills/docs-orchestrator/SKILL.md
index 356ea30d..ebef20da 100644
--- a/plugins/docs-tools/skills/docs-orchestrator/SKILL.md
+++ b/plugins/docs-tools/skills/docs-orchestrator/SKILL.md
@@ -2,7 +2,7 @@
 name: docs-orchestrator
 description: Documentation workflow orchestrator. Reads the step list from .claude/docs-workflow.yaml (or the plugin default). Runs steps sequentially, manages progress state, handles iteration and confirmation gates. Claude is the orchestrator — the YAML is a step list, not a workflow engine.
-argument-hint: <ticket> [--workflow <name>] [--pr <url>]... [--source-code-repo <url-or-path>] [--mkdocs] [--draft] [--docs-repo-path <path>] [--create-jira <PROJECT>]
+argument-hint: [<ticket>] [--commit <url>] [--workflow <name>] [--pr <url>]... [--source-code-repo <url-or-path>] [--mkdocs] [--draft] [--docs-repo-path <path>] [--create-jira <PROJECT>]
 allowed-tools: Read, Write, Glob, Grep, Edit, Bash, Skill, AskUserQuestion
 ---
@@ -27,7 +27,8 @@ bash ${CLAUDE_SKILL_DIR}/scripts/setup-hooks.sh
 When displaying available options to the user (e.g., on skill load or when asking for flags), reproduce the descriptions below **verbatim** — do not summarize or paraphrase them.
 
-- `$1` — JIRA ticket ID (required). If missing, STOP and ask the user.
+- `$1` — JIRA ticket ID or identifier (required unless `--commit` is provided). If both `$1` and `--commit` are absent, STOP and ask the user.
+- `--commit <url>` — URL to a commit, PR, or MR. When provided, `$1` is optional — an identifier is auto-derived from the URL (e.g., `repo-pr-42`). Automatically selects the commit-driven workflow (`defaults/commit-driven-workflow.yaml`). If `$1` is also provided, it is used as the identifier and `--commit` supplies the URL to the commit-analysis step. If `--workflow` is also provided, it is ignored (the commit-driven workflow is always used)
 - `--workflow <name>` — Use `.claude/docs-<name>.yaml` instead of `docs-workflow.yaml`. Allows running alternative pipelines (e.g., writing-only, review-only). Falls back to the plugin default at `skills/docs-orchestrator/defaults/docs-workflow.yaml` if no project-level YAML exists
 - `--pr <url>` — PR/MR URLs (repeatable, accumulated into a list). Accepts GitHub PRs (`gh` CLI) and GitLab MRs (`glab` CLI). Used both as requirements input (agent reads diffs/descriptions) and for source repo resolution (repo URL and branch derived from the first PR/MR). When multiple PRs from different repos are provided, all repos are resolved and treated equally as source material
 - `--mkdocs` — Use Material for MkDocs format instead of AsciiDoc. Propagates to the writing step (generates `.md` with MkDocs front matter) and style-review step (applies Markdown-appropriate rules). Sets `options.format` to `"mkdocs"` in the progress file
@@ -36,6 +37,14 @@ When displaying available options to the user (e.g., on skill load or when askin
 - `--source-code-repo <url-or-path>` — Source code repository for code evidence and requirements enrichment. Accepts remote URLs (https://, git@, ssh:// — shallow-cloned to `.claude/docs/<ticket>/code-repo/`) or local paths (used directly). Passed to requirements, code-evidence, and writing steps (mapped to their internal `--repo` flag). Without `--pr`, the entire repo is the subject matter; with `--pr`, the PR branch is checked out so code-evidence reflects the PR's state. Takes highest priority in source resolution, overriding `source.yaml` and PR-derived URLs
 - `--create-jira <PROJECT>` — Create a linked JIRA ticket in the specified project after the planning step completes. Activates the `create-jira` workflow step (guarded by `when: create_jira_project`). Requires `JIRA_API_TOKEN` to be set
+
+### Identifier derivation (commit-driven)
+
+When `--commit` is provided and `$1` is absent, derive the identifier from the URL:
+- GitHub PR `https://github.com/org/repo/pull/42` → `repo-pr-42`
+- GitLab MR `https://gitlab.com/org/repo/-/merge_requests/5` → `repo-mr-5`
+- Commit SHA → first 8 characters
+- Branch name → used as-is
+
 ### Examples
 
 ```bash
@@ -64,6 +73,16 @@
 # Custom workflow YAML
 /docs-orchestrator PROJ-123 --workflow quick
+
+# Commit-driven — analyze a PR and generate documentation
+/docs-orchestrator --commit https://github.com/org/repo/pull/42
+
+# Commit-driven with JIRA context
+/docs-orchestrator PROJ-123 --commit https://github.com/org/repo/pull/42
+
+# Commit-driven with explicit source repo for code evidence
+/docs-orchestrator --commit https://github.com/org/repo/pull/42 \
+  --source-code-repo /path/to/repo
 ```
 
 ## Resolve source repository
@@ -83,11 +102,13 @@
 python3 ${CLAUDE_SKILL_DIR}/scripts/resolve_source.py \
   [--pr <url>]...
 ```
 
+**Commit-driven source resolution**: When `--commit` is a PR/MR URL and no `--source-code-repo` is provided, pass the `--commit` URL to `resolve_source.py` as a `--pr` argument. This derives the source repo from the PR/MR so that `has_source_repo` is true and code-evidence can run. For local commits/branches (not PR/MR URLs), source resolution requires `--source-code-repo` or `--repo` to be provided explicitly.
+
 The script checks sources in priority order:
 
 1. **CLI `--source-code-repo` flag** — clone or verify the path
 2. **Per-ticket `source.yaml`** — read and apply existing config
-3. **PR-derived** — resolve repo URL and branch from `--pr` via `gh pr view` or `glab mr view`
+3. **PR-derived** — resolve repo URL and branch from `--pr` (or `--commit` PR/MR URL) via `gh pr view` or `glab mr view`
 4. **No source** — exit code 2, defer resolution until after requirements
 
 The script outputs JSON to stdout:
@@ -137,6 +158,7 @@ All fields except `repo` are optional. If `scope` is omitted, the entire reposit
 ### 1. Determine the YAML file
 
+- If `--commit` was provided → use `skills/docs-orchestrator/defaults/commit-driven-workflow.yaml` (ignores `--workflow` if both present)
 - If `--workflow <name>` was specified → `.claude/docs-<name>.yaml`
 - Otherwise → `.claude/docs-workflow.yaml`
 - If neither exists → use the plugin default at `skills/docs-orchestrator/defaults/docs-workflow.yaml`
@@ -341,6 +363,7 @@ Build the args string for the step skill. The orchestrator maps its user-facing
 1. **Always**: `<ticket> --base-path <base_path>` — the ticket ID and the **absolute** base output path
 2. **If source repo is resolved**: `--repo <repo_path>` — passed to steps that can use it
 3. **From orchestrator context**: Step-specific args from parsed CLI flags:
+   - `commit-analysis`: `--commit <url> [--repo <repo_path>] [--base-branch <branch>]` — if `$1` matches JIRA pattern `[A-Z]+-[0-9]+`, also pass `--ticket <$1>`
   - `requirements`: `[--pr <url>]... [--repo <repo_path>]`
   - `prepare-branch`: `[--draft] [--repo-path <path>]`
   - `code-evidence`: `--repo <repo_path> [--scope-include <globs>] [--scope-exclude <globs>] [--reindex]` — scope globs come from `source.yaml` or `options.source.scope` in the progress file
@@ -364,11 +387,11 @@ Skill: <step.skill>, args: "<constructed args>"
 2. Read the step's `step-result.json` sidecar if it exists in the output folder. Log a warning if it is missing (the step still counts as completed — sidecars are expected but not required for backward compatibility)
 3. Update the step's status to `"completed"` with the output folder path in the progress file
 4. Update the progress file's `updated_at` timestamp
-5. **If the just-completed step is `requirements` AND `options.source` is `null`** → run [Post-requirements source resolution](#post-requirements-source-resolution) before continuing to the next step. This may change `deferred` steps to `pending` or `skipped`
+5. **If the just-completed step is `requirements` or `commit-analysis` AND `options.source` is `null`** → run [Post-requirements source resolution](#post-requirements-source-resolution) before continuing to the next step. This may change `deferred` steps to `pending` or `skipped`
 
 ## Post-requirements source resolution
 
-This section triggers **only** when the `requirements` step completes AND `options.source` is still `null` (i.e., no source was resolved pre-flight).
+This section triggers **only** when the `requirements` or `commit-analysis` step completes AND `options.source` is still `null` (i.e., no source was resolved pre-flight).
 
 ### 1. Run the script with `--scan-requirements`
diff --git a/plugins/docs-tools/skills/docs-orchestrator/defaults/commit-driven-workflow.yaml b/plugins/docs-tools/skills/docs-orchestrator/defaults/commit-driven-workflow.yaml
new file mode 100644
index 00000000..1b8cdbcc
--- /dev/null
+++ b/plugins/docs-tools/skills/docs-orchestrator/defaults/commit-driven-workflow.yaml
@@ -0,0 +1,53 @@
+workflow:
+  name: commit-driven
+  description: >-
+    Documentation workflow driven by commit/PR/MR analysis. Analyzes code
+    changes for documentation impact, then runs the standard pipeline.
+
+  # All step outputs go to: .claude/docs/<ticket>/<step-name>/
+  # commit-analysis writes to requirements/ for planning step compatibility.
+  steps:
+    - name: commit-analysis
+      skill: docs-tools:docs-workflow-commit-analysis
+      description: Analyze commit/PR for documentation impact
+
+    - name: planning
+      skill: docs-tools:docs-workflow-planning
+      description: Create documentation plan
+      inputs: [commit-analysis]
+
+    - name: code-evidence
+      skill: docs-tools:docs-workflow-code-evidence
+      description: Retrieve code evidence from source repository
+      when: has_source_repo
+      inputs: [planning]
+
+    - name: prepare-branch
+      skill: docs-tools:docs-workflow-prepare-branch
+      description: Create a fresh branch from latest upstream default branch
+      inputs: [planning]
+
+    - name: writing
+      skill: docs-tools:docs-workflow-writing
+      description: Write documentation
+      inputs: [planning, prepare-branch, code-evidence]
+
+    - name: technical-review
+      skill: docs-tools:docs-workflow-tech-review
+      description: Technical accuracy review
+      inputs: [writing]
+
+    - name: style-review
+      skill: docs-tools:docs-workflow-style-review
+      description: Style guide compliance review
+      inputs: [writing, technical-review]
+
+    - name: commit
+      skill: docs-tools:docs-workflow-commit
+      description: Commit and push documentation changes
+      inputs: [writing, style-review, technical-review]
+
+    - name: create-mr
+      skill: docs-tools:docs-workflow-create-mr
+      description: Create merge request or pull request
+      inputs: [commit]
diff --git a/plugins/docs-tools/skills/docs-orchestrator/schema/step-result-schema.md b/plugins/docs-tools/skills/docs-orchestrator/schema/step-result-schema.md
index ac3fa074..122e4ba4 100644
--- a/plugins/docs-tools/skills/docs-orchestrator/schema/step-result-schema.md
+++ b/plugins/docs-tools/skills/docs-orchestrator/schema/step-result-schema.md
@@ -299,6 +299,28 @@ When an existing linked ticket is found:
 
 | `skipped` | boolean | Whether JIRA creation was skipped | Orchestrator |
 | `skip_reason` | string\|null | Reason when skipped (e.g., `"existing_link"`) | Orchestrator |
 
+### commit-analysis
+
+```json
+{
+  "schema_version": 1,
+  "step": "commit-analysis",
+  "ticket": "repo-pr-42",
+  "completed_at": "2026-04-24T10:00:00Z",
+  "title": "Add user preferences API",
+  "impact_grade": "HIGH",
+  "files_analyzed": 15,
+  "doc_impact_categories": ["new_feature", "api_change"]
+}
+```
+
+| Field | Type | Description | Consumed by |
+|---|---|---|---|
+| `title` | string | First heading from requirements.md (max 80 chars, ticket prefix stripped) | `create_mr.sh` — PR/MR title |
+| `impact_grade` | string | `"HIGH"`, `"MEDIUM"`, `"LOW"`, or `"NONE"` | Orchestrator (informational) |
+| `files_analyzed` | integer | Number of files in the analyzed commit/PR | Informational (orchestrator summary) |
+| `doc_impact_categories` | string[] | Categories of documentation impact (e.g., `"new_feature"`, `"api_change"`, `"config_change"`, `"breaking_change"`) | Informational |
+
 ## Backward compatibility
 
 Downstream consumers use a sidecar-first pattern: read from `step-result.json` when present, fall back to parsing the markdown output when absent. This ensures in-flight workflows from before sidecar adoption continue to work.
diff --git a/plugins/docs-tools/skills/docs-workflow-commit-analysis/SKILL.md b/plugins/docs-tools/skills/docs-workflow-commit-analysis/SKILL.md
new file mode 100644
index 00000000..7dd4b6cd
--- /dev/null
+++ b/plugins/docs-tools/skills/docs-workflow-commit-analysis/SKILL.md
@@ -0,0 +1,114 @@
+---
+name: docs-workflow-commit-analysis
+description: >-
+  Analyze documentation impact of a commit, PR/MR, or branch diff.
+  Produces requirements.md for the planning step. Invoked by the
+  orchestrator as the first step in the commit-driven workflow.
+argument-hint: "<ticket> --base-path <path> --commit <url> [--repo <path>] [--base-branch <branch>]"
+allowed-tools: Read, Write, Bash, Grep, Glob, Skill, AskUserQuestion
+---
+
+# Commit Analysis Step
+
+Step skill for the docs-orchestrator pipeline.
+Follows the step skill contract: **parse args → invoke skill → verify output → write sidecar**.
+
+## Arguments
+
+- `$1` — Identifier (ticket ID or auto-derived from commit URL)
+- `--base-path <path>` — Base output path (e.g., `.claude/docs/repo-pr-42`)
+- `--commit <url>` — Commit, PR, or MR URL (required)
+- `--repo <path>` — Path to local source repository (optional)
+- `--base-branch <branch>` — Base branch for comparison (optional)
+- `--ticket <JIRA-ID>` — JIRA ticket for enrichment (optional, passed if `$1` matches the JIRA pattern)
+
+## Output
+
+```
+<base-path>/requirements/requirements.md
+```
+
+Writes to the same location as the standard requirements step so the planning step reads from the same path regardless of workflow.
+
+## Execution
+
+### 1. Parse arguments
+
+Extract `$1`, `--base-path`, `--commit`, `--repo`, `--base-branch`, and `--ticket` from the args string.
+
+Set the output path:
+
+```bash
+OUTPUT_DIR="${BASE_PATH}/requirements"
+OUTPUT_FILE="${OUTPUT_DIR}/requirements.md"
+mkdir -p "$OUTPUT_DIR"
+```
+
+### 2. Invoke commit-analyst skill
+
+Build the skill invocation args:
+
+```
+--commit <COMMIT_URL> --base-path <BASE_PATH>
+```
+
+Add optional flags:
+- If `--ticket` was provided OR if `$1` matches JIRA pattern `[A-Z]+-[0-9]+`: add `--ticket <value>`
+- If `--repo` was provided: add `--repo <path>`
+- If `--base-branch` was provided: add `--base-branch <branch>`
+
+Invoke the skill:
+
+```
+Skill: docs-tools:commit-analyst, args: "<constructed args>"
+```
+
+### 3. Check impact grade
+
+After the skill completes, read `OUTPUT_FILE`. Find the line containing "Overall documentation impact" and extract the grade (HIGH, MEDIUM, LOW, or NONE).
+
+If the grade is **NONE**:
+
+Use AskUserQuestion to prompt the user:
+
+> No documentation impact detected for this change. Continue the pipeline anyway?
+
+Options:
+- **No — stop here (Recommended)**: Exit. Do not delete `OUTPUT_FILE` (the NONE report is still useful as a record). If the run is resumed later, the orchestrator sees this step as complete, and the planning step will find only minimal requirements.
+- **Yes — continue anyway**: Proceed normally. The planning step will work with whatever minimal requirements exist.
+
+### 4. Verify output
+
+Confirm `OUTPUT_FILE` exists and is non-empty.
+
+If missing, report an error: "Commit analysis failed — no output file at `<OUTPUT_FILE>`."
+
+### 5. Write step-result.json
+
+Run the title-extraction script (reused from the requirements step):
+
+```bash
+python3 ${CLAUDE_PLUGIN_ROOT}/skills/docs-workflow-requirements/scripts/parse_title.py "<OUTPUT_FILE>"
+```
+
+The script prints `{"title": "..."}` to stdout. If it exits non-zero, report the stderr message as an error.
+
+Extract the impact grade from the output file (the "Overall documentation impact" line).
+
+Count files analyzed from the Summary section.
+
+Extract documentation impact categories from the Summary section's "Impact categories" line (e.g., `new_feature, api_change`). Split on commas and trim whitespace.
+
+Write the sidecar to `${OUTPUT_DIR}/step-result.json`:
+
+```json
+{
+  "schema_version": 1,
+  "step": "commit-analysis",
+  "ticket": "<IDENTIFIER>",
+  "completed_at": "<current ISO 8601 timestamp>",
+  "title": "<first heading from parse_title.py, max 80 chars>",
+  "impact_grade": "HIGH|MEDIUM|LOW|NONE",
+  "files_analyzed": 15,
+  "doc_impact_categories": ["new_feature", "api_change"]
+}
+```
diff --git a/plugins/docs-tools/skills/docs-workflow-start/SKILL.md b/plugins/docs-tools/skills/docs-workflow-start/SKILL.md
index a967df17..e4b0fe3d 100644
--- a/plugins/docs-tools/skills/docs-workflow-start/SKILL.md
+++ b/plugins/docs-tools/skills/docs-workflow-start/SKILL.md
@@ -1,7 +1,7 @@
 ---
 name: docs-workflow-start
 description: Interactive entry point for the docs workflow. When invoked with no CLI switches, uses AskUserQuestion to gather configuration. Supports full workflow, individual steps with auto-resolved prerequisites, and resuming previous runs. When switches are provided, passes through directly to docs-orchestrator.
-argument-hint: "[<ticket>] [--workflow <name>] [--pr <url>]... [--source-code-repo <url-or-path>] [--mkdocs] [--draft] [--docs-repo-path <path>] [--create-jira <PROJECT>]"
+argument-hint: "[<ticket>] [--commit <url>] [--workflow <name>] [--pr <url>]... [--source-code-repo <url-or-path>] [--mkdocs] [--draft] [--docs-repo-path <path>] [--create-jira <PROJECT>]"
 allowed-tools: Read, Write, Glob, Grep, Bash, Skill, AskUserQuestion
 ---
@@ -42,7 +42,7 @@ If no ticket ID was provided in args, ask the user conversationally:
 
 > What is the JIRA ticket ID? (e.g., PROJ-123)
 
-The ticket ID is required. After obtaining it, proceed to step 2.
+The ticket ID is required for the standard and specific-step workflows; for the commit-driven path (selected in step 2), it is optional. Proceed to step 2 with or without it.
 
 ### Step 2: Action selection — call AskUserQuestion
 
@@ -53,6 +53,7 @@ You MUST call the AskUserQuestion tool now with 1 question. Do not skip this.
 
 | Option | Description |
 |--------|-------------|
 | Run full workflow (Recommended) | Run the complete docs pipeline from requirements through to MR creation |
+| Analyze a commit/PR for documentation impact | Analyze code changes and generate documentation (commit-driven workflow) |
 | Run specific step(s) | Run one or more individual workflow steps with prerequisites included automatically |
 | Resume existing workflow | Continue a previously started workflow for this ticket |
 
@@ -60,6 +61,7 @@ Wait for the user's answer before proceeding.
 
 - If **"Resume existing workflow"**: skip steps 3–4 and go directly to step 5 (resume path).
 - If **"Run full workflow"**: proceed to step 3A.
+- If **"Analyze a commit/PR for documentation impact"**: proceed to step 3C.
 - If **"Run specific step(s)"**: proceed to step 3B.
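Reviewer note: the identifier-derivation rules added to the orchestrator SKILL.md earlier in this diff are mechanical enough to sketch in code. The helper below is hypothetical (it is not part of this change; the orchestrator itself derives the identifier), shown only to make the documented rules concrete:

```python
import re

# Hypothetical sketch of the documented identifier-derivation rules.
# Not part of this diff; purely illustrative.
def derive_identifier(ref: str) -> str:
    m = re.match(r"https://github\.com/[^/]+/(?P<repo>[^/]+)/pull/(?P<n>\d+)", ref)
    if m:
        return f"{m['repo']}-pr-{m['n']}"   # GitHub PR → repo-pr-42
    m = re.match(
        r"https://gitlab\.com/(?:[^/]+/)*(?P<repo>[^/]+)/-/merge_requests/(?P<n>\d+)",
        ref,
    )
    if m:
        return f"{m['repo']}-mr-{m['n']}"   # GitLab MR → repo-mr-5
    if re.fullmatch(r"[0-9a-fA-F]{7,40}", ref):
        return ref[:8]                      # commit SHA → first 8 characters
    return ref                              # branch name → used as-is
```

A bare branch name that happens to look like a hex SHA would be treated as a SHA here; the real ref-type detection lives in `gather_changes.py` and is the authority.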
 ### Step 3A: Full workflow configuration — call AskUserQuestion
@@ -124,10 +126,43 @@ After receiving the answer, determine which configuration questions are relevant
 If any questions are relevant, call AskUserQuestion with those questions (same text and options as step 3A). If no questions are relevant, proceed to step 4.
 
+### Step 3C: Commit-driven configuration
+
+You MUST complete this step before proceeding. Do not skip this.
+
+**Q1: What is the commit, PR, or MR URL?**
+
+Ask the user conversationally (not via AskUserQuestion — URLs are always free-text):
+
+> Enter the commit, PR, or MR URL (e.g., `https://github.com/org/repo/pull/42` or `https://gitlab.com/org/repo/-/merge_requests/5`):
+
+Wait for the user's response. This is the only required input. After receiving the URL, ask the remaining questions together in a single AskUserQuestion call:
+
+**Q2: Do you have a related JIRA ticket?**
+
+| Option | Description |
+|--------|-------------|
+| No (Recommended) | Derive context from the commit/PR only |
+| Yes | Provide a JIRA ticket for additional context |
+
+**Q3: What output format should the documentation use?** (same as Step 3A/Q1)
+
+**Q4: Where should the documentation be written?** (same as Step 3A/Q3)
+
+Wait for all answers before proceeding.
+
+If the user selected "Yes" for JIRA ticket, collect it as a follow-up in step 4. Source code repo is NOT asked — it is derived from the commit/PR URL automatically.
+
 ### Step 4: Free-text follow-ups
 
 Based on answers from step 3, collect any needed free-text inputs. Use AskUserQuestion with `textInput: true` for each value, so the user has a clear input prompt. Only ask questions that apply:
 
+**If "Yes" was selected for JIRA ticket (commit-driven path)**:
+
+Ask via AskUserQuestion (textInput): "Enter the JIRA ticket ID (e.g., PROJ-123):"
+
+Maps to `$1` (positional arg before flags).
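Reviewer note: a minimal sketch of how the step 3C answers assemble into the orchestrator args for the commit-driven path. The function and its signature are illustrative, not part of this change; the authoritative mapping is the flag table in step 5:

```python
# Illustrative only: the optional JIRA ticket becomes the positional $1,
# placed before flags; the commit/PR URL becomes --commit.
def build_commit_driven_args(commit_url, ticket=None, mkdocs=False):
    parts = []
    if ticket:
        parts.append(ticket)              # $1: positional, before flags
    parts.append(f"--commit {commit_url}")
    if mkdocs:
        parts.append("--mkdocs")          # output-format answer
    return " ".join(parts)
```

With a ticket this yields the "Commit-driven (with JIRA)" invocation shape from the examples; without one, just `--commit <url>`.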
+
 **If "Yes — I have a PR URL"** was selected:
 
 Ask via AskUserQuestion (textInput): "Enter the first PR/MR URL:"
@@ -175,6 +210,8 @@ Build the args string from collected answers:
 
 | Answer | CLI flag |
 |--------|----------|
+| Commit/PR URL (commit-driven path) | `--commit <url>` |
+| JIRA ticket (commit-driven path) | `$1` (positional arg before flags) |
 | Material for MkDocs | `--mkdocs` |
 | PR URL(s) | `--pr <url>` (repeat for each URL) |
 | Repo URL or path | `--source-code-repo <url-or-path>` |
@@ -196,12 +233,24 @@ Invoke the orchestrator with the ticket ID and all constructed flags:
 
 ```
 Skill: docs-orchestrator, args: "<ticket> <constructed flags>"
 ```
 
-Example:
+Examples:
 
 ```
 Skill: docs-orchestrator, args: "PROJ-123 --mkdocs --pr https://github.com/org/repo/pull/42 --draft"
 ```
 
+Commit-driven (with JIRA):
+
+```
+Skill: docs-orchestrator, args: "PROJ-123 --commit https://github.com/org/repo/pull/42"
+```
+
+Commit-driven (without JIRA):
+
+```
+Skill: docs-orchestrator, args: "--commit https://gitlab.com/org/repo/-/merge_requests/5"
+```
+
 #### Resume execution
 
 For the resume path (user selected "Resume existing workflow" in step 2):