diff --git a/README.md b/README.md
index cd5db07..0629145 100644
--- a/README.md
+++ b/README.md
@@ -123,8 +123,12 @@ Everything above is what one user gets. Now scale it up: when you join a shared
## News
+- **2026/04/22** — Added a bilingual dashboard with `skillclaw dashboard sync` and `skillclaw dashboard serve` for inspecting local/shared skills, validation progress, version history, and session traces.
+- **2026/04/20** — Added [Codex](https://github.com/openai/codex) and [Claude Code](https://docs.anthropic.com/en/docs/claude-code) integration with proxy auto-configuration, native skills-directory defaults, and `doctor` / `restore` commands.
+- **2026/04/17** — Added [QwenPaw](https://github.com/agentscope-ai/QwenPaw) integration and updated the docs for broader multi-agent compatibility.
+- **2026/04/17** — Added full [Hermes](https://github.com/NousResearch/hermes-agent) integration, per-turn skill tracking, `doctor hermes`, `skillclaw skills *` management commands, and a major docs overhaul.
- **2026/04/14** — WeChat discussion group is live! [Join the group](assets/image.png) to chat with us.
-- **2026/04/14** — Seamless integration with [Hermes](https://github.com/NousResearch/hermes-agent) is now available.
+- **2026/04/14** — Initial [Hermes](https://github.com/NousResearch/hermes-agent) support landed together with the first README refresh.
- **2026/04/12** — Active discussion with [Deer-Flow](https://github.com/bytedance/deer-flow/discussions/2133) on cross-framework skill sharing.
- **2026/04/11** — SkillClaw ranked **#2 Paper of the Day** on [Hugging Face Daily Papers](https://huggingface.co/papers/2604.08377)!
- **2026/04/10** — SkillClaw is now open source! Code released on [GitHub](https://github.com/AMAP-ML/SkillClaw).
@@ -311,6 +315,46 @@ skillclaw validation run-once --force
`skillclaw start --daemon` will automatically run the background validator afterward. `run-once --force` is the quickest way to test the path without waiting for the idle timer.
+### Optional: inspect skills and sessions with the dashboard
+
+The dashboard is a local visualization layer for the current SkillClaw snapshot. It is useful when you want to inspect:
+
+- local skills and whether they match the shared official version
+- candidate validation jobs and their current status
+- published shared skills and version history
+- the local and shared sessions that drove each skill update
+
+The dashboard commands are available from the same `skillclaw` install:
+
+```bash
+skillclaw dashboard sync
+skillclaw dashboard serve
+```
+
+If you want to point the dashboard at a local shared root and a specific group:
+
+```bash
+skillclaw dashboard sync \
+ --sharing-local-root /path/to/shared/root \
+ --sharing-group-id my-group \
+ --sharing-user-alias alice
+
+skillclaw dashboard serve \
+ --host 127.0.0.1 \
+ --port 3791 \
+ --sharing-local-root /path/to/shared/root \
+ --sharing-group-id my-group \
+ --sharing-user-alias alice
+```
+
+Then open:
+
+```text
+http://127.0.0.1:3791
+```
+
+By default, `serve` rebuilds the snapshot on startup. If you already ran `skillclaw dashboard sync`, you can start faster with `--no-sync-on-start`.
+
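+For iterative use, rebuild the snapshot once and skip the rebuild on later restarts:
+
+```bash
+# one-time (or periodic) snapshot rebuild
+skillclaw dashboard sync
+
+# later restarts reuse the existing SQLite snapshot
+skillclaw dashboard serve --no-sync-on-start
+```
+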
## Server Guide
The evolve server is the shared backend for one user or many users. It can run locally for a personal setup, or remotely for a team setup.
diff --git a/assets/README_ZH.md b/assets/README_ZH.md
index 776d593..312b26b 100644
--- a/assets/README_ZH.md
+++ b/assets/README_ZH.md
@@ -122,8 +122,12 @@ SkillClaw 不是让 Hermes 学更多,而是让它学到的一切,真正变
## 动态
+- **2026/04/22** — 新增支持中英文切换的 dashboard,可通过 `skillclaw dashboard sync` 和 `skillclaw dashboard serve` 查看本地 / 共享 skill、候选验证进度、版本历史与会话追溯。
+- **2026/04/20** — 新增 [Codex](https://github.com/openai/codex) 与 [Claude Code](https://docs.anthropic.com/en/docs/claude-code) 集成,支持自动接入代理、使用各自原生 skills 目录,并提供 `doctor` / `restore` 命令。
+- **2026/04/17** — 新增 [QwenPaw](https://github.com/agentscope-ai/QwenPaw) 集成,并同步更新文档以覆盖更多 Agent 框架。
+- **2026/04/17** — 补齐完整的 [Hermes](https://github.com/NousResearch/hermes-agent) 集成能力,加入逐轮 skill 使用追踪、`doctor hermes`、`skillclaw skills *` 管理命令,以及一轮文档重构。
- **2026/04/14** — 微信讨论群已开放和我们交流。
-- **2026/04/14** — 已支持与 [Hermes](https://github.com/NousResearch/hermes-agent) 无缝集成。
+- **2026/04/14** — 初步接入 [Hermes](https://github.com/NousResearch/hermes-agent),并完成第一轮 README 改版。
- **2026/04/12** — 正在与 [Deer-Flow](https://github.com/bytedance/deer-flow/discussions/2133) 讨论跨框架技能共享。
- **2026/04/11** — SkillClaw 在 [Hugging Face Daily Papers](https://huggingface.co/papers/2604.08377) 上获得**当日第 2 名**!
- **2026/04/10** — SkillClaw 正式开源!代码已发布在 [GitHub](https://github.com/AMAP-ML/SkillClaw)。
diff --git a/assets/image.png b/assets/image.png
index deed04a..d4bfd26 100644
Binary files a/assets/image.png and b/assets/image.png differ
diff --git a/pyproject.toml b/pyproject.toml
index b1aad2a..3a4a070 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -55,6 +55,9 @@ skillclaw-evolve-server = "evolve_server.__main__:main"
where = ["."]
include = ["skillclaw*", "evolve_server*"]
+[tool.setuptools.package-data]
+skillclaw = ["dashboard_assets/*"]
+
[tool.ruff]
line-length = 120
diff --git a/skillclaw/cli.py b/skillclaw/cli.py
index 7895283..b675c1d 100644
--- a/skillclaw/cli.py
+++ b/skillclaw/cli.py
@@ -545,6 +545,170 @@ def validation_run_once(force: bool):
click.echo(f"{key}: {value}")
+@skillclaw.group()
+def dashboard():
+ """Dashboard and skill visualization commands."""
+
+
+def _apply_dashboard_runtime_overrides(
+ cfg,
+ *,
+ host: str | None = None,
+ port: int | None = None,
+ db_path: str | None = None,
+ no_sync_on_start: bool = False,
+ sharing_local_root: str | None = None,
+ sharing_group_id: str | None = None,
+ sharing_user_alias: str | None = None,
+ include_shared: bool | None = None,
+ evolve_server_url: str | None = None,
+):
+ if host:
+ cfg.dashboard_host = host
+ if port:
+ cfg.dashboard_port = port
+ if db_path:
+ cfg.dashboard_db_path = db_path
+ if no_sync_on_start:
+ cfg.dashboard_sync_on_start = False
+ if sharing_local_root:
+ cfg.sharing_enabled = True
+ cfg.sharing_backend = "local"
+ cfg.sharing_local_root = sharing_local_root
+ if sharing_group_id:
+ cfg.sharing_group_id = sharing_group_id
+ if sharing_user_alias:
+ cfg.sharing_user_alias = sharing_user_alias
+ if include_shared is not None:
+ cfg.dashboard_include_shared = include_shared
+ if evolve_server_url is not None:
+ cfg.dashboard_evolve_server_url = evolve_server_url
+ return cfg
+
+
+@dashboard.command(name="sync")
+@click.option(
+ "--db-path",
+ type=click.Path(dir_okay=False, path_type=str),
+ default=None,
+ help="Override dashboard SQLite file path.",
+)
+@click.option(
+ "--sharing-local-root",
+ type=click.Path(file_okay=False, path_type=str),
+ default=None,
+ help="Use a local filesystem directory as the shared storage root for dashboard sync.",
+)
+@click.option("--sharing-group-id", type=str, default=None, help="Override shared storage group id.")
+@click.option("--sharing-user-alias", type=str, default=None, help="Override sharing user alias.")
+@click.option(
+ "--include-shared/--no-include-shared",
+ default=None,
+ help="Control whether shared storage is included in the dashboard snapshot.",
+)
+@click.option("--evolve-server-url", type=str, default=None, help="Override evolve server base URL.")
+def dashboard_sync(
+ db_path: str | None,
+ sharing_local_root: str | None,
+ sharing_group_id: str | None,
+ sharing_user_alias: str | None,
+ include_shared: bool | None,
+ evolve_server_url: str | None,
+):
+ """Refresh the dashboard SQLite projection."""
+ from .dashboard_server import DashboardService
+
+ cs = ConfigStore()
+ cfg = _apply_dashboard_runtime_overrides(
+ cs.to_skillclaw_config(),
+ db_path=db_path,
+ sharing_local_root=sharing_local_root,
+ sharing_group_id=sharing_group_id,
+ sharing_user_alias=sharing_user_alias,
+ include_shared=include_shared,
+ evolve_server_url=evolve_server_url,
+ )
+ service = DashboardService(cfg)
+ result = service.sync()
+ summary = result["summary"]
+ click.echo(
+ f"Dashboard snapshot synced: "
+ f"{summary['skills']} skills, "
+ f"{summary['sessions']} sessions, "
+ f"{summary['validation_jobs']} validation jobs."
+ )
+ click.echo(f"SQLite: {cfg.dashboard_db_path}")
+ warnings = summary.get("warnings") or []
+ if warnings:
+ click.echo("Warnings:")
+ for item in warnings:
+ click.echo(f" - {item}")
+
+
+@dashboard.command(name="serve")
+@click.option("--host", type=str, default=None, help="Override dashboard host.")
+@click.option("--port", type=int, default=None, help="Override dashboard port.")
+@click.option(
+ "--db-path",
+ type=click.Path(dir_okay=False, path_type=str),
+ default=None,
+ help="Override dashboard SQLite file path.",
+)
+@click.option(
+ "--no-sync-on-start",
+ is_flag=True,
+ default=False,
+ help="Start the dashboard without rebuilding the snapshot first.",
+)
+@click.option(
+ "--sharing-local-root",
+ type=click.Path(file_okay=False, path_type=str),
+ default=None,
+ help="Use a local filesystem directory as the shared storage root while serving the dashboard.",
+)
+@click.option("--sharing-group-id", type=str, default=None, help="Override shared storage group id.")
+@click.option("--sharing-user-alias", type=str, default=None, help="Override sharing user alias.")
+@click.option(
+ "--include-shared/--no-include-shared",
+ default=None,
+ help="Control whether shared storage is included in the dashboard snapshot.",
+)
+@click.option("--evolve-server-url", type=str, default=None, help="Override evolve server base URL.")
+def dashboard_serve(
+ host: str | None,
+ port: int | None,
+ db_path: str | None,
+ no_sync_on_start: bool,
+ sharing_local_root: str | None,
+ sharing_group_id: str | None,
+ sharing_user_alias: str | None,
+ include_shared: bool | None,
+ evolve_server_url: str | None,
+):
+ """Serve the dashboard UI and API."""
+ from .dashboard_server import serve_dashboard
+
+ cs = ConfigStore()
+ cfg = _apply_dashboard_runtime_overrides(
+ cs.to_skillclaw_config(),
+ host=host,
+ port=port,
+ db_path=db_path,
+ no_sync_on_start=no_sync_on_start,
+ sharing_local_root=sharing_local_root,
+ sharing_group_id=sharing_group_id,
+ sharing_user_alias=sharing_user_alias,
+ include_shared=include_shared,
+ evolve_server_url=evolve_server_url,
+ )
+
+ click.echo(
+ f"Starting SkillClaw dashboard at http://{cfg.dashboard_host}:{cfg.dashboard_port} "
+ f"(db: {cfg.dashboard_db_path})"
+ )
+ serve_dashboard(cfg)
+
+
@skillclaw.group()
def skills():
"""Skill management commands."""
diff --git a/skillclaw/config.py b/skillclaw/config.py
index 9601189..b5a5c46 100644
--- a/skillclaw/config.py
+++ b/skillclaw/config.py
@@ -101,6 +101,17 @@ class SkillClawConfig:
validation_max_jobs_per_day: int = 5
validation_max_concurrency: int = 1
+ # ------------------------------------------------------------------ #
+ # Dashboard #
+ # ------------------------------------------------------------------ #
+ dashboard_enabled: bool = False
+ dashboard_host: str = "127.0.0.1"
+ dashboard_port: int = 3788
+ dashboard_db_path: str = "~/.skillclaw/dashboard.db"
+ dashboard_sync_on_start: bool = True
+ dashboard_include_shared: bool = True
+ dashboard_evolve_server_url: str = ""
+
# ------------------------------------------------------------------ #
# Cloud / Bedrock #
# ------------------------------------------------------------------ #
diff --git a/skillclaw/config_store.py b/skillclaw/config_store.py
index 9bfa417..95ef4f2 100644
--- a/skillclaw/config_store.py
+++ b/skillclaw/config_store.py
@@ -78,6 +78,15 @@
"max_jobs_per_day": 5,
"max_concurrency": 1,
},
+ "dashboard": {
+ "enabled": False,
+ "host": "127.0.0.1",
+ "port": 3788,
+ "db_path": str(CONFIG_DIR / "dashboard.db"),
+ "sync_on_start": True,
+ "include_shared": True,
+ "evolve_server_url": "",
+ },
}
@@ -253,6 +262,7 @@ def to_skillclaw_config(self) -> SkillClawConfig:
sharing = data.get("sharing", {})
validation = data.get("validation", {})
+ dashboard = data.get("dashboard", {})
sharing_backend = _infer_sharing_backend(sharing)
sharing_endpoint = _first_non_empty(sharing, "endpoint")
sharing_bucket = _first_non_empty(sharing, "bucket")
@@ -331,6 +341,15 @@ def to_skillclaw_config(self) -> SkillClawConfig:
validation_poll_interval_seconds=int(validation.get("poll_interval_seconds", 60)),
validation_max_jobs_per_day=int(validation.get("max_jobs_per_day", 5)),
validation_max_concurrency=max(1, int(validation.get("max_concurrency", 1))),
+ dashboard_enabled=bool(dashboard.get("enabled", False)),
+ dashboard_host=str(dashboard.get("host", "127.0.0.1") or "127.0.0.1"),
+ dashboard_port=int(dashboard.get("port", 3788) or 3788),
+ dashboard_db_path=str(
+ dashboard.get("db_path", str(CONFIG_DIR / "dashboard.db")) or str(CONFIG_DIR / "dashboard.db")
+ ),
+ dashboard_sync_on_start=bool(dashboard.get("sync_on_start", True)),
+ dashboard_include_shared=bool(dashboard.get("include_shared", True)),
+ dashboard_evolve_server_url=str(dashboard.get("evolve_server_url", "") or ""),
)
def describe(self) -> str:
@@ -339,6 +358,7 @@ def describe(self) -> str:
llm = data.get("llm", {})
skills = data.get("skills", {})
prm = data.get("prm", {})
+ dashboard = data.get("dashboard", {})
claw_type = str(data.get("claw_type", "openclaw") or "openclaw")
effective_skills_dir = resolve_skills_dir(
skills.get("dir", str(_DEFAULT_SKILLS_DIR)),
@@ -397,5 +417,11 @@ def describe(self) -> str:
f"validation.mode: {_normalize_validation_mode(validation.get('mode', 'replay'))}",
f"validation.idle_after: {validation.get('idle_after_seconds', 300)}",
f"validation.poll_interval: {validation.get('poll_interval_seconds', 60)}",
+ f"dashboard.enabled: {dashboard.get('enabled', False)}",
+ f"dashboard.host: {dashboard.get('host', '127.0.0.1')}",
+ f"dashboard.port: {dashboard.get('port', 3788)}",
+ f"dashboard.db_path: {dashboard.get('db_path', str(CONFIG_DIR / 'dashboard.db'))}",
+ f"dashboard.include_shared: {dashboard.get('include_shared', True)}",
+ f"dashboard.evolve_server_url: {dashboard.get('evolve_server_url', '') or '(not set)'}",
]
return "\n".join(lines)
diff --git a/skillclaw/dashboard_assets/app.js b/skillclaw/dashboard_assets/app.js
new file mode 100644
index 0000000..508a9ff
--- /dev/null
+++ b/skillclaw/dashboard_assets/app.js
@@ -0,0 +1,2640 @@
+const LOCALE_STORAGE_KEY = "skillclaw.dashboard.locale"
+
+function initialLocale() {
+ try {
+ const stored = window.localStorage.getItem(LOCALE_STORAGE_KEY)
+ if (stored === "en" || stored === "zh") {
+ return stored
+ }
+ } catch {}
+ return "zh"
+}
+
+const state = {
+ activeView: "overview",
+ loading: false,
+ locale: initialLocale(),
+ overview: null,
+ evolve: null,
+ skills: [],
+ sessions: [],
+ validationJobs: [],
+ skillDetails: Object.create(null),
+ sessionDetails: Object.create(null),
+ compareSelections: Object.create(null),
+ selectedLocalSkillId: "",
+ selectedCandidateJobId: "",
+ selectedFinalSkillId: "",
+ selectedSessionId: "",
+}
+
+const dom = {
+ body: document.body,
+ messageStrip: document.querySelector("#message-strip"),
+ brandTitle: document.querySelector("#brand-title"),
+ brandCopy: document.querySelector("#brand-copy"),
+ navOverviewTitle: document.querySelector("#nav-overview-title"),
+ navLocalTitle: document.querySelector("#nav-local-title"),
+ navCandidateTitle: document.querySelector("#nav-candidate-title"),
+ navFinalTitle: document.querySelector("#nav-final-title"),
+ navSessionsTitle: document.querySelector("#nav-sessions-title"),
+ navButtons: Array.from(document.querySelectorAll("[data-view]")),
+ viewPanels: Array.from(document.querySelectorAll("[data-view-panel]")),
+ navOverviewMeta: document.querySelector("#nav-overview-meta"),
+ navLocalMeta: document.querySelector("#nav-local-meta"),
+ navCandidateMeta: document.querySelector("#nav-candidate-meta"),
+ navFinalMeta: document.querySelector("#nav-final-meta"),
+ navSessionsMeta: document.querySelector("#nav-sessions-meta"),
+ sidebarStatusTitle: document.querySelector("#sidebar-status-title"),
+ sidebarStatus: document.querySelector("#sidebar-status"),
+ heroKicker: document.querySelector("#hero-kicker"),
+ heroTitle: document.querySelector("#hero-title"),
+ localeButton: document.querySelector("#btn-locale"),
+ refreshButton: document.querySelector("#btn-refresh"),
+ syncButton: document.querySelector("#btn-sync"),
+ overviewKicker: document.querySelector("#overview-kicker"),
+ overviewTitle: document.querySelector("#overview-title"),
+ overviewCopy: document.querySelector("#overview-copy"),
+ watchlistKicker: document.querySelector("#watchlist-kicker"),
+ watchlistTitle: document.querySelector("#watchlist-title"),
+ scopeKicker: document.querySelector("#scope-kicker"),
+ scopeTitle: document.querySelector("#scope-title"),
+ eventsKicker: document.querySelector("#events-kicker"),
+ eventsTitle: document.querySelector("#events-title"),
+ eventsCopy: document.querySelector("#events-copy"),
+ overviewMetrics: document.querySelector("#overview-metrics"),
+ overviewWatchlist: document.querySelector("#overview-watchlist"),
+ overviewContext: document.querySelector("#overview-context"),
+ overviewEvents: document.querySelector("#overview-events"),
+ localKicker: document.querySelector("#local-kicker"),
+ localTitle: document.querySelector("#local-title"),
+ localCopy: document.querySelector("#local-copy"),
+ localSearch: document.querySelector("#local-search"),
+ localList: document.querySelector("#local-list"),
+ localDetail: document.querySelector("#local-detail"),
+ candidateKicker: document.querySelector("#candidate-kicker"),
+ candidateTitle: document.querySelector("#candidate-title"),
+ candidateCopy: document.querySelector("#candidate-copy"),
+ candidateSearch: document.querySelector("#candidate-search"),
+ candidateStatus: document.querySelector("#candidate-status"),
+ candidateStatusAll: document.querySelector("#candidate-status-all"),
+ candidateStatusPending: document.querySelector("#candidate-status-pending"),
+ candidateStatusReview: document.querySelector("#candidate-status-review"),
+ candidateStatusPublished: document.querySelector("#candidate-status-published"),
+ candidateStatusRejected: document.querySelector("#candidate-status-rejected"),
+ candidateList: document.querySelector("#candidate-list"),
+ candidateDetail: document.querySelector("#candidate-detail"),
+ finalKicker: document.querySelector("#final-kicker"),
+ finalTitle: document.querySelector("#final-title"),
+ finalCopy: document.querySelector("#final-copy"),
+ finalSearch: document.querySelector("#final-search"),
+ finalList: document.querySelector("#final-list"),
+ finalDetail: document.querySelector("#final-detail"),
+ sessionsKicker: document.querySelector("#sessions-kicker"),
+ sessionsTitle: document.querySelector("#sessions-title"),
+ sessionsCopy: document.querySelector("#sessions-copy"),
+ sessionSearch: document.querySelector("#session-search"),
+ sessionSource: document.querySelector("#session-source"),
+ sessionSourceLocal: document.querySelector("#session-source-local"),
+ sessionSourceAll: document.querySelector("#session-source-all"),
+ sessionSourceShared: document.querySelector("#session-source-shared"),
+ sessionList: document.querySelector("#session-list"),
+ sessionDetail: document.querySelector("#session-detail"),
+ opButtons: Array.from(document.querySelectorAll("[data-op]")),
+}
+
+function localeTag() {
+ return state.locale === "en" ? "en-US" : "zh-CN"
+}
+
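+// Picks the zh/en template by locale and fills {name} placeholders from vars.
+// Example (locale "en"): l("剩余 {n} 项", "{n} items left", { n: 3 }) -> "3 items left"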
+function l(zh, en, vars = {}) {
+ const template = state.locale === "en" ? en : zh
+ return String(template).replace(/\{(\w+)\}/g, (_, key) => String(vars[key] ?? ""))
+}
+
+function escapeHtml(value) {
+ return String(value ?? "")
+ .replaceAll("&", "&amp;")
+ .replaceAll("<", "&lt;")
+ .replaceAll(">", "&gt;")
+ .replaceAll('"', "&quot;")
+ .replaceAll("'", "&#39;")
+}
+
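+// Collapses whitespace, then truncates with an ellipsis.
+// Example: clip("  hello   world  ", 8) -> "hello wo..."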
+function clip(value, limit = 160) {
+ const text = String(value ?? "").trim().replace(/\s+/g, " ")
+ if (text.length <= limit) {
+ return text
+ }
+ return `${text.slice(0, limit).trimEnd()}...`
+}
+
+function number(value) {
+ return Number(value || 0).toLocaleString(localeTag())
+}
+
+function parseTime(value) {
+ const timestamp = Date.parse(String(value || ""))
+ return Number.isFinite(timestamp) ? timestamp : 0
+}
+
+function formatStamp(value) {
+ const timestamp = parseTime(value)
+ if (!timestamp) {
+ return l("无时间", "No time")
+ }
+ return new Intl.DateTimeFormat(localeTag(), {
+ year: "numeric",
+ month: "numeric",
+ day: "numeric",
+ hour: "2-digit",
+ minute: "2-digit",
+ }).format(new Date(timestamp))
+}
+
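+// Example: formatScore(0.876) -> "0.88"; formatScore("") -> "-"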
+function formatScore(value) {
+ if (value == null || value === "") {
+ return "-"
+ }
+ const numeric = Number(value)
+ if (!Number.isFinite(numeric)) {
+ return "-"
+ }
+ return numeric.toFixed(2)
+}
+
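+// Example: shortHash("0123456789abcdef") -> "0123456789"; shortHash("") -> "-"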
+function shortHash(value, size = 10) {
+ const text = String(value || "").trim()
+ if (!text) {
+ return "-"
+ }
+ return text.length <= size ? text : text.slice(0, size)
+}
+
+function getJson(url, options = {}) {
+ return fetch(url, {
+ headers: {
+ "Content-Type": "application/json",
+ },
+ ...options,
+ }).then(async (response) => {
+ const payload = await response.json().catch(() => ({}))
+ if (!response.ok) {
+ throw new Error(String(payload?.detail || payload?.error || response.statusText || l("请求失败", "Request failed")))
+ }
+ return payload
+ })
+}
+
+function showMessage(kind, text) {
+ dom.messageStrip.className = `message-strip ${kind || "info"}`
+ dom.messageStrip.textContent = text
+}
+
+function clearMessage() {
+ dom.messageStrip.className = "message-strip hidden"
+ dom.messageStrip.textContent = ""
+}
+
+function setLoading(value) {
+ state.loading = Boolean(value)
+ dom.body.dataset.loading = value ? "true" : "false"
+ for (const button of dom.opButtons) {
+ button.disabled = Boolean(value)
+ }
+}
+
+function tag(label) {
+ // minimal inline-chip markup; the "tag" class name is assumed by the dashboard styles
+ return `<span class="tag">${escapeHtml(label)}</span>`
+}
+
+function badge(label, tone = "neutral") {
+ // tone ("published" | "pending" | "rejected" | "neutral") doubles as a CSS class
+ return `<span class="badge ${tone}">${escapeHtml(label)}</span>`
+}
+
+function actionLabel(action) {
+ const normalized = String(action || "").trim().toLowerCase()
+ if (!normalized) {
+ return l("未标注", "Unlabeled")
+ }
+ if (normalized === "create" || normalized === "create_skill") {
+ return l("创建", "Create")
+ }
+ if (normalized === "improve") {
+ return l("改进", "Improve")
+ }
+ if (normalized === "merge") {
+ return l("合并", "Merge")
+ }
+ if (normalized === "published_after_validation") {
+ return l("验证后发布", "Publish After Validation")
+ }
+ if (normalized === "snapshot") {
+ return l("快照", "Snapshot")
+ }
+ return String(action)
+}
+
+function outcomeLabel(outcome) {
+ const normalized = String(outcome || "").trim().toLowerCase()
+ if (!normalized) {
+ return l("未标注", "Unlabeled")
+ }
+ if (normalized === "success") {
+ return l("成功", "Success")
+ }
+ if (normalized === "review") {
+ return l("待复核", "Needs Review")
+ }
+ if (normalized === "rollback") {
+ return l("回滚", "Rolled Back")
+ }
+ if (normalized === "failure") {
+ return l("失败", "Failure")
+ }
+ return String(outcome)
+}
+
+function sourceLabel(source) {
+ const normalized = String(source || "").trim().toLowerCase()
+ if (normalized === "local") {
+ return l("本地", "Local")
+ }
+ if (normalized === "shared") {
+ return l("共享库", "Shared Pool")
+ }
+ if (normalized === "both") {
+ return l("本地和共享库", "Local + Shared Pool")
+ }
+ if (normalized === "observed") {
+ return l("仅观测记录", "Observed Only")
+ }
+ return String(source || "-")
+}
+
+function categoryLabel(value) {
+ const normalized = String(value || "").trim().toLowerCase()
+ if (!normalized) {
+ return l("未分类", "Uncategorized")
+ }
+ const labels = {
+ general: l("通用", "General"),
+ candidate: l("候选", "Candidate"),
+ coding: l("编码", "Coding"),
+ communication: l("沟通", "Communication"),
+ security: l("安全", "Security"),
+ automation: l("自动化", "Automation"),
+ agentic: l("代理", "Agentic"),
+ common_mistakes: l("常见错误", "Common Mistakes"),
+ research: l("研究", "Research"),
+ devops: l("运维", "DevOps"),
+ productivity: l("效率", "Productivity"),
+ data_analysis: l("数据分析", "Data Analysis"),
+ }
+ return labels[normalized] || String(value)
+}
+
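+// Normalizes raw job statuses to one of pending/review/published/rejected.
+// Example: candidateStatusKey("pending_validation") -> "pending"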
+function candidateStatusKey(value) {
+ const normalized = String(value || "").trim().toLowerCase()
+ if (normalized === "pending_validation" || normalized === "pending") {
+ return "pending"
+ }
+ if (normalized === "review") {
+ return "review"
+ }
+ if (normalized === "published") {
+ return "published"
+ }
+ if (normalized === "rejected") {
+ return "rejected"
+ }
+ return normalized || "pending"
+}
+
+function candidateStatusLabel(value) {
+ const status = candidateStatusKey(value)
+ if (status === "pending") {
+ return l("待验证", "Awaiting Validation")
+ }
+ if (status === "review") {
+ return l("已有反馈", "Feedback Received")
+ }
+ if (status === "published") {
+ return l("已入最终池", "Published")
+ }
+ if (status === "rejected") {
+ return l("已拒绝", "Rejected")
+ }
+ return String(value || l("未知状态", "Unknown Status"))
+}
+
+function toneForStatus(value) {
+ const status = candidateStatusKey(value)
+ if (status === "published") {
+ return "published"
+ }
+ if (status === "rejected") {
+ return "rejected"
+ }
+ if (status === "pending" || status === "review") {
+ return "pending"
+ }
+ return "neutral"
+}
+
+function eventTypeLabel(value) {
+ const normalized = String(value || "").trim().toLowerCase()
+ if (normalized === "candidate") {
+ return l("候选生成", "Candidate Created")
+ }
+ if (normalized === "validation") {
+ return l("验证反馈", "Validation Feedback")
+ }
+ if (normalized === "publish") {
+ return l("正式发布", "Published")
+ }
+ if (normalized === "reject") {
+ return l("拒绝决策", "Rejected")
+ }
+ return String(value || l("事件", "Event"))
+}
+
+function stageStateLabel(value) {
+ const normalized = String(value || "").trim().toLowerCase()
+ if (normalized === "done") {
+ return l("已完成", "Done")
+ }
+ if (normalized === "current") {
+ return l("进行中", "In Progress")
+ }
+ if (normalized === "pending") {
+ return l("待开始", "Pending")
+ }
+ if (normalized === "blocked") {
+ return l("已终止", "Blocked")
+ }
+ return String(value || "-")
+}
+
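+// Accepts strings or { skill_name | name | skill } objects; trims and dedupes.
+// Example: normalizeSkillNames(["a", { skill_name: "b" }, "a"]) -> ["a", "b"]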
+function normalizeSkillNames(items) {
+ if (!Array.isArray(items)) {
+ return []
+ }
+ const result = []
+ for (const item of items) {
+ if (typeof item === "string" && item.trim()) {
+ result.push(item.trim())
+ continue
+ }
+ if (item && typeof item === "object") {
+ const raw = item.skill_name || item.name || item.skill
+ if (String(raw || "").trim()) {
+ result.push(String(raw).trim())
+ }
+ }
+ }
+ return [...new Set(result)]
+}
+
+function sortSkills(items) {
+ return [...items].sort((left, right) => (
+ Number(right.session_count || 0) - Number(left.session_count || 0)
+ || Number(right.observed_injection_count || 0) - Number(left.observed_injection_count || 0)
+ || Number(right.local_inject_count || 0) - Number(left.local_inject_count || 0)
+ || parseTime(right.updated_at || right.uploaded_at) - parseTime(left.updated_at || left.uploaded_at)
+ || String(left.name || "").localeCompare(String(right.name || ""), "zh-CN")
+ ))
+}
+
+function sortSessions(items) {
+ return [...items].sort((left, right) => (
+ parseTime(right.timestamp) - parseTime(left.timestamp)
+ || String(left.session_id || "").localeCompare(String(right.session_id || ""), "zh-CN")
+ ))
+}
+
+function sortJobs(items) {
+ const priority = {
+ pending: 0,
+ review: 1,
+ published: 2,
+ rejected: 3,
+ }
+ return [...items].sort((left, right) => (
+ (priority[candidateStatusKey(left.status)] ?? 99) - (priority[candidateStatusKey(right.status)] ?? 99)
+ || parseTime(right.created_at) - parseTime(left.created_at)
+ || String(left.skill_name || "").localeCompare(String(right.skill_name || ""), "zh-CN")
+ ))
+}
+
+function localSkillItems() {
+ return sortSkills(
+ state.skills.filter((item) => item?.has_local || String(item?.source || "") === "local" || String(item?.source || "") === "both")
+ )
+}
+
+function finalSkillItems() {
+ return sortSkills(
+ state.skills.filter((item) => item?.has_remote || String(item?.source || "") === "shared" || String(item?.source || "") === "both")
+ )
+}
+
+function allSessionItems() {
+ return sortSessions(state.sessions)
+}
+
+function mySessionItems() {
+ return sortSessions(state.sessions.filter((item) => String(item?.source || "").trim().toLowerCase() === "local"))
+}
+
+function candidateItems() {
+ return sortJobs(state.validationJobs)
+}
+
+function findSkillById(skillId) {
+ return state.skills.find((item) => String(item.skill_id || "") === String(skillId || "")) || null
+}
+
+function findSkillByName(name) {
+ const normalized = String(name || "").trim().toLowerCase()
+ if (!normalized) {
+ return null
+ }
+ return state.skills.find((item) => String(item.name || "").trim().toLowerCase() === normalized) || null
+}
+
+function findJobById(jobId) {
+ return state.validationJobs.find((item) => String(item.job_id || "") === String(jobId || "")) || null
+}
+
+function findSessionById(sessionId) {
+ return state.sessions.find((item) => String(item.session_id || "") === String(sessionId || "")) || null
+}
+
+function selectedLocalSkillDetail() {
+ return state.skillDetails[state.selectedLocalSkillId] || null
+}
+
+function selectedFinalSkillDetail() {
+ return state.skillDetails[state.selectedFinalSkillId] || null
+}
+
+function selectedCandidateJob() {
+ return findJobById(state.selectedCandidateJobId)
+}
+
+function selectedSessionDetail() {
+ return state.sessionDetails[state.selectedSessionId] || null
+}
+
+function sharingEnabled() {
+ return Boolean(state.overview?.meta?.sharing_enabled)
+}
+
+function sharingTarget() {
+ const meta = state.overview?.meta || {}
+ if (!meta.sharing_enabled) {
+ return l("未启用共享", "Sharing Disabled")
+ }
+ const backend = String(meta.sharing_backend || "").trim().toLowerCase()
+ const groupId = String(meta.sharing_group_id || "default").trim()
+ if (backend === "local") {
+ return `${String(meta.sharing_local_root || "").trim() || "-"} / ${groupId}`
+ }
+ return `${backend || "shared"} / ${groupId}`
+}
+
+function warnings() {
+ return Array.isArray(state.overview?.meta?.warnings) ? state.overview.meta.warnings : []
+}
+
+function evolveHealthLabel() {
+ if (!state.evolve?.configured) {
+ return l("未配置", "Not Configured")
+ }
+ return state.evolve?.healthy ? l("正常", "Healthy") : l("异常", "Unhealthy")
+}
+
+function compareLocalAndRemote(skill) {
+ if (!skill?.has_local) {
+ return { key: "shared_only", label: l("仅在共享库", "Shared Only"), tone: "neutral" }
+ }
+ if (!skill?.has_remote) {
+ return {
+ key: sharingEnabled() ? "not_published" : "sharing_disabled",
+ label: sharingEnabled() ? l("尚未发布到共享库", "Not Published to Shared Pool") : l("共享未启用", "Sharing Disabled"),
+ tone: "neutral",
+ }
+ }
+ if (skill.local_sha && skill.remote_sha && skill.local_sha === skill.remote_sha) {
+ return { key: "synced", label: l("已与共享正式版同步", "Synced with Shared Official Version"), tone: "published" }
+ }
+ return {
+ key: "drift",
+ label: Number(skill.current_version || 0) > 0
+ ? l("与共享版 v{version} 不一致", "Different from shared v{version}", { version: number(skill.current_version) })
+ : l("与共享版不一致", "Different from shared version"),
+ tone: "pending",
+ }
+}
+
+function candidateRecordCount(skillName) {
+ return jobsForSkill(skillName).length
+}
+
+function sharedVersionLabel(skill) {
+ return skill.has_remote ? `v${number(skill.current_version || 0)}` : l("未发布", "Not Published")
+}
+
+function versionCount(skill) {
+ return Array.isArray(skill?.versions) ? skill.versions.length : 0
+}
+
+function visibleVersionCount(skill) {
+ return Math.max(
+ versionCount(skill),
+ Number(skill?.current_version || 0),
+ skill?.has_remote ? 1 : 0
+ )
+}
+
+function jobPayload(job) {
+ return job?.details?.job || {}
+}
+
+function jobResults(job) {
+ return Array.isArray(job?.details?.results) ? job.details.results : []
+}
+
+function jobDecision(job) {
+ return job?.details?.decision || {}
+}
+
+function jobSessionIds(job) {
+ const ids = jobPayload(job).session_ids
+ return Array.isArray(ids) ? ids.filter(Boolean) : []
+}
+
+function jobsForSkill(skillName) {
+ const normalized = String(skillName || "").trim().toLowerCase()
+ return sortJobs(
+ state.validationJobs.filter((job) => String(job.skill_name || "").trim().toLowerCase() === normalized)
+ )
+}
+
+function jobsForSession(sessionId) {
+ return sortJobs(
+ state.validationJobs.filter((job) => jobSessionIds(job).includes(sessionId))
+ )
+}
+
+function filteredLocalSkills() {
+ const search = String(dom.localSearch.value || "").trim().toLowerCase()
+ return localSkillItems().filter((skill) => {
+ if (!search) {
+ return true
+ }
+ return [
+ skill.name,
+ skill.description,
+ skill.category,
+ ].some((field) => String(field || "").toLowerCase().includes(search))
+ })
+}
+
+function filteredCandidateJobs() {
+ const search = String(dom.candidateSearch.value || "").trim().toLowerCase()
+ const status = String(dom.candidateStatus.value || "").trim().toLowerCase()
+ return candidateItems().filter((job) => {
+ if (status && candidateStatusKey(job.status) !== status) {
+ return false
+ }
+ if (!search) {
+ return true
+ }
+ const details = jobPayload(job)
+ return [
+ job.job_id,
+ job.skill_name,
+ job.proposed_action,
+ details.rationale,
+ details.source,
+ ].some((field) => String(field || "").toLowerCase().includes(search))
+ })
+}
+
+function filteredFinalSkills() {
+ const search = String(dom.finalSearch.value || "").trim().toLowerCase()
+ return finalSkillItems().filter((skill) => {
+ if (!search) {
+ return true
+ }
+ return [
+ skill.name,
+ skill.description,
+ skill.category,
+ skill.uploaded_by,
+ ].some((field) => String(field || "").toLowerCase().includes(search))
+ })
+}
+
+function filteredSessions() {
+ const search = String(dom.sessionSearch.value || "").trim().toLowerCase()
+ const source = String(dom.sessionSource.value || "local").trim().toLowerCase()
+ let items = allSessionItems()
+ if (source === "local") {
+ items = items.filter((item) => String(item.source || "").trim().toLowerCase() === "local")
+ } else if (source === "shared") {
+ items = items.filter((item) => String(item.source || "").trim().toLowerCase() === "shared")
+ }
+ return items.filter((item) => {
+ if (!search) {
+ return true
+ }
+ return [
+ item.session_id,
+ item.prompt_preview,
+ item.response_preview,
+ item.user_alias,
+ ...(Array.isArray(item.skill_names) ? item.skill_names : []),
+ ].some((field) => String(field || "").toLowerCase().includes(search))
+ })
+}
+
+function candidateCounts() {
+ const jobs = candidateItems()
+ return {
+ total: jobs.length,
+ pending: jobs.filter((item) => candidateStatusKey(item.status) === "pending").length,
+ review: jobs.filter((item) => candidateStatusKey(item.status) === "review").length,
+ published: jobs.filter((item) => candidateStatusKey(item.status) === "published").length,
+ rejected: jobs.filter((item) => candidateStatusKey(item.status) === "rejected").length,
+ }
+}
+
+function localSyncCounts() {
+ const skills = localSkillItems()
+ let synced = 0
+ let drift = 0
+ let localOnly = 0
+ for (const skill of skills) {
+ if (!skill?.has_remote) {
+ localOnly += 1
+ continue
+ }
+ const status = compareLocalAndRemote(skill)
+ if (status.key === "synced") {
+ synced += 1
+ } else {
+ drift += 1
+ }
+ }
+ return { synced, drift, localOnly }
+}
+
+function collectPipelineEvents(limit = 12) {
+ const events = []
+ for (const job of candidateItems()) {
+ events.push({
+ type: "candidate",
+ tone: "neutral",
+ timestamp: job.created_at,
+ title: l("{name} 进入候选池", "{name} entered the candidate pool", {
+ name: job.skill_name || l("未知技能", "Unknown Skill"),
+ }),
+ copy: `${actionLabel(job.proposed_action)} · ${job.job_id}`,
+ jobId: job.job_id,
+ })
+ for (const result of jobResults(job)) {
+ const accepted = result?.accepted === true
+ events.push({
+ type: "validation",
+ tone: accepted ? "published" : "rejected",
+ timestamp: result.created_at,
+ title: l("{name} 提交验证结果", "{name} submitted validation feedback", {
+ name: result.user_alias || l("验证客户端", "Validation Client"),
+ }),
+ copy: l("{skill} · {decision} · 分数 {score}", "{skill} · {decision} · score {score}", {
+ skill: job.skill_name || l("未知技能", "Unknown Skill"),
+ decision: accepted ? l("通过", "Approved") : l("拒绝", "Rejected"),
+ score: formatScore(result.score),
+ }),
+ jobId: job.job_id,
+ })
+ }
+ const decision = jobDecision(job)
+ if (decision?.status && decision?.decided_at) {
+ const published = candidateStatusKey(decision.status) === "published"
+ events.push({
+ type: published ? "publish" : "reject",
+ tone: published ? "published" : "rejected",
+ timestamp: decision.decided_at,
+ title: published
+ ? l("{name} 进入最终池", "{name} was published", { name: job.skill_name || l("未知技能", "Unknown Skill") })
+ : l("{name} 被拒绝", "{name} was rejected", { name: job.skill_name || l("未知技能", "Unknown Skill") }),
+ copy: String(decision.reason || decision.published_action || job.proposed_action || ""),
+ jobId: job.job_id,
+ })
+ }
+ }
+ return events
+ .sort((left, right) => parseTime(right.timestamp) - parseTime(left.timestamp))
+ .slice(0, limit)
+}
+
+function recentEventCount(hours = 24) {
+ const threshold = Date.now() - hours * 60 * 60 * 1000
+ return collectPipelineEvents(200).filter((event) => parseTime(event.timestamp) >= threshold).length
+}
+
+function validationDispatchSummary(job) {
+ const core = jobPayload(job)
+ const results = jobResults(job)
+ const minResults = Math.max(0, Number(core.min_results || 0))
+ const minApprovals = Math.max(0, Number(core.min_approvals || 0))
+ const accepted = Math.max(0, Number(job.accepted_count || 0))
+ const pendingResults = Math.max(0, minResults - results.length)
+ const pendingApprovals = Math.max(0, minApprovals - accepted)
+ const lastResult = [...results].sort((left, right) => parseTime(right.created_at) - parseTime(left.created_at))[0] || null
+ return {
+ dispatchAt: String(job.created_at || ""),
+ dispatchMode: "open-pool",
+ dispatchLabel: l("开放给空闲验证客户端领取", "Open to available validation clients"),
+ pendingResults,
+ pendingApprovals,
+ responseCount: results.length,
+ lastResultAt: String(lastResult?.created_at || ""),
+ }
+}
+
+function candidateSourceLabel(source) {
+ const normalized = String(source || "").trim().toLowerCase()
+ if (!normalized) {
+ return "-"
+ }
+ if (normalized === "no_skill") {
+ return l("来自会话中新沉淀出的技能", "Newly distilled from sessions")
+ }
+ if (normalized === "current_skill") {
+ return l("来自已有技能的改进", "Improvement of an existing skill")
+ }
+ if (normalized === "shared_skill") {
+ return l("来自共享技能的改进", "Improvement of a shared skill")
+ }
+ return String(source)
+}
+
+function validatorModeLabel(mode) {
+ const normalized = String(mode || "").trim().toLowerCase()
+ if (!normalized || normalized === "unknown") {
+ return l("自动验证", "Automatic Validation")
+ }
+ if (normalized === "replay") {
+ return l("回放验证", "Replay Validation")
+ }
+ return String(mode)
+}
+
+function skillDocumentPreview(skill) {
+ if (!skill || typeof skill !== "object") {
+ return ""
+ }
+ if (String(skill.skill_md || "").trim()) {
+ return String(skill.skill_md).trim()
+ }
+ if (String(skill.content || "").trim()) {
+ return String(skill.content).trim()
+ }
+ const parts = []
+ if (skill.name) {
+ parts.push(`name: ${skill.name}`)
+ }
+ if (skill.description) {
+ parts.push(`description: ${skill.description}`)
+ }
+ if (skill.category) {
+ parts.push(`category: ${skill.category}`)
+ }
+ return parts.join("\n")
+}
+
+function buildVersionEntries(skill, { includeLocal = true } = {}) {
+ if (!skill || typeof skill !== "object") {
+ return []
+ }
+
+ const entries = []
+ const seen = new Set()
+
+ const pushEntry = (entry) => {
+ if (!entry?.key || seen.has(entry.key)) {
+ return
+ }
+ seen.add(entry.key)
+ entries.push(entry)
+ }
+
+ const localDocument = String(skill.skill_md || skill.content || "").trim()
+ if (includeLocal && skill.has_local && localDocument) {
+ pushEntry({
+ key: "local-current",
+ label: l("本地当前版", "Current Local Version"),
+ source: "local",
+ version: null,
+ action: "local",
+ timestamp: String(skill.local_updated_at || skill.updated_at || "").trim(),
+ contentSha: String(skill.local_sha || skill.current_sha || "").trim(),
+ document: localDocument,
+ current: true,
+ })
+ }
+
+ const versions = Array.isArray(skill.versions) ? skill.versions : []
+ for (const item of versions) {
+ if (!item || typeof item !== "object") {
+ continue
+ }
+ const version = Number(item.version || 0) || 0
+ const contentSha = String(item.content_sha || "").trim()
+ pushEntry({
+ key: version > 0 ? `shared-version:${version}` : `shared-version:${shortHash(contentSha || item.timestamp || "", 8)}`,
+ label: version > 0 ? l("共享库 v{version}", "Shared v{version}", { version }) : l("共享库历史", "Shared History"),
+ source: "shared",
+ version: version || null,
+ action: String(item.action || "").trim(),
+ timestamp: String(item.timestamp || "").trim(),
+ contentSha,
+ document: String(item.skill_md || item.content || "").trim(),
+ current: version > 0 && Number(skill.current_version || 0) === version,
+ })
+ }
+
+ const remoteDocument = String(skill.remote_skill_md || skill.remote_content || "").trim()
+ if (skill.has_remote && remoteDocument) {
+ const version = Number(skill.current_version || 0) || 0
+ pushEntry({
+ key: version > 0 ? `shared-version:${version}` : "shared-current",
+ label: version > 0 ? l("共享库 v{version}", "Shared v{version}", { version }) : l("共享库当前版", "Current Shared Version"),
+ source: "shared",
+ version: version || null,
+ action: "published",
+ timestamp: String(skill.remote_updated_at || skill.uploaded_at || skill.updated_at || "").trim(),
+ contentSha: String(skill.remote_sha || skill.current_sha || "").trim(),
+ document: remoteDocument,
+ current: true,
+ })
+ }
+
+ const localEntries = entries.filter((item) => item.source === "local")
+ const remoteEntries = entries
+ .filter((item) => item.source === "shared")
+ .sort((left, right) => (
+ Number(right.version || 0) - Number(left.version || 0)
+ || parseTime(right.timestamp) - parseTime(left.timestamp)
+ || String(left.key || "").localeCompare(String(right.key || ""))
+ ))
+
+ return [...localEntries, ...remoteEntries]
+}
+
+function compareState(skill, scope, { includeLocal = true } = {}) {
+ const entries = buildVersionEntries(skill, { includeLocal })
+  const stateKey = `${scope}:${skill?.skill_id || ""}`
+ const current = state.compareSelections[stateKey] || {}
+ const defaultPrimary = scope === "local"
+ ? (entries.find((item) => item.key === "local-current") || entries.find((item) => item.current) || entries[0] || null)
+ : (entries.find((item) => item.current && item.source === "shared") || entries.find((item) => item.source === "shared") || entries[0] || null)
+ const defaultCompare = entries.find((item) => item.key !== defaultPrimary?.key) || defaultPrimary || null
+ const primary = entries.find((item) => item.key === current.primary) || defaultPrimary
+ const compare = entries.find((item) => item.key === current.compare) || defaultCompare
+ return { entries, primary, compare }
+}
+
+function setCompareSelection(scope, skillId, field, value) {
+ const key = `${scope}:${skillId}`
+ state.compareSelections = {
+ ...state.compareSelections,
+ [key]: {
+ ...(state.compareSelections[key] || {}),
+ [field]: value,
+ },
+ }
+}
+
+function ensureItemSelection(items, currentId, field) {
+ if (!items.length) {
+ return ""
+ }
+ if (items.some((item) => String(item?.[field] || "") === String(currentId || ""))) {
+ return currentId
+ }
+ return String(items[0]?.[field] || "")
+}
+
+function ensureSelections() {
+ state.selectedLocalSkillId = ensureItemSelection(filteredLocalSkills(), state.selectedLocalSkillId, "skill_id")
+ state.selectedCandidateJobId = ensureItemSelection(filteredCandidateJobs(), state.selectedCandidateJobId, "job_id")
+ state.selectedFinalSkillId = ensureItemSelection(filteredFinalSkills(), state.selectedFinalSkillId, "skill_id")
+ state.selectedSessionId = ensureItemSelection(filteredSessions(), state.selectedSessionId, "session_id")
+}
+
+async function loadSkillDetail(skillId) {
+ if (!skillId) {
+ return null
+ }
+ if (state.skillDetails[skillId]) {
+ return state.skillDetails[skillId]
+ }
+ const payload = await getJson(`/api/v1/skills/${encodeURIComponent(skillId)}`)
+ state.skillDetails[skillId] = payload
+ return payload
+}
+
+async function loadSessionDetail(sessionId) {
+ if (!sessionId) {
+ return null
+ }
+ if (state.sessionDetails[sessionId]) {
+ return state.sessionDetails[sessionId]
+ }
+ const payload = await getJson(`/api/v1/sessions/${encodeURIComponent(sessionId)}`)
+ state.sessionDetails[sessionId] = payload
+ return payload
+}
+
+async function hydrateSelections({ local = true, final = true, session = true, candidate = true } = {}) {
+ const tasks = []
+
+ if (local && state.selectedLocalSkillId) {
+ tasks.push(loadSkillDetail(state.selectedLocalSkillId))
+ }
+
+ if (final && state.selectedFinalSkillId) {
+ tasks.push(loadSkillDetail(state.selectedFinalSkillId))
+ }
+
+ if (session && state.selectedSessionId) {
+ tasks.push(loadSessionDetail(state.selectedSessionId))
+ }
+
+ if (candidate && state.selectedCandidateJobId) {
+ const job = findJobById(state.selectedCandidateJobId)
+ const linkedSkill = findSkillByName(job?.skill_name || "")
+ if (linkedSkill?.skill_id) {
+ tasks.push(loadSkillDetail(linkedSkill.skill_id))
+ }
+ }
+
+ await Promise.all(tasks.map((task) => task.catch((error) => {
+ showMessage("warn", error.message || l("加载详情失败", "Failed to load details"))
+ })))
+}
+
+async function refreshData({ notice = "", preserveMessage = false } = {}) {
+ setLoading(true)
+ if (!preserveMessage) {
+ clearMessage()
+ }
+ try {
+ const [overview, skillsPayload, sessionsPayload, validationPayload, evolve] = await Promise.all([
+ getJson("/api/v1/overview"),
+ getJson("/api/v1/skills?limit=500"),
+ getJson("/api/v1/sessions?limit=500"),
+ getJson("/api/v1/validation/jobs?limit=500"),
+ getJson("/api/v1/evolve/status"),
+ ])
+
+ state.overview = overview
+ state.evolve = evolve
+ state.skills = sortSkills(skillsPayload.items || [])
+ state.sessions = sortSessions(sessionsPayload.items || [])
+ state.validationJobs = sortJobs(validationPayload.items || [])
+ state.skillDetails = Object.create(null)
+ state.sessionDetails = Object.create(null)
+
+ ensureSelections()
+ renderAll()
+ await hydrateSelections()
+ renderAll()
+
+ if (notice) {
+ showMessage("info", notice)
+ }
+ } catch (error) {
+ showMessage("error", error.message || l("加载 dashboard 数据失败", "Failed to load dashboard data"))
+ } finally {
+ setLoading(false)
+ }
+}
+
+function renderStaticText() {
+ document.title = l("SkillClaw 技能演化看板", "SkillClaw Dashboard")
+ document.documentElement.lang = state.locale === "en" ? "en" : "zh-CN"
+
+ dom.brandTitle.textContent = l("技能演化看板", "Skill Evolution Dashboard")
+ dom.brandCopy.textContent = l(
+ "看你自己的技能状态、本地与共享库同步、候选验证、最终发布和会话追溯。",
+ "Review your own skill status, local/shared sync, candidate validation, publication, and session traceability."
+ )
+ dom.navOverviewTitle.textContent = l("总览", "Overview")
+ dom.navLocalTitle.textContent = l("我的技能", "My Skills")
+ dom.navCandidateTitle.textContent = l("候选池", "Candidate Pool")
+ dom.navFinalTitle.textContent = l("最终池", "Final Pool")
+ dom.navSessionsTitle.textContent = l("会话追溯", "Sessions")
+ dom.sidebarStatusTitle.textContent = l("当前状态", "Current Status")
+
+ dom.heroKicker.textContent = l("总览首页", "Overview")
+ dom.heroTitle.textContent = l("我的技能状态", "My Skill Status")
+ dom.localeButton.textContent = state.locale === "en" ? "中文" : "English"
+ dom.refreshButton.textContent = l("刷新数据", "Refresh")
+ dom.syncButton.textContent = l("重建投影", "Rebuild Snapshot")
+
+ dom.overviewKicker.textContent = l("总览", "Overview")
+ dom.overviewTitle.textContent = l("现在这套技能更新链路处在什么状态", "Current Skill Pipeline Status")
+ dom.overviewCopy.textContent = l(
+ "这里汇总本地同步、候选验证、最终发布和会话来源四类状态。",
+ "This page summarizes local sync, candidate validation, publication, and session sources."
+ )
+ dom.watchlistKicker.textContent = l("需要关注", "Watch")
+ dom.watchlistTitle.textContent = l("当前值得查看的变化", "Changes Worth Checking")
+ dom.scopeKicker.textContent = l("当前范围", "Scope")
+ dom.scopeTitle.textContent = l("你现在看到的是哪部分数据", "What Data You Are Looking At")
+ dom.eventsKicker.textContent = l("最近事件", "Recent Events")
+ dom.eventsTitle.textContent = l("候选生成、验证反馈、最终决策", "Candidate Creation, Validation, and Final Decisions")
+ dom.eventsCopy.textContent = l(
+ "按时间倒序展示最近 12 条流程事件,方便先抓住变化再下钻。",
+ "Shows the latest 12 pipeline events in reverse chronological order so you can spot changes first."
+ )
+
+ dom.localKicker.textContent = l("我的技能", "My Skills")
+ dom.localTitle.textContent = l("本地技能和共享正式版是否一致", "Local Skills vs Shared Official Versions")
+ dom.localCopy.textContent = l(
+ "这里看你的本地技能、它们是否已同步,以及最近关联了哪些会话。",
+ "Review your local skills, whether they are synced, and which sessions were recently related."
+ )
+ dom.localSearch.placeholder = l("搜索名称、描述或分类", "Search by name, description, or category")
+
+ dom.candidateKicker.textContent = l("候选池", "Candidate Pool")
+ dom.candidateTitle.textContent = l("哪些技能正在等待验证,哪些已经进入最终池", "Which skills are waiting for validation and which are already published")
+ dom.candidateCopy.textContent = l(
+ "这里只看候选技能的真实验证进度:候选内容、验证客户端、决策时间,以及它依赖哪些会话证据。",
+ "This page focuses on real candidate progress: candidate content, validating clients, decision time, and supporting sessions."
+ )
+ dom.candidateSearch.placeholder = l("搜索技能名、候选编号或说明", "Search skill name, candidate ID, or notes")
+ dom.candidateStatusAll.textContent = l("全部状态", "All Statuses")
+ dom.candidateStatusPending.textContent = l("待验证", "Awaiting Validation")
+ dom.candidateStatusReview.textContent = l("已有反馈", "Feedback Received")
+ dom.candidateStatusPublished.textContent = l("已入最终池", "Published")
+ dom.candidateStatusRejected.textContent = l("已拒绝", "Rejected")
+
+ dom.finalKicker.textContent = l("最终池", "Final Pool")
+ dom.finalTitle.textContent = l("共享技能当前正式版本和发布历史", "Shared Skill Versions and Release History")
+ dom.finalCopy.textContent = l(
+ "这里只显示已经进入共享库的正式技能,用来观察版本演进、发布时间和本地同步状态。",
+ "This page shows only published shared skills so you can inspect version history, publish time, and local sync status."
+ )
+ dom.finalSearch.placeholder = l("搜索共享技能", "Search shared skills")
+
+ dom.sessionsKicker.textContent = l("会话追溯", "Sessions")
+ dom.sessionsTitle.textContent = l("我的会话,以及支撑候选技能的共享会话", "My Sessions and Shared Sessions Behind Candidate Skills")
+ dom.sessionsCopy.textContent = l(
+ "默认先看你的本地会话;如果需要追踪候选来源,可以切到全部会话继续查。",
+ "By default you see local sessions first. Switch to all sessions if you need to trace candidate sources."
+ )
+ dom.sessionSearch.placeholder = l("搜索会话 ID、摘要或技能名", "Search session ID, summary, or skill name")
+ dom.sessionSourceLocal.textContent = l("我的会话", "My Sessions")
+ dom.sessionSourceAll.textContent = l("全部会话", "All Sessions")
+ dom.sessionSourceShared.textContent = l("共享会话", "Shared Sessions")
+}
+
+function setLocale(locale) {
+ state.locale = locale === "en" ? "en" : "zh"
+ try {
+ window.localStorage.setItem(LOCALE_STORAGE_KEY, state.locale)
+  } catch {
+    // localStorage may be unavailable (e.g. private browsing); ignore persistence errors.
+  }
+ renderAll()
+}
+
+function renderAll() {
+ renderStaticText()
+ renderNav()
+ renderSidebarStatus()
+ renderOverview()
+ renderLocalPage()
+ renderCandidatePage()
+ renderFinalPage()
+ renderSessionsPage()
+}
+
+function renderNav() {
+ for (const button of dom.navButtons) {
+ button.classList.toggle("active", button.dataset.view === state.activeView)
+ }
+ for (const panel of dom.viewPanels) {
+ panel.classList.toggle("active", panel.dataset.viewPanel === state.activeView)
+ }
+
+ const counts = candidateCounts()
+ dom.navOverviewMeta.textContent = l("{pending} 待验证 · {published} 已发布", "{pending} awaiting · {published} published", {
+ pending: number(counts.pending),
+ published: number(counts.published),
+ })
+ dom.navLocalMeta.textContent = l("{count} 个本地技能", "{count} local skills", {
+ count: number(localSkillItems().length),
+ })
+ dom.navCandidateMeta.textContent = l("{count} 条候选", "{count} candidates", {
+ count: number(counts.total),
+ })
+ dom.navFinalMeta.textContent = l("{count} 个共享技能", "{count} shared skills", {
+ count: number(finalSkillItems().length),
+ })
+ dom.navSessionsMeta.textContent = l("{count} 个我的会话", "{count} my sessions", {
+ count: number(mySessionItems().length),
+ })
+}
+
+function renderSidebarStatus() {
+ const sync = localSyncCounts()
+ const counts = candidateCounts()
+ dom.sidebarStatus.innerHTML = [
+ renderContextCard(l("我的技能", "My Skills"), [
+ [l("总数", "Total"), l("{count} 个", "{count}", { count: number(localSkillItems().length) })],
+ [l("已同步", "Synced"), l("{count} 个", "{count}", { count: number(sync.synced) })],
+ ]),
+ renderContextCard(l("候选", "Candidates"), [
+ [l("待验证", "Awaiting"), l("{count} 条", "{count}", { count: number(counts.pending) })],
+ [l("待决策", "Awaiting Decision"), l("{count} 条", "{count}", { count: number(counts.review) })],
+ ]),
+ renderContextCard(l("共享状态", "Sharing"), [
+ [l("共享库", "Shared Pool"), sharingEnabled() ? l("已启用", "Enabled") : l("未启用", "Disabled")],
+ [l("演化服务", "Evolve"), evolveHealthLabel()],
+ ]),
+ ].join("")
+}
+
+function renderOverview() {
+ const counts = candidateCounts()
+ const sync = localSyncCounts()
+
+ dom.overviewMetrics.innerHTML = [
+ renderMetricCard(l("我的技能", "My Skills"), localSkillItems().length, l("当前机器上可直接使用的技能数量。", "Number of skills currently available on this machine.")),
+ renderMetricCard(l("已同步", "Synced"), sync.synced, l("本地内容已经和共享正式版一致。", "Local content matches the shared official version.")),
+ renderMetricCard(l("待处理同步", "Needs Sync"), sync.drift + sync.localOnly, l("本地与共享库仍有差异,或尚未发布到共享库。", "Local content still differs from the shared pool or has not been published.")),
+ renderMetricCard(l("待验证候选", "Pending Candidates"), counts.pending + counts.review, l("已经进入候选池,但还没有完全走完验证和决策。", "Candidates have entered the pool but have not finished validation and decision making.")),
+ renderMetricCard(l("已入最终池", "Published"), counts.published, l("通过验证并完成发布决策的候选。", "Candidates that passed validation and were published.")),
+ renderMetricCard(l("我的会话", "My Sessions"), mySessionItems().length, l("当前本地采集到的个人会话数。", "Number of personal sessions currently captured locally.")),
+ ].join("")
+
+ const watchlist = buildWatchlistItems()
+ dom.overviewWatchlist.innerHTML = watchlist.length
+ ? watchlist.map(renderJumpCard).join("")
+ : `
+        ${escapeHtml(l("当前没有明显堵点,整体状态比较稳定。", "No obvious blockers right now. The overall pipeline looks stable."))}
+      `
+
+ const contextCards = [
+ renderContextCard(l("同步情况", "Sync Status"), [
+ [l("已同步", "Synced"), l("{count} 个技能", "{count} skills", { count: number(sync.synced) })],
+ [l("待更新", "Out of Date"), l("{count} 个技能", "{count} skills", { count: number(sync.drift) })],
+ [l("仅本地", "Local Only"), l("{count} 个技能", "{count} skills", { count: number(sync.localOnly) })],
+ ]),
+ renderContextCard(l("候选进度", "Candidate Progress"), [
+ [l("待验证", "Awaiting"), l("{count} 条", "{count}", { count: number(counts.pending) })],
+ [l("待决策", "Awaiting Decision"), l("{count} 条", "{count}", { count: number(counts.review) })],
+ [l("已发布", "Published"), l("{count} 条", "{count}", { count: number(counts.published) })],
+ ]),
+ renderContextCard(l("当前范围", "Scope"), [
+ [l("共享库", "Shared Pool"), sharingEnabled() ? l("已启用", "Enabled") : l("未启用", "Disabled")],
+ [l("共享技能", "Shared Skills"), l("{count} 个", "{count}", { count: number(finalSkillItems().length) })],
+ [l("最近 24 小时变化", "Changes in 24h"), l("{count} 条", "{count}", { count: number(recentEventCount(24)) })],
+ ]),
+ ]
+
+ if (warnings().length) {
+ contextCards.push(`
+
+
+
+
+          ${escapeHtml(l("告警", "Warnings"))}
+
+          ${escapeHtml(l("采集或同步时发现了问题", "Issues Found During Collection or Sync"))}
+
+          ${badge(l("{count} 条", "{count}", { count: number(warnings().length) }), "pending")}
+
+
+ ${warnings().map((item) => `${escapeHtml(String(item || ""))} `).join("")}
+
+
+ `)
+ }
+
+ dom.overviewContext.innerHTML = contextCards.join("")
+
+ const events = collectPipelineEvents()
+ dom.overviewEvents.innerHTML = events.length
+ ? events.map(renderEventCard).join("")
+ : `${escapeHtml(l("最近还没有可展示的候选、验证或发布事件。", "There are no candidate, validation, or publish events to show yet."))}
+    `
+}
+
+function buildWatchlistItems() {
+ const items = []
+ const sync = localSyncCounts()
+ const counts = candidateCounts()
+ const publishedRecent = candidateItems()
+ .filter((job) => candidateStatusKey(job.status) === "published")
+ .slice(0, 2)
+ const pendingJob = candidateItems().find((job) => candidateStatusKey(job.status) === "pending")
+ const reviewJob = candidateItems().find((job) => candidateStatusKey(job.status) === "review")
+ const driftSkill = localSkillItems().find((skill) => compareLocalAndRemote(skill).tone === "pending")
+
+ if (driftSkill) {
+ items.push({
+ view: "local",
+ title: l("{name} 和共享正式版存在差异", "{name} differs from the shared official version", { name: driftSkill.name }),
+ copy: l("当前状态:{status}", "Current status: {status}", { status: compareLocalAndRemote(driftSkill).label }),
+ dataset: `data-select-local-skill="${escapeHtml(driftSkill.skill_id)}"`,
+ tone: "pending",
+ })
+ }
+
+ if (pendingJob) {
+ items.push({
+ view: "candidate",
+ title: l("{count} 条候选仍在等待验证", "{count} candidates are still awaiting validation", { count: number(counts.pending) }),
+ copy: l("{name} 还在等待更多验证反馈。", "{name} is still waiting for more validation feedback.", { name: pendingJob.skill_name }),
+ dataset: `data-select-candidate="${escapeHtml(pendingJob.job_id)}"`,
+ tone: "pending",
+ })
+ }
+
+ if (reviewJob) {
+ items.push({
+ view: "candidate",
+ title: l("{count} 条候选已有反馈但还没完成决策", "{count} candidates have feedback but no final decision yet", { count: number(counts.review) }),
+ copy: l("{name} 已收到结果,正在等待最终结论。", "{name} already has results and is waiting for the final decision.", { name: reviewJob.skill_name }),
+ dataset: `data-select-candidate="${escapeHtml(reviewJob.job_id)}"`,
+ tone: "pending",
+ })
+ }
+
+ for (const job of publishedRecent) {
+ const linkedSkill = findSkillByName(job.skill_name)
+ items.push({
+ view: linkedSkill ? "final" : "candidate",
+ title: l("{name} 最近进入最终池", "{name} was recently published", { name: job.skill_name }),
+ copy: l("发布时间 {time}。", "Published at {time}.", { time: formatStamp(jobDecision(job).decided_at) }),
+ dataset: linkedSkill
+ ? `data-select-final-skill="${escapeHtml(linkedSkill.skill_id)}"`
+ : `data-select-candidate="${escapeHtml(job.job_id)}"`,
+ tone: "published",
+ })
+ }
+
+ if (mySessionItems().length) {
+ const latestSession = mySessionItems()[0]
+ items.push({
+ view: "sessions",
+ title: l("最近一条本地会话", "Most Recent Local Session"),
+ copy: clip(latestSession.prompt_preview || latestSession.response_preview || l("查看最近一次会话。", "View the latest session."), 80),
+ dataset: `data-select-session="${escapeHtml(latestSession.session_id)}"`,
+ tone: "neutral",
+ })
+ }
+
+ return items.slice(0, 5)
+}
+
+function renderJumpCard(item) {
+ return `
+
+
+
+
+          ${escapeHtml(item.title)}
+
+          ${escapeHtml(item.copy)}
+
+ ${badge(l("查看", "Open"), item.tone || "neutral")}
+
+
+ `
+}
+
+function renderMetricCard(label, value, note) {
+ return `
+
+ ${escapeHtml(label)}
+ ${escapeHtml(String(value))}
+ ${escapeHtml(note)}
+
+ `
+}
+
+function renderContextCard(title, rows) {
+ return `
+
+
+
+ ${rows.map(([key, value]) => `
+
+ ${escapeHtml(String(key))}
+ ${escapeHtml(String(value))}
+
+ `).join("")}
+
+
+ `
+}
+
+function renderEventCard(event) {
+ return `
+
+
+
+
+          ${escapeHtml(eventTypeLabel(event.type))}
+
+          ${escapeHtml(event.title)}
+
+ ${badge(formatStamp(event.timestamp), event.tone || "neutral")}
+
+ ${escapeHtml(clip(event.copy || l("无额外说明。", "No additional details."), 180))}
+ ${event.jobId
+ ? `
+
+
+ ${escapeHtml(l("查看候选详情", "View Candidate"))}
+
+
+ `
+ : ""}
+
+ `
+}
+
+function renderLocalPage() {
+ const items = filteredLocalSkills()
+ const detail = selectedLocalSkillDetail()
+
+ dom.localList.innerHTML = items.length
+ ? items.map(renderLocalSkillCard).join("")
+    : `${escapeHtml(l("当前筛选条件下没有本地技能。", "No local skills match the current filter."))}
+      `
+
+ dom.localDetail.innerHTML = detail
+ ? renderLocalSkillDetail(detail)
+    : `${escapeHtml(l("选择一个本地技能,查看同步状态、版本链和相关会话。", "Select a local skill to inspect sync status, version history, and related sessions."))}
+      `
+}
+
+function renderLocalSkillCard(skill) {
+ const active = String(skill.skill_id || "") === String(state.selectedLocalSkillId || "")
+ const sync = compareLocalAndRemote(skill)
+ const candidateCount = candidateRecordCount(skill.name)
+ return `
+
+
+
+
+          ${escapeHtml(categoryLabel(skill.category))}
+
+          ${escapeHtml(skill.name || l("未命名技能", "Unnamed Skill"))}
+
+ ${badge(sync.label, sync.tone)}
+
+ ${escapeHtml(clip(skill.description || l("这个技能还没有描述。", "This skill does not have a description yet."), 110))}
+
+ ${tag(l("关联会话 {count}", "Sessions {count}", { count: number(skill.session_count || 0) }))}
+ ${tag(l("候选记录 {count}", "Candidates {count}", { count: number(candidateCount) }))}
+ ${skill.has_remote ? tag(l("共享正式版 {version}", "Shared Version {version}", { version: sharedVersionLabel(skill) })) : tag(l("尚未进入共享库", "Not in Shared Pool"))}
+
+
+ `
+}
+
+function renderLocalSkillDetail(skill) {
+ const sync = compareLocalAndRemote(skill)
+ const candidateCount = candidateRecordCount(skill.name)
+ return `
+
+
+
+
+
+            ${escapeHtml(l("本地技能", "Local Skill"))}
+
+            ${escapeHtml(skill.name || l("未命名技能", "Unnamed Skill"))}
+
+            ${escapeHtml(skill.description || l("这个技能还没有描述。", "This skill does not have a description yet."))}
+
+
+ ${badge(sync.label, sync.tone)}
+ ${badge(skill.has_remote ? l("本地和共享库都有", "Local + Shared") : l("仅本地", "Local Only"), "neutral")}
+
+
+
+ ${renderMiniCard(l("当前状态", "Current Status"), sync.label, l("本地内容和共享正式版现在是什么关系。", "How the local content relates to the shared official version."))}
+ ${renderMiniCard(l("共享正式版", "Shared Official Version"), sharedVersionLabel(skill), skill.has_remote ? l("共享库当前正式版本。", "Current official version in the shared pool.") : l("这个技能还没有进入共享库。", "This skill has not entered the shared pool yet."))}
+ ${renderMiniCard(l("候选记录", "Candidate Records"), number(candidateCount), candidateCount ? l("这个技能最近进入过多少次候选验证流程。", "How many candidate validation flows this skill entered recently.") : l("目前没有候选记录。", "There are no candidate records yet."))}
+ ${renderMiniCard(l("关联会话", "Related Sessions"), number(skill.session_count || 0), l("最近有多少条会话和这个技能有关。", "How many recent sessions are related to this skill."))}
+
+
+ ${renderVersionCompare(skill, {
+ scope: "local",
+ title: l("本地版本与共享正式版对比", "Compare Local vs Shared"),
+ copy: l("左边默认是本地当前版,右边可以切换共享历史版本,用来判断内容是否已经同步。", "The left side shows the current local version. Switch the right side to shared history versions to compare sync state."),
+ includeLocal: true,
+ })}
+ ${skill.has_remote
+ ? renderVersionTimeline(skill, {
+ title: l("共享版本历史", "Shared Version History"),
+ copy: l("这个技能在共享库中的版本演进。", "How this skill evolved inside the shared pool."),
+ })
+ : renderSharingNotConnectedNotice(l("这个技能还没有进入共享正式版,所以这里没有版本历史。", "This skill is not in the shared official pool yet, so there is no version history here."))}
+ ${renderCandidateLinks(skill.name)}
+ ${renderRelatedSessions(skill.related_sessions, { emptyTitle: l("最近没有观测到相关会话", "No related sessions were observed recently") })}
+
+ `
+}
+
+function renderMiniCard(label, value, note) {
+ return `
+
+ ${escapeHtml(label)}
+ ${escapeHtml(String(value))}
+ ${escapeHtml(String(note || ""))}
+
+ `
+}
+
+function renderCandidateLinks(skillName) {
+ const jobs = jobsForSkill(skillName)
+ if (!jobs.length) {
+ return `
+
+
+
+
+          ${escapeHtml(l("候选记录", "Candidate Records"))}
+
+          ${escapeHtml(l("这个技能目前没有候选记录", "This skill does not have candidate records yet"))}
+
+
+
+ `
+ }
+ return `
+
+
+
+
+          ${escapeHtml(l("候选记录", "Candidate Records"))}
+
+          ${escapeHtml(l("这个技能最近进入过哪些验证流程", "Which validation flows this skill recently entered"))}
+
+      ${badge(l("{count} 条", "{count}", { count: number(jobs.length) }), "neutral")}
+
+
+ ${jobs.map((job) => `
+
+
+
+
+              ${escapeHtml(job.job_id)}
+
+              ${escapeHtml(actionLabel(job.proposed_action))} · ${escapeHtml(candidateStatusLabel(job.status))}
+
+ ${badge(formatStamp(job.created_at), toneForStatus(job.status))}
+
+
+ `).join("")}
+
+
+ `
+}
+
+function renderRelatedSessions(items, { emptyTitle = "" } = {}) {
+ const sessions = Array.isArray(items) ? items : []
+ const resolvedEmptyTitle = emptyTitle || l("没有相关会话", "No related sessions")
+ if (!sessions.length) {
+ return `
+
+
+
+
+          ${escapeHtml(l("会话追溯", "Sessions"))}
+
+          ${escapeHtml(resolvedEmptyTitle)}
+
+
+
+ `
+ }
+ return `
+
+
+
+
+          ${escapeHtml(l("会话追溯", "Sessions"))}
+
+          ${escapeHtml(l("最近关联到这个技能的会话", "Recent sessions related to this skill"))}
+
+ ${badge(l("{count} 条", "{count}", { count: number(sessions.length) }), "neutral")}
+
+
+ ${sessions.slice(0, 8).map((item) => `
+
+
+
+
+              ${escapeHtml(item.session_id || l("未知会话", "Unknown Session"))}
+
+              ${escapeHtml(clip(item.prompt_preview || item.response_preview || l("没有摘要。", "No summary."), 110))}
+
+ ${badge(formatStamp(item.timestamp), "neutral")}
+
+
+ `).join("")}
+
+
+ `
+}
+
+function renderCandidatePage() {
+ const items = filteredCandidateJobs()
+ const detail = selectedCandidateJob()
+
+ dom.candidateList.innerHTML = items.length
+ ? items.map(renderCandidateCard).join("")
+ : renderCandidateEmptyState()
+
+ dom.candidateDetail.innerHTML = detail
+ ? renderCandidateDetail(detail)
+ : renderCandidateDetailEmptyState()
+}
+
+function renderCandidateCard(job) {
+ const active = String(job.job_id || "") === String(state.selectedCandidateJobId || "")
+ const tone = toneForStatus(job.status)
+ const core = jobPayload(job)
+ const results = jobResults(job)
+ const dispatch = validationDispatchSummary(job)
+ return `
+
+
+
+
+          ${escapeHtml(job.job_id || l("未知候选编号", "Unknown Candidate ID"))}
+
+          ${escapeHtml(job.skill_name || l("未命名技能", "Unnamed Skill"))}
+
+ ${badge(candidateStatusLabel(job.status), tone)}
+
+ ${escapeHtml(clip(core.rationale || l("这个候选没有额外说明。", "This candidate has no additional notes."), 110))}
+
+ ${tag(actionLabel(job.proposed_action))}
+ ${tag(l("进入池 {time}", "Entered Pool {time}", { time: formatStamp(job.created_at) }))}
+ ${tag(l("已回 {count}", "Returned {count}", { count: number(results.length) }))}
+ ${dispatch.pendingResults > 0 ? tag(l("还差 {count} 条结果", "{count} more results needed", { count: number(dispatch.pendingResults) })) : tag(l("结果数已满足", "Enough results received"))}
+ ${tag(l("会话 {count}", "Sessions {count}", { count: number(jobSessionIds(job).length) }))}
+
+
+ `
+}
+
+function renderCandidateDetail(job) {
+ const tone = toneForStatus(job.status)
+ const core = jobPayload(job)
+ const results = jobResults(job)
+ const decision = jobDecision(job)
+ const linkedSkill = findSkillByName(job.skill_name)
+ const linkedDetail = linkedSkill ? state.skillDetails[linkedSkill.skill_id] : null
+ const candidateSkill = core.candidate_skill || {}
+ const currentSkill = core.current_skill || {}
+ const sessionEvidence = Array.isArray(core.session_evidence) ? core.session_evidence : []
+
+ return `
+
+
+
+
+
+          ${escapeHtml(l("候选技能", "Candidate Skill"))}
+
+          ${escapeHtml(job.skill_name || l("未命名技能", "Unnamed Skill"))}
+
+          ${escapeHtml(core.rationale || l("这个候选没有额外说明。", "This candidate has no additional notes."))}
+
+
+ ${badge(candidateStatusLabel(job.status), tone)}
+ ${badge(actionLabel(job.proposed_action), "neutral")}
+
+
+
+ ${renderMiniCard(l("进入候选池", "Entered Candidate Pool"), formatStamp(job.created_at), l("这条候选开始进入验证流程的时间。", "When this candidate entered the validation pipeline."))}
+ ${renderMiniCard(l("验证结果", "Validation Results"), `${number(job.accepted_count || 0)} / ${number(job.result_count || results.length)}`, l("通过数 / 总反馈数", "Accepted / total feedback"))}
+ ${renderMiniCard(l("平均分", "Average Score"), formatScore(job.mean_score), l("当前验证结果的平均分", "Average score across current validation feedback"))}
+ ${renderMiniCard(l("最终决策", "Final Decision"), candidateStatusLabel(decision.status || job.status), decision?.decided_at ? l("决策时间 {time}", "Decided at {time}", { time: formatStamp(decision.decided_at) }) : l("还没有进入最终决策", "No final decision yet"))}
+
+
+ ${renderCandidateStages(job, linkedDetail)}
+ ${renderDispatchSection(job)}
+
+
+
+
+          ${escapeHtml(l("验证阈值", "Validation Thresholds"))}
+
+          ${escapeHtml(l("这条候选需要满足什么条件才能进入最终池", "Conditions required for this candidate to be published"))}
+
+
+
+ ${renderMiniCard(l("最少结果数", "Min Results"), number(core.min_results || 0), l("需要收集的最少验证结果", "Minimum validation results required"))}
+ ${renderMiniCard(l("最少通过数", "Min Approvals"), number(core.min_approvals || 0), l("至少要有多少个验证客户端给出通过结果。", "Minimum number of validating clients that must approve"))}
+ ${renderMiniCard(l("最低均分", "Min Average Score"), formatScore(core.min_score), l("平均分阈值", "Average score threshold"))}
+ ${renderMiniCard(l("最大拒绝数", "Max Rejections"), number(core.max_rejections || 0), l("达到这个数就会被拒绝", "The candidate is rejected when this threshold is reached"))}
+
+
+ ${renderCandidateDocuments(candidateSkill, currentSkill, linkedDetail)}
+ ${linkedDetail
+ ? renderVersionTimeline(linkedDetail, {
+ title: l("若发布成功,将进入这条版本链", "This is the version chain the candidate will join if published"),
+ copy: l("这里展示的是该技能当前在最终池中的已知历史版本。", "This shows the known published history of the skill in the final pool."),
+ })
+ : renderCandidateVersionNotice(job)}
+ ${renderValidatorSection(results)}
+ ${renderSessionEvidenceSection(sessionEvidence, jobSessionIds(job))}
+ ${renderCandidateOutcomeSection(job, linkedSkill, decision)}
+
+ `
+}
+
+function renderCandidateStages(job, linkedDetail) {
+ const results = jobResults(job)
+ const decision = jobDecision(job)
+ const status = candidateStatusKey(job.status)
+ const publishVersion = linkedDetail && candidateStatusKey(decision.status || job.status) === "published"
+ ? l("最终池当前版本 v{version}", "Current final version v{version}", { version: number(linkedDetail.current_version || 0) })
+ : l("等待正式入池", "Waiting to be published")
+
+ const stages = [
+ {
+ title: l("候选生成", "Candidate Created"),
+ state: "done",
+ stamp: job.created_at,
+ note: l("{action} · 候选编号 {jobId}", "{action} · candidate ID {jobId}", { action: actionLabel(job.proposed_action), jobId: job.job_id }),
+ },
+ {
+ title: l("验证反馈", "Validation Feedback"),
+ state: results.length ? (status === "pending" ? "current" : "done") : "current",
+ stamp: results.length ? results[results.length - 1].created_at : "",
+ note: results.length
+ ? l("已收到 {count} 条结果", "Received {count} results", { count: number(results.length) })
+ : l("还没有任何客户端提交验证结果", "No client has submitted validation feedback yet"),
+ },
+ {
+ title: l("最终决策", "Final Decision"),
+ state: decision?.status
+ ? (candidateStatusKey(decision.status) === "rejected" ? "blocked" : "done")
+ : (status === "review" ? "current" : "pending"),
+ stamp: decision?.decided_at || "",
+ note: decision?.status
+ ? candidateStatusLabel(decision.status)
+ : l("等待系统根据阈值汇总验证结果", "Waiting for the system to aggregate validation results"),
+ },
+ {
+ title: l("进入最终池", "Published to Final Pool"),
+ state: status === "published" ? "done" : status === "rejected" ? "blocked" : "pending",
+ stamp: candidateStatusKey(decision?.status) === "published" ? (decision.decided_at || "") : "",
+ note: status === "published"
+ ? publishVersion
+ : status === "rejected"
+ ? l("这条候选没有进入最终池", "This candidate was not published")
+ : l("还没有正式发布到共享库", "Not yet published to the shared pool"),
+ },
+ ]
+
+ return `
+
+
+
+
+          ${escapeHtml(l("阶段追踪", "Stage Tracking"))}
+
+          ${escapeHtml(l("这条候选是如何沿着更新流程往前走的", "How this candidate progressed through the pipeline"))}
+
+
+
+ ${stages.map((stage) => `
+
+ ${escapeHtml(stageStateLabel(stage.state))}
+ ${escapeHtml(stage.title)}
+ ${escapeHtml(stage.stamp ? formatStamp(stage.stamp) : l("无时间", "No time"))}
+ ${escapeHtml(stage.note)}
+
+ `).join("")}
+
+
+ `
+}
+
+function renderCandidateDocuments(candidateSkill, currentSkill, linkedDetail) {
+ const candidateDoc = skillDocumentPreview(candidateSkill)
+ const currentDoc = skillDocumentPreview(currentSkill)
+ || String(linkedDetail?.remote_skill_md || linkedDetail?.remote_content || "").trim()
+ return `
+
+
+
+
+          ${escapeHtml(l("内容对照", "Content Comparison"))}
+
+          ${escapeHtml(l("候选草案和当前最终版有什么区别", "How the candidate draft differs from the current official version"))}
+
+
+
+ ${renderDocCard(l("候选草案", "Candidate Draft"), candidateDoc, {
+ meta: [
+ badge(categoryLabel(candidateSkill?.category || "candidate"), "neutral"),
+ candidateSkill?.description ? tag(candidateSkill.description) : "",
+ ],
+ })}
+ ${renderDocCard(l("当前最终版", "Current Official Version"), currentDoc, {
+ meta: [
+ badge(currentSkill?.name || linkedDetail?.name || l("共享正式版", "Shared Official Version"), "published"),
+ linkedDetail?.current_version ? tag(`v${number(linkedDetail.current_version)}`) : "",
+ ],
+ emptyText: l("当前没有拿到对应的最终池文档快照。", "No official snapshot is available for this final-pool version."),
+ })}
+
+
+ `
+}
+
+function renderDispatchSection(job) {
+ const core = jobPayload(job)
+ const results = jobResults(job)
+ const dispatch = validationDispatchSummary(job)
+ return `
+
+
+
+
+          ${escapeHtml(l("分发与响应", "Dispatch and Responses"))}
+
+          ${escapeHtml(l("这条候选是什么时候进入验证池,又由哪些客户端返回结果", "When this candidate entered validation and which clients responded"))}
+
+
+
+ ${renderMiniCard(l("进入验证池", "Entered Validation"), formatStamp(dispatch.dispatchAt), l("这条候选开始对验证客户端可见的时间。", "When this candidate became visible to validating clients."))}
+ ${renderMiniCard(l("分发方式", "Dispatch Mode"), dispatch.dispatchLabel, l("当前是开放领取,不是后台预先分配给某一台机器。", "This is open pickup, not pre-assigned to a specific machine."))}
+ ${renderMiniCard(l("仍缺结果", "Results Missing"), number(dispatch.pendingResults), dispatch.pendingResults > 0 ? l("还需要更多客户端返回结果", "More client results are still needed") : l("结果数已达到最低阈值", "Minimum result count reached"))}
+ ${renderMiniCard(l("仍缺通过", "Approvals Missing"), number(dispatch.pendingApprovals), dispatch.pendingApprovals > 0 ? l("还需要更多接受结果", "More approvals are still needed") : l("通过数已达到最低阈值", "Minimum approval count reached"))}
+
+
+ ${results.length
+ ? results
+ .slice()
+ .sort((left, right) => parseTime(left.created_at) - parseTime(right.created_at))
+ .map((result) => `
+
+
+
+
+                  ${escapeHtml(result.user_alias || l("未知客户端", "Unknown Client"))}
+
+                  ${escapeHtml(formatStamp(result.created_at))}
+
+ ${badge(result.accepted === true ? l("返回通过", "Approved") : l("返回拒绝", "Rejected"), result.accepted === true ? "published" : "rejected")}
+
+ ${escapeHtml(validatorModeLabel(result.validator_mode))} · ${escapeHtml(l("分数", "Score"))} ${escapeHtml(formatScore(result.score))}
+
+ `).join("")
+ : `
+
+ ${escapeHtml(l("这条候选已经在 {time} 进入开放验证池,但当前还没有任何客户端返回结果。", "This candidate entered the open validation pool at {time}, but no client has responded yet.", { time: formatStamp(dispatch.dispatchAt) }))}
+
+ `}
+
+ ${core.source
+ ? `${escapeHtml(l("候选来源:{source}", "Candidate source: {source}", { source: candidateSourceLabel(core.source) }))}
+            `
+ : ""}
+
+ `
+}
+
+function renderCandidateVersionNotice(job) {
+ return `
+
+
+
+
+          ${escapeHtml(l("版本链", "Version History"))}
+
+          ${escapeHtml(l("当前还没有可关联的最终池版本链", "There is no linked final-pool version history yet"))}
+
+
+
+ ${escapeHtml(l("{name} 还没有在当前投影里对应到共享最终池中的正式技能。若后续发布成功,版本链会出现在这里。", "{name} is not yet linked to an official skill in the current final pool projection. If it is published later, the version history will appear here.", {
+ name: job.skill_name || l("这个候选", "This candidate"),
+ }))}
+
+
+ `
+}
+
+function renderDocCard(title, document, { meta = [], emptyText = l("没有可展示的文档。", "No document is available to display.") } = {}) {
+ return `
+
+
+ ${String(document || "").trim()
+ ? `${escapeHtml(String(document).trim())} `
+ : `${escapeHtml(emptyText)}
+            `}
+
+ `
+}
+
+function renderValidatorSection(results) {
+ if (!results.length) {
+ return `
+
+
+
+
+          ${escapeHtml(l("验证反馈", "Validation Feedback"))}
+
+          ${escapeHtml(l("还没有任何客户端提交结果", "No client has submitted results yet"))}
+
+
+
+ `
+ }
+
+ return `
+
+
+
+
+          ${escapeHtml(l("验证反馈", "Validation Feedback"))}
+
+          ${escapeHtml(l("哪些客户端验证过这个候选,给了什么分", "Which clients validated this candidate and what scores they gave"))}
+
+        ${badge(l("{count} 条", "{count}", { count: number(results.length) }), "neutral")}
+
+
+ ${results.map((result) => {
+ const accepted = result.accepted === true
+ return `
+
+
+
+
+                ${escapeHtml(result.user_alias || l("未知客户端", "Unknown Client"))}
+
+                ${escapeHtml(validatorModeLabel(result.validator_mode))}
+
+ ${badge(accepted ? l("通过", "Accepted") : l("拒绝", "Rejected"), accepted ? "published" : "rejected")}
+
+
+ ${tag(l("分数 {score}", "Score {score}", { score: formatScore(result.score) }))}
+ ${tag(formatStamp(result.created_at))}
+
+ ${escapeHtml(result.notes || result.reason || l("没有备注。", "No notes."))}
+
+ `
+ }).join("")}
+
+
+ `
+}
+
+function renderSessionEvidenceSection(evidence, sessionIds) {
+ const rows = Array.isArray(evidence) ? evidence : []
+ const fallbackRows = rows.length
+ ? rows
+ : sessionIds.map((sessionId) => ({
+ session_id: sessionId,
+ summary: l("这条候选引用了该会话,但没有额外摘要。", "This candidate references the session, but no additional summary is available."),
+ }))
+
+ if (!fallbackRows.length) {
+ return `
+
+
+
+
+          ${escapeHtml(l("会话证据", "Session Evidence"))}
+
+          ${escapeHtml(l("这条候选没有记录会话来源", "This candidate has no recorded session source"))}
+
+
+
+ `
+ }
+
+ return `
+
+
+
+
+          ${escapeHtml(l("会话证据", "Session Evidence"))}
+
+          ${escapeHtml(l("哪些会话支撑了这条候选", "Which sessions support this candidate"))}
+
+        ${badge(l("{count} 条", "{count}", { count: number(fallbackRows.length) }), "neutral")}
+
+
+ ${fallbackRows.map((item) => `
+
+
+
+
+              ${escapeHtml(item.session_id || l("未知会话", "Unknown Session"))}
+
+              ${escapeHtml(clip(item.summary || l("没有额外摘要。", "No additional summary."), 120))}
+
+
+ ${item.judge_overall_score != null ? badge(l("总评 {score}", "Overall {score}", { score: formatScore(item.judge_overall_score) }), "neutral") : ""}
+ ${item.avg_prm != null ? tag(l("自动评分 {score}", "Auto Score {score}", { score: formatScore(item.avg_prm) })) : ""}
+
+
+
+ `).join("")}
+
+
+ `
+}
+
+function renderCandidateOutcomeSection(job, linkedSkill, decision) {
+ const status = candidateStatusKey(decision?.status || job.status)
+ if (status === "published") {
+ return `
+
+
+
+
+          ${escapeHtml(l("结果", "Result"))}
+
+          ${escapeHtml(l("这条候选已经进入最终池", "This candidate has been published"))}
+
+ ${badge(l("已发布", "Published"), "published")}
+
+
+ ${escapeHtml(decision.reason || l("候选满足验证阈值后进入最终池。", "The candidate met the validation thresholds and was published."))}
+
+ ${linkedSkill
+ ? `
+
+
+ ${escapeHtml(l("查看最终池版本", "View Published Skill"))}
+
+
+ `
+ : ""}
+
+ `
+ }
+ if (status === "rejected") {
+ return `
+
+
+
+
+          ${escapeHtml(l("结果", "Result"))}
+
+          ${escapeHtml(l("这条候选没有进入最终池", "This candidate was not published"))}
+
+ ${badge(l("已拒绝", "Rejected"), "rejected")}
+
+ ${escapeHtml(decision.reason || l("当前验证结果未满足发布条件。", "The current validation results do not meet the publish conditions."))}
+
+ `
+ }
+ return `
+
+
+
+
+          ${escapeHtml(l("结果", "Result"))}
+
+          ${escapeHtml(l("这条候选还在流程中", "This candidate is still in progress"))}
+
+ ${badge(candidateStatusLabel(job.status), "pending")}
+
+ ${escapeHtml(l("继续关注验证反馈数量、平均分和拒绝数是否达到阈值。", "Keep watching result counts, average score, and rejection count against the thresholds."))}
+
+ `
+}
+
+function renderFinalPage() {
+ const items = filteredFinalSkills()
+ const detail = selectedFinalSkillDetail()
+
+ dom.finalList.innerHTML = items.length
+ ? items.map(renderFinalSkillCard).join("")
+ : renderFinalEmptyState()
+
+ dom.finalDetail.innerHTML = detail
+ ? renderFinalSkillDetail(detail)
+ : renderFinalDetailEmptyState()
+}
+
+function renderFinalSkillCard(skill) {
+ const active = String(skill.skill_id || "") === String(state.selectedFinalSkillId || "")
+ const sync = compareLocalAndRemote(skill)
+ const candidateCount = candidateRecordCount(skill.name)
+ return `
+
+
+
+
+          ${escapeHtml(categoryLabel(skill.category))}
+
+          ${escapeHtml(skill.name || l("未命名技能", "Unnamed Skill"))}
+
+ ${badge(`v${number(skill.current_version || 0)}`, "published")}
+
+ ${escapeHtml(clip(skill.description || l("这个技能还没有描述。", "This skill does not have a description yet."), 110))}
+
+ ${tag(sync.label)}
+ ${tag(l("历史版本 {count}", "Versions {count}", { count: number(visibleVersionCount(skill)) }))}
+ ${tag(l("候选记录 {count}", "Candidates {count}", { count: number(candidateCount) }))}
+ ${skill.uploaded_at || skill.remote_updated_at ? tag(l("发布时间 {time}", "Published {time}", { time: formatStamp(skill.uploaded_at || skill.remote_updated_at) })) : ""}
+
+
+ `
+}
+
+function renderFinalSkillDetail(skill) {
+ const sync = compareLocalAndRemote(skill)
+ return `
+
+
+
+
+
+          ${escapeHtml(l("最终池技能", "Published Skill"))}
+
+          ${escapeHtml(skill.name || l("未命名技能", "Unnamed Skill"))}
+
+          ${escapeHtml(skill.description || l("这个技能还没有描述。", "This skill does not have a description yet."))}
+
+
+ ${badge(`v${number(skill.current_version || 0)}`, "published")}
+ ${badge(sync.label, sync.tone)}
+
+
+
+ ${renderMiniCard(l("当前正式版", "Current Official Version"), `v${number(skill.current_version || 0)}`, l("共享库当前正式版本。", "Current official version in the shared pool."))}
+ ${renderMiniCard(l("历史版本", "Version Count"), number(visibleVersionCount(skill)), l("当前能看到的共享版本数量。", "Number of shared versions currently visible."))}
+ ${renderMiniCard(l("最近发布时间", "Latest Publish Time"), formatStamp(skill.uploaded_at || skill.remote_updated_at || skill.updated_at), l("最新一次进入共享库的时间。", "When it most recently entered the shared pool."))}
+ ${renderMiniCard(l("本地状态", "Local Status"), sync.label, l("这份共享正式版和你本地内容当前是什么关系。", "How this published version relates to your local content."))}
+
+
+ ${renderVersionCompare(skill, {
+ scope: "final",
+ title: l("共享版本对比", "Shared Version Comparison"),
+ copy: l("可以在这里对照 v1 / v2 / v3 等历史版本,也可以把本地当前内容拉进来比较。", "Compare v1 / v2 / v3 and other history versions here, and also compare them with your current local content."),
+ includeLocal: true,
+ })}
+ ${renderVersionTimeline(skill, {
+ title: l("共享版本时间线", "Shared Version Timeline"),
+ copy: l("这个技能是如何一路发布到当前正式版的。", "How this skill evolved into the current official version."),
+ })}
+ ${renderCandidateLinks(skill.name)}
+ ${renderRelatedSessions(skill.related_sessions, { emptyTitle: l("最近没有观测到相关会话", "No related sessions were observed recently") })}
+
+ `
+}
+
+function renderVersionCompare(skill, options) {
+ const selection = compareState(skill, options.scope, { includeLocal: options.includeLocal })
+ if (!selection.entries.length) {
+ return `
+
+
+
+
+          ${escapeHtml(options.title)}
+
+          ${escapeHtml(l("当前没有可比较的版本", "No versions are available for comparison"))}
+
+
+
+ `
+ }
+
+ return `
+
+
+
+
+          ${escapeHtml(options.title)}
+
+          ${escapeHtml(options.copy)}
+
+ ${badge(l("{count} 个版本", "{count} versions", { count: number(selection.entries.length) }), "neutral")}
+
+
+
+
+ ${escapeHtml(l("左侧版本", "Left Version"))}
+
+ ${selection.entries.map((entry) => `
+
+ ${escapeHtml(versionEntryLabel(entry))}
+
+ `).join("")}
+
+
+
+ ${escapeHtml(l("右侧版本", "Right Version"))}
+
+ ${selection.entries.map((entry) => `
+
+ ${escapeHtml(versionEntryLabel(entry))}
+
+ `).join("")}
+
+
+
+
+ ${selection.entries.map((entry) => `
+
+ ${escapeHtml(entry.label)}
+
+ `).join("")}
+
+
+ ${renderVersionDoc(selection.primary, l("左侧版本", "Left Version"))}
+ ${renderVersionDoc(selection.compare, l("右侧版本", "Right Version"))}
+
+
+
+ `
+}
+
+function versionEntryLabel(entry) {
+ const parts = [entry.label]
+ if (entry.timestamp) {
+ parts.push(formatStamp(entry.timestamp))
+ }
+ if (entry.action && entry.action !== "local") {
+ parts.push(actionLabel(entry.action))
+ }
+ return parts.join(" · ")
+}
+
+function renderVersionDoc(entry, title) {
+ if (!entry) {
+ return `${escapeHtml(l("没有可展示的版本。", "No version is available to display."))}
+    `
+ }
+ return `
+
+
+
+
+          ${escapeHtml(title)}
+
+          ${escapeHtml(entry.label)}
+
+
+ ${badge(entry.source === "local" ? l("本地", "Local") : l("共享库", "Shared Pool"), entry.source === "local" ? "neutral" : "published")}
+
+
+
+
+ ${entry.timestamp ? tag(formatStamp(entry.timestamp)) : ""}
+ ${entry.action && entry.action !== "local" ? tag(actionLabel(entry.action)) : ""}
+
+
+ ${String(entry.document || "").trim()
+ ? `${escapeHtml(String(entry.document).trim())} `
+ : `${escapeHtml(l("这个版本只有元数据,没有留下文档快照。", "This version only has metadata and no saved document snapshot."))}
+        `}
+
+ `
+}
+
+function renderVersionTimeline(skill, options = {}) {
+ const entries = buildVersionEntries(skill, { includeLocal: false }).filter((entry) => entry.source === "shared")
+ if (!entries.length) {
+ return `
+
+
+
+
+          ${escapeHtml(options.title || l("版本时间线", "Version Timeline"))}
+
+          ${escapeHtml(l("当前没有可用的共享版本历史", "No shared version history is available yet"))}
+
+
+
+ `
+ }
+ return `
+
+
+
+
+          ${escapeHtml(options.title || l("版本时间线", "Version Timeline"))}
+
+          ${escapeHtml(options.copy || l("版本是如何一路走到当前正式版的", "How versions progressed into the current official version"))}
+
+
+
+ ${entries.map((entry) => `
+
+
+
+
+              ${escapeHtml(entry.label)}
+
+              ${escapeHtml(entry.timestamp ? formatStamp(entry.timestamp) : l("无时间", "No time"))}
+
+ ${badge(entry.current ? l("当前正式版", "Current") : l("历史版本", "History"), entry.current ? "published" : "neutral")}
+
+
+ ${entry.action && entry.action !== "local" ? tag(actionLabel(entry.action)) : ""}
+
+
+ `).join("")}
+
+
+ `
+}
+
+function renderSharingNotConnectedNotice(copy) {
+ return `
+
+
+
+
+          ${escapeHtml(l("共享状态", "Sharing"))}
+
+          ${escapeHtml(l("当前没有接入共享库", "The shared pool is not connected"))}
+
+
+ ${escapeHtml(copy)}
+
+ `
+}
+
+function renderCandidateEmptyState() {
+ if (!sharingEnabled()) {
+ return `
+
+ ${escapeHtml(l("当前没有启用共享库,所以候选池为空。接入共享库之后,这里才会显示候选、验证反馈和发布时间线。", "Sharing is not enabled, so the candidate pool is empty. After connecting the shared pool, candidates, validation feedback, and publish events will appear here."))}
+
+ `
+ }
+ return `${escapeHtml(l("当前筛选条件下没有候选技能。", "No candidate skills match the current filter."))}
+  `
+}
+
+function renderCandidateDetailEmptyState() {
+ if (!sharingEnabled()) {
+ return `
+
+ ${escapeHtml(l("当前环境未启用共享与验证链路,所以没有候选详情可看。", "Sharing and validation are not enabled in this environment, so candidate details are unavailable."))}
+
+ `
+ }
+ return `${escapeHtml(l("选择一个候选技能,查看它的验证轨迹和会话来源。", "Select a candidate skill to inspect validation progress and source sessions."))}
+  `
+}
+
+function renderFinalEmptyState() {
+ if (!sharingEnabled()) {
+ return `
+
+ ${escapeHtml(l("当前没有启用共享库,所以最终池为空。接入共享库后,这里会显示共享技能和版本历史。", "Sharing is not enabled, so the final pool is empty. After connecting the shared pool, published skills and version history will appear here."))}
+
+ `
+ }
+ return `${escapeHtml(l("当前筛选条件下没有最终池技能。", "No published skills match the current filter."))}
+  `
+}
+
+function renderFinalDetailEmptyState() {
+ if (!sharingEnabled()) {
+ return `
+
+ ${escapeHtml(l("当前环境未启用共享库,所以还没有可查看的最终池详情。", "The shared pool is not enabled in this environment, so no published-skill details are available."))}
+
+ `
+ }
+ return `${escapeHtml(l("选择一个最终池技能,查看版本历史、发布链路和相关会话。", "Select a published skill to inspect version history, release flow, and related sessions."))}
+  `
+}
+
+function renderSessionsPage() {
+ const items = filteredSessions()
+ const detail = selectedSessionDetail()
+
+ dom.sessionList.innerHTML = items.length
+ ? items.map(renderSessionCard).join("")
+ : `${escapeHtml(l("当前筛选条件下没有会话。", "No sessions match the current filter."))}
+    `
+
+ dom.sessionDetail.innerHTML = detail
+ ? renderSessionDetail(detail)
+ : `${escapeHtml(l("选择一个会话,查看逐轮内容、关联技能,以及它是否进入候选池。", "Select a session to inspect turn-by-turn content, related skills, and whether it entered the candidate pool."))}
+    `
+}
+
+function renderSessionCard(session) {
+ const active = String(session.session_id || "") === String(state.selectedSessionId || "")
+ return `
+
+
+
+
+          ${escapeHtml(formatStamp(session.timestamp))}
+
+          ${escapeHtml(session.session_id || l("未知会话", "Unknown Session"))}
+
+ ${badge(sourceLabel(session.source || "local"), String(session.source || "").toLowerCase() === "local" ? "neutral" : "published")}
+
+ ${escapeHtml(clip(session.prompt_preview || session.response_preview || l("没有摘要。", "No summary."), 120))}
+
+ ${tag(outcomeLabel(session.outcome))}
+ ${tag(l("回合 {count}", "Turns {count}", { count: number(session.num_turns || 0) }))}
+ ${tag(l("技能 {count}", "Skills {count}", { count: number((session.skill_names || []).length) }))}
+
+
+ `
+}
+
+function renderSessionDetail(session) {
+ const links = Array.isArray(session.links) ? session.links : []
+ const turns = Array.isArray(session.turns) ? session.turns : []
+ const uniqueSkillNames = [...new Set([
+ ...(Array.isArray(session.skill_names) ? session.skill_names : []),
+ ...links.map((item) => item.skill_name),
+ ].filter(Boolean))]
+ const linkedJobs = jobsForSession(session.session_id)
+
+ return `
+
+
+
+
+
+          ${escapeHtml(l("会话详情", "Session Detail"))}
+
+          ${escapeHtml(session.session_id || l("未知会话", "Unknown Session"))}
+
+          ${escapeHtml(session.prompt_preview || session.response_preview || l("没有摘要。", "No summary."))}
+
+
+ ${badge(sourceLabel(session.source || "local"), String(session.source || "").toLowerCase() === "local" ? "neutral" : "published")}
+ ${badge(outcomeLabel(session.outcome), "neutral")}
+
+
+
+ ${renderMiniCard(l("时间", "Time"), formatStamp(session.timestamp), l("这条会话的时间戳", "Timestamp of this session"))}
+ ${renderMiniCard(l("回合数", "Turns"), number(session.num_turns || 0), l("当前会话的轮次数量。", "Number of turns in this session."))}
+ ${renderMiniCard(l("自动评分", "Auto Score"), formatScore(session.avg_prm_score), l("如果系统做了自动评分,会在这里显示。", "Displayed when automatic scoring is available."))}
+ ${renderMiniCard(l("关联候选", "Related Candidates"), number(linkedJobs.length), linkedJobs.length ? l("这条会话被哪些候选技能引用过。", "Which candidate skills referenced this session.") : l("当前没有候选技能显式引用它。", "No candidate skill explicitly references this session."))}
+
+
+
+
+
+
+          ${escapeHtml(l("关联技能", "Related Skills"))}
+
+          ${escapeHtml(l("这条会话中出现过哪些技能", "Which skills appeared in this session"))}
+
+
+
+ ${uniqueSkillNames.length
+ ? uniqueSkillNames.map(renderSessionSkillButton).join("")
+ : `${escapeHtml(l("这个会话没有显式记录关联技能。", "This session has no explicitly recorded related skills."))} `}
+
+
+
+
+
+
+          ${escapeHtml(l("候选引用", "Candidate Links"))}
+
+          ${escapeHtml(l("这个会话是否参与过候选技能的生成或验证", "Whether this session participated in candidate generation or validation"))}
+
+
+ ${linkedJobs.length
+ ? `
+
+ ${linkedJobs.map((job) => `
+
+
+
+
+                  ${escapeHtml(job.skill_name || l("未命名技能", "Unnamed Skill"))}
+
+                  ${escapeHtml(job.job_id)} · ${escapeHtml(candidateStatusLabel(job.status))}
+
+ ${badge(actionLabel(job.proposed_action), toneForStatus(job.status))}
+
+
+ `).join("")}
+
+ `
+ : `${escapeHtml(l("这条会话当前没有被候选池显式引用。", "This session is not explicitly referenced by the candidate pool."))}
+    `}
+
+
+
+
+
+          ${escapeHtml(l("逐轮内容", "Turn-by-Turn Content"))}
+
+          ${escapeHtml(l("用户输入与模型回复片段", "User Prompts and Model Responses"))}
+
+ ${badge(l("{count} 轮", "{count} turns", { count: number(turns.length) }), "neutral")}
+
+
+ ${turns.length
+ ? turns.map(renderTurnCard).join("")
+ : `
+          ${escapeHtml(l("这个会话没有可展示的 turn 记录。", "This session has no turn records to display."))}
+          `}
+
+
+
+ `
+}
+
+function renderSessionSkillButton(skillName) {
+ const skill = findSkillByName(skillName)
+ if (!skill) {
+ return tag(skillName)
+ }
+ if (skill.has_local) {
+ return `
+
+ ${escapeHtml(skillName)}
+
+ `
+ }
+ return `
+
+ ${escapeHtml(skillName)}
+
+ `
+}
+
+function renderTurnCard(turn) {
+ const injected = normalizeSkillNames(turn.injected_skills)
+ const read = normalizeSkillNames(turn.read_skills)
+ const modified = normalizeSkillNames(turn.modified_skills)
+ return `
+
+
+
+
+          ${escapeHtml(l("第 {turn} 轮", "Turn {turn}", { turn: String(turn.turn_num || "-") }))}
+
+ ${turn.prm_score != null ? badge(l("评分 {score}", "Score {score}", { score: formatScore(turn.prm_score) }), "neutral") : ""}
+
+
+
+        ${escapeHtml(l("用户输入", "User Prompt"))}
+
+        ${escapeHtml(String(turn.prompt_text || "").trim() || l("(空)", "(empty)"))}
+
+
+
+        ${escapeHtml(l("模型回复", "Model Response"))}
+
+        ${escapeHtml(String(turn.response_text || "").trim() || l("(空)", "(empty)"))}
+
+
+ ${injected.map((item) => tag(l("注入 {name}", "Injected {name}", { name: item }))).join("")}
+ ${read.map((item) => tag(l("读取 {name}", "Read {name}", { name: item }))).join("")}
+ ${modified.map((item) => tag(l("修改 {name}", "Modified {name}", { name: item }))).join("")}
+
+
+ `
+}
+
+async function runOperation(op) {
+ if (state.loading) {
+ return
+ }
+ const operationMap = {
+ "toggle-locale": async () => {
+ setLocale(state.locale === "en" ? "zh" : "en")
+ },
+ refresh: async () => {
+ await refreshData({ notice: l("已刷新看板数据。", "Dashboard data refreshed.") })
+ },
+ "sync-projection": async () => {
+ const response = await getJson("/api/v1/sync", { method: "POST" })
+ const summary = response.summary || {}
+ await refreshData({
+ notice: l("已重建投影:{skills} 个技能,{sessions} 个会话。", "Snapshot rebuilt: {skills} skills, {sessions} sessions.", {
+ skills: number(summary.skills || 0),
+ sessions: number(summary.sessions || 0),
+ }),
+ })
+ },
+ }
+ const handler = operationMap[op]
+ if (!handler) {
+ return
+ }
+ setLoading(true)
+ try {
+ await handler()
+ } catch (error) {
+ showMessage("error", error.message || l("操作失败", "Operation failed"))
+ } finally {
+ setLoading(false)
+ }
+}
+
+async function selectLocalSkill(skillId, { openView = false } = {}) {
+ state.selectedLocalSkillId = skillId || ""
+ if (openView) {
+ state.activeView = "local"
+ }
+ renderAll()
+ await hydrateSelections({ local: true, final: false, session: false, candidate: false })
+ renderAll()
+}
+
+async function selectFinalSkill(skillId, { openView = false } = {}) {
+ state.selectedFinalSkillId = skillId || ""
+ if (openView) {
+ state.activeView = "final"
+ }
+ renderAll()
+ await hydrateSelections({ local: false, final: true, session: false, candidate: false })
+ renderAll()
+}
+
+async function selectCandidate(jobId, { openView = false } = {}) {
+ state.selectedCandidateJobId = jobId || ""
+ if (openView) {
+ state.activeView = "candidate"
+ }
+ renderAll()
+ await hydrateSelections({ local: false, final: false, session: false, candidate: true })
+ renderAll()
+}
+
+async function selectSession(sessionId, { openView = false } = {}) {
+ const sessionItem = findSessionById(sessionId)
+ if (sessionItem && String(sessionItem.source || "").trim().toLowerCase() === "shared" && dom.sessionSource.value === "local") {
+ dom.sessionSource.value = "all"
+ }
+ state.selectedSessionId = sessionId || ""
+ if (openView) {
+ state.activeView = "sessions"
+ }
+ ensureSelections()
+ renderAll()
+ await hydrateSelections({ local: false, final: false, session: true, candidate: false })
+ renderAll()
+}
+
+async function onFilterChange(kind) {
+ ensureSelections()
+ renderAll()
+ if (kind === "local" && state.selectedLocalSkillId) {
+ await hydrateSelections({ local: true, final: false, session: false, candidate: false })
+ } else if (kind === "candidate" && state.selectedCandidateJobId) {
+ await hydrateSelections({ local: false, final: false, session: false, candidate: true })
+ } else if (kind === "final" && state.selectedFinalSkillId) {
+ await hydrateSelections({ local: false, final: true, session: false, candidate: false })
+ } else if (kind === "session" && state.selectedSessionId) {
+ await hydrateSelections({ local: false, final: false, session: true, candidate: false })
+ }
+ renderAll()
+}
+
+function setActiveView(view) {
+ state.activeView = view
+ renderAll()
+}
+
+async function handleDocumentClick(event) {
+ const origin = event.target instanceof Element ? event.target : null
+ if (!origin) {
+ return
+ }
+ const target = origin.closest(
+ "button, [data-select-local-skill], [data-select-final-skill], [data-select-candidate], [data-select-session], [data-view]"
+ )
+ if (!(target instanceof HTMLElement)) {
+ return
+ }
+
+ if (target.dataset.view) {
+ setActiveView(target.dataset.view)
+ return
+ }
+
+ if (target.dataset.openView) {
+ state.activeView = target.dataset.openView
+ }
+
+ if (target.dataset.op) {
+ await runOperation(target.dataset.op)
+ return
+ }
+
+ if (target.dataset.selectLocalSkill) {
+ await selectLocalSkill(target.dataset.selectLocalSkill, { openView: true })
+ return
+ }
+
+ if (target.dataset.selectFinalSkill) {
+ await selectFinalSkill(target.dataset.selectFinalSkill, { openView: true })
+ return
+ }
+
+ if (target.dataset.selectCandidate) {
+ await selectCandidate(target.dataset.selectCandidate, { openView: true })
+ return
+ }
+
+ if (target.dataset.selectSession) {
+ await selectSession(target.dataset.selectSession, { openView: true })
+ return
+ }
+
+ if (target.dataset.pickVersion) {
+ setCompareSelection(
+ target.dataset.compareScope,
+ target.dataset.skillId,
+ target.dataset.compareField || "primary",
+ target.dataset.pickVersion
+ )
+ renderAll()
+ }
+}
+
+function handleDocumentChange(event) {
+ const target = event.target
+ if (!(target instanceof HTMLSelectElement)) {
+ return
+ }
+ if (target.dataset.compareField) {
+ setCompareSelection(
+ target.dataset.compareScope,
+ target.dataset.skillId,
+ target.dataset.compareField,
+ target.value
+ )
+ renderAll()
+ }
+}
+
+function bindEvents() {
+ dom.localSearch.addEventListener("input", () => {
+ void onFilterChange("local")
+ })
+ dom.candidateSearch.addEventListener("input", () => {
+ void onFilterChange("candidate")
+ })
+ dom.candidateStatus.addEventListener("change", () => {
+ void onFilterChange("candidate")
+ })
+ dom.finalSearch.addEventListener("input", () => {
+ void onFilterChange("final")
+ })
+ dom.sessionSearch.addEventListener("input", () => {
+ void onFilterChange("session")
+ })
+ dom.sessionSource.addEventListener("change", () => {
+ void onFilterChange("session")
+ })
+
+ document.addEventListener("click", (event) => {
+ void handleDocumentClick(event)
+ })
+ document.addEventListener("change", handleDocumentChange)
+}
+
+async function init() {
+ bindEvents()
+ await refreshData()
+}
+
+void init()
diff --git a/skillclaw/dashboard_assets/index.html b/skillclaw/dashboard_assets/index.html
new file mode 100644
index 0000000..26f5e65
--- /dev/null
+++ b/skillclaw/dashboard_assets/index.html
@@ -0,0 +1,218 @@
+<!--
+  NOTE: the markup of this file was stripped in transit; only the visible text
+  survived. The recoverable structure is summarized below (UI copy translated
+  from Chinese).
+
+  Page title: SkillClaw Skill Evolution Dashboard (SkillClaw 技能演化看板)
+  Header actions: "English" (language toggle), "Refresh data", "Rebuild projection"
+
+  Overview view - "Where this skill-update pipeline stands right now."
+    Aggregates four status groups: local sync, candidate validation, final
+    publication, and session sources.
+
+  Recent events view - "Candidate generation, validation feedback, final
+    decisions." Shows the latest 12 pipeline events in reverse chronological
+    order, so you can spot changes before drilling down.
+
+  My skills view - "Are local skills in sync with the shared releases?"
+    Lists your local skills, whether they are synced, and which sessions they
+    were recently associated with. Selecting a local skill shows its sync
+    status, version chain, and related sessions.
+
+  Candidate pool view - "Which skills are awaiting validation, and which have
+    entered the final pool." Shows each candidate's real validation progress:
+    candidate content, validating clients, decision time, and the session
+    evidence it depends on. Status filter options: All statuses / Pending
+    validation / Has feedback / In final pool / Rejected. Selecting a
+    candidate shows its validation trail and session sources.
+
+  Final pool view - "Current release versions and publication history of
+    shared skills." Shows only skills already promoted to the shared library,
+    for tracking version evolution, publish times, and local sync status.
+    Selecting a final-pool skill shows its version history, publication
+    chain, and related sessions.
+
+  Session trace view - "My sessions, plus the shared sessions backing
+    candidate skills." Defaults to your local sessions; switch to all
+    sessions to trace candidate origins. Source filter options: My sessions /
+    All sessions / Shared sessions. Selecting a session shows its per-turn
+    content, associated skills, and whether it entered the candidate pool.
+-->
diff --git a/skillclaw/dashboard_assets/styles.css b/skillclaw/dashboard_assets/styles.css
new file mode 100644
index 0000000..d06bd76
--- /dev/null
+++ b/skillclaw/dashboard_assets/styles.css
@@ -0,0 +1,859 @@
+:root {
+ --bg: #f6f8fb;
+ --bg-accent: #eef4ff;
+ --paper: #ffffff;
+ --paper-soft: #fdfefe;
+ --ink: #1f2328;
+ --muted: #57606a;
+ --line: #d0d7de;
+ --line-strong: #afb8c1;
+ --blue: #0969da;
+ --blue-soft: #eaf2ff;
+ --green: #1a7f37;
+ --green-soft: #e7f6ec;
+ --amber: #9a6700;
+ --amber-soft: #fff8c5;
+ --red: #cf222e;
+ --red-soft: #ffebe9;
+ --slate-soft: #f3f4f6;
+ --shadow: 0 14px 28px rgba(31, 35, 40, 0.06);
+ --shadow-soft: 0 8px 18px rgba(31, 35, 40, 0.04);
+ --radius-xl: 22px;
+ --radius-lg: 18px;
+ --radius-md: 14px;
+ --radius-sm: 10px;
+}
+
+* {
+ box-sizing: border-box;
+}
+
+html,
+body {
+ margin: 0;
+ min-height: 100%;
+}
+
+body {
+ color: var(--ink);
+ font-family: "PingFang SC", "Hiragino Sans GB", "Noto Sans CJK SC", "Source Han Sans SC", "Microsoft YaHei", sans-serif;
+ background:
+ radial-gradient(circle at top right, rgba(9, 105, 218, 0.08), transparent 28%),
+ linear-gradient(180deg, #fbfcfe 0%, var(--bg) 100%);
+}
+
+body::before {
+ content: "";
+ position: fixed;
+ inset: 0;
+ pointer-events: none;
+ background-image:
+ linear-gradient(rgba(208, 215, 222, 0.36) 1px, transparent 1px),
+ linear-gradient(90deg, rgba(208, 215, 222, 0.36) 1px, transparent 1px);
+ background-size: 28px 28px;
+ mask-image: linear-gradient(180deg, rgba(0, 0, 0, 0.1), transparent 74%);
+}
+
+button,
+input,
+select,
+textarea {
+ font: inherit;
+}
+
+button,
+input,
+select,
+textarea,
+pre {
+ border-radius: var(--radius-sm);
+}
+
+button {
+ border: 1px solid transparent;
+ padding: 10px 14px;
+ background: var(--blue);
+ color: #fff;
+ cursor: pointer;
+ transition:
+ background 140ms ease,
+ border-color 140ms ease,
+ transform 140ms ease,
+ box-shadow 140ms ease,
+ opacity 140ms ease;
+ box-shadow: 0 10px 18px rgba(9, 105, 218, 0.16);
+}
+
+button:hover {
+ background: #0858b8;
+ transform: translateY(-1px);
+}
+
+button:disabled {
+ opacity: 0.55;
+ cursor: not-allowed;
+ transform: none;
+ box-shadow: none;
+}
+
+button.secondary,
+button.ghost,
+.nav-item,
+.jump-card,
+.filter-chip,
+.version-pill {
+ background: #fff;
+ color: var(--ink);
+ border-color: var(--line);
+ box-shadow: none;
+}
+
+button.ghost {
+ color: var(--blue);
+}
+
+input,
+select,
+textarea {
+ width: 100%;
+ min-width: 0;
+ padding: 10px 12px;
+ border: 1px solid var(--line);
+ background: #fff;
+ color: var(--ink);
+ transition: border-color 140ms ease, box-shadow 140ms ease;
+}
+
+input:focus,
+select:focus,
+textarea:focus {
+ outline: none;
+ border-color: rgba(9, 105, 218, 0.45);
+ box-shadow: 0 0 0 4px rgba(9, 105, 218, 0.12);
+}
+
+pre {
+ margin: 0;
+ padding: 16px;
+ overflow-x: auto;
+ border: 1px solid #d8dee4;
+ background: #f6f8fa;
+ color: #24292f;
+ font-family: ui-monospace, "SFMono-Regular", Menlo, Consolas, monospace;
+ font-size: 0.88rem;
+ line-height: 1.6;
+ white-space: pre-wrap;
+ word-break: break-word;
+}
+
+h1,
+h2,
+h3,
+h4,
+p {
+ margin: 0;
+}
+
+h1,
+h2,
+h3,
+h4 {
+ font-weight: 650;
+ letter-spacing: -0.01em;
+}
+
+h1 {
+ font-size: 2.1rem;
+ line-height: 1.05;
+}
+
+h2 {
+ font-size: clamp(1.55rem, 2.5vw, 2.2rem);
+ line-height: 1.18;
+}
+
+h3 {
+ font-size: 1.1rem;
+ line-height: 1.3;
+}
+
+h4 {
+ font-size: 1rem;
+ line-height: 1.35;
+}
+
+.kicker,
+.metric-label,
+.field-label,
+.card-kicker,
+.status-caption {
+ font-size: 0.73rem;
+ font-weight: 600;
+ color: var(--muted);
+ letter-spacing: 0.04em;
+ text-transform: none;
+}
+
+.hero-copy,
+.panel-copy,
+.sidebar-copy,
+.soft-copy,
+.message-copy,
+.empty-state {
+ color: var(--muted);
+ line-height: 1.65;
+}
+
+.app-shell {
+ min-height: 100vh;
+ display: grid;
+ grid-template-columns: 280px minmax(0, 1fr);
+}
+
+.sidebar {
+ position: sticky;
+ top: 0;
+ min-height: 100vh;
+ padding: 28px 20px;
+ background:
+ linear-gradient(180deg, #ffffff 0%, #f9fbff 100%);
+ border-right: 1px solid var(--line);
+}
+
+.brand-block {
+ padding: 18px 16px;
+ border: 1px solid var(--line);
+ border-radius: var(--radius-lg);
+ background: linear-gradient(180deg, #ffffff 0%, #f8fbff 100%);
+ box-shadow: var(--shadow-soft);
+}
+
+.sidebar-copy {
+ margin-top: 14px;
+}
+
+.nav-list,
+.stack,
+.doc-grid,
+.kv-list,
+.timeline-stack,
+.turn-stack,
+.version-stack {
+ display: grid;
+ gap: 12px;
+}
+
+.nav-list {
+ margin-top: 22px;
+}
+
+.nav-item {
+ width: 100%;
+ padding: 14px 15px;
+ display: grid;
+ justify-items: start;
+ gap: 6px;
+ text-align: left;
+}
+
+.nav-item.active {
+ border-color: rgba(9, 105, 218, 0.26);
+ background: var(--blue-soft);
+}
+
+.nav-title {
+ font-weight: 700;
+}
+
+.nav-meta {
+ font-size: 0.84rem;
+ color: var(--muted);
+}
+
+.sidebar-section {
+ margin-top: 24px;
+}
+
+.main-shell {
+ padding: 28px 32px 36px;
+}
+
+.page-hero {
+ display: flex;
+ justify-content: space-between;
+ gap: 18px;
+ align-items: flex-start;
+ padding: 28px 30px;
+ border: 1px solid var(--line);
+ border-radius: var(--radius-xl);
+ background:
+ linear-gradient(135deg, #ffffff 0%, #f8fbff 52%, #eef5ff 100%);
+ box-shadow: var(--shadow);
+}
+
+.hero-actions,
+.controls,
+.controls.row,
+.chip-row,
+.meta-row,
+.action-row,
+.status-row,
+.selector-row {
+ display: flex;
+ flex-wrap: wrap;
+ gap: 10px;
+}
+
+.hero-actions,
+.controls {
+ align-items: center;
+}
+
+.controls.row > * {
+ flex: 1 1 180px;
+}
+
+.message-strip {
+ margin-top: 16px;
+ padding: 14px 16px;
+ border: 1px solid var(--line);
+ border-radius: var(--radius-md);
+ box-shadow: var(--shadow-soft);
+}
+
+.message-strip.hidden,
+.view {
+ display: none;
+}
+
+.message-strip.info {
+ background: var(--blue-soft);
+ border-color: rgba(9, 105, 218, 0.18);
+}
+
+.message-strip.warn {
+ background: var(--amber-soft);
+ border-color: rgba(154, 103, 0, 0.18);
+}
+
+.message-strip.error {
+ background: var(--red-soft);
+ border-color: rgba(207, 34, 46, 0.18);
+}
+
+.view.active {
+ display: block;
+}
+
+.view {
+ margin-top: 20px;
+}
+
+.panel {
+ border: 1px solid var(--line);
+ border-radius: var(--radius-lg);
+ background: var(--paper);
+ box-shadow: var(--shadow-soft);
+ overflow: hidden;
+}
+
+.panel-head {
+ display: flex;
+ justify-content: space-between;
+ gap: 14px;
+ align-items: flex-start;
+ padding: 20px 22px 0;
+}
+
+.panel-head.wrap {
+ flex-wrap: wrap;
+}
+
+.panel-head.compact {
+ align-items: center;
+}
+
+.panel-copy {
+ max-width: 64ch;
+}
+
+.panel-body,
+.detail-shell {
+ padding: 18px 22px 22px;
+}
+
+.two-column,
+.split-layout,
+.metric-grid,
+.doc-grid,
+.overview-grid,
+.detail-grid,
+.mini-grid,
+.stage-grid {
+ display: grid;
+ gap: 16px;
+}
+
+.two-column {
+ margin-top: 16px;
+ grid-template-columns: repeat(2, minmax(0, 1fr));
+}
+
+.split-layout {
+ margin-top: 16px;
+ grid-template-columns: minmax(320px, 0.9fr) minmax(0, 1.1fr);
+ align-items: start;
+}
+
+.metric-grid {
+ padding: 18px 22px 22px;
+ grid-template-columns: repeat(4, minmax(0, 1fr));
+}
+
+.metric-card,
+.record-card,
+.detail-card,
+.context-card,
+.event-card,
+.mini-card,
+.doc-card,
+.turn-card,
+.validator-card,
+.version-card,
+.stage-card,
+.sidebar-card {
+ border: 1px solid var(--line);
+ border-radius: var(--radius-md);
+ background: var(--paper-soft);
+}
+
+.metric-card,
+.record-card,
+.detail-card,
+.context-card,
+.event-card,
+.validator-card,
+.version-card,
+.stage-card {
+ padding: 16px;
+}
+
+.metric-card {
+ background: linear-gradient(180deg, #ffffff 0%, #f9fbff 100%);
+}
+
+.metric-value,
+.mini-value {
+ margin-top: 8px;
+ font-weight: 700;
+}
+
+.metric-value {
+ font-size: 1.8rem;
+ letter-spacing: -0.04em;
+}
+
+.mini-value {
+ font-size: 1rem;
+ line-height: 1.45;
+ letter-spacing: -0.01em;
+ overflow-wrap: anywhere;
+}
+
+.metric-note {
+ margin-top: 10px;
+ color: var(--muted);
+ line-height: 1.55;
+}
+
+.record-card {
+ cursor: pointer;
+ transition: border-color 140ms ease, transform 140ms ease, box-shadow 140ms ease;
+}
+
+.record-card:hover {
+ transform: translateY(-1px);
+ border-color: rgba(9, 105, 218, 0.25);
+ box-shadow: var(--shadow-soft);
+}
+
+.record-card.active {
+ border-color: rgba(9, 105, 218, 0.34);
+ background: linear-gradient(180deg, #ffffff 0%, #f4f8ff 100%);
+}
+
+.card-head,
+.detail-head,
+.headline-row,
+.context-row,
+.event-row,
+.turn-head,
+.validator-head,
+.version-head {
+ display: flex;
+ justify-content: space-between;
+ gap: 12px;
+ align-items: flex-start;
+ flex-wrap: wrap;
+}
+
+.card-head > *,
+.detail-head > *,
+.headline-row > *,
+.context-row > *,
+.event-row > *,
+.turn-head > *,
+.validator-head > *,
+.version-head > * {
+ min-width: 0;
+}
+
+.card-head > div,
+.detail-head > div,
+.headline-row > div,
+.event-row > div,
+.validator-head > div,
+.version-head > div {
+ flex: 1 1 220px;
+}
+
+.card-copy {
+ margin-top: 10px;
+ color: var(--muted);
+ line-height: 1.55;
+ overflow-wrap: anywhere;
+}
+
+.detail-head {
+ margin-bottom: 14px;
+}
+
+.mini-grid {
+ grid-template-columns: repeat(4, minmax(0, 1fr));
+}
+
+.mini-card {
+ padding: 14px;
+}
+
+.tag,
+.status-badge {
+ display: inline-flex;
+ align-items: center;
+ gap: 6px;
+ padding: 5px 9px;
+ border-radius: 999px;
+ border: 1px solid var(--line);
+ background: #fff;
+ color: var(--muted);
+ font-size: 0.82rem;
+ white-space: normal;
+ max-width: 100%;
+ overflow-wrap: anywhere;
+ text-align: left;
+ line-height: 1.35;
+}
+
+.chip-row > *,
+.action-row > *,
+.status-row > *,
+.selector-row > * {
+ min-width: 0;
+ max-width: 100%;
+}
+
+.status-badge.published,
+.tone-published {
+ border-color: rgba(26, 127, 55, 0.22);
+ background: var(--green-soft);
+ color: var(--green);
+}
+
+.status-badge.pending,
+.tone-pending {
+ border-color: rgba(154, 103, 0, 0.24);
+ background: var(--amber-soft);
+ color: var(--amber);
+}
+
+.status-badge.rejected,
+.tone-rejected {
+ border-color: rgba(207, 34, 46, 0.22);
+ background: var(--red-soft);
+ color: var(--red);
+}
+
+.status-badge.neutral,
+.tone-neutral {
+ border-color: rgba(9, 105, 218, 0.16);
+ background: var(--blue-soft);
+ color: var(--blue);
+}
+
+.sidebar-card {
+ padding: 14px;
+}
+
+.kv-key {
+ color: var(--muted);
+ font-size: 0.88rem;
+}
+
+.kv-value {
+ color: var(--ink);
+ font-size: 0.92rem;
+ text-align: right;
+ overflow-wrap: anywhere;
+}
+
+.jump-card {
+ width: 100%;
+ padding: 14px 15px;
+ text-align: left;
+}
+
+.jump-card strong,
+.nav-title,
+.nav-meta,
+.metric-note,
+.soft-copy,
+.empty-state,
+.stage-note,
+.stage-title,
+.panel-copy,
+.hero-copy,
+.sidebar-copy,
+.kv-key,
+.kv-value,
+.turn-label,
+.field-label,
+.card-kicker,
+.metric-label,
+.status-caption,
+h1,
+h2,
+h3,
+h4 {
+ overflow-wrap: anywhere;
+}
+
+.event-card {
+ display: grid;
+ gap: 10px;
+}
+
+.event-time {
+ color: var(--muted);
+ font-size: 0.88rem;
+}
+
+.panel-scroll {
+ max-height: calc(100vh - 260px);
+ overflow: auto;
+}
+
+.empty-state {
+ padding: 16px;
+ border: 1px dashed var(--line-strong);
+ border-radius: var(--radius-md);
+ background: var(--slate-soft);
+}
+
+.stage-grid {
+ grid-template-columns: repeat(4, minmax(0, 1fr));
+}
+
+.stage-card {
+ min-height: 132px;
+}
+
+.stage-card.current {
+ border-color: rgba(9, 105, 218, 0.28);
+ box-shadow: inset 0 0 0 1px rgba(9, 105, 218, 0.08);
+}
+
+.stage-card.done {
+ background: linear-gradient(180deg, #ffffff 0%, #f0faf3 100%);
+}
+
+.stage-card.pending {
+ background: linear-gradient(180deg, #ffffff 0%, #fffced 100%);
+}
+
+.stage-card.blocked {
+ background: linear-gradient(180deg, #ffffff 0%, #fff0ef 100%);
+}
+
+.stage-title {
+ margin-top: 12px;
+ font-weight: 700;
+}
+
+.stage-note {
+ margin-top: 10px;
+ color: var(--muted);
+ line-height: 1.5;
+}
+
+.doc-grid {
+ grid-template-columns: repeat(2, minmax(0, 1fr));
+}
+
+.doc-card {
+ overflow: hidden;
+}
+
+.doc-head {
+ padding: 14px 14px 0;
+}
+
+.doc-card pre {
+ margin: 14px;
+}
+
+.validator-grid,
+.version-grid {
+ display: grid;
+ gap: 12px;
+}
+
+.version-grid {
+ grid-template-columns: repeat(2, minmax(0, 1fr));
+}
+
+.version-card.current {
+ border-color: rgba(26, 127, 55, 0.26);
+}
+
+.rule-list {
+ margin: 0;
+ padding-left: 18px;
+ color: var(--muted);
+ line-height: 1.6;
+}
+
+.selector-row label {
+ flex: 1 1 220px;
+}
+
+.version-stack {
+ margin-top: 14px;
+}
+
+.version-pills {
+ display: flex;
+ flex-wrap: wrap;
+ gap: 8px;
+}
+
+.version-pill {
+ max-width: 100%;
+ white-space: normal;
+ text-align: left;
+ line-height: 1.35;
+}
+
+.version-pill.active {
+ border-color: rgba(9, 105, 218, 0.26);
+ background: var(--blue-soft);
+ color: var(--blue);
+}
+
+.turn-card {
+ padding: 14px;
+}
+
+.turn-block {
+ margin-top: 12px;
+}
+
+.turn-label {
+ display: inline-block;
+ margin-bottom: 8px;
+ font-weight: 600;
+ color: var(--muted);
+}
+
+.mono {
+ font-family: ui-monospace, "SFMono-Regular", Menlo, Consolas, monospace;
+}
+
+.hidden {
+ display: none !important;
+}
+
+body[data-loading="true"] button[data-op] {
+ opacity: 0.6;
+}
+
+@media (max-width: 1220px) {
+ .metric-grid,
+ .mini-grid {
+ grid-template-columns: repeat(2, minmax(0, 1fr));
+ }
+
+ .stage-grid,
+ .version-grid,
+ .doc-grid,
+ .two-column {
+ grid-template-columns: 1fr;
+ }
+}
+
+@media (max-width: 960px) {
+ .app-shell {
+ grid-template-columns: 1fr;
+ }
+
+ .sidebar {
+ position: static;
+ min-height: auto;
+ border-right: 0;
+ border-bottom: 1px solid var(--line);
+ }
+
+ .main-shell {
+ padding: 20px 18px 28px;
+ }
+
+ .page-hero,
+ .panel-head {
+ flex-direction: column;
+ }
+
+ .split-layout {
+ grid-template-columns: 1fr;
+ }
+
+ .panel-scroll {
+ max-height: none;
+ }
+}
+
+@media (max-width: 640px) {
+ .metric-grid,
+ .mini-grid {
+ grid-template-columns: 1fr;
+ }
+
+ .page-hero,
+ .brand-block,
+ .panel-body,
+ .detail-shell,
+ .metric-grid,
+ .panel-head {
+ padding-left: 16px;
+ padding-right: 16px;
+ }
+
+ .record-card,
+ .detail-card,
+ .context-card,
+ .event-card,
+ .validator-card,
+ .version-card,
+ .stage-card,
+ .mini-card {
+ padding: 14px;
+ }
+}
diff --git a/skillclaw/dashboard_ingest.py b/skillclaw/dashboard_ingest.py
new file mode 100644
index 0000000..87bd479
--- /dev/null
+++ b/skillclaw/dashboard_ingest.py
@@ -0,0 +1,1267 @@
+"""
+Snapshot builder for the SkillClaw dashboard: gathers local skills, shared-hub
+state, validation progress, and recorded sessions into one snapshot.
+"""
+
+from __future__ import annotations
+
+import hashlib
+import json
+import logging
+import re
+from collections import defaultdict
+from datetime import datetime, timezone
+from pathlib import Path
+from typing import Any
+
+import yaml
+
+from evolve_server.core.skill_registry import SkillIDRegistry
+from evolve_server.core.utils import build_skill_md
+
+from .config import SkillClawConfig
+from .skill_hub import SkillHub
+from .validation_store import ValidationStore
+
+logger = logging.getLogger(__name__)
+
+_CORE_FRONTMATTER_KEYS = {"name", "description", "metadata", "category"}
+
+
+def _utc_now_iso() -> str:
+ return datetime.now(timezone.utc).isoformat()
+
+
+def _stable_skill_id(name: str) -> str:
+ return hashlib.sha256(name.encode("utf-8")).hexdigest()[:12]
+
+
+def _parse_iso8601(raw: str) -> datetime | None:
+ value = str(raw or "").strip()
+ if not value:
+ return None
+ try:
+ parsed = datetime.fromisoformat(value.replace("Z", "+00:00"))
+ if parsed.tzinfo is None:
+ parsed = parsed.replace(tzinfo=timezone.utc)
+ return parsed
+ except ValueError:
+ return None
+
+
+def _latest_timestamp(*values: str) -> str:
+ latest: tuple[datetime, str] | None = None
+ for value in values:
+ parsed = _parse_iso8601(value)
+ if parsed is None:
+ continue
+ if latest is None or parsed > latest[0]:
+ latest = (parsed, value)
+ return latest[1] if latest else ""
+
+
+def _read_text(path: Path) -> str:
+ try:
+ return path.read_text(encoding="utf-8")
+ except OSError:
+ return ""
+
+
+def _read_json(path: Path, default: Any) -> Any:
+ try:
+ return json.loads(path.read_text(encoding="utf-8"))
+ except Exception:
+ return default
+
+
+def _hash_bytes(data: bytes) -> str:
+ return hashlib.sha256(data).hexdigest()
+
+
+def _hash_text(text: str) -> str:
+ return _hash_bytes(text.encode("utf-8"))
+
+
+def _compute_file_sha(path: Path) -> str:
+ try:
+ return hashlib.sha256(path.read_bytes()).hexdigest()
+ except OSError:
+ return ""
+
+
+def _truncate(text: str, limit: int = 180) -> str:
+ value = " ".join(str(text or "").split())
+ if len(value) <= limit:
+ return value
+ return value[: max(0, limit - 3)].rstrip() + "..."
+
+
+def _trim_message(text: str, limit: int = 6000) -> str:
+ value = str(text or "").strip()
+ if len(value) <= limit:
+ return value
+ return value[: max(0, limit - 3)].rstrip() + "..."
+
+
+def _normalize_timestamp(raw: str) -> str:
+ value = str(raw or "").strip()
+ if not value:
+ return ""
+
+ parsed = _parse_iso8601(value)
+ if parsed is None:
+ for fmt in ("%Y-%m-%d %H:%M:%S", "%Y-%m-%dT%H:%M:%S"):
+ try:
+ parsed = datetime.strptime(value, fmt).replace(tzinfo=timezone.utc)
+ break
+ except ValueError:
+ continue
+ if parsed is None:
+ return value
+ return parsed.isoformat()
+
+
+def _extract_skill_names(items: Any) -> list[str]:
+ names: set[str] = set()
+ if not isinstance(items, list):
+ return []
+ for item in items:
+ if isinstance(item, dict):
+ raw = item.get("skill_name") or item.get("name") or item.get("skill")
+ else:
+ raw = item
+ name = str(raw or "").strip()
+ if name:
+ names.add(name)
+ return sorted(names)
+
+
+def _extract_message_text(message: Any) -> str:
+ if isinstance(message, str):
+ return message.strip()
+ if not isinstance(message, dict):
+ return ""
+
+ content = message.get("content")
+ if isinstance(content, str):
+ return content.strip()
+ if not isinstance(content, list):
+ return ""
+
+ parts: list[str] = []
+ for item in content:
+ if not isinstance(item, dict):
+ continue
+ if item.get("type") == "text":
+ text = str(item.get("text", "") or "").strip()
+ if text:
+ parts.append(text)
+ return "\n\n".join(parts).strip()
+
+
+def _clean_transcript_text(text: str) -> str:
+    value = str(text or "").strip()
+    # Unwrap a single XML-style wrapper element enclosing the whole string, if
+    # present. (The exact wrapper tag originally matched here was lost; this
+    # generic pattern is a best-effort reconstruction.)
+    wrapped = re.fullmatch(r"\s*<(\w[\w-]*)>(.*?)</\1>\s*", value, flags=re.DOTALL)
+    if wrapped:
+        value = wrapped.group(2).strip()
+    return _trim_message(value)
+
+
+def _guess_category(skills_dir: Path, skill_path: Path) -> str:
+ try:
+ rel_parts = skill_path.resolve().relative_to(skills_dir.resolve()).parts
+ except Exception:
+ return "general"
+ if len(rel_parts) >= 3:
+ return str(rel_parts[0] or "general")
+ return "general"
+
+
+def _parse_skill_document(
+ raw: str,
+ *,
+ fallback_name: str = "",
+ fallback_category: str = "general",
+) -> dict[str, Any]:
+ body = raw.strip()
+ fm: dict[str, Any] = {}
+ if raw.startswith("---"):
+ end_idx = raw.find("\n---", 3)
+ if end_idx != -1:
+ try:
+ parsed = yaml.safe_load(raw[3:end_idx].strip()) or {}
+ if isinstance(parsed, dict):
+ fm = parsed
+ except yaml.YAMLError:
+ fm = {}
+ body = raw[end_idx + 4 :].strip()
+
+ metadata = fm.get("metadata")
+ if not isinstance(metadata, dict):
+ metadata = {}
+ skillclaw_meta = metadata.get("skillclaw")
+ if not isinstance(skillclaw_meta, dict):
+ skillclaw_meta = {}
+
+ category = (
+ str(skillclaw_meta.get("category") or fm.get("category") or fallback_category or "general").strip() or "general"
+ )
+ name = str(fm.get("name") or fallback_name or "").strip()
+ description = str(fm.get("description") or "").strip()
+ extra_frontmatter = {k: v for k, v in fm.items() if k not in _CORE_FRONTMATTER_KEYS}
+
+ return {
+ "name": name,
+ "description": description,
+ "category": category,
+ "metadata": metadata,
+ "extra_frontmatter": extra_frontmatter,
+ "content": body,
+ "skill_md": raw,
+ }
+
+
+def _load_local_skills(config: SkillClawConfig, warnings: list[str]) -> dict[str, dict[str, Any]]:
+ skills_dir = Path(config.skills_dir).expanduser()
+ if not skills_dir.is_dir():
+ return {}
+
+ stats = _read_json(skills_dir / "skill_stats.json", {})
+ if not isinstance(stats, dict):
+ stats = {}
+
+ skills: dict[str, dict[str, Any]] = {}
+ for skill_path in sorted(skills_dir.rglob("SKILL.md")):
+ raw = _read_text(skill_path)
+ if not raw:
+ warnings.append(f"failed to read local skill file: {skill_path}")
+ continue
+ parsed = _parse_skill_document(
+ raw,
+ fallback_name=skill_path.parent.name,
+ fallback_category=_guess_category(skills_dir, skill_path),
+ )
+ name = str(parsed.get("name") or skill_path.parent.name).strip()
+ if not name:
+ continue
+ stat = stats.get(name)
+ if not isinstance(stat, dict):
+ stat = {}
+ mtime = ""
+ try:
+ mtime = datetime.fromtimestamp(skill_path.stat().st_mtime, tz=timezone.utc).isoformat()
+ except OSError:
+ pass
+ local_sha = _compute_file_sha(skill_path)
+
+ skills[name] = {
+ "name": name,
+ "skill_id": _stable_skill_id(name),
+ "description": str(parsed.get("description") or ""),
+ "category": str(parsed.get("category") or "general"),
+ "metadata": parsed.get("metadata") or {},
+ "extra_frontmatter": parsed.get("extra_frontmatter") or {},
+ "content": str(parsed.get("content") or ""),
+ "skill_md": str(parsed.get("skill_md") or ""),
+ "source": "local",
+ "has_local": True,
+ "has_remote": False,
+ "local_path": str(skill_path),
+ "uploaded_at": "",
+ "uploaded_by": "",
+ "updated_at": mtime,
+ "local_updated_at": mtime,
+ "remote_updated_at": "",
+ "current_version": 1,
+ "current_sha": local_sha,
+ "local_sha": local_sha,
+ "remote_sha": "",
+ "local_inject_count": int(stat.get("inject_count", 0) or 0),
+ "observed_injection_count": 0,
+ "read_count": 0,
+ "modified_count": 0,
+ "session_count": 0,
+ "effectiveness": float(stat.get("effectiveness", 0.0) or 0.0),
+ "positive_count": int(stat.get("positive_count", 0) or 0),
+ "negative_count": int(stat.get("negative_count", 0) or 0),
+ "neutral_count": int(stat.get("neutral_count", 0) or 0),
+ "last_injected_at": str(stat.get("last_injected_at", "") or ""),
+ "stats": stat,
+ "manifest": {},
+ "registry": {},
+ "versions": [],
+ }
+
+ return skills
+
+
+def _skillclaw_state_dir(config: SkillClawConfig) -> Path:
+ return Path(config.skills_dir).expanduser().parent / "state"
+
+
+def _find_transcript_path(session_id: str, transcript_paths: list[str]) -> Path | None:
+ candidates: list[Path] = []
+ for raw_path in transcript_paths:
+ path = Path(str(raw_path)).expanduser()
+ if path.stem == session_id or session_id in path.as_posix():
+ candidates.append(path)
+ if not candidates:
+ return None
+ candidates.sort(key=lambda item: (item.stem != session_id, len(str(item))))
+ return candidates[0]
+
+
+def _parse_cursor_transcript_turns(transcript_path: Path, warnings: list[str]) -> list[dict[str, Any]]:
+ turns: list[dict[str, Any]] = []
+ current_turn: dict[str, Any] | None = None
+
+ try:
+ with transcript_path.open(encoding="utf-8") as handle:
+ for raw_line in handle:
+ if not raw_line.strip():
+ continue
+ try:
+ record = json.loads(raw_line)
+ except json.JSONDecodeError:
+ continue
+
+ role = str(record.get("role", "") or "").strip().lower()
+ if role not in {"user", "assistant"}:
+ continue
+
+ text = _clean_transcript_text(_extract_message_text(record.get("message")))
+ if not text:
+ continue
+
+ if role == "user":
+ if current_turn is not None:
+ turns.append(current_turn)
+ current_turn = {
+ "turn_num": len(turns) + 1,
+ "prompt_text": text,
+ "response_text": "",
+ "reasoning_content": None,
+ "tool_calls": [],
+ "read_skills": [],
+ "modified_skills": [],
+ "tool_results": [],
+ "tool_results_raw": [],
+ "tool_observations": [],
+ "tool_errors": [],
+ "injected_skills": [],
+ "prm_score": None,
+ }
+ continue
+
+ if current_turn is None:
+ continue
+ if current_turn["response_text"]:
+ current_turn["response_text"] += "\n\n" + text
+ else:
+ current_turn["response_text"] = text
+ except OSError as exc:
+ warnings.append(f"failed to read local transcript '{transcript_path}': {exc}")
+ return []
+
+ if current_turn is not None:
+ turns.append(current_turn)
+
+ return turns
+
+
+def _record_dir_candidates(config: SkillClawConfig) -> list[Path]:
+ raw = str(getattr(config, "record_dir", "") or "").strip()
+ if not raw:
+ return []
+
+ record_dir = Path(raw).expanduser()
+ if record_dir.is_absolute():
+ return [record_dir]
+
+ candidates: list[Path] = []
+ seen: set[str] = set()
+ for parent in (Path.cwd(), *Path.cwd().parents):
+ candidate = (parent / record_dir).resolve()
+ key = str(candidate)
+ if key in seen:
+ continue
+ candidates.append(candidate)
+ seen.add(key)
+ return candidates
+
+
+def _resolve_record_dir(config: SkillClawConfig) -> Path | None:
+ candidates = _record_dir_candidates(config)
+ for candidate in candidates:
+ if candidate.is_dir() or (candidate / "conversations.jsonl").exists():
+ return candidate
+ return candidates[0] if candidates else None
+
+
+def _extract_record_instruction(record: dict[str, Any]) -> str:
+ instruction = _clean_transcript_text(str(record.get("instruction_text", "") or ""))
+ if instruction:
+ return instruction
+
+ messages = record.get("messages")
+ if isinstance(messages, list):
+ for message in reversed(messages):
+ if not isinstance(message, dict):
+ continue
+ role = str(message.get("role", "") or "").strip().lower()
+ if role != "user":
+ continue
+ text = _clean_transcript_text(_extract_message_text(message))
+ if text:
+ return text
+
+ return _clean_transcript_text(str(record.get("prompt_text", "") or ""))
+
+
+def _normalize_tool_calls(raw: Any) -> list[dict[str, Any]]:
+ if not isinstance(raw, list):
+ return []
+ return [dict(item) for item in raw if isinstance(item, dict)]
+
+
+def _load_record_prm_scores(record_dir: Path, warnings: list[str]) -> dict[tuple[str, int], float]:
+ prm_scores_path = record_dir / "prm_scores.jsonl"
+ if not prm_scores_path.is_file():
+ return {}
+
+ scores: dict[tuple[str, int], float] = {}
+ try:
+ with prm_scores_path.open(encoding="utf-8") as handle:
+ for line_no, raw_line in enumerate(handle, start=1):
+ if not raw_line.strip():
+ continue
+ try:
+ payload = json.loads(raw_line)
+ except json.JSONDecodeError:
+ warnings.append(f"failed to parse PRM record '{prm_scores_path}' line {line_no}")
+ continue
+ if not isinstance(payload, dict):
+ continue
+ session_id = str(payload.get("session_id", "") or "").strip()
+ if not session_id:
+ continue
+ try:
+ turn_num = int(payload.get("turn", 0) or 0)
+ except (TypeError, ValueError):
+ continue
+ if turn_num <= 0:
+ continue
+ score = payload.get("score")
+ if isinstance(score, (int, float)) and not isinstance(score, bool):
+ scores[(session_id, turn_num)] = float(score)
+ except OSError as exc:
+ warnings.append(f"failed to read PRM records '{prm_scores_path}': {exc}")
+
+ return scores
+
+
+def _load_record_sessions(config: SkillClawConfig, warnings: list[str]) -> list[dict[str, Any]]:
+ record_dir = _resolve_record_dir(config)
+ if record_dir is None:
+ return []
+
+ conversations_path = record_dir / "conversations.jsonl"
+ if not conversations_path.is_file():
+ return []
+
+ prm_scores = _load_record_prm_scores(record_dir, warnings)
+ grouped: dict[str, dict[str, Any]] = {}
+ line_counter = 0
+
+ try:
+ with conversations_path.open(encoding="utf-8") as handle:
+ for raw_line in handle:
+ line_counter += 1
+ if not raw_line.strip():
+ continue
+ try:
+ payload = json.loads(raw_line)
+ except json.JSONDecodeError:
+ warnings.append(f"failed to parse conversation record '{conversations_path}' line {line_counter}")
+ continue
+ if not isinstance(payload, dict):
+ continue
+
+ session_id = str(payload.get("session_id", "") or "").strip()
+ if not session_id:
+ continue
+
+ timestamp = _normalize_timestamp(str(payload.get("timestamp", "") or ""))
+ try:
+ turn_num = int(payload.get("turn", 0) or 0)
+ except (TypeError, ValueError):
+ turn_num = 0
+
+ group = grouped.setdefault(
+ session_id,
+ {
+ "session_id": session_id,
+ "timestamp": "",
+ "turns": {},
+ "line_index": {},
+ "record_path": str(conversations_path),
+ },
+ )
+ group["timestamp"] = _latest_timestamp(group["timestamp"], timestamp) or timestamp
+
+ if turn_num <= 0:
+ turn_num = max(group["turns"].keys(), default=0) + 1
+
+ turn_payload = {
+ "turn_num": turn_num,
+ "prompt_text": _extract_record_instruction(payload),
+ "response_text": _trim_message(str(payload.get("response_text", "") or "")),
+ "reasoning_content": None,
+ "tool_calls": _normalize_tool_calls(payload.get("tool_calls")),
+ "read_skills": [],
+ "modified_skills": [],
+ "tool_results": [],
+ "tool_results_raw": [],
+ "tool_observations": [],
+ "tool_errors": [],
+ "injected_skills": [],
+ "prm_score": prm_scores.get((session_id, turn_num)),
+ }
+
+ existing_line = group["line_index"].get(turn_num, -1)
+ if line_counter >= existing_line:
+ group["turns"][turn_num] = turn_payload
+ group["line_index"][turn_num] = line_counter
+ except OSError as exc:
+ warnings.append(f"failed to read local conversations '{conversations_path}': {exc}")
+ return []
+
+ sessions: list[dict[str, Any]] = []
+ for session_id, group in grouped.items():
+ turns = [group["turns"][turn_num] for turn_num in sorted(group["turns"])]
+ sessions.append(
+ {
+ "session_id": session_id,
+ "timestamp": str(group.get("timestamp", "") or ""),
+ "user_alias": "local",
+ "num_turns": len(turns),
+ "turns": turns,
+ "source": "local",
+ "outcome": "",
+ "outcome_reasons": [],
+ "outcome_reason_count": 0,
+ "active_skills": [],
+ "transcript_path": "",
+ "trajectory_path": "",
+ "record_path": str(group.get("record_path", "") or ""),
+ }
+ )
+
+ return sessions
+
+
+def _merge_session_turns(
+ base_turns: list[dict[str, Any]],
+ overlay_turns: list[dict[str, Any]],
+) -> list[dict[str, Any]]:
+ by_turn: dict[int, dict[str, Any]] = {}
+ for turn in base_turns:
+ if not isinstance(turn, dict):
+ continue
+ try:
+ turn_num = int(turn.get("turn_num", 0) or 0)
+ except (TypeError, ValueError):
+ continue
+ if turn_num <= 0:
+ continue
+ by_turn[turn_num] = dict(turn)
+
+ list_fields = {
+ "tool_calls",
+ "read_skills",
+ "modified_skills",
+ "tool_results",
+ "tool_results_raw",
+ "tool_observations",
+ "tool_errors",
+ "injected_skills",
+ }
+ nullable_fields = {"prm_score", "reasoning_content"}
+
+ for turn in overlay_turns:
+ if not isinstance(turn, dict):
+ continue
+ try:
+ turn_num = int(turn.get("turn_num", 0) or 0)
+ except (TypeError, ValueError):
+ continue
+ if turn_num <= 0:
+ continue
+
+ current = by_turn.get(turn_num)
+ if current is None:
+ by_turn[turn_num] = dict(turn)
+ continue
+
+ merged = dict(current)
+ for key, value in turn.items():
+ if key == "turn_num":
+ continue
+ if key in list_fields:
+ if isinstance(value, list) and value:
+ merged[key] = value
+ else:
+ merged.setdefault(key, current.get(key, []))
+ continue
+ if key in nullable_fields:
+ if value is not None:
+ merged[key] = value
+ continue
+ if str(value or "").strip():
+ merged[key] = value
+ by_turn[turn_num] = merged
+
+ return [by_turn[turn_num] for turn_num in sorted(by_turn)]
+
+
+def _merge_local_sessions(base: dict[str, Any], overlay: dict[str, Any]) -> dict[str, Any]:
+ merged = dict(base)
+ merged_turns = _merge_session_turns(
+ list(base.get("turns") or []),
+ list(overlay.get("turns") or []),
+ )
+ merged["turns"] = merged_turns
+ merged["timestamp"] = _latest_timestamp(
+ str(base.get("timestamp", "") or ""),
+ str(overlay.get("timestamp", "") or ""),
+ ) or str(overlay.get("timestamp", "") or base.get("timestamp", "") or "")
+ merged["user_alias"] = str(overlay.get("user_alias", "") or base.get("user_alias", "") or "local")
+ merged["num_turns"] = max(
+ int(base.get("num_turns", 0) or 0),
+ int(overlay.get("num_turns", 0) or 0),
+ len(merged_turns),
+ )
+ merged["source"] = "local"
+ merged["outcome"] = str(overlay.get("outcome", "") or base.get("outcome", "") or "")
+
+ outcome_reasons = [
+ str(item or "").strip()
+ for item in [*(base.get("outcome_reasons") or []), *(overlay.get("outcome_reasons") or [])]
+ if str(item or "").strip()
+ ]
+ deduped_outcome_reasons = list(dict.fromkeys(outcome_reasons))
+ merged["outcome_reasons"] = deduped_outcome_reasons[:20]
+ merged["outcome_reason_count"] = len(deduped_outcome_reasons)
+ merged["active_skills"] = sorted(
+ {
+ str(item or "").strip()
+ for item in [*(base.get("active_skills") or []), *(overlay.get("active_skills") or [])]
+ if str(item or "").strip()
+ }
+ )
+ merged["transcript_path"] = str(base.get("transcript_path", "") or overlay.get("transcript_path", "") or "")
+ merged["trajectory_path"] = str(base.get("trajectory_path", "") or overlay.get("trajectory_path", "") or "")
+ merged["record_path"] = str(overlay.get("record_path", "") or base.get("record_path", "") or "")
+ return merged
+
+
+def _load_state_sessions(config: SkillClawConfig, warnings: list[str]) -> list[dict[str, Any]]:
+ state_dir = _skillclaw_state_dir(config)
+ conv_offsets_path = state_dir / "conv_offsets.json"
+ trajectories_dir = state_dir / "trajectories"
+
+ conv_offsets = _read_json(conv_offsets_path, {})
+ transcript_paths = list(conv_offsets.keys()) if isinstance(conv_offsets, dict) else []
+
+ trajectories: dict[str, dict[str, Any]] = {}
+ if trajectories_dir.is_dir():
+ for trajectory_path in sorted(trajectories_dir.glob("*.json")):
+ payload = _read_json(trajectory_path, {})
+ if not isinstance(payload, dict):
+ continue
+ session_id = str(payload.get("conversation_id", "") or trajectory_path.stem).strip()
+ if not session_id:
+ continue
+ payload["_trajectory_path"] = str(trajectory_path)
+ trajectories[session_id] = payload
+
+ session_ids = set(trajectories.keys())
+ for raw_path in transcript_paths:
+ session_id = Path(str(raw_path)).stem.strip()
+ if session_id:
+ session_ids.add(session_id)
+
+ sessions: list[dict[str, Any]] = []
+ for session_id in sorted(session_ids):
+ trajectory = trajectories.get(session_id, {})
+ transcript_path = _find_transcript_path(session_id, transcript_paths)
+ turns = _parse_cursor_transcript_turns(transcript_path, warnings) if transcript_path else []
+ active_skills = sorted(
+ {str(item or "").strip() for item in (trajectory.get("active_skills") or []) if str(item or "").strip()}
+ )
+ if turns and active_skills:
+ turns[0]["injected_skills"] = active_skills
+ elif active_skills:
+ turns = [
+ {
+ "turn_num": 1,
+ "prompt_text": "(local trajectory imported without transcript)",
+ "response_text": "",
+ "reasoning_content": None,
+ "tool_calls": [],
+ "read_skills": [],
+ "modified_skills": [],
+ "tool_results": [],
+ "tool_results_raw": [],
+ "tool_observations": [],
+ "tool_errors": [],
+ "injected_skills": active_skills,
+ "prm_score": None,
+ }
+ ]
+
+ timestamp = str(trajectory.get("end_time") or trajectory.get("start_time") or "")
+ if not timestamp and transcript_path is not None:
+ try:
+ timestamp = datetime.fromtimestamp(transcript_path.stat().st_mtime, tz=timezone.utc).isoformat()
+ except OSError:
+ timestamp = ""
+
+ outcome_reasons = [
+ str(item or "").strip() for item in (trajectory.get("outcome_reasons") or []) if str(item or "").strip()
+ ]
+ sessions.append(
+ {
+ "session_id": session_id,
+ "timestamp": timestamp,
+ "user_alias": "local",
+ "num_turns": len(turns),
+ "turns": turns,
+ "source": "local",
+ "outcome": str(trajectory.get("outcome", "") or ""),
+ "outcome_reasons": outcome_reasons[:20],
+ "outcome_reason_count": len(outcome_reasons),
+ "active_skills": active_skills,
+ "transcript_path": str(transcript_path) if transcript_path is not None else "",
+ "trajectory_path": str(trajectory.get("_trajectory_path", "") or ""),
+ }
+ )
+
+ sessions.sort(
+ key=lambda item: (
+ str(item.get("timestamp", "") or ""),
+ str(item.get("session_id", "") or ""),
+ ),
+ reverse=True,
+ )
+ return sessions
+
+
+def _load_local_sessions(config: SkillClawConfig, warnings: list[str]) -> list[dict[str, Any]]:
+ sessions_by_id: dict[str, dict[str, Any]] = {}
+
+ for session in _load_state_sessions(config, warnings):
+ session_id = str(session.get("session_id", "") or "")
+ if session_id:
+ sessions_by_id[session_id] = session
+
+ for session in _load_record_sessions(config, warnings):
+ session_id = str(session.get("session_id", "") or "")
+ if not session_id:
+ continue
+ existing = sessions_by_id.get(session_id)
+ if existing is None:
+ sessions_by_id[session_id] = session
+ else:
+ sessions_by_id[session_id] = _merge_local_sessions(existing, session)
+
+ sessions = list(sessions_by_id.values())
+ sessions.sort(
+ key=lambda item: (
+ str(item.get("timestamp", "") or ""),
+ str(item.get("session_id", "") or ""),
+ ),
+ reverse=True,
+ )
+ return sessions
+
+
+def _load_shared_skills(
+ config: SkillClawConfig,
+ warnings: list[str],
+) -> tuple[dict[str, dict[str, Any]], list[dict[str, Any]], list[dict[str, Any]], dict[str, dict[str, Any]]]:
+ if not config.sharing_enabled or not config.dashboard_include_shared:
+ return {}, [], [], {}
+
+ try:
+ hub = SkillHub.from_config(config)
+ except Exception as exc:
+ warnings.append(f"failed to initialize shared storage: {exc}")
+ return {}, [], [], {}
+
+ try:
+ manifest = hub._load_remote_manifest()
+ except Exception as exc:
+ warnings.append(f"failed to load shared manifest: {exc}")
+ manifest = {}
+
+ registry = SkillIDRegistry()
+ try:
+ registry.load_from_oss(hub._bucket, hub._prefix())
+ except Exception as exc:
+ warnings.append(f"failed to load shared registry: {exc}")
+
+ registry_entries = registry.all_entries()
+ skills: dict[str, dict[str, Any]] = {}
+ candidate_docs_by_skill: dict[str, dict[str, str]] = defaultdict(dict)
+ validation_jobs: list[dict[str, Any]] = []
+
+ try:
+ validation_store = ValidationStore.from_config(config)
+ for job in validation_store.list_jobs():
+ if not isinstance(job, dict):
+ continue
+ job_id = str(job.get("job_id", "") or "")
+ results = validation_store.list_results(job_id) if job_id else []
+ decision = validation_store.load_decision(job_id) if job_id else None
+ accepted_count = sum(1 for item in results if isinstance(item, dict) and item.get("accepted") is True)
+ rejected_count = sum(1 for item in results if isinstance(item, dict) and item.get("accepted") is not True)
+ score_values = [
+ float(item["score"])
+ for item in results
+ if isinstance(item, dict)
+ and isinstance(item.get("score"), (int, float))
+ and not isinstance(item.get("score"), bool)
+ ]
+ mean_score = round(sum(score_values) / len(score_values), 3) if score_values else None
+ decision_status = ""
+ if isinstance(decision, dict):
+ decision_status = str(decision.get("status", "") or "")
+ if decision_status:
+ status = decision_status
+ elif not results:
+ status = "pending"
+ else:
+ status = "review"
+
+ candidate_skill = job.get("candidate_skill")
+ candidate_name = ""
+ if isinstance(candidate_skill, dict):
+ candidate_name = str(candidate_skill.get("name", "") or "")
+ if candidate_name:
+ try:
+ candidate_md = build_skill_md(candidate_skill)
+ except Exception:
+ candidate_md = ""
+ if candidate_md:
+ candidate_docs_by_skill[candidate_name][_hash_text(candidate_md)] = candidate_md
+
+ validation_jobs.append(
+ {
+ "job_id": job_id,
+ "created_at": str(job.get("created_at", "") or ""),
+ "skill_name": str(job.get("candidate_skill_name", "") or candidate_name),
+ "proposed_action": str(job.get("proposed_action", "") or ""),
+ "status": status,
+ "result_count": len(results),
+ "accepted_count": accepted_count,
+ "rejected_count": rejected_count,
+ "mean_score": mean_score,
+ "job": job,
+ "results": results,
+ "decision": decision or {},
+ }
+ )
+ except Exception as exc:
+ warnings.append(f"failed to load validation jobs: {exc}")
+
+ for name, record in manifest.items():
+ raw = ""
+ try:
+ raw = hub._bucket.get_object(hub._skill_key(name)).read().decode("utf-8")
+ except Exception as exc:
+ warnings.append(f"failed to fetch shared skill '{name}': {exc}")
+ parsed = _parse_skill_document(
+ raw,
+ fallback_name=name,
+ fallback_category=str(record.get("category", "general") or "general"),
+ )
+ registry_entry = registry_entries.get(name)
+ if not isinstance(registry_entry, dict):
+ registry_entry = {}
+ history = registry_entry.get("history")
+ if not isinstance(history, list):
+ history = []
+ enriched_history: list[dict[str, Any]] = []
+ current_sha = str(registry_entry.get("content_sha") or record.get("sha256") or (_hash_text(raw) if raw else ""))
+ current_version = int(registry_entry.get("version", 0) or 0)
+ if current_version <= 0 and current_sha:
+ current_version = 1
+ history_latest = ""
+ for item in history:
+ if isinstance(item, dict):
+ version_entry = dict(item)
+ content_sha = str(version_entry.get("content_sha", "") or "")
+ snapshot_md = ""
+ if content_sha:
+ snapshot_md = candidate_docs_by_skill.get(name, {}).get(content_sha, "")
+ if not snapshot_md and content_sha and raw and content_sha == current_sha:
+ snapshot_md = raw
+ if snapshot_md:
+ parsed_snapshot = _parse_skill_document(
+ snapshot_md,
+ fallback_name=name,
+ fallback_category=str(record.get("category", "general") or "general"),
+ )
+ version_entry["skill_md"] = snapshot_md
+ version_entry["content"] = str(parsed_snapshot.get("content") or "")
+ enriched_history.append(version_entry)
+ history_latest = _latest_timestamp(history_latest, str(version_entry.get("timestamp", "") or ""))
+
+ skills[name] = {
+ "name": name,
+ "skill_id": str(registry_entry.get("skill_id") or _stable_skill_id(name)),
+ "description": str(parsed.get("description") or record.get("description") or ""),
+ "category": str(parsed.get("category") or record.get("category") or "general"),
+ "metadata": parsed.get("metadata") or {},
+ "extra_frontmatter": parsed.get("extra_frontmatter") or {},
+ "content": str(parsed.get("content") or ""),
+ "skill_md": str(parsed.get("skill_md") or ""),
+ "source": "shared",
+ "has_local": False,
+ "has_remote": True,
+ "local_path": "",
+ "uploaded_at": str(record.get("uploaded_at", "") or ""),
+ "uploaded_by": str(record.get("uploaded_by", "") or ""),
+ "updated_at": _latest_timestamp(
+ str(record.get("uploaded_at", "") or ""),
+ history_latest,
+ ),
+ "local_updated_at": "",
+ "remote_updated_at": _latest_timestamp(
+ str(record.get("uploaded_at", "") or ""),
+ history_latest,
+ ),
+ "current_version": current_version,
+ "current_sha": current_sha,
+ "local_sha": "",
+ "remote_sha": current_sha,
+ "local_inject_count": 0,
+ "observed_injection_count": 0,
+ "read_count": 0,
+ "modified_count": 0,
+ "session_count": 0,
+ "effectiveness": 0.0,
+ "positive_count": 0,
+ "negative_count": 0,
+ "neutral_count": 0,
+ "last_injected_at": "",
+ "stats": {},
+ "manifest": record,
+ "registry": registry_entry,
+ "versions": enriched_history,
+ }
+
+ sessions: list[dict[str, Any]] = []
+ try:
+ prefix = f"{hub._prefix()}sessions/"
+ for obj in hub._bucket.iter_objects(prefix=prefix):
+ if not str(obj.key).endswith(".json"):
+ continue
+ try:
+ payload = json.loads(hub._bucket.get_object(obj.key).read().decode("utf-8"))
+ if isinstance(payload, dict):
+ sessions.append(payload)
+ except Exception as exc:
+ warnings.append(f"failed to load session object '{obj.key}': {exc}")
+ except Exception as exc:
+ warnings.append(f"failed to list shared sessions: {exc}")
+
+ sessions.sort(
+ key=lambda item: (
+ str(item.get("timestamp", "") or ""),
+ str(item.get("session_id", "") or ""),
+ ),
+ reverse=True,
+ )
+ validation_jobs.sort(
+ key=lambda item: (
+ str(item.get("created_at", "") or ""),
+ str(item.get("job_id", "") or ""),
+ ),
+ reverse=True,
+ )
+ return skills, sessions, validation_jobs, registry_entries
+
+
+def build_dashboard_snapshot(config: SkillClawConfig) -> dict[str, Any]:
+ warnings: list[str] = []
+ local_skills = _load_local_skills(config, warnings)
+ local_sessions = _load_local_sessions(config, warnings)
+ shared_skills, shared_sessions, validation_jobs, registry_entries = _load_shared_skills(config, warnings)
+
+ skills_by_name: dict[str, dict[str, Any]] = {name: dict(skill) for name, skill in local_skills.items()}
+
+ for name, shared_skill in shared_skills.items():
+ current = skills_by_name.get(name)
+ if current is None:
+ skills_by_name[name] = dict(shared_skill)
+ continue
+
+ current["source"] = "both"
+ current["has_remote"] = True
+ current["skill_id"] = str(shared_skill.get("skill_id") or current.get("skill_id") or _stable_skill_id(name))
+ current["uploaded_at"] = str(shared_skill.get("uploaded_at", "") or current.get("uploaded_at", ""))
+ current["uploaded_by"] = str(shared_skill.get("uploaded_by", "") or current.get("uploaded_by", ""))
+ current["updated_at"] = _latest_timestamp(
+ str(current.get("updated_at", "") or ""),
+ str(shared_skill.get("updated_at", "") or ""),
+ )
+ current["local_updated_at"] = str(current.get("local_updated_at", "") or current.get("updated_at", "") or "")
+ current["local_sha"] = str(current.get("local_sha", "") or current.get("current_sha", "") or "")
+ current["remote_updated_at"] = str(
+ shared_skill.get("remote_updated_at", "") or shared_skill.get("updated_at", "") or ""
+ )
+ current["remote_sha"] = str(shared_skill.get("remote_sha", "") or shared_skill.get("current_sha", "") or "")
+ current["current_version"] = int(
+ shared_skill.get("current_version", 0) or current.get("current_version", 0) or 0
+ )
+ current["current_sha"] = str(shared_skill.get("current_sha", "") or current.get("current_sha", ""))
+ current["manifest"] = shared_skill.get("manifest") or {}
+ current["registry"] = shared_skill.get("registry") or {}
+ current["versions"] = list(shared_skill.get("versions") or [])
+ current["remote_skill_md"] = shared_skill.get("skill_md", "")
+ current["remote_content"] = shared_skill.get("content", "")
+ if not current.get("description"):
+ current["description"] = str(shared_skill.get("description", "") or "")
+ if not current.get("metadata"):
+ current["metadata"] = shared_skill.get("metadata") or {}
+ if not current.get("extra_frontmatter"):
+ current["extra_frontmatter"] = shared_skill.get("extra_frontmatter") or {}
+
+ usage_by_name: dict[str, dict[str, Any]] = defaultdict(
+ lambda: {
+ "observed_injection_count": 0,
+ "read_count": 0,
+ "modified_count": 0,
+ "session_ids": set(),
+ }
+ )
+ link_counts: dict[tuple[str, str, str], int] = defaultdict(int)
+ session_summaries: list[dict[str, Any]] = []
+
+ sessions_by_id: dict[str, dict[str, Any]] = {}
+ for session in local_sessions + shared_sessions:
+ session_id = str(session.get("session_id", "") or "")
+ if not session_id:
+ continue
+ existing = sessions_by_id.get(session_id)
+ if existing is None:
+ sessions_by_id[session_id] = session
+ continue
+ if str(session.get("source", "") or "") == "shared":
+ sessions_by_id[session_id] = session
+
+ for session in sessions_by_id.values():
+ session_id = str(session.get("session_id", "") or "")
+ if not session_id:
+ continue
+ turns = session.get("turns")
+ if not isinstance(turns, list):
+ turns = []
+ prompt_preview = ""
+ response_preview = ""
+ prm_scores: list[float] = []
+ session_skill_names: set[str] = set()
+ injected_names_all: set[str] = set()
+ read_names_all: set[str] = set()
+ modified_names_all: set[str] = set()
+
+ for turn in turns:
+ if not isinstance(turn, dict):
+ continue
+ if not prompt_preview:
+ prompt_preview = _truncate(str(turn.get("prompt_text", "") or ""))
+ if not response_preview:
+ response_preview = _truncate(str(turn.get("response_text", "") or ""))
+
+ prm_score = turn.get("prm_score")
+ if isinstance(prm_score, (int, float)) and not isinstance(prm_score, bool):
+ prm_scores.append(float(prm_score))
+
+ injected_names = _extract_skill_names(turn.get("injected_skills"))
+ read_names = _extract_skill_names(turn.get("read_skills"))
+ modified_names = _extract_skill_names(turn.get("modified_skills"))
+
+ injected_names_all.update(injected_names)
+ read_names_all.update(read_names)
+ modified_names_all.update(modified_names)
+ session_skill_names.update(injected_names)
+ session_skill_names.update(read_names)
+ session_skill_names.update(modified_names)
+
+ for name in injected_names:
+ usage = usage_by_name[name]
+ usage["observed_injection_count"] += 1
+ usage["session_ids"].add(session_id)
+ link_counts[(session_id, name, "injected")] += 1
+ for name in read_names:
+ usage = usage_by_name[name]
+ usage["read_count"] += 1
+ usage["session_ids"].add(session_id)
+ link_counts[(session_id, name, "read")] += 1
+ for name in modified_names:
+ usage = usage_by_name[name]
+ usage["modified_count"] += 1
+ usage["session_ids"].add(session_id)
+ link_counts[(session_id, name, "modified")] += 1
+
+ avg_prm_score = round(sum(prm_scores) / len(prm_scores), 3) if prm_scores else None
+ session_summaries.append(
+ {
+ "session_id": session_id,
+ "timestamp": str(session.get("timestamp", "") or ""),
+ "user_alias": str(session.get("user_alias", "") or ""),
+ "num_turns": int(session.get("num_turns", len(turns)) or len(turns)),
+ "avg_prm_score": avg_prm_score,
+ "source": str(session.get("source", "") or ""),
+ "outcome": str(session.get("outcome", "") or ""),
+ "outcome_reasons": list(session.get("outcome_reasons") or []),
+ "outcome_reason_count": int(session.get("outcome_reason_count", 0) or 0),
+ "skill_names": sorted(session_skill_names),
+ "injected_skills": sorted(injected_names_all),
+ "read_skills": sorted(read_names_all),
+ "modified_skills": sorted(modified_names_all),
+ "prompt_preview": prompt_preview,
+ "response_preview": response_preview,
+ "turns": turns,
+ }
+ )
+
+ for name, usage in usage_by_name.items():
+ skill = skills_by_name.get(name)
+ if skill is None:
+ registry_entry = registry_entries.get(name)
+ if not isinstance(registry_entry, dict):
+ registry_entry = {}
+ skill = {
+ "name": name,
+ "skill_id": str(registry_entry.get("skill_id") or _stable_skill_id(name)),
+ "description": "",
+ "category": "general",
+ "metadata": {},
+ "extra_frontmatter": {},
+ "content": "",
+ "skill_md": "",
+ "source": "observed",
+ "has_local": False,
+ "has_remote": False,
+ "local_path": "",
+ "uploaded_at": "",
+ "uploaded_by": "",
+ "updated_at": "",
+ "local_updated_at": "",
+ "remote_updated_at": "",
+ "current_version": int(registry_entry.get("version", 0) or 0),
+ "current_sha": str(registry_entry.get("content_sha", "") or ""),
+ "local_sha": "",
+ "remote_sha": str(registry_entry.get("content_sha", "") or ""),
+ "local_inject_count": 0,
+ "observed_injection_count": 0,
+ "read_count": 0,
+ "modified_count": 0,
+ "session_count": 0,
+ "effectiveness": 0.0,
+ "positive_count": 0,
+ "negative_count": 0,
+ "neutral_count": 0,
+ "last_injected_at": "",
+ "stats": {},
+ "manifest": {},
+ "registry": registry_entry,
+ "versions": (
+ list(registry_entry.get("history") or []) if isinstance(registry_entry.get("history"), list) else []
+ ),
+ }
+ skills_by_name[name] = skill
+
+ skill["observed_injection_count"] = int(usage["observed_injection_count"])
+ skill["read_count"] = int(usage["read_count"])
+ skill["modified_count"] = int(usage["modified_count"])
+ skill["session_count"] = len(usage["session_ids"])
+
+ normalized_skills: list[dict[str, Any]] = []
+ for name in sorted(skills_by_name):
+ skill = skills_by_name[name]
+ versions = [item for item in (skill.get("versions") or []) if isinstance(item, dict)]
+ if not versions and skill.get("current_sha"):
+ versions = [
+ {
+ "version": int(skill.get("current_version", 0) or 1),
+ "content_sha": str(skill.get("current_sha", "") or ""),
+ "timestamp": str(
+ skill.get("updated_at") or skill.get("uploaded_at") or skill.get("last_injected_at") or ""
+ ),
+ "action": "snapshot",
+ "skill_md": str(skill.get("remote_skill_md") or skill.get("skill_md") or ""),
+ "content": str(skill.get("remote_content") or skill.get("content") or ""),
+ }
+ ]
+ skill["versions"] = versions
+ if not skill.get("updated_at"):
+ skill["updated_at"] = _latest_timestamp(
+ str(skill.get("uploaded_at", "") or ""),
+ str(skill.get("last_injected_at", "") or ""),
+ )
+ normalized_skills.append(skill)
+
+ normalized_skills.sort(
+ key=lambda item: (
+ -int(item.get("session_count", 0) or 0),
+ -int(item.get("observed_injection_count", 0) or 0),
+ -int(item.get("local_inject_count", 0) or 0),
+ str(item.get("name", "") or ""),
+ )
+ )
+ session_summaries.sort(
+ key=lambda item: (
+ str(item.get("timestamp", "") or ""),
+ str(item.get("session_id", "") or ""),
+ ),
+ reverse=True,
+ )
+
+ skill_id_by_name = {
+ str(skill.get("name", "") or ""): str(skill.get("skill_id", "") or "")
+ for skill in normalized_skills
+ if str(skill.get("name", "") or "")
+ }
+ session_skill_links = [
+ {
+ "session_id": session_id,
+ "skill_id": skill_id_by_name.get(skill_name, _stable_skill_id(skill_name)),
+ "skill_name": skill_name,
+ "relation": relation,
+ "count": count,
+ }
+ for (session_id, skill_name, relation), count in sorted(link_counts.items())
+ ]
+
+ return {
+ "generated_at": _utc_now_iso(),
+ "meta": {
+ "warnings": warnings,
+ "sharing_enabled": bool(config.sharing_enabled),
+ "dashboard_include_shared": bool(config.dashboard_include_shared),
+ "sharing_backend": str(config.sharing_backend or ""),
+ "sharing_group_id": str(config.sharing_group_id or "default"),
+ "sharing_local_root": str(config.sharing_local_root or ""),
+ "sharing_user_alias": str(config.sharing_user_alias or ""),
+ "skills_dir": str(config.skills_dir or ""),
+ "dashboard_db_path": str(config.dashboard_db_path or ""),
+ "dashboard_evolve_server_url": str(config.dashboard_evolve_server_url or ""),
+ },
+ "skills": normalized_skills,
+ "sessions": session_summaries,
+ "session_skill_links": session_skill_links,
+ "validation_jobs": validation_jobs,
+ }
diff --git a/skillclaw/dashboard_server.py b/skillclaw/dashboard_server.py
new file mode 100644
index 0000000..e1cfe00
--- /dev/null
+++ b/skillclaw/dashboard_server.py
@@ -0,0 +1,649 @@
+"""
+FastAPI dashboard service for SkillClaw.
+"""
+
+from __future__ import annotations
+
+import json
+import logging
+from contextlib import asynccontextmanager
+from pathlib import Path
+from typing import Any
+
+import httpx
+import uvicorn
+from fastapi import Body, FastAPI, HTTPException
+from fastapi.responses import FileResponse
+from fastapi.staticfiles import StaticFiles
+
+from .config import SkillClawConfig
+from .dashboard_ingest import build_dashboard_snapshot
+from .dashboard_store import DashboardStore
+from .skill_hub import SkillHub
+
+logger = logging.getLogger(__name__)
+
+
+def _assets_dir() -> Path:
+ return Path(__file__).with_name("dashboard_assets")
+
+
+def _build_skill_filter(config: SkillClawConfig, *, no_filter: bool = False) -> dict[str, Any] | None:
+ if no_filter:
+ return None
+ stats_path = Path(config.skills_dir).expanduser() / "skill_stats.json"
+ if not stats_path.is_file():
+ return None
+ try:
+ stats = json.loads(stats_path.read_text(encoding="utf-8"))
+ except Exception:
+ return None
+ if not isinstance(stats, dict):
+ return None
+ return {
+ "stats": stats,
+ "min_injections": int(config.sharing_push_min_injections or 0),
+ "min_effectiveness": float(config.sharing_push_min_effectiveness or 0.0),
+ }
+
+
+def _sharing_backend(config: SkillClawConfig) -> str:
+ backend = str(config.sharing_backend or "").strip().lower()
+ if backend:
+ return backend
+ if config.sharing_local_root:
+ return "local"
+ if config.sharing_bucket or config.sharing_endpoint:
+ return "s3"
+ return ""
+
+
+def _sharing_target(config: SkillClawConfig) -> str:
+ backend = _sharing_backend(config)
+ if backend == "local":
+ return f"local:{config.sharing_local_root}/{config.sharing_group_id}"
+ if config.sharing_bucket:
+ return f"{backend}:{config.sharing_bucket}/{config.sharing_group_id}"
+ return f"{backend}:{config.sharing_group_id}"
+
+
+def _require_sharing_hub(config: SkillClawConfig) -> SkillHub:
+ if not config.sharing_enabled:
+ raise ValueError("skill sharing is not enabled in the current config")
+ backend = _sharing_backend(config)
+ if backend == "local" and not config.sharing_local_root:
+ raise ValueError("local sharing backend requires sharing_local_root")
+ if backend == "s3" and not config.sharing_bucket:
+ raise ValueError("s3 sharing backend requires sharing_bucket")
+ if backend == "oss" and (not config.sharing_bucket or not config.sharing_endpoint):
+ raise ValueError("oss sharing backend requires sharing_bucket and sharing_endpoint")
+ if not backend:
+ raise ValueError("sharing backend is not configured")
+ return SkillHub.from_config(config)
+
+
+def _local_sessions_from_snapshot(snapshot: dict[str, Any]) -> list[dict[str, Any]]:
+ return [item for item in (snapshot.get("sessions") or []) if str(item.get("source", "") or "") == "local"]
+
+
+def _normalize_selection(items: Any, *, field_name: str) -> list[str] | None:
+ if items is None:
+ return None
+ if not isinstance(items, list):
+ raise ValueError(f"'{field_name}' must be a list of strings")
+ normalized: list[str] = []
+ seen: set[str] = set()
+ for item in items:
+ value = str(item or "").strip()
+ if not value or value in seen:
+ continue
+ normalized.append(value)
+ seen.add(value)
+ return normalized
+
+
+class DashboardService:
+ """Owns dashboard snapshot sync, queries, and operations."""
+
+ def __init__(self, config: SkillClawConfig) -> None:
+ self.config = config
+ self.store = DashboardStore(config.dashboard_db_path)
+
+ def sync(self) -> dict[str, Any]:
+ snapshot = build_dashboard_snapshot(self.config)
+ summary = self.store.replace_snapshot(snapshot)
+ return {
+ "summary": summary,
+ "overview": self.store.get_overview(),
+ }
+
+ def _embedded_evolve_server(self):
+ from evolve_server.core.config import EvolveServerConfig
+ from evolve_server.engines.workflow import EvolveServer
+
+ from .validation_store import ValidationStore
+
+ evolve_config = EvolveServerConfig.from_skillclaw_config(self.config)
+ try:
+ validation_store = ValidationStore.from_config(self.config)
+ if validation_store.list_jobs():
+ evolve_config.publish_mode = "validated"
+ evolve_config.__post_init__()
+ except Exception:
+ pass
+ return EvolveServer(evolve_config)
+
+ def pull_skills(self, *, skill_names: list[str] | None = None) -> dict[str, Any]:
+ hub = _require_sharing_hub(self.config)
+ selection = list(skill_names or [])
+ if selection:
+ result = hub.pull_skills(
+ self.config.skills_dir,
+ mirror=False,
+ include_names=selection,
+ )
+ else:
+ result = hub.pull_skills(self.config.skills_dir)
+ sync_result = self.sync()
+ return {
+ "operation": "pull",
+ "target": _sharing_target(self.config),
+ "selection": {
+ "mode": "selected" if selection else "all",
+ "requested": selection,
+ "count": len(selection),
+ },
+ "result": result,
+ "sync": sync_result["summary"],
+ }
+
+ def push_skills(self, *, no_filter: bool = False) -> dict[str, Any]:
+ hub = _require_sharing_hub(self.config)
+ result = hub.push_skills(
+ self.config.skills_dir,
+ skill_filter=_build_skill_filter(self.config, no_filter=no_filter),
+ )
+ sync_result = self.sync()
+ return {
+ "operation": "push",
+ "target": _sharing_target(self.config),
+ "result": result,
+ "sync": sync_result["summary"],
+ }
+
+ def sync_skills(self) -> dict[str, Any]:
+ hub = _require_sharing_hub(self.config)
+ result = hub.sync_skills(self.config.skills_dir)
+ sync_result = self.sync()
+ return {
+ "operation": "sync",
+ "target": _sharing_target(self.config),
+ "result": result,
+ "sync": sync_result["summary"],
+ }
+
+ def export_local_sessions(self, *, session_ids: list[str] | None = None) -> dict[str, Any]:
+ hub = _require_sharing_hub(self.config)
+ snapshot = build_dashboard_snapshot(self.config)
+ sessions = _local_sessions_from_snapshot(snapshot)
+ total_local_sessions = len(sessions)
+ session_lookup = {
+ str(item.get("session_id", "") or ""): item for item in sessions if str(item.get("session_id", "") or "")
+ }
+ selection = list(session_ids or [])
+ missing_ids: list[str] = []
+ if selection:
+ selected_sessions: list[dict[str, Any]] = []
+ for session_id in selection:
+ session = session_lookup.get(session_id)
+ if session is None:
+ missing_ids.append(session_id)
+ continue
+ selected_sessions.append(session)
+ sessions = selected_sessions
+
+ exported = 0
+ skipped = 0
+ for session in sessions:
+ session_id = str(session.get("session_id", "") or "")
+ if not session_id:
+ continue
+ payload = {
+ "session_id": session_id,
+ "timestamp": str(session.get("timestamp", "") or ""),
+ "user_alias": str(session.get("user_alias", "") or "local"),
+ "num_turns": int(session.get("num_turns", 0) or 0),
+ "turns": list(session.get("turns") or []),
+ "source": "local-dashboard-export",
+ "outcome": str(session.get("outcome", "") or ""),
+ "outcome_reasons": list(session.get("outcome_reasons") or []),
+ }
+ key = f"{hub._prefix()}sessions/{session_id}.json"
+ content = json.dumps(payload, ensure_ascii=False, sort_keys=True)
+
+ try:
+ existing = hub._bucket.get_object(key).read().decode("utf-8")
+ if existing == content:
+ skipped += 1
+ continue
+ except Exception:
+ pass
+
+ hub._bucket.put_object(key, content.encode("utf-8"))
+ exported += 1
+
+ sync_result = self.sync()
+ return {
+ "operation": "export-local-sessions",
+ "target": _sharing_target(self.config),
+ "selection": {
+ "mode": "selected" if selection else "all",
+ "requested": selection,
+ "count": len(selection),
+ },
+ "result": {
+ "exported": exported,
+ "skipped": skipped,
+ "matched": len(sessions),
+ "requested": len(selection) if selection else len(sessions),
+ "missing": len(missing_ids),
+ "missing_ids": missing_ids,
+ "total_local_sessions": total_local_sessions,
+ },
+ "sync": sync_result["summary"],
+ }
+
+ def activate_skill_version(self, skill_id: str, *, target: str) -> dict[str, Any]:
+ """Write the selected local/shared document snapshot to the skill's local SKILL.md, then re-sync."""
+ skill = self.store.get_skill(skill_id)
+ if not isinstance(skill, dict):
+ raise ValueError("skill not found")
+
+ skill_name = str(skill.get("name", "") or "").strip()
+ if not skill_name:
+ raise ValueError("skill name is missing")
+
+ selected_target = str(target or "").strip()
+ if not selected_target:
+ raise ValueError("'target' is required")
+
+ document = ""
+ label = ""
+ if selected_target == "local-current":
+ document = str(skill.get("skill_md") or skill.get("content") or "").strip()
+ label = "Local current version"
+ elif selected_target == "shared-current":
+ document = str(skill.get("remote_skill_md") or skill.get("remote_content") or "").strip()
+ label = "Shared current version"
+ elif selected_target.startswith("shared-version:"):
+ raw_version = selected_target.split(":", 1)[1].strip()
+ try:
+ version_num = int(raw_version)
+ except ValueError as exc:
+ raise ValueError("invalid shared version target") from exc
+ versions = skill.get("versions") or []
+ version_payload = next(
+ (
+ item
+ for item in versions
+ if isinstance(item, dict) and int(item.get("version", 0) or 0) == version_num
+ ),
+ None,
+ )
+ if not isinstance(version_payload, dict):
+ raise ValueError(f"shared version not found: v{version_num}")
+ document = str(version_payload.get("skill_md") or version_payload.get("content") or "").strip()
+ label = f"Shared v{version_num}"
+ else:
+ raise ValueError(f"unsupported activation target: {selected_target}")
+
+ if not document:
+ raise ValueError("selected version does not include a document snapshot")
+
+ local_path = Path(str(skill.get("local_path", "") or "")).expanduser()
+ if not str(local_path).strip() or local_path.name != "SKILL.md":
+ local_path = Path(self.config.skills_dir).expanduser() / skill_name / "SKILL.md"
+ local_path.parent.mkdir(parents=True, exist_ok=True)
+ local_path.write_text(document.rstrip() + "\n", encoding="utf-8")
+
+ sync_result = self.sync()
+ return {
+ "operation": "activate-skill-version",
+ "skill_id": skill_id,
+ "skill_name": skill_name,
+ "target": selected_target,
+ "label": label,
+ "local_path": str(local_path),
+ "sync": sync_result["summary"],
+ }
+
+ async def submit_validation_review(
+ self,
+ job_id: str,
+ *,
+ accepted: bool,
+ score: float | None = None,
+ notes: str = "",
+ auto_finalize: bool = True,
+ ) -> dict[str, Any]:
+ """Record a manual accept/reject review for a validation job, optionally finalizing via evolve."""
+ if not self.config.sharing_enabled:
+ raise ValueError("skill sharing is not enabled in the current config")
+
+ from .validation_store import ValidationStore
+
+ validation_store = ValidationStore.from_config(self.config)
+ job = validation_store.load_job(job_id)
+ if not isinstance(job, dict):
+ raise ValueError(f"validation job not found: {job_id}")
+
+ raw_alias = str(self.config.sharing_user_alias or "").strip()
+ user_alias = raw_alias or "dashboard-review"
+ normalized_score = score
+ if normalized_score is None:
+ normalized_score = 0.95 if accepted else 0.05
+ normalized_score = max(0.0, min(1.0, float(normalized_score)))
+ note_text = str(notes or "").strip()
+
+ result_payload = {
+ "validator_mode": "manual",
+ "decision": "accept" if accepted else "reject",
+ "accepted": bool(accepted),
+ "score": normalized_score,
+ "threshold": float(job.get("min_score", 0.75) or 0.75),
+ "reason": note_text or f"Manual review submitted by {user_alias}.",
+ "notes": note_text,
+ "review_source": "dashboard",
+ }
+ validation_store.save_result(job_id, user_alias, result_payload)
+
+ response = {
+ "operation": "submit-validation-review",
+ "job_id": job_id,
+ "user_alias": user_alias,
+ "result": result_payload,
+ }
+
+ if auto_finalize and str(self.config.dashboard_evolve_server_url or "").strip():
+ response["finalize"] = await self.trigger_evolve()
+ return response
+
+ sync_result = self.sync()
+ response["sync"] = sync_result["summary"]
+ return response
+
+ async def get_evolve_status(self) -> dict[str, Any]:
+ """Report the configuration and health of the remote or embedded evolve server."""
+ base_url = str(self.config.dashboard_evolve_server_url or "").strip()
+ if not base_url:
+ if not self.config.sharing_enabled:
+ return {
+ "configured": False,
+ "url": "",
+ }
+ try:
+ from evolve_server.storage.oss_helpers import list_session_keys
+
+ server = self._embedded_evolve_server()
+ pending_keys = await server._call_storage(list_session_keys, server._bucket, server._prefix)
+ entries = server._id_registry.all_entries()
+ return {
+ "configured": True,
+ "url": "embedded://local-evolve",
+ "healthy": True,
+ "status": {
+ "running": False,
+ "pending_sessions": len(pending_keys),
+ "registered_skills": len(entries),
+ "skills": {
+ name: {
+ "skill_id": item.get("skill_id", ""),
+ "version": item.get("version", 0),
+ }
+ for name, item in entries.items()
+ },
+ },
+ }
+ except Exception as exc:
+ return {
+ "configured": True,
+ "url": "embedded://local-evolve",
+ "healthy": False,
+ "error": str(exc),
+ }
+ status_url = base_url.rstrip("/") + "/status"
+ try:
+ async with httpx.AsyncClient(timeout=10.0) as client:
+ response = await client.get(status_url)
+ response.raise_for_status()
+ payload = response.json()
+ return {
+ "configured": True,
+ "url": base_url,
+ "healthy": True,
+ "status": payload,
+ }
+ except Exception as exc:
+ return {
+ "configured": True,
+ "url": base_url,
+ "healthy": False,
+ "error": str(exc),
+ }
+
+ async def trigger_evolve(self) -> dict[str, Any]:
+ """Run one evolve pass (remote or embedded), then re-sync the dashboard projection."""
+ base_url = str(self.config.dashboard_evolve_server_url or "").strip()
+ if not base_url:
+ if not self.config.sharing_enabled:
+ raise ValueError("skill sharing is not enabled in the current config")
+
+ server = self._embedded_evolve_server()
+ result = await server.run_once()
+ sync_result = self.sync()
+ return {
+ "operation": "trigger-evolve",
+ "url": "embedded://local-evolve",
+ "result": result,
+ "sync": sync_result["summary"],
+ }
+ trigger_url = base_url.rstrip("/") + "/trigger"
+ async with httpx.AsyncClient(timeout=300.0) as client:
+ response = await client.post(trigger_url)
+ response.raise_for_status()
+ sync_result = self.sync()
+ return {
+ "operation": "trigger-evolve",
+ "url": trigger_url,
+ "result": response.json(),
+ "sync": sync_result["summary"],
+ }
+
+
+def create_dashboard_app(config: SkillClawConfig) -> FastAPI:
+ """Build the FastAPI app that serves the dashboard UI and its JSON API."""
+ service = DashboardService(config)
+ assets_dir = _assets_dir()
+
+ @asynccontextmanager
+ async def lifespan(app: FastAPI):
+ service.store.initialize()
+ if config.dashboard_sync_on_start:
+ try:
+ service.sync()
+ except Exception:
+ logger.exception("[Dashboard] initial sync failed")
+ app.state.dashboard_service = service
+ yield
+
+ app = FastAPI(title="SkillClaw Dashboard", lifespan=lifespan)
+ app.mount("/assets", StaticFiles(directory=assets_dir), name="assets")
+
+ @app.get("/")
+ async def dashboard_index():
+ return FileResponse(assets_dir / "index.html")
+
+ @app.get("/api/v1/health")
+ async def health():
+ return {
+ "status": "ok",
+ "db_path": service.store.db_path,
+ "meta": service.store.get_meta(),
+ }
+
+ @app.get("/api/v1/overview")
+ async def overview():
+ return service.store.get_overview()
+
+ @app.get("/api/v1/skills")
+ async def list_skills(
+ search: str = "",
+ category: str = "",
+ source: str = "",
+ limit: int = 500,
+ ):
+ return {
+ "items": service.store.list_skills(
+ search=search.strip(),
+ category=category.strip(),
+ source=source.strip(),
+ limit=limit,
+ )
+ }
+
+ @app.get("/api/v1/skills/{skill_id}")
+ async def get_skill(skill_id: str):
+ payload = service.store.get_skill(skill_id)
+ if payload is None:
+ raise HTTPException(status_code=404, detail="skill not found")
+ return payload
+
+ @app.post("/api/v1/skills/{skill_id}/activate")
+ async def activate_skill(skill_id: str, payload: dict[str, Any] | None = Body(default=None)):
+ try:
+ body = payload or {}
+ target = str(body.get("target", "") or "").strip()
+ if not target:
+ raise ValueError("'target' is required")
+ return service.activate_skill_version(skill_id, target=target)
+ except Exception as exc:
+ raise HTTPException(status_code=400, detail=str(exc)) from exc
+
+ @app.get("/api/v1/sessions")
+ async def list_sessions(
+ skill_id: str = "",
+ search: str = "",
+ limit: int = 200,
+ ):
+ return {
+ "items": service.store.list_sessions(
+ skill_id=skill_id.strip(),
+ search=search.strip(),
+ limit=limit,
+ )
+ }
+
+ @app.get("/api/v1/sessions/{session_id}")
+ async def get_session(session_id: str):
+ payload = service.store.get_session(session_id)
+ if payload is None:
+ raise HTTPException(status_code=404, detail="session not found")
+ return payload
+
+ @app.get("/api/v1/validation/jobs")
+ async def validation_jobs(status: str = "", limit: int = 200):
+ return {
+ "items": service.store.list_validation_jobs(
+ status=status.strip(),
+ limit=limit,
+ )
+ }
+
+ @app.get("/api/v1/evolve/status")
+ async def evolve_status():
+ return await service.get_evolve_status()
+
+ @app.post("/api/v1/sync")
+ async def sync_projection():
+ return service.sync()
+
+ @app.post("/api/v1/ops/pull")
+ async def pull_skills(payload: dict[str, Any] | None = Body(default=None)):
+ try:
+ body = payload or {}
+ raw_skill_names = body.get("skill_names")
+ skill_names = _normalize_selection(raw_skill_names, field_name="skill_names")
+ if raw_skill_names is not None and not skill_names:
+ raise ValueError("'skill_names' must contain at least one non-empty value")
+ return service.pull_skills(skill_names=skill_names)
+ except Exception as exc:
+ raise HTTPException(status_code=400, detail=str(exc)) from exc
+
+ @app.post("/api/v1/ops/push")
+ async def push_skills(payload: dict[str, Any] | None = Body(default=None)):
+ try:
+ return service.push_skills(no_filter=bool((payload or {}).get("no_filter", False)))
+ except Exception as exc:
+ raise HTTPException(status_code=400, detail=str(exc)) from exc
+
+ @app.post("/api/v1/ops/sync")
+ async def sync_skills():
+ try:
+ return service.sync_skills()
+ except Exception as exc:
+ raise HTTPException(status_code=400, detail=str(exc)) from exc
+
+ @app.post("/api/v1/ops/export-sessions")
+ async def export_sessions(payload: dict[str, Any] | None = Body(default=None)):
+ try:
+ body = payload or {}
+ raw_session_ids = body.get("session_ids")
+ session_ids = _normalize_selection(raw_session_ids, field_name="session_ids")
+ if raw_session_ids is not None and not session_ids:
+ raise ValueError("'session_ids' must contain at least one non-empty value")
+ return service.export_local_sessions(session_ids=session_ids)
+ except Exception as exc:
+ raise HTTPException(status_code=400, detail=str(exc)) from exc
+
+ @app.post("/api/v1/ops/trigger-evolve")
+ async def trigger_evolve():
+ try:
+ return await service.trigger_evolve()
+ except Exception as exc:
+ raise HTTPException(status_code=400, detail=str(exc)) from exc
+
+ @app.post("/api/v1/validation/jobs/{job_id}/review")
+ async def submit_review(job_id: str, payload: dict[str, Any] | None = Body(default=None)):
+ body = payload or {}
+ accepted = body.get("accepted")
+ if not isinstance(accepted, bool):
+ raise HTTPException(status_code=400, detail="'accepted' must be a boolean")
+
+ raw_score = body.get("score")
+ score: float | None = None
+ if raw_score is not None and raw_score != "":
+ try:
+ score = float(raw_score)
+ except (TypeError, ValueError) as exc:
+ raise HTTPException(status_code=400, detail="'score' must be a number in [0, 1]") from exc
+ if score < 0.0 or score > 1.0:
+ raise HTTPException(status_code=400, detail="'score' must be a number in [0, 1]")
+
+ try:
+ return await service.submit_validation_review(
+ job_id,
+ accepted=accepted,
+ score=score,
+ notes=str(body.get("notes", "") or ""),
+ auto_finalize=bool(body.get("auto_finalize", True)),
+ )
+ except Exception as exc:
+ raise HTTPException(status_code=400, detail=str(exc)) from exc
+
+ return app
+
+
+def serve_dashboard(config: SkillClawConfig) -> None:
+ """Run the dashboard HTTP service."""
+ app = create_dashboard_app(config)
+ uvicorn.run(
+ app,
+ host=str(config.dashboard_host or "127.0.0.1"),
+ port=int(config.dashboard_port or 3788),
+ log_level="info",
+ )
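The `export_local_sessions` flow above writes each session as canonical JSON and skips uploads whose remote content is already identical. The same skip-if-unchanged idea can be sketched with only the standard library against a local directory; the names here (`export_sessions`, `out_dir`) are illustrative and not part of SkillClaw's API, which targets a shared bucket instead:

```python
import json
from pathlib import Path


def export_sessions(sessions: list[dict], out_dir: str) -> dict[str, int]:
    """Write each session as canonical JSON, skipping files whose content is unchanged."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    exported = skipped = 0
    for session in sessions:
        session_id = str(session.get("session_id", "") or "")
        if not session_id:
            continue  # sessions without an id are silently dropped, as in the service
        # sort_keys makes the serialization canonical, so equal dicts compare equal as text
        content = json.dumps(session, ensure_ascii=False, sort_keys=True)
        target = out / f"{session_id}.json"
        if target.exists() and target.read_text(encoding="utf-8") == content:
            skipped += 1
            continue
        target.write_text(content, encoding="utf-8")
        exported += 1
    return {"exported": exported, "skipped": skipped}
```

Because the serialization is canonical, a second export of the same data is a pure no-op, which is what makes the dashboard's repeated "export then sync" operations cheap.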
diff --git a/skillclaw/dashboard_store.py b/skillclaw/dashboard_store.py
new file mode 100644
index 0000000..3e109c2
--- /dev/null
+++ b/skillclaw/dashboard_store.py
@@ -0,0 +1,660 @@
+"""
+SQLite-backed projection store for the SkillClaw dashboard.
+"""
+
+from __future__ import annotations
+
+import json
+import sqlite3
+from pathlib import Path
+from typing import Any
+
+
+def _json_dumps(value: Any) -> str:
+ return json.dumps(value, ensure_ascii=False)
+
+
+def _json_loads(raw: str | None, default: Any) -> Any:
+ if not raw:
+ return default
+ try:
+ return json.loads(raw)
+ except json.JSONDecodeError:
+ return default
+
+
+class DashboardStore:
+ """Materialized dashboard snapshot stored in SQLite."""
+
+ def __init__(self, db_path: str) -> None:
+ self.db_path = str(Path(db_path).expanduser())
+
+ def _connect(self) -> sqlite3.Connection:
+ path = Path(self.db_path)
+ path.parent.mkdir(parents=True, exist_ok=True)
+ conn = sqlite3.connect(path, timeout=30)
+ conn.row_factory = sqlite3.Row
+ return conn
+
+ def initialize(self) -> None:
+ with self._connect() as conn:
+ conn.executescript(
+ """
+ PRAGMA journal_mode=WAL;
+
+ CREATE TABLE IF NOT EXISTS meta (
+ key TEXT PRIMARY KEY,
+ value TEXT NOT NULL
+ );
+
+ CREATE TABLE IF NOT EXISTS skills (
+ skill_id TEXT PRIMARY KEY,
+ name TEXT NOT NULL UNIQUE,
+ description TEXT NOT NULL DEFAULT '',
+ category TEXT NOT NULL DEFAULT 'general',
+ source TEXT NOT NULL DEFAULT 'local',
+ has_local INTEGER NOT NULL DEFAULT 0,
+ has_remote INTEGER NOT NULL DEFAULT 0,
+ local_path TEXT NOT NULL DEFAULT '',
+ uploaded_at TEXT NOT NULL DEFAULT '',
+ uploaded_by TEXT NOT NULL DEFAULT '',
+ updated_at TEXT NOT NULL DEFAULT '',
+ current_version INTEGER NOT NULL DEFAULT 0,
+ current_sha TEXT NOT NULL DEFAULT '',
+ local_inject_count INTEGER NOT NULL DEFAULT 0,
+ observed_injection_count INTEGER NOT NULL DEFAULT 0,
+ read_count INTEGER NOT NULL DEFAULT 0,
+ modified_count INTEGER NOT NULL DEFAULT 0,
+ session_count INTEGER NOT NULL DEFAULT 0,
+ effectiveness REAL NOT NULL DEFAULT 0.0,
+ positive_count INTEGER NOT NULL DEFAULT 0,
+ negative_count INTEGER NOT NULL DEFAULT 0,
+ neutral_count INTEGER NOT NULL DEFAULT 0,
+ content TEXT NOT NULL DEFAULT '',
+ raw_json TEXT NOT NULL DEFAULT '{}'
+ );
+
+ CREATE INDEX IF NOT EXISTS idx_skills_name ON skills(name);
+ CREATE INDEX IF NOT EXISTS idx_skills_category ON skills(category);
+ CREATE INDEX IF NOT EXISTS idx_skills_source ON skills(source);
+ CREATE INDEX IF NOT EXISTS idx_skills_sessions
+ ON skills(session_count DESC, observed_injection_count DESC);
+
+ CREATE TABLE IF NOT EXISTS skill_versions (
+ skill_id TEXT NOT NULL,
+ version INTEGER NOT NULL,
+ content_sha TEXT NOT NULL DEFAULT '',
+ action TEXT NOT NULL DEFAULT '',
+ timestamp TEXT NOT NULL DEFAULT '',
+ raw_json TEXT NOT NULL DEFAULT '{}',
+ PRIMARY KEY (skill_id, version)
+ );
+
+ CREATE INDEX IF NOT EXISTS idx_skill_versions_timestamp ON skill_versions(skill_id, timestamp DESC);
+
+ CREATE TABLE IF NOT EXISTS sessions (
+ session_id TEXT PRIMARY KEY,
+ timestamp TEXT NOT NULL DEFAULT '',
+ user_alias TEXT NOT NULL DEFAULT '',
+ num_turns INTEGER NOT NULL DEFAULT 0,
+ avg_prm_score REAL,
+ skill_names_json TEXT NOT NULL DEFAULT '[]',
+ prompt_preview TEXT NOT NULL DEFAULT '',
+ response_preview TEXT NOT NULL DEFAULT '',
+ raw_json TEXT NOT NULL DEFAULT '{}'
+ );
+
+ CREATE INDEX IF NOT EXISTS idx_sessions_timestamp ON sessions(timestamp DESC);
+ CREATE INDEX IF NOT EXISTS idx_sessions_user_alias ON sessions(user_alias);
+
+ CREATE TABLE IF NOT EXISTS session_skill_links (
+ session_id TEXT NOT NULL,
+ skill_id TEXT NOT NULL,
+ skill_name TEXT NOT NULL DEFAULT '',
+ relation TEXT NOT NULL,
+ count INTEGER NOT NULL DEFAULT 0,
+ PRIMARY KEY (session_id, skill_id, relation)
+ );
+
+ CREATE INDEX IF NOT EXISTS idx_session_skill_links_skill ON session_skill_links(skill_id, relation);
+ CREATE INDEX IF NOT EXISTS idx_session_skill_links_session ON session_skill_links(session_id);
+
+ CREATE TABLE IF NOT EXISTS validation_jobs (
+ job_id TEXT PRIMARY KEY,
+ created_at TEXT NOT NULL DEFAULT '',
+ skill_name TEXT NOT NULL DEFAULT '',
+ proposed_action TEXT NOT NULL DEFAULT '',
+ status TEXT NOT NULL DEFAULT '',
+ result_count INTEGER NOT NULL DEFAULT 0,
+ accepted_count INTEGER NOT NULL DEFAULT 0,
+ rejected_count INTEGER NOT NULL DEFAULT 0,
+ mean_score REAL,
+ raw_json TEXT NOT NULL DEFAULT '{}'
+ );
+
+ CREATE INDEX IF NOT EXISTS idx_validation_jobs_status ON validation_jobs(status);
+ CREATE INDEX IF NOT EXISTS idx_validation_jobs_created ON validation_jobs(created_at DESC);
+ """
+ )
+
+ def replace_snapshot(self, snapshot: dict[str, Any]) -> dict[str, Any]:
+ """Rebuild every projection table from a freshly generated snapshot and return a summary."""
+ self.initialize()
+ with self._connect() as conn:
+ conn.execute("DELETE FROM meta")
+ conn.execute("DELETE FROM skill_versions")
+ conn.execute("DELETE FROM session_skill_links")
+ conn.execute("DELETE FROM sessions")
+ conn.execute("DELETE FROM validation_jobs")
+ conn.execute("DELETE FROM skills")
+
+ for skill in snapshot.get("skills") or []:
+ conn.execute(
+ """
+ INSERT INTO skills (
+ skill_id,
+ name,
+ description,
+ category,
+ source,
+ has_local,
+ has_remote,
+ local_path,
+ uploaded_at,
+ uploaded_by,
+ updated_at,
+ current_version,
+ current_sha,
+ local_inject_count,
+ observed_injection_count,
+ read_count,
+ modified_count,
+ session_count,
+ effectiveness,
+ positive_count,
+ negative_count,
+ neutral_count,
+ content,
+ raw_json
+ ) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
+ """,
+ (
+ str(skill.get("skill_id", "") or ""),
+ str(skill.get("name", "") or ""),
+ str(skill.get("description", "") or ""),
+ str(skill.get("category", "general") or "general"),
+ str(skill.get("source", "local") or "local"),
+ 1 if skill.get("has_local") else 0,
+ 1 if skill.get("has_remote") else 0,
+ str(skill.get("local_path", "") or ""),
+ str(skill.get("uploaded_at", "") or ""),
+ str(skill.get("uploaded_by", "") or ""),
+ str(skill.get("updated_at", "") or ""),
+ int(skill.get("current_version", 0) or 0),
+ str(skill.get("current_sha", "") or ""),
+ int(skill.get("local_inject_count", 0) or 0),
+ int(skill.get("observed_injection_count", 0) or 0),
+ int(skill.get("read_count", 0) or 0),
+ int(skill.get("modified_count", 0) or 0),
+ int(skill.get("session_count", 0) or 0),
+ float(skill.get("effectiveness", 0.0) or 0.0),
+ int(skill.get("positive_count", 0) or 0),
+ int(skill.get("negative_count", 0) or 0),
+ int(skill.get("neutral_count", 0) or 0),
+ str(skill.get("skill_md", "") or ""),
+ _json_dumps(skill),
+ ),
+ )
+ for version in skill.get("versions") or []:
+ conn.execute(
+ """
+ INSERT INTO skill_versions (
+ skill_id,
+ version,
+ content_sha,
+ action,
+ timestamp,
+ raw_json
+ ) VALUES (?, ?, ?, ?, ?, ?)
+ """,
+ (
+ str(skill.get("skill_id", "") or ""),
+ int(version.get("version", 0) or 0),
+ str(version.get("content_sha", "") or ""),
+ str(version.get("action", "") or ""),
+ str(version.get("timestamp", "") or ""),
+ _json_dumps(version),
+ ),
+ )
+
+ for session in snapshot.get("sessions") or []:
+ conn.execute(
+ """
+ INSERT INTO sessions (
+ session_id,
+ timestamp,
+ user_alias,
+ num_turns,
+ avg_prm_score,
+ skill_names_json,
+ prompt_preview,
+ response_preview,
+ raw_json
+ ) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)
+ """,
+ (
+ str(session.get("session_id", "") or ""),
+ str(session.get("timestamp", "") or ""),
+ str(session.get("user_alias", "") or ""),
+ int(session.get("num_turns", 0) or 0),
+ float(session["avg_prm_score"]) if session.get("avg_prm_score") is not None else None,
+ _json_dumps(session.get("skill_names") or []),
+ str(session.get("prompt_preview", "") or ""),
+ str(session.get("response_preview", "") or ""),
+ _json_dumps(session),
+ ),
+ )
+
+ for link in snapshot.get("session_skill_links") or []:
+ conn.execute(
+ """
+ INSERT INTO session_skill_links (
+ session_id,
+ skill_id,
+ skill_name,
+ relation,
+ count
+ ) VALUES (?, ?, ?, ?, ?)
+ """,
+ (
+ str(link.get("session_id", "") or ""),
+ str(link.get("skill_id", "") or ""),
+ str(link.get("skill_name", "") or ""),
+ str(link.get("relation", "") or ""),
+ int(link.get("count", 0) or 0),
+ ),
+ )
+
+ for job in snapshot.get("validation_jobs") or []:
+ conn.execute(
+ """
+ INSERT INTO validation_jobs (
+ job_id,
+ created_at,
+ skill_name,
+ proposed_action,
+ status,
+ result_count,
+ accepted_count,
+ rejected_count,
+ mean_score,
+ raw_json
+ ) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
+ """,
+ (
+ str(job.get("job_id", "") or ""),
+ str(job.get("created_at", "") or ""),
+ str(job.get("skill_name", "") or ""),
+ str(job.get("proposed_action", "") or ""),
+ str(job.get("status", "") or ""),
+ int(job.get("result_count", 0) or 0),
+ int(job.get("accepted_count", 0) or 0),
+ int(job.get("rejected_count", 0) or 0),
+ float(job["mean_score"]) if job.get("mean_score") is not None else None,
+ _json_dumps(job),
+ ),
+ )
+
+ meta = dict(snapshot.get("meta") or {})
+ meta["generated_at"] = str(snapshot.get("generated_at", "") or "")
+ meta["skill_count"] = len(snapshot.get("skills") or [])
+ meta["session_count"] = len(snapshot.get("sessions") or [])
+ meta["validation_job_count"] = len(snapshot.get("validation_jobs") or [])
+ for key, value in meta.items():
+ conn.execute(
+ "INSERT INTO meta (key, value) VALUES (?, ?)",
+ (str(key), _json_dumps(value)),
+ )
+
+ return {
+ "generated_at": str(snapshot.get("generated_at", "") or ""),
+ "skills": len(snapshot.get("skills") or []),
+ "sessions": len(snapshot.get("sessions") or []),
+ "validation_jobs": len(snapshot.get("validation_jobs") or []),
+ "warnings": list((snapshot.get("meta") or {}).get("warnings") or []),
+ }
+
+ def get_meta(self) -> dict[str, Any]:
+ self.initialize()
+ with self._connect() as conn:
+ rows = conn.execute("SELECT key, value FROM meta ORDER BY key").fetchall()
+ return {str(row["key"]): _json_loads(row["value"], row["value"]) for row in rows}
+
+ def _skill_summary_from_row(self, row: sqlite3.Row) -> dict[str, Any]:
+ return {
+ "skill_id": row["skill_id"],
+ "name": row["name"],
+ "description": row["description"],
+ "category": row["category"],
+ "source": row["source"],
+ "has_local": bool(row["has_local"]),
+ "has_remote": bool(row["has_remote"]),
+ "local_path": row["local_path"],
+ "uploaded_at": row["uploaded_at"],
+ "uploaded_by": row["uploaded_by"],
+ "updated_at": row["updated_at"],
+ "current_version": row["current_version"],
+ "current_sha": row["current_sha"],
+ "local_inject_count": row["local_inject_count"],
+ "observed_injection_count": row["observed_injection_count"],
+ "read_count": row["read_count"],
+ "modified_count": row["modified_count"],
+ "session_count": row["session_count"],
+ "effectiveness": row["effectiveness"],
+ "positive_count": row["positive_count"],
+ "negative_count": row["negative_count"],
+ "neutral_count": row["neutral_count"],
+ }
+
+ def list_skills(
+ self,
+ *,
+ search: str = "",
+ category: str = "",
+ source: str = "",
+ limit: int = 500,
+ ) -> list[dict[str, Any]]:
+ self.initialize()
+ clauses: list[str] = []
+ params: list[Any] = []
+ if search:
+ clauses.append("(name LIKE ? OR description LIKE ?)")
+ needle = f"%{search}%"
+ params.extend([needle, needle])
+ if category:
+ clauses.append("category = ?")
+ params.append(category)
+ if source:
+ clauses.append("source = ?")
+ params.append(source)
+
+ query = """
+ SELECT * FROM skills
+ """
+ if clauses:
+ query += " WHERE " + " AND ".join(clauses)
+ query += """
+ ORDER BY
+ session_count DESC,
+ observed_injection_count DESC,
+ local_inject_count DESC,
+ name ASC
+ LIMIT ?
+ """
+ params.append(max(1, int(limit)))
+
+ with self._connect() as conn:
+ rows = conn.execute(query, params).fetchall()
+ return [self._skill_summary_from_row(row) for row in rows]
+
+ def get_skill(self, skill_id: str) -> dict[str, Any] | None:
+ self.initialize()
+ with self._connect() as conn:
+ row = conn.execute("SELECT * FROM skills WHERE skill_id = ?", (skill_id,)).fetchone()
+ if row is None:
+ return None
+ payload = _json_loads(row["raw_json"], {})
+ payload.update(self._skill_summary_from_row(row))
+ version_rows = conn.execute(
+ """
+ SELECT raw_json
+ FROM skill_versions
+ WHERE skill_id = ?
+ ORDER BY version DESC, timestamp DESC
+ """,
+ (skill_id,),
+ ).fetchall()
+ related_sessions = conn.execute(
+ """
+ SELECT
+ s.session_id,
+ s.timestamp,
+ s.user_alias,
+ s.num_turns,
+ s.avg_prm_score,
+ s.prompt_preview,
+ s.response_preview,
+ SUM(CASE WHEN l.relation = 'injected' THEN l.count ELSE 0 END) AS injected_count,
+ SUM(CASE WHEN l.relation = 'read' THEN l.count ELSE 0 END) AS read_count,
+ SUM(CASE WHEN l.relation = 'modified' THEN l.count ELSE 0 END) AS modified_count
+ FROM session_skill_links AS l
+ JOIN sessions AS s
+ ON s.session_id = l.session_id
+ WHERE l.skill_id = ?
+ GROUP BY
+ s.session_id,
+ s.timestamp,
+ s.user_alias,
+ s.num_turns,
+ s.avg_prm_score,
+ s.prompt_preview,
+ s.response_preview
+ ORDER BY s.timestamp DESC, s.session_id DESC
+ LIMIT 50
+ """,
+ (skill_id,),
+ ).fetchall()
+
+ payload["versions"] = [_json_loads(item["raw_json"], {}) for item in version_rows]
+ payload["related_sessions"] = [
+ {
+ "session_id": item["session_id"],
+ "timestamp": item["timestamp"],
+ "user_alias": item["user_alias"],
+ "num_turns": item["num_turns"],
+ "avg_prm_score": item["avg_prm_score"],
+ "prompt_preview": item["prompt_preview"],
+ "response_preview": item["response_preview"],
+ "injected_count": item["injected_count"],
+ "read_count": item["read_count"],
+ "modified_count": item["modified_count"],
+ }
+ for item in related_sessions
+ ]
+ return payload
+
+ def list_sessions(
+ self,
+ *,
+ skill_id: str = "",
+ search: str = "",
+ limit: int = 200,
+ ) -> list[dict[str, Any]]:
+ self.initialize()
+ clauses: list[str] = []
+ params: list[Any] = []
+ query = "SELECT DISTINCT s.* FROM sessions AS s"
+ if skill_id:
+ query += " JOIN session_skill_links AS l ON l.session_id = s.session_id"
+ clauses.append("l.skill_id = ?")
+ params.append(skill_id)
+ if search:
+ needle = f"%{search}%"
+ clauses.append("(s.session_id LIKE ? OR s.user_alias LIKE ? OR s.prompt_preview LIKE ?)")
+ params.extend([needle, needle, needle])
+ if clauses:
+ query += " WHERE " + " AND ".join(clauses)
+ query += " ORDER BY s.timestamp DESC, s.session_id DESC LIMIT ?"
+ params.append(max(1, int(limit)))
+
+ with self._connect() as conn:
+ rows = conn.execute(query, params).fetchall()
+ items: list[dict[str, Any]] = []
+ for row in rows:
+ payload = _json_loads(row["raw_json"], {})
+ items.append(
+ {
+ "session_id": row["session_id"],
+ "timestamp": row["timestamp"],
+ "user_alias": row["user_alias"],
+ "num_turns": row["num_turns"],
+ "avg_prm_score": row["avg_prm_score"],
+ "skill_names": _json_loads(row["skill_names_json"], []),
+ "prompt_preview": row["prompt_preview"],
+ "response_preview": row["response_preview"],
+ "source": str(payload.get("source", "") or ""),
+ "outcome": str(payload.get("outcome", "") or ""),
+ }
+ )
+ return items
+
+ def get_session(self, session_id: str) -> dict[str, Any] | None:
+ self.initialize()
+ with self._connect() as conn:
+ row = conn.execute("SELECT * FROM sessions WHERE session_id = ?", (session_id,)).fetchone()
+ if row is None:
+ return None
+ payload = _json_loads(row["raw_json"], {})
+ payload.update(
+ {
+ "session_id": row["session_id"],
+ "timestamp": row["timestamp"],
+ "user_alias": row["user_alias"],
+ "num_turns": row["num_turns"],
+ "avg_prm_score": row["avg_prm_score"],
+ "skill_names": _json_loads(row["skill_names_json"], []),
+ "prompt_preview": row["prompt_preview"],
+ "response_preview": row["response_preview"],
+ }
+ )
+ link_rows = conn.execute(
+ """
+ SELECT skill_id, skill_name, relation, count
+ FROM session_skill_links
+ WHERE session_id = ?
+ ORDER BY skill_name ASC, relation ASC
+ """,
+ (session_id,),
+ ).fetchall()
+ payload["links"] = [
+ {
+ "skill_id": item["skill_id"],
+ "skill_name": item["skill_name"],
+ "relation": item["relation"],
+ "count": item["count"],
+ }
+ for item in link_rows
+ ]
+ return payload
+
+ def list_validation_jobs(
+ self,
+ *,
+ status: str = "",
+ limit: int = 200,
+ ) -> list[dict[str, Any]]:
+ self.initialize()
+ query = "SELECT * FROM validation_jobs"
+ params: list[Any] = []
+ if status:
+ query += " WHERE status = ?"
+ params.append(status)
+ query += " ORDER BY created_at DESC, job_id DESC LIMIT ?"
+ params.append(max(1, int(limit)))
+ with self._connect() as conn:
+ rows = conn.execute(query, params).fetchall()
+ return [
+ {
+ "job_id": row["job_id"],
+ "created_at": row["created_at"],
+ "skill_name": row["skill_name"],
+ "proposed_action": row["proposed_action"],
+ "status": row["status"],
+ "result_count": row["result_count"],
+ "accepted_count": row["accepted_count"],
+ "rejected_count": row["rejected_count"],
+ "mean_score": row["mean_score"],
+ "details": _json_loads(row["raw_json"], {}),
+ }
+ for row in rows
+ ]
+
+ def get_overview(self) -> dict[str, Any]:
+ self.initialize()
+ with self._connect() as conn:
+ counts = {
+ "skills": int(conn.execute("SELECT COUNT(*) FROM skills").fetchone()[0]),
+ "local_skills": int(conn.execute("SELECT COUNT(*) FROM skills WHERE has_local = 1").fetchone()[0]),
+ "shared_skills": int(conn.execute("SELECT COUNT(*) FROM skills WHERE has_remote = 1").fetchone()[0]),
+ "sessions": int(conn.execute("SELECT COUNT(*) FROM sessions").fetchone()[0]),
+ "validation_jobs": int(conn.execute("SELECT COUNT(*) FROM validation_jobs").fetchone()[0]),
+ "open_validation_jobs": int(
+ conn.execute(
+ "SELECT COUNT(*) FROM validation_jobs WHERE status IN ('pending', 'review')"
+ ).fetchone()[0]
+ ),
+ "local_injections": int(
+ conn.execute("SELECT COALESCE(SUM(local_inject_count), 0) FROM skills").fetchone()[0]
+ ),
+ "observed_injections": int(
+ conn.execute("SELECT COALESCE(SUM(observed_injection_count), 0) FROM skills").fetchone()[0]
+ ),
+ "observed_reads": int(conn.execute("SELECT COALESCE(SUM(read_count), 0) FROM skills").fetchone()[0]),
+ "observed_modifications": int(
+ conn.execute("SELECT COALESCE(SUM(modified_count), 0) FROM skills").fetchone()[0]
+ ),
+ }
+ top_skills = conn.execute(
+ """
+ SELECT *
+ FROM skills
+ ORDER BY
+ session_count DESC,
+ observed_injection_count DESC,
+ local_inject_count DESC,
+ name ASC
+ LIMIT 8
+ """
+ ).fetchall()
+ recent_sessions = conn.execute(
+ """
+ SELECT *
+ FROM sessions
+ ORDER BY timestamp DESC, session_id DESC
+ LIMIT 8
+ """
+ ).fetchall()
+ categories = conn.execute(
+ """
+ SELECT category, COUNT(*) AS count
+ FROM skills
+ GROUP BY category
+ ORDER BY count DESC, category ASC
+ """
+ ).fetchall()
+
+ return {
+ "counts": counts,
+ "top_skills": [self._skill_summary_from_row(row) for row in top_skills],
+ "recent_sessions": [
+ {
+ "session_id": row["session_id"],
+ "timestamp": row["timestamp"],
+ "user_alias": row["user_alias"],
+ "num_turns": row["num_turns"],
+ "avg_prm_score": row["avg_prm_score"],
+ "skill_names": _json_loads(row["skill_names_json"], []),
+ "prompt_preview": row["prompt_preview"],
+ "source": str(_json_loads(row["raw_json"], {}).get("source", "") or ""),
+ "outcome": str(_json_loads(row["raw_json"], {}).get("outcome", "") or ""),
+ }
+ for row in recent_sessions
+ ],
+ "categories": [
+ {
+ "category": row["category"],
+ "count": row["count"],
+ }
+ for row in categories
+ ],
+ "meta": self.get_meta(),
+ }
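The store above follows a delete-and-reinsert projection pattern: each row carries a few indexed columns for filtering plus a `raw_json` blob for full detail, and the whole rebuild runs inside one transaction so readers never see a half-replaced table. A minimal stdlib-only sketch of that pattern (the schema and function name here are illustrative, not the module's):

```python
import json
import sqlite3


def replace_snapshot(conn: sqlite3.Connection, skills: list[dict]) -> int:
    """Delete-and-reinsert projection: indexed columns plus a raw JSON blob per row."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS skills ("
        " skill_id TEXT PRIMARY KEY,"
        " name TEXT NOT NULL,"
        " session_count INTEGER NOT NULL DEFAULT 0,"
        " raw_json TEXT NOT NULL DEFAULT '{}')"
    )
    with conn:  # one transaction: either the old snapshot or the new one is visible
        conn.execute("DELETE FROM skills")
        for skill in skills:
            conn.execute(
                "INSERT INTO skills VALUES (?, ?, ?, ?)",
                (
                    skill["skill_id"],
                    skill["name"],
                    int(skill.get("session_count", 0)),
                    json.dumps(skill, ensure_ascii=False),
                ),
            )
    return conn.execute("SELECT COUNT(*) FROM skills").fetchone()[0]
```

Queries then filter and sort on the typed columns, while detail endpoints decode `raw_json` back into the full payload, which is exactly how `list_skills` and `get_skill` divide the work above.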
diff --git a/skillclaw/skill_hub.py b/skillclaw/skill_hub.py
index 5053e7f..b7fa406 100644
--- a/skillclaw/skill_hub.py
+++ b/skillclaw/skill_hub.py
@@ -382,6 +382,7 @@ def pull_skills(
skills_dir: str,
mirror: bool = True,
skip_names: Optional[Collection[str]] = None,
+ include_names: Optional[Collection[str]] = None,
) -> dict[str, Any]:
"""Mirror cloud skills to local directory with backup + rollback safety.
@@ -396,6 +397,10 @@ def pull_skills(
local extras).
skip_names:
Skill names to preserve from local disk during this pull.
+ include_names:
+ Optional subset of remote skill names to download. When provided, the
+ pull is forced into incremental mode to avoid deleting unrelated
+ local skills.
Returns:
{
@@ -412,17 +417,60 @@ def pull_skills(
local_skills = {name: dirs[-1] for name, dirs in local_dirs_by_name.items() if dirs}
manifest = self._load_remote_manifest()
skip_set = {str(name or "").strip() for name in (skip_names or []) if str(name or "").strip()}
+ include_set = {str(name or "").strip() for name in (include_names or []) if str(name or "").strip()}
+ if include_set and mirror:
+ mirror = False
+ if include_set:
+ manifest = {name: rec for name, rec in manifest.items() if name in include_set}
+ missing_names = sorted(include_set - set(manifest))
+ requested_count = len(include_set)
+ matched_remote = len(manifest)
+
+ def _result(
+ *,
+ downloaded: int,
+ skipped: int,
+ deleted: int,
+ total_remote: int,
+ restored_from_backup: bool,
+ backup_dir: str,
+ ) -> dict[str, Any]:
+ payload: dict[str, Any] = {
+ "downloaded": downloaded,
+ "skipped": skipped,
+ "deleted": deleted,
+ "total_remote": total_remote,
+ "restored_from_backup": restored_from_backup,
+ "backup_dir": backup_dir,
+ }
+ if include_set:
+ payload.update(
+ {
+ "requested": requested_count,
+ "matched_remote": matched_remote,
+ "missing": len(missing_names),
+ "missing_names": missing_names,
+ }
+ )
+ return payload
+
if not manifest:
# Empty/failed manifest is treated as no-op to avoid accidental wipe.
- logger.warning("[SkillHub] remote manifest empty; skip mirror pull (downloaded=0 skipped=0 deleted=0)")
- return {
- "downloaded": 0,
- "skipped": 0,
- "deleted": 0,
- "total_remote": 0,
- "restored_from_backup": False,
- "backup_dir": "",
- }
+ if include_set:
+ logger.info(
+ "[SkillHub] none of the requested remote skills matched the manifest: %s",
+ ", ".join(missing_names) or "(empty request)",
+ )
+ else:
+ logger.warning("[SkillHub] remote manifest empty; skip mirror pull (downloaded=0 skipped=0 deleted=0)")
+ return _result(
+ downloaded=0,
+ skipped=0,
+ deleted=0,
+ total_remote=0,
+ restored_from_backup=False,
+ backup_dir="",
+ )
downloaded = 0
skipped = 0
@@ -474,14 +522,14 @@ def pull_skills(
skipped,
len(manifest),
)
- return {
- "downloaded": downloaded,
- "skipped": skipped,
- "deleted": 0,
- "total_remote": len(manifest),
- "restored_from_backup": False,
- "backup_dir": "",
- }
+ return _result(
+ downloaded=downloaded,
+ skipped=skipped,
+ deleted=0,
+ total_remote=len(manifest),
+ restored_from_backup=False,
+ backup_dir="",
+ )
parent_dir = os.path.dirname(os.path.abspath(skills_dir))
base_name = os.path.basename(os.path.abspath(skills_dir))
@@ -496,14 +544,14 @@ def pull_skills(
shutil.copytree(skills_dir, backup_dir)
except Exception as e:
logger.warning("[SkillHub] backup before pull failed: %s", e)
- return {
- "downloaded": 0,
- "skipped": 0,
- "deleted": 0,
- "total_remote": len(manifest),
- "restored_from_backup": False,
- "backup_dir": "",
- }
+ return _result(
+ downloaded=0,
+ skipped=0,
+ deleted=0,
+ total_remote=len(manifest),
+ restored_from_backup=False,
+ backup_dir="",
+ )
os.makedirs(staging_dir, exist_ok=True)
resolved_targets: dict[str, str] = {}
@@ -590,14 +638,14 @@ def pull_skills(
except Exception as restore_err:
logger.error("[SkillHub] backup restore failed: %s", restore_err)
- return {
- "downloaded": 0,
- "skipped": 0,
- "deleted": 0,
- "total_remote": len(manifest),
- "restored_from_backup": restored_from_backup,
- "backup_dir": backup_dir,
- }
+ return _result(
+ downloaded=0,
+ skipped=0,
+ deleted=0,
+ total_remote=len(manifest),
+ restored_from_backup=restored_from_backup,
+ backup_dir=backup_dir,
+ )
finally:
if os.path.isdir(staging_dir):
shutil.rmtree(staging_dir, ignore_errors=True)
@@ -610,14 +658,14 @@ def pull_skills(
len(manifest),
)
self._prune_backups(backup_root, backup_prefix, keep=3)
- return {
- "downloaded": downloaded,
- "skipped": skipped,
- "deleted": deleted,
- "total_remote": len(manifest),
- "restored_from_backup": False,
- "backup_dir": backup_dir,
- }
+ return _result(
+ downloaded=downloaded,
+ skipped=skipped,
+ deleted=deleted,
+ total_remote=len(manifest),
+ restored_from_backup=False,
+ backup_dir=backup_dir,
+ )
# ------------------------------------------------------------------ #
# List remote skills #
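The `include_names` pre-processing above (normalize names, drop blanks, force non-mirror mode, filter the manifest, report misses) can be sketched standalone; `plan_pull` is a hypothetical helper for illustration, not part of the codebase:

```python
from typing import Any, Collection, Optional


def plan_pull(
    manifest: dict[str, Any],
    mirror: bool = True,
    include_names: Optional[Collection[str]] = None,
) -> dict[str, Any]:
    """Replicate pull_skills' include_names pre-processing in isolation."""
    include_set = {str(n or "").strip() for n in (include_names or []) if str(n or "").strip()}
    if include_set and mirror:
        # A partial pull must never mirror-delete unrelated local skills.
        mirror = False
    if include_set:
        manifest = {name: rec for name, rec in manifest.items() if name in include_set}
    return {
        "mirror": mirror,
        "matched": sorted(manifest),
        "missing": sorted(include_set - set(manifest)),
    }
```

Whitespace-only and empty names are discarded before matching, so a request of `["sql-trace", " ghost "]` against a manifest containing only `sql-trace` yields one match and one missing name, with mirror mode disabled.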
diff --git a/tests/test_dashboard.py b/tests/test_dashboard.py
new file mode 100644
index 0000000..a720a0a
--- /dev/null
+++ b/tests/test_dashboard.py
@@ -0,0 +1,1432 @@
+from __future__ import annotations
+
+import hashlib
+import json
+import tempfile
+import textwrap
+import unittest
+from pathlib import Path
+
+from fastapi.testclient import TestClient
+
+from skillclaw.config import SkillClawConfig
+from skillclaw.dashboard_ingest import build_dashboard_snapshot
+from skillclaw.dashboard_server import DashboardService, create_dashboard_app
+from skillclaw.dashboard_store import DashboardStore
+
+
+def _sha256_text(value: str) -> str:
+ return hashlib.sha256(value.encode("utf-8")).hexdigest()
+
+
+def _skill_id(name: str) -> str:
+ return hashlib.sha256(name.encode("utf-8")).hexdigest()[:12]
+
+
+def _skill_doc(name: str, description: str, body: str, *, category: str = "general") -> str:
+ return textwrap.dedent(
+ f"""\
+ ---
+ name: {name}
+ description: "{description}"
+ category: {category}
+ ---
+
+ # {name}
+
+ {body}
+ """
+ )
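`_skill_doc` above emits a SKILL.md with a `---`-fenced frontmatter block. A minimal sketch of reading such a document back; `split_skill_doc` is hypothetical and assumes exactly the fencing used here:

```python
def split_skill_doc(document: str) -> tuple[dict[str, str], str]:
    """Split a SKILL.md-style document into frontmatter fields and markdown body."""
    # Assumes the document starts with "---\n" and the frontmatter closes with "---\n".
    _, frontmatter, body = document.split("---\n", 2)
    fields: dict[str, str] = {}
    for line in frontmatter.strip().splitlines():
        key, _, value = line.partition(":")
        fields[key.strip()] = value.strip().strip('"')
    return fields, body.strip()


# Illustrative document in the same shape _skill_doc produces.
sample = '---\nname: demo\ndescription: "Example skill."\ncategory: general\n---\n\n# demo\n\nBody text.\n'
```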
+
+
+def _history_entry(version: int, document: str, timestamp: str, action: str) -> dict[str, object]:
+ return {
+ "version": version,
+ "content_sha": _sha256_text(document),
+ "timestamp": timestamp,
+ "action": action,
+ "skill_md": document,
+ "content": document,
+ }
+
+
+def _transcript_record(role: str, text: str) -> dict[str, object]:
+ payload = text
+ if role == "user":
+ payload = f"\n{text}\n "
+ return {
+ "role": role,
+ "message": {
+ "content": [
+ {
+ "type": "text",
+ "text": payload,
+ }
+ ]
+ },
+ }
+
+
+class DashboardFixture:
+ def __init__(self) -> None:
+ self.tempdir = tempfile.TemporaryDirectory()
+ self.root = Path(self.tempdir.name)
+ self.skills_dir = self.root / "skills"
+ self.share_root = self.root / "share"
+ self.group_dir = self.share_root / "team-alpha"
+ self.db_path = self.root / "dashboard.sqlite3"
+
+ self.local_docs = self._build_local_docs()
+ self.shared_docs = self._build_shared_docs()
+
+ self._create_local_skills()
+ self._create_local_state()
+ self._create_local_records()
+ self._create_shared_snapshot()
+
+ self.config = SkillClawConfig(
+ use_skills=True,
+ skills_dir=str(self.skills_dir),
+ record_dir=str(self.root / "records"),
+ sharing_enabled=True,
+ sharing_backend="local",
+ sharing_local_root=str(self.share_root),
+ sharing_group_id="team-alpha",
+ sharing_user_alias="tester",
+ dashboard_enabled=True,
+ dashboard_db_path=str(self.db_path),
+ dashboard_sync_on_start=True,
+ dashboard_include_shared=True,
+ dashboard_evolve_server_url="",
+ )
+
+ def cleanup(self) -> None:
+ self.tempdir.cleanup()
+
+ def _write_json(self, path: Path, payload: object) -> None:
+ path.parent.mkdir(parents=True, exist_ok=True)
+ path.write_text(json.dumps(payload, indent=2), encoding="utf-8")
+
+ def _build_local_docs(self) -> dict[str, str]:
+ return {
+ "debug-notes": _skill_doc(
+ "debug-notes",
+ "Keep a compact running log while debugging.",
+ """\
+ ## When to use
+ - When the failure mode keeps changing between retries.
+ - When you need a short breadcrumb trail before editing code.
+
+ ## Workflow
+ 1. Record the failed assumption.
+ 2. Capture the last observable fact.
+ 3. State the next probe before making a patch.
+ """,
+ category="coding",
+ ),
+ "api-contract-checklist": _skill_doc(
+ "api-contract-checklist",
+ "Verify request, auth, and schema assumptions before patching API tests.",
+ """\
+ ## Checklist
+ - Confirm auth headers and tenant routing.
+ - Check request and response schema drift.
+ - Verify fixture defaults before changing handlers.
+ """,
+ category="backend",
+ ),
+ "release-rollback-runbook": _skill_doc(
+ "release-rollback-runbook",
+ "Coordinate rollback checks during incident mitigation.",
+ """\
+ ## Rollback guardrails
+ - Identify blast radius and freeze new writes.
+ - Verify migration compatibility before rollback.
+ - Keep a short operator handoff note after mitigation.
+ """,
+ category="ops",
+ ),
+ "incident-timeline": _skill_doc(
+ "incident-timeline",
+ "Summarize incident progression, impact window, and mitigation sequence.",
+ """\
+ ## Timeline template
+ - Start with first customer-visible symptom.
+ - Keep timestamps in chronological order.
+ - Separate hypothesis, action, and observed outcome.
+ """,
+ category="ops",
+ ),
+ }
+
+ def _build_shared_docs(self) -> dict[str, str]:
+ return {
+ "debug-notes": _skill_doc(
+ "debug-notes",
+ "Keep a compact running log while debugging.",
+ """\
+ ## Shared practice
+ - Capture the failed assumption before editing files.
+ - After each retry, summarize what changed and what stayed invariant.
+ - End with one concrete next step instead of a generic note.
+ """,
+ category="coding",
+ ),
+ "incident-timeline": _skill_doc(
+ "incident-timeline",
+ "Summarize incident progression, impact window, and mitigation sequence.",
+ """\
+ ## Shared practice
+ - Anchor the timeline on customer impact and service restoration.
+ - Record mitigation checkpoints, not every shell command.
+ - Close with one unresolved question for the next responder.
+ """,
+ category="ops",
+ ),
+ "release-rollback-runbook": _skill_doc(
+ "release-rollback-runbook",
+ "Coordinate rollback checks during incident mitigation.",
+ """\
+ ## Shared rollback path
+ - Confirm release identifier, migration window, and affected tenants.
+ - Execute rollback in the least-coupled order.
+ - Validate alarms, dashboards, and customer traffic after recovery.
+ """,
+ category="ops",
+ ),
+ "sql-trace": _skill_doc(
+ "sql-trace",
+ "Trace SQL state transitions during debugging.",
+ """\
+ ## Trace format
+ - Log query, bind parameters, row count, and transaction scope.
+ - Mark where state diverges from expectation.
+ - Keep application log references next to query traces.
+ """,
+ category="data_analysis",
+ ),
+ "prompt-risk-screener": _skill_doc(
+ "prompt-risk-screener",
+ "Screen prompts for policy, jailbreak, and ambiguity risks before execution.",
+ """\
+ ## Screening loop
+ - Classify policy-sensitive intent first.
+ - Separate ambiguity from deliberate jailbreak behavior.
+ - Recommend the smallest safe rewrite when blocking is not required.
+ """,
+ category="governance",
+ ),
+ "handoff-brief": _skill_doc(
+ "handoff-brief",
+ "Prepare a concise operator handoff after long debugging sessions.",
+ """\
+ ## Handoff format
+ - Problem statement in one sentence.
+ - What changed, what remains risky, and what to verify next.
+ - Include owners, timestamps, and the next blocking question.
+ """,
+ category="operations",
+ ),
+ }
+
+ def _create_local_skills(self) -> None:
+ local_stats = {
+ "debug-notes": {
+ "inject_count": 18,
+ "positive_count": 10,
+ "negative_count": 2,
+ "neutral_count": 6,
+ "last_injected_at": "2026-04-21T01:20:00Z",
+ "effectiveness": 0.79,
+ },
+ "api-contract-checklist": {
+ "inject_count": 12,
+ "positive_count": 7,
+ "negative_count": 1,
+ "neutral_count": 4,
+ "last_injected_at": "2026-04-21T02:00:00Z",
+ "effectiveness": 0.82,
+ },
+ "release-rollback-runbook": {
+ "inject_count": 5,
+ "positive_count": 2,
+ "negative_count": 1,
+ "neutral_count": 2,
+ "last_injected_at": "2026-04-20T23:30:00Z",
+ "effectiveness": 0.58,
+ },
+ "incident-timeline": {
+ "inject_count": 9,
+ "positive_count": 5,
+ "negative_count": 1,
+ "neutral_count": 3,
+ "last_injected_at": "2026-04-20T23:10:00Z",
+ "effectiveness": 0.74,
+ },
+ }
+
+ for name, document in self.local_docs.items():
+ skill_dir = self.skills_dir / name
+ skill_dir.mkdir(parents=True, exist_ok=True)
+ (skill_dir / "SKILL.md").write_text(document, encoding="utf-8")
+
+ (self.skills_dir / "skill_stats.json").write_text(
+ json.dumps(local_stats, indent=2),
+ encoding="utf-8",
+ )
+
+ def _create_shared_snapshot(self) -> None:
+ skills_dir = self.group_dir / "skills"
+ sessions_dir = self.group_dir / "sessions"
+ validation_jobs_dir = self.group_dir / "validation_jobs"
+ validation_results_dir = self.group_dir / "validation_results"
+ validation_decisions_dir = self.group_dir / "validation_decisions"
+
+ for path in (skills_dir, sessions_dir, validation_jobs_dir, validation_results_dir, validation_decisions_dir):
+ path.mkdir(parents=True, exist_ok=True)
+
+ for name, document in self.shared_docs.items():
+ skill_dir = skills_dir / name
+ skill_dir.mkdir(parents=True, exist_ok=True)
+ (skill_dir / "SKILL.md").write_text(document, encoding="utf-8")
+
+ manifest = [
+ {
+ "name": "debug-notes",
+ "description": "Keep a compact running log while debugging.",
+ "category": "coding",
+ "sha256": _sha256_text(self.shared_docs["debug-notes"]),
+ "uploaded_by": "alice",
+ "uploaded_at": "2026-04-20T09:30:00Z",
+ },
+ {
+ "name": "incident-timeline",
+ "description": "Summarize incident progression, impact window, and mitigation sequence.",
+ "category": "ops",
+ "sha256": _sha256_text(self.shared_docs["incident-timeline"]),
+ "uploaded_by": "carol",
+ "uploaded_at": "2026-04-20T16:25:00Z",
+ },
+ {
+ "name": "release-rollback-runbook",
+ "description": "Coordinate rollback checks during incident mitigation.",
+ "category": "ops",
+ "sha256": _sha256_text(self.shared_docs["release-rollback-runbook"]),
+ "uploaded_by": "dan",
+ "uploaded_at": "2026-04-20T17:20:00Z",
+ },
+ {
+ "name": "sql-trace",
+ "description": "Trace SQL state transitions during debugging.",
+ "category": "data_analysis",
+ "sha256": _sha256_text(self.shared_docs["sql-trace"]),
+ "uploaded_by": "bob",
+ "uploaded_at": "2026-04-20T09:40:00Z",
+ },
+ {
+ "name": "prompt-risk-screener",
+ "description": "Screen prompts for policy, jailbreak, and ambiguity risks before execution.",
+ "category": "governance",
+ "sha256": _sha256_text(self.shared_docs["prompt-risk-screener"]),
+ "uploaded_by": "mia",
+ "uploaded_at": "2026-04-20T20:10:00Z",
+ },
+ {
+ "name": "handoff-brief",
+ "description": "Prepare a concise operator handoff after long debugging sessions.",
+ "category": "operations",
+ "sha256": _sha256_text(self.shared_docs["handoff-brief"]),
+ "uploaded_by": "erin",
+ "uploaded_at": "2026-04-20T21:10:00Z",
+ },
+ ]
+ (self.group_dir / "manifest.jsonl").write_text(
+ "\n".join(json.dumps(item) for item in manifest) + "\n",
+ encoding="utf-8",
+ )
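The group manifest above is written as JSON Lines (one JSON record per line). A minimal reader sketch, with `read_manifest_lines` as an illustrative name:

```python
import json


def read_manifest_lines(text: str) -> dict[str, dict]:
    """Parse manifest.jsonl content into a name-keyed dict, skipping blank lines."""
    records: dict[str, dict] = {}
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        record = json.loads(line)
        records[record["name"]] = record
    return records
```

Because later lines overwrite earlier ones under the same name, the last record for a skill wins if the file is ever appended to.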
+
+ registry = {
+ "debug-notes": {
+ "skill_id": _skill_id("debug-notes"),
+ "version": 3,
+ "content_sha": _sha256_text(self.shared_docs["debug-notes"]),
+ "history": [
+ _history_entry(
+ 1,
+ _skill_doc(
+ "debug-notes",
+ "Keep a compact running log while debugging.",
+ "Capture the failing assumption before editing any file.",
+ category="coding",
+ ),
+ "2026-04-18T08:00:00Z",
+ "create",
+ ),
+ _history_entry(
+ 2,
+ _skill_doc(
+ "debug-notes",
+ "Keep a compact running log while debugging.",
+ "Capture the failing assumption and note what changed after each retry.",
+ category="coding",
+ ),
+ "2026-04-19T08:15:00Z",
+ "improve",
+ ),
+ _history_entry(3, self.shared_docs["debug-notes"], "2026-04-20T09:30:00Z", "improve"),
+ ],
+ },
+ "incident-timeline": {
+ "skill_id": _skill_id("incident-timeline"),
+ "version": 2,
+ "content_sha": _sha256_text(self.shared_docs["incident-timeline"]),
+ "history": [
+ _history_entry(
+ 1,
+ _skill_doc(
+ "incident-timeline",
+ "Summarize incident progression, impact window, and mitigation sequence.",
+ "Track first impact, mitigation, and restore time.",
+ category="ops",
+ ),
+ "2026-04-19T12:00:00Z",
+ "create",
+ ),
+ _history_entry(2, self.shared_docs["incident-timeline"], "2026-04-20T16:25:00Z", "improve"),
+ ],
+ },
+ "release-rollback-runbook": {
+ "skill_id": _skill_id("release-rollback-runbook"),
+ "version": 2,
+ "content_sha": _sha256_text(self.shared_docs["release-rollback-runbook"]),
+ "history": [
+ _history_entry(
+ 1,
+ _skill_doc(
+ "release-rollback-runbook",
+ "Coordinate rollback checks during incident mitigation.",
+ "Confirm rollback owner and migration compatibility.",
+ category="ops",
+ ),
+ "2026-04-19T19:30:00Z",
+ "create",
+ ),
+ _history_entry(2, self.shared_docs["release-rollback-runbook"], "2026-04-20T17:20:00Z", "improve"),
+ ],
+ },
+ "sql-trace": {
+ "skill_id": _skill_id("sql-trace"),
+ "version": 2,
+ "content_sha": _sha256_text(self.shared_docs["sql-trace"]),
+ "history": [
+ _history_entry(
+ 1,
+ _skill_doc(
+ "sql-trace",
+ "Trace SQL state transitions during debugging.",
+ "Log query text, parameters, and row counts.",
+ category="data_analysis",
+ ),
+ "2026-04-19T07:50:00Z",
+ "create",
+ ),
+ _history_entry(2, self.shared_docs["sql-trace"], "2026-04-20T09:40:00Z", "improve"),
+ ],
+ },
+ "prompt-risk-screener": {
+ "skill_id": _skill_id("prompt-risk-screener"),
+ "version": 4,
+ "content_sha": _sha256_text(self.shared_docs["prompt-risk-screener"]),
+ "history": [
+ _history_entry(
+ 1,
+ _skill_doc(
+ "prompt-risk-screener",
+ "Screen prompts for policy, jailbreak, and ambiguity risks before execution.",
+ "Classify unsafe requests before suggesting changes.",
+ category="governance",
+ ),
+ "2026-04-18T06:15:00Z",
+ "create",
+ ),
+ _history_entry(
+ 2,
+ _skill_doc(
+ "prompt-risk-screener",
+ "Screen prompts for policy, jailbreak, and ambiguity risks before execution.",
+ "Separate policy risk from simple ambiguity.",
+ category="governance",
+ ),
+ "2026-04-19T10:45:00Z",
+ "improve",
+ ),
+ _history_entry(
+ 3,
+ _skill_doc(
+ "prompt-risk-screener",
+ "Screen prompts for policy, jailbreak, and ambiguity risks before execution.",
+ "Add a small safe-rewrite suggestion when blocking is not needed.",
+ category="governance",
+ ),
+ "2026-04-20T08:10:00Z",
+ "improve",
+ ),
+ _history_entry(4, self.shared_docs["prompt-risk-screener"], "2026-04-20T20:10:00Z", "improve"),
+ ],
+ },
+ "handoff-brief": {
+ "skill_id": _skill_id("handoff-brief"),
+ "version": 1,
+ "content_sha": _sha256_text(self.shared_docs["handoff-brief"]),
+ "history": [
+ _history_entry(1, self.shared_docs["handoff-brief"], "2026-04-20T21:10:00Z", "create"),
+ ],
+ },
+ }
+ (self.group_dir / "evolve_skill_registry.json").write_text(
+ json.dumps(registry, indent=2),
+ encoding="utf-8",
+ )
+
+ shared_sessions = [
+ {
+ "session_id": "sess-104",
+ "source": "shared",
+ "timestamp": "2026-04-21T01:30:00Z",
+ "user_alias": "jane",
+ "num_turns": 3,
+ "outcome": "success",
+ "outcome_reasons": [
+ "staging contract mismatch reproduced",
+ "shared handoff prepared for oncall",
+ ],
+ "turns": [
+ {
+ "turn_num": 1,
+ "prompt_text": "Trace the flaky partner API contract failure.",
+ "response_text": "I will validate auth headers and collect a compact trace.",
+ "injected_skills": ["debug-notes", "api-contract-checklist"],
+ "read_skills": [{"skill_name": "handoff-brief"}],
+ "modified_skills": [],
+ "prm_score": 0.83,
+ },
+ {
+ "turn_num": 2,
+ "prompt_text": "Patch the auth fixture and re-run the failing contract test.",
+ "response_text": (
+ "Header casing was wrong in the fixture; "
+ "I updated the checklist and reran the contract path."
+ ),
+ "injected_skills": [],
+ "read_skills": [{"skill_name": "api-contract-checklist"}],
+ "modified_skills": [{"skill_name": "api-contract-checklist"}],
+ "prm_score": 0.78,
+ },
+ {
+ "turn_num": 3,
+ "prompt_text": "Summarize the fix for the next responder.",
+ "response_text": "Recorded the mismatch, patch scope, and verification steps.",
+ "injected_skills": ["debug-notes"],
+ "read_skills": [],
+ "modified_skills": [],
+ "prm_score": 0.8,
+ },
+ ],
+ },
+ {
+ "session_id": "sess-103",
+ "source": "shared",
+ "timestamp": "2026-04-20T23:10:00Z",
+ "user_alias": "nora",
+ "num_turns": 2,
+ "outcome": "review",
+ "outcome_reasons": [
+ "prompt policy boundary refined",
+ "candidate improvement queued for human review",
+ ],
+ "turns": [
+ {
+ "turn_num": 1,
+ "prompt_text": (
+ "Screen a customer prompt that mixes benign analytics with policy-sensitive content."
+ ),
+ "response_text": "I will separate ambiguity from policy-sensitive intent first.",
+ "injected_skills": ["prompt-risk-screener"],
+ "read_skills": [],
+ "modified_skills": [],
+ "prm_score": 0.76,
+ },
+ {
+ "turn_num": 2,
+ "prompt_text": "Rewrite the screening guidance to reduce false positives.",
+ "response_text": "Added a safe-rewrite suggestion path and explicit jailbreak detection notes.",
+ "injected_skills": [],
+ "read_skills": [{"skill_name": "prompt-risk-screener"}],
+ "modified_skills": [{"skill_name": "prompt-risk-screener"}],
+ "prm_score": 0.71,
+ },
+ ],
+ },
+ {
+ "session_id": "sess-102",
+ "source": "shared",
+ "timestamp": "2026-04-20T18:40:00Z",
+ "user_alias": "bob",
+ "num_turns": 2,
+ "outcome": "success",
+ "outcome_reasons": [
+ "row count drift isolated to transaction scope",
+ ],
+ "turns": [
+ {
+ "turn_num": 1,
+ "prompt_text": "Investigate why the SQL update did not persist.",
+ "response_text": "I will trace transaction boundaries and compare row counts.",
+ "injected_skills": ["sql-trace"],
+ "read_skills": [{"skill_name": "debug-notes"}],
+ "modified_skills": [],
+ "prm_score": 0.81,
+ },
+ {
+ "turn_num": 2,
+ "prompt_text": "Patch the query logging and retry.",
+ "response_text": (
+ "Added query logging, confirmed the transaction scope, and updated the trace skill."
+ ),
+ "injected_skills": ["debug-notes"],
+ "read_skills": [{"skill_name": "sql-trace"}],
+ "modified_skills": [{"skill_name": "sql-trace"}],
+ "prm_score": 0.67,
+ },
+ ],
+ },
+ {
+ "session_id": "sess-101",
+ "source": "shared",
+ "timestamp": "2026-04-20T15:20:00Z",
+ "user_alias": "carol",
+ "num_turns": 2,
+ "outcome": "rollback",
+ "outcome_reasons": [
+ "release toggled off after tenant write amplification",
+ "postmortem timeline requested",
+ ],
+ "turns": [
+ {
+ "turn_num": 1,
+ "prompt_text": "Assemble an incident timeline for the tenant write spike.",
+ "response_text": "I will collect customer impact, mitigation steps, and restore time.",
+ "injected_skills": ["incident-timeline"],
+ "read_skills": [{"skill_name": "release-rollback-runbook"}],
+ "modified_skills": [],
+ "prm_score": 0.74,
+ },
+ {
+ "turn_num": 2,
+ "prompt_text": "Prepare a rollback note for the release manager.",
+ "response_text": (
+ "Captured the rollback sequence and annotated the timeline with mitigation checkpoints."
+ ),
+ "injected_skills": ["release-rollback-runbook"],
+ "read_skills": [],
+ "modified_skills": [{"skill_name": "incident-timeline"}],
+ "prm_score": 0.69,
+ },
+ ],
+ },
+ {
+ "session_id": "sess-100",
+ "source": "shared",
+ "timestamp": "2026-04-20T11:00:00Z",
+ "user_alias": "alice",
+ "num_turns": 2,
+ "outcome": "success",
+ "outcome_reasons": [
+ "query logging patch validated",
+ ],
+ "turns": [
+ {
+ "turn_num": 1,
+ "prompt_text": "Investigate why the SQL update did not persist.",
+ "response_text": "I will inspect the transaction boundaries first.",
+ "injected_skills": ["debug-notes"],
+ "read_skills": [{"skill_name": "sql-trace"}],
+ "modified_skills": [],
+ "prm_score": 0.81,
+ },
+ {
+ "turn_num": 2,
+ "prompt_text": "Patch the query logging and retry.",
+ "response_text": "Added query logging and replayed the failing path.",
+ "injected_skills": ["sql-trace"],
+ "read_skills": [],
+ "modified_skills": [{"skill_name": "sql-trace"}],
+ "prm_score": 0.67,
+ },
+ ],
+ },
+ ]
+ for payload in shared_sessions:
+ (sessions_dir / f"{payload['session_id']}.json").write_text(
+ json.dumps(payload, indent=2),
+ encoding="utf-8",
+ )
+
+ prompt_candidate_doc = _skill_doc(
+ "prompt-risk-screener",
+ "Screen prompts for policy, jailbreak, and ambiguity risks before execution.",
+ """\
+ ## Candidate change
+ - Add a small safe-rewrite path before hard blocking ambiguous prompts.
+ - Require an explicit jailbreak note when instructions conflict.
+ - Preserve a short rationale that a human reviewer can inspect later.
+ """,
+ category="governance",
+ )
+ api_candidate_doc = _skill_doc(
+ "api-contract-checklist",
+ "Verify request, auth, and schema assumptions before patching API tests.",
+ """\
+ ## Candidate checklist
+ - Verify auth, tenant routing, and fixture defaults before touching handlers.
+ - Compare contract fixtures against the latest generated schema.
+ - Record one rollback-safe verification step after the patch lands.
+ """,
+ category="backend",
+ )
+ sql_candidate_doc = _skill_doc(
+ "sql-trace",
+ "Trace SQL state transitions during debugging.",
+ """\
+ ## Candidate improvement
+ - Include transaction scope and row count deltas in the trace.
+ - Link the trace to the failing application log span.
+ - Mark the exact point where persistence diverged.
+ """,
+ category="data_analysis",
+ )
+ rollback_candidate_doc = _skill_doc(
+ "release-rollback-runbook",
+ "Coordinate rollback checks during incident mitigation.",
+ """\
+ ## Candidate rollback path
+ - Roll back immediately after migration lock acquisition.
+ - Skip secondary dashboard validation until after customer traffic stabilizes.
+ - Keep a terse operator note only if the incident stays open for more than 30 minutes.
+ """,
+ category="ops",
+ )
+
+ validation_jobs = [
+ {
+ "job_id": "job-pending",
+ "created_at": "2026-04-21T01:40:00Z",
+ "candidate_skill_name": "prompt-risk-screener",
+ "proposed_action": "improve",
+ "session_ids": ["sess-103"],
+ "session_evidence": [
+ {
+ "session_id": "sess-103",
+ "summary": (
+ "Reviewer flagged false positives when ambiguous prompts were screened too aggressively."
+ ),
+ "judge_overall_score": 0.71,
+ "avg_prm": 0.735,
+ }
+ ],
+ "candidate_skill": {
+ "name": "prompt-risk-screener",
+ "description": "Screen prompts for policy, jailbreak, and ambiguity risks before execution.",
+ "category": "governance",
+ "skill_md": prompt_candidate_doc,
+ },
+ "current_skill": {
+ "name": "prompt-risk-screener",
+ "description": "Screen prompts for policy, jailbreak, and ambiguity risks before execution.",
+ "category": "governance",
+ "skill_md": self.shared_docs["prompt-risk-screener"],
+ },
+ "min_results": 2,
+ "min_approvals": 1,
+ "min_score": 0.7,
+ "max_rejections": 1,
+ "rationale": (
+ "The current skill blocks too early on ambiguous prompts. "
+ "A reviewer should confirm the safer rewrite path."
+ ),
+ },
+ {
+ "job_id": "job-review",
+ "created_at": "2026-04-20T22:10:00Z",
+ "candidate_skill_name": "api-contract-checklist",
+ "proposed_action": "create",
+ "session_ids": ["local-004", "sess-104"],
+ "session_evidence": [
+ {
+ "session_id": "local-004",
+ "summary": "Local run found an auth header casing mismatch before modifying handlers.",
+ "judge_overall_score": 0.84,
+ "avg_prm": 0.0,
+ },
+ {
+ "session_id": "sess-104",
+ "summary": "Shared session confirmed the checklist generalized to a partner contract failure.",
+ "judge_overall_score": 0.8,
+ "avg_prm": 0.803,
+ },
+ ],
+ "candidate_skill": {
+ "name": "api-contract-checklist",
+ "description": "Verify request, auth, and schema assumptions before patching API tests.",
+ "category": "backend",
+ "skill_md": api_candidate_doc,
+ },
+ "min_results": 2,
+ "min_approvals": 2,
+ "min_score": 0.75,
+ "max_rejections": 0,
+ "rationale": (
+ "The checklist looks reusable, but reviewers disagree on whether it is still too API-specific."
+ ),
+ },
+ {
+ "job_id": "job-rejected",
+ "created_at": "2026-04-20T19:45:00Z",
+ "candidate_skill_name": "release-rollback-runbook",
+ "proposed_action": "improve",
+ "session_ids": ["local-003", "sess-101"],
+ "session_evidence": [
+ {
+ "session_id": "local-003",
+ "summary": "Local operator notes suggested a faster rollback shortcut.",
+ "judge_overall_score": 0.42,
+ "avg_prm": 0.0,
+ },
+ {
+ "session_id": "sess-101",
+ "summary": (
+ "Shared incident replay showed the shortcut skipped validation steps needed by oncall."
+ ),
+ "judge_overall_score": 0.35,
+ "avg_prm": 0.715,
+ },
+ ],
+ "candidate_skill": {
+ "name": "release-rollback-runbook",
+ "description": "Coordinate rollback checks during incident mitigation.",
+ "category": "ops",
+ "skill_md": rollback_candidate_doc,
+ },
+ "current_skill": {
+ "name": "release-rollback-runbook",
+ "description": "Coordinate rollback checks during incident mitigation.",
+ "category": "ops",
+ "skill_md": self.shared_docs["release-rollback-runbook"],
+ },
+ "min_results": 2,
+ "min_approvals": 2,
+ "min_score": 0.7,
+ "max_rejections": 0,
+ "rationale": "The candidate removed validation checks that responders still need during rollback.",
+ },
+ {
+ "job_id": "job-published",
+ "created_at": "2026-04-20T11:05:00Z",
+ "candidate_skill_name": "sql-trace",
+ "proposed_action": "improve",
+ "session_ids": ["sess-100", "sess-102"],
+ "session_evidence": [
+ {
+ "session_id": "sess-100",
+ "summary": "Initial trace isolated the missing transaction boundary.",
+ "judge_overall_score": 0.83,
+ "avg_prm": 0.74,
+ },
+ {
+ "session_id": "sess-102",
+ "summary": "Improved trace format captured row count drift and application log correlation.",
+ "judge_overall_score": 0.88,
+ "avg_prm": 0.74,
+ },
+ ],
+ "candidate_skill": {
+ "name": "sql-trace",
+ "description": "Trace SQL state transitions during debugging.",
+ "category": "data_analysis",
+ "skill_md": sql_candidate_doc,
+ },
+ "current_skill": {
+ "name": "sql-trace",
+ "description": "Trace SQL state transitions during debugging.",
+ "category": "data_analysis",
+ "skill_md": self.shared_docs["sql-trace"],
+ },
+ "min_results": 2,
+ "min_approvals": 2,
+ "min_score": 0.75,
+ "max_rejections": 0,
+ "rationale": (
+ "The new trace format consistently improved debugging quality across two SQL persistence failures."
+ ),
+ },
+ ]
+ for payload in validation_jobs:
+ (validation_jobs_dir / f"{payload['job_id']}.json").write_text(
+ json.dumps(payload, indent=2),
+ encoding="utf-8",
+ )
+
+ validation_results = {
+ "job-review": [
+ {
+ "job_id": "job-review",
+ "user_alias": "qa-ops",
+ "accepted": True,
+ "decision": "accept",
+ "score": 0.82,
+ "created_at": "2026-04-20T22:18:00Z",
+ "notes": "The checklist abstracts the contract debugging flow well.",
+ "validator_mode": "manual",
+ },
+ {
+ "job_id": "job-review",
+ "user_alias": "sec-review",
+ "accepted": False,
+ "decision": "reject",
+ "score": 0.38,
+ "created_at": "2026-04-20T22:24:00Z",
+ "notes": "Still too tied to one partner auth flow.",
+ "validator_mode": "manual",
+ },
+ ],
+ "job-rejected": [
+ {
+ "job_id": "job-rejected",
+ "user_alias": "oncall-sre",
+ "accepted": False,
+ "decision": "reject",
+ "score": 0.41,
+ "created_at": "2026-04-20T19:52:00Z",
+ "notes": "The shortcut removes rollback validation that the operator still needs.",
+ "validator_mode": "manual",
+ },
+ {
+ "job_id": "job-rejected",
+ "user_alias": "incident-commander",
+ "accepted": False,
+ "decision": "reject",
+ "score": 0.33,
+ "created_at": "2026-04-20T19:58:00Z",
+ "notes": "Too risky for a shared runbook.",
+ "validator_mode": "review",
+ },
+ ],
+ "job-published": [
+ {
+ "job_id": "job-published",
+ "user_alias": "alice",
+ "accepted": True,
+ "decision": "accept",
+ "score": 0.88,
+ "created_at": "2026-04-20T11:12:00Z",
+ "notes": "The trace format is clearer and reusable.",
+ "validator_mode": "manual",
+ },
+ {
+ "job_id": "job-published",
+ "user_alias": "bob",
+ "accepted": True,
+ "decision": "accept",
+ "score": 0.91,
+ "created_at": "2026-04-20T11:13:30Z",
+ "notes": "Captures transaction scope and row count drift without adding noise.",
+ "validator_mode": "review",
+ },
+ ],
+ }
+ for job_id, results in validation_results.items():
+ result_dir = validation_results_dir / job_id
+ result_dir.mkdir(parents=True, exist_ok=True)
+ for result in results:
+ (result_dir / f"{result['user_alias']}.json").write_text(
+ json.dumps(result, indent=2),
+ encoding="utf-8",
+ )
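The thresholds recorded on each job (`min_results`, `min_approvals`, `min_score`, `max_rejections`) suggest reviewer results are aggregated before any publish/reject decision. A hypothetical sketch of that aggregation only; the decision step itself is not part of this diff:

```python
def review_stats(results: list[dict]) -> dict:
    """Aggregate reviewer results ahead of a threshold check (illustrative)."""
    approvals = sum(1 for r in results if r["accepted"])
    scores = [r["score"] for r in results]
    return {
        "approvals": approvals,
        "rejections": len(results) - approvals,
        "mean_score": round(sum(scores) / len(scores), 3) if scores else 0.0,
    }
```

Fed the two accepting results from the `job-published` fixture, this yields two approvals, zero rejections, and a mean score of 0.895.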
+
+ validation_decisions = {
+ "job-published": {
+ "job_id": "job-published",
+ "status": "published",
+ "published_action": "improve",
+ "decided_at": "2026-04-20T11:14:00Z",
+ "reason": "Published after two positive reviews with mean score above threshold.",
+ },
+ "job-rejected": {
+ "job_id": "job-rejected",
+ "status": "rejected",
+ "published_action": "improve",
+ "decided_at": "2026-04-20T20:05:00Z",
+ "reason": "Rejected because the rollback shortcut removed mandatory verification.",
+ },
+ }
+ for job_id, decision in validation_decisions.items():
+ (validation_decisions_dir / f"{job_id}.json").write_text(
+ json.dumps(decision, indent=2),
+ encoding="utf-8",
+ )
+
+ def _create_local_state(self) -> None:
+ """Seed local trajectory metadata, JSONL transcripts, and offsets for four sessions."""
+ state_dir = self.root / "state"
+ trajectories_dir = state_dir / "trajectories"
+ transcripts_root = self.root / "cursor_transcripts"
+
+ trajectories_dir.mkdir(parents=True, exist_ok=True)
+ transcripts_root.mkdir(parents=True, exist_ok=True)
+
+ conversations = [
+ {
+ "session_id": "local-004",
+ "start_time": "2026-04-21T01:40:00Z",
+ "end_time": "2026-04-21T02:10:00Z",
+ "outcome": "success",
+ "active_skills": ["api-contract-checklist", "debug-notes"],
+ "outcome_reasons": [
+ "auth header mismatch isolated before handler edits",
+ "schema fixture regenerated and verified",
+ ],
+ "turns": [
+ (
+ "Investigate why the partner API test still returns 401 after the fixture refactor.",
+ "I will validate the auth contract, capture the current "
+ "request shape, and keep a short debug log before editing "
+ "code.",
+ ),
+ (
+ "Patch the fixture and summarize the exact verification steps.",
+ "The header casing was wrong in the fixture. I updated the "
+ "contract checklist, reran the failing test, and recorded "
+ "the verification path.",
+ ),
+ ],
+ },
+ {
+ "session_id": "local-003",
+ "start_time": "2026-04-20T22:50:00Z",
+ "end_time": "2026-04-20T23:30:00Z",
+ "outcome": "rollback",
+ "active_skills": ["incident-timeline", "release-rollback-runbook"],
+ "outcome_reasons": [
+ "tenant write amplification required rollback",
+ "operator notes needed a cleaner incident timeline",
+ ],
+ "turns": [
+ (
+ "Prepare a rollback note for the release that amplified tenant writes.",
+ "I will build a timeline first, then turn the rollback into a short runbook for the operator.",
+ ),
+ (
+ "Summarize what remains risky after the rollback.",
+ "The rollback stabilized traffic, but secondary validation "
+ "is still required for delayed jobs and dashboard lag.",
+ ),
+ ],
+ },
+ {
+ "session_id": "local-002",
+ "start_time": "2026-04-20T20:50:00Z",
+ "end_time": "2026-04-20T21:25:00Z",
+ "outcome": "failure",
+ "active_skills": ["debug-notes"],
+ "outcome_reasons": [
+ "state drift still not reproducible locally",
+ "next step is to instrument the retry worker",
+ ],
+ "turns": [
+ (
+ "Debug why the retry worker stops after one transient failure.",
+ "I will keep a compact note of each assumption and check "
+ "whether the worker state changes between retries.",
+ ),
+ (
+ "Summarize what is still unknown.",
+ "We know the retry lease expires too early, but we still "
+ "need one more probe around worker heartbeats.",
+ ),
+ ],
+ },
+ {
+ "session_id": "local-001",
+ "start_time": "2026-04-20T08:00:00Z",
+ "end_time": "2026-04-20T08:30:00Z",
+ "outcome": "failure",
+ "active_skills": ["debug-notes"],
+ "outcome_reasons": [
+ "command failed: pytest tests/test_api.py",
+ "user correction: tests/conftest.py",
+ ],
+ "turns": [
+ (
+ "Please debug the failing API test.",
+ "I will inspect the failing test and trace the request path.",
+ ),
+ (
+ "Now summarize the fix.",
+ "The issue was an auth header mismatch in the test fixture.",
+ ),
+ ],
+ },
+ ]
+
+ conv_offsets: dict[str, int] = {}
+ for conversation in conversations:
+ session_id = str(conversation["session_id"])
+ transcript_dir = transcripts_root / session_id
+ transcript_dir.mkdir(parents=True, exist_ok=True)
+ transcript_path = transcript_dir / f"{session_id}.jsonl"
+
+ transcript_lines: list[dict[str, object]] = []
+ for user_text, assistant_text in conversation["turns"]:
+ transcript_lines.append(_transcript_record("user", user_text))
+ transcript_lines.append(_transcript_record("assistant", assistant_text))
+ transcript_path.write_text(
+ "\n".join(json.dumps(item) for item in transcript_lines) + "\n",
+ encoding="utf-8",
+ )
+ conv_offsets[str(transcript_path)] = 0
+
+ self._write_json(
+ trajectories_dir / f"{session_id}.json",
+ {
+ "conversation_id": session_id,
+ "active_skills": conversation["active_skills"],
+ "outcome": conversation["outcome"],
+ "start_time": conversation["start_time"],
+ "end_time": conversation["end_time"],
+ "outcome_reasons": conversation["outcome_reasons"],
+ },
+ )
+
+ self._write_json(state_dir / "conv_offsets.json", conv_offsets)
+
+ def _create_local_records(self) -> None:
+ """Seed per-turn conversation records and PRM scores under records/."""
+ records_dir = self.root / "records"
+ records_dir.mkdir(parents=True, exist_ok=True)
+
+ conversations = [
+ {
+ "session_id": "local-005",
+ "turn": 1,
+ "timestamp": "2026-04-21 03:15:00",
+ "messages": [
+ {"role": "system", "content": "Skill catalog injected"},
+ {
+ "role": "user",
+ "content": "Summarize the retry-worker regression and keep the answer brief.",
+ },
+ ],
+ "instruction_text": "Summarize the retry-worker regression and keep the answer brief.",
+ "prompt_text": (
+ "system: Skill catalog injected\n\nuser: Summarize the "
+ "retry-worker regression and keep the answer brief."
+ ),
+ "response_text": (
+ "The worker exits after the first transient failure because "
+ "the lease heartbeat stops renewing after the retry path "
+ "resets its state."
+ ),
+ "tool_calls": [],
+ },
+ {
+ "session_id": "local-005",
+ "turn": 2,
+ "timestamp": "2026-04-21 03:16:30",
+ "messages": [
+ {"role": "system", "content": "Skill catalog injected"},
+ {
+ "role": "user",
+ "content": "Now list the next two verification steps.",
+ },
+ ],
+ "instruction_text": "Now list the next two verification steps.",
+ "prompt_text": "system: Skill catalog injected\n\nuser: Now list the next two verification steps.",
+ "response_text": (
+ "1. Capture lease-heartbeat timestamps across the retry "
+ "boundary.\n2. Verify whether the worker resets its retry "
+ "state before the heartbeat loop restarts."
+ ),
+ "tool_calls": [],
+ },
+ ]
+ (records_dir / "conversations.jsonl").write_text(
+ "\n".join(json.dumps(item) for item in conversations) + "\n",
+ encoding="utf-8",
+ )
+
+ prm_scores = [
+ {
+ "session_id": "local-005",
+ "turn": 1,
+ "score": 0.61,
+ "votes": [0.6, 0.61, 0.62],
+ },
+ {
+ "session_id": "local-005",
+ "turn": 2,
+ "score": 0.67,
+ "votes": [0.65, 0.67, 0.69],
+ },
+ ]
+ (records_dir / "prm_scores.jsonl").write_text(
+ "\n".join(json.dumps(item) for item in prm_scores) + "\n",
+ encoding="utf-8",
+ )
+
+
+class DashboardSnapshotTests(unittest.TestCase):
+ """Snapshot building and DashboardStore query behavior against the fixture."""
+ def setUp(self) -> None:
+ self.fixture = DashboardFixture()
+
+ def tearDown(self) -> None:
+ self.fixture.cleanup()
+
+ def test_snapshot_and_store_queries(self) -> None:
+ snapshot = build_dashboard_snapshot(self.fixture.config)
+ self.assertEqual(len(snapshot["skills"]), 7)
+ self.assertEqual(len(snapshot["sessions"]), 10)
+ self.assertEqual(len(snapshot["validation_jobs"]), 4)
+ self.assertEqual(snapshot["sessions"][0]["session_id"], "local-005")
+ self.assertEqual(snapshot["sessions"][0]["num_turns"], 2)
+
+ store = DashboardStore(str(self.fixture.db_path))
+ summary = store.replace_snapshot(snapshot)
+ self.assertEqual(summary["skills"], 7)
+ self.assertEqual(summary["sessions"], 10)
+
+ overview = store.get_overview()
+ self.assertEqual(overview["counts"]["skills"], 7)
+ self.assertEqual(overview["counts"]["sessions"], 10)
+ self.assertEqual(overview["counts"]["validation_jobs"], 4)
+ self.assertEqual(overview["counts"]["open_validation_jobs"], 2)
+
+ skills = store.list_skills()
+ self.assertEqual(
+ {item["name"] for item in skills},
+ {
+ "api-contract-checklist",
+ "debug-notes",
+ "handoff-brief",
+ "incident-timeline",
+ "prompt-risk-screener",
+ "release-rollback-runbook",
+ "sql-trace",
+ },
+ )
+ debug_skill = next(item for item in skills if item["name"] == "debug-notes")
+ self.assertEqual(debug_skill["source"], "both")
+ self.assertEqual(debug_skill["local_inject_count"], 18)
+ self.assertEqual(debug_skill["session_count"], 6)
+ self.assertEqual(debug_skill["observed_injection_count"], 7)
+
+ debug_detail = store.get_skill(debug_skill["skill_id"])
+ self.assertIsNotNone(debug_detail)
+ self.assertGreaterEqual(len(debug_detail["versions"]), 3)
+ self.assertEqual(debug_detail["related_sessions"][0]["session_id"], "local-004")
+
+ session_detail = store.get_session("sess-100")
+ self.assertIsNotNone(session_detail)
+ self.assertEqual(session_detail["num_turns"], 2)
+ self.assertEqual(len(session_detail["links"]), 4)
+
+ local_session = store.get_session("local-001")
+ self.assertIsNotNone(local_session)
+ self.assertEqual(local_session["source"], "local")
+ self.assertEqual(local_session["outcome"], "failure")
+ self.assertEqual(len(local_session["turns"]), 2)
+
+ record_session = store.get_session("local-005")
+ self.assertIsNotNone(record_session)
+ self.assertEqual(record_session["source"], "local")
+ self.assertEqual(record_session["num_turns"], 2)
+ self.assertEqual(
+ record_session["turns"][0]["prompt_text"],
+ "Summarize the retry-worker regression and keep the answer brief.",
+ )
+ self.assertEqual(record_session["turns"][1]["prm_score"], 0.67)
+
+ validation_jobs = store.list_validation_jobs()
+ statuses = {item["job_id"]: item["status"] for item in validation_jobs}
+ self.assertEqual(statuses["job-published"], "published")
+ self.assertEqual(statuses["job-pending"], "pending")
+ self.assertEqual(statuses["job-review"], "review")
+ self.assertEqual(statuses["job-rejected"], "rejected")
+
+ def test_local_sessions_are_visible_without_sharing(self) -> None:
+ config = SkillClawConfig(
+ use_skills=True,
+ skills_dir=str(self.fixture.skills_dir),
+ record_dir=str(self.fixture.root / "records"),
+ sharing_enabled=False,
+ dashboard_enabled=True,
+ dashboard_db_path=str(self.fixture.root / "dashboard-local-only.sqlite3"),
+ dashboard_sync_on_start=True,
+ dashboard_include_shared=True,
+ )
+
+ snapshot = build_dashboard_snapshot(config)
+ self.assertEqual(len(snapshot["skills"]), 4)
+ self.assertEqual(len(snapshot["sessions"]), 5)
+ self.assertEqual(snapshot["sessions"][0]["session_id"], "local-005")
+ self.assertEqual(snapshot["sessions"][0]["source"], "local")
+
+ def test_export_local_sessions_to_shared_storage(self) -> None:
+ service = DashboardService(self.fixture.config)
+ result = service.export_local_sessions()
+ self.assertEqual(result["result"]["exported"], 5)
+ exported_path = self.fixture.share_root / "team-alpha" / "sessions" / "local-001.json"
+ self.assertTrue(exported_path.exists())
+ exported_payload = json.loads(exported_path.read_text(encoding="utf-8"))
+ self.assertEqual(exported_payload["session_id"], "local-001")
+ self.assertEqual(exported_payload["source"], "local-dashboard-export")
+
+ def test_export_selected_local_sessions_to_shared_storage(self) -> None:
+ service = DashboardService(self.fixture.config)
+ result = service.export_local_sessions(session_ids=["local-001", "missing-002"])
+ self.assertEqual(result["selection"]["mode"], "selected")
+ self.assertEqual(result["result"]["requested"], 2)
+ self.assertEqual(result["result"]["matched"], 1)
+ self.assertEqual(result["result"]["missing"], 1)
+ self.assertEqual(result["result"]["missing_ids"], ["missing-002"])
+ self.assertEqual(result["result"]["exported"], 1)
+ exported_path = self.fixture.share_root / "team-alpha" / "sessions" / "local-001.json"
+ self.assertTrue(exported_path.exists())
+
+
+class DashboardApiTests(unittest.TestCase):
+ """HTTP API and rendered-page behavior of the dashboard app."""
+ def setUp(self) -> None:
+ self.fixture = DashboardFixture()
+
+ def tearDown(self) -> None:
+ self.fixture.cleanup()
+
+ def test_dashboard_api_and_ui(self) -> None:
+ """Exercise the index page and the /api/v1 endpoints end to end."""
+ app = create_dashboard_app(self.fixture.config)
+ with TestClient(app) as client:
+ index_resp = client.get("/")
+ self.assertEqual(index_resp.status_code, 200)
+ self.assertIn("技能演化看板", index_resp.text)  # page title: "Skill Evolution Dashboard"
+ # Action labels ("trigger evolve", "push local skill", "manual review")
+ # must not appear in the rendered page.
+ self.assertNotIn("触发 evolve", index_resp.text)
+ self.assertNotIn("推送本地 skill", index_resp.text)
+ self.assertNotIn("人工审核", index_resp.text)
+
+ health_resp = client.get("/api/v1/health")
+ self.assertEqual(health_resp.status_code, 200)
+ self.assertEqual(health_resp.json()["status"], "ok")
+
+ overview_resp = client.get("/api/v1/overview")
+ self.assertEqual(overview_resp.status_code, 200)
+ overview = overview_resp.json()
+ self.assertEqual(overview["counts"]["skills"], 7)
+ self.assertEqual(overview["counts"]["sessions"], 10)
+
+ skills_resp = client.get("/api/v1/skills")
+ self.assertEqual(skills_resp.status_code, 200)
+ skills = skills_resp.json()["items"]
+ self.assertEqual(len(skills), 7)
+
+ debug_skill = next(item for item in skills if item["name"] == "debug-notes")
+ detail_resp = client.get(f"/api/v1/skills/{debug_skill['skill_id']}")
+ self.assertEqual(detail_resp.status_code, 200)
+ debug_detail = detail_resp.json()
+ self.assertEqual(debug_detail["name"], "debug-notes")
+ debug_v2 = next(item for item in debug_detail["versions"] if int(item["version"]) == 2)
+
+ activate_resp = client.post(
+ f"/api/v1/skills/{debug_skill['skill_id']}/activate",
+ json={"target": "shared-version:2"},
+ )
+ self.assertEqual(activate_resp.status_code, 200)
+ self.assertEqual(activate_resp.json()["target"], "shared-version:2")
+ local_debug_path = self.fixture.skills_dir / "debug-notes" / "SKILL.md"
+ self.assertEqual(local_debug_path.read_text(encoding="utf-8").strip(), debug_v2["skill_md"].strip())
+
+ sessions_resp = client.get("/api/v1/sessions")
+ self.assertEqual(sessions_resp.status_code, 200)
+ self.assertEqual(len(sessions_resp.json()["items"]), 10)
+
+ session_resp = client.get("/api/v1/sessions/sess-100")
+ self.assertEqual(session_resp.status_code, 200)
+ self.assertEqual(session_resp.json()["session_id"], "sess-100")
+
+ validation_resp = client.get("/api/v1/validation/jobs")
+ self.assertEqual(validation_resp.status_code, 200)
+ self.assertEqual(len(validation_resp.json()["items"]), 4)
+
+ export_selected_resp = client.post(
+ "/api/v1/ops/export-sessions",
+ json={"session_ids": ["local-001"]},
+ )
+ self.assertEqual(export_selected_resp.status_code, 200)
+ export_selected = export_selected_resp.json()
+ self.assertEqual(export_selected["selection"]["mode"], "selected")
+ self.assertEqual(export_selected["result"]["requested"], 1)
+ self.assertEqual(export_selected["result"]["matched"], 1)
+ self.assertEqual(export_selected["result"]["exported"], 1)
+
+ pull_selected_resp = client.post(
+ "/api/v1/ops/pull",
+ json={"skill_names": ["sql-trace"]},
+ )
+ self.assertEqual(pull_selected_resp.status_code, 200)
+ pull_selected = pull_selected_resp.json()
+ self.assertEqual(pull_selected["selection"]["mode"], "selected")
+ self.assertEqual(pull_selected["result"]["requested"], 1)
+ self.assertEqual(pull_selected["result"]["matched_remote"], 1)
+ self.assertTrue((self.fixture.skills_dir / "sql-trace" / "SKILL.md").exists())
+
+ skills_after_pull = client.get("/api/v1/skills")
+ self.assertEqual(skills_after_pull.status_code, 200)
+ sql_trace = next(item for item in skills_after_pull.json()["items"] if item["name"] == "sql-trace")
+ self.assertEqual(sql_trace["source"], "both")
+
+ review_resp = client.post(
+ "/api/v1/validation/jobs/job-pending/review",
+ json={
+ "accepted": True,
+ "score": 0.91,
+ "notes": "Looks reusable and grounded in the session evidence.",
+ "auto_finalize": False,
+ },
+ )
+ self.assertEqual(review_resp.status_code, 200)
+ review_payload = review_resp.json()
+ self.assertEqual(review_payload["user_alias"], "tester")
+ self.assertIs(review_payload["result"]["accepted"], True)
+ saved_result_path = self.fixture.group_dir / "validation_results" / "job-pending" / "tester.json"
+ self.assertTrue(saved_result_path.exists())
+ saved_result = json.loads(saved_result_path.read_text(encoding="utf-8"))
+ self.assertEqual(saved_result["decision"], "accept")
+ self.assertEqual(saved_result["notes"], "Looks reusable and grounded in the session evidence.")
+
+ validation_after_review = client.get("/api/v1/validation/jobs")
+ self.assertEqual(validation_after_review.status_code, 200)
+ reviewed_job = next(
+ item for item in validation_after_review.json()["items"] if item["job_id"] == "job-pending"
+ )
+ self.assertEqual(reviewed_job["status"], "review")
+ self.assertEqual(reviewed_job["result_count"], 1)
+ self.assertEqual(reviewed_job["accepted_count"], 1)
+
+ evolve_status_resp = client.get("/api/v1/evolve/status")
+ self.assertEqual(evolve_status_resp.status_code, 200)
+ evolve_status = evolve_status_resp.json()
+ self.assertTrue(evolve_status["configured"])
+ self.assertEqual(evolve_status["url"], "embedded://local-evolve")
+ self.assertIn("registered_skills", evolve_status["status"])
+
+ sync_resp = client.post("/api/v1/sync")
+ self.assertEqual(sync_resp.status_code, 200)
+ self.assertEqual(sync_resp.json()["summary"]["skills"], 7)
+
+
+if __name__ == "__main__":
+ unittest.main()