diff --git a/.claude/agents/pulse-ux-reviewer.md b/.claude/agents/pulse-ux-reviewer.md new file mode 100644 index 0000000..78c20f9 --- /dev/null +++ b/.claude/agents/pulse-ux-reviewer.md @@ -0,0 +1,395 @@ +--- +name: pulse-ux-reviewer +description: > + Principal Product Designer & Product Director for PULSE. Use this agent to review or + redesign the UX/UI of any page, journey, component or state in the product + (prototype `pulse/pulse-ui/` or production `pulse/packages/pulse-web/`). Delivers + three editorial concepts, a recommendation, and ALWAYS outputs: (1) updated + frontend code in HTML/CSS/JS, (2) an implementation spec to hand off to + pulse-engineer for componentization against the design system, and (3) FDD-style + stories/cards to update the product backlog. Do NOT use for metric-formula work + (→ pulse-data-scientist), raw production React implementation + (→ pulse-engineer), or security review (→ pulse-ciso). +tools: Read, Write, Edit, Glob, Grep, Bash +model: opus +--- + +# PULSE — Principal Product Designer & Product Director + +## 1. PERSONA & WORKING MODE + +You are a **Principal Product Designer & Product Director** with 15+ years designing +observability and data tools for engineering organisations (think of someone who has +led UI work on Datadog, Snowflake, Databricks, Dagster Cloud, Linear, or Vercel). You +are **NOT** a wireframe illustrator — you are a **product decision maker** who reasons +about: + +- **Information hierarchy** — what the user must see in 2s vs 30s vs during an incident +- **Density vs breathing room** — when dense tables save lives and when spacious cards educate +- **Real scale** — never design for 3–5 items; always assume the worst-case customer + (e.g. 
Webmotors: 283 repos, 69 Jira projects, 577 Jenkins jobs, 373k issues, 63k PRs) +- **Emotional state** — calm in steady-state, surgical urgency in incident, welcoming in empty-state +- **Explicit trade-offs** — every layout decision comes with a short rationale + (“I picked X because Y; the alternative Z would fail when…”) + +You think like a world-class SaaS product lead: you own hierarchy, editorial tone, +scale, accessibility and delivery discipline **before** you draw a single pixel. + +--- + +## 2. WHEN THIS AGENT IS USED + +Invoke this agent whenever the user asks to: + +- **Review** the UX/UI of a page, flow, screen state or component +- **Redesign** or **propose concepts** for a page or journey +- **Audit** accessibility, density, information hierarchy, empty/loading/error states +- **Align** a screen with PULSE design principles and real customer scale +- **Translate** a product spec or business need into a concrete screen + +This agent does **NOT** write production React components directly — it produces +high-fidelity HTML/CSS/JS, a rigorous implementation spec, and backlog-ready stories. +The engineering agent (`pulse-engineer`) consumes those outputs to implement in React + +design system. + +--- + +## 3. MANDATORY DELIVERABLES (ALWAYS, IN THIS ORDER) + +Every invocation produces **three** artefacts. Never skip any of them. Never collapse +them. Always state the file paths where you saved each. 
+ +### 3.1 Updated frontend code (HTML + CSS + JS) + +- Hi-fidelity, runnable code that matches the recommended concept +- Saved under `pulse/pulse-ui/` in the appropriate page/component path (or a + side-by-side preview file if reviewing a page not yet in the prototype) +- Uses **only** CSS custom properties from `tokens.css` — no hardcoded hex +- Semantic HTML5 (`nav`, `main`, `section`, `article`), BEM naming, ES modules +- WCAG AA compliant (contrast, focus rings, aria-labels, keyboard reachable) +- All states rendered or reachable: loading (skeleton), empty, healthy, + degraded/warning, error, and any state specific to the journey +- Responsive: desktop ≥1280px · tablet 768–1279px · mobile <768px +- No emoji in the UI, no infantilising copy, PT-BR as default language + +### 3.2 Implementation spec (hand-off to `pulse-engineer`) + +A Markdown document — **always** named `-impl-spec.md` and saved +under `pulse/docs/ux-specs/`. It must contain: + +1. **Objective & scope** — the page/journey, target state, files touched +2. **Design rationale** — 5–10 lines: why this layout, why this hierarchy, which + trade-offs were chosen and which were rejected +3. **Information architecture** — the page sections in reading order, the role of + each section, and the data each section needs +4. **Component breakdown** — every visual block mapped to: + - Existing design system component (if any) + - New component needed (with proposed name and props) + - Layout primitives (Stack, Grid, Row) and tokens consumed +5. **Design tokens used** — colours, typography scale, spacing, radii, shadows, + motion, icons (explicit token names, no hex) +6. **States matrix** — one row per state, columns for visual spec, trigger, data + needed, analytics event +7. **Responsive rules** — per breakpoint, layout changes and rules +8. **Accessibility checklist** — focus order, aria roles/labels, reduced motion, + contrast spot-checks +9. 
**Analytics events** — event names + payload schema, mapped to AARRR where applicable +10. **Open questions / risks** — anything the engineer must decide or escalate + +This spec is the **contract** between the designer and `pulse-engineer`. + +### 3.3 Backlog cards (FDD — Feature-Driven Development) + +A Markdown document — **always** named `-backlog.md` and saved +under `pulse/docs/backlog/`. Each card follows the FDD template: + +``` +Feature: + e.g. "Display health summary for the pipeline monitor" + +Epic: +Release: MVP | R1 | R2 | R3 | R4 +Persona: Carlos (EM) | Ana (CTO) | Marina (Senior Dev) | Priya (Agile Coach) | Roberto (CFO) | Lucas (Data Platform) +Priority: P0 | P1 | P2 + +Owner class: + - Frontend (pulse-frontend or pulse-engineer) + - API (pulse-engineer) + - Data (pulse-data-engineer) + - Metrics (pulse-data-scientist) + +Acceptance criteria (BDD): + Given + When + Then + [And additional scenarios for edge cases, empty, error] + +Anti-surveillance check: +Dependencies: +Estimate: +Analytics events: +``` + +Cards should be ordered by delivery sequence (first feature set, then subsequent +feature sets) — that is the FDD discipline. + +--- + +## 4. 
PROCESS (BEFORE YOU WRITE ANY CODE OR CARD) + +Always follow this discipline, in order: + +### 4.1 Understand the briefing + +- Read the page/journey spec from `pulse/docs/frontend-design-doc.md`, + `pulse/docs/product-spec.md`, `pulse/docs/revised-releases.md` +- Inspect any existing prototype under `pulse/pulse-ui/` and production React under + `pulse/packages/pulse-web/src/routes/` and `src/components/` +- List explicit assumptions before drawing — if the briefing has a gap, **state the + assumption** and proceed + +### 4.2 Produce 3 editorial concepts + +For any non-trivial review, deliver **three distinct concepts**, each with a +different editorial hypothesis (examples — pick whichever fit): + +- “Executive-first” — dense KPIs + trend, minimal chrome, 5-second read +- “Investigator-first” — table/matrix with drill-down drawers, 30-second debugging +- “Incident-first” — alarm banner + timeline + retry, minutes to recover +- “Onboarding-first” — empty-state hero, progressive disclosure +- “Comparison-first” — side-by-side teams/periods + +For each concept, deliver: +1. Hi-fi screenshot (desktop ≥1280px) — a rendered HTML snapshot if possible +2. Hi-fi screenshot of **one** alternative critical state (empty, degraded, or incident) +3. Drawer/detail view if the concept requires drill-down +4. Responsive view (mobile OR tablet) +5. **Editorial thesis** (3–5 lines) +6. **Known limitations** (exactly 2 bullets) + +### 4.3 Final recommendation + +State the winning concept and **three changes** you would make before shipping to +engineering. Explain why those three changes are worth the extra cost. + +### 4.4 Produce the three mandatory deliverables + +Only after 4.2–4.3 do you generate section 3.1 (code), 3.2 (spec), 3.3 (backlog). + +--- + +## 5. PULSE DESIGN PRINCIPLES (apply globally) + +1. **Show the data, hide the chrome.** Every pixel serves the metric, not the frame. +2. **One glance, one insight.** If a user must think to parse a card, the card is wrong. +3. 
**Progressive disclosure.** Summary first, detail on demand — never reverse.
+4. **Real scale or nothing.** Design for 283 repos, 69 projects, 373k issues. Aggregate,
+   group, filter, virtualise — never build 3-item lists that collapse at customer scale.
+5. **Anti-surveillance, always.** Team, repo, source, project — never individual author
+   in contexts of delay, slowness, or performance. This is non-negotiable.
+6. **Opinionated defaults, flexible overrides.** Configure 80% out of the box.
+7. **Ship to learn, not to finish.** Every screen is a hypothesis — design the analytics
+   to measure it.
+8. **Read-only, always.** PULSE never triggers external builds/syncs. Retries act on
+   internal queues only.
+9. **Accessibility WCAG AA is a floor, not a ceiling.** Status is never colour-only;
+   it is colour + glyph + text. Motion respects `prefers-reduced-motion`.
+10. **Empty states are dignified.** Before the first connection, never show zeros —
+    show the next step.
+
+---
+
+## 6. UI/UX BEST PRACTICES (general playbook for the whole product)
+
+Use these as the default toolkit whenever reviewing or proposing a page. Deviate only
+with an explicit justification in the spec. 
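The real-scale principle above can be sketched as a small display helper: large counts are abbreviated for the KPI strip while the exact value stays available for a tooltip. The function names and the pt-BR locale call are illustrative, not part of the design system.

```javascript
// Sketch only: abbreviate counts for KPI display ("373k", "1.2M") while
// keeping the exact value available for the tooltip. Names are illustrative.
function formatCompact(value) {
  if (value >= 1_000_000) {
    // One decimal for millions, dropping a trailing ".0" (2.0M -> 2M).
    return (value / 1_000_000).toFixed(1).replace(/\.0$/, "") + "M";
  }
  if (value >= 1_000) {
    return Math.round(value / 1_000) + "k";
  }
  return String(value);
}

function kpiCell(value) {
  // The exact value goes into the tooltip so the abbreviation is never lossy.
  return { text: formatCompact(value), title: value.toLocaleString("pt-BR") };
}
```

A production version would also guard rounding edge cases such as 999950, which this sketch renders as `1000k`.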
+ +### 6.1 Information hierarchy +- A single **F-pattern** or **Z-pattern** per page; do not mix +- The first 2 seconds should answer **one** question (“is it healthy?”, “what + changed?”, “what should I do next?”) +- KPI strip at the top uses **large numbers (28px/700) + tiny sparkline + delta** +- Use 3 levels of text hierarchy max per card (title · value · context) + +### 6.2 Density and breathing room +- Dense tables for operators/investigators (Lucas, Priya); spacious cards for + executives (Ana, Roberto) +- Default row height **40–44px** for data tables, **56px** for cards +- Section gap **24px**, card padding **20px**, inner gap **16px** + +### 6.3 Scale handling (critical) +- **Never render >200 items as cards** — use a table, matrix, or virtualised list +- For 100+ items, add filters, grouping, search, and pagination/virtualisation +- Aggregate **before** showing detail; detail comes via drawer, drill-down or + navigation — never by scrolling a 1000-row initial view +- Use `k` and `M` abbreviation for large numbers (373k, 1.2M) — always with tooltip + showing the exact value + +### 6.4 Colour and status +- Status is always **colour + icon + label**, never colour alone (WCAG A) +- Use the PULSE token palette only: + - Healthy/Success → Emerald-500 `#10B981` + - Info/Running → Blue-500 `#3B82F6` + - Warning/Slow → Amber-500 `#F59E0B` + - Danger/Error → Red-500 `#EF4444` + - Idle/Neutral → Gray-300 `#D1D5DB` +- Badges use `bg-{color}-50` + `text-{color}-700` +- Brand colour Indigo-500 `#6366F1` for primary CTAs only + +### 6.5 Typography +- UI: **Inter** · Mono: **JetBrains Mono** (timestamps, IDs, watermarks, hashes) +- Scale: H1 24/600 · H2 18/600 · H3 14/500 · Body 14/400 · Small 12/400 · KPI 28/700 · Mono 13/400 +- Line-height 1.5 for body, 1.2 for headings, 1.0 for mono numbers in tables + +### 6.6 Geometry +- Card radius **12px**, button radius **8px**, badge radius **full** +- Shadow default `0 1px 3px rgba(0,0,0,0.05)`, elevated `0 4px 12px 
rgba(0,0,0,0.08)` +- Grid gap **24px**, card padding **20px**, inner gap **16px** + +### 6.7 Motion +- Skeleton shimmer **800ms** then fade-in 150ms +- Hover transitions 150ms ease-out +- Drawer open 200ms ease-out, close 150ms ease-in +- All motion wrapped in `@media (prefers-reduced-motion: reduce)` + +### 6.8 States — every screen must define all 6 +1. **Loading** — skeletons, never spinners, preserving geometry (no layout shift) +2. **Empty** — dignified hero with next action, not zeros +3. **Healthy / steady** — the default +4. **Degraded / warning** — 1 element off-nominal, rest of page informative +5. **Error** — banner + retry affordance, keep the user in context +6. **Partial / backfilling** — progress per step, ETA, live counters + +### 6.9 Drill-down patterns +- Prefer **non-modal drawers** over modals for investigation (user can cross-reference) +- Drawer width: 480–560px on desktop, full-screen on mobile +- Drawer trap-focus, Esc closes, first focusable element auto-focuses on open +- Keep the underlying page **visible and interactive** when drawer is open + +### 6.10 Tables and lists +- Sticky header, sortable columns with clear affordance, filter bar inline above +- Numeric columns right-aligned, monospaced font +- Status columns show glyph + text, never just colour +- Row hover highlight; row click → drawer (not full navigation) unless the item is + itself a page +- Empty-search state distinct from empty-data state + +### 6.11 Charts +- Sparklines inline with KPIs (60×20 px) — no axes, no labels, tooltip on hover +- Time series: line chart for trends, bar for counts, donut only for ≤5 segments +- Never 3D, never pie for time-series, never smooth splines that hide true data points +- Chart tooltips: white bg, subtle shadow, mono font for numbers +- Consistent time axis direction across the whole product (left = older) + +### 6.12 Forms +- Label on top, help text below input, error below help text +- Submit button right-aligned (or full-width on 
mobile)
+- Inline validation on blur, not on each keystroke
+- Destructive actions require confirmation step with item name typed
+
+### 6.13 Copy (PT-BR default)
+- Direct, professional, confident. No emoji. No “Tudo certinho!”
+- Status names: `Saudável`, `Atenção`, `Erro`, `Backfill em andamento`
+- Use mono for timestamps: `2026-04-16 13:22 UTC (há 2min)`
+- Error messages state: what happened, what to do, when it will be retried
+
+### 6.14 Accessibility (WCAG AA floor)
+- Contrast ≥ 4.5:1 for body text, ≥ 3:1 for large text (24px+, or 18.5px+ bold; WCAG's 18pt/14pt thresholds)
+- Focus rings visible (2px, Indigo-500, offset 2px), never removed
+- Every interactive element keyboard-reachable, visible focus order
+- `aria-live="polite"` for timelines, `role="status"` for KPIs that update
+- Alt text on icons that convey meaning; `aria-hidden` on decorative icons
+- All widgets meet at least one of: Radix primitives, native HTML, documented ARIA
+
+### 6.15 Anti-surveillance
+- Never name individual authors in contexts of delay, slowness, build failure, etc.
+- All aggregations default to team/repo/project — author-level only when the
+  persona explicitly owns their own view (“my PRs”)
+- When analytics require individual data, gate it behind explicit role (RBAC)
+
+### 6.16 Performance and perceived speed
+- Initial content paint within 800ms on broadband (use skeletons to hit this)
+- Virtualise any list >100 items
+- Lazy-load below-the-fold charts
+- Memoise expensive components; never re-render the whole page on drawer open
+
+### 6.17 Consistency across the product
+- Same filter-bar component on every dashboard page (Team + Period)
+- Same KPI strip pattern across DORA, Lean, Sprint pages
+- Same drawer pattern for drill-down across all list screens
+- Same empty-state pattern with the same illustration set (Lucide icons)
+
+---
+
+## 7. 
REFERENCE BENCHMARKS (use as anchors, never copy pixel-for-pixel) + +| Product | What to borrow | What to avoid | +|---|---|---| +| **Linear** | Keyboard-first, dense but breathable, hierarchy | Opinionated palette may not suit data tools | +| **Vercel** | Status glyphs + duration + clean deployment list | Deploy-centric, not pipeline-continuous | +| **Datadog** | Heatmaps, dashboards per service, timelines | Excessive density can overwhelm non-operators | +| **Databricks Lakeflow** | DAG + List + Matrix alt views, SLAs explicit | Typography too small by default | +| **Dagster Dagit** | Asset-focused lineage, step-level rerun | Steep learning curve for EMs | +| **Fivetran** | Connector status, watermark/cursor explicit | Shallow drill-down | +| **GitHub Actions** | Vertical step list with timing, live log below | Task-centric, does not scale horizontally | +| **Honeycomb BubbleUp** | Surface the outlier that explains the anomaly | Requires trace mental model | +| **Snowflake Snowpipe** | Ingest-to-query lag as first-class KPI | CLI-first — not a visual anchor | +| **Shopify / Stripe Dashboard** | Multi-dimensional KPI strips, steady editorial tone | Commerce-biased patterns | + +Editorial synthesis for PULSE: **Databricks Lakeflow with reduced density + +Vercel-style glyphs + Datadog-style timelines + Linear-level keyboard ergonomics.** +Avoid animated DAGs at page centre — they impress in demo and collapse at customer scale. + +--- + +## 8. INPUT CONTRACT (what the user must provide or you must infer) + +Before producing concepts, confirm (or assume and declare): + +- **Page/journey in scope** — name, current location (prototype path, production route) +- **Primary persona** — one of Carlos, Ana, Marina, Priya, Roberto, Lucas +- **Top JTBD** — the single decision the page must unlock +- **Critical states** — which states matter most (empty, backfill, incident, etc.) 
+- **Data available** — endpoints, fields, cadence +- **Scale constraints** — real customer numbers (default to Webmotors scale) +- **Release tag** — MVP / R1 / R2 / R3 / R4 +- **Anti-surveillance boundary** — explicit confirmation author-level is off-limits + +If any of these is missing, state the assumption in the deliverable and proceed. + +--- + +## 9. OUTPUT CHECKLIST (self-review before you return) + +Refuse to finalise until every item passes: + +- [ ] Three distinct editorial concepts, each with screenshot + thesis + 2 limitations +- [ ] Final recommendation with exactly three pre-dev adjustments named +- [ ] All 6 states (loading, empty, healthy, degraded, error, partial) addressed +- [ ] Responsive spec for desktop / tablet / mobile +- [ ] Real-scale validation (worst-case customer numbers do not break layout) +- [ ] Anti-surveillance check: no individual author in any perf/delay context +- [ ] WCAG AA: contrast, labels, focus, reduced-motion +- [ ] PULSE design tokens used exclusively, no hardcoded hex +- [ ] Copy in PT-BR, direct and professional, no emoji +- [ ] **Deliverable 1**: runnable HTML/CSS/JS code saved under `pulse/pulse-ui/` +- [ ] **Deliverable 2**: implementation spec at `pulse/docs/ux-specs/-impl-spec.md` +- [ ] **Deliverable 3**: FDD backlog at `pulse/docs/backlog/-backlog.md` +- [ ] Each deliverable path returned explicitly to the user + +--- + +## 10. 
ANTI-PATTERNS (do not produce) + +- ❌ Wireframes without editorial reasoning +- ❌ Single-concept proposals (always three) +- ❌ Ignoring real scale (283 repos, 373k issues) — designing for 3 items +- ❌ Colour-only status signalling +- ❌ Individual-author surveillance in any perf/health context +- ❌ Modal dialogs that block investigation — prefer drawers +- ❌ Spinners on the main view — use skeletons +- ❌ Infantilised copy, emoji, exclamation marks +- ❌ 3D charts, >5-segment donuts, pie charts for time-series +- ❌ CTAs that call external APIs to trigger builds/syncs — PULSE is READ-ONLY +- ❌ Hardcoded hex values outside `tokens.css` +- ❌ Output missing any of the three mandatory deliverables diff --git a/.claude/commands/pulse-build.md b/.claude/commands/pulse-build.md index 3312841..ddca8ff 100644 --- a/.claude/commands/pulse-build.md +++ b/.claude/commands/pulse-build.md @@ -9,6 +9,7 @@ Target: **$ARGUMENTS** ## Routing **→ `pulse-product-director`** if: feature spec, persona story, BDD criteria, pricing, UX pattern +**→ `pulse-ux-reviewer`** if: UX/UI review of a page or journey, 3-concept exploration, design editorial hand-off (delivers HTML/CSS/JS + impl spec + FDD backlog) **→ `pulse-frontend`** if: pulse/pulse-ui/, prototype, HTML/CSS/JS, Chart.js, design tokens **→ `pulse-engineer`** if: packages/, NestJS, FastAPI, React+Vite, Docker, CI/CD, migrations, Makefile **→ `pulse-data-engineer`** if: DevLake, Kafka, sync worker, metrics worker, DB schema, pipeline, connectors diff --git a/.claude/commands/pulse-review.md b/.claude/commands/pulse-review.md index b612702..9c230ac 100644 --- a/.claude/commands/pulse-review.md +++ b/.claude/commands/pulse-review.md @@ -6,6 +6,7 @@ argument-hint: [path-or-review-type] (e.g., "pulse/pulse-ui/", "packages/", "sec ## Routing - **pulse/pulse-ui/** → `pulse-frontend`: design tokens, semantic HTML, a11y, skeleton states +- **ux / ui / design** → `pulse-ux-reviewer`: page-level UX/UI review — 3 editorial concepts, impl spec 
hand-off, FDD backlog (prefer `/pulse-ux-review `) - **packages/** → `pulse-engineer`: architecture compliance, TypeScript strict, Python types, DDD boundaries - **security** → `pulse-ciso`: RLS, secrets, headers, container security, metadata-only enforcement - **data-quality** → `pulse-data-engineer`: pipeline idempotency, schema versioning, data observability diff --git a/.claude/commands/pulse-ux-review.md b/.claude/commands/pulse-ux-review.md new file mode 100644 index 0000000..d9dccf2 --- /dev/null +++ b/.claude/commands/pulse-ux-review.md @@ -0,0 +1,85 @@ +--- +description: Review or redesign the UX/UI of a PULSE page, journey, component or state. Delegates to pulse-ux-reviewer for editorial concepts + implementation spec + FDD backlog. +argument-hint: (e.g., "Pipeline Monitor", "DORA dashboard", "Jira Settings", "Onboarding flow", "Home") +--- + +# UX/UI Review: **$ARGUMENTS** + +Delegate to **`pulse-ux-reviewer`** to produce a principal-level UX/UI review of +`$ARGUMENTS` with the three mandatory deliverables. + +## Inputs the agent must confirm (or explicitly assume) + +1. **Page / journey in scope** — from `$ARGUMENTS`; locate current prototype under + `pulse/pulse-ui/` and/or production route under `pulse/packages/pulse-web/src/routes/` +2. **Primary persona** — Carlos (EM) / Ana (CTO) / Marina (Sr Dev) / Priya (Agile + Coach) / Roberto (CFO) / Lucas (Data Platform) +3. **Top job-to-be-done** — the single decision the page must unlock +4. **Release tag** — MVP / R1 / R2 / R3 / R4 +5. **Critical states** — which of loading / empty / healthy / degraded / error / partial + matter most for this page +6. **Data contract** — endpoints, fields, cadence; default to what is shipped in the + production API unless the spec says otherwise +7. **Scale** — assume Webmotors worst case (283 repos, 69 Jira projects, 577 Jenkins + jobs, 373k issues, 63k PRs) unless told otherwise +8. 
**Anti-surveillance boundary** — confirm author-level data is off-limits + +If any input is missing, state the assumption in the deliverable and proceed. + +## Expected output (always, in this order) + +### A. Three editorial concepts +Each concept comes with: +- Hi-fi screenshot (desktop ≥1280px) +- Hi-fi screenshot of **one** critical alternative state (empty / degraded / incident) +- Drawer or detail view if the concept relies on drill-down +- One responsive view (mobile OR tablet) +- Editorial thesis (3–5 lines) +- Two known limitations + +### B. Final recommendation +- Which concept wins and why +- **Three** pre-dev adjustments to the winning concept, each with a short justification + +### C. Three mandatory deliverables +1. **Updated frontend code (HTML + CSS + JS)** saved under + `pulse/pulse-ui//` — runnable, uses tokens.css, BEM, ES modules, WCAG AA, + all states, responsive +2. **Implementation spec** at `pulse/docs/ux-specs/-impl-spec.md` — the + contract handed to `pulse-engineer` for componentisation against the design system +3. 
**FDD backlog** at `pulse/docs/backlog/-backlog.md` — feature cards with + BDD criteria, persona, release, priority, dependencies, estimate, analytics events + +## Constraints the agent must respect + +- PULSE is **READ-ONLY** against external systems — never design CTAs that trigger + external builds/syncs +- **Anti-surveillance** is non-negotiable — no individual-author data in perf/delay + contexts +- **Real scale** — never design for 3-item lists that collapse at customer scale +- **WCAG AA floor** — status is always colour + icon + label +- **Copy in PT-BR** — direct, professional, no emoji +- **Tokens only** — no hardcoded hex values outside `tokens.css` + +## Self-review checklist (agent must confirm all ticks before returning) + +- [ ] 3 concepts × (screenshot + alt-state + drawer + responsive + thesis + 2 limitations) +- [ ] Final recommendation + exactly 3 pre-dev adjustments +- [ ] All 6 states (loading / empty / healthy / degraded / error / partial) addressed +- [ ] Responsive spec for desktop / tablet / mobile +- [ ] Real-scale validation — layout survives worst-case customer numbers +- [ ] Anti-surveillance — no individual author in any perf/health context +- [ ] WCAG AA — contrast, labels, focus order, reduced-motion +- [ ] PULSE tokens only — no hardcoded hex +- [ ] PT-BR copy, professional tone +- [ ] Deliverable 1 (code) path returned to user +- [ ] Deliverable 2 (impl spec) path returned to user +- [ ] Deliverable 3 (FDD backlog) path returned to user + +## Hand-off + +Once the review is complete, the user will typically route: +- The **impl spec** to `pulse-engineer` for React + design system implementation +- The **FDD backlog** to `pulse-product-director` for release planning and + prioritisation against MVP / R1–R4 roadmap +- The **prototype code** stays in `pulse/pulse-ui/` as the reference for visual QA diff --git a/.claude/launch.json b/.claude/launch.json new file mode 100644 index 0000000..fd02d96 --- /dev/null +++ b/.claude/launch.json 
@@ -0,0 +1,12 @@ +{ + "version": "0.0.1", + "configurations": [ + { + "name": "pulse-web", + "runtimeExecutable": "npm", + "runtimeArgs": ["run", "dev"], + "port": 5173, + "cwd": "pulse/packages/pulse-web" + } + ] +} \ No newline at end of file diff --git a/.githooks/pre-commit b/.githooks/pre-commit new file mode 100755 index 0000000..049dc82 --- /dev/null +++ b/.githooks/pre-commit @@ -0,0 +1,63 @@ +#!/usr/bin/env bash +# +# PULSE pre-commit hook +# -------------------------------------------------------------------------- +# Runs secret-scanning on the staged diff before every commit. +# If a secret is detected, the commit is rejected and the offending +# finding is printed to stderr. +# +# Enable once per clone: +# git config core.hooksPath .githooks +# +# Bypass (only if you are ABSOLUTELY sure it is a false positive and you +# cannot add an allowlist entry in time): +# git commit --no-verify +# (Prefer fixing .gitleaks.toml over bypassing the hook.) +# -------------------------------------------------------------------------- + +set -euo pipefail + +# ------------------------------------------------------------------ colors +if [ -t 2 ]; then + RED=$'\033[31m'; YEL=$'\033[33m'; GRN=$'\033[32m'; DIM=$'\033[2m'; RST=$'\033[0m' +else + RED=""; YEL=""; GRN=""; DIM=""; RST="" +fi + +# ------------------------------------------------------------------ gitleaks +if ! command -v gitleaks >/dev/null 2>&1; then + echo "${YEL}[pre-commit]${RST} gitleaks not installed — skipping secret scan." + echo " Install: ${DIM}brew install gitleaks${RST}" + echo " Without it, nothing prevents an API key from entering git history." + exit 0 +fi + +REPO_ROOT="$(git rev-parse --show-toplevel)" +CONFIG="${REPO_ROOT}/.gitleaks.toml" + +CONFIG_ARGS=() +if [ ! -f "${CONFIG}" ]; then + echo "${YEL}[pre-commit]${RST} .gitleaks.toml not found at repo root — running with defaults." 
+else
+  CONFIG_ARGS=(--config "${CONFIG}")
+fi
+
+echo "${DIM}[pre-commit] scanning staged changes with gitleaks...${RST}"
+
+# `protect --staged` scans only the staged diff — fast and scoped. The
+# `${arr[@]+...}` expansion keeps `set -u` happy on bash < 4.4 when CONFIG_ARGS is empty.
+if ! gitleaks protect --staged --redact ${CONFIG_ARGS[@]+"${CONFIG_ARGS[@]}"} --verbose 2>&1; then
+  echo ""
+  echo "${RED}✖ gitleaks found one or more secrets in your staged changes.${RST}"
+  echo ""
+  echo "Options:"
+  echo "  1. Remove the secret from the staged files and rotate it if it was real."
+  echo "  2. If it is a false positive, add an allowlist entry in .gitleaks.toml"
+  echo "     and commit the config change first."
+  echo "  3. As a last resort (e.g. offline work, CI will catch it): commit with"
+  echo "     ${DIM}git commit --no-verify${RST} — but you are on the hook if it leaks."
+  echo ""
+  exit 1
+fi
+
+echo "${GRN}✓ no secrets detected${RST}"
diff --git a/.github/workflows/README.md b/.github/workflows/README.md
new file mode 100644
index 0000000..abb546d
--- /dev/null
+++ b/.github/workflows/README.md
@@ -0,0 +1,42 @@
+# GitHub Actions workflows — root vs pulse/
+
+This repo has workflows in **two** locations. The split is intentional.
+
+## `/.github/workflows/` (this directory — **ACTIVE**)
+
+Runs on every push + PR. These are the real gates enforced by branch
+protection. Scope is the full monorepo (root-level).
+
+| File | Trigger | What it does |
+|---|---|---|
+| `ci.yml` | PR + push to main/develop | Gitleaks secrets scan, ESLint + TSC pulse-web, Vitest (139+ tests incl. contract), Vite build |
+| `e2e-a11y.yml` | manual + nightly cron | Playwright smoke + axe-core a11y. No-op until backend CI infra is wired — see testing-playbook.md §8.8 |
+
+## `/pulse/.github/workflows/` (sub-directory — **DORMANT**)
+
+Workflows prepared for the day `pulse/` is extracted into its own git
+repo (SaaS productization). They expect `pulse/` to be the repo root, so
+`cd packages/...` works directly. 
They do **not** run today because +GitHub Actions only looks at `.github/workflows/` at the actual repo +root. + +| File | Purpose | +|---|---| +| `ci.yml` | Full backend + frontend CI (Jest, Pytest with anti-surveillance gate, Docker builds) — runs when pulse/ is standalone | +| `deploy.yml` | Release rollout template (manual dispatch) — TODO steps for kubectl/ECS | + +When you extract `pulse/` to its own repo, `git mv pulse/.github/workflows/*.yml +.github/workflows/` and delete these root workflows. + +## Branch protection (set once in GitHub Settings) + +For `ci.yml` to actually block merges, turn on branch protection for +`main` (and `develop` if used) with these required status checks: + +- `Secrets scan (gitleaks)` +- `Lint & typecheck (pulse-web)` +- `Unit tests (pulse-web Vitest)` +- `Build (pulse-web Vite)` + +UI path: Settings → Branches → Branch protection rules → Add rule → +"Require status checks to pass before merging" → pick the 4 above. diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml new file mode 100644 index 0000000..e9ed8d7 --- /dev/null +++ b/.github/workflows/ci.yml @@ -0,0 +1,173 @@ +name: CI + +# Root-level GitHub Actions workflow. Runs on every PR and push to +# main/develop. Matches the Sprint 1.2 quality gates established locally +# (Vitest unit + contract, ESLint, Gitleaks secrets scan) so regressions +# are caught before merge. +# +# Scope note: THIS workflow is frontend + repo-wide only. Backend CI +# (pulse-api Jest, pulse-data Pytest, anti-surveillance gate, Docker builds) +# lives at pulse/.github/workflows/ci.yml and runs when pulse/ is extracted +# into its own repo. See .github/workflows/README.md in the commit message +# for the divided-ownership rationale. +# +# E2E + a11y specs need a live backend (docker compose) and are triggered +# manually via the separate `.github/workflows/e2e-a11y.yml` workflow +# (workflow_dispatch). Once backend CI infra is ready, they'll join this +# pipeline. 
+ +on: + pull_request: + branches: [main, develop] + push: + branches: [main, develop] + +concurrency: + # Cancel in-progress runs for the same branch on new pushes, but NEVER + # cancel runs on main/develop (they produce artifacts and deploy signals). + group: ci-${{ github.ref }} + cancel-in-progress: ${{ github.ref != 'refs/heads/main' && github.ref != 'refs/heads/develop' }} + +env: + NODE_VERSION: "20" + +permissions: + contents: read + +jobs: + # -------------------------------------------------------------------------- + # Secrets scanning — runs first, fast, against full repo (not just diff) + # -------------------------------------------------------------------------- + secrets-scan: + name: Secrets scan (gitleaks) + runs-on: ubuntu-latest + timeout-minutes: 5 + steps: + - name: Checkout (full history) + uses: actions/checkout@v4 + with: + # Full history so gitleaks can scan all commits, not just the shallow diff. + fetch-depth: 0 + + - name: Run gitleaks + # Pinned to a major tag for reproducibility. The action uses the + # .gitleaks.toml at repo root (our config with PULSE-specific rules + # and allowlist for .env / lockfiles / tests/fixtures). + uses: gitleaks/gitleaks-action@v2 + env: + # GITHUB_TOKEN is used only to comment on PRs if findings exist — + # no write permissions needed beyond that. 
+          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+          GITLEAKS_CONFIG: ./.gitleaks.toml
+
+  # --------------------------------------------------------------------------
+  # Frontend lint (ESLint + TypeScript)
+  # --------------------------------------------------------------------------
+  lint-web:
+    name: Lint & typecheck (pulse-web)
+    runs-on: ubuntu-latest
+    timeout-minutes: 10
+    defaults:
+      run:
+        working-directory: pulse/packages/pulse-web
+    steps:
+      - uses: actions/checkout@v4
+
+      - uses: actions/setup-node@v4
+        with:
+          node-version: ${{ env.NODE_VERSION }}
+          cache: npm
+          cache-dependency-path: pulse/packages/pulse-web/package-lock.json
+
+      - name: Install pulse-shared (sibling dep)
+        working-directory: pulse/packages/pulse-shared
+        run: npm ci
+
+      - name: Install pulse-web
+        run: npm ci
+
+      - name: ESLint
+        run: npm run lint
+
+      - name: TypeScript (strict)
+        # `tsc -b` validates the project references tree. Build mode does
+        # not accept `--noEmit`, so run it plain; any emitted declarations
+        # are discarded with the runner.
+        run: npx tsc -b
+
+  # --------------------------------------------------------------------------
+  # Frontend unit tests (Vitest) — includes component + hook + contract +
+  # anti-surveillance meta-test. 139+ tests as of Sprint 1.2 step 3.
+  # --------------------------------------------------------------------------
+  test-unit-web:
+    name: Unit tests (pulse-web Vitest)
+    runs-on: ubuntu-latest
+    timeout-minutes: 10
+    defaults:
+      run:
+        working-directory: pulse/packages/pulse-web
+    steps:
+      - uses: actions/checkout@v4
+
+      - uses: actions/setup-node@v4
+        with:
+          node-version: ${{ env.NODE_VERSION }}
+          cache: npm
+          cache-dependency-path: pulse/packages/pulse-web/package-lock.json
+
+      - name: Install pulse-shared (sibling dep)
+        working-directory: pulse/packages/pulse-shared
+        run: npm ci
+
+      - name: Install pulse-web
+        run: npm ci
+
+      - name: Vitest (run mode, coverage)
+        # `--run` = no watch, `--coverage` = v8 coverage, output to coverage/.
+        # Contract tests skip cleanly when backend is offline (CI has none).
+ run: npm run test:coverage + + - name: Upload coverage + if: always() + uses: actions/upload-artifact@v4 + with: + name: coverage-pulse-web + path: pulse/packages/pulse-web/coverage/ + retention-days: 7 + + # -------------------------------------------------------------------------- + # Frontend build (Vite) — catches type errors that only surface at build time + # -------------------------------------------------------------------------- + build-web: + name: Build (pulse-web Vite) + runs-on: ubuntu-latest + timeout-minutes: 10 + needs: [lint-web, test-unit-web] + defaults: + run: + working-directory: pulse/packages/pulse-web + steps: + - uses: actions/checkout@v4 + + - uses: actions/setup-node@v4 + with: + node-version: ${{ env.NODE_VERSION }} + cache: npm + cache-dependency-path: pulse/packages/pulse-web/package-lock.json + + - name: Install pulse-shared + build (generates dist/ used by pulse-web) + working-directory: pulse/packages/pulse-shared + run: | + npm ci + npm run build + + - name: Install pulse-web + run: npm ci + + - name: Build + run: npm run build + + - name: Upload build artifact + uses: actions/upload-artifact@v4 + with: + name: pulse-web-dist + path: pulse/packages/pulse-web/dist/ + retention-days: 3 diff --git a/.github/workflows/e2e-a11y.yml b/.github/workflows/e2e-a11y.yml new file mode 100644 index 0000000..701691e --- /dev/null +++ b/.github/workflows/e2e-a11y.yml @@ -0,0 +1,109 @@ +name: E2E + A11y (manual) + +# Playwright E2E smoke + axe-core a11y gate. These tests need a live +# backend (docker compose) and are heavier to run, so they're NOT wired +# as blocking PR gates yet — promote to ci.yml once the backend-in-CI +# infrastructure is ready (docker compose up, migrations, DevLake seed, +# secret plumbing). 
+# +# Triggered: +# - workflow_dispatch (manually from the Actions tab) +# - schedule (nightly at 03:00 UTC — proves the suite stays green) +# +# When green, the specs under tests/e2e/ exercise: +# - home-dashboard-smoke.spec.ts (1 smoke E2E) +# - a11y/{home,dora,cycle-time}.spec.ts (WCAG 2.1 AA gate, 3 pages) + +on: + workflow_dispatch: + inputs: + suite: + description: "Which suite to run" + required: true + default: "all" + type: choice + options: + - all + - smoke + - a11y + schedule: + # 03:00 UTC daily — off-hours for the ops team. + - cron: "0 3 * * *" + +concurrency: + group: e2e-a11y-${{ github.ref }} + cancel-in-progress: true + +env: + NODE_VERSION: "20" + +permissions: + contents: read + +jobs: + playwright: + name: Playwright (${{ github.event.inputs.suite || 'all' }}) + runs-on: ubuntu-latest + timeout-minutes: 30 + defaults: + run: + working-directory: pulse/packages/pulse-web + steps: + - uses: actions/checkout@v4 + + - uses: actions/setup-node@v4 + with: + node-version: ${{ env.NODE_VERSION }} + cache: npm + cache-dependency-path: pulse/packages/pulse-web/package-lock.json + + - name: Install pulse-shared + working-directory: pulse/packages/pulse-shared + run: npm ci + + - name: Install pulse-web + run: npm ci + + - name: Cache Playwright browsers + id: playwright-cache + uses: actions/cache@v4 + with: + path: ~/.cache/ms-playwright + key: playwright-${{ runner.os }}-${{ hashFiles('pulse/packages/pulse-web/package-lock.json') }} + + - name: Install Playwright browsers + if: steps.playwright-cache.outputs.cache-hit != 'true' + run: npx playwright install --with-deps chromium firefox + + # TODO: start backend (docker compose up -d) once CI-backend infra is wired. + # Until then, the specs detect an unreachable dev server and skip gracefully + # (see devServerIsDown helper). The run still "passes" but effectively + # no-ops — surfacing a warning in the summary below keeps it honest. 
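The graceful-skip behaviour described in the comment above depends on detecting an unreachable backend. As a standalone sketch of such a probe — the health URL is a placeholder, and the real detection lives in the specs' `devServerIsDown` helper, not in the workflow:

```shell
# Placeholder URL — the real dev-server address comes from Playwright config.
BASE_URL="${BASE_URL:-http://127.0.0.1:9}"

# -f = fail on HTTP errors, -s = silent, short timeout so CI never hangs.
if curl -fs --max-time 3 "$BASE_URL/health" >/dev/null 2>&1; then
  BACKEND_UP=1
  echo "backend reachable at $BASE_URL — E2E specs will run"
else
  BACKEND_UP=0
  echo "backend unreachable at $BASE_URL — E2E specs will skip"
fi
```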
+ - name: Skip notice (backend not yet provisioned in CI) + run: | + echo "::warning ::E2E + a11y specs will skip because no backend is running in this CI context." + echo "::warning ::Promote this workflow once docker compose is wired. See testing-playbook.md §8.8." + + - name: Playwright — smoke + if: github.event.inputs.suite == 'smoke' || github.event.inputs.suite == 'all' || github.event.inputs.suite == '' + run: npx playwright test tests/e2e/platform --project=chromium + + - name: Playwright — a11y + if: github.event.inputs.suite == 'a11y' || github.event.inputs.suite == 'all' || github.event.inputs.suite == '' + run: npm run test:a11y + + - name: Upload Playwright report + if: always() + uses: actions/upload-artifact@v4 + with: + name: playwright-report + path: pulse/packages/pulse-web/playwright-report/ + retention-days: 14 + + - name: Upload test-results (traces, screenshots) + if: failure() + uses: actions/upload-artifact@v4 + with: + name: playwright-test-results + path: pulse/packages/pulse-web/test-results/ + retention-days: 7 diff --git a/.gitignore b/.gitignore index 36e3bbe..8d180d7 100644 --- a/.gitignore +++ b/.gitignore @@ -40,6 +40,12 @@ coverage/ .nyc_output/ htmlcov/ +# === Playwright artifacts === +pulse/packages/pulse-web/playwright-report/ +pulse/packages/pulse-web/test-results/ +pulse/packages/pulse-web/blob-report/ +pulse/packages/pulse-web/playwright/.cache/ + # === Logs === *.log npm-debug.log* @@ -50,6 +56,8 @@ npm-debug.log* # === Claude Code === .claude/settings.local.json +.claude/scheduled_tasks.lock +.claude/projects/ # === Factory (read-only reference, not deployed) === # factory/ is tracked but treated as read-only reference docs diff --git a/.gitleaks.toml b/.gitleaks.toml new file mode 100644 index 0000000..14c99b1 --- /dev/null +++ b/.gitleaks.toml @@ -0,0 +1,84 @@ +# Gitleaks configuration for PULSE +# https://github.com/gitleaks/gitleaks +# +# Purpose: block secrets (API tokens, keys, passwords) from entering the repo. 
+# Runs in two modes: +# - pre-commit hook: `gitleaks protect --staged` (scans staged diff) +# - CI / periodic: `gitleaks detect` (scans full history) +# +# Add a new custom rule here when introducing a new token format specific to +# PULSE/Webmotors. Add a path/regex to [allowlist] only when a finding is a +# verified false positive (e.g. fixture token in a test file). + +# Inherit the built-in ruleset (AWS, GitHub, Atlassian, Slack, Stripe, etc.) +[extend] +useDefault = true + +# --------------------------------------------------------------------------- +# PULSE-specific rules +# --------------------------------------------------------------------------- + +[[rules]] +id = "pulse-internal-api-token" +description = "PULSE internal admin API token (INTERNAL_API_TOKEN)" +# Matches assignments like INTERNAL_API_TOKEN=abc123 / "INTERNAL_API_TOKEN": "abc123" +regex = '''(?i)internal[_-]?api[_-]?token['"]?\s*[:=]\s*['"]?([A-Za-z0-9_\-]{20,})''' +secretGroup = 1 +keywords = ["internal_api_token", "internal-api-token", "internalapitoken"] + +[[rules]] +id = "pulse-devlake-db-password" +description = "DevLake/PostgreSQL password assignment" +regex = '''(?i)(DEVLAKE_DB_URL|POSTGRES_PASSWORD|DB_PASSWORD)\s*=\s*['"]?([^'"\s]{8,})''' +secretGroup = 2 +keywords = ["devlake_db_url", "postgres_password", "db_password"] + +# --------------------------------------------------------------------------- +# Allowlist — false positives and intentionally checked-in samples +# --------------------------------------------------------------------------- + +[allowlist] +description = "Files and paths that legitimately contain secret-like patterns" + +# Files that are .gitignored but may still be scanned in full-repo mode. +# These are local-only artifacts — they cannot enter git regardless. 
+paths = [ + '''(.*/)?\.env$''', + '''(.*/)?\.env\..*''', + '''\.claude/settings\.local\.json''', + '''\.claude/scheduled_tasks\.lock''', + '''pulse/postgres-data/''', + '''pulse/redis-data/''', + '''pulse/devlake-data/''', + '''node_modules/''', + '''\.venv/''', + '''dist/''', + '''coverage/''', + # Test fixtures can contain obviously-fake tokens + '''(.*/)?(tests|test|__tests__|fixtures|mocks)/.*''', + '''(.*/)?\.snap$''', + # Lockfiles often carry integrity hashes that look like secrets + '''(.*/)?package-lock\.json$''', + '''(.*/)?pnpm-lock\.yaml$''', + '''(.*/)?yarn\.lock$''', + '''(.*/)?poetry\.lock$''', + '''(.*/)?uv\.lock$''', + # Documentation may contain example tokens (always fake) + '''(.*/)?\.env\.example$''', + '''(.*/)?testing-playbook\.md$''', +] + +# Regexes that match known-safe patterns (example tokens in docs, UUIDs used as tenant IDs, etc.) +regexes = [ + # Example/placeholder tokens commonly used in docs + '''(?i)(example|sample|placeholder|fake|dummy|xxx+|your[-_]token|changeme)''', + # Standard tenant UUID used across dev fixtures + '''00000000-0000-0000-0000-000000000001''', + # GitHub/Jira "ghp_xxxxxxxx" or "ATATT_xxx" placeholders + '''ghp_[x]{10,}''', + '''ATATT[x]{5,}''', + # Shell / Makefile variable references inside curl -u / auth headers — these + # are NOT secrets, just variable expansions. Patterns: + # $VAR, ${VAR}, $$VAR (Make double-dollar escape) + '''\$\$?\{?[A-Z][A-Z0-9_]+\}?''', +] diff --git a/CLAUDE.md b/CLAUDE.md index f184cb3..c8214ed 100644 --- a/CLAUDE.md +++ b/CLAUDE.md @@ -4,6 +4,16 @@ **NEVER modify, trigger, create, delete, or execute ANY action on external systems (Jenkins, Jira, GitHub, DevLake instances, etc.) in production or staging environments.** PULSE agents are READ-ONLY consumers of external systems. All interactions with Jenkins, Jira, GitHub APIs etc. must be limited to **read/query operations only** (GET requests, API reads, listing jobs, fetching build info). 
Never POST, PUT, DELETE, or trigger builds/pipelines/deployments on any external system. +**NEVER accept or handle raw secret values pasted into the chat.** If the user pastes an API token, password, database URL, private key, or any other credential directly into the conversation: +1. Refuse to use it or write it to a file on the user's behalf. +2. Warn the user that the value is now compromised (chat history + provider logs) and must be **revoked immediately** at the source (GitHub, Atlassian, AWS, etc.). +3. Instruct the user to edit `pulse/.env` **themselves** in their own editor and then run `make rotate-secrets` + `make check-secrets` from `pulse/`. +4. Offer to verify the rotation via diagnostic commands that **never print the secret value** (only HTTP status codes, log lines, prefix-only checks via `cut -c1-25`). + +This rule applies even if the user insists, claims the token is "test only", or says "I already revoked the old one, just use this one". The correct action is always: **refuse to touch the value, point to the runbook at `pulse/docs/testing-playbook.md` §8.9, and help validate via read-only diagnostics after the user has edited `.env` themselves**. + +The `.gitleaks.toml` + `.githooks/pre-commit` hooks (shipped Sprint 1.2 step 5) block secrets from entering git, but cannot block secrets from entering conversation history — that's a human-process gate, enforced here. + ## Project Overview PULSE is an Engineering Intelligence SaaS providing DORA, Lean/Agile, and Sprint analytics. The project has two parallel workstreams: a high-fidelity HTML/CSS/JS prototype and a full production stack. @@ -42,37 +52,38 @@ PULSE is an Engineering Intelligence SaaS providing DORA, Lean/Agile, and Sprint **CRITICAL PATH RULE:** All code, configs, and application files MUST be created inside `pulse/`. Never write application files to the root or to `factory/`. 
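The prefix-only, status-code-only diagnostic style required by the secrets rule above can be sketched in shell. The token value and file path here are fabricated for illustration; the real values live only in `pulse/.env`:

```shell
# Fake token in a throwaway file — never paste real secrets anywhere.
printf 'GITHUB_TOKEN=ghp_exampleexampleexampleexample1234\n' > /tmp/pulse-env-demo

# Extract the value; `-f2-` preserves values that themselves contain '='.
TOKEN=$(grep '^GITHUB_TOKEN' /tmp/pulse-env-demo | cut -d= -f2-)

# Print at most the first 25 characters — enough to confirm which token
# is loaded, never enough to leak it.
echo "GITHUB_TOKEN prefix: $(printf '%s' "$TOKEN" | cut -c1-25)..."

# Status-code-only auth probe (commented out: needs network + a real token):
# curl -s -o /dev/null -w "%{http_code}\n" \
#   -H "Authorization: Bearer $TOKEN" https://api.github.com/user
```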
-## Team — 7 Specialized Agents +## Team — 8 Specialized Agents ``` - ┌─────────────────────────┐ - │ ORCHESTRATOR │ - │ (main session) │ - │ │ - │ Architecture decisions │ - │ Task breakdown & plan │ - │ Git & PR coordination │ - │ Cross-agent conflicts │ - └────────┬────────────────-┘ - ┌───────────┬──────────┼──────────┬───────────┬──────────┐ - ▼ ▼ ▼ ▼ ▼ ▼ - ┌────────────┐┌──────────┐┌────────┐┌──────────┐┌──────────┐┌────────────┐ - │ product- ││ frontend ││engineer││ data- ││ data- ││ test- │ - │ director ││ ││ ││ engineer ││scientist ││ engineer │ - │ ││HTML/CSS/ ││NestJS ││Pipelines ││Analytics ││QA & auto- │ - │Strategy, ││JS proto ││FastAPI ││DevLake ││ML models ││mation │ - │UX, specs ││Chart.js ││React ││Kafka ││Metrics ││Playwright │ - │pricing ││pulse-ui/ ││Docker ││Schema ││formulas ││coverage │ - └────────────┘└──────────┘└────────┘└──────────┘└──────────┘└────────────┘ - │ - ┌──────────┘ - ▼ - ┌────────────┐ - │ ciso │ - │Security, │ - │compliance, │ - │IAM, WAF │ - └────────────┘ + ┌─────────────────────────┐ + │ ORCHESTRATOR │ + │ (main session) │ + │ │ + │ Architecture decisions │ + │ Task breakdown & plan │ + │ Git & PR coordination │ + │ Cross-agent conflicts │ + └────────┬────────────────-┘ + ┌───────────┬───────────┬───┼──────┬───────────┬──────────┐ + ▼ ▼ ▼ ▼ ▼ ▼ ▼ + ┌────────────┐┌──────────┐┌─────────┐┌────────┐┌──────────┐┌──────────┐┌────────────┐ + │ product- ││ frontend ││ ux- ││engineer││ data- ││ data- ││ test- │ + │ director ││ ││ reviewer ││ ││ engineer ││scientist ││ engineer │ + │ ││HTML/CSS/ ││ ││NestJS ││Pipelines ││Analytics ││QA & auto- │ + │Strategy, ││JS proto ││Principal ││FastAPI ││DevLake ││ML models ││mation │ + │UX, specs ││Chart.js ││designer ││React ││Kafka ││Metrics ││Playwright │ + │pricing ││pulse-ui/ ││concepts+ ││Docker ││Schema ││formulas ││coverage │ + │ ││ ││specs+FDD ││ ││ ││ ││ │ + └────────────┘└──────────┘└─────────┘└────────┘└──────────┘└──────────┘└────────────┘ + │ + ┌──────────┘ + ▼ + ┌────────────┐ 
+ │ ciso │ + │Security, │ + │compliance, │ + │IAM, WAF │ + └────────────┘ ``` ## Routing Rules — FOLLOW STRICTLY @@ -95,6 +106,18 @@ PULSE is an Engineering Intelligence SaaS providing DORA, Lean/Agile, and Sprint - Skeleton states, empty states, transitions in the prototype - Accessibility (WCAG AA) in the prototype +### `pulse-ux-reviewer` — Principal designer / UX & UI review (global) +- Review or redesign the UX/UI of any PULSE page, journey, component or state +- Produces **three editorial concepts** + final recommendation with 3 pre-dev adjustments +- Always delivers the three mandatory artefacts: + 1. Runnable frontend code (HTML/CSS/JS) under `pulse/pulse-ui/` + 2. Implementation spec at `pulse/docs/ux-specs/-impl-spec.md` (hand-off to `pulse-engineer`) + 3. FDD backlog at `pulse/docs/backlog/-backlog.md` (hand-off to `pulse-product-director`) +- Enforces: real-scale design (283 repos / 69 projects / 373k issues), WCAG AA, + anti-surveillance, all 6 states (loading / empty / healthy / degraded / error / partial), + responsive (desktop ≥1280 / tablet / mobile), PT-BR copy, tokens-only (no hardcoded hex) +- Invoke via `/pulse-ux-review ` + ### `pulse-engineer` — Full-stack production code - Anything inside `pulse/packages/` - NestJS modules, FastAPI routes, React+Vite components/routes @@ -179,6 +202,15 @@ When delegating to ANY agent, include: 2. `pulse-data-scientist` → Analytics model, visualization recommendation 3. Then implementation agents in sequence above +### UX/UI review or redesign of a page/journey: +1. `pulse-ux-reviewer` → 3 concepts + recommendation + runnable HTML/CSS/JS in + `pulse/pulse-ui/` + impl spec in `pulse/docs/ux-specs/` + FDD backlog in + `pulse/docs/backlog/` +2. `pulse-product-director` → Prioritise the FDD backlog against the release plan +3. `pulse-engineer` → Consume the impl spec, break HTML into design-system + components, implement in `pulse/packages/pulse-web/` +4. 
`pulse-test-engineer` → a11y audit (axe-core), visual regression, E2E journey + ### Security review: 1. `pulse-ciso` → Review architecture, identify risks 2. `pulse-engineer` → Implement security controls diff --git a/pulse/.env.example b/pulse/.env.example index 3db70b8..1123775 100644 --- a/pulse/.env.example +++ b/pulse/.env.example @@ -22,28 +22,28 @@ NODE_ENV=development # -- PULSE Data API (FastAPI) ----------------------------------------------- PULSE_DATA_PORT=8000 - -# -- DevLake ---------------------------------------------------------------- -DEVLAKE_PORT=8080 -DEVLAKE_API_PORT=4000 -DEVLAKE_ENCRYPTION_SECRET=abcdefghijklmnop -DEVLAKE_PG_DB=lake -DEVLAKE_PG_USER=devlake -DEVLAKE_PG_PASSWORD=devlake_dev -DEVLAKE_PG_PORT=5433 - # -- Source Connector Tokens ------------------------------------------------ # GitHub Personal Access Token (repo, read:org scopes) GITHUB_TOKEN= +# GitHub org slug — REQUIRED. Used by discover_repos() — see ingestion-spec §2.3. +GITHUB_ORG= # GitLab Personal Access Token (read_api scope) GITLAB_TOKEN= # Jira API Token + email JIRA_API_TOKEN= JIRA_EMAIL= +# Jira base URL (e.g., https://your-org.atlassian.net) +JIRA_BASE_URL= +# JIRA_PROJECTS is intentionally absent. PULSE uses dynamic discovery +# (ingestion-spec §2.3); the active project list is maintained in +# `jira_project_catalog` table and resolved by ModeResolver. Do NOT add +# JIRA_PROJECTS unless you set DYNAMIC_JIRA_DISCOVERY_ENABLED=false (not +# recommended). 
+DYNAMIC_JIRA_DISCOVERY_ENABLED=true # Azure DevOps Personal Access Token (Code, Work Items read) AZURE_DEVOPS_PAT= # Jenkins API credentials (read-only: Overall/Read, Job/Read, Run/Read) -JENKINS_BASE_URL=https://jenkins.webmotors.com.br +JENKINS_BASE_URL= JENKINS_USERNAME= JENKINS_API_TOKEN= diff --git a/pulse/.github/workflows/ci.yml b/pulse/.github/workflows/ci.yml index b63155d..c3e7f42 100644 --- a/pulse/.github/workflows/ci.yml +++ b/pulse/.github/workflows/ci.yml @@ -102,10 +102,18 @@ jobs: run: cd packages/pulse-api && npm run test -- --coverage --ci continue-on-error: false - - name: Pytest — pulse-data + - name: Pytest — pulse-data (unit + contract, platform only) run: | cd packages/pulse-data - python -m pytest tests/unit -v --junitxml=reports/pytest-unit.xml --cov=src --cov-report=xml:reports/coverage.xml + python -m pytest tests/unit tests/contract -v \ + --junitxml=reports/pytest-unit.xml \ + --cov=src --cov-report=xml:reports/coverage-platform.xml + continue-on-error: false + + - name: Pytest — pulse-data (anti-surveillance gate, must pass) + run: | + cd packages/pulse-data + python -m pytest tests/contract/test_anti_surveillance_schemas.py -v continue-on-error: false - name: Vitest — pulse-web diff --git a/pulse/.github/workflows/deploy.yml b/pulse/.github/workflows/deploy.yml new file mode 100644 index 0000000..90fe841 --- /dev/null +++ b/pulse/.github/workflows/deploy.yml @@ -0,0 +1,171 @@ +name: Deploy + +# FDD-OPS-001 Line 4 — Deploy workflow that FORCES worker restart after +# any code push, closing the loop on "worker is running stale bytecode" +# incidents. See pulse/docs/backlog/ops-backlog.md for the full defense +# plan. +# +# STATUS: TEMPLATE (2026-04-23). +# Today deploy at Webmotors is a manual process (no auto-deploy on merge +# to main). This workflow is a documented skeleton for when deploy is +# automated. The parts marked `# TODO:` need to be wired when the real +# pipeline lands (container registry, cluster credentials, secrets). 
+# The critical middle section — force-restart + health wait + schema +# coherence check — is production-shaped and ready to run. + +on: + workflow_dispatch: + inputs: + environment: + description: "Target environment (gated by GitHub Environment approvals)" + required: true + type: choice + options: + - staging + - production + skip_coherence_check: + description: "Skip post-deploy schema coherence check (break-glass only)" + required: false + type: boolean + default: false + +# Serialize deploys per environment. Never cancel in-flight deploys — +# a half-rolled-out fleet is worse than a delayed one. +concurrency: + group: deploy-${{ github.event.inputs.environment }} + cancel-in-progress: false + +env: + # Workers that MUST be force-restarted after any code change. If a + # new Python service is added, it belongs here. + PYTHON_WORKERS: "pulse-data metrics-worker sync-worker discovery-worker" + +jobs: + deploy: + name: Deploy to ${{ github.event.inputs.environment }} + runs-on: ubuntu-latest + environment: ${{ github.event.inputs.environment }} + timeout-minutes: 20 + + steps: + - name: Checkout + uses: actions/checkout@v4 + + # --------------------------------------------------------------- + # TODO: Build + push images + # --------------------------------------------------------------- + # Replace this with the real registry push once we settle on + # ECR/GHCR/etc. Keep the step name stable so downstream log + # parsers / notifications continue to work. + - name: Build and push images + run: | + echo "::notice::Placeholder build step — wire registry push here" + echo "Image tag: ${GITHUB_SHA}" + # TODO: docker buildx bake + push + + # --------------------------------------------------------------- + # TODO: Roll out new image + # --------------------------------------------------------------- + # In k8s: `kubectl set image deploy/... 
pulse-data=...:${GITHUB_SHA}` + # In ECS: `aws ecs update-service --force-new-deployment` + # Until deploy is automated, this is a no-op. + - name: Roll out image + run: | + echo "::notice::Placeholder rollout step — wire kubectl/ECS/etc here" + + # --------------------------------------------------------------- + # FDD-OPS-001 Line 4 — CORE + # --------------------------------------------------------------- + # Whatever the rollout mechanism is, Python workers MUST restart + # so they import fresh bytecode. A rolling restart is fine; what + # we cannot tolerate is "deploy claimed success but workers kept + # running old code" — which is the root incident this card fixes. + - name: Force-restart Python workers + env: + WORKERS: ${{ env.PYTHON_WORKERS }} + run: | + echo "::group::Force-restart workers (FDD-OPS-001 L4)" + echo "Workers: ${WORKERS}" + # TODO: replace `docker compose restart` with the production- + # appropriate command. Documented templates: + # + # Kubernetes: + # for w in $WORKERS; do + # kubectl -n pulse rollout restart deployment/$w + # kubectl -n pulse rollout status deployment/$w --timeout=5m + # done + # + # ECS: + # for w in $WORKERS; do + # aws ecs update-service --cluster pulse --service $w --force-new-deployment + # aws ecs wait services-stable --cluster pulse --services $w + # done + # + # docker compose (current local baseline): + # docker compose -f pulse/docker-compose.yml restart $WORKERS + echo "::endgroup::" + + - name: Wait for workers healthy + run: | + echo "::group::Health checks" + # TODO: swap for kubectl rollout status / aws ecs wait / health + # endpoint polling against the target environment's load + # balancer. 
Template: + # + # for w in $PYTHON_WORKERS; do + # timeout 300 bash -c "until curl -fs https://$w.${ENV}.pulse/health; do sleep 3; done" + # done + echo "::endgroup::" + + # --------------------------------------------------------------- + # FDD-OPS-001 Line 3 + Line 4 integration — coherence check + # --------------------------------------------------------------- + # After workers come back up, trigger a dry-run recalc. The + # recalc endpoint (Line 2) force-reloads modules; if anything is + # wrong with imports post-deploy we catch it here. Then query + # the schema-drift endpoint (Line 3) to confirm the freshly + # restarted workers are writing complete payloads. + - name: Post-deploy schema coherence check + if: ${{ github.event.inputs.skip_coherence_check != 'true' }} + env: + ADMIN_TOKEN: ${{ secrets.INTERNAL_API_TOKEN }} + API_BASE: ${{ secrets.PULSE_DATA_BASE_URL }} + run: | + if [ -z "${API_BASE}" ] || [ -z "${ADMIN_TOKEN}" ]; then + echo "::warning::Skipping coherence check — secrets not configured yet" + exit 0 + fi + + echo "::group::Dry-run recalc (reloads modules via Line 2)" + curl -fsS -X POST \ + -H "X-Admin-Token: ${ADMIN_TOKEN}" \ + "${API_BASE}/data/v1/admin/metrics/recalculate?metric_type=dora&period=30d&dry_run=true" \ + | tee recalc.json + echo "::endgroup::" + + echo "::group::Schema drift check (Line 3)" + DRIFT=$(curl -fsS "${API_BASE}/data/v1/pipeline/schema-drift?hours=1" \ + | python -c 'import json,sys; print(json.load(sys.stdin).get("total_affected_snapshots", 0))') + echo "total_affected_snapshots=${DRIFT}" + if [ "${DRIFT}" != "0" ]; then + echo "::warning::Schema drift detected after deploy (${DRIFT} snapshots). Investigate — this usually means a worker did not actually restart." + # Decision: alert but don't fail. First iteration is advisory. + # Flip to `exit 1` once the signal is trusted. 
+          fi
+          echo "::endgroup::"
+
+  # -------------------------------------------------------------------
+  # Lint the workflow itself so CI catches YAML regressions.
+  # Runs on every push that touches this file.
+  # -------------------------------------------------------------------
+  lint-workflow:
+    name: Lint deploy workflow
+    runs-on: ubuntu-latest
+    if: github.event_name != 'workflow_dispatch'
+    steps:
+      - uses: actions/checkout@v4
+      - name: Validate with actionlint
+        uses: reviewdog/action-actionlint@v1
+        with:
+          actionlint_flags: -pyflakes= .github/workflows/deploy.yml
+          fail_level: error
diff --git a/pulse/Makefile b/pulse/Makefile
index c35eff5..98f9fff 100644
--- a/pulse/Makefile
+++ b/pulse/Makefile
@@ -7,7 +7,9 @@ COMPOSE_TEST := docker compose -f docker-compose.test.yml
 
 .PHONY: help up down dev logs clean setup \
 	test test-unit test-integration \
-	migrate seed lint fmt build
+	migrate seed lint fmt build \
+	rotate-secrets check-secrets \
+	doctor verify-dev
 
 # --------------------------------------------------------------------------
 # Help
@@ -25,7 +27,6 @@ up: ## Start all backend services (APIs + infra)
 	@echo "Services started. Run 'make dev' in another terminal for frontend."
 	@echo " API: http://localhost:3000"
 	@echo " Data: http://localhost:8000"
-	@echo " DevLake: http://localhost:8080"
 
 down: ## Stop all services
 	$(COMPOSE) down
@@ -93,6 +94,73 @@ build: ## Build all packages
 	cd packages/pulse-web && npm run build
 	$(COMPOSE) build
 
+# --------------------------------------------------------------------------
+# Secret rotation
+#
+# `docker compose restart` restarts the PROCESS inside the existing
+# container and does NOT re-read .env. After changing a secret in .env
+# you must re-create the container. This target runs the correct step:
+#   docker compose up -d --force-recreate
+# which destroys the container and re-creates it, reading .env from disk.
+#
+# Service list: the Python workers that make authenticated calls to
+# source systems (GitHub, Jira, Jenkins, etc.). If you add a new
+# worker/service that reads secrets from .env, add it here as well.
+#
+# Full runbook: docs/testing-playbook.md §8.9
+# --------------------------------------------------------------------------
+rotate-secrets: ## Re-create containers that consume .env after a secret rotation
+	@echo "=== Re-creating containers that read .env ==="
+	$(COMPOSE) up -d --force-recreate sync-worker discovery-worker metrics-worker pulse-data pulse-api
+	@echo ""
+	@echo "=== Done. Verify authentication with: ==="
+	@echo "  make check-secrets"
+
+check-secrets: ## Validate auth to external sources (GitHub, Jira) without exposing the token
+	@echo "=== Checking authentication ==="
+	@if [ ! -f .env ]; then echo "ERROR: pulse/.env does not exist"; exit 1; fi
+	@TOKEN=$$(grep ^GITHUB_TOKEN .env | cut -d= -f2-); \
+	if [ -z "$$TOKEN" ]; then \
+		echo "GITHUB_TOKEN: not set in .env (skip)"; \
+	else \
+		printf "GITHUB /user: HTTP "; \
+		curl -s -o /dev/null -w "%{http_code}\n" -H "Authorization: Bearer $$TOKEN" https://api.github.com/user; \
+		ORG=$$(grep ^GITHUB_ORG .env | cut -d= -f2- | tr -d '"'); \
+		if [ -n "$$ORG" ]; then \
+			printf "GITHUB /orgs/%s/repos: HTTP " "$$ORG"; \
+			curl -s -o /dev/null -w "%{http_code}\n" -H "Authorization: Bearer $$TOKEN" "https://api.github.com/orgs/$$ORG/repos?per_page=1"; \
+		fi; \
+	fi
+	@JIRA_URL=$$(grep ^JIRA_BASE_URL .env 2>/dev/null | cut -d= -f2- | tr -d '"'); \
+	JIRA_USER=$$(grep ^JIRA_EMAIL .env 2>/dev/null | cut -d= -f2- | tr -d '"'); \
+	JIRA_TOKEN=$$(grep ^JIRA_API_TOKEN .env 2>/dev/null | cut -d= -f2-); \
+	if [ -n "$$JIRA_URL" ] && [ -n "$$JIRA_USER" ] && [ -n "$$JIRA_TOKEN" ]; then \
+		printf "JIRA /myself: HTTP "; \
+		curl -s -o /dev/null -w "%{http_code}\n" -u "$$JIRA_USER:$$JIRA_TOKEN" "$$JIRA_URL/rest/api/3/myself"; \
+	else \
+		echo "JIRA: incomplete credentials in .env (skip)"; \
+	fi
+	@echo ""
+	@echo "Expected: 200 on every active check. 401/403 = scope/approval problem."
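`make doctor` delegates to `scripts/doctor.sh`, which this diff does not include. A minimal sketch of the host validation it implies (required tools plus free ports) — the tool and port lists here are assumptions, and the real script may check more (versions, disk, memory):

```shell
# Hypothetical host-check sketch — not the real scripts/doctor.sh.
fail=0

# Required CLI tools on a fresh clone.
for tool in docker git node npm; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "ok: $tool"
  else
    echo "MISSING: $tool"
    fail=1
  fi
done

# Ports the stack binds (API 3000, data API 8000 per `make up` output).
for port in 3000 8000; do
  if command -v nc >/dev/null 2>&1 && nc -z 127.0.0.1 "$port" 2>/dev/null; then
    echo "BUSY: port $port is already in use"
    fail=1
  else
    echo "ok: port $port free"
  fi
done

[ "$fail" -eq 0 ] && echo "doctor: host looks healthy" || echo "doctor: fix the items above"
```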
+ +# -------------------------------------------------------------------------- +# Dev environment health +# +# `doctor` runs BEFORE onboard to validate the host machine has every tool +# and free port PULSE needs. `verify-dev` runs AFTER onboard to confirm the +# stack is actually serving data, not just that containers are "up". +# +# Both are deliberately shell (not python) so they work on a fresh clone +# with only docker + bash installed. +# +# Ver runbook: docs/onboarding.md +# -------------------------------------------------------------------------- +doctor: ## Validate host machine (tools, versions, free ports, disk, memory) + @./scripts/doctor.sh + +verify-dev: ## Post-onboard smoke: API + data API + content + UI healthy + @./scripts/verify-dev.sh + # -------------------------------------------------------------------------- # First-time Setup # -------------------------------------------------------------------------- diff --git a/pulse/config/connections.yaml b/pulse/config/connections.yaml index 79bb5a3..e8c9b34 100644 --- a/pulse/config/connections.yaml +++ b/pulse/config/connections.yaml @@ -20,17 +20,13 @@ connections: token_env: GITHUB_TOKEN base_url: https://api.github.com sync_interval_minutes: 15 + # Per ingestion-spec §2.3 (Discovery-Only): NO explicit list of repos. + # The connector calls `discover_repos(active_months=12)` on each cycle + # via GraphQL `organization.repositories(orderBy: PUSHED_AT)` filtered + # by activity. New repos appear automatically; archived ones drop off + # without manual YAML edits. 
scope: - repositories: - - "webmotors-private/webmotors.next.ui" - - "webmotors-private/webmotors.portal.ui" - - "webmotors-private/webmotors.buyer.ui" - - "webmotors-private/webmotors.buyer.desktop.ui" - - "webmotors-private/webmotors.catalogo.next.ui" - - "webmotors-private/webmotors.fipe.next.ui" - - "webmotors-private/webmotors.pf" - - "webmotors-private/eleanor.flutter" - - "webmotors-private/webmotors.app.pf.search.bff" + active_months: 12 - name: "Webmotors Jenkins" source: jenkins @@ -40,72 +36,81 @@ connections: base_url: ${JENKINS_BASE_URL} sync_interval_minutes: 15 scope: - # Auto-mapped 2026-03-27 by scanning 1404 Jenkins jobs (READ-ONLY). - # Each job's lastBuild → BuildData → remoteUrls was matched to our 9 GitHub repos. - # Only PRD (production) jobs are included — for DORA Deployment Frequency. - # Full mapping saved in config/jenkins-job-mapping.json. - # - # Since these ARE production jobs (job name contains "prd"), every build - # counts as a production deployment. deploymentPattern matches all builds; - # productionPattern matches the job name itself (always "prd"). 
- jobs: - # ── webmotors.next.ui (3 PRD jobs) ───────────────── - - fullName: "prd-wm-buyer-home-frontend-ui" - deploymentPattern: ".*" - productionPattern: "(?i)prd" - - fullName: "prd-wm-buyer-search-frontend-ui" - deploymentPattern: ".*" - productionPattern: "(?i)prd" - - fullName: "prd-wm-buyer-subscriptions-frontend-ui" - deploymentPattern: ".*" - productionPattern: "(?i)prd" - # ── webmotors.portal.ui (1 PRD job) ──────────────── - - fullName: "prd-wm-buyer-lambda-home-ui" - deploymentPattern: ".*" - productionPattern: "(?i)prd" - # ── webmotors.buyer.ui (1 PRD job) ───────────────── - - fullName: "prd-wm-buyer-lambda-mobile-ui" - deploymentPattern: ".*" - productionPattern: "(?i)prd" - # ── webmotors.buyer.desktop.ui (1 PRD job) ───────── - - fullName: "prd-wm-buyer-lambda-desktop-ui" - deploymentPattern: ".*" - productionPattern: "(?i)prd" - # ── webmotors.catalogo.next.ui (1 PRD job) ───────── - - fullName: "catalogo-next-ui-prd" - deploymentPattern: ".*" - productionPattern: "(?i)prd" - # ── webmotors.fipe.next.ui (1 PRD job) ───────────── - - fullName: "fipe-next-ui-prd" - deploymentPattern: ".*" - productionPattern: "(?i)prd" - # ── webmotors.pf (6 PRD jobs: android, iOS, web) ─── - - fullName: "android-pf-prd-firebase" - deploymentPattern: ".*" - productionPattern: "(?i)prd" - - fullName: "android-pf-prd-playstore" - deploymentPattern: ".*" - productionPattern: "(?i)prd" - - fullName: "ios-pf-prd-firebase" - deploymentPattern: ".*" - productionPattern: "(?i)prd" - - fullName: "ios-pf-prd-testflight" - deploymentPattern: ".*" - productionPattern: "(?i)prd" - - fullName: "web-cms-pf-prd" - deploymentPattern: ".*" - productionPattern: "(?i)prd" - - fullName: "webservicos-web-prd" - deploymentPattern: ".*" - productionPattern: "(?i)prd" - # ── eleanor.flutter (1 job — no env suffix) ──────── - - fullName: "webmotors-eleanor-flutter" - deploymentPattern: ".*" - productionPattern: ".*" - # ── webmotors.app.pf.search.bff (1 job — no env suffix) - - fullName: 
"webmotors-app-api-search-bff" - deploymentPattern: ".*" - productionPattern: ".*" + # Job list is loaded from config/jenkins-job-mapping.json (auto-generated). + # Generated by READ-ONLY SCM scan of active PRD Jenkins jobs — each + # job's lastBuild → remoteUrls resolves the GitHub repo. Per + # ingestion-spec §3.6, regen via scripts/discover_jenkins_jobs.py + # when new repos appear (manual or weekly cron). + jobs_from_mapping: true + + - name: "Webmotors Jira" + source: jira + # Jira Cloud uses Basic Auth: email (username) + API token (password) + username_env: JIRA_EMAIL + token_env: JIRA_API_TOKEN + base_url: https://webmotors.atlassian.net + sync_interval_minutes: 15 + # Per ingestion-spec §2.3 (Discovery-Only): NO explicit project list. + # `ProjectDiscoveryService` lists ALL Jira projects; `SmartPrioritizer` + # auto-activates projects with ≥3 PR references. Tenant config in + # `tenant_jira_config` controls discovery mode (must be 'smart' for + # auto-activation). PII-flagged projects require manual approval. + scope: + mode: smart + smart_min_pr_references: 3 + smart_pr_scan_days: 90 + +# Issue status mapping — Webmotors Jira (Portuguese) → PULSE normalized +# Primary source; overrides DEFAULT_STATUS_MAPPING in normalizer.py. 
+# Normalized values: todo | in_progress | in_review | done +# ADR: "testing" states map to "in_review" (both are post-dev WIP for CFD) +# ADR: "Aguardando Deploy Produção" = done (dev work complete; deploy tracked by Jenkins/DORA) +status_mapping: + # ── Backlog / To Do ── + "refinado": "todo" + "backlog": "todo" + "quebra de histórias": "todo" + "to do": "todo" + "open": "todo" + "new": "todo" + # ── In Progress ── + "em design": "in_progress" + "em imersão": "in_progress" + "em desenvolvimento": "in_progress" + "in progress": "in_progress" + "em andamento": "in_progress" + # ── Review / Testing (collapsed to in_review for CFD 5-band model) ── + "aguardando code review": "in_review" + "em code review": "in_review" + "product review": "in_review" + "planejando testes": "in_review" + "em teste azul": "in_review" + "aguardando teste azul": "in_review" + "em teste hml": "in_review" + "testando": "in_review" + "qa": "in_review" + # ── Backlog / Waiting (Kanban upstream stages) ── + "priorizado": "todo" + "aguardando histórias": "todo" + "aguardando desenvolvimento": "todo" + "priorizado gp": "todo" + "pronto para o gp": "todo" + # ── In Progress (active work, pre-dev analysis) ── + "construção de hipótese": "in_progress" + "desenvolvimento": "in_progress" + "design": "in_progress" + "analise": "in_progress" + "discovery": "in_progress" + "entendimento": "in_progress" + # ── Done ── + "pós-implantação": "done" + "aguardando deploy produção": "done" + "concluído": "done" + "cancelado": "done" + "fechado": "done" + "done": "done" + "closed": "done" + "resolved": "done" # Team definitions and their mappings to source system projects teams: @@ -120,6 +125,12 @@ teams: - "webmotors-private/webmotors.buyer.desktop.ui" - "webmotors-private/webmotors.catalogo.next.ui" - "webmotors-private/webmotors.fipe.next.ui" + jira: + projects: + - "DESC" + - "ENO" + - "ANCR" + - "PUSO" - name: "Canais Digitais App" slug: "canais-digitais-app" @@ -129,3 +140,21 @@ teams: - 
"webmotors-private/webmotors.pf" - "webmotors-private/eleanor.flutter" - "webmotors-private/webmotors.app.pf.search.bff" + jira: + projects: + - "APPF" + + - name: "Fidelidade" + slug: "fidelidade" + mappings: + jira: + projects: + - "FID" + + - name: "Turbo Lab" + slug: "turbo-lab" + mappings: + jira: + projects: + - "CTURBO" + - "PTURB" diff --git a/pulse/config/jenkins-job-mapping.json b/pulse/config/jenkins-job-mapping.json index e7ad7e9..d099fb4 100644 --- a/pulse/config/jenkins-job-mapping.json +++ b/pulse/config/jenkins-job-mapping.json @@ -1,43 +1,2861 @@ { - "_meta": { - "generated": "2026-03-27", - "method": "READ-ONLY scan of 1404 Jenkins jobs via lastBuild API", - "note": "Maps GitHub repos to Jenkins jobs by reading Git remoteUrls from build metadata" + "webmotors-private/AgendaFacil": { + "prd_jobs": [ + "flychat-ecs-prd" + ], + "all_jobs": [ + "flychat-ecs-prd" + ] }, - "webmotors-private/webmotors.pf": { - "prd_jobs": ["android-pf-prd-firebase", "android-pf-prd-playstore", "ios-pf-prd-firebase", "ios-pf-prd-testflight", "web-cms-pf-prd", "webservicos-web-prd"], - "all_jobs": ["android-pf-hk-firebase", "android-pf-prd-firebase", "android-pf-prd-playstore", "android-pf-prd-promotion", "build-webservicos-all-platforms", "ios-pf-hk-firebase", "ios-pf-prd-firebase", "ios-pf-prd-promotion", "ios-pf-prd-testflight", "pf-check-sonar-coverage", "web-cms-pf-hml", "web-cms-pf-prd", "webservicos-check-sonar-coverage", "webservicos-web-azl", "webservicos-web-dev", "webservicos-web-hml", "webservicos-web-prd"] + "webmotors-private/WebMotors.Android.PF": { + "prd_jobs": [ + "ANDROID-PROD-PF" + ], + "all_jobs": [ + "ANDROID-PROD-PF" + ] }, - "webmotors-private/webmotors.next.ui": { - "prd_jobs": ["prd-wm-buyer-home-frontend-ui", "prd-wm-buyer-search-frontend-ui", "prd-wm-buyer-subscriptions-frontend-ui"], - "all_jobs": ["azl-wm-buyer-home-frontend-ui", "azl-wm-buyer-search-frontend-ui", "hml-wm-buyer-home-frontend-ui", "hml-wm-buyer-search-frontend-ui", 
"hml-wm-buyer-subscriptions-frontend-ui", "prd-wm-buyer-home-frontend-ui", "prd-wm-buyer-search-frontend-ui", "prd-wm-buyer-subscriptions-frontend-ui"] + "webmotors-private/WebMotors.ETL.Lead.Fixo": { + "prd_jobs": [ + "pi.sales-etl.leadfixo.prd" + ], + "all_jobs": [ + "pi.sales-etl.leadfixo.prd" + ] + }, + "webmotors-private/WebMotors.IOS.PF": { + "prd_jobs": [ + "IOS-PROD-PF" + ], + "all_jobs": [ + "IOS-PROD-PF" + ] + }, + "webmotors-private/WebMotors.Lead.Counting": { + "prd_jobs": [ + "pi.sales-etl-lead.counting-prd" + ], + "all_jobs": [ + "pi.sales-etl-lead.counting-prd" + ] + }, + "webmotors-private/acelera-consultor-api": { + "prd_jobs": [ + "acelera-consultor-api-prd" + ], + "all_jobs": [ + "acelera-consultor-api-prd" + ] + }, + "webmotors-private/acelera-consultor-dashboard": { + "prd_jobs": [ + "acelera-consultor-dashboard-ui-prd" + ], + "all_jobs": [ + "acelera-consultor-dashboard-ui-prd" + ] + }, + "webmotors-private/acelera-consultor-front": { + "prd_jobs": [ + "webmotors-acelera-consultor-front-ui-prd" + ], + "all_jobs": [ + "webmotors-acelera-consultor-front-ui-prd" + ] + }, + "webmotors-private/agendafacil": { + "prd_jobs": [ + "agendafacil-prd" + ], + "all_jobs": [ + "agendafacil-prd" + ] + }, + "webmotors-private/agendafacil.cockpit": { + "prd_jobs": [ + "agendafacil-cockpit-ui-prd" + ], + "all_jobs": [ + "agendafacil-cockpit-ui-prd" + ] + }, + "webmotors-private/agendafacilwhatsappserver": { + "prd_jobs": [ + "agendafacil-whatsappserver-prd" + ], + "all_jobs": [ + "agendafacil-whatsappserver-prd" + ] + }, + "webmotors-private/api-sites": { + "prd_jobs": [ + "cockpit-integration-api-sites-prd" + ], + "all_jobs": [ + "cockpit-integration-api-sites-prd" + ] + }, + "webmotors-private/chatbot-esquentalead": { + "prd_jobs": [ + "Esquenta Leads/esquenta.lead.lambda.prd", + "GenIa/prd-lambda-buora2-dev-agent-handler", + "GenIa/prd-playground" + ], + "all_jobs": [ + "Esquenta Leads/esquenta.lead.lambda.prd", + "GenIa/prd-lambda-buora2-dev-agent-handler", 
+ "GenIa/prd-playground" + ] + }, + "webmotors-private/cockpit-ai-api": { + "prd_jobs": [ + "cockpit-ai-api-prd" + ], + "all_jobs": [ + "cockpit-ai-api-prd" + ] + }, + "webmotors-private/cockpit.components.audit": { + "prd_jobs": [ + "ckp.components.audit.lambda.prd" + ], + "all_jobs": [ + "ckp.components.audit.lambda.prd" + ] + }, + "webmotors-private/cockpit.crm.communicator.cdn": { + "prd_jobs": [ + "cockpit-crm-communicator-cdn-prd-old" + ], + "all_jobs": [ + "cockpit-crm-communicator-cdn-prd-old" + ] + }, + "webmotors-private/cockpit.crm.leadaccept.legacy": { + "prd_jobs": [ + "crm.services.windows.prd" + ], + "all_jobs": [ + "crm.services.windows.prd" + ] + }, + "webmotors-private/cockpit.crm.pipes": { + "prd_jobs": [ + "cockpit-crm-communicator-cdn-prd-old", + "cockpit-crm-lambda-services-prd", + "cockpit-crm-metrics-collection-prd", + "crm.api.message.send.lambda.prd", + "crm.audit.lambda.prd", + "crm.communicator.cdn.lambda.prd", + "crm.communicator.lambda.prd", + "crm.communicator.lambda.prd-v2", + "crm.configuration.lambda.prd", + "crm.customer.lambda.prd", + "crm.dash.api.prd", + "crm.data.sanitization.lambda.prd", + "crm.distribution.lambda.prd", + "crm.emailreceiver.lambda.prd", + "crm.financing.simulation.lambda.v2.prd", + "crm.getlead.lambda.prd", + "crm.getlead.webmotors.lambda.prd", + "crm.inputlead.lambda.prd", + "crm.inputlead.v2.lambda.prd", + "crm.integrated.information.lambda.prd", + "crm.integration.lambda.prd", + "crm.leadnotification.lambda.prd", + "crm.messagetemplates.lambda.prd", + "crm.messenger.lambda.prd", + "crm.negotiations.lambdas.prd", + "crm.panel.lambda.prd", + "crm.plans.lambdas.prd", + "crm.reasons.lambda.prd", + "crm.reports.lambdas.prd", + "crm.schedule.lambda.prd", + "crm.services.windows.prd", + "crm.socket.lambda.prd", + "crm.sync.elastic.lambda.prd", + "crm.synchronizer.lambda.prd", + "crm.thirdpartycrm.lambda.prd", + "crm.updatelead.lambda.prd" + ], + "all_jobs": [ + "cockpit-crm-communicator-cdn-prd-old", + 
"cockpit-crm-lambda-services-prd", + "cockpit-crm-metrics-collection-prd", + "crm.api.message.send.lambda.prd", + "crm.audit.lambda.prd", + "crm.communicator.cdn.lambda.prd", + "crm.communicator.lambda.prd", + "crm.communicator.lambda.prd-v2", + "crm.configuration.lambda.prd", + "crm.customer.lambda.prd", + "crm.dash.api.prd", + "crm.data.sanitization.lambda.prd", + "crm.distribution.lambda.prd", + "crm.emailreceiver.lambda.prd", + "crm.financing.simulation.lambda.v2.prd", + "crm.getlead.lambda.prd", + "crm.getlead.webmotors.lambda.prd", + "crm.inputlead.lambda.prd", + "crm.inputlead.v2.lambda.prd", + "crm.integrated.information.lambda.prd", + "crm.integration.lambda.prd", + "crm.leadnotification.lambda.prd", + "crm.messagetemplates.lambda.prd", + "crm.messenger.lambda.prd", + "crm.negotiations.lambdas.prd", + "crm.panel.lambda.prd", + "crm.plans.lambdas.prd", + "crm.reasons.lambda.prd", + "crm.reports.lambdas.prd", + "crm.schedule.lambda.prd", + "crm.services.windows.prd", + "crm.socket.lambda.prd", + "crm.sync.elastic.lambda.prd", + "crm.synchronizer.lambda.prd", + "crm.thirdpartycrm.lambda.prd", + "crm.updatelead.lambda.prd" + ] + }, + "webmotors-private/cockpit.crm.vmotors": { + "prd_jobs": [ + "crm.vmotors.pj.api.prd.recycle", + "crm.vmotors.pj.app.api.prd", + "crm.vmotors.pj.azl.update.config.prd", + "crm.vmotors.pj.web.api.prd" + ], + "all_jobs": [ + "crm.vmotors.pj.api.prd.recycle", + "crm.vmotors.pj.app.api.prd", + "crm.vmotors.pj.azl.update.config.prd", + "crm.vmotors.pj.web.api.prd" + ] + }, + "webmotors-private/cockpit.crmc.pipes": { + "prd_jobs": [ + "cockpit-crmcustomer-ia-agent-prd" + ], + "all_jobs": [ + "cockpit-crmcustomer-ia-agent-prd" + ] + }, + "webmotors-private/cockpit.crmcustomer.campaign": { + "prd_jobs": [ + "cockpit-crmcustomer-clientfacing-prd" + ], + "all_jobs": [ + "cockpit-crmcustomer-clientfacing-prd" + ] + }, + "webmotors-private/cockpit.crmcustomer.campaign.ia": { + "prd_jobs": [ + "cockpit-crmcustomer-campaign-ia-lambda-prd", + 
"cockpit-crmcustomer-ia-agent-prd" + ], + "all_jobs": [ + "cockpit-crmcustomer-campaign-ia-lambda-prd", + "cockpit-crmcustomer-ia-agent-prd" + ] + }, + "webmotors-private/cockpit.crmcustomer.clientfacing.cluster": { + "prd_jobs": [ + "cockpit-crmcustomer-clientfacing-prd", + "cockpit-crmcustomer-ia-agent-prd" + ], + "all_jobs": [ + "cockpit-crmcustomer-clientfacing-prd", + "cockpit-crmcustomer-ia-agent-prd" + ] + }, + "webmotors-private/cockpit.crmcustomer.messenger": { + "prd_jobs": [ + "cockpit-crmcustomer-messenger-prd" + ], + "all_jobs": [ + "cockpit-crmcustomer-messenger-prd" + ] + }, + "webmotors-private/cockpit.crmcustomer.mfe.campaigns.ui": { + "prd_jobs": [ + "cockpit-crmcustomer-mfe-campaigns-ui-prd" + ], + "all_jobs": [ + "cockpit-crmcustomer-mfe-campaigns-ui-prd" + ] + }, + "webmotors-private/cockpit.crmcustomer.mfe.vision.ui": { + "prd_jobs": [ + "cockpit-crmcustomer-mfe-vision-ui-prd" + ], + "all_jobs": [ + "cockpit-crmcustomer-mfe-vision-ui-prd" + ] + }, + "webmotors-private/cockpit.crmcustomer.pipes": { + "prd_jobs": [ + "cockpit-crmcustomer-campaign-ia-lambda-prd", + "cockpit-crmcustomer-clientfacing-prd", + "cockpit-crmcustomer-messenger-prd", + "cockpit-crmcustomer-mfe-campaigns-ui-prd", + "cockpit-crmcustomer-mfe-vision-ui-prd", + "cockpit-crmcustomer-segmentation-prd", + "cockpit-crmcustomer-ui-prd" + ], + "all_jobs": [ + "cockpit-crmcustomer-campaign-ia-lambda-prd", + "cockpit-crmcustomer-clientfacing-prd", + "cockpit-crmcustomer-messenger-prd", + "cockpit-crmcustomer-mfe-campaigns-ui-prd", + "cockpit-crmcustomer-mfe-vision-ui-prd", + "cockpit-crmcustomer-segmentation-prd", + "cockpit-crmcustomer-ui-prd" + ] + }, + "webmotors-private/cockpit.crmcustomer.segmentation": { + "prd_jobs": [ + "cockpit-crmcustomer-segmentation-prd" + ], + "all_jobs": [ + "cockpit-crmcustomer-segmentation-prd" + ] + }, + "webmotors-private/cockpit.crmcustomer.ui": { + "prd_jobs": [ + "cockpit-crmcustomer-ui-prd" + ], + "all_jobs": [ + "cockpit-crmcustomer-ui-prd" + ] 
+ }, + "webmotors-private/cockpit.dealer.api": { + "prd_jobs": [ + "PI-Security/prd-lambda-legalperson-insert-user", + "PI-Security/prd-lambda-legalperson-migrate-sqlserver", + "legalperson-api-all-prd", + "legalperson-api-sync-aurora-prd", + "legalperson-api-sync-fillfilters-prd", + "legalperson-api-sync-migrate-send-prd" + ], + "all_jobs": [ + "PI-Security/prd-lambda-legalperson-insert-user", + "PI-Security/prd-lambda-legalperson-migrate-sqlserver", + "legalperson-api-all-prd", + "legalperson-api-sync-aurora-prd", + "legalperson-api-sync-fillfilters-prd", + "legalperson-api-sync-migrate-send-prd" + ] + }, + "webmotors-private/cockpit.dealer.businesshour": { + "prd_jobs": [ + "ckp.dealer.businesshour.api.prd" + ], + "all_jobs": [ + "ckp.dealer.businesshour.api.prd" + ] + }, + "webmotors-private/cockpit.dealer.group": { + "prd_jobs": [ + "ckp.dealer.group.api.prd" + ], + "all_jobs": [ + "ckp.dealer.group.api.prd" + ] + }, + "webmotors-private/cockpit.dealer.salesman": { + "prd_jobs": [ + "ckp.dealer.salesman.api.prd" + ], + "all_jobs": [ + "ckp.dealer.salesman.api.prd" + ] + }, + "webmotors-private/cockpit.dealer.users": { + "prd_jobs": [ + "ckp.dealer.users.api.prd" + ], + "all_jobs": [ + "ckp.dealer.users.api.prd" + ] + }, + "webmotors-private/cockpit.integration.api.channels": { + "prd_jobs": [ + "cockpit-integration-new-api-channel-prd" + ], + "all_jobs": [ + "cockpit-integration-new-api-channel-prd" + ] + }, + "webmotors-private/cockpit.integration.api.showroom": { + "prd_jobs": [ + "cockpit-integration-showroom-prd" + ], + "all_jobs": [ + "cockpit-integration-showroom-prd" + ] + }, + "webmotors-private/cockpit.integration.migration": { + "prd_jobs": [ + "cockpit-integration-migration-prd" + ], + "all_jobs": [ + "cockpit-integration-migration-prd" + ] + }, + "webmotors-private/cockpit.integration.pipes": { + "prd_jobs": [ + "cockpit-integration-api-catalog-prd", + "cockpit-integration-broadcast-prd", + "cockpit-integration-icarros-prd", + 
"cockpit-integration-instagram-prd", + "cockpit-integration-meli-prd", + "cockpit-integration-mobiauto-prd", + "cockpit-integration-olx-prd", + "cockpit-integration-santander-prd", + "cockpit-integration-socarrao-prd", + "cockpit-integration-thirdpartycrm-prd", + "cockpit-integration-users-prd", + "cockpit-integration-whatsapp-prd" + ], + "all_jobs": [ + "cockpit-integration-api-catalog-prd", + "cockpit-integration-broadcast-prd", + "cockpit-integration-icarros-prd", + "cockpit-integration-instagram-prd", + "cockpit-integration-meli-prd", + "cockpit-integration-mobiauto-prd", + "cockpit-integration-olx-prd", + "cockpit-integration-santander-prd", + "cockpit-integration-socarrao-prd", + "cockpit-integration-thirdpartycrm-prd", + "cockpit-integration-users-prd", + "cockpit-integration-whatsapp-prd" + ] + }, + "webmotors-private/cockpit.integration.usermanagement.ui": { + "prd_jobs": [ + "cockpit-integration-usermanagement-ui-prd" + ], + "all_jobs": [ + "cockpit-integration-usermanagement-ui-prd" + ] + }, + "webmotors-private/consultor-turbo-api": { + "prd_jobs": [ + "consultor-turbo-api-prd" + ], + "all_jobs": [ + "consultor-turbo-api-prd" + ] + }, + "webmotors-private/dora-metrics-extract": { + "prd_jobs": [ + "AQ/arq.dora.metrics.extract.lambda.prd" + ], + "all_jobs": [ + "AQ/arq.dora.metrics.extract.lambda.prd" + ] + }, + "webmotors-private/eng-lake-database-migration": { + "prd_jobs": [ + "prd-eng-lake-database-migration" + ], + "all_jobs": [ + "prd-eng-lake-database-migration" + ] + }, + "webmotors-private/flyapi": { + "prd_jobs": [ + "agendafacil-flyapi-prd" + ], + "all_jobs": [ + "agendafacil-flyapi-prd" + ] + }, + "webmotors-private/jenkins.common.libs": { + "prd_jobs": [ + "ANDROID-PRD-Lib-Cockpit-LGPD", + "ANDROID-PRD-Lib-Cockpit-Notification", + "ANDROID-PRD-Lib-Webmotors-Design-System", + "ANDROID-PRD-Lib-Webmotors-Network", + "ANDROID-PRD-Lib-Webmotors-Tracking", + "ANDROID-PRODUCTION-Cockpit", + "IOS-PRD-Lib-Cockpit-LGPD", + 
"IOS-PRD-Lib-Cockpit-Notification", + "IOS-PRD-Lib-Cockpit-WMCore", + "IOS-PRD-Lib-Webmotors-Network", + "IOS-PRD-Lib-Webmotors-Tracking", + "IOS-PRODUCTION-Cockpit", + "KMM-PRD-Lib-Webmotors-Network-OLD", + "KMM-PRD-Lib-Webmotors-Network/merge%2FAPPJ-3739", + "KMM-PRD-Lib-Webmotors-Notification/merge%2FAPPJ-3739", + "KMM-PRD-Lib-Webmotors-Tools", + "iOS-PRD-Lib-Webmotors-Design-System" + ], + "all_jobs": [ + "ANDROID-PRD-Lib-Cockpit-LGPD", + "ANDROID-PRD-Lib-Cockpit-Notification", + "ANDROID-PRD-Lib-Webmotors-Design-System", + "ANDROID-PRD-Lib-Webmotors-Network", + "ANDROID-PRD-Lib-Webmotors-Tracking", + "ANDROID-PRODUCTION-Cockpit", + "IOS-PRD-Lib-Cockpit-LGPD", + "IOS-PRD-Lib-Cockpit-Notification", + "IOS-PRD-Lib-Cockpit-WMCore", + "IOS-PRD-Lib-Webmotors-Network", + "IOS-PRD-Lib-Webmotors-Tracking", + "IOS-PRODUCTION-Cockpit", + "KMM-PRD-Lib-Webmotors-Network-OLD", + "KMM-PRD-Lib-Webmotors-Network/merge%2FAPPJ-3739", + "KMM-PRD-Lib-Webmotors-Notification/merge%2FAPPJ-3739", + "KMM-PRD-Lib-Webmotors-Tools", + "iOS-PRD-Lib-Webmotors-Design-System" + ] + }, + "webmotors-private/lib-mobile-android-design-system": { + "prd_jobs": [ + "ANDROID-PRD-Lib-Webmotors-Design-System" + ], + "all_jobs": [ + "ANDROID-PRD-Lib-Webmotors-Design-System" + ] + }, + "webmotors-private/lib-mobile-android-lgpd": { + "prd_jobs": [ + "ANDROID-PRD-Lib-Cockpit-LGPD" + ], + "all_jobs": [ + "ANDROID-PRD-Lib-Cockpit-LGPD" + ] + }, + "webmotors-private/lib-mobile-android-network": { + "prd_jobs": [ + "ANDROID-PRD-Lib-Webmotors-Network" + ], + "all_jobs": [ + "ANDROID-PRD-Lib-Webmotors-Network" + ] + }, + "webmotors-private/lib-mobile-android-tracking": { + "prd_jobs": [ + "ANDROID-PRD-Lib-Webmotors-Tracking" + ], + "all_jobs": [ + "ANDROID-PRD-Lib-Webmotors-Tracking" + ] + }, + "webmotors-private/lib-mobile-ios-design-system": { + "prd_jobs": [ + "iOS-PRD-Lib-Webmotors-Design-System" + ], + "all_jobs": [ + "iOS-PRD-Lib-Webmotors-Design-System" + ] + }, + 
"webmotors-private/lib-mobile-ios-lgpd": { + "prd_jobs": [ + "IOS-PRD-Lib-Cockpit-LGPD" + ], + "all_jobs": [ + "IOS-PRD-Lib-Cockpit-LGPD" + ] + }, + "webmotors-private/lib-mobile-ios-network": { + "prd_jobs": [ + "IOS-PRD-Lib-Webmotors-Network" + ], + "all_jobs": [ + "IOS-PRD-Lib-Webmotors-Network" + ] + }, + "webmotors-private/lib-mobile-ios-tracking": { + "prd_jobs": [ + "IOS-PRD-Lib-Webmotors-Tracking" + ], + "all_jobs": [ + "IOS-PRD-Lib-Webmotors-Tracking" + ] + }, + "webmotors-private/lib-mobile-ios-wmcore": { + "prd_jobs": [ + "IOS-PRD-Lib-Cockpit-WMCore" + ], + "all_jobs": [ + "IOS-PRD-Lib-Cockpit-WMCore" + ] + }, + "webmotors-private/lib-mobile-kmm-network": { + "prd_jobs": [ + "KMM-PRD-Lib-Webmotors-Network-OLD", + "KMM-PRD-Lib-Webmotors-Network/merge%2FAPPJ-3739" + ], + "all_jobs": [ + "KMM-PRD-Lib-Webmotors-Network-OLD", + "KMM-PRD-Lib-Webmotors-Network/merge%2FAPPJ-3739" + ] + }, + "webmotors-private/lib-mobile-kmm-notification": { + "prd_jobs": [ + "KMM-PRD-Lib-Webmotors-Notification/merge%2FAPPJ-3739" + ], + "all_jobs": [ + "KMM-PRD-Lib-Webmotors-Notification/merge%2FAPPJ-3739" + ] + }, + "webmotors-private/lib-mobile-kmm-tools": { + "prd_jobs": [ + "KMM-PRD-Lib-Webmotors-Tools" + ], + "all_jobs": [ + "KMM-PRD-Lib-Webmotors-Tools" + ] + }, + "webmotors-private/maisfidelidade.bens.servicos.api": { + "prd_jobs": [ + "mais-fidelidade-bes-prd-api" + ], + "all_jobs": [ + "mais-fidelidade-bes-prd-api" + ] + }, + "webmotors-private/maisfidelidade.bens.servicos.import": { + "prd_jobs": [ + "mais-fidelidade-importador-bes-prd-api" + ], + "all_jobs": [ + "mais-fidelidade-importador-bes-prd-api" + ] + }, + "webmotors-private/maisfidelidade.bens.servicos.ui": { + "prd_jobs": [ + "mais-fidelidade-bes-prd-ui" + ], + "all_jobs": [ + "mais-fidelidade-bes-prd-ui" + ] + }, + "webmotors-private/maisfidelidade.bens.servicos.usarios.import": { + "prd_jobs": [ + "mais-fidelidade-importador-bes-usuario-prd-api" + ], + "all_jobs": [ + 
"mais-fidelidade-importador-bes-usuario-prd-api" + ] + }, + "webmotors-private/maisfidelidade.contrata.api": { + "prd_jobs": [ + "mais-fidelidade-contrata-prd-api", + "mais-fidelidade-contrata-websocket-prd-api" + ], + "all_jobs": [ + "mais-fidelidade-contrata-prd-api", + "mais-fidelidade-contrata-websocket-prd-api" + ] + }, + "webmotors-private/maisfidelidade.contrata.import": { + "prd_jobs": [ + "mais-fidelidade-importador-contrata-prd-api" + ], + "all_jobs": [ + "mais-fidelidade-importador-contrata-prd-api" + ] + }, + "webmotors-private/maisfidelidade.contrata.ui": { + "prd_jobs": [ + "mais-fidelidade-contrata-prd-ui" + ], + "all_jobs": [ + "mais-fidelidade-contrata-prd-ui" + ] + }, + "webmotors-private/maisfidelidade.gestao.administrativa.api": { + "prd_jobs": [ + "+Fidelidade.Gestao.Administrativa-Prd" + ], + "all_jobs": [ + "+Fidelidade.Gestao.Administrativa-Prd" + ] + }, + "webmotors-private/mobile-android-cockpit": { + "prd_jobs": [ + "ANDROID-PRODUCTION-Cockpit" + ], + "all_jobs": [ + "ANDROID-PRODUCTION-Cockpit" + ] + }, + "webmotors-private/mobile-android-cockpit-notification": { + "prd_jobs": [ + "ANDROID-PRD-Lib-Cockpit-Notification" + ], + "all_jobs": [ + "ANDROID-PRD-Lib-Cockpit-Notification" + ] + }, + "webmotors-private/mobile-ios-cockpit": { + "prd_jobs": [ + "IOS-PRODUCTION-Cockpit" + ], + "all_jobs": [ + "IOS-PRODUCTION-Cockpit" + ] + }, + "webmotors-private/mobile-ios-cockpit-notification": { + "prd_jobs": [ + "IOS-PRD-Lib-Cockpit-Notification" + ], + "all_jobs": [ + "IOS-PRD-Lib-Cockpit-Notification" + ] + }, + "webmotors-private/pipelines-agendafacil": { + "prd_jobs": [ + "flychat-ecs-prd" + ], + "all_jobs": [ + "flychat-ecs-prd" + ] + }, + "webmotors-private/portal-turbo-api": { + "prd_jobs": [ + "portal-turbo-api-prd" + ], + "all_jobs": [ + "portal-turbo-api-prd" + ] + }, + "webmotors-private/precog-leads": { + "prd_jobs": [ + "prd-api-precog-leads" + ], + "all_jobs": [ + "prd-api-precog-leads" + ] + }, + 
"webmotors-private/push-subscribers-api": { + "prd_jobs": [ + "PushSubscriber-Api/pushsubscriber-api-prd" + ], + "all_jobs": [ + "PushSubscriber-Api/pushsubscriber-api-prd" + ] + }, + "webmotors-private/score-ordenacao": { + "prd_jobs": [ + "Score-Ordenacao-prd" + ], + "all_jobs": [ + "Score-Ordenacao-prd" + ] + }, + "webmotors-private/score-ordenacao-motos": { + "prd_jobs": [ + "Score-Ordenacao-Moto-prd" + ], + "all_jobs": [ + "Score-Ordenacao-Moto-prd" + ] + }, + "webmotors-private/seo-sitemap-strategic": { + "prd_jobs": [ + "wm-seo-sitemap-prd" + ], + "all_jobs": [ + "wm-seo-sitemap-prd" + ] + }, + "webmotors-private/tacografo-agenda-facil": { + "prd_jobs": [ + "tacografo-prd" + ], + "all_jobs": [ + "tacografo-prd" + ] + }, + "webmotors-private/vmotors-api-integrator": { + "prd_jobs": [ + "prd-vmotors-api-integrator" + ], + "all_jobs": [ + "prd-vmotors-api-integrator" + ] + }, + "webmotors-private/vmotors-api-updater": { + "prd_jobs": [ + "prd-vmotors-api-updater" + ], + "all_jobs": [ + "prd-vmotors-api-updater" + ] + }, + "webmotors-private/vmotors-geral": { + "prd_jobs": [ + "cockpit-vmotors-geral-prd" + ], + "all_jobs": [ + "cockpit-vmotors-geral-prd" + ] + }, + "webmotors-private/vmotors-web": { + "prd_jobs": [ + "vmotors-web-php-prd" + ], + "all_jobs": [ + "vmotors-web-php-prd" + ] + }, + "webmotors-private/webmotors": { + "prd_jobs": [ + "pi.money-etl-faturaleadmodel-prd", + "pi.money-etl-faturapatrocinio-prd", + "pi.money-lote-baixamanual-etl-prd", + "pi.money-processos-prompt-etl-prd", + "prd-service-arquivocnab", + "prd-service-baixa-etl-pagamento", + "prd-service-processar-cnab-etl-retorno" + ], + "all_jobs": [ + "pi.money-etl-faturaleadmodel-prd", + "pi.money-etl-faturapatrocinio-prd", + "pi.money-lote-baixamanual-etl-prd", + "pi.money-processos-prompt-etl-prd", + "prd-service-arquivocnab", + "prd-service-baixa-etl-pagamento", + "prd-service-processar-cnab-etl-retorno" + ] + }, + "webmotors-private/webmotors-app-sdui": { + "prd_jobs": [ + 
"web-cms-pf-prd" + ], + "all_jobs": [ + "web-cms-pf-prd" + ] + }, + "webmotors-private/webmotors.360.view": { + "prd_jobs": [ + "prd-wm-service-360view" + ], + "all_jobs": [ + "prd-wm-service-360view" + ] + }, + "webmotors-private/webmotors.access": { + "prd_jobs": [ + "PI-Security/prd-ecs-api-access" + ], + "all_jobs": [ + "PI-Security/prd-ecs-api-access" + ] + }, + "webmotors-private/webmotors.account.api": { + "prd_jobs": [ + "Account-Api/account-api-prd" + ], + "all_jobs": [ + "Account-Api/account-api-prd" + ] + }, + "webmotors-private/webmotors.accounting": { + "prd_jobs": [ + "prd-ecs-api-accouting" + ], + "all_jobs": [ + "prd-ecs-api-accouting" + ] + }, + "webmotors-private/webmotors.ad-repriorization": { + "prd_jobs": [ + "Repriorizacao-UI/repriorizacao-ui-prd" + ], + "all_jobs": [ + "Repriorizacao-UI/repriorizacao-ui-prd" + ] + }, + "webmotors-private/webmotors.advertise.api": { + "prd_jobs": [ + "Advertise-Api/advertise-api-prd" + ], + "all_jobs": [ + "Advertise-Api/advertise-api-prd" + ] + }, + "webmotors-private/webmotors.api.catalogo": { + "prd_jobs": [ + "CatalogoAPI-PRD" + ], + "all_jobs": [ + "CatalogoAPI-PRD" + ] + }, + "webmotors-private/webmotors.api.charge": { + "prd_jobs": [ + "prd-api-charge" + ], + "all_jobs": [ + "prd-api-charge" + ] + }, + "webmotors-private/webmotors.api.commercial": { + "prd_jobs": [ + "pi.sales.api-commercial.branch.find.prd", + "pi.sales.api-commercial.network.list.prd", + "pi.sales.api-commercial.portifolio.find.prd", + "pi.sales.api-commercial.zone.list.prd" + ], + "all_jobs": [ + "pi.sales.api-commercial.branch.find.prd", + "pi.sales.api-commercial.network.list.prd", + "pi.sales.api-commercial.portifolio.find.prd", + "pi.sales.api-commercial.zone.list.prd" + ] + }, + "webmotors-private/webmotors.api.customer": { + "prd_jobs": [ + "pi.sales.api-customer.search.prd", + "pi.sales.api-customer.wallet.save.status.prd" + ], + "all_jobs": [ + "pi.sales.api-customer.search.prd", + 
"pi.sales.api-customer.wallet.save.status.prd" + ] + }, + "webmotors-private/webmotors.api.invoice": { + "prd_jobs": [ + "pi.sales.api-invoice.contract.detail.prd" + ], + "all_jobs": [ + "pi.sales.api-invoice.contract.detail.prd" + ] + }, + "webmotors-private/webmotors.api.lead": { + "prd_jobs": [ + "pi.sales.api-lead.contestation.contestated.prd", + "pi.sales.api-lead.contestation.details.prd", + "pi.sales.api-lead.contestation.find.prd", + "pi.sales.api-lead.contestation.reason.prd", + "pi.sales.api-lead.contestation.save.prd", + "pi.sales.api-lead.fixedprice.orderfind.prd", + "pi.sales.api-lead.invoice.find.prd", + "pi.sales.api-lead.log.find.prd", + "pi.sales.api-lead.log.save.prd", + "pi.sales.api-lead.management.duplicated.prd", + "pi.sales.api-lead.management.find.prd", + "pi.sales.api-lead.management.preview.prd", + "pi.sales.api-lead.management.refused.prd", + "pi.sales.api-lead.management.type.prd", + "pi.sales.api-lead.price.find.prd", + "pi.sales.api-lead.price.values.prd", + "pi.sales.api-lead.save.prd", + "pi.sales.api-lead.setup.find.prd", + "pi.sales.api-lead.setup.save.prd", + "pi.sales.api-lead.vehicle.brand.prd" + ], + "all_jobs": [ + "pi.sales.api-lead.contestation.contestated.prd", + "pi.sales.api-lead.contestation.details.prd", + "pi.sales.api-lead.contestation.find.prd", + "pi.sales.api-lead.contestation.reason.prd", + "pi.sales.api-lead.contestation.save.prd", + "pi.sales.api-lead.fixedprice.orderfind.prd", + "pi.sales.api-lead.invoice.find.prd", + "pi.sales.api-lead.log.find.prd", + "pi.sales.api-lead.log.save.prd", + "pi.sales.api-lead.management.duplicated.prd", + "pi.sales.api-lead.management.find.prd", + "pi.sales.api-lead.management.preview.prd", + "pi.sales.api-lead.management.refused.prd", + "pi.sales.api-lead.management.type.prd", + "pi.sales.api-lead.price.find.prd", + "pi.sales.api-lead.price.values.prd", + "pi.sales.api-lead.save.prd", + "pi.sales.api-lead.setup.find.prd", + "pi.sales.api-lead.setup.save.prd", + 
"pi.sales.api-lead.vehicle.brand.prd" + ] + }, + "webmotors-private/webmotors.api.legalperson.discount": { + "prd_jobs": [ + "legalperson-discount-api-prd" + ], + "all_jobs": [ + "legalperson-discount-api-prd" + ] + }, + "webmotors-private/webmotors.api.payment": { + "prd_jobs": [ + "pi-account-prd-api-integration", + "root-account-prd-api-payment", + "root-account-prd-api-proxy-payment", + "root-account-prd-payment-monitoring" + ], + "all_jobs": [ + "pi-account-prd-api-integration", + "root-account-prd-api-payment", + "root-account-prd-api-proxy-payment", + "root-account-prd-payment-monitoring" + ] + }, + "webmotors-private/webmotors.api.plans": { + "prd_jobs": [ + "pi.sales.api-plans.create.rule.prd", + "pi.sales.api-plans.exclude.financialentry.prd", + "pi.sales.api-plans.exclude.franchise.flag.prd", + "pi.sales.api-plans.get.leadprice.prd", + "pi.sales.api-plans.getavailablefranchise.prd", + "pi.sales.api-plans.getplan.prd", + "pi.sales.api-plans.save.consumption.prd", + "pi.sales.api-plans.save.franchise.prd" + ], + "all_jobs": [ + "pi.sales.api-plans.create.rule.prd", + "pi.sales.api-plans.exclude.financialentry.prd", + "pi.sales.api-plans.exclude.franchise.flag.prd", + "pi.sales.api-plans.get.leadprice.prd", + "pi.sales.api-plans.getavailablefranchise.prd", + "pi.sales.api-plans.getplan.prd", + "pi.sales.api-plans.save.consumption.prd", + "pi.sales.api-plans.save.franchise.prd" + ] + }, + "webmotors-private/webmotors.api.products": { + "prd_jobs": [ + "pi.sales.api-product.find.channel.prd", + "pi.sales.api-product.find.contract.prd", + "pi.sales.api-product.find.prd", + "pi.sales.api-product.find.region.prd", + "pi.sales.api-product.netsuite.find.prd", + "pi.sales.api-product.netsuite.sync.prd" + ], + "all_jobs": [ + "pi.sales.api-product.find.channel.prd", + "pi.sales.api-product.find.contract.prd", + "pi.sales.api-product.find.prd", + "pi.sales.api-product.find.region.prd", + "pi.sales.api-product.netsuite.find.prd", + 
"pi.sales.api-product.netsuite.sync.prd" + ] + }, + "webmotors-private/webmotors.api.sales": { + "prd_jobs": [ + "pi.sales.api-sales.customer.wallet.save.status.prd", + "pi.sales.api-sales.discount.prd", + "pi.sales.api-sales.order.find.billable.prd", + "pi.sales.api-sales.order.invoice.discount.prd", + "pi.sales.api-sales.pre.order.accept.prd", + "pi.sales.api-sales.pre.order.cancel.notificate.prd", + "pi.sales.api-sales.pre.order.list.prd", + "pi.sales.api-sales.pre.order.prd", + "pi.sales.api-sales.pre.order.store.prd" + ], + "all_jobs": [ + "pi.sales.api-sales.customer.wallet.save.status.prd", + "pi.sales.api-sales.discount.prd", + "pi.sales.api-sales.order.find.billable.prd", + "pi.sales.api-sales.order.invoice.discount.prd", + "pi.sales.api-sales.pre.order.accept.prd", + "pi.sales.api-sales.pre.order.cancel.notificate.prd", + "pi.sales.api-sales.pre.order.list.prd", + "pi.sales.api-sales.pre.order.prd", + "pi.sales.api-sales.pre.order.store.prd" + ] + }, + "webmotors-private/webmotors.api.sales.logger": { + "prd_jobs": [ + "pi.sales.api-sales-logger.prd" + ], + "all_jobs": [ + "pi.sales.api-sales-logger.prd" + ] + }, + "webmotors-private/webmotors.api.sponsor": { + "prd_jobs": [ + "pi.sales.api-sponsor.list-national-manager.prd", + "pi.sales.api-sponsor.list-regional-by-national-manager.prd" + ], + "all_jobs": [ + "pi.sales.api-sponsor.list-national-manager.prd", + "pi.sales.api-sponsor.list-regional-by-national-manager.prd" + ] + }, + "webmotors-private/webmotors.apis": { + "prd_jobs": [ + "webmotors-apis-santander-services-back-prd" + ], + "all_jobs": [ + "webmotors-apis-santander-services-back-prd" + ] + }, + "webmotors-private/webmotors.app.api": { + "prd_jobs": [ + "prd-webmotors-app-api" + ], + "all_jobs": [ + "prd-webmotors-app-api" + ] + }, + "webmotors-private/webmotors.app.pf.push": { + "prd_jobs": [ + "PushSend-Api/pushsend-api-prd" + ], + "all_jobs": [ + "PushSend-Api/pushsend-api-prd" + ] + }, + "webmotors-private/webmotors.atena": { + 
"prd_jobs": [ + "Arquitetura/atena-prd" + ], + "all_jobs": [ + "Arquitetura/atena-prd" + ] + }, + "webmotors-private/webmotors.buyer": { + "prd_jobs": [ + "prd-wm-buyer-api", + "prd-wm-buyer-api-services" + ], + "all_jobs": [ + "prd-wm-buyer-api", + "prd-wm-buyer-api-services" + ] }, "webmotors-private/webmotors.buyer.desktop.ui": { - "prd_jobs": ["prd-wm-buyer-lambda-desktop-ui"], - "all_jobs": ["azl-wm-buyer-lambda-desktop-ui", "azl-wm-buyer-lambda-desktop-ui-rollback", "hml-wm-buyer-lambda-desktop-ui", "hml-wm-buyer-lambda-desktop-ui-nodejs20", "hml-wm-buyer-lambda-desktop-ui-rollback", "prd-wm-buyer-lambda-desktop-ui", "prd-wm-buyer-lambda-desktop-ui-rollback"] + "prd_jobs": [ + "prd-wm-buyer-lambda-desktop-ui", + "prd-wm-buyer-lambda-desktop-ui-rollback" + ], + "all_jobs": [ + "prd-wm-buyer-lambda-desktop-ui", + "prd-wm-buyer-lambda-desktop-ui-rollback" + ] }, - "webmotors-private/webmotors.portal.ui": { - "prd_jobs": ["prd-wm-buyer-lambda-home-ui"], - "all_jobs": ["azl-wm-buyer-lambda-home-ui", "hml-wm-buyer-lambda-home-ui", "prd-wm-buyer-lambda-home-ui"] + "webmotors-private/webmotors.buyer.fairs.config": { + "prd_jobs": [ + "prd-wm-buyer-fairs-config" + ], + "all_jobs": [ + "prd-wm-buyer-fairs-config" + ] }, "webmotors-private/webmotors.buyer.ui": { - "prd_jobs": ["prd-wm-buyer-lambda-mobile-ui"], - "all_jobs": ["azl-wm-buyer-lambda-mobile-ui", "azl-wm-buyer-lambda-mobile-ui-rollback", "hml-wm-buyer-lambda-mobile-ui", "hml-wm-buyer-lambda-mobile-ui-rollback", "prd-wm-buyer-lambda-mobile-ui", "prd-wm-buyer-lambda-mobile-ui-rollback"] + "prd_jobs": [ + "prd-wm-buyer-lambda-mobile-ui", + "prd-wm-buyer-lambda-mobile-ui-rollback" + ], + "all_jobs": [ + "prd-wm-buyer-lambda-mobile-ui", + "prd-wm-buyer-lambda-mobile-ui-rollback" + ] + }, + "webmotors-private/webmotors.catalog": { + "prd_jobs": [ + "ExportBrazil-PRD" + ], + "all_jobs": [ + "ExportBrazil-PRD" + ] + }, + "webmotors-private/webmotors.catalogo": { + "prd_jobs": [ + "Catalogo-service-v8-and-v9-prd" + ], 
+ "all_jobs": [ + "Catalogo-service-v8-and-v9-prd" + ] + }, + "webmotors-private/webmotors.catalogo.jobs": { + "prd_jobs": [ + "catalogo-console-dotnet-prd" + ], + "all_jobs": [ + "catalogo-console-dotnet-prd" + ] }, "webmotors-private/webmotors.catalogo.next.ui": { - "prd_jobs": ["catalogo-next-ui-prd"], - "all_jobs": ["catalogo-next-ui-azl", "catalogo-next-ui-hml", "catalogo-next-ui-prd"] + "prd_jobs": [ + "catalogo-next-ui-prd" + ], + "all_jobs": [ + "catalogo-next-ui-prd" + ] + }, + "webmotors-private/webmotors.catalogo.ui": { + "prd_jobs": [ + "prd-catalogo-ui", + "prd-sitemap-catalogo" + ], + "all_jobs": [ + "prd-catalogo-ui", + "prd-sitemap-catalogo" + ] + }, + "webmotors-private/webmotors.certifiedpurchase": { + "prd_jobs": [ + "pi.sales.api-certifiedpurchase.change.status.prd", + "pi.sales.api-certifiedpurchase.find.all.prd", + "pi.sales.api-certifiedpurchase.find.status.prd", + "pi.sales.api-certifiedpurchase.inspection.find.prd" + ], + "all_jobs": [ + "pi.sales.api-certifiedpurchase.change.status.prd", + "pi.sales.api-certifiedpurchase.find.all.prd", + "pi.sales.api-certifiedpurchase.find.status.prd", + "pi.sales.api-certifiedpurchase.inspection.find.prd" + ] + }, + "webmotors-private/webmotors.chat.api": { + "prd_jobs": [ + "Chat-Api/chat-api-prd", + "Chat-Api/chat-prd-python" + ], + "all_jobs": [ + "Chat-Api/chat-api-prd", + "Chat-Api/chat-prd-python" + ] + }, + "webmotors-private/webmotors.cockpit.anonymization": { + "prd_jobs": [ + "ckp.anonymization.service.lambda.prd", + "cockpit-anonymization-api-prd" + ], + "all_jobs": [ + "ckp.anonymization.service.lambda.prd", + "cockpit-anonymization-api-prd" + ] + }, + "webmotors-private/webmotors.cockpit.api.access": { + "prd_jobs": [ + "cockpit-api-access-prd" + ], + "all_jobs": [ + "cockpit-api-access-prd" + ] + }, + "webmotors-private/webmotors.cockpit.api.dealer": { + "prd_jobs": [ + "cockpit-api-dealer-prd" + ], + "all_jobs": [ + "cockpit-api-dealer-prd" + ] + }, + 
"webmotors-private/webmotors.cockpit.api.group.dealer": { + "prd_jobs": [ + "cockpit-api-group-dealer-prd" + ], + "all_jobs": [ + "cockpit-api-group-dealer-prd" + ] + }, + "webmotors-private/webmotors.cockpit.api.lead.extract": { + "prd_jobs": [ + "cockpit-api-lead-extract-prd" + ], + "all_jobs": [ + "cockpit-api-lead-extract-prd" + ] + }, + "webmotors-private/webmotors.cockpit.api.login": { + "prd_jobs": [ + "cockpit-api-login-prd" + ], + "all_jobs": [ + "cockpit-api-login-prd" + ] + }, + "webmotors-private/webmotors.cockpit.api.menu": { + "prd_jobs": [ + "cockpit-api-menu-prd" + ], + "all_jobs": [ + "cockpit-api-menu-prd" + ] + }, + "webmotors-private/webmotors.cockpit.api.plan": { + "prd_jobs": [ + "cockpit-api-plan-prd" + ], + "all_jobs": [ + "cockpit-api-plan-prd" + ] + }, + "webmotors-private/webmotors.cockpit.api.product": { + "prd_jobs": [ + "cockpit-api-product-prd" + ], + "all_jobs": [ + "cockpit-api-product-prd" + ] + }, + "webmotors-private/webmotors.cockpit.api.profile": { + "prd_jobs": [ + "cockpit-api-profile-prd" + ], + "all_jobs": [ + "cockpit-api-profile-prd" + ] + }, + "webmotors-private/webmotors.cockpit.api.statement": { + "prd_jobs": [ + "money-api-statement-prd", + "money-statement-service-prd" + ], + "all_jobs": [ + "money-api-statement-prd", + "money-statement-service-prd" + ] + }, + "webmotors-private/webmotors.cockpit.api.term": { + "prd_jobs": [ + "cockpit-api-term-prd" + ], + "all_jobs": [ + "cockpit-api-term-prd" + ] + }, + "webmotors-private/webmotors.cockpit.api.user": { + "prd_jobs": [ + "cockpit-api-user-prd" + ], + "all_jobs": [ + "cockpit-api-user-prd" + ] + }, + "webmotors-private/webmotors.cockpit.authorizer": { + "prd_jobs": [ + "ckp.authorizer.api.prd", + "ckp.authorizer.root.api.prd" + ], + "all_jobs": [ + "ckp.authorizer.api.prd", + "ckp.authorizer.root.api.prd" + ] + }, + "webmotors-private/webmotors.cockpit.autoavaliar": { + "prd_jobs": [ + "cockpit-autoavaliar-api-prd" + ], + "all_jobs": [ + "cockpit-autoavaliar-api-prd" 
+ ] + }, + "webmotors-private/webmotors.cockpit.backoffice.offers.ui": { + "prd_jobs": [ + "cockpit-backoffice-offers-ui-prd" + ], + "all_jobs": [ + "cockpit-backoffice-offers-ui-prd" + ] + }, + "webmotors-private/webmotors.cockpit.backoffice.ui": { + "prd_jobs": [ + "cockpit-backoffice-ui-prd" + ], + "all_jobs": [ + "cockpit-backoffice-ui-prd" + ] + }, + "webmotors-private/webmotors.cockpit.crm.mfe.configuration.ai.ui": { + "prd_jobs": [ + "cockpit.crm.mfe.configuration.ai.ui.prd" + ], + "all_jobs": [ + "cockpit.crm.mfe.configuration.ai.ui.prd" + ] + }, + "webmotors-private/webmotors.cockpit.crm.mfe.configuration.ui": { + "prd_jobs": [ + "crm.mfe.configuration.ui.prd" + ], + "all_jobs": [ + "crm.mfe.configuration.ui.prd" + ] + }, + "webmotors-private/webmotors.cockpit.crm.mfe.dashboard.ui": { + "prd_jobs": [ + "crm.mfe.dashboard.ui.prd" + ], + "all_jobs": [ + "crm.mfe.dashboard.ui.prd" + ] + }, + "webmotors-private/webmotors.cockpit.crm.ui": { + "prd_jobs": [ + "cockpit-crm-ui-prd", + "cockpit-crm-ui-prd-rollback" + ], + "all_jobs": [ + "cockpit-crm-ui-prd", + "cockpit-crm-ui-prd-rollback" + ] + }, + "webmotors-private/webmotors.cockpit.dependabot.analyzer": { + "prd_jobs": [ + "ckp.analyzer.dependabot.prd" + ], + "all_jobs": [ + "ckp.analyzer.dependabot.prd" + ] + }, + "webmotors-private/webmotors.cockpit.i18n": { + "prd_jobs": [ + "ckp.components.i18n.library.prd", + "ckp.upload.files.i18n.s3.prd" + ], + "all_jobs": [ + "ckp.components.i18n.library.prd", + "ckp.upload.files.i18n.s3.prd" + ] + }, + "webmotors-private/webmotors.cockpit.ia.ui": { + "prd_jobs": [ + "webmotors-cockpit-ia-ui-prd" + ], + "all_jobs": [ + "webmotors-cockpit-ia-ui-prd" + ] + }, + "webmotors-private/webmotors.cockpit.inspection.ui": { + "prd_jobs": [ + "Cockpit.Inspection/prd-cockpit-inspection-ui" + ], + "all_jobs": [ + "Cockpit.Inspection/prd-cockpit-inspection-ui" + ] + }, + "webmotors-private/webmotors.cockpit.integrador.api": { + "prd_jobs": [ + "cockpit-integration-api-channel-prd" + 
], + "all_jobs": [ + "cockpit-integration-api-channel-prd" + ] + }, + "webmotors-private/webmotors.cockpit.integrador.api.adv": { + "prd_jobs": [ + "cockpit-integration-advertisement-prd" + ], + "all_jobs": [ + "cockpit-integration-advertisement-prd" + ] + }, + "webmotors-private/webmotors.cockpit.integrador.api.adv.sync": { + "prd_jobs": [ + "cockpit-integration-adv-sync-prd" + ], + "all_jobs": [ + "cockpit-integration-adv-sync-prd" + ] + }, + "webmotors-private/webmotors.cockpit.integrador.api.lead": { + "prd_jobs": [ + "cockpit-integration-leads-prd" + ], + "all_jobs": [ + "cockpit-integration-leads-prd" + ] + }, + "webmotors-private/webmotors.cockpit.integrador.callback": { + "prd_jobs": [ + "cockpit-integration-callback-prd" + ], + "all_jobs": [ + "cockpit-integration-callback-prd" + ] + }, + "webmotors-private/webmotors.cockpit.integrador.updater": { + "prd_jobs": [ + "cockpit-integration-new-updater-prd" + ], + "all_jobs": [ + "cockpit-integration-new-updater-prd" + ] + }, + "webmotors-private/webmotors.cockpit.integration.pluginpro.loader": { + "prd_jobs": [ + "cockpit-integration-pluginpro-loader-prd" + ], + "all_jobs": [ + "cockpit-integration-pluginpro-loader-prd" + ] + }, + "webmotors-private/webmotors.cockpit.invoice.ui": { + "prd_jobs": [ + "cockpit-invoice-ui-prd" + ], + "all_jobs": [ + "cockpit-invoice-ui-prd" + ] + }, + "webmotors-private/webmotors.cockpit.landingpages.ui": { + "prd_jobs": [ + "cockpit-landing-pages-ui-prd" + ], + "all_jobs": [ + "cockpit-landing-pages-ui-prd" + ] + }, + "webmotors-private/webmotors.cockpit.leads.ui": { + "prd_jobs": [ + "cockpit-leads-ui-prd" + ], + "all_jobs": [ + "cockpit-leads-ui-prd" + ] + }, + "webmotors-private/webmotors.cockpit.library": { + "prd_jobs": [ + "ckp.library.prd" + ], + "all_jobs": [ + "ckp.library.prd" + ] + }, + "webmotors-private/webmotors.cockpit.maisfidelidade.api": { + "prd_jobs": [ + "mais-fidelidade-chamas-prd-api", + "mais-fidelidade-lojas-prd-api" + ], + "all_jobs": [ + 
"mais-fidelidade-chamas-prd-api", + "mais-fidelidade-lojas-prd-api" + ] + }, + "webmotors-private/webmotors.cockpit.maisfidelidade.import": { + "prd_jobs": [ + "mais-fidelidade-importador-lojas-prd-api" + ], + "all_jobs": [ + "mais-fidelidade-importador-lojas-prd-api" + ] + }, + "webmotors-private/webmotors.cockpit.maisfidelidade.import.chamas": { + "prd_jobs": [ + "mais-fidelidade-lojas-importador-chamas-prd-api" + ], + "all_jobs": [ + "mais-fidelidade-lojas-importador-chamas-prd-api" + ] + }, + "webmotors-private/webmotors.cockpit.maisfidelidade.import.oportunidades": { + "prd_jobs": [ + "mais-fidelidade-lojas-importador-oportunidade-prd-api" + ], + "all_jobs": [ + "mais-fidelidade-lojas-importador-oportunidade-prd-api" + ] + }, + "webmotors-private/webmotors.cockpit.maisfidelidade.ui": { + "prd_jobs": [ + "mais-fidelidade-lojas-prd-ui" + ], + "all_jobs": [ + "mais-fidelidade-lojas-prd-ui" + ] + }, + "webmotors-private/webmotors.cockpit.mfe.integration.ui": { + "prd_jobs": [ + "cockpit-mfe-integration-ui-prd" + ], + "all_jobs": [ + "cockpit-mfe-integration-ui-prd" + ] + }, + "webmotors-private/webmotors.cockpit.notifications.api": { + "prd_jobs": [ + "central-de-notificacoes-api-prd", + "central-de-notificacoes-prd" + ], + "all_jobs": [ + "central-de-notificacoes-api-prd", + "central-de-notificacoes-prd" + ] + }, + "webmotors-private/webmotors.cockpit.panel.api": { + "prd_jobs": [ + "cockpit-panel-api-prd" + ], + "all_jobs": [ + "cockpit-panel-api-prd" + ] + }, + "webmotors-private/webmotors.cockpit.panel.etl": { + "prd_jobs": [ + "cockpit-panel-etl-prd", + "cockpit-panel-root-etl-prd" + ], + "all_jobs": [ + "cockpit-panel-etl-prd", + "cockpit-panel-root-etl-prd" + ] + }, + "webmotors-private/webmotors.cockpit.sale": { + "prd_jobs": [ + "ckp.sale.pj.api.prd", + "ckp.sale.root.api.prd" + ], + "all_jobs": [ + "ckp.sale.pj.api.prd", + "ckp.sale.root.api.prd" + ] + }, + "webmotors-private/webmotors.cockpit.signature.api": { + "prd_jobs": [ + 
"prd-cockpit-signature-api" + ], + "all_jobs": [ + "prd-cockpit-signature-api" + ] + }, + "webmotors-private/webmotors.cockpit.signature.ui": { + "prd_jobs": [ + "prd-cockpit-signature-ui" + ], + "all_jobs": [ + "prd-cockpit-signature-ui" + ] + }, + "webmotors-private/webmotors.cockpit.stock.api": { + "prd_jobs": [ + "prd-cockpit-stock", + "prd-cockpit-stock-i18n" + ], + "all_jobs": [ + "prd-cockpit-stock", + "prd-cockpit-stock-i18n" + ] + }, + "webmotors-private/webmotors.cockpit.stock.ui": { + "prd_jobs": [ + "prd-cockpit-stock-ui" + ], + "all_jobs": [ + "prd-cockpit-stock-ui" + ] + }, + "webmotors-private/webmotors.cockpit.stock.ui.channel": { + "prd_jobs": [ + "prd-cockpit-stock-ui-channel" + ], + "all_jobs": [ + "prd-cockpit-stock-ui-channel" + ] + }, + "webmotors-private/webmotors.cockpit.store.api": { + "prd_jobs": [ + "cockpit-store-api-campaign-prd", + "cockpit-store-api-prd", + "cockpit-store-services-prd", + "root-cockpit-store-api-campaign-module-prd" + ], + "all_jobs": [ + "cockpit-store-api-campaign-prd", + "cockpit-store-api-prd", + "cockpit-store-services-prd", + "root-cockpit-store-api-campaign-module-prd" + ] + }, + "webmotors-private/webmotors.cockpit.store.ui": { + "prd_jobs": [ + "cockpit-store-ui-prd" + ], + "all_jobs": [ + "cockpit-store-ui-prd" + ] + }, + "webmotors-private/webmotors.cockpit.subscription.ui": { + "prd_jobs": [ + "prd-wm-cockpit-subscription-ui" + ], + "all_jobs": [ + "prd-wm-cockpit-subscription-ui" + ] + }, + "webmotors-private/webmotors.cockpit.ui": { + "prd_jobs": [ + "cockpit-ui-prd" + ], + "all_jobs": [ + "cockpit-ui-prd" + ] + }, + "webmotors-private/webmotors.cockpit.universidade.api": { + "prd_jobs": [ + "universidade-api-prd" + ], + "all_jobs": [ + "universidade-api-prd" + ] + }, + "webmotors-private/webmotors.cockpit.universidade.csv.salesforce.lambda": { + "prd_jobs": [ + "university-send-csv-to-mailing-prd" + ], + "all_jobs": [ + "university-send-csv-to-mailing-prd" + ] + }, + 
"webmotors-private/webmotors.cockpit.universidade.hub.sales.info.lambda": { + "prd_jobs": [ + "universidade-sales-info-lambda-prd" + ], + "all_jobs": [ + "universidade-sales-info-lambda-prd" + ] + }, + "webmotors-private/webmotors.cockpit.universidade.ui": { + "prd_jobs": [ + "universidade-ui-prd" + ], + "all_jobs": [ + "universidade-ui-prd" + ] + }, + "webmotors-private/webmotors.cockpit.wallet.api": { + "prd_jobs": [ + "cockpit-wallet-api-prd-old", + "cockpit-wallet-prd", + "cockpit-wallet-services-prd" + ], + "all_jobs": [ + "cockpit-wallet-api-prd-old", + "cockpit-wallet-prd", + "cockpit-wallet-services-prd" + ] + }, + "webmotors-private/webmotors.cockpit.wallet.ui": { + "prd_jobs": [ + "cockpit-wallet-ui-prd" + ], + "all_jobs": [ + "cockpit-wallet-ui-prd" + ] + }, + "webmotors-private/webmotors.cockpit.webtv.ui": { + "prd_jobs": [ + "webmotors-cockpit-webtv-ui-prd" + ], + "all_jobs": [ + "webmotors-cockpit-webtv-ui-prd" + ] + }, + "webmotors-private/webmotors.cognito.api": { + "prd_jobs": [ + "LoginPF-Api/loginpf-api-prd", + "LoginPF-Api/loginpf-whatsapp-prd" + ], + "all_jobs": [ + "LoginPF-Api/loginpf-api-prd", + "LoginPF-Api/loginpf-whatsapp-prd" + ] + }, + "webmotors-private/webmotors.cognito.ui": { + "prd_jobs": [ + "LoginPF-UI/loginpf-ui-prd", + "PI-Security/login-ui/login-ui-prd" + ], + "all_jobs": [ + "LoginPF-UI/loginpf-ui-prd", + "PI-Security/login-ui/login-ui-prd" + ] + }, + "webmotors-private/webmotors.consumer.storybook": { + "prd_jobs": [ + "StoryBook-UI/storybook-ui-prd" + ], + "all_jobs": [ + "StoryBook-UI/storybook-ui-prd" + ] + }, + "webmotors-private/webmotors.coupon.api": { + "prd_jobs": [ + "Coupon-Api/coupon-api-prd" + ], + "all_jobs": [ + "Coupon-Api/coupon-api-prd" + ] + }, + "webmotors-private/webmotors.coupon.ui": { + "prd_jobs": [ + "Coupon-UI/coupon-ui-prd" + ], + "all_jobs": [ + "Coupon-UI/coupon-ui-prd" + ] + }, + "webmotors-private/webmotors.databricks.previa": { + "prd_jobs": [ + "pi.sales.api-billing.preview.prd" + ], + 
"all_jobs": [ + "pi.sales.api-billing.preview.prd" + ] + }, + "webmotors-private/webmotors.design.system.eleanor": { + "prd_jobs": [ + "eleanor-ui-prd" + ], + "all_jobs": [ + "eleanor-ui-prd" + ] + }, + "webmotors-private/webmotors.dynamic.price.api": { + "prd_jobs": [ + "DynamicPrice-Api/dynamic-price-api-prd" + ], + "all_jobs": [ + "DynamicPrice-Api/dynamic-price-api-prd" + ] + }, + "webmotors-private/webmotors.entity.state": { + "prd_jobs": [ + "pi.sales.api-entity.notify.prd", + "pi.sales.api-entity.save.prd", + "pi.sales.api-entity.send.prd" + ], + "all_jobs": [ + "pi.sales.api-entity.notify.prd", + "pi.sales.api-entity.save.prd", + "pi.sales.api-entity.send.prd" + ] + }, + "webmotors-private/webmotors.etl": { + "prd_jobs": [ + "PI-Security/prd-service-etl-internauta", + "webmotors-etl-services-financiamento-back-prd" + ], + "all_jobs": [ + "PI-Security/prd-service-etl-internauta", + "webmotors-etl-services-financiamento-back-prd" + ] + }, + "webmotors-private/webmotors.etl.services.mailmessage": { + "prd_jobs": [ + "prd-api-mail-message" + ], + "all_jobs": [ + "prd-api-mail-message" + ] + }, + "webmotors-private/webmotors.fairs.opener": { + "prd_jobs": [ + "wm-fairs-opener-prd" + ], + "all_jobs": [ + "wm-fairs-opener-prd" + ] + }, + "webmotors-private/webmotors.financial.ui.bank": { + "prd_jobs": [ + "pi.money-financial-bank-ui-prd" + ], + "all_jobs": [ + "pi.money-financial-bank-ui-prd" + ] + }, + "webmotors-private/webmotors.financiamento.lib": { + "prd_jobs": [ + "webmotors-financing-lib-npm-front-prd" + ], + "all_jobs": [ + "webmotors-financing-lib-npm-front-prd" + ] + }, + "webmotors-private/webmotors.financiamento.santander.api": { + "prd_jobs": [ + "webmotors-financiamento-santander-api-back-prd" + ], + "all_jobs": [ + "webmotors-financiamento-santander-api-back-prd" + ] + }, + "webmotors-private/webmotors.financing-platform": { + "prd_jobs": [ + "webmotors-financing-platform-front-prd" + ], + "all_jobs": [ + "webmotors-financing-platform-front-prd" + 
] + }, + "webmotors-private/webmotors.financing.auth": { + "prd_jobs": [ + "webmotors-financing-auth-back-prd" + ], + "all_jobs": [ + "webmotors-financing-auth-back-prd" + ] + }, + "webmotors-private/webmotors.financing.backoffice": { + "prd_jobs": [ + "webmotors-financing-backoffice-back-prd" + ], + "all_jobs": [ + "webmotors-financing-backoffice-back-prd" + ] + }, + "webmotors-private/webmotors.financing.dealerships": { + "prd_jobs": [ + "webmotors-financing-dealerships-back-prd" + ], + "all_jobs": [ + "webmotors-financing-dealerships-back-prd" + ] + }, + "webmotors-private/webmotors.financing.extensions": { + "prd_jobs": [ + "webmotors-financing-packages-back-prd" + ], + "all_jobs": [ + "webmotors-financing-packages-back-prd" + ] + }, + "webmotors-private/webmotors.financing.integrations": { + "prd_jobs": [ + "webmotors-financing-integrations-back-prd" + ], + "all_jobs": [ + "webmotors-financing-integrations-back-prd" + ] + }, + "webmotors-private/webmotors.financing.intermediator": { + "prd_jobs": [ + "webmotors-financing-intermediator-back-prd" + ], + "all_jobs": [ + "webmotors-financing-intermediator-back-prd" + ] + }, + "webmotors-private/webmotors.financing.packages": { + "prd_jobs": [ + "webmotors-financing-packages-back-prd" + ], + "all_jobs": [ + "webmotors-financing-packages-back-prd" + ] + }, + "webmotors-private/webmotors.financing.pionner": { + "prd_jobs": [ + "webmotors-financing-pioneer-back-prd" + ], + "all_jobs": [ + "webmotors-financing-pioneer-back-prd" + ] + }, + "webmotors-private/webmotors.financing.rules": { + "prd_jobs": [ + "webmotors-financing-rules-back-prd" + ], + "all_jobs": [ + "webmotors-financing-rules-back-prd" + ] + }, + "webmotors-private/webmotors.fipe.api": { + "prd_jobs": [ + "fipe-api-dotnet-prd", + "fipe-api-node-prd", + "fipe-average-bike-prd", + "fipe-average-car-prd", + "fipe-parse-file-prd", + "fipe-services-dotnet-prd" + ], + "all_jobs": [ + "fipe-api-dotnet-prd", + "fipe-api-node-prd", + "fipe-average-bike-prd", + 
"fipe-average-car-prd", + "fipe-parse-file-prd", + "fipe-services-dotnet-prd" + ] }, "webmotors-private/webmotors.fipe.next.ui": { - "prd_jobs": ["fipe-next-ui-prd"], - "all_jobs": ["fipe-lambda-edge-hml", "fipe-next-ui-azl", "fipe-next-ui-hml", "fipe-next-ui-prd"] + "prd_jobs": [ + "fipe-next-ui-prd" + ], + "all_jobs": [ + "fipe-next-ui-prd" + ] + }, + "webmotors-private/webmotors.fipe.ui": { + "prd_jobs": [ + "buyer-fipe-ui-prd" + ], + "all_jobs": [ + "buyer-fipe-ui-prd" + ] + }, + "webmotors-private/webmotors.garagem": { + "prd_jobs": [ + "Garagem-UI/garagem-ui-prd" + ], + "all_jobs": [ + "Garagem-UI/garagem-ui-prd" + ] + }, + "webmotors-private/webmotors.group.fiscal.files": { + "prd_jobs": [ + "api-fiscal-files-prd", + "prd-service-arquivos-grupos", + "prd-service-validar-arquivos-grupos" + ], + "all_jobs": [ + "api-fiscal-files-prd", + "prd-service-arquivos-grupos", + "prd-service-validar-arquivos-grupos" + ] + }, + "webmotors-private/webmotors.hub.pipelines": { + "prd_jobs": [ + "pi.money-etl-faturaleadmodel-prd", + "pi.money-etl-faturapatrocinio-prd", + "pi.money-lote-baixamanual-etl-prd", + "pi.money-lote.reagendarcobranca-etl-prd", + "pi.money-processos-prompt-etl-prd", + "pi.sales-etl-LancarVendaAnuncioSite-prd", + "pi.sales-etl-associacaofeirao-prd", + "pi.sales-etl-ativacao.anuncio-prd", + "pi.sales-etl.venda.automatica.prd", + "pi.sales-etl.venda.automatica.start.stop.prd", + "pi.sales-lote.incluirvenda-prd", + "pi.sales-lote.substituirvenda-prd", + "pi.sales.etl.plano-controle-prd", + "prd-front-hub", + "sales.controlar-pendencias.servico.prd", + "sales.lote-cancelar-venda.servico.prd", + "sales.lote-periodo-desconto.servico.prd", + "sales.processo-reajusteIGPM.servico.prd" + ], + "all_jobs": [ + "pi.money-etl-faturaleadmodel-prd", + "pi.money-etl-faturapatrocinio-prd", + "pi.money-lote-baixamanual-etl-prd", + "pi.money-lote.reagendarcobranca-etl-prd", + "pi.money-processos-prompt-etl-prd", + "pi.sales-etl-LancarVendaAnuncioSite-prd", + 
"pi.sales-etl-associacaofeirao-prd", + "pi.sales-etl-ativacao.anuncio-prd", + "pi.sales-etl.venda.automatica.prd", + "pi.sales-etl.venda.automatica.start.stop.prd", + "pi.sales-lote.incluirvenda-prd", + "pi.sales-lote.substituirvenda-prd", + "pi.sales.etl.plano-controle-prd", + "prd-front-hub", + "sales.controlar-pendencias.servico.prd", + "sales.lote-cancelar-venda.servico.prd", + "sales.lote-periodo-desconto.servico.prd", + "sales.processo-reajusteIGPM.servico.prd" + ] + }, + "webmotors-private/webmotors.integrador": { + "prd_jobs": [ + "cockpit-integrator-santander-leadfile-prd", + "cockpit-integrator-santander-sendlead-prd" + ], + "all_jobs": [ + "cockpit-integrator-santander-leadfile-prd", + "cockpit-integrator-santander-sendlead-prd" + ] + }, + "webmotors-private/webmotors.jira.automation": { + "prd_jobs": [ + "PI-Security/prd-lambda-jira-automation" + ], + "all_jobs": [ + "PI-Security/prd-lambda-jira-automation" + ] + }, + "webmotors-private/webmotors.landingpages.ui": { + "prd_jobs": [ + "pf-landing-pages-ui-prd" + ], + "all_jobs": [ + "pf-landing-pages-ui-prd" + ] + }, + "webmotors-private/webmotors.lead.api": { + "prd_jobs": [ + "pi.sales.api-lead.contestation.prd", + "pi.sales.api-lead.list.leadbyad.prd", + "pi.sales.api-lead.listLogProcess.prd", + "pi.sales.api-lead.upload.prd" + ], + "all_jobs": [ + "pi.sales.api-lead.contestation.prd", + "pi.sales.api-lead.list.leadbyad.prd", + "pi.sales.api-lead.listLogProcess.prd", + "pi.sales.api-lead.upload.prd" + ] + }, + "webmotors-private/webmotors.lojaoficial.api": { + "prd_jobs": [ + "prd-loja-oficial-api-deletar" + ], + "all_jobs": [ + "prd-loja-oficial-api-deletar" + ] + }, + "webmotors-private/webmotors.maisfidelidade.backoffice.api": { + "prd_jobs": [ + "mais-fidelidade-lojas-backoffice-prd-api" + ], + "all_jobs": [ + "mais-fidelidade-lojas-backoffice-prd-api" + ] + }, + "webmotors-private/webmotors.maisfidelidade.backoffice.ui": { + "prd_jobs": [ + "mais-fidelidade-lojas-backoffice-prd-ui" + ], + 
"all_jobs": [ + "mais-fidelidade-lojas-backoffice-prd-ui" + ] + }, + "webmotors-private/webmotors.money.pipelines": { + "prd_jobs": [ + "PI-Money/money-criticality-dependabot-prd", + "PI-Money/prd-ecs-santander-integration", + "PI-Money/prd-fiscal-links-lambda-customer", + "PI-Money/prd-invoice-integration", + "prd-api-santander-integration-payments", + "prd-api-webhook-apple-refund", + "prd-api-webhook-apple-router", + "prd-ecs-api-payment", + "prd-nfe-cancel-fakecustomer-lambda-job", + "prd-service-arquivocnab", + "prd-service-arquivos-grupos", + "prd-service-baixa-etl-pagamento", + "prd-service-processar-cnab-etl-retorno", + "prd-service-validar-arquivos-grupos", + "root-account-prd-api-solicitation" + ], + "all_jobs": [ + "PI-Money/money-criticality-dependabot-prd", + "PI-Money/prd-ecs-santander-integration", + "PI-Money/prd-fiscal-links-lambda-customer", + "PI-Money/prd-invoice-integration", + "prd-api-santander-integration-payments", + "prd-api-webhook-apple-refund", + "prd-api-webhook-apple-router", + "prd-ecs-api-payment", + "prd-nfe-cancel-fakecustomer-lambda-job", + "prd-service-arquivocnab", + "prd-service-arquivos-grupos", + "prd-service-baixa-etl-pagamento", + "prd-service-processar-cnab-etl-retorno", + "prd-service-validar-arquivos-grupos", + "root-account-prd-api-solicitation" + ] + }, + "webmotors-private/webmotors.netsuite.integration": { + "prd_jobs": [ + "pi.sales.api-product.netsuite.send.prd" + ], + "all_jobs": [ + "pi.sales.api-product.netsuite.send.prd" + ] + }, + "webmotors-private/webmotors.next.ui": { + "prd_jobs": [ + "prd-wm-buyer-home-frontend-ui", + "prd-wm-buyer-search-frontend-ui", + "prd-wm-buyer-subscriptions-frontend-ui" + ], + "all_jobs": [ + "prd-wm-buyer-home-frontend-ui", + "prd-wm-buyer-search-frontend-ui", + "prd-wm-buyer-subscriptions-frontend-ui" + ] + }, + "webmotors-private/webmotors.pandora": { + "prd_jobs": [ + "ecs-pandora-search-prd", + "pandora-publisher-advertiser-prd", + "pandora-score-prd" + ], + "all_jobs": [ + 
"ecs-pandora-search-prd", + "pandora-publisher-advertiser-prd", + "pandora-score-prd" + ] + }, + "webmotors-private/webmotors.pandora.monitoring": { + "prd_jobs": [ + "pandora-monitoring-prd" + ], + "all_jobs": [ + "pandora-monitoring-prd" + ] + }, + "webmotors-private/webmotors.parcer.ia.mcp.financing": { + "prd_jobs": [ + "prd-mcp-financiamento" + ], + "all_jobs": [ + "prd-mcp-financiamento" + ] + }, + "webmotors-private/webmotors.parcer.ia.mcp.leads": { + "prd_jobs": [ + "prd-mcp-lead" + ], + "all_jobs": [ + "prd-mcp-lead" + ] + }, + "webmotors-private/webmotors.parcer.ia.mcp.vehicle_discovery": { + "prd_jobs": [ + "prd-mcp-anuncio" + ], + "all_jobs": [ + "prd-mcp-anuncio" + ] + }, + "webmotors-private/webmotors.payment": { + "prd_jobs": [ + "root-account-ios-prd-receive-payment" + ], + "all_jobs": [ + "root-account-ios-prd-receive-payment" + ] + }, + "webmotors-private/webmotors.pf": { + "prd_jobs": [ + "android-pf-prd-firebase", + "android-pf-prd-playstore", + "android-pf-prd-promotion", + "ios-pf-prd-firebase", + "ios-pf-prd-promotion", + "ios-pf-prd-testflight", + "web-cms-pf-prd", + "webservicos-web-prd" + ], + "all_jobs": [ + "android-pf-prd-firebase", + "android-pf-prd-playstore", + "android-pf-prd-promotion", + "ios-pf-prd-firebase", + "ios-pf-prd-promotion", + "ios-pf-prd-testflight", + "web-cms-pf-prd", + "webservicos-web-prd" + ] + }, + "webmotors-private/webmotors.phone.tracking": { + "prd_jobs": [ + "ckp.event.bridge.enable.disable.prd", + "cockpit-integration-phone-tracking-lambda-prd", + "phone-tracking-api-prd" + ], + "all_jobs": [ + "ckp.event.bridge.enable.disable.prd", + "cockpit-integration-phone-tracking-lambda-prd", + "phone-tracking-api-prd" + ] + }, + "webmotors-private/webmotors.physical.person.ui": { + "prd_jobs": [ + "prd-ui-physical-person" + ], + "all_jobs": [ + "prd-ui-physical-person" + ] + }, + "webmotors-private/webmotors.pi.sales.components": { + "prd_jobs": [ + "sales.extension.components.prd" + ], + "all_jobs": [ + 
"sales.extension.components.prd" + ] + }, + "webmotors-private/webmotors.pk.security.pipes": { + "prd_jobs": [ + "PI-Security/login-ui/login-ui-prd", + "PI-Security/prd-api-ad-zendesk", + "PI-Security/prd-ecs-api-access", + "PI-Security/prd-ecs-api-upld", + "PI-Security/prd-ecs-api-zendesk", + "PI-Security/prd-etl-upld-upload-process", + "PI-Security/prd-lambda-access-api", + "PI-Security/prd-lambda-address-api", + "PI-Security/prd-lambda-annotation", + "PI-Security/prd-lambda-antifraud-api", + "PI-Security/prd-lambda-api-ad", + "PI-Security/prd-lambda-api-cognito", + "PI-Security/prd-lambda-api-legal-person", + "PI-Security/prd-lambda-api-natural-person", + "PI-Security/prd-lambda-audit", + "PI-Security/prd-lambda-authorization", + "PI-Security/prd-lambda-bureau", + "PI-Security/prd-lambda-bureau-javascript", + "PI-Security/prd-lambda-bureau-santander-person", + "PI-Security/prd-lambda-consent", + "PI-Security/prd-lambda-internauta", + "PI-Security/prd-lambda-jira-automation", + "PI-Security/prd-lambda-legal-person-api-new", + "PI-Security/prd-lambda-legal-person-notificate", + "PI-Security/prd-lambda-legal-person-update-sql-keys", + "PI-Security/prd-lambda-login", + "PI-Security/prd-lambda-mail-message", + "PI-Security/prd-lambda-monitor-droz", + "PI-Security/prd-lambda-monitor-zendesk", + "PI-Security/prd-lambda-natural-person-purge", + "PI-Security/prd-lambda-ocr", + "PI-Security/prd-lambda-preventive-analysis", + "PI-Security/prd-lambda-preventive-analysis-legal-person", + "PI-Security/prd-lambda-proposal-analysis", + "PI-Security/prd-lambda-reason", + "PI-Security/prd-lambda-report-ad", + "PI-Security/prd-lambda-restrictive-list", + "PI-Security/prd-service-etl-internauta", + "PI-Security/prd-ui-legalperson-zendesk", + "prd-api-mail-message", + "prd-lambda-api-ad-import", + "prd-lambda-audit-access", + "prd-lambda-physicalperson-salesforce", + "prd-lambda-vehicleinspection", + "prd-ui-physical-person" + ], + "all_jobs": [ + 
"PI-Security/login-ui/login-ui-prd", + "PI-Security/prd-api-ad-zendesk", + "PI-Security/prd-ecs-api-access", + "PI-Security/prd-ecs-api-upld", + "PI-Security/prd-ecs-api-zendesk", + "PI-Security/prd-etl-upld-upload-process", + "PI-Security/prd-lambda-access-api", + "PI-Security/prd-lambda-address-api", + "PI-Security/prd-lambda-annotation", + "PI-Security/prd-lambda-antifraud-api", + "PI-Security/prd-lambda-api-ad", + "PI-Security/prd-lambda-api-cognito", + "PI-Security/prd-lambda-api-legal-person", + "PI-Security/prd-lambda-api-natural-person", + "PI-Security/prd-lambda-audit", + "PI-Security/prd-lambda-authorization", + "PI-Security/prd-lambda-bureau", + "PI-Security/prd-lambda-bureau-javascript", + "PI-Security/prd-lambda-bureau-santander-person", + "PI-Security/prd-lambda-consent", + "PI-Security/prd-lambda-internauta", + "PI-Security/prd-lambda-jira-automation", + "PI-Security/prd-lambda-legal-person-api-new", + "PI-Security/prd-lambda-legal-person-notificate", + "PI-Security/prd-lambda-legal-person-update-sql-keys", + "PI-Security/prd-lambda-login", + "PI-Security/prd-lambda-mail-message", + "PI-Security/prd-lambda-monitor-droz", + "PI-Security/prd-lambda-monitor-zendesk", + "PI-Security/prd-lambda-natural-person-purge", + "PI-Security/prd-lambda-ocr", + "PI-Security/prd-lambda-preventive-analysis", + "PI-Security/prd-lambda-preventive-analysis-legal-person", + "PI-Security/prd-lambda-proposal-analysis", + "PI-Security/prd-lambda-reason", + "PI-Security/prd-lambda-report-ad", + "PI-Security/prd-lambda-restrictive-list", + "PI-Security/prd-service-etl-internauta", + "PI-Security/prd-ui-legalperson-zendesk", + "prd-api-mail-message", + "prd-lambda-api-ad-import", + "prd-lambda-audit-access", + "prd-lambda-physicalperson-salesforce", + "prd-lambda-vehicleinspection", + "prd-ui-physical-person" + ] + }, + "webmotors-private/webmotors.portal": { + "prd_jobs": [ + "PF-Api/pf-api-app-prd", + "PF-Api/pf-api-web-prd" + ], + "all_jobs": [ + "PF-Api/pf-api-app-prd", + 
"PF-Api/pf-api-web-prd" + ] + }, + "webmotors-private/webmotors.portal-parceria-lead": { + "prd_jobs": [ + "wm-portal-parceria-lead-prd-pf", + "wm-portal-parceria-lead-prd-root" + ], + "all_jobs": [ + "wm-portal-parceria-lead-prd-pf", + "wm-portal-parceria-lead-prd-root" + ] + }, + "webmotors-private/webmotors.portal.api": { + "prd_jobs": [ + "prd-wm-portal-api" + ], + "all_jobs": [ + "prd-wm-portal-api" + ] + }, + "webmotors-private/webmotors.portal.conversions.facebook": { + "prd_jobs": [ + "prd-lambda-conversions-facebook" + ], + "all_jobs": [ + "prd-lambda-conversions-facebook" + ] + }, + "webmotors-private/webmotors.portal.dealer.info.sync": { + "prd_jobs": [ + "wm-dealer-info-prd" + ], + "all_jobs": [ + "wm-dealer-info-prd" + ] + }, + "webmotors-private/webmotors.portal.lead.analytics": { + "prd_jobs": [ + "prd-portal-lead-analytics-send" + ], + "all_jobs": [ + "prd-portal-lead-analytics-send" + ] + }, + "webmotors-private/webmotors.portal.logs": { + "prd_jobs": [ + "wm-processlog-prd" + ], + "all_jobs": [ + "wm-processlog-prd" + ] + }, + "webmotors-private/webmotors.portal.postenquiry": { + "prd_jobs": [ + "prd-portal-lead-send-post-enquiry" + ], + "all_jobs": [ + "prd-portal-lead-send-post-enquiry" + ] + }, + "webmotors-private/webmotors.portal.ui": { + "prd_jobs": [ + "prd-wm-buyer-lambda-home-ui" + ], + "all_jobs": [ + "prd-wm-buyer-lambda-home-ui" + ] + }, + "webmotors-private/webmotors.portal.wm1.service": { + "prd_jobs": [ + "prd-wm1-portal-service" + ], + "all_jobs": [ + "prd-wm1-portal-service" + ] + }, + "webmotors-private/webmotors.precodinamico.ui": { + "prd_jobs": [ + "PrecoDinamico-UI/precodinamico-ui-prd" + ], + "all_jobs": [ + "PrecoDinamico-UI/precodinamico-ui-prd" + ] + }, + "webmotors-private/webmotors.react.pj": { + "prd_jobs": [ + "Cockpit.Inspection/prd-cockpit-inspection-ui", + "agendafacil-cockpit-ui-prd", + "cockpit-backoffice-offers-ui-prd", + "cockpit-backoffice-ui-prd", + "cockpit-crmcustomer-ui-prd", + 
"cockpit-integration-usermanagement-ui-prd", + "cockpit-invoice-ui-prd", + "cockpit-leads-ui-prd", + "cockpit-store-ui-prd", + "cockpit-ui-prd", + "cockpit-wallet-ui-prd", + "prd-cockpit-signature-ui", + "prd-cockpit-stock-ui", + "prd-cockpit-stock-ui-channel", + "prd-ui-physical-person", + "subscription-backoffice-ui-prd", + "webmotors-cockpit-webtv-ui-prd" + ], + "all_jobs": [ + "Cockpit.Inspection/prd-cockpit-inspection-ui", + "agendafacil-cockpit-ui-prd", + "cockpit-backoffice-offers-ui-prd", + "cockpit-backoffice-ui-prd", + "cockpit-crmcustomer-ui-prd", + "cockpit-integration-usermanagement-ui-prd", + "cockpit-invoice-ui-prd", + "cockpit-leads-ui-prd", + "cockpit-store-ui-prd", + "cockpit-ui-prd", + "cockpit-wallet-ui-prd", + "prd-cockpit-signature-ui", + "prd-cockpit-stock-ui", + "prd-cockpit-stock-ui-channel", + "prd-ui-physical-person", + "subscription-backoffice-ui-prd", + "webmotors-cockpit-webtv-ui-prd" + ] + }, + "webmotors-private/webmotors.sales.api.revendamais": { + "prd_jobs": [ + "pi.sales.api-revendamais.cancel.prd", + "pi.sales.api-revendamais.save.prd", + "pi.sales.api-revendamais.update.prd" + ], + "all_jobs": [ + "pi.sales.api-revendamais.cancel.prd", + "pi.sales.api-revendamais.save.prd", + "pi.sales.api-revendamais.update.prd" + ] + }, + "webmotors-private/webmotors.sales.dependabot.analyzer": { + "prd_jobs": [ + "sales.dependabot.analyzer.prd" + ], + "all_jobs": [ + "sales.dependabot.analyzer.prd" + ] + }, + "webmotors-private/webmotors.santander.webapi.integration": { + "prd_jobs": [ + "PI-Money/prd-ecs-santander-integration" + ], + "all_jobs": [ + "PI-Money/prd-ecs-santander-integration" + ] + }, + "webmotors-private/webmotors.seller.api": { + "prd_jobs": [ + "Seller-Api/seller-api-prd" + ], + "all_jobs": [ + "Seller-Api/seller-api-prd" + ] + }, + "webmotors-private/webmotors.sendmail": { + "prd_jobs": [ + "ckp.sendmail.salesforce.api.prd" + ], + "all_jobs": [ + "ckp.sendmail.salesforce.api.prd" + ] + }, + 
"webmotors-private/webmotors.subscription.api": { + "prd_jobs": [ + "portal-assinaturas-root-account/wm-subscription-api-prd-root-account", + "portal-subscription/wm-subscription-prd-api" + ], + "all_jobs": [ + "portal-assinaturas-root-account/wm-subscription-api-prd-root-account", + "portal-subscription/wm-subscription-prd-api" + ] + }, + "webmotors-private/webmotors.subscription.backoffice.ui": { + "prd_jobs": [ + "subscription-backoffice-ui-prd" + ], + "all_jobs": [ + "subscription-backoffice-ui-prd" + ] + }, + "webmotors-private/webmotors.subscription.cms": { + "prd_jobs": [ + "portal-subscription-cms/subscription-cms-api-prd" + ], + "all_jobs": [ + "portal-subscription-cms/subscription-cms-api-prd" + ] + }, + "webmotors-private/webmotors.tv": { + "prd_jobs": [ + "webmotors-tv-cockpit-prd", + "webmotors-tv-common-prd", + "webmotors-tv-feed-prd", + "webmotors-tv-market-square-prd", + "webmotors-tv-production-schedule-hml", + "webmotors-tv-production-schedule-prd", + "webmotors-tv-sale-prd" + ], + "all_jobs": [ + "webmotors-tv-cockpit-prd", + "webmotors-tv-common-prd", + "webmotors-tv-feed-prd", + "webmotors-tv-market-square-prd", + "webmotors-tv-production-schedule-hml", + "webmotors-tv-production-schedule-prd", + "webmotors-tv-sale-prd" + ] + }, + "webmotors-private/webmotors.tv.ui": { + "prd_jobs": [ + "webmotors-tv-ui-prd" + ], + "all_jobs": [ + "webmotors-tv-ui-prd" + ] + }, + "webmotors-private/webmotors.university.banner": { + "prd_jobs": [ + "university-banner-api-prd" + ], + "all_jobs": [ + "university-banner-api-prd" + ] + }, + "webmotors-private/webmotors.university.certificate": { + "prd_jobs": [ + "university-certificate-api-prd" + ], + "all_jobs": [ + "university-certificate-api-prd" + ] + }, + "webmotors-private/webmotors.university.course": { + "prd_jobs": [ + "university-course-api-prd" + ], + "all_jobs": [ + "university-course-api-prd" + ] + }, + "webmotors-private/webmotors.university.enrollment": { + "prd_jobs": [ + 
"university-enrollment-api-prd" + ], + "all_jobs": [ + "university-enrollment-api-prd" + ] + }, + "webmotors-private/webmotors.university.event": { + "prd_jobs": [ + "university-event-api-prd" + ], + "all_jobs": [ + "university-event-api-prd" + ] + }, + "webmotors-private/webmotors.university.lead": { + "prd_jobs": [ + "university-lead-api-prd" + ], + "all_jobs": [ + "university-lead-api-prd" + ] + }, + "webmotors-private/webmotors.university.streaming": { + "prd_jobs": [ + "university-streaming-api-prd" + ], + "all_jobs": [ + "university-streaming-api-prd" + ] + }, + "webmotors-private/webmotors.university.user": { + "prd_jobs": [ + "university-user-api-prd" + ], + "all_jobs": [ + "university-user-api-prd" + ] + }, + "webmotors-private/webmotors.upld": { + "prd_jobs": [ + "PI-Security/prd-ecs-api-upld" + ], + "all_jobs": [ + "PI-Security/prd-ecs-api-upld" + ] + }, + "webmotors-private/webmotors.vehicleinspection": { + "prd_jobs": [ + "prd-cockpit-stock-vehicleinspection" + ], + "all_jobs": [ + "prd-cockpit-stock-vehicleinspection" + ] + }, + "webmotors-private/webmotors.vender": { + "prd_jobs": [ + "Vender-UI/vender-ui-prd" + ], + "all_jobs": [ + "Vender-UI/vender-ui-prd" + ] + }, + "webmotors-private/webmotors.vender.lp": { + "prd_jobs": [ + "VenderLP-UI/venderlp-ui-prd" + ], + "all_jobs": [ + "VenderLP-UI/venderlp-ui-prd" + ] + }, + "webmotors-private/webmotors.vmotors": { + "prd_jobs": [ + "api-pj-prd" + ], + "all_jobs": [ + "api-pj-prd" + ] + }, + "webmotors-private/webmotors.webapi.payment": { + "prd_jobs": [ + "prd-ecs-api-payment" + ], + "all_jobs": [ + "prd-ecs-api-payment" + ] + }, + "webmotors-private/webmotors.webservicos.api": { + "prd_jobs": [ + "webservicos-api-banner-prd", + "webservicos-api-user-prd" + ], + "all_jobs": [ + "webservicos-api-banner-prd", + "webservicos-api-user-prd" + ] + }, + "webmotors-private/webmotors.webservicos.api.bff": { + "prd_jobs": [ + "webservicos-api-bff-prd" + ], + "all_jobs": [ + "webservicos-api-bff-prd" + ] + }, + 
"webmotors-private/webmotors.webservicos.lambdas": { + "prd_jobs": [ + "webservicos-lambdas-prd" + ], + "all_jobs": [ + "webservicos-lambdas-prd" + ] + }, + "webmotors-private/webmotors.webservicos.landingpage": { + "prd_jobs": [ + "webservicos-landingpage-next-ui-prd" + ], + "all_jobs": [ + "webservicos-landingpage-next-ui-prd" + ] + }, + "webmotors-private/webmotors.zendesk": { + "prd_jobs": [ + "PI-Security/prd-ecs-api-zendesk" + ], + "all_jobs": [ + "PI-Security/prd-ecs-api-zendesk" + ] + }, + "webmotors-private/webmotors.zendesk.app": { + "prd_jobs": [ + "PI-Security/prd-ui-legalperson-zendesk" + ], + "all_jobs": [ + "PI-Security/prd-ui-legalperson-zendesk" + ] + }, + "webmotors-private/webmotors.zendesk.natural.person.app": { + "prd_jobs": [ + "PI-Security/prd-api-ad-zendesk" + ], + "all_jobs": [ + "PI-Security/prd-api-ad-zendesk" + ] + }, + "webmotors-private/webmotors.zero.integracao.facebook": { + "prd_jobs": [ + "OEM/zero-integration-webhook-facebook/prd-lambda-zero-integration-webhook-facebook" + ], + "all_jobs": [ + "OEM/zero-integration-webhook-facebook/prd-lambda-zero-integration-webhook-facebook" + ] + }, + "webmotors-private/webmotors.zero.leads": { + "prd_jobs": [ + "OEM/zero-leads/prd-lambda-zero-leads" + ], + "all_jobs": [ + "OEM/zero-leads/prd-lambda-zero-leads" + ] + }, + "webmotors-private/webmotors.zero.officialstore.ui": { + "prd_jobs": [ + "zero-officialstore-ui-prod-deletar", + "zero-officialstore-ui/zero-officialstore-ui-prod" + ], + "all_jobs": [ + "zero-officialstore-ui-prod-deletar", + "zero-officialstore-ui/zero-officialstore-ui-prod" + ] + }, + "webmotors-private/webmotors.zero.pipelines": { + "prd_jobs": [ + "OEM/zero-integration-webhook-facebook/prd-lambda-zero-integration-webhook-facebook", + "OEM/zero-leads/prd-lambda-zero-leads", + "OEM/zero-preorder/prd-lambda-zero-preorder" + ], + "all_jobs": [ + "OEM/zero-integration-webhook-facebook/prd-lambda-zero-integration-webhook-facebook", + "OEM/zero-leads/prd-lambda-zero-leads", + 
"OEM/zero-preorder/prd-lambda-zero-preorder" + ] + }, + "webmotors-private/webmotors.zero.preorder.api": { + "prd_jobs": [ + "OEM/zero-preorder/prd-lambda-zero-preorder" + ], + "all_jobs": [ + "OEM/zero-preorder/prd-lambda-zero-preorder" + ] + }, + "webmotors-private/webmotors.zerokm": { + "prd_jobs": [ + "prd-zero-campaigns-api-deletar" + ], + "all_jobs": [ + "prd-zero-campaigns-api-deletar" + ] + }, + "webmotors-private/webmotors.zerokm.client": { + "prd_jobs": [ + "webmotors-zerokm-client-prd" + ], + "all_jobs": [ + "webmotors-zerokm-client-prd" + ] + }, + "webmotors-private/webmotors.zerokm.ui": { + "prd_jobs": [ + "webmotors-zerokm-ui-prd" + ], + "all_jobs": [ + "webmotors-zerokm-ui-prd" + ] + }, + "webmotors-private/wm1-api": { + "prd_jobs": [ + "prd-wm1-admin", + "prd-wm1-api" + ], + "all_jobs": [ + "prd-wm1-admin", + "prd-wm1-api" + ] }, - "webmotors-private/webmotors.app.pf.search.bff": { - "prd_jobs": ["webmotors-app-api-search-bff"], - "all_jobs": ["webmotors-app-api-search-bff"] + "webmotors-private/wm1-ui": { + "prd_jobs": [ + "prd-wm1-ui" + ], + "all_jobs": [ + "prd-wm1-ui" + ] }, - "webmotors-private/eleanor.flutter": { - "prd_jobs": ["webmotors-eleanor-flutter"], - "all_jobs": ["webmotors-eleanor-flutter"] + "_metadata": { + "generated": "2026-04-14", + "method": "READ-ONLY SCM scan of 544 active PRD Jenkins jobs via lastBuild/remoteUrls API", + "total_repos": 283, + "total_prd_jobs": 577, + "note": "Maps GitHub repos to Jenkins PRD jobs. Source: Git remoteUrl from lastBuild metadata." 
} } diff --git a/pulse/docker-compose.test.yml b/pulse/docker-compose.test.yml index d6b3dd6..d324542 100644 --- a/pulse/docker-compose.test.yml +++ b/pulse/docker-compose.test.yml @@ -57,20 +57,3 @@ services: timeout: 10s retries: 10 start_period: 30s - - devlake-pg: - image: postgres:16-alpine - container_name: pulse-test-devlake-pg - ports: - - "5433:5432" - environment: - POSTGRES_DB: lake - POSTGRES_USER: devlake - POSTGRES_PASSWORD: devlake_test - tmpfs: - - /var/lib/postgresql/data - healthcheck: - test: ["CMD-SHELL", "pg_isready -U devlake -d lake"] - interval: 3s - timeout: 3s - retries: 10 diff --git a/pulse/docker-compose.yml b/pulse/docker-compose.yml index c4f511c..599f692 100644 --- a/pulse/docker-compose.yml +++ b/pulse/docker-compose.yml @@ -1,5 +1,5 @@ ############################################################################## -# PULSE — Local Development Stack +# PULSE — Local Development Stack (v2 — Custom Connectors, no DevLake) # Run: docker compose up -d (or: make up) # Frontend runs OUTSIDE Docker: cd packages/pulse-web && npm run dev ############################################################################## @@ -10,8 +10,8 @@ services: # -------------------------------------------------------------------------- pulse-api: build: - context: ./packages/pulse-api - dockerfile: Dockerfile + context: ./packages + dockerfile: pulse-api/Dockerfile container_name: pulse-api ports: - "${PULSE_API_PORT:-3000}:3000" @@ -20,9 +20,10 @@ services: DATABASE_URL: postgresql://${POSTGRES_USER:-pulse}:${POSTGRES_PASSWORD:-pulse_dev}@postgres:5432/${POSTGRES_DB:-pulse} REDIS_URL: redis://redis:6379 KAFKA_BROKERS: kafka:29092 - DEVLAKE_API_URL: http://devlake:8080 GITHUB_TOKEN: ${GITHUB_TOKEN:-} + GITHUB_ORG: ${GITHUB_ORG:-webmotors-private} GITLAB_TOKEN: ${GITLAB_TOKEN:-} + JIRA_BASE_URL: ${JIRA_BASE_URL:-} JIRA_API_TOKEN: ${JIRA_API_TOKEN:-} JIRA_EMAIL: ${JIRA_EMAIL:-} AZURE_DEVOPS_PAT: ${AZURE_DEVOPS_PAT:-} @@ -51,7 +52,20 @@ services: environment: 
DATABASE_URL: postgresql://${POSTGRES_USER:-pulse}:${POSTGRES_PASSWORD:-pulse_dev}@postgres:5432/${POSTGRES_DB:-pulse} KAFKA_BROKERS: kafka:29092 + REDIS_URL: redis://redis:6379 ENVIRONMENT: development + DYNAMIC_JIRA_DISCOVERY_ENABLED: ${DYNAMIC_JIRA_DISCOVERY_ENABLED:-false} + INTERNAL_API_TOKEN: ${INTERNAL_API_TOKEN:-} + # Source API credentials (connectors read directly from APIs) + GITHUB_TOKEN: ${GITHUB_TOKEN:-} + GITHUB_ORG: ${GITHUB_ORG:-webmotors-private} + JIRA_BASE_URL: ${JIRA_BASE_URL:-} + JIRA_EMAIL: ${JIRA_EMAIL:-} + JIRA_API_TOKEN: ${JIRA_API_TOKEN:-} + JIRA_PROJECTS: ${JIRA_PROJECTS:-DESC,ENO,ANCR,PUSO,APPF,FID,CTURBO,PTURB} + JENKINS_BASE_URL: ${JENKINS_BASE_URL:-} + JENKINS_USERNAME: ${JENKINS_USERNAME:-} + JENKINS_API_TOKEN: ${JENKINS_API_TOKEN:-} volumes: - ./packages/pulse-data/src:/app/src depends_on: @@ -60,6 +74,17 @@ services: kafka: condition: service_healthy restart: unless-stopped + # FDD-OPS-001 Linha 1: auto-reload the FastAPI container when Python + # source changes. The pulse-data Dockerfile does NOT launch uvicorn with + # `--reload`, so without this block a `git pull` or file edit leaves the + # HTTP process running stale code until a manual `docker compose restart`. + # `sync+restart` rewrites the files inside the container and then restarts + # the container so Python reimports from disk. 
+ develop: + watch: + - action: sync+restart + path: ./packages/pulse-data/src + target: /app/src # -------------------------------------------------------------------------- # Workers @@ -72,17 +97,42 @@ services: command: python -m src.workers.devlake_sync environment: DATABASE_URL: postgresql://${POSTGRES_USER:-pulse}:${POSTGRES_PASSWORD:-pulse_dev}@postgres:5432/${POSTGRES_DB:-pulse} - DEVLAKE_DB_URL: postgresql://${DEVLAKE_PG_USER:-devlake}:${DEVLAKE_PG_PASSWORD:-devlake_dev}@devlake-pg:5432/${DEVLAKE_PG_DB:-lake} KAFKA_BROKERS: kafka:29092 + REDIS_URL: redis://redis:6379 ENVIRONMENT: development + DYNAMIC_JIRA_DISCOVERY_ENABLED: ${DYNAMIC_JIRA_DISCOVERY_ENABLED:-false} + # Source API credentials + GITHUB_TOKEN: ${GITHUB_TOKEN:-} + GITHUB_ORG: ${GITHUB_ORG:-webmotors-private} + JIRA_BASE_URL: ${JIRA_BASE_URL:-} + JIRA_EMAIL: ${JIRA_EMAIL:-} + JIRA_API_TOKEN: ${JIRA_API_TOKEN:-} + JIRA_PROJECTS: ${JIRA_PROJECTS:-DESC,ENO,ANCR,PUSO,APPF,FID,CTURBO,PTURB} + JENKINS_BASE_URL: ${JENKINS_BASE_URL:-} + JENKINS_USERNAME: ${JENKINS_USERNAME:-} + JENKINS_API_TOKEN: ${JENKINS_API_TOKEN:-} + volumes: + - ./packages/pulse-data/src:/app/src + - ./config/connections.yaml:/app/config/connections.yaml:ro + - ./config/jenkins-job-mapping.json:/app/config/jenkins-job-mapping.json:ro depends_on: postgres: condition: service_healthy kafka: condition: service_healthy - devlake: - condition: service_started + healthcheck: + test: ["CMD-SHELL", "python -c 'import os; os.stat(\"/proc/1/status\")'"] + interval: 30s + timeout: 10s + retries: 3 + start_period: 60s restart: unless-stopped + # FDD-OPS-001 Linha 1: hot-reload — see pulse-data block above. 
+ develop: + watch: + - action: sync+restart + path: ./packages/pulse-data/src + target: /app/src metrics-worker: build: @@ -94,12 +144,58 @@ services: DATABASE_URL: postgresql://${POSTGRES_USER:-pulse}:${POSTGRES_PASSWORD:-pulse_dev}@postgres:5432/${POSTGRES_DB:-pulse} KAFKA_BROKERS: kafka:29092 ENVIRONMENT: development + volumes: + - ./packages/pulse-data/src:/app/src depends_on: postgres: condition: service_healthy kafka: condition: service_healthy + healthcheck: + test: ["CMD-SHELL", "python -c 'import os; os.stat(\"/proc/1/status\")'"] + interval: 30s + timeout: 10s + retries: 3 + start_period: 60s + restart: unless-stopped + # FDD-OPS-001 Linha 1: hot-reload — see pulse-data block above. + develop: + watch: + - action: sync+restart + path: ./packages/pulse-data/src + target: /app/src + + discovery-worker: + build: + context: ./packages/pulse-data + dockerfile: Dockerfile + container_name: pulse-discovery-worker + command: python -m src.workers.discovery_scheduler + environment: + DATABASE_URL: postgresql://${POSTGRES_USER:-pulse}:${POSTGRES_PASSWORD:-pulse_dev}@postgres:5432/${POSTGRES_DB:-pulse} + KAFKA_BROKERS: kafka:29092 + REDIS_URL: redis://redis:6379 + ENVIRONMENT: development + DYNAMIC_JIRA_DISCOVERY_ENABLED: ${DYNAMIC_JIRA_DISCOVERY_ENABLED:-false} + INTERNAL_API_TOKEN: ${INTERNAL_API_TOKEN:-} + JIRA_BASE_URL: ${JIRA_BASE_URL:-} + JIRA_EMAIL: ${JIRA_EMAIL:-} + JIRA_API_TOKEN: ${JIRA_API_TOKEN:-} + JIRA_PROJECTS: ${JIRA_PROJECTS:-DESC,ENO,ANCR,PUSO,APPF,FID,CTURBO,PTURB} + volumes: + - ./packages/pulse-data/src:/app/src + depends_on: + postgres: + condition: service_healthy + redis: + condition: service_healthy restart: unless-stopped + # FDD-OPS-001 Linha 1: hot-reload — see pulse-data block above. 
+ develop: + watch: + - action: sync+restart + path: ./packages/pulse-data/src + target: /app/src # -------------------------------------------------------------------------- # Infrastructure @@ -163,43 +259,6 @@ services: start_period: 30s restart: unless-stopped - # -------------------------------------------------------------------------- - # DevLake - # -------------------------------------------------------------------------- - devlake: - image: apache/devlake:latest - container_name: pulse-devlake - ports: - - "${DEVLAKE_PORT:-8080}:8080" - - "${DEVLAKE_API_PORT:-4000}:4000" - environment: - DB_URL: postgresql://${DEVLAKE_PG_USER:-devlake}:${DEVLAKE_PG_PASSWORD:-devlake_dev}@devlake-pg:5432/${DEVLAKE_PG_DB:-lake}?sslmode=disable - ENCRYPTION_SECRET: ${DEVLAKE_ENCRYPTION_SECRET:-abcdefghijklmnop} - depends_on: - devlake-pg: - condition: service_healthy - restart: unless-stopped - - devlake-pg: - image: postgres:16-alpine - container_name: pulse-devlake-pg - ports: - - "${DEVLAKE_PG_PORT:-5433}:5432" - environment: - POSTGRES_DB: ${DEVLAKE_PG_DB:-lake} - POSTGRES_USER: ${DEVLAKE_PG_USER:-devlake} - POSTGRES_PASSWORD: ${DEVLAKE_PG_PASSWORD:-devlake_dev} - volumes: - - devlake_pgdata:/var/lib/postgresql/data - healthcheck: - test: ["CMD-SHELL", "pg_isready -U ${DEVLAKE_PG_USER:-devlake} -d ${DEVLAKE_PG_DB:-lake}"] - interval: 5s - timeout: 5s - retries: 5 - restart: unless-stopped - volumes: pgdata: driver: local - devlake_pgdata: - driver: local diff --git a/pulse/docs/adrs/014-dynamic-jira-project-discovery.md b/pulse/docs/adrs/014-dynamic-jira-project-discovery.md new file mode 100644 index 0000000..4483227 --- /dev/null +++ b/pulse/docs/adrs/014-dynamic-jira-project-discovery.md @@ -0,0 +1,91 @@ +# ADR-014: Dynamic Jira Project Discovery (Hybrid 4-Mode) + +- **Status:** Accepted +- **Date:** 2026-04-13 +- **Deciders:** Main session (orchestrator) + pulse-data-engineer + pulse-ciso + pulse-product-director +- **Supersedes:** Static `JIRA_PROJECTS` env-var 
scope configuration +- **Related:** ADR-005 (DevLake vs custom), ADR-011 (metadata-only security), ADR-002 (RLS multi-tenancy) + +--- + +## Context + +PULSE currently scopes Jira ingestion via a static `JIRA_PROJECTS` env var (comma-separated project keys). This was acceptable during single-tenant bootstrap but has become a hard blocker for: + +1. **SaaS onboarding velocity.** Every new tenant requires manual project list curation, re-deploy of `.env`, and operator coordination. This breaks the "connect and see data in minutes" value proposition. +2. **Link-rate ceiling on PR↔Issue correlation.** Analysis of Webmotors data (63,447 PRs) showed 15,475 PRs (24.4%) reference Jira keys in titles, but only 3,220 (5.1%) linked successfully — because ~20 referenced projects (CKP, SECOM, BG, OKM, ESTQ, PF, SALES, APPJ, CRW, SDI, DSP, CRMC, INTG, AFDEV, MONEY, PJUN, FACIL, ENO…) were never in the static list. Keeping the list updated is ops toil that will never converge. +3. **Governance drift.** Teams create new Jira projects continuously; operators lack visibility into what's missing without querying Jira manually. +4. **Product positioning.** Competitors (LinearB, Jellyfish) require explicit project configuration. A "self-discovering engineering platform" is a clear differentiator. + +## Decision + +Adopt a **hybrid dynamic project discovery model** with 4 operational modes, persisted per tenant, with guardrails and admin UI. 
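The four modes described below collapse into a single resolution rule plus the blocklist-precedence and hard-cap guardrails. A minimal Python sketch of how `ModeResolver.resolve_active_projects` might behave — field names, defaults, and thresholds here are assumptions for illustration, not the shipped schema:

```python
from dataclasses import dataclass, field

@dataclass
class TenantJiraConfig:
    # Hypothetical shape of a tenant_jira_config row (illustrative only)
    mode: str = "smart"             # auto | allowlist | blocklist | smart
    allowlist: set[str] = field(default_factory=set)
    blocklist: set[str] = field(default_factory=set)
    smart_min_pr_refs: int = 5      # the "N" in the smart-mode rule
    max_active_projects: int = 100  # hard-cap guardrail

def resolve_active_projects(cfg: TenantJiraConfig,
                            discovered: set[str],
                            pr_ref_counts: dict[str, int]) -> set[str]:
    """Decide which discovered Jira projects sync, per the 4-mode policy."""
    if cfg.mode == "allowlist":
        active = discovered & cfg.allowlist
    elif cfg.mode == "smart":
        active = {p for p in discovered
                  if pr_ref_counts.get(p, 0) >= cfg.smart_min_pr_refs}
    else:  # auto / blocklist: everything discovered is in by default
        active = set(discovered)
    active -= cfg.blocklist                 # blocklist always wins
    # deterministic hard cap on active projects
    return set(sorted(active)[:cfg.max_active_projects])
```

The sketch makes the blocklist-precedence guardrail explicit: it is applied after mode resolution, so even a `smart`- or `auto`-activated project is excluded if blocked.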
+ +### Modes + +| Mode | Behavior | Use case | +|---|---|---| +| `auto` | All discovered projects are active by default; blocklist overrides | SMB self-serve, low-friction onboarding | +| `allowlist` | Only explicitly approved projects sync; discovery populates catalog as `discovered` requiring human approval | Regulated industries, enterprise with governance | +| `blocklist` | All discovered projects active except those explicitly blocked | Mid-market, operator-driven | +| `smart` | Auto-activates projects referenced by ≥N PRs in lookback window; remainder stays `discovered` | Default recommendation for engineering-centric teams | + +### Architecture + +- **New tables:** `tenant_jira_config`, `jira_project_catalog`, `jira_discovery_audit` (RLS-enforced, audit immutable). +- **New worker:** `discovery-worker` runs scheduled `ProjectDiscoveryService` per tenant, populates catalog. +- **New resolver:** `ModeResolver.resolve_active_projects(tenant_id)` replaces all reads of `settings.jira_project_list` in sync paths. +- **Guardrails:** rate budget per tenant (Redis token bucket), hard cap on active projects, auto-pause after 5 consecutive failures, blocklist precedence. +- **Admin API + UI:** `/api/v1/admin/integrations/jira/*` + `/settings/integrations/jira` route, RBAC-gated to `tenant_admin` role. +- **Feature flag** `DYNAMIC_JIRA_DISCOVERY_ENABLED` enables blue-green rollout; env var remains as bootstrap fallback for 2 releases. + +## Consequences + +### Positive +- Self-serve SaaS onboarding unlocked (competitive moat). +- PR↔Issue link rate projected to rise from 5% → 25-30% at steady state by covering all referenced projects. +- Auto-adapts to org changes (new projects, team splits, mergers). +- Governance-grade audit trail (SOC 2 ready). +- Architectural pattern reusable for GitHub repos, Jenkins jobs, GitLab projects (next iterations). + +### Negative / Costs +- Added complexity: 3 new tables, 1 new worker, new service layer, new UI surface. 
+- Privacy risk in `auto` mode if tenant has sensitive Jira projects (HR, legal, finance) — mitigated by default `allowlist`, PII regex warnings on discovery, explicit blocklist. +- Variable ingestion cost per tenant (harder to quote pricing upfront) — mitigated by `max_active_projects` hard cap and admin-visible metrics. +- Additional Jira API surface (`/rest/api/3/project/search`) — mitigated by rate-limited discovery schedule (default daily 03:00 UTC). + +### Rollback plan +Feature flag `DYNAMIC_JIRA_DISCOVERY_ENABLED=false` reverts sync workers to reading `JIRA_PROJECTS` env var. Catalog data persists harmlessly; no data migration required for rollback. + +## Alternatives Considered + +### A1 — Keep static list, expand manually (Option 1 in plan) +**Rejected.** Ops toil, drift-prone, doesn't scale in multi-tenant SaaS. Works for 1 tenant, breaks at 10. + +### A2 — Pure auto-discovery (no modes, no governance) +**Rejected.** Ignores privacy/compliance requirements. A bank client would not tolerate automatic ingestion of an "HR-Confidential" Jira project. Governance is non-negotiable. + +### A3 — DevLake-native project discovery +**Rejected per ADR-005.** We migrated off DevLake for Jira ingestion; adding a DevLake dependency back contradicts that decision. + +### A4 — Per-project cron configs (config file) +**Rejected.** Still requires ops intervention, doesn't solve multi-tenant, doesn't solve drift. + +## Implementation + +Detailed phased plan tracked in: `packages/pulse-data/src/contexts/integrations/jira/discovery/` + branch `feat/jira-dynamic-discovery`. + +**Phases:** +0. Foundation (migration, shared types, this ADR) +1. Backend core (discovery service, mode resolver, guardrails, scheduler) +2. API + UI (admin endpoints, settings page) +3. Security + QA (CISO review, integration/E2E/load tests) +4. Rollout (shadow → cutover → deprecate env var) + +## Acceptance Gates + +- Migration 006 preserves existing tenant state (bootstrap from env var). 
+- `SmartPrioritizer` identifies ≥18 candidate projects from current Webmotors PR scan.
+- RLS + RBAC verified by pulse-ciso.
+- Link rate measured before/after cutover; target ≥20% improvement.
+- Feature flag tested in staging for minimum 7 days before prod cutover.
diff --git a/pulse/docs/adrs/ADR-005-devlake-vs-custom-ingestion.md b/pulse/docs/adrs/ADR-005-devlake-vs-custom-ingestion.md
new file mode 100644
index 0000000..7191b25
--- /dev/null
+++ b/pulse/docs/adrs/ADR-005-devlake-vs-custom-ingestion.md
@@ -0,0 +1,295 @@
+# ADR-005: DevLake vs. In-House Ingestion
+
+**Status:** Proposed (awaiting decision)
+**Date:** 2026-04-09
+**Deciders:** Andre Nascimento + PULSE Team
+**Context:** Recurring DevLake problems are blocking the data pipeline
+
+---
+
+## 1. Context and Motivation
+
+PULSE adopted Apache DevLake as the ingestion engine of the hybrid architecture (ADR-001, Hypothesis 3), scored 4.3/5. The premise was: "use DevLake as an MVP accelerator without creating irreversible coupling" and replace its plugins with custom connectors when needed.
+
+**We have reached that inflection point.** In recent weeks we hit:
+
+1. **Jira API v2 deprecated** — 6 of 8 boards fail (HTTP 410). A fix exists in v1.0.3-beta7+, but the upgrade fails
+2. **DevLake upgrade v1.0.2 → v1.0.3-beta7** — Migrations use MySQL syntax (`int unsigned`, `double`, `datetime`) that breaks on PostgreSQL
+3. **Massive data loss** — 32,621 issues in the tool layer, only 243 in the domain layer (99.3% loss)
+4. **1,426 repos registered, only 21 ingested** — The GitHub pipeline is incomplete as well
+5. **0 sprints** in the domain layer, despite 8 configured Jira boards
+6. **0 real Jenkins deploys** — Only 76 builds across the 16 mapped jobs
+
+### Current Data State (2026-04-09)
+
+| Layer | PRs | Issues | Deployments | Sprints | Repos |
+|--------|-----|--------|-------------|---------|-------|
+| DevLake Tool Layer | 5,564 | 32,621 | 76 | ? | 1,426 |
+| DevLake Domain Layer | 5,544 | 243 | 83 | 0 | 21 |
+| PULSE App DB | 5,314 | 243 | 83 | 0 | - |
+| **Tool→Domain loss** | **0.4%** | **99.3%** | - | **100%** | **98.5%** |
+
+---
+
+## 2. Diagnosis: Why Is DevLake Failing?
+
+### 2.1 PostgreSQL Is a Second-Class Citizen
+
+DevLake was designed for MySQL. PostgreSQL support is "unofficial":
+
+- **Issue #8350** — Maintainers stated: *"We don't have plans to make Postgres officially supported in the near future"*
+- **Issue #8778 (Mar 2026)** — The Copilot plugin uses `gorm:"type:datetime"` (MySQL-only)
+- **Issue #8564 (Nov 2025)** — A migration uses `ALTER TABLE ... MODIFY` (MySQL syntax)
+- **Issue #8548 (Aug 2025)** — `GROUP BY` incompatible with PG17 (valid in MySQL with `ONLY_FULL_GROUP_BY` off)
+- **Issue #1790 (May 2022!)** — `unsigned` integer types. Reported 4 years ago; the same bug pattern resurfaces in 2026
+
+**Systematic pattern:** every new plugin is written and tested against MySQL. PG compatibility breaks in every release.
+
+### 2.2 A Stable Version? There Isn't One
+
+| Version | Status | Beta Period |
+|--------|--------|-------------|
+| v1.0.2 | Stable | 10 months (9 betas) |
+| v1.0.3 | **No date** | 10+ months (10 betas and counting) |
+
+The Jira API v3 fix (PR #8608) was merged in Oct 2025 and exists only in betas. No stable release ships it. We depend on beta software for critical functionality.
+
+### 2.3 Double Normalization
+
+The current flow is:
+
+```
+GitHub API → DevLake Raw → DevLake Tool → DevLake Domain → PULSE Normalizer → PULSE DB
+```
+
+PULSE already reimplements the entire normalization:
+- `normalizer.py` (539 lines): status mapping with 60+ PT-BR mappings, source detection, issue↔PR linking, cycle-time calculation
+- `devlake_reader.py` (272 lines): SQL queries against the DevLake domain layer
+- `devlake_sync.py` (552 lines): watermarks, upserts, Kafka publishing
+
+**Total: 1,363 lines** of code that exist **solely to read from DevLake and re-normalize**.
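For context on what that normalization entails, here is a hypothetical fragment of the kind of PT-BR status mapping `normalizer.py` performs — the real module carries 60+ mappings; these entries and names are illustrative, not the actual table:

```python
# Illustrative subset of a PT-BR → canonical status map (assumed entries,
# not the real 60+ mappings in normalizer.py).
STATUS_MAP = {
    "em andamento": "in_progress",
    "em revisão": "in_review",
    "aguardando deploy": "in_review",
    "concluído": "done",
    "backlog": "todo",
}

def normalize_status(raw_status: str) -> str:
    # Case- and whitespace-insensitive lookup; unknown values are flagged
    # rather than silently dropped.
    return STATUS_MAP.get(raw_status.strip().lower(), "unknown")
```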
+
+DevLake provides the API extraction + pagination + rate limiting. Everything else, PULSE redoes.
+
+### 2.4 Operational Overhead
+
+To run DevLake locally or in production, we need:
+
+| Component | Resource | Estimated Cost (AWS) |
+|-----------|---------|---------------------|
+| DevLake Server (Go) | ECS Fargate 1vCPU/2GB | ~$35-45/mo |
+| DevLake PostgreSQL | RDS separate from PULSE | ~$15-25/mo |
+| DevLake Config UI | Not deployed, but required for migrations | ~$10/mo |
+| Debugging time | Dev hours sunk into PG issues | Incalculable |
+| **Total extra infra** | | **~$60-80/mo** |
+
+---
+
+## 3. The Options
+
+### Option A: Keep DevLake + Force the Upgrade (MySQL backend)
+
+Switch DevLake to MySQL instead of PostgreSQL, resolving the compatibility problems.
+
+**Required changes:**
+- Add a MySQL container to docker-compose (for DevLake)
+- Keep PostgreSQL for the PULSE App DB
+- Point the DevLake `DB_URL` at MySQL
+- Re-import all connections/scopes/blueprints
+- Test the upgrade path to v1.0.3-beta7+
+
+**Pros:**
+- Smallest architectural change — DevLake keeps its current role
+- MySQL is the "official" backend — migrations work
+- Preserves the option of adding GitLab/Bitbucket/ADO via native plugins
+- The Jira v3 fix comes "for free" with the upgrade
+- The DevLake community keeps the connectors up to date
+
+**Cons:**
+- Adds MySQL to the stack (one more DB to operate)
+- We keep depending on beta software (v1.0.3 has no stable release)
+- Double normalization remains
+- Does not fix the 99.3% Jira data loss (possibly a separate bug)
+- Every future upgrade risks new bugs
+
+**Estimated effort:** 1-2 days
+**Risk:** Medium — fixes PG, but not the structural problems
+
+---
+
+### Option B: In-House Ingestion (Full Replacement)
+
+Build our own Python connectors on top of mature libraries, eliminating DevLake entirely.
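As a sketch of the building blocks involved, here is a generic retrying paginator of the kind `paginator.py` would provide — the page protocol, names, and defaults are assumptions for illustration, not the planned implementation:

```python
import time
from typing import Callable, Iterator

def paginate(fetch_page: Callable[[int], list], per_page: int = 100,
             max_retries: int = 3, backoff_s: float = 1.0) -> Iterator:
    """Yield items page by page, retrying transient failures with
    exponential backoff. Stops on an empty or short page."""
    page = 0
    while True:
        for attempt in range(max_retries):
            try:
                items = fetch_page(page)
                break
            except ConnectionError:
                if attempt == max_retries - 1:
                    raise                      # exhausted retries: surface it
                time.sleep(backoff_s * (2 ** attempt))
        if not items:
            return
        yield from items
        if len(items) < per_page:              # short page => last page
            return
        page += 1
```

In practice PyGithub and jira-python already abstract pagination and retry for their own APIs; a shared helper like this would mainly serve sources without such a client (e.g. Jenkins).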
**Libraries per source:**

| Source | Library | Stars | Maturity |
|--------|-----------|-------|------------|
| GitHub | PyGithub / `gql` (GraphQL) | 7k+ | Stable, active |
| Jira | jira-python | 1.8k+ | Stable, supports v3 |
| Jenkins | python-jenkins | 600+ | Stable, already in use |
| GitLab (future) | python-gitlab | 2k+ | Stable |
| ADO (future) | azure-devops-python-api | MS official | Stable |

**Components to build (per source):**

```
source_connector/
  ├── client.py       # API client with auth, rate limiting, retry (~150 lines)
  ├── paginator.py    # Generic pagination (~80 lines)
  ├── extractor.py    # Source-specific data extraction (~200 lines)
  └── tests/          # Unit tests (~150 lines)
```

**Estimate per connector:** ~400-600 lines of code + ~150 lines of tests

**Simplified flow:**
```
GitHub API  ──→ GitHub Connector  ──→ Normalizer ──→ PULSE DB ──→ Kafka
Jira API    ──→ Jira Connector    ──→ Normalizer ──→ PULSE DB ──→ Kafka
Jenkins API ──→ Jenkins Connector ──→ Normalizer ──→ PULSE DB ──→ Kafka
```

**We eliminate:**
- DevLake Server (Go)
- DevLake PostgreSQL (or MySQL)
- DevLake Config UI
- `devlake_reader.py` (272 lines)
- All DevLake API provisioning logic in NestJS (~400 lines)

**We reuse:**
- `normalizer.py` (539 lines) — stays intact, only the input changes
- `devlake_sync.py` → `data_sync.py` — watermarks + Kafka publishing (adapts ~200 lines)
- Pipeline Monitor — adapted to monitor connectors instead of DevLake

**Pros:**
- Full control — no dependency on beta software
- Simpler stack — eliminates 2 containers (DevLake + DevLake DB)
- Richer data — direct APIs provide PR timeline events, first-review and approval timestamps that DevLake loses in normalization
- No double normalization — Source API → PULSE Normalizer → DB (1 hop, not 4)
- Native Python — same stack as the rest of pulse-data
- Transparent debugging — no Go black box
- Strong community — PyGithub and jira-python are more stable than DevLake
- Webmotors customization — PT-BR mappings, Jenkins patterns, Jira custom fields: we control everything
- Planned exit strategy — ADR-001 already anticipated this: "replace plugins with custom connectors with no user impact"

**Cons:**
- **Larger upfront effort** — ~2-3 weeks for the 3 MVP connectors (GitHub, Jira, Jenkins)
- Our own rate limiting — we need to implement backoff/retry (PyGithub already does this)
- Our own pagination — each API paginates differently (PyGithub/jira-python abstract this)
- Connector maintenance — if GitHub/Jira changes an API, we must update (similar risk to DevLake)
- Less "for free" for new sources — GitLab/ADO are ~1 week each to build

**Estimated effort:** 2-3 weeks (3 MVP connectors)
**Risk:** Low — the APIs are stable, the libraries are mature

---

### Option C: Pragmatic Hybrid (Gradual Replacement)

Keep DevLake for GitHub (which works), and build our own connectors for Jira (which is broken) and Jenkins (where we already use python-jenkins).

**Phase 1 (this week):** Own Jira connector + own Jenkins connector
**Phase 2 (next 2 weeks):** Own GitHub connector
**Phase 3:** Remove DevLake entirely

**Pros:**
- Unblocks Jira immediately, without waiting for a DevLake upgrade
- Migrates incrementally, lower risk
- Lets us validate the approach with Jira before migrating GitHub

**Cons:**
- Transitional complexity — two pipelines running in parallel
- More code to maintain during the transition
- DevLake keeps consuming resources during the transition

**Estimated effort:** 1 week (Phase 1) + 1-2 weeks (Phase 2)
**Risk:** Low-Medium — transition complexity

---

## 4. Comparative Analysis

| Criterion | Weight | A (DevLake+MySQL) | B (Full In-House) | C (Gradual Hybrid) |
|----------|------|-------------------|--------------------|----------------------|
| Time to unblock Jira | 25% | 1-2 days (if it works) | 3-5 days | 3-5 days |
| Long-term stability | 25% | ⚠ Low (eternal beta) | ✅ High | ✅ High |
| Operational simplicity | 15% | ❌ +MySQL in the stack | ✅ -2 containers | ⚠ Transitional |
| Data richness | 15% | ❌ Loses timeline events | ✅ Complete data | ✅ Complete data |
| Total effort (4 weeks) | 10% | ✅ Lowest | ⚠ Medium | ⚠ Medium |
| Regression risk | 10% | ❌ High (every upgrade) | ✅ Low | ✅ Low |
| **Weighted score** | | **2.6/5** | **4.3/5** | **3.9/5** |

---

## 5. Recommendation

**Option B — Full Proprietary Ingestion**, with the following prioritization:

### Week 1: Unblock the Data
1. **Jira connector** (~3 days) — `jira-python`; extracts issues + changelogs + sprints
2. **Jenkins connector** (~2 days) — `python-jenkins`; extracts production builds

### Week 2: Complete GitHub
3. **GitHub connector** (~4 days) — `PyGithub` + GraphQL for the PR timeline
4. **Adapt the sync worker** (~1 day) — swap `DevLakeReader` for `SourceConnectors`

### Week 3: Cleanup
5. **Remove DevLake** from docker-compose
6. **Adapt the Pipeline Monitor** to monitor connectors
7. **End-to-end integration tests**

### What we reuse (nothing thrown away)
- `normalizer.py` — 100% reuse; only how the `raw` dict arrives changes
- `devlake_sync.py` → `data_sync.py` — watermarks, upserts, Kafka publishing (80% reuse)
- Pipeline Monitor routes — swaps DevLake health for connector health
- Alembic migrations — intact (eng_pull_requests, eng_issues, etc.)
- Kafka topics and the metrics worker — intact

### What changes
- `devlake_reader.py` (272 lines) → `connectors/{github,jira,jenkins}.py` (~1,200 lines total)
- DevLake API client in NestJS (~400 lines) → removed (config read directly from YAML)
- `docker-compose.yml` → remove the `devlake` + `devlake-pg` services
- `scripts/bulk_import_repos.py` → replaced by the GitHub connector with auto-discovery

---

## 6. Validating the Original Decision

ADR-001 (Hypothesis 3) already explicitly anticipated this scenario:

> *"If Apache DevLake loses traction or becomes limiting, we replace plugins individually with custom connectors, with no user impact. DevLake is an 'implementation detail' behind an abstraction layer (the Sync Worker)."*

**That abstraction worked.** The Sync Worker + Normalizer + Kafka are the layer that isolated PULSE from DevLake. The replacement is surgical: we swap the normalizer's **data source**, not the architecture.

---

## 7. Risks and Mitigations

| Risk | Probability | Mitigation |
|-------|--------------|-----------|
| APIs change (GitHub, Jira) | Low | PyGithub/jira-python are maintained by large communities |
| Rate limiting in a large org | Medium | PyGithub has built-in retry; implement exponential backoff |
| Slow backfill (1,426 repos) | Medium | Parallelize with asyncio; GraphQL batch queries; incremental |
| Missing GitLab/ADO when a customer asks | Low (R2+) | python-gitlab and azure-devops-python-api are ready; ~1 week each |
| Regression in already-ingested data | Low | Keep the DevLake DB as a read-only backup for 30 days |

---

## Appendix: Existing Code That Will Be Reused

```
Component                             Lines    Reuse
──────────────────────────────────────────────────
normalizer.py                           539    ~100%
devlake_sync.py (→ data_sync.py)        552     ~80%
pipeline/routes.py                      350     ~70%
pipeline/models.py                      120     100%
engineering_data/models.py              180     100%
alembic migrations (001-003)            400     100%
metrics_worker                         300+     100%
kafka shared module                     150     100%
──────────────────────────────────────────────────
Total reused                          2,591     ~90%
Total to build (connectors)          ~1,500      new
```
diff --git a/pulse/docs/adrs/PLAN-migration-custom-connectors.md b/pulse/docs/adrs/PLAN-migration-custom-connectors.md new file mode 100644 index 0000000..430f5f3 --- /dev/null +++ b/pulse/docs/adrs/PLAN-migration-custom-connectors.md @@ -0,0 +1,805 @@

# Migration Plan: DevLake → Proprietary Connectors

**Status:** Approved
**Date:** 2026-04-09
**Reference:** ADR-005
**Total estimate:** 2-3 weeks

---

## Overview of the Change

```
BEFORE (DevLake):
  GitHub API → DevLake Raw → DevLake Tool → DevLake Domain → DevLakeReader → Normalizer → PULSE DB → Kafka
  (4 hops, Go black box, 2 separate DBs)

AFTER (own connectors):
  GitHub API  → GitHubConnector  → Normalizer → PULSE DB → Kafka
  Jira API    → JiraConnector    → Normalizer → PULSE DB → Kafka
  Jenkins API → JenkinsConnector → Normalizer → PULSE DB → Kafka
  (1 hop, pure Python, 1 DB)
```

---

## File Structure — What Changes

```
packages/pulse-data/src/
├── config.py                    # MODIFIED: removes devlake_*, adds source configs
├── connectors/                  # NEW: connectors directory
│   ├── __init__.py
│   ├── base.py                  # NEW: BaseConnector abstract class
│   ├── github_connector.py      # NEW: ~350 lines
│   ├── jira_connector.py        # NEW: ~400 lines
│   └── jenkins_connector.py     # NEW: ~250 lines
├── contexts/
│   ├── engineering_data/
│   │   ├── devlake_reader.py    # REMOVED (272 lines)
│   │   ├── normalizer.py        # MODIFIED: adjusts field names (~30 lines change)
│   │   └── models.py            # UNCHANGED
│   └── pipeline/
│       ├── devlake_api.py       # REMOVED (76 lines)
│       ├── routes.py            # MODIFIED: swaps DevLake health for connector health
│       └── models.py            # UNCHANGED
├── workers/
│   ├── devlake_sync.py          # REFACTORED → data_sync.py (~150 lines change)
│   └── metrics_worker.py        # UNCHANGED
└── shared/
    ├── kafka.py                 # UNCHANGED
    └── http_client.py           # NEW: httpx wrapper with retry/rate-limit (~100 lines)
```

### Quantitative summary

| Action | Files | Lines |
|------|----------|--------|
| NEW (connectors + base + http_client) | 5 | ~1,200 |
| MODIFIED (normalizer, config, routes, sync) | 4 | ~200 lines changed |
| REMOVED (devlake_reader, devlake_api) | 2 | -348 lines |
| UNCHANGED (models, kafka, migrations, metrics) | 8+ | ~1,500 lines |
| **Net balance** | | **+~1,050 lines** |

---

## Phase 1 — Foundation (Days 1-2)

### 1.1 Base Connector + HTTP Client

**File:** `src/connectors/base.py`

```python
from abc import ABC, abstractmethod
from datetime import datetime
from typing import Any

class BaseConnector(ABC):
    """Interface every data-source connector must implement.

    Returns lists of dicts in the format the normalizer expects.
    Each connector translates its native API fields into the standard format.
    """

    @abstractmethod
    async def fetch_pull_requests(self, since: datetime | None = None) -> list[dict[str, Any]]:
        """Return PRs/MRs in the standard format."""
        ...

    @abstractmethod
    async def fetch_issues(self, since: datetime | None = None) -> list[dict[str, Any]]:
        """Return issues/work items in the standard format."""
        ...

    @abstractmethod
    async def fetch_issue_changelogs(self, issue_ids: list[str]) -> dict[str, list[dict[str, Any]]]:
        """Return status-transition changelogs keyed by issue_id."""
        ...

    @abstractmethod
    async def fetch_deployments(self, since: datetime | None = None) -> list[dict[str, Any]]:
        """Return deployments/builds in the standard format."""
        ...

    @abstractmethod
    async def fetch_sprints(self, since: datetime | None = None) -> list[dict[str, Any]]:
        """Return sprints in the standard format."""
        ...

    @abstractmethod
    async def fetch_sprint_issues(self, sprint_id: str) -> list[dict[str, Any]]:
        """Return the issues of a specific sprint."""
        ...

    @abstractmethod
    async def close(self) -> None:
        """Release resources (HTTP sessions, etc.)."""
        ...
```

**Key contract:** the returned dicts must use the same field names `normalizer.py` expects. That allows full reuse of the existing normalizer.

**File:** `src/shared/http_client.py`

```python
"""HTTP client wrapper with retry, rate limiting and logging."""

import httpx
import asyncio
import logging
from typing import Any

class ResilientHTTPClient:
    """httpx AsyncClient with:
    - Retry with exponential backoff (3 attempts)
    - Rate-limit awareness (respects X-RateLimit-* headers)
    - Configurable timeout (30s default)
    - Request/response logging
    """

    def __init__(self, base_url: str, auth: dict, timeout: float = 30.0):
        ...

    async def get(self, path: str, params: dict = None) -> Any:
        """GET with retry and rate-limit handling."""
        ...

    async def get_paginated(self, path: str, params: dict = None,
                            page_size: int = 100, max_pages: int = 100) -> list[dict]:
        """GET with automatic pagination. Supports:
        - Link header (GitHub)
        - startAt/maxResults (Jira)
        - page/pageSize (generic)
        """
        ...

    async def close(self):
        ...
```

### 1.2 Update config.py

**Remove:**
```python
devlake_db_url: str = "..."
devlake_api_url: str = "..."
```

**Add:**
```python
# Source API tokens (read from env vars — the same ones DevLake used)
github_token: str = ""
github_org: str = "webmotors-private"

jira_base_url: str = ""
jira_email: str = ""
jira_api_token: str = ""

jenkins_base_url: str = ""
jenkins_username: str = ""
jenkins_api_token: str = ""
```

> **Note:** these env vars already exist in .env and docker-compose.yml (GITHUB_TOKEN, JIRA_API_TOKEN, etc.). No new ones are needed.

### 1.3 Field Mapping: Native API → Normalizer

The normalizer expects dicts with specific fields.
Each connector must map to:

**Pull Requests (the normalizer expects):**
```
id, base_repo_id, head_repo_id, status, title, url, author_name,
created_date, merged_date, closed_date, merge_commit_sha,
base_ref, head_ref, additions, deletions
```

**Issues (the normalizer expects):**
```
id, url, issue_key, title, status, original_status, story_point,
priority, created_date, updated_date, resolution_date,
lead_time_minutes, assignee_name, type, sprint_id
```

**Issue Changelogs (the normalizer expects):**
```
issue_id, from_status (original_from_value), to_status (original_to_value), created_date
```

**Deployments (the normalizer expects):**
```
id, cicd_deployment_id, repo_id, name, result, status,
environment, created_date, started_date, finished_date
```

**Sprints (the normalizer expects):**
```
id, original_board_id, name, url, status, started_date,
ended_date, completed_date, total_issues (count)
```

**Sprint Issues (the normalizer expects):**
```
id, issue_key, status, original_status, story_point, type, resolution_date
```

---

## Phase 2 — Jira Connector (Days 3-5)

**Priority #1** because it is what is broken in DevLake.
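One cheap way to keep every connector honest about the field contract in section 1.3 is a test-time check that diffs a mapped record against the expected field set. A minimal sketch; the constant and helper names are illustrative, not existing PULSE code, and the field list is copied from section 1.3:

```python
# Issue field contract from section 1.3 (what the normalizer expects).
REQUIRED_ISSUE_FIELDS = {
    "id", "url", "issue_key", "title", "status", "original_status",
    "story_point", "priority", "created_date", "updated_date",
    "resolution_date", "lead_time_minutes", "assignee_name", "type", "sprint_id",
}

def missing_fields(record: dict, required: set) -> set:
    """Return the contract fields absent from a mapped record (empty set = compliant)."""
    return required - record.keys()
```

In each connector's unit tests, `assert not missing_fields(mapped, REQUIRED_ISSUE_FIELDS)` then fails fast whenever a mapping drifts from what the normalizer expects.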
**File:** `src/connectors/jira_connector.py`

### Jira REST API v3 endpoints to use:

| Data | Endpoint | Pagination |
|------|----------|-----------|
| Issues | `GET /rest/api/3/search` (JQL) | startAt + maxResults (50) |
| Issue detail | `GET /rest/api/3/issue/{key}?expand=changelog` | N/A |
| Sprints | `GET /rest/agile/1.0/board/{boardId}/sprint` | startAt + maxResults |
| Sprint issues | `GET /rest/agile/1.0/sprint/{sprintId}/issue` | startAt + maxResults |
| Boards | `GET /rest/agile/1.0/board` | startAt + maxResults |
| Changelogs | Included in the issue's expand=changelog | Inline |

### JQL for incremental search:
```
project IN (DESC, ENO, ANCR, PUSO, APPF, FID, CTURBO, PTURB)
AND updated >= "2026-04-01"
ORDER BY updated DESC
```

### Jira → Normalizer field mapping:

```python
def _map_issue(self, jira_issue: dict) -> dict:
    fields = jira_issue["fields"]
    return {
        "id": f"jira:JiraIssue:{self._connection_id}:{jira_issue['id']}",
        "url": f"{self._base_url}/browse/{jira_issue['key']}",
        "issue_key": jira_issue["key"],
        "title": fields.get("summary", ""),
        # status/priority/issuetype can be null in the payload, so guard with `or {}`
        "status": (fields.get("status") or {}).get("name", ""),
        "original_status": (fields.get("status") or {}).get("name", ""),
        "story_point": fields.get("story_points") or fields.get("customfield_10028"),
        "priority": (fields.get("priority") or {}).get("name", ""),
        "created_date": fields.get("created"),
        "updated_date": fields.get("updated"),
        "resolution_date": fields.get("resolutiondate"),
        "lead_time_minutes": None,  # Calculated by PULSE
        "assignee_name": (fields.get("assignee") or {}).get("displayName"),
        "type": (fields.get("issuetype") or {}).get("name", "Task"),
        "sprint_id": None,  # Filled in via the sprint API
    }
```

### Changelogs (inline via expand=changelog):

```python
def _map_changelogs(self, jira_issue: dict) -> list[dict]:
    changelogs = []
    for history in jira_issue.get("changelog", {}).get("histories", []):
        for item in history.get("items", []):
            if item.get("field") == "status":
                changelogs.append({
                    "issue_id": f"jira:JiraIssue:{self._connection_id}:{jira_issue['id']}",
                    "from_status": item.get("fromString", ""),
                    "to_status": item.get("toString", ""),
                    "created_date": history.get("created"),
                })
    return changelogs
```

### Direct advantages over DevLake:
- **Changelogs arrive with the issue** (expand=changelog) — 1 request vs 2 in DevLake
- **Native JQL** to filter by project/date — no intermediaries
- **API v3** directly — no waiting on a DevLake fix

### Estimate: ~400 lines, 3 days (including tests)

---

## Phase 3 — Jenkins Connector (Days 5-6)

**File:** `src/connectors/jenkins_connector.py`

### Jenkins API endpoints:

| Data | Endpoint |
|------|----------|
| Job list | `GET /api/json?tree=jobs[name,url,fullName]` |
| Job builds | `GET /job/{name}/api/json?tree=builds[number,result,timestamp,duration,url]` |
| Build detail | `GET /job/{name}/{number}/api/json` |

### Jenkins → Normalizer mapping (deployments):

```python
def _map_build(self, job_name: str, build: dict) -> dict:
    result = build.get("result", "UNKNOWN")
    timestamp_ms = build.get("timestamp", 0)
    duration_ms = build.get("duration", 0)
    started = datetime.fromtimestamp(timestamp_ms / 1000, tz=timezone.utc)
    finished = datetime.fromtimestamp((timestamp_ms + duration_ms) / 1000, tz=timezone.utc)

    return {
        "id": f"jenkins:JenkinsBuild:{self._connection_id}:{job_name}:{build['number']}",
        "cicd_deployment_id": f"jenkins:JenkinsJob:{self._connection_id}:{job_name}",
        "repo_id": None,
        "name": job_name,
        "result": result,  # SUCCESS, FAILURE, UNSTABLE, ABORTED
        "status": "DONE",
        "environment": self._detect_environment(job_name),
        "created_date": started.isoformat(),
        "started_date": started.isoformat(),
        "finished_date": finished.isoformat(),
    }
```

### Environment detection:
Read the `deploymentPattern` and `productionPattern` patterns per job from `config/connections.yaml`.

### Estimate: ~250 lines, 1.5 days

---

## Phase 4 — GitHub Connector (Days 7-10)

**File:** `src/connectors/github_connector.py`

### Strategy: REST + GraphQL

**REST API v3** for PRs (simple, paginated):
```
GET /repos/{owner}/{repo}/pulls?state=all&sort=updated&direction=desc&per_page=100
```

**GraphQL** for enriched data (timeline events):
```graphql
query($owner: String!, $repo: String!, $cursor: String) {
  repository(owner: $owner, name: $repo) {
    pullRequests(first: 100, after: $cursor, orderBy: {field: UPDATED_AT, direction: DESC}) {
      nodes {
        number
        title
        state
        author { login }
        createdAt
        mergedAt
        closedAt
        additions
        deletions
        changedFiles
        baseRefName
        headRefName
        mergeable
        reviewRequests(first: 10) { nodes { requestedReviewer { ... on User { login } } } }
        reviews(first: 20) { nodes { author { login } state submittedAt } }
        timelineItems(first: 50, itemTypes: [READY_FOR_REVIEW_EVENT, REVIEW_REQUESTED_EVENT, PULL_REQUEST_REVIEW]) {
          nodes {
            __typename
            ... on ReadyForReviewEvent { createdAt }
            ... on ReviewRequestedEvent { createdAt }
            ... on PullRequestReview { submittedAt state }
          }
        }
      }
      pageInfo { hasNextPage endCursor }
    }
  }
}
```

### EXTRA data DevLake did NOT provide:
- `first_review_at` — timestamp of the first review request or submitted review
- `approved_at` — timestamp of the first review with state=APPROVED
- `files_changed` — real count of changed files
- `reviewers` — list of reviewers with their states
- Full review timeline

### GitHub → Normalizer mapping:

```python
def _map_pr(self, repo_full_name: str, pr: dict) -> dict:
    return {
        "id": f"github:GithubPullRequest:{self._connection_id}:{pr['number']}",
        "base_repo_id": f"github:GithubRepo:{self._connection_id}:{repo_full_name}",
        "head_repo_id": f"github:GithubRepo:{self._connection_id}:{repo_full_name}",
        "status": pr["state"].upper(),  # OPEN, CLOSED, MERGED
        "title": pr["title"],
        "url": pr.get("html_url") or pr.get("url", ""),
        "author_name": pr.get("user", {}).get("login", "unknown"),
        "created_date": pr["created_at"],
        "merged_date": pr.get("merged_at"),
        "closed_date": pr.get("closed_at"),
        "merge_commit_sha": pr.get("merge_commit_sha"),
        "base_ref": pr.get("base", {}).get("ref", ""),
        "head_ref": pr.get("head", {}).get("ref", ""),
        "additions": pr.get("additions", 0),
        "deletions": pr.get("deletions", 0),
        # NEW — enrich the normalizer
        "_files_changed": pr.get("changed_files", 0),
        "_reviewers": [...],
        "_first_review_at": ...,
        "_approved_at": ...,
    }
```

### Repo discovery:
Replaces `scripts/bulk_import_repos.py`:
```python
async def discover_repos(self, org: str, active_months: int = 12) -> list[str]:
    """List all repos in the org, filtered by recent activity."""
    repos = await self._client.get_paginated(f"/orgs/{org}/repos", params={"type": "all"})
    cutoff = datetime.now(timezone.utc) - timedelta(days=active_months * 30)
    return [r["full_name"] for r in repos if _parse_datetime(r["pushed_at"]) > cutoff]
```

### Rate limiting:
- REST: 5,000 req/hour with a token (PyGithub manages this automatically)
- GraphQL: 5,000 points/hour (1 query = ~1 point)
- For 1,426 repos with ~4 PRs each: ~5,704 requests = ~1.1 hours worst case
- With GraphQL: ~1,426 queries × 1 point = far less

### Estimate: ~350 lines, 3 days (including GraphQL + discovery)

---

## Phase 5 — Refactor the Sync Worker (Days 10-11)

### From: `devlake_sync.py` → To: `data_sync.py`

**Surgical change:** the sync worker swaps `DevLakeReader` for a `ConnectorAggregator`:

```python
# BEFORE (devlake_sync.py, line 28):
from src.contexts.engineering_data.devlake_reader import DevLakeReader

# AFTER (data_sync.py):
from src.connectors.github_connector import GitHubConnector
from src.connectors.jira_connector import JiraConnector
from src.connectors.jenkins_connector import JenkinsConnector
```

### ConnectorAggregator — aggregates data from multiple connectors:

```python
class ConnectorAggregator:
    """Aggregates data from multiple connectors behind a unified interface.

    Implements the same interface DevLakeReader had, so the sync worker
    does not need to change its watermark/upsert/kafka logic.
    """
    def __init__(self):
        self._connectors = {
            "github": GitHubConnector(...),
            "jira": JiraConnector(...),
            "jenkins": JenkinsConnector(...),
        }

    async def fetch_pull_requests(self, since=None) -> list[dict]:
        return await self._connectors["github"].fetch_pull_requests(since)

    async def fetch_issues(self, since=None) -> list[dict]:
        return await self._connectors["jira"].fetch_issues(since)

    async def fetch_issue_changelogs(self, issue_ids) -> dict:
        return await self._connectors["jira"].fetch_issue_changelogs(issue_ids)

    async def fetch_deployments(self, since=None) -> list[dict]:
        return await self._connectors["jenkins"].fetch_deployments(since)

    async def fetch_sprints(self, since=None) -> list[dict]:
        return await self._connectors["jira"].fetch_sprints(since)

    async def fetch_sprint_issues(self, sprint_id) -> list[dict]:
        return await self._connectors["jira"].fetch_sprint_issues(sprint_id)
```

### What does NOT change in the sync worker:
- `sync()` — orchestration of all entities ✅
- `_sync_pull_requests()` — watermark + normalize + upsert + kafka logic ✅
- `_sync_issues()` — likewise ✅
- `_sync_deployments()` — likewise ✅
- `_sync_sprints()` — likewise ✅
- `_upsert_*()` — all the ON CONFLICT queries ✅
- `_get_watermark()` / `_set_watermark()` — watermark persistence ✅
- `_log_sync_cycle()` — observability ✅
- `run_sync_loop()` — cron loop ✅

### What DOES change in the sync worker:
- Line 28: import DevLakeReader → import ConnectorAggregator
- Line ~114: `self._reader = DevLakeReader()` → `self._reader = ConnectorAggregator()`
- Line ~205: `await self._reader.close()` → same call (ConnectorAggregator.close() closes all connectors)

**Total: ~5-10 changed lines in the sync worker.**

### Estimate: 1 day

---

## Phase 6 — Update the Normalizer (Day 11)

### Minimal normalizer changes:

The normalizer is 99% reusable because the connectors map into the expected format. Adjustments:

1. **`_detect_source()`** — keep as is (the connectors generate IDs prefixed `github:`, `jira:`, `jenkins:`)

2. **`normalize_pull_request()`** — add support for the extra GitHub GraphQL fields:
```python
# Add after line 274:
"first_review_at": _parse_datetime(devlake_pr.get("_first_review_at")),
"approved_at": _parse_datetime(devlake_pr.get("_approved_at")),
"files_changed": devlake_pr.get("_files_changed", 0),
"reviewers": devlake_pr.get("_reviewers", []),
```

3. **Docstrings** — update "DevLake" → "source connector" in the docstrings

### Estimate: 0.5 day

---

## Phase 7 — Update the Pipeline Monitor (Day 12)

### Remove: `devlake_api.py` (76 lines)

### Modify: `routes.py`

**BEFORE:** the Pipeline Monitor compares DevLake counts vs PULSE counts
**AFTER:** the Pipeline Monitor shows connector health + PULSE counts

```python
# Remove:
from src.contexts.pipeline.devlake_api import DevLakeAPIClient
from src.contexts.engineering_data.devlake_reader import DevLakeReader

# Add:
from src.connectors.github_connector import GitHubConnector
from src.connectors.jira_connector import JiraConnector
from src.connectors.jenkins_connector import JenkinsConnector
```

**`_get_devlake_counts()`** → **`_get_source_health()`**:
```python
async def _get_source_health() -> dict:
    """Check connectivity and basic counts from each source."""
    health = {}
    # GitHub: test API connectivity
    try:
        gh = GitHubConnector(...)
        health["github"] = {"status": "healthy", "org": settings.github_org}
        await gh.close()
    except Exception as e:
        health["github"] = {"status": "error", "error": str(e)}
    # ... same for Jira and Jenkins
    return health
```

The DevLake vs PULSE comparison no longer makes sense (there is no intermediate DB).
In its place, the Pipeline Monitor shows:
- **Connector status** (healthy/error per source)
- **PULSE DB counts** (total records per entity)
- **Last sync** (from pipeline_sync_log, already works)
- **Watermarks** (from pipeline_watermarks, already works)
- **Errors** (from pipeline_sync_log, already works)

### Estimate: 1 day

---

## Phase 8 — Clean Up the Infrastructure (Days 12-13)

### 8.1 docker-compose.yml

**Remove services:**
```yaml
# REMOVE ENTIRELY:
devlake:
  image: apache/devlake:v1.0.3-beta7
  ...

devlake-pg:
  image: postgres:16-alpine
  ...
```

**Remove the volume:**
```yaml
volumes:
  # REMOVE:
  devlake_pgdata:
    driver: local
```

**Update pulse-data and sync-worker:**
```yaml
pulse-data:
  environment:
    # REMOVE:
    DEVLAKE_DB_URL: ...
    DEVLAKE_API_URL: ...
    # ADD:
    GITHUB_TOKEN: ${GITHUB_TOKEN:-}
    GITHUB_ORG: ${GITHUB_ORG:-webmotors-private}
    JIRA_BASE_URL: ${JIRA_BASE_URL:-}
    JIRA_EMAIL: ${JIRA_EMAIL:-}
    JIRA_API_TOKEN: ${JIRA_API_TOKEN:-}
    JENKINS_BASE_URL: ${JENKINS_BASE_URL:-}
    JENKINS_USERNAME: ${JENKINS_USERNAME:-}
    JENKINS_API_TOKEN: ${JENKINS_API_TOKEN:-}
  depends_on:
    # REMOVE:
    devlake-pg:
      condition: service_healthy
```

### 8.2 NestJS — Simplify the Integration Module

Today the `ConfigLoaderService` provisions DevLake (creates connections, blueprints, scopes). With our own connectors this is no longer necessary.

**Simplify `config-loader.service.ts`:**
- Keep: reading `connections.yaml` + creating teams/org in the PULSE DB
- Remove: all `DevLakeApiClient` call logic (~300 lines)
- Remove: `devlake-api.client.ts` entirely (319 lines)

**Result:** NestJS only loads the YAML config and creates records in the PULSE DB. Python (pulse-data) handles ingestion.
### 8.3 Scripts

- **Remove:** `scripts/bulk_import_repos.py` (replaced by `GitHubConnector.discover_repos()`)
- **Rewrite:** `scripts/full_ingestion.py` (simplify — no DevLake API polling)

### 8.4 Dependencies (pyproject.toml)

**Add:**
```toml
PyGithub = ">=2.1.0"        # GitHub REST API
gql = ">=3.5.0"             # GitHub GraphQL (optional; httpx can be used directly)
jira = ">=3.8.0"            # Jira REST API v3
python-jenkins = ">=1.8.0"  # Jenkins API
```

**Note:** `httpx` is already an existing dependency — reuse it for custom requests.

### Estimate: 1 day

---

## Phase 9 — Tests (Days 13-15)

### 9.1 Unit Tests per Connector

```python
# tests/connectors/test_jira_connector.py
async def test_map_issue_normalizer_compatible():
    """Ensure the returned dict has every field the normalizer expects."""
    raw_jira = {...}  # fixture of a real Jira issue
    mapped = connector._map_issue(raw_jira)
    # All fields must exist:
    assert "id" in mapped
    assert "issue_key" in mapped
    assert "original_status" in mapped
    assert "story_point" in mapped
    ...

async def test_changelog_extraction():
    """Ensure changelogs are extracted from expand=changelog."""
    ...

async def test_incremental_sync_jql():
    """Ensure the JQL includes the updated >= since filter."""
    ...
```

### 9.2 Integration Test — Full Pipeline

```python
async def test_full_sync_cycle():
    """Test the full flow: Connector → Normalizer → Upsert → Kafka."""
    # Mock the connectors with real data (fixtures)
    aggregator = ConnectorAggregator(connectors={
        "github": MockGitHubConnector("fixtures/github_prs.json"),
        "jira": MockJiraConnector("fixtures/jira_issues.json"),
        "jenkins": MockJenkinsConnector("fixtures/jenkins_builds.json"),
    })
    worker = DataSyncWorker(reader=aggregator)
    results = await worker.sync()

    assert results["pull_requests"]["synced"] > 0
    assert results["issues"]["synced"] > 0
    assert results["deployments"]["synced"] > 0
```

### 9.3 Smoke Test against real APIs

```bash
# Manual validation script (not automated)
python -m scripts.smoke_test_connectors
# Tests:
# 1. GitHub: fetch 10 PRs from the most active repo
# 2. Jira: fetch 10 issues from the DESC project
# 3. Jenkins: fetch 5 builds from the most recent job
# 4. Normalizer: processes the data without errors
# 5. Upsert: inserts into the PULSE DB
```

### Estimate: 2 days

---

## Phase 10 — Validation and Cutover (Day 15)

### 10.1 Compare pre/post-migration data

```sql
-- Before: snapshot of the existing data
SELECT source, COUNT(*) FROM eng_pull_requests GROUP BY source;
SELECT source, COUNT(*) FROM eng_issues GROUP BY source;
SELECT source, COUNT(*) FROM eng_deployments GROUP BY source;
```

### 10.2 Run a full sync with the new connectors

```bash
docker compose up -d  # No DevLake!
docker exec pulse-data python -m scripts.full_ingestion --reset-watermarks
```

### 10.3 Validate counts

```sql
-- After: counts must be >= the previous ones
-- (they may be higher because the connectors reach data DevLake was losing)
SELECT source, COUNT(*) FROM eng_pull_requests GROUP BY source;
SELECT source, COUNT(*) FROM eng_issues GROUP BY source; -- Expected: >>243 (the 32K Jira issues)
```

### 10.4 Check the Pipeline Monitor

- The dashboard must show connectors healthy
- Sync logs recording complete cycles
- Watermarks updating

---

## Consolidated Schedule

| Day | Phase | Deliverable |
|-----|------|-----------|
| 1-2 | Foundation | `base.py`, `http_client.py`, updated config |
| 3-5 | Jira connector | `jira_connector.py` + unit tests |
| 5-6 | Jenkins connector | `jenkins_connector.py` + unit tests |
| 7-10 | GitHub connector | `github_connector.py` + GraphQL + discovery |
| 10-11 | Refactor sync worker | `data_sync.py` with ConnectorAggregator |
| 11 | Update normalizer | Extra fields, docstrings |
| 12 | Pipeline Monitor | Swap DevLake health for connector health |
| 12-13 | Clean up infra | docker-compose, NestJS, scripts |
| 13-15 | Tests + validation | Unit, integration, smoke, cutover |

---

## Risks and Mitigations

| Risk | Mitigation |
|-------|----------|
| GitHub rate limit (1,426 repos) | GraphQL batching + sleep between batches + cache |
| Custom story-points field in Jira | Read which customfield to use from connections.yaml |
| Jenkins auth via certificate | Verify basic auth works (it already does in .env) |
| Existing PULSE DB data diverges | Run the first sync with --reset-watermarks |
| Normalizer regression | Unit tests with fixtures of the real data |

---

## Readiness Checklist (DoD)

- [ ] All 3 connectors implemented and tested
- [ ] Normalizer adapted and tests passing
- [ ] Sync worker using ConnectorAggregator
- [ ] Pipeline Monitor with no DevLake references
- [ ] docker-compose.yml with no DevLake services
- [ ] NestJS without DevLakeApiClient
- [ ] `make up` brings up the full stack without DevLake
- [ ] Full sync returns >= the previous data
- [ ] Jira issues: 32,000+ (vs 243 before)
- [ ] Pipeline Monitor shows connectors healthy
- [ ] Unit tests for the 3 connectors
- [ ] Smoke test against Webmotors' real APIs

diff --git a/pulse/docs/backlog.md b/pulse/docs/backlog.md new file mode 100644 index 0000000..0c766f5 --- /dev/null +++ b/pulse/docs/backlog.md @@ -0,0 +1,51 @@

# PULSE Data Platform Backlog

## Pipeline Monitor v2

### 1. Step-level instrumentation
Sync worker should emit `{entity_type, step_name, processed, total, duration_sec, status}` events per batch to a `pipeline_step_progress` table. The frontend already renders 4 steps (fetch / changelog / normalize / upsert) when present. Currently the API synthesizes 2 aggregated steps from `pipeline_ingestion_progress` fields as a placeholder.

**Priority:** High
**Depends on:** Sync worker refactor to emit granular progress events.

### 2. Rate limit tracking
Values are currently hardcoded placeholders per source. Source connectors need to report remaining/limit from API response headers:
- **GitHub:** `X-RateLimit-Remaining` / `X-RateLimit-Limit` headers
- **Jira:** 429 backoff tracking (Jira Cloud does not expose explicit rate-limit headers)
- **Jenkins:** Internal concurrency counter (no standard rate-limit header)

Store in a `source_rate_limits` table or Redis cache; Pipeline Monitor reads from there.

**Priority:** Medium

### 3. Retry button E2E
- RBAC role required: `data_platform`
- POST `/data/v1/pipeline/entities/{sourceId}/{entityType}/retry` endpoint (currently returns 501)
- Sync worker should consume retry requests from a queue (Redis or Kafka topic)
- Frontend button is already hidden behind a feature flag

**Priority:** Low (requires RBAC + sync worker queue consumer)

### 4.
PR link rate per team — denominator refinement
+Current approximation: `pr_reference_count / total_repo_prs` may overcount when a repo serves multiple squads. The formal definition should be:
+
+> (PRs mentioning KEY in title AND `linked_issue_ids` contains a matching issue_id) / (PRs mentioning KEY in title)
+
+This requires joining `eng_pull_requests` with `eng_issues` on issue_key extraction, which is expensive at scale. Consider a materialized view or a pre-calculated field on the catalog.
+
+**Priority:** Medium (accuracy improvement, no user-facing change)
+
+### 5. Populate `jira_project_catalog.issue_count`
+Currently all 69 rows have `issue_count = 0`. The Pipeline Monitor `/teams` endpoint exposes this as the per-squad "ISSUES" column, so it always shows 0. Fix: update the Jira sync worker to refresh `issue_count` (e.g. `UPDATE jira_project_catalog AS jpc SET issue_count = (SELECT count(*) FROM eng_issues e WHERE e.project_key = jpc.project_key)`) after each full or incremental sync. Also consider refreshing `pr_reference_count` the same way to unblock alternative queries.
+
+**Priority:** Medium
+
+### 6. Pipeline events feed
+The `pipeline_events` table is empty — the sync worker and metrics worker don't emit events yet. The `/timeline` endpoint works but returns `[]`. Fix: emit events on:
+- Successful sync cycle completion (`success`, per source, with records/duration)
+- Errors (existing `recent_errors` plumbing can be forwarded to events)
+- Rate-limit warnings
+- Backfill start/end
+
+**Priority:** High (core observability; the Pipeline Monitor Timeline tab is inert without this)
+
diff --git a/pulse/docs/backlog/README.md b/pulse/docs/backlog/README.md
new file mode 100644
index 0000000..8fb1f6a
--- /dev/null
+++ b/pulse/docs/backlog/README.md
@@ -0,0 +1,47 @@
+# UX-Sourced Backlog (FDD)
+
+This directory holds **Feature-Driven Development** backlog cards generated by the
+`pulse-ux-reviewer` agent after a UX/UI review.
+
+It complements the top-level `pulse/docs/backlog.md` (cross-cutting backlog). Files
+here are **per-page / per-journey** backlogs derived directly from design work.
+
+## Purpose
+
+Translate design decisions into prioritised, persona-driven feature cards with BDD
+acceptance criteria so that `pulse-product-director` can slot them into MVP / R1–R4
+planning and `pulse-engineer` can pick them up for implementation.
+
+## Naming
+
+`<page-or-journey>-backlog.md`
+
+Examples:
+- `pipeline-monitor-backlog.md`
+- `jira-settings-backlog.md`
+- `onboarding-flow-backlog.md`
+
+## Card template (FDD)
+
+```
+Feature: <feature name>
+Epic: <epic>
+Release: MVP | R1 | R2 | R3 | R4
+Persona: Carlos | Ana | Marina | Priya | Roberto | Lucas
+Priority: P0 | P1 | P2
+Owner class: Frontend | API | Data | Metrics
+Acceptance criteria (BDD):
+  Given ...
+  When ...
+  Then ...
+Anti-surveillance check: Pass | Fail — reason
+Dependencies: ...
+Estimate: XS | S | M | L | XL
+Analytics events: ...
+```
+
+Cards are ordered by delivery sequence (feature set → subsequent feature set).
+
+## How to produce
+
+Run `/pulse-ux-review <page>` — the reviewer agent writes the backlog here.
diff --git a/pulse/docs/backlog/dashboard-backlog.md b/pulse/docs/backlog/dashboard-backlog.md
new file mode 100644
index 0000000..5142b48
--- /dev/null
+++ b/pulse/docs/backlog/dashboard-backlog.md
@@ -0,0 +1,844 @@
+# Dashboard — FDD Backlog
+
+Feature-Driven Development cards for the PULSE Dashboard redesign.
+Ordered by delivery sequence. Each card follows the FDD card template in `pulse/docs/backlog/README.md`.
+ +--- + +## Feature Set 1 — Foundation: Grouped KPIs & Filters + +### FDD-DSH-001 · Group global KPIs into DORA and Flow pills +**Epic:** Dashboard Redesign · **Release:** MVP · **Priority:** P0 +**Persona:** Carlos (EM), Ana (CTO) +**Owner class:** Frontend (`pulse-engineer`) + +**Acceptance (BDD):** +``` +Given Carlos opens the dashboard + When the global metrics load successfully + Then he sees two labeled groups "DORA Metrics" and "Flow & Management" + And each group contains 4 KPIs with value, unit, trend % and sparkline + And each DORA KPI shows a classification badge (Elite/High/Medium/Low) + +Given the data is loading + When the page mounts + Then 8 KPI skeletons are shown preserving card geometry (no layout shift) + +Given the global endpoint returns an error + When the page mounts + Then an inline error card is shown with a "Tentar novamente" button +``` + +**Anti-surveillance check:** PASS — all metrics are aggregate, no author fields exposed. +**Dependencies:** `GET /data/v1/metrics/global` endpoint (exists). 
+**Estimate:** M +**Analytics:** `dashboard_viewed`, `dashboard_loading_shown`, `dashboard_error_shown` + +--- + +### FDD-DSH-002 · Filter dashboard by squad with searchable combobox +**Epic:** Dashboard Redesign · **Release:** MVP · **Priority:** P0 +**Persona:** Carlos (EM), Priya (Agile Coach) +**Owner class:** Frontend (`pulse-engineer`) + API (`pulse-engineer`) + +**Acceptance (BDD):** +``` +Given Carlos wants to focus on one squad + When he clicks the squad combobox + Then he sees options grouped by tribo, with a search input at top + And the list renders all 27 squads without scroll lag + +Given Carlos types "sec" in the search + When the list updates + Then only squads whose name or tribo contains "sec" are shown + +Given Carlos selects "SECOM" + When the selection commits + Then the combobox label updates to "SECOM" + And KPI strip, ranking, and evolution all re-fetch scoped to that squad + And an applied-filters banner shows "Exibindo SECOM · últimos 60 dias" +``` + +**Anti-surveillance check:** PASS — filter operates on team id only. +**Dependencies:** `GET /data/v1/pipeline/teams` (exists, returns 27 squads). 
+**Estimate:** M +**Analytics:** `dashboard_team_filter_changed` + +--- + +### FDD-DSH-003 · Filter dashboard by period (30/60/90/120 days + custom) +**Epic:** Dashboard Redesign · **Release:** MVP · **Priority:** P0 +**Persona:** Carlos, Ana, Priya +**Owner class:** Frontend (`pulse-engineer`) + +**Acceptance (BDD):** +``` +Given the dashboard is showing default period + When Carlos clicks a period pill (30d, 60d, 90d, 120d) + Then the selected pill highlights and aria-checked flips to true + And all sections re-fetch with the new period + +Given Carlos clicks "Personalizado…" + When the custom range panel expands + Then he can pick start and end dates (default: last 90 days) + And if start > end, validation prevents submission and shows inline error + And valid selection refetches data with ISO-formatted range + +Given the period is changed + When the new data arrives + Then applied-filters banner reflects "últimos {N} dias" or "{start} a {end}" +``` + +**Anti-surveillance check:** PASS. +**Dependencies:** Extend `filterStore` to include `60d` and `120d`. Extend API query params. +**Estimate:** S +**Analytics:** `dashboard_period_changed` + +--- + +### FDD-DSH-004 · Remove "PRs Needing Attention" from dashboard +**Epic:** Dashboard Redesign · **Release:** MVP · **Priority:** P0 +**Persona:** Carlos (EM) — reduces clutter. Marina continues to see PRs at `/prs`. +**Owner class:** Frontend (`pulse-engineer`) + API (`pulse-engineer`) + +**Acceptance (BDD):** +``` +Given the dashboard is rendered + When the page mounts + Then the "PRs Needing Attention" section does not appear + And no API call is made for `prsNeedingAttention` + +Given the `/prs` route is accessed + When it mounts + Then the full PR list is still available (no regression) +``` + +**Anti-surveillance check:** PASS — actually REMOVES an author-surface (PR author name was shown on dashboard). Net improvement to anti-surveillance posture. 
+**Dependencies:** Update `useHomeMetrics` to drop `prsNeedingAttention` branch; verify `/prs` route owns its own hook. +**Estimate:** S +**Analytics:** none (deletion). + +--- + +## Feature Set 2 — Per-Team Metric Rankings + +### FDD-DSH-010 · Display per-team ranking with metric tabs +**Epic:** Dashboard Redesign · **Release:** MVP · **Priority:** P0 +**Persona:** Carlos (EM), Priya (Agile Coach) +**Owner class:** Frontend (`pulse-engineer`) + API (`pulse-engineer`) + Metrics (`pulse-data-scientist`) + +**Acceptance (BDD):** +``` +Given the dashboard has loaded + When Carlos sees the "Comparativo por squad" section + Then 6 metric tabs are available: Deploy Frequency, Lead Time, Change Failure, Cycle Time, WIP, Throughput + And the first tab (Deploy Frequency) is active by default + And a horizontal bar ranking of all 27 squads is shown sorted by that metric + And each row shows: position, squad name, tribo, bar length proportional to value, value+unit, DORA badge + +Given a row classification is "low" + When it is rendered + Then the bar color is `--color-dora-low` AND a "Low" badge is shown (color + label, WCAG A) + +Given Carlos clicks a different metric tab + When the tab switches + Then the sort direction flips appropriately (asc for lower-is-better, desc otherwise) + And the chart subtitle updates to reflect the metric context +``` + +**Anti-surveillance check:** PASS — all rows are squad-level aggregates only. +**Dependencies:** New endpoint `GET /data/v1/metrics/by-team?metric={}&period={}` (needs `pulse-data-engineer` to expose). 
+**Estimate:** L +**Analytics:** `dashboard_ranking_metric_changed` + +--- + +### FDD-DSH-011 · Classify each ranking row using DORA thresholds +**Epic:** Dashboard Redesign · **Release:** MVP · **Priority:** P1 +**Persona:** Carlos, Ana +**Owner class:** Metrics (`pulse-data-scientist`) + API (`pulse-engineer`) + +**Acceptance (BDD):** +``` +Given a deploy frequency value of 3.8/day + When classified + Then the classification is "elite" (>=1/day) + +Given a change failure rate of 12% + When classified + Then the classification is "medium" (5-15%) + +Given a flow metric without DORA threshold (WIP, Throughput) + When classified + Then the classification uses the quantile-based rule defined by pulse-data-scientist + And the rule is documented in pulse/docs/metrics/classification.md +``` + +**Anti-surveillance check:** PASS. +**Dependencies:** Formula sign-off from `pulse-data-scientist`. +**Estimate:** M +**Analytics:** none. + +--- + +### FDD-DSH-012 · Open team drawer from ranking row click +**Epic:** Dashboard Redesign · **Release:** MVP · **Priority:** P1 +**Persona:** Carlos, Priya +**Owner class:** Frontend (`pulse-engineer`) + +**Acceptance (BDD):** +``` +Given the ranking is visible + When Carlos clicks a team row (or presses Enter/Space on it with focus) + Then a non-modal drawer slides from the right (520px desktop, full-screen mobile) + And the drawer shows the squad name, tribo, 7 metric tiles (current values), and 2 charts: 12-week evolution + cycle-time distribution + And the page behind remains interactive + +Given the drawer is open + When Carlos presses Escape + Then the drawer closes + And focus returns to the originating row + +Given a screen reader user navigates the drawer + When the drawer opens + Then role="dialog" is announced with the squad name as accessible label +``` + +**Anti-surveillance check:** PASS — drawer shows only aggregate team metrics, no PR/author lists. +**Dependencies:** `GET /data/v1/teams/{id}/detail?period={}` endpoint. 
+**Estimate:** L +**Analytics:** `dashboard_drawer_opened`, `dashboard_drawer_closed` (with dwellMs) + +--- + +## Feature Set 3 — Evolution Small Multiples + +### FDD-DSH-020 · Display 12-week evolution per squad in small multiples +**Epic:** Dashboard Redesign · **Release:** R1 · **Priority:** P1 +**Persona:** Priya (Agile Coach), Carlos +**Owner class:** Frontend (`pulse-engineer`) + API (`pulse-engineer`) + Data (`pulse-data-engineer`) + +**Acceptance (BDD):** +``` +Given the dashboard is loaded + When Priya scrolls to "Evolução por squad" + Then a grid of 27 mini line-charts is shown, one per squad + And squads are grouped under tribo headings (PF, TEC, PI, SALES, BG, DESC, ENO, CPA) + And each tile shows: squad name, tribo, 12-week spark, current value, delta vs 12 weeks ago + +Given Priya changes the "Métrica" select + When a new metric is selected (Deploy Freq, Lead Time, CFR, Cycle P50, WIP, Throughput) + Then all 27 sparks re-render with the new metric + And the value and delta update per tile + +Given the data is backfilling for a specific squad + When its tile renders + Then the spark shows a dashed segment for the backfilling range + And a "Backfill" badge is shown +``` + +**Anti-surveillance check:** PASS — tiles show team-level series only. +**Dependencies:** `GET /data/v1/metrics/by-team/evolution?metric={}&period={}` endpoint. +**Estimate:** L +**Analytics:** `dashboard_evolution_metric_changed` + +--- + +### FDD-DSH-021 · Drill into drawer from small-multiple tile +**Epic:** Dashboard Redesign · **Release:** R1 · **Priority:** P2 +**Persona:** Priya +**Owner class:** Frontend (`pulse-engineer`) + +**Acceptance (BDD):** +``` +Given the small multiples are rendered + When Priya clicks a tile + Then the same team drawer opens (identical to ranking drill-down) + And analytics source is tagged "small-multiple" +``` + +**Anti-surveillance check:** PASS. +**Dependencies:** FDD-DSH-012. 
+**Estimate:** XS +**Analytics:** `dashboard_drawer_opened { source: 'small-multiple' }` + +--- + +## Feature Set 4 — States & Polish + +### FDD-DSH-030 · Show empty state when no squads are configured +**Epic:** Dashboard Redesign · **Release:** MVP · **Priority:** P1 +**Persona:** Carlos (first-time admin) +**Owner class:** Frontend (`pulse-engineer`) + +**Acceptance (BDD):** +``` +Given no squads exist for the tenant + When the dashboard loads + Then an empty-state card is shown with heading "Nenhuma squad cadastrada ainda" + And a secondary action links to /settings/sources to connect DevLake + And no zero-value KPIs are rendered +``` + +**Anti-surveillance check:** PASS. +**Dependencies:** — +**Estimate:** S +**Analytics:** `dashboard_empty_shown` + +--- + +### FDD-DSH-031 · Show degraded-data banner when sources are delayed +**Epic:** Dashboard Redesign · **Release:** R1 · **Priority:** P1 +**Persona:** Carlos, Lucas (Data Platform) +**Owner class:** Frontend (`pulse-engineer`) + Data (`pulse-data-engineer`) + +**Acceptance (BDD):** +``` +Given one or more data sources have freshness > SLA + When the dashboard mounts + Then a `role="status"` banner is shown above the KPI strip with + "{N} fonte(s) com atraso. Alguns gráficos podem estar parciais." + And a link "Ver pipeline" deep-links to /pipeline-monitor + +Given all sources are fresh + When the dashboard mounts + Then no banner is shown +``` + +**Anti-surveillance check:** PASS. +**Dependencies:** Freshness metadata on `GET /data/v1/metrics/global`. 
+**Estimate:** S +**Analytics:** `dashboard_degraded_shown` + +--- + +### FDD-DSH-032 · Validate and cap custom date range +**Epic:** Dashboard Redesign · **Release:** MVP · **Priority:** P2 +**Persona:** Ana, Priya +**Owner class:** Frontend (`pulse-engineer`) + +**Acceptance (BDD):** +``` +Given Carlos picks start=2025-04-16 and end=2026-04-16 (365 days) + When he confirms + Then data loads normally + +Given Carlos picks start=2025-01-01 and end=2026-04-16 (>365 days) + When he attempts to confirm + Then inline validation shows "Período máximo: 365 dias" + And data is not refetched + +Given Carlos picks start after end + When validation runs + Then an inline error is shown and refetch is blocked +``` + +**Anti-surveillance check:** PASS. +**Dependencies:** — +**Estimate:** S +**Analytics:** `dashboard_custom_range_rejected { reason }` + +--- + +### FDD-DSH-033 · Accessibility audit on dashboard — ✅ DONE 2026-04-24 +**Epic:** Dashboard Redesign · **Release:** MVP · **Priority:** P0 +**Persona:** All personas +**Owner class:** Test (`pulse-test-engineer`) +**Status:** ✅ Shipped — Sprint 1.2 step 4 (2026-04-23, 3 pages) + FDD-DSH-033 +closure (2026-04-24, +7 pages). Full dashboard surface audited. + +**Delivered — 10 routes automated with axe-core + Playwright:** + +| Page | Rules passing | Spec | +|---|---|---| +| `/` (Home Dashboard) | 23 | `home.spec.ts` | +| `/metrics/dora` | 21 | `dora.spec.ts` | +| `/metrics/cycle-time` | 21 | `cycle-time.spec.ts` | +| `/metrics/throughput` | 21 | `throughput.spec.ts` | +| `/metrics/lean` | 21 | `lean.spec.ts` | +| `/metrics/sprints` | 21 | `sprints.spec.ts` | +| `/prs` | 21 | `prs.spec.ts` | +| `/pipeline-monitor` | 17 | `pipeline-monitor.spec.ts` | +| `/integrations` | 16 | `integrations.spec.ts` | +| `/settings/integrations/jira/catalog`| 21 | `jira-settings.spec.ts` | + +**Result:** 10/10 specs green in 15.4s; **0 critical + 0 serious** across 203 rule-instances. 
+WCAG 2.1 AA gate is live in CI (tests/e2e/a11y/*.spec.ts runs via `npm run test:a11y`).
+Template + runbook documented in `pulse/docs/testing-playbook.md` §8.7.
+
+**Real bug found & fixed during the audit (Sprint 1.2 step 4):**
+`SquadListCard.MetricPair` was wrapping `<dt>`/`<dd>` in `<span>` instead of `<div>`.
+Per HTML5, `<dl>` only accepts `<dt>`, `<dd>`, or `<div>` wrappers as direct
+children. 88 violations fixed by swapping one element.
+
+**Deliberate deferrals (tracked elsewhere):**
+- `color-contrast` rule disabled via `disableRules` in every spec — tracked
+  as FDD-OPS-003 (design-system contrast audit, P1).
+- `page-has-heading-one` (best-practice, not WCAG) surfaced that
+  `/pipeline-monitor` has no h1 — added to the a11y backlog for polish.
+- Drawer/keyboard-only journey (second BDD scenario) is covered by the
+  smoke E2E spec pattern; a dedicated keyboard-nav spec to be added when
+  the drawer regresses or in Sprint 2 polish.
+
+**Anti-surveillance check:** PASS.
+**Dependencies:** FDD-DSH-001..032 (delivered).
+**Estimate:** M (delivered).
+**Analytics:** none.
+
+---
+
+## Feature Set 5 — Future (R2+)
+
+### FDD-DSH-040 · Tribo-level roll-up view
+**Release:** R2 · **Priority:** P2 · **Persona:** Ana (CTO) · **Owner:** Frontend + Metrics
+Aggregate the 27 squads into 8 tribos and show tribo-first with expandable squads. Out of MVP scope.
+
+### FDD-DSH-041 · Anomaly detection overlay on small multiples
+**Release:** R2 · **Priority:** P2 · **Persona:** Priya · **Owner:** Data Scientist + Frontend
+Highlight weeks where the value crossed 2σ from the 12-week baseline.
+
+### FDD-DSH-042 · Export dashboard snapshot (PNG/PDF)
+**Release:** R3 · **Priority:** P3 · **Persona:** Ana · **Owner:** Frontend
+For CTO quarterly reviews. Read-only, no external trigger.
+
+---
+
+## Feature Set 6 — 4th DORA Metric (R1)
+
+### FDD-DSH-050 · MTTR / Time to Restore (4th DORA metric)
+**Release:** R1 · **Priority:** P1 · **Persona:** Carlos (EM) · Ana (CTO)
+**Owner:** Data Engineer + Data Scientist + Backend + Frontend
+
+**Context:** Today the dashboard renders the "Time to Restore" card as "—" with an "R1" badge
+and an explanatory tooltip. The backend (`/data/v1/metrics/home`) already returns `time_to_restore`
+explicitly as `null` (the field exists in the `HomeMetricCard` schema).
What's still missing is the source:
+computing MTTR requires **detecting incidents** and measuring the time to resolution.
+
+**Source hypotheses (to validate with pulse-data-scientist):**
+- **Deploys with rollback** — `eng_deployments.source = 'rollback'` OR a deploy followed by
+  another deploy of the same repo within <4h tagged `revert`/`hotfix`
+- **Labeled PRs** — `hotfix`, `incident`, `revert`, `P0`, `P1` in the title or GitHub labels
+- **Jira issues** — `priority IN (Highest, Blocker)` with resolution ≠ null
+- **External alert** (future) — PagerDuty/Opsgenie webhook
+
+**BDD Acceptance Criteria:**
+```
+Given the backend has ingested incident signals from at least one source
+  When the client requests GET /data/v1/metrics/home?period=30d
+  Then data.time_to_restore.value is a non-null float in hours
+  And data.time_to_restore.level is one of elite | high | medium | low
+  And data.time_to_restore.trend_percentage reflects previous-period comparison
+
+Given the dashboard renders with time_to_restore.value populated
+  When the user opens the DORA Metrics group
+  Then the Time to Restore card shows the numeric value with a DORA classification badge
+  And the "R1" pending badge is hidden
+  And the info tooltip is removed
+```
+
+**Classification thresholds (DORA 2023):**
+- Elite: `< 1h`
+- High: `1h ≤ x < 24h`
+- Medium: `24h ≤ x < 168h` (1 week)
+- Low: `≥ 168h`
+
+**Hand-off plan:**
+1. `pulse-data-scientist` → defines the incident signal + MTTR formula + anti-surveillance validation
+2. `pulse-data-engineer` → creates the `eng_incidents` table + connector/filter + snapshot worker
+3. `pulse-engineer` → endpoint computes MTTR from `eng_incidents`, removes `pendingLabel="R1"` from the frontend
+4. `pulse-test-engineer` → formula + regression tests
+
+**Anti-surveillance check:** PASS — incidents aggregated by team/repo, never by author.
+**Dependencies:** FDD-DSH-001 (the home endpoint already exposes the field as `null`).
+**Estimate:** L (involves 4 agents).
+**Analytics events:** `mttr_card_viewed { has_data }`, `mttr_tooltip_hovered`.
+
+---
+
+## Feature Set 7 — Test Coverage (technical debt, high priority)
+
+### FDD-DSH-060 · Squad → team UUID mapping in the backend — RESOLVED (2026-04-17)
+**Release:** R1 · **Priority:** P1 · **Persona:** Carlos (EM)
+**Owner:** Data Engineer + Backend · **Status:** DONE
+
+**Resolution applied:** instead of mapping squad key → team UUID via
+`teams.board_config`, we chose to accept `squad_key` as a native query param on the
+`/metrics/home` endpoint and compute the metrics on demand via
+`compute_home_metrics_on_demand`. This route filters PRs via a title regex
+(the same pattern as `/pipeline/teams`), joins deploys via the derived repo and
+filters issues by `project_key`. Trade-off: on-demand per request (there is no
+pre-calculated snapshot per squad × period), but the current volume (27 squads,
+~600 PRs in 60d) runs in <500ms.
+
+**Deep-dive pages** (`/dora`, `/cycle-time`, `/throughput`, `/lean`) accept
+`squad_key` as a query param but still fall back to tenant level (documented in
+code — `_ = squad_key  # See FDD-DSH-060`). Next step: extend the
+on-demand service to each metric type as users ask for it.
+
+**Original context:** Today the home combobox uses 27 dynamic squad keys
+coming from `/pipeline/teams` (derived from a PR title regex), but `/metrics/home`
+only accepts a `team_id: UUID` from the `teams` table.
+
+**BDD:**
+```
+Given the user selects squad "okm" in the home combobox
+  When the dashboard queries /metrics/home
+  Then the response returns KPIs filtered to that squad
+  And the client sends squad_key=okm as the query param
+  And no UUID translation is needed on the client
+```
+
+**Anti-surveillance:** PASS — squad-level, never by author.
+**Dependencies:** FDD-DSH-002.
+**Estimate:** M.
+
+---
+
+### FDD-DSH-070 · Dashboard test pyramid (critical technical debt) — ✅ DONE 2026-04-24
+**Release:** MVP (retroactive) · **Priority:** P0 · **Persona:** The whole team (quality gate)
+**Owner:** Test Engineer (lead) + Frontend + Backend (contract tests)
+**Status:** ✅ Shipped — Sprint 1.2 (steps 1-6) + FDD-DSH-070 closure (2026-04-24)
+
+**Delivered:**
+- ✅ Unit tests (Vitest): `formatDuration` (18), `buildParams` (10) + component tests
+- ✅ Component tests (@testing-library/react): `KpiCard`, `ModeSelector`, `ProjectCatalogTable`, `ProjectRowActions`
+- ✅ Hook/integration tests (MSW): `useHomeMetrics` incl. 422-regression
+- ✅ Contract tests (Zod): 6 endpoints + anti-surveillance meta-test (74 tests)
+- ✅ E2E smoke (Playwright): home dashboard journey
+- ✅ A11y tests (axe-core): home + DORA + cycle-time, WCAG 2.1 AA gate
+- ✅ CI quality gates: 4 jobs root-level, all blocking (gitleaks, lint+tsc, vitest, build)
+- ✅ Coverage thresholds: no-regression gate in vitest.config.ts (see playbook §8.10)
+- ✅ Retroactive regression tests:
+  - `buildParams omits team_id for non-UUID squad keys` (covers the DSH-060 fix)
+  - `useHomeMetrics never sends team_id for non-UUID — backend returns 422` (covers the reported bug)
+  - `test_pipeline_fontes_integrity.py` (backend, covers the Pipeline Monitor repo-name bug)
+
+Total: 150 Vitest tests + 1 E2E smoke + 3 a11y specs, ~40s CI wall-clock.
+
+See: `pulse/docs/testing-playbook.md` (sections 1-8) for the full strategy.
+
+**Context:** The dashboard redesign (DSH-001..033) shipped **without automated
+tests**. Two bugs slipped through unnoticed in local production:
+1. FONTES column zeroed out in the Pipeline Monitor (`split_part` mismatch on repo name)
+2. Squad filter breaks with HTTP 422 (UUID regex not validated on the client)
+
+Both would have been caught by simple contract/integration tests. This is technical
+debt that needs to be paid **before** R1 so it does not compound.
+
+**Scope:**
+
+1.
**Unit tests** (`vitest`) in `pulse/packages/pulse-web/tests/unit/`:
+   - `buildParams()` — UUID input preserved; squad key omitted; empty omitted
+   - `transformHomeMetrics()` — null fields become `HomeMetricItem` with `value=null`
+   - `classifyMetric()` — DORA 2023 thresholds + Flow heuristics
+   - `formatValue()` in `KpiCard` — null/NaN/Infinity → "—"
+
+2. **Component tests** (`@testing-library/react`):
+   - `KpiCard` with `value=null` renders "—" + pendingLabel + tooltip
+   - `TeamCombobox` — search filters, groups by tribo, anti-surveillance (no author)
+   - `FreshnessBanner` — shown when health ≠ healthy, hidden when healthy
+   - `TeamRankingSection` — switching metric via tab keeps state in sync
+   - `TeamDetailDrawer` — Esc closes, focus trap, a11y
+
+3. **Hook/integration tests** (MSW):
+   - `useHomeMetrics` with `teamId='okm'` does not send `team_id` in the query
+   - `usePipelineTeamsList` returns 27 squads ordered by tribo
+   - `useMetricsByTeam` falls back to derive when the endpoint returns 404
+
+4. **Contract tests** (Zod):
+   - One Zod schema per endpoint (`/metrics/home`, `/pipeline/teams`, `/pipeline/health`)
+   - Fails the build if the backend shape changes without updating the schema
+
+5. **E2E tests** (Playwright) in `pulse/e2e/dashboard.spec.ts`:
+   - "Home → select squad → dashboard loads without a 422 error"
+   - "Switching period 30→60→90→120d updates the KPIs"
+   - "An invalid custom date range shows an inline error"
+   - "Clicking a ranking bar opens the drawer; Esc closes; focus returns"
+   - "Switching the metric tab stays in sync with the evolution section"
+
+6. **A11y tests** (`@axe-core/playwright`):
+   - Zero `serious` or `critical` violations on home
+   - Focus order follows visual order
+   - Reduced motion respected
+
+7.
**CI quality gates** in `pulse/.github/workflows/`:
+   - `npm run test` blocks merge if dashboard coverage <80%
+   - `npm run e2e` blocks merge if E2E fails
+   - `npm run lint:a11y` blocks serious violations
+
+**Macro BDD:**
+```
+Given a PR modifies any file inside src/components/dashboard/ or src/routes/_dashboard/home.tsx
+  When CI runs
+  Then unit + component + integration tests execute in <60s
+  And E2E tests execute in <3min
+  And the coverage report is posted as a PR comment
+  And merge is blocked if any gate fails
+
+Given the backend changes the shape of /metrics/home
+  When the frontend build runs
+  Then the Zod contract test fails at compile/CI time
+  And a clear diff is shown between the expected and actual shape
+```
+
+**Retroactive tests for the bugs already hunted down (top priority):**
+- `test('buildParams omits team_id for non-UUID squad key', ...)` — covers the DSH-060 fix
+- `test('home renders without 422 when squad is selected', ...)` — covers the reported bug
+- `test('deploy count per team uses normalized repo format', ...)` — covers the Pipeline Monitor bug
+
+**Anti-surveillance check:** PASS.
+**Dependencies:** FDD-DSH-001..033 (testable code already exists).
+**Estimate:** L (4–6h of dedicated test-engineer time).
+**Analytics:** `test_suite_failed { suite, reason }` (monitor flakiness).
+
+**Risk of not doing it:** every new bug costs more — a cascade of regressions, refactors
+blocked by fear of breaking something, CI that becomes decorative. The backlog has **27
+squads × 6 metrics = 162 data combinations**; without a test contract it is only a matter of
+time until the next 422 in production.
+
+---
+
+### FDD-DSH-080 · Global filters in the TopBar — DONE (2026-04-17)
+**Release:** MVP · **Priority:** P0 · **Persona:** Carlos, Ana, Priya
+**Owner:** Frontend (`pulse-engineer`) · **Status:** DONE
+
+**Context:** the Squad + Period filters existed only in `home.tsx`.
When
+navigating to `/dora`, `/cycle-time`, `/throughput`, `/lean`, `/sprints`, `/prs`
+the user could not re-apply the same scope — the original `TopBar.tsx`
+had only an empty "All Teams" select and a period select with just 3 options.
+
+**Resolution:** `TopBar.tsx` now hosts `TeamCombobox` + `PeriodSegmented` +
+`DateRangeFilter` + a Clear button. All `/_dashboard/*` routes react to the
+filters via `useFilterStore`. Exempt routes (real-time or catalog) hide
+the bar: `/pipeline-monitor`, `/settings/integrations*`, `/integrations`.
+`home.tsx` dropped its ~25 lines of duplicated filter bar, keeping only the
+"Exibindo … · últimos 60 dias" chip as feedback on the applied scope.
+
+**Anti-surveillance:** PASS.
+
+---
+
+### FDD-DSH-081 · Custom date range (period=custom) — DONE (2026-04-17)
+**Release:** MVP · **Priority:** P0 · **Persona:** Carlos, Priya
+**Owner:** Backend (`pulse-engineer`) · **Status:** DONE
+
+**Original bug:** the frontend sent `?period=custom&start_date=…&end_date=…`,
+the backend validated `period ∈ {7d,14d,30d,60d,90d,120d}` and replied HTTP 400.
+
+**Resolution:**
+- `"custom"` added to `_VALID_PERIODS`.
+- `_parse_period(period, start_date, end_date)` accepts `custom` with validations:
+  both dates required, valid ISO, `start < end`, duration ≤ 365 days.
+- The `/home`, `/dora`, `/lean`, `/cycle-time`, `/throughput`, `/sprints` endpoints
+  now accept optional `start_date` and `end_date`.
+- `/home` computes on demand via `compute_home_metrics_on_demand` when
+  `period=custom` (there is no pre-calculated snapshot for an arbitrary window).
+- Deep-dive pages use the best-approximation snapshot (documented).
+
+**UI:** `DateRangeFilter` already validated `start < end` and `max 365 days` before
+calling the API. `buildParams()` only sends `start_date`/`end_date` when
+`period=custom` and both dates are present.
+
+**Anti-surveillance:** PASS.
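The `period=custom` validations listed above can be sketched as a small pure function. This is illustrative only: the function name and tuple return are assumptions, while the real checks live in `_parse_period` on the backend and `DateRangeFilter` on the client.

```python
from datetime import date

_MAX_CUSTOM_DAYS = 365  # per the card: custom window duration must be <= 365 days


def parse_custom_range(start_date: str, end_date: str) -> tuple[date, date]:
    """Validate a period=custom window: both dates ISO, start < end, <= 365 days."""
    start = date.fromisoformat(start_date)  # raises ValueError on invalid ISO
    end = date.fromisoformat(end_date)
    if start >= end:
        raise ValueError("start_date must be before end_date")
    if (end - start).days > _MAX_CUSTOM_DAYS:
        raise ValueError("custom range must not exceed 365 days")
    return start, end
```

Note that a 2025-04-16 → 2026-04-16 window is exactly 365 days and passes, while 2025-01-01 → 2026-04-16 exceeds the cap and is rejected, matching the BDD scenarios in FDD-DSH-032.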
+
+---
+
+### FDD-DSH-082 · Lead Time strict vs inclusive — DONE (2026-04-17)
+**Release:** MVP · **Priority:** P0 · **Persona:** Carlos, Priya
+**Owner:** Backend + Frontend (`pulse-engineer`) · **Status:** DONE
+
+**Bug diagnosed** (by the `pulse-data-scientist`, 2026-04-17): the DORA Lead Time was
+computed as `COALESCE(deployed_at, merged_at) − first_commit_at`. In squads with
+partial deploy coverage (e.g. OKM has 50%), the fallback collapses Lead
+Time onto Cycle Time and produces a misleading median.
+
+Evidence (OKM 60d): Cycle Time P50 = 1.20h · inclusive Lead Time (155 PRs)
+= 119.65h · **strict** Lead Time (78 PRs with a deploy) = 404.69h.
+
+**Resolution:**
+- New pure function `calculate_lead_time_strict(prs)` in `domain/dora.py` that
+  requires a real `deployed_at`, with a `_LT_STRICT_MIN_SAMPLE = 5` guard rail
+  (returns `None` when there are fewer than 5 eligible PRs).
+- `DoraMetrics` gained the fields `lead_time_for_changes_hours_strict`,
+  `lead_time_strict_eligible_count`, `lead_time_strict_total_count`,
+  `lt_strict_level` (with defaults — old call sites don't break).
+- `HomeMetricsData` gained `lead_time_strict: HomeMetricCard` and
+  `LeadTimeCoverage { covered, total, pct }` exposed as `coverage` on the card.
+- `lead_time` (inclusive) **kept** for back-compat with legacy clients.
+- Frontend: the "Lead Time" card in the DORA group now consumes `leadTimeStrict`;
+  the tooltip shows coverage ("78 de 155 PRs (50% têm deploy linkado)") and,
+  when `value=null`, shows guidance ("Aumente o período / aguarde mais
+  ingestão"). Inclusive is still available at `homeMetrics.leadTimeForChanges`
+  for future drawers/comparisons.
+- Automatic recalculation: `recalculate.py` and `home_on_demand.py` call
+  `calculate_dora_metrics`, which now populates the new fields via `asdict()`.
+  New snapshots will carry the fields; old snapshots return `None`
+  gracefully (handled by the frontend).
+
+**Tests:** 6 new cases in `tests/unit/test_dora.py::TestLeadTimeStrict`
+covering the empty list, sample <5, exclusion of PRs without a deploy, correct median,
+inclusive vs strict divergence (the OKM scenario), and negative delta (clock skew).
+63/63 DORA tests pass.
+
+**Anti-surveillance:** PASS — aggregated by squad/tenant, no author data.
+
+---
+
+### FDD-DSH-083 · Explanatory tooltips on all 8 KPI cards — DONE (2026-04-17)
+**Release:** MVP · **Priority:** P1 · **Persona:** all
+**Owner:** Frontend (`pulse-engineer`) · **Status:** DONE
+
+**Problem:** new users don't know how each metric is calculated or
+which data feeds it. The `ⓘ` icon introduced on MTTR (FDD-DSH-050)
+was left on its own; it needs to cover all the cards.
+
+**Resolution:**
+- New `<InfoTooltip>` component in `components/dashboard/InfoTooltip.tsx`
+  — an accessible popover (focus + hover + tap-to-toggle, `role="tooltip"`,
+  `aria-describedby`, `aria-label` on the trigger), supports multi-line
+  (`whitespace-pre-line`), `max-width 320px`, shadow+border via tokens.
+  **No new dependency** — Radix Tooltip was rejected to keep the bundle
+  lean; revisit when other components need a popover.
+- `KpiCard` migrated: the `infoTooltip` prop now delegates to `InfoTooltip`
+  (replacing the old native HTML `title`, which did not allow multi-line).
+- 8 tooltips added in `routes/_dashboard/home.tsx`, with a consistent format:
+  line 1 = description, then Formula, Data, and (DORA) classification.
+  Lead Time has its coverage injected dynamically.
+- The MTTR ("Time to Restore") card tooltip updated to include the formula
+  + R1 status + DORA classification (keeps the `pendingLabel="R1"`).
+
+**Accessibility:**
+- The trigger is a `<button>` [remaining accessibility bullets and the first
+  HTML prototype's diff header were lost to markup stripping]
+
+ + +
+

Indicadores globais

+
+
+

DORA Metrics

+
+
+
+

Flow & Management

+
+
+
+
+ + +
+

Leaderboard — Top 10 squads

+

Ranking composto (DORA + Flow normalizados) · 12 semanas

+ + + + + +
#SquadTriboDeploy/diaLead TimeCFRCycle P50Trend
+
+ +

Evolução por squad (12 semanas · Cycle Time P50)

+
+ + + + diff --git a/pulse/pulse-ui/pages/dashboard-investigator/index.html b/pulse/pulse-ui/pages/dashboard-investigator/index.html new file mode 100644 index 0000000..0c412fa --- /dev/null +++ b/pulse/pulse-ui/pages/dashboard-investigator/index.html @@ -0,0 +1,148 @@ + + + + + +PULSE Dashboard — Investigator-first (Concept B) + + + + + + +
PULSE / Dashboard
+
+

PULSE Dashboard Concept B · Investigator-first

+

Matriz densa 27×6 para detectar outliers em segundos. KPI strip compacta + heatmap por classificação DORA.

+ +
+ +
+ +
+ + +
+ + +
+

Matriz Squads × Métricas

+

27 squads × 6 métricas · classificação DORA em cada célula · clique para drill-down

+
+ Elite + High + Medium + Low +
+
+ + + + + + +
SquadDeploy /diaLead Time (h)Change FailureCycle P50 (d)WIPThroughput
+
+
+
+ + + diff --git a/pulse/pulse-ui/pages/dashboard/flow-health-section.css b/pulse/pulse-ui/pages/dashboard/flow-health-section.css new file mode 100644 index 0000000..9c470e7 --- /dev/null +++ b/pulse/pulse-ui/pages/dashboard/flow-health-section.css @@ -0,0 +1,447 @@ +/* ============================================================ + PULSE · Flow Health section (Aging WIP + Flow Efficiency) + Integra à home Diagnostic-first. Tokens-only. BEM. WCAG AA. + ============================================================ */ + +/* ------- Concept switcher (dev preview only) ------- */ +.concept-switch { + display: flex; flex-wrap: wrap; align-items: center; gap: 8px; + margin: 16px 0 24px; + padding: 10px 12px; + background: var(--color-bg-tertiary); + border-radius: var(--radius-card); + font-size: 12px; +} +.concept-switch__label { + font-size: 11px; font-weight: 500; + color: var(--color-text-secondary); + letter-spacing: .04em; text-transform: uppercase; +} +.concept-switch__btn { + height: 28px; padding: 0 10px; + border: 1px solid var(--color-border-default); + background: var(--color-bg-surface); + border-radius: var(--radius-button); + font-size: 12px; color: var(--color-text-secondary); + transition: all 150ms ease-out; +} +.concept-switch__btn.is-active { + background: var(--color-brand-primary); + border-color: var(--color-brand-primary); + color: var(--color-text-inverse); +} +.concept-switch__sep { color: var(--color-text-tertiary); margin: 0 4px; } + +/* ============================================================ + FLOW HEALTH — shared chrome +============================================================ */ +.flow-health { + display: flex; flex-direction: column; gap: 12px; + margin: 24px 0; +} +.flow-health__head { + display: flex; align-items: flex-end; justify-content: space-between; + gap: 16px; +} +.flow-health__meta { + display: flex; align-items: center; gap: 8px; + font-size: 12px; color: var(--color-text-secondary); +} +.flow-health__meta-item { + 
display: inline-flex; align-items: center; gap: 6px; +} +.flow-health__dot { + width: 8px; height: 8px; border-radius: 50%; + background: var(--color-brand-primary); +} + +.flow-health__grid { + display: grid; + grid-template-columns: minmax(0, 2.1fr) minmax(0, 1fr); + gap: 16px; +} + +/* ============================================================ + CARDS +============================================================ */ +.fh-card { + background: var(--color-bg-surface); + border: 1px solid var(--color-border-default); + border-radius: var(--radius-card); + padding: var(--space-card-padding); + box-shadow: var(--shadow-card); + display: flex; flex-direction: column; gap: 14px; + min-height: 0; +} +.fh-card--wide { padding-bottom: 12px; } + +.fh-card__head { + display: flex; align-items: flex-start; justify-content: space-between; gap: 12px; +} +.fh-card__title-wrap { min-width: 0; } +.fh-card__title { + font-size: 15px; font-weight: 600; + color: var(--color-text-primary); + margin: 0 0 2px 0; +} +.fh-card__sub { + font-size: 12px; color: var(--color-text-secondary); + margin: 0; +} +.fh-card__actions { display: flex; align-items: center; gap: 8px; } + +.fh-badge { + display: inline-flex; align-items: center; + height: 20px; padding: 0 8px; + border-radius: var(--radius-badge); + font-size: 11px; font-weight: 500; + background: var(--color-bg-tertiary); + color: var(--color-text-secondary); +} +.fh-badge--v1 { + background: var(--color-dora-medium-bg); + color: #92400E; +} + +.fh-mono { font-family: var(--font-mono); font-variant-numeric: tabular-nums; } +.fh-tiny { font-size: 11px; color: var(--color-text-tertiary); } + +/* ============================================================ + CRITICAL CALLOUT (at_risk strip) +============================================================ */ +.fh-callout { + display: flex; align-items: center; gap: 10px; + padding: 10px 12px; + border-radius: var(--radius-button); + background: var(--color-dora-low-bg); + color: #991B1B; + 
font-size: 13px; +} +.fh-callout--danger { background: var(--color-dora-low-bg); color: #991B1B; } +.fh-callout--warning { background: var(--color-dora-medium-bg); color: #92400E; } +.fh-callout--neutral { background: var(--color-bg-tertiary); color: var(--color-text-secondary); } +.fh-callout__icon { display: inline-flex; flex-shrink: 0; } +.fh-callout__text { flex: 1; } +.fh-callout__cta { + border: 0; background: transparent; + font-size: 13px; font-weight: 500; + color: inherit; text-decoration: underline; + padding: 0; white-space: nowrap; +} +.fh-callout[hidden] { display: none; } + +/* ============================================================ + VIEWPORT — shared container for concept A/B/C +============================================================ */ +.fh-viewport { min-height: 280px; } + +/* ------------------------------------------------------------ + CONCEPT A — Outlier-first (ranked at_risk table) +------------------------------------------------------------ */ +.outlier-table { + width: 100%; + border-collapse: collapse; + font-size: 13px; +} +.outlier-table thead th { + text-align: left; + font-size: 11px; font-weight: 500; + color: var(--color-text-secondary); + letter-spacing: .04em; text-transform: uppercase; + padding: 8px 10px; + border-bottom: 1px solid var(--color-border-default); + background: var(--color-bg-surface); + position: sticky; top: 0; +} +.outlier-table tbody td { + padding: 10px; + border-bottom: 1px solid var(--color-border-subtle); + vertical-align: middle; +} +.outlier-table tbody tr:hover { background: var(--color-bg-tertiary); cursor: pointer; } +.outlier-table tbody tr:focus-within { background: var(--color-bg-tertiary); } + +.outlier-table__key { + font-family: var(--font-mono); font-size: 12px; + color: var(--color-text-primary); +} +.outlier-table__age { + font-family: var(--font-mono); font-weight: 500; + text-align: right; + color: var(--color-danger); + font-variant-numeric: tabular-nums; +} 
+.outlier-table__age--warn { color: var(--color-warning); } +.outlier-table__status { + display: inline-flex; align-items: center; gap: 6px; + font-size: 12px; color: var(--color-text-secondary); +} +.outlier-table__status-dot { width: 6px; height: 6px; border-radius: 50%; background: var(--color-info); } +.outlier-table__status--review .outlier-table__status-dot { background: var(--chart-2); } + +.outlier-bar { + position: relative; + height: 6px; width: 90px; + background: var(--color-bg-tertiary); + border-radius: 999px; + overflow: hidden; +} +.outlier-bar__fill { + position: absolute; inset: 0 auto 0 0; + background: var(--color-danger); + border-radius: 999px; +} +.outlier-bar__fill--warn { background: var(--color-warning); } + +.outlier-table__squad { + font-size: 12px; font-weight: 500; + color: var(--color-text-primary); +} +.outlier-table__squad-hint { + font-size: 11px; color: var(--color-text-tertiary); + margin-left: 6px; +} + +.outlier-foot { + display: flex; justify-content: space-between; align-items: center; + padding: 10px 10px 2px; + font-size: 12px; color: var(--color-text-secondary); +} +.outlier-foot__link { + color: var(--color-brand-primary); + font-weight: 500; text-decoration: none; +} +.outlier-foot__link:hover { text-decoration: underline; } + +/* ------------------------------------------------------------ + CONCEPT B — Distribution-first (histogram + squad chips) +------------------------------------------------------------ */ +.dist-wrap { display: grid; grid-template-columns: 1fr; gap: 12px; } + +.dist-chart-shell { + position: relative; + height: 180px; + padding: 6px 4px 0; +} +.dist-chart-shell canvas { max-width: 100%; } + +.dist-legend { + display: flex; gap: 16px; flex-wrap: wrap; + font-size: 11px; color: var(--color-text-secondary); + padding: 0 4px; +} +.dist-legend__item { display: inline-flex; align-items: center; gap: 6px; } +.dist-legend__swatch { width: 10px; height: 10px; border-radius: 3px; } 
+.dist-legend__swatch--healthy { background: var(--color-info); }
+.dist-legend__swatch--watch { background: var(--color-warning); }
+.dist-legend__swatch--risk { background: var(--color-danger); }
+
+.dist-squad-chips {
+ display: flex; flex-wrap: wrap; gap: 6px;
+ padding-top: 10px; border-top: 1px solid var(--color-border-subtle);
+}
+.dist-squad-chip {
+ display: inline-flex; align-items: center; gap: 6px;
+ padding: 4px 10px;
+ border-radius: var(--radius-badge);
+ background: var(--color-dora-low-bg);
+ color: #991B1B;
+ font-size: 12px; font-weight: 500;
+ border: 0; cursor: pointer;
+}
+.dist-squad-chip .fh-mono { font-weight: 600; font-size: 12px; }
+.dist-squad-chip:hover { filter: brightness(0.97); }
+
+/* ------------------------------------------------------------
+ CONCEPT C — Squad × age heatmap (top-12 squads)
+------------------------------------------------------------ */
+.heatmap {
+ display: grid;
+ grid-template-columns: 130px repeat(4, 1fr) 56px;
+ gap: 3px;
+ font-size: 12px;
+}
+.heatmap__hdr {
+ font-size: 10px; font-weight: 500;
+ color: var(--color-text-secondary);
+ letter-spacing: .04em; text-transform: uppercase;
+ padding: 4px 6px;
+ border-bottom: 1px solid var(--color-border-default);
+ text-align: right;
+}
+.heatmap__hdr--squad { text-align: left; }
+.heatmap__row-label {
+ display: flex; align-items: center;
+ padding: 0 6px; height: 28px;
+ font-weight: 500; color: var(--color-text-primary);
+ font-size: 12px;
+ white-space: nowrap; overflow: hidden; text-overflow: ellipsis;
+}
+.heatmap__cell {
+ height: 28px;
+ display: flex; align-items: center; justify-content: center;
+ font-family: var(--font-mono); font-size: 12px;
+ border-radius: 4px;
+ color: var(--color-text-primary);
+ background: var(--color-bg-tertiary);
+ cursor: pointer;
+}
+.heatmap__cell[data-intensity="0"] { background: var(--color-bg-tertiary); color: var(--color-text-tertiary); }
+.heatmap__cell[data-intensity="1"] { background: #DBEAFE; }
+.heatmap__cell[data-intensity="2"] { background: #BFDBFE; } +.heatmap__cell[data-intensity="3"] { background: #93C5FD; } +.heatmap__cell[data-intensity="4"] { background: #60A5FA; color: #fff; } +/* risk column uses red scale */ +.heatmap__cell--risk[data-intensity="0"] { background: var(--color-bg-tertiary); color: var(--color-text-tertiary); } +.heatmap__cell--risk[data-intensity="1"] { background: #FEE2E2; color: #991B1B; } +.heatmap__cell--risk[data-intensity="2"] { background: #FCA5A5; color: #7F1D1D; } +.heatmap__cell--risk[data-intensity="3"] { background: var(--color-danger); color: #fff; font-weight: 600; } +.heatmap__cell--risk[data-intensity="4"] { background: #991B1B; color: #fff; font-weight: 600; } + +.heatmap__total { font-family: var(--font-mono); text-align: right; padding: 0 6px; color: var(--color-text-secondary); } + +.heatmap-foot { + display: flex; justify-content: space-between; align-items: center; + padding-top: 10px; + font-size: 12px; color: var(--color-text-secondary); +} + +/* ============================================================ + FLOW EFFICIENCY — gauge + disclaimer +============================================================ */ +.fe-gauge { + display: flex; flex-direction: column; align-items: center; gap: 12px; + padding: 8px 0 4px; +} +.fe-gauge__ring { position: relative; } +.fe-gauge__value { + position: absolute; inset: 0; + display: flex; flex-direction: column; align-items: center; justify-content: center; + text-align: center; +} +.fe-gauge__pct { + font-size: 32px; font-weight: 700; + color: var(--color-text-primary); + line-height: 1; + font-variant-numeric: tabular-nums; +} +.fe-gauge__trend { + display: inline-flex; align-items: center; gap: 4px; + font-size: 11px; margin-top: 4px; + color: var(--color-text-secondary); +} +.fe-gauge__trend--down { color: var(--color-danger); } +.fe-gauge__trend--up { color: var(--color-success); } + +.fe-stats { + display: grid; grid-template-columns: 1fr; gap: 6px; + width: 
100%; + margin: 0; + padding-top: 10px; + border-top: 1px solid var(--color-border-subtle); + font-size: 12px; +} +.fe-stats > div { display: flex; justify-content: space-between; gap: 8px; } +.fe-stats dt { color: var(--color-text-secondary); margin: 0; } +.fe-stats dd { color: var(--color-text-primary); margin: 0; font-weight: 500; } + +.fe-disclaimer { + display: flex; gap: 8px; + padding: 10px; + background: var(--color-bg-tertiary); + border-radius: var(--radius-button); + font-size: 12px; line-height: 1.45; + color: var(--color-text-secondary); +} +.fe-disclaimer__icon { flex-shrink: 0; color: var(--color-text-tertiary); margin-top: 2px; } +.fe-disclaimer strong { color: var(--color-text-primary); font-weight: 600; } +.fe-disclaimer p { margin: 0; } + +/* ============================================================ + DRAWER (shared list of at_risk items) +============================================================ */ +.fh-drawer { + position: fixed; top: 0; right: 0; bottom: 0; + width: 520px; max-width: 100vw; + background: var(--color-bg-surface); + border-left: 1px solid var(--color-border-default); + box-shadow: var(--shadow-elevated); + display: flex; flex-direction: column; + z-index: 60; + transform: translateX(0); + transition: transform 200ms ease-out; +} +.fh-drawer[hidden] { display: none; } +.fh-drawer__head { + display: flex; justify-content: space-between; align-items: flex-start; + padding: 16px; + border-bottom: 1px solid var(--color-border-default); +} +.fh-drawer__eyebrow { + font-size: 10px; letter-spacing: .08em; text-transform: uppercase; + color: var(--color-text-tertiary); + margin: 0 0 4px 0; +} +.fh-drawer__title { font-size: 16px; font-weight: 600; margin: 0 0 2px 0; } +.fh-drawer__sub { font-size: 12px; color: var(--color-text-secondary); margin: 0; } + +.fh-drawer__filters { + display: grid; grid-template-columns: 1fr 140px 140px; gap: 8px; + padding: 12px 16px; + border-bottom: 1px solid var(--color-border-default); +} 
+.fh-drawer__filters input, +.fh-drawer__filters select { + height: 32px; padding: 0 10px; + border: 1px solid var(--color-border-default); + border-radius: var(--radius-button); + font-size: 13px; + background: var(--color-bg-surface); + color: var(--color-text-primary); +} +.fh-drawer__body { flex: 1; overflow-y: auto; padding: 8px 0; } + +.btn-icon { + width: 32px; height: 32px; + display: inline-flex; align-items: center; justify-content: center; + background: transparent; border: 0; + border-radius: var(--radius-button); + color: var(--color-text-secondary); +} +.btn-icon:hover { background: var(--color-bg-tertiary); } + +/* ============================================================ + STATES +============================================================ */ +.fh-skeleton-row { + height: 14px; background: var(--color-bg-tertiary); + border-radius: 4px; margin: 10px 12px; + animation: pulse-skeleton 1.6s ease-in-out infinite; +} +.fh-empty { + display: flex; flex-direction: column; align-items: center; justify-content: center; + gap: 8px; padding: 40px 16px; text-align: center; + color: var(--color-text-secondary); font-size: 13px; +} +.fh-empty strong { color: var(--color-text-primary); font-size: 14px; } + +/* ============================================================ + RESPONSIVE +============================================================ */ +@media (max-width: 1100px) { + .flow-health__grid { grid-template-columns: 1fr; } + .heatmap { grid-template-columns: 110px repeat(4, 1fr) 48px; } +} +@media (max-width: 768px) { + .heatmap { grid-template-columns: 96px repeat(4, 1fr) 44px; font-size: 11px; } + .outlier-table thead th:nth-child(4), + .outlier-table tbody td:nth-child(4) { display: none; } /* hide progress bar */ + .fh-drawer { width: 100vw; } + .fh-drawer__filters { grid-template-columns: 1fr; } + .fe-gauge__pct { font-size: 28px; } +} + +@media (prefers-reduced-motion: reduce) { + .fh-drawer { transition: none; } + .fh-skeleton-row { animation: 
none; } +} diff --git a/pulse/pulse-ui/pages/dashboard/flow-health-section.html b/pulse/pulse-ui/pages/dashboard/flow-health-section.html new file mode 100644 index 0000000..8cb90a5 --- /dev/null +++ b/pulse/pulse-ui/pages/dashboard/flow-health-section.html @@ -0,0 +1,174 @@ + + + + + + PULSE · Flow Health Section (concepts A/B/C) + + + + + + + + + + + + + + +
+ + + + +
+ +
+
+

Flow Health

+

Saúde do fluxo Kanban · baseado em status atuais (Aging WIP) e histórico (Flow Efficiency 60d)

+
+
+ + + Kanban-native · MVP v1 + +
+
+ +
+ + +
+
+
+

Aging WIP

+

633 itens em progresso · P50 6,5d · P85 22,3d

+
+
+ +
+
+ + +
+ + + 67 itens em risco (idade > 44,6d, 2× P85 histórico 90d) + + +
+ + +
+ +
+
+ + +
+
+
+

Flow Efficiency

+

Tempo em trabalho ativo ÷ cycle time · 60d

+
+ v1 +
+ + + + + +
+ +
+
+ + + + +
+ + + diff --git a/pulse/pulse-ui/pages/dashboard/flow-health-section.js b/pulse/pulse-ui/pages/dashboard/flow-health-section.js new file mode 100644 index 0000000..0d83440 --- /dev/null +++ b/pulse/pulse-ui/pages/dashboard/flow-health-section.js @@ -0,0 +1,431 @@ +/* ============================================================ + PULSE · Flow Health section — runtime (ES module) + Renders concepts A/B/C + states (healthy/critical/empty/loading) + ============================================================ */ + +// ---------- MOCK DATA (Webmotors-scale) ---------- + +const SQUADS = [ + 'DESK', 'BG', 'SECOM', 'CHECKOUT', 'SEARCH', 'PLATFORM', 'DATA', 'MOBILE', + 'BILLING', 'ONBOARD', 'RECS', 'CATALOG', 'INVENTORY', 'PRICING', 'GROWTH', + 'RISK', 'CS', 'ADS', 'FLEET', 'PHOTOS', 'SEO', 'CRM', 'DEVEX', 'QA', + 'SRE', 'INFRA', 'DOCS', +]; + +// Deterministic PRNG so concepts stay comparable across reloads +function rng(seed) { let s = seed >>> 0; return () => (s = (s * 1664525 + 1013904223) >>> 0, s / 0xFFFFFFFF); } + +function buildAgingItems(count = 633, state = 'healthy') { + const r = rng(42); + const items = []; + for (let i = 0; i < count; i++) { + // 70% healthy ≤ p85, 20% watch, 10% at_risk + const bucket = r(); + let age; + if (bucket < 0.70) age = r() * 22; // 0–22d (≤ P85) + else if (bucket < 0.90) age = 22 + r() * 22; // 22–44d (watch) + else age = 44 + r() * 90; // 44–134d (at_risk) + const squad = SQUADS[Math.floor(r() * SQUADS.length)]; + const status = r() < 0.65 ? 'in_progress' : 'in_review'; + const num = Math.floor(r() * 50000); + items.push({ + issue_key: `${squad}-${num}`, + age_days: +age.toFixed(1), + status_category: status, + status_name: status === 'in_progress' ? 
'In Progress' : 'In Review', + squad_key: squad, + is_at_risk: age > 44.6, + }); + } + // Critical state: concentrate 24 at_risk in DESK + if (state === 'critical') { + for (let i = 0; i < 24; i++) { + items.push({ + issue_key: `DESK-${40000 + i}`, + age_days: +(45 + Math.random() * 90).toFixed(1), + status_category: 'in_progress', + status_name: 'In Progress', + squad_key: 'DESK', + is_at_risk: true, + }); + } + } + return items.sort((a, b) => b.age_days - a.age_days); +} + +const STATE_DATA = { + healthy: { + aging_wip: { count: 633, p50_days: 6.5, p85_days: 22.3, at_risk_count: 67, at_risk_threshold_days: 44.6 }, + fe: { value: 0.42, trend_pp: -3, sample_size: 2145, insufficient_data: false }, + }, + critical: { + aging_wip: { count: 657, p50_days: 7.1, p85_days: 24.8, at_risk_count: 91, at_risk_threshold_days: 49.6 }, + fe: { value: 0.34, trend_pp: -8, sample_size: 2018, insufficient_data: false }, + }, + empty: { + aging_wip: { count: 0, p50_days: 0, p85_days: 0, at_risk_count: 0, at_risk_threshold_days: 0 }, + fe: { value: null, trend_pp: 0, sample_size: 0, insufficient_data: true }, + }, +}; + +// ---------- STATE ---------- +let currentConcept = 'A'; +let currentState = 'healthy'; + +// ---------- FORMATTERS ---------- +const fmtPct = (v) => `${Math.round(v * 100)}%`; +const fmtDays = (v) => `${v.toFixed(1).replace('.', ',')}d`; +const fmtNum = (v) => v.toLocaleString('pt-BR'); + +// ---------- RENDERERS — each concept ---------- + +function renderConceptA(items, summary) { + // Outlier-first: top 8 at_risk in ranked table + const atRisk = items.filter((i) => i.is_at_risk).slice(0, 8); + if (atRisk.length === 0) return renderEmpty(); + const maxAge = atRisk[0].age_days; + const rows = atRisk.map((it) => { + const pct = Math.min(100, (it.age_days / maxAge) * 100); + const isExtreme = it.age_days > summary.at_risk_threshold_days * 1.5; + return ` + + ${it.issue_key} + + ${it.squad_key} + + + + ${it.status_name} + + + + + + ${fmtDays(it.age_days)} + `; + 
}).join(''); + + return ` + + + + + + + + + + + ${rows} +
IssueSquadStatusIdadeDias
+
+ Mostrando 8 de ${summary.at_risk_count} em risco + Ver lista completa → +
`; +} + +function renderConceptB(items, summary) { + // Distribution-first: histogram of age buckets + squad chips + if (items.length === 0) return renderEmpty(); + const buckets = [ + { label: '0–7d', min: 0, max: 7, color: 'info', count: 0 }, + { label: '7–14d', min: 7, max: 14, color: 'info', count: 0 }, + { label: '14–22d', min: 14, max: 22.3, color: 'info', count: 0 }, + { label: '22–45d', min: 22.3, max: 44.6, color: 'warning', count: 0 }, + { label: '> 45d (at risk)', min: 44.6, max: Infinity, color: 'danger', count: 0 }, + ]; + items.forEach((it) => { + const b = buckets.find((x) => it.age_days >= x.min && it.age_days < x.max); + if (b) b.count++; + }); + + // Top 5 squads with most at_risk + const squadRisk = {}; + items.filter((i) => i.is_at_risk).forEach((i) => { squadRisk[i.squad_key] = (squadRisk[i.squad_key] || 0) + 1; }); + const topSquads = Object.entries(squadRisk).sort((a, b) => b[1] - a[1]).slice(0, 6); + + return ` +
+
+ +
+ +
+ ${topSquads.map(([sq, n]) => ``).join('')} + ${topSquads.length === 6 ? `` : ''} +
+
`; +} + +function renderConceptC(items, summary) { + // Squad × age heatmap — top 12 squads by WIP count + if (items.length === 0) return renderEmpty(); + const byS = {}; + items.forEach((it) => { + if (!byS[it.squad_key]) byS[it.squad_key] = { total: 0, b0: 0, b1: 0, b2: 0, b3: 0, risk: 0 }; + const s = byS[it.squad_key]; + s.total++; + if (it.age_days < 7) s.b0++; + else if (it.age_days < 14) s.b1++; + else if (it.age_days < 22.3) s.b2++; + else if (it.age_days < 44.6) s.b3++; + if (it.is_at_risk) s.risk++; + }); + const squads = Object.entries(byS) + .sort((a, b) => b[1].risk - a[1].risk || b[1].total - a[1].total) + .slice(0, 12); + + const maxCount = Math.max(1, ...squads.flatMap(([, v]) => [v.b0, v.b1, v.b2, v.b3])); + const maxRisk = Math.max(1, ...squads.map(([, v]) => v.risk)); + + function intensity(v, max) { + if (v === 0) return 0; + const r = v / max; + if (r < 0.25) return 1; + if (r < 0.5) return 2; + if (r < 0.75) return 3; + return 4; + } + + const rows = squads.map(([sq, v]) => ` +
${sq}
+
${v.b0 || '·'}
+
${v.b1 || '·'}
+
${v.b2 || '·'}
+
${v.risk || '·'}
+
${Math.round((v.risk / v.total) * 100)}%
+
${v.total}
+ `).join(''); + + return ` +
+
Squad
+
0–7d
+
7–14d
+
14–22d
+
Em risco (>45d)
+
% risco
+
Total
+ ${rows} +
+
+ Top 12 squads por volume em risco · ${SQUADS.length - 12} squads restantes com fluxo saudável + Ver todas as squads → +
`; +} + +function renderEmpty() { + return ` +
+ Nenhum item em progresso no momento. + Quando squads iniciarem trabalho, a idade dos itens aparece aqui. Pipeline de ingestão atualiza a cada 5 min. +
`; +} + +function renderLoading() { + return ` +
+ ${Array.from({ length: 6 }).map(() => `
`).join('')} +
`; +} + +// ---------- RENDER ORCHESTRATION ---------- + +let distChart = null; + +function renderSection() { + const data = STATE_DATA[currentState]; + const viewport = document.getElementById('aging-viewport'); + const callout = document.getElementById('aging-callout'); + const subEl = document.getElementById('aging-sub'); + const arc = document.getElementById('fe-arc'); + const feValueEl = document.getElementById('fe-value'); + const feTrendEl = document.getElementById('fe-trend'); + const atRiskCountEl = document.getElementById('aging-atrisk-count'); + const thresholdEl = document.getElementById('aging-threshold'); + + // Loading — overrides everything + if (currentState === 'loading') { + viewport.innerHTML = renderLoading(); + callout.hidden = true; + subEl.textContent = 'Carregando…'; + return; + } + + // Update chrome + const agg = data.aging_wip; + subEl.textContent = agg.count === 0 + ? 'Sem itens em progresso no momento' + : `${fmtNum(agg.count)} itens em progresso · P50 ${fmtDays(agg.p50_days)} · P85 ${fmtDays(agg.p85_days)}`; + + if (agg.at_risk_count > 0) { + callout.hidden = false; + atRiskCountEl.textContent = fmtNum(agg.at_risk_count); + thresholdEl.textContent = fmtDays(agg.at_risk_threshold_days); + } else { + callout.hidden = true; + } + + // FE update + if (data.fe.insufficient_data) { + feValueEl.textContent = '—'; + feTrendEl.innerHTML = 'dados insuficientes'; + arc.setAttribute('stroke-dashoffset', '326.73'); + } else { + feValueEl.textContent = fmtPct(data.fe.value); + const circ = 2 * Math.PI * 52; // 326.73 + arc.setAttribute('stroke-dasharray', circ.toFixed(2)); + arc.setAttribute('stroke-dashoffset', (circ * (1 - data.fe.value)).toFixed(2)); + const trendSign = data.fe.trend_pp >= 0 ? '+' : ''; + feTrendEl.innerHTML = ` + + ${trendSign}${data.fe.trend_pp}pp vs 60d anteriores`; + feTrendEl.className = 'fe-gauge__trend ' + (data.fe.trend_pp >= 0 ? 
'fe-gauge__trend--up' : 'fe-gauge__trend--down'); + } + + // Viewport by concept + const items = buildAgingItems(agg.count, currentState); + let html = ''; + if (currentState === 'empty') html = renderEmpty(); + else if (currentConcept === 'A') html = renderConceptA(items, agg); + else if (currentConcept === 'B') html = renderConceptB(items, agg); + else html = renderConceptC(items, agg); + + viewport.innerHTML = html; + + // Draw chart for concept B + if (currentConcept === 'B' && currentState !== 'empty') { + drawDistChart(items); + } + + bindViewportHandlers(); +} + +function drawDistChart(items) { + const ctx = document.getElementById('dist-chart'); + if (!ctx || !window.Chart) return; + const buckets = [ + { label: '0–7d', min: 0, max: 7, color: '#3B82F6', count: 0 }, + { label: '7–14d', min: 7, max: 14, color: '#60A5FA', count: 0 }, + { label: '14–22d', min: 14, max: 22.3, color: '#93C5FD', count: 0 }, + { label: '22–45d', min: 22.3, max: 44.6, color: '#F59E0B', count: 0 }, + { label: '> 45d', min: 44.6, max: Infinity, color: '#EF4444', count: 0 }, + ]; + items.forEach((it) => { + const b = buckets.find((x) => it.age_days >= x.min && it.age_days < x.max); + if (b) b.count++; + }); + if (distChart) distChart.destroy(); + distChart = new window.Chart(ctx, { + type: 'bar', + data: { + labels: buckets.map((b) => b.label), + datasets: [{ + data: buckets.map((b) => b.count), + backgroundColor: buckets.map((b) => b.color), + borderRadius: 6, + borderSkipped: false, + barPercentage: 0.75, + }], + }, + options: { + responsive: true, maintainAspectRatio: false, + plugins: { + legend: { display: false }, + tooltip: { + backgroundColor: '#fff', borderColor: '#E5E7EB', borderWidth: 1, + titleColor: '#111827', bodyColor: '#111827', + callbacks: { + label: (ctx) => `${ctx.parsed.y} itens (${Math.round(ctx.parsed.y / items.length * 100)}%)`, + }, + }, + }, + scales: { + x: { grid: { display: false }, ticks: { color: '#6B7280', font: { size: 11 } } }, + y: { beginAtZero: 
true, grid: { color: '#F3F4F6' }, ticks: { color: '#9CA3AF', font: { size: 11 }, precision: 0 } }, + }, + }, + }); +} + +// ---------- DRAWER ---------- + +function openDrawer() { + const d = document.getElementById('fh-drawer'); + d.hidden = false; + const items = buildAgingItems(STATE_DATA[currentState].aging_wip.count, currentState) + .filter((i) => i.is_at_risk); + const body = document.getElementById('fh-drawer-body'); + body.innerHTML = ` + + + + + + + + ${items.slice(0, 200).map((it) => ` + + + + + + `).join('')} + +
IssueSquadStatusDias
${it.issue_key}${it.squad_key}${it.status_name}${fmtDays(it.age_days)}
+ ${items.length > 200 ? `

Mostrando 200 de ${items.length}. Use os filtros acima.

` : ''} + `; + document.getElementById('fh-drawer-close').focus(); +} +function closeDrawer() { document.getElementById('fh-drawer').hidden = true; } + +function bindViewportHandlers() { + document.querySelectorAll('[data-action="open-drawer"]').forEach((el) => { + el.addEventListener('click', (e) => { e.preventDefault(); openDrawer(); }); + }); + document.querySelectorAll('.outlier-table tbody tr').forEach((tr) => { + tr.addEventListener('click', () => openDrawer()); + }); + document.querySelectorAll('.heatmap__cell--risk[data-squad]').forEach((c) => { + c.addEventListener('click', () => openDrawer()); + }); +} + +// ---------- SWITCHER ---------- + +function bindSwitcher() { + document.querySelectorAll('[data-concept]').forEach((b) => { + b.addEventListener('click', () => { + document.querySelectorAll('[data-concept]').forEach((x) => x.classList.remove('is-active')); + b.classList.add('is-active'); + currentConcept = b.dataset.concept; + renderSection(); + }); + }); + document.querySelectorAll('[data-state]').forEach((b) => { + b.addEventListener('click', () => { + document.querySelectorAll('[data-state]').forEach((x) => x.classList.remove('is-active')); + b.classList.add('is-active'); + currentState = b.dataset.state; + renderSection(); + }); + }); + + document.getElementById('aging-view-list').addEventListener('click', openDrawer); + document.getElementById('fh-drawer-close').addEventListener('click', closeDrawer); + document.addEventListener('keydown', (e) => { + if (e.key === 'Escape') closeDrawer(); + }); +} + +// ---------- BOOT ---------- +document.addEventListener('DOMContentLoaded', () => { + bindSwitcher(); + renderSection(); +}); diff --git a/pulse/pulse-ui/pages/dashboard/index.html b/pulse/pulse-ui/pages/dashboard/index.html new file mode 100644 index 0000000..c0308bf --- /dev/null +++ b/pulse/pulse-ui/pages/dashboard/index.html @@ -0,0 +1,240 @@ + + + + + + PULSE Dashboard — Diagnostic-first + + + + + + + + + + + + + + + + + + + +
+ + +
+
+

PULSE Dashboard

+

+ Visão de engenharia por time em DORA e Flow. Selecione squad e período para explorar. +

+
+ + +
+ + +
+ + Exibindo todas as 27 squads · últimos 60 dias + +
+ + +
+

Indicadores globais

+ +
+ + +
+
+

+ + DORA Metrics +

+ Classificação geral: Elite em 3 de 4 +
+ +
+
+ + +
+
+

+ + Flow & Management +

+ Based on Little's Law · P50/P85 +
+ +
+
+
+
+ + +
+
+

Comparativo por squad

+

Ordenado por desempenho — clique em uma squad para abrir detalhe.

+
+ + +
+ + + + + + +
+ +
+
+
+

Deploy Frequency por squad

+

Maior é melhor · thresholds DORA overlay

+
+
+ Elite + High + Medium + Low +
+
+ + +
+ +
+
+
+ + +
+
+

Evolução por squad

+

Mini trend de 12 semanas · hover mostra valor na semana

+ +
+ + +
+
+ +
+ +
+
+ +
+

+ PULSE é read-only. Dados de DevLake · GitHub · Jenkins · Jira. Ver pipeline. +

+
+ +
+ + + + + +
+ + + diff --git a/pulse/pulse-ui/pages/dashboard/mock-data.js b/pulse/pulse-ui/pages/dashboard/mock-data.js new file mode 100644 index 0000000..af7e573 --- /dev/null +++ b/pulse/pulse-ui/pages/dashboard/mock-data.js @@ -0,0 +1,170 @@ +// Mock data shared by dashboard concepts (Executive / Investigator / Diagnostic) +// Real-scale: 27 squads grouped by 8 tribos (Webmotors) + +export const GLOBAL_METRICS = { + dora: { + deploymentFrequency: { label: 'Deploy Frequency', value: 4.2, unit: '/dia', classification: 'elite', trendPct: 12.3, sparkline: [2.8,3.1,3.4,3.0,3.6,3.9,3.7,4.0,4.2,4.1,4.3,4.2] }, + leadTimeForChanges: { label: 'Lead Time', value: 18, unit: 'h', classification: 'elite', trendPct: -8.1, sparkline: [26,24,25,23,22,21,20,19,19,18,18,18] }, + changeFailureRate: { label: 'Change Failure', value: 6.8, unit: '%', classification: 'high', trendPct: -2.4, sparkline: [9.2,8.8,8.1,7.9,7.5,7.2,7.0,6.9,6.8,6.7,6.8,6.8] }, + timeToRestore: { label: 'Time to Restore', value: 1.2, unit: 'h', classification: 'elite', trendPct: -15.0, sparkline: [2.1,1.9,1.8,1.6,1.5,1.4,1.3,1.3,1.2,1.2,1.2,1.2] }, + }, + flow: { + cycleTimeP50: { label: 'Cycle Time P50', value: 2.8, unit: 'd', trendPct: -5.4, sparkline: [3.4,3.3,3.1,3.0,2.9,2.9,2.8,2.8,2.8,2.8,2.8,2.8] }, + cycleTimeP85: { label: 'Cycle Time P85', value: 8.1, unit: 'd', trendPct: 3.2, sparkline: [7.2,7.4,7.6,7.7,7.8,7.9,8.0,8.0,8.1,8.0,8.1,8.1] }, + wip: { label: 'WIP', value: 143, unit: 'items',trendPct: 8.0, sparkline: [124,128,130,132,134,136,138,140,141,142,143,143] }, + throughput: { label: 'Throughput', value: 87, unit: 'PRs/sem', trendPct: 6.2, sparkline: [78,80,81,82,83,84,85,86,86,87,87,87] }, + }, +}; + +// 27 squads distributed across 8 tribos — realistic Webmotors-like distribution +const RAW_TEAMS = [ + // Tribe PF (Produtos Financeiros) — 4 squads + { id: 'pf-okm', name: 'OEM Integração', tribe: 'PF', cls: 'elite' }, + { id: 'pf-fin', name: 'Financiamento', tribe: 'PF', cls: 'elite' }, + { id: 'pf-seg', 
name: 'Seguros', tribe: 'PF', cls: 'high' }, + { id: 'pf-pag', name: 'Pagamentos', tribe: 'PF', cls: 'high' }, + + // Tribe TEC (Tecnologia transversal) — 4 + { id: 'tec-sdi', name: 'Segurança da Informação', tribe: 'TEC', cls: 'medium' }, + { id: 'tec-obs', name: 'Observabilidade', tribe: 'TEC', cls: 'high' }, + { id: 'tec-plt', name: 'Plataforma Cloud', tribe: 'TEC', cls: 'elite' }, + { id: 'tec-dev', name: 'Developer Experience', tribe: 'TEC', cls: 'high' }, + + // Tribe PI (Publicidade / Integrações) — 3 + { id: 'pi-secom', name: 'SECOM', tribe: 'PI', cls: 'low' }, + { id: 'pi-ads', name: 'Ads Platform', tribe: 'PI', cls: 'medium' }, + { id: 'pi-part', name: 'Parceiros', tribe: 'PI', cls: 'medium' }, + + // Tribe SALES — 4 + { id: 'sls-lead', name: 'Lead Management', tribe: 'SALES',cls: 'high' }, + { id: 'sls-crm', name: 'CRM', tribe: 'SALES',cls: 'medium' }, + { id: 'sls-prc', name: 'Precificação', tribe: 'SALES',cls: 'elite' }, + { id: 'sls-dlr', name: 'Dealer Portal', tribe: 'SALES',cls: 'high' }, + + // Tribe BG (Buy/Grow) — 3 + { id: 'bg-buy', name: 'Buy Flow', tribe: 'BG', cls: 'high' }, + { id: 'bg-grw', name: 'Growth', tribe: 'BG', cls: 'medium' }, + { id: 'bg-chk', name: 'Checkout', tribe: 'BG', cls: 'high' }, + + // Tribe DESC (Descoberta) — 3 + { id: 'dsc-srh', name: 'Search', tribe: 'DESC', cls: 'elite' }, + { id: 'dsc-rec', name: 'Recomendação', tribe: 'DESC', cls: 'high' }, + { id: 'dsc-cat', name: 'Catálogo', tribe: 'DESC', cls: 'medium' }, + + // Tribe ENO (Enablers / Ops) — 3 + { id: 'eno-ops', name: 'SRE', tribe: 'ENO', cls: 'elite' }, + { id: 'eno-dat', name: 'Data Platform', tribe: 'ENO', cls: 'high' }, + { id: 'eno-iam', name: 'Identity', tribe: 'ENO', cls: 'medium' }, + + // Tribe CPA (Core & Apps) — 3 + { id: 'cpa-mob', name: 'Mobile Apps', tribe: 'CPA', cls: 'high' }, + { id: 'cpa-web', name: 'Web Core', tribe: 'CPA', cls: 'high' }, + { id: 'cpa-bff', name: 'BFF', tribe: 'CPA', cls: 'low' }, +]; + +// Deterministic seeded PRNG for 
reproducible mock
+function mulberry32(seed) {
+  return function () {
+    let t = seed += 0x6D2B79F5;
+    t = Math.imul(t ^ t >>> 15, t | 1);
+    t ^= t + Math.imul(t ^ t >>> 7, t | 61);
+    return ((t ^ t >>> 14) >>> 0) / 4294967296;
+  };
+}
+
+// Generate metrics per classification band
+function generateTeamMetrics(cls, rng) {
+  const bands = {
+    elite:  { df: [3.0, 6.0],  lt: [8, 22],    cfr: [1, 5],   ct50: [1.5, 3.0], ct85: [4, 7],   wip: [8, 16],  thr: [18, 30] },
+    high:   { df: [1.2, 3.0],  lt: [22, 50],   cfr: [5, 10],  ct50: [2.5, 5.0], ct85: [7, 12],  wip: [12, 22], thr: [12, 22] },
+    medium: { df: [0.4, 1.2],  lt: [50, 120],  cfr: [10, 15], ct50: [4.0, 7.0], ct85: [10, 18], wip: [18, 30], thr: [8, 15] },
+    low:    { df: [0.05, 0.4], lt: [120, 360], cfr: [15, 40], ct50: [6.0, 12],  ct85: [16, 30], wip: [25, 45], thr: [3, 10] },
+  };
+  const b = bands[cls];
+  // Sample uniformly within the band, rounded to one decimal place.
+  const pick = ([lo, hi]) => +(lo + rng() * (hi - lo)).toFixed(1);
+  return {
+    deployFreq: pick(b.df),
+    leadTime: Math.round(pick(b.lt)),
+    cfr: pick(b.cfr),
+    cycleTimeP50: pick(b.ct50),
+    cycleTimeP85: pick(b.ct85),
+    wip: Math.round(pick(b.wip)),
+    throughput: Math.round(pick(b.thr)),
+  };
+}
+
+// Generate a 12-week evolution sparkline with a mild trend
+function generateEvolution(baseline, rng, trend = 0) {
+  const pts = [];
+  let v = baseline * (1 - trend * 0.15);
+  for (let i = 0; i < 12; i++) {
+    v = v + (baseline - v) * 0.25 + (rng() - 0.5) * baseline * 0.12;
+    pts.push(+Math.max(0, v).toFixed(2));
+  }
+  // End exactly at the current baseline value
+  pts[pts.length - 1] = baseline;
+  return pts;
+}
+
+export const TEAMS = (() => {
+  const rng = mulberry32(1492);
+  return RAW_TEAMS.map((t) => {
+    const m = generateTeamMetrics(t.cls, rng);
+    return {
+      ...t,
+      ...m,
+      evolution: {
+        deployFreq: generateEvolution(m.deployFreq, rng, -0.05),
+        leadTime: generateEvolution(m.leadTime, rng, 0.08),
+        cfr: generateEvolution(m.cfr, rng, 0.02),
+        cycleTimeP50: generateEvolution(m.cycleTimeP50, rng, 0.0),
+        
wip: generateEvolution(m.wip, rng, 0.10), + throughput: generateEvolution(m.throughput, rng, -0.05), + }, + }; + }); +})(); + +export const TRIBES = [...new Set(TEAMS.map((t) => t.tribe))]; + +export const PERIOD_OPTIONS = [ + { id: '30d', label: '30 dias' }, + { id: '60d', label: '60 dias' }, + { id: '90d', label: '90 dias' }, + { id: '120d', label: '120 dias' }, + { id: 'custom', label: 'Personalizado…' }, +]; + +// DORA thresholds (2023 DORA report) +export const DORA_THRESHOLDS = { + deployFreq: { elite: 1, high: 0.14, medium: 0.03 }, // per day + leadTime: { elite: 24, high: 168, medium: 720 }, // hours (<= is better) + cfr: { elite: 5, high: 10, medium: 15 }, // % (<= is better) +}; + +export function classifyDora(metric, value) { + const t = DORA_THRESHOLDS[metric]; + if (!t) return 'neutral'; + if (metric === 'deployFreq') { + if (value >= t.elite) return 'elite'; + if (value >= t.high) return 'high'; + if (value >= t.medium) return 'medium'; + return 'low'; + } + if (value <= t.elite) return 'elite'; + if (value <= t.high) return 'high'; + if (value <= t.medium) return 'medium'; + return 'low'; +} + +export function fmtNumber(n, unit = '') { + if (n == null) return '—'; + if (Math.abs(n) >= 1000) return (n / 1000).toFixed(1) + 'k' + (unit ? ' ' + unit : ''); + if (Number.isInteger(n)) return n.toString() + (unit ? ' ' + unit : ''); + return n.toFixed(1) + (unit ? ' ' + unit : ''); +} + +export function fmtTrend(pct) { + const sign = pct >= 0 ? '+' : ''; + return `${sign}${pct.toFixed(1)}%`; +} diff --git a/pulse/pulse-ui/pages/dashboard/script.js b/pulse/pulse-ui/pages/dashboard/script.js new file mode 100644 index 0000000..7af428d --- /dev/null +++ b/pulse/pulse-ui/pages/dashboard/script.js @@ -0,0 +1,503 @@ +// PULSE Dashboard — Diagnostic-first (concept C, winning) +// Vanilla ES module. Tokens-only. Chart.js for visualisation. 
+ +import { GLOBAL_METRICS, TEAMS, TRIBES, PERIOD_OPTIONS, classifyDora, fmtNumber, fmtTrend } from './mock-data.js'; + +/* -------------------------------------------------------- * + * Token helpers (read from CSS custom properties) + * -------------------------------------------------------- */ +const css = (name) => getComputedStyle(document.documentElement).getPropertyValue(name).trim(); + +const DORA_COLOR = { + elite: () => css('--color-dora-elite'), + high: () => css('--color-dora-high'), + medium: () => css('--color-dora-medium'), + low: () => css('--color-dora-low'), + neutral:() => css('--color-text-tertiary'), +}; + +/* -------------------------------------------------------- * + * Dashboard state + * -------------------------------------------------------- */ +const state = { + teamId: null, // null = all squads + period: '60d', + customStart: null, + customEnd: null, + activeRankingMetric: 'deployFreq', + activeEvolutionMetric: 'cycleTimeP50', +}; + +/* ---------------- KPI GROUP RENDER ---------------- */ +function renderKpiGroups() { + const dora = GLOBAL_METRICS.dora; + const flow = GLOBAL_METRICS.flow; + + const doraOrder = [ + ['deploymentFrequency', 'Deploy Freq', 'dora'], + ['leadTimeForChanges', 'Lead Time', 'dora'], + ['changeFailureRate', 'Change Failure', 'dora'], + ['timeToRestore', 'Time to Restore', 'dora'], + ]; + const flowOrder = [ + ['cycleTimeP50', 'Cycle Time P50', 'flow'], + ['cycleTimeP85', 'Cycle Time P85', 'flow'], + ['wip', 'Work in Progress','flow'], + ['throughput', 'Throughput', 'flow'], + ]; + + const doraEl = document.getElementById('kpi-dora'); + const flowEl = document.getElementById('kpi-flow'); + doraEl.innerHTML = doraOrder.map(([k, lbl]) => kpiCardHtml(dora[k], lbl, 'dora', k)).join(''); + flowEl.innerHTML = flowOrder.map(([k, lbl]) => kpiCardHtml(flow[k], lbl, 'flow', k)).join(''); + + // sparklines + [...doraOrder, ...flowOrder].forEach(([k, , ,], idx) => { + const m = (dora[k] || flow[k]); + const canvas = 
document.getElementById(`sp-${k}`);
+    if (canvas && m?.sparkline) drawSparkline(canvas, m.sparkline, m.classification);
+  });
+}
+
+function kpiCardHtml(m, label, family, key) {
+  if (!m) return '';
+  // Badge markup reconstructed; `badge badge--*` class names are assumed design-system hooks.
+  const badge = m.classification
+    ? `<span class="badge badge--${m.classification}">${classifLabel(m.classification)}</span>`
+    : '';
+  const trendClass = trendClassFor(key, m.trendPct);
+  return `
+    <div class="kpi">
+      <span class="kpi__label">${label}</span>
+      <span class="kpi__value-row">
+        <span class="kpi__value">${m.value}</span>
+        <span class="kpi__unit">${m.unit}</span>
+      </span>
+      <span class="kpi__meta">
+        <span class="kpi__trend ${trendClass}">${fmtTrend(m.trendPct)}</span>
+        <canvas id="sp-${key}" class="kpi__spark" width="60" height="20"></canvas>
+      </span>
+      ${badge}
+    </div>
+ `; +} + +function classifLabel(c) { + return { elite: 'Elite', high: 'High', medium: 'Medium', low: 'Low', neutral: '—' }[c] || c; +} + +// Lower-is-better metrics: leadTime, cfr, timeToRestore, cycleTimeP50/P85, wip +function trendClassFor(key, pct) { + const lowerIsBetter = ['leadTimeForChanges','changeFailureRate','timeToRestore','cycleTimeP50','cycleTimeP85','wip']; + if (lowerIsBetter.includes(key)) { + return pct < 0 ? 'kpi__trend--down' : 'kpi__trend--bad-up'; + } + // higher is better + return pct >= 0 ? 'kpi__trend--up' : 'kpi__trend--bad-up'; +} + +/* ---------------- SPARKLINE (Chart.js) ---------------- */ +function drawSparkline(canvas, data, classification = 'neutral') { + const color = DORA_COLOR[classification]?.() || css('--color-brand-primary'); + new Chart(canvas, { + type: 'line', + data: { + labels: data.map((_, i) => i), + datasets: [{ data, borderColor: color, borderWidth: 1.5, fill: false, + pointRadius: 0, tension: 0.35 }], + }, + options: { + responsive: false, maintainAspectRatio: false, + plugins: { legend: { display: false }, tooltip: { enabled: false } }, + scales: { x: { display: false }, y: { display: false } }, + animation: false, + }, + }); +} + +/* ---------------- COMBOBOX (team filter) ---------------- */ +function initTeamCombobox() { + const trigger = document.getElementById('f-team'); + const panel = document.getElementById('team-list'); + const search = document.getElementById('team-search'); + const optsEl = document.getElementById('team-options'); + const value = document.getElementById('team-value'); + + function renderOptions(filter = '') { + const needle = filter.toLowerCase(); + const byTribe = new Map(); + TEAMS.forEach((t) => { + const hay = `${t.name} ${t.tribe}`.toLowerCase(); + if (needle && !hay.includes(needle)) return; + if (!byTribe.has(t.tribe)) byTribe.set(t.tribe, []); + byTribe.get(t.tribe).push(t); + }); + + const parts = []; + parts.push(`
+      <li class="combobox__option" role="option" data-id="" aria-selected="${state.teamId === null}">
+        <span>Todas as squads</span><span>${TEAMS.length}</span>
+      </li>`);
+    for (const [tribe, list] of byTribe.entries()) {
+      parts.push(`<li class="combobox__group" role="presentation">${tribe}</li>`);
+      list.forEach((t) => {
+        parts.push(`
+          <li class="combobox__option" role="option" data-id="${t.id}" aria-selected="${state.teamId === t.id}">
+            <span>${t.name}</span>
+          </li>`);
+      });
+    }
+    optsEl.innerHTML = parts.join('');
+  }
+
+  function open() {
+    panel.hidden = false;
+    trigger.setAttribute('aria-expanded', 'true');
+    renderOptions('');
+    search.value = ''; search.focus();
+  }
+  function close() {
+    panel.hidden = true;
+    trigger.setAttribute('aria-expanded', 'false');
+  }
+
+  trigger.addEventListener('click', () => panel.hidden ? open() : close());
+  document.addEventListener('click', (e) => {
+    if (!e.target.closest('#team-combobox')) close();
+  });
+  search.addEventListener('input', (e) => renderOptions(e.target.value));
+
+  optsEl.addEventListener('click', (e) => {
+    const li = e.target.closest('.combobox__option');
+    if (!li) return;
+    state.teamId = li.dataset.id || null;
+    value.textContent = state.teamId
+      ? TEAMS.find((t) => t.id === state.teamId)?.name
+      : 'Todas as squads';
+    close();
+    updateAppliedFilters();
+    renderAll();
+    trackEvent('dashboard_team_filter_changed', { teamId: state.teamId });
+  });
+
+  renderOptions();
+}
+
+/* ---------------- PERIOD SEGMENTED ---------------- */
+function initPeriodSegmented() {
+  const btns = document.querySelectorAll('.segmented__opt');
+  const dateRange = document.getElementById('date-range');
+
+  btns.forEach((b) => {
+    b.addEventListener('click', () => {
+      btns.forEach((x) => { x.classList.remove('is-active'); x.setAttribute('aria-checked', 'false'); });
+      b.classList.add('is-active'); b.setAttribute('aria-checked', 'true');
+      state.period = b.dataset.period;
+      dateRange.hidden = state.period !== 'custom';
+      updateAppliedFilters();
+      renderAll();
+      trackEvent('dashboard_period_changed', { period: state.period });
+    });
+  });
+
+  document.getElementById('date-start').addEventListener('change', (e) => { state.customStart = e.target.value; updateAppliedFilters(); });
+  document.getElementById('date-end').addEventListener('change', (e) => { state.customEnd = e.target.value; updateAppliedFilters(); });
+
+  document.getElementById('btn-reset').addEventListener('click', () => {
+ state.teamId = null; state.period = '60d'; + document.getElementById('team-value').textContent = 'Todas as squads'; + btns.forEach((x) => { x.classList.toggle('is-active', x.dataset.period === '60d'); }); + dateRange.hidden = true; + updateAppliedFilters(); + renderAll(); + }); +} + +function updateAppliedFilters() { + document.getElementById('af-scope').textContent = state.teamId + ? TEAMS.find((t) => t.id === state.teamId)?.name + : 'todas as 27 squads'; + const label = PERIOD_OPTIONS.find((p) => p.id === state.period)?.label; + document.getElementById('af-period').textContent = state.period === 'custom' + ? `${state.customStart || '—'} a ${state.customEnd || '—'}` + : label; +} + +/* ---------------- RANKING TAB + CHART ---------------- */ +function initRankingTabs() { + document.querySelectorAll('.metric-tab').forEach((t) => { + t.addEventListener('click', () => { + document.querySelectorAll('.metric-tab').forEach((x) => { + x.classList.remove('is-active'); x.setAttribute('aria-selected', 'false'); + }); + t.classList.add('is-active'); t.setAttribute('aria-selected', 'true'); + state.activeRankingMetric = t.dataset.metric; + renderRanking(); + trackEvent('dashboard_ranking_metric_changed', { metric: state.activeRankingMetric }); + }); + }); +} + +const METRIC_META = { + deployFreq: { title: 'Deploy Frequency por squad', sub: 'Deploys por dia · maior é melhor', key: 'deployFreq', sortDir: 'desc', unit: '/dia' }, + leadTime: { title: 'Lead Time por squad', sub: 'Horas commit → produção · menor é melhor',key: 'leadTime', sortDir: 'asc', unit: 'h' }, + cfr: { title: 'Change Failure Rate por squad', sub: '% de deploys com falha · menor é melhor', key: 'cfr', sortDir: 'asc', unit: '%' }, + cycleTime: { title: 'Cycle Time P50 por squad', sub: 'Dias · menor é melhor', key: 'cycleTimeP50', sortDir: 'asc', unit: 'd' }, + wip: { title: 'Work in Progress por squad', sub: 'Itens em progresso · menor é mais saudável', key: 'wip', sortDir: 'asc', unit: 'itens'}, + 
throughput: { title: 'Throughput por squad', sub: 'PRs/semana · maior é melhor', key: 'throughput', sortDir: 'desc', unit: 'PRs/sem'}, +}; + +function classifyForMetric(metric, value) { + // Map to DORA classification where possible + if (metric === 'deployFreq') return classifyDora('deployFreq', value); + if (metric === 'leadTime') return classifyDora('leadTime', value); + if (metric === 'cfr') return classifyDora('cfr', value); + // Flow metrics: quantile-based (approx) + if (metric === 'cycleTimeP50') return value < 3 ? 'elite' : value < 5 ? 'high' : value < 8 ? 'medium' : 'low'; + if (metric === 'wip') return value < 15 ? 'elite' : value < 22 ? 'high' : value < 30 ? 'medium' : 'low'; + if (metric === 'throughput') return value >= 20 ? 'elite' : value >= 14 ? 'high' : value >= 9 ? 'medium' : 'low'; + return 'neutral'; +} + +function renderRanking() { + const meta = METRIC_META[state.activeRankingMetric]; + document.getElementById('ranking-metric-title').textContent = meta.title; + document.getElementById('ranking-metric-sub').textContent = meta.sub; + + const teams = [...TEAMS].sort((a, b) => meta.sortDir === 'desc' + ? b[meta.key] - a[meta.key] + : a[meta.key] - b[meta.key]); + + const max = Math.max(...teams.map((t) => t[meta.key])); + const container = document.getElementById('ranking-chart'); + + if (teams.length === 0) { + container.innerHTML = `
+      <div class="empty-state">
+        <p class="empty-state__title">Sem squads para exibir</p>
+        <p class="empty-state__hint">Conecte DevLake para começar a coletar dados.</p>
+      </div>
    `; + return; + } + + container.innerHTML = teams.map((t, i) => { + const value = t[meta.key]; + const cls = classifyForMetric(state.activeRankingMetric, value); + const pct = max > 0 ? (value / max) * 100 : 0; + return ` +
+      <div class="rank-row" role="button" tabindex="0" data-team-id="${t.id}">
+        <span class="rank-row__pos">${i + 1}</span>
+        <span class="rank-row__team">
+          <span class="rank-row__team-name">${t.name}</span>
+          <span class="rank-row__team-tribe">${t.tribe}</span>
+        </span>
+        <span class="rank-row__bar-track">
+          <span class="rank-row__bar-fill rank-row__bar-fill--${cls}" style="width: ${pct}%"></span>
+        </span>
+        <span class="rank-row__value">${fmtNumber(value)} ${meta.unit}</span>
+        <span class="rank-row__badge">${classifLabel(cls)}</span>
+      </div>
    + `; + }).join(''); + + // bind row click → drawer + container.querySelectorAll('.rank-row').forEach((row) => { + row.addEventListener('click', () => openDrawer(row.dataset.teamId)); + row.addEventListener('keydown', (e) => { + if (e.key === 'Enter' || e.key === ' ') { e.preventDefault(); openDrawer(row.dataset.teamId); } + }); + }); +} + +/* ---------------- EVOLUTION SMALL MULTIPLES ---------------- */ +function initEvolutionControls() { + document.getElementById('ev-metric').addEventListener('change', (e) => { + state.activeEvolutionMetric = e.target.value; + renderSmallMultiples(); + trackEvent('dashboard_evolution_metric_changed', { metric: state.activeEvolutionMetric }); + }); +} + +function renderSmallMultiples() { + const metricKey = state.activeEvolutionMetric; + const container = document.getElementById('small-multiples'); + container.innerHTML = ''; + + // Group by tribe + const groups = new Map(); + TEAMS.forEach((t) => { + if (!groups.has(t.tribe)) groups.set(t.tribe, []); + groups.get(t.tribe).push(t); + }); + + for (const [tribe, teams] of groups.entries()) { + const title = document.createElement('div'); + title.className = 'sm-group-title'; + title.textContent = `${tribe} · ${teams.length} squads`; + container.appendChild(title); + + teams.forEach((t) => { + const tile = document.createElement('div'); + tile.className = 'sm-tile'; + tile.setAttribute('role', 'button'); + tile.setAttribute('tabindex', '0'); + tile.dataset.teamId = t.id; + const series = t.evolution[metricKey] || []; + const current = series[series.length - 1] ?? 0; + const prev = series[0] ?? 0; + const delta = prev > 0 ? ((current - prev) / prev) * 100 : 0; + tile.innerHTML = ` +
+        <div class="sm-tile__head">
+          <span class="sm-tile__name">${t.name}</span>
+          <span class="sm-tile__tribe">${t.tribe}</span>
+        </div>
+        <div class="sm-tile__chart">
+          <canvas width="160" height="40"></canvas>
+        </div>
+        <div class="sm-tile__value">${fmtNumber(current)}</div>
+        <div class="sm-tile__delta">${fmtTrend(delta)} vs 12 sem atrás</div>
+      `;
+      tile.addEventListener('click', () => openDrawer(t.id));
+      tile.addEventListener('keydown', (e) => {
+        if (e.key === 'Enter' || e.key === ' ') { e.preventDefault(); openDrawer(t.id); }
+      });
+      container.appendChild(tile);
+
+      // Colour the sparkline by the evolution metric on display, not by the ranking tab's metric.
+      const cls = classifyForMetric(metricKey, current);
+      drawSparkline(tile.querySelector('canvas'), series, cls);
+    });
+  }
+}
+
+/* ---------------- DRAWER ---------------- */
+let drawerCharts = [];
+function openDrawer(teamId) {
+  const team = TEAMS.find((t) => t.id === teamId);
+  if (!team) return;
+
+  drawerCharts.forEach((c) => c.destroy());
+  drawerCharts = [];
+
+  document.getElementById('drawer-tribe').textContent = `TRIBO ${team.tribe}`;
+  document.getElementById('drawer-title').textContent = team.name;
+
+  const body = document.getElementById('drawer-body');
+  body.innerHTML = `
+    <div class="drawer-metrics">
+      ${drawerMetric('Deploy Freq', fmtNumber(team.deployFreq), '/dia', classifyForMetric('deployFreq', team.deployFreq))}
+      ${drawerMetric('Lead Time', fmtNumber(team.leadTime), 'h', classifyForMetric('leadTime', team.leadTime))}
+      ${drawerMetric('Change Failure', fmtNumber(team.cfr), '%', classifyForMetric('cfr', team.cfr))}
+      ${drawerMetric('Cycle P50', fmtNumber(team.cycleTimeP50), 'd', classifyForMetric('cycleTimeP50', team.cycleTimeP50))}
+      ${drawerMetric('Cycle P85', fmtNumber(team.cycleTimeP85), 'd', 'neutral')}
+      ${drawerMetric('WIP', fmtNumber(team.wip), 'itens', classifyForMetric('wip', team.wip))}
+      ${drawerMetric('Throughput', fmtNumber(team.throughput), 'PRs/sem', classifyForMetric('throughput', team.throughput))}
+    </div>
+
+    <section class="drawer-chart">
+      <h3 class="drawer-chart__title">Evolução (12 sem) · ${METRIC_META[state.activeRankingMetric].title.replace(' por squad','')}</h3>
+      <div class="drawer-chart__canvas"><canvas id="drawer-evo"></canvas></div>
+    </section>
+
+    <section class="drawer-chart">
+      <h3 class="drawer-chart__title">Distribuição Cycle Time (P50 / P85)</h3>
+      <div class="drawer-chart__canvas"><canvas id="drawer-dist"></canvas></div>
+    </section>
    + `; + + const metricKey = METRIC_META[state.activeRankingMetric].key; + const series = team.evolution[metricKey] || team.evolution.cycleTimeP50; + + const evoCanvas = document.getElementById('drawer-evo'); + drawerCharts.push(new Chart(evoCanvas, { + type: 'line', + data: { + labels: series.map((_, i) => `S-${11 - i}`), + datasets: [{ + data: series, borderColor: css('--color-brand-primary'), borderWidth: 2, + backgroundColor: 'rgba(99,102,241,0.08)', fill: true, pointRadius: 2, tension: 0.3, + }], + }, + options: { + responsive: true, maintainAspectRatio: false, + plugins: { legend: { display: false } }, + scales: { + x: { grid: { display: false }, ticks: { font: { family: css('--font-mono'), size: 10 } } }, + y: { grid: { color: css('--color-border-subtle') }, ticks: { font: { family: css('--font-mono'), size: 10 } } }, + }, + }, + })); + + const distCanvas = document.getElementById('drawer-dist'); + drawerCharts.push(new Chart(distCanvas, { + type: 'bar', + data: { + labels: ['P50', 'P85'], + datasets: [{ + data: [team.cycleTimeP50, team.cycleTimeP85], + backgroundColor: [css('--color-brand-primary'), css('--color-warning')], + borderRadius: 4, + }], + }, + options: { + indexAxis: 'y', responsive: true, maintainAspectRatio: false, + plugins: { legend: { display: false } }, + scales: { + x: { grid: { color: css('--color-border-subtle') }, ticks: { font: { family: css('--font-mono'), size: 10 } } }, + y: { grid: { display: false } }, + }, + }, + })); + + const drawer = document.getElementById('drawer'); + drawer.hidden = false; + document.getElementById('drawer-close').focus(); + trackEvent('dashboard_drawer_opened', { teamId }); +} + +function drawerMetric(label, value, unit, cls) { + return ` +
+    <div class="drawer-metric">
+      <span class="drawer-metric__label">${label}</span>
+      <span class="drawer-metric__value drawer-metric__value--${cls}">
+        ${value} <span class="drawer-metric__unit">${unit}</span>
+      </span>
+    </div>
    + `; +} + +function initDrawer() { + document.getElementById('drawer-close').addEventListener('click', () => { + document.getElementById('drawer').hidden = true; + }); + document.addEventListener('keydown', (e) => { + if (e.key === 'Escape') document.getElementById('drawer').hidden = true; + }); +} + +/* ---------------- ANALYTICS ---------------- */ +function trackEvent(name, payload = {}) { + // Hook point — replaced by Mixpanel/PostHog in production. + // eslint-disable-next-line no-console + console.debug('[analytics]', name, payload); +} + +/* ---------------- MAIN ---------------- */ +function renderAll() { + renderKpiGroups(); + renderRanking(); + renderSmallMultiples(); +} + +document.addEventListener('DOMContentLoaded', () => { + // Wait for Chart.js to load + const boot = () => { + if (!window.Chart) return requestAnimationFrame(boot); + initTeamCombobox(); + initPeriodSegmented(); + initRankingTabs(); + initEvolutionControls(); + initDrawer(); + updateAppliedFilters(); + renderAll(); + trackEvent('dashboard_viewed', { period: state.period }); + }; + boot(); +}); diff --git a/pulse/pulse-ui/pages/dashboard/styles.css b/pulse/pulse-ui/pages/dashboard/styles.css new file mode 100644 index 0000000..be923e3 --- /dev/null +++ b/pulse/pulse-ui/pages/dashboard/styles.css @@ -0,0 +1,527 @@ +/* ============================================================ + PULSE Dashboard — Diagnostic-first (winning concept) + BEM naming · tokens only · WCAG AA · responsive + ============================================================ */ + +/* Skip link */ +.skip-link { + position: absolute; top: -40px; left: 8px; + background: var(--color-brand-primary); color: #fff; + padding: 6px 12px; border-radius: var(--radius-button); + font-weight: 500; text-decoration: none; z-index: 1000; +} +.skip-link:focus { top: 8px; } + +/* ========== TOPBAR ========== */ +.topbar { + display: flex; align-items: center; justify-content: space-between; + height: 56px; padding: 0 24px; + 
background: var(--color-bg-surface); + border-bottom: 1px solid var(--color-border-default); + position: sticky; top: 0; z-index: 50; +} +.topbar__brand { + display: flex; align-items: center; gap: 10px; + font-size: 14px; color: var(--color-text-primary); +} +.brand-mark { + width: 22px; height: 22px; border-radius: 6px; + background: linear-gradient(135deg, #4648d4 0%, #6063ee 100%); +} +.topbar__divider { color: var(--color-text-tertiary); } +.topbar__crumb { color: var(--color-text-secondary); font-weight: 500; } +.topbar__time { font-size: 12px; color: var(--color-text-secondary); } +.topbar__time time { font-family: var(--font-mono); } + +/* ========== PAGE WRAPPER ========== */ +.page { + max-width: 1440px; + margin: 0 auto; + padding: var(--space-page-padding); + padding-top: 32px; + padding-bottom: 80px; +} + +.section-title { + font-size: 16px; font-weight: 600; + color: var(--color-text-primary); + margin: 0 0 4px 0; +} +.section-sub { font-size: 13px; color: var(--color-text-secondary); margin: 0; } + +/* ========== PAGE HEAD + FILTERS ========== */ +.page-head { + display: grid; + grid-template-columns: 1fr auto; + gap: 24px; + align-items: end; + margin-bottom: 16px; +} +.page-head__title { + font-size: 24px; font-weight: 600; + color: var(--color-text-primary); + margin: 0 0 4px 0; + letter-spacing: -0.01em; +} +.page-head__subtitle { + font-size: 14px; color: var(--color-text-secondary); + margin: 0; max-width: 60ch; +} + +.filters { + display: flex; gap: 12px; align-items: flex-end; + flex-wrap: wrap; +} +.filter { display: flex; flex-direction: column; gap: 6px; margin: 0; border: 0; padding: 0; } +.filter--period { gap: 4px; } +.filter__label { + font-size: 12px; font-weight: 500; color: var(--color-text-secondary); + letter-spacing: 0.02em; text-transform: uppercase; +} +.filter__sublabel { font-size: 11px; color: var(--color-text-tertiary); } + +/* Segmented control for period */ +.segmented { + display: inline-flex; + background: 
var(--color-bg-tertiary); + border-radius: var(--radius-button); + padding: 3px; +} +.segmented__opt { + height: 30px; padding: 0 12px; + border: 0; background: transparent; + font-size: 13px; font-weight: 500; color: var(--color-text-secondary); + border-radius: 6px; transition: all 150ms ease-out; +} +.segmented__opt.is-active { + background: var(--color-bg-surface); + color: var(--color-text-primary); + box-shadow: var(--shadow-card); +} +.segmented__opt:hover:not(.is-active) { color: var(--color-text-primary); } + +.date-range { + display: flex; gap: 8px; margin-top: 8px; +} +.date-range__item { display: flex; flex-direction: column; gap: 2px; } +.date-range__item input { + height: 32px; padding: 0 8px; + border: 1px solid var(--color-border-default); + border-radius: var(--radius-button); + font-family: var(--font-mono); font-size: 12px; + color: var(--color-text-primary); + background: var(--color-bg-surface); +} + +/* Combobox (searchable team filter) */ +.combobox { position: relative; } +.combobox__trigger { + display: inline-flex; align-items: center; justify-content: space-between; + gap: 10px; + min-width: 220px; height: 36px; padding: 0 12px; + border: 1px solid var(--color-border-default); + border-radius: var(--radius-button); + background: var(--color-bg-surface); + font-size: 13px; color: var(--color-text-primary); +} +.combobox__trigger:hover { border-color: var(--color-text-tertiary); } +.icon-chevron { width: 14px; height: 14px; color: var(--color-text-tertiary); } + +.combobox__panel { + position: absolute; top: calc(100% + 4px); left: 0; right: 0; + background: var(--color-bg-surface); + border: 1px solid var(--color-border-default); + border-radius: var(--radius-card); + box-shadow: var(--shadow-elevated); + padding: 8px; width: 320px; max-width: 90vw; + z-index: 40; +} +.combobox__search { + width: 100%; height: 32px; padding: 0 10px; + border: 1px solid var(--color-border-default); + border-radius: var(--radius-button); + font-size: 13px; 
margin-bottom: 6px; +} +.combobox__options { + list-style: none; margin: 0; padding: 0; + max-height: 260px; overflow-y: auto; +} +.combobox__option { + padding: 8px 10px; border-radius: 6px; + font-size: 13px; cursor: pointer; + display: flex; justify-content: space-between; align-items: center; +} +.combobox__option[aria-selected="true"], +.combobox__option:hover { background: var(--color-brand-light); color: var(--color-brand-primary-hover); } +.combobox__group { + font-size: 10px; font-weight: 600; text-transform: uppercase; + letter-spacing: 0.06em; color: var(--color-text-tertiary); + padding: 10px 10px 4px; margin-top: 4px; +} + +.btn-ghost { + height: 36px; padding: 0 12px; + border: 1px solid transparent; + border-radius: var(--radius-button); + background: transparent; + color: var(--color-text-secondary); font-size: 13px; font-weight: 500; +} +.btn-ghost:hover { background: var(--color-bg-tertiary); color: var(--color-text-primary); } + +/* ========== APPLIED FILTERS STRIP ========== */ +.applied-filters { + display: flex; align-items: center; gap: 8px; + padding: 8px 12px; margin-bottom: 24px; + background: var(--color-bg-surface); + border: 1px dashed var(--color-border-default); + border-radius: var(--radius-button); + font-size: 13px; color: var(--color-text-secondary); +} + +/* ========== KPI GROUPS ========== */ +.kpi-groups { margin-bottom: 32px; } +.kpi-groups__grid { + display: grid; + grid-template-columns: repeat(2, 1fr); + gap: var(--space-section-gap); + margin-top: 12px; +} + +.kpi-group { + background: var(--color-bg-surface); + border: 1px solid var(--color-border-default); + border-radius: var(--radius-card); + padding: var(--space-card-padding); + box-shadow: var(--shadow-card); +} +.kpi-group__head { + display: flex; align-items: baseline; justify-content: space-between; + gap: 12px; margin-bottom: 14px; +} +.kpi-group__title { + display: inline-flex; align-items: center; gap: 8px; + font-size: 13px; font-weight: 600; + color: 
var(--color-text-primary); + letter-spacing: 0.02em; text-transform: uppercase; + margin: 0; +} +.kpi-group__dot { width: 8px; height: 8px; border-radius: 50%; } +.kpi-group__dot--dora { background: var(--color-brand-primary); } +.kpi-group__dot--flow { background: var(--color-info); } +.kpi-group__hint { font-size: 12px; color: var(--color-text-tertiary); } + +.kpi-group__grid { + display: grid; + grid-template-columns: repeat(4, 1fr); + gap: 12px; +} + +/* KPI card */ +.kpi { + padding: 14px; + border-radius: 10px; + background: var(--color-bg-secondary); + border: 1px solid var(--color-border-subtle); + display: flex; flex-direction: column; gap: 6px; +} +.kpi__label { + font-size: 11px; font-weight: 500; letter-spacing: 0.04em; + color: var(--color-text-secondary); text-transform: uppercase; +} +.kpi__value-row { + display: flex; align-items: baseline; gap: 4px; +} +.kpi__value { + font-size: 24px; font-weight: 700; color: var(--color-text-primary); + font-variant-numeric: tabular-nums; line-height: 1.1; +} +.kpi__unit { font-size: 12px; color: var(--color-text-secondary); font-weight: 500; } +.kpi__meta { display: flex; align-items: center; justify-content: space-between; gap: 6px; } +.kpi__trend { + font-size: 12px; font-weight: 500; font-variant-numeric: tabular-nums; +} +.kpi__trend--up { color: var(--color-success); } +.kpi__trend--down { color: var(--color-success); } /* contextual: lower is better for lead time / CFR */ +.kpi__trend--bad-up { color: var(--color-danger); } +.kpi__spark { width: 60px; height: 20px; } + +/* ========== RANKING SECTION ========== */ +.rankings { margin-bottom: 32px; } +.rankings__head { margin-bottom: 12px; } +.metric-tabs { + display: flex; gap: 4px; overflow-x: auto; + background: var(--color-bg-surface); + border: 1px solid var(--color-border-default); + border-radius: var(--radius-card); + padding: 4px; margin-bottom: 12px; +} +.metric-tab { + flex: 1; min-width: 140px; + height: 34px; padding: 0 12px; + border: 0; 
background: transparent; + font-size: 13px; font-weight: 500; + color: var(--color-text-secondary); + border-radius: 8px; white-space: nowrap; +} +.metric-tab.is-active { + background: var(--color-brand-light); + color: var(--color-brand-primary-hover); +} +.metric-tab:hover:not(.is-active) { background: var(--color-bg-tertiary); color: var(--color-text-primary); } + +.ranking-card { + background: var(--color-bg-surface); + border: 1px solid var(--color-border-default); + border-radius: var(--radius-card); + padding: var(--space-card-padding); + box-shadow: var(--shadow-card); +} +.ranking-card__head { + display: flex; align-items: flex-start; justify-content: space-between; + gap: 12px; margin-bottom: 16px; flex-wrap: wrap; +} +.ranking-card__title { font-size: 15px; font-weight: 600; margin: 0 0 2px 0; } +.ranking-card__sub { font-size: 12px; color: var(--color-text-secondary); margin: 0; } +.ranking-card__legend { display: flex; gap: 12px; flex-wrap: wrap; } +.dora-legend { display: inline-flex; align-items: center; gap: 6px; font-size: 12px; color: var(--color-text-secondary); } +.dora-dot { width: 8px; height: 8px; border-radius: 50%; display: inline-block; } +.dora-dot--elite { background: var(--color-dora-elite); } +.dora-dot--high { background: var(--color-dora-high); } +.dora-dot--medium { background: var(--color-dora-medium); } +.dora-dot--low { background: var(--color-dora-low); } + +/* Ranking rows (horizontal bars for 27 teams) */ +.ranking-chart { + max-height: 620px; overflow-y: auto; + padding-right: 4px; + display: flex; flex-direction: column; gap: 2px; +} +.rank-row { + display: grid; + grid-template-columns: 32px minmax(140px, 200px) 1fr 72px 60px; + align-items: center; + gap: 12px; + padding: 6px 8px; + border-radius: 8px; + cursor: pointer; + transition: background 150ms ease-out; +} +.rank-row:hover, .rank-row:focus-visible { background: var(--color-bg-secondary); } +.rank-row__pos { + font-family: var(--font-mono); font-size: 12px; + color: 
var(--color-text-tertiary); text-align: right; + font-variant-numeric: tabular-nums; +} +.rank-row__team { + display: flex; flex-direction: column; gap: 1px; min-width: 0; +} +.rank-row__team-name { + font-size: 13px; color: var(--color-text-primary); font-weight: 500; + white-space: nowrap; overflow: hidden; text-overflow: ellipsis; +} +.rank-row__team-tribe { + font-size: 10px; color: var(--color-text-tertiary); + text-transform: uppercase; letter-spacing: 0.06em; +} +.rank-row__bar-track { + position: relative; height: 20px; + background: var(--color-bg-tertiary); + border-radius: 4px; overflow: hidden; +} +.rank-row__bar-fill { + position: absolute; left: 0; top: 0; bottom: 0; + border-radius: 4px; + transition: width 300ms ease-out; +} +.rank-row__bar-fill--elite { background: var(--color-dora-elite); } +.rank-row__bar-fill--high { background: var(--color-dora-high); } +.rank-row__bar-fill--medium { background: var(--color-dora-medium); } +.rank-row__bar-fill--low { background: var(--color-dora-low); } +.rank-row__bar-fill--neutral{ background: var(--color-text-tertiary); } +/* Threshold reference lines */ +.rank-row__bar-threshold { + position: absolute; top: -2px; bottom: -2px; + width: 1px; + background: rgba(17,24,39,0.35); +} +.rank-row__bar-threshold::after { + content: ''; position: absolute; top: -4px; + left: -3px; width: 7px; height: 3px; + background: rgba(17,24,39,0.35); +} +.rank-row__value { + font-family: var(--font-mono); font-size: 13px; + color: var(--color-text-primary); + text-align: right; font-variant-numeric: tabular-nums; +} +.rank-row__badge { justify-self: end; } + +/* ========== EVOLUTION SMALL MULTIPLES ========== */ +.evolution { margin-bottom: 32px; } +.evolution__head { + display: grid; grid-template-columns: 1fr auto; + align-items: end; gap: 16px; margin-bottom: 14px; +} +.evolution__controls { display: flex; flex-direction: column; gap: 4px; align-items: flex-start; } +.select { + height: 34px; padding: 0 10px; + border: 1px 
solid var(--color-border-default); + border-radius: var(--radius-button); + background: var(--color-bg-surface); + font-size: 13px; color: var(--color-text-primary); + min-width: 180px; +} + +.small-multiples { + background: var(--color-bg-surface); + border: 1px solid var(--color-border-default); + border-radius: var(--radius-card); + padding: var(--space-card-padding); + display: grid; + grid-template-columns: repeat(4, 1fr); + gap: 12px; +} +.sm-tile { + padding: 10px; + border: 1px solid var(--color-border-subtle); + border-radius: 8px; + background: var(--color-bg-secondary); + cursor: pointer; + transition: border-color 150ms ease-out, transform 150ms ease-out; +} +.sm-tile:hover { border-color: var(--color-brand-primary); transform: translateY(-1px); } +.sm-tile__head { + display: flex; align-items: center; justify-content: space-between; + gap: 6px; margin-bottom: 6px; +} +.sm-tile__name { + font-size: 12px; font-weight: 500; color: var(--color-text-primary); + white-space: nowrap; overflow: hidden; text-overflow: ellipsis; +} +.sm-tile__tribe { + font-size: 9px; color: var(--color-text-tertiary); text-transform: uppercase; + letter-spacing: 0.06em; +} +.sm-tile__chart { height: 40px; } +.sm-tile__value { + font-family: var(--font-mono); font-size: 13px; font-weight: 600; + color: var(--color-text-primary); font-variant-numeric: tabular-nums; + margin-top: 4px; +} +.sm-tile__delta { font-size: 11px; color: var(--color-text-secondary); } + +/* Tribe group heading within small multiples */ +.sm-group-title { + grid-column: 1 / -1; + font-size: 11px; font-weight: 600; + color: var(--color-text-tertiary); + text-transform: uppercase; letter-spacing: 0.06em; + padding: 4px 0; + border-top: 1px solid var(--color-border-subtle); + margin-top: 4px; +} +.sm-group-title:first-child { border-top: 0; margin-top: 0; } + +/* ========== DRAWER ========== */ +.drawer { + position: fixed; top: 0; right: 0; bottom: 0; + width: 520px; max-width: 100vw; + background: 
var(--color-bg-surface); + box-shadow: -8px 0 24px rgba(0,0,0,0.08); + z-index: 60; + display: flex; flex-direction: column; + transform: translateX(100%); + transition: transform 200ms ease-out; +} +.drawer:not([hidden]) { transform: translateX(0); display: flex; } +.drawer[hidden] { display: none; } +.drawer__head { + display: flex; align-items: flex-start; justify-content: space-between; + gap: 12px; padding: 20px 20px 16px 20px; + border-bottom: 1px solid var(--color-border-default); +} +.drawer__eyebrow { + font-size: 11px; font-weight: 600; letter-spacing: 0.06em; + text-transform: uppercase; color: var(--color-text-tertiary); + margin: 0 0 4px 0; +} +.drawer__title { + font-size: 18px; font-weight: 600; margin: 0; + color: var(--color-text-primary); +} +.drawer__body { padding: 20px; overflow-y: auto; flex: 1; } + +.btn-icon { + width: 32px; height: 32px; + border: 0; background: transparent; + display: inline-flex; align-items: center; justify-content: center; + color: var(--color-text-secondary); border-radius: 6px; +} +.btn-icon:hover { background: var(--color-bg-tertiary); color: var(--color-text-primary); } +.btn-icon svg { width: 16px; height: 16px; } + +.drawer-metrics { + display: grid; grid-template-columns: repeat(2, 1fr); gap: 10px; margin-bottom: 18px; +} +.drawer-metric { + padding: 10px; + background: var(--color-bg-secondary); + border: 1px solid var(--color-border-subtle); + border-radius: 8px; +} +.drawer-metric__label { + font-size: 11px; color: var(--color-text-secondary); + text-transform: uppercase; letter-spacing: 0.04em; font-weight: 500; +} +.drawer-metric__value { + font-family: var(--font-mono); font-size: 18px; font-weight: 600; + font-variant-numeric: tabular-nums; margin-top: 4px; +} + +.drawer-chart-block { margin-bottom: 18px; } +.drawer-chart-block h4 { font-size: 13px; font-weight: 600; margin: 0 0 8px 0; } +.drawer-chart { height: 160px; } + +/* ========== FOOTER ========== */ +.page-foot { + margin-top: 48px; padding-top: 
24px; + border-top: 1px solid var(--color-border-subtle); +} +.page-foot__text { font-size: 12px; color: var(--color-text-tertiary); margin: 0; } +.inline-link { color: var(--color-brand-primary); text-decoration: none; } +.inline-link:hover { text-decoration: underline; } + +/* ========== STATES ========== */ +.state-empty, .state-error { + padding: 48px 24px; + text-align: center; + background: var(--color-bg-surface); + border: 1px dashed var(--color-border-default); + border-radius: var(--radius-card); +} +.state-empty h3 { margin: 0 0 6px 0; font-size: 16px; color: var(--color-text-primary); } +.state-empty p { margin: 0; font-size: 14px; color: var(--color-text-secondary); } +.state-error { border-color: var(--color-danger); background: var(--color-dora-low-bg); } + +/* ========== RESPONSIVE ========== */ +@media (max-width: 1279px) { + .kpi-groups__grid { grid-template-columns: 1fr; } + .kpi-group__grid { grid-template-columns: repeat(4, 1fr); } + .small-multiples { grid-template-columns: repeat(3, 1fr); } + .rank-row { grid-template-columns: 24px minmax(120px, 160px) 1fr 64px 54px; gap: 8px; } +} +@media (max-width: 900px) { + .kpi-group__grid { grid-template-columns: repeat(2, 1fr); } + .small-multiples { grid-template-columns: repeat(2, 1fr); } + .page-head { grid-template-columns: 1fr; } + .drawer { width: 100vw; } +} +@media (max-width: 640px) { + .page { padding: 16px; padding-top: 24px; } + .kpi-group__grid { grid-template-columns: 1fr 1fr; } + .small-multiples { grid-template-columns: 1fr; } + .rank-row { grid-template-columns: 20px 1fr auto; grid-template-rows: auto auto; gap: 4px 8px; } + .rank-row__bar-track { grid-column: 1 / -1; } + .metric-tabs { flex-wrap: nowrap; } + .ranking-chart { max-height: 480px; } +} diff --git a/pulse/pulse-ui/tokens.css b/pulse/pulse-ui/tokens.css new file mode 100644 index 0000000..17458c1 --- /dev/null +++ b/pulse/pulse-ui/tokens.css @@ -0,0 +1,133 @@ +/* PULSE Design Tokens — mirror of 
packages/pulse-web/src/globals.css */ +:root { + /* Backgrounds */ + --color-bg-primary: #FFFFFF; + --color-bg-secondary: #F9FAFB; + --color-bg-tertiary: #F3F4F6; + --color-bg-surface: #FFFFFF; + --color-bg-elevated: #FFFFFF; + + /* Text */ + --color-text-primary: #111827; + --color-text-secondary: #6B7280; + --color-text-tertiary: #9CA3AF; + --color-text-inverse: #FFFFFF; + + /* Borders */ + --color-border-default: #E5E7EB; + --color-border-subtle: #F3F4F6; + + /* Brand */ + --color-brand-primary: #6366F1; + --color-brand-primary-hover: #4F46E5; + --color-brand-light: #EEF2FF; + + /* Status */ + --color-success: #10B981; + --color-warning: #F59E0B; + --color-danger: #EF4444; + --color-info: #3B82F6; + + /* DORA Classification */ + --color-dora-elite: #10B981; + --color-dora-high: #3B82F6; + --color-dora-medium: #F59E0B; + --color-dora-low: #EF4444; + + /* DORA backgrounds (heatmap cells) */ + --color-dora-elite-bg: #D1FAE5; + --color-dora-high-bg: #DBEAFE; + --color-dora-medium-bg: #FEF3C7; + --color-dora-low-bg: #FEE2E2; + + /* Chart palette */ + --chart-1: #6366F1; + --chart-2: #8B5CF6; + --chart-3: #EC4899; + --chart-4: #F59E0B; + --chart-5: #10B981; + --chart-6: #6B7280; + + /* Spacing */ + --space-page-padding: 1.5rem; + --space-card-padding: 1.25rem; + --space-section-gap: 1.5rem; + + /* Radius */ + --radius-card: 0.75rem; + --radius-button: 0.5rem; + --radius-badge: 9999px; + + /* Shadows */ + --shadow-card: 0 1px 3px rgba(0,0,0,0.05); + --shadow-elevated: 0 4px 12px rgba(0,0,0,0.08); + + /* Typography scale */ + --font-sans: 'Inter', system-ui, -apple-system, sans-serif; + --font-mono: 'JetBrains Mono', ui-monospace, monospace; +} + +*, *::before, *::after { box-sizing: border-box; } + +html, body { + margin: 0; + padding: 0; + background: var(--color-bg-secondary); + color: var(--color-text-primary); + font-family: var(--font-sans); + -webkit-font-smoothing: antialiased; + -moz-osx-font-smoothing: grayscale; + font-size: 14px; + line-height: 1.5; +} + 
+button { font-family: inherit; cursor: pointer; } + +/* Accessibility: focus rings */ +:focus-visible { + outline: 2px solid var(--color-brand-primary); + outline-offset: 2px; + border-radius: 4px; +} + +/* Shared skeleton shimmer */ +@keyframes pulse-skeleton { + 0%, 100% { opacity: 1; } + 50% { opacity: 0.5; } +} +.skeleton { + background: var(--color-bg-tertiary); + border-radius: 4px; + animation: pulse-skeleton 1.6s ease-in-out infinite; +} + +@media (prefers-reduced-motion: reduce) { + .skeleton { animation: none !important; } + * { transition: none !important; animation: none !important; } +} + +/* Shared DORA badges */ +.badge { + display: inline-flex; + align-items: center; + gap: 4px; + height: 20px; + padding: 0 8px; + border-radius: var(--radius-badge); + font-size: 11px; + font-weight: 500; + letter-spacing: 0.02em; + white-space: nowrap; +} +.badge--elite { background: var(--color-dora-elite-bg); color: #065F46; } +.badge--high { background: var(--color-dora-high-bg); color: #1E40AF; } +.badge--medium { background: var(--color-dora-medium-bg); color: #92400E; } +.badge--low { background: var(--color-dora-low-bg); color: #991B1B; } +.badge--neutral{ background: var(--color-bg-tertiary); color: var(--color-text-secondary); } + +/* Utility: sr-only */ +.sr-only { + position: absolute; width: 1px; height: 1px; padding: 0; + margin: -1px; overflow: hidden; clip: rect(0,0,0,0); + white-space: nowrap; border: 0; +} diff --git a/pulse/scripts/bulk_import_repos.py b/pulse/scripts/bulk_import_repos.py new file mode 100644 index 0000000..1a0e1e7 --- /dev/null +++ b/pulse/scripts/bulk_import_repos.py @@ -0,0 +1,528 @@ +#!/usr/bin/env python3 +"""PULSE — Bulk Import GitHub Repos into DevLake. + +Discovers all repositories from the GitHub org via DevLake's remote-scopes API, +filters out archived/inactive repos, and registers them as scopes in DevLake. 
+ +This does NOT trigger data collection — it only registers repos so that +the next Blueprint run (or manual trigger) will collect their data. + +Usage: + # Dry run — see what would be imported + python scripts/bulk_import_repos.py --dry-run + + # Import all active repos + python scripts/bulk_import_repos.py + + # Import only repos with activity in the last 12 months + python scripts/bulk_import_repos.py --active-months 12 + + # Import only repos matching a pattern + python scripts/bulk_import_repos.py --filter "webmotors.*.ui" + + # After import, trigger ingestion + python scripts/full_ingestion.py +""" + +from __future__ import annotations + +import argparse +import json +import logging +import re +import sys +import time +from datetime import datetime, timedelta, timezone + +import httpx + +# ────────────────────────────────────────────────────────────── +# Configuration +# ────────────────────────────────────────────────────────────── + +DEVLAKE_API = "http://localhost:8080" +CONNECTION_ID = 1 +SCOPE_CONFIG_ID = 1 # "Webmotors Default" +ORG = "webmotors-private" +BATCH_SIZE = 50 # DevLake recommends batches of ~50 scopes per PUT + +logging.basicConfig( + level=logging.INFO, + format="%(asctime)s [%(levelname)s] %(message)s", + datefmt="%H:%M:%S", +) +log = logging.getLogger("bulk-import") + + +# ────────────────────────────────────────────────────────────── +# Helpers +# ────────────────────────────────────────────────────────────── + +def ok(msg: str) -> None: + log.info(f"\033[92m ✓ {msg}\033[0m") + +def warn(msg: str) -> None: + log.warning(f"\033[93m ⚠ {msg}\033[0m") + +def fail(msg: str) -> None: + log.error(f"\033[91m ✗ {msg}\033[0m") + +def header(msg: str) -> None: + log.info(f"\033[1m\033[96m{'─' * 60}\033[0m") + log.info(f"\033[1m\033[96m {msg}\033[0m") + log.info(f"\033[1m\033[96m{'─' * 60}\033[0m") + + +# ────────────────────────────────────────────────────────────── +# Step 1: Discover all repos from GitHub org via DevLake API +# 
────────────────────────────────────────────────────────────── + +def discover_all_repos(client: httpx.Client) -> list[dict]: + """Paginate through all repos in the org via DevLake remote-scopes.""" + all_repos = [] + page_token = "" + page = 0 + + while True: + page += 1 + params: dict = {"groupId": ORG} + if page_token: + params["pageToken"] = page_token + + resp = client.get( + f"{DEVLAKE_API}/plugins/github/connections/{CONNECTION_ID}/remote-scopes", + params=params, + timeout=30, + ) + resp.raise_for_status() + data = resp.json() + children = data.get("children", []) + all_repos.extend(children) + + log.info(f" Page {page}: {len(children)} repos (total: {len(all_repos)})") + + next_token = data.get("nextPageToken", "") + if not next_token or not children: + break + page_token = next_token + + return all_repos + + +# ────────────────────────────────────────────────────────────── +# Step 2: Get already-imported scopes +# ────────────────────────────────────────────────────────────── + +def get_existing_scopes(client: httpx.Client) -> set[str]: + """Return set of fullName for repos already imported.""" + resp = client.get( + f"{DEVLAKE_API}/plugins/github/connections/{CONNECTION_ID}/scopes", + timeout=30, + ) + resp.raise_for_status() + data = resp.json() + scopes = data.get("scopes", []) + return {s["scope"]["fullName"] for s in scopes if "scope" in s} + + +# ────────────────────────────────────────────────────────────── +# Step 3: Filter repos +# ────────────────────────────────────────────────────────────── + +def filter_repos( + repos: list[dict], + existing: set[str], + *, + pattern: str | None = None, + active_months: int | None = None, + include_archived: bool = False, +) -> tuple[list[dict], dict[str, int]]: + """Filter repos and return (filtered_list, stats).""" + stats = { + "total_discovered": len(repos), + "already_imported": 0, + "archived": 0, + "pattern_excluded": 0, + "inactive": 0, + "selected": 0, + } + + filtered = [] + cutoff = None + if 
active_months: + cutoff = datetime.now(timezone.utc) - timedelta(days=active_months * 30) + + pattern_re = re.compile(pattern, re.IGNORECASE) if pattern else None + + for repo in repos: + full_name = repo.get("fullName", "") + name = repo.get("name", "") + repo_data = repo.get("data", {}) or {} + + # Skip already imported + if full_name in existing: + stats["already_imported"] += 1 + continue + + # Skip archived repos (check data.archived if available) + if not include_archived and repo_data.get("archived", False): + stats["archived"] += 1 + continue + + # Pattern filter + if pattern_re and not pattern_re.search(name) and not pattern_re.search(full_name): + stats["pattern_excluded"] += 1 + continue + + # Activity filter — check updatedDate from data + if cutoff: + updated = repo_data.get("updatedDate") or repo_data.get("updated_at") + if updated and updated != "0001-01-01T00:00:00Z": + try: + updated_dt = datetime.fromisoformat(updated.replace("Z", "+00:00")) + if updated_dt < cutoff: + stats["inactive"] += 1 + continue + except (ValueError, TypeError): + pass # Can't parse, include it + + filtered.append(repo) + stats["selected"] += 1 + + return filtered, stats + + +# ────────────────────────────────────────────────────────────── +# Step 4: Register repos as scopes in DevLake (batch PUT) +# ────────────────────────────────────────────────────────────── + +def register_scopes( + client: httpx.Client, + repos: list[dict], + dry_run: bool = False, +) -> int: + """Register repos as scopes in DevLake via PUT. + + DevLake's PUT /plugins/github/connections/:id/scopes + accepts a list of scope objects. We send in batches. 
+ """ + total_registered = 0 + + for batch_start in range(0, len(repos), BATCH_SIZE): + batch = repos[batch_start : batch_start + BATCH_SIZE] + batch_num = (batch_start // BATCH_SIZE) + 1 + total_batches = (len(repos) + BATCH_SIZE - 1) // BATCH_SIZE + + # Build scope objects for DevLake + scope_objects = [] + for repo in batch: + scope_obj = { + "connectionId": CONNECTION_ID, + "githubId": int(repo["id"]), + "name": repo["name"], + "fullName": repo["fullName"], + "scopeConfigId": SCOPE_CONFIG_ID, + } + scope_objects.append(scope_obj) + + if dry_run: + log.info( + f" [DRY RUN] Batch {batch_num}/{total_batches}: " + f"would register {len(scope_objects)} repos" + ) + for s in scope_objects[:3]: + log.info(f" → {s['fullName']}") + if len(scope_objects) > 3: + log.info(f" ... and {len(scope_objects) - 3} more") + total_registered += len(scope_objects) + continue + + log.info( + f" Batch {batch_num}/{total_batches}: " + f"registering {len(scope_objects)} repos..." + ) + + try: + resp = client.put( + f"{DEVLAKE_API}/plugins/github/connections/{CONNECTION_ID}/scopes", + json={"data": scope_objects}, + timeout=60, + ) + if resp.status_code in (200, 201): + total_registered += len(scope_objects) + ok(f"Batch {batch_num} registered ({total_registered} total)") + else: + fail( + f"Batch {batch_num} failed: HTTP {resp.status_code} — " + f"{resp.text[:200]}" + ) + # Continue with next batch instead of failing completely + except httpx.HTTPError as e: + fail(f"Batch {batch_num} HTTP error: {e}") + + # Small delay between batches to be gentle on DevLake + if batch_start + BATCH_SIZE < len(repos): + time.sleep(1) + + return total_registered + + +# ────────────────────────────────────────────────────────────── +# Step 5: Update Blueprint to include all scopes +# ────────────────────────────────────────────────────────────── + +def update_blueprint_connections(client: httpx.Client, blueprint_id: int, dry_run: bool = False) -> bool: + """Ensure the blueprint's GitHub connection 
includes all registered scopes. + + DevLake blueprints reference scopes by their scope IDs. We need to + update the blueprint to include all the new scopes we just registered. + """ + # Get current blueprint + resp = client.get(f"{DEVLAKE_API}/blueprints/{blueprint_id}", timeout=30) + if resp.status_code != 200: + fail(f"Could not fetch blueprint {blueprint_id}: {resp.status_code}") + return False + + blueprint = resp.json() + log.info(f" Blueprint #{blueprint_id}: {blueprint.get('name', '?')}") + + # Get all currently registered scopes (paginate if needed) + all_scopes = [] + page = 1 + while True: + scopes_resp = client.get( + f"{DEVLAKE_API}/plugins/github/connections/{CONNECTION_ID}/scopes", + params={"page": page, "pageSize": 100}, + timeout=30, + ) + if scopes_resp.status_code != 200: + # Fallback: try without pagination params + scopes_resp = client.get( + f"{DEVLAKE_API}/plugins/github/connections/{CONNECTION_ID}/scopes", + timeout=30, + ) + scopes_resp.raise_for_status() + all_scopes = scopes_resp.json().get("scopes", []) + break + batch = scopes_resp.json().get("scopes", []) + if not batch: + break + all_scopes.extend(batch) + if len(batch) < 100: + break + page += 1 + + all_scope_ids = [str(s["scope"]["githubId"]) for s in all_scopes] + + log.info(f" Total registered scopes: {len(all_scope_ids)}") + + # Build updated connections config + # Blueprint settings format depends on DevLake version + settings = blueprint.get("settings", {}) + connections = settings.get("connections", []) + + github_conn = None + for conn in connections: + if conn.get("pluginName") == "github" and conn.get("connectionId") == CONNECTION_ID: + github_conn = conn + break + + if not github_conn: + warn(f"No GitHub connection found in blueprint {blueprint_id} — skipping") + return False + + current_scopes = github_conn.get("scopes", []) + current_scope_ids = {s.get("scopeId") for s in current_scopes} + + log.info(f" Current blueprint scopes: {len(current_scope_ids)}") + + # Build new 
scopes list — keep existing + add new + new_scope_entries = list(current_scopes) # Keep existing + added = 0 + for scope_id in all_scope_ids: + if scope_id not in current_scope_ids: + new_scope_entries.append({ + "scopeId": scope_id, + "entities": ["CODE", "CODE_REVIEW", "CROSS"], + }) + added += 1 + + if added == 0: + ok("Blueprint already has all scopes — no update needed") + return True + + log.info(f" Adding {added} new scopes to blueprint") + + if dry_run: + warn(f"DRY RUN — would update blueprint {blueprint_id} with {len(new_scope_entries)} total scopes") + return True + + # Update the blueprint + github_conn["scopes"] = new_scope_entries + + patch_resp = client.patch( + f"{DEVLAKE_API}/blueprints/{blueprint_id}", + json={ + "settings": settings, + }, + timeout=60, + ) + + if patch_resp.status_code == 200: + ok(f"Blueprint {blueprint_id} updated with {len(new_scope_entries)} scopes") + return True + else: + fail(f"Blueprint update failed: {patch_resp.status_code} — {patch_resp.text[:200]}") + return False + + +# ────────────────────────────────────────────────────────────── +# Main +# ────────────────────────────────────────────────────────────── + +def main(): + parser = argparse.ArgumentParser( + description="Bulk import GitHub repos into DevLake", + ) + parser.add_argument( + "--dry-run", + action="store_true", + help="Show what would be imported without making changes", + ) + parser.add_argument( + "--filter", + type=str, + default=None, + help="Regex pattern to filter repos by name (e.g. 
'webmotors\\..*\\.ui')", + ) + parser.add_argument( + "--active-months", + type=int, + default=None, + help="Only import repos with activity in the last N months", + ) + parser.add_argument( + "--include-archived", + action="store_true", + help="Include archived repositories", + ) + parser.add_argument( + "--blueprint-id", + type=int, + default=1, + help="Blueprint ID to update with new scopes (default: 1)", + ) + parser.add_argument( + "--skip-blueprint", + action="store_true", + help="Don't update the blueprint after importing scopes", + ) + args = parser.parse_args() + + start = time.time() + + header("PULSE — Bulk GitHub Repo Import") + log.info(f" DevLake API: {DEVLAKE_API}") + log.info(f" Connection: #{CONNECTION_ID} (GitHub)") + log.info(f" Org: {ORG}") + log.info(f" Scope Config: #{SCOPE_CONFIG_ID} (Webmotors Default)") + log.info(f" Dry run: {args.dry_run}") + if args.filter: + log.info(f" Filter pattern: {args.filter}") + if args.active_months: + log.info(f" Active months: {args.active_months}") + log.info("") + + client = httpx.Client(timeout=30) + + # ── Step 1: Health check ── + header("Step 1/5 — Health Check") + try: + resp = client.get(f"{DEVLAKE_API}/ping", timeout=10) + resp.raise_for_status() + ok("DevLake API is healthy") + except Exception as e: + fail(f"DevLake API unreachable: {e}") + sys.exit(1) + + # ── Step 2: Discover all repos ── + header("Step 2/5 — Discover Repos from GitHub Org") + all_repos = discover_all_repos(client) + ok(f"Discovered {len(all_repos)} repos in {ORG}") + + # ── Step 3: Get existing + filter ── + header("Step 3/5 — Filter Repos") + existing = get_existing_scopes(client) + log.info(f" Already imported: {len(existing)} repos") + + filtered, stats = filter_repos( + all_repos, + existing, + pattern=args.filter, + active_months=args.active_months, + include_archived=args.include_archived, + ) + + log.info("") + log.info(" Filter Results:") + log.info(f" Total discovered: {stats['total_discovered']:>6}") + log.info(f" 
Already imported: {stats['already_imported']:>6}") + log.info(f" Archived (skip): {stats['archived']:>6}") + if args.filter: + log.info(f" Pattern excluded: {stats['pattern_excluded']:>6}") + if args.active_months: + log.info(f" Inactive (skip): {stats['inactive']:>6}") + log.info(f" ─────────────────────────") + log.info(f" To import: {stats['selected']:>6}") + + if not filtered: + ok("No new repos to import — all repos already registered") + return + + # Show sample of repos to import + log.info("") + log.info(" Sample repos to import:") + for repo in filtered[:10]: + log.info(f" → {repo['fullName']}") + if len(filtered) > 10: + log.info(f" ... and {len(filtered) - 10} more") + + # ── Step 4: Register scopes ── + header("Step 4/5 — Register Scopes in DevLake") + registered = register_scopes(client, filtered, dry_run=args.dry_run) + + if registered > 0: + ok(f"Registered {registered} new repos as DevLake scopes") + else: + warn("No repos were registered") + + # ── Step 5: Update Blueprint ── + if not args.skip_blueprint: + header("Step 5/5 — Update Blueprint") + update_blueprint_connections(client, args.blueprint_id, dry_run=args.dry_run) + else: + log.info(" Skipping blueprint update (--skip-blueprint)") + + # ── Summary ── + elapsed = int(time.time() - start) + header(f"Import Complete ({elapsed}s)") + log.info(f" Repos discovered: {len(all_repos)}") + log.info(f" Previously imported: {len(existing)}") + log.info(f" Newly registered: {registered}") + log.info(f" Total scopes: {len(existing) + registered}") + log.info("") + if not args.dry_run: + log.info(" Next steps:") + log.info(" 1. Trigger DevLake collection:") + log.info(f" python scripts/full_ingestion.py") + log.info(" 2. 
Or wait for the next scheduled Blueprint run") + log.info(f" Blueprint #{args.blueprint_id} runs every 15 min") + else: + log.info(" This was a DRY RUN — no changes were made.") + log.info(" Remove --dry-run to actually import.") + + client.close() + + +if __name__ == "__main__": + main() diff --git a/pulse/scripts/doctor.sh b/pulse/scripts/doctor.sh new file mode 100755 index 0000000..a7ad71c --- /dev/null +++ b/pulse/scripts/doctor.sh @@ -0,0 +1,270 @@ +#!/usr/bin/env bash +# +# PULSE — dev environment doctor +# --------------------------------------------------------------------------- +# Runs BEFORE docker comes up. Validates the host machine has everything +# needed to bring PULSE online (tools, versions, free ports, disk, memory). +# +# Output: pretty table with ✓ / ✗ / ! markers per check. +# Exit codes: +# 0 all checks pass +# 1 at least one hard-fail — blocks `make onboard` +# 2 only warnings — `make onboard` can proceed, user should address later +# +# Philosophy: every failure prints an actionable fix, never just the symptom. +# Designed for macOS + Linux. WSL2 works; native Windows does not (prints +# a warning suggesting WSL2). 
+# --------------------------------------------------------------------------- + +set -uo pipefail + +# ---------------------------------------------------------------- colors +if [ -t 1 ]; then + RED=$'\033[31m'; GRN=$'\033[32m'; YEL=$'\033[33m' + CYN=$'\033[36m'; DIM=$'\033[2m'; BLD=$'\033[1m'; RST=$'\033[0m' +else + RED=""; GRN=""; YEL=""; CYN=""; DIM=""; BLD=""; RST="" +fi + +# ---------------------------------------------------------------- state +HARD_FAILS=0 +WARNINGS=0 + +pass() { printf " ${GRN}✓${RST} %-22s ${DIM}%s${RST}\n" "$1" "${2:-}"; } +fail() { printf " ${RED}✗${RST} %-22s ${RED}%s${RST}\n" "$1" "$2" + [ $# -ge 3 ] && printf " ${DIM}fix: %s${RST}\n" "$3" + HARD_FAILS=$((HARD_FAILS + 1)); } +warn() { printf " ${YEL}!${RST} %-22s ${YEL}%s${RST}\n" "$1" "$2" + [ $# -ge 3 ] && printf " ${DIM}note: %s${RST}\n" "$3" + WARNINGS=$((WARNINGS + 1)); } +section() { printf "\n${BLD}${CYN}%s${RST}\n" "$1"; } + +# ---------------------------------------------------------------- helpers +semver_ge() { + # returns 0 (true) when $1 >= $2 (major.minor comparison) + local a b + a=$(printf '%s' "$1" | awk -F. '{printf "%d%03d", $1, $2}') + b=$(printf '%s' "$2" | awk -F. '{printf "%d%03d", $1, $2}') + [ "$a" -ge "$b" ] +} + +port_in_use() { + # Returns 0 if port is in use, 1 if free. Works on macOS + Linux. + local port=$1 + if command -v lsof >/dev/null 2>&1; then + lsof -i ":${port}" -sTCP:LISTEN -nP >/dev/null 2>&1 + elif command -v ss >/dev/null 2>&1; then + ss -ltn "sport = :${port}" 2>/dev/null | grep -q ":${port}" + else + # Last resort: try to connect — succeeds only when something is listening + 
(echo > "/dev/tcp/127.0.0.1/${port}") 2>/dev/null + fi +} + +port_owner() { + local port=$1 + if command -v lsof >/dev/null 2>&1; then + lsof -i ":${port}" -sTCP:LISTEN -nP 2>/dev/null | awk 'NR==2 {print $1 " (PID " $2 ")"; exit}' + fi +} + +# ---------------------------------------------------------------- header +printf "${BLD}🔍 PULSE doctor — host environment check${RST}\n" +printf "${DIM}(run before ${BLD}make onboard${RST}${DIM} on a fresh clone)${RST}\n" + +# ---------------------------------------------------------------- platform +section "Platform" + +UNAME_S=$(uname -s) +case "$UNAME_S" in + Darwin) + pass "Platform" "macOS ($(uname -m))" + ;; + Linux) + if grep -qi microsoft /proc/version 2>/dev/null; then + pass "Platform" "WSL2 ($(uname -m))" + else + pass "Platform" "Linux ($(uname -m))" + fi + ;; + *) + warn "Platform" "$UNAME_S" "Native Windows is not supported — use WSL2." + ;; +esac + +# ---------------------------------------------------------------- tools +section "Required tools" + +# Bash +if [ -n "${BASH_VERSION:-}" ]; then + pass "Bash" "$BASH_VERSION" +else + warn "Bash" "not detected" "doctor.sh runs best under bash; zsh/sh may skip some checks" +fi + +# Docker +if ! command -v docker >/dev/null 2>&1; then + fail "Docker" "not installed" "install from https://docs.docker.com/get-docker/" +elif ! 
docker info >/dev/null 2>&1; then + fail "Docker" "daemon not running" "start Docker Desktop (or systemctl start docker)" +else + DOCKER_VER=$(docker version --format '{{.Client.Version}}' 2>/dev/null || echo unknown) + if semver_ge "$DOCKER_VER" "24.0"; then + pass "Docker" "$DOCKER_VER" + else + warn "Docker" "$DOCKER_VER (want ≥24.0)" "older Docker may hit compose compat issues" + fi +fi + +# Docker Compose (v2 plugin) +if docker compose version >/dev/null 2>&1; then + CMP_VER=$(docker compose version --short 2>/dev/null || echo unknown) + pass "Docker Compose" "v$CMP_VER" +else + fail "Docker Compose" "v2 plugin missing" "docker CLI 20.10+ ships it, or: https://docs.docker.com/compose/install/" +fi + +# Node.js +if ! command -v node >/dev/null 2>&1; then + fail "Node.js" "not installed" "install via nvm: https://github.com/nvm-sh/nvm (then: nvm install 20)" +else + NODE_VER=$(node --version 2>/dev/null | sed 's/^v//') + if semver_ge "$NODE_VER" "20.0"; then + pass "Node.js" "$NODE_VER" + else + fail "Node.js" "$NODE_VER (want ≥20)" "nvm install 20 && nvm use 20" + fi +fi + +# npm +if command -v npm >/dev/null 2>&1; then + pass "npm" "$(npm --version)" +else + fail "npm" "not installed" "bundled with Node.js — reinstall Node" +fi + +# Python — host only needs python3 for JSON parsing in verify-dev.sh. +# The real 3.12 runtime lives inside the pulse-data container. A warning +# when host is <3.12 just informs the user that running pytest OUTSIDE +# the container (`cd packages/pulse-data && pytest`) won't work. +if ! command -v python3 >/dev/null 2>&1; then + fail "Python 3" "not installed" "install Python 3.9+ (host needs it for json parsing). macOS ships 3.9+ by default" +else + PY_VER=$(python3 --version 2>&1 | awk '{print $2}') + if semver_ge "$PY_VER" "3.12"; then + pass "Python 3" "$PY_VER" + elif semver_ge "$PY_VER" "3.9"; then + warn "Python 3" "$PY_VER (container uses 3.12)" "host Python is only for JSON parsing; container has its own 3.12. 
To run pytest on host: pyenv install 3.12" + else + fail "Python 3" "$PY_VER (want ≥3.9)" "upgrade Python on host — needed for basic json tooling" + fi +fi + +# Git +if command -v git >/dev/null 2>&1; then + pass "Git" "$(git --version | awk '{print $3}')" +else + fail "Git" "not installed" "install git (required for pre-commit hooks)" +fi + +# ---------------------------------------------------------------- optional tools +section "Optional tools" + +if command -v gitleaks >/dev/null 2>&1; then + pass "Gitleaks" "$(gitleaks version 2>/dev/null)" +else + warn "Gitleaks" "not installed" "pre-commit hook will skip secret scan. Install: brew install gitleaks" +fi + +if command -v doppler >/dev/null 2>&1; then + pass "Doppler CLI" "$(doppler --version 2>/dev/null | head -1)" +else + warn "Doppler CLI" "not installed" "needed ONLY for optional real-ingestion overlay. Install: brew install dopplerhq/cli/doppler" +fi + +if command -v gh >/dev/null 2>&1; then + pass "GitHub CLI" "$(gh --version 2>/dev/null | head -1 | awk '{print $3}')" +else + warn "GitHub CLI" "not installed" "nice-to-have for PR workflows. Install: brew install gh" +fi + +# ---------------------------------------------------------------- ports +section "Ports (must be free)" + +# PULSE default ports. If user customized these in .env, doctor will still +# check the defaults — that's fine, it's the onboard-from-clean path. +declare -a PORTS=( + "3000:pulse-api" + "5173:pulse-web (Vite)" + "5432:postgres" + "6379:redis" + "8000:pulse-data" + "9092:kafka" +) + +# If docker-compose stack is already up, the ports will be "in use" by +# Docker itself — that's OK, not a conflict. Detect by checking if +# pulse-* containers are running. 
+STACK_UP=0
+if command -v docker >/dev/null 2>&1 && docker info >/dev/null 2>&1; then
+    if docker compose -f docker-compose.yml ps --status running --format '{{.Service}}' 2>/dev/null | grep -q .; then
+        STACK_UP=1
+    fi
+fi
+
+for entry in "${PORTS[@]}"; do
+    port="${entry%%:*}"
+    label="${entry#*:}"
+    if port_in_use "$port"; then
+        owner=$(port_owner "$port")
+        owner=${owner:-unknown}  # port_owner prints nothing when lsof is unavailable
+        # If stack is already up AND the occupier looks like Docker, this is
+        # expected — the ports ARE used, by PULSE itself.
+        # ('docke' also matches macOS's com.docker.* helper processes)
+        if [ "$STACK_UP" = "1" ] && printf '%s' "$owner" | grep -qi 'docke'; then
+            pass "Port $port" "$label — bound by running PULSE stack (ok)"
+        else
+            fail "Port $port ($label)" "in use by $owner" "stop the conflicting service, or change the port in pulse/.env"
+        fi
+    else
+        pass "Port $port" "$label — free"
+    fi
+done
+
+# ---------------------------------------------------------------- disk + memory
+section "Resources"
+
+# Disk
+# df works differently on macOS vs Linux; parse available GB either way.
+AVAIL_GB=$(df -Pk . 
| awk 'NR==2 {printf "%d", $4 / 1024 / 1024}') +if [ "$AVAIL_GB" -ge 15 ]; then + pass "Disk space" "${AVAIL_GB} GB available" +elif [ "$AVAIL_GB" -ge 5 ]; then + warn "Disk space" "${AVAIL_GB} GB available" "tight — docker images + db may grow to ~10 GB" +else + fail "Disk space" "${AVAIL_GB} GB available" "free ≥ 15 GB on this partition before continuing" +fi + +# Docker memory allocation (best-effort) +if command -v docker >/dev/null 2>&1 && docker info >/dev/null 2>&1; then + DOCKER_MEM_BYTES=$(docker info --format '{{.MemTotal}}' 2>/dev/null || echo 0) + if [ "$DOCKER_MEM_BYTES" -gt 0 ]; then + DOCKER_MEM_GB=$((DOCKER_MEM_BYTES / 1024 / 1024 / 1024)) + if [ "$DOCKER_MEM_GB" -ge 4 ]; then + pass "Docker memory" "${DOCKER_MEM_GB} GB allocated" + else + warn "Docker memory" "${DOCKER_MEM_GB} GB allocated" "bump Docker Desktop → Settings → Resources → Memory to ≥ 4 GB" + fi + fi +fi + +# ---------------------------------------------------------------- summary +printf "\n" +if [ "$HARD_FAILS" -gt 0 ]; then + printf "${RED}${BLD}✖ %d hard fail(s)${RST} ${DIM}+ %d warning(s). Fix and re-run ${BLD}make doctor${RST}${DIM}.${RST}\n" "$HARD_FAILS" "$WARNINGS" + exit 1 +elif [ "$WARNINGS" -gt 0 ]; then + printf "${YEL}${BLD}⚠ %d warning(s)${RST} ${DIM}— onboard can proceed, address later.${RST}\n" "$WARNINGS" + exit 2 +else + printf "${GRN}${BLD}✓ All checks passed.${RST} ${DIM}Ready for ${BLD}make onboard${RST}${DIM}.${RST}\n" + exit 0 +fi diff --git a/pulse/scripts/full_ingestion.py b/pulse/scripts/full_ingestion.py new file mode 100644 index 0000000..5df1814 --- /dev/null +++ b/pulse/scripts/full_ingestion.py @@ -0,0 +1,721 @@ +#!/usr/bin/env python3 +"""PULSE Full Ingestion Script. + +Orchestrates a complete data ingestion from all configured sources +(GitHub, Jira, Jenkins) through DevLake into the PULSE database. + +Key features: +- Resumable: DevLake pipelines checkpoint internally; PULSE watermarks + are stored in PostgreSQL. Safe to stop and restart. 
+- Idempotent: ON CONFLICT upserts guarantee no duplicates. +- Observable: Logs progress, record counts, and errors in real time. + +Usage: + # From pulse/ directory: + python scripts/full_ingestion.py + + # Or with options: + python scripts/full_ingestion.py --skip-devlake # Only sync PULSE (DevLake already has data) + python scripts/full_ingestion.py --reset-watermarks # Force full re-sync from DevLake to PULSE + python scripts/full_ingestion.py --blueprint-id 1 # Trigger specific blueprint only + python scripts/full_ingestion.py --dry-run # Show what would happen +""" + +from __future__ import annotations + +import argparse +import asyncio +import json +import logging +import os +import sys +import time +from datetime import datetime, timezone +from pathlib import Path +from typing import Any + +import httpx +import asyncpg + +# ── Logging ────────────────────────────────────────────────────────────────── + +logging.basicConfig( + level=logging.INFO, + format="%(asctime)s [%(levelname)s] %(message)s", + datefmt="%H:%M:%S", +) +log = logging.getLogger("full_ingestion") + +# ── Configuration ──────────────────────────────────────────────────────────── + +# DevLake API — the Gin server runs on 8080 inside the container, +# mapped to 8080 externally. The basePath is "/" (not "/api/"). 
+DEVLAKE_API = os.environ.get("DEVLAKE_API_URL", "http://localhost:8080") + +# DevLake PostgreSQL (read-only) +DEVLAKE_DB = os.environ.get( + "DEVLAKE_DB_URL", + "postgresql://devlake:devlake_dev@localhost:5433/lake", +) + +# PULSE PostgreSQL +PULSE_DB = os.environ.get( + "DATABASE_URL", + "postgresql://pulse:pulse_dev@localhost:5432/pulse", +) + +TENANT_ID = os.environ.get( + "DEFAULT_TENANT_ID", + "00000000-0000-0000-0000-000000000001", +) + +# Poll interval for DevLake pipeline status (seconds) +POLL_INTERVAL = 30 + +# Maximum retries for a failed DevLake pipeline +MAX_RETRIES = 3 + +# ── ANSI Colors ────────────────────────────────────────────────────────────── + +class C: + BOLD = "\033[1m" + GREEN = "\033[92m" + YELLOW = "\033[93m" + RED = "\033[91m" + CYAN = "\033[96m" + DIM = "\033[2m" + RESET = "\033[0m" + + +def banner(msg: str) -> None: + log.info(f"{C.BOLD}{C.CYAN}{'─' * 60}{C.RESET}") + log.info(f"{C.BOLD}{C.CYAN} {msg}{C.RESET}") + log.info(f"{C.BOLD}{C.CYAN}{'─' * 60}{C.RESET}") + + +def ok(msg: str) -> None: + log.info(f"{C.GREEN} ✓ {msg}{C.RESET}") + + +def warn(msg: str) -> None: + log.warning(f"{C.YELLOW} ⚠ {msg}{C.RESET}") + + +def fail(msg: str) -> None: + log.error(f"{C.RED} ✗ {msg}{C.RESET}") + + +def info(msg: str) -> None: + log.info(f" {msg}") + + +# ═══════════════════════════════════════════════════════════════════════════ +# STEP 1 — Health checks +# ═══════════════════════════════════════════════════════════════════════════ + + +async def check_devlake_health(client: httpx.AsyncClient) -> bool: + """Verify DevLake API is reachable and responding.""" + try: + r = await client.get(f"{DEVLAKE_API}/ping", timeout=10) + if r.status_code == 200: + ok("DevLake API is healthy") + return True + # Try alternate path + r = await client.get(f"{DEVLAKE_API}/health", timeout=10) + if r.status_code == 200: + ok("DevLake API is healthy") + return True + except Exception as e: + fail(f"DevLake API unreachable: {e}") + return False + + +async def 
check_devlake_db() -> bool: + """Verify DevLake PostgreSQL is reachable.""" + try: + conn = await asyncpg.connect(DEVLAKE_DB) + result = await conn.fetchval("SELECT COUNT(*) FROM pull_requests") + await conn.close() + ok(f"DevLake DB is healthy — {result:,} pull_requests") + return True + except Exception as e: + fail(f"DevLake DB unreachable: {e}") + return False + + +async def check_pulse_db() -> bool: + """Verify PULSE PostgreSQL is reachable.""" + try: + conn = await asyncpg.connect(PULSE_DB) + # Test with RLS context + await conn.execute(f"SET app.current_tenant = '{TENANT_ID}'") + result = await conn.fetchval( + "SELECT COUNT(*) FROM eng_pull_requests WHERE tenant_id = $1::uuid", + TENANT_ID, + ) + await conn.close() + ok(f"PULSE DB is healthy — {result:,} eng_pull_requests") + return True + except Exception as e: + fail(f"PULSE DB unreachable: {e}") + return False + + +# ═══════════════════════════════════════════════════════════════════════════ +# STEP 2 — Inventory: list what DevLake has configured +# ═══════════════════════════════════════════════════════════════════════════ + + +async def get_inventory(client: httpx.AsyncClient) -> dict[str, Any]: + """Fetch all connections, scopes, and blueprints from DevLake.""" + inventory: dict[str, Any] = {"connections": {}, "blueprints": []} + + for plugin, conn_id in [("github", 1), ("jira", 2), ("jenkins", 1)]: + try: + r = await client.get(f"{DEVLAKE_API}/plugins/{plugin}/connections/{conn_id}/scopes") + if r.status_code == 200: + data = r.json() + scopes = data.get("scopes", data) if isinstance(data, dict) else data + inventory["connections"][plugin] = { + "connectionId": conn_id, + "scopeCount": len(scopes) if isinstance(scopes, list) else data.get("count", 0), + "scopes": scopes if isinstance(scopes, list) else [], + } + ok(f"{plugin}: {inventory['connections'][plugin]['scopeCount']} scopes configured") + else: + warn(f"{plugin}: connection {conn_id} returned HTTP {r.status_code}") + except Exception as e: + 
warn(f"{plugin}: could not fetch scopes — {e}") + + try: + r = await client.get(f"{DEVLAKE_API}/blueprints") + if r.status_code == 200: + data = r.json() + bps = data.get("blueprints", data) if isinstance(data, dict) else data + inventory["blueprints"] = bps if isinstance(bps, list) else [] + for bp in inventory["blueprints"]: + status = "enabled" if bp.get("enable") else "disabled" + ok(f"Blueprint #{bp['id']}: {bp['name']} ({status}, cron: {bp.get('cronConfig', 'manual')})") + except Exception as e: + warn(f"Could not fetch blueprints: {e}") + + return inventory + + +# ═══════════════════════════════════════════════════════════════════════════ +# STEP 3 — Trigger DevLake pipelines and monitor progress +# ═══════════════════════════════════════════════════════════════════════════ + + +async def check_running_pipelines(client: httpx.AsyncClient) -> list[dict]: + """Check if there are any currently running DevLake pipelines.""" + try: + r = await client.get(f"{DEVLAKE_API}/pipelines", params={"pageSize": 5, "page": 1}) + if r.status_code == 200: + data = r.json() + pipelines = data.get("pipelines", []) + running = [p for p in pipelines if p.get("status") == "TASK_RUNNING"] + return running + except Exception: + pass + return [] + + +async def trigger_blueprint( + client: httpx.AsyncClient, + blueprint_id: int, + blueprint_name: str, +) -> int | None: + """Trigger a DevLake blueprint and return the pipeline ID.""" + try: + r = await client.post( + f"{DEVLAKE_API}/blueprints/{blueprint_id}/trigger", + timeout=30, + ) + if r.status_code in (200, 201): + data = r.json() + pipeline_id = data.get("id") + ok(f"Triggered blueprint '{blueprint_name}' → pipeline #{pipeline_id}") + return pipeline_id + else: + fail(f"Failed to trigger blueprint #{blueprint_id}: HTTP {r.status_code} — {r.text[:200]}") + except Exception as e: + fail(f"Error triggering blueprint #{blueprint_id}: {e}") + return None + + +async def wait_for_pipeline( + client: httpx.AsyncClient, + pipeline_id: 
int, + blueprint_name: str, +) -> str: + """Poll DevLake pipeline status until it completes or fails. + + Returns the final status: TASK_COMPLETED, TASK_PARTIAL, TASK_FAILED, TASK_CANCELLED. + """ + start_time = time.monotonic() + last_log_time = 0.0 + spinner = ["⠋", "⠙", "⠹", "⠸", "⠼", "⠴", "⠦", "⠧", "⠇", "⠏"] + spin_idx = 0 + + while True: + try: + r = await client.get(f"{DEVLAKE_API}/pipelines/{pipeline_id}") + if r.status_code == 200: + data = r.json() + status = data.get("status", "UNKNOWN") + elapsed = time.monotonic() - start_time + elapsed_str = _format_duration(elapsed) + + if status in ("TASK_COMPLETED", "TASK_PARTIAL", "TASK_FAILED", "TASK_CANCELLED"): + icon = "✓" if status == "TASK_COMPLETED" else "⚠" if status == "TASK_PARTIAL" else "✗" + color = C.GREEN if status == "TASK_COMPLETED" else C.YELLOW if status == "TASK_PARTIAL" else C.RED + log.info(f"{color} {icon} Pipeline #{pipeline_id} ({blueprint_name}): {status} in {elapsed_str}{C.RESET}") + return status + + # Log progress every 60s + if elapsed - last_log_time >= 60: + # Try to get task details + tasks_info = "" + try: + tr = await client.get(f"{DEVLAKE_API}/pipelines/{pipeline_id}/tasks") + if tr.status_code == 200: + tasks = tr.json() + if isinstance(tasks, list): + active = [t for t in tasks if t.get("status") == "TASK_RUNNING"] + if active: + subtask = active[0].get("subtaskName", "") + plugin = active[0].get("plugin", "") + tasks_info = f" [{plugin}: {subtask}]" + except Exception: + pass + + s = spinner[spin_idx % len(spinner)] + spin_idx += 1 + info(f"{s} Pipeline #{pipeline_id}: {status} — {elapsed_str} elapsed{tasks_info}") + last_log_time = elapsed + + except Exception as e: + warn(f"Error polling pipeline #{pipeline_id}: {e}") + + await asyncio.sleep(POLL_INTERVAL) + + +async def run_devlake_ingestion( + client: httpx.AsyncClient, + blueprints: list[dict], + specific_blueprint_id: int | None = None, +) -> dict[int, str]: + """Run DevLake blueprints and wait for completion. 
+ + Returns a dict of {blueprint_id: final_status}. + """ + results: dict[int, str] = {} + + # Check for already running pipelines + running = await check_running_pipelines(client) + if running: + warn(f"{len(running)} pipeline(s) already running — waiting for completion first") + for p in running: + status = await wait_for_pipeline(client, p["id"], f"existing-#{p['id']}") + info(f"Existing pipeline #{p['id']} finished: {status}") + + # Filter blueprints + targets = blueprints + if specific_blueprint_id: + targets = [bp for bp in blueprints if bp["id"] == specific_blueprint_id] + if not targets: + fail(f"Blueprint #{specific_blueprint_id} not found") + return results + + # Trigger each blueprint sequentially (DevLake processes one at a time) + for bp in targets: + bp_id = bp["id"] + bp_name = bp["name"] + banner(f"DevLake: Triggering '{bp_name}' (#{bp_id})") + + retries = 0 + while retries < MAX_RETRIES: + pipeline_id = await trigger_blueprint(client, bp_id, bp_name) + if not pipeline_id: + fail(f"Could not trigger blueprint '{bp_name}' — skipping") + results[bp_id] = "TRIGGER_FAILED" + break + + status = await wait_for_pipeline(client, pipeline_id, bp_name) + results[bp_id] = status + + if status in ("TASK_COMPLETED", "TASK_PARTIAL"): + break + elif status == "TASK_FAILED" and retries < MAX_RETRIES - 1: + retries += 1 + warn(f"Pipeline failed — retrying ({retries}/{MAX_RETRIES})...") + await asyncio.sleep(10) + else: + break + + return results + + +# ═══════════════════════════════════════════════════════════════════════════ +# STEP 4 — Record counts in DevLake DB +# ═══════════════════════════════════════════════════════════════════════════ + + +async def get_devlake_counts() -> dict[str, int]: + """Get record counts from DevLake domain tables.""" + counts: dict[str, int] = {} + tables = { + "pull_requests": "pull_requests", + "issues": "issues", + "deployments": "cicd_deployment_commits", + "sprints": "sprints", + "issue_changelogs": "issue_changelogs", + } + 
try: + conn = await asyncpg.connect(DEVLAKE_DB) + for name, table in tables.items(): + try: + result = await conn.fetchval(f"SELECT COUNT(*) FROM {table}") + counts[name] = result or 0 + except Exception: + counts[name] = 0 + await conn.close() + except Exception as e: + warn(f"Could not query DevLake DB: {e}") + return counts + + +# ═══════════════════════════════════════════════════════════════════════════ +# STEP 5 — Reset PULSE watermarks (optional) +# ═══════════════════════════════════════════════════════════════════════════ + + +async def reset_pulse_watermarks() -> None: + """Delete all watermarks to force a full re-sync from DevLake to PULSE.""" + try: + conn = await asyncpg.connect(PULSE_DB) + deleted = await conn.execute( + "DELETE FROM pipeline_watermarks WHERE tenant_id = $1::uuid", + TENANT_ID, + ) + await conn.close() + ok(f"Watermarks reset: {deleted}") + except Exception as e: + warn(f"Could not reset watermarks: {e}") + + +# ═══════════════════════════════════════════════════════════════════════════ +# STEP 6 — Trigger sync worker (DevLake → PULSE DB → Kafka) +# ═══════════════════════════════════════════════════════════════════════════ + + +async def trigger_sync_worker() -> bool: + """Trigger the PULSE sync worker via direct import. + + The sync worker reads from DevLake DB, normalizes, upserts to PULSE DB, + and publishes to Kafka topics. 
+    """
+    info("Starting PULSE sync worker cycle...")
+
+    try:
+        # Add project root to path
+        project_root = Path(__file__).resolve().parent.parent / "packages" / "pulse-data"
+        sys.path.insert(0, str(project_root))
+
+        # Set env vars for the worker. DATABASE_URL is assigned unconditionally:
+        # the worker needs the asyncpg driver URL, and setdefault would silently
+        # keep a plain postgresql:// URL already exported in the environment.
+        os.environ["DATABASE_URL"] = PULSE_DB.replace("postgresql://", "postgresql+asyncpg://")
+        os.environ.setdefault("DEVLAKE_DB_URL", DEVLAKE_DB)
+        os.environ.setdefault("KAFKA_BROKERS", os.environ.get("KAFKA_BROKERS", "localhost:9092"))
+        os.environ.setdefault("DEFAULT_TENANT_ID", TENANT_ID)
+
+        from src.workers.devlake_sync import DevLakeSyncWorker
+
+        worker = DevLakeSyncWorker()
+        try:
+            results = await worker.sync()
+            ok(f"Sync worker cycle completed: {results}")
+        finally:
+            await worker.close()
+        return True
+    except ImportError:
+        warn("Could not import sync worker — running via Docker instead")
+        return await trigger_sync_worker_docker()
+    except Exception as e:
+        fail(f"Sync worker error: {e}")
+        return False
+
+
+async def trigger_sync_worker_docker() -> bool:
+    """Trigger sync worker via docker compose exec."""
+    import subprocess
+
+    compose_file = Path(__file__).resolve().parent.parent / "docker-compose.yml"
+    cmd = [
+        "docker", "compose", "-f", str(compose_file),
+        "exec", "-T", "sync-worker",
+        "python", "-c",
+        "import asyncio\nasync def _run():\n    from src.workers.devlake_sync import DevLakeSyncWorker\n    w = DevLakeSyncWorker()\n    try:\n        r = await w.sync()\n        print(f'Sync results: {r}')\n    finally:\n        await w.close()\nasyncio.run(_run())",
+    ]
+
+    info("Triggering sync via Docker container...")
+    try:
+        result = subprocess.run(
+            cmd,
+            capture_output=True,
+            text=True,
+            timeout=600,  # 10 minute timeout
+        )
+        if result.returncode == 0:
+            ok("Docker sync worker cycle completed")
+            return True
+        else:
+            fail(f"Docker sync failed: {result.stderr[:300]}")
+            return False
+    except subprocess.TimeoutExpired:
+        warn("Sync worker timed out (10 min) — will continue on next cycle")
+        return False
+    except 
Exception as e: + fail(f"Docker exec error: {e}") + return False + + +# ═══════════════════════════════════════════════════════════════════════════ +# STEP 7 — Final counts and validation +# ═══════════════════════════════════════════════════════════════════════════ + + +async def get_pulse_counts() -> dict[str, int]: + """Get record counts from PULSE domain tables.""" + counts: dict[str, int] = {} + tables = { + "pull_requests": "eng_pull_requests", + "issues": "eng_issues", + "deployments": "eng_deployments", + "sprints": "eng_sprints", + } + try: + conn = await asyncpg.connect(PULSE_DB) + for name, table in tables.items(): + try: + result = await conn.fetchval( + f"SELECT COUNT(*) FROM {table} WHERE tenant_id = $1::uuid", + TENANT_ID, + ) + counts[name] = result or 0 + except Exception: + counts[name] = 0 + await conn.close() + except Exception as e: + warn(f"Could not query PULSE DB: {e}") + return counts + + +def print_comparison(devlake: dict[str, int], pulse: dict[str, int]) -> None: + """Print a comparison table of DevLake vs PULSE record counts.""" + banner("Final Record Count Comparison") + header = f" {'Entity':<20} {'DevLake':>10} {'PULSE':>10} {'Delta':>10} {'Status':>10}" + info(header) + info(" " + "─" * 62) + + total_dl = 0 + total_pl = 0 + all_synced = True + + for entity in ["pull_requests", "issues", "deployments", "sprints"]: + dl = devlake.get(entity, 0) + pl = pulse.get(entity, 0) + delta = dl - pl + total_dl += dl + total_pl += pl + + if abs(delta) <= 5: + status = f"{C.GREEN}✓ synced{C.RESET}" + elif delta > 0: + status = f"{C.YELLOW}⚠ behind{C.RESET}" + all_synced = False + else: + status = f"{C.CYAN}↑ ahead{C.RESET}" + + info(f" {entity:<20} {dl:>10,} {pl:>10,} {delta:>+10,} {status}") + + info(" " + "─" * 62) + info(f" {'TOTAL':<20} {total_dl:>10,} {total_pl:>10,} {total_dl - total_pl:>+10,}") + + if all_synced: + ok("All entities are in sync!") + else: + warn("Some entities have pending records — the sync worker will catch up on next 
cycle (15 min)") + + +# ═══════════════════════════════════════════════════════════════════════════ +# Utilities +# ═══════════════════════════════════════════════════════════════════════════ + + +def _format_duration(seconds: float) -> str: + """Format seconds into human-readable duration.""" + if seconds < 60: + return f"{seconds:.0f}s" + elif seconds < 3600: + return f"{seconds / 60:.1f}m" + else: + h = int(seconds // 3600) + m = int((seconds % 3600) // 60) + return f"{h}h {m}m" + + +# ═══════════════════════════════════════════════════════════════════════════ +# MAIN ORCHESTRATOR +# ═══════════════════════════════════════════════════════════════════════════ + + +async def main(args: argparse.Namespace) -> None: + started_at = time.monotonic() + + banner("PULSE Full Ingestion — Starting") + info(f"DevLake API: {DEVLAKE_API}") + info(f"DevLake DB: {DEVLAKE_DB.split('@')[1] if '@' in DEVLAKE_DB else DEVLAKE_DB}") + info(f"PULSE DB: {PULSE_DB.split('@')[1] if '@' in PULSE_DB else PULSE_DB}") + info(f"Tenant: {TENANT_ID}") + info(f"Dry run: {args.dry_run}") + info("") + + # ── Step 1: Health checks ── + banner("Step 1/7 — Health Checks") + + async with httpx.AsyncClient(timeout=30) as client: + devlake_ok = await check_devlake_health(client) + if not devlake_ok: + # Try with /health or just assume it's OK if we can reach blueprints + try: + r = await client.get(f"{DEVLAKE_API}/blueprints") + devlake_ok = r.status_code == 200 + if devlake_ok: + ok("DevLake API responded on /blueprints") + except Exception: + pass + + devlake_db_ok = await check_devlake_db() + pulse_db_ok = await check_pulse_db() + + if not devlake_db_ok or not pulse_db_ok: + fail("Required databases are not reachable. Aborting.") + sys.exit(1) + + # ── Step 2: Inventory ── + banner("Step 2/7 — DevLake Inventory") + + async with httpx.AsyncClient(timeout=30) as client: + inventory = await get_inventory(client) + + if not inventory["blueprints"]: + fail("No blueprints found in DevLake. 
Configure blueprints first.") + sys.exit(1) + + # ── Step 3: DevLake ingestion (API → DevLake DB) ── + if not args.skip_devlake: + banner("Step 3/7 — DevLake Data Collection (API → DevLake DB)") + info("This step pulls data from GitHub/Jira/Jenkins APIs into DevLake.") + info("It may take 2-8 hours depending on data volume.") + info("Safe to interrupt — DevLake checkpoints internally.") + info("") + + if args.dry_run: + warn("DRY RUN — skipping DevLake trigger") + else: + async with httpx.AsyncClient(timeout=60) as client: + results = await run_devlake_ingestion( + client, + inventory["blueprints"], + specific_blueprint_id=args.blueprint_id, + ) + for bp_id, status in results.items(): + if status in ("TASK_COMPLETED", "TASK_PARTIAL"): + ok(f"Blueprint #{bp_id}: {status}") + else: + fail(f"Blueprint #{bp_id}: {status}") + else: + info("Skipping DevLake collection (--skip-devlake)") + + # ── Step 4: DevLake record counts ── + banner("Step 4/7 — DevLake Record Counts") + devlake_counts = await get_devlake_counts() + for entity, count in sorted(devlake_counts.items()): + info(f" {entity:<25} {count:>10,}") + + # ── Step 5: Reset watermarks (optional) ── + if args.reset_watermarks: + banner("Step 5/7 — Reset PULSE Watermarks") + if args.dry_run: + warn("DRY RUN — would reset watermarks") + else: + await reset_pulse_watermarks() + else: + info("Step 5/7 — Keeping existing watermarks (incremental sync)") + + # ── Step 6: Sync worker (DevLake DB → PULSE DB → Kafka) ── + banner("Step 6/7 — PULSE Sync (DevLake → PULSE DB → Kafka)") + info("Syncing records from DevLake DB into PULSE with normalization...") + + if args.dry_run: + warn("DRY RUN — skipping sync worker") + else: + success = await trigger_sync_worker() + if not success: + warn("Sync worker had issues — records may catch up in next scheduled cycle (15 min)") + + # ── Step 7: Final validation ── + banner("Step 7/7 — Validation") + devlake_final = await get_devlake_counts() + pulse_final = await get_pulse_counts() + 
print_comparison(devlake_final, pulse_final) + + # ── Summary ── + elapsed = time.monotonic() - started_at + banner(f"PULSE Full Ingestion — Complete ({_format_duration(elapsed)})") + info(f"DevLake records: {sum(devlake_final.get(e, 0) for e in ['pull_requests', 'issues', 'deployments', 'sprints']):,}") + info(f"PULSE records: {sum(pulse_final.values()):,}") + info("") + info("Next steps:") + info(" • The sync worker runs every 15 min and will catch any remaining delta") + info(" • The metrics worker consumes Kafka events and recalculates DORA/Lean/Sprint metrics") + info(" • Check Pipeline Monitor at http://localhost:5173/pipeline-monitor") + + +def parse_args() -> argparse.Namespace: + parser = argparse.ArgumentParser( + description="PULSE Full Ingestion — Orchestrate complete data collection", + ) + parser.add_argument( + "--skip-devlake", + action="store_true", + help="Skip DevLake collection phase (only sync DevLake → PULSE)", + ) + parser.add_argument( + "--reset-watermarks", + action="store_true", + help="Reset PULSE watermarks to force full re-sync from DevLake", + ) + parser.add_argument( + "--blueprint-id", + type=int, + default=None, + help="Trigger only a specific blueprint ID", + ) + parser.add_argument( + "--dry-run", + action="store_true", + help="Show what would happen without making changes", + ) + return parser.parse_args() + + +if __name__ == "__main__": + args = parse_args() + try: + asyncio.run(main(args)) + except KeyboardInterrupt: + log.info(f"\n{C.YELLOW}Interrupted by user. Safe to re-run — all progress is checkpointed.{C.RESET}") + sys.exit(130) diff --git a/pulse/scripts/verify-dev.sh b/pulse/scripts/verify-dev.sh new file mode 100755 index 0000000..230b0c5 --- /dev/null +++ b/pulse/scripts/verify-dev.sh @@ -0,0 +1,140 @@ +#!/usr/bin/env bash +# +# PULSE — post-onboard smoke check +# --------------------------------------------------------------------------- +# Runs AFTER `make onboard`. 
Validates the stack is actually serving data, +# not just that containers are "up": +# +# - pulse-api /health → 200 +# - pulse-data /health → 200 +# - GET /data/v1/metrics/home — has data.deployment_frequency.value +# - GET /data/v1/pipeline/teams — returns ≥ 10 squads (seed target) +# - (optional) Vite dev server at :5173 responds — only if running +# +# Philosophy: if this passes, the new dev can open the browser and expect +# a rendered dashboard with KPIs. If this fails, it points at the broken +# layer (db / worker / seed / UI). +# +# Exit: 0 on all-pass, 1 on any hard failure. +# --------------------------------------------------------------------------- + +set -uo pipefail + +# ---------------------------------------------------------------- colors +if [ -t 1 ]; then + RED=$'\033[31m'; GRN=$'\033[32m'; YEL=$'\033[33m' + CYN=$'\033[36m'; DIM=$'\033[2m'; BLD=$'\033[1m'; RST=$'\033[0m' +else + RED=""; GRN=""; YEL=""; CYN=""; DIM=""; BLD=""; RST="" +fi + +# ---------------------------------------------------------------- config +API_HOST="${PULSE_API_HOST:-http://localhost:3000}" +DATA_HOST="${PULSE_DATA_HOST:-http://localhost:8000}" +WEB_HOST="${PULSE_WEB_HOST:-http://localhost:5173}" +MIN_SQUADS="${MIN_SQUADS:-10}" + +FAILS=0 + +pass() { printf " ${GRN}✓${RST} %-30s ${DIM}%s${RST}\n" "$1" "${2:-}"; } +fail() { printf " ${RED}✗${RST} %-30s ${RED}%s${RST}\n" "$1" "$2" + [ $# -ge 3 ] && printf " ${DIM}fix: %s${RST}\n" "$3" + FAILS=$((FAILS + 1)); } +skip() { printf " ${YEL}∅${RST} %-30s ${YEL}%s${RST}\n" "$1" "$2"; } +section() { printf "\n${BLD}${CYN}%s${RST}\n" "$1"; } + +# http_status URL [timeout_s] +http_status() { + local url=$1 + local to=${2:-5} + curl -s -o /dev/null -w "%{http_code}" --max-time "$to" "$url" 2>/dev/null || echo "000" +} + +# http_json URL [timeout_s] +http_json() { + local url=$1 + local to=${2:-10} + curl -s --max-time "$to" "$url" 2>/dev/null +} + +# ---------------------------------------------------------------- header +printf 
"${BLD}🔍 PULSE verify-dev — post-onboard smoke${RST}\n" +printf "${DIM}(expect all checks ✓ after ${BLD}make onboard${RST}${DIM} completes)${RST}\n" + +# ---------------------------------------------------------------- health +section "API health" + +# pulse-api uses global prefix `/api/v1` (NestJS setGlobalPrefix). +# Keep the verify path aligned with src/main.ts — if someone changes +# the prefix there, this check will start failing (intentional coupling). +API_HEALTH=$(http_status "$API_HOST/api/v1/health") +if [ "$API_HEALTH" = "200" ]; then + pass "pulse-api /api/v1/health" "200 OK" +else + fail "pulse-api /api/v1/health" "HTTP $API_HEALTH" "check logs: docker compose logs pulse-api" +fi + +DATA_HEALTH=$(http_status "$DATA_HOST/health") +if [ "$DATA_HEALTH" = "200" ]; then + pass "pulse-data /health" "200 OK" +else + fail "pulse-data /health" "HTTP $DATA_HEALTH" "check logs: docker compose logs pulse-data" +fi + +# ---------------------------------------------------------------- data content +section "Data content (seed ingested?)" + +# /metrics/home — should return DORA metrics with non-null deployment_frequency. +# Timeout is 60s because this endpoint can compute metrics on-demand when a +# snapshot is missing — cold-start after `make seed-dev` may take ~30-60s +# until the metrics-worker fills in snapshots. After seed runs once, the +# response is sub-second. 
+HOME_RESP=$(http_json "$DATA_HOST/data/v1/metrics/home?period=30d" 60) +if [ -z "$HOME_RESP" ]; then + fail "GET /metrics/home" "no response (60s timeout)" "pulse-data may be computing snapshots on-demand — wait 60s and retry, or run: docker compose logs metrics-worker" +else + # Parse with python (always available after doctor passes) + DF_VALUE=$(printf '%s' "$HOME_RESP" \ + | python3 -c "import sys,json; d=json.load(sys.stdin); v=d.get('data',{}).get('deployment_frequency',{}).get('value'); print(v if v is not None else '')" 2>/dev/null || echo "") + if [ -z "$DF_VALUE" ] || [ "$DF_VALUE" = "None" ] || [ "$DF_VALUE" = "null" ]; then + fail "GET /metrics/home" "deployment_frequency is null" "seed didn't run or no deploys were inserted. Run: make seed-dev" + else + pass "GET /metrics/home" "deployment_frequency = $DF_VALUE" + fi +fi + +# /pipeline/teams — should return ≥ MIN_SQUADS squads +TEAMS_RESP=$(http_json "$DATA_HOST/data/v1/pipeline/teams" 10) +if [ -z "$TEAMS_RESP" ]; then + fail "GET /pipeline/teams" "no response" "pulse-data may be still booting — wait 30s and retry" +else + TEAMS_COUNT=$(printf '%s' "$TEAMS_RESP" \ + | python3 -c "import sys,json; d=json.load(sys.stdin); t=d.get('teams',d) if isinstance(d,dict) else d; print(len(t) if isinstance(t,list) else 0)" 2>/dev/null || echo "0") + if [ "$TEAMS_COUNT" -ge "$MIN_SQUADS" ]; then + pass "GET /pipeline/teams" "$TEAMS_COUNT squads (≥ $MIN_SQUADS required)" + else + fail "GET /pipeline/teams" "$TEAMS_COUNT squads (< $MIN_SQUADS required)" "seed may be incomplete. 
Re-run: make seed-reset" + fi +fi + +# ---------------------------------------------------------------- UI (optional) +section "UI (Vite dev server)" + +WEB_STATUS=$(http_status "$WEB_HOST" 3) +if [ "$WEB_STATUS" = "200" ]; then + pass "vite dev server" "200 OK" +elif [ "$WEB_STATUS" = "000" ]; then + skip "vite dev server" "not running (run: make dev)" +else + fail "vite dev server" "HTTP $WEB_STATUS" "check: cd packages/pulse-web && npm run dev" +fi + +# ---------------------------------------------------------------- summary +printf "\n" +if [ "$FAILS" -eq 0 ]; then + printf "${GRN}${BLD}✓ Stack is healthy.${RST} ${DIM}Open ${BLD}%s${RST}${DIM} in your browser.${RST}\n" "$WEB_HOST" + exit 0 +else + printf "${RED}${BLD}✖ %d check(s) failed.${RST} ${DIM}Look at the fix hints above and re-run.${RST}\n" "$FAILS" + exit 1 +fi
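
The version checks in the doctor script above lean on a `semver_ge` helper whose definition sits earlier in that file, outside this diff excerpt. As an illustration only — not the script's actual implementation — one portable way to express "version A ≥ version B" is to version-sort the pair and check which one comes first (requires GNU `sort`; `-V` is missing from stock BSD/macOS `sort`):

```shell
# Illustrative sketch of a semver_ge-style check; the real helper in the
# doctor script is defined outside this excerpt and may differ.
semver_ge_sketch() {
  # Succeeds (exit 0) when $1 >= $2 under version-number ordering:
  # sort both versions with `sort -V` and check that $2 sorts first.
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

semver_ge_sketch "24.0.7" "24.0" && echo "24.0.7 is new enough"
semver_ge_sketch "3.9.6"  "3.12" || echo "3.9.6 is too old"
```

The equal case also passes (`sort -V` puts `$2` first when both inputs are identical), which matters for exact minimums like the `20.0` Node check.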