Changes from all commits (22 commits)
c67ccc8
Refactor ToolCallSidePanel to KortixComputer with Actions, Files, and…
google-labs-jules[bot] Dec 7, 2025
602a249
cleaup for agent-permissions-system integration
Logrui Dec 7, 2025
fd68ebc
synced with frontend
Logrui Dec 7, 2025
5fdda1c
checkpoint before replacing the entirety of frontend\src\components\t…
Logrui Dec 8, 2025
12ee567
partial sync with upstream production for thread folder minus chat input
Logrui Dec 8, 2025
7c57c12
.upstream-syncing workflow creation - WIP
Logrui Dec 8, 2025
cc31319
portkit creation checkpoint
Logrui Dec 8, 2025
fa08edb
portkit creation checkpoint - scripts
Logrui Dec 8, 2025
ec6a2bc
portkit creation checkpoint - scripts
Logrui Dec 8, 2025
263b052
checkpoint portkit development
Logrui Dec 8, 2025
0a399fb
checkpoint portkit development
Logrui Dec 8, 2025
36aedd8
renamed scan-registry to update-registry script
Logrui Dec 8, 2025
4837853
custom feature tagging testing
Logrui Dec 8, 2025
ee46f38
feat(portkit): enhance research tooling and workflows with robust scr…
Logrui Dec 8, 2025
c6a5f09
feat: merge sandbox proxy api and document browser agent
Logrui Dec 9, 2025
aa85c7a
manually merged frontend\src\hooks\messages from upstream/PRODUCTION
Logrui Dec 9, 2025
b471cb2
bug fixes for legacy browser tool view on frontend and synced tools w…
Logrui Dec 9, 2025
b2c5eb2
docs: comprehensive analysis of upstream/main architectural changes
claude Dec 9, 2025
5879f75
backend fixes
Logrui Dec 9, 2025
15a53d2
Add missing upstream workflows (mobile-eas-update, mobile-testflight,…
Logrui Dec 9, 2025
2b46405
bugfixes: fix 500 error in sandbox file content endpoint caused by un…
Logrui Dec 9, 2025
4a1d12d
Merge pull request #15 from Logrui/claude/sync-upstream-main-01QXvb6Q…
Logrui Dec 9, 2025
File renamed without changes.
File renamed without changes.
66 changes: 66 additions & 0 deletions .agent/workflows/codemap.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,66 @@
---
description: Generate a comprehensive technical "codemap" document for a specific feature in the codebase.
---

# /codemap

Generate a comprehensive technical "codemap" document for a specific feature in the codebase.

## Purpose
To create a detailed, deep-dive technical reference document (`[feature]-codemap.md`) that maps out the file structure, architecture, data flow, and core components of a specific feature. This helps developers quickly understand complex parts of the system.

## Instructions

### 1. Analyze the Request
- **If the user DOES NOT specify a feature**:
- Briefly scan the current workspace to identify major features or modules.
- Provide a numbered list of 3 potential codemaps you could generate (e.g., "1. Authentication System", "2. Agent Builder", "3. Data Ingestion Pipeline").
- Ask the user to select one or provide their own topic.

- **If the user DOES specify a feature**:
- Proceed to Step 2 immediately.

### 2. Research Phase
- **Search**: Use file search and grep tools to locate all files related to the requested feature. Look for frontend components, backend controllers, services, types, and database models.
- **Understand**: Read key files to understand the purpose, logic, and relationships between components.
- **Identify Critical Paths**: Determine which files are `⭐ CRITICAL` to the feature's functionality.
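
The search step can be scripted. A minimal Python sketch of keyword-driven file discovery follows; the helper name, extension filter, and keyword list are illustrative assumptions, not part of the workflow:

```python
import re
from pathlib import Path

def find_feature_files(root: str, keywords: list[str]) -> list[Path]:
    """Collect source files whose name or contents mention any feature keyword."""
    pattern = re.compile("|".join(re.escape(k) for k in keywords), re.IGNORECASE)
    hits = []
    for path in Path(root).rglob("*"):
        # Only scan plausible source/docs files; skip directories and binaries.
        if path.suffix not in {".ts", ".tsx", ".py", ".md"} or not path.is_file():
            continue
        if pattern.search(path.name) or pattern.search(path.read_text(errors="ignore")):
            hits.append(path)
    return hits
```

In practice the agent's built-in grep tooling replaces this, but the logic is the same: match on both path and content, then read the hits to build the component map.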

### 3. Generate Codemap
Create a new file named `[feature]-codemap.md` in the **root** of the workspace (unless the user specified a different location). The file MUST include the following sections:

#### A. File Structure (Core Files)
- List only the most essential files.

#### B. File Structure (Comprehensive)
- Create a visual text-based tree diagram of ALL related components.
- Mark critical core files with `⭐ CRITICAL`.
- Include a short description of the purpose of each file/folder.
- **Format Example**:
```text
frontend/src/sections/feature/
├── main-component.tsx # Main entry point
│ └── Orchestrates sub-components
├── components/
│ ├── sub-component.tsx # Renders specific UI elements ⭐ CRITICAL
│ │ └── Handles user interactions
```

#### C. Architecture & Data Flow
- **Component Interaction Flow**: Describe how components talk to each other.
- **Data Flow**: Describe how data moves through the feature.
- **Mermaid Charts**: Include Mermaid diagrams for:
- User Interaction Flow
- Component Interaction Flow
- Networking/API Interaction Flow

#### D. Code Examples
- Provide snippets of `⭐ CRITICAL` components to illustrate key logic.

#### E. Additional Sections
- Add any other relevant sections (e.g., "State Management", "API Endpoints", "Database Schema", "Configuration") that would be useful for a developer.

## Constraints & Best Practices
- **Length**: Aim for a comprehensive document (target at least ~500 lines if the feature complexity warrants it).
- **Location**: Save to the root workspace directory by default.
- **Style**: Be technical, precise, and exhaustive.
- **Visuals**: Use ASCII trees for file structures and Mermaid for flows.
File renamed without changes.
File renamed without changes.
190 changes: 190 additions & 0 deletions .agent/workflows/portkit.analyze.md
@@ -0,0 +1,190 @@
---
description: Perform a non-destructive cross-artifact consistency and quality analysis across spec.md, plan.md, and tasks.md after task generation.
---

//turbo-all

## User Input

```text
$ARGUMENTS
```

You **MUST** consider the user input before proceeding (if not empty).

## Goal

Identify inconsistencies, duplications, ambiguities, and underspecified items across the three core artifacts (`spec.md`, `plan.md`, `tasks.md`) before implementation. This command MUST run only after `/portkit.tasks` has successfully produced a complete `tasks.md`.

## Operating Constraints

**STRICTLY READ-ONLY**: Do **not** modify any files. Output a structured analysis report. Offer an optional remediation plan (user must explicitly approve before any follow-up editing commands would be invoked manually).

**Constitution Authority**: The project constitution (`.specify/memory/constitution.md`) is **non-negotiable** within this analysis scope. Constitution conflicts are automatically CRITICAL and require adjustment of the spec, plan, or tasks, not dilution, reinterpretation, or silent ignoring of the principle. If a principle itself needs to change, that must occur in a separate, explicit constitution update outside `/portkit.analyze`.

## Execution Steps

### 1. Initialize Analysis Context

Run `.portkit/scripts/powershell/check-prerequisites.ps1 -Json -RequireTasks -IncludeTasks` once from the repo root and parse its JSON output for FEATURE_DIR and AVAILABLE_DOCS. Derive absolute paths:

- SPEC = FEATURE_DIR/spec.md
- PLAN = FEATURE_DIR/plan.md
- TASKS = FEATURE_DIR/tasks.md

Abort with an error message if any required file is missing (instruct the user to run the missing prerequisite command).
For single quotes in arguments (e.g., "I'm Groot"), use the shell escape syntax `'I'\''m Groot'`, or double-quote when possible: `"I'm Groot"`.
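
The path-derivation step above can be sketched as two small helpers; a hypothetical JSON shape is assumed for the script's output, and the function names are illustrative:

```python
import json
from pathlib import Path

def derive_artifact_paths(check_output: str) -> dict[str, Path]:
    """Parse the JSON emitted by check-prerequisites.ps1 and derive
    absolute paths to the three core artifacts."""
    info = json.loads(check_output)
    feature_dir = Path(info["FEATURE_DIR"])
    return {name: feature_dir / f"{name}.md" for name in ("spec", "plan", "tasks")}

def missing_artifacts(paths: dict[str, Path]) -> list[str]:
    """Names of required artifacts absent on disk; abort if non-empty."""
    return [name for name, p in paths.items() if not p.exists()]
```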

### 2. Load Artifacts (Progressive Disclosure)

Load only the minimal necessary context from each artifact:

**From spec.md:**

- Overview/Context
- Functional Requirements
- Non-Functional Requirements
- User Stories
- Edge Cases (if present)

**From plan.md:**

- Architecture/stack choices
- Data Model references
- Phases
- Technical constraints

**From tasks.md:**

- Task IDs
- Descriptions
- Phase grouping
- Parallel markers [P]
- Referenced file paths

**From constitution:**

- Load `.specify/memory/constitution.md` for principle validation

### 3. Build Semantic Models

Create internal representations (do not include raw artifacts in output):

- **Requirements inventory**: Each functional + non-functional requirement with a stable key (derive slug based on imperative phrase; e.g., "User can upload file" → `user-can-upload-file`)
- **User story/action inventory**: Discrete user actions with acceptance criteria
- **Task coverage mapping**: Map each task to one or more requirements or stories (inference by keyword / explicit reference patterns like IDs or key phrases)
- **Constitution rule set**: Extract principle names and MUST/SHOULD normative statements
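
The key derivation and keyword-based task mapping might look like this in Python (a sketch under the stated slug convention; the subset-match heuristic is an assumption):

```python
import re

def requirement_slug(text: str) -> str:
    """Stable key from an imperative phrase,
    e.g. "User can upload file" -> "user-can-upload-file"."""
    return re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")

def map_task(task_text: str, requirement_slugs: list[str]) -> list[str]:
    """Requirements whose key words all appear in the task description."""
    words = set(re.sub(r"[^a-z0-9]+", " ", task_text.lower()).split())
    return [s for s in requirement_slugs if set(s.split("-")) <= words]
```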

### 4. Detection Passes (Token-Efficient Analysis)

Focus on high-signal findings. Limit to 50 findings total; aggregate remainder in overflow summary.

#### A. Duplication Detection

- Identify near-duplicate requirements
- Mark lower-quality phrasing for consolidation
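
"Near-duplicate" can be approximated with a plain string-similarity ratio; this sketch uses Python's `difflib`, and the 0.8 threshold is an illustrative assumption:

```python
from difflib import SequenceMatcher
from itertools import combinations

def near_duplicates(requirements: dict[str, str], threshold: float = 0.8):
    """Pairs of requirement keys whose wording is suspiciously similar."""
    pairs = []
    for (ka, a), (kb, b) in combinations(requirements.items(), 2):
        ratio = SequenceMatcher(None, a.lower(), b.lower()).ratio()
        if ratio >= threshold:
            pairs.append((ka, kb, round(ratio, 2)))
    return pairs
```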

#### B. Ambiguity Detection

- Flag vague adjectives (fast, scalable, secure, intuitive, robust) lacking measurable criteria
- Flag unresolved placeholders (TODO, TKTK, ???, `<placeholder>`, etc.)
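
Both checks reduce to line-level pattern matching; a minimal sketch (the term lists mirror the examples above and are not exhaustive):

```python
import re

VAGUE_TERMS = re.compile(r"\b(fast|scalable|secure|intuitive|robust)\b", re.IGNORECASE)
PLACEHOLDERS = re.compile(r"TODO|TKTK|\?\?\?|<placeholder>")

def ambiguity_findings(lines: list[str]) -> list[tuple[int, str, str]]:
    """(line number, kind, matched text) for each ambiguity signal found."""
    findings = []
    for n, line in enumerate(lines, start=1):
        for m in VAGUE_TERMS.finditer(line):
            findings.append((n, "vague-term", m.group()))
        for m in PLACEHOLDERS.finditer(line):
            findings.append((n, "placeholder", m.group()))
    return findings
```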

#### C. Underspecification

- Requirements with verbs but missing object or measurable outcome
- User stories missing acceptance criteria alignment
- Tasks referencing files or components not defined in spec/plan

#### D. Constitution Alignment

- Any requirement or plan element conflicting with a MUST principle
- Missing mandated sections or quality gates from constitution

#### E. Coverage Gaps

- Requirements with zero associated tasks
- Tasks with no mapped requirement/story
- Non-functional requirements not reflected in tasks (e.g., performance, security)

#### F. Inconsistency

- Terminology drift (same concept named differently across files)
- Data entities referenced in plan but absent in spec (or vice versa)
- Task ordering contradictions (e.g., integration tasks before foundational setup tasks without dependency note)
- Conflicting requirements (e.g., one requires Next.js while other specifies Vue)

### 5. Severity Assignment

Use this heuristic to prioritize findings:

- **CRITICAL**: Violates constitution MUST, missing core spec artifact, or requirement with zero coverage that blocks baseline functionality
- **HIGH**: Duplicate or conflicting requirement, ambiguous security/performance attribute, untestable acceptance criterion
- **MEDIUM**: Terminology drift, missing non-functional task coverage, underspecified edge case
- **LOW**: Style/wording improvements, minor redundancy not affecting execution order
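
The heuristic is a first-match-wins cascade; one way to encode it (the finding-dict field names are illustrative assumptions):

```python
def assign_severity(finding: dict) -> str:
    """Heuristic severity per the ordering above; first matching rule wins."""
    if finding.get("constitution_must_violation") or finding.get("zero_coverage_blocking"):
        return "CRITICAL"
    if finding.get("category") in {"duplication", "conflict"} or finding.get("untestable"):
        return "HIGH"
    if finding.get("category") in {"terminology-drift", "underspecified-edge-case"}:
        return "MEDIUM"
    return "LOW"
```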

### 6. Produce Compact Analysis Report

Output a Markdown report (no file writes) with the following structure:

## Specification Analysis Report

| ID | Category | Severity | Location(s) | Summary | Recommendation |
|----|----------|----------|-------------|---------|----------------|
| A1 | Duplication | HIGH | spec.md:L120-134 | Two similar requirements ... | Merge phrasing; keep clearer version |

(Add one row per finding; generate stable IDs prefixed by category initial.)

**Coverage Summary Table:**

| Requirement Key | Has Task? | Task IDs | Notes |
|-----------------|-----------|----------|-------|

**Constitution Alignment Issues:** (if any)

**Unmapped Tasks:** (if any)

**Metrics:**

- Total Requirements
- Total Tasks
- Coverage % (requirements with >=1 task)
- Ambiguity Count
- Duplication Count
- Critical Issues Count
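
These metrics fall out of the coverage mapping directly; a sketch (field names are illustrative):

```python
def compute_metrics(coverage: dict[str, list[str]], total_tasks: int,
                    ambiguities: int, duplicates: int, critical: int) -> dict:
    """Summary metrics for the report; coverage maps requirement key -> task IDs."""
    total = len(coverage)
    covered = sum(1 for ids in coverage.values() if ids)
    return {
        "total_requirements": total,
        "total_tasks": total_tasks,
        "coverage_pct": round(100 * covered / total, 1) if total else 100.0,
        "ambiguity_count": ambiguities,
        "duplication_count": duplicates,
        "critical_count": critical,
    }
```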

### 7. Provide Next Actions

At end of report, output a concise Next Actions block:

- If CRITICAL issues exist: Recommend resolving before `/portkit.implement`
- If only LOW/MEDIUM: User may proceed, but provide improvement suggestions
- Provide explicit command suggestions: e.g., "Run /portkit.specify with refinement", "Run /portkit.plan to adjust architecture", "Manually edit tasks.md to add coverage for 'performance-metrics'"

### 8. Offer Remediation

Ask the user: "Would you like me to suggest concrete remediation edits for the top N issues?" (Do NOT apply them automatically.)

## Operating Principles

### Context Efficiency

- **Minimal high-signal tokens**: Focus on actionable findings, not exhaustive documentation
- **Progressive disclosure**: Load artifacts incrementally; don't dump all content into analysis
- **Token-efficient output**: Limit findings table to 50 rows; summarize overflow
- **Deterministic results**: Rerunning without changes should produce consistent IDs and counts

### Analysis Guidelines

- **NEVER modify files** (this is read-only analysis)
- **NEVER hallucinate missing sections** (if absent, report them accurately)
- **Prioritize constitution violations** (these are always CRITICAL)
- **Use examples over exhaustive rules** (cite specific instances, not generic patterns)
- **Report zero issues gracefully** (emit success report with coverage statistics)

## Context

{{args}}
63 changes: 63 additions & 0 deletions .agent/workflows/portkit.clarify.md
@@ -0,0 +1,63 @@
---
description: Ask the user clarifying questions to narrow scope before deep research
handoffs:
- label: Research Feature
agent: portkit.research
prompt: Clarification complete. Updates have been made to specs/[feature]/spec.md. Proceed to research.
---

//turbo-all

## User Input
```text
$ARGUMENTS
```

You **MUST** consider the user input before proceeding (if not empty).

## Goal
Facilitate a "Clarification Phase" where the agent asks the user critical questions about the feature to prevent assumptions during the Research phase (e.g., "Where is the entry point?", "Is this a UI component or a backend service?").

## Note
This is an **Interactive** workflow. The agent should stop and wait for user input after generating the questions.

## Outline
1. **Parse Input**: Identify Feature Name/Spec from `$ARGUMENTS`.
2. **Verify Context**: Ensure `specs/[feature]/spec.md` exists. Run `/portkit.specify` to create it if it does not.

3. **Phase 1: Ambiguity Scan & Question Generation**:
* **Action**: Load the current spec file. Perform a structured ambiguity & coverage scan using this taxonomy. For each category, mark status: Clear / Partial / Missing.
* **Functional Scope**: Core user goals, success criteria, explicit inclusions/exclusions.
* **Adaptation Strategy**: Bridge adapters, omissions, replaced dependencies.
* **Dependencies**: Internal vs External, Conflict resolution (e.g. Auth, Billing), "Poison" dependencies.
* **Integration Points**: UI location, Route structure, Backend hooks.
* **Edge Cases**: Failure handling, specific constraint scenarios.
* **Generate Questions**:
* Create a prioritized queue of candidate clarification questions (maximum 5).
* Each question must be answerable with a short multiple-choice selection or a short phrase.
* Focus on "High Impact" ambiguities that would block Research or Planning.

4. **Phase 2: Sequential Questioning Loop (Interactive)**:
* **User Interaction**: Present EXACTLY ONE question at a time.
* **Recommendation**:
* For multiple-choice, recommend the best option based on Portkit best practices (e.g. "Strip Billing").
* Format as: `**Recommended:** Option [X] - <reasoning>`
* **Integration**:
* After EACH accepted answer, immediately update the in-memory representation of the spec.
* Append to a `## Clarifications` section: `- Q: <question> → A: <final answer>`.
* Apply the clarification to the appropriate section (Scope, Strategy, etc).
* **Stop Condition**: 5 questions asked, or user signals "done".

5. **Phase 3: Finalize Spec**:
* **Action**: Write the updated spec back to `specs/[feature]/spec.md`.
* Ensure no contradictory statements remain.
* Ensure terminology is consistent.
* **Keyword Generation**:
* Based on the spec and clarifications, append a list of **search keywords** to the Spec file (e.g., `## Research Keywords`).
* Format: `* Scan Keywords: [array of technical terms, e.g., 'Credits', 'BillingContext', 'verify_user']`
* These hints will guide the `scan-risk` and `grep` operations in the Research phase.

6. **Completion**:
* Output: "Spec updated with clarification details and research keywords."
* **Stats**: Questions asked/answered, sections touched, keywords generated.
* **Recommendation**: Run `/portkit.research` to begin technical analysis with these new constraints.