33 changes: 33 additions & 0 deletions scripts/skills/api-design.md
@@ -0,0 +1,33 @@
## API Design Expertise

Apply these API design patterns:

### RESTful Conventions
- Use nouns for resources, HTTP verbs for actions (GET /users, POST /users, DELETE /users/:id)
- Return appropriate status codes: 200 OK, 201 Created, 400 Bad Request, 404 Not Found, 422 Unprocessable Entity
- Use consistent error response format: `{ "error": { "code": "...", "message": "..." } }`
- Version APIs when breaking changes are needed (/v1/users, /v2/users)
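
A minimal sketch of the error envelope above; `make_error` and the example code value are illustrative names, not a prescribed API:

```python
def make_error(code: str, message: str) -> dict:
    # Build the consistent envelope: { "error": { "code": ..., "message": ... } }
    return {"error": {"code": code, "message": message}}

# Example: body for a 404 response
body = make_error("USER_NOT_FOUND", "No user exists with id 42")
```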

### Request/Response Design
- Accept and return JSON (Content-Type: application/json)
- Use camelCase for JSON field names
- Include pagination for list endpoints (limit, offset or cursor)
- Support filtering and sorting via query parameters
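
The pagination guideline can be sketched as a small query-parameter parser; the default and maximum limits are assumptions to tune per API:

```python
def parse_pagination(params: dict, default_limit: int = 20, max_limit: int = 100):
    # Clamp limit to a sane ceiling and forbid negative offsets.
    limit = min(int(params.get("limit", default_limit)), max_limit)
    offset = max(int(params.get("offset", 0)), 0)
    return limit, offset
```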

### Input Validation
- Validate ALL input at the API boundary — never trust client data
- Return specific validation errors with field names
- Sanitize strings against injection (SQL, XSS, command injection)
- Set reasonable size limits on request bodies
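
One way to return field-level validation errors, shown here with hypothetical `email`/`name` rules:

```python
def validate_user(payload: dict) -> dict:
    # Collect per-field errors instead of failing on the first problem.
    errors = {}
    email = payload.get("email", "")
    if "@" not in email:
        errors["email"] = "must be a valid email address"
    name = payload.get("name", "")
    if not (1 <= len(name) <= 100):
        errors["name"] = "must be between 1 and 100 characters"
    return errors  # empty dict means the payload is valid
```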

### Error Handling
- Never expose stack traces or internal errors to clients
- Log full error details server-side
- Use consistent error codes that clients can programmatically handle
- Include request-id in responses for debugging
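
A sketch combining these rules: full details go to the server log, while the client sees only a generic message plus a request id (`handle_exception` is an illustrative name, not a framework hook):

```python
import logging
import uuid

logger = logging.getLogger("api")

def handle_exception(exc: Exception) -> tuple[int, dict]:
    # Log the full traceback server-side; never echo internals to the client.
    request_id = str(uuid.uuid4())
    logger.error("request %s failed", request_id, exc_info=exc)
    return 500, {
        "error": {"code": "INTERNAL_ERROR", "message": "Something went wrong"},
        "requestId": request_id,
    }
```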

### Authentication & Authorization
- Verify auth on EVERY endpoint (don't rely on frontend-only checks)
- Use principle of least privilege for authorization
- Validate tokens/sessions on each request
- Rate limit sensitive endpoints (login, password reset)
30 changes: 30 additions & 0 deletions scripts/skills/brainstorming.md
@@ -0,0 +1,30 @@
## Brainstorming: Socratic Design Refinement

Before writing the implementation plan, challenge your assumptions with these questions:

### Requirements Clarity
- What is the **minimum viable change** that satisfies this issue?
- Are there implicit requirements not stated in the issue?
- What are the acceptance criteria? If none are stated, define them.

### Design Alternatives
- What are at least 2 different approaches to solve this?
- What are the trade-offs of each? (complexity, performance, maintainability)
- Which approach minimizes the blast radius of changes?

### Risk Assessment
- What could go wrong with the chosen approach?
- What existing functionality could break?
- Are there edge cases not covered by the issue description?

### Dependency Analysis
- What existing code does this depend on?
- What other code depends on what you're changing?
- Are there any circular dependency risks?

### Simplicity Check
- Can this be solved with fewer files changed?
- Is there existing infrastructure you can reuse?
- Would a simpler approach work for 90% of cases?

Document your reasoning in the plan. Show the alternatives you considered and why you chose this approach.
33 changes: 33 additions & 0 deletions scripts/skills/data-pipeline.md
@@ -0,0 +1,33 @@
## Data Pipeline Expertise

Apply these data engineering patterns:

### Schema Design
- Define schemas explicitly — never rely on implicit structure
- Use migrations for all schema changes (never manual ALTER TABLE)
- Add indexes for frequently queried columns
- Consider denormalization for read-heavy paths

### Data Integrity
- Use transactions for multi-step operations
- Implement idempotency keys for operations that could be retried
- Validate data at ingestion — reject bad data early
- Use constraints (NOT NULL, UNIQUE, FOREIGN KEY) in the database layer
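
The idempotency-key rule in miniature, with an in-memory dict standing in for a durable store (a real system would use a database table with a UNIQUE constraint on the key):

```python
# Stand-in for a durable idempotency store.
_processed: dict[str, dict] = {}

def apply_payment(idempotency_key: str, amount: int) -> dict:
    # A retried request with the same key returns the original result
    # instead of charging twice.
    if idempotency_key in _processed:
        return _processed[idempotency_key]
    result = {"status": "charged", "amount": amount}
    _processed[idempotency_key] = result
    return result
```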

### Query Patterns
- Avoid N+1 queries — use JOINs or batch loading
- Use EXPLAIN to verify query plans for complex queries
- Paginate large result sets — never SELECT * without LIMIT
- Use parameterized queries — never string concatenation for SQL
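
Parameterized queries in action, using Python's stdlib `sqlite3` as a stand-in for any DB-API driver — the malicious input is bound as data, never spliced into the SQL string:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES (?)", ("alice",))

# The injection attempt is treated as a literal name and matches nothing.
user_input = "alice'; DROP TABLE users; --"
rows = conn.execute("SELECT id FROM users WHERE name = ?", (user_input,)).fetchall()
```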

### Migration Safety
- Migrations must be reversible (include rollback steps)
- Test migrations on a copy of production data
- Add new columns as nullable, then backfill, then add NOT NULL
- Never drop columns in the same deploy as code changes

### Backpressure & Resilience
- Implement circuit breakers for external data sources
- Use dead letter queues for failed processing
- Set timeouts on all external calls
- Monitor queue depths and processing latency
28 changes: 28 additions & 0 deletions scripts/skills/documentation.md
@@ -0,0 +1,28 @@
## Documentation Expertise

For documentation-focused issues, apply a lightweight approach:

### Scope
- Focus on accuracy over comprehensiveness
- Update only what's actually changed or incorrect
- Remove outdated information rather than marking it deprecated
- Keep examples current and runnable

### Writing Style
- Use active voice and present tense
- Lead with the most important information
- Use code examples for anything technical
- Keep paragraphs short — 2-3 sentences max

### Structure
- Start with a one-line summary of what this documents
- Include prerequisites and setup if applicable
- Provide a quick start / most common usage first
- Put advanced topics and edge cases later

### Skip Heavy Stages
This is a documentation change. The following pipeline stages can be simplified:
- **Design stage**: Skip — documentation doesn't need architecture design
- **Build stage**: Focus on file edits only, no compilation needed
- **Test stage**: Verify links work and examples are syntactically correct
- **Review stage**: Focus on accuracy and clarity, not code patterns
34 changes: 34 additions & 0 deletions scripts/skills/frontend-design.md
@@ -0,0 +1,34 @@
## Frontend Design Expertise

Apply these frontend patterns to your implementation:

### Accessibility (Required)
- All interactive elements must have keyboard support
- Use semantic HTML elements (button, nav, main, article)
- Include aria-labels for non-text interactive elements
- Ensure color contrast meets WCAG AA (4.5:1 for text)
- Test with screen reader mental model: does the DOM order make sense?

### Responsive Design
- Mobile-first: start with mobile layout, enhance for larger screens
- Use relative units (rem, %, vh/vw) instead of fixed pixels
- Test breakpoints: 320px, 768px, 1024px, 1440px
- Touch targets: minimum 44x44px

### Component Patterns
- Keep components focused — one responsibility per component
- Lift state up only when siblings need to share it
- Use composition over inheritance
- Handle loading, error, and empty states for every data-dependent component

### Performance
- Lazy-load below-the-fold content
- Optimize images (appropriate format, size, lazy loading)
- Minimize re-renders — check dependency arrays in effects
- Avoid layout thrashing — batch DOM reads and writes

### User Experience
- Provide immediate feedback for user actions
- Show loading indicators for operations that take over 300 ms
- Use optimistic updates where safe
- Preserve user input on errors — never clear forms on failed submit
37 changes: 37 additions & 0 deletions scripts/skills/performance.md
@@ -0,0 +1,37 @@
## Performance Expertise

Apply these optimization patterns:

### Profiling First
- Measure before optimizing — identify the actual bottleneck
- Use profiling tools appropriate to the language/runtime
- Focus on the critical path — optimize what users experience

### Caching Strategy
- Cache expensive computations and repeated queries
- Set appropriate TTLs — stale data vs freshness trade-off
- Invalidate caches on write operations
- Use cache layers: in-memory (L1) → distributed (L2) → database (L3)
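
The layered lookup can be sketched as follows; plain dicts stand in for the in-memory and distributed caches, and `load_from_db` is a caller-supplied assumption:

```python
def cached_get(key, l1: dict, l2: dict, load_from_db):
    # Check the fastest layer first, then fall back, populating
    # the faster layers on the way back up.
    if key in l1:
        return l1[key]
    if key in l2:
        l1[key] = l2[key]
        return l1[key]
    value = load_from_db(key)
    l2[key] = value
    l1[key] = value
    return value
```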

### Database Performance
- Add indexes for frequently queried columns (check EXPLAIN plans)
- Avoid N+1 queries — use batch loading or JOINs
- Use connection pooling
- Consider read replicas for read-heavy workloads

### Algorithm Complexity
- Prefer O(n log n) over O(n²) for sorting/searching
- Use appropriate data structures (hash maps for lookups, trees for ranges)
- Avoid unnecessary allocations in hot paths
- Pre-compute values that are used repeatedly

### Network Optimization
- Minimize round trips — batch API calls where possible
- Use compression for large payloads
- Implement pagination — never return unbounded result sets
- Use CDNs for static assets

### Benchmarking
- Include before/after benchmarks for performance changes
- Test with realistic data volumes (not just unit test fixtures)
- Measure p50, p95, p99 latencies — not just averages
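
A nearest-rank percentile sketch showing why p95/p99 matter — the illustrative latencies below average about 65 ms, which completely hides the 300 ms tail:

```python
import math

def percentile(samples: list[float], p: float) -> float:
    # Nearest-rank method: the smallest sample at or above the p-th rank.
    ordered = sorted(samples)
    rank = max(1, math.ceil(p / 100 * len(ordered)))
    return ordered[rank - 1]

latencies = [12, 15, 11, 240, 14, 13, 16, 12, 300, 13]  # ms, made-up data
p50, p95, p99 = (percentile(latencies, p) for p in (50, 95, 99))
```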
33 changes: 33 additions & 0 deletions scripts/skills/product-thinking.md
@@ -0,0 +1,33 @@
## Product Thinking Expertise

Consider the user perspective in your implementation:

### User Stories
- Who is the user for this feature?
- What problem does this solve for them?
- What is their workflow before and after this change?
- Define acceptance criteria from the user's perspective

### User Experience
- What is the simplest interaction that solves the problem?
- How does the user discover this feature?
- What happens when things go wrong? (error states, recovery)
- Is the feature accessible to users with disabilities?

### Edge Cases from User Perspective
- What if the user has no data yet? (empty state)
- What if the user has too much data? (pagination, filtering)
- What if the user makes a mistake? (undo, confirmation)
- What if the user is on a slow connection? (loading states)

### Progressive Disclosure
- Show the most important information first
- Hide complexity behind progressive interactions
- Don't overwhelm with options — provide sensible defaults
- Use contextual help instead of documentation

### Feedback & Communication
- Confirm successful actions immediately
- Explain errors in plain language — not error codes
- Show progress for long-running operations
- Preserve user context across navigation
38 changes: 38 additions & 0 deletions scripts/skills/security-audit.md
@@ -0,0 +1,38 @@
## Security Audit Expertise

Apply OWASP Top 10 and security best practices:

### Injection Prevention
- Use parameterized queries for ALL database access
- Sanitize user input before rendering in HTML/templates
- Validate and sanitize file paths — prevent directory traversal
- Never execute user-supplied strings as code or commands
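
A directory-traversal guard sketch; `UPLOAD_ROOT` and `safe_path` are illustrative names, and the check relies on `Path.is_relative_to` (Python 3.9+):

```python
from pathlib import Path

UPLOAD_ROOT = Path("/srv/uploads").resolve()

def safe_path(user_supplied: str) -> Path:
    # Resolve the requested path, then reject anything that escapes the root.
    candidate = (UPLOAD_ROOT / user_supplied).resolve()
    if not candidate.is_relative_to(UPLOAD_ROOT):
        raise ValueError("path escapes the allowed directory")
    return candidate
```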

### Authentication
- Hash passwords with bcrypt/argon2 (never MD5/SHA1)
- Implement account lockout after failed attempts
- Use secure session management (HttpOnly, Secure, SameSite cookies)
- Require re-authentication for sensitive operations

### Authorization
- Check permissions server-side on EVERY request
- Use deny-by-default — explicitly grant access
- Verify resource ownership (user can only access their own data)
- Log authorization failures for monitoring

### Data Protection
- Never log sensitive data (passwords, tokens, PII)
- Encrypt sensitive data at rest
- Use HTTPS for all communications
- Set appropriate CORS headers — never use wildcard in production

### Secrets Management
- Never hardcode secrets in source code
- Use environment variables or secret managers
- Rotate secrets regularly
- Check for accidentally committed secrets (API keys, passwords, tokens)

### Dependency Security
- Check for known vulnerabilities in dependencies
- Pin dependency versions to prevent supply chain attacks
- Review new dependencies before adding them
29 changes: 29 additions & 0 deletions scripts/skills/systematic-debugging.md
@@ -0,0 +1,29 @@
## Systematic Debugging: Root Cause Analysis

A previous attempt at this stage FAILED. Do NOT blindly retry the same approach. Follow this 4-phase investigation:

### Phase 1: Evidence Collection
- Read the error output from the previous attempt carefully
- Identify the EXACT line/file where the failure occurred
- Check if the error is a symptom or the root cause
- Look for patterns: is this a known error type?

### Phase 2: Hypothesis Formation
- List 3 possible root causes for this failure
- For each hypothesis, identify what evidence would confirm or deny it
- Rank hypotheses by likelihood

### Phase 3: Root Cause Verification
- Test the most likely hypothesis first
- Read the relevant source code — don't guess
- Check if previous artifacts (plan.md, design.md) are correct or flawed
- If the plan was correct but execution failed, focus on execution
- If the plan was flawed, document what was wrong

### Phase 4: Targeted Fix
- Fix the ROOT CAUSE, not the symptom
- If the previous approach was fundamentally wrong, choose a different approach
- If it was a minor error, make the minimal fix
- Document what went wrong and why the new approach is better

IMPORTANT: If you find existing artifacts from a successful previous stage, USE them — don't regenerate from scratch.
37 changes: 37 additions & 0 deletions scripts/skills/testing-strategy.md
@@ -0,0 +1,37 @@
## Testing Strategy Expertise

Apply these testing patterns:

### Test Pyramid
- **Unit tests** (70%): Test individual functions/methods in isolation
- **Integration tests** (20%): Test component interactions and boundaries
- **E2E tests** (10%): Test critical user flows end-to-end

### What to Test
- Happy path: the expected successful flow
- Error cases: what happens when things go wrong?
- Edge cases: empty inputs, maximum values, concurrent access
- Boundary conditions: off-by-one, empty collections, null/undefined

### Test Quality
- Each test should verify ONE behavior
- Test names should describe the expected behavior, not the implementation
- Tests should be independent — no shared mutable state between tests
- Tests should be deterministic — same result every run

### Coverage Strategy
- Aim for meaningful coverage, not 100% line coverage
- Focus coverage on business logic and error handling
- Don't test framework code or simple getters/setters
- Cover the branches, not just the lines

### Mocking Guidelines
- Mock external dependencies (APIs, databases, file system)
- Don't mock the code under test
- Use realistic test data — edge cases reveal bugs
- Verify mock interactions when the side effect IS the behavior

### Regression Testing
- Write a failing test FIRST that reproduces the bug
- Then fix the bug and verify the test passes
- Keep regression tests — they prevent the bug from recurring
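
The test-first regression flow in miniature, with a deliberately buggy `truncate` standing in for the real defect:

```python
# Buggy implementation: intended to keep at most `limit` items,
# but the off-by-one keeps one extra.
def truncate(items, limit):
    return items[: limit + 1]  # BUG

def test_truncate_respects_limit():
    # Written FIRST — this fails against the buggy version above.
    assert len(truncate([1, 2, 3, 4], limit=2)) == 2

# Fix the root cause; the same test now passes and stays in the suite.
def truncate(items, limit):
    return items[:limit]
```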
36 changes: 36 additions & 0 deletions scripts/skills/two-stage-review.md
@@ -0,0 +1,36 @@
## Two-Stage Code Review

This review runs in two passes. Complete Pass 1 fully before starting Pass 2.

### Pass 1: Spec Compliance

Compare the implementation against the plan and issue requirements:

1. **Task Checklist**: Does the code implement every task from plan.md?
2. **Files Modified**: Were all planned files actually modified?
3. **Requirements Coverage**: Does the implementation satisfy every requirement from the issue?
4. **Missing Features**: Is anything from the plan NOT implemented?
5. **Scope Creep**: Was anything added that WASN'T in the plan?

For each gap found:
- **[SPEC-GAP]** description — what was planned vs what was implemented

If all requirements are met, write: "Spec compliance: PASS — all planned tasks implemented."

---

### Pass 2: Code Quality

Now review the code for engineering quality:

1. **Logic bugs** — incorrect conditions, off-by-one errors, null handling
2. **Security** — injection, XSS, auth bypass, secret exposure
3. **Error handling** — missing catch blocks, silent failures, unclear error messages
4. **Performance** — unnecessary loops, missing indexes, N+1 queries
5. **Naming and clarity** — confusing names, missing context, magic numbers
6. **Test coverage** — are new code paths tested? Edge cases covered?

For each issue found, use format:
- **[SEVERITY]** file:line — description

Severity: Critical, Bug, Security, Warning, Suggestion