Merged
6 changes: 6 additions & 0 deletions .chezmoi.yaml.tmpl
@@ -11,11 +11,15 @@
{{- $workEmail := "" -}}
{{- $npmToken := "" -}}
{{- $signingKey := "" -}}
{{- $grafanaInstanceId := "" -}}
{{- $grafanaApiToken := "" -}}

{{- if and (not $businessUse) $hasOp -}}
{{- $name = onepasswordRead "op://Dotfiles/Git/name" -}}
{{- $email = onepasswordRead "op://Dotfiles/Git/email" -}}
{{- $npmToken = onepasswordRead "op://Dotfiles/NPM/credential" -}}
{{- $grafanaInstanceId = onepasswordRead "op://Dotfiles/GrafanaCloud/instance_id" -}}
{{- $grafanaApiToken = onepasswordRead "op://Dotfiles/GrafanaCloud/api_token" -}}
{{- else -}}
{{- $name = promptStringOnce . "name" "Your name" -}}
{{- $email = promptStringOnce . "email" "Your email (personal)" -}}
@@ -39,6 +43,8 @@ data:
work_email: {{ $workEmail | quote }}
npm_token: {{ $npmToken | quote }}
signing_key: {{ $signingKey | quote }}
grafana_instance_id: {{ $grafanaInstanceId | quote }}
grafana_api_token: {{ $grafanaApiToken | quote }}

# Auto tmux on terminal startup
auto_tmux: true
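Values exported under `data:` become template variables available to every other chezmoi template. A hypothetical consumer of the two new Grafana fields might look like this (the file path and remote-write URL are invented for illustration, not part of this PR):

```yaml
# ~/.config/grafana-agent/config.yaml.tmpl (hypothetical consumer template)
{{- if .grafana_api_token }}
metrics:
  global:
    remote_write:
      - url: https://prometheus-prod-01.grafana.net/api/prom/push  # invented URL
        basic_auth:
          username: {{ .grafana_instance_id }}
          password: {{ .grafana_api_token }}
{{- end }}
```

The `if` guard matters: when 1Password is unavailable (the `else` branch of the template above), both values render as empty strings, and the consumer should degrade gracefully rather than emit broken credentials.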
59 changes: 2 additions & 57 deletions dot_claude/CLAUDE.md
@@ -1,58 +1,3 @@
# Workflow Orchestration
# Global Configuration

## Plan Mode Default

- Enter plan mode for ANY non-trivial task (3+ steps or architectural decisions)
- If something goes sideways, STOP and re-plan immediately — don't keep pushing
- Use plan mode for verification steps, not just building
- Write detailed specs upfront to reduce ambiguity

## Subagent Strategy

- Use subagents liberally to keep main context window clean
- Offload research, exploration, and parallel analysis to subagents
- For complex problems, throw more compute at it via subagents
- One task per subagent for focused execution

## Self-Improvement Loop

- After ANY correction from the user: update tasks/lessons.md with the pattern
- Write rules for yourself that prevent the same mistake
- Ruthlessly iterate on these lessons until mistake rate drops
- Review lessons at session start for relevant project

## Verification Before Done

- Never mark a task complete without proving it works
- Diff behavior between main and your changes when relevant
- Ask yourself: "Would a staff engineer approve this?"
- Run tests, check logs, demonstrate correctness

## Demand Elegance (Balanced)

- For non-trivial changes: pause and ask "is there a more elegant way?"
- If a fix feels hacky: "Knowing everything I know now, implement the elegant solution"
- Skip this for simple, obvious fixes — don't over-engineer
- Challenge your own work before presenting it

## Autonomous Bug Fixing

- When given a bug report: just fix it. Don't ask for hand-holding
- Point at logs, errors, failing tests — then resolve them
- Zero context switching required from the user
- Go fix failing CI tests without being told how

# Task Management

- **Plan First**: Write plan to tasks/todo.md with checkable items
- **Verify Plan**: Check in before starting implementation
- **Track Progress**: Mark items complete as you go
- **Explain Changes**: High-level summary at each step
- **Document Results**: Add review section to tasks/todo.md
- **Capture Lessons**: Update tasks/lessons.md after corrections

# Core Principles

- **Simplicity First**: Make every change as simple as possible. Impact minimal code.
- **No Laziness**: Find root causes. No temporary fixes. Senior developer standards.
- **Minimal Impact**: Changes should only touch what's necessary. Avoid introducing bugs.
This is the global CLAUDE.md. Rules are organized in `~/.claude/rules/`.
35 changes: 35 additions & 0 deletions dot_claude/rules/design-decision-visibility.md
@@ -0,0 +1,35 @@
---
alwaysApply: true
---

# Design Decision Visibility

## Purpose

Prevent AI slop caused by silently applying better practices. Make implicit design decisions visible so that humans retain decision-making authority.

## Best Practice vs Better Practice

- **Best practice**: The optimal choice is established across virtually all contexts (e.g., use prepared statements for SQL injection prevention, never store passwords in plaintext). Apply silently without confirmation.
- **Better practice**: Generally recommended, but the optimal choice may differ depending on project context (e.g., error handling strategy, pagination approach, caching strategy, state management pattern). Never apply silently.

When uncertain whether something is a best practice or a better practice, treat it as a better practice and ask for confirmation.
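The distinction is easiest to see in code. Below is a minimal Python sketch of the canonical "best practice" example from above: a parameterized query can be applied silently because it is correct in virtually every context, while the interpolated alternative is never acceptable (table name and data are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

user_input = "alice'; DROP TABLE users; --"

# Best practice: parameterized query. The driver treats user_input
# as data, never as SQL, so the injection attempt matches nothing.
rows = conn.execute(
    "SELECT id FROM users WHERE name = ?", (user_input,)
).fetchall()
print(rows)  # []

# Anti-pattern, shown only for contrast. String interpolation would
# let the input above terminate the statement and inject DROP TABLE:
# query = f"SELECT id FROM users WHERE name = '{user_input}'"
```

By contrast, a "better practice" such as choosing cursor-based over offset pagination has real tradeoffs either way, which is exactly why it must be surfaced rather than applied silently.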

## Before Implementation: Present Tradeoffs

Before writing code:

1. List design decisions where multiple options exist
2. Present tradeoffs for each option
3. When options span different levels of abstraction, organize by layer (infrastructure / application / UX)
4. Obtain user confirmation before proceeding

When applying a better practice, state why that choice was made and present alternatives.

## After Implementation: Self-Review of Implicit Choices

Upon completing implementation, list:

- Design decisions implicitly made in the code
- Whether each is a best practice or better practice
- For better practices, the rationale for the choice in the current project context
8 changes: 2 additions & 6 deletions dot_claude/rules/development-principles.md
@@ -14,6 +14,8 @@ alwaysApply: true
- Less code is better code
- Do not add lines without explicit user request
- Avoid over-engineering and premature abstraction
- Find root causes — no temporary fixes. Senior developer standards
- Changes should only touch what's necessary. Avoid introducing bugs

## Modern Practices

@@ -27,12 +29,6 @@ alwaysApply: true
- Do not speculate on specifications — ask or investigate
- Admit uncertainty rather than guessing

## Configuration Changes

- Before modifying allow/deny patterns or access controls, state the intended behavior explicitly
- Deny rules should be narrow and specific — block only the dangerous cases
- Verify the change doesn't break normal usage (dry-run or test examples)

## Comments

- Write "why", not "what"
56 changes: 56 additions & 0 deletions dot_claude/rules/workflow.md
@@ -0,0 +1,56 @@
---
alwaysApply: true
---

# Workflow

## Planning

- Enter plan mode for ANY non-trivial task (3+ steps or architectural decisions)
- If something goes sideways, STOP and re-plan immediately
- Use plan mode for verification steps, not just building
- Write detailed specs upfront to reduce ambiguity

## Subagent Strategy

- Use subagents liberally to keep main context window clean
- Offload research, exploration, and parallel analysis to subagents
- For complex problems, throw more compute at it via subagents
- One task per subagent for focused execution

## Verification

- Never mark a task complete without proving it works
- Diff behavior between main and your changes when relevant
- Ask yourself: "Would a staff engineer approve this?"
- Run tests, check logs, demonstrate correctness

## Demand Elegance (Balanced)

- For non-trivial changes: pause and ask "is there a more elegant way?"
- If a fix feels hacky: "Knowing everything I know now, implement the elegant solution"
- Skip this for simple, obvious fixes
- Challenge your own work before presenting it

## Autonomous Bug Fixing

- When given a bug report: just fix it. Don't ask for hand-holding
- Point at logs, errors, failing tests — then resolve them
- Zero context switching required from the user
- Go fix failing CI tests without being told how

## Self-Improvement Loop

- After ANY correction from the user: update tasks/lessons.md with the pattern
- Write rules for yourself that prevent the same mistake
- Ruthlessly iterate on these lessons until mistake rate drops
- Review lessons at session start for relevant project

## Task Management

- **Plan First**: Write plan to tasks/todo.md with checkable items
- **Verify Plan**: Check in before starting implementation
- **Track Progress**: Mark items complete as you go
- **Explain Changes**: High-level summary at each step
- **Document Results**: Add review section to tasks/todo.md
- **Capture Lessons**: Update tasks/lessons.md after corrections
118 changes: 118 additions & 0 deletions dot_claude/skills/ios-submission-review/SKILL.md
@@ -0,0 +1,118 @@
---
name: ios-submission-review
description: >
Pre-submission review for iOS App Store. Scans the codebase for common
rejection reasons and generates a pass/fail report with fixes.
Use this skill when the user mentions App Store submission, review, release,
審査, 提出, or phrases like "ready to submit", "before submitting to Apple",
"submission review", "rejection check". Also trigger when the user is
preparing a TestFlight build for external review or discussing App Store
rejection issues. Supports Swift/SwiftUI, UIKit, and React Native projects
with Apple IAP and RevenueCat.
---

# iOS Submission Review

Automated pre-submission review that catches the most common App Store rejection reasons before you submit. Modeled after the thoroughness of tools like Rork, but running entirely in the CLI against your actual codebase.

Apple rejects ~25% of submissions (1.93M out of 7.77M in 2024). Over 40% of those are for easily avoidable issues. This skill checks for them systematically.

## How it works

When invoked, run all checks below against the current project. The user shouldn't need to provide any input beyond triggering the skill — detect the project type (Swift/RN) automatically and scan everything.

## Step 1: Detect Project Type

Determine the stack by checking for:
- `*.xcodeproj` / `*.xcworkspace` → Native iOS (Swift/ObjC)
- `package.json` with `react-native` → React Native
- `Podfile`, `Gemfile` → Additional native dependencies
- `app.json` / `app.config.js` → Expo (React Native)

Also detect IAP setup:
- `RevenueCatUI`, `Purchases`, `@revenuecat/purchases-react-native` → RevenueCat
- `StoreKit`, `SKPaymentQueue` → Native StoreKit
- Both can coexist
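The detection logic above can be sketched roughly as follows (a simplified illustration, not the skill's actual implementation; real detection would also inspect Podfile contents and Xcode build settings):

```python
import json
from pathlib import Path

def detect_stack(root: str) -> dict:
    """Classify a project by the marker files listed above."""
    path = Path(root)
    result = {"stack": "unknown", "expo": False, "iap": set()}

    # Xcode project files indicate a native iOS app
    if list(path.glob("*.xcodeproj")) or list(path.glob("*.xcworkspace")):
        result["stack"] = "native-ios"

    # A react-native dependency overrides that, since RN projects
    # also carry an ios/ Xcode project internally
    pkg = path / "package.json"
    if pkg.exists():
        deps = json.loads(pkg.read_text()).get("dependencies", {})
        if "react-native" in deps:
            result["stack"] = "react-native"
        if "@revenuecat/purchases-react-native" in deps:
            result["iap"].add("revenuecat")

    # app.json / app.config.js suggest Expo (plain RN apps can also
    # ship an app.json, so treat this as a hint, not proof)
    if (path / "app.json").exists() or (path / "app.config.js").exists():
        result["expo"] = True

    return result
```

For native IAP detection, the equivalent signal is a grep for `StoreKit` / `SKPaymentQueue` across Swift and Objective-C sources.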

## Step 2: Run Checks by Guideline

Read `references/guidelines.md` for the full checklist and code signals per guideline. For each section, scan the codebase for the listed patterns and verify required elements exist.

### Check order (by rejection frequency):

1. **2.1 App Completeness** — Placeholder content, crash risks, debug artifacts, dead-end flows
2. **5.1 Privacy** — Privacy policy, permission purpose strings, PrivacyInfo.xcprivacy, account deletion, ATT
3. **2.3 Accurate Metadata** — Feature flags hiding functionality, demo account readiness
4. **3.1 In-App Purchase** — Restore button, pricing display, subscription terms, RevenueCat config
5. **4.0 Design** — WebView-only check, safe area usage, layout adaptability
6. **2026 Requirements** — SDK version, age rating, AI consent, privacy manifests

For each check:
- Scan relevant files using Grep/Glob
- Classify as PASS, WARN, or FAIL
- For FAIL/WARN: cite the specific file and line, explain why it's a problem, and state the fix
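One possible shape for a check outcome, and the PASS/WARN/FAIL mapping, is sketched below (field names and thresholds are invented for illustration, not part of the skill):

```python
from dataclasses import dataclass, field

@dataclass
class CheckResult:
    guideline: str                 # e.g. "5.1 Privacy"
    status: str                    # "PASS" | "WARN" | "FAIL"
    findings: list = field(default_factory=list)  # (file, line, issue, fix)

def classify(required_found: bool, risky_hits: list) -> str:
    """Map scan results onto the three statuses used in the report."""
    if not required_found:
        return "FAIL"   # a mandatory element is missing entirely
    if risky_hits:
        return "WARN"   # present, but suspicious patterns remain
    return "PASS"

result = CheckResult(
    guideline="3.1 In-App Purchase",
    status=classify(required_found=True, risky_hits=[]),
)
print(result.status)  # PASS
```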

## Step 3: Generate Report

Output a structured report directly in the conversation:

```
# iOS Submission Review

Project: [name] | Stack: [Swift/React Native] | IAP: [Apple/RevenueCat/None]

## Summary
- X passes, Y warnings, Z failures
- Estimated review risk: LOW / MEDIUM / HIGH

## Failures (must fix before submission)

### [Guideline #] [Title]
- **File**: path/to/file.swift:42
- **Issue**: [what's wrong]
- **Fix**: [specific action to take]

## Warnings (review recommended)

### [Guideline #] [Title]
- **File**: path/to/file.swift:42
- **Issue**: [what's wrong]
- **Recommendation**: [suggested action]

## Passes
- [Guideline #] [Title] — OK

## Pre-Submission Reminders
- [ ] Demo account credentials in App Review Notes (if login required)
- [ ] Screenshots match current UI
- [ ] What's New text updated
- [ ] Backend services running and not blocking Apple IP ranges
- [ ] TestFlight build tested on physical device
```
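The "estimated review risk" line can be derived mechanically from the counts. One possible mapping (the thresholds here are invented, not Apple's):

```python
def review_risk(failures: int, warnings: int) -> str:
    """Heuristic: any failure means HIGH risk; several warnings, MEDIUM."""
    if failures > 0:
        return "HIGH"
    if warnings > 2:
        return "MEDIUM"
    return "LOW"

print(review_risk(failures=0, warnings=1))  # LOW
print(review_risk(failures=2, warnings=0))  # HIGH
```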

## Step 4: Visual Verification (Chrome)

After the code scan, use Chrome (browser automation) to verify UI-level issues that static analysis can't catch:

- If a local dev server or Simulator build is available, launch the app and visually check:
- Onboarding flow completes without dead ends
- Paywall displays pricing, subscription terms, and Restore button
- Privacy policy link is accessible from Settings or onboarding
- Account deletion flow exists (if signup is present)
- No placeholder text or broken images visible on screen
- If App Store Connect is accessible in the browser, verify:
- Screenshots match the current app UI
- Demo account credentials are filled in App Review Notes
- IAP products are in "Ready to Submit" state and attached to the build
- Age rating questionnaire is up to date

Skip this step if no browser automation is available or the user declines. Note in the report which visual checks were performed and which were skipped.

## Important behaviors

- Run all checks automatically — don't ask the user which guidelines to check
- Be specific: cite files and lines, not vague warnings
- Don't report PASS items in detail — just list them. Focus the user's attention on failures and warnings
- If IAP is detected, check RevenueCat/StoreKit configuration thoroughly — IAP rejections are painful because resubmission requires full re-review
- If the project has no iOS-specific files, say so and exit early
- When using Chrome for visual verification, follow the browser-automation rule: operate one field at a time, verify after each action