Transform ad-hoc Claude sessions into reproducible development pipelines with parallel execution, automatic retry, and full state management.
- Features
- Installation
- Quick Start
- Usage
- Examples
- Documentation
- Troubleshooting
- Contributing
- License
- Acknowledgments
- **Workflow Orchestration** - Define complex development workflows in simple YAML
- **Parallel Execution** - Run multiple Claude agents simultaneously with MapReduce
- **Automatic Retry** - Smart retry strategies with exponential backoff and circuit breakers
- **Full State Management** - Checkpoint and resume interrupted workflows exactly where they left off
- **Goal-Seeking** - Iterative refinement until specifications are met
- **Git Integration** - Automatic worktree management and commit tracking
- **Error Recovery** - Comprehensive failure handling with on-failure handlers
- **Analytics** - Cost tracking, performance metrics, and optimization recommendations
- **Extensible** - Custom validators, handlers, and workflow composition
- **Documentation** - Comprehensive man pages and built-in help system
Install from crates.io:
cargo install prodigy
# Homebrew support coming soon - use cargo install for now
brew install prodigy
Or build from source:
# Clone the repository
git clone https://github.com/iepathos/prodigy
cd prodigy
# Build and install
cargo build --release
cargo install --path .
# Optional: Install man pages
./scripts/install-man-pages.sh
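To confirm the binary landed on your PATH, a quick sanity check (assuming the standard clap-generated `--version` and `--help` flags):

```bash
prodigy --version   # print the installed version
prodigy --help      # list the available subcommands
```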
Get up and running in under 5 minutes with these simple examples.
- Initialize Prodigy in your project:
prodigy init
- Create a simple workflow (`fix-tests.yml`):
name: fix-failing-tests
steps:
- shell: "cargo test"
on_failure:
claude: "/fix-test-failures"
max_attempts: 3
- Run the workflow:
prodigy run fix-tests.yml
Process multiple files simultaneously with MapReduce:
name: add-documentation
mode: mapreduce
setup:
- shell: "find src -name '*.rs' -type f > files.json"
map:
input: files.json
agent_template:
- claude: "/add-rust-docs ${item}"
max_parallel: 10
reduce:
- claude: "/summarize Documentation added to ${map.successful} files"
Run with:
prodigy run add-documentation.yml
Iteratively refine code until a measurable goal is met - in this case, full test coverage:
name: achieve-full-coverage
steps:
- goal_seek:
goal: "Achieve 100% test coverage"
command: "claude: /improve-test-coverage"
validate: "cargo tarpaulin --print-summary | grep '100.00%'"
max_attempts: 5
# Run a workflow
prodigy run workflow.yml
# Execute a single command with retries
prodigy exec "claude: /refactor main.rs" --retry 3
# Process files in parallel
prodigy batch "*.py" --command "claude: /add-types" --parallel 5
# Resume an interrupted workflow
prodigy resume workflow-123
# Goal-seeking operation
prodigy goal-seek --goal "Fix all linting errors" --command "claude: /fix-lint"
# View analytics and costs
prodigy analytics --session abc123
# Manage worktrees
prodigy worktree ls # List active worktrees
prodigy worktree ls --detailed # Show enhanced session information
prodigy worktree ls --json # Output in JSON format
prodigy worktree ls --detailed --json # Combine detailed info with JSON output
prodigy worktree clean # Clean up inactive worktrees
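The `--json` listing is convenient for scripting. A minimal sketch, assuming `jq` is installed and that the output is a JSON array (the exact field layout is not documented here):

```bash
# Count active worktrees from the JSON listing
prodigy worktree ls --json | jq 'length'
```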
Retry behavior can be configured globally and overridden per step:
retry_defaults:
attempts: 3
backoff: exponential
initial_delay: 2s
max_delay: 30s
jitter: true
steps:
- shell: "deploy.sh"
retry:
attempts: 5
backoff:
fibonacci:
initial: 1s
retry_on: [network, timeout]
retry_budget: 5m
Workflows can define environment variables, dynamically computed values, and secrets:
env:
NODE_ENV: production
WORKERS:
command: "nproc"
cache: true
secrets:
API_KEY: ${vault:api/keys/production}
steps:
- shell: "npm run build"
env:
BUILD_TARGET: production
working_dir: ./frontend
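As a sketch of how these values reach your commands - assuming, as the `NODE_ENV` example suggests, that `env` entries are exported to the shell environment - the `--max-workers` flag and `deploy.sh` script below are hypothetical:

```yaml
steps:
  - shell: "npm run build -- --max-workers=$WORKERS"   # WORKERS was computed by `nproc` above
  - shell: './deploy.sh --api-key "$API_KEY"'          # API_KEY comes from the vault-backed secret
```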
Workflows can be composed from imports and parameterized templates:
imports:
- path: ./common/base.yml
alias: base
templates:
test-suite:
parameters:
- name: language
type: string
steps:
- shell: "${language} test"
workflows:
main:
extends: base.default
steps:
- use: test-suite
with:
language: cargo
Prodigy automatically tracks git changes during workflow execution and provides context variables for accessing file changes, commits, and statistics:
- `${step.files_added}` - Files added in the current step
- `${step.files_modified}` - Files modified in the current step
- `${step.files_deleted}` - Files deleted in the current step
- `${step.files_changed}` - All files changed (added + modified + deleted)
- `${step.commits}` - Commit hashes created in the current step
- `${step.commit_count}` - Number of commits in the current step
- `${step.insertions}` - Lines inserted in the current step
- `${step.deletions}` - Lines deleted in the current step
- `${workflow.files_added}` - All files added across the workflow
- `${workflow.files_modified}` - All files modified across the workflow
- `${workflow.files_deleted}` - All files deleted across the workflow
- `${workflow.files_changed}` - All files changed across the workflow
- `${workflow.commits}` - All commit hashes across the workflow
- `${workflow.commit_count}` - Total commits across the workflow
- `${workflow.insertions}` - Total lines inserted across the workflow
- `${workflow.deletions}` - Total lines deleted across the workflow
Variables support pattern filtering using glob patterns:
# Get only markdown files added
- shell: "echo '${step.files_added:*.md}'"
# Get only Rust source files modified
- claude: "/review ${step.files_modified:*.rs}"
# Get specific directory changes
- shell: "echo '${workflow.files_changed:src/*}'"
Control output format with modifiers:
# JSON array format
- shell: "echo '${step.files_added:json}'" # ["file1.rs", "file2.rs"]
# Newline-separated (for scripts)
- shell: "echo '${step.files_added:lines}'" # file1.rs\nfile2.rs
# Comma-separated
- shell: "echo '${step.files_added:csv}'" # file1.rs,file2.rs
# Space-separated (default)
- shell: "echo '${step.files_added}'" # file1.rs file2.rs
name: code-review-workflow
steps:
# Make changes
- claude: "/implement feature X"
commit_required: true
# Review only the changed Rust files
- claude: "/review-code ${step.files_modified:*.rs}"
# Generate changelog for markdown files
- shell: "echo 'Changed docs:' && echo '${step.files_added:*.md:lines}'"
# Conditional execution based on changes
- shell: "cargo test"
when: "${step.files_modified:*.rs}" # Only run if Rust files changed
# Summary at the end
- claude: |
/summarize-changes
Total files changed: ${workflow.files_changed:json}
Commits created: ${workflow.commit_count}
Lines added: ${workflow.insertions}
Lines removed: ${workflow.deletions}
Prodigy looks for configuration in these locations (in order):
- `.prodigy/config.yml` - Project-specific configuration
- `~/.config/prodigy/config.yml` - User configuration
- `/etc/prodigy/config.yml` - System-wide configuration
Example configuration:
# .prodigy/config.yml
claude:
model: claude-3-opus
max_tokens: 4096
worktree:
max_parallel: 20
cleanup_policy:
idle_timeout: 300
max_age: 3600
retry:
default_attempts: 3
default_backoff: exponential
storage:
events_dir: ~/.prodigy/events
state_dir: ~/.prodigy/state
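After editing, you can inspect what Prodigy actually picked up with the config commands covered under Troubleshooting:

```bash
prodigy config show      # effective configuration after merging project, user, and system files
prodigy config validate  # catch typos and schema errors before running workflows
```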
Fix all test failures automatically with intelligent retry:
name: test-pipeline
steps:
- shell: "cargo test"
on_failure:
- claude: "/analyze-test-failure ${shell.output}"
- claude: "/fix-test-failure"
- shell: "cargo test"
retry:
attempts: 3
backoff: exponential
- shell: "cargo fmt -- --check"
on_failure: "cargo fmt"
- shell: "cargo clippy -- -D warnings"
on_failure:
claude: "/fix-clippy-warnings"
Analyze and improve multiple files concurrently:
name: parallel-analysis
mode: mapreduce
setup:
- shell: |
find . -name "*.rs" -exec wc -l {} + |
sort -rn |
head -20 |
awk '{print $2}' > complex-files.json
map:
input: complex-files.json
agent_template:
- claude: "/analyze-complexity ${item}"
- claude: "/suggest-refactoring ${item}"
- shell: "cargo test --lib $(basename ${item} .rs)"
max_parallel: 10
reduce:
- claude: "/generate-refactoring-report ${map.results}"
- shell: "echo 'Analyzed ${map.total} files, ${map.successful} successful'"
Iteratively improve performance until benchmarks pass:
name: performance-optimization
steps:
- goal_seek:
goal: "Reduce benchmark time below 100ms"
command: "claude: /optimize-performance benches/main.rs"
validate: |
cargo bench --bench main |
grep "time:" |
awk '{print ($2 < 100) ? "score: 100" : "score: " int(100 - $2)}'
threshold: 100
max_attempts: 10
timeout: 1800
- shell: "cargo bench --bench main > benchmark-results.txt"
- claude: "/document-optimization benchmark-results.txt"
- User Guide - Complete guide to using Prodigy
- API Reference - Detailed API documentation
- Workflow Syntax - YAML workflow configuration reference
- Architecture - System design and internals
- Contributing Guide - How to contribute to Prodigy
- Man Pages - Unix-style manual pages for all commands
| Command | Description |
|---------|-------------|
| `prodigy run <workflow>` | Execute a workflow |
| `prodigy exec <command>` | Run a single command |
| `prodigy batch <pattern>` | Process files in parallel |
| `prodigy resume <id>` | Resume interrupted workflow |
| `prodigy goal-seek` | Run goal-seeking operation |
| `prodigy analytics` | View session analytics |
| `prodigy worktree` | Manage git worktrees |
| `prodigy init` | Initialize Prodigy in project |
Performance: Workflows running slowly
- Check parallel execution limits:
prodigy run workflow.yml --max-parallel 20
- Enable verbose mode to identify bottlenecks:
prodigy run workflow.yml -v
Note: The `-v` flag also enables Claude streaming JSON output for debugging Claude interactions.
- Review analytics for optimization opportunities:
prodigy analytics --session <session-id>
Resume: How to recover from interrupted workflows
Prodigy automatically creates checkpoints. To resume:
# List available checkpoints
prodigy checkpoints list
# Resume from latest checkpoint
prodigy resume
# Resume specific workflow
prodigy resume workflow-abc123
MapReduce: Jobs failing with "DLQ not empty"
Review and reprocess failed items:
# View failed items
prodigy dlq view <job-id>
# Reprocess failed items
prodigy dlq retry <job-id> --max-parallel 5
Configuration: Settings not being applied
Check configuration precedence:
# Show effective configuration
prodigy config show
# Validate configuration
prodigy config validate
Installation: Man pages not available
Install man pages manually:
cd prodigy
./scripts/install-man-pages.sh
# Or install to user directory
./scripts/install-man-pages.sh --user
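Once installed, the pages should be visible to `man`; the page names here are an assumption about how they are organized:

```bash
man prodigy      # top-level overview page (assumed name)
man -k prodigy   # search the man database for all prodigy-related pages
```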
Debugging: Need more information about failures
Enable debug logging:
# Set log level
export RUST_LOG=debug
prodigy run workflow.yml -vv
# View detailed events
prodigy events --job-id <job-id> --verbose
Verbosity: Controlling Claude streaming output
Prodigy provides fine-grained control over Claude interaction visibility:
Default behavior (no flags):
prodigy run workflow.yml
# Shows progress and results, but no Claude JSON streaming output
Verbose mode (-v):
prodigy run workflow.yml -v
# Shows Claude streaming JSON output for debugging interactions
Debug mode (-vv) and trace mode (-vvv):
prodigy run workflow.yml -vv
prodigy run workflow.yml -vvv
# Also shows Claude streaming output plus additional internal logs
Force Claude output (environment override):
PRODIGY_CLAUDE_CONSOLE_OUTPUT=true prodigy run workflow.yml
# Shows Claude streaming output regardless of verbosity level
This allows you to keep normal runs clean while enabling detailed debugging when needed.
For help and support:
- Report Issues
- Discussions
- Email Support
We welcome contributions! Please see our Contributing Guide for details.
# Fork and clone the repository
git clone https://github.com/YOUR-USERNAME/prodigy
cd prodigy
# Set up development environment
cargo build
cargo test
# Run with verbose output
RUST_LOG=debug cargo run -- run test.yml
# Before submitting PR
cargo fmt
cargo clippy -- -D warnings
cargo test
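When iterating on a single area, standard cargo test filtering keeps the loop fast (the `workflow` filter below is just an illustration):

```bash
# Run only tests whose names match `workflow`, showing their stdout
cargo test workflow -- --nocapture
```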
Areas where contributions are especially welcome:
- Package manager distributions (brew, apt, yum)
- Internationalization and translations
- Documentation and examples
- Testing and bug reports
- Performance optimizations
- UI/UX improvements
Prodigy is dual-licensed under MIT and Apache 2.0. See LICENSE for details.
Prodigy builds on the shoulders of giants:
- Claude Code CLI - The AI pair programmer that powers Prodigy
- Tokio - Async runtime for Rust
- Clap - Command-line argument parsing
- Serde - Serialization framework
Special thanks to all contributors who have helped make Prodigy better!
Made with ❤️ by developers, for developers
Features • Quick Start • Docs • Contributing