Transform messy prompts into precise, powerful instructions for AI assistants.
Features • Installation • Quick Start • Providers • Configuration • Documentation
Ever type a vague prompt like "make a website" and get mediocre results? Prep transforms your casual thoughts into precise, actionable prompts that get better results from any AI assistant.
```shell
$ prep "make a website for my cat photos"

# Output:
Build a responsive single-page website to showcase cat photos with the following
requirements: a masonry-style photo grid layout, lightbox functionality for
enlarged viewing, lazy loading for performance, mobile-first responsive design,
and a simple navigation header. Use HTML5, CSS3 with Flexbox/Grid, and vanilla
JavaScript. Include alt text placeholders for accessibility.
```

## Features

- **Multi-Provider Support** – Local Ollama, Ollama Cloud, OpenAI, Anthropic
- **Beautiful Terminal UI** – Colored output, progress spinners, interactive prompts
- **Persistent Configuration** – Set your defaults once, use everywhere
- **Clipboard Integration** – Copy refined prompts with `--copy`
- **History Tracking** – SQLite-backed history with search
- **Prompt Templates** – 10 built-in templates for common tasks
- **Shell Completions** – Bash, Zsh, Fish, PowerShell
## Installation

```shell
# Clone the repository
git clone https://github.com/caseybarajas/prep.git
cd prep

# Build and install
cargo install --path .
```

### Prerequisites

- Rust 1.70+ – Install Rust
- AI Provider – one of:
  - Ollama running locally, OR
  - an API key for Ollama Cloud, OpenAI, or Anthropic
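After `cargo install`, the binary lands in `~/.cargo/bin` (cargo's default install root). If `prep` isn't found afterwards, check that this directory is on your `PATH`; a quick sketch:

```shell
# cargo install puts binaries in ~/.cargo/bin by default
case ":$PATH:" in
  *":$HOME/.cargo/bin:"*) echo "cargo bin directory is on PATH" ;;
  *) echo 'missing: add export PATH="$HOME/.cargo/bin:$PATH" to your shell rc' ;;
esac
```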
## Quick Start

```shell
# Simple prompt refinement
prep "write a python script to parse json"

# Pipe from file or other commands
cat rough_idea.txt | prep

# Use with a specific provider
prep --provider openai "build a REST API"
```

### Local Ollama

```shell
# Start Ollama
ollama serve

# Pull a model
ollama pull llama3.2

# Use prep
prep "create a dockerfile"
```

### Cloud Providers

```shell
# Set your API key
export OPENAI_API_KEY="sk-..."
# OR
export ANTHROPIC_API_KEY="sk-ant-..."
# OR
export OLLAMA_API_KEY="..."

# Use the cloud provider
prep --provider openai "design a database schema"
prep --provider anthropic "write unit tests"
prep --provider ollama-cloud "explain microservices"
```

## Providers

| Provider | Flag | Default Model | API Key Variable |
|---|---|---|---|
| Ollama (Local) | `--provider ollama` | `llama3.2` | Not required |
| Ollama Cloud | `--provider ollama-cloud` | `llama3.2` | `OLLAMA_API_KEY` |
| OpenAI | `--provider openai` | `gpt-4o` | `OPENAI_API_KEY` |
| Anthropic | `--provider anthropic` | `claude-3-5-sonnet` | `ANTHROPIC_API_KEY` |
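Each provider's endpoint and default model come from `config.toml` (see Configuration below). For example, pointing the local provider at an Ollama instance on another machine should just be a matter of overriding its `endpoint`; this fragment is a sketch, not from the official docs (the host and port are placeholders), though the key names follow the `[providers.ollama-local]` table shown in Configuration:

```toml
# Placeholder host/port for a remote Ollama instance
[providers.ollama-local]
endpoint = "http://192.168.1.50:11434"
```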
## Configuration

Initialize a config file with:

```shell
prep config init
```

Configuration is stored at `~/.config/prep/config.toml`:

```toml
[default]
provider = "ollama"        # Default provider
model = "llama3.2"         # Default model
output_format = "text"     # text, json, or markdown
copy_to_clipboard = false  # Auto-copy results

[providers.ollama-local]
endpoint = "http://localhost:11434"

[providers.openai]
endpoint = "https://api.openai.com/v1"
model = "gpt-4o"

[providers.anthropic]
endpoint = "https://api.anthropic.com/v1"
model = "claude-3-5-sonnet"

[ui]
color = true    # Colored output
spinner = true  # Show progress spinners

[history]
enabled = true      # Track refinement history
max_entries = 1000  # Max history entries
```

### Config Commands

```shell
prep config show           # Display current config
prep config set KEY VALUE  # Set a value
prep config get KEY        # Get a value
prep config path           # Show config file location
```

## Usage

```
prep [OPTIONS] [PROMPT]

Arguments:
  [PROMPT]  Raw prompt to refine (reads from stdin if not provided)

Options:
  -p, --provider <NAME>  AI provider (ollama, ollama-cloud, openai, anthropic)
  -m, --model <NAME>     Model to use (overrides config)
  -o, --output <FORMAT>  Output format: text, json, markdown
  -C, --copy             Copy result to clipboard
  -t, --template <NAME>  Use a prompt template
      --context <FILE>   Include file as additional context
      --dry-run          Preview without calling API
  -v, --verbose          Show diagnostic output
      --no-color         Disable colored output
      --no-history       Don't save to history
  -h, --help             Print help
  -V, --version          Print version

Subcommands:
  config       Manage configuration
  history      View and manage refinement history
  templates    Work with prompt templates
  completions  Generate shell completions
```
## Templates

Built-in templates optimize prompts for specific tasks:

| Template | Description |
|---|---|
| `code` | Code generation requests |
| `debug` | Debugging assistance |
| `docs` | Documentation writing |
| `explain` | Concept explanations |
| `review` | Code review requests |
| `refactor` | Refactoring tasks |
| `test` | Test writing |
| `api` | API design |
| `security` | Security analysis |
| `architecture` | System architecture |
```shell
# List all templates
prep templates list

# Use a template
prep --template code "parse CSV files in rust"
prep --template debug "my function returns null"
```

## History

```shell
# List recent refinements
prep history list

# Show specific entry
prep history show 42

# Search history
prep history search "python"

# Clear all history
prep history clear
```

## Shell Completions

```shell
# Bash
prep completions bash > ~/.local/share/bash-completion/completions/prep

# Zsh (add ~/.zfunc to fpath in .zshrc first)
prep completions zsh > ~/.zfunc/_prep

# Fish
prep completions fish > ~/.config/fish/completions/prep.fish

# PowerShell
prep completions powershell >> $PROFILE
```

## Examples

```shell
$ prep "make api"

# Output:
Design and implement a RESTful API with the following specifications:
define the resource endpoints (GET, POST, PUT, DELETE), implement proper
HTTP status codes, add request validation, include error handling with
meaningful messages, and document the endpoints. Specify the programming
language and framework to use.
```

### With File Context

```shell
$ prep --context src/main.rs "add error handling"
# The context file content is sent along with your prompt
# for more relevant refinements
```

### Output Formats

```shell
# JSON output (for scripting)
prep --output json "write tests" | jq .refined_prompt

# Markdown (for documentation)
prep --output markdown "explain oauth"
```

### Piping

```shell
# Refine and pass to another tool
prep "write a greeting function" | pbcopy

# Use with AI coding assistants
prep "add caching to this function" --context utils.py | claude
```

## Security

- API keys are never stored in config files
- Keys are loaded from environment variables only
- History is stored locally in SQLite
- See SECURITY.md for details
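The `--output json` mode shown under Examples is meant for scripting, and `jq` can pull the refined text out of the payload. A runnable sketch against a hypothetical sample: only the `refined_prompt` field appears in this README, so the other fields are assumptions:

```shell
# Hypothetical sample of prep's JSON output; only refined_prompt is
# documented here, prompt and provider are assumed for illustration.
sample='{"prompt":"write tests","refined_prompt":"Write comprehensive unit tests.","provider":"ollama"}'

# -r prints the raw string without JSON quotes
refined=$(echo "$sample" | jq -r .refined_prompt)
echo "$refined"   # Write comprehensive unit tests.
```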
## Contributing

Contributions are welcome! Please read CONTRIBUTING.md for guidelines.
## License

MIT License – see LICENSE for details.

Made with ❤️ and Rust