
**The Prompt is the Code**
Check out the new Inline Role Syntax when working with multiple concurrent models.
Version 0.10.0 introduces breaking changes affecting:

- Environment Variables: Naming conventions now use a double underscore (`__`) for nested config sections (e.g., `AIA_LLM__TEMPERATURE`, `AIA_FLAGS__DEBUG`). See Environment Variables for the complete list.
- Configuration Files: Configuration now uses a nested YAML structure with sections such as `llm:`, `prompts:`, `output:`, and `flags:`. The defaults.yml file is the single source of truth for all configuration options. See the Configuration Guide for details.
- File Locations: Configuration files now follow the XDG Base Directory Specification. The default config file location is `~/.config/aia/aia.yml`. See the Installation Guide for setup instructions.

Review the Configuration Guide before upgrading to v0.10.0.
AIA is a command-line utility that facilitates interaction with AI models through dynamic prompt management. It automates the management of pre-compositional prompts and executes generative AI commands with enhanced features including embedded directives, shell integration, embedded Ruby, history management, interactive chat, and prompt workflows.
AIA leverages the following Ruby gems:
- prompt_manager to manage prompts,
- ruby_llm to access LLM providers,
- ruby_llm-mcp for Model Context Protocol (MCP) support,
- and optionally the shared_tools gem, which provides a collection of ready-to-use functions and MCP clients for LLMs that support tools.
- Install AIA:

  gem install aia

- Install dependencies:

  brew install fzf

- Create your first prompt:

  mkdir -p ~/.prompts
  echo "What is [TOPIC]?" > ~/.prompts/what_is.txt

- Run your prompt:

  aia what_is

  You'll be prompted to enter a value for [TOPIC], then AIA will send your question to the AI model.

- Start an interactive chat:

  aia --chat

  Or use multiple models for comparison:

  aia --chat -m gpt-4o-mini,gpt-3.5-turbo
, ,
(\____/) AI Assistant (v0.9.7) is Online
(_oo_) gpt-4o-mini
(O) using ruby_llm (v1.3.1)
__||__ \) model db was last refreshed on
[/______\] / 2025-06-18
/ \__AI__/ \/ You can share my tools
/ /__\
(\ /____\
One of AIA's most powerful features is the ability to send a single prompt to multiple AI models simultaneously and compare their responses side-by-side—complete with token usage and cost tracking.
# Compare responses from 3 models with token counts and cost estimates
aia --chat -m gpt-4o,claude-3-5-sonnet,gemini-1.5-pro --tokens --cost

Example output:
You: What's the best approach for handling database migrations in a microservices architecture?
from: gpt-4o
Use a versioned migration strategy with backward compatibility...
from: claude-3-5-sonnet
Consider the Expand-Contract pattern for zero-downtime migrations...
from: gemini-1.5-pro
Implement a schema registry with event-driven synchronization...
┌─────────────────────────────────────────────────────────────────┐
│ Model │ Input Tokens │ Output Tokens │ Cost │
├─────────────────────────────────────────────────────────────────┤
│ gpt-4o │ 156 │ 342 │ $0.0089 │
│ claude-3-5-sonnet │ 156 │ 418 │ $0.0063 │
│ gemini-1.5-pro │ 156 │ 387 │ $0.0041 │
└─────────────────────────────────────────────────────────────────┘
Why this matters:
- Compare reasoning approaches - See how different models tackle the same problem
- Identify blind spots - One model might catch something others miss
- Cost optimization - Find the best price/performance ratio for your use case
- Consensus building - Use `--consensus` to synthesize the best answer from all models
- Installation & Prerequisites
- Basic Usage
- Configuration
- Advanced Features
- Examples & Tips
- Security Considerations
- Troubleshooting
- Development
- Contributing
- Roadmap
- License
- Articles on AIA
- Ruby: >= 3.2.0
- External Tools:
  - fzf - Command-line fuzzy finder
# Install AIA gem
gem install aia
# Install required external tools (macOS)
brew install fzf
# Install required external tools (Linux)
# Ubuntu/Debian
sudo apt install fzf
# Arch Linux
sudo pacman -S fzf

Get completion functions for your shell:
# For bash users
aia --completion bash >> ~/.bashrc
# For zsh users
aia --completion zsh >> ~/.zshrc
# For fish users
aia --completion fish >> ~/.config/fish/config.fish

# Basic usage
aia [OPTIONS] PROMPT_ID [CONTEXT_FILES...]
# Interactive chat session
aia --chat [--role ROLE] [--model MODEL]
# Use a specific model
aia --model gpt-4 my_prompt
# Specify output file
aia --output result.md my_prompt
# Use a role/system prompt
aia --role expert my_prompt
# Enable fuzzy search for prompts
aia --fuzzy

| Option | Description | Example |
|---|---|---|
| `--chat` | Start interactive chat session | `aia --chat` |
| `--model MODEL` | Specify AI model(s) to use. Supports MODEL[=ROLE] syntax | `aia --model gpt-4o-mini,gpt-3.5-turbo` or `aia --model gpt-4o=architect,claude=security` |
| `--consensus` | Enable consensus mode for multi-model | `aia --consensus` |
| `--no-consensus` | Force individual responses | `aia --no-consensus` |
| `--role ROLE` | Use a role/system prompt (default for all models) | `aia --role expert` |
| `--list-roles` | List available role files | `aia --list-roles` |
| `--output FILE` | Specify output file | `aia --output results.md` |
| `--fuzzy` | Use fuzzy search for prompts | `aia --fuzzy` |
| `--tokens` | Display token usage in chat mode | `aia --chat --tokens` |
| `--cost` | Include cost calculations with token usage | `aia --chat --cost` |
| `--help` | Show complete help | `aia --help` |
~/.prompts/ # Default prompts directory
├── ask.txt # Simple question prompt
├── code_review.txt # Code review prompt
├── roles/ # Role/system prompts
│ ├── expert.txt # Expert role
│ └── teacher.txt # Teaching role
└── _prompts.log # History log
The most commonly used configuration options:
| Option | Default | Description |
|---|---|---|
| model | gpt-4o-mini | AI model to use |
| prompts_dir | ~/.prompts | Directory containing prompts |
| output | temp.md | Default output file |
| temperature | 0.7 | Model creativity (0.0-1.0) |
| chat | false | Start in chat mode |
AIA determines configuration settings using this order (highest to lowest priority):
- Embedded config directives (in prompt files): `//config model = gpt-4`
- Command-line arguments: `--model gpt-4`
- Environment variables: `export AIA_MODEL=gpt-4`
- Configuration files: `~/.aia/config.yml`
- Default values
Environment Variables:
export AIA_MODEL=gpt-4
export AIA_PROMPTS__DIR=~/my-prompts
export AIA_LLM__TEMPERATURE=0.8

Configuration File (~/.aia/config.yml):
model: gpt-4
prompts_dir: ~/my-prompts
temperature: 0.8
chat: false

Embedded Directives (in prompt files):
//config model = gpt-4
//config temperature = 0.8
Your prompt content here...
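When the same setting is supplied at more than one level, the higher-priority source wins. A quick illustration with hypothetical values, following the order listed above:

```bash
# ~/.aia/config.yml contains:  model: gpt-4o-mini    (lowest of the three shown)
export AIA_MODEL=gpt-4o                 # environment variable overrides the config file
aia --model gpt-5-mini my_prompt        # command-line flag overrides the environment
# A "//config model = ..." directive inside my_prompt would override all of the above.
```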
Click to view all configuration options
| Config Item Name | CLI Options | Default Value | Environment Variable |
|---|---|---|---|
| adapter | --adapter | ruby_llm | AIA_ADAPTER |
| aia_dir | | ~/.aia | AIA_DIR |
| append | -a, --append | false | AIA_FLAGS__APPEND |
| chat | --chat | false | AIA_FLAGS__CHAT |
| clear | --clear | false | AIA_FLAGS__CLEAR |
| config_file | -c, --config-file | ~/.aia/config.yml | AIA_CONFIG_FILE |
| debug | -d, --debug | false | AIA_FLAGS__DEBUG |
| embedding_model | --em, --embedding_model | text-embedding-ada-002 | AIA_LLM__EMBEDDING_MODEL |
| erb | | true | AIA_FLAGS__ERB |
| frequency_penalty | --frequency-penalty | 0.0 | AIA_LLM__FREQUENCY_PENALTY |
| fuzzy | -f, --fuzzy | false | AIA_FLAGS__FUZZY |
| image_quality | --iq, --image-quality | standard | AIA_IMAGE__QUALITY |
| image_size | --is, --image-size | 1024x1024 | AIA_IMAGE__SIZE |
| image_style | --style, --image-style | vivid | AIA_IMAGE__STYLE |
| history_file | --history-file | ~/.prompts/_prompts.log | AIA_OUTPUT__HISTORY_FILE |
| markdown | --md, --markdown | true | AIA_OUTPUT__MARKDOWN |
| max_tokens | --max-tokens | 2048 | AIA_LLM__MAX_TOKENS |
| model | -m, --model | gpt-4o-mini | AIA_MODEL |
| next | -n, --next | nil | AIA_NEXT |
| output | -o, --output | temp.md | AIA_OUTPUT__FILE |
| parameter_regex | --regex | '(?-mix:([[A-Z _|]+]))' | AIA_PROMPTS__PARAMETER_REGEX |
| pipeline | --pipeline | [] | AIA_PIPELINE |
| presence_penalty | --presence-penalty | 0.0 | AIA_LLM__PRESENCE_PENALTY |
| prompt_extname | | .txt | AIA_PROMPTS__EXTNAME |
| prompts_dir | --prompts-dir | ~/.prompts | AIA_PROMPTS__DIR |
| refresh | --refresh | 7 (days) | AIA_REGISTRY__REFRESH |
| require_libs | --rq --require | [] | AIA_REQUIRE_LIBS |
| role | -r, --role | | AIA_ROLE |
| roles_dir | | ~/.prompts/roles | AIA_ROLES__DIR |
| roles_prefix | --roles-prefix | roles | AIA_ROLES__PREFIX |
| shell | | true | AIA_FLAGS__SHELL |
| speak | --speak | false | AIA_FLAGS__SPEAK |
| speak_command | | afplay | AIA_SPEECH__COMMAND |
| speech_model | --sm, --speech-model | tts-1 | AIA_SPEECH__MODEL |
| system_prompt | --system-prompt | | AIA_SYSTEM_PROMPT |
| temperature | -t, --temperature | 0.7 | AIA_LLM__TEMPERATURE |
| terse | --terse | false | AIA_FLAGS__TERSE |
| tokens | --tokens | false | AIA_FLAGS__TOKENS |
| cost | --cost | false | AIA_FLAGS__COST |
| tool_paths | --tools | [] | AIA_TOOLS__PATHS |
| allowed_tools | --at, --allowed-tools | nil | AIA_TOOLS__ALLOWED |
| rejected_tools | --rt, --rejected-tools | nil | AIA_TOOLS__REJECTED |
| top_p | --top-p | 1.0 | AIA_LLM__TOP_P |
| transcription_model | --tm, --transcription-model | whisper-1 | AIA_TRANSCRIPTION__MODEL |
| verbose | -v, --verbose | false | AIA_FLAGS__VERBOSE |
| voice | --voice | alloy | AIA_SPEECH__VOICE |
Directives are special commands in prompt files that begin with // and provide dynamic functionality:
| Directive | Description | Example |
|---|---|---|
| `//config` | Set configuration values | `//config model = gpt-4` |
| `//context` | Show context for this conversation with checkpoint markers | `//context` |
| `//checkpoint` | Create a named checkpoint of current context | `//checkpoint save_point` |
| `//restore` | Restore context to a previous checkpoint | `//restore save_point` |
| `//include` | Insert file contents | `//include path/to/file.txt` |
| `//paste` | Insert clipboard contents | `//paste` |
| `//shell` | Execute shell commands | `//shell ls -la` |
| `//robot` | Show the pet robot ASCII art w/versions | `//robot` |
| `//ruby` | Execute Ruby code | `//ruby puts "Hello World"` |
| `//next` | Set next prompt in sequence | `//next summary` |
| `//pipeline` | Set prompt workflow | `//pipeline analyze,summarize,report` |
| `//clear` | Clear conversation history | `//clear` |
| `//help` | Show available directives | `//help` |
| `//model` | Show current model configuration | `//model` |
| `//available_models` | List available models | `//available_models` |
| `//tools` | Show available tools (optional filter by name) | `//tools` or `//tools file` |
| `//review` | Review current context with checkpoint markers | `//review` |
Directives can also be used in interactive chat sessions.
# Set model and temperature for this prompt
//config model = gpt-4
//config temperature = 0.9
# Enable chat mode and terse responses
//config chat = true
//config terse = true
Your prompt content here...

# Include file contents
//include ~/project/README.md
# Paste clipboard contents
//paste
# Execute shell commands
//shell git log --oneline -10
# Run Ruby code
//ruby require 'json'; puts JSON.pretty_generate({status: "ready"})
Analyze the above information and provide insights.

AIA provides powerful context management capabilities in chat mode through checkpoint and restore directives:
# Create a checkpoint with automatic naming (1, 2, 3...)
//checkpoint
# Create a named checkpoint
//checkpoint important_decision
# Restore to the last checkpoint
//restore
# Restore to a specific checkpoint
//restore important_decision
# View context with checkpoint markers
//context

Example Chat Session:
You: Tell me about Ruby programming
AI: Ruby is a dynamic programming language...
You: //checkpoint ruby_basics
You: Now explain object-oriented programming
AI: Object-oriented programming (OOP) is...
You: //checkpoint oop_concepts
You: Actually, let's go back to Ruby basics
You: //restore ruby_basics
You: //context
=== Chat Context ===
Total messages: 4
Checkpoints: ruby_basics, oop_concepts
1. [System]: You are a helpful assistant
2. [User]: Tell me about Ruby programming
3. [Assistant]: Ruby is a dynamic programming language...
📍 [Checkpoint: ruby_basics]
----------------------------------------
4. [User]: Now explain object-oriented programming
=== End of Context ===
Key Features:
- Auto-naming: Checkpoints without names use incrementing integers (1, 2, 3...)
- Named checkpoints: Use meaningful names like `//checkpoint before_refactor`
- Default restore: `//restore` without a name restores to the last checkpoint
- Context visualization: `//context` shows checkpoint markers in conversation history
- Clean slate: `//clear` removes all context and checkpoints
You can extend AIA with custom directives by creating Ruby files that define new directive methods:
# examples/directives/ask.rb
module AIA
  class DirectiveProcessor
    private

    desc "A meta-prompt to LLM making its response available as part of the primary prompt"
    def ask(args, context_manager=nil)
      meta_prompt = args.empty? ? "What is meta-prompting?" : args.join(' ')
      AIA.config.client.chat(meta_prompt)
    end
  end
end

Usage: Use the --tools option to specify a single directive file or a directory of files.
# Load custom directive
aia --tools examples/directives/ask.rb --chat
# Use the results of the custom directive as input to a prompt
//ask gather the latest closing data for the DOW, NASDAQ, and S&P 500

AIA supports running multiple AI models simultaneously, allowing you to:
- Compare responses from different models
- Get consensus answers from multiple AI perspectives
- Leverage the strengths of different models for various tasks
Specify multiple models using comma-separated values with the -m flag:
# Use two models
aia my_prompt -m gpt-4o-mini,gpt-3.5-turbo
# Use three models
aia my_prompt -m gpt-4o-mini,gpt-3.5-turbo,gpt-5-mini
# Works in chat mode too
aia --chat -m gpt-4o-mini,gpt-3.5-turbo

Use the --consensus flag to have the primary model (first in the list) synthesize responses from all models into a unified answer:
# Enable consensus mode
aia my_prompt -m gpt-4o-mini,gpt-3.5-turbo,gpt-5-mini --consensus

Consensus Output Format:
from: gpt-4o-mini (consensus)
Based on the insights from multiple AI models, here is a comprehensive answer that
incorporates the best perspectives and resolves any contradictions...
By default (or with --no-consensus), each model provides its own response:
# Default behavior - show individual responses
aia my_prompt -m gpt-4o-mini,gpt-3.5-turbo,gpt-5-mini
# Explicitly disable consensus
aia my_prompt -m gpt-4o-mini,gpt-3.5-turbo --no-consensus

Individual Responses Output Format:
from: gpt-4o-mini
Response from the first model...
from: gpt-3.5-turbo
Response from the second model...
from: gpt-5-mini
Response from the third model...
View your current multi-model configuration using the //model directive:
# In any prompt file or chat session
//model

Example Output:
Multi-Model Configuration:
==========================
Model count: 3
Primary model: gpt-4o-mini (used for consensus when --consensus flag is enabled)
Consensus mode: false
Model Details:
--------------------------------------------------
1. gpt-4o-mini (primary)
2. gpt-3.5-turbo
3. gpt-5-mini
Key Features:
- Primary Model: The first model in the list serves as the consensus orchestrator
- Concurrent Processing: All models run simultaneously for better performance
- Flexible Output: Choose between individual responses or synthesized consensus
- Error Handling: Invalid models are reported but don't prevent valid models from working
- Batch Mode Support: Multi-model responses are properly formatted in output files
Monitor token consumption and estimate costs across all models with --tokens and --cost:
# Display token usage for each model
aia my_prompt -m gpt-4o,claude-3-sonnet --tokens
# Include cost estimates (automatically enables --tokens)
aia my_prompt -m gpt-4o,claude-3-sonnet --cost
# In chat mode with full tracking
aia --chat -m gpt-4o,claude-3-sonnet,gemini-pro --cost

Token Usage Output:
from: gpt-4o
Here's my analysis of the code...
from: claude-3-sonnet
Looking at this code, I notice...
Tokens: gpt-4o: input=245, output=312 | claude-3-sonnet: input=245, output=287
Cost: gpt-4o: $0.0078 | claude-3-sonnet: $0.0045 | Total: $0.0123
Use Cases for Token/Cost Tracking:
- Budget management - Monitor API costs in real-time during development
- Model comparison - Identify which models are most cost-effective for your tasks
- Optimization - Find the right balance between response quality and cost
- Billing insights - Track usage patterns across different model providers
AIA supports running local AI models through Ollama and LM Studio, providing privacy, offline capability, and cost savings.
Ollama runs AI models locally on your machine.
# Install Ollama (macOS)
brew install ollama
# Pull a model
ollama pull llama3.2
# Use with AIA - prefix model name with 'ollama/'
aia --model ollama/llama3.2 my_prompt
# In chat mode
aia --chat --model ollama/llama3.2
# Combine with cloud models
aia --model ollama/llama3.2,gpt-4o-mini --consensus my_prompt

Environment Variables:
# Optional: Set custom Ollama API endpoint
export OLLAMA_API_BASE=http://localhost:11434

LM Studio provides a desktop application for running local models with an OpenAI-compatible API.
# 1. Install LM Studio from lmstudio.ai
# 2. Download and load a model in LM Studio
# 3. Start the local server in LM Studio
# Use with AIA - prefix model name with 'lms/'
aia --model lms/qwen/qwen3-coder-30b my_prompt
# In chat mode
aia --chat --model lms/your-model-name
# Mix local and cloud models
aia --model lms/local-model,gpt-4o-mini my_prompt

Environment Variables:
# Optional: Set custom LM Studio API endpoint (default: http://localhost:1234/v1)
export LMS_API_BASE=http://localhost:1234/v1

The //models directive automatically detects local providers and queries their endpoints:
# In a prompt file or chat session
//models
# Output will show:
# - Ollama models from http://localhost:11434/api/tags
# - LM Studio models from http://localhost:1234/v1/models
# - Cloud models from RubyLLM database

Benefits of Local Models:
- 🔒 Privacy: No data sent to external servers
- 💰 Cost: Zero API costs after initial setup
- 🚀 Speed: No network latency
- 📡 Offline: Works without internet connection
- 🔧 Control: Full control over model and parameters
AIA automatically processes shell patterns in prompts:
- Environment variables: `$HOME`, `${USER}`
- Command substitution: `$(date)`, `$(git branch --show-current)`
Examples:
# Dynamic system information
As a system administrator on a $(uname -s) platform, how do I optimize performance?
# Include file contents via shell
Here's my current configuration: $(cat ~/.bashrc | head -20)
# Use environment variables
My home directory is $HOME and I'm user $USER.

Security Note: Be cautious with shell integration. Review prompts before execution as they can run arbitrary commands.
AIA supports full ERB processing in prompts for dynamic content generation:
<%# ERB example in prompt file %>
Current time: <%= Time.now %>
Random number: <%= rand(100) %>
<% if ENV['USER'] == 'admin' %>
You have admin privileges.
<% else %>
You have standard user privileges.
<% end %>
<%= AIA.config.model %> is the current model.

Chain multiple prompts for complex workflows:
# Command line
aia analyze --next summarize --next report
# In prompt files
# analyze.txt contains: //next summarize
# summarize.txt contains: //next report

# Command line
aia research --pipeline analyze,summarize,report,present
# In prompt file
//pipeline analyze,summarize,report,present

research.txt:
//config model = gpt-4
//next analyze
Research the topic: [RESEARCH_TOPIC]
Provide comprehensive background information.
analyze.txt:
//config output = analysis.md
//next summarize
Analyze the research data and identify key insights.
summarize.txt:
//config output = summary.md
Create a concise summary of the analysis with actionable recommendations.
Roles define the context and personality for AI responses:
# Use a predefined role
aia --role expert analyze_code.rb
# Roles are stored in ~/.prompts/roles/
# expert.txt might contain:
# "You are a senior software engineer with 15 years of experience..."Creating Custom Roles:
# Create a code reviewer role
cat > ~/.prompts/roles/code_reviewer.txt << EOF
You are an experienced code reviewer. Focus on:
- Code quality and best practices
- Security vulnerabilities
- Performance optimizations
- Maintainability issues
Provide specific, actionable feedback.
EOF

Per-Model Roles (Multi-Model Role Assignment):
Assign different roles to different models using inline model=role syntax:
# Different perspectives on the same design
aia --model gpt-4o=architect,claude=security,gemini=performance design_doc.md
# Output shows each model with its role:
# from: gpt-4o (architect)
# The proposed microservices architecture provides good separation...
#
# from: claude (security)
# I'm concerned about the authentication flow between services...
#
# from: gemini (performance)
# The database access pattern could become a bottleneck...

Multiple Perspectives (Same Model, Different Roles):
# Get optimistic, pessimistic, and realistic views
aia --model gpt-4o=optimist,gpt-4o=pessimist,gpt-4o=realist business_plan.md
# Output shows instance numbers:
# from: gpt-4o #1 (optimist)
# This market opportunity is massive...
#
# from: gpt-4o #2 (pessimist)
# The competition is fierce and our runway is limited...
#
# from: gpt-4o #3 (realist)
# Given our current team size, we should focus on MVP first...

Mixed Role Assignment:
# Some models with roles, some with default
aia --model gpt-4o=architect,claude,gemini=performance --role security design.md
# gpt-4o gets architect (inline)
# claude gets security (default from --role)
# gemini gets performance (inline)

Discovering Available Roles:
# List all available role files
aia --list-roles
# Output:
# Available roles in ~/.prompts/roles:
# - architect
# - performance
# - security
# - code_reviewer
# - specialized/senior_architect # nested paths supported

Role Organization:
Roles can be organized in subdirectories:
# Create nested role structure
mkdir -p ~/.prompts/roles/specialized
echo "You are a senior software architect..." > ~/.prompts/roles/specialized/senior_architect.txt
# Use nested roles
aia --model gpt-4o=specialized/senior_architect design.md

Using Config Files for Model Roles (v2):
Define model-role assignments in your config file (~/.aia/config.yml) for reusable setups:
# Array of hashes format (mirrors internal structure)
model:
  - model: gpt-4o
    role: architect
  - model: claude
    role: security
  - model: gemini
    role: performance

# Also supports models without roles
model:
  - model: gpt-4o
    role: architect
  - model: claude   # No role assigned

Then simply run:
aia design_doc.md # Uses model configuration from config file

Using Environment Variables (v2):
Set default model-role assignments via environment variable:
# Set in your shell profile (.bashrc, .zshrc, etc.)
export AIA_MODEL="gpt-4o=architect,claude=security,gemini=performance"
# Or for a single command
AIA_MODEL="gpt-4o=architect,claude=security" aia design.mdConfiguration Precedence:
When model roles are specified in multiple places, the precedence is:
- Command-line inline (highest): `--model gpt-4o=architect`
- Command-line flag: `--model gpt-4o --role architect`
- Environment variable: `AIA_MODEL="gpt-4o=architect"`
- Config file (lowest): `model` array in `~/.aia/config.yml`
### RubyLLM::Tool Support
AIA supports function calling through RubyLLM tools for extended capabilities:
```bash
# Load tools from directory
aia --tools ~/my-tools/ --chat
# Load specific tool files
aia --tools weather.rb,calculator.rb --chat
# Filter tools
aia --tools ~/tools/ --allowed-tools weather,calc
aia --tools ~/tools/ --rejected-tools deprecated
```
Tool Examples (see examples/tools/ directory):
- File operations (read, write, list)
- Shell command execution
- API integrations
- Data processing utilities
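As a rough illustration of what such a tool can look like, here is a minimal sketch assuming the RubyLLM::Tool DSL (description, param, and an execute method); the file, class, and parameter names are invented for this example:

```ruby
# read_file.rb -- illustrative sketch only; names are hypothetical
require 'ruby_llm'

class ReadFile < RubyLLM::Tool
  description "Reads a text file and returns its contents"
  param :path, desc: "Path to the file to read"

  def execute(path:)
    File.read(File.expand_path(path))
  rescue StandardError => e
    { error: e.message }
  end
end
```

Loaded with something like `aia --tools read_file.rb --chat`, a tool of this shape becomes callable by models that support function calling.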
MCP Client Examples (see examples/tools/mcp/ directory):
AIA supports Model Context Protocol (MCP) clients for extended functionality:
# GitHub MCP Server (requires: brew install github-mcp-server)
# Set GITHUB_PERSONAL_ACCESS_TOKEN environment variable
aia --tools examples/tools/mcp/github_mcp_server.rb --chat
# iMCP for macOS (requires: brew install --cask loopwork/tap/iMCP)
# Provides access to Notes, Calendar, Contacts, etc.
aia --tools examples/tools/mcp/imcp.rb --chat

These MCP clients require the ruby_llm-mcp gem and provide access to external services and data sources through the Model Context Protocol.
AIA supports defining MCP (Model Context Protocol) servers directly in your configuration file. This allows MCP tools to be automatically loaded at startup without needing to specify them on the command line each time.
Add MCP servers to your ~/.aia/config.yml file:
:mcp_servers:
- name: "server-name"
command: "server_command"
args: ["arg1", "arg2"]
timeout: 30000 # milliseconds (default: 8000)
env:
ENV_VAR: "value"| Option | Required | Default | Description |
|---|---|---|---|
name |
Yes | - | Unique identifier for the MCP server |
command |
Yes | - | Executable command (absolute path or found in PATH) |
args |
No | [] |
Array of command-line arguments |
timeout |
No | 8000 |
Connection timeout in milliseconds |
env |
No | {} |
Environment variables for the server process |
The GitHub MCP server provides access to GitHub repositories, issues, pull requests, and more:
# ~/.aia/config.yml
:mcp_servers:
- name: "github"
command: "github-mcp-server"
args: ["stdio"]
timeout: 15000
env:
GITHUB_PERSONAL_ACCESS_TOKEN: "ghp_your_token_here"Setup:
# Install GitHub MCP server (macOS)
brew install github-mcp-server
# Or via npm
npm install -g @anthropic/github-mcp-server
# Set your GitHub token (recommended: use environment variable instead of config)
export GITHUB_PERSONAL_ACCESS_TOKEN="ghp_your_token_here"

A custom Ruby-based MCP server for accessing database-backed long term memory:

gem install htm

See the full HTM documentation for database configuration and system environment variable usage.
# ~/.aia/config.yml
:mcp_servers:
- name: "htm"
command: "htm_mcp.rb"
args: ["stdio"]
timeout: 30000
env:
HTM_DBURL: "postgres://localhost:5432/htm_development"
...Notes:
- The `command` can be just the executable name if it's in your PATH
- AIA automatically resolves command paths, so you don't need absolute paths
- Environment variables in the `env` section are passed only to that MCP server process
You can configure multiple MCP servers to provide different capabilities:
# ~/.aia/config.yml
:mcp_servers:
- name: "github"
command: "github-mcp-server"
args: ["stdio"]
env:
GITHUB_PERSONAL_ACCESS_TOKEN: "ghp_your_token_here"
- name: "htm"
command: "htm_mcp.rb"
args: ["stdio"]
timeout: 30000
env:
HTM_DBURL: "postgres://localhost:5432/htm_development"
- name: "filesystem"
command: "filesystem-mcp-server"
args: ["stdio", "--root", "/Users/me/projects"]When MCP servers are configured, AIA displays them in the startup robot:
, ,
(\____/) AI Assistant (v0.9.23) is Online
(_oo_) gpt-4o-mini (supports tools)
(O) using ruby_llm (v1.9.0 MCP v0.6.1)
__||__ \) model db was last refreshed on
[/______\] / 2025-06-03
/ \__AI__/ \/ You can share my tools
/ /__\ MCP: github, htm
(\ /____\
Use the //tools directive in chat mode to see all available tools including those from MCP servers:
aia --chat
> //tools
Available Tools:
- github_create_issue: Create a new GitHub issue
- github_list_repos: List repositories for the authenticated user
- htm_query: Execute a query against the HTM database
- htm_insert: Insert a record into HTM
...
# Filter tools by name (case-insensitive)
> //tools github
Available Tools (filtered by 'github')
- github_create_issue: Create a new GitHub issue
- github_list_repos: List repositories for the authenticated user

If an MCP server fails to load, AIA will display a warning:
WARNING: MCP server 'github' command not found: github-mcp-server
WARNING: MCP server entry missing name or command: {...}
ERROR: Failed to load MCP server 'htm': Connection timeout
Common Issues:
| Problem | Solution |
|---|---|
| Command not found | Ensure the command is in your PATH or use absolute path |
| Connection timeout | Increase the timeout value |
| Missing environment variables | Add required env vars to the env section |
| Server hangs on startup | Check that all required environment variables are set |
Debug Mode:
Enable debug mode to see detailed MCP server loading information:
aia --debug --chat

Shared Tools Collection: AIA can use the shared_tools gem, which provides a curated collection of commonly-used tools (aka functions), via the --require option.
# Access shared tools automatically (included with AIA)
aia --require shared_tools/ruby_llm --chat
# To access just one specific shared tool
aia --require shared_tools/ruby_llm/edit_file --chat
# Combine with your own local custom RubyLLM-based tools
aia --require shared_tools/ruby_llm --tools ~/my-tools/ --chat

The above examples show shared_tools being used within an interactive chat session. The same --require option makes them available in batch prompts as well. You can also load shared_tools with the //ruby directive or with a require statement inside an ERB block, as sketched below.
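For instance, either of the lines below inside a prompt file should load the library before the prompt is sent (an illustrative sketch; the require path simply mirrors the shared_tools/ruby_llm namespace used above):

```
//ruby require 'shared_tools/ruby_llm'
<%# or, via ERB: %>
<% require 'shared_tools/ruby_llm' %>
```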
# ~/.prompts/code_review.txt
//config model = gpt-4o-mini
//config temperature = 0.3
Review this code for:
- Best practices adherence
- Security vulnerabilities
- Performance issues
- Maintainability concerns
Code to review:

Usage: aia code_review mycode.rb
# ~/.prompts/meeting_notes.txt
//config model = gpt-4o-mini
//pipeline format,action_items
Raw meeting notes:
//include [NOTES_FILE]
Please clean up and structure these meeting notes.

# ~/.prompts/document.txt
//config model = gpt-4o-mini
//shell find [PROJECT_DIR] -name "*.rb" | head -10
Generate documentation for the Ruby project shown above.
Include: API references, usage examples, and setup instructions.

# ~/.prompts/decision_maker.txt
# Compare different AI perspectives on complex decisions
What are the pros and cons of [DECISION_TOPIC]?
Consider: technical feasibility, business impact, risks, and alternatives.
Analyze this thoroughly and provide actionable recommendations.

Usage examples:
# Get individual perspectives from each model
aia decision_maker -m gpt-4o-mini,gpt-3.5-turbo,gpt-5-mini --no-consensus
# Get a synthesized consensus recommendation
aia decision_maker -m gpt-4o-mini,gpt-3.5-turbo,gpt-5-mini --consensus
# Use with chat mode for follow-up questions
aia --chat -m gpt-4o-mini,gpt-3.5-turbo --consensus

The --exec flag is what makes a prompt file executable. If it is not present on the shebang line, the prompt file is treated like any other context file: its contents are included as context in the prompt, but no dynamic content integration or directives are processed. All other AIA options are, well, optional. All you need is an initial prompt ID and the --exec flag.
In the example below, the --no-output option directs the output from the LLM to STDOUT. That way executable prompts can be good citizens on the *nix command line, receiving piped-in input via STDIN and sending their output to STDOUT.
Create executable prompts:
weather_report (make executable with chmod +x):
#!/usr/bin/env aia run --no-output --exec
# Get current storm activity for the east and south coast of the US
Summarize the tropical storm outlook for the Atlantic, Caribbean Sea and Gulf of America.
//webpage https://www.nhc.noaa.gov/text/refresh/MIATWOAT+shtml/201724_MIATWOAT.shtml

Usage:
./weather_report
./weather_report | glow # Render the markdown with glow

# ~/.prompts/run.txt
# Desc: A configuration only prompt file for use with executable prompts
# Put whatever you want here to setup the configuration desired.
# You could also add a system prompt to preface your intended prompt

Usage: echo "What is the meaning of life?" | aia run
# ~/.prompts/ad_hoc.txt
[WHAT_NOW_HUMAN]

Usage: aia ad_hoc - perfect for any quick one-shot question without cluttering shell history.
# ~/.bashrc_aia
export AIA_PROMPTS__DIR=~/.prompts
export AIA_OUTPUT__FILE=./temp.md
export AIA_MODEL=gpt-4o-mini
export AIA_FLAGS__VERBOSE=true # Shows spinner while waiting for LLM response
alias chat='aia --chat --terse'
ask() { echo "$1" | aia run --no-output; }The chat alias and the ask function (shown above in HASH) are two powerful tools for interacting with the AI assistant. The chat alias allows you to engage in an interactive conversation with the AI assistant, while the ask function allows you to ask a question and receive a response. Later in this document the run prompt ID is discussed. Besides using the run prompt ID here its also used in making executable prompt files.
~/.prompts/
├── daily/ # Daily workflow prompts
├── development/ # Coding and review prompts
├── research/ # Research and analysis
├── roles/ # System prompts
└── workflows/ # Multi-step pipelines
AIA executes shell commands and Ruby code embedded in prompts. This provides powerful functionality but requires caution:
- Review prompts before execution, especially from untrusted sources
- Avoid storing sensitive data in prompts (API keys, passwords)
- Use parameterized prompts instead of hardcoding sensitive values
- Limit file permissions on prompt directories on shared systems
# ✅ Good: Use parameters for sensitive data
//config api_key = [API_KEY]
# ❌ Bad: Hardcode secrets
//config api_key = sk-1234567890abcdef
# ✅ Good: Validate shell commands
//shell ls -la /safe/directory
# ❌ Bad: Dangerous shell commands
//shell rm -rf / # Never do this!

# Set restrictive permissions on prompts directory
chmod 700 ~/.prompts
chmod 600 ~/.prompts/*.txt

Prompt not found:
# Check prompts directory
ls $AIA_PROMPTS__DIR
# Verify prompt file exists
ls ~/.prompts/my_prompt.txt
# Use fuzzy search
aia --fuzzy

Model errors:
# List available models
aia --available-models
# Check model name spelling
aia --model gpt-4o # Correct
aia --model gpt4 # Incorrect

Shell integration not working:
# Verify shell patterns
echo "Test: $(date)" # Should show current date
echo "Home: $HOME" # Should show home directoryConfiguration issues:
# Check current configuration
aia --config
# Debug configuration loading
aia --debug --config

| Error | Cause | Solution |
|---|---|---|
| "Prompt not found" | Missing prompt file | Check file exists and spelling |
| "Model not available" | Invalid model name | Use --available-models to list valid models |
| "Shell command failed" | Invalid shell syntax | Test shell commands separately first |
| "Configuration error" | Invalid config syntax | Check config file YAML syntax |
AIA provides multiple log level options to control the verbosity of logging output. These options set the log level for all three loggers:
- aia: Used within the AIA codebase for application-level logging
- llm: Passed to the RubyLLM gem's configuration (`RubyLLM.logger`)
- mcp: Passed to the RubyLLM::MCP process (`RubyLLM::MCP.logger`)
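Conceptually, a single log-level flag applies the same Logger level to all three loggers. A rough sketch of what `--error` implies (AIA's own internal logger is described only in the comment, since its constant name is not part of the documented API):

```ruby
require 'logger'

level = Logger::ERROR                        # what --error maps to
RubyLLM.logger.level      = level            # the "llm" logger
RubyLLM::MCP.logger.level = level            # the "mcp" logger
# ...and the "aia" application logger is set to the same level internally.
```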
| Option | Description |
|---|---|
| `-d, --debug` | Enable debug output (most verbose) and set all loggers to DEBUG level |
| `--no-debug` | Disable debug output |
| `--info` | Set all loggers to INFO level |
| `--warn` | Set all loggers to WARN level (default) |
| `--error` | Set all loggers to ERROR level |
| `--fatal` | Set all loggers to FATAL level (least verbose) |
# Enable debug mode (most verbose - shows all log messages)
aia --debug my_prompt
# Combine with verbose for maximum output
aia --debug --verbose my_prompt
# Use info level for moderate logging
aia --info my_prompt
# Use error level to only see errors and fatal messages
aia --error my_prompt
# Use fatal level for minimal logging (only critical errors)
aia --fatal --chat

Log Level Hierarchy (from most to least verbose):
- debug - All messages including detailed debugging information
- info - Informational messages and above
- warn - Warnings, errors, and fatal messages (default)
- error - Only errors and fatal messages
- fatal - Only critical/fatal messages
Slow model responses:
- Try smaller/faster models: `--model gpt-4o-mini`
- Reduce max_tokens: `--max-tokens 1000`
- Use a lower temperature for faster responses: `--temperature 0.1`
Large prompt processing:
- Break into smaller prompts using `--pipeline`
- Use `//include` selectively instead of including large files
- Consider model context limits
# Run unit tests
rake test
# Run integration tests
rake integration
# Run all tests with coverage
rake all_tests
open coverage/index.html

# Install locally with documentation
just install
# Generate documentation
just gen_doc
# Static code analysis
just flay

ShellCommandExecutor Refactor:
The ShellCommandExecutor is now a class (previously a module) with instance variables for cleaner encapsulation. Class-level methods remain for backward compatibility.
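In practice that means both call styles below should keep working (a sketch only; the method name is a placeholder, not the documented AIA API):

```ruby
# execute_command is a hypothetical method name used purely for illustration
executor = AIA::ShellCommandExecutor.new
executor.execute_command("ls -la")                    # new instance-based style
AIA::ShellCommandExecutor.execute_command("ls -la")   # class-level call retained for backward compatibility
```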
Prompt Variable Fallback:
Variables are always parsed from prompt text when no .json history file exists, ensuring parameter prompting works correctly.
Bug reports and pull requests are welcome on GitHub at https://github.com/MadBomber/aia.
When reporting issues, please include:
- AIA version: `aia --version`
- Ruby version: `ruby --version`
- Minimal reproduction example
- Error messages and debug output
git clone https://github.com/MadBomber/aia.git
cd aia
bundle install
rake test

- Configuration UI for complex setups
- Better error handling and user feedback
- Performance optimization for large prompt libraries
- Enhanced security controls for shell integration
- Enhanced Search: Restore full-text search within prompt files
- UI Improvements: Better configuration management for fzf and rg tools
- Logging: Enhanced logging using Ruby Logger class; integration with RubyLLM and RubyLLM::MCP logging
The gem is available as open source under the terms of the MIT License.