# AI CLI

AI CLI is a Rust application that acts as a provider-agnostic AI assistant within a sandboxed, multi-platform terminal environment. It supports any OpenAI-compatible API to assist with coding tasks, file operations, online searches, email sending, and shell commands. The application takes the initiative to provide solutions, execute commands, and analyze results without explicit user confirmation, unless the action is ambiguous or potentially destructive.

## Features
- Chat Interface: Provides a command-line interface for interacting with AI models.
- Provider Agnostic: Works with any OpenAI-compatible API (Google Gemini, OpenAI, local LLMs, etc.).
- Tool Execution: Executes system commands using the `execute_command` function, allowing the AI to interact with the file system and other system utilities.
- Online Search: Performs online searches using the `search_online` function, enabling the AI to retrieve up-to-date information from the web.
- Email Sending: Sends emails using the `send_email` function, allowing the AI to send notifications or reports.
- Conversation History: Maintains a conversation history to provide context for the AI model.
- Ctrl+C Handling: Gracefully shuts down the application and cleans up resources when Ctrl+C is pressed.
## Project Structure

- `src/main.rs`: Contains the main application logic, including the chat interface, tool execution, and API interaction.
- `src/config.rs`: Handles configuration loading and provider-specific settings.
- `src/search.rs`: Implements the online search functionality using the Tavily Search API.
- `src/command.rs`: Handles system command execution with sandboxing and security considerations.
- `src/email.rs`: Manages email sending functionality with SMTP support.
- `src/alpha_vantage.rs`: Provides integration with the Alpha Vantage API for financial data.
- `src/file_edit.rs`: Implements file editing capabilities, including reading, writing, searching, and applying diffs.
- `src/spinner.rs`: Provides a loading spinner for visual feedback during operations.
## Configuration Setup

To run AI CLI, you need to create a `.aicli.conf` file in your home directory with the following variables:
```
# AI Provider Configuration (Required)
API_BASE_URL=https://generativelanguage.googleapis.com
API_VERSION=v1beta
MODEL=gemini-1.5-flash
API_KEY=your_api_key_here

SMTP_SERVER_IP=localhost
SMTP_USERNAME=
SMTP_PASSWORD=
DESTINATION_EMAIL=
SENDER_EMAIL=
TAVILY_API_KEY=
ALPHA_VANTAGE_API_KEY=
```
### Example: Google Gemini

```
API_BASE_URL=https://generativelanguage.googleapis.com
API_VERSION=v1beta
MODEL=gemini-1.5-flash
API_KEY=your_gemini_api_key_here
```

### Example: OpenAI

```
API_BASE_URL=https://api.openai.com
API_VERSION=v1
MODEL=gpt-4
API_KEY=sk-your_openai_api_key_here
```

### Example: Local LLM (Ollama)

```
API_BASE_URL=http://localhost:11434
API_VERSION=v1
MODEL=llama3
API_KEY=
```

### Example: Custom Provider

```
API_BASE_URL=https://your-provider.com
API_VERSION=v1
MODEL=your-model-name
API_KEY=your_api_key_here
```

### Configuration Variables

- `API_BASE_URL`: The base URL of the AI provider's API endpoint.
- `API_VERSION`: The API version to use (e.g., `v1`, `v1beta`).
- `MODEL`: The model name to use (e.g., `gemini-2.5-flash`, `gpt-4`, `llama3`).
- `API_KEY`: Your API key for authentication.
- `SMTP_SERVER_IP`: The IP address or hostname of the SMTP server (defaults to `localhost` if not specified).
- `SMTP_USERNAME`: Username for SMTP authentication (optional; required for non-localhost servers).
- `SMTP_PASSWORD`: Password for SMTP authentication (optional; required for non-localhost servers).
- `DESTINATION_EMAIL`: The email address to which the `send_email` function will send emails.
- `SENDER_EMAIL`: The email address to use as the sender (optional; defaults to `DESTINATION_EMAIL`).
- `TAVILY_API_KEY`: Your API key for the Tavily Search API.
- `ALPHA_VANTAGE_API_KEY`: Your API key for the Alpha Vantage API.
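Putting the variables together, a minimal provider-only config can be created like this (a sketch that writes to a temp directory for safety; the application itself reads `~/.aicli.conf`, and the key value here is a placeholder):

```shell
# Write a minimal provider-only config (temp dir here; AI CLI reads ~/.aicli.conf)
dir=$(mktemp -d)
conf="$dir/.aicli.conf"
cat > "$conf" <<'EOF'
API_BASE_URL=https://api.openai.com
API_VERSION=v1
MODEL=gpt-4
API_KEY=sk-your_openai_api_key_here
EOF
chmod 600 "$conf"   # the file holds secrets; keep it readable only by you
echo "wrote $conf"
```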
## Usage

1. Clone the repository:

   ```
   git clone <repository_url>
   cd ai-cli
   ```

2. Create a `.aicli.conf` file in your home directory and set the required variables as described in the Configuration Setup section.

3. Run the application:

   ```
   cargo run
   ```

4. Chat with the AI by typing messages in the command-line interface. Use `!command` to run shell commands directly (e.g., `!ls` or `!dir`). Type `exit` to quit or `clear` to reset the conversation.
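The input handling above (plain messages vs. `!` commands vs. `exit`/`clear`) can be sketched as a small dispatcher. This is only an illustration in shell of how inputs are routed, not the actual Rust implementation:

```shell
# Hypothetical sketch of the REPL's input routing
handle_input() {
  case "$1" in
    exit)  echo "quitting" ;;                 # leave the program
    clear) echo "conversation cleared" ;;     # reset conversation history
    '!'*)  echo "shell: ${1#"!"}" ;;          # strip leading '!' and run as a shell command
    *)     echo "ai: $1" ;;                   # everything else goes to the AI model
  esac
}

handle_input '!ls'        # shell: ls
handle_input 'hello'      # ai: hello
handle_input clear        # conversation cleared
```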
## Migration from the Previous Version

If you were using the previous version, you can migrate your configuration:
1. Rename your existing `.gemini.conf` to `.aicli.conf`:

   ```
   mv ~/.gemini.conf ~/.aicli.conf
   ```

2. Add the new required fields to your `.aicli.conf`:

   ```
   API_BASE_URL=https://generativelanguage.googleapis.com
   API_VERSION=v1beta
   MODEL=gemini-1.5-flash
   ```

3. Keep your existing `API_KEY` (renamed from `GEMINI_API_KEY`).
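Assuming the old config contained only `GEMINI_API_KEY`, the migration steps can be scripted as follows (run here against a scratch copy with a dummy key; adapt the paths to your home directory, and note `sed -i` as written is GNU sed syntax):

```shell
# Simulate the migration in a scratch directory (the real files live in $HOME)
demo=$(mktemp -d)
printf 'GEMINI_API_KEY=abc123\n' > "$demo/.gemini.conf"

# 1. Rename the old config file
mv "$demo/.gemini.conf" "$demo/.aicli.conf"

# 2. Rename the key field, then append the new required fields
sed -i 's/^GEMINI_API_KEY=/API_KEY=/' "$demo/.aicli.conf"
cat >> "$demo/.aicli.conf" <<'EOF'
API_BASE_URL=https://generativelanguage.googleapis.com
API_VERSION=v1beta
MODEL=gemini-1.5-flash
EOF

cat "$demo/.aicli.conf"
```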
## Supported Providers

AI CLI is designed to work with any OpenAI-compatible API. The following providers have been tested:
- Google Gemini: Full support with tool calling
- OpenAI: Full support with tool calling
- Local LLMs (Ollama): Basic support (may require adjustments for tool calling)
### Google Gemini

- Uses query parameter authentication (`?key=API_KEY`)
- Endpoint format: `{base_url}/{version}/models/{model}:generateContent`
- Full tool calling support

### OpenAI

- Uses header authentication (`Authorization: Bearer API_KEY`)
- Endpoint format: `{base_url}/{version}/chat/completions`
- Full tool calling support

### Local LLMs (Ollama)

- May not require authentication
- Endpoint format: `{base_url}/{version}/chat/completions`
- Tool calling support varies by model
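To make the endpoint formats concrete, here is how request URLs are assembled from the configuration values (an illustrative shell sketch mirroring the formats above; the values are placeholders):

```shell
# Illustrative config values
API_BASE_URL=https://api.openai.com
API_VERSION=v1
MODEL=gpt-4

# OpenAI-compatible providers: header auth (Authorization: Bearer <API_KEY>)
openai_url="${API_BASE_URL}/${API_VERSION}/chat/completions"
echo "$openai_url"   # https://api.openai.com/v1/chat/completions

# Gemini: query-parameter auth appended as ?key=<API_KEY>
GEMINI_BASE=https://generativelanguage.googleapis.com
gemini_url="${GEMINI_BASE}/v1beta/models/gemini-1.5-flash:generateContent"
echo "$gemini_url"   # https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash:generateContent
```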
## Debugging

Run with the `--debug` flag to see configuration details:

```
cargo run -- --debug
```

This will display:
- AI provider configuration
- API endpoint being used
- Authentication method
- SMTP settings
## Troubleshooting

### Configuration Errors

- Ensure `~/.aicli.conf` exists and contains the required fields
- Check that your API key is valid and has the correct format
- Verify the API base URL is correct for your provider

### Connection Errors

- Check your internet connection
- Verify the API endpoint is accessible
- Ensure your API key has sufficient credits/permissions

### Tool Calling Issues

- Some providers may have limited tool calling support
- Check the provider's documentation for compatibility
- Try using a different model if available
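A quick offline check for missing required fields can be scripted as below. This is a sketch, not part of AI CLI: `check_conf` is a hypothetical helper, demonstrated here against a sample file rather than your real `~/.aicli.conf`:

```shell
# Hypothetical helper: verify the required provider fields are present
check_conf() {
  for key in API_BASE_URL API_VERSION MODEL API_KEY; do
    if grep -q "^${key}=" "$1"; then
      echo "$key: ok"
    else
      echo "$key: MISSING"
    fi
  done
}

# Demo against a sample file (point it at ~/.aicli.conf for a real check)
cfg=$(mktemp)
printf 'API_BASE_URL=https://api.openai.com\nAPI_VERSION=v1\nMODEL=gpt-4\nAPI_KEY=sk-test\n' > "$cfg"
check_conf "$cfg"
```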
## Contributing

Contributions are welcome! Please feel free to submit pull requests or open issues for bugs and feature requests.
## License

This project is licensed under the MIT License.