A powerful interactive shell for Ollama, featuring enhanced chat capabilities, document analysis, image analysis, and real-time web search integration.
Created by Christopher Bradford
Contact: contact@christopherdanielbradford.com
- Interactive chat with any Ollama model
- Model management (download, delete, list)
- System prompt management and storage
- Chat history with export capabilities (Markdown, HTML, PDF)
- Configurable settings and preferences
- **Enhanced Search & Analysis**: Real-time web search integration with DuckDuckGo
- **Image Analysis**: Support for analyzing images using vision models
- **Document Analysis**: Support for various document formats
- **Real-time Information**: Combine search results with model analysis
- **Confluence Integration**: Search and analyze Confluence content with LLM-powered insights
- **Export Capabilities**: Export chats in multiple formats
- **Drag & Drop**: Easy file sharing in chat
- **Context Management**: Intelligent management of conversation context
- **Knowledge Base**: Local vector database for persistent information storage
- **Fine-Tuning**: Fine-tune models with Unsloth (NVIDIA) or MLX (Apple Silicon)
- **Enhanced File Creation**: Create files with natural language commands using intelligent filename extraction and content generation
- Create and activate a virtual environment:
  ```bash
  python -m venv venv
  source venv/bin/activate  # Unix/Mac
  ```
- Install dependencies:
  ```bash
  pip install -r requirements.txt
  ```
- Run the application:
  ```bash
  python ollama_shell.py
  ```

The Context Management feature allows you to optimize token usage and maintain important information in long conversations:
- **Pin Important Messages:**
  ```
  /pin 5
  ```
  This pins message #5 to ensure it stays in context even as the conversation grows.
- **Exclude Irrelevant Messages:**
  ```
  /exclude 3
  ```
  This excludes message #3 from the context, saving tokens for more relevant information.
- **Summarize Conversations:**
  ```
  /summarize
  ```
  This creates a concise summary of the conversation to save tokens while maintaining context.
- **View Context Status:**
  ```
  /context
  ```
  Shows the current status of your context management, including pinned and excluded messages.
- **Check Token Usage:**
  ```
  /tokens
  ```
  Displays detailed token usage information for each message.
The Enhanced Search feature allows you to combine real-time web information with model analysis:
- **Basic Search:**
  ```
  search: What are the latest developments in AI?
  ```
  This will search the web and provide an analyzed summary.
- **Focused Questions:**
  ```
  search: Who won the latest Super Bowl and what was the score?
  ```
  Gets real-time information and provides accurate, up-to-date answers.
- **Technical Queries:**
  ```
  search: How to implement rate limiting in Python?
  ```
  Combines web search results with model expertise for comprehensive answers.
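Under the hood, a search-augmented answer boils down to fetching web results and handing them to the model as context. Below is a minimal sketch of that pattern, assuming the `duckduckgo_search` and `ollama` Python packages; it illustrates the idea rather than the shell's actual implementation.

```python
from duckduckgo_search import DDGS
import ollama

def search_and_answer(query: str, model: str = "llama3.2") -> str:
    # Fetch a handful of web results for the query
    results = DDGS().text(query, max_results=5)
    context = "\n".join(f"- {r['title']}: {r['body']}" for r in results)

    # Ask the model to answer using the fetched snippets as context
    response = ollama.chat(
        model=model,
        messages=[{
            "role": "user",
            "content": f"Using these search results:\n{context}\n\nAnswer: {query}",
        }],
    )
    return response["message"]["content"]

print(search_and_answer("What are the latest developments in AI?"))
```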
The Web Browsing integration allows you to gather information from the web and save it to files:
- **Headline Gathering:**
  ```
  gather the latest news headlines and save them in "todaysNews.txt"
  collect gaming headlines and save to "gamingNews.txt"
  find tech headlines and save them as "techUpdate.txt"
  ```
  This will intelligently gather headlines from relevant websites based on the topic and save them to the specified file.
- **General Information Gathering:**
  ```
  gather information about gardening tips for beginners and save it as "gardeningGuide.txt"
  research information on electric vehicles and save to "evGuide.txt"
  ```
  This will collect comprehensive information about the topic and save it to the specified file.
- **Dynamic Content Type Detection:**
  The system automatically detects the content type based on your query and selects the most relevant sources:
  - News: BBC, CNN, The Guardian, AP News, Al Jazeera
  - Gaming: IGN, GameSpot, Polygon, Kotaku, PC Gamer
  - Tech: The Verge, TechCrunch, Wired, Ars Technica, CNET
  - General: Uses the LLM to determine relevant sites based on the query
- **Custom Filenames:**
  You can specify a custom filename in your request, or the system will use an appropriate default based on the content type:
  - News: `newsSummary.txt`
  - Gaming: `gameSummary.txt`
  - Tech: `techSummary.txt`
  - General: `topicSummary.txt`
- **File Storage:**
  All files are saved to your Documents folder by default, and the full path is returned in the response for easy access.
Analyze images using vision models:
- **From Menu:**
  - Select "Analyze Image" from the menu
  - Enter the path to your image
  - Optionally provide a custom prompt
- **Drag & Drop:**
  - Press Ctrl+V to toggle drag & drop mode
  - Drag an image file into the terminal
  - The vision model will analyze the image
- **In Chat Context:**
  - Images can be included in chat for analysis
  - Supported formats: JPG, PNG, GIF, BMP, WebP
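Outside the shell, the same kind of analysis can be scripted against a vision-capable model. A minimal sketch, assuming the `ollama` Python package and a pulled vision model such as `llava` (the model name is an assumption, not necessarily what the shell uses):

```python
import ollama

def analyze_image(image_path: str, prompt: str = "Describe this image.") -> str:
    # Vision models accept image file paths via the "images" field of a message
    response = ollama.chat(
        model="llava",  # assumed vision model; use whichever vision model you have pulled
        messages=[{"role": "user", "content": prompt, "images": [image_path]}],
    )
    return response["message"]["content"]

print(analyze_image("sunset.jpg", "What is happening in this picture?"))
```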
Analyze various document types:
- **Supported Formats:**
  - Word Documents (.docx, .doc)
  - Excel Spreadsheets (.xlsx, .xls)
  - PDF Files (.pdf)
  - Text Files (.txt, .md)
  - Code Files (.py, .js, .html, etc.)
- **Drag & Drop:**
  - Press Ctrl+V to toggle drag & drop mode
  - Drag a document into the terminal
  - The model will analyze the document content
- **File Creation:**
  - Create files directly from the chat using the `/create` command:
    ```
    /create [filename] [content]
    ```
  - Natural language syntax:
    ```
    /create a haiku about nature and save to nature.txt
    ```
  - Example:
    ```
    /create data.csv "Name,Age,City\nJohn,30,New York\nJane,25,Boston"
    ```
  - Supported formats: TXT, CSV, DOC/DOCX, XLS/XLSX, PDF
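To reproduce the document-analysis flow in your own scripts, the idea is simply to extract the text and pass it to a model. A minimal sketch, assuming the `pypdf`, `python-docx`, and `ollama` packages are installed (the shell's internal implementation may use different libraries):

```python
from pathlib import Path

import ollama
from docx import Document
from pypdf import PdfReader

def extract_text(path: str) -> str:
    # Pull plain text out of a PDF, Word document, or text-like file
    suffix = Path(path).suffix.lower()
    if suffix == ".pdf":
        return "\n".join(page.extract_text() or "" for page in PdfReader(path).pages)
    if suffix == ".docx":
        return "\n".join(p.text for p in Document(path).paragraphs)
    return Path(path).read_text(encoding="utf-8", errors="ignore")

def analyze_document(path: str, model: str = "llama3.2") -> str:
    text = extract_text(path)
    response = ollama.chat(
        model=model,
        messages=[{"role": "user", "content": f"Summarize this document:\n\n{text[:8000]}"}],
    )
    return response["message"]["content"]

print(analyze_document("report.pdf"))
```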
Interact with your filesystem using natural language commands through the Filesystem MCP Protocol integration:
- **Basic Usage:**
  ```
  /fsnl list files in my Documents folder
  ```
  This will list all files and directories in your Documents folder.
- **Supported Operations:**
  - **List Directories:** View contents of directories
    ```
    /fsnl show me what's in my Downloads folder
    ```
  - **Create Directories:** Create new folders
    ```
    /fsnl create a new folder called 'ProjectData' in my Documents
    ```
  - **Read Files:** View the contents of text files
    ```
    /fsnl show me the contents of config.json
    ```
  - **Write Files:** Create or modify text files
    ```
    /fsnl create a file called notes.txt in Documents with the text "Meeting notes for today"
    ```
  - **Find Files:** Search for files matching patterns
    ```
    /fsnl find all PDF files in my Documents folder
    ```
  - **Delete Files:** Remove files (use with caution)
    ```
    /fsnl delete the temporary file temp.txt
    ```
- **Path Handling:**
  - Supports absolute paths: `/Users/username/Documents`
  - Supports relative paths: `Documents/ProjectData`
  - Supports user home shortcuts: `~/Documents` or just `Documents`
  - Handles special paths like `/Documents` (automatically maps to the user's Documents folder)
- **Best Practices:**
  - Be specific about file and directory names to avoid ambiguity
  - Use full paths when working with files outside common directories
  - Verify operations before deleting files or overwriting important data
  - For complex operations, break them down into multiple commands
  - Use the model's capabilities to explain file contents or summarize directory listings
- **Security Features:**
  - Operations are limited to user-accessible directories
  - Sensitive system directories are protected
  - All operations are executed with user permissions
Interact with your Confluence instance (both Cloud and Server) using natural language commands through the Confluence MCP Protocol integration:
- **Automatic Setup During Installation:**
  - During installation, you'll be prompted to set up the Confluence integration
  - If you choose to set it up, a configuration file will be created at `Created Files/confluence_config.env`
  - You can edit this file to provide your Confluence details
- **Manual Setup:**
  - Copy the template file from `Created Files/config/confluence_config_template.env` to `Created Files/confluence_config.env`
  - Edit the file to provide your Confluence details:
    ```env
    # Confluence Configuration

    # Confluence URL (e.g., https://wiki.example.com or https://your-domain.atlassian.net)
    CONFLUENCE_URL=https://your-confluence-url

    # Your username/email for Confluence
    CONFLUENCE_EMAIL=your.email@example.com

    # Your Personal Access Token (PAT) or API token
    CONFLUENCE_API_TOKEN=your_token_here

    # Authentication method (basic, bearer, or pat)
    CONFLUENCE_AUTH_METHOD=pat

    # Is this a Confluence Cloud instance? (true/false)
    CONFLUENCE_IS_CLOUD=false
    ```
  - For detailed instructions, see the Comprehensive Setup Guide
- **Authentication Methods:**
  - Confluence Server: Use a Personal Access Token (PAT) with the `pat` authentication method
  - Confluence Cloud: Use an API token with the `basic` authentication method
- **Using the Integration:**
  - Run the `/confluence` command to activate the integration
  - Use natural language to interact with your Confluence instance, for example:
    - "List all spaces in Confluence"
    - "Create a new page titled 'Meeting Notes' in the 'Team' space"
    - "Search for pages about 'project planning'"
    - "Get the content of the 'Home' page in the 'Documentation' space"
- **API Tokens:**
  - For Confluence Cloud: Generate an API token at https://id.atlassian.com/manage-profile/security/api-tokens
  - For Confluence Server: Generate a Personal Access Token in your Confluence Server instance
  - Configuration will be saved securely as environment variables
- **Testing Your Configuration:**
  - Run the test script to verify your Confluence setup:
    ```bash
    python test_confluence_setup.py
    ```
  - The script will check your configuration file, validate settings, and test the connection to your Confluence instance
- **Troubleshooting:**
  - If you encounter issues with the Confluence integration, refer to the Confluence Troubleshooting Guide
  - The guide covers common problems with connection, authentication, and permissions
- **Example Script:**
  - An example script is provided to demonstrate how to use the Confluence integration programmatically (a standalone REST sketch is also shown after this section)
  - Located at `examples/confluence_example.py`
  - Run the script to see how to connect to Confluence, list spaces, and search for content:
    ```bash
    python examples/confluence_example.py
    ```
- **Supported Operations:**
  - **List Spaces:** View all accessible spaces in your Confluence instance
    ```
    list all spaces
    ```
  - **Get Space Details:** View details about a specific space
    ```
    show me details about the Engineering space
    ```
  - **List Pages:** View pages in a specific space
    ```
    list all pages in the Marketing space
    ```
  - **Get Page Content:** View the content of a specific page
    ```
    show me the content of the "Project Roadmap" page
    ```
  - **Create Pages:** Create new pages in a space
    ```
    create a new page titled "Meeting Notes" in the Team space with content "Notes from today's meeting"
    ```
  - **Update Pages:** Update existing pages
    ```
    update the "Weekly Status" page in the Project space with new content
    ```
  - **Search Content:** Search for content using Confluence Query Language (CQL). For more targeted searches, use specific query formats:
    ```
    search for pages containing "budget proposal"
    search Confluence for information about Polarion
    find all Confluence pages containing "API documentation"
    search Confluence for "release planning"
    ```
  - **Content Analysis:** Get AI-powered analysis of search results
    ```
    analyze Confluence content about "deployment procedures"
    explain the information in Confluence about SSL certificates
    ```
    The LLM will automatically analyze search results to provide direct answers to your queries, extracting relevant information from multiple pages.
  - **Manage Labels:** Add or remove labels from pages
    ```
    add labels "important,review" to the "Quarterly Report" page
    ```
- **Best Practices:**
- Be specific about space and page names to avoid ambiguity
- For complex operations, break them down into multiple commands
- Use the model's capabilities to summarize or explain page content
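If you prefer to script against Confluence directly (for example, to see roughly what the integration does under the hood), the standard REST API can be called with plain `requests`. The following is a minimal sketch, assuming a Confluence Server instance and a PAT sent as a bearer token; it is illustrative only and not the shell's own code.

```python
import os
import requests

BASE_URL = os.environ["CONFLUENCE_URL"]          # Confluence Cloud prefixes REST routes with /wiki
TOKEN = os.environ["CONFLUENCE_API_TOKEN"]
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

def list_spaces(limit=10):
    # GET /rest/api/space returns the spaces the token can see
    resp = requests.get(f"{BASE_URL}/rest/api/space", headers=HEADERS, params={"limit": limit})
    resp.raise_for_status()
    return resp.json()["results"]

def search_pages(text, limit=5):
    # CQL search: pages whose text contains the given phrase
    cql = f'type=page and text ~ "{text}"'
    resp = requests.get(f"{BASE_URL}/rest/api/content/search", headers=HEADERS,
                        params={"cql": cql, "limit": limit})
    resp.raise_for_status()
    return resp.json()["results"]

for space in list_spaces():
    print(space["key"], "-", space["name"])
for page in search_pages("project planning"):
    print(page["id"], page["title"])
```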
Ollama Shell includes a powerful Jira integration that allows you to interact with Jira through natural language commands.
- **Setup:**
  - Run the `/jira setup` command to configure the integration
  - You'll be prompted to enter your Jira URL, user email, and API token
  - For detailed setup instructions, see the Jira Setup Guide
- **Authentication:**
  - Generate an API token at https://id.atlassian.com/manage-profile/security/api-tokens
  - Use your Atlassian account email address
  - Configuration will be saved securely as environment variables
- **Testing Your Configuration:**
  - After setup, test your configuration with:
    ```
    /jira status
    ```
- **Troubleshooting:**
  - If you encounter issues with the Jira integration, refer to the Jira Troubleshooting Guide
  - The guide covers common problems with connection, authentication, and JQL queries
- **Example Script:**
  - An example script is provided to demonstrate how to use the Jira integration programmatically (a standalone REST sketch follows the operations list below)
  - Located at `examples/jira_example.py`
  - Run the script to see how to search for issues, get issue details, add comments, and update issues:
    ```bash
    python examples/jira_example.py
    ```
- **Supported Operations:**
  - **Natural Language Search:** Find issues using intuitive natural language queries
    ```
    /jira search highest priority bugs
    /jira find issues assigned to me and status = "In Progress"
    ```
    The natural language query processor supports a wide range of query patterns:
    ```
    /jira show high priority issues assigned to me
    /jira display issues assigned to John Smith except for closed items
    /jira find open bugs with high priority in the PROJECT project
    /jira list all unresolved issues created this week
    ```
    The system intelligently handles various priority formats (P2, High Priority), resolution statuses, and assignee specifications to generate the correct JQL query.
  - **Get Issue Details:** View detailed information about a specific issue
    ```
    /jira get PROJECT-123
    ```
  - **Add Comments:** Add comments to issues
    ```
    /jira comment PROJECT-123 This is a comment added via Ollama Shell
    ```
  - **Update Issues:** Update issue fields such as status, priority, or assignee
    ```
    /jira update PROJECT-123 status "In Progress"
    ```
  - **Issue Analysis:** Get AI-powered analysis of issues
    ```
    /jira analyze PROJECT-123
    ```
    The LLM will automatically analyze the issue details to provide insights, identify key points from the description, current status, next steps, and recommendations for resolution.
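As with Confluence, the Jira side can also be scripted directly against the REST API if you want to see roughly what a JQL search involves. A minimal sketch, assuming Jira Cloud with email and API token basic auth; it uses the standard `/rest/api/2/search` endpoint and is not the shell's own implementation.

```python
import os
import requests

BASE_URL = os.environ["JIRA_URL"]                        # e.g. https://your-domain.atlassian.net
AUTH = (os.environ["JIRA_EMAIL"], os.environ["JIRA_API_TOKEN"])

def search_issues(jql, max_results=10):
    # Standard JQL search endpoint; returns issue keys, summaries, statuses, etc.
    resp = requests.get(
        f"{BASE_URL}/rest/api/2/search",
        auth=AUTH,
        params={"jql": jql, "maxResults": max_results, "fields": "summary,status,priority"},
    )
    resp.raise_for_status()
    return resp.json()["issues"]

for issue in search_issues('assignee = currentUser() AND status = "In Progress"'):
    fields = issue["fields"]
    print(issue["key"], "-", fields["summary"], f'({fields["status"]["name"]})')
```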
Store and retrieve information using a local vector database:
- Features:
- Persistent storage across sessions
- Semantic search for relevant information
- Automatic integration with chat context
- Vector embeddings for similarity matching
- **Usage:**
  - Add important information with `/kb add [text]`
  - Search the knowledge base with `/kb search [query]`
  - Enable/disable with `/kb toggle`
  - View status with `/kb status`
- **Benefits:**
  - Remember important facts between sessions
  - Build a personalized knowledge repository
  - Automatically enhance responses with relevant context
  - Reduce repetitive questions
- **Adding Documents:**
  - Drag & drop documents directly into the chat
  - Choose to add them to the knowledge base when prompted
  - Documents are automatically chunked for optimal storage
  - Search across all documents with `/kb search [query]`
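The core idea behind the knowledge base is embedding text and retrieving the most similar entries at query time. Below is a minimal sketch of that retrieval pattern, assuming the `ollama` and `numpy` packages and the `nomic-embed-text` embedding model (the model name is an assumption; the shell's own storage layer may differ).

```python
import ollama
import numpy as np

def embed(text: str) -> np.ndarray:
    # Turn a piece of text into a vector via a local embedding model
    result = ollama.embeddings(model="nomic-embed-text", prompt=text)
    return np.array(result["embedding"])

knowledge = ["The project deadline is March 14.", "Staging runs on port 8443."]
vectors = [embed(item) for item in knowledge]

def kb_search(query: str, top_k: int = 1):
    # Rank stored entries by cosine similarity to the query vector
    q = embed(query)
    scores = [float(q @ v / (np.linalg.norm(q) * np.linalg.norm(v))) for v in vectors]
    best = sorted(range(len(scores)), key=scores.__getitem__, reverse=True)[:top_k]
    return [knowledge[i] for i in best]

print(kb_search("When is the deadline?"))
```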
Fine-tune models using a modular system that supports Unsloth (for NVIDIA GPUs) or MLX (for Apple Silicon):
- **Features:**
- Modular architecture for better maintainability and extensibility
- Automatic hardware detection for optimal framework selection
- Support for Unsloth on NVIDIA GPUs
- Support for MLX on Apple Silicon
- Direct integration with Ollama models
- CPU fallback for other systems
- Dataset preparation from various formats
- Detailed progress tracking with ETA estimation
- Job control (pause, resume, delete)
- Easy export to Ollama
- **Prerequisites:**
  - For macOS users:
    - Homebrew (`brew`) is required
    - `cmake` is needed (will be installed automatically if missing)
  - For NVIDIA GPU users:
    - CUDA drivers must be installed
    - `nvidia-smi` should be available in your path
- **Fine-Tuning Workflow:**
  ```
  # Install required dependencies based on your hardware
  /finetune install

  # Check system status and hardware detection
  /finetune status

  # Prepare a dataset for fine-tuning
  /finetune dataset /path/to/your/dataset.json

  # List available datasets
  /finetune datasets

  # Create a fine-tuning job
  /finetune create my_job_name llama3.2:latest

  # Change dataset for the job (optional)
  /finetune dataset-set my_job_name another_dataset_id

  # Start the fine-tuning process
  /finetune start my_job_name

  # Monitor progress
  /finetune list

  # Export the fine-tuned model to Ollama
  /finetune export my_job_name

  # Remove unused datasets (optional)
  /finetune dataset-remove dataset_id
  ```
- **Supported Dataset Formats** (an example of preparing one is sketched after this section):
  - JSON: List of objects with "text" field or "prompt"/"completion" fields
  - JSONL: One JSON object per line
  - CSV: With header row and text column
  - TXT: Plain text files (one sample per line)
- **Framework-Specific Notes:**
  - MLX (Apple Silicon):
    - Uses MLX-LM for efficient fine-tuning on Apple Silicon
    - Automatically installed from source to ensure compatibility
    - Directly integrates with Ollama models via the API
    - Optimized for M1/M2/M3 chips
  - Unsloth (NVIDIA):
    - Uses Unsloth for 2-3x faster fine-tuning on NVIDIA GPUs
    - Supports QLoRA for memory-efficient training
    - Compatible with most Hugging Face models
- **Testing Your Fine-Tuned Model:**
  ```
  # Switch to your fine-tuned model
  /model my_job_name

  # Test with different prompts to see how it performs
  What is the capital of France?

  # Compare with the original model
  /model llama3.2:latest
  What is the capital of France?

  # You can also use the /compare command to test both models side by side
  /compare my_job_name llama3.2:latest "What is the capital of France?"
  ```
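As a concrete illustration of the dataset formats listed above, the snippet below writes a tiny prompt/completion dataset as both JSON and JSONL. It is a minimal sketch; the field names follow the formats described above, but check your framework's documentation for exact requirements.

```python
import json

samples = [
    {"prompt": "What is the capital of France?", "completion": "Paris."},
    {"prompt": "Who wrote Hamlet?", "completion": "William Shakespeare."},
]

# JSON: a single list of objects with prompt/completion fields
with open("dataset.json", "w", encoding="utf-8") as f:
    json.dump(samples, f, indent=2)

# JSONL: one JSON object per line
with open("dataset.jsonl", "w", encoding="utf-8") as f:
    for sample in samples:
        f.write(json.dumps(sample) + "\n")
```

Either file can then be registered with `/finetune dataset dataset.json` as shown in the workflow above.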
Supported document formats:
- Word Documents (.docx, .doc)
- Excel Spreadsheets (.xlsx, .xls)
- PDF Files (.pdf)
- Text Files (.txt, .md)
- Code Files (.py, .js, .html, .css, .json, .yaml, .yml)

Supported image formats:
- JPEG/JPG
- PNG
- GIF
- BMP
- WebP

Chat export formats:
- Markdown
- HTML
- PDF
- Python 3.8+
- Ollama with compatible models
- See `requirements.txt` for the full list
- Clone the repository:
  ```bash
  git clone https://github.com/yourusername/ollama_shell.git
  cd ollama_shell
  ```
- Run the installation script:

  On macOS/Linux:
  ```bash
  ./install.sh
  ```
  On Windows:
  ```bash
  install.bat
  ```
  This will:
  - Create the `Created Files` directory for storing user data
  - Set up a virtual environment (if not already present)
  - Install all required dependencies
  - Configure the application for first use
Configuration is stored in `config.json` and includes:
- Default model settings
- Vision model preferences
- History and logging preferences
- UI customization options
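For programmatic tweaks, the settings file can be read and written like any JSON file. A minimal sketch follows; the key names shown are hypothetical placeholders rather than the shell's actual schema, so inspect your own `config.json` before editing.

```python
import json
from pathlib import Path

config_path = Path("config.json")
config = json.loads(config_path.read_text()) if config_path.exists() else {}

# Hypothetical keys shown for illustration only; check your actual config.json
config.setdefault("default_model", "llama3.2")
config.setdefault("vision_model", "llava")
config.setdefault("save_history", True)

config_path.write_text(json.dumps(config, indent=2))
print(json.dumps(config, indent=2))
```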
- Use `Ctrl+V` to toggle drag & drop mode
- Combine search and document analysis for comprehensive answers
- Save frequently used prompts for quick access
- Export important conversations for future reference
- Use custom prompts with image analysis for specific insights
- Use `/pin` to keep important context in long conversations
- Use `/summarize` to condense long conversations while maintaining context
- Use the knowledge base to store and retrieve important information across sessions
- Add documents to your knowledge base via drag & drop for persistent access to their content
Ollama Shell stores user-specific data in the `Created Files` directory, which is excluded from Git to protect your privacy and prevent accidental sharing of personal data. This includes:
- Fine-tuning jobs: All fine-tuning job data, including models and logs
- Datasets: Prepared datasets for fine-tuning
- Configuration: User-specific configuration files
- Generated models: Model files created during fine-tuning

This separation ensures that your personal models, datasets, and configurations remain private and are not accidentally committed to version control.
- `/pin [message_number]` - Pin a message to keep it in context
- `/unpin [message_number]` - Unpin a previously pinned message
- `/exclude [message_number]` - Exclude a message from context
- `/include [message_number]` - Include a previously excluded message
- `/summarize` - Summarize the conversation to save tokens
- `/context` - Show current context management status
- `/tokens` - Show token usage information
- `/help` - Show context management help
- `/kb status` - Show knowledge base status and statistics
- `/kb add [text]` - Add text to your knowledge base
- `/kb search [query]` - Search your knowledge base for relevant information
- `/kb toggle` - Enable or disable knowledge base integration
Ollama Shell can generate and save content in various file formats directly from the chat. This feature supports intelligent content generation based on your requests.
- `/create [filename] [content]` - Create a file with the specified content
- `/create [request] and save to [filename]` - Generate content based on your request and save it to the specified file
- Text files (`.txt`)
- Markdown files (`.md`)
- CSV files (`.csv`)
- Word documents (`.doc`, `.docx`)
- Excel spreadsheets (`.xls`, `.xlsx`)
- PDF files (`.pdf`)
- Code files (`.py`, `.js`, `.html`, `.css`, `.json`)
```
# Create a text file with specific content
/create notes.txt This is a simple note with some text content.

# Generate content based on a request
/create a haiku about redwood forests and save to redwood.txt

# Create a structured document
/create sample.docx Write a formal business letter to ABC Corp requesting a product catalog.

# Generate a spreadsheet
/create sales_data.xlsx Create a spreadsheet with quarterly sales data for 2024.
```

Ollama Shell provides powerful agentic task completion capabilities using local LLMs. This feature allows you to execute complex tasks through natural language commands without relying on external services.
- **Installation:**
  - Install Agentic Mode directly from Ollama Shell:
    ```
    /agentic --install
    ```
  - Choose your preferred installation method (uv or conda)
  - If Agentic Mode is already installed, the system will automatically update it instead
  - The installation process will:
    - Install all required dependencies (including langchain, toml, and requests)
    - Create the necessary configuration files for Ollama integration
    - Set up proper Python module structure for importing
Ollama Shell seamlessly integrates with Agentic Mode to use local Ollama models:
- The integration automatically configures the system to work with Ollama models
- No API keys are required - everything works locally
- Both regular and vision-capable models are supported
- The system will automatically detect and fix any configuration issues
- All required dependencies are automatically installed when needed
- **Configuration:**
  - Configure Agentic Mode to work with your Ollama models:
    ```
    /agentic --configure
    ```
  - This will use your default model from the Ollama Shell configuration
  - Or specify a particular model:
    ```
    /agentic --configure --model llama3.2
    ```
  - You can also configure Agentic Mode through the Settings menu:
    ```
    /settings
    ```
  - This creates a configuration file that connects to your local Ollama instance
- **Single Task Execution:**
  - Execute a one-off task using natural language:
    ```
    /agentic Create a Python script that downloads images from a website
    ```
  - The agent will break down the task, create the necessary files, and execute the required steps
- **Interactive Mode:**
  - Enter an interactive session with the agent:
    ```
    /agentic --mode
    ```
  - This allows for multi-turn conversations and complex task execution
- **Specifying Models:**
  - By default, Agentic Mode will use your default model from the Ollama Shell configuration
  - Optionally specify a different model for task execution:
    ```
    /agentic --model codellama Find all Python files in my project and analyze their complexity
    ```
The Agentic Assistant provides a user-friendly interface for executing complex tasks through natural language instructions:
- **Basic Usage:**
  ```bash
  # From the main menu, select the "Agentic Assistant" option
  # Or use the command line
  ollama_shell.py assistant
  ```
- **Example Tasks:**
  - Create files with specific content:
    ```
    # In the Agentic Assistant mode
    > Create a Python script that calculates prime numbers and save it to primes.py
    ```
  - Analyze images:
    ```
    # In the Agentic Assistant mode
    > Analyze the image sunset.jpg and describe what you see
    ```
  - General task assistance:
    ```
    # In the Agentic Assistant mode
    > Help me write a regular expression to match email addresses
    ```
- **Features:**
- Intelligent task recognition and routing
- Support for file creation with natural language descriptions
- Image analysis capabilities
- Step-by-step guidance for complex tasks
- Integration with existing Ollama models
The Agentic Mode and Agentic Assistant enable a wide range of complex tasks:
- **File Operations:**
  - Create, read, update, and delete files
  - Search for files with specific patterns
  - Organize and categorize files
- **Code Generation and Analysis:**
  - Generate code in various programming languages
  - Debug existing code
  - Refactor and optimize code
- **Data Processing:**
  - Extract data from various formats (CSV, JSON, etc.)
  - Transform and analyze data
  - Generate reports and visualizations
- **System Automation:**
  - Execute system commands
  - Monitor processes
  - Schedule tasks
```
# Generate a Python script for web scraping
/agentic Create a Python script that scrapes news headlines from CNN

# Analyze and organize files
/agentic Find all PDF files in my Downloads folder and organize them by topic

# Data analysis task
/agentic Analyze the data in sales.csv and create a summary report

# Code refactoring
/agentic Refactor my Python script to use async functions
```

- **Be Specific:** Provide clear and detailed instructions for best results
- **Start Simple:** Begin with straightforward tasks before attempting more complex ones
- **Use Interactive Mode:** For multi-step tasks, interactive mode provides better context
- **Model Selection:** Different models excel at different tasks; experiment to find the best fit
- **Installation Issues:**
  - Ensure Python 3.10+ is installed
  - Try an alternative installation method (uv/conda)
  - Check system dependencies
- **Configuration Problems:**
  - Verify Ollama is running
  - Ensure the specified model is downloaded
  - Check the configuration file for correct settings
- **Task Execution Errors:**
  - Break complex tasks into smaller steps
  - Provide more detailed instructions
  - Try a different model
Contributions are welcome! Please feel free to submit a Pull Request.
MIT License - See LICENSE file for details
