marcus888-techstack/make-your-own-ai-agent

Make Your Own AI Agent

A comprehensive demo collection showing how to build AI agents using LangChain with different models and tools.

🚀 Quick Start

Prerequisites

  1. Python 3.8+ installed
  2. uv for fast Python package management:
    # Install uv
    curl -LsSf https://astral.sh/uv/install.sh | sh
    # or with pip
    pip install uv
  3. Project setup:
    # Create virtual environment and install dependencies
    uv venv
    source .venv/bin/activate  # On Windows: .venv\Scripts\activate
    uv pip install langchain langchain-ollama langchain-google-genai
  4. API key (for cloud models): get a Google API key from Google AI Studio (see Setup Instructions below)
  5. Ollama (for local models):
    • Install from ollama.ai
    • Pull a model: ollama pull gpt-oss:20b

📁 Demo Files

01_use_chat_model.py

Basic Chat Model Usage with Google Gemini

Learn how to:

  • Initialize chat models with API keys
  • Send basic chat messages
  • Handle multi-turn conversations
  • Use streaming responses
  • Batch process multiple queries
  • Implement async processing
uv run python 01_use_chat_model.py

02_use_tools.py

Tool Creation and Integration

Explore:

  • Creating custom tools with @tool decorator
  • Binding tools to chat models
  • Manual tool execution
  • Tool execution loops
  • Advanced tools with complex return types
  • Error handling in tools
uv run python 02_use_tools.py

03_use_ollama.py

Local AI with Ollama

Discover:

  • Setting up ChatOllama for local inference
  • Using different models (llama2, codellama, mistral, etc.)
  • System prompts and conversation management
  • Streaming responses
  • Custom model parameters
  • Model availability checking
uv run python 03_use_ollama.py

04_use_tools_with_ollama.py

Combining Local AI with Tools

Master:

  • Integrating tools with Ollama models
  • File operations (read/write)
  • Multi-step tool execution chains
  • Weather and calculation combos
  • Error handling with tools
  • Context-aware tool usage
uv run python 04_use_tools_with_ollama.py

🛠️ Setup Instructions

For Google Gemini (Cloud)

  1. Ensure virtual environment is activated:

    source .venv/bin/activate  # On Windows: .venv\Scripts\activate
  2. Get your API key from Google AI Studio

  3. Set environment variable:

    macOS/Linux:

    export GOOGLE_API_KEY="your-api-key-here"
    # To make it permanent, add to ~/.bashrc or ~/.zshrc:
    echo 'export GOOGLE_API_KEY="your-api-key-here"' >> ~/.bashrc
    # or for zsh:
    echo 'export GOOGLE_API_KEY="your-api-key-here"' >> ~/.zshrc

    Windows (Command Prompt):

    set GOOGLE_API_KEY=your-api-key-here
    # To make it permanent:
    setx GOOGLE_API_KEY "your-api-key-here"

    Windows (PowerShell):

    $env:GOOGLE_API_KEY="your-api-key-here"
    # To make it permanent:
    [System.Environment]::SetEnvironmentVariable('GOOGLE_API_KEY', 'your-api-key-here', 'User')
  4. Run the demos:

    uv run python 01_use_chat_model.py
    uv run python 02_use_tools.py

For Ollama (Local)

  1. Install Ollama:

    # macOS
    brew install ollama
    
    # Linux
    curl -fsSL https://ollama.ai/install.sh | sh
  2. Start Ollama server:

    ollama serve
  3. Pull a model:

    ollama pull gpt-oss:20b
    # or try other models:
    # ollama pull llama2
    # ollama pull codellama
    # ollama pull mistral
  4. Run local demos:

    uv run python 03_use_ollama.py
    uv run python 04_use_tools_with_ollama.py
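Before running the local demos you can check that the server is reachable from Python (assumes Ollama's default port, 11434):

```python
# Check whether the Ollama server is answering on its default port.
import urllib.request

def ollama_is_up(url: str = "http://localhost:11434", timeout: float = 2.0) -> bool:
    try:
        with urllib.request.urlopen(url, timeout=timeout):
            return True
    except OSError:  # connection refused, timeout, DNS failure, ...
        return False

print("Ollama is up" if ollama_is_up() else "Ollama is not reachable -- run `ollama serve`")
```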

🎯 Use Cases

Cloud AI (Google Gemini)

  • Production applications requiring high accuracy
  • Applications with internet connectivity
  • Complex reasoning tasks
  • Multilingual support

Local AI (Ollama)

  • Privacy-sensitive applications
  • Offline environments
  • Cost-effective solutions
  • Development and experimentation

🔧 Available Tools

The demos include several pre-built tools:

  • Calculator: Safe mathematical expression evaluation
  • File Reader/Writer: Text file operations
  • Current Time: Date and time information
  • Weather: Mock weather data (extensible to real APIs)
  • Word Counter: Text analysis and statistics
  • Number Analyzer: Statistical analysis of number arrays
  • File Search: Pattern-based file searching
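As an illustration, the Calculator's "safe mathematical expression evaluation" can be done with Python's `ast` module instead of `eval()` (a sketch; the demo's actual implementation may differ):

```python
# Safely evaluate arithmetic expressions by walking the parsed syntax tree.
import ast
import operator

_OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
    ast.Pow: operator.pow, ast.USub: operator.neg,
}

def safe_eval(expr: str) -> float:
    """Evaluate an arithmetic expression without calling eval()."""
    def walk(node):
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.operand))
        raise ValueError(f"Unsupported expression: {expr!r}")
    return walk(ast.parse(expr, mode="eval").body)

print(safe_eval("2 + 3 * 4"))   # 14
print(safe_eval("-(2 ** 3)"))   # -8
```

Anything that is not a number or a whitelisted operator (names, calls, attribute access) raises `ValueError`, which is what makes this safe to expose as a tool.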

🚦 Running the Demos

Basic Usage

# Activate virtual environment first
source .venv/bin/activate  # On Windows: .venv\Scripts\activate

# Cloud-based AI with tools
uv run python 01_use_chat_model.py
uv run python 02_use_tools.py

# Local AI with Ollama
uv run python 03_use_ollama.py
uv run python 04_use_tools_with_ollama.py

Modifying Examples

Each demo file is heavily commented and modular. You can:

  1. Uncomment sections to enable/disable specific examples
  2. Modify prompts to test different scenarios
  3. Add new tools by following the @tool decorator pattern
  4. Change models by updating the model names
  5. Adjust parameters like temperature, top_p, etc.

🔍 Troubleshooting

Common Issues

Google Gemini API Errors:

Error: API key not found
  • Set your GOOGLE_API_KEY environment variable:
    • macOS/Linux: export GOOGLE_API_KEY="your-key"
    • Windows CMD: set GOOGLE_API_KEY=your-key for the current session (or setx GOOGLE_API_KEY "your-key" to persist for new sessions)
    • Windows PowerShell: $env:GOOGLE_API_KEY="your-key"
  • Verify the key is valid in Google AI Studio
  • Restart your terminal/IDE after setting permanent environment variables

Ollama Connection Errors:

Error connecting to Ollama
  • Check if Ollama is running: ollama ps
  • Verify the model is installed: ollama list
  • Start Ollama server: ollama serve

Import Errors:

ModuleNotFoundError: No module named 'langchain_ollama'
  • Install missing dependencies: uv pip install langchain-ollama
  • Or activate your virtual environment: source .venv/bin/activate

📚 Learning Path

  1. Start with 01_use_chat_model.py to understand basic chat interactions
  2. Move to 02_use_tools.py to learn tool creation and binding
  3. Try 03_use_ollama.py for local AI model usage
  4. Complete with 04_use_tools_with_ollama.py for advanced local AI + tools

🤝 Contributing

Feel free to:

  • Add new demo files
  • Create additional tools
  • Improve error handling
  • Add support for other models
  • Update documentation

📄 License

This project is open source and available under the MIT License.

🆘 Support

If you run into problems, open an issue in the repository.

Happy AI Agent Building! 🤖
