A comprehensive demo collection showing how to build AI agents using LangChain with different models and tools.
- Python 3.8+ installed
- uv for fast Python package management:
```bash
# Install uv
curl -LsSf https://astral.sh/uv/install.sh | sh

# or with pip
pip install uv
```
- Project setup:
```bash
# Create virtual environment and install dependencies
uv venv
source .venv/bin/activate  # On Windows: .venv\Scripts\activate
uv pip install langchain langchain-ollama langchain-google-genai
```
- API Keys (for cloud models):
- Google Gemini: Get API key from Google AI Studio
- Ollama (for local models):
- Install from ollama.ai
- Pull a model:
```bash
ollama pull gpt-oss:20b
```
Basic Chat Model Usage with Google Gemini
Learn how to:
- Initialize chat models with API keys
- Send basic chat messages
- Handle multi-turn conversations
- Use streaming responses
- Batch process multiple queries
- Implement async processing
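The basic flow above can be sketched as follows. This is a minimal sketch, not the demo file itself: it assumes the `langchain-google-genai` package is installed and `GOOGLE_API_KEY` is set, and the model name and prompts are illustrative.

```python
# Minimal sketch of 01_use_chat_model.py-style usage (assumes
# langchain-google-genai is installed and GOOGLE_API_KEY is set).
import os

def build_conversation(user_turns):
    """Build a multi-turn message list in LangChain's (role, content) tuple form."""
    messages = [("system", "You are a concise assistant.")]
    for turn in user_turns:
        messages.append(("human", turn))
    return messages

def main():
    from langchain_google_genai import ChatGoogleGenerativeAI
    llm = ChatGoogleGenerativeAI(model="gemini-1.5-flash", temperature=0.7)
    # Single invocation over a multi-turn conversation
    reply = llm.invoke(build_conversation(["What is LangChain?"]))
    print(reply.content)
    # Streaming: chunks arrive as they are generated
    for chunk in llm.stream("Explain tool calling in one sentence."):
        print(chunk.content, end="", flush=True)

if __name__ == "__main__" and os.environ.get("GOOGLE_API_KEY"):
    main()
```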
```bash
uv run python 01_use_chat_model.py
```

Tool Creation and Integration
Explore:
- Creating custom tools with the `@tool` decorator
- Binding tools to chat models
- Manual tool execution
- Tool execution loops
- Advanced tools with complex return types
- Error handling in tools
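The manual execution loop at the heart of this demo can be sketched framework-free. This is a hedged illustration of the pattern, not the demo's actual code: the model proposes a tool call, we look the tool up by name, run it, and return the result (or an error) to feed back to the model. Tool names and the call-dict shape here are assumptions.

```python
# Framework-free sketch of the manual tool-execution pattern in
# 02_use_tools.py: dispatch a model-issued tool call by name, with
# error handling for unknown tools and tool failures.
def calculator(expression: str) -> str:
    # Safe-ish evaluation: restrict input to arithmetic characters only
    allowed = set("0123456789+-*/(). ")
    if not set(expression) <= allowed:
        return "error: unsupported characters"
    return str(eval(expression))  # acceptable for this restricted subset

TOOLS = {"calculator": calculator}

def execute_tool_call(call: dict) -> str:
    """Dispatch one tool call of the form {'name': ..., 'args': {...}}."""
    fn = TOOLS.get(call["name"])
    if fn is None:
        return f"error: unknown tool {call['name']}"
    try:
        return fn(**call["args"])
    except Exception as exc:  # error handling in tools
        return f"error: {exc}"

print(execute_tool_call({"name": "calculator", "args": {"expression": "2*(3+4)"}}))
```

In the real demos the `@tool` decorator attaches a name and schema to each function so the model can request calls in exactly this shape.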
```bash
uv run python 02_use_tools.py
```

Local AI with Ollama
Discover:
- Setting up ChatOllama for local inference
- Using different models (llama2, codellama, mistral, etc.)
- System prompts and conversation management
- Streaming responses
- Custom model parameters
- Model availability checking
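Model availability checking can be done against Ollama's local HTTP API before constructing the chat model. The sketch below assumes Ollama's default endpoint (`http://localhost:11434/api/tags`); the helper names are illustrative, not part of the demos.

```python
# Sketch: check which models a local Ollama server has installed
# before building a ChatOllama instance (default API at localhost:11434).
import json
from urllib.request import urlopen

def installed_models(tags_json: str) -> list:
    """Parse the /api/tags response body into a list of model names."""
    return [m["name"] for m in json.loads(tags_json).get("models", [])]

def ollama_has(model: str, base="http://localhost:11434") -> bool:
    try:
        with urlopen(f"{base}/api/tags", timeout=2) as resp:
            names = installed_models(resp.read().decode())
    except OSError:
        return False  # server not running or unreachable
    return any(n == model or n.startswith(model + ":") for n in names)

# Usage (requires langchain-ollama and a running server):
# from langchain_ollama import ChatOllama
# if ollama_has("gpt-oss:20b"):
#     llm = ChatOllama(model="gpt-oss:20b", temperature=0.2)
#     print(llm.invoke("Hello!").content)
```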
```bash
uv run python 03_use_ollama.py
```

Combining Local AI with Tools
Master:
- Integrating tools with Ollama models
- File operations (read/write)
- Multi-step tool execution chains
- Weather and calculation combos
- Error handling with tools
- Context-aware tool usage
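A multi-step tool-execution chain can be sketched independently of any model. In the sketch below, each step's result is collected into a transcript that a real agent would feed back to the LLM between steps; the tool stand-ins (`get_time`, `mock_weather`) are illustrative placeholders for the demos' Current Time and mock Weather tools.

```python
# Sketch of a multi-step tool-execution chain in the spirit of
# 04_use_tools_with_ollama.py, with stand-in tools.
def get_time() -> str:
    return "2024-01-01T12:00:00"  # stand-in for the Current Time tool

def mock_weather(city: str) -> str:
    return f"Sunny in {city}"  # stand-in for the mock Weather tool

TOOLS = {"get_time": get_time, "mock_weather": mock_weather}

def run_chain(calls: list) -> list:
    """Execute a sequence of tool calls, collecting (name, result) pairs."""
    transcript = []
    for call in calls:
        fn = TOOLS[call["name"]]
        transcript.append((call["name"], fn(**call.get("args", {}))))
    return transcript

print(run_chain([
    {"name": "get_time"},
    {"name": "mock_weather", "args": {"city": "Oslo"}},
]))
```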
```bash
uv run python 04_use_tools_with_ollama.py
```

- Ensure the virtual environment is activated:

```bash
source .venv/bin/activate  # On Windows: .venv\Scripts\activate
```
- Get your API key from Google AI Studio
- Set the environment variable:
macOS/Linux:
```bash
export GOOGLE_API_KEY="your-api-key-here"

# To make it permanent, add to ~/.bashrc or ~/.zshrc:
echo 'export GOOGLE_API_KEY="your-api-key-here"' >> ~/.bashrc
# or for zsh:
echo 'export GOOGLE_API_KEY="your-api-key-here"' >> ~/.zshrc
```
Windows (Command Prompt):
```cmd
set GOOGLE_API_KEY=your-api-key-here

:: To make it permanent:
setx GOOGLE_API_KEY "your-api-key-here"
```
Windows (PowerShell):
```powershell
$env:GOOGLE_API_KEY="your-api-key-here"

# To make it permanent:
[System.Environment]::SetEnvironmentVariable('GOOGLE_API_KEY', 'your-api-key-here', 'User')
```
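Whichever platform you set it on, you can confirm the key is actually visible to Python before running the cloud demos. The helper below is a hypothetical convenience, not part of the demos.

```python
# Sanity check (hypothetical helper): confirm GOOGLE_API_KEY is visible
# to Python before running the cloud demos.
import os

def check_api_key(env=os.environ) -> str:
    key = env.get("GOOGLE_API_KEY", "")
    if not key:
        return "GOOGLE_API_KEY is not set - see the platform instructions above."
    return f"GOOGLE_API_KEY loaded ({len(key)} characters)."

print(check_api_key())
```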
- Run the demos:
```bash
uv run python 01_use_chat_model.py
uv run python 02_use_tools.py
```
- Install Ollama:
```bash
# macOS
brew install ollama

# Linux
curl -fsSL https://ollama.ai/install.sh | sh
```
- Start the Ollama server:
```bash
ollama serve
```
- Pull a model:
```bash
ollama pull gpt-oss:20b

# or try other models:
# ollama pull llama2
# ollama pull codellama
# ollama pull mistral
```
- Run the local demos:
```bash
uv run python 03_use_ollama.py
uv run python 04_use_tools_with_ollama.py
```
Use cloud models (Gemini) for:
- Production applications requiring high accuracy
- Applications with internet connectivity
- Complex reasoning tasks
- Multilingual support

Use local models (Ollama) for:
- Privacy-sensitive applications
- Offline environments
- Cost-effective solutions
- Development and experimentation
The demos include several pre-built tools:
- Calculator: Safe mathematical expression evaluation
- File Reader/Writer: Text file operations
- Current Time: Date and time information
- Weather: Mock weather data (extensible to real APIs)
- Word Counter: Text analysis and statistics
- Number Analyzer: Statistical analysis of number arrays
- File Search: Pattern-based file searching
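As one concrete example, the Word Counter tool boils down to simple text statistics. The sketch below is framework-free and illustrative (the demos wrap such functions with the `@tool` decorator; the exact fields returned here are an assumption).

```python
# Framework-free sketch of a Word Counter-style tool: basic text
# analysis returning a small statistics dict.
def word_count(text: str) -> dict:
    words = text.split()
    return {
        "words": len(words),
        "characters": len(text),
        "unique_words": len({w.lower().strip(".,!?") for w in words}),
    }
```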
```bash
# Activate virtual environment first
source .venv/bin/activate  # On Windows: .venv\Scripts\activate

# Cloud-based AI with tools
uv run python 01_use_chat_model.py
uv run python 02_use_tools.py

# Local AI with Ollama
uv run python 03_use_ollama.py
uv run python 04_use_tools_with_ollama.py
```

Each demo file is heavily commented and modular. You can:
- Uncomment sections to enable/disable specific examples
- Modify prompts to test different scenarios
- Add new tools by following the `@tool` decorator pattern
- Change models by updating the model names
- Adjust parameters like temperature, top_p, etc.
Google Gemini API Errors:
```
Error: API key not found
```
- Set your `GOOGLE_API_KEY` environment variable:
  - macOS/Linux: `export GOOGLE_API_KEY="your-key"`
  - Windows CMD: `setx GOOGLE_API_KEY "your-key"`
  - Windows PowerShell: `$env:GOOGLE_API_KEY="your-key"`
- Verify the key is valid in Google AI Studio
- Restart your terminal/IDE after setting permanent environment variables
Ollama Connection Errors:
```
Error connecting to Ollama
```
- Check if Ollama is running: `ollama ps`
- Verify the model is installed: `ollama list`
- Start the Ollama server: `ollama serve`
Import Errors:
```
ModuleNotFoundError: No module named 'langchain_ollama'
```
- Install missing dependencies: `uv pip install langchain-ollama`
- Or activate your virtual environment: `source .venv/bin/activate`
- Start with `01_use_chat_model.py` to understand basic chat interactions
- Move to `02_use_tools.py` to learn tool creation and binding
- Try `03_use_ollama.py` for local AI model usage
- Complete with `04_use_tools_with_ollama.py` for advanced local AI + tools
Feel free to:
- Add new demo files
- Create additional tools
- Improve error handling
- Add support for other models
- Update documentation
This project is open source and available under the MIT License.
- LangChain Documentation: python.langchain.com
- Ollama Documentation: ollama.ai
- Google Gemini: ai.google.dev
Happy AI Agent Building! 🤖