A comprehensive LangGraph learning template with 5 structured learning plans, production-ready setup, and MuleSoft integration path. Perfect for learning agent development patterns.
Docker Registry: `dukeman/langgraph` on Docker Hub
This is a GitHub template repository designed for learning LangGraph patterns. Click "Use this template" to create your own learning repository.
- ✅ **5 Comprehensive Learning Plans** - Structured path from basic to advanced
- ✅ **Production-Ready Setup** - Docker, Kubernetes, Helm charts
- ✅ **Comprehensive Testing** - Unit, integration, and learning validation tests
- ✅ **MuleSoft Integration Path** - Ready for enterprise integration
- ✅ **Clear Documentation** - Step-by-step guides and success criteria
Quick setup and run:

```bash
make quick-start
```
Or step by step:

```bash
# Set up development environment
make setup

# Create environment file
make setup-env
# Edit .env and add your OpenAI API key

# Run with Docker
make run-docker
```
Deploy to Kubernetes:

```bash
# Using Skaffold (recommended for development)
make dev

# Using Helm directly
export OPENAI_API_KEY=your_key_here
make helm-install
```
```
├── src/
│   ├── agent/
│   │   ├── core.py          # Core LangGraph agent implementation
│   │   └── modern.py        # Modern agent using prebuilt components
│   └── api/
│       ├── models.py        # Pydantic models
│       └── routes.py        # FastAPI routes
├── helm/
│   └── langgraph-agent/     # Helm chart for Kubernetes
├── Makefile                 # Main Makefile with all commands
├── main.py                  # Application entry point
├── Dockerfile               # Container definition
├── docker-compose.yml       # Local development
├── skaffold.yaml            # Skaffold configuration
└── requirements.txt         # Python dependencies
```
## 🔧 Features
### Core Agent (`src/agent/core.py`)
- **State Management**: Uses `MessagesState` for better message handling
- **Tool Integration**: Extensible tool system with basic examples
- **Memory Persistence**: Redis or in-memory checkpointing
- **Streaming Support**: Real-time response streaming
- **Session Management**: Multi-user session support
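The session-scoped memory concept behind the core agent can be illustrated without LangGraph at all. The sketch below is a hypothetical stdlib-only stand-in (the real project uses LangGraph checkpointers keyed by session, per `src/agent/core.py`); each `session_id` keys its own conversation history:

```python
from collections import defaultdict

class SessionMemory:
    """Hypothetical sketch of session-scoped conversation memory.

    Illustrates the idea only: the real agent uses Redis or in-memory
    LangGraph checkpointing rather than this class.
    """

    def __init__(self):
        self._histories = defaultdict(list)

    def append(self, session_id: str, role: str, content: str) -> None:
        self._histories[session_id].append({"role": role, "content": content})

    def history(self, session_id: str) -> list:
        return list(self._histories[session_id])

memory = SessionMemory()
memory.append("user123", "user", "What time is it?")
memory.append("user123", "assistant", "Current time: 14:30")
memory.append("user456", "user", "Calculate 2+2")

# Each session keeps its own isolated history
print(len(memory.history("user123")))
print(len(memory.history("user456")))
```

In the actual agent, the checkpointer plays this role and the `session_id` from the API maps to a LangGraph thread.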
### Modern Agent (`src/agent/modern.py`)
- **Prebuilt Components**: Uses `create_react_agent` for simplified setup
- **Best Practices**: Follows latest LangGraph patterns
- **Simplified API**: Cleaner interface with modern conventions
### Web API (`src/api/`)
- **FastAPI Framework**: Modern, fast web framework
- **RESTful Endpoints**: `/chat`, `/chat/modern`, `/chat/stream`, `/chat/stream/modern`, `/health`
- **Pydantic Models**: Type-safe request/response models
- **Health Checks**: Kubernetes-ready health endpoints
- **Dual Implementation**: Both custom and modern LangGraph patterns
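The request/response contract of these endpoints can be sketched with stdlib dataclasses. The project itself uses Pydantic models in `src/api/models.py`; the field names below follow the curl examples and sample response shown later in this README, so treat the exact class and field names as assumptions:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ChatRequest:
    """Mirrors the JSON body sent to /chat (Pydantic in the real project)."""
    message: str
    session_id: str = "default"

@dataclass
class ChatResponse:
    """Mirrors the JSON returned by /chat."""
    response: str
    session_id: str
    timestamp: str
    tools_used: List[str] = field(default_factory=list)
    metadata: dict = field(default_factory=dict)

req = ChatRequest(message="What time is it?")
# session_id falls back to "default" when the client omits it
print(req.session_id)
```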
### Deployment
- **Docker**: Multi-stage build with security best practices
- **Kubernetes**: Full Helm chart with Redis dependency and health checks
- **Scaling**: Horizontal Pod Autoscaler support
- **Security**: Non-root user, proper resource limits
- **Health Checks**: Kubernetes liveness and readiness probes
## 🛠️ LangGraph Patterns Demonstrated
### State Management
```python
# Modern approach using MessagesState
from langgraph.graph.message import MessagesState

# Custom approach with TypedDict
from typing import Annotated, List, TypedDict

class AgentState(TypedDict):
    messages: Annotated[List, "The conversation messages"]
    user_input: str
    agent_response: str
    session_id: str
    metadata: dict
    tools_used: Annotated[List[str], "List of tools used in this session"]
```
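Since a `TypedDict` is an ordinary `dict` at runtime, an initial state for the custom graph can be constructed as a plain literal; a small self-contained sketch (the field values are illustrative, not taken from the repo):

```python
from typing import Annotated, List, TypedDict

class AgentState(TypedDict):
    messages: Annotated[List, "The conversation messages"]
    user_input: str
    agent_response: str
    session_id: str
    metadata: dict
    tools_used: Annotated[List[str], "List of tools used in this session"]

# TypedDict classes describe plain dicts; no special runtime wrapper exists
state: AgentState = {
    "messages": [],
    "user_input": "What time is it?",
    "agent_response": "",
    "session_id": "user123",
    "metadata": {},
    "tools_used": [],
}
print(type(state) is dict)  # True
```

Graph nodes receive and return dicts of this shape, which is why state updates compose so easily in LangGraph.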
The framework includes basic tools that can be extended:

- **Time Tool**: Get current date/time
- **Calculator**: Safe mathematical expressions
- **Echo Tool**: Simple message echoing

State persistence and tracking features:

- Redis checkpointing for production
- In-memory fallback for development
- Session-based state management
- Tool usage tracking
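The "safe mathematical expressions" idea behind the calculator tool can be sketched with the stdlib `ast` module instead of a raw `eval()`. This is a hypothetical stand-in, not the repo's exact implementation: only whitelisted arithmetic operations are evaluated, and anything else raises an error:

```python
import ast
import operator

# Whitelist of permitted operations; anything else is rejected
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.USub: operator.neg,
}

def safe_calculate(expression: str) -> float:
    """Evaluate a basic arithmetic expression without eval()."""
    def _eval(node):
        if isinstance(node, ast.Expression):
            return _eval(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.operand))
        raise ValueError(f"Unsupported expression: {expression!r}")
    return _eval(ast.parse(expression, mode="eval"))

print(safe_calculate("2+2"))         # 4
print(safe_calculate("-3 * (1+2)"))  # -9
```

Walking the parsed AST means an input like `__import__('os')` never reaches anything executable; it simply fails the whitelist check.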
```bash
# Custom implementation
curl -X POST "http://localhost:8000/chat" \
  -H "Content-Type: application/json" \
  -d '{"message": "What time is it?", "session_id": "user123"}'

# Modern implementation
curl -X POST "http://localhost:8000/chat/modern" \
  -H "Content-Type: application/json" \
  -d '{"message": "What time is it?", "session_id": "user123"}'
```

Streaming endpoints:

```bash
# Custom implementation
curl -X POST "http://localhost:8000/chat/stream" \
  -H "Content-Type: application/json" \
  -d '{"message": "Calculate 2+2", "session_id": "user123"}'

# Modern implementation
curl -X POST "http://localhost:8000/chat/stream/modern" \
  -H "Content-Type: application/json" \
  -d '{"message": "Calculate 2+2", "session_id": "user123"}'
```

Health check:

```bash
curl http://localhost:8000/health
```

Local test with Docker Compose:

```bash
# Start with Docker Compose
docker-compose up

# Test the API
curl -X POST "http://localhost:8000/chat" \
  -H "Content-Type: application/json" \
  -d '{"message": "What time is it?"}'
```
Example response:

```json
{
  "response": "Current time: 2024-01-15 14:30:25",
  "session_id": "default",
  "tools_used": ["get_current_time"],
  "metadata": {"timestamp": "2024-01-15T14:30:25", "model": "gpt-4o-mini"},
  "timestamp": "2024-01-15T14:30:25"
}
```

Deploy to your cluster:

```bash
export OPENAI_API_KEY=your_key_here
./scripts/deploy.sh

# Check deployment
kubectl get pods -n langgraph-agent

# Port forward for testing
kubectl port-forward service/langgraph-agent 8000:8000 -n langgraph-agent
```

Development with Skaffold:

```bash
# Start development mode with hot reload
make dev

# Or run once
make dev-run

# Check status
make dev-status

# Run tests
make dev-test
```

The project includes a comprehensive Makefile with all necessary commands:
```bash
# General
make help              # Show all available commands
make setup             # Set up development environment
make setup-env         # Create .env file from template
make run-local         # Run application locally with Python
make run-docker        # Run with Docker Compose
make test              # Run all tests
make dev               # Start Skaffold development mode
make dev-test          # Run tests against Skaffold deployment

# Build and deploy
make build             # Build Docker image
make build-no-cache    # Build Docker image without cache
make push              # Push Docker image to registry
make build-and-push    # Build and push Docker image
make helm-install      # Install with Helm
make helm-uninstall    # Uninstall Helm release
make prod-deploy       # Deploy to production

# Cleanup and utilities
make clean             # Clean up local files
make clean-docker      # Clean up Docker resources
make clean-k8s         # Clean up Kubernetes resources
make check-deps        # Check if required tools are installed
make lint              # Run code linting
make format            # Format code

# Workflows
make quick-start       # Complete quick start workflow
make dev-workflow      # Complete development workflow
```

Adding a custom tool:

```python
from langchain_core.tools import tool

@tool
def your_custom_tool(param: str) -> str:
    """Description of your tool."""
    # Implementation
    return "result"

# Add to the agent in core.py
self.llm_with_tools = self.llm.bind_tools(
    [get_current_time, calculate, echo, your_custom_tool]
)
```

Extending the state:

```python
class YourCustomState(TypedDict):
    # Add your custom fields
    custom_field: str
    # ... existing fields
```

Swapping the checkpointer:

```python
from langgraph.checkpoint.postgres import PostgresSaver

# Use PostgreSQL for production
checkpointer = PostgresSaver.from_conn_string("postgresql://...")
```

This repository includes comprehensive learning plans to master LangGraph:
1. **Understand Current Code** - Master the existing implementation
2. **Interfaces and Extensibility** - Deep dive into LangGraph interfaces
3. **State Stores and Persistence** - Master state management patterns
4. **Production Patterns** - Learn production-ready patterns
5. **MuleSoft Integration** - Integrate with MuleSoft ecosystems
```bash
# Set up learning environment
make setup
make setup-env

# Start with Learning Plan 1
cd docs/learning-plans
python test_learning_01.py
```

Each learning plan includes:
- Hands-on exercises with expected outputs
- Comprehensive test files for validation
- Clear success criteria for completion
- Additional resources for deeper understanding
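The validation tests follow a simple assert-and-report pattern. Below is a minimal hypothetical sketch of what a learning-plan test file checks; the criteria names are invented for illustration and do not come from the repo's actual `test_learning_01.py`:

```python
def check(name: str, condition: bool) -> bool:
    """Print a pass/fail line for one success criterion and return the result."""
    print(f"[{'PASS' if condition else 'FAIL'}] {name}")
    return condition

def run_checks() -> bool:
    """Run all criteria and report overall success."""
    results = [
        check("state includes a messages field", "messages" in {"messages": []}),
        check("calculator handles basic arithmetic", 2 + 2 == 4),
    ]
    return all(results)

print("All criteria met:", run_checks())
```

Each real learning plan pairs such checks with its documented success criteria, so completion is verifiable rather than self-assessed.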
- Start Learning: Begin with the learning plans in `docs/learning-plans/`
- Test Locally: Run `docker-compose up` and test the API
- Deploy to K8s: Use the Helm chart in your homelab
- Add More Tools: Extend with domain-specific tools
- Add Authentication: Implement proper security
- Add Monitoring: Implement logging and metrics
- Scale: Test horizontal scaling with HPA
The learning plans provide a structured path to MuleSoft integration:
- Learn LangGraph: Complete the learning plans to understand patterns
- Extend Tools: Add MuleSoft-specific tools (connectors, flows, etc.)
- Custom State: Add MuleSoft context to the state
- API Integration: Connect to MuleSoft APIs
- Deployment: Adapt for MuleSoft runtime environments
This is a learning project for understanding LangGraph patterns. Feel free to extend and modify for your specific use cases.
This project is for exploration and learning purposes.