"Nearly half of Fortune 500 executives believe artificial intelligence is actively damaging their organizations... Business leaders are making a category error, treating AI transformation like previous technology rollouts and delegating it to IT departments."
β May Habib, CEO, Writer AI | TED AI Conference 2024
Workshop Duration: 60 minutes | Level: Intermediate | Last Updated: January 2025
Billions of dollars are being spent on AI initiatives that fail, not because the technology is flawed, but because organizations fundamentally misunderstand how to implement AI systems. The most common error: treating AI agents as stateless services rather than stateful cognitive systems.
This workshop demonstrates foundational patterns for building AI agents that actually work.
At Workhelix, our mission is understanding what makes workers more productive, beyond the hype, and empowering organizations to leverage that understanding. This workshop represents one small step toward equipping leadership and employees to use AI correctly: implementing memory systems and cost optimization patterns that enable measurable productivity gains.
Focus: Implementation patterns that deliver results, not promises that don't.
By the end of this workshop, you will have implemented:
- Stateful agent architecture with LangGraph checkpointers and stores for memory persistence
- Memory type classification applying semantic, episodic, and procedural memory patterns
- Semantic caching system using Redis to reduce LLM API costs and improve latency
- Quantitative measurement framework for tracking cost optimization and productivity impact
This workshop provides rigorous theoretical grounding in:
- LangGraph graph-based orchestration for stateful multi-step reasoning
- Memory architecture based on Von Neumann and cognitive science principles
- Semantic caching theory including similarity metrics and cost economics
- Production patterns for persistence, scaling, and observability
All concepts demonstrated through hands-on Python implementation.
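Before opening the notebooks, it helps to see the core idea behind checkpointers stripped to pure Python: agent state is snapshotted and restored per conversation thread. This is an illustrative toy, not the LangGraph API:

```python
from collections import defaultdict

class InMemoryCheckpointer:
    """Toy checkpointer: snapshots agent state per thread_id between turns."""

    def __init__(self):
        self._threads = defaultdict(list)

    def save(self, thread_id, state):
        # Store a copy so later mutations don't rewrite history
        self._threads[thread_id].append(dict(state))

    def latest(self, thread_id):
        history = self._threads[thread_id]
        return history[-1] if history else {}

cp = InMemoryCheckpointer()
cp.save("thread-1", {"messages": ["hi"]})
cp.save("thread-1", {"messages": ["hi", "hello!"]})
assert cp.latest("thread-1")["messages"] == ["hi", "hello!"]
assert cp.latest("thread-2") == {}  # unknown threads start empty
```

LangGraph's real checkpointers do the same thing with durable backends (SQLite, Postgres, Redis) and richer state schemas.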
- Memory-Enabled Agent - Chatbot with semantic, episodic, and procedural memory
- Cached Agent System - Multi-step workflow with Redis LangCache integration
- Cost Tracking Dashboard - Metrics to measure optimization impact
```bash
# Clone the repository
git clone https://github.com/EconoBen/langgraph-redis-workshop.git
cd langgraph-redis-workshop

# Start Redis with Docker
docker-compose up -d

# Create virtual environment (using uv recommended)
uv venv --python python3.12
source .venv/bin/activate

# Install dependencies
uv pip install -r requirements.txt

# Copy and configure environment variables
cp .env.example .env
# Edit .env with your API keys (ANTHROPIC_API_KEY or OPENAI_API_KEY)

# Start Jupyter
jupyter notebook

# Open notebooks in order:
# 1. notebooks/01_memory_implementation.ipynb
# 2. notebooks/02_redis_caching.ipynb
# 3. notebooks/03_cost_optimization.ipynb
```

See SETUP.md for detailed setup instructions.
- Python 3.10+: Intermediate proficiency with Python programming
- LLMs & APIs: Basic understanding of large language models and API usage
- REST APIs: Familiarity with making HTTP requests
- Python 3.10+ with pip or uv
- Docker (for local Redis container)
- Jupyter notebook environment
- LLM Provider (choose one):
- OpenAI API - GPT-4, GPT-3.5
- Anthropic API - Claude 3.5 Sonnet
- LangGraph
- Redis
- Agent frameworks
- View Online: MARP Presentation
- Download PDF: presentation.pdf
- Source: slides/presentation.md
Located in notebooks/:
- `01_memory_implementation.ipynb`: Implement short-term and long-term memory
- `02_redis_caching.ipynb`: Add semantic caching with Redis
- `03_cost_optimization.ipynb`: Measure and optimize costs
Solutions available in notebooks/solutions/
Located in src/ and examples/:
- Memory Implementations: `src/memory/`
- Redis Caching: `src/cache/`
- Complete Working Agents: `examples/`
Located in docs/:
| Time | Section | Format |
|---|---|---|
| 0-5 min | Introduction & Setup | Slides |
| 5-15 min | Memory Architecture Theory | Slides |
| 15-30 min | Exercise 1: Memory Implementation | Hands-on (Jupyter) |
| 30-40 min | Memory Types Deep Dive | Slides + Live Demo |
| 40-50 min | Exercise 2: Redis Caching | Hands-on (Jupyter) |
| 50-58 min | Exercise 3: Cost Metrics | Hands-on (Jupyter) |
| 58-60 min | Wrap-up & Resources | Slides |
Store facts, preferences, and structured knowledge:

```python
store.put(
    namespace=("user_123", "preferences"),
    key="food_preference",
    value={"type": "cuisine", "preference": "Italian", "source": "conversation"}
)
```

Maintain conversation history and context:
```python
from langgraph.checkpoint.postgres import PostgresSaver

checkpointer = PostgresSaver.from_conn_string(DATABASE_URL)
graph = builder.compile(checkpointer=checkpointer)
# Conversation history is now tracked automatically per thread
```

Adapt agent behaviors based on outcomes:
```python
store.put(
    namespace=("agent_strategies",),
    key=f"success_pattern_{task_type}",
    value={"approach": strategy, "success_rate": 0.85}
)
```

Reduce costs and latency by caching LLM responses:

```python
from langchain_core.globals import set_llm_cache
from langchain_redis import RedisSemanticCache

# Cache responses globally; prompts whose embeddings fall within the
# distance threshold of a cached entry are served from Redis instead
# of triggering a new LLM call
set_llm_cache(
    RedisSemanticCache(
        embeddings=embeddings,
        redis_url=REDIS_URL,
        ttl=3600,
        distance_threshold=0.05,  # lower = stricter match
    )
)
```

Impact: 40-70% cost reduction for typical multi-step agent workflows.
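Under the hood, semantic caching hinges on embedding similarity: a new prompt hits the cache when its embedding is close enough to a stored one. A minimal pure-Python sketch of that lookup (names and threshold are illustrative):

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def cache_lookup(cache, query_vec, threshold=0.95):
    """Return the cached response most similar to the query embedding,
    or None if nothing clears the threshold (a cache miss)."""
    best, best_sim = None, threshold
    for vec, response in cache:
        sim = cosine(query_vec, vec)
        if sim >= best_sim:
            best, best_sim = response, sim
    return best

cache = [([1.0, 0.0], "Rome is the capital of Italy.")]
cache_lookup(cache, [0.99, 0.01])  # near-duplicate prompt: cache hit
cache_lookup(cache, [0.0, 1.0])    # unrelated prompt: miss, returns None
```

Production systems replace the linear scan with a vector index (as Redis does), but the hit/miss decision is the same comparison.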
After completing this workshop, you'll have:
| Metric | Before Optimization | After Optimization | Improvement |
|---|---|---|---|
| Token Usage (multi-step task) | ~15,000 tokens | ~5,000 tokens | 67% reduction |
| Cost per Interaction | $0.30 | $0.10 | 67% reduction |
| Average Latency | 3.2s | 1.8s | 44% improvement |
| Cache Hit Rate | 0% | 65% | N/A |
Results based on a typical chatbot with 5-turn conversations.
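Numbers like those above come from straightforward bookkeeping. A minimal tracker along these lines is enough to reproduce them (the per-token price here is illustrative, not a current API rate):

```python
from dataclasses import dataclass

@dataclass
class CostTracker:
    """Minimal cost and cache-hit bookkeeping for an agent workflow."""
    price_per_1k_tokens: float = 0.02  # illustrative blended rate
    tokens: int = 0
    calls: int = 0
    cache_hits: int = 0

    def record(self, tokens_used, cache_hit=False):
        self.calls += 1
        if cache_hit:
            self.cache_hits += 1  # a hit consumes no new tokens
        else:
            self.tokens += tokens_used

    @property
    def cost(self):
        return self.tokens / 1000 * self.price_per_1k_tokens

    @property
    def hit_rate(self):
        return self.cache_hits / self.calls if self.calls else 0.0

t = CostTracker()
t.record(10_000)                 # cold call, billed
t.record(5_000, cache_hit=True)  # served from cache, free
print(f"cost=${t.cost:.2f}, hit rate={t.hit_rate:.0%}")  # cost=$0.20, hit rate=50%
```

The notebooks wire the same counters into LangChain callbacks so every LLM call is recorded automatically.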
- LangGraph 0.2+: Agent orchestration framework
- LangChain 0.3+: LLM integration
- Redis 7+: Semantic caching layer
- LangMem SDK: Memory management utilities
- Python 3.10+: Programming language
This workshop supports both OpenAI and Anthropic. All code includes provider abstraction:
```python
# Toggle between providers
llm = get_llm(provider="anthropic")  # or "openai"
```

```
langgraph-redis-workshop/
├── README.md            # This file
├── SETUP.md             # Detailed setup guide
├── docker-compose.yml   # Local Redis container
├── slides/              # MARP presentation
├── notebooks/           # Jupyter notebook exercises
├── src/                 # Reusable code modules
├── examples/            # Complete working examples
├── tests/               # Test suite
├── docs/                # Additional documentation
├── requirements.txt     # Python dependencies
└── .env.example         # Environment template
```
Found an issue or have improvements? Contributions welcome!
- Fork the repository
- Create a feature branch (`git checkout -b feature/improvement`)
- Commit changes (`git commit -m 'Add improvement'`)
- Push to branch (`git push origin feature/improvement`)
- Open a Pull Request
MIT License - See LICENSE for details
Ben Labaschin
- GitHub: @EconoBen
- Workshop Date: [Your Conference/Event Name]
If you use this workshop material, please cite:
```bibtex
@misc{labaschin2025langgraph,
  author    = {Labaschin, Ben},
  title     = {Building Stateful AI Agents: Memory Management and Optimization with LangGraph and Redis},
  year      = {2025},
  publisher = {GitHub},
  url       = {https://github.com/EconoBen/langgraph-redis-workshop}
}
```

⭐ Star this repo if you found it helpful!
🐛 Report issues to help improve the workshop.
💬 Share feedback to make future workshops better.