An educational project demonstrating how to build an intelligent restaurant recommendation chatbot using LangChain, Google Gemini, and Streamlit.
This project teaches students:
- How to build AI agents using LangChain
- Creating custom tools for specific tasks
- Implementing a conversational interface with Streamlit
- Managing chat history and session state
- Integrating Large Language Models (LLMs) with structured data
The chatbot can:
- Search Restaurants: Find restaurants by cuisine type (Indian, Italian, Chinese, Mexican)
- Get Details: Retrieve complete information about specific restaurants
- Check Availability: Verify reservation availability for specific dates
restaurant-chatbot/
│
├── .streamlit/
│ └── config.toml # Streamlit theme configuration
├── chatbot.py # Core chatbot logic with LangChain agents
├── app.py # Streamlit web interface
├── requirements.txt # Python dependencies
├── .env.example # Environment variables template
├── .gitignore # Git ignore file
└── README.md # Project documentation
Before starting, you need to install Python and Git on your system.
-
Download Python:
- Visit python.org/downloads
- Click "Download Python 3.x.x" (latest version)
-
Run the Installer:
- ⚠️ IMPORTANT: Check "Add Python to PATH" during installation
- Click "Install Now"
- Wait for installation to complete
-
Verify Installation:
python --version
You should see something like
Python 3.11.x
Option 1: Using Official Installer (Recommended)
-
Download Python:
- Visit python.org/downloads
- Download the macOS installer
-
Run the Installer:
- Open the downloaded .pkg file
- Follow the installation wizard
-
Verify Installation:
python3 --version
Option 2: Using Homebrew
# Install Homebrew if not already installed
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
# Install Python
brew install python
# Verify installation
python3 --version

Ubuntu/Debian:

# Update package list
sudo apt update
# Install Python 3 and pip
sudo apt install python3 python3-pip python3-venv
# Verify installation
python3 --version
pip3 --version

Fedora/RHEL:

# Install Python 3 and pip
sudo dnf install python3 python3-pip
# Verify installation
python3 --version
pip3 --version
-
Download Git:
- Visit git-scm.com/download/win
- Download will start automatically
-
Run the Installer:
- Use default settings (recommended)
- Click "Next" through the wizard
-
Verify Installation:
git --version
Alternative: Using Winget (Windows Package Manager)
winget install Git.Git

Option 1: Using Xcode Command Line Tools (Recommended)

# This will prompt to install Git
git --version
# Or explicitly install
xcode-select --install

Option 2: Using Homebrew

brew install git

Verify Installation:

git --version

Ubuntu/Debian:

# Install Git
sudo apt update
sudo apt install git
# Verify installation
git --version

Fedora/RHEL:

# Install Git
sudo dnf install git
# Verify installation
git --version

After installation, verify everything is working:

Windows:

python --version
pip --version
git --version

Mac/Linux:

python3 --version
pip3 --version
git --version

All commands should return version numbers without errors.
- ✅ Python 3.8 or higher installed on your system (see Step 0 above)
- ✅ Git installed on your system (see Step 0 above)
- ✅ A Google account (for Gemini API access)
Open your terminal/command prompt and run:
git clone https://github.com/alumnx-ai-labs/chatbot-002.git
cd chatbot-002

Go to https://aistudio.google.com/apikey
Sign in with your Google account
- If you don't have a project yet, you'll see a button that says "Create API key in new project"
- Click on it to create a new project
- If you already have a project, click "Create API key" and select your project
- Click "Create API key"
- Your API key will be displayed - it looks something like:
AIza...
- IMPORTANT: Copy this key immediately and store it securely
- You won't be able to see the full key again!
Keep this key safe - you'll need it in the next step.
In the project directory, create a new file named .env (note the dot at the beginning):
On Mac/Linux:
cp .env.example .env

On Windows (Command Prompt):

copy .env.example .env

On Windows (PowerShell):

Copy-Item .env.example .env

Open the .env file in a text editor and replace your_google_gemini_api_key_here with your actual Gemini API key:
Before:
GOOGLE_API_KEY=your_google_gemini_api_key_here
After:
GOOGLE_API_KEY=AIzaSy*************************
- Never commit the .env file to Git! It's already in .gitignore.
- Never share your API key publicly
- Keep your .env file secure and local to your machine
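For the curious: the app presumably reads this file at startup, which many Python projects do with the python-dotenv package. Here is a dependency-free sketch of what such a loader does (the function name `load_env` is illustrative, not the project's actual code):

```python
import os

def load_env(path=".env"):
    """Minimal stand-in for python-dotenv's load_dotenv():
    parse KEY=VALUE lines into os.environ, skipping blanks and comments."""
    try:
        with open(path) as f:
            for raw in f:
                line = raw.strip()
                if line and not line.startswith("#") and "=" in line:
                    key, _, value = line.partition("=")
                    # Existing environment variables win, matching load_dotenv's default
                    os.environ.setdefault(key.strip(), value.strip())
    except FileNotFoundError:
        pass  # a missing .env file is not an error

load_env()
api_key = os.environ.get("GOOGLE_API_KEY")
```

This is why the key never needs to appear in your source code: it lives only in the local `.env` file and the process environment.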
This step locks the app to a dark theme, regardless of your system settings.
In the project root directory, create a .streamlit folder:
On Mac/Linux:
mkdir .streamlit

On Windows (Command Prompt):

mkdir .streamlit

On Windows (PowerShell):

New-Item -ItemType Directory -Path .streamlit

Create a file named config.toml inside the .streamlit folder with the following content:
On Mac/Linux:
cat > .streamlit/config.toml << 'EOF'
[theme]
primaryColor = "#2196f3"
backgroundColor = "#000000"
secondaryBackgroundColor = "#1a1a1a"
textColor = "#ffffff"
font = "sans serif"
[client]
showSidebarNavigation = false
EOF

On Windows:
Create the file manually or copy the content below into .streamlit\config.toml:
[theme]
primaryColor = "#2196f3"
backgroundColor = "#000000"
secondaryBackgroundColor = "#1a1a1a"
textColor = "#ffffff"
font = "sans serif"
[client]
showSidebarNavigation = false

What this does:
- Sets a permanent black background
- Configures blue accent colors
- Sets white text for readability
- Prevents theme from changing with system settings
A virtual environment keeps your project dependencies isolated from other Python projects.
On Mac/Linux:

# Create virtual environment
python3 -m venv venv
# Activate virtual environment
source venv/bin/activate
# You should see (venv) at the start of your terminal prompt

On Windows (Command Prompt):

# Create virtual environment
python -m venv venv
# Activate virtual environment
venv\Scripts\activate.bat
# You should see (venv) at the start of your prompt

On Windows (PowerShell):

# Create virtual environment
python -m venv venv
# Activate virtual environment
venv\Scripts\Activate.ps1
# If you get an execution policy error, run:
Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser
# Then try activating again

Note: You'll need to activate the virtual environment every time you open a new terminal session to work on this project.
With your virtual environment activated, install the required packages:
pip install -r requirements.txt

This will install:
- Streamlit (web interface)
- LangChain (AI agent framework)
- Google Generative AI (Gemini API)
- Other necessary dependencies
Wait for installation to complete - this may take a few minutes.
With everything set up, start the Streamlit app:
streamlit run app.py

- The terminal will show some output
- Your default web browser will automatically open
- The app will be running at http://localhost:8501
- If the browser doesn't open automatically, manually visit that URL
- The app will automatically load your API key from the .env file
- You should see "✅ API Key loaded from .env file" in the sidebar
- Click the "Initialize Chatbot" button in the sidebar
- Wait for the success message: "✅ Chatbot initialized successfully!"
Troubleshooting:
- If you see "❌ API Key not found!", make sure your .env file is in the project root directory
- If initialization fails, check that your API key is correct in the .env file
- Restart the Streamlit app after making changes to .env
Type a message in the input box at the bottom, such as:
- "Show me Indian restaurants"
- "Tell me about Spice Palace"
- "Can I book a table for 4 at Pizza Bella on 2024-11-15?"
The UI features:
- Black background for reduced eye strain
- Colored message bubbles with white text for better readability
- Blue bubbles for your messages
- Green bubbles for bot responses
- Theme stays consistent regardless of system light/dark mode
To stop the Streamlit server:
- Press Ctrl + C in the terminal
- To deactivate the virtual environment, type:
deactivate
┌─────────────────────────────────────────────────────────────────┐
│ User Interface │
│ (Streamlit App) │
└────────────────────────────┬────────────────────────────────────┘
│
↓
┌─────────────────────────────────────────────────────────────────┐
│ RestaurantChatbot Class │
│ (chatbot.py) │
│ ┌──────────────────────────────────────────────────────────┐ │
│ │ LangChain Agent Executor │ │
│ │ ┌────────────────────────────────────────────────────┐ │ │
│ │ │ Google Gemini LLM │ │ │
│ │ │ (Decision Making Brain) │ │ │
│ │ └────────────────────────────────────────────────────┘ │ │
│ │ ↓ │ │
│ │ ┌────────────────────────────────────────────────────┐ │ │
│ │ │ Tool Selection │ │ │
│ │ │ (Agent decides which tool to call) │ │ │
│ │ └────────────────────────────────────────────────────┘ │ │
│ └──────────────────────────────────────────────────────────┘ │
│ │ │
│ ┌───────────────────┼───────────────────┐ │
│ ↓ ↓ ↓ │
│ ┌─────────────┐ ┌──────────────┐ ┌─────────────────┐ │
│ │ Tool 1: │ │ Tool 2: │ │ Tool 3: │ │
│ │ Search │ │ Get │ │ Check │ │
│ │ Restaurants │ │ Restaurant │ │ Reservation │ │
│ │ by Cuisine │ │ Details │ │ Availability │ │
│ └─────────────┘ └──────────────┘ └─────────────────┘ │
│ │ │ │ │
│ └───────────────────┼───────────────────┘ │
│ ↓ │
│ Restaurant Database │
│ (Simulated with Python dicts) │
└─────────────────────────────────────────────────────────────────┘
Responsibilities:
- Render the user interface
- Handle user input and display messages
- Manage session state (chat history)
- Provide API key configuration interface
Key Features:
- Session State Management: Uses st.session_state to maintain conversation history across page reloads
- Message Display: Custom CSS styling for user and bot messages
- Interactive Input: Real-time chat input with send button
- Sidebar Configuration: Safe API key input and chatbot initialization
Code Flow:
- User enters message
- Message added to session state
- Message sent to chatbot backend
- Response received and displayed
- UI updates with new message
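The five-step flow above can be modeled without Streamlit at all, since `st.session_state` behaves essentially like a dictionary that survives reruns. A hypothetical sketch (the function and key names are illustrative, not the actual app.py code):

```python
def handle_message(session_state: dict, user_msg: str, chatbot) -> None:
    """Model one pass through the chat loop.
    `chatbot` is any callable taking (message, history) and returning a reply."""
    history = session_state.setdefault("messages", [])
    history.append({"role": "user", "content": user_msg})    # steps 1-2: capture input
    reply = chatbot(user_msg, history)                       # steps 3-4: backend call
    history.append({"role": "assistant", "content": reply})  # step 5: UI re-renders history

# Usage with a stub backend standing in for RestaurantChatbot.chat():
state = {}
handle_message(state, "Show me Indian restaurants", lambda m, h: "Here are 3 options...")
```

Because the whole conversation lives in one place, re-rendering the page is just a loop over `state["messages"]`.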
Responsibilities:
- Initialize and manage the AI agent
- Process user queries
- Coordinate between LLM and tools
- Return formatted responses
Key Components:
- Purpose: Acts as the "brain" that decides what to do
- Decision Process:
- Receives user input
- Analyzes intent using Gemini LLM
- Determines which tool(s) to call
- Executes tool(s) in sequence if needed
- Formulates natural language response
- Model: gemini-2.0-flash-exp
- Role: Natural language understanding and generation
- Capabilities:
- Understands user intent
- Decides tool usage
- Generates conversational responses
- Maintains context from chat history
- Defines the agent's personality and capabilities
- Provides guidelines for responses
- Instructs the agent on tool usage
- Sets conversation tone (friendly, helpful)
Example Decision Flow:
User: "I want Indian food"
↓
Gemini LLM analyzes: "User wants restaurant recommendations"
↓
Agent decides: "Use search_restaurants_by_cuisine tool"
↓
Tool returns: [List of Indian restaurants]
↓
Gemini formats response: "I found 3 great Indian restaurants..."
Each tool is a Python function decorated with @tool that the agent can call.
@tool
def search_restaurants_by_cuisine(cuisine_type: str) -> list:

- Purpose: Find restaurants by cuisine type
- Input: Cuisine name (e.g., "indian", "italian")
- Output: List of restaurant dictionaries with name, rating, price
- Data Source: Hardcoded dictionary (simulates database)
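A sketch of what such a tool body might look like, with the @tool decorator omitted so it runs without LangChain installed. The sample data mirrors examples shown elsewhere in this README; the actual dictionary in chatbot.py may differ:

```python
# Sample data mirroring this README's examples; chatbot.py's data may differ.
RESTAURANTS = {
    "indian": [{"name": "Spice Palace", "rating": 4.5, "price": "₹₹"}],
    "italian": [
        {"name": "Pizza Bella", "rating": 4.4, "price": "₹₹"},
        {"name": "Pasta Dreams", "rating": 4.6, "price": "₹₹₹"},
        {"name": "Roma Kitchen", "rating": 4.2, "price": "₹₹"},
    ],
}

def search_restaurants_by_cuisine(cuisine_type: str) -> list:
    """Return restaurants for a cuisine, best-rated first.

    In LangChain, this docstring is what tells the agent when to call the tool."""
    matches = RESTAURANTS.get(cuisine_type.strip().lower(), [])
    return sorted(matches, key=lambda r: r["rating"], reverse=True)
```

Note the normalization (`strip().lower()`): the LLM may pass "Italian" or "italian ", and the tool should accept both.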
@tool
def get_restaurant_details(restaurant_name: str) -> dict:

- Purpose: Get comprehensive information about a specific restaurant
- Input: Restaurant name
- Output: Dictionary with address, phone, hours, specialties
- Use Case: When user asks "Tell me more about X"
@tool
def check_reservation_availability(restaurant_name: str, date: str, party_size: int) -> str:

- Purpose: Check if reservation is possible
- Input: Restaurant name, date (YYYY-MM-DD), number of people
- Output: Availability status message
- Logic: Checks against simulated availability calendar
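A minimal sketch of that logic, again without the @tool decorator. The calendar data here is invented for illustration; chatbot.py simulates availability in its own way:

```python
# Hypothetical availability calendar: remaining seats per restaurant per date.
# These values are invented; the real project simulates this differently.
AVAILABILITY = {
    "Spice Palace": {"2024-11-15": 6, "2024-11-16": 2},
}

def check_reservation_availability(restaurant_name: str, date: str, party_size: int) -> str:
    """Return a human-readable availability message for the agent to relay."""
    seats_left = AVAILABILITY.get(restaurant_name, {}).get(date, 0)
    if seats_left >= party_size:
        return f"✓ {restaurant_name} has availability for {party_size} people on {date}"
    return f"✗ {restaurant_name} has no table for {party_size} people on {date}"
```

Returning a plain string (rather than raw numbers) keeps the agent's job simple: it can relay the message nearly verbatim.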
Why Use Tools?
- Structured Data Access: Tools provide reliable, formatted data
- Separation of Concerns: LLM handles language, tools handle data
- Extensibility: Easy to add new capabilities
- Accuracy: Prevents LLM from hallucinating restaurant info
Currently implemented as Python dictionaries:
restaurants = {
"indian": [
{"name": "Spice Palace", "rating": 4.5, "price": "₹₹"},
...
],
...
}

In Production:
- Would be replaced with actual database (PostgreSQL, MongoDB, etc.)
- Could integrate with real APIs (Yelp, Google Places, Zomato)
- Would include real-time availability systems
Let's trace a complete user interaction:
User Input: "I want to eat Italian food tonight"
Step 1: Frontend (app.py)
User types message → Streamlit captures input → Adds to session state
Step 2: Chatbot Initialization
app.py calls: chatbot.chat("I want to eat Italian food tonight", history)
Step 3: Agent Processing
RestaurantChatbot.chat() → agent_executor.invoke()
Step 4: LLM Analysis
Gemini LLM receives:
- User message: "I want to eat Italian food tonight"
- Available tools: [search_restaurants_by_cuisine, get_restaurant_details, check_reservation_availability]
- System prompt: "You are a helpful restaurant assistant..."
LLM thinks: "User wants Italian restaurants. I should use search_restaurants_by_cuisine tool."
Step 5: Tool Execution
Agent calls: search_restaurants_by_cuisine("italian")
↓
Tool searches database
↓
Returns: [
{"name": "Pizza Bella", "rating": 4.4, "price": "₹₹"},
{"name": "Pasta Dreams", "rating": 4.6, "price": "₹₹₹"},
{"name": "Roma Kitchen", "rating": 4.2, "price": "₹₹"}
]
Step 6: Response Generation
Gemini LLM formats response:
"I found 3 wonderful Italian restaurants for you! 🍝
1. **Pasta Dreams** - Rating: 4.6 ⭐ (₹₹₹)
2. **Pizza Bella** - Rating: 4.4 ⭐ (₹₹)
3. **Roma Kitchen** - Rating: 4.2 ⭐ (₹₹)
Would you like to know more about any of these?"
Step 7: Return to Frontend
Response flows back: chatbot.chat() → app.py → Streamlit display
Step 8: UI Update
Streamlit adds bot message to session state → Renders on screen → User sees response
- Agent Framework: Built-in support for creating AI agents
- Tool Integration: Easy to add custom functions
- Prompt Templates: Structured way to guide LLM behavior
- Chat History: Built-in memory management
- Cost-Effective: Competitive pricing for educational use
- Fast: Flash model provides quick responses
- Capable: Handles tool calling and reasoning well
- Accessible: Easy to get API key for learning
- Rapid Development: Build UI with pure Python
- No Frontend Skills Needed: No HTML/CSS/JavaScript required
- Session State: Built-in state management for chat apps
- Interactive: Real-time updates and user interaction
- Separation of Concerns: Business logic separate from presentation
- Testability: Can test chatbot logic independently
- Reusability: Chatbot class can be used in other interfaces (CLI, API, etc.)
- Maintainability: Easier to update one component without affecting the other
The agent uses a ReAct (Reasoning + Acting) pattern:
- Reason: Analyze the user's request
- Act: Decide which tool to use (if any)
- Observe: See the tool's output
- Reason: Determine if more tools are needed
- Respond: Formulate final answer
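The cycle above can be sketched as a toy loop. LangChain's AgentExecutor handles all of this for you; this is only an illustration of the reason→act→observe shape, with a stub standing in for the LLM:

```python
# Toy ReAct-style loop (illustrative only, not LangChain's implementation).
def react_loop(query: str, decide, tools: dict, max_steps: int = 5) -> str:
    """`decide` stands in for the LLM: given the query and observations so far,
    it returns either {"respond": text} or {"tool": name, "args": {...}}."""
    observations = []
    for _ in range(max_steps):
        action = decide(query, observations)           # Reason + Act
        if "respond" in action:
            return action["respond"]                   # Respond
        result = tools[action["tool"]](**action["args"])
        observations.append(result)                    # Observe, then reason again

    return "I couldn't finish that request."           # safety cap on tool calls

# Usage: a stub "LLM" that searches first, then answers from the observation.
tools = {"search": lambda cuisine: ["Pasta Dreams"]}
def decide(query, observations):
    if not observations:
        return {"tool": "search", "args": {"cuisine": "italian"}}
    return {"respond": f"Try {observations[0][0]}!"}
```

The `max_steps` cap matters in real agents too: it prevents a confused model from calling tools forever.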
Example Multi-Step Reasoning:
User: "What's the best rated Italian restaurant and can I book it for tomorrow?"
Agent Reasoning:
1. "I need to find Italian restaurants" → Use search_restaurants_by_cuisine
2. "Now I have a list, Pasta Dreams has highest rating (4.6)"
3. "User wants to book for tomorrow" → Use check_reservation_availability
4. "I have all info needed" → Formulate response
Final Response: "The best rated Italian restaurant is Pasta Dreams (4.6⭐)..."
This architecture makes the chatbot:
- Intelligent: Can handle complex, multi-step queries
- Accurate: Uses structured data, not hallucinations
- Extensible: Easy to add new capabilities
- Maintainable: Clean separation of concerns
- Educational: Clear structure for learning
Custom functions that the AI agent can call:
- search_restaurants_by_cuisine(): Searches the database by cuisine type
- get_restaurant_details(): Fetches detailed restaurant information
- check_reservation_availability(): Checks if reservations are available
The LangChain agent that:
- Understands user intent
- Decides which tools to use
- Formulates natural language responses
Streamlit interface that:
- Displays chat messages
- Manages conversation history
- Provides user input handling
-
Understanding AI Agents
- Agents can use tools to accomplish tasks
- They decide autonomously which tool to call based on user input
- The LLM acts as the "brain" making these decisions
-
Tool Design
- Each tool has a specific, well-defined purpose
- Tools use type hints for clarity
- Docstrings help the AI understand when to use each tool
-
Conversation Management
- Chat history maintains context across messages
- Session state in Streamlit preserves data between interactions
- The agent uses history to provide coherent responses
-
Prompt Engineering
- The system prompt guides the agent's behavior
- Clear instructions result in better responses
- Personality can be defined through prompts
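For a concrete sense of what this looks like, here is a hypothetical system prompt in the spirit described above; the actual prompt in chatbot.py will differ in wording:

```python
# A hypothetical system prompt; the real one lives in chatbot.py.
SYSTEM_PROMPT = """You are a friendly, helpful restaurant assistant.

Guidelines:
- Use search_restaurants_by_cuisine when the user asks for cuisine suggestions.
- Use get_restaurant_details when the user asks about one specific restaurant.
- Use check_reservation_availability for booking questions; dates are YYYY-MM-DD.
- Never invent restaurants: only recommend what the tools return.
- Keep the tone warm and the answers concise."""
```

Notice how each guideline maps to one of the three tools: explicit tool-usage rules like these are what make the agent's decisions predictable.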
Students can extend this project by:
-
Adding More Tools
- Menu browsing
- Price comparison
- Review summaries
- Distance calculation
-
Enhancing Data
- Connect to a real database
- Add more restaurants and cuisines
- Include images and reviews
-
Improving UI
- Add restaurant images
- Show ratings visually (stars)
- Display maps with restaurant locations
- Add filters (price range, rating, distance)
-
Advanced Features
- Multi-language support
- Voice input/output
- Recommendations based on preferences
- Integration with real reservation systems
Finding Restaurants:
User: I want to eat Indian food
Bot: I found 3 great Indian restaurants for you! [lists restaurants]
Getting Details:
User: Tell me more about Spice Palace
Bot: Spice Palace is an Indian restaurant... [provides complete details]
Checking Availability:
User: Can I book a table for 4 at Spice Palace on 2024-11-15?
Bot: ✓ Spice Palace has availability for 4 people on 2024-11-15
- LangChain: Agent framework and tool integration
- Google Gemini: Large Language Model (LLM)
- Streamlit: Web application framework
- Python: Core programming language
- The restaurant data is hardcoded for demonstration purposes
- The API key should be kept secure and not committed to version control
- In production, use environment variables for sensitive data
- The availability checker uses simulated data
Students are encouraged to:
- Add new features
- Improve the UI
- Expand the restaurant database
- Add error handling
- Write tests
This is an educational project. Feel free to use and modify for learning purposes.
Issue: "Module not found" error
Solution: Make sure all dependencies are installed with pip install -r requirements.txt
Issue: API key error
Solution: Verify your Google Gemini API key is valid and has the necessary permissions
Issue: Chatbot not responding
Solution: Check the console for error messages and verify internet connectivity
Happy Learning! 🎓