XeeAI - Explainable AI Chatbot


XeeAI is an open-source explainable AI platform that provides transparency into LLM decision-making. Using C-LIME (Conditional LIME), a variant of the LIME (Local Interpretable Model-agnostic Explanations) algorithm adapted for text generation, XeeAI visualizes how AI models interpret user inputs and generate responses, helping users understand and trust AI systems.

Live Demo: https://xai-research.vercel.app/
Research Paper: https://mr-jones123.github.io/static-website-for-paper/

XeeAI Demo



🎯 About The Project

XeeAI bridges the gap between complex AI systems and user understanding by providing real-time explanations of how language models process inputs and generate outputs. This project aims to:

  • Increase Transparency: Show users how their inputs influence AI responses
  • Build Trust: Help users understand AI decision-making through visualizations
  • Promote AI Literacy: Make explainable AI accessible to everyone
  • Research Tool: Provide a platform for studying AI interpretability

The project uses the C-LIME (Conditional LIME) algorithm to generate explanations, showing which parts of the user's input had the most significant impact on the AI's response.


✨ Features

  • 🤖 Interactive AI Chat: Real-time streaming conversations with Google's Gemini AI
  • 📊 LIME Explanations: Visual breakdown of how input features influence outputs
  • 📈 Interactive Visualizations: Bar charts showing feature importance scores
  • 🎨 Modern UI: Clean, responsive interface built with Tailwind CSS and shadcn/ui
  • 📝 Markdown Support: Rich text rendering with syntax highlighting
  • 🌙 Dark Mode: Built-in theme switching
  • 🔄 Streaming Responses: Server-sent events for real-time AI responses
  • 📱 Fully Responsive: Works seamlessly on desktop, tablet, and mobile
  • 🐳 Docker Support: Easy deployment with containerization
  • 🚀 Production Ready: Deployed on Vercel (frontend) and Render (backend)

πŸ› οΈ Tech Stack

Frontend

  • Next.js (App Router) with TypeScript
  • Tailwind CSS and shadcn/ui
  • Recharts for data visualization

Backend

  • FastAPI (Python)
  • spaCy for text segmentation
  • Google Gemini API

Deployment

  • Vercel (frontend), Render (backend), Docker

Package Managers

  • pnpm - Fast, disk space efficient npm alternative (frontend)
  • uv - Blazing fast Python package installer (backend)

🚀 Getting Started

Prerequisites

Make sure you have the following installed:

  • Node.js 18+ - Download
  • Python 3.11+ - Download
  • pnpm - Install globally:
    npm install -g pnpm
  • uv - Install Python package manager:
    curl -LsSf https://astral.sh/uv/install.sh | sh
  • Docker (optional) - Download

Installation

  1. Clone the repository

    git clone https://github.com/mr-jones123/XAI-Research.git
    cd XAI-Research
  2. Install frontend dependencies

    pnpm install
  3. Install backend dependencies

    uv pip install -r requirements.txt

Environment Variables

Create a .env.local file in the root directory:

# Required: Your Gemini API Key
GEMINI_API_KEY=your_gemini_api_key_here

# Optional: Backend endpoint (for production)
NEXT_PUBLIC_RENDER_ENDPOINT=https://your-backend.onrender.com

Getting a Gemini API Key:

  1. Go to Google AI Studio
  2. Sign in with your Google account
  3. Click "Create API Key"
  4. Copy the key and paste it in your .env.local file

Running Locally

Option 1: Run Frontend and Backend Separately

Terminal 1 - Start the Backend (FastAPI):

cd src/api
uvicorn main:app --reload --host 0.0.0.0 --port 8000

Backend will be available at: http://localhost:8000

Terminal 2 - Start the Frontend (Next.js):

pnpm dev

Frontend will be available at: http://localhost:3000

Option 2: Run with Docker

Build and run the backend container:

# Build the Docker image
docker build -t xeeai-backend .

# Run the container
docker run -p 8000:8000 --env-file .env xeeai-backend

Start the frontend:

pnpm dev

🐳 Docker Deployment

Building the Backend Image

The backend can be containerized for easy deployment:

# Build the image
docker build -t yourusername/xeeai-backend:latest .

# Run locally
docker run -p 8000:8000 -e GEMINI_API_KEY=your_key_here yourusername/xeeai-backend:latest

# Push to Docker Hub
docker login
docker push yourusername/xeeai-backend:latest

Docker Compose (Optional)

Create a docker-compose.yml for the backend service (the frontend still runs with pnpm dev):

version: '3.8'
services:
  backend:
    build: .
    ports:
      - "8000:8000"
    env_file:
      - .env
    environment:
      - PORT=8000

Run with: docker-compose up


πŸ“ Project Structure

XAI-Research/
├── src/
│   ├── api/                          # FastAPI Backend
│   │   ├── main.py                   # Main FastAPI application
│   │   ├── clime/                    # C-LIME algorithm implementation
│   │   │   ├── clime.py              # Core LIME explainer
│   │   │   ├── gemini_wrapper.py     # Gemini model wrapper
│   │   │   ├── segmenter.py          # Text segmentation (spaCy)
│   │   │   ├── subset_utils.py       # Subset sampling utilities
│   │   │   └── linear_model.py       # Linear model fitting
│   │   └── utils/
│   │       └── stream.py             # SSE streaming logic
│   │
│   ├── app/                          # Next.js App Router
│   │   ├── layout.tsx                # Root layout
│   │   ├── page.tsx                  # Landing page
│   │   ├── globals.css               # Global styles
│   │   └── (pages)/chatbot/          # Chatbot page
│   │
│   ├── components/                   # React Components
│   │   ├── Chatbot.tsx               # Main chatbot component
│   │   ├── ChatInterface.tsx         # Chat UI
│   │   ├── ExplainablePanel.tsx      # LIME visualization
│   │   └── ui/                       # shadcn/ui components
│   │
│   └── hooks/                        # Custom React hooks
│       └── useStreamingChat.ts       # Chat streaming hook
│
├── public/                           # Static assets
├── Dockerfile                        # Backend container config
├── .dockerignore                     # Docker ignore rules
├── requirements.txt                  # Python dependencies
├── package.json                      # Node.js dependencies
├── pnpm-lock.yaml                    # pnpm lock file
├── next.config.ts                    # Next.js configuration
├── tailwind.config.ts                # Tailwind configuration
└── README.md                         # This file

🔬 How It Works

Architecture Overview

┌─────────────┐      HTTP/SSE       ┌──────────────┐      API Call      ┌─────────────┐
│  Next.js    │ ────────────────>   │   FastAPI    │ ────────────────>  │   Gemini    │
│  Frontend   │                     │   Backend    │                    │     API     │
└─────────────┘ <────────────────   └──────────────┘ <────────────────  └─────────────┘
      │                                     │
      │                                     │
      │         LIME Explanation            │
      │ <───────────────────────────────────┘
      │
      ▼
┌─────────────┐
│   Recharts  │
│   Visuals   │
└─────────────┘
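
The HTTP/SSE leg above carries a stream of small text frames rather than one JSON response. As a hedged sketch of the Server-Sent Events wire format (the JSON payload shape and the [DONE] sentinel are illustrative assumptions, not the project's exact protocol):

```python
import json

def sse_frames(chunks):
    """Format streamed model output as Server-Sent Events frames,
    as a FastAPI StreamingResponse body would emit them."""
    for chunk in chunks:
        yield f"data: {json.dumps({'text': chunk})}\n\n"
    yield "data: [DONE]\n\n"  # illustrative end-of-stream sentinel
```

Each frame is a data: line terminated by a blank line, which the frontend's streaming reader can parse incrementally as the model generates tokens.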

LIME Explanation Process

  1. User Input: User sends a message to the chatbot
  2. Initial Response: Gemini generates a response (streamed to frontend)
  3. Text Segmentation: Input is segmented into words/sentences using spaCy
  4. Perturbation: Multiple variations of the input are created by masking segments
  5. Model Queries: Each variation is sent to Gemini to generate responses
  6. Similarity Scoring: Outputs are compared to the original response
  7. Linear Model Fitting: A linear model explains which segments are most important
  8. Visualization: Feature importance scores are displayed as an interactive bar chart
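
As a hedged illustration of steps 3–7, the sketch below uses whitespace word segmentation, a toy token-overlap similarity, and an ordinary least-squares fit in place of the project's spaCy segmentation, its actual similarity metric, and its linear model implementation:

```python
import random

import numpy as np

def explain(text, model, n_samples=64, seed=0):
    """Toy LIME-style explainer: perturb the input by masking segments,
    re-query the model, and fit a linear model over the mask vectors."""
    rng = random.Random(seed)
    segments = text.split()                      # stand-in for spaCy segmentation
    original_tokens = set(model(text).split())   # reference response

    masks, scores = [], []
    for _ in range(n_samples):
        mask = [1 if rng.random() < 0.5 else 0 for _ in segments]  # perturbation
        perturbed = " ".join(s for s, keep in zip(segments, mask) if keep)
        out_tokens = set(model(perturbed).split())                 # model query
        # similarity scoring: token overlap with the original response
        scores.append(len(out_tokens & original_tokens) / max(len(original_tokens), 1))
        masks.append(mask)

    # linear model fitting: coefficients approximate per-segment importance
    X = np.hstack([np.array(masks, dtype=float), np.ones((n_samples, 1))])
    coef, *_ = np.linalg.lstsq(X, np.array(scores), rcond=None)
    return dict(zip(segments, coef[:-1]))
```

With an identity model (model = lambda t: t), each of the three segments of "alpha beta gamma" receives a weight of about 1/3, since keeping any one word recovers a third of the original tokens; the real system replaces the identity model with Gemini calls.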

LIME Algorithm

This project uses C-LIME (Conditional LIME), an adaptation of LIME for text generation models. The implementation is based on IBM's ICX360 framework (see Acknowledgments).

Key Features:

  • Adaptive segmentation (words for short texts, sentences for long texts)
  • Perturbation-based explanations
  • Linear approximation of model behavior
  • Visual feature importance ranking
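
The adaptive segmentation rule can be illustrated with a minimal sketch; a naive regex splitter stands in for spaCy, and the 20-word cutoff is an illustrative assumption, not the project's actual threshold:

```python
import re

def segment(text, word_threshold=20):
    """Adaptive segmentation: word-level for short inputs,
    sentence-level for long ones. The threshold is illustrative."""
    words = text.split()
    if len(words) <= word_threshold:
        return words
    # naive sentence split on ., !, or ? followed by whitespace
    return [s for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s]
```

Short prompts are explained word by word, while long prompts are explained sentence by sentence, which keeps the number of perturbations (and therefore model queries) manageable.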

πŸ™ Acknowledgments

This project uses the C-LIME (Conditional LIME) algorithm adapted from the IBM ICX360 (Intelligent Conversational Explainability 360) framework.

We would like to express our sincere gratitude to the IBM Research team and all contributors to the ICX360 project for their groundbreaking work in explainable AI for conversational systems. Their open-source implementation provided the foundation for the explanation capabilities in XeeAI.


🤝 Contributing

Contributions are what make the open-source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.

How to Contribute

  1. Fork the Project
  2. Create your Feature Branch (git checkout -b feat/amazing-feature)
  3. Commit your Changes using Conventional Commits:
    git commit -m "feat: add amazing feature"
  4. Push to the Branch (git push origin feat/amazing-feature)
  5. Open a Pull Request

Commit Convention

This project uses Conventional Commits:

  • feat: - New feature
  • fix: - Bug fix
  • docs: - Documentation changes
  • style: - Code style changes (formatting)
  • refactor: - Code refactoring
  • test: - Adding tests
  • chore: - Maintenance tasks

Development Guidelines

  • Follow the existing code style
  • Write meaningful commit messages
  • Add tests for new features (when testing infrastructure is available)
  • Update documentation as needed
  • Ensure your code builds without errors: pnpm build

📄 License

Distributed under the MIT License. See LICENSE for more information.


📧 Contact

Project Link: https://github.com/mr-jones123/XAI-Research

Live Demo: https://xai-research.vercel.app/


🌟 Star History

If you find this project useful, please consider giving it a star! ⭐

Star History Chart


Built with ❤️ for AI transparency
