XeeAI is an open-source explainable AI platform that provides transparency into LLM decision-making. Using C-LIME, a conditional variant of the LIME (Local Interpretable Model-agnostic Explanations) algorithm, XeeAI visualizes how AI models interpret user inputs and generate responses, helping users understand and trust AI systems.
Live Demo: https://xai-research.vercel.app/
Research Paper: https://mr-jones123.github.io/static-website-for-paper/
- About The Project
- Features
- Tech Stack
- Getting Started
- Docker Deployment
- Project Structure
- How It Works
- Acknowledgments
- Contributing
- License
XeeAI bridges the gap between complex AI systems and user understanding by providing real-time explanations of how language models process inputs and generate outputs. This project aims to:
- Increase Transparency: Show users how their inputs influence AI responses
- Build Trust: Help users understand AI decision-making through visualizations
- Promote AI Literacy: Make explainable AI accessible to everyone
- Research Tool: Provide a platform for studying AI interpretability
The project uses the C-LIME (Conditional LIME) algorithm to generate explanations, showing which parts of the user's input had the most significant impact on the AI's response.
- Interactive AI Chat: Real-time streaming conversations with Google's Gemini AI
- LIME Explanations: Visual breakdown of how input features influence outputs
- Interactive Visualizations: Bar charts showing feature importance scores
- Modern UI: Clean, responsive interface built with Tailwind CSS and shadcn/ui
- Markdown Support: Rich text rendering with syntax highlighting
- Dark Mode: Built-in theme switching
- Streaming Responses: Server-sent events for real-time AI responses
- Fully Responsive: Works seamlessly on desktop, tablet, and mobile
- Docker Support: Easy deployment with containerization
- Production Ready: Deployed on Vercel (frontend) and Render (backend)
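Streaming responses travel over Server-Sent Events (SSE). As a rough sketch of the framing involved — the event names and payload shape here are illustrative assumptions, not the actual wire format used by `stream.py`:

```python
import json

def sse_events(chunks):
    """Frame text chunks as Server-Sent Events.

    Illustrative only: the real backend (src/api/utils/stream.py)
    may use a different payload shape.
    """
    for chunk in chunks:
        # Each SSE event is a "data:" line terminated by a blank line.
        yield f"data: {json.dumps({'token': chunk})}\n\n"
    yield "data: [DONE]\n\n"  # sentinel so the client knows the stream ended

frames = list(sse_events(["Hel", "lo"]))
```

With FastAPI, a generator like this would typically be wrapped in a `StreamingResponse` with `media_type="text/event-stream"` so the frontend can consume tokens as they arrive.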
- Next.js 15.1.6 - React framework with App Router
- React 19.2.0 - UI library
- TypeScript - Type-safe JavaScript
- Tailwind CSS 3.4.18 - Utility-first CSS
- shadcn/ui - Component library (Radix UI primitives)
- Recharts 2.15.4 - Data visualization
- Framer Motion - Animations
- react-markdown - Markdown rendering
- FastAPI 0.119.1 - Modern Python web framework
- Python 3.11+ - Programming language
- Google Generative AI - Gemini 2.5 Flash integration
- scikit-learn - Machine learning (for LIME)
- spaCy - Natural language processing
- Uvicorn - ASGI server
- Pydantic - Data validation
- Vercel - Frontend hosting + serverless functions
- Render - Backend API hosting
- Docker - Containerization
- Docker Hub - Container registry
- pnpm - Fast, disk space efficient npm alternative (frontend)
- uv - Blazing fast Python package installer (backend)
Make sure you have the following installed:
- Node.js 18+ - Download
- Python 3.11+ - Download
- pnpm - Install globally:
npm install -g pnpm
- uv - Install Python package manager:
curl -LsSf https://astral.sh/uv/install.sh | sh
- Docker (optional) - Download
- Clone the repository
  git clone https://github.com/yourusername/XAI-Research.git
  cd XAI-Research
- Install frontend dependencies
  pnpm install
- Install backend dependencies
  uv pip install -r requirements.txt
Create a .env.local file in the root directory:
# Required: Your Gemini API Key
GEMINI_API_KEY=your_gemini_api_key_here
# Optional: Backend endpoint (for production)
NEXT_PUBLIC_RENDER_ENDPOINT=https://your-backend.onrender.com

Getting a Gemini API Key:
- Go to Google AI Studio
- Sign in with your Google account
- Click "Create API Key"
- Copy the key and paste it into your .env.local file
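The backend is expected to read this key from the environment at startup. A minimal, hypothetical helper (the actual backend may load configuration differently) that fails fast when the key is missing:

```python
import os

def load_gemini_key(env=None):
    """Fetch the Gemini API key from the environment.

    Hypothetical helper, not the project's actual loading code; it
    fails fast with a clear message if the key is missing.
    """
    env = os.environ if env is None else env
    key = env.get("GEMINI_API_KEY")
    if not key:
        raise RuntimeError("GEMINI_API_KEY is not set; see the .env.local steps above")
    return key
```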
Terminal 1 - Start the Backend (FastAPI):
cd src/api
uvicorn main:app --reload --host 0.0.0.0 --port 8000

Backend will be available at: http://localhost:8000
Terminal 2 - Start the Frontend (Next.js):
pnpm dev

Frontend will be available at: http://localhost:3000
Build and run the backend container:
# Build the Docker image
docker build -t xeeai-backend .
# Run the container
docker run -p 8000:8000 --env-file .env xeeai-backend

Start the frontend:
pnpm dev

The backend can be containerized for easy deployment:
# Build the image
docker build -t yourusername/xeeai-backend:latest .
# Run locally
docker run -p 8000:8000 -e GEMINI_API_KEY=your_key_here yourusername/xeeai-backend:latest
# Push to Docker Hub
docker login
docker push yourusername/xeeai-backend:latest

Create a docker-compose.yml for running both services:
version: '3.8'
services:
  backend:
    build: .
    ports:
      - "8000:8000"
    env_file:
      - .env
    environment:
      - PORT=8000

Run with: docker-compose up
XAI-Research/
├── src/
│   ├── api/                      # FastAPI Backend
│   │   ├── main.py               # Main FastAPI application
│   │   ├── clime/                # C-LIME algorithm implementation
│   │   │   ├── clime.py          # Core LIME explainer
│   │   │   ├── gemini_wrapper.py # Gemini model wrapper
│   │   │   ├── segmenter.py      # Text segmentation (spaCy)
│   │   │   ├── subset_utils.py   # Subset sampling utilities
│   │   │   └── linear_model.py   # Linear model fitting
│   │   └── utils/
│   │       └── stream.py         # SSE streaming logic
│   ├── app/                      # Next.js App Router
│   │   ├── layout.tsx            # Root layout
│   │   ├── page.tsx              # Landing page
│   │   ├── globals.css           # Global styles
│   │   └── (pages)/chatbot/      # Chatbot page
│   ├── components/               # React Components
│   │   ├── Chatbot.tsx           # Main chatbot component
│   │   ├── ChatInterface.tsx     # Chat UI
│   │   ├── ExplainablePanel.tsx  # LIME visualization
│   │   └── ui/                   # shadcn/ui components
│   └── hooks/                    # Custom React hooks
│       └── useStreamingChat.ts   # Chat streaming hook
├── public/                       # Static assets
├── Dockerfile                    # Backend container config
├── .dockerignore                 # Docker ignore rules
├── requirements.txt              # Python dependencies
├── package.json                  # Node.js dependencies
├── pnpm-lock.yaml                # pnpm lock file
├── next.config.ts                # Next.js configuration
├── tailwind.config.ts            # Tailwind configuration
└── README.md                     # This file
┌──────────────┐    HTTP/SSE     ┌──────────────┐    API Call     ┌─────────────┐
│   Next.js    │ ──────────────> │   FastAPI    │ ──────────────> │   Gemini    │
│   Frontend   │                 │   Backend    │                 │     API     │
└──────────────┘ <────────────── └──────────────┘ <────────────── └─────────────┘
       │
       │  LIME Explanation
       │ <──────────────────────────────────
       │
       ▼
┌──────────────┐
│   Recharts   │
│   Visuals    │
└──────────────┘
- User Input: User sends a message to the chatbot
- Initial Response: Gemini generates a response (streamed to frontend)
- Text Segmentation: Input is segmented into words/sentences using spaCy
- Perturbation: Multiple variations of the input are created by masking segments
- Model Queries: Each variation is sent to Gemini to generate responses
- Similarity Scoring: Outputs are compared to the original response
- Linear Model Fitting: A linear model explains which segments are most important
- Visualization: Feature importance scores are displayed as an interactive bar chart
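The steps above can be sketched end to end. This toy version stands in a trivial "model" for Gemini, Jaccard word overlap for the similarity metric, and NumPy least squares for the scikit-learn linear fit; all names and parameters are illustrative, not the actual clime.py implementation:

```python
import random
import numpy as np

def jaccard(a, b):
    """Toy similarity score: word overlap between two responses."""
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def explain(segments, model, n_samples=64, seed=0):
    """LIME-style perturbation loop: mask random subsets of segments,
    query the model on each variation, and fit a linear surrogate to
    the similarity scores. Returns one importance weight per segment."""
    rng = random.Random(seed)
    original = model(" ".join(segments))
    masks, scores = [], []
    for _ in range(n_samples):
        mask = [rng.random() < 0.5 for _ in segments]
        if not any(mask):                       # keep at least one segment
            mask[rng.randrange(len(segments))] = True
        perturbed = " ".join(s for s, keep in zip(segments, mask) if keep)
        masks.append([float(m) for m in mask])
        scores.append(jaccard(original, model(perturbed)))
    X = np.hstack([np.array(masks), np.ones((n_samples, 1))])  # + intercept
    w, *_ = np.linalg.lstsq(X, np.array(scores), rcond=None)
    return w[:-1]                               # drop the intercept term

# Toy "model" that ignores the token "noise"; its weight should be ~0,
# while segments that shape the output get higher weights.
toy_model = lambda text: " ".join(t for t in text.split() if t != "noise")
weights = explain(["explain", "LIME", "noise"], toy_model)
```

Segments whose removal changes the output little get weights near zero; segments the model actually relies on get larger weights, which is exactly what the bar chart visualizes.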
This project uses C-LIME (Conditional LIME), an adaptation of LIME for text generation models. The implementation is based on IBM's ICX360 framework (see Acknowledgments).
Key Features:
- Adaptive segmentation (words for short texts, sentences for long texts)
- Perturbation-based explanations
- Linear approximation of model behavior
- Visual feature importance ranking
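The adaptive segmentation rule can be illustrated with plain string splitting. This is a stand-in for the spaCy-based segmenter.py, and the word_limit threshold is made up for the example:

```python
import re

def segment(text, word_limit=8):
    """Adaptive segmentation sketch: word-level segments for short
    inputs, sentence-level for long ones. Stand-in for the spaCy
    segmenter; the threshold is illustrative, not the real value."""
    words = text.split()
    if len(words) <= word_limit:
        return words                              # short: per-word segments
    # long: naive sentence split on terminal punctuation
    return [s for s in re.split(r"(?<=[.!?])\s+", text) if s]

short_segs = segment("Why is the sky blue?")
long_segs = segment("First sentence here. Second one follows! A third ends it? And a fourth.")
```

Word-level segments give fine-grained explanations for short prompts, while sentence-level segments keep the number of perturbations (and model queries) manageable for long ones.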
This project uses the C-LIME (Conditional LIME) algorithm adapted from the IBM ICX360 (Intelligent Conversational Explainability 360) framework.
We would like to express our sincere gratitude to the IBM Research team and all contributors to the ICX360 project for their groundbreaking work in explainable AI for conversational systems. Their open-source implementation provided the foundation for the explanation capabilities in XeeAI.
Reference:
- ICX360 GitHub Repository: https://github.com/IBM/ICX360
- Original LIME Paper: "Why Should I Trust You?": Explaining the Predictions of Any Classifier, Ribeiro et al. (2016)
Special thanks to the following open-source projects:
- Next.js - React framework
- FastAPI - Modern Python web framework
- Google Generative AI - Gemini API
- scikit-learn - Machine learning library
- spaCy - Industrial-strength NLP
- shadcn/ui - Beautiful UI components
- Recharts - Composable charting library
- Vercel - Deployment platform
Contributions are what make the open-source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.
- Fork the Project
- Create your Feature Branch (git checkout -b feat/amazing-feature)
- Commit your Changes using Conventional Commits (git commit -m "feat: add amazing feature")
- Push to the Branch (git push origin feat/amazing-feature)
- Open a Pull Request
This project uses Conventional Commits:
- feat: New feature
- fix: Bug fix
- docs: Documentation changes
- style: Code style changes (formatting)
- refactor: Code refactoring
- test: Adding tests
- chore: Maintenance tasks
- Follow the existing code style
- Write meaningful commit messages
- Add tests for new features (when testing infrastructure is available)
- Update documentation as needed
- Ensure your code builds without errors:
pnpm build
Distributed under the MIT License. See LICENSE for more information.
Project Link: https://github.com/mr-jones123/XAI-Research
Live Demo: https://xai-research.vercel.app/
If you find this project useful, please consider giving it a star! ⭐

Built with ❤️ for AI transparency
