charant30/ragsystems
RAG Enterprise Admin Dashboard

Enterprise-grade Streamlit admin dashboard for exploring, configuring, and deploying 10 RAG (Retrieval-Augmented Generation) architectures.

✨ Features

10 RAG Architectures

| Architecture | Description | Complexity |
|---|---|---|
| Naive RAG | Basic retrieve-and-generate pipeline | Low |
| Advanced RAG | Query optimization, hybrid search, re-ranking | Medium |
| Modular RAG | Plug-and-play component architecture | Medium |
| Agentic RAG | Autonomous agent with planning and tool use | High |
| Self-RAG | Self-evaluating and self-corrective system | High |
| Corrective RAG | Error detection, validation, and correction | High |
| GraphRAG | Knowledge graph-enhanced retrieval | High |
| RAG Fusion | Multi-query parallel retrieval with rank fusion | Medium |
| Adaptive RAG | Dynamic strategy selection based on complexity | High |
| Multi-Modal RAG | Text, image, and video retrieval | Very High |
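
The simplest entry in the table, Naive RAG, is just retrieve-then-generate. A minimal sketch of that flow, using a toy bag-of-words retriever and a stand-in for the LLM call (not the repository's implementation):

```python
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], top_k: int = 2) -> list[str]:
    """Return the top_k documents most similar to the query."""
    q = Counter(query.lower().split())
    ranked = sorted(docs, key=lambda d: cosine(q, Counter(d.lower().split())),
                    reverse=True)
    return ranked[:top_k]

def generate(query: str, context: list[str]) -> str:
    """Stand-in for the LLM call: stitch retrieved context into an answer."""
    return f"Q: {query}\nContext: {' | '.join(context)}"

docs = [
    "FAISS is a library for vector similarity search.",
    "Streamlit builds data apps in Python.",
    "Retrieval-augmented generation grounds LLM answers in documents.",
]
print(generate("what is vector similarity search",
               retrieve("vector similarity search", docs)))
```

Every architecture below layers something onto these two steps: query rewriting and re-ranking (Advanced), self-evaluation loops (Self-RAG), or graph traversal (GraphRAG).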

LLM Providers

  • ✅ OpenAI (GPT-4o, GPT-4, GPT-3.5)
  • ✅ Anthropic (Claude 3.5, Claude 3)
  • ✅ Ollama (Local models: Llama, Mistral, etc.)
  • ✅ AWS Bedrock

Vector Stores

  • ✅ FAISS (Local, default)
  • ✅ Pinecone (Cloud)
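
Both backends sit behind a common search interface; a minimal in-memory stand-in illustrates the shape (the `add`/`search` names here are hypothetical, not the actual `vector_store_service.py` API):

```python
from dataclasses import dataclass, field

@dataclass
class InMemoryVectorStore:
    """Toy stand-in for the FAISS/Pinecone backends: stores embeddings
    and returns the texts of the nearest ones by dot product."""
    vectors: list[list[float]] = field(default_factory=list)
    texts: list[str] = field(default_factory=list)

    def add(self, vector: list[float], text: str) -> None:
        self.vectors.append(vector)
        self.texts.append(text)

    def search(self, query: list[float], top_k: int = 3) -> list[str]:
        scores = [sum(q * v for q, v in zip(query, vec)) for vec in self.vectors]
        ranked = sorted(range(len(scores)), key=scores.__getitem__, reverse=True)
        return [self.texts[i] for i in ranked[:top_k]]

store = InMemoryVectorStore()
store.add([1.0, 0.0], "about cats")
store.add([0.0, 1.0], "about dogs")
print(store.search([0.9, 0.1], top_k=1))  # nearest to the "cats" vector
```

Swapping FAISS for Pinecone is then just a different implementation behind the same two methods, which is why `vector_store_type` is a single string parameter in the usage examples below.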

🚀 Quick Start

# Clone the repository
git clone https://github.com/charant30/ragsystems.git rags
cd rags

# Install dependencies
pip install -r requirements.txt

# Copy environment template
cp .env.example .env

# Edit .env with your API keys (optional for local models)

# Run the dashboard
streamlit run streamlit_app/app.py

📁 Project Structure

rags/
├── streamlit_app/
│   ├── app.py                    # Main entry point
│   └── pages/                    # Dashboard pages
│       ├── 01_Dashboard.py
│       ├── 02_Architecture_Explorer.py
│       ├── 03_Interactive_Demos.py
│       ├── 04_Comparison_Matrix.py
│       ├── 05_Settings.py
│       └── 06_Documentation.py
├── rag_systems/                  # 10 RAG implementations
│   ├── base.py                   # Base interface
│   ├── naive_rag.py
│   ├── advanced_rag.py
│   ├── modular_rag.py
│   ├── agentic_rag.py
│   ├── self_rag.py
│   ├── corrective_rag.py
│   ├── graph_rag.py
│   ├── rag_fusion.py
│   ├── adaptive_rag.py
│   └── multimodal_rag.py
├── services/                     # Provider abstractions
│   ├── llm_service.py
│   ├── embedding_service.py
│   └── vector_store_service.py
├── app/
│   └── config.py                 # Configuration
├── requirements.txt
└── .env.example

💻 Usage

Basic RAG Pipeline

from rag_systems import get_rag_system

# Initialize any RAG type
rag = get_rag_system(
    "naive",  # or "advanced", "agentic", "graph", etc.
    llm_provider="openai",
    llm_model="gpt-4o-mini",
    embedding_provider="local",
    vector_store_type="faiss",
)

# Index documents
rag.index_documents([
    "Document 1 content...",
    "Document 2 content...",
])

# Run pipeline
response = rag.run_pipeline("Your question here?", top_k=5)

print(response.answer)
print(f"Retrieved {len(response.retrieved_documents)} documents")
print(f"Completed in {response.total_duration_ms:.2f}ms")
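
The example above reads `answer`, `retrieved_documents`, and `total_duration_ms` off the response. A hypothetical stub with those fields makes the shape concrete (the repository's actual response class may define more):

```python
import time
from dataclasses import dataclass

@dataclass
class RAGResponse:
    """Fields the usage example reads back; inferred, not the repo's definition."""
    answer: str
    retrieved_documents: list[str]
    total_duration_ms: float

def run_pipeline_stub(question: str, top_k: int = 5) -> RAGResponse:
    start = time.perf_counter()
    docs = [f"doc {i}" for i in range(top_k)]  # pretend retrieval
    answer = f"Answer to: {question}"          # pretend generation
    return RAGResponse(answer, docs, (time.perf_counter() - start) * 1000)

resp = run_pipeline_stub("Your question here?", top_k=5)
print(resp.answer)
print(f"Retrieved {len(resp.retrieved_documents)} documents")
print(f"Completed in {resp.total_duration_ms:.2f}ms")
```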

With Local Models (Free)

rag = get_rag_system(
    "advanced",
    llm_provider="ollama",
    llm_model="llama3.2",
    embedding_provider="local",
    embedding_model="all-MiniLM-L6-v2",
    vector_store_type="faiss",
)

📊 Dashboard Pages

  1. 🏠 Dashboard - Overview and quick stats
  2. 📚 Architecture Explorer - Deep dive into each RAG type
  3. 🔬 Interactive Demos - Test RAG pipelines live
  4. 📊 Comparison Matrix - Compare all architectures
  5. ⚙️ Settings - Configure providers and parameters
  6. 📖 Documentation - Best practices and guides

🔧 Configuration

Edit .env with your API keys:

OPENAI_API_KEY=sk-your-key
ANTHROPIC_API_KEY=sk-ant-your-key
OLLAMA_BASE_URL=http://localhost:11434
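
A minimal sketch of reading these keys at startup with `os.getenv` (field names are illustrative; the actual `app/config.py` may differ). Missing keys fall back to empty strings, which is fine for local-only Ollama use:

```python
import os
from dataclasses import dataclass

@dataclass(frozen=True)
class Settings:
    """Environment-backed settings mirroring the .env keys above."""
    openai_api_key: str
    anthropic_api_key: str
    ollama_base_url: str

def load_settings() -> Settings:
    # The Ollama URL defaults to its standard local port if unset.
    return Settings(
        openai_api_key=os.getenv("OPENAI_API_KEY", ""),
        anthropic_api_key=os.getenv("ANTHROPIC_API_KEY", ""),
        ollama_base_url=os.getenv("OLLAMA_BASE_URL", "http://localhost:11434"),
    )

print(load_settings().ollama_base_url)
```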

📄 License

MIT License
