Ayushlion8/FinRAG-Copilot


🟢 Pluang Knowledge Copilot

AI Knowledge Base & Customer Support Copilot (RAG-based)

Overview

Pluang Knowledge Copilot is an AI-powered assistant that answers user queries strictly from internal Pluang documents, using a Retrieval-Augmented Generation (RAG) architecture.

The goal of this project is to demonstrate how AI systems can be:

  • grounded in trusted data
  • resistant to hallucinations
  • production-aware (quota, cost, reliability)
  • easy to reason about and extend

This project was built as part of the Pluang Tech Intern Assignment.


Key Features

  • 📚 Document-grounded answers only
    The assistant answers strictly from indexed internal documents and refuses to guess when information is missing.

  • 🔍 Retrieval-Augmented Generation (RAG)
    Combines semantic search (FAISS + embeddings) with LLM-based reasoning over the retrieved context.

  • 🧾 Explicit source citations
    Every grounded answer includes clear source references.

  • 🛑 Hallucination avoidance
    If the answer is not present in the knowledge base, the assistant clearly states so.

  • ⚙️ Quota-aware LLM usage
    Automatically falls back across multiple Gemini models when quota limits are hit.

  • 🧩 Modular, clean architecture
    Clear separation between configuration, retrieval, prompting, and LLM logic.
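The quota-aware fallback can be sketched as follows. This is a minimal illustration, not the actual logic in `core/llm.py`: the model names, the `QuotaExceededError` class, and the `call_model` callable are all assumptions standing in for the real Gemini SDK.

```python
# Minimal sketch of quota-aware model fallback.
# Model names and the exception type are assumptions; the real logic
# lives in core/llm.py and uses the Gemini SDK's own errors.
FALLBACK_MODELS = ["gemini-2.0-flash", "gemini-1.5-flash", "gemini-1.5-flash-8b"]

class QuotaExceededError(Exception):
    """Stand-in for the SDK's quota / rate-limit error."""

def generate_with_fallback(prompt, call_model, models=FALLBACK_MODELS):
    """Try each model in order, moving to the next when quota is exhausted."""
    last_error = None
    for model in models:
        try:
            return model, call_model(model, prompt)
        except QuotaExceededError as err:
            last_error = err  # this model is out of quota; try the next one
    raise RuntimeError("All models exhausted") from last_error

# Usage with a fake client where the first model is "out of quota":
def fake_call(model, prompt):
    if model == "gemini-2.0-flash":
        raise QuotaExceededError(model)
    return f"answer from {model}"

used, answer = generate_with_fallback("What is Pluang Gold?", fake_call)
```

The key design point is that fallback is transparent to the caller: the function returns which model actually served the request, so the UI can surface degraded-mode information if needed.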


Architecture Overview

High-level flow:

  1. Internal documents are loaded and embedded with a local embedding model.
  2. Embeddings are stored in a FAISS vector index.
  3. User queries are embedded into the same vector space.
  4. The most relevant document chunks are retrieved.
  5. The Gemini LLM generates an answer only from the retrieved context.
  6. Sources are shown only when the answer is grounded.
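The flow above can be sketched end to end. This is a self-contained toy: bag-of-words vectors stand in for the sentence-transformers embeddings and a plain list stands in for FAISS, so it illustrates steps 1–4 conceptually rather than reproducing the project's code.

```python
# Toy sketch of steps 1-4: embed chunks, index them, embed the query,
# retrieve the top match. Bag-of-words vectors stand in for real embeddings.
from collections import Counter
from math import sqrt

def embed(text):
    """'Embed' text as a bag-of-words vector (stand-in for a real model)."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = [
    "Pluang Gold savings start from a small minimum amount.",
    "Crypto withdrawals may have a cooling-off period.",
]
index = [(d, embed(d)) for d in docs]           # steps 1-2: embed and index

def retrieve(query, k=1):
    qv = embed(query)                           # step 3: embed the query
    ranked = sorted(index, key=lambda p: cosine(qv, p[1]), reverse=True)
    return [d for d, _ in ranked[:k]]           # step 4: top-k chunks

print(retrieve("cooling-off period for crypto withdrawals"))
```

In the real pipeline the only changes are the embedding function (sentence-transformers) and the index (FAISS); the retrieve-then-generate shape stays the same.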

Tech Stack

Frontend

  • Streamlit (chat-style UI)

Backend / AI

  • Python
  • LangChain (RAG orchestration)
  • FAISS (vector store)
  • Gemini Flash models (generation)
  • HuggingFace sentence-transformers (local embeddings)
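The grounding and refusal behavior listed under Key Features is typically enforced at the prompt level. A minimal illustrative sketch follows; the chunk data, source names, and wording are hypothetical, and the project's real template lives in `core/prompt.py`.

```python
# Illustrative grounding prompt; the actual template is in core/prompt.py.
PROMPT_TEMPLATE = """You are Pluang Knowledge Copilot.
Answer ONLY from the context below. If the answer is not in the
context, reply exactly: "I could not find this in the knowledge base."
Cite the source name of every chunk you use.

Context:
{context}

Question: {question}
Answer:"""

def build_prompt(chunks, question):
    """Assemble retrieved chunks (with their sources) into the prompt."""
    context = "\n\n".join(f"[{c['source']}] {c['text']}" for c in chunks)
    return PROMPT_TEMPLATE.format(context=context, question=question)

# Hypothetical usage with a single retrieved chunk:
prompt = build_prompt(
    [{"source": "gold_faq.md", "text": "Example chunk text about gold savings."}],
    "What is the minimum for gold savings?",
)
```

Tagging each chunk with its source name in the context is what lets the model emit the explicit citations shown in the screenshots.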

Repository Structure


```
├── app.py                 # Streamlit entry point
├── core/
│   ├── config.py          # API keys & model list
│   ├── llm.py             # Gemini model fallback logic
│   ├── vectorstore.py     # FAISS + embeddings
│   └── prompt.py          # Prompt template
├── data/
│   └── mock_data.json     # Internal knowledge documents
├── decision_document.md
├── requirements.txt
└── README.md
```


Example Queries

Grounded queries

  • What is the minimum amount for Pluang Gold savings?
  • Is there a cooling-off period for crypto withdrawals?
  • What are the fees for physical gold redemption?

Unanswerable queries (hallucination test)

  • Who is the CEO of Pluang in 2026?
  • Is Pluang regulated by SEBI?
  • What is Pluang’s stock price today?

Screenshots

Grounded Answer with Source Citation

Shows a grounded response with an explicit source citation.




Hallucination Avoidance (Out-of-scope Query)

Demonstrates a safe refusal when the requested information is not present in the knowledge base.

How to Run Locally

```bash
python -m venv venv
source venv/bin/activate   # Windows: venv\Scripts\activate
pip install -r requirements.txt
streamlit run app.py
```

About

RAG-based internal knowledge copilot for fintech support, providing grounded answers with source citations and safe refusals.
