A lightweight Retrieval-Augmented Generation (RAG) framework leveraging FAISS, LangChain, and Google Generative AI for document-based question answering and contextual assistance.
- Document Retrieval: Efficiently retrieves relevant documents using FAISS.
- Text Chunking: Splits large documents into manageable chunks for better processing.
- Generative AI Integration: Uses Google Gemini for embedding generation and answering queries.
- State Graph: Implements a state graph for structured application flow.
- LangGraph Integration: Utilizes LangGraph to define and manage the application's state graph, ensuring a clear and logical workflow.
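Taken together, these features amount to the usual index-then-retrieve RAG pipeline. The sketch below shows one way to wire the pieces up with LangChain, FAISS, and Gemini embeddings; the loader, embedding model name, chunk sizes, and file path are illustrative assumptions rather than the exact settings used in this repository.

```python
# Minimal indexing/retrieval sketch (illustrative; the path, model name, and
# chunking parameters are assumptions, not the notebook's exact settings).
from langchain_community.document_loaders import TextLoader
from langchain_community.vectorstores import FAISS
from langchain_google_genai import GoogleGenerativeAIEmbeddings
from langchain_text_splitters import RecursiveCharacterTextSplitter

# Load a document and split it into manageable chunks.
docs = TextLoader("data/example.txt").load()
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
chunks = splitter.split_documents(docs)

# Embed the chunks with Gemini embeddings and index them in FAISS.
embeddings = GoogleGenerativeAIEmbeddings(model="models/embedding-001")
vector_store = FAISS.from_documents(chunks, embeddings)

# Retrieve the chunks most relevant to a query.
relevant_chunks = vector_store.similarity_search("What is this document about?", k=4)
```

The retrieval step here feeds the retrieve-and-generate state graph, which is sketched further below under the notebook overview.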
- Python 3.8 or higher
- A valid Google Gemini API key
- Clone the repository:

  ```bash
  git clone <repository-url>
  cd RAG_proj
  ```
- Create and activate a virtual environment:

  ```bash
  python -m venv venv
  venv\Scripts\activate
  ```
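  On macOS/Linux, activate with `source venv/bin/activate` instead.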
- Install dependencies:

  Install all required dependencies with `pip`.
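  Assuming the repository ships a `requirements.txt` (adjust to whatever dependency list it actually provides), a typical install looks like:

  ```bash
  # Install from the project's requirements file, if one is provided.
  pip install -r requirements.txt

  # Otherwise, the packages implied by the features above are roughly:
  pip install langchain langchain-community langchain-google-genai faiss-cpu langgraph python-dotenv
  ```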
- Add your Google Gemini API key to a `.env` file:

  ```
  GOOGLE_API_KEY=<your-api-key>
  ```
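A quick way to confirm the key is picked up is to load the `.env` file with `python-dotenv` (assumed here to be among the installed dependencies) and read the variable:

```python
import os

from dotenv import load_dotenv

# Read variables from the .env file into the process environment.
load_dotenv()

api_key = os.getenv("GOOGLE_API_KEY")
assert api_key, "GOOGLE_API_KEY is not set - check your .env file"
```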
Run the rag_basic.ipynb notebook to test the framework. The notebook demonstrates:
- Document loading and chunking.
- Retrieval and generation steps.
- Query answering using the state graph.
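A minimal LangGraph version of that retrieve-and-generate flow could look like the following sketch; the node names, prompt wording, and Gemini chat model are assumptions for illustration, and `vector_store` is the FAISS index from the earlier sketch.

```python
from typing import List, TypedDict

from langchain_core.documents import Document
from langchain_google_genai import ChatGoogleGenerativeAI
from langgraph.graph import END, START, StateGraph

llm = ChatGoogleGenerativeAI(model="gemini-1.5-flash")  # model name is an assumption


class State(TypedDict):
    question: str
    context: List[Document]
    answer: str


def retrieve(state: State) -> dict:
    # vector_store is the FAISS index built in the earlier sketch.
    return {"context": vector_store.similarity_search(state["question"])}


def generate(state: State) -> dict:
    context_text = "\n\n".join(doc.page_content for doc in state["context"])
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context_text}\n\nQuestion: {state['question']}"
    )
    return {"answer": llm.invoke(prompt).content}


# Wire the nodes into a simple linear state graph: retrieve -> generate.
builder = StateGraph(State)
builder.add_node("retrieve", retrieve)
builder.add_node("generate", generate)
builder.add_edge(START, "retrieve")
builder.add_edge("retrieve", "generate")
builder.add_edge("generate", END)
graph = builder.compile()

result = graph.invoke({"question": "What is the document about?"})
print(result["answer"])
```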
This project is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. See the LICENSE file for details.
Contributions are welcome! Please open an issue or submit a pull request for any improvements or bug fixes.