A Retrieval-Augmented Generation (RAG) system for answering questions based on custom documents.
Built with LangChain, Chroma, BeautifulSoup, FastAPI, Streamlit, and Docker Compose.
- 💬 Interactive chat interface with Streamlit
- 📚 Document retrieval using LangChain + Chroma embeddings
- 🤖 Supports multiple LLM models: DeepSeek, OpenAI, K2-Think, Qwen, etc. (free API keys at https://ai.io.net/ai/api-keys)
- ⚡ Backend powered by FastAPI
- 🔒 Safe API key storage using environment variables (.env)
- 🐳 Fully containerized with Docker Compose
- Python 3.10
- LangChain
- Chroma (vector database)
- BeautifulSoup (bs4)
- FastAPI
- Streamlit
- Docker Compose
- Clone the repository:

```bash
git clone <your-repo-url>
cd RAG-start
```

- Create a `.env` file:

```
LLM=your_api_token_here
```

  Important: do not commit this file; add it to `.gitignore`.
- Build and run with Docker Compose:

```bash
docker compose up --build
```

- 🌐 Frontend (Streamlit): http://localhost:8501
- ⚡ Backend (FastAPI): http://localhost:8000/docs
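The backend reads the API key from the environment rather than from source code. A minimal sketch of what the `api.py` helper might look like (the function name `get_api_key` is an assumption, not the repo's actual code):

```python
import os

def get_api_key(var: str = "LLM") -> str:
    """Read the LLM API key from the environment (populated from .env
    by Docker Compose or python-dotenv). Raise if it is missing so a
    misconfigured container fails fast instead of sending empty keys."""
    key = os.getenv(var)
    if not key:
        raise RuntimeError(
            f"Environment variable {var} is not set; "
            "create a .env file with LLM=<your token>"
        )
    return key
```

Failing fast here keeps key handling in one place: both the FastAPI backend and any scripts can import this helper instead of reading `os.environ` directly.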
- Open Streamlit in your browser
- Select a model from the sidebar
- Ask a question in the chat input
- The system retrieves relevant documents and generates a response using the selected LLM
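The retrieve-then-generate step above can be illustrated with a toy example: rank documents by similarity to the question, then splice the best match into the LLM prompt. This sketch uses bag-of-words cosine similarity purely for illustration; the actual app uses LangChain embeddings and a Chroma vector store:

```python
from collections import Counter
from math import sqrt

def vectorize(text: str) -> Counter:
    # Bag-of-words stand-in for a real embedding model.
    return Counter(t.strip(".,!?") for t in text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(question: str, docs: list[str], k: int = 2) -> list[str]:
    # Rank documents by similarity to the question and keep the top-k.
    q = vectorize(question)
    return sorted(docs, key=lambda d: cosine(q, vectorize(d)), reverse=True)[:k]

docs = [
    "Chroma is a vector database for embeddings.",
    "Streamlit builds interactive data apps in Python.",
    "FastAPI is a modern web framework for APIs.",
]
context = retrieve("what is a vector database", docs, k=1)
# The retrieved context is then inserted into the prompt sent to the LLM.
prompt = f"Answer using this context:\n{context[0]}\nQuestion: what is a vector database?"
```

In the real system, `vectorize` is replaced by an embedding model and the `sorted` scan by Chroma's approximate nearest-neighbor search, but the shape of the loop is the same.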
RAG-start/
├─ api.py # API key helper
├─ streamlit_app.py # Streamlit frontend
├─ query_rag.py # FastAPI backend
├─ rag_data.py # Embeddings & Chroma vector DB
├─ Dockerfile.api # Backend Dockerfile
├─ Dockerfile.streamlit # Frontend Dockerfile
├─ docker-compose.yml # Docker Compose setup
├─ chroma_db/ # Persisted vector database
├─ show.jpg # Screenshot of coding process
├─ app.jpg # Screenshot of frontend
├─ requirements.txt # Python dependencies
├─ .env # Environment variables (not in repo)
- API keys are never hardcoded
- Use `.env` files and Docker environment variables
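Docker Compose is what forwards the variables from `.env` into the containers. A sketch of the relevant part of `docker-compose.yml` (service names and ports here are assumptions; check the actual file in the repo):

```yaml
services:
  api:
    build:
      context: .
      dockerfile: Dockerfile.api
    env_file:
      - .env        # injects LLM=<token> into the container environment
    ports:
      - "8000:8000"
```

With `env_file`, the token never appears in the image or the compose file itself, only in the untracked `.env` on the host.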
- Works with multiple LLMs
- Fully containerized
- Designed for rapid prototyping and RAG experimentation
- Easy to share and deploy for collaborators
Contributions are welcome! Please fork the repo, create a feature branch, and submit a pull request.
Created by @nurikw3 (Telegram) – feel free to reach out with questions or collaboration ideas.

