Completely local RAG with chat UI
Clone the repo:

git clone git@github.com:justushar/DocOllama.git
cd DocOllama

Install the dependencies:

pip install -r requirements.txt

Fetch your LLM (llama3.1 by default):

ollama pull llama3.1:8b

Run the Ollama server:

ollama serve

Start DocOllama:

streamlit run app.py
Extracts text from PDF documents and splits it into chunks (using semantic and character splitters) that are stored in a vector database.
Given a query, searches for similar documents, reranks the results, and applies an LLM chain filter before returning the response.
Combines the LLM with the retriever to answer the user's question.
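The chunking step above can be sketched in plain Python. This is an illustrative stand-in, not the repo's actual splitter: it shows only a fixed-size character splitter with overlap (the semantic splitter mentioned above is not reproduced here), and the function name `split_text` and its parameters are assumptions.

```python
def split_text(text: str, chunk_size: int = 50, overlap: int = 10) -> list[str]:
    """Split text into fixed-size character chunks with overlap.

    Illustrative sketch of the character-splitting step that runs
    before chunks are embedded and stored in the vector database.
    Overlapping chunks reduce the chance that a sentence is cut in
    half at a chunk boundary and lost to retrieval.
    """
    if chunk_size <= overlap:
        raise ValueError("chunk_size must exceed overlap")
    chunks = []
    step = chunk_size - overlap  # how far the window advances each iteration
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
        if start + chunk_size >= len(text):
            break  # the window has reached the end of the text
    return chunks
```

For example, `split_text("abcdefghij", chunk_size=4, overlap=2)` yields `["abcd", "cdef", "efgh", "ghij"]`, where each chunk shares two characters with its neighbor.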
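The retrieve-then-prompt flow can also be sketched. The real pipeline uses an embedding model, a vector database, a reranker, and an LLM chain filter; as a minimal stand-in, this sketch scores chunks with bag-of-words cosine similarity and stuffs the top results into a prompt for the LLM. The helpers `retrieve` and `build_prompt` are hypothetical names, not part of the repo.

```python
from collections import Counter
from math import sqrt


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    """Return the k chunks most similar to the query.

    Stand-in for the vector-database lookup: a real setup would
    compare embedding vectors, then rerank and filter the hits.
    """
    qv = Counter(query.lower().split())
    ranked = sorted(
        chunks,
        key=lambda c: cosine(qv, Counter(c.lower().split())),
        reverse=True,
    )
    return ranked[:k]


def build_prompt(query: str, context: list[str]) -> str:
    """Combine retrieved chunks and the question into one LLM prompt."""
    joined = "\n\n".join(context)
    return (
        "Answer using only the context below.\n\n"
        f"Context:\n{joined}\n\n"
        f"Question: {query}\nAnswer:"
    )
```

The resulting prompt string is what gets sent to the local model (here, llama3.1 served by Ollama), which is how the retriever and the LLM are combined into a single question-answering chain.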