A RAG application built for querying day-to-day needs
- Support for OpenAI and open-source LLMs
- Cost-effective chunking/embedding
- Cost-effective LLM response cache
- Upload support for large documents and datasets
- CLI options
- Scalability
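The LLM response cache listed above can be sketched as a simple prompt-keyed lookup: repeated or trivially reworded queries are served from memory instead of triggering a paid API call. This is an illustrative sketch, not the project's actual implementation; the `LLMCache` class and `fake_llm` helper are hypothetical names.

```python
import hashlib

class LLMCache:
    """Cache LLM responses keyed by a hash of the normalized prompt,
    so repeated queries skip the (paid) LLM call."""

    def __init__(self):
        self._store = {}

    def _key(self, prompt: str) -> str:
        # Normalize whitespace and case so trivially different
        # prompts hit the same cache entry.
        normalized = " ".join(prompt.split()).lower()
        return hashlib.sha256(normalized.encode()).hexdigest()

    def get_or_call(self, prompt: str, llm_call):
        key = self._key(prompt)
        if key not in self._store:
            self._store[key] = llm_call(prompt)  # only pay on a cache miss
        return self._store[key]

# Stand-in for a real LLM client, used to count how often we "pay".
calls = []
def fake_llm(prompt):
    calls.append(prompt)
    return f"answer to: {prompt}"

cache = LLMCache()
first = cache.get_or_call("What is RAG?", fake_llm)
second = cache.get_or_call("what is  rag?", fake_llm)  # normalized -> cache hit
```

A production version would add eviction (LRU or TTL) and persist across restarts, but the cost-saving mechanism is the same.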
Audience: Tech/Business (with basic software experience)
- UI - Elm
- Backend services - Rust/Python
- Vector DB - Elasticsearch
- LLM - Open-source/OpenAI
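With Elasticsearch as the vector DB, the retrieval step of the RAG pipeline would issue a kNN search against a `dense_vector` field. Below is a hedged sketch of the Elasticsearch 8.x kNN request body; the field names `embedding` and `text` are assumptions about the index mapping, not confirmed by the project.

```python
def knn_query(query_vector, field="embedding", k=5, num_candidates=50):
    """Build an Elasticsearch 8.x kNN search body for the retrieval step.

    `field` must match the dense_vector field in the index mapping
    ("embedding" is an assumed name). `num_candidates` trades recall
    for speed: more candidates per shard, better but slower results.
    """
    return {
        "knn": {
            "field": field,
            "query_vector": query_vector,
            "k": k,
            "num_candidates": num_candidates,
        },
        # "text" is an assumed name for the stored chunk content.
        "_source": ["text"],
    }

body = knn_query([0.1, 0.2, 0.3], k=3)
```

This body would be passed to the search endpoint (e.g. via the official `elasticsearch` Python client's `search(index=..., body=...)`), with the top-`k` chunks then stuffed into the LLM prompt.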
- Make sure you have Docker installed
- Clone the project locally:

```shell
git clone https://github.com/Karuturirs/QueryHive.git
cd QueryHive
```

- Start all applications:

```shell
./start_queryhive.sh
```

- Stop all applications:

```shell
./stop_queryhive.sh
```
```shell
pip install sentence-transformers transformers openai tiktoken nltk

# Add pyo3 with both features in one call, so the second
# invocation does not overwrite the first feature list.
cargo add pyo3 --features "extension-module,auto-initialize"

export RUST_LOG=info   # enable logs at info level and higher
export APP_ENV=local
cargo run
```
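The pip line above pulls in tokenizer and embedding libraries (tiktoken, sentence-transformers), which suggests a chunk-then-embed ingestion pipeline. Here is a dependency-free sketch of overlapping-window chunking; a real implementation would count model tokens via tiktoken rather than words, and the window/overlap sizes are illustrative, not the project's settings.

```python
def chunk_words(text, max_words=200, overlap=20):
    """Split text into overlapping word-window chunks.

    Overlap keeps a sentence that straddles a boundary retrievable
    from either chunk. Real code would budget model tokens
    (e.g. via tiktoken) instead of whitespace-split words.
    """
    words = text.split()
    if not words:
        return []
    step = max_words - overlap  # advance by window minus overlap
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + max_words]))
        if start + max_words >= len(words):
            break  # last window already covers the tail
    return chunks

# 450 words with a 200-word window and 20-word overlap -> 3 chunks
chunks = chunk_words("word " * 450, max_words=200, overlap=20)
```

Each chunk would then be embedded (e.g. with a sentence-transformers model) and written to the vector DB.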
- [ ] Maximize the UI chat screen
- [ ] Replace the on-screen toggle button with radio buttons
- [ ] Work on the LLM integration
- [ ] Investigate why indexing does not start on application start
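On the indexing-at-startup TODO: a common cause is that indexing is only wired to the upload handler, so nothing triggers it when the app boots. One fix is to kick off a background indexing pass in the startup path, as in this sketch; `index_documents`, `start_app`, and the `./docs` path are hypothetical names, not the project's actual API.

```python
import threading

def index_documents(doc_dir="./docs", done=None):
    """Placeholder for the real indexing pass (chunk, embed,
    write to the vector DB). Hypothetical; not the project's code."""
    # ... walk doc_dir, chunk + embed each file here ...
    if done is not None:
        done.set()  # signal that the initial index build finished

def start_app():
    """Launch indexing in a daemon thread as soon as the app starts,
    so the API can serve requests while the index warms up."""
    done = threading.Event()
    t = threading.Thread(target=index_documents,
                         kwargs={"done": done}, daemon=True)
    t.start()
    return done

done = start_app()
done.wait(timeout=5)
```

The same idea applies regardless of framework: run the initial build off the request path, and expose its completion state so the UI can show "indexing..." until the event fires.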
