@@ -47,18 +47,24 @@ Choose models based on your system capabilities:
 | **Chat** | `phi3:mini` | ~2.3GB | 4GB | Low-resource systems |

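+
+For low-resource machines, the table above points to `phi3:mini` for chat. Assuming the app lets you select a different chat model, you would pull it the same way as the models below:
+
+```bash
+ollama pull phi3:mini
+```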
+### Installation Options
+
+Choose your preferred installation method:
+
+### Option 1: Direct Installation

-### Prerequisites (Required for Both Installation Methods)
+**Prerequisite: Ollama (for local AI models)**
+
+Install Ollama:

-**1. Install Ollama** (for local AI models):
 ```bash
 # macOS
 brew install ollama

 # Or download from https://ollama.com
 ```
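+
+To confirm the CLI landed on your PATH, a quick check (prints the installed version):
+
+```bash
+ollama --version
+```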
+Start Ollama and install the required models:

-**2. Start Ollama and install required models**:
 ```bash
 ollama serve

@@ -69,11 +75,7 @@ ollama pull nomic-embed-text
 ollama pull qwen3:14b
 ```
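+
+Optionally, verify that both models were pulled; `ollama list` prints every local model with its size:
+
+```bash
+ollama list
+```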

-### Installation Options
-
-Choose your preferred installation method:

-### Option 1: Direct Installation

 **Additional Prerequisites:**
 - Python 3.8+
@@ -106,7 +108,10 @@ Choose your preferred installation method:

 ### Option 2: Docker Installation

-**Additional Prerequisites:**
+With this option you don't need to install Ollama separately; it is started
+automatically by Docker Compose (see the quick check below).
+
+**Prerequisites:**
 - Docker and Docker Compose
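+
+To confirm both tools are available before continuing (the steps below use the v1 `docker-compose` binary; on newer installs the equivalent is `docker compose`):
+
+```bash
+docker --version
+docker-compose --version
+```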

 **Installation Steps:**
@@ -122,7 +127,17 @@ Choose your preferred installation method:
    docker-compose up
    ```

-3. **Open your browser** to `http://localhost:8501`
+3. **Install models**
+
+   ```bash
+   # embedding model
+   docker exec -it ollama ollama pull nomic-embed-text
+
+   # chat model
+   docker exec -it ollama ollama pull qwen3:14b
+   ```
+
+4. **Open your browser** to `http://localhost:8501`
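+
+If the page doesn't come up, two quick checks, assuming the `ollama` container name used above and the default port:
+
+```bash
+# both services should show as running
+docker-compose ps
+
+# the UI should respond on port 8501
+curl -I http://localhost:8501
+```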

 ## 📖 How to Use
