# ProcessPulse Configuration
# Copy this file to .env and customize as needed
# cp env.example .env
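As a rough illustration of how an application consumes this file, a minimal `.env` parser might look like the sketch below (the names are generic; ProcessPulse's actual loader is not shown here):

```python
def load_env(path):
    """Parse simple KEY=VALUE lines from a .env-style file (sketch).

    Blank lines and full-line comments are skipped; trailing inline
    comments (as used for the port variables below) are stripped.
    """
    env = {}
    with open(path) as f:
        for raw in f:
            line = raw.strip()
            if not line or line.startswith("#"):
                continue
            key, _, value = line.partition("=")
            env[key.strip()] = value.split("#", 1)[0].strip()
    return env
```

Note that tools differ on inline comments (`KEY=80  # note`): some dotenv loaders keep the `#` as part of the value, which is one reason quoting or separate comment lines can be safer.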

# ===========================================
# PORT CONFIGURATION
# ===========================================
# Change these if ports conflict with existing services
FRONTEND_PORT=80 # Web interface (change to 8080 if 80 is taken)
OLLAMA_PORT=11434 # Ollama API (for external access)
PERPLEXICA_PORT=3000 # Perplexica web search

# ===========================================
# AI MODEL CONFIGURATION
# ===========================================
# Models are downloaded automatically on first run
#
# CHAT MODEL OPTIONS (pick based on your hardware):
# - llama3.1:8b (~4.7 GB) - Recommended, good balance
# - mistral:7b (~4.1 GB) - Fast, good quality
# - llama3.1:70b (~40 GB) - Best quality, needs 48GB+ RAM
# - phi3:medium (~7.9 GB) - Microsoft's model
#
# EMBEDDING MODEL OPTIONS:
# - nomic-embed-text (~275 MB) - Recommended, fast
# - bge-m3 (~1.2 GB) - Higher quality
CHAT_MODEL=llama3.1:8b
EMBEDDING_MODEL=nomic-embed-text
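The sizes listed above can guide an automated default. A hypothetical helper (the function and thresholds are illustrative, not part of ProcessPulse; sizes are the approximate downloads from the comments above):

```python
# Approximate download sizes in GB, from the option list above.
CHAT_MODEL_SIZES_GB = {
    "llama3.1:8b": 4.7,    # recommended, good balance
    "mistral:7b": 4.1,     # fast, good quality
    "llama3.1:70b": 40.0,  # best quality, needs 48GB+ RAM
    "phi3:medium": 7.9,    # Microsoft's model
}

def pick_chat_model(ram_gb: float) -> str:
    """Hypothetical helper: pick a model consistent with the notes above."""
    if ram_gb >= 48:
        return "llama3.1:70b"  # the only option documented for 48GB+ RAM
    return "llama3.1:8b"       # the recommended default
```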

# ===========================================
# DEVELOPMENT/DEBUG
# ===========================================
DEBUG=false

# ===========================================
# OPTIONAL: EXTERNAL OLLAMA
# ===========================================
# If you already have Ollama running on your machine,
# you can point to it instead of using the Docker version.
# Uncomment and set:
#
# OLLAMA_EXTERNAL=true
# OLLAMA_BASE_URL=http://host.docker.internal:11434
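A sketch of how an application might honor these two variables (the variable names come from this file; the fallback logic is an assumption, not ProcessPulse's actual code):

```python
def ollama_base_url(env: dict) -> str:
    """Resolve the Ollama endpoint from .env-style settings (sketch)."""
    if env.get("OLLAMA_EXTERNAL", "false").lower() == "true":
        # Point at a host-local Ollama instead of the Docker service.
        return env.get("OLLAMA_BASE_URL", "http://host.docker.internal:11434")
    # Otherwise use the Docker Ollama exposed on OLLAMA_PORT.
    return f"http://localhost:{env.get('OLLAMA_PORT', '11434')}"
```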

# ===========================================
# OPTIONAL: GPU SUPPORT
# ===========================================
# For NVIDIA GPU acceleration, edit docker-compose.yml
# and uncomment the GPU section under the ollama service.
# Requires the NVIDIA Container Toolkit (the nvidia-docker2 package).