
Commit 79759ff

zdtsw authored and VaishnaviHire committed

docs: update to use the "starter" distro instead of "ollama"

- update the example and create one without using a user ConfigMap
- set new env var to enable ollama
- use the same llama model as in llama-stack
- remove deprecated distro images from distributions.json
- rename INFERENCE_MODEL to OLLAMA_INFERENCE_MODEL
- set images to use the "latest" tag instead of 0.2.15

Signed-off-by: Wen Zhou <wenzhou@redhat.com>

1 parent 03531b0 commit 79759ff

File tree: 5 files changed (+29, −16)

README.md

Lines changed: 2 additions & 3 deletions
```diff
@@ -80,11 +80,10 @@ spec:
   replicas: 1
   server:
     distribution:
-      name: ollama
+      name: starter
     containerSpec:
-      port: 8321
       env:
-        - name: INFERENCE_MODEL
+        - name: OLLAMA_INFERENCE_MODEL
           value: "llama3.2:1b"
         - name: OLLAMA_URL
           value: "http://ollama-server-service.ollama-dist.svc.cluster.local:11434"
```
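The rename above (INFERENCE_MODEL → OLLAMA_INFERENCE_MODEL) is purely mechanical, so existing manifests can be migrated with a few lines of code. A hedged sketch in Python — the `migrate_env` helper is illustrative only and not part of the operator:

```python
# Illustrative helper (not part of this repo): rename the deprecated
# INFERENCE_MODEL env var to OLLAMA_INFERENCE_MODEL in a container env list.
def migrate_env(env):
    """Return a new env list with deprecated names renamed; values are kept."""
    renames = {"INFERENCE_MODEL": "OLLAMA_INFERENCE_MODEL"}
    return [{**e, "name": renames.get(e["name"], e["name"])} for e in env]

# Env list as it appeared before this commit.
old_env = [
    {"name": "INFERENCE_MODEL", "value": "llama3.2:1b"},
    {"name": "OLLAMA_URL",
     "value": "http://ollama-server-service.ollama-dist.svc.cluster.local:11434"},
]
new_env = migrate_env(old_env)
print([e["name"] for e in new_env])
```

Unrecognized names (like OLLAMA_URL) pass through unchanged, so the helper is safe to run over any container's env list.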

config/samples/_v1alpha1_llamastackdistribution.yaml

Lines changed: 1 addition & 1 deletion
```diff
@@ -7,7 +7,7 @@ spec:
   server:
     containerSpec:
       env:
-        - name: INFERENCE_MODEL
+        - name: OLLAMA_INFERENCE_MODEL
           value: 'llama3.2:1b'
         - name: OLLAMA_URL
           value: 'http://ollama-server-service.ollama-dist.svc.cluster.local:11434'
```

config/samples/example-with-configmap.yaml

Lines changed: 3 additions & 5 deletions
```diff
@@ -48,7 +48,7 @@ data:
 apiVersion: llamastack.io/v1alpha1
 kind: LlamaStackDistribution
 metadata:
-  name: llamastack-with-config
+  name: llamastack-with-userconfig
 spec:
   replicas: 1
   server:
@@ -57,10 +57,8 @@ spec:
     containerSpec:
       port: 8321
       env:
-        - name: INFERENCE_MODEL
-          value: "llama3.2:1b"
-        - name: OLLAMA_URL
-          value: "http://ollama-server-service.ollama-dist.svc.cluster.local:11434"
+        - name: OLLAMA_EMBEDDING_MODEL
+          value: all-minilm:l6-v2
     userConfig:
       configMapName: llama-stack-config
       # configMapNamespace: "" # Optional - defaults to the same namespace as the CR
```
Lines changed: 19 additions & 0 deletions

```diff
@@ -0,0 +1,19 @@
+---
+apiVersion: llamastack.io/v1alpha1
+kind: LlamaStackDistribution
+metadata:
+  name: llamastack-without-userconfig
+spec:
+  replicas: 1
+  server:
+    distribution:
+      name: starter
+    containerSpec:
+      env:
+        - name: OLLAMA_INFERENCE_MODEL
+          value: "llama3.2:1b"
+        - name: OLLAMA_URL
+          value: "http://ollama-server-service.ollama-dist.svc.cluster.local:11434"
+    storage:
+      size: "10Gi" # Optional - defaults to 10Gi
+      mountPath: "/home/lls/.lls" # Optional - defaults to /.llama
```
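The new sample above can also be built programmatically, e.g. when generating CRs in tests. A minimal sketch assuming only the fields visible in the diff (the `make_cr` helper is hypothetical, not an operator API):

```python
# Build the same LlamaStackDistribution CR as a plain dict (sketch; field
# names and defaults are taken from the sample diff above).
def make_cr(name="llamastack-without-userconfig", model="llama3.2:1b"):
    return {
        "apiVersion": "llamastack.io/v1alpha1",
        "kind": "LlamaStackDistribution",
        "metadata": {"name": name},
        "spec": {
            "replicas": 1,
            "server": {
                "distribution": {"name": "starter"},
                "containerSpec": {
                    "env": [
                        {"name": "OLLAMA_INFERENCE_MODEL", "value": model},
                        {"name": "OLLAMA_URL",
                         "value": "http://ollama-server-service.ollama-dist"
                                  ".svc.cluster.local:11434"},
                    ],
                },
                # Both fields are optional per the sample's comments.
                "storage": {"size": "10Gi", "mountPath": "/home/lls/.lls"},
            },
        },
    }

cr = make_cr()
print(cr["kind"], cr["spec"]["server"]["distribution"]["name"])
```

Serializing this dict with a YAML library reproduces the sample manifest, minus the leading `---` document marker.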

distributions.json

Lines changed: 4 additions & 7 deletions
```diff
@@ -1,9 +1,6 @@
 {
-  "starter": "docker.io/llamastack/distribution-starter:latest",
-  "ollama": "docker.io/llamastack/distribution-ollama:latest",
-  "bedrock": "docker.io/llamastack/distribution-bedrock:latest",
-  "remote-vllm": "docker.io/llamastack/distribution-remote-vllm:latest",
-  "tgi": "docker.io/llamastack/distribution-tgi:latest",
-  "together": "docker.io/llamastack/distribution-together:latest",
-  "vllm-gpu": "docker.io/llamastack/distribution-vllm-gpu:latest"
+  "starter": "docker.io/llamastack/distribution-starter:latest",
+  "remote-vllm": "docker.io/llamastack/distribution-remote-vllm:latest",
+  "meta-reference-gpu": "docker.io/llamastack/distribution-meta-reference-gpu:latest",
+  "postgres-demo": "docker.io/llamastack/distribution-postgres-demo:latest"
 }
```
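The trimmed image map can be sanity-checked with a few lines of Python. A sketch that inlines the post-commit JSON shown above rather than reading the file from disk:

```python
import json

# Post-commit contents of distributions.json, inlined for this sketch.
DISTRIBUTIONS = json.loads("""
{
  "starter": "docker.io/llamastack/distribution-starter:latest",
  "remote-vllm": "docker.io/llamastack/distribution-remote-vllm:latest",
  "meta-reference-gpu": "docker.io/llamastack/distribution-meta-reference-gpu:latest",
  "postgres-demo": "docker.io/llamastack/distribution-postgres-demo:latest"
}
""")

# Every image should use the "latest" tag, per the commit message.
assert all(img.endswith(":latest") for img in DISTRIBUTIONS.values())
print(sorted(DISTRIBUTIONS))
```

The same assertion run against the pre-commit map would still pass the tag check, but the deprecated `ollama`, `bedrock`, `tgi`, `together`, and `vllm-gpu` keys are now gone.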
