@@ -39,7 +39,7 @@ print(model.identifier)

 While you can provide an `api_key` keyword argument,
 we recommend using [python-dotenv](https://pypi.org/project/python-dotenv/)
-to add `LLAMA_STACK_API_KEY="My API Key"` to your `.env` file
+to add `LLAMA_STACK_CLIENT_API_KEY="My API Key"` to your `.env` file
 so that your API Key is not stored in source control.

 ## Async usage
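The hunk above renames the API-key environment variable. As a minimal sketch of the `.env` workflow it describes (the `load_env_file` helper is hypothetical, standing in for python-dotenv's `load_dotenv`, which parses `KEY=value` lines from a `.env` file into the process environment):

```python
import os

# Hypothetical simplified stand-in for python-dotenv's load_dotenv():
# parse KEY=value lines into os.environ, stripping surrounding quotes.
def load_env_file(lines):
    for line in lines:
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")
            os.environ[key.strip()] = value.strip().strip('"')

# Contents of a .env file as recommended in the README.
load_env_file(['LLAMA_STACK_CLIENT_API_KEY="My API Key"'])
print(os.environ["LLAMA_STACK_CLIENT_API_KEY"])  # My API Key
```

Keeping the key in `.env` (and `.env` in `.gitignore`) means it never appears in source control, while the client can still pick it up from the environment at construction time.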
@@ -309,10 +309,10 @@ Note that requests that time out are [retried twice by default](#retries).

 We use the standard library [`logging`](https://docs.python.org/3/library/logging.html) module.

-You can enable logging by setting the environment variable `LLAMA_STACK_LOG` to `info`.
+You can enable logging by setting the environment variable `LLAMA_STACK_CLIENT_LOG` to `info`.

 ```shell
-$ export LLAMA_STACK_LOG=info
+$ export LLAMA_STACK_CLIENT_LOG=info
 ```

 Or to `debug` for more verbose logging.
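A sketch of how a client library might translate such an environment variable into a stdlib `logging` level (the `level_from_env` helper is hypothetical, not the library's actual code; the logger name is assumed from the package name):

```python
import logging
import os

# Hypothetical helper: map LLAMA_STACK_CLIENT_LOG to a logging level,
# defaulting to WARNING when the variable is unset or unrecognized.
def level_from_env():
    name = os.environ.get("LLAMA_STACK_CLIENT_LOG", "").lower()
    return {"info": logging.INFO, "debug": logging.DEBUG}.get(name, logging.WARNING)

os.environ["LLAMA_STACK_CLIENT_LOG"] = "debug"
logging.getLogger("llama_stack_client").setLevel(level_from_env())
```

Because this goes through the standard `logging` module, applications can still attach their own handlers and formatters to the `llama_stack_client` logger as usual.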
@@ -425,7 +425,7 @@ import httpx
 from llama_stack_client import LlamaStackClient, DefaultHttpxClient

 client = LlamaStackClient(
-    # Or use the `LLAMA_STACK_BASE_URL` env var
+    # Or use the `LLAMA_STACK_CLIENT_BASE_URL` env var
     base_url="http://my.test.server.example.com:8083",
     http_client=DefaultHttpxClient(
         proxy="http://my.test.proxy.example.com",