From e60a0ab9a92a1183583cdeaccae063e9044deb4e Mon Sep 17 00:00:00 2001
From: Sven Knoblauch <33262434+sven-knoblauch@users.noreply.github.com>
Date: Tue, 12 Nov 2024 13:55:09 +0100
Subject: [PATCH 1/2] Update README.md

Add lora adapter changes to readme (usage of env variable)
---
 README.md | 1 +
 1 file changed, 1 insertion(+)

diff --git a/README.md b/README.md
index 92a6688..de1c2a8 100644
--- a/README.md
+++ b/README.md
@@ -132,6 +132,7 @@ Below is a summary of the available RunPod Worker images, categorized by image s
 | `TOKENIZER_POOL_TYPE` | 'ray' | `str` | Type of tokenizer pool to use for asynchronous tokenization. |
 | `TOKENIZER_POOL_EXTRA_CONFIG` | None | `dict` | Extra config for tokenizer pool. |
 | `ENABLE_LORA` | False | `bool` | If True, enable handling of LoRA adapters. |
+|`LORA_MODULES` | None | `dict` | Add lora adapter from huggingface {"name": "xxx", "path": "xxx/xxxx", "base_model_name": "xxx/xxxx"}|
 | `MAX_LORAS` | 1 | `int` | Max number of LoRAs in a single batch. |
 | `MAX_LORA_RANK` | 16 | `int` | Max LoRA rank. |
 | `LORA_EXTRA_VOCAB_SIZE` | 256 | `int` | Maximum size of extra vocabulary for LoRA adapters. |

From daf7721979dfb2c0d8e16630a498ba4141a87147 Mon Sep 17 00:00:00 2001
From: Sven Knoblauch <33262434+sven-knoblauch@users.noreply.github.com>
Date: Fri, 15 Nov 2024 08:53:20 +0100
Subject: [PATCH 2/2] Update README.md

Co-authored-by: Patrick Rachford
---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index de1c2a8..743c870 100644
--- a/README.md
+++ b/README.md
@@ -132,7 +132,7 @@ Below is a summary of the available RunPod Worker images, categorized by image s
 | `TOKENIZER_POOL_TYPE` | 'ray' | `str` | Type of tokenizer pool to use for asynchronous tokenization. |
 | `TOKENIZER_POOL_EXTRA_CONFIG` | None | `dict` | Extra config for tokenizer pool. |
 | `ENABLE_LORA` | False | `bool` | If True, enable handling of LoRA adapters. |
-|`LORA_MODULES` | None | `dict` | Add lora adapter from huggingface {"name": "xxx", "path": "xxx/xxxx", "base_model_name": "xxx/xxxx"}|
+| `LORA_MODULES` | None | `dict` | Add lora adapter from Hugging Face {"name": "xxx", "path": "xxx/xxxx", "base_model_name": "xxx/xxxx"}|
 | `MAX_LORAS` | 1 | `int` | Max number of LoRAs in a single batch. |
 | `MAX_LORA_RANK` | 16 | `int` | Max LoRA rank. |
 | `LORA_EXTRA_VOCAB_SIZE` | 256 | `int` | Maximum size of extra vocabulary for LoRA adapters. |
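The `LORA_MODULES` row these patches add documents a dict-shaped environment variable. A minimal sketch of how a deployment might set it and how a worker could read it back, assuming the dict is passed as a JSON string; the adapter name, path, and base model below are placeholders, not real Hugging Face repositories, and this is not the actual worker implementation:

```python
import json
import os

# Placeholder values standing in for the "xxx" fields in the README table.
os.environ["LORA_MODULES"] = json.dumps({
    "name": "my-adapter",
    "path": "someuser/my-lora-adapter",
    "base_model_name": "someuser/base-model",
})

# A worker could decode the variable back into a dict like this (sketch).
lora_modules = json.loads(os.environ["LORA_MODULES"])
print(lora_modules["name"])  # prints "my-adapter"
```

Encoding the dict as JSON keeps the value a single string, which is what container environment variables require.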