Commit e3713c0: add README.md (1 parent: 16123ec)

1 file changed: +25 −0 lines


README.md

# LLM Scaler

LLM Scaler is a GenAI solution for text generation, image generation, video generation, and related workloads running on [Intel® Arc™ Pro B60 GPUs](https://www.intel.com/content/www/us/en/products/docs/discrete-gpus/arc/workstations/b-series/overview.html). LLM Scaler leverages standard frameworks such as vLLM, ComfyUI, and Xinference, and delivers the best performance for state-of-the-art GenAI models running on Arc B60 GPUs.

---
## LLM Scaler vLLM

llm-scaler-vllm supports running text generation models using vLLM:

- [Getting Started](vllm/README.md/#1-getting-started-and-usage)
- [Features](vllm/README.md/#2-advanced-features)
- [Supported Models](vllm/README.md/#3-supported-models)
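vLLM serves models behind an OpenAI-compatible HTTP API, so a client talks to llm-scaler-vllm the same way it would talk to any OpenAI-style endpoint. As a minimal sketch, the snippet below builds a chat-completion request payload in that format; the model id, host, and port are placeholder assumptions, not values taken from this repository.

```python
import json

# vLLM exposes an OpenAI-compatible API (e.g. when started with
# `vllm serve <model>`). Build a chat-completion payload in that format.
payload = {
    "model": "meta-llama/Llama-3.1-8B-Instruct",  # placeholder model id
    "messages": [
        {"role": "user", "content": "Hello from an Arc Pro B60!"},
    ],
    "max_tokens": 64,
    "temperature": 0.7,
}

body = json.dumps(payload).encode("utf-8")
# POST `body` to http://<host>:8000/v1/chat/completions with the header
# Content-Type: application/json to receive the generated completion.
```

Because the wire format is OpenAI-compatible, existing OpenAI client libraries can also be pointed at the server by overriding the base URL.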
## LLM Scaler Omni (experimental)

llm-scaler-omni supports image, voice, and video generation, among other workloads, using ComfyUI and Xinference:

- [Getting Started](omni/README.md/#getting-started-with-omni-docker-image)
- [ComfyUI Support](omni/README.md/#comfyui)
- [Xinference Support](omni/README.md/#xinference)
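Xinference likewise exposes an OpenAI-style REST API once a model is launched, so non-text workloads follow the same request shape. As an illustrative sketch, the snippet below builds an image-generation payload; the model name, endpoint path, and parameters are assumptions for illustration, not values documented in this repository.

```python
import json

# OpenAI-style image-generation payload; Xinference accepts this shape at
# an endpoint like /v1/images/generations once an image model is running.
image_request = {
    "model": "stable-diffusion",  # placeholder model name
    "prompt": "a lighthouse at dawn, oil painting",
    "n": 1,                       # number of images to generate
    "size": "1024x1024",          # requested output resolution
}

encoded = json.dumps(image_request)
# POST `encoded` to the Xinference server with
# Content-Type: application/json to receive the generated image(s).
```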
---
## Get Support

- Please report a bug or request a feature by opening a [GitHub Issue](https://github.com/intel/llm-scaler/issues).
