maitribg/talk-fast

Optimizing LLM inference for real-time customer support using compiler and runtime techniques. This project profiles inference bottlenecks in open-source LLMs (Phi-2, Mistral), applies torch.compile and quantization strategies, and demonstrates latency and memory improvements for conversational AI Co-Pilots.
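To make the workflow concrete, here is a minimal sketch of the kind of optimization the description names: loading an open-source LLM with weight quantization, compiling its forward pass with torch.compile, and timing generation. This is an illustrative example, not the repository's actual code; it assumes the `microsoft/phi-2` checkpoint on the Hugging Face Hub, the `transformers`/`bitsandbytes` stack, and a CUDA GPU.

```python
# Sketch: quantize + compile an open-source LLM and measure generation latency.
# Model name, prompt, and generation settings are illustrative assumptions.
import time

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "microsoft/phi-2"  # assumed Hub checkpoint for Phi-2

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),  # 4-bit weight quantization
    device_map="auto",
)

# Compile the forward pass; "reduce-overhead" targets small-batch, low-latency decoding.
model.forward = torch.compile(model.forward, mode="reduce-overhead")

prompt = "Customer: My order hasn't arrived yet. Agent:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Warm-up run so one-time compilation cost is excluded from the measurement.
model.generate(**inputs, max_new_tokens=8)

start = time.perf_counter()
output = model.generate(**inputs, max_new_tokens=64)
latency = time.perf_counter() - start

print(f"latency: {latency:.2f}s")
print(f"peak GPU memory: {torch.cuda.max_memory_allocated() / 1e9:.2f} GB")
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

The same loop can be rerun with quantization or compilation disabled to attribute latency and memory differences to each technique.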