Conversation
1. Add a preset for Vulkan.
2. Add the ggml-vulkan backend.
3. Add some log info.
---
Thank you for doing this. I was able to run ollama + Vulkan locally with a ~21% improvement over CPU inference using an AMD iGPU (8700G CPU vs. its 780M iGPU).

---
I compiled this on 22.04, but it's still not working for me. Vulkan is detected on both of my GPUs, but ollama looks for ROCm and falls back to CPU when it doesn't find the ROCm library. Do I need to install ROCm even if I don't intend to use it?
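To confirm that the Vulkan loader actually sees both GPUs (independent of which backend ollama picks), a quick check with `vulkaninfo` can help. This is just a sketch; it assumes the vulkan-tools package is installed, and the grep pattern is only a convenience:

```shell
# Sanity check: list the devices the Vulkan loader can see.
# Requires the vulkan-tools package, which provides vulkaninfo.
if command -v vulkaninfo >/dev/null 2>&1; then
  vulkaninfo --summary | grep -iE 'deviceName|deviceType'
else
  echo "vulkaninfo not found; install vulkan-tools first"
fi
```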
---
@pmonck: running it as root helped me.

---

Thanks, that worked.

---
@Dts0 Are these instructions valid for a Windows build? |
---
If you're looking for Windows support, binaries, newer versions, or a more detailed introduction, see [#7](#7).

---
Thank you! I’ll take a look
Dts0 left a comment ([whyvl/ollama-vulkan#14](#14))
---
Fix #7
Now you can build it without running `make -f Makefile.sync clean sync`.
If an error occurs at runtime, you can run it with sudo or enable CAP_PERFMON.
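If running as root is undesirable, granting the binary the CAP_PERFMON capability via `setcap` is one alternative. This is a sketch only: the install path below is an assumption, so point `OLLAMA_BIN` at wherever your build actually puts the ollama binary:

```shell
# Sketch: grant CAP_PERFMON to the ollama binary instead of running it as root.
# OLLAMA_BIN is an assumed path; adjust it to your actual build output.
OLLAMA_BIN=/usr/local/bin/ollama
if [ -x "$OLLAMA_BIN" ]; then
  sudo setcap cap_perfmon+ep "$OLLAMA_BIN"  # needs the libcap setcap tool
  getcap "$OLLAMA_BIN"                      # verify: should list cap_perfmon
else
  echo "no ollama binary at $OLLAMA_BIN; adjust OLLAMA_BIN"
fi
```

Note that file capabilities are dropped whenever the binary is replaced, so this must be re-applied after rebuilding.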