
Conversation

@mudler (Owner) commented Dec 11, 2024

Description

This pull request centralizes CMAKE_ARGS composition, since it is shared between ggml-based backends: the llama.cpp CMake args can, for instance, be reused with bark.cpp and stable-diffusion.cpp (both ggml-based). The aim is to enable CUDA and hipBLAS support on bark.cpp and stable-diffusion.cpp (ggml variant).

For now this doesn't aim to be smart and share the logic in a common way (maybe via CMake, or a Makefile called by both backends to generate the CMake args). The goal of this PR is to understand what changes might be required by enabling these flags for the respective backends. I'm not sure the current linking processes of bark.cpp and stable-diffusion.cpp are correct in terms of GPU support.
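For reference, a minimal sketch of what a shared composition could look like, assuming a hypothetical `backend/common.mk` fragment included by each ggml-based backend's Makefile (file paths, variable names, and feature flags here are illustrative, not the actual LocalAI build files):

```makefile
# backend/common.mk -- hypothetical shared fragment included by each
# ggml-based backend Makefile (llama.cpp, bark.cpp, stable-diffusion.cpp).
# Flag names follow upstream ggml/llama.cpp conventions and may differ
# per backend version.

CMAKE_ARGS ?=

ifeq ($(BUILD_TYPE),cublas)
    CMAKE_ARGS += -DGGML_CUDA=ON
endif

ifeq ($(BUILD_TYPE),hipblas)
    CMAKE_ARGS += -DGGML_HIPBLAS=ON
endif

ifeq ($(BUILD_TYPE),metal)
    CMAKE_ARGS += -DGGML_METAL=ON
endif
```

Each backend Makefile would then `include` this fragment and pass `$(CMAKE_ARGS)` to its own `cmake` invocation, so enabling CUDA or hipBLAS once would propagate to llama.cpp, bark.cpp, and stable-diffusion.cpp alike.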

Notes for Reviewers

Signed commits

  • Yes, I signed my commits.

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

netlify bot commented Dec 11, 2024

Deploy Preview for localai ready!

| Name | Link |
|------|------|
| 🔨 Latest commit | 894a302 |
| 🔍 Latest deploy log | https://app.netlify.com/sites/localai/deploys/6759fe760a6baa000818c441 |
| 😎 Deploy Preview | https://deploy-preview-4367--localai.netlify.app |


This PR is stale because it has been open 90 days with no activity. Remove stale label or comment or this will be closed in 10 days.

@github-actions bot added the Stale label Aug 26, 2025

github-actions bot commented Sep 9, 2025

This PR was closed because it has been stalled for 10 days with no activity.

@github-actions bot closed this Sep 9, 2025