Description
I'm currently using LightAgent and appreciate its lightweight design and flexibility. However, I would find it extremely valuable if there were support for local LLM models (like Llama, Mistral, etc.) that could run without external API calls.
Use Case
Many users, including myself, want to:
- Run agents completely offline for privacy concerns
- Reduce API costs associated with commercial models
- Experiment with open-source models through local inference
Proposed Implementation
Perhaps this could be implemented by:
- Adding support for local inference libraries like llama.cpp or vLLM
- Creating an adapter interface that allows interchangeability between cloud and local models
- Including documentation on hardware requirements and optimization tips
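To illustrate the adapter idea, here is a minimal sketch of what such an interface might look like. All names here (`ModelAdapter`, `OpenAIAdapter`, `LlamaCppAdapter`, `run_agent`) are hypothetical and not part of LightAgent's current API; the local backend is stubbed rather than wired to real llama.cpp bindings:

```python
from abc import ABC, abstractmethod


class ModelAdapter(ABC):
    """Hypothetical common interface for cloud and local model backends."""

    @abstractmethod
    def complete(self, prompt: str) -> str:
        """Return the model's completion for the given prompt."""


class OpenAIAdapter(ModelAdapter):
    """Cloud-backed adapter; the actual API call is omitted in this sketch."""

    def __init__(self, api_key: str):
        self.api_key = api_key

    def complete(self, prompt: str) -> str:
        raise NotImplementedError("cloud call omitted in this sketch")


class LlamaCppAdapter(ModelAdapter):
    """Local adapter; a real version would invoke llama.cpp (or vLLM) bindings."""

    def __init__(self, model_path: str):
        self.model_path = model_path  # e.g. a path to a local GGUF model file

    def complete(self, prompt: str) -> str:
        # Stubbed response so the sketch runs without a model loaded.
        return f"[local:{self.model_path}] response to: {prompt}"


def run_agent(adapter: ModelAdapter, prompt: str) -> str:
    # Agent code depends only on the interface, so cloud and local
    # backends are interchangeable without changes to agent logic.
    return adapter.complete(prompt)
```

The point of the design is that agent code would program against `ModelAdapter` only, so switching from a commercial API to fully offline inference becomes a one-line change at construction time.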
Additional Context
I've seen similar functionality in other frameworks, and this would make LightAgent even more versatile while maintaining its core lightweight philosophy.
Would this be something you'd consider adding to the roadmap?