Implementing my own LLM API #21
Answered by Holmeswww
westlongtime asked this question in Q&A
How do I build AgentKit without using an API? I want to load my local LLM.
Answered by Holmeswww on May 7, 2024
Replies: 1 comment
You can write your custom LLM function by extending or following the format of https://github.com/Holmeswww/AgentKit/blob/main/src/agentkit/llm_api/base.py
LLM_API_FUNCTION can be any LLM API function that takes msg: list and shrink_idx: int, and outputs llm_result: str and usage: dict, where msg is a prompt (OpenAI format by default) and shrink_idx: int is an index at which the LLM should reduce the length of the prompt in case of overflow.
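As a rough illustration of that interface, here is a minimal sketch of a local-LLM function with the documented signature. The helper run_local_model is a hypothetical placeholder for your actual local inference call (e.g. llama.cpp or transformers), and MAX_CHARS is an assumed character budget standing in for a real token limit; only the (msg, shrink_idx) -> (llm_result, usage) shape comes from the documentation above.

```python
def run_local_model(prompt: str) -> str:
    # Hypothetical placeholder: swap in your local LLM inference here.
    return "echo: " + prompt[:40]

MAX_CHARS = 2000  # assumed context budget for this sketch (not from AgentKit)

def query_local_llm(msg: list, shrink_idx: int):
    """Takes msg: list (OpenAI-format messages) and shrink_idx: int;
    returns (llm_result: str, usage: dict)."""
    # If the prompt would overflow, trim the message at shrink_idx.
    total = sum(len(m["content"]) for m in msg)
    if total > MAX_CHARS and 0 <= shrink_idx < len(msg):
        overflow = total - MAX_CHARS
        msg = list(msg)
        msg[shrink_idx] = {**msg[shrink_idx],
                           "content": msg[shrink_idx]["content"][overflow:]}

    # Flatten OpenAI-style messages into a single prompt string.
    prompt = "\n".join(f"{m['role']}: {m['content']}" for m in msg)
    llm_result = run_local_model(prompt)
    usage = {
        "prompt": len(prompt),        # crude character counts stand in for tokens
        "completion": len(llm_result),
    }
    return llm_result, usage
```

For the real thing, follow the base class in src/agentkit/llm_api/base.py linked in the answer, keeping the same inputs and outputs so AgentKit can call your function in place of an API client.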