Service / Game Name
OpenAI-Compatible API Providers (Local & Remote LLMs)
Website URL
https://platform.openai.com/docs/api-reference
Description
Kai currently supports OpenAI-compatible APIs, which is great for integrating both remote and local LLM providers (such as Ollama, LM Studio, or custom endpoints). However, the current implementation abstracts away most advanced model parameters.
For power users—especially those running local models—this becomes a significant limitation.
Many OpenAI-compatible backends support configurable parameters such as:
- context window size (context length)
- temperature
- top_p
- frequency_penalty
- presence_penalty
- max_tokens
- stop sequences
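For illustration, here is a minimal sketch of how these parameters map onto an OpenAI-compatible chat completion request body. The model name and values are hypothetical, not defaults Kai would ship:

```python
import json

# Sketch: an OpenAI-compatible /v1/chat/completions payload exercising
# the parameters listed above. Model name and values are illustrative.
payload = {
    "model": "llama3",                # any model served by the backend
    "messages": [
        {"role": "user", "content": "Summarize this repository."}
    ],
    "temperature": 0.2,               # lower = more deterministic
    "top_p": 0.9,                     # nucleus sampling cutoff
    "frequency_penalty": 0.0,
    "presence_penalty": 0.0,
    "max_tokens": 1024,               # cap on generated tokens
    "stop": ["\n\n"],                 # stop sequences
}
# Note: context window size is backend-specific and usually sits outside
# this request body (e.g. Ollama configures it via its model options).
print(json.dumps(payload, indent=2))
```

The payload would be POSTed to the provider's `/v1/chat/completions` endpoint; exposing these fields in Kai's service configuration would let users tune each of them per provider.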
At the moment, Kai does not provide a way to modify or override these parameters when configuring an OpenAI-compatible service.
This limits the ability to:
- fully utilize larger context windows available in local models
- control generation behavior for agent-like workflows
- optimize performance vs quality tradeoffs
- experiment with different inference strategies
Proposal
Add an "Advanced Settings" section when configuring an OpenAI-compatible API service, allowing users to:
- manually set context size (or max tokens)
- adjust temperature and sampling parameters
- define default generation settings per service
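One possible shape for such per-service defaults is a stored settings object that per-request overrides are merged onto. All names below are hypothetical and do not reflect Kai's actual internals:

```python
# Hypothetical "Advanced Settings" stored per OpenAI-compatible service;
# field names are illustrative, not Kai's real configuration schema.
service_defaults = {
    "context_size": 8192,
    "temperature": 0.7,
    "top_p": 1.0,
    "max_tokens": 2048,
}

def build_generation_settings(overrides=None):
    """Merge request-level overrides onto the service's stored defaults.

    Overrides win; any field left unset falls back to the default.
    """
    settings = dict(service_defaults)
    settings.update(overrides or {})
    return settings

# A single request can lower temperature without touching the defaults:
print(build_generation_settings({"temperature": 0.1}))
```

This keeps ordinary usage unchanged (defaults apply silently) while giving power users a single place to tune each service.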
Additionally, exposing these parameters would significantly improve Kai’s usefulness in autonomous or semi-autonomous workflows, where control over generation behavior is critical.
Why this matters
Kai is evolving beyond a simple assistant into a more agent-like system. In that context, giving users control over model parameters is essential to:
- improve reliability in long-running tasks
- reduce hallucinations (via temperature control)
- better leverage local LLM capabilities
This feature would greatly enhance Kai’s flexibility and appeal to advanced users.
API / Data Source
https://platform.openai.com/docs/api-reference/chat/create
Examples of compatible providers:
- Ollama
- LM Studio
- Custom endpoints exposing an OpenAI-compatible API
Policy