
[FEATURE] Add more custom API backends: support for GLM, Kimi 2.5, and other coding models #3


Description

@suamnbimali

Problem

The project currently supports only a limited set of AI providers via .env, which makes it hard to use GLM or other custom/self-hosted AI backends without changing the source code.

Proposed Solution

Add first-class .env configuration for GLM and generic custom AI providers.

Suggested variables:

  • GLM_API_KEY
  • GLM_BASE_URL (optional, default official endpoint)
  • GLM_MODEL (e.g. glm-4.5)
  • CUSTOM_AI_API_KEY
  • CUSTOM_AI_BASE_URL
  • CUSTOM_AI_MODEL
  • CUSTOM_AI_PROVIDER (optional label)
  • CUSTOM_AI_HEADERS (optional JSON string for custom auth/routing headers)
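A copy-paste .env sketch using the suggested variables. All values and URLs below are placeholders, not real endpoints or keys:

```shell
# GLM provider (placeholder values)
GLM_API_KEY=sk-your-glm-key
GLM_BASE_URL=https://glm.example.com/v1    # optional; omit to use the official endpoint
GLM_MODEL=glm-4.5

# Generic custom / self-hosted provider (placeholder values)
CUSTOM_AI_API_KEY=sk-your-custom-key
CUSTOM_AI_BASE_URL=https://ai-gateway.internal.example.com/v1
CUSTOM_AI_MODEL=my-coding-model
CUSTOM_AI_PROVIDER=internal-gateway        # optional display label
CUSTOM_AI_HEADERS={"X-Route": "team-a"}    # optional JSON string of extra headers
```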

Also update:

  • provider selection logic to read these env vars
  • docs with setup examples
  • validation/error messages for missing keys or invalid URLs
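The selection logic above could look roughly like the following Python sketch. All names (`ProviderConfig`, `load_provider`) and the GLM-before-custom precedence are hypothetical, not part of the current codebase:

```python
import json
import os
from dataclasses import dataclass, field
from typing import Mapping, Optional

@dataclass
class ProviderConfig:
    # Hypothetical config record; field names mirror the proposed env vars.
    name: str
    api_key: str
    model: str
    base_url: Optional[str] = None  # None means "use the provider's official endpoint"
    headers: dict = field(default_factory=dict)

def load_provider(env: Mapping[str, str] = os.environ) -> ProviderConfig:
    """Pick a provider from env vars. GLM wins if both are set (assumed precedence)."""
    if env.get("GLM_API_KEY"):
        return ProviderConfig(
            name="glm",
            api_key=env["GLM_API_KEY"],
            model=env.get("GLM_MODEL", "glm-4.5"),
            base_url=env.get("GLM_BASE_URL"),  # optional override
        )
    if env.get("CUSTOM_AI_API_KEY"):
        # CUSTOM_AI_HEADERS is an optional JSON string of extra headers.
        try:
            headers = json.loads(env.get("CUSTOM_AI_HEADERS", "{}"))
        except json.JSONDecodeError as exc:
            raise ValueError(f"CUSTOM_AI_HEADERS is not valid JSON: {exc}") from None
        if not env.get("CUSTOM_AI_BASE_URL"):
            raise ValueError("CUSTOM_AI_BASE_URL is required for the custom provider")
        return ProviderConfig(
            name=env.get("CUSTOM_AI_PROVIDER", "custom"),
            api_key=env["CUSTOM_AI_API_KEY"],
            model=env.get("CUSTOM_AI_MODEL", ""),
            base_url=env["CUSTOM_AI_BASE_URL"],
            headers=headers,
        )
    raise ValueError("No AI provider configured: set GLM_API_KEY or CUSTOM_AI_API_KEY")
```

Passing the environment as a mapping keeps the function easy to unit-test without mutating `os.environ`.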


Alternatives Considered

  1. Hardcode GLM in code without .env support: quick but not scalable.

  2. Use only one generic provider variable: simpler but loses provider-specific defaults/validation.

  3. Require config files instead of .env: more flexible but heavier for most users.


Use Cases

  • Teams switching between OpenAI, GLM, and internal gateways per environment.

  • Developers in regions where GLM is preferred/required.

  • CI/CD deployments where provider credentials are injected via environment variables.


Additional Context

Acceptance criteria:

  • GLM works via .env only (no code edits needed)
  • At least one generic custom provider can be configured via .env
  • Backward compatibility with existing providers
  • README includes copy-paste .env examples for GLM and custom AI
  • Clear startup errors when required env vars are missing
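The last criterion (clear startup errors) could be implemented as a validation pass that collects every problem before the app exits, rather than failing on the first one. This is a minimal sketch with a hypothetical `validate_env` helper; the exact messages and rules are assumptions:

```python
from urllib.parse import urlparse

def validate_env(env: dict) -> list[str]:
    """Return human-readable startup errors for missing keys or invalid URLs."""
    errors = []
    has_glm = bool(env.get("GLM_API_KEY"))
    has_custom = bool(env.get("CUSTOM_AI_API_KEY"))
    if not (has_glm or has_custom):
        errors.append("Set GLM_API_KEY or CUSTOM_AI_API_KEY to choose a provider.")
    if has_custom and not env.get("CUSTOM_AI_BASE_URL"):
        errors.append("CUSTOM_AI_BASE_URL is required when CUSTOM_AI_API_KEY is set.")
    # Any base URL that is set must parse as http(s) with a host.
    for var in ("GLM_BASE_URL", "CUSTOM_AI_BASE_URL"):
        url = env.get(var)
        if url:
            parsed = urlparse(url)
            if parsed.scheme not in ("http", "https") or not parsed.netloc:
                errors.append(f"{var} is not a valid http(s) URL: {url!r}")
    return errors
```

At startup the app would print every returned message and exit non-zero if the list is non-empty, which keeps CI/CD misconfigurations easy to diagnose.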

