
Add Mistral AI provider #31

Open

Genmin wants to merge 1 commit into taracodlabs:main from Genmin:feat/mistral-provider

Conversation

@Genmin commented on Apr 28, 2026

Summary

  • Adds a first-class Mistral AI provider backed by https://api.mistral.ai/v1/chat/completions
  • Registers Mistral across core provider resolution, smart routing, fallback/race helpers, auxiliary calls, CLI /provider add, API onboarding, provider validation, and streaming endpoint maps
  • Adds MISTRAL_API_KEY to .env.example and a disabled default mistral config entry using mistral-large-latest
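
For reference, the disabled default entry described above might look something like the following (the exact config schema and field names are assumptions; only the MISTRAL_API_KEY variable name and the mistral-large-latest model come from this PR):

```json
{
  "mistral": {
    "enabled": false,
    "model": "mistral-large-latest",
    "apiKeyEnv": "MISTRAL_API_KEY"
  }
}
```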

Notes

  • The provider mirrors the existing OpenAI-compatible Groq/BOA shape and supports non-streaming, streaming, and tool-call requests.
  • Response content extraction handles both plain string content and chunk-array content.
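
The dual-shape extraction mentioned above can be sketched as follows (a minimal illustration; the helper name and the exact content-part shape are assumptions, not the PR's actual code in providers/mistral.ts):

```typescript
// Mistral's chat API can return message.content either as a plain string
// or as an array of content parts such as [{ type: "text", text: "..." }].
type ContentPart = { type?: string; text?: string };

// Collapse either shape into one string so downstream callers can
// always treat the result as text. Illustrative helper only.
function extractContent(
  content: string | ContentPart[] | null | undefined
): string {
  if (typeof content === "string") return content;
  if (Array.isArray(content)) {
    return content.map((part) => part.text ?? "").join("");
  }
  return "";
}
```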

Validation

  • npx tsc --noEmit --target ES2020 --module commonjs --moduleResolution node --esModuleInterop --allowSyntheticDefaultImports --skipLibCheck --strict false --lib ES2020,DOM providers/mistral.ts
  • npx esbuild providers/mistral.ts --bundle --platform=node --target=node18 --outfile=/tmp/aiden-mistral-provider.js
  • git diff --check

Existing local build blockers observed

  • npm ci currently fails before installing because package.json and package-lock.json are out of sync (aiden-os@3.17.0 missing from the lockfile).
  • After installing without modifying the lockfile, npm run build stops on an existing minimatch import/type issue in core/toolRegistry.ts.
  • npm run build:api also stops on unresolved @aws-sdk/client-s3 from unzipper in this local install.

Closes #25


@chatgpt-codex-connector (bot) left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 0dc40ab3e4


Comment thread: core/agentLoop.ts
nvidia: 'https://integrate.api.nvidia.com/v1/chat/completions',
github: 'https://models.inference.ai.azure.com/v1/chat/completions',
boa: 'https://api.bayofassets.com/v1/chat/completions',
mistral: 'https://api.mistral.ai/v1/chat/completions',

P1: Normalize Mistral content before returning from callLLM

Adding mistral to OPENAI_COMPAT_ENDPOINTS routes it through the generic OpenAI branch in callLLM, which returns d?.choices?.[0]?.message?.content verbatim. Mistral can return that field as an array of content parts (a case already handled in providers/mistral.ts); when it does, downstream planner/retry paths that call raw.trim() will throw "raw.trim is not a function", causing failed turns and unintended fallback behavior whenever Mistral is selected.
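
One way to address this is a normalization step at the point where the generic branch returns (the type and function names here are illustrative assumptions, not the actual core/agentLoop.ts code):

```typescript
// Subset of an OpenAI-compatible chat completion response.
type ChatResponse = {
  choices?: { message?: { content?: string | { text?: string }[] } }[];
};

// Normalize message.content before returning from the generic branch,
// so downstream raw.trim() calls always receive a string.
function normalizeChoiceContent(d: ChatResponse | undefined): string {
  const content = d?.choices?.[0]?.message?.content;
  if (typeof content === "string") return content;
  if (Array.isArray(content)) {
    return content.map((part) => part.text ?? "").join("");
  }
  return "";
}
```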


Comment thread: api/server.ts
nvidia: 'https://integrate.api.nvidia.com/v1/chat/completions',
github: 'https://models.inference.ai.azure.com/chat/completions',
boa: 'https://api.bayofassets.com/v1/chat/completions',
mistral: 'https://api.mistral.ai/v1/chat/completions',

P2: Parse Mistral content parts in provider-racing helper

By adding mistral to COMPAT_ENDPOINTS, fetchProviderResponse now returns Mistral responses through the same unnormalized message.content path used for string-only providers. If Mistral returns content-part arrays, raceProviders throws when it calls winner.text.trim() / result.text.trim() and treats the call as failed, so pinned or raced Mistral providers can be skipped even when the API returned a valid answer.
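
A defensive variant of the same fix on the racing side: guard before trimming so a non-string payload is rejected cleanly instead of throwing (field names mirror the review comment; the guard itself is a sketch, not the PR's code):

```typescript
// Result shape used by the racing helper in this sketch; `text` is
// `unknown` because an unnormalized Mistral response may not be a string.
type RaceResult = { provider: string; text: unknown };

// Narrow to results whose text is a non-empty string, so callers can
// trim safely instead of hitting "text.trim is not a function".
function hasUsableText(
  result: RaceResult
): result is RaceResult & { text: string } {
  return typeof result.text === "string" && result.text.trim().length > 0;
}
```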

