feat: Add MiniMax LLM plugin support #82

Open
anandMohanan wants to merge 3 commits into oboroge0:main from anandMohanan:main

Conversation


@anandMohanan anandMohanan commented Feb 16, 2026

This adds a plugin that makes MiniMax LLMs usable in AITuberFlow.

  • New plugin: plugins/minimax-llm/
    • Supports the MiniMax M2.5, M2.5-highspeed, M2.1, M2.1-highspeed, and M2 models
    • Uses the OpenAI SDK-compatible API
    • Supports the reasoning feature
    • Strips thinking tags from responses
  • Frontend settings: adds a MiniMax settings UI to NodeSettings
  • Input/output ports: adds minimax-llm ports to Canvas

Summary by CodeRabbit

  • New Features
    • Added MiniMax LLM support to the workflow editor with multiple M2.5 model options.
    • New MiniMax node type: configurable API key, model, system prompt, temperature, max tokens, reasoning mode, and demo mode fallback.
    • Supports dynamic and static prompt ports plus prompt-section-driven prompt assembly for flexible prompts.
    • Responses are cleaned of internal "thinking" content and emit response events; includes improved error messaging.

- Add minimax-llm plugin with M2.5 model support
- Add frontend configuration for MiniMax in NodeSettings
- Add input/output ports for minimax-llm in Canvas
- Strip thinking tags from MiniMax responses

coderabbitai bot commented Feb 16, 2026

📝 Walkthrough

Adds a MiniMax LLM integration: editor recognition for minimax-llm, node settings and model options, a plugin manifest defining the node and config schema, and a MiniMaxLLMNode implementation with prompt-section handling, API calls, thinking-content stripping, and error handling.

Changes

Cohort / File(s) Summary
Editor & Node Settings
apps/web/components/editor/Canvas.tsx, apps/web/components/panels/NodeSettings.tsx
Canvas: treat minimax-llm like other LLM nodes for dynamic and static port resolution. NodeSettings: add minimax model group with five entries and a minimax-llm node config (apiKey, model, systemPrompt, promptSections, temperature, maxTokens, reasoning).
Plugin Manifest
plugins/minimax-llm/manifest.json
New plugin manifest declaring metadata, UI labels/icons, input/output ports (prompt/response), events, and a full config schema with validation and conditional fields (reasoning visibility by model).
Node Implementation
plugins/minimax-llm/node.ts
New MiniMaxLLMNode class: setup/teardown, OpenAI client init (or demo mode), clamp/configure params, build prompt from sections, call chat API (include reasoning_effort when supported), strip <think> blocks, emit response event, and comprehensive error handling.
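
The walkthrough's mention of clamping and configuring parameters can be sketched as follows. The helper names, defaults, and ranges here are illustrative assumptions, not the actual node.ts code:

```typescript
// Hypothetical sketch of config clamping as described in the walkthrough.
// Ranges and defaults are assumptions; MiniMax's OpenAI-compatible endpoint
// accepts temperature and max_tokens much like the OpenAI API does.
function clamp(value: number, min: number, max: number): number {
  return Math.min(max, Math.max(min, value));
}

function configureParams(config: { temperature?: number; maxTokens?: number }) {
  return {
    temperature: clamp(config.temperature ?? 0.7, 0, 2),
    maxTokens: clamp(config.maxTokens ?? 1024, 1, 8192),
  };
}
```

Clamping at setup time means a malformed saved config degrades to a usable value instead of producing an API validation error at execute time.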

Sequence Diagram

sequenceDiagram
    participant UI as Editor UI
    participant Node as MiniMaxLLMNode
    participant API as MiniMax API
    participant Bus as Event Bus / Result Handler

    UI->>Node: setup(config: apiKey, model, prompts)
    Node->>Node: init client or enable demo mode
    UI->>Node: execute(inputs)
    Node->>Node: buildPromptFromSections() or use inputs.prompt
    Node->>Node: assemble system prompt (character/personality)
    Node->>API: chat request {model, messages, temperature, max_tokens, reasoning_effort?}
    API-->>Node: chat response (may include <think> blocks)
    Node->>Node: stripThinkingContent()
    Node->>Bus: emit response-generated event (cleaned content)
    Bus-->>UI: deliver response
    Note over Node: on API/error -> return localized error message

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Poem

🐰 I hopped through code with nimble paws,
Prompt sections stitched without a pause,
MiniMax listens, then thinks—then speaks,
I tidy thoughts the code still keeps,
A rabbit's cheer for clever tweaks!

🚥 Pre-merge checks | ✅ 3 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage ⚠️ Warning: Docstring coverage is 0.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.

✅ Passed checks (3 passed)

  • Description Check ✅ Passed: Check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check ✅ Passed: The title accurately describes the primary change, adding MiniMax LLM plugin support. It is specific, concise, and reflects the main objective of the pull request.
  • Merge Conflict Detection ✅ Passed: No merge conflicts detected when merging into main.


@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 4

🤖 Fix all issues with AI agents
In `@plugins/minimax-llm/node.ts`:
- Around line 175-178: The code accesses response.choices[0].message.content
without guarding against an empty or missing choices array; update the handling
in the method that calls this.client.chat.completions.create (the block that
assigns to result and calls this.stripThinkingContent) to defensively check that
response.choices is an array with at least one element and that
choices[0].message and choices[0].message.content exist, falling back to an
empty string (or a safe default) when any of those are missing, then pass that
safe string into this.stripThinkingContent so a malformed API response won't
throw a TypeError and will still hit the surrounding error handling.
- Around line 93-130: stripThinkingContent is failing because thinkEnd is set to
the generic "</" and only removes those two chars, leaving the rest of the
closing tag in the output; replace this logic by matching the full closing tag
and removing complete <think>...</think> blocks case-insensitively. In
stripThinkingContent, change the end marker to the full closing tag "</think>"
(or better, abandon the manual index math and use a case-insensitive non-greedy
regex like /<think>[\s\S]*?<\/think>/i) to remove entire think blocks, update
any references to thinkEnd/closingStart/endIdx accordingly, ensure you handle
multiple occurrences by global matching, and trim the final result before
returning from stripThinkingContent.
- Around line 153-159: When building fullSystemPrompt in the node.ts block,
guard against characterName being undefined before concatenating; if
characterPersonality is truthy but characterName is missing, either use a safe
fallback (e.g., "the user" or an empty string) or omit the "You are <name>."
prefix so you don't produce "You are undefined."; update the logic around
variables characterName, characterPersonality and fullSystemPrompt (the
concatenation currently at the lines that set fullSystemPrompt using
this.systemPrompt) to compute a safe name value or conditionally add the "You
are ..." sentence only when characterName is non-empty.
- Around line 132-219: The code currently sets (apiParams as
any).reasoning_effort when a reasoning model is used; MiniMax does not support
reasoning_effort — change the logic in execute() (around apiParams /
MiniMaxLLMNode.REASONING_MODELS and this.reasoning) to instead add an extra_body
field to the request: set (apiParams as any).extra_body = { reasoning_split:
true } when reasoning is enabled, so the call to
this.client.chat.completions.create(apiParams) uses
extra_body={"reasoning_split": true} to enable MiniMax's native reasoning split
handling.
🧹 Nitpick comments (2)
apps/web/components/panels/NodeSettings.tsx (2)

947-957: Reasoning field is always visible, but only applies to M2.5 models.

The manifest defines a showWhen condition restricting the reasoning field to M2.5 and M2.5-highspeed models. The nodeConfigs entry here lacks the corresponding showWhen, so users selecting M2.1 or M2 will still see the "Reasoning Effort" dropdown even though the backend ignores it for those models.

♻️ Add showWhen to match the manifest
       {
         key: "reasoning",
         type: "select",
         label: "Reasoning Effort",
         options: [
           { label: "None", value: "none" },
           { label: "Low", value: "low" },
           { label: "Medium", value: "medium" },
           { label: "High", value: "high" },
         ],
+        showWhen: { key: "model", value: ["MiniMax-M2.5", "MiniMax-M2.5-highspeed"] },
       },

2522-2557: Missing minimax-llm entry in translation key maps.

The getNodeLabel keyMap (and the corresponding getFieldLabel nodeKeyMap at Line 2569) don't include a "minimax-llm" entry. The label will fall back to "MiniMax" from the schema, but translations via t() won't work for this node type or its fields.
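
For illustration, a minimal sketch of the missing keyMap entry the comment describes. The existing entries, the key format, and the helper signature are assumptions based on this review comment, not the actual NodeSettings.tsx:

```typescript
// Hypothetical shape of the translation key map; "openai-llm" and the
// "nodes.<key>.label" convention are assumed, not taken from the source.
const keyMap: Record<string, string> = {
  "openai-llm": "openaiLlm",
  "minimax-llm": "minimaxLlm", // the entry the reviewer flags as missing
};

function getNodeLabel(nodeType: string, fallback: string): string {
  const key = keyMap[nodeType];
  // Without a keyMap entry, the label silently falls back to the schema string
  // and never reaches the t() translation layer.
  return key ? `nodes.${key}.label` : fallback;
}
```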

Comment on lines 132 to 219
  async execute(
    inputs: Record<string, any>,
    context: NodeContext,
  ): Promise<Record<string, any>> {
    if (!this.client) {
      await context.log("[デモモード] 定型文応答を返します", "info");
      return { response: MiniMaxLLMNode.DEMO_RESPONSE };
    }

    const prompt = this.promptSections
      ? this.buildPromptFromSections(inputs)
      : ((inputs.prompt as string) ?? "");

    if (!prompt) {
      await context.log("No prompt provided", "warning");
      return { response: "" };
    }

    try {
      await context.log("Calling MiniMax API (" + this.model + ")...");

      const characterName = context.getCharacterName();
      const characterPersonality = context.getCharacterPersonality();

      let fullSystemPrompt = this.systemPrompt;
      if (characterPersonality) {
        fullSystemPrompt = this.systemPrompt + "\n\nYou are " + characterName + ". " + characterPersonality;
      }

      const apiParams: OpenAI.ChatCompletionCreateParamsNonStreaming = {
        model: this.model,
        messages: [
          { role: "system" as const, content: fullSystemPrompt },
          { role: "user" as const, content: prompt },
        ],
        temperature: this.temperature,
        max_tokens: this.maxTokens,
      };

      if (MiniMaxLLMNode.REASONING_MODELS.has(this.model) && this.reasoning !== "none") {
        (apiParams as any).reasoning_effort = this.reasoning;
      }

      const response = await this.client.chat.completions.create(apiParams);

      let result = response.choices[0].message.content ?? "";
      result = this.stripThinkingContent(result);

      await context.log("Response received (" + result.length + " chars)");

      await context.emitEvent(
        createEvent("response.generated", {
          text: result,
          model: this.model,
        }),
      );

      return { response: result };
    } catch (error: unknown) {
      if (error instanceof OpenAI.APIConnectionError) {
        const errorMsg = getErrorMessage(ErrorCode.LLM_CONNECTION_FAILED, "ja", {
          provider: "MiniMax",
        });
        await context.log(errorMsg, "error");
        return { response: "Error: Connection failed" };
      }

      if (error instanceof OpenAI.RateLimitError) {
        const errorMsg = getErrorMessage(ErrorCode.LLM_RATE_LIMIT, "ja", {
          provider: "MiniMax",
        });
        await context.log(errorMsg, "error");
        return { response: "Error: Rate limit exceeded" };
      }

      if (error instanceof OpenAI.APIError) {
        const errorMsg = getErrorMessage(ErrorCode.LLM_API_ERROR, "ja", {
          provider: "MiniMax",
          error: error.message,
        });
        await context.log(errorMsg, "error");
        return { response: "Error: " + error.message };
      }

      const message = error instanceof Error ? error.message : String(error);
      await context.log("Unexpected error: " + message, "error");
      return { response: "Error: " + message };
    }

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🌐 Web query:

MiniMax API base URL documentation 2025 reasoning_effort parameter

💡 Result:

  • MiniMax OpenAI-compatible API base URL (official docs): https://api.minimax.io/v1
  • reasoning_effort parameter:

    • MiniMax’s official OpenAI-compatible documentation does not mention/support a reasoning_effort request parameter; instead, it documents extra_body={"reasoning_split": True} to split reasoning into reasoning_details (otherwise reasoning is embedded in <think>...</think> in content). (platform.minimax.io)
    • If you saw reasoning_effort for “MiniMax M2”, that appears in some third-party OpenAI-compatible gateways (example: Fireworks), which state MiniMax M2 accepts only string values 'low' | 'medium' | 'high' and defaults to 'medium'. (docs.fireworks.ai)


The reasoning_effort parameter is not supported by MiniMax's official API—use extra_body={"reasoning_split": True} instead.

The base URL https://api.minimax.io/v1 is correct per official documentation. However, MiniMax's OpenAI-compatible endpoint does not document support for reasoning_effort as a request parameter. For reasoning handling, MiniMax's official API uses extra_body={"reasoning_split": True} to split reasoning into reasoning_details (otherwise reasoning is embedded in <think>...</think> tags in the message content). The current code's use of reasoning_effort appears to be a third-party gateway pattern (e.g., Fireworks) rather than native MiniMax support.
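
A minimal sketch of the suggested change, isolated from the node so it can be checked on its own. The helper name and the loose params type are illustrative assumptions; only the extra_body={"reasoning_split": true} shape comes from the MiniMax documentation cited above:

```typescript
// Sketch: replace the unsupported reasoning_effort parameter with MiniMax's
// documented extra_body reasoning_split flag. With reasoning_split enabled,
// reasoning is returned in reasoning_details instead of being embedded in
// <think>...</think> tags in the message content.
type ApiParams = Record<string, unknown>;

function applyReasoning(apiParams: ApiParams, reasoning: string): ApiParams {
  if (reasoning !== "none") {
    apiParams.extra_body = { reasoning_split: true };
  }
  return apiParams;
}
```

The OpenAI SDK passes extra_body fields through to the request body verbatim, which is how OpenAI-compatible vendors expose provider-specific parameters.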


Comment on lines +153 to +159
      const characterName = context.getCharacterName();
      const characterPersonality = context.getCharacterPersonality();

      let fullSystemPrompt = this.systemPrompt;
      if (characterPersonality) {
        fullSystemPrompt = this.systemPrompt + "\n\nYou are " + characterName + ". " + characterPersonality;
      }

⚠️ Potential issue | 🟡 Minor

characterName may be undefined, producing "You are undefined." in the system prompt.

When characterPersonality is truthy but characterName is empty or undefined, Line 158 concatenates it directly, yielding an unintended string like "You are undefined. ...".

🛡️ Proposed fix
      let fullSystemPrompt = this.systemPrompt;
      if (characterPersonality) {
-       fullSystemPrompt = this.systemPrompt + "\n\nYou are " + characterName + ". " + characterPersonality;
+       const name = characterName || "an AI assistant";
+       fullSystemPrompt = this.systemPrompt + "\n\nYou are " + name + ". " + characterPersonality;
      }

Comment on lines +175 to +178
      const response = await this.client.chat.completions.create(apiParams);

      let result = response.choices[0].message.content ?? "";
      result = this.stripThinkingContent(result);

⚠️ Potential issue | 🟡 Minor

No null-safety on response.choices[0].

If the API returns an empty choices array (e.g., due to content filtering or an unexpected API response), accessing response.choices[0].message.content will throw a TypeError that bypasses the typed error handling below.

🛡️ Proposed defensive check
      const response = await this.client.chat.completions.create(apiParams);

-     let result = response.choices[0].message.content ?? "";
+     const choice = response.choices?.[0];
+     if (!choice) {
+       await context.log("No response choices returned from MiniMax API", "error");
+       return { response: "Error: Empty response from API" };
+     }
+     let result = choice.message.content ?? "";
      result = this.stripThinkingContent(result);

@coderabbitai coderabbitai bot left a comment


🧹 Nitpick comments (2)
plugins/minimax-llm/node.ts (2)

17-202: Missing onEvent method required by coding guidelines.

The coding guidelines for plugins/**/node.ts require implementing setup, execute, onEvent, and teardown. This class is missing onEvent.

Proposed fix
+  async onEvent(event: Event, context: NodeContext): Promise<void> {
+    // No events to handle
+  }
+
   async teardown(): Promise<void> {

As per coding guidelines, plugins/**/node.ts: "Use BaseNode class and implement setup, execute, onEvent, and teardown methods for TypeScript node implementations".


90-107: stripThinkingContent is case-sensitive — won't strip <Think> or <THINK> variants.

The previous fix correctly uses the full </think> closing tag now, but the matching is case-sensitive. If the model returns <Think>…</Think> or <THINK>…</THINK>, the content won't be stripped. A regex approach handles this more robustly:

Proposed fix
  private stripThinkingContent(content: string): string {
-   const thinkStart = "<think>";
-   const thinkEnd = "</think>";
-   
-   let result = content;
-   
-   while (true) {
-     const startIdx = result.indexOf(thinkStart);
-     if (startIdx === -1) break;
-     
-     const endIdx = result.indexOf(thinkEnd, startIdx + thinkStart.length);
-     if (endIdx === -1) break;
-     
-     result = result.substring(0, startIdx) + result.substring(endIdx + thinkEnd.length);
-   }
-   
-   return result.trim();
+   const stripped = content.replace(/<think>[\s\S]*?<\/think>/gi, "");
+   return stripped.trim() || content.trim();
  }
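
Pulled out as a standalone function, the regex variant proposed in this comment behaves as follows on mixed-case and multi-block input:

```typescript
// Regex-based think-block stripping, as in the review's proposed fix:
// the /gi flags match every occurrence case-insensitively, and [\s\S]*?
// matches lazily across newlines. If stripping empties the string (the
// whole response was a think block), the original content is returned.
function stripThinkingContent(content: string): string {
  const stripped = content.replace(/<think>[\s\S]*?<\/think>/gi, "");
  return stripped.trim() || content.trim();
}
```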

@oboroge0
Owner

Feedback following the v2.1.0 refactoring

In v2.1.0 (#93) the plugin settings UI was switched to dynamic rendering.
As a result, a new plugin is now fully self-contained within the plugins/ folder.

Changes required

1. Please remove the changes to Canvas.tsx and NodeSettings.tsx

Additions to these hard-coded maps are no longer needed. The UI is generated automatically from node.inputs/node.outputs and config in manifest.json.

2. Please add promptSections to manifest.json

The current manifest.json has no Prompt Builder field, so dynamic input port generation will not work.
Add the following to the config section:

"promptSections": {
  "type": "prompt-builder",
  "label": "Prompt Builder",
  "description": "Build complex prompts by combining static text and dynamic input sections"
}

(Reference: the same field in plugins/openai-llm/manifest.json)

3. Please rebase onto the latest main

Canvas.tsx and NodeSettings.tsx changed substantially in v2.1.0, so conflicts are likely. Excluding your changes to those two files after rebasing will resolve them.

No changes needed

  • plugins/minimax-llm/manifest.json: the config schema, showWhen conditions, and UI definitions are well done
  • plugins/minimax-llm/node.ts: the implementation is fine as is

In short, all that is ultimately needed are the two files under plugins/minimax-llm/.
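
The requested rebase workflow might look like the following; the remote names (upstream for oboroge0's repo, origin for the fork) are assumptions:

```shell
# Bring the fork's main up to date with the upstream repository.
git fetch upstream
git rebase upstream/main          # resolve any Canvas/NodeSettings conflicts here

# Drop the two frontend files the maintainer says are no longer needed,
# restoring them to their upstream state.
git checkout upstream/main -- \
    apps/web/components/editor/Canvas.tsx \
    apps/web/components/panels/NodeSettings.tsx
git commit --amend --no-edit

# Rewriting history requires a force push; --force-with-lease refuses to
# overwrite work pushed by anyone else in the meantime.
git push --force-with-lease origin main
```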
