- Add minimax-llm plugin with M2.5 model support
- Add frontend configuration for MiniMax in NodeSettings
- Add input/output ports for minimax-llm in Canvas
- Strip thinking tags from MiniMax responses
📝 Walkthrough

Adds a MiniMax LLM integration: editor recognition for …

Changes
Sequence Diagram

```mermaid
sequenceDiagram
    participant UI as Editor UI
    participant Node as MiniMaxLLMNode
    participant API as MiniMax API
    participant Bus as Event Bus / Result Handler
    UI->>Node: setup(config: apiKey, model, prompts)
    Node->>Node: init client or enable demo mode
    UI->>Node: execute(inputs)
    Node->>Node: buildPromptFromSections() or use inputs.prompt
    Node->>Node: assemble system prompt (character/personality)
    Node->>API: chat request {model, messages, temperature, max_tokens, reasoning_effort?}
    API-->>Node: chat response (may include <think> blocks)
    Node->>Node: stripThinkingContent()
    Node->>Bus: emit response-generated event (cleaned content)
    Bus-->>UI: deliver response
    Note over Node: on API/error -> return localized error message
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
🚥 Pre-merge checks: ✅ 3 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 4
🤖 Fix all issues with AI agents
In `@plugins/minimax-llm/node.ts`:
- Around line 175-178: The code accesses response.choices[0].message.content
without guarding against an empty or missing choices array; update the handling
in the method that calls this.client.chat.completions.create (the block that
assigns to result and calls this.stripThinkingContent) to defensively check that
response.choices is an array with at least one element and that
choices[0].message and choices[0].message.content exist, falling back to an
empty string (or a safe default) when any of those are missing, then pass that
safe string into this.stripThinkingContent so a malformed API response won't
throw a TypeError and will still hit the surrounding error handling.
- Around line 93-130: stripThinkingContent is failing because thinkEnd is set to
the generic "</" and only removes those two chars, leaving the rest of the
closing tag in the output; replace this logic by matching the full closing tag
and removing complete <think>...</think> blocks case-insensitively. In
stripThinkingContent, change the end marker to the full closing tag "</think>"
(or better, abandon the manual index math and use a case-insensitive non-greedy
regex like /<think>[\s\S]*?<\/think>/i) to remove entire think blocks, update
any references to thinkEnd/closingStart/endIdx accordingly, ensure you handle
multiple occurrences by global matching, and trim the final result before
returning from stripThinkingContent.
- Around line 153-159: When building fullSystemPrompt in the node.ts block,
guard against characterName being undefined before concatenating; if
characterPersonality is truthy but characterName is missing, either use a safe
fallback (e.g., "the user" or an empty string) or omit the "You are <name>."
prefix so you don't produce "You are undefined."; update the logic around
variables characterName, characterPersonality and fullSystemPrompt (the
concatenation currently at the lines that set fullSystemPrompt using
this.systemPrompt) to compute a safe name value or conditionally add the "You
are ..." sentence only when characterName is non-empty.
- Around line 132-219: The code currently sets (apiParams as
any).reasoning_effort when a reasoning model is used; MiniMax does not support
reasoning_effort — change the logic in execute() (around apiParams /
MiniMaxLLMNode.REASONING_MODELS and this.reasoning) to instead add an extra_body
field to the request: set (apiParams as any).extra_body = { reasoning_split:
true } when reasoning is enabled, so the call to
this.client.chat.completions.create(apiParams) uses
extra_body={"reasoning_split": true} to enable MiniMax's native reasoning split
handling.
🧹 Nitpick comments (2)
apps/web/components/panels/NodeSettings.tsx (2)
947-957: Reasoning field is always visible, but only applies to M2.5 models.

The manifest defines a `showWhen` condition restricting the reasoning field to M2.5 and M2.5-highspeed models. The `nodeConfigs` entry here lacks the corresponding `showWhen`, so users selecting M2.1 or M2 will still see the "Reasoning Effort" dropdown even though the backend ignores it for those models.

♻️ Add showWhen to match the manifest

```diff
 {
   key: "reasoning",
   type: "select",
   label: "Reasoning Effort",
   options: [
     { label: "None", value: "none" },
     { label: "Low", value: "low" },
     { label: "Medium", value: "medium" },
     { label: "High", value: "high" },
   ],
+  showWhen: { key: "model", value: ["MiniMax-M2.5", "MiniMax-M2.5-highspeed"] },
 },
```
2522-2557: Missing `minimax-llm` entry in translation key maps.

The `getNodeLabel` keyMap (and the corresponding `getFieldLabel` nodeKeyMap at Line 2569) don't include a `"minimax-llm"` entry. The label will fall back to `"MiniMax"` from the schema, but translations via `t()` won't work for this node type or its fields.
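A sketch of the missing entry; the real keyMap shape and translation keys in NodeSettings.tsx are not shown in this review, so the key names below (`nodes.minimaxLlm.label`, etc.) are placeholders:

```typescript
// Hypothetical sketch: map node type -> translation key, falling back to the
// schema label when a node type is unmapped. Only the "minimax-llm" entry is
// the point; the other entries and key names are illustrative.
const keyMap: Record<string, string> = {
  "openai-llm": "nodes.openaiLlm.label",
  // The missing entry, so t() can resolve a translated label:
  "minimax-llm": "nodes.minimaxLlm.label",
};

function getNodeLabel(
  nodeType: string,
  t: (key: string) => string,
  fallback: string,
): string {
  const key = keyMap[nodeType];
  return key ? t(key) : fallback; // falls back to the schema label when unmapped
}
```

The same pattern would apply to the `getFieldLabel` nodeKeyMap mentioned above.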
```ts
async execute(
  inputs: Record<string, any>,
  context: NodeContext,
): Promise<Record<string, any>> {
  if (!this.client) {
    await context.log("[デモモード] 定型文応答を返します", "info");
    return { response: MiniMaxLLMNode.DEMO_RESPONSE };
  }

  const prompt = this.promptSections
    ? this.buildPromptFromSections(inputs)
    : ((inputs.prompt as string) ?? "");

  if (!prompt) {
    await context.log("No prompt provided", "warning");
    return { response: "" };
  }

  try {
    await context.log("Calling MiniMax API (" + this.model + ")...");

    const characterName = context.getCharacterName();
    const characterPersonality = context.getCharacterPersonality();

    let fullSystemPrompt = this.systemPrompt;
    if (characterPersonality) {
      fullSystemPrompt = this.systemPrompt + "\n\nYou are " + characterName + ". " + characterPersonality;
    }

    const apiParams: OpenAI.ChatCompletionCreateParamsNonStreaming = {
      model: this.model,
      messages: [
        { role: "system" as const, content: fullSystemPrompt },
        { role: "user" as const, content: prompt },
      ],
      temperature: this.temperature,
      max_tokens: this.maxTokens,
    };

    if (MiniMaxLLMNode.REASONING_MODELS.has(this.model) && this.reasoning !== "none") {
      (apiParams as any).reasoning_effort = this.reasoning;
    }

    const response = await this.client.chat.completions.create(apiParams);

    let result = response.choices[0].message.content ?? "";
    result = this.stripThinkingContent(result);

    await context.log("Response received (" + result.length + " chars)");

    await context.emitEvent(
      createEvent("response.generated", {
        text: result,
        model: this.model,
      }),
    );

    return { response: result };
  } catch (error: unknown) {
    if (error instanceof OpenAI.APIConnectionError) {
      const errorMsg = getErrorMessage(ErrorCode.LLM_CONNECTION_FAILED, "ja", {
        provider: "MiniMax",
      });
      await context.log(errorMsg, "error");
      return { response: "Error: Connection failed" };
    }

    if (error instanceof OpenAI.RateLimitError) {
      const errorMsg = getErrorMessage(ErrorCode.LLM_RATE_LIMIT, "ja", {
        provider: "MiniMax",
      });
      await context.log(errorMsg, "error");
      return { response: "Error: Rate limit exceeded" };
    }

    if (error instanceof OpenAI.APIError) {
      const errorMsg = getErrorMessage(ErrorCode.LLM_API_ERROR, "ja", {
        provider: "MiniMax",
        error: error.message,
      });
      await context.log(errorMsg, "error");
      return { response: "Error: " + error.message };
    }

    const message = error instanceof Error ? error.message : String(error);
    await context.log("Unexpected error: " + message, "error");
    return { response: "Error: " + message };
  }
```
🧩 Analysis chain
🌐 Web query:
MiniMax API base URL documentation 2025 reasoning_effort parameter
💡 Result:
- MiniMax OpenAI-compatible API base URL (official docs):
  - International: `https://api.minimax.io/v1` (platform.minimax.io)
  - China: `https://api.minimaxi.com/v1` (platform.minimax.io)
- `reasoning_effort` parameter:
  - MiniMax's official OpenAI-compatible documentation does not mention/support a `reasoning_effort` request parameter; instead, it documents `extra_body={"reasoning_split": True}` to split reasoning into `reasoning_details` (otherwise reasoning is embedded in `<think>...</think>` in `content`). (platform.minimax.io)
  - If you saw `reasoning_effort` for "MiniMax M2", that appears in some third-party OpenAI-compatible gateways (example: Fireworks), which state MiniMax M2 accepts only string values `'low' | 'medium' | 'high'` and defaults to `'medium'`. (docs.fireworks.ai)
Citations:
- 1: https://platform.minimax.io/docs/api-reference/text-openai-api
- 2: https://platform.minimax.io/docs/api-reference/text-m2-function-call-refer
- 3: https://platform.minimax.io/docs/api-reference/text-openai-api
- 4: https://docs.fireworks.ai/api-reference/post-completions?utm_source=openai
The reasoning_effort parameter is not supported by MiniMax's official API—use extra_body={"reasoning_split": True} instead.
The base URL https://api.minimax.io/v1 is correct per official documentation. However, MiniMax's OpenAI-compatible endpoint does not document support for reasoning_effort as a request parameter. For reasoning handling, MiniMax's official API uses extra_body={"reasoning_split": True} to split reasoning into reasoning_details (otherwise reasoning is embedded in <think>...</think> tags in the message content). The current code's use of reasoning_effort appears to be a third-party gateway pattern (e.g., Fireworks) rather than native MiniMax support.
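Note that `extra_body` is a Python SDK convention: in Python, `extra_body={"reasoning_split": True}` simply merges `reasoning_split` into the JSON request body. The Node `openai` SDK has no `extra_body` helper, so the TypeScript equivalent is to merge the field into the request params directly. A sketch under that assumption (the `withReasoningSplit` helper is hypothetical; model names are taken from this review's context):

```typescript
// Hypothetical helper: enable MiniMax's native reasoning split by merging
// reasoning_split into the request body for reasoning-capable models.
const REASONING_MODELS = new Set(["MiniMax-M2.5", "MiniMax-M2.5-highspeed"]);

interface ChatParams {
  model: string;
  messages: { role: string; content: string }[];
  [key: string]: unknown; // allow provider-specific extras like reasoning_split
}

function withReasoningSplit(params: ChatParams, reasoning: string): ChatParams {
  if (REASONING_MODELS.has(params.model) && reasoning !== "none") {
    // Python SDK: extra_body={"reasoning_split": True}; in the Node SDK the
    // field can be merged into the request params object instead.
    return { ...params, reasoning_split: true };
  }
  return params;
}
```

The node could then call `this.client.chat.completions.create(withReasoningSplit(apiParams, this.reasoning) as any)`, with reasoning returned in `reasoning_details` rather than inline `<think>` tags.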
```ts
const characterName = context.getCharacterName();
const characterPersonality = context.getCharacterPersonality();

let fullSystemPrompt = this.systemPrompt;
if (characterPersonality) {
  fullSystemPrompt = this.systemPrompt + "\n\nYou are " + characterName + ". " + characterPersonality;
}
```
characterName may be undefined, producing "You are undefined." in the system prompt.
When characterPersonality is truthy but characterName is empty or undefined, Line 158 concatenates it directly, yielding an unintended string like "You are undefined. ...".
🛡️ Proposed fix

```diff
 let fullSystemPrompt = this.systemPrompt;
 if (characterPersonality) {
-  fullSystemPrompt = this.systemPrompt + "\n\nYou are " + characterName + ". " + characterPersonality;
+  const name = characterName || "an AI assistant";
+  fullSystemPrompt = this.systemPrompt + "\n\nYou are " + name + ". " + characterPersonality;
 }
```
```ts
const response = await this.client.chat.completions.create(apiParams);

let result = response.choices[0].message.content ?? "";
result = this.stripThinkingContent(result);
```
No null-safety on response.choices[0].
If the API returns an empty choices array (e.g., due to content filtering or an unexpected API response), accessing response.choices[0].message.content will throw a TypeError that bypasses the typed error handling below.
🛡️ Proposed defensive check
```diff
 const response = await this.client.chat.completions.create(apiParams);

-let result = response.choices[0].message.content ?? "";
+const choice = response.choices?.[0];
+if (!choice) {
+  await context.log("No response choices returned from MiniMax API", "error");
+  return { response: "Error: Empty response from API" };
+}
+let result = choice.message.content ?? "";
 result = this.stripThinkingContent(result);
```
🧹 Nitpick comments (2)
plugins/minimax-llm/node.ts (2)
17-202: Missing `onEvent` method required by coding guidelines.

The coding guidelines for `plugins/**/node.ts` require implementing `setup`, `execute`, `onEvent`, and `teardown`. This class is missing `onEvent`.

Proposed fix

```diff
+  async onEvent(event: Event, context: NodeContext): Promise<void> {
+    // No events to handle
+  }
+
   async teardown(): Promise<void> {
```

As per coding guidelines, `plugins/**/node.ts`: "Use BaseNode class and implement setup, execute, onEvent, and teardown methods for TypeScript node implementations".
90-107: `stripThinkingContent` is case-sensitive — won't strip `<Think>` or `<THINK>` variants.

The previous fix correctly uses the full `</think>` closing tag now, but the matching is case-sensitive. If the model returns `<Think>…</Think>` or `<THINK>…</THINK>`, the content won't be stripped. A regex approach handles this more robustly:

Proposed fix

```diff
 private stripThinkingContent(content: string): string {
-  const thinkStart = "<think>";
-  const thinkEnd = "</think>";
-
-  let result = content;
-
-  while (true) {
-    const startIdx = result.indexOf(thinkStart);
-    if (startIdx === -1) break;
-
-    const endIdx = result.indexOf(thinkEnd, startIdx + thinkStart.length);
-    if (endIdx === -1) break;
-
-    result = result.substring(0, startIdx) + result.substring(endIdx + thinkEnd.length);
-  }
-
-  return result.trim();
+  const stripped = content.replace(/<think>[\s\S]*?<\/think>/gi, "");
+  return stripped.trim() || content.trim();
 }
```
Feedback following the v2.1.0 refactoring

In v2.1.0 (#93), the plugin settings UI was switched to dynamic rendering.

Points that need changes

1.
Added a plugin that makes MiniMax's LLM usable in AITuberFlow.

plugins/minimax-llm/

Summary by CodeRabbit