diff --git a/README.md b/README.md
index 84c27e21..dfc24df0 100644
--- a/README.md
+++ b/README.md
@@ -32,7 +32,7 @@ OpenUsage lives in your menu bar and shows you how much of your AI coding subscr
 - [**Gemini**](docs/providers/gemini.md) / pro, flash, workspace/free/paid tier
 - [**JetBrains AI Assistant**](docs/providers/jetbrains-ai-assistant.md) / quota, remaining
 - [**Kimi Code**](docs/providers/kimi.md) / session, weekly
-- [**MiniMax**](docs/providers/minimax.md) / coding plan session
+- [**MiniMax**](docs/providers/minimax.md) / coding plan session model-calls, CN TTS/image buckets
 - [**OpenCode Go**](docs/providers/opencode-go.md) / 5h, weekly, monthly spend limits
 - [**Windsurf**](docs/providers/windsurf.md) / prompt credits, flex credits
 - [**Z.ai**](docs/providers/zai.md) / session, weekly, web searches
diff --git a/docs/providers/minimax.md b/docs/providers/minimax.md
index 9b78f338..d35a8624 100644
--- a/docs/providers/minimax.md
+++ b/docs/providers/minimax.md
@@ -8,6 +8,8 @@
 - **Endpoint:** `GET https://api.minimax.io/v1/api/openplatform/coding_plan/remains`
 - **Auth:** `Authorization: Bearer `
 - **Window model:** dynamic rolling 5-hour limit (per MiniMax Coding Plan docs)
+- **Display note:** OpenUsage shows the raw text-session counts from the remains API as `model-calls`, because that matches the observed official usage display.
+- **CN note:** current CN docs use `https://www.minimaxi.com/v1/api/openplatform/coding_plan/remains`.
 
 ## Authentication
 
@@ -44,6 +46,7 @@ Fallbacks:
 
 When the selected region is `CN`, requests use:
 
+- `https://www.minimaxi.com/v1/api/openplatform/coding_plan/remains`
 - `https://api.minimaxi.com/v1/api/openplatform/coding_plan/remains`
 - `https://api.minimaxi.com/v1/coding_plan/remains`
 
@@ -61,25 +64,48 @@ Expected payload fields:
 
 ## Usage Mapping
 
-- Treat `current_interval_usage_count` as remaining prompts (MiniMax remains API behavior).
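+A representative remains payload (illustrative sketch only — the field names and nesting follow this plugin's parser and test fixtures; real responses may nest everything under `data`):
+
+```json
+{
+  "base_resp": { "status_code": 0 },
+  "current_subscribe_title": "Plus-High-Speed",
+  "model_remains": [
+    {
+      "model_name": "MiniMax-M2.7-highspeed",
+      "current_interval_total_count": 4500,
+      "current_interval_usage_count": 4200,
+      "start_time": 1700000000000,
+      "end_time": 1700018000000
+    },
+    {
+      "model_name": "image-01",
+      "current_interval_total_count": 100,
+      "current_interval_usage_count": 80,
+      "remains_time": 86400
+    }
+  ]
+}
+```
+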
+- Treat `current_interval_usage_count` as the remaining raw session/resource count returned by the remains API.
+- For the main text `Session` line, OpenUsage displays the raw remains numbers as `model-calls` rather than converting them to `prompts`.
 - If only remaining aliases are provided, compute `used = total - remaining`.
 - If explicit used-count fields are provided, prefer them.
-- Plan name is taken from explicit plan/title fields when available.
-- If plan fields are missing in GLOBAL mode, infer plan tier from known limits (`100/300/1000/2000` prompts or `1500/4500/15000/30000` model-call equivalents).
-- If plan fields are missing in CN mode, infer only exact known CN limits (`600/1500/4500` model-call counts).
+- Plan name is taken from explicit plan/title fields when available, and normalized to a shared six-plan naming scheme:
+  - `Starter`
+  - `Plus`
+  - `Max`
+  - `Plus-High-Speed`
+  - `Max-High-Speed`
+  - `Ultra-High-Speed`
+- If plan fields are missing, infer the plan tier from the current Token Plan quota table for the selected region:
+  - `GLOBAL` raw `model-calls`: `1500 => Starter`, `4500 => Plus`, `15000 => Max`, `30000 => Ultra-High-Speed`
+  - `CN` raw `model-calls`: `600 => Starter`, `1500 => Plus`, `4500 => Max`, `30000 => Ultra-High-Speed`
+- For overlapping middle tiers, the plugin also inspects companion daily quotas when present to disambiguate `Standard` vs `High-Speed`:
+  - `GLOBAL 4500`: `image-01 50` or `Speech 2.8 4000` => `Plus`; `image-01 100` or `Speech 2.8 9000` => `Plus-High-Speed`
+  - `GLOBAL 15000`: `image-01 120` or `Speech 2.8 11000` => `Max`; `image-01 200` or `Speech 2.8 19000` => `Max-High-Speed`
+  - `CN 1500`: `image-01 50` or `speech-hd 4000` => `Plus`; `image-01 100` or `speech-hd 9000` => `Plus-High-Speed`
+  - `CN 4500`: `image-01 120` or `speech-hd 11000` => `Max`; `image-01 200` or `speech-hd 19000` => `Max-High-Speed`
+- If those companion quotas are absent or conflicting, the plugin falls back to the coarse family label (`Plus` / `Max`) instead of guessing.
+- Additional `model_remains[]` companion resource buckets are rendered as separate daily detail lines in both `GLOBAL` and `CN` mode, for example `speech-hd` (`Text to Speech HD`) or `image-01`.
 - Use `end_time` for reset timestamp when present.
 - Fallback to `remains_time` when `end_time` is absent.
 - Use `start_time` + `end_time` as `periodDurationMs` when both are valid.
+- Non-session companion resource lines use a daily window when only `remains_time` is present.
+- Prompt-based marketing copy is ignored by the plugin; all inference is based on raw remains quotas and companion resource buckets.
+- Official package tables used for this split, checked on 2026-03-23:
+  - Global:
+  - CN:
 
 ## Output
 
 - **Plan**: best-effort from API payload (normalized to concise label, with ` (CN)` or ` (GLOBAL)` suffix)
 - **Session** (overview progress line):
   - `label`: `Session`
-  - `format`: count (`prompts`)
-  - `used`: computed used prompts
-  - `limit`: total prompt limit for current window
+  - `format`: count (`model-calls`)
+  - `used`: computed used model-call count from raw remains data
+  - `limit`: raw session limit from the remains payload
   - `resetsAt`: derived from `end_time` or `remains_time`
+- **Extra resources** (detail progress lines when present in either region):
+  - `Text to Speech HD` / `Text to Speech Turbo`: count (`chars`)
+  - `Image Generation` / `image-01`: count (`images`)
 
 ## Errors
diff --git a/plugins/minimax/plugin.js b/plugins/minimax/plugin.js
index 2e10476c..242fc3de 100644
--- a/plugins/minimax/plugin.js
+++ b/plugins/minimax/plugin.js
@@ -4,27 +4,49 @@
     "https://api.minimax.io/v1/coding_plan/remains",
     "https://www.minimax.io/v1/api/openplatform/coding_plan/remains",
   ]
-  const CN_PRIMARY_USAGE_URL = "https://api.minimaxi.com/v1/api/openplatform/coding_plan/remains"
-  const CN_FALLBACK_USAGE_URLS = ["https://api.minimaxi.com/v1/coding_plan/remains"]
+  const CN_PRIMARY_USAGE_URL = "https://www.minimaxi.com/v1/api/openplatform/coding_plan/remains"
+  const CN_FALLBACK_USAGE_URLS = [
+    "https://api.minimaxi.com/v1/api/openplatform/coding_plan/remains",
+    "https://api.minimaxi.com/v1/coding_plan/remains",
+  ]
   const GLOBAL_API_KEY_ENV_VARS = ["MINIMAX_API_KEY", "MINIMAX_API_TOKEN"]
   const CN_API_KEY_ENV_VARS = ["MINIMAX_CN_API_KEY", "MINIMAX_API_KEY", "MINIMAX_API_TOKEN"]
   const CODING_PLAN_WINDOW_MS = 5 * 60 * 60 * 1000
   const CODING_PLAN_WINDOW_TOLERANCE_MS = 10 * 60 * 1000
-  // GLOBAL plan tiers (based on prompt limits)
-  const GLOBAL_PROMPT_LIMIT_TO_PLAN = {
-    100: "Starter",
-    300: "Plus",
-    1000: "Max",
-    2000: "Ultra",
-  }
-  // CN plan tiers (based on model call counts = prompts × 15)
-  // Starter: 40 prompts = 600, Plus: 100 prompts = 1500, Max: 300 prompts = 4500
-  const CN_PROMPT_LIMIT_TO_PLAN = {
+  const DAILY_WINDOW_MS = 24 * 60 * 60 * 1000
+  const GLOBAL_MODEL_CALL_LIMIT_TO_PLAN = {
+    1500: "Starter",
+    4500: "Plus",
+    15000: "Max",
+    30000: "Ultra-High-Speed",
+  }
+  const CN_MODEL_CALL_LIMIT_TO_PLAN = {
     600: "Starter",
     1500: "Plus",
     4500: "Max",
+    30000: "Ultra-High-Speed",
+  }
+  const GLOBAL_COMPANION_QUOTA_HINTS = {
+    4500: {
+      image01: { 50: "Plus", 100: "Plus-High-Speed" },
+      speechHd: { 4000: "Plus", 9000: "Plus-High-Speed" },
+    },
+    15000: {
+      image01: { 120: "Max", 200: "Max-High-Speed" },
+      speechHd: { 11000: "Max", 19000: "Max-High-Speed" },
+    },
   }
-  const MODEL_CALLS_PER_PROMPT = 15
+  const CN_COMPANION_QUOTA_HINTS = {
+    1500: {
+      image01: { 50: "Plus", 100: "Plus-High-Speed" },
+      speechHd: { 4000: "Plus", 9000: "Plus-High-Speed" },
+    },
+    4500: {
+      image01: { 120: "Max", 200: "Max-High-Speed" },
+      speechHd: { 11000: "Max", 19000: "Max-High-Speed" },
+    },
+  }
+  const MODEL_CALLS_SUFFIX = "model-calls"
 
   function readString(value) {
     if (typeof value !== "string") return null
@@ -54,9 +76,24 @@
     if (!raw) return null
     const compact = raw.replace(/\s+/g, " ").trim()
     const withoutPrefix = compact.replace(/^minimax\s+coding\s+plan\b[:\-]?\s*/i, "").trim()
-    if (withoutPrefix) return withoutPrefix
-    if (/coding\s+plan/i.test(compact)) return "Coding Plan"
-    return compact
+    const base = withoutPrefix || compact
+    if (/coding\s+plan/i.test(compact) && !withoutPrefix) return "Coding Plan"
+
+    const canonical = base
+      .replace(/\s*-\s*/g, "-")
+      .replace(/极速版/gi, "High-Speed")
+      .replace(/highspeed/gi, "High-Speed")
+      .replace(/high-speed/gi, "High-Speed")
+      .replace(/\s+/g, " ")
+      .trim()
+
+    if (/^starter$/i.test(canonical)) return "Starter"
+    if (/^plus$/i.test(canonical)) return "Plus"
+    if (/^max$/i.test(canonical)) return "Max"
+    if (/^plus-?high-speed$/i.test(canonical)) return "Plus-High-Speed"
+    if (/^max-?high-speed$/i.test(canonical)) return "Max-High-Speed"
+    if (/^ultra-?high-speed$/i.test(canonical)) return "Ultra-High-Speed"
+    return canonical
   }
 
   function inferPlanNameFromLimit(totalCount, endpointSelection) {
@@ -65,15 +102,153 @@
     const normalized = Math.round(n)
 
     if (endpointSelection === "CN") {
-      // CN totals are model-call counts; only exact known CN tiers should infer.
-      return CN_PROMPT_LIMIT_TO_PLAN[normalized] || null
+      return CN_MODEL_CALL_LIMIT_TO_PLAN[normalized] || null
     }
+    return GLOBAL_MODEL_CALL_LIMIT_TO_PLAN[normalized] || null
+  }
+
+  function readUsageRawName(item) {
+    return normalizeUsageName(
+      pickFirstString([
+        item.model_name,
+        item.modelName,
+        item.resource_name,
+        item.resourceName,
+        item.name,
+      ])
+    )
+  }
+
+  function normalizeUsageNameKey(value) {
+    return value ? value.toLowerCase() : ""
+  }
 
-    if (GLOBAL_PROMPT_LIMIT_TO_PLAN[normalized]) return GLOBAL_PROMPT_LIMIT_TO_PLAN[normalized]
+  function isSpeechHdUsageName(name) {
+    return (
+      name.includes("text to speech hd") ||
+      name.includes("speech 2.8") ||
+      /^speech(?:-[\d.]+)?-hd$/.test(name)
+    )
+  }
+
+  function isSpeechTurboUsageName(name) {
+    return (
+      name.includes("text to speech turbo") ||
+      /^speech(?:-[\d.]+)?-turbo$/.test(name)
+    )
+  }
 
-    if (normalized % MODEL_CALLS_PER_PROMPT !== 0) return null
-    const inferredPromptLimit = normalized / MODEL_CALLS_PER_PROMPT
-    return GLOBAL_PROMPT_LIMIT_TO_PLAN[inferredPromptLimit] || null
+  function isImage01UsageName(name) {
+    return name.includes("image-01")
+  }
+
+  function isSessionUsageName(name) {
+    return (
+      name.includes("minimax-m") ||
+      name.includes("text model") ||
+      name.includes("coding")
+    )
+  }
+
+  function inferPlanNameFromSignals(signals, endpointSelection) {
+    const sessionTotal = readNumber(signals && signals.sessionTotal)
+    if (sessionTotal === null || sessionTotal <= 0) return null
+
+    const basePlanName = inferPlanNameFromLimit(sessionTotal, endpointSelection)
+    if (!basePlanName) return null
+
+    const hintTable =
+      endpointSelection === "CN" ? CN_COMPANION_QUOTA_HINTS : GLOBAL_COMPANION_QUOTA_HINTS
+    const hintSpec = hintTable[Math.round(sessionTotal)]
+    if (!hintSpec) return basePlanName
+
+    const image01Total = readNumber(signals.image01Total)
+    const speechHdTotal = readNumber(signals.speechHdTotal)
+    const candidates = []
+
+    if (image01Total !== null) {
+      const planFromImage = hintSpec.image01[Math.round(image01Total)]
+      if (planFromImage) candidates.push(planFromImage)
+    }
+    if (speechHdTotal !== null) {
+      const planFromSpeech = hintSpec.speechHd[Math.round(speechHdTotal)]
+      if (planFromSpeech) candidates.push(planFromSpeech)
+    }
+
+    if (candidates.length === 0) return basePlanName
+    if (candidates.every((candidate) => candidate === candidates[0])) return candidates[0]
+    return basePlanName
+  }
+
+  function collectPlanInferenceSignals(modelRemains) {
+    const signals = {
+      sessionTotal: null,
+      speechHdTotal: null,
+      image01Total: null,
+    }
+    let fallbackSessionTotal = null
+
+    for (let i = 0; i < modelRemains.length; i += 1) {
+      const item = modelRemains[i]
+      if (!item || typeof item !== "object") continue
+
+      const total = readNumber(item.current_interval_total_count ?? item.currentIntervalTotalCount)
+      if (total === null || total <= 0) continue
+
+      const normalizedTotal = Math.round(total)
+      if (fallbackSessionTotal === null) fallbackSessionTotal = normalizedTotal
+
+      const name = normalizeUsageNameKey(readUsageRawName(item))
+      if (signals.speechHdTotal === null && isSpeechHdUsageName(name)) {
+        signals.speechHdTotal = normalizedTotal
+        continue
+      }
+      if (signals.image01Total === null && isImage01UsageName(name)) {
+        signals.image01Total = normalizedTotal
+        continue
+      }
+      if (signals.sessionTotal === null && isSessionUsageName(name)) {
+        signals.sessionTotal = normalizedTotal
+      }
+    }
+
+    if (signals.sessionTotal === null) signals.sessionTotal = fallbackSessionTotal
+    return signals
+  }
+
+  function normalizeUsageName(value) {
+    const raw = readString(value)
+    if (!raw) return null
+    return raw.replace(/\s+/g, " ").trim()
+  }
+
+  function classifyUsageEntry(item, endpointSelection, index) {
+    const rawName = readUsageRawName(item)
+    const name = normalizeUsageNameKey(rawName)
+
+    if (isSpeechHdUsageName(name)) {
+      return { label: "Text to Speech HD", suffix: "chars", isSession: false }
+    }
+    if (isSpeechTurboUsageName(name)) {
+      return { label: "Text to Speech Turbo", suffix: "chars", isSession: false }
+    }
+    if (isImage01UsageName(name)) {
+      return { label: "image-01", suffix: "images", isSession: false }
+    }
+    if (name.includes("image generation")) {
+      return { label: "Image Generation", suffix: "images", isSession: false }
+    }
+    if (isSessionUsageName(name)) {
+      return { label: "Session", suffix: MODEL_CALLS_SUFFIX, isSession: true }
+    }
+    if (index === 0) {
+      return { label: "Session", suffix: MODEL_CALLS_SUFFIX, isSession: true }
+    }
+    return {
+      label: rawName || "Usage",
+      suffix: "count",
+      isSession: false,
+    }
   }
 
   function epochToMs(epoch) {
@@ -82,7 +257,7 @@
     return Math.abs(n) < 1e10 ? n * 1000 : n
   }
 
-  function inferRemainsMs(remainsRaw, endMs, nowMs) {
+  function inferRemainsMs(remainsRaw, endMs, nowMs, expectedWindowMs) {
     if (remainsRaw === null || remainsRaw <= 0) return null
 
     const asSecondsMs = remainsRaw * 1000
@@ -99,7 +274,8 @@
     }
 
     // Coding Plan resets every 5h. Use that constraint before defaulting.
-    const maxExpectedMs = CODING_PLAN_WINDOW_MS + CODING_PLAN_WINDOW_TOLERANCE_MS
+    const maxExpectedMs =
+      (expectedWindowMs || CODING_PLAN_WINDOW_MS) + CODING_PLAN_WINDOW_TOLERANCE_MS
 
     const secondsLooksValid = asSecondsMs <= maxExpectedMs
     const millisecondsLooksValid = asMillisecondsMs <= maxExpectedMs
@@ -212,6 +388,112 @@
     throw "Could not parse usage data."
   }
 
+  function parseModelRemainEntry(ctx, item, endpointSelection, index) {
+    if (!item || typeof item !== "object") return null
+
+    const usageMeta = classifyUsageEntry(item, endpointSelection, index)
+    let total = readNumber(item.current_interval_total_count ?? item.currentIntervalTotalCount)
+    if (total === null || total <= 0) return null
+
+    const usageFieldCount = readNumber(item.current_interval_usage_count ?? item.currentIntervalUsageCount)
+    const remainingCount = readNumber(
+      item.current_interval_remaining_count ??
+        item.currentIntervalRemainingCount ??
+        item.current_interval_remains_count ??
+        item.currentIntervalRemainsCount ??
+        item.current_interval_remain_count ??
+        item.currentIntervalRemainCount ??
+        item.remaining_count ??
+        item.remainingCount ??
+        item.remains_count ??
+        item.remainsCount ??
+        item.remaining ??
+        item.remains ??
+        item.left_count ??
+        item.leftCount
+    )
+    // MiniMax "coding_plan/remains" commonly returns remaining usage in current_interval_usage_count.
+    const inferredRemainingCount = remainingCount !== null ? remainingCount : usageFieldCount
+    const explicitUsed = readNumber(
+      item.current_interval_used_count ??
+        item.currentIntervalUsedCount ??
+        item.used_count ??
+        item.used
+    )
+    let used = explicitUsed
+
+    if (used === null && inferredRemainingCount !== null) used = total - inferredRemainingCount
+    if (used === null) return null
+
+    if (used < 0) used = 0
+    if (used > total) used = total
+
+    const startMs = epochToMs(item.start_time ?? item.startTime)
+    const endMs = epochToMs(item.end_time ?? item.endTime)
+    const remainsRaw = readNumber(item.remains_time ?? item.remainsTime)
+    const nowMs = Date.now()
+    const expectedRemainsWindowMs =
+      !usageMeta.isSession ? DAILY_WINDOW_MS : CODING_PLAN_WINDOW_MS
+    const remainsMs = inferRemainsMs(remainsRaw, endMs, nowMs, expectedRemainsWindowMs)
+
+    let resetsAt = endMs !== null ? ctx.util.toIso(endMs) : null
+    if (!resetsAt && remainsMs !== null) {
+      resetsAt = ctx.util.toIso(nowMs + remainsMs)
+    }
+
+    let periodDurationMs = null
+    if (startMs !== null && endMs !== null && endMs > startMs) {
+      periodDurationMs = endMs - startMs
+    } else if (!usageMeta.isSession) {
+      periodDurationMs = DAILY_WINDOW_MS
+    }
+
+    return {
+      label: usageMeta.label,
+      used,
+      total,
+      suffix: usageMeta.suffix,
+      resetsAt,
+      periodDurationMs,
+    }
+  }
+
+  function pickGlobalSessionRemainItem(modelRemains) {
+    let fallbackItem = null
+
+    for (let i = 0; i < modelRemains.length; i += 1) {
+      const item = modelRemains[i]
+      if (!item || typeof item !== "object") continue
+
+      const total = readNumber(item.current_interval_total_count ?? item.currentIntervalTotalCount)
+      if (total === null || total <= 0) continue
+      if (!fallbackItem) fallbackItem = item
+
+      const name = normalizeUsageNameKey(readUsageRawName(item))
+      if (isSessionUsageName(name)) return item
+    }
+
+    return fallbackItem
+  }
+
+  function orderRemainItemsForDisplay(modelRemains, endpointSelection) {
+    if (!Array.isArray(modelRemains) || modelRemains.length === 0) return []
+
+    const ordered = []
+    const sessionItem =
+      endpointSelection === "GLOBAL" ? pickGlobalSessionRemainItem(modelRemains) : null
+    if (sessionItem) ordered.push(sessionItem)
+
+    for (let i = 0; i < modelRemains.length; i += 1) {
+      const item = modelRemains[i]
+      if (!item || typeof item !== "object") continue
+      if (sessionItem && item === sessionItem) continue
+      ordered.push(item)
+    }
+
+    return ordered
+  }
+
   function parsePayloadShape(ctx, payload, endpointSelection) {
     if (!payload || typeof payload !== "object") return null
 
@@ -244,69 +526,19 @@
 
     if (!modelRemains || modelRemains.length === 0) return null
 
-    let chosen = modelRemains[0]
-    for (let i = 0; i < modelRemains.length; i += 1) {
-      const item = modelRemains[i]
-      if (!item || typeof item !== "object") continue
-      const total = readNumber(item.current_interval_total_count ?? item.currentIntervalTotalCount)
-      if (total !== null && total > 0) {
-        chosen = item
-        break
-      }
-    }
-
-    if (!chosen || typeof chosen !== "object") return null
-
-    const total = readNumber(chosen.current_interval_total_count ?? chosen.currentIntervalTotalCount)
-    if (total === null || total <= 0) return null
-
-    const usageFieldCount = readNumber(chosen.current_interval_usage_count ?? chosen.currentIntervalUsageCount)
-    const remainingCount = readNumber(
-      chosen.current_interval_remaining_count ??
-        chosen.currentIntervalRemainingCount ??
-        chosen.current_interval_remains_count ??
-        chosen.currentIntervalRemainsCount ??
-        chosen.current_interval_remain_count ??
-        chosen.currentIntervalRemainCount ??
-        chosen.remaining_count ??
-        chosen.remainingCount ??
-        chosen.remains_count ??
-        chosen.remainsCount ??
-        chosen.remaining ??
-        chosen.remains ??
-        chosen.left_count ??
-        chosen.leftCount
-    )
-    // MiniMax "coding_plan/remains" commonly returns remaining prompts in current_interval_usage_count.
-    const inferredRemainingCount = remainingCount !== null ? remainingCount : usageFieldCount
-    const explicitUsed = readNumber(
-      chosen.current_interval_used_count ??
-        chosen.currentIntervalUsedCount ??
-        chosen.used_count ??
-        chosen.used
-    )
-    let used = explicitUsed
-
-    if (used === null && inferredRemainingCount !== null) used = total - inferredRemainingCount
-    if (used === null) return null
-    if (used < 0) used = 0
-    if (used > total) used = total
+    const entries = []
+    const seenLabels = Object.create(null)
+    const remainsToParse = orderRemainItemsForDisplay(modelRemains, endpointSelection)
 
-    const startMs = epochToMs(chosen.start_time ?? chosen.startTime)
-    const endMs = epochToMs(chosen.end_time ?? chosen.endTime)
-    const remainsRaw = readNumber(chosen.remains_time ?? chosen.remainsTime)
-    const nowMs = Date.now()
-    const remainsMs = inferRemainsMs(remainsRaw, endMs, nowMs)
-
-    let resetsAt = endMs !== null ? ctx.util.toIso(endMs) : null
-    if (!resetsAt && remainsMs !== null) {
-      resetsAt = ctx.util.toIso(nowMs + remainsMs)
+    for (let i = 0; i < remainsToParse.length; i += 1) {
+      const entry = parseModelRemainEntry(ctx, remainsToParse[i], endpointSelection, i)
+      if (!entry) continue
+      if (seenLabels[entry.label]) continue
+      seenLabels[entry.label] = true
+      entries.push(entry)
     }
 
-    let periodDurationMs = null
-    if (startMs !== null && endMs !== null && endMs > startMs) {
-      periodDurationMs = endMs - startMs
-    }
+    if (entries.length === 0) return null
 
     const explicitPlanName = normalizePlanName(pickFirstString([
       data.current_subscribe_title,
@@ -318,15 +550,15 @@
       payload.plan_name,
       payload.plan,
     ]))
-    const inferredPlanName = inferPlanNameFromLimit(total, endpointSelection)
+    const inferredPlanName = inferPlanNameFromSignals(
+      collectPlanInferenceSignals(modelRemains),
+      endpointSelection
+    )
     const planName = explicitPlanName || inferredPlanName
 
     return {
       planName,
-      used,
-      total,
-      resetsAt,
-      periodDurationMs,
+      entries,
     }
   }
 
@@ -362,21 +594,19 @@
       throw "MiniMax API key missing. Set MINIMAX_API_KEY or MINIMAX_CN_API_KEY."
     }
 
-    // CN API returns model call counts (needs division by 15 for prompts)
-    // GLOBAL API returns prompt counts directly
-    const isCnEndpoint = successfulEndpoint === "CN"
-    const displayMultiplier = isCnEndpoint ? 1 / MODEL_CALLS_PER_PROMPT : 1
-
-    const line = {
-      label: "Session",
-      used: Math.round(parsed.used * displayMultiplier),
-      limit: Math.round(parsed.total * displayMultiplier),
-      format: { kind: "count", suffix: "prompts" },
-    }
-    if (parsed.resetsAt) line.resetsAt = parsed.resetsAt
-    if (parsed.periodDurationMs !== null) line.periodDurationMs = parsed.periodDurationMs
+    const lines = parsed.entries.map((entry) => {
+      const line = {
+        label: entry.label,
+        used: Math.round(entry.used),
+        limit: Math.round(entry.total),
+        format: { kind: "count", suffix: entry.suffix },
+      }
+      if (entry.resetsAt) line.resetsAt = entry.resetsAt
+      if (entry.periodDurationMs !== null) line.periodDurationMs = entry.periodDurationMs
+      return ctx.line.progress(line)
+    })
 
-    const result = { lines: [ctx.line.progress(line)] }
+    const result = { lines }
     if (parsed.planName) {
       const regionLabel = successfulEndpoint === "CN" ? " (CN)" : " (GLOBAL)"
       result.plan = parsed.planName + regionLabel
diff --git a/plugins/minimax/plugin.json b/plugins/minimax/plugin.json
index f8a714aa..5ff16e7e 100644
--- a/plugins/minimax/plugin.json
+++ b/plugins/minimax/plugin.json
@@ -7,6 +7,10 @@
   "icon": "icon.svg",
   "brandColor": "#F5433C",
   "lines": [
-    { "type": "progress", "label": "Session", "scope": "overview", "primaryOrder": 1 }
+    { "type": "progress", "label": "Session", "scope": "overview", "primaryOrder": 1 },
+    { "type": "progress", "label": "Text to Speech HD", "scope": "detail" },
+    { "type": "progress", "label": "Text to Speech Turbo", "scope": "detail" },
+    { "type": "progress", "label": "Image Generation", "scope": "detail" },
+    { "type": "progress", "label": "image-01", "scope": "detail" }
   ]
 }
diff --git a/plugins/minimax/plugin.test.js b/plugins/minimax/plugin.test.js
index fa1112a1..713a8692 100644
--- a/plugins/minimax/plugin.test.js
+++ b/plugins/minimax/plugin.test.js
@@ -4,8 +4,9 @@ import { makeCtx } from "../test-helpers.js"
 const PRIMARY_USAGE_URL = "https://api.minimax.io/v1/api/openplatform/coding_plan/remains"
 const FALLBACK_USAGE_URL = "https://api.minimax.io/v1/coding_plan/remains"
 const LEGACY_WWW_USAGE_URL = "https://www.minimax.io/v1/api/openplatform/coding_plan/remains"
-const CN_PRIMARY_USAGE_URL = "https://api.minimaxi.com/v1/api/openplatform/coding_plan/remains"
-const CN_FALLBACK_USAGE_URL = "https://api.minimaxi.com/v1/coding_plan/remains"
+const CN_PRIMARY_USAGE_URL = "https://www.minimaxi.com/v1/api/openplatform/coding_plan/remains"
+const CN_FALLBACK_USAGE_URL = "https://api.minimaxi.com/v1/api/openplatform/coding_plan/remains"
+const CN_LEGACY_FALLBACK_USAGE_URL = "https://api.minimaxi.com/v1/coding_plan/remains"
 
 const loadPlugin = async () => {
   await import("./plugin.js")
@@ -179,6 +180,7 @@ describe("minimax plugin", () => {
       status: 200,
       headers: {},
      bodyText: JSON.stringify(successPayload({
+        plan_name: undefined,
         model_remains: [
           {
             model_name: "MiniMax-M2",
@@ -197,7 +199,9 @@ describe("minimax plugin", () => {
     const plugin = await loadPlugin()
     const result = plugin.probe(ctx)
 
-    expect(result.lines[0].used).toBe(20) // (1500-1200) / 15 = 20
+    expect(result.lines[0].used).toBe(300)
+    expect(result.lines[0].limit).toBe(1500)
+    expect(result.lines[0].format.suffix).toBe("model-calls")
     expect(result.plan).toBe("Plus (CN)")
     const first = ctx.host.http.request.mock.calls[0][0].url
     const last = ctx.host.http.request.mock.calls[ctx.host.http.request.mock.calls.length - 1][0].url
@@ -214,6 +218,7 @@ describe("minimax plugin", () => {
       if (req.url === LEGACY_WWW_USAGE_URL) return { status: 500, headers: {}, bodyText: "{}" }
       if (req.url === CN_PRIMARY_USAGE_URL) return { status: 401, headers: {}, bodyText: "" }
       if (req.url === CN_FALLBACK_USAGE_URL) return { status: 401, headers: {}, bodyText: "" }
+      if (req.url === CN_LEGACY_FALLBACK_USAGE_URL) return { status: 401, headers: {}, bodyText: "" }
       return { status: 404, headers: {}, bodyText: "{}" }
     })
 
@@ -230,6 +235,7 @@ describe("minimax plugin", () => {
       if (req.url === LEGACY_WWW_USAGE_URL) return { status: 401, headers: {}, bodyText: "" }
       if (req.url === CN_PRIMARY_USAGE_URL) return { status: 500, headers: {}, bodyText: "{}" }
       if (req.url === CN_FALLBACK_USAGE_URL) return { status: 500, headers: {}, bodyText: "{}" }
+      if (req.url === CN_LEGACY_FALLBACK_USAGE_URL) return { status: 500, headers: {}, bodyText: "{}" }
       return { status: 404, headers: {}, bodyText: "{}" }
     })
 
@@ -254,15 +260,15 @@ describe("minimax plugin", () => {
     const line = result.lines[0]
     expect(line.label).toBe("Session")
     expect(line.type).toBe("progress")
-    expect(line.used).toBe(120) // current_interval_usage_count is remaining
+    expect(line.used).toBe(120)
     expect(line.limit).toBe(300)
     expect(line.format.kind).toBe("count")
-    expect(line.format.suffix).toBe("prompts")
+    expect(line.format.suffix).toBe("model-calls")
     expect(line.resetsAt).toBe("2023-11-15T03:13:20.000Z")
     expect(line.periodDurationMs).toBe(18000000)
   })
 
-  it("treats current_interval_usage_count as remaining prompts", async () => {
+  it("treats current_interval_usage_count as remaining model-calls", async () => {
     const ctx = makeCtx()
     setEnv(ctx, { MINIMAX_API_KEY: "mini-key" })
     ctx.host.http.request.mockReturnValue({
@@ -285,6 +291,7 @@ describe("minimax plugin", () => {
 
     expect(result.lines[0].used).toBe(0)
     expect(result.lines[0].limit).toBe(1500)
+    expect(result.lines[0].format.suffix).toBe("model-calls")
   })
 
   it("infers Starter plan from 1500 model-call limit", async () => {
@@ -311,6 +318,293 @@ describe("minimax plugin", () => {
     expect(result.plan).toBe("Starter (GLOBAL)")
     expect(result.lines[0].used).toBe(300)
     expect(result.lines[0].limit).toBe(1500)
+    expect(result.lines[0].format.suffix).toBe("model-calls")
+  })
+
+  it("infers Plus tier from 4500 GLOBAL model-call limit", async () => {
+    const ctx = makeCtx()
+    setEnv(ctx, { MINIMAX_API_KEY: "mini-key" })
+    ctx.host.http.request.mockReturnValue({
+      status: 200,
+      headers: {},
+      bodyText: JSON.stringify({
+        base_resp: { status_code: 0 },
+        model_remains: [
+          {
+            current_interval_total_count: 4500,
+            current_interval_usage_count: 4200,
+            model_name: "MiniMax-M2.7",
+          },
+        ],
+      }),
+    })
+
+    const plugin = await loadPlugin()
+    const result = plugin.probe(ctx)
+
+    expect(result.plan).toBe("Plus (GLOBAL)")
+    expect(result.lines[0].used).toBe(300)
+    expect(result.lines[0].limit).toBe(4500)
+    expect(result.lines[0].format.suffix).toBe("model-calls")
+  })
+
+  it("infers Max tier from 15000 GLOBAL model-call limit", async () => {
+    const ctx = makeCtx()
+    setEnv(ctx, { MINIMAX_API_KEY: "mini-key" })
+    ctx.host.http.request.mockReturnValue({
+      status: 200,
+      headers: {},
+      bodyText: JSON.stringify({
+        base_resp: { status_code: 0 },
+        model_remains: [
+          {
+            current_interval_total_count: 15000,
+            current_interval_usage_count: 12000,
+            model_name: "MiniMax-M2.7",
+          },
+        ],
+      }),
+    })
+
+    const plugin = await loadPlugin()
+    const result = plugin.probe(ctx)
+
+    expect(result.plan).toBe("Max (GLOBAL)")
+    expect(result.lines[0].used).toBe(3000)
+    expect(result.lines[0].limit).toBe(15000)
+    expect(result.lines[0].format.suffix).toBe("model-calls")
+  })
+
+  it("infers GLOBAL Plus-High-Speed from companion image-01 quota", async () => {
+    const ctx = makeCtx()
+    setEnv(ctx, { MINIMAX_API_KEY: "mini-key" })
+    ctx.host.http.request.mockReturnValue({
+      status: 200,
+      headers: {},
+      bodyText: JSON.stringify({
+        base_resp: { status_code: 0 },
+        model_remains: [
+          {
+            model_name: "MiniMax-M2.7-highspeed",
+            current_interval_total_count: 4500,
+            current_interval_usage_count: 4200,
+          },
+          {
+            model_name: "image-01",
+            current_interval_total_count: 100,
+            current_interval_usage_count: 100,
+          },
+        ],
+      }),
+    })
+
+    const plugin = await loadPlugin()
+    const result = plugin.probe(ctx)
+
+    expect(result.plan).toBe("Plus-High-Speed (GLOBAL)")
+    expect(result.lines).toHaveLength(2)
+    expect(result.lines[0].label).toBe("Session")
+    expect(result.lines[0].limit).toBe(4500)
+    expect(result.lines[1]).toMatchObject({
+      label: "image-01",
+      used: 0,
+      limit: 100,
+      format: { kind: "count", suffix: "images" },
+    })
+  })
+
+  it("prefers the GLOBAL session entry when a companion bucket appears first", async () => {
+    const ctx = makeCtx()
+    setEnv(ctx, { MINIMAX_API_KEY: "mini-key" })
+    ctx.host.http.request.mockReturnValue({
+      status: 200,
+      headers: {},
+      bodyText: JSON.stringify({
+        base_resp: { status_code: 0 },
+        model_remains: [
+          {
+            model_name: "image-01",
+            current_interval_total_count: 100,
+            current_interval_usage_count: 90,
+          },
+          {
+            model_name: "MiniMax-M2.7-highspeed",
+            current_interval_total_count: 4500,
+            current_interval_usage_count: 4200,
+          },
+        ],
+      }),
+    })
+
+    const plugin = await loadPlugin()
+    const result = plugin.probe(ctx)
+
+    expect(result.plan).toBe("Plus-High-Speed (GLOBAL)")
+    expect(result.lines).toHaveLength(2)
+    expect(result.lines[0]).toMatchObject({
+      label: "Session",
+      used: 300,
+      limit: 4500,
+      format: { kind: "count", suffix: "model-calls" },
+    })
+    expect(result.lines[1]).toMatchObject({
+      label: "image-01",
+      used: 10,
+      limit: 100,
+      format: { kind: "count", suffix: "images" },
+    })
+  })
+
+  it("infers GLOBAL Max-High-Speed from companion speech quota", async () => {
+    const ctx = makeCtx()
+    setEnv(ctx, { MINIMAX_API_KEY: "mini-key" })
+    ctx.host.http.request.mockReturnValue({
+      status: 200,
+      headers: {},
+      bodyText: JSON.stringify({
+        base_resp: { status_code: 0 },
+        model_remains: [
+          {
+            model_name: "MiniMax-M2.7-highspeed",
+            current_interval_total_count: 15000,
+            current_interval_usage_count: 12000,
+          },
+          {
+            model_name: "speech-hd",
+            current_interval_total_count: 19000,
+            current_interval_usage_count: 19000,
+          },
+        ],
+      }),
+    })
+
+    const plugin = await loadPlugin()
+    const result = plugin.probe(ctx)
+
+    expect(result.plan).toBe("Max-High-Speed (GLOBAL)")
+    expect(result.lines).toHaveLength(2)
+    expect(result.lines[0].label).toBe("Session")
+    expect(result.lines[0].limit).toBe(15000)
+    expect(result.lines[1]).toMatchObject({
+      label: "Text to Speech HD",
+      used: 0,
+      limit: 19000,
+      format: { kind: "count", suffix: "chars" },
+    })
+  })
+
+  it("shows extra GLOBAL token-plan resource lines for speech-hd and image-01", async () => {
+    const ctx = makeCtx()
+    setEnv(ctx, { MINIMAX_API_KEY: "mini-key" })
+    ctx.host.http.request.mockReturnValue({
+      status: 200,
+      headers: {},
+      bodyText: JSON.stringify({
+        data: {
+          base_resp: { status_code: 0 },
+          current_subscribe_title: "Plus-High-Speed",
+          model_remains: [
+            {
+              model_name: "MiniMax-M2.7-highspeed",
+              current_interval_total_count: 4500,
+              current_interval_usage_count: 4200,
+              start_time: 1700000000000,
+              end_time: 1700018000000,
+            },
+            {
+              model_name: "speech-hd",
+              current_interval_total_count: 9000,
+              current_interval_usage_count: 7200,
+              start_time: 1700000000000,
+              end_time: 1700086400000,
+            },
+            {
+              model_name: "image-01",
+              current_interval_total_count: 100,
+              current_interval_usage_count: 80,
+              start_time: 1700000000000,
+              end_time: 1700086400000,
+            },
+          ],
+        },
+      }),
+    })
+
+    const plugin = await loadPlugin()
+    const result = plugin.probe(ctx)
+
+    expect(result.plan).toBe("Plus-High-Speed (GLOBAL)")
+    expect(result.lines).toHaveLength(3)
+    expect(result.lines[0]).toMatchObject({
+      label: "Session",
+      used: 300,
+      limit: 4500,
+      format: { kind: "count", suffix: "model-calls" },
+    })
+    expect(result.lines[1]).toMatchObject({
+      label: "Text to Speech HD",
+      used: 1800,
+      limit: 9000,
+      format: { kind: "count", suffix: "chars" },
+    })
+    expect(result.lines[2]).toMatchObject({
+      label: "image-01",
+      used: 20,
+      limit: 100,
+      format: { kind: "count", suffix: "images" },
+    })
+  })
+
+  it("uses a daily remains_time window for GLOBAL resource lines without end_time", async () => {
+    const ctx = makeCtx()
+    setEnv(ctx, { MINIMAX_API_KEY: "mini-key" })
+    vi.spyOn(Date, "now").mockReturnValue(1700000000000)
+    ctx.host.http.request.mockReturnValue({
+      status: 200,
+      headers: {},
+      bodyText: JSON.stringify({
+        data: {
+          base_resp: { status_code: 0 },
+          current_subscribe_title: "Plus-High-Speed",
+          model_remains: [
+            {
+              model_name: "MiniMax-M2.7-highspeed",
+              current_interval_total_count: 4500,
+              current_interval_usage_count: 4200,
+              start_time: 1700000000000,
+              end_time: 1700018000000,
+            },
+            {
+              model_name: "speech-hd",
+              current_interval_total_count: 9000,
+              current_interval_usage_count: 7200,
+              remains_time: 86400,
+            },
+            {
+              model_name: "image-01",
+              current_interval_total_count: 100,
+              current_interval_usage_count: 80,
+              remains_time: 86400,
+            },
+          ],
+        },
+      }),
+    })
+
+    const plugin = await loadPlugin()
+    const result = plugin.probe(ctx)
+    const expectedReset = new Date(1700000000000 + 86400 * 1000).toISOString()
+
+    expect(result.lines[1]).toMatchObject({
+      label: "Text to Speech HD",
+      resetsAt: expectedReset,
+      periodDurationMs: 86400000,
+    })
+    expect(result.lines[2]).toMatchObject({
+      label: "image-01",
+      resetsAt: expectedReset,
+      periodDurationMs: 86400000,
+    })
   })
 
   it("does not fallback to model name when plan cannot be inferred", async () => {
@@ -336,6 +630,7 @@ describe("minimax plugin", () => {
 
     expect(result.plan).toBeUndefined()
     expect(result.lines[0].used).toBe(337)
+    expect(result.lines[0].format.suffix).toBe("model-calls")
   })
 
   it("supports nested payload and remains_time reset fallback", async () => {
@@ -368,6 +663,7 @@ describe("minimax plugin", () => {
     expect(result.plan).toBe("Max (GLOBAL)")
     expect(line.used).toBe(60)
     expect(line.limit).toBe(100)
+    expect(line.format.suffix).toBe("model-calls")
     expect(line.resetsAt).toBe(expectedReset)
   })
 
@@ -398,6 +694,7 @@ describe("minimax plugin", () => {
 
     expect(line.used).toBe(45)
     expect(line.limit).toBe(100)
+    expect(line.format.suffix).toBe("model-calls")
     expect(line.resetsAt).toBe(new Date(1700000000000 + 300000).toISOString())
   })
 
@@ -427,6 +724,7 @@ describe("minimax plugin", () => {
     expect(result.plan).toBe("Pro (GLOBAL)")
     expect(line.used).toBe(180)
     expect(line.limit).toBe(300)
+    expect(line.format.suffix).toBe("model-calls")
   })
 
   it("throws on HTTP auth status", async () => {
@@ -441,7 +739,7 @@ describe("minimax plugin", () => {
       message = String(e)
     }
     expect(message).toContain("Session expired")
-    expect(ctx.host.http.request.mock.calls.length).toBe(5)
+    expect(ctx.host.http.request.mock.calls.length).toBe(6)
   })
 
   it("falls back to secondary endpoint when primary fails", async () => {
@@ -463,6 +761,7 @@ describe("minimax plugin", () => {
     const result = plugin.probe(ctx)
 
     expect(result.lines[0].used).toBe(120)
+    expect(result.lines[0].format.suffix).toBe("model-calls")
    expect(ctx.host.http.request.mock.calls.length).toBe(2)
   })
 
@@ -494,7 +793,9 @@ describe("minimax plugin", () => {
     const plugin = await loadPlugin()
     const result = plugin.probe(ctx)
 
-    expect(result.lines[0].used).toBe(20) // (1500-1200) / 15 = 20
+    expect(result.lines[0].used).toBe(300)
+    expect(result.lines[0].limit).toBe(1500)
+    expect(result.lines[0].format.suffix).toBe("model-calls")
expect(ctx.host.http.request.mock.calls.length).toBe(2) expect(ctx.host.http.request.mock.calls[0][0].url).toBe(CN_PRIMARY_USAGE_URL) expect(ctx.host.http.request.mock.calls[1][0].url).toBe(CN_FALLBACK_USAGE_URL) @@ -526,11 +827,158 @@ describe("minimax plugin", () => { const result = plugin.probe(ctx) expect(result.plan).toBe("Starter (CN)") - expect(result.lines[0].limit).toBe(40) // 600 / 15 = 40 prompts - expect(result.lines[0].used).toBe(7) // (600-500) / 15 = 6.67 ≈ 7 + expect(result.lines[0].limit).toBe(600) + expect(result.lines[0].used).toBe(100) + expect(result.lines[0].format.suffix).toBe("model-calls") + }) + + it("keeps raw CN session counts when explicit plan metadata is present", async () => { + const ctx = makeCtx() + setEnv(ctx, { MINIMAX_CN_API_KEY: "cn-key" }) + ctx.host.http.request.mockReturnValue({ + status: 200, + headers: {}, + bodyText: JSON.stringify({ + base_resp: { status_code: 0 }, + plan_name: "Plus", + model_remains: [ + { + model_name: "MiniMax-M2.5", + current_interval_total_count: 100, + current_interval_usage_count: 70, + start_time: 1700000000000, + end_time: 1700018000000, + }, + ], + }), + }) + + const plugin = await loadPlugin() + const result = plugin.probe(ctx) + + expect(result.plan).toBe("Plus (CN)") + expect(result.lines).toHaveLength(1) + expect(result.lines[0].label).toBe("Session") + expect(result.lines[0].limit).toBe(100) + expect(result.lines[0].used).toBe(30) + expect(result.lines[0].format.suffix).toBe("model-calls") }) - it("infers CN Plus plan from 1500 model-call limit", async () => { + it("shows extra CN token-plan resource lines for speech-hd and image-01", async () => { + const ctx = makeCtx() + setEnv(ctx, { MINIMAX_CN_API_KEY: "cn-key" }) + ctx.host.http.request.mockReturnValue({ + status: 200, + headers: {}, + bodyText: JSON.stringify({ + data: { + base_resp: { status_code: 0 }, + current_subscribe_title: "Plus", + model_remains: [ + { + model_name: "MiniMax-M2.5", + current_interval_total_count: 100, + 
current_interval_usage_count: 70, + start_time: 1700000000000, + end_time: 1700018000000, + }, + { + model_name: "speech-hd", + current_interval_total_count: 4000, + current_interval_usage_count: 3200, + start_time: 1700000000000, + end_time: 1700086400000, + }, + { + model_name: "image-01", + current_interval_total_count: 50, + current_interval_usage_count: 40, + start_time: 1700000000000, + end_time: 1700086400000, + }, + ], + }, + }), + }) + + const plugin = await loadPlugin() + const result = plugin.probe(ctx) + + expect(result.plan).toBe("Plus (CN)") + expect(result.lines).toHaveLength(3) + expect(result.lines[0]).toMatchObject({ + label: "Session", + used: 30, + limit: 100, + format: { kind: "count", suffix: "model-calls" }, + }) + expect(result.lines[1]).toMatchObject({ + label: "Text to Speech HD", + used: 800, + limit: 4000, + format: { kind: "count", suffix: "chars" }, + }) + expect(result.lines[2]).toMatchObject({ + label: "image-01", + used: 10, + limit: 50, + format: { kind: "count", suffix: "images" }, + }) + }) + + it("uses a daily remains_time window for CN resource lines without end_time", async () => { + const ctx = makeCtx() + setEnv(ctx, { MINIMAX_CN_API_KEY: "cn-key" }) + vi.spyOn(Date, "now").mockReturnValue(1700000000000) + ctx.host.http.request.mockReturnValue({ + status: 200, + headers: {}, + bodyText: JSON.stringify({ + data: { + base_resp: { status_code: 0 }, + current_subscribe_title: "Plus", + model_remains: [ + { + model_name: "MiniMax-M2.5", + current_interval_total_count: 100, + current_interval_usage_count: 70, + start_time: 1700000000000, + end_time: 1700018000000, + }, + { + model_name: "speech-hd", + current_interval_total_count: 4000, + current_interval_usage_count: 3200, + remains_time: 86400, + }, + { + model_name: "image-01", + current_interval_total_count: 50, + current_interval_usage_count: 40, + remains_time: 86400, + }, + ], + }, + }), + }) + + const plugin = await loadPlugin() + const result = plugin.probe(ctx) + const 
expectedReset = new Date(1700000000000 + 86400 * 1000).toISOString() + + expect(result.lines[1]).toMatchObject({ + label: "Text to Speech HD", + resetsAt: expectedReset, + periodDurationMs: 86400000, + }) + expect(result.lines[2]).toMatchObject({ + label: "image-01", + resetsAt: expectedReset, + periodDurationMs: 86400000, + }) + }) + + it("infers Plus tier from 1500 CN model-call limit", async () => { const ctx = makeCtx() setEnv(ctx, { MINIMAX_CN_API_KEY: "cn-key" }) ctx.host.http.request.mockReturnValue({ @@ -556,11 +1004,12 @@ describe("minimax plugin", () => { const result = plugin.probe(ctx) expect(result.plan).toBe("Plus (CN)") - expect(result.lines[0].limit).toBe(100) // 1500 / 15 = 100 prompts - expect(result.lines[0].used).toBe(20) // (1500-1200) / 15 = 20 + expect(result.lines[0].limit).toBe(1500) + expect(result.lines[0].used).toBe(300) + expect(result.lines[0].format.suffix).toBe("model-calls") }) - it("infers CN Max plan from 4500 model-call limit", async () => { + it("infers Max tier from 4500 CN model-call limit", async () => { const ctx = makeCtx() setEnv(ctx, { MINIMAX_CN_API_KEY: "cn-key" }) ctx.host.http.request.mockReturnValue({ @@ -586,8 +1035,160 @@ describe("minimax plugin", () => { const result = plugin.probe(ctx) expect(result.plan).toBe("Max (CN)") - expect(result.lines[0].limit).toBe(300) // 4500 / 15 = 300 prompts - expect(result.lines[0].used).toBe(120) // (4500-2700) / 15 = 120 + expect(result.lines[0].limit).toBe(4500) + expect(result.lines[0].used).toBe(1800) + expect(result.lines[0].format.suffix).toBe("model-calls") + }) + + it("infers CN Plus-High-Speed from companion image-01 quota", async () => { + const ctx = makeCtx() + setEnv(ctx, { MINIMAX_CN_API_KEY: "cn-key" }) + ctx.host.http.request.mockReturnValue({ + status: 200, + headers: {}, + bodyText: JSON.stringify({ + base_resp: { status_code: 0 }, + model_remains: [ + { + model_name: "MiniMax-M*", + current_interval_total_count: 1500, + current_interval_usage_count: 1466, + }, 
+ { + model_name: "image-01", + current_interval_total_count: 100, + current_interval_usage_count: 100, + }, + ], + }), + }) + + const plugin = await loadPlugin() + const result = plugin.probe(ctx) + + expect(result.plan).toBe("Plus-High-Speed (CN)") + expect(result.lines[0].label).toBe("Session") + expect(result.lines[0].limit).toBe(1500) + }) + + it("infers CN Max-High-Speed from companion speech quota", async () => { + const ctx = makeCtx() + setEnv(ctx, { MINIMAX_CN_API_KEY: "cn-key" }) + ctx.host.http.request.mockReturnValue({ + status: 200, + headers: {}, + bodyText: JSON.stringify({ + base_resp: { status_code: 0 }, + model_remains: [ + { + model_name: "MiniMax-M*", + current_interval_total_count: 4500, + current_interval_usage_count: 4000, + }, + { + model_name: "speech-hd", + current_interval_total_count: 19000, + current_interval_usage_count: 19000, + }, + ], + }), + }) + + const plugin = await loadPlugin() + const result = plugin.probe(ctx) + + expect(result.plan).toBe("Max-High-Speed (CN)") + expect(result.lines[0].label).toBe("Session") + expect(result.lines[0].limit).toBe(4500) + }) + + it("falls back to the coarse CN tier when companion quotas conflict", async () => { + const ctx = makeCtx() + setEnv(ctx, { MINIMAX_CN_API_KEY: "cn-key" }) + ctx.host.http.request.mockReturnValue({ + status: 200, + headers: {}, + bodyText: JSON.stringify({ + base_resp: { status_code: 0 }, + model_remains: [ + { + model_name: "MiniMax-M*", + current_interval_total_count: 1500, + current_interval_usage_count: 1400, + }, + { + model_name: "speech-hd", + current_interval_total_count: 9000, + current_interval_usage_count: 9000, + }, + { + model_name: "image-01", + current_interval_total_count: 50, + current_interval_usage_count: 50, + }, + { + model_name: "speech-2.8-turbo", + current_interval_total_count: 8000, + current_interval_usage_count: 7900, + }, + { + model_name: "Image Generation", + current_interval_total_count: 25, + current_interval_usage_count: 24, + }, + ], + 
}), + }) + + const plugin = await loadPlugin() + const result = plugin.probe(ctx) + + expect(result.plan).toBe("Plus (CN)") + expect(result.lines).toHaveLength(5) + expect(result.lines[1]).toMatchObject({ + label: "Text to Speech HD", + format: { kind: "count", suffix: "chars" }, + }) + expect(result.lines[3]).toMatchObject({ + label: "Text to Speech Turbo", + format: { kind: "count", suffix: "chars" }, + }) + expect(result.lines[4]).toMatchObject({ + label: "Image Generation", + format: { kind: "count", suffix: "images" }, + }) + }) + + it("normalizes CN explicit high-speed plan labels to the shared six-plan naming", async () => { + const ctx = makeCtx() + setEnv(ctx, { MINIMAX_CN_API_KEY: "cn-key" }) + ctx.host.http.request.mockReturnValue({ + status: 200, + headers: {}, + bodyText: JSON.stringify({ + data: { + base_resp: { status_code: 0 }, + current_subscribe_title: "Plus-极速版", + model_remains: [ + { + model_name: "MiniMax-M2.5-highspeed", + current_interval_total_count: 1500, + current_interval_usage_count: 1200, + start_time: 1700000000000, + end_time: 1700018000000, + }, + ], + }, + }), + }) + + const plugin = await loadPlugin() + const result = plugin.probe(ctx) + + expect(result.plan).toBe("Plus-High-Speed (CN)") + expect(result.lines[0].limit).toBe(1500) + expect(result.lines[0].used).toBe(300) + expect(result.lines[0].format.suffix).toBe("model-calls") }) it("does not infer CN plan for unknown CN model-call limits", async () => { @@ -616,8 +1217,9 @@ describe("minimax plugin", () => { const result = plugin.probe(ctx) expect(result.plan).toBeUndefined() - expect(result.lines[0].limit).toBe(600) // 9000 / 15 = 600 prompts - expect(result.lines[0].used).toBe(200) // (9000-6000) / 15 = 200 prompts + expect(result.lines[0].limit).toBe(9000) + expect(result.lines[0].used).toBe(3000) + expect(result.lines[0].format.suffix).toBe("model-calls") }) it("falls back when primary returns auth-like status", async () => { @@ -640,6 +1242,7 @@ describe("minimax plugin", 
() => { const result = plugin.probe(ctx) expect(result.lines[0].used).toBe(120) + expect(result.lines[0].format.suffix).toBe("model-calls") expect(ctx.host.http.request.mock.calls.length).toBe(2) }) @@ -694,6 +1297,28 @@ describe("minimax plugin", () => { const plugin = await loadPlugin() const result = plugin.probe(ctx) expect(result.lines[0].used).toBe(120) + expect(result.lines[0].limit).toBe(300) + expect(result.lines[0].format.suffix).toBe("model-calls") + }) + + it("falls back to GLOBAL when MINIMAX_CN_API_KEY lookup throws in AUTO mode", async () => { + const ctx = makeCtx() + ctx.host.env.get.mockImplementation((name) => { + if (name === "MINIMAX_CN_API_KEY") throw new Error("cn env unavailable") + if (name === "MINIMAX_API_KEY") return "global-key" + return null + }) + ctx.host.http.request.mockReturnValue({ + status: 200, + headers: {}, + bodyText: JSON.stringify(successPayload()), + }) + + const plugin = await loadPlugin() + const result = plugin.probe(ctx) + + expect(ctx.host.http.request.mock.calls[0][0].url).toBe(PRIMARY_USAGE_URL) + expect(result.plan).toBe("Plus (GLOBAL)") }) it("supports camelCase modelRemains and explicit used count fields", async () => { @@ -813,6 +1438,7 @@ describe("minimax plugin", () => { expect(result.plan).toBe("Team (GLOBAL)") expect(result.lines[0].used).toBe(180) expect(result.lines[0].limit).toBe(300) + expect(result.lines[0].format.suffix).toBe("model-calls") }) it("clamps negative used counts to zero", async () => { @@ -835,6 +1461,7 @@ describe("minimax plugin", () => { const plugin = await loadPlugin() const result = plugin.probe(ctx) expect(result.lines[0].used).toBe(0) + expect(result.lines[0].format.suffix).toBe("model-calls") }) it("clamps used counts above total", async () => { @@ -857,6 +1484,7 @@ describe("minimax plugin", () => { const plugin = await loadPlugin() const result = plugin.probe(ctx) expect(result.lines[0].used).toBe(100) + expect(result.lines[0].format.suffix).toBe("model-calls") }) it("supports 
epoch seconds for start/end timestamps", async () => { @@ -909,6 +1537,55 @@ describe("minimax plugin", () => { expect(result.lines[0].resetsAt).toBe(new Date(1700000000000 + 300000).toISOString()) }) + it("prefers milliseconds remains_time when end_time makes it a closer match", async () => { + const ctx = makeCtx() + setEnv(ctx, { MINIMAX_API_KEY: "mini-key" }) + vi.spyOn(Date, "now").mockReturnValue(1700000000000) + ctx.host.http.request.mockReturnValue({ + status: 200, + headers: {}, + bodyText: JSON.stringify({ + base_resp: { status_code: 0 }, + model_remains: [ + { + current_interval_total_count: 100, + current_interval_usage_count: 40, + remains_time: 300000, + end_time: 1700000300000, + }, + ], + }), + }) + + const plugin = await loadPlugin() + const result = plugin.probe(ctx) + expect(result.lines[0].resetsAt).toBe(new Date(1700000300000).toISOString()) + }) + + it("uses overflow comparison when remains_time exceeds the expected window", async () => { + const ctx = makeCtx() + setEnv(ctx, { MINIMAX_API_KEY: "mini-key" }) + vi.spyOn(Date, "now").mockReturnValue(1700000000000) + ctx.host.http.request.mockReturnValue({ + status: 200, + headers: {}, + bodyText: JSON.stringify({ + base_resp: { status_code: 0 }, + model_remains: [ + { + current_interval_total_count: 100, + current_interval_usage_count: 40, + remains_time: 20000000, + }, + ], + }), + }) + + const plugin = await loadPlugin() + const result = plugin.probe(ctx) + expect(result.lines[0].resetsAt).toBe(new Date(1700000000000 + 20000000).toISOString()) + }) + it("throws parse error when model_remains entries are unusable", async () => { const ctx = makeCtx() setEnv(ctx, { MINIMAX_API_KEY: "mini-key" })