Conversation
* fix(logs): improve fake 200 error logs (#765) * fix(proxy): 识别 200+HTML 假200并触发故障转移 * fix(utils): 收紧 HTML 文档识别避免误判 * fix(proxy): 非流式假200补齐强信号 JSON error 检测 * fix(utils): 假200检测兼容 BOM * perf(proxy): 降低非流式嗅探读取上限 * fix(proxy): 客户端隐藏 FAKE_200_* 内部码 * fix(logs): 补齐 endpoint_pool_exhausted/404 错因展示 - endpoint_pool_exhausted 写入 attemptNumber,避免被 initial_selection/session_reuse 去重吞掉\n- 决策链/技术时间线补齐 resource_not_found 的失败态与说明\n- 更新 provider-chain i18n 文案并新增单测覆盖 * fix(proxy): 非流式 JSON 假200检测覆盖 Content-Length - 对 application/json 且 Content-Length<=32KiB 的 2xx 响应也做强信号嗅探\n- 补齐 200+JSON error(带 Content-Length)触发故障转移的回归测试 * chore: format code (fix-issue-749-fake-200-html-detection-005fad3) * fix(i18n): 修正 ru 端点池耗尽文案 - 修正俄语中 endpoint 的复数属格拼写(конечных точек)\n- 不影响 key,仅更新展示文案 * test(formatter): 补齐 resource_not_found 组合场景覆盖 - 覆盖 resource_not_found + retry_success 多供应商链路\n- 覆盖缺少 errorDetails.provider 的降级渲染路径 * fix(proxy): FAKE_200 客户端提示附带脱敏片段 * fix: 改进 FAKE_200 错误原因提示 * fix(proxy): verboseProviderError 回传假200原文 - fake-200/空响应:verboseProviderError 开启时在 error.details 返回详细报告与上游原文(不落库)\n- forwarder: 将检测到的原文片段挂到 ProxyError.upstreamError.rawBody\n- tests: 覆盖 verbose details 与 rawBody 透传 * fix(proxy): 强化 Content-Length 校验与假200片段防泄露 - forwarder: 将非法 Content-Length 视为无效,避免漏检 HTML/空响应\n- errors: FAKE_200 客户端 detail 二次截断 + 轻量脱敏(防御性)\n- tests: 覆盖非法 Content-Length 漏检回归 * docs(proxy): 说明非流式假200检测上限 * docs(settings): 补充 verboseProviderError 安全提示 * fix(proxy): verboseProviderError rawBody 基础脱敏 * chore: format code (fix-issue-749-fake-200-html-detection-b56b790) * docs(settings): 说明 verboseProviderError 基础脱敏 * fix(proxy/logs): 假200 推断状态码并显著标记 * fix(i18n): 回退 verboseProviderErrorDesc 原始文案 * fix(stream): 404 资源不存在不计入熔断 --------- Co-authored-by: tesgth032 <tesgth032@users.noreply.github.com> Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com> Co-authored-by: Ding <44717411+ding113@users.noreply.github.com> * fix: add missing import for 
inferUpstreamErrorStatusCodeFromText The function inferUpstreamErrorStatusCodeFromText was used in response-handler.ts but was not imported, causing a TypeScript compilation error during build. Fixed: - Added inferUpstreamErrorStatusCodeFromText to imports from @/lib/utils/upstream-error-detection CI Run: https://github.com/ding113/claude-code-hub/actions/runs/22033028838 * fix(proxy): deduplicate getFake200ReasonKey and strengthen client-facing sanitization Extract duplicated getFake200ReasonKey() from SummaryTab and ProviderChainPopover into a shared fake200-reason.ts utility, eliminating the risk of silent drift when new FAKE_200_* codes are added. Replace the 3-pattern manual sanitization in getClientSafeMessage() with the existing sanitizeErrorTextForDetail() (6 patterns), closing a gap where JWT tokens, emails, and password/config paths could leak to clients via the FAKE_200 error detail path. Add unit tests verifying JWT, email, and password sanitization. * fix(proxy): address bugbot review comments on fake-200 error handling - Add i18n for HTTP status prefix in LogicTraceTab (5 languages) - Wrap verbose details gathering in try-catch to prevent cascading failures - Truncate rawBody to 4096 chars before sanitization in error-handler - Tighten not_found regex to require contextual prefixes, preventing false 404 inference - Add debug logging to silent catch blocks in readResponseTextUpTo - Add test assertion for fake200DetectedReason display --------- Co-authored-by: tesgth032 <tesgth032@hotmail.com> Co-authored-by: tesgth032 <tesgth032@users.noreply.github.com> Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com> Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
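The HTML fake-200 detection described above can be sketched as follows. This is a hypothetical reduction: the function names, the BOM handling, and the document-marker list are illustrative, and the real detector in upstream-error-detection is stricter.

```typescript
// Illustrative sketch of the fake-200 heuristic; not the real implementation.
const BOM = "\uFEFF";

/** Strip a UTF-8 BOM so detection still works on BOM-prefixed bodies. */
function stripBom(text: string): string {
  return text.startsWith(BOM) ? text.slice(1) : text;
}

/**
 * Returns true when a 2xx response body looks like an HTML document
 * (e.g. a gateway error page) rather than the expected API payload.
 * Matching is tightened to document-level markers so payloads that merely
 * *contain* HTML fragments are not flagged.
 */
function looksLikeFake200Html(status: number, bodyPrefix: string): boolean {
  if (status < 200 || status >= 300) return false; // only 2xx can be "fake 200"
  const head = stripBom(bodyPrefix).trimStart().toLowerCase();
  return head.startsWith("<!doctype html") || head.startsWith("<html");
}
```

A non-2xx HTML body is a real upstream error and goes through the normal error path; only a 2xx masquerading as success needs the failover trigger.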
* refactor(provider): improve provider page performance (#782)
* fix: batch endpoint statistics and probe logs on the Providers admin page
* perf: optimize provider statistics and my-usage query performance
* perf: remove the refresh amplifier on the Providers admin page and lazy-load endpoint sections
* fix: follow up on review, harden Providers batching and statistics
* fix: follow up on CodeRabbit, fix in-view and probe data validation
* perf: stabilize in-view handling and restore batch 404 recovery
* perf: reduce DB round-trips for my-usage quota/summary
* perf(providers): batch circuit-breaker queries on the endpoint-pool hot path and add index migrations (#779)
  - runtime endpoint selection and strict audit statistics now read endpoint circuit state in batch, reducing Redis round-trips
  - probe writes are silently ignored when an endpoint is deleted concurrently, so FK failures no longer abort the task
  - new index migrations: idx_provider_endpoints_pick_enabled / idx_providers_vendor_type_url_active
  - the repository batch-query module is now server-only to avoid accidental exposure as a Server Action
* fix: follow up on review, dedupe circuit-breaker reset and scanEnd (#779)
* fix: precise circuit-breaker reset + server-only repo (#779)
* fix: add sessionId/warmup filters to my-usage (#779)
* perf: more robust in-flight dedup for provider statistics (#779)
* fix: ProviderForm invalidates related caches consistently (#779)
* fix: Providers/Usage detail fixes and test coverage (#779)
* style: complete biome formatting (#779)
* fix(#779): circuit state sync and probeLogs batch-query improvements
* fix(#781): clean up orphan endpoints and fix Endpoint Health
* perf: optimize usage logs and endpoint sync (#779/#781)
* refactor: remove redundant endpoint filtering (#779)
* fix: batch circuit-state query covers enabled endpoints only (#779)
* fix: provider statistics tolerate dirty data and stabilize probe-log ordering (#779)
* perf: disable window-focus auto refresh for heavy Providers queries (#779)
* fix: periodic multi-instance circuit-state sync, and fix backfill of leftover soft-deleted endpoints (#779/#781)
* perf: probe scheduler only probes endpoints of enabled providers (#781)
* perf: ProviderForm avoids duplicate refetches and stabilizes the hover circuit key (#779)
* perf: global QueryClient policy and usage/user index optimization (#779)
* perf: timezone statistics index hits and batch-delete optimization (#779)
* perf: reduce wasted recomputation on the logs/users pages
* fix(provider): endpoint pool considers enabled providers only
  - sync/backfill/delete: reference checks and backfill only consider is_enabled=true providers, so a disabled provider cannot revive old endpoints
  - updateProvider: ensure endpoints exist when a provider goes from disabled to enabled
  - Dashboard Endpoint Health: avoid concurrent refreshes overwriting user switches; derive vendor/type from enabled providers only
  - probe logs batch API: partial 404s during rolling deploys no longer disable the whole batch
  - update endpoint-selector unit tests to match the findEnabled* semantics
* perf: lightweight Dashboard vendor/type queries and parallel usage-log queries
* fix(migrate): serialize migrations with an advisory lock and remove emoji logs
* fix: endpoint hover fallback and normalized batch probe-logs SQL
* perf(settings/providers): reduce redundant refreshes and reuse endpoint/circuit caches
* perf(probe/statistics): fix probe locking/counting and consolidate statistics and usage scans
* perf(probe/ui): optimize the probe target-selection SQL and reduce sparkline flicker
* fix(db): repair the Drizzle snapshot chain
* fix(perf): harden Providers batching and cache consistency
  - provider statistics: eliminate an implicit cross join and consolidate in-flight cleanup; deleteProvidersBatch reduces in-transaction round-trips
  - Providers hover: micro-batch per QueryClient with AbortSignal support, reducing crosstalk and potential leaks
  - probe/circuit/cache: probe target query switched to a join; update counter fields during Redis sync; statistics cache keeps FIFO semantics
  - My Usage: userBreakdown adds 5m/1h cache aggregation columns (not yet shown in the UI)
* chore: format code (issue-779-provider-performance-23b338e)
* chore: trigger a CI rerun
* fix(provider): fill the endpoint pool on batch enable
  - batchUpdateProviders goes through updateProvidersBatch; when providers are batch-enabled from disabled, best-effort insert the missing provider_endpoints records
  - avoids history/races leaving no usable endpoint under the strict endpoint policy after enabling
* fix(perf): rein in Providers refresh amplification and optimize probing/pagination
* perf: consolidate availability/probe polling and optimize my-usage (#779/#781)
  - AvailabilityDashboard: suppress overlapping/out-of-order refreshes; throttled hard refresh on foreground/background switches
  - probe scheduler/cleanup: idle DB poll + lock renewal, reducing pointless scans and concurrent cleanup
  - endpoint circuit: throttle Redis sync (1s)
  - My Usage: merge key/user breakdown into a single aggregation
  - DB: new message_request key+model/endpoint partial-index migration; fix journal monotonicity validation and self-heal the migration table created_at
* fix(ui): restore the global react-query defaults
* fix(availability): clear the stale endpoint selection when vendors refresh
* perf: harden Providers probing and Usage Logs performance
* perf(ui): useInViewOnce shares an IntersectionObserver to reduce resource usage
  - reuse an observer pool keyed by (root+options), reducing observer instances in long lists / large tables
  - add unit test coverage (test-env passthrough + sharing/release semantics)
* perf: optimize the providers batch where clause and fix sparkline degraded concurrency
* perf: my-usage breakdown adds cache fields and optimizes filter caching
* perf: optimize endpoint-circuit Redis load and probe candidates
* fix(#781): Endpoint Health shows only endpoints referenced by enabled providers
* fix: correct endpoint-health filtering and harden URL parsing
* docs(provider-endpoints): document the keepPreviousWhenReferenced semantics
* perf(availability): throttle EndpointTab refresh on foreground/background switches
* docs(availability): add a comment on EndpointTab refresh throttling
* chore(review): add comments per AI review and tidy details
* fix: correct DST day boundaries in the provider statistics SQL
--------- Co-authored-by: tesgth032 <tesgth032@users.noreply.github.com> Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
* refactor: consolidate migrations, extract shared utilities, fix bugbot issues Merge 6 index migrations (0068-0073) into single
idempotent migration. Extract reusable utilities from duplicated code across the codebase: - TTLMap<K,V>: generic LRU+TTL cache replacing 3 inline implementations - createAbortError: shared abort error factory from 2 components - startLeaderLockKeepAlive: shared leader lock renewal from 2 schedulers - ProbeLogsBatcher: data-fetching infra extracted from sparkline component - buildUsageLogConditions: shared SQL filter builder from 3 query functions Additional cleanup: - Simplify useInViewOnce hook (remove unused options, keep shared observer pool) - Remove dead code (sumKeyTotalCostById, unexport internal types) - Hardcode env var defaults (ENDPOINT_CIRCUIT_HEALTH_CACHE_MAX_SIZE, ENDPOINT_PROBE_IDLE_DB_POLL_INTERVAL_MS) - Fix in-flight dedup race condition in getProviderStatistics - Fix yesterday/today interval boundary inconsistency (lte -> lt) - Add NaN guard for limitPerEndpoint in batch probe logs - Add updatedAt to deleteProvider for audit consistency - Log swallowed flush() errors in batchers instead of silently catching * fix: resolve loading state reset and advisory lock client close errors Remove silent option guard so vendor loading state always resets when the request completes, preventing stale loading indicators. Wrap advisory lock client.end() in try-catch to avoid unhandled errors during connection teardown. --------- Co-authored-by: tesgth032 <tesgth032@hotmail.com> Co-authored-by: tesgth032 <tesgth032@users.noreply.github.com> Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
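The TTLMap<K,V> utility mentioned above might look like the following minimal sketch, assuming a Map-backed LRU with lazy expiry. The constructor shape, defaults, and the injected clock are illustrative, not the actual API.

```typescript
// Sketch of a generic LRU+TTL cache; `now` is injectable for testability.
class TTLMap<K, V> {
  private store = new Map<K, { value: V; expiresAt: number }>();

  constructor(
    private maxSize: number,
    private ttlMs: number,
    private now: () => number = Date.now,
  ) {}

  get(key: K): V | undefined {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (entry.expiresAt <= this.now()) {
      this.store.delete(key); // lazily drop expired entries on access
      return undefined;
    }
    // Re-insert to mark as most recently used (Map preserves insertion order).
    this.store.delete(key);
    this.store.set(key, entry);
    return entry.value;
  }

  set(key: K, value: V): void {
    this.store.delete(key);
    if (this.store.size >= this.maxSize) {
      // Evict the least recently used entry (first in insertion order).
      const oldest = this.store.keys().next();
      if (!oldest.done) this.store.delete(oldest.value);
    }
    this.store.set(key, { value, expiresAt: this.now() + this.ttlMs });
  }

  get size(): number {
    return this.store.size;
  }
}
```

Relying on Map's insertion order keeps the implementation dependency-free, which is the usual reason such a utility can replace several inline copies.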
… endpoint id to integer Interval additions/subtractions in template literals lacked grouping parentheses, causing incorrect evaluation when composed with AT TIME ZONE. The VALUES list for endpoint IDs also needed an explicit integer cast to avoid type ambiguity in the LATERAL join.
…nd integer cast bugs Cover three PostgreSQL runtime errors caused by operator precedence and type inference issues in raw SQL expressions: - Leaderboard date conditions missing parentheses around INTERVAL arithmetic before AT TIME ZONE, triggering pg_catalog.timezone error - Overview comparison queries with the same parenthesization problem on yesterdayStartLocal / yesterdayEndLocal expressions - Provider endpoints batch CTE VALUES inferred as text, causing "integer = text" mismatch on LATERAL join; validated ::integer cast
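The two classes of bug can be illustrated by how the SQL text is assembled. The helper names and the parameter placeholder below are hypothetical; only the parenthesization and the ::integer cast reflect the actual fixes.

```typescript
// Without the outer parentheses, AT TIME ZONE binds more tightly than the
// interval subtraction, producing the pg_catalog.timezone error described
// above. Parenthesizing the arithmetic makes the expression unambiguous.
function localDayStartExpr(tzParam: string, daysAgo: number): string {
  return `(date_trunc('day', now() AT TIME ZONE ${tzParam}) - INTERVAL '${daysAgo} days') AT TIME ZONE ${tzParam}`;
}

// A bare VALUES list of ids is inferred as text; the explicit cast keeps the
// LATERAL join against an integer column from failing with "integer = text".
function endpointIdValuesList(ids: number[]): string {
  return ids.map((id) => `(${id}::integer)`).join(", ");
}
```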
…nses On tee'd ReadableStreams backed by push-mode Node streams, awaiting reader.cancel() in readResponseTextUpTo can block indefinitely when the other branch has not started consuming yet. Switch to fire-and-forget cancel to unblock the main request path.
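A sketch of the fire-and-forget cancel under the assumptions above. The names are illustrative and the real readResponseTextUpTo differs in detail (e.g. it logs swallowed errors); the key point is that cancel() is not awaited.

```typescript
// Drain at most maxBytes from a stream, then release the reader without
// awaiting cancel(), so a tee'd push-mode stream whose sibling branch is
// idle cannot block the request path.
async function readUpTo(
  stream: ReadableStream<Uint8Array>,
  maxBytes: number,
): Promise<string> {
  const reader = stream.getReader();
  const chunks: Uint8Array[] = [];
  let total = 0;
  try {
    while (total < maxBytes) {
      const { done, value } = await reader.read();
      if (done || !value) break;
      chunks.push(value);
      total += value.byteLength;
    }
  } finally {
    // Fire-and-forget: awaiting cancel() can stall until the other tee
    // branch starts consuming. Swallow any rejection.
    reader.cancel().catch(() => {});
  }
  return new TextDecoder().decode(concat(chunks)).slice(0, maxBytes);
}

function concat(chunks: Uint8Array[]): Uint8Array {
  const out = new Uint8Array(chunks.reduce((n, c) => n + c.byteLength, 0));
  let offset = 0;
  for (const c of chunks) {
    out.set(c, offset);
    offset += c.byteLength;
  }
  return out;
}
```

Slicing the decoded text by maxBytes is a simplification (bytes and characters diverge for multi-byte input); a production version would cap on bytes before decoding.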
* feat(provider): add swap cache TTL billing option to invert 1h/5min cost calculation When enabled on a provider, swaps the 5m and 1h token buckets for cost calculation only — the log badge remains unchanged. This addresses cases where a provider reports 1h cache but actually bills at the 5min rate. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> * fix(provider): swap cache TTL at data entry so badge/cost/metrics are consistent Move swap logic from post-resolution (billing5m/billing1h) to data entry point, inverting both bucket values and cache_ttl before downstream processing. Also swap session fallback TTL when usageMetrics.cache_ttl is absent. Update i18n descriptions and add tests for new behavior. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> * refactor(provider): extract applySwapCacheTtlBilling and fix Langfuse usage consistency Extract swap logic into reusable applySwapCacheTtlBilling() function, apply it in all response paths (non-streaming, SSE), and return finalized usage from finalizeRequestStats so Langfuse traces receive already-swapped metrics instead of re-parsing raw response text. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> * fix(provider): address code review findings for swap cache TTL billing - Record swapCacheTtlApplied on error/abort paths so NULL unambiguously means pre-migration - Add .default(false) to schema column and regenerate migration for query consistency - Remove redundant ?? false where provider is guaranteed non-null - Document in-place mutation in normalizeUsageWithSwap JSDoc - Include swapCacheTtlApplied in audit query for session detail page Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> * fix(provider): clone usageMetrics in normalizeUsageWithSwap to prevent mutation side-effects The function was mutating the caller's object in-place via applySwapCacheTtlBilling, risking double-swap and inconsistent state. Now clones before swapping. 
Also adds swap_cache_ttl_billing to editProvider schema and a caller-isolation test. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> * feat(ui): add swap indicator on cacheTtlApplied badge in logs Thread swapCacheTtlApplied from DB through repository SELECT queries, error-details-dialog props, and all 3 badge render locations. When swap is active, badge turns amber with "~" suffix and i18n tooltip. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> * fix(ui): reorder swap toggle and add missing DialogTitle for a11y Move Swap Cache TTL Billing toggle directly after Forward client IP for logical grouping. Add hidden DialogTitle to all provider dialogs missing it (add, edit, clone, vendor key) to satisfy Radix a11y check. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> * fix(ui): prevent duplicate "default" key in provider group tabs When a provider has groupTag="default", the value already exists in the Set. Adding "default" again for the hasDefaultGroup case produced a duplicate React key warning. Deleting it from the Set before spreading eliminates the duplication. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> * chore: format code (feature-swap-cache-ttl-billing-00fa88c) * fix(ui): handle explicit "default" groupTag in provider tabs When a provider has an explicit "default" groupTag, treat it the same as having no tag rather than adding a literal "default" entry to the groups Set, which caused a duplicate tab. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> * refactor(provider): address code review nitpicks - Reorder VisuallyHidden import to third-party block in two files - Expand comment on session fallback cache_ttl inversion - Use ?? 
false for swapCacheTtlApplied to guarantee boolean type - Use destructuring swap instead of temp variable in applySwapCacheTtlBilling Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> --------- Co-authored-by: John Doe <johndoe@example.com> Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com> Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
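The clone-before-swap invariant above can be sketched like this. The field names are a reduced, hypothetical subset of the real usage metrics; the point is that the in-place swap only ever touches a copy.

```typescript
interface UsageMetrics {
  cache_creation_5m_tokens: number;
  cache_creation_1h_tokens: number;
  cache_ttl: "5m" | "1h" | null;
}

/** Swaps the 5m/1h buckets and inverts cache_ttl, mutating in place by design. */
function applySwapCacheTtlBilling(usage: UsageMetrics): void {
  // Destructuring swap of the two buckets, no temp variable needed.
  [usage.cache_creation_5m_tokens, usage.cache_creation_1h_tokens] = [
    usage.cache_creation_1h_tokens,
    usage.cache_creation_5m_tokens,
  ];
  if (usage.cache_ttl === "5m") usage.cache_ttl = "1h";
  else if (usage.cache_ttl === "1h") usage.cache_ttl = "5m";
}

/** Clones before swapping so the caller's object is never mutated. */
function normalizeUsageWithSwap(usage: UsageMetrics, swapEnabled: boolean): UsageMetrics {
  if (!swapEnabled) return usage;
  const clone = { ...usage }; // isolate the caller from the in-place swap
  applySwapCacheTtlBilling(clone);
  return clone;
}
```

Cloning at the boundary also removes the double-swap risk: calling normalizeUsageWithSwap twice on the same input cannot compound, because the input itself never changes.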
…tokens checks (#801) * refactor(proxy): introduce EndpointPolicy to replace hardcoded count_tokens checks Replace scattered isCountTokensRequest() conditionals with a unified EndpointPolicy system resolved once at session construction time. This generalizes the "raw passthrough" behavior to cover both count_tokens and responses/compact endpoints via a single policy object. Key changes: - Add endpoint-paths.ts (path constants + normalization with case/slash/query handling) - Add endpoint-policy.ts (EndpointPolicy interface + resolution logic) - ProxySession holds immutable EndpointPolicy resolved at construction - GuardPipeline.fromSession() reads policy instead of RequestType enum - Forwarder, ResponseHandler, RequestFilter all gate on policy flags - proxy-handler uses trackConcurrentRequests from policy * fix(proxy): remove duplicate cache TTL call and cache endpoint policy in local var - Remove redundant first applyCacheTtlOverrideToMessage call (lines 1905-1917) that duplicated the post-Anthropic-overrides call (lines 2025-2036) - Cache session.getEndpointPolicy() in local variable in error handling path to avoid repeated accessor calls Addresses: gemini-code-assist and coderabbitai review comments
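Path normalization of the kind endpoint-paths.ts performs (case/slash/query handling) might look like the following; the exact rules and constants in the real module are assumptions here.

```typescript
// Hypothetical path constant; the real file defines several such constants.
const COUNT_TOKENS_PATH = "/v1/messages/count_tokens";

/** Lowercase, drop the query string, and trim trailing slashes. */
function normalizeEndpointPath(rawPath: string): string {
  const withoutQuery = rawPath.split("?")[0];
  const trimmed = withoutQuery.toLowerCase().replace(/\/+$/, "");
  // Keep the bare root path intact rather than returning an empty string.
  return trimmed === "" ? "/" : trimmed;
}

function isCountTokensPath(rawPath: string): boolean {
  return normalizeEndpointPath(rawPath) === COUNT_TOKENS_PATH;
}
```

Resolving this once at session construction, as the EndpointPolicy refactor does, means every downstream consumer compares against the same normalized value instead of re-deriving it.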
* perf(dashboard): add cache key types and builder utilities * perf(cache): add Redis overview cache with 10s TTL * perf(cache): add Redis statistics cache with 30s TTL * perf(db): rewrite statistics SQL to eliminate CROSS JOIN anti-pattern * perf(dashboard): wire overview action to Redis cache with 10s TTL * perf(dashboard): wire statistics action to Redis cache Replace direct DB calls (getUserStatisticsFromDB, getKeyStatisticsFromDB, getMixedStatisticsFromDB) with getStatisticsWithCache() in getUserStatistics(). The cache module handles Redis read-through with 30s TTL, distributed locking, and fail-open fallback to direct DB queries. * perf(dashboard): optimize client-side caching, polling, and lazy load charts - Remove cache: no-store from fetchLeaderboard to respect s-maxage=60 - Reduce overview polling from 5s to 15s with staleTime: 10_000 - Add staleTime: 30_000 and keepPreviousData to statistics query - Add staleTime: 60_000 to all 3 leaderboard queries - Lazy load StatisticsChartCard via next/dynamic with ssr: false * perf(db): add PG indexes for dashboard query optimization * perf(dashboard): expand SSR prefetch to include overview data * perf(db): commit migration artifacts for dashboard query indexes * test(dashboard): add unit tests for performance optimization modules * test(dashboard): add unit tests for performance optimization modules * test(actions): mock redis lifecycle in provider undo tests * fix(i18n): use fullwidth parentheses in zh-TW dashboard labels * fix(dashboard): address all bugbot comments from PR #808 - Replace O(N) redis.keys() with cursor-based scanPattern() in invalidateStatisticsCache (issue 1) - Fix lock not released when queryDatabase throws: move del(lockKey) to finally block in both statistics-cache and overview-cache (issues 2+4) - Wrap setex in inner try/catch so Redis write failure doesn't trigger double DB query via outer catch (issues 3+4) - Guard queryDatabase against undefined userId for keys/mixed modes (issue 5) - 
Remove duplicate buildCacheKey; use buildStatisticsCacheKey from dashboard-cache.ts throughout (issue 6) - Add TypeScript overloads to buildOverviewCacheKey preventing overview:user:undefined keys at compile time (issue 7) - Replace hardcoded Chinese sentinel "其他用户" with "__others__" and map it to i18n key othersAggregate in 5 locales (issue 8) - Extract duplicated Redis in-memory mock into shared tests/unit/actions/redis-mock-utils.ts (issue 9) * chore: format code (dashboard-perf-optimization-df4337e) --------- Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
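The lock-release and cache-write fixes above follow a common read-through pattern, sketched below against a hypothetical subset of the Redis client interface (distributed lock acquisition is elided for brevity; only the two fixed failure modes are shown).

```typescript
interface CacheClient {
  get(key: string): Promise<string | null>;
  setex(key: string, ttlSeconds: number, value: string): Promise<void>;
  del(key: string): Promise<void>;
}

async function getWithCache<T>(
  client: CacheClient,
  key: string,
  ttlSeconds: number,
  queryDatabase: () => Promise<T>,
): Promise<T> {
  try {
    const cached = await client.get(key);
    if (cached !== null) return JSON.parse(cached) as T;
  } catch {
    // Fail open: a broken cache read falls through to the database.
  }
  const lockKey = `${key}:lock`;
  try {
    const fresh = await queryDatabase();
    try {
      await client.setex(key, ttlSeconds, JSON.stringify(fresh));
    } catch {
      // Inner try/catch: a failed cache write must not bubble into an
      // outer catch that would rerun queryDatabase.
    }
    return fresh;
  } finally {
    // Release the lock even when queryDatabase throws.
    await client.del(lockKey).catch(() => {});
  }
}
```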
LiveSessionsPanel hardcoded rowSpan={2} which created an implicit
second grid row, making it 24px taller than adjacent leaderboard cards.
* feat(i18n): add session origin chain translation keys Add originDecisionTitle/Desc/Loading/Unavailable/Expand keys to logs.details.logicTrace in all 5 language dashboard.json files. Add originHint key to summary in all 5 language provider-chain.json files. * fix(test): fix sqlToString helper to handle drizzle column references * feat(repo): add findSessionOriginChain repository function * feat(ui): add origin context hint to ProviderChainPopover * feat(actions): add getSessionOriginChain server action * feat(ui): add collapsible origin chain to LogicTraceTab * test: add integration tests for session origin chain feature * fix(ui): replace hardcoded labels with i18n keys in origin chain collapsible * chore: format code (session-reuse-origin-chain-f1a2b5d) * fix(pr810): address coderabbit review comments - fix semantic mismatch: use providersCount key for enabledProviders display - fix probability formatting: use formatProbability() instead of Math.round - fix i18n: translate selectionMethod enum via selectionMethods namespace - add selectionMethods translations to all 5 language files - add JSDoc to findSessionOriginChain repository function - fix test: null providerChain mock now returns row with null providerChain - fix test: add assertion before trigger click in error-details-dialog test - add 2 missing test cases: non-admin unauthorized + exception path * chore: format code (session-reuse-origin-chain-152b428) --------- Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
…artial indexes for key lookups
- getOverviewMetrics: read from usageLedger instead of messageRequest - getOverviewMetricsWithComparison: same migration for all 3 parallel queries - Error rate: statusCode >= 400 -> NOT isSuccess (pre-computed boolean) - EXCLUDE_WARMUP_CONDITION -> LEDGER_BILLING_CONDITION - Removed deletedAt null checks (ledger has no deletedAt)
- aggregateSessionStats: read from usageLedger instead of messageRequest - aggregateMultipleSessionStats: read from usageLedger instead of messageRequest - Use LEDGER_BILLING_CONDITION instead of EXCLUDE_WARMUP_CONDITION + FILTER - Use finalProviderId for provider sub-queries - Remove deletedAt checks (ledger has no soft-delete) - Detail-view functions unchanged (still on messageRequest)
…n ClientRestrictionsEditor Replace free-text TagInput with checkbox rows for preset clients (Allow/Block) in the provider edit form, matching the existing user edit form pattern. - Rewrite ClientRestrictionsEditor to render preset clients as checkbox rows with Allow/Block options, and custom patterns via ArrayTagInputField - Update routing-section.tsx to pass translations object instead of individual label/placeholder/getPresetLabel props - Add 12 unit tests covering preset toggles, mutual exclusion, custom value splitting, disabled state, and mixed preset+custom scenarios
) * feat(ui): add model vendor icons to usage tables and price filters Display upstream AI vendor icons (Claude, OpenAI, DeepSeek, etc.) next to model names in usage records, my-usage table, and expand price table filter buttons from 3 hardcoded to 20 data-driven vendors. * fix(ui): widen model column and shrink cost column in usage logs tables Regular table: model column 180px -> 220px. Virtualized table: model flex 1.0 -> 1.3, cost flex 0.7 -> 0.6. * feat(ui): add click-to-copy on model names in usage logs and my-usage tables * chore: format code (feat-provider-icons-7fa5a90) * fix: strictly sort MODEL_VENDOR_RULES by prefix length descending Address bugbot review comments from gemini-code-assist and greptile: baichuan (8 chars) was misplaced after 7-char prefixes. Reorder all entries with length-group comments and alphabetical sort within groups. --------- Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
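The reordering above protects a longest-prefix-wins invariant. As a design note, sorting the rule table once at module load would enforce that invariant mechanically instead of by convention; the rules below are a small hypothetical subset of the real ~20-vendor table.

```typescript
interface VendorRule { prefix: string; vendor: string; }

// Sorted longest-prefix-first so a more specific prefix always wins.
const MODEL_VENDOR_RULES: VendorRule[] = [
  { prefix: "deepseek", vendor: "DeepSeek" },
  { prefix: "claude", vendor: "Claude" },
  { prefix: "gpt", vendor: "OpenAI" },
].sort((a, b) => b.prefix.length - a.prefix.length);

function resolveModelVendor(model: string): string | undefined {
  const name = model.toLowerCase();
  return MODEL_VENDOR_RULES.find((rule) => name.startsWith(rule.prefix))?.vendor;
}
```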
- Remove translucent backdrop from price drawer to prevent darkened content - Replace bg-black/10 with bg-muted/20 for theme-aware table background - Compact database status cards into inline layout - Wrap notification types in Section with increased spacing
…ctions batch edit (#833) * fix: responses/compact endpoint, CB cache invalidation, client restrictions batch edit 1. Add /v1/responses/compact to STANDARD_ENDPOINTS in forwarder to prevent misclassification as MCP request and bypassing endpoint pool selection. 2. Invalidate Redis circuit breaker config cache and in-memory config cache after batch patch apply and undo when CB fields are changed, matching the single-provider editProvider behavior. 3. Add missing allowedClients/blockedClients handlers in buildPatchDraftFromFormState and isValidSetValue validation in provider-patch-contract, fixing silent drops in batch edit pipeline. * refactor: extract CB_PROVIDER_KEYS to module-level constant Move circuit breaker provider key Set from undoProviderPatch function scope to module level, avoiding repeated allocation on each call.
* feat(notification): add cache hit rate alert (#824)
* feat(notification): add cache hit rate anomaly alerting (#823)
  - supports 5m/30m/1h/1.5h/auto rolling windows; aggregates per provider×model and compares against history, today, and the previous window
  - eligible hits are defined by the same sessionId + TTL constraint to avoid cross-session false positives
  - new webhook type/template/placeholders and notification-settings page configuration, plus dedup cooldown
  - unit tests cover decision logic and template rendering
* fix(cache-hit-rate-alert): narrow windowMode validation and add boundary cases
* refactor(provider): batch updates use RETURNING counts
* refactor: remove redundant alert computation and add exception cases
* fix(cache-hit-rate-alert): filter invalid metrics on Map input
* fix(i18n): adjust zh-TW Tokens copy
* fix(cache-hit-rate-alert): fix cooldown=0 and type assertions
* refactor(cache-hit-rate-alert): optimize dedup read/write and type details
* fix(cache-hit-rate-alert): fix missed fan-out retries and strengthen abs_min
* fix(cache-hit-rate-alert): targets mode commits cooldown per binding
* fix(cache-hit-rate-alert): strengthen cooldown boundaries and payload validation
* fix(cache-hit-rate-alert): strengthen payload validation and cooldown-commit fault tolerance
* test/cache-hit-rate-alert: strengthen dedup assertions and guard validation
* fix(cache-hit-rate-alert): payload guard validates generatedAt
* refactor(cache-hit-rate-alert): dedupe cooldown keys
* fix(cache-hit-rate-alert): strictly validate window.mode
* fix(notification): harden cache hit rate alert robustness
  - validate that window.startTime/endTime are parseable and end >= start
  - fan-out main-job job.update failures become best-effort
  - scheduling comment: the shared repeat job ignores per-binding cron/timezone
* fix(cache-hit-rate-alert): strengthen TTL inference and alert stability
* fix(notification): cache-hit-rate-alert scheduling supports large intervals
* fix(notification): harden cache hit rate alert boundaries and validation
  - clamp dropAbs/severity fields uniformly (avoid negative drops)
  - restore the eligible-TTL definition under swapCacheTtlApplied
  - force integer cooldown EX seconds + validate UI/API numeric inputs
* fix(settings): notification-settings numeric parsing handles null/empty strings correctly, avoiding Number(null)==0 incorrectly clamping thresholds/intervals
* fix(settings): remove an unsafe assertion from the test-webhook type selection
--------- Co-authored-by: tesgth032 <tesgth032@users.noreply.github.com>
* fix: address code review findings for cache hit rate alert
  - Fix costAlertCheckInterval null producing invalid cron in legacy mode
  - Cap historicalLookbackDays to 90 to prevent unbounded DB queries
  - Strengthen CacheHitRateAlertWindow.mode type from string to union
  - Complete default webhook template with all 12 documented placeholders
  - Add trailing newline to migration file (POSIX compliance)
  -
Replace safeNumberOnChange with NumberInput component to fix UX issue where clearing a number input via backspace caused value snap-back * fix: address valid bugbot findings - Add max=2147483647 to minEligibleTokens input (match PG integer limit) - Replace relative import with @/ alias in cache-hit-rate-alert repository - Add Invalid Date guard before SQL query to prevent NaN propagation --------- Co-authored-by: tesgth032 <tesgth032@hotmail.com> Co-authored-by: tesgth032 <tesgth032@users.noreply.github.com>
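The Number(null) pitfall mentioned above can be sketched as a null/empty-safe parser; the helper name is hypothetical.

```typescript
// Number(null) and Number("") both evaluate to 0, which would silently clamp
// thresholds and intervals. Treat null/empty as "unset" instead.
function parseOptionalNumber(raw: string | null | undefined): number | null {
  if (raw === null || raw === undefined || raw.trim() === "") return null;
  const value = Number(raw);
  return Number.isFinite(value) ? value : null;
}
```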
Add two non-retryable error rules for /v1/responses endpoint: - store=false item not found (priority 73): stops retry when response items are not persisted due to store=false - input must be a list (priority 74): stops retry when input field is not an array format Both use OpenAI error format and contains matching for fast detection.
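The rule shape might look like the following sketch. The priorities come from the text above, but the matched substrings are illustrative, not the exact OpenAI error messages.

```typescript
// Each rule carries a priority and a substring ("contains") matcher; any
// match marks the upstream error non-retryable so failover stops immediately.
interface ErrorRule { priority: number; contains: string; nonRetryable: true; }

const RESPONSES_ENDPOINT_RULES: ErrorRule[] = [
  // store=false: referenced items were never persisted, so retrying cannot help.
  { priority: 73, contains: "items are not persisted", nonRetryable: true },
  // Malformed request shape: retrying with the same body will fail identically.
  { priority: 74, contains: "input must be a list", nonRetryable: true },
];

function isNonRetryableResponsesError(upstreamMessage: string): boolean {
  const message = upstreamMessage.toLowerCase();
  return RESPONSES_ENDPOINT_RULES.some((rule) => message.includes(rule.contains));
}
```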
* fix: statistics chart card clipped at small heights
  - the statistics card becomes scrollable inside its 50vh frame and the chart height adapts to the available space, so the Legend is no longer clipped
  - add a dev-only /internal/ui-preview/statistics-chart preview page (no DB required)
  - dev allows /internal/ui-preview without login for local UI checks
* fix statistics chart height calc syntax so it takes effect
* tighten the dev ui-preview path allowlist match
* add a chart-height calculation note and a dev-allowlist unit test
* remove an undefined scrollbar class and adjust the test call
* preview page imports components via the @/ alias
* wrap comment lines for readability
* improve maintainability of the statistics chart height calculation
  - extract the chart height and non-chart offsets into constants; compute the clamp() height via style
  - unit tests add beforeEach module/mock resets to avoid state leaks
* chore(ui): confine the homepage statistics chart clipping fix to the component
* fix(ui): statistics card disables scrolling and the chart height adapts
* fix(ui): height parsing accepts px only, avoiding vh misdetection
* fix(ui): tighten the statistics card and eliminate clipping
* fix(ui): statistics card md padding becomes p-1
* fix(ui): tweak the statistics card value font size
* fix(ui): float the statistics legend action buttons to the top right
* fix(ui): add a title to truncated legend usernames
* fix(ui): raise the statistics tooltip z-index and move it up
* fix(ui): tooltip list is wheel-scrollable and expands where possible
* fix(ui): statistics card min height 300px, capped at 60vh
* fix(ui): statistics legend button width adapts to i18n
* fix(ui): legend action buttons avoid wrapping/occlusion
* fix(ui): refine statistics card metric font sizes and comments
* fix(ui): legend action area supports RTL
* fix(ui): statistics card listens for visualViewport changes
* fix(ui): remove the unnecessary visualViewport scroll listener
* fix(ui): reduce statistics card height recomputation triggers
--------- Co-authored-by: tesgth032 <tesgth032@users.noreply.github.com>
* fix: circuit breaker still blocked providers after being disabled
  - failureThreshold <= 0 is treated as disabled: when a non-closed state is detected, force-reset it and write it back to Redis
  - recordFailure no longer resets circuitOpenUntil in the OPEN state, so the open window is not extended
  - add unit tests covering isCircuitOpen / getAllHealthStatusAsync / recordFailure
* perf: reduce forced circuit-config refresh frequency
  - isCircuitOpen force-refreshes config at a minimum interval while non-closed, reducing Redis pressure during sustained OPEN
  - recordFailure no longer re-fetches config when already OPEN; it only persists counters
* fix: clean up stale circuit state missing from Redis during batch refresh
  - getAllHealthStatusAsync clears non-closed in-memory state when Redis has no state for that provider during batch refresh
  - add unit tests covering this cleanup to avoid stale health-status display/filtering
* refactor: extract disabled-circuit-breaker handling and stabilize async alert tests
  - extract handleDisabledCircuitBreaker/needsHealthResetToClosed to remove duplicated logic
  - unit tests use expect.poll to wait for alerts, avoiding flaky busy-waiting
* perf: cache the default circuit config and prefetch batch configs in parallel
* perf: add a concurrency cap to batch config force refresh
* test: extract shared circuit breaker test mocks
* circuit breaker: 60s forced refresh + Pub/Sub config invalidation + proactive close when disabled
* test: make the config-invalidation subscription case CI-compatible
* perf: exponential backoff on Pub/Sub subscription failures
--------- Co-authored-by: tesgth032 <tesgth032@users.noreply.github.com>
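A hypothetical reduction of the disabled-breaker and OPEN-window rules above; the real implementation also persists the reset state back to Redis and tracks more fields.

```typescript
type CircuitState = "closed" | "open" | "half_open";

interface CircuitHealth { state: CircuitState; circuitOpenUntil: number; }

function isCircuitOpen(
  health: CircuitHealth,
  failureThreshold: number,
  now: number,
): boolean {
  if (failureThreshold <= 0) {
    // Breaker disabled: proactively close any stale non-closed state so a
    // previously tripped provider is never blocked again.
    health.state = "closed";
    health.circuitOpenUntil = 0;
    return false;
  }
  return health.state === "open" && health.circuitOpenUntil > now;
}

function recordFailure(health: CircuitHealth, openDurationMs: number, now: number): void {
  if (health.state === "open") return; // do not extend an already-open window
  health.state = "open";
  health.circuitOpenUntil = now + openDurationMs;
}
```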
* feat: add provider scheduled active time and remove deprecated joinClaudePool Add HH:mm-based active time windows (start/end) to providers for time-based routing control. Supports same-day and cross-day schedules with system timezone integration. Also removes the deprecated joinClaudePool column/feature. Key changes: - Core: isProviderActiveNow() with NaN fail-open defense - Schema: active_time_start/end columns, drop join_claude_pool (single migration) - Types: 7 interfaces updated across Provider/Batch/Create/Update - Proxy: schedule_inactive filter in provider selection hot path - Batch: HH:mm regex validation in patch contract - UI: schedule toggle, time pickers, cross-day hint, list badge - i18n: 5 languages (zh-CN, zh-TW, en, ja, ru) - Tests: 25 schedule tests + 10 batch patch tests * fix: address code review findings from PR #844 1. Add schedule check in findReusable session reuse path to prevent bound sessions from bypassing active time window filtering. Clears session binding when provider is outside schedule. 2. Strict HH:mm validation in parseHHMM using regex - values like "24:00", "9:00", "99:99" now return NaN and trigger fail-open, instead of producing silently incorrect minute values.
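The strict parsing and fail-open behavior above can be sketched as follows; isProviderActiveNow here is a simplified, timezone-free version of the real check.

```typescript
// Strict two-digit HH:mm; "24:00", "9:00", "99:99" all fail to match.
const HHMM_RE = /^([01]\d|2[0-3]):([0-5]\d)$/;

/** Returns minutes since midnight, or NaN for anything not strictly HH:mm. */
function parseHHMM(value: string): number {
  const match = HHMM_RE.exec(value);
  if (!match) return NaN;
  return Number(match[1]) * 60 + Number(match[2]);
}

/** Supports same-day and cross-day windows; NaN anywhere fails open. */
function isProviderActiveNow(start: string, end: string, nowMinutes: number): boolean {
  const s = parseHHMM(start);
  const e = parseHHMM(end);
  if (Number.isNaN(s) || Number.isNaN(e)) return true; // NaN fail-open defense
  if (s === e) return true; // degenerate window: always active
  if (s < e) return nowMinutes >= s && nowMinutes < e; // same-day window
  return nowMinutes >= s || nowMinutes < e; // cross-day, e.g. 22:00-06:00
}
```

Failing open on malformed input keeps a bad schedule value from silently taking a provider out of rotation, which matches the NaN defense named in the commit.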
Note: the automated CodeRabbit review was skipped because this PR contains 300 files, 150 over the 150-file limit.
🧪 Test Results
Overall result: ✅ All tests passed
Summary of Changes

This update is the v0.6.1 release, centered on refactoring core components, introducing new features, and optimizing performance. A deep rework of the authentication, billing, and logging systems significantly improves stability, observability, and management flexibility. The release introduces Langfuse observability, enhanced provider batch operations and client restrictions, and cache hit rate alerting, giving users stronger and more fine-grained control. It also includes significant database schema changes and compatibility changes; users are advised to read the related notes carefully before upgrading.
Actionable comments posted: 3
Note
Due to the large number of review comments, Critical severity comments were prioritized as inline comments.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
src/app/[locale]/dashboard/availability/_components/endpoint-probe-history.tsx (2)
73-87: ⚠️ Potential issue | 🟠 Major: the endpoint list load can be overwritten by a stale request.
Line 75 issues the request with no invalidation guard; when selectedVendorId/selectedType changes rapidly, an older request may resolve after a newer one and overwrite the current endpoints and selectedEndpointId. Suggested fix:
```diff
 useEffect(() => {
+  let cancelled = false;
   if (!selectedVendorId || !selectedType) {
     setEndpoints([]);
     setSelectedEndpointId(null);
     return;
   }
   setLoadingEndpoints(true);
   getDashboardProviderEndpoints({ vendorId: selectedVendorId, providerType: selectedType })
     .then((data) => {
+      if (cancelled) return;
       setEndpoints(data);
       setSelectedEndpointId((prev) => {
         if (!prev) {
           return data[0]?.id ?? null;
         }
         return data.some((endpoint) => endpoint.id === prev) ? prev : (data[0]?.id ?? null);
       });
     })
     .catch(console.error)
-    .finally(() => setLoadingEndpoints(false));
+    .finally(() => {
+      if (!cancelled) setLoadingEndpoints(false);
+    });
+
+  return () => {
+    cancelled = true;
+  };
 }, [selectedVendorId, selectedType]);
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/app/`[locale]/dashboard/availability/_components/endpoint-probe-history.tsx around lines 73 - 87, The endpoint-list fetch in the effect that calls getDashboardProviderEndpoints risks race conditions: track and ignore stale responses by using an abortable pattern or a local request token inside the effect before calling getDashboardProviderEndpoints, and only call setEndpoints and setSelectedEndpointId if the token is still current; ensure you also abort or ignore results in the .catch/.finally handlers so setLoadingEndpoints(false) corresponds to the latest request. Apply this change inside the effect that uses setLoadingEndpoints, getDashboardProviderEndpoints, setEndpoints and setSelectedEndpointId (the effect with dependencies [selectedVendorId, selectedType]) so rapid changes don’t let older responses overwrite newer state.
95-111: ⚠️ Potential issue | 🟠 Major — Probe-log loading can likewise be overwritten by a stale request.
Line 103 calls setLogs directly after the async response without confirming that the currently selected endpointId is still the target of that request; switching endpoints can therefore display logs for the wrong endpoint. 🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/app/`[locale]/dashboard/availability/_components/endpoint-probe-history.tsx around lines 95 - 111, The fetch in the effect can race and overwrite logs from an earlier request; capture the current selectedEndpointId into a local variable (e.g., const requestEndpointId = selectedEndpointId) before the async fetch in the component (endpoint-probe-history effect), and after awaiting the response verify that selectedEndpointId === requestEndpointId before calling setLogs (and any state that depends on the request result); alternatively use an AbortController tied to the fetch and abort previous requests when selectedEndpointId changes—implement one of these strategies around setLoadingLogs/setLogs to prevent stale responses from writing over the currently selected endpoint's logs.
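Both findings above suggest the same request-token strategy; a framework-free sketch follows. All names here (createLogsLoader, startRequest, LoaderState) are hypothetical illustrations, not identifiers from the PR:

```typescript
// Minimal sketch of the request-token pattern suggested above.
type LoaderState = { logs: string[]; loading: boolean };

function createLogsLoader() {
  let latestToken = 0;
  const state: LoaderState = { logs: [], loading: false };

  // Each call claims a new token and returns a callback that only
  // applies the response if no newer request has started since.
  function startRequest(): (logs: string[]) => void {
    const token = ++latestToken;
    state.loading = true;
    return (logs: string[]) => {
      if (token !== latestToken) return; // stale response: ignore
      state.logs = logs;
      state.loading = false;
    };
  }

  return { state, startRequest };
}

// Two overlapping requests resolving out of order:
const loader = createLogsLoader();
const applyA = loader.startRequest(); // request for endpoint A
const applyB = loader.startRequest(); // user switched to endpoint B
applyB(["b-log"]); // newer response lands first
applyA(["a-log"]); // stale response arrives late and is ignored
// loader.state.logs stays ["b-log"]
```

In a React effect, the same token check goes inside the `.then` handler, and the cleanup function invalidates the token, which is equivalent to the `cancelled` flag in the diff above.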
🟠 Major comments (20)
src/app/v1/_lib/proxy/endpoint-paths.ts-58-63 (1)
58-63: ⚠️ Potential issue | 🟠 Major — Fix the boundary condition in the /v1 prefix match.
Line 58's startsWith(V1_PREFIX) misclassifies /v10/... and /v1beta/... as v1 paths, and the slice on Line 62 then produces an incorrect route. Suggested fix:

```diff
 export function toV1RoutePath(pathname: string): string {
-  if (!pathname.startsWith(V1_PREFIX)) {
+  if (pathname !== V1_PREFIX && !pathname.startsWith(`${V1_PREFIX}/`)) {
     return pathname;
   }
   const routePath = pathname.slice(V1_PREFIX.length);
   return routePath.length > 0 ? routePath : "/";
 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/app/v1/_lib/proxy/endpoint-paths.ts` around lines 58 - 63, the current implementation's pathname.startsWith(V1_PREFIX) misclassifies "/v10", "/v1beta", etc. as v1 paths; fix it by requiring the prefix to be followed by end-of-string or a slash, e.g. replace the startsWith(V1_PREFIX) check with pathname === V1_PREFIX || pathname.startsWith(V1_PREFIX + "/"), then process routePath (pathname.slice(V1_PREFIX.length)) with the existing logic; reference identifiers: locate V1_PREFIX, pathname, and routePath in endpoint-paths.ts and replace the conditional to fix the boundary case.src/app/[locale]/my-usage/_components/usage-logs-table.tsx-107-112 (1)
107-112: ⚠️ Potential issue | 🟠 Major — The clickable <span> lacks accessible semantics; keyboard users cannot trigger the copy.
Lines 107-112 bind the interaction to a <span>, which is poorly reachable for keyboard and assistive-technology users. Change it to a semantic <button type="button">. Suggested change:

```diff
-<span
-  className="cursor-pointer hover:underline truncate"
-  onClick={() => handleCopyModel(log.model!)}
->
-  {log.model}
-</span>
+<button
+  type="button"
+  className="cursor-pointer hover:underline truncate text-left"
+  onClick={() => handleCopyModel(log.model)}
+>
+  {log.model}
+</button>
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/app/`[locale]/my-usage/_components/usage-logs-table.tsx around lines 107 - 112, The clickable <span> used to trigger handleCopyModel(log.model!) lacks keyboard/ARIA semantics; replace the <span> with a semantic <button type="button"> (preserving className, onClick handler and content) so keyboard users can activate copying and screen readers get correct role; ensure any truncated text styling and hover:underline remain and remove any tabindex hacks—update the element rendering in usage-logs-table.tsx where handleCopyModel is used.src/app/v1/_lib/proxy-handler.ts-62-64 (1)
62-64: ⚠️ Potential issue | 🟠 Major — The concurrent counter can be decremented without a matching increment.
When the guard chain returns early, the increment never runs, yet the finally block may still decrement, corrupting the concurrent count. Use an explicit flag to guarantee strict increment/decrement pairing. Suggested fix:

```diff
 export async function handleProxyRequest(c: Context): Promise<Response> {
   let session: ProxySession | null = null;
+  let concurrentCountIncremented = false;
   try {
 @@
     if (session.sessionId && session.getEndpointPolicy().trackConcurrentRequests) {
       await SessionTracker.incrementConcurrentCount(session.sessionId);
+      concurrentCountIncremented = true;
     }
 @@
   } finally {
     // 11. Decrement the concurrent count (runs on success or failure) - skip count_tokens
-    if (session?.sessionId && session.getEndpointPolicy().trackConcurrentRequests) {
+    if (session?.sessionId && concurrentCountIncremented) {
       await SessionTracker.decrementConcurrentCount(session.sessionId);
     }
   }
 }
```
Also applies to: 99-101
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/app/v1/_lib/proxy-handler.ts` around lines 62 - 64, The concurrent counter can be decremented without a prior increment when guards return early; wrap the increment/decrement with an explicit boolean flag (e.g., didIncrement) so you only call SessionTracker.decrementConcurrentCount(session.sessionId) in the finally block if you successfully called SessionTracker.incrementConcurrentCount; apply this change to the block using session.sessionId && session.getEndpointPolicy().trackConcurrentRequests (and the analogous block around lines 99-101) so increments and decrements are strictly paired.scripts/copy-version-to-standalone.cjs-11-11 (1)
11-11: ⚠️ Potential issue | 🟠 Major — Add a minimum Node.js version constraint to support fs.cpSync.
fs.cpSync requires Node.js v16.7.0+. The project's package.json does not define engines.node, and no Node version is pinned via .nvmrc or CI configuration, so the release script could run on an incompatible older version and fail. Suggestions:
- Add "engines": {"node": ">=16.7.0"} (or higher) to package.json
- Or specify the minimum version in .nvmrc
- Or ensure CI/CD workflows enforce a Node version that supports this API
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/copy-version-to-standalone.cjs` at line 11, The script uses fs.cpSync (synchronous copy) which requires Node.js ≥16.7.0, so add a minimum Node constraint to the project to prevent runtime failures: update package.json to include an "engines" entry like "node": ">=16.7.0" (or higher), and/or add a .nvmrc with the same version, and ensure CI workflows (e.g., setup-node actions or matrix) are pinned to that version; look for the use of fs.cpSync in scripts/copy-version-to-standalone.cjs to justify and document the change.src/app/[locale]/settings/providers/_components/forms/provider-form/components/form-tab-nav.tsx-66-68 (1)
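As a defensive complement to the engines field, the script itself could fail fast when the running Node version predates fs.cpSync (added in v16.7.0). The guard below is a hypothetical sketch, not part of the PR:

```typescript
// Returns true when the given Node version string supports fs.cpSync.
function supportsFsCpSync(nodeVersion: string): boolean {
  const [major = 0, minor = 0] = nodeVersion
    .replace(/^v/, "")
    .split(".")
    .map(Number);
  return major > 16 || (major === 16 && minor >= 7);
}

// e.g. at the top of copy-version-to-standalone.cjs:
//   if (!supportsFsCpSync(process.version)) { console.error("Node >= 16.7.0 required"); process.exit(1); }
const okOn2011 = supportsFsCpSync("v20.11.0"); // true
const okOn1661 = supportsFsCpSync("v16.6.1"); // false
```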
66-68: ⚠️ Potential issue | 🟠 Major — Missing visible focus state hurts keyboard tab switching.
The default focus style is removed here (focus-visible:outline-none) without adding a visible focus ring, so keyboard users can hardly tell where focus is in the horizontal layout. Suggested fix:

```diff
-  "hover:text-foreground focus-visible:outline-none",
+  "hover:text-foreground focus-visible:outline-none focus-visible:ring-2 focus-visible:ring-ring focus-visible:ring-offset-2",
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/app/`[locale]/settings/providers/_components/forms/provider-form/components/form-tab-nav.tsx around lines 66 - 68, The tab button classes in form-tab-nav.tsx removed the default focus outline ("focus-visible:outline-none") but didn't add a visible focus indicator; update the class string used for the tab items (the JSX element that uses isActive to choose "text-primary" vs "text-muted-foreground") to include keyboard-visible focus styles such as "focus-visible:ring-2 focus-visible:ring-offset-2 focus-visible:ring-primary" (and optionally "focus-visible:ring-offset-background") so that when a tab receives focus via keyboard a clear ring is shown while keeping the existing focus-visible:outline-none if you want to suppress the browser outline.src/app/v1/_lib/proxy/client-guard.ts-33-44 (1)
33-44: ⚠️ Potential issue | 🟠 Major — API error messages use hardcoded strings, violating the i18n requirement.
The error messages on lines 33-44 (such as "Client blocked by pattern: ..." and "Client not in allowed list: ...") are hardcoded English strings. Per the CLAUDE.md coding guidelines, all user-facing strings (**/*.{ts,tsx,js,jsx}) must use i18n (supporting zh-CN, zh-TW, en, ja, ru). These API response messages are user-facing (client/developer) text and should follow the i18n rules. Moreover, the problem is not limited to this snippet: the proxy layer's guard modules (auth-guard, model-guard, client-guard, etc.) all use hardcoded strings. Consider implementing i18n handling uniformly across the proxy guard modules.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/app/v1/_lib/proxy/client-guard.ts` around lines 33 - 44, The error messages in client-guard.ts are hardcoded English strings (e.g., those built when result.allowed is false using result.matchType, result.matchedPattern, and allowedClients) which breaks i18n; replace these literal messages with calls to the i18n/localization helper used across the project (or add one if missing) and pass parameters (matchedPattern, detectedClient, allowedClients, signals length/list) into the localized templates, then call ProxyResponses.buildError(400, localizedMessage, "invalid_request_error"); also audit the other guard modules (auth-guard, model-guard) to apply the same i18n pattern for all user-facing API responses.src/app/v1/_lib/proxy/errors.ts-481-528 (1)
481-528: ⚠️ Potential issue | 🟠 Major — Client-facing error copy is hardcoded and bypasses i18n.
The FAKE_200_* reason texts, the default text, and the "Upstream detail" prefix added on Lines [492]-[525] are written directly in English, bypassing the multilingual layer. Switch to i18n keys with parameter interpolation (e.g. statusCode, detail). As per coding guidelines: all user-visible strings must use i18n (zh-CN, zh-TW, en, ja, ru); hardcoded display text is forbidden.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/app/v1/_lib/proxy/errors.ts` around lines 481 - 528, The new FAKE_200_* user-facing strings in the FAKE_200_* branch (constructed from this.message, the inferredNote using this.statusCode, and the "Upstream detail:" prefix built from this.upstreamError?.body) are hardcoded English; replace them with i18n keys and parameter interpolation instead of literal text. Concretely, update the switch that builds reason, the inferredNote construction, and the final returned string to call the project i18n helper (e.g. t(...) or i18n.t(...)) with distinct keys for each FAKE_200_* reason, a key for the inferred status note that accepts statusCode, and a key for the upstream detail phrasing that accepts a sanitized detail param (use the existing sanitizeErrorTextForDetail output and clipped/normalized detail as the param). Ensure no user-visible literals remain and pass statusCode/detail as params to the translation calls so translations can be provided for all locales.src/app/api/auth/login/route.ts-1-16 (1)
1-16: ⚠️ Potential issue | 🟠 Major — API routes should use the Hono framework, but this file uses native Next.js APIs.
Per the coding guidelines, src/app/api/**/*.{ts,tsx} routes should use Hono. This file uses NextRequest/NextResponse directly rather than Hono (routes under src/app/v1/ already use Hono correctly). Consider migrating this auth route, along with the other routes under src/app/api/, to Hono for consistency. 🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/app/api/auth/login/route.ts` around lines 1 - 16, This file uses NextRequest/NextResponse and Next.js API conventions but must be migrated to the Hono framework; update src/app/api/auth/login/route.ts to export a Hono handler instead of Next.js route handlers, replace NextRequest/NextResponse types/usages with Hono Request/Context (or Hono's Context.env) and wire the handler into a Hono router, and adapt existing utilities (getLoginRedirectTarget, getSessionTokenMode, setAuthCookie, toKeyFingerprint, validateKey, createCsrfOriginGuard, LoginAbusePolicy, withAuthResponseHeaders) to work with Hono’s request/response lifecycle (e.g., read body via ctx.req, set cookies/headers on ctx.res, return ctx.text/json etc.); ensure middleware-like logic such as CSRF guard and abuse checks run via Hono middleware or inside the handler and that any logger/getTranslations/env access remains compatible with Hono.src/app/[locale]/dashboard/availability/_components/availability-dashboard.tsx-181-188 (1)
181-188: ⚠️ Potential issue | 🟠 Major — The error-rate metric is inflated by low-traffic providers, distorting the overview.
This computes each provider's error rate first and then averages across providers, giving low-volume providers outsized influence on the overall rate. A more sensible overview metric is total errors / total requests. Suggested fix (weight by volume):

```diff
-const errorRate =
-  providers.length > 0
-    ? providers.reduce((sum, p) => {
-        const total = p.totalRequests;
-        const errors = p.timeBuckets.reduce((s, b) => s + b.redCount, 0);
-        return sum + (total > 0 ? errors / total : 0);
-      }, 0) / providers.length
-    : 0;
+const { totalErrors, totalRequests } = providers.reduce(
+  (acc, p) => {
+    acc.totalErrors += p.timeBuckets.reduce((s, b) => s + b.redCount, 0);
+    acc.totalRequests += p.totalRequests;
+    return acc;
+  },
+  { totalErrors: 0, totalRequests: 0 },
+);
+const errorRate = totalRequests > 0 ? totalErrors / totalRequests : 0;
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/app/`[locale]/dashboard/availability/_components/availability-dashboard.tsx around lines 181 - 188, The current errorRate calculation averages per-provider error rates (using providers.reduce on p.totalRequests and p.timeBuckets) which overweights low-volume providers; change it to compute a weighted overall rate by summing total errors and total requests across all providers and dividing: compute totalErrors = providers.reduce((acc,p)=> acc + p.timeBuckets.reduce((s,b)=> s + b.redCount,0),0) and totalRequests = providers.reduce((acc,p)=> acc + p.totalRequests,0), then set errorRate = totalRequests > 0 ? totalErrors / totalRequests : 0 (keep the same variable name errorRate and use the existing providers, p.totalRequests and b.redCount symbols).src/app/[locale]/dashboard/availability/_components/availability-dashboard.tsx-45-106 (1)
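The distortion described above can be shown numerically. The shapes below are simplified stand-ins for the dashboard's data, not the component's actual types:

```typescript
type ProviderStats = { totalRequests: number; errorCount: number };

// Current approach: unweighted average of per-provider rates.
function averageOfRates(providers: ProviderStats[]): number {
  if (providers.length === 0) return 0;
  const sum = providers.reduce(
    (acc, p) => acc + (p.totalRequests > 0 ? p.errorCount / p.totalRequests : 0),
    0,
  );
  return sum / providers.length;
}

// Suggested approach: total errors / total requests (volume-weighted).
function weightedRate(providers: ProviderStats[]): number {
  const totals = providers.reduce(
    (acc, p) => ({
      errors: acc.errors + p.errorCount,
      requests: acc.requests + p.totalRequests,
    }),
    { errors: 0, requests: 0 },
  );
  return totals.requests > 0 ? totals.errors / totals.requests : 0;
}

// A tiny provider with 1 error out of 2 requests dominates the naive average:
const stats: ProviderStats[] = [
  { totalRequests: 1000, errorCount: 10 }, // 1% error rate, high volume
  { totalRequests: 2, errorCount: 1 },     // 50% error rate, negligible volume
];
const naive = averageOfRates(stats);  // (0.01 + 0.5) / 2 = 0.255
const weighted = weightedRate(stats); // 11 / 1002 ≈ 0.011
```

The unweighted average reports roughly 25% errors for a fleet whose true aggregate error rate is about 1.1%.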
45-106: ⚠️ Potential issue | 🟠 Major — Without a timeout fallback, auto-refresh can be blocked indefinitely.
inFlightRef blocks subsequent polls; if one request hangs indefinitely, later refreshes are skipped forever and the dashboard can keep showing stale data. Add a per-request timeout that aborts the request. Suggested fix (add a request-timeout fallback):

```diff
 const TARGET_BUCKETS = 60;
+const REQUEST_TIMEOUT_MS = 15000;

 const fetchData = useCallback(
   async (options?: { force?: boolean }) => {
 @@
     const requestId = ++requestIdRef.current;
     const controller = new AbortController();
+    const timeoutId = window.setTimeout(() => {
+      controller.abort();
+    }, REQUEST_TIMEOUT_MS);
     abortControllerRef.current = controller;
     inFlightRef.current = true;
 @@
     } finally {
+      clearTimeout(timeoutId);
       if (requestId === requestIdRef.current) {
         setLoading(false);
         setRefreshing(false);
         inFlightRef.current = false;
       }
     }
   },
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/app/`[locale]/dashboard/availability/_components/availability-dashboard.tsx around lines 45 - 106, The fetchData function can hang indefinitely because requests have no timeout; add a per-request timeout that aborts the current AbortController so inFlightRef won't block future polls: when creating controller in fetchData (abortControllerRef and requestIdRef usage), start a setTimeout (e.g. REQUEST_TIMEOUT_MS) that calls controller.abort() and store the timer id; pass controller.signal to fetch as already done; clear the timeout on success/error (before returning in try/catch) and also clear it in finally to avoid leaks; ensure abortControllerRef is updated only with the current controller and that the timeout is tied to that controller so aborted timeouts don't affect later requests.src/app/[locale]/settings/providers/_components/provider-manager.tsx-154-160 (1)
154-160: ⚠️ Potential issue | 🟠 Major — Keep "default" as an internal key; display text should go through i18n.
Injecting "default" into the group set means it is shown verbatim as an English constant on the group buttons. Keep the internal key unchanged, but map it to a translated label at render time. Suggested change:

```diff
+const DEFAULT_GROUP_KEY = "default";

-tags.forEach((g) => {
-  if (g === "default") {
+tags.forEach((g) => {
+  if (g === DEFAULT_GROUP_KEY) {
     hasDefaultGroup = true;
   } else {
     groups.add(g);
   }
 });
-if (hasDefaultGroup) {
-  return ["default", ...sortedGroups];
+if (hasDefaultGroup) {
+  return [DEFAULT_GROUP_KEY, ...sortedGroups];
 }

-{group}
+{group === DEFAULT_GROUP_KEY ? tFilter("groups.default") : group}
```
As per coding guidelines "All user-facing strings must use i18n (5 languages supported: zh-CN, zh-TW, en, ja, ru). Never hardcode display text".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/app/`[locale]/settings/providers/_components/provider-manager.tsx around lines 154 - 160, Keep the internal group key "default" unchanged when collecting tags in tags.forEach (preserve hasDefaultGroup and groups.add behavior), but do not render the raw "default" string to users; update the UI rendering of group buttons in provider-manager.tsx to map the "default" key to a localized label via the i18n function (e.g., use t('providers.group.default') or the existing translation helper) and ensure all other group names are passed through i18n when displayed so no user-facing text is hardcoded.src/app/[locale]/settings/providers/_components/provider-manager.tsx-689-689 (1)
689-689: ⚠️ Potential issue | 🟠 Major — Passing filteredProviders on Line 689 risks losing selected items.
When a user makes selections under an active filter and then changes the filter, selectedProviderIds may contain IDs not present in the current filteredProviders. BatchEditDialog re-filters via providers.filter((p) => selectedProviderIds.has(p.id)) (lines 116-118), so selected items outside filteredProviders are silently dropped and cannot be batch-edited. Pass the full providers list instead of filteredProviders, or prune IDs that no longer exist from selectedProviderIds whenever the filter changes. 🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/app/`[locale]/settings/providers/_components/provider-manager.tsx at line 689, The current call passes filteredProviders to BatchEditDialog which causes selectedProviderIds to be lost when filters change because BatchEditDialog further filters by providers.filter(p => selectedProviderIds.has(p.id)); either pass the full providers list instead of filteredProviders (so BatchEditDialog can match selections against the complete set) or, if you must pass filteredProviders, proactively remove any IDs not present in filteredProviders from selectedProviderIds when filters change (update the state that holds selectedProviderIds in the component where filteredProviders is computed). Ensure you modify the code paths that reference filteredProviders, selectedProviderIds, and the BatchEditDialog props accordingly.src/app/[locale]/dashboard/logs/_components/usage-logs-view-virtualized.tsx-138-145 (1)
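The second option above (pruning stale selections) can be sketched as a pure helper; pruneSelection is a hypothetical name, not code from the PR:

```typescript
// Keep only selected IDs that still exist in the currently filtered list.
function pruneSelection(
  selectedIds: Set<string>,
  visibleProviderIds: readonly string[],
): Set<string> {
  const visible = new Set(visibleProviderIds);
  return new Set(Array.from(selectedIds).filter((id) => visible.has(id)));
}

// Usage: call this whenever filteredProviders is recomputed (e.g. in an
// effect keyed on the filter state) and store the result back into state.
const pruned = pruneSelection(new Set(["p1", "p2", "p3"]), ["p2", "p4"]);
// pruned contains only "p2"
```

Pruning keeps the dialog's displayed selection consistent with what the user can see; passing the full providers list instead preserves hidden selections, which may or may not be the desired UX.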
138-145: ⚠️ Potential issue | 🟠 Major — placeholderData conflicts with isLoading semantics, disabling the loading state.
Setting placeholderData: EMPTY_PROVIDERS on Lines 138-145 makes React Query v5 set isPending to false during the initial request (placeholder data already exists), so isLoading (defined as isPending && isFetching) evaluates to false. Even though data is actually being fetched, the loading-state feedback in the downstream RequestFilters component breaks:
- The button on Line 172 is not disabled
- The loading text on Lines 180 and 196-198 is not shown
Remove placeholderData, or track isFetching && isPlaceholderData to drive the loading state, so users can see that data is loading. Optional change:

```diff
 const { data: providersData = EMPTY_PROVIDERS, isLoading: isProvidersLoading } = useQuery<
   ProviderDisplay[]
 >({
   queryKey: ["usage-log-providers"],
   queryFn: getProviders,
   enabled: isAdmin && providers === undefined,
-  placeholderData: EMPTY_PROVIDERS,
 });
```
Or track a more precise state:

```diff
 const {
   data: providersData = EMPTY_PROVIDERS,
-  isLoading: isProvidersLoading,
+  isFetching: isProvidersFetching,
+  isPlaceholderData: isProvidersPlaceholder,
 } = useQuery<
   ProviderDisplay[]
 >({
   queryKey: ["usage-log-providers"],
   queryFn: getProviders,
   enabled: isAdmin && providers === undefined,
   placeholderData: EMPTY_PROVIDERS,
 });
+const isProvidersLoading = isProvidersFetching && isProvidersPlaceholder;
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/app/`[locale]/dashboard/logs/_components/usage-logs-view-virtualized.tsx around lines 138 - 145, The placeholderData: EMPTY_PROVIDERS passed into the useQuery that defines providersData/isProvidersLoading causes isLoading to be false while the fetch is pending; remove placeholderData from the useQuery call (the one using queryKey ["usage-log-providers"] and queryFn getProviders) so React Query can reflect the real loading state, or alternatively compute loading for downstream components (RequestFilters and its controls) from queryState.isFetching && queryState.isPlaceholderData instead of isLoading; update references to isProvidersLoading (and any UI toggles in RequestFilters) to use the corrected loading signal.scripts/verify-ledger-consistency.ts-96-103 (1)
96-103: ⚠️ Potential issue | 🟠 Major — The "critical" verdict conflicts with the "orphans are acceptable" semantics, producing false failures in legitimate scenarios.
You already state that orphans are expected after log deletion, yet the count mismatch/cost mismatch checks still compare full tables, so that scenario is judged a failure. Change the parity checks to compare only ledger rows that can be joined to request rows, or subtract the orphan contribution from the verdict. 🧩 Suggested direction (sketch):

```diff
 const countRow = getFirstRow<CountCheckRow>(await client`
   SELECT
     (
       SELECT COUNT(*)::integer
       FROM message_request
       WHERE blocked_by IS DISTINCT FROM 'warmup'
     ) AS message_request_count,
     (
-      SELECT COUNT(*)::integer
-      FROM usage_ledger
+      SELECT COUNT(*)::integer
+      FROM usage_ledger ul
+      JOIN message_request mr ON mr.id = ul.request_id
+      WHERE mr.blocked_by IS DISTINCT FROM 'warmup'
     ) AS usage_ledger_count
 `);

-ledger_sum AS (
-  SELECT COALESCE(SUM(cost_usd), 0) AS total_cost
-  FROM usage_ledger
-)
+ledger_sum AS (
+  SELECT COALESCE(SUM(ul.cost_usd), 0) AS total_cost
+  FROM usage_ledger ul
+  JOIN message_request mr ON mr.id = ul.request_id
+  WHERE mr.blocked_by IS DISTINCT FROM 'warmup'
+)
```
Also applies to: 106-116
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/verify-ledger-consistency.ts` around lines 96 - 103, The current parity checks print and treat count/cost mismatches as critical across the full tables (using countRow.message_request_count, countRow.usage_ledger_count, costRow.message_total_cost, costRow.ledger_total_cost) while separately acknowledging that orphanRow.orphan_ledger_count may be expected after log deletion; change the logic so parity is computed only over ledger rows that can be associated with request rows (exclude orphanRow.orphan_ledger_count) or subtract orphanRow.orphan_ledger_count and any warmupRow.warmup_leak_count from the usage_ledger and ledger_total_cost before comparing to message_request_count/message_total_cost, and update the failure condition accordingly so legitimate orphan/warmup cases do not trigger critical failures.src/actions/session-origin-chain.ts-34-44 (1)
34-44: ⚠️ Potential issue | 🟠 Major — Authorization and data-query logic are separated, weakening defense in depth.
The non-admin permission check (lines 34-44) serves only as an existence check, but the subsequent call to findSessionOriginChain(sessionId) no longer verifies the caller's identity. Although sessionId is cryptographically random (crypto.randomBytes(6)) and expensive to forge in the short term, if a sessionId is leaked, logged, or enumerated, anyone can call findSessionOriginChain directly and read the chain data, bypassing the permission check. Parameterize the ownership constraint into findSessionOriginChain, or create a repository-layer wrapper that performs authorization, so the data layer itself enforces access control. 🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/actions/session-origin-chain.ts` around lines 34 - 44, The current flow does a one-off ownership check using messageRequest/ownershipCondition and then calls findSessionOriginChain(sessionId) which does not enforce ownership, allowing a leaked or guessed sessionId to bypass access controls; fix by pushing the ownership constraint into the data-access layer—either add an ownership parameter to findSessionOriginChain (e.g., accept userId/ownershipCondition and apply it in its internal DB query) or create a repository wrapper method (e.g., findSessionOriginChainForUser) that always applies the ownershipCondition (with admin bypass logic) when querying messageRequest/session origin chain; ensure the DB query inside the adjusted function uses the same isNull(messageRequest.deletedAt) and eq(messageRequest.sessionId, sessionId) plus ownershipCondition so access control is enforced at the data layer.src/app/[locale]/login/page.tsx-34-34 (1)
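The "push ownership into the data layer" idea can be sketched with an in-memory stand-in for the DB; findSessionOriginChainForUser and the row shape below are hypothetical, not the repository's actual API:

```typescript
type ChainRow = { sessionId: string; ownerUserId: string; step: number };

const chainRows: ChainRow[] = [
  { sessionId: "s1", ownerUserId: "u1", step: 0 },
  { sessionId: "s1", ownerUserId: "u1", step: 1 },
  { sessionId: "s2", ownerUserId: "u2", step: 0 },
];

// The ownership predicate is applied inside the query itself, so a leaked
// sessionId alone is not enough to read another user's chain.
function findSessionOriginChainForUser(
  sessionId: string,
  viewer: { userId: string; isAdmin: boolean },
): ChainRow[] {
  return chainRows.filter(
    (row) =>
      row.sessionId === sessionId &&
      (viewer.isAdmin || row.ownerUserId === viewer.userId),
  );
}

const asOwner = findSessionOriginChainForUser("s1", { userId: "u1", isAdmin: false });
const asOther = findSessionOriginChainForUser("s1", { userId: "u2", isAdmin: false });
const asAdmin = findSessionOriginChainForUser("s1", { userId: "u9", isAdmin: true });
// asOwner/asAdmin see 2 rows; asOther sees none
```

In the real repository the same predicate would become an additional WHERE condition (the existing ownershipCondition) inside the Drizzle query, rather than a post-filter.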
34-34: ⚠️ Potential issue | 🟠 Major — The default site title is hardcoded; switch to an i18n fallback.
Lines 34 and 104 use "Claude Code Hub" as a user-visible default, bypassing the i18n system. Example change:

```diff
-const DEFAULT_SITE_TITLE = "Claude Code Hub";
 @@
-  const [siteTitle, setSiteTitle] = useState(DEFAULT_SITE_TITLE);
+  const [siteTitle, setSiteTitle] = useState(() => t("brand.defaultSiteTitle"));
```
As per coding guidelines
**/*.{ts,tsx,js,jsx}: All user-facing strings must use i18n (5 languages supported: zh-CN, zh-TW, en, ja, ru). Never hardcode display text.
Also applies to: 104-104
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/app/`[locale]/login/page.tsx at line 34, DEFAULT_SITE_TITLE is hardcoded to "Claude Code Hub" (also used again in the page render) and must be replaced with an i18n fallback; update the constant and the usage in the component (DEFAULT_SITE_TITLE and the render at the current page export) to call the project's translation helper (e.g., use translation hook or t function) so the default site title comes from the locale strings for zh-CN, zh-TW, en, ja, and ru, and ensure the translations file contains entries for this key; keep the same identifier (DEFAULT_SITE_TITLE) but initialize it from the i18n lookup so any consumer of the constant gets the localized fallback.src/app/[locale]/dashboard/logs/_components/error-details-dialog/components/LogicTraceTab.tsx-363-379 (1)
363-379: ⚠️ Potential issue | 🟠 Major — Newly added UI copy is hardcoded, breaking the multilingual constraint.
Lines 363, 367, 372, 378, 391, and 475 display English strings directly to users (such as Total, Enabled, providers filtered, Priority); they should use i18n keys. 🛠️ Suggested change (example):

```diff
-<span className="text-muted-foreground">Total:</span>
+<span className="text-muted-foreground">{t("logicTrace.total")}:</span>
-<span className="text-muted-foreground">Enabled:</span>
+<span className="text-muted-foreground">{t("logicTrace.enabled")}:</span>
-<span className="text-muted-foreground">After Group:</span>
+<span className="text-muted-foreground">{t("logicTrace.afterGroup")}:</span>
-<span className="text-muted-foreground">After Model:</span>
+<span className="text-muted-foreground">{t("logicTrace.afterModel")}:</span>
-subtitle={`${originFilteredProviders.length} providers filtered`}
+subtitle={t("logicTrace.providersFiltered", { count: originFilteredProviders.length })}
-subtitle={`Priority ${ctx.selectedPriority}`}
+subtitle={t("logicTrace.priorityLabel", { priority: ctx.selectedPriority })}
```
As per coding guidelines "All user-facing strings must use i18n (5 languages supported: zh-CN, zh-TW, en, ja, ru). Never hardcode display text".
Also applies to: 391-392, 475-476
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/app/`[locale]/dashboard/logs/_components/error-details-dialog/components/LogicTraceTab.tsx around lines 363 - 379, The UI in LogicTraceTab.tsx contains hardcoded user-facing English strings ("Total", "Enabled", "After Group", "After Model", "providers filtered", "Priority") around the ctx fields; replace these literals with i18n keys using the project's translation helper (same pattern as other components) so the displayed text uses localized messages (e.g., call the repository's t/useTranslations function and reference keys like logicTrace.total, logicTrace.enabled, logicTrace.afterGroup, logicTrace.afterModel, logicTrace.providersFiltered, logicTrace.priority). Update the occurrences around ctx.totalProviders, ctx.enabledProviders, ctx.afterGroupFilter, ctx.afterModelFilter and also the other hardcoded strings at the noted locations (lines referenced in the review: 391-392, 475-476) to use the new keys.src/actions/providers.ts-2277-2280 (1)
2277-2280: ⚠️ Potential issue | 🟠 Major — State cleanup in batch delete/undo is not awaited.
clearProviderState is awaited in removeProvider, but not here, which can cause race conditions and unhandled async errors. Suggested fix:

```diff
-for (const id of snapshotProviderIds) {
-  clearProviderState(id);
-  clearConfigCache(id);
-}
+await Promise.all(
+  snapshotProviderIds.map(async (id) => {
+    await clearProviderState(id);
+    clearConfigCache(id);
+  })
+);
 ...
-for (const id of snapshot.providerIds) {
-  clearProviderState(id);
-  clearConfigCache(id);
-}
+await Promise.all(
+  snapshot.providerIds.map(async (id) => {
+    await clearProviderState(id);
+    clearConfigCache(id);
+  })
+);
```
Also applies to: 2346-2349
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/actions/providers.ts` around lines 2277 - 2280, The loop over snapshotProviderIds is calling clearProviderState and clearConfigCache without awaiting them, which can cause race conditions and unhandled async errors (removeProvider awaits clearProviderState elsewhere); fix by awaiting the async cleanup — either await each call inside the for..of (await clearProviderState(id); await clearConfigCache(id);) or run them concurrently and await Promise.all(snapshotProviderIds.map(id => Promise.all([clearProviderState(id), clearConfigCache(id)]))); apply the same change to the second occurrence around the code that mirrors lines 2346-2349 so both places properly await clearProviderState and clearConfigCache.src/actions/providers.ts-2049-2051 (1)
2049-2051: ⚠️ Potential issue | 🟠 Major — The undo token is deleted too early; a failed rollback cannot be retried.
Lines 2049-2051 and 2341-2343 delete the token before performing the actual rollback; if the subsequent rollback write fails, the user loses the ability to undo again.
Also applies to: 2341-2343
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/actions/providers.ts` around lines 2049 - 2051, The undo token is being removed too early (providerPatchUndoStore.delete(parsed.data.undoToken)) before the actual rollback/write completes, which prevents retries if the rollback fails; locate both occurrences of providerPatchUndoStore.delete(parsed.data.undoToken) and move the deletion so it runs only after the rollback persistence operation returns success (i.e., after the code that performs the actual rollback write/DB update completes), add error handling around the rollback so on failure you do not delete the token and you log/propagate the error, and consider making the delete conditional or part of the same transactional/atomic operation to ensure idempotency.src/actions/providers.ts-1264-1315 (1)
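The fix suggested above, consume the token only after the rollback persists, can be sketched as follows. undoStore/performUndo are hypothetical stand-ins for the PR's providerPatchUndoStore and rollback logic:

```typescript
const undoStore = new Map<string, () => Promise<void>>();

async function performUndo(token: string): Promise<boolean> {
  const rollback = undoStore.get(token);
  if (!rollback) return false; // unknown or already-consumed token
  try {
    await rollback(); // persist the rollback first
  } catch {
    return false; // keep the token so the user can retry
  }
  undoStore.delete(token); // consume only after success
  return true;
}

// Simulate a rollback that fails once (e.g. transient DB error), then succeeds.
let attempts = 0;
undoStore.set("tok", async () => {
  attempts += 1;
  if (attempts === 1) throw new Error("transient write failure");
});

const run = (async () => {
  const first = await performUndo("tok"); // false, token survives
  const retry = await performUndo("tok"); // true, token consumed
  return { first, retry, stillStored: undoStore.has("tok") };
})();
```

Keeping deletion after the write makes the operation effectively retryable without making it double-spendable: a second undo after success finds no token and is a no-op.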
1264-1315: ⚠️ Potential issue | 🟠 Major — The single-edit undo snapshot omits the client-restriction fields.
editProvider relies on SINGLE_EDIT_PREIMAGE_FIELD_TO_PROVIDER_KEY on Line 756 to collect the preimage, but the mapping lacks allowed_clients/blocked_clients, so those two kinds of edits cannot be undone correctly. Suggested fix:

```diff
 const SINGLE_EDIT_PREIMAGE_FIELD_TO_PROVIDER_KEY: Record<string, keyof Provider> = {
   ...
   model_redirects: "modelRedirects",
   allowed_models: "allowedModels",
+  allowed_clients: "allowedClients",
+  blocked_clients: "blockedClients",
   limit_5h_usd: "limit5hUsd",
   ...
 };
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/actions/providers.ts` around lines 1264 - 1315, SINGLE_EDIT_PREIMAGE_FIELD_TO_PROVIDER_KEY is missing mappings for client restriction fields so editProvider cannot capture preimages for undo; update the mapping to include "allowed_clients" -> "allowedClients" and "blocked_clients" -> "blockedClients" (ensuring these are valid keys on Provider), then run the unit/undo flow that uses editProvider to verify single-edit snapshots include these fields.
🟡 Minor comments (17)
src/app/v1/_lib/proxy/endpoint-policy.ts-1-1 (1)
1-1: ⚠️ Potential issue | 🟡 Minor — Use the @/ alias import instead of a relative path.
Line 1 currently uses ./endpoint-paths, which is inconsistent with the repository's import conventions; change it to @/app/v1/_lib/proxy/endpoint-paths. Suggested change:

```diff
-import { normalizeEndpointPath, V1_ENDPOINT_PATHS } from "./endpoint-paths";
+import { normalizeEndpointPath, V1_ENDPOINT_PATHS } from "@/app/v1/_lib/proxy/endpoint-paths";
```
As per coding guidelines "Use path alias @/ to reference files in ./src/ directory". 🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/app/v1/_lib/proxy/endpoint-policy.ts` at line 1, Update the import in endpoint-policy.ts to use the project path alias instead of a relative path: replace the current import from "./endpoint-paths" with "@/app/v1/_lib/proxy/endpoint-paths" so references to normalizeEndpointPath and V1_ENDPOINT_PATHS use the alias; ensure any tooling/tsconfig path mappings still resolve and run tests/build after the change.src/app/v1/_lib/proxy-handler.ts-10-10 (1)
10-10: ⚠️ Potential issue | 🟡 Minor — Use the @/ alias for this import path.
The added/modified relative-path import here does not follow the repository convention; change it to start with @/ for consistency. As per coding guidelines "**/*.{ts,tsx,js,jsx}: Use path alias @/ to reference files in ./src/ directory". 🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/app/v1/_lib/proxy-handler.ts` at line 10, Replace the relative import of GuardPipelineBuilder with the project path-alias form: change the import statement that currently references "./proxy/guard-pipeline" to use the "@/..." alias (e.g. "@/app/v1/_lib/proxy/guard-pipeline") so GuardPipelineBuilder is imported via the repo's `@/` alias convention.src/app/api/admin/database/export/route.ts-136-139 (1)
136-139: ⚠️ Potential issue | 🟡 Minor

An invalid `mode` parameter silently falls back to "full", which can hide client errors.

The current implementation silently uses "full" when the `mode` parameter is invalid, making it hard for callers to notice typos or bad parameters (e.g. passing `ledgeronly` instead of `ledgerOnly`). Return a 400 error for invalid modes so the caller is explicitly informed.

Suggested fix:

```diff
   const modeParam = url.searchParams.get("mode") ?? "full";
-  const mode: ExportMode = VALID_EXPORT_MODES.has(modeParam as ExportMode)
-    ? (modeParam as ExportMode)
-    : "full";
+  if (!VALID_EXPORT_MODES.has(modeParam as ExportMode)) {
+    if (lockId) {
+      await releaseBackupLock(lockId, "export").catch(() => {});
+    }
+    return Response.json(
+      { error: `Invalid export mode: ${modeParam}; valid values: full, excludeLogs, ledgerOnly` },
+      { status: 400 }
+    );
+  }
+  const mode = modeParam as ExportMode;
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@src/app/api/admin/database/export/route.ts` around lines 136 - 139, the current logic silently falls back to "full" when modeParam is invalid (code around modeParam, mode, ExportMode and VALID_EXPORT_MODES in route.ts); change it to validate modeParam and, if it's not a member of VALID_EXPORT_MODES, return a 400 Bad Request response with a clear error message naming the invalid value and listing allowed modes instead of defaulting to "full"; keep the successful branch casting to ExportMode when VALID_EXPORT_MODES.has(modeParam) is true and proceed as before.

src/app/api/auth/logout/route.ts-12-16 (1)
12-16: ⚠️ Potential issue | 🟡 Minor

CSRF protection is not enforced in development.

`enforceInDevelopment: process.env.VITEST === "true"` means CSRF protection is only enabled while running Vitest tests; the check is skipped in both local development and production. The same pattern is used in `src/app/api/auth/login/route.ts`, which suggests this design is intentional.

Suggestions:

- Add a comment explaining the rationale for this design (e.g. to improve the development experience), or
- Consider `process.env.NODE_ENV !== "production" || process.env.VITEST === "true"` to keep the CSRF check enforced outside of production test runs.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@src/app/api/auth/logout/route.ts` around lines 12 - 16, the CSRF guard is configured with enforceInDevelopment: process.env.VITEST === "true", which only enables enforcement during Vitest runs; update the createCsrfOriginGuard usage (csrfGuard) to either add a clarifying comment explaining why CSRF is relaxed in development (e.g., to simplify local dev and testing) or change the condition to enforce in production by using a stronger check such as process.env.NODE_ENV === "production" || process.env.VITEST === "true"; make the same consistent change in src/app/api/auth/login/route.ts where createCsrfOriginGuard is used so both logout (csrfGuard) and login have identical, documented enforcement behavior.

src/app/v1/_lib/proxy/auth-guard.ts-16-21 (1)
16-21: ⚠️ Potential issue | 🟡 Minor

Remove the unused `maxAttemptsPerKey` configuration.

The config defines `maxAttemptsPerKey: 20`, but every `check()`, `recordSuccess()`, and `recordFailure()` call in the pre-auth stage passes only `clientIp`, never an API key. Since this is a pre-authentication rate limiter (it runs before key validation), it currently limits by IP only; remove `maxAttemptsPerKey` to avoid misleading future maintainers.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@src/app/v1/_lib/proxy/auth-guard.ts` around lines 16 - 21, the LoginAbusePolicy constructed for proxyAuthPolicy includes an unused maxAttemptsPerKey option: in the pre-auth stage only clientIp is passed to check()/recordSuccess()/recordFailure(), never an API key, so delete the option to avoid confusion. Open the proxyAuthPolicy declaration, remove maxAttemptsPerKey: 20, and keep maxAttemptsPerIp so the LoginAbusePolicy constructor only contains fields that are actually used, keeping its semantics accurate.

src/app/v1/_lib/proxy/client-detector.ts-65-73 (1)
65-73: ⚠️ Potential issue | 🟡 Minor

Potential null-reference risk.

`session.request.message.metadata` is accessed without checking that `session.request.message` exists. If a malformed request body leaves `message` as `null` or `undefined`, this throws a TypeError.

🛡️ Suggested defensive check:

```diff
- const metadata = session.request.message.metadata;
+ const metadata = session.request.message?.metadata;
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@src/app/v1/_lib/proxy/client-detector.ts` around lines 65 - 73, the code accesses session.request.message.metadata without verifying session.request.message exists, which can throw a TypeError; update the logic around session.request.message.metadata in client-detector.ts (the block that currently pushes "metadata-user-id" via signals.push) to defensively check that session.request and session.request.message are non-null/defined (or use optional chaining) before accessing metadata, then retain the existing type checks for metadata and user_id and only call signals.push("metadata-user-id") when all required values are present.

src/actions/users.ts-787-793 (1)
787-793: ⚠️ Potential issue | 🟡 Minor

Inconsistent error-message parameters.

Line 790 calls `tError("BATCH_SIZE_EXCEEDED")` without the `max` parameter, while `batchUpdateUsers` (line 873) uses `tError("BATCH_SIZE_EXCEEDED", { max: MAX_BATCH_SIZE })`. This can produce an incomplete error message.

Suggested fix:

```diff
+const MAX_BATCH_SIZE = 500;
+if (sanitizedIds.length > MAX_BATCH_SIZE) {
   return {
     ok: false,
-    error: tError("BATCH_SIZE_EXCEEDED"),
+    error: tError("BATCH_SIZE_EXCEEDED", { max: MAX_BATCH_SIZE }),
     errorCode: ERROR_CODES.INVALID_FORMAT,
   };
 }
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@src/actions/users.ts` around lines 787 - 793, the error message call when sanitizing IDs uses tError("BATCH_SIZE_EXCEEDED") without the required parameter, resulting in an incomplete message; update the return in the sanitizedIds length check to call tError("BATCH_SIZE_EXCEEDED", { max: MAX_BATCH_SIZE }) (referencing sanitizedIds, tError, MAX_BATCH_SIZE) so it matches how batchUpdateUsers constructs the same error and produces a complete message.

src/app/api/auth/login/route.ts-170-178 (1)
170-178: ⚠️ Potential issue | 🟡 Minor

The rate-limit error message uses an inaccurate translation key.

Using `t?.("loginFailed")` as the message of the rate-limit response is semantically wrong: "loginFailed" implies an authentication failure, not too many requests. Use a dedicated i18n key such as `"rateLimited"` or `"tooManyAttempts"`.

Suggested fix:

```diff
 NextResponse.json(
   {
-    error: t?.("loginFailed") ?? t?.("serverError") ?? "Too many attempts",
+    error: t?.("tooManyAttempts") ?? "Too many attempts",
     errorCode: "RATE_LIMITED",
   },
   { status: 429 }
 )
```

A `tooManyAttempts` translation key needs to be added to the `auth.errors` namespace.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@src/app/api/auth/login/route.ts` around lines 170 - 178, replace the incorrect translation key used for the 429 rate-limit response in route.ts so the message reflects throttling rather than login failure: in the withAuthResponseHeaders/NextResponse.json block (the rate-limited branch) swap t?.("loginFailed") for a dedicated key such as t?.("auth.errors.tooManyAttempts") or t?.("auth.errors.rateLimited") and update the fallback string to "Too many attempts" (or a matching user-facing message). Also add the new translation key (tooManyAttempts or rateLimited) to the auth.errors i18n namespace so localization files contain the proper text.

src/app/api/auth/login/route.ts-232-238 (1)
232-238: ⚠️ Potential issue | 🟡 Minor

The fallback messages for `httpMismatchGuidance` are inappropriate.

If `securityT?.("cookieWarningDescription")` fails to load, the code falls back to `t?.("apiKeyInvalidOrExpired")` or `t?.("serverError")`, neither of which relates to the HTTP/HTTPS mismatch and would confuse users. Use a more relevant hard-coded fallback, or omit the field entirely rather than show misleading text.

Suggested fix:

```diff
 if (hasSecureCookieHttpMismatch(request)) {
   const securityT = await getAuthSecurityTranslations(locale);
-  responseBody.httpMismatchGuidance =
-    securityT?.("cookieWarningDescription") ??
-    t?.("apiKeyInvalidOrExpired") ??
-    t?.("serverError");
+  const guidance = securityT?.("cookieWarningDescription");
+  if (guidance) {
+    responseBody.httpMismatchGuidance = guidance;
+  }
 }
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@src/app/api/auth/login/route.ts` around lines 232 - 238, the current fallback for responseBody.httpMismatchGuidance uses unrelated translations (t("apiKeyInvalidOrExpired") / t("serverError")); update the hasSecureCookieHttpMismatch block so that after calling getAuthSecurityTranslations(locale) you either set responseBody.httpMismatchGuidance to a relevant hard-coded fallback message (e.g. a short HTTPS/cookie mismatch guidance) when securityT is missing or simply omit the httpMismatchGuidance field instead of inserting unrelated error text; adjust the code around hasSecureCookieHttpMismatch, getAuthSecurityTranslations, and responseBody.httpMismatchGuidance (and keep usage of locale and t untouched) to implement one of these two options.

src/app/[locale]/dashboard/availability/_components/endpoint/probe-grid.tsx-67-77 (1)
67-77: ⚠️ Potential issue | 🟡 Minor

The `host:port` compatibility branch in `safeHostnameFromUrl` never runs, so formatting fails.

`new URL("localhost:8080")` does not throw; it treats `localhost` as a custom scheme, leaving `hostname` as an empty string. The current code then returns `null` directly and never reaches the catch block that retries with an `https://` prefix, contradicting the comment's intent. Check whether the first parse produced a hostname before deciding whether to retry with a scheme.

Suggested fix:

```diff
 function safeHostnameFromUrl(input: string): string | null {
   const url = input.trim();
   if (!url) return null;
   try {
-    return new URL(url).hostname || null;
+    const parsedHostname = new URL(url).hostname;
+    if (parsedHostname) return parsedHostname;
   } catch {
-    // Compatibility with legacy/manual entries: allow host:port or scheme-less values.
-    try {
-      return new URL(`https://${url}`).hostname || null;
-    } catch {
-      return null;
-    }
   }
+
+  // Compatibility with legacy/manual entries: allow host:port or scheme-less values.
+  try {
+    return new URL(`https://${url}`).hostname || null;
+  } catch {
+    return null;
+  }
 }
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@src/app/[locale]/dashboard/availability/_components/endpoint/probe-grid.tsx` around lines 67 - 77, the safeHostnameFromUrl function returns null for inputs like "localhost:8080" because new URL(input) treats "localhost" as a scheme and yields an empty hostname so the catch branch never runs; update safeHostnameFromUrl to: after trimming and creating url = input, attempt const parsed = new URL(url) inside try, then if parsed.hostname is truthy return it, otherwise fall through to the second try that does new URL(`https://${url}`) and return its hostname (or null) — reference the safeHostnameFromUrl function and the two new URL(...) parsing branches when making the change.

messages/ja/provider-chain.json-83-88 (1)
83-88: ⚠️ Potential issue | 🟡 Minor

Newly added strings in the Japanese locale are not localized.

Several `filterDetails` strings added on lines 83-88 are still in English and would show mixed languages in the Japanese UI; translate them into Japanese.

Suggested fix:

```diff
- "session_reuse_client_restriction": "Session reuse rejected: client restriction",
- "blocklist_hit": "Blocked by pattern: {pattern}",
- "allowlist_miss": "Not in allowed list",
- "detectedClient": "Detected: {client}",
- "providerAllowlist": "Allowlist: {list}",
- "providerBlocklist": "Blocklist: {list}"
+ "session_reuse_client_restriction": "セッション再利用が拒否されました: クライアント制限",
+ "blocklist_hit": "ブロックパターンに一致: {pattern}",
+ "allowlist_miss": "許可リストに含まれていません",
+ "detectedClient": "検出クライアント: {client}",
+ "providerAllowlist": "許可リスト: {list}",
+ "providerBlocklist": "ブロックリスト: {list}"
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@messages/ja/provider-chain.json` around lines 83 - 88, the new entries ("session_reuse_client_restriction", "blocklist_hit", "allowlist_miss", "detectedClient", "providerAllowlist", "providerBlocklist") are still in English—translate each value into Japanese, preserving placeholders ({pattern}, {client}, {list}) exactly and matching existing localization tone/format; update these six string values with appropriate Japanese text so the Japanese UI shows fully localized messages.

messages/ru/provider-chain.json-83-88 (1)
83-88: ⚠️ Potential issue | 🟡 Minor

English strings are mixed into the Russian locale; complete the localization.

Several newly added `filterDetails` values on lines 83-88 are still in English and would surface as English text in the `ru` UI.

Suggested fix:

```diff
- "session_reuse_client_restriction": "Session reuse rejected: client restriction",
- "blocklist_hit": "Blocked by pattern: {pattern}",
- "allowlist_miss": "Not in allowed list",
- "detectedClient": "Detected: {client}",
- "providerAllowlist": "Allowlist: {list}",
- "providerBlocklist": "Blocklist: {list}"
+ "session_reuse_client_restriction": "Повторное использование сессии отклонено: ограничение клиента",
+ "blocklist_hit": "Заблокировано по шаблону: {pattern}",
+ "allowlist_miss": "Не входит в разрешённый список",
+ "detectedClient": "Обнаружено: {client}",
+ "providerAllowlist": "Список разрешённых: {list}",
+ "providerBlocklist": "Список блокировок: {list}"
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@messages/ru/provider-chain.json` around lines 83 - 88, the Russian localization contains untranslated English strings for the filterDetails keys; update the values for "session_reuse_client_restriction", "blocklist_hit", "allowlist_miss", "detectedClient", "providerAllowlist", and "providerBlocklist" in messages/ru/provider-chain.json to proper Russian equivalents (preserve placeholders like {pattern}, {client}, {list}) so the UI shows fully localized text; ensure punctuation and casing match existing Russian entries.

src/app/[locale]/login/page.tsx-351-359 (1)
351-359: ⚠️ Potential issue | 🟡 Minor

The password-visibility button is removed from the keyboard focus order.

With `tabIndex={-1}` set on line 358, keyboard users cannot focus this button, which hurts accessibility.

Suggested change:

```diff
- tabIndex={-1}
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@src/app/[locale]/login/page.tsx` around lines 351 - 359, the password visibility toggle button in the login page component is made unfocusable by tabIndex={-1}, breaking keyboard accessibility; update the button (the element using onClick={() => setShowPassword((prev) => !prev)} and aria-label based on showPassword) to be keyboard-focusable by removing the tabIndex prop or setting tabIndex={0} so it can receive focus and be activated via keyboard; ensure the aria-label logic for showPassword remains intact.

messages/zh-CN/provider-chain.json-83-88 (1)
83-88: ⚠️ Potential issue | 🟡 Minor

Untranslated English text in filterDetails.

Same issue as the zh-TW file; the following strings should be translated into Simplified Chinese: `session_reuse_client_restriction`, `blocklist_hit`, `allowlist_miss`, `detectedClient`, `providerAllowlist`, `providerBlocklist`.

Suggested Simplified Chinese translations:

```diff
- "session_reuse_client_restriction": "Session reuse rejected: client restriction",
- "blocklist_hit": "Blocked by pattern: {pattern}",
- "allowlist_miss": "Not in allowed list",
- "detectedClient": "Detected: {client}",
- "providerAllowlist": "Allowlist: {list}",
- "providerBlocklist": "Blocklist: {list}"
+ "session_reuse_client_restriction": "会话复用被拒绝:客户端限制",
+ "blocklist_hit": "被模式阻止:{pattern}",
+ "allowlist_miss": "不在允许列表中",
+ "detectedClient": "检测到:{client}",
+ "providerAllowlist": "允许列表:{list}",
+ "providerBlocklist": "阻止列表:{list}"
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@messages/zh-CN/provider-chain.json` around lines 83 - 88, the listed message keys (session_reuse_client_restriction, blocklist_hit, allowlist_miss, detectedClient, providerAllowlist, providerBlocklist) are still in English; replace their values with appropriate Simplified Chinese translations (matching the intent in zh-TW but using 简体中文) so these keys read as user-facing Chinese strings, ensuring placeholders like {pattern}, {client}, and {list} are preserved exactly.

messages/zh-TW/provider-chain.json-83-88 (1)
83-88: ⚠️ Potential issue | 🟡 Minor

Untranslated English text in filterDetails.

The following strings should be translated into Traditional Chinese for language consistency: `session_reuse_client_restriction`, `blocklist_hit`, `allowlist_miss`, `detectedClient`, `providerAllowlist`, `providerBlocklist`.

Suggested Traditional Chinese translations:

```diff
- "session_reuse_client_restriction": "Session reuse rejected: client restriction",
- "blocklist_hit": "Blocked by pattern: {pattern}",
- "allowlist_miss": "Not in allowed list",
- "detectedClient": "Detected: {client}",
- "providerAllowlist": "Allowlist: {list}",
- "providerBlocklist": "Blocklist: {list}"
+ "session_reuse_client_restriction": "會話複用被拒絕:客戶端限制",
+ "blocklist_hit": "被模式阻擋:{pattern}",
+ "allowlist_miss": "不在允許列表中",
+ "detectedClient": "檢測到:{client}",
+ "providerAllowlist": "允許列表:{list}",
+ "providerBlocklist": "阻擋列表:{list}"
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@messages/zh-TW/provider-chain.json` around lines 83 - 88, translate the six untranslated English values so the Traditional Chinese locale is consistent: update the keys "session_reuse_client_restriction", "blocklist_hit", "allowlist_miss", "detectedClient", "providerAllowlist", and "providerBlocklist" to their appropriate 繁體中文 strings (e.g., replace "Session reuse rejected: client restriction" with a 繁體中文 equivalent and similarly translate "Blocked by pattern: {pattern}", "Not in allowed list", "Detected: {client}", "Allowlist: {list}", and "Blocklist: {list}"), preserving the placeholder tokens ({pattern}, {client}, {list}) exactly as-is.

src/app/[locale]/dashboard/logs/_components/error-details-dialog/types.ts-51-52 (1)
51-52: ⚠️ Potential issue | 🟡 Minor

Make the semantic boundary between `null` and `undefined` explicit.

This field is optional with type `boolean | null`; document in the comment that `null` means "explicitly unknown" while `undefined` means "not returned/not provided", so the individual tabs don't interpret it inconsistently.

✍️ Suggested comment semantics:

```diff
- /** Whether swap cache TTL billing was applied */
+ /**
+  * Whether swap cache TTL billing was applied.
+  * true/false: explicitly recorded; null: explicitly unknown; undefined: field not returned by upstream
+  */
  swapCacheTtlApplied?: boolean | null;
```

Based on learnings: In TypeScript interfaces, explicitly document and enforce distinct meanings for null and undefined.
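The proposed tri-state contract can also be made explicit at the consumer. A minimal illustrative sketch (`describeSwapCacheTtl` is a hypothetical helper, not part of the codebase):

```typescript
// Tri-state contract for swapCacheTtlApplied:
//   undefined -> field not returned by upstream
//   null      -> explicitly recorded as unknown
//   boolean   -> definitively applied / not applied
function describeSwapCacheTtl(value: boolean | null | undefined): string {
  if (value === undefined) return "not reported by upstream";
  if (value === null) return "explicitly unknown";
  return value ? "applied" : "not applied";
}
```

Branching on all three states up front keeps each tab from accidentally collapsing `null` and `undefined` into one case.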
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/app/[locale]/dashboard/logs/_components/error-details-dialog/types.ts` around lines 51 - 52, the property swapCacheTtlApplied is currently declared as an optional boolean | null; update its doc comment to explicitly state the semantic difference: "null" means "explicitly unknown/checked but no value" and "undefined" means "field not returned / not provided by the backend"; then either keep the current type and rely on that documented contract or make the type explicit by changing it to boolean | null | undefined to reflect both states; ensure the comment sits immediately above swapCacheTtlApplied and references this exact name so consumers know how to interpret null vs undefined.

messages/zh-TW/dashboard.json-343-343 (1)
343-343: ⚠️ Potential issue | 🟡 Minor

Unify Traditional Chinese vocabulary to avoid Simplified/Traditional mixing and term drift.

The phrasing on line 343 could be closer to zh-TW usage (e.g. "透過", "提升"), and "已禁用" on line 786 is inconsistent with the "已停用" used elsewhere in the file; unifying these reduces cognitive friction.

✏️ Suggested wording (example):

```diff
- "cacheOptimizationHint": "會話複用通過在同一對話中保持供應商親和性來優化效能,減少選擇開銷並提高快取命中率。",
+ "cacheOptimizationHint": "會話複用透過在同一對話中維持供應商親和性來優化效能,減少選擇開銷並提升快取命中率。",
- "disabled": "已禁用",
+ "disabled": "已停用",
```

Also applies to: 786-790

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@messages/zh-TW/dashboard.json` at line 343, the translation for the key "cacheOptimizationHint" and other nearby strings (including the instance using "已禁用") mixes Simplified and Traditional phrasing; update the text to use consistent zh-TW vocabulary (e.g., replace "通过"→"透過", "提高"→"提升", and normalize "已禁用"→"已停用") so all UI copy uses the same Traditional Taiwanese style across the file.
```ts
console.log("[SKIP] DSN/DATABASE_URL not configured, skipping ledger consistency verification.");
process.exit(0);
```
🧩 Analysis chain
🌐 Web query:
Node.js process.exit() finally block execution behavior
💡 Result:
In Node.js, process.exit() terminates the process synchronously and does not return to JavaScript, so it can prevent a finally block from running if it’s called inside the try (or before control has a chance to leave the try...finally construct). Node also warns that process.exit() forces an immediate exit even if async operations are pending. [1]
Separately, JavaScript’s finally runs when control flow exits the try...catch...finally construct (e.g., via return/throw). But process termination is outside normal control flow, so those guarantees don’t help once you explicitly end the process. [2]
What to do instead (common pattern):
What to do instead (common pattern):

- Prefer `process.exitCode = <n>` and then let the event loop drain (so `finally`/cleanup can run). Node explicitly recommends `process.exitCode` for graceful exit. [1]
- Or `throw`/`return` out of the function, do cleanup in `finally`, and only call `process.exit()` after cleanup if you truly must.

Sources

- Node.js docs — `process.exit([code])` forces synchronous termination; pending async work may be abandoned; recommends `process.exitCode` for graceful exit. [1]
- MDN — `finally` executes before control flow exits the `try...catch...finally` construct. [2]
🏁 Script executed:
```shell
cat -n scripts/verify-ledger-consistency.ts | head -150
```

Repository: ding113/claude-code-hub
Length of output: 4625
Replace `process.exit()` with `process.exitCode` plus `return` so the database-connection cleanup in the `finally` block can execute.

In Node.js/Bun, `process.exit()` terminates the process synchronously without waiting for `finally` blocks, so the calls on lines 121 and 124 prevent `await client.end()` on line 126 from running, potentially leaving the database connection open. Set `process.exitCode` and `return` instead, giving the event loop a chance to complete async cleanup.

Suggested change:
```diff
   if (!dsn) {
     console.log("[SKIP] DSN/DATABASE_URL not configured, skipping ledger consistency verification.");
-    process.exit(0);
+    process.exitCode = 0;
+    return;
   }
@@
   if (criticalIssues.length > 0) {
     console.error(`RESULT: FAILED (${criticalIssues.join(", ")})`);
-    process.exit(1);
+    process.exitCode = 1;
+    return;
   }
   console.log("RESULT: PASS (no critical mismatches)");
-  process.exit(0);
+  process.exitCode = 0;
+  return;
 } finally {
   await client.end();
 }
```

This also applies to line 132.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
 console.log("[SKIP] DSN/DATABASE_URL not configured, skipping ledger consistency verification.");
-process.exit(0);
+process.exitCode = 0;
+return;
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@scripts/verify-ledger-consistency.ts` around lines 39 - 40, Replace direct
calls to process.exit(...) with setting process.exitCode and returning so the
surrounding finally block can run and allow asynchronous cleanup (e.g., change
the console.log + process.exit(0) branch to set process.exitCode = 0; return).
Update any other process.exit usages in this file similarly (including the other
early-exit in the same script) to ensure the finally block and await
client.end() execute and the DB client is closed properly.
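The recommended pattern can be demonstrated in isolation. A minimal sketch, where `work` and `cleanup` are hypothetical stand-ins for the script's query logic and `client.end()`:

```typescript
// Setting process.exitCode and returning lets the finally block run;
// calling process.exit() here would terminate before cleanup completes.
async function runWithCleanup(
  work: () => Promise<number>,
  cleanup: () => Promise<void>
): Promise<void> {
  try {
    process.exitCode = await work();
    return; // do NOT call process.exit() here
  } finally {
    await cleanup(); // always runs, e.g. `await client.end()`
  }
}
```

The process then exits naturally with the recorded code once the event loop drains.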
```ts
await providerPatchUndoStore.set(undoToken, {
  undoToken,
  operationId,
  providerIds: [providerId],
  preimage: {
    [providerId]: preimageFields,
  },
  patch: EMPTY_PROVIDER_BATCH_PATCH,
});
```
Writing the Redis snapshot after the primary data has been committed can produce "actually succeeded but the API reports failure".

For example, lines 769/845/2266 have already completed the database change; if the subsequent Redis writes at lines 810/850/1998/2271 fail, control falls into the outer catch and a failure is returned, so the caller misjudges the outcome and may resubmit.

Also applies to: 850-855, 1994-2006, 2271-2275
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/actions/providers.ts` around lines 810 - 818, The Redis snapshot write
(providerPatchUndoStore.set with keys undoToken, operationId, providerIds,
preimage and patch using EMPTY_PROVIDER_BATCH_PATCH) happens after the primary
DB commit and currently can throw into the outer catch, causing callers to see a
failure even though the DB change succeeded; change the logic to make this Redis
write best-effort: wrap providerPatchUndoStore.set in its own try/catch (for all
similar calls that create undo snapshots), log the error including identifying
context (undoToken/operationId/providerId) but do not rethrow or abort the flow,
and optionally enqueue a background retry if you have a retry mechanism — ensure
the API returns success when the DB commit succeeded.
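The best-effort shape the prompt describes can be sketched as follows. This is an illustrative helper under stated assumptions: `store` and the snapshot shape are simplified stand-ins for `providerPatchUndoStore` and its payload, not the repository's actual code:

```typescript
interface UndoSnapshot {
  undoToken: string;
  operationId: string;
  providerIds: number[];
}

// The DB commit has already succeeded when this runs, so a Redis failure
// must be logged with identifying context but never rethrown.
async function writeUndoSnapshotBestEffort(
  store: { set(key: string, value: UndoSnapshot): Promise<void> },
  snapshot: UndoSnapshot,
  log: (msg: string, ctx: Record<string, unknown>) => void
): Promise<boolean> {
  try {
    await store.set(snapshot.undoToken, snapshot);
    return true;
  } catch (error) {
    log("undo snapshot write failed", {
      undoToken: snapshot.undoToken,
      operationId: snapshot.operationId,
      error: error instanceof Error ? error.message : String(error),
    });
    return false; // caller still reports success for the committed DB change
  }
}
```

The boolean return lets callers optionally surface a "undo unavailable" hint or enqueue a retry without turning the whole operation into an error.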
```ts
// Only reflect specific origin when credentials are explicitly opted-in.
// The proxy API uses Bearer tokens; reflecting arbitrary origins with
// credentials enabled would let any malicious site make credentialed
// cross-origin requests.
if (options.allowCredentials && options.origin) {
  headers.set("Access-Control-Allow-Origin", options.origin);
  headers.append("Vary", "Origin");
  headers.set("Access-Control-Allow-Credentials", "true");
}
```
Reflecting an arbitrary Origin in credentials mode is a cross-site risk.

Line 29 currently reflects the request origin whenever `allowCredentials` is true, and line 32 also returns `Access-Control-Allow-Credentials: true`. With cookie/session credentials this exposes protected responses to any site. Validate the origin against an allowlist before reflecting it.

Suggested fix example:
```diff
 function buildCorsHeaders(options: {
   origin?: string | null;
   requestHeaders?: string | null;
   allowCredentials?: boolean;
+  allowedOrigins?: readonly string[];
 }) {
   const headers = new Headers(DEFAULT_CORS_HEADERS);
-  if (options.allowCredentials && options.origin) {
+  if (
+    options.allowCredentials &&
+    options.origin &&
+    options.allowedOrigins?.includes(options.origin)
+  ) {
     headers.set("Access-Control-Allow-Origin", options.origin);
     headers.append("Vary", "Origin");
     headers.set("Access-Control-Allow-Credentials", "true");
   }
```
Verify each finding against the current code and only fix it if needed.
In `@src/app/v1/_lib/cors.ts` around lines 25 - 33, The current CORS block
reflects any options.origin when options.allowCredentials is true, which is
unsafe; update the logic in the CORS handler that sets headers (the block using
options.allowCredentials, options.origin, headers.set, headers.append) to first
validate the incoming origin against a configured allowlist (e.g.
options.allowedOrigins or a similar whitelist source) and only set
Access-Control-Allow-Origin and Access-Control-Allow-Credentials when the origin
is explicitly allowed; if the origin is not in the allowlist, do not reflect it
and do not set Access-Control-Allow-Credentials (you may still set Vary: Origin
as needed).
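The allowlist check from the suggested fix can be isolated into a small predicate. An illustrative sketch, not the repository's implementation (`corsHeadersFor` is a hypothetical name):

```typescript
// Reflect an origin with credentials only when it is explicitly allowlisted;
// otherwise emit neither Allow-Origin nor Allow-Credentials.
function corsHeadersFor(
  origin: string | null,
  allowCredentials: boolean,
  allowedOrigins: readonly string[]
): Record<string, string> {
  const headers: Record<string, string> = {};
  if (allowCredentials && origin && allowedOrigins.includes(origin)) {
    headers["Access-Control-Allow-Origin"] = origin;
    headers["Vary"] = "Origin";
    headers["Access-Control-Allow-Credentials"] = "true";
  }
  return headers;
}
```

A non-allowlisted origin simply gets no CORS headers, so the browser blocks the credentialed cross-origin read by default.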
Code Review
This is a massive and impressive release that touches on many core aspects of the application, from authentication and billing to performance and developer experience. The introduction of the usage_ledger for decoupling analytics is a great architectural improvement. The new features like batch provider editing with undo, client restrictions, and Langfuse integration are well-implemented. The security enhancements, particularly the CSRF protection, rate limiting on login, and safer CORS handling, are critical and well-executed. I've identified a couple of medium-severity issues related to data consistency that are worth discussing, but overall, this is an excellent set of changes.
```sql
EXCEPTION WHEN OTHERS THEN
  RAISE WARNING 'fn_upsert_usage_ledger failed for request_id=%: %', NEW.id, SQLERRM;
  RETURN NEW;
END;
```
The EXCEPTION WHEN OTHERS block in the fn_upsert_usage_ledger trigger only raises a WARNING. This means if the trigger fails to update the usage_ledger for any reason, the main request will still succeed, but the billing and analytics data in usage_ledger will become out of sync. This could lead to silent data discrepancies if database warnings are not actively monitored. For a production environment, you might consider a more robust error handling strategy, such as logging failures to a separate dead-letter table for later inspection and reprocessing, to ensure the integrity of billing data.
```diff
 const modelBreakdown = await db
   .select({
     model: messageRequest.model,
     requests: sql<number>`count(*)::int`,
     cost: sql<string>`COALESCE(sum(${messageRequest.costUsd}), 0)`,
     inputTokens: sql<number>`COALESCE(sum(${messageRequest.inputTokens}), 0)::double precision`,
     outputTokens: sql<number>`COALESCE(sum(${messageRequest.outputTokens}), 0)::double precision`,
     cacheCreationTokens: sql<number>`COALESCE(sum(${messageRequest.cacheCreationInputTokens}), 0)::double precision`,
     cacheReadTokens: sql<number>`COALESCE(sum(${messageRequest.cacheReadInputTokens}), 0)::double precision`,
     // User breakdown (across all keys)
     userRequests: sql<number>`count(*)::int`,
     userCost: sql<string>`COALESCE(sum(${messageRequest.costUsd}), 0)`,
     userInputTokens: sql<number>`COALESCE(sum(${messageRequest.inputTokens}), 0)::double precision`,
     userOutputTokens: sql<number>`COALESCE(sum(${messageRequest.outputTokens}), 0)::double precision`,
     userCacheCreationTokens: sql<number>`COALESCE(sum(${messageRequest.cacheCreationInputTokens}), 0)::double precision`,
     userCacheReadTokens: sql<number>`COALESCE(sum(${messageRequest.cacheReadInputTokens}), 0)::double precision`,
     userCacheCreation5mTokens: sql<number>`COALESCE(sum(${messageRequest.cacheCreation5mInputTokens}), 0)::double precision`,
     userCacheCreation1hTokens: sql<number>`COALESCE(sum(${messageRequest.cacheCreation1hInputTokens}), 0)::double precision`,
     // Key breakdown (FILTER aggregation)
     keyRequests: sql<number>`count(*) FILTER (WHERE ${messageRequest.key} = ${keyString})::int`,
     keyCost: sql<string>`COALESCE(sum(${messageRequest.costUsd}) FILTER (WHERE ${messageRequest.key} = ${keyString}), 0)`,
     keyInputTokens: sql<number>`COALESCE(sum(${messageRequest.inputTokens}) FILTER (WHERE ${messageRequest.key} = ${keyString}), 0)::double precision`,
     keyOutputTokens: sql<number>`COALESCE(sum(${messageRequest.outputTokens}) FILTER (WHERE ${messageRequest.key} = ${keyString}), 0)::double precision`,
     keyCacheCreationTokens: sql<number>`COALESCE(sum(${messageRequest.cacheCreationInputTokens}) FILTER (WHERE ${messageRequest.key} = ${keyString}), 0)::double precision`,
     keyCacheReadTokens: sql<number>`COALESCE(sum(${messageRequest.cacheReadInputTokens}) FILTER (WHERE ${messageRequest.key} = ${keyString}), 0)::double precision`,
     keyCacheCreation5mTokens: sql<number>`COALESCE(sum(${messageRequest.cacheCreation5mInputTokens}) FILTER (WHERE ${messageRequest.key} = ${keyString}), 0)::double precision`,
     keyCacheCreation1hTokens: sql<number>`COALESCE(sum(${messageRequest.cacheCreation1hInputTokens}) FILTER (WHERE ${messageRequest.key} = ${keyString}), 0)::double precision`,
   })
   .from(messageRequest)
   .where(
     and(
-      eq(messageRequest.userId, session.user.id),
+      eq(messageRequest.userId, userId),
       isNull(messageRequest.deletedAt),
       EXCLUDE_WARMUP_CONDITION,
-      startTime ? gte(messageRequest.createdAt, new Date(startTime)) : undefined,
-      endTime ? lt(messageRequest.createdAt, new Date(endTime)) : undefined
+      startDate ? gte(messageRequest.createdAt, startDate) : undefined,
+      endDate ? lt(messageRequest.createdAt, endDate) : undefined
     )
   )
   .groupBy(messageRequest.model)
   .orderBy(sql`sum(${messageRequest.costUsd}) DESC`);
```
The getMyStatsSummary function queries the messageRequest table to aggregate statistics. A major goal of this PR is to decouple billing and statistics from the main logs table by introducing usage_ledger. Other parts of the application have been migrated to use usage_ledger for this purpose. To maintain consistency and fully realize the benefits of this decoupling (like improved query performance on a smaller, more targeted table), this query should also be migrated to use usage_ledger. The usage_ledger table appears to contain all the necessary columns for this aggregation.
Code Review Summary
This is a massive release PR (v0.6.1) that introduces significant new features including Langfuse integration, usage ledger decoupling, cache hit rate alerts, and provider scheduled active time. The code is generally well-structured with comprehensive error handling and good test coverage.
PR Size: XL
- Lines changed: 91,441 (85,377 additions + 6,060 deletions)
- Files changed: 499
Recommendation: Given the size of this PR, consider splitting future releases into smaller, more focused PRs for easier review:
- Auth/Security refactoring
- Langfuse integration
- Usage ledger decoupling
- Cache hit rate alerts
- Provider scheduled active time
- UI/UX improvements
Issues Found
| Category | Critical | High | Medium | Low |
|---|---|---|---|---|
| Logic/Bugs | 0 | 0 | 0 | 0 |
| Security | 0 | 0 | 0 | 0 |
| Error Handling | 0 | 0 | 1 | 0 |
| Types | 0 | 0 | 0 | 0 |
| Comments/Docs | 0 | 0 | 0 | 0 |
| Tests | 0 | 0 | 0 | 0 |
| Simplification | 0 | 0 | 0 | 0 |
Medium Priority Issues (Should Consider Fixing)
1. Empty catch block in session.ts (line 812)
- File: `src/app/v1/_lib/proxy/session.ts`
- Function: `resolveSessionEndpointPolicy`
- Issue: `} catch {}` silently swallows errors without logging
- Violates CLAUDE.md rule: "No Silent Failures: Any error caught without logging or user feedback is a CRITICAL defect"
- Suggested fix:

```ts
} catch (error) {
  logger.debug("[ProxySession] Failed to resolve endpoint policy from URL", {
    error: error instanceof Error ? error.message : String(error),
    pathname: requestUrl.pathname,
  });
}
```

Review Coverage
- Logic and correctness - Clean
- Security (OWASP Top 10) - Clean (Redis eval is safe, SQL uses Drizzle ORM properly)
- Error handling - One minor issue noted
- Type safety - Clean (any usage is primarily in test mocks)
- Documentation accuracy - Clean
- Test coverage - Adequate (new features have corresponding tests)
- Code clarity - Good
Positive Observations
- Comprehensive test coverage for new features (cache-hit-rate-alert, usage-ledger, langfuse)
- Proper error logging throughout the codebase
- Good use of TypeScript types and validation
- Security-conscious implementation (CSRF protection, auth refactoring)
Automated review by Claude AI
```ts
    });
  })
  .catch(() => {});
```
[Critical] [ERROR-SWALLOWED] Empty .catch(() => {}) swallows fetch failures
Evidence: src/app/[locale]/login/page.tsx:136 and src/app/[locale]/login/page.tsx:164
```ts
.catch(() => {});
```

Why this is a problem: If `/api/version` or `/api/system-settings` fails, the UI silently falls back with no log and no user-visible hint, which makes production debugging much harder.
Suggested fix:

```ts
.catch((err) => {
  console.error("LoginPage: failed to fetch /api/version", err);
});
```

(Apply the same pattern to the `/api/system-settings` fetch.)
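More generally, both fire-and-forget fetches could go through one small helper that logs the failure and resolves to a fallback instead of swallowing the rejection. A sketch (the helper name and log format are mine, not the PR's):

```typescript
// Resolve a promise, but log the failure with a label and return a
// fallback value instead of silently swallowing the rejection.
async function withLoggedFallback<T>(
  promise: Promise<T>,
  label: string,
  fallback: T,
): Promise<T> {
  try {
    return await promise;
  } catch (err) {
    console.error(`${label}: request failed`, err);
    return fallback;
  }
}

// Usage would mirror the login-page fetches:
//   const version = await withLoggedFallback(
//     fetch("/api/version").then((r) => r.json()),
//     "LoginPage /api/version",
//     { version: "unknown" },
//   );
```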
```ts
return withAuthResponseHeaders(
  NextResponse.json(
    { error: t?.("apiKeyRequired") ?? "API key is required" },
    { status: 400 }
```
[High] [STANDARD-VIOLATION] Login API returns hardcoded English fallback strings instead of i18n
Violated guideline (CLAUDE.md): **i18n Required** - All user-facing strings must use i18n (5 languages supported). Never hardcode display text
Evidence: src/app/api/auth/login/route.ts:196
```ts
{ error: t?.("apiKeyRequired") ?? "API key is required" }
```

(Also at `src/app/api/auth/login/route.ts:173`, `:204`, `:217`, `:228`, `:263`, `:299`.)
Why this is a problem: These strings are returned to the login UI and can be shown verbatim to non-English users, breaking localization consistency.
Suggested fix (pattern; apply to all occurrences):

```ts
const error = t?.("apiKeyRequired");
return withAuthResponseHeaders(
  NextResponse.json(
    shouldIncludeFailureTaxonomy(request)
      ? { errorCode: "KEY_REQUIRED", ...(error ? { error } : {}) }
      : { ...(error ? { error } : {}) },
    { status: 400 },
  ),
);
```

```diff
@@ -411,13 +446,28 @@ export function formatProviderTimeline(
   if (ctx.filteredProviders && ctx.filteredProviders.length > 0) {
     timeline += `\n${t("timeline.filtered")}:\n`;
     for (const f of ctx.filteredProviders) {
-      const icon = f.reason === "circuit_open" ? "⚡" : "💰";
+      const icon =
+        f.reason === "circuit_open" ? "⚡" : f.reason === "client_restriction" ? "🚫" : "💰";
       const detailsText = f.details
```
[High] [STANDARD-VIOLATION] Emoji character introduced in a string literal
Violated guideline (CLAUDE.md): **No Emoji in Code** - Never use emoji characters in any code, comments, or string literals
Evidence: `src/lib/utils/provider-chain-formatter.ts:450`

```ts
const icon =
  f.reason === "circuit_open"
    ? "\u26A1"
    : f.reason === "client_restriction"
      ? "\u{1F6AB}"
      : "\u{1F4B0}";
```

Why this is a problem: Emoji in output strings can render inconsistently across terminals/fonts and violates the repo rule.
Suggested fix:

```ts
const icon =
  f.reason === "circuit_open" ? "!" : f.reason === "client_restriction" ? "X" : "$";
```
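As a design note, a lookup table keeps this mapping extensible as new filter reasons appear, instead of growing the ternary chain. A sketch using the ASCII markers from the suggested fix (any reason names beyond the two in the diff are assumptions):

```typescript
// Sketch: map filter reasons to plain-ASCII markers via a lookup table
// instead of nested ternaries. Unknown reasons fall back to "$", which
// mirrors the original cost-related default branch.
const FILTER_ICONS: Record<string, string> = {
  circuit_open: "!",
  client_restriction: "X",
};

function filterIcon(reason: string): string {
  return FILTER_ICONS[reason] ?? "$";
}

console.log(filterIcon("circuit_open")); // "!"
console.log(filterIcon("over_budget")); // "$" (fallback; reason name is hypothetical)
```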
Code Review Summary
This PR is XL (499 files, 91,437 lines changed) and bundles multiple major refactors/features, which makes review, rollback, and root-cause isolation high risk. The issues called out below are limited to high-confidence defects/standard violations on changed lines.
PR Size: XL
- Lines changed: 91,437
- Files changed: 499
- Split suggestion: separate into (1) auth/login/session changes, (2) provider batch ops + UI, (3) Langfuse/observability, (4) DB migrations/backup tooling, (5) remaining UI/test refactors.
Issues Found
| Category | Critical | High | Medium | Low |
|---|---|---|---|---|
| Logic/Bugs | 0 | 2 | 0 | 0 |
| Security | 0 | 0 | 0 | 0 |
| Error Handling | 1 | 0 | 0 | 0 |
| Types | 0 | 0 | 0 | 0 |
| Comments/Docs | 0 | 0 | 0 | 0 |
| Tests | 0 | 0 | 0 | 0 |
| Simplification | 0 | 0 | 0 | 0 |
Critical Issues (Must Fix)
- [ERROR-SWALLOWED] Swallowed promise rejections on the login page hide `/api/version` and `/api/system-settings` failures (`src/app/[locale]/login/page.tsx:136`, `src/app/[locale]/login/page.tsx:164`).
High Priority Issues (Should Fix)
- [STANDARD-VIOLATION] Hardcoded English fallback error strings returned from the login API bypass i18n (`src/app/api/auth/login/route.ts:173`, `:196`, `:204`, `:217`, `:228`, `:263`, `:299`).
- [STANDARD-VIOLATION] Emoji character introduced in a string literal (`src/lib/utils/provider-chain-formatter.ts:450`).
Review Coverage
- Logic and correctness
- Security (OWASP Top 10)
- Error handling
- Type safety
- Documentation accuracy
- Test coverage
- Code clarity
Automated review by Codex AI
Applied size/XL label to PR #845.
Left 3 inline review comments (diff lines only):
- `src/app/[locale]/login/page.tsx:136` (+ `:164`) — empty `.catch(() => {})` swallows fetch failures
- `src/app/api/auth/login/route.ts:196` (+ other listed lines) — hardcoded English fallback error strings bypass i18n
- `src/lib/utils/provider-chain-formatter.ts:450` — emoji introduced in a string literal (violates `CLAUDE.md`)
Submitted the required PR review summary via gh pr review --comment.
Summary
Release v0.6.1 is a major update that significantly refactors CCH's authentication, billing, and logging components while fixing several known issues and introducing new features.

New Features
- Observability - Langfuse Integration: `src/lib/langfuse/`; dependencies: `@langfuse/client`, `@langfuse/otel`, `@langfuse/tracing`
- Provider Batch Operations Refactor
- Usage Ledger Decoupling: `usage_ledger` table for immutable audit logging
- Cache Hit Rate Anomaly Alert
- Decision Chain Tracing
- Provider Scheduled Active Time: `activeTimeStart`, `activeTimeEnd` (HH:mm format)
- Client Restriction Refactor: `blockedClients` field for user/key/provider level blocking
- Swap Cache TTL Billing Option: `swapCacheTtlBilling` option for providers
- Performance Optimizations: `DISTINCT ON` with `LATERAL JOIN` for better performance

Bug Fixes
- `/v1/responses/compact` endpoint handling

Other Updates

Breaking Changes
- `joinClaudePool` removed

Important: Due to authentication refactoring, all existing client sessions will be invalidated after updating to v0.6.x.

Migration Notes
- `drizzle/0068_*` through `drizzle/0077_*`

File Changes Summary

Testing
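The provider scheduled active time feature (`activeTimeStart`/`activeTimeEnd` in HH:mm) implies a window check that must handle windows crossing midnight. A hedged sketch, assuming a half-open `[start, end)` window; the function names and edge-case semantics are mine, not the shipped code:

```typescript
// Convert "HH:mm" to minutes since midnight.
function toMinutes(hhmm: string): number {
  const [h, m] = hhmm.split(":").map(Number);
  return h * 60 + m;
}

// True when `now` falls inside the [start, end) window. A window whose
// end is earlier than its start is treated as crossing midnight.
function isWithinActiveTime(start: string, end: string, now: string): boolean {
  const s = toMinutes(start);
  const e = toMinutes(end);
  const n = toMinutes(now);
  if (s === e) return true; // degenerate window: treat as always active (assumption)
  return s < e ? n >= s && n < e : n >= s || n < e; // overnight wrap
}

console.log(isWithinActiveTime("09:00", "18:00", "12:30")); // true
console.log(isWithinActiveTime("22:00", "06:00", "01:00")); // true (overnight)
console.log(isWithinActiveTime("22:00", "06:00", "12:00")); // false
```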
Chinese Description
This update significantly refactors CCH's authentication, billing, and logging components and fixes several known issues.
New Features
Other Updates
Breaking Change
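The cache hit rate anomaly alert from the feature list reduces to comparing a window's observed hit rate against a baseline with a drop threshold. A minimal sketch under that assumption (the function name, parameters, and threshold semantics are illustrative, not the shipped implementation):

```typescript
// Flag an anomaly when the observed cache hit rate drops more than
// `maxDrop` (absolute) below the baseline rate. Returns false when the
// window has no requests, so empty windows never alert.
function isCacheHitRateAnomalous(
  hits: number,
  total: number,
  baselineRate: number,
  maxDrop: number,
): boolean {
  if (total === 0) return false;
  const rate = hits / total;
  return baselineRate - rate > maxDrop;
}

console.log(isCacheHitRateAnomalous(40, 100, 0.8, 0.2)); // true: rate dropped 0.8 -> 0.4
console.log(isCacheHitRateAnomalous(75, 100, 0.8, 0.2)); // false: drop is only 0.05
```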
Description enhanced by Claude AI