
feat: Add Token usage statistics #30

Open
david188888 wants to merge 1 commit into ErlichLiu:main from david188888:feat/usage-statistics

Conversation


@david188888 david188888 commented Feb 18, 2026

Feature overview

Implements a complete Token usage statistics system to help users track API calls and estimate costs.

Key features

Data collection

  • Collects Token usage (input/output) for every conversation
  • Supports mainstream Providers: Anthropic, OpenAI, Google, and others
  • Aggregates usage by day and by model
  • Estimates API cost based on a configurable pricing table

Visualization UI

  • New UsageSettings statistics page
  • Tech-blue design style with dark/light theme support
  • Overview cards (total Tokens, conversation count, estimated cost)
  • Trend charts (7/30/90 days)
  • Pie chart of usage distribution by model
  • Recent conversations list with details

Technical implementation

  • ChatMessage gains a usage field for persistent storage
  • Provider adapters parse usage events from the SSE stream
  • usage-service provides statistics calculation and data aggregation
  • usage-atoms manages statistics state
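The SSE usage parsing described above can be sketched as a small accumulator over incremental usage events. The type and field names below are assumptions based on this description, not the PR's exact code:

```typescript
// Hypothetical shapes mirroring the PR's usage events (names assumed).
interface UsageEvent {
  type: 'usage'
  promptTokens: number
  completionTokens: number
  totalTokens: number
}

interface TokenUsage {
  promptTokens: number
  completionTokens: number
  totalTokens: number
}

// Accumulate the incremental usage events emitted while reading an SSE stream
// into one per-message total, ready to persist on ChatMessage.usage.
function accumulateUsage(events: UsageEvent[]): TokenUsage {
  const usage: TokenUsage = { promptTokens: 0, completionTokens: 0, totalTokens: 0 }
  for (const e of events) {
    usage.promptTokens += e.promptTokens
    usage.completionTokens += e.completionTokens
    usage.totalTokens += e.totalTokens
  }
  return usage
}
```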

Testing notes

  • Verified usage data parses correctly for each Provider
  • Verified the statistics page displays data correctly
  • Verified pricing settings save and take effect
  • Verified dark/light theme switching

Related files

  • Added: usage-service.ts, usage-atoms.ts, UsageSettings.tsx, usage.ts
  • Modified: ipc.ts, preload/index.ts, the Provider adapters, chat-service.ts

Contact: david.liu1888888@gmail.com

Implements a complete Token usage statistics system with the following features:

Core features:
- Collect and store Token usage data for every conversation
- Parse usage data from Anthropic, OpenAI, Google, and other Providers
- Aggregate usage statistics by day and by model
- Estimate API call cost (based on a configurable pricing table)

UI components:
- New UsageSettings statistics page in a tech-blue design style
- Adapts to dark/light themes
- Shows overview data, trend charts, model distribution, and a conversation list
- Configurable model pricing settings

Technical implementation:
- ChatMessage gains a usage field storing Token usage
- Provider adapters parse usage events from the SSE stream
- usage-service provides statistics calculation and data aggregation
- usage-atoms manages statistics state

IPC channels:
- usage:getStats - fetch usage statistics
- usage:getConversationUsage - fetch conversation details
- usage:getSettings / usage:updateSettings - manage pricing configuration
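The getSettings/updateSettings channels imply a per-model cost model roughly like the following; a minimal sketch assuming per-million-token prices (the ModelPricing shape and its field names are hypothetical, not the PR's actual types):

```typescript
// Hypothetical pricing entry as it might be stored in the settings
// managed via usage:getSettings / usage:updateSettings.
interface ModelPricing {
  inputPerMTok: number  // USD per 1M input tokens (assumed unit)
  outputPerMTok: number // USD per 1M output tokens (assumed unit)
}

interface TokenCounts {
  promptTokens: number
  completionTokens: number
}

// Estimate the cost of one conversation from its token counts
// and the configurable pricing entry for its model.
function estimateCost(usage: TokenCounts, pricing: ModelPricing): number {
  return (
    (usage.promptTokens / 1_000_000) * pricing.inputPerMTok +
    (usage.completionTokens / 1_000_000) * pricing.outputPerMTok
  )
}
```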

Co-Authored-By: Claude <noreply@anthropic.com>
Copilot AI review requested due to automatic review settings February 18, 2026 12:04

Copilot AI left a comment


Pull request overview

This PR implements a comprehensive Token usage statistics system for tracking API consumption across chat and agent conversations. The feature helps users monitor their API usage, estimate costs based on configurable pricing models, and visualize usage patterns through an interactive dashboard.

Changes:

  • Added usage tracking at the message level for all provider types (Anthropic, OpenAI, Google)
  • Implemented backend service for aggregating statistics by day, model, and conversation
  • Created a new UsageSettings UI component with charts (area, pie, bar) for data visualization
  • Integrated usage data collection into the streaming SSE pipeline and chat service

Reviewed changes

Copilot reviewed 19 out of 20 changed files in this pull request and generated 14 comments.

Show a summary per file
File Description
packages/shared/src/types/usage.ts New type definitions for usage statistics, pricing, and aggregations
packages/shared/src/types/chat.ts Added TokenUsage interface to ChatMessage for tracking tokens per message
packages/core/src/providers/types.ts Added StreamUsageEvent type and TokenUsage re-export
packages/core/src/providers/sse-reader.ts Added usage accumulation logic in SSE stream processing
packages/core/src/providers/openai-adapter.ts OpenAI usage parsing from stream chunks with cache token support
packages/core/src/providers/google-adapter.ts Google usage parsing from usageMetadata in stream responses
packages/core/src/providers/anthropic-adapter.ts Anthropic usage parsing from message_start and message_delta events
apps/electron/src/main/lib/usage-service.ts Core service for statistics calculation, cost estimation, and data aggregation
apps/electron/src/main/lib/chat-service.ts Integration to persist usage data with assistant messages
apps/electron/src/main/ipc.ts IPC handlers for usage statistics endpoints
apps/electron/src/preload/index.ts Preload API exposure for usage-related IPC calls
apps/electron/src/renderer/atoms/usage-atoms.ts Jotai atoms for usage state management and formatting utilities
apps/electron/src/renderer/components/settings/UsageSettings.tsx Main UI component with charts, cards, and conversation list
apps/electron/src/renderer/components/settings/SettingsPanel.tsx Integration of UsageSettings tab into settings panel
apps/electron/src/renderer/atoms/settings-tab.ts Added 'usage' to SettingsTab union type
apps/electron/src/renderer/components/settings/primitives/SettingsUIConstants.ts Updated CARD_CLASS styling for tech-blue design theme
package.json Added recharts@^3.7.0 dependency for data visualization
bun.lock Lock file updates for recharts and its dependencies


Comment on lines +169 to +198
/** Extract usage statistics from Agent events */
function extractAgentUsage(messages: AgentMessage[]): {
  totalTokens: number
  promptTokens: number
  completionTokens: number
  modelId: string
} {
  let totalTokens = 0
  let promptTokens = 0
  let completionTokens = 0
  let modelId = ''

  for (const msg of messages) {
    if (msg.role === 'assistant' && msg.events) {
      for (const event of msg.events) {
        if (event.type === 'complete' && event.usage) {
          promptTokens += event.usage.inputTokens
          completionTokens += event.usage.outputTokens || 0
          totalTokens += event.usage.inputTokens + (event.usage.outputTokens || 0)
        }
        if (event.type === 'usage_update') {
          promptTokens += event.usage.inputTokens
          totalTokens += event.usage.inputTokens
        }
      }
    }
  }

  return { totalTokens, promptTokens, completionTokens, modelId }
}

Copilot AI Feb 18, 2026


The Agent usage extraction logic doesn't extract or use the modelId from agent events. The modelId variable is initialized as an empty string but never populated, which means Agent sessions will always report 'unknown' as the model ID. This affects both cost estimation accuracy and model distribution statistics.

Consider extracting the model ID from the agent session metadata (similar to how chat conversations use conv.modelId) or from agent events if available.
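One way to implement that suggestion, sketched under the assumption that some agent events carry a modelId field (hypothetical; the real identifier may instead live on session metadata):

```typescript
// Hypothetical event shape; the modelId field is an assumption,
// not something this PR's agent events are confirmed to expose.
interface AgentUsageEvent {
  type: 'complete' | 'usage_update'
  usage: { inputTokens: number; outputTokens?: number }
  modelId?: string
}

// Resolve the model ID from the first event that carries one, then
// fall back to session metadata, then to 'unknown' as the current
// code effectively does.
function resolveModelId(events: AgentUsageEvent[], sessionModelId?: string): string {
  for (const e of events) {
    if (e.modelId) return e.modelId
  }
  return sessionModelId ?? 'unknown'
}
```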

Comment on lines +644 to +645
// TODO: navigate to the corresponding conversation
console.log('Conversation clicked:', conv.conversationId)

Copilot AI Feb 18, 2026


The TODO comment indicates that conversation navigation is not yet implemented. This means users cannot actually navigate to a conversation when clicking on it in the usage statistics view. While this is acknowledged as incomplete, it's worth noting that the onClick handler currently only logs to console and provides no user feedback.

Suggested change
// TODO: navigate to the corresponding conversation
console.log('Conversation clicked:', conv.conversationId)
window.alert('Opening a conversation directly from the usage statistics page is not supported yet; please locate it manually in the conversation list.')

Comment on lines +201 to +417
export async function getUsageStats(days: number = 30): Promise<UsageStats> {
  const startTimestamp = getStartTimestamp(days)

  // Fetch all conversations and sessions
  const conversations = listConversations()
  const sessions = listAgentSessions()


  // Initialize statistics
  let totalConversations = 0
  let totalMessages = 0
  let totalTokens = 0
  let totalPromptTokens = 0
  let totalCompletionTokens = 0
  let estimatedCost = 0

  // Aggregate by date
  const dailyMap = new Map<string, DailyUsage>()

  // Aggregate by model
  const modelMap = new Map<string, ModelUsage>()

  // Recent conversations list
  const recentConversations: ConversationUsage[] = []

  // Process Chat conversations
  let processedChatConvs = 0
  for (const conv of conversations) {
    if (conv.createdAt < startTimestamp) continue

    const messages = getConversationMessages(conv.id)
    const usage = extractChatUsage(messages)


    if (usage.totalTokens === 0) continue
    processedChatConvs++

    totalConversations++
    totalMessages += messages.length
    totalTokens += usage.totalTokens
    totalPromptTokens += usage.promptTokens
    totalCompletionTokens += usage.completionTokens

    const modelId = usage.modelId || conv.modelId || 'unknown'
    const cost = calculateCost(
      {
        promptTokens: usage.promptTokens,
        completionTokens: usage.completionTokens,
        totalTokens: usage.totalTokens,
      },
      modelId,
    )
    estimatedCost += cost

    // Aggregate by date
    const date = formatDate(conv.createdAt)
    const existing = dailyMap.get(date)
    if (existing) {
      existing.totalTokens += usage.totalTokens
      existing.promptTokens += usage.promptTokens
      existing.completionTokens += usage.completionTokens
      existing.conversationCount++
      existing.messageCount += messages.length
      existing.estimatedCost += cost
    } else {
      dailyMap.set(date, {
        date,
        totalTokens: usage.totalTokens,
        promptTokens: usage.promptTokens,
        completionTokens: usage.completionTokens,
        conversationCount: 1,
        messageCount: messages.length,
        estimatedCost: cost,
      })
    }

    // Aggregate by model
    const model = modelMap.get(modelId)
    if (model) {
      model.totalTokens += usage.totalTokens
      model.promptTokens += usage.promptTokens
      model.completionTokens += usage.completionTokens
      model.conversationCount++
      model.estimatedCost += cost
    } else {
      modelMap.set(modelId, {
        modelId,
        totalTokens: usage.totalTokens,
        promptTokens: usage.promptTokens,
        completionTokens: usage.completionTokens,
        conversationCount: 1,
        estimatedCost: cost,
      })
    }

    // Add to the recent conversations list
    recentConversations.push({
      conversationId: conv.id,
      title: conv.title,
      modelId: modelId || conv.modelId || 'unknown',
      channelId: conv.channelId || '',
      createdAt: conv.createdAt,
      messageCount: messages.length,
      totalTokens: usage.totalTokens,
      promptTokens: usage.promptTokens,
      completionTokens: usage.completionTokens,
      mode: 'chat',
    })
  }

  // Process Agent sessions
  for (const session of sessions) {
    if (session.createdAt < startTimestamp) continue

    const messages = getAgentSessionMessages(session.id)
    const usage = extractAgentUsage(messages)

    if (usage.totalTokens === 0) continue

    totalConversations++
    totalMessages += messages.length
    totalTokens += usage.totalTokens
    totalPromptTokens += usage.promptTokens
    totalCompletionTokens += usage.completionTokens

    const modelId = usage.modelId || 'unknown'
    const cost = calculateCost(
      {
        promptTokens: usage.promptTokens,
        completionTokens: usage.completionTokens,
        totalTokens: usage.totalTokens,
      },
      modelId,
    )
    estimatedCost += cost

    // Aggregate by date
    const date = formatDate(session.createdAt)
    const existing = dailyMap.get(date)
    if (existing) {
      existing.totalTokens += usage.totalTokens
      existing.promptTokens += usage.promptTokens
      existing.completionTokens += usage.completionTokens
      existing.conversationCount++
      existing.messageCount += messages.length
      existing.estimatedCost += cost
    } else {
      dailyMap.set(date, {
        date,
        totalTokens: usage.totalTokens,
        promptTokens: usage.promptTokens,
        completionTokens: usage.completionTokens,
        conversationCount: 1,
        messageCount: messages.length,
        estimatedCost: cost,
      })
    }

    // Aggregate by model
    const model = modelMap.get(modelId)
    if (model) {
      model.totalTokens += usage.totalTokens
      model.promptTokens += usage.promptTokens
      model.completionTokens += usage.completionTokens
      model.conversationCount++
      model.estimatedCost += cost
    } else {
      modelMap.set(modelId, {
        modelId,
        totalTokens: usage.totalTokens,
        promptTokens: usage.promptTokens,
        completionTokens: usage.completionTokens,
        conversationCount: 1,
        estimatedCost: cost,
      })
    }

    // Add to the recent conversations list
    recentConversations.push({
      conversationId: session.id,
      title: session.title,
      modelId: modelId || 'unknown',
      channelId: session.channelId || '',
      createdAt: session.createdAt,
      messageCount: messages.length,
      totalTokens: usage.totalTokens,
      promptTokens: usage.promptTokens,
      completionTokens: usage.completionTokens,
      mode: 'agent',
    })
  }

  // Sort by date (newest first)
  const dailyUsage = Array.from(dailyMap.values()).sort(
    (a, b) => new Date(b.date).getTime() - new Date(a.date).getTime(),
  )

  // Sort by token count (descending)
  const modelUsage = Array.from(modelMap.values()).sort(
    (a, b) => b.totalTokens - a.totalTokens,
  )

  // Sort by creation time (newest first)
  recentConversations.sort((a, b) => b.createdAt - a.createdAt)

  return {
    totalConversations,
    totalMessages,
    totalTokens,
    promptTokens: totalPromptTokens,
    completionTokens: totalCompletionTokens,
    estimatedCost,
    dailyUsage,
    modelUsage,
    recentConversations: recentConversations.slice(0, 20), // return only the 20 most recent
  }
}

Copilot AI Feb 18, 2026


Potential performance issue: The getUsageStats function iterates through ALL conversations and agent sessions for every call, loading and parsing all messages for each one. For users with many conversations (hundreds or thousands), this could become slow. Consider:

  1. Adding caching or memoization for usage data
  2. Computing usage incrementally when messages are saved rather than on-demand
  3. Storing pre-computed usage statistics in the conversation/session metadata
  4. Adding pagination or limiting the number of conversations processed
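Suggestion 1 could, for example, be a short-lived cache keyed by the days window. A sketch with an assumed 30-second TTL (the TTL value and the invalidation hook are assumptions, not part of this PR):

```typescript
// Cache entries keyed by the days window of the query.
const statsCache = new Map<number, { at: number; value: unknown }>()
const CACHE_TTL_MS = 30_000 // assumed: a settings page tolerates 30s-stale data

// Return a cached result when still fresh; recompute otherwise.
// Calling statsCache.clear() whenever a message is persisted would
// keep results consistent without recomputing on every IPC call.
function cachedStats<T>(days: number, compute: (days: number) => T): T {
  const hit = statsCache.get(days)
  if (hit && Date.now() - hit.at < CACHE_TTL_MS) return hit.value as T
  const value = compute(days)
  statsCache.set(days, { at: Date.now(), value })
  return value
}
```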

model: input.modelId,
messages,
stream: true,
stream_options: { include_usage: true },

Copilot AI Feb 18, 2026


The stream_options: { include_usage: true } is added to the OpenAI request body, but there's no error handling for cases where this option might not be supported by the provider or custom endpoints. Some OpenAI-compatible APIs might not support this parameter and could fail or ignore it. Consider adding this conditionally or handling potential errors gracefully.

Suggested change
stream_options: { include_usage: true },
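A minimal sketch of making the parameter conditional, using a hypothetical supportsUsageOption capability flag (this flag is not a field in the PR; the real gate might be a per-channel setting):

```typescript
interface ChatRequestInput {
  modelId: string
  supportsUsageOption?: boolean // hypothetical capability flag
}

// Build the request body, attaching stream_options only when the
// endpoint is known to accept it; strict OpenAI-compatible servers
// may reject unknown parameters.
function buildBody(input: ChatRequestInput, messages: unknown[]): Record<string, unknown> {
  const body: Record<string, unknown> = {
    model: input.modelId,
    messages,
    stream: true,
  }
  if (input.supportsUsageOption !== false) {
    body.stream_options = { include_usage: true }
  }
  return body
}
```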

Comment on lines +227 to +236
let processedChatConvs = 0
for (const conv of conversations) {
  if (conv.createdAt < startTimestamp) continue

  const messages = getConversationMessages(conv.id)
  const usage = extractChatUsage(messages)


  if (usage.totalTokens === 0) continue
  processedChatConvs++

Copilot AI Feb 18, 2026


The variable processedChatConvs is declared and incremented but never used. This appears to be debug code that should be removed or used for logging/metrics purposes.


/** Get the usage statistics overview */
export async function getUsageStats(days: number = 30): Promise<UsageStats> {
  const startTimestamp = getStartTimestamp(days)

Copilot AI Feb 18, 2026


Missing error handling for invalid input values. The days parameter is not validated, which could lead to issues if negative numbers or non-numeric values are passed. Consider adding validation to ensure days is a positive integer, and provide a reasonable default or error message if invalid.

Suggested change
const startTimestamp = getStartTimestamp(days)
// Validate and normalize the days parameter: require a positive integer, otherwise fall back to the default of 30
const normalizedDays =
  Number.isFinite(days) && days > 0 ? Math.floor(days) : 30
const startTimestamp = getStartTimestamp(normalizedDays)

Comment on lines +204 to +224
// Handle usage from the message_start event (input tokens)
if (event.type === 'message_start' && event.message?.usage) {
  const usage = event.message.usage
  events.push({
    type: 'usage',
    promptTokens: usage.input_tokens,
    completionTokens: 0,
    totalTokens: usage.input_tokens,
  })
}

// Handle usage from the message_delta event (output tokens)
if (event.type === 'message_delta' && event.usage) {
  const outputTokens = event.usage.output_tokens ?? event.delta?.output_tokens ?? 0
  events.push({
    type: 'usage',
    promptTokens: 0,
    completionTokens: outputTokens,
    totalTokens: outputTokens,
  })
}

Copilot AI Feb 18, 2026


The usage accumulation logic in Anthropic adapter sends two separate usage events - one for input tokens (message_start) and one for output tokens (message_delta). While the sse-reader.ts correctly accumulates these, the logic could potentially double-count tokens if the provider sends cumulative values rather than incremental ones. The current implementation assumes all usage events contain incremental values, but this should be verified against the Anthropic API documentation to ensure correctness.
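If the values turn out to be cumulative snapshots rather than increments (Anthropic's streaming documentation describes message_delta usage as cumulative for output_tokens), a safer accumulator keeps the latest value instead of summing. A sketch under that assumption:

```typescript
interface UsageSnapshot {
  promptTokens: number
  completionTokens: number
}

// Treat each event's counts as cumulative snapshots: keep the maximum
// seen so far rather than summing, which cannot double-count even when
// the provider re-sends a running total on every delta.
function accumulateCumulative(events: UsageSnapshot[]): UsageSnapshot {
  let promptTokens = 0
  let completionTokens = 0
  for (const e of events) {
    promptTokens = Math.max(promptTokens, e.promptTokens)
    completionTokens = Math.max(completionTokens, e.completionTokens)
  }
  return { promptTokens, completionTokens }
}
```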

export * from './UsageSettings'
export * from './AboutSettings'
export * from './primitives'
export * from './primitives/index'

Copilot AI Feb 18, 2026


Inconsistent path handling: The export statement changed from export * from './primitives' to export * from './primitives/index'. While this works, it's inconsistent with the other exports in the same file which don't include '/index'. The original version was more idiomatic for TypeScript/JavaScript module resolution. Consider reverting to export * from './primitives' for consistency.

Suggested change
export * from './primitives/index'
export * from './primitives'

Comment on lines +238 to +273
/** Custom chart tooltip content - tech-styled */
function ChartTooltip({
  active,
  payload,
  label,
  valueFormatter,
  labelFormatter,
}: {
  active?: boolean
  payload?: Array<{ value: number; name: string; dataKey?: string }>
  label?: string
  valueFormatter?: (value: number) => string
  labelFormatter?: (label: string) => string
}): React.ReactElement | null {
  if (!active || !payload || !payload.length) return null

  return (
    <div className="rounded-xl bg-card/95 backdrop-blur-md border border-border/50 shadow-xl p-3 min-w-[160px]">
      {label && labelFormatter && (
        <p className="text-xs font-medium text-muted-foreground mb-2 pb-2 border-b border-border/50">
          {labelFormatter(label)}
        </p>
      )}
      <div className="space-y-1.5">
        {payload.map((entry, index) => (
          <div key={index} className="flex items-center justify-between gap-4">
            <span className="text-xs text-muted-foreground">{entry.name}</span>
            <span className="text-sm font-semibold font-mono">
              {valueFormatter ? valueFormatter(Number(entry.value)) : entry.value}
            </span>
          </div>
        ))}
      </div>
    </div>
  )
}

Copilot AI Feb 18, 2026


The custom Tooltip component for charts doesn't include ARIA attributes or proper semantic HTML structure. The recharts library Tooltip component may not be fully accessible to screen readers. Consider adding appropriate ARIA labels and roles, or verify that recharts provides accessibility features by default.

Comment on lines +208 to +234

// Initialize statistics
let totalConversations = 0
let totalMessages = 0
let totalTokens = 0
let totalPromptTokens = 0
let totalCompletionTokens = 0
let estimatedCost = 0

// Aggregate by date
const dailyMap = new Map<string, DailyUsage>()

// Aggregate by model
const modelMap = new Map<string, ModelUsage>()

// Recent conversations list
const recentConversations: ConversationUsage[] = []

// Process Chat conversations
let processedChatConvs = 0
for (const conv of conversations) {
  if (conv.createdAt < startTimestamp) continue

  const messages = getConversationMessages(conv.id)
  const usage = extractChatUsage(messages)



Copilot AI Feb 18, 2026


There are empty lines at line 208 and line 234 in the code. While this doesn't affect functionality, these appear to be accidental and should be removed for cleaner code.

Copilot AI added a commit to david188888/Proma that referenced this pull request Feb 18, 2026
Co-authored-by: david188888 <113694285+david188888@users.noreply.github.com>
Copilot AI added a commit to david188888/Proma that referenced this pull request Feb 18, 2026
Co-authored-by: david188888 <113694285+david188888@users.noreply.github.com>
@david188888 david188888 changed the title feat: Add Token usage statistics Closes #13. feat: Add Token usage statistics Feb 18, 2026
@david188888 david188888 changed the title Closes #13. feat: Add Token usage statistics feat: Add Token usage statistics Closes #13. Feb 18, 2026
@david188888 david188888 changed the title feat: Add Token usage statistics Closes #13. feat: Add Token usage statistics Closes #13 Feb 18, 2026
@david188888 david188888 changed the title feat: Add Token usage statistics Closes #13 feat: Add Token usage statistics Feb 18, 2026
