Merged
32 commits
70cfb8e
refactor: harden dashboard refresh flow and consolidate parsers
janekbaraniewski Mar 9, 2026
3172b4f
refactor: share cursor state db readers
janekbaraniewski Mar 9, 2026
94fc9e5
fix: bind daemon refresh work to service lifecycle
janekbaraniewski Mar 9, 2026
7cec271
refactor: finish cursor parser cleanup and share log throttling
janekbaraniewski Mar 9, 2026
c99a533
refactor: extract tui breakdowns and split daemon server
janekbaraniewski Mar 9, 2026
dd9e56d
refactor: continue daemon and telemetry decomposition
janekbaraniewski Mar 9, 2026
a3b9793
refactor: share usage breakdown extractors
janekbaraniewski Mar 9, 2026
dbf3520
refactor: split telemetry snapshot projection
janekbaraniewski Mar 9, 2026
2579f4a
refactor: split telemetry queries and openrouter generations
janekbaraniewski Mar 9, 2026
441debf
refactor: split cursor api and cache helpers
janekbaraniewski Mar 9, 2026
80be6aa
refactor: split tui and daemon runtime concerns
janekbaraniewski Mar 9, 2026
3652539
refactor: finish cursor and openrouter provider splits
janekbaraniewski Mar 9, 2026
676a48b
refactor: inject ollama clock and refresh audit table
janekbaraniewski Mar 9, 2026
e3d9bc4
refactor: split openrouter account flow and detail tokens
janekbaraniewski Mar 9, 2026
804a10e
refactor: bind telemetry sources to accounts and move metric parsing …
janekbaraniewski Mar 9, 2026
e7565a8
refactor: replace duplicate utility helpers and move legacy path shim…
janekbaraniewski Mar 9, 2026
241d6ce
refactor: share hook ingest flow and split detail analytics sections
janekbaraniewski Mar 9, 2026
b0fa065
refactor: split usage view aggregate orchestration
janekbaraniewski Mar 9, 2026
3868806
refactor: split provider display info and share fallback metrics
janekbaraniewski Mar 9, 2026
8342a6b
refactor: split codex live and session usage helpers
janekbaraniewski Mar 9, 2026
d866d36
refactor: split claude code helpers and settings modal layout
janekbaraniewski Mar 9, 2026
6eb33d9
refactor: split copilot github api flow
janekbaraniewski Mar 9, 2026
c987412
refactor: split copilot local session flow
janekbaraniewski Mar 9, 2026
ada0470
refactor: cache tile composition and split claude conversations
janekbaraniewski Mar 9, 2026
6b13532
refactor: tighten account config docs and trim config test setup
janekbaraniewski Mar 9, 2026
ed4f0c1
refactor: split gemini session parsing and tighten context seams
janekbaraniewski Mar 9, 2026
3cf676a
refactor: split telemetry collectors and provider helpers
janekbaraniewski Mar 9, 2026
1b031d4
refactor: close remaining tui and provider cleanup gaps
janekbaraniewski Mar 9, 2026
9f9f034
fix: anchor analytics and tile dates to snapshot time
janekbaraniewski Mar 10, 2026
45ff4f5
refactor: standardize lo-backed collection helpers
janekbaraniewski Mar 10, 2026
663de11
refactor: split tui render surfaces and add runtime hints
janekbaraniewski Mar 10, 2026
0105a59
refactor: finish provider cleanup and close audit follow-ups
janekbaraniewski Mar 10, 2026
8 changes: 7 additions & 1 deletion cmd/demo/main.go
@@ -6,6 +6,7 @@ import (
"io"
"log"
"os"
"sync/atomic"
"time"

tea "github.com/charmbracelet/bubbletea"
@@ -39,6 +40,7 @@ func main() {

ctx, cancel := context.WithCancel(context.Background())
defer cancel()
var snapshotRequestID atomic.Uint64

refreshAll := func() {
snaps := make(map[string]core.UsageSnapshot, len(accounts))
@@ -61,7 +63,11 @@
}
snaps[acct.ID] = snap
}
p.Send(tui.SnapshotsMsg(snaps))
p.Send(tui.SnapshotsMsg{
Snapshots: snaps,
TimeWindow: core.TimeWindow30d,
RequestID: snapshotRequestID.Add(1),
})
}

go func() {
27 changes: 13 additions & 14 deletions cmd/openusage/dashboard.go
@@ -14,6 +14,7 @@ import (
"github.com/janekbaraniewski/openusage/internal/config"
"github.com/janekbaraniewski/openusage/internal/core"
"github.com/janekbaraniewski/openusage/internal/daemon"
"github.com/janekbaraniewski/openusage/internal/dashboardapp"
"github.com/janekbaraniewski/openusage/internal/tui"
"github.com/janekbaraniewski/openusage/internal/version"
)
@@ -31,6 +32,9 @@ func runDashboard(cfg config.Config) {

timeWindow := core.ParseTimeWindow(cfg.Data.TimeWindow)

ctx, cancel := context.WithCancel(context.Background())
defer cancel()

model := tui.NewModel(
cfg.UI.WarnThreshold,
cfg.UI.CritThreshold,
@@ -39,6 +43,7 @@
cachedAccounts,
timeWindow,
)
model.SetServices(dashboardapp.NewService(ctx))

socketPath := daemon.ResolveSocketPath()

@@ -47,12 +52,10 @@
socketPath,
verbose,
)
viewRuntime.SetTimeWindow(string(timeWindow))

ctx, cancel := context.WithCancel(context.Background())
defer cancel()
viewRuntime.SetTimeWindow(timeWindow)

var program *tea.Program
dispatcher := &snapshotDispatcher{}

model.SetOnAddAccount(func(acct core.AccountConfig) {
if strings.TrimSpace(acct.ID) == "" || strings.TrimSpace(acct.Provider) == "" {
@@ -96,16 +99,11 @@
}
})

model.SetOnRefresh(func() {
go func() {
snaps := viewRuntime.ReadWithFallback(ctx)
if len(snaps) > 0 && program != nil {
program.Send(tui.SnapshotsMsg(snaps))
}
}()
model.SetOnRefresh(func(window core.TimeWindow) {
dispatcher.refresh(ctx, viewRuntime, window)
})

model.SetOnTimeWindowChange(func(tw string) {
model.SetOnTimeWindowChange(func(tw core.TimeWindow) {
viewRuntime.SetTimeWindow(tw)
})

@@ -118,6 +116,7 @@
})

program = tea.NewProgram(model, tea.WithAltScreen(), tea.WithMouseCellMotion())
dispatcher.bind(program)

go func() {
runStartupUpdateCheck(
@@ -139,8 +138,8 @@
ctx,
viewRuntime,
interval,
func(snaps map[string]core.UsageSnapshot) {
program.Send(tui.SnapshotsMsg(snaps))
func(frame daemon.SnapshotFrame) {
dispatcher.dispatch(frame)
},
func(state daemon.DaemonState) {
program.Send(mapDaemonState(state))
43 changes: 43 additions & 0 deletions cmd/openusage/snapshot_dispatcher.go
@@ -0,0 +1,43 @@
package main

import (
"context"
"sync/atomic"

tea "github.com/charmbracelet/bubbletea"
"github.com/janekbaraniewski/openusage/internal/core"
"github.com/janekbaraniewski/openusage/internal/daemon"
"github.com/janekbaraniewski/openusage/internal/tui"
)

type snapshotDispatcher struct {
program *tea.Program
nextID atomic.Uint64
}

func (d *snapshotDispatcher) bind(program *tea.Program) {
d.program = program
}

func (d *snapshotDispatcher) dispatch(frame daemon.SnapshotFrame) {
d.send(frame, d.nextID.Add(1))
}

func (d *snapshotDispatcher) refresh(ctx context.Context, rt *daemon.ViewRuntime, window core.TimeWindow) {
requestID := d.nextID.Add(1)
go func() {
frame := rt.ReadWithFallbackForWindow(ctx, window)
d.send(frame, requestID)
}()
}

func (d *snapshotDispatcher) send(frame daemon.SnapshotFrame, requestID uint64) {
if d == nil || d.program == nil || len(frame.Snapshots) == 0 {
return
}
d.program.Send(tui.SnapshotsMsg{
Snapshots: frame.Snapshots,
TimeWindow: frame.TimeWindow,
RequestID: requestID,
})
}
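The dispatcher above stamps every outgoing `SnapshotsMsg` with a monotonically increasing `RequestID`, which only pays off if the receiving model drops frames whose ID is older than the last one it applied. The PR's TUI model is not shown here, so the following is a minimal sketch of that receiving side under assumed names (`SnapshotsMsg` is simplified to a map of ints; `model`, `lastApplied`, and `apply` are hypothetical):

```go
package main

import "fmt"

// SnapshotsMsg mirrors the shape sent by the dispatcher above: a snapshot
// payload plus the RequestID it was stamped with. Simplified for the sketch.
type SnapshotsMsg struct {
	Snapshots map[string]int
	RequestID uint64
}

// model remembers the highest RequestID it has already rendered.
type model struct {
	lastApplied uint64
	current     map[string]int
}

// apply accepts a frame only when its RequestID is newer than anything the
// model has applied; late responses from an older refresh are discarded,
// which is what closes the stale-snapshot race described in the docs.
func (m *model) apply(msg SnapshotsMsg) bool {
	if msg.RequestID <= m.lastApplied {
		return false // stale frame; a newer refresh already landed
	}
	m.lastApplied = msg.RequestID
	m.current = msg.Snapshots
	return true
}

func main() {
	m := &model{}
	fmt.Println(m.apply(SnapshotsMsg{Snapshots: map[string]int{"a": 2}, RequestID: 2}))
	fmt.Println(m.apply(SnapshotsMsg{Snapshots: map[string]int{"a": 1}, RequestID: 1}))
}
```

Because `nextID` is an `atomic.Uint64`, IDs are unique and ordered even when `refresh` goroutines and daemon-push `dispatch` calls race each other.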
108 changes: 1 addition & 107 deletions cmd/openusage/telemetry.go
@@ -12,8 +12,6 @@ import (
"github.com/janekbaraniewski/openusage/internal/daemon"
"github.com/janekbaraniewski/openusage/internal/detect"
"github.com/janekbaraniewski/openusage/internal/integrations"
"github.com/janekbaraniewski/openusage/internal/providers"
"github.com/janekbaraniewski/openusage/internal/providers/shared"
"github.com/janekbaraniewski/openusage/internal/telemetry"
"github.com/spf13/cobra"
)
@@ -52,9 +50,6 @@
Args: cobra.ExactArgs(1),
RunE: func(_ *cobra.Command, args []string) error {
sourceName := args[0]
if _, ok := providers.TelemetrySourceBySystem(sourceName); !ok {
return fmt.Errorf("unknown hook source %q", sourceName)
}

payload, err := io.ReadAll(os.Stdin)
if err != nil {
@@ -90,7 +85,7 @@
daemonErr = err
}

result, err := ingestHookLocally(
result, err := daemon.IngestHookLocally(
ctx,
sourceName,
strings.TrimSpace(accountID),
@@ -148,107 +143,6 @@
return cmd
}

func ingestHookLocally(
ctx context.Context,
sourceName string,
accountID string,
payload []byte,
dbPath string,
spoolDir string,
spoolOnly bool,
) (daemon.HookResponse, error) {
source, ok := providers.TelemetrySourceBySystem(sourceName)
if !ok {
return daemon.HookResponse{}, fmt.Errorf("unknown hook source %q", sourceName)
}
reqs, err := telemetry.ParseSourceHookPayload(source, payload, shared.TelemetryCollectOptions{}, accountID)
if err != nil {
return daemon.HookResponse{}, fmt.Errorf("parse hook payload: %w", err)
}
resp := daemon.HookResponse{
Source: sourceName,
Enqueued: len(reqs),
}
if len(reqs) == 0 {
return resp, nil
}

if strings.TrimSpace(dbPath) == "" {
resolved, resolveErr := telemetry.DefaultDBPath()
if resolveErr != nil {
return daemon.HookResponse{}, fmt.Errorf("resolve telemetry db path: %w", resolveErr)
}
dbPath = resolved
}
if strings.TrimSpace(spoolDir) == "" {
resolved, resolveErr := telemetry.DefaultSpoolDir()
if resolveErr != nil {
return daemon.HookResponse{}, fmt.Errorf("resolve telemetry spool dir: %w", resolveErr)
}
spoolDir = resolved
}

store, err := telemetry.OpenStore(dbPath)
if err != nil {
return daemon.HookResponse{}, fmt.Errorf("open telemetry store: %w", err)
}
defer store.Close()

pipeline := telemetry.NewPipeline(store, telemetry.NewSpool(spoolDir))
if spoolOnly {
enqueued, enqueueErr := pipeline.EnqueueRequests(reqs)
if enqueueErr != nil {
return daemon.HookResponse{}, fmt.Errorf("enqueue to telemetry spool: %w", enqueueErr)
}
resp.Enqueued = enqueued
return resp, nil
}

retries := make([]telemetry.IngestRequest, 0, len(reqs))
var firstIngestErr error
for _, req := range reqs {
resp.Processed++
result, ingestErr := store.Ingest(ctx, req)
if ingestErr != nil {
if firstIngestErr == nil {
firstIngestErr = ingestErr
}
retries = append(retries, req)
continue
}
if result.Deduped {
resp.Deduped++
} else {
resp.Ingested++
}
}

if len(retries) == 0 {
return resp, nil
}
if firstIngestErr != nil {
resp.Warnings = append(resp.Warnings, fmt.Sprintf("direct ingest failed for %d event(s): %v", len(retries), firstIngestErr))
}

enqueued, enqueueErr := pipeline.EnqueueRequests(retries)
if enqueueErr != nil {
resp.Failed += len(retries)
resp.Warnings = append(resp.Warnings, fmt.Sprintf("retry enqueue failed: %v", enqueueErr))
return resp, nil
}
flush, warnings := daemon.FlushInBatches(ctx, pipeline, enqueued)
resp.Processed += flush.Processed
resp.Ingested += flush.Ingested
resp.Deduped += flush.Deduped
resp.Failed += flush.Failed
resp.Warnings = append(resp.Warnings, warnings...)

if remaining := len(retries) - flush.Processed; remaining > 0 {
resp.Warnings = append(resp.Warnings, fmt.Sprintf("%d event(s) remain queued in spool", remaining))
}
return resp, nil
}

func newTelemetryDaemonCommand() *cobra.Command {
var (
socketPath string
29 changes: 29 additions & 0 deletions docs/CODEBASE_AUDIT_ACTION_TABLE_2026-03-09.md
@@ -0,0 +1,29 @@
# Codebase Audit Action Table

Date: 2026-03-09
Repository: `/Users/janekbaraniewski/Workspace/priv/openusage`
Branch: `feat/dashboard-race-parser-cleanups`

## Fixed in This Branch

| ID | Status | Area | Evidence | Resolution | Follow-up |
| --- | --- | --- | --- | --- | --- |
| R57 | Fixed | Account config contract hardening | `internal/core/provider.go`, `internal/config/config.go`, `internal/daemon/source_collectors.go`, `internal/detect/cursor.go`, `internal/detect/claude_code.go` | Provider-local runtime paths now live behind `ProviderPaths` and `Path`/`SetPath` helpers. Config load normalizes legacy `paths` payloads into the new field, and daemon/detect flows consume the typed path accessors instead of ad hoc provider-specific overloads. | Retain legacy `paths` read compatibility until the persisted config shape can be fully simplified. |
| R58 | Fixed | TUI settings/detail decomposition | `internal/tui/settings_modal.go`, `internal/tui/settings_modal_input.go`, `internal/tui/detail.go`, `internal/tui/detail_metrics.go`, `internal/tui/detail_analytics_sections.go` | Settings input/update logic and large detail metric/render sections are split out of the remaining coordinator files. The hot TUI files now separate state/input from section rendering much more cleanly. | Only split further if new features start coupling unrelated flows again. |
| R59 | Fixed | Detail and analytics metric decoding cleanup | `internal/core/analytics_costs.go`, `internal/core/usage_breakdowns_domains.go`, `internal/tui/detail.go`, `internal/tui/detail_analytics_sections.go`, `internal/tui/model_display_info.go` | Remaining burn-rate, language, MCP, and model-cost detection paths now go through shared core helpers instead of renderer-owned metric-prefix checks. UI code consumes shared semantic helpers rather than decoding raw key conventions inline. | Keep new metric-schema additions in `internal/core`, not in TUI renderers. |
| R60 | Fixed | Render-path caching follow-through | `internal/tui/render_cache.go`, `internal/tui/analytics_cache.go`, `internal/tui/tiles_cache.go`, `internal/tui/model_input.go`, `internal/tui/model_commands.go`, `internal/tui/dashboard_views.go` | Tile, analytics, and detail render paths are now explicitly invalidated on snapshot, window, theme, layout, and selection changes. Detail rendering is cached the same way analytics and tile composition already were, closing the remaining hot-path rebuild gap. | Profile before adding any more caching layers. |
| R61 | Fixed | Gemini CLI provider decomposition | `internal/providers/gemini_cli/gemini_cli.go`, `internal/providers/gemini_cli/api_usage.go`, `internal/providers/gemini_cli/session_usage.go` | API/quota/account flows and local session aggregation are split out of the coordinator file. The main provider file is now mostly wiring plus fetch orchestration. | Keep future Gemini changes inside the matching helper unit. |
| R62 | Fixed | Ollama provider decomposition follow-through | `internal/providers/ollama/ollama.go`, `internal/providers/ollama/local_api.go`, `internal/providers/ollama/cloud_api.go`, `internal/providers/ollama/desktop_db.go`, `internal/providers/ollama/desktop_db_settings.go`, `internal/providers/ollama/desktop_db_tokens.go`, `internal/providers/ollama/desktop_db_breakdowns.go` | Ollama’s coordinator, local API, cloud API, and desktop SQLite flows are now separated by concern. The remaining large desktop DB path is split into settings/schema helpers, token estimation, and usage breakdown/daily series helpers. | Keep future SQLite-specific work inside the dedicated desktop DB helper files. |
| R63 | Fixed | Telemetry and config fixture cleanup | `internal/telemetry/test_helpers_test.go`, `internal/telemetry/usage_view_test.go`, `internal/config/test_helpers_test.go` | Shared store/file helpers now cover the repeated setup patterns in the telemetry and config suites, and `usage_view_test.go` is reduced below the previous monolith threshold. | Apply the same helper pattern to other large suites when they next change. |
| R64 | Fixed | Runtime-hint rollout follow-through | `internal/core/provider.go`, `internal/detect/codex.go`, `internal/detect/cursor.go`, `internal/detect/ollama.go`, `internal/providers/codex/live_usage.go`, `internal/providers/copilot/copilot.go`, `internal/providers/ollama/request_helpers.go` | Remaining runtime-only config/account hints now flow through `RuntimeHints` and `Hint`/`SetHint` helpers instead of direct provider code reaching into ad hoc `ExtraData` keys for local paths and overrides. | Keep new runtime-only provider hints behind `Hint`/`SetHint` rather than adding more direct map reads. |
| R65 | Fixed | Provider/session and test-suite decomposition follow-through | `internal/providers/claude_code/conversation_usage.go`, `internal/providers/claude_code/conversation_usage_projection.go`, `internal/providers/copilot/local_data.go`, `internal/providers/copilot/telemetry_session_file.go`, `internal/providers/copilot/copilot_test.go`, `internal/providers/openrouter/openrouter_analytics_test.go`, `internal/providers/openrouter/openrouter_analytics_rollups_test.go`, `internal/providers/zai/zai.go` | The remaining long provider/session paths are now split by parser/projection/aggregation concern, and the last oversized high-churn test suites are divided by scenario family with shared helpers extracted. | Split again only when a specific family regrows into another mixed-responsibility file. |
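
Rows R64 describes moving runtime-only overrides out of ad hoc `ExtraData` map reads and behind typed `Hint`/`SetHint` accessors on a `RuntimeHints` field. The real helpers live in `internal/core/provider.go` and are not reproduced in this diff, so the following is only an illustrative sketch of the accessor pattern (all names here are assumptions, not the actual API):

```go
package main

import "fmt"

// RuntimeHints holds runtime-only, per-account overrides that previously
// lived in loosely typed ExtraData keys. Illustrative shape only.
type RuntimeHints map[string]string

type AccountConfig struct {
	ID    string
	hints RuntimeHints
}

// SetHint records a runtime-only hint; nothing here is persisted to config,
// which keeps detection-time overrides out of the saved account shape.
func (a *AccountConfig) SetHint(key, value string) {
	if a.hints == nil {
		a.hints = make(RuntimeHints)
	}
	a.hints[key] = value
}

// Hint returns the value and an explicit ok flag, so provider code checks
// presence instead of probing a shared map for magic keys.
func (a *AccountConfig) Hint(key string) (string, bool) {
	v, ok := a.hints[key]
	return v, ok
}

func main() {
	acct := &AccountConfig{ID: "local-ollama"}
	acct.SetHint("db_path", "/tmp/desktop.db")
	if v, ok := acct.Hint("db_path"); ok {
		fmt.Println(v)
	}
}
```

The point of the pattern is that every hint read goes through one choke point, so adding or renaming a hint is a compile-visible change rather than a grep for string keys.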

## Remaining Review State

No active `P1`, `P2`, or `P3` review items remain from this audit. The earlier follow-up rows were either resolved in this branch or explicitly reclassified as optional future design choices rather than outstanding issues.

## Summary

- The original high-risk review items `A1`, `A2`, `A3`, `A4`, `A12`, `A14`, and `A15` are addressed in this branch.
- The remaining provider/session decomposition, runtime-hint rollout, and large-suite cleanup work is also addressed in this branch.
- No additional high-confidence correctness bug was found during the follow-up review after the dashboard timeframe race fix.
73 changes: 73 additions & 0 deletions docs/SYSTEM_REVIEW_DUPLICATION_AND_RESPONSIBILITY_REPORT.md
@@ -0,0 +1,73 @@
# System Review: Post-Cleanup State

Date: 2026-03-09
Repository: `/Users/janekbaraniewski/Workspace/priv/openusage`
Branch: `feat/dashboard-race-parser-cleanups`

## Scope

This report reflects the tree after the dashboard timeframe-race fix, parser consolidation work, daemon/read-model cleanup, provider decomposition, TUI decomposition, render-cache follow-through, runtime-hint cleanup, large-suite splitting, and the final `A1`/`A2`/`A3`/`A4`/`A12`/`A14`/`A15` cleanup pass.

It replaces the earlier “remaining gaps” snapshot. The goal now is to document the actual post-cleanup state, not to preserve stale open items.

## What Is Resolved

The following earlier review themes are materially closed in this branch:

- Dashboard timeframe race and stale snapshot acceptance.
- Read-model cache dedupe ignoring time window.
- Stringly typed daemon/telemetry time-window flow.
- Parser duplication across Cursor, Codex, and Claude Code dashboard/telemetry paths.
- OpenRouter, Cursor, Claude Code, Codex, Copilot, OpenCode, Z.AI, Gemini CLI, and Ollama monolith concentration in their previously hottest paths.
- TUI side-effect leakage into persistence, integration install, and provider validation.
- Major TUI composition concentration in tile/detail/settings code.
- Remaining detail/analytics metric-prefix parsing pockets that were still living in renderer code.
- Tile/detail/analytics render-path recomputation on every frame.
- Account-config runtime-path overload in the hot path.
- Repeated telemetry/config/provider test setup boilerplate in the most actively changed suites.
- Remaining runtime-only provider overrides reaching directly into ad hoc `ExtraData` fields.
- The last oversized high-churn Copilot/OpenRouter test suites.
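
One resolved theme worth making concrete is the read-model cache dedupe that ignored the time window: a cached snapshot for one window could satisfy a lookup for another. The fix pattern is to fold the window into the cache key. The sketch below uses hypothetical names (`snapshotCache`, `cacheKey`); the real types live in the daemon read-model code and are not shown in this diff:

```go
package main

import "fmt"

// TimeWindow stands in for core.TimeWindow; values are illustrative.
type TimeWindow string

// cacheKey combines account and window so a 30d entry can never be returned
// for a 7d request — keying on account ID alone was the dedupe bug.
type cacheKey struct {
	AccountID string
	Window    TimeWindow
}

type snapshotCache struct {
	entries map[cacheKey]string
}

func newSnapshotCache() *snapshotCache {
	return &snapshotCache{entries: make(map[cacheKey]string)}
}

func (c *snapshotCache) Put(account string, w TimeWindow, snap string) {
	c.entries[cacheKey{account, w}] = snap
}

func (c *snapshotCache) Get(account string, w TimeWindow) (string, bool) {
	snap, ok := c.entries[cacheKey{account, w}]
	return snap, ok
}

func main() {
	c := newSnapshotCache()
	c.Put("acct", TimeWindow("30d"), "thirty-day totals")
	_, hit := c.Get("acct", TimeWindow("7d"))
	fmt.Println(hit) // different window: a miss, not a stale cross-window hit
}
```

Switching the dashboard timeframe then forces a fresh read instead of silently reusing data aggregated over the wrong range.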

## Current Findings

### 1. No remaining high-confidence correctness bug surfaced in the follow-up review

After the final cleanup pass and validation run, I did not find another issue on the level of the original dashboard timeframe race. The remaining items are not hidden state-corruption or concurrency defects; they are explicit maintenance tradeoffs.

Validation used for this state:

- `go test ./...`
- `go vet ./...`
- `make build`

### 2. The codebase now has clearer responsibility boundaries in the hot areas

The most change-prone areas are no longer concentrated the way they were at the start of the branch:

- TUI render/state work is split across dedicated settings/detail/cache/helper units.
- Provider-local parsing and fetch logic are split by concern in the previously worst provider files.
- Daemon hook ingest, HTTP, polling, spool, and read-model paths are separated.
- Telemetry usage-view query/materialization/projection/aggregate logic is separated.

This reduces review blast radius and makes future concurrency/data-flow work easier to reason about.

### 3. No active audit-priority items remain
The earlier follow-up list is now closed for the purposes of this review. What remains in the repo are ordinary future refactor options, not unresolved `P1`/`P2`/`P3` findings from this audit.

## References

- [CODEBASE_AUDIT_ACTION_TABLE_2026-03-09.md](/Users/janekbaraniewski/Workspace/priv/openusage/docs/CODEBASE_AUDIT_ACTION_TABLE_2026-03-09.md)
- [internal/tui/render_cache.go](/Users/janekbaraniewski/Workspace/priv/openusage/internal/tui/render_cache.go)
- [internal/tui/detail_metrics.go](/Users/janekbaraniewski/Workspace/priv/openusage/internal/tui/detail_metrics.go)
- [internal/tui/settings_modal_input.go](/Users/janekbaraniewski/Workspace/priv/openusage/internal/tui/settings_modal_input.go)
- [internal/providers/ollama/desktop_db.go](/Users/janekbaraniewski/Workspace/priv/openusage/internal/providers/ollama/desktop_db.go)
- [internal/providers/ollama/desktop_db_tokens.go](/Users/janekbaraniewski/Workspace/priv/openusage/internal/providers/ollama/desktop_db_tokens.go)
- [internal/providers/gemini_cli/api_usage.go](/Users/janekbaraniewski/Workspace/priv/openusage/internal/providers/gemini_cli/api_usage.go)
- [internal/core/provider.go](/Users/janekbaraniewski/Workspace/priv/openusage/internal/core/provider.go)
- [internal/telemetry/test_helpers_test.go](/Users/janekbaraniewski/Workspace/priv/openusage/internal/telemetry/test_helpers_test.go)

## Bottom Line

- The original review’s high-priority structural set is addressed.
- The repo is in materially better shape than at the start of the branch.
- Remaining items are optional follow-up architecture choices, not outstanding bugs from the review.