
feat(pricing&usage) - Add usage + pricing tracking for fal.ai renders (SDK + CLI) #94

Open
Michailbul wants to merge 10 commits into vargHQ:main from Michailbul:feat/usage-tracking

Conversation

@Michailbul
Contributor

What this adds (tl;dr)

  • Tracks image/video usage + estimated cost for fal.ai generations.
  • Persists daily usage to disk and exposes a CLI view (varg usage).
  • Prints a per‑render usage summary (cached vs generated).

Scope: fal.ai only.
Architecture is provider‑agnostic via ProviderPricing registry + GenerationMetrics, so adding new providers is a small additive change (implement pricing adapter + emit metrics).
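A minimal sketch of the registry pattern described above, assuming hypothetical shapes for `ProviderPricing`, `PricingRegistry`, and `GenerationMetrics` (the real definitions live in src/ai-sdk/usage/ and may differ):

```typescript
// Hypothetical sketch of the provider-agnostic pricing registry; names match
// the PR's description but signatures are assumptions, not the actual code.
type ResourceType = "image" | "video";

interface GenerationMetrics {
  provider: string;
  resourceType: ResourceType;
  resourceId: string; // e.g. a fal.ai endpoint id
  quantity: number;   // image count, or seconds of video
}

interface ProviderPricing {
  readonly provider: string;
  estimateCost(metrics: GenerationMetrics): Promise<number>; // USD
}

class PricingRegistry {
  private providers = new Map<string, ProviderPricing>();

  register(pricing: ProviderPricing): void {
    this.providers.set(pricing.provider, pricing);
  }

  // Returns 0 when no adapter is registered, mirroring the
  // "costs reported as $0 with a warning" behavior described below.
  async estimateCost(metrics: GenerationMetrics): Promise<number> {
    const pricing = this.providers.get(metrics.provider);
    return pricing ? pricing.estimateCost(metrics) : 0;
  }
}
```

Under this shape, a new provider only needs to implement `estimateCost` and call `register()` during its init, which is the "small additive change" the PR body refers to.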

How it works

  • SDK renderer records usage on each image/video generation (cached items tracked in session only).
  • Pricing is fetched from fal.ai pricing API; on timeout/unavailable, costs are reported as $0 with a warning.
  • Storage: daily usage saved to .cache/usage/YYYY-MM-DD.json.
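The date-keyed storage layout above can be sketched as follows; `resetHourUtc` is a stand-in for the configurable reset hour (`VARG_DAILY_RESET_HOUR_UTC`, mentioned later in this thread), and the real helpers in src/ai-sdk/usage/storage.ts may differ:

```typescript
// Sketch of the daily-file naming logic; not the actual storage.ts code.
function usageDateKey(now: Date, resetHourUtc = 0): string {
  // Before the reset hour, usage still counts toward the previous day.
  const shifted = new Date(now.getTime() - resetHourUtc * 60 * 60 * 1000);
  return shifted.toISOString().slice(0, 10); // "YYYY-MM-DD"
}

function usageFilePath(now: Date, resetHourUtc = 0): string {
  return `.cache/usage/${usageDateKey(now, resetHourUtc)}.json`;
}
```

For example, with a reset hour of 5, a generation at 03:00 UTC on Jan 30 would still land in the Jan 29 file.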

Where it lives

  • Core tracker + pricing: src/ai-sdk/usage/*
  • Render integration: src/react/renderers/render.ts
  • CLI: src/cli/commands/usage.tsx

How to use

  • SDK: usage tracking is enabled by default. Use VARG_TRACK_USAGE=0 to disable.
  • CLI: bun run src/cli/index.ts usage or bun run src/cli/index.ts usage --history.

Testing

  • bun test src/ai-sdk/usage/usage.test.ts

  • bun test src/ai-sdk/usage/integration.test.ts

  • bun run examples/usage-pricing-fal-test.tsx (manual; hits fal.ai API)
    Runs image → edit → video using fal.ai, then prints session summary.

Notes

  • Cached generations do not increment daily totals; they are shown in the session summary only.
  • Pricing fetch has a 5s timeout; if it fails, a warning is displayed and costs remain $0.
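The timeout-then-$0 behavior in the notes above can be approximated like this; the URL and response shape are placeholders, not the real fal.ai pricing endpoint:

```typescript
// Illustrative sketch of the 5s-timeout fallback; the endpoint URL and the
// response parsing below are placeholders, not the PR's actual pricing client.
const PRICING_URL = "https://example.invalid/pricing"; // placeholder URL

async function fetchPriceOrZero(timeoutMs = 5000): Promise<number> {
  try {
    const res = await fetch(PRICING_URL, { signal: AbortSignal.timeout(timeoutMs) });
    if (!res.ok) throw new Error(`HTTP ${res.status}`);
    const body = (await res.json()) as { pricePerUnit?: number };
    return body.pricePerUnit ?? 0;
  } catch (err) {
    // On timeout or any failure, warn and report $0 rather than throwing.
    console.warn(`[varg] pricing unavailable, reporting costs as $0 (${(err as Error).message})`);
    return 0;
  }
}
```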

Michailbul and others added 6 commits January 29, 2026 18:20
- Fetch real-time pricing from fal.ai API with 24h cache and 5s timeout
- Track image/video generation costs per session
- Daily usage persistence in .cache/usage/
- Cache hit detection to show savings
- CLI usage summary after renders
- Support for daily limits via environment variables
- No hardcoded prices - warns if pricing API unavailable
- Fix: hasLimits() recognizes 0 as valid limit
- Fix: loadLimitsFromEnv() handles "0" and invalid values correctly
- Fix: Session summary sums record.count for multi-item generations
- Add integration tests for SDK, CLI, and JSX contexts

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
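The "0 is a valid limit" fixes in the commit notes above amount to parsing along these lines (`parseLimit` is a hypothetical helper, not the PR's actual loadLimitsFromEnv code):

```typescript
// Sketch of env-limit parsing where 0 is a legitimate limit ("block all
// generations") while missing, empty, negative, or non-numeric values are
// treated as "no limit". Hypothetical helper, not the PR's code.
function parseLimit(raw: string | undefined): number | undefined {
  if (raw === undefined || raw.trim() === "") return undefined;
  const n = Number(raw);
  return Number.isFinite(n) && n >= 0 ? n : undefined;
}
```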
@coderabbitai
Contributor

coderabbitai bot commented Jan 30, 2026

📝 Walkthrough

adds a new usage-tracking & pricing subsystem: types, storage, pricing providers (fal, elevenlabs), a usage tracker, render pipeline integration, cli commands, tests, docs, and example for measuring and enforcing daily generation limits and costs.

changes

Cohort / File(s) — Summary

  • usage types & errors — src/ai-sdk/usage/types.ts, src/ai-sdk/usage/pricing-errors.ts
    new type model for providers/resources, generation records, daily limits, session summaries, and a PricingUnavailableError type.
  • pricing core & registry — src/ai-sdk/usage/pricing.ts, src/ai-sdk/usage/index.ts
    adds PricingRegistry, ProviderPricing interface, ElevenLabsPricing, formatCost/estimateCost, and barrel exports for the usage surface.
  • fal pricing integration — src/ai-sdk/providers/fal-extras/pricing-api.ts, src/ai-sdk/providers/fal-extras/pricing.ts, src/ai-sdk/providers/fal.ts
    new fal.ai pricing client (cached), FalPricing provider, endpoint resolution, and registration during fal provider init.
  • storage & persistence — src/ai-sdk/usage/storage.ts
    daily usage persistence to .cache/usage/YYYY-MM-DD.json, helpers for date/reset logic, listing and history retrieval.
  • usage tracker — src/ai-sdk/usage/tracker.ts
    UsageTracker class and factory: load/save state, record generations, check/assert limits with warnings, session summaries, pricing error handling, and env-driven limits.
  • render integration — src/react/renderers/render.ts, src/react/renderers/context.ts, src/react/types.ts
    injects UsageTracker into render context; wrapGenerateImage/wrapGenerateVideo detect cache, check limits on misses, record metrics, save usage, and print warnings/summary after render.
  • cli: usage command — src/cli/commands/usage.tsx, src/cli/commands/index.ts, src/cli/index.ts
    new varg usage command with --json/--history/--date; prints human and JSON summaries, history table, and usage help.
  • tests & example — src/ai-sdk/usage/usage.test.ts, src/ai-sdk/usage/integration.test.ts, examples/usage-pricing-fal-test.tsx
    extensive unit/integration tests and a manual fal pricing example covering pricing, storage, tracker, limits, persistence, and JSON output.
  • supporting & misc — README.md, package.json, src/ai-sdk/generate-video.ts, src/ai-sdk/video-model.ts, src/ai-sdk/cache.ts, src/cli/commands/storyboard.tsx
    docs updated for usage env vars/examples, new ./usage export path, generate-video result includes usage, VideoModelV3Usage aliased to ImageModelV3Usage, depsToCacheKey exported and used for cache keys.

sequence diagram(s)

sequenceDiagram
    participant client as Client
    participant render as Render
    participant tracker as UsageTracker
    participant cache as Cache
    participant gen as Generator
    participant pricing as Pricing
    participant storage as Storage

    client->>render: render(component, { usage: true })
    render->>tracker: createUsageTracker()
    tracker->>storage: loadDailyUsage()
    render->>gen: wrapGenerateImage / wrapGenerateVideo
    gen->>cache: lookup(cacheKey)
    alt cache hit
        cache-->>gen: cached result
        gen->>tracker: record(metrics, cached: true)
    else cache miss
        gen->>pricing: estimateCost(metrics)
        pricing-->>gen: cost estimate
        gen->>gen: perform generation
        gen->>tracker: record(metrics, cached: false)
    end
    tracker->>storage: saveDailyUsage()
    render->>client: print usage summary & warnings

estimated code review effort

🎯 4 (complex) | ⏱️ ~50 minutes

poem

meow — tokens and seconds tracked with care,
cache saves purrs, pricing sings in the air,
limits whisper at eighty percent light,
daily files sleep under .cache’s sight,
cheers to meters that keep our ai bright.

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)
  • Docstring Coverage ⚠️ — docstring coverage is 51.22%, below the required 80.00% threshold. Resolution: write docstrings for the functions missing them.

✅ Passed checks (2 passed)
  • Title check ✅ — the title clearly summarizes the main addition: usage + pricing tracking for fal.ai renders across sdk and cli, which matches the changeset.
  • Description check ✅ — the description comprehensively explains what was added (usage tracking, daily persistence, cli view), how it works, where it lives, and includes testing/usage instructions, all directly related to the changeset.




Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

🤖 Fix all issues with AI agents
In `@src/ai-sdk/generate-video.ts`:
- Around line 163-164: The code casts result.usage directly to GenerationMetrics
which can drop token-based usage (VideoModelV3Usage); update the handling around
the "usage" variable (where result.usage is read) to detect both shapes (use a
type guard against VideoModelV3Usage vs GenerationMetrics) and normalize them
into a single GenerationMetrics-compatible object when needed (e.g., map
inputTokens/outputTokens to a token-based Usage shape and fill
provider/resourceType/defaults or keep the original GenerationMetrics as-is).
Ensure places that consume "usage" (the normalized variable) can rely on
provider/resourceType/resourceId while still preserving token counts when
present.

In `@src/ai-sdk/usage/types.ts`:
- Around line 232-241: The hint currently builds the env var name using
this.limitType.toUpperCase(), which collapses camelCase (e.g., "speechMinutes")
into "SPEECHMINUTES"; change the construction in toJSON() to convert camelCase
to UPPER_SNAKE_CASE (insert underscores before capitals then .toUpperCase()),
e.g. use a small transformation on limitType like
this.limitType.replace(/([a-z])([A-Z])/g, '$1_$2').toUpperCase() when building
the hint string so the env var becomes VARG_DAILY_LIMIT_SPEECH_MINUTES for
"speechMinutes".

In `@src/react/renderers/render.ts`:
- Around line 154-159: The code currently defaults an unrecognized model
provider to "fal" in the GenerationMetrics object (see GenerationMetrics,
isUsageProvider, modelProvider, opts.model), which misattributes costs; update
the provider field so that if isUsageProvider(modelProvider) is true use
modelProvider, otherwise set a neutral sentinel like "unknown" (or derive the
provider from opts.model if that object exposes a provider field), or
alternatively add supported providers (e.g., "higgsfield") to the UsageProvider
type if appropriate; modify the provider assignment accordingly to avoid
silently mapping unknown providers to "fal".
- Around line 136-144: The cache key construction is duplicated and fragile —
centralize it into a single utility and use it everywhere instead of recreating
strings like the `cacheKeyStr` (`generateImage:${...}`) in `render.ts` (where
`optsWithCache.cacheKey` is mapped) and the similar construction in
`storyboard.tsx`; create a shared function (e.g., `depsToCacheKey(deps:
Array<unknown>, prefix?: string)`) that maps values to strings, joins them with
the chosen separator and prepends the `generateImage` prefix when needed, then
replace the inline building in `render.ts` (`cacheKeyStr`) and other locations
to call that utility and use `cacheStorage.get`/set with its result to ensure
consistent cache hits.
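A sketch of the shared helper this comment asks for; the PR later exports `depsToCacheKey(deps, prefix?)` from src/ai-sdk/cache.ts with a similar signature, though its `CacheKeyDeps` type and serialization details may differ from this approximation:

```typescript
// Hypothetical approximation of the proposed shared cache-key utility.
// Strings pass through; other values are JSON-serialized for stability.
function depsToCacheKey(deps: unknown[], prefix?: string): string {
  const key = deps
    .map((d) => (typeof d === "string" ? d : JSON.stringify(d)))
    .join(":");
  return prefix ? `${prefix}:${key}` : key;
}
```

Both render.ts and storyboard.tsx would then call this one function, so cache lookups and writes agree on the key format.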
🧹 Nitpick comments (8)
src/ai-sdk/usage/integration.test.ts (3)

220-224: env restoration might not catch all scenarios

the originalEnv spread creates a shallow copy at module load time, but if other tests modify process.env before this block runs, the "original" won't be accurate. also, direct assignment process.env = originalEnv can behave unexpectedly in some runtimes.

♻️ safer pattern using delete + restore
 describe("Environment Variable Loading", () => {
-  const originalEnv = { ...process.env };
+  const envKeysToRestore = [
+    "VARG_DAILY_LIMIT_IMAGES",
+    "VARG_DAILY_LIMIT_VIDEOS", 
+    "VARG_DAILY_LIMIT_COST",
+    "VARG_DAILY_RESET_HOUR_UTC",
+  ];
+  const originalValues: Record<string, string | undefined> = {};
+
+  beforeEach(() => {
+    for (const key of envKeysToRestore) {
+      originalValues[key] = process.env[key];
+    }
+  });

   afterEach(() => {
-    process.env = originalEnv;
+    for (const key of envKeysToRestore) {
+      if (originalValues[key] === undefined) {
+        delete process.env[key];
+      } else {
+        process.env[key] = originalValues[key];
+      }
+    }
   });

300-302: accessing private internals is fragile

casting to any to poke at pricingErrors will break silently if the internal field name changes. consider adding a test helper method or making the error injection part of the public test api.


42-46: any cast on unit type could be tightened

the unit field is cast to any - if there's a proper type for units (like "image" | "second"), using it would catch typos in test data.

src/cli/commands/usage.tsx (1)

156-161: potential semantic mismatch in json output

looking at the relevant code snippet from src/ai-sdk/usage/types.ts (lines 191-213), the UsageJsonOutput.counts interface shows speech: number and music: number. in state, these are stored as speechMinutes and musicMinutes (durations in minutes).

the json output puts rounded minutes into fields named just speech and music in the counts object. semantically "counts" suggests item counts (like for image/video), but for speech/music you're outputting duration in minutes. this could be confusing for api consumers.

📝 suggestion: clarify the json structure

consider either:

  1. rename counts.speech/counts.music to make it clear they're durations (e.g., move to a durations object)
  2. add a comment in the type definition explaining the units
    durations: {
      video: state.videoSeconds,
+     speechMinutes: Math.round(state.speechMinutes * 10) / 10,
+     musicMinutes: Math.round(state.musicMinutes * 10) / 10,
    },
-   counts: {
-     image: state.images,
-     video: state.videos,
-     speech: Math.round(state.speechMinutes * 10) / 10,
-     music: Math.round(state.musicMinutes * 10) / 10,
-   },
+   counts: {
+     image: state.images,
+     video: state.videos,
+   },
src/ai-sdk/usage/storage.ts (2)

139-154: dynamic import inside function

the readdir import happens on each call to listUsageDates. not a big deal for this use case, but could be hoisted to the top with other imports for consistency.

minor: hoist import
-import { mkdir } from "node:fs/promises";
+import { mkdir, readdir } from "node:fs/promises";
 import type { DailyUsageState } from "./types";
...

 export async function listUsageDates(
   usageDir = DEFAULT_USAGE_DIR,
 ): Promise<string[]> {
-  const { readdir } = await import("node:fs/promises");
-
   try {
     const files = await readdir(usageDir);

15-20: tests exist but edge case coverage is minimal

the logic is sound—tests confirm it returns the right format and respects custom reset hours. however, no tests are visible for boundary scenarios like the one mentioned (5am reset at 3am utc) or extreme values. worth adding a test or two for reset hour behavior near midnight to catch any gotchas. 🐱

src/ai-sdk/usage/tracker.ts (1)

533-545: potential performance concern with sequential pricing fetches for cached items

when calculating savedFromCache, each cached record triggers an individual calculateCost call. if there are many cached items, this could cause multiple sequential api requests (or cache lookups).

not a blocker since the pricing api has 24h cache, but worth noting for future optimization if session summaries become slow with lots of cached items.
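One way to address this note is to resolve the cached records' costs concurrently instead of awaiting them one by one; the record shape and `calculateCost` signature here are stand-ins for the tracker's internals:

```typescript
// Sketch: sum cached-item costs with Promise.all rather than a sequential
// loop, so many cached records don't serialize their pricing lookups.
interface CachedRecord {
  resourceId: string;
  count: number;
}

async function savedFromCache(
  records: CachedRecord[],
  calculateCost: (r: CachedRecord) => Promise<number>,
): Promise<number> {
  const costs = await Promise.all(records.map(calculateCost));
  return costs.reduce((sum, c) => sum + c, 0);
}
```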

src/ai-sdk/usage/usage.test.ts (1)

106-114: console.log statements in tests

these console logs add visual separators but will clutter test output. consider removing or gating behind a verbose flag.

Contributor


providers should be scoped and ai-sdk should not depend on specific provider implementations

you may move this to ../providers/fal-extras, for example

readonly videos: GeneratedVideo[];
readonly warnings: SharedV3Warning[];
/** Usage metrics from the generation (if provided by the model) */
readonly usage?: GenerationMetrics;
Contributor


👍🏽

}

// Extract usage metrics if provided by the model
const usage = normalizeUsage(result.usage, model);
Contributor


i think this should happen inside the provider

Comment on lines +188 to +192
case "fal":
case "elevenlabs":
case "openai":
case "replicate":
case "google":
Contributor


same, this should be scoped per-provider

* Optional token usage for the video generation call.
*/
usage?: VideoModelV3Usage;
usage?: VideoModelV3Usage | GenerationMetrics;
Contributor


wait i didn't notice -- we already had this type? we need to use this type then

Contributor

@caffeinum caffeinum left a comment


nice job testing this!

i didn't mention before, but high-level i am trying to follow original ai-sdk.dev -- the provider-specific behaviour should all happen inside SharedProviderV3 class

there already seems to be a usage return type -- we should re-use it, or upgrade it if it doesn't work for VideoGen. but we should not keep multiple ways to do the same thing

let's discuss the draft of the implementation in the issue first, to make sure we're on the same page?

@Michailbul
Contributor Author

nice job testing this!

i didn't mention before, but high-level i am trying to follow original ai-sdk.dev -- the provider-specific behaviour should all happen inside SharedProviderV3 class

there already seems to be a usage return type -- we should re-use it, or upgrade it if it doesn't work for VideoGen. but we should not keep multiple ways to do the same thing

let's discuss the draft of the implementation in the issue first, to make sure we're on the same page?

Sure, thanks for the corrections! It makes sense indeed.

I will do the following:

  • Move fal‑specific pricing/mapping out of core usage/* into provider scope
    (e.g. providers/fal‑extras), so core SDK stays provider‑agnostic
  • Reuse the existing ai‑sdk usage type for model returns (token usage only)
    instead of introducing a separate shape; will keep GenerationMetrics strictly
    internal for tracking/cost
  • Move any usage normalization into provider classes (e.g. FalVideoModel)

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@src/react/renderers/render.ts`:
- Around line 136-152: The limit check (usage.assertLimits("image")) runs before
cache detection which blocks fully cached renders when limits are reached; move
the usage.assertLimits("image") call to after the cache-detection block and only
invoke it when cached === false (i.e., on cache miss) in the generateImage path
(the code around optsWithCache, cacheStorage, depsToCacheKey, and cached
variable); apply the identical change to the other similar block referenced (the
code around lines 189-205) so both paths skip limit checks for cache hits.
🧹 Nitpick comments (6)
src/ai-sdk/usage/integration.test.ts (2)

340-340: shadowed TEST_DIR constants

you've got TEST_DIR defined at different scopes with different values (lines 28, 340, 420). works fine but the shadowing can be confusing when reading. consider unique names like CLI_TEST_DIR and RENDER_TEST_DIR for clarity.

Also applies to: 420-420


301-303: accessing private members via (tracker as any)

casting to any to push to private pricingErrors array couples these tests to internal implementation. if you refactor the tracker's internals, these tests break.

consider either:

  • exposing a test helper method like _addPricingErrorForTest()
  • or testing through the public api by triggering actual pricing failures

Also applies to: 316-318

src/ai-sdk/providers/fal-extras/pricing-api.ts (2)

45-65: silent default in mapUnit could hide api changes

defaulting unknown units to "image" might mask issues if fal.ai adds new pricing units. consider logging a warning here too:

     default:
-      // Default to per-image for unknown units
+      console.warn(`[varg] Unknown fal.ai pricing unit "${falUnit}", defaulting to "image"`);
       return "image";

209-231: estimateCost quantity semantics are unit-dependent

the quantity param means different things per unit (seconds for "second", characters for "1k_chars", etc). might be clearer to have the caller pass already-normalized values or document this behavior in the jsdoc.

not blocking, just a usability note.
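To make the unit-dependent semantics concrete: the same `quantity` argument means a count, a duration, or a character total depending on the unit. This is a hedged sketch using unit names mentioned in this thread; the real mapUnit/estimateCost in pricing-api.ts may differ:

```typescript
// Illustration of unit-dependent quantity semantics; not the PR's actual code.
type PriceUnit = "image" | "second" | "1k_chars";

function estimateCost(pricePerUnit: number, unit: PriceUnit, quantity: number): number {
  switch (unit) {
    case "image":
      return pricePerUnit * quantity; // quantity = number of images
    case "second":
      return pricePerUnit * quantity; // quantity = seconds of video/audio
    case "1k_chars":
      return pricePerUnit * (quantity / 1000); // quantity = character count
  }
}
```

Documenting this per-unit meaning in the jsdoc (or normalizing at the call site) is the usability point the nitpick raises.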

src/ai-sdk/usage/usage.test.ts (1)

107-113: decorative console.log in tests

these visual separators are fine but add noise to test output. bun test already groups output by describe blocks. consider removing for cleaner logs or keep if your team likes the visual organization.

src/react/renderers/render.ts (1)

475-539: guard usage save so render output isn’t blocked

if usage persistence fails (fs perms, disk full), this will throw after all the heavy work. consider wrapping save + summary in a try/catch with a warning so render still completes.

possible tweak
-  // Save usage data and print summary
-  await usage.save();
-
-  if (!options.quiet && usage.isEnabled()) {
-    // Display pricing warnings if any
-    if (usage.hasPricingErrors()) {
-      const warningMsg = usage.getPricingWarningMessage();
-      if (warningMsg) {
-        console.log(`\n\x1b[33m${warningMsg}\x1b[0m\n`);
-      }
-      usage.markPricingErrorsShown();
-    }
-
-    const summary = await usage.getSessionSummary();
-    const hasActivity =
-      summary.images.generated > 0 ||
-      summary.videos.generated > 0 ||
-      summary.images.cached > 0 ||
-      summary.videos.cached > 0;
-
-    // Only show cost summary if we have pricing data (no errors)
-    const hasPricingData = !usage.hasPricingErrors() || summary.totalCost > 0;
-
-    if (hasActivity) {
-      console.log("\n\x1b[36m─────────────────────────────────────\x1b[0m");
-      console.log("\x1b[36m Usage Summary\x1b[0m");
-      console.log("\x1b[36m─────────────────────────────────────\x1b[0m");
-
-      if (summary.images.generated > 0 || summary.images.cached > 0) {
-        const imageStr =
-          summary.images.cached > 0
-            ? `${summary.images.generated} generated, ${summary.images.cached} cached`
-            : `${summary.images.generated} generated`;
-        const costStr = hasPricingData
-          ? formatCost(summary.images.cost)
-          : "N/A";
-        console.log(`  Images:  ${imageStr.padEnd(28)} ${costStr}`);
-      }
-
-      if (summary.videos.generated > 0 || summary.videos.cached > 0) {
-        const videoStr =
-          summary.videos.cached > 0
-            ? `${summary.videos.generated} generated, ${summary.videos.cached} cached`
-            : `${summary.videos.generated} generated (${Math.round(summary.videos.duration ?? 0)}s)`;
-        const costStr = hasPricingData
-          ? formatCost(summary.videos.cost)
-          : "N/A";
-        console.log(`  Videos:  ${videoStr.padEnd(28)} ${costStr}`);
-      }
-
-      console.log("\x1b[36m─────────────────────────────────────\x1b[0m");
-      const totalCostStr = hasPricingData
-        ? formatCost(summary.totalCost)
-        : "N/A (pricing unavailable)";
-      console.log(`  Session total                      ${totalCostStr}`);
-
-      if (summary.savedFromCache > 0 && hasPricingData) {
-        console.log(
-          `  \x1b[32m💰 Saved ${formatCost(summary.savedFromCache)} from cache\x1b[0m`,
-        );
-      }
-
-      console.log("\x1b[36m─────────────────────────────────────\x1b[0m\n");
-    }
-  }
+  // Save usage data and print summary
+  try {
+    await usage.save();
+
+    if (!options.quiet && usage.isEnabled()) {
+      // Display pricing warnings if any
+      if (usage.hasPricingErrors()) {
+        const warningMsg = usage.getPricingWarningMessage();
+        if (warningMsg) {
+          console.log(`\n\x1b[33m${warningMsg}\x1b[0m\n`);
+        }
+        usage.markPricingErrorsShown();
+      }
+
+      const summary = await usage.getSessionSummary();
+      const hasActivity =
+        summary.images.generated > 0 ||
+        summary.videos.generated > 0 ||
+        summary.images.cached > 0 ||
+        summary.videos.cached > 0;
+
+      // Only show cost summary if we have pricing data (no errors)
+      const hasPricingData = !usage.hasPricingErrors() || summary.totalCost > 0;
+
+      if (hasActivity) {
+        console.log("\n\x1b[36m─────────────────────────────────────\x1b[0m");
+        console.log("\x1b[36m Usage Summary\x1b[0m");
+        console.log("\x1b[36m─────────────────────────────────────\x1b[0m");
+
+        if (summary.images.generated > 0 || summary.images.cached > 0) {
+          const imageStr =
+            summary.images.cached > 0
+              ? `${summary.images.generated} generated, ${summary.images.cached} cached`
+              : `${summary.images.generated} generated`;
+          const costStr = hasPricingData
+            ? formatCost(summary.images.cost)
+            : "N/A";
+          console.log(`  Images:  ${imageStr.padEnd(28)} ${costStr}`);
+        }
+
+        if (summary.videos.generated > 0 || summary.videos.cached > 0) {
+          const videoStr =
+            summary.videos.cached > 0
+              ? `${summary.videos.generated} generated, ${summary.videos.cached} cached`
+              : `${summary.videos.generated} generated (${Math.round(summary.videos.duration ?? 0)}s)`;
+          const costStr = hasPricingData
+            ? formatCost(summary.videos.cost)
+            : "N/A";
+          console.log(`  Videos:  ${videoStr.padEnd(28)} ${costStr}`);
+        }
+
+        console.log("\x1b[36m─────────────────────────────────────\x1b[0m");
+        const totalCostStr = hasPricingData
+          ? formatCost(summary.totalCost)
+          : "N/A (pricing unavailable)";
+        console.log(`  Session total                      ${totalCostStr}`);
+
+        if (summary.savedFromCache > 0 && hasPricingData) {
+          console.log(
+            `  \x1b[32m💰 Saved ${formatCost(summary.savedFromCache)} from cache\x1b[0m`,
+          );
+        }
+
+        console.log("\x1b[36m─────────────────────────────────────\x1b[0m\n");
+      }
+    }
+  } catch (err) {
+    if (!options.quiet) {
+      console.warn(
+        `\x1b[33m⚠ usage tracking failed: ${(err as Error).message}\x1b[0m`,
+      );
+    }
+  }

Contributor

@caffeinum caffeinum left a comment


looks good! please fix conflicts, and address two comments

also please review the submission, don't just copy-paste my comments into claude

}

function depsToKey(prefix: string, deps: CacheKeyDeps): string {
export function depsToCacheKey(deps: CacheKeyDeps, prefix?: string): string {
Contributor


this implies we want to use it without prefix sometimes? but we don't do it anywhere

* Persists daily usage data to .cache/usage/YYYY-MM-DD.json
*/

import { mkdir } from "node:fs/promises";
Contributor


we use Bun.file API for filesystem operations

however, i have updated it recently to abstract the storage implementation, which allows to run varg in vercel environment -- so we need to use that for persistence
