BurnLens — See exactly what your LLM API calls cost

pip install burnlens — open-source LLM FinOps proxy for OpenAI, Anthropic (Claude), and Google Gemini. Track real token costs, attribute spend to features, teams, and customers, and detect waste. Zero code changes. Everything runs locally.

Python 3.10+ · Apache License 2.0 · available on PyPI

Zero code changes. Every dollar tracked. Works with the official OpenAI, Anthropic, and Google AI SDKs out of the box.


Install

pip install burnlens
burnlens start
# Dashboard → http://127.0.0.1:8420/ui

Point your SDK at the proxy

# OpenAI — note the /v1 suffix
export OPENAI_BASE_URL=http://127.0.0.1:8420/proxy/openai/v1

# Anthropic (Claude)
export ANTHROPIC_BASE_URL=http://127.0.0.1:8420/proxy/anthropic

# Google Gemini — one-line patch (run in your Python code, not the shell)
import burnlens.patch; burnlens.patch.patch_google()

Your existing SDK code works unchanged. BurnLens intercepts, logs, and forwards — nothing else.
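The proxy endpoints above follow a predictable pattern. As an illustration (this helper is not part of BurnLens), they can be constructed like so:

```python
# Hypothetical helper (not part of BurnLens): build the proxy base URL
# for each provider, following the pattern shown above.
DEFAULT_HOST = "http://127.0.0.1:8420"

def proxy_base_url(provider: str, host: str = DEFAULT_HOST) -> str:
    """Return the BurnLens proxy base URL for a given provider."""
    paths = {
        "openai": "/proxy/openai/v1",   # note the /v1 suffix
        "anthropic": "/proxy/anthropic",
    }
    if provider not in paths:
        raise ValueError(f"unsupported provider: {provider}")
    return host + paths[provider]

print(proxy_base_url("openai"))
```

Instead of exporting the environment variable, the official SDKs also accept the URL directly at construction time, e.g. `OpenAI(base_url=proxy_base_url("openai"))`.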

Tag any request for attribution

X-BurnLens-Tag-Feature:  chat
X-BurnLens-Tag-Team:     backend
X-BurnLens-Tag-Customer: acme-corp

Tags are stripped before reaching the AI provider. They never leave your machine.
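With the official OpenAI Python SDK, per-request headers can be attached via `extra_headers`. A sketch (the helper and tag values below are examples, not BurnLens APIs):

```python
# Build attribution tags as X-BurnLens-Tag-* headers. BurnLens strips
# these before forwarding, so the provider never sees them.
def burnlens_tags(feature=None, team=None, customer=None):
    """Return X-BurnLens-Tag-* headers for the given attribution tags."""
    tags = {"Feature": feature, "Team": team, "Customer": customer}
    return {f"X-BurnLens-Tag-{k}": v for k, v in tags.items() if v is not None}

headers = burnlens_tags(feature="chat", team="backend", customer="acme-corp")
# With the official OpenAI SDK, attach them per request:
#   client.chat.completions.create(..., extra_headers=headers)
print(headers)
```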


Why BurnLens

  • OpenAI and Anthropic bill by model, not by feature. You find out at month end which feature cost the most.
  • Reasoning tokens on o1 / o3 / Claude thinking models cost 10× more than expected. One prompt change can balloon your bill.
  • One bad deploy can burn $47K before anyone notices. Budget alerts catch it in minutes.

BurnLens fixes this at the proxy layer — no instrumentation, no SDK wrapping, no vendor lock-in.


What you get

[Screenshot: BurnLens dashboard — LLM cost tracking by model, feature, team, and customer]

  • Cost timeline — daily spend trend across all providers
  • Attribution — cost by model, feature, team, customer
  • Waste detection — context bloat, duplicate requests, model overkill
  • Per-request detail — tokens, cost, and latency for every call
  • Budget alerts — Slack + terminal notifications when you hit spend limits

Supported providers

Provider             Models
OpenAI               gpt-4o, gpt-4o-mini, o1, o3, o1-mini, gpt-4-turbo
Anthropic (Claude)   claude-opus-4, claude-sonnet-4, claude-3-5-sonnet, claude-3-haiku
Google Gemini        gemini-2.0-flash, gemini-1.5-pro, gemini-1.5-flash

Reasoning tokens, cached tokens, and vision tokens are all tracked separately.
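Tracking token classes separately matters because they are priced differently. A minimal sketch of per-request cost accounting (the rates below are placeholders, not real provider prices):

```python
# Sketch of per-request cost accounting with separate token classes.
# The prices are hypothetical, NOT actual provider rates.
PRICES_PER_MTOK = {  # USD per million tokens
    "input": 2.50,
    "cached_input": 1.25,   # cached tokens are usually discounted
    "output": 10.00,
    "reasoning": 10.00,     # reasoning tokens bill at output rates
}

def request_cost(tokens: dict) -> float:
    """Compute the USD cost of one request from per-class token counts."""
    return sum(PRICES_PER_MTOK[k] * n / 1_000_000 for k, n in tokens.items())

cost = request_cost({"input": 1200, "cached_input": 800,
                     "output": 300, "reasoning": 4000})
print(f"${cost:.6f}")
```

Note how the 4,000 reasoning tokens dominate this request's cost, which is why a prompt change that triggers more internal reasoning can quietly balloon a bill.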


CLI

burnlens start         # proxy + dashboard on :8420
burnlens top           # live cost by model (htop-style)
burnlens report        # weekly cost summary
burnlens analyze       # waste detection report
burnlens export        # CSV of last 7 days

Configuration

Zero config required — sensible defaults out of the box. Optional burnlens.yaml:

budget_limit_usd: 500.00
budgets:
  teams:
    backend: 200.00
    research: 100.00
alerts:
  slack_webhook: https://hooks.slack.com/...

How it works

App → SDK → BurnLens proxy (localhost:8420) → AI provider
                 ↓
           SQLite: logs request, calculates cost, extracts tags
                 ↓
        Dashboard (localhost:8420/ui) + CLI (burnlens top/report)
  • Local-first. Everything runs on localhost. No cloud account needed.
  • Privacy-preserving. Prompts and completions never leave your machine. API keys pass through, never stored remotely.
  • Streaming passthrough. SSE chunks forwarded immediately. < 20 ms proxy overhead.
  • Fail-open. If BurnLens can't log, it still forwards the request. Never breaks your app.
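The fail-open behavior above can be sketched in a few lines (illustrative only, not BurnLens's actual implementation): a logging failure is swallowed and the upstream call always happens.

```python
# Fail-open proxying sketch: a logging error never blocks the request.
def handle(request, log, forward):
    """Log the request if possible, then forward it unconditionally."""
    try:
        log(request)          # e.g. write to SQLite, compute cost, extract tags
    except Exception:
        pass                  # fail open: swallow logging problems
    return forward(request)   # the upstream call always happens

# Demo with a logger that always fails:
def broken_log(req):
    raise IOError("disk full")

result = handle({"model": "gpt-4o"}, broken_log, lambda req: "forwarded")
print(result)  # forwarded despite the logging failure
```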

Cloud (optional)

Need team-wide dashboards and multi-workspace cost tracking? BurnLens Cloud offers:

  • Free — local proxy only (this repo)
  • Cloud — $29/mo — personal cloud dashboard, 7-day trial
  • Teams — $99/mo — multi-user workspaces, shared budgets

The CLI is free forever. Cloud is opt-in and only syncs anonymised cost records (tokens + cost — never prompts, completions, or API keys).


Contributing

Issues and PRs welcome. See CONTRIBUTING.md.

License

Apache License 2.0
