Lightweight LLM cost and quality tracking for Python.
Track your OpenAI, Anthropic, and other LLM API costs in real-time with a simple SDK, CLI, and dashboard.
- ✅ Real-time cost tracking — Know exactly how much each request costs
- ✅ Multi-provider support — OpenAI, Anthropic, and generic tracking
- ✅ Auto-patching — Just call `patch_openai()` and forget about it
- ✅ CLI tools — Quick stats from the terminal
- ✅ Web dashboard — Beautiful visualizations with Streamlit
- ✅ Local storage — SQLite database, no cloud required
- ✅ Project tagging — Organize costs by project/environment
LLM API costs can add up fast. With intensive usage (AI agents, batch processing, production apps), you can easily hit $500/day or more without realizing it.
This is exactly why llmcost exists — to make your spending visible before it becomes a surprise on your bill.
Install from PyPI, or straight from GitHub for the latest code:

```bash
# Basic (CLI + SDK)
pip install llmcost

# With dashboard
pip install "llmcost[dashboard]"

# From GitHub (latest)
pip install git+https://github.com/ori-ops/llmcost.git
```

Patch an OpenAI client once and every request it makes is tracked:

```python
from openai import OpenAI

from llmcost import patch_openai

client = OpenAI()
patch_openai(client, project="my-app")

# All requests are now tracked automatically!
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}]
)
```

Or track a single call explicitly with `track`:

```python
import openai

from llmcost import track

response = track(
    openai.chat.completions.create,
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}]
)
```

Or wrap a whole function with `track_decorator`:

```python
import openai

from llmcost import track_decorator

@track_decorator(project="my-app", tags=["prod"])
def ask_gpt(question: str):
    return openai.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": question}]
    )
```

The CLI gives you quick stats straight from the terminal:

```bash
# View usage stats
llmcost stats
llmcost stats --days 7 --project my-app

# View recent requests
llmcost history
llmcost history --limit 50

# Set budget alerts
llmcost budget --daily 10 --monthly 100

# View model pricing
llmcost pricing

# Launch web dashboard
llmcost dashboard
llmcost dashboard --port 8501
```

`llmcost dashboard` opens a web dashboard showing:
- Total costs and request counts
- Cost breakdown by model
- Daily spending trends
- Recent request log
- Budget alerts
Supported models:

- **OpenAI** — GPT-4o, GPT-4o-mini, GPT-4 Turbo, GPT-4, GPT-3.5 Turbo, o1, o1-mini, o3-mini
- **Anthropic** — Claude 3/4 Opus, Claude 3.5/4 Sonnet, Claude 3.5 Haiku
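Under the hood, a request's cost is just token counts times per-token rates. Here is a minimal sketch of that arithmetic; the prices below are illustrative examples, not llmcost's actual pricing table (run `llmcost pricing` for the real one):

```python
# Illustrative cost calculation: cost = tokens * per-million-token rate.
# The prices here are example values, NOT llmcost's actual pricing table.
PRICES = {  # model -> (input $/1M tokens, output $/1M tokens)
    "gpt-4o": (2.50, 10.00),
    "gpt-4o-mini": (0.15, 0.60),
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of a single request at the rates above."""
    in_rate, out_rate = PRICES[model]
    return (input_tokens / 1_000_000) * in_rate + (output_tokens / 1_000_000) * out_rate

cost = request_cost("gpt-4o", input_tokens=1200, output_tokens=300)
print(f"${cost:.6f}")  # 1200 * $2.50/1M + 300 * $10.00/1M = $0.006000
```

At this scale a single request looks cheap; the point of tracking is that thousands of such requests per day add up quickly.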
Use `log_request()` for manual tracking of any provider:

```python
from llmcost import log_request, get_db

log_request(
    get_db(),
    provider="custom",
    model="my-model",
    input_tokens=100,
    output_tokens=50,
)
```

Data is stored in `~/.llmcost/usage.db` by default.
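Because the store is plain SQLite, any SQL client can query it directly. The snippet below builds a toy in-memory copy under a guessed schema (llmcost's real table and column names may differ; inspect your own `usage.db` with the `.schema` command) and totals spend per model:

```python
import sqlite3

# Hypothetical schema for illustration only -- llmcost's real table and
# column names may differ. Check your own usage.db with `.schema`.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE requests (model TEXT, input_tokens INT, "
    "output_tokens INT, cost REAL)"
)
conn.executemany(
    "INSERT INTO requests VALUES (?, ?, ?, ?)",
    [("gpt-4o", 1200, 300, 0.006), ("gpt-4o-mini", 800, 200, 0.00024)],
)

# Total spend per model, most expensive first.
for model, total in conn.execute(
    "SELECT model, SUM(cost) FROM requests GROUP BY model "
    "ORDER BY SUM(cost) DESC"
):
    print(f"{model}: ${total:.6f}")
```

The same query works against the real database file once you know its schema.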
To use a custom location:

```python
from pathlib import Path

from llmcost.db import get_db

conn = get_db(Path("/custom/path/usage.db"))
```

Planned:

- Budget alerts via Telegram/Slack
- Cloud sync for team usage
- More providers (Mistral, Cohere, etc.)
- Cost projections
- VS Code extension
License: MIT
Built by Ori 🤖