ori-ops/llmcost

llmcost 💰

Lightweight LLM cost and quality tracking for Python.

Track your OpenAI, Anthropic, and other LLM API costs in real time with a simple SDK, CLI, and dashboard.

Features

  • Real-time cost tracking — Know exactly how much each request costs
  • Multi-provider support — OpenAI, Anthropic, and generic tracking
  • Auto-patching — Just call patch_openai() and forget about it
  • CLI tools — Quick stats from the terminal
  • Web dashboard — Beautiful visualizations with Streamlit
  • Local storage — SQLite database, no cloud required
  • Project tagging — Organize costs by project/environment

⚠️ Cost Warning

LLM API costs can add up fast. With intensive usage (AI agents, batch processing, production apps), you can easily hit $500/day or more without realizing it.

This is exactly why llmcost exists — to make your spending visible before it becomes a surprise on your bill.
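As a rough back-of-the-envelope illustration (all figures here are assumed, not real pricing): an agent loop making a few thousand requests a day at typical token counts crosses into real money quickly.

```python
# Rough illustration with assumed figures (not real pricing):
# an agent loop making 2,000 requests/day, averaging 5,000 input
# and 1,000 output tokens per request, at hypothetical rates of
# $2.50 / $10.00 per million input/output tokens.
requests_per_day = 2_000
input_tokens = 5_000
output_tokens = 1_000
price_in_per_m = 2.50    # USD per 1M input tokens (assumed)
price_out_per_m = 10.00  # USD per 1M output tokens (assumed)

daily_cost = requests_per_day * (
    input_tokens / 1e6 * price_in_per_m
    + output_tokens / 1e6 * price_out_per_m
)
print(f"${daily_cost:.2f}/day")  # $45.00/day for this workload
```

Roughly $1,350/month for a single modest agent, which is exactly the kind of number that is easy to miss without tracking.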

Installation

# Basic (CLI + SDK)
pip install llmcost

# With dashboard
pip install "llmcost[dashboard]"

# From GitHub (latest)
pip install git+https://github.com/ori-ops/llmcost.git

Quick Start

Option 1: Auto-patch (recommended)

from openai import OpenAI
from llmcost import patch_openai

client = OpenAI()
patch_openai(client, project="my-app")

# All requests are now tracked automatically!
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}]
)

Option 2: Manual tracking

from llmcost import track
import openai

response = track(
    openai.chat.completions.create,
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}]
)

Option 3: Decorator

from llmcost import track_decorator
import openai

@track_decorator(project="my-app", tags=["prod"])
def ask_gpt(question: str):
    return openai.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": question}]
    )

CLI Usage

# View usage stats
llmcost stats
llmcost stats --days 7 --project my-app

# View recent requests
llmcost history
llmcost history --limit 50

# Set budget alerts
llmcost budget --daily 10 --monthly 100

# View model pricing
llmcost pricing

# Launch web dashboard
llmcost dashboard

Dashboard

llmcost dashboard --port 8501

Opens a beautiful web dashboard showing:

  • Total costs and request counts
  • Cost breakdown by model
  • Daily spending trends
  • Recent request log
  • Budget alerts

Supported Models

OpenAI

  • GPT-4o, GPT-4o-mini
  • GPT-4 Turbo, GPT-4
  • GPT-3.5 Turbo
  • o1, o1-mini, o3-mini

Anthropic

  • Claude 3/4 Opus
  • Claude 3.5/4 Sonnet
  • Claude 3.5 Haiku
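Per-model pricing is what turns raw token counts into dollar figures. As a minimal sketch of that computation (the pricing table below is illustrative only, not llmcost's actual data; real rates vary by model and change over time):

```python
# Hypothetical pricing table, USD per 1M tokens. Illustrative
# rates only -- not llmcost's bundled data.
PRICING = {
    "gpt-4o": {"input": 2.50, "output": 10.00},
    "gpt-4o-mini": {"input": 0.15, "output": 0.60},
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate a request's cost from its token counts and a pricing table."""
    p = PRICING[model]
    return (input_tokens / 1e6) * p["input"] + (output_tokens / 1e6) * p["output"]

print(estimate_cost("gpt-4o", 1_000, 500))  # 0.0075
```

The same formula works for any provider, which is why `log_request()` below only needs a model name and token counts.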

Other

Use log_request() for manual tracking of any provider:

from llmcost import log_request, get_db

log_request(
    get_db(),
    provider="custom",
    model="my-model",
    input_tokens=100,
    output_tokens=50,
)

Configuration

Data is stored in ~/.llmcost/usage.db by default.

To use a custom location:

from llmcost.db import get_db
from pathlib import Path

conn = get_db(Path("/custom/path/usage.db"))

Roadmap

  • Budget alerts via Telegram/Slack
  • Cloud sync for team usage
  • More providers (Mistral, Cohere, etc.)
  • Cost projections
  • VS Code extension

License

MIT

Author

Built by Ori 🤖
