Summary
Codex sessions created via ChatGPT Plus/Pro subscription auth (auth_mode: "chatgpt" in ~/.codex/auth.json) are silently dropped by CodeBurn, showing 0 calls / $0.00 cost even when sessions are large and active. Two distinct bugs combine to cause this.
Reproduction
- Codex CLI logged in via ChatGPT Plus/Pro (not API key)
- Use Codex normally for several hours
- Run codeburn today or open the menu bar app
Expected: session count + token usage shown (cost can be $0 since subscription is flat fee, but tokens/calls should be visible).
Actual: Codex shows 0 calls / $0.00, even though sessions exist on disk and contain thousands of token events.
Environment
- CodeBurn: 0.9.5 (Homebrew menu bar build)
- Codex CLI: latest, auth_mode: chatgpt, chatgpt_plan_type: pro
- macOS 14.x
- Sessions at ~/.codex/sessions/2026/05/ — 14 files, recent ones >250KB each
Root cause #1 — session_meta line exceeds 16KB read cap (release-only bug)
The shipped Homebrew binary at /opt/homebrew/Cellar/codeburn/0.9.5/bin/codeburn:551 reads only the first 16384 bytes of a session file to parse the meta line:
const buf = Buffer.alloc(16384)
const { bytesRead } = await fh.read(buf, 0, 16384, 0)
const text = buf.toString('utf-8', 0, bytesRead)
const nl = text.indexOf('\n')
const line = nl >= 0 ? text.slice(0, nl) : text
return JSON.parse(line) // throws on truncated JSON → null returned → file skipped
Real measurement from my latest Codex session:
$ head -1 ~/.codex/sessions/2026/05/03/rollout-2026-05-03T01-45-58-*.jsonl | wc -c
21828
The first line is 21,828 bytes: it gets truncated at 16,384, JSON.parse throws, and discoverSessionsInDir() skips the file entirely. The session never enters the pipeline.
The main branch source at src/providers/codex.ts already streams properly (createInterface over createReadStream), so npm install -g codeburn@latest from main may already fix this. The Homebrew/menu bar 0.9.5 artifact does not include the streaming fix.
Fix: rebuild the release artifacts from current main, or bump the buffer cap if streaming isn't desired.
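For reference, a cap-free reader along the lines of the main-branch streaming approach might look like the sketch below (readMetaLine is a hypothetical name, not the actual function in the codebase):

```typescript
import { createReadStream } from 'node:fs'
import { createInterface } from 'node:readline'

// Read and parse only the first line of a .jsonl session file,
// with no fixed buffer cap — readline handles arbitrary line lengths.
async function readMetaLine(path: string): Promise<unknown | null> {
  const rl = createInterface({
    input: createReadStream(path, { encoding: 'utf-8' }),
  })
  try {
    for await (const line of rl) {
      return JSON.parse(line) // first line only, however long it is
    }
  } catch {
    return null // malformed meta line: skip this file, as today
  } finally {
    rl.close()
  }
  return null // empty file
}
```

Returning from inside the for-await loop closes the underlying stream, so only the first line is ever read from disk beyond readline's internal buffering.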
Root cause #2 — ChatGPT subscription doesn't emit token counts
Even after fixing #1, sessions would still show 0 tokens because Codex CLI on ChatGPT subscription emits token_count events with info: null. The endpoint behind ChatGPT Plus/Pro returns rate-limit percentages instead of raw token counters.
Sample event from my session (1308 such events in one file, all with info: null):
{
  "timestamp": "2026-05-02T22:57:30.040Z",
  "type": "event_msg",
  "payload": {
    "type": "token_count",
    "info": null,
    "rate_limits": {
      "primary": {
        "used_percent": 11.0,
        "window_minutes": 300,
        "resets_at": 1777771246
      },
      "secondary": {
        "used_percent": 35.0,
        "window_minutes": 10080,
        "resets_at": 1777973328
      },
      "plan_type": "pro"
    }
  }
}
Compared to API-key auth where info contains:
{
  "last_token_usage": { "input_tokens": 1234, "output_tokens": 567, ... },
  "total_token_usage": { ... }
}
In src/providers/codex.ts around line 255, the parser reads info.last_token_usage / info.total_token_usage — both undefined when info: null, so all token fields stay 0, and the totalTokens === 0 guard at line ~296 makes the entry skip. Session is parsed but yields no calls.
Suggested fix for #2 (estimation fallback, modeled on cursor.ts)
Track message text length during the session and estimate tokens when info is null, similar to how providers/cursor.ts handles older Cursor versions:
+ const CHARS_PER_TOKEN = 4
+ let estimatedInputChars = 0
+ let estimatedOutputChars = 0
// when iterating response_item entries:
+ if (entry.type === 'response_item' && entry.payload?.type === 'message') {
+   const role = entry.payload.role
+   const text = (entry.payload.content ?? []).map(c => c.text ?? '').join('')
+   if (role === 'user') estimatedInputChars += text.length
+   else if (role === 'assistant') estimatedOutputChars += text.length
+ }
// before the totalTokens === 0 guard:
+ if (inputTokens === 0 && outputTokens === 0 && (estimatedInputChars > 0 || estimatedOutputChars > 0)) {
+   inputTokens = Math.ceil(estimatedInputChars / CHARS_PER_TOKEN)
+   outputTokens = Math.ceil(estimatedOutputChars / CHARS_PER_TOKEN)
+ }
Cost can stay at $0 (subscription is flat fee), but calls and estimated tokens become visible, which is what users on ChatGPT Plus/Pro want — to see where their subscription quota goes.
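The same fallback as a self-contained sketch (estimateTokens and the entry shapes here are illustrative, not the actual codex.ts structures):

```typescript
// Rough chars-per-token heuristic, same idea as the cursor.ts fallback.
const CHARS_PER_TOKEN = 4

interface SessionEntry {
  type: string
  payload?: {
    type?: string
    role?: string
    content?: Array<{ text?: string }>
  }
}

// Estimate input/output tokens from message text when token_count
// events carry info: null (ChatGPT subscription auth).
function estimateTokens(entries: SessionEntry[]): {
  inputTokens: number
  outputTokens: number
} {
  let inputChars = 0
  let outputChars = 0
  for (const entry of entries) {
    if (entry.type !== 'response_item' || entry.payload?.type !== 'message') continue
    const text = (entry.payload.content ?? []).map(c => c.text ?? '').join('')
    if (entry.payload.role === 'user') inputChars += text.length
    else if (entry.payload.role === 'assistant') outputChars += text.length
  }
  return {
    inputTokens: Math.ceil(inputChars / CHARS_PER_TOKEN),
    outputTokens: Math.ceil(outputChars / CHARS_PER_TOKEN),
  }
}
```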
Bonus: the rate_limits.primary.used_percent and secondary.used_percent fields could be surfaced as a "subscription quota used" indicator alongside the estimated tokens. This is unique data only ChatGPT-auth users have.
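A sketch of how that indicator could be extracted (latestQuota is a hypothetical helper; the field names come from the sample event shown earlier):

```typescript
interface RateWindow {
  used_percent: number
  window_minutes: number
  resets_at: number
}

interface SessionEvent {
  type: string
  payload?: {
    type?: string
    rate_limits?: {
      primary?: RateWindow
      secondary?: RateWindow
      plan_type?: string
    }
  }
}

// Walk the session backwards and return the most recent quota reading
// from a token_count event, or null if none carries rate_limits.
function latestQuota(
  events: SessionEvent[]
): { primaryPct: number; secondaryPct: number } | null {
  for (let i = events.length - 1; i >= 0; i--) {
    const p = events[i]?.payload
    if (p?.type === 'token_count' && p.rate_limits?.primary && p.rate_limits.secondary) {
      return {
        primaryPct: p.rate_limits.primary.used_percent,
        secondaryPct: p.rate_limits.secondary.used_percent,
      }
    }
  }
  return null
}
```

Scanning from the end matters because a session can contain hundreds of token_count events and only the last one reflects the current quota state.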
Why this matters
ChatGPT Plus is $20/month — many CodeBurn users are on it (cheaper than API keys for heavy use). Right now they see $0.00 and conclude CodeBurn doesn't track Codex at all, when in reality their data is just being silently dropped. Even an estimate with an "(estimated)" label would be more useful than total invisibility.
Happy to PR
I can submit a PR for either fix (or both) if that's welcome. Let me know if there's a preferred shape — flag-gated estimation, a separate "estimated" field in the data model, etc.