
feat: add --format toon output for agent pipelines#268

Open
TimNooren wants to merge 3 commits into main from feat/toon-format

Conversation

@TimNooren
Contributor

Adds TOON (Token-Oriented Object Notation) as an output format option via --format toon.

Why

nansen-cli output is almost entirely uniform arrays of objects (token flows, smart money, screener results) — TOON's exact sweet spot. Benchmarks show ~40% fewer tokens vs JSON with marginally better LLM accuracy, making it ideal for agent pipelines that pipe nansen-cli output into LLM prompts.

What

  • npm install @toon-format/toon (zero transitive deps, MIT)
  • formatOutput() gains a toon branch alongside csv/table/pretty
  • --format toon advertised in the research subcommand help only (the flag works globally; it just isn't cluttering the other help text)
  • README updated with brief mention
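A rough sketch of what the new branch could look like (this is illustrative, not the actual src/cli.js code: `toonEncodeStub` is a stand-in for the encoder exported by @toon-format/toon, and the option names simply mirror the existing csv/table/pretty flags):

```javascript
// Minimal stand-in for the TOON encoder: a uniform array of flat objects
// becomes a tabular "key[N]{fields}:" header followed by one comma-joined
// row per element. The real package handles nesting, quoting, etc.
function toonEncodeStub(value, key = 'data') {
  if (Array.isArray(value) && value.length && typeof value[0] === 'object') {
    const fields = Object.keys(value[0]);
    const header = `${key}[${value.length}]{${fields.join(',')}}:`;
    const rows = value.map((r) => '  ' + fields.map((f) => r[f]).join(','));
    return [header, ...rows].join('\n');
  }
  return JSON.stringify(value);
}

// Sketch of the format dispatch: toon is just one more opt-in branch.
function formatOutput(data, { pretty = false, toon = false } = {}) {
  if (toon) return toonEncodeStub(data);
  return JSON.stringify(data, null, pretty ? 2 : 0);
}

const flows = [
  { token: 'WETH', inflow: 120 },
  { token: 'USDC', inflow: 95 },
];
console.log(formatOutput(flows, { toon: true }));
// data[2]{token,inflow}:
//   WETH,120
//   USDC,95
```

The tabular header amortizes the field names over all rows, which is where the token savings over repeated JSON keys come from.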

Usage

nansen research smart-money token-flows --chain ethereum --format toon

Zero regression risk

Opt-in flag. All existing formats unchanged.

@nansen-pr-reviewer

nansen-pr-reviewer bot commented Mar 7, 2026

pr-reviewer Summary

📝 3 findings

Review completed. Please address the findings below.

Findings by Severity

| Severity | Count |
| --- | --- |
| 🟡 Medium | 1 |
| 🔵 Low | 2 |

Review effort: 2/5 (Simple)

Summary

This PR adds a --format toon output option that encodes CLI responses using the @toon-format/toon library (a compact, token-efficient encoding intended for LLM agent pipelines). The implementation is clean and well-integrated into the existing format-dispatch pattern. The @toon-format/toon package is a real published npm package and its lockfile integrity hash matches the npm registry. No critical or high-severity issues found.

Findings

src/__tests__/cli.internal.test.js — Medium: No tests added for --format toon

Every other format option has dedicated unit and integration test coverage in cli.internal.test.js:

  • formatOutput with table, pretty, etc. is unit-tested at lines 182–205
  • --format csv has a full formatCsv unit test suite (lines 2243–2291) and an end-to-end integration test (lines 2293–2337)

The new toon branch in formatOutput and the --format toon CLI flow have zero tests. If toonEncode changes behavior or the integration breaks, there is no regression safety net. The CSV tests provide a clear template for what test coverage should look like here.

Suggested fix: Add a describe('formatOutput – toon', ...) block testing the toon: true path (success, success: false error path, and data ?? data fallback), plus an integration test mirroring the --format csv integration tests.


README.md vs .changeset/toon-format.md — Low: Inconsistent token reduction claims

  • README.md (line 71): "reduces token count ~40% vs JSON"
  • .changeset/toon-format.md: "~74% token reduction vs pretty-printed JSON"

Both figures may be technically accurate against different baselines (compact JSON vs pretty-printed JSON), but a reader comparing them will be confused about which to trust. The README should specify the baseline (e.g., "vs compact JSON") or align with the changeset's claim and baseline.


src/cli.js line 1592–1599 — Low: --stream silently overrides --format toon with no warning

The stream check appears before the format dispatch:

if (stream) {
  // Stream mode: output each record as a JSON line (NDJSON)
  ...
  return { type: 'stream', data: result };
}
// toon/csv/table/etc. only reached if !stream
const formatted = formatOutput(successData, { pretty, table, csv, toon });

If a user passes both --stream and --format toon, the stream wins silently and the toon format is ignored. The same issue exists for --format csv (pre-existing), but introducing a second format flag increases the surface area. Consider emitting a warning or erroring early when conflicting format flags are combined.
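One possible shape for that guard, extracted into a helper for clarity; the flag names (`stream`, `toon`, `csv`, `table`) are assumptions taken from the snippet above, not the actual src/cli.js variables:

```javascript
// Hypothetical helper, called before the stream short-circuit: warn when
// --stream (NDJSON) will silently override an explicit format flag.
function warnOnConflictingFlags({ stream, toon, csv, table }) {
  if (stream && (toon || csv || table)) {
    process.stderr.write('Warning: --stream outputs NDJSON; --format is ignored.\n');
    return true; // a format flag is being overridden
  }
  return false;
}

warnOnConflictingFlags({ stream: true, toon: true }); // warns
warnOnConflictingFlags({ stream: false, toon: true }); // no warning
```

Writing the warning to stderr keeps stdout clean for the NDJSON stream; erroring early instead of warning would also fix the pre-existing `--stream` + `--format csv` case.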


Token usage: 1,448 input, 6,546 output, 186,944 cache read, 20,549 cache write

