RTK as context preprocessor for AI pipelines (input mode) #818

@pszymkowiak

Description
Idea

Use RTK as a context preparation tool for custom AI workflows — not just as an output compressor for Claude Code, but as an input preprocessor that injects compressed data into LLM prompts at precise locations.

Current use (output mode)

Claude runs command → hook rewrites → rtk compresses output → Claude receives fewer tokens

Proposed use (input mode)

Custom script calls rtk → gets compressed output → injects it into LLM prompt exactly where needed

Example

```shell
# In a custom AI workflow/script
context=$(rtk ls /project/src)
git_context=$(rtk git log -n 20)

# Inject compressed context into the prompt at precise locations.
# printf expands the \n escapes; a plain double-quoted assignment would
# keep them as literal backslash-n.
prompt=$(printf 'Given this project structure:\n%s\n\nAnd recent changes:\n%s\n\nDo X...' \
  "$context" "$git_context")
```
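The same pattern can degrade gracefully when rtk is not installed, which keeps a pipeline portable. A minimal POSIX-sh sketch — the `ctx` wrapper name is hypothetical, not part of rtk; it falls back to running the command uncompressed:

```shell
#!/bin/sh
# Hypothetical wrapper: prefer rtk's compressed output when rtk is on PATH,
# otherwise run the command directly so the pipeline still works.
ctx() {
  if command -v rtk >/dev/null 2>&1; then
    rtk "$@"
  else
    "$@"  # uncompressed fallback
  fi
}

context=$(ctx ls .)
prompt=$(printf 'Given this project structure:\n%s\n\nDo X...' "$context")
printf '%s\n' "$prompt"
```

With this wrapper, the same script runs on machines with and without rtk; only the token cost of the injected context differs.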

Why it's powerful

  • User controls what context goes into the prompt and where
  • RTK filters noise BEFORE it reaches the LLM context window
  • Works with any LLM, any framework, any pipeline — not just Claude Code
  • Turns RTK into a universal context compression library for AI agents

Credit

Idea from Alexandre Balmes, who is already doing this in production with his own workflows (awf).

"Honestly, what I also like is that in my own workflows with awf I deliberately call a command via rtk to inject the output into the LLM prompt exactly where I want it. It's devilishly effective."
