
[Question]: Recommended integration pattern for multi-agent AI pipelines with shared creative memory, and a recommended way of integrating OpenViking with the OpenAI Agents SDK #700

@VikramxD

Description


Your Question

  1. Is find() the right primitive for agent tool searches, or should we
    be using a different API (search, query, etc.) for agents that need to
    reason over results and synthesize creative output?

  2. Progressive context loading — we're implementing L1 (overview/abstract)
    for background context and L2 (full read) for active context. Is
    abstract() the right call for L1 summaries, and is there a way to get
    even more compact L0 listings?

  3. Cross-pipeline knowledge sharing — when one pipeline finishes,
    the other pipelines need to discover its creative decisions. Currently
    they all search viking://resources/. Is there a better pattern for
    extracting and surfacing knowledge across pipelines?

  4. Agent-scoped relevance — is there a way to bias search results based on
    what the current agent is working on? (e.g., "I'm writing scene 5 with
    characters Kenji and Yuki at the jazz club, boost results related to
    this context")
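For question 4, here is the client-side workaround we've been prototyping, in case it helps frame the discussion: we fold the active scene's entities into the query text before calling find(). The SceneContext structure and field names are our own, not an OpenViking concept — we'd much rather have a server-side relevance bias.

```python
from dataclasses import dataclass, field


@dataclass
class SceneContext:
    """Our own per-agent working context (hypothetical, not part of OpenViking)."""
    scene: int
    characters: list[str] = field(default_factory=list)
    location: str = ""

    def scope_query(self, query: str) -> str:
        """Fold the active scene's entities into the semantic query so
        find() results are biased toward the current dramatic context."""
        parts = [query]
        if self.characters:
            parts.append("characters: " + ", ".join(self.characters))
        if self.location:
            parts.append("location: " + self.location)
        return " | ".join(parts)


ctx = SceneContext(scene=5, characters=["Kenji", "Yuki"], location="jazz club")
scoped = ctx.scope_query("noir lighting techniques")
# scoped == "noir lighting techniques | characters: Kenji, Yuki | location: jazz club"
```

This works, but it pollutes the embedding query with entity names rather than truly re-ranking by context, which is why we're asking whether there's a first-class mechanism.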


Context

What we're building

A multi-agent content production system where specialized AI agents collaborate to
produce creative content — screenplays, storyboards, video clips, music.
Each pipeline has 3-5 agents running sequentially. Agents are built on the
OpenAI Agents SDK with function tools that query OpenViking for knowledge
retrieval. Pipelines are orchestrated by Temporal (each agent = separate
activity).

Knowledge layer

We have a 129-topic cinematic knowledge corpus loaded into OpenViking —
master techniques, film theory, genre conventions, director styles, and
production craft. Each topic is structured markdown from authoritative
sources (ASC manuals, academic papers, masterclass transcripts).

On top of this corpus, agents write structured creative state as resources
during generation:

  • viking://resources/characters/ — profiles with voice, arc, description
  • viking://resources/locations/ — settings with atmosphere, significance
  • viking://resources/structure/ — plot beats, scene outlines
  • viking://resources/scenes/ — completed scene content
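For concreteness, this is roughly how an agent renders a profile before writing it — the section names are our own schema, and the markdown body is what we hand to add_resource():

```python
def character_profile_md(name: str, voice: str, arc: str, description: str) -> str:
    """Render a character profile as the structured markdown we store
    under viking://resources/characters/ (section names are our schema)."""
    return (
        f"# {name}\n\n"
        f"## Voice\n{voice}\n\n"
        f"## Arc\n{arc}\n\n"
        f"## Description\n{description}\n"
    )


doc = character_profile_md(
    "Kenji", "clipped, wry", "reluctant redemption", "ex-detective, late 40s"
)
# the agent then writes `doc` via add_resource() to viking://resources/characters/
```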

How agents use OpenViking today

Agents have function tools that call the HTTP API:

  • find(query, target_uri, limit) — semantic search over corpus and resources
  • read(uri) — exact content from a known URI
  • abstract(uri) — directory-level summaries
  • ls(uri) — browse directory contents
  • add_resource(path) — write structured markdown
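For reference, here's a sketch of how we expose find() to an agent. The VikingClient protocol is just a duck-typed stand-in for the AsyncHTTPClient we actually use (only the find() signature listed above is assumed); in production the function is wrapped with @function_tool from the openai-agents SDK:

```python
from typing import Any, Protocol


class VikingClient(Protocol):
    """Stand-in for our AsyncHTTPClient; assumes only the documented
    find(query, target_uri, limit) signature."""
    async def find(self, query: str, target_uri: str, limit: int) -> list[dict[str, Any]]: ...


async def search_corpus(
    client: VikingClient, query: str, target_uri: str = "viking://", limit: int = 5
) -> list[dict[str, Any]]:
    """Tool body handed to the agent (wrapped with @function_tool in production).
    Trims results so tool output stays small in the conversation history."""
    results = await client.find(query, target_uri=target_uri, limit=limit)
    # keep only uri + a short snippet; full content is fetched later via read()
    return [
        {"uri": r.get("uri"), "snippet": str(r.get("content", ""))[:300]}
        for r in results
    ]
```

Trimming tool output at this boundary is also part of how we're trying to fight the token growth described below.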

A Scene Writer agent searches for noir lighting techniques from the corpus,
reads the character profile for the protagonist, checks plot beat
requirements, and synthesizes all of that into a scene.

Issues at scale

Token explosion — By scene 12 of 15, the agent carries ~50K tokens of
prior conversation history (all scenes, tool calls, tool results) instead
of retrieving just what it needs from OpenViking.
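Our current mitigation is a blunt history trim between scenes — keep the system prompt plus only the last few turns, on the assumption that anything older can be re-retrieved from OpenViking on demand. The message shape here is our own session schema:

```python
def trim_history(messages: list[dict], keep_last: int = 6) -> list[dict]:
    """Keep system messages plus only the most recent turns; older
    context is assumed re-retrievable from OpenViking when needed."""
    system = [m for m in messages if m.get("role") == "system"]
    rest = [m for m in messages if m.get("role") != "system"]
    return system + rest[-keep_last:]
```

This caps growth but loses continuity signals, which is why we'd prefer a retrieval-first pattern where the agent reconstructs context from viking:// URIs each scene.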

Tool hallucination — Agents share conversation history via a
database-backed session. The Continuity Checker sees the Scene Writer's
save_scene tool calls in shared history and tries to call them — causing
runtime errors (Tool not found in agent).
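Our workaround is to filter the shared session before handing it to each agent, dropping tool-call items for tools the agent doesn't own. The item shape (a tool_name key on tool items) is our session schema, not the SDK's:

```python
def strip_foreign_tool_items(history: list[dict], my_tools: set[str]) -> list[dict]:
    """Drop tool calls/results for tools this agent doesn't own, so e.g.
    the Continuity Checker never sees the Scene Writer's save_scene calls."""
    kept = []
    for item in history:
        name = item.get("tool_name")  # None for plain user/assistant messages
        if name is not None and name not in my_tools:
            continue
        kept.append(item)
    return kept
```

It feels like this filtering should live in the session layer rather than in every agent, so guidance here would be welcome.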

Blunt retrieval — find() returns the same results regardless of the
agent's current task. Scene 12 gets the same corpus hits as scene 1. No
way to scope relevance to the current scene's characters, location, or
dramatic context.

No cross-pipeline discovery — Character voice patterns, visual style
choices, and narrative arc decisions are locked inside one pipeline's
resources. Other pipelines can search viking://resources/ but there's
no extracted knowledge layer connecting them.

All-or-nothing context — Full L2 content loads for every character
regardless of whether they appear in the current scene. 15 full profiles
when only 3 characters are active.
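What we're implementing for this is a two-tier loader: L2 (full read()) for characters active in the current scene, L1 (abstract()) for everyone else, using the calls listed above. The stubbed client and name-from-URI convention are our own assumptions:

```python
async def load_characters(client, all_uris: list[str], active: set[str]) -> dict[str, str]:
    """L2 (full read) for characters in the current scene,
    L1 (abstract) for everyone else."""
    out = {}
    for uri in all_uris:
        # assume the last URI segment is the character name, e.g.
        # viking://resources/characters/kenji -> "kenji"
        name = uri.rstrip("/").rsplit("/", 1)[-1]
        if name in active:
            out[uri] = await client.read(uri)      # L2: full profile
        else:
            out[uri] = await client.abstract(uri)  # L1: summary only
    return out
```

This is where an even more compact L0 listing (question 2 above) would help: ideally the inactive characters cost one line each, not an abstract.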

Environment

  • OpenViking HTTP mode (AsyncHTTPClient) against localhost:1933
  • openviking Python package (latest)
  • OpenAI Agents SDK (openai-agents) for agent orchestration

