feat(examples): add Honcho memory integration for OpenAI Agents SDK #511

Closed — m1lestones wants to merge 7 commits into plastic-labs:main from m1lestones:feat/openai-agents-memory-integration

Conversation

m1lestones commented Apr 6, 2026

Summary

Adds examples/openai-agents/ — a new integration example that gives OpenAI Agents SDK agents persistent memory using Honcho.

The OpenAI Agents SDK is one of the most widely used agent frameworks right now and has no existing Honcho example. This PR closes that gap.

What's included

  • tools/client.py — Honcho client init + HonchoContext dataclass (holds user_id, session_id, assistant_id; passed as the SDK's context arg)
  • tools/save_memory.py — persists conversation turns to Honcho
  • tools/query_memory.py — a @function_tool that calls Honcho's Dialectic API so the agent can answer "What do you remember about me?" with grounded, semantic responses
  • tools/get_context.py — fetches session history from Honcho formatted as OpenAI message dicts
  • main.py — agent definition with dynamic instructions + interactive demo
  • tests/test_basic.py — structural/import tests (no API keys needed)
  • tests/test_integration.py — integration tests against the live Honcho API (skipped without HONCHO_API_KEY)

Key design decisions

Dynamic instructions — instead of a static system prompt, a callable injects Honcho's session.context() before every LLM request. The model always has an up-to-date view of the conversation without the developer managing history manually.
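Schematically, the callable looks something like this — a minimal sketch using the tools/ helpers this PR introduces (exact signatures may differ from the shipped code):

from agents import Agent, RunContextWrapper
from tools.client import HonchoContext
from tools.get_context import get_context

def build_instructions(
    run_context: RunContextWrapper[HonchoContext],
    agent: Agent[HonchoContext],
) -> str:
    # Runs before every LLM request, so the prompt always reflects
    # Honcho's current view of the session.
    ctx = run_context.context
    history = get_context(ctx, tokens=2000)
    transcript = "\n".join(f"{m['role']}: {m['content']}" for m in history)
    return (
        "You are a helpful assistant with persistent memory powered by Honcho.\n"
        "Recent conversation:\n" + transcript
    )

honcho_agent = Agent[HonchoContext](
    name="HonchoMemoryAgent",
    instructions=build_instructions,
)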

query_memory as a @function_tool — exposes Honcho's Dialectic API to the LLM directly. The agent calls it when the user asks about their history or preferences, getting a semantically grounded answer from Honcho's memory layer.
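In sketch form (function_tool is the Agents SDK decorator; peer.chat() is the Honcho SDK's Dialectic entry point — treat the details as illustrative):

from agents import RunContextWrapper, function_tool
from tools.client import HonchoContext, get_client

@function_tool
def query_memory(run_context: RunContextWrapper[HonchoContext], query: str) -> str:
    """Ask Honcho what it knows about the user, in natural language."""
    ctx = run_context.context
    honcho = get_client()
    # peer.chat() queries the Dialectic API and returns a grounded answer.
    return str(honcho.peer(ctx.user_id).chat(query))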

HonchoContext dataclass — follows the SDK's recommended pattern of passing identity via Runner.run(..., context=ctx) rather than global state, making the integration composable and testable.
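Usage then looks roughly like this (honcho_agent as sketched above; IDs are illustrative):

import asyncio

from agents import Runner
from tools.client import HonchoContext

ctx = HonchoContext(user_id="demo-user", session_id="demo-session", assistant_id="assistant")
result = asyncio.run(Runner.run(honcho_agent, "Hi!", context=ctx))
print(result.final_output)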

Structure mirrors examples/zo/

The layout, tool signatures, docstring style, and test patterns deliberately follow the existing Zo Computer example to keep the examples folder consistent.

cc @ajspig

Summary by CodeRabbit

  • New Features

    • Added Python and TypeScript OpenAI Agents examples demonstrating Honcho-backed persistent conversation memory, a memory-query tool, automatic context injection, and interactive REPL demos.
  • Documentation

    • New example README with installation, env setup, quick-start walkthrough, dynamic-instructions pattern, API reference, and usage notes.
  • Tests

    • Added structural unit tests and conditional integration tests covering memory save/get behavior and tool exports.
  • Chores

    • Added example project configs and packaging/test metadata for Python and TypeScript.

ajspig (Contributor) commented Apr 7, 2026

Hey @m1lestones — thanks for putting this together! An OpenAI Agents SDK integration is a welcome addition. The code is clean and the concept mapping (user_id → Peer, session_id → Session, assistant_id → Peer) is accurate.

Before this is merged: please remove the Docker changes from this PR — they're already covered by your #505. Commits e9a577c and 7a9f00c (localhost port binding, credential warnings) duplicate the work in that separate PR, which we'll review and merge on its own. Please rebase or drop those commits from this branch so #511 is purely the OpenAI Agents SDK example.

Note: at first I wasn't sure the manual-wiring approach you've taken here (explicit save_memory calls around Runner.run()) was the best one, but it seems nobody has shipped a dedicated PyPI package for OpenAI Agents SDK + long-term memory yet. The LangGraph integration uses the same manual pattern (save → retrieve → generate → save inside a graph node), and that's fine because LangGraph doesn't really have a clean marketplace or hook surface that lends itself to a deeper drop-in package.

So it looks like this is the right approach for now. We'll track a deeper hooks-based integration (using RunHooks) as a future enhancement.
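For readers skimming, the manual pattern under discussion is essentially the following (a sketch with error handling omitted; the helpers are the ones this PR adds, and honcho_agent is the agent exported by main.py):

from agents import Runner
from tools.client import HonchoContext
from tools.get_context import get_context
from tools.save_memory import save_memory

async def chat(user_id: str, message: str, session_id: str) -> str:
    save_memory(user_id, message, "user", session_id)               # save
    ctx = HonchoContext(user_id=user_id, session_id=session_id)
    history = get_context(ctx, tokens=2000)                         # retrieve
    # If get_context already returns the just-saved turn, skip this append
    # to avoid injecting the same message twice.
    input_messages = history + [{"role": "user", "content": message}]
    result = await Runner.run(honcho_agent, input_messages, context=ctx)  # generate
    save_memory(user_id, result.final_output, "assistant", session_id)    # save
    return result.final_output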

Adds a new integration example at examples/openai-agents/ that gives
OpenAI Agents SDK agents persistent memory using Honcho.

Key design decisions:
- Dynamic `instructions` callable injects Honcho session context into
  the system prompt before every LLM request, so the model always has
  an up-to-date view of the conversation.
- `query_memory` is exposed as a `@function_tool` so the agent can call
  Honcho's Dialectic API on demand to answer questions like "What do you
  remember about me?" with grounded, semantic answers.
- `HonchoContext` dataclass is passed as the SDK's `context` argument,
  keeping user/session identity in the run context rather than global state.
- `chat()` helper wraps `Runner.run()` to auto-save each turn to Honcho
  before and after the agent runs.

Structure mirrors the existing examples/zo/ integration:
- tools/client.py   — Honcho client init + HonchoContext dataclass
- tools/save_memory.py  — persist conversation turns
- tools/query_memory.py — @function_tool for Dialectic API queries
- tools/get_context.py  — fetch context for instruction injection
- main.py           — agent definition + interactive demo
- tests/            — structural tests (no keys) + integration tests
m1lestones force-pushed the feat/openai-agents-memory-integration branch from 7a9f00c to c37e81f on April 7, 2026 at 21:15
m1lestones (Author) commented Apr 7, 2026

Hi @ajspig! Thanks for the detailed review and kind words. I've removed the Docker commits from this branch; it now contains only the OpenAI Agents SDK example. Ready for another look whenever you get a chance.

coderabbitai (bot) commented Apr 7, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.


Walkthrough

Adds OpenAI Agents “Honcho Memory Integration” examples (Python + TypeScript): README, runnable REPL examples, Honcho-backed tools to save/query/get conversation memory, unit and conditional integration tests, and project configs for both languages.

Changes

  • Documentation — examples/openai-agents/README.md
    New README describing Honcho memory integration, install/env setup, quick-start examples, API reference for HonchoContext and helpers, and test commands.
  • Python Example & CLI — examples/openai-agents/python/main.py
    New runnable example: setup_session(), exported honcho_agent, async chat() that loads/saves memory and injects Honcho context, plus a REPL entrypoint.
  • Python Tools & Types — examples/openai-agents/python/tools/__init__.py, .../tools/client.py, .../tools/get_context.py, .../tools/query_memory.py, .../tools/save_memory.py
    New toolset and types: HonchoContext dataclass, get_client(), get_context(), query_memory (agent tool), and save_memory(); re-exported via the tools package.
  • Python Packaging & Tests — examples/openai-agents/python/pyproject.toml, .../tests/test_basic.py, .../tests/test_integration.py
    Added pyproject with deps and pytest config; unit tests for structure and conditional integration tests exercising live Honcho memory (skipped if HONCHO_API_KEY is unset).
  • TypeScript Example & REPL — examples/openai-agents/typescript/main.ts
    New TS example mirroring the Python flow: buildInstructions(), chat() to load/save context and run the agent, plus a readline REPL.
  • TypeScript Tools & Types — examples/openai-agents/typescript/tools/client.ts, .../tools/getContext.ts, .../tools/queryMemory.ts, .../tools/saveMemory.ts
    New TS toolset: HonchoContext interface, getClient(), getContext(), queryMemoryTool (agent tool), and saveMemory() implementation.
  • TypeScript Config & Manifest — examples/openai-agents/typescript/package.json, examples/openai-agents/typescript/tsconfig.json
    Added package.json (Bun-based start script, deps) and tsconfig.json (ES2022, strict) for the TS example.

Sequence Diagram

sequenceDiagram
    participant User
    participant App as Application
    participant Honcho
    participant Agent as OpenAI Agent

    User->>App: send message
    App->>Honcho: save_memory(user message)
    Honcho-->>App: confirmation

    App->>Honcho: get_context(tokens)
    Honcho-->>App: recent conversation history

    App->>Agent: run with dynamic instructions + injected context
    Agent->>Honcho: query_memory(natural language)
    Honcho-->>Agent: memory result
    Agent-->>App: final output

    App->>Honcho: save_memory(assistant response)
    Honcho-->>App: confirmation
    App->>User: return response

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Suggested reviewers

  • ajspig
  • VVoruganti

Poem

🐇 I tuck each turn into Honcho's nest,
I hop through code to store and test,
Python hums and TypeScript sings,
Agents listen for memory's wings,
A rabbit cheers—examples take flight!

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage — ⚠️ Warning: docstring coverage is 53.66%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.

✅ Passed checks (2 passed)

  • Description Check — ✅ Passed: check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check — ✅ Passed: the title accurately describes the main change (adding a Honcho memory integration example for the OpenAI Agents SDK); it is concise, specific, and directly reflects the primary purpose of the changeset.


coderabbitai (bot) left a review

Actionable comments posted: 2

🧹 Nitpick comments (5)
examples/openai-agents/tools/client.py (1)

33-58: Consider caching the Honcho client instance.

get_client() creates a new Honcho instance on each call. For an example this is fine, but for production use, caching the client (e.g., via @functools.cache or module-level singleton) would avoid repeated initialization overhead.
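For example, a cached variant might look like this (a sketch; the constructor arguments and env handling are assumptions, not the PR's exact code):

import functools

from honcho import Honcho

@functools.cache
def get_client(workspace_id: str | None = None) -> Honcho:
    # One client per workspace_id; subsequent calls reuse the same instance.
    return Honcho(workspace_id=workspace_id) if workspace_id else Honcho()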

examples/openai-agents/tools/save_memory.py (1)

40-40: Consider caching peer membership to reduce API calls.

session.add_peers() is called on every save_memory invocation, even when peers are already members. For an example this is acceptable, but in production usage this adds unnecessary API overhead. The Honcho API handles this idempotently, so correctness is not affected.
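One possible shape for such a cache (purely illustrative — none of these names exist in the PR):

_known_members: set[tuple[str, str]] = set()

def ensure_peers(honcho, session, session_id: str, peer_ids: list[str]) -> None:
    # Only call add_peers for peers this process hasn't registered yet.
    missing = [pid for pid in peer_ids if (session_id, pid) not in _known_members]
    if missing:
        session.add_peers([honcho.peer(pid) for pid in missing])
        _known_members.update((session_id, pid) for pid in missing)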

examples/openai-agents/tests/test_integration.py (1)

128-142: Roundtrip test could have a stronger assertion.

The test verifies at least one message is returned but doesn't confirm the actual content matches what was saved. Consider asserting that the saved content appears in the retrieved messages for a more robust end-to-end check.

💡 Optional enhancement for stronger validation
     def test_saved_messages_appear_in_context(self):
         user_id = unique_id("user")
         session_id = unique_id("session")
 
         save_memory(user_id, "Hello!", "user", session_id)
         save_memory(user_id, "Hi there!", "assistant", session_id)
 
         ctx = HonchoContext(user_id=user_id, session_id=session_id)
         messages = get_context(ctx)
 
         assert isinstance(messages, list)
         assert len(messages) >= 1
+        # Verify saved content appears in retrieved context
+        all_content = " ".join(m["content"] for m in messages)
+        assert "Hello!" in all_content or "Hi there!" in all_content
examples/openai-agents/main.py (2)

94-106: Interactive loop is functional but creates new event loop per message.

Using asyncio.run() inside the loop creates and destroys the event loop for each message. This is acceptable for a simple demo but would be inefficient for production use. Consider documenting this as a known limitation or refactoring to reuse the event loop.

♻️ Optional refactor using async main
-if __name__ == "__main__":
-    print("HonchoMemoryAgent — type 'quit' to exit\n")
-    _user_id = "demo-user"
-    _session_id = "demo-session"
-
-    while True:
-        _user_input = input("You: ").strip()
-        if not _user_input:
-            continue
-        if _user_input.lower() in ("quit", "exit"):
-            break
-        _response = asyncio.run(chat(_user_id, _user_input, _session_id))
-        print(f"Agent: {_response}\n")
+async def main():
+    """Interactive demo loop."""
+    print("HonchoMemoryAgent — type 'quit' to exit\n")
+    user_id = "demo-user"
+    session_id = "demo-session"
+
+    while True:
+        user_input = input("You: ").strip()
+        if not user_input:
+            continue
+        if user_input.lower() in ("quit", "exit"):
+            break
+        response = await chat(user_id, user_input, session_id)
+        print(f"Agent: {response}\n")
+
+
+if __name__ == "__main__":
+    asyncio.run(main())

65-91: Consider adding error handling for external API calls in production.

The chat function makes external calls (save_memory, Runner.run) without explicit error handling. If either call fails, unhandled exceptions will propagate. While acceptable for a demo, consider wrapping these calls in try/except blocks for robustness in production environments.

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@examples/openai-agents/tests/test_integration.py`:
- Around line 8-20: Remove the unused import symbol query_memory from the
top-level import block (where HonchoContext, get_context, save_memory are
imported) in the test file; simply delete the "from tools.query_memory import
query_memory" line so only used imports remain, then run the test suite to
confirm no import errors.
- Around line 1-6: The module-level docstring in test_integration.py incorrectly
states both HONCHO_API_KEY and OPENAI_API_KEY are required; update that
docstring to only mention HONCHO_API_KEY (and that tests are skipped when it's
absent), and optionally note these tests exercise the save_memory/get_context
helpers rather than running the agent to make intent clear.


📥 Commits

Reviewing files that changed from the base of the PR and between 95c72d7 and c37e81f.

📒 Files selected for processing (10)
  • examples/openai-agents/README.md
  • examples/openai-agents/main.py
  • examples/openai-agents/pyproject.toml
  • examples/openai-agents/tests/test_basic.py
  • examples/openai-agents/tests/test_integration.py
  • examples/openai-agents/tools/__init__.py
  • examples/openai-agents/tools/client.py
  • examples/openai-agents/tools/get_context.py
  • examples/openai-agents/tools/query_memory.py
  • examples/openai-agents/tools/save_memory.py

ajspig (Contributor) commented Apr 9, 2026

Hey @m1lestones

One request: OpenAI ships a TypeScript/JS version of the Agents SDK as well (@openai/agents on npm — openai/openai-agents-js). Could you add a matching TypeScript version of the example? The examples/langgraph/ folder already follows a python/ + typescript/ subdirectory pattern we could mirror here.

The TS version should cover the same pieces as the Python one.

The Honcho JS/TS SDK is at honcho-ai on npm. Happy to help if you hit any snags with the TS SDK surface.

m1lestones (Author) commented:

Done! I restructured the folder to mirror examples/langgraph/: Python files are now under python/ and the TypeScript version is under typescript/. It covers the same pieces — client.ts, saveMemory.ts, getContext.ts, queryMemory.ts, and main.ts — with the same async dynamic-instructions + query_memory tool pattern, using @openai/agents + @honcho-ai/sdk + Zod.

…to python/typescript

- Move Python files to python/ subdirectory (mirrors examples/langgraph/ layout)
- Add TypeScript version using @openai/agents JS SDK
  - tools/client.ts: HonchoContext interface + getClient()
  - tools/getContext.ts: fetch session history via session.context()
  - tools/saveMemory.ts: persist messages via session.addMessages()
  - tools/queryMemory.ts: Dialectic API tool via peer.chat()
  - main.ts: Agent with dynamic instructions, REPL loop, export chat()
- package.json + tsconfig.json for bun/node execution

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
coderabbitai (bot) left a review

Actionable comments posted: 9

♻️ Duplicate comments (1)
examples/openai-agents/python/tests/test_integration.py (1)

1-5: ⚠️ Potential issue | 🟡 Minor

Docstring overstates the required secrets.

These tests only gate on HONCHO_API_KEY and never run the agent, so mentioning OPENAI_API_KEY is misleading.

📝 Proposed fix
-These tests run against the live Honcho API and require both
-``HONCHO_API_KEY`` and ``OPENAI_API_KEY`` to be set. They are skipped
-automatically when the keys are absent.
+These tests run against the live Honcho API and require ``HONCHO_API_KEY``
+to be set. They are skipped automatically when the key is absent.
🧹 Nitpick comments (2)
examples/openai-agents/typescript/tools/saveMemory.ts (1)

16-18: Normalize whitespace-only messages before saving.

Line 16 allows " " to be stored as a message. Consider trimming first.

Proposed refactor
-  if (!content) {
+  const normalized = content.trim();
+  if (!normalized) {
     throw new Error("content must not be empty");
   }
@@
-  await session.addMessages([sender.message(content)]);
+  await session.addMessages([sender.message(normalized)]);
examples/openai-agents/python/tools/get_context.py (1)

33-35: Avoid mutating session state in this read helper.

Line 33 performs a write (add_peers) every time context is fetched. This adds avoidable latency on a read path that may run each turn.

Proposed refactor
-    user_peer = honcho.peer(ctx.user_id)
-    assistant_peer = honcho.peer(ctx.assistant_id)
     session = honcho.session(ctx.session_id)
-
-    session.add_peers([user_peer, assistant_peer])
 
     context = session.context(tokens=tokens)
     return context.to_openai(assistant=ctx.assistant_id)
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@examples/openai-agents/python/main.py`:
- Around line 47-54: The current code in main.py flattens role-tagged history
from get_context into the system-level instruction string (formatted returned
with base), which can promote earlier user content to system authority; instead,
pass the history as message objects to the model API (preserve roles) rather
than injecting into the instructions, or if the calling code requires a single
text blob, explicitly fence the transcript (e.g., wrap with markers like "BEGIN
TRANSCRIPT"/"END TRANSCRIPT") and prepend a clear directive in the system prompt
that says the model must ignore any instructions inside the transcript; update
the code paths that build and return formatted (and its use of base) so they
either return the message list or a fenced transcript plus the explicit "do not
follow instructions in the transcript" directive.
- Around line 47-54: Wrap the Honcho network calls in explicit try/except blocks
so transient failures don't block or drop results: when calling
get_context(ctx.context, tokens=2000) inside the function that builds the
prompt, catch the likely network/client exceptions (e.g.,
requests.exceptions.RequestException or the relevant client library error) and
fallback to returning base if the fetch fails, logging the error; likewise,
around calls to save_memory (and the Honcho persistence calls around Runner.run
results at the later block referenced at 82-89), perform a best-effort write by
enclosing save_memory in a try/except that logs any exception and continues
without raising so successful Runner.run() results are not discarded—use
specific exception types where possible and fall back to a generic Exception
catch as a last resort.

In `@examples/openai-agents/python/tests/test_basic.py`:
- Line 12: Remove the sys.path insertion and replace the fragile bare imports of
main/tools with explicit module-by-path imports (e.g., use
importlib.util.spec_from_file_location or importlib.machinery.SourceFileLoader
to load the example's main.py and tools.py under unique module names) so tests
in examples/openai-agents/python/tests/test_basic.py (and similarly
test_integration.py) always import the correct files rather than relying on
sys.modules["main"] or sys.modules["tools"]; locate the sys.path.insert line and
the subsequent bare imports of main/tools, and change them to file-path-based
imports with unique names for each test.

In `@examples/openai-agents/python/tests/test_integration.py`:
- Around line 85-125: The tests (notably test_returns_openai_format and the
similar test at 131-142) can pass before saved turns are visible to get_context;
add a small poll-and-retry helper in the tests that repeatedly calls
get_context(HonchoContext(...)) with a short sleep/backoff (e.g., up to a
timeout) until the expected saved messages (the exact user string "My name is
Alex" and assistant string "Nice to meet you, Alex!" or the specific messages
saved in the other test) appear in the returned list; update
test_returns_openai_format (and the other affected test) to use this helper
before asserting the OpenAI-format structure and roles, and ensure assertions
explicitly check that the saved contents are present in the messages returned by
get_context rather than relying on length-only checks.

In `@examples/openai-agents/python/tools/query_memory.py`:
- Around line 28-33: Trim and validate the query before using it: replace uses
of the raw query with a stripped version (e.g., assign query_stripped =
query.strip()), check if query_stripped is empty and raise ValueError("query
must not be empty") if so, then call get_client(), get the peer via
honcho.peer(ctx.context.user_id), and pass the stripped query to
peer.chat(query=query_stripped) so whitespace-only input is rejected and not
sent to peer.chat.

In `@examples/openai-agents/python/tools/save_memory.py`:
- Around line 32-34: The current check in save_memory (in
examples/openai-agents/python/tools/save_memory.py) only rejects None/empty but
allows whitespace-only strings; update the validation in the save_memory
function to trim whitespace (use content.strip()) and raise ValueError if the
stripped content is empty so that strings like "   " are rejected before
persisting.

In `@examples/openai-agents/typescript/main.ts`:
- Around line 54-67: The current turn is being saved before fetching history
which causes the new message to appear twice (in getContext and again when
appending to input); modify the flow in main.ts to call getContext(ctx, 2000)
and build historyText/input first, then call saveMemory(userId, message, "user",
sessionId) after (or omit appending the message if you must save first). Update
usage of saveMemory and getContext (and variables userId, message, sessionId,
history, input) so the current message is not included both in history and
appended again to input.

In `@examples/openai-agents/typescript/tools/getContext.ts`:
- Line 25: Replace the incorrect call context.toOpenai({ assistant:
ctx.assistantId }) as Message[] with the correct SDK usage
context.toOpenAI(ctx.assistantId), remove the unsafe cast, then filter the
returned messages to keep only those with role "user" or "assistant" and map
them to the expected Message shape (ensuring role is "user" | "assistant" and
preserving optional name if present); adjust the code in getContext (use symbols
context.toOpenAI and ctx.assistantId and the Message type) so the function
returns a properly typed array rather than relying on an unchecked cast.

In `@examples/openai-agents/typescript/tools/queryMemory.ts`:
- Around line 24-28: The code force-casts runContext.context to HonchoContext
and then dereferences ctx.userId (in the block around runContext, HonchoContext,
getClient(), peer.chat and trimmed), which will throw if runContext or its
context is missing; add a guard that verifies runContext and runContext.context
exist and that ctx.userId is defined before calling
getClient()/honcho.peer(...). If the check fails, return or throw a clear
error/result (e.g., short-circuit the tool with a descriptive message) rather
than proceeding to peer.chat, so you avoid a TypeError when context is absent.


📥 Commits

Reviewing files that changed from the base of the PR and between c37e81f and d46495c.

📒 Files selected for processing (16)
  • examples/openai-agents/python/main.py
  • examples/openai-agents/python/pyproject.toml
  • examples/openai-agents/python/tests/test_basic.py
  • examples/openai-agents/python/tests/test_integration.py
  • examples/openai-agents/python/tools/__init__.py
  • examples/openai-agents/python/tools/client.py
  • examples/openai-agents/python/tools/get_context.py
  • examples/openai-agents/python/tools/query_memory.py
  • examples/openai-agents/python/tools/save_memory.py
  • examples/openai-agents/typescript/main.ts
  • examples/openai-agents/typescript/package.json
  • examples/openai-agents/typescript/tools/client.ts
  • examples/openai-agents/typescript/tools/getContext.ts
  • examples/openai-agents/typescript/tools/queryMemory.ts
  • examples/openai-agents/typescript/tools/saveMemory.ts
  • examples/openai-agents/typescript/tsconfig.json
✅ Files skipped from review due to trivial changes (4)
  • examples/openai-agents/typescript/tsconfig.json
  • examples/openai-agents/python/tools/__init__.py
  • examples/openai-agents/python/pyproject.toml
  • examples/openai-agents/typescript/package.json


import pytest

sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
coderabbitai (bot) commented:

⚠️ Potential issue | 🟠 Major

Avoid sys.path + bare main/tools imports here.

This example now shares the same main.py / tools layout as other examples. Once another suite imports those names, Python will reuse sys.modules["main"] and sys.modules["tools"], so these tests can silently exercise the wrong example depending on import order. Import by explicit package name or file path instead of prepending the directory and loading generic module names. The same pattern should be fixed in test_integration.py too.

Also applies to: 15-95


await session.addPeers([userPeer, assistantPeer]);

const context = await session.context({ tokens });
return context.toOpenai({ assistant: ctx.assistantId }) as Message[];
coderabbitai (bot) commented:

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
set -euo pipefail

# Verify SDK and example method names/signatures used in repo.
rg -n --type=ts 'toOpenAI\s*\('
rg -n --type=ts 'toOpenai\s*\('

Repository: plastic-labs/honcho

Length of output: 450


🏁 Script executed:

# Read the SDK method definition
sed -n '175,195p' sdks/typescript/src/session_context.ts

Repository: plastic-labs/honcho

Length of output: 922


🏁 Script executed:

# Read more of the method to see the complete implementation and return
sed -n '175,220p' sdks/typescript/src/session_context.ts

Repository: plastic-labs/honcho

Length of output: 1524


🏁 Script executed:

# Check getContext.ts full context - imports and function signature
head -30 examples/openai-agents/typescript/tools/getContext.ts

Repository: plastic-labs/honcho

Length of output: 766


Use the correct Honcho SDK API: toOpenAI(ctx.assistantId) and remove unsafe cast.

Line 25 has two critical issues:

  1. Wrong method name: toOpenai should be toOpenAI (capital AI)
  2. Wrong signature: { assistant: ctx.assistantId } should be direct parameter ctx.assistantId

The SDK's toOpenAI() returns messages with optional name properties and includes "system" role messages incompatible with the expected Message type (role: "user" | "assistant"). The unsafe cast masks this incompatibility. Filter to only "user" and "assistant" roles and map to the correct shape.

Proposed fix
-  return context.toOpenai({ assistant: ctx.assistantId }) as Message[];
+  const messages = context.toOpenAI(ctx.assistantId);
+  return messages
+    .filter(
+      (m): m is { role: "user" | "assistant"; content: string } =>
+        m.role === "user" || m.role === "assistant"
+    )
+    .map(({ role, content }) => ({ role, content }));

m1lestones (Author) commented:

Fixed all three issues:

  • add_peers() now called once at session setup, removed from get_context and save_memory
  • to_openai() output passed directly as structured messages to Runner.run() instead of being flattened to plain text
  • Session ID is now a UUID per run to prevent history accumulation

Tested end-to-end against the live Honcho API — memory persists correctly across sessions.
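For reference, a per-run UUID session ID amounts to something like (illustrative):

import uuid

session_id = f"demo-session-{uuid.uuid4()}"  # fresh session per run, so history doesn't accumulate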

coderabbitai (bot) left a review

Actionable comments posted: 1

♻️ Duplicate comments (1)
examples/openai-agents/python/main.py (1)

40-44: ⚠️ Potential issue | 🟠 Major

Wrap Honcho I/O with explicit error handling (best-effort where appropriate).

add_peers, save_memory, and get_context are uncaught network/client operations. A transient failure currently aborts the turn or drops persistence after a successful model run. Add explicit try/except blocks with safe fallbacks for context fetch and post-run writes.

💡 Proposed fix
 import asyncio
+import logging
 import uuid
@@
 from tools.save_memory import save_memory
 
+logger = logging.getLogger(__name__)
+
@@
 def setup_session(user_id: str, session_id: str, assistant_id: str = "assistant") -> None:
@@
-    honcho = get_client()
-    user_peer = honcho.peer(user_id)
-    assistant_peer = honcho.peer(assistant_id)
-    session = honcho.session(session_id)
-    session.add_peers([user_peer, assistant_peer])
+    try:
+        honcho = get_client()
+        user_peer = honcho.peer(user_id)
+        assistant_peer = honcho.peer(assistant_id)
+        session = honcho.session(session_id)
+        session.add_peers([user_peer, assistant_peer])
+    except Exception as exc:
+        raise RuntimeError("Failed to initialize Honcho session peers") from exc
@@
-    save_memory(user_id, message, "user", session_id)
+    try:
+        save_memory(user_id, message, "user", session_id)
+    except Exception as exc:
+        logger.warning("Could not persist user message: %s", exc)
@@
-    history = get_context(ctx, tokens=2000)
+    try:
+        history = get_context(ctx, tokens=2000)
+    except Exception as exc:
+        logger.warning("Could not load Honcho context; continuing without history: %s", exc)
+        history = []
@@
-    save_memory(user_id, response, "assistant", session_id)
+    try:
+        save_memory(user_id, response, "assistant", session_id)
+    except Exception as exc:
+        logger.warning("Could not persist assistant message: %s", exc)

Please verify and narrow catches to the SDK’s concrete exception classes:

#!/bin/bash
set -euo pipefail

# 1) Confirm uncaught I/O call sites in this file
rg -nC2 'add_peers|save_memory|get_context|Runner\.run' examples/openai-agents/python/main.py

# 2) Discover exception classes exposed in Honcho SDK
rg -n --type=py '^class\s+\w*(Error|Exception)\b' sdks/python/src/honcho -g '!**/tests/**'

# 3) Discover raised exception types in Honcho SDK
rg -n --type=py '\braise\s+\w*(Error|Exception)\b' sdks/python/src/honcho -g '!**/tests/**'

As per coding guidelines, "Use explicit error handling with appropriate exception types".

Also applies to: 77-88

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@examples/openai-agents/python/main.py`:
- Around line 76-83: The current user turn is being saved with
save_memory(user_id, message, "user", session_id) and may already be returned by
get_context(ctx, tokens=2000), so input_messages = history +
[{"role":"user","content":message}] can duplicate the same turn; modify the
logic around get_context and input_messages (used when calling
Runner.run(honcho_agent,...)) to avoid duplication by either: fetching history
first and only append the current turn if history's last entry is not the same
role/content/session, or by calling get_context with an option to exclude the
just-saved turn; update the code that builds input_messages to perform this
presence check (compare role and content) before appending the current message.


📥 Commits

Reviewing files that changed from the base of the PR and between d46495c and 6ec5f54.

📒 Files selected for processing (3)
  • examples/openai-agents/python/main.py
  • examples/openai-agents/python/tools/get_context.py
  • examples/openai-agents/python/tools/save_memory.py
🚧 Files skipped from review as they are similar to previous changes (2)
  • examples/openai-agents/python/tools/get_context.py
  • examples/openai-agents/python/tools/save_memory.py

coderabbitai (bot) left a review

Actionable comments posted: 1

♻️ Duplicate comments (1)
examples/openai-agents/python/main.py (1)

86-99: ⚠️ Potential issue | 🟠 Major

Avoid duplicating the current user turn in input_messages.

Saving first and then calling get_context() can return the same user message you append again at Line 98, which double-injects the turn into the prompt.

💡 Proposed fix
-    input_messages = history + [{"role": "user", "content": message}]
+    input_messages = [*history]
+    if not history or (
+        history[-1].get("role") != "user"
+        or history[-1].get("content") != message
+    ):
+        input_messages.append({"role": "user", "content": message})
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@examples/openai-agents/python/main.py`:
- Around line 69-70: The chat() function must accept and propagate an
assistant_id instead of hardcoding "assistant": update chat(user_id: str,
message: str, session_id: str) to include assistant_id (with the same default as
setup_session if desired), pass that assistant_id into setup_session(session_id,
assistant_id=...) and use the same assistant_id in both save_memory() calls (for
saving assistant messages) rather than the hardcoded "assistant"; adjust any
callers accordingly so assistant identities are preserved when storing memory.

---

Duplicate comments:
In `@examples/openai-agents/python/main.py`:
- Around line 86-99: The current flow saves the user turn via
save_memory(user_id, message, "user", session_id) before calling
get_context(ctx) and then unconditionally appends {"role": "user", "content":
message} to input_messages, which can duplicate the same turn; fix by either
calling get_context() first and then save_memory(), or keep the save order but
deduplicate the returned history from get_context() (e.g., filter history
entries in history returned by get_context(ctx) to remove any entry with role
"user" and content equal to message/session_id) before creating input_messages =
history + [{"role":"user","content":message}]; update code around save_memory,
get_context, and input_messages accordingly.

📥 Commits

Reviewing files that changed from the base of the PR and between 6ec5f54 and 58efc16.

📒 Files selected for processing (1)
  • examples/openai-agents/python/main.py

coderabbitai (bot) left a review

♻️ Duplicate comments (1)
examples/openai-agents/python/tests/test_integration.py (1)

112-123: ⚠️ Potential issue | 🟡 Minor

test_respects_token_limit can pass without actually exercising the token budget.

If Honcho's async processing hasn't caught up (likely given the other test needs poll/retry), both small and large can be [] and 0 >= 0 passes trivially. Even when messages are present, five short strings likely fit within the 50-token budget too, so small and large may be identical.

Consider (a) applying the same poll/retry pattern used in test_saved_messages_appear_in_context to ensure at least some messages are retrievable, and (b) writing enough content that a 50-token budget truncates it (so len(large) > len(small) can be asserted meaningfully).
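A poll/retry helper covering (a) could look like this (a sketch; get_context and HonchoContext are the example's helpers, and the timeout values are arbitrary):

import time

def wait_for_context(ctx, predicate, timeout: float = 30.0, interval: float = 2.0):
    # Poll get_context until the predicate matches or the timeout expires.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        messages = get_context(ctx)
        if predicate(messages):
            return messages
        time.sleep(interval)
    raise AssertionError("Honcho context did not become visible in time")

The test would then call wait_for_context(ctx, lambda ms: len(ms) > 0) before comparing the small and large token budgets.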

🧹 Nitpick comments (3)
examples/openai-agents/typescript/main.ts (2)

29-39: buildInstructions doesn't use its arguments — consider simplifying to a string.

The PR description mentions a "dynamic instructions" design that injects Honcho session.context() into instructions before each LLM request. However, this implementation ignores runContext and _agent and returns a static string. Since history is now prepended to input at Line 70 instead (which is a cleaner approach), the callable form is unnecessary here.

♻️ Optional simplification
-function buildInstructions(
-  runContext: RunContext<HonchoContext>,
-  _agent: Agent<HonchoContext>
-): string {
-  const base =
-    "You are a helpful assistant with persistent memory powered by Honcho. " +
-    "You remember users across conversations. " +
-    "When a user asks what you remember about them, use the query_memory tool.";
-
-  return base;
-}
+const INSTRUCTIONS =
+  "You are a helpful assistant with persistent memory powered by Honcho. " +
+  "You remember users across conversations. " +
+  "When a user asks what you remember about them, use the query_memory tool.";

And update Line 43 to instructions: INSTRUCTIONS.

Alternatively, if the callable form is intentional (to mirror the Python example and leave room for later dynamic injection), a short comment explaining that would help future readers.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@examples/openai-agents/typescript/main.ts` around lines 29 - 39, The
buildInstructions function currently ignores its parameters (runContext and
_agent) and returns a static string; either replace this callable with a plain
constant (e.g., INSTRUCTIONS) and update the usage site (where instructions is
set at the call currently on line 43) to use INSTRUCTIONS, or keep the function
but add a brief comment explaining it’s intentionally static for parity with the
Python example; locate the buildInstructions function and the place where its
return is passed as "instructions" (referenced as buildInstructions and the
instructions assignment) and perform the chosen simplification or add the
explanatory comment.

95-108: Readline loop: consider handling EOF / Ctrl-D gracefully.

rl.question rejects on stream close (Ctrl-D / piped stdin EOF), which will be caught by the try/catch and printed as an error before the loop immediately re-prompts on a closed stream — potentially spinning. Adding an rl.on("close", ...) handler or breaking out when userInput is null/undefined would make the REPL more robust. Minor nit for a demo.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@examples/openai-agents/typescript/main.ts` around lines 95 - 108, The
readline loop incorrectly treats EOF/stream-close as an ordinary error and can
spin; update the loop that uses rl.question and the variable userInput so EOF
(null/undefined) or stdin close breaks the loop instead of retrying: detect when
rl.question returns null/undefined or listen for rl.on("close") and set a flag
to exit the while(true) loop, ensuring you still call rl.close() and avoid
calling chat(userId, userInput, sessionId) when userInput is null; keep the
existing try/catch for other errors but ensure EOF is handled out of the error
path to prevent re-prompts on a closed stream.
examples/openai-agents/python/main.py (1)

128-135: Run a single event loop instead of asyncio.run per turn.

Calling asyncio.run(chat(...)) on every iteration spins up and tears down a fresh event loop (and its executor threadpool) per user message. For a demo this works, but switching the REPL to an async def main() driven by one top-level asyncio.run(main()) is cheaper and more idiomatic, and prevents surprises if tools ever cache async resources across turns.

♻️ Proposed refactor
-if __name__ == "__main__":
-    print("HonchoMemoryAgent — type 'quit' to exit\n")
-    # Replace "demo-user" with a real user identifier in production.
-    _user_id = "demo-user"
-    # A fresh session ID per run prevents history from accumulating across runs.
-    _session_id = str(uuid.uuid4())
-
-    # Register peers once at session start — not on every turn.
-    setup_session(_user_id, _session_id)
-
-    while True:
-        _user_input = input("You: ").strip()
-        if not _user_input:
-            continue
-        if _user_input.lower() in ("quit", "exit"):
-            break
-        _response = asyncio.run(chat(_user_id, _user_input, _session_id))
-        print(f"Agent: {_response}\n")
+async def _repl() -> None:
+    print("HonchoMemoryAgent — type 'quit' to exit\n")
+    user_id = "demo-user"
+    session_id = str(uuid.uuid4())
+    setup_session(user_id, session_id)
+
+    while True:
+        user_input = input("You: ").strip()
+        if not user_input:
+            continue
+        if user_input.lower() in ("quit", "exit"):
+            break
+        response = await chat(user_id, user_input, session_id)
+        print(f"Agent: {response}\n")
+
+
+if __name__ == "__main__":
+    asyncio.run(_repl())
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@examples/openai-agents/python/main.py` around lines 128 - 135, The loop
currently calls asyncio.run(chat(...)) per user message which creates/destroys
an event loop each turn; refactor by making an async def main() that contains
the input loop (use await chat(_user_id, _user_input, _session_id) inside the
loop) and then call asyncio.run(main()) once at program start, removing
asyncio.run from inside the loop and keeping variables like _user_input,
_user_id, _session_id and the print("Agent: ...") logic unchanged.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 807bd8dd-fafa-4222-9eac-0e9e9eeb24ac

📥 Commits

Reviewing files that changed from the base of the PR and between 58efc16 and 1614cad.

📒 Files selected for processing (8)
  • examples/openai-agents/python/main.py
  • examples/openai-agents/python/tests/test_integration.py
  • examples/openai-agents/python/tools/query_memory.py
  • examples/openai-agents/python/tools/save_memory.py
  • examples/openai-agents/typescript/main.ts
  • examples/openai-agents/typescript/tools/getContext.ts
  • examples/openai-agents/typescript/tools/queryMemory.ts
  • examples/openai-agents/typescript/tools/saveMemory.ts
✅ Files skipped from review due to trivial changes (1)
  • examples/openai-agents/typescript/tools/getContext.ts
🚧 Files skipped from review as they are similar to previous changes (2)
  • examples/openai-agents/typescript/tools/queryMemory.ts
  • examples/openai-agents/python/tools/save_memory.py

…uctions

- Add poll/retry to test_respects_token_limit so async saves are visible
  before asserting, and use longer messages so 50-token budget truncates
- Simplify buildInstructions to a plain INSTRUCTIONS constant since
  runContext and _agent arguments were unused; remove unused RunContext import
@ajspig
Copy link
Copy Markdown
Contributor

ajspig commented Apr 28, 2026

Closing this as part of a broader prioritization shift and in an effort to minimize maintenance burden. Thanks for putting in the work on this!

@ajspig ajspig closed this Apr 28, 2026