
feat: add Pydantic AI memory integration example #546

Closed
m1lestones wants to merge 2 commits into plastic-labs:main from m1lestones:feat/pydantic-ai-memory-integration

Conversation


@m1lestones m1lestones commented Apr 10, 2026

Summary

  • Adds examples/pydantic-ai/python/ — a full Honcho memory integration for Pydantic AI agents
  • Uses @agent.system_prompt decorator for dynamic Honcho context injection and @agent.tool for the query_memory tool
  • Demonstrates message_history threading for in-session coherence alongside Honcho's cross-session memory
  • Follows the same pattern as the existing examples/openai-agents/ example

What's included

examples/pydantic-ai/
├── README.md
└── python/
    ├── main.py
    ├── pyproject.toml
    └── tools/
        ├── client.py       # HonchoContext + get_client()
        ├── save_memory.py
        └── get_context.py

How it works

  1. Dynamic system prompt: @agent.system_prompt registers honcho_system_prompt(), which runs before every LLM request to append Honcho session history.
  2. Memory tool: @agent.tool registers query_memory(), which calls Honcho's Dialectic API via RunContext[HonchoAgentDeps].
  3. Message history threading: chat() returns (response, result.all_messages()). Pass the returned history back on the next call for in-session coherence.
  4. Auto-save: chat() persists the user message before the agent runs and the assistant response after it.

Test plan

  • Set HONCHO_API_KEY and OPENAI_API_KEY in python/.env
  • pip install pydantic-ai honcho-ai python-dotenv
  • cd python && python main.py
  • Tell the agent something about yourself, start a new session, then ask "What do you remember about me?"

🤖 Generated with Claude Code

Summary by CodeRabbit

  • New Features

    • Added complete example implementation of a Pydantic AI agent with Honcho persistent memory integration, including memory query functionality and cross-turn conversation history management.
  • Documentation

    • Added comprehensive setup guide with environment variables, installation instructions, and quick-start code snippet for the integration example.

Contributor

coderabbitai Bot commented Apr 10, 2026

Warning

Rate limit exceeded

@m1lestones has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 11 minutes and 7 seconds before requesting another review.

Your organization is not enrolled in usage-based pricing. Contact your admin to enable usage-based pricing to continue reviews beyond the rate limit, or try again in 11 minutes and 7 seconds.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 914fbc44-7317-4e0b-b804-edfa002512d3

📥 Commits

Reviewing files that changed from the base of the PR and between 55ba72f and c2cfc3c.

📒 Files selected for processing (3)
  • examples/pydantic-ai/python/main.py
  • examples/pydantic-ai/python/tools/get_context.py
  • examples/pydantic-ai/python/tools/save_memory.py

Walkthrough

This pull request adds a comprehensive example demonstrating how to integrate Honcho persistent memory with Pydantic AI agents. The example includes documentation, project configuration, a functional implementation with memory query and save tools, and helper modules for Honcho client initialization and context management.

Changes

Cohort / File(s) Summary
Documentation
examples/pydantic-ai/README.md
New example guide explaining the integration architecture, environment setup, quick-start snippet, and the runtime control flow involving system prompt hooks, memory tools, and auto-save behavior.
Project Configuration
examples/pydantic-ai/python/pyproject.toml
Added project metadata, dependencies (pydantic-ai, honcho-ai, python-dotenv), and hatchling build system configuration.
Main Implementation
examples/pydantic-ai/python/main.py
Core example code wiring a PydanticAI agent with Honcho memory, including a system_prompt hook for fetching conversation history, a query_memory tool for semantic recall, and a chat() coroutine managing message persistence across turns.
Helper Tools
examples/pydantic-ai/python/tools/client.py, examples/pydantic-ai/python/tools/get_context.py, examples/pydantic-ai/python/tools/save_memory.py
Utility modules for Honcho client initialization, context retrieval with token limits, and conversation turn persistence. Includes HonchoContext dataclass for per-session state management.

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant ChatFunc as chat() function
    participant Agent as PydanticAI Agent
    participant SystemPromptHook as System Prompt Hook
    participant MemoryTool as query_memory Tool
    participant Honcho as Honcho API
    participant Memory as Conversation Memory

    User->>ChatFunc: chat(user_id, message, session_id, message_history)
    ChatFunc->>Memory: save_memory(user_message, role="user")
    Memory->>Honcho: record message
    ChatFunc->>Agent: run agent with message_history
    Agent->>SystemPromptHook: fetch base system prompt
    SystemPromptHook->>Honcho: get_context(user_id, session_id)
    Honcho-->>SystemPromptHook: conversation history
    SystemPromptHook-->>Agent: enriched system prompt
    Agent->>Agent: determine if memory query needed
    alt Memory Query Required
        Agent->>MemoryTool: query_memory(query)
        MemoryTool->>Honcho: Dialectic API search
        Honcho-->>MemoryTool: semantic results
        MemoryTool-->>Agent: formatted memory response
    end
    Agent-->>ChatFunc: response text
    ChatFunc->>Memory: save_memory(response, role="assistant")
    Memory->>Honcho: record message
    ChatFunc-->>User: response + updated message_history

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Possibly related PRs

Suggested reviewers

  • ajspig
  • VVoruganti

Poem

🐰 A rabbit hops through memory's lane,
Where Pydantic agents now reign,
Honcho threads persist, recall with grace,
Tools fetch context, save conversations in place,
Smart minds remember, no knowledge to spare! ✨

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
  • Description Check: ✅ Passed. Check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check: ✅ Passed. The title 'feat: add Pydantic AI memory integration example' accurately describes the main change: adding a new example that demonstrates Honcho memory integration with Pydantic AI agents.
  • Docstring Coverage: ✅ Passed. Docstring coverage is 85.71%, which is sufficient. The required threshold is 80.00%.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.

Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 3

🧹 Nitpick comments (2)
examples/pydantic-ai/python/tools/get_context.py (1)

5-5: Prefer absolute imports in this module.

Line 5 uses a relative import; this repo guideline prefers absolute imports.

Suggested change
-from .client import HonchoContext, get_client
+from tools.client import HonchoContext, get_client

As per coding guidelines: **/*.py: Follow isort conventions with absolute imports preferred.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@examples/pydantic-ai/python/tools/get_context.py` at line 5, The module
currently uses a relative import ("from .client import HonchoContext,
get_client"); change this to an absolute import from the project package root so
imports follow the repo isort guidelines (e.g., replace the relative import with
an absolute one that imports HonchoContext and get_client from the client module
at package level), ensuring HonchoContext and get_client remain referenced
correctly.
examples/pydantic-ai/python/tools/save_memory.py (1)

3-3: Prefer absolute imports in this module.

Line 3 uses a relative import; repository guidance prefers absolute imports.

Suggested change
-from .client import get_client
+from tools.client import get_client

As per coding guidelines: **/*.py: Follow isort conventions with absolute imports preferred.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@examples/pydantic-ai/python/tools/save_memory.py` at line 3, Replace the
relative import "from .client import get_client" with an absolute import that
references the package path to the client module (so the module symbol is still
get_client); update the import in save_memory.py to use the absolute module path
for client (e.g., import the client module that exposes get_client instead of a
relative ".client") to satisfy isort/absolute-import guidelines.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@examples/pydantic-ai/python/main.py`:
- Around line 119-129: Wrap both calls to save_memory (the one after receiving
user message and the one after creating response) in explicit try/except blocks
so transient memory persistence errors don't abort the turn: surround
save_memory(user_id, message, "user", session_id) and save_memory(user_id,
response, "assistant", session_id) with try handling the appropriate persistence
exception (or Exception if no specific memory error class exists), log the error
via the module logger (or logger.exception) including context
(user_id/session_id) and continue execution so honcho_agent.run and LLM response
delivery are not prevented by memory write failures.
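One way to implement the non-fatal persistence this comment asks for is sketched below. save_memory here is a failing stub standing in for the example's real helper, so the snippet only demonstrates the try/except and logging shape.

```python
import logging

logger = logging.getLogger(__name__)


def save_memory(user_id, text, role, session_id):
    raise ConnectionError("transient Honcho outage")  # simulated failure


def persist_turn(user_id, text, role, session_id):
    try:
        save_memory(user_id, text, role, session_id)
    except Exception:
        # Log with context and keep going so the LLM turn still completes.
        logger.exception(
            "memory write failed (user=%s session=%s)", user_id, session_id
        )
```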

In `@examples/pydantic-ai/python/tools/get_context.py`:
- Line 29: Validate the tokens parameter locally before calling the Honcho SDK:
in the get_context function (or wherever you call
session.context(tokens=tokens)), ensure tokens is either None or an int > 0 and
raise a ValueError with a clear message (e.g., "tokens must be a positive
integer") if not; this prevents forwarding 0/negative or non-int values to
session.context and provides immediate, explicit error handling.
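The local validation this comment describes is small enough to sketch directly; the function name is hypothetical, and the bool check exists because bool is a subclass of int in Python.

```python
def validate_tokens(tokens):
    """Allow None or a positive int; reject everything else early."""
    if tokens is not None and (
        not isinstance(tokens, int) or isinstance(tokens, bool) or tokens <= 0
    ):
        raise ValueError("tokens must be a positive integer")
    return tokens
```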

In `@examples/pydantic-ai/python/tools/save_memory.py`:
- Around line 35-36: The code currently maps any non-"assistant" role to
user_peer and writes that memory; instead validate the role variable before
creating sender: accept only "assistant" or "user" and raise a ValueError (or a
similarly appropriate exception) for unsupported values, so update the logic
around sender/assistant_peer/user_peer and the session.add_messages call to
perform explicit role validation and throw the exception rather than silently
treating unknown roles as "user".
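The explicit role validation this comment asks for can be sketched as a small resolver; the function name and peer arguments are hypothetical stand-ins for the example's sender/assistant_peer/user_peer variables.

```python
def resolve_sender(role, user_peer, assistant_peer):
    """Map a role to a peer, rejecting anything but 'user' or 'assistant'."""
    if role == "assistant":
        return assistant_peer
    if role == "user":
        return user_peer
    raise ValueError(f"unsupported role: {role!r}")
```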

---

Nitpick comments:
In `@examples/pydantic-ai/python/tools/get_context.py`:
- Line 5: The module currently uses a relative import ("from .client import
HonchoContext, get_client"); change this to an absolute import from the project
package root so imports follow the repo isort guidelines (e.g., replace the
relative import with an absolute one that imports HonchoContext and get_client
from the client module at package level), ensuring HonchoContext and get_client
remain referenced correctly.

In `@examples/pydantic-ai/python/tools/save_memory.py`:
- Line 3: Replace the relative import "from .client import get_client" with an
absolute import that references the package path to the client module (so the
module symbol is still get_client); update the import in save_memory.py to use
the absolute module path for client (e.g., import the client module that exposes
get_client instead of a relative ".client") to satisfy isort/absolute-import
guidelines.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 7bd23588-4bdc-4f07-b498-943a8fab2102

📥 Commits

Reviewing files that changed from the base of the PR and between 5b6bd59 and 55ba72f.

📒 Files selected for processing (7)
  • examples/pydantic-ai/README.md
  • examples/pydantic-ai/python/main.py
  • examples/pydantic-ai/python/pyproject.toml
  • examples/pydantic-ai/python/tools/__init__.py
  • examples/pydantic-ai/python/tools/client.py
  • examples/pydantic-ai/python/tools/get_context.py
  • examples/pydantic-ai/python/tools/save_memory.py

Comment thread examples/pydantic-ai/python/main.py Outdated
Comment thread examples/pydantic-ai/python/tools/get_context.py
Comment thread examples/pydantic-ai/python/tools/save_memory.py
Contributor

ajspig commented Apr 28, 2026

Closing this as part of a broader prioritization shift and in an effort to minimize maintenance burden. Thanks for putting in the work on this!

@ajspig ajspig closed this Apr 28, 2026