feat: add Pydantic AI memory integration example #546
m1lestones wants to merge 2 commits into plastic-labs:main
Conversation
Walkthrough

This pull request adds a comprehensive example demonstrating how to integrate Honcho persistent memory with Pydantic AI agents. The example includes documentation, project configuration, a functional implementation with memory query and save tools, and helper modules for Honcho client initialization and context management.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant ChatFunc as chat() function
    participant Agent as PydanticAI Agent
    participant SystemPromptHook as System Prompt Hook
    participant MemoryTool as query_memory Tool
    participant Honcho as Honcho API
    participant Memory as Conversation Memory
    User->>ChatFunc: chat(user_id, message, session_id, message_history)
    ChatFunc->>Memory: save_memory(user_message, role="user")
    Memory->>Honcho: record message
    ChatFunc->>Agent: run agent with message_history
    Agent->>SystemPromptHook: fetch base system prompt
    SystemPromptHook->>Honcho: get_context(user_id, session_id)
    Honcho-->>SystemPromptHook: conversation history
    SystemPromptHook-->>Agent: enriched system prompt
    Agent->>Agent: determine if memory query needed
    alt Memory Query Required
        Agent->>MemoryTool: query_memory(query)
        MemoryTool->>Honcho: Dialectic API search
        Honcho-->>MemoryTool: semantic results
        MemoryTool-->>Agent: formatted memory response
    end
    Agent-->>ChatFunc: response text
    ChatFunc->>Memory: save_memory(response, role="assistant")
    Memory->>Honcho: record message
    ChatFunc-->>User: response + updated message_history
```
Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Possibly related PRs
🚥 Pre-merge checks: ✅ Passed checks (3 passed)
Actionable comments posted: 3
🧹 Nitpick comments (2)
examples/pydantic-ai/python/tools/get_context.py (1)
5-5: Prefer absolute imports in this module. Line 5 uses a relative import; this repo guideline prefers absolute imports.
Suggested change
```diff
-from .client import HonchoContext, get_client
+from tools.client import HonchoContext, get_client
```

As per coding guidelines:
**/*.py: Follow isort conventions with absolute imports preferred.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@examples/pydantic-ai/python/tools/get_context.py` at line 5: The module currently uses a relative import ("from .client import HonchoContext, get_client"); change this to an absolute import from the project package root so imports follow the repo isort guidelines (e.g., replace the relative import with an absolute one that imports HonchoContext and get_client from the client module at package level), ensuring HonchoContext and get_client remain referenced correctly.

examples/pydantic-ai/python/tools/save_memory.py (1)
3-3: Prefer absolute imports in this module. Line 3 uses a relative import; repository guidance prefers absolute imports.
Suggested change
```diff
-from .client import get_client
+from tools.client import get_client
```

As per coding guidelines:
**/*.py: Follow isort conventions with absolute imports preferred.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@examples/pydantic-ai/python/tools/save_memory.py` at line 3, Replace the relative import "from .client import get_client" with an absolute import that references the package path to the client module (so the module symbol is still get_client); update the import in save_memory.py to use the absolute module path for client (e.g., import the client module that exposes get_client instead of a relative ".client") to satisfy isort/absolute-import guidelines.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@examples/pydantic-ai/python/main.py`:
- Around line 119-129: Wrap both calls to save_memory (the one after receiving
user message and the one after creating response) in explicit try/except blocks
so transient memory persistence errors don't abort the turn: surround
save_memory(user_id, message, "user", session_id) and save_memory(user_id,
response, "assistant", session_id) with try handling the appropriate persistence
exception (or Exception if no specific memory error class exists), log the error
via the module logger (or logger.exception) including context
(user_id/session_id) and continue execution so honcho_agent.run and LLM response
delivery are not prevented by memory write failures.
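A minimal sketch of that guard, assuming `save_memory` takes `(user_id, content, role, session_id)` as in the example; the wrapper function and its name are illustrative, not part of the PR:

```python
import logging

logger = logging.getLogger(__name__)


def save_memory_safely(save_fn, user_id: str, content: str,
                       role: str, session_id: str) -> None:
    """Attempt a memory write; log and continue if persistence fails."""
    try:
        save_fn(user_id, content, role, session_id)
    except Exception:
        # logger.exception records the traceback plus identifying context,
        # so a transient Honcho outage doesn't abort the conversation turn.
        logger.exception(
            "memory write failed (user_id=%s, session_id=%s)",
            user_id, session_id,
        )
```

With this shape, both the pre-run user-message write and the post-run assistant-response write degrade to a logged warning instead of an unhandled exception.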
In `@examples/pydantic-ai/python/tools/get_context.py`:
- Line 29: Validate the tokens parameter locally before calling the Honcho SDK:
in the get_context function (or wherever you call
session.context(tokens=tokens)), ensure tokens is either None or an int > 0 and
raise a ValueError with a clear message (e.g., "tokens must be a positive
integer") if not; this prevents forwarding 0/negative or non-int values to
session.context and provides immediate, explicit error handling.
In `@examples/pydantic-ai/python/tools/save_memory.py`:
- Around line 35-36: The code currently maps any non-"assistant" role to
user_peer and writes that memory; instead validate the role variable before
creating sender: accept only "assistant" or "user" and raise a ValueError (or a
similarly appropriate exception) for unsupported values, so update the logic
around sender/assistant_peer/user_peer and the session.add_messages call to
perform explicit role validation and throw the exception rather than silently
treating unknown roles as "user".
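A sketch of the explicit mapping. The `assistant_peer`/`user_peer` names follow the comment above; the helper function itself is hypothetical:

```python
def resolve_sender(role: str, assistant_peer: str, user_peer: str) -> str:
    """Map a message role to a peer, rejecting unknown roles explicitly."""
    if role == "assistant":
        return assistant_peer
    if role == "user":
        return user_peer
    # Raise instead of silently recording unknown roles as "user".
    raise ValueError(f"unsupported role: {role!r}")
```

The result would then be used as the sender in the `session.add_messages` call.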
---
Nitpick comments:
In `@examples/pydantic-ai/python/tools/get_context.py`:
- Line 5: The module currently uses a relative import ("from .client import
HonchoContext, get_client"); change this to an absolute import from the project
package root so imports follow the repo isort guidelines (e.g., replace the
relative import with an absolute one that imports HonchoContext and get_client
from the client module at package level), ensuring HonchoContext and get_client
remain referenced correctly.
In `@examples/pydantic-ai/python/tools/save_memory.py`:
- Line 3: Replace the relative import "from .client import get_client" with an
absolute import that references the package path to the client module (so the
module symbol is still get_client); update the import in save_memory.py to use
the absolute module path for client (e.g., import the client module that exposes
get_client instead of a relative ".client") to satisfy isort/absolute-import
guidelines.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 7bd23588-4bdc-4f07-b498-943a8fab2102
📒 Files selected for processing (7)
- examples/pydantic-ai/README.md
- examples/pydantic-ai/python/main.py
- examples/pydantic-ai/python/pyproject.toml
- examples/pydantic-ai/python/tools/__init__.py
- examples/pydantic-ai/python/tools/client.py
- examples/pydantic-ai/python/tools/get_context.py
- examples/pydantic-ai/python/tools/save_memory.py
…memory error handling
Closing this as part of a broader prioritization shift and in an effort to minimize maintenance burden. Thanks for putting in the work on this! |
Summary
- `examples/pydantic-ai/python/` — a full Honcho memory integration for Pydantic AI agents
- `@agent.system_prompt` decorator for dynamic Honcho context injection and `@agent.tool` for the `query_memory` tool
- `message_history` threading for in-session coherence alongside Honcho's cross-session memory
- Parallels the existing `examples/openai-agents/` example

What's included
How it works
- `@agent.system_prompt` registers `honcho_system_prompt()`, called before every LLM request to append Honcho session history.
- `@agent.tool` registers `query_memory()`, which calls Honcho's Dialectic API via `RunContext[HonchoAgentDeps]`.
- `chat()` returns `(response, result.all_messages())`. Pass the returned history back on the next call for in-session coherence.
- `chat()` persists the user message before the agent runs and the assistant response after.

Test plan
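The two decorator hooks can be sketched with a minimal stand-in `Agent` class. The real example uses `pydantic_ai.Agent` (whose tool functions also receive a `RunContext` argument); the stand-in below only shows the registration shape, with the Honcho calls replaced by placeholder returns:

```python
class Agent:
    """Minimal stand-in mimicking pydantic_ai.Agent's registration surface."""

    def __init__(self) -> None:
        self.system_prompt_hooks = []  # run before every LLM request
        self.tools = {}                # callable tools exposed to the model

    def system_prompt(self, fn):
        self.system_prompt_hooks.append(fn)
        return fn

    def tool(self, fn):
        self.tools[fn.__name__] = fn
        return fn


agent = Agent()


@agent.system_prompt
def honcho_system_prompt() -> str:
    # In the real example this calls get_context(user_id, session_id)
    # and appends Honcho session history to the base prompt.
    return "Conversation context: ..."


@agent.tool
def query_memory(query: str) -> str:
    # In the real example this hits Honcho's Dialectic API via RunContext deps.
    return f"memory results for {query!r}"
```

Because both hooks are plain decorated functions, the agent re-evaluates the system prompt (and can invoke the tool) on every turn, which is how fresh Honcho context reaches each LLM request.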
- Set `HONCHO_API_KEY` and `OPENAI_API_KEY` in `python/.env`
- `pip install pydantic-ai honcho-ai python-dotenv`
- `cd python && python main.py`

🤖 Generated with Claude Code
Summary by CodeRabbit
New Features
Documentation