feat: add agent API support to Rust and TypeScript SDKs #66

AnthonyRonning merged 5 commits into master
Conversation
Implement the /v1/agent/* API surface for the Sage-style persistent agent system, including config management, memory blocks, archival memory, semantic search, conversation management, and SSE-based chat streaming.

Co-authored-by: factory-droid[bot] <138933559+factory-droid[bot]@users.noreply.github.com>
📝 Walkthrough

This PR introduces comprehensive Agent API support and OAuth authentication to both the Rust and TypeScript SDKs. It adds new types and public methods for agent configuration, memory management (blocks, archival), memory search, conversations, and SSE-based agent chat streaming. OAuth handlers for GitHub, Google, and Apple are added. Encrypted API call retry logic is enhanced with token refresh handling on session loss and authorization failures.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant SDK as SDK (encrypted_api_call)
    participant Server
    participant Auth as Auth Server
    Client->>SDK: API Call with session token
    SDK->>Server: Send encrypted request with headers
    alt Success (200)
        Server-->>SDK: Success response
        SDK-->>Client: Decrypted result
    else Unauthorized (401)
        Server-->>SDK: 401 Unauthorized
        SDK->>Auth: Call refresh_token (inner)
        Auth-->>SDK: New token
        SDK->>Server: Retry with new token
        Server-->>SDK: Success response
        SDK-->>Client: Decrypted result
    else Session/Encryption Error (400)
        Server-->>SDK: 400 Bad Request
        SDK->>SDK: Reinitialize session
        SDK->>Server: Retry request
        Server-->>SDK: Success response
        SDK-->>Client: Decrypted result
    end
```
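The 401 and 400 branches above boil down to a single retry wrapper. A minimal TypeScript sketch, assuming hypothetical `send`, `refreshToken`, and `reinitSession` helpers rather than the SDK's real internals:

```typescript
// Illustrative sketch of the retry flow in the diagram above.
// All function names here are assumptions, not the SDK's actual API.
interface Outcome {
  status: number;
  body: string;
}

async function callWithRetry(
  send: (token: string) => Promise<Outcome>,
  refreshToken: () => Promise<string>,
  reinitSession: () => Promise<void>,
  token: string
): Promise<Outcome> {
  let res = await send(token);
  if (res.status === 401) {
    // Authorization failure: refresh the token and retry once
    const fresh = await refreshToken();
    res = await send(fresh);
  } else if (res.status === 400) {
    // Session/encryption error: re-establish the session, then retry
    await reinitSession();
    res = await send(token);
  }
  return res;
}
```

Each failure mode is retried at most once, matching the single retry arrow in the diagram.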
```mermaid
sequenceDiagram
    participant Client
    participant SDK as SDK (agent_chat)
    participant Server
    Client->>SDK: agent_chat(input)
    SDK->>Server: SSE stream connection
    loop Receive SSE Events
        Server-->>SDK: Encrypted SSE token
        SDK->>SDK: Decrypt token
        SDK->>SDK: Parse base64 event
        alt Message Event
            SDK-->>Client: AgentSseEvent::Message {messages, step}
        else Done Event
            SDK-->>Client: AgentSseEvent::Done {total_steps, total_messages}
        else Error Event
            SDK-->>Client: AgentSseEvent::Error {error}
        end
    end
```
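The event dispatch in this loop maps naturally onto a tagged union. A hedged TypeScript sketch (field names mirror the diagram; the real wire format may differ):

```typescript
// Illustrative tagged-union parser for decrypted agent SSE payloads.
// Variant and field names follow the diagram above, not necessarily
// the SDK's actual wire format.
type AgentSseEvent =
  | { type: "message"; messages: unknown[]; step: number }
  | { type: "done"; total_steps: number; total_messages: number }
  | { type: "error"; error: string };

function parseAgentEvent(json: string): AgentSseEvent | null {
  let raw: { type?: string; [k: string]: unknown };
  try {
    raw = JSON.parse(json);
  } catch {
    return null; // non-JSON payloads (heartbeats, retry hints) are skipped
  }
  switch (raw.type) {
    case "message":
      return {
        type: "message",
        messages: (raw.messages as unknown[]) ?? [],
        step: Number(raw.step ?? 0)
      };
    case "done":
      return {
        type: "done",
        total_steps: Number(raw.total_steps ?? 0),
        total_messages: Number(raw.total_messages ?? 0)
      };
    case "error":
      return { type: "error", error: String(raw.error ?? "unknown") };
    default:
      return null; // unrecognized event types are ignored
  }
}
```

Returning `null` for unparseable or unknown payloads keeps the consuming loop simple: it only ever sees well-formed events.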
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~75 minutes
🚥 Pre-merge checks: ✅ 2 passed
Deploying opensecret-sdk with Cloudflare Pages

| | |
| --- | --- |
| Latest commit: | 2fb1461 |
| Status: | ✅ Deploy successful! |
| Preview URL: | https://9c1e5334.opensecret-sdk.pages.dev |
| Branch Preview URL: | https://feat-agent-api.opensecret-sdk.pages.dev |
- Add OAuth types (GitHub/Google/Apple init, callback, Apple native sign-in)
- Add 7 OAuth methods to OpenSecretClient matching TypeScript SDK
- Add auto retry with re-attestation on session/400/encryption errors
- Add auto token refresh retry on 401 (matches TS SDK behavior)
- Add 0.0.0.0 to mock attestation whitelist for local dev
- Use serde alias for csrf_token/state field compatibility

Co-authored-by: factory-droid[bot] <138933559+factory-droid[bot]@users.noreply.github.com>
Force-pushed from 12b395e to 26ad5d2
Non-encrypted SSE events (heartbeats, retries) were causing base64 decode errors. Now silently skipped to match TypeScript SDK behavior.

Co-authored-by: factory-droid[bot] <138933559+factory-droid[bot]@users.noreply.github.com>
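The skip behavior this commit describes can be sketched as a decode guard (illustrative only; the real SDK decrypts the payload after a check like this):

```typescript
// Sketch of a "skip non-encrypted SSE data" guard: only well-formed
// base64 payloads proceed to decryption; heartbeats, comments, and
// retry hints are silently ignored instead of raising decode errors.
function tryDecodeBase64(data: string): Uint8Array | null {
  // Reject anything that is not plausibly base64 ciphertext
  if (!/^[A-Za-z0-9+/]+={0,2}$/.test(data)) return null;
  try {
    const buf = Buffer.from(data, "base64");
    // Round-trip check guards against lenient decoding of near-base64 input
    if (buf.toString("base64") !== data) return null;
    return new Uint8Array(buf);
  } catch {
    return null;
  }
}
```

Callers simply drop `null` results and continue reading the stream, so malformed frames never surface as errors.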
Changed model from the non-existent 'glm-5' to 'llama-3.3-70b'. Relaxed assertions to not require both tools be called (LLM non-determinism).

Co-authored-by: factory-droid[bot] <138933559+factory-droid[bot]@users.noreply.github.com>
Actionable comments posted: 2
🧹 Nitpick comments (6)
rust/tests/agent_integration.rs (3)
**10-42:** Consider extracting `setup_authenticated_client` to a shared test utilities module.

This helper function is duplicated from `ai_integration.rs`. Extracting it to a shared module (e.g., `tests/common/mod.rs`) would reduce duplication and ensure consistent test setup across all integration tests.

♻️ Suggested approach

Create `rust/tests/common/mod.rs`:

```rust
use opensecret::{OpenSecretClient, Result};
use std::env;
use uuid::Uuid;

pub async fn setup_authenticated_client() -> Result<OpenSecretClient> {
    // ... shared implementation
}
```

Then in test files:

```rust
mod common;
use common::setup_authenticated_client;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@rust/tests/agent_integration.rs` around lines 10 - 42, The helper function setup_authenticated_client is duplicated across tests; extract it into a shared test utilities module so both rust/tests/agent_integration.rs and rust/tests/ai_integration.rs reuse the same implementation. Create tests/common/mod.rs that exports pub async fn setup_authenticated_client() -> Result<OpenSecretClient> (importing opensecret::OpenSecretClient, uuid::Uuid, std::env, dotenv as needed), move the existing implementation there, then replace the local function in each test file with mod common; use common::setup_authenticated_client; ensuring error types and imports match the original signatures.
**251-281:** Consider adding an assertion for search results.

The test inserts archival memory and searches for it but doesn't assert that the search results contain the inserted content. While embedding propagation may be asynchronous, consider adding a brief delay or at minimum asserting `results.results.len() > 0` to validate that the search API returns data.

💡 Suggested improvement

```diff
 let results = client
     .search_agent_memory(search)
     .await
     .expect("Failed to search agent memory");
 println!("Search returned {} results", results.results.len());
+
+// Note: Embedding propagation may be async, so we just verify the API works
+// In a more robust test, add a delay or retry logic to verify content match
+assert!(
+    results.results.iter().any(|r| r.content.contains("quantum")),
+    "Search should return the inserted content"
+);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@rust/tests/agent_integration.rs` around lines 251 - 281, In test_memory_search, after inserting via insert_archival_memory and calling search_agent_memory with MemorySearchRequest, add a validation to ensure the search returned results (e.g., assert results.results.len() > 0) and, if embedding propagation is flaky, add a short async delay (tokio::time::sleep) before the search to allow indexing; update the test function test_memory_search to perform this assertion and optional delay so the inserted archival memory is verified by the search.
**352-359:** Consider using `assert!` with descriptive messages instead of `panic!`.

Using `panic!` loses the test context. Consider using `assert!` or returning a `Result` for clearer test failure diagnostics.

💡 Suggested improvement

```diff
 AgentSseEvent::Error(err) => {
-    panic!("Agent error: {}", err.error);
+    assert!(false, "Unexpected agent error: {}", err.error);
 }
 },
 Err(e) => {
-    panic!("Stream error: {:?}", e);
+    assert!(false, "Unexpected stream error: {:?}", e);
 }
```

Or collect errors and assert at the end for better diagnostics.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@rust/tests/agent_integration.rs` around lines 352 - 359, Replace the panic! calls inside the match handling AgentSseEvent::Error and the Err(e) arm with assertions or a test Result return so failures preserve test context: in the block matching AgentSseEvent::Error(err) replace panic!("Agent error: {}", err.error) with an assert!(false, "Agent error: {}", err.error) or return Err(format!("Agent error: {}", err.error).into()), and in the Err(e) arm replace panic!("Stream error: {:?}", e) with an assert!(false, "Stream error: {:?}", e) or return Err(format!("Stream error: {:?}", e).into()); alternatively, accumulate any errors into a Vec and assert at the end of the test for clearer diagnostics, referencing AgentSseEvent, err, and e to locate the exact spots to change.

rust/src/types.rs (1)
**712-720:** Consider using a strongly-typed `data` field for conversation items.

`AgentConversationItemsResponse.data` is typed as `Vec<Value>`, which loses type safety. If the conversation item structure is well-defined, consider creating a dedicated `AgentConversationItem` type for better ergonomics and compile-time guarantees.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@rust/src/types.rs` around lines 712 - 720, AgentConversationItemsResponse currently uses a generic Vec<Value> for data which sacrifices type safety; define a new strongly-typed struct AgentConversationItem (derive Serialize, Deserialize, Debug, Clone) that models the conversation item fields and replace pub data: Vec<Value> with pub data: Vec<AgentConversationItem> in the AgentConversationItemsResponse struct, then update any code that constructs/parses AgentConversationItemsResponse to produce/consume AgentConversationItem (or implement From/TryFrom<Value> conversions) so deserialization is robust and callers get compile-time guarantees.

rust/src/client.rs (1)
**631-653:** Consider making `invite_code` optional in OAuth callback methods.

The `handle_github_callback`, `handle_google_callback`, and `handle_apple_callback` methods require `invite_code: String`, but the TypeScript SDK and common OAuth flows often treat invite codes as optional. If the server allows empty strings, this works, but an `Option<String>` might be more idiomatic.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@rust/src/client.rs` around lines 631 - 653, Change the invite_code parameter and related request struct to be optional: update handle_github_callback (and parallel methods handle_google_callback and handle_apple_callback) to accept invite_code: Option<String>, modify the OAuthCallbackRequest type used in these functions so its invite_code field is Option<String>, and adjust construction of OAuthCallbackRequest to pass the Option directly; ensure the encrypted_api_call invocation and any serialization will handle None (omitting the field or sending null) and that session_manager.set_tokens continues to work with response.refresh_token as an Option<String> if you make that optional as well.

src/lib/test/integration/agent.test.ts (1)
217-250: Consider making the SSE event assertion more robust.The current assertion at lines 245-247 performs a loose string check for event types. When the test is unskipped, consider parsing the SSE events properly to validate structure rather than relying on substring matching, which could produce false positives if these strings appear in message content.
💡 Optional: More robust SSE event validation
```diff
-// The decrypted SSE stream should contain agent event types and JSON data
-const hasAgentEvents =
-  text.includes("agent.message") || text.includes("agent.done") || text.includes("messages");
-expect(hasAgentEvents).toBe(true);
+// Parse SSE events and validate structure
+const lines = text.split("\n");
+const eventLines = lines.filter(line => line.startsWith("event:") || line.startsWith("data:"));
+expect(eventLines.length).toBeGreaterThan(0);
+
+// Verify at least one valid agent event type is present
+const hasAgentEvent = lines.some(line =>
+  line.startsWith("event: agent.message") ||
+  line.startsWith("event: agent.done") ||
+  line.startsWith("data:")
+);
+expect(hasAgentEvent).toBe(true);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/lib/test/integration/agent.test.ts` around lines 217 - 250, The SSE assertion in the skipped test "Agent chat via SSE (using createCustomFetch)" is fragile because it only does substring matching on the full response text; update the test to parse the SSE stream produced by createCustomFetch and validate event structure instead of string includes: split the response into SSE records (e.g., by blank-line delimiters), for each record extract "event:" and "data:" lines, and assert that at least one event name equals "agent.message", "agent.done", or "messages" and that corresponding "data:" lines contain valid JSON (parseable) or expected payload shape; keep this logic inside the same test function around the response.text() handling and use the createCustomFetch response text as the input to the parser.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@rust/src/client.rs`:
- Around line 1354-1360: Update agent_chat to prefer API key authentication like
create_chat_completion_stream: check for the API key first (the same field used
by create_chat_completion_stream) and if present insert it into headers as the
Authorization bearer value using HeaderValue::from_str and the same error
mapping; only if no API key is present, fall back to calling
self.session_manager.get_access_token() and inserting the JWT into headers
(preserve the existing Error::Authentication mapping). Modify the logic around
headers.insert/AUTHORIZATION in agent_chat to mirror the decision order used by
create_chat_completion_stream while keeping HeaderValue::from_str error handling
and the same header key (AUTHORIZATION).
In `@src/lib/api.ts`:
- Around line 2661-2688: The listAgentConversationItems function currently
accepts params { limit, after, order } but should match the canonical pattern
used by listInstructions and listConversationItems by adding a before parameter
and including it in the query string; update the params type in
listAgentConversationItems to { limit?: number; after?: string; before?: string;
order?: string }, add the if (params?.before) branch to push
`before=${encodeURIComponent(params.before)}` into queryParams, and ensure the
final URL construction and authenticatedApiCall invocation remain unchanged; use
listConversationItems/listInstructions as the reference pattern for naming and
behavior.
ℹ️ Review info
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (7)
- rust/src/client.rs
- rust/src/types.rs
- rust/tests/agent_integration.rs
- rust/tests/ai_integration.rs
- src/lib/api.ts
- src/lib/index.ts
- src/lib/test/integration/agent.test.ts
```typescript
export async function listAgentConversationItems(
  conversationId: string,
  params?: { limit?: number; after?: string; order?: string }
): Promise<ConversationItemsResponse> {
  let url = `${apiUrl}/v1/agent/conversations/${encodeURIComponent(conversationId)}/items`;
  const queryParams: string[] = [];

  if (params?.limit !== undefined) {
    queryParams.push(`limit=${params.limit}`);
  }
  if (params?.after) {
    queryParams.push(`after=${encodeURIComponent(params.after)}`);
  }
  if (params?.order) {
    queryParams.push(`order=${encodeURIComponent(params.order)}`);
  }

  if (queryParams.length > 0) {
    url += `?${queryParams.join("&")}`;
  }

  return authenticatedApiCall<void, ConversationItemsResponse>(
    url,
    "GET",
    undefined,
    "Failed to list agent conversation items"
  );
}
```
🧩 Analysis chain

🏁 Scripts executed against OpenSecretCloud/OpenSecret-SDK:

```shell
#!/bin/bash
# Examine the listConversationItems function to verify its parameters
rg -n "listConversationItems" src/lib/api.ts -A 30 | head -50

# Search for type definitions or interfaces related to conversation items
rg -n "ConversationItemsResponse|ConversationItem" src/lib/api.ts -B 2 -A 3

# Check for comments or documentation about the agent endpoint differences
rg -n "agent.*conversation.*item|order.*parameter" src/lib/api.ts -i

# Look for any backend API or OpenAPI documentation in the repo
find . -type f \( -name "*.md" -o -name "*.json" -o -name "*.yaml" -o -name "*.yml" \) | grep -i "api\|openapi\|swagger\|doc" | head -20

# Search for comments or documentation about agent API design
rg -n "agent.*API|order.*parameter|backward.*paginat" src/lib/api.ts -i -B 2 -A 2

# Check other agent functions for a consistent pattern
rg -n "export async function.*agent" src/lib/api.ts -A 5 | head -80

# Search for any backend API schema or OpenAPI spec that might clarify this
find . -type f \( -name "*.json" -o -name "*.yaml" -o -name "*.yml" \) | xargs rg -l "agent.*conversation\|order.*parameter" 2>/dev/null | head -10

# Check for comments or JSDoc about listAgentConversationItems
rg -n "listAgentConversationItems" src/lib/api.ts -B 20 | grep -E "\/\*|@|^\s*\*|^\s*\/\/" | head -30

# Read the documentation and implementations around the relevant functions
sed -n '2290,2330p' src/lib/api.ts
sed -n '1670,1710p' src/lib/api.ts
sed -n '2630,2690p' src/lib/api.ts
sed -n '2330,2370p' src/lib/api.ts
sed -n '2050,2080p' src/lib/api.ts

# Search for parameter type definitions
rg -n "ConversationListParams|AgentConversationListParams|InstructionListParams" src/lib/api.ts -A 5
rg -n "type.*ConversationListParams\|interface.*ConversationListParams" src/lib/api.ts -A 5
```
**Add `before` parameter to `listAgentConversationItems` and `order` to `listConversationItems` for consistency.**

`listAgentConversationItems` accepts `{ limit, after, order }` while `listConversationItems` accepts `{ limit, after, before }`. This inconsistency should be resolved. The `listInstructions` function (line 2328) demonstrates the correct pattern by supporting all four parameters: `{ limit, after, before, order }`. Both conversation item listing functions should align with this pattern. Additionally, `listConversationItems` is marked as `@deprecated`, so the agent version should be the canonical implementation and should include all standard pagination/ordering parameters.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/lib/api.ts` around lines 2661 - 2688, The listAgentConversationItems
function currently accepts params { limit, after, order } but should match the
canonical pattern used by listInstructions and listConversationItems by adding a
before parameter and including it in the query string; update the params type in
listAgentConversationItems to { limit?: number; after?: string; before?: string;
order?: string }, add the if (params?.before) branch to push
`before=${encodeURIComponent(params.before)}` into queryParams, and ensure the
final URL construction and authenticatedApiCall invocation remain unchanged; use
listConversationItems/listInstructions as the reference pattern for naming and
behavior.
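The aligned parameter set the prompt describes could look like the following sketch (the `buildItemsUrl` helper and `ListParams` type are hypothetical names; the pattern follows `listInstructions`):

```typescript
// Hedged sketch of a query-string builder supporting all four
// pagination/ordering parameters. Helper and type names are illustrative.
interface ListParams {
  limit?: number;
  after?: string;
  before?: string;
  order?: string;
}

function buildItemsUrl(base: string, conversationId: string, params?: ListParams): string {
  const url = `${base}/v1/agent/conversations/${encodeURIComponent(conversationId)}/items`;
  const qs: string[] = [];
  if (params?.limit !== undefined) qs.push(`limit=${params.limit}`);
  if (params?.after) qs.push(`after=${encodeURIComponent(params.after)}`);
  if (params?.before) qs.push(`before=${encodeURIComponent(params.before)}`);
  if (params?.order) qs.push(`order=${encodeURIComponent(params.order)}`);
  return qs.length > 0 ? `${url}?${qs.join("&")}` : url;
}
```

With the URL building factored out, both the agent and non-agent listing functions can share one parameter shape.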
Summary by CodeRabbit
Release Notes
New Features
Improvements