
Memory system and optimization via feedback backpropagation#7

Merged
AdityaGolatkar merged 4 commits into strands-labs:main from
alexachille:feat/memory-and-optimization
Apr 8, 2026

Conversation

@alexachille
Contributor

Summary

  • Adds a pluggable memory backend system for storing named parameters (strings, lists, procedural code) with file-based and Amazon Bedrock AgentCore implementations
  • Introduces .trace() to build a computation graph across AI Function calls, enabling a backward pass from output feedback to the parameters that produced it
  • Ships a TextGrad-inspired optimizer that walks the graph backward and consolidates updates into the memory backend
  • Procedural parameters allow the optimizer to distill execution traces into cached Python functions (JIT-like reuse)
  • Updates documentation and README to cover the new memory and optimization system; adds usage examples.
  • Improves typing with separate SyncAIFunction/AsyncAIFunction classes; migrates the template engine to tstr; adds graph visualization utilities

By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.

Adds support for persistent, optimizable memory parameters in agentic
workflows. Key additions:

- Computation graph: ai_function.trace() builds a graph of all function
  calls and memory accesses, enabling a backward pass from output feedback
  to the parameters that produced it.
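
  A minimal usage sketch of what this could look like; the import path, the `trace()` context manager, and `backward()` are assumptions based on the description above, not the PR's exact API:

  ```python
  # Hypothetical sketch -- names assumed from the description, not the real API.
  from strands import ai_function  # assumed import path

  @ai_function
  def summarize(text: str) -> str:
      """Summarize the given text in one sentence."""

  # trace() records every AI Function call and memory access into a graph.
  with summarize.trace() as graph:
      summary = summarize("A long quarterly report ...")

  # Feedback on the final output propagates backward through the graph
  # to the prompts and memory parameters that produced it.
  graph.backward(feedback="The summary omitted the revenue figure.")
  ```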

- Memory backends: a pluggable MemoryBackend base class for storing named
  parameters (strings, lists, or code). Ships with file-based and
  Amazon Bedrock AgentCore implementations.
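
  A rough sketch of what such a pluggable backend could look like; the method names (`get`/`set`) and the JSON-on-disk layout are illustrative assumptions, not the PR's actual interface:

  ```python
  import json
  from abc import ABC, abstractmethod
  from pathlib import Path
  from typing import Any

  class MemoryBackend(ABC):
      """Stores named parameters: strings, lists, or procedural code."""

      @abstractmethod
      def get(self, name: str) -> Any: ...

      @abstractmethod
      def set(self, name: str, value: Any) -> None: ...

  class FileMemoryBackend(MemoryBackend):
      """File-based variant: one JSON file per parameter under a root dir."""

      def __init__(self, root: str) -> None:
          self.root = Path(root)
          self.root.mkdir(parents=True, exist_ok=True)

      def _path(self, name: str) -> Path:
          return self.root / f"{name}.json"

      def get(self, name: str) -> Any:
          return json.loads(self._path(name).read_text())

      def set(self, name: str, value: Any) -> None:
          self._path(name).write_text(json.dumps(value))
  ```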

- Procedural parameters: a special parameter type that stores reusable
  Python functions distilled from the agent's execution trace — allowing
  subsequent runs to reuse proven code rather than regenerating it,
  analogous to JIT compilation for agentic logic.
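
  To illustrate the JIT-like reuse, here is a hypothetical flow in which distilled source code is stored as a named parameter and executed on a later run; `backend` and the function name are made up for this example:

  ```python
  # Source the optimizer distilled from a successful execution trace (hypothetical).
  distilled = r'''
  import re

  def parse_invoice_total(text: str) -> float:
      match = re.search(r"Total:\s*\$([0-9.]+)", text)
      return float(match.group(1)) if match else 0.0
  '''

  backend.set("parse_invoice_total", distilled)  # persist as a procedural parameter

  # A later run loads and executes the cached function instead of asking
  # the model to regenerate the logic.
  namespace: dict = {}
  exec(backend.get("parse_invoice_total"), namespace)
  total = namespace["parse_invoice_total"]("Invoice #17 ... Total: $42.50")
  ```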

- Optimizer: a pluggable Optimizer base class for feedback propagation.
  Ships with a TextGrad-inspired implementation that walks the graph
  backward node-by-node and consolidates updates into the memory backend.
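
  A condensed sketch of the backward walk; the graph accessors (`topological_order`, `parents`, `is_parameter`) and the LLM critique hook are assumptions standing in for whatever interface the PR defines:

  ```python
  from abc import ABC, abstractmethod

  class Optimizer(ABC):
      @abstractmethod
      def step(self, graph, feedback: str) -> None: ...

  class TextGradStyleOptimizer(Optimizer):
      """Turns output feedback into per-parameter updates, TextGrad style."""

      def __init__(self, backend, critique):
          self.backend = backend
          self.critique = critique  # LLM call: (node, feedback) -> update text

      def step(self, graph, feedback: str) -> None:
          pending = {graph.output_node: feedback}
          # Visit nodes in reverse topological order: outputs before inputs.
          for node in reversed(graph.topological_order()):
              node_feedback = pending.get(node)
              if node_feedback is None:
                  continue
              update = self.critique(node, node_feedback)
              if node.is_parameter:
                  # Consolidate the update into the memory backend.
                  self.backend.set(node.name, update)
              for parent in node.parents:
                  pending[parent] = update  # propagate feedback upstream
  ```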

- Typing: separate SyncAIFunction and AsyncAIFunction classes with
  ParamSpec generics. The template engine migrates to tstr.
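
  For reference, the standard ParamSpec pattern presumably at work here (a generic sketch, not the classes' actual bodies):

  ```python
  from typing import Awaitable, Callable, Generic, ParamSpec, TypeVar

  P = ParamSpec("P")
  R = TypeVar("R")

  class SyncAIFunction(Generic[P, R]):
      """Wraps a sync callable while preserving its exact signature."""

      def __init__(self, fn: Callable[P, R]) -> None:
          self._fn = fn

      def __call__(self, *args: P.args, **kwargs: P.kwargs) -> R:
          return self._fn(*args, **kwargs)

  class AsyncAIFunction(Generic[P, R]):
      """Async counterpart: type checkers see the awaited return type."""

      def __init__(self, fn: Callable[P, Awaitable[R]]) -> None:
          self._fn = fn

      async def __call__(self, *args: P.args, **kwargs: P.kwargs) -> R:
          return await self._fn(*args, **kwargs)
  ```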
@AdityaGolatkar AdityaGolatkar self-assigned this Apr 8, 2026
@AdityaGolatkar AdityaGolatkar self-requested a review April 8, 2026 00:40
@AdityaGolatkar AdityaGolatkar merged commit 43e7c7d into strands-labs:main Apr 8, 2026
2 checks passed