Agent Memories
Introduction
Agent memories are modular components that let agents store, retrieve, and stream context, knowledge, and events across a distributed system. In our stack we treat four practical classes of memory:
- Context Cache Memory — ultra-low-latency working state with optional broadcast
- In-Memory (Memory Grid) — volatile object store via FrameDB (fast reads/writes)
- Storage (Memory Grid) — persistent object store via FrameDB (durable, queryable)
- Streaming (Memory Grid) — ordered event queues via FrameDB (Redis Streams)
Assumed “current scenario”: multi-agent planning/execution that needs fast shared context, live event coordination, and durable results for audit/replay.
Memory Type | Purpose / Use Case | Best for Current Scenario |
---|---|---|
Context Cache Memory (Agent Context Cache) | Ultra-fast working memory for an agent: namespace-scoped get/set, dual-write (in-memory → Redis optional), NATS broadcasting of updates, buffered async processing, graceful shutdown. Ideal for per-request state, small configs, feature flags, and inter-agent context sync. | Use as the primary live context: keep the agent’s active plan, slots, recent tool outputs, and coordination flags here; enable topics to broadcast updates to peers in real time. |
In-Memory (Memory Grid / memory_backend="in-memory") | Volatile FrameDB object store (typically Redis) for large, hot objects: intermediate tensors, chunked responses, session artifacts. An LFU client cache can accelerate hot reads. | Use for fast scratch space and shared large blobs between components when you don’t need durability; snapshot to Storage before teardown with a backup job. |
Storage (Memory Grid / memory_backend="storage") | Durable FrameDB object store (TiDB/MySQL-compatible) for long-term memory: episodic logs, semantic knowledge, model snapshots, datasets, verified results. Supports backups to S3/MinIO and later restore. | Use as the system of record: persist final outputs, provenance, and artifacts for audit/replay; restore to In-Memory for hot processing when needed. |
Streaming (Memory Grid / memory_backend="streaming") | Ordered event queues (Redis Streams) for real-time pipelines: tool/agent events, progress signals, notifications, ingestion feeds. Supports continuous listeners or bulk drain. | Use as the coordination bus: emit plan steps, tool completions, and status ticks; other agents/services subscribe to drive workflows and UI updates. |
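As a rough sketch of how the four classes combine in that scenario, assuming the AgentContextCache and FrameDBClient APIs documented in the sections below (all URLs and instance IDs are placeholders):

from agent_sdk.context_cache import AgentContextCache, RedisContextCache
from framedb_sdk import FrameDBClient

# Context Cache Memory: live plan state, broadcast to peer agents over NATS.
ctx = AgentContextCache(primary_cache=RedisContextCache(redis_url="redis://redis:6379/0"))
ctx.set_topics_list(["agent.context.updates"])
ctx.set("active_plan", {"step": 1, "goal": "summarize"}, namespace="planner")

client = FrameDBClient(cluster_url="http://framedb-cluster", routing_url="http://routing-service")

# In-Memory grid: volatile scratch space for large, hot blobs.
client.set_object({"key": "scratch:plan-1", "data": b"chunked response bytes",
                   "framedb_id": "framedb-inmem-1", "type": "in-memory"})

# Storage grid: durable system of record for the verified result.
client.set_pythonic_object(key="result:plan-1", obj={"status": "done"},
                           framedb_id="framedb-storage-1", framedb_type="storage")

# Streaming grid: coordination bus that peers subscribe to (blocks while listening).
for msg in client.listen_for_stream_data("agent-events", "framedb-stream-1"):
    print("peer event:", msg)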
Agent Context Memory — Client Developer Documentation
Introduction
The Agent Context Memory is a modular, pluggable, and event-driven memory system for intelligent agents and distributed runtime environments.
It combines the fast in-memory and persistent storage capabilities of the Agent Context Cache with memory configuration entries (MemoryItem) defined in an agent’s subject specification.
It provides:
- Namespace-isolated context storage for runtime state, configuration, and knowledge.
- Multiple backend support (in-memory, Redis, file-based, or custom backends).
- Dual-write mode for redundancy between primary and secondary stores.
- Optional NATS-based broadcasting of memory updates across distributed agents.
- Persistent backup support for durability.
- Declarative configuration via the MemoryItem data structure in the subject specification.
A typical client workflow is:
- Define a MemoryItem in the subject specification to register a memory instance.
- Initialize AgentContextCache with the configured backend(s).
- Store and retrieve values with namespace isolation.
- Enable broadcasting for distributed state sync if required.
- Enable backup for long-term persistence.
Registering Agent Context Memory in Subject Specification
The Agent Context Memory is registered in an agent’s Subject Specification using the MemoryItem structure.
This tells the agent what memory to instantiate, which backend to use, and any custom backend-specific settings.
Data Class Reference
from dataclasses import dataclass, field
from typing import Any, Dict

@dataclass
class MemoryItem:
    memory_id: str
    memory_type: str = ""
    memory_backend: str = ""
    memory_custom_config: Dict[str, Any] = field(default_factory=dict)
Minimal Required Configuration
{
  "memory_id": "context-memory-01",
  "memory_type": "agent-context",
  "memory_backend": "redis",
  "memory_custom_config": {
    "redis_url": "redis://redis.default.svc.cluster.local:6379/0",
    "enable_broadcast": true,
    "topics": ["agent.context.updates"],
    "enable_backup": true
  }
}
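As a quick illustration, this entry maps one-to-one onto the MemoryItem dataclass above; a minimal loading sketch, assuming the JSON is held in a string spec_json:

import json

# spec_json is assumed to hold the JSON document shown above.
item = MemoryItem(**json.loads(spec_json))
print(item.memory_id)                       # "context-memory-01"
print(item.memory_custom_config["topics"])  # ["agent.context.updates"]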
Field Descriptions
Field | Location | Type | Description |
---|---|---|---|
memory_id | root | str | Unique identifier for this memory entry in the subject spec. |
memory_type | root | str | Type of memory (e.g., "agent-context", "episodic", "semantic"). |
memory_backend | root | str | Backend to use (inmemory, redis, file, or a custom backend implementing BaseContextCache). |
memory_custom_config.redis_url | config | str | Redis connection URL if using the Redis backend. |
memory_custom_config.enable_broadcast | config | bool | Whether to broadcast updates via NATS. |
memory_custom_config.topics | config | List[str] | List of NATS topics to publish updates to. |
memory_custom_config.enable_backup | config | bool | Whether to persist updates for durability. |
Example in a Subject Specification
{
  "subject_id": "agent-alpha",
  "subject_type": "agent",
  "memories": [
    {
      "memory_id": "context-memory-01",
      "memory_type": "agent-context",
      "memory_backend": "redis",
      "memory_custom_config": {
        "redis_url": "redis://redis.default.svc.cluster.local:6379/0",
        "enable_broadcast": true,
        "topics": ["agent.context.updates"],
        "enable_backup": true
      }
    }
  ]
}
This ensures that the agent will instantiate and manage a context memory instance using the specified backend, broadcast settings, and backup configuration.
Import and Setup
Importing the Context Memory
from agent_sdk.context_cache import AgentContextCache, InMemoryContextCache, RedisContextCache
Creating from a MemoryItem
memory_config = {
    "memory_backend": "redis",
    "memory_custom_config": {
        "redis_url": "redis://redis.default.svc.cluster.local:6379/0",
        "enable_broadcast": True,
        "topics": ["agent.context.updates"]
    }
}

# Pick the primary backend based on the spec entry.
if memory_config["memory_backend"] == "redis":
    primary = RedisContextCache(redis_url=memory_config["memory_custom_config"]["redis_url"])
else:
    primary = InMemoryContextCache()

ctx = AgentContextCache(primary_cache=primary)
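The feature list earlier also mentions dual-write redundancy between a primary and a secondary store. This document does not show the constructor argument for the secondary store, so the parameter name below (secondary_cache) is an assumption:

# Sketch of dual-write redundancy. `secondary_cache` is an assumed
# parameter name, not confirmed by this document.
primary = InMemoryContextCache()
secondary = RedisContextCache(redis_url="redis://redis.default.svc.cluster.local:6379/0")
ctx = AgentContextCache(primary_cache=primary, secondary_cache=secondary)
ctx.set("checkpoint", {"step": 3})  # written to both stores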
Usage Guide
Store and Retrieve Context Values
ctx.set("user_profile", {"name": "Alice", "age": 30})
print(ctx.get("user_profile"))
Namespace Isolation
ctx.set("session_token", "abc123", namespace="auth")
print(ctx.get("session_token", namespace="auth")) # abc123
Managing Keys and Namespaces
ctx.list_keys() # All keys in default namespace
ctx.list_namespaces() # All namespaces
ctx.clear_namespace("auth") # Remove all keys in 'auth'
Topics and Broadcasting
When enable_broadcast is true in memory_custom_config, the memory system publishes updates to NATS.
ctx.set_topics_list(["agent.context.updates"])
Example event payload:
{
  "event_type": "context_update",
  "namespace": "default",
  "key": "some_key",
  "value": "some_value"
}
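On the receiving side, any NATS client can consume these events; a minimal subscriber sketch using the nats-py package (the server URL is a placeholder; the subject matches the topic configured above):

import asyncio
import json

import nats  # pip install nats-py

async def main():
    nc = await nats.connect("nats://nats.default.svc.cluster.local:4222")

    async def on_update(msg):
        event = json.loads(msg.data)  # payload shape shown above
        print(event["namespace"], event["key"], "->", event["value"])

    await nc.subscribe("agent.context.updates", cb=on_update)
    await asyncio.Event().wait()  # keep the subscription alive

asyncio.run(main())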
Backup Support
When enable_backup is true:
ctx.enable_backup()
ctx.set_backup_settings({"interval": 60})
Lifecycle
Always shut down gracefully:
ctx.shutdown()
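One way to guarantee the call even when the agent errors out, sketched with a try/finally:

ctx = AgentContextCache(primary_cache=InMemoryContextCache())
try:
    ctx.set("status", "running")
    # ... agent work ...
finally:
    ctx.shutdown()  # flushes buffered writes and closes connections gracefully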
Memory Grid — Client Developer Documentation
Introduction
The Memory Grid is the integration layer between an agent and distributed FrameDB instances. It allows the agent to maintain and access different kinds of memory through a single, consistent API.
Memory Grid supports three backend modes:
- in-memory — Fast, volatile storage for ephemeral data.
- storage — Persistent object storage for long-term retention.
- streaming — Ordered queues for real-time event processing.
All modes use the same FrameDBClient interface for storing, retrieving, streaming, and backing up data.
A typical workflow is:
- Declare a Memory Grid entry in the agent’s subject specification.
- Initialize the FrameDB client with cluster and routing URLs.
- Perform operations according to the backend type.
- Optionally back up or restore memory between instances or external storage.
Registering Memory Grid in Subject Specification
A Memory Grid is declared in the subject specification using the MemoryItem structure.
The memory_type must be "memory-grid", and the memory_backend specifies which mode to use.
Data Class Reference
from dataclasses import dataclass, field
from typing import Dict, Any
@dataclass
class MemoryItem:
    memory_id: str
    memory_type: str = "memory-grid"  # Fixed value
    memory_backend: str = ""  # "in-memory", "storage", or "streaming"
    memory_custom_config: Dict[str, Any] = field(default_factory=dict)
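Because memory_type is fixed, misconfigured entries are cheap to catch early; a small illustrative helper (not part of the SDK):

VALID_BACKENDS = {"in-memory", "storage", "streaming"}

def validate_memory_grid_item(item: MemoryItem) -> None:
    # Illustrative helper, not part of the SDK.
    if item.memory_type != "memory-grid":
        raise ValueError(f"expected memory_type 'memory-grid', got {item.memory_type!r}")
    if item.memory_backend not in VALID_BACKENDS:
        raise ValueError(f"unknown memory_backend {item.memory_backend!r}")
    for required in ("framedb_id", "cluster_url", "routing_url"):
        if required not in item.memory_custom_config:
            raise ValueError(f"memory_custom_config missing {required!r}")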
Minimal Required Configuration
{
  "memory_id": "memory-grid-01",
  "memory_type": "memory-grid",
  "memory_backend": "in-memory",
  "memory_custom_config": {
    "framedb_id": "framedb-inmem-1",
    "cluster_url": "http://framedb-cluster",
    "routing_url": "http://routing-service"
  }
}
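The custom config carries everything needed to construct the client; a short sketch of going from a parsed spec entry to a ready client (entry is assumed to hold the dict shown above):

from framedb_sdk import FrameDBClient

# `entry` is assumed to be the JSON object above, already loaded as a dict.
item = MemoryItem(**entry)
cfg = item.memory_custom_config
client = FrameDBClient(cluster_url=cfg["cluster_url"], routing_url=cfg["routing_url"])
framedb_id = cfg["framedb_id"]  # route operations to this instance
mode = item.memory_backend      # "in-memory", "storage", or "streaming"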
Field Descriptions
Field | Location | Type | Description |
---|---|---|---|
memory_id | root | str | Unique identifier for this memory entry in the subject spec. |
memory_type | root | str | Must be "memory-grid" for Memory Grid integration. |
memory_backend | root | str | Memory mode: "in-memory", "storage", or "streaming". |
memory_custom_config.framedb_id | config | str | Target FrameDB instance ID. |
memory_custom_config.cluster_url | config | str | Base API URL for the FrameDB cluster. |
memory_custom_config.routing_url | config | str | API URL for the FrameDB routing service. |
memory_custom_config.extra_params | config | dict | Optional backend-specific settings (e.g., cache size, credentials). |
Import and Setup
from framedb_sdk import FrameDBClient
client = FrameDBClient(
    cluster_url="http://framedb-cluster",
    routing_url="http://routing-service"
)
In-Memory Backend (memory_backend="in-memory")
Purpose: Fast, volatile memory backed by Redis for temporary state, shared context, or scratchpad data.
Example:
client.set_object({
    "key": "session:123",
    "data": b"ephemeral binary data",
    "framedb_id": "framedb-inmem-1",
    "type": "in-memory"
})
result = client.get_object("session:123")
print(result["object"]["data"])
Notes:
- Data is not persisted after restart.
- Use create_backup(...) to persist important data.
Storage Backend (memory_backend="storage")
Purpose: Persistent object storage backed by TiDB or another SQL-compatible DB.
Example:
client.set_pythonic_object(
    key="model_snapshot_v1",
    obj={"weights": [1, 2, 3]},
    framedb_id="framedb-storage-1",
    framedb_type="storage"
)
res = client.get_pythonic_object("model_snapshot_v1")
print(res["object"]["python_object"])
Notes:
- Supports long-term retention and backups to object stores.
Streaming Backend (memory_backend="streaming")
Purpose: Ordered queues backed by Redis Streams for event-driven workflows.
Example — Continuous Listener:
for msg in client.listen_for_stream_data("agent-events", "framedb-stream-1"):
    print("New event:", msg)
Example — Bulk Pull:
for msg in client.pull_all_stream_data("agent-events", "framedb-stream-1"):
    print("Event:", msg)
Notes:
- Maintains strict ordering.
- Suitable for live ingestion or inter-agent communication.
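The examples above consume events; this document does not show the producing call, so the sketch below assumes a hypothetical put_stream_data method shaped to mirror the readers:

# Hypothetical producer: `put_stream_data` is an assumed method name, not
# confirmed by this document; adjust to the actual SDK surface.
client.put_stream_data(
    stream="agent-events",
    framedb_id="framedb-stream-1",
    data={"event": "tool_completed", "tool": "search", "status": "ok"},
)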
Backup & Restore
# Backup a single key
client.create_backup("session:123", "framedb-backup-1")

# Restore from backup
client.restore_from_backup(
    keys=["session:123"],
    framedb_id="framedb-inmem-1",
    framedb_type="in-memory",
    s3_credentials_dict={
        "bucket": "my-bucket",
        "access_key": "...",
        "secret_key": "...",
        "region": "ap-south-1"
    }
)
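Per the overview table’s suggestion to snapshot volatile data before teardown, a minimal backup-job sketch over a known key set (the key list and backup target are placeholders):

# Snapshot hot in-memory keys to a backup instance before shutting down.
hot_keys = ["session:123", "scratch:plan-1"]  # placeholder key list
for key in hot_keys:
    client.create_backup(key, "framedb-backup-1")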