Memory¶
MemoryProvider ¶
Bases: ABC
Pluggable memory backend for AI context construction.
Implement this ABC to control how conversation history is retrieved
for AI generation. The library ships with SlidingWindowMemory
(simple last-N events) as the default.
Lifecycle methods ingest, clear, and close are concrete
no-ops so that simple implementations only need to override retrieve.
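A minimal sketch of the override pattern described above. The `MemoryResult` and `MemoryProvider` classes below are simplified stand-ins for the library's real types (the actual signatures are documented under `retrieve`); `LastThreeMemory` and the dict-based context are hypothetical illustrations only.

```python
import asyncio
from abc import ABC, abstractmethod
from dataclasses import dataclass, field

# Simplified stand-ins for the library's MemoryResult and MemoryProvider.
@dataclass
class MemoryResult:
    messages: list = field(default_factory=list)
    events: list = field(default_factory=list)

class MemoryProvider(ABC):
    @abstractmethod
    async def retrieve(self, room_id, current_event, context, channel_id=None):
        ...

    async def ingest(self, event):
        # Concrete no-op: simple providers need not override this.
        pass

# A simple implementation only needs to override retrieve.
class LastThreeMemory(MemoryProvider):
    async def retrieve(self, room_id, current_event, context, channel_id=None):
        return MemoryResult(events=context["recent_events"][-3:])

result = asyncio.run(
    LastThreeMemory().retrieve("room1", None, {"recent_events": [1, 2, 3, 4, 5]})
)
print(result.events)  # → [3, 4, 5]
```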
retrieve abstractmethod async ¶
Retrieve context for AI generation.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| room_id | str | The room being processed. | required |
| current_event | RoomEvent | The event that triggered AI generation. | required |
| context | RoomContext | Full room context including recent events, bindings, and participants. | required |
| channel_id | str \| None | The AI channel requesting memory (useful when multiple AI channels share a room). | None |
Returns:
| Type | Description |
|---|---|
| MemoryResult | A MemoryResult with messages and/or events to include in the AI context. |
ingest async ¶
Ingest an event into memory (optional).
Stateful providers (e.g. summarization, vector stores) can override this to update their internal state as events arrive. The default implementation is a no-op.
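A sketch of the stateful-provider pattern mentioned above. `RunningLogMemory` and its `_seen` list are hypothetical; a real provider would subclass the library's `MemoryProvider` and override its async `ingest` hook.

```python
import asyncio

# Hypothetical stateful provider: accumulates events as they arrive,
# the way a summarizer or vector store would update its index.
class RunningLogMemory:
    def __init__(self):
        self._seen = []

    async def ingest(self, event):
        # Update internal state on each incoming event.
        self._seen.append(event)

provider = RunningLogMemory()

async def main():
    for event in ["hello", "world"]:
        await provider.ingest(event)

asyncio.run(main())
print(provider._seen)  # → ['hello', 'world']
```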
MemoryResult dataclass ¶
Result returned by a memory provider.
Memory providers can return pre-built AI messages (e.g. summaries, synthetic context) and/or raw room events that AIChannel will convert using its own content extraction logic (preserving vision support).
A provider may populate one or both fields. messages are prepended
first, then events are converted and appended.
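The merge order described above can be illustrated with a small sketch. The `MemoryResult` stand-in and the `build_context`/`convert` helpers are hypothetical simplifications; the real conversion is AIChannel's own content extraction logic.

```python
from dataclasses import dataclass, field

@dataclass
class MemoryResult:  # simplified stand-in for the library's dataclass
    messages: list = field(default_factory=list)  # pre-built AI messages
    events: list = field(default_factory=list)    # raw room events

def build_context(result, convert):
    # Documented merge order: messages go first, then events are
    # converted and appended after them.
    return list(result.messages) + [convert(e) for e in result.events]

result = MemoryResult(messages=["summary"], events=["hello", "world"])
print(build_context(result, str.upper))  # → ['summary', 'HELLO', 'WORLD']
```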
SlidingWindowMemory ¶
Bases: MemoryProvider
Return the most recent events from the room context.
This replicates the original AIChannel behavior of slicing
context.recent_events[-max_events:].
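The slice above is the whole trick; a negative-index slice also handles short histories gracefully, returning everything when fewer than `max_events` events exist:

```python
def sliding_window(recent_events, max_events):
    # Equivalent to context.recent_events[-max_events:]: keeps the
    # last max_events items, or all of them if there are fewer.
    return recent_events[-max_events:]

print(sliding_window([1, 2, 3, 4, 5], 3))  # → [3, 4, 5]
print(sliding_window([1, 2], 3))           # → [1, 2]
```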
SummarizingMemory ¶
SummarizingMemory(inner, provider, max_context_tokens, *, tier1_ratio=0.5, tier2_ratio=0.85, truncate_chars=2000, summary_max_tokens=1000, min_events=5, summary_cache_ttl_seconds=1800.0)
Bases: MemoryProvider
Two-tier memory provider that proactively manages context budget.
Wraps an inner MemoryProvider (typically SlidingWindowMemory)
and applies two tiers of context reduction:
- Tier 1: truncate large event bodies in older messages when total estimated tokens exceed tier1_ratio * max_context_tokens.
- Tier 2: summarize older events via a lightweight AI provider when total tokens still exceed tier2_ratio * max_context_tokens.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| inner | MemoryProvider | The wrapped memory provider. | required |
| provider | AIProvider | A lightweight AI provider for summarization (e.g. Haiku). | required |
| max_context_tokens | int | Total token budget for the AI context. | required |
| tier1_ratio | float | Fraction of budget that triggers tier-1 truncation. | 0.5 |
| tier2_ratio | float | Fraction of budget that triggers tier-2 summarization. | 0.85 |
| truncate_chars | int | Max characters per old event body in tier 1. | 2000 |
| summary_max_tokens | int | Max tokens for the LLM summary response. | 1000 |
| min_events | int | Minimum events to keep before summarizing (tier 2). | 5 |
| summary_cache_ttl_seconds | float | TTL for cached summaries. | 1800.0 |
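The tier thresholds follow directly from the ratios above. A sketch of the decision logic, assuming the documented defaults (`reduction_tier` is a hypothetical helper, not a library function):

```python
def reduction_tier(estimated_tokens, max_context_tokens,
                   tier1_ratio=0.5, tier2_ratio=0.85):
    # Tier 2 implies tier 1 already fired, so check the larger
    # threshold first.
    if estimated_tokens > tier2_ratio * max_context_tokens:
        return 2  # summarize older events
    if estimated_tokens > tier1_ratio * max_context_tokens:
        return 1  # truncate large event bodies
    return 0      # under budget, no reduction

# With an 8000-token budget: tier 1 fires above 4000, tier 2 above 6800.
print(reduction_tier(3000, 8000))  # → 0
print(reduction_tier(5000, 8000))  # → 1
print(reduction_tier(7000, 8000))  # → 2
```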
RetrievalMemory ¶
Bases: MemoryProvider
Wraps an inner provider and enriches context with knowledge sources.
On retrieve, queries all configured knowledge sources concurrently,
merges results by score, and prepends a context message with relevant
knowledge before the inner provider's messages.
On ingest, forwards to the inner provider and indexes text content
in all knowledge sources.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| sources | list[KnowledgeSource] | Knowledge sources to query on each retrieval. | required |
| inner | MemoryProvider | The wrapped memory provider for conversation history. | required |
| max_results | int | Maximum knowledge results to include in context. | 5 |
| min_query_length | int | Minimum query length to trigger search. | 3 |
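The "query concurrently, merge by score" step described above can be sketched with plain `asyncio`. The two search coroutines and the `(score, text)` hit shape are hypothetical stand-ins; real sources implement the library's KnowledgeSource interface.

```python
import asyncio

# Hypothetical knowledge sources returning (score, text) hits.
async def search_docs(query):
    return [(0.9, "docs: install guide"), (0.4, "docs: changelog")]

async def search_wiki(query):
    return [(0.7, "wiki: setup notes")]

async def merged_search(sources, query, max_results=5):
    # Query every source concurrently, then merge all hits by
    # descending score and keep the top max_results.
    hit_lists = await asyncio.gather(*(s(query) for s in sources))
    hits = [h for hl in hit_lists for h in hl]
    hits.sort(key=lambda h: h[0], reverse=True)
    return hits[:max_results]

hits = asyncio.run(merged_search([search_docs, search_wiki], "install", max_results=2))
print(hits)  # → [(0.9, 'docs: install guide'), (0.7, 'wiki: setup notes')]
```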
MockMemoryProvider ¶
Bases: MemoryProvider
Mock memory provider that records calls and returns configured results.
Example::

    mock = MockMemoryProvider(
        messages=[AIMessage(role="system", content="Summary of conversation")],
    )
    result = await mock.retrieve("room1", event, context)
    assert len(mock.retrieve_calls) == 1