While your agents sleep, the Dream Engine runs a 9-stage REM pipeline — synthesizing insights, compressing redundancy, and building new connections. Not just storing. Genuinely processing.
Traditional AI memory systems are lookup tables with a search function. They retrieve — they never integrate, generalize, or improve. The Dream Engine changes the paradigm entirely.
Static retrieval over a fixed corpus. Chunks are stored as-is, forever. New data doesn't affect old understanding. No consolidation. No growth.
Similarity search at inference time. Semantically close chunks surface together. Better than keyword search — but still purely reactive retrieval.
9-stage overnight cognitive pipeline. Hippocampal replay, synaptic homeostasis, predictive coding. Memory that gets smarter while you sleep.
Each stage maps to a documented neuroscientific mechanism. This is not metaphor — it is a computational implementation of the same processes that consolidate human memory overnight.
| # | Strategy | Neuroscience Basis | What It Does | Implementation Notes |
|---|---|---|---|---|
| 01 | SYNTHESIZE | Hippocampal replayBuzsáki (2015) | Combines related memories into unified concepts. Sharp-wave ripples during NREM replay episodic traces, allowing the cortex to abstract across experiences and form schematic representations. | Core consolidation stage. Run first for maximum downstream benefit. Powers the collective dream across hive namespaces. |
| 02 | PATTERN_EXTRACT | Statistical learningSaffran et al. (1996) | Surfaces recurring themes across memory traces. Detects structural patterns invisible in any single memory — frequency, co-occurrence, causal chains, and temporal sequences. | Powers FORECAST and procedural skill extraction in downstream stages. |
| 03 | INSIGHT_GENERATE | Cortical binding (IIT)Tononi (2004) | Creates novel connections invisible in raw data. Integrated Information Theory: consciousness and insight arise from maximally integrated information across cortical modules. Generates bridging concepts. | Can run in adversarial mode (GAN model — Deperrois et al. 2022) for hypothesis-vs-antithesis synthesis. |
| 04 | VALIDATE | Memory gatingReconsolidation theory | Filters weak or contradictory insights, strengthens reliable ones. Mirrors the brain's reconsolidation window — memories are mutable only when reactivated, allowing error correction before re-storage. | Critical for production deployments. Prevents hallucinated insights from polluting the memory graph. |
| 05 | EVOLVE | Hebbian plasticityHebb (1949) | Adapts memory weights based on usage patterns and outcomes. "Neurons that fire together wire together" — frequently co-activated memories strengthen their associative bonds over time. | Drives long-term personalization. Agent behavior adapts to its own history without manual prompting. |
| 06 | FORECAST | Predictive codingClark (2013) | Generates forward predictions from memory patterns. The brain is a prediction machine — the cortex continuously generates and refines probabilistic models of upcoming sensory input and events. | Returns high-confidence forecast chains for agent planning. Feeds into TMR priority scoring. |
| 07 | COMPRESS | Synaptic Homeostasis (SHY)Tononi & Cirelli (2014) | Removes redundancy, distills essentials. The SHY hypothesis: sleep downscales synaptic strength built up during waking to a sustainable baseline while preserving the signal-to-noise ratio of important traces. | Reduces memory footprint 30–40% without measurable information loss. Essential for long-lived agents. |
| 08 | ASSOCIATE | Spreading activationCollins & Loftus (1975) | Builds semantic connection networks. Activation spreads through associative memory like ripples — priming nearby concepts and revealing non-obvious semantic neighborhoods and analogical bridges. | Populates the memory knowledge graph visible in Memory Explorer. |
| 09 | REFLECT | Metacognitive monitoringNelson & Narens (1990) | Generates "lessons learned" and procedural skills. Meta-level monitoring: the system examines its own memory processes, extracts transferable heuristics, and encodes them as executable if/then rules. | Primary source of skills visible in Skills Forge. Can be deployed directly into agent system prompts. |
TMR (Targeted Memory Reactivation) is a real technique: playing specific cues during slow-wave sleep selectively strengthens targeted memories. We implement this as a closed-loop priority queue that injects high-value memory cues at simulated slow-oscillation up-state windows.
Memories are scored before the dream run using a weighted formula combining salience (importance to the agent's mission), difficulty (resistance to past consolidation), and contradiction flags. High-scoring memories are cued first during the active pipeline.
POST /v1/memory/tmr/queue
Real TMR targets the up-phase of slow oscillations — periods of high cortical excitability during NREM sleep. We simulate this by scheduling cue injection at pipeline checkpoints where the model's context is maximally receptive to new associative connections.
Target memory keys are surfaced into the active synthesis context during SYNTHESIZE and ASSOCIATE stages. The pipeline processes them with elevated priority, ensuring they participate in the maximum number of cross-memory connections.
GET /v1/memory/tmr/cues
Benchmarked against AutoDream (a proprietary consolidation project from a major lab) and Mem0 (the most-adopted open-source memory layer). The gap is structural, not incremental.
| Feature | Dream Engine | AutoDream | Mem0 |
|---|---|---|---|
| Open source | ✓ | ✗ | ✗ |
| 9-stage consolidation pipeline | ✓ | ✗ | ✗ |
| Targeted Memory Reactivation (TMR) | ✓ | ✗ | partial |
| Collective / multiplayer dreams | ✓ | ✗ | ✗ |
| Adversarial insight mode (GAN) | ✓ | ✗ | ✗ |
| Intelligence Score KPI | ✓ | ✗ | ✗ |
| Self-hostable | ✓ | ✗ | ✗ |
| Procedural skill extraction | ✓ | ✗ | ✗ |
Every dream run returns an Intelligence Score — a normalized performance metric that captures insight yield relative to compute spent. Track it over time to see your agent's cognitive growth curve.
A full pipeline run with all 9 strategies, adversarial insight generation enabled, Claude Sonnet, standard budget:
A 9-stage Full REM run on a large memory batch typically yields a score of 28–45, depending on memory diversity and model capability. Scores compound as memories cross-reference across nightly cycles.
The Dream Engine scales beyond individual agents. Collective Dream runs across an entire hive namespace — synthesizing insights from every agent's memories into a shared intelligence layer that compounds nightly.
Hive namespaces aggregate memories from all team agents. Any insight discovered by one agent becomes available to the collective during the next dream cycle. Cross-pollination happens automatically.
The SYNTHESIZE stage operates across agent boundaries. Memories from your research agent, coding agent, and planning agent are woven into unified concepts no single agent could reach alone.
Each Collective Dream returns a team-level Intelligence Score. Track your organization's cognitive growth as a compound metric. Hive scores consistently outperform individual agent scores by 2–4x.
The REFLECT, EVOLVE, and FORECAST stages extract procedural heuristics from memory patterns. These compile into if X → run Y rules that inject directly into agent system prompts — making your agents smarter with every dream cycle, no manual prompt engineering required.
One API call triggers a full 9-stage REM pipeline. Your agent's memories are consolidated, insights are extracted, and skills are written. Wake up smarter.