slopshop.gg / dream-engine / pipeline

Dream Engine Pipeline

9-stage neuroscience-inspired memory synthesis. BiOtA bidirectional replay. Large SWR clusters. Free forever.

v4.0 POST /v1/memory/dream/start POST /v1/memory/dream/biota POST /v1/memory/context-dream Free
Section 01
The Full Pipeline
Each stage mirrors a distinct neurological process. Temperature controls divergence: low = consolidation, high = recombination. Stage 7 intentionally spikes — that is where creative synthesis occurs.
Stage | Name | Biological Analog | Temp | Output
1 | Reactivation | Large SWR clusters (Robinson Neuron 2026) | 0.70 | Co-activated memory clusters
2 | Pattern Detection | NREM SO-spindle coupling | 0.65 | Recurring patterns
3 | Synthesis / Transfer | ASC hippocampo-cortical dialog | 0.70 | Merged schema
3.5 | Schema Scaffold (gating pass) | vmPFC Go-CLS gating | 0.75 | High-SNR schemas only
4 | Contradiction Resolution | Schema updating | 0.60 | Resolved contradictions
5 | Priority Scoring | Intelligence Score computation | 0.65 | Salience-ranked insights
6 | Abstraction | Neocortical schema formation | 0.70 | Higher-order abstractions
7 | REM Recombination ← THE MAGIC | REM theta + PGO waves | 1.15 | 🔥 Creative leaps + what-ifs
8 | SHY Downscaling | Synaptic Homeostasis (Tononi) | 0.35 | 40–55% token savings
9 | Consolidated Output | Predictive reorganization | 0.40 | Morning Brief + actions
Stage 3.5 is a gating pass, not a billable stage. Schemas below SNR threshold are dropped before Stage 4.
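The Stage 3.5 gate amounts to a threshold filter. A minimal client-side sketch, assuming each schema carries an `snr` score (the field name and the 0.75 cutoff are illustrative, not part of the API):

```javascript
// Stage 3.5 gating sketch: schemas below the SNR threshold are
// dropped before Stage 4. `snr` is a hypothetical field name.
const SNR_THRESHOLD = 0.75; // illustrative cutoff

function gateSchemas(schemas, threshold = SNR_THRESHOLD) {
  return schemas.filter((s) => s.snr >= threshold);
}

const candidates = [
  { id: "sch_a", snr: 0.91 },
  { id: "sch_b", snr: 0.42 }, // dropped: below threshold
  { id: "sch_c", snr: 0.78 },
];
console.log(gateSchemas(candidates).map((s) => s.id)); // → [ 'sch_a', 'sch_c' ]
```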

Section 02
BiOtA Mode — Bidirectional Online Transfer Algorithm
Human sleep runs 4–5 ultradian cycles per night. BiOtA mirrors this: the NREM forward pass builds schemas, the REM reverse pass recombines them, and a feedback loop drives convergence. Run 3–5 iterations for maximum insight yield.
biota-config.js
// BiOtA — 3-5 iterations (mirrors human ultradian sleep cycles)
// NREM forward → REM reverse → feedback loop
 
POST /v1/memory/dream/biota
{
  "iterations": 4,
  "swr_cluster_threshold": 0.75, // min co-activation score
  "convergence_threshold": 0.85, // stop early if reached
  "namespace": "default"
}
 
// Response
{
  "biota_id": "biota_4f9a",
  "iterations_run": 4,
  "convergence_score": 0.92,
  "nrem_schemas": 18,
  "rem_recombinations": 7,
  "tokens_saved": 58, // % via SHY downscaling
  "intelligence_score": 91.3
}
iteration anatomy
Each iteration runs Stages 1–6 (NREM consolidation, forward pass) → Stage 7 (REM recombination, reverse pass) → Stages 8–9 (SHY pruning + output). Convergence is measured as the cosine similarity between successive consolidated memory states; once it meets or exceeds convergence_threshold, the loop terminates early.
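The early-stop check is easy to picture. A minimal sketch, assuming consolidated memory states are exposed as embedding vectors (the API does not specify this representation):

```javascript
// Convergence sketch: cosine similarity between successive
// consolidated memory states, compared against convergence_threshold.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

function hasConverged(prevState, nextState, threshold = 0.85) {
  return cosine(prevState, nextState) >= threshold;
}

// Identical states converge trivially; orthogonal ones do not:
console.log(hasConverged([1, 2, 3], [1, 2, 3])); // → true
console.log(hasConverged([1, 0], [0, 1]));       // → false
```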

Section 03
ContextDream — Token Compression for LLMs
Pass any large context window through the Dream pipeline. It returns a semantically compressed gist, creative leaps surfaced from Stage 7, and a predictive slice sized to your token budget. Effective context: 3.2× the raw window.
context-dream.js
POST /v1/memory/context-dream
{
  "context": "Your long LLM context here...",
  "token_budget": 32000
}
 
// Response
{
  "gist": "...", // NREM-consolidated summary
  "creative_leaps": ["...", "..."], // Stage 7 REM outputs
  "predictive_slice": "...", // fits token_budget exactly
  "tokens_saved": 19840, // absolute token reduction
  "rem_score": 0.88 // recombination quality (0-1)
}
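Before calling the endpoint, it can help to sanity-check your own budget numbers. A client-side sketch using the common ~4 characters per token heuristic — the server presumably uses a real tokenizer, so this is only a rough pre-flight estimate, not the API's counting method:

```javascript
// Rough pre-flight token estimate (~4 chars/token heuristic).
// Only an approximation; the server's tokenizer is authoritative.
function estimateTokens(text) {
  return Math.ceil(text.length / 4);
}

function fitsBudget(slice, tokenBudget) {
  return estimateTokens(slice) <= tokenBudget;
}

console.log(estimateTokens("a".repeat(128000))); // → 32000
console.log(fitsBudget("a".repeat(128000), 32000)); // → true
```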

Section 04
Temperature Curve
Temperature controls the model's divergence at each stage. Stages 1–6 consolidate (0.60–0.75). Stage 7 spikes hard to 1.15 — this is where novel associations form. Stages 8–9 cool sharply to lock in signal and prune noise.
S1 0.70 · S2 0.65 · S3 0.70 · S3.5 0.75 · S4 0.60 · S5 0.65 · S6 0.70 · S7 1.15 (REM PEAK 🔥) · S8 0.35 · S9 0.40
Bars proportional to temperature. S8–S9 cool to lock signal; noise pruned by SHY downscaling.
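The same schedule as data, with a check that Stage 7 is the lone divergence spike — a sketch only; the API does not expose per-stage temperatures as a config object:

```javascript
// Temperature schedule from Section 01. Every stage sits in the
// 0.35–0.75 consolidation band except Stage 7's REM spike.
const TEMPS = {
  "1": 0.70, "2": 0.65, "3": 0.70, "3.5": 0.75, "4": 0.60,
  "5": 0.65, "6": 0.70, "7": 1.15, "8": 0.35, "9": 0.40,
};

// Find the peak stage by temperature.
const peak = Object.entries(TEMPS).reduce((a, b) => (b[1] > a[1] ? b : a));
console.log(peak); // → [ '7', 1.15 ]
```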

Section 05
Benchmark Results
MIB v2 (Memory Intelligence Benchmark). Baselines: Mem0 (Series A, managed), Zep (OSS). Scores averaged across 200 memory sessions with 500–5,000 memories each.
benchmark — MIB v2
Dream Engine vs Baselines (MIB v2):
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Creative Leap Novelty: 4.8× vs Mem0/Zep
Token Savings (SHY): 55–65% via pruning
Effective Context: 3.2× raw window
BiOtA Convergence: 0.92 avg (4 iters)
Insight Quality (MIB): 4.1× actionable
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
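What the 3.2× effective-context figure means in practice, as quick arithmetic (the multiplier is taken from the table above; the calculation itself is illustrative):

```javascript
// 3.2× effective context: each budget token carries ~3.2 raw tokens
// worth of consolidated content after the Dream pipeline.
const EFFECTIVE_MULTIPLIER = 3.2; // from the MIB v2 table
const tokenBudget = 32000;
console.log(tokenBudget * EFFECTIVE_MULTIPLIER); // → 102400
```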

Section 06
Quick Start
Three calls get you the full pipeline. All endpoints accept JSON. Authenticate with Authorization: Bearer $SLOP_KEY.
1
Start a Dream
bash
# Kick off all 9 strategies
curl -X POST /v1/memory/dream/start \
  -H "Authorization: Bearer $SLOP_KEY" \
  -d '{"strategies": ["synthesize","pattern_extract","insight_generate",
                  "compress","associate","validate","evolve","forecast","reflect"]}'
 
# → { "dream_id": "drm_7f2b", "stages": 9, "status": "running" }
2
Run BiOtA Mode (full neuroscience pipeline)
bash
curl -X POST /v1/memory/dream/biota \
  -H "Authorization: Bearer $SLOP_KEY" \
  -d '{"iterations": 4}'
 
# → { "biota_id": "biota_4f9a", "convergence_score": 0.92, ... }
3
Get your Context Pack
bash
curl -X POST /v1/memory/context-dream \
  -H "Authorization: Bearer $SLOP_KEY" \
  -d '{"context": "...", "token_budget": 32000}'
 
# → { "gist": "...", "creative_leaps": [...], "tokens_saved": 19840 }
CLI shorthand
slop dream run --strategy synthesize,compress,insight_generate
slop dream run --biota --iterations 4
slop context-dream --budget 32k < context.txt