A backend execution layer vs an orchestration framework
| Feature | Slopshop | LangChain |
|---|---|---|
| What it is | Backend execution layer (API + CLI) | Python/JS orchestration framework |
| Built-in tools | 1,255 real handlers | Wrappers around external APIs |
| Real compute on server | 925 pure-compute handlers | Delegates to external services |
| Self-hostable | Full self-hosting | Framework only (tools need external services) |
| Verification | `_engine: "real"` + SHA-256 | No built-in verification |
| Persistent memory | 8 core APIs, free forever | Requires external DB setup |
| MCP support | Native server | Community adapters |
| Setup complexity | `npm install -g slopshop` | Framework + dependencies + config |
| Language lock-in | Language-agnostic (HTTP API) | Python or JS |
| Works in Claude Code/Cursor | MCP native | Not directly |
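To make the verification row concrete, here is a minimal local sketch of what checking a verified result could look like. The field names (`_engine`, `_sha256`, `output`) and the canonical-JSON hashing scheme are illustrative assumptions, not Slopshop's documented response shape.

```python
# Hypothetical sketch: verify a Slopshop-style result locally.
import hashlib
import json

def verify_result(result: dict) -> bool:
    """Check the _engine flag and recompute the SHA-256 of the payload.

    Field names ("_engine", "_sha256", "output") are assumptions for
    illustration, not the documented response shape.
    """
    if result.get("_engine") != "real":
        return False
    # Hash a canonical JSON encoding of the output and compare.
    payload = json.dumps(result["output"], sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest() == result.get("_sha256")

# Example: a well-formed result passes, a tampered one fails.
output = {"sum": 42}
digest = hashlib.sha256(json.dumps(output, sort_keys=True).encode()).hexdigest()
print(verify_result({"_engine": "real", "output": output, "_sha256": digest}))   # True
print(verify_result({"_engine": "real", "output": {"sum": 7}, "_sha256": digest}))  # False
```

The point of the pattern is that the client, not the server, decides whether to trust a result: any consumer can recompute the hash without a round trip.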
LangChain is the right choice when you need fine-grained control over prompt chains, retrieval-augmented generation pipelines, or complex multi-step agent logic in Python. It's an orchestration framework for building custom agent workflows.
Slopshop is the right choice when your agents need a ready-to-use backend with real tools, persistent memory, and verified results. No framework setup and no dependency management: one API key, 1,255 handlers, and it works inside any MCP-compatible client. The two complement each other: use LangChain for orchestration and Slopshop as the tool backend.
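As a sketch of that orchestration/backend split, a framework-side tool can simply call a Slopshop handler over HTTP. The endpoint URL, request shape, and handler name below are assumptions for illustration, not Slopshop's documented API; check the real docs before wiring this up.

```python
# Hypothetical sketch: Slopshop as an HTTP tool backend for an agent framework.
import json
import urllib.request

SLOPSHOP_URL = "https://api.slopshop.example/v1/execute"  # placeholder, not the real host

def call_slopshop(handler: str, params: dict, api_key: str) -> dict:
    """Invoke a single Slopshop handler over plain HTTP (assumed request shape)."""
    req = urllib.request.Request(
        SLOPSHOP_URL,
        data=json.dumps({"handler": handler, "params": params}).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# On the LangChain side (requires langchain-core), the same function can be
# exposed as a tool for an agent to call:
#
#   from langchain_core.tools import tool
#
#   @tool
#   def slopshop(handler: str, params: dict) -> dict:
#       """Run a Slopshop handler and return its result."""
#       return call_slopshop(handler, params, api_key=API_KEY)
```

Because the backend is a plain HTTP API, the same `call_slopshop` shim works from any language or framework, which is what the "language-agnostic" row in the table means in practice.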
Slopshop is a self-hostable backend for AI agents: real tools, persistent memory, one execution layer.
78 categories · 1,255 handlers · 925 self-hostable pure-compute · 8 core memory APIs free forever · 500 free credits on signup
`memory-set` · `memory-get` · `memory-search` · `memory-list` · `memory-delete` · `memory-stats` · `memory-namespace-list` · `counter-get` — always 0 credits
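A tiny client for the free memory APIs could look like the sketch below. The base URL, endpoint paths, and payload fields are assumptions made up for illustration; only the endpoint names come from the list above.

```python
# Hypothetical sketch: the free Slopshop memory APIs as a small HTTP client.
import json
import urllib.request

class SlopshopMemory:
    BASE = "https://api.slopshop.example/v1"  # placeholder host, not the real one

    def __init__(self, api_key: str):
        self.api_key = api_key

    def _post(self, endpoint: str, payload: dict) -> dict:
        # Assumed shape: POST JSON to /v1/<endpoint> with a bearer token.
        req = urllib.request.Request(
            f"{self.BASE}/{endpoint}",
            data=json.dumps(payload).encode(),
            headers={
                "Authorization": f"Bearer {self.api_key}",
                "Content-Type": "application/json",
            },
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)

    def set(self, key: str, value, namespace: str = "default") -> dict:
        return self._post("memory-set", {"namespace": namespace, "key": key, "value": value})

    def get(self, key: str, namespace: str = "default") -> dict:
        return self._post("memory-get", {"namespace": namespace, "key": key})

    def search(self, query: str, namespace: str = "default") -> dict:
        return self._post("memory-search", {"namespace": namespace, "query": query})
```

Since these endpoints cost 0 credits, an agent can persist state between sessions without budgeting for it.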