SLOPSHOP.gg v4.0
5 real-world playbooks — copy, paste, ship

Recipes for
real problems

Which APIs to call, how to chain them, what it costs, and what breaks. Go from curious to shipping in under 10 minutes.

Playbooks

PLAYBOOK 01

Process Invoices Automatically

Drop in raw invoice text, get back clean structured JSON — invoice ID, customer, amount, and date — stored in persistent memory and ready to query.

~12 credits total · 3 API calls · memory-set is free
Input
Invoice #4471
Acme Corp — 2025-03-01
Services rendered: $2,400.00
Due: 2025-03-31
Output
{ "invoice_id": "4471", "customer": "Acme Corp", "amount": 2400.00, "date": "2025-03-01" }
Step 1

Extract structured data llm-data-extract

Send raw invoice text to the LLM extraction tool with a schema hint. It returns JSON shaped to your schema — no prompt engineering required.

bash 10 credits
# Step 1 — extract structured fields from raw invoice text
curl -X POST https://slopshop.gg/v1/tools/llm-data-extract \
  -H "Authorization: Bearer $SLOP_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "text": "Invoice #4471\nAcme Corp — 2025-03-01\nServices rendered: $2,400.00\nDue: 2025-03-31",
    "schema": {
      "invoice_id": "string",
      "customer":   "string",
      "amount":     "number",
      "date":       "string (YYYY-MM-DD)"
    }
  }'
response (save result.extracted for step 2)
{
  "ok": true,
  "result": {
    "extracted": {
      "invoice_id": "4471",
      "customer":   "Acme Corp",
      "amount":     2400.00,
      "date":       "2025-03-01"
    }
  },
  "_engine": "real"
}
Step 2

Validate the JSON text-json-validate

Before persisting, confirm the shape is exactly right. This catches LLM hallucinations (wrong types, missing fields) before they silently corrupt your records.

bash 2 credits
# Step 2 — validate the extracted JSON against your schema
curl -X POST https://slopshop.gg/v1/tools/text-json-validate \
  -H "Authorization: Bearer $SLOP_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "json": "{\"invoice_id\":\"4471\",\"customer\":\"Acme Corp\",\"amount\":2400.00,\"date\":\"2025-03-01\"}",
    "required_keys": ["invoice_id", "customer", "amount", "date"],
    "types": {
      "invoice_id": "string",
      "amount":     "number"
    }
  }'
response
{
  "ok": true,
  "result": {
    "valid":   true,
    "errors":  [],
    "parsed":  { /* the parsed object */ }
  }
}
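What the validate step checks can be approximated locally. A minimal sketch in plain Python (my approximation of the tool's behavior, not its actual server-side implementation):

```python
import json

def validate_json(raw: str, required_keys, types):
    """Check that a JSON string parses, contains every required key,
    and that typed fields match ('string' -> str, 'number' -> int/float)."""
    errors = []
    try:
        parsed = json.loads(raw)
    except json.JSONDecodeError as e:
        return {"valid": False, "errors": [f"parse error: {e}"], "parsed": None}
    for key in required_keys:
        if key not in parsed:
            errors.append(f"missing key: {key}")
    type_map = {"string": str, "number": (int, float)}
    for key, expected in types.items():
        if key in parsed and not isinstance(parsed[key], type_map[expected]):
            errors.append(f"wrong type for {key}: expected {expected}")
    return {"valid": not errors, "errors": errors, "parsed": parsed}

raw = '{"invoice_id":"4471","customer":"Acme Corp","amount":2400.00,"date":"2025-03-01"}'
result = validate_json(raw, ["invoice_id", "customer", "amount", "date"],
                       {"invoice_id": "string", "amount": "number"})
```

Running it client-side as well costs nothing and gives you a fast fail before spending the 2 credits.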
Step 3

Persist to memory memory-set — FREE

Store the validated invoice JSON under a namespaced key. Retrieve it later with memory-search or memory-get. Memory operations never consume credits.

bash FREE
# Step 3 — store to persistent memory (always free)
curl -X POST https://slopshop.gg/v1/tools/memory-set \
  -H "Authorization: Bearer $SLOP_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "key":   "invoice:4471",
    "value": {
      "invoice_id": "4471",
      "customer":   "Acme Corp",
      "amount":     2400.00,
      "date":       "2025-03-01"
    },
    "ttl": 2592000
  }'
pipe version

All three steps above can be collapsed into a single piped request. The pipe runner feeds each tool's output into the next tool's input, with $prev references selecting fields from the previous step's result.

bash one request, same 12 credits
# Pipe version — extract | validate | store in one call
curl -X POST https://slopshop.gg/v1/pipes/run \
  -H "Authorization: Bearer $SLOP_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "pipe": [
      {
        "tool":  "llm-data-extract",
        "input": {
          "text":   "Invoice #4471\nAcme Corp — 2025-03-01\nServices: $2400\nDue: 2025-03-31",
          "schema": { "invoice_id":"string","customer":"string","amount":"number","date":"string" }
        }
      },
      {
        "tool":  "text-json-validate",
        "input": {
          "json":          "$prev.extracted",
          "required_keys": ["invoice_id","customer","amount","date"]
        }
      },
      {
        "tool":  "memory-set",
        "input": {
          "key":   "invoice:$prev.parsed.invoice_id",
          "value": "$prev.parsed"
        }
      }
    ]
  }'
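How those $prev references get resolved can be sketched in a few lines. This is an illustrative local model of the substitution (the real resolution logic runs server-side and may differ):

```python
import re

REF = re.compile(r"\$prev\.([\w.]+)")

def lookup(prev, path):
    """Walk a dotted path like 'parsed.invoice_id' into the previous result."""
    node = prev
    for part in path.split("."):
        node = node[part]
    return node

def resolve_refs(value, prev):
    """Replace $prev.<path> references in a step's input with values
    from the previous step's result, recursing into dicts and lists."""
    if isinstance(value, str):
        m = REF.fullmatch(value)
        if m:  # the whole string is a reference: keep the original type
            return lookup(prev, m.group(1))
        # embedded reference: interpolate as text
        return REF.sub(lambda m: str(lookup(prev, m.group(1))), value)
    if isinstance(value, dict):
        return {k: resolve_refs(v, prev) for k, v in value.items()}
    if isinstance(value, list):
        return [resolve_refs(v, prev) for v in value]
    return value

prev = {"parsed": {"invoice_id": "4471", "customer": "Acme Corp"}}
step_input = {"key": "invoice:$prev.parsed.invoice_id", "value": "$prev.parsed"}
resolved = resolve_refs(step_input, prev)
```

Note the distinction: "$prev.parsed" alone passes the object through intact, while "invoice:$prev.parsed.invoice_id" interpolates the value into a string.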
cost breakdown
Tool                 Credits   Notes
llm-data-extract     10        LLM inference call
text-json-validate   2         Schema validation + type check
memory-set           0         Always free
Total                ~12

When to use this

  • Parsing emailed invoices, receipts, or purchase orders into your accounting system
  • Any workflow where structured data arrives as unstructured text (contracts, forms, tickets)
  • Building a queue processor that needs to validate before writing to a database

Common pitfalls

  • Don't skip validation. LLMs occasionally hallucinate field names or return strings where you expect numbers. The validate step catches this before it hits your DB.
  • Watch currency symbols. The extractor strips "$" by default. If your schema expects a string like "$2,400" rather than a number, specify the type explicitly.
  • TTL defaults to forever. Pass ttl in seconds if you want automatic expiry. 2592000 = 30 days.
  • Duplicate keys overwrite. Use a unique key like invoice:<id> rather than a generic latest_invoice.
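If you do ask for amounts as strings like "$2,400" and need numbers later, normalization is a one-liner. A hypothetical client-side helper (the extractor does this for you server-side when the schema says number):

```python
import re

def parse_amount(raw: str) -> float:
    """Normalize a currency string like '$2,400.00' to a float by
    stripping everything except digits, the decimal point, and a sign."""
    cleaned = re.sub(r"[^\d.\-]", "", raw)
    return float(cleaned)

amount = parse_amount("$2,400.00")
```

Note this assumes US-style formatting; "1.050,00" style inputs would need a locale-aware pass first.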

PLAYBOOK 02

Security Audit Any Website

Get a full security picture in under a minute — tech stack fingerprint, SSL certificate health, HTTP security headers, and response time baseline.

~23 credits total · 4 API calls or 1 template call
One-shot: POST /v1/agent/template/security-audit

Skip the manual steps entirely. Pass a URL and the agent runs all four checks, aggregates findings, and returns a single structured report with risk ratings. Same ~23 credits. View template docs →

Step 1

Fingerprint the tech stack sense-url-tech-stack

Detect frameworks, CDNs, analytics, CMS, and server software. Exposed tech is a primary attack vector — know it before attackers do.

bash 5 credits
curl -X POST https://slopshop.gg/v1/tools/sense-url-tech-stack \
  -H "Authorization: Bearer $SLOP_KEY" \
  -H "Content-Type: application/json" \
  -d '{ "url": "https://target.example.com" }'
response
{
  "result": {
    "technologies": [
      { "name": "WordPress", "version": "6.4.2", "confidence": 98 },
      { "name": "PHP",       "version": "8.1",   "confidence": 85 },
      { "name": "Cloudflare", "version": null,   "confidence": 100 }
    ]
  }
}
Step 2

Check SSL certificate sense-ssl-check

Validate the certificate chain, check expiry, detect weak cipher suites, and flag common misconfigurations like self-signed certs or expired intermediates.

bash 6 credits
curl -X POST https://slopshop.gg/v1/tools/sense-ssl-check \
  -H "Authorization: Bearer $SLOP_KEY" \
  -H "Content-Type: application/json" \
  -d '{ "hostname": "target.example.com" }'
response
{
  "result": {
    "valid":          true,
    "expires_in_days": 47,
    "issuer":         "Let's Encrypt",
    "grade":          "A",
    "warnings":       ["Certificate expires in less than 60 days"]
  }
}
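The 60-day warning in that response is a simple threshold over days-until-expiry. A stdlib sketch of the same check, useful if you already hold the certificate's notAfter date (the 60-day threshold is assumed from the example output):

```python
from datetime import datetime, timezone

def expiry_warnings(not_after: datetime, now: datetime, warn_days: int = 60):
    """Return (days_left, warnings) for a certificate expiry date."""
    days_left = (not_after - now).days
    warnings = []
    if days_left < 0:
        warnings.append("Certificate has expired")
    elif days_left < warn_days:
        warnings.append(f"Certificate expires in less than {warn_days} days")
    return days_left, warnings

now = datetime(2025, 3, 26, tzinfo=timezone.utc)
days, warns = expiry_warnings(datetime(2025, 5, 12, tzinfo=timezone.utc), now)
```

With those dates, days comes back as 47, matching the example response above.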
Step 3

Audit HTTP security headers sense-url-headers

Check for Content-Security-Policy, X-Frame-Options, HSTS, and other headers that prevent common web attacks.

bash 6 credits
curl -X POST https://slopshop.gg/v1/tools/sense-url-headers \
  -H "Authorization: Bearer $SLOP_KEY" \
  -H "Content-Type: application/json" \
  -d '{ "url": "https://target.example.com", "security_check": true }'
response
{
  "result": {
    "headers": { /* all response headers */ },
    "security": {
      "score":   62,
      "missing": ["Content-Security-Policy", "Permissions-Policy"],
      "present": ["Strict-Transport-Security", "X-Frame-Options"]
    }
  }
}
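A score like that can be approximated by weighting which hardening headers are present. A rough sketch — the weights here are my assumption for illustration, not the tool's actual scoring formula:

```python
# Hypothetical weights; the real scoring formula is not documented here.
WEIGHTS = {
    "Strict-Transport-Security": 25,
    "Content-Security-Policy":   25,
    "X-Frame-Options":           20,
    "X-Content-Type-Options":    15,
    "Permissions-Policy":        15,
}

def header_score(headers: dict):
    """Split the watched headers into present/missing and sum the weights."""
    present = [h for h in WEIGHTS if h in headers]
    missing = [h for h in WEIGHTS if h not in headers]
    return {"score": sum(WEIGHTS[h] for h in present),
            "present": present, "missing": missing}

report = header_score({
    "Strict-Transport-Security": "max-age=63072000",
    "X-Frame-Options": "DENY",
})
```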
Step 4

Measure response time sense-url-response-time

Establish a baseline. Slow pages often indicate server-side bloat, unindexed DB queries, or upstream DDoS protection kicking in. Useful for diff-ing before/after deploys.

bash 6 credits
curl -X POST https://slopshop.gg/v1/tools/sense-url-response-time \
  -H "Authorization: Bearer $SLOP_KEY" \
  -H "Content-Type: application/json" \
  -d '{ "url": "https://target.example.com", "samples": 3 }'
response
{
  "result": {
    "avg_ms":  312,
    "min_ms":  287,
    "max_ms":  341,
    "status":  200,
    "ttfb_ms": 134
  }
}
or: one-shot template
bash ~23 credits — all 4 checks in one call
# One-shot: agent runs all checks, returns a ranked report
curl -X POST https://slopshop.gg/v1/agent/template/security-audit \
  -H "Authorization: Bearer $SLOP_KEY" \
  -H "Content-Type: application/json" \
  -d '{ "url": "https://target.example.com" }'

The template returns a single report object with risk_level, findings[], and recommendations[] — pre-ranked by severity. Useful when you don't want to aggregate manually.

cost breakdown
Tool                      Credits   Notes
sense-url-tech-stack      5         External HTTP probe
sense-ssl-check           6         TLS handshake + chain validation
sense-url-headers         6         Headers + security scoring
sense-url-response-time   6         3 samples default
Total                     ~23

When to use this

  • Auditing a vendor or third-party site before onboarding them
  • Pre-launch security checklist for your own sites
  • Scheduled weekly job to catch certificate expiry before it pages you at 3am
  • Building a security dashboard across a fleet of customer-facing URLs

Common pitfalls

  • HTTP vs HTTPS. Pass the full URL with scheme. For SSL checks use the bare hostname without protocol.
  • Cloudflare can obscure the real stack. If you see only Cloudflare in the tech stack, the origin server is behind a proxy. The report notes this.
  • CSP absence is not critical on every site. A static marketing page without user data doesn't need the same CSP rigor as a banking app. Context matters.
  • Response time varies by geography. Slopshop probes from its hosted region. If your users are in a different region, use the template's region parameter when available.

PLAYBOOK 03

Build a Research Agent with Memory

Fetch a URL, analyze the content, store the findings in persistent memory, then search and recall them later — across sessions, across runs.

~4 credits total · 4 API calls · memory is always free
Step 1

Fetch the content sense-url-content

Retrieve the cleaned text content from any URL — no HTML noise, just the readable article or page body, ready for analysis.

bash 3 credits
curl -X POST https://slopshop.gg/v1/tools/sense-url-content \
  -H "Authorization: Bearer $SLOP_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "url":        "https://example.com/research-paper",
    "clean_html": true,
    "max_chars":  8000
  }'
response
{
  "result": {
    "title":      "Advances in Transformer Architectures",
    "content":    "The paper introduces...",
    "word_count": 3847,
    "url":        "https://example.com/research-paper"
  }
}
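What clean_html does is, at its simplest, a tag-stripping pass that skips script and style contents. A minimal stdlib sketch of that idea (real readability extraction is considerably smarter about boilerplate and article detection):

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect visible text, skipping script/style contents."""
    SKIP = {"script", "style"}

    def __init__(self):
        super().__init__()
        self.parts, self._skip = [], 0

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self._skip += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self._skip:
            self._skip -= 1

    def handle_data(self, data):
        if not self._skip and data.strip():
            self.parts.append(data.strip())

def clean_html(html: str) -> str:
    p = TextExtractor()
    p.feed(html)
    return " ".join(p.parts)

text = clean_html("<html><script>var x=1;</script><p>The paper introduces...</p></html>")
```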
Step 2

Analyze word count and density text-word-count

Get word count, unique words, top terms, and reading time — useful for tagging and indexing research material so memory-search returns relevant results.

bash 1 credit
curl -X POST https://slopshop.gg/v1/tools/text-word-count \
  -H "Authorization: Bearer $SLOP_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "text":     "The paper introduces...",
    "top_n":    10,
    "stopwords": true
  }'
response
{
  "result": {
    "word_count":    3847,
    "unique_words":  1204,
    "reading_time_min": 15,
    "top_terms": [
      { "term": "transformer", "count": 48 },
      { "term": "attention",   "count": 31 }
    ]
  }
}
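The counting itself is simple enough to mirror locally. A sketch with collections.Counter — the tiny stopword list and the 250-words-per-minute reading speed are my assumptions, not the tool's:

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "is"}  # illustrative only

def word_stats(text: str, top_n: int = 10, wpm: int = 250):
    """Tokenize to lowercase words, then count totals, uniques,
    reading time, and the most frequent non-stopword terms."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    return {
        "word_count": len(words),
        "unique_words": len(set(words)),
        "reading_time_min": max(1, round(len(words) / wpm)),
        "top_terms": [{"term": t, "count": c} for t, c in counts.most_common(top_n)],
    }

stats = word_stats("attention is all you need; the transformer uses attention layers")
```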
Step 3

Store in persistent memory memory-set — FREE

Save the content, metadata, and top terms together. Tag the entry with searchable keywords so you can surface it later from a natural language query.

bash FREE
curl -X POST https://slopshop.gg/v1/tools/memory-set \
  -H "Authorization: Bearer $SLOP_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "key": "research:transformer-architectures-2025",
    "value": {
      "title":    "Advances in Transformer Architectures",
      "url":      "https://example.com/research-paper",
      "summary":  "The paper introduces...",
      "tags":     ["transformer", "attention", "architecture", "ml"],
      "saved_at": "2025-03-26T10:00:00Z"
    }
  }'
Step 4

Search and recall later memory-search — FREE

Retrieve stored research entries by semantic query. Works across all memory you have stored — search by topic, tag, or phrase.

bash FREE — days/weeks/sessions later
# Later — find everything you stored about transformers
curl -X POST https://slopshop.gg/v1/tools/memory-search \
  -H "Authorization: Bearer $SLOP_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "query":  "transformer architecture papers",
    "top_k":  5,
    "prefix": "research:"
  }'
response
{
  "result": {
    "matches": [
      {
        "key":   "research:transformer-architectures-2025",
        "score": 0.94,
        "value": { /* the stored object */ }
      }
    ]
  }
}
cost breakdown
Tool                Credits   Notes
sense-url-content   3         External fetch + parsing
text-word-count     1         Local compute
memory-set          0         Free
memory-search       0         Free — unlimited recall queries
Total               ~4        Per URL fetched

Pro tip — build a corpus

  • Loop this playbook over a list of URLs at ~4 credits each to build a searchable knowledge base.
  • 100 articles = ~400 credits. Memory storage and all future searches remain free.
  • Use a consistent key prefix like research: so you can scope searches to just your research corpus.

When to use this

  • Giving an AI agent persistent memory that survives across sessions
  • Building a research assistant that accumulates knowledge over time
  • Competitive intelligence — fetch competitor pages, store findings, query later
  • Summarizing and indexing documentation for internal search

Common pitfalls

  • Memory is per API key. All agents sharing a key share a memory namespace. Use a key prefix like agent-a:research: to namespace per agent.
  • Max value size is 512KB. For large documents, store a summary + the URL, not the full content.
  • memory-search is semantic but not perfect. For exact-match retrieval use memory-get with a known key.
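The per-agent prefix convention from the first pitfall is easy to enforce with a small key builder. A hypothetical helper (the agent:namespace:slug scheme is my suggestion, not a platform requirement):

```python
import re

def memory_key(agent: str, namespace: str, ident: str) -> str:
    """Build a collision-safe key like 'agent-a:research:some-title'.
    Slugifies the identifier so keys stay predictable and greppable."""
    slug = re.sub(r"[^a-z0-9]+", "-", ident.lower()).strip("-")
    return f"{agent}:{namespace}:{slug}"

key = memory_key("agent-a", "research", "Advances in Transformer Architectures")
```

Searching with prefix "agent-a:research:" then scopes recall to that one agent's corpus.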

PLAYBOOK 04

Clean and Transform Data Pipeline

Turn messy CSV into clean, filtered, sorted, statistically annotated JSON — without spinning up a data warehouse.

~5 credits total · 4 API calls or 1 pipe call
One-shot: POST /v1/pipes/data-clean

Pass your CSV and cleaning rules, get back clean JSON with stats attached. All four steps run server-side in a single round-trip. View pipe docs →

Raw CSV in
name,age,revenue
Alice,29,14200
Bob,,8100
,31,
Carol,27,19500
Clean JSON out
[ {"name":"Alice","age":29,"revenue":14200}, {"name":"Carol","age":27,"revenue":19500} ]
Step 1

Parse CSV to JSON text-csv-to-json

Convert the raw CSV string into a typed JSON array. Headers become object keys; types are inferred automatically.

bash 1 credit
curl -X POST https://slopshop.gg/v1/tools/text-csv-to-json \
  -H "Authorization: Bearer $SLOP_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "csv":        "name,age,revenue\nAlice,29,14200\nBob,,8100\n,31,\nCarol,27,19500",
    "infer_types": true,
    "trim":       true
  }'
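infer_types here means numeric-looking strings come back as numbers and empty cells as null. A minimal local sketch of that rule (my approximation of the inference behavior):

```python
import csv
import io

def infer(value: str):
    """Best-effort type inference: int, then float, else trimmed
    string; empty cells become None."""
    v = value.strip()
    if v == "":
        return None
    for cast in (int, float):
        try:
            return cast(v)
        except ValueError:
            pass
    return v

def csv_to_json(text: str):
    return [{k: infer(v) for k, v in row.items()}
            for row in csv.DictReader(io.StringIO(text))]

rows = csv_to_json("name,age,revenue\nAlice,29,14200\nBob,,8100")
```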
Step 2

Filter out incomplete rows exec-filter-json

Remove rows with null or empty required fields. Pass a filter expression — here we keep only rows where both name and revenue are present.

bash 1 credit
curl -X POST https://slopshop.gg/v1/tools/exec-filter-json \
  -H "Authorization: Bearer $SLOP_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "data":   [ /* array from step 1 */ ],
    "filter": "item.name != null && item.revenue != null && item.age != null"
  }'
Step 3

Sort by revenue descending exec-sort-json

Sort the filtered rows so the highest-revenue records come first. Stable sort preserves original order for ties.

bash 1 credit
curl -X POST https://slopshop.gg/v1/tools/exec-sort-json \
  -H "Authorization: Bearer $SLOP_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "data":      [ /* filtered array from step 2 */ ],
    "sort_by":   "revenue",
    "direction": "desc"
  }'
Step 4

Generate summary statistics analyze-json-stats

Get min, max, mean, median, and standard deviation for all numeric fields. Useful for validation and dashboards.

bash 2 credits
curl -X POST https://slopshop.gg/v1/tools/analyze-json-stats \
  -H "Authorization: Bearer $SLOP_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "data":   [ /* sorted array from step 3 */ ],
    "fields": ["age", "revenue"]
  }'
response
{
  "result": {
    "stats": {
      "revenue": {
        "min": 14200, "max": 19500,
        "mean": 16850, "median": 16850, "stddev": 2650
      },
      "age": {
        "min": 27, "max": 29, "mean": 28
      }
    },
    "row_count": 2,
    "dropped_rows": 2
  }
}
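For comparison, the whole four-step pipeline fits in a few lines of stdlib Python and reproduces the numbers above. One assumption: stddev here is population standard deviation (pstdev), which is what matches the example's 2650:

```python
import csv
import io
import statistics

RAW = "name,age,revenue\nAlice,29,14200\nBob,,8100\n,31,\nCarol,27,19500"

# Step 1 — parse, inferring ints where possible, None for empty cells
rows = [{k: (int(v) if v.strip().isdigit() else v.strip() or None)
         for k, v in row.items()}
        for row in csv.DictReader(io.StringIO(RAW))]

# Step 2 — drop rows missing any required field
required = ("name", "age", "revenue")
clean = [r for r in rows if all(r[f] is not None for f in required)]

# Step 3 — sort by revenue, descending (Python's sort is stable, like the tool's)
clean.sort(key=lambda r: r["revenue"], reverse=True)

# Step 4 — summary stats on the revenue column
revenues = [r["revenue"] for r in clean]
stats = {"min": min(revenues), "max": max(revenues),
         "mean": statistics.mean(revenues),
         "median": statistics.median(revenues),
         "stddev": statistics.pstdev(revenues)}
```

The hosted pipe is still worth the 5 credits when the data arrives mid-agent-run and you don't control the runtime; the sketch just shows there's no magic in the steps.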
or: one-shot pipe
bash ~5 credits — all steps in one call
# Pre-built pipe: CSV in, clean+sorted+stats JSON out
curl -X POST https://slopshop.gg/v1/pipes/data-clean \
  -H "Authorization: Bearer $SLOP_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "csv":            "name,age,revenue\nAlice,29,14200\nBob,,8100\n,31,\nCarol,27,19500",
    "required_fields": ["name", "revenue", "age"],
    "sort_by":        "revenue",
    "direction":      "desc",
    "stats":          true
  }'
cost breakdown
Tool                 Credits   Notes
text-csv-to-json     1         Local compute
exec-filter-json     1         Expression eval, sandboxed
exec-sort-json       1         Stable sort
analyze-json-stats   2         Statistical analysis
Total                ~5        Per pipeline run

When to use this

  • Pre-processing data exports before loading into a database or BI tool
  • Cleaning form submissions or survey responses before analysis
  • Any agent that ingests CSV uploads and needs to work with structured data
  • Validating data quality on ingestion (dropped_rows tells you the loss rate)

Common pitfalls

  • Character encoding issues. Ensure your CSV is UTF-8 before sending. Latin-1 encoded files with smart quotes will cause parse errors.
  • Filter expressions are sandboxed JS. Use item.field syntax. Avoid this, eval, or DOM references — they are blocked.
  • Very large datasets. The API accepts up to 5MB per request. For larger files, chunk the CSV by rows and run the pipeline in batches.
  • Type inference for dates. Date strings like "2025-03-01" come back as strings, not Date objects. Parse them downstream if you need date arithmetic.

PLAYBOOK 05

Hash and Verify File Integrity

Generate SHA-256, SHA-512, and MD5 hashes for any input in a single chained flow. Use for checksums, deduplication, and tamper detection.

~3 credits total · 3 API calls or 1 pipe call
One-shot: POST /v1/pipes/hash-everything

Send any text or base64-encoded binary, get back all three hashes at once. Fastest path when you just need the checksums. View pipe docs →

Input
Hello, Slopshop!
All three hashes
sha256: a3f4b2...
sha512: 9c1e87...
md5:    d41d8c...
Step 1

SHA-256 hash crypto-hash-sha256

The industry-standard hash for file integrity and digital signatures. Use this as your primary checksum.

bash 1 credit
curl -X POST https://slopshop.gg/v1/tools/crypto-hash-sha256 \
  -H "Authorization: Bearer $SLOP_KEY" \
  -H "Content-Type: application/json" \
  -d '{ "input": "Hello, Slopshop!" }'
response
{
  "ok": true,
  "result": {
    "hash":      "a3f4b2c1d8e5f7a9...",
    "algorithm": "SHA-256",
    "input_len": 16
  },
  "_engine": "real"
}
Step 2

SHA-512 hash crypto-hash-sha512

Higher security margin for long-term archival or when your threat model requires 512-bit output. Useful for cryptographic key derivation chains.

bash 1 credit
curl -X POST https://slopshop.gg/v1/tools/crypto-hash-sha512 \
  -H "Authorization: Bearer $SLOP_KEY" \
  -H "Content-Type: application/json" \
  -d '{ "input": "Hello, Slopshop!" }'
response
{
  "result": {
    "hash":      "9c1e87f3a2b4d6e8...",
    "algorithm": "SHA-512",
    "input_len": 16
  }
}
Step 3

MD5 hash crypto-hash-md5

Fast, widely supported, and sufficient for deduplication and non-security checksums. Do not use MD5 for passwords or security-critical integrity checks — it is cryptographically broken for those use cases.

bash 1 credit
curl -X POST https://slopshop.gg/v1/tools/crypto-hash-md5 \
  -H "Authorization: Bearer $SLOP_KEY" \
  -H "Content-Type: application/json" \
  -d '{ "input": "Hello, Slopshop!" }'
response
{
  "result": {
    "hash":      "d41d8cd98f00b204...",
    "algorithm": "MD5"
  }
}
or: hash-everything pipe
bash ~3 credits — all three hashes at once
# Get all three hashes in a single request
curl -X POST https://slopshop.gg/v1/pipes/hash-everything \
  -H "Authorization: Bearer $SLOP_KEY" \
  -H "Content-Type: application/json" \
  -d '{ "input": "Hello, Slopshop!" }'
response
{
  "ok": true,
  "result": {
    "sha256": "a3f4b2c1d8e5f7a9...",
    "sha512": "9c1e87f3a2b4d6e8...",
    "md5":    "d41d8cd98f00b204...",
    "input_len": 16
  },
  "_engine": "real"
}
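Since all three digests are plain local computation, the pipe's output shape can be reproduced with Python's hashlib — no network, stdlib only:

```python
import hashlib

def hash_everything(data: bytes) -> dict:
    """Compute all three digests over the same bytes. Taking bytes
    makes encoding explicit: a trailing newline changes every digest."""
    return {
        "sha256": hashlib.sha256(data).hexdigest(),
        "sha512": hashlib.sha512(data).hexdigest(),
        "md5":    hashlib.md5(data).hexdigest(),
        "input_len": len(data),
    }

digests = hash_everything("Hello, Slopshop!".encode("utf-8"))
```

Use the hosted pipe when the hashing happens inside an agent run; use the local version when you control the runtime and want zero credits.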
cost breakdown
Tool                 Credits   Notes
crypto-hash-sha256   1         Local compute, deterministic
crypto-hash-sha512   1         Local compute
crypto-hash-md5      1         Local compute
Total                ~3        Via the hash-everything pipe

When to use this

  • Generating checksums for file upload verification (user uploads a file + expected SHA-256)
  • Deduplicating content in a pipeline — hash the content, check if the MD5 already exists in memory
  • Publishing software release checksums (SHA-256 or SHA-512 recommended)
  • Building a tamper-detection log — hash each record and store the hash in a separate immutable store

Common pitfalls

  • Encoding matters. "Hello" and "Hello\n" produce entirely different hashes. Trim trailing whitespace or newlines before hashing if you want consistent results across systems.
  • Binary files. Send binary content as base64-encoded string and set "encoding": "base64" in the request body.
  • MD5 collision attacks are real. Never use MD5 to verify that a security-sensitive file hasn't been tampered with by an adversary. Use SHA-256 or SHA-512 instead.
  • Hash comparison must be constant-time. If you're building a verification flow, use the crypto-hmac-verify tool instead of comparing hash strings yourself — naive string comparison is vulnerable to timing attacks.
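If you do compare digests yourself in Python, the stdlib's constant-time primitive is hmac.compare_digest. A minimal verification sketch (a local alternative to the hosted tool, under the assumption you hold the expected hex digest):

```python
import hashlib
import hmac

def verify_sha256(data: bytes, expected_hex: str) -> bool:
    """Compare an expected SHA-256 checksum against the data's actual
    digest in constant time, avoiding the timing leak of plain ==."""
    actual = hashlib.sha256(data).hexdigest()
    return hmac.compare_digest(actual, expected_hex.lower())

payload = b"release-v1.0.tar.gz contents"
good = hashlib.sha256(payload).hexdigest()
```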
Go deeper
  • System Hive — always-on agent workspace →
  • System Army — 10K parallel agents in one call →
  • Templates — pre-built agent workflows →
  • Catalog — all 82 categories of tools →