Define Slopshop endpoints as CrewAI BaseTool subclasses. Assign them to agents. Wire agents into a crew. Real tools across 82 categories — DNS, SSL, HTTP, crypto — available to every agent in your crew.
Free persistent memory. Every result verified.
CrewAI tools are Python classes that subclass BaseTool. Each Slopshop endpoint becomes one tool class — define name, description, args_schema, and a _run method that calls Slopshop.
Install CrewAI and requests. CrewAI works with any LLM backend — OpenAI, Anthropic, Groq, Ollama. Slopshop handles tool execution regardless.
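Assuming a standard Python environment, the install is one pip command (`crewai` and `requests` are the standard PyPI package names):

```shell
pip install crewai requests
```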
Create one BaseTool subclass per Slopshop tool. The _run method POSTs to Slopshop's REST API and returns the JSON result.
Pass tool instances to your Agent objects in the tools=[] list. Define tasks and kickoff the crew.
A complete example showing a factory function to generate CrewAI tool classes from Slopshop's OpenAPI schema, then a crew with two agents — a researcher and an analyst.
```python
import os
import json
import requests
from typing import Optional, Type
from pydantic import BaseModel, Field
from crewai import Agent, Task, Crew, Process
from crewai.tools import BaseTool

SLOP_KEY = os.environ["SLOPSHOP_API_KEY"]
SLOP_URL = "https://slopshop.gg"

# Factory: generate a CrewAI BaseTool class for any Slopshop endpoint
def make_slopshop_tool(slug: str, description: str, schema: dict) -> Type[BaseTool]:
    # Build a Pydantic model from the OpenAPI schema properties
    fields = {}
    props = schema.get("properties", {})
    required = schema.get("required", [])
    for field_name, field_schema in props.items():
        # Optional fields need an Optional annotation so Pydantic accepts None
        if field_name in required:
            anno, default = str, ...
        else:
            anno, default = Optional[str], None
        fields[field_name] = (anno, Field(default, description=field_schema.get("description", "")))
    ArgsModel = type(
        f"{slug.replace('-', '_').title()}Args",
        (BaseModel,),
        {
            "__annotations__": {k: v[0] for k, v in fields.items()},
            **{k: v[1] for k, v in fields.items()},
        },
    )

    def _run(self, **kwargs) -> str:
        res = requests.post(
            f"{SLOP_URL}/v1/{slug}",
            json={k: v for k, v in kwargs.items() if v is not None},
            headers={"Authorization": f"Bearer {SLOP_KEY}"},
        )
        return json.dumps(res.json())

    return type(
        f"Slopshop{slug.replace('-', '').title()}Tool",
        (BaseTool,),
        {
            "name": slug,
            "description": description,
            "args_schema": ArgsModel,
            "_run": _run,
        },
    )

# Load Slopshop tools from the OpenAPI schema
def load_tools(slugs: list[str]) -> list[BaseTool]:
    schema = requests.get(
        f"{SLOP_URL}/v1/openapi.json",
        headers={"Authorization": f"Bearer {SLOP_KEY}"},
    ).json()
    tools = []
    for slug in slugs:
        path = schema["paths"].get(f"/v1/{slug}", {}).get("post", {})
        if not path:
            continue
        params = path["requestBody"]["content"]["application/json"]["schema"]
        ToolClass = make_slopshop_tool(slug, path.get("summary", slug), params)
        tools.append(ToolClass())
    return tools

# Build the tool set
network_tools = load_tools(["dns-lookup", "ssl-check", "whois", "http-request"])
memory_tools = load_tools(["memory-set", "memory-get"])
crypto_tools = load_tools(["hash"])

# Define agents with role-specific tool sets
researcher = Agent(
    role="Infrastructure Researcher",
    goal="Gather accurate infrastructure data about target domains",
    backstory="You are a network analyst who investigates domain infrastructure using real-time DNS and SSL data.",
    tools=network_tools,
    verbose=True,
)
analyst = Agent(
    role="Security Analyst",
    goal="Analyze infrastructure data, identify issues, and save findings to memory",
    backstory="You analyze infrastructure reports and save structured findings for future reference.",
    tools=memory_tools + crypto_tools,
    verbose=True,
)

# Define tasks
gather_task = Task(
    description="Look up DNS records, SSL certificate, and WHOIS data for slopshop.gg",
    agent=researcher,
    expected_output="DNS records, SSL cert details, WHOIS registration info as JSON",
)
analyze_task = Task(
    description="Analyze the infrastructure findings and save a summary to memory key 'audit-slopshop'",
    agent=analyst,
    expected_output="Summary of findings saved to Slopshop memory. Key risks flagged.",
)

# Assemble and run the crew
crew = Crew(
    agents=[researcher, analyst],
    tasks=[gather_task, analyze_task],
    process=Process.sequential,
    verbose=True,
)
result = crew.kickoff()
print(result)
```
CrewAI orchestrates. Slopshop executes. Every agent in your crew gets access to real tools across 82 categories — no mocking, no simulation, no hallucinated execution.
DNS, SSL, HTTP, crypto, hashing, code execution, data transforms. Assign any subset to any agent in your crew.
Crew agents can write to and read from Slopshop's key-value store at zero credit cost. Share state between agents without a database.
Each Slopshop endpoint maps to a clean BaseTool subclass. Typed args via Pydantic. No raw HTTP in your agent code.
Audit logs per tool call. See exactly which agent called which tool, with what arguments, and what Slopshop returned.
Run Slopshop on your own infrastructure. Works with air-gapped CrewAI deployments and private LLM backends.