Register Slopshop endpoints in AutoGen's function_map. Your AssistantAgent declares the functions; your UserProxyAgent executes them via Slopshop. Real tools across 82 categories, available to every agent in the conversation.
Free persistent memory. Every result verified.
AutoGen's function_map maps function names to Python callables. Register Slopshop tool wrappers in the UserProxyAgent's function_map; the AssistantAgent declares the matching schemas via llm_config["functions"].
Install the pyautogen package. Slopshop tools are plain Python callables — no extra dependencies beyond requests.
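Concretely, that is a two-package install (requests ships separately from pyautogen):

```shell
pip install pyautogen requests
```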
Create Python functions that call Slopshop's REST API. Register them in a function_map dict keyed by tool name.
The AssistantAgent gets tool schemas in llm_config["functions"]. The UserProxyAgent gets the function_map to execute them when called.
A complete AutoGen setup. The AssistantAgent gets tool schemas via llm_config. The UserProxyAgent gets the function_map — when the assistant calls a function, the proxy agent executes it against Slopshop.
import os
import json
import requests
import autogen

SLOP_KEY = os.environ["SLOPSHOP_API_KEY"]
SLOP_URL = "https://slopshop.gg"
OPENAI_KEY = os.environ["OPENAI_API_KEY"]

# Build function schemas from Slopshop's OpenAPI spec
def get_function_schemas(slugs: list[str]) -> list[dict]:
    schema = requests.get(
        f"{SLOP_URL}/v1/openapi.json",
        headers={"Authorization": f"Bearer {SLOP_KEY}"},
    ).json()
    functions = []
    for slug in slugs:
        path = schema["paths"].get(f"/v1/{slug}", {}).get("post", {})
        if not path:
            continue
        params = path["requestBody"]["content"]["application/json"]["schema"]
        functions.append({
            "name": slug,
            "description": path.get("summary", ""),
            "parameters": params,
        })
    return functions

# Build function_map: name -> callable that hits Slopshop
def make_slopshop_fn(slug: str):
    def fn(**kwargs) -> str:
        res = requests.post(
            f"{SLOP_URL}/v1/{slug}",
            json={k: v for k, v in kwargs.items() if v is not None},
            headers={"Authorization": f"Bearer {SLOP_KEY}"},
        )
        return json.dumps(res.json())
    fn.__name__ = slug
    return fn

# Tool slugs to expose
TOOL_SLUGS = [
    "dns-lookup", "ssl-check", "http-request",
    "whois", "hash", "memory-set", "memory-get",
]

function_schemas = get_function_schemas(TOOL_SLUGS)
function_map = {slug: make_slopshop_fn(slug) for slug in TOOL_SLUGS}

# LLM config for AssistantAgent — includes function schemas
llm_config = {
    "config_list": [{"model": "gpt-4o", "api_key": OPENAI_KEY}],
    "functions": function_schemas,
    "timeout": 120,
}

# AssistantAgent — knows about the tools, chooses when to call them
assistant = autogen.AssistantAgent(
    name="SlopshopAssistant",
    llm_config=llm_config,
    system_message=(
        "You are an infrastructure analyst. Use the available tools to "
        "gather real data about domains and services. Always verify "
        "results with real API calls."
    ),
)

# UserProxyAgent — executes function calls via Slopshop
user_proxy = autogen.UserProxyAgent(
    name="SlopshopExecutor",
    human_input_mode="NEVER",
    max_consecutive_auto_reply=10,
    is_termination_msg=lambda x: "TERMINATE" in (x.get("content") or ""),
    function_map=function_map,
    code_execution_config=False,  # Slopshop handles execution, not local code
)

# Start the conversation
user_proxy.initiate_chat(
    assistant,
    message=(
        "Audit slopshop.gg: check DNS records, SSL certificate validity, "
        "HTTP response headers, and WHOIS data. Save a summary to memory "
        "key 'audit-result'. Reply TERMINATE when done."
    ),
)
AutoGen coordinates multi-agent conversations. Slopshop provides the execution substrate — real tools across 82 categories that never hallucinate their results.
DNS, SSL, HTTP, crypto, hashing, code execution, data transforms. Register any subset in your function_map. All results verified.
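Registering a subset is just a smaller slug list. A sketch, using a placeholder executor in place of the real Slopshop wrapper so it runs standalone:

```python
TOOL_SLUGS = ["dns-lookup", "ssl-check", "http-request", "whois",
              "hash", "memory-set", "memory-get"]

# Give this agent pair network-inspection tools only
SUBSET = ["dns-lookup", "ssl-check", "http-request"]
assert set(SUBSET) <= set(TOOL_SLUGS)

# Placeholder executor standing in for the real Slopshop HTTP wrapper
def make_tool(slug):
    def fn(**kwargs):
        return f"{slug} called with {kwargs}"
    fn.__name__ = slug
    return fn

function_map = {slug: make_tool(slug) for slug in SUBSET}
```

The AssistantAgent's llm_config["functions"] should be filtered to the same subset, so the model never requests a tool the proxy cannot execute.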
AutoGen agents can write to and read from Slopshop's key-value store at zero credit cost. Share state across agent turns without a database.
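A sketch of cross-turn state sharing. The memory-set and memory-get slugs come from this page, but the key/value payload shape is an assumption, so it is isolated behind two helpers:

```python
import json

# Hypothetical payload shape for memory-set / memory-get; the real
# request schema comes from Slopshop's OpenAPI spec.
def save_state(call_tool, key: str, value: dict) -> str:
    return call_tool("memory-set", key=key, value=json.dumps(value))

def load_state(call_tool, key: str) -> dict:
    raw = call_tool("memory-get", key=key)
    return json.loads(raw)

# A thin dispatcher over the same function_map handed to the UserProxyAgent
def make_dispatcher(function_map):
    def call_tool(slug, **kwargs):
        return function_map[slug](**kwargs)
    return call_tool
```

One agent turn calls save_state after an audit; a later turn (or a different agent in the same group chat) calls load_state with the same key, with no local database involved.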
Each Slopshop endpoint maps to a single Python callable. Clean separation between the LLM deciding what to call and Slopshop executing it.
Audit logs per tool call. See every function the AssistantAgent requested, with Slopshop's real response — not simulated output.
Run Slopshop on your own infrastructure alongside AutoGen. Works with private Azure OpenAI deployments and on-premises setups.
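For a self-hosted setup, only two things change from the example above: the Slopshop base URL and the llm_config entry. A sketch, where the SLOPSHOP_URL override variable is a hypothetical name, not an official one:

```python
import os

# Point the tool wrappers at a self-hosted Slopshop instance;
# SLOPSHOP_URL is a hypothetical environment override, not an official name.
SLOP_URL = os.environ.get("SLOPSHOP_URL", "https://slopshop.gg")

# Azure OpenAI entry for llm_config["config_list"], using the standard
# pyautogen fields for Azure deployments (endpoint/key values are examples)
azure_entry = {
    "model": "gpt-4o",
    "api_type": "azure",
    "base_url": os.environ.get("AZURE_OPENAI_ENDPOINT", "https://example.openai.azure.com"),
    "api_key": os.environ.get("AZURE_OPENAI_API_KEY", ""),
    "api_version": "2024-02-15-preview",
}
```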