Sequential agent execution is the bottleneck you've accepted as normal. Why run 1,000 scoring jobs one by one when they're independent? Slopshop Army fans out any task to thousands of parallel agents, collects results, verifies them with a Merkle tree, and returns a single aggregated response. No orchestration infrastructure required.
Most agent frameworks are built for sequential reasoning: one agent, one task, one result. But many real problems are embarrassingly parallel: scoring every item in a catalog, running thousands of independent trials, or generating many candidate results and keeping only the best.
Doing any of this yourself means spinning up async workers, managing concurrency, handling partial failures, and aggregating results. That's days of infrastructure work before you've written a single line of actual agent logic.
Specify a task, a count, an optional input list, and a strategy. Army fans out the work, runs agents in parallel, and returns aggregated results with a Merkle root you can use to verify the full result set without downloading every individual output.
```bash
curl -X POST https://slopshop.gg/v1/army/deploy \
  -H "Authorization: Bearer demo_key_slopshop" \
  -H "Content-Type: application/json" \
  -d '{
    "task": "score_quality",
    "count": 100,
    "inputs": [
      {"id": "prod_001", "text": "Ultra-soft 100% cotton t-shirt with reinforced seams"},
      {"id": "prod_002", "text": "T shirt"},
      {"id": "prod_003", "text": "Premium moisture-wicking athletic tee, 4-way stretch fabric"}
    ],
    "strategy": "map",
    "params": {
      "criteria": ["clarity", "completeness", "tone"],
      "output_format": "score_0_to_10"
    }
  }'
```
```json
{
  "ok": true,
  "army_id": "army_4d8f2a",
  "agents_deployed": 100,
  "strategy": "map",
  "results": [
    { "id": "prod_001", "score": 8.4, "breakdown": { "clarity": 9, "completeness": 8, "tone": 8 } },
    { "id": "prod_002", "score": 1.2, "breakdown": { "clarity": 2, "completeness": 1, "tone": 1 } },
    { "id": "prod_003", "score": 9.1, "breakdown": { "clarity": 9, "completeness": 10, "tone": 9 } }
  ],
  "merkle_root": "a3f9c2d8e1b047...",
  "total_time_ms": 412,
  "_engine": "real"
}
```
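Because `map` results come back in input order, consuming the response is a simple pass over `results`. A minimal sketch (the response dict below mirrors the sample above; the `summarize` helper and the 5.0 cutoff are illustrative, not part of the API):

```python
# Summarize a map-strategy response: mean score plus the IDs of
# items that fall below a quality threshold. Field names follow
# the sample response above; the threshold is an arbitrary choice.
from statistics import mean

def summarize(response: dict, threshold: float = 5.0) -> dict:
    results = response["results"]
    return {
        "mean_score": round(mean(r["score"] for r in results), 2),
        "below_threshold": [r["id"] for r in results if r["score"] < threshold],
    }

response = {
    "results": [
        {"id": "prod_001", "score": 8.4},
        {"id": "prod_002", "score": 1.2},
        {"id": "prod_003", "score": 9.1},
    ]
}
print(summarize(response))
# {'mean_score': 6.23, 'below_threshold': ['prod_002']}
```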
Set strategy: "monte_carlo" and Army runs N independent trials with randomized parameters, then returns the distribution of outcomes. Use this for risk modeling, sensitivity analysis, or any problem where you want to understand variance rather than a single answer.
```bash
curl -X POST https://slopshop.gg/v1/army/deploy \
  -H "Authorization: Bearer demo_key_slopshop" \
  -d '{
    "task": "revenue_projection",
    "count": 5000,
    "strategy": "monte_carlo",
    "params": {
      "base_price": 49.99,
      "price_variance": 0.15,
      "demand_elasticity": -1.2,
      "market_size": 50000,
      "market_variance": 0.25
    }
  }'
# Returns: distribution stats (p5, p25, p50, p75, p95),
# histogram data, and full trial results with merkle_root
```
```json
{
  "ok": true,
  "strategy": "monte_carlo",
  "trials": 5000,
  "distribution": {
    "p5": 1820400,
    "p25": 2104000,
    "p50": 2389500,
    "p75": 2741000,
    "p95": 3218000
  },
  "mean": 2402100,
  "std_dev": 421800,
  "merkle_root": "b7e3a1f0c9d2...",
  "_engine": "real"
}
```
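You can sanity-check a hosted distribution against a local simulation. The trial model below is an assumption, since the docs don't specify how Army samples prices and demand: Gaussian noise on price and market size, with a constant-elasticity demand response.

```python
# Local Monte Carlo sketch of the revenue projection above.
# The sampling model is an assumption: Gaussian noise on price
# and market size, constant-elasticity demand response.
import random

def run_trials(n, base_price, price_variance, demand_elasticity,
               market_size, market_variance, seed=0):
    rng = random.Random(seed)
    revenues = []
    for _ in range(n):
        price = base_price * (1 + rng.gauss(0, price_variance))
        # Clamp so a deep-tail draw can't produce a negative market.
        market = market_size * max(0.0, 1 + rng.gauss(0, market_variance))
        # Constant-elasticity demand: relative price up -> demand down.
        demand = market * (price / base_price) ** demand_elasticity
        revenues.append(price * demand)
    return sorted(revenues)

trials = run_trials(5000, 49.99, 0.15, -1.2, 50000, 0.25)

def percentile(p):
    return trials[int(p / 100 * (len(trials) - 1))]

print({"p5": percentile(5), "p50": percentile(50), "p95": percentile(95)})
```

The exact numbers won't match the hosted run (different RNG, possibly a different model), but the shape and spread of the distribution should be comparable.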
Army supports four execution strategies. Pick the one that matches your problem:
- **map**: Apply the same task to each item in an input list independently. One agent per item; results are returned in the same order as the inputs.
- **monte_carlo**: Run N independent trials of a stochastic task with randomized parameters. Returns the full result distribution: mean, percentiles, standard deviation, histogram.
- **ensemble**: Run the same single task N times independently, then aggregate by majority vote or averaging. Reduces variance for high-stakes decisions.
- **Competitive elimination**: Each agent produces a candidate result, then a judging pass eliminates weaker candidates in bracket rounds. Returns the winning result with its elimination path.
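The ensemble reductions are easy to picture client-side: majority vote for categorical outputs, averaging for numeric ones. A sketch (function names are illustrative, not part of the API):

```python
# Client-side view of ensemble aggregation: majority vote for
# categorical outputs, mean for numeric ones.
from collections import Counter
from statistics import mean

def aggregate_vote(candidates):
    # Most common answer wins; Counter breaks ties by first seen.
    return Counter(candidates).most_common(1)[0][0]

def aggregate_average(candidates):
    return mean(candidates)

print(aggregate_vote(["approve", "approve", "reject"]))  # approve
print(aggregate_average([8.4, 8.9, 8.2]))                # 8.5
```

Voting suits discrete decisions (approve/reject, category labels); averaging suits scores, where independent runs cancel out each other's noise.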
Every Army response includes a merkle_root: the root of a Merkle tree built from hashes of all individual agent results. Use it to prove that any single result was part of the original run, or to spot-check the full result set, without downloading every individual output.
```bash
curl -X POST https://slopshop.gg/v1/army/army_4d8f2a/verify \
  -H "Authorization: Bearer demo_key_slopshop" \
  -d '{
    "result_id": "prod_001",
    "merkle_root": "a3f9c2d8e1b047..."
  }'
# Returns: { "included": true, "proof": [...], "leaf_hash": "..." }
```
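If you'd rather check an inclusion proof client-side, verification is a standard Merkle fold: hash the leaf, combine with each sibling hash up the tree, and compare against the root. This sketch assumes SHA-256 and plain left/right concatenation; Army's actual leaf encoding and hash construction aren't specified here, so treat those details as assumptions.

```python
# Merkle tree build + inclusion-proof verification sketch.
# Assumes SHA-256 and simple concatenation; odd levels duplicate
# their last node. Army's real encoding may differ.
import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(leaves):
    """All levels of the tree; levels[0] = leaf hashes, levels[-1][0] = root."""
    level = [_h(leaf) for leaf in leaves]
    levels = [level]
    while len(level) > 1:
        if len(level) % 2:                 # odd level: duplicate the last node
            level = level + [level[-1]]
        level = [_h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def proof_for(levels, index):
    """Sibling hashes from leaf to root, each tagged with whether it sits on the left."""
    proof = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        sibling = index ^ 1
        proof.append((level[sibling], sibling % 2 == 0))
        index //= 2
    return proof

def verify(leaf: bytes, proof, root: bytes) -> bool:
    """Fold the proof from the leaf up and compare against the root."""
    node = _h(leaf)
    for sibling, sibling_is_left in proof:
        node = _h(sibling + node) if sibling_is_left else _h(node + sibling)
    return node == root

results = [b"prod_001:8.4", b"prod_002:1.2", b"prod_003:9.1"]
levels = build_tree(results)
root = levels[-1][0]
print(verify(results[0], proof_for(levels, 0), root))  # True
```

The point of the proof is that it is O(log n) in the number of agents: verifying one result out of 5,000 takes about 13 hashes, not 5,000 downloads.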
| Tier | Max agents per call | Max input list size | Strategies |
|---|---|---|---|
| Free (500cr) | 100 | 100 items | map, ensemble |
| Standard | 1,000 | 1,000 items | All strategies |
| Pro | 1,000 | 1,000 items | All + custom strategies |
Army charges per agent deployed, not per call. Standard task agents cost 2 credits each; Monte Carlo trials cost 1 credit each (lighter compute). You get 500 free credits on signup, enough for 250 standard agent runs or 500 Monte Carlo trials to validate your use case. See full pricing.
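Since cost is just agents × per-agent rate, you can estimate a deploy before sending it. A tiny helper (rates are the ones stated above; the function itself is illustrative):

```python
# Estimate credit cost before deploying: 1 credit per Monte Carlo
# trial, 2 credits per agent for everything else (per the pricing
# above). The helper name is illustrative, not part of the API.
COST_PER_AGENT = {"monte_carlo": 1}   # all other strategies: 2 credits

def estimate_cost(count: int, strategy: str) -> int:
    return count * COST_PER_AGENT.get(strategy, 2)

print(estimate_cost(100, "map"))           # 200
print(estimate_cost(5000, "monte_carlo"))  # 5000
```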
100 parallel agents free. 500 credits on signup. Merkle verification included.