coxec is a concurrent execution engine with built-in protocol clients, templates, pipeline chaining, and native AI agent support.
- Run N executions with C concurrent workers.
- Timing Control: Set individual (`--timeout`) and global (`--global-timeout`) limits.
- Traffic Shaping: Stagger worker starts with `--delay`, add `--jitter`, or use `--rampup`.
- Throttling: Enforce maximum execution rates with `--rate` (e.g., `50/s`, `100/m`).
- Server Mode: Start `coxec` as an HTTP execution service with `-s`/`--server`.
- Powerful Go template support in all execution modes.
- Generate random test data (names, emails, phones, numbers).
- Drive iterations from data files or sequences.
- Per-execution verbose logging (duration, exit status) to stderr.
- Stable per-run env var: `COXEC_INDEX=1..N`.
- Exit codes that distinguish all-failed, partially-failed, and timeout (124) runs.
Build from source:

```shell
make build
./bin/coxec --version
```

Or install with `go install`:

```shell
go install github.com/0funct0ry/coxec@latest
coxec --version
```

One of `-e/--exec`, `-f/--file`, `-t/--template`, or `-s/--server` is required.

```shell
coxec -e 'echo "hello from $COXEC_INDEX"' -c 4 -n 10
```

Run a command 100 times with concurrency 20:

```shell
coxec -e 'curl -fsS https://example.com/health' -c 20 -n 100
```

Run with per-execution timings and exit status (verbose output goes to stderr):

```shell
coxec -e 'sleep 0.1' -c 8 -n 50 -v
```

Execute the contents of a script file repeatedly:

```shell
coxec -f ./test.sh -c 4 -n 12
```

Suppress the child stdout/stderr payload (the summary still prints on stderr):

```shell
coxec -e 'echo hi' -c 4 -n 20 --silent
```

Hide the summary/diagnostics by redirecting stderr:

```shell
coxec -e 'echo hi' -c 2 -n 4 2>/dev/null
```

Use `-t` for multi-step execution plans or complex data handling:
```shell
# smoke-test.tmpl
.http GET https://api.example.com/users/{{randInt 1 100}}
  |> .http POST https://api.example.com/logs
     --body {"user_id": "{{.Prev.ID}}", "status": "verified"}
```

```shell
coxec -t smoke-test.tmpl -c 10 -n 100
```

Start `coxec` as a long-running HTTP service:
```shell
coxec --server --port 9000
```

Warning: Starting the server without any authentication flags is insecure and should only be used for local testing. A warning message is logged on startup.
You can secure the /exec endpoint with a Bearer token:
```shell
coxec --server --port 9000 --auth-token "super-secret-token"
```

Enable encryption in transit by providing certificate and key files (PEM format):
```shell
coxec --server --port 8443 --tls-cert cert.pem --tls-key key.pem
```

The server listens on HTTPS when both flags are provided. If either flag is missing, the server fails to start.
coxec supports loading server settings from a YAML file (default: coxec.yaml in the current directory). You can specify a custom path using --config <path>.
Example coxec.yaml:
```yaml
server:
  addr: 0.0.0.0
  port: 9000
  auth-token: "${MY_SECRET_TOKEN}"
  max-concurrent-jobs: 50
  default-concurrency: 5
  default-iterations: 10
  enable-sync: true
  enable-async: true
  enable-webhooks: false
  enable-ws: false
  tls:
    cert: cert.pem
    key: key.pem
```

Precedence Order:
- CLI flags (e.g., `-p 8081`)
- Environment variables (e.g., `COXEC_SERVER_PORT=8082`)
- Configuration file (`coxec.yaml`)
- Internal defaults
Environment variables follow the pattern COXEC_SERVER_<KEY> (e.g., COXEC_SERVER_AUTH_TOKEN, COXEC_SERVER_MAX_CONCURRENT_JOBS).
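The key-to-variable mapping can be sketched as a small helper. This is a sketch based on the stated `COXEC_SERVER_<KEY>` pattern; the exact normalization `coxec` applies is an assumption.

```python
def env_var_name(config_key: str) -> str:
    """Map a server config key (e.g. 'auth-token') to its environment
    variable form per the COXEC_SERVER_<KEY> pattern: dashes become
    underscores, result upper-cased. (Assumed normalization.)"""
    return "COXEC_SERVER_" + config_key.replace("-", "_").upper()

print(env_var_name("auth-token"))           # COXEC_SERVER_AUTH_TOKEN
print(env_var_name("max-concurrent-jobs"))  # COXEC_SERVER_MAX_CONCURRENT_JOBS
```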
The max-concurrent-jobs setting (or --max-concurrent-jobs flag) enforces a global limit on the number of active requests (jobs) the server handles at once. When the limit is reached, new requests receive a 429 Too Many Requests response with a Retry-After: 60 header.
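A client hitting this limit should honor the `Retry-After` header before resubmitting. A minimal client-side sketch; the helper name and fallback value are illustrative, not part of coxec:

```python
def retry_after_seconds(status: int, headers: dict, fallback: float = 60.0):
    """Return how long to wait before retrying a throttled request, or
    None if no retry is needed. Honors the Retry-After header sent with
    429 responses (documented default: 60 seconds)."""
    if status != 429:
        return None
    try:
        return float(headers.get("Retry-After"))
    except (TypeError, ValueError):
        return fallback

print(retry_after_seconds(429, {"Retry-After": "60"}))  # 60.0
print(retry_after_seconds(200, {}))                     # None
```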
Verify the server status using the /health endpoint:
```shell
curl http://localhost:8080/health
```

Response:
```json
{
  "status": "ok",
  "version": "1.0.0",
  "active_jobs": 2,
  "job_store": "memory",
  "uptime_seconds": 3600,
  "features": {
    "sync": true,
    "async": true,
    "webhooks": false,
    "ws": false
  }
}
```

The endpoint returns 503 Service Unavailable if the server is starting up or shutting down.
Trigger a concurrent execution and wait for the full results in the HTTP response using the /exec endpoint:
```shell
curl -X POST http://localhost:8080/exec \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer super-secret-token" \
  -d '{
    "exec": ".http GET https://api.example.com/users/{{.Iteration}}",
    "concurrency": 10,
    "iterations": 100,
    "timeout": "5s"
  }'
```

Response:
```json
{
  "status": "ok",
  "report": {
    "total_executions": 100,
    "success_count": 98,
    "fail_count": 2,
    "timeout_count": 0,
    "total_duration": "105ms",
    "average_latency": "100.5ms",
    "p50_latency": "100.1ms",
    "p90_latency": "100.9ms",
    "p95_latency": "100.9ms",
    "p99_latency": "100.9ms",
    "rate_per_second": 19.0,
    "stdout": ["Line 1", "Line 2"],
    "stderr": ["Summary line 1", "Summary line 2"]
  }
}
```

Submit a job and receive a `job_id` immediately without waiting for it to finish. Use the `/async/exec` endpoint:
```shell
curl -i -X POST http://localhost:8080/async/exec \
  -H "Content-Type: application/json" \
  -H "Idempotency-Key: my-request-123" \
  -d '{
    "exec": ".sleep 10s",
    "iterations": 10
  }'
```

Response (202 Accepted):
```json
{
  "job_id": "550e8400-e29b-41d4-a716-446655440000",
  "status": "queued"
}
```

The `Idempotency-Key` header is supported on `/async/exec` and `/jobs/:name/run` to prevent duplicate job submissions. If a request with the same key is received, the server returns the existing `job_id` with a `200 OK` status, ensuring identical jobs are not executed twice.
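One way to pick a stable `Idempotency-Key` is to derive it from the request payload itself, so accidental resubmissions of the same job reuse the same key. This is a client-side sketch; coxec does not mandate any particular key format:

```python
import hashlib
import json

def idempotency_key(payload: dict) -> str:
    """Derive a stable Idempotency-Key by hashing the payload's
    canonical JSON form. Identical payloads always yield the same key,
    so retried submissions are deduplicated server-side."""
    canonical = json.dumps(payload, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()[:32]

payload = {"exec": ".sleep 10s", "iterations": 10}
key = idempotency_key(payload)
# Dict key order does not affect the derived key:
assert key == idempotency_key({"iterations": 10, "exec": ".sleep 10s"})
print(key)
```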
Monitor and control asynchronous jobs using the /jobs endpoint:
- List All Jobs: `GET /jobs`

  Returns a paginated list of job summaries for all current and recently completed jobs, sorted by submission time (newest first). Only jobs within the configured retention window are shown; running and queued jobs are always included.

  ```shell
  # List all jobs (defaults: limit=50, offset=0)
  curl -s 'http://localhost:8080/jobs' | jq .

  # Paginate: second page of results
  curl -s 'http://localhost:8080/jobs?limit=10&offset=10' | jq .
  ```

  Response:

  ```json
  {
    "jobs": [
      {
        "id": "550e8400-e29b-41d4-a716-446655440000",
        "name": ".sleep 10s",
        "state": "completed",
        "submitted_at": "2026-03-25T08:00:00Z",
        "started_at": "2026-03-25T08:00:00Z",
        "finished_at": "2026-03-25T08:00:10Z"
      }
    ],
    "total": 1,
    "limit": 50,
    "offset": 0,
    "active_jobs": 0
  }
  ```

  | Query Param | Default | Max  | Description                 |
  |-------------|---------|------|-----------------------------|
  | `limit`     | 50      | 1000 | Max jobs to return per page |
  | `offset`    | 0       | —    | Number of jobs to skip      |
- Check Status: `GET /jobs/:id`

  Returns the current state (`queued`, `running`, `completed`, `failed`, `cancelled`), timing metadata, and a summary of execution results once finished.

  ```shell
  curl -s http://localhost:8080/jobs/550e8400-e29b-41d4-a716-446655440000 | jq .
  ```

  Response:

  ```json
  {
    "job_id": "550e8400-e29b-41d4-a716-446655440000",
    "state": "completed",
    "submitted_at": "2026-03-25T08:00:00Z",
    "started_at": "2026-03-25T08:00:00Z",
    "duration": "10.05s",
    "concurrency": 10,
    "iterations_requested": 100,
    "iterations_completed": 100,
    "label": "nightly-cleanup",
    "summary": {
      "success_count": 100,
      "fail_count": 0,
      "total_duration": "10.05s",
      "average_latency": "100ms"
    }
  }
  ```
- Cancel Job: `DELETE /jobs/:id`

  Gracefully stops a running or queued job.

  - Returns 202 Accepted if cancellation is initiated.
  - Returns 409 Conflict if the job is already in a terminal state (`completed`, `failed`, or `cancelled`).
  - Returns 404 Not Found if the job does not exist or has expired.

  In-progress executions are notified of cancellation via context propagation, allowing them to stop gracefully.
- Automatic Cleanup: Finished jobs are automatically removed from memory or disk based on time (`--job-ttl`) and quantity (`--job-history`). By default, `coxec` keeps up to 1000 completed jobs for up to 24 hours using its built-in `JobStore`. The server supports an in-memory store (default), a persistent SQLite store, and a shared Redis store. Cleanup runs in the background every minute.

  ```shell
  # Keep jobs for only 1 hour and at most 100 recent jobs using SQLite persistence
  coxec --server --job-store sqlite --job-store-dsn ./coxec-jobs.db --job-ttl 1h --job-history 100

  # Use a shared Redis store for multi-instance deployments
  coxec --server --job-store redis --job-store-dsn redis://localhost:6379/0
  ```
- Live Streaming (SSE): `GET /jobs/:id/stream`

  Opens a Server-Sent Events (SSE) connection to stream execution results in real time as they complete. The stream continues until the job reaches a terminal state, at which point a final summary event is sent and the connection is closed.

  ```shell
  curl -H "Authorization: Bearer super-secret-token" -N http://localhost:8080/jobs/:id/stream
  ```
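On the client side, the stream can be consumed by accumulating `data:` lines until a blank line ends each event. A minimal parser sketch; it assumes each event body is a single JSON object, which matches the documented result/summary events:

```python
import json

def parse_sse(lines):
    """Parse Server-Sent Events from an iterable of text lines.
    Collects 'data:' fields until a blank line terminates the event,
    then yields the decoded JSON payload."""
    buf = []
    for line in lines:
        if line.startswith("data:"):
            buf.append(line[5:].strip())
        elif line.strip() == "" and buf:
            yield json.loads("\n".join(buf))
            buf = []

stream = [
    'data: {"type": "result", "data": {"index": 1, "status": "success"}}',
    "",
    'data: {"type": "done", "data": {"state": "completed"}}',
    "",
]
events = list(parse_sse(stream))
print(events[0]["type"], events[1]["type"])  # result done
```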
- Interactive Sessions (WebSocket): `GET /ws?job_id=:id`

  Establish a bidirectional WebSocket connection for live monitoring and job control.

  ```shell
  # Enable with the -w flag on server start
  websocat "ws://localhost:8080/ws?job_id=..."
  ```

  Bidirectional Features:

  - Live Results: Receive structured JSON events (`type: "result"` and `type: "done"`) as tasks complete.
  - Cancel Action: Send `{"action": "cancel"}` to immediately terminate a running or queued job.
  - Auto-Close: The server automatically closes the connection once the final `"done"` event is delivered.

  Events:

  - `result`: Emitted for each completed execution.

    ```json
    { "type": "result", "data": { "index": 1, "worker_id": 0, "status": "success", "duration": "100.5ms", "status_code": 200 } }
    ```

  - `done`: Emitted when the job is finished.

    ```json
    { "type": "done", "data": { "job_id": "...", "state": "completed", "summary": { ... } } }
    ```
- Final Report: `GET /jobs/:id/report`

  Returns a comprehensive, structured JSON report for a completed, failed, or cancelled job. This includes success rates, latency percentiles (min, p50, p75, p90, p95, p99, max), and an error breakdown by type and message.

  - Returns 425 Too Early if the job is still in progress (`queued` or `running`).
  - Returns 404 Not Found if the job does not exist or has expired.

  ```shell
  curl -s http://localhost:8080/jobs/550e8400-e29b-41d4-a716-446655440000/report | jq .
  ```

  Response:

  ```json
  {
    "job_id": "550e8400-e29b-41d4-a716-446655440000",
    "status": "partial",
    "duration": "10.05s",
    "concurrency": 10,
    "iterations": { "requested": 100, "completed": 100 },
    "counts": { "success": 95, "failure": 5, "retry": 0 },
    "latencies": { "min": "50ms", "p50": "100ms", "p75": "120ms", "p90": "150ms", "p95": "180ms", "p99": "200ms", "max": "250ms" },
    "errors": [
      { "type": "HTTP", "message": "500 Internal Server Error", "count": 3 },
      { "type": "TCP", "message": "connection refused", "count": 2 }
    ]
  }
  ```
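The percentile fields can be reproduced from raw per-execution durations. A sketch using the common nearest-rank method; coxec's exact interpolation scheme is not documented, so treat this as an approximation:

```python
def percentile(sorted_ms, p):
    """Nearest-rank percentile over a sorted list of latencies (ms),
    with p in [0, 100]. Mirrors the report's p50/p90/... fields; the
    exact interpolation coxec uses is assumed, not documented."""
    if not sorted_ms:
        raise ValueError("no samples")
    k = max(0, min(len(sorted_ms) - 1, round(p / 100 * (len(sorted_ms) - 1))))
    return sorted_ms[k]

samples = sorted([50, 100, 100, 120, 150, 180, 200, 250])
print(percentile(samples, 50))   # 150
print(percentile(samples, 100))  # 250
```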
- Webhook Deliveries: Automatically POST details about completed jobs to an external URL.

  ```shell
  curl -X POST http://localhost:8080/async/exec \
    -d '{
      "exec": ".sleep 1s",
      "callback_url": "https://my-ci.example.com/webhooks/coxec",
      "callback_headers": { "X-Secret": "my-shared-secret" }
    }'
  ```

  The server sends a JSON POST request to the `callback_url` when the job finishes (reaches a terminal state). The payload is identical to the `GET /jobs/:id` response. Delivery features include:

  - Automatic Retries: Retries delivery up to `COXEC_SERVER_CALLBACK_RETRY` times (default: 3) with exponential backoff (1s, 2s, 4s, ...).
  - HTTPS Requirement: By default, only HTTPS URLs are allowed. HTTP is permitted only if an explicit `--callback-allow-list` is provided and the target IP matches. For local testing, you can bypass this requirement entirely with the `-k, --callback-allow-insecure` flag.
  - CIDR Allow-list: Restrict callback URLs to specific network ranges with `--callback-allow-list`.
  - Custom Headers: Pass authentication tokens or correlation IDs via `callback_headers`.
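The documented retry schedule (1s, 2s, 4s, ...) is plain exponential backoff, which can be sketched as:

```python
def backoff_schedule(retries: int, base: float = 1.0) -> list:
    """Delays (in seconds) before each webhook delivery retry:
    base * 2^attempt, matching the documented 1s, 2s, 4s progression
    for the default of 3 retries."""
    return [base * (2 ** attempt) for attempt in range(retries)]

print(backoff_schedule(3))  # [1.0, 2.0, 4.0]
```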
- `exec`: (Required) The command string to execute.
- `concurrency`: (Optional) Maximum number of concurrent executions. Overrides the value set at server startup (max: 1,000).
- `iterations`: (Optional) Total number of executions. Overrides the value set at server startup (max: 10,000,000).
- `timeout`: (Optional) Timeout for each execution (e.g., "5s", "100ms").
- `rate`: (Optional) Maximum execution rate (e.g., "10/s").
- `vars`: (Optional) Map of user-defined variables for templates.
- `delay`: (Optional) Constant delay between iterations.
- `jitter`: (Optional) Random jitter added to the delay.
- `rampup`: (Optional) Duration over which to linearly increase concurrency.
- `verbose`: (Optional) If true, returns detailed per-execution results in the `details` field.
- `label`: (Optional) A descriptive name for the job, returned in status responses.
The endpoint returns 400 Bad Request for invalid payloads (e.g. non-positive values, or exceeding maximum bounds) and 500 Internal Server Error if execution fails catastrophically.
- Flexible Requests: Supports both JSON and form-encoded (`application/x-www-form-urlencoded`) payloads. You can use `curl -d "exec=echo hi"` without extra headers.
- Smart Responses: Automatically returns human-readable plain text (matching CLI output) if you don't explicitly request JSON via the `Accept` header.
- Header-less JSON: If you send a JSON payload via `curl -d`, the server automatically detects and parses it as JSON even without the `Content-Type: application/json` header.
- Structured `exec` Objects: To avoid escaping hell with complex JSON payloads (like in `.http`), you can pass a structured JSON object for the `exec` field. The server automatically quotes and formats it for the engine:

  ```json
  "exec": {
    "client": ".http",
    "method": "POST",
    "url": "http://localhost:9090/post",
    "body": { "id": "123", "status": "active" }
  }
  ```
- `-e, --exec string`: Shell command to execute repeatedly.
- `-f, --file string`: Path to a file whose contents will be executed repeatedly.
- `-t, --template string`: Path to a Go template file defining the execution plan.
- `-s, --server`: Start `coxec` in server mode.
- `-a, --addr string`: Bind address for the server (default: `127.0.0.1`).
- `-p, --port int`: Port to listen on (default: `8080`).
- `--auth-token string`: Bearer token required for server API requests (except `/health`).
- `--auth-basic string`: Basic auth credentials in `user:pass` format required for server API requests (except `/health`).
- `--auth-hmac-secret string`: HMAC secret for verifying `X-Signature: sha256=<hex>` headers on server API requests (except `/health`).
- `--tls-cert string`: Path to TLS certificate file (PEM format).
- `--tls-key string`: Path to TLS private key file (PEM format).
- `--config string`: Path to configuration file (e.g., `coxec.yaml`).
- `--max-concurrent-jobs int`: Maximum number of concurrent jobs (requests) allowed globally.
- `--enable-sync bool`: Enable synchronous execution mode (default: `true`).
- `--enable-async bool`: Enable asynchronous execution mode (default: `true`).
- `--job-ttl duration`: Time to retain finished jobs in memory or on disk (default: `24h`).
- `--job-history int`: Maximum number of completed jobs to retain (default: `1000`).
- `--job-store string`: Job store backend: `memory` (default), `sqlite`, or `redis`.
- `--job-store-dsn string`: Data source name for the job store (e.g., `./coxec-jobs.db` or `redis://localhost:6379/0`).
- `-w, --enable-ws`: Enable WebSocket endpoint for live job monitoring and control (default: `false`).
- `-i, --ws-ping-interval duration`: Interval between server pings to WebSocket clients (default: `30s`).
- `-M, --ws-max-clients int`: Maximum concurrent WebSocket connections (default: `50`).
- `-W, --enable-webhooks`: Enable background webhook delivery (default: `false`).
- `-T, --callback-timeout duration`: Timeout for webhook HTTP requests (default: `10s`).
- `-R, --callback-retry int`: Number of delivery retries with exponential backoff (default: `3`).
- `-L, --callback-allow-list strings`: CIDR ranges allowed for callback URLs (e.g., `10.0.0.0/8,192.168.1.0/24`).
- `-k, --callback-allow-insecure`: Allow HTTP callback URLs even when no allow-list is provided (local testing only).
- `-c, --concurrency int`: Number of concurrent workers (default: `1`).
- `-n, --iterations int`: Total number of executions (defaults to `--concurrency`).
- `--rate string`: Maximum execution rate (e.g., `50/s`, `10/m`, `1/h`).
- `--timeout duration`: Max time for a single execution (e.g., `5s`, `500ms`).
- `--global-timeout duration`: Max time for the entire run (e.g., `1h`, `15m`).
- `--delay duration`: Fixed delay between starting each worker/iteration.
- `--jitter duration`: Random variation added to the delay (delay ± jitter).
- `--rampup duration`: Gradually increase concurrency over this period.
- `--var key=value`: Set a user variable available as `{{.Var "key"}}`.
- `-v, --verbose`: Print per-execution timing and status to stderr.
- `--silent`: Suppress the child stdout/stderr payload.
- `--version`: Print the version and exit.
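For `--auth-hmac-secret`, clients need to produce the `X-Signature: sha256=<hex>` header. A plausible client-side signer, assuming the signature is an HMAC-SHA256 over the raw request body; the exact signed material is an assumption, so check the server's verification behavior if it differs:

```python
import hashlib
import hmac

def sign_body(secret: str, body: bytes) -> str:
    """Build an X-Signature header value: 'sha256=' plus the
    hex-encoded HMAC-SHA256 of the raw request body. Assumes the
    server verifies exactly this construction."""
    digest = hmac.new(secret.encode("utf-8"), body, hashlib.sha256).hexdigest()
    return f"sha256={digest}"

body = b'{"exec": "echo hi", "iterations": 4}'
header = sign_body("super-secret-token", body)
assert header.startswith("sha256=") and len(header) == 7 + 64
print(header)
```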
- Each execution runs as `sh -c <command>`.
- For `-f/--file`, `coxec` renders the template then passes the result to `sh -c`.
- For `-t/--template`, `coxec` executes the plan natively (built-ins or shell fallthrough).
All execution flags support Go templates (text/template) with the following context:
- `.Iteration`: 0-based iteration number.
- `.WorkerID`: ID of the worker (goroutine) executing the task.
- `.Timestamp`: Execution start time (RFC3339 with ms).
- `.TimestampUnix`/`Milli`/`Nano`: Numeric timestamps.
- `.UUID`: Unique ID for the execution.
- `{{.Env "KEY"}}`: Get an environment variable.
- `{{.Var "KEY"}}`: Get a user variable from `--var` (falls back to Env).
- `.Prev`: Result object from the previous pipeline step.
- Formatting: `quote` (shell escaping).
- Random Data: `randInt min max`, `randFloat min max dec`, `randString len`, `randChoice "a" "b"`.
- Identity: `randName`, `randEmail`, `randPhone`, `uuid`, `ulid`.
- Data Driven:
  - `seq start end step`: Generate an iteration-based sequence.
  - `counter "name"`: Shared incrementing counter.
  - `fileLine "data.txt"`: Sequential line access (wraps).
  - `fileLineAt "data.txt" 1`: 1-based line access.
Random Load Generation:

```shell
coxec -c 10 -n 100 -e '.http POST https://api.example.com/users --body {"name": "{{randName}}", "email": "{{randEmail}}", "phone": "{{randPhone}}"}'
```

Sequential Batch Processing:

```shell
coxec -c 5 -n 1000 -e '.http PUT https://api.example.com/items/{{fileLine "ids.txt"}} --body {"status": "active"}'
```

Custom Identification:

```shell
coxec -c 20 -n 100 -e 'echo "Starting iteration {{counter \"job\"}} with ID {{ulid}}"'
```

Sequence Generation:

```shell
# Generate requests for pages 10, 15, 20, 25, 30
coxec -n 5 -e 'curl https://api.example.com/search?page={{seq 10 100 5}}'
```

`coxec` provides precise control over how and when executions happen.
Limit throughput to avoid overwhelming target systems:

```shell
# Aim for 50 requests per second across 20 workers
coxec -c 20 -n 1000 --rate 50/s -e '.http GET https://api.example.com'
```

Avoid thundering herds by gradually increasing concurrency:

```shell
# Start 1 worker every 3 seconds until 10 are active
coxec -c 10 -n 100 --rampup 30s -e '.http GET https://api.example.com'
```

Protect against hanging processes or slow APIs:

```shell
# Kill any task taking longer than 2s; stop the whole run after 5m
coxec -n 100 --timeout 2s --global-timeout 5m -e 'sleep 10'
```

Add spacing and jitter between starts for realistic simulations:

```shell
# Space starts by 500ms ± 100ms
coxec -c 5 -n 20 --delay 500ms --jitter 100ms -e 'echo "Starting..."'
```

- Child stdout is written to stdout (unless `--silent`).
- Summary and verbose diagnostics are written to stderr.
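The `--delay`/`--jitter` spacing (delay ± jitter) can be modeled as a uniform draw around the fixed delay. A sketch; the uniform distribution is an assumption suggested by the `±` notation, not a documented guarantee:

```python
import random

def next_delay(delay_s: float, jitter_s: float) -> float:
    """Spacing before the next start: delay ± jitter, drawn uniformly,
    clamped at zero. Models the '500ms ± 100ms' example above."""
    return max(0.0, delay_s + random.uniform(-jitter_s, jitter_s))

random.seed(0)  # deterministic for demonstration only
d = next_delay(0.5, 0.1)
assert 0.4 <= d <= 0.6
print(round(d, 3))
```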
- `0`: all executions succeeded (or `-n 0`).
- `1`: partial failure (some executions failed, some succeeded).
- `2`: all executions failed.
- `64`: CLI validation error (for example, missing `-e`/`-f`).
- `130`: interrupted (Ctrl-C / SIGTERM).
- `10`: other unexpected error.
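In CI scripts, these codes let you distinguish partial from total failure. A small interpreter sketch; the dict and function names are illustrative:

```python
# Meanings taken from the documented exit-code list above.
EXIT_CODES = {
    0: "all executions succeeded",
    1: "partial failure",
    2: "all executions failed",
    64: "CLI validation error",
    130: "interrupted",
    10: "other unexpected error",
}

def describe_exit(code: int) -> str:
    """Human-readable meaning of a coxec exit code."""
    return EXIT_CODES.get(code, f"unknown exit code {code}")

print(describe_exit(1))   # partial failure
print(describe_exit(99))  # unknown exit code 99
```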
```shell
make lint
make test
make build
```

MIT. See LICENSE.