Single-script benchmark suite for fairvisor/edge with reproducible latency and throughput tests. The primary workflow is now a local controller that orchestrates two remote Linux hosts:
- one remote Fairvisor host
- one remote k6 load-generator host
Script: run-all.sh
`run-all.sh` runs 6 scenarios:

- Raw nginx baseline latency (`:8082`)
- Fairvisor `decision_service` latency (POST `/v1/decision`)
- Fairvisor `reverse_proxy` latency
- Max throughput: simple policy (1 rule)
- Max throughput: complex policy (5 rules + JWT + loop detection)
- Max throughput: LLM token estimation policy (`token_bucket_llm`)
Each run prints a summary table and stores raw artifacts.
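The summary table is rendered from the raw k6 JSON. As a sketch of how percentiles can be pulled out with jq (the metric keys shown match what k6's `--summary-export` emits for trend metrics, but treat the exact shape as an assumption for your k6 version):

```bash
# Write a tiny example summary in the assumed k6 --summary-export shape.
cat > /tmp/fv-summary.json <<'EOF'
{"metrics":{"http_req_duration":{"p(50)":0.112,"p(90)":0.191,"p(99)":0.426}}}
EOF

# Extract two percentiles into a one-line summary.
line=$(jq -r '.metrics.http_req_duration
              | "p50=\(.["p(50)"])ms p99=\(.["p(99)"])ms"' /tmp/fv-summary.json)
echo "$line"   # p50=0.112ms p99=0.426ms
```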
- Designed for Linux hosts (Amazon Linux / Ubuntu)
- Installs dependencies automatically:
  - OpenResty
  - k6
  - jq, bc, git, python3, pip3
- Clones https://github.com/fairvisor/edge into `/opt/fairvisor`
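Dependency installation can be kept idempotent with a simple presence check. A minimal sketch (the `need` helper is illustrative, not the script's actual package logic):

```bash
# Only report tools that are missing from PATH; installed tools are skipped.
need() { command -v "$1" >/dev/null 2>&1 || echo "would install: $1"; }

msg=$(need no-such-tool-xyz)
echo "$msg"   # would install: no-such-tool-xyz
```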
Run from your local machine:
```bash
FAIRVISOR_REMOTE=ubuntu@fairvisor-host \
LOADGEN_REMOTE=ubuntu@loadgen-host \
FAIRVISOR_TARGET_HOST=10.0.0.42 \
bash run-all.sh
```

Meaning of the variables:

- `FAIRVISOR_REMOTE`: SSH target for the Fairvisor host
- `LOADGEN_REMOTE`: SSH target for the k6 load-generator host
- `FAIRVISOR_TARGET_HOST`: IP or DNS name that the load-generator host should use for HTTP traffic to Fairvisor
- `SSH_OPTS`: optional extra `ssh`/`scp` flags used by the controller, for example `-o StrictHostKeyChecking=accept-new`
What the controller does:
- copies `run-all.sh` to both hosts under `/tmp/fv-bench/`
- installs only the dependencies each role needs
- starts/stops baseline nginx, backend nginx, and Fairvisor on the Fairvisor host
- generates and runs k6 workloads on the load-generator host
- pulls raw k6 result JSON files back to the local controller for summary rendering
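A minimal sketch of that copy/run/pull cycle, using a hypothetical `run_remote` helper that mirrors the `DRY_RUN` behavior (with `DRY_RUN=1` it only prints the commands instead of executing them):

```bash
# Hypothetical wrapper: echo the command in dry-run mode, execute otherwise.
run_remote() {
  if [ "${DRY_RUN:-0}" = "1" ]; then echo "DRY: $*"; else "$@"; fi
}

DRY_RUN=1
LOADGEN_REMOTE=ubuntu@loadgen-host

# Copy the script to the load-generator host, then prepare its results dir.
out=$(run_remote scp run-all.sh "$LOADGEN_REMOTE:/tmp/fv-bench/run-all.sh")
echo "$out"   # DRY: scp run-all.sh ubuntu@loadgen-host:/tmp/fv-bench/run-all.sh
run_remote ssh "$LOADGEN_REMOTE" 'mkdir -p /tmp/fv-bench/results'
```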
FAIRVISOR_TARGET_HOST is optional if the SSH hostname of FAIRVISOR_REMOTE is also the correct network address from the load-generator host.
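That fallback can be sketched with plain parameter expansion (illustrative; the script's actual handling may differ):

```bash
FAIRVISOR_REMOTE="ubuntu@fairvisor-host"
FAIRVISOR_TARGET_HOST=""   # not provided by the user

# Strip the user@ prefix to reuse the SSH hostname as the HTTP target.
FAIRVISOR_TARGET_HOST="${FAIRVISOR_TARGET_HOST:-${FAIRVISOR_REMOTE#*@}}"
echo "$FAIRVISOR_TARGET_HOST"   # fairvisor-host
```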
On a first-time connection, prefer setting SSH_OPTS so the controller does not block on host key confirmation:
```bash
SSH_OPTS="-o StrictHostKeyChecking=accept-new" \
FAIRVISOR_REMOTE=ubuntu@fairvisor-host \
LOADGEN_REMOTE=ubuntu@loadgen-host \
bash run-all.sh
```

For quick local validation or development:

```bash
bash run-all.sh
```

This keeps the old single-host behavior where Fairvisor and k6 share one machine.
To validate orchestration commands without touching real hosts:
```bash
DRY_RUN=1 \
FAIRVISOR_REMOTE=ubuntu@fairvisor-host \
LOADGEN_REMOTE=ubuntu@loadgen-host \
FAIRVISOR_TARGET_HOST=10.0.0.42 \
bash run-all.sh
```

For setups where the SUT and k6 share one host, optional `taskset` pinning is supported. In remote-controller mode, pinning is applied independently on whichever host is running that role.
- `ORESTY_CPUSET` for OpenResty/backend processes
- `K6_CPUSET` for k6
Example:
```bash
ORESTY_CPUSET=0-3 K6_CPUSET=4-7 bash run-all.sh
```

If `taskset` is available and the host has >= 8 cores, the defaults are auto-split 50/50.
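The 50/50 auto-split amounts to the following (assumed logic, shown with an example core count; the script detects the real value at runtime):

```bash
ncpu=8   # example core count
half=$((ncpu / 2))

# Lower half of the cores for OpenResty, upper half for k6.
ORESTY_CPUSET="0-$((half - 1))"
K6_CPUSET="${half}-$((ncpu - 1))"
echo "$ORESTY_CPUSET $K6_CPUSET"   # 0-3 4-7
```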
On the load-generator host:
- Raw k6 summaries: `/tmp/fv-bench/results/`
- Generated k6 scripts: `/tmp/fv-bench/scripts/`
On the Fairvisor host:
- Service logs: `/tmp/fv-bench/run/*/logs/`
On the local controller:
- Pulled result JSONs: `/tmp/fv-bench/controller-results/`
Measured on AWS c7i.2xlarge (8 vCPU, 16 GB RAM), Ubuntu 24.04.3 LTS. k6 v0.54.0, constant-arrival-rate, 10 000 RPS / 60 s / 10 s warmup. CPU pinning: OpenResty on cores 0–3, k6 on cores 4–7.
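As a sketch, the constant-arrival-rate workload above could be generated roughly like this. The heredoc mirrors what the controller writes to `/tmp/fv-bench/scripts/`; the executor option names are standard k6, but the file name, VU pool size, and target injection are illustrative, and the 10 s warmup is not shown:

```bash
mkdir -p /tmp/fv-bench/scripts
cat > /tmp/fv-bench/scripts/latency.js <<'EOF'
import http from 'k6/http';

export const options = {
  scenarios: {
    latency: {
      executor: 'constant-arrival-rate',
      rate: 10000,            // 10 000 RPS, as in the reference run
      timeUnit: '1s',
      duration: '60s',
      preAllocatedVUs: 200,   // illustrative VU pool size
    },
  },
};

export default function () {
  // Target host injected by the controller (illustrative).
  http.post(`http://${__ENV.TARGET}/v1/decision`);
}
EOF
```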
| Percentile | Decision Service | Reverse Proxy | Raw nginx |
|---|---|---|---|
| p50 | 112 μs | 241 μs | 71 μs |
| p90 | 191 μs | 376 μs | 190 μs |
| p99 | 426 μs | 822 μs | 446 μs |
| p99.9 | 2 990 μs | 2 980 μs | 1 610 μs |
| Configuration | RPS |
|---|---|
| Simple rate limit (1 rule) | 110 500 |
| Complex policy (5 rules, JWT + loop detect) | 67 600 |
| Token estimation (token_bucket_llm) | 49 400 |
Your numbers will vary by instance type and OS. Use `results/reference.json` to compare programmatically.
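A sketch of such a programmatic comparison with jq (the `p99_us` field name and file shape are assumptions about `reference.json`, and the 20% threshold is illustrative):

```bash
# Example reference file in the assumed shape.
cat > /tmp/reference.json <<'EOF'
{"decision_service":{"p99_us":426}}
EOF

measured_p99=450   # example measured value, in microseconds
ref_p99=$(jq '.decision_service.p99_us' /tmp/reference.json)

# Flag regressions more than 20% over the reference (integer math in us).
if [ "$measured_p99" -gt $((ref_p99 * 120 / 100)) ]; then
  verdict=REGRESSION
else
  verdict=OK
fi
echo "$verdict"   # OK (450 is within 20% of 426)
```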
`decision_service` and `reverse_proxy` are different paths:

- `decision_service`: decision API only
- `reverse_proxy`: decision + upstream proxy round-trip
- In single-host runs, k6 and SUT can contend for CPU unless pinned or separated.
- The recommended path for production-like proxy latency is the two-host controller mode.
This benchmark suite is intended to be published in:
https://github.com/fairvisor/benchmark