Merged
Commits
27 commits
689a8c9
test(smoke): scenario helpers; drop warp-gap wait
lrubasze Apr 29, 2026
0d5af48
style(e2e): apply cargo +nightly fmt
lrubasze Apr 29, 2026
13b1276
test(snapshots): add generator binary for cold/warm artifacts
lrubasze Apr 29, 2026
37f2b0e
test(smoke): wire cold/warm scenarios
lrubasze Apr 29, 2026
845d308
test(snapshots): split spec/snapshot blocks; resume from existing DB
lrubasze Apr 30, 2026
dd406e3
test(snapshots): exclude keystore/ when tarring node DBs
lrubasze May 1, 2026
4d07bc8
test(smoke): wire cold/warm against GCS-resolved artifacts
lrubasze May 1, 2026
7730e5a
test(smoke): rename smoke -> smoke_fresh for symmetry with cold/warm
lrubasze May 1, 2026
15e1c3f
test(snapshots): bundle artifacts; add lightSyncState specs
lrubasze May 1, 2026
d6fbcaa
test(smoke): timestamp JS log lines
lrubasze May 1, 2026
8816ac5
test(snapshots): dump smoldot before tarring node DBs
lrubasze May 1, 2026
17c58f1
test(smoke): floor check via chainHead_v1, not legacy chain_getFinali…
lrubasze May 1, 2026
0f07258
test(smoke): rename finalized_floor -> expected_initial_finalized
lrubasze May 1, 2026
cd5262c
test(snapshots): pin v1 BUNDLE_SHA256
lrubasze May 1, 2026
08de99e
refactor(network): replace 3 mode enums with single Scenario enum
lrubasze May 4, 2026
4b2dfe3
refactor(network): share spawned_chain_spec_paths between modules
lrubasze May 5, 2026
cf14344
refactor(snapshots): convert generator from bin to ignored test
lrubasze May 5, 2026
ba2806d
docs(snapshots): align v2 regen example with new defaults
lrubasze May 5, 2026
484e28f
test(snapshots): repin v1 BUNDLE_SHA256 to regenerated bundle
lrubasze May 5, 2026
cd14e07
ci(zombienet): run smoke_cold and smoke_warm
lrubasze May 5, 2026
710fe98
fix(network): resolve para spec path via unique_id, not chain_id
lrubasze May 5, 2026
da60669
test(statement-store): exit JS scripts directly to skip terminate hang
lrubasze May 5, 2026
b2f2f58
fix(network): resolve para spec path using const
lrubasze May 6, 2026
9f74344
Merge remote-tracking branch 'origin/main' into lrubasze/smoldot_more…
lrubasze May 7, 2026
6fde2bb
test(smoke): skip initial newBlock burst when counting parachain blocks
lrubasze May 7, 2026
5f06557
test(smoke): exit JS script directly to skip terminate hang
lrubasze May 7, 2026
a3897fd
Merge branch 'main' into lrubasze/smoldot_more_smoke_tests
lrubasze May 8, 2026
16 changes: 11 additions & 5 deletions .github/workflows/zombienet.yml
@@ -56,16 +56,22 @@ jobs:
fail-fast: false
matrix:
test:
- job-name: "zombienet-smoldot-0000-smoke"
test: "smoke"
- job-name: "zombienet-smoldot-0001-smoke_fresh"
test: "smoke_fresh"
runner-type: "default"
- job-name: "zombienet-smoldot-0001-statement_store_submission"
- job-name: "zombienet-smoldot-0002-smoke_cold"
test: "smoke_cold"
runner-type: "default"
- job-name: "zombienet-smoldot-0003-smoke_warm"
test: "smoke_warm"
runner-type: "default"
- job-name: "zombienet-smoldot-0004-statement_store_submission"
test: "statement_store_submission"
runner-type: "default"
- job-name: "zombienet-smoldot-0002-statement_store_reception"
- job-name: "zombienet-smoldot-0005-statement_store_reception"
test: "statement_store_reception"
runner-type: "default"
- job-name: "zombienet-smoldot-0003-statement_store_peer_connection"
- job-name: "zombienet-smoldot-0006-statement_store_peer_connection"
test: "statement_store_peer_connection"
runner-type: "default"
- job-name: "zombienet-smoldot-0004-statement_store_browser"
101 changes: 101 additions & 0 deletions e2e-tests/docs/smoke-scenarios.md
@@ -0,0 +1,101 @@
# Smoldot smoke-test scenarios

Three smoke tests exercise distinct smoldot startup conditions:

| Test | Network | Smoldot spec | Smoldot DB |
|-----------------|------------------------|-------------------------------------------|---------------------------|
| `smoke_fresh` | spawned from genesis | vanilla | none |
| `smoke_cold` | spawned from snapshot | with `lightSyncState` + `stateRootHash` | none |
| `smoke_warm` | spawned from snapshot | with `lightSyncState` + `stateRootHash` | preloaded `databaseContent` |

Cold/warm both rely on smoldot's real warp sync (the gap from `lightSyncState` to the current head exceeds `warp_sync_minimum_gap=32`), so authority-set rotations along the way are handled by warp-sync proof fragments.

Chain: `westend-local` relay + `people-westend-local` parachain, the same network as `smoke_fresh`, so all three scenarios are directly comparable.
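
The table above boils down to which options each scenario passes to smoldot's `addChain`. A minimal sketch (illustrative helper, not the actual test code — the real wiring in `js/smoke.js` goes through env vars):

```javascript
// Illustrative only: which addChain options each scenario uses.
// `spec` / `lightSpec` / `dbDump` stand in for file contents resolved elsewhere.
function addChainOptions(scenario, { spec, lightSpec, dbDump }) {
  switch (scenario) {
    case "fresh":
      return { chainSpec: spec }; // vanilla spec, sync from genesis
    case "cold":
      return { chainSpec: lightSpec }; // lightSyncState spec, empty cache -> warp sync
    case "warm":
      return { chainSpec: lightSpec, databaseContent: dbDump }; // preloaded dump
    default:
      throw new Error(`unknown scenario: ${scenario}`);
  }
}
```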

## Artifact bundle

Single GCS object per version:

```
gs://zombienet-db-snaps/zombienet/smoldot_smoke_db/{ARTIFACTS_VERSION}/bundle.tar.gz
```

Contains:

- `relaychain-db.tgz`, `parachain-db.tgz` — node DB snapshots; keystore stripped
- `relay-spec.json`, `para-spec.json` — full chain specs (substrate side)
- `relay-spec-lightSyncState.json`, `para-spec-lightSyncState.json` — slim chain specs (smoldot side, `genesis.stateRootHash` instead of `genesis.raw`)
- `smoldot-db/relay.json`, `smoldot-db/para.json` — `chainHead_unstable_finalizedDatabase` dumps for warm
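
The slim specs are derived from the full ones by the Rust generator; the JSON-level change can be sketched as follows (`slimSpec` is a hypothetical helper, and the exact field set in the real specs may differ — only `genesis.raw` → `genesis.stateRootHash` and the attached `lightSyncState` are taken from this doc):

```javascript
// Sketch: turn a full chain spec into the slim smoldot-side variant.
// genesis.raw is dropped in favor of genesis.stateRootHash, lightSyncState
// is attached for warp sync, and bootNodes ship empty (injected per spawn).
function slimSpec(fullSpec, stateRootHash, lightSyncState) {
  const { raw, ...genesisRest } = fullSpec.genesis ?? {};
  return {
    ...fullSpec,
    genesis: { ...genesisRest, stateRootHash },
    lightSyncState,
    bootNodes: [],
  };
}
```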

`ARTIFACTS_VERSION` and `BUNDLE_SHA256` live in `e2e-tests/src/snapshot.rs`. On first use the bundle is downloaded into `~/.cache/smoldot-e2e/{ARTIFACTS_VERSION}/`, SHA-verified, and extracted in place.

For local iteration: `ARTIFACTS_DIR_OVERRIDE=/path/to/dir` skips download/verify and uses files directly from that directory.
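
The resolution order can be sketched like this (a hypothetical JS mirror of the Rust logic in `src/snapshot.rs`; names are illustrative):

```javascript
// Sketch: ARTIFACTS_DIR_OVERRIDE wins and skips download + SHA verification;
// otherwise the versioned per-user cache directory is used.
import * as os from "node:os";
import * as path from "node:path";

function artifactsDir(artifactsVersion, env = process.env) {
  const override = env.ARTIFACTS_DIR_OVERRIDE;
  if (override) return { dir: override, verify: false };
  return {
    dir: path.join(os.homedir(), ".cache", "smoldot-e2e", artifactsVersion),
    verify: true, // download + SHA256 check happen for this path
  };
}
```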

## Regenerating the artifact bundle

Regenerate when:
- Runtime/binary changes invalidate the snapshot DB (genesis hash mismatch or block-format break).
- Adjusting `--target-finalized` / `--spec-at-finalized` to change the warp-sync gap or chain age.
- Upgrading smoldot in a way that changes its `databaseContent` format.

Steps (from `e2e-tests/`):

1. **Bump version** in `src/snapshot.rs`:
```rust
pub const ARTIFACTS_VERSION: &str = "v2"; // or whatever
```

2. **Run the generator test** to produce a fresh bundle. Either start from genesis (~3 h for `TARGET_FINALIZED=2000`) or resume from an existing source DB (~50 min):

```bash
# from genesis:
ZOMBIE_PROVIDER=native \
SMOKE_SNAPSHOT_OUT=/tmp/smoldot-snap-v2 \
SMOKE_SNAPSHOT_TARGET_FINALIZED=2500 \
SMOKE_SNAPSHOT_SPEC_AT_FINALIZED=1250 \
cargo test --release --test smoke_generate_snapshots -- --ignored --nocapture

# or resume:
ZOMBIE_PROVIDER=native \
SMOKE_SNAPSHOT_OUT=/tmp/smoldot-snap-v2 \
SMOKE_SNAPSHOT_TARGET_FINALIZED=2000 \
SMOKE_SNAPSHOT_SPEC_AT_FINALIZED=1525 \
SMOKE_SNAPSHOT_RELAY_DB=/path/to/old/relaychain-db.tgz \
SMOKE_SNAPSHOT_PARA_DB=/path/to/old/parachain-db.tgz \
cargo test --release --test smoke_generate_snapshots -- --ignored --nocapture
```

Required: `ZOMBIE_PROVIDER=native`, polkadot/polkadot-parachain on `PATH`. The module-level docstring in `tests/smoke_generate_snapshots.rs` lists every env var.

The test writes `bundle.tar.gz` under `SMOKE_SNAPSHOT_OUT` and prints the bundle's SHA256 in the manifest at the end.
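
The pinned value is a plain SHA-256 of the bundle bytes, so it can be recomputed independently of the generator manifest, e.g.:

```javascript
// Recompute the bundle digest the same way the pin is checked:
// a hex-encoded SHA-256 over the raw file bytes.
import { createHash } from "node:crypto";

function sha256Hex(bytes) {
  return createHash("sha256").update(bytes).digest("hex");
}

// e.g.: sha256Hex(fs.readFileSync(`${process.env.SMOKE_SNAPSHOT_OUT}/bundle.tar.gz`))
```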

3. **Verify locally** before publishing:

```bash
ARTIFACTS_DIR_OVERRIDE=/tmp/smoldot-snap-v2 cargo test --test smoke_cold -- --nocapture
ARTIFACTS_DIR_OVERRIDE=/tmp/smoldot-snap-v2 cargo test --test smoke_warm -- --nocapture
```

Both must pass. A failure almost always points to the chain spec / runtime version or the `--spec-at-finalized` choice; fix and retry before uploading.

4. **Publish**:
```bash
gsutil cp /tmp/smoldot-snap-v2/bundle.tar.gz \
gs://zombienet-db-snaps/zombienet/smoldot_smoke_db/v2/bundle.tar.gz
```

5. **Pin the new SHA** in `src/snapshot.rs` (copy the value from the generator manifest):
```rust
const BUNDLE_SHA256: &str = "<hash>";
```

6. **CI cache key** invalidates automatically — the workflow's cache step keys on `hashFiles('e2e-tests/src/snapshot.rs')`, so bumping the constant is enough.

7. Commit, open PR, run cold/warm tests in CI to confirm GCS download + extract path works end-to-end.

## Notes on common pitfalls

- **Sibling nodes with identical session keys equivocate.** The generator excludes `keystore/` when tarring; zombienet inserts per-node keys via `author_insertKey` after startup. Don't add keystore back into the snapshot.
- **Same tarball path passed to multiple zombienet nodes** triggers a TOCTOU race in zombienet-provider's `with_db_snapshot` cache. The generator and `network::stage_per_node_snapshots` work around this by copying the tarball once per consuming node.
- **Spec-at-finalized too close to target-finalized**: a gap ≤ 32 makes smoldot follow forward instead of warp-syncing, and forward sync can't traverse GRANDPA authority rotations. The default `M = N/2` (with `N` = `--target-finalized` and `M` = `--spec-at-finalized`) keeps the gap safely above the threshold.
- **Bootnode multiaddrs are per-spawn** (zombienet picks free ports). The committed specs ship with empty `bootNodes`; `network::prepare_runtime_specs` injects current multiaddrs into a runtime copy before handing the spec to smoldot.
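
The warp-sync safety rule from the pitfalls above can be sketched numerically (with `N` = target-finalized and `M` = spec-at-finalized, per the doc's defaults):

```javascript
// Warp sync only engages when the lightSyncState-to-head gap exceeds
// smoldot's warp_sync_minimum_gap of 32; the default M = N/2 leaves a
// wide margin.
const WARP_SYNC_MINIMUM_GAP = 32;

function warpSyncEngages(targetFinalized, specAtFinalized) {
  return targetFinalized - specAtFinalized > WARP_SYNC_MINIMUM_GAP;
}

function defaultSpecAtFinalized(targetFinalized) {
  return Math.floor(targetFinalized / 2); // default M = N/2
}
```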
13 changes: 10 additions & 3 deletions e2e-tests/js/helpers.js
Expand Up @@ -25,7 +25,7 @@ export function createSmoldotClient() {
logCallback: (level, target, message) => {
const labels = { 1: "ERROR", 2: "WARN", 3: "INFO", 4: "DEBUG", 5: "TRACE" };
const label = labels[level] ?? `L${level}`;
console.error(`[${label}] [${target}] ${message}`);
console.error(`[${new Date().toISOString()}] [${label}] [${target}] ${message}`);
},
});
}
@@ -35,6 +35,12 @@ export async function addChainFromSpec(client, specPath, opts = {}) {
return client.addChain({ chainSpec, ...opts });
}

export function readDbContentIfSet(envVar) {
const path = process.env[envVar];
if (!path) return undefined;
return fs.readFileSync(path, "utf8");
}

let nextId = 1;

export function sendRpc(chain, method, params = []) {
@@ -132,10 +138,11 @@ export async function readJsonRpcUntil(chain, predicate, deadlineMs) {

export function report(name, passed, detail) {
const suffix = detail ? `: ${detail}` : "";
const ts = new Date().toISOString();
if (passed) {
console.log(`PASS: ${name}${suffix}`);
console.log(`[${ts}] PASS: ${name}${suffix}`);
} else {
console.log(`FAIL: ${name}${suffix}`);
console.log(`[${ts}] FAIL: ${name}${suffix}`);
process.exitCode = 1;
}
}
136 changes: 123 additions & 13 deletions e2e-tests/js/smoke.js
@@ -15,17 +15,22 @@
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.

import * as fs from "node:fs";
import {
createSmoldotClient,
addChainFromSpec,
readDbContentIfSet,
sendRpc,
readJsonRpcUntil,
sendRpcAndWait,
report,
} from "./helpers.js";

const relaySpecPath = process.env.RELAY_CHAIN_SPEC;
const paraSpecPath = process.env.PARA_CHAIN_SPEC;
const requiredBlocks = Number.parseInt(process.env.REQUIRED_BLOCKS, 10);
const expectedInitialFinalized = Number.parseInt(process.env.EXPECTED_INITIAL_FINALIZED ?? "0", 10);
const dbDumpDir = process.env.SMOLDOT_DB_DUMP_DIR;

if (!relaySpecPath || !paraSpecPath || !Number.isFinite(requiredBlocks)) {
console.error(
@@ -34,20 +39,108 @@ if (!relaySpecPath || !paraSpecPath || !Number.isFinite(requiredBlocks)) {
process.exit(1);
}

// Decodes the block number from a hex SCALE-encoded substrate header.
// Layout: parent_hash (32 B) | compact-encoded number | rest. The compact
// modes 0/1/2 cover block numbers up to 2^30; that's the only range we'll
// ever assert against.
function decodeHeaderNumber(hexStr) {
const stripped = hexStr.startsWith("0x") ? hexStr.slice(2) : hexStr;
const bytes = Buffer.from(stripped, "hex");
if (bytes.length < 33) throw new Error(`header hex too short: ${bytes.length} bytes`);
const off = 32;
const b0 = bytes[off];
const mode = b0 & 0b11;
if (mode === 0) return b0 >>> 2;
if (mode === 1) return (b0 | (bytes[off + 1] << 8)) >>> 2;
if (mode === 2) {
return (
(b0 | (bytes[off + 1] << 8) | (bytes[off + 2] << 16) | (bytes[off + 3] << 24)) >>> 2
);
}
throw new Error(`compact mode 3 not supported in decodeHeaderNumber`);
}

const client = createSmoldotClient();
let relay;
let para;
let passed = true;

try {
relay = await addChainFromSpec(client, relaySpecPath);
const relayDbContent = readDbContentIfSet("SMOLDOT_DB_RELAY");
const paraDbContent = readDbContentIfSet("SMOLDOT_DB_PARA");

relay = await addChainFromSpec(client, relaySpecPath, {
databaseContent: relayDbContent,
});
report("addChain relay", true);

para = await addChainFromSpec(client, paraSpecPath, {
databaseContent: paraDbContent,
potentialRelayChains: [relay],
});
report("addChain parachain", true);

// Assert smoldot's first reported finalized block ≥ expected. Uses
// chainHead_v1: subscribe on the relay, wait for the `initialized` event
// (which fires only after warp sync) and decode the newest finalized
// header's number. Legacy `chain_getFinalizedHead` would race the
// warp-sync gate — smoldot blocks legacy RPCs until the gate opens.
if (expectedInitialFinalized > 0) {
const relayFollowReqId = sendRpc(relay, "chainHead_v1_follow", [false]).toString();
const relaySubId = await readJsonRpcUntil(
relay,
(msg) => {
if (msg.id === relayFollowReqId) {
if (msg.error)
throw new Error(
`relay chainHead_v1_follow failed: ${JSON.stringify(msg.error)}`,
);
return msg.result;
}
return undefined;
},
Date.now() + 30_000,
);
if (typeof relaySubId !== "string" || !relaySubId) {
throw new Error("Unexpected relay follow subscription id");
}
const finalizedHash = await readJsonRpcUntil(
relay,
(msg) => {
if (msg.method !== "chainHead_v1_followEvent") return undefined;
if (msg.params?.subscription !== relaySubId) return undefined;
const r = msg.params.result;
if (r?.event === "initialized") {
const hashes = r.finalizedBlockHashes ?? [];
return hashes[hashes.length - 1];
}
if (r?.event === "stop") throw new Error("relay chainHead follow stopped");
return undefined;
},
Date.now() + 120_000,
);
if (typeof finalizedHash !== "string") {
throw new Error("relay chainHead never reported initialized");
}
const headerHex = await sendRpcAndWait(
relay,
"chainHead_v1_header",
[relaySubId, finalizedHash],
30_000,
);
const num = decodeHeaderNumber(headerHex);
const ok = num >= expectedInitialFinalized;
report(
"relay finalized at-or-past expected_initial_finalized",
ok,
`finalized=#${num} expected=#${expectedInitialFinalized}`,
);
if (!ok)
throw new Error(
`relay finalized #${num} below expected_initial_finalized #${expectedInitialFinalized}`,
);
}

const followReqId = sendRpc(para, "chainHead_v1_follow", [false]).toString();
const subId = await readJsonRpcUntil(
para,
@@ -68,24 +161,27 @@
}
report("chainHead_v1_follow accepted", true, `subId=${subId}`);

const initialBlocks = new Set();
// Skip the initial `newBlock` burst (replay of already-known blocks); the
// first `bestBlockChanged` marks its end. Otherwise a warm-started smoldot
// would satisfy the threshold from cached state alone.
let burstDone = false;
let newBlocks = 0;
await readJsonRpcUntil(
para,
(msg) => {
if (msg.method !== "chainHead_v1_followEvent") return undefined;
if (msg.params?.subscription !== subId) return undefined;
const result = msg.params.result;
if (result?.event === "initialized") {
for (const h of result.finalizedBlockHashes ?? []) initialBlocks.add(h);
} else if (result?.event === "newBlock" && !initialBlocks.has(result.blockHash)) {
if (result?.event === "bestBlockChanged") {
burstDone = true;
} else if (result?.event === "newBlock" && burstDone) {
if (++newBlocks >= requiredBlocks) return true;
} else if (result?.event === "stop") {
throw new Error("chainHead follow stopped unexpectedly");
}
return undefined;
},
Date.now() + 120_000,
Date.now() + 180_000,
);

const ok = newBlocks >= requiredBlocks;
@@ -95,15 +191,29 @@
`count=${newBlocks}/${requiredBlocks}`,
);
if (!ok) passed = false;

if (passed && dbDumpDir) {
fs.mkdirSync(dbDumpDir, { recursive: true });
const relayDb = await sendRpcAndWait(
relay,
"chainHead_unstable_finalizedDatabase",
[],
30_000,
);
const paraDb = await sendRpcAndWait(
para,
"chainHead_unstable_finalizedDatabase",
[],
30_000,
);
fs.writeFileSync(`${dbDumpDir}/relay.json`, relayDb);
fs.writeFileSync(`${dbDumpDir}/para.json`, paraDb);
report("dumped smoldot databaseContent", true, dbDumpDir);
}
} catch (e) {
report("smoke", false, e.message);
passed = false;
} finally {
try {
await client.terminate();
} catch (_) {}
}

if (!passed || process.exitCode) {
process.exit(1);
}
// Finish as soon as the result is known
process.exit(passed && !process.exitCode ? 0 : 1);
9 changes: 2 additions & 7 deletions e2e-tests/js/statement_store_peer_connection.js
@@ -124,12 +124,7 @@ try {
} catch (e) {
report("statement_store_peer_connection", false, e.message);
passed = false;
} finally {
try {
await client.terminate();
} catch (_) {}
}

if (!passed || process.exitCode) {
process.exit(1);
}
// Finish as soon as the result is known
process.exit(passed && !process.exitCode ? 0 : 1);
9 changes: 2 additions & 7 deletions e2e-tests/js/statement_store_reception.js
@@ -145,12 +145,7 @@
} catch (e) {
report("statement_store_reception", false, e.message);
passed = false;
} finally {
try {
await client.terminate();
} catch (_) {}
}

if (!passed || process.exitCode) {
process.exit(1);
}
// Finish as soon as the result is known
process.exit(passed && !process.exitCode ? 0 : 1);