tests: Add zombienet test for bitswap_v1_get #3240
Conversation
@skunert you mentioned snapshots should live on Google Drive, but I haven't seen anything similar. polkadot-bulletin-chain stores its build outputs as GitHub artifacts (e.g. release.yml#L98-L103). Should we do that instead?
@BigTava An example can be seen here: https://github.com/paritytech/polkadot-sdk/blob/3501f2c3c8e46d0b726994065a05615995f03880/cumulus/zombienet/zombienet-sdk/tests/zombie_ci/full_node_warp_sync/common.rs#L26 I think artifacts are limited in size and have a retention period. These snapshots should only be generated very rarely and then always reused.
Hmm, looks like I lost access to our cloud storage. @pepoviola, what is the current way to store these snapshots? Do you still have access?
You should have access to |
Snapshots live in the zombienet-db-snaps bucket under smoldot/bulletin_fetch/ and are fetched via hardcoded URLs that with_db_snapshot() downloads automatically; a local override is wired through the DB_SNAPSHOT_*_OVERRIDE env vars. This drops the manifest.json read from the fetch path: payload metadata is now derived from bulletin::payloads() at runtime instead.
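A minimal sketch of that override wiring, assuming a small helper like the one below (the constant name, env-var name, and URL are illustrative, not the exact ones from the PR):

```rust
use std::env;

// Illustrative default URL; the real constants live in the test crate and
// point at the zombienet-db-snaps bucket under smoldot/bulletin_fetch/.
const DB_SNAPSHOT_RELAY: &str =
    "https://storage.googleapis.com/zombienet-db-snaps/smoldot/bulletin_fetch/relay.tgz";

/// Prefer a local override (e.g. DB_SNAPSHOT_RELAY_OVERRIDE) when set,
/// otherwise fall back to the hardcoded bucket URL.
fn snapshot_source(default_url: &str, override_var: &str) -> String {
    env::var(override_var).unwrap_or_else(|_| default_url.to_owned())
}

fn main() {
    let src = snapshot_source(DB_SNAPSHOT_RELAY, "DB_SNAPSHOT_RELAY_OVERRIDE");
    // with_db_snapshot() would then download (or copy) from `src`.
    println!("snapshot source: {src}");
}
```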
The function was renamed to bulletin::payloads() in the code earlier; the README missed the rename.
Thanks @skunert! I added the remaining code based on this example. The snapshots come out to ~50 MB total. Someone with bucket-write access needs to upload them for the CI job to pass.
lrubasze left a comment:
Nice work! A few comments from my side.
The DB_SNAPSHOT_* constants now point at -2026-05-04.tgz so the bucket keeps a clear version history instead of silently overwriting on each refresh. Bump the date in the constants (and re-upload) whenever the bulletin runtime or bulletin::payloads() changes.
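For illustration, the dated constants might look like this (paths and names are hedged; only the -2026-05-04.tgz suffix is from the PR):

```rust
// Date-suffixed snapshot artifacts: refreshing uploads a new object instead
// of overwriting, so older test revisions keep working against old snapshots.
const DB_SNAPSHOT_RELAY: &str = "smoldot/bulletin_fetch/relay-2026-05-04.tgz";
const DB_SNAPSHOT_PARA: &str = "smoldot/bulletin_fetch/para-2026-05-04.tgz";

fn main() {
    // Bump the embedded date (and re-upload) whenever the bulletin runtime
    // or bulletin::payloads() changes.
    println!("pinned snapshots: {DB_SNAPSHOT_RELAY}, {DB_SNAPSHOT_PARA}");
}
```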
- Mixed-availability scenario now iterates every full-only payload instead of only the first match, so both the 31 B and 1 MiB cases exercise gossip fallback (a sketch follows this list).
- Rename payload labels to `all-nodes-*` / `one-node-*` so they describe corpus location instead of the opaque `both-*` / `full-only-*`.
- Hoist `RELAY_CHAIN`, `RELAY_BINARY`, `PARA_BINARY` into `bulletin.rs`; both test files reference the shared consts.
- Trim the `bitswapGetWithRetry` doc to a single line.
- `cargo fmt`.
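A sketch of the payload metadata this implies, assuming `bulletin::payloads()` returns entries shaped roughly like this (the `Payload` struct, field names, and the meaning of `on_partial` are assumptions; only the label scheme and the full-only filter are from the PR):

```rust
/// Hypothetical shape of an entry returned by bulletin::payloads().
struct Payload {
    label: &'static str,
    size: usize,
    /// Assumed meaning: true => also available on partial nodes;
    /// false => held by a single full node, forcing the gossip fallback.
    on_partial: bool,
}

fn payloads() -> Vec<Payload> {
    vec![
        Payload { label: "all-nodes-31b", size: 31, on_partial: true },
        Payload { label: "one-node-31b", size: 31, on_partial: false },
        Payload { label: "one-node-1mib", size: 1 << 20, on_partial: false },
    ]
}

fn main() {
    // Mirrors the JS filter: visit every full-only payload, not just the
    // first match, so both the 31 B and 1 MiB cases are exercised.
    for p in payloads().iter().filter(|p| !p.on_partial) {
        println!("gossip-fallback case: {} ({} bytes)", p.label, p.size);
    }
}
```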
The bare `sendRpcAndWait` was relying on the implicit ordering that prior `known-*` scenarios had already warmed up smoldot. Wrap this call too so the assertion is robust to test reordering and cold-start hangs. `-32602` propagates on the first attempt anyway, so behaviour in the happy path is unchanged.
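For illustration, the retry shape sketched in Rust (the actual helper is the JS `bitswapGetWithRetry`; the attempt count, delay, and the assumption that the expected `-32602` comes back inside a successful first-attempt response are all invented here):

```rust
use std::{thread, time::Duration};

// Generic retry-wrapper sketch: re-run `op` on failure so a cold-started
// smoldot gets time to warm up before the assertion fires.
fn with_retry<T, E>(
    attempts: u32,
    delay: Duration,
    mut op: impl FnMut() -> Result<T, E>,
) -> Result<T, E> {
    let mut last = op();
    for _ in 1..attempts {
        if last.is_ok() {
            break;
        }
        thread::sleep(delay);
        last = op();
    }
    last
}

fn main() {
    // Happy path: the expected -32602 arrives in a successful RPC response
    // on attempt one, so wrapping changes nothing; retries only kick in on
    // cold-start hangs or transport errors.
    let result = with_retry(5, Duration::from_secs(2), || -> Result<(), String> {
        Err("simulated transient failure".into())
    });
    println!("{result:?}");
}
```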
Dumb question: I can imagine such a snapshot being used for bulletin-related functionality in other repos too. @lrubasze @skunert, does that feel right to you?
I think this is fine. The tests that use the snapshot and the test that generates it need to have the same network setup, so I think it is good to keep them in one place. Unless we outsource the zombienet network setup too.
I thought about this in the past too; my 2c is that it's okay to include this per-repo, because sometimes the snapshots need to replicate specific scenarios. For bulletin I can imagine scenarios with specific counts of renew/store txs etc.
```js
  );
}

for (const payload of payloads.filter((p) => !p.on_partial)) {
```
One nit (I don't think we need to change it right now): it would be nice to know whether we actually hit the partial case where a CID is unavailable. Imagine peer switching were broken and smoldot always asked the same peer; this test would then become flaky depending on which peer is chosen for the request, right? Not sure we have a good way to handle this case currently.
Smoldot handles DontHave silently in bitswap_service.rs, so there was nothing for the test to scrape today. Feel free to file that as a follow-up issue to discuss and implement.
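To make the "nothing to scrape" point concrete, a purely illustrative sketch (not smoldot's actual code) of a DontHave being swallowed with no observable output:

```rust
// Purely illustrative; smoldot's real handling lives in bitswap_service.rs.
enum Presence {
    Have(Vec<u8>),
    DontHave,
}

fn on_bitswap_response(presence: Presence) -> Option<Vec<u8>> {
    match presence {
        Presence::Have(block) => Some(block),
        // Swallowed silently: no log line, no metric, so a test harness has
        // no signal that the unavailable-CID path was actually exercised.
        Presence::DontHave => None,
    }
}

fn main() {
    assert!(on_bitswap_response(Presence::DontHave).is_none());
}
```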
Avoids a matrix-slot collision with PR #3242 (more smoke tests), which takes slots 0004-0006.
I agree, but we can offload the creation logic to zombienet-sdk itself, so per repo you only need to set up the network and call something like …
Thx!
Closes #3232