feat: Basilica metadata verification and image deploy mode#42
Conversation
…fy CLI

Replace the spot-check verification system with metadata-based verification that uses the Basilica public metadata API to check deployment health and Docker image pullability.

- Add MetadataVerifier using Basilica public metadata API
- Add --image flag to `kinitro miner push` for deploying pre-built Docker images directly (without HuggingFace source code)
- Add `kinitro miner verify` command for standalone deployment verification (supports --deployment-id and --netuid --uid/--hotkey modes)
- Enroll public metadata on deploy for validator verification
- Add integration and unit tests for the full metadata verification flow
📝 Walkthrough

Removes HuggingFace-based spot-check verification and miner repo/revision fields; adds image-based deployment CLI and Basilica metadata verification (scheduler and executor); updates the commitment format to deployment_id/encrypted_deployment and adjusts CLI, types, configs, tests, and dependencies accordingly.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Miner as Miner/User
    participant CLI as Miner CLI
    participant Basilica as Basilica API
    participant Docker as Docker Registry
    rect rgba(0, 100, 200, 0.5)
        Note over Miner,Docker: Image-Based Deployment Flow
        Miner->>CLI: miner deploy --image <image> --name <name>
        CLI->>Basilica: basilica_push(image, name)
        Basilica-->>CLI: deployment_id
        CLI->>Basilica: enroll_metadata(deployment_id, image)
        Basilica-->>CLI: enroll result
        CLI-->>Miner: DEPLOYMENT SUCCESSFUL (Deployment ID)
    end
    rect rgba(100, 200, 0, 0.5)
        Note over Miner,Docker: Verification Flow (Deployment ID)
        Miner->>CLI: miner verify --deployment-id <id>
        CLI->>Basilica: fetch deployment metadata (deployment_id)
        Basilica-->>CLI: metadata (image, state, uptime)
        CLI->>Docker: check image publicability
        Docker-->>CLI: public/private
        CLI-->>Miner: VERIFIED / FAILED (details)
    end
    rect rgba(200, 100, 0, 0.5)
        Note over Miner,Docker: Verification Flow (Chain Lookup)
        Miner->>CLI: miner verify --netuid <n> --uid <uid>
        CLI->>Chain: resolve_hotkey(uid)
        Chain-->>CLI: hotkey
        CLI->>Chain: read_miner_commitment(hotkey)
        Chain-->>CLI: MinerCommitment (deployment_id/encrypted_deployment)
        CLI->>Basilica: fetch deployment metadata
        Basilica-->>CLI: metadata
        CLI->>Docker: check image publicability
        Docker-->>CLI: public/private
        CLI-->>Miner: VERIFIED / FAILED (details)
    end
```

```mermaid
sequenceDiagram
    participant Scheduler as Scheduler Main
    participant Verifier as MetadataVerifier
    participant Basilica as Basilica API
    participant TaskGen as Task Generator
    rect rgba(100, 150, 200, 0.5)
        Note over Scheduler,TaskGen: Pre-Task Verification Cycle
        Scheduler->>Scheduler: Discover miners / commitments
        Scheduler->>Verifier: verify_miners(commitments)
        par Parallel Verification
            Verifier->>Basilica: fetch metadata (miner 1)
            Basilica-->>Verifier: result 1
        and
            Verifier->>Basilica: fetch metadata (miner 2)
            Basilica-->>Verifier: result 2
        and
            Verifier->>Basilica: fetch metadata (miner N)
            Basilica-->>Verifier: result N
        end
        Verifier-->>Scheduler: verified_results
        Scheduler->>Scheduler: Filter passing miners
        alt All miners failed
            Scheduler-->>Scheduler: Abort cycle (no tasks)
        else Miners available
            Scheduler->>TaskGen: Generate tasks for passing miners
            TaskGen-->>Scheduler: tasks
        end
    end
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 3 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 4
🤖 Fix all issues with AI agents
In `@kinitro/cli/miner/deploy.py`:
- Around line 169-200: The "Next step" guidance shown when deploying with image
mode (the block handling the image variable and the deployment =
client.deploy(...) call) incorrectly implies a HuggingFace repo is required;
update the user-facing message printed after successful image deployment to
explicitly state that although a repo/revision is needed for the on-chain
commitment format (parse_commitment expects repo:rev:deployment_id), users
deploying pre-built images must still provide a repo and revision for the chain
commit or follow an alternative image-only commit format; either (A) change the
next-step text to clarify "If you deployed a pre-built image you still must
provide --repo and --revision for on-chain commitment (format
repo:rev:deployment_id)," or (B) implement support in parse_commitment to accept
an image-only commitment token and document that format—pick one and update the
message and parse_commitment usage accordingly (references: image variable,
deployment variable, and parse_commitment).
In `@kinitro/cli/miner/verify.py`:
- Around line 40-41: The Miner UID is being hardcoded to MinerUID(0) in
_read_commitment_from_chain and another construction (both creating
MinerCommitment), so update _read_commitment_from_chain to accept an optional
uid parameter (e.g., uid: Optional[int] or Optional[MinerUID]) and use that when
constructing MinerCommitment instead of MinerUID(0); then thread that UID from
the verify function (which parses --uid in chain-lookup mode) into the call to
_read_commitment_from_chain and into any other place that currently uses
MinerUID(0) (refer to the MinerCommitment constructor sites and the verify
function) so the actual user-provided UID is preserved in the verification
result.
In `@kinitro/executor/verification.py`:
- Around line 76-84: The parse_image_ref logic incorrectly uses
image.split(":")[0] when image_tag is provided, which drops registry ports and
repo paths (e.g., myregistry.com:5443/org/app); update the branch that handles a
supplied image_tag in the parse_image_ref function so it mirrors the safe
splitting used in the other branch: use image.rsplit(":", 1) and only treat the
right-hand part as a tag if that part does not contain a "/" (i.e., it's a real
tag); otherwise keep the full image as name and set tag = image_tag. Ensure you
reference the variables image, image_tag, name, tag and replace
image.split(":")[0] with this conditional rsplit-based logic.
- Around line 119-121: The __init__ creates BasilicaClient() without credentials
but BasilicaClient requires an API token; update the __init__ in verification.py
to supply a token (e.g., read BASILICA_API_TOKEN from environment and pass as
api_key to BasilicaClient or ensure the env var is set before construction),
remove or change the misleading comment about "No API key needed", and keep
get_public_deployment_metadata() usage unchanged since it can be called once the
client is initialized with a valid token.
🧹 Nitpick comments (9)
kinitro/cli/miner/deploy.py (1)
173-173: Name sanitization logic is duplicated across three locations.

The same sanitization pattern (`"".join(c if c.isalnum() or c == "-" else "-" for c in name).strip("-")[:63]`) appears at lines 173, 208, and 482. Extract this into a shared helper to avoid inconsistency if the rules change.

♻️ Proposed helper

```diff
+def _sanitize_deployment_name(name: str) -> str:
+    """Sanitize a deployment name to DNS-compatible format (max 63 chars)."""
+    return "".join(c if c.isalnum() or c == "-" else "-" for c in name).strip("-")[:63]
+
+
 def basilica_push(
```

Then replace each inline expression with `_sanitize_deployment_name(name)`.

Also applies to: 208-208
kinitro/executor/verification.py (1)
213-218: Unbounded concurrency in `verify_miners` may overwhelm registries or the Basilica API.

`asyncio.gather(*tasks)` fires all verification requests concurrently. With many miners, this could hit rate limits on Docker Hub's token endpoint or the Basilica metadata API. Consider using `asyncio.Semaphore` to cap concurrent verifications.

♻️ Proposed bounded concurrency

```diff
+_MAX_CONCURRENT_VERIFICATIONS = 10
+
 async def verify_miners(
     self, commitments: list[MinerCommitment]
 ) -> list[MetadataVerificationResult]:
     """Verify all miners concurrently."""
-    tasks = [self.verify_miner(c) for c in commitments]
-    return await asyncio.gather(*tasks)
+    sem = asyncio.Semaphore(_MAX_CONCURRENT_VERIFICATIONS)
+
+    async def _limited(c: MinerCommitment) -> MetadataVerificationResult:
+        async with sem:
+            return await self.verify_miner(c)
+
+    return await asyncio.gather(*[_limited(c) for c in commitments])
```

kinitro/scheduler/main.py (1)
210-253: Clean integration of metadata verification into the scheduler cycle.

The gating via `config.metadata_verification_enabled`, per-miner logging of results, and cycle-fail on zero verified miners are all well-handled. One minor observation: `MetadataVerifier()` (and its internal `BasilicaClient()`) is re-instantiated every cycle. Consider hoisting it to `Scheduler.__init__` if the client is stateless and reusable across cycles.

kinitro/cli/miner/verify.py (3)
8-12: Importing a private function `_query_commitment_by_hotkey_async` from another module.

The underscore prefix signals this is an internal implementation detail of `kinitro.chain.commitments`. If this function is needed by the CLI, consider making it public (rename to `query_commitment_by_hotkey_async`) to formalize the contract.

```bash
#!/bin/bash
# Check how many places import this private function
rg -n '_query_commitment_by_hotkey_async' --type=py
```
52-52: Add type annotation to the `_print_result` parameter.

The `result` parameter is untyped. Annotate it with `MetadataVerificationResult` for clarity and type-checker support.

♻️ Proposed fix

```diff
-def _print_result(result) -> None:
+def _print_result(result: "MetadataVerificationResult") -> None:
```

This requires importing `MetadataVerificationResult` (add to the existing import from `kinitro.executor.verification`):

```diff
-from kinitro.executor.verification import MetadataVerifier
+from kinitro.executor.verification import MetadataVerificationResult, MetadataVerifier
```
115-125: Multiple `asyncio.run()` calls in the chain-mode path create separate event loops unnecessarily.

Lines 115 and 125 each call `asyncio.run()` for the hotkey lookup and commitment read. These could be combined into a single async function to reuse the event loop and the `AsyncSubtensor` connection.

♻️ Sketch of combined async helper

```python
async def _resolve_commitment(
    network: str, netuid: int, uid: int | None, hotkey: str | None
) -> tuple[str, MinerCommitment | None]:
    """Resolve hotkey (if needed) and read commitment in a single event loop."""
    async with AsyncSubtensor(network=network) as subtensor:
        if uid is not None and not hotkey:
            neurons = await subtensor.neurons(netuid=netuid)
            if uid < 0 or uid >= len(neurons):
                return "", None
            hotkey = neurons[uid].hotkey
        raw, block = await _query_commitment_by_hotkey_async(subtensor, netuid, hotkey)
        ...
```

tests/unit/test_metadata_verification.py (2)
114-116: `MetadataVerifier.__new__` bypasses `__init__` — consider a lighter alternative.

Using `__new__` to skip `__init__` and manually assigning `_client` works but is fragile: if `__init__` adds new attributes, every test silently breaks. Patching the client constructor is more resilient.

♻️ Suggested alternative

```diff
-    verifier = MetadataVerifier.__new__(MetadataVerifier)
-    verifier._client = MagicMock()
-    verifier._client.get_public_deployment_metadata = MagicMock(return_value=_FakeMetadata())
+    mock_client = MagicMock()
+    mock_client.get_public_deployment_metadata.return_value = _FakeMetadata()
+    with patch("kinitro.executor.verification.BasilicaClient", return_value=mock_client):
+        verifier = MetadataVerifier()
```

Apply the same pattern to the other `TestVerifyMiner` and `TestVerifyMiners` tests.
25-51: `_make_commitment` and `_FakeMetadata` are duplicated in `test_metadata_e2e.py`.

Both this file and `tests/integration/test_metadata_e2e.py` define near-identical `_make_commitment` and `_FakeMetadata` helpers (with slightly different defaults). Consider extracting them into a shared `conftest.py` or `tests/helpers.py` to reduce duplication and keep defaults consistent.

tests/integration/test_metadata_e2e.py (1)
90-98: Simplify the deploy kwargs assertions.

The `or` conditions are logically redundant — `kwargs.get("source") is None` is already true when the key is absent. A simpler check would be clearer.

♻️ Suggested simplification

```diff
-    assert deploy_call.kwargs.get("source") is None or "source" not in deploy_call.kwargs
-    assert (
-        deploy_call.kwargs.get("pip_packages") is None
-        or "pip_packages" not in deploy_call.kwargs
-    )
+    assert deploy_call.kwargs.get("source") is None
+    assert deploy_call.kwargs.get("pip_packages") is None
```
```python
    return MinerCommitment(
        uid=MinerUID(0),
```

Miner UID is hardcoded to 0 even when the actual UID is known.

In chain-lookup mode with `--uid`, the user-provided UID is available but not passed into the `MinerCommitment` (line 41 and line 98 both use `MinerUID(0)`). The verification result will display `miner_uid=0`, which is misleading. Pass the known UID when available.
🐛 Proposed fix for chain mode (Lines 40-41)
Refactor _read_commitment_from_chain to accept an optional UID:
```diff
 async def _read_commitment_from_chain(
-    network: str, netuid: int, hotkey: str
+    network: str, netuid: int, hotkey: str, uid: int = 0
 ) -> MinerCommitment | None:
     ...
     return MinerCommitment(
-        uid=MinerUID(0),
+        uid=MinerUID(uid),
         ...
     )
```

And in the verify function, thread the UID through:

```diff
-commitment = asyncio.run(_read_commitment_from_chain(network, netuid, query_hotkey))
+lookup_uid = uid if uid is not None else 0
+commitment = asyncio.run(_read_commitment_from_chain(network, netuid, query_hotkey, uid=lookup_uid))
```

Also applies to: 97-105
```python
    # If image_tag provided separately, strip any tag already on the image name
    if image_tag:
        name = image.split(":")[0]
        tag = image_tag
    elif ":" in image:
        name, tag = image.rsplit(":", 1)
    else:
        name = image
        tag = "latest"
```
`parse_image_ref` mishandles images with port-based registries when `image_tag` is provided.

Line 78 uses `image.split(":")[0]`, which splits from the left. For an image like `myregistry.com:5443/org/app` with a separate `image_tag`, this produces `name = "myregistry.com"` — losing the port and repository path entirely.

Use `rsplit` with awareness of the registry-port pattern, consistent with the `elif` branch on line 81.

🐛 Proposed fix

```diff
     # If image_tag provided separately, strip any tag already on the image name
     if image_tag:
-        name = image.split(":")[0]
+        # Use rsplit to handle registry:port correctly (e.g., myregistry.com:5443/org/app:v1)
+        # Only strip a trailing :<tag> component, not a :<port> in the registry.
+        if "/" in image and ":" in image.rsplit("/", 1)[-1]:
+            name = image.rsplit(":", 1)[0]
+        else:
+            name = image
         tag = image_tag
```
```python
    def __init__(self) -> None:
        # No API key needed for public metadata reads
        self._client = BasilicaClient()
```
🧩 Analysis chain
🌐 Web query:
basilica-sdk python BasilicaClient initialization without api_key public metadata
💡 Result:
In the current basilica-sdk (PyPI package basilica-sdk, e.g. 0.19.0), you cannot initialize BasilicaClient without an API token: the underlying PyO3/Rust constructor always resolves api_key from the argument or BASILICA_API_TOKEN and errors if neither is present. [1]
Even though the Python wrapper includes get_public_deployment_metadata() and documents it as “no authentication required”, you still need a token just to construct the client. [2]
What to do instead:

- If you can provide a token (recommended):

  ```python
  import os
  from basilica import BasilicaClient

  os.environ["BASILICA_API_TOKEN"] = "basilica_..."  # or pass api_key=...
  client = BasilicaClient()
  md = client.get_public_deployment_metadata("my-deployment")
  ```

  (Constructor + env var usage shown in the SDK docs.) [3]

- If you truly need zero token usage, the SDK won't work as-is — you'll need to call the public-metadata HTTP endpoint directly (e.g., with `requests`) once you've identified the exact route from Basilica's API docs/source. The SDK itself does not provide an "unauthenticated client" mode because construction fails without a token. [1]
Sources:
[1] Rust/PyO3 BasilicaClient constructor enforces API key (env var fallback, otherwise error) — crates/basilica-sdk-python/src/lib.rs [1]
[2] Python wrapper exposes get_public_deployment_metadata() and describes it as unauthenticated — python/basilica/__init__.py [2]
[3] PyPI project description shows BASILICA_API_TOKEN usage / BasilicaClient() init — PyPI basilica-sdk page [3]
BasilicaClient() requires an API key and will fail at initialization.
The Basilica SDK (v0.19.0) constructor always requires an API token—either passed as the api_key argument or via the BASILICA_API_TOKEN environment variable. Initializing without either will raise an error. Although get_public_deployment_metadata() does not require authentication to call, the client itself cannot be constructed without a token.
Pass the API key or set the environment variable before constructing the client:
```python
import os
from basilica import BasilicaClient

os.environ["BASILICA_API_TOKEN"] = "basilica_..."  # or pass api_key="..."
self._client = BasilicaClient()
```

Also remove the misleading comment — an API key is required even for public metadata reads.
…oyments

Replace the HuggingFace upload workflow with a Docker-image-only deployment model via Basilica. Simplify the on-chain commitment format to just deployment_id (or encrypted `e:<blob>`), with backward compatibility for the legacy `repo:rev:id` format. Remove all deprecated fields (miner_repo, miner_revision, verification_passed, verification_score).
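The simplified commitment format described in that commit can be sketched as a small classifier. This is a hypothetical helper for illustration only, not the project's actual parser; it assumes the three shapes named above (plain deployment_id, `e:<blob>` encrypted, legacy `repo:rev:id`):

```python
def classify_commitment(raw: str) -> dict[str, str]:
    """Classify an on-chain commitment string (illustrative sketch).

    - 'e:<blob>'    -> encrypted deployment blob
    - 'repo:rev:id' -> legacy three-part format (backward compatibility)
    - otherwise     -> plain deployment_id
    """
    if raw.startswith("e:"):
        return {"kind": "encrypted", "blob": raw[2:]}
    parts = raw.split(":")
    if len(parts) == 3:
        repo, rev, deployment_id = parts
        return {
            "kind": "legacy",
            "repo": repo,
            "revision": rev,
            "deployment_id": deployment_id,
        }
    return {"kind": "plain", "deployment_id": raw}
```

The legacy branch only fires when exactly two colons are present, which is why the encrypted prefix must be checked first: an `e:` blob containing colons would otherwise be misread as a legacy triple.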
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@docs/miner-guide.md`:
- Around line 383-390: Add language specifiers to the two fenced code blocks
that show the commitment examples so the markdown linter MD040 is satisfied:
update the block containing "deployment-uuid" and the block containing
"e:<base85-encrypted-blob>" to use a plaintext identifier (e.g., ```text)
instead of bare ```; ensure both blocks are changed consistently in the
miner-guide.md content.
🧹 Nitpick comments (8)
kinitro/chain/commitments.py (1)
263-352: Significant duplication between sync and async `read_miner_commitments`.

The two functions are nearly identical, differing only in `await` calls. Consider extracting the shared commitment-building logic into a helper to reduce duplication. This is an existing pattern though, so it can be addressed separately.

Also applies to: 355-425
kinitro/cli/miner/verify.py (2)
49-49: Add type annotation for the `result` parameter.

`_print_result` accepts an untyped parameter. Adding `MetadataVerificationResult` improves readability and enables type-checker coverage.

♻️ Suggested fix

```diff
-def _print_result(result) -> None:
+def _print_result(result: "MetadataVerificationResult") -> None:
```

You'll also need to import it:

```diff
 from kinitro.executor.verification import MetadataVerifier
+from kinitro.executor.verification import MetadataVerificationResult
```

(Or combine into a single import.)
8-11: Importing private function `_query_commitment_by_hotkey_async`.

This creates a coupling to an internal API (underscore-prefixed by convention). Consider exposing a public wrapper in `commitments.py` or reusing the existing `_read_commitment_from_chain` pattern from `commitment.py` to avoid depending on the private symbol directly.

kinitro/cli/miner/deploy.py (3)
14-25: Redundant length check on `url_parts`.

Line 21: `str.split(".")` always returns a list with at least one element, so `len(url_parts) > 0` is always `True`. Consider removing it or replacing it with a more meaningful guard (e.g., checking the extracted part is non-empty).

♻️ Suggested simplification

```diff
     if deployment.url and (
         deployment.url.startswith("http://") or deployment.url.startswith("https://")
     ):
         try:
             url_parts = deployment.url.split("//")[1].split(".")
-            if len(url_parts) > 0:
-                return url_parts[0]
+            subdomain = url_parts[0]
+            if subdomain:
+                return subdomain
         except (IndexError, AttributeError):
             pass
```
120-127: `--image` is required even when `--skip-deploy` is set.

When `--skip-deploy` is used (line 128), the image value is only used for display in the summary (lines 196, 297). Requiring users to provide an image they aren't deploying is a minor UX friction. Consider making `image` optional (defaulting to `None`) and validating it's present only when `skip_deploy` is `False`.
99-104: Metadata enrollment logic is duplicated between `basilica_push` and `miner_deploy`.

The `enroll_metadata` try/except block appears in both functions (lines 99–104 and 246–251). Consider extracting a small helper to keep this DRY.

Also applies to: 246-251
tests/unit/test_metadata_verification.py (2)
105-124: Consider using a fixture or factory instead of `__new__` to construct `MetadataVerifier`.

Using `MetadataVerifier.__new__(MetadataVerifier)` to bypass `__init__` (lines 109, 129, 144, 164, 178, 200) is fragile — if `__init__` gains additional state in the future, these tests will silently break. A shared `pytest` fixture that patches `BasilicaClient` during construction would be more resilient and reduce repetition.

♻️ Example fixture

```python
@pytest.fixture
def verifier():
    with patch("kinitro.executor.verification.BasilicaClient") as mock_cls:
        v = MetadataVerifier()
        v._client = mock_cls.return_value
        yield v
```

Then each test receives `verifier` and configures `verifier._client.get_public_deployment_metadata` as needed.
196-224: Batch test could verify result-to-commitment ordering.

The test asserts `results[0].verified is True` and `results[1].verified is False`, relying on `asyncio.gather` preserving order (which it does). For clarity and robustness, also asserting on `results[0].miner_uid` / `results[1].miner_uid` would make the ordering expectation explicit and catch any future refactors that break it.
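That ordering guarantee can be demonstrated in isolation: `asyncio.gather` returns results in submission order even when later coroutines finish first.

```python
import asyncio


async def delayed(value: int, delay: float) -> int:
    await asyncio.sleep(delay)
    return value


async def main() -> list[int]:
    # The coroutine submitted first sleeps longest and completes last,
    # yet gather still reports its result first.
    return await asyncio.gather(
        delayed(0, 0.03),
        delayed(1, 0.02),
        delayed(2, 0.01),
    )


print(asyncio.run(main()))  # [0, 1, 2]
```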
| ``` | ||
| deployment-uuid | ||
| ``` | ||
|
|
||
| **Encrypted commitment:** | ||
| ```json | ||
| {"m":"your-username/kinitro-policy","r":"abc123def456...","e":"<base85-encrypted-blob>"} | ||
| ``` | ||
|
|
||
| Where: | ||
|
|
||
| - `m` = HuggingFace model repository | ||
| - `r` = HuggingFace revision (commit SHA) | ||
| - `d` = Basilica deployment ID (UUID) - for plain commitments | ||
| - `e` = Encrypted deployment ID (base85 blob) - for encrypted commitments | ||
| e:<base85-encrypted-blob> | ||
| ``` |
Fenced code blocks missing language specifier (MD040).

The commitment format examples on lines 383 and 388 lack a language identifier, which triggers the markdownlint MD040 warning. Adding `text` (or `plaintext`) as the language will satisfy the linter.

📝 Proposed fix

````diff
-```
+```text
 deployment-uuid
 ```
````

**Encrypted commitment:**

````diff
-```
+```text
 e:<base85-encrypted-blob>
 ```
````
🧰 Tools
🪛 markdownlint-cli2 (0.20.0)
[warning] 383-383: Fenced code blocks should have a language specified (MD040, fenced-code-language)
[warning] 388-388: Fenced code blocks should have a language specified (MD040, fenced-code-language)
Summary
- `--image` flag to `kinitro miner push` for deploying pre-built Docker images directly to Basilica without HuggingFace source code generation.
- `kinitro miner verify` CLI command for standalone deployment verification, supporting both direct (`--deployment-id`) and chain-lookup (`--netuid --uid`/`--hotkey`) modes.

Test plan
- `ruff check .`, `ruff format --check .`, `ty check .` all pass
- `kinitro miner push --image python:3.11-slim --name test-metadata-verify` against Basilica API
- `kinitro miner verify --deployment-id <id>` against Basilica metadata API
- Tests for the `push` and `verify` commands