
feat: Basilica metadata verification and image deploy mode #42

Open

Shr1ftyy wants to merge 2 commits into main from feat/basilica-metadata-api

Conversation


@Shr1ftyy commented on Feb 14, 2026

Summary

  • Replace spot-check verification with metadata-based verification using the Basilica public metadata API. Checks deployment health state and Docker image public pullability via container registry APIs.
  • Add --image flag to kinitro miner push for deploying pre-built Docker images directly to Basilica without HuggingFace source code generation.
  • Add kinitro miner verify CLI command for standalone deployment verification, supporting both direct (--deployment-id) and chain-lookup (--netuid --uid/--hotkey) modes.
  • Auto-enroll public metadata on deploy so validators can verify deployments.
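The "Docker image public pullability" check mentioned above boils down to an anonymous probe against the container registry: request a pull-scoped token without credentials, then fetch the image manifest. As a hypothetical illustration (not the PR's actual code), the endpoints for the Docker Hub case follow the standard distribution conventions:

```python
def registry_probe_urls(name: str, tag: str = "latest") -> tuple[str, str]:
    """Build the anonymous token URL and manifest URL for a Docker Hub image."""
    # Official images live under the "library/" namespace on Docker Hub.
    repo = name if "/" in name else f"library/{name}"
    token_url = (
        "https://auth.docker.io/token"
        f"?service=registry.docker.io&scope=repository:{repo}:pull"
    )
    manifest_url = f"https://registry-1.docker.io/v2/{repo}/manifests/{tag}"
    return token_url, manifest_url
```

If the anonymous token request and the subsequent manifest fetch both succeed, the image is publicly pullable; a 401/404 on either indicates a private or missing image.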

Test plan

  • 168 tests pass (12 new integration + 18 new unit + 138 existing)
  • `ruff check .`, `ruff format --check .`, and `ty check .` all pass
  • Live-tested `kinitro miner push --image python:3.11-slim --name test-metadata-verify` against the Basilica API
  • Live-tested `kinitro miner verify --deployment-id <id>` against the Basilica metadata API
  • CLI help output verified for both push and verify commands

Summary by CodeRabbit

  • New Features

    • Added a CLI verify command to check deployment status and health.
    • Image-based deployment workflow: deploy via Docker image (--image, --name).
  • Changes

    • Verification moved to metadata-driven checks (Basilica) and can run at scheduler time.
    • Removed local spot-check model verification and HuggingFace-focused flows; HF dependency dropped.
    • Basilica SDK bumped to 0.19.0.
  • Tests

    • Added comprehensive unit and integration tests for metadata verification.

…fy CLI

Replace the spot-check verification system with metadata-based verification
that uses the Basilica public metadata API to check deployment health and
Docker image pullability.

- Add MetadataVerifier using Basilica public metadata API
- Add --image flag to `kinitro miner push` for deploying pre-built Docker
  images directly (without HuggingFace source code)
- Add `kinitro miner verify` command for standalone deployment verification
  (supports --deployment-id and --netuid --uid/--hotkey modes)
- Enroll public metadata on deploy for validator verification
- Add integration and unit tests for the full metadata verification flow

coderabbitai bot commented Feb 14, 2026

📝 Walkthrough

Removes HuggingFace-based spot-check verification and miner repo/revision fields; adds image-based deployment CLI and Basilica metadata verification (scheduler and executor); updates commitment format to deployment_id/encrypted_deployment and adjusts CLI, types, configs, tests, and dependencies accordingly.

Changes

**Verification core** — `kinitro/executor/verification.py`, `kinitro/executor/worker.py`, `kinitro/executor/config.py`
Replaces spot-check PolicyVerifier with metadata-driven MetadataVerifier and ImageRef; removes verification config fields and all per-task verification flows and related Worker APIs.

**Scheduler & task filtering** — `kinitro/scheduler/config.py`, `kinitro/scheduler/main.py`, `kinitro/scheduler/task_generator.py`
Adds metadata_verification_enabled flag and pre-task verification step that filters discovered miners; removes miner_repo/miner_revision from generated tasks.

**CLI: deploy / verify / commit** — `kinitro/cli/miner/deploy.py`, `kinitro/cli/miner/verify.py`, `kinitro/cli/miner/__init__.py`, `kinitro/cli/miner/commitment.py`
Introduces verify command; switches deploy/commit flows to image-based deployment (--image, --name) and deployment_id-centric on-chain commits; removes repo/revision parameters.

**Commitment model & chain** — `kinitro/chain/commitments.py`, `kinitro/types.py`, `kinitro/config.py`
Removes huggingface_repo/revision/docker_image from MinerCommitment/TypedDicts; adds encrypted_deployment handling and new parsing/encryption formats; simplifies validation to deployment_id/encrypted_deployment.

**Backend & API models** — `kinitro/backend/models.py`, `kinitro/backend/storage.py`, `kinitro/api/routes/tasks.py`
Removes miner_repo/miner_revision from Task/TaskPool and task creation paths; strips verification fields from TaskResult schemas.

**Docs, templates & README** — `README.md`, `docs/*`, `kinitro/miner/template/*`, `kinitro/miner/template/Dockerfile`
Reworks docs and templates from HuggingFace repo flow to Docker-image-centric instructions and deployment-id usage.

**Deps & tests** — `pyproject.toml`, `kinitro/miner/template/requirements.txt`, `tests/integration/test_metadata_e2e.py`, `tests/unit/test_metadata_verification.py`, `tests/unit/test_crypto.py`
Bumps basilica-sdk and removes huggingface-hub; adds integration/unit tests for metadata verification, image parsing, and commitment parsing/decryption.

Sequence Diagram(s)

sequenceDiagram
    participant Miner as Miner/User
    participant CLI as Miner CLI
    participant Basilica as Basilica API
    participant Docker as Docker Registry

    rect rgba(0, 100, 200, 0.5)
    Note over Miner,Docker: Image-Based Deployment Flow
    Miner->>CLI: miner deploy --image <image> --name <name>
    CLI->>Basilica: basilica_push(image, name)
    Basilica-->>CLI: deployment_id
    CLI->>Basilica: enroll_metadata(deployment_id, image)
    Basilica-->>CLI: enroll result
    CLI-->>Miner: DEPLOYMENT SUCCESSFUL (Deployment ID)
    end

    rect rgba(100, 200, 0, 0.5)
    Note over Miner,Docker: Verification Flow (Deployment ID)
    Miner->>CLI: miner verify --deployment-id <id>
    CLI->>Basilica: fetch deployment metadata (deployment_id)
    Basilica-->>CLI: metadata (image, state, uptime)
    CLI->>Docker: check image publicability
    Docker-->>CLI: public/private
    CLI-->>Miner: VERIFIED / FAILED (details)
    end

    rect rgba(200, 100, 0, 0.5)
    Note over Miner,Docker: Verification Flow (Chain Lookup)
    Miner->>CLI: miner verify --netuid <n> --uid <uid>
    CLI->>Chain: resolve_hotkey(uid)
    Chain-->>CLI: hotkey
    CLI->>Chain: read_miner_commitment(hotkey)
    Chain-->>CLI: MinerCommitment (deployment_id/encrypted_deployment)
    CLI->>Basilica: fetch deployment metadata
    Basilica-->>CLI: metadata
    CLI->>Docker: check image publicability
    Docker-->>CLI: public/private
    CLI-->>Miner: VERIFIED / FAILED (details)
    end
sequenceDiagram
    participant Scheduler as Scheduler Main
    participant Verifier as MetadataVerifier
    participant Basilica as Basilica API
    participant TaskGen as Task Generator

    rect rgba(100, 150, 200, 0.5)
    Note over Scheduler,TaskGen: Pre-Task Verification Cycle
    Scheduler->>Scheduler: Discover miners / commitments
    Scheduler->>Verifier: verify_miners(commitments)
    
    par Parallel Verification
        Verifier->>Basilica: fetch metadata (miner 1)
        Basilica-->>Verifier: result 1
    and
        Verifier->>Basilica: fetch metadata (miner 2)
        Basilica-->>Verifier: result 2
    and
        Verifier->>Basilica: fetch metadata (miner N)
        Basilica-->>Verifier: result N
    end
    
    Verifier-->>Scheduler: verified_results
    Scheduler->>Scheduler: Filter passing miners
    alt All miners failed
        Scheduler-->>Scheduler: Abort cycle (no tasks)
    else Miners available
        Scheduler->>TaskGen: Generate tasks for passing miners
        TaskGen-->>Scheduler: tasks
    end
    end

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs

Suggested reviewers

  • rayokamoto

Poem

🐰
I hopped from task to scheduler tree,
Switched checks upstream, so tasks run free.
Images pulled and metadata read,
Deployment IDs now lead ahead.
Hooray — upstream checks, a safer spree! 🥕

🚥 Pre-merge checks | ✅ 3 | ❌ 1
❌ Failed checks (1 warning)

- **Docstring Coverage** ⚠️ Warning — Docstring coverage is 74.68%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (3 passed)

- **Description Check** ✅ — Check skipped; CodeRabbit's high-level summary is enabled.
- **Title Check** ✅ — The title accurately summarizes the main changes: introduction of Basilica metadata verification and a new image-based deployment mode, replacing the previous HuggingFace and spot-check verification approach.
- **Merge Conflict Detection** ✅ — No merge conflicts detected when merging into main.




@coderabbitai bot left a comment

Actionable comments posted: 4

🤖 Fix all issues with AI agents
In `kinitro/cli/miner/deploy.py`:
- Around line 169-200: The "Next step" guidance shown when deploying with image
mode (the block handling the image variable and the deployment =
client.deploy(...) call) incorrectly implies a HuggingFace repo is required;
update the user-facing message printed after successful image deployment to
explicitly state that although a repo/revision is needed for the on-chain
commitment format (parse_commitment expects repo:rev:deployment_id), users
deploying pre-built images must still provide a repo and revision for the chain
commit or follow an alternative image-only commit format; either (A) change the
next-step text to clarify "If you deployed a pre-built image you still must
provide --repo and --revision for on-chain commitment (format
repo:rev:deployment_id)," or (B) implement support in parse_commitment to accept
an image-only commitment token and document that format—pick one and update the
message and parse_commitment usage accordingly (references: image variable,
deployment variable, and parse_commitment).

In `kinitro/cli/miner/verify.py`:
- Around line 40-41: The Miner UID is being hardcoded to MinerUID(0) in
_read_commitment_from_chain and another construction (both creating
MinerCommitment), so update _read_commitment_from_chain to accept an optional
uid parameter (e.g., uid: Optional[int] or Optional[MinerUID]) and use that when
constructing MinerCommitment instead of MinerUID(0); then thread that UID from
the verify function (which parses --uid in chain-lookup mode) into the call to
_read_commitment_from_chain and into any other place that currently uses
MinerUID(0) (refer to the MinerCommitment constructor sites and the verify
function) so the actual user-provided UID is preserved in the verification
result.

In `kinitro/executor/verification.py`:
- Around line 76-84: The parse_image_ref logic incorrectly uses
image.split(":")[0] when image_tag is provided, which drops registry ports and
repo paths (e.g., myregistry.com:5443/org/app); update the branch that handles a
supplied image_tag in the parse_image_ref function so it mirrors the safe
splitting used in the other branch: use image.rsplit(":", 1) and only treat the
right-hand part as a tag if that part does not contain a "/" (i.e., it's a real
tag); otherwise keep the full image as name and set tag = image_tag. Ensure you
reference the variables image, image_tag, name, tag and replace
image.split(":")[0] with this conditional rsplit-based logic.
- Around line 119-121: The __init__ creates BasilicaClient() without credentials
but BasilicaClient requires an API token; update the __init__ in verification.py
to supply a token (e.g., read BASILICA_API_TOKEN from environment and pass as
api_key to BasilicaClient or ensure the env var is set before construction),
remove or change the misleading comment about "No API key needed", and keep
get_public_deployment_metadata() usage unchanged since it can be called once the
client is initialized with a valid token.
🧹 Nitpick comments (9)
kinitro/cli/miner/deploy.py (1)

173-173: Name sanitization logic is duplicated across three locations.

The same sanitization pattern ("".join(c if c.isalnum() or c == "-" else "-" for c in name).strip("-")[:63]) appears at Lines 173, 208, and 482. Extract this into a shared helper to avoid inconsistency if the rules change.

♻️ Proposed helper
+def _sanitize_deployment_name(name: str) -> str:
+    """Sanitize a deployment name to DNS-compatible format (max 63 chars)."""
+    return "".join(c if c.isalnum() or c == "-" else "-" for c in name).strip("-")[:63]
+
+
 def basilica_push(

Then replace each inline expression with _sanitize_deployment_name(name).

Also applies to: 208-208

kinitro/executor/verification.py (1)

213-218: Unbounded concurrency in verify_miners may overwhelm registries or the Basilica API.

asyncio.gather(*tasks) fires all verification requests concurrently. With many miners, this could hit rate limits on Docker Hub's token endpoint or the Basilica metadata API. Consider using asyncio.Semaphore to cap concurrent verifications.

♻️ Proposed bounded concurrency
+_MAX_CONCURRENT_VERIFICATIONS = 10
+
     async def verify_miners(
         self, commitments: list[MinerCommitment]
     ) -> list[MetadataVerificationResult]:
         """Verify all miners concurrently."""
-        tasks = [self.verify_miner(c) for c in commitments]
-        return await asyncio.gather(*tasks)
+        sem = asyncio.Semaphore(_MAX_CONCURRENT_VERIFICATIONS)
+
+        async def _limited(c: MinerCommitment) -> MetadataVerificationResult:
+            async with sem:
+                return await self.verify_miner(c)
+
+        return await asyncio.gather(*[_limited(c) for c in commitments])
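The bounded-concurrency pattern in that diff can be demonstrated standalone. This sketch uses illustrative names and `asyncio.sleep(0)` as a stand-in for the metadata fetch; the real verifier would await a Basilica API call inside the semaphore instead:

```python
import asyncio

async def main() -> int:
    sem = asyncio.Semaphore(3)  # cap on in-flight "verifications"
    in_flight = 0
    peak = 0

    async def limited(i: int) -> None:
        nonlocal in_flight, peak
        async with sem:
            in_flight += 1
            peak = max(peak, in_flight)
            await asyncio.sleep(0)  # stand-in for the metadata fetch
            in_flight -= 1

    # 20 tasks are created eagerly, but at most 3 run inside the critical section.
    await asyncio.gather(*[limited(i) for i in range(20)])
    return peak

peak = asyncio.run(main())
assert peak <= 3
```

The key property is that `asyncio.gather` still collects all results in submission order, while the semaphore keeps registry and API pressure bounded.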
kinitro/scheduler/main.py (1)

210-253: Clean integration of metadata verification into the scheduler cycle.

The gating via config.metadata_verification_enabled, per-miner logging of results, and cycle-fail on zero verified miners are all well-handled. One minor observation: MetadataVerifier() (and its internal BasilicaClient()) is re-instantiated every cycle. Consider hoisting it to Scheduler.__init__ if the client is stateless and reusable across cycles.

kinitro/cli/miner/verify.py (3)

8-12: Importing a private function _query_commitment_by_hotkey_async from another module.

The underscore prefix signals this is an internal implementation detail of kinitro.chain.commitments. If this function is needed by the CLI, consider making it public (rename to query_commitment_by_hotkey_async) to formalize the contract.

#!/bin/bash
# Check how many places import this private function
rg -n '_query_commitment_by_hotkey_async' --type=py

52-52: Add type annotation to _print_result parameter.

The result parameter is untyped. Annotate it with MetadataVerificationResult for clarity and type-checker support.

♻️ Proposed fix
-def _print_result(result) -> None:
+def _print_result(result: "MetadataVerificationResult") -> None:

This requires importing MetadataVerificationResult (add to the existing import from kinitro.executor.verification):

-from kinitro.executor.verification import MetadataVerifier
+from kinitro.executor.verification import MetadataVerificationResult, MetadataVerifier

115-125: Multiple asyncio.run() calls in chain-mode path create separate event loops unnecessarily.

Lines 115 and 125 each call asyncio.run() for the hotkey lookup and commitment read. These could be combined into a single async function to reuse the event loop and the AsyncSubtensor connection.

♻️ Sketch of combined async helper
async def _resolve_commitment(network: str, netuid: int, uid: int | None, hotkey: str | None) -> tuple[str, MinerCommitment | None]:
    """Resolve hotkey (if needed) and read commitment in a single event loop."""
    async with AsyncSubtensor(network=network) as subtensor:
        if uid is not None and not hotkey:
            neurons = await subtensor.neurons(netuid=netuid)
            if uid < 0 or uid >= len(neurons):
                return "", None
            hotkey = neurons[uid].hotkey
        raw, block = await _query_commitment_by_hotkey_async(subtensor, netuid, hotkey)
        ...
tests/unit/test_metadata_verification.py (2)

114-116: MetadataVerifier.__new__ bypasses __init__ — consider a lighter alternative.

Using __new__ to skip __init__ and manually assigning _client works but is fragile: if __init__ adds new attributes, every test silently breaks. Patching the client constructor is more resilient.

♻️ Suggested alternative
-        verifier = MetadataVerifier.__new__(MetadataVerifier)
-        verifier._client = MagicMock()
-        verifier._client.get_public_deployment_metadata = MagicMock(return_value=_FakeMetadata())
+        mock_client = MagicMock()
+        mock_client.get_public_deployment_metadata.return_value = _FakeMetadata()
+        with patch("kinitro.executor.verification.BasilicaClient", return_value=mock_client):
+            verifier = MetadataVerifier()

Apply the same pattern to the other TestVerifyMiner and TestVerifyMiners tests.


25-51: _make_commitment and _FakeMetadata are duplicated in test_metadata_e2e.py.

Both this file and tests/integration/test_metadata_e2e.py define near-identical _make_commitment and _FakeMetadata helpers (with slightly different defaults). Consider extracting them into a shared conftest.py or tests/helpers.py to reduce duplication and keep defaults consistent.

tests/integration/test_metadata_e2e.py (1)

90-98: Simplify the deploy kwargs assertions.

The or conditions are logically redundant — kwargs.get("source") is None is already true when the key is absent. A simpler check would be clearer.

♻️ Suggested simplification
-        assert deploy_call.kwargs.get("source") is None or "source" not in deploy_call.kwargs
-        assert (
-            deploy_call.kwargs.get("pip_packages") is None
-            or "pip_packages" not in deploy_call.kwargs
-        )
+        assert deploy_call.kwargs.get("source") is None
+        assert deploy_call.kwargs.get("pip_packages") is None
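The redundancy noted above follows from `dict.get` semantics — it returns `None` both when the key is absent and when the key maps to `None`, so the extra `"... not in kwargs"` branch can never change the outcome:

```python
# Why the `or` branch is redundant: dict.get returns None both when the key
# is absent and when it is present with an explicit None value.
kwargs = {"image": "python:3.11-slim"}  # "source" never passed
assert kwargs.get("source") is None     # absent key -> None
assert "source" not in kwargs           # the extra membership check adds nothing

kwargs["source"] = None                 # explicitly passed as None
assert kwargs.get("source") is None     # same result either way
```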

Comment on lines +40 to +41
return MinerCommitment(
uid=MinerUID(0),

⚠️ Potential issue | 🟡 Minor

Miner UID is hardcoded to 0 even when the actual UID is known.

In chain-lookup mode with --uid, the user-provided UID is available but not passed into the MinerCommitment (Line 41 and Line 98 both use MinerUID(0)). The verification result will display miner_uid=0, which is misleading. Pass the known UID when available.

🐛 Proposed fix for chain mode (Lines 40-41)

Refactor _read_commitment_from_chain to accept an optional UID:

 async def _read_commitment_from_chain(
-    network: str, netuid: int, hotkey: str
+    network: str, netuid: int, hotkey: str, uid: int = 0
 ) -> MinerCommitment | None:
     ...
     return MinerCommitment(
-        uid=MinerUID(0),
+        uid=MinerUID(uid),
         ...
     )

And in the verify function, thread the UID through:

-        commitment = asyncio.run(_read_commitment_from_chain(network, netuid, query_hotkey))
+        lookup_uid = uid if uid is not None else 0
+        commitment = asyncio.run(_read_commitment_from_chain(network, netuid, query_hotkey, uid=lookup_uid))

Also applies to: 97-105


Comment on lines +76 to +84
# If image_tag provided separately, strip any tag already on the image name
if image_tag:
name = image.split(":")[0]
tag = image_tag
elif ":" in image:
name, tag = image.rsplit(":", 1)
else:
name = image
tag = "latest"

⚠️ Potential issue | 🟠 Major

parse_image_ref mishandles images with port-based registries when image_tag is provided.

Line 78 uses image.split(":")[0], which splits from the left. For an image like myregistry.com:5443/org/app with a separate image_tag, this produces name = "myregistry.com" — losing the port and repository path entirely.

Use rsplit with awareness of the registry-port pattern, consistent with the elif branch on Line 81.

🐛 Proposed fix
     # If image_tag provided separately, strip any tag already on the image name
     if image_tag:
-        name = image.split(":")[0]
+        # Use rsplit to handle registry:port correctly (e.g., myregistry.com:5443/org/app:v1)
+        # Only strip a trailing :<tag> component, not a :<port> in the registry.
+        if "/" in image and ":" in image.rsplit("/", 1)[-1]:
+            name = image.rsplit(":", 1)[0]
+        else:
+            name = image
         tag = image_tag

Comment on lines +119 to +121
def __init__(self) -> None:
# No API key needed for public metadata reads
self._client = BasilicaClient()

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🌐 Web query:

basilica-sdk python BasilicaClient initialization without api_key public metadata

💡 Result:

In the current basilica-sdk (PyPI package basilica-sdk, e.g. 0.19.0), you cannot initialize BasilicaClient without an API token: the underlying PyO3/Rust constructor always resolves api_key from the argument or BASILICA_API_TOKEN and errors if neither is present. [1]

Even though the Python wrapper includes get_public_deployment_metadata() and documents it as “no authentication required”, you still need a token just to construct the client. [2]

What to do instead

  • If you can provide a token (recommended):

    import os
    from basilica import BasilicaClient
    
    os.environ["BASILICA_API_TOKEN"] = "basilica_..."  # or pass api_key=...
    client = BasilicaClient()
    md = client.get_public_deployment_metadata("my-deployment")

    (Constructor + env var usage shown in the SDK docs.) [3]

  • If you truly need zero token usage, the SDK won’t work as-is—you’ll need to call the public-metadata HTTP endpoint directly (e.g., with requests) once you’ve identified the exact route from Basilica’s API docs/source. The SDK itself does not provide an “unauthenticated client” mode because construction fails without a token. [1]

Sources:
[1] Rust/PyO3 BasilicaClient constructor enforces API key (env var fallback, otherwise error) — crates/basilica-sdk-python/src/lib.rs [1]
[2] Python wrapper exposes get_public_deployment_metadata() and describes it as unauthenticated — python/basilica/__init__.py [2]
[3] PyPI project description shows BASILICA_API_TOKEN usage / BasilicaClient() init — PyPI basilica-sdk page [3]


BasilicaClient() requires an API key and will fail at initialization.

The Basilica SDK (v0.19.0) constructor always requires an API token—either passed as the api_key argument or via the BASILICA_API_TOKEN environment variable. Initializing without either will raise an error. Although get_public_deployment_metadata() does not require authentication to call, the client itself cannot be constructed without a token.

Pass the API key or set the environment variable before constructing the client:

import os
from basilica import BasilicaClient

os.environ["BASILICA_API_TOKEN"] = "basilica_..."  # or pass api_key="..."
self._client = BasilicaClient()

Also remove the misleading comment—an API key is required even for public metadata reads.


…oyments

Replace the HuggingFace upload workflow with a Docker-image-only deployment
model via Basilica. Simplify on-chain commitment format to just deployment_id
(or encrypted e:<blob>), with backward compatibility for legacy repo:rev:id
format. Remove all deprecated fields (miner_repo, miner_revision,
verification_passed, verification_score).
@coderabbitai bot left a comment

Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `docs/miner-guide.md`:
- Around line 383-390: Add language specifiers to the two fenced code blocks
that show the commitment examples so the markdown linter MD040 is satisfied:
update the block containing "deployment-uuid" and the block containing
"e:<base85-encrypted-blob>" to use a plaintext identifier (e.g., ```text)
instead of bare ```; ensure both blocks are changed consistently in the
miner-guide.md content.
🧹 Nitpick comments (8)
kinitro/chain/commitments.py (1)

263-352: Significant duplication between sync and async read_miner_commitments.

The two functions are nearly identical, differing only in await calls. Consider extracting the shared commitment-building logic into a helper to reduce duplication. This is an existing pattern though, so it can be addressed separately.

Also applies to: 355-425

kinitro/cli/miner/verify.py (2)

49-49: Add type annotation for result parameter.

_print_result accepts an untyped parameter. Adding MetadataVerificationResult improves readability and enables type-checker coverage.

♻️ Suggested fix
-def _print_result(result) -> None:
+def _print_result(result: "MetadataVerificationResult") -> None:

You'll also need to import it:

 from kinitro.executor.verification import MetadataVerifier
+from kinitro.executor.verification import MetadataVerificationResult

(Or combine into a single import.)


8-11: Importing private function _query_commitment_by_hotkey_async.

This creates a coupling to an internal API (underscore-prefixed by convention). Consider exposing a public wrapper in commitments.py or reusing the existing _read_commitment_from_chain pattern from commitment.py to avoid depending on the private symbol directly.

kinitro/cli/miner/deploy.py (3)

14-25: Redundant length check on url_parts.

Line 21: str.split(".") always returns a list with at least one element, so len(url_parts) > 0 is always True. Consider removing it or replacing with a more meaningful guard (e.g., checking the extracted part is non-empty).

♻️ Suggested simplification
     if deployment.url and (
         deployment.url.startswith("http://") or deployment.url.startswith("https://")
     ):
         try:
             url_parts = deployment.url.split("//")[1].split(".")
-            if len(url_parts) > 0:
-                return url_parts[0]
+            subdomain = url_parts[0]
+            if subdomain:
+                return subdomain
         except (IndexError, AttributeError):
             pass

120-127: --image is required even when --skip-deploy is set.

When --skip-deploy is used (line 128), the image value is only used for display in the summary (lines 196, 297). Requiring users to provide an image they aren't deploying is a minor UX friction. Consider making image optional (defaulting to None) and validating it's present only when skip_deploy is False.


99-104: Metadata enrollment logic is duplicated between basilica_push and miner_deploy.

The enroll_metadata try/except block appears in both functions (lines 99–104 and 246–251). Consider extracting a small helper to keep this DRY.

Also applies to: 246-251

tests/unit/test_metadata_verification.py (2)

105-124: Consider using a fixture or factory instead of __new__ to construct MetadataVerifier.

Using MetadataVerifier.__new__(MetadataVerifier) to bypass __init__ (lines 109, 129, 144, 164, 178, 200) is fragile — if __init__ gains additional state in the future, these tests will silently break. A shared pytest fixture that patches BasilicaClient during construction would be more resilient and reduce repetition.

♻️ Example fixture
`@pytest.fixture`
def verifier():
    with patch("kinitro.executor.verification.BasilicaClient") as mock_cls:
        v = MetadataVerifier()
        v._client = mock_cls.return_value
        yield v

Then each test receives verifier and configures verifier._client.get_public_deployment_metadata as needed.


196-224: Batch test could verify result-to-commitment ordering.

The test asserts results[0].verified is True and results[1].verified is False, relying on asyncio.gather preserving order (which it does). For clarity and robustness, also asserting on results[0].miner_uid / results[1].miner_uid would make the ordering expectation explicit and catch any future refactors that break it.

Comment on lines +383 to +390

```
deployment-uuid
```

**Encrypted commitment:**
```json
{"m":"your-username/kinitro-policy","r":"abc123def456...","e":"<base85-encrypted-blob>"}
```

Where:

- `m` = HuggingFace model repository
- `r` = HuggingFace revision (commit SHA)
- `d` = Basilica deployment ID (UUID) - for plain commitments
- `e` = Encrypted deployment ID (base85 blob) - for encrypted commitments

```
e:<base85-encrypted-blob>
```

⚠️ Potential issue | 🟡 Minor

Fenced code blocks missing language specifier (MD040).

The commitment format examples on lines 383 and 388 lack a language identifier, which triggers the markdownlint MD040 warning. Adding text (or plaintext) as the language will satisfy the linter.

📝 Proposed fix

-```
+```text
 deployment-uuid

**Encrypted commitment:**

-```
+```text
 e:<base85-encrypted-blob>

🧰 Tools
🪛 markdownlint-cli2 (0.20.0)

[warning] 383-383: Fenced code blocks should have a language specified

(MD040, fenced-code-language)


[warning] 388-388: Fenced code blocks should have a language specified

(MD040, fenced-code-language)


