
fix: update verify command to use network flag instead of chain ID #24

Merged
bowd merged 8 commits into main from fix/verify-network-flag
Feb 27, 2026

Conversation


bowd (Contributor) commented Sep 9, 2025

Summary

This PR fixes the verify command to use network names (via --network flag) instead of chain IDs, making it consistent with all other treb commands.

Problem

The verify command was the only command that:

  • Used --chain flag with numeric chain IDs
  • Relied on a hardcoded chain-ID-to-network-name mapping that failed for any network not in the list
  • Was inconsistent with other commands like run, list, tag that use --network

Solution

  • Replace --chain flag with --network flag
  • Remove hardcoded chain ID mapping
  • Dynamically resolve network names from configured networks in foundry.toml
  • Update help text and examples to use network names

Changes

  • CLI: Replace --chain flag with --network flag in verify command
  • Usecase: Remove hardcoded getNetworkName() function that mapped chain IDs
  • Usecase: Add dynamic getNetworkNameForChainID() that searches configured networks
  • Usecase: Update error messages to reference --network instead of --chain
  • Usecase: Remove chain ID parsing from deployment identifier matching

Testing

# Before (would fail for non-hardcoded chains):
treb verify Counter --chain 11155111
treb verify 0x1234... --chain 11155111

# After (works with any configured network):
treb verify Counter --network sepolia
treb verify 0x1234... --network sepolia

Breaking Changes

This is a breaking change for users of the --chain flag: existing scripts must be updated to pass --network instead.
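
For scripted callers the migration is a mechanical flag swap; a sketch using sed is below. The 11155111 → sepolia pairing is only an example, and the network name must match one configured in your foundry.toml.

```shell
# Illustrative migration: rewrite the removed `--chain <id>` flag to
# `--network <name>`. Run the same substitution over your own scripts
# (e.g. `sed -i 's/--chain 11155111/--network sepolia/g' deploy.sh`).
echo 'treb verify Counter --chain 11155111' \
  | sed 's/--chain 11155111/--network sepolia/'
# prints: treb verify Counter --network sepolia
```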

🤖 Generated with Claude Code

Summary by CodeRabbit

  • New Features

    • Verify supports selecting multiple verifiers (etherscan, blockscout, sourcify) via flags, a custom Blockscout URL, and a network (-n) flag.
  • Refactor

    • Verification and listing flows made network-centric; progress reporting unified with improved non-interactive rendering.
  • UI

    • Deployment lists show network names (name (ID)); verifier status simplified to e/s/b; removed leading info glyphs from tag messages.
  • Tests

    • Added unit tests and manual scripts for verify flows and progress indicators.
  • Documentation

    • Help text and examples updated for multi-verifier and network usage.


coderabbitai bot commented Sep 9, 2025

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.


Walkthrough

Replaces the legacy verification subsystem with a new multi-verifier flow (etherscan, blockscout, sourcify); adds DeploymentFilter-driven APIs, NetworkResolver and DeploymentResolver wiring, interactive/non-interactive progress reporters, CLI verifier flags and network flag, and updates renderers and golden outputs for network names and verifier statuses.

Changes

Cohort / File(s) Summary
CLI: verify + render
internal/cli/verify.go, internal/cli/render/deployments.go
Added verifier selection flags and blockscout URL; replaced chain-filter with network flag; build DeploymentFilter from config; call new VerifyAll/VerifySpecific signatures; renderer now accepts networkNames map[uint64]string and shows network-name-aware chain labels and per-verifier statuses including Blockscout.
Usecases: verify & list
internal/usecase/verify_deployment.go, internal/usecase/list_deployments.go, internal/usecase/ports.go, internal/usecase/verify_deployment_test.go
VerifyDeployment now depends on DeploymentResolver and ProgressSink; VerifyAll/VerifySpecific accept domain.DeploymentFilter; network-name resolution via NetworkResolver; ContractVerifier.Verify signature extended with verifiers and blockscout URL; ListDeployments returns NetworkNames map; tests added for resolver query propagation.
Verification adapters & providers
internal/adapters/verification/verifier.go, internal/adapters/verification/internal_verifier.go (removed), internal/adapters/verification/manager.go (removed), internal/adapters/providers.go
Removed legacy internal verifier/manager; introduced new Verifier (NewVerifier) replacing VerifierAdapter; Verify runs multi-explorer verification (etherscan/blockscout/sourcify), executes forge verify, builds explorer URLs, aggregates per-verifier statuses; providers wiring updated to use NewVerifier.
Progress reporters
internal/adapters/progress/spinner.go, internal/adapters/progress/verify_progress.go
Renamed progress hook to OnProgress, added Info/Error helpers on spinner reporter; new VerifyProgress implements usecase.ProgressSink, supports interactive spinner and non-interactive line output, emits gathering/network-resolve/verifying/completed stages.
Wiring / DI
internal/app/wire_gen.go, internal/adapters/providers.go
NetworkResolver injected earlier and passed into ListDeployments; NewVerifyDeployment wiring updated to accept deploymentResolver and progress reporter; switched to NewVerifier.
Tests & manual scripts
internal/usecase/verify_deployment_test.go, test/manual/*
Added unit test ensuring DeploymentQuery propagation; added manual scripts to exercise verify progress and interactive verify scenarios.
Golden outputs / UI text
test/testdata/golden/...
Updated expected outputs: chain labels now show name (id) (e.g., anvil-31337 (31337)); emoji status markers replaced with e[-] s[-] b[-]; removed some informational glyphs in tag messages.

Sequence Diagram(s)

sequenceDiagram
  autonumber
  actor User
  participant CLI as CLI (verify cmd)
  participant Config as App.Config
  participant Usecase as VerifyDeployment
  participant DepRes as DeploymentResolver
  participant NetRes as NetworkResolver
  participant Verifier as Verifier
  participant Progress as ProgressSink
  User->>CLI: treb verify <id|--all> --network <name?> [--etherscan|--blockscout|--sourcify]
  CLI->>Config: read Config (network → chainID)
  CLI->>Usecase: VerifyAll(ctx, DeploymentFilter, VerifyOptions)
  Usecase->>Progress: emit(gathering, "Gathering deployments")
  Usecase->>DepRes: ResolveDeployment(ctx, DeploymentQuery{Reference, ChainID, Namespace})
  alt need network name
    Usecase->>NetRes: GetNetworks()
    NetRes-->>Usecase: networks
    Usecase->>Usecase: getNetworkNameForChainID(chainID)
    Usecase->>Progress: emit(network-resolve, "Resolved network")
  end
  Usecase->>Progress: emit(verifying, "Verifying <displayName>")
  Usecase->>Verifier: Verify(ctx, deployment, network, verifiers[], blockscoutVerifierURL)
  Verifier-->>Usecase: result / error
  Usecase->>Progress: emit(completed, result)
  Usecase-->>CLI: return VerifyAllResult / VerifyResult
  CLI-->>User: render results (includes network names & verifier statuses)

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Poem

I twitch my whiskers, hop and pry,
From chains to networks I now fly.
I nibble flags and spin my wheel,
Emit small hops for each verify deal.
Hooray — the rabbit stamps: “All clear!” 🥕

🚥 Pre-merge checks — ✅ 3 passed

  • Description Check — ✅ Passed — Check skipped: CodeRabbit’s high-level summary is enabled.
  • Title Check — ✅ Passed — The title accurately summarizes the primary change: replacing the chain ID flag with a network flag in the verify command, which is the core objective of this PR.
  • Docstring Coverage — ✅ Passed — Docstring coverage is 100.00%, which is sufficient; the required threshold is 80.00%.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


coderabbitai bot left a comment

Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
internal/usecase/verify_deployment.go (1)

281-288: Fix CI: remove unused parseChainID and its import.

golangci-lint flags parseChainID as unused; also “strconv” is now unused.

Apply:

 import (
   "context"
   "fmt"
-  "strconv"
   "strings"
   ...
 )
@@
-func parseChainID(s string) uint64 {
-  chainID, err := strconv.ParseUint(s, 10, 64)
-  if err != nil {
-    return 0
-  }
-  return chainID
-}

Also applies to: 3-11

🧹 Nitpick comments (3)
internal/usecase/verify_deployment.go (2)

173-191: Double-resolve of network; consider returning NetworkInfo from the helper.

You resolve once in getNetworkNameForChainID and again via ResolveNetwork. Inline the second by returning (name, info) from the helper to avoid extra resolver calls.

Illustrative refactor:

- networkName, err := v.getNetworkNameForChainID(ctx, deployment.ChainID)
+ networkName, networkInfo, err := v.getNetworkForChainID(ctx, deployment.ChainID)
...
- networkInfo, err := v.networkResolver.ResolveNetwork(ctx, networkName)
- if err != nil {
+ if err != nil {
   return &VerifyResult{ ...

And implement getNetworkForChainID to return both name and resolved info (reusing the already-resolved network). Low-risk perf win when verifying many deployments on the same chain.


290-310: Micro-cache chainID→network to avoid repeated scans.

When verifying many deployments, getNetworkNameForChainID scans and resolves all networks per call. Cache results in VerifyDeployment.

Apply:

 type VerifyDeployment struct {
   repo             DeploymentRepository
   contractVerifier ContractVerifier
   networkResolver  NetworkResolver
+  networkNameCache map[uint64]string
 }
@@
   return &VerifyDeployment{
     repo:             repo,
     contractVerifier: contractVerifier,
     networkResolver:  networkResolver,
+    networkNameCache: make(map[uint64]string),
   }
@@
 func (v *VerifyDeployment) getNetworkNameForChainID(ctx context.Context, chainID uint64) (string, error) {
+  if name, ok := v.networkNameCache[chainID]; ok {
+    return name, nil
+  }
   networkNames := v.networkResolver.GetNetworks(ctx)
   for _, name := range networkNames {
     network, err := v.networkResolver.ResolveNetwork(ctx, name)
     if err != nil {
       continue
     }
     if network.ChainID == chainID {
+      v.networkNameCache[chainID] = name
       return name, nil
     }
   }
   return "", fmt.Errorf("no network configuration found for chain ID %d. Please ensure the network is configured in foundry.toml", chainID)
 }

Also applies to: 14-31, 20-31

internal/cli/verify.go (1)

32-39: Deduplicate and tighten examples.

“staging/Counter” appears twice. Clean up to avoid confusion.

   treb verify Counter:v2                   # Verify specific deployment by label
   treb verify staging/Counter              # Verify by namespace/contract
-  treb verify Counter --network sepolia    # Verify by contract on network
-  treb verify staging/Counter              # Verify by namespace/contract
+  treb verify Counter --network sepolia    # Verify by contract on network
   treb verify 0x1234... --network sepolia  # Verify by address (requires --network)
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between a60ed04 and a0e75fa.

📒 Files selected for processing (2)
  • internal/cli/verify.go (3 hunks)
  • internal/usecase/verify_deployment.go (3 hunks)
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-08-26T10:13:23.572Z
Learnt from: CR
PR: trebuchet-org/treb-cli#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-26T10:13:23.572Z
Learning: Applies to cli/pkg/verification/**/*.go : Implement contract verification logic under cli/pkg/verification/

Applied to files:

  • internal/cli/verify.go
🧬 Code graph analysis (2)
internal/usecase/verify_deployment.go (1)
internal/domain/models/deployment.go (1)
  • Deployment (38-69)
internal/cli/verify.go (1)
internal/domain/config/runtime.go (1)
  • Network (33-38)
🪛 GitHub Actions: CI
internal/usecase/verify_deployment.go

[error] 282-282: golangci-lint: func parseChainID is unused (unused)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: Integration Tests (macOS)
  • GitHub Check: Integration Tests

Comment on lines +58 to 63

// Get network info from app config if available
if app.Config.Network != nil {
filter.ChainID = app.Config.Network.ChainID
}


💡 Verification agent

🧩 Analysis chain

--network flag is declared but never used here; address lookups will still fail without ChainID.

  • You set filter.ChainID from app.Config.Network, but nothing wires the new --network into app.Config for this command.
  • If getApp doesn’t bind/propagate this flag, lookups by address will hit “--network is required” despite the user passing it.

Action one of:

  • Remove this local flag and rely on a root persistent “--network” that getApp already honors.
  • Or bind/consume the flag here and apply it to the app before building the filter.

If binding locally, e.g.:

 import (
   "fmt"
   "os"
+  "github.com/spf13/viper"
@@
- cmd.Flags().StringP("network", "n", "", "Network to run on (e.g., mainnet, sepolia, local)")
+ cmd.Flags().StringP("network", "n", "", "Network to run on (e.g., mainnet, sepolia, local)")
+ // Bind so getApp (if Viper-based) sees it
+ _ = viper.BindPFlag("network", cmd.Flags().Lookup("network"))

And before creating the filter (or immediately after getApp):

   app, err := getApp(cmd)
   if err != nil { return err }
+  // If your app supports selecting a network by name, ensure it is applied here.
+  if n, _ := cmd.Flags().GetString("network"); n != "" && (app.Config.Network == nil || app.Config.Network.Name != n) {
+    // Example: choose the appropriate call per your app (SetNetwork/ApplyNetwork/etc.)
+    if err := app.ApplyNetwork(cmd.Context(), n); err != nil {
+      return fmt.Errorf("failed to select network %q: %w", n, err)
+    }
+  }

Please confirm which path your codebase uses (root persistent flag vs. local binding) and wire accordingly.

Run to check wiring:

Also applies to: 99-99


🏁 Script executed:

#!/bin/bash
# Inspect network flag handling across the repo
rg -nP -C2 'PersistentFlags\(\)\.StringP\(\s*"network"|StringP\(\s*"network"' --type=go
rg -nP -C2 'BindPFlag\(\s*"network"' --type=go
rg -nP -C2 'GetString\(\s*"network"' --type=go
rg -nP -C2 'func getApp\(' --type=go

Length of output: 2697


Bind or remove the local --network flag
internal/cli/verify.go: the local cmd.Flags().StringP("network", "n", ...) isn’t bound via Viper, so v.GetString("network") in internal/config/provider.go never sees it. Either remove this local flag to use a root persistent --network, or bind it here (e.g. _ = viper.BindPFlag("network", cmd.Flags().Lookup("network")) before getApp) and apply the selected network.

🤖 Prompt for AI Agents
internal/cli/verify.go around lines 58 to 63: the local --network flag is
declared but not bound to Viper so v.GetString("network") never sees it; bind
the flag to Viper before calling getApp (e.g. call viper.BindPFlag for
cmd.Flags().Lookup("network")), then read the selected network from Viper and
apply it (override or set filter.ChainID accordingly) so the user's local flag
takes effect; alternatively remove the local flag to rely on the root persistent
--network flag.

Comment on lines 129 to 135
     if filter.ChainID == 0 {
-        return nil, fmt.Errorf("--chain flag is required when looking up by address")
+        return nil, fmt.Errorf("--network flag is required when looking up by address")
     }
     deployment, err = v.repo.GetDeploymentByAddress(ctx, filter.ChainID, identifier)
     if err != nil {
-        return nil, fmt.Errorf("deployment not found at address %s on chain %d", identifier, filter.ChainID)
+        return nil, fmt.Errorf("deployment not found at address %s on network", identifier)
     }

⚠️ Potential issue

Address-lookup errors: include concrete network context and fix incomplete message.

  • “deployment not found at address %s on network” is incomplete; include network name or chain ID.
  • Minor copy: drop “flag” for consistency with other help text.

Apply:

-    if filter.ChainID == 0 {
-      return nil, fmt.Errorf("--network flag is required when looking up by address")
-    }
+    if filter.ChainID == 0 {
+      return nil, fmt.Errorf("--network is required when looking up by address")
+    }
...
-    return nil, fmt.Errorf("deployment not found at address %s on network", identifier)
+    if name, _ := v.getNetworkNameForChainID(ctx, filter.ChainID); name != "" {
+      return nil, fmt.Errorf("deployment not found at address %s on network %s", identifier, name)
+    }
+    return nil, fmt.Errorf("deployment not found at address %s on chain ID %d", identifier, filter.ChainID)
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

-    if filter.ChainID == 0 {
-        return nil, fmt.Errorf("--network flag is required when looking up by address")
-    }
-    deployment, err = v.repo.GetDeploymentByAddress(ctx, filter.ChainID, identifier)
-    if err != nil {
-        return nil, fmt.Errorf("deployment not found at address %s on network", identifier)
-    }
+    if filter.ChainID == 0 {
+        return nil, fmt.Errorf("--network is required when looking up by address")
+    }
+    deployment, err = v.repo.GetDeploymentByAddress(ctx, filter.ChainID, identifier)
+    if err != nil {
+        if name, _ := v.getNetworkNameForChainID(ctx, filter.ChainID); name != "" {
+            return nil, fmt.Errorf("deployment not found at address %s on network %s", identifier, name)
+        }
+        return nil, fmt.Errorf("deployment not found at address %s on chain ID %d", identifier, filter.ChainID)
+    }
🤖 Prompt for AI Agents
In internal/usecase/verify_deployment.go around lines 129–135, the error
messages are incomplete: change the validation error to reference "--network"
(drop the word "flag") and update the lookup failure to include the network
context (e.g., chain ID or resolved network name) and the underlying error;
specifically, if filter.ChainID == 0 return a message like "--network is
required when looking up by address", and when GetDeploymentByAddress fails
return a formatted error that includes the identifier and the chain ID (or
network name) plus the wrapped err so callers get concrete network context and
the original error details.

coderabbitai bot left a comment

Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
internal/adapters/progress/spinner.go (1)

194-210: NopProgressReporter doesn’t implement ProgressSink

NopProgressReporter still exposes ReportStage/ReportProgress but lacks OnProgress/Info/Error. Either delete this type (use usecase.NopProgress) or implement the new interface.

 type NopProgressReporter struct{}
@@
-func (r *NopProgressReporter) ReportStage(ctx context.Context, stage usecase.ExecutionStage) {}
-
-// ReportProgress does nothing
-func (r *NopProgressReporter) ReportProgress(ctx context.Context, event usecase.ProgressEvent) {}
+func (r *NopProgressReporter) OnProgress(ctx context.Context, event usecase.ProgressEvent) {}
+func (r *NopProgressReporter) Info(message string)                                {}
+func (r *NopProgressReporter) Error(message string)                               {}

Optional: add a compile-time assertion mirroring the spinner one.

🧹 Nitpick comments (26)
internal/usecase/ports.go (2)

88-92: Ambiguity: one name per chain ID loses information

NetworkNames map[uint64]string cannot represent multiple configured networks sharing the same chain ID (common for multiple RPCs to 1 chain). Listing may show an arbitrary name. Prefer map[uint64][]string or a documented canonical‑selection rule.

Example change:

 type DeploymentListResult struct {
   Deployments  []*models.Deployment
   Summary      DeploymentSummary
-  NetworkNames map[uint64]string // Map of chain ID to network name
+  NetworkNames map[uint64][]string // chain ID -> one or more network names
 }

66-76: Duplicate interfaces for progress sinks

ComposeSink and RunProgressSink duplicate ProgressSink. This invites drift. Either alias them or remove duplicates.

Example:

-type ComposeSink interface {
-  OnProgress(ctx context.Context, event ProgressEvent)
-  Info(message string)
-  Error(message string)
-}
+type ComposeSink = ProgressSink
@@
-type RunProgressSink interface {
-  OnProgress(ctx context.Context, event ProgressEvent)
-  Info(message string)
-  Error(message string)
-}
+type RunProgressSink = ProgressSink
test/manual/test_progress_simple.sh (2)

4-4: Harden shell options

Add -u and -o pipefail to catch unset vars and pipeline failures.

-set -e
+set -euo pipefail

6-8: Guard against missing binary

Fail fast if $TREB_BIN isn’t executable; optionally fall back to command -v treb.

 TREB_BIN="$SCRIPT_DIR/../../bin/treb"
 
+if [[ ! -x "$TREB_BIN" ]]; then
+  echo "treb binary not found at $TREB_BIN"; exit 1
+fi
test/manual/test_verify_progress.sh (5)

4-4: Harden shell options

Use strict mode for more reliable E2E runs.

-set -e
+set -euo pipefail

53-58: Race: anvil readiness is time-based

sleep 2 can flake. Wait for the port instead.

-anvil --port 8547 > /dev/null 2>&1 &
-ANVIL_PID=$!
-sleep 2
+anvil --port 8547 > /dev/null 2>&1 &
+ANVIL_PID=$!
+# wait for port 8547
+for i in {1..50}; do
+  if curl -sSf http://127.0.0.1:8547 >/dev/null 2>&1; then break; fi
+  sleep 0.2
+done

53-61: Ensure cleanup on error

Trap to kill anvil even if the script fails.

 anvil --port 8547 > /dev/null 2>&1 &
 ANVIL_PID=$!
@@
-# Configure network
+# Ensure cleanup
+trap 'kill ${ANVIL_PID} 2>/dev/null || true' EXIT
+# Configure network

84-88: Deterministic network selection

For predictability post-flag change, pass --network test-network to verify --all.

-echo "Command: $TREB_BIN verify --all"
+echo "Command: $TREB_BIN verify --all --network test-network"
@@
-"$TREB_BIN" verify --all || true
+"$TREB_BIN" verify --all --network test-network || true

6-9: Pre-flight prerequisite checks

Optional: fail fast if required tools are missing.

 TREB_BIN="$TREB_CLI_DIR/bin/treb"
 
+command -v forge >/dev/null || { echo "forge not found"; exit 1; }
+command -v anvil >/dev/null || { echo "anvil not found"; exit 1; }
+[[ -x "$TREB_BIN" ]] || { echo "treb not found at $TREB_BIN"; exit 1; }
test/manual/test_verify_interactive.sh (3)

4-4: Harden shell options

Use strict mode.

-set -e
+set -euo pipefail

35-40: Race: anvil readiness

Replace fixed sleep with port wait; add trap for cleanup.

 anvil --port 8546 > /dev/null 2>&1 &
 ANVIL_PID=$!
-sleep 2
+trap 'kill ${ANVIL_PID} 2>/dev/null || true' EXIT
+for i in {1..50}; do
+  if curl -sSf http://127.0.0.1:8546 >/dev/null 2>&1; then break; fi
+  sleep 0.2
+done

55-64: Script is non-interactive by design—make that explicit

Consider prompting before running the interactive command, or clearly instruct the user to run it manually.

-echo "Run: $TREB_BIN verify Counter --network test-network"
+echo "Run manually: $TREB_BIN verify Counter --network test-network"
+# Optionally:
+# read -r -p "Press Enter to run interactive verify now (Ctrl-C to skip)..." _
+# "$TREB_BIN" verify Counter --network test-network
internal/adapters/progress/spinner.go (2)

30-39: Cursor visibility

Consider hiding the cursor during spinner activity for cleaner UX.

- s.HideCursor = false
+ s.HideCursor = true

76-92: OnProgress overrides stage display

OnProgress sets spinner.Suffix to the message, overwriting updateSpinnerDisplay()’s stage chain. Decide on one UX or merge both (e.g., “ — ”).

-  r.spinner.Suffix = " " + event.Message
+  if display := r.composeDisplayWithMessage(event.Message); display != "" {
+    r.spinner.Suffix = " " + display
+  } else {
+    r.spinner.Suffix = " " + event.Message
+  }

Support method (outside selected range):

func (r *SpinnerProgressReporter) composeDisplayWithMessage(msg string) string {
  // reuse updateSpinnerDisplay logic to build stage string, then append msg
  // or extract logic from updateSpinnerDisplay into a pure builder
  return "" // implement
}
internal/usecase/verify_deployment_test.go (2)

30-57: Assert error semantics explicitly (propagate ErrNotFound)

After calling VerifySpecific, assert the concrete error using errors.Is to lock in behavior.

Apply this diff:

-        // Error is expected since mock returns not found
-        assert.Error(t, err)
+        // Error is expected since mock returns not found and should be propagated
+        assert.Error(t, err)
+        assert.ErrorIs(t, err, domain.ErrNotFound)

60-70: Mock: capture call count for stricter verification (optional)

Track and assert ResolveDeployment is invoked exactly once.

Apply this diff:

 type mockDeploymentResolver struct {
-    resolveFunc func(context.Context, domain.DeploymentQuery) (*models.Deployment, error)
+    resolveFunc  func(context.Context, domain.DeploymentQuery) (*models.Deployment, error)
+    calledCount  int
 }
 
 func (m *mockDeploymentResolver) ResolveDeployment(ctx context.Context, query domain.DeploymentQuery) (*models.Deployment, error) {
-    if m.resolveFunc != nil {
-        return m.resolveFunc(ctx, query)
-    }
+    m.calledCount++
+    if m.resolveFunc != nil { return m.resolveFunc(ctx, query) }
     return nil, domain.ErrNotFound
 }

Then assert in the test:

-    // Verify the deployment resolver was called with correct query
+    // Verify the deployment resolver was called with correct query
+    assert.Equal(t, 1, mockResolver.calledCount)
internal/cli/render/deployments.go (1)

466-500: Width calculation should use rune width, not byte length

len() on strings mis-measures wide chars (✓, ✗, ⏳) and can misalign columns. Use go-pretty/text.RuneWidth after stripping ANSI.

Apply this diff:

-    for _, table := range tables {
-        for _, row := range table {
+    for _, tbl := range tables {
+        for _, row := range tbl {
             for colIdx, cell := range row {
                 if colIdx < len(widths) {
-                    // Strip ANSI codes for width calculation
-                    cellWidth := len(stripAnsiCodes(cell))
+                    // Strip ANSI; then measure display width
+                    cellWidth := text.RuneWidth(stripAnsiCodes(cell))
                     if cellWidth > widths[colIdx] {
                         widths[colIdx] = cellWidth
                     }
                 }
             }
         }
     }

Also adjust the continuation prefix width in renderTableWithWidths:

-        if i == 0 {
-            width += 2 + len([]rune(continuationPrefix))
-        }
+        if i == 0 {
+            width += 2 + text.RuneWidth(continuationPrefix)
+        }
internal/usecase/list_deployments.go (2)

62-71: Avoid repeated ResolveNetwork calls; pre-index networks

Current loop resolves every chainID by iterating all network names repeatedly (O(C*N)). Build a chainID→name index once.

Apply this diff:

-    networkNames := make(map[uint64]string)
-    for chainID := range summary.ByChain {
-        // Try to find network name for this chain ID
-        networkName := uc.findNetworkName(ctx, chainID)
-        if networkName != "" {
-            networkNames[chainID] = networkName
-        }
-    }
+    // Build a chainID→name index once, then pick needed entries
+    idx := uc.buildNetworkIndex(ctx)
+    networkNames := make(map[uint64]string, len(summary.ByChain))
+    for chainID := range summary.ByChain {
+        if name, ok := idx[chainID]; ok && name != "" {
+            networkNames[chainID] = name
+        }
+    }

Add helper (see below).


125-144: Replace linear search with an index (helper below)

findNetworkName scans all networks per lookup. Introduce buildNetworkIndex to resolve each network once.

Add this helper below (outside changed block):

// buildNetworkIndex builds a map from chainID to network name by resolving each network once.
func (uc *ListDeployments) buildNetworkIndex(ctx context.Context) map[uint64]string {
	if uc.networkResolver == nil {
		return nil
	}
	result := make(map[uint64]string)
	for _, name := range uc.networkResolver.GetNetworks(ctx) {
		net, err := uc.networkResolver.ResolveNetwork(ctx, name)
		if err != nil || net == nil {
			continue
		}
		// First name wins if duplicates exist for a chainID.
		if _, exists := result[net.ChainID]; !exists {
			result[net.ChainID] = name
		}
	}
	return result
}
internal/adapters/progress/verify_progress.go (1)

97-112: Spinner access should be synchronized or usage serialized

If OnProgress can be called concurrently, Start/Stop/Suffix mutate shared state. Protect with a mutex, or document single-goroutine usage.

Apply this minimal guard:

 type VerifyProgress struct {
     out         io.Writer
     interactive bool
     spinner     *spinner.Spinner
     startTime   time.Time
+    mu          sync.Mutex
 }
 
 func (v *VerifyProgress) OnProgress(ctx context.Context, event usecase.ProgressEvent) {
+    v.mu.Lock()
+    defer v.mu.Unlock()
     ...
 }
 
 func (v *VerifyProgress) Info(message string) {
-    // Stop spinner temporarily
+    v.mu.Lock()
+    defer v.mu.Unlock()
     ...
 }
 
 func (v *VerifyProgress) Error(message string) {
-    // Stop spinner temporarily
+    v.mu.Lock()
+    defer v.mu.Unlock()
     ...
 }

And add:

-import "time"
+import (
+    "time"
+    "sync"
+)

Also applies to: 114-129

internal/app/wire_gen.go (1)

92-92: SpinnerProgressReporter implements usecase.ProgressSink — no code change required

internal/adapters/providers.go binds *progress.SpinnerProgressReporter to usecase.ProgressSink and wire_gen.go constructs it via progress.NewSpinnerProgressReporter(); NewVerifyProgress is defined at internal/adapters/progress/verify_progress.go. If you intend to use VerifyProgress instead, update internal/app/wire.go (not the generated wire_gen.go) and re-run wire.

internal/cli/verify.go (1)

28-39: Docs polish: remove duplicate example and keep network-centric flow tight.

Line 31 and 33 show the same example; drop one and keep the network examples grouped.

Apply:

   treb verify Counter                      # Verify specific contract
   treb verify Counter:v2                   # Verify specific deployment by label
   treb verify staging/Counter              # Verify by namespace/contract
  treb verify Counter --network sepolia    # Verify by contract on network
-  treb verify staging/Counter              # Verify by namespace/contract
   treb verify 0x1234... --network sepolia  # Verify by address (requires --network)
   treb verify --all                        # Verify all unverified contracts (skip local)
   treb verify --all --force                # Re-verify all contracts including verified
   treb verify Counter --force              # Re-verify even if already verified
   treb verify Counter --network sepolia --namespace staging  # Verify with filters
internal/usecase/verify_deployment.go (4)

121-129: Improve UX: include resolved network name in progress, fall back to chain ID.

Users think in network names now. Show the network if resolvable.

Apply:

-    v.progress.OnProgress(ctx, ProgressEvent{
-      Stage:   "verifying",
-      Current: i + 1,
-      Total:   len(result.ToVerify),
-      Message: fmt.Sprintf("Verifying %s on chain %d...", deployment.ContractName, deployment.ChainID),
-      Spinner: true,
-    })
+    // Try to resolve a display network name; ignore errors and fall back to chain ID
+    name, _ := v.getNetworkNameForChainID(ctx, deployment.ChainID)
+    msg := fmt.Sprintf("Verifying %s on chain %d...", deployment.ContractName, deployment.ChainID)
+    if name != "" {
+      msg = fmt.Sprintf("Verifying %s on %s...", deployment.ContractName, name)
+    }
+    v.progress.OnProgress(ctx, ProgressEvent{
+      Stage:   "verifying",
+      Current: i + 1,
+      Total:   len(result.ToVerify),
+      Message: msg,
+      Spinner: true,
+    })

74-80: Skip all common local chains.

You skip 31337; Ganache and some local setups still use 1337. Consider skipping both.

Apply:

-    if deployment.ChainID == 31337 {
+    if deployment.ChainID == 31337 || deployment.ChainID == 1337 {
       result.Skipped = append(result.Skipped, &SkippedDeployment{
         Deployment: deployment,
        Reason:     "Local chain",
       })
       continue
     }

255-275: Avoid O(N×M) network scans; add a tiny cache for chainID→name.

getNetworkNameForChainID resolves every configured network for every deployment. Add a small in-memory cache on the use case to cut repeated resolution.

Apply:

 type VerifyDeployment struct {
   repo               DeploymentRepository
   contractVerifier   ContractVerifier
   networkResolver    NetworkResolver
   deploymentResolver DeploymentResolver
   progress           ProgressSink
+  // cache: chainID -> network name
+  networkNameCache   map[uint64]string
 }
@@
 func NewVerifyDeployment(
@@
 ) *VerifyDeployment {
   return &VerifyDeployment{
     repo:               repo,
     contractVerifier:   contractVerifier,
     networkResolver:    networkResolver,
     deploymentResolver: deploymentResolver,
    progress:           progress,
+    networkNameCache:   make(map[uint64]string),
   }
 }
@@
 func (v *VerifyDeployment) getNetworkNameForChainID(ctx context.Context, chainID uint64) (string, error) {
-  // Get all available network names
+  // Fast path: cache
+  if name, ok := v.networkNameCache[chainID]; ok && name != "" {
+    return name, nil
+  }
+  // Get all available network names
   networkNames := v.networkResolver.GetNetworks(ctx)
@@
    if network.ChainID == chainID {
-      return name, nil
+      v.networkNameCache[chainID] = name
+      return name, nil
     }
   }
@@
  return "", fmt.Errorf("no network configuration found for chain ID %d. Please ensure the network is configured in foundry.toml", chainID)
 }

154-161: Minor semantics: “Errors” used for a non-error success case.

Returning Success: true with an “Already verified…” string inside Errors is semantically odd and may confuse renderers. Consider a Messages or Warnings field, or return Success: true with an empty error slice and let the renderer infer from status.
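A minimal sketch of the suggested split (the `Warnings` field and `render` helper here are hypothetical, not existing treb types):

```go
package main

import "fmt"

// Hypothetical result shape: warnings live apart from errors, so a
// renderer never has to guess whether an "error" string is fatal.
type VerifyResult struct {
	Success  bool
	Errors   []string
	Warnings []string
}

// render picks the display form from the result's actual semantics.
func render(r VerifyResult) string {
	switch {
	case !r.Success:
		return fmt.Sprintf("failed: %v", r.Errors)
	case len(r.Warnings) > 0:
		return fmt.Sprintf("ok (with notes): %v", r.Warnings)
	default:
		return "ok"
	}
}

func main() {
	already := VerifyResult{Success: true, Warnings: []string{"Already verified on Etherscan"}}
	fmt.Println(render(already))
}
```

With this shape, "Already verified" flows through `Warnings` and `Errors` stays empty on success, so renderers can key off status alone.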

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between a0e75fa and c1dcffd.

📒 Files selected for processing (12)
  • internal/adapters/progress/spinner.go (1 hunks)
  • internal/adapters/progress/verify_progress.go (1 hunks)
  • internal/app/wire_gen.go (2 hunks)
  • internal/cli/render/deployments.go (5 hunks)
  • internal/cli/verify.go (3 hunks)
  • internal/usecase/list_deployments.go (3 hunks)
  • internal/usecase/ports.go (1 hunks)
  • internal/usecase/verify_deployment.go (6 hunks)
  • internal/usecase/verify_deployment_test.go (1 hunks)
  • test/manual/test_progress_simple.sh (1 hunks)
  • test/manual/test_verify_interactive.sh (1 hunks)
  • test/manual/test_verify_progress.sh (1 hunks)
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-08-26T10:13:23.572Z
Learnt from: CR
PR: trebuchet-org/treb-cli#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-26T10:13:23.572Z
Learning: Applies to cli/pkg/verification/**/*.go : Implement contract verification logic under cli/pkg/verification/

Applied to files:

  • internal/adapters/progress/verify_progress.go
  • internal/cli/verify.go
  • internal/usecase/verify_deployment.go
🧬 Code graph analysis (9)
internal/usecase/ports.go (1)
internal/domain/models/deployment.go (1)
  • Deployment (38-69)
internal/usecase/verify_deployment_test.go (5)
internal/usecase/ports.go (9)
  • DeploymentRepository (16-32)
  • ContractVerifier (42-45)
  • NetworkResolver (168-171)
  • DeploymentResolver (199-202)
  • ProgressSink (60-64)
  • NopProgress (79-79)
  • NopProgress (81-81)
  • NopProgress (82-82)
  • NopProgress (83-83)
internal/usecase/verify_deployment.go (2)
  • NewVerifyDeployment (21-35)
  • VerifyOptions (38-42)
internal/domain/queries.go (1)
  • DeploymentQuery (31-38)
internal/domain/errors.go (1)
  • ErrNotFound (16-16)
internal/domain/filters.go (1)
  • DeploymentFilter (15-21)
internal/app/wire_gen.go (3)
internal/config/provider.go (1)
  • ProvideNetworkResolver (169-171)
internal/usecase/list_deployments.go (1)
  • NewListDeployments (28-34)
internal/usecase/verify_deployment.go (1)
  • NewVerifyDeployment (21-35)
internal/cli/render/deployments.go (1)
internal/domain/models/deployment.go (1)
  • Deployment (38-69)
internal/usecase/list_deployments.go (2)
internal/usecase/ports.go (3)
  • DeploymentRepository (16-32)
  • NetworkResolver (168-171)
  • DeploymentListResult (88-92)
internal/config/network.go (1)
  • NetworkResolver (23-29)
internal/adapters/progress/verify_progress.go (1)
internal/usecase/ports.go (2)
  • ProgressEvent (50-57)
  • ProgressSink (60-64)
internal/adapters/progress/spinner.go (1)
internal/usecase/ports.go (1)
  • ProgressEvent (50-57)
internal/cli/verify.go (2)
internal/domain/config/runtime.go (1)
  • Network (33-38)
internal/usecase/verify_deployment.go (1)
  • VerifyDeployment (12-18)
internal/usecase/verify_deployment.go (4)
internal/usecase/ports.go (6)
  • DeploymentRepository (16-32)
  • ContractVerifier (42-45)
  • NetworkResolver (168-171)
  • DeploymentResolver (199-202)
  • ProgressSink (60-64)
  • ProgressEvent (50-57)
internal/domain/filters.go (1)
  • DeploymentFilter (15-21)
internal/domain/queries.go (1)
  • DeploymentQuery (31-38)
internal/domain/models/deployment.go (1)
  • Deployment (38-69)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: Integration Tests
  • GitHub Check: Integration Tests (macOS)
  • GitHub Check: Lint
🔇 Additional comments (13)
internal/adapters/progress/spinner.go (1)

76-86: Potential concurrent access to spinner

If progress events are emitted from multiple goroutines, spinner state mutations need serialization.

If that’s possible, wrap OnProgress/ReportStage with a sync.Mutex.
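
A minimal, dependency-free sketch of that locking pattern (the `Reporter` struct is a hypothetical stand-in for the spinner-backed reporter; the actual spinner calls are elided):

```go
package main

import (
	"fmt"
	"sync"
)

// Reporter serializes progress updates: every mutation of shared state
// happens under the mutex, so concurrent callers cannot interleave.
type Reporter struct {
	mu  sync.Mutex
	msg string
	n   int
}

func (r *Reporter) OnProgress(msg string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.msg = msg // spinner.Suffix, Start/Stop, etc. would go here
	r.n++
}

func main() {
	var wg sync.WaitGroup
	r := &Reporter{}
	for i := 0; i < 8; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			r.OnProgress(fmt.Sprintf("event %d", i))
		}(i)
	}
	wg.Wait()
	fmt.Println(r.n) // every event observed exactly once
}
```

The same guard works for ReportStage; the alternative is to document that all progress calls must come from a single goroutine.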

internal/usecase/verify_deployment_test.go (2)

14-28: Constructor wiring test: LGTM

Good sanity check that the extended constructor accepts a DeploymentResolver and ProgressSink.


30-41: No nil-guard required for VerifySpecific — smoke test passed

Ran the provided script: TestVerifyDeployment_UsesDeploymentResolver completed without panic and internal/usecase tests passed, so the nil deps in the test do not cause a panic.

internal/cli/render/deployments.go (4)

59-59: Renderer now receives networkNames: LGTM

Passing the resolved network names enables clearer headers.


174-181: Header formatting: fallback behavior is fine

Displays “NetworkName (ChainID)” when available; otherwise numeric chain ID. Consider “unknown (id)” as an explicit fallback later if UX asks for it.


64-64: Signature change ripple-check — call sites updated

internal/cli/render/deployments.go:59 now calls r.displayTableFormat(result.Deployments, result.NetworkNames); no other matches found.


374-375: Verifier prefixes switched to 'e'/'s' — update user-facing legend

Code prefixes Etherscan with "e" and Sourcify with "s" (internal/cli/render/deployments.go: lines ~374–375 and ~392–393). Confirm and update README/docs/CLI help/legend to show: e = Etherscan, s = Sourcify.

internal/usecase/list_deployments.go (2)

22-25: New dependency: LGTM

Storing NetworkResolver here aligns the renderer’s need for names.


28-33: Constructor updated — invocations verified; docs outdated

NewListDeployments is called with the new networkResolver in internal/app/wire_gen.go:49. The only other match is documentation still using the old signature at docs/agent/phase1-implementation-guide.md:305 — update docs if needed.

internal/app/wire_gen.go (1)

48-50: Early wiring of networkResolver into ListDeployments: LGTM

Keeps a single resolver instance; avoids duplicate construction later.

internal/adapters/progress/verify_progress.go (1)

22-29: Start time initialization: LGTM

Capturing start at construction is appropriate for total duration reporting.

internal/usecase/verify_deployment.go (1)

13-18: Constructor fields extension looks correct.

New dependencies are coherently added; no issues.

internal/cli/verify.go (1)

58-62: Resolve — network CLI flag is already bound; no change required

SetupViper binds the invoked command's flags (v.BindPFlag in internal/config/provider.go), Provider reads v.GetString("network") to resolve cfg.Network, and verify defines --network — app.Config.Network will reflect the CLI flag so filter.ChainID will be populated.

Comment on lines +31 to +95
// OnProgress handles progress events
func (v *VerifyProgress) OnProgress(ctx context.Context, event usecase.ProgressEvent) {
	if !v.interactive {
		// In non-interactive mode, just print the message
		if event.Message != "" {
			fmt.Fprintln(v.out, event.Message)
		}
		return
	}

	// Handle spinner states
	if event.Spinner {
		if v.spinner == nil {
			v.spinner = spinner.New(spinner.CharSets[14], 100*time.Millisecond)
			v.spinner.Writer = v.out
			_ = v.spinner.Color("cyan", "bold")
		}

		// Update spinner message
		v.spinner.Suffix = " " + event.Message

		if !v.spinner.Active() {
			v.spinner.Start()
		}
	} else if v.spinner != nil && v.spinner.Active() {
		v.spinner.Stop()
	}

	// Handle stage-specific messages
	switch event.Stage {
	case "gathering":
		// Initial gathering phase
		if !v.interactive {
			fmt.Fprintln(v.out, "🔍 Gathering deployments to verify...")
		}

	case "verifying":
		// Show progress for multiple verifications
		if event.Total > 0 && !v.interactive {
			fmt.Fprintf(v.out, "📝 [%d/%d] %s\n", event.Current, event.Total, event.Message)
		}

	case "network-resolve":
		// Network resolution phase
		if !v.interactive {
			fmt.Fprintf(v.out, "🌐 %s\n", event.Message)
		}

	case "verification":
		// Actual verification submission
		if !v.interactive {
			fmt.Fprintf(v.out, "🚀 %s\n", event.Message)
		}

	case "completed":
		// Stop any active spinner
		if v.spinner != nil && v.spinner.Active() {
			v.spinner.Stop()
		}

		// Show completion time
		duration := time.Since(v.startTime)
		color.New(color.FgGreen).Fprintf(v.out, "✅ Verification completed in %s\n", duration.Round(time.Millisecond))
	}
}
⚠️ Potential issue

Stage-specific prints are unreachable; restructure OnProgress

The early return in non-interactive mode prevents stage formatting, and the interactive branch guards its prints with if !v.interactive. Net effect: the stage-specific prints (apart from the unguarded "completed" case) never execute in either mode.

Apply this diff to unify behavior:

 func (v *VerifyProgress) OnProgress(ctx context.Context, event usecase.ProgressEvent) {
-    if !v.interactive {
-        // In non-interactive mode, just print the message
-        if event.Message != "" {
-            fmt.Fprintln(v.out, event.Message)
-        }
-        return
-    }
-
-    // Handle spinner states
-    if event.Spinner {
+    if v.interactive && event.Spinner {
         if v.spinner == nil {
             v.spinner = spinner.New(spinner.CharSets[14], 100*time.Millisecond)
             v.spinner.Writer = v.out
             _ = v.spinner.Color("cyan", "bold")
         }
-
-        // Update spinner message
         v.spinner.Suffix = " " + event.Message
-
         if !v.spinner.Active() {
             v.spinner.Start()
         }
-    } else if v.spinner != nil && v.spinner.Active() {
+    } else if v.interactive && v.spinner != nil && v.spinner.Active() {
         v.spinner.Stop()
     }
 
-    // Handle stage-specific messages
-    switch event.Stage {
+    // Render non-interactive output with stage formatting
+    if !v.interactive {
+        switch event.Stage {
+        case "gathering":
+            fmt.Fprintln(v.out, "🔍 Gathering deployments to verify...")
+        case "verifying":
+            if event.Total > 0 {
+                fmt.Fprintf(v.out, "📝 [%d/%d] %s\n", event.Current, event.Total, event.Message)
+            } else if event.Message != "" {
+                fmt.Fprintln(v.out, event.Message)
+            }
+        case "network-resolve":
+            fmt.Fprintf(v.out, "🌐 %s\n", event.Message)
+        case "verification":
+            fmt.Fprintf(v.out, "🚀 %s\n", event.Message)
+        case "completed":
+            duration := time.Since(v.startTime)
+            color.New(color.FgGreen).Fprintf(v.out, "✅ Verification completed in %s\n", duration.Round(time.Millisecond))
+        default:
+            if event.Message != "" {
+                fmt.Fprintln(v.out, event.Message)
+            }
+        }
+        return
+    }
+
+    // Interactive: only special-case completion print
+    switch event.Stage {
     case "completed":
         // Stop any active spinner
         if v.spinner != nil && v.spinner.Active() {
             v.spinner.Stop()
         }
 
         // Show completion time
         duration := time.Since(v.startTime)
         color.New(color.FgGreen).Fprintf(v.out, "✅ Verification completed in %s\n", duration.Round(time.Millisecond))
     }
 }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
// OnProgress handles progress events
func (v *VerifyProgress) OnProgress(ctx context.Context, event usecase.ProgressEvent) {
	if !v.interactive {
		// In non-interactive mode, just print the message
		if event.Message != "" {
			fmt.Fprintln(v.out, event.Message)
		}
		return
	}

	// Handle spinner states
	if event.Spinner {
		if v.spinner == nil {
			v.spinner = spinner.New(spinner.CharSets[14], 100*time.Millisecond)
			v.spinner.Writer = v.out
			_ = v.spinner.Color("cyan", "bold")
		}

		// Update spinner message
		v.spinner.Suffix = " " + event.Message

		if !v.spinner.Active() {
			v.spinner.Start()
		}
	} else if v.spinner != nil && v.spinner.Active() {
		v.spinner.Stop()
	}

	// Handle stage-specific messages
	switch event.Stage {
	case "gathering":
		// Initial gathering phase
		if !v.interactive {
			fmt.Fprintln(v.out, "🔍 Gathering deployments to verify...")
		}

	case "verifying":
		// Show progress for multiple verifications
		if event.Total > 0 && !v.interactive {
			fmt.Fprintf(v.out, "📝 [%d/%d] %s\n", event.Current, event.Total, event.Message)
		}

	case "network-resolve":
		// Network resolution phase
		if !v.interactive {
			fmt.Fprintf(v.out, "🌐 %s\n", event.Message)
		}

	case "verification":
		// Actual verification submission
		if !v.interactive {
			fmt.Fprintf(v.out, "🚀 %s\n", event.Message)
		}

	case "completed":
		// Stop any active spinner
		if v.spinner != nil && v.spinner.Active() {
			v.spinner.Stop()
		}

		// Show completion time
		duration := time.Since(v.startTime)
		color.New(color.FgGreen).Fprintf(v.out, "✅ Verification completed in %s\n", duration.Round(time.Millisecond))
	}
}

// OnProgress handles progress events
func (v *VerifyProgress) OnProgress(ctx context.Context, event usecase.ProgressEvent) {
	if v.interactive && event.Spinner {
		if v.spinner == nil {
			v.spinner = spinner.New(spinner.CharSets[14], 100*time.Millisecond)
			v.spinner.Writer = v.out
			_ = v.spinner.Color("cyan", "bold")
		}
		v.spinner.Suffix = " " + event.Message
		if !v.spinner.Active() {
			v.spinner.Start()
		}
	} else if v.interactive && v.spinner != nil && v.spinner.Active() {
		v.spinner.Stop()
	}

	// Render non-interactive output with stage formatting
	if !v.interactive {
		switch event.Stage {
		case "gathering":
			fmt.Fprintln(v.out, "🔍 Gathering deployments to verify...")
		case "verifying":
			if event.Total > 0 {
				fmt.Fprintf(v.out, "📝 [%d/%d] %s\n", event.Current, event.Total, event.Message)
			} else if event.Message != "" {
				fmt.Fprintln(v.out, event.Message)
			}
		case "network-resolve":
			fmt.Fprintf(v.out, "🌐 %s\n", event.Message)
		case "verification":
			fmt.Fprintf(v.out, "🚀 %s\n", event.Message)
		case "completed":
			duration := time.Since(v.startTime)
			color.New(color.FgGreen).Fprintf(v.out, "✅ Verification completed in %s\n", duration.Round(time.Millisecond))
		default:
			if event.Message != "" {
				fmt.Fprintln(v.out, event.Message)
			}
		}
		return
	}

	// Interactive: only special-case completion print
	switch event.Stage {
	case "completed":
		// Stop any active spinner
		if v.spinner != nil && v.spinner.Active() {
			v.spinner.Stop()
		}

		// Show completion time
		duration := time.Since(v.startTime)
		color.New(color.FgGreen).Fprintf(v.out, "✅ Verification completed in %s\n", duration.Round(time.Millisecond))
	}
}
🤖 Prompt for AI Agents
internal/adapters/progress/verify_progress.go lines 31-95: the early return for
non-interactive mode prevents any of the stage-specific prints from ever
executing (and the stage cases redundantly check !v.interactive), so refactor
OnProgress to unify behavior by removing the early return and the redundant
!v.interactive guards inside the stage switch; instead, handle spinner
setup/updates only when v.interactive is true, and inside each stage case print
the appropriate message for non-interactive mode (and stop the spinner/print
completion in interactive mode) so both interactive and non-interactive flows
run the stage logic correctly.

Comment on lines +99 to 101
cmd.Flags().StringP("network", "n", "", "Network to run on (e.g., mainnet, sepolia, local)")
cmd.Flags().StringVar(&namespace, "namespace", "", "Filter by namespace")


⚠️ Potential issue

Bind and propagate --network; otherwise the flag is ignored by getApp/config.

The flag is declared but never bound or applied, so getApp won’t see it and app.Config.Network will remain nil. Bind it to Viper (or remove this local flag and rely on a root persistent --network if that already exists).

Apply this diff within the function to bind the flag:

 cmd.Flags().BoolVar(&debugFlag, "debug", false, "Show debug information including forge verify commands")
-cmd.Flags().StringP("network", "n", "", "Network to run on (e.g., mainnet, sepolia, local)")
+cmd.Flags().StringP("network", "n", "", "Network to use (e.g., mainnet, sepolia, local)")
+// Ensure config provider (Viper) sees the flag before getApp()
+_ = viper.BindPFlag("network", cmd.Flags().Lookup("network"))
 cmd.Flags().StringVar(&namespace, "namespace", "", "Filter by namespace")

And add the import:

 import (
   "fmt"
   "os"
 
   "github.com/spf13/cobra"
+  "github.com/spf13/viper"
   "github.com/trebuchet-org/treb-cli/internal/cli/render"
   "github.com/trebuchet-org/treb-cli/internal/domain"
   "github.com/trebuchet-org/treb-cli/internal/usecase"
 )

Optionally, if your app exposes a method to select a network by name, apply it after getApp to ensure filter.ChainID is set from the selected network:

if n, _ := cmd.Flags().GetString("network"); n != "" {
  if app.Config.Network == nil || app.Config.Network.Name != n {
    if err := app.ApplyNetwork(cmd.Context(), n); err != nil {
      return fmt.Errorf("failed to select network %q: %w", n, err)
    }
  }
}
🤖 Prompt for AI Agents
internal/cli/verify.go lines 99-101: the "network" flag is declared but never
bound or propagated so getApp/app.Config won't see it; bind the flag to Viper
(import github.com/spf13/viper) using viper.BindPFlag for the "network" flag
immediately after declaring it so the value flows into config, or replace the
local flag with the root persistent flag if already defined; additionally, after
calling getApp, check cmd.Flags().GetString("network") and if non-empty call
app.ApplyNetwork(ctx, name) (handling errors) to set app.Config.Network and
ensure filter.ChainID is populated.

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

♻️ Duplicate comments (4)
test/testdata/golden/integration/TestListCommand/list_with_deployments/commands.golden (1)

3-3: LGTM! Consistent with network-aware display changes.

Also applies to: 6-6

test/testdata/golden/integration/TestInitCommand/init_and_deploy/commands.golden (1)

74-74: LGTM! Golden output updated consistently.

Also applies to: 77-77

test/testdata/golden/integration/TestListCommand/list_with_all_categories/commands.golden (1)

3-3: LGTM! Comprehensive golden output update across all deployment categories.

All deployment categories (PROXIES, IMPLEMENTATIONS, SINGLETONS, LIBRARIES) now consistently show the network-aware chain display and ASCII status indicators.

Also applies to: 6-6, 10-10, 13-13, 16-16

test/testdata/golden/integration/TestRunCommand/run_multiple_contracts/commands.golden (1)

76-76: LGTM! Run command golden output updated consistently.

Also applies to: 79-80

🧹 Nitpick comments (2)
test/testdata/golden/integration/TestTagCommand/tag_after_deployment_update/commands.golden (1)

2-3: Inconsistent status indicators between lines.

Line 2 no longer has a status glyph (removed "ℹ ") while line 3 retains the success checkmark "✅". This creates a visual inconsistency in the output where one status line has an indicator and the other doesn't.

If this is intentional, consider whether both lines should follow the same pattern—either both with glyphs or both without.

test/testdata/golden/integration/TestTagCommand/tag_multiple_contracts_different_tags/commands.golden (1)

20-21: Document the "e?" and "s?" status markers: add clear descriptions of what each plain-text marker denotes to the CLI help output or project documentation.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between c1dcffd and 57a6d34.

📒 Files selected for processing (32)
  • test/testdata/golden/integration/TestInitCommand/init_and_deploy/commands.golden (1 hunks)
  • test/testdata/golden/integration/TestListCommand/list_contracts_in_subdirectories/commands.golden (1 hunks)
  • test/testdata/golden/integration/TestListCommand/list_filter_by_chain/commands.golden (1 hunks)
  • test/testdata/golden/integration/TestListCommand/list_filter_by_namespace/commands.golden (1 hunks)
  • test/testdata/golden/integration/TestListCommand/list_multiple_contract_types/commands.golden (1 hunks)
  • test/testdata/golden/integration/TestListCommand/list_with_all_categories/commands.golden (1 hunks)
  • test/testdata/golden/integration/TestListCommand/list_with_deployments/commands.golden (1 hunks)
  • test/testdata/golden/integration/TestListCommand/list_with_labels/commands.golden (1 hunks)
  • test/testdata/golden/integration/TestListCommand/list_with_mixed_deployment_status/commands.golden (1 hunks)
  • test/testdata/golden/integration/TestListCommand/list_with_multiple_chains/commands.golden (1 hunks)
  • test/testdata/golden/integration/TestListCommand/list_with_multiple_namespaces_and_chains/commands.golden (1 hunks)
  • test/testdata/golden/integration/TestListCommand/list_with_proxy_relationships/commands.golden (1 hunks)
  • test/testdata/golden/integration/TestListCommand/list_with_tags/commands.golden (1 hunks)
  • test/testdata/golden/integration/TestRunCommand/run_multiple_contracts/commands.golden (1 hunks)
  • test/testdata/golden/integration/TestRunCommand/run_with_custom_proxy/commands.golden (1 hunks)
  • test/testdata/golden/integration/TestRunCommand/simple/commands.golden (1 hunks)
  • test/testdata/golden/integration/TestTagCommand/tag_add_multiple/commands.golden (1 hunks)
  • test/testdata/golden/integration/TestTagCommand/tag_add_single/commands.golden (1 hunks)
  • test/testdata/golden/integration/TestTagCommand/tag_after_deployment_update/commands.golden (1 hunks)
  • test/testdata/golden/integration/TestTagCommand/tag_by_address/commands.golden (1 hunks)
  • test/testdata/golden/integration/TestTagCommand/tag_cross_namespace/commands.golden (1 hunks)
  • test/testdata/golden/integration/TestTagCommand/tag_deployment_by_id/commands.golden (1 hunks)
  • test/testdata/golden/integration/TestTagCommand/tag_library_deployment/commands.golden (1 hunks)
  • test/testdata/golden/integration/TestTagCommand/tag_long_tag_name/commands.golden (1 hunks)
  • test/testdata/golden/integration/TestTagCommand/tag_multiple_contracts_different_tags/commands.golden (1 hunks)
  • test/testdata/golden/integration/TestTagCommand/tag_remove_all_tags/commands.golden (1 hunks)
  • test/testdata/golden/integration/TestTagCommand/tag_remove_existing/commands.golden (1 hunks)
  • test/testdata/golden/integration/TestTagCommand/tag_with_label/commands.golden (1 hunks)
  • test/testdata/golden/integration/TestTagCommand/tag_with_namespace/commands.golden (1 hunks)
  • test/testdata/golden/integration/TestTagCommand/tag_with_network_and_namespace/commands.golden (1 hunks)
  • test/testdata/golden/integration/TestTagCommand/tag_with_special_characters/commands.golden (1 hunks)
  • test/testdata/golden/integration/TestWorkflow/complex_with_libraries/commands.golden (2 hunks)
✅ Files skipped from review due to trivial changes (11)
  • test/testdata/golden/integration/TestTagCommand/tag_with_special_characters/commands.golden
  • test/testdata/golden/integration/TestTagCommand/tag_deployment_by_id/commands.golden
  • test/testdata/golden/integration/TestTagCommand/tag_remove_all_tags/commands.golden
  • test/testdata/golden/integration/TestTagCommand/tag_long_tag_name/commands.golden
  • test/testdata/golden/integration/TestRunCommand/simple/commands.golden
  • test/testdata/golden/integration/TestListCommand/list_filter_by_chain/commands.golden
  • test/testdata/golden/integration/TestTagCommand/tag_remove_existing/commands.golden
  • test/testdata/golden/integration/TestListCommand/list_with_tags/commands.golden
  • test/testdata/golden/integration/TestListCommand/list_with_proxy_relationships/commands.golden
  • test/testdata/golden/integration/TestListCommand/list_with_multiple_namespaces_and_chains/commands.golden
  • test/testdata/golden/integration/TestTagCommand/tag_library_deployment/commands.golden
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: Integration Tests
  • GitHub Check: Integration Tests (macOS)
  • GitHub Check: Lint
🔇 Additional comments (26)
test/testdata/golden/integration/TestTagCommand/tag_by_address/commands.golden (1)

2-2: LGTM: Test fixture updated to match new output format.

The removal of the informational glyph (ℹ) from the log message aligns with the broader formatting changes across the PR's test outputs.

test/testdata/golden/integration/TestListCommand/list_multiple_contract_types/commands.golden (2)

3-3: LGTM! Network-aware chain display format.

The updated format "anvil-31337 (31337)" provides both the network name and numeric chain ID, improving readability and aligning with the PR's network-centric approach.


6-11: LGTM! Improved terminal compatibility with plain text status indicators.

The switch from emoji-based status indicators ("🅔 ? 🅢 ?") to plain text ("e? s?") improves terminal compatibility and accessibility while preserving the same semantic information. The change is applied consistently across all deployment types.

test/testdata/golden/integration/TestListCommand/list_filter_by_namespace/commands.golden (2)

3-3: LGTM! Network-aware chain display improves readability.

The updated format "anvil-31337 (31337)" provides better context by showing both the network name and chain ID, aligning with the PR's goal of making the CLI network-aware.

Also applies to: 13-13


6-6: Verify that the status indicator format change was intentional.

The status indicators changed from emoji-based "🅔 ? 🅢 ?" to textual "e? s?". This change appears unrelated to the core network flag updates in this PR.

Was this change intentional, or is it a side effect of other updates? If intentional, it improves terminal compatibility and test stability.

Also applies to: 16-16

test/testdata/golden/integration/TestTagCommand/tag_add_multiple/commands.golden (1)

2-2: Cosmetic change: informational glyph removed from tag addition messages.

The "ℹ" prefix has been consistently removed from all intermediate "Added tag..." messages (lines 2, 9, 16), while the success checkmark "✅" lines remain unchanged. This creates a clearer visual hierarchy between informational and success messages.

However, this cosmetic change appears unrelated to the main PR objective (migrating verify command from --chain to --network). Is this intentional cleanup bundled with the PR, or a side-effect of changes to output formatting utilities?

Also applies to: 9-9, 16-16

test/testdata/golden/integration/TestListCommand/list_contracts_in_subdirectories/commands.golden (2)

3-3: LGTM! Network-aware chain display.

The new format "anvil-31337 (31337)" clearly shows both the network name and chain ID, improving readability and aligning with the PR's shift to network-centric commands.


6-8: LGTM! Status indicator format updated.

The change from emoji-based markers to plain text "e? s?" improves terminal compatibility and maintains consistent alignment across all deployment entries.

test/testdata/golden/integration/TestTagCommand/tag_add_single/commands.golden (1)

2-2: Glyph removal matches updated CLI style.

Update keeps golden output aligned with new tag messaging format.

test/testdata/golden/integration/TestTagCommand/tag_with_label/commands.golden (1)

2-2: Consistent formatting tweak.

Removing the info glyph brings this golden in line with the revised CLI output.

test/testdata/golden/integration/TestTagCommand/tag_with_network_and_namespace/commands.golden (1)

2-2: Formatting aligns with new output convention.

Golden now reflects the glyph-free “Added tag …” messaging used elsewhere.

test/testdata/golden/integration/TestTagCommand/tag_cross_namespace/commands.golden (3)

2-2: LGTM: Info glyph removed from progress messages.

The removal of the "ℹ " prefix makes the output cleaner while retaining the success checkmark on subsequent lines.

Also applies to: 9-9


17-17: LGTM: Chain display now includes network name.

The format "anvil-31337 (31337)" aligns with the PR's goal of making the CLI network-centric and provides better context for users.

Also applies to: 27-27


20-20: LGTM: Status indicators simplified to plain text.

Replacing emoji markers "🅔 ? 🅢 ?" with "e? s?" improves terminal compatibility and consistency across environments.

Also applies to: 30-30

test/testdata/golden/integration/TestTagCommand/tag_with_namespace/commands.golden (1)

2-2: LGTM: Info glyph removed from progress message.

Consistent with other test updates, the "ℹ " prefix is removed to simplify output formatting.

test/testdata/golden/integration/TestListCommand/list_with_mixed_deployment_status/commands.golden (1)

3-3: LGTM! Golden output correctly reflects network-aware formatting.

The chain display now shows "anvil-31337 (31337)" instead of just the numeric chain ID, and status indicators have been simplified to ASCII "e? s?". These changes align with the PR's network-centric approach and are consistent across all golden test files.

Also applies to: 6-6

test/testdata/golden/integration/TestListCommand/list_with_labels/commands.golden (2)

3-3: LGTM: Chain display now includes network name.

The chain display format updated to "anvil-31337 (31337)" aligns with the PR objective of making network names the primary identifier. This improves readability and consistency with other treb commands.


6-8: LGTM: Status indicators simplified to plain text.

The status indicators changed from emoji-based "🅔 ? 🅢 ?" to text-based "e? s?". This improves test stability and avoids potential encoding issues while maintaining the same semantic information.

test/testdata/golden/integration/TestWorkflow/complex_with_libraries/commands.golden (2)

98-98: LGTM: Network-aware chain labels across multiple networks.

Both chain displays correctly show the network name prefix format for different networks (anvil-31337 and anvil-31338), demonstrating that the network resolution works correctly across multiple chains.

Also applies to: 214-214


101-104: LGTM: Consistent status indicator updates.

Status indicators updated to plain text format "e? s?" for both SINGLETONS and LIBRARIES sections, maintaining consistency across different deployment types and command outputs.

Also applies to: 217-219

test/testdata/golden/integration/TestRunCommand/run_with_custom_proxy/commands.golden (2)

64-64: LGTM: Network name displayed in chain label.

The chain display correctly shows "anvil-31337 (31337)", consistent with the network-aware formatting applied across all test outputs.


67-71: LGTM: Status indicators updated for proxy deployments.

Status indicators changed to plain text "e? s?" for both PROXIES and IMPLEMENTATIONS sections, demonstrating the formatting updates work correctly for proxy-based deployment scenarios.

test/testdata/golden/integration/TestTagCommand/tag_multiple_contracts_different_tags/commands.golden (2)

2-2: LGTM: Cleaner output without info emoji.

Removing the "ℹ " prefix simplifies the success messages while preserving clarity.

Also applies to: 9-9


17-17: Enhanced chain display aligns with network-centric design.

The new format "anvil-31337 (31337)" shows both the network name and chain ID, which directly supports the PR's objective of making the CLI more network-centric and improving user clarity.

test/testdata/golden/integration/TestListCommand/list_with_multiple_chains/commands.golden (2)

3-3: LGTM! Network-aware chain display improves clarity.

The new format "NetworkName (ChainID)" provides both human-readable network names and numeric chain IDs, making the output more user-friendly and aligned with the PR's network-centric approach.

Also applies to: 8-8


6-6: LGTM! ASCII status tokens improve terminal compatibility.

The migration from emoji markers to ASCII tokens "e? s?" ensures consistent rendering across different terminal environments while maintaining clear status indication.

Also applies to: 11-11


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between d5149b2 and ed53623.

📒 Files selected for processing (7)
  • test/testdata/golden/integration/TestInitCommand/init_and_deploy/commands.golden (1 hunks)
  • test/testdata/golden/integration/TestRunCommand/run_multiple_contracts/commands.golden (1 hunks)
  • test/testdata/golden/integration/TestRunCommand/run_with_custom_proxy/commands.golden (1 hunks)
  • test/testdata/golden/integration/TestRunCommand/simple/commands.golden (1 hunks)
  • test/testdata/golden/integration/TestTagCommand/tag_cross_namespace/commands.golden (1 hunks)
  • test/testdata/golden/integration/TestTagCommand/tag_multiple_contracts_different_tags/commands.golden (1 hunks)
  • test/testdata/golden/integration/TestWorkflow/complex_with_libraries/commands.golden (2 hunks)
✅ Files skipped from review due to trivial changes (1)
  • test/testdata/golden/integration/TestRunCommand/run_with_custom_proxy/commands.golden
🚧 Files skipped from review as they are similar to previous changes (1)
  • test/testdata/golden/integration/TestTagCommand/tag_cross_namespace/commands.golden
🔇 Additional comments (7)
test/testdata/golden/integration/TestWorkflow/complex_with_libraries/commands.golden (1)

98-219: LGTM: Golden test output correctly updated.

The test expectations have been properly updated to reflect the new network-centric rendering:

  • Chain labels now display network names alongside chain IDs (e.g., "anvil-31337 (31337)")
  • Status indicators changed from emoji placeholders to textual markers (e[-] s[-] b[-]) representing verification status on Etherscan, Sourcify, and Blockscout

These changes are consistent with the PR's goal of making the CLI network-aware and adding verification support.

test/testdata/golden/integration/TestInitCommand/init_and_deploy/commands.golden (1)

74-77: LGTM: Golden test output correctly updated.

Test expectations updated consistently with the network-centric rendering changes:

  • Chain label displays network name with chain ID: "anvil-31337 (31337)"
  • Status indicators use textual markers: "e[-] s[-] b[-]"
test/testdata/golden/integration/TestRunCommand/simple/commands.golden (1)

40-43: LGTM: Golden test output correctly updated.

Test expectations updated to match the new rendering format:

  • Chain label now includes network name: "anvil-31337 (31337)"
  • Status indicators converted to textual format: "e[-] s[-] b[-]"

Changes are consistent across all golden test files in this PR.

test/testdata/golden/integration/TestRunCommand/run_multiple_contracts/commands.golden (2)

76-76: LGTM! Network-name-centric display format.

The chain display now shows "anvil-31337 (31337)" format, correctly reflecting the network name alongside the chain ID. This aligns with the PR's objective of making the CLI network-name-centric.


79-80: LGTM! Verification status display updated.

The SINGLETONS entries now use textual verification status indicators e[-] s[-] b[-], which likely represent Etherscan/Sourcify/Blockscout verification status. The [-] values are appropriate for local anvil deployments where public explorer verification isn't applicable. The format is consistent across both contract entries.

test/testdata/golden/integration/TestTagCommand/tag_multiple_contracts_different_tags/commands.golden (2)

2-2: LGTM: Cleaner output format.

The removal of the leading "ℹ" prefix makes the output more concise while maintaining clarity.

Also applies to: 9-9


17-17: LGTM: Enhanced chain display format.

The updated format now shows both the network name and chain ID (anvil-31337 (31337)), providing better context and aligning with the PR's goal of being network-centric.

Comment on lines +20 to +21
Counter (v<VERSION>) 0x74148047D6bDf624C94eFc07F60cEE7b6052FB29 e[-] s[-] b[-] <TIMESTAMP>
SampleToken (token-v1) 0x793f69C1702a0E900a142D8575E44F9E4DA86fFf e[-] s[-] b[-] <TIMESTAMP>

🛠️ Refactor suggestion | 🟠 Major

🧩 Analysis chain

Verify that the new status indicator format is documented.

The verification status format has changed from emoji-based (🅔 ? 🅢 ?) to text-based (e[-] s[-] b[-]), and adds blockscout support. While this improves accessibility and adds new functionality, ensure the meaning of [-] (presumably "not verified" or "status unknown") is clearly documented in user-facing help text, as it's less intuitive than the previous ? indicator.


🏁 Script executed:

#!/bin/bash
# Description: Check if verification status indicators are documented in help text or README

# Search for documentation of verification status format
rg -n -C3 'e\[-\]|s\[-\]|b\[-\]|verification.*status|verifier.*indicator' --type md

# Search for help text in CLI code
rg -n -C3 'e\[-\]|s\[-\]|b\[-\]|verification.*status' --type go -g '!*_test.go' -g '!test/*'

Length of output: 4220


🏁 Script executed:

#!/bin/bash
# Search CLI code for text-based verification indicators
rg -n 'e\[-\]' -C3 --type go
rg -n 's\[-\]' -C3 --type go
rg -n 'b\[-\]' -C3 --type go

Length of output: 0


Document new verification status indicators

Add user-facing documentation (e.g., README or CLI help) clarifying that e[-], s[-], and b[-] denote an unverified/unknown status for Etherscan, Sourcify, and Blockscout.

🤖 Prompt for AI Agents
In
test/testdata/golden/integration/TestTagCommand/tag_multiple_contracts_different_tags/commands.golden
around lines 20-21 the output shows verification status indicators `e[-]`,
`s[-]`, and `b[-]` without any user-facing explanation; add/update documentation
and CLI help to define these indicators as denoting unverified/unknown status
for Etherscan, Sourcify, and Blockscout respectively. Modify the project README
(or a relevant docs page) to include a short section describing the three-letter
indicators and their possible forms (e.g., e[+]/e[-] etc.), and update the
command's --help text to include a one-line note explaining that `e`, `s`, and
`b` refer to Etherscan, Sourcify, and Blockscout and that `[-]` means
unverified/unknown. Ensure wording is concise and examples match the golden
output formatting.

bowd and others added 6 commits February 27, 2026 14:23
- Replace --chain flag with --network flag for consistency
- Remove hardcoded chain ID to network name mapping
- Update network resolution to dynamically find networks by chain ID
- Remove chain ID parsing from deployment identifier matching
- Update command examples to use --network instead of chain IDs

This makes the verify command consistent with other treb commands that
use network names from foundry.toml configuration.

🤖 Generated with Claude Code

Co-Authored-By: Claude <noreply@anthropic.com>
…s in list

- Add progress reporting to verify command using ProgressSink interface
- Show network names with chain IDs in parentheses in treb ls command
- Fix network filtering when using --all flag with --network in verify command
- Update SpinnerProgressReporter to implement ProgressSink interface correctly

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 5

🧹 Nitpick comments (2)
internal/usecase/list_deployments.go (1)

125-143: Consider caching network resolution results within a single Run invocation.

The findNetworkName method calls ResolveNetwork for each network name when searching for a matching ChainID. If multiple chains need resolution and the network list is large, this results in repeated resolution calls.

For typical use cases with few chains this is acceptable, but consider caching resolved networks if performance becomes a concern with many deployments.

♻️ Optional optimization
 func (uc *ListDeployments) Run(ctx context.Context, params ListDeploymentsParams) (*DeploymentListResult, error) {
     // ... existing code ...

     // Build network names map
     networkNames := make(map[uint64]string)
+    // Pre-resolve all networks once
+    resolvedNetworks := make(map[string]*config.Network)
+    for _, name := range uc.networkResolver.GetNetworks(ctx) {
+        if net, err := uc.networkResolver.ResolveNetwork(ctx, name); err == nil {
+            resolvedNetworks[name] = net
+        }
+    }
     for chainID := range summary.ByChain {
-        networkName := uc.findNetworkName(ctx, chainID)
-        if networkName != "" {
-            networkNames[chainID] = networkName
+        for name, net := range resolvedNetworks {
+            if net.ChainID == chainID {
+                networkNames[chainID] = name
+                break
+            }
         }
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/usecase/list_deployments.go` around lines 125 - 143, findNetworkName
currently calls uc.networkResolver.ResolveNetwork for every name on each
invocation which can repeat expensive resolutions; modify ListDeployments (e.g.,
in Run) to build a local map cache: call uc.networkResolver.GetNetworks once,
resolve each name once via uc.networkResolver.ResolveNetwork, store resolved
Network objects keyed by name (or chainID) and then have findNetworkName consult
that local cache instead of calling ResolveNetwork repeatedly; ensure the cache
is scoped to the Run invocation (pass it into findNetworkName or make
findNetworkName accept a prefilled map) and preserve existing error handling for
ResolveNetwork failures.
internal/adapters/verification/verifier.go (1)

309-317: Iteration order of map is non-deterministic when setting EtherscanURL.

When setting deployment.Verification.EtherscanURL from the first verified verifier, the iteration order over deployment.Verification.Verifiers map is non-deterministic in Go. This could result in different URLs being set across runs.

Consider prioritizing a specific verifier (e.g., etherscan first) for consistency:

♻️ Proposed fix for deterministic URL selection
 if verifiedCount == totalCount {
 	deployment.Verification.Status = models.VerificationStatusVerified
-	// Set the explorer URL to the first verified one
-	for _, status := range deployment.Verification.Verifiers {
-		if status.Status == "verified" && status.URL != "" {
-			deployment.Verification.EtherscanURL = status.URL
-			break
-		}
-	}
+	// Set the explorer URL with priority: etherscan > blockscout > sourcify
+	for _, verifierName := range []string{"etherscan", "blockscout", "sourcify"} {
+		if status, exists := deployment.Verification.Verifiers[verifierName]; exists {
+			if status.Status == "verified" && status.URL != "" {
+				deployment.Verification.EtherscanURL = status.URL
+				break
+			}
+		}
+	}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/adapters/verification/verifier.go` around lines 309 - 317, The code
sets deployment.Verification.EtherscanURL by iterating
deployment.Verification.Verifiers (a map) which is non-deterministic; change the
selection logic in the block that runs when verifiedCount == totalCount (around
where VerificationStatusVerified is set) to deterministically choose a preferred
verifier: first check for a verifier with a known preferred key (e.g.,
"etherscan") and non-empty URL, use that if present; if not found, fall back to
a stable deterministic selection such as iterating sorted keys or selecting the
lexicographically smallest key from deployment.Verification.Verifiers to set
deployment.Verification.EtherscanURL. Ensure the code references the same map
and preserves the existing status assignment.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@internal/adapters/progress/spinner.go`:
- Around line 76-92: Delete the dead no-op reporter type NopProgressReporter and
its ReportProgress method (the unused functions that do not implement
ProgressSink), since NopSink in nop_sink.go already provides the correct
OnProgress/Info/Error no-op implementation; remove the NopProgressReporter
declaration and its associated method(s) and any stray references, and ensure
there are no remaining compile-time assertions or usages expecting
ReportProgress so the build relies on NopSink instead.

In `@internal/adapters/verification/verifier.go`:
- Around line 231-266: The executeForgeVerify function currently ignores context
cancellation, so update its signature to accept a context (e.g.,
executeForgeVerify(ctx context.Context, args []string) error) and use
exec.CommandContext(ctx, "forge", args...) instead of exec.Command so the
spawned process is canceled when ctx is done; then update all callers (including
Verify) to pass the received ctx through, preserving existing debug logging and
output parsing in executeForgeVerify and returning the same error/cleanup
behavior when the context triggers cancellation.

In `@internal/cli/render/deployments.go`:
- Around line 524-540: The displayWidth function currently misclassifies many
Unicode runes as width 2 and special-cases variation selector 0xFE0E; replace
its custom logic by using the established library github.com/mattn/go-runewidth
(e.g. runewidth.RuneWidth or runewidth.StringWidth) in the displayWidth
implementation so widths follow UAX#11; remove the manual >127 check and
special-casing of '✔' and 0xFE0E, update the function comment to reflect that
runewidth is used, and ensure the function still returns an int width for the
given string.

In `@internal/usecase/verify_deployment.go`:
- Around line 267-273: The loop over networkNames calling
v.networkResolver.ResolveNetwork should not silently ignore all resolve errors
and later return a generic “no network configuration found”; instead, capture
per-network ResolveNetwork failures (include the network name) into a slice or
error aggregator while continuing to attempt other networks, and after the loop
if no networks resolved return a consolidated error that includes the collected
ResolveNetwork errors so callers can see why configured networks failed; apply
the same change to the similar handling around the ResolveNetwork call at the
other block (lines 280-281) so both places return detailed resolve errors rather
than masking them.
- Around line 21-34: Constructor NewVerifyDeployment may accept a nil
ProgressSink which causes panics when VerifyAll, verifyDeployment (and other
methods that call v.progress.OnProgress) invoke OnProgress unconditionally; fix
by either (a) normalizing in NewVerifyDeployment: if progress == nil assign a
no-op implementation that satisfies ProgressSink, or (b) add nil checks before
each progress emission (e.g., in VerifyAll, verifyDeployment and the other
affected methods) so you only call v.progress.OnProgress when v.progress != nil;
reference the ProgressSink field on the VerifyDeployment struct and the methods
VerifyAll and verifyDeployment when applying the change.

---

Nitpick comments:
In `@internal/adapters/verification/verifier.go`:
- Around line 309-317: The code sets deployment.Verification.EtherscanURL by
iterating deployment.Verification.Verifiers (a map) which is non-deterministic;
change the selection logic in the block that runs when verifiedCount ==
totalCount (around where VerificationStatusVerified is set) to deterministically
choose a preferred verifier: first check for a verifier with a known preferred
key (e.g., "etherscan") and non-empty URL, use that if present; if not found,
fall back to a stable deterministic selection such as iterating sorted keys or
selecting the lexicographically smallest key from
deployment.Verification.Verifiers to set deployment.Verification.EtherscanURL.
Ensure the code references the same map and preserves the existing status
assignment.

In `@internal/usecase/list_deployments.go`:
- Around line 125-143: findNetworkName currently calls
uc.networkResolver.ResolveNetwork for every name on each invocation which can
repeat expensive resolutions; modify ListDeployments (e.g., in Run) to build a
local map cache: call uc.networkResolver.GetNetworks once, resolve each name
once via uc.networkResolver.ResolveNetwork, store resolved Network objects keyed
by name (or chainID) and then have findNetworkName consult that local cache
instead of calling ResolveNetwork repeatedly; ensure the cache is scoped to the
Run invocation (pass it into findNetworkName or make findNetworkName accept a
prefilled map) and preserve existing error handling for ResolveNetwork failures.

ℹ️ Review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between ed53623 and de27f18.

⛔ Files ignored due to path filters (1)
  • go.sum is excluded by !**/*.sum
📒 Files selected for processing (48)
  • internal/adapters/progress/spinner.go
  • internal/adapters/progress/verify_progress.go
  • internal/adapters/providers.go
  • internal/adapters/verification/internal_verifier.go
  • internal/adapters/verification/manager.go
  • internal/adapters/verification/verifier.go
  • internal/app/wire_gen.go
  • internal/cli/render/deployments.go
  • internal/cli/verify.go
  • internal/usecase/list_deployments.go
  • internal/usecase/ports.go
  • internal/usecase/verify_deployment.go
  • internal/usecase/verify_deployment_test.go
  • test/manual/test_progress_simple.sh
  • test/manual/test_verify_interactive.sh
  • test/manual/test_verify_progress.sh
  • test/testdata/golden/integration/TestInitCommand/init_and_deploy/commands.golden
  • test/testdata/golden/integration/TestListCommand/list_contracts_in_subdirectories/commands.golden
  • test/testdata/golden/integration/TestListCommand/list_filter_by_chain/commands.golden
  • test/testdata/golden/integration/TestListCommand/list_filter_by_namespace/commands.golden
  • test/testdata/golden/integration/TestListCommand/list_multiple_contract_types/commands.golden
  • test/testdata/golden/integration/TestListCommand/list_with_all_categories/commands.golden
  • test/testdata/golden/integration/TestListCommand/list_with_deployments/commands.golden
  • test/testdata/golden/integration/TestListCommand/list_with_labels/commands.golden
  • test/testdata/golden/integration/TestListCommand/list_with_mixed_deployment_status/commands.golden
  • test/testdata/golden/integration/TestListCommand/list_with_multiple_chains/commands.golden
  • test/testdata/golden/integration/TestListCommand/list_with_multiple_namespaces_and_chains/commands.golden
  • test/testdata/golden/integration/TestListCommand/list_with_proxy_relationships/commands.golden
  • test/testdata/golden/integration/TestListCommand/list_with_tags/commands.golden
  • test/testdata/golden/integration/TestRunCommand/run_multiple_contracts/commands.golden
  • test/testdata/golden/integration/TestRunCommand/run_with_custom_proxy/commands.golden
  • test/testdata/golden/integration/TestRunCommand/simple/commands.golden
  • test/testdata/golden/integration/TestTagCommand/tag_add_multiple/commands.golden
  • test/testdata/golden/integration/TestTagCommand/tag_add_single/commands.golden
  • test/testdata/golden/integration/TestTagCommand/tag_after_deployment_update/commands.golden
  • test/testdata/golden/integration/TestTagCommand/tag_by_address/commands.golden
  • test/testdata/golden/integration/TestTagCommand/tag_cross_namespace/commands.golden
  • test/testdata/golden/integration/TestTagCommand/tag_deployment_by_id/commands.golden
  • test/testdata/golden/integration/TestTagCommand/tag_library_deployment/commands.golden
  • test/testdata/golden/integration/TestTagCommand/tag_long_tag_name/commands.golden
  • test/testdata/golden/integration/TestTagCommand/tag_multiple_contracts_different_tags/commands.golden
  • test/testdata/golden/integration/TestTagCommand/tag_remove_all_tags/commands.golden
  • test/testdata/golden/integration/TestTagCommand/tag_remove_existing/commands.golden
  • test/testdata/golden/integration/TestTagCommand/tag_with_label/commands.golden
  • test/testdata/golden/integration/TestTagCommand/tag_with_namespace/commands.golden
  • test/testdata/golden/integration/TestTagCommand/tag_with_network_and_namespace/commands.golden
  • test/testdata/golden/integration/TestTagCommand/tag_with_special_characters/commands.golden
  • test/testdata/golden/integration/TestWorkflow/complex_with_libraries/commands.golden
💤 Files with no reviewable changes (2)
  • internal/adapters/verification/internal_verifier.go
  • internal/adapters/verification/manager.go
✅ Files skipped from review due to trivial changes (3)
  • test/testdata/golden/integration/TestTagCommand/tag_with_network_and_namespace/commands.golden
  • test/testdata/golden/integration/TestInitCommand/init_and_deploy/commands.golden
  • test/testdata/golden/integration/TestTagCommand/tag_remove_existing/commands.golden
🚧 Files skipped from review as they are similar to previous changes (17)
  • test/testdata/golden/integration/TestTagCommand/tag_after_deployment_update/commands.golden
  • test/testdata/golden/integration/TestRunCommand/run_with_custom_proxy/commands.golden
  • test/testdata/golden/integration/TestTagCommand/tag_with_special_characters/commands.golden
  • test/manual/test_verify_progress.sh
  • test/testdata/golden/integration/TestListCommand/list_with_all_categories/commands.golden
  • test/testdata/golden/integration/TestListCommand/list_with_proxy_relationships/commands.golden
  • test/testdata/golden/integration/TestListCommand/list_contracts_in_subdirectories/commands.golden
  • test/testdata/golden/integration/TestListCommand/list_with_multiple_chains/commands.golden
  • test/testdata/golden/integration/TestListCommand/list_filter_by_chain/commands.golden
  • test/testdata/golden/integration/TestListCommand/list_with_mixed_deployment_status/commands.golden
  • test/testdata/golden/integration/TestRunCommand/run_multiple_contracts/commands.golden
  • test/testdata/golden/integration/TestListCommand/list_multiple_contract_types/commands.golden
  • test/manual/test_progress_simple.sh
  • test/manual/test_verify_interactive.sh
  • test/testdata/golden/integration/TestTagCommand/tag_multiple_contracts_different_tags/commands.golden
  • test/testdata/golden/integration/TestListCommand/list_filter_by_namespace/commands.golden
  • test/testdata/golden/integration/TestTagCommand/tag_with_label/commands.golden

Comment on lines +76 to +92
// OnProgress handles progress events
func (r *SpinnerProgressReporter) OnProgress(ctx context.Context, event usecase.ProgressEvent) {
	// Handle spinner states
	if event.Spinner {
		if !r.spinner.Active() {
			r.spinner.Start()
		}
		r.spinner.Suffix = " " + event.Message
	} else if r.spinner.Active() {
		r.spinner.Stop()
	}

	// Update current stage info if needed
	if len(r.stages) > 0 {
		r.stages[len(r.stages)-1].Message = event.Message
	}
}

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "=== 1. ProgressSink interface definition ==="
rg -n -A 10 'type ProgressSink interface' internal/usecase/ports.go

echo
echo "=== 2. SpinnerProgressReporter implementation ==="
rg -n -B 2 'func \(r \*SpinnerProgressReporter\)' internal/adapters/progress/spinner.go | head -60

echo
echo "=== 3. NopProgressReporter implementation ==="
rg -n -B 2 'func \(r \*NopProgressReporter\)' internal/adapters/progress/spinner.go | head -40

echo
echo "=== 4. Usage of ProgressSink in codebase ==="
rg -n 'ProgressSink' --type go internal/ | head -30

Repository: trebuchet-org/treb-cli

Length of output: 4013


🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "=== NopSink full implementation ==="
cat -n internal/adapters/progress/nop_sink.go

echo
echo "=== RunProgress implementation (key methods) ==="
rg -n -B 2 'func \(r \*RunProgress\)' internal/adapters/progress/run_progress.go

echo
echo "=== Usage of NopProgressReporter ==="
rg -n 'NopProgressReporter' internal/ cli/

echo
echo "=== Usage of NopSink ==="
rg -n 'NopSink' internal/ cli/

Repository: trebuchet-org/treb-cli

Length of output: 1077


🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "=== Check if NopProgressReporter is actually used ==="
rg -n 'NopProgressReporter' --type go internal/ cli/ 2>&1 || echo "Not found"

echo
echo "=== RunProgress file content ==="
cat -n internal/adapters/progress/run_progress.go

echo
echo "=== Check spinner.go for NopProgressReporter assertion ==="
rg -n -A 1 'Ensure.*NopProgressReporter' internal/adapters/progress/spinner.go || echo "No assertion found"

Repository: trebuchet-org/treb-cli

Length of output: 2455


Remove NopProgressReporter — it's dead code and doesn't implement ProgressSink.

NopProgressReporter (lines 194–206) defines ReportProgress, which is not part of the ProgressSink interface. It has no compile-time assertion and is never used anywhere in the codebase. A correct no-op implementation already exists in nop_sink.go (NopSink) which properly implements OnProgress, Info, and Error. Delete NopProgressReporter entirely.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/adapters/progress/spinner.go` around lines 76 - 92, Delete the dead
no-op reporter type NopProgressReporter and its ReportProgress method (the
unused functions that do not implement ProgressSink), since NopSink in
nop_sink.go already provides the correct OnProgress/Info/Error no-op
implementation; remove the NopProgressReporter declaration and its associated
method(s) and any stray references, and ensure there are no remaining
compile-time assertions or usages expecting ReportProgress so the build relies
on NopSink instead.

Comment on lines 231 to 266
// executeForgeVerify executes a forge verify-contract command
func (v *Verifier) executeForgeVerify(args []string) error {
	cmd := exec.Command("forge", args...)
	cmd.Dir = v.projectRoot

	// Print the command if debug is enabled
	if v.debug {
		cmdStr := fmt.Sprintf("forge %s", strings.Join(args, " "))
		fmt.Fprintf(os.Stderr, "\n[DEBUG] Executing: %s\n", cmdStr)
		fmt.Fprintf(os.Stderr, "[DEBUG] Working directory: %s\n\n", v.projectRoot)
	}

	output, err := cmd.CombinedOutput()
	if err != nil {
		// Parse the output for specific error messages
		outputStr := string(output)
		if strings.Contains(outputStr, "Already Verified") ||
			strings.Contains(outputStr, "is already verified") ||
			strings.Contains(outputStr, "already verified") {
			// Contract is already verified, not an error
			return nil
		}
		return fmt.Errorf("verification failed: %s", strings.TrimSpace(outputStr))
	}

	// Check if verification was successful
	outputStr := string(output)
	if strings.Contains(outputStr, "Contract successfully verified") ||
		strings.Contains(outputStr, "Already Verified") ||
		strings.Contains(outputStr, "is already verified") ||
		strings.Contains(outputStr, "already verified") {
		return nil
	}

	return fmt.Errorf("verification status unclear: %s", strings.TrimSpace(outputStr))
}

⚠️ Potential issue | 🟡 Minor

Context cancellation not propagated to forge command execution.

The Verify method receives a context.Context but executeForgeVerify doesn't use it. Long-running verification commands won't respect context cancellation (e.g., user Ctrl+C, timeouts).

🔧 Proposed fix to support context cancellation
-func (v *Verifier) executeForgeVerify(args []string) error {
-	cmd := exec.Command("forge", args...)
+func (v *Verifier) executeForgeVerify(ctx context.Context, args []string) error {
+	cmd := exec.CommandContext(ctx, "forge", args...)
 	cmd.Dir = v.projectRoot

Then update callers to pass ctx:

-	return v.executeForgeVerify(args)
+	return v.executeForgeVerify(ctx, args)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/adapters/verification/verifier.go` around lines 231-266: the
executeForgeVerify function currently ignores context cancellation, so update
its signature to accept a context (e.g., executeForgeVerify(ctx context.Context,
args []string) error) and use exec.CommandContext(ctx, "forge", args...) instead
of exec.Command so the spawned process is canceled when ctx is done; then update
all callers (including Verify) to pass the received ctx through, preserving
existing debug logging and output parsing in executeForgeVerify and returning
the same error/cleanup behavior when the context triggers cancellation.

Comment on lines 524 to 540
// displayWidth calculates the display width of a string, accounting for wide characters
func displayWidth(s string) int {
	width := 0
	for _, r := range s {
		// Most unicode symbols and emoji take 2 columns in terminals
		// Checkmarks and similar symbols are wide characters
		if r == '✔' || r == 0xFE0E || r == '⏳' {
			width += 2 // These characters are wide in most terminals
		} else if r > 127 {
			// Other unicode characters
			width += 2
		} else {
			width += 1
		}
	}
	return width
}

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

# First, verify the code exists at the specified location
cat -n internal/cli/render/deployments.go | sed -n '524,540p'

Repository: trebuchet-org/treb-cli

Length of output: 681


🏁 Script executed:

# Check if go-runewidth is already a dependency
cat go.mod | grep -i runewidth



🏁 Script executed:

# Search for other displayWidth or width-related functions in the codebase
rg "displayWidth|runewidth|unicode.*width" --type go



🏁 Script executed:

# Check how displayWidth is being used
rg "displayWidth\(" --type go -B 2 -A 2



🏁 Script executed:

# Check the broader context around the displayWidth function
cat -n internal/cli/render/deployments.go | sed -n '520,550p'



🏁 Script executed:

# Check what column 2 contains - search for the table construction context
cat -n internal/cli/render/deployments.go | sed -n '500,525p'



🏁 Script executed:

# Look for where cellWidth is used after being set
cat -n internal/cli/render/deployments.go | sed -n '515,545p'



🏁 Script executed:

# Check if go-runewidth is actually imported and used elsewhere
rg "go-runewidth|runewidth\." --type go



The Unicode width calculation is inaccurate and contradicts the code comment.

The displayWidth function treats all characters > 127 as width 2, which is incorrect for most Unicode characters. For example, accented Latin characters like 'é' are width 1, not 2. Additionally, the code contradicts its own comment on line 507 which states "unicode checkmark and box drawing chars count as 1"—they're actually counted as 2 in the implementation.

The check for 0xFE0E (a variation selector) is also unusual, as this character typically appears as a modifier rather than standalone.

Since github.com/mattn/go-runewidth is already a project dependency (v0.0.16), consider using it for accurate Unicode width handling according to UAX #11. This matters primarily if non-ASCII text eventually appears in the verification column (column 2).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/cli/render/deployments.go` around lines 524-540: the displayWidth
function currently misclassifies many Unicode runes as width 2 and special-cases
variation selector 0xFE0E; replace its custom logic by using the established
library github.com/mattn/go-runewidth (e.g. runewidth.RuneWidth or
runewidth.StringWidth) in the displayWidth implementation so widths follow
UAX#11; remove the manual >127 check and special-casing of '✔' and 0xFE0E,
update the function comment to reflect that runewidth is used, and ensure the
function still returns an int width for the given string.

Comment on lines 21 to 34
func NewVerifyDeployment(
	repo DeploymentRepository,
	contractVerifier ContractVerifier,
	networkResolver NetworkResolver,
	deploymentResolver DeploymentResolver,
	progress ProgressSink,
) *VerifyDeployment {
	return &VerifyDeployment{
		repo:               repo,
		contractVerifier:   contractVerifier,
		networkResolver:    networkResolver,
		deploymentResolver: deploymentResolver,
		progress:           progress,
	}

⚠️ Potential issue | 🟠 Major

Defend against nil ProgressSink before emitting progress events.

VerifyAll/verifyDeployment call v.progress.OnProgress(...) unconditionally. If a caller provides a nil sink, verification fails with a runtime panic instead of continuing silently.

Proposed fix
+func (v *VerifyDeployment) emitProgress(ctx context.Context, event ProgressEvent) {
+	if v.progress != nil {
+		v.progress.OnProgress(ctx, event)
+	}
+}
+
 func (v *VerifyDeployment) VerifyAll(ctx context.Context, filter domain.DeploymentFilter, options VerifyOptions) (*VerifyAllResult, error) {
-	v.progress.OnProgress(ctx, ProgressEvent{
+	v.emitProgress(ctx, ProgressEvent{
 		Stage:   "gathering",
 		Message: "Gathering deployments to verify...",
 		Spinner: true,
 	})
@@
-		v.progress.OnProgress(ctx, ProgressEvent{
+		v.emitProgress(ctx, ProgressEvent{
 			Stage:   "verifying",
 			Current: i + 1,
 			Total:   len(result.ToVerify),
 			Message: fmt.Sprintf("Verifying %s on chain %d...", deployment.ContractName, deployment.ChainID),
 			Spinner: true,
 		})
@@
-	v.progress.OnProgress(ctx, ProgressEvent{
+	v.emitProgress(ctx, ProgressEvent{
 		Stage:   "network-resolve",
 		Message: fmt.Sprintf("Resolving network for %s...", displayName),
 		Spinner: true,
 	})
@@
-	v.progress.OnProgress(ctx, ProgressEvent{
+	v.emitProgress(ctx, ProgressEvent{
 		Stage:   "verification",
 		Message: fmt.Sprintf("Submitting %s to block explorers...", displayName),
 		Spinner: true,
 	})

Also applies to: 55-59, 124-130, 191-195, 217-221

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/usecase/verify_deployment.go` around lines 21-34: constructor
NewVerifyDeployment may accept a nil ProgressSink which causes panics when
VerifyAll, verifyDeployment (and other methods that call v.progress.OnProgress)
invoke OnProgress unconditionally; fix by either (a) normalizing in
NewVerifyDeployment: if progress == nil assign a no-op implementation that
satisfies ProgressSink, or (b) add nil checks before each progress emission
(e.g., in VerifyAll, verifyDeployment and the other affected methods) so you
only call v.progress.OnProgress when v.progress != nil; reference the
ProgressSink field on the VerifyDeployment struct and the methods VerifyAll and
verifyDeployment when applying the change.

Comment on lines 267 to 273
// Try to resolve each network to find matching chain ID
for _, name := range networkNames {
	network, err := v.networkResolver.ResolveNetwork(ctx, name)
	if err != nil {
		// Skip networks that can't be resolved
		continue
	}

⚠️ Potential issue | 🟠 Major

Don’t mask network resolver errors as “not configured”.

All ResolveNetwork failures are dropped, then the function returns a generic “no network configuration found” error. That can mislead users when configs exist but fail to resolve (e.g., malformed entries).

Proposed fix
 func (v *VerifyDeployment) getNetworkNameForChainID(ctx context.Context, chainID uint64) (string, error) {
 	// Get all available network names
 	networkNames := v.networkResolver.GetNetworks(ctx)
+	var failedResolutions []string

 	// Try to resolve each network to find matching chain ID
 	for _, name := range networkNames {
 		network, err := v.networkResolver.ResolveNetwork(ctx, name)
 		if err != nil {
-			// Skip networks that can't be resolved
+			failedResolutions = append(failedResolutions, fmt.Sprintf("%s: %v", name, err))
 			continue
 		}
@@
 	}

 	// If no network found, return an error
+	if len(failedResolutions) > 0 {
+		return "", fmt.Errorf("no matching chain ID %d; failed to resolve some configured networks: %v", chainID, failedResolutions)
+	}
 	return "", fmt.Errorf("no network configuration found for chain ID %d. Please ensure the network is configured in foundry.toml", chainID)
 }

Also applies to: 280-281

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/usecase/verify_deployment.go` around lines 267-273: the loop over
networkNames calling v.networkResolver.ResolveNetwork should not silently ignore
all resolve errors and later return a generic “no network configuration found”;
instead, capture per-network ResolveNetwork failures (include the network name)
into a slice or error aggregator while continuing to attempt other networks, and
after the loop if no networks resolved return a consolidated error that includes
the collected ResolveNetwork errors so callers can see why configured networks
failed; apply the same change to the similar handling around the ResolveNetwork
call at the other block (lines 280-281) so both places return detailed resolve
errors rather than masking them.

- Propagate context to executeForgeVerify for proper cancellation support
- Add nil ProgressSink guard in NewVerifyDeployment (consistent with PruneRegistry)
- Replace custom displayWidth with go-runewidth for accurate Unicode widths
- Remove dead NopProgressReporter code (NopProgress in ports.go already serves this role)
- Surface network resolver errors instead of silently swallowing them

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Update golden file for governor test to match new list output format
(network name display and verification status icons from this branch).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@bowd bowd merged commit 0870755 into main Feb 27, 2026
5 checks passed
@bowd bowd deleted the fix/verify-network-flag branch February 27, 2026 14:34