
Conversation


@saiintbrisson saiintbrisson commented Jan 6, 2026

Summary by CodeRabbit

  • New Features

    • Added a verification claims system allowing verifiers to submit outcome claims for predictions, with confidence scores and evidence sources.
    • Added prediction claim submission and retrieval APIs with pagination support.
    • Added verifier topic registration to enable verifiers to specialize in specific prediction topics.
    • Added feedback submission mechanism for verifiers to report validation issues.
    • Added judge service for automated claim validation and verdict generation.
  • Chores

    • Database schema updates to support verification workflows.
    • Updated deployment configurations for new judge service.



vercel bot commented Jan 6, 2026

The latest updates on your projects.

| Project | Deployment | Review | Updated (UTC) |
| --- | --- | --- | --- |
| prediction-swarm | Ready | Ready (Preview, Comment) | Jan 6, 2026 0:54am |
| torus-bridge | Ready | Ready (Preview, Comment) | Jan 6, 2026 0:54am |
| torus-governance | Ready | Ready (Preview, Comment) | Jan 6, 2026 0:54am |
| torus-page | Ready | Ready (Preview, Comment) | Jan 6, 2026 0:54am |
| torus-portal | Ready | Ready (Preview, Comment) | Jan 6, 2026 0:54am |
| torus-wallet | Ready | Ready (Preview, Comment) | Jan 6, 2026 0:54am |

@saiintbrisson saiintbrisson changed the base branch from main to dev on January 6, 2026 00:53
@saiintbrisson
Collaborator Author

@coderabbitai review


coderabbitai bot commented Jan 6, 2026

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.


coderabbitai bot commented Jan 6, 2026

Walkthrough

This pull request introduces a claims-based verification system for predictions. It adds database tables for verification claims, verifier feedback, and verifier topic registrations. New API endpoints in swarm-api let verifiers submit claims with evidence and feedback. The swarm-verifier service is refactored to submit verification claims through those endpoints, while the new swarm-judge service evaluates mature claims via LLM prompts and generates verdicts, replacing the previous model of verifying individual predictions directly.

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| **Deployment Configuration**<br>`.github/ci-config.json`, `services/swarm-judge/helmfile.yaml`, `services/swarm-judge/values.yaml` | Added swarm-judge deployment target to CI config and Helm values. Updated release names and image repository from swarm-verifier to swarm-judge. |
| **Database Schema & Migrations**<br>`atlas/migrations/prediction_swarm/20260105202138.sql`, `atlas/migrations/webapps/20260105111457_open_verification.sql`, `atlas/migrations/webapps/20260105140000_verifier_feedback.sql`, `atlas/migrations/webapps/20260105160000_verification_claim_unique.sql` | Created three new tables: verification_claim (stores verifier claims), verifier_feedback (stores per-verifier feedback), verifier_topic_registration (links verifiers to topics). Extended verdict table with accepted_claim_id foreign key. Added indexes on parsed_prediction_id, verifier_agent_id, created_at, and failure_cause. |
| **Database Schema Exports**<br>`packages/db/src/schema/index.ts`, `packages/db/src/schema/memory.ts`, `packages/db/src/schema/swarm-verifier.ts` | Exported new schemas for verification claims, verifier feedback, and topic registrations. Added verifierCursorStateSchema for tracking cursor state per verifier. Updated verdictSchema with acceptedClaimId field. |
| **Swarm API – Routes & Schemas**<br>`services/swarm-api/src/routes/claims.ts`, `services/swarm-api/src/schemas/claims.ts` | Implemented comprehensive claims API with endpoints: GET /v1/predictions/claimable (fetch claimable predictions with filtering), POST /v1/predictions/:id/claim (submit verification claim), POST /v1/predictions/:id/feedback (submit feedback), POST /v1/verifiers/register-topic (register verifier to topic), GET /v1/predictions/:id/context (retrieve prediction context with tweets). Added Zod schemas for claim submissions, queries, feedback, and prediction context validation. |
| **Swarm API – Integration**<br>`services/swarm-api/src/server.ts`, `services/swarm-api/src/routes/predictions.ts`, `services/swarm-api/package.json` | Registered claimsRouter in server middleware. Extended prediction insertion with deduplication logic (findCanonicalPrediction and duplicate relation storage). Added test script and vitest dependency. |
| **Swarm Verifier – Refactored Architecture**<br>`services/swarm-verifier/src/index.ts`, `services/swarm-verifier/src/services/prediction-verifier.ts` | Rewritten entry point with structured environment validation, SwarmApiClient initialization, and polling loop for claimable predictions. PredictionVerifier class now orchestrates multi-step LLM workflow: validates slices, extracts timeframe via LLM, validates extraction, generates verdict, and submits claim or feedback. Replaced file-based prompt loading with PromptLoader class. Added retry logic with exponential backoff for API calls. |
| **Swarm Verifier – AI Integration**<br>`services/swarm-verifier/src/ai/openrouter-client.ts`, `services/swarm-verifier/src/ai/prompt-loader.ts`, `services/swarm-verifier/src/schemas/llm-response.ts` | Added OpenRouterClient for structured LLM completions with schema validation and retry logic, supporting both standard and search-enabled requests. Introduced PromptLoader for caching Markdown prompts. Added Zod schemas for LLM responses: TimeframeExtractionResult, FilterValidationResult, VerdictResult. |
| **Swarm Verifier – Utilities & Storage**<br>`services/swarm-verifier/src/api-client.ts`, `services/swarm-verifier/src/seen-storage.ts`, `services/swarm-verifier/src/utils/retry.ts` | Introduced SwarmApiClient with sr25519 signature-based authentication for API requests (getClaimablePredictions, getPredictionContext, submitClaim, submitFeedback). Added verifier cursor storage functions (getVerifierCursor, updateVerifierCursor) for resumable polling. Implemented withRetry utility with exponential backoff for robust API operations. |
| **Swarm Verifier – Removed Components**<br>`services/swarm-verifier/src/dedup-retroactive.ts`, `services/swarm-verifier/DEDUPLICATION.md`, `services/swarm-verifier/vitest.config.ts`, `services/swarm-verifier/package.json` | Removed retroactive deduplication script and related documentation; deduplication logic moved to swarm-api. Removed vitest configuration and adjusted dependencies (added Polkadot ecosystem packages, zod, openai, canonicalize; removed dedup-related scripts and test runner). Simplified eslint config. |
| **Swarm Judge – New Service**<br>`services/swarm-judge/` (package.json, src/index.ts, src/judge.ts, src/logger.ts, helmfile.yaml, values.yaml, tsconfig.json, eslint.config.js, …/*.md) | New service implementing claims-based verdict generation. Refactored PredictionVerifier to PredictionJudge, processing mature claims rather than individual predictions. Fetches predictions with associated claims, computes majority verdict from claims, and stores linked verdict. Added comprehensive prompt documentation (CLAIM_VALIDATION_PROMPT.md, FILTER_VALIDATION_PROMPT.md, TIMEFRAME_PROMPT.md, VERDICT_PROMPT.md) defining LLM validation procedures. |
| **Configuration & Tooling**<br>`apps/torus-wallet/next-env.d.ts`, `services/swarm-api/vitest.config.ts` | Updated Next.js import path from dev types to standard types. Added Vitest configuration for swarm-api test runner. |

Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Verifier as Swarm Verifier<br/>(Claims Submitter)
    participant API as Swarm API
    participant DB as Database
    participant Judge as Swarm Judge<br/>(Verdict Generator)
    participant LLM as LLM<br/>(OpenRouter)

    rect rgb(230, 245, 255)
    Note over Verifier,DB: Claims Submission Phase
    Verifier->>API: GET /v1/predictions/claimable
    API->>DB: Query predictions without:<br/>verdict, feedback, duplicate claims
    DB-->>API: Return claimable predictions
    API-->>Verifier: Claimable predictions batch
    
    Verifier->>API: GET /v1/predictions/:id/context
    API->>DB: Fetch prediction context & tweets
    DB-->>API: Prediction with tweets
    API-->>Verifier: Full prediction context
    
    Verifier->>Verifier: Validate & analyze prediction
    Verifier->>API: POST /v1/predictions/:id/claim
    rect rgb(200, 220, 255)
    Note over API,DB: Signature verification & storage
    API->>DB: Insert verification_claim
    API->>DB: Store claim_outcome, confidence, sources
    end
    DB-->>API: Claim stored
    API-->>Verifier: claim_id returned
    end

    rect rgb(240, 255, 230)
    Note over Judge,DB: Verdict Generation Phase
    Judge->>DB: SELECT predictions WITH<br/>mature claims & no verdict
    DB-->>Judge: Prediction + claims array
    
    Judge->>LLM: Validate claim consistency
    LLM-->>Judge: is_logically_consistent,<br/>sources_relevant
    
    Judge->>LLM: Generate verdict from<br/>claim majority
    LLM-->>Judge: verdict, confidence, reasoning
    
    Judge->>DB: INSERT verdict with<br/>accepted_claim_id → verification_claim
    DB-->>Judge: Verdict persisted
    end

    Note over Verifier,Judge: Claims flow enables async<br/>verification & dispute resolution
```

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs

  • #409: Overlaps with deduplication logic removal and replacement in prediction processing workflows.

Suggested labels

enhancement

Suggested reviewers

  • PsicoThePato
  • EdSDR

Poem

🐰 Verifiers submit their claims with care,
The Judge examines evidence fair,
LLMs validate what's been said,
Truth from many minds is read,
Predictions now have voice to share! 🎯

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)

| Check name | Status | Explanation | Resolution |
| --- | --- | --- | --- |
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 21.43%, which is insufficient; the required threshold is 80.00%. | You can run @coderabbitai generate docstrings to improve docstring coverage. |

✅ Passed checks (2 passed)

| Check name | Status | Explanation |
| --- | --- | --- |
| Description Check | ✅ Passed | Check skipped - CodeRabbit’s high-level summary is enabled. |
| Title check | ✅ Passed | The title 'refac(pred-swarm): open verifier' clearly and specifically describes the main change: a refactor of the prediction-swarm component to open up the verifier functionality to external agents. |
✨ Finishing touches
  • 📝 Generate docstrings



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 19

Note

Due to the large number of review comments, Critical and Major severity comments were prioritized as inline comments.

Fix all issues with AI Agents 🤖
In @apps/torus-wallet/next-env.d.ts:
- Line 3: The committed change manually edits next-env.d.ts to import
"./.next/types/routes.d.ts"; instead of editing this auto-generated file, revert
any manual modifications in next-env.d.ts so it matches the default Next.js
scaffold and regenerate it by running the Next.js dev or build command (e.g.,
`next dev` or `next build`) to recreate correct auto-generated declarations;
ensure you do not hardcode the import in next-env.d.ts and commit the
regenerated file as-is.

In @atlas/migrations/prediction_swarm/20260105202138.sql:
- Around line 30-43: The foreign key column parsed_prediction_id in the
verifier_feedback table should not generate its own UUID; remove the DEFAULT
uuidv7() from the column definition so it remains "parsed_prediction_id uuid NOT
NULL" and keep the existing foreign key constraint
("verifier_feedback_parsed_prediction_id_parsed_prediction_id_fk") intact; apply
the same change pattern used for verification_claim to avoid auto-generating
UUIDs for referenced IDs.
- Around line 50-61: The migration creates table verifier_topic_registration
with topic_id defined as uuid NOT NULL DEFAULT uuidv7(), but foreign key columns
should not supply a default; remove the DEFAULT uuidv7() from the topic_id
column definition so topic_id is declared as "topic_id" uuid NOT NULL and keep
the existing FK constraint referencing prediction_topic(id) (preserving ON
DELETE NO ACTION).
- Around line 2-20: The migration creates
verification_claim.parsed_prediction_id with a DEFAULT uuidv7(), but
parsed_prediction_id is a foreign key to parsed_prediction(id) so the default
will generate non-existent IDs and cause FK violations; remove the DEFAULT
uuidv7() from the parsed_prediction_id column definition in the CREATE TABLE for
"verification_claim" (leave it NOT NULL) so callers must provide a valid
parsed_prediction_id that exists in "parsed_prediction" (no changes to the FK
constraint "verification_claim_parsed_prediction_id_parsed_prediction_id_fk"
otherwise).
- Around line 27-28: The ALTER TABLE statement adds a nullable foreign key
column "accepted_claim_id" to table "public"."verdict" but sets DEFAULT
uuidv7(), which will generate non-existent UUIDs and cause FK violations; update
the migration so the ALTER TABLE adds "accepted_claim_id" uuid NULL with no
DEFAULT (leave default as NULL) and still add the FOREIGN KEY constraint
"verdict_accepted_claim_id_verification_claim_id_fk" referencing
"public"."verification_claim" ("id").

In @atlas/migrations/webapps/20260105140000_verifier_feedback.sql:
- Around line 1-17: The migration creates verifier_feedback but depends on
failure_cause_enum and parsed_prediction which live in the prediction_swarm DB
(per atlas.hcl dev-webapps vs dev-prediction-swarm), so this will fail; either
move/define failure_cause_enum and the parsed_prediction table into the webapps
schema before creating verifier_feedback, or instead add this verifier_feedback
migration to the prediction_swarm environment so it runs against the DB that
already contains failure_cause_enum and parsed_prediction; update the migration
placement accordingly and ensure REFERENCES parsed_prediction(id) and
failure_cause_enum resolve in the same target DB.

In @packages/db/src/schema/memory.ts:
- Around line 376-378: The foreign key definition for acceptedClaimId
(uuidv7("accepted_claim_id").references(() => verificationClaimSchema.id)) lacks
an onDelete behavior; update the acceptedClaimId foreign key declaration to
include an explicit onDelete clause (choose either "set null" to preserve
verdicts but null the claim reference, or "restrict" to prevent deletion of
referenced verificationClaim rows) and adjust the column nullability if using
"set null" so verdict records can have a nullable acceptedClaimId.

In @packages/db/src/schema/swarm-verifier.ts:
- Line 1: Remove the unused `integer` import from the import statement that
currently reads import { integer, text } from "drizzle-orm/pg-core";—leave only
the symbols actually used (e.g., `text`) so the file no longer imports
`integer`.

In @services/swarm-api/vitest.config.ts:
- Around line 1-9: The config uses __dirname which is undefined in ES modules;
replace it by deriving the directory from import.meta.url: import {
fileURLToPath } from "node:url" and compute const __filename =
fileURLToPath(import.meta.url) and const __dirname = path.dirname(__filename),
then use that __dirname in the alias path.resolve call (keep symbols:
import.meta.url, fileURLToPath, __filename, __dirname, path.resolve,
defineConfig).
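
A minimal sketch of that fix, with a hypothetical `~` to `./src` alias standing in for whatever the real config resolves:

```ts
import path from "node:path";
import { fileURLToPath } from "node:url";
import { defineConfig } from "vitest/config";

// Derive __dirname in an ES module, where the CommonJS global is undefined.
const __filename = fileURLToPath(import.meta.url);
const __dirname = path.dirname(__filename);

export default defineConfig({
  test: {
    alias: {
      // Placeholder alias; mirror whatever the real config maps.
      "~": path.resolve(__dirname, "./src"),
    },
  },
});
```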

In @services/swarm-judge/eslint.config.js:
- Around line 4-9: Remove the redundant empty config object that sets ignores:
[] and match the swarm-verifier pattern by exporting the base config directly;
replace the current export default array that includes { ignores: [] } with a
direct export of the spreaded baseConfig (i.e., use export default
[...baseConfig]) so the module no longer contains the unused ignores property
and aligns with the other service.

In @services/swarm-judge/FILTER_VALIDATION_PROMPT.md:
- Line 139: The failure_cause enum lists "FUTURE_TIMEFRAME" but there is no
corresponding explanation in the "disqualifying factors" section; either add a
short definition and examples for when to mark a claim with failure_cause =
"FUTURE_TIMEFRAME" (e.g., content asserting plans/intended future actions or
predictions that make extraction inapplicable) into the disqualifying factors
block, or if this cause is invalid remove "FUTURE_TIMEFRAME" from the
failure_cause union in FILTER_VALIDATION_PROMPT.md and update any references to
it (search for the symbol failure_cause and the literal "FUTURE_TIMEFRAME") so
the enum and documentation stay consistent.

In @services/swarm-judge/package.json:
- Line 35: Remove the "vitest" entry from the runtime "dependencies" section of
package.json and ensure it remains only under "devDependencies"; update the
dependencies object by deleting the "vitest" key (leave the existing
"devDependencies.vitest" entry intact or add the correct version there), then
run your package manager (npm/yarn/pnpm) to restore a clean lockfile so vitest
is no longer installed as a production dependency.

In @services/swarm-judge/src/index.ts:
- Around line 15-28: The file reads for timeframePrompt, filterValidationPrompt,
and verdictPrompt use raw await readFile and can cause unhandled promise
rejections; replace these with the tryAsync abstraction from
@torus-network/torus-utils/try-catch (import tryAsync) and call tryAsync(() =>
readFile(..., 'utf-8')) for each prompt, then handle the returned [err, result]
tuple: log or throw a clear error if err is present and assign the result to
timeframePrompt/filterValidationPrompt/verdictPrompt respectively so failures
are handled via the project’s error-handling abstraction.
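
A sketch of the suggested pattern for one of the prompts, assuming tryAsync returns the [err, result] tuple described above:

```ts
import { readFile } from "node:fs/promises";
import { tryAsync } from "@torus-network/torus-utils/try-catch";

// One prompt shown; the same pattern applies to the other two.
const [err, timeframePrompt] = await tryAsync(() =>
  readFile("TIMEFRAME_PROMPT.md", "utf-8"),
);
if (err !== undefined || timeframePrompt === undefined) {
  throw new Error(`Failed to read timeframe prompt: ${String(err)}`);
}
```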

In @services/swarm-judge/TIMEFRAME_PROMPT.md:
- Around line 233-237: The example JSON in TIMEFRAME_PROMPT.md uses the
inconsistent field "goal_text" while the schema expects "target_text"; update
the example to replace "goal_text" with "target_text" (e.g., change "goal_text":
"ETH will flip BTC" to "target_text": "ETH will flip BTC") and run a quick grep
for any other occurrences of "goal_text" to ensure all examples match the
schema.
- Line 160: Update the precision union to include the missing tokens used in
examples: add "unbounded" (and also "event" as used in Example 6) to the
precision type so it becomes "hour" | "day" | "week" | "month" | "quarter" |
"year" | "unbounded" | "event"; ensure any schema/type definitions and relevant
validation logic that reference precision accept these new values and update any
documentation/comments that enumerate valid precision values.

In @services/swarm-verifier/package.json:
- Around line 25-35: Update the OpenAI SDK dependency in package.json by
replacing the existing entry "openai": "^4.73.1" with the current version (e.g.,
"openai": "^6.15.0"), then reinstall dependencies (npm/yarn/pnpm) and run your
tests; also scan code that imports/uses the OpenAI client (search for usages of
the OpenAI package name or OpenAI client construction) and apply any required
migration changes per the 6.x release notes to fix breaking API changes.

In @services/swarm-verifier/src/ai/prompt-loader.ts:
- Around line 16-34: The loadPrompt method uses a raw try-catch and must be
refactored to use the project's error handling helpers: import trySync or
trySyncStr from "@torus-network/torus-utils/try-catch" and replace the try {
readFileSync(...) } catch block with a call to the appropriate helper (e.g.,
trySyncStr(() => readFileSync(promptPath, "utf-8")) or trySync(readFileSync,
promptPath, "utf-8")). Preserve the existing behavior: set
this.cache.set(filename, content) and return content on success, and on failure
throw a new Error with the same message format (`Failed to load prompt
${filename}: ${err.message}`) using the error object returned by the helper;
keep the rest of loadPrompt (cache check and promptPath construction) unchanged.
- Around line 12-14: The constructor currently defaults promptsDir to
process.cwd(), which is fragile; change the default to be module-relative by
assigning this.promptsDir = promptsDir ?? resolve(__dirname, '..') in the
constructor of the PromptLoader (or the class containing the constructor) so
prompt files are resolved relative to the module file rather than the current
working directory.
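
Combining both suggestions, a sketch of the loader; the trySync tuple shape is assumed from this review's description rather than the library's actual typings:

```ts
import { readFileSync } from "node:fs";
import { dirname, resolve } from "node:path";
import { fileURLToPath } from "node:url";
import { trySync } from "@torus-network/torus-utils/try-catch";

// Module-relative directory, since process.cwd() depends on where the
// service happens to be launched from.
const moduleDir = dirname(fileURLToPath(import.meta.url));

export class PromptLoader {
  private readonly promptsDir: string;
  private readonly cache = new Map<string, string>();

  constructor(promptsDir?: string) {
    this.promptsDir = promptsDir ?? resolve(moduleDir, "..");
  }

  loadPrompt(filename: string): string {
    const cached = this.cache.get(filename);
    if (cached !== undefined) return cached;

    const promptPath = resolve(this.promptsDir, filename);
    const [err, content] = trySync(() => readFileSync(promptPath, "utf-8"));
    if (err !== undefined || content === undefined) {
      throw new Error(`Failed to load prompt ${filename}: ${String(err)}`);
    }
    this.cache.set(filename, content);
    return content;
  }
}
```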

In @services/swarm-verifier/src/schemas/llm-response.ts:
- Around line 3-30: Add JSDoc comments describing the purpose of
TimeframeExtractionSchema and the TimeframeExtractionResult type, then tighten
validation for start_utc and end_utc by replacing z.string().nullable() with
z.string().datetime().nullable() (or z.string().refine(...) to enforce ISO-8601
if your zod version lacks .datetime), keeping nullability; ensure the JSDoc
documents that these fields are ISO-8601 UTC timestamps (or null) and mention
precision/confidence semantics for the schema.
- Around line 32-54: Add JSDoc for FilterValidationSchema and the exported
FilterValidationResult type: document the schema purpose and each field
(context: description of the validated text; is_valid: boolean meaning of
true/false; failure_cause: enumerate and explain each allowed string value and
that it may be null; confidence: number between 0 and 1 indicating certainty;
reasoning: human-readable explanation of why the label was assigned). Attach
JSDoc above the FilterValidationSchema declaration and a short typedef comment
for FilterValidationResult, keeping descriptions concise and following project
JSDoc style.
- Around line 56-63: Add JSDoc comments for the exported VerdictSchema and
VerdictResult that document each field: explain that "valid" indicates whether
the input/query was well-formed/processable (true if the verification run was
applicable), while "verdict" is the outcome of the verification (true means the
subject passes the check); document "confidence" as a numeric score between 0
and 1 describing certainty and note its min/max constraints; document
"reasoning" as the human- or machine-readable explanation supporting the
verdict; attach these comments directly above the exported VerdictSchema and
VerdictResult declarations (referencing VerdictSchema, VerdictResult, valid,
verdict, confidence, reasoning).

In @services/swarm-verifier/src/services/prediction-verifier.ts:
- Around line 252-260: The current implementation calls
this.llmClient.completeWithSearch(...) in the method that returns { verdict,
sources: undefined }, which drops any web-search citations; update the method to
extract citations from the response of completeWithSearch (inspect the returned
object from completeWithSearch when invoking this.llmClient.completeWithSearch
with the VerdictSchema) and set sources to that citation list (or an empty array
if none), returning { verdict, sources }; if this is intentionally a
placeholder, add a TODO comment above the return mentioning extraction of search
citations and link to the expected OpenRouter plugin citation format for future
implementation.
- Around line 269-276: Replace the raw try-catch around
this.apiClient.getPredictionContext in the function that obtains
PredictionContext with the tryAsync helper from
'@torus-network/torus-utils/try-catch': call tryAsync(() =>
this.apiClient.getPredictionContext(prediction.id)) (or
tryAsync(this.apiClient.getPredictionContext, prediction.id) depending on
signature), assign context = result.result on success, and on failure log the
error with logger.error(String(result.error)), increment this.stats.errors and
return null—remove the original try-catch and ensure the tryAsync import is
added.
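
A sketch of the second item, with structural stand-ins for the class members the real method uses:

```ts
import { tryAsync } from "@torus-network/torus-utils/try-catch";

// Structural stand-ins; the real code accesses these via `this`.
interface ApiClient {
  getPredictionContext(id: string): Promise<unknown>;
}
interface Logger {
  error(msg: string): void;
}

async function loadContext(
  apiClient: ApiClient,
  logger: Logger,
  stats: { errors: number },
  predictionId: string,
): Promise<unknown> {
  const [err, context] = await tryAsync(() =>
    apiClient.getPredictionContext(predictionId),
  );
  if (err !== undefined) {
    logger.error(String(err));
    stats.errors++;
    return null;
  }
  return context;
}
```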

In @services/swarm-verifier/src/utils/retry.ts:
- Around line 4-27: The withRetry function uses a raw try-catch and console.log
and needs to be refactored to use the tryAsync abstraction from
@torus-network/torus-utils/try-catch, replace console.log with the project
structured logger (e.g., use the existing logger instance or import one) and add
JSDoc for the function; specifically, import tryAsync, wrap the operation
invocation with await tryAsync(() => operation()), handle the returned [err,
result] tuple to set lastError or return result, compute/backoff and await delay
as before but log retry attempts via the structured logger, and add a short
JSDoc block describing parameters (operation, maxRetries, baseDelay) and the
returned Promise<T>.
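
A sketch of the suggested withRetry shape; BasicLogger.create mirrors the call quoted elsewhere in this review, while the tryAsync tuple and the logger.warn method are assumptions:

```ts
import { BasicLogger } from "@torus-network/torus-utils/logger";
import { tryAsync } from "@torus-network/torus-utils/try-catch";

const logger = BasicLogger.create({ name: "retry" });

/**
 * Retries an async operation with exponential backoff.
 *
 * @param operation - Operation to run on each attempt.
 * @param maxRetries - Maximum number of attempts.
 * @param baseDelay - Initial delay in ms, doubled after each failure.
 */
export async function withRetry<T>(
  operation: () => Promise<T>,
  maxRetries = 3,
  baseDelay = 1_000,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= maxRetries; attempt++) {
    const [err, result] = await tryAsync(() => operation());
    if (err === undefined) return result as T;
    lastError = err;
    if (attempt < maxRetries) {
      const delay = baseDelay * 2 ** (attempt - 1);
      // Structured logging instead of console.log, per the comment above.
      logger.warn(`Attempt ${attempt} failed; retrying in ${delay}ms`);
      await new Promise((r) => setTimeout(r, delay));
    }
  }
  throw lastError;
}
```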

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
services/swarm-judge/src/judge.ts (2)

633-665: Add timeout to external API calls to prevent indefinite hangs.

The fetch calls to OpenRouter lack timeout handling. If the API becomes unresponsive, workers will block indefinitely, potentially exhausting the worker pool and causing cascading failures.

🔎 Proposed fix using AbortController with timeout
  private async generateVerdict(
    context: string,
    targetText: string,
    timeframeText: string,
    timeframeResult: TimeframeExtractionResult,
  ): Promise<VerdictResult> {
    // ... inputData setup ...

+   const controller = new AbortController();
+   const timeoutId = setTimeout(() => controller.abort(), 120_000); // 2 minute timeout

    const response = await fetch(
      "https://openrouter.ai/api/v1/chat/completions",
      {
        method: "POST",
        headers: {
          Authorization: `Bearer ${this.config.openrouterApiKey}`,
          "Content-Type": "application/json",
        },
        body: JSON.stringify({
          // ... body ...
        }),
+       signal: controller.signal,
      },
    );

+   clearTimeout(timeoutId);

    // ... rest of method ...
  }

Apply similar changes to validateFilterExtraction and extractTimeframe methods.


702-703: Avoid eslint-disable and refactor the non-null assertion.

Per coding guidelines, eslint-disable comments should not be used. Additionally, the !! operator is a form of non-null assertion.

🔎 Proposed fix
    const sources: UrlCitation[] | undefined =
      message.annotations
        ?.filter((ann) => ann.type === "url_citation" && ann.url_citation)
-       // eslint-disable-next-line
-       .map((ann) => ann.url_citation!!) ?? undefined;
+       .map((ann) => ann.url_citation)
+       .filter((citation): citation is UrlCitation => citation !== undefined) ?? undefined;
🟡 Minor comments (8)
apps/torus-wallet/next-env.d.ts-3-3 (1)

3-3: Regenerate auto-generated file instead of manually editing

While the import path ./.next/types/routes.d.ts is correct for Next.js 16.x, this file should not be manually edited. Although next-env.d.ts is committed to version control, it's auto-generated by Next.js and can be regenerated cleanly by running next dev or next build rather than manually patching the import path. Regenerating ensures full consistency with the current build output.

services/swarm-verifier/src/schemas/llm-response.ts-56-63 (1)

56-63: Add JSDoc documentation to clarify the distinct purpose of each field.

According to the DOCUMENTATION_STYLE.md, all exported types and interfaces require JSDoc documentation. The VerdictSchema and VerdictResult lack documentation explaining the purpose of each field, particularly the distinction between the valid and verdict boolean fields, which both appear in the schema but serve different purposes.

Suggested JSDoc improvements
+/**
+ * Schema for LLM verdict result on a prediction claim.
+ * Contains validation status, final judgment, and reasoning.
+ */
 export const VerdictSchema = z.object({
+  /** Whether the verdict response passed validation */
   valid: z.boolean(),
+  /** The actual verdict judgment on the claim */
   verdict: z.boolean(),
+  /** Confidence score between 0 and 1 */
   confidence: z.number().min(0).max(1),
+  /** Reasoning explanation for the verdict */
   reasoning: z.string(),
 });

+/**
+ * Inferred TypeScript type for LLM verdict results.
+ */
 export type VerdictResult = z.infer<typeof VerdictSchema>;
services/swarm-verifier/src/services/prediction-verifier.ts-442-446 (1)

442-446: Add validation and logging for null timeframe dates to prevent silent data quality issues.

The TimeframeExtractionSchema explicitly allows null start_utc and end_utc for all status values (not just "missing"). When status is "explicit", "implicit", or "inferred", the code proceeds without validating that dates are present, then silently falls back to the current timestamp at lines 443-444. This masks extraction failures—e.g., if the LLM returns a successful status but fails to extract actual dates, the claim will be submitted with incorrect temporal context without any indication of the problem.

Either:

  1. Add a check to reject non-missing statuses with null dates, or
  2. Log a warning when the fallback is triggered to surface data quality issues
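
A fragment for option 2, meant to sit where the fallback currently happens; timeframe, prediction, and logger are taken from the surrounding method:

```ts
// Option 2: surface the fallback instead of applying it silently.
if (timeframe.status !== "missing" && timeframe.start_utc === null) {
  logger.warn(
    `Status "${timeframe.status}" returned null start_utc for prediction ` +
      `${prediction.id}; falling back to the current timestamp`,
  );
}
```
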
services/swarm-judge/TIMEFRAME_PROMPT.md-233-237 (1)

233-237: Inconsistent field name in example.

The input JSON uses "goal_text" but the input schema (line 19) defines "target_text". Maintain consistency across all examples.

🔎 Check all examples for field name consistency
#!/bin/bash
# Description: Find all occurrences of goal_text vs target_text in examples

echo "=== Checking for 'goal_text' usage ==="
rg -n '"goal_text"' services/swarm-judge/TIMEFRAME_PROMPT.md

echo -e "\n=== Checking for 'target_text' usage ==="
rg -n '"target_text"' services/swarm-judge/TIMEFRAME_PROMPT.md
services/swarm-judge/TIMEFRAME_PROMPT.md-160-160 (1)

160-160: Add "unbounded" to precision enum.

The output schema defines precision as "hour" | "day" | "week" | "month" | "quarter" | "year", but several examples (lines 309, 414, 444, 472, 501, 530, 559) use "precision": "unbounded" for missing timeframes. Add "unbounded" to the enum or clarify the expected precision value for missing/event_trigger statuses.

🔎 Proposed fix
-  "precision": "hour" | "day" | "week" | "month" | "quarter" | "year",
+  "precision": "hour" | "day" | "week" | "month" | "quarter" | "year" | "event" | "unbounded",

Note: Also add "event" since Example 6 (line 371) uses it.

services/swarm-judge/package.json-35-35 (1)

35-35: Remove vitest from runtime dependencies.

Vitest appears in both dependencies (line 35) and devDependencies (line 48). Testing frameworks should only be in devDependencies as they're not needed at runtime.

🔎 Proposed fix
   "dependencies": {
     "@t3-oss/env-core": "^0.11.1",
     "@torus-ts/db": "workspace:*",
     "@torus-network/sdk": "workspace:*",
     "@torus-network/torus-utils": "workspace:*",
     "cors": "^2.8.5",
     "express": "^4.19.2",
     "node-fetch": "^3.3.2",
     "superjson": "catalog:",
     "zod": "catalog:",
-    "tsafe": "catalog:",
-    "vitest": "catalog:"
+    "tsafe": "catalog:"
   },

Also applies to: 48-48

packages/db/src/schema/swarm-verifier.ts-1-1 (1)

1-1: Remove unused import.

The integer import from drizzle-orm/pg-core is not used in this file.

🔎 Proposed fix
-import { integer, text } from "drizzle-orm/pg-core";
+import { text } from "drizzle-orm/pg-core";
 import { createTable, ss58Address, timeFields } from "./utils";
services/swarm-judge/FILTER_VALIDATION_PROMPT.md-139-139 (1)

139-139: Undocumented failure cause: FUTURE_TIMEFRAME.

The failure_cause enum includes "FUTURE_TIMEFRAME" but this cause is not explained in the "disqualifying factors" section (lines 64-129). Either add documentation for when this should be used, or remove it from the enum if it's not applicable.

🧹 Nitpick comments (18)
services/swarm-verifier/eslint.config.js (1)

1-3: Add JSDoc type annotation for consistency and type safety.

The swarm-judge ESLint config includes a JSDoc type annotation (/** @type {import('typescript-eslint').Config} */), which improves IDE support and type checking. Consider adding the same annotation here for consistency across services.

🔎 Proposed addition
 import baseConfig from "@torus-ts/eslint-config/base";
 
+/** @type {import('typescript-eslint').Config} */
 export default [...baseConfig];
services/swarm-verifier/src/ai/prompt-loader.ts (2)

1-1: Consider async file I/O to avoid blocking the event loop.

Using readFileSync blocks the Node.js event loop during file reads. While caching mitigates repeated blocking, the first access to each prompt will still block. Consider using readFile (async) with await to keep the event loop responsive, especially if prompts are loaded during request handling.

🔎 Suggested async refactor
-import { readFileSync } from "node:fs";
+import { readFile } from "node:fs/promises";
 import { resolve } from "node:path";

Then update the loadPrompt method to be async (see separate comment on that method).


4-7: Enhance JSDoc to follow project documentation standards.

The documentation is minimal. Per coding guidelines, follow JSDoc standards defined in @docs/DOCUMENTATION_STYLE.md. Consider documenting:

  • Class purpose and responsibilities
  • Constructor parameters and their defaults
  • Caching behavior
  • Error conditions
  • Thread-safety considerations (if applicable)
services/swarm-verifier/src/ai/openrouter-client.ts (2)

106-110: Use structured logger instead of console.error.

For consistency with the rest of the codebase (e.g., BasicLogger used in prediction-verifier.ts), use a logger instance here instead of raw console.error.

🔎 Proposed fix
+import { BasicLogger } from "@torus-network/torus-utils/logger";
+
+const logger = BasicLogger.create({ name: "openrouter-client" });
+
 // In error handling:
-      console.error(`[OpenRouter] Full error details:`, {
-        message: lastError.message,
-        cause: lastError.cause,
-        stack: lastError.stack,
-      });
+      logger.error(`Full error details:`, {
+        message: lastError.message,
+        cause: lastError.cause,
+        stack: lastError.stack,
+      });

186-188: Inconsistent error logging compared to completeStructured.

completeStructured logs full error details (message, cause, stack) at lines 106-110, but completeWithSearch only throws a basic error message. This inconsistency would be resolved by the refactor to extract common logic.

services/swarm-verifier/src/services/prediction-verifier.ts (3)

282-288: Unhandled errors from submitFeedback and other async calls.

Multiple await calls throughout verifyPrediction (e.g., lines 282, 302, 320, 357, 375, 399, 420, 436) can throw but lack error handling. If any fails, the function throws without incrementing stats.errors, potentially causing inconsistent state tracking.

Consider wrapping these calls with tryAsync and handling failures gracefully, especially for the submitFeedback/submitClaim calls which are side-effecting operations.


78-89: Silent failure when tweet not found in extractSliceText.

Line 85 returns an empty string if a tweet is missing, which silently corrupts the extracted text. While validatePostSlices should catch this earlier, consider adding a defensive log or throwing, as sketched below, to prevent subtle bugs if the call order changes.
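
For example, a defensive branch; the tweetById lookup is a hypothetical name for however the method resolves tweets:

```ts
// Hypothetical lookup; surface missing tweets instead of silently
// returning an empty slice.
const tweet = tweetById.get(slice.source.tweet_id);
if (tweet === undefined) {
  logger.warn(`Slice references unknown tweet ${slice.source.tweet_id}`);
  return "";
}
```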


449-452: predictionsProcessed only incremented for claims, not feedback.

The counter predictionsProcessed is only incremented when a claim is submitted (line 451), but predictions resulting in feedback are also "processed". Consider either:

  1. Renaming to claimsGenerated for clarity, or
  2. Incrementing at the start/end of verifyPrediction for all outcomes
services/swarm-judge/TIMEFRAME_PROMPT.md (1)

79-85: Consider clarifying EST/EDT distinction for market hours.

Line 84 references "4 PM EST" converting to "21:00:00 UTC", but US markets observe daylight saving time. During EDT (March-November), 4 PM EDT is 20:00:00 UTC, not 21:00:00 UTC. Consider specifying "4 PM ET" (Eastern Time) or providing both EST/EDT conversions to avoid confusion.

packages/db/src/schema/memory.ts (1)

297-303: Document the timestamp format for retrievedAt.

The retrievedAt field is typed as string, which is acceptable for JSONB storage but could lead to inconsistencies if the format isn't enforced. Consider adding a JSDoc comment specifying the expected format (e.g., ISO 8601) or using Date type if runtime serialization handles conversion.

📝 Suggested documentation
 /**
  * Source evidence for verification claims
  */
 export interface ClaimSource {
   url: string;
   title?: string;
   snippet?: string;
+  /** ISO 8601 timestamp string */
   retrievedAt: string;
   archiveUrl?: string;
 }
services/swarm-api/src/routes/predictions.ts (1)

249-300: Deduplication logic is well-integrated; consider monitoring performance for large batches.

The deduplication workflow is correctly positioned within the transaction and properly handles duplicate relationships. The similarity score precision (toFixed(4)) matches the schema definition.

For large prediction batches with many predictions per tweet, the dedup logic could be computationally intensive since it queries all related predictions and performs pairwise comparisons via findCanonicalPrediction. Consider adding performance monitoring or batch size limits if this becomes a bottleneck.

services/swarm-verifier/src/seen-storage.ts (1)

11-11: Avoid creating a new database connection on every function call.

Both getVerifierCursor and updateVerifierCursor call createDb() on each invocation. This can lead to connection pool exhaustion under load. Consider using a lazily-initialized shared instance or accepting db as a parameter.

Proposed fix: use lazy initialization
 import { createDb, eq } from "@torus-ts/db";
 import { verifierCursorStateSchema } from "@torus-ts/db/schema";
+
+let dbInstance: ReturnType<typeof createDb> | null = null;
+
+function getDb() {
+  if (!dbInstance) {
+    dbInstance = createDb();
+  }
+  return dbInstance;
+}

 export async function getVerifierCursor(
   verifierAgentId: string,
 ): Promise<string | undefined> {
-  const db = createDb();
+  const db = getDb();
   // ...
 }

 export async function updateVerifierCursor(
   verifierAgentId: string,
   cursor: string,
 ): Promise<void> {
-  const db = createDb();
+  const db = getDb();
   // ...
 }

Also applies to: 30-30

services/swarm-judge/FILTER_VALIDATION_PROMPT.md (1)

246-277: Duplicate example numbering.

There are two "Example 3" sections (lines 246 and 277). Similarly, later examples use "5b" and "5c" variants. Consider renumbering sequentially for clarity (Example 3, 4, 5, 6...).

services/swarm-verifier/src/index.ts (1)

94-101: Consider adding a timeout for individual prediction verification.

The verifyPrediction calls have no timeout. If the LLM client hangs or a network issue occurs, the entire batch could stall indefinitely. The withRetry utility only wraps the initial fetch, not the verification processing.

Example: wrap with a timeout
const VERIFICATION_TIMEOUT_MS = 120_000; // 2 minutes

const verificationPromises = predictions.map((prediction) =>
  Promise.race([
    verifier.verifyPrediction(prediction),
    new Promise<null>((_, reject) =>
      setTimeout(() => reject(new Error("Verification timeout")), VERIFICATION_TIMEOUT_MS)
    ),
  ]).catch((error) => {
    logger.error(`Failed to verify prediction ${prediction.id}`, error);
    batchErrors++;
    return null;
  }),
);
services/swarm-api/src/routes/claims.ts (1)

203-225: Consider using a transaction for claim submission.

The claim submission checks for an existing verdict (lines 190-201) and then inserts the claim (lines 203-218). Without a transaction, a race condition could allow duplicate claims if two verifiers submit simultaneously and both pass the verdict check before either insert completes.

Example using transaction
const [insertedClaim] = await db.transaction(async (tx) => {
  const existingVerdict = await tx
    .select({ id: verdictSchema.id })
    .from(verdictSchema)
    .where(eq(verdictSchema.parsedPredictionId, predictionId))
    .limit(1);

  if (existingVerdict.length > 0) {
    throw new HttpError(400, "Cannot submit claim: prediction already has a verdict");
  }

  return tx
    .insert(verificationClaimSchema)
    .values({ /* ... */ })
    .onConflictDoNothing()
    .returning({ id: verificationClaimSchema.id });
});
services/swarm-verifier/src/api-client.ts (2)

158-170: Add request timeout to prevent hanging.

The fetch calls have no timeout configured. Network issues or slow server responses could cause the client to hang indefinitely.

Add AbortController timeout
+const DEFAULT_TIMEOUT_MS = 30_000;

 private async get<T>(
   path: string,
   params?: Record<string, string | number | boolean | undefined>,
 ): Promise<T> {
   const url = new URL(`${this.apiUrl}${path}`);

   if (params) {
     for (const [key, value] of Object.entries(params)) {
       if (value !== undefined) {
         url.searchParams.set(key, String(value));
       }
     }
   }

+  const controller = new AbortController();
+  const timeoutId = setTimeout(() => controller.abort(), DEFAULT_TIMEOUT_MS);

-  const response = await fetch(url.toString(), {
+  const response = await fetch(url.toString(), {
     method: "GET",
     headers: this.generateAuthHeaders(),
+    signal: controller.signal,
   });
+  clearTimeout(timeoutId);

   // ...
 }

Also applies to: 176-190


9-52: Consider importing shared types instead of duplicating.

The PostSlice and ClaimSource interfaces are duplicated here. Per the relevant code snippets, these are already defined in packages/db/src/schema/memory.ts. Importing from the shared location would reduce maintenance burden.

Import from shared schema
+import type { PostSlice, ClaimSource } from "@torus-ts/db/schema";

-interface PostSlice {
-  source: { tweet_id: string };
-  start: number;
-  end: number;
-}
// ... remove other duplicated interfaces

Note: The local PostSlice.source only has tweet_id, while the shared one has full PredictionSource. You may need to adjust or create a narrower type if the full interface isn't needed.

services/swarm-judge/src/judge.ts (1)

697-698: Consider using error handling abstractions for JSON parsing.

The JSON.parse calls could throw on malformed LLM responses. Per coding guidelines, prefer using error handling abstractions from @torus-network/torus-utils/try-catch instead of relying on the outer try-catch.

This would allow for more specific error messages and potentially retrying with different prompts when the LLM returns invalid JSON.
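
A sketch with the project's sync helper; message.content and VerdictSchema are taken from the surrounding judge code, and the trySync tuple shape is assumed:

```ts
import { trySync } from "@torus-network/torus-utils/try-catch";

// Parse the raw LLM message content defensively before schema validation.
const [parseErr, parsed] = trySync(
  () => JSON.parse(message.content) as unknown,
);
if (parseErr !== undefined) {
  throw new Error(`LLM returned invalid JSON: ${String(parseErr)}`);
}
const verdict = VerdictSchema.parse(parsed);
```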

Comment on lines +2 to +20
CREATE TABLE "public"."verification_claim" (
"id" uuid NOT NULL DEFAULT uuidv7(),
"parsed_prediction_id" uuid NOT NULL DEFAULT uuidv7(),
"verifier_agent_id" character varying(256) NOT NULL,
"verifier_agent_signature" text NOT NULL,
"claim_outcome" boolean NOT NULL,
"confidence" numeric NOT NULL,
"reasoning" text NOT NULL,
"sources" jsonb NULL,
"timeframe_start_utc" timestamptz NULL,
"timeframe_end_utc" timestamptz NULL,
"timeframe_precision" character varying(32) NULL,
"created_at" timestamptz NOT NULL DEFAULT now(),
"updated_at" timestamptz NOT NULL DEFAULT now(),
"deleted_at" timestamptz NULL,
PRIMARY KEY ("id"),
CONSTRAINT "verification_claim_unique_verifier" UNIQUE ("parsed_prediction_id", "verifier_agent_id"),
CONSTRAINT "verification_claim_parsed_prediction_id_parsed_prediction_id_fk" FOREIGN KEY ("parsed_prediction_id") REFERENCES "public"."parsed_prediction" ("id") ON UPDATE NO ACTION ON DELETE CASCADE
);

⚠️ Potential issue | 🟠 Major

Remove DEFAULT uuidv7() from foreign key column parsed_prediction_id.

Line 4 specifies DEFAULT uuidv7() for parsed_prediction_id, but this column is a foreign key referencing parsed_prediction(id). A random default UUID would almost certainly not exist in the referenced table, causing FK violations. The default should be removed so the column must be explicitly provided.

🔎 Proposed fix
  CREATE TABLE "public"."verification_claim" (
    "id" uuid NOT NULL DEFAULT uuidv7(),
-   "parsed_prediction_id" uuid NOT NULL DEFAULT uuidv7(),
+   "parsed_prediction_id" uuid NOT NULL,
    "verifier_agent_id" character varying(256) NOT NULL,
📝 Committable suggestion


Suggested change (before):
CREATE TABLE "public"."verification_claim" (
"id" uuid NOT NULL DEFAULT uuidv7(),
"parsed_prediction_id" uuid NOT NULL DEFAULT uuidv7(),
"verifier_agent_id" character varying(256) NOT NULL,
"verifier_agent_signature" text NOT NULL,
"claim_outcome" boolean NOT NULL,
"confidence" numeric NOT NULL,
"reasoning" text NOT NULL,
"sources" jsonb NULL,
"timeframe_start_utc" timestamptz NULL,
"timeframe_end_utc" timestamptz NULL,
"timeframe_precision" character varying(32) NULL,
"created_at" timestamptz NOT NULL DEFAULT now(),
"updated_at" timestamptz NOT NULL DEFAULT now(),
"deleted_at" timestamptz NULL,
PRIMARY KEY ("id"),
CONSTRAINT "verification_claim_unique_verifier" UNIQUE ("parsed_prediction_id", "verifier_agent_id"),
CONSTRAINT "verification_claim_parsed_prediction_id_parsed_prediction_id_fk" FOREIGN KEY ("parsed_prediction_id") REFERENCES "public"."parsed_prediction" ("id") ON UPDATE NO ACTION ON DELETE CASCADE
);
Suggested change (after):

CREATE TABLE "public"."verification_claim" (
"id" uuid NOT NULL DEFAULT uuidv7(),
"parsed_prediction_id" uuid NOT NULL,
"verifier_agent_id" character varying(256) NOT NULL,
"verifier_agent_signature" text NOT NULL,
"claim_outcome" boolean NOT NULL,
"confidence" numeric NOT NULL,
"reasoning" text NOT NULL,
"sources" jsonb NULL,
"timeframe_start_utc" timestamptz NULL,
"timeframe_end_utc" timestamptz NULL,
"timeframe_precision" character varying(32) NULL,
"created_at" timestamptz NOT NULL DEFAULT now(),
"updated_at" timestamptz NOT NULL DEFAULT now(),
"deleted_at" timestamptz NULL,
PRIMARY KEY ("id"),
CONSTRAINT "verification_claim_unique_verifier" UNIQUE ("parsed_prediction_id", "verifier_agent_id"),
CONSTRAINT "verification_claim_parsed_prediction_id_parsed_prediction_id_fk" FOREIGN KEY ("parsed_prediction_id") REFERENCES "public"."parsed_prediction" ("id") ON UPDATE NO ACTION ON DELETE CASCADE
);
🤖 Prompt for AI Agents
In @atlas/migrations/prediction_swarm/20260105202138.sql around lines 2-20, The
migration creates verification_claim.parsed_prediction_id with a DEFAULT
uuidv7(), but parsed_prediction_id is a foreign key to parsed_prediction(id) so
the default will generate non-existent IDs and cause FK violations; remove the
DEFAULT uuidv7() from the parsed_prediction_id column definition in the CREATE
TABLE for "verification_claim" (leave it NOT NULL) so callers must provide a
valid parsed_prediction_id that exists in "parsed_prediction" (no changes to the
FK constraint "verification_claim_parsed_prediction_id_parsed_prediction_id_fk"
otherwise).

Comment on lines +27 to +28
-- Modify "verdict" table
ALTER TABLE "public"."verdict" ADD COLUMN "accepted_claim_id" uuid NULL DEFAULT uuidv7(), ADD CONSTRAINT "verdict_accepted_claim_id_verification_claim_id_fk" FOREIGN KEY ("accepted_claim_id") REFERENCES "public"."verification_claim" ("id") ON UPDATE NO ACTION ON DELETE NO ACTION;

⚠️ Potential issue | 🟠 Major

Remove DEFAULT uuidv7() from nullable foreign key accepted_claim_id.

The accepted_claim_id column is nullable (correct), but the DEFAULT uuidv7() is problematic. If an INSERT doesn't explicitly provide this column, it would generate a random UUID that likely doesn't exist in verification_claim, causing FK violations. For nullable FKs, the default should be NULL (implicit when no DEFAULT is specified).

🔎 Proposed fix
-ALTER TABLE "public"."verdict" ADD COLUMN "accepted_claim_id" uuid NULL DEFAULT uuidv7(), ADD CONSTRAINT ...
+ALTER TABLE "public"."verdict" ADD COLUMN "accepted_claim_id" uuid NULL, ADD CONSTRAINT ...
📝 Committable suggestion


Suggested change (before):
-- Modify "verdict" table
ALTER TABLE "public"."verdict" ADD COLUMN "accepted_claim_id" uuid NULL DEFAULT uuidv7(), ADD CONSTRAINT "verdict_accepted_claim_id_verification_claim_id_fk" FOREIGN KEY ("accepted_claim_id") REFERENCES "public"."verification_claim" ("id") ON UPDATE NO ACTION ON DELETE NO ACTION;
Suggested change (after):

-- Modify "verdict" table
ALTER TABLE "public"."verdict" ADD COLUMN "accepted_claim_id" uuid NULL, ADD CONSTRAINT "verdict_accepted_claim_id_verification_claim_id_fk" FOREIGN KEY ("accepted_claim_id") REFERENCES "public"."verification_claim" ("id") ON UPDATE NO ACTION ON DELETE NO ACTION;
🤖 Prompt for AI Agents
In @atlas/migrations/prediction_swarm/20260105202138.sql around lines 27-28, The
ALTER TABLE statement adds a nullable foreign key column "accepted_claim_id" to
table "public"."verdict" but sets DEFAULT uuidv7(), which will generate
non-existent UUIDs and cause FK violations; update the migration so the ALTER
TABLE adds "accepted_claim_id" uuid NULL with no DEFAULT (leave default as NULL)
and still add the FOREIGN KEY constraint
"verdict_accepted_claim_id_verification_claim_id_fk" referencing
"public"."verification_claim" ("id").

Comment on lines +30 to +43
CREATE TABLE "public"."verifier_feedback" (
"id" uuid NOT NULL DEFAULT uuidv7(),
"parsed_prediction_id" uuid NOT NULL DEFAULT uuidv7(),
"verifier_agent_id" character varying(256) NOT NULL,
"verifier_agent_signature" text NOT NULL,
"failure_cause" "public"."failure_cause_enum" NOT NULL,
"reason" text NOT NULL,
"created_at" timestamptz NOT NULL DEFAULT now(),
"updated_at" timestamptz NOT NULL DEFAULT now(),
"deleted_at" timestamptz NULL,
PRIMARY KEY ("id"),
CONSTRAINT "verifier_feedback_unique" UNIQUE ("parsed_prediction_id", "verifier_agent_id"),
CONSTRAINT "verifier_feedback_parsed_prediction_id_parsed_prediction_id_fk" FOREIGN KEY ("parsed_prediction_id") REFERENCES "public"."parsed_prediction" ("id") ON UPDATE NO ACTION ON DELETE CASCADE
);

⚠️ Potential issue | 🟠 Major

Remove DEFAULT uuidv7() from foreign key column in verifier_feedback.

Same issue as verification_claim: line 32 has "parsed_prediction_id" uuid NOT NULL DEFAULT uuidv7() which is inappropriate for a foreign key column.

🔎 Proposed fix
  CREATE TABLE "public"."verifier_feedback" (
    "id" uuid NOT NULL DEFAULT uuidv7(),
-   "parsed_prediction_id" uuid NOT NULL DEFAULT uuidv7(),
+   "parsed_prediction_id" uuid NOT NULL,
    "verifier_agent_id" character varying(256) NOT NULL,
📝 Committable suggestion


Suggested change (before):
CREATE TABLE "public"."verifier_feedback" (
"id" uuid NOT NULL DEFAULT uuidv7(),
"parsed_prediction_id" uuid NOT NULL DEFAULT uuidv7(),
"verifier_agent_id" character varying(256) NOT NULL,
"verifier_agent_signature" text NOT NULL,
"failure_cause" "public"."failure_cause_enum" NOT NULL,
"reason" text NOT NULL,
"created_at" timestamptz NOT NULL DEFAULT now(),
"updated_at" timestamptz NOT NULL DEFAULT now(),
"deleted_at" timestamptz NULL,
PRIMARY KEY ("id"),
CONSTRAINT "verifier_feedback_unique" UNIQUE ("parsed_prediction_id", "verifier_agent_id"),
CONSTRAINT "verifier_feedback_parsed_prediction_id_parsed_prediction_id_fk" FOREIGN KEY ("parsed_prediction_id") REFERENCES "public"."parsed_prediction" ("id") ON UPDATE NO ACTION ON DELETE CASCADE
);
Suggested change (after):

CREATE TABLE "public"."verifier_feedback" (
"id" uuid NOT NULL DEFAULT uuidv7(),
"parsed_prediction_id" uuid NOT NULL,
"verifier_agent_id" character varying(256) NOT NULL,
"verifier_agent_signature" text NOT NULL,
"failure_cause" "public"."failure_cause_enum" NOT NULL,
"reason" text NOT NULL,
"created_at" timestamptz NOT NULL DEFAULT now(),
"updated_at" timestamptz NOT NULL DEFAULT now(),
"deleted_at" timestamptz NULL,
PRIMARY KEY ("id"),
CONSTRAINT "verifier_feedback_unique" UNIQUE ("parsed_prediction_id", "verifier_agent_id"),
CONSTRAINT "verifier_feedback_parsed_prediction_id_parsed_prediction_id_fk" FOREIGN KEY ("parsed_prediction_id") REFERENCES "public"."parsed_prediction" ("id") ON UPDATE NO ACTION ON DELETE CASCADE
);
🤖 Prompt for AI Agents
In @atlas/migrations/prediction_swarm/20260105202138.sql around lines 30-43, The
foreign key column parsed_prediction_id in the verifier_feedback table should
not generate its own UUID; remove the DEFAULT uuidv7() from the column
definition so it remains "parsed_prediction_id uuid NOT NULL" and keep the
existing foreign key constraint
("verifier_feedback_parsed_prediction_id_parsed_prediction_id_fk") intact; apply
the same change pattern used for verification_claim to avoid auto-generating
UUIDs for referenced IDs.

Comment on lines +50 to +61
-- Create "verifier_topic_registration" table
CREATE TABLE "public"."verifier_topic_registration" (
"id" uuid NOT NULL DEFAULT uuidv7(),
"verifier_agent_id" character varying(256) NOT NULL,
"topic_id" uuid NOT NULL DEFAULT uuidv7(),
"created_at" timestamptz NOT NULL DEFAULT now(),
"updated_at" timestamptz NOT NULL DEFAULT now(),
"deleted_at" timestamptz NULL,
PRIMARY KEY ("id"),
CONSTRAINT "verifier_topic_unique" UNIQUE ("verifier_agent_id", "topic_id"),
CONSTRAINT "verifier_topic_registration_topic_id_prediction_topic_id_fk" FOREIGN KEY ("topic_id") REFERENCES "public"."prediction_topic" ("id") ON UPDATE NO ACTION ON DELETE NO ACTION
);

⚠️ Potential issue | 🟠 Major

Remove DEFAULT uuidv7() from foreign key column topic_id.

Line 54 has "topic_id" uuid NOT NULL DEFAULT uuidv7() which should not have a default for a foreign key column.

The ON DELETE NO ACTION choice for the topic FK is appropriate for preserving registration history.

🔎 Proposed fix
  CREATE TABLE "public"."verifier_topic_registration" (
    "id" uuid NOT NULL DEFAULT uuidv7(),
    "verifier_agent_id" character varying(256) NOT NULL,
-   "topic_id" uuid NOT NULL DEFAULT uuidv7(),
+   "topic_id" uuid NOT NULL,
    "created_at" timestamptz NOT NULL DEFAULT now(),
📝 Committable suggestion


Suggested change (before):
-- Create "verifier_topic_registration" table
CREATE TABLE "public"."verifier_topic_registration" (
"id" uuid NOT NULL DEFAULT uuidv7(),
"verifier_agent_id" character varying(256) NOT NULL,
"topic_id" uuid NOT NULL DEFAULT uuidv7(),
"created_at" timestamptz NOT NULL DEFAULT now(),
"updated_at" timestamptz NOT NULL DEFAULT now(),
"deleted_at" timestamptz NULL,
PRIMARY KEY ("id"),
CONSTRAINT "verifier_topic_unique" UNIQUE ("verifier_agent_id", "topic_id"),
CONSTRAINT "verifier_topic_registration_topic_id_prediction_topic_id_fk" FOREIGN KEY ("topic_id") REFERENCES "public"."prediction_topic" ("id") ON UPDATE NO ACTION ON DELETE NO ACTION
);
Suggested change (after):

-- Create "verifier_topic_registration" table
CREATE TABLE "public"."verifier_topic_registration" (
"id" uuid NOT NULL DEFAULT uuidv7(),
"verifier_agent_id" character varying(256) NOT NULL,
"topic_id" uuid NOT NULL,
"created_at" timestamptz NOT NULL DEFAULT now(),
"updated_at" timestamptz NOT NULL DEFAULT now(),
"deleted_at" timestamptz NULL,
PRIMARY KEY ("id"),
CONSTRAINT "verifier_topic_unique" UNIQUE ("verifier_agent_id", "topic_id"),
CONSTRAINT "verifier_topic_registration_topic_id_prediction_topic_id_fk" FOREIGN KEY ("topic_id") REFERENCES "public"."prediction_topic" ("id") ON UPDATE NO ACTION ON DELETE NO ACTION
);
🤖 Prompt for AI Agents
In @atlas/migrations/prediction_swarm/20260105202138.sql around lines 50-61, The
migration creates table verifier_topic_registration with topic_id defined as
uuid NOT NULL DEFAULT uuidv7(), but foreign key columns should not supply a
default; remove the DEFAULT uuidv7() from the topic_id column definition so
topic_id is declared as "topic_id" uuid NOT NULL and keep the existing FK
constraint referencing prediction_topic(id) (preserving ON DELETE NO ACTION).

Comment on lines +1 to +17
-- Verifier feedback table for per-verifier feedback on predictions
CREATE TABLE IF NOT EXISTS verifier_feedback (
  id UUID PRIMARY KEY DEFAULT uuidv7(),
  parsed_prediction_id UUID NOT NULL REFERENCES parsed_prediction(id) ON DELETE CASCADE,
  verifier_agent_id VARCHAR(256) NOT NULL,
  verifier_agent_signature TEXT NOT NULL,
  failure_cause failure_cause_enum NOT NULL,
  reason TEXT NOT NULL,
  created_at TIMESTAMP WITH TIME ZONE NOT NULL DEFAULT NOW(),
  updated_at TIMESTAMP WITH TIME ZONE NOT NULL DEFAULT NOW(),
  deleted_at TIMESTAMP WITH TIME ZONE,
  UNIQUE(parsed_prediction_id, verifier_agent_id)
);

CREATE INDEX IF NOT EXISTS verifier_feedback_prediction_idx ON verifier_feedback(parsed_prediction_id);
CREATE INDEX IF NOT EXISTS verifier_feedback_agent_idx ON verifier_feedback(verifier_agent_id);
CREATE INDEX IF NOT EXISTS verifier_feedback_failure_cause_idx ON verifier_feedback(failure_cause);

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Check for failure_cause_enum definition in migrations
rg "CREATE TYPE.*failure_cause_enum" atlas/migrations/ -A 5

# Also check for parsed_prediction table
rg "CREATE TABLE.*parsed_prediction" atlas/migrations/ -B 2 -A 10

# List migration files in order
ls -1 atlas/migrations/webapps/ | grep -E "^[0-9]+_.*\.sql$" | sort

Repository: renlabs-dev/torus-ts

Length of output: 6034


🏁 Script executed:

#!/bin/bash
# Read the actual verifier_feedback.sql file
cat atlas/migrations/webapps/20260105140000_verifier_feedback.sql

# Check if these directories share the same schema
echo "=== Checking migration directory structure ==="
ls -la atlas/migrations/

# Check for any schema-specific setup in webapps directory
rg "CREATE SCHEMA\|SET search_path" atlas/migrations/webapps/ | head -20

Repository: renlabs-dev/torus-ts

Length of output: 1255


🏁 Script executed:

#!/bin/bash
# Look for atlas configuration
find atlas -name "*.toml" -o -name "*.yaml" -o -name "*.yml" -o -name "*.json" | head -20

# Check for README or documentation in migrations
find atlas -name "README*" -o -name "*.md" | head -20

# Look for any initialization or schema setup scripts
rg "migration_status\|migrations.version" atlas/migrations/ -l

# Check if there are any env-specific migration directories
ls -la atlas/migrations/webapps/ | head -10

Repository: renlabs-dev/torus-ts

Length of output: 661


🏁 Script executed:

#!/bin/bash
# Search for migration runner code
rg "prediction_swarm|webapps" --type ts --type tsx -l | head -20

# Look for any migration setup or initialization code
rg "migrations.*prediction_swarm\|migrations.*webapps" -l | head -20

# Check for any database configuration
find . -name "*.ts" -o -name "*.js" | xargs rg -l "DATABASE_URL\|db.*connection\|migration" 2>/dev/null | grep -E "src/|config/" | head -15

Repository: renlabs-dev/torus-ts

Length of output: 91


🏁 Script executed:

#!/bin/bash
# Search for migration runner code in TypeScript files
rg "prediction_swarm|webapps" -t js -t ts -l 2>/dev/null | head -20

# Look for database/atlas setup
fd -e ts -e js | xargs rg "migration.*path\|MIGRATION" 2>/dev/null | head -30

# Check root directory for config files
cat tsconfig.json 2>/dev/null | head -20

# Look for packages structure
ls -la packages/ 2>/dev/null | head -20

Repository: renlabs-dev/torus-ts

Length of output: 763


🏁 Script executed:

#!/bin/bash
# Check the db package for migration setup
ls -la packages/db/

# Look for atlas or migration configuration
find packages/db -type f | head -20

# Search for references to prediction_swarm or webapps migrations
rg "prediction_swarm|webapps.*migration" packages/ 2>/dev/null | head -20

# Check for any database client setup
cat packages/db/src/*.ts 2>/dev/null | head -100

Repository: renlabs-dev/torus-ts

Length of output: 4339


🏁 Script executed:

#!/bin/bash
# Check for any documentation about the migration structure
cat packages/db/creating-items-in-database.md

# Look for any atlas or migration runner configuration
find . -name "atlas.hcl" -o -name "atlas.yaml" -o -name ".atlas*" 2>/dev/null

# Check if there's a script that runs these migrations in order
find . -name "*.sh" -type f | xargs grep -l "migration\|atlas" 2>/dev/null | head -10

Repository: renlabs-dev/torus-ts

Length of output: 4413


🏁 Script executed:

#!/bin/bash
# Read the atlas.hcl configuration
cat ./atlas.hcl

Repository: renlabs-dev/torus-ts

Length of output: 1194


This migration is defined in the webapps environment but references schema objects that live in the prediction_swarm environment; the two environments target separate databases, so applying this migration will fail.

The verifier_feedback table references failure_cause_enum and parsed_prediction(id), which are only defined in the prediction_swarm environment's database schema. The atlas.hcl configuration shows these are separate dev databases (dev-webapps vs dev-prediction-swarm). Either:

  1. Move the failure_cause_enum type and parsed_prediction table definition to the webapps environment, or
  2. Define the verifier_feedback table in the prediction_swarm environment instead.
🤖 Prompt for AI Agents
In @atlas/migrations/webapps/20260105140000_verifier_feedback.sql around lines
1-17, The migration creates verifier_feedback but depends on failure_cause_enum
and parsed_prediction which live in the prediction_swarm DB (per atlas.hcl
dev-webapps vs dev-prediction-swarm), so this will fail; either move/define
failure_cause_enum and the parsed_prediction table into the webapps schema
before creating verifier_feedback, or instead add this verifier_feedback
migration to the prediction_swarm environment so it runs against the DB that
already contains failure_cause_enum and parsed_prediction; update the migration
placement accordingly and ensure REFERENCES parsed_prediction(id) and
failure_cause_enum resolve in the same target DB.

Comment on lines +3 to +30
export const TimeframeExtractionSchema = z.object({
  timeframe_status: z.enum([
    "explicit",
    "implicit",
    "inferred",
    "event_trigger",
    "missing",
  ]),
  start_utc: z.string().nullable(),
  end_utc: z.string().nullable(),
  precision: z.enum([
    "hour",
    "day",
    "week",
    "month",
    "quarter",
    "year",
    "unbounded",
    "event",
  ]),
  reasoning: z.string(),
  assumptions: z.array(z.string()),
  confidence: z.number().min(0).max(1),
});

export type TimeframeExtractionResult = z.infer<
  typeof TimeframeExtractionSchema
>;

🛠️ Refactor suggestion | 🟠 Major

Add JSDoc documentation and enhance timestamp validation.

According to coding guidelines, all code must follow JSDoc standards. Additionally, the start_utc and end_utc fields should use datetime validation rather than plain strings: a plain z.string() accepts any content, so a malformed LLM timestamp would slip through, while z.string().datetime() rejects non-ISO-8601 values at the boundary.

🔎 Proposed improvements
+/**
+ * Schema for LLM-extracted timeframe information from prediction text.
+ * Validates the extracted start/end times, precision, and confidence.
+ */
 export const TimeframeExtractionSchema = z.object({
+  /** Status indicating how the timeframe was determined */
   timeframe_status: z.enum([
     "explicit",
     "implicit",
     "inferred",
     "event_trigger",
     "missing",
   ]),
-  start_utc: z.string().nullable(),
-  end_utc: z.string().nullable(),
+  /** ISO 8601 datetime string for prediction start time */
+  start_utc: z.string().datetime().nullable(),
+  /** ISO 8601 datetime string for prediction end time */
+  end_utc: z.string().datetime().nullable(),
+  /** Granularity of the time prediction */
   precision: z.enum([
     "hour",
     "day",
     "week",
     "month",
     "quarter",
     "year",
     "unbounded",
     "event",
   ]),
+  /** LLM explanation of timeframe extraction logic */
   reasoning: z.string(),
+  /** Assumptions made during timeframe extraction */
   assumptions: z.array(z.string()),
+  /** Confidence score between 0 and 1 */
   confidence: z.number().min(0).max(1),
 });

+/**
+ * Inferred TypeScript type for timeframe extraction results.
+ */
 export type TimeframeExtractionResult = z.infer<
   typeof TimeframeExtractionSchema
 >;

As per coding guidelines, JSDoc documentation is required for all code.

🤖 Prompt for AI Agents
In @services/swarm-verifier/src/schemas/llm-response.ts around lines 3-30, Add
JSDoc comments describing the purpose of TimeframeExtractionSchema and the
TimeframeExtractionResult type, then tighten validation for start_utc and
end_utc by replacing z.string().nullable() with z.string().datetime().nullable()
(or z.string().refine(...) to enforce ISO-8601 if your zod version lacks
.datetime), keeping nullability; ensure the JSDoc documents that these fields
are ISO-8601 UTC timestamps (or null) and mention precision/confidence semantics
for the schema.

Comment on lines +32 to +54
export const FilterValidationSchema = z.object({
  context: z.string(),
  is_valid: z.boolean(),
  failure_cause: z
    .enum([
      "BROKEN_EXTRACTION",
      "VAGUE_TARGET",
      "PRESENT_STATE",
      "NEGATION",
      "SARCASM",
      "QUOTING_OTHERS",
      "HEAVY_HEDGING",
      "FUTURE_TIMEFRAME",
      "SELF_ANNOUNCEMENT",
      "PERSONAL_ACTION",
      "OTHER",
    ])
    .nullable(),
  confidence: z.number().min(0).max(1),
  reasoning: z.string(),
});

export type FilterValidationResult = z.infer<typeof FilterValidationSchema>;

🛠️ Refactor suggestion | 🟠 Major

Add JSDoc documentation for FilterValidationSchema.

According to coding guidelines, all code must follow JSDoc standards. This schema and its fields need documentation explaining the validation logic and failure cause values.

🔎 Proposed improvements
+/**
+ * Schema for LLM-based validation of prediction claims against filtering rules.
+ * Determines if a claim is valid or should be rejected with a specific cause.
+ */
 export const FilterValidationSchema = z.object({
+  /** Contextual information used for validation */
   context: z.string(),
+  /** Whether the claim passes validation filters */
   is_valid: z.boolean(),
+  /** Specific reason for validation failure, if any */
   failure_cause: z
     .enum([
       "BROKEN_EXTRACTION",
       "VAGUE_TARGET",
       "PRESENT_STATE",
       "NEGATION",
       "SARCASM",
       "QUOTING_OTHERS",
       "HEAVY_HEDGING",
       "FUTURE_TIMEFRAME",
       "SELF_ANNOUNCEMENT",
       "PERSONAL_ACTION",
       "OTHER",
     ])
     .nullable(),
+  /** Confidence score between 0 and 1 */
   confidence: z.number().min(0).max(1),
+  /** LLM explanation of validation decision */
   reasoning: z.string(),
 });

+/**
+ * Inferred TypeScript type for filter validation results.
+ */
 export type FilterValidationResult = z.infer<typeof FilterValidationSchema>;

As per coding guidelines, JSDoc documentation is required for all code.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

/**
 * Schema for LLM-based validation of prediction claims against filtering rules.
 * Determines if a claim is valid or should be rejected with a specific cause.
 */
export const FilterValidationSchema = z.object({
  /** Contextual information used for validation */
  context: z.string(),
  /** Whether the claim passes validation filters */
  is_valid: z.boolean(),
  /** Specific reason for validation failure, if any */
  failure_cause: z
    .enum([
      "BROKEN_EXTRACTION",
      "VAGUE_TARGET",
      "PRESENT_STATE",
      "NEGATION",
      "SARCASM",
      "QUOTING_OTHERS",
      "HEAVY_HEDGING",
      "FUTURE_TIMEFRAME",
      "SELF_ANNOUNCEMENT",
      "PERSONAL_ACTION",
      "OTHER",
    ])
    .nullable(),
  /** Confidence score between 0 and 1 */
  confidence: z.number().min(0).max(1),
  /** LLM explanation of validation decision */
  reasoning: z.string(),
});

/**
 * Inferred TypeScript type for filter validation results.
 */
export type FilterValidationResult = z.infer<typeof FilterValidationSchema>;
🤖 Prompt for AI Agents
In @services/swarm-verifier/src/schemas/llm-response.ts around lines 32-54, Add
JSDoc for FilterValidationSchema and the exported FilterValidationResult type:
document the schema purpose and each field (context: description of the
validated text; is_valid: boolean meaning of true/false; failure_cause:
enumerate and explain each allowed string value and that it may be null;
confidence: number between 0 and 1 indicating certainty; reasoning:
human-readable explanation of why the label was assigned). Attach JSDoc above
the FilterValidationSchema declaration and a short typedef comment for
FilterValidationResult, keeping descriptions concise and following project JSDoc
style.
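Beyond documentation, the enum doubles as the single source of truth for downstream consumers. A small sketch of how calling code might branch on the result; the handleRejection hook and the import path are hypothetical:

import type { FilterValidationResult } from "./llm-response";

declare function handleRejection(
  cause: NonNullable<FilterValidationResult["failure_cause"]>,
  reason: string,
  confidence: number,
): void;

function applyFilterResult(result: FilterValidationResult): void {
  if (result.is_valid) {
    return; // valid claims proceed to verdict generation
  }
  // failure_cause is nullable, so coalesce before handing off.
  const cause = result.failure_cause ?? "OTHER";
  // The zod enum keeps this string union in sync with the schema,
  // so adding a new failure cause is a one-line change.
  handleRejection(cause, result.reasoning, result.confidence);
}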

Comment on lines +252 to +260
    const verdict = await this.llmClient.completeWithSearch(
      this.promptLoader.getVerdictPrompt(),
      userPrompt,
      VerdictSchema,
      { temperature: 0.1 },
    );

    return { verdict, sources: undefined };
  }

⚠️ Potential issue | 🟠 Major

Sources are always undefined, negating the web search feature.

The completeWithSearch method enables web search via OpenRouter plugins, but the returned sources are hardcoded to undefined. If the web search plugin returns citations, they should be extracted and returned here; as written, the search-enabled completion is observably equivalent to the standard completion.

Is this intentional as a placeholder for future implementation? If so, consider adding a TODO comment. If the OpenRouter web plugin returns citations in a specific format, I can help implement the extraction logic.

🤖 Prompt for AI Agents
In @services/swarm-verifier/src/services/prediction-verifier.ts around lines
252-260, The current implementation calls this.llmClient.completeWithSearch(...)
in the method that returns { verdict, sources: undefined }, which drops any
web-search citations; update the method to extract citations from the response
of completeWithSearch (inspect the returned object from completeWithSearch when
invoking this.llmClient.completeWithSearch with the VerdictSchema) and set
sources to that citation list (or an empty array if none), returning { verdict,
sources }; if this is intentionally a placeholder, add a TODO comment above the
return mentioning extraction of search citations and link to the expected
OpenRouter plugin citation format for future implementation.
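If extraction is in scope for this PR, one possible shape is sketched below. It assumes completeWithSearch can surface the message's annotations alongside the parsed result, and that OpenRouter's web plugin returns OpenAI-style url_citation annotations; both assumptions should be verified against the actual client and response before use:

// Hypothetical annotation shape; verify against the real OpenRouter response.
interface UrlCitationAnnotation {
  type: "url_citation";
  url_citation: { url: string; title?: string };
}

/** Pulls cited URLs out of a message's annotations array, if present. */
function extractSources(annotations: unknown): string[] {
  if (!Array.isArray(annotations)) return [];
  return annotations
    .filter(
      (a): a is UrlCitationAnnotation =>
        typeof a === "object" &&
        a !== null &&
        (a as { type?: unknown }).type === "url_citation",
    )
    .map((a) => a.url_citation.url);
}

// The verdict path could then populate sources instead of undefined:
//   const { verdict, annotations } = await this.llmClient.completeWithSearch(...);
//   return { verdict, sources: extractSources(annotations) };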

Comment on lines +269 to +276
    let context: PredictionContext;
    try {
      context = await this.apiClient.getPredictionContext(prediction.id);
    } catch (error) {
      logger.error(`Failed to get prediction context: ${String(error)}`);
      this.stats.errors++;
      return null;
    }

🛠️ Refactor suggestion | 🟠 Major

Replace raw try-catch with tryAsync from torus-utils.

Per coding guidelines, raw try-catch blocks should be avoided in application code. Use the error handling abstractions from @torus-network/torus-utils/try-catch.

🔎 Proposed fix
+import { tryAsync } from "@torus-network/torus-utils/try-catch";

-    let context: PredictionContext;
-    try {
-      context = await this.apiClient.getPredictionContext(prediction.id);
-    } catch (error) {
-      logger.error(`Failed to get prediction context: ${String(error)}`);
-      this.stats.errors++;
-      return null;
-    }
+    const [contextError, context] = await tryAsync(
+      this.apiClient.getPredictionContext(prediction.id),
+    );
+    if (contextError) {
+      logger.error(`Failed to get prediction context: ${String(contextError)}`);
+      this.stats.errors++;
+      return null;
+    }

As per coding guidelines.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

    const [contextError, context] = await tryAsync(
      this.apiClient.getPredictionContext(prediction.id),
    );
    if (contextError) {
      logger.error(`Failed to get prediction context: ${String(contextError)}`);
      this.stats.errors++;
      return null;
    }
🤖 Prompt for AI Agents
In @services/swarm-verifier/src/services/prediction-verifier.ts around lines
269-276, Replace the raw try-catch around this.apiClient.getPredictionContext in
the function that obtains PredictionContext with the tryAsync helper from
'@torus-network/torus-utils/try-catch': call tryAsync(() =>
this.apiClient.getPredictionContext(prediction.id)) (or
tryAsync(this.apiClient.getPredictionContext, prediction.id) depending on
signature), assign context = result.result on success, and on failure log the
error with logger.error(String(result.error)), increment this.stats.errors and
return null—remove the original try-catch and ensure the tryAsync import is
added.

Comment on lines +4 to +27
export async function withRetry<T>(
  operation: () => Promise<T>,
  maxRetries = 3,
  baseDelay = 1000,
): Promise<T> {
  let lastError: unknown;

  for (let attempt = 0; attempt < maxRetries; attempt++) {
    try {
      return await operation();
    } catch (error) {
      lastError = error;
      if (attempt < maxRetries - 1) {
        const delay = baseDelay * Math.pow(2, attempt);
        console.log(
          `  Retry attempt ${attempt + 1}/${maxRetries - 1} after ${delay}ms...`,
        );
        await new Promise((resolve) => setTimeout(resolve, delay));
      }
    }
  }

  throw lastError;
}

⚠️ Potential issue | 🔴 Critical

Replace raw try-catch with error handling abstraction from torus-utils.

This code violates the coding guideline: "NEVER use raw try-catch blocks in application code; instead use error handling abstractions from @torus-network/torus-utils/try-catch". Additionally, console.log should be replaced with a structured logger, and JSDoc documentation is missing.

🔎 Refactor to use tryAsync and improve logging
+/**
+ * Retries an async operation with exponential backoff.
+ * @param operation - The async operation to retry
+ * @param maxRetries - Maximum number of retry attempts (default: 3)
+ * @param baseDelay - Base delay in milliseconds for exponential backoff (default: 1000)
+ * @returns Promise resolving to the operation result
+ * @throws The last error encountered if all retries fail
+ */
 export async function withRetry<T>(
   operation: () => Promise<T>,
   maxRetries = 3,
   baseDelay = 1000,
 ): Promise<T> {
-  let lastError: unknown;
-
   for (let attempt = 0; attempt < maxRetries; attempt++) {
-    try {
-      return await operation();
-    } catch (error) {
-      lastError = error;
+    const [error, result] = await tryAsync(operation);
+    
+    if (!error) {
+      return result;
+    }
+    
-      if (attempt < maxRetries - 1) {
+    if (attempt < maxRetries - 1) {
-        const delay = baseDelay * Math.pow(2, attempt);
-        console.log(
-          `  Retry attempt ${attempt + 1}/${maxRetries - 1} after ${delay}ms...`,
-        );
+      const delay = baseDelay * Math.pow(2, attempt);
+      // TODO: Replace with structured logger
+      console.log(
+        `  Retry attempt ${attempt + 1}/${maxRetries - 1} after ${delay}ms...`,
+      );
-        await new Promise((resolve) => setTimeout(resolve, delay));
-      }
+      await new Promise((resolve) => setTimeout(resolve, delay));
+    } else {
+      throw error;
     }
   }
-
-  throw lastError;
+  
+  throw new Error("All retry attempts exhausted");
 }

Note: You'll need to import tryAsync from @torus-network/torus-utils/try-catch.

As per coding guidelines, raw try-catch blocks must be replaced with error handling abstractions.

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In @services/swarm-verifier/src/utils/retry.ts around lines 4-27, The withRetry
function uses a raw try-catch and console.log and needs to be refactored to use
the tryAsync abstraction from @torus-network/torus-utils/try-catch, replace
console.log with the project structured logger (e.g., use the existing logger
instance or import one) and add JSDoc for the function; specifically, import
tryAsync, wrap the operation invocation with await tryAsync(() => operation()),
handle the returned [err, result] tuple to set lastError or return result,
compute/backoff and await delay as before but log retry attempts via the
structured logger, and add a short JSDoc block describing parameters (operation,
maxRetries, baseDelay) and the returned Promise<T>.
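Pulling the pieces together, a minimal sketch of the refactored helper. It assumes tryAsync accepts a promise and returns an [error, value] tuple, matching its usage elsewhere in this review, and that a structured logger is importable (path hypothetical):

import { tryAsync } from "@torus-network/torus-utils/try-catch";
import { logger } from "../logger"; // hypothetical path to the structured logger

/**
 * Retries an async operation with exponential backoff.
 * @param operation - The async operation to retry
 * @param maxRetries - Maximum number of attempts (default: 3)
 * @param baseDelay - Base delay in ms for exponential backoff (default: 1000)
 * @returns The result of the first successful attempt
 * @throws The last error once all attempts are exhausted
 */
export async function withRetry<T>(
  operation: () => Promise<T>,
  maxRetries = 3,
  baseDelay = 1000,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    const [error, result] = await tryAsync(operation());
    if (!error) return result;
    lastError = error;
    if (attempt < maxRetries - 1) {
      const delay = baseDelay * Math.pow(2, attempt);
      logger.warn(`Retry ${attempt + 1}/${maxRetries - 1} after ${delay}ms`);
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
  throw lastError;
}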
