@Shitikyan Shitikyan commented Dec 1, 2025

PR Type

Enhancement


Description

  • Implemented L2PS Batch Aggregator service to collect and submit L2PS transactions to main mempool

  • Added nonce field and reference_block tracking for transaction ordering and consistency

  • Enhanced L2PS mempool with batch operations, status lifecycle management, and cleanup utilities

  • Improved transaction type handling for L2PS encrypted transactions and identity normalization

  • Added comprehensive database entity support and index optimization for L2PS operations


Diagram Walkthrough

flowchart LR
  L2PS["L2PS Mempool<br/>Processed Txs"] -->|Group by UID| Batch["Batch Aggregator<br/>Creates Batches"]
  Batch -->|Encrypt & Sign| Submit["Submit to<br/>Main Mempool"]
  Submit -->|Update Status| Update["Mark as<br/>Batched"]
  Update -->|Cleanup Old| Cleanup["Remove Confirmed<br/>Transactions"]
  Main["Main Mempool<br/>with Nonce"] -.->|Reference Block| Sig["Signaling Server<br/>Add Transactions"]

File Walkthrough

Relevant files

Enhancement (12 files)

  • L2PSBatchAggregator.ts - New L2PS batch aggregator service implementation (+550/-0)
  • l2ps_mempool.ts - Enhanced L2PS mempool with batch operations and status lifecycle (+204/-12)
  • index.ts - Initialize L2PS batch aggregator and parallel networks (+27/-15)
  • signalingServer.ts - Add reference_block tracking for mempool transactions (+8/-2)
  • endpointHandlers.ts - Add L2PS encrypted transaction type handling (+33/-5)
  • handleL2PS.ts - Update L2PS transaction type imports (+2/-1)
  • getPeerIdentity.ts - Implement robust identity normalization and validation (+122/-7)
  • L2PSMempool.ts - Add sequence_number field and optimize indexes (+13/-5)
  • Mempool.ts - Change nonce column to bigint with nullable default (+1/-1)
  • Transactions.ts - Change nonce column to bigint with nullable default (+1/-1)
  • GCRSubnetsTxs.ts - Update L2PS transaction type imports (+1/-1)
  • OfflineMessages.ts - Update SerializedEncryptedObject type import (+1/-1)

Bug fix (2 files)

  • parallelNetworks.ts - Fix hex key file parsing and async encryption handling (+32/-6)
  • PoRBFT.ts - Skip GCR edit processing for non-GCR transactions (+6/-0)

Configuration changes (1 file)

  • datasource.ts - Register L2PS entities in database datasource (+6/-0)

Summary by CodeRabbit

  • New Features

    • Major L2PS launch: batch aggregation service, proof generation/verification, execution engine, status-aware mempool APIs, optional ZK proofs, CLI tools and load/stress testers.
    • UD support: multi-chain domain resolution, verification, assign/remove flows, and points award/deduction.
  • Bug Fixes

    • Improved nonce/mempool safety, duplicate protection, proof apply/rollback resilience, and stronger peer cryptographic checks.
  • Chores

    • DB schema/index updates, security headers, path‑traversal guards, centralized logging, typings, and CI/type-check enhancements.


coderabbitai bot (Contributor) commented Dec 1, 2025

Important

Review skipped

Too many files: 123 of the 273 changed files are beyond the 150-file review limit.

You can disable this status message by setting `reviews.review_status` to `false` in the CodeRabbit configuration file.

Walkthrough

Adds a comprehensive L2PS subsystem (batching, zk prover, proofs, executor, mempool/status/DB entities), Unstoppable Domains multi‑chain resolution and incentives, consensus hooks to apply/rollback proofs, startup/service wiring, security/middleware hardening, many scripts/tools, and supporting utilities/types.

Changes

Cohort / File(s) Summary
L2PS batch & zk stack
src/libs/l2ps/L2PSBatchAggregator.ts, src/libs/l2ps/zk/*, src/libs/l2ps/zk/circuits/*, src/libs/l2ps/zk/scripts/*, scripts/send-l2-batch.ts, src/libs/l2ps/zk/*.d.ts
New aggregator service, PLONK prover/wrappers, circuits, key/setup scripts, zkey/wasm tooling, CLI for sending batches, HMAC auth, nonce persistence, proof generation/verification, docs and typings.
Proofs, consensus & executor
src/libs/l2ps/L2PSProofManager.ts, src/libs/l2ps/L2PSConsensus.ts, src/libs/l2ps/L2PSTransactionExecutor.ts
New proof manager, consensus integration (apply/rollback proofs), executor to validate/apply L2PS txs and produce L1 batch txs.
L2PS mempool & persistence
src/libs/blockchain/l2ps_mempool.ts, src/model/entities/L2PSMempool.ts, src/model/entities/L2PSTransactions.ts, src/model/entities/L2PSProofs.ts, src/utilities/sharedState.ts
Status lifecycle constants/types, status-aware CRUD and cleanup, sequence_number field, gcr_edits/affected_accounts_count, new entities and indexes, SharedState.l2psBatchNonce.
Startup / lifecycle wiring
src/index.ts
Load L2PS networks at startup and start/stop L2PSBatchAggregator alongside existing L2PSHashService with extended shutdown handling.
Mempool submission / chain guards
src/features/InstantMessagingProtocol/signalingServer/signalingServer.ts, src/libs/blockchain/chain.ts
Mempool.addTransaction now includes reference_block from Chain.getLastBlockNumber(); Chain.getTxByHash return type updated.
Network handlers & L2PS routing
src/libs/network/endpointHandlers.ts, src/libs/network/routines/transactions/handleL2PS.ts, src/libs/network/routines/transactions/handleIdentityRequest.ts, src/libs/network/server_rpc.ts
New l2psEncryptedTx handling, modular decrypt/validate/execute pipeline delegating to L2PSTransactionExecutor, UD identity request cases, and minor GET handler formatting change.
UD (Unstoppable Domains) integration
src/libs/blockchain/gcr/gcr_routines/udIdentityManager.ts, src/libs/blockchain/gcr/gcr_routines/udSolanaResolverHelper.ts, src/libs/blockchain/UDTypes/uns_sol.*
New UDIdentityManager (multi‑chain resolution & verification), Solana resolver helper, Anchor IDL/type artifacts and Solana helpers.
GCR identity & incentives wiring
src/libs/blockchain/gcr/gcr_routines/GCRIdentityRoutines.ts, src/libs/blockchain/gcr/gcr_routines/IncentiveManager.ts, src/libs/blockchain/gcr/gcr_routines/signatureDetector.ts, src/libs/blockchain/gcr/gcr_routines/identityManager.ts, src/libs/blockchain/gcr/handleGCR.ts
applyUdIdentityAdd/remove, udDomainLinked/udDomainUnlinked incentive hooks, signature detection helpers, typed getIdentities key, and identities.ud initialization.
PointSystem & models
src/features/incentive/PointSystem.ts, src/model/entities/GCRv2/GCR_Main.ts, src/model/entities/types/IdentityTypes.ts
Methods to award/deduct UD domain points, breakdown.udDomains exposure, SavedUdIdentity type and StoredIdentities.ud field, GCR_Main.udDomains addition.
Datasource & schema updates
src/model/datasource.ts, src/model/entities/Mempool.ts, src/model/entities/Transactions.ts
Register L2PS entities in DataSource; change nonce columns to bigint (nullable, default 0) for Mempool and Transactions.
Peer identity & node‑call
src/libs/peer/routines/getPeerIdentity.ts, src/libs/network/manageNodeCall.ts
Challenge‑response identity normalization/verification (getPeerIdentity may return null); new resolveUdDomain node call using UDIdentityManager.
Security, middleware & IO hardening
src/features/activitypub/fedistore.ts, src/features/activitypub/fediverse.ts, src/features/mcp/MCPServer.ts, src/libs/utils/demostdlib/groundControl.ts, .gitignore
Collection whitelist/schema validation, apply helmet() to Express/SSE, HTTPS key path traversal guards, and new .gitignore entries.
Utilities, logging & key handling
src/utilities/validateUint8Array.ts, src/libs/l2ps/parallelNetworks.ts, src/libs/l2ps/L2PSConcurrentSync.ts, src/libs/l2ps/L2PSHashService.ts, src/utilities/errorMessage.ts
Harden Uint8Array validation, hex→bytes key loading, promise-locking for L2PS loads, centralized logger usage, standardized error extraction, and new encryptTransaction API.
Docs / deps / typings / scripts / memories
package.json, src/libs/l2ps/zk/README.md, src/libs/l2ps/zk/*.d.ts, .serena/**, scripts/*
Added Solana/Anchor/snarkjs/circomlib deps and type‑check scripts, zk README and TS declarations, many UD memories/docs, and multiple L2PS tooling scripts (load/stress/send/generate).

Sequence Diagram(s)

sequenceDiagram
  autonumber
  participant Aggregator as L2PSBatchAggregator
  participant Repo as L2PSMempool
  participant Prover as L2PSBatchProver
  participant ProofMgr as L2PSProofManager
  participant Shared as SharedState
  participant Mempool as MainMempool

  Aggregator->>Repo: getByStatus("PROCESSED", limit)
  Repo-->>Aggregator: txs[]
  Aggregator->>Shared: getNextBatchNonce()
  Shared-->>Aggregator: nonce
  Aggregator->>Prover: generateProof(batchPayload)
  Prover-->>Aggregator: zk_proof?
  Aggregator->>ProofMgr: createProof(l2psUid, l1BatchHash, gcrEdits...)
  ProofMgr-->>Aggregator: proof_id
  Aggregator->>Mempool: submitBatchTx(signedBatchTx with reference_block)
  Mempool-->>Aggregator: success/failure
  Aggregator->>Repo: updateStatusBatch(hashes,"BATCHED")
  Aggregator->>Repo: cleanupByStatus("BATCHED", age)

Estimated code review effort

🎯 5 (Critical) | ⏱️ ~120+ minutes

Possibly related PRs

Suggested labels

Possible security concern, Review effort 5/5

Suggested reviewers

  • cwilvx

Poem

🐇 I bundle whispers into a tidy batch,
I HMAC crumbs and guard the secret latch,
I hop through mempools with nonce snug in paw,
I sign, I proof, and tidy every flaw,
🥕✨

🚥 Pre-merge checks (1 passed, 2 failed)

❌ Failed checks (1 warning, 1 inconclusive)

  • Docstring Coverage (⚠️ Warning): Docstring coverage is 37.08%, below the required threshold of 80.00%. Resolution: write docstrings for the functions that are missing them.
  • Title check (❓ Inconclusive): The title "L2ps implementation" is vague and uses an acronym without context. Consider a more descriptive title that highlights the main feature, e.g., "Add L2PS Batch Aggregator service and mempool enhancements" or "Implement L2PS batching and transaction lifecycle management".

✅ Passed checks (1 passed)

  • Description check (✅ Passed): Check skipped because CodeRabbit's high-level summary is enabled.


qodo-code-review bot (Contributor) commented Dec 1, 2025

PR Compliance Guide 🔍

Below is a summary of compliance checks for this PR:

Security Compliance
Missing authenticated encryption

Description: The L2PS batch payload is only base64-encoded, with no additional authenticated
encryption (AEAD) or envelope-level signature. Anyone able to alter the database or data in
transit could change the included encrypted items without detection, since the accompanying
hash list is itself not authenticated within the envelope.
L2PSBatchAggregator.ts [340-355]

Referred Code
const batchHashInput = `L2PS_BATCH_${l2psUid}:${sortedHashes.length}:${sortedHashes.join(",")}`
const batchHash = Hashing.sha256(batchHashInput)

// For batch transactions, we store the batch data as base64
// The data is already encrypted at the individual transaction level,
// so we just package them together
const batchDataString = JSON.stringify(transactionData)
const encryptedBatch = Buffer.from(batchDataString).toString("base64")

return {
    l2ps_uid: l2psUid,
    encrypted_batch: encryptedBatch,
    transaction_count: transactions.length,
    batch_hash: batchHash,
    transaction_hashes: transactionHashes,
}
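
For illustration, a minimal sketch of an envelope-level authentication tag using Node's built-in `crypto`; the field names mirror the referred code, while the key-derivation step is an assumption:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto"

// Assumption: hmacKey is derived from node key material, not hardcoded.
function tagBatch(hmacKey: Buffer, l2psUid: string, encryptedBatch: string, batchHash: string): string {
    return createHmac("sha256", hmacKey)
        .update(`${l2psUid}:${encryptedBatch}:${batchHash}`)
        .digest("hex")
}

function verifyBatchTag(hmacKey: Buffer, l2psUid: string, encryptedBatch: string, batchHash: string, tag: string): boolean {
    const expected = Buffer.from(tagBatch(hmacKey, l2psUid, encryptedBatch, batchHash), "hex")
    const received = Buffer.from(tag, "hex")
    // Length check first: timingSafeEqual throws on mismatched lengths.
    return expected.length === received.length && timingSafeEqual(expected, received)
}
```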
Signature domain confusion

Description: Batch transactions are signed using a global node signing context without explicit
key selection or domain separation. If the same key is used for other message types, this could
enable cross-protocol signature reuse or confusion; include a domain-separated message prefix
and verify the key source.
L2PSBatchAggregator.ts [368-444]

Referred Code
try {
    const sharedState = getSharedState

    // Use keypair.publicKey (set by loadIdentity) instead of identity.ed25519
    if (!sharedState.keypair?.publicKey) {
        log.error("[L2PS Batch Aggregator] Node keypair not loaded yet")
        return false
    }

    // Get node's public key as hex string for 'from' field
    const nodeIdentityHex = uint8ArrayToHex(sharedState.keypair.publicKey as Uint8Array)

    // Use timestamp as nonce for batch transactions
    // This ensures uniqueness and proper ordering without requiring GCR account
    const batchNonce = Date.now()

    // Create batch transaction content
    const transactionContent = {
        type: "l2psBatch",
        from: nodeIdentityHex,
        to: nodeIdentityHex, // Self-directed for relay


 ... (clipped 56 lines)
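
A hedged illustration of the domain-separation idea; the `SIGNATURE_DOMAIN` constant and helper are assumptions, not the repository's actual API:

```typescript
// Hypothetical domain constant; any unique, protocol-specific string works.
const SIGNATURE_DOMAIN = "demos:l2ps-batch:v1"

// Prefixing the payload binds the signature to this message type, so the
// same node key cannot produce a signature that also validates in another
// protocol context.
function domainSeparatedMessage(contentString: string): Uint8Array {
    return new TextEncoder().encode(`${SIGNATURE_DOMAIN}:${contentString}`)
}
```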
Incorrect timestamp handling

Description: Timestamps are stored as strings in a bigint column and compared lexicographically
in cleanup queries. This can produce incorrect ordering and retention behavior: cleanup may be
blocked entirely (a DoS vector), or newer entries may be deleted accidentally.
l2ps_mempool.ts [167-176]

Referred Code
// REVIEW: PR Fix #2 - Store timestamp as string for bigint column
await this.repo.save({
    hash: encryptedTx.hash,
    l2ps_uid: l2psUid,
    original_hash: originalHash,
    encrypted_tx: encryptedTx,
    status: status,
    timestamp: Date.now().toString(),
    block_number: blockNumber,
})
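
The pitfall is easy to reproduce: string comparison orders digit by digit, so a later timestamp can compare as "smaller" than an earlier one:

```typescript
const older = "999999999999"    // 12-digit epoch ms (Sept 2001)
const newer = "1700000000000"   // 13-digit epoch ms (Nov 2023)

console.log(older < newer)                  // false: "9" > "1" lexicographically
console.log(Number(older) < Number(newer))  // true: correct numeric ordering
```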
Weak peer authentication

Description: Peer identity verification relies on equality of a provided identity value without
a cryptographic challenge/response, enabling spoofing or replay if an attacker can inject a
matching value.
getPeerIdentity.ts [146-185]

Referred Code
        JSON.stringify(response, null, 2),
)
// Response management
if (response.result === 200) {
    console.log("[PEER AUTHENTICATION] Received response")
    console.log(response.response)

    const receivedIdentity = normalizeIdentity(response.response)
    const expectedIdentity = normalizeExpectedIdentity(expectedKey)

    if (!receivedIdentity) {
        console.log(
            "[PEER AUTHENTICATION] Unable to normalize identity payload",
        )
        return null
    }

    if (!expectedIdentity) {
        console.log(
            "[PEER AUTHENTICATION] Unable to normalize expected identity",
        )


 ... (clipped 19 lines)
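
A minimal challenge-response sketch using Node's built-in Ed25519 support (the peer-call plumbing and key distribution are assumed, not shown):

```typescript
import { randomBytes, sign, verify, type KeyObject } from "node:crypto"

// Verifier side: issue a fresh random challenge per authentication attempt.
const challenge = randomBytes(32)

// Peer side: sign the challenge with an Ed25519 private key.
// (For Ed25519, the one-shot sign/verify take null as the digest algorithm.)
function answerChallenge(challenge: Buffer, privateKey: KeyObject): Buffer {
    return sign(null, challenge, privateKey)
}

// Verifier side: accept the identity only if the signature verifies against
// the public key we already expect for this peer.
function verifyAnswer(challenge: Buffer, signature: Buffer, expectedPublicKey: KeyObject): boolean {
    return verify(null, challenge, expectedPublicKey, signature)
}
```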
Missing authorization check

Description: The early return bypasses main mempool handling for 'l2psEncryptedTx', but there
are no explicit access controls or origin checks, so any caller could submit such transactions
to be routed privately without proper authorization.
endpointHandlers.ts [331-351]

Referred Code
// Handle encrypted L2PS transactions
// These are routed to the L2PS mempool via handleSubnetTx (which calls handleL2PS)
console.log("[handleExecuteTransaction] Processing L2PS Encrypted Tx")
var l2psResult = await ServerHandlers.handleSubnetTx(
    tx as L2PSTransaction,
)
result.response = l2psResult
// If successful, we don't want to add this to the main mempool
// The handleL2PS routine takes care of adding it to the L2PS mempool
if (l2psResult.result === 200) {
    result.success = true
    // Prevent adding to main mempool by returning early or setting a flag?
    // The current logic adds to mempool if result.success is true.
    // We need to avoid that for L2PS txs as they are private.

    // Hack: We return here to avoid the main mempool logic below
    return result
} else {
    result.success = false
}
break
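
A hedged sketch of what an explicit guard could look like before any private routing happens; `isAuthorizedL2PSSender` and the result shape are hypothetical, not the repository's API:

```typescript
// Hypothetical allowlist check; real logic might consult L2PS network membership.
async function isAuthorizedL2PSSender(from: string): Promise<boolean> {
    const participants = new Set<string>() // e.g. loaded from L2PS network config
    return participants.has(from)
}

async function routeL2PSEncryptedTx(
    tx: { content: { from: string } },
    handleSubnetTx: (tx: unknown) => Promise<{ result: number }>,
): Promise<{ success: boolean; result: number }> {
    if (!(await isAuthorizedL2PSSender(tx.content.from))) {
        return { success: false, result: 403 } // reject before routing privately
    }
    const l2psResult = await handleSubnetTx(tx)
    // L2PS txs never reach the main mempool, regardless of outcome.
    return { success: l2psResult.result === 200, result: l2psResult.result }
}
```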
Ticket Compliance
🎫 No ticket provided
Codebase Duplication Compliance
Codebase context is not defined

Follow the guide to enable codebase context checks.

Custom Compliance
🟢 Generic: Meaningful Naming and Self-Documenting Code

Objective: Ensure all identifiers clearly express their purpose and intent, making code
self-documenting

Status: Passed


Generic: Secure Error Handling

Objective: To prevent the leakage of sensitive system information through error messages while
providing sufficient detail for internal debugging.

Status: Passed


Generic: Secure Logging Practices

Objective: To ensure logs are useful for debugging and auditing without exposing sensitive
information like PII, PHI, or cardholder data.

Status: Passed


Generic: Comprehensive Audit Trails

Objective: To create a detailed and reliable record of critical system actions for security analysis
and compliance.

Status:
Limited Auditing: New L2PS batch aggregation and mempool submission flows add critical actions without
structured audit logs that capture actor identity, action, and outcomes in a consistent
format.

Referred Code
    // Start the interval timer
    this.intervalId = setInterval(async () => {
        await this.safeAggregateAndSubmit()
    }, this.AGGREGATION_INTERVAL)

    log.info(`[L2PS Batch Aggregator] Started with ${this.AGGREGATION_INTERVAL}ms interval`)
}

/**
 * Stop the L2PS batch aggregation service
 * 
 * Gracefully shuts down the service, waiting for any ongoing operations to complete.
 * 
 * @param timeoutMs - Maximum time to wait for ongoing operations (default: 15 seconds)
 */
async stop(timeoutMs = 15000): Promise<void> {
    if (!this.isRunning) {
        return
    }

    log.info("[L2PS Batch Aggregator] Stopping batch aggregation service")


 ... (clipped 292 lines)


Generic: Robust Error Handling and Edge Case Management

Objective: Ensure comprehensive error handling that provides meaningful context and graceful
degradation

Status:
Error Context Gaps: While errors are caught and logged, batch submission and cleanup paths lack explicit
retries/backoff and may not include sufficient context (e.g., batch identifiers and
l2ps_uid) in all error logs for reliable recovery.

Referred Code
private async submitBatchToMempool(batchPayload: L2PSBatchPayload): Promise<boolean> {
    try {
        const sharedState = getSharedState

        // Use keypair.publicKey (set by loadIdentity) instead of identity.ed25519
        if (!sharedState.keypair?.publicKey) {
            log.error("[L2PS Batch Aggregator] Node keypair not loaded yet")
            return false
        }

        // Get node's public key as hex string for 'from' field
        const nodeIdentityHex = uint8ArrayToHex(sharedState.keypair.publicKey as Uint8Array)

        // Use timestamp as nonce for batch transactions
        // This ensures uniqueness and proper ordering without requiring GCR account
        const batchNonce = Date.now()

        // Create batch transaction content
        const transactionContent = {
            type: "l2psBatch",
            from: nodeIdentityHex,


 ... (clipped 81 lines)


Generic: Security-First Input Validation and Data Handling

Objective: Ensure all data inputs are validated, sanitized, and handled securely to prevent
vulnerabilities

Status:
Key Parsing Risks: New hex file parsing for L2PS keys adds validation but still directly trusts filesystem
inputs without explicit size/entropy checks or secure key handling guarantees.

Referred Code
    nodeConfig.keys.private_key_path,
)
const ivPath = path.resolve(process.cwd(), nodeConfig.keys.iv_path)

if (!fs.existsSync(privateKeyPath) || !fs.existsSync(ivPath)) {
    throw new Error(`L2PS key files not found for ${uid}`)
}

const privateKeyHex = fs.readFileSync(privateKeyPath, "utf8").trim()
const ivHex = fs.readFileSync(ivPath, "utf8").trim()

const privateKeyBytes = hexFileToBytes(privateKeyHex, `${uid} private key`)
const ivBytes = hexFileToBytes(ivHex, `${uid} IV`)

const l2ps = await L2PS.create(privateKeyBytes, ivBytes)
const l2psConfig: L2PSConfig = {


Compliance status legend: 🟢 Fully compliant · 🟡 Partially compliant · 🔴 Not compliant · ⚪ Requires further human verification · 🏷️ Compliance label

qodo-code-review bot (Contributor) commented Dec 1, 2025

PR Code Suggestions ✨

Explore these optional code suggestions:

High-level
Use a persistent nonce for batches
Suggestion Impact: The commit introduced a persistent nonce mechanism via getNextBatchNonce and replaced Date.now() with this method when setting the batch nonce, addressing uniqueness and ordering concerns.

code diff:

+    /** Persistent nonce counter for batch transactions */
+    private batchNonceCounter: number = 0
     
     /** Statistics tracking */
     private stats = {
@@ -319,14 +328,18 @@
     /**
      * Create an encrypted batch payload from transactions
      * 
+     * Uses HMAC-SHA256 for authenticated encryption to prevent tampering.
+     * 
      * @param l2psUid - L2PS network identifier
      * @param transactions - Transactions to include in batch
-     * @returns L2PS batch payload with encrypted data
+     * @returns L2PS batch payload with encrypted data and authentication tag
      */
     private async createBatchPayload(
         l2psUid: string,
         transactions: L2PSMempoolTx[],
     ): Promise<L2PSBatchPayload> {
+        const sharedState = getSharedState
+        
         // Collect transaction hashes and encrypted data
         const transactionHashes = transactions.map(tx => tx.hash)
         const transactionData = transactions.map(tx => ({
@@ -346,20 +359,51 @@
         const batchDataString = JSON.stringify(transactionData)
         const encryptedBatch = Buffer.from(batchDataString).toString("base64")
 
+        // Create HMAC-SHA256 authentication tag for tamper detection
+        // Uses node's private key as HMAC key for authenticated encryption
+        const hmacKey = sharedState.keypair?.privateKey 
+            ? Buffer.from(sharedState.keypair.privateKey as Uint8Array).toString("hex").slice(0, 64)
+            : batchHash // Fallback to batch hash if keypair not available
+        const hmacData = `${l2psUid}:${encryptedBatch}:${batchHash}:${transactionHashes.join(",")}`
+        const authenticationTag = crypto
+            .createHmac("sha256", hmacKey)
+            .update(hmacData)
+            .digest("hex")
+
         return {
             l2ps_uid: l2psUid,
             encrypted_batch: encryptedBatch,
             transaction_count: transactions.length,
             batch_hash: batchHash,
             transaction_hashes: transactionHashes,
-        }
+            authentication_tag: authenticationTag,
+        }
+    }
+
+    /**
+     * Get next persistent nonce for batch transactions
+     * 
+     * Uses a monotonically increasing counter combined with timestamp
+     * to ensure uniqueness across restarts and prevent replay attacks.
+     * 
+     * @returns Promise resolving to the next nonce value
+     */
+    private async getNextBatchNonce(): Promise<number> {
+        // Combine counter with timestamp for uniqueness across restarts
+        // Counter ensures ordering within same millisecond
+        this.batchNonceCounter++
+        const timestamp = Date.now()
+        // Use high bits for timestamp, low bits for counter
+        // This allows ~1000 batches per millisecond before collision
+        return timestamp * 1000 + (this.batchNonceCounter % 1000)
     }
 
     /**
      * Submit a batch transaction to the main mempool
      * 
      * Creates a transaction of type 'l2psBatch' and submits it to the main
-     * mempool for inclusion in the next block.
+     * mempool for inclusion in the next block. Uses domain-separated signatures
+     * to prevent cross-protocol signature reuse.
      * 
      * @param batchPayload - Encrypted batch payload (includes l2ps_uid)
      * @returns true if submission was successful
@@ -377,9 +421,9 @@
             // Get node's public key as hex string for 'from' field
             const nodeIdentityHex = uint8ArrayToHex(sharedState.keypair.publicKey as Uint8Array)
 
-            // Use timestamp as nonce for batch transactions
-            // This ensures uniqueness and proper ordering without requiring GCR account
-            const batchNonce = Date.now()
+            // Use persistent nonce for batch transactions
+            // This ensures uniqueness and proper ordering, preventing replay attacks
+            const batchNonce = await this.getNextBatchNonce()
 

Replace the use of Date.now() as a nonce in L2PSBatchAggregator with a
persistent, monotonically increasing nonce. This will prevent transaction
collisions and replay issues in a distributed environment.

Examples:

src/libs/l2ps/L2PSBatchAggregator.ts [380-392]
            // Use timestamp as nonce for batch transactions
            // This ensures uniqueness and proper ordering without requiring GCR account
            const batchNonce = Date.now()

            // Create batch transaction content
            const transactionContent = {
                type: "l2psBatch",
                from: nodeIdentityHex,
                to: nodeIdentityHex, // Self-directed for relay
                from_ed25519_address: nodeIdentityHex,

 ... (clipped 3 lines)

Solution Walkthrough:

Before:

// src/libs/l2ps/L2PSBatchAggregator.ts
private async submitBatchToMempool(batchPayload: L2PSBatchPayload): Promise<boolean> {
    try {
        // ...
        // Use timestamp as nonce for batch transactions
        // This ensures uniqueness and proper ordering without requiring GCR account
        const batchNonce = Date.now()

        const transactionContent = {
            // ...
            nonce: batchNonce,
            // ...
        }
        // ...
        await Mempool.addTransaction(batchTransaction as any)
        // ...
    } catch (error: any) {
        // ...
    }
}

After:

// src/libs/l2ps/L2PSBatchAggregator.ts

// A new class member to manage the nonce
private batchNonceCounter: number = 0; // Or load from a persistent store

private async getNextNonce(): Promise<number> {
    // Logic to fetch and increment a persistent nonce for the aggregator's identity.
    // This could involve a DB call or interacting with the GCR.
    // For simplicity, showing an in-memory counter.
    this.batchNonceCounter++;
    return this.batchNonceCounter;
}

private async submitBatchToMempool(batchPayload: L2PSBatchPayload): Promise<boolean> {
    try {
        const batchNonce = await this.getNextNonce();
        const transactionContent = {
            // ...
            nonce: batchNonce,
            // ...
        };
        // ...
    } catch (error: any) {
        // ...
    }
}
Suggestion importance[1-10]: 9


Why: The suggestion correctly identifies a critical design flaw in the new L2PSBatchAggregator where using Date.now() as a nonce is not robust and can lead to transaction collisions, compromising the reliability of the entire batching mechanism.

Impact: High
Possible issue
Fix incorrect string-based timestamp comparison
Suggestion Impact: The commit updated cleanupByStatus to convert the cutoff to a string and changed the WHERE clause to CAST(timestamp AS BIGINT) < CAST(:cutoff AS BIGINT), ensuring numeric comparison as suggested.

code diff:

-            const cutoffTimestamp = Date.now() - olderThanMs
-
+            const cutoffTimestamp = (Date.now() - olderThanMs).toString()
+
+            // Use CAST to ensure numeric comparison instead of lexicographic string comparison
+            // This prevents incorrect ordering and retention behavior
             const result = await this.repo
                 .createQueryBuilder()
                 .delete()
                 .from(L2PSMempoolTx)
-                .where("timestamp < :cutoff", { cutoff: cutoffTimestamp.toString() })
+                .where("CAST(timestamp AS BIGINT) < CAST(:cutoff AS BIGINT)", { cutoff: cutoffTimestamp })
                 .andWhere("status = :status", { status })

In the cleanupByStatus function, cast the timestamp column and :cutoff parameter
to BIGINT in the SQL query to ensure correct numeric comparison instead of
incorrect string comparison.

src/libs/blockchain/l2ps_mempool.ts [482-505]

 public static async cleanupByStatus(status: L2PSStatus, olderThanMs: number): Promise<number> {
     try {
         await this.ensureInitialized()
 
-        const cutoffTimestamp = Date.now() - olderThanMs
+        const cutoffTimestamp = (Date.now() - olderThanMs).toString()
 
         const result = await this.repo
             .createQueryBuilder()
             .delete()
             .from(L2PSMempoolTx)
-            .where("timestamp < :cutoff", { cutoff: cutoffTimestamp.toString() })
+            .where("CAST(timestamp AS BIGINT) < CAST(:cutoff AS BIGINT)", { cutoff: cutoffTimestamp })
             .andWhere("status = :status", { status })
             .execute()
 
         const deletedCount = result.affected || 0
         if (deletedCount > 0) {
             log.info(`[L2PS Mempool] Cleaned up ${deletedCount} old ${status} transactions`)
         }
         return deletedCount
 
     } catch (error: any) {
         log.error(`[L2PS Mempool] Error during cleanup by status ${status}:`, error)
         return 0
     }
 }

[To ensure code accuracy, apply this suggestion manually]

Suggestion importance[1-10]: 9


Why: This suggestion correctly identifies a critical bug where comparing timestamps as strings leads to incorrect lexicographical comparison, which would cause the cleanupByStatus function to delete the wrong records.

Impact: High
Add missing nonce to transaction
Suggestion Impact: The commit introduced a persistent nonce via getNextBatchNonce() and used it when creating the batch transaction, and also added the nonce to the transaction object submitted to the mempool.

code diff:

@@ -377,9 +421,9 @@
             // Get node's public key as hex string for 'from' field
             const nodeIdentityHex = uint8ArrayToHex(sharedState.keypair.publicKey as Uint8Array)
 
-            // Use timestamp as nonce for batch transactions
-            // This ensures uniqueness and proper ordering without requiring GCR account
-            const batchNonce = Date.now()
+            // Use persistent nonce for batch transactions
+            // This ensures uniqueness and proper ordering, preventing replay attacks
+            const batchNonce = await this.getNextBatchNonce()
 
             // Create batch transaction content
             const transactionContent = {
@@ -403,10 +447,12 @@
             const contentString = JSON.stringify(transactionContent)
             const hash = Hashing.sha256(contentString)
 
-            // Sign the transaction
+            // Sign with domain separation to prevent cross-protocol signature reuse
+            // Domain prefix ensures this signature cannot be replayed in other contexts
+            const domainSeparatedMessage = `${this.SIGNATURE_DOMAIN}:${contentString}`
             const signature = await ucrypto.sign(
                 sharedState.signingAlgorithm,
-                new TextEncoder().encode(contentString),
+                new TextEncoder().encode(domainSeparatedMessage),
             )
 
             // Create batch transaction object matching mempool expectations
@@ -417,6 +463,7 @@
                 signature: signature ? {
                     type: sharedState.signingAlgorithm,
                     data: uint8ArrayToHex(signature.signature),
+                    domain: this.SIGNATURE_DOMAIN, // Include domain for verification
                 } : null,
                 reference_block: 0, // Will be set by mempool
                 status: "pending", // Required by MempoolTx entity

Add the missing nonce property to the batchTransaction object to ensure the
correct value is stored in the mempool.

src/libs/l2ps/L2PSBatchAggregator.ts [412-427]

 // Create batch transaction object matching mempool expectations
 // Note: status and extra fields are required by MempoolTx entity
 const batchTransaction = {
     hash,
     content: transactionContent,
     signature: signature ? {
         type: sharedState.signingAlgorithm,
         data: uint8ArrayToHex(signature.signature),
     } : null,
+    nonce: batchNonce, // Add nonce to the top-level object
     reference_block: 0, // Will be set by mempool
     status: "pending", // Required by MempoolTx entity
     extra: null, // Optional field
 }
 
 // Submit to main mempool
 const result = await Mempool.addTransaction(batchTransaction as any)

[Suggestion processed]

Suggestion importance[1-10]: 8


Why: The suggestion correctly identifies that the nonce property is missing from the batchTransaction object, which would cause an incorrect default value of 0 to be stored instead of the calculated batchNonce.

Impact: Medium
Prevent failed transactions from leaking
Suggestion Impact: The commit wrapped the case in braces, changed var to const, and returned early on success to avoid main mempool processing, aligning with the suggestion's goal of preventing leakage. It did not implement returning on failure, but partially applied the recommendation by preventing successful L2PS txs from entering the main mempool.

code diff:

-            case "l2psEncryptedTx":
+            case "l2psEncryptedTx": {
                 // Handle encrypted L2PS transactions
                 // These are routed to the L2PS mempool via handleSubnetTx (which calls handleL2PS)
                 console.log("[handleExecuteTransaction] Processing L2PS Encrypted Tx")
-                var l2psResult = await ServerHandlers.handleSubnetTx(
+                const l2psResult = await ServerHandlers.handleSubnetTx(
                     tx as L2PSTransaction,
                 )
                 result.response = l2psResult
@@ -339,16 +339,13 @@
                 // The handleL2PS routine takes care of adding it to the L2PS mempool
                 if (l2psResult.result === 200) {
                     result.success = true
-                    // Prevent adding to main mempool by returning early or setting a flag?
-                    // The current logic adds to mempool if result.success is true.
-                    // We need to avoid that for L2PS txs as they are private.
-                    
-                    // Hack: We return here to avoid the main mempool logic below
+                    // Return early to avoid adding L2PS transactions to main mempool
                     return result
                 } else {
                     result.success = false
                 }
                 break
+            }
 

Refactor the l2psEncryptedTx case to return immediately for both success and
failure scenarios, preventing failed L2PS transactions from being processed by
the main mempool logic.

src/libs/network/endpointHandlers.ts [330-351]

-case "l2psEncryptedTx":
+case "l2psEncryptedTx": {
     // Handle encrypted L2PS transactions
     // These are routed to the L2PS mempool via handleSubnetTx (which calls handleL2PS)
     console.log("[handleExecuteTransaction] Processing L2PS Encrypted Tx")
-    var l2psResult = await ServerHandlers.handleSubnetTx(
+    const l2psResult = await ServerHandlers.handleSubnetTx(
         tx as L2PSTransaction,
     )
     result.response = l2psResult
-    // If successful, we don't want to add this to the main mempool
-    // The handleL2PS routine takes care of adding it to the L2PS mempool
-    if (l2psResult.result === 200) {
-        result.success = true
-        // Prevent adding to main mempool by returning early or setting a flag?
-        // The current logic adds to mempool if result.success is true.
-        // We need to avoid that for L2PS txs as they are private.
-        
-        // Hack: We return here to avoid the main mempool logic below
-        return result
-    } else {
-        result.success = false
-    }
-    break
+    // L2PS transactions (successful or not) should not be added to the main mempool.
+    // The handleL2PS routine handles the L2PS mempool logic.
+    // We return here to prevent fall-through to the main mempool logic.
+    result.success = l2psResult.result === 200
+    return result
+}

[Suggestion processed]

Suggestion importance[1-10]: 8


Why: The suggestion correctly identifies a logic flaw where a failed L2PS transaction could fall through and be incorrectly processed by the main mempool logic, violating the privacy design of L2PS.

Impact: Medium
Throw error instead of returning null
Suggestion Impact: The commit addressed the return-type mismatch noted by the suggestion, but did so by changing the function signature to Promise<Peer | null> (explicitly allowing null) rather than switching failure paths to throw errors. The function still returns null in failure scenarios.

code diff:

 export default async function getPeerIdentity(
     peer: Peer,
     expectedKey: string,
-): Promise<Peer> {
+): Promise<Peer | null> {
+    // Generate cryptographic challenge for this authentication session

In getPeerIdentity, throw errors in failure scenarios instead of returning null
to adhere to the function's Promise return type and improve error handling.

src/libs/peer/routines/getPeerIdentity.ts [125-201]

 export default async function getPeerIdentity(
     peer: Peer,
     expectedKey: string,
 ): Promise<Peer> {
     // Getting our identity
     console.warn("[PEER AUTHENTICATION] Getting peer identity")
     console.log(peer)
     console.log(expectedKey)
 
     const nodeCall: NodeCall = {
         message: "getPeerIdentity",
         data: null,
         muid: null,
     }
 
     const response = await peer.call({
         method: "nodeCall",
         params: [nodeCall],
     })
     console.log(
         "[PEER AUTHENTICATION] Response Received: " +
             JSON.stringify(response, null, 2),
     )
     // Response management
     if (response.result === 200) {
         console.log("[PEER AUTHENTICATION] Received response")
-        //console.log(response[1].identity.toString("hex"))
         console.log(response.response)
 
         const receivedIdentity = normalizeIdentity(response.response)
         const expectedIdentity = normalizeExpectedIdentity(expectedKey)
 
         if (!receivedIdentity) {
-            console.log(
-                "[PEER AUTHENTICATION] Unable to normalize identity payload",
-            )
-            return null
+            throw new Error("[PEER AUTHENTICATION] Unable to normalize identity payload")
         }
 
         if (!expectedIdentity) {
-            console.log(
-                "[PEER AUTHENTICATION] Unable to normalize expected identity",
-            )
-            return null
+            throw new Error("[PEER AUTHENTICATION] Unable to normalize expected identity")
         }
 
-        if (receivedIdentity === expectedIdentity) {
-            console.log("[PEER AUTHENTICATION] Identity is the expected one")
-        } else {
-            console.log(
-                "[PEER AUTHENTICATION] Identity is not the expected one",
-            )
-            console.log("Expected: ")
-            console.log(expectedIdentity)
-            console.log("Received: ")
-            console.log(receivedIdentity)
-            return null
+        if (receivedIdentity !== expectedIdentity) {
+            const errorMsg = `[PEER AUTHENTICATION] Identity mismatch. Expected: ${expectedIdentity}, Received: ${receivedIdentity}`
+            console.error(errorMsg)
+            throw new Error(errorMsg)
         }
+        
+        console.log("[PEER AUTHENTICATION] Identity is the expected one")
         // Adding the property to the peer
         peer.identity = receivedIdentity // Identity is now known
         peer.status.online = true // Peer is now online
         peer.status.ready = true // Peer is now ready
         peer.status.timestamp = new Date().getTime()
         peer.verification.status = true // We verified the peer
         peer.verification.timestamp = new Date().getTime()
         return peer
     } else {
-        console.log("[PEER AUTHENTICATION] No response received")
-        return null
+        throw new Error(`[PEER AUTHENTICATION] Peer returned non-200 status: ${response.result}`)
     }
 }

[To ensure code accuracy, apply this suggestion manually]

Suggestion importance[1-10]: 7


Why: The suggestion correctly points out that returning null violates the function's Promise<Peer> return type, which can cause runtime errors. Throwing an error is a more robust and type-safe approach.

Impact: Medium

coderabbitai bot (Contributor) left a comment

Actionable comments posted: 3

🧹 Nitpick comments (4)
src/libs/l2ps/parallelNetworks.ts (1)

57-77: Good input validation for hex conversion.

The hexFileToBytes helper properly validates hex input (empty check, even length, valid characters) before conversion. The error messages are clear and specific.

Consider adding JSDoc to clarify that the return type is a binary string (forge's byte string format):

+/**
+ * Converts hex-encoded file content to a binary byte string.
+ * @param value - Hex string to convert (with optional 0x prefix)
+ * @param label - Label for error messages
+ * @returns Binary string representation of bytes (forge format)
+ */
 function hexFileToBytes(value: string, label: string): string {
src/libs/network/endpointHandlers.ts (1)

56-58: Temporary type workaround acknowledged.

The any type alias is explicitly marked as temporary. Consider creating a proper type definition in the workarounds module when the SDK export becomes available.

src/libs/l2ps/L2PSBatchAggregator.ts (2)

343-347: Consider renaming encrypted_batch to encoded_batch or clarifying the encryption.

The field is named encrypted_batch but the actual operation is base64 encoding, not encryption. While the comment explains that individual transactions are already encrypted, the field name may cause confusion. Consider either:

  1. Renaming to encoded_batch or batch_data for clarity
  2. Adding encryption at the batch level if batch-level encryption is intended

519-531: Inconsistent SharedState access pattern.

Line 529 uses SharedState.getInstance() while line 369 uses getSharedState. Consider using a consistent pattern throughout the class for maintainability.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

Disabled knowledge base sources:

  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 53d71e6 and 32a0a8c.

📒 Files selected for processing (15)
  • src/features/InstantMessagingProtocol/signalingServer/signalingServer.ts (2 hunks)
  • src/index.ts (5 hunks)
  • src/libs/blockchain/l2ps_mempool.ts (6 hunks)
  • src/libs/consensus/v2/PoRBFT.ts (1 hunks)
  • src/libs/l2ps/L2PSBatchAggregator.ts (1 hunks)
  • src/libs/l2ps/parallelNetworks.ts (4 hunks)
  • src/libs/network/endpointHandlers.ts (4 hunks)
  • src/libs/network/routines/transactions/handleL2PS.ts (1 hunks)
  • src/libs/peer/routines/getPeerIdentity.ts (2 hunks)
  • src/model/datasource.ts (3 hunks)
  • src/model/entities/GCRv2/GCRSubnetsTxs.ts (1 hunks)
  • src/model/entities/L2PSMempool.ts (2 hunks)
  • src/model/entities/Mempool.ts (1 hunks)
  • src/model/entities/OfflineMessages.ts (1 hunks)
  • src/model/entities/Transactions.ts (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (5)
src/features/InstantMessagingProtocol/signalingServer/signalingServer.ts (1)
src/libs/blockchain/chain.ts (1)
  • Chain (41-669)
src/libs/l2ps/L2PSBatchAggregator.ts (6)
src/utilities/waiter.ts (1)
  • resolve (94-108)
src/utilities/logger.ts (1)
  • error (125-132)
src/libs/blockchain/l2ps_mempool.ts (1)
  • L2PS_STATUS (15-24)
src/libs/crypto/hashing.ts (1)
  • Hashing (15-26)
src/utilities/sharedState.ts (2)
  • getSharedState (275-277)
  • SharedState (17-270)
src/libs/utils/calibrateTime.ts (1)
  • getNetworkTimestamp (17-23)
src/libs/network/endpointHandlers.ts (1)
src/libs/crypto/hashing.ts (1)
  • Hashing (15-26)
src/libs/blockchain/l2ps_mempool.ts (2)
src/libs/blockchain/transaction.ts (1)
  • hash (106-114)
src/utilities/logger.ts (1)
  • error (125-132)
src/libs/l2ps/parallelNetworks.ts (1)
src/utilities/sharedState.ts (1)
  • getSharedState (275-277)
🪛 Biome (2.1.2)
src/libs/network/endpointHandlers.ts

[error] 334-337: Other switch clauses can erroneously access this declaration.
Wrap the declaration in a block to restrict its access to the switch clause.

The declaration is defined in this switch clause:

Safe fix: Wrap the declaration in a block.

(lint/correctness/noSwitchDeclarations)

🔇 Additional comments (47)
src/model/entities/OfflineMessages.ts (1)

2-2: LGTM!

Using import type is the correct approach since SerializedEncryptedObject is only used as a type annotation. This ensures the import is erased during compilation and clarifies that it's purely for type checking.

src/model/entities/GCRv2/GCRSubnetsTxs.ts (1)

2-2: LGTM! Clean import optimization.

The removal of the unused Transaction import while keeping L2PSTransaction as a type-only import aligns with the PR's broader pattern of migrating to type-only imports for L2PS-related types.

src/model/entities/Mempool.ts (1)

40-41: Verify TypeScript type compatibility with bigint column.

The database column is now bigint, but the TypeScript type remains number. JavaScript's number type has a safe integer range of ±2^53-1, while PostgreSQL's bigint supports ±2^63-1. If nonce values exceed the safe integer range, precision loss may occur.

Consider updating the TypeScript type to match the database representation:

-    @Column("bigint", { name: "nonce", nullable: true, default: 0 })
-    nonce: number
+    @Column("bigint", { name: "nonce", nullable: true, default: 0 })
+    nonce: bigint

Alternatively, verify that nonce values in your use case will never exceed Number.MAX_SAFE_INTEGER (9007199254740991).

src/model/entities/Transactions.ts (1)

46-47: Verify TypeScript type compatibility with bigint column.

Same concern as in Mempool.ts: the database column is bigint but the TypeScript type is number. This could lead to precision loss for nonce values exceeding Number.MAX_SAFE_INTEGER.

Consider updating to:

-    @Column("bigint", { name: "nonce", nullable: true, default: 0 })
-    nonce: number
+    @Column("bigint", { name: "nonce", nullable: true, default: 0 })
+    nonce: bigint

Or verify that nonce values will remain within the safe integer range.
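
If the TypeScript side must stay `number`, a common middle ground is a column transformer that converts the driver's string representation and fails loudly past the safe range. A sketch, not the PR's code:

```typescript
import { ValueTransformer } from "typeorm"

const bigintToNumber: ValueTransformer = {
    to: (value: number | null) => value, // JS number -> driver
    from: (value: string | null) => {    // driver string -> JS number
        if (value === null) return null
        const n = Number(value)
        if (!Number.isSafeInteger(n)) {
            throw new Error(`nonce ${value} exceeds Number.MAX_SAFE_INTEGER`)
        }
        return n
    },
}

// Usage inside the entity:
// @Column("bigint", { name: "nonce", nullable: true, default: 0, transformer: bigintToNumber })
// nonce: number
```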

src/libs/network/routines/transactions/handleL2PS.ts (1)

1-2: LGTM! Import path consolidation.

The migration of L2PSTransaction to the @/types/sdk-workarounds path and the use of type-only imports aligns with the PR's broader refactoring pattern for L2PS types.

src/features/InstantMessagingProtocol/signalingServer/signalingServer.ts (2)

57-57: LGTM! Necessary import for reference block retrieval.

The addition of the Chain import is required to fetch the reference block number for mempool operations.


661-665: LGTM! Correctly implements updated mempool API.

The change properly fetches the reference block number and includes it in the mempool transaction, aligning with the updated Mempool.addTransaction signature that now requires reference_block: number.

src/model/datasource.ts (2)

26-27: LGTM! New L2PS entity imports.

The addition of L2PSHash and L2PSMempoolTx entity imports supports the L2PS batch aggregation functionality introduced in this PR.


49-50: LGTM! Entities properly registered.

Both L2PS entities are correctly registered in both DataSource configurations, ensuring they're available for TypeORM operations.

Also applies to: 83-84

src/libs/l2ps/parallelNetworks.ts (3)

13-14: LGTM! Import path consolidation.

The migration of L2PSTransaction to type-only import from @/types/sdk-workarounds is consistent with the PR's refactoring pattern.


265-288: LGTM! New transaction encryption API.

The encryptTransaction method provides a clean public API for encrypting transactions. It properly:

  • Awaits the asynchronous encryptTx operation (line 271)
  • Signs the encrypted transaction with the node's private key for integrity
  • Accesses getSharedState correctly as a property (line 274)

185-191: Verify key format compatibility before merging.

The change to read keys as hex-encoded strings and validate them through hexFileToBytes is a security improvement, but this appears to be a breaking change if existing deployments use keys in a different format. Before approving, clarify:

  • Are existing L2PS key files already hex-encoded, or will this require a migration?
  • Is there migration guidance for existing deployments?
src/libs/consensus/v2/PoRBFT.ts (1)

388-393: LGTM! Handles transactions without GCR edits gracefully.

The new logic correctly allows transactions without GCR edits (e.g., L2PS batch transactions) to pass through as successful without attempting to apply non-existent edits. This prevents potential errors from iterating over undefined or empty arrays.
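
For reference, the guard in isolation might look like the following sketch; the `gcr_edits` field name comes from this PR's summaries, and the surrounding consensus loop is assumed:

```typescript
type ConsensusTx = { hash: string; content?: { gcr_edits?: unknown[] } }

function shouldApplyGcrEdits(tx: ConsensusTx): boolean {
    const edits = tx.content?.gcr_edits
    // Transactions without GCR edits (e.g. L2PS batch txs) pass through as
    // successful rather than iterating over an undefined or empty array.
    return Array.isArray(edits) && edits.length > 0
}
```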

src/libs/network/endpointHandlers.ts (2)

19-20: Import restructuring looks good.

The type-only import for L2PSTransaction from the SDK workarounds module aligns with the broader refactoring pattern in this PR.


126-131: GCREdit hashing optimization improves consistency.

Precomputing the JSON strings before hashing both ensures identical input for comparison and avoids redundant serialization. This is a good change.

src/index.ts (4)

34-35: New L2PS service imports are correctly added.


388-393: L2PS network loading has error handling but continues on failure.

The error is logged but doesn't prevent startup. This is appropriate for a non-critical service, but verify that downstream code (lines 397+) handles the case where l2psJoinedUids may be empty or undefined if loading fails.


402-406: L2PSBatchAggregator service initialization looks correct.

Starting the batch aggregator alongside the hash service when L2PS networks are present is the expected behavior.


422-429: Verify that L2PSHashService.stop() and L2PSBatchAggregator.stop() are idempotent.

The graceful shutdown correctly calls both services' stop() methods with proper error handling. However, confirm that these methods use atomic state flags or similar guards to safely handle multiple invocations (e.g., when called during startup interruption, or if the handler fires twice). Idempotent stop() methods should complete without error or side effects when services weren't started or are already stopped. This is especially important for signal handlers that may be triggered concurrently.
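
An idempotent stop() typically guards on a state flag and caches the in-flight shutdown promise so concurrent callers await the same work. A sketch under those assumptions:

```typescript
class StoppableService {
    private isRunning = false
    private stopping: Promise<void> | null = null
    private intervalId: ReturnType<typeof setInterval> | null = null

    async stop(timeoutMs = 15000): Promise<void> {
        if (!this.isRunning) return             // never started or already stopped
        if (this.stopping) return this.stopping // concurrent callers share one shutdown

        this.stopping = (async () => {
            if (this.intervalId) clearInterval(this.intervalId)
            this.intervalId = null
            // ... wait up to timeoutMs for in-flight operations here ...
            this.isRunning = false
        })()
        return this.stopping
    }
}
```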

src/libs/peer/routines/getPeerIdentity.ts (4)

16-43: Helper types and asHexString function handle identity format variations.

The implementation handles colon-prefixed identities (e.g., "ed25519:0xabc...") and various prefix cases. The hex validation regex and normalization to lowercase with 0x prefix is correct.


97-113: normalizeExpectedIdentity provides fallback for keys without prefix.

The fallback for pure hex strings without 0x prefix ensures comparison works even with inconsistent input formats.


152-183: Identity comparison now uses normalized values consistently.

The updated flow properly normalizes both received and expected identities before comparison, with appropriate null checks and logging. Assigning the normalized identity to peer.identity ensures consistent storage.


45-95: Comprehensive identity normalization handles multiple input formats.

The function correctly handles strings, Uint8Array, ArrayBuffer, typed arrays, Node.js Buffer payloads, and envelope objects. The recursive call for maybeEnvelope.data is appropriate.

However, verify that uint8ArrayToHex and asHexString handle the 0x prefix consistently. If uint8ArrayToHex returns 0xabcd while asHexString returns abcd (or vice versa), the final output will be inconsistent across different input types.
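
One way to make the concern concrete: funnel every representation through a single canonicalizer so the 0x prefix and casing cannot drift (a sketch, not the file's code):

```typescript
// Canonical form: lowercase hex with a single 0x prefix.
function canonicalHex(value: string): string | null {
    const stripped = value.startsWith("0x") || value.startsWith("0X")
        ? value.slice(2)
        : value
    if (stripped.length === 0 || !/^[0-9a-fA-F]+$/.test(stripped)) return null
    return "0x" + stripped.toLowerCase()
}
```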

src/model/entities/L2PSMempool.ts (1)

34-40: New sequence_number column for deterministic ordering.

Using bigint with string type in TypeScript matches the timestamp pattern and prevents JavaScript precision loss. The default value of "0" is appropriate.

src/libs/blockchain/l2ps_mempool.ts (8)

10-26: Well-structured status lifecycle constants.

The L2PS_STATUS object with as const assertion provides type-safe status values with clear lifecycle documentation: pending → processed → batched → confirmed → (deleted).
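
For reference, the pattern being praised looks roughly like this; the exact status values are inferred from the lifecycle described above:

```typescript
export const L2PS_STATUS = {
    PENDING: "pending",
    PROCESSED: "processed",
    BATCHED: "batched",
    CONFIRMED: "confirmed",
} as const

// Derives the union "pending" | "processed" | "batched" | "confirmed",
// so functions can accept L2PSStatus instead of a bare string.
export type L2PSStatus = (typeof L2PS_STATUS)[keyof typeof L2PS_STATUS]
```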


142-156: Improved block number determination with safer validation.

The updated logic properly checks if shardBlockRef is a valid non-negative number before use, and validates lastBlockNumber as a fallback. The additional safety check at lines 159-164 is defensive programming.


167-176: Timestamp stored as string for bigint column compatibility.

Using Date.now().toString() aligns with TypeORM's handling of bigint columns as strings to prevent JavaScript precision loss.


351-374: Batch status update is efficient for bulk operations.

Using TypeORM's In() operator for batch updates is the correct approach. The early return for empty arrays prevents unnecessary database calls.


389-410: getByStatus provides efficient status-based querying.

The method correctly applies optional limit with take and maintains consistent ordering by timestamp and hash.


453-473: deleteByHashes correctly handles bulk deletion.

Using In() for batch delete with proper affected count handling.


482-506: cleanupByStatus uses query builder for parameterized deletion.

The cutoff timestamp is correctly converted to string for bigint column comparison. The parameterized query prevents SQL injection.


579-588: Cleanup method updated for string timestamp and uses status constant.

Using L2PS_STATUS.PROCESSED instead of a magic string improves maintainability.

src/libs/l2ps/L2PSBatchAggregator.ts (15)

1-8: LGTM!

Imports are well-organized and all appear to be used appropriately throughout the class.


15-26: LGTM!

Well-documented interface with clear field descriptions.


46-97: LGTM!

Well-structured singleton with appropriate constants and comprehensive statistics tracking.


107-138: LGTM!

Correct lifecycle management with proper guard against double-start and statistics reset.


147-174: LGTM!

Graceful shutdown with timeout-bounded wait for ongoing operations.


182-215: LGTM!

Solid reentrancy protection with proper error handling and cleanup in finally block.


226-253: LGTM!

Clear aggregation logic with appropriate limits and error propagation.


261-272: LGTM!

Simple and clear grouping logic.


280-317: LGTM!

Correct batch processing with proper error handling and status updates.


453-468: LGTM!

Cleanup logic correctly isolated with non-blocking error handling.


475-481: LGTM!

Running average calculation is mathematically correct.


486-503: LGTM!

Comprehensive statistics logging with safe division handling.


538-549: LGTM!

Clean debug/test utility with appropriate guards.


369-375: Verify getSharedState is invoked correctly at line 369.

Line 369 assigns getSharedState to sharedState without parentheses. If getSharedState is a function (rather than a getter or direct export), this will assign the function itself instead of the SharedState instance, causing line 372's property access to fail. Confirm the export type and apply parentheses if needed.


414-427: Type assertion as any indicates potential type mismatch.

The as any cast at line 427 suggests the batchTransaction object doesn't match the expected Transaction type. This could mask type errors and lead to runtime issues if the mempool expects different fields. Verify that all required fields from the Transaction interface are present in the batchTransaction object, or define a proper type instead of using as any.

Comment on lines 1 to 4
 import { FindManyOptions, In, Repository } from "typeorm"
 import Datasource from "@/model/datasource"
 import { L2PSMempoolTx } from "@/model/entities/L2PSMempool"
-import { L2PSTransaction } from "@kynesyslabs/demosdk/types"
+import type { L2PSTransaction } from "@kynesyslabs/demosdk/types"

⚠️ Potential issue | 🟡 Minor

Import inconsistency for L2PSTransaction type.

This file imports L2PSTransaction from @kynesyslabs/demosdk/types, but the entity file (L2PSMempool.ts) imports it from @/types/sdk-workarounds. This could cause type mismatches if the definitions differ.

Consider using the same import source for consistency:

-import type { L2PSTransaction } from "@kynesyslabs/demosdk/types"
+import type { L2PSTransaction } from "@/types/sdk-workarounds"
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
-import { FindManyOptions, In, Repository } from "typeorm"
-import Datasource from "@/model/datasource"
-import { L2PSMempoolTx } from "@/model/entities/L2PSMempool"
-import { L2PSTransaction } from "@kynesyslabs/demosdk/types"
-import type { L2PSTransaction } from "@kynesyslabs/demosdk/types"
+import { FindManyOptions, In, Repository } from "typeorm"
+import Datasource from "@/model/datasource"
+import { L2PSMempoolTx } from "@/model/entities/L2PSMempool"
+import type { L2PSTransaction } from "@/types/sdk-workarounds"
🤖 Prompt for AI Agents
In src/libs/blockchain/l2ps_mempool.ts around lines 1-4, the L2PSTransaction
type is imported from @kynesyslabs/demosdk/types while the L2PSMempool entity
imports it from @/types/sdk-workarounds; replace the import so both files use
the same source (preferably change this file to import L2PSTransaction from
"@/types/sdk-workarounds" to match the entity), update any import paths
accordingly, and run type checks to ensure no mismatches remain.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🧹 Nitpick comments (6)
src/model/entities/L2PSMempool.ts (1)

14-17: Composite L2PS indexes now follow correct TypeORM usage; review need for separate single-column indexes.

Defining the composite indexes at the class level with ["l2ps_uid", ...] correctly models common access patterns like (l2ps_uid, status) or (l2ps_uid, sequence_number) and fixes the earlier misuse of multi-column @Index on a property.

You now also have single-column indexes on timestamp and block_number in addition to the new composites. Depending on your query mix, some of these might be redundant and add write/storage overhead. It's worth checking which patterns you actually use (e.g., WHERE l2ps_uid = ? AND status = ? vs WHERE timestamp > ?) and pruning any indexes that don't improve your query plans.
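A sketch of the class-level composite declaration (columns reconstructed from the review, not copied from the entity):

import { Column, Entity, Index, PrimaryColumn } from "typeorm"

// Composite indexes are declared on the class, matching the access patterns above.
@Entity()
@Index(["l2ps_uid", "status"])
@Index(["l2ps_uid", "sequence_number"])
export class L2PSMempoolTxSketch {
    @PrimaryColumn()
    hash: string

    @Column()
    l2ps_uid: string

    @Column()
    status: string

    @Column("bigint", { default: "0" })
    sequence_number: string
}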

src/libs/network/endpointHandlers.ts (1)

57-58: Track the temporary SubnetPayload alias

The temporary SubnetPayload = any alias is fine as a bridge, but once the proper type is exported from the SDK, this should be replaced (or removed if unused) to regain type safety on subnet/L2PS payloads.

src/libs/l2ps/parallelNetworks.ts (1)

57-77: hexFileToBytes could align better with modern crypto patterns by returning Uint8Array

The helper's validation is solid, but it currently returns a forge byte string (via forge.util.hexToBytes), which is a legacy JavaScript pattern. While this works with L2PS's flexible parameter types (it accepts string | Buffer | Uint8Array for IVs), adopting Uint8Array would provide better type safety and align with modern cryptographic libraries. Consider using a utility like ethers.utils.arrayify() or manual conversion (new Uint8Array(forge.util.hexToBytes(cleaned).split('').map(c => c.charCodeAt(0)))) to return Uint8Array instead.
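A dependency-free helper along those lines (a sketch, not the module's current API):

// Convert a validated hex string straight to Uint8Array, skipping forge byte strings.
function hexToUint8Array(hex: string): Uint8Array {
    const cleaned = hex.trim().replace(/^0x/, "")
    if (cleaned.length % 2 !== 0 || /[^0-9a-fA-F]/.test(cleaned)) {
        throw new Error("Invalid hex input")
    }
    const bytes = new Uint8Array(cleaned.length / 2)
    for (let i = 0; i < bytes.length; i++) {
        bytes[i] = parseInt(cleaned.slice(i * 2, i * 2 + 2), 16)
    }
    return bytes
}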

src/libs/network/routines/transactions/handleL2PS.ts (3)

1-10: Import change to type-only L2PSTransaction looks good; consider minor consistency tweak

The new import type { L2PSTransaction } aligns with TS best practices and avoids pulling runtime code for a type-only usage. For consistency and slightly leaner bundles, you might also consider switching RPCResponse to a type-only import if it’s not used as a runtime value.


23-88: Validation, decryption, and signature verification flow is solid; consider tightening decryptedTx typing

The staged checks (L2PS UID presence, L2PS instance lookup, guarded decryptTx, and Transaction.confirmTx) give a clear and defensible path for rejecting malformed or unauthorised L2PS traffic. One small improvement would be to give decryptedTx an explicit transaction/interface type instead of leaving it implicitly any, so that accesses like decryptedTx.content.from and decryptedTx.hash are type-checked at compile time.


91-147: Improve duplicate handling and mempool write robustness; normalize error response shape

A few related points in this section:

  • There's a TOCTOU window between existsByOriginalHash(originalHash) and the subsequent addTransaction(...): under concurrent submissions of the same L2PS payload, both calls can see existsByOriginalHash === false and then race on insert. If the DB layer enforces uniqueness on original_hash, it would be safer to rely on that and map the "duplicate insert" failure from addTransaction back to a 409, rather than pre-checking in application code.

  • addTransaction isn't wrapped in a try/catch, unlike existsByOriginalHash. If it can throw (e.g., DB/network error), this will bubble up as an unhandled rejection instead of being surfaced as a structured RPC error. Mirroring the existsByOriginalHash pattern here would make the failure mode clearer. For example:

-const mempoolResult = await L2PSMempool.addTransaction(
-    l2psUid,
-    l2psTx,
-    originalHash,
-    "processed",
-)
+let mempoolResult
+try {
+    mempoolResult = await L2PSMempool.addTransaction(
+        l2psUid,
+        l2psTx,
+        originalHash,
+        "processed",
+    )
+} catch (error) {
+    response.result = 500
+    response.response = false
+    response.extra = `Mempool insert failed: ${error instanceof Error ? error.message : "Unknown error"}`
+    return response
+}

  • For the "already processed" path (lines 114-118), response.response is a string, whereas other error branches use false and reserve the message for extra. If downstream consumers assume a boolean on failures, it might be worth standardizing this branch to match the others, keeping the human-readable message in extra.

Overall behaviour is correct, but tightening these points will make the handler more robust and predictable under load.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

Disabled knowledge base sources:

  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 32a0a8c9d313baf4e64693c2441d6c8867beae41 and a1082d18ef9d0ba4ec221000c9d7e0dc0357fec2.

📒 Files selected for processing (5)
  • src/features/InstantMessagingProtocol/signalingServer/signalingServer.ts (2 hunks)
  • src/libs/l2ps/parallelNetworks.ts (4 hunks)
  • src/libs/network/endpointHandlers.ts (4 hunks)
  • src/libs/network/routines/transactions/handleL2PS.ts (1 hunks)
  • src/model/entities/L2PSMempool.ts (3 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • src/features/InstantMessagingProtocol/signalingServer/signalingServer.ts
🧰 Additional context used
🧬 Code graph analysis (2)
src/libs/network/endpointHandlers.ts (1)
src/libs/crypto/hashing.ts (1)
  • Hashing (15-26)
src/libs/l2ps/parallelNetworks.ts (1)
src/utilities/sharedState.ts (1)
  • getSharedState (275-277)
🔇 Additional comments (7)
src/model/entities/L2PSMempool.ts (1)

2-2: Type-only import for L2PSTransaction is a good fit here.

Using import type keeps this entity free of unnecessary runtime dependencies while still giving you strong typing on encrypted_tx, which is explicitly mapped as "jsonb" in TypeORM.

src/libs/network/endpointHandlers.ts (3)

19-21: Type-only import for L2PSTransaction is appropriate here

Separating the value import (Transaction) from the type-only import (L2PSTransaction) keeps the runtime bundle clean and matches how the type is used only in casts/annotations in this file.

126-132: GCREdit hashing refactor looks correct

Precomputing gcrEditsString and txGcrEditsString and hashing those keeps the two hash computations aligned and reduces the chance of subtle differences in serialization. This preserves the previous comparison semantics while being clearer.

330-348: L2PS encrypted tx case correctly bypasses main mempool

The "l2psEncryptedTx" case delegates to handleSubnetTx/handleL2PS, returns early on result === 200, and marks result.success = false otherwise. That ensures successful L2PS-encrypted transactions do not fall through into the main mempool path, while failures still skip mempool insertion because result.success is false.

src/libs/l2ps/parallelNetworks.ts (3)

13-15: Consistent SDK type usage

Importing Transaction/SigningAlgorithm as values and L2PSTransaction as a type-only import matches how they are used in this module and keeps runtime imports minimal.

185-191: Hex-based key/IV loading is safer and more explicit

Reading the private key and IV as hex, then normalizing and converting via hexFileToBytes, is a clear improvement over treating them as generic UTF-8 strings and adds robust validation for misformatted files.

271-288: Verify that encryptedTx.content.from is set to the node's public key at call sites

The async encryption and signing flow appears coherent: awaiting l2ps.encryptTx and then signing JSON.stringify(encryptedTx.content) with the node's signingAlgorithm provides a consistent approach. Confirm that call sites set content.from to this node's public key so the verification logic in decryptTransaction correctly validates that the node signed the payload.


Comment on lines +34 to +40
/**
 * Sequence number within the L2PS network for ordering
 * Auto-incremented per l2ps_uid to ensure deterministic transaction order
 * @example 1, 2, 3... or timestamp-based sequence like 1697049600, 1697049601...
 */
@Column("bigint", { default: "0" })
sequence_number: string

⚠️ Potential issue | 🟡 Minor

Clarify sequence_number semantics — sequencing is app-layer, not database-level.

The column is a plain bigint with default "0" and no database-side sequence or per-UID auto-incrementing. The actual sequencing is enforced in the L2PS batch logic (L2PSMempoolService.insertMempool() calculates the next sequence as max(existing) + 1 per l2ps_uid), so update the comment to reflect this app-layer behavior: "Monotonic sequence assigned per l2ps_uid by the L2PS batch logic; stored as bigint (surfaced as string in TypeScript for safe ordering)."
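That app-layer sequencing, sketched (the real logic lives in L2PSMempoolService.insertMempool(); names here are assumptions):

import { Repository } from "typeorm"
import { L2PSMempoolTx } from "@/model/entities/L2PSMempool"

declare const repo: Repository<L2PSMempoolTx>

// Next per-UID sequence = max(existing) + 1, computed in the application layer.
async function nextSequenceNumber(l2psUid: string): Promise<string> {
    const row = await repo
        .createQueryBuilder("tx")
        .select("MAX(CAST(tx.sequence_number AS BIGINT))", "max")
        .where("tx.l2ps_uid = :uid", { uid: l2psUid })
        .getRawOne<{ max: string | null }>()
    return (BigInt(row?.max ?? "0") + 1n).toString() // bigint column, string in TS
}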

🤖 Prompt for AI Agents
In src/model/entities/L2PSMempool.ts around lines 34 to 40, the comment for
sequence_number incorrectly implies database-side auto-incrementing; update the
comment to state that sequencing is handled at the application layer (L2PS batch
logic via L2PSMempoolService.insertMempool(), which computes next sequence as
max(existing)+1 per l2ps_uid), and that the column is stored as a bigint with a
TypeScript string type for safe ordering/representation.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 9

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (3)
src/features/activitypub/fedistore.ts (2)

51-60: Critical schema mismatch: INSERT columns don't match table definitions.

The SQL statement attempts to insert into (id, data) columns, but none of the schemas in collectionSchemas define a data column. For example, the actors table has columns id, type, name, inbox, outbox, followers, following, liked, not id, data.

This will cause runtime SQL errors: "table X has no column named data".

The CRUD operations need to be updated to match the multi-column schemas defined in lines 6-23, or the schemas should be changed to use a two-column (id, data) JSON-blob approach.

Option 1: Update CRUD to match multi-column schemas

For example, for actors collection:

-    const sql = `INSERT INTO ${collection}(id, data) VALUES(?, ?)`
-    this.db.run(sql, [item.id, JSON.stringify(item)], function (err) {
+    // Dynamically build INSERT based on item properties
+    const columns = Object.keys(item).join(', ')
+    const placeholders = Object.keys(item).map(() => '?').join(', ')
+    const values = Object.values(item)
+    const sql = `INSERT OR REPLACE INTO ${collection}(${columns}) VALUES(${placeholders})`
+    this.db.run(sql, values, function (err) {

Option 2: Change schemas to use JSON blob storage

     private readonly collectionSchemas = {
-        actors: "id TEXT PRIMARY KEY, type TEXT, name TEXT, inbox TEXT, outbox TEXT, followers TEXT, following TEXT, liked TEXT",
+        actors: "id TEXT PRIMARY KEY, data TEXT",
-        objects: "id TEXT PRIMARY KEY, type TEXT, attributedTo TEXT, content TEXT",
+        objects: "id TEXT PRIMARY KEY, data TEXT",
         // ... apply to all collections
     }

62-77: Critical schema mismatch: SELECT returns incompatible structure.

The SELECT * statement returns all columns defined in the schema (e.g., for actors: id, type, name, inbox, outbox, followers, following, liked), but the original implementation likely expected a data column containing a JSON blob that could be parsed.

The current code on line 71 just assigns row directly without any transformation, which will return an object with multiple properties instead of the expected structure.

This needs to be aligned with the chosen schema approach from the saveItem fix. If using multi-column schemas, the callback should receive the structured object. If using JSON blob storage, the SELECT should parse the data column.

src/libs/network/manageNodeCall.ts (1)

27-32: Duplicate imports detected.

Twitter, Tweet, and Mempool are imported twice. This appears to be a merge artifact.

Apply this diff to remove the duplicates:

 import { Twitter } from "../identity/tools/twitter"
 import { Tweet } from "@kynesyslabs/demosdk/types"
-import { uint8ArrayToHex } from "@kynesyslabs/demosdk/encryption"
-import { Twitter } from "../identity/tools/twitter"
-import { Tweet } from "@kynesyslabs/demosdk/types"
-import Mempool from "../blockchain/mempool_v2"
+import { uint8ArrayToHex } from "@kynesyslabs/demosdk/encryption"
♻️ Duplicate comments (1)
src/libs/blockchain/l2ps_mempool.ts (1)

1-4: Import inconsistency for L2PSTransaction type.

This file imports L2PSTransaction from @kynesyslabs/demosdk/types, but the entity file (L2PSMempool.ts) imports it from @/types/sdk-workarounds. This could cause type mismatches if the definitions differ.

Consider using the same import source for consistency:

-import type { L2PSTransaction } from "@kynesyslabs/demosdk/types"
+import type { L2PSTransaction } from "@/types/sdk-workarounds"
🧹 Nitpick comments (15)
.gitignore (1)

184-204: Remove duplicate .gitignore entries to reduce clutter.

Several entries in the newly-added section are already present earlier in the file:

  • Line 185: temp (duplicate of line 166)
  • Line 186: STORAGE_PROGRAMS_SPEC.md (duplicate of line 172)
  • Line 187: local_tests/ (similar to line 135 local_tests without trailing slash)
  • Line 188: claudedocs (duplicate of line 132)
  • Line 190: docs/storage_features (already present before the new additions)
  • Line 204: REVIEWER_QUESTIONS_ANSWERED.md (duplicate of line 200)

Apply this diff to clean up the duplicates:

-docs/storage_features
-temp
-STORAGE_PROGRAMS_SPEC.md
-local_tests/
-claudedocs
+
# Remove lines 184-204 entirely; all entries already exist in the file above
src/utilities/validateUint8Array.ts (2)

13-27: Strengthen value validation to truly enforce a “Uint8Array-like” shape

The numeric-keyed record checks are a good improvement, but the current validation still allows values such as negatives, numbers >255, and non-integers (e.g. 1.75). Buffer.from(sortedValues) will silently coerce these, which is surprising for a function named validateIfUint8Array.

Consider tightening the checks along these lines:

  • Require integer values: Number.isInteger(val).
  • Require byte range: val >= 0 && val <= 255.
  • Optionally, require non‑negative integer keys (to avoid -1, etc.).

Example diff sketch:

-        const allValuesNumbers = entries.every(([, val]) => typeof val === "number")
+        const allValuesNumbers = entries.every(([, val]) =>
+            typeof val === "number" &&
+            Number.isInteger(val) &&
+            val >= 0 &&
+            val <= 255
+        )

This would make the “validated → Buffer” path much closer to the semantics of an actual Uint8Array and avoid silent truncation/modulo behavior.


14-16: Minor: avoid repeated Number() conversions and consider non-negative key constraint

You convert string keys to numbers twice: once in allKeysNumericIntegers and again in the sort comparator. To shave a bit of overhead and keep behavior consistent, you could:

  • Either store the numeric key alongside each entry during validation, or
  • Reuse the same conversion logic in a helper, ensuring identical rules for both validation and sort.

Also, if negative indices are never meaningful in your context, tightening the key check to num >= 0 would better match typical index semantics.

This is minor, but would make intent clearer and avoid subtle drift between “valid key” and “sort key” behavior.

Also applies to: 24-26

src/libs/network/endpointHandlers.ts (2)

57-58: Track the temporary SubnetPayload type or use a more specific type.

The any type bypasses TypeScript's type checking. Consider either:

  1. Using a more specific temporary type based on usage (e.g., Record<string, unknown>)
  2. Creating a GitHub issue to track proper export from the SDK
  3. Adding a TODO comment with an issue number

Apply this diff:

-// TEMPORARY: Define SubnetPayload until proper export is available
-type SubnetPayload = any
+// TODO(#XXX): Replace with proper export from @kynesyslabs/demosdk/types
+type SubnetPayload = {
+    l2ps_uid?: string;
+    encrypted_payload?: unknown;
+    [key: string]: unknown;
+}

330-377: Consider extracting L2PS validation logic into a helper function.

The validation checks follow a consistent pattern and could be refactored into a dedicated validation function to improve maintainability and testability. This would also make it easier to add new validation rules in the future.

Example refactor:

// Add new helper function at top level or in a utilities file
function validateL2PSTransaction(tx: Transaction): { valid: boolean; error?: string } {
    if (!tx.signature?.data) {
        return { valid: false, error: "L2PS transaction requires valid signature" }
    }
    
    const l2psPayload = tx.content?.data?.[1]
    if (!l2psPayload || typeof l2psPayload !== "object") {
        return { valid: false, error: "Invalid L2PS payload structure" }
    }
    
    const senderAddress = tx.content?.from || tx.content?.from_ed25519_address
    if (!senderAddress) {
        return { valid: false, error: "L2PS transaction requires sender address" }
    }
    
    return { valid: true }
}

// Then in the switch case:
case "l2psEncryptedTx": {
    console.log("[handleExecuteTransaction] Processing L2PS Encrypted Tx")
    
    const validation = validateL2PSTransaction(tx)
    if (!validation.valid) {
        log.error(`[handleExecuteTransaction] L2PS tx rejected: ${validation.error}`)
        result.success = false
        result.response = { error: validation.error }
        break
    }
    
    const l2psResult = await ServerHandlers.handleSubnetTx(tx as L2PSTransaction)
    // ... rest of logic
}
src/libs/blockchain/l2ps_mempool.ts (1)

173-179: Document the timestamp-as-string pattern or consider proper bigint handling.

Storing Date.now().toString() for bigint column compatibility is a workaround that requires CAST operations in queries (see lines 491-501). While functional, this approach:

  • Requires explicit CAST in every timestamp comparison
  • Has performance implications for indexed queries
  • Could be error-prone if CAST is forgotten

Consider either:

  1. Using a proper TypeORM bigint transformer:
@Column({
    type: "bigint",
    transformer: {
        to: (value: number) => value?.toString(),
        from: (value: string) => value ? parseInt(value, 10) : 0
    }
})
timestamp: number
  2. If keeping current approach, add a prominent comment in the entity definition explaining why string storage is used and referencing the CAST requirements.
src/libs/network/manageNodeCall.ts (1)

274-290: Add input validation for data.domain and use consistent logging.

The resolveUdDomain case should validate that data.domain is provided before calling the resolver, similar to other cases in this file. Also, use log.error for consistency with the rest of the codebase.

 case "resolveUdDomain": {
+    if (!data.domain) {
+        response.result = 400
+        response.response = "No domain specified"
+        break
+    }
     try {
         const res = await UDIdentityManager.resolveUDDomain(data.domain)

         if (res) {
             response.response = res
         }
     } catch (error) {
-        console.error(error)
+        log.error("[UD] Failed to resolve domain: " + error)
         response.result = 400
         response.response = {
             success: false,
             error: "Failed to resolve web3 domain",
         }
     }
     break
 }
src/libs/peer/routines/getPeerIdentity.ts (1)

232-238: Backward compatibility with security trade-off documented.

The warning log for peers without challenge signatures is appropriate. However, ensure this backward compatibility mode is eventually deprecated once all peers support challenge-response.

Consider adding a configuration flag to enforce challenge-response in production:

// Future: Add config to enforce challenge-response
// const REQUIRE_CHALLENGE_RESPONSE = process.env.REQUIRE_PEER_CHALLENGE === 'true'
src/features/incentive/PointSystem.ts (1)

1041-1052: Duplicate domain check returns 200 instead of 400.

When a domain is already linked, the response returns result: 200 with a message stating points were already awarded. This is inconsistent with other methods (e.g., awardWeb3WalletPoints returns 400 for already linked wallets at lines 339-342).

Consider returning 400 for consistency with wallet linking behavior, or document why UD domains use a different pattern:

             if (domainAlreadyLinked) {
                 return {
-                    result: 200,
+                    result: 400,
                     response: {
                         pointsAwarded: 0,
                         totalPoints: userPointsWithIdentities.totalPoints,
                         message: "UD domain points already awarded",
                     },
src/libs/blockchain/gcr/gcr_routines/udIdentityManager.ts (3)

64-65: Remove empty constructor or add initialization logic.

The empty constructor serves no purpose since the class only has static methods. Consider removing it entirely or making the class a namespace/module pattern.

 export class UDIdentityManager {
-    constructor() {}
-
     /**
      * Convert EVM domain resolution to unified format

396-399: Simplify isOwner boolean assignment.

The double negation !! is redundant when the result of === is already boolean.

-            const isOwner = !!(
-                signingAddress ===
-                (resolution.metadata[signatureType] || {}).owner
-            )
+            const isOwner =
+                signingAddress === resolution.metadata[signatureType]?.owner

690-697: Return type any reduces type safety.

The getIdentities method returns Promise<any> which loses type information. Consider using a more specific type or generic.

-    static async getIdentities(address: string, key?: string): Promise<any> {
+    static async getIdentities(
+        address: string,
+        key?: string,
+    ): Promise<Record<string, unknown> | unknown> {
         const gcr = await ensureGCRForUser(address)
         if (key) {
-            return gcr.identities[key]
+            return gcr.identities[key] as unknown
         }

-        return gcr.identities
+        return gcr.identities as Record<string, unknown>
     }
src/libs/blockchain/gcr/gcr_routines/udSolanaResolverHelper.ts (3)

308-311: Unsafe type assertion bypasses type checking.

The as any assertion on the IDL could mask type mismatches between the IDL and the UnsSol type definition.

-      this.program = new Program(UnsSolIdl as any,provider) as Program<UnsSol>
+      this.program = new Program<UnsSol>(UnsSolIdl as UnsSol, provider)

If the IDL types don't align, consider regenerating the TypeScript types from the IDL using Anchor's type generation.


364-375: Redundant RPC call when data may already be available.

Line 366 calls connection.getAccountInfo() again for the same address that was just fetched on line 348-350. The data from tokenAccountInfo.value should already contain the raw account data.

       // Fallback: parse raw token account data
-      if (tokenAccountInfo.value && "data" in tokenAccountInfo.value.data) {
-        const accountInfo = await connection.getAccountInfo(tokenAccounts.value[0].address)
-
-        if (accountInfo && accountInfo.data.length >= 64) {
+      if (tokenAccountInfo.value && Buffer.isBuffer(tokenAccountInfo.value.data)) {
+        const data = tokenAccountInfo.value.data
+        if (data.length >= 64) {
           // SPL Token account layout: mint (32 bytes) + owner (32 bytes) + ...
-          const ownerBytes = accountInfo.data.slice(32, 64)
+          const ownerBytes = data.slice(32, 64)
           const owner = new PublicKey(ownerBytes).toString()
           log.debug(`Found domain owner via raw data: ${owner}`)
           return owner
         }
       }

260-270: Validation occurs after BigInt conversion.

The validation on line 263 checks if bigIntRecordVersion < 0 but the conversion already happened on line 260. If recordVersion is invalid (e.g., NaN, negative), the BigInt conversion itself could throw. Consider validating before conversion.

   private deriveRecordPda(
     recordVersion: number,
     sldPda: PublicKey,
     recordKey: string,
     version = this.defaultVersion,
   ): PublicKey {
-    const bigIntRecordVersion = BigInt(recordVersion)
-
     // Validate recordVersion before BigInt conversion to prevent TypeError
-    if (bigIntRecordVersion < BigInt(0)) {
+    if (!Number.isInteger(recordVersion) || recordVersion < 0) {
       throw new Error(
-        `Invalid record version: ${bigIntRecordVersion}. Must be a non-negative integer.`,
+        `Invalid record version: ${recordVersion}. Must be a non-negative integer.`,
       )
     }
+    
+    const bigIntRecordVersion = BigInt(recordVersion)

     const versionBuffer = Buffer.alloc(8)
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

Disabled knowledge base sources:

  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between a1082d1 and 11f2310.

📒 Files selected for processing (36)
  • .gitignore (2 hunks)
  • .serena/memories/_index.md (1 hunks)
  • .serena/memories/session_ud_ownership_verification_2025_10_21.md (1 hunks)
  • .serena/memories/session_ud_points_implementation_2025_01_31.md (1 hunks)
  • .serena/memories/ud_architecture_patterns.md (1 hunks)
  • .serena/memories/ud_integration_complete.md (1 hunks)
  • .serena/memories/ud_phase5_complete.md (1 hunks)
  • .serena/memories/ud_phases_tracking.md (1 hunks)
  • .serena/memories/ud_security_patterns.md (1 hunks)
  • .serena/memories/ud_technical_reference.md (1 hunks)
  • package.json (3 hunks)
  • src/features/activitypub/fedistore.ts (4 hunks)
  • src/features/activitypub/fediverse.ts (1 hunks)
  • src/features/incentive/PointSystem.ts (6 hunks)
  • src/features/mcp/MCPServer.ts (2 hunks)
  • src/features/multichain/routines/XMParser.ts (1 hunks)
  • src/libs/blockchain/UDTypes/uns_sol.json (1 hunks)
  • src/libs/blockchain/UDTypes/uns_sol.ts (1 hunks)
  • src/libs/blockchain/gcr/gcr_routines/GCRIdentityRoutines.ts (5 hunks)
  • src/libs/blockchain/gcr/gcr_routines/IncentiveManager.ts (2 hunks)
  • src/libs/blockchain/gcr/gcr_routines/identityManager.ts (1 hunks)
  • src/libs/blockchain/gcr/gcr_routines/signatureDetector.ts (1 hunks)
  • src/libs/blockchain/gcr/gcr_routines/udIdentityManager.ts (1 hunks)
  • src/libs/blockchain/gcr/gcr_routines/udSolanaResolverHelper.ts (1 hunks)
  • src/libs/blockchain/gcr/handleGCR.ts (1 hunks)
  • src/libs/blockchain/l2ps_mempool.ts (6 hunks)
  • src/libs/l2ps/L2PSBatchAggregator.ts (1 hunks)
  • src/libs/network/endpointHandlers.ts (4 hunks)
  • src/libs/network/manageNodeCall.ts (3 hunks)
  • src/libs/network/routines/transactions/handleIdentityRequest.ts (3 hunks)
  • src/libs/network/server_rpc.ts (1 hunks)
  • src/libs/peer/routines/getPeerIdentity.ts (3 hunks)
  • src/libs/utils/demostdlib/groundControl.ts (1 hunks)
  • src/model/entities/GCRv2/GCR_Main.ts (1 hunks)
  • src/model/entities/types/IdentityTypes.ts (3 hunks)
  • src/utilities/validateUint8Array.ts (1 hunks)
✅ Files skipped from review due to trivial changes (1)
  • .serena/memories/ud_phases_tracking.md
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-10-10T22:57:18.166Z
Learnt from: tcsenpai
Repo: kynesyslabs/node PR: 477
File: .serena/memories/storage_programs_complete.md:64-82
Timestamp: 2025-10-10T22:57:18.166Z
Learning: Files in the `.serena/memories/` directory should be left alone - do not suggest linting fixes, formatting changes, or other modifications to files in this directory.

Applied to files:

  • .serena/memories/_index.md
🧬 Code graph analysis (7)
src/libs/peer/routines/getPeerIdentity.ts (3)
src/libs/crypto/hashing.ts (1)
  • Hashing (15-26)
src/libs/peer/Peer.ts (1)
  • Peer (15-346)
src/libs/network/manageNodeCall.ts (1)
  • NodeCall (37-41)
src/features/activitypub/fedistore.ts (1)
src/features/activitypub/feditypes.ts (1)
  • collection (54-54)
src/libs/blockchain/gcr/gcr_routines/GCRIdentityRoutines.ts (5)
src/libs/blockchain/gcr/handleGCR.ts (1)
  • GCRResult (73-77)
src/libs/blockchain/gcr/gcr_routines/ensureGCRForUser.ts (1)
  • ensureGCRForUser (8-33)
src/libs/blockchain/gcr/gcr_routines/udSolanaResolverHelper.ts (1)
  • domainExists (691-702)
src/model/entities/types/IdentityTypes.ts (1)
  • SavedUdIdentity (43-53)
src/libs/blockchain/gcr/gcr_routines/IncentiveManager.ts (1)
  • IncentiveManager (9-164)
src/libs/network/endpointHandlers.ts (1)
src/libs/crypto/hashing.ts (1)
  • Hashing (15-26)
src/libs/network/manageNodeCall.ts (1)
src/libs/blockchain/gcr/gcr_routines/udIdentityManager.ts (1)
  • UDIdentityManager (64-698)
src/libs/blockchain/l2ps_mempool.ts (2)
src/libs/blockchain/transaction.ts (1)
  • hash (106-114)
src/utilities/logger.ts (1)
  • error (125-132)
src/libs/blockchain/gcr/gcr_routines/udSolanaResolverHelper.ts (3)
src/libs/blockchain/UDTypes/uns_sol.ts (1)
  • UnsSol (7-2403)
src/utilities/cli_libraries/wallet.ts (1)
  • Wallet (10-134)
src/utilities/logger.ts (1)
  • error (125-132)
🪛 LanguageTool
.serena/memories/ud_integration_complete.md

[grammar] ~97-~97: Ensure spelling is correct
Context: ...s award/deduct methods - GCR_Main.ts: udDomains breakdown field Created: - `signat...

(QB_NEW_EN_ORTHOGRAPHY_ERROR_IDS_1)

🪛 markdownlint-cli2 (0.18.1)
.serena/memories/ud_architecture_patterns.md

6-6: Fenced code blocks should have a language specified

(MD040, fenced-code-language)


133-133: Fenced code blocks should have a language specified

(MD040, fenced-code-language)


141-141: Fenced code blocks should have a language specified

(MD040, fenced-code-language)

🔇 Additional comments (44)
src/features/multichain/routines/XMParser.ts (1)

40-42: Unable to complete verification. Repository access required.

src/utilities/validateUint8Array.ts (1)

2-5: Early-returning arrays/typed arrays changes behavior; confirm callers don’t rely on normalization

The new Array.isArray(input) || ArrayBuffer.isView(input) path now passes raw arrays, Buffer, Uint8Array, DataView, etc. straight through instead of normalizing them via the object-path logic. If any callers previously relied on “JSON-shaped” inputs (e.g. {"0":1,"1":2}) and plain arrays yielding a Uint8Array/Buffer, this is a behavior change: arrays will now remain number[] and fail instanceof Uint8Array checks.

Given how widely this kind of utility tends to be reused, it’s worth double-checking a few representative call sites to ensure they still behave as expected with the new early-exit semantics, especially around plain arrays and DataView.

src/features/activitypub/fedistore.ts (4)

5-23: Schema definitions look good, but verify CRUD alignment.

The centralized schema registry is a good refactoring approach. Each collection has well-defined columns matching ActivityPub entity requirements.


33-36: Good use of single source of truth for validation.

Initializing validCollections from collectionSchemas keys ensures the whitelist stays in sync with defined schemas. The initialization order is correct since this executes synchronously before any async database callbacks.


38-42: Effective whitelist validation for security.

The validation method provides good protection against invalid collection names and potential SQL injection attacks on table names.


44-49: Table creation uses centralized schemas correctly.

The iteration over collectionSchemas properly applies the defined column structures for each collection.

src/libs/utils/demostdlib/groundControl.ts (1)

72-75: Insufficient path traversal protection—can be bypassed with multiple techniques.

The current validation only checks for .. substrings, which is easily circumvented:

  • Absolute paths: /etc/passwd, /var/log/secrets bypass the check entirely
  • URL encoding: %2e%2e, %2e%2e%2f when decoded by filesystem APIs
  • Path normalization: ....//, /./, or symbolic links
  • Case sensitivity: On case-insensitive filesystems, variations may work

An attacker controlling the keys object could read arbitrary files from the filesystem.

Implement robust path traversal protection using path.resolve() with directory whitelist validation, normalizing paths before access, and validating that resolved paths remain within an allowed certificate directory.
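A containment check along those lines (a sketch; the base-directory argument is an assumption):

import path from "node:path"

// Resolve a user-supplied path and reject anything that escapes the base directory.
function resolveWithinDir(baseDir: string, userPath: string): string {
    const base = path.resolve(baseDir)
    const resolved = path.resolve(base, userPath)
    if (resolved !== base && !resolved.startsWith(base + path.sep)) {
        throw new Error(`Path escapes allowed directory: ${userPath}`)
    }
    return resolved
}

This neutralizes ../ sequences, absolute paths, and normalization tricks in one place, since path.resolve collapses them before the prefix check.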

src/features/activitypub/fediverse.ts (1)

2-7: LGTM! Good security enhancement.

Adding Helmet middleware provides sensible security defaults (X-Content-Type-Options, X-Frame-Options, etc.). Placement before express.json() is correct.

src/libs/network/server_rpc.ts (1)

338-341: LGTM!

This is a minor formatting change with no functional impact. The response payload remains unchanged.

package.json (3)

99-99: LGTM on tweetnacl addition.

tweetnacl ^1.0.3 is a stable, well-audited cryptographic library commonly used for Ed25519 signature verification, aligning with the UD Solana verification flow described in the architecture patterns.


10-13: LGTM! Improved linting and type-checking scripts.

The added scripts provide better developer tooling:

  • Lint commands now properly ignore test directories.
  • Type-check scripts using both Bun and tsc offer flexibility for CI/local workflows.

51-75: Address bs58 v6 ESM migration and verify demosdk version.

  1. bs58 ^6.0.0: This major version switched to ESM-only and requires migration. Ensure your project is configured for ESM: add "type": "module" to package.json, use .mjs extensions, or configure TypeScript with compatible module settings (e.g., "module": "esnext"). Replace require() with ESM import statements.

  2. @kynesyslabs/demosdk: The version specified (^2.5.6) does not appear to exist in the npm registry (latest available is v2.3.22). Verify the correct version number in package.json.

  3. @solana/web3.js ^1.98.4: This is the current stable version as of December 2025; no action needed.

.serena/memories/_index.md (1)

1-40: No review comments for this file.

Based on learnings, files in the .serena/memories/ directory should be left alone.

.serena/memories/ud_architecture_patterns.md (1)

1-145: No review comments for this file.

Based on learnings, files in the .serena/memories/ directory should be left alone. The static analysis hints regarding code block language specifiers are not actionable for this directory.

src/libs/network/endpointHandlers.ts (1)

126-132: LGTM - Clear refactor for deterministic hashing.

Creating intermediate string variables before hashing improves clarity and makes the hashing inputs explicit. This ensures consistent string representation and makes debugging easier.

src/libs/blockchain/l2ps_mempool.ts (2)

16-31: Excellent TypeScript pattern for status lifecycle.

The as const assertion combined with the derived L2PSStatus type provides strong type safety while maintaining string values for database storage. The lifecycle documentation in comments is also helpful.


356-379: Well-implemented batch status update.

The method efficiently uses TypeORM's In operator for bulk updates, includes proper error handling, and returns the affected count for caller verification. The empty array guard at the start prevents unnecessary database calls.

src/libs/l2ps/L2PSBatchAggregator.ts (1)

16-29: Well-designed batch payload interface.

The L2PSBatchPayload interface is clearly documented with comprehensive JSDoc comments for each field. The inclusion of authentication_tag for tamper detection and transaction_hashes for transparency is good design.

src/features/mcp/MCPServer.ts (1)

21-21: LGTM - Security hardening with helmet middleware.

Adding helmet middleware to the SSE transport Express app is a security best practice. Helmet automatically sets various HTTP headers to protect against common web vulnerabilities (XSS, clickjacking, etc.). This change aligns with the same security pattern applied in src/features/activitypub/fediverse.ts.

Also applies to: 259-259

src/model/entities/GCRv2/GCR_Main.ts (1)

36-36: LGTM - Well-designed schema extension for UD domain points.

The udDomains field is appropriately marked as optional for backward compatibility with historical records. This allows existing GCR entries to continue functioning while supporting per-domain UD point tracking for new entries. The field structure { [domain: string]: number } provides flexible per-domain point mapping.

src/libs/blockchain/gcr/handleGCR.ts (1)

497-502: LGTM - Consistent identity schema extension for UD support.

Adding ud: [] to the default identities structure extends GCRMain accounts to support Unstoppable Domains identity storage. The empty array default is appropriate and aligns with the StoredIdentities type extension introduced in src/model/entities/types/IdentityTypes.ts. This change enables the UD identity management flows implemented elsewhere in the PR.

src/libs/blockchain/gcr/gcr_routines/identityManager.ts (1)

329-336: LGTM - Type narrowing for identity key parameter.

The union type "xm" | "web2" | "pqc" | "ud" properly constrains the key parameter to valid identity categories, aligning with the new UD identity support. The implementation correctly accesses gcr.identities[key].

.serena/memories/ud_security_patterns.md (1)

1-157: Good security documentation for UD domain handling.

This documentation provides clear security patterns for UD domain ownership verification, including multi-chain considerations (EVM case-insensitive vs Solana case-sensitive), error handling patterns, and testing scenarios. The alignment with UDIdentityManager implementation is correct.

src/libs/network/routines/transactions/handleIdentityRequest.ts (2)

75-81: LGTM - UD identity assign handling follows established patterns.

The implementation correctly delegates to UDIdentityManager.verifyPayload() with proper type casting. The comment appropriately notes that UD follows signature-based verification like XM identities. Based on the relevant code snippet, verifyPayload performs comprehensive verification including domain resolution, address authorization, and signature validation.


97-101: LGTM - UD identity removal added to removal cases.

Adding ud_identity_remove to the existing removal case list is consistent with other identity types. The removal flow appropriately returns success, as the actual state changes are handled elsewhere in the GCR identity routines.

src/libs/blockchain/gcr/gcr_routines/IncentiveManager.ts (1)

137-163: LGTM - UD domain incentive hooks follow established patterns.

The udDomainLinked and udDomainUnlinked methods correctly mirror the pattern used for other identity types (Twitter, GitHub, Discord). The delegation to PointSystem.awardUdDomainPoints() and deductUdDomainPoints() maintains proper separation of concerns.

src/libs/network/manageNodeCall.ts (1)

255-255: Type assertion suggests SDK type mismatch.

The (tweet as any).id cast indicates a type safety issue where the Tweet type doesn't include the id property. Consider updating the SDK types if available or creating a local extended interface to avoid the any cast.

src/libs/blockchain/UDTypes/uns_sol.ts (1)

1-14: LGTM - Solana Anchor IDL type definition for UNS program.

The type definition properly models the Solana UNS program's IDL structure for TypeScript usage. The documentation correctly notes this is a type helper, not the actual IDL.

The hardcoded program address at line 8 (6eLvwb1dwtV5coME517Ki53DojQaRLUctY9qHqAsS9G2) should be verified to ensure it matches the expected production deployment of the UNS Solana program. Confirm this address against:

  • The actual target/idl/uns_sol.json file referenced in the comments
  • The production Solana deployment documentation
  • Any environment-specific address configurations
src/model/entities/types/IdentityTypes.ts (2)

30-53: Well-documented interface for UD identity storage.

The SavedUdIdentity interface is comprehensive with clear documentation of the Phase 5 changes. The breaking change from resolvedAddress to signingAddress is appropriately documented.


69-69: LGTM - StoredIdentities extended correctly.

The ud property is correctly typed as an array of SavedUdIdentity, maintaining consistency with other identity types in the structure.

src/libs/blockchain/gcr/gcr_routines/GCRIdentityRoutines.ts (4)

541-565: Thorough validation for UD identity add operation.

Good defensive validation of all required fields before processing. The explicit checks for each field ensure clear error messages when data is malformed.


645-695: UD identity removal looks correct.

The removal logic properly validates the account exists, checks for existing domain, performs case-insensitive comparison, and triggers the appropriate incentive deduction.


893-909: UD first-connection check uses proper case-insensitive SQL.

The LOWER() function in the SQL query ensures consistent domain matching regardless of case, aligning with the in-memory checks.


598-614: Potential TOCTOU race condition on domain uniqueness check.

The domain existence check and subsequent push/save are not atomic. Two concurrent requests could both pass the domainExists check before either saves, resulting in duplicate domains for the same account.

Consider using a database-level unique constraint or wrapping the check-and-save in a transaction to prevent this race condition.
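A sketch of the database-level guard (entity and column names are assumptions):

import { Column, Entity, Index, PrimaryGeneratedColumn } from "typeorm"

// A unique composite index makes the duplicate insert fail atomically,
// closing the check-then-save race regardless of application-level checks.
@Entity()
@Index(["account", "domain"], { unique: true })
export class UdIdentitySketch {
    @PrimaryGeneratedColumn()
    id: number

    @Column()
    account: string

    @Column()
    domain: string // stored lowercased so the index is effectively case-insensitive
}

The insert path then catches the unique-violation error and maps it to the existing "domain already linked" response.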

.serena/memories/session_ud_ownership_verification_2025_10_21.md (1)

1-138: Session memory file - documentation only.

This is an internal development session memory file documenting past implementation decisions. It provides useful context for understanding the UD ownership verification flow but is not production code requiring review.

src/libs/blockchain/gcr/gcr_routines/signatureDetector.ts (2)

59-75: LGTM - Helper functions are well-designed.

validateAddressType and isSignableAddress are clean utility wrappers that follow the single responsibility principle.


23-46: Clean signature type detection with proper input validation.

Good defensive programming with early guard for non-string inputs. The regex patterns correctly identify EVM (0x-prefixed 40 hex chars) and Solana (Base58 32-44 chars) addresses. The Base58 character set appropriately excludes 0, O, I, and l per Solana specifications.
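The described checks, reconstructed as a sketch (function name and return shape are assumptions):

// EVM: 0x-prefixed, exactly 40 hex characters.
const EVM_ADDRESS = /^0x[0-9a-fA-F]{40}$/
// Solana: Base58 (no 0, O, I, l), 32-44 characters.
const SOLANA_ADDRESS = /^[1-9A-HJ-NP-Za-km-z]{32,44}$/

function detectAddressType(address: unknown): "evm" | "solana" | null {
    if (typeof address !== "string") return null // early guard for non-string inputs
    if (EVM_ADDRESS.test(address)) return "evm"
    if (SOLANA_ADDRESS.test(address)) return "solana"
    return null
}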

src/libs/peer/routines/getPeerIdentity.ts (1)

17-96: Comprehensive identity normalization pipeline.

The normalizeIdentity function handles a wide variety of input formats (strings, Uint8Array, ArrayBuffer, Buffer payloads, envelope objects) and normalizes them to consistent lowercase hex strings. This is robust defensive coding.

src/features/incentive/PointSystem.ts (4)

21-22: Point values for UD domains appropriately differentiated.

.demos domains receive higher points (3) compared to other UD domains (1), incentivizing adoption of the platform's native TLD.


204-213: LGTM - udDomains breakdown handling correctly implemented.

The initialization check and accumulation logic for UD domain points follows the same pattern as other point types.


1054-1092: Strong ownership verification for awarding points.

Good security practice to verify:

  1. Domain exists in GCR identities (prevents race conditions)
  2. Domain is owned by user's linked wallets via checkOwnerLinkedWallets

This prevents point farming through unowned domains.


1152-1156: Verify ownership verification is skipped only when deductUdDomainPoints is always called after GCR identity removal.

The comment assumes ownership verification is redundant because GCRIdentityRoutines already verifies ownership via signature. However, this optimization is only valid if deductUdDomainPoints is exclusively called as part of the identity removal flow. If this method can be invoked independently (e.g., through other code paths or administrative operations), the ownership check may be necessary to prevent points inflation from domain transfers. Confirm the calling context of deductUdDomainPoints across the codebase to validate this assumption.

src/libs/blockchain/gcr/gcr_routines/udSolanaResolverHelper.ts (1)

643-667: LGTM - Domain parsing and validation.

The resolveDomain convenience method properly validates domain format before delegating to resolve(). The early return pattern with structured error responses is clean.

src/libs/blockchain/gcr/gcr_routines/udIdentityManager.ts (1)

288-319: Hardcoded RPC URLs should be externalized and include fallback mechanisms for production reliability.

Public RPC endpoints can become rate-limited, unavailable, or unreliable. Consider the following (a sketch follows the list):

  1. Externalize endpoints via environment variables or configuration files
  2. Implement endpoint rotation and fallback logic with retry logic (exponential backoff + jitter)
  3. Add health checks or circuit breaker patterns to detect and skip unhealthy endpoints
  4. Use dedicated/premium RPC providers for critical operations, or maintain a pool of multiple endpoints
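A sketch of externalized endpoints with simple failover (the env variable name is an assumption):

// Comma-separated endpoints, e.g. UD_SOLANA_RPC_URLS="https://a.example,https://b.example"
const RPC_ENDPOINTS = (process.env.UD_SOLANA_RPC_URLS ?? "")
    .split(",")
    .map((url) => url.trim())
    .filter(Boolean)

// Try each endpoint in order until one succeeds.
async function withRpcFailover<T>(fn: (endpoint: string) => Promise<T>): Promise<T> {
    let lastError: unknown
    for (const endpoint of RPC_ENDPOINTS) {
        try {
            return await fn(endpoint)
        } catch (error) {
            lastError = error // fall through to the next endpoint
        }
    }
    throw lastError ?? new Error("No RPC endpoints configured")
}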

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

🧹 Nitpick comments (4)
src/utilities/sharedState.ts (1)

85-88: Clarify that l2psBatchNonce persistence is process‑local only

The field works fine as an in‑memory counter for the aggregator, but it does not survive process restarts. Given the comment (“Persistent nonce…”) and how getLastNonceFromStorage is documented/used, it may be worth clarifying that this is runtime/session persistence only, or wiring this through a durable store if true cross‑restart monotonicity is required.

src/libs/l2ps/L2PSBatchAggregator.ts (2)

337-385: HMAC key derivation assumes Ed25519‑style raw byte private keys

createBatchPayload now correctly requires a private key and avoids the insecure fallback, but it assumes sharedState.keypair.privateKey is a Uint8Array:

const hmacKey = Buffer.from(sharedState.keypair.privateKey as Uint8Array)
    .toString("hex")
    .slice(0, 64)

If other key types (e.g. RSA forge objects) are ever used, this will throw at runtime. Consider either:

  • Explicitly enforcing/validating that signingAlgorithm is Ed25519 and the keypair is backed by raw byte keys before starting this service, or
  • Normalizing privateKey into a Uint8Array in one place (e.g. SharedState) and reusing that representation here.

This keeps the aggregator resilient to future changes in key handling.


396-439: Nonce monotonicity is good per process, but comments overstate cross‑restart persistence

getNextBatchNonce combines Date.now() * 1000 with the last stored nonce and saves via sharedState.l2psBatchNonce. This addresses the earlier in‑memory collision issues and gives a monotonically increasing sequence within a running process.

However:

  • getLastNonceFromStorage only reads from shared state, which is reset on process restart.
  • The doc comment says “persists the last used nonce … ensure uniqueness across restarts”, which is not strictly true without durable storage or reading back from chain/mempool.

If cross‑restart monotonicity is a hard requirement, consider persisting the last nonce in a DB (or deriving a lower bound from chain state) instead of only in memory. Otherwise, I’d suggest relaxing the comment to describe this as a timestamp‑based, best‑effort monotonic nonce with in‑process persistence.
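The best-effort version of that scheme, for contrast (names assumed; persistence here is in-process only):

// Monotonic within a process: never below lastNonce + 1, anchored to wall-clock time.
function getNextBatchNonce(state: { l2psBatchNonce?: number }): number {
    const timeBased = Date.now() * 1000
    const next = Math.max(timeBased, (state.l2psBatchNonce ?? 0) + 1)
    state.l2psBatchNonce = next
    return next
}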

src/libs/peer/routines/getPeerIdentity.ts (1)

176-285: Propagate getPeerIdentity failure semantics through verifyPeer

getPeerIdentity can now return null on normalization or challenge‑verification failure, but verifyPeer ignores that and always returns the original peer. Callers of verifyPeer therefore have to remember to inspect peer.verification.status (or related fields) instead of being able to rely on the return type, which is easy to miss.

Consider tightening the API by either:

  • changing verifyPeer to return Promise<Peer | null> and bubbling up the null from getPeerIdentity, or
  • throwing on failure so callers must handle an explicit error.

That would make the verification contract clearer and reduce the chance of mistakenly treating an unverified peer as verified.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

Disabled knowledge base sources:

  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 11f2310 and 6af2eca.

📒 Files selected for processing (3)
  • src/libs/l2ps/L2PSBatchAggregator.ts (1 hunks)
  • src/libs/peer/routines/getPeerIdentity.ts (3 hunks)
  • src/utilities/sharedState.ts (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (2)
src/libs/peer/routines/getPeerIdentity.ts (2)
src/libs/peer/Peer.ts (1)
  • Peer (15-346)
src/libs/network/manageNodeCall.ts (1)
  • NodeCall (37-41)
src/libs/l2ps/L2PSBatchAggregator.ts (4)
src/libs/blockchain/l2ps_mempool.ts (1)
  • L2PS_STATUS (16-29)
src/utilities/sharedState.ts (2)
  • getSharedState (276-278)
  • SharedState (17-271)
src/libs/crypto/hashing.ts (1)
  • Hashing (15-26)
src/libs/utils/calibrateTime.ts (1)
  • getNetworkTimestamp (17-23)
🔇 Additional comments (4)
src/libs/l2ps/L2PSBatchAggregator.ts (3)

49-147: Lifecycle, reentrancy, and stats handling look solid

Singleton construction, start/stop lifecycle, interval setup, and the isAggregating guard provide a clear, race‑free aggregation loop. Resetting statistics on start and logging them on stop is also a good touch for observability.


562-600: Statistics and status exposure are clean and non‑mutating

updateCycleTime, logStatistics, getStatistics, and getStatus are straightforward, keep stats encapsulated, and return copies where appropriate. This should make external monitoring easy without risking accidental mutation of internal counters.


620-636: forceAggregation correctly reuses safety checks

Reusing safeAggregateAndSubmit with explicit checks for isRunning and isAggregating provides a clear and safe testing/debug path without bypassing the reentrancy guard.

src/libs/peer/routines/getPeerIdentity.ts (1)

125-173: Challenge generation and ed25519 verification look correct and resolve the earlier security gap

The new generateChallenge + verifyChallenge flow is well‑structured: crypto.randomBytes(32) gives a strong nonce, and signing/verifying the domain‑separated string DEMOS_PEER_AUTH_V1:${challenge} with ucrypto.verify provides proper, non‑replayable proof of key possession. This directly addresses the previous "incomplete signature verification" concern and is a solid security upgrade. A sketch of the challenge side appears after the checklist below.

Verify that:

  • The responder side signs the exact same domain‑separated message format
  • The ucrypto.verify invocation's algorithm name, key encoding, and signature encoding match the library's expected API
  • The hexToUint8Array helper correctly converts hex strings to Uint8Array for both public key and signature parameters
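For reference, the challenge side of that flow as a sketch (helper names per the review; the exact signing API is not shown here):

import crypto from "node:crypto"

// 32 random bytes give a strong, non-replayable nonce.
function generateChallenge(): string {
    return crypto.randomBytes(32).toString("hex")
}

// Domain separation: the prefix stops the signature from being replayed
// in any other context that signs raw challenges.
function challengeMessage(challenge: string): string {
    return `DEMOS_PEER_AUTH_V1:${challenge}`
}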

@tcsenpai

tcsenpai commented Dec 5, 2025

Your trial has ended! 😢

To keep getting reviews, activate your plan here.

Got questions about plans or want to see if we can extend your trial? Talk to our founders here.😎

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 9

♻️ Duplicate comments (1)
src/libs/blockchain/l2ps_mempool.ts (1)

1-8: Import source inconsistency already flagged.

This imports L2PSTransaction from @kynesyslabs/demosdk/types, but the entity file imports it from @/types/sdk-workarounds. Consider aligning import sources to prevent type mismatches.

🧹 Nitpick comments (9)
src/libs/blockchain/l2ps_mempool.ts (1)

638-654: Consider adding explicit types for reduce accumulators.

The reduce callbacks use implicit any for the accumulator. Adding explicit types improves type safety.

-            const transactionsByUID = byUID.reduce((acc, row) => {
+            const transactionsByUID = byUID.reduce((acc: Record<string, number>, row) => {
                 acc[row.l2ps_uid] = parseInt(row.count)
                 return acc
-            }, {})
+            }, {} as Record<string, number>)
src/libs/l2ps/L2PSConsensus.ts (3)

269-277: Avoid dynamic imports - use L2PSProofManager instead.

Dynamic imports inside a method add runtime overhead and complexity. Since L2PSProofManager is already imported, consider adding a resetProofToPending method there and using it here.

Add to L2PSProofManager:

static async resetProofToPending(proofId: number): Promise<void> {
    const repo = await this.getRepo()
    await repo.update(proofId, {
        status: "pending",
        applied_block_number: null,
        processed_at: null
    })
}

Then replace lines 269-277:

-                const repo = await (await import("@/model/datasource")).default.getInstance()
-                const ds = repo.getDataSource()
-                const proofRepo = ds.getRepository((await import("@/model/entities/L2PSProofs")).L2PSProof)
-                
-                await proofRepo.update(proof.id, {
-                    status: "pending",
-                    applied_block_number: null,
-                    processed_at: null
-                })
+                await L2PSProofManager.resetProofToPending(proof.id)

232-241: Inefficient query - filter by block number in database.

Fetching 1000 proofs and filtering in memory is inefficient. Query directly with the block number filter.

Consider adding a method to L2PSProofManager that filters by applied_block_number:

static async getAppliedProofsForBlock(blockNumber: number): Promise<L2PSProof[]> {
    const repo = await this.getRepo()
    return repo.find({
        where: { applied_block_number: blockNumber, status: "applied" },
        order: { created_at: "ASC" }
    })
}

Then use it here:

-            const appliedProofs = await L2PSProofManager.getProofs(
-                "", // all L2PS networks
-                "applied",
-                1000
-            )
-
-            // Filter by block number and rollback in reverse order
-            const proofsToRollback = appliedProofs
-                .filter(p => p.applied_block_number === blockNumber)
-                .reverse()
+            const proofsToRollback = (await L2PSProofManager.getAppliedProofsForBlock(blockNumber)).reverse()

291-304: Same inefficiency - use database-level filtering.

Similar to rollbackProofsForBlock, this fetches all applied proofs (up to 10,000) and filters in memory. Use the same database-level filtering approach suggested above.

src/model/entities/L2PSProofs.ts (1)

73-79: Consider extracting proof type for reusability.

The inline type for proof is well-documented but could be extracted as a named interface (e.g., L2PSProofData) for reuse in other files like L2PSProofManager.

export interface L2PSProofData {
    type: "snark" | "stark" | "placeholder"
    data: string
    verifier_key?: string
    public_inputs: any[]
}
src/libs/l2ps/L2PSTransactionExecutor.ts (2)

310-334: Wrap switch case declaration in block and note unused simulate parameter.

The account declaration at line 312 needs a block scope. Also, the simulate parameter is unused in this method - consider removing it or documenting why it's reserved for future use.

Apply this diff:

     private static async validateGCREdit(
         edit: GCREdit,
-        simulate: boolean
+        _simulate: boolean  // Reserved for future use
     ): Promise<L2PSExecutionResult> {
         const repo = await this.getL1Repo()

         switch (edit.type) {
-            case "balance":
-                const account = await this.getOrCreateL1Account(edit.account as string)
+            case "balance": {
+                const account = await this.getOrCreateL1Account(edit.account as string)
                 
                 if (edit.operation === "remove") {
                     // ...
                 }
                 break
+            }

             case "nonce":

461-481: getNetworkStats doesn't call init() before accessing Datasource.

Other methods in this class call await this.init() first, but getNetworkStats directly accesses Datasource.getInstance(). While this may work due to lazy initialization in Datasource itself, it's inconsistent with the rest of the class.

     static async getNetworkStats(l2psUid: string): Promise<{
         totalTransactions: number
         pendingProofs: number
         appliedProofs: number
     }> {
+        await this.init()
         const dsInstance = await Datasource.getInstance()
src/libs/l2ps/L2PSProofManager.ts (2)

348-361: Concurrent clone() calls on the same query builder may have subtle issues.

While TypeORM's clone() should create independent copies, running 4 parallel queries with cloned builders that share the initial where clause could have edge cases. Consider using separate query builders or restructuring as a single aggregation query for efficiency.

A single query with aggregation would be more efficient:

const stats = await repo
    .createQueryBuilder("proof")
    .select("proof.status", "status")
    .addSelect("COUNT(*)", "count")
    .where(l2psUid ? "proof.l2ps_uid = :l2psUid" : "1=1", { l2psUid })
    .groupBy("proof.status")
    .getRawMany()

// Then transform to { pending, applied, rejected, total }

264-298: Consider adding optimistic locking or status check before updates.

Both markProofApplied and markProofRejected update without checking the current status. A proof that's already applied or rejected could be updated again, potentially corrupting state.

Add a status precondition:

     static async markProofApplied(proofId: number, blockNumber: number): Promise<void> {
         const repo = await this.getRepo()
         
-        await repo.update(proofId, {
+        const result = await repo.update(
+            { id: proofId, status: "pending" as L2PSProofStatus },
+            {
             status: "applied" as L2PSProofStatus,
             applied_block_number: blockNumber,
             processed_at: new Date()
         })
+        
+        if (result.affected === 0) {
+            log.warning(`[L2PS ProofManager] Proof ${proofId} was not pending or doesn't exist`)
+        }

         log.info(`[L2PS ProofManager] Marked proof ${proofId} as applied in block ${blockNumber}`)
     }
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

Disabled knowledge base sources:

  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 6af2eca and fbb8534.

📒 Files selected for processing (9)
  • src/libs/blockchain/l2ps_mempool.ts (6 hunks)
  • src/libs/consensus/v2/PoRBFT.ts (4 hunks)
  • src/libs/l2ps/L2PSConsensus.ts (1 hunks)
  • src/libs/l2ps/L2PSProofManager.ts (1 hunks)
  • src/libs/l2ps/L2PSTransactionExecutor.ts (1 hunks)
  • src/libs/network/routines/transactions/handleL2PS.ts (3 hunks)
  • src/model/datasource.ts (3 hunks)
  • src/model/entities/L2PSProofs.ts (1 hunks)
  • src/model/entities/L2PSTransactions.ts (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
  • src/model/datasource.ts
  • src/libs/consensus/v2/PoRBFT.ts
🧰 Additional context used
🧬 Code graph analysis (5)
src/model/entities/L2PSProofs.ts (1)
src/model/entities/L2PSTransactions.ts (1)
  • Entity (27-143)
src/model/entities/L2PSTransactions.ts (1)
src/model/entities/L2PSProofs.ts (1)
  • Entity (38-143)
src/libs/l2ps/L2PSConsensus.ts (2)
src/libs/l2ps/L2PSProofManager.ts (1)
  • L2PSProofManager (66-363)
src/libs/blockchain/gcr/handleGCR.ts (1)
  • GCRResult (73-77)
src/libs/blockchain/l2ps_mempool.ts (2)
src/libs/blockchain/transaction.ts (1)
  • hash (106-114)
src/utilities/logger.ts (1)
  • error (125-132)
src/libs/l2ps/L2PSProofManager.ts (2)
src/libs/crypto/hashing.ts (1)
  • Hashing (15-26)
src/model/entities/L2PSProofs.ts (1)
  • L2PSProofStatus (27-31)
🪛 Biome (2.1.2)
src/libs/l2ps/L2PSTransactionExecutor.ts

[error] 129-129: Other switch clauses can erroneously access this declaration.
Wrap the declaration in a block to restrict its access to the switch clause.

The declaration is defined in this switch clause:

Safe fix: Wrap the declaration in a block.

(lint/correctness/noSwitchDeclarations)


[error] 238-238: Other switch clauses can erroneously access this declaration.
Wrap the declaration in a block to restrict its access to the switch clause.

The declaration is defined in this switch clause:

Safe fix: Wrap the declaration in a block.

(lint/correctness/noSwitchDeclarations)


[error] 239-239: Other switch clauses can erroneously access this declaration.
Wrap the declaration in a block to restrict its access to the switch clause.

The declaration is defined in this switch clause:

Safe fix: Wrap the declaration in a block.

(lint/correctness/noSwitchDeclarations)


[error] 247-247: Other switch clauses can erroneously access this declaration.
Wrap the declaration in a block to restrict its access to the switch clause.

The declaration is defined in this switch clause:

Safe fix: Wrap the declaration in a block.

(lint/correctness/noSwitchDeclarations)


[error] 312-312: Other switch clauses can erroneously access this declaration.
Wrap the declaration in a block to restrict its access to the switch clause.

The declaration is defined in this switch clause:

Safe fix: Wrap the declaration in a block.

(lint/correctness/noSwitchDeclarations)

🔇 Additional comments (16)
src/libs/blockchain/l2ps_mempool.ts (2)

10-31: Well-structured status lifecycle constants.

The lifecycle documentation and type derivation pattern are clean. The as const assertion with derived type is idiomatic TypeScript.
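
For readers unfamiliar with the pattern, this is roughly what it looks like (status names taken from the lifecycle documented in this PR; FAILED is assumed):

export const L2PS_STATUS = {
    PENDING: "pending",
    PROCESSED: "processed",
    EXECUTED: "executed",
    BATCHED: "batched",
    CONFIRMED: "confirmed",
    FAILED: "failed", // terminal error state (assumed)
} as const

// Union type derived from the values:
// "pending" | "processed" | "executed" | "batched" | "confirmed" | "failed"
export type L2PSStatus = (typeof L2PS_STATUS)[keyof typeof L2PS_STATUS]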


114-193: LGTM - Solid transaction addition with proper validation.

The method handles block number edge cases correctly, includes duplicate detection via both original and encrypted hashes, and stores timestamps as strings for bigint column compatibility.
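
A hedged sketch of such a dual-hash duplicate check (the column names hash and encrypted_hash are assumptions based on the description):

import { Repository } from "typeorm"

interface MempoolRow {
    hash: string
    encrypted_hash: string
}

// findOne with a where-array performs an OR across both hash columns.
async function isDuplicate(
    repo: Repository<MempoolRow>,
    originalHash: string,
    encryptedHash: string,
): Promise<boolean> {
    const existing = await repo.findOne({
        where: [{ hash: originalHash }, { encrypted_hash: encryptedHash }],
    })
    return existing !== null
}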

src/libs/network/routines/transactions/handleL2PS.ts (3)

1-12: LGTM - Clean imports with new execution dependencies.

The separation of L2PSTransaction type import and addition of L2PSTransactionExecutor and logger are appropriate for the new execution phase.


139-160: Execution phase structure is sound.

The try-catch pattern with status updates on failure is appropriate. The temporary use of l2psTx.hash as the L1 batch reference with a clear comment about future updates is acceptable for the current implementation phase.


174-189: LGTM - Comprehensive response with privacy consideration.

The response structure includes all relevant execution details while appropriately returning only the hash of the decrypted transaction rather than its full content, preserving L2PS privacy.

src/model/entities/L2PSTransactions.ts (3)

27-35: LGTM - Comprehensive indexing strategy.

The multi-column indexes (UID+FROM, UID+TO) alongside single-column indexes provide efficient query paths for common L2PS transaction lookups.


98-114: LGTM - Appropriate use of bigint for large numeric values.

Using bigint for amount, nonce, and timestamp is correct for handling values that may exceed JavaScript's safe integer limit.
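
A minimal illustration of the column choice (field names mirror the review, not necessarily the real entity; node-postgres returns bigint columns as strings, hence the string typings):

import { Column, Entity, PrimaryGeneratedColumn } from "typeorm"

@Entity("l2ps_transactions_example")
export class L2PSTransactionExample {
    @PrimaryGeneratedColumn()
    id: number

    // bigint round-trips as a string, avoiding precision loss
    // past Number.MAX_SAFE_INTEGER
    @Column("bigint")
    amount: string

    @Column("bigint", { nullable: true })
    nonce: string | null

    @Column("bigint")
    timestamp: string
}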


116-124: Consider aligning status values with L2PS_STATUS constants if both entities track the same transaction lifecycle.

This entity uses a hardcoded union type for status ("pending" | "batched" | "confirmed" | "failed"), while L2PS_STATUS constants define additional statuses ("processed", "executed"). If both entities represent the same lifecycle, consolidating to a shared type would improve maintainability. If they intentionally represent different stages (e.g., mempool vs. finalized transactions), document this distinction to prevent future confusion.

src/libs/l2ps/L2PSConsensus.ts (2)

26-45: LGTM - Well-defined consensus result interface.

The L2PSConsensusResult interface comprehensively captures the outcome of proof applications, including per-proof details for debugging and monitoring.


184-196: Rollback logic is correct.

The loop correctly starts at editResults.length - 2 to skip the failed edit and iterates backwards, rolling back only successfully applied edits. The isRollback: true flag properly signals to HandleGCR.apply that this is a reversal operation.

src/model/entities/L2PSProofs.ts (1)

44-143: LGTM - Well-structured proof entity.

The entity comprehensively captures the L2PS proof lifecycle with appropriate fields for tracking creation, application, and rejection states. Documentation is thorough and nullable fields are correctly annotated.

src/libs/l2ps/L2PSTransactionExecutor.ts (2)

57-77: Lazy initialization pattern looks correct.

The singleton-style initialization with initPromise properly handles concurrent initialization attempts and avoids race conditions. The pattern correctly checks for both l1Repo and initPromise to prevent redundant initialization.
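
For reference, a minimal sketch of the pattern (openRepository stands in for the project's actual datasource bootstrap):

// Stand-in for the real initialization work.
async function openRepository(): Promise<object> {
    return {}
}

class LazyInitSketch {
    private static repo: object | null = null
    private static initPromise: Promise<void> | null = null

    // The promise is cached synchronously, before any await, so
    // concurrent callers all wait on the same initialization.
    static async init(): Promise<void> {
        if (this.repo) return
        if (!this.initPromise) {
            this.initPromise = (async () => {
                this.repo = await openRepository()
            })()
        }
        return this.initPromise
    }
}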


339-371: Add error handling for duplicate transaction hashes.

The save operation may fail or create duplicates if tx.hash already exists, depending on whether unique constraints are defined on the hash field. Add validation to check for existing records before saving, or implement upsert logic to prevent data integrity issues.

src/libs/l2ps/L2PSProofManager.ts (3)

222-262: Proof verification logic looks sound for placeholder implementation.

The verification properly validates structure, checks required fields on GCR edits, and uses deterministicStringify for consistent hash comparison. The placeholder approach is well-documented with TODO for future ZK verification.


70-93: Initialization pattern is consistent and correct.

The lazy singleton initialization with initPromise properly handles concurrent calls, matching the pattern used in L2PSTransactionExecutor. This ensures thread-safe repository initialization.


29-39: No changes needed. The deterministicStringify function correctly handles nested objects within arrays because JSON.stringify's replacer function is invoked for every value during serialization, including objects nested inside arrays. The condition if (value && typeof value === 'object' && !Array.isArray(value)) will match and sort keys for all such nested objects before serialization, ensuring consistent output regardless of original key ordering.

Likely an incorrect or invalid review comment.

Comment on lines 192 to 209
/**
* Get pending proofs for a specific block
*
* @param blockNumber - Target block number
* @returns Array of proofs targeting this block
*/
static async getProofsForBlock(blockNumber: number): Promise<L2PSProof[]> {
const repo = await this.getRepo()

// Get all pending proofs that haven't been applied
// Proofs are applied in order of creation
return repo.find({
where: {
status: "pending" as L2PSProofStatus
},
order: { created_at: "ASC" }
})
}

⚠️ Potential issue | 🟡 Minor

getProofsForBlock ignores the blockNumber parameter.

The method accepts blockNumber but doesn't use it in the query. This appears to be incomplete implementation or the parameter should be removed.

Either use the parameter or remove it:

     static async getProofsForBlock(blockNumber: number): Promise<L2PSProof[]> {
         const repo = await this.getRepo()

-        // Get all pending proofs that haven't been applied
-        // Proofs are applied in order of creation
+        // TODO: Filter proofs targeting this specific block
+        // For now, returns all pending proofs in creation order
         return repo.find({
             where: {
                 status: "pending" as L2PSProofStatus
             },
             order: { created_at: "ASC" }
         })
     }
🤖 Prompt for AI Agents
In src/libs/l2ps/L2PSProofManager.ts around lines 192 to 209, the
getProofsForBlock method accepts blockNumber but never uses it; either apply the
block filter or remove the unused parameter. Fix by inspecting the L2PSProof
entity to determine the correct column name (e.g., blockNumber, targetBlock, or
block_number) and add it to the repository query where clause alongside
status:"pending" (keeping the ASC order), ensuring types match; alternatively,
if no per-block filtering is required, remove the blockNumber parameter and
update all callers accordingly.

Comment on lines +38 to +43
@Entity("l2ps_proofs")
@Index("IDX_L2PS_PROOFS_UID", ["l2ps_uid"])
@Index("IDX_L2PS_PROOFS_STATUS", ["status"])
@Index("IDX_L2PS_PROOFS_BLOCK", ["target_block_number"])
@Index("IDX_L2PS_PROOFS_BATCH_HASH", ["l1_batch_hash"])
@Index("IDX_L2PS_PROOFS_UID_STATUS", ["l2ps_uid", "status"])

⚠️ Potential issue | 🟡 Minor

Typo in index name: IDX_L2PS_PROFS_UID_STATUS.

The index name is missing an "O" - should be IDX_L2PS_PROOFS_UID_STATUS for consistency with other index names.

 @Index("IDX_L2PS_PROOFS_BATCH_HASH", ["l1_batch_hash"])
-@Index("IDX_L2PS_PROFS_UID_STATUS", ["l2ps_uid", "status"])
+@Index("IDX_L2PS_PROOFS_UID_STATUS", ["l2ps_uid", "status"])
 export class L2PSProof {
🤖 Prompt for AI Agents
In src/model/entities/L2PSProofs.ts around lines 38 to 43, the index name string
for the compound index is misspelled as "IDX_L2PS_PROFS_UID_STATUS"; change it
to "IDX_L2PS_PROOFS_UID_STATUS" to match the naming convention used by the other
indexes. Update the @Index decorator's first argument to the corrected name and
ensure any corresponding DB migration or schema sync is updated/applied so the
database index name stays in sync.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

♻️ Duplicate comments (2)
src/libs/blockchain/l2ps_mempool.ts (1)

1-4: Import source inconsistency remains.

This is a duplicate of a past review comment. The file imports L2PSTransaction from @kynesyslabs/demosdk/types but the entity file imports it from @/types/sdk-workarounds. This could cause type mismatches.

src/libs/l2ps/L2PSProofManager.ts (1)

198-209: Unused blockNumber parameter - acknowledged with TODO.

The blockNumber parameter is still not used in the query. The TODO comment on lines 201-202 acknowledges this is intentional placeholder behavior pending block-specific batching implementation.

🧹 Nitpick comments (3)
src/libs/network/routines/transactions/handleL2PS.ts (1)

1-2: Consolidate duplicate imports from the same module.

SonarCloud correctly flags that @kynesyslabs/demosdk/types is imported on both lines 1 and 2. Merge into a single import.

Apply this diff:

-import type { BlockContent } from "@kynesyslabs/demosdk/types"
-import type { L2PSTransaction } from "@kynesyslabs/demosdk/types"
+import type { BlockContent, L2PSTransaction } from "@kynesyslabs/demosdk/types"
src/libs/l2ps/L2PSProofManager.ts (1)

29-39: Use localeCompare for reliable alphabetical sorting.

The static analysis correctly identifies that Object.keys(value).sort() may produce inconsistent results with non-ASCII characters or locale-specific orderings. Use localeCompare for deterministic cross-platform sorting.

Apply this diff:

 function deterministicStringify(obj: any): string {
     return JSON.stringify(obj, (key, value) => {
         if (value && typeof value === 'object' && !Array.isArray(value)) {
-            return Object.keys(value).sort().reduce((sorted: any, k) => {
+            return Object.keys(value).sort((a, b) => a.localeCompare(b)).reduce((sorted: any, k) => {
                 sorted[k] = value[k]
                 return sorted
             }, {})
         }
         return value
     })
 }
src/libs/l2ps/L2PSTransactionExecutor.ts (1)

113-223: Consider reducing cognitive complexity of execute() method.

Static analysis flags cognitive complexity of 31 (allowed: 15). While the logic is readable, consider extracting the switch-case handling into separate methods (e.g., handleDemosWorkTransaction, handleDefaultTransaction) to reduce complexity and improve testability.

Example extraction pattern:

private static async handleTransactionByType(
    tx: Transaction,
    simulate: boolean
): Promise<{ gcrEdits: GCREdit[]; affectedAccounts: string[] } | L2PSExecutionResult> {
    switch (tx.content.type) {
        case "native":
            return this.handleNativeTransaction(tx, simulate)
        case "demoswork":
            return this.handleDemosWorkTransaction(tx, simulate)
        default:
            return this.handleGenericTransaction(tx, simulate)
    }
}
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

Disabled knowledge base sources:

  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between fbb8534 and 512eabc.

📒 Files selected for processing (4)
  • src/libs/blockchain/l2ps_mempool.ts (6 hunks)
  • src/libs/l2ps/L2PSProofManager.ts (1 hunks)
  • src/libs/l2ps/L2PSTransactionExecutor.ts (1 hunks)
  • src/libs/network/routines/transactions/handleL2PS.ts (3 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
src/libs/l2ps/L2PSProofManager.ts (3)
src/libs/crypto/hashing.ts (1)
  • Hashing (15-26)
src/model/entities/L2PSProofs.ts (1)
  • L2PSProofStatus (27-31)
src/utilities/logger.ts (1)
  • error (125-132)
🪛 GitHub Check: SonarCloud Code Analysis
src/libs/network/routines/transactions/handleL2PS.ts

[warning] 2-2: '@kynesyslabs/demosdk/types' imported multiple times.

See more on https://sonarcloud.io/project/issues?id=kynesyslabs_node&issues=AZru318Mx7vD0jIgt1nL&open=AZru318Mx7vD0jIgt1nL&pullRequest=514


[warning] 1-1: '@kynesyslabs/demosdk/types' imported multiple times.

See more on https://sonarcloud.io/project/issues?id=kynesyslabs_node&issues=AZru318Mx7vD0jIgt1nK&open=AZru318Mx7vD0jIgt1nK&pullRequest=514

src/libs/l2ps/L2PSProofManager.ts

[failure] 32-32: Provide a compare function that depends on "String.localeCompare", to reliably sort elements alphabetically.

See more on https://sonarcloud.io/project/issues?id=kynesyslabs_node&issues=AZru32Axx7vD0jIgt1nW&open=AZru32Axx7vD0jIgt1nW&pullRequest=514


[warning] 201-201: Complete the task associated to this "TODO" comment.

See more on https://sonarcloud.io/project/issues?id=kynesyslabs_node&issues=AZru77doJeh6yjF3sOs5&open=AZru77doJeh6yjF3sOs5&pullRequest=514


[warning] 75-75: Expected non-Promise value in a boolean conditional.

See more on https://sonarcloud.io/project/issues?id=kynesyslabs_node&issues=AZru32Axx7vD0jIgt1nX&open=AZru32Axx7vD0jIgt1nX&pullRequest=514


[warning] 239-239: Complete the task associated to this "TODO" comment.

See more on https://sonarcloud.io/project/issues?id=kynesyslabs_node&issues=AZru32Axx7vD0jIgt1nY&open=AZru32Axx7vD0jIgt1nY&pullRequest=514

src/libs/l2ps/L2PSTransactionExecutor.ts

[warning] 237-237: Replace this "switch" statement by "if" statements to increase readability.

See more on https://sonarcloud.io/project/issues?id=kynesyslabs_node&issues=AZru32A8x7vD0jIgt1nc&open=AZru32A8x7vD0jIgt1nc&pullRequest=514


[failure] 113-113: Refactor this function to reduce its Cognitive Complexity from 31 to the 15 allowed.

See more on https://sonarcloud.io/project/issues?id=kynesyslabs_node&issues=AZru32A8x7vD0jIgt1na&open=AZru32A8x7vD0jIgt1na&pullRequest=514


[warning] 59-59: Expected non-Promise value in a boolean conditional.

See more on https://sonarcloud.io/project/issues?id=kynesyslabs_node&issues=AZru32A8x7vD0jIgt1nZ&open=AZru32A8x7vD0jIgt1nZ&pullRequest=514

🔇 Additional comments (10)
src/libs/network/routines/transactions/handleL2PS.ts (1)

139-172: LGTM - Execution flow and status updates correctly implemented.

The execution flow properly integrates with L2PSTransactionExecutor and uses the correct hash (l2psTx.hash) for all updateStatus calls, matching the hash stored during addTransaction. The previous bug where originalHash was incorrectly used has been addressed.

src/libs/blockchain/l2ps_mempool.ts (3)

10-31: LGTM - Well-defined status lifecycle.

The status constants and type are properly structured with clear documentation of the lifecycle flow. The as const assertion ensures type safety.


356-379: LGTM - Batch update operation is well-implemented.

Handles empty array edge case, uses In() operator efficiently, and properly returns the affected count with appropriate logging.
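
A hedged sketch of the shape described (entity fields are assumptions):

import { In, Repository } from "typeorm"

interface MempoolRow {
    hash: string
    status: string
}

async function updateStatusBatch(
    repo: Repository<MempoolRow>,
    hashes: string[],
    status: string,
): Promise<number> {
    if (hashes.length === 0) return 0 // avoid a no-op UPDATE
    const result = await repo.update({ hash: In(hashes) }, { status })
    return result.affected ?? 0
}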


584-593: LGTM - Timestamp comparison fixed.

The cleanup method now uses CAST(timestamp AS BIGINT) < CAST(:cutoff AS BIGINT) for proper numeric comparison, consistent with cleanupByStatus. This addresses the previously flagged string comparison issue.
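
A sketch of the comparison in isolation (column names assumed; the cutoff is stringified because the column stores bigints as strings):

import { Repository } from "typeorm"

interface MempoolRow {
    status: string
    timestamp: string
}

async function cleanupConfirmed(
    repo: Repository<MempoolRow>,
    maxAgeMs: number,
): Promise<void> {
    const cutoff = String(Date.now() - maxAgeMs)
    await repo
        .createQueryBuilder()
        .delete()
        .where("status = :status", { status: "confirmed" })
        // numeric, not lexicographic, comparison of string-stored timestamps
        .andWhere("CAST(timestamp AS BIGINT) < CAST(:cutoff AS BIGINT)", { cutoff })
        .execute()
}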

src/libs/l2ps/L2PSProofManager.ts (2)

115-118: LGTM - Deterministic hash generation.

The transactionsHash now uses deterministicStringify with stable inputs (l2psUid, l1BatchHash, gcrEdits), addressing the previous concern about non-deterministic Date.now() usage.


222-262: LGTM - Placeholder verification is correctly structured.

The verification logic properly validates proof structure, GCR edit fields, and uses deterministic hashing for placeholder proof comparison. The TODO for actual ZK verification is appropriately documented.

src/libs/l2ps/L2PSTransactionExecutor.ts (4)

127-174: LGTM - Switch cases properly scoped with blocks.

The switch cases now use block scoping (braces) for the native and default cases, addressing the previous concern about variable hoisting and temporal dead zone issues.


237-294: LGTM - Native transaction handling with proper validation.

Switch cases are properly block-scoped, and amount validation at line 243 includes comprehensive type checking (typeof, Number.isFinite, positive check), addressing the previous review concerns.


307-339: LGTM - Unused variable removed, validation properly structured.

The previously flagged unused repo variable has been replaced with a proper init() call. The validation logic for balance edits correctly checks L1 state sufficiency.


381-402: LGTM - Status update now handles missing records.

The method now checks result.affected and logs a warning when no transaction is found, addressing the previous concern about silent success on non-existent records.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (1)
src/libs/l2ps/L2PSBatchAggregator.ts (1)

451-531: Address reference_block and publicKey casting concerns from previous review.

Two issues from the previous review remain unresolved:

  1. Line 508: reference_block: 0 — The comment suggests mempool will set this, but according to prior review analysis, Mempool.addTransaction passes this value through without recalculation. Batch transactions will persist with reference_block = 0, potentially bypassing anchoring/ordering semantics. Set a meaningful reference block here (e.g., sharedState.lastKnownBlock or query latest block height) instead of relying on downstream logic.

  2. Line 462: Unsafe publicKey cast — Casting sharedState.keypair.publicKey as Uint8Array without validation will fail at runtime if the key isn't a Uint8Array. Either reuse sharedState.publicKeyHex (preferred) or add type validation before casting.

Based on prior review feedback at lines 451-531.

🧹 Nitpick comments (11)
src/libs/l2ps/L2PSConcurrentSync.ts (1)

167-188: Batch duplicate detection improves sync performance.

The pre-fetch approach to check for existing hashes before per-transaction insertion is a good optimization. However, consider adding a guard for empty txHashes array to avoid unnecessary database queries.

         // Batch duplicate detection: check all hashes at once
         const txHashes = transactions.map(tx => tx.hash)
         const existingHashes = new Set<string>()

+        if (txHashes.length === 0) {
+            log.debug("[L2PS Sync] No transaction hashes to check for duplicates")
+            return
+        }
+
         // Query database once for all hashes
         try {
src/libs/l2ps/parallelNetworks.ts (1)

55-75: Well-structured hex validation helper.

The hexFileToBytes function properly validates hex input with comprehensive checks for empty values, odd length, and non-hex characters. Per SonarCloud hint, consider using replaceAll for clarity.

-    const cleaned = value.trim().replace(/^0x/, "").replace(/\s+/g, "")
+    const cleaned = value.trim().replace(/^0x/, "").replaceAll(/\s+/g, "")
src/libs/blockchain/l2ps_mempool.ts (1)

626-629: Type safety concern with parseInt in stats aggregation.

The parseInt(row.count) assumes the database returns a string, which is typical for aggregate results, but consider using Number() for robustness or adding type annotations.

             const transactionsByUID = byUID.reduce((acc, row) => {
-                acc[row.l2ps_uid] = parseInt(row.count)
+                acc[row.l2ps_uid] = Number(row.count)
                 return acc
-            }, {})
+            }, {} as Record<string, number>)
src/libs/l2ps/L2PSConsensus.ts (4)

68-161: High cognitive complexity in applyPendingProofs.

This function exceeds the complexity threshold (20 vs 15 allowed). Consider extracting the proof result processing (lines 94-107) and the post-processing logic (lines 113-148) into separate helper methods.

+    /**
+     * Process individual proof results and update counters
+     */
+    private static processProofResult(
+        result: L2PSConsensusResult,
+        proofResult: { success: boolean; editsApplied: number },
+        proof: L2PSProof
+    ): void {
+        if (proofResult.success) {
+            result.proofsApplied++
+            result.totalEditsApplied += proofResult.editsApplied
+            result.affectedAccounts.push(...proof.affected_accounts)
+        } else {
+            result.proofsFailed++
+            result.success = false
+        }
+    }

167-264: High cognitive complexity in applyProof with nested rollback logic.

The function has complexity of 24 (exceeds 15). The rollback logic (lines 228-238) is particularly complex. Consider extracting it to a dedicated helper.

+    /**
+     * Rollback previously applied edits on failure
+     */
+    private static async rollbackAppliedEdits(
+        editResults: GCRResult[],
+        gcrEdits: GCREdit[],
+        mockTx: any
+    ): Promise<void> {
+        for (let i = editResults.length - 2; i >= 0; i--) {
+            if (editResults[i].success) {
+                const rollbackEdit = { ...gcrEdits[i], isRollback: true }
+                await HandleGCR.apply(rollbackEdit, mockTx, true, false)
+            }
+        }
+    }

289-300: Use explicit compare function for deterministic sorting.

For deterministic hash ordering across all environments, use localeCompare as flagged by static analysis. Note that the sorted array must be hoisted into a variable before the object literal; a const declaration cannot appear inside it.

+            const sortedProofHashes = proofs
+                .map(p => p.transactions_hash)
+                .sort((a, b) => a.localeCompare(b))
...
-                proof_hashes: proofs.map(p => p.transactions_hash).sort(),
+                proof_hashes: sortedProofHashes,
                 transaction_count: totalTransactions,
...
             const batchHash = Hashing.sha256(JSON.stringify({
                 blockNumber,
-                proofHashes: batchPayload.proof_hashes,
-                l2psNetworks: l2psNetworks.sort()
+                proofHashes: sortedProofHashes,
+                l2psNetworks: l2psNetworks.slice().sort((a, b) => a.localeCompare(b))
             }))

394-402: Dynamic imports in rollback path may impact performance.

Using dynamic imports inside the rollback loop adds latency to an already critical recovery path. Consider caching the repository reference or initializing it once at the start of rollbackProofsForBlock.

     static async rollbackProofsForBlock(blockNumber: number): Promise<void> {
         try {
+            // Initialize repository once at the start
+            const repo = await (await import("@/model/datasource")).default.getInstance()
+            const ds = repo.getDataSource()
+            const proofRepo = ds.getRepository((await import("@/model/entities/L2PSProofs")).L2PSProof)
+
             // Get proofs that were applied in this block
             const appliedProofs = await L2PSProofManager.getProofs(

Then use proofRepo directly inside the loop instead of re-importing.

src/libs/l2ps/L2PSTransactionExecutor.ts (1)

113-226: High cognitive complexity in execute method.

The function has complexity of 31 (exceeds 15 allowed). Consider extracting the switch case handling into a dedicated method like processTransactionByType.

+    /**
+     * Process transaction based on type and generate GCR edits
+     */
+    private static async processTransactionByType(
+        tx: Transaction,
+        simulate: boolean
+    ): Promise<{ gcrEdits: GCREdit[]; affectedAccounts: string[] } | L2PSExecutionResult> {
+        const gcrEdits: GCREdit[] = []
+        const affectedAccounts: string[] = []
+
+        switch (tx.content.type) {
+            case "native": {
+                const result = await this.handleNativeTransaction(tx, simulate)
+                if (!result.success) return result
+                gcrEdits.push(...(result.gcr_edits || []))
+                affectedAccounts.push(...(result.affected_accounts || []))
+                break
+            }
+            // ... other cases
+        }
+        return { gcrEdits, affectedAccounts }
+    }
src/libs/l2ps/L2PSBatchAggregator.ts (3)

1-10: Clean up duplicate imports and prefer node:crypto.

Two minor import improvements:

  1. Lines 4-5 import from @/utilities/sharedState twice—merge into a single import statement.
  2. Line 9 uses crypto instead of node:crypto. The node: prefix is the modern Node.js convention for built-in modules.

Apply this diff:

-import SharedState from "@/utilities/sharedState"
-import { getSharedState } from "@/utilities/sharedState"
+import SharedState, { getSharedState } from "@/utilities/sharedState"
 import log from "@/utilities/logger"
 import { Hashing, ucrypto, uint8ArrayToHex } from "@kynesyslabs/demosdk/encryption"
 import { getNetworkTimestamp } from "@/libs/utils/calibrateTime"
-import crypto from "crypto"
+import crypto from "node:crypto"

79-80: Remove unused batchNonceCounter field.

The batchNonceCounter field at line 80 is never used—the persistent nonce implementation uses SharedState.l2psBatchNonce instead (lines 396-439). This appears to be leftover from the refactoring that addressed the nonce persistence issue.

Apply this diff:

     /** Domain separator for batch transaction signatures */
     private readonly SIGNATURE_DOMAIN = "L2PS_BATCH_TX_V1"

-    /** Persistent nonce counter for batch transactions */
-    private batchNonceCounter: number = 0
-    
     /** Statistics tracking */
     private stats = {

431-438: Improve error logging in nonce storage.

Line 437 logs the error directly in a template literal, which will use Object.toString() and may produce uninformative output like [object Object]. Use error.message or error.stack for better diagnostics.

Apply this diff:

         } catch (error) {
-            log.warning(`[L2PS Batch Aggregator] Failed to persist nonce: ${error}`)
+            log.warning(`[L2PS Batch Aggregator] Failed to persist nonce: ${error instanceof Error ? error.message : String(error)}`)
         }
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

Disabled knowledge base sources:

  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 512eabc and 483046a.

📒 Files selected for processing (9)
  • src/libs/blockchain/l2ps_mempool.ts (9 hunks)
  • src/libs/l2ps/L2PSBatchAggregator.ts (1 hunks)
  • src/libs/l2ps/L2PSConcurrentSync.ts (2 hunks)
  • src/libs/l2ps/L2PSConsensus.ts (1 hunks)
  • src/libs/l2ps/L2PSHashService.ts (4 hunks)
  • src/libs/l2ps/L2PSProofManager.ts (1 hunks)
  • src/libs/l2ps/L2PSTransactionExecutor.ts (1 hunks)
  • src/libs/l2ps/parallelNetworks.ts (18 hunks)
  • src/model/entities/L2PSProofs.ts (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (4)
src/libs/l2ps/parallelNetworks.ts (2)
src/utilities/logger.ts (1)
  • error (125-132)
src/utilities/sharedState.ts (1)
  • getSharedState (276-278)
src/model/entities/L2PSProofs.ts (1)
src/model/entities/L2PSTransactions.ts (1)
  • Entity (27-143)
src/libs/l2ps/L2PSTransactionExecutor.ts (1)
src/libs/l2ps/L2PSProofManager.ts (1)
  • L2PSProofManager (66-366)
src/libs/blockchain/l2ps_mempool.ts (2)
src/libs/blockchain/transaction.ts (1)
  • hash (106-114)
src/utilities/logger.ts (1)
  • error (125-132)
🪛 GitHub Check: SonarCloud Code Analysis
src/libs/l2ps/parallelNetworks.ts

[warning] 60-60: Prefer String#replaceAll() over String#replace().

See more on https://sonarcloud.io/project/issues?id=kynesyslabs_node&issues=AZru32Alx7vD0jIgt1nV&open=AZru32Alx7vD0jIgt1nV&pullRequest=514

src/libs/l2ps/L2PSConsensus.ts

[failure] 299-299: Provide a compare function that depends on "String.localeCompare", to reliably sort elements alphabetically.

See more on https://sonarcloud.io/project/issues?id=kynesyslabs_node&issues=AZrvJoWOFDjduF-gtQu5&open=AZrvJoWOFDjduF-gtQu5&pullRequest=514


[warning] 299-299: Move this array "sort" operation to a separate statement or replace it with "toSorted".

See more on https://sonarcloud.io/project/issues?id=kynesyslabs_node&issues=AZrvJoWOFDjduF-gtQu4&open=AZrvJoWOFDjduF-gtQu4&pullRequest=514


[failure] 167-167: Refactor this function to reduce its Cognitive Complexity from 24 to the 15 allowed.

See more on https://sonarcloud.io/project/issues?id=kynesyslabs_node&issues=AZru32AOx7vD0jIgt1nU&open=AZru32AOx7vD0jIgt1nU&pullRequest=514


[failure] 68-68: Refactor this function to reduce its Cognitive Complexity from 20 to the 15 allowed.

See more on https://sonarcloud.io/project/issues?id=kynesyslabs_node&issues=AZrvJoWOFDjduF-gtQu2&open=AZrvJoWOFDjduF-gtQu2&pullRequest=514


[failure] 289-289: Provide a compare function that depends on "String.localeCompare", to reliably sort elements alphabetically.

See more on https://sonarcloud.io/project/issues?id=kynesyslabs_node&issues=AZrvJoWOFDjduF-gtQu3&open=AZrvJoWOFDjduF-gtQu3&pullRequest=514

src/libs/l2ps/L2PSBatchAggregator.ts

[warning] 80-80: Member 'batchNonceCounter' is never reassigned; mark it as readonly.

See more on https://sonarcloud.io/project/issues?id=kynesyslabs_node&issues=AZru32BGx7vD0jIgt1nm&open=AZru32BGx7vD0jIgt1nm&pullRequest=514


[warning] 437-437: 'error' will use Object's default stringification format ('[object Object]') when stringified.

See more on https://sonarcloud.io/project/issues?id=kynesyslabs_node&issues=AZru32BGx7vD0jIgt1no&open=AZru32BGx7vD0jIgt1no&pullRequest=514


[warning] 4-4: '@/utilities/sharedState' imported multiple times.

See more on https://sonarcloud.io/project/issues?id=kynesyslabs_node&issues=AZru32BGx7vD0jIgt1nj&open=AZru32BGx7vD0jIgt1nj&pullRequest=514


[warning] 9-9: Prefer node:crypto over crypto.

See more on https://sonarcloud.io/project/issues?id=kynesyslabs_node&issues=AZru32BGx7vD0jIgt1nl&open=AZru32BGx7vD0jIgt1nl&pullRequest=514


[failure] 352-352: Provide a compare function that depends on "String.localeCompare", to reliably sort elements alphabetically.

See more on https://sonarcloud.io/project/issues?id=kynesyslabs_node&issues=AZru32BGx7vD0jIgt1nn&open=AZru32BGx7vD0jIgt1nn&pullRequest=514


[warning] 5-5: '@/utilities/sharedState' imported multiple times.

See more on https://sonarcloud.io/project/issues?id=kynesyslabs_node&issues=AZru32BGx7vD0jIgt1nk&open=AZru32BGx7vD0jIgt1nk&pullRequest=514

src/libs/l2ps/L2PSTransactionExecutor.ts

[warning] 240-240: Replace this "switch" statement by "if" statements to increase readability.

See more on https://sonarcloud.io/project/issues?id=kynesyslabs_node&issues=AZru32A8x7vD0jIgt1nc&open=AZru32A8x7vD0jIgt1nc&pullRequest=514


[warning] 59-59: Expected non-Promise value in a boolean conditional.

See more on https://sonarcloud.io/project/issues?id=kynesyslabs_node&issues=AZru32A8x7vD0jIgt1nZ&open=AZru32A8x7vD0jIgt1nZ&pullRequest=514


[failure] 113-113: Refactor this function to reduce its Cognitive Complexity from 31 to the 15 allowed.

See more on https://sonarcloud.io/project/issues?id=kynesyslabs_node&issues=AZru32A8x7vD0jIgt1na&open=AZru32A8x7vD0jIgt1na&pullRequest=514

src/libs/l2ps/L2PSProofManager.ts

[warning] 242-242: Complete the task associated to this "TODO" comment.

See more on https://sonarcloud.io/project/issues?id=kynesyslabs_node&issues=AZru32Axx7vD0jIgt1nY&open=AZru32Axx7vD0jIgt1nY&pullRequest=514


[warning] 75-75: Expected non-Promise value in a boolean conditional.

See more on https://sonarcloud.io/project/issues?id=kynesyslabs_node&issues=AZru32Axx7vD0jIgt1nX&open=AZru32Axx7vD0jIgt1nX&pullRequest=514


[warning] 204-204: Complete the task associated to this "TODO" comment.

See more on https://sonarcloud.io/project/issues?id=kynesyslabs_node&issues=AZru77doJeh6yjF3sOs5&open=AZru77doJeh6yjF3sOs5&pullRequest=514


[failure] 32-32: Provide a compare function that depends on "String.localeCompare", to reliably sort elements alphabetically.

See more on https://sonarcloud.io/project/issues?id=kynesyslabs_node&issues=AZru32Axx7vD0jIgt1nW&open=AZru32Axx7vD0jIgt1nW&pullRequest=514

🔇 Additional comments (27)
src/libs/l2ps/L2PSConcurrentSync.ts (2)

119-119: LGTM: Log level consistency update.

The change from log.warn to log.warning aligns with the logging API used elsewhere in the L2PS codebase.


190-222: Transaction insertion flow is well-structured.

The duplicate filtering before insertion and the graceful handling of addTransaction failures (distinguishing between duplicates and other errors) is appropriate.

src/libs/l2ps/parallelNetworks.ts (4)

108-132: Good race condition prevention with loading promise lock.

The pattern of storing loading promises to prevent concurrent loadL2PS calls for the same UID is appropriate. The promise is correctly cleaned up in the finally block.
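
For reference, a minimal sketch of the promise lock (doLoad and L2PSInstance are stand-ins for the real loader and instance type):

type L2PSInstance = { uid: string }

// Stand-in for the actual config/key loading work.
async function doLoad(uid: string): Promise<L2PSInstance> {
    return { uid }
}

const loadingPromises = new Map<string, Promise<L2PSInstance>>()

async function loadL2PS(uid: string): Promise<L2PSInstance> {
    const inFlight = loadingPromises.get(uid)
    if (inFlight) return inFlight // concurrent callers share one load

    const promise = (async () => {
        try {
            return await doLoad(uid)
        } finally {
            loadingPromises.delete(uid) // released on success or failure
        }
    })()
    loadingPromises.set(uid, promise)
    return promise
}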


141-198: Security-conscious configuration loading.

Path traversal validation (lines 143-148) and comprehensive key file validation (lines 166-179) are good security practices. The hex-to-bytes conversion for cryptographic material is properly handled.
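
An illustrative traversal guard of the kind described (the base-directory layout is an assumption):

import path from "node:path"

// Resolve the config path and reject anything that escapes baseDir,
// e.g. a uid like "../../etc/passwd".
function resolveConfigPath(baseDir: string, uid: string): string {
    const resolved = path.resolve(baseDir, `${uid}.json`)
    if (!resolved.startsWith(path.resolve(baseDir) + path.sep)) {
        throw new Error(`Path traversal attempt rejected: ${uid}`)
    }
    return resolved
}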


260-283: Encryption flow with proper signing.

The encryptTransaction method correctly awaits the async encryption and signs the result with the node's private key. The signature attachment pattern is consistent.


336-352: Safe array destructuring with validation.

Good defensive programming - validating the array structure before destructuring prevents runtime errors from malformed transaction data.

src/libs/blockchain/l2ps_mempool.ts (3)

10-31: Well-defined status lifecycle constants.

The L2PS_STATUS object with as const assertion provides type safety and the documented lifecycle flow is clear: pending → processed → executed → batched → confirmed.


347-370: Efficient batch status update implementation.

The updateStatusBatch method correctly uses TypeORM's In operator for bulk updates and includes proper early return for empty arrays.


478-502: Consistent timestamp comparison in cleanup.

The CAST(timestamp AS BIGINT) approach ensures correct numeric comparison for string-stored timestamps, matching the pattern in the cleanup method.

src/libs/l2ps/L2PSHashService.ts (2)

224-228: Log level consistency maintained.

The change to log.warning aligns with the logging convention used across the L2PS codebase.


98-99: Demos instance initialization is appropriately placed.

Initializing the Demos SDK instance once during service start and reusing it for all hash update transactions is efficient.

src/model/entities/L2PSProofs.ts (1)

38-49: Entity structure and indexes are well-defined.

The index naming is now consistent (IDX_L2PS_PROOFS_*) and the compound index on [l2ps_uid, status] will optimize common query patterns.

src/libs/l2ps/L2PSTransactionExecutor.ts (4)

57-72: Lazy initialization pattern is correct.

The check if (this.initPromise) correctly tests for existence of an ongoing initialization, not the promise's resolved value. This prevents multiple concurrent initializations.


240-297: Single-case switch is intentional for extensibility.

While static analysis suggests using if statements, the switch structure anticipates future native operations beyond "send". The current implementation is acceptable.


399-405: Good: Update result verification implemented.

The check for result.affected === 0 addresses the previous review feedback about silent success when no rows match.


264-281: GCR edits correctly capture transfer semantics.

The debit/credit pair with proper account references and transaction hash linking follows the expected pattern for balance modifications.

src/libs/l2ps/L2PSBatchAggregator.ts (5)

11-29: Well-structured batch payload interface.

The L2PSBatchPayload interface is clear and well-documented. The combination of encrypted_batch, batch_hash, and authentication_tag provides both confidentiality and integrity verification.
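
A sketch of the interface as described (encrypted_batch, batch_hash, and authentication_tag come from the review; the remaining fields and all types are assumptions):

interface L2PSBatchPayload {
    l2ps_uid: string            // network the batch belongs to (assumed)
    encrypted_batch: string     // ciphertext of the batched transactions
    batch_hash: string          // integrity hash over the batch
    authentication_tag: string  // HMAC binding the batch to the node's key
    transaction_count: number   // assumed
    timestamp: number           // assumed
}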


116-183: Solid lifecycle management with graceful shutdown; a sketch of the stop path follows the list below.

The start/stop methods properly handle:

  • Service state validation
  • Reentrancy protection with isAggregating flag
  • Graceful shutdown with configurable timeout
  • Statistics reset on restart
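
A minimal sketch of such a stop path, assuming a polling wait on the in-flight flag:

class StoppableServiceSketch {
    private interval: NodeJS.Timeout | null = null
    private isAggregating = false

    // Clear the timer first, then wait (bounded) for any in-flight
    // aggregation cycle to drain before declaring the stop complete.
    async stop(timeoutMs = 5000): Promise<void> {
        if (this.interval) clearInterval(this.interval)
        this.interval = null

        const deadline = Date.now() + timeoutMs
        while (this.isAggregating && Date.now() < deadline) {
            await new Promise(resolve => setTimeout(resolve, 50))
        }
    }
}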

235-262: Clean aggregation logic with proper grouping.

The aggregation workflow is well-structured:

  1. Fetch executed transactions
  2. Group by L2PS UID
  3. Process each network's batch independently

This allows multiple L2PS networks to be batched in the same cycle efficiently.
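
The grouping step itself reduces to a simple bucketing pass, sketched here:

// Bucket executed transactions by their L2PS network UID so each
// network is batched independently.
function groupByUid<T extends { l2ps_uid: string }>(txs: T[]): Map<string, T[]> {
    const groups = new Map<string, T[]>()
    for (const tx of txs) {
        const bucket = groups.get(tx.l2ps_uid) ?? []
        bucket.push(tx)
        groups.set(tx.l2ps_uid, bucket)
    }
    return groups
}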


337-385: HMAC generation properly secured.

The HMAC fallback issue from previous reviews has been correctly addressed. The code now throws an error if the keypair is unavailable (line 364-365), ensuring batch authentication always uses proper cryptographic material.
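
An illustrative tag computation under that fail-hard policy (key handling is an assumption):

import crypto from "node:crypto"

function computeAuthTag(encryptedBatch: string, keyMaterial?: Uint8Array): string {
    if (!keyMaterial) {
        // No silent fallback: refuse to authenticate without real key material.
        throw new Error("Keypair unavailable: refusing to authenticate batch")
    }
    return crypto
        .createHmac("sha256", Buffer.from(keyMaterial))
        .update(encryptedBatch)
        .digest("hex")
}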


540-637: Comprehensive statistics and clean public API.

The cleanup, statistics tracking, and public API methods are well-designed:

  • cleanupOldBatchedTransactions() prevents unbounded growth
  • Statistics provide operational visibility
  • getStatus() and getStatistics() enable monitoring
  • forceAggregation() supports testing/debugging
src/libs/l2ps/L2PSProofManager.ts (6)

44-59: Clear result interfaces.

The ProofCreationResult and ProofApplicationResult interfaces provide clean, type-safe return values with optional fields for success cases and required error messages.


106-172: Deterministic proof creation properly implemented.

The createProof method correctly addresses the previous review concern about non-deterministic hashing. Lines 118-120 and 127-133 now use deterministicStringify without Date.now(), ensuring the same inputs always produce the same hash.


201-212: Block-specific filtering documented as future work.

The blockNumber parameter is currently unused (line 204 TODO), but this is clearly documented as intended for future block-specific batching. The method signature reserves this parameter to avoid breaking changes later.

Note: The TODO at line 204 represents planned functionality rather than forgotten work.


225-265: Placeholder verification with solid structural validation.

The current verification implementation provides:

  • Basic structure validation (lines 228-231)
  • GCR edit field validation (lines 234-240)
  • Hash consistency checks for placeholder proofs (lines 245-258)

The TODO at line 242 documents the future ZK verification work, which is appropriate for this phase.


273-301: Clean proof status transitions.

The markProofApplied and markProofRejected methods properly handle proof lifecycle transitions with timestamps and appropriate logging.


309-365: Efficient query methods with proper filtering.

The query methods are well-implemented:

  • getProofByBatchHash for L1 batch lookups
  • getProofs with flexible filtering by UID and status
  • getStats efficiently uses Promise.all with cloned query builders (lines 357-362) for parallel counts

Comment on lines 126 to 131
/**
* Individual transaction hashes from L2PS mempool
* Used to update mempool status to 'confirmed' after proof application
*/
@Column("jsonb", { default: "[]" })
transaction_hashes: string[]
⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

rg -n "jsonb.*default" --type ts -C2

Repository: kynesyslabs/node

Length of output: 176


🌐 Web query:

TypeORM JSONB column default value array syntax

💡 Result:

Use a JSONB (not a Postgres array) and set the default as a raw SQL expression. Example:

@Column({
    type: 'jsonb',
    array: false,
    default: () => "'[]'",
    nullable: false,
})
users: { id: string }[] = [];

Notes:

  • Omitting array:false can make TypeORM create jsonb[] (Postgres array of jsonb) which causes malformed-array errors when you try to store a JSON array. [1]
  • The default must be a raw DB expression (hence default: () => "'[]'"); you can also use "'[]'::jsonb" if you prefer an explicit cast. [1][2]

Sources:
[1] StackOverflow — TypeORM jsonb array column (example and fix).
[2] TypeORM docs — Postgres column types.


Fix JSONB default value to use raw SQL expression.

The string literal "[]" will not produce the correct JSONB default at the database level. TypeORM requires JSONB defaults to be raw SQL expressions.

-    @Column("jsonb", { default: "[]" })
+    @Column("jsonb", { default: () => "'[]'" })
     transaction_hashes: string[]

Alternatively, you can use an explicit cast: default: () => "'[]'::jsonb"

🤖 Prompt for AI Agents
In src/model/entities/L2PSProofs.ts around lines 126 to 131, the Column
decorator uses the string literal default: "[]" which does not set a proper
JSONB default at the DB level; change the default to a raw SQL expression or
casted literal (e.g., use TypeORM's function form like default: () =>
"'[]'::jsonb") so the database column receives a true jsonb empty array default;
update the @Column(...) options accordingly and run migrations or sync to apply
the corrected default.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
src/libs/l2ps/parallelNetworks.ts (1)

268-283: Silent failure when signing fails.

If ucrypto.sign returns null or undefined, the transaction is returned without a signature and no warning is logged. This could make debugging difficult if signing unexpectedly fails.

Consider logging when signature generation fails:

         if (signature) {
             encryptedTx.signature = {
                 type: sharedState.signingAlgorithm,
                 data: uint8ArrayToHex(signature.signature),
             }
+        } else {
+            log.warning(`[L2PS] Failed to sign encrypted transaction for ${uid}`)
         }
♻️ Duplicate comments (2)
src/libs/l2ps/L2PSBatchAggregator.ts (1)

507-510: reference_block: 0 may bypass block anchoring semantics.

This was flagged in a past review. The comment says "Will be set by mempool" but based on the provided mempool_v2.ts snippet, reference_block appears to be passed through unchanged. Transactions with reference_block: 0 might:

  1. Never expire (if expiry is based on reference_block)
  2. Have incorrect ordering relative to other transactions

Verify whether Mempool.addTransaction actually sets reference_block or if it needs to be computed here. If the mempool doesn't override it, set it to the current block height:

-                reference_block: 0, // Will be set by mempool
+                reference_block: sharedState.lastKnownBlock || 0, // Anchor to current block
src/libs/peer/routines/getPeerIdentity.ts (1)

46-96: Critical: Inconsistent hex prefix between normalization paths breaks identity comparison.

This is the same issue flagged in the previous review and remains unresolved. The function returns different canonical formats depending on input type:

  • String inputs (line 52): asHexString(raw) → returns "0x..." format
  • Byte inputs (lines 56, 65, 69, 73, 78-80): uint8ArrayToHex(...).toLowerCase() → returns bare hex without "0x" prefix

When receivedIdentity === expectedIdentity comparison executes at line 254, it will fail if one identity was derived from a string and the other from bytes, even when they represent the same public key.

Apply this diff to wrap all byte-path returns with asHexString or prepend "0x":

     if (raw instanceof Uint8Array) {
-        return uint8ArrayToHex(raw).toLowerCase()
+        return `0x${uint8ArrayToHex(raw).toLowerCase()}`
     }
 
     if (ArrayBuffer.isView(raw)) {
         const view = raw as ArrayBufferView
         const bytes =
             view instanceof Uint8Array
                 ? view
                 : new Uint8Array(view.buffer, view.byteOffset, view.byteLength)
-        return uint8ArrayToHex(bytes).toLowerCase()
+        return `0x${uint8ArrayToHex(bytes).toLowerCase()}`
     }
 
     if (raw instanceof ArrayBuffer) {
-        return uint8ArrayToHex(new Uint8Array(raw)).toLowerCase()
+        return `0x${uint8ArrayToHex(new Uint8Array(raw)).toLowerCase()}`
     }
 
     if (Array.isArray(raw) && raw.every(item => typeof item === "number")) {
-        return uint8ArrayToHex(Uint8Array.from(raw)).toLowerCase()
+        return `0x${uint8ArrayToHex(Uint8Array.from(raw)).toLowerCase()}`
     }
 
     const maybeBuffer = raw as Partial<BufferPayload>
     if (maybeBuffer?.type === "Buffer" && Array.isArray(maybeBuffer.data)) {
-        return uint8ArrayToHex(
-            Uint8Array.from(maybeBuffer.data),
-        ).toLowerCase()
+        return `0x${uint8ArrayToHex(
+            Uint8Array.from(maybeBuffer.data),
+        ).toLowerCase()}`
     }
🧹 Nitpick comments (9)
src/libs/l2ps/L2PSConsensus.ts (2)

354-411: Dynamic imports inside rollback loop are inefficient and error-prone.

The dynamic imports at lines 394-396 are executed inside the for loop, which:

  1. Creates unnecessary overhead by importing the same modules multiple times
  2. Could fail partway through rollback, leaving state inconsistent

Move imports outside the loop:

     static async rollbackProofsForBlock(blockNumber: number): Promise<void> {
         try {
+            // Import once before the loop
+            const repo = await (await import("@/model/datasource")).default.getInstance()
+            const ds = repo.getDataSource()
+            const { L2PSProof: L2PSProofEntity } = await import("@/model/entities/L2PSProofs")
+            const proofRepo = ds.getRepository(L2PSProofEntity)
+
             // Get proofs that were applied in this block
             const appliedProofs = await L2PSProofManager.getProofs(
                 "", // all L2PS networks
                 "applied",
                 1000
             )

             // ... rollback logic ...

             for (const proof of proofsToRollback) {
                 // ... rollback each edit ...

-                // Reset proof status to pending
-                const repo = await (await import("@/model/datasource")).default.getInstance()
-                const ds = repo.getDataSource()
-                const proofRepo = ds.getRepository((await import("@/model/entities/L2PSProofs")).L2PSProof)
-
                 await proofRepo.update(proof.id, {
                     status: "pending",
                     applied_block_number: null,
                     processed_at: null
                 })
             }

416-429: getBlockStats fetches all applied proofs with high limit - potential performance issue.

This method fetches up to 10,000 proofs and filters in memory. For a production system with many blocks and proofs, this could be slow and memory-intensive.

Consider adding a database-level filter for applied_block_number:

     static async getBlockStats(blockNumber: number): Promise<{
         proofsApplied: number
         totalEdits: number
         affectedAccounts: number
     }> {
-        const appliedProofs = await L2PSProofManager.getProofs("", "applied", 10000)
-        const blockProofs = appliedProofs.filter(p => p.applied_block_number === blockNumber)
+        // Use direct query with block filter for efficiency
+        const repo = await (await import("@/model/datasource")).default.getInstance()
+        const ds = repo.getDataSource()
+        const proofRepo = ds.getRepository((await import("@/model/entities/L2PSProofs")).L2PSProof)
+        
+        const blockProofs = await proofRepo.find({
+            where: { status: "applied", applied_block_number: blockNumber }
+        })

         return {
             proofsApplied: blockProofs.length,
             totalEdits: blockProofs.reduce((sum, p) => sum + p.gcr_edits.length, 0),
             affectedAccounts: new Set(blockProofs.flatMap(p => p.affected_accounts)).size
         }
     }
src/libs/l2ps/L2PSTransactionExecutor.ts (1)

138-155: demoswork case missing block scope - inconsistent with other cases.

The case "demoswork": and default: cases at lines 138 and 157 are not wrapped in blocks, unlike case "native":. While these cases don't declare variables that would cause TDZ issues currently, adding blocks would maintain consistency and prevent future issues.

-            case "demoswork":
+            case "demoswork": {
                 if (tx.content.gcr_edits && tx.content.gcr_edits.length > 0) {
                     // ...
                 }
                 break
+            }

-            default:
+            default: {
                 if (tx.content.gcr_edits && tx.content.gcr_edits.length > 0) {
                     // ...
                 }
+            }
src/libs/l2ps/L2PSProofManager.ts (1)

352-364: QueryBuilder cloning may cause unexpected behavior in parallel execution.

The getStats method clones the query builder multiple times for parallel getCount() calls. While this works, TypeORM's clone() shares some internal state. The parallel execution with Promise.all should be safe here, but consider using separate query builders if issues arise.

For maximum safety, drop the shared builder in favor of independent repository count queries:

     static async getStats(l2psUid?: string): Promise<{
         pending: number
         applied: number
         rejected: number
         total: number
     }> {
         const repo = await this.getRepo()
+        
+        const baseWhere = l2psUid ? { l2ps_uid: l2psUid } : {}

         const [pending, applied, rejected, total] = await Promise.all([
-            queryBuilder.clone().andWhere("proof.status = :status", { status: "pending" }).getCount(),
-            queryBuilder.clone().andWhere("proof.status = :status", { status: "applied" }).getCount(),
-            queryBuilder.clone().andWhere("proof.status = :status", { status: "rejected" }).getCount(),
-            queryBuilder.clone().getCount()
+            repo.count({ where: { ...baseWhere, status: "pending" as L2PSProofStatus } }),
+            repo.count({ where: { ...baseWhere, status: "applied" as L2PSProofStatus } }),
+            repo.count({ where: { ...baseWhere, status: "rejected" as L2PSProofStatus } }),
+            repo.count({ where: baseWhere })
         ])

         return { pending, applied, rejected, total }
     }
src/libs/l2ps/L2PSBatchAggregator.ts (2)

611-616: Inconsistent SharedState access pattern.

Line 615 uses SharedState.getInstance() directly while lines 340, 415, 432, and 452 use the imported getSharedState. This inconsistency could cause confusion and potential issues if the access patterns differ.

Use consistent access pattern throughout:

     getStatus(): {
         // ...
     } {
         return {
             isRunning: this.isRunning,
             isAggregating: this.isAggregating,
             intervalMs: this.AGGREGATION_INTERVAL,
-            joinedL2PSCount: SharedState.getInstance().l2psJoinedUids?.length || 0,
+            joinedL2PSCount: getSharedState.l2psJoinedUids?.length || 0,
         }
     }

461-461: Public key type assumption may fail for non-Uint8Array keys.

The code casts sharedState.keypair.publicKey as Uint8Array without runtime validation. If the key is stored in a different format (e.g., hex string), this will produce incorrect results.

Add runtime type checking or use a normalized getter:

-            const nodeIdentityHex = uint8ArrayToHex(sharedState.keypair.publicKey as Uint8Array)
+            let nodeIdentityHex: string
+            if (sharedState.keypair.publicKey instanceof Uint8Array) {
+                nodeIdentityHex = uint8ArrayToHex(sharedState.keypair.publicKey)
+            } else if (typeof sharedState.keypair.publicKey === 'string') {
+                nodeIdentityHex = sharedState.keypair.publicKey
+            } else {
+                log.error("[L2PS Batch Aggregator] Unsupported public key format")
+                return false
+            }
src/libs/network/endpointHandlers.ts (3)

57-58: Tighten or remove temporary SubnetPayload alias once real type exists

Right now SubnetPayload = any is unused here and may confuse future readers. Once the proper export is available, either replace this alias with the real type where needed or drop it entirely if you keep relying on L2PSTransaction instead.


330-377: Strengthen L2PS encrypted tx validation with better typing and sender checks

The new l2psEncryptedTx case correctly short‑circuits on malformed input and returns early on success so L2PS txs never hit the main mempool. A couple of refinements would make this path more robust:

  • Use the existing L2PSEncryptedPayload (or a dedicated interface) to type tx.content.data[1] instead of relying on typeof l2psPayload === "object", and consider validating a few required fields.
  • Consider asserting that the derived senderAddress matches the sender argument (or whatever canonical identity you use) so a peer cannot submit a tx on behalf of another identity just by setting from/from_ed25519_address.

These are hardening suggestions; the current logic is functionally consistent with the surrounding flow.


814-898: Verify block_number field usage and align behavior with doc comment

The overall validation flow for handleL2PSHashUpdate looks solid (structure checks, required fields, guarding updateHash with try/catch), but there are a couple of points to double‑check:

  • You gate on tx.block_number and pass BigInt(tx.block_number) to L2PSHashes.updateHash, while other parts of this file use blockNumber. Please confirm that Transaction for this path indeed exposes block_number (and is populated before this handler runs); otherwise this check will always fail and the update will be rejected.
  • The doc comment says “Validates that the sender is part of the L2PS network”, but the implementation only checks that this node has a local l2psInstance for the UID. If sender‑membership validation is required by the protocol, you may want to incorporate the sender identity and signature here, or adjust the comment to reflect the actual behavior.

Neither blocks merging, but they’re worth verifying so L2PS hash updates behave as intended.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

Disabled knowledge base sources:

  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 483046a and 34e3ab5.

📒 Files selected for processing (8)
  • src/libs/l2ps/L2PSBatchAggregator.ts (1 hunks)
  • src/libs/l2ps/L2PSConsensus.ts (1 hunks)
  • src/libs/l2ps/L2PSProofManager.ts (1 hunks)
  • src/libs/l2ps/L2PSTransactionExecutor.ts (1 hunks)
  • src/libs/l2ps/parallelNetworks.ts (18 hunks)
  • src/libs/network/endpointHandlers.ts (4 hunks)
  • src/libs/network/routines/transactions/handleL2PS.ts (2 hunks)
  • src/libs/peer/routines/getPeerIdentity.ts (3 hunks)
🧰 Additional context used
🧬 Code graph analysis (5)
src/libs/peer/routines/getPeerIdentity.ts (3)
src/features/zk/iZKP/zk.ts (1)
  • generateChallenge (35-38)
src/libs/peer/Peer.ts (1)
  • Peer (15-346)
src/libs/network/manageNodeCall.ts (1)
  • NodeCall (37-41)
src/libs/l2ps/L2PSBatchAggregator.ts (6)
src/utilities/logger.ts (1)
  • error (125-132)
src/libs/blockchain/l2ps_mempool.ts (1)
  • L2PS_STATUS (16-29)
src/utilities/sharedState.ts (2)
  • getSharedState (276-278)
  • SharedState (17-271)
src/libs/crypto/hashing.ts (1)
  • Hashing (15-26)
src/libs/blockchain/transaction.ts (1)
  • hash (106-114)
src/libs/blockchain/mempool_v2.ts (1)
  • Mempool (18-227)
src/libs/l2ps/parallelNetworks.ts (2)
src/utilities/logger.ts (1)
  • error (125-132)
src/utilities/sharedState.ts (1)
  • getSharedState (276-278)
src/libs/l2ps/L2PSProofManager.ts (4)
src/libs/crypto/hashing.ts (1)
  • Hashing (15-26)
src/model/entities/L2PSProofs.ts (1)
  • L2PSProofStatus (27-31)
src/utilities/logger.ts (1)
  • error (125-132)
src/features/activitypub/feditypes.ts (1)
  • rejected (62-62)
src/libs/network/endpointHandlers.ts (1)
src/libs/crypto/hashing.ts (1)
  • Hashing (15-26)
🔇 Additional comments (21)
src/libs/network/routines/transactions/handleL2PS.ts (2)

137-170: Execution flow and status updates look correct.

The execution flow properly uses l2psTx.hash (the encrypted transaction hash) for mempool status updates, which correctly matches the hash stored in the mempool at line 124. Error handling appropriately logs and returns descriptive messages for both caught exceptions and failed execution results.


172-186: Response structure exposes detailed execution metadata.

The response now includes comprehensive execution details (proof_id, affected_accounts, message) which is useful for debugging and tracing. However, exposing affected_accounts in the response could leak information about which accounts are involved in the L2PS transaction.

Verify whether exposing affected_accounts in the response aligns with L2PS privacy requirements. If L2PS is meant to provide privacy, this could inadvertently leak transaction participants.

src/libs/l2ps/L2PSConsensus.ts (1)

302-329: L1 batch transaction uses empty signature data.

The batch transaction at line 309 has an empty signature (data: ""). While the comment says "System-generated, no actual signature needed", this could cause issues:

  1. Signature verification logic elsewhere may reject transactions with empty signatures
  2. No authentication that this transaction came from a legitimate consensus participant

Verify that downstream transaction handling (mempool, block inclusion, validation) accepts transactions with empty signature data, or consider signing with the node's key for authenticity.
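For illustration, a minimal sketch of the signing alternative, assuming tweetnacl for ed25519 and a 64-byte secret key available on the node's shared state (the helper name and transaction shape are hypothetical):

import nacl from "tweetnacl"

// Hypothetical shape: just enough of a transaction to show the signing step
interface SignableTx {
    hash: string
    signature: { type: string; data: string }
}

// Sign the batch transaction hash with the node's ed25519 secret key
function signBatchTransaction(tx: SignableTx, secretKey: Uint8Array): void {
    const message = new TextEncoder().encode(tx.hash)
    const sig = nacl.sign.detached(message, secretKey) // 64-byte secret key
    tx.signature = {
        type: "ed25519",
        data: Buffer.from(sig).toString("hex"), // instead of data: ""
    }
}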

src/libs/l2ps/L2PSTransactionExecutor.ts (3)

127-174: Switch cases properly scoped with block statements.

The switch cases are now wrapped in blocks (case "native": { ... }) as recommended in the past review. This prevents variable hoisting issues and TDZ problems.


240-297: Native transaction handling correctly validates amount type.

The amount validation at lines 246-248 properly checks for typeof amount !== 'number', !Number.isFinite(amount), and amount <= 0 as recommended in the past review.


399-405: Update status now properly checks affected rows.

The updateTransactionStatus method now checks result.affected === 0 and logs a warning, addressing the past review concern about silent failures.

src/libs/l2ps/L2PSProofManager.ts (2)

201-212: getProofsForBlock ignores blockNumber parameter - documented with TODO.

The previous review noted this issue. The method now includes a TODO comment explaining that block-specific filtering will be implemented when block-specific batching is ready. This is acceptable as a known limitation.

The TODO at line 204-205 clearly documents the intent to filter by target_block_number in the future.


117-120: Deterministic hash generation now uses stable inputs.

The transactionsHash computation no longer includes Date.now(), addressing the past review concern. It now uses deterministicStringify with { l2psUid, l1BatchHash, gcrEdits } which are all stable inputs.

src/libs/l2ps/L2PSBatchAggregator.ts (2)

363-374: HMAC security properly enforced - no insecure fallback.

The code now throws an error when sharedState.keypair?.privateKey is unavailable instead of falling back to the predictable batchHash. This addresses the past review's critical security concern.


395-438: Nonce persistence only survives within process lifetime.

The nonce is stored in sharedState.l2psBatchNonce, which only persists during the current process lifetime. On service restart, the nonce resets to 0 and relies on timestamp * 1000 to avoid collisions. While Math.max(timestampNonce, lastNonce + 1) provides some protection, rapid restarts within the same millisecond could still cause issues.

For production resilience, consider persisting the nonce to the database:

// Example: Store in a settings/config table
private async getLastNonceFromStorage(): Promise<number> {
    const repo = await Datasource.getInstance()
    const ds = repo.getDataSource()
    // Query from a persistent settings table
    const setting = await ds.getRepository(Settings).findOne({ 
        where: { key: 'l2ps_batch_nonce' } 
    })
    return setting?.value ? parseInt(setting.value, 10) : 0
}
src/libs/peer/routines/getPeerIdentity.ts (2)

140-173: Excellent: Proper ed25519 signature verification now implemented.

This addresses the previous review concern about incomplete signature verification. The function now:

  • Uses proper cryptographic verification via ucrypto.verify instead of heuristic checks
  • Implements domain separation ("DEMOS_PEER_AUTH_V1") to prevent signature reuse across contexts
  • Correctly normalizes inputs by removing "0x" prefixes before verification
  • Includes appropriate error handling

180-180: Verify that all calling code handles the nullable return type.

The return type changed from Promise<Peer> to Promise<Peer | null>. This is a breaking API change that requires all call sites to properly handle the null case. Ensure the codebase has been fully updated to check for null returns before this change is merged.
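A null-aware call site would look something like the sketch below (the wrapper and its parameters are hypothetical; only the Peer | null contract comes from this PR):

import type { Peer } from "@/libs/peer/Peer"

// Hypothetical wrapper showing the null-aware pattern every call site needs
async function requireVerifiedPeer(
    peer: Peer,
    expectedKey: string,
    getPeerIdentity: (p: Peer, k: string) => Promise<Peer | null>,
): Promise<Peer> {
    const verified = await getPeerIdentity(peer, expectedKey)
    if (verified === null) {
        // Fail fast: never continue with an unverified peer
        throw new Error("Peer identity verification failed")
    }
    return verified
}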

src/libs/l2ps/parallelNetworks.ts (7)

108-133: Good implementation of promise locking to prevent race conditions.

The pattern correctly handles concurrent load requests for the same UID by returning the existing promise, and properly cleans up in the finally block.


141-169: Good defense-in-depth security practices.

Path traversal protection via startsWith check (lines 146-148) combined with UID validation (line 110) provides robust protection. The explicit validation of nodeConfig.keys before property access (lines 167-169) prevents potential undefined access errors.
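For readers unfamiliar with the pattern, a generic sketch of this guard (not the repo's exact code) - note the base + path.sep comparison, which keeps a sibling directory like /base-evil from passing a bare startsWith(base) check:

import path from "node:path"

// Generic sketch of the traversal guard
function resolveUidPath(baseDir: string, uid: string): string {
    const base = path.resolve(baseDir)
    const resolved = path.resolve(base, uid)
    // Compare against base + separator so "/base-evil" cannot pass for "/base"
    if (!resolved.startsWith(base + path.sep)) {
        throw new Error(`Path traversal attempt rejected for uid: ${uid}`)
    }
    return resolved
}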


297-311: Verify: Missing signature allows decryption to proceed.

Invalid signatures correctly throw an error (line 307), but missing signatures only log a warning while allowing decryption to continue. This asymmetry could allow unsigned transactions to bypass verification.

If missing signatures should be rejected, consider:

         } else {
-            log.warning(`[L2PS] No signature found on encrypted transaction for ${uid}`)
+            throw new Error(`L2PS transaction missing required signature for ${uid}`)
         }

If allowing unsigned transactions is intentional (e.g., for backwards compatibility), consider documenting this behavior in the method's JSDoc.


336-349: Good defensive validation before destructuring.

The array length check (line 337) prevents potential runtime errors when destructuring, and the error handling provides useful debugging information.


181-187: LGTM - hex-based key loading is well structured.

The flow properly reads hex strings, validates them via hexFileToBytes, and passes the byte data to L2PS.create. The async/await pattern is correctly used.


226-251: Verify all logger methods exist.

This segment uses log.warning (line 230) and log.info (line 244) in addition to log.error. Ensure these methods are defined on the logger class.


396-402: Verify log.debug method exists on the logger class.

The logger utility snippet provided in context only shows log.error(). Confirm that log.debug() is defined and accessible on the logger class, as its absence would cause a runtime error when processing L2PS transactions.
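Until that is confirmed, a defensive wrapper can make a missing level non-fatal; a minimal sketch with a hypothetical logger type:

// Hypothetical logger type: only error() is known to exist for certain
type MinimalLogger = { error(msg: string): void } & Partial<{
    debug(msg: string): void
    info(msg: string): void
    warning(msg: string): void
}>

function debugLog(log: MinimalLogger, msg: string): void {
    // Degrade to a level that definitely exists if debug is missing
    if (typeof log.debug === "function") log.debug(msg)
    else log.error(`[debug] ${msg}`)
}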

src/libs/network/endpointHandlers.ts (2)

19-20: Type‑only import for L2PSTransaction is a clean improvement

Using import type { L2PSTransaction } keeps the runtime bundle lean while preserving typing; the separation from the value import for Transaction looks good.


126-132: GCREdit hashing refactor is fine and keeps semantics unchanged

Factoring the JSON strings into gcrEditsString and txGcrEditsString before hashing preserves behavior while making logging/debugging easier; no issues from a correctness standpoint.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

♻️ Duplicate comments (2)
src/libs/peer/routines/getPeerIdentity.ts (2)

46-95: Identity normalization returns mixed hex formats (“0x…” vs bare hex) – unify to a single canonical form

Right now:

  • normalizeIdentity returns bare hex (no 0x prefix) for all byte/typed-array/BufferPayload cases (uint8ArrayToHex(...).toLowerCase()).
  • String and IdentityEnvelope.publicKey cases go through asHexString, returning 0x-prefixed hex.
  • normalizeExpectedIdentity always returns 0x-prefixed hex.

This makes the equality check on Line 253 (receivedIdentity === expectedIdentity) fail whenever the peer sends identity as bytes/arrays instead of a string, even if both represent the same key. This is the same functional risk flagged in the earlier review.

A minimal fix is to make every non-string path in normalizeIdentity return the same 0x-prefixed canonical form by delegating to asHexString:

@@
-    if (raw instanceof Uint8Array) {
-        return uint8ArrayToHex(raw).toLowerCase()
-    }
+    if (raw instanceof Uint8Array) {
+        return asHexString(uint8ArrayToHex(raw)) // always "0x..." lowercased
+    }
@@
-    if (ArrayBuffer.isView(raw)) {
-        const bytes =
-            raw instanceof Uint8Array
-                ? raw
-                : new Uint8Array(raw.buffer, raw.byteOffset, raw.byteLength)
-        return uint8ArrayToHex(bytes).toLowerCase()
-    }
+    if (ArrayBuffer.isView(raw)) {
+        const bytes =
+            raw instanceof Uint8Array
+                ? raw
+                : new Uint8Array(raw.buffer, raw.byteOffset, raw.byteLength)
+        return asHexString(uint8ArrayToHex(bytes))
+    }
@@
-    if (raw instanceof ArrayBuffer) {
-        return uint8ArrayToHex(new Uint8Array(raw)).toLowerCase()
-    }
+    if (raw instanceof ArrayBuffer) {
+        return asHexString(uint8ArrayToHex(new Uint8Array(raw)))
+    }
@@
-    if (Array.isArray(raw) && raw.every(item => typeof item === "number")) {
-        return uint8ArrayToHex(Uint8Array.from(raw)).toLowerCase()
-    }
+    if (Array.isArray(raw) && raw.every(item => typeof item === "number")) {
+        return asHexString(uint8ArrayToHex(Uint8Array.from(raw)))
+    }
@@
-    const maybeBuffer = raw as Partial<BufferPayload>
-    if (maybeBuffer?.type === "Buffer" && Array.isArray(maybeBuffer.data)) {
-        return uint8ArrayToHex(
-            Uint8Array.from(maybeBuffer.data),
-        ).toLowerCase()
-    }
+    const maybeBuffer = raw as Partial<BufferPayload>
+    if (maybeBuffer?.type === "Buffer" && Array.isArray(maybeBuffer.data)) {
+        return asHexString(uint8ArrayToHex(Uint8Array.from(maybeBuffer.data)))
+    }

With this, both receivedIdentity and expectedIdentity are always "0x..." hex strings and comparisons, logging, and downstream usage are consistent.

If you expect any non-hex payloads in these branches, we can instead prepend "0x" manually and avoid re-validating with asHexString. Would you like help adjusting this to your exact payload shapes?

Also applies to: 97-113


230-251: Challenge-response is still optional – spoofing risk remains (previously flagged)

Even with proper ed25519 verification in place, peers that omit challenge_signature are still accepted after only logging a warning (Lines 245–251). This keeps backward compatibility but leaves an open path for identity spoofing by any peer that doesn’t implement the challenge flow. This is the same concern raised in the earlier review.

If you’re ready to harden this, one strict version would be:

         if (challengeSignature) {
@@
             console.log("[PEER AUTHENTICATION] Challenge-response verified successfully")
         } else {
-            // Log warning but allow connection for backward compatibility
-            console.warn(
-                "[PEER AUTHENTICATION] WARNING: Peer did not provide challenge signature - " +
-                "authentication is weaker without challenge-response verification",
-            )
+            // Reject peers without challenge-response
+            console.log(
+                "[PEER AUTHENTICATION] No challenge signature provided - rejecting connection",
+            )
+            return null
         }

If you need a transition period, I’d recommend:

  • Logging a clear deprecation notice when challenge_signature is missing.
  • Adding metrics so you can see how many peers will be impacted.
  • Planning a staged rollout where non-compliant peers are eventually rejected.
🧹 Nitpick comments (3)
src/libs/network/endpointHandlers.ts (1)

350-358: Misleading comment: sender address is not verified against signature.

The comment states "Verify sender address matches the transaction signature" but the code only checks for the presence of a sender address, not that it cryptographically matches the signature. The actual signature verification happens earlier (lines 238-258) against validatedData.rpc_public_key, which may differ from the transaction's from address.

If authorization requires that the signer is the sender, consider adding actual verification:

                // Verify sender address matches the transaction signature
-                // This prevents unauthorized submission of L2PS transactions
+                // Ensure sender address is present (signature was already verified above)
                const senderAddress = tx.content?.from || tx.content?.from_ed25519_address
                if (!senderAddress) {
                    log.error("[handleExecuteTransaction] L2PS tx rejected: missing sender address")
                    result.success = false
                    result.response = { error: "L2PS transaction requires sender address" }
                    break
                }
+
+                // Optional: verify signer matches sender if required
+                // if (validatedData.rpc_public_key.data !== senderAddress) { ... }
src/libs/peer/routines/getPeerIdentity.ts (2)

115-121: verifyPeer always returns peer even if authentication fails – clarify or tighten the contract

getPeerIdentity can now return null on any verification failure, but verifyPeer ignores that and always returns the original peer. Callers that assume await verifyPeer(...) implies a verified identity may be misled if they don’t also check peer.verification.status.

Consider one of these options:

  1. Propagate failure explicitly (safer, breaking change):
-export async function verifyPeer(
-    peer: Peer,
-    expectedKey: string,
-): Promise<Peer> {
-    await getPeerIdentity(peer, expectedKey)
-    return peer
-}
+export async function verifyPeer(
+    peer: Peer,
+    expectedKey: string,
+): Promise<Peer | null> {
+    return getPeerIdentity(peer, expectedKey)
+}
  2. Keep signature but expose result clearly (non-breaking, but more implicit):
  • Return a boolean (true if verified, false otherwise) and let callers read peer from their own scope.
  • Or, at minimum, document in code comments that callers must check peer.verification.status and/or the return value of getPeerIdentity rather than trusting the returned peer.

I recommend auditing current call sites of verifyPeer to ensure none are treating a successful await as “peer is verified” without an additional check. If you’d like, I can help you write a script to locate and inspect those usages.


124-131: Overall challenge-based identity flow is well-structured; consider minor robustness/logging tweaks

The end-to-end flow—generate challenge → send via NodeCall.data → normalize identity/publicKey → verify ed25519 signature → compare with normalized expected key → update peer state—is coherent and a big improvement security-wise.

A few minor, optional refinements you might consider later:

  • Reduce raw logging of sensitive objects (peer, full response) in production, or route through your structured logger with log levels and redaction for keys/signatures.
  • Include the normalized identity (shortened) in peer.verification.message alongside the truncated challenge to aid debugging multiple peers with similar keys.
  • Guard against malformed response.response types by defaulting responseData to {} and maybe logging its type when normalization fails, to make operational debugging easier.

No blockers here; these are polish items.

Also applies to: 175-182, 188-215, 253-263, 266-272

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

Disabled knowledge base sources:

  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 34e3ab5 and fcd46f6.

📒 Files selected for processing (4)
  • src/libs/l2ps/L2PSConsensus.ts (1 hunks)
  • src/libs/l2ps/L2PSProofManager.ts (1 hunks)
  • src/libs/network/endpointHandlers.ts (4 hunks)
  • src/libs/peer/routines/getPeerIdentity.ts (3 hunks)
🧰 Additional context used
🧬 Code graph analysis (4)
src/libs/peer/routines/getPeerIdentity.ts (3)
src/features/zk/iZKP/zk.ts (1)
  • generateChallenge (35-38)
src/utilities/logger.ts (1)
  • error (125-132)
src/libs/network/manageNodeCall.ts (1)
  • NodeCall (37-41)
src/libs/network/endpointHandlers.ts (1)
src/libs/crypto/hashing.ts (1)
  • Hashing (15-26)
src/libs/l2ps/L2PSConsensus.ts (4)
src/libs/l2ps/L2PSProofManager.ts (1)
  • L2PSProofManager (66-366)
src/libs/blockchain/l2ps_mempool.ts (1)
  • L2PSMempool (48-659)
src/libs/blockchain/gcr/handleGCR.ts (2)
  • GCRResult (73-77)
  • HandleGCR (80-535)
src/libs/blockchain/chain.ts (1)
  • Chain (41-669)
src/libs/l2ps/L2PSProofManager.ts (4)
src/libs/crypto/hashing.ts (1)
  • Hashing (15-26)
src/model/entities/L2PSProofs.ts (1)
  • L2PSProofStatus (27-31)
src/utilities/logger.ts (1)
  • error (125-132)
src/features/activitypub/feditypes.ts (1)
  • rejected (62-62)
🔇 Additional comments (6)
src/libs/network/endpointHandlers.ts (3)

19-19: LGTM!

Using import type for Transaction and L2PSTransaction is the correct pattern since these are only used for type annotations and type assertions, not runtime operations.


123-128: LGTM!

The refactor to use intermediate string variables improves readability and aids debugging while maintaining the same functional behavior.


327-374: Block scope correctly applied.

The switch case is now properly wrapped in a block { } with const l2psResult, addressing the previous scope leakage concern. The early return pattern cleanly prevents L2PS transactions from being added to the main mempool.

src/libs/peer/routines/getPeerIdentity.ts (1)

132-172: Ed25519 challenge verification logic looks correct; confirm ucrypto and TextEncoder behavior in your target runtime

verifyChallenge now:

  • Builds a domain-separated message ("DEMOS_PEER_AUTH_V1:<challenge>").
  • Strips optional 0x from publicKey and signature.
  • Uses ucrypto.verify({ algorithm: "ed25519", message, publicKey, signature }) with hexToUint8Array conversions.

This is a solid upgrade over the previous heuristic and matches typical ed25519 verification patterns. I don’t see functional issues in this block as written.

Please double-check against your actual runtime and SDK version that:

  • ucrypto.verify expects a Uint8Array message and key/signature in the way you’re passing them.
  • TextEncoder is available in your Node target (or polyfilled via TS lib settings).

If you want, I can help you draft a small integration test that signs and verifies a known message with a test keypair to lock this in.
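Such a test could look roughly like this, using tweetnacl as a stand-in ed25519 implementation (the domain string matches the one described above; everything else is hypothetical):

import nacl from "tweetnacl"

const DOMAIN = "DEMOS_PEER_AUTH_V1"

// Round-trip check: a signature over the domain-separated challenge verifies,
// and a tampered challenge does not
function challengeRoundTripHolds(): boolean {
    const { publicKey, secretKey } = nacl.sign.keyPair()
    const challenge = "test-challenge-123" // stand-in for generateChallenge()
    const message = new TextEncoder().encode(`${DOMAIN}:${challenge}`)
    const signature = nacl.sign.detached(message, secretKey)

    const ok = nacl.sign.detached.verify(message, signature, publicKey)
    const tampered = new TextEncoder().encode(`${DOMAIN}:other`)
    const bad = nacl.sign.detached.verify(tampered, signature, publicKey)
    return ok && !bad
}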

src/libs/l2ps/L2PSProofManager.ts (2)

117-152: Deterministic proof hashing and verification look consistent.

Using deterministicStringify both when constructing transactionsHash and the placeholder proof (data) in createProof, and again when recomputing the expected hash in verifyProof, gives you stable, JSONB‑robust hashes and removes the previous Date.now() non‑determinism. This should make proof recreation and verification much more reliable.

Also applies to: 245-252


322-339: Add getAppliedProofsForBlock to support consensus rollback and stats.

To avoid misusing getProofs with an empty UID and to give consensus a precise way to fetch proofs by block, add a dedicated helper with the suggested implementation.

The pattern is sound: creating a specialized query method (getAppliedProofsForBlock) that encapsulates the specific filtering logic by block and applied status prevents misuse of the generic getProofs method, which requires a UID parameter. This is good API design.

Ensure the implementation aligns with:

  • The L2PSProof entity schema (verify applied_block_number field exists)
  • The intended usage in L2PSConsensus.rollbackProofsForBlock and getBlockStats methods
  • Query performance expectations (ordering by id DESC and taking up to 1000 records)

Comment on lines 233 to 240
// Validate each GCR edit has required fields
for (const edit of proof.gcr_edits) {
// Balance and nonce edits require account field
if (!edit.type || (edit.type === 'balance' && !('account' in edit))) {
log.warning(`[L2PS ProofManager] Proof ${proof.id} has invalid GCR edit`)
return false
}
}

⚠️ Potential issue | 🟡 Minor

Include nonce edits in the account-required validation.

The comment states “Balance and nonce edits require account field”, but the guard only enforces this for balance edits. As written, a malformed nonce edit without account would pass structural validation.

You can tighten this with:

-            // Validate each GCR edit has required fields
-            for (const edit of proof.gcr_edits) {
-                // Balance and nonce edits require account field
-                if (!edit.type || (edit.type === 'balance' && !('account' in edit))) {
-                    log.warning(`[L2PS ProofManager] Proof ${proof.id} has invalid GCR edit`)
-                    return false
-                }
-            }
+            // Validate each GCR edit has required fields
+            for (const edit of proof.gcr_edits) {
+                if (!edit.type) {
+                    log.warning(`[L2PS ProofManager] Proof ${proof.id} has invalid GCR edit`)
+                    return false
+                }
+
+                // Balance and nonce edits require an account field
+                if ((edit.type === "balance" || edit.type === "nonce") && !("account" in edit)) {
+                    log.warning(`[L2PS ProofManager] Proof ${proof.id} has invalid GCR edit`)
+                    return false
+                }
+            }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
// Validate each GCR edit has required fields
for (const edit of proof.gcr_edits) {
// Balance and nonce edits require account field
if (!edit.type || (edit.type === 'balance' && !('account' in edit))) {
log.warning(`[L2PS ProofManager] Proof ${proof.id} has invalid GCR edit`)
return false
}
}
// Validate each GCR edit has required fields
for (const edit of proof.gcr_edits) {
if (!edit.type) {
log.warning(`[L2PS ProofManager] Proof ${proof.id} has invalid GCR edit`)
return false
}
// Balance and nonce edits require an account field
if ((edit.type === "balance" || edit.type === "nonce") && !("account" in edit)) {
log.warning(`[L2PS ProofManager] Proof ${proof.id} has invalid GCR edit`)
return false
}
}
🤖 Prompt for AI Agents
In src/libs/l2ps/L2PSProofManager.ts around lines 233 to 240, the validation
only requires the account field for edits of type 'balance' but the comment says
both 'balance' and 'nonce' need an account; update the guard to also treat
edit.type === 'nonce' as requiring 'account' (i.e., check if edit.type is
'balance' or 'nonce' and that 'account' is present), leave the warning and
return false behavior intact so malformed nonce edits are rejected the same way
as balance edits.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 4

♻️ Duplicate comments (1)
src/libs/l2ps/L2PSConsensus.ts (1)

377-425: Using getProofs("", "applied", ...) still filters by empty UID, breaking rollback and stats

rollbackProofsForBlock and getBlockStats both call:

const appliedProofs = await L2PSProofManager.getProofs("", "applied", ...)

Passing "" does not mean “all networks”; it filters for l2ps_uid === "". Unless you actually have an L2PS network with an empty UID, both methods will see zero proofs:

  • No proofs rolled back when a block fails.
  • Block stats always report zero applied proofs/edits.

Please switch to a helper that explicitly means “all applied proofs for a given block”, e.g.:

const proofsToRollback = await L2PSProofManager.getAppliedProofsForBlock(blockNumber, 1000)

and likewise in getBlockStats:

const blockProofs = await L2PSProofManager.getAppliedProofsForBlock(blockNumber, 10000)

(or whatever helper you’ve implemented on L2PSProofManager to cover this case).

This is a functional correctness issue in consensus rollback and observability.

Also applies to: 439-451

🧹 Nitpick comments (3)
src/libs/network/routines/transactions/handleL2PS.ts (1)

25-29: Consider optional chaining for nested structure checks

The nested guards on l2psTx.content.data[1].l2ps_uid and decryptedTx.content.from are correct but a bit noisy. You could simplify with optional chaining for readability, e.g.:

if (!l2psTx.content?.data?.[1]?.l2ps_uid) {
    return "Invalid L2PS transaction structure: missing l2ps_uid in data payload"
}

if (!decryptedTx?.content?.from) {
    return { decryptedTx: null, error: "Invalid decrypted transaction structure" }
}

Pure readability / style; behaviour stays the same.

Also applies to: 61-62

src/libs/l2ps/L2PSBatchAggregator.ts (1)

395-408: Nonce handling is fine, but comments overstate persistence guarantees

getNextBatchNonce derives nonces from Date.now() * 1000 and a value cached in sharedState.l2psBatchNonce. This gives a monotonic sequence within a process and across restarts as long as the system clock is monotonic, but it does not persist across process restarts (no DB/FS write).

To avoid confusion for future maintainers, I’d tighten the comments:

  • Emphasize that persistence is per-process (via shared state), not durable.
  • Note that uniqueness across restarts is time-based, not strictly persisted.

Purely documentation; logic itself looks reasonable given the current constraints.

Also applies to: 413-425
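Concretely, a doc comment along these lines would state the guarantees accurately (sketch only; the real method lives in L2PSBatchAggregator.ts):

/**
 * Returns the next batch nonce.
 *
 * NOTE: "persistence" is per-process only - the last value is cached on
 * shared state and lost on restart. Uniqueness across restarts relies on
 * the wall clock (Date.now() * 1000), i.e. it is time-based, not durable.
 */
function getNextBatchNonce(lastNonce: number): number {
    const timestampNonce = Date.now() * 1000
    return Math.max(timestampNonce, lastNonce + 1)
}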

src/libs/l2ps/L2PSTransactionExecutor.ts (1)

229-297: Native handler lacks structure checks on data and ignores simulate (optional tightening)

handleNativeTransaction assumes tx.content.data is a ["native", INativePayload] tuple and doesn’t use the simulate flag:

const nativePayloadData = tx.content.data as ["native", INativePayload]
const nativePayload = nativePayloadData[1]

If a malformed transaction reaches here, this can throw before validation. Also, simulate does not affect any behaviour (we always hit the DB and build edits).

Not strictly wrong, but you might want to harden this:

if (!Array.isArray(tx.content.data) || tx.content.data[0] !== "native") {
    return { success: false, message: "Invalid native payload structure" }
}
const [, nativePayload] = tx.content.data as ["native", INativePayload]

and optionally gate side effects in simulate mode if that’s part of your design contract.

Pure robustness / clarity; current path is logically correct for well-formed inputs.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

Disabled knowledge base sources:

  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between fcd46f6 and 3b1e134.

📒 Files selected for processing (4)
  • src/libs/l2ps/L2PSBatchAggregator.ts (1 hunks)
  • src/libs/l2ps/L2PSConsensus.ts (1 hunks)
  • src/libs/l2ps/L2PSTransactionExecutor.ts (1 hunks)
  • src/libs/network/routines/transactions/handleL2PS.ts (2 hunks)
🧰 Additional context used
🧬 Code graph analysis (2)
src/libs/network/routines/transactions/handleL2PS.ts (5)
src/libs/l2ps/parallelNetworks.ts (1)
  • ParallelNetworks (82-439)
src/libs/blockchain/transaction.ts (1)
  • Transaction (44-534)
src/utilities/logger.ts (1)
  • error (125-132)
src/libs/network/server_rpc.ts (1)
  • emptyResponse (46-51)
src/libs/l2ps/L2PSTransactionExecutor.ts (1)
  • L2PSTransactionExecutor (49-487)
src/libs/l2ps/L2PSBatchAggregator.ts (6)
src/utilities/logger.ts (1)
  • error (125-132)
src/utilities/sharedState.ts (2)
  • getSharedState (276-278)
  • SharedState (17-271)
src/libs/crypto/hashing.ts (1)
  • Hashing (15-26)
src/libs/utils/calibrateTime.ts (1)
  • getNetworkTimestamp (17-23)
src/libs/blockchain/transaction.ts (1)
  • hash (106-114)
src/libs/blockchain/mempool_v2.ts (1)
  • Mempool (18-227)
🪛 GitHub Check: SonarCloud Code Analysis
src/libs/network/routines/transactions/handleL2PS.ts

[warning] 26-26: Prefer using an optional chain expression instead, as it's more concise and easier to read.

See more on https://sonarcloud.io/project/issues?id=kynesyslabs_node&issues=AZr99cUniVABametmCZA&open=AZr99cUniVABametmCZA&pullRequest=514


[warning] 61-61: Prefer using an optional chain expression instead, as it's more concise and easier to read.

See more on https://sonarcloud.io/project/issues?id=kynesyslabs_node&issues=AZr99cUniVABametmCZB&open=AZr99cUniVABametmCZB&pullRequest=514

src/libs/l2ps/L2PSBatchAggregator.ts

[warning] 436-436: 'error' will use Object's default stringification format ('[object Object]') when stringified.

See more on https://sonarcloud.io/project/issues?id=kynesyslabs_node&issues=AZru32BGx7vD0jIgt1no&open=AZru32BGx7vD0jIgt1no&pullRequest=514

🔇 Additional comments (2)
src/libs/network/routines/transactions/handleL2PS.ts (1)

129-146: Mempool status updates are now keyed by encrypted hash (looks correct)

The execution and error paths consistently call L2PSMempool.updateStatus(l2psTx.hash, ...), which matches the mempool's use of the encrypted transaction hash as its primary key. This fixes the earlier mismatch with originalHash and should make status transitions (processed → failed / executed) reliable.

src/libs/l2ps/L2PSBatchAggregator.ts (1)

336-383: Clarify and standardize getSharedState usage

The code pattern const sharedState = getSharedState followed by property access like sharedState.keypair?.privateKey suggests a potential issue. If getSharedState is a function (as the naming convention indicates), the property access will not throw; it will silently read undefined off the function object instead of the shared state.

Confirm whether getSharedState is a function and, if so, add invocation parentheses:

  • createBatchPayload (line 338)
  • getLastNonceFromStorage
  • saveNonceToStorage
  • submitBatchToMempool

Alternatively, if it's already a singleton instance, rename it to sharedState to clarify the intent and prevent future confusion.
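To make the hazard concrete, here is a self-contained sketch of the two shapes (all names hypothetical):

class SharedStateSketch {
    keypair?: { privateKey?: Uint8Array }
    private static instance = new SharedStateSketch()
    static getInstance(): SharedStateSketch { return SharedStateSketch.instance }
}

// Shape A: a function - call sites must invoke it with parentheses
const getState = (): SharedStateSketch => SharedStateSketch.getInstance()

// Shape B: a pre-resolved singleton - plain property access is correct
const state = SharedStateSketch.getInstance()

// Forgetting the parentheses on Shape A does not throw; it silently reads
// undefined off the function object - exactly the hazard described above
const broken = (getState as unknown as SharedStateSketch).keypair // undefined
const working = getState().keypair // the real value
void broken; void working; void state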

Comment on lines 113 to 144
static async execute(
l2psUid: string,
tx: Transaction,
l1BatchHash: string,
simulate: boolean = false
): Promise<L2PSExecutionResult> {
try {
log.info(`[L2PS Executor] Processing tx ${tx.hash} from L2PS ${l2psUid} (type: ${tx.content.type})`)

// Generate GCR edits based on transaction type
const editsResult = await this.generateGCREdits(tx, simulate)
if (!editsResult.success) {
return editsResult
}

const gcrEdits = editsResult.gcr_edits || []
const affectedAccounts = editsResult.affected_accounts || []

// Create proof with GCR edits (if not simulating)
if (!simulate && gcrEdits.length > 0) {
return this.createProofAndRecord(l2psUid, tx, l1BatchHash, gcrEdits, affectedAccounts)
}

return {
success: true,
message: simulate
? `Validated: ${gcrEdits.length} GCR edits would be generated`
: `Proof created with ${gcrEdits.length} GCR edits (will apply at consensus)`,
gcr_edits: gcrEdits,
affected_accounts: [...new Set(affectedAccounts)]
}

⚠️ Potential issue | 🟡 Minor

execute reports “Proof created” even when no proof is created (zero-edit case)

When gcrEdits.length === 0, the if (!simulate && gcrEdits.length > 0) branch is skipped, so no proof is created or recorded. However the fallback success path still says:

message: simulate 
    ? `Validated: ${gcrEdits.length} GCR edits would be generated`
    : `Proof created with ${gcrEdits.length} GCR edits (will apply at consensus)`

For zero edits this reports “Proof created with 0 GCR edits” even though no proof exists, which can confuse callers and operators.

Consider distinguishing the “no edits / no proof” case:

-            return {
-                success: true,
-                message: simulate 
-                    ? `Validated: ${gcrEdits.length} GCR edits would be generated`
-                    : `Proof created with ${gcrEdits.length} GCR edits (will apply at consensus)`,
-                gcr_edits: gcrEdits,
-                affected_accounts: [...new Set(affectedAccounts)]
-            }
+            const base = {
+                success: true,
+                gcr_edits: gcrEdits,
+                affected_accounts: [...new Set(affectedAccounts)],
+            }
+
+            if (simulate) {
+                return {
+                    ...base,
+                    message: `Validated: ${gcrEdits.length} GCR edits would be generated`,
+                }
+            }
+
+            return {
+                ...base,
+                message: gcrEdits.length === 0
+                    ? "No GCR edits generated; transaction recorded without proof"
+                    : `Proof created with ${gcrEdits.length} GCR edits (will apply at consensus)`,
+            }

This keeps semantics and observability aligned.

🤖 Prompt for AI Agents
In src/libs/l2ps/L2PSTransactionExecutor.ts around lines 113 to 144, the success
fallback message incorrectly claims a proof was created when gcrEdits.length ===
0; change the logic so when not simulating and gcrEdits.length === 0 you return
a success result that explicitly states no GCR edits were generated and that no
proof was created/recorded, and preserve the existing proof-created message only
when a proof was actually created; keep gcr_edits as an empty array and ensure
affected_accounts is deduplicated in the returned result.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 5

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
package.json (1)

51-102: Verify @solana/web3.js version explicitly due to recent supply-chain incident.

The new dependencies appropriately support the ZK-SNARK infrastructure (snarkjs 0.7.5, circomlib 2.0.5, circomlibjs 0.1.7), Solana/Anchor integrations, and security hardening (helmet 8.1.0). However, @solana/web3.js experienced a supply-chain attack in December 2024 (CVE-2024-54134) affecting versions 1.95.6–1.95.7. The pinned version 1.98.4 is post-patch and should be safe, but confirm this version is intentional and monitor for any subsequent releases. Additionally, ensure snarkjs remains at ≥0.7.0 to maintain the fix for CVE-2023-33252, and keep circomlib updated as maintenance activity is limited.

♻️ Duplicate comments (6)
src/libs/network/routines/transactions/handleL2PS.ts (1)

65-68: Signature verification still not checking the success flag.

This issue was flagged in a previous review. Transaction.confirmTx returns a structured result with a success property, but the code only checks if the result is truthy. This could allow invalid signatures to pass.

     const verificationResult = await Transaction.confirmTx(decryptedTx, decryptedTx.content.from)
-    if (!verificationResult) {
-        return { decryptedTx: null, error: "Transaction signature verification failed" }
+    if (!verificationResult || !verificationResult.success) {
+        const msg = verificationResult?.message || "Transaction signature verification failed"
+        return { decryptedTx: null, error: msg }
     }
src/libs/l2ps/L2PSBatchAggregator.ts (2)

609-612: Verify reference_block: 0 handling.

A past review flagged that reference_block: 0 may not be recalculated by the mempool, potentially bypassing ordering/anchoring semantics. The comment indicates this was addressed, but the code still shows reference_block: 0.

Please verify that either:

  1. The mempool now sets a proper reference block, or
  2. A meaningful reference block should be computed here

562-563: Unchecked type cast for publicKey may cause runtime error.

The cast sharedState.keypair.publicKey as Uint8Array assumes the key is always a Uint8Array, but per past review comments, this could fail if the key is a different type.

Add a runtime check:

-            const nodeIdentityHex = uint8ArrayToHex(sharedState.keypair.publicKey as Uint8Array)
+            const publicKey = sharedState.keypair.publicKey
+            if (!(publicKey instanceof Uint8Array)) {
+                log.error("[L2PS Batch Aggregator] publicKey is not a Uint8Array")
+                return false
+            }
+            const nodeIdentityHex = uint8ArrayToHex(publicKey)

Alternatively, if sharedState.publicKeyHex exists (as suggested in past comments), use that directly.

src/libs/l2ps/L2PSTransactionExecutor.ts (1)

136-143: Misleading success message when no proof is created.

When gcrEdits.length === 0 and not simulating, the method reports "Proof created with 0 GCR edits" even though no proof was created (the if condition at line 132 was false).

Apply this diff to distinguish the zero-edit case:

+            const uniqueAccounts = [...new Set(affectedAccounts)]
+
+            if (simulate) {
+                return {
+                    success: true,
+                    message: `Validated: ${gcrEdits.length} GCR edits would be generated`,
+                    gcr_edits: gcrEdits,
+                    affected_accounts: uniqueAccounts
+                }
+            }
+
             return {
                 success: true,
-                message: simulate 
-                    ? `Validated: ${gcrEdits.length} GCR edits would be generated`
-                    : `Proof created with ${gcrEdits.length} GCR edits (will apply at consensus)`,
+                message: gcrEdits.length === 0
+                    ? "No GCR edits generated; transaction recorded without proof"
+                    : `Transaction recorded with ${gcrEdits.length} GCR edits (proof will be created during batching)`,
                 gcr_edits: gcrEdits,
-                affected_accounts: [...new Set(affectedAccounts)]
+                affected_accounts: uniqueAccounts
             }

This clarifies that when no edits are generated, no proof is created.

src/model/entities/L2PSProofs.ts (1)

144-149: Fix JSONB default value syntax.

The string literal default: "[]" does not produce a correct JSONB default at the database level. TypeORM requires raw SQL expressions for JSONB defaults.

Apply this diff:

     /**
      * Individual transaction hashes from L2PS mempool
      * Used to update mempool status to 'confirmed' after proof application
      */
-    @Column("jsonb", { default: "[]" })
+    @Column("jsonb", { default: () => "'[]'" })
     transaction_hashes: string[]

Alternatively, you can use an explicit cast: default: () => "'[]'::jsonb"

Based on past review comments flagging TypeORM JSONB default value requirements.

src/libs/l2ps/L2PSProofManager.ts (1)

227-233: Nonce edits should also require the account field.

The comment states "Balance and nonce edits require account field", but the validation only enforces this for balance edits. Nonce edits without an account field would pass validation, creating inconsistency.

Apply this diff:

             // Validate each GCR edit has required fields
             for (const edit of proof.gcr_edits) {
-                if (!edit.type || (edit.type === 'balance' && !('account' in edit))) {
+                if (!edit.type) {
                     log.warning(`[L2PS ProofManager] Proof ${proof.id} has invalid GCR edit`)
                     return false
                 }
+                
+                // Balance and nonce edits require account field
+                if ((edit.type === 'balance' || edit.type === 'nonce') && !('account' in edit)) {
+                    log.warning(`[L2PS ProofManager] Proof ${proof.id} has invalid GCR edit: ${edit.type} missing account`)
+                    return false
+                }
             }
🧹 Nitpick comments (14)
src/libs/l2ps/zk/README.md (1)

65-86: Add language specifiers to fenced code blocks and fix batch_20 inconsistency.

Two minor issues:

  1. Lines 65 and 80 have fenced code blocks without language specifiers (use text or plaintext)
  2. Line 84 references batch_20/ in the generated keys structure, but the setup script (line 90) notes that batch_20 was removed due to stability issues
-```
+```text
 zk/
 ├── L2PSBatchProver.ts          # Main prover class (auto-selects batch size)
...
 **Generated (gitignored):**
-```
+```text
 ├── keys/                       # ~1GB proving keys
 │   ├── batch_5/
 │   ├── batch_10/
-│   └── batch_20/
 └── ptau/                       # ~500MB powers of tau

src/libs/l2ps/zk/scripts/setup_all_batches.sh (3)

33-36: Fix quoting and portability issues in file size check.

Two issues:

  1. The stat output should be quoted to prevent word splitting (SC2046)
  2. stat -c%s is GNU/Linux-specific; macOS uses stat -f%z

-    if [ ! -f "$PTAU_DIR/$file" ] || [ $(stat -c%s "$PTAU_DIR/$file") -lt 1000000 ]; then
+    if [ ! -f "$PTAU_DIR/$file" ]; then
+        need_download=true
+    elif [[ "$OSTYPE" == "darwin"* ]]; then
+        [ "$(stat -f%z "$PTAU_DIR/$file")" -lt 1000000 ] && need_download=true
+    else
+        [ "$(stat -c%s "$PTAU_DIR/$file")" -lt 1000000 ] && need_download=true
+    fi
+    
+    if [ "${need_download:-false}" = true ]; then
         echo -e "${YELLOW}Downloading pot${size}...${NC}"

Alternatively, use a portable approach with wc -c:

if [ ! -f "$PTAU_DIR/$file" ] || [ "$(wc -c < "$PTAU_DIR/$file")" -lt 1000000 ]; then

36-36: Add error handling for curl download.

Without -f, curl does not treat HTTP error responses as failures, so a bad download can succeed silently. Consider adding validation:

-        curl -L -o "$PTAU_DIR/$file" "$url"
+        if ! curl -fL -o "$PTAU_DIR/$file" "$url"; then
+            echo -e "\033[0;31mFailed to download $file\033[0m"
+            exit 1
+        fi

1-14: Consider adding dependency validation.

The script assumes circom, npx, and snarkjs are installed. Adding validation upfront would provide clearer error messages.

 set -e
 
+# Check dependencies
+for cmd in circom npx; do
+    if ! command -v "$cmd" &> /dev/null; then
+        echo -e "\033[0;31mError: $cmd is required but not installed.\033[0m"
+        exit 1
+    fi
+done
+
 SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
src/libs/l2ps/zk/circuits/l2ps_batch_10.circom (1)

20-32: pre_hash output is computed but never used.

The BalanceTransfer template computes pre_hash (lines 20, 29-32) but L2PSBatch only uses post_hash for state chaining. Either integrate pre_hash into the state verification or remove it to reduce constraint count.

src/libs/l2ps/zk/circuits/l2ps_batch_5.circom (2)

26-27: Unused constraint signal - dead code in circuit.

The check signal is computed but never constrained against anything. In Circom, signals that are computed but not used in constraints don't contribute to the proof's security guarantees. This appears to be leftover debug code.

Either remove it or add a meaningful constraint:

-    signal check;
-    check <== sender_after * sender_after;

20-21: pre_hash output is declared but never used.

The pre_hash output is computed using Poseidon but is never referenced by the L2PSBatch template (only post_hash is used at line 71). This adds unnecessary constraints to the circuit.

If pre_hash is needed for future verification, keep it. Otherwise, consider removing to reduce circuit size:

-    signal output pre_hash;
     signal output post_hash;
     
     sender_after === sender_before - amount;
     receiver_after === receiver_before + amount;
-    
-    signal check;
-    check <== sender_after * sender_after;
-    
-    component preHasher = Poseidon(2);
-    preHasher.inputs[0] <== sender_before;
-    preHasher.inputs[1] <== receiver_before;
-    pre_hash <== preHasher.out;

Also applies to: 29-32

src/libs/l2ps/zk/L2PSBatchProver.ts (3)

9-23: Global prototype patching at import time is risky.

Patching EventTarget.prototype.dispatchEvent at module load time affects all code in the process, not just this module. This could cause unexpected behavior in other libraries that rely on standard dispatchEvent behavior.

Consider isolating this patch:

  1. Move it inside initialize() where it's only applied when the prover is actually used
  2. Or wrap snarkjs calls in a try-catch instead of globally patching
```diff
-// Bun compatibility: patch web-worker before importing snarkjs
-const isBun = typeof (globalThis as any).Bun !== 'undefined';
-if (isBun) {
-    // Suppress web-worker errors in Bun by patching dispatchEvent
-    const originalDispatchEvent = EventTarget.prototype.dispatchEvent;
-    EventTarget.prototype.dispatchEvent = function(event: any) {
-        ...
-    };
-}
+// Bun compatibility handled in initialize() to avoid global side effects
```
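
To make option 1 concrete, here is a minimal sketch of the lazy-patch approach; `applyBunCompatPatch`, the guard flag, and the swallow-on-error behavior are assumptions for illustration, not the module's actual code:

```typescript
// Hypothetical helper: apply the Bun dispatchEvent patch lazily, on first use.
let bunPatchApplied = false;

function applyBunCompatPatch(): void {
    if (bunPatchApplied || typeof (globalThis as any).Bun === "undefined") return;
    const original = EventTarget.prototype.dispatchEvent;
    EventTarget.prototype.dispatchEvent = function (this: EventTarget, event: Event): boolean {
        try {
            return original.call(this, event);
        } catch {
            // Assumption: swallow only the web-worker event errors seen under Bun
            return true;
        }
    };
    bunPatchApplied = true;
}

// async initialize(): Promise<void> {
//     applyBunCompatPatch(); // patched only when the prover is actually used
//     ...
// }
```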

139-158: Clarify that loadedKeys caches paths, not loaded key data.

The method is named loadKeys and the cache loadedKeys, but it actually caches file paths that snarkjs will load on each fullProve call. This is correct behavior for snarkjs, but the naming suggests pre-loading into memory.

Consider renaming for clarity:

```diff
-    private loadedKeys: Map<BatchSize, { zkey: any; wasm: string }> = new Map();
+    private resolvedKeyPaths: Map<BatchSize, { zkeyPath: string; wasmPath: string }> = new Map();
```

290-294: Synchronous file read blocks the event loop.

fs.readFileSync on line 294 blocks the event loop during proof verification. For a service processing multiple batches, this could impact responsiveness.

```diff
-        const vkey = JSON.parse(fs.readFileSync(vkeyPath, 'utf-8'));
+        const vkey = JSON.parse(await fs.promises.readFile(vkeyPath, 'utf-8'));
```

Alternatively, cache the verification key in loadedKeys after first load.
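
A minimal sketch of that caching alternative, assuming a path-keyed map is acceptable (the helper name and cache shape are illustrative):

```typescript
import * as fs from "fs";

// Illustrative cache keyed by file path; the JSON.parse cost is paid once per key.
const vkeyCache = new Map<string, unknown>();

async function loadVerificationKey(vkeyPath: string): Promise<unknown> {
    const cached = vkeyCache.get(vkeyPath);
    if (cached) return cached;
    const raw = await fs.promises.readFile(vkeyPath, "utf-8"); // non-blocking read
    const vkey = JSON.parse(raw);
    vkeyCache.set(vkeyPath, vkey);
    return vkey;
}
```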

src/libs/l2ps/L2PSBatchAggregator.ts (1)

93-94: Unused readonly field batchNonceCounter.

This field is declared but never used - nonce management now uses getNextBatchNonce() with shared state storage. Remove the dead code:

```diff
-    /** Persistent nonce counter for batch transactions */
-    private readonly batchNonceCounter: number = 0
```

src/libs/l2ps/zk/circomlibjs.d.ts (3)

32-42: Replace any with a more specific type for field operations.

Line 33 uses any for the F field, which defeats TypeScript's type safety. The field operations object likely shares a similar structure with the Poseidon F interface (lines 12-15) or could be typed as a common FieldOperations interface.

Consider defining a shared interface:

```typescript
interface FieldOperations {
    toObject(element: Uint8Array): bigint
    toString(element: Uint8Array): string
    // Add other common field operation methods if applicable
}
```

Then apply this diff:

```diff
     export function buildBabyjub(): Promise<{
-        F: any
+        F: FieldOperations
         Generator: [bigint, bigint]
```

47-52: Replace any with a more specific type for field operations.

Line 48 uses any for the F field. This should use the same typed interface as suggested for other field operations to maintain consistency and type safety across the circomlibjs declarations.

Apply this diff:

```diff
     export function buildEddsa(): Promise<{
-        F: any
+        F: FieldOperations
         prv2pub(privateKey: Uint8Array): [bigint, bigint]
```

57-61: Replace any with a more specific type for field operations.

Line 58 uses any for the F field. For consistency and type safety, this should use the same typed interface as the other field operations.

Apply this diff:

```diff
     export function buildMimcSponge(): Promise<{
-        F: any
+        F: FieldOperations
         hash(left: bigint, right: bigint, key: bigint): bigint
```
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

Disabled knowledge base sources:

  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 3b1e134 and c1ebef8.

📒 Files selected for processing (15)
  • package.json (3 hunks)
  • scripts/send-l2-batch.ts (1 hunks)
  • src/libs/l2ps/L2PSBatchAggregator.ts (1 hunks)
  • src/libs/l2ps/L2PSProofManager.ts (1 hunks)
  • src/libs/l2ps/L2PSTransactionExecutor.ts (1 hunks)
  • src/libs/l2ps/zk/BunPlonkWrapper.ts (1 hunks)
  • src/libs/l2ps/zk/L2PSBatchProver.ts (1 hunks)
  • src/libs/l2ps/zk/README.md (1 hunks)
  • src/libs/l2ps/zk/circomlibjs.d.ts (1 hunks)
  • src/libs/l2ps/zk/circuits/l2ps_batch_10.circom (1 hunks)
  • src/libs/l2ps/zk/circuits/l2ps_batch_5.circom (1 hunks)
  • src/libs/l2ps/zk/scripts/setup_all_batches.sh (1 hunks)
  • src/libs/l2ps/zk/snarkjs.d.ts (1 hunks)
  • src/libs/network/routines/transactions/handleL2PS.ts (2 hunks)
  • src/model/entities/L2PSProofs.ts (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (5)
src/libs/l2ps/zk/BunPlonkWrapper.ts (1)
src/utilities/logger.ts (1)
  • error (125-132)
src/libs/l2ps/L2PSProofManager.ts (3)
src/libs/crypto/hashing.ts (1)
  • Hashing (15-26)
src/model/entities/L2PSProofs.ts (1)
  • L2PSProofStatus (27-31)
src/utilities/logger.ts (1)
  • error (125-132)
src/libs/l2ps/L2PSTransactionExecutor.ts (3)
src/libs/blockchain/transaction.ts (1)
  • Transaction (44-534)
src/libs/l2ps/L2PSProofManager.ts (1)
  • L2PSProofManager (71-357)
src/libs/l2ps/zk/L2PSBatchProver.ts (2)
  • L2PSTransaction (43-49)
  • hash (163-166)
src/libs/l2ps/zk/L2PSBatchProver.ts (1)
src/libs/l2ps/zk/BunPlonkWrapper.ts (1)
  • plonkVerifyBun (86-187)
src/model/entities/L2PSProofs.ts (1)
src/model/entities/L2PSTransactions.ts (1)
  • Entity (27-143)
🪛 markdownlint-cli2 (0.18.1)
src/libs/l2ps/zk/README.md

65-65: Fenced code blocks should have a language specified

(MD040, fenced-code-language)


80-80: Fenced code blocks should have a language specified

(MD040, fenced-code-language)

🪛 Shellcheck (0.11.0)
src/libs/l2ps/zk/scripts/setup_all_batches.sh

[warning] 33-33: Quote this to prevent word splitting.

(SC2046)

🔇 Additional comments (30)
src/libs/l2ps/zk/circuits/l2ps_batch_10.circom (1)

26-27: Unused check signal - clarify intent or remove.

The check signal computes sender_after * sender_after but is never used in any constraint. If the intent is to enforce non-negative balances, note that squaring in a finite field doesn't prevent underflow—negative values (large field elements) still produce valid squares.

If this is intentional for future use or as a constraint-generation placeholder, consider adding a comment. Otherwise, remove it.

```diff
-    signal check;
-    check <== sender_after * sender_after;
+    // Note: Non-negativity is enforced off-chain before proof generation
```
src/libs/l2ps/zk/snarkjs.d.ts (1)

1-78: LGTM - Minimal but functional type declarations for snarkjs.

The type declarations provide adequate coverage for PLONK/Groth16 proof generation and verification workflows. Using any for proof objects is pragmatic given snarkjs's dynamic nature.

src/libs/network/routines/transactions/handleL2PS.ts (3)

12-20: LGTM - Clean error response helper.

The createErrorResponse helper centralizes error formatting, improving code consistency.


129-145: Execution flow and status updates look correct.

The execution delegates to L2PSTransactionExecutor, properly handles errors with logging, and updates mempool status appropriately (failed on error, executed on success).


70-70: Double cast is necessary for type conversion between SDK and local Transaction classes, not a type safety bypass.

The cast converts from the SDK's Transaction type (returned by l2psInstance.decryptTx()) to the local Transaction class required by the function's return type. The local Transaction class implements the SDK's ITransaction interface, and the object passes validation via Transaction.confirmTx() before the cast, ensuring structural correctness at runtime.

src/libs/l2ps/zk/circuits/l2ps_batch_5.circom (1)

23-24: Consider adding balance underflow protection.

The constraint sender_after === sender_before - amount operates in a finite field where underflow wraps around rather than failing. A malicious prover could create a valid proof with sender_before < amount resulting in a wrapped value.

If balance validity must be enforced on-chain, add a range check or ensure the off-chain prover validates sender_before >= amount before proof generation.
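
For the off-chain route, a hedged TypeScript sketch of the pre-proof check; the `BalanceTransfer` shape and field names are assumptions for illustration:

```typescript
// Field names are illustrative; amounts are bigint to avoid float pitfalls.
interface BalanceTransfer {
    senderBefore: bigint;
    amount: bigint;
}

function assertNoUnderflow(txs: BalanceTransfer[]): void {
    for (const tx of txs) {
        if (tx.amount <= 0n) {
            throw new Error("amount must be positive");
        }
        if (tx.senderBefore < tx.amount) {
            // Reject before witness generation: in the field, a "negative"
            // sender_after still satisfies sender_after === sender_before - amount
            throw new Error("insufficient balance: subtraction would wrap in the field");
        }
    }
}
```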

src/libs/l2ps/zk/L2PSBatchProver.ts (1)

215-278: LGTM - proof generation flow is well-structured.

The generateProof method properly:

  • Ensures initialization before use
  • Validates input (empty batch check)
  • Selects appropriate batch size
  • Pads transactions correctly
  • Uses singleThread option for Bun compatibility
src/libs/l2ps/zk/BunPlonkWrapper.ts (2)

86-187: LGTM - Bun-compatible PLONK verification is well-implemented.

The implementation:

  • Correctly uses singleThread: true to avoid Bun worker crashes
  • Properly terminates the curve in finally block to prevent memory leaks
  • Returns false on errors rather than throwing, which is appropriate for verification
  • Follows the snarkjs reference implementation faithfully

29-78: LGTM - Keccak256Transcript correctly implements Fiat-Shamir.

The transcript class properly handles polynomial commitments and scalars for challenge derivation, matching the snarkjs reference implementation.

src/libs/l2ps/L2PSBatchAggregator.ts (4)

663-669: LGTM - Cycle time calculation is correct.

The updateCycleTime method is only called after successfulCycles is incremented (line 248), so division by zero cannot occur. The running average formula correctly handles the first cycle case.
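
The exact formula in updateCycleTime isn't reproduced here, but a standard incremental mean of the kind described needs no first-cycle special case:

```typescript
// newAvg = oldAvg + (sample - oldAvg) / n, where n counts cycles including this one.
// For n = 1 this reduces to newAvg = sample, so the initial oldAvg never leaks in.
function updateAverage(oldAvg: number, sample: number, n: number): number {
    return oldAvg + (sample - oldAvg) / n;
}

// Example: cycle times 100ms, 200ms, 300ms average to 200ms.
let avg = 0;
[100, 200, 300].forEach((ms, i) => { avg = updateAverage(avg, ms, i + 1); });
// avg === 200
```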


444-456: ZK proof uses placeholder data - proofs won't verify actual balances.

The TODO at line 447 notes this, but it's worth emphasizing: the current implementation derives balance values from transaction hashes rather than actual amounts. This means:

  1. ZK proofs will be mathematically valid but semantically meaningless
  2. Balance underflows/overflows won't be caught
  3. The proof only demonstrates circuit execution, not transaction validity

Until actual transaction amounts are extracted, consider either:

  • Disabling ZK proofs entirely (zkEnabled = false)
  • Clearly logging that proofs are in "placeholder mode"

397-410: LGTM - HMAC key handling now fails fast as recommended.

The code now throws an error when keypair.privateKey is unavailable instead of using an insecure batchHash fallback. This addresses the security concern from the previous review.


514-526: Nonce persistence is process-lifetime only - consider database storage for production.

The current implementation persists nonces in sharedState, which survives within the same process but resets on restart. The timestamp-based fallback (lines 500-503) provides reasonable collision resistance, but for production deployments with strict ordering requirements, consider persisting to the database.

The current approach is acceptable for development/testing.
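
If database persistence is adopted later, a sketch along these lines would work; the key/value repository shape is purely illustrative:

```typescript
// Illustrative key/value repository; assumes a single aggregator writes the
// nonce (wrap in a DB transaction if multiple writers are possible).
interface KvRepo {
    findOneBy(where: object): Promise<{ key: string; value: string } | null>;
    save(row: { key: string; value: string }): Promise<unknown>;
}

async function getNextPersistedNonce(repo: KvRepo): Promise<number> {
    const row = await repo.findOneBy({ key: "l2ps_batch_nonce" });
    const next = (row ? Number(row.value) : 0) + 1;
    await repo.save({ key: "l2ps_batch_nonce", value: String(next) });
    return next; // survives process restarts, unlike sharedState
}
```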

src/libs/l2ps/L2PSProofManager.ts (5)

34-44: LGTM! Deterministic stringify implementation is solid.

The alphabetical key sorting ensures consistent hashing across JSONB round-trips, and arrays are correctly left in their original order.
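
For reference, a minimal sketch of the behavior described (sorted object keys, arrays kept in order); this is an illustration, not the file's actual implementation:

```typescript
// Illustrative only: undefined values and cyclic references are not handled here.
function deterministicStringify(value: unknown): string {
    if (Array.isArray(value)) {
        return `[${value.map((v) => deterministicStringify(v)).join(",")}]`;
    }
    if (value !== null && typeof value === "object") {
        const obj = value as Record<string, unknown>;
        const body = Object.keys(obj)
            .sort()
            .map((k) => `${JSON.stringify(k)}:${deterministicStringify(obj[k])}`)
            .join(",");
        return `{${body}}`;
    }
    return JSON.stringify(value); // primitives: string, number, boolean, null
}

// deterministicStringify({ b: 1, a: [2, 3] }) === '{"a":[2,3],"b":1}'
```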


111-171: LGTM! Proof creation uses deterministic hashing.

The previous concern about non-deterministic Date.now() timestamps has been resolved. The implementation now uses deterministicStringify consistently for stable hash generation.


194-211: LGTM! Reserved parameter for future block-specific filtering.

The blockNumber parameter is intentionally unused with a clear FUTURE comment. This reserves the API signature for block-specific batching without requiring breaking changes later.


258-292: LGTM! Status update methods are correctly implemented.

Both markProofApplied and markProofRejected properly update proof status with appropriate metadata (block number, error message, timestamps).


332-356: LGTM! Statistics query uses proper QueryBuilder cloning.

The clone() pattern correctly creates independent queries for each status count, and Promise.all efficiently executes them in parallel.
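
A sketch of that pattern, assuming a TypeORM SelectQueryBuilder already scoped to the proofs table (function and column names are illustrative):

```typescript
import { ObjectLiteral, SelectQueryBuilder } from "typeorm";

// Each clone() yields an independent builder, so the added WHERE clause never
// mutates the shared base query; Promise.all runs the counts in parallel.
async function countByStatus<T extends ObjectLiteral>(
    base: SelectQueryBuilder<T>,
    statuses: string[],
): Promise<Record<string, number>> {
    const counts = await Promise.all(
        statuses.map((status) =>
            base.clone().andWhere("status = :status", { status }).getCount(),
        ),
    );
    return Object.fromEntries(statuses.map((s, i) => [s, counts[i]]));
}
```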

src/model/entities/L2PSProofs.ts (3)

38-44: LGTM! Index definitions are correct and comprehensive.

All index names follow a consistent naming convention, and the compound index on [l2ps_uid, status] will optimize common query patterns.


85-97: LGTM! Proof structure is well-defined and flexible.

The JSONB column supports multiple proof systems (hash, plonk, snark, stark) with comprehensive metadata fields for production use.


99-167: LGTM! Column definitions are appropriate and well-documented.

All columns have correct types, nullability, and defaults. The use of simple-array for affected_accounts is suitable for a string array.

src/libs/l2ps/L2PSTransactionExecutor.ts (7)

49-77: LGTM! Repository initialization uses proper promise guard pattern.

The lazy initialization with initPromise guard prevents race conditions when multiple calls occur before initialization completes.
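
The guard pattern in question, sketched minimally (`doInit` stands in for the real repository setup; the class name is hypothetical):

```typescript
class ProofRepositoryHolder {
    private initPromise: Promise<void> | null = null;

    private async doInit(): Promise<void> {
        // ... open the datasource and resolve repositories here ...
    }

    // First caller starts initialization; concurrent callers await the same
    // promise, so doInit() runs exactly once even under racing calls.
    async ensureInitialized(): Promise<void> {
        if (!this.initPromise) {
            this.initPromise = this.doInit();
        }
        return this.initPromise;
    }
}
```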


154-186: LGTM! GCR edit generation logic is clear and handles all cases.

The method properly delegates to handleNativeTransaction for native transactions and validates provided gcr_edits for other types.


226-296: LGTM! Native transaction handling includes proper validation.

The amount validation checks type, finiteness, and ensures positive values. Variables are properly scoped within the if/else blocks.


298-333: LGTM! GCR edit validation is properly scoped.

All switch cases are wrapped in braces, and the unused repo variable issue from past reviews has been resolved.


335-370: LGTM! Transaction recording handles BigInt conversions correctly.

The method properly converts amount, nonce, and timestamp to BigInt types as required by the entity schema.


372-396: LGTM! Status update now detects when no rows match.

The method checks result.affected === 0 and logs a warning, preventing silent failures when updating non-existent transaction hashes.
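
Sketched minimally, the no-op detection looks like this; the repository shape is a structural stand-in, not the real class:

```typescript
// TypeORM's update() resolves to an UpdateResult whose `affected` count
// reveals whether any row matched the criteria.
interface UpdatableRepo {
    update(criteria: object, partial: object): Promise<{ affected?: number | null }>;
}

async function markStatus(repo: UpdatableRepo, hash: string, status: string): Promise<void> {
    const result = await repo.update({ hash }, { status });
    if (result.affected === 0) {
        console.warn(`No L2PS transaction found for hash ${hash}; status not updated`);
    }
}
```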


398-484: LGTM! Query methods provide comprehensive L2PS data access.

The account transaction query correctly uses an array of where conditions to match both sender and receiver addresses. Statistics properly combine transaction and proof data.
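
The OR-array `where` pattern, sketched with an illustrative row shape (the real entity's column names may differ):

```typescript
import { ObjectLiteral, Repository } from "typeorm";

// Hypothetical row shape for illustration only.
interface TxRow extends ObjectLiteral {
    sender: string;
    receiver: string;
}

// An array passed to `where` is combined with OR by TypeORM:
// WHERE sender = :address OR receiver = :address
async function findAccountTxs(repo: Repository<TxRow>, address: string): Promise<TxRow[]> {
    return repo.find({ where: [{ sender: address }, { receiver: address }] });
}
```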

src/libs/l2ps/zk/circomlibjs.d.ts (2)

1-5: LGTM!

The header comment clearly describes the file's purpose.


6-27: LGTM!

The Poseidon interface and builder declarations are well-typed with appropriate JSDoc comments.

@tcsenpai
Copy link
Contributor

Your trial has ended! 😢

To keep getting reviews, activate your plan here.

Got questions about plans or want to see if we can extend your trial? Talk to our founders here.😎

Copy link
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 5

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
src/libs/blockchain/chain.ts (1)

191-193: Inconsistent null handling: getTransactionFromHash may throw on null input.

Unlike the established pattern in getTxByHash (lines 76-92), this method calls Transaction.fromRawTransaction directly without checking if the repository returned null. If findOneBy returns null, fromRawTransaction will throw an error due to the defensive check at line 499 in transaction.ts.

Update the method to align with the getTxByHash pattern:

```diff
 static async getTransactionFromHash(hash: string): Promise<Transaction | null> {
+    const rawTx = await this.transactions.findOneBy({ hash: ILike(hash) })
+    
+    if (!rawTx) {
+        return null
+    }
+    
-    return Transaction.fromRawTransaction(
-        await this.transactions.findOneBy({ hash: ILike(hash) }),
-    )
+    return Transaction.fromRawTransaction(rawTx)
 }
```

This requires:

  1. Updating the return type from Promise<Transaction> to Promise<Transaction | null>
  2. Adding null check before calling fromRawTransaction
  3. Updating the call site in src/libs/blockchain/routines/subOperations.ts (line 38) to handle null returns
♻️ Duplicate comments (2)
src/libs/l2ps/L2PSBatchAggregator.ts (2)

736-748: Critical: getSharedState must be called as a function.

On line 746, getSharedState is not invoked. This will fail at runtime when attempting to access the .l2psJoinedUids property on the function itself rather than the state object.

This issue was flagged in past review comments but remains unresolved.

Based on learnings, past review comment from coderabbitai flagged this exact issue. Apply this diff:

```diff
     getStatus(): {
         isRunning: boolean;
         isAggregating: boolean;
         intervalMs: number;
         joinedL2PSCount: number;
     } {
+        const sharedState = getSharedState()
         return {
             isRunning: this.isRunning,
             isAggregating: this.isAggregating,
             intervalMs: this.AGGREGATION_INTERVAL,
-            joinedL2PSCount: getSharedState.l2psJoinedUids?.length || 0,
+            joinedL2PSCount: sharedState.l2psJoinedUids?.length || 0,
         }
     }
```

638-638: Set meaningful reference_block value.

The comment states "Will be set by mempool" but according to past reviews, the mempool implementation passes reference_block through to persistence without recalculating it. Storing 0 may bypass anchoring/ordering semantics.

Consider setting a meaningful reference block:

```diff
-                reference_block: 0, // Will be set by mempool
+                reference_block: sharedState.lastKnownBlock || 0, // Anchor to current block height
```

If lastKnownBlock is not available in SharedState, query the latest block height from the blockchain state before constructing this transaction, or document why 0 is intentionally correct for batch transactions.

Based on learnings, past review comment from coderabbitai indicates this needs to be set to a proper value.

🧹 Nitpick comments (3)
src/libs/l2ps/L2PSBatchAggregator.ts (3)

93-94: Remove unused batchNonceCounter property.

This property is declared but never used in the implementation. The actual nonce management correctly uses getLastNonceFromStorage() and saveNonceToStorage() methods. This dead code should be removed to avoid confusion.

Apply this diff:

```diff
-    /** Persistent nonce counter for batch transactions */
-    private readonly batchNonceCounter: number = 0
```

399-410: Add runtime type guard for privateKey.

The code assumes sharedState.keypair.privateKey is a Uint8Array without verification. If the actual type is a string or another format, the Buffer.from() cast will fail or produce incorrect results.

Apply this diff to add a type check:

```diff
         if (!sharedState.keypair?.privateKey) {
             throw new Error("[L2PS Batch Aggregator] Node keypair not available for HMAC generation")
         }
         
-        const hmacKey = Buffer.from(sharedState.keypair.privateKey as Uint8Array)
+        const privateKey = sharedState.keypair.privateKey
+        if (!(privateKey instanceof Uint8Array)) {
+            throw new Error("[L2PS Batch Aggregator] Node private key must be Uint8Array format")
+        }
+        
+        const hmacKey = Buffer.from(privateKey)
             .toString("hex")
             .slice(0, 64)
```

586-592: Add runtime type guard for publicKey.

The code assumes sharedState.keypair.publicKey is a Uint8Array without verification. If sharedState.publicKeyHex is already available and normalized, prefer reusing it. Otherwise, add a type check.

Based on learnings, past reviews flagged this assumption. Apply one of these approaches:

Option 1 (preferred if publicKeyHex is available):

```diff
-            // Get node's public key as hex string for 'from' field
-            const nodeIdentityHex = uint8ArrayToHex(sharedState.keypair.publicKey as Uint8Array)
+            // Get node's public key as hex string for 'from' field
+            const nodeIdentityHex = sharedState.publicKeyHex
+            if (!nodeIdentityHex) {
+                log.error("[L2PS Batch Aggregator] Node public key hex not available")
+                return false
+            }
```

Option 2 (if publicKeyHex is not available):

```diff
-            // Get node's public key as hex string for 'from' field
-            const nodeIdentityHex = uint8ArrayToHex(sharedState.keypair.publicKey as Uint8Array)
+            // Get node's public key as hex string for 'from' field
+            const publicKey = sharedState.keypair.publicKey
+            if (!(publicKey instanceof Uint8Array)) {
+                log.error("[L2PS Batch Aggregator] Node public key must be Uint8Array format")
+                return false
+            }
+            const nodeIdentityHex = uint8ArrayToHex(publicKey)
```
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

Disabled knowledge base sources:

  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between c1ebef8 and 36e2071.

📒 Files selected for processing (3)
  • src/libs/blockchain/chain.ts (1 hunks)
  • src/libs/blockchain/transaction.ts (1 hunks)
  • src/libs/l2ps/L2PSBatchAggregator.ts (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
src/libs/blockchain/chain.ts (1)
src/libs/blockchain/transaction.ts (2)
  • hash (106-114)
  • Transaction (44-538)
🔇 Additional comments (1)
src/libs/blockchain/transaction.ts (1)

499-501: Good defensive check for null/undefined input.

The early validation prevents potential null pointer exceptions during transaction reconstruction. This aligns well with the updated null-handling pattern in chain.ts.

@tcsenpai
Copy link
Contributor

Your trial has ended! 😢

To keep getting reviews, activate your plan here.

Got questions about plans or want to see if we can extend your trial? Talk to our founders here.😎

Copy link
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

♻️ Duplicate comments (4)
src/libs/l2ps/L2PSBatchAggregator.ts (4)

372-424: Critical: getSharedState isn’t invoked; sharedState.keypair will crash.
This is the same “must call getSharedState()” issue as prior reviews.

Also: current HMAC key usage passes a hex string to createHmac, which uses UTF-8 bytes of that string (likely unintended). Prefer a Buffer key.

```diff
-        const sharedState = getSharedState
+        const sharedState = getSharedState()
+        if (!sharedState) {
+            throw new Error("[L2PS Batch Aggregator] Shared state unavailable")
+        }
 ...
-        const hmacKey = Buffer.from(sharedState.keypair.privateKey as Uint8Array)
-            .toString("hex")
-            .slice(0, 64)
+        const hmacKey = Buffer.from(sharedState.keypair.privateKey as Uint8Array)
         const hmacData = `${l2psUid}:${encryptedBatch}:${batchHash}:${transactionHashes.join(",")}`
         const authenticationTag = crypto
             .createHmac("sha256", hmacKey)
             .update(hmacData)
             .digest("hex")
```

528-554: Critical: nonce persistence uses getSharedState without calling it.
This is the same “must call getSharedState()” issue as prior reviews.

```diff
     private async getLastNonceFromStorage(): Promise<number> {
         try {
-            const sharedState = getSharedState
+            const sharedState = getSharedState()
             // Use shared state to persist nonce across the session
             // This survives within the same process lifetime
             if (sharedState.l2psBatchNonce) {
                 return sharedState.l2psBatchNonce
             }
             return 0
         } catch {
             return 0
         }
     }
 ...
     private async saveNonceToStorage(nonce: number): Promise<void> {
         try {
-            const sharedState = getSharedState
+            const sharedState = getSharedState()
             // Store in shared state for persistence
             sharedState.l2psBatchNonce = nonce
         } catch (error) {
```

566-669: Critical: submission path uses getSharedState without calling it.
This is the same “must call getSharedState()” issue as prior reviews.

Also, two previously-raised robustness concerns still apply here:

  • reference_block: 0 may get persisted as-is if mempool doesn’t override it.
  • publicKey as Uint8Array is a brittle cast; prefer a normalized hex field (e.g., sharedState.publicKeyHex) or runtime conversion.
```diff
     private async submitBatchToMempool(batchPayload: L2PSBatchPayload): Promise<boolean> {
         try {
-            const sharedState = getSharedState
+            const sharedState = getSharedState()
 ...
-            const nodeIdentityHex = uint8ArrayToHex(sharedState.keypair.publicKey as Uint8Array)
+            const nodeIdentityHex =
+                (sharedState as any).publicKeyHex ??
+                uint8ArrayToHex(sharedState.keypair.publicKey as Uint8Array)
 ...
             const batchTransaction = {
 ...
-                reference_block: 0, // Will be set by mempool
+                // TODO: set a real reference_block (or ensure Mempool.addTransaction overwrites it)
+                reference_block: 0,
```

744-756: Critical: getStatus() uses getSharedState as a value, not getSharedState().
This is the same “must call getSharedState()” issue as prior reviews.

```diff
     getStatus(): {
 ...
     } {
+        const sharedState = getSharedState()
         return {
             isRunning: this.isRunning,
             isAggregating: this.isAggregating,
             intervalMs: this.AGGREGATION_INTERVAL,
-            joinedL2PSCount: getSharedState.l2psJoinedUids?.length || 0,
+            joinedL2PSCount: sharedState.l2psJoinedUids?.length || 0,
         }
     }
```
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

Disabled knowledge base sources:

  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 36e2071 and 7fcc912.

📒 Files selected for processing (1)
  • src/libs/l2ps/L2PSBatchAggregator.ts (1 hunks)

claude and others added 29 commits January 11, 2026 13:37
Create OmniProtocol_Specifications folder with complete protocol documentation:

- 01_Overview.mdx: Protocol architecture and key benefits
- 02_Message_Format.mdx: Binary message structure (header, auth, payload, checksum)
- 03_Authentication.mdx: Ed25519 signatures and replay protection
- 04_Opcode_Reference.mdx: Complete reference for 50+ opcodes
- 05_Transport_Layer.mdx: ConnectionPool, PeerConnection, MessageFramer
- 06_Server_Architecture.mdx: TCP/TLS server implementation
- 07_Rate_Limiting.mdx: DoS protection with sliding window algorithm
- 08_Serialization.mdx: Primitive types and payload encoding
- 09_Configuration.mdx: All configuration options and env vars
- 10_Integration.mdx: Node integration and migration guide

Documentation covers the complete OmniProtocol implementation including
wire format, security features, and production deployment considerations.
…ol-documentation-W0SZp

Add comprehensive OmniProtocol MDX documentation
- Add Prometheus + Grafana monitoring section to README.md
- Add Network Ports section with required/optional ports to both files
- Include TCP/UDP protocol requirements for OmniProtocol and WS proxy
- Add default ports note for users with custom configurations
- Add ufw firewall examples for quick setup

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Replace manual bun install with ./install-deps.sh
- Add note about Rust/Cargo requirement for wstcp
- Include Rust installation instructions in full guide

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Fix PromQL: use deriv() instead of rate() for gauge metrics
- Add MetricsCollector.stop() to graceful shutdown sequence
- Rename node_info to node_metadata to avoid metric collision
- Handle division by zero in peer health percentage query
- Add non-Linux fallback for network I/O metrics collection
- Use subshell pattern for monitoring stack shutdown in run script
- Clarify METRICS_PORT comment (node endpoint vs Prometheus server)
- Fix monitoring/README.md env var names and example ports
- Fix MD058 lint: add blank lines around tables in INSTALL.md

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Update all PromQL expressions in system-health.json to use the demos_
prefix that MetricsService automatically applies to all metric names:
- demos_system_cpu_usage_percent
- demos_system_memory_usage_percent
- demos_system_memory_used_bytes
- demos_system_memory_total_bytes
- demos_system_load_average_1m/5m/15m
- demos_service_docker_container_up
- demos_service_port_open

Also fix TLSNotary container label from "tlsn-server" to "tlsn" to match
the displayName used in MetricsCollector.collectDockerHealth().

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Add header comment to prometheus.yml explaining port distinction
- Document that node metrics target must match METRICS_PORT from main .env
- Add "Important Port Distinction" section to README Configuration
- Fix troubleshooting curl example port from 3333 to 9090
- Clarify PROMETHEUS_PORT table entry (server port, not node metrics)

METRICS_PORT (9090) = Demos node metrics endpoint (main .env)
PROMETHEUS_PORT (9091) = Prometheus server external port (monitoring/.env)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Prevent cardinality explosion by removing the peer_id label from the
peer_latency_seconds histogram. Each unique peer would create new time
series, causing unbounded growth.

Aggregated latency across all peers is sufficient for monitoring;
individual peer debugging should use structured logging instead.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
…tion

- Add curl timeout flags (--connect-timeout 1 --max-time 2) to health check
  loops for TLSNotary and Grafana to prevent hanging when services are slow
- Fix MetricsService log message to say "configured port" instead of
  "initialized on port" since the service doesn't bind to the port
- Add URL validation in PeerManager to ensure peerData.url is a non-empty
  string before assignment, logging a warning and skipping invalid entries

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Added Grafana dashboard and a couple of fixes
- Resolved conflict in .env.example (added Prometheus metrics config)
- Resolved conflict in manageExecution.ts (kept L2PS handler)
- Added handleL2PS static method to ServerHandlers

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
… manager query flexibility, and fix default value for transaction hashes
…andling and enhance Poseidon hasher interface
…ure authentication and add node identity certificates.
…ant discovery and delta sync, and enable DTR Manager.
…monic, and update L2PS transaction execution and network management.
@sonarqubecloud
Copy link

Quality Gate failed

Failed conditions
1 Security Hotspot
4.7% Duplication on New Code (required ≤ 3%)

See analysis details on SonarQube Cloud



6 participants