
lint: enable detekt rule: LargeClass #333

Open
AdrianLeeElder wants to merge 27 commits into develop from fix/issue-296

Conversation

@AdrianLeeElder (Contributor) commented Mar 31, 2026

Closes #296

Automated by auto-agent.

Summary by CodeRabbit

  • New Features

    • Added extensive demo-data tooling: deterministic reseed, purge, and freshness checks across analytics, events, sessions, replays, dashboards, profiles, logs, LLM generations, Datadog/infra/DBM/debugger/NDM/SBOM datasets, security, synthetics, and uptime heartbeats to populate demo environments and dashboards.
  • Chores

    • Enabled complexity linting with test-source exclusions and removed a test-level lint suppression.
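
For reference, a detekt.yml fragment matching the change described in the walkthrough might look like the following (the 1700 threshold comes from this PR's summary; the exclude globs are illustrative, not necessarily the ones used here):

```yaml
complexity:
  LargeClass:
    active: true
    threshold: 1700
    # Illustrative test-source exclude globs; the PR's actual list may differ.
    excludes: ['**/test/**', '**/androidTest/**', '**/jvmTest/**']
```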

@coderabbitai bot commented Mar 31, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

📝 Walkthrough

Enabled Detekt LargeClass rule with test-source excludes, removed a matching test suppression, and added multiple internal demo reseed modules/constants that check, purge, and reseed synthetic demo data across ClickHouse, PostgreSQL, and the filesystem.

Changes

Cohort / File(s): Summary

• Detekt Configuration (backend/detekt.yml, ee/backend/detekt.yml):
  Re-enabled complexity:LargeClass (threshold = 1700) and added excludes for various test/platform test source sets.
• Test Suppression (backend/src/test/kotlin/com/moneat/services/NetworkDevicesTest.kt):
  Removed the file-level @file:Suppress("LargeClass") annotation.
• Demo Reseed Constants (backend/src/main/kotlin/com/moneat/config/DemoReseedConstants.kt):
  Added demo project/org UInt64 constants (P1/P2/P3/ORG1) and reseed tuning constants.
• ClickHouse helper (backend/src/main/kotlin/com/moneat/config/DemoReseedClickHouse.kt):
  Added requireClickHouse2xx(response, context) to validate ClickHouse HTTP responses and include a response excerpt in errors.
• Core demo reseed, events/sessions/replays (backend/src/main/kotlin/com/moneat/config/DemoReseedCore.kt, .../DemoReseedAnalytics.kt):
  New core routines to check, purge, and reseed events, sessions, analytics, and replays via ClickHouse INSERT/ALTER/SELECT; operations use best-effort error handling and logging.
• Profiles, filesystem + ClickHouse (backend/src/main/kotlin/com/moneat/config/DemoReseedProfiles.kt):
  Deterministic profile UUID generation; writes profile JSON files to storage, inserts ClickHouse profiles rows, and provides cleanup helpers; uses Exposed transactions and best-effort error handling.
• Dashboards (backend/src/main/kotlin/com/moneat/config/DemoReseedDashboards.kt):
  Adds dashboard seeding routines: count, purge, and insert dashboards/widgets with serialized query/display configs inside a DB transaction.
• Domain-specific reseed modules (backend/src/main/kotlin/com/moneat/config/DemoReseedLogs.kt, .../DemoReseedLlm.kt, .../DemoReseedSynthetics.kt, .../DemoReseedSecurity.kt, .../DemoReseedDatadog.kt, .../DemoReseedInfra.kt, .../DemoReseedUptime.kt):
  Added per-domain check/purge/reseed functions that run ClickHouse SELECT/ALTER/INSERT statements (generative SQL via numbers()/UUID functions) with non-fatal logging on failures.
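
As a rough illustration of the requireClickHouse2xx(response, context) helper described above, a minimal self-contained sketch could be the following. ChResponse is a stand-in for the real response type (likely a Ktor HttpResponse), and the 200-character excerpt length is an assumption:

```kotlin
// Stand-in for the real ClickHouse HTTP response type (not the actual API).
data class ChResponse(val status: Int, val body: String)

// Sketch: fail fast on non-2xx, quoting a short excerpt of the response body.
fun requireClickHouse2xx(response: ChResponse, context: String) {
    require(response.status in 200..299) {
        "$context failed: ClickHouse HTTP ${response.status}: ${response.body.take(200)}"
    }
}
```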

Sequence Diagram(s)

sequenceDiagram
  participant Runner as ReseedRunner
  participant CH as ClickHouse
  participant PG as PostgreSQL
  participant FS as Filesystem

  Runner->>CH: checkFresh...Count() (SELECT count())
  alt needs reseed
    Runner->>CH: purge...() (ALTER TABLE ... DELETE) [best‑effort]
    Runner->>PG: purge/seed hosts/profiles (Exposed transaction / INSERT/UPDATE) [best‑effort]
    Runner->>FS: write/clean demo profile files
    Runner->>CH: reseed...() (INSERT ... SELECT FROM numbers(...)) [multiple domains]
  end
  Runner->>PG: finalize state updates (Exposed transaction)
  Runner->>Runner: log warnings/info (non‑fatal on failure)

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs

Poem

🐰 I hopped through SQL and logs tonight,
I planted demo rows by ClickHouse light.
I purged a few tables and wrote profile files,
Sprinkled UUIDs and timestamps for miles.
Hooray — tiny seeds, big demos, and fewer silent tests!

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

• Docstring Coverage: ⚠️ Warning. Docstring coverage is 18.06%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (4 passed)
• Description Check: ✅ Passed. Check skipped: CodeRabbit's high-level summary is enabled.
• Title Check: ✅ Passed. The PR title 'lint: enable detekt rule: LargeClass' clearly and concisely summarizes the main objective of enabling the LargeClass detekt rule.
• Linked Issues Check: ✅ Passed. The PR fulfills all coding requirements from issue #296: enables the LargeClass rule, sets an appropriate threshold, excludes test directories, and fixes violations by suppressing or removing them.
• Out of Scope Changes Check: ✅ Passed. All changes are scoped to enabling the LargeClass detekt rule and fixing violations; no unrelated modifications or features were introduced.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.


@coderabbitai bot left a comment

Actionable comments posted: 13

🧹 Nitpick comments (4)
backend/src/main/kotlin/com/moneat/config/DemoReseedInfra.kt (1)

67-68: Consider refactoring instead of suppressing LongMethod.

Both reseedKubernetesData() and reseedNdmData() use @Suppress("LongMethod"). Per coding guidelines, the preference is to fix the root cause rather than suppress.

These functions could be split by resource type (e.g., insertK8sPods(), insertK8sNodes(), etc.), which would also make each piece more testable.

For demo data seeding code with low cognitive complexity, this is acceptable as a deferred refactor.

Also applies to: 434-435

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/src/main/kotlin/com/moneat/config/DemoReseedInfra.kt` around lines 67
- 68, reseedKubernetesData() (and reseedNdmData()) are annotated with
`@Suppress`("LongMethod"); instead of suppressing, refactor each into smaller,
resource-focused functions (e.g., insertK8sPods(), insertK8sNodes(),
insertK8sServices(), insertNdmDisks(), insertNdmNodes(), etc.), move the
corresponding insertion logic out of reseedKubernetesData()/reseedNdmData() into
those helpers, have the original reseed* functions call the new helpers in
sequence, and then remove the `@Suppress`("LongMethod") annotations so each new
helper has a short, testable body and reduced cognitive complexity.
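
As a toy illustration of the suggested split (the helper names are the reviewer's examples, not the project's actual code, and the bodies here are placeholders), the long function becomes a short orchestrator:

```kotlin
// Hypothetical helpers, one per resource type, each short and testable.
fun insertK8sPods(): List<String> = (1..2).map { "demo-pod-$it" }
fun insertK8sNodes(): List<String> = (1..2).map { "demo-node-$it" }

// The orchestrator stays tiny, so no @Suppress("LongMethod") is needed.
fun reseedKubernetesData(): List<String> =
    insertK8sPods() + insertK8sNodes()
```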
backend/src/main/kotlin/com/moneat/config/DemoReseedDatadog.kt (2)

180-229: Duplicate profile insertion logic.

The profile insertion logic here (lines 199-229) duplicates the logic in ensureDemoProfileRows() from DemoReseedProfiles.kt (lines 70-111). Both use the same UUID generation pattern, profile types, hosts, and SQL structure.

Consider extracting this into a shared helper or calling ensureDemoProfileRows() directly to avoid drift between implementations.

Suggested approach

Replace lines 180-229 with a call to the existing function:

// Profiles — use shared function for consistency
val demoProfileIds = (1..DEMO_PROFILE_COUNT).map { n ->
    "00000000-0000-4000-8000-" + n.toString().padStart(12, '0')
}
ensureDemoProfileRows(demoProfileIds)

This ensures both code paths generate identical profile data.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/src/main/kotlin/com/moneat/config/DemoReseedDatadog.kt` around lines
180 - 229, This block duplicates profile generation and insertion logic already
implemented in ensureDemoProfileRows() (DemoReseedProfiles.kt); remove the
inline generation/INSERT (profileValues/profilesSql and ClickHouseClient.execute
call) and instead compute the deterministic demoProfileIds the same way and call
ensureDemoProfileRows(demoProfileIds) so both paths share the same
implementation and avoid drift (keep the existing logger behavior by relying on
ensureDemoProfileRows' error handling).
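
The deterministic ID scheme used in the suggestion above can be isolated as a small pure function: a fixed UUIDv4-shaped prefix plus a zero-padded 12-digit suffix derived from the index, so reseeding always produces the same IDs:

```kotlin
// Deterministic demo profile ID: same input index always yields the same UUID
// string, which is what lets purge and reseed target identical rows.
fun demoProfileId(n: Int): String =
    "00000000-0000-4000-8000-" + n.toString().padStart(12, '0')
```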

83-84: Consider extracting data-generation helpers instead of suppressing LongMethod.

The @Suppress("LongMethod") annotation works around the detekt finding. Per coding guidelines, the preference is to fix the root cause. The function could be split into smaller helpers (e.g., seedHosts(), seedSpans(), seedProfiles(), etc.) which would also improve testability.

That said, for demo data seeding with low cognitive complexity, this is a lower-priority refactor.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/src/main/kotlin/com/moneat/config/DemoReseedDatadog.kt` around lines
83 - 84, The function reseedDatadogData is too long and currently suppressed;
refactor it by extracting logical sections into smaller helper functions (for
example create seedHosts(), seedSpans(), seedProfiles(), seedServices() or
similarly named suspend helpers) that each encapsulate a coherent piece of the
demo-data generation logic, update reseedDatadogData to call those helpers in
sequence, and remove the `@Suppress`("LongMethod") once the method length is
reduced; keep original behavior and signatures (suspend) and move any local
variables used by multiple steps into parameters or well-scoped return values.
backend/src/main/kotlin/com/moneat/config/DemoReseedProfiles.kt (1)

158-197: Consider extracting helper functions instead of suppressing LongMethod.

The @Suppress("LongMethod") annotation suppresses the detekt finding rather than addressing it. Per coding guidelines, suppressions should be avoided when the root cause can be fixed.

However, given this is a data-generation function producing JSON output, the complexity is relatively low (mostly string construction). If refactoring is deferred, consider adding a brief comment explaining why the suppression is acceptable here.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/src/main/kotlin/com/moneat/config/DemoReseedProfiles.kt` around lines
158 - 197, The buildSentryProfile function is long and currently suppresses the
LongMethod detekt rule; refactor by extracting small helpers (e.g.,
parseModuleAndFunction(name: String): Pair<String,String>,
buildFramesJson(frames: List<String>): String, buildStacksJson(stacks:
List<List<Int>>): String, buildSamplesJson(stackIndicesSize: Int, seed: Int):
String) and move the frameIndex/stackIndices calculation into a short helper
(e.g., computeFrameIndexAndStacks(stacks: List<List<String>>):
Pair<List<String>, List<List<Int>>>), then have buildSentryProfile call those
helpers to assemble the final JSON string; if you choose not to refactor now,
replace the `@Suppress`("LongMethod") with a one-line comment explaining why the
suppression is acceptable for a simple data-generation function and reference
buildSentryProfile so reviewers know the rationale.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@backend/src/main/kotlin/com/moneat/config/DemoReseedAnalytics.kt`:
- Around line 32-40: The suspend function in DemoReseedAnalytics.kt is using
runCatching which can swallow CancellationException and disrupt graceful
shutdown; replace each runCatching call with com.moneat.utils.suspendRunCatching
(preserving the existing lambda body and subsequent getOrElse handling) so
cancellation semantics are preserved—update imports to reference
suspendRunCatching and keep the same return paths (e.g., the block that
currently does ClickHouseClient.execute/query handling and the logger.warn
getOrElse branches) while ensuring the surrounding suspend function signatures
remain unchanged.
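
The suspendRunCatching helper these comments refer to is not shown in the PR excerpt; its presumed shape is a runCatching variant that rethrows CancellationException so coroutine cancellation still propagates. A minimal sketch under that assumption:

```kotlin
import kotlin.coroutines.cancellation.CancellationException

// Sketch of a suspendRunCatching helper (assumed behavior, not the actual
// com.moneat.utils implementation): like runCatching, but cancellation is
// rethrown instead of being captured in the Result.
suspend fun <T> suspendRunCatching(block: suspend () -> T): Result<T> =
    try {
        Result.success(block())
    } catch (e: CancellationException) {
        throw e // never swallow cancellation; structured concurrency depends on it
    } catch (e: Throwable) {
        Result.failure(e)
    }
```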

In `@backend/src/main/kotlin/com/moneat/config/DemoReseedCore.kt`:
- Around line 46-55: Replace usages of runCatching in suspend functions with
suspendRunCatching from com.moneat.utils to avoid swallowing
CancellationException: in purgeOldDemoData() replace runCatching {
ClickHouseClient.execute(query) } and the later runCatching {
ClickHouseClient.execute("ALTER TABLE issues ...") } with suspendRunCatching,
and do the same in reseedEvents(), reseedSessions(), and reseedReplays() where
runCatching wraps calls to ClickHouseClient.execute (or similar suspend calls);
keep the existing onFailure handling (e.g., logger.warn { ... }) but call it on
the suspendRunCatching result so structured concurrency is preserved.
- Around line 24-36: The checkFreshDataCount() function calls
ClickHouseClient.execute(query) without catching exceptions; wrap the execute +
response/body parsing in a try/catch or runCatching (similar to
checkFreshDatadogCount()) to handle ConnectException/TimeoutException and other
failures, log the error or debug message, and return 0 on any exception; ensure
you still check response.status for non-2xx and parse body via
body.trim().toLongOrNull() as before so behavior on success is unchanged.
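
The defensive shape this comment asks for can be sketched as follows; StubResponse and execute() are stand-ins for ClickHouseClient.execute and its response type, which are not shown in the PR excerpt:

```kotlin
// Stand-ins for the real client and response type (illustrative only).
data class StubResponse(val status: Int, val body: String)
fun execute(query: String): StubResponse = StubResponse(200, "42\n")

// Sketch of the suggested pattern: any failure (connect/timeout exception or
// a non-2xx status) degrades to 0, so the outer reseed flow is never aborted.
fun checkFreshDataCount(query: String): Long = try {
    val response = execute(query)
    if (response.status !in 200..299) 0L
    else response.body.trim().toLongOrNull() ?: 0L
} catch (e: Exception) {
    0L // treat infrastructure errors as "no fresh data"
}
```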

In `@backend/src/main/kotlin/com/moneat/config/DemoReseedDatadog.kt`:
- Around line 27-55: Replace usages of runCatching with suspendRunCatching in
this file for all suspend functions (starting with checkFreshDatadogCount) so
coroutine cancellation exceptions are propagated; specifically change the
wrapper around the logic in checkFreshDatadogCount() to call
com.moneat.utils.suspendRunCatching { ... } and do the same for the other
suspend functions noted (the blocks starting at the other listed locations),
preserving the existing success and getOrElse handling but importing/using
suspendRunCatching instead of runCatching.

In `@backend/src/main/kotlin/com/moneat/config/DemoReseedInfra.kt`:
- Around line 45-53: In suspend functions in DemoReseedInfra.kt (notably
checkFreshInfraDataCount(), purgeInfraDemoData(), reseedKubernetesData(),
reseedDbmData(), reseedDebuggerData(), reseedNdmData(), reseedSbomData())
replace runCatching { ... }.getOrElse { ... } with suspendRunCatching { ...
}.getOrElse { ... } from com.moneat.utils so CancellationException isn't
swallowed; keep existing success/error handling but ensure you import
suspendRunCatching and preserve the logger.warn/... return values as before.

In `@backend/src/main/kotlin/com/moneat/config/DemoReseedLlm.kt`:
- Around line 24-35: Wrap the logic inside checkFreshLlmDataCount() with
suspendRunCatching so any non-fatal exceptions from
ClickHouseClient.execute(...) or response.bodyAsText() are captured (but
CancellationException is preserved by suspendRunCatching), and return 0 on
failure; specifically, enclose the query execution, response status check
(response.status.value), and body parsing (body.trim().toLongOrNull()) inside
suspendRunCatching in the checkFreshLlmDataCount function so the function
returns the parsed count on success and 0 on error without aborting the outer
reseed.

In `@backend/src/main/kotlin/com/moneat/config/DemoReseedLogs.kt`:
- Around line 43-48: The purgeLogsDemoData and reseedLogs functions currently
discard ClickHouseClient.execute responses, so non-2xx HTTP responses are
ignored; update both functions (purgeLogsDemoData and reseedLogs) to capture the
execute response, check response.status for 2xx (e.g., 200..299), and treat
non-2xx as failures by throwing or returning an error inside the runCatching
block (or logging an error and rethrowing) similar to how checkFreshLogsCount
inspects response.status; ensure the onFailure handler still logs the failure
(including response.status and body/message) so partial/stale deletions are
detected.

In `@backend/src/main/kotlin/com/moneat/config/DemoReseedProfiles.kt`:
- Around line 49-57: The suspend function ensureDemoProfileRows is using
runCatching which can swallow CancellationException; replace runCatching with
suspendRunCatching (from com.moneat.utils) for the blocks that wrap
ClickHouseClient.execute (the block computing check and the similar blocks
around lines 62-68 and 108) so coroutine cancellation is preserved; update the
three runCatching calls to suspendRunCatching while keeping the same getOrElse {
0L } handling and references to the variables (e.g., check and the other result
variables) and imports as needed.

In `@backend/src/main/kotlin/com/moneat/config/DemoReseedSecurity.kt`:
- Around line 52-53: Remove the `@Suppress`("LongMethod") on the
reseedSecurityData function and refactor the long suspend fun
reseedSecurityData() into smaller, focused helpers: extract SQL-building logic
into one or more functions (e.g., buildUserInsertStatements(),
buildRoleInsertStatements()) and extract execution logic into helpers (e.g.,
executeSqlBatch(), runTransactionalInserts()), then call those helpers from
reseedSecurityData so the method is short and the linter warning is resolved
without suppression.

In `@backend/src/main/kotlin/com/moneat/config/DemoReseedSynthetics.kt`:
- Around line 84-87: The reseed success log is emitted regardless of
ClickHouseClient.execute(syntheticsSql) response; modify the flow in
DemoReseedSynthetics (where runCatching wraps ClickHouseClient.execute and uses
logger) to inspect the returned response object for success (e.g., check
HTTP/ClickHouse status or a success flag) and only log "Synthetics demo data
reseed complete" when the response indicates a successful insert; on
non-successful responses, log a warning or error including response details
(status/code/body) and avoid emitting the completion message, while keeping the
existing onFailure handler for exceptions.
- Around line 50-51: Remove the `@Suppress`("LongMethod") on reseedSyntheticsData
and refactor the function into smaller helpers so Detekt no longer flags a long
method: extract the SQL-building logic into a function like
buildSyntheticsInsertSql(), extract the DB-execution logic into
executeSyntheticsInsert(db: /* appropriate type */, sql: String) or suspend
helper(s) such as prepareSyntheticsStatements() and runSyntheticsInsertions(),
and keep reseedSyntheticsData() as a short orchestration that calls these
helpers; ensure each new helper has a single responsibility and appropriate
signatures so the original method length is reduced and the linter suppression
can be removed.

In `@backend/src/main/kotlin/com/moneat/config/DemoReseedUptime.kt`:
- Around line 38-64: The suspend function DemoReseedUptime.kt currently uses
runCatching around ClickHouseClient.execute calls which can mask coroutine
cancellations and also never validates HTTP response status; replace both
runCatching usages with com.moneat.utils.suspendRunCatching to preserve
CancellationException behavior, and after each ClickHouseClient.execute call
inspect the returned response (the value returned by ClickHouseClient.execute)
for non-2xx/4xx/5xx HTTP status and throw or convert to failure so
.onFailure/.onSuccess logic runs correctly; target the runCatching blocks around
ClickHouseClient.execute and ensure the status validation logic raises an
exception on non-successful responses so errors are handled by the
suspendRunCatching flow.
- Around line 29-31: The Postgres update currently updates all monitors with
organizationId = -1, causing inconsistency with ClickHouse heartbeats which only
target the demoMonitors list; modify the UptimeMonitors.update call to also
filter by IDs in demoMonitors (map each string to UUID) so the WHERE uses
(UptimeMonitors.organizationId eq -1) and (UptimeMonitors.id inList
demoMonitors.map { UUID.fromString(it) }), keeping the existing update body
unchanged so only the two demo monitor rows are updated to match the inserted
heartbeats.

---

Nitpick comments:
In `@backend/src/main/kotlin/com/moneat/config/DemoReseedDatadog.kt`:
- Around line 180-229: This block duplicates profile generation and insertion
logic already implemented in ensureDemoProfileRows() (DemoReseedProfiles.kt);
remove the inline generation/INSERT (profileValues/profilesSql and
ClickHouseClient.execute call) and instead compute the deterministic
demoProfileIds the same way and call ensureDemoProfileRows(demoProfileIds) so
both paths share the same implementation and avoid drift (keep the existing
logger behavior by relying on ensureDemoProfileRows' error handling).
- Around line 83-84: The function reseedDatadogData is too long and currently
suppressed; refactor it by extracting logical sections into smaller helper
functions (for example create seedHosts(), seedSpans(), seedProfiles(),
seedServices() or similarly named suspend helpers) that each encapsulate a
coherent piece of the demo-data generation logic, update reseedDatadogData to
call those helpers in sequence, and remove the `@Suppress`("LongMethod") once the
method length is reduced; keep original behavior and signatures (suspend) and
move any local variables used by multiple steps into parameters or well-scoped
return values.

In `@backend/src/main/kotlin/com/moneat/config/DemoReseedInfra.kt`:
- Around line 67-68: reseedKubernetesData() (and reseedNdmData()) are annotated
with `@Suppress`("LongMethod"); instead of suppressing, refactor each into
smaller, resource-focused functions (e.g., insertK8sPods(), insertK8sNodes(),
insertK8sServices(), insertNdmDisks(), insertNdmNodes(), etc.), move the
corresponding insertion logic out of reseedKubernetesData()/reseedNdmData() into
those helpers, have the original reseed* functions call the new helpers in
sequence, and then remove the `@Suppress`("LongMethod") annotations so each new
helper has a short, testable body and reduced cognitive complexity.

In `@backend/src/main/kotlin/com/moneat/config/DemoReseedProfiles.kt`:
- Around line 158-197: The buildSentryProfile function is long and currently
suppresses the LongMethod detekt rule; refactor by extracting small helpers
(e.g., parseModuleAndFunction(name: String): Pair<String,String>,
buildFramesJson(frames: List<String>): String, buildStacksJson(stacks:
List<List<Int>>): String, buildSamplesJson(stackIndicesSize: Int, seed: Int):
String) and move the frameIndex/stackIndices calculation into a short helper
(e.g., computeFrameIndexAndStacks(stacks: List<List<String>>):
Pair<List<String>, List<List<Int>>>), then have buildSentryProfile call those
helpers to assemble the final JSON string; if you choose not to refactor now,
replace the `@Suppress`("LongMethod") with a one-line comment explaining why the
suppression is acceptable for a simple data-generation function and reference
buildSentryProfile so reviewers know the rationale.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 2780f24f-4159-42a5-b354-9a35dd3f92b2

📥 Commits

Reviewing files that changed from the base of the PR and between e0f7261 and f73f079.

📒 Files selected for processing (12)
  • backend/src/main/kotlin/com/moneat/config/DemoReseedAnalytics.kt
  • backend/src/main/kotlin/com/moneat/config/DemoReseedConstants.kt
  • backend/src/main/kotlin/com/moneat/config/DemoReseedCore.kt
  • backend/src/main/kotlin/com/moneat/config/DemoReseedDashboards.kt
  • backend/src/main/kotlin/com/moneat/config/DemoReseedDatadog.kt
  • backend/src/main/kotlin/com/moneat/config/DemoReseedInfra.kt
  • backend/src/main/kotlin/com/moneat/config/DemoReseedLlm.kt
  • backend/src/main/kotlin/com/moneat/config/DemoReseedLogs.kt
  • backend/src/main/kotlin/com/moneat/config/DemoReseedProfiles.kt
  • backend/src/main/kotlin/com/moneat/config/DemoReseedSecurity.kt
  • backend/src/main/kotlin/com/moneat/config/DemoReseedSynthetics.kt
  • backend/src/main/kotlin/com/moneat/config/DemoReseedUptime.kt


@coderabbitai bot left a comment

Actionable comments posted: 6

🧹 Nitpick comments (6)
backend/src/main/kotlin/com/moneat/config/DemoReseedProfiles.kt (2)

63-69: Consider validating ClickHouse response status.

The purge operation doesn't validate HTTP response status. Use requireClickHouse2xx for consistency with other DemoReseed modules.

Suggested fix
     suspendRunCatching {
-        ClickHouseClient.execute(
-            "ALTER TABLE profiles DELETE WHERE organization_id = $ORG1"
+        requireClickHouse2xx(
+            ClickHouseClient.execute("ALTER TABLE profiles DELETE WHERE organization_id = $ORG1"),
+            "Purge old demo profiles"
         )
     }.onFailure {
         logger.warn { "Purge old demo profiles failed (non-fatal): ${it.message}" }
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/src/main/kotlin/com/moneat/config/DemoReseedProfiles.kt` around lines
63 - 69, The ClickHouse purge call currently ignores the HTTP response and only
logs failures; update the suspendRunCatching block that calls
ClickHouseClient.execute(...) inside DemoReseedProfiles to validate the response
with requireClickHouse2xx so non-2xx responses are treated as failures. Replace
or wrap the execute(...) invocation so its result is passed into
requireClickHouse2xx(...) and keep the existing onFailure { logger.warn { "Purge
old demo profiles failed (non-fatal): ${it.message}" } } behavior to preserve
logging on error.

109-111: Consider validating ClickHouse response status for insert.

Same as purge—use requireClickHouse2xx for consistency.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/src/main/kotlin/com/moneat/config/DemoReseedProfiles.kt` around lines
109 - 111, The ClickHouse insert uses suspendRunCatching {
ClickHouseClient.execute(sql) } but doesn't validate HTTP/status success;
replace the current success/failure handling to call requireClickHouse2xx on the
response returned by ClickHouseClient.execute(sql) (similar to the purge flow)
so non-2xx responses are treated as failures, and then log via logger.warn on
failure and logger.info with profileIds.size on success; update references
around suspendRunCatching, ClickHouseClient.execute, and requireClickHouse2xx
accordingly.
backend/src/main/kotlin/com/moneat/config/DemoReseedLlm.kt (2)

45-54: Consider validating ClickHouse response status for consistency.

Unlike purgeLogsDemoData() in DemoReseedLogs.kt which uses requireClickHouse2xx(), this function doesn't validate the HTTP response. A 4xx/5xx from ClickHouse would be silently ignored (only exceptions are caught). This could leave stale data if the DELETE fails with an error response.

Suggested fix
 internal suspend fun purgeLlmDemoData() {
     suspendRunCatching {
-        ClickHouseClient.execute("ALTER TABLE llm_generations DELETE WHERE project_id IN ($P1, $P2, $P3)")
+        requireClickHouse2xx(
+            ClickHouseClient.execute("ALTER TABLE llm_generations DELETE WHERE project_id IN ($P1, $P2, $P3)"),
+            "Purge llm_generations"
+        )
     }.onFailure { logger.warn { "Purge llm_generations failed (non-fatal): ${it.message}" } }
 
     // SummingMergeTree materialized rows need explicit cleanup.
     suspendRunCatching {
-        ClickHouseClient.execute("ALTER TABLE llm_generations_hourly_mv DELETE WHERE project_id IN ($P1, $P2, $P3)")
+        requireClickHouse2xx(
+            ClickHouseClient.execute("ALTER TABLE llm_generations_hourly_mv DELETE WHERE project_id IN ($P1, $P2, $P3)"),
+            "Purge llm_generations_hourly_mv"
+        )
     }.onFailure { logger.warn { "Purge llm_generations_hourly_mv failed (non-fatal): ${it.message}" } }
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/src/main/kotlin/com/moneat/config/DemoReseedLlm.kt` around lines 45 -
54, purgeLlmDemoData currently calls ClickHouseClient.execute(...) but doesn't
validate non-exception HTTP error responses; update the two execute calls inside
purgeLlmDemoData to capture the response and pass it to
requireClickHouse2xx(...) (same pattern used in purgeLogsDemoData) so 4xx/5xx
responses are detected and handled, keeping the suspendRunCatching wrappers to
log non-fatal failures; reference ClickHouseClient.execute and
requireClickHouse2xx in your change.

260-262: Consider validating ClickHouse response status.

Same as purge operations—non-2xx responses won't be detected.

Suggested fix
-    suspendRunCatching { ClickHouseClient.execute(sql) }
+    suspendRunCatching {
+        requireClickHouse2xx(ClickHouseClient.execute(sql), "Reseed llm_generations")
+    }
         .onFailure { logger.warn { "Reseed llm_generations failed (non-fatal): ${it.message}" } }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/src/main/kotlin/com/moneat/config/DemoReseedLlm.kt` around lines 260
- 262, The call to ClickHouseClient.execute(sql) is wrapped in
suspendRunCatching but currently ignores non-2xx ClickHouse responses; change
the block to capture the execute response (e.g., val resp =
ClickHouseClient.execute(sql)) and explicitly check its status/code (e.g.,
resp.status()/resp.code() or resp.isSuccess) and treat non-success codes as
failures so they go to the onFailure path; update the onFailure log in the
suspendRunCatching chain (the current logger.warn { "Reseed llm_generations
failed (non-fatal): ${it.message}" } line) to include the response code/body
when non-2xx is detected so unexpected ClickHouse responses are surfaced.
backend/src/main/kotlin/com/moneat/config/DemoReseedDatadog.kt (1)

70-73: Consider validating ClickHouse response status in purge loop.

The purge operations don't validate HTTP response status. Non-2xx responses would be silently ignored.

Suggested fix
     for (table in tables) {
         suspendRunCatching {
-            ClickHouseClient.execute("ALTER TABLE $table DELETE WHERE organization_id = $ORG1")
+            requireClickHouse2xx(
+                ClickHouseClient.execute("ALTER TABLE $table DELETE WHERE organization_id = $ORG1"),
+                "Purge $table"
+            )
         }.onFailure { logger.warn { "Purge $table failed (non-fatal): ${it.message}" } }
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/src/main/kotlin/com/moneat/config/DemoReseedDatadog.kt` around lines
70 - 73, The purge loop uses suspendRunCatching { ClickHouseClient.execute(...)
} but doesn’t inspect the HTTP response; change it to capture the response from
ClickHouseClient.execute, check its HTTP/status code (e.g., non-2xx) and treat
that as a failure (log via logger.warn or call .onFailure) instead of ignoring
it; update the block around suspendRunCatching/ClickHouseClient.execute to
inspect the response status and include response body or status in the log
message when non-2xx is returned so purge failures aren’t silently ignored.
backend/src/main/kotlin/com/moneat/config/DemoReseedDashboards.kt (1)

100-110: Model layout metadata instead of passing raw ints.

insertWidget currently takes x, y, w, h, and order as positional primitives, which is why the seed definitions below are packed with hard-coded coordinates/sizes and other raw literals. A small layout/spec type plus named constants for repeated limits would make these dashboards much easier to audit.

As per coding guidelines, "Use named constants instead of magic numbers (e.g., const val MAX_RETRIES = 5 instead of if (count > 5))".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/src/main/kotlin/com/moneat/config/DemoReseedDashboards.kt` around
lines 100 - 110, insertWidget currently takes raw positional ints (x, y, w, h,
order); refactor by introducing a small layout/spec model (e.g., a data class
Layout or WidgetLayout and an enum/values for common sizes/limits) and replace
those primitive parameters with a single layout parameter (and optional order if
needed) in the insertWidget signature and all call sites; also create named
constants (const val) for repeated magic numbers like default width/height and
grid limits and use them when constructing Layout instances in the dashboard
seed definitions to eliminate hard-coded literals and make coordinates/sizes
self-describing.
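A minimal sketch of that refactor; `WidgetLayout`, the default constants, and the `insertWidget` signature shown here are all illustrative, not the real code:

```kotlin
// Hypothetical layout model replacing the positional (x, y, w, h) ints.
data class WidgetLayout(val x: Int, val y: Int, val w: Int, val h: Int)

const val DEFAULT_WIDGET_W = 6
const val DEFAULT_WIDGET_H = 4

fun insertWidget(dashboardId: Long, title: String, layout: WidgetLayout, order: Int = 0) {
    // existing insert logic, now reading layout.x / layout.y / layout.w / layout.h
}

// Call sites become self-describing:
val topRow = WidgetLayout(x = 0, y = 0, w = DEFAULT_WIDGET_W, h = DEFAULT_WIDGET_H)
```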
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@backend/src/main/kotlin/com/moneat/config/DemoReseedAnalytics.kt`:
- Around line 33-41: The suspendRunCatching lambda contains a non-local "return
0" which exits the surrounding function and bypasses the getOrElse error
handler; in the block that calls ClickHouseClient.execute inside
suspendRunCatching, replace the non-local return with a local expression (e.g.,
make the if (response.status.value !in 200..299) branch evaluate to 0 rather
than using "return 0") so the lambda returns a value normally and getOrElse can
handle exceptions; look at the suspendRunCatching block,
ClickHouseClient.execute call, response.bodyAsText(), and
body.trim().toLongOrNull() to implement this change.

In `@backend/src/main/kotlin/com/moneat/config/DemoReseedDashboards.kt`:
- Around line 67-70: The delete should remove child widget rows first: before
calling Dashboards.deleteWhere(...) in DemoReseedDashboards (function in
DemoReseedDashboards.kt), query DashboardWidgets where
DashboardWidgets.dashboardId is in the set of demo dashboard IDs (select
Dashboards.id where orgId eq DEMO_ORG_ID and createdBy eq DEMO_USER_ID) and
delete those widget rows, then delete the Dashboards; also add the import for
org.jetbrains.exposed.v1.core.inList to enable the "inList" call used to match
dashboardId against the dashboard ID set.

In `@backend/src/main/kotlin/com/moneat/config/DemoReseedDatadog.kt`:
- Around line 84-108: The long reseedDatadogData() function should be split into
focused helper suspending functions (e.g., reseedHosts(), reseedApmSpans(),
reseedProfiles(), etc.), remove the `@Suppress`("LongMethod") and move each large
transaction/exec block into its own helper so each helper handles one data type;
keep the existing suspendRunCatching logging behavior by either wrapping each
helper call in suspendRunCatching { transaction { ... } } or have the helper
perform its own transaction/exec calls (referencing transaction and exec in the
diff), then have reseedDatadogData simply call these helpers in sequence and
surface/aggregate their failures with the same logger.warn pattern.
- Around line 47-48: The code uses a non-local `return 0` inside the lambda
handling `response`, which exits the outer function and bypasses the surrounding
`getOrElse` error handler; replace the `return 0` with throwing an exception
(e.g., `throw IllegalStateException("Unexpected response status:
${response.status.value}")`) so the error propagates into the `getOrElse` path,
allowing the handler to run and then let `val cnt =
response.bodyAsText().trim().toLongOrNull() ?: 0` execute only on successful
responses.

In `@backend/src/main/kotlin/com/moneat/config/DemoReseedLogs.kt`:
- Around line 33-41: The lambda inside suspendRunCatching in
checkFreshLogsCount() uses a non-local `return 0` which exits the containing
function and bypasses the getOrElse error handler; change that to a local
expression (use `0L`) and make the other fallbacks consistent
(body.trim().toLongOrNull() ?: 0L and getOrElse { ... 0L }) so the lambda always
returns Long locally and errors are still handled by getOrElse.

In `@backend/src/main/kotlin/com/moneat/config/DemoReseedProfiles.kt`:
- Around line 159-198: The buildSentryProfile function currently suppresses the
LongMethod warning; instead remove the `@Suppress`("LongMethod") and split
buildSentryProfile into small helpers: extract frame collection and index logic
into a function like computeFramesAndIndex(service: String): Pair<List<String>,
Map<String,Int>> (used where frames and frameIndex are built), move the frames
-> JSON mapping into buildFramesJson(frames: List<String>): String (responsible
for splitting module/function and producing the JSON pieces), move stack index
mapping into buildStacksJson(stackIndices: List<List<Int>>): String, and move
sample generation into buildSamplesJson(sampleCount: Int, stackCount: Int):
String; then have buildSentryProfile orchestrate these helpers (calling
SERVICE_STACKS lookup, computeFramesAndIndex, building stackIndices, and
composing the final JSON string). Ensure each helper has a single responsibility
and update names referenced above (computeFramesAndIndex, buildFramesJson,
buildStacksJson, buildSamplesJson, buildSentryProfile) accordingly.

---

Nitpick comments:


In `@backend/src/main/kotlin/com/moneat/config/DemoReseedLlm.kt`:
- Around line 45-54: purgeLlmDemoData currently calls
ClickHouseClient.execute(...) but doesn't validate non-exception HTTP error
responses; update the two execute calls inside purgeLlmDemoData to capture the
response and pass it to requireClickHouse2xx(...) (same pattern used in
purgeLogsDemoData) so 4xx/5xx responses are detected and handled, keeping the
suspendRunCatching wrappers to log non-fatal failures; reference
ClickHouseClient.execute and requireClickHouse2xx in your change.

In `@backend/src/main/kotlin/com/moneat/config/DemoReseedProfiles.kt`:
- Around line 63-69: The ClickHouse purge call currently ignores the HTTP
response and only logs failures; update the suspendRunCatching block that calls
ClickHouseClient.execute(...) inside DemoReseedProfiles to validate the response
with requireClickHouse2xx so non-2xx responses are treated as failures. Replace
or wrap the execute(...) invocation so its result is passed into
requireClickHouse2xx(...) and keep the existing onFailure { logger.warn { "Purge
old demo profiles failed (non-fatal): ${it.message}" } } behavior to preserve
logging on error.
- Around line 109-111: The ClickHouse insert uses suspendRunCatching {
ClickHouseClient.execute(sql) } but doesn't validate HTTP/status success;
replace the current success/failure handling to call requireClickHouse2xx on the
response returned by ClickHouseClient.execute(sql) (similar to the purge flow)
so non-2xx responses are treated as failures, and then log via logger.warn on
failure and logger.info with profileIds.size on success; update references
around suspendRunCatching, ClickHouseClient.execute, and requireClickHouse2xx
accordingly.
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 782f2638-796a-400b-98bf-85e8a2ba0d01

📥 Commits

Reviewing files that changed from the base of the PR and between f73f079 and e2350d8.

📒 Files selected for processing (13)
  • backend/src/main/kotlin/com/moneat/config/DemoReseedAnalytics.kt
  • backend/src/main/kotlin/com/moneat/config/DemoReseedClickHouse.kt
  • backend/src/main/kotlin/com/moneat/config/DemoReseedCore.kt
  • backend/src/main/kotlin/com/moneat/config/DemoReseedDashboards.kt
  • backend/src/main/kotlin/com/moneat/config/DemoReseedDatadog.kt
  • backend/src/main/kotlin/com/moneat/config/DemoReseedInfra.kt
  • backend/src/main/kotlin/com/moneat/config/DemoReseedLlm.kt
  • backend/src/main/kotlin/com/moneat/config/DemoReseedLogs.kt
  • backend/src/main/kotlin/com/moneat/config/DemoReseedProfiles.kt
  • backend/src/main/kotlin/com/moneat/config/DemoReseedSecurity.kt
  • backend/src/main/kotlin/com/moneat/config/DemoReseedSynthetics.kt
  • backend/src/main/kotlin/com/moneat/config/DemoReseedUptime.kt
  • ee/backend/detekt.yml
✅ Files skipped from review due to trivial changes (2)
  • backend/src/main/kotlin/com/moneat/config/DemoReseedSynthetics.kt
  • backend/src/main/kotlin/com/moneat/config/DemoReseedInfra.kt
🚧 Files skipped from review as they are similar to previous changes (2)
  • backend/src/main/kotlin/com/moneat/config/DemoReseedSecurity.kt
  • backend/src/main/kotlin/com/moneat/config/DemoReseedCore.kt


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🧹 Nitpick comments (1)
backend/src/main/kotlin/com/moneat/config/DemoReseedInfra.kt (1)

68-112: Pull the shared seed knobs out of the SQL literal.

This first builder already hard-codes the seed count, time offsets, cluster metadata, and repeated labels directly in the SQL, and the same pattern repeats through the other builders. Moving those values into named constants or small data structures will make the demo dataset easier to tune and reduce drift between builders.

As per coding guidelines: **/*.kt: Use named constants instead of magic numbers (e.g., const val MAX_RETRIES = 5 instead of if (count > 5)).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/src/main/kotlin/com/moneat/config/DemoReseedInfra.kt` around lines 68-112, the SQL builder k8sPodsInsertSql embeds repeated magic values (seed count
15, arrays for namespaces, pod names, statuses, app labels, cluster metadata,
and time offsets) directly in the string; extract these into named Kotlin
constants or small data structures (e.g., private const val K8S_SEED_COUNT = 15,
lists/arrays for NAMESPACES, POD_NAMES, STATUSES, APPS, CLUSTER_NAME/ID, and
time offset functions) and reference them when building the SQL (use string
templates to inject constants or generate the numbers(NUMBER) source
dynamically) so the seed knobs are centralized and reusable across other
builders like the other insert functions. Ensure you update k8sPodsInsertSql to
use these constants instead of hard-coded literals and adjust the FROM
numbers(...) call to use K8S_SEED_COUNT.
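A sketch of that extraction under assumed names; the `k8s_resources` columns shown are illustrative, not the real table definition:

```kotlin
// Centralized seed knobs, shared by the SQL builders.
private const val K8S_SEED_COUNT = 15
private val K8S_NAMESPACES = listOf("default", "kube-system", "payments")
private const val K8S_CLUSTER_NAME = "demo-cluster"

private fun k8sPodsInsertSql(orgId: Long): String {
    // Render the namespace list as a ClickHouse array literal: ['default', ...]
    val nsArray = K8S_NAMESPACES.joinToString(", ", "[", "]") { "'$it'" }
    return """
        INSERT INTO k8s_resources (organization_id, cluster, namespace, name)
        SELECT $orgId,
               '$K8S_CLUSTER_NAME',
               arrayElement($nsArray, toUInt32(number % ${K8S_NAMESPACES.size}) + 1),
               concat('pod-', toString(number))
        FROM numbers($K8S_SEED_COUNT)
    """.trimIndent()
}
```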
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@backend/src/main/kotlin/com/moneat/config/DemoReseedInfra.kt`:
- Around line 38-55: The current checkFreshInfraDataCount() only queries
k8s_resources but is used to gate reseedDbmData(), reseedDebuggerData(),
reseedNdmData(), and reseedSbomData() in DemoDataReseeder (used around lines 52
and 117-127), so a partial failure leaves some domains stale while k8s rows are
fresh; fix by either expanding checkFreshInfraDataCount() to include counts for
every infra table (e.g., add SUM/UNION counts for dbm, debugger, ndm, sbom
tables in the ClickHouse SQL and return a combined indicator) or replace it with
per-domain checks (implement checkFreshDbmDataCount(),
checkFreshDebuggerDataCount(), checkFreshNdmDataCount(),
checkFreshSbomDataCount()) and use those specific checks before calling
reseedDbmData(), reseedDebuggerData(), reseedNdmData(), and reseedSbomData();
keep the same suspendRunCatching/error-logging pattern and return 0 on failures
so existing control flow in DemoDataReseeder continues to work safely.
- Around line 57-64: The DELETE/INSERT calls using ClickHouseClient.execute
(e.g., inside purgeInfraDemoData and reseedKubernetesData) currently discard
HttpResponse and thus miss non-2xx ClickHouse errors; update each mutation call
to capture the HttpResponse, read the body as text, and validate it exactly like
executeWithFormat does: check response.status and use
check(!response.isClickHouseError(body)) (or throw/log with the body) so
failures are detected; replace the suspendRunCatching wrapper around
ClickHouseClient.execute in purgeInfraDemoData (and the other listed mutation
functions) with code that inspects response.status.value and calls
isClickHouseError(body) before treating the mutation as successful.
- Around line 304-313: The SQL list inside the triple-quoted string in
DemoReseedInfra.kt contains PostgreSQL positional placeholders ($1, $2, $3,
etc.) which Kotlin will treat as string templates; update every occurrence of
these placeholders in that triple-quoted SQL block (the queries like the
SELECT/UPDATE/INSERT/DELETE lines) to use Kotlin's template-escape form (i.e.
replace $1, $2, $3 ... with ${'$'}1, ${'$'}2, ${'$'}3, etc.) so the placeholders
are preserved as literal dollar-sign placeholders in the generated string.
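The recommended escape form in a runnable sketch (table and column names are illustrative): `${'$'}` evaluates to a single dollar sign, so the placeholders reach PostgreSQL intact:

```kotlin
val updateHostSql = """
    UPDATE hosts SET last_seen_at = ${'$'}1 WHERE hostname = ${'$'}2
""".trimIndent()
// updateHostSql == "UPDATE hosts SET last_seen_at = $1 WHERE hostname = $2"
```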

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 4097bddf-b123-4172-87e7-cf95ab9ee7ed

📥 Commits

Reviewing files that changed from the base of the PR and between e2350d8 and 6e9e528.

📒 Files selected for processing (3)
  • backend/src/main/kotlin/com/moneat/config/DemoReseedDatadog.kt
  • backend/src/main/kotlin/com/moneat/config/DemoReseedInfra.kt
  • backend/src/main/kotlin/com/moneat/config/DemoReseedProfiles.kt
🚧 Files skipped from review as they are similar to previous changes (2)
  • backend/src/main/kotlin/com/moneat/config/DemoReseedProfiles.kt
  • backend/src/main/kotlin/com/moneat/config/DemoReseedDatadog.kt


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🧹 Nitpick comments (3)
backend/src/main/kotlin/com/moneat/config/DemoReseedDatadog.kt (2)

43-45: Extract the magic number to a named constant.

The `2` in `INTERVAL 2 HOUR` is a magic number controlling the freshness-check window. Per the coding guidelines, it should be a named constant.

Proposed fix

Add a constant in the file or in DemoReseedConstants.kt:

private const val FRESHNESS_CHECK_WINDOW_HOURS = 2

Then update the query:

-                    AND $timeCol >= now() - INTERVAL 2 HOUR
+                    AND $timeCol >= now() - INTERVAL $FRESHNESS_CHECK_WINDOW_HOURS HOUR
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/src/main/kotlin/com/moneat/config/DemoReseedDatadog.kt` around lines 43-45, replace the magic literal "2" used in the SQL fragment "INTERVAL 2
HOUR" with a named constant (e.g., FRESHNESS_CHECK_WINDOW_HOURS) to make the
freshness window explicit; add a private const val FRESHNESS_CHECK_WINDOW_HOURS
= 2 either at top of DemoReseedDatadog.kt or in DemoReseedConstants.kt, then
update the query string in DemoReseedDatadog (the multiline SQL using orgCol,
timeCol and ORG1) to interpolate that constant into the INTERVAL clause so the
SQL uses INTERVAL ${FRESHNESS_CHECK_WINDOW_HOURS} HOUR.

99-122: Consider parameterized queries for SQL values.

While the host data is hardcoded and currently safe from injection, string interpolation in SQL ('${h[0]}') establishes a pattern that could be problematic if copied elsewhere. For internal demo tooling this is acceptable, but parameterized queries would be safer.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/src/main/kotlin/com/moneat/config/DemoReseedDatadog.kt` around lines 99-122, the reseedDatadogHostsPostgres function builds SQL with string
interpolation of hostData (see hostData list and the exec(...) call), which
encourages unsafe patterns; replace the interpolated SQL with parameterized
statements or use Exposed's DSL/batchInsert to bind values instead of embedding
'${h[n]}' into the SQL. Specifically, change the loop that calls exec(...) to
either prepare a single parameterized INSERT/ON CONFLICT statement and set
parameters from h (hostname, os, platform, processor, cpu_cores,
memory_total_kb, agent_version) for each row, or use transaction {
HostsTable.batchInsert(hostData) { ... } } so values are bound safely and the ON
CONFLICT update for last_seen_at is applied via the DSL or a prepared statement.
Ensure you keep the same fields and the ON CONFLICT behavior while removing
direct string interpolation.
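A parameterized sketch using plain JDBC for concreteness (Exposed's `batchInsert` binds values the same way); the column list and `hostData` indices are assumptions:

```kotlin
import java.sql.Connection

fun upsertHosts(connection: Connection, hostData: List<List<String>>) {
    val sql = """
        INSERT INTO hosts (hostname, os, platform, agent_version)
        VALUES (?, ?, ?, ?)
        ON CONFLICT (hostname) DO UPDATE SET last_seen_at = now()
    """.trimIndent()
    connection.prepareStatement(sql).use { stmt ->
        for (h in hostData) {
            stmt.setString(1, h[0])   // hostname
            stmt.setString(2, h[1])   // os
            stmt.setString(3, h[2])   // platform
            stmt.setString(4, h[3])   // agent_version
            stmt.addBatch()
        }
        stmt.executeBatch()           // values are bound, never concatenated into SQL
    }
}
```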
backend/src/main/kotlin/com/moneat/config/DemoReseedLogs.kt (1)

61-181: Extract repeated numeric literals into named constants.

This block embeds many magic numbers (8, 200, 300, 50, 90000, etc.), which makes tuning and intent-tracing harder over time. Please centralize these as constants (likely in DemoReseedConstants.kt) and compose the SQL from those names.

As per coding guidelines, "Use named constants instead of magic numbers (e.g., const val MAX_RETRIES = 5 instead of if (count > 5))."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/src/main/kotlin/com/moneat/config/DemoReseedLogs.kt` around lines 61-181, the SQL-building code embeds many magic numbers (e.g., 8, 200, 300, 50,
90000, the CASE modulus values, and the bucket/seed constants referenced in sql
and msgCase/tagsServiceCase/tagsEnvCase), so extract these numeric literals into
named constants in a new DemoReseedConstants.kt and replace inline literals in
the DemoReseedLogs.kt expressions (referencing msgCase, tagsServiceCase,
tagsEnvCase, sql,
LOG_BUCKET_1_MAX/LOG_BUCKET_2_MAX/LOG_BUCKET_3_MAX/LOG_BUCKET_4_BASE_MINUTES,
and LOG_SEED_ROWS) with the new constant names; keep semantics identical (use
constants for CASE moduli, offsets, time/window sizes, status-code ranges, id
ranges like 90000, and message timing values like 200/300) and update the SQL
string composition to reference those constants.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@backend/src/main/kotlin/com/moneat/config/DemoReseedDatadog.kt`:
- Around line 96-97: The duplicate completion message occurs because
reseedDatadogData() already logs "Datadog agent demo data reseed complete" after
orchestration, while the private helper reseedDatadogNetworkConnections() logs
the same message again; remove the logger.info call from
reseedDatadogNetworkConnections() so only reseedDatadogData() emits the final
completion log, leaving all other logging in the helper intact.

In `@backend/src/main/kotlin/com/moneat/config/DemoReseedLogs.kt`:
- Around line 50-53: The ALTER TABLE DELETE mutation in DemoReseedLogs.kt is
asynchronous and can race with the reseed; update the purge request that calls
"ALTER TABLE logs DELETE WHERE organization_id = $P1 OR organization_id = 0" so
the ClickHouse call waits for completion (e.g. append/attach "SETTINGS
mutations_sync = 2" to the SQL or send the equivalent query/session setting)
before proceeding to the reseed; ensure the change is applied where the response
is passed to requireClickHouse2xx(response, "Purge logs") so the subsequent
.onFailure/.onSuccess flow only runs after the mutation finishes.
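A sketch of the synchronous purge statement; `$P1` follows the org-constant naming used in this file, and `mutations_sync = 2` is the ClickHouse setting that makes the query block until the mutation has completed on all replicas:

```kotlin
// With mutations_sync = 2, the ALTER ... DELETE no longer returns while the
// mutation is still pending, so the subsequent reseed cannot race the purge.
val purgeLogsSql = """
    ALTER TABLE logs DELETE WHERE organization_id = $P1 OR organization_id = 0
    SETTINGS mutations_sync = 2
""".trimIndent()
```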

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 29b5b938-b4c1-4509-bd86-25deea5834ec

📥 Commits

Reviewing files that changed from the base of the PR and between 6e9e528 and 01298db.

📒 Files selected for processing (4)
  • backend/src/main/kotlin/com/moneat/config/DemoReseedAnalytics.kt
  • backend/src/main/kotlin/com/moneat/config/DemoReseedDashboards.kt
  • backend/src/main/kotlin/com/moneat/config/DemoReseedDatadog.kt
  • backend/src/main/kotlin/com/moneat/config/DemoReseedLogs.kt
🚧 Files skipped from review as they are similar to previous changes (2)
  • backend/src/main/kotlin/com/moneat/config/DemoReseedAnalytics.kt
  • backend/src/main/kotlin/com/moneat/config/DemoReseedDashboards.kt

Use the same threshold (1700) and test-path excludes as backend/detekt.yml
so enterprise Kotlin is checked consistently in CI.

Made-with: Cursor
Remove duplicated profile INSERT logic from DemoReseedDatadog.kt and
delegate to ensureDemoProfileRows in DemoReseedProfiles.kt. Log Datadog
reseed completion once at the end of reseedDatadogData() instead of
inside reseedDatadogNetworkConnections().

Made-with: Cursor
Turn on complexity LargeClass with threshold 1700 (above detekt default
600) and standard test source excludes, matching incremental split of
large production types. EE config mirrors core.

No other detekt rule changes.

Made-with: Cursor
The prior commit disabled MaxLineLength, MatchingDeclarationName, and
FunctionNaming (core) and MaxLineLength (EE) while LargeClass was already
enabled. Restore those settings so only LargeClass is the intentional
delta vs main; detekt still passes.

Made-with: Cursor
@github-actions
Contributor

github-actions bot commented Apr 3, 2026
