feat: anonymous usage analytics + fix all code scanning alerts #59

WellDunDun merged 4 commits into main
Conversation
Analytics:
- Add fire-and-forget anonymous telemetry (no PII, easy opt-out)
- Add `selftune telemetry` CLI command (status/enable/disable)
- Show telemetry disclosure during `selftune init`
- Track command_run events with lazy import to avoid startup cost
- 17 tests covering privacy, fire-and-forget, and opt-out behavior

Security fixes (resolves all 14 CodeQL alerts):
- Fix incomplete regex sanitization in llm-call.ts (js/incomplete-sanitization)
- Pin Docker base images to SHA256 digests (3 Dockerfiles)
- Pin curl-piped Bun installs to bun-v1.3.10 (4 instances)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
📝 Walkthrough

Adds a privacy-preserving, opt-outable analytics subsystem and CLI telemetry command, integrates disclosure notices into the init flow, extends config types, strengthens markdown fence escaping, pins Node/Bun images by digest/version, and bumps the package version to 0.2.3.
Sequence Diagram(s)

sequenceDiagram
participant CLI as CLI Command
participant Analytics as Analytics Module
participant FS as Local Config/FS
participant Endpoint as Telemetry Endpoint
CLI->>Analytics: trackEvent(eventName, props)
Analytics->>FS: isAnalyticsEnabled() (env/config/CI)
FS-->>Analytics: enabled / disabled
alt enabled
Analytics->>Analytics: buildEvent(eventName, props)<br/>(anonymous_id, os, arch, versions)
Analytics->>Endpoint: POST /v1/events (async, 3s timeout, fire-and-forget)
Endpoint-->>Analytics: response (ignored)
else disabled
Analytics-->>CLI: return immediately (no network)
end
Analytics-->>CLI: return void (non-blocking)
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes

Possibly related PRs
🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Actionable comments posted: 4
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.devcontainer/Dockerfile:
- Line 34: Replace the POSIX /bin/sh usage for the RUN lines that start with
"set -euo pipefail && curl -fsSL https://bun.sh/install | bash -s ..." by
executing the command under bash explicitly (e.g., invoke /bin/bash -lc 'set
-euo pipefail && curl -fsSL https://bun.sh/install | bash -s "bun-v1.3.10"' ),
and apply the same change to the second identical RUN occurrence; this ensures
pipefail is supported by running the install line under bash instead of the
default dash shell.
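Assuming the RUN lines look the way the alert describes, either of two common forms restores pipefail support; the explicit `/bin/bash -lc` invocation is what the comment suggests, and the `SHELL` instruction is the Dockerfile-level alternative. The `bun-v1.3.10` pin is taken from the PR description:

```dockerfile
# Variant 1: run just this pipeline under bash (per the review suggestion)
RUN /bin/bash -lc 'set -euo pipefail && curl -fsSL https://bun.sh/install | bash -s "bun-v1.3.10"'

# Variant 2: switch the shell for every subsequent RUN instruction
SHELL ["/bin/bash", "-o", "pipefail", "-c"]
RUN set -eu && curl -fsSL https://bun.sh/install | bash -s "bun-v1.3.10"
```

Variant 2 avoids repeating the bash wrapper on each RUN line, at the cost of changing the shell for the rest of the stage.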
In `@cli/selftune/analytics.ts`:
- Around line 80-84: getAnonymousId currently derives an identifier from
hostname and username (via cachedAnonymousId), which is reversible; change it to
generate a cryptographically-random, non-PII identifier and persist it so it’s
stable across runs: update getAnonymousId to first try reading a persisted
anonymous id from the app's config/storage, return cachedAnonymousId if set,
otherwise generate a random id (e.g., crypto.randomBytes), store it in the
config and set cachedAnonymousId, and then return it; reference the existing
cachedAnonymousId and getAnonymousId symbols when making this change so you
replace the hostname/username derivation with secure random generation +
persistence.
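The fix the comment asks for can be sketched as follows. The file path and function shape are illustrative stand-ins, not the module's real config handling; the point is that the identifier is random rather than derived from hostname/username, cached in `cachedAnonymousId`, and persisted so it stays stable across runs:

```typescript
import { randomBytes } from "node:crypto";
import { existsSync, mkdirSync, readFileSync, writeFileSync } from "node:fs";
import { dirname } from "node:path";

let cachedAnonymousId: string | null = null;

// Persist-or-generate: reuse a previously stored id, otherwise create a
// random one. Nothing derived from hostname or username enters the id.
function getAnonymousId(idPath: string): string {
  if (cachedAnonymousId) return cachedAnonymousId;
  if (existsSync(idPath)) {
    cachedAnonymousId = readFileSync(idPath, "utf8").trim();
    return cachedAnonymousId;
  }
  const id = randomBytes(16).toString("hex"); // 128 random bits, non-reversible
  mkdirSync(dirname(idPath), { recursive: true });
  writeFileSync(idPath, id, "utf8");
  cachedAnonymousId = id;
  return id;
}
```

Because the id is read back from disk before generation, repeated runs report the same value without ever touching PII.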
In `@cli/selftune/index.ts`:
- Line 25: Add the new telemetry CLI command to all required docs: update
README.md Commands table to include a row "selftune telemetry | Manage anonymous
usage analytics", add a Quick Reference entry in skill/SKILL.md like "selftune
telemetry [status|enable|disable]", update the Workflow Routing section in
skill/SKILL.md to include trigger keywords (e.g., "telemetry, analytics, disable
analytics, opt out") and link those triggers to a new
skill/Workflows/Telemetry.md, and create skill/Workflows/Telemetry.md describing
when agents should invoke the telemetry command, expected inputs
(status/enable/disable), sample invocations, and any routing notes so the
implementations in analytics.ts and the wiring in index.ts are discoverable and
routable.
In `@tests/analytics/analytics.test.ts`:
- Around line 166-218: The test suite is nondeterministic because trackEvent's
enablement reads cached loadConfig() from the user's real home; to fix, isolate
config/home before importing the analytics module: in
tests/analytics/analytics.test.ts ensure you set a temporary HOME (or CI config
path env) and/or mock loadConfig() to return analytics_enabled before
requiring/importing the module that exports trackEvent, then import trackEvent
(or require the analytics module) after that setup so the module-level cache
uses the isolated config; update setup/teardown to restore original env and
clear module cache (e.g., delete require.cache for the analytics module) so
other tests remain unaffected.
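The isolation the comment describes can be sketched as a small helper; the analytics module path in the usage comment comes from the review itself, while the helper name is hypothetical:

```typescript
import { mkdtempSync, rmSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";

// Swap HOME to a throwaway directory, run a callback, then restore.
// The analytics module must be dynamically imported INSIDE the callback,
// after the swap, so its module-level config cache never sees the real home.
async function withTempHome<T>(fn: (home: string) => Promise<T>): Promise<T> {
  const originalHome = process.env.HOME;
  const tempHome = mkdtempSync(join(tmpdir(), "selftune-test-"));
  process.env.HOME = tempHome;
  try {
    return await fn(tempHome);
  } finally {
    if (originalHome === undefined) delete process.env.HOME;
    else process.env.HOME = originalHome;
    rmSync(tempHome, { recursive: true, force: true });
  }
}

// Usage sketch (module path taken from the review comment):
//   await withTempHome(async () => {
//     const { trackEvent, resetAnalyticsState } =
//       await import("../../cli/selftune/analytics.js");
//     resetAnalyticsState();
//     // ...assertions against trackEvent / isAnalyticsEnabled...
//   });
```

The finally block restores the environment and removes the temp directory so other suites remain unaffected.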
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: cc18a84c-6606-495d-ba51-ad657b30adaa
📒 Files selected for processing (10)
- .devcontainer/Dockerfile
- cli/selftune/analytics.ts
- cli/selftune/index.ts
- cli/selftune/init.ts
- cli/selftune/types.ts
- cli/selftune/utils/llm-call.ts
- package.json
- tests/analytics/analytics.test.ts
- tests/sandbox/docker/Dockerfile
- tests/sandbox/docker/Dockerfile.openclaw
* selftune quickstart — Guided onboarding: init, ingest, status, and suggestions
* selftune repair-skill-usage — Rebuild trustworthy skill usage from transcripts
* selftune export-canonical — Export canonical telemetry for downstream ingestion
* selftune telemetry — Manage anonymous usage analytics (status, enable, disable)
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "== Required command-surface docs =="
for f in "README.md" "AGENTS.md" "skill/SKILL.md"; do
if [ -f "$f" ]; then
echo "--- $f"
rg -n "selftune telemetry|telemetry" "$f" || true
else
echo "--- MISSING: $f"
fi
done
echo
echo "== Workflow docs mentioning telemetry (if present) =="
if [ -d "skill/Workflows" ]; then
fd -HI '\.md$' skill/Workflows --exec rg -n "selftune telemetry|SELFTUNE_NO_ANALYTICS|analytics" {}
else
echo "skill/Workflows directory not found"
fi

Repository: selftune-dev/selftune
Length of output: 442
🏁 Script executed:
#!/bin/bash
set -euo pipefail
# Check the relevant lines in index.ts
echo "=== Lines 20-30 (around line 25) ==="
sed -n '20,30p' cli/selftune/index.ts | cat -n
echo
echo "=== Lines 52-62 (around line 57) ==="
sed -n '52,62p' cli/selftune/index.ts | cat -n
echo
echo "=== Lines 466-480 (around line 471) ==="
sed -n '466,480p' cli/selftune/index.ts | cat -n
echo
echo "=== Checking full file size ==="
wc -l cli/selftune/index.ts

Repository: selftune-dev/selftune
Length of output: 2175
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "=== README.md: Commands section ==="
if grep -n "^| \`selftune" README.md >/dev/null 2>&1; then
sed -n '/^## Commands/,/^##/p' README.md | head -40
else
echo "Looking for Commands table..."
rg -A 30 "Commands" README.md | head -50
fi
echo
echo "=== AGENTS.md: project tree / command list ==="
rg -A 25 "project tree|selftune [a-z]" AGENTS.md | head -50
echo
echo "=== skill/SKILL.md: Quick Reference section ==="
if [ -f "skill/SKILL.md" ]; then
rg -A 30 "Quick Reference|Commands" skill/SKILL.md | head -50
else
echo "skill/SKILL.md not found"
fi
echo
echo "=== Check if analytics.js exists ==="
fd -t f analytics.js cli/selftune/

Repository: selftune-dev/selftune
Length of output: 6345
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "=== Checking if telemetry/analytics is mentioned anywhere else ==="
rg -l "telemetry|analytics" skill/ --type md || true
echo
echo "=== Looking at existing Workflow docs as template ==="
ls -1 skill/Workflows/*.md | head -5 | xargs -I {} bash -c "echo '=== {} ===' && head -10 {}"
echo
echo "=== Check if analytics.js referenced anywhere ==="
rg "analytics\.js|analytics\.ts" cli/Repository: selftune-dev/selftune
Length of output: 2763
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "=== Check if analytics.ts or analytics.js exists ==="
find cli/selftune -name "*analytic*" -o -name "*telemetry*"
echo
echo "=== Check init.ts for TELEMETRY_NOTICE import ==="
rg -B 2 -A 2 "TELEMETRY_NOTICE" cli/selftune/init.ts
echo
echo "=== List all files in cli/selftune/ ==="
ls -1 cli/selftune/*.ts | head -20

Repository: selftune-dev/selftune
Length of output: 1059
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "=== Check analytics.ts for cliMain export ==="
head -50 cli/selftune/analytics.ts
echo
echo "=== Search for status/enable/disable commands in analytics.ts ==="
rg "case|enable|disable|status" cli/selftune/analytics.ts | head -20
echo
echo "=== Check if Telemetry.md or Analytics.md workflow exists ==="
ls -1 skill/Workflows/ | grep -i "telemetry\|analytic" || echo "No Telemetry/Analytics workflow doc found"
echo
echo "=== Check Workflows.md for telemetry routing ==="
rg "telemetry|analytics" skill/Workflows/Workflows.md || echo "Not found in Workflows.md"Repository: selftune-dev/selftune
Length of output: 3364
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "=== Check skill/SKILL.md for any telemetry routing ==="
rg -i "telemetry|analytics" skill/SKILL.md
echo
echo "=== Check if there's a Workflow Routing table in SKILL.md ==="
rg -B 2 -A 15 "Workflow.*Routing|Routing.*[Tt]able" skill/SKILL.md | head -50
echo
echo "=== Full list of workflow docs ==="
ls -1 skill/Workflows/*.md | sed 's|.*/||' | sort

Repository: selftune-dev/selftune
Length of output: 2080
Add telemetry command to all required documentation.

The telemetry command (status, enable, disable) is implemented in analytics.ts and wired into index.ts, but is missing from:

- README.md Commands table — Add row: selftune telemetry | Manage anonymous usage analytics
- skill/SKILL.md Quick Reference — Add entry: selftune telemetry [status|enable|disable]
- skill/SKILL.md Workflow Routing — Add trigger keywords (e.g., "telemetry, analytics, disable analytics, opt out") and link to new Workflows/Telemetry.md
- skill/Workflows/Telemetry.md — Create workflow doc explaining when/how agents invoke this command

Per coding guidelines: CLI command additions must be reflected in these four documentation surfaces so agents can discover and route to the feature.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@cli/selftune/index.ts` at line 25: add the new telemetry CLI command to all
required docs: update README.md Commands table to include a row "selftune
telemetry | Manage anonymous usage analytics", add a Quick Reference entry in
skill/SKILL.md like "selftune telemetry [status|enable|disable]", update the
Workflow Routing section in skill/SKILL.md to include trigger keywords (e.g.,
"telemetry, analytics, disable analytics, opt out") and link those triggers to a
new skill/Workflows/Telemetry.md, and create skill/Workflows/Telemetry.md
describing when agents should invoke the telemetry command, expected inputs
(status/enable/disable), sample invocations, and any routing notes so the
implementations in analytics.ts and the wiring in index.ts are discoverable and
routable.
- Use bash explicitly for pipefail in Dockerfiles (dash compatibility)
- Replace hostname-derived anonymous ID with persisted random ID
- Add telemetry to README commands table, SKILL.md routing, and workflow docs
- Isolate analytics tests from real user config via resetAnalyticsState()

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Actionable comments posted: 5
♻️ Duplicate comments (1)
tests/analytics/analytics.test.ts (1)
3-10: ⚠️ Potential issue | 🟠 Major

Tests are still environment-coupled via early module import.

Because analytics is imported at Lines 3-10 before setup runs, config-path resolution can still reflect the real user home/config. resetAnalyticsState() clears caches, but it does not isolate filesystem inputs, so Line 135 can fail nondeterministically on machines with analytics_disabled: true.

As per coding guidelines, tests/**/*.ts: "Isolated tests with proper setup/teardown".

Also applies to: 18-24, 132-136
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/analytics/analytics.test.ts` around lines 3 - 10, The tests import analytics at module scope which causes real-home/config resolution; to fix, stop importing Analytics statically and instead dynamic-import it after test setup so filesystem/env can be isolated: in your test setup (beforeEach) call jest.resetModules(), set up a temp HOME/XDG_CONFIG_HOME or mock fs, then do const { getAnonymousId, buildEvent, trackEvent, isAnalyticsEnabled, resetAnalyticsState } = await import("../../cli/selftune/analytics.js"); ensure resetAnalyticsState() is called between tests/after setting mocks so caches clear; this removes early module-side config reads that made Line 135 flaky.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@cli/selftune/analytics.ts`:
- Around line 220-227: The current fire-and-forget call in trackEvent can throw
synchronously if fetchFn throws before returning a Promise, bypassing the
.catch; wrap the invocation so synchronous throws are converted to a promise
rejection — e.g., call Promise.resolve().then(() => fetchFn(endpoint, { ... }))
and keep the existing .catch(...) handler (or perform a try/catch that then
returns a resolved Promise) to ensure trackEvent never throws; update the call
site where fetchFn is invoked in trackEvent to use this pattern and retain the
AbortSignal.timeout and JSON body logic.
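A minimal sketch of the suggested pattern: a fetch-like function that throws synchronously is converted into a promise rejection, so the tracking call itself never throws. The `fetchFn`/`endpoint` names are taken from the review comment; the rest is illustrative:

```typescript
type FetchFn = (url: string, init: object) => Promise<unknown>;

// Fire-and-forget: Promise.resolve().then(...) routes synchronous throws
// from fetchFn into the rejection path, where the single .catch absorbs them.
function trackEventSketch(fetchFn: FetchFn, endpoint: string, body: object): void {
  Promise.resolve()
    .then(() =>
      fetchFn(endpoint, {
        method: "POST",
        body: JSON.stringify(body),
        // AbortSignal.timeout(3000) would go here, per the original code
      }),
    )
    .catch(() => {
      /* swallow both sync throws and async rejections */
    });
}
```

Without the `Promise.resolve().then(...)` wrapper, a synchronous throw from `fetchFn` would propagate straight out of the function, bypassing `.catch` entirely.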
- Around line 245-247: Wrap the filesystem operations that create the directory
and write the telemetry config (the block using mkdirSync(SELFTUNE_CONFIG_DIR,
...), writeFileSync(SELFTUNE_CONFIG_PATH, ...), and invalidateConfigCache()) in
a try/catch inside the enable and disable telemetry handlers; on catch log an
agent-actionable error (e.g., "Unable to save telemetry settings: check file
permissions or run with appropriate privileges") using the same CLI logger and
then call process.exit(1). Apply the same protection to the redundant write
block referenced around the other handler (the block at 273-283) so both
enable/disable flows behave consistently.
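The guard the comment asks for can be sketched like this; the directory/path parameters and the boolean return stand in for the real module's SELFTUNE_CONFIG_DIR/SELFTUNE_CONFIG_PATH symbols and its process.exit(1) behavior:

```typescript
import { mkdirSync, writeFileSync } from "node:fs";

// Wrap the config write in try/catch so permission errors surface as an
// agent-actionable message instead of an unhandled exception.
function saveTelemetrySetting(configDir: string, configPath: string, enabled: boolean): boolean {
  try {
    mkdirSync(configDir, { recursive: true });
    writeFileSync(configPath, JSON.stringify({ analytics_enabled: enabled }, null, 2));
    return true;
  } catch {
    console.error(
      "Unable to save telemetry settings: check file permissions or run with appropriate privileges",
    );
    return false; // the real CLI handler would call process.exit(1) here
  }
}
```

Using the same helper from both the enable and disable handlers keeps the two flows consistent, as the comment requires.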
In `@skill/SKILL.md`:
- Line 106: The Resource Index is missing an entry for the new workflow file
referenced in the routing table; add a corresponding row in skill/SKILL.md’s
Resource Index table that registers Workflows/Telemetry.md (use the same
keywords/label used in the routing row: "telemetry, analytics, disable
analytics, opt out, usage data, tracking, privacy" and the display name
"Telemetry") so Workflows/Telemetry.md is discoverable and consistent with the
Workflow Routing entry.
In `@skill/Workflows/Telemetry.md`:
- Around line 38-43: The "What Is NOT Collected" section currently overstates
unlinkability; either remove linkable fields from telemetry payloads or update
the wording to accurately reflect that telemetry includes sent_at and
anonymous_id which can link events. Edit the "What Is NOT Collected" wording
(under the "What Is NOT Collected" header) to explicitly note that timestamps
(sent_at) and stable anonymous identifiers (anonymous_id) are collected and can
be used to correlate events, or implement changes to the telemetry payload to
drop those fields (sent_at / anonymous_id) if true unlinkability is required.
Ensure the change references the sent_at and anonymous_id symbols in the updated
text so consumers are aware.
In `@tests/analytics/analytics.test.ts`:
- Line 187: Replace the fragile fixed 50ms sleep and any micro-threshold
assertions in analytics.test.ts with a bounded polling helper that repeatedly
checks the expected state until success or a timeout; specifically remove uses
of await new Promise((r) => setTimeout(r, 50)) and instead call a reusable
wait/poll utility (e.g., waitFor, pollUntil, or a custom retryUntil) to assert
the immediate state and then poll until the condition is true (with a sensible
timeout and interval), and update the related assertions that relied on "<50ms"
timing to use the same polling helper; apply this change for the occurrences
around the current snippet and the other spots mentioned (lines near 212-213 and
253-255) and reference the test functions/assertions that currently use the
sleep and timing assertions when implementing the helper.
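A bounded polling helper of the kind the comment suggests can be sketched as follows (the `waitFor` name is one of the candidates the comment itself lists):

```typescript
// Re-check a condition until it holds or the deadline passes, instead of
// sleeping a fixed 50ms and hoping the async work has finished.
async function waitFor(
  condition: () => boolean,
  timeoutMs = 1000,
  intervalMs = 10,
): Promise<void> {
  const deadline = Date.now() + timeoutMs;
  while (!condition()) {
    if (Date.now() > deadline) {
      throw new Error(`waitFor: condition not met within ${timeoutMs}ms`);
    }
    await new Promise((r) => setTimeout(r, intervalMs));
  }
}
```

In a test this replaces `await new Promise((r) => setTimeout(r, 50))` with something like `await waitFor(() => fetchCalls.length === 1)`, which passes as soon as the state appears and fails loudly with a timeout otherwise.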
---
Duplicate comments:
In `@tests/analytics/analytics.test.ts`:
- Around line 3-10: The tests import analytics at module scope which causes
real-home/config resolution; to fix, stop importing Analytics statically and
instead dynamic-import it after test setup so filesystem/env can be isolated: in
your test setup (beforeEach) call jest.resetModules(), set up a temp
HOME/XDG_CONFIG_HOME or mock fs, then do const { getAnonymousId, buildEvent,
trackEvent, isAnalyticsEnabled, resetAnalyticsState } = await
import("../../cli/selftune/analytics.js"); ensure resetAnalyticsState() is
called between tests/after setting mocks so caches clear; this removes early
module-side config reads that made Line 135 flaky.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: 8a255c75-23e4-42d6-910d-579ac00286d6
📒 Files selected for processing (6)
- .devcontainer/Dockerfile
- README.md
- cli/selftune/analytics.ts
- skill/SKILL.md
- skill/Workflows/Telemetry.md
- tests/analytics/analytics.test.ts
- Wrap fetchFn in try + Promise.resolve for sync throw safety
- Add try/catch with actionable errors for telemetry enable/disable
- Add Telemetry.md to SKILL.md Resource Index
- Fix privacy wording: disclose anonymous_id and sent_at linkability
- Replace hardcoded test sleeps with polling helper + add sync-throw test

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Summary
- Anonymous usage analytics (selftune telemetry command, opt-out via env or config)
- Fix incomplete sanitization in llm-call.ts (high severity CodeQL alert)
- Pin curl-piped Bun installs to bun-v1.3.10 (medium severity alerts)

Telemetry endpoint (telemetry.selftune.dev) is being deployed separately via selftune-cloud-app. Until live, trackEvent silently fails (fire-and-forget design).

Test plan

- bun test tests/analytics/ — 17 tests pass (privacy, opt-out, fire-and-forget)
- selftune telemetry status shows correct state
- selftune telemetry disable/enable toggles config

🤖 Generated with Claude Code