Conversation
Internal package paths migrated to dappco.re/go/core/rocm/internal/*. External imports (coreerr, go-inference) intentionally left on forge.lthn.ai for now — the homelab factory will pick up the rest of the migration as part of its dispatch loop. Also: CLAUDE.md refresh with current architecture notes. Co-Authored-By: Virgil <virgil@lethean.io>
Record partial metrics for classify cancellations and failures, wrap batch cancellation errors with coreerr.E, and replace the positional server startup arguments with a named config struct for clearer internal call sites. Co-Authored-By: Virgil <virgil@lethean.io>
Closes tasks.lthn.sh/view.php?id=703 Co-authored-by: Codex <noreply@openai.com>
server.go is the ROCm HTTP server for GPU monitoring. Its stdlib imports (errors, fmt, net, os, os/exec, strconv, strings) are intrinsic: HTTP server primitives have no core equivalent, the rocm-smi CLI subprocess predates Process plumbing into bare HTTP handlers, and the core string/format helpers are downstream. Added `// Note:` annotations on each. Closes tasks.lthn.sh/view.php?id=714 Co-authored-by: Codex <noreply@openai.com>
vram.go reads GPU VRAM from sysfs. Banned imports are intrinsic — sysfs reading needs direct filesystem access that core.Fs() doesn't model. Added `// Note:` annotations on os, path/filepath, strconv, strings. Closes tasks.lthn.sh/view.php?id=715 Co-authored-by: Codex <noreply@openai.com>
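The sysfs read that makes those imports intrinsic can be sketched roughly as below. This assumes the amdgpu sysfs layout (`/sys/class/drm/cardN/device/mem_info_vram_total`, a plain-text byte count); the function names are illustrative, not vram.go's actual identifiers.

```go
// Minimal sketch of a sysfs VRAM read, assuming the amdgpu layout.
// Names and paths here are illustrative assumptions.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strconv"
	"strings"
)

// parseVRAMBytes converts raw sysfs file contents to a byte count.
func parseVRAMBytes(raw string) (uint64, error) {
	return strconv.ParseUint(strings.TrimSpace(raw), 10, 64)
}

// readVRAMTotal reads the total VRAM for one card. This needs direct
// filesystem access, which is why os and path/filepath are intrinsic here.
func readVRAMTotal(card string) (uint64, error) {
	path := filepath.Join("/sys/class/drm", card, "device", "mem_info_vram_total")
	raw, err := os.ReadFile(path)
	if err != nil {
		return 0, err
	}
	return parseVRAMBytes(string(raw))
}

func main() {
	// Parse a captured sample value rather than touching real sysfs here.
	n, err := parseVRAMBytes("8573157376\n")
	if err != nil {
		panic(err)
	}
	fmt.Println(n)
}
```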
…o.re/go/core/log (AX-6) Salvaged partial commit from stream-errored codex run — imports were already rewritten in backend.go, discover.go, internal/gguf/gguf.go, internal/llamacpp/client.go + go.mod direct requires. Closes tasks.lthn.sh/view.php?id=702 Co-authored-by: Codex <noreply@openai.com>
Closes tasks.lthn.sh/view.php?id=716 Co-authored-by: Codex <noreply@openai.com>
- Bump dappco.re/go/* deps to v0.8.0-alpha.1 in go.mod (any forge.lthn.ai/core/* paths migrated to canonical dappco.re/go/* form) Co-Authored-By: Athena <athena@lthn.ai>
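The bump above corresponds to go.mod require lines of roughly this shape. The exact module paths are an assumption based on the paths mentioned in this log; only the version is stated by the commit.

```
require (
	dappco.re/go/core v0.8.0-alpha.1
)
```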
bash /tmp/v090/audit.sh . → verdict: COMPLIANT (all 7 dimensions zero). Co-authored-by: Codex <noreply@openai.com> Co-Authored-By: Virgil <virgil@lethean.io>
Brings this repo to verdict: COMPLIANT against the v0.9.0 audit.
🤖 Generated with Claude Code + Codex
Co-Authored-By: Codex <noreply@openai.com>
Co-Authored-By: Virgil <virgil@lethean.io>