This is an in-depth code audit performed by my agent harness. I'm sharing it as inspiration for what to improve and fix.
Code Audit Report — yaos
Date: 2026-04-12
Scope: Full repository (src/, server/, tests/)
Authors: Claude Opus 4.6, GPT 5.4 & Gemini 3.1 Pro
Theme: Public server boundaries need tighter hardening
Remediation path: Remove per-vault side effects before auth, keep unauthenticated responses minimal, and reject clearly invalid upload requests earlier.
Estimated effort: medium
P1: Unauthorized requests can trigger Durable Object trace writes for arbitrary vault IDs
Priority: P1 | Complexity: medium | Validity: high
Location: server/src/index.ts:322 ↔ server/src/index.ts:668 ↔ server/src/index.ts:738 ↔ server/src/server.ts:112
Evidence: recordVaultTrace() resolves a Durable Object from user-controlled vaultId and posts to /__yaos/trace. Both the websocket and HTTP vault routes call it before returning unclaimed, server_misconfigured, and unauthorized responses.
Impact: Unauthenticated callers can force per-vault Durable Object work and trace persistence before auth succeeds, creating avoidable public-edge cost and resource pressure.
Suggestion: Do not call recordVaultTrace() for unauthenticated or invalid requests. If rejection telemetry is needed, log it without touching room-scoped Durable Objects.
Validation: Add tests that hit unauthorized /vault/... and /vault/sync/... routes and assert no room trace state is created.
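One way to express the fix is a single pure predicate that gates trace recording on the auth outcome, so reject paths never resolve a per-vault Durable Object. This is an illustrative sketch; the AuthResult shape and shouldRecordVaultTrace name are assumptions, not identifiers from the yaos codebase.

```typescript
// Hypothetical auth outcome for a /vault/... request. Only the reject
// variants named in the finding are modeled here.
type AuthResult =
  | { kind: "ok"; vaultId: string }
  | { kind: "unauthorized" }
  | { kind: "unclaimed" }
  | { kind: "server_misconfigured" };

// Only authenticated requests may touch the room-scoped Durable Object.
// Rejections should be logged at the edge instead (console, analytics),
// without resolving a per-vault stub.
function shouldRecordVaultTrace(auth: AuthResult): boolean {
  return auth.kind === "ok";
}
```

Centralizing the decision in one predicate also gives the validation tests a single seam to assert against.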
P2: /api/capabilities exposes deployment metadata without authentication
Priority: P2 | Complexity: low | Validity: high
Location: server/src/index.ts:569 ↔ server/src/index.ts:316 ↔ src/sync/serverCapabilities.ts:3
Evidence: GET /api/capabilities is unauthenticated, but getCapabilities() includes updateRepoUrl and updateRepoBranch from server config, and the client-side type expects those public fields.
Impact: Anyone who can reach the Worker can learn deployment repo coordinates and branch details.
Suggestion: Split capabilities into public and authenticated fields, or omit update metadata from unauthenticated responses.
Validation: Add a request test showing unauthenticated capabilities omit repo metadata.
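The public/authenticated split can be a small projection function over the full capabilities object. The field names updateRepoUrl and updateRepoBranch come from the finding; the surrounding Capabilities shape and helper name are assumptions for illustration.

```typescript
// Full capabilities as the server sees them (shape assumed).
interface Capabilities {
  blobSync: boolean;
  snapshots: boolean;
  updateRepoUrl?: string;
  updateRepoBranch?: string;
}

// Projection served to unauthenticated callers: deployment repo
// metadata is deliberately omitted.
function publicCapabilities(full: Capabilities): Capabilities {
  return { blobSync: full.blobSync, snapshots: full.snapshots };
}
```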
P2: Blob uploads are size-limited only after the full request body is buffered
Priority: P2 | Complexity: medium | Validity: high
Location: server/src/index.ts:454
Evidence: The upload route calls await req.arrayBuffer() before checking body.byteLength > MAX_BLOB_UPLOAD_BYTES.
Impact: Oversized uploads still consume memory and CPU before they are rejected, weakening the protection offered by the 10 MB limit.
Suggestion: Reject based on Content-Length when present before buffering, and enforce the cap incrementally if streaming inspection becomes available.
Validation: Add a test with oversized Content-Length and verify the route rejects before reading the full body.
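The Content-Length precheck is a cheap pure function that runs before any buffering. A missing or malformed header cannot be trusted, so the existing post-buffer byteLength check must stay as the backstop; the helper name below is an assumption.

```typescript
// Mirrors the 10 MB cap described in the finding.
const MAX_BLOB_UPLOAD_BYTES = 10 * 1024 * 1024;

// True when the declared Content-Length already exceeds the cap, so the
// route can reject before calling req.arrayBuffer(). A null or
// non-numeric header returns false and falls through to the
// post-buffer check.
function exceedsDeclaredLimit(contentLength: string | null, max: number): boolean {
  if (contentLength === null) return false;
  const declared = Number(contentLength);
  return Number.isFinite(declared) && declared > max;
}
```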
Theme: Plugin/server contracts have drifted in user-visible ways
Remediation path: Move shared behavior behind explicit contract helpers and add one or two boundary-focused integration tests.
Estimated effort: medium
P1: Attachment size can be configured above the server’s hard upload cap
Priority: P1 | Complexity: low | Validity: high
Location: src/settings.ts:543 ↔ src/main.ts:2821 ↔ src/sync/blobSync.ts:337 ↔ server/src/index.ts:25
Evidence: The settings UI accepts any positive maxAttachmentSizeKB, createBlobSyncManager() forwards it unchanged, and BlobSyncManager uses that client-side limit. The Worker separately hard-caps uploads at 10 * 1024 * 1024 bytes.
Impact: Users can save a valid-looking setting that guarantees upload failures for larger attachments.
Suggestion: Cap the UI at the server limit or expose the server max through capabilities and validate before enqueueing uploads.
Validation: Add an integration test that sets maxAttachmentSizeKB above 10240 and verifies the plugin fails locally instead of attempting upload.
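Capping the setting at the server limit can be done with a clamp applied when the value is saved or forwarded. The 10240 KB constant matches the server's 10 MB hard cap from the finding; the function name and fallback behavior are assumptions.

```typescript
// 10 MB server hard cap, expressed in KB to match maxAttachmentSizeKB.
const SERVER_MAX_UPLOAD_KB = 10 * 1024;

// Clamp the user-entered limit so a valid-looking setting can never
// exceed what the Worker will actually accept. Non-positive or
// non-finite input falls back to the server maximum.
function clampAttachmentSizeKB(requestedKB: number): number {
  if (!Number.isFinite(requestedKB) || requestedKB <= 0) return SERVER_MAX_UPLOAD_KB;
  return Math.min(Math.floor(requestedKB), SERVER_MAX_UPLOAD_KB);
}
```

Exposing the server max through capabilities (as the suggestion proposes) would let the client read this constant instead of hardcoding it.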
P2: Remote sync deletes bypass the user’s trash preference
Priority: P2 | Complexity: low | Validity: high
Location: src/sync/diskMirror.ts:557 ↔ src/sync/diskMirror.ts:602 ↔ src/sync/blobSync.ts:1025 ↔ src/main.ts:3901
Evidence: Some plugin paths use this.app.fileManager.trashFile(...), but remote delete and rename-replacement flows in DiskMirror and BlobSyncManager still call this.app.vault.delete(...). ESLint flags these same locations.
Impact: Files deleted through remote sync can be permanently removed even when the vault is configured to use trash.
Suggestion: Route all plugin-managed deletes through one helper that prefers FileManager.trashFile().
Validation: Add regression coverage for remote markdown and blob deletes with trash-based deletion enabled.
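The single delete helper might look like the sketch below. The FileManagerLike and VaultLike shapes are minimal stand-ins for the Obsidian API surface (only the members the helper needs); the fallback-on-failure behavior is an assumption about the desired policy.

```typescript
interface FileRef { path: string }
interface FileManagerLike { trashFile(f: FileRef): Promise<void> }
interface VaultLike { delete(f: FileRef): Promise<void> }

// Single choke point for plugin-managed deletes: prefer trashFile(),
// which honors the user's trash preference, and fall back to a hard
// vault.delete() only if trashing fails.
async function deleteManagedFile(
  fileManager: FileManagerLike,
  vault: VaultLike,
  file: FileRef
): Promise<"trashed" | "deleted"> {
  try {
    await fileManager.trashFile(file);
    return "trashed";
  } catch {
    await vault.delete(file);
    return "deleted";
  }
}
```

Routing DiskMirror and BlobSyncManager deletes through one function also gives the ESLint rule a single allowed call site for vault.delete().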
Theme: Core orchestration files have become structural bottlenecks
Remediation path: Keep the current behavior but re-establish smaller composition roots by extracting vertical slices from the biggest files.
Estimated effort: high
P2: src/main.ts has become a catch-all composition root
Priority: P2 | Complexity: high | Validity: high
Location: src/main.ts:182
Evidence: The plugin entrypoint is 4.6k lines and holds lifecycle wiring, sync orchestration, diagnostics, update handling, snapshot flows, commands, and modal classes.
Impact: Changes in unrelated areas collide in one file, which raises review cost and makes safe extraction harder over time.
Suggestion: Keep main.ts as the composition root only, and extract one vertical first such as snapshots, diagnostics, or update/capability handling.
Validation: Extract one slice and verify plugin build plus existing regression coverage still pass.
P2: server/src/index.ts mixes Worker composition with most feature routes
Priority: P2 | Complexity: medium | Validity: high
Location: server/src/index.ts:533
Evidence: The same file handles claim/auth state, setup pages, capabilities, sync gateway logic, blobs, snapshots, and update metadata.
Impact: Security-sensitive edits and product-surface edits share one route file, increasing the blast radius of changes.
Suggestion: Split route families into focused modules such as auth, capabilities, sync, blobs, and snapshots.
Validation: Add route-level tests around extracted handlers and compare responses before removing the inline branches.
P2: src/settings.ts mixes persisted settings, onboarding modals, and settings-tab rendering
Priority: P2 | Complexity: medium | Validity: high
Location: src/settings.ts:49 ↔ src/settings.ts:134 ↔ src/settings.ts:240 ↔ src/settings.ts:283
Evidence: The file contains settings defaults, PairDeviceModal, RecoveryKitModal, and the full settings tab implementation in one 700+ line module.
Impact: Small settings changes now share a file with unrelated UI flows, which makes targeted refactors and tests more expensive.
Suggestion: Move modal classes and large UI sections into dedicated modules while keeping the settings shape/defaults local.
Validation: Extract one modal first and verify settings save/load behavior remains unchanged.
P2: main.ts and settings.ts are mutually coupled at the module boundary
Priority: P2 | Complexity: low | Validity: high
Location: src/main.ts:2 ↔ src/settings.ts:3
Evidence: main.ts imports settings types and UI wiring from ./settings, while settings.ts imports the plugin class type from ./main.
Impact: The settings layer is coupled directly to the lifecycle host, which makes it harder to slim main.ts down over time.
Suggestion: Replace the plugin-class import with a narrower interface that exposes only what the settings UI needs.
Validation: Typecheck after swapping the type import for a minimal interface.
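The narrower interface could be as small as the sketch below: settings.ts depends on a host interface instead of the plugin class, so the ./settings → ./main import disappears. SettingsHost and updateAttachmentLimit are hypothetical names; the real interface would expose whatever the settings tab actually uses.

```typescript
// Hypothetical persisted-settings shape (one field shown for brevity).
interface YaosSettings { maxAttachmentSizeKB: number }

// The only surface the settings UI needs from its host. The plugin
// class would implement this; settings code never imports the class.
interface SettingsHost {
  readonly settings: YaosSettings;
  saveSettings(): Promise<void>;
}

// Example settings-tab handler written against the interface.
async function updateAttachmentLimit(host: SettingsHost, kb: number): Promise<void> {
  host.settings.maxAttachmentSizeKB = kb;
  await host.saveSettings();
}
```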
Theme: One additional performance issue is real but lower priority
Remediation path: Clean up once the boundary issues above are fixed.
Estimated effort: low
P3: Snapshot payload lookup scales with total snapshot count
Priority: P3 | Complexity: medium | Validity: high
Location: server/src/snapshot.ts:175
Evidence: getSnapshotPayload() calls listSnapshots() and searches every snapshot index to find a single snapshotId before fetching the payload object.
Impact: Snapshot downloads get progressively more expensive as history grows.
Suggestion: If the key structure remains deterministic, reconstruct the object key directly from the snapshot identifier or maintain a lighter lookup path.
Validation: Compare R2 operations for a single snapshot download before and after the change.
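If snapshot payload keys are deterministic, the O(n) scan collapses to direct key construction. The key layout below is an assumption for illustration only; the real layout would come from wherever writeSnapshot-style code builds its keys.

```typescript
// Rebuild the payload object key directly from the identifiers instead
// of calling listSnapshots() and scanning every index entry.
// The "snapshots/<vaultId>/<snapshotId>/payload" layout is assumed.
function snapshotPayloadKey(vaultId: string, snapshotId: string): string {
  return `snapshots/${vaultId}/${snapshotId}/payload`;
}
```

With this in place, getSnapshotPayload() becomes a single R2 get, and the validation step reduces to confirming that the operation count no longer grows with history size.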
Removed from the original Desktop draft
- Dropped the "AI-generated project" framing because it is not evidenced and does not help remediation.
- Dropped the timing-attack claim on env-token comparison: the code difference is real, but the practical exploitability and priority were overstated.
- Dropped the reflected Content-Type / stored XSS claim because the evidence did not clearly support the stated impact.
- Dropped the "no unit tests for the largest modules" finding as a headline issue because the repo already has substantial regression coverage, even if it is not conventional unit-test coverage.
- Dropped small lint/dead-code/style items from the main findings list to keep the report focused on structural and boundary issues.
Structural Health Summary
Overall Assessment
The codebase has strong domain-specific thinking and solid regression coverage around the sync engine, but a few public-boundary issues and several oversized orchestration files are now the main structural risks.
Top Systemic Issues
- Public request handling still does too much work before auth succeeds.
- Plugin/server behavior depends on implicit contracts that are not enforced in one place.
- A small set of orchestration files now carries too much responsibility.
Findings Overview
| Priority | Count |
| --- | --- |
| P0 | 0 |
| P1 | 2 |
| P2 | 6 |
| P3 | 1 |
Recommended Next Steps
- Remove pre-auth recordVaultTrace() calls.
- Hide unauthenticated repo metadata from /api/capabilities.
- Align attachment-size limits across settings, client behavior, and server enforcement.
- Normalize remote deletes through a trash-aware helper.
- Extract one vertical slice out of src/main.ts or one route family out of server/src/index.ts.