
feat: Add optional Express + SQLite backend server with cross-device auto-sync #43

Open
Micah-Zheng wants to merge 21 commits into AmintaCCCP:main from Micah-Zheng:main

Conversation


@Micah-Zheng Micah-Zheng commented Feb 25, 2026

Overview

Adds an optional Express + SQLite backend server to GithubStarsManager, providing automatic cross-device data sync, a CORS-free API proxy, and encrypted token storage. The backend is entirely optional and does not change the existing frontend-only workflow.

New Features

🖥️ Backend Server (server/)

  • Lightweight Express + SQLite backend with zero external database dependencies
  • Data persistence: repositories, releases, custom categories, AI configs, WebDAV configs
  • API proxy: GitHub / AI / WebDAV requests go through the backend, eliminating browser CORS issues
  • Encrypted storage: sensitive values such as API keys are stored with AES-256 encryption and never exposed in the browser's network panel
  • Bearer token auth: an optional API_SECRET protects the backend endpoints
  • Docker support: docker-compose up --build deploys frontend and backend in one step
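The encrypted-storage idea can be sketched with Node's built-in crypto module. This is illustrative only: the function names and the stored iv:tag:ciphertext format are assumptions, not the PR's actual server/src/services/crypto.ts.

```typescript
// Hypothetical sketch of AES-256-GCM value encryption with a random IV per value.
import { createCipheriv, createDecipheriv, randomBytes } from "crypto";

const KEY = randomBytes(32); // in the PR, this would come from ENCRYPTION_KEY

function encryptValue(plain: string): string {
  const iv = randomBytes(12); // standard GCM nonce size
  const cipher = createCipheriv("aes-256-gcm", KEY, iv);
  const data = Buffer.concat([cipher.update(plain, "utf8"), cipher.final()]);
  const tag = cipher.getAuthTag(); // authenticates ciphertext integrity
  return [iv, tag, data].map((b) => b.toString("hex")).join(":");
}

function decryptValue(stored: string): string {
  const [iv, tag, data] = stored.split(":").map((h) => Buffer.from(h, "hex"));
  const decipher = createDecipheriv("aes-256-gcm", KEY, iv);
  decipher.setAuthTag(tag);
  return Buffer.concat([decipher.update(data), decipher.final()]).toString("utf8");
}
```

GCM gives authenticated encryption, so tampered ciphertext fails at decryption instead of silently yielding garbage.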

🔄 Auto Sync

  • Startup sync: on launch, the app pulls the latest data from the backend (backend-first strategy)
  • Change push: local changes are pushed to the backend after a 2-second debounce
  • Cross-device polling: data is pulled from the backend every 5 seconds; if the backend data is unchanged (fingerprint comparison), the update is skipped to avoid clobbering in-progress edits
  • Config and selection sync: AI/WebDAV configs and their active selections (activeAIConfig/activeWebDAVConfig) sync across devices
  • Manual fallback: the Settings panel keeps manual sync buttons in case automatic sync fails
  • Silent operation: sync runs without UI interruptions; only failures are logged to the console
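The debounce-plus-fingerprint behavior above can be sketched as follows. The helper names are hypothetical; the PR's autoSync.ts likely differs in detail.

```typescript
// Debounced push: rapid local edits collapse into one request after 2 s of quiet.
let pushTimer: ReturnType<typeof setTimeout> | null = null;

function schedulePush(data: unknown, push: (d: unknown) => void): void {
  if (pushTimer) clearTimeout(pushTimer);
  pushTimer = setTimeout(() => push(data), 2000);
}

// Fingerprint comparison: skip applying a polled payload when nothing changed,
// so a 5 s poll never clobbers edits that are still in progress locally.
let lastFingerprint = "";

function applyIfChanged(remote: unknown, apply: (d: unknown) => void): boolean {
  const fp = JSON.stringify(remote); // cheap structural fingerprint
  if (fp === lastFingerprint) return false;
  lastFingerprint = fp;
  apply(remote);
  return true;
}
```

Serializing the whole payload is coarse but avoids deep-diffing; any change in the backend data produces a different string.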

🌐 Frontend Changes

  • Settings panel: new backend connection section (URL, API secret, connection test)
  • Data sync buttons: manual "Sync to backend" / "Sync from backend"
  • Bilingual error messages: 51 backend error codes, shown in Chinese or English per the language setting
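The bilingual error mapping plausibly looks like this (illustrative codes only; the PR defines 51 of them in src/utils/backendErrors.ts):

```typescript
// A two-entry excerpt of the kind of mapping backendErrors.ts would contain.
type Lang = "en" | "zh";

const messages: Record<string, Record<Lang, string>> = {
  UNAUTHORIZED: { en: "Invalid or missing API secret", zh: "API 密钥无效或缺失" },
  RELEASES_ARRAY_REQUIRED: { en: "releases array required", zh: "需要 releases 数组" },
};

function translateBackendError(code: string, lang: Lang): string {
  // Fall back to the raw code so unknown errors are still visible to the user.
  return messages[code]?.[lang] ?? code;
}
```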

📖 Documentation

  • README.md and README_zh.md updated with backend usage instructions
  • Language-switch links added between the two documents

File Changes (42 files)

New Files

| Path | Description |
| --- | --- |
| server/ | Complete backend project (Express + SQLite + TypeScript) |
| server/src/routes/ | 7 routes: health, repositories, releases, categories, configs, sync, proxy |
| server/src/services/ | Encryption service, proxy service |
| server/src/db/ | SQLite connection, schema, migrations |
| server/tests/ | Unit tests for encryption and proxying |
| src/services/backendAdapter.ts | Frontend adapter for the backend |
| src/services/autoSync.ts | Auto-sync service (5 s polling + 2 s debounced push + fingerprint comparison) |
| src/utils/backendErrors.ts | Bilingual error-code translation map |
| docker-compose.yml | Docker orchestration for frontend and backend |
| nginx.conf | Nginx reverse-proxy configuration |

Modified Files

| Path | Description |
| --- | --- |
| src/App.tsx | Integrates backend initialization and auto-sync |
| src/store/useAppStore.ts | Adds setAIConfigs/setWebDAVConfigs/backendApiSecret |
| src/components/SettingsPanel.tsx | Adds backend connection and sync UI |
| src/types/index.ts | Adds the WebDAVConfig type |
| Dockerfile | Fixes the nginx default.conf conflict |
| README.md / README_zh.md | Documentation updates |

Environment Variables

| Variable | Required | Description |
| --- | --- | --- |
| API_SECRET | No | Bearer token auth key; authentication is disabled when unset |
| ENCRYPTION_KEY | No | AES-256 encryption key; auto-generated when unset |
| PORT | No | Backend port, defaults to 3000 |
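Given the defaults in the table, the config resolution presumably behaves along these lines (a sketch with a hypothetical loadConfig helper, not the PR's server/src/config.ts):

```typescript
// Resolve the three environment variables with their documented defaults.
function loadConfig(env: Record<string, string | undefined>) {
  return {
    port: Number(env.PORT ?? 3000),            // PORT: defaults to 3000
    apiSecret: env.API_SECRET ?? null,         // unset => authentication disabled
    encryptionKey: env.ENCRYPTION_KEY ?? null, // unset => server generates a key
  };
}
```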

Compatibility

  • No changes to any existing frontend service (githubApi.ts, aiService.ts, webdavService.ts)
  • The localStorage persistence mechanism is unchanged; the backend supplements rather than replaces it
  • Without a backend, the app behaves exactly as before these changes

Testing

  • ✅ TypeScript compilation passes
  • ✅ Vite build passes
  • ✅ Encryption-service unit tests pass
  • ✅ Docker deployment verified
  • ✅ Cross-device auto-sync verified

Summary by CodeRabbit

  • New Features
    • Optional backend with encrypted secrets, new backend API (health, repositories, releases, categories, configs, sync), AI/WebDAV/GitHub proxying, import/export, and automatic/manual bidirectional sync; Settings UI to connect, test, and trigger sync; client-side backend discovery.
  • Documentation
    • EN/ZH READMEs expanded with backend setup, WebDAV backup, deployment guides, tech stack, contributing, and license.
  • Chores
    • Docker/dev updates (compose adds backend service and volume; server image added; production image removes default Nginx conf); .gitignore extended.
  • Tests
    • New server tests covering proxy and crypto utilities.

Micah Zheng and others added 8 commits February 25, 2026 07:17
- Add Express + SQLite backend with CRUD routes for repositories, releases, categories, and configs
- Add GitHub/AI/WebDAV proxy routes to resolve browser CORS issues
- Add AES-256 encrypted storage to protect API keys
- Add cross-device data sync
- Add frontend BackendAdapter with auto-discovery and authentication
- Add backend server section to the Settings panel (connection test, sync controls)
- Add bilingual error messages (51 error codes, Chinese and English)
- Add Docker Compose deployment (frontend + backend + nginx)
- Add server tests (crypto: 7, proxy: 13)
- Add dev scripts (dev:server, dev:all, build:server, build:all)
- Update README.md and README_zh.md with backend docs and language-switch links
feat: Add optional Express + SQLite backend server
fix: Remove the default nginx config to avoid conflicts with the custom config
- Backend: fix column-name errors in configs.ts (provider→api_type, is_default→is_active; remove nonexistent created_at/updated_at)
- Backend: add PUT /api/configs/ai/bulk and PUT /api/configs/webdav/bulk bulk-sync endpoints
- Backend: GET endpoints support ?decrypt=true to return full keys
- Backend: fix the same column-name errors and missing fields in the sync.ts import path
- Frontend: add syncAIConfigs/fetchAIConfigs/syncWebDAVConfigs/fetchWebDAVConfigs to backendAdapter
- Frontend: add setAIConfigs/setWebDAVConfigs bulk setters to the store
- Frontend: the SettingsPanel sync button now syncs repositories, releases, AI configs, and WebDAV configs together
feat: Add AI/WebDAV config sync to the backend
- Pull the latest data from the backend on startup (backend-first strategy)
- Push local changes to the backend after a 2-second debounce
- Poll the backend every 5 seconds for seamless cross-device sync
- Compare data fingerprints when polling; skip the update if the backend data is unchanged, to avoid clobbering in-progress edits
- Sync AI/WebDAV configs and their active selections (activeAIConfig/activeWebDAVConfig)
- Add backend settings read/write endpoints and a frontend autoSync service

Co-authored-by: Micah Zheng <micahzheng@MicahdeMacBook-Pro.local>

coderabbitai bot commented Feb 25, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

Use the checkboxes below for quick actions:

  • ▶️ Resume reviews
  • 🔍 Trigger review
📝 Walkthrough

Adds an optional Express+SQLite backend (auth, encryption, migrations, REST APIs, proxying, import/export), server build/tests/config, Docker/nginx/docker-compose updates, and frontend integration for backend discovery, auto-sync, store/type changes, and AI-service backend fallback.

Changes

Cohort / File(s) Summary
Repository & ignore files
.gitignore, server/.gitignore
New ignore patterns for server build/data and orchestrator files (server/data/*.db, server/data/.encryption-key, server/dist/, .sisyphus/).
Containers & proxy config
docker-compose.yml, Dockerfile, server/Dockerfile, nginx.conf
Add backend service and backend-data volume; production Dockerfile removes default Nginx conf before copying assets; add server multi-stage Dockerfile; Nginx adds /api/ proxy to backend.
Monorepo manifests & scripts
package.json, server/package.json
Add new server/package.json; new npm scripts for dev/all and build/all; add concurrently devDependency.
Server TS & testing config
server/tsconfig.json, server/vitest.config.ts
Add TypeScript and Vitest config for server project.
Server runtime & entry
server/src/config.ts, server/src/index.ts
New runtime config (data dir, ENCRYPTION_KEY handling), Express app factory, startup/shutdown, migration bootstrap, health checks.
Middleware & error handling
server/src/middleware/auth.ts, server/src/middleware/errorHandler.ts
Add auth middleware with bearer/timing-safe compare and centralized Express error handler.
Database layer
server/src/db/connection.ts, server/src/db/schema.ts, server/src/db/migrations.ts
Add better-sqlite3 connection (ensure dir, PRAGMAs), schema initializer creating required tables, and migration runner with schema_version tracking.
Server services
server/src/services/crypto.ts, server/src/services/proxyService.ts
Add AES-256-GCM encrypt/decrypt utilities and a robust proxy helper (validation, timeouts, aborts, redaction, structured responses).
Server routes
server/src/routes/...
health.ts, repositories.ts, releases.ts, categories.ts, configs.ts, sync.ts, proxy.ts
Add REST endpoints for health, repositories, releases, categories; configs (AI/WebDAV/settings) with encryption/masking and bulk ops; proxy endpoints for GitHub/AI/WebDAV; sync import/export with transactional semantics.
Server package & Dockerfile
server/package.json, server/Dockerfile
New server package manifest and multi-stage Dockerfile exposing port 3000 and declaring /app/data volume.
Server tests
server/tests/*
Add unit tests for proxyService and crypto service (fetch mocks, encryption edge cases).
Frontend integration
src/App.tsx, src/components/SettingsPanel.tsx, src/services/backendAdapter.ts, src/services/autoSync.ts
Add BackendAdapter singleton for discovery/proxy, auto-sync (debounced push, periodic pull), Settings UI for backend config/testing/sync, and initialization in App.
Frontend store/types/utilities
src/store/useAppStore.ts, src/types/index.ts, src/utils/backendErrors.ts
Add backendApiSecret state and setters; update AppState type; add bilingual backend error translations and translate helper.
AI service fallback
src/services/aiService.ts
Route AI requests through backend proxy when available; otherwise fall back to direct upstream calls; unify request payloads.
Docs & localization
README.md, README_zh.md
Significant README expansions (Tech Stack, optional backend guide, WebDAV backup, deployment, contributing, license) and extensive Chinese localization/structure updates.

Sequence Diagrams

sequenceDiagram
    participant FE as Frontend (App)
    participant BA as BackendAdapter
    participant BS as Backend Server
    participant DB as SQLite DB
    participant Ext as External API

    FE->>BA: probe /api/health
    BA->>BS: GET /api/health
    BS-->>BA: 200 {status,version}
    BA-->>FE: backend available

    FE->>BA: startAutoSync
    Note over FE,BA: periodic pull + debounced push
    FE->>BA: PUT /api/repositories (debounced)
    BA->>BS: PUT /api/repositories (Bearer)
    BS->>DB: INSERT OR REPLACE repositories
    DB-->>BS: OK
    BS-->>BA: 200
    BA-->>FE: update hashes/state
sequenceDiagram
    participant FE as Frontend
    participant BS as Backend Server
    participant Auth as Auth Middleware
    participant Proxy as Proxy Service
    participant ExtAPI as External API

    FE->>BS: POST /api/proxy/github/... (Bearer)
    BS->>Auth: validate token (timingSafeEqual)
    Auth-->>BS: OK
    BS->>BS: read encrypted github token (settings)
    BS->>BS: decrypt token (crypto)
    BS->>Proxy: proxyRequest -> GitHub API with auth
    Proxy->>ExtAPI: forward request
    ExtAPI-->>Proxy: response
    Proxy-->>BS: relay response
    BS-->>FE: return proxied response
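The timingSafeEqual step in the diagram above can be sketched as a constant-time bearer-token check. This is an assumed shape; the PR's server/src/middleware/auth.ts may differ.

```typescript
import { timingSafeEqual } from "crypto";

// Compare a presented token against the configured API_SECRET without leaking
// how many leading characters match (a plain === comparison can do that).
function tokenMatches(provided: string, secret: string): boolean {
  const a = Buffer.from(provided);
  const b = Buffer.from(secret);
  // timingSafeEqual throws on length mismatch, so guard first; this reveals
  // only the length, not the content.
  if (a.length !== b.length) return false;
  return timingSafeEqual(a, b);
}
```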

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Poem

🐇 I nibbled keys and spun a thread,
Wrote tiny routes while snug in bed,
I guard the secrets, sync the stars,
Hop-scotch packets near and far,
Hoppity — the backend hums ahead.

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

| Check name | Status | Explanation | Resolution |
| --- | --- | --- | --- |
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 31.43%, below the required threshold of 80.00%. | Write docstrings for the functions missing them to satisfy the coverage threshold. |

✅ Passed checks (2 passed)

| Check name | Status | Explanation |
| --- | --- | --- |
| Description Check | ✅ Passed | Check skipped: CodeRabbit's high-level summary is enabled. |
| Title Check | ✅ Passed | The PR title clearly describes the main feature: adding an optional Express + SQLite backend server with cross-device auto-sync. |

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches
  • 📝 Generate docstrings (stacked PR)
  • 📝 Generate docstrings (commit on current branch)
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 5

Note

Due to the large number of review comments, Critical severity comments were prioritized as inline comments.

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
docker-compose.yml (1)

1-34: ⚠️ Potential issue | 🔴 Critical

Critical: Duplicate services: top-level key — the new frontend/backend services are silently discarded.

The YAML file has two services: keys (lines 3 and 26). Per the YAML specification, duplicate keys produce undefined behavior; most parsers (including Docker Compose's) use the last occurrence. This means the frontend and backend services defined on lines 3–21 are silently overridden by the old github-stars-manager block on line 26. The new deployment will never run.

Additionally, both frontend and github-stars-manager bind host port 8080, which would conflict if both were somehow parsed.

Remove the legacy github-stars-manager service block (or merge it), keeping only the new frontend + backend setup.

🐛 Proposed fix — remove the duplicate services block
 version: '3.8'

 services:
   frontend:
     build: .
     ports:
       - "8080:80"
     depends_on:
       - backend
     restart: unless-stopped

   backend:
     build: ./server
     ports:
       - "3000:3000"
     environment:
       - API_SECRET=${API_SECRET:-}
       - ENCRYPTION_KEY=${ENCRYPTION_KEY:-}
     volumes:
       - backend-data:/app/data
     restart: unless-stopped

 volumes:
   backend-data:
-
-services:
-  github-stars-manager:
-    build: .
-    ports:
-      - "8080:80"
-    restart: unless-stopped
-    # Environment variables can be set here if needed
-    # environment:
-    #   - NODE_ENV=production
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docker-compose.yml` around lines 1 - 34, The file has duplicate top-level
"services" keys causing the first block (frontend, backend) to be overridden by
the legacy "github-stars-manager" service; remove or merge the legacy block so
there is a single "services:" mapping containing "frontend" and "backend" and
(if needed) move "github-stars-manager" under that same mapping; also resolve
the port conflict by not binding "github-stars-manager" to host port 8080 if you
keep it (avoid duplicating "8080:80" used by "frontend"); ensure the backend
environment/volumes/restart settings (API_SECRET, ENCRYPTION_KEY, backend-data)
remain in the final "backend" service definition.
🟠 Major comments (17)
server/src/middleware/errorHandler.ts-3-17 (1)

3-17: ⚠️ Potential issue | 🟠 Major

Missing res.headersSent guard and hardcoded error code.

Two concerns:

  1. res.headersSent check: If headers were already sent (e.g., partial streaming), calling res.status().json() will throw. Express error handlers should delegate to the default handler via _next(err) in that case.

  2. Hardcoded code: 'INTERNAL_SERVER_ERROR': If a route throws an error with a specific code (e.g., UNAUTHORIZED, GITHUB_PROXY_FAILED), this middleware discards it. The frontend's translateBackendError relies on specific codes to show localized messages. Consider propagating a custom code property from the error if present.

Suggested fix
 export function errorHandler(
   err: Error,
   _req: Request,
   res: Response,
   _next: NextFunction
 ): void {
   console.error('Unhandled error:', err.stack || err.message);

+  if (res.headersSent) {
+    return _next(err);
+  }
+
-  const statusCode = (err as Error & { statusCode?: number }).statusCode || 500;
+  const errWithMeta = err as Error & { statusCode?: number; code?: string };
+  const statusCode = errWithMeta.statusCode || 500;
   const message =
     process.env.NODE_ENV === 'production'
       ? 'Internal Server Error'
       : err.message || 'Internal Server Error';
+  const code = errWithMeta.code || 'INTERNAL_SERVER_ERROR';

-  res.status(statusCode).json({ error: message, code: 'INTERNAL_SERVER_ERROR' });
+  res.status(statusCode).json({ error: message, code });
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/middleware/errorHandler.ts` around lines 3 - 17, The errorHandler
currently always calls res.status(...).json(...) and overwrites any custom error
code; update errorHandler to first check res.headersSent and if true call
_next(err) to let Express handle it, and when building the response derive the
error code from the thrown error (e.g., (err as any).code ||
'INTERNAL_SERVER_ERROR') instead of hardcoding it; keep existing statusCode
extraction ((err as Error & { statusCode?: number }).statusCode || 500) and
existing console.error logging but include the propagated code in the response
body.
server/Dockerfile-4-4 (1)

4-4: ⚠️ Potential issue | 🟠 Major

Do not ship devDependencies in the runtime image.

Copying node_modules from the build stage pulls test/build tooling into production. Prune to prod deps before copying.

🔧 Proposed fix
 RUN npm ci
 COPY . .
-RUN npm run build
+RUN npm run build && npm prune --omit=dev

Also applies to: 11-11

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/Dockerfile` at line 4, The Dockerfile currently installs all
dependencies with the RUN npm ci step and copies node_modules into the runtime
image; change the build stage to install only production deps (e.g. use RUN npm
ci --only=production or set NODE_ENV=production before npm ci) or run npm prune
--production in the build stage just before the copy to the final image so
devDependencies are removed; update the RUN npm ci invocation(s) referenced in
the Dockerfile to ensure only prod dependencies are present when copying to the
runtime stage.
src/services/backendAdapter.ts-10-23 (1)

10-23: ⚠️ Potential issue | 🟠 Major

Move AbortController creation inside the loop to enable fallback URL attempts.

The shared signal causes subsequent fetch attempts to fail immediately after the first timeout, preventing the second backend URL from being tried. Once controller.abort() executes, the signal remains permanently aborted and any new fetch using that signal fails immediately with an AbortError, rather than attempting an actual network request.

🔧 Proposed fix
-      const controller = new AbortController();
-      const timeoutId = setTimeout(() => controller.abort(), 3000);
-
       // Try common backend URLs
       const urls = [
         window.location.origin + '/api',
         'http://localhost:3000/api',
       ];
 
       for (const baseUrl of urls) {
+        const controller = new AbortController();
+        const timeoutId = setTimeout(() => controller.abort(), 3000);
         try {
           const res = await fetch(`${baseUrl}/health`, {
             signal: controller.signal,
           });

           if (res.ok) {
             const data = await res.json();
             if (data.status === 'ok') {
               this._backendUrl = baseUrl;
               console.log(`✅ Backend connected: ${baseUrl}`);
-              clearTimeout(timeoutId);
               return;
             }
           }
         } catch {
           // Try next URL
+        } finally {
+          clearTimeout(timeoutId);
         }
       }
-
-      clearTimeout(timeoutId);
       this._backendUrl = null;

Related: the syncRepositories, syncReleases, and syncSettings methods (lines 195–203, 215–223, and 280–288) also fail to validate response status; they silently ignore HTTP errors and may lose sync data without notification.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/backendAdapter.ts` around lines 10 - 23, The AbortController is
created once and reused across attempts so when its timeout triggers all
subsequent fetches immediately fail; move creation of AbortController and its
setTimeout inside the for-loop that iterates over baseUrl (the fetch call to
`${baseUrl}/health`) so each attempt gets a fresh controller/signal and timeout,
ensure you clearTimeout on success or after each attempt and only call
controller.abort() for that attempt, and handle AbortError separately in the
catch so the loop can continue to the next URL; additionally, update
syncRepositories, syncReleases, and syncSettings to validate responses by
checking response.ok (or status) and throw or log a clear error when HTTP status
is not successful so sync failures are not silently ignored.
src/services/backendAdapter.ts-195-203 (1)

195-203: ⚠️ Potential issue | 🟠 Major

Add error handling to write sync endpoints.

The methods syncRepositories, syncReleases, and syncSettings ignore failed fetch responses, silently dropping sync updates (auth/db/validation failures) without surfacing errors. Other methods in this class properly handle errors using throwTranslatedError; these sync endpoints should follow the same pattern.

🔧 Proposed fix
   async syncRepositories(repos: Repository[]): Promise<void> {
     if (!this._backendUrl) return;
 
-    await fetch(`${this._backendUrl}/repositories`, {
+    const res = await fetch(`${this._backendUrl}/repositories`, {
       method: 'PUT',
       headers: this.getAuthHeaders(),
       body: JSON.stringify({ repositories: repos })
     });
+    if (!res.ok) await this.throwTranslatedError(res, 'Sync repositories error');
   }
@@
   async syncReleases(releases: Release[]): Promise<void> {
     if (!this._backendUrl) return;
 
-    await fetch(`${this._backendUrl}/releases`, {
+    const res = await fetch(`${this._backendUrl}/releases`, {
       method: 'PUT',
       headers: this.getAuthHeaders(),
       body: JSON.stringify({ releases })
     });
+    if (!res.ok) await this.throwTranslatedError(res, 'Sync releases error');
   }
@@
   async syncSettings(settings: Record<string, unknown>): Promise<void> {
     if (!this._backendUrl) return;
 
-    await fetch(`${this._backendUrl}/settings`, {
+    const res = await fetch(`${this._backendUrl}/settings`, {
       method: 'PUT',
       headers: this.getAuthHeaders(),
       body: JSON.stringify(settings)
     });
+    if (!res.ok) await this.throwTranslatedError(res, 'Sync settings error');
   }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/backendAdapter.ts` around lines 195 - 203, The syncRepositories
method (and sibling methods syncReleases and syncSettings) currently ignores
fetch failures; update each to capture the fetch Response, check response.ok,
and call throwTranslatedError(response) on non-OK responses so errors are
surfaced instead of silently dropped. Concretely: await the fetch into a
variable (e.g., const res = await fetch(...)), then if (!res.ok) await
throwTranslatedError(res); keep the same request body/headers (getAuthHeaders())
and Promise<void> signature.
server/Dockerfile-1-1 (1)

1-1: ⚠️ Potential issue | 🟠 Major

Use a currently supported Node LTS base image.

node:18-alpine reached end-of-life on April 30, 2025, and is no longer receiving security updates. Upgrade to an active LTS version: Node 22 (supported until April 2027) or Node 24 (supported until April 2028). Node 20 is in maintenance mode and reaches EOL in April 2026.

Also applies to: 8-8

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/Dockerfile` at line 1, The Dockerfile currently uses an EOL Node image
("FROM node:18-alpine AS build"); update the base image to a supported LTS
(e.g., "node:22-alpine" or "node:24-alpine") by replacing "node:18-alpine" in
the FROM line, and apply the same upgrade wherever the same FROM line appears
(the other occurrence noted as "8-8") to ensure the container uses a supported
Node LTS version.
src/App.tsx-38-57 (1)

38-57: ⚠️ Potential issue | 🟠 Major

Race condition: cleanup may miss stopping auto-sync if unmount happens before async init completes.

If the component unmounts before initBackend() resolves (e.g., React StrictMode double-mount in dev), the cleanup on line 52 sees unsubscribe as null and skips stopAutoSync. Meanwhile, startAutoSync() still fires after the await chain completes, leaking the poll interval and store subscription.

Additionally, initBackend() on line 50 lacks error handling — a rejection from backend.init() or syncFromBackend() becomes an unhandled promise rejection.

🛠️ Proposed fix using a cancelled flag
   useEffect(() => {
     let unsubscribe: (() => void) | null = null;
+    let cancelled = false;

     const initBackend = async () => {
-      await backend.init();
-      if (backend.isAvailable) {
-        await syncFromBackend();
-        unsubscribe = startAutoSync();
+      try {
+        await backend.init();
+        if (backend.isAvailable && !cancelled) {
+          await syncFromBackend();
+          if (!cancelled) {
+            unsubscribe = startAutoSync();
+          }
+        }
+      } catch (err) {
+        console.error('Failed to initialize backend:', err);
       }
     };

     initBackend();

     return () => {
+      cancelled = true;
       if (unsubscribe) {
         stopAutoSync(unsubscribe);
       }
     };
   }, []);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/App.tsx` around lines 38 - 57, The effect can leak the auto-sync and
swallow init errors; wrap the async initBackend body in a try/catch and add a
cancelled flag (e.g., let cancelled = false) so you only call startAutoSync when
not cancelled, and if startAutoSync returns an unsubscribe after cancelled is
true immediately call stopAutoSync(unsubscribe) to tear it down; also assign
unsubscribe from startAutoSync into the outer unsubscribe var and ensure cleanup
sets cancelled = true and, if unsubscribe is non-null, calls
stopAutoSync(unsubscribe). Reference initBackend, startAutoSync, stopAutoSync,
unsubscribe, backend.init, and syncFromBackend when applying these changes.
src/store/useAppStore.ts-370-371 (1)

370-371: ⚠️ Potential issue | 🟠 Major

Avoid persisting backendApiSecret to local storage

backendApiSecret is sensitive and is now serialized by Zustand persist. This increases exposure through XSS/local storage disclosure.

🛡️ Safer default
-        // 持久化后端设置
-        backendApiSecret: state.backendApiSecret,
+        // backendApiSecret: keep in-memory only (do not persist)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/store/useAppStore.ts` around lines 370 - 371, The persist config in
useAppStore is serializing the sensitive backendApiSecret; remove
backendApiSecret from persisted state by updating the Zustand persist
partialize/whitelist to exclude backendApiSecret (or explicitly pick fields to
persist) so it remains in-memory only; locate the persist wrapper around
useAppStore and adjust its configuration to not serialize state.backendApiSecret
(keep the property on the store for runtime use but do not include it in the
persisted keys).
server/src/routes/proxy.ts-166-169 (1)

166-169: ⚠️ Potential issue | 🟠 Major

Prevent caller headers from overriding WebDAV Authorization

Line 168 currently allows extraHeaders.Authorization to replace the server-generated Basic auth from stored credentials.

🔒 Proposed header merge fix
-    const headers: Record<string, string> = {
-      'Authorization': `Basic ${credentials}`,
-      ...(extraHeaders || {}),
-    };
+    const headers: Record<string, string> = {
+      ...(extraHeaders || {}),
+      'Authorization': `Basic ${credentials}`,
+    };
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/routes/proxy.ts` around lines 166 - 169, The current merge in the
headers object allows extraHeaders.Authorization to override the
server-generated Basic auth; modify the merge so the server-side Authorization
(built from credentials) takes precedence—either omit Authorization from
extraHeaders before merging or spread extraHeaders first and then set
'Authorization': `Basic ${credentials}` after (update the headers construction
in proxy.ts where headers, extraHeaders, and credentials are used).
src/components/SettingsPanel.tsx-561-572 (1)

561-572: ⚠️ Potential issue | 🟠 Major

Sync-from-backend currently cannot apply deletions

Because updates only run when fetched arrays are non-empty, an empty backend state won’t overwrite stale local data (despite the overwrite confirmation text).

🔁 Proposed overwrite-consistent sync
-      if (repoData.repositories.length > 0) {
-        setRepositories(repoData.repositories);
-      }
-      if (releaseData.releases.length > 0) {
-        setReleases(releaseData.releases);
-      }
-      if (aiConfigData.length > 0) {
-        setAIConfigs(aiConfigData);
-      }
-      if (webdavConfigData.length > 0) {
-        setWebDAVConfigs(webdavConfigData);
-      }
+      setRepositories(repoData.repositories);
+      setReleases(releaseData.releases);
+      setAIConfigs(aiConfigData);
+      setWebDAVConfigs(webdavConfigData);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/components/SettingsPanel.tsx` around lines 561 - 572, The sync logic only
updates local state when fetched arrays have length>0, so deletions from the
backend aren’t applied; change the checks in the SettingsPanel sync to set state
even for empty arrays — e.g. replace the conditional blocks around
setRepositories, setReleases, setAIConfigs, and setWebDAVConfigs so they assign
the fetched values (or default to []) whenever the corresponding payload exists
(check for undefined/null rather than length), e.g. use repoData.repositories
!== undefined ? setRepositories(repoData.repositories) : setRepositories([])
(and analogous changes for releaseData.releases, aiConfigData, webdavConfigData)
so empty backend state overwrites local state consistently.
server/src/config.ts-21-30 (1)

21-30: ⚠️ Potential issue | 🟠 Major

Validate encryption key format at load time

ENCRYPTION_KEY (or key file content) is accepted as-is. Invalid values defer failure to runtime crypto operations.

✅ Proposed fail-fast validation
+function validateEncryptionKey(key: string): string {
+  if (!/^[a-fA-F0-9]{64}$/.test(key)) {
+    throw new Error('ENCRYPTION_KEY must be a 64-character hex string (32 bytes)');
+  }
+  return key;
+}
+
 function resolveEncryptionKey(dataDir: string): string {
   const envKey = process.env.ENCRYPTION_KEY;
   if (envKey) {
-    return envKey;
+    return validateEncryptionKey(envKey);
   }
 
   const keyFilePath = path.join(dataDir, '.encryption-key');
   if (fs.existsSync(keyFilePath)) {
-    return fs.readFileSync(keyFilePath, 'utf-8').trim();
+    return validateEncryptionKey(fs.readFileSync(keyFilePath, 'utf-8').trim());
   }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/config.ts` around lines 21 - 30, The resolveEncryptionKey function
currently returns ENCRYPTION_KEY or the contents of .encryption-key as-is;
change it to validate the key at load time and fail fast: inside
resolveEncryptionKey (and where it reads env var or fs.readFileSync), trim and
ensure the value is non-empty and matches the expected format/length your crypto
code requires (e.g., exact byte length or a base64/hex regex), and if validation
fails throw a clear Error describing whether the env var or key file is invalid
and what format is expected (include ENCRYPTION_KEY and keyFilePath in the
message for context).
server/src/services/proxyService.ts-47-52 (1)

47-52: ⚠️ Potential issue | 🟠 Major

Preserve upstream status when JSON parsing fails

If upstream sends content-type: application/json with empty/malformed body, response.json() throws and the catch path returns 502, masking the real upstream status.

🧩 Proposed tolerant JSON parsing
     let data: unknown;
     const contentType = response.headers.get('content-type') || '';
     if (contentType.includes('application/json')) {
-      data = await response.json();
+      const raw = await response.text();
+      if (!raw) {
+        data = null;
+      } else {
+        try {
+          data = JSON.parse(raw);
+        } catch {
+          data = raw;
+        }
+      }
     } else {
       data = await response.text();
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/services/proxyService.ts` around lines 47 - 52, When upstream
declares content-type 'application/json' but body is empty/malformed,
response.json() can throw and lead to a 502; fix this in the proxy handler by
reading the body as text first (use response.text()) and then attempt
JSON.parse(text) inside a try/catch, falling back to the raw text if parsing
fails, and ensure you always preserve and use the original upstream
response.status (from response.status) rather than converting parse errors into
502; update the logic around contentType, response.json()/response.text(), and
the code that constructs the proxied response (the handler in proxyService.ts)
to implement this tolerant JSON parsing and status preservation.
server/src/routes/releases.ts-82-114 (1)

82-114: ⚠️ Potential issue | 🟠 Major

Validate required release fields before entering the transaction

This path can write null into required DB columns (for example tag_name, repo fields), which turns bad client input into a 500/rollback instead of a 400.

✅ Proposed validation guard
-    const { releases } = req.body as { releases: Record<string, unknown>[] };
+    const { releases } = req.body as { releases: Record<string, unknown>[] };
     if (!Array.isArray(releases)) {
       res.status(400).json({ error: 'releases array required', code: 'RELEASES_ARRAY_REQUIRED' });
       return;
     }
+
+    for (const [index, release] of releases.entries()) {
+      const repository = release.repository as { id?: number; full_name?: string; name?: string } | undefined;
+      const repoId = repository?.id ?? release.repo_id;
+      const repoFullName = repository?.full_name ?? release.repo_full_name;
+      const repoName = repository?.name ?? release.repo_name;
+
+      if (
+        release.id === undefined ||
+        typeof release.tag_name !== 'string' ||
+        repoId === undefined ||
+        typeof repoFullName !== 'string' ||
+        typeof repoName !== 'string'
+      ) {
+        res.status(400).json({
+          error: `Invalid release payload at index ${index}`,
+          code: 'INVALID_RELEASE_PAYLOAD',
+        });
+        return;
+      }
+    }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/routes/releases.ts` around lines 82 - 114, Validate required
fields on each release before entering the db.transaction: check that required
properties like release.id, release.tag_name, and repository fields
(repository.id, repository.full_name, repository.name) or their top-level
fallbacks (repo_id, repo_full_name, repo_name) are present and of the expected
types; if any release fails validation, respond with res.status(400).json({
error: ..., code: ... }) and do not call the upsert transaction. Update the
logic that currently prepares and calls stmt.run inside upsert (referencing
releases, upsert, and stmt.run) to perform this guard up front and only proceed
to db.transaction when all releases pass validation.
server/src/routes/configs.ts-94-109 (1)

94-109: ⚠️ Potential issue | 🟠 Major

Bulk sync can irreversibly replace real secrets with masked placeholders.

Both bulk routes encrypt incoming apiKey/password blindly after DELETE. If payload contains masked values (***xxxx), encrypted placeholders overwrite real credentials.

💡 Proposed direction
+      // Snapshot existing encrypted secrets before delete
+      const existingAi = new Map(
+        (db.prepare('SELECT id, api_key_encrypted FROM ai_configs').all() as Array<Record<string, unknown>>)
+          .map((r) => [String(r.id), (r.api_key_encrypted as string) ?? null])
+      );
       db.prepare('DELETE FROM ai_configs').run();
...
-        const encryptedKey = c.apiKey ? encrypt(c.apiKey, config.encryptionKey) : '';
+        const encryptedKey =
+          typeof c.apiKey === 'string' && c.apiKey.startsWith('***')
+            ? existingAi.get(String(c.id)) ?? null
+            : (c.apiKey ? encrypt(c.apiKey, config.encryptionKey) : null);

Apply the same preservation logic to webdav_configs.password.

Also applies to: 249-263

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/routes/configs.ts` around lines 94 - 109, Bulk sync blindly
overwrites encrypted secrets (apiKey/password) with masked placeholders; update
the bulkSync logic (function/const bulkSync and the INSERT stmt.run for
ai_configs) to detect masked incoming secrets (e.g. values starting with "***")
and, before deleting rows, query existing rows by id to capture current
api_key_encrypted values and reuse them when the payload contains a masked
value; apply the same preservation logic to webdav_configs.password in the
corresponding bulk path (the other block around lines 249-263) so that masked
placeholders do not replace real encrypted credentials.
server/src/routes/sync.ts-154-163 (1)

154-163: ⚠️ Potential issue | 🟠 Major

Category and asset-filter imports are dropping fields during sync.

Import upserts only persist a subset of columns. Fields used by the CRUD routes (description, color, sort_order, platform, etc.) can be lost after import.

💡 Proposed fix
-          INSERT OR REPLACE INTO categories (id, name, icon, keywords, is_custom)
-          VALUES (?, ?, ?, ?, ?)
+          INSERT OR REPLACE INTO categories (id, name, description, keywords, color, icon, sort_order, is_custom)
+          VALUES (?, ?, ?, ?, ?, ?, ?, ?)
...
-            c.id, c.name ?? '', c.icon ?? '📁',
-            typeof c.keywords === 'string' ? c.keywords : JSON.stringify(c.keywords ?? []),
-            c.is_custom ? 1 : 0
+            c.id, c.name ?? '', c.description ?? null,
+            typeof c.keywords === 'string' ? c.keywords : JSON.stringify(c.keywords ?? []),
+            c.color ?? null, c.icon ?? '📁', c.sort_order ?? 0, c.is_custom ? 1 : 0
-          INSERT OR REPLACE INTO asset_filters (id, name, keywords)
-          VALUES (?, ?, ?)
+          INSERT OR REPLACE INTO asset_filters (id, name, description, keywords, platform, sort_order)
+          VALUES (?, ?, ?, ?, ?, ?)
...
-            f.id, f.name ?? '',
-            typeof f.keywords === 'string' ? f.keywords : JSON.stringify(f.keywords ?? [])
+            f.id, f.name ?? '', f.description ?? null,
+            typeof f.keywords === 'string' ? f.keywords : JSON.stringify(f.keywords ?? []),
+            f.platform ?? null, f.sort_order ?? 0

Also applies to: 171-179

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/routes/sync.ts` around lines 154 - 163, The upsert for categories
(catStmt) only writes a subset of columns, which causes other fields like
description, color, sort_order, platform, etc. to be dropped during sync; update
the INSERT OR REPLACE statement (and the analogous asset/filter upserts around
the other block) to include all persisted columns used by CRUD routes (e.g.,
description, color, sort_order, platform, and any custom fields) and pass those
values in the .run(...) call, or switch to an INSERT ... ON CONFLICT(id) DO
UPDATE that explicitly sets each column to the incoming value so existing fields
are not lost (adjust the parameter order to match). Ensure you update both the
categories upsert (catStmt) and the corresponding asset-filter upsert statements
mentioned in the review.
server/src/routes/repositories.ts-45-48 (1)

45-48: ⚠️ Potential issue | 🟠 Major

Clamp pagination inputs to safe bounds.

page/limit are not bounded. Negative or very large values can cause pathological queries and unstable behavior.

💡 Proposed fix
-    const page = parseInt(req.query.page as string) || 1;
-    const limit = parseInt(req.query.limit as string) || 100;
+    const page = Math.max(1, Number.parseInt(req.query.page as string, 10) || 1);
+    const requestedLimit = Number.parseInt(req.query.limit as string, 10) || 100;
+    const limit = Math.min(200, Math.max(1, requestedLimit));
     const search = req.query.search as string | undefined;
     const offset = (page - 1) * limit;

Also applies to: 59-60

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/routes/repositories.ts` around lines 45 - 48, The pagination
inputs parsed into page and limit are unbounded; clamp them after parsing to
prevent negative/huge values: ensure page = Math.max(1, parsedPage) and limit =
Math.min(MAX_LIMIT, Math.max(1, parsedLimit)) (pick a sensible MAX_LIMIT, e.g.
100 or 1000) and then recompute offset = (page - 1) * limit; apply the same
clamping logic wherever page/limit are parsed (the variables page, limit, parsed
via parseInt and used to compute offset in repositories.ts).
server/src/routes/configs.ts-136-146 (1)

136-146: ⚠️ Potential issue | 🟠 Major

Return 404 when updating a non-existent config record.

Both PUT /api/configs/ai/:id and PUT /api/configs/webdav/:id always return success even when no row was updated.

💡 Proposed fix
-    db.prepare(
+    const updateResult = db.prepare(
       'UPDATE ai_configs SET name = ?, api_type = ?, model = ?, base_url = ?, api_key_encrypted = ?, is_active = ?, custom_prompt = ?, use_custom_prompt = ?, concurrency = ? WHERE id = ?'
     ).run(name ?? '', apiType ?? 'openai', model ?? '', baseUrl ?? null, encryptedKey, isActive ? 1 : 0, customPrompt ?? null, useCustomPrompt ? 1 : 0, concurrency ?? 1, id);
+    if (updateResult.changes === 0) {
+      res.status(404).json({ error: 'AI config not found', code: 'AI_CONFIG_NOT_FOUND' });
+      return;
+    }
-    db.prepare(
+    const updateResult = db.prepare(
       'UPDATE webdav_configs SET name = ?, url = ?, username = ?, password_encrypted = ?, path = ?, is_active = ? WHERE id = ?'
     ).run(name ?? '', url ?? '', username ?? '', encryptedPwd, path ?? '/', isActive ? 1 : 0, id);
+    if (updateResult.changes === 0) {
+      res.status(404).json({ error: 'WebDAV config not found', code: 'WEBDAV_CONFIG_NOT_FOUND' });
+      return;
+    }

Also applies to: 289-299

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/routes/configs.ts` around lines 136 - 146, The update handlers
call db.prepare(...).run(...) but ignore the run result; change the handlers for
PUT /api/configs/ai/:id and PUT /api/configs/webdav/:id to capture the statement
result (the object returned by db.prepare(...).run) and check result.changes —
if changes === 0, return res.status(404).json({ error: 'Config not found' })
instead of continuing to build and return a success response; otherwise proceed
to decrypt/mask the api key (maskApiKey, decrypt, config.encryptionKey) and
return the updated object as before. Ensure both codepaths (the block using
encryptedKey/maskedKey shown and the analogous block at lines 289-299) perform
the same existence check.
src/services/autoSync.ts-138-150 (1)

138-150: ⚠️ Potential issue | 🟠 Major

syncToBackend reports success even when requests fail.

Promise.allSettled never throws, so rejected sync operations still print “✅ Synced to backend”.

💡 Proposed fix
-    await Promise.allSettled([
+    const results = await Promise.allSettled([
       backend.syncRepositories(state.repositories),
       backend.syncReleases(state.releases),
       backend.syncAIConfigs(state.aiConfigs),
       backend.syncWebDAVConfigs(state.webdavConfigs),
       backend.syncSettings({
         activeAIConfig: state.activeAIConfig,
         activeWebDAVConfig: state.activeWebDAVConfig,
       }),
     ]);
-
-    console.log('✅ Synced to backend');
+    const failed = results.filter((r) => r.status === 'rejected');
+    if (failed.length > 0) {
+      console.error(`syncToBackend: ${failed.length} request(s) failed`, failed);
+      return;
+    }
+    console.log('✅ Synced to backend');
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/autoSync.ts` around lines 138 - 150, syncToBackend uses
Promise.allSettled so it always reaches the success log even when some
backend.sync* calls fail; update the code in syncToBackend to inspect the
Promise.allSettled results (the array returned by Promise.allSettled([...])) and
if any result.status === 'rejected' either throw an error (or aggregate and log
the rejection reasons) before printing "✅ Synced to backend", otherwise continue
to print success; reference the Promise.allSettled call and
backend.syncRepositories/ syncReleases/ syncAIConfigs/ syncWebDAVConfigs/
syncSettings to locate where to check results and handle rejections.
🟡 Minor comments (8)
server/postcss.config.js-1-3 (1)

1-3: ⚠️ Potential issue | 🟡 Minor

Remove this file — PostCSS config has no place in a backend server directory.

postcss.config.js is a CSS build-tool configuration file used by frontend bundlers (e.g., Vite, webpack) to apply CSS transformations such as Tailwind or autoprefixer. The server/ directory is a Node.js/Express + SQLite backend and has no CSS processing pipeline. This file appears to be an accidental copy from the frontend root and should be deleted to avoid misleading contributors.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/postcss.config.js` around lines 1 - 3, The file exports a PostCSS
config (export default { plugins: [] }) that doesn't belong in the backend
server; remove server/postcss.config.js from the repo to avoid confusion and
accidental frontend tooling picks — simply delete the file and ensure no server
code references it.
src/utils/backendErrors.ts-60-68 (1)

60-68: ⚠️ Potential issue | 🟡 Minor

getCurrentLanguage can return an unexpected language value, causing undefined error messages.

If parsed.state?.language holds a value other than 'zh' or 'en' (any truthy non-matching string), it bypasses the || 'zh' fallback. Downstream, entry[lang] on line 76 would return undefined instead of a translated string.

Suggested fix
 function getCurrentLanguage(): 'zh' | 'en' {
   try {
     const storeData = localStorage.getItem('github-stars-manager');
     if (storeData) {
       const parsed = JSON.parse(storeData);
-      return parsed.state?.language || 'zh';
+      const lang = parsed.state?.language;
+      return lang === 'en' ? 'en' : 'zh';
     }
   } catch { /* ignore */ }
   return 'zh';
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/utils/backendErrors.ts` around lines 60 - 68, getCurrentLanguage can
return arbitrary truthy strings from parsed.state.language which may not be 'zh'
or 'en', causing entry[lang] to be undefined; update getCurrentLanguage to
validate parsed.state?.language and only return 'zh' or 'en' (otherwise return
'zh') so callers like entry[lang] are safe — locate the getCurrentLanguage
function and add an explicit check (e.g., if language === 'zh' || language ===
'en') before returning the value.
README_zh.md-179-184 (1)

179-184: ⚠️ Potential issue | 🟡 Minor

Add blank lines around the environment-variable table.

This currently triggers markdownlint MD058 and should be normalized.

📝 Proposed fix
 #### 环境变量
+
 | 变量 | 必填 | 说明 |
 |----------|----------|-------------|
 | `API_SECRET` | 否 | API 认证令牌。未设置时禁用认证。 |
 | `ENCRYPTION_KEY` | 否 | 用于加密存储密钥的 AES-256 密钥。未设置时自动生成。 |
 | `PORT` | 否 | 服务器端口(默认:3000) |
+
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@README_zh.md` around lines 179 - 184, The Markdown table under the "####
环境变量" header lacks blank lines before and after it (rows include `API_SECRET`,
`ENCRYPTION_KEY`, `PORT`), which triggers markdownlint MD058; fix by inserting a
single blank line between the "#### 环境变量" header and the table, and another
blank line after the table so the table is separated from surrounding text.
server/src/index.ts-79-81 (1)

79-81: ⚠️ Potential issue | 🟡 Minor

Main-module detection fails on Windows due to leading / in URL pathname.

new URL(import.meta.url).pathname on Windows returns /C:/path/to/file.js (leading slash before the drive letter), while process.argv[1].replace(/\\/g, '/') normalizes to C:/path/to/file.js. The string comparison will always be false, so startServer() never executes on Windows.

Not a blocker since Docker runs Linux, but it prevents local development on Windows.

🛠️ Robust fix using Node.js fileURLToPath
+import { fileURLToPath } from 'node:url';
+import path from 'node:path';
+
-const isMainModule = process.argv[1] && new URL(import.meta.url).pathname === (process.platform === 'win32' ? process.argv[1].replace(/\\/g, '/') : process.argv[1]);
+const isMainModule = process.argv[1] && fileURLToPath(import.meta.url) === path.resolve(process.argv[1]);
 if (isMainModule) {
   startServer();
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/index.ts` around lines 79 - 81, The main-module detection string
comparison fails on Windows because new URL(import.meta.url).pathname includes a
leading slash; replace it by converting the file URL to a platform path and
comparing normalized/resolved paths. Import fileURLToPath from 'url' and path
from 'path', compute const currentFile = fileURLToPath(import.meta.url) and
normalize/resolve process.argv[1] (e.g., via
path.resolve(process.argv[1].replace(/\\/g,'/')) or just
path.resolve(process.argv[1])), then set isMainModule by comparing
path.resolve(currentFile) === resolvedArg and call startServer() when true;
update references around isMainModule, import.meta.url, and startServer
accordingly.
src/store/useAppStore.ts-243-243 (1)

243-243: ⚠️ Potential issue | 🟡 Minor

Reconcile active config IDs when replacing config lists

If sync replaces aiConfigs/webdavConfigs and the active ID no longer exists, activeAIConfig / activeWebDAVConfig becomes stale.

♻️ Proposed state-safe setters
-      setAIConfigs: (aiConfigs) => set({ aiConfigs }),
+      setAIConfigs: (aiConfigs) => set((state) => ({
+        aiConfigs,
+        activeAIConfig: aiConfigs.some(c => c.id === state.activeAIConfig) ? state.activeAIConfig : null,
+      })),
...
-      setWebDAVConfigs: (webdavConfigs) => set({ webdavConfigs }),
+      setWebDAVConfigs: (webdavConfigs) => set((state) => ({
+        webdavConfigs,
+        activeWebDAVConfig: webdavConfigs.some(c => c.id === state.activeWebDAVConfig) ? state.activeWebDAVConfig : null,
+      })),

Also applies to: 259-259

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/store/useAppStore.ts` at line 243, When replacing the config arrays
(e.g., in the setter setAIConfigs and similarly setWebDAVConfigs) reconcile the
active ID: check if the existing activeAIConfig (or activeWebDAVConfig) ID still
exists in the incoming aiConfigs (or webdavConfigs); if it does, do nothing; if
it does not, clear the active ID or select a sensible fallback (e.g., first
item or undefined) and update activeAIConfig/activeWebDAVConfig in the same
state update. Modify setAIConfigs and setWebDAVConfigs to perform this check and
update atomically so the store never holds an array and a dangling active ID.
src/components/SettingsPanel.tsx-502-518 (1)

502-518: ⚠️ Potential issue | 🟡 Minor

Wrap backend connection test in try/catch

If backend.init() or backend.checkHealth() throws, the function exits without controlled user feedback.

🧯 Proposed error-safe flow
   const handleTestBackendConnection = async () => {
-    setBackendStatus('checking');
-    // Save the secret first
-    setBackendApiSecret(backendSecretInput || null);
-    // Re-init and check
-    await backend.init();
-    const health = await backend.checkHealth();
-    if (health) {
-      setBackendStatus('connected');
-      setBackendHealth({ version: health.version, timestamp: health.timestamp });
-      alert(t('后端连接成功!', 'Backend connection successful!'));
-    } else {
+    try {
+      setBackendStatus('checking');
+      setBackendApiSecret(backendSecretInput || null);
+      await backend.init();
+      const health = await backend.checkHealth();
+      if (health) {
+        setBackendStatus('connected');
+        setBackendHealth({ version: health.version, timestamp: health.timestamp });
+        alert(t('后端连接成功!', 'Backend connection successful!'));
+        return;
+      }
       setBackendStatus('disconnected');
       setBackendHealth(null);
       alert(t('后端连接失败,请检查服务器是否运行。', 'Backend connection failed. Please check if the server is running.'));
+    } catch (error) {
+      console.error('Backend connection test failed:', error);
+      setBackendStatus('disconnected');
+      setBackendHealth(null);
+      alert(t('后端连接失败,请检查服务器配置。', 'Backend connection failed. Please check backend configuration.'));
     }
   };
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/components/SettingsPanel.tsx` around lines 502 - 518, The
handleTestBackendConnection function currently calls backend.init() and
backend.checkHealth() without error handling; wrap the call sequence in a
try/catch so any thrown error sets state to a failure path: in try keep the
existing success branch (setBackendStatus('connected'), setBackendHealth(...),
success alert), in catch call setBackendStatus('disconnected'),
setBackendHealth(null), log the error (console.error or a logger) and show a
localized failure alert including the error message; ensure any necessary
cleanup or final state updates are handled so the UI never remains in the
'checking' state if an exception occurs.
server/src/routes/repositories.ts-141-147 (1)

141-147: ⚠️ Potential issue | 🟡 Minor

Normalize PATCH payloads for array-backed JSON columns.

ai_tags, ai_platforms, and custom_tags are serialized without shape checks. Non-array payloads get stored as invalid JSON shapes for these columns.

💡 Proposed fix
-      ai_tags: (v) => JSON.stringify(v),
-      ai_platforms: (v) => JSON.stringify(v),
+      ai_tags: (v) => JSON.stringify(Array.isArray(v) ? v : []),
+      ai_platforms: (v) => JSON.stringify(Array.isArray(v) ? v : []),
...
-      custom_tags: (v) => JSON.stringify(v),
+      custom_tags: (v) => JSON.stringify(Array.isArray(v) ? v : []),
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/routes/repositories.ts` around lines 141 - 147, The serializer for
ai_tags, ai_platforms, and custom_tags currently JSON.stringify()s any input and
can produce non-array JSON; update each serializer in repositories.ts to
normalize values to arrays: if v is null/undefined return null; else if
Array.isArray(v) return JSON.stringify(v); otherwise wrap the scalar into an
array and return JSON.stringify([v]). This ensures the array-backed JSON columns
always store an array shape.
server/src/routes/categories.ts-6-9 (1)

6-9: ⚠️ Potential issue | 🟡 Minor

Guarantee parseJsonColumn always returns an array.

JSON.parse may return object/null/string, which breaks the implied array contract for keywords. Guard with Array.isArray(...).

💡 Proposed fix
 function parseJsonColumn(value: unknown): unknown[] {
   if (typeof value !== 'string' || !value) return [];
-  try { return JSON.parse(value); } catch { return []; }
+  try {
+    const parsed = JSON.parse(value);
+    return Array.isArray(parsed) ? parsed : [];
+  } catch {
+    return [];
+  }
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/routes/categories.ts` around lines 6 - 9, parseJsonColumn
currently returns whatever JSON.parse yields (objects, null, strings), breaking
the array contract for fields like keywords; update parseJsonColumn to
JSON.parse the string into a temp variable then return it only if
Array.isArray(temp) (otherwise return []), and ensure it handles null/invalid
JSON in the catch by returning []; this keeps the function signature consistent
for callers like keywords processing.
🧹 Nitpick comments (8)
.gitignore (1)

35-36: Minor: leading space in the comment line.

Line 35 has a leading space before # ( # Orchestrator internal files), inconsistent with other section headers in this file.

Suggested fix
- # Orchestrator internal files
+# Orchestrator internal files
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.gitignore around lines 35 - 36, Remove the unintended leading space before
the section header comment " # Orchestrator internal files" so it matches other
headers (change to "# Orchestrator internal files"); ensure the comment sits
immediately above the .sisyphus/ entry and follows the same formatting/style as
other section headers in the file.
server/tsconfig.json (1)

12-13: Consider removing declaration generation for an application server.

declaration and declarationMap generate .d.ts files, which are useful for libraries but unnecessary for a standalone Express server. Disabling them would slightly speed up builds.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/tsconfig.json` around lines 12 - 13, Remove or disable TypeScript
declaration generation for the application server by setting the "declaration"
and "declarationMap" compiler options to false (or removing those keys) in the
tsconfig so .d.ts files are not emitted for the Express server build; keep the
change scoped to this tsconfig and ensure no downstream consumers rely on these
declaration files before committing.
server/src/routes/health.ts (1)

8-8: Avoid hardcoding service version in health response.

This will drift on future releases; prefer a single source (e.g., env-injected app version) used across runtime and docs.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/routes/health.ts` at line 8, The health route currently hardcodes
version: '0.1.0' causing drift; replace the literal with a single source of
truth (e.g., process.env.APP_VERSION || the package.json "version") so runtime
and docs remain consistent: update the health response in
server/src/routes/health.ts to read the version from an env var (APP_VERSION)
with a fallback to the app/package version (imported or exposed by your config
module) instead of the hardcoded '0.1.0'.
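A minimal sketch of the suggested single-source approach, assuming an `APP_VERSION` env var (a suggested name, not one the PR currently reads) with the current value as fallback:

```typescript
// Hypothetical: resolve the version once at startup instead of hardcoding
// '0.1.0' inside the health route. APP_VERSION is an assumed env var.
const APP_VERSION: string = process.env.APP_VERSION ?? '0.1.0';

export function healthPayload(now: Date = new Date()): {
  status: string;
  version: string;
  timestamp: string;
} {
  return { status: 'ok', version: APP_VERSION, timestamp: now.toISOString() };
}
```

The route handler would then just return `healthPayload()`, and CI or Docker could inject `APP_VERSION` from `package.json` at build time.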
server/src/index.ts (2)

64-76: Consider adding a forced shutdown timeout.

If active connections don't close, server.close() waits indefinitely. A forced process.exit(1) after a timeout (e.g., 10s) ensures the process doesn't hang during deploys.

⏱️ Example timeout guard
   const shutdown = () => {
     console.log('\n🛑 Shutting down...');
+    const forceExit = setTimeout(() => {
+      console.error('⚠️ Forced shutdown after timeout');
+      process.exit(1);
+    }, 10_000);
+    forceExit.unref();
     server.close(() => {
       closeDb();
       console.log('👋 Server stopped');
       process.exit(0);
     });
   };
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/index.ts` around lines 64 - 76, The graceful shutdown handler
shutdown currently calls server.close() and may hang forever if connections
don't finish; add a forced timeout (e.g., 10_000 ms) inside shutdown that calls
process.exit(1) if the server hasn't closed in time, store the timeout id so you
can clear it when the server.close callback runs, and ensure closeDb() is still
called before exiting; update the signal handlers that reference shutdown
accordingly and make sure to clear the timeout on successful shutdown to avoid
double exits.

22-25: Consider restricting CORS origins and lowering the JSON body limit.

cors() with no options allows all origins, and 50mb is a large body limit. Both are permissive defaults that could be tightened for a backend storing sensitive data (encrypted API keys, configs). Consider making the allowed origin configurable and reducing the body limit to what the sync payloads actually need.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/index.ts` around lines 22 - 25, Replace the permissive CORS and
large JSON body settings used in app.use(cors()) and app.use(express.json({
limit: '50mb' })): make the allowed origins configurable (e.g., read
CORS_ALLOWED_ORIGIN or a list from env) and pass that to cors({ origin: ... })
instead of the default open policy, and make the JSON body limit configurable
(e.g., process.env.JSON_BODY_LIMIT || '1mb') and reduce the default to a smaller
value appropriate for sync payloads; update any related startup/README notes to
document the new env vars.
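One possible shape for the env-driven settings, sketched without the Express wiring; `CORS_ALLOWED_ORIGINS` and `JSON_BODY_LIMIT` are suggested variable names, not ones the PR defines:

```typescript
// Hypothetical env parsing for the two settings flagged above.
// Unset CORS_ALLOWED_ORIGINS preserves today's allow-all behavior.
export function corsOrigins(raw: string | undefined): string[] | boolean {
  if (!raw) return true; // allow all origins when unset
  return raw.split(',').map((s) => s.trim()).filter(Boolean);
}

export const jsonBodyLimit: string = process.env.JSON_BODY_LIMIT ?? '1mb';
```

These would feed `cors({ origin: corsOrigins(process.env.CORS_ALLOWED_ORIGINS) })` and `express.json({ limit: jsonBodyLimit })` respectively (the `cors` package accepts a boolean or an array for `origin`).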
server/src/services/crypto.ts (2)

6-9: Consider validating the key length before use.

Buffer.from(key, 'hex') silently produces a shorter buffer for invalid/truncated hex input. If the resulting buffer isn't exactly 32 bytes, createCipheriv throws a generic OpenSSL error that's hard to diagnose. A guard would make debugging easier.

🛡️ Optional defensive check
 export function encrypt(plaintext: string, key: string): string {
   const keyBuffer = Buffer.from(key, 'hex');
+  if (keyBuffer.length !== 32) {
+    throw new Error(`Invalid encryption key: expected 32 bytes, got ${keyBuffer.length}`);
+  }
   const iv = crypto.randomBytes(IV_LENGTH);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/services/crypto.ts` around lines 6 - 9, Validate the hex key
before using it in encrypt: after creating keyBuffer = Buffer.from(key, 'hex')
inside the encrypt function, check that keyBuffer.length === 32 (or the expected
key length for ALGORITHM) and throw a clear, descriptive error (e.g., "Invalid
encryption key: expected 32-byte hex key") if it isn't; this prevents the
generic OpenSSL error from createCipheriv and makes failures easier to
diagnose—reference encrypt, keyBuffer, createCipheriv, ALGORITHM, and IV_LENGTH
when applying the guard.

25-35: Same key validation applies to decrypt.

For symmetry and defensive coding, the same check would be valuable here.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/services/crypto.ts` around lines 25 - 35, In function decrypt, add
the same defensive key validation as in encrypt: verify that the provided key
string produces a Buffer of the expected length (e.g., 32 bytes for AES-256)
after Buffer.from(key, 'hex') and throw a clear Error if the key is malformed or
wrong length; ensure this check occurs before using keyBuffer to create the
decipher so invalid hex or bad-length keys are rejected with a descriptive
message.
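Both checks could share one guard that `encrypt` and `decrypt` call before constructing the cipher/decipher; a sketch, with an illustrative name (32 bytes matches the AES-256 key size):

```typescript
import { Buffer } from 'node:buffer';

// Hypothetical shared guard for encrypt/decrypt. Note Buffer.from(key, 'hex')
// silently drops invalid hex, so only the resulting byte length is reliable.
export function toKeyBuffer(key: string): Buffer {
  const keyBuffer = Buffer.from(key, 'hex');
  if (keyBuffer.length !== 32) {
    throw new Error(`Invalid encryption key: expected 32 bytes, got ${keyBuffer.length}`);
  }
  return keyBuffer;
}
```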
server/src/routes/sync.ts (1)

187-199: Prepare SQL statements once per dataset, not once per row.

db.prepare(...) is called inside loops for ai_configs and webdav_configs. Preparing once outside the loop reduces overhead and improves import throughput.

Also applies to: 207-217

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/routes/sync.ts` around lines 187 - 199, The loop in the sync route
prepares SQL statements on every iteration (see the ai_configs loop using
db.prepare(...) and similarly the webdav_configs loop), causing unnecessary
overhead; fix by moving the INSERT OR REPLACE db.prepare(...) calls out of the
for-loops and creating a single prepared statement for ai_configs (e.g.,
insertAiConfigStmt) and one for webdav_configs (e.g., insertWebdavConfigStmt)
before iterating, then call .run(...) on those prepared statements inside the
loops, ensuring you still compute existingKey and other per-row values before
invoking insertAiConfigStmt.run(...) or insertWebdavConfigStmt.run(...).
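The intended hoisting can be sketched with stand-in interfaces (the `Stmt`/`Db` shapes and column list below are illustrative, not better-sqlite3's real API or the PR's actual schema):

```typescript
// Shape-only sketch: prepare once per dataset, run once per row.
interface Stmt { run(...args: unknown[]): unknown }
interface Db { prepare(sql: string): Stmt }

export function importAiConfigs(db: Db, rows: Array<{ id: string; name: string }>): number {
  // Prepared exactly once, outside the loop.
  const insertAiConfigStmt = db.prepare(
    'INSERT OR REPLACE INTO ai_configs (id, name) VALUES (?, ?)'
  );
  for (const row of rows) {
    insertAiConfigStmt.run(row.id, row.name); // executed per row
  }
  return rows.length;
}
```

The same pattern applies to the `webdav_configs` loop: one prepared statement per table, reused for every row.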

ℹ️ Review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 10fc86d and 66ea2e3.

⛔ Files ignored due to path filters (3)
  • dist/index.html is excluded by !**/dist/**
  • package-lock.json is excluded by !**/package-lock.json
  • server/package-lock.json is excluded by !**/package-lock.json
📒 Files selected for processing (39)
  • .gitignore
  • Dockerfile
  • README.md
  • README_zh.md
  • docker-compose.yml
  • nginx.conf
  • package.json
  • server/.gitignore
  • server/Dockerfile
  • server/data/.gitkeep
  • server/package.json
  • server/postcss.config.js
  • server/src/config.ts
  • server/src/db/connection.ts
  • server/src/db/migrations.ts
  • server/src/db/schema.ts
  • server/src/index.ts
  • server/src/middleware/auth.ts
  • server/src/middleware/errorHandler.ts
  • server/src/routes/categories.ts
  • server/src/routes/configs.ts
  • server/src/routes/health.ts
  • server/src/routes/proxy.ts
  • server/src/routes/releases.ts
  • server/src/routes/repositories.ts
  • server/src/routes/sync.ts
  • server/src/services/crypto.ts
  • server/src/services/proxyService.ts
  • server/tests/routes/proxy.test.ts
  • server/tests/services/crypto.test.ts
  • server/tsconfig.json
  • server/vitest.config.ts
  • src/App.tsx
  • src/components/SettingsPanel.tsx
  • src/services/autoSync.ts
  • src/services/backendAdapter.ts
  • src/store/useAppStore.ts
  • src/types/index.ts
  • src/utils/backendErrors.ts

Micah Zheng added 3 commits February 25, 2026 10:38
- schema.ts: add the missing prerelease and draft columns to the releases table
- auth.ts: change the health-check path from /api/health to /health (the path is relative to the middleware mount point)
- sync.ts: fix the AI config upsert parameter order (base_url and model were swapped)
- proxyService.ts: redact URLs in logs to avoid leaking sensitive query params such as Gemini API keys
- autoSync.ts: remove the length > 0 check so empty arrays sync and deletions propagate

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 3

🧹 Nitpick comments (2)
server/src/services/proxyService.ts (1)

53-56: Multi-value headers (e.g., Set-Cookie) are collapsed into a single string.

Record<string, string> cannot represent repeated header names. In Node.js 18+ fetch, response.headers.forEach yields each value per entry; for headers that legitimately repeat (like Set-Cookie), only the last value is retained. For the current upstream targets (GitHub, AI, WebDAV) this is unlikely to cause problems, but it is a limitation worth being aware of.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/services/proxyService.ts` around lines 53 - 56, The current
responseHeaders: Record<string,string> collapses multi-value headers; update the
collection to preserve repeated names by using a type like
Record<string,string|string[]> for responseHeaders and change the
response.headers.forEach handling (the responseHeaders variable and the
response.headers.forEach callback) to accumulate values: if a header key already
exists, convert/append to an array, otherwise store the string; ensure callers
that expect single strings are adjusted to accept arrays or join values where
appropriate (e.g., join with ", " except for Set-Cookie which must be preserved
as multiple entries).
server/src/middleware/auth.ts (1)

25-31: Handle Authorization scheme case-insensitively.

Line 25 currently enforces exact 'Bearer '. Auth schemes are case-insensitive, so this can reject valid clients using bearer.

Proposed diff
-  const authHeader = req.headers.authorization;
-  if (!authHeader || !authHeader.startsWith('Bearer ')) {
+  const authHeader = req.headers.authorization;
+  const [scheme, credentials] = authHeader?.split(/\s+/, 2) ?? [];
+  if (!scheme || scheme.toLowerCase() !== 'bearer' || !credentials) {
     res.status(401).json({ error: 'Unauthorized', code: 'UNAUTHORIZED' });
     return;
   }
-
-  const token = authHeader.slice(7);
+  const token = credentials;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/middleware/auth.ts` around lines 25 - 31, The check for the
Authorization scheme is currently case-sensitive (authHeader.startsWith('Bearer
')) which will reject valid headers like 'bearer'; change the logic to perform a
case-insensitive check and extract the token safely (e.g. use a case-insensitive
regex or lowercase comparison). Specifically, update the guard around authHeader
to use a regex like authHeader.match(/^Bearer\s+(.+)$/i) or
authHeader.toLowerCase().startsWith('bearer '), and extract the token from the
matched group or by slicing after the first space (use the original header for
extraction when using regex to preserve token case). Ensure the code still
returns 401 on failure and assigns the token to the existing token variable.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@server/src/services/proxyService.ts`:
- Around line 41-46: The Content-Type existence check in the body handling block
uses a case-sensitive lookup (headers['Content-Type']) and can add a duplicate
header when callers pass 'content-type' lowercase; update the logic in the block
that prepares fetchOptions (the code referencing body, method, headers, and
fetchOptions) to perform a case-insensitive check for an existing content-type
header (e.g., by using a Headers instance and calling get('content-type') or by
scanning headers' keys with key.toLowerCase() === 'content-type') and only set
the 'Content-Type': 'application/json' header when no variant is present; ensure
you preserve existing header casing/values rather than blindly injecting a
second header.
- Around line 15-25: The redactUrl function currently only masks
['key','api_key','token','access_token'] and may miss other common sensitive
query params; update redactUrl to include additional keys such as
'secret','apikey','client_secret','password','auth' (and any other org-specific
secrets) so all these params are checked and replaced with '***' via
URL.searchParams.set; locate the function redactUrl in proxyService.ts and
expand the redact list used in the for loop to cover these additional
identifiers.
- Around line 58-64: The code calls response.json() whenever Content-Type
includes application/json which throws on empty bodies (e.g., 204) and bubbles
up as a 502; change the logic in proxyService.ts to first read the raw body via
response.text(), treat an empty string (or content-length === '0') as no-body
(e.g., data = null or ''), and only attempt JSON.parse on the text when
non-empty and contentType includes 'application/json' (wrap JSON.parse in a
try/catch and fall back to the raw text on parse errors) instead of directly
calling response.json().
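Sketched, the text-first parsing the prompt asks for (the `readBody` helper name is hypothetical):

```typescript
// Hypothetical helper: read the raw body first, then decide how to parse it.
// Shaped to accept anything fetch-Response-like.
async function readBody(response: {
  text(): Promise<string>;
  headers: { get(name: string): string | null };
}): Promise<unknown> {
  const contentType = response.headers.get('content-type') ?? '';
  const raw = await response.text(); // unlike response.json(), safe on empty bodies
  if (raw === '') return null;       // e.g. a 204 No Content response
  if (contentType.includes('application/json')) {
    try {
      return JSON.parse(raw);
    } catch {
      return raw; // malformed JSON falls back to raw text instead of bubbling a 502
    }
  }
  return raw;
}
```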

---

Nitpick comments:
In `@server/src/middleware/auth.ts`:
- Around line 25-31: The check for the Authorization scheme is currently
case-sensitive (authHeader.startsWith('Bearer ')) which will reject valid
headers like 'bearer'; change the logic to perform a case-insensitive check and
extract the token safely (e.g. use a case-insensitive regex or lowercase
comparison). Specifically, update the guard around authHeader to use a regex
like authHeader.match(/^Bearer\s+(.+)$/i) or
authHeader.toLowerCase().startsWith('bearer '), and extract the token from the
matched group or by slicing after the first space (use the original header for
extraction when using regex to preserve token case). Ensure the code still
returns 401 on failure and assigns the token to the existing token variable.

In `@server/src/services/proxyService.ts`:
- Around line 53-56: The current responseHeaders: Record<string,string>
collapses multi-value headers; update the collection to preserve repeated names
by using a type like Record<string,string|string[]> for responseHeaders and
change the response.headers.forEach handling (the responseHeaders variable and
the response.headers.forEach callback) to accumulate values: if a header key
already exists, convert/append to an array, otherwise store the string; ensure
callers that expect single strings are adjusted to accept arrays or join values
where appropriate (e.g., join with ", " except for Set-Cookie which must be
preserved as multiple entries).

ℹ️ Review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 61e9c56 and cffcaa6.

📒 Files selected for processing (5)
  • server/src/db/schema.ts
  • server/src/middleware/auth.ts
  • server/src/routes/sync.ts
  • server/src/services/proxyService.ts
  • src/services/autoSync.ts
🚧 Files skipped from review as they are similar to previous changes (3)
  • src/services/autoSync.ts
  • server/src/routes/sync.ts
  • server/src/db/schema.ts

- extend the redactUrl sensitive-parameter list (add secret/apikey/client_secret/password/auth)
- make the Content-Type check case-insensitive to prevent injecting a duplicate header
- handle empty response bodies safely by reading text before JSON.parse, so 204 responses no longer surface as 502

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

🧹 Nitpick comments (1)
server/src/services/proxyService.ts (1)

35-38: Avoid mutating caller-owned headers through shared reference.

Line 35 assigns headers directly into fetchOptions, and Line 47 mutates it. This can leak side effects to the caller if the same object is reused.

♻️ Proposed refactor
-    const fetchOptions: RequestInit = {
+    const requestHeaders: Record<string, string> = { ...headers };
+    const fetchOptions: RequestInit = {
       method,
-      headers,
+      headers: requestHeaders,
       signal: controller.signal,
     };
@@
-      const hasContentType = Object.keys(headers).some(
+      const hasContentType = Object.keys(requestHeaders).some(
         k => k.toLowerCase() === 'content-type'
       );
       if (!hasContentType) {
-        (fetchOptions.headers as Record<string, string>)['Content-Type'] = 'application/json';
+        requestHeaders['Content-Type'] = 'application/json';
       }

Also applies to: 43-48

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/services/proxyService.ts` around lines 35 - 38, The code assigns
the caller-provided headers object directly into fetchOptions and later mutates
it, which can leak side effects; fix by creating a copy of the headers before
using or mutating them (e.g., const localHeaders = new Headers(headers) or const
localHeaders = { ...headers } / Object.assign({}, headers)), use localHeaders in
fetchOptions (replace headers with localHeaders) and perform any header
mutations against localHeaders (not the original headers), ensuring
controller.signal remains unchanged; update references to fetchOptions, headers,
and any mutation sites so only the cloned localHeaders is modified.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@server/src/services/proxyService.ts`:
- Around line 51-53: The timeout is being cleared as soon as headers arrive
(clearTimeout(timeoutId) right after await fetch), leaving
response.text()/parsing unprotected; move the clearTimeout so it runs only after
the full body has been read and processed (e.g., after await response.text() and
any JSON.parse or downstream processing) to ensure the timer covers the entire
proxy lifecycle; update the same pattern around the other response handling
blocks that call response.text() / response.json() (lines referenced in the
comment) so each clearTimeout(timeoutId) is executed after body consumption in
the proxyService fetch/response handling code.
- Around line 27-33: In proxyRequest (and around where you call fetch) add URL
guardrails: parse the caller-provided url with new URL(url) (use the existing
ProxyRequestOptions/ proxyRequest), ensure protocol is either 'http:' or
'https:', reject hostnames that are 'localhost' or '127.0.0.1', and disallow
known metadata/service IPs (e.g. 169.254.169.254) and private CIDR ranges
(10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16); to reliably detect IPs resolve the
hostname (dns.promises.lookup with { all: true }) and validate each returned
address using net.isIP and CIDR checks before performing fetch; throw/reject
early with a clear error if any check fails so the fetch never executes.
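Under those suggestions, a pre-fetch guard might look like the following sketch. The regex-based private-range checks are simplified (common IPv4 ranges only) and the name `assertSafeUrl` is hypothetical:

```typescript
import { lookup } from 'node:dns/promises';
import net from 'node:net';

// Simplified private/loopback IPv4 patterns; a production check would also
// cover IPv6 and the full set of special-use ranges.
const PRIVATE_V4 = [/^10\./, /^127\./, /^169\.254\./, /^192\.168\./, /^172\.(1[6-9]|2\d|3[01])\./];

async function assertSafeUrl(rawUrl: string): Promise<void> {
  const parsed = new URL(rawUrl); // throws on unparseable input
  if (parsed.protocol !== 'http:' && parsed.protocol !== 'https:') {
    throw new Error('Blocked proxy target: unsupported protocol');
  }
  const host = parsed.hostname;
  if (host === 'localhost' || host === '[::1]') {
    throw new Error('Blocked proxy target: loopback host');
  }
  // Resolve the name so hostnames pointing at internal IPs are also caught.
  const addresses = net.isIP(host)
    ? [{ address: host }]
    : await lookup(host, { all: true });
  for (const { address } of addresses) {
    if (PRIVATE_V4.some(re => re.test(address))) {
      throw new Error('Blocked proxy target: private or loopback address');
    }
  }
}
```

The fetch in proxyRequest would then run only after `await assertSafeUrl(url)` resolves.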

---

Nitpick comments:
In `@server/src/services/proxyService.ts`:
- Around line 35-38: The code assigns the caller-provided headers object
directly into fetchOptions and later mutates it, which can leak side effects;
fix by creating a copy of the headers before using or mutating them (e.g., const
localHeaders = new Headers(headers) or const localHeaders = { ...headers } /
Object.assign({}, headers)), use localHeaders in fetchOptions (replace headers
with localHeaders) and perform any header mutations against localHeaders (not
the original headers), ensuring controller.signal remains unchanged; update
references to fetchOptions, headers, and any mutation sites so only the cloned
localHeaders is modified.

ℹ️ Review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between cffcaa6 and bae5c34.

📒 Files selected for processing (1)
  • server/src/services/proxyService.ts

- docker-compose.yml: remove the duplicated services block (Critical)
- server/Dockerfile: upgrade to Node 22 and drop devDependencies (Major x2)
- errorHandler: add a headersSent check and propagate the error code (Major)
- crypto: validate the encryption key length (Major)
- index.ts: fix main-module detection on Windows (Minor)
- remove server/postcss.config.js (Minor)
- proxy.ts: prevent the WebDAV Authorization header from being overwritten (Major)
- releases.ts: validate the release id and clamp pagination (Major x2)
- repositories.ts: clamp pagination (Major)
- configs.ts: return 404 when updating a nonexistent record; keep existing secrets during bulk sync (Major x2)
- sync.ts: fill in missing category/asset-filter fields (Major)
- proxyService.ts: add SSRF protection; make the timeout cover body reads (Major x2)
- backendAdapter.ts: move the AbortController into the loop; add error handling to sync methods (Major x2)
- App.tsx: fix a useEffect race condition (Major)
- useAppStore.ts: stop persisting backendApiSecret (Major)
- backendErrors.ts: make getCurrentLanguage type-safe (Minor)
- autoSync.ts: report failure details from syncToBackend (Major)

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 15

🧹 Nitpick comments (7)
src/services/backendAdapter.ts (2)

85-103: Excessive type assertions in fetchStarredRepos response mapping.

The chain of as casts on lines 98-101 is hard to read and fragile. Consider defining a local interface for the GitHub starred response shape, then using a single cast at the boundary.

Suggested approach
+  interface StarredItem {
+    starred_at?: string;
+    repo?: Repository;
+  }
   const data = await res.json();
-  return (data as Record<string, unknown>[]).map((item) =>
-    (item as { starred_at?: string; repo?: Repository }).starred_at && (item as { repo?: Repository }).repo
-      ? { ...((item as { repo: Repository }).repo), starred_at: (item as { starred_at: string }).starred_at }
-      : item as unknown as Repository
-  );
+  return (data as (StarredItem | Repository)[]).map((item) => {
+    const starred = item as StarredItem;
+    return starred.starred_at && starred.repo
+      ? { ...starred.repo, starred_at: starred.starred_at }
+      : item as Repository;
+  });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/backendAdapter.ts` around lines 85 - 103, The mapping in
fetchStarredRepos uses many inline type assertions which is hard to read; define
a local interface (e.g., GitHubStarredItem { starred_at?: string; repo?:
Repository }) and cast the parsed JSON once to GitHubStarredItem[] right after
await res.json(), then map over that typed array and produce Repository objects
(merging repo with starred_at when present); keep error handling via
throwTranslatedError and use existing symbols fetchStarredRepos, Repository,
getAuthHeaders, _backendUrl to locate and replace the current mapping logic.

194-233: No request timeout on data sync and fetch methods.

After init() succeeds, all subsequent fetch calls (e.g., syncRepositories, fetchRepositories) have no AbortController or timeout. If the backend becomes unresponsive, these will hang indefinitely, blocking the UI sync flow.

Consider adding a shared timeout helper (e.g., 15–30s) for all backend requests.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/backendAdapter.ts` around lines 194 - 233, The
syncRepositories/fetchRepositories/syncReleases/fetchReleases methods use fetch
with no timeout, so a hung backend can block the UI; add a shared timeout helper
(e.g., createRequestWithTimeout or fetchWithTimeout) that uses AbortController
and a configurable timeout (15–30s) and call it instead of fetch in these
methods, wiring the same getAuthHeaders() and body/URL and ensuring the
AbortController is cleaned up (clearTimeout) and HTTP errors still route to
throwTranslatedError; update all four methods to use this helper so every
backend request has a bounded timeout.
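One way to shape that helper — the 20 s default is an arbitrary pick inside the suggested 15–30 s range, and the name `fetchWithTimeout` is hypothetical:

```typescript
// Hypothetical shared helper: every backend request gets a bounded lifetime.
async function fetchWithTimeout(
  input: string,
  init: RequestInit = {},
  timeoutMs = 20_000,
): Promise<Response> {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    // Merging a caller-supplied signal is omitted here for brevity.
    return await fetch(input, { ...init, signal: controller.signal });
  } finally {
    clearTimeout(timer); // release the timer on success and failure alike
  }
}
```

Each of syncRepositories/fetchRepositories/syncReleases/fetchReleases would then call this in place of bare fetch, keeping the existing getAuthHeaders() wiring and error routing unchanged.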
server/src/routes/sync.ts (1)

8-12: maskApiKey is duplicated across sync.ts and configs.ts.

The identical helper exists in both files. Extract it into a shared utility (e.g., server/src/utils/mask.ts) to keep things DRY.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/routes/sync.ts` around lines 8 - 12, The maskApiKey function is
duplicated; extract it into a single shared utility module (e.g., create a new
module exporting function maskApiKey) and replace the duplicate implementations
in both sync.ts and configs.ts with imports of that exported function; ensure
you preserve the same signature (key: string | null | undefined): string and
behavior, update any import statements in files that used the old local function
to import from the new utility, and remove the redundant local copies.
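A consolidated helper along those lines — the file path and the keep-last-4 masking behavior are assumptions, not taken from the PR:

```typescript
// Hypothetical server/src/utils/mask.ts — single replacement for the
// duplicated maskApiKey/maskPassword helpers.
function maskSecret(secret: string | null | undefined): string {
  if (!secret) return '';
  if (secret.length <= 4) return '****'; // too short to reveal any suffix
  return '*'.repeat(secret.length - 4) + secret.slice(-4);
}
```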
server/src/services/proxyService.ts (1)

27-42: SSRF hostname validation is in place but only checks string literals.

The current validation blocks localhost, 127.0.0.1, ::1, 169.254.169.254, and private IP patterns as string matches on the hostname. A hostname like evil.attacker.com resolving to 127.0.0.1 would bypass this. DNS-based resolution would close this gap but adds complexity. This is a reasonable tradeoff for the current threat model (admin-configured URLs), but worth noting.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/services/proxyService.ts` around lines 27 - 42, validateUrl
currently only checks the hostname string against BLOCKED_HOSTS and
PRIVATE_IP_PATTERNS, which allows hostnames that resolve to blocked IPs; change
validateUrl to be async (e.g., validateUrl -> validateUrlAsync) and after
parsing the URL use a DNS resolution (dns.promises.lookup or
dns.promises.resolve) to get all resolved addresses for parsed.hostname, then
check each resolved IP against BLOCKED_HOSTS/IP literals and PRIVATE_IP_PATTERNS
(and IPv6 equivalents) and throw the same blocked-proxy Error if any resolved
address matches; ensure you handle lookup failures (treat errors as blocked or
surface a clear error) and update any callers to await the new async function.
server/src/routes/configs.ts (1)

10-14: maskApiKey and maskPassword are identical functions.

Both perform the same logic. Consolidate into a single maskSecret helper.

Also applies to: 187-191

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/routes/configs.ts` around lines 10 - 14, maskApiKey and
maskPassword duplicate the same logic; consolidate them into a single helper
named maskSecret that accepts (key: string | null | undefined): string and
implements the existing masking behavior, then replace all uses of maskApiKey
and maskPassword with maskSecret (update exports/imports if needed) and remove
the redundant function definitions so only maskSecret remains referenced by the
routes and any other callers (e.g., the other occurrence mentioned).
src/services/autoSync.ts (1)

47-91: Any single entity change triggers store writes for all fulfilled slices, including unchanged ones.

hasChanges is a single boolean ORed across all five entities. When (e.g.) only reposResult changed, the if (!hasChanges) return guard at line 91 still lets releasesResult, aiResult, webdavResult, and settingsResult through to their respective state.set* calls at lines 96–117, even though their data is identical to what's in the store.

While Zustand's set performs shallow reference replacement rather than deep equality, each call still triggers the subscription callbacks and creates a new state reference. With five potential writes per poll cycle and a 5-second interval, this adds noise to the store's change stream and makes the debounce-push fire unnecessarily.

Per-entity change tracking would confine writes to only the slices that actually changed:

♻️ Proposed refactor — track changes per entity
-    let hasChanges = false;
+    const entityChanged = { repos: false, releases: false, ai: false, webdav: false, settings: false };
 
     if (reposResult.status === 'fulfilled') {
       const hash = quickHash(reposResult.value.repositories);
       if (hash !== _lastHash.repos) {
         _lastHash.repos = hash;
-        hasChanges = true;
+        entityChanged.repos = true;
       }
     }
     // ... repeat for each entity ...
 
-    if (!hasChanges) return;
+    if (!Object.values(entityChanged).some(Boolean)) return;
 
     _isSyncingFromBackend = true;
     const state = useAppStore.getState();
 
-    if (reposResult.status === 'fulfilled') {
+    if (reposResult.status === 'fulfilled' && entityChanged.repos) {
       state.setRepositories(reposResult.value.repositories);
     }
-    if (releasesResult.status === 'fulfilled') {
+    if (releasesResult.status === 'fulfilled' && entityChanged.releases) {
       state.setReleases(releasesResult.value.releases);
     }
     // ... etc.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/autoSync.ts` around lines 47 - 91, The current single boolean
hasChanges causes all fulfilled slices to be written even if only one changed;
replace this with per-entity change flags (e.g., reposChanged, releasesChanged,
aiChanged, webdavChanged, settingsChanged) computed by comparing quickHash(...)
to _lastHash.* for each of reposResult, releasesResult, aiResult, webdavResult,
settingsResult; only update _lastHash for that entity when its flag is true and
only call the corresponding state.setX (e.g., state.setRepos, state.setReleases,
state.setAi, state.setWebdav, state.setSettings) for entities whose changed flag
is true; you can still skip all work early if none of the flags are true.
src/store/useAppStore.ts (1)

243-243: setAIConfigs / setWebDAVConfigs don't reconcile the active-config pointer.

Both bulk setters do a full replace without checking whether the currently active config ID still exists in the incoming list. Contrast with deleteAIConfig (line 238–241) and deleteWebDAVConfig (line 254–257), which explicitly null out the active pointer when the referenced entry is removed.

When setAIConfigs or setWebDAVConfigs is called directly from UI code (e.g. SettingsPanel) with a new list that no longer contains the current active ID, activeAIConfig / activeWebDAVConfig becomes a dangling reference.

autoSync mitigates this specific call-path by syncing active config IDs via settingsResult independently, but any direct caller of these setters does not get that guarantee.

♻️ Proposed fix
-      setAIConfigs: (aiConfigs) => set({ aiConfigs }),
+      setAIConfigs: (aiConfigs) => set((state) => ({
+        aiConfigs,
+        activeAIConfig: aiConfigs.some(c => c.id === state.activeAIConfig)
+          ? state.activeAIConfig
+          : null,
+      })),
-      setWebDAVConfigs: (webdavConfigs) => set({ webdavConfigs }),
+      setWebDAVConfigs: (webdavConfigs) => set((state) => ({
+        webdavConfigs,
+        activeWebDAVConfig: webdavConfigs.some(c => c.id === state.activeWebDAVConfig)
+          ? state.activeWebDAVConfig
+          : null,
+      })),

Also applies to: 259-259

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/store/useAppStore.ts` at line 243, The bulk setters setAIConfigs and
setWebDAVConfigs currently replace the arrays without reconciling
activeAIConfig/activeWebDAVConfig, which can leave dangling IDs; modify
setAIConfigs and setWebDAVConfigs to (1) read the current
activeAIConfig/activeWebDAVConfig from the store, (2) replace the
aiConfigs/webDAVConfigs with the incoming list, and (3) if the existing active
ID is not present in the new list, set the corresponding
activeAIConfig/activeWebDAVConfig to null (matching the behavior in
deleteAIConfig/deleteWebDAVConfig); ensure callers like SettingsPanel and any
autoSync/settingsResult logic keep working by preserving this pointer-check
logic in those setter implementations.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@docker-compose.yml`:
- Around line 14-18: Remove the public host binding "3000:3000" from the service
ports so the backend is not published by default (or scope it to loopback like
"127.0.0.1:3000:3000" for internal-only access), and stop allowing empty secrets
by removing the empty-default syntax for API_SECRET and ENCRYPTION_KEY (do not
use ${API_SECRET:-} / ${ENCRYPTION_KEY:-}); require these env vars to be
provided (or fail/validate at runtime) so the compose defaults cannot start an
unauthenticated instance; expose the port only via a separate override file for
trusted environments.

In `@server/Dockerfile`:
- Around line 8-16: The final Dockerfile stage is running the app as root;
create or reuse a non-root user (e.g., appuser) and drop privileges before
launching the process: after creating the app dir (RUN mkdir -p data) and
copying files, add steps to create the user/group, chown /app and the data
volume (to ensure NODE can write), and then switch to that user with a USER
directive before the CMD ["node","dist/index.js"]; ensure the VOLUME /app/data
and EXPOSE 3000 remain accessible to the non-root user.

In `@server/src/middleware/errorHandler.ts`:
- Around line 11-12: The early-exit in the error-handling middleware is
returning the value of _next(err), which violates the function's void return
type; update the block that checks res.headersSent to call _next(err) as a
standalone statement and then immediately return (e.g., call _next(err);
return;) so the function exits without returning a value — locate the check
around res.headersSent in the errorHandler middleware and change the single-line
"return _next(err)" into two statements referencing _next and res.headersSent.

In `@server/src/routes/configs.ts`:
- Around line 17-47: The route GET '/api/configs/ai' currently uses the query
flag shouldDecrypt (req.query.decrypt) to return plaintext API keys via
decrypt(...), exposing secrets to any authenticated caller; remove or gate this
behavior by enforcing a strict authorization check before enabling decryption
(e.g., require req.user?.isAdmin or a dedicated permission like
req.user.canViewSecrets) and only set shouldDecrypt when that check passes,
otherwise always return masked values from maskApiKey; also consider removing
the decrypt query entirely or requiring an admin-only environment flag (e.g.,
config.allowPlaintextReveal) and ensure decrypt(...) and apiKey values are never
logged or included in error messages.

In `@server/src/routes/proxy.ts`:
- Around line 161-164: The code builds targetUrl by concatenating baseUrl and
path which can produce double slashes; update the logic around baseUrl
(webdavConfig.url) and targetUrl to normalize/trim slashes before concatenation
(e.g., remove a trailing '/' from baseUrl or remove a leading '/' from path, or
use URL constructor) so that targetUrl = normalized baseUrl + '/' + normalized
path always yields a single slash; adjust references in this block that compute
targetUrl and ensure username/password encoding (credentials) is unchanged.
- Around line 86-92: The code calls decrypt(aiConfig.api_key_encrypted as
string, config.encryptionKey) without guarding for a missing/null encrypted key,
causing a crypto error; update the proxy handler to check
aiConfig.api_key_encrypted (and its type) before calling decrypt (e.g., if
null/undefined respond with 400 or 404 and a clear code like
'AI_API_KEY_MISSING'), only call decrypt when the value is a non-empty string,
and include the decrypt call location (decrypt(..., config.encryptionKey)) in
the guarded branch so the outer catch no longer hides this specific error.

In `@server/src/routes/repositories.ts`:
- Around line 109-116: PUT and PATCH write paths are calling JSON.stringify
directly on incoming fields (e.g., repo.topics, repo.ai_tags, repo.ai_platforms,
repo.custom_tags) which can persist scalars or objects and break the list
contract; normalize each of those fields before serializing by coercing to an
array shape: treat null/undefined as [], if value is already an array leave it,
if scalar or non-array object wrap it in [value] (optionally mapping elements to
strings if expected), then call JSON.stringify on that normalized array in the
same code paths that currently call JSON.stringify for these symbols.
- Around line 7-10: parseJsonColumn currently returns whatever JSON.parse
yields, allowing objects or scalars into fields expected to be arrays; change
parseJsonColumn to only return a string[] by parsing the value, verifying
Array.isArray(parsed), and if so mapping each element to a string (e.g., via
String(elem)) and returning that array, otherwise return an empty array—apply
this to the parseJsonColumn function referenced by fields like topics and
ai_tags to ensure a stable response shape.
- Around line 136-137: The route currently does const id =
parseInt(req.params.id); which accepts strings like "12abc"; change it to
explicitly validate req.params.id before parsing and return 400 for invalid ids:
check that req.params.id matches a strict integer pattern (e.g. /^\d+$/) or that
Number.isInteger(Number(req.params.id)) and then parse with
parseInt(req.params.id, 10) to produce id; if validation fails, respond with
res.status(400).json(...) and do not proceed to query the repository. Ensure you
update the logic around the id variable and any subsequent lookup code that uses
id.
- Around line 144-149: The current PATCH mappers for analysis_failed and
subscribed_to_releases use truthy coercion (v ? 1 : 0) which treats strings like
"false" as true; change both mapper functions (analysis_failed and
subscribed_to_releases) to explicitly require booleans by checking typeof v ===
'boolean' and map true -> 1 and false -> 0, and otherwise surface a validation
error (or return an explicit failure/undefined) so non-boolean inputs are
rejected instead of coerced.
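The strict mapping that prompt describes, sketched (helper name hypothetical):

```typescript
// Hypothetical mapper: only genuine booleans reach the 0/1 SQLite columns;
// strings like "false" are rejected instead of being truthy-coerced to 1.
function toSqliteBool(v: unknown): 0 | 1 {
  if (typeof v !== 'boolean') {
    throw new Error('VALIDATION_FAILED: expected a boolean');
  }
  return v ? 1 : 0;
}
```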

In `@src/services/autoSync.ts`:
- Around line 26-28: The quickHash function currently uses JSON.stringify which
is order-sensitive and expensive on every poll; change quickHash to compute a
stable, lightweight fingerprint instead (e.g., for arrays of domain objects
compute and sort a list of "id:updatedAt" strings and join, falling back to
JSON.stringify only for small/unknown shapes), and update callers that pass
repositories/releases to pass this minimal stable representation (you can add a
helper like repoFingerprint to generate the sorted id:updatedAt string and call
quickHash on that string), ensuring quickHash and any new repoFingerprint remain
deterministic and avoid full-object serialization on the 5s tick.
- Around line 164-195: startAutoSync currently overwrites module-level timers
and the subscription, leaking previous intervals and callbacks; before creating
a new subscription or interval, clear any existing debounce timer
(clearTimeout(_debounceTimer)), clear the existing poll interval
(clearInterval(_pollTimer)), and call a stored previous unsubscribe function
(e.g., module-level _unsubscribe) if present; then register the new subscription
via useAppStore.subscribe (assign its return to _unsubscribe), set _pollTimer
with setInterval, and return the new unsubscribe so repeated calls do not leak
prior resources.
- Around line 35-36: syncFromBackend is missing a re-entrancy guard which allows
concurrent executions to race on _lastHash and store updates; add an in-flight
boolean (e.g., _isPullingFromBackend) or extend _isSyncingFromBackend to
early-return at the top of syncFromBackend to prevent re-entry, set it true
before any awaits and false in a finally block, and ensure all existing writes
(setRepositories, setReleases, set... etc.) and the _lastHash mutations happen
only while the guard is held so concurrent runs cannot interleave.
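
The fingerprint and re-entrancy suggestions above can be sketched as follows (the entity shape and function names are assumptions for illustration, not the file's actual symbols):

```typescript
// Stable, order-insensitive fingerprint over "id:updatedAt" pairs, so a
// reordered response from the backend does not register as a change.
interface Entity {
  id: number;
  updatedAt: string;
}

function repoFingerprint(items: Entity[]): string {
  return items.map((r) => `${r.id}:${r.updatedAt}`).sort().join(',');
}

// Re-entrancy guard: an in-flight flag set before any await and cleared in
// finally, so overlapping poll ticks cannot interleave store writes.
let _isPullingFromBackend = false;

async function guardedPull(pull: () => Promise<void>): Promise<void> {
  if (_isPullingFromBackend) return;
  _isPullingFromBackend = true;
  try {
    await pull();
  } finally {
    _isPullingFromBackend = false;
  }
}
```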

In `@src/services/backendAdapter.ts`:
- Around line 127-131: The current decoding uses atob(data.content) which
corrupts multi-byte UTF-8; instead, decode the Base64 into bytes and then UTF-8
decode those bytes (e.g., convert base64 string to a Uint8Array and run through
TextDecoder('utf-8')) before returning the README content — update the branch
handling data.encoding === 'base64' in the code around the await res.json() call
so it returns a proper UTF-8 string for data.content instead of using atob.
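
A minimal sketch of the suggested decode path (the helper name is illustrative; `atob` and `TextDecoder` are globals in browsers and Node 18+):

```typescript
// atob yields one Latin-1 code unit per byte; mapping those units into a
// Uint8Array and running TextDecoder over it restores multi-byte UTF-8.
function decodeBase64Utf8(b64: string): string {
  const bytes = Uint8Array.from(atob(b64), (c) => c.charCodeAt(0));
  return new TextDecoder('utf-8').decode(bytes);
}
```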

In `@src/utils/backendErrors.ts`:
- Around line 1-58: The ERROR_MESSAGES map is missing entries for
SYNC_AI_CONFIGS_FAILED, SYNC_WEBDAV_CONFIGS_FAILED, and INVALID_REQUEST which
causes fallback generic messages; update the ERROR_MESSAGES constant by adding
these three keys with appropriate zh and en messages (e.g.,
SYNC_AI_CONFIGS_FAILED: { zh: '同步 AI 配置失败', en: 'Failed to sync AI configs' },
SYNC_WEBDAV_CONFIGS_FAILED: { zh: '同步 WebDAV 配置失败', en: 'Failed to sync WebDAV
configs' }, INVALID_REQUEST: { zh: '无效的请求', en: 'Invalid request' }) so look for
the ERROR_MESSAGES object in this file and append the new entries in the same
format as the existing error definitions.

---

Nitpick comments:
In `@server/src/routes/configs.ts`:
- Around line 10-14: maskApiKey and maskPassword duplicate the same logic;
consolidate them into a single helper named maskSecret that accepts (key: string
| null | undefined): string and implements the existing masking behavior, then
replace all uses of maskApiKey and maskPassword with maskSecret (update
exports/imports if needed) and remove the redundant function definitions so only
maskSecret remains referenced by the routes and any other callers (e.g., the
other occurrence mentioned).

In `@server/src/routes/sync.ts`:
- Around line 8-12: The maskApiKey function is duplicated; extract it into a
single shared utility module (e.g., create a new module exporting function
maskApiKey) and replace the duplicate implementations in both sync.ts and
configs.ts with imports of that exported function; ensure you preserve the same
signature (key: string | null | undefined): string and behavior, update any
import statements in files that used the old local function to import from the
new utility, and remove the redundant local copies.

In `@server/src/services/proxyService.ts`:
- Around line 27-42: validateUrl currently only checks the hostname string
against BLOCKED_HOSTS and PRIVATE_IP_PATTERNS, which allows hostnames that
resolve to blocked IPs; change validateUrl to be async (e.g., validateUrl ->
validateUrlAsync) and after parsing the URL use a DNS resolution
(dns.promises.lookup or dns.promises.resolve) to get all resolved addresses for
parsed.hostname, then check each resolved IP against BLOCKED_HOSTS/IP literals
and PRIVATE_IP_PATTERNS (and IPv6 equivalents) and throw the same blocked-proxy
Error if any resolved address matches; ensure you handle lookup failures (treat
errors as blocked or surface a clear error) and update any callers to await the
new async function.
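
A sketch of the DNS-aware check, assuming Node's `dns`/`net` modules; the pattern list and names are illustrative (IPv6 bracket handling is omitted for brevity):

```typescript
import { promises as dns } from 'node:dns';
import { isIP } from 'node:net';

// Illustrative private-range patterns; the real list should mirror
// BLOCKED_HOSTS / PRIVATE_IP_PATTERNS in proxyService.
const PRIVATE_IP_PATTERNS = [/^127\./, /^10\./, /^192\.168\./, /^169\.254\./, /^0\.0\.0\.0$/, /^::1$/];

function isBlockedAddress(addr: string): boolean {
  return PRIVATE_IP_PATTERNS.some((p) => p.test(addr));
}

async function validateUrlAsync(rawUrl: string): Promise<void> {
  const parsed = new URL(rawUrl);
  // Literal IPs can be checked directly; hostnames must be resolved first,
  // otherwise a DNS name pointing at 127.0.0.1 slips through the string check.
  const addresses = isIP(parsed.hostname)
    ? [parsed.hostname]
    : (await dns.lookup(parsed.hostname, { all: true })).map((a) => a.address);
  for (const addr of addresses) {
    if (isBlockedAddress(addr)) {
      throw new Error(`Blocked proxy target: ${parsed.hostname} -> ${addr}`);
    }
  }
}
```

Lookup failures should also be treated as blocked (or surfaced clearly) rather than silently allowed.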

In `@src/services/autoSync.ts`:
- Around line 47-91: The current single boolean hasChanges causes all fulfilled
slices to be written even if only one changed; replace this with per-entity
change flags (e.g., reposChanged, releasesChanged, aiChanged, webdavChanged,
settingsChanged) computed by comparing quickHash(...) to _lastHash.* for each of
reposResult, releasesResult, aiResult, webdavResult, settingsResult; only update
_lastHash for that entity when its flag is true and only call the corresponding
state.setX (e.g., state.setRepos, state.setReleases, state.setAi,
state.setWebdav, state.setSettings) for entities whose changed flag is true; you
can still skip all work early if none of the flags are true.

In `@src/services/backendAdapter.ts`:
- Around line 85-103: The mapping in fetchStarredRepos uses many inline type
assertions which is hard to read; define a local interface (e.g.,
GitHubStarredItem { starred_at?: string; repo?: Repository }) and cast the
parsed JSON once to GitHubStarredItem[] right after await res.json(), then map
over that typed array and produce Repository objects (merging repo with
starred_at when present); keep error handling via throwTranslatedError and use
existing symbols fetchStarredRepos, Repository, getAuthHeaders, _backendUrl to
locate and replace the current mapping logic.
- Around line 194-233: The
syncRepositories/fetchRepositories/syncReleases/fetchReleases methods use fetch
with no timeout, so a hung backend can block the UI; add a shared timeout helper
(e.g., createRequestWithTimeout or fetchWithTimeout) that uses AbortController
and a configurable timeout (15–30s) and call it instead of fetch in these
methods, wiring the same getAuthHeaders() and body/URL and ensuring the
AbortController is cleaned up (clearTimeout) and HTTP errors still route to
throwTranslatedError; update all four methods to use this helper so every
backend request has a bounded timeout.

In `@src/store/useAppStore.ts`:
- Line 243: The bulk setters setAIConfigs and setWebDAVConfigs currently replace
the arrays without reconciling activeAIConfig/activeWebDAVConfig, which can
leave dangling IDs; modify setAIConfigs and setWebDAVConfigs to (1) read the
current activeAIConfig/activeWebDAVConfig from the store, (2) replace the
aiConfigs/webDAVConfigs with the incoming list, and (3) if the existing active
ID is not present in the new list, set the corresponding
activeAIConfig/activeWebDAVConfig to null (matching the behavior in
deleteAIConfig/deleteWebDAVConfig); ensure callers like SettingsPanel and any
autoSync/settingsResult logic keep working by preserving this pointer-check
logic in those setter implementations.
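
The pointer-reconciliation logic can be sketched as a pure function (types and names mirror the description above, not the actual store):

```typescript
// Illustrative slice: bulk-replace configs while reconciling the active pointer.
interface AIConfig {
  id: string;
  name: string;
}

interface SliceState {
  aiConfigs: AIConfig[];
  activeAIConfig: string | null;
}

function applySetAIConfigs(state: SliceState, incoming: AIConfig[]): SliceState {
  const stillPresent = incoming.some((c) => c.id === state.activeAIConfig);
  return {
    aiConfigs: incoming,
    // Drop the pointer if the active config vanished, matching deleteAIConfig.
    activeAIConfig: stillPresent ? state.activeAIConfig : null,
  };
}
```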

ℹ️ Review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between bae5c34 and 8faefce.

⛔ Files ignored due to path filters (1)
  • dist/index.html is excluded by !**/dist/**
📒 Files selected for processing (16)
  • docker-compose.yml
  • server/Dockerfile
  • server/src/index.ts
  • server/src/middleware/errorHandler.ts
  • server/src/routes/configs.ts
  • server/src/routes/proxy.ts
  • server/src/routes/releases.ts
  • server/src/routes/repositories.ts
  • server/src/routes/sync.ts
  • server/src/services/crypto.ts
  • server/src/services/proxyService.ts
  • src/App.tsx
  • src/services/autoSync.ts
  • src/services/backendAdapter.ts
  • src/store/useAppStore.ts
  • src/utils/backendErrors.ts
🚧 Files skipped from review as they are similar to previous changes (3)
  • server/src/index.ts
  • server/src/services/crypto.ts
  • server/src/routes/releases.ts

Comment on lines 14 to 18
    ports:
      - "3000:3000"
    environment:
      - API_SECRET=${API_SECRET:-}
      - ENCRYPTION_KEY=${ENCRYPTION_KEY:-}

⚠️ Potential issue | 🟠 Major

Avoid exposing an unauthenticated backend by default.

Lines 14-15 publish the backend port, while line 17 allows an empty API_SECRET. That makes default deployments externally reachable without auth.

🛡️ Safer default diff (internal-only backend)
   backend:
     build: ./server
-    ports:
-      - "3000:3000"
+    expose:
+      - "3000"
     environment:
       - API_SECRET=${API_SECRET:-}
       - ENCRYPTION_KEY=${ENCRYPTION_KEY:-}

If direct host access is needed, expose 3000:3000 via an override file for trusted environments only.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
-    ports:
-      - "3000:3000"
+    expose:
+      - "3000"
     environment:
       - API_SECRET=${API_SECRET:-}
       - ENCRYPTION_KEY=${ENCRYPTION_KEY:-}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docker-compose.yml` around lines 14 - 18, Remove the public host binding
"3000:3000" from the service ports so the backend is not published by default
(or scope it to loopback like "127.0.0.1:3000:3000" for internal-only access),
and stop allowing empty secrets by removing the empty-default syntax for
API_SECRET and ENCRYPTION_KEY (do not use ${API_SECRET:-} /
${ENCRYPTION_KEY:-}); require these env vars to be provided (or fail/validate at
runtime) so the compose defaults cannot start an unauthenticated instance;
expose the port only via a separate override file for trusted environments.

Comment on lines +11 to +12
if (res.headersSent) {
return _next(err);

⚠️ Potential issue | 🟡 Minor

Avoid returning a value from a void function.

_next(err) returns a value, so return _next(err) violates the void return type. Split into two statements to satisfy the lint rule and keep the early exit.

Proposed fix
   if (res.headersSent) {
-    return _next(err);
+    _next(err);
+    return;
   }
📝 Committable suggestion


Suggested change
 if (res.headersSent) {
-  return _next(err);
+  _next(err);
+  return;
 }
🧰 Tools
🪛 Biome (2.4.4)

[error] 12-12: The function should not return a value because its return type is void.

(lint/correctness/noVoidTypeReturn)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/middleware/errorHandler.ts` around lines 11 - 12, The early-exit
in the error-handling middleware is returning the value of _next(err), which
violates the function's void return type; update the block that checks
res.headersSent to call _next(err) as a standalone statement and then
immediately return (e.g., call _next(err); return;) so the function exits
without returning a value — locate the check around res.headersSent in the
errorHandler middleware and change the single-line "return _next(err)" into two
statements referencing _next and res.headersSent.

Comment on lines +17 to +47
router.get('/api/configs/ai', (req, res) => {
  try {
    const db = getDb();
    const shouldDecrypt = req.query.decrypt === 'true';
    const rows = db.prepare('SELECT * FROM ai_configs ORDER BY id ASC').all() as Record<string, unknown>[];
    const configs = rows.map((row) => {
      let decryptedKey = '';
      try {
        if (row.api_key_encrypted && typeof row.api_key_encrypted === 'string') {
          decryptedKey = decrypt(row.api_key_encrypted, config.encryptionKey);
        }
      } catch { /* leave empty */ }
      return {
        id: row.id,
        name: row.name,
        apiType: row.api_type,
        model: row.model,
        baseUrl: row.base_url,
        apiKey: shouldDecrypt ? decryptedKey : maskApiKey(decryptedKey),
        isActive: !!row.is_active,
        customPrompt: row.custom_prompt ?? null,
        useCustomPrompt: !!row.use_custom_prompt,
        concurrency: row.concurrency ?? 1,
      };
    });
    res.json(configs);
  } catch (err) {
    console.error('GET /api/configs/ai error:', err);
    res.status(500).json({ error: 'Failed to fetch AI configs', code: 'FETCH_AI_CONFIGS_FAILED' });
  }
});

⚠️ Potential issue | 🟡 Minor

?decrypt=true exposes plaintext secrets to any authenticated client.

Any request with a valid Bearer token can retrieve decrypted API keys and passwords. If the API_SECRET is shared across multiple users or services, this grants all of them access to all stored secrets. Ensure this is an acceptable trust boundary for your deployment model.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/routes/configs.ts` around lines 17 - 47, The route GET
'/api/configs/ai' currently uses the query flag shouldDecrypt
(req.query.decrypt) to return plaintext API keys via decrypt(...), exposing
secrets to any authenticated caller; remove or gate this behavior by enforcing a
strict authorization check before enabling decryption (e.g., require
req.user?.isAdmin or a dedicated permission like req.user.canViewSecrets) and
only set shouldDecrypt when that check passes, otherwise always return masked
values from maskApiKey; also consider removing the decrypt query entirely or
requiring an admin-only environment flag (e.g., config.allowPlaintextReveal) and
ensure decrypt(...) and apiKey values are never logged or included in error
messages.

Comment on lines +86 to +92
const aiConfig = db.prepare('SELECT * FROM ai_configs WHERE id = ?').get(configId) as Record<string, unknown> | undefined;
if (!aiConfig) {
res.status(404).json({ error: 'AI config not found', code: 'AI_CONFIG_NOT_FOUND' });
return;
}

const apiKey = decrypt(aiConfig.api_key_encrypted as string, config.encryptionKey);

⚠️ Potential issue | 🟡 Minor

Decrypt of api_key_encrypted will throw an unhelpful error if the value is null.

If an AI config exists but has no encrypted key (e.g., it was created without one), aiConfig.api_key_encrypted is null and decrypt(null as string, ...) will throw a confusing crypto error. The outer catch returns a generic "AI proxy failed" message, hiding the real cause.

Consider adding a guard:

Proposed fix
+    if (!aiConfig.api_key_encrypted) {
+      res.status(400).json({ error: 'AI config has no API key', code: 'AI_CONFIG_NOT_FOUND' });
+      return;
+    }
     const apiKey = decrypt(aiConfig.api_key_encrypted as string, config.encryptionKey);
📝 Committable suggestion


Suggested change
 const aiConfig = db.prepare('SELECT * FROM ai_configs WHERE id = ?').get(configId) as Record<string, unknown> | undefined;
 if (!aiConfig) {
   res.status(404).json({ error: 'AI config not found', code: 'AI_CONFIG_NOT_FOUND' });
   return;
 }
+if (!aiConfig.api_key_encrypted) {
+  res.status(400).json({ error: 'AI config has no API key', code: 'AI_CONFIG_NOT_FOUND' });
+  return;
+}
 const apiKey = decrypt(aiConfig.api_key_encrypted as string, config.encryptionKey);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/routes/proxy.ts` around lines 86 - 92, The code calls
decrypt(aiConfig.api_key_encrypted as string, config.encryptionKey) without
guarding for a missing/null encrypted key, causing a crypto error; update the
proxy handler to check aiConfig.api_key_encrypted (and its type) before calling
decrypt (e.g., if null/undefined respond with 400 or 404 and a clear code like
'AI_API_KEY_MISSING'), only call decrypt when the value is a non-empty string,
and include the decrypt call location (decrypt(..., config.encryptionKey)) in
the guarded branch so the outer catch no longer hides this specific error.

Comment on lines +26 to +28
function quickHash(data: unknown): string {
return JSON.stringify(data);
}

⚠️ Potential issue | 🟡 Minor

quickHash via JSON.stringify is order-sensitive and O(n) on every poll cycle.

JSON.stringify serialises the full payload on every 5-second tick. Two concerns:

  1. False positives from property-order variance — if the backend (or any middleware) returns the same object with keys in a different order, JSON.stringify produces a different string, flagging a "change" and writing to the store unnecessarily.
  2. Performance at scale — for large repositories or releases arrays, serialising the full array every 5 seconds (potentially megabytes of JSON) on the main thread could introduce jank.

Consider hashing over a stable, lightweight representation (e.g., sorted IDs + updatedAt timestamps) instead of full serialisation:

function quickHash(data: unknown): string {
  // Full serialisation kept as fallback for small objects; callers
  // responsible for passing a stable, minimal representation.
  return JSON.stringify(data);
}

Or, for repositories specifically:

const repoFingerprint = (repos: Repository[]) =>
  repos.map(r => `${r.id}:${r.updatedAt}`).sort().join(',');
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/autoSync.ts` around lines 26 - 28, The quickHash function
currently uses JSON.stringify which is order-sensitive and expensive on every
poll; change quickHash to compute a stable, lightweight fingerprint instead
(e.g., for arrays of domain objects compute and sort a list of "id:updatedAt"
strings and join, falling back to JSON.stringify only for small/unknown shapes),
and update callers that pass repositories/releases to pass this minimal stable
representation (you can add a helper like repoFingerprint to generate the sorted
id:updatedAt string and call quickHash on that string), ensuring quickHash and
any new repoFingerprint remain deterministic and avoid full-object serialization
on the 5s tick.

Comment on lines 164 to 195
export function startAutoSync(): () => void {
  // 1. Subscribe to local changes → push to backend (2s debounce)
  const unsubscribe = useAppStore.subscribe((state, prevState) => {
    if (_isSyncingFromBackend) return;

    const changed =
      state.repositories !== prevState.repositories ||
      state.releases !== prevState.releases ||
      state.aiConfigs !== prevState.aiConfigs ||
      state.webdavConfigs !== prevState.webdavConfigs ||
      state.activeAIConfig !== prevState.activeAIConfig ||
      state.activeWebDAVConfig !== prevState.activeWebDAVConfig;

    if (!changed) return;

    // Debounce: wait 2s after last change before pushing
    if (_debounceTimer) {
      clearTimeout(_debounceTimer);
    }
    _debounceTimer = setTimeout(() => {
      syncToBackend();
    }, 2000);
  });

  // 2. Poll backend every 5s → pull fresh data for cross-device sync
  _pollTimer = setInterval(() => {
    syncFromBackend();
  }, POLL_INTERVAL);

  console.log('🔄 Auto-sync started (push debounce: 2s, poll: 5s)');
  return unsubscribe;
}

⚠️ Potential issue | 🟠 Major

startAutoSync called more than once leaks the prior poll timer and store subscription.

_pollTimer is a module-level variable. If startAutoSync() is called a second time (e.g., during React Strict Mode double-invocation, HMR, or component remount), the previous setInterval ID is overwritten. The old interval is never cleared and can never be cleared afterward, since its ID is gone. Simultaneously, a second useAppStore.subscribe callback is registered, but startAutoSync returns only the newest unsubscribe function — the first subscription leaks.

This means the poll fires at twice the intended rate and the debounce push fires twice per state change.

🔒 Proposed fix — clear existing timers and subscriptions before re-registering
+// Module-level reference to the current store unsubscribe function
+let _storeUnsubscribe: (() => void) | null = null;
+
 export function startAutoSync(): () => void {
+  // Prevent duplicate timers and subscriptions on re-entry
+  if (_storeUnsubscribe) {
+    _storeUnsubscribe();
+    _storeUnsubscribe = null;
+  }
+  if (_pollTimer) {
+    clearInterval(_pollTimer);
+    _pollTimer = null;
+  }
+  if (_debounceTimer) {
+    clearTimeout(_debounceTimer);
+    _debounceTimer = null;
+  }
+
   const unsubscribe = useAppStore.subscribe((state, prevState) => {
     ...
   });
+  _storeUnsubscribe = unsubscribe;
 
   _pollTimer = setInterval(() => {
     syncFromBackend();
   }, POLL_INTERVAL);
 
   return unsubscribe;
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/autoSync.ts` around lines 164 - 195, startAutoSync currently
overwrites module-level timers and the subscription, leaking previous intervals
and callbacks; before creating a new subscription or interval, clear any
existing debounce timer (clearTimeout(_debounceTimer)), clear the existing poll
interval (clearInterval(_pollTimer)), and call a stored previous unsubscribe
function (e.g., module-level _unsubscribe) if present; then register the new
subscription via useAppStore.subscribe (assign its return to _unsubscribe), set
_pollTimer with setInterval, and return the new unsubscribe so repeated calls do
not leak prior resources.

1. docker-compose.yml: switch the backend port from ports to expose to prevent unauthenticated exposure
2. server/Dockerfile: add USER node so the container runs as a non-root user
3. repositories.ts: parseJsonColumn now always returns an array, preventing invalid types from leaking through
4. repositories.ts: write path normalizes list fields to arrays
5. repositories.ts: boolean fields use strict comparison (=== true || === 1)
6. autoSync.ts: add a re-entrancy lock to syncFromBackend to prevent concurrent races
7. autoSync.ts: startAutoSync clears old timers before re-registering on repeated calls
8. backendAdapter.ts: base64 decoding switched to TextDecoder to support multi-byte UTF-8
@Micah-Zheng
Author

@coderabbitai review

1 similar comment

@coderabbitai

coderabbitai bot commented Feb 25, 2026

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.

1 similar comment


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

♻️ Duplicate comments (9)
server/Dockerfile (1)

8-14: Non-root runtime user is correctly implemented.

RUN mkdir -p data && chown -R node:node /app followed by USER node (lines 13–14) matches the recommended hardening pattern. The VOLUME declaration on line 15 appearing after the chown also ensures the mount-point inherits the correct ownership before Docker's volume machinery takes over.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/Dockerfile` around lines 8 - 14, The Dockerfile correctly switches to
a non-root runtime user: keep the existing RUN mkdir -p data && chown -R
node:node /app and USER node lines as-is, and ensure the VOLUME declaration
remains after the chown so the mount-point inherits the intended ownership; no
code changes required.
src/services/autoSync.ts (4)

168-208: startAutoSync still leaks the previous store subscription on re-entry.

Timers are properly cleaned up (lines 170-177), but the useAppStore.subscribe callback from a prior startAutoSync() call is never unsubscribed. If startAutoSync is called twice (HMR, Strict Mode, remount), the old subscription continues to fire, causing duplicate syncToBackend pushes per state change.

Store a module-level reference to the previous unsubscribe and call it before re-subscribing.

Proposed fix
+let _storeUnsubscribe: (() => void) | null = null;
+
 export function startAutoSync(): () => void {
+  if (_storeUnsubscribe) {
+    _storeUnsubscribe();
+    _storeUnsubscribe = null;
+  }
   if (_pollTimer) {
     clearInterval(_pollTimer);
     _pollTimer = null;
   }
   if (_debounceTimer) {
     clearTimeout(_debounceTimer);
     _debounceTimer = null;
   }

   const unsubscribe = useAppStore.subscribe((state, prevState) => {
     ...
   });
+  _storeUnsubscribe = unsubscribe;

   _pollTimer = setInterval(() => {
     syncFromBackend();
   }, POLL_INTERVAL);

   console.log('🔄 Auto-sync started (push debounce: 2s, poll: 5s)');
   return unsubscribe;
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/autoSync.ts` around lines 168 - 208, The startAutoSync function
currently creates a new subscription via useAppStore.subscribe each call but
never unsubscribes previous subscriptions; add a module-level variable (e.g.,
previousUnsubscribe or _storeUnsubscribe) and, at the top of startAutoSync, if
that var is set call it to unsubscribe and null it before creating the new
subscription; assign the returned unsubscribe from useAppStore.subscribe to that
module-level variable and ensure it is cleared when returning/unmounting so
repeated calls to startAutoSync (HMR/StrictMode/remount) do not leak
subscriptions or cause duplicate syncToBackend calls.

36-39: Re-entrancy guard for syncFromBackend — previous concern addressed.

_isSyncingFromBackendActive at line 37 properly prevents concurrent executions, with the flag reset in the finally block at line 127.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/autoSync.ts` around lines 36 - 39, The re-entrancy guard using
_isSyncingFromBackendActive inside syncFromBackend is correctly implemented and
reset in the finally block; remove the duplicate review comment to avoid
noise—ensure references to _isSyncingFromBackendActive, syncFromBackend, and the
finally block remain unchanged and only the duplicated reviewer note is deleted.

27-29: quickHash still uses full JSON.stringify — prior concern remains.

This was flagged in a previous review. Full serialization on every 5-second poll can cause jank with large datasets and is order-sensitive.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/autoSync.ts` around lines 27 - 29, quickHash currently does a
full JSON.stringify which is expensive and order-sensitive; replace it with a
fast, order-independent fingerprinting strategy: in quickHash(…) detect
primitives and for objects/arrays compute a stable serialization (sort object
keys recursively) or sample/length-based summary, then feed that smaller string
into a fast non-blocking hash (e.g., xxhash/murmur or a lightweight JS rolling
hash) so the polling path (quickHash) avoids full heavyweight serialization and
produces deterministic hashes; update the quickHash implementation and any
callers to use the new stable+fast-hash behavior.

53-59: Deletion propagation gating removed — previous concern addressed.

The length > 0 checks have been removed, so empty arrays from the backend now correctly propagate and overwrite local state.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/autoSync.ts` around lines 53 - 59, Previously empty repo arrays
were blocked by a length check; to ensure deletions from the backend propagate,
keep the new logic but guard against undefined and ensure deterministic hashing:
when handling reposResult in the function containing this block, call
quickHash(reposResult.value.repositories ?? []) so undefined becomes an empty
array, and verify quickHash returns a stable hash for [] (update quickHash
implementation if needed); then compare to _lastHash.repos and update
_lastHash.repos and hasChanges as shown (symbols: reposResult, quickHash,
_lastHash.repos, hasChanges).
server/src/routes/repositories.ts (3)

7-13: parseJsonColumn now validates arrays — previous concern partially addressed.

The Array.isArray check on line 11 is a good improvement. The remaining suggestion from the prior review (filtering elements to string[]) is still applicable for tighter type safety, but this is a reasonable middle ground.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/routes/repositories.ts` around lines 7 - 13, The parseJsonColumn
function currently returns unknown[]; tighten its return type and contents by
making it return string[]: after JSON.parse (inside parseJsonColumn) ensure
parsed is an array with Array.isArray(parsed) then filter its elements to only
strings (e.g., parsed.filter(el => typeof el === 'string')) and return that
filtered array; also update the function signature from parseJsonColumn(value:
unknown): unknown[] to parseJsonColumn(value: unknown): string[] so callers get
the stronger type guarantee.

112-120: Write-path array normalization addressed.

The Array.isArray(repo.x) ? repo.x : [] guards on lines 112, 114, 115, and 118 address the prior concern about persisting non-array shapes. Element-level string filtering (from the prior review) is still absent but this is a reasonable improvement.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/routes/repositories.ts` around lines 112 - 120, The write-path now
guards repo.topics, repo.ai_tags, repo.ai_platforms, and repo.custom_tags with
Array.isArray checks in repositories.ts, but you should also normalize array
elements to stable string values before persisting: in the repository
save/update handler (the code around JSON.stringify(Array.isArray(repo.topics) ?
repo.topics : [])) map/filter each of those arrays to include only non-empty
strings (e.g., arr.filter(Boolean).map(String)) and optionally dedupe, then
JSON.stringify that normalized array so no non-string elements are stored.

139-139: parseInt without validation still present on :id param.

parseInt("12abc") silently parses to 12. This was flagged in a previous review and remains unaddressed.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/routes/repositories.ts` at line 139, The current use of parseInt
on req.params.id (assigning to id) can silently accept values like "12abc";
replace it with a strict validation: first verify req.params.id matches a
digits-only pattern (e.g., /^\d+$/) or use Number and check Number.isInteger
after conversion, return a 400/validation error when the param is invalid, then
safely convert to an integer and assign to id; update the code paths using id
(the id variable in the repositories route handler) to rely on this validated
conversion.
src/services/backendAdapter.ts (1)

128-131: Base64 → UTF-8 decoding fix applied — previous concern addressed.

Lines 129-131 now decode via Uint8Array.from(atob(...)) + TextDecoder, properly handling multi-byte UTF-8.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/backendAdapter.ts` around lines 128 - 131, The base64→UTF-8
decoding has been corrected using atob + Uint8Array.from + TextDecoder, but
there is a duplicate/leftover implementation to remove; in
src/services/backendAdapter.ts locate the block that checks data.encoding ===
'base64' (the code using atob, Uint8Array.from, and new TextDecoder()) and keep
this implementation while removing any duplicate or older base64-decoding
branches elsewhere in the same function (or duplicate blocks) so only this
correct decoder remains; also ensure the runtime environment supports
atob/TextDecoder or replace with a Buffer-based fallback if this code runs in
Node.js.
🧹 Nitpick comments (7)
server/Dockerfile (2)

13-13: Prefer the absolute path in mkdir for resilience against future WORKDIR changes.

mkdir -p data relies implicitly on WORKDIR /app. Using the absolute path makes the intent self-documenting and prevents a subtle breakage if the working directory is ever updated.

♻️ Proposed fix
-RUN mkdir -p data && chown -R node:node /app
+RUN mkdir -p /app/data && chown -R node:node /app
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/Dockerfile` at line 13, The Dockerfile RUN line uses a relative path
"data" which depends on WORKDIR; change the command that creates and chowns the
directory so it uses the absolute path (/app/data) instead of "data" and still
chowns /app (update the RUN instruction that currently reads RUN mkdir -p data
&& chown -R node:node /app to create /app/data explicitly and then run chown -R
node:node /app).

1-1: Pin the base image to a specific version for reproducible builds.

Both stages use node:22-alpine, a floating tag that can silently absorb Node.js patch-level and Alpine OS updates. This risks non-deterministic builds and potential supply-chain issues between runs without any change to the Dockerfile.

📌 Proposed fix: pin to a full semver tag
-FROM node:22-alpine AS build
+FROM node:22.22.0-alpine AS build
 ...
-FROM node:22-alpine
+FROM node:22.22.0-alpine

Alternatively, pin to a specific Alpine variant for even more control:

-FROM node:22-alpine AS build
+FROM node:22.22.0-alpine3.23 AS build

Also applies to: 8-8

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/Dockerfile` at line 1, The Dockerfile currently uses the floating tag
"node:22-alpine" which can lead to non-reproducible builds; change the base
image references (both stages that use node:22-alpine) to a fully-pinned
semver/alpine combo (e.g., node:22.x.y-alpineN.M) so the exact Node.js and
Alpine versions are fixed; update every FROM line using node:22-alpine to the
chosen pinned tag to ensure deterministic builds and repeatability.
src/services/backendAdapter.ts (3)

55-73: getAuthHeaders always sets Content-Type: application/json, even for GET requests.

GET requests issued by fetchRepositories, fetchReleases, fetchAIConfigs, fetchWebDAVConfigs, and fetchSettings pass this.getAuthHeaders() which unconditionally sets Content-Type: application/json. While most servers ignore Content-Type on GET requests, it's unnecessary noise and some strict proxies may complain. Consider splitting auth-only headers from content headers, or only setting Content-Type when a body is present.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/backendAdapter.ts` around lines 55 - 73, getAuthHeaders
currently always adds 'Content-Type: application/json'; change it to return only
authentication headers (Authorization when secret exists) and place
'Content-Type: application/json' only where request bodies are sent. Update
getAuthHeaders to omit Content-Type, and then add Content-Type in the methods
that send a body (e.g., the POST/PUT callers) or create a separate helper like
getAuthAndJsonHeaders used by body-sending methods; ensure callers listed in the
review (fetchRepositories, fetchReleases, fetchAIConfigs, fetchWebDAVConfigs,
fetchSettings) use the auth-only headers so GET requests no longer include
Content-Type.
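A minimal sketch of the suggested split — the helper names and the standalone-function shape are illustrative assumptions, since the PR implements these as class methods:

```typescript
// Auth-only headers, safe for GET requests.
function authHeaders(secret?: string): Record<string, string> {
  return secret ? { Authorization: `Bearer ${secret}` } : {};
}

// Auth + JSON content type, for POST/PUT callers that send a body.
function authJsonHeaders(secret?: string): Record<string, string> {
  return { ...authHeaders(secret), 'Content-Type': 'application/json' };
}
```

GET callers (fetchRepositories and friends) would pass `authHeaders(...)`; body-sending callers pass `authJsonHeaders(...)` alongside their `JSON.stringify` body.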

196-215: No timeout or abort signal on sync/fetch operations.

All data sync and fetch methods (syncRepositories, fetchRepositories, syncReleases, etc.) use bare fetch() without an AbortController or timeout. A hung backend connection will block indefinitely. The init() method correctly uses a 3-second timeout—consider applying a similar pattern (with a longer timeout, e.g. 30s) to these operations, especially since they run on an automated 5-second poll cycle where a stalled request would accumulate.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/backendAdapter.ts` around lines 196 - 215, syncRepositories and
fetchRepositories (and other network methods like syncReleases) use bare fetch()
and can hang; add an AbortController with a ~30s timeout similar to init().
Create an AbortController, pass controller.signal into fetch calls in
syncRepositories and fetchRepositories, start a setTimeout that calls
controller.abort() after 30000ms, clear the timeout after the fetch resolves,
and ensure thrown errors from abort are handled by throwTranslatedError as
before; update getAuthHeaders/body usage unchanged and mirror this pattern in
other sync/fetch methods (e.g., syncReleases) to prevent accumulating stalled
requests.
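The timeout pattern being requested can be factored into one generic wrapper rather than repeated in every sync/fetch method; the helper name and the 30-second default below are assumptions, not the PR's code:

```typescript
// Runs an abortable operation with a deadline. The timer is always
// cleared in finally, so fast responses do not leave timers behind.
async function withTimeout<T>(
  run: (signal: AbortSignal) => Promise<T>,
  ms = 30_000,
): Promise<T> {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), ms);
  try {
    // e.g. run = (signal) => fetch(url, { headers, signal })
    return await run(controller.signal);
  } finally {
    clearTimeout(timer);
  }
}
```

Each sync/fetch method would then wrap its `fetch` call, letting the existing `throwTranslatedError` path catch the abort error like any other failure.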

85-103: Excessive type casting in fetchStarredRepos response mapping.

Lines 98-102 layer multiple as casts on the same object in a single expression, making it fragile and hard to follow. Consider destructuring with a typed interface to improve readability and catch shape mismatches.

Proposed refactor
-    const data = await res.json();
-    return (data as Record<string, unknown>[]).map((item) =>
-      (item as { starred_at?: string; repo?: Repository }).starred_at && (item as { repo?: Repository }).repo
-        ? { ...((item as { repo: Repository }).repo), starred_at: (item as { starred_at: string }).starred_at }
-        : item as unknown as Repository
-    );
+    const data: Array<{ starred_at?: string; repo?: Repository }> = await res.json();
+    return data.map((item) =>
+      item.starred_at && item.repo
+        ? { ...item.repo, starred_at: item.starred_at }
+        : item as unknown as Repository
+    );
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/backendAdapter.ts` around lines 85 - 103, fetchStarredRepos uses
multiple inline type assertions on the response items which is hard to read and
brittle; define a local typed interface (e.g., StarredItem { starred_at?:
string; repo?: Repository }) and cast the parsed JSON to StarredItem[] (or
validate it), then map by destructuring each item ({ starred_at, repo }) and
return repo with starred_at when present (or the item as Repository when repo
missing) instead of chaining many `as` casts; update references in
fetchStarredRepos to use this interface and simple destructuring to improve
readability and type safety.
server/src/routes/repositories.ts (1)

81-133: Bulk PUT with INSERT OR REPLACE silently deletes and re-creates rows.

INSERT OR REPLACE in SQLite drops the existing row and inserts a new one when a conflict occurs. If the repositories table has columns with defaults or triggers not covered by the INSERT column list, those values are lost. This is fine as long as the column list is exhaustive (which it appears to be here), but be aware that any future column additions to the schema that aren't also added to this statement will silently reset to defaults on upsert.

More importantly, for the sync use-case: this endpoint replaces individual rows but does not delete repositories that exist in the DB but are absent from the incoming array. If the intent is full-state sync (backend mirrors local), stale rows will accumulate. Consider whether a "delete where id NOT IN (...)" step is needed inside the transaction.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/routes/repositories.ts` around lines 81 - 133, The PUT handler
router.put('/api/repositories') currently upserts incoming rows with the
prepared stmt and transaction upsert() but never removes DB rows missing from
the incoming repositories array, causing stale records to accumulate; modify the
transaction (the db.transaction block that defines upsert) to perform a
conditional delete after the loop: if repositories is empty delete all rows,
otherwise delete rows where id NOT IN (list of incoming ids) using a
parameterized statement (chunking the IN list if needed to stay under SQLite
bind limits), ensuring the delete runs inside the same transaction so upserts
and deletions are atomic and continue to use the existing stmt and repositories
variable.
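One safe way to add the suggested cleanup — assuming a better-sqlite3-style API, which the PR appears to use — is to compute the stale ids in JS and delete them in chunked `IN (...)` statements. Note that chunking a `NOT IN` list directly, as the prompt loosely suggests, would be incorrect: each chunk's delete would remove ids that a *different* chunk was meant to keep. A sketch of the pure part:

```typescript
// Returns the ids present in the DB but absent from the incoming sync
// payload, split into chunks that stay under SQLite's default
// 999-parameter bind limit. An empty incoming list marks everything stale,
// which matches the "empty payload clears the table" full-sync semantics.
function staleIdChunks(
  existingIds: number[],
  incomingIds: number[],
  chunkSize = 900,
): number[][] {
  const keep = new Set(incomingIds);
  const stale = existingIds.filter((id) => !keep.has(id));
  const chunks: number[][] = [];
  for (let i = 0; i < stale.length; i += chunkSize) {
    chunks.push(stale.slice(i, i + chunkSize));
  }
  return chunks;
}

// Inside the same db.transaction as the upserts (sketch, not PR code):
// const existing = db.prepare('SELECT id FROM repositories').all().map(r => r.id);
// for (const chunk of staleIdChunks(existing, repositories.map(r => r.id))) {
//   const placeholders = chunk.map(() => '?').join(',');
//   db.prepare(`DELETE FROM repositories WHERE id IN (${placeholders})`).run(...chunk);
// }
```

Keeping the delete inside the existing transaction preserves atomicity between upserts and deletions.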
src/services/autoSync.ts (1)

93-120: All fulfilled results are written to the store even if only one field changed.

hasChanges is a single aggregate boolean. When it's true (even if only repos changed), lines 99-120 overwrite all fulfilled results into the store, including unchanged fields. This triggers unnecessary Zustand notifications for subscribers of unchanged slices.

Consider tracking per-field change flags and only calling the corresponding set* methods for fields whose hash actually differed.

Sketch
-    let hasChanges = false;
+    const changed = { repos: false, releases: false, ai: false, webdav: false, settings: false };

     if (reposResult.status === 'fulfilled') {
       const hash = quickHash(reposResult.value.repositories);
       if (hash !== _lastHash.repos) {
         _lastHash.repos = hash;
-        hasChanges = true;
+        changed.repos = true;
       }
     }
     // ... same for other fields ...

-    if (!hasChanges) return;
+    if (!Object.values(changed).some(Boolean)) return;

     _isSyncingFromBackend = true;
     const state = useAppStore.getState();

-    if (reposResult.status === 'fulfilled') {
+    if (reposResult.status === 'fulfilled' && changed.repos) {
       state.setRepositories(reposResult.value.repositories);
     }
     // ... same pattern for other fields ...
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/autoSync.ts` around lines 93 - 120, The code writes every
fulfilled result back into the Zustand store whenever aggregate hasChanges is
true, causing unnecessary subscriptions to fire; change the logic to compute
per-field change flags (e.g., reposChanged, releasesChanged, aiChanged,
webdavChanged, settingsChanged or per-setting flags like
activeAIConfigChanged/activeWebDAVConfigChanged) by comparing hashes/values for
reposResult, releasesResult, aiResult, webdavResult and settingsResult before
mutating state, then only call state.setRepositories, state.setReleases,
state.setAIConfigs, state.setWebDAVConfigs, state.setActiveAIConfig and
state.setActiveWebDAVConfig when their corresponding changed flag is true; keep
the _isSyncingFromBackend toggle as-is but ensure it spans only the actual
updates so unchanged slices do not trigger notifications.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@server/src/routes/repositories.ts`:
- Around line 56-59: The WHERE clause building in repositories.ts uses unescaped
user input for LIKE (see the SQL string, searchPattern variable, and params
array), so % and _ in search will act as wildcards; update the code to escape
backslashes, percent (%) and underscore (_) in the search string (e.g., replace
"\" -> "\\", "%" -> "\%", "_" -> "\_"), then build searchPattern =
`%${escapedSearch}%` and push that to params, and append an ESCAPE clause to the
SQL (e.g., " ... LIKE ? ESCAPE '\\'") so the escaped characters are treated
literally; ensure all five LIKE parameters use the escaped searchPattern.
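The escaping described above is a few lines of pure string work; the helper name is an assumption for illustration:

```typescript
// Backslash-escape the LIKE metacharacters (and backslash itself) so
// user input matches literally. Pair with "... LIKE ? ESCAPE '\'" in SQL.
function escapeLike(input: string): string {
  return input.replace(/[\\%_]/g, (ch) => '\\' + ch);
}

// const pattern = `%${escapeLike(search)}%`;
// sql: "... WHERE name LIKE ? ESCAPE '\\' OR full_name LIKE ? ESCAPE '\\' ..."
```

Escaping the backslash first-class (via the character class) matters: otherwise a user-supplied trailing `\` could neutralize the closing `%` wildcard.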

In `@src/services/backendAdapter.ts`:
- Around line 8-44: The init() method in backendAdapter.ts always probes
'http://localhost:3000/api' which can hit a user's machine in production; update
the probing logic in init() (and the urls array and any use of this._backendUrl)
to only include localhost when running in a development context or when
explicitly configured: make candidate backend URLs configurable via a
constructor parameter or config option, or guard adding
'http://localhost:3000/api' with a check like NODE_ENV === 'development' or
location.hostname === 'localhost', and default to only probing
window.location.origin + '/api' in production; ensure the new configuration is
used when assigning this._backendUrl and preserve the existing timeout/abort
behavior.

---

Duplicate comments:
In `@server/Dockerfile`:
- Around line 8-14: The Dockerfile correctly switches to a non-root runtime
user: keep the existing RUN mkdir -p data && chown -R node:node /app and USER
node lines as-is, and ensure the VOLUME declaration remains after the chown so
the mount-point inherits the intended ownership; no code changes required.

In `@server/src/routes/repositories.ts`:
- Around line 7-13: The parseJsonColumn function currently returns unknown[];
tighten its return type and contents by making it return string[]: after
JSON.parse (inside parseJsonColumn) ensure parsed is an array with
Array.isArray(parsed) then filter its elements to only strings (e.g.,
parsed.filter(el => typeof el === 'string')) and return that filtered array;
also update the function signature from parseJsonColumn(value: unknown):
unknown[] to parseJsonColumn(value: unknown): string[] so callers get the
stronger type guarantee.
- Around line 112-120: The write-path now guards repo.topics, repo.ai_tags,
repo.ai_platforms, and repo.custom_tags with Array.isArray checks in
repositories.ts, but you should also normalize array elements to stable string
values before persisting: in the repository save/update handler (the code around
JSON.stringify(Array.isArray(repo.topics) ? repo.topics : [])) map/filter each
of those arrays to include only non-empty strings (e.g.,
arr.filter(Boolean).map(String)) and optionally dedupe, then JSON.stringify that
normalized array so no non-string elements are stored.
- Line 139: The current use of parseInt on req.params.id (assigning to id) can
silently accept values like "12abc"; replace it with a strict validation: first
verify req.params.id matches a digits-only pattern (e.g., /^\d+$/) or use Number
and check Number.isInteger after conversion, return a 400/validation error when
the param is invalid, then safely convert to an integer and assign to id; update
the code paths using id (the id variable in the repositories route handler) to
rely on this validated conversion.

In `@src/services/autoSync.ts`:
- Around line 168-208: The startAutoSync function currently creates a new
subscription via useAppStore.subscribe each call but never unsubscribes previous
subscriptions; add a module-level variable (e.g., previousUnsubscribe or
_storeUnsubscribe) and, at the top of startAutoSync, if that var is set call it
to unsubscribe and null it before creating the new subscription; assign the
returned unsubscribe from useAppStore.subscribe to that module-level variable
and ensure it is cleared when returning/unmounting so repeated calls to
startAutoSync (HMR/StrictMode/remount) do not leak subscriptions or cause
duplicate syncToBackend calls.
- Around line 36-39: The re-entrancy guard using _isSyncingFromBackendActive
inside syncFromBackend is correctly implemented and reset in the finally block;
remove the duplicate review comment to avoid noise—ensure references to
_isSyncingFromBackendActive, syncFromBackend, and the finally block remain
unchanged and only the duplicated reviewer note is deleted.
- Around line 27-29: quickHash currently does a full JSON.stringify which is
expensive and order-sensitive; replace it with a fast, order-independent
fingerprinting strategy: in quickHash(…) detect primitives and for
objects/arrays compute a stable serialization (sort object keys recursively) or
sample/length-based summary, then feed that smaller string into a fast
non-blocking hash (e.g., xxhash/murmur or a lightweight JS rolling hash) so the
polling path (quickHash) avoids full heavyweight serialization and produces
deterministic hashes; update the quickHash implementation and any callers to use
the new stable+fast-hash behavior.
- Around line 53-59: Previously empty repo arrays were blocked by a length
check; to ensure deletions from the backend propagate, keep the new logic but
guard against undefined and ensure deterministic hashing: when handling
reposResult in the function containing this block, call
quickHash(reposResult.value.repositories ?? []) so undefined becomes an empty
array, and verify quickHash returns a stable hash for [] (update quickHash
implementation if needed); then compare to _lastHash.repos and update
_lastHash.repos and hasChanges as shown (symbols: reposResult, quickHash,
_lastHash.repos, hasChanges).

In `@src/services/backendAdapter.ts`:
- Around line 128-131: The base64→UTF-8 decoding has been corrected using atob +
Uint8Array.from + TextDecoder, but there is a duplicate/leftover implementation
to remove; in src/services/backendAdapter.ts locate the block that checks
data.encoding === 'base64' (the code using atob, Uint8Array.from, and new
TextDecoder()) and keep this implementation while removing any duplicate or
older base64-decoding branches elsewhere in the same function (or duplicate
blocks) so only this correct decoder remains; also ensure the runtime
environment supports atob/TextDecoder or replace with a Buffer-based fallback if
this code runs in Node.js.

---

Nitpick comments:
In `@server/Dockerfile`:
- Line 13: The Dockerfile RUN line uses a relative path "data" which depends on
WORKDIR; change the command that creates and chowns the directory so it uses the
absolute path (/app/data) instead of "data" and still chowns /app (update the
RUN instruction that currently reads RUN mkdir -p data && chown -R node:node
/app to create /app/data explicitly and then run chown -R node:node /app).
- Line 1: The Dockerfile currently uses the floating tag "node:22-alpine" which
can lead to non-reproducible builds; change the base image references (both
stages that use node:22-alpine) to a fully-pinned semver/alpine combo (e.g.,
node:22.x.y-alpineN.M) so the exact Node.js and Alpine versions are fixed;
update every FROM line using node:22-alpine to the chosen pinned tag to ensure
deterministic builds and repeatability.

In `@server/src/routes/repositories.ts`:
- Around line 81-133: The PUT handler router.put('/api/repositories') currently
upserts incoming rows with the prepared stmt and transaction upsert() but never
removes DB rows missing from the incoming repositories array, causing stale
records to accumulate; modify the transaction (the db.transaction block that
defines upsert) to perform a conditional delete after the loop: if repositories
is empty delete all rows, otherwise delete rows where id NOT IN (list of
incoming ids) using a parameterized statement (chunking the IN list if needed to
stay under SQLite bind limits), ensuring the delete runs inside the same
transaction so upserts and deletions are atomic and continue to use the existing
stmt and repositories variable.

In `@src/services/autoSync.ts`:
- Around line 93-120: The code writes every fulfilled result back into the
Zustand store whenever aggregate hasChanges is true, causing unnecessary
subscriptions to fire; change the logic to compute per-field change flags (e.g.,
reposChanged, releasesChanged, aiChanged, webdavChanged, settingsChanged or
per-setting flags like activeAIConfigChanged/activeWebDAVConfigChanged) by
comparing hashes/values for reposResult, releasesResult, aiResult, webdavResult
and settingsResult before mutating state, then only call state.setRepositories,
state.setReleases, state.setAIConfigs, state.setWebDAVConfigs,
state.setActiveAIConfig and state.setActiveWebDAVConfig when their corresponding
changed flag is true; keep the _isSyncingFromBackend toggle as-is but ensure it
spans only the actual updates so unchanged slices do not trigger notifications.

In `@src/services/backendAdapter.ts`:
- Around line 55-73: getAuthHeaders currently always adds 'Content-Type:
application/json'; change it to return only authentication headers
(Authorization when secret exists) and place 'Content-Type: application/json'
only where request bodies are sent. Update getAuthHeaders to omit Content-Type,
and then add Content-Type in the methods that send a body (e.g., the POST/PUT
callers) or create a separate helper like getAuthAndJsonHeaders used by
body-sending methods; ensure callers listed in the review (fetchRepositories,
fetchReleases, fetchAIConfigs, fetchWebDAVConfigs, fetchSettings) use the
auth-only headers so GET requests no longer include Content-Type.
- Around line 196-215: syncRepositories and fetchRepositories (and other network
methods like syncReleases) use bare fetch() and can hang; add an AbortController
with a ~30s timeout similar to init(). Create an AbortController, pass
controller.signal into fetch calls in syncRepositories and fetchRepositories,
start a setTimeout that calls controller.abort() after 30000ms, clear the
timeout after the fetch resolves, and ensure thrown errors from abort are
handled by throwTranslatedError as before; update getAuthHeaders/body usage
unchanged and mirror this pattern in other sync/fetch methods (e.g.,
syncReleases) to prevent accumulating stalled requests.
- Around line 85-103: fetchStarredRepos uses multiple inline type assertions on
the response items which is hard to read and brittle; define a local typed
interface (e.g., StarredItem { starred_at?: string; repo?: Repository }) and
cast the parsed JSON to StarredItem[] (or validate it), then map by
destructuring each item ({ starred_at, repo }) and return repo with starred_at
when present (or the item as Repository when repo missing) instead of chaining
many `as` casts; update references in fetchStarredRepos to use this interface
and simple destructuring to improve readability and type safety.

ℹ️ Review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 8faefce and f12da43.

📒 Files selected for processing (5)
  • docker-compose.yml
  • server/Dockerfile
  • server/src/routes/repositories.ts
  • src/services/autoSync.ts
  • src/services/backendAdapter.ts
🚧 Files skipped from review as they are similar to previous changes (1)
  • docker-compose.yml

1. repositories.ts: escape % and _ wildcards in LIKE search to prevent unintended matches
2. backendAdapter.ts: probe localhost:3000 only in development
@Micah-Zheng
Author

@coderabbitai review

@coderabbitai

coderabbitai bot commented Feb 25, 2026

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.


@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

♻️ Duplicate comments (7)
docker-compose.yml (1)

17-18: ⚠️ Potential issue | 🟠 Major

Require API_SECRET in Compose defaults (still auth-off by default).

Line 17 still allows an empty secret (${API_SECRET:-}), so deployments can come up without API auth unless operators notice and override it.

🔐 Suggested safe-default diff
     environment:
-      - API_SECRET=${API_SECRET:-}
+      - API_SECRET=${API_SECRET:?API_SECRET must be set}
       - ENCRYPTION_KEY=${ENCRYPTION_KEY:-}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docker-compose.yml` around lines 17 - 18, The docker-compose env line allows
an empty API_SECRET via `${API_SECRET:-}`, so deployments can start without an
auth secret; change the expansion to require the variable (e.g. replace
`${API_SECRET:-}` with `${API_SECRET:?API_SECRET must be set}`) so compose fails
if API_SECRET is not provided; apply the same pattern to ENCRYPTION_KEY if you
also want to require it (refer to the API_SECRET and ENCRYPTION_KEY entries in
docker-compose.yml).
server/src/routes/repositories.ts (4)

140-141: ⚠️ Potential issue | 🟡 Minor

Reject malformed :id values with 400 before querying.

Line 140 uses parseInt, so values like "12abc" are accepted as 12, and invalid IDs fall through as 404. Validate the path param strictly and return 400 for invalid input.

Proposed fix
-    const id = parseInt(req.params.id);
+    const idParam = req.params.id;
+    if (!/^\d+$/.test(idParam)) {
+      res.status(400).json({ error: 'Invalid repository id', code: 'INVALID_REPOSITORY_ID' });
+      return;
+    }
+    const id = Number(idParam);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/routes/repositories.ts` around lines 140 - 141, The handler
currently uses parseInt(req.params.id) which accepts malformed values like
"12abc" as 12; change the validation so req.params.id is strictly a positive
integer (e.g. test against /^\d+$/ or parse with Number and confirm
Number.isInteger and string equivalence) and return res.status(400).json(...)
for invalid IDs before any DB/query logic; ensure you keep the existing variable
name id for the parsed integer and leave updates (req.body) handling unchanged
so subsequent code uses the validated id.

69-73: ⚠️ Potential issue | 🟠 Major

Keep COUNT(*) filtering identical to the data query.

Line 70-73 rebuilds search conditions with raw %${search}% and no ESCAPE, while Lines 57-60 use escaped patterns. This can produce mismatched total vs repositories and inconsistent wildcard behavior.

Proposed fix
-    const countSql = search
-      ? 'SELECT COUNT(*) as total FROM repositories WHERE name LIKE ? OR full_name LIKE ? OR description LIKE ? OR ai_summary LIKE ? OR ai_tags LIKE ?'
+    const countSql = search
+      ? "SELECT COUNT(*) as total FROM repositories WHERE name LIKE ? ESCAPE '\\' OR full_name LIKE ? ESCAPE '\\' OR description LIKE ? ESCAPE '\\' OR ai_summary LIKE ? ESCAPE '\\' OR ai_tags LIKE ? ESCAPE '\\'"
       : 'SELECT COUNT(*) as total FROM repositories';
-    const countParams = search ? Array(5).fill(`%${search}%`) : [];
+    const escapedSearch = search ? search.replace(/[%_\\]/g, '\\$&') : '';
+    const countParams = search ? Array(5).fill(`%${escapedSearch}%`) : [];
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/routes/repositories.ts` around lines 69 - 73, The COUNT query
builds its own unescaped LIKE patterns, causing mismatches with the main data
query; update the countSql/countParams logic to use the same escaped search
pattern and WHERE clause construction as the data query (reuse the same
helper/code that produces the patterns or the same variable — e.g., the escaped
likePattern and params used by the repositories SELECT logic), and ensure the
SQL uses the same ESCAPE handling so COUNT(*) filters identically to the
repository rows returned.

7-13: ⚠️ Potential issue | 🟠 Major

Enforce a strict string[] contract for JSON list fields on both read and write paths.

Line 7 returns unknown[], and Lines 113-116/119/145-146/150 serialize arbitrary arrays without element normalization. Non-string payloads can be persisted and then leaked back to clients, breaking response shape guarantees.

Proposed fix
-function parseJsonColumn(value: unknown): unknown[] {
+function parseJsonColumn(value: unknown): string[] {
   if (typeof value !== 'string' || !value) return [];
   try {
-    const parsed = JSON.parse(value);
-    return Array.isArray(parsed) ? parsed : [];
+    const parsed: unknown = JSON.parse(value);
+    if (!Array.isArray(parsed)) return [];
+    return parsed.filter((item): item is string => typeof item === 'string');
   } catch { return []; }
 }
+
+function toJsonStringArray(value: unknown): string {
+  if (!Array.isArray(value)) return '[]';
+  return JSON.stringify(value.filter((item): item is string => typeof item === 'string'));
+}
...
-          JSON.stringify(Array.isArray(repo.topics) ? repo.topics : []),
+          toJsonStringArray(repo.topics),
...
-          JSON.stringify(Array.isArray(repo.ai_tags) ? repo.ai_tags : []),
-          JSON.stringify(Array.isArray(repo.ai_platforms) ? repo.ai_platforms : []),
+          toJsonStringArray(repo.ai_tags),
+          toJsonStringArray(repo.ai_platforms),
...
-          JSON.stringify(Array.isArray(repo.custom_tags) ? repo.custom_tags : []),
+          toJsonStringArray(repo.custom_tags),
...
-      ai_tags: (v) => JSON.stringify(Array.isArray(v) ? v : []),
-      ai_platforms: (v) => JSON.stringify(Array.isArray(v) ? v : []),
+      ai_tags: (v) => toJsonStringArray(v),
+      ai_platforms: (v) => toJsonStringArray(v),
...
-      custom_tags: (v) => JSON.stringify(Array.isArray(v) ? v : []),
+      custom_tags: (v) => toJsonStringArray(v),

Also applies to: 113-116, 119-119, 145-146, 150-150

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/routes/repositories.ts` around lines 7 - 13, The JSON list helper
must enforce a strict string[] contract on read and write: update
parseJsonColumn to return string[] (not unknown[]) by parsing the string,
ensuring the parsed value is an array, then mapping/filtering each element to a
string (e.g., keep existing string elements and convert others via String(...)
or drop null/undefined), and remove the catch-all unknown return; then find the
write/serialize sites that JSON.stringify arrays (the locations referenced in
the review where arrays are persisted) and normalize those arrays to string[]
before serializing (use the same normalization logic or call the updated
parseJsonColumn/normalize function) so only string elements are ever persisted
and returned to clients.

148-153: ⚠️ Potential issue | 🟠 Major

Do not silently coerce non-boolean PATCH flags.

Lines 148 and 153 currently map any non-true/1 input to 0. Inputs like "false" or "0" should be rejected, not coerced, to avoid accidental data corruption.

Proposed fix
+    if ('analysis_failed' in updates && typeof updates.analysis_failed !== 'boolean') {
+      res.status(400).json({ error: 'analysis_failed must be boolean', code: 'INVALID_ANALYSIS_FAILED' });
+      return;
+    }
+    if ('subscribed_to_releases' in updates && typeof updates.subscribed_to_releases !== 'boolean') {
+      res.status(400).json({ error: 'subscribed_to_releases must be boolean', code: 'INVALID_SUBSCRIPTION_FLAG' });
+      return;
+    }
...
-      analysis_failed: (v) => (v === true || v === 1) ? 1 : 0,
+      analysis_failed: (v) => ((v as boolean) ? 1 : 0),
...
-      subscribed_to_releases: (v) => (v === true || v === 1) ? 1 : 0,
+      subscribed_to_releases: (v) => ((v as boolean) ? 1 : 0),
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/routes/repositories.ts` around lines 148 - 153, The inline mappers
for the PATCH flags analysis_failed and subscribed_to_releases currently coerce
any non-true/1 input to 0; change them to validate the incoming value and reject
invalid types instead of silently coercing: accept only boolean true/false or
numeric 1/0, map true/1 -> 1 and false/0 -> 0, and return a validation error
(HTTP 400) when the value is any other type (e.g. strings like "false" or "0");
apply the same validation logic to both analysis_failed and
subscribed_to_releases to avoid accidental data corruption.
src/services/autoSync.ts (2)

27-29: quickHash via JSON.stringify is order-sensitive — previous concern still applies.

This was flagged in a prior review and remains unaddressed. JSON.stringify is sensitive to property ordering, so semantically identical objects returned from the backend with different key orders will produce different hashes, causing unnecessary store writes on every poll.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/autoSync.ts` around lines 27 - 29, quickHash currently uses
JSON.stringify which is order-sensitive and causes identical objects with
different key ordering to produce different hashes; update quickHash to produce
a deterministic hash by either (a) replacing JSON.stringify with a stable
serializer (e.g., fast-json-stable-stringify or json-stable-stringify) and then
hashing the string, or (b) implementing a canonicalization that recursively
sorts object keys before stringifying and then hashing; ensure you change the
quickHash function to use the stable serialization + a short hash (e.g.,
sha1/xxhash) instead of raw JSON to prevent spurious store writes.

168-208: startAutoSync still leaks the previous store subscription on re-entry.

Timers are now correctly cleared on re-entry (lines 170–177), which addresses part of the prior feedback. However, the store subscription returned by useAppStore.subscribe on line 179 is not tracked at module level — if startAutoSync is called a second time (HMR, React Strict Mode, remount), the prior subscription callback remains active and will continue firing debounced syncToBackend calls.

Proposed fix — track and teardown the previous subscription
+let _storeUnsubscribe: (() => void) | null = null;
+
 export function startAutoSync(): () => void {
+  if (_storeUnsubscribe) {
+    _storeUnsubscribe();
+    _storeUnsubscribe = null;
+  }
   if (_pollTimer) {
     clearInterval(_pollTimer);
     _pollTimer = null;
   }
   if (_debounceTimer) {
     clearTimeout(_debounceTimer);
     _debounceTimer = null;
   }

   const unsubscribe = useAppStore.subscribe((state, prevState) => {
     // ...
   });
+  _storeUnsubscribe = unsubscribe;

   _pollTimer = setInterval(() => {
     syncFromBackend();
   }, POLL_INTERVAL);

   console.log('🔄 Auto-sync started (push debounce: 2s, poll: 5s)');
-  return unsubscribe;
+  return () => {
+    _storeUnsubscribe = null;
+    unsubscribe();
+  };
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/autoSync.ts` around lines 168 - 208, The subscription returned
by useAppStore.subscribe is not tracked across calls so previous subscriptions
leak; add a module-level variable (e.g., _storeUnsubscribe) and on startAutoSync
entry teardown it if present (call _storeUnsubscribe() and set it null) before
creating a new subscription, then assign the new unsubscribe to
_storeUnsubscribe and ensure stop/cleanup (wherever you clear timers) also calls
and nulls _storeUnsubscribe; keep returning the new unsubscribe for backward
compatibility.
🧹 Nitpick comments (5)
src/services/backendAdapter.ts (4)

120-140: owner and repo are not URL-encoded before path interpolation.

If owner or repo ever contain characters that are significant in URLs (e.g., #, ?, %, or even / from an unexpected source), the constructed URL will be malformed. While GitHub identifiers are typically safe, defensive encoding is cheap:

Proposed fix
-    const res = await fetch(`${this._backendUrl}/proxy/github/repos/${owner}/${repo}/readme`, {
+    const res = await fetch(`${this._backendUrl}/proxy/github/repos/${encodeURIComponent(owner)}/${encodeURIComponent(repo)}/readme`, {

Apply similarly to getRepositoryReleases (line 146).

Also applies to: 142-156

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/backendAdapter.ts` around lines 120 - 140, The URL for the
backend proxy in getRepositoryReadme (and similarly in getRepositoryReleases) is
built by interpolating owner and repo raw into the path, which can produce
malformed URLs for values containing reserved characters; update the fetch calls
to URL-encode owner and repo using encodeURIComponent when constructing
`${this._backendUrl}/proxy/github/repos/.../${owner}/${repo}/readme` (and the
releases endpoint) so the path segments are safe, leaving the rest of the
request (method, headers, body) unchanged.

58-76: Content-Type: application/json is set for all requests including GET.

getAuthHeaders() unconditionally includes Content-Type: application/json, which is then applied to every request including GETs (e.g., fetchRepositories, fetchSettings). While most servers ignore it, some strict proxies or CORS configurations may respond differently to GET requests carrying a Content-Type header (it can trigger a preflight where one wouldn't otherwise be needed).

Consider splitting the helper, or only attaching Content-Type when a body is present.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/backendAdapter.ts` around lines 58 - 76, getAuthHeaders()
currently always adds 'Content-Type: application/json', causing GET requests
(e.g., fetchRepositories, fetchSettings) to carry that header; remove the
unconditional Content-Type from getAuthHeaders() so it only returns auth-related
headers (Authorization when secret exists). Add a small helper (e.g.,
getJsonHeaders() or a parameter on getAuthHeaders like includeJson) that merges
'Content-Type: application/json' only for callers that send a body (e.g.,
createRepository, updateSettings, any POST/PUT/PATCH methods), ensuring
Authorization is preserved for all requests.
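The suggested split could be sketched as follows. The class and field names (`BackendHeaders`, `backendApiSecret`) are stand-ins for however the adapter actually stores its secret; only `getAuthHeaders`/`getJsonHeaders` come from the review's suggestion.

```typescript
// Sketch: auth-only headers for GETs, JSON content type added only
// for callers that actually send a body.
class BackendHeaders {
  constructor(private backendApiSecret: string | null) {}

  // Safe to attach to GET requests: no Content-Type, so no avoidable preflight.
  getAuthHeaders(): Record<string, string> {
    const headers: Record<string, string> = {};
    if (this.backendApiSecret) {
      headers['Authorization'] = `Bearer ${this.backendApiSecret}`;
    }
    return headers;
  }

  // For POST/PUT/PATCH callers that serialize a JSON body.
  getJsonHeaders(): Record<string, string> {
    return { ...this.getAuthHeaders(), 'Content-Type': 'application/json' };
  }
}
```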

101-105: Heavy type-assertion chain is fragile and hard to follow.

The starred-repos mapping uses deeply nested casts (item as { starred_at?: string; repo?: Repository }, repeated). This works but is brittle — any shape change from the backend proxy will silently produce incorrect data. A small typed guard or intermediate variable would improve clarity.

Cleaner alternative
return (data as Record<string, unknown>[]).map((item) => {
  const wrapped = item as { starred_at?: string; repo?: Repository };
  if (wrapped.starred_at && wrapped.repo) {
    return { ...wrapped.repo, starred_at: wrapped.starred_at };
  }
  return item as unknown as Repository;
});
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/backendAdapter.ts` around lines 101 - 105, The mapping uses
repeated, fragile type assertions for item; replace them with a single
intermediate typed variable and a narrow type-guard check: cast item once to a
local const (e.g., wrapped: { starred_at?: string; repo?: Repository }), check
wrapped.starred_at && wrapped.repo, and then return { ...wrapped.repo,
starred_at: wrapped.starred_at } or the original item cast to Repository
otherwise; update the map callback in backendAdapter (the map over data and the
item handling) to use this clearer approach to avoid the deep assertion chain.

210-218: Hardcoded limit=10000 may silently truncate large collections.

fetchRepositories and fetchReleases request at most 10,000 items. If a user's collection exceeds this limit, the sync will silently drop the remainder without any indication. Consider either paginating or at least logging a warning when total > 10000.

Also applies to: 231-239

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/backendAdapter.ts` around lines 210 - 218, The hardcoded
"limit=10000" in fetchRepositories and fetchReleases risks silently truncating
results; update both methods (fetchRepositories, fetchReleases) to handle
pagination instead of a single large-limit request: call the backend repeatedly
using the API's paging parameters (limit/offset or page) until you've fetched
total items, accumulating results and using throwTranslatedError for failures;
if backend does not support pagination, at minimum fetch the first page and log
a clear warning (processLogger.warn or similar) when the response's total field
is greater than the requested limit so the caller knows results were truncated.
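If the backend supports limit/offset paging, the loop could look like this. The `{ items, total }` response shape and the `fetchAllPages` helper name are assumptions for illustration, not taken from the PR.

```typescript
// Accumulate every page instead of a single limit=10000 request,
// assuming a limit/offset API that reports the collection's total size.
async function fetchAllPages<T>(
  fetchPage: (limit: number, offset: number) => Promise<{ items: T[]; total: number }>,
  pageSize = 500,
): Promise<T[]> {
  const all: T[] = [];
  let offset = 0;
  let total = Infinity;
  while (offset < total) {
    const { items, total: reported } = await fetchPage(pageSize, offset);
    total = reported;
    all.push(...items);
    if (items.length === 0) break; // defensive: avoid spinning on an empty page
    offset += items.length;
  }
  return all;
}
```

fetchRepositories and fetchReleases would then pass their existing request logic in as `fetchPage`, and no collection size can silently truncate the sync.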
src/services/autoSync.ts (1)

99-110: All fulfilled results are written to the store even when only one field changed.

When hasChanges is true (even for a single field), every fulfilled result is written to the store (lines 99–110). Because the data originates from the backend (new object references each time), every set* call replaces the current store reference. This is safe from infinite loops thanks to the _isSyncingFromBackend guard, but it means the Zustand store will emit change notifications for fields whose data hasn't actually changed — downstream selectors and memoized components relying on reference equality will re-render unnecessarily.

Consider guarding each store write with its own hash-changed check:

Sketch
+    const reposChanged = reposResult.status === 'fulfilled' && quickHash(reposResult.value.repositories) !== prevReposHash;
+    // ... similarly for other fields
+
     _isSyncingFromBackend = true;
     const state = useAppStore.getState();
 
-    if (reposResult.status === 'fulfilled') {
+    if (reposResult.status === 'fulfilled' && reposChanged) {
       state.setRepositories(reposResult.value.repositories);
     }
-    if (releasesResult.status === 'fulfilled') {
+    if (releasesResult.status === 'fulfilled' && releasesChanged) {
       state.setReleases(releasesResult.value.releases);
     }
     // ... etc

You could track per-field change flags alongside the hash comparison block above (lines 53–91) to avoid the redundancy.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/autoSync.ts` around lines 99 - 110, The current logic writes
every fulfilled backend result into the Zustand store whenever hasChanges is
true, causing unnecessary reference changes and re-renders; update the code so
each store setter (state.setRepositories, state.setReleases, state.setAIConfigs,
state.setWebDAVConfigs) is only called when that specific field’s hash/contents
actually changed (the same per-field comparison used in the hash comparison
block around hasChanges/_isSyncingFromBackend). Add per-field change flags
during the existing hash checks and guard each corresponding set* call with its
flag so unchanged fields are not replaced while preserving the existing
_isSyncingFromBackend guard and overall flow.

ℹ️ Review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 8faefce and 64b3b9d.

📒 Files selected for processing (5)
  • docker-compose.yml
  • server/Dockerfile
  • server/src/routes/repositories.ts
  • src/services/autoSync.ts
  • src/services/backendAdapter.ts


@coderabbitai coderabbitai bot left a comment


♻️ Duplicate comments (5)
docker-compose.yml (2)

14-15: LGTM — port exposure hardened to internal-only.

Switching from ports to expose means port 3000 is reachable only by services on the same Docker network (i.e., the nginx/frontend proxy), not by the host or the public internet. This directly addresses the previous review finding.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docker-compose.yml` around lines 14 - 15, Change is correct: ensure the
service in docker-compose.yml uses expose: - "3000" (not ports) so 3000 is
internal-only, remove any remaining ports: entries that map 3000 elsewhere, and
confirm the frontend/proxy service is on the same Docker network so it can reach
the exposed port; verify no host port mapping for 3000 remains.

17-18: ⚠️ Potential issue | 🟠 Major

Verify server behavior when ENCRYPTION_KEY is empty — sensitive data may be stored unencrypted.

The PR advertises AES-256 encryption for stored AI and WebDAV credentials. With ENCRYPTION_KEY=${ENCRYPTION_KEY:-}, a plain docker-compose up (the most common path) starts the backend with an empty key. Depending on the server-side implementation in server/src/, this could mean:

  • Sensitive credentials are stored in plaintext, or
  • A deterministic/trivially weak key is derived from the empty string, or
  • The process crashes at first use.

API_SECRET being optional is an explicit design decision and is fine. ENCRYPTION_KEY is different — the system's confidentiality guarantee depends on it.

Consider either rejecting startup when ENCRYPTION_KEY is unset/empty (server-side guard), or at minimum documenting the plaintext-fallback behavior clearly so operators understand the risk.

Run the following to check how the server handles an empty/missing ENCRYPTION_KEY:

#!/bin/bash
# Description: Find where ENCRYPTION_KEY is read and how an empty value is handled.
rg -n "ENCRYPTION_KEY" --type ts -A 5 -B 2
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docker-compose.yml` around lines 17 - 18, The docker-compose allows
ENCRYPTION_KEY to be empty which can lead to plaintext storage or weak
encryption; update the server startup to explicitly validate
process.env.ENCRYPTION_KEY (the code paths that call initEncryption /
getEncryptionKey / encryptCredentials / decryptCredentials) and fail fast with a
clear error and non-zero exit if the value is missing/empty, or alternatively
implement and document an explicit, logged plaintext-fallback policy; add the
check in the central bootstrap/config loader (where environment vars are read)
so the server never proceeds using an empty key.
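A fail-fast guard in the bootstrap could be as small as the sketch below; the function name and the trimming behavior are illustrative choices, not taken from the PR's `server/src/` code.

```typescript
// Minimal fail-fast check for the server bootstrap: refuse to start
// rather than silently store credentials with an empty encryption key.
function requireEncryptionKey(env: Record<string, string | undefined>): string {
  const key = env.ENCRYPTION_KEY?.trim();
  if (!key) {
    throw new Error(
      'ENCRYPTION_KEY is required: refusing to start with credentials stored unencrypted',
    );
  }
  return key;
}
```

Called once with `process.env` before the encryption service initializes, this turns the plaintext-fallback risk into an immediate, explicit startup error.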
src/services/backendAdapter.ts (1)

91-337: ⚠️ Potential issue | 🟠 Major

Add a shared timeout wrapper for all backend requests.

Only init() is time-bounded right now; other request paths can hang indefinitely and stall sync/polling flows.

⏱️ Proposed refactor (apply to all request paths)
+  private async fetchWithTimeout(
+    input: RequestInfo | URL,
+    init?: RequestInit,
+    timeoutMs = 15000,
+  ): Promise<Response> {
+    const controller = new AbortController();
+    const timeoutId = setTimeout(() => controller.abort(), timeoutMs);
+    try {
+      return await fetch(input, { ...init, signal: controller.signal });
+    } finally {
+      clearTimeout(timeoutId);
+    }
+  }

   async init(): Promise<void> {
@@
-          const res = await fetch(`${baseUrl}/health`, {
-            signal: controller.signal,
-          });
+          const res = await this.fetchWithTimeout(`${baseUrl}/health`, {}, 3000);
@@
-    const res = await fetch(`${this._backendUrl}/repositories?limit=10000`, {
+    const res = await this.fetchWithTimeout(`${this._backendUrl}/repositories?limit=10000`, {
       headers: this.getAuthHeaders()
     });

Then replace the remaining direct fetch(...) calls similarly.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/backendAdapter.ts` around lines 91 - 337, The codebase lacks a
shared timeout for backend HTTP calls causing potential hangs; introduce a
single helper (e.g., a method named requestWithTimeout or fetchWithTimeout on
the BackendAdapter class) that accepts the same parameters as fetch plus a
timeout (ms) and races fetch against a timeout AbortController, throwing a
translated error on timeout, then replace direct fetch calls in functions like
getRepositoryReadme, getRepositoryReleases, checkRateLimit, proxyAIRequest,
proxyWebDAV, syncRepositories, fetchRepositories, syncReleases, fetchReleases,
syncAIConfigs, fetchAIConfigs, syncWebDAVConfigs, fetchWebDAVConfigs,
syncSettings, fetchSettings, exportData, importData, and checkHealth to call
requestWithTimeout with a sensible default timeout (configurable) instead of
calling fetch directly.
src/services/autoSync.ts (2)

135-158: ⚠️ Potential issue | 🟡 Minor

Refresh _lastHash after successful push to avoid redundant pull writes.

After a successful syncToBackend(), hashes remain stale, so the next poll can treat just-pushed data as changed.

🧩 Proposed fix
     if (failures.length > 0) {
       console.warn(`⚠️ Synced to backend with ${failures.length} error(s):`, failures.map(f => (f as PromiseRejectedResult).reason));
     } else {
       console.log('✅ Synced to backend');
+      _lastHash.repos = quickHash(state.repositories);
+      _lastHash.releases = quickHash(state.releases);
+      _lastHash.ai = quickHash(state.aiConfigs);
+      _lastHash.webdav = quickHash(state.webdavConfigs);
+      _lastHash.settings = quickHash({
+        activeAIConfig: state.activeAIConfig,
+        activeWebDAVConfig: state.activeWebDAVConfig,
+      });
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/autoSync.ts` around lines 135 - 158, syncToBackend currently
leaves the module-level _lastHash stale after a successful push so the next poll
may treat just-pushed data as changed; update _lastHash when all backend sync
promises succeed by recomputing the current state hash from
useAppStore.getState() (or existing hash utility used elsewhere) and assigning
it to _lastHash inside syncToBackend (e.g., after the "✅ Synced to backend"
branch), and do this only when failures.length === 0 so we don’t mask real
remote changes.

13-14: ⚠️ Potential issue | 🟠 Major

startAutoSync() can still leak prior store subscriptions on re-entry.

Timers are reset, but previous useAppStore.subscribe handlers are not centrally tracked/cleaned before creating a new one.

♻️ Proposed fix
 // Polling timer for pull-from-backend
 let _pollTimer: ReturnType<typeof setInterval> | null = null;
+let _storeUnsubscribe: (() => void) | null = null;

 export function startAutoSync(): () => void {
   // Guard: if already running, stop previous instance first
+  if (_storeUnsubscribe) {
+    _storeUnsubscribe();
+    _storeUnsubscribe = null;
+  }
   if (_pollTimer) {
@@
   const unsubscribe = useAppStore.subscribe((state, prevState) => {
@@
   });
+  _storeUnsubscribe = unsubscribe;
@@
 export function stopAutoSync(unsubscribe: () => void): void {
@@
-  unsubscribe();
+  if (_storeUnsubscribe) {
+    _storeUnsubscribe();
+    _storeUnsubscribe = null;
+  } else {
+    unsubscribe();
+  }
   console.log('🔄 Auto-sync stopped');
 }

Also applies to: 168-208, 213-223

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/autoSync.ts` around lines 13 - 14, startAutoSync() currently
resets timers but does not remove previous store subscriptions created via
useAppStore.subscribe, causing subscription leaks on re-entry; modify
startAutoSync and stopAutoSync to track and call unsubscribe functions returned
by useAppStore.subscribe (e.g., storeSubUnsubscribe or an array like
_storeUnsubscribes) before creating new subscriptions, ensure stopAutoSync
invokes all tracked unsubscribers and clears the tracking variable, and update
startAutoSync to call stopAutoSync (or clear existing unsubscribers) prior to
registering new useAppStore.subscribe handlers so no duplicate subscriptions
remain.
🧹 Nitpick comments (1)
server/Dockerfile (1)

8-17: Runtime stage LGTM — non-root hardening properly applied.

USER node at line 14 with chown -R node:node /app at line 13 fully addresses the previous review finding. VOLUME ownership is correct because /app/data is created and chowned before the VOLUME instruction.

Optional micro-optimisation: adding --chown=node:node directly on the COPY --from=build lines (10–12) would eliminate the separate chown layer, but the current approach is functionally equivalent.

🔧 Optional: inline ownership on COPY instructions
-COPY --from=build /app/dist ./dist
-COPY --from=build /app/node_modules ./node_modules
-COPY --from=build /app/package.json ./
-RUN mkdir -p data && chown -R node:node /app
+COPY --from=build --chown=node:node /app/dist ./dist
+COPY --from=build --chown=node:node /app/node_modules ./node_modules
+COPY --from=build --chown=node:node /app/package.json ./
+RUN mkdir -p /app/data && chown node:node /app/data
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/Dockerfile` around lines 8 - 17, The runtime stage is correct but you
can micro-optimize image layers by applying ownership during the COPYs instead
of running a separate chown layer; update the three COPY --from=build
instructions (the lines that copy ./dist, ./node_modules, and ./package.json) to
include --chown=node:node so they produce files owned by node, then remove the
explicit RUN chown -R node:node /app line while keeping USER node and VOLUME
/app/data as-is.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Duplicate comments:
In `@docker-compose.yml`:
- Around line 14-15: Change is correct: ensure the service in docker-compose.yml
uses expose: - "3000" (not ports) so 3000 is internal-only, remove any remaining
ports: entries that map 3000 elsewhere, and confirm the frontend/proxy service
is on the same Docker network so it can reach the exposed port; verify no host
port mapping for 3000 remains.
- Around line 17-18: The docker-compose allows ENCRYPTION_KEY to be empty which
can lead to plaintext storage or weak encryption; update the server startup to
explicitly validate process.env.ENCRYPTION_KEY (the code paths that call
initEncryption / getEncryptionKey / encryptCredentials / decryptCredentials) and
fail fast with a clear error and non-zero exit if the value is missing/empty, or
alternatively implement and document an explicit, logged plaintext-fallback
policy; add the check in the central bootstrap/config loader (where environment
vars are read) so the server never proceeds using an empty key.

In `@src/services/autoSync.ts`:
- Around line 135-158: syncToBackend currently leaves the module-level _lastHash
stale after a successful push so the next poll may treat just-pushed data as
changed; update _lastHash when all backend sync promises succeed by recomputing
the current state hash from useAppStore.getState() (or existing hash utility
used elsewhere) and assigning it to _lastHash inside syncToBackend (e.g., after
the "✅ Synced to backend" branch), and do this only when failures.length === 0
so we don’t mask real remote changes.
- Around line 13-14: startAutoSync() currently resets timers but does not remove
previous store subscriptions created via useAppStore.subscribe, causing
subscription leaks on re-entry; modify startAutoSync and stopAutoSync to track
and call unsubscribe functions returned by useAppStore.subscribe (e.g.,
storeSubUnsubscribe or an array like _storeUnsubscribes) before creating new
subscriptions, ensure stopAutoSync invokes all tracked unsubscribers and clears
the tracking variable, and update startAutoSync to call stopAutoSync (or clear
existing unsubscribers) prior to registering new useAppStore.subscribe handlers
so no duplicate subscriptions remain.
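The unsubscribe-tracking pattern described above, reduced to a runnable sketch. The store here is a stand-in that mimics zustand's subscribe-returns-unsubscribe API; all names are illustrative, not the PR's code:

```typescript
// Module-level registry of unsubscribers so restarting auto-sync never leaks
// old store subscriptions. `subscribe` returns an unsubscribe fn, like zustand.
type Listener = () => void;

function makeStore() {
  const listeners = new Set<Listener>();
  return {
    subscribe(fn: Listener): () => void {
      listeners.add(fn);
      return () => listeners.delete(fn);
    },
    emit() { listeners.forEach(fn => fn()); },
    count: () => listeners.size,
  };
}

const store = makeStore();
let _storeUnsubscribes: Array<() => void> = [];

function startAutoSync(onChange: Listener): void {
  stopAutoSync(); // clear previous subscriptions before re-subscribing
  _storeUnsubscribes.push(store.subscribe(onChange));
}

function stopAutoSync(): void {
  _storeUnsubscribes.forEach(u => u());
  _storeUnsubscribes = [];
}
```

Because `startAutoSync` drains the registry first, calling it repeatedly (React Strict Mode, HMR) leaves exactly one live subscription.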

In `@src/services/backendAdapter.ts`:
- Around line 91-337: The codebase lacks a shared timeout for backend HTTP calls
causing potential hangs; introduce a single helper (e.g., a method named
requestWithTimeout or fetchWithTimeout on the BackendAdapter class) that accepts
the same parameters as fetch plus a timeout (ms) and races fetch against a
timeout AbortController, throwing a translated error on timeout, then replace
direct fetch calls in functions like getRepositoryReadme, getRepositoryReleases,
checkRateLimit, proxyAIRequest, proxyWebDAV, syncRepositories,
fetchRepositories, syncReleases, fetchReleases, syncAIConfigs, fetchAIConfigs,
syncWebDAVConfigs, fetchWebDAVConfigs, syncSettings, fetchSettings, exportData,
importData, and checkHealth to call requestWithTimeout with a sensible default
timeout (configurable) instead of calling fetch directly.
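A minimal sketch of the suggested helper, racing `fetch` against an `AbortController` timeout. The injectable `fetchImpl` parameter, default timeout, and error message are assumptions added for testability, not the PR's actual signature:

```typescript
// Wraps fetch with a timeout: aborts the request after timeoutMs and surfaces
// a distinguishable timeout error instead of hanging indefinitely.
async function fetchWithTimeout(
  input: RequestInfo | URL,
  init: RequestInit = {},
  timeoutMs = 30_000,
  fetchImpl: typeof fetch = fetch,
): Promise<Response> {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    return await fetchImpl(input, { ...init, signal: controller.signal });
  } catch (err) {
    if (controller.signal.aborted) {
      throw new Error(`Backend request timed out after ${timeoutMs}ms`);
    }
    throw err; // genuine network/HTTP failure, not a timeout
  } finally {
    clearTimeout(timer);
  }
}
```

Each adapter method then calls `fetchWithTimeout` in place of `fetch`, so a stalled backend surfaces as a translated timeout error rather than a hung promise.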

---

Nitpick comments:
In `@server/Dockerfile`:
- Around line 8-17: The runtime stage is correct but you can micro-optimize
image layers by applying ownership during the COPYs instead of running a
separate chown layer; update the three COPY --from=build instructions (the lines
that copy ./dist, ./node_modules, and ./package.json) to include
--chown=node:node so they produce files owned by node, then remove the explicit
RUN chown -R node:node /app line while keeping USER node and VOLUME /app/data
as-is.

ℹ️ Review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 8faefce and 64b3b9d.

📒 Files selected for processing (5)
  • docker-compose.yml
  • server/Dockerfile
  • server/src/routes/repositories.ts
  • src/services/autoSync.ts
  • src/services/backendAdapter.ts
🚧 Files skipped from review as they are similar to previous changes (1)
  • server/src/routes/repositories.ts

- backendAdapter.ts: add a fetchWithTimeout helper method; all fetch calls now use a 30s timeout
- autoSync.ts: update _lastHash after syncToBackend so the next poll does not spuriously overwrite
- backendErrors.ts: add the SYNC_AI_CONFIGS_FAILED/SYNC_WEBDAV_CONFIGS_FAILED/INVALID_REQUEST error codes
@Micah-Zheng
Author

@coderabbitai review

@coderabbitai

coderabbitai bot commented Feb 25, 2026

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

♻️ Duplicate comments (1)
src/services/autoSync.ts (1)

177-217: ⚠️ Potential issue | 🟠 Major

Repeated startAutoSync() calls can leak previous store subscriptions.

Timers are cleared on restart, but previously registered useAppStore.subscribe callbacks are not centrally tracked/cleaned before re-subscribing.

💡 Proposed fix
+let _storeUnsubscribe: (() => void) | null = null;
+
 export function startAutoSync(): () => void {
+  if (_storeUnsubscribe) {
+    _storeUnsubscribe();
+    _storeUnsubscribe = null;
+  }
   if (_pollTimer) {
     clearInterval(_pollTimer);
     _pollTimer = null;
   }
   if (_debounceTimer) {
     clearTimeout(_debounceTimer);
     _debounceTimer = null;
   }

   const unsubscribe = useAppStore.subscribe((state, prevState) => {
     ...
   });
+  _storeUnsubscribe = unsubscribe;

   _pollTimer = setInterval(() => {
     syncFromBackend();
   }, POLL_INTERVAL);

   console.log('🔄 Auto-sync started (push debounce: 2s, poll: 5s)');
   return unsubscribe;
 }

 export function stopAutoSync(unsubscribe: () => void): void {
   if (_debounceTimer) {
     clearTimeout(_debounceTimer);
     _debounceTimer = null;
   }
   if (_pollTimer) {
     clearInterval(_pollTimer);
     _pollTimer = null;
   }
-  unsubscribe();
+  if (_storeUnsubscribe) {
+    _storeUnsubscribe();
+    _storeUnsubscribe = null;
+  } else {
+    unsubscribe();
+  }
   console.log('🔄 Auto-sync stopped');
 }

Also applies to: 222-232

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/autoSync.ts` around lines 177 - 217, startAutoSync currently
creates a new useAppStore.subscribe handler each call without unsubscribing
previous ones; add a module-level variable (e.g., _storeUnsubscribe) and inside
startAutoSync, if _storeUnsubscribe exists call it and set it to null before
creating a new subscription, then assign the new subscribe return to
_storeUnsubscribe and ensure any cleanup path (when stopping timers or on
return) clears _storeUnsubscribe as well; reference startAutoSync, the local
unsubscribe variable returned by useAppStore.subscribe, and the poll/debounce
cleanup code to centralize and call the previous unsubscribe so subscriptions
are not leaked.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/services/autoSync.ts`:
- Around line 50-110: The current logic sets hasChanges when any slice's hash
differs and then blindly applies all fulfilled payloads and refreshes all
_lastHash entries, which can overwrite local unsynced slices after partial push
failures; change to track changed slices individually (e.g., reposChanged,
releasesChanged, aiChanged, webdavChanged, settingsChanged) only when quickHash
differs and the corresponding promise is fulfilled, then when applying to the
store call only the matching setters (state.setRepositories, state.setReleases,
state.setAIConfigs, state.setWebDAVConfigs, etc.) for those changed slices, and
update each _lastHash.* only after that specific slice was successfully
applied/synced so failed pushes do not cause other slices to roll back. Ensure
_isSyncingFromBackend is set around per-slice updates and that you do not return
early in a way that prevents per-slice handling.
- Around line 135-171: The syncToBackend function can run overlapping pushes;
add an in-flight push guard similar to the existing _isSyncingFromBackend by
introducing and checking a flag (e.g., _isPushingToBackend) at the start of
syncToBackend, early-return if true, set it to true before awaiting the
Promise.allSettled call, and ensure it is always reset to false in a finally
block so concurrent debounced calls do not overlap or commit out-of-order
snapshots; update references inside syncToBackend and any callers as needed.
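The in-flight push guard the comment asks for can be sketched as below; `pushAll` stands in for the PR's `Promise.allSettled` push batch, and the names are illustrative:

```typescript
// Module-level flag checked on entry and always cleared in finally, so
// overlapping debounced pushes are skipped and a throwing push cannot wedge it.
let _isPushingToBackend = false;

async function syncToBackend(pushAll: () => Promise<void>): Promise<boolean> {
  if (_isPushingToBackend) return false; // overlapping call: skip
  _isPushingToBackend = true;
  try {
    await pushAll();
    return true;
  } finally {
    _isPushingToBackend = false; // always reset, even if pushAll throws
  }
}
```

The flag is set synchronously before the first `await`, so a second debounced call that fires mid-push returns immediately instead of committing an out-of-order snapshot.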

---

Duplicate comments:
In `@src/services/autoSync.ts`:
- Around line 177-217: startAutoSync currently creates a new
useAppStore.subscribe handler each call without unsubscribing previous ones; add
a module-level variable (e.g., _storeUnsubscribe) and inside startAutoSync, if
_storeUnsubscribe exists call it and set it to null before creating a new
subscription, then assign the new subscribe return to _storeUnsubscribe and
ensure any cleanup path (when stopping timers or on return) clears
_storeUnsubscribe as well; reference startAutoSync, the local unsubscribe
variable returned by useAppStore.subscribe, and the poll/debounce cleanup code
to centralize and call the previous unsubscribe so subscriptions are not leaked.


📥 Commits

Reviewing files that changed from the base of the PR and between 64b3b9d and 23a1d40.

📒 Files selected for processing (3)
  • src/services/autoSync.ts
  • src/services/backendAdapter.ts
  • src/utils/backendErrors.ts

- autoSync.ts: startAutoSync() now cleans up the previous store subscription when called repeatedly, preventing leaks
- autoSync.ts: syncFromBackend/syncToBackend now track changes per slice, so a partial failure does not affect other slices
@Micah-Zheng
Author

@coderabbitai review

@coderabbitai

coderabbitai bot commented Feb 25, 2026

✅ Actions performed

Review triggered.


@Micah-Zheng
Author

@coderabbitai review

@coderabbitai

coderabbitai bot commented Feb 25, 2026

✅ Actions performed

Review triggered.


- Add _isPushingToBackend to prevent overlapping concurrent pushes
- stopAutoSync now prefers the module-level _storeUnsubscribe for cleanup
- Fix a missing closing brace in syncToBackend
@Micah-Zheng
Author

@coderabbitai review

@coderabbitai

coderabbitai bot commented Feb 25, 2026

✅ Actions performed

Review triggered.



@coderabbitai coderabbitai bot left a comment


♻️ Duplicate comments (1)
src/services/autoSync.ts (1)

43-46: ⚠️ Potential issue | 🟠 Major

Prevent pull/push interleaving; stale pulls can overwrite newer pushes.

At Line 143, syncToBackend can still run while a pull is in-flight (_isSyncingFromBackendActive === true), and syncFromBackend can also run during push. This allows races on _lastHash and out-of-order store application.

🔧 Proposed fix (serialize pull/push + queue pending push)
 // Prevent overlapping pushes to backend
 let _isPushingToBackend = false;
+let _hasPendingPush = false;

 export async function syncFromBackend(): Promise<void> {
-  if (!backend.isAvailable || _isSyncingFromBackendActive) return;
+  if (!backend.isAvailable || _isSyncingFromBackendActive || _isPushingToBackend) return;

   _isSyncingFromBackendActive = true;
   try {
     ...
   } finally {
     _isSyncingFromBackend = false;
     _isSyncingFromBackendActive = false;
+    if (_hasPendingPush) {
+      _hasPendingPush = false;
+      void syncToBackend();
+    }
   }
 }

 export async function syncToBackend(): Promise<void> {
   if (!backend.isAvailable) return;
+  if (_isSyncingFromBackendActive) {
+    _hasPendingPush = true;
+    return;
+  }
   if (_isSyncingFromBackend) return;
   if (_isPushingToBackend) return;

   _isPushingToBackend = true;
+  _hasPendingPush = false;
   try {
     ...

Also applies to: 131-134, 141-147

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/autoSync.ts` around lines 43 - 46, The current logic allows
syncToBackend and syncFromBackend to run concurrently causing races on _lastHash
and store application; modify the synchronization so push and pull are
serialized: treat _isSyncingFromBackendActive as a global mutex for both
syncFromBackend and syncToBackend (check-and-set at each entry, return/queue if
busy), add a small in-memory pending push queue (e.g., pendingPushes array) that
syncToBackend appends to when a push is requested while a sync is active, and
after the active sync finishes (in the finally/cleanup of either syncFromBackend
or syncToBackend) drain the pendingPushes and run syncToBackend sequentially
until empty; update references to _lastHash handling so only the draining
syncToBackend updates it and ensure both syncFromBackend and syncToBackend
respect the mutex to prevent interleaving.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Duplicate comments:
In `@src/services/autoSync.ts`:
- Around line 43-46: The current logic allows syncToBackend and syncFromBackend
to run concurrently causing races on _lastHash and store application; modify the
synchronization so push and pull are serialized: treat
_isSyncingFromBackendActive as a global mutex for both syncFromBackend and
syncToBackend (check-and-set at each entry, return/queue if busy), add a small
in-memory pending push queue (e.g., pendingPushes array) that syncToBackend
appends to when a push is requested while a sync is active, and after the active
sync finishes (in the finally/cleanup of either syncFromBackend or
syncToBackend) drain the pendingPushes and run syncToBackend sequentially until
empty; update references to _lastHash handling so only the draining
syncToBackend updates it and ensure both syncFromBackend and syncToBackend
respect the mutex to prevent interleaving.


📥 Commits

Reviewing files that changed from the base of the PR and between 8b193e8 and 9595fa8.

📒 Files selected for processing (1)
  • src/services/autoSync.ts

- Block syncToBackend from running concurrently while syncFromBackend is in flight
- Add a _hasPendingPush queue mechanism: a queued push runs automatically once the pull completes
- Fixes the Major issue from CodeRabbit's ninth review round
@Micah-Zheng
Author

@coderabbitai review

@coderabbitai

coderabbitai bot commented Feb 25, 2026

✅ Actions performed

Review triggered.



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

♻️ Duplicate comments (1)
src/services/autoSync.ts (1)

35-37: quickHash via JSON.stringify is still order-sensitive and O(n) on every poll tick.

The previous review flagged this — JSON.stringify will produce different strings for semantically identical objects whose keys arrive in a different order, causing false-positive "changed" detections and redundant store writes. For large repositories/releases arrays serialised every 5 seconds, this is also a main-thread cost worth addressing.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/autoSync.ts` around lines 35 - 37, quickHash currently uses
JSON.stringify which is order-sensitive and reserialises the entire payload each
tick; replace quickHash with a stable, order-insensitive, and faster approach:
deterministically serialize objects by sorting object keys and sorting arrays by
a stable identifier (e.g., id/name) before hashing, and use a fast
non-cryptographic streaming hash (e.g., xxhash/farmhash) instead of raw
JSON.stringify; additionally, if possible implement per-item caching (store
individual item hashes and only re-hash items that changed) to avoid O(n) work
every poll; locate and update the quickHash function to perform key-sorting +
array-stable-sort + streaming hash and add per-item hash caching where quickHash
is used.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/services/autoSync.ts`:
- Around line 133-142: The finally block in syncFromBackend is missing its
closing brace — close the finally block before the function's closing brace so
the scope is correct; specifically, after resetting _isSyncingFromBackend and
_isSyncingFromBackendActive and handling the _hasPendingPush branch, add the
missing `}` to terminate the finally block in the syncFromBackend function so
subsequent exports (export const ..., export function ..., etc.) are back at top
level.
- Around line 203-269: When starting or stopping auto-sync, reset the in-flight
state flags so an interrupted or double-invoked lifecycle doesn't permanently
block syncing: in startAutoSync() clear or set _isPushingToBackend = false,
_isSyncingFromBackendActive = false, and _hasPendingPush = false before (or
immediately after) clearing timers/subscriptions; do the same resets in
stopAutoSync() before returning/clearing the unsubscribe path so any pending or
active sync state is cleared when auto-sync is restarted or stopped.

---

Duplicate comments:
In `@src/services/autoSync.ts`:
- Around line 35-37: quickHash currently uses JSON.stringify which is
order-sensitive and reserialises the entire payload each tick; replace quickHash
with a stable, order-insensitive, and faster approach: deterministically
serialize objects by sorting object keys and sorting arrays by a stable
identifier (e.g., id/name) before hashing, and use a fast non-cryptographic
streaming hash (e.g., xxhash/farmhash) instead of raw JSON.stringify;
additionally, if possible implement per-item caching (store individual item
hashes and only re-hash items that changed) to avoid O(n) work every poll;
locate and update the quickHash function to perform key-sorting +
array-stable-sort + streaming hash and add per-item hash caching where quickHash
is used.


📥 Commits

Reviewing files that changed from the base of the PR and between 9595fa8 and 7a2bcdd.

📒 Files selected for processing (1)
  • src/services/autoSync.ts

- Fix Critical: a missing closing brace on syncFromBackend's finally block broke the scope
- Fix Minor: startAutoSync/stopAutoSync now reset all in-flight state flags
- Prevents repeated calls from permanently blocking sync
@Micah-Zheng
Author

@coderabbitai review

@coderabbitai

coderabbitai bot commented Feb 25, 2026

✅ Actions performed

Review triggered.



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (2)
src/services/autoSync.ts (2)

35-37: quickHash still uses JSON.stringify — order-sensitive and O(n) on every poll.

This was flagged in a prior review. JSON.stringify can produce different output for semantically identical objects if key order varies, causing false-positive change detection. For large arrays polled every 5 seconds, full serialization may also be expensive on the main thread.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/autoSync.ts` around lines 35 - 37, quickHash currently uses
JSON.stringify which is order-sensitive and expensive; replace it with a
deterministic, faster hashing approach by implementing a stable traversal that
feeds a streaming hash (e.g., Node's crypto.createHash or a fast native hash
like xxhash) instead of serializing the whole object: recursively traverse
values in quickHash, for objects iterate keys in sorted order, for arrays hash
elements in sequence, and update the hash via hash.update(...) so the result is
stable across key order changes and avoids building large intermediate strings;
alternatively plug in a proven stable-hash utility but ensure the function name
quickHash is updated to use the streaming/deterministic implementation.
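A deterministic, key-order-insensitive fingerprint along the lines the prompt describes could look like this — FNV-1a fed by a sorted-key traversal. This is a sketch under those assumptions, not the PR's implementation:

```typescript
// FNV-1a over a deterministic traversal: object keys are visited in sorted
// order and array elements in sequence, so key-order differences no longer
// change the fingerprint, and no large intermediate string is built.
function stableHash(data: unknown): string {
  let h = 0x811c9dc5; // FNV-1a 32-bit offset basis
  const mix = (s: string) => {
    for (let i = 0; i < s.length; i++) {
      h ^= s.charCodeAt(i);
      h = Math.imul(h, 0x01000193); // FNV 32-bit prime
    }
  };
  const walk = (v: unknown): void => {
    if (Array.isArray(v)) {
      mix('[');
      v.forEach(walk);
      mix(']');
    } else if (v !== null && typeof v === 'object') {
      mix('{');
      for (const k of Object.keys(v as object).sort()) {
        mix(k + ':');
        walk((v as Record<string, unknown>)[k]);
      }
      mix('}');
    } else {
      mix(String(v) + '|');
    }
  };
  walk(data);
  return (h >>> 0).toString(16);
}
```

Arrays intentionally stay order-sensitive here; sorting them by a stable id before hashing (as the prompt suggests) would be a further refinement.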

218-221: ⚠️ Potential issue | 🟠 Major

_isSyncingFromBackend is not reset in startAutoSync — subscription pushes can be permanently suppressed.

Lines 219–221 reset three of the four in-flight flags, but _isSyncingFromBackend (the flag checked by the subscription callback on line 224) is omitted. If startAutoSync is re-invoked while a pull's store-write phase is active (e.g., React Strict Mode, HMR), the flag stays true and every subsequent store change is silently ignored by the subscription.

stopAutoSync correctly resets all four flags (line 276), so this is an oversight in startAutoSync.

🐛 Proposed fix
   // Reset in-flight state flags to prevent permanent sync blocking
+  _isSyncingFromBackend = false;
   _isPushingToBackend = false;
   _isSyncingFromBackendActive = false;
   _hasPendingPush = false;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/autoSync.ts` around lines 218 - 221, startAutoSync fails to
reset the _isSyncingFromBackend flag, causing subscription pushes to be
suppressed; update startAutoSync to reset _isSyncingFromBackend along with the
existing _isPushingToBackend, _isSyncingFromBackendActive, and _hasPendingPush
flags so the subscription callback (which checks _isSyncingFromBackend) will
resume handling store changes; mirror the behavior used in stopAutoSync to
ensure all four in-flight flags are cleared when starting auto-sync.
🧹 Nitpick comments (2)
src/services/autoSync.ts (2)

258-272: stopAutoSync parameter unsubscribe is redundant with _storeUnsubscribe.

startAutoSync already stores the subscription in _storeUnsubscribe (line 244). stopAutoSync checks _storeUnsubscribe first (line 267) and only falls back to the parameter (line 271). In normal usage the parameter is never reached, making the API surface misleading — callers must thread through a value that's ignored.

Consider either removing the parameter (using only the module-level reference) or removing _storeUnsubscribe and relying solely on the caller-provided handle.

♻️ Option: parameterless stop
-export function stopAutoSync(unsubscribe: () => void): void {
+export function stopAutoSync(): void {
   if (_debounceTimer) {
     clearTimeout(_debounceTimer);
     _debounceTimer = null;
   }
   if (_pollTimer) {
     clearInterval(_pollTimer);
     _pollTimer = null;
   }
   if (_storeUnsubscribe) {
     _storeUnsubscribe();
     _storeUnsubscribe = null;
-  } else {
-    unsubscribe();
   }
   // Reset in-flight state flags
   _isPushingToBackend = false;
   _isSyncingFromBackendActive = false;
   _isSyncingFromBackend = false;
   _hasPendingPush = false;
   console.log('🔄 Auto-sync stopped');
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/autoSync.ts` around lines 258 - 272, The stopAutoSync function’s
unsubscribe parameter is redundant because startAutoSync stores the unsubscribe
in the module-level _storeUnsubscribe; remove the unsubscribe parameter from
stopAutoSync and update its implementation to always call and clear
_storeUnsubscribe (and the debounce/poll timers) instead of conditionally
falling back to the parameter; also update any callers to call stopAutoSync()
with no arguments and remove the unused parameter from its signature where
referenced.

148-198: syncToBackend silently drops calls when _isPushingToBackend is true — queued changes may be lost.

If a debounce-triggered push arrives while a prior push is still in-flight (line 156), it returns without queuing. Unlike the pull-during-push case (line 151–153 which sets _hasPendingPush), there's no equivalent _hasPendingPushAgain mechanism. If no further store change occurs, the latest local state never reaches the backend until the next user interaction or the poll cycle round-trips the stale data.

In practice the push is fast and the 2s debounce makes overlap unlikely, so this is low-risk — but adding the same queuing pattern used for _hasPendingPush would close the gap.

♻️ Sketch — queue a re-push when overlapping
+let _hasPendingRePush = false;
+
 export async function syncToBackend(): Promise<void> {
   if (!backend.isAvailable) return;
   if (_isSyncingFromBackendActive) {
     _hasPendingPush = true;
     return;
   }
   if (_isSyncingFromBackend) return;
-  if (_isPushingToBackend) return;
+  if (_isPushingToBackend) {
+    _hasPendingRePush = true;
+    return;
+  }
   ...
   } finally {
     _isPushingToBackend = false;
+    if (_hasPendingRePush) {
+      _hasPendingRePush = false;
+      void syncToBackend();
+    }
   }
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/autoSync.ts` around lines 148 - 198, When syncToBackend sees
_isPushingToBackend is true it currently just returns and drops the request;
change that early-return to set _hasPendingPush = true before returning, and in
the finally block after clearing _isPushingToBackend check _hasPendingPush: if
true, clear it and re-invoke syncToBackend() (or schedule it) so the queued push
runs after the in-flight push finishes; update references to syncToBackend,
_isPushingToBackend, and _hasPendingPush accordingly.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/services/autoSync.ts`:
- Around line 44-142: syncFromBackend currently updates the per-slice _lastHash
entries (e.g. _lastHash.repos, .releases, .ai, .webdav, .settings) before
applying the corresponding store writes, which can desync the fingerprint if a
state setter (state.setRepositories, state.setReleases, state.setAIConfigs,
state.setWebDAVConfigs, state.setActiveAIConfig, state.setActiveWebDAVConfig)
throws; move each slice's _lastHash update to immediately after its successful
setter call so the hash only changes when the store write completed, and if a
setter throws, leave that slice's _lastHash untouched and handle/log the error
(do not pre-commit the hash).

---

Duplicate comments:
In `@src/services/autoSync.ts`:
- Around line 35-37: quickHash currently uses JSON.stringify which is
order-sensitive and expensive; replace it with a deterministic, faster hashing
approach by implementing a stable traversal that feeds a streaming hash (e.g.,
Node's crypto.createHash or a fast native hash like xxhash) instead of
serializing the whole object: recursively traverse values in quickHash, for
objects iterate keys in sorted order, for arrays hash elements in sequence, and
update the hash via hash.update(...) so the result is stable across key order
changes and avoids building large intermediate strings; alternatively plug in a
proven stable-hash utility but ensure the function name quickHash is updated to
use the streaming/deterministic implementation.
- Around line 218-221: startAutoSync fails to reset the _isSyncingFromBackend
flag, causing subscription pushes to be suppressed; update startAutoSync to
reset _isSyncingFromBackend along with the existing _isPushingToBackend,
_isSyncingFromBackendActive, and _hasPendingPush flags so the subscription
callback (which checks _isSyncingFromBackend) will resume handling store
changes; mirror the behavior used in stopAutoSync to ensure all four in-flight
flags are cleared when starting auto-sync.

---

Nitpick comments:
In `@src/services/autoSync.ts`:
- Around line 258-272: The stopAutoSync function’s unsubscribe parameter is
redundant because startAutoSync stores the unsubscribe in the module-level
_storeUnsubscribe; remove the unsubscribe parameter from stopAutoSync and update
its implementation to always call and clear _storeUnsubscribe (and the
debounce/poll timers) instead of conditionally falling back to the parameter;
also update any callers to call stopAutoSync() with no arguments and remove the
unused parameter from its signature where referenced.
- Around line 148-198: When syncToBackend sees _isPushingToBackend is true it
currently just returns and drops the request; change that early-return to set
_hasPendingPush = true before returning, and in the finally block after clearing
_isPushingToBackend check _hasPendingPush: if true, clear it and re-invoke
syncToBackend() (or schedule it) so the queued push runs after the in-flight
push finishes; update references to syncToBackend, _isPushingToBackend, and
_hasPendingPush accordingly.


📥 Commits

Reviewing files that changed from the base of the PR and between 7a2bcdd and c0f3dfc.

📒 Files selected for processing (1)
  • src/services/autoSync.ts

- startAutoSync now resets all four in-flight state flags
- In syncFromBackend, _lastHash is now updated only after the store setter succeeds
- Prevents a setter exception from desyncing the fingerprint from the actual store state
@Micah-Zheng
Author

@coderabbitai review

@coderabbitai

coderabbitai bot commented Feb 25, 2026

✅ Actions performed

Review triggered.



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (1)
src/services/autoSync.ts (1)

35-37: quickHash still uses full JSON.stringify — order-sensitive and O(n) per poll tick.

This was raised in a previous review and remains unresolved. JSON.stringify produces different strings for structurally identical objects whose keys are in different order, and it serialises the full payload on every 5-second tick.

♻️ Lightweight stable fingerprint for array slices
-function quickHash(data: unknown): string {
-  return JSON.stringify(data);
+type Identifiable = { id: string | number; updatedAt?: string | number };
+
+function quickHash(data: unknown): string {
+  if (Array.isArray(data) && data.length > 0 && 'id' in (data[0] as object)) {
+    return (data as Identifiable[])
+      .map(r => `${r.id}:${r.updatedAt ?? ''}`)
+      .sort()
+      .join(',');
+  }
+  // Fallback for small/non-array payloads (settings, configs)
+  return JSON.stringify(data);
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/autoSync.ts` around lines 35 - 37, quickHash currently does a
full JSON.stringify (order-sensitive and O(n) each 5s tick); replace it with a
lightweight, order-insensitive stable fingerprint: implement quickHash to detect
primitives vs objects/arrays, and for objects iterate keys in sorted order (or
pick a deterministic subset of keys) and for arrays hash length plus a few
representative elements (e.g., first/last/N samples) to keep it O(1) or O(k) per
tick; feed the deterministic string fragments into a small, fast
non-cryptographic hash (FNV/murmur/Node's crypto.createHash with xxhash if
available) and return that short hex string so structurally-equal payloads
produce the same fingerprint and per-tick cost is bounded.
🧹 Nitpick comments (3)
src/services/autoSync.ts (3)

258-261: Returned unsubscribe bypasses timer cleanup if called directly.

startAutoSync returns the store's unsubscribe function. A caller who invokes it directly — rather than through stopAutoSync — only unsubscribes from the store; _pollTimer, _debounceTimer, and the in-flight flags are left running/set. The JSDoc could clarify that the returned function should only be passed to stopAutoSync, or startAutoSync could return a comprehensive teardown closure.

♻️ Return a self-contained teardown closure
-  console.log('🔄 Auto-sync started (push debounce: 2s, poll: 5s)');
-  return unsubscribe;
+  console.log('🔄 Auto-sync started (push debounce: 2s, poll: 5s)');
+  // Return a teardown that also clears timers — pass to stopAutoSync or call directly.
+  return () => stopAutoSync(unsubscribe);
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/autoSync.ts` around lines 258 - 261, startAutoSync currently
returns the store's unsubscribe directly which leaves internal timers and
in-flight flags running if called; change startAutoSync to return a
self-contained teardown closure that (1) clears/invalidates _pollTimer and
_debounceTimer, (2) resets any in-flight flags used by the module (e.g.
_pushInFlight, _pollInFlight), (3) calls the original store unsubscribe, and (4)
optionally logs the shutdown; update or clarify the JSDoc for
startAutoSync/stopAutoSync to state callers should use this returned teardown
(or stopAutoSync) rather than calling the raw store unsubscribe.

155-163: _isSyncingFromBackend guard at line 162 is unreachable dead code.

_isSyncingFromBackend is only ever set true at line 105, which is inside syncFromBackend's try block — a scope already guarded by _isSyncingFromBackendActive = true (line 47). The two flags are always true together, so by the time control reaches line 162, line 158 has already returned. This creates a misleading implication that there is a state where _isSyncingFromBackend is true while _isSyncingFromBackendActive is false.

♻️ Remove the unreachable guard
   // If a pull is in-flight, queue this push for after pull completes
   if (_isSyncingFromBackendActive) {
     _hasPendingPush = true;
     return;
   }
-  if (_isSyncingFromBackend) return;
   if (_isPushingToBackend) return;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/autoSync.ts` around lines 155 - 163, The guard "if
(_isSyncingFromBackend) return;" in syncToBackend is unreachable because
_isSyncingFromBackend is always set together with _isSyncingFromBackendActive in
syncFromBackend; remove the redundant _isSyncingFromBackend check from
syncToBackend (or consolidate the two flags into a single syncing-from-backend
flag used consistently across syncFromBackend and syncToBackend) and update any
places that read/write _isSyncingFromBackend or _isSyncingFromBackendActive so
only one source of truth (e.g., _isSyncingFromBackendActive) controls the
in-flight pull state.

266-280: unsubscribe parameter is always silently ignored when _storeUnsubscribe is set.

The else branch (line 278) is only reached if _storeUnsubscribe is null, which means startAutoSync was never called or stopAutoSync is being called a second time. In the normal lifecycle the parameter is dead. Consider removing it (or making it optional) to avoid giving callers the impression their value is used.

♻️ Drop the redundant parameter
-export function stopAutoSync(unsubscribe: () => void): void {
+export function stopAutoSync(): void {
   if (_debounceTimer) {
     clearTimeout(_debounceTimer);
     _debounceTimer = null;
   }
   if (_pollTimer) {
     clearInterval(_pollTimer);
     _pollTimer = null;
   }
   if (_storeUnsubscribe) {
     _storeUnsubscribe();
     _storeUnsubscribe = null;
-  } else {
-    unsubscribe();
   }

This pairs with the optional refactor above (returning a stopAutoSync closure from startAutoSync).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/autoSync.ts` around lines 266 - 280, The stopAutoSync function
currently accepts an unsubscribe parameter that is ignored whenever the
module-level _storeUnsubscribe is set; either remove the parameter from
stopAutoSync or make it optional and only call it when _storeUnsubscribe is
null—update stopAutoSync (the function signature and its single call/usage
sites) so it no longer misleads callers: either delete the unsubscribe argument
and calls that pass it, or change the signature to stopAutoSync(unsubscribe?: ()
=> void) and retain the existing else branch that invokes unsubscribe() only
when _storeUnsubscribe is null; ensure tests and any callers of
startAutoSync/stopAutoSync are updated accordingly.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Duplicate comments:
In `@src/services/autoSync.ts`:
- Around line 35-37: quickHash currently does a full JSON.stringify
(order-sensitive and O(n) each 5s tick); replace it with a lightweight,
order-insensitive stable fingerprint: implement quickHash to detect primitives
vs objects/arrays, and for objects iterate keys in sorted order (or pick a
deterministic subset of keys) and for arrays hash length plus a few
representative elements (e.g., first/last/N samples) to keep it O(1) or O(k) per
tick; feed the deterministic string fragments into a small, fast
non-cryptographic hash (FNV/murmur/Node's crypto.createHash with xxhash if
available) and return that short hex string so structurally-equal payloads
produce the same fingerprint and per-tick cost is bounded.
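As a rough illustration of that suggestion, here is a hedged sketch of an order-insensitive, bounded-cost fingerprint built on FNV-1a. The names (`quickHash`, `fnv1a`) and the array-sampling strategy are assumptions for illustration, not the PR's actual code:

```typescript
// fnv1a: small, fast non-cryptographic 32-bit hash over a string.
function fnv1a(s: string): string {
  let h = 0x811c9dc5;
  for (let i = 0; i < s.length; i++) {
    h ^= s.charCodeAt(i);
    h = Math.imul(h, 0x01000193) >>> 0;
  }
  return h.toString(16).padStart(8, '0');
}

function quickHash(value: unknown): string {
  // Primitives: hash the type tag plus the string form.
  if (value === null || typeof value !== 'object') {
    return fnv1a(typeof value + ':' + String(value));
  }
  if (Array.isArray(value)) {
    // Arrays: hash length plus a few representative samples (first, middle,
    // last) so cost stays bounded per array; changes outside the sampled
    // elements are deliberately missed in exchange for O(1) work.
    const n = value.length;
    const samples = [value[0], value[Math.floor(n / 2)], value[n - 1]];
    return fnv1a('arr:' + n + ':' + samples.map(quickHash).join(','));
  }
  // Objects: iterate keys in sorted order so key insertion order is irrelevant.
  const obj = value as Record<string, unknown>;
  const parts = Object.keys(obj).sort().map(k => k + '=' + quickHash(obj[k]));
  return fnv1a('obj:' + parts.join('|'));
}
```

Structurally equal payloads then map to the same short hex string regardless of key order, and a full order-sensitive `JSON.stringify` over the whole repository list is avoided on every 5-second tick.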


ℹ️ Review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between c0f3dfc and 894c189.

📒 Files selected for processing (1)
  • src/services/autoSync.ts

Comment on lines +109 to +135
    if (changed.repos && reposResult.status === 'fulfilled') {
      state.setRepositories(reposResult.value.repositories);
      _lastHash.repos = hashes.repos;
    }
    if (changed.releases && releasesResult.status === 'fulfilled') {
      state.setReleases(releasesResult.value.releases);
      _lastHash.releases = hashes.releases;
    }
    if (changed.ai && aiResult.status === 'fulfilled') {
      state.setAIConfigs(aiResult.value);
      _lastHash.ai = hashes.ai;
    }
    if (changed.webdav && webdavResult.status === 'fulfilled') {
      state.setWebDAVConfigs(webdavResult.value);
      _lastHash.webdav = hashes.webdav;
    }
    // Sync active selections from settings
    if (changed.settings && settingsResult.status === 'fulfilled') {
      const settings = settingsResult.value;
      if (typeof settings.activeAIConfig === 'string' || settings.activeAIConfig === null) {
        state.setActiveAIConfig(settings.activeAIConfig as string | null);
      }
      if (typeof settings.activeWebDAVConfig === 'string' || settings.activeWebDAVConfig === null) {
        state.setActiveWebDAVConfig(settings.activeWebDAVConfig as string | null);
      }
      _lastHash.settings = hashes.settings;
    }

⚠️ Potential issue | 🟡 Minor

A thrown setter exception aborts all remaining slice writes in the same poll cycle.

If state.setRepositories() (line 110) throws, the subsequent setReleases, setAIConfigs, setWebDAVConfigs, and setActiveAIConfig/setActiveWebDAVConfig calls are all skipped. Their hashes are not committed, so they retry on the next poll — but the store is left partially written until then (e.g., repos updated, releases stale). Wrapping each slice write independently prevents this.

🛡️ Proposed fix — isolate each slice write
-    // Update store then commit hash — hash only changes if setter succeeds
-    if (changed.repos && reposResult.status === 'fulfilled') {
-      state.setRepositories(reposResult.value.repositories);
-      _lastHash.repos = hashes.repos;
-    }
-    if (changed.releases && releasesResult.status === 'fulfilled') {
-      state.setReleases(releasesResult.value.releases);
-      _lastHash.releases = hashes.releases;
-    }
-    if (changed.ai && aiResult.status === 'fulfilled') {
-      state.setAIConfigs(aiResult.value);
-      _lastHash.ai = hashes.ai;
-    }
-    if (changed.webdav && webdavResult.status === 'fulfilled') {
-      state.setWebDAVConfigs(webdavResult.value);
-      _lastHash.webdav = hashes.webdav;
-    }
-    if (changed.settings && settingsResult.status === 'fulfilled') {
-      const settings = settingsResult.value;
-      if (typeof settings.activeAIConfig === 'string' || settings.activeAIConfig === null) {
-        state.setActiveAIConfig(settings.activeAIConfig as string | null);
-      }
-      if (typeof settings.activeWebDAVConfig === 'string' || settings.activeWebDAVConfig === null) {
-        state.setActiveWebDAVConfig(settings.activeWebDAVConfig as string | null);
-      }
-      _lastHash.settings = hashes.settings;
-    }
+    // Update store then commit hash — hash only changes if setter succeeds.
+    // Each slice is isolated so a failure in one does not skip the others.
+    if (changed.repos && reposResult.status === 'fulfilled') {
+      try { state.setRepositories(reposResult.value.repositories); _lastHash.repos = hashes.repos; }
+      catch (e) { console.error('Failed to apply repos from backend:', e); }
+    }
+    if (changed.releases && releasesResult.status === 'fulfilled') {
+      try { state.setReleases(releasesResult.value.releases); _lastHash.releases = hashes.releases; }
+      catch (e) { console.error('Failed to apply releases from backend:', e); }
+    }
+    if (changed.ai && aiResult.status === 'fulfilled') {
+      try { state.setAIConfigs(aiResult.value); _lastHash.ai = hashes.ai; }
+      catch (e) { console.error('Failed to apply AI configs from backend:', e); }
+    }
+    if (changed.webdav && webdavResult.status === 'fulfilled') {
+      try { state.setWebDAVConfigs(webdavResult.value); _lastHash.webdav = hashes.webdav; }
+      catch (e) { console.error('Failed to apply WebDAV configs from backend:', e); }
+    }
+    if (changed.settings && settingsResult.status === 'fulfilled') {
+      try {
+        const settings = settingsResult.value;
+        if (typeof settings.activeAIConfig === 'string' || settings.activeAIConfig === null) {
+          state.setActiveAIConfig(settings.activeAIConfig as string | null);
+        }
+        if (typeof settings.activeWebDAVConfig === 'string' || settings.activeWebDAVConfig === null) {
+          state.setActiveWebDAVConfig(settings.activeWebDAVConfig as string | null);
+        }
+        _lastHash.settings = hashes.settings;
+      } catch (e) { console.error('Failed to apply settings from backend:', e); }
+    }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/autoSync.ts` around lines 109 - 135, The writes for each slice
must be isolated so one thrown setter doesn't abort the rest: wrap each block
that checks changed.* and result.status (for repos, releases, ai, webdav,
settings) in its own try/catch, call the corresponding setter
(state.setRepositories, state.setReleases, state.setAIConfigs,
state.setWebDAVConfigs, state.setActiveAIConfig, state.setActiveWebDAVConfig)
inside the try, and only assign the matching
_lastHash.repos/_lastHash.releases/_lastHash.ai/_lastHash.webdav/_lastHash.settings
after the setter succeeds; in the catch log or swallow the error but do not
rethrow so other slices continue to be applied in the same poll cycle.
