
schedules.batchUpdate: tight max(100) caps reject realistic payloads (AI curve × many days) #424

@ng

Description


Problem

schedules.batchUpdate validates each delete/create array with .max(100):

https://github.com/…/src/server/routers/schedules.ts#L569 (approx)

deletes: z.object({
  temperature: z.array(idSchema).max(100).default([]),
  power: z.array(idSchema).max(100).default([]),
  alarm: z.array(idSchema).max(100).default([]),
}).default(...)
creates: z.object({
  temperature: z.array(...).max(100).default([]),
  ...
})

The iOS client hits this cap in normal use once AI curves are in the mix.

Repro on my pod right now

schedules.getAll shows:

LEFT temperature: 128 rows  (mon:22 tue:23 wed:23 thu:23 fri:4 sat:23 sun:10)
RIGHT temperature: 68 rows   (sat:23 sun:23 tue:22)

A healthy schedule should be at most ~4 phases × 7 days = 28 rows per side (56 across both sides). The 128 rows are legitimately distinct time points accumulated from repeated AI curve applies where the previous day's points weren't cleared. But regardless of how we got here, the math for a realistic "apply AI curve to all 7 days" operation breaks the cap:

  • AI curves contain 15–23 time points per day
  • Apply-to-all-days with existing entries: deletes.temperature = 128 (L) + 68 (R) = 196 rows → 400 BAD_REQUEST
  • Apply-to-all-days fresh: creates.temperature = ~20/day × 7 days × 2 sides = 280 rows → 400 BAD_REQUEST

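The arithmetic above can be sanity-checked with a few lines of TypeScript. The row counts come straight from the repro; `MAX_ARRAY` stands in for the current Zod cap:

```typescript
// Back-of-envelope check of the repro payloads against the max(100) cap.
const MAX_ARRAY = 100;

// Apply-to-all-days over existing entries: every current row must be deleted.
const deletesTemperature = 128 + 68; // LEFT + RIGHT rows from schedules.getAll

// Apply-to-all-days on a fresh schedule: ~20 points/day × 7 days × 2 sides.
const createsTemperature = 20 * 7 * 2;

console.log(deletesTemperature, deletesTemperature > MAX_ARRAY); // 196 true
console.log(createsTemperature, createsTemperature > MAX_ARRAY); // 280 true
```

Both realistic payloads overshoot the cap by roughly 2–3×, so this is not an edge case.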
When this fails, the whole transaction is rejected and the scheduler isn't reloaded. The client either retries each entry individually (old client code) or swallows the error (current iOS, until the chunking workaround described below).

Proposed fix

Bump the cap. On a local-only API serving one device, there's no attacker-budget argument for 100. The underlying SQLite transaction handles thousands of rows in tens of milliseconds.

z.array(idSchema).max(1000)   // or even just remove the cap

Apply to both deletes.* and creates.* (and updates.* while you're there).

Meanwhile on iOS (already landed)

Just shipped client-side chunking that splits the batch into ≤100-per-array sub-calls so this bug doesn't block users. Each sub-batch runs its own transaction + scheduler reload, so raising the cap would let us collapse back to a single call with a single reload.
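For reference, the chunking just shipped on iOS looks roughly like the following, transliterated to TypeScript. The `chunk` helper and the batch sizes are illustrative, not the actual client code:

```typescript
// Split an array into sub-arrays of at most `size` elements, preserving order.
function chunk<T>(items: T[], size: number): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}

// 196 deletes (the repro payload) split at the 100-per-array cap
// yields two sub-batches — and therefore two server transactions
// and two scheduler reloads instead of one.
const ids = Array.from({ length: 196 }, (_, i) => i + 1);
const batches = chunk(ids, 100);
console.log(batches.length);    // 2
console.log(batches[1].length); // 96
```

This works, but each sub-batch loses atomicity with its siblings; raising the cap restores a single all-or-nothing transaction and a single reload.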

Test plan

  • Send a batchUpdate with 300 creates + 300 deletes; verify it succeeds
  • Existing unit tests still pass
  • Manual: apply AI curve to all 7 days from iOS without client-side chunking
