Inter-job coordination: file locking to prevent parallel write conflicts #18

@FrederikHandberg

Description

Problem

Two parallel jobs have no visibility into each other's state. If both write to the same file (dashboard files, workstream findings), the last write wins: there is no locking and no coordination.

This was flagged in Jensen Speaks for the Mar 21 parallel run: 'if one fails, the other may need rerun.' The system has no way to express dependencies between jobs.

Fix

Short-term: flock-based file locking

flock is already used for deploys (see CLAUDE.md); extend the same pattern to shared files:

flock -xn /tmp/lock-dashboard -c 'update-dashboard.sh'
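For jobs implemented in Python, the same lock can be taken in-process via the standard-library fcntl module. A minimal sketch, assuming the same /tmp/lock-dashboard lock file as the flock command above (the function name is illustrative, not from the codebase):

```python
import fcntl

LOCK_PATH = "/tmp/lock-dashboard"  # same lock file as the flock CLI example

def update_dashboard_locked(update):
    """Run `update` only while holding an exclusive, non-blocking lock.

    Returns True if the lock was acquired and the update ran,
    False if another job already holds the lock (mirrors flock -xn).
    """
    with open(LOCK_PATH, "w") as lock_file:
        try:
            # LOCK_EX: exclusive lock; LOCK_NB: fail fast instead of blocking
            fcntl.flock(lock_file, fcntl.LOCK_EX | fcntl.LOCK_NB)
        except BlockingIOError:
            return False
        try:
            update()
            return True
        finally:
            fcntl.flock(lock_file, fcntl.LOCK_UN)
```

Because flock(2) locks are advisory and shared across processes on the same file, the shell command and this Python helper cooperate correctly as long as both use the same lock path.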

Medium-term: job declares resources

Add an optional writes field to JobRequest:

{ "prompt": "...", "writes": ["dashboard.py", "workstreams/WS-001.md"] }

The worker refuses to start a new job whose declared writes overlap those of any running job.
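A minimal sketch of the overlap check the worker could run before admitting a job. All names here (check_write_conflicts, WriteConflictError, running_jobs) are illustrative assumptions, not existing code:

```python
class WriteConflictError(Exception):
    """Raised on overlap so the caller gets a clear error, not silent corruption."""

def check_write_conflicts(new_writes, running_jobs):
    """Refuse a job whose declared writes overlap a running job's.

    new_writes: paths from the new JobRequest's optional `writes` field
                (may be None, since the field is optional).
    running_jobs: mapping of job id -> set of paths that job declared.
    """
    requested = set(new_writes or [])
    for job_id, held in running_jobs.items():
        overlap = requested & held
        if overlap:
            raise WriteConflictError(
                f"job {job_id} already declares writes to {sorted(overlap)}"
            )

# Usage: a disjoint write set passes; an overlapping one raises.
running = {"job-42": {"dashboard.py"}}
check_write_conflicts(["workstreams/WS-001.md"], running)  # no overlap, no error
```

Treating an undeclared writes field as the empty set keeps the field optional: legacy jobs that declare nothing never block, at the cost of not being protected by the check.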

Acceptance Criteria

  • Shared resource writes use flock
  • writes field in JobRequest (optional)
  • Worker checks write conflicts before starting job
  • Dashboard shows resource locks held by running jobs
  • Conflict produces clear error, not silent data corruption

Context

From Jensen limitation audit (2026-03-22). Workstream: WS-001.
