OrbitQ · Service


Backend API service powering the OrbitQ rocket launch tracker.

OrbitQ is a mobile app that lets users track upcoming rocket launches and receive push notifications when a launch's status or schedule changes, including real-time countdown alerts at 24 hours, 1 hour, and 5 minutes to launch. This README describes the backend service that powers it.


Architecture

OrbitQ API sits between the iOS app and the upstream Launch Library 2 (LL2) API. It caches responses to manage rate limits, persists tracking subscriptions, and runs background jobs to detect changes and dispatch push notifications via Expo.

```mermaid
graph TD
    App["📱 OrbitQ iOS App"]
    API["🖥️ OrbitQ API\n(Express / TypeScript)"]
    Redis["⚡ Redis\n(Upstash)"]
    PG["🗄️ PostgreSQL"]
    LL2["🚀 Launch Library 2\n(thespacedevs.com)"]
    Expo["📨 Expo Push\nNotifications"]

    App -->|"API requests (API-Key auth)"| API
    API -->|"Cache check"| Redis
    Redis -->|"Cache miss → fetch"| LL2
    LL2 -->|"Response cached"| Redis
    API -->|"Read/write tracking & snapshots"| PG
    API -->|"Send push notifications"| Expo
    Expo -->|"Push delivery"| App
```

Tech Stack

| Layer | Technology |
| --- | --- |
| Runtime | Node.js 20, TypeScript 5 |
| Web framework | Express 4 |
| Database | PostgreSQL (via pg) |
| Cache | Upstash Redis (@upstash/redis) |
| HTTP client | 🐶 underrated-fetch (fetch with built-in Redis caching) |
| Push notifications | Expo Push Notifications SDK (abstracts APNs + FCM) |
| Error monitoring | Sentry |
| Testing | Vitest + Supertest |
| Deployment | Railway (nixpacks) |

Features

API Request Flow

The iOS app requests launch data through the OrbitQ API rather than hitting LL2 directly. Every request checks Redis first and only fetches from LL2 on a cache miss.

```mermaid
sequenceDiagram
    participant App as 📱 iOS App
    participant API as OrbitQ API
    participant Redis as Redis (Upstash)
    participant LL2 as Launch Library 2

    App->>API: GET /api/v1/launches/upcoming
    API->>Redis: Check cache key
    alt Cache hit
        Redis-->>API: Cached JSON
        API-->>App: 200 OK (from cache)
    else Cache miss
        Redis-->>API: (miss)
        API->>LL2: GET /launches/upcoming/
        LL2-->>API: Fresh JSON
        API->>Redis: Store with TTL
        API-->>App: 200 OK (fresh)
    end
```

💡 Design decision: static TTL over dynamic. An earlier iteration explored dynamic TTL, shortening cache expiry when a launch was within 30 minutes of its NET. This gave fresher data during countdowns but made the code much harder to reason about. Static TTLs keep the request budget deterministic. We'll monitor whether the caches need to be shortened (and the LL2 rate limit increased) over time.
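
For illustration, a minimal cache-aside sketch with a static TTL, assuming the @upstash/redis client from the tech stack. The key name, TTL value, and LL2 URL are illustrative, and in the real service this flow is handled by underrated-fetch rather than written by hand:

```ts
// Minimal cache-aside sketch with a static TTL (illustrative key, TTL, and URL).
import { Redis } from "@upstash/redis";

const redis = Redis.fromEnv();
const UPCOMING_TTL_SECONDS = 300; // static TTL: deterministic request budget

async function getUpcomingLaunches(): Promise<unknown> {
  const key = "ll2:launches:upcoming";

  // Cache hit: serve from Redis, no LL2 request spent.
  const cached = await redis.get(key);
  if (cached) return cached;

  // Cache miss: fetch from LL2, then store with the static TTL.
  const res = await fetch("https://ll.thespacedevs.com/2.2.0/launch/upcoming/?limit=50");
  const data = await res.json();
  await redis.set(key, data, { ex: UPCOMING_TTL_SECONDS });
  return data;
}
```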


Launch Tracking, Change Detection & Push Notifications

Users can track launches. A background job runs every 4 minutes, compares the latest LL2 data against stored snapshots, and notifies subscribers when anything changes.

```mermaid
sequenceDiagram
    participant Job as Change Detect Job
    participant Cache as Redis Cache
    participant DB as PostgreSQL
    participant Expo as Expo Push
    participant App as 📱 iOS App

    Job->>Cache: Fetch upcoming launches (cached)
    Cache-->>Job: Launch list
    Job->>DB: Load tracked launch IDs + snapshots
    DB-->>Job: Snapshots

    loop For each tracked launch (in upcoming window)
        Job->>Job: Diff current vs. snapshot
        alt Change detected (status or schedule)
            Job->>DB: Update snapshot
            Job->>Cache: Invalidate detail cache entry
            Job->>DB: Log notification
            Job->>Expo: Send push notification
            Expo-->>App: "Schedule Change · Falcon 9 Block 5 / Starlink / Delayed 2 hours"
        else No change
            Job->>DB: Update snapshot (last_checked)
        end
    end
```

Three notification types are sent when a change is detected:

| Type | Trigger | Title | Example body |
| --- | --- | --- | --- |
| status_update | Launch status changes | Status update | "Falcon 9 Block 5 \nStarlink Group 6-14 \nGo for Launch" |
| schedule_change | NET (launch time) shifts | Schedule Change | "Falcon 9 Block 5 \nStarlink Group 6-14 \nDelayed 2 hours" |
| launch_update | Both change at once | Launch update | "Falcon 9 Block 5 \nStarlink Group 6-14 \nStatus: Go for Launch \nSchedule: Delayed 2 hours" |

💡 Design decision: snapshot diffing over webhooks. LL2 doesn't offer webhooks, so change detection is polling-based. Rather than storing just a timestamp of the last check, the API persists a full snapshot of each tracked launch (status ID + name, NET, launch name). This makes the diff unambiguous: a change is detected the moment any field diverges from the stored value, with no risk of missing an update that happened and reverted between polls.
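
A minimal sketch of the diff, assuming the snapshot columns from the schema below and the standard LL2 fields (status.id, net); the type and function names are illustrative, not the actual module API:

```ts
type Snapshot = {
  status_id: number;
  status_name: string;
  net: string;          // ISO timestamp of the stored NET
  launch_name: string;
};

type LL2Launch = {
  id: string;
  name: string;
  net: string;
  status: { id: number; name: string };
};

type ChangeType = "status_update" | "schedule_change" | "launch_update";

// Compare fresh LL2 data against the stored snapshot and pick the notification
// type from the table above; null means nothing changed.
function diffLaunch(current: LL2Launch, snapshot: Snapshot): ChangeType | null {
  const statusChanged = current.status.id !== snapshot.status_id;
  const scheduleChanged = current.net !== snapshot.net;
  if (statusChanged && scheduleChanged) return "launch_update";
  if (statusChanged) return "status_update";
  if (scheduleChanged) return "schedule_change";
  return null;
}
```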

💡 Design decision: only monitor launches in the upcoming window. The change detect job only diffs tracked launches that appear in the limit=50 upcoming fetch. Launches outside the window (e.g. a far-future mission like Neutron Maiden Flight) are skipped; no per-launch fallback API call is made. This keeps API usage predictable and avoids burning the hourly rate limit on missions unlikely to have imminent changes. Once a launch enters the top 50 as its date approaches, monitoring resumes automatically using the existing snapshot.

💡 Design decision: proactive cache invalidation on change. When a change is detected, the launch's detail cache entry is immediately invalidated. This ensures users who tap the push notification see the updated status rather than the stale cached response, without waiting for the TTL to expire.
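
A sketch of that invalidation, assuming the @upstash/redis client; the detail cache key format here is hypothetical:

```ts
import { Redis } from "@upstash/redis";

const redis = Redis.fromEnv();

// Drop the cached detail response so the next GET /launches/:id refetches fresh data.
async function invalidateLaunchDetail(launchId: string): Promise<void> {
  await redis.del(`launch:detail:${launchId}`); // hypothetical key format
}
```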

N+1 issue identified with Claude: all tracked launch IDs, their snapshots, and their subscribed device tokens are fetched in three queries before the per-launch loop begins. The alternative, querying per launch inside the loop, would produce N+1 database round-trips for every job run. Upfront bulk fetching keeps the job's DB footprint constant regardless of how many launches are tracked.
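
A sketch of the upfront bulk fetch, assuming the pg Pool from the tech stack and the table names from the schema below (collapsed into two illustrative queries for brevity):

```ts
import { Pool } from "pg";

const pool = new Pool(); // connection settings come from PG* env vars

// One round-trip per dataset, regardless of how many launches are tracked.
async function loadChangeDetectInputs(upcomingIds: string[]) {
  const tracked = await pool.query(
    `SELECT launch_id, device_token FROM tracked_launches WHERE launch_id = ANY($1)`,
    [upcomingIds],
  );
  const snapshots = await pool.query(
    `SELECT * FROM launch_snapshots WHERE launch_id = ANY($1)`,
    [upcomingIds],
  );
  return { tracked: tracked.rows, snapshots: snapshots.rows };
}
```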


Countdown Notifications

Separately from change detection, a countdown monitor fires time-based alerts as a launch approaches. Thresholds are checked every 60 seconds.

```mermaid
sequenceDiagram
    participant Job as Countdown Monitor (60s)
    participant DB as PostgreSQL
    participant Expo as Expo Push
    participant App as 📱 iOS App

    Job->>DB: Load tracked launches + snapshots
    loop For each tracked launch
        Job->>Job: Calculate time until launch
        alt Within 24h threshold (not yet sent)
            Job->>Expo: "NET · Falcon 9 Block 5 / Starlink / 24 hours till launch"
            Expo-->>App: Push notification
            Job->>DB: Mark 24h threshold sent
        else Within 1h threshold (not yet sent)
            Job->>Expo: "NET · Falcon 9 Block 5 / Starlink / 1 hour till launch"
            Expo-->>App: Push notification
            Job->>DB: Mark 1h threshold sent
        else Within 5m threshold (not yet sent)
            Job->>Expo: "NET · Falcon 9 Block 5 / Starlink / 5 minutes till launch"
            Expo-->>App: Push notification
            Job->>DB: Mark 5m threshold sent
        end
    end
```

If a launch slips past a threshold (e.g. delayed from T-30min to T+3h), the sent record is cleared so the notification will fire again when the window reopens.

💡 Design decision: store net_at_send on countdown records. Each countdown record stores the NET (launch time) that was current when the threshold was marked sent. If the launch subsequently slips, the stored NET no longer matches the snapshot; the old record is deleted and the threshold becomes eligible to fire again.
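
A sketch of the eligibility check, assuming thresholds are keyed as "24h" / "1h" / "5m" (the actual stored values may differ) and that sent records carry net_at_send as described:

```ts
const THRESHOLDS_MS: Record<string, number> = {
  "24h": 24 * 60 * 60 * 1000,
  "1h": 60 * 60 * 1000,
  "5m": 5 * 60 * 1000,
};

type SentRecord = { threshold: string; net_at_send: string };

// Returns the first threshold whose window is open and which hasn't been sent
// for the *current* NET; a record stored for an older NET doesn't count.
function dueThreshold(currentNet: string, sent: SentRecord[], now = Date.now()): string | null {
  const msToLaunch = Date.parse(currentNet) - now;
  if (msToLaunch <= 0) return null; // launch time has passed

  for (const key of ["24h", "1h", "5m"]) {
    const alreadySent = sent.some((r) => r.threshold === key && r.net_at_send === currentNet);
    if (!alreadySent && msToLaunch <= THRESHOLDS_MS[key]) return key;
  }
  return null;
}
```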


Auto-Tracking Launches

In addition to manual tracking, users can define filter rules (by agency and/or launch location for now) so that matching upcoming launches are tracked automatically.

```mermaid
sequenceDiagram
    participant App as 📱 iOS App
    participant API as OrbitQ API
    participant DB as PostgreSQL
    participant Cache as Redis Cache

    App->>API: PUT /api/v1/auto-tracking (filters)
    API->>DB: Save filter rules
    API->>Cache: Fetch upcoming launches (cached)
    Cache-->>API: Launch list
    API->>API: Match launches against filters
    API->>DB: Add matching auto-tracked launches
    API-->>App: 200 OK (saved filters)

    Note over API,DB: Background job also runs every 30 min
    loop Auto-tracking sync (30 min)
        API->>DB: Load all device filters
        API->>Cache: Fetch upcoming launches (cached)
        Cache-->>API: Launch list
        API->>DB: Add new matches / remove stale auto-tracks
    end
```

Filters support two fields, agencies (launch agency IDs) and locations (launch pad location IDs), combined via a matchMode:

| matchMode | Behavior |
| --- | --- |
| "or" (default) | Launch must satisfy at least one filter type, e.g. any SpaceX launch or any KSC launch |
| "and" | Launch must satisfy every non-empty filter type, e.g. SpaceX and KSC (currently unused by client) |

Within each filter type, matching is always OR (e.g. SpaceX or ULA). Agency and location IDs come from the LL2 API (launch_service_provider.id and pad.location.id).
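
A sketch of the matching logic, assuming the stored filters JSON has the shape { agencies, locations, matchMode } (an assumption for illustration) and using the LL2 fields named above:

```ts
type Filters = {
  agencies?: number[];   // launch_service_provider.id values
  locations?: number[];  // pad.location.id values
  matchMode?: "or" | "and";
};

type UpcomingLaunch = {
  launch_service_provider: { id: number };
  pad: { location: { id: number } };
};

function matchesFilters(launch: UpcomingLaunch, f: Filters): boolean {
  // Within each filter type, matching is always OR.
  const agencyHit = (f.agencies ?? []).includes(launch.launch_service_provider.id);
  const locationHit = (f.locations ?? []).includes(launch.pad.location.id);

  if (f.matchMode === "and") {
    // Every non-empty filter type must match.
    const checks: boolean[] = [];
    if (f.agencies?.length) checks.push(agencyHit);
    if (f.locations?.length) checks.push(locationHit);
    return checks.length > 0 && checks.every(Boolean);
  }
  // Default "or": at least one filter type matches.
  return agencyHit || locationHit;
}
```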

Auto-tracked launches are stored with source = 'auto' in tracked_launches. Manually tracked launches (source = 'manual') are never removed by the auto-tracking system. All existing notification logic (change detection, countdowns) applies to auto-tracked launches exactly as it does to manual ones.

💡 Design decision: no extra LL2 API calls. The auto-tracking sync job reuses the same cached upcoming-launches fetch as the change detect job: the same URL, same TTL.


Background Jobs

Six jobs run independently in async loops managed by a central task scheduler. Each job completes before sleeping, so runs never overlap.

💡 Design decision: async loops, not cron. Each job is a while (!aborted) loop that sleeps after each run completes. A slow run simply delays the next one, because the interval is measured from completion, not from the start. Cron-style scheduling (e.g. node-cron) would fire at wall-clock times regardless of whether the previous run has finished, which could cause jobs to overlap under load.
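
A minimal sketch of that pattern, assuming each job is just an async function and that the scheduler hands every loop the same AbortSignal; the names are illustrative:

```ts
// Run a job forever: the sleep starts only after the run completes, so a slow
// run delays the next one instead of overlapping with it.
async function runLoop(
  name: string,
  intervalMs: number,
  job: () => Promise<void>,
  signal: AbortSignal,
): Promise<void> {
  while (!signal.aborted) {
    try {
      await job();
    } catch (err) {
      console.error(`[${name}] run failed`, err);
    }
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
}

// Usage: one AbortController stops every loop on shutdown.
const controller = new AbortController();
const changeDetect = async () => { /* diff snapshots, send pushes */ };
void runLoop("change-detect", 4 * 60_000, changeDetect, controller.signal);
```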

```mermaid
graph LR
    Scheduler["Task Scheduler | (AbortController)"]

    Scheduler --> A["🔄 Change Detect | every 4 min"]
    Scheduler --> B["📸 Backfill Snapshots | every 30 sec"]
    Scheduler --> C["⏳ Countdown Monitor | every 60 sec"]
    Scheduler --> D["📬 Receipt Check | every 15 min"]
    Scheduler --> E["🧹 Housekeeping | every 1 hr"]
    Scheduler --> F["🎯 Auto-tracking Sync | every 30 min"]

    A -->|"Diffs launch state, | sends change alerts"| LL2["LL2 API / Cache"]
    B -->|"Fetches initial snapshot | for newly tracked launches"| LL2
    C -->|"Fires T-24h/1h/5m | countdown notifications"| PG["PostgreSQL"]
    D -->|"Validates Expo | push receipts"| Expo["Expo API"]
    E -->|"Cleans orphaned tokens, archives old logs"| PG
    F -->|"Syncs filter rules against | upcoming launches"| LL2
```

💡 Design decision: backfill as a separate fast job. The change detect job can only diff a launch against a snapshot that already exists. When a user tracks a new launch, there's no snapshot yet. Rather than special-casing this inside change detect, a dedicated backfill job runs every 30 seconds to create those first snapshots quickly.
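
A sketch of the lookup the backfill job needs, assuming the pg Pool and the tables from the schema below:

```ts
import { Pool } from "pg";

const pool = new Pool();

// Tracked launches that don't have a snapshot yet are the ones to backfill.
async function launchesMissingSnapshots(): Promise<string[]> {
  const res = await pool.query(`
    SELECT DISTINCT t.launch_id
    FROM tracked_launches t
    LEFT JOIN launch_snapshots s ON s.launch_id = t.launch_id
    WHERE s.launch_id IS NULL
  `);
  return res.rows.map((row) => row.launch_id as string);
}
```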


Testing

Tests are written with Vitest and Supertest. The focus is on the six background tasks, which contain the most business-critical logic. HTTP endpoints are covered by integration tests via Supertest; the task layer is tested in isolation with mocked dependencies.
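
A sketch of that testing style, using a deliberately simplified, hypothetical task to show the mocked-dependency approach (the real task signatures differ):

```ts
import { describe, it, expect, vi } from "vitest";

// Hypothetical, simplified task: real tasks receive their dependencies as
// arguments, which is what makes them testable in isolation.
async function notifyIfDelayed(
  deps: { loadCurrentNet: () => Promise<string>; sendPush: (body: string) => Promise<void> },
  snapshotNet: string,
): Promise<void> {
  const currentNet = await deps.loadCurrentNet();
  if (currentNet !== snapshotNet) await deps.sendPush("Schedule Change");
}

describe("task layer (mocked dependencies)", () => {
  it("sends a push when the NET differs from the snapshot", async () => {
    const deps = {
      loadCurrentNet: vi.fn().mockResolvedValue("2025-01-02T00:00:00Z"),
      sendPush: vi.fn().mockResolvedValue(undefined),
    };

    await notifyIfDelayed(deps, "2025-01-01T22:00:00Z");

    expect(deps.sendPush).toHaveBeenCalledWith("Schedule Change");
  });
});
```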

Task Coverage

| Task | Interval | What's tested |
| --- | --- | --- |
| Change Detect | 4 min | Diff logic for status and schedule changes; correct notification type selected; snapshot updated after each diff; cache invalidated on change; launches outside the upcoming window skipped without extra API calls; 429 on upcoming fetch aborts the run |
| Backfill Snapshots | 30 sec | Snapshot created on first run for a newly tracked launch; ghost launches (not found in LL2) removed from tracking; 429 stops processing; returns false when no launches are missing snapshots |
| Countdown Monitor | 60 sec | Each threshold (24h, 1h, 5m) fires exactly once; threshold cleared and re-queued when a launch slips past its window; no duplicate sends within the same window |
| Receipt Check | 15 min | Expo error receipts detected and the corresponding device token flagged; successful receipts produce no side effects |
| Housekeeping | 1 hr | All four cleanup queries always run; returns true when any query removed rows, false when all return 0 |
| Auto-tracking Sync | 30 min | Matching launches added and stale auto-tracks removed per device; 50-launch cap respected; manual tracks never removed; no-op when no devices have filters set |

Claude was used to generate tests


API Endpoints

Protected (require API-Key header)

GET /api/v1/launches/upcoming
GET /api/v1/launches/previous
GET /api/v1/launches/:id
GET /api/v1/config/launch_statuses

Responses are proxied from Launch Library 2 and cached.
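
A sketch of a client call, assuming the header is literally named API-Key as stated above; the host is a placeholder:

```ts
async function fetchUpcomingLaunches(apiKey: string) {
  const res = await fetch("https://orbitq-api.example.com/api/v1/launches/upcoming", {
    headers: { "API-Key": apiKey }, // all protected routes expect this header
  });
  if (!res.ok) throw new Error(`OrbitQ API error: ${res.status}`);
  return res.json();
}
```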

Tracking (require API-Key header)

POST   /api/v1/tracking           - Subscribe a device token to a launch
DELETE /api/v1/tracking/:launchId - Unsubscribe a device from a launch
GET    /api/v1/tracking           - List launches tracked by a device

Auto-tracking (require API-Key header)

GET    /api/v1/auto-tracking      - Get the device's current filter rules (null if none)
PUT    /api/v1/auto-tracking      - Save filter rules and trigger an immediate sync
DELETE /api/v1/auto-tracking      - Remove filter rules and all auto-tracked launches

Database Schema

```mermaid
erDiagram
    tracked_launches {
        serial id PK
        varchar device_token
        varchar launch_id
        varchar source
        timestamptz created_at
    }

    device_auto_tracking_filters {
        serial id PK
        varchar device_token
        jsonb filters
        timestamptz updated_at
    }

    launch_snapshots {
        varchar launch_id PK
        integer status_id
        varchar status_name
        timestamptz net
        varchar launch_name
        timestamptz last_checked
    }

    notification_log {
        serial id PK
        varchar device_token
        varchar launch_id
        varchar change_type
        text message
        varchar expo_ticket_id
        boolean success
        timestamptz net_at_send
        timestamptz sent_at
    }

    countdown_notifications {
        serial id PK
        varchar launch_id
        varchar device_token
        varchar threshold
        timestamptz net_at_send
        timestamptz sent_at
    }

    ll2_usage_log {
        timestamptz hour PK
        integer request_count
    }

    tracked_launches }o--|| launch_snapshots : "launch_id"
    tracked_launches ||--o{ notification_log : "device_token + launch_id"
    tracked_launches ||--o{ countdown_notifications : "device_token + launch_id"
    device_auto_tracking_filters ||--o{ tracked_launches : "device_token"
```

Deployment

The service is deployed on Railway using nixpacks for zero-config builds. Upstash provides serverless Redis with no infrastructure to manage. Errors are captured in Sentry.


Related

Authorship

This documentation was co-authored with Claude (Anthropic).
