
fix(holyship): spawn holyshipper with PORT=3100 to match fleet URL template #133

Merged
TSavo merged 1 commit into main from fix/holyshipper-port-3100 on Apr 18, 2026

Conversation

@TSavo (Contributor) commented Apr 18, 2026

Summary

Port mismatch. Core's Fleet.create builds `Instance.url = http://<name>:3100` (fleet.ts:101), but holyshipper containers were being spawned with `PORT=8080`. `waitForReady` pings `http://<name>:3100/health`, nothing answers, the 30s timeout fires, and the instance is torn down.
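The failure loop described above can be sketched roughly like this. This is a minimal sketch, not core's actual implementation: the function name matches the PR's description, but the signature is assumed, and the HTTP GET against `<instance.url>/health` is abstracted into a `probe` callback.

```typescript
// Hypothetical sketch of a waitForReady-style readiness loop (names and
// signature are assumptions). `probe` stands in for GET <instance.url>/health.
async function waitForReady(
  probe: () => Promise<boolean>,
  timeoutMs = 30_000,
  intervalMs = 1_000,
): Promise<boolean> {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    if (await probe()) return true; // something answered on the templated port
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  // With the container bound to :8080, nothing ever answers on :3100,
  // so the 30s budget runs out and the caller tears the instance down.
  return false;
}
```

With the old `PORT=8080` the probe can never succeed, so every provision attempt burned the full timeout before teardown.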

Verified on prod (battleaxe → core VPS):

```
docker logs holyship-its-over-stack-overflowing-spit-on-that-thang
... [holyshipper] worker-runtime listening on :8080
```

Fix

Pass `PORT=3100` in the env for holyshipper. Holyshipper already reads `process.env.PORT`, so no holyshipper-side change is needed.
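A hedged sketch of what the fleet-manager side of this looks like. The helper and interface names here are hypothetical; only the `PORT` value is confirmed by the PR, and the other two env var names are taken from the review diagram below.

```typescript
// Hypothetical sketch of the holyshipper spawn env (field and function names
// are illustrative; only PORT=3100 is confirmed by this PR).
interface SpawnEnv {
  PORT: string;
  HOLYSHIP_GATEWAY_URL: string;
  HOLYSHIP_ENTITY_ID: string;
}

// Core's Fleet.create URL template hard-codes :3100 (fleet.ts:101),
// so the container env must match it.
const FLEET_PORT = 3100;

function buildSpawnEnv(gatewayUrl: string, entityId: string): SpawnEnv {
  return {
    PORT: String(FLEET_PORT), // was "8080": waitForReady then polled a dead port
    HOLYSHIP_GATEWAY_URL: gatewayUrl,
    HOLYSHIP_ENTITY_ID: entityId,
  };
}
```

Because holyshipper reads `process.env.PORT` at startup, changing the env value alone is enough to move the listener to :3100.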

Test plan

  • biome + tsc clean
  • Merge → deploy → ship fresh issue → holyshipper starts on :3100 → waitForReady succeeds → provision completes → dispatch prompt → holyshipper opens a PR on the target repo. E2E green.

🤖 Generated with Claude Code

Summary by Sourcery

Bug Fixes:

  • Set holyshipper PORT environment variable to 3100 to match the fleet-assigned instance URL and avoid failed readiness checks.

…mplate

Core's Fleet.create constructs Instance.url as http://<name>:3100
(fleet.ts:101), but holyshipper was being spawned with PORT=8080.
waitForReady then pinged the wrong port, timed out after 30s every
time, and the worker tore down the freshly-started holyshipper.

Verified via battleaxe SSH on prod:

  holyship-its-over-stack-overflowing-spit-on-that-thang:
    [holyshipper] worker-runtime listening on :8080

But fleet's URL says 3100 — nothing answers, timeout, teardown.

Change PORT to 3100 to match the URL template. Holyshipper respects
the env var, so no holyshipper-side change needed.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Copilot AI review requested due to automatic review settings April 18, 2026 01:10

coderabbitai bot commented Apr 18, 2026

Warning

Rate limit exceeded

@TSavo has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 32 minutes and 51 seconds before requesting another review.

Your organization is not enrolled in usage-based pricing. Contact your admin to enable usage-based pricing to continue reviews beyond the rate limit, or try again in 32 minutes and 51 seconds.


ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 0c8d8578-78d4-45c2-8a2c-32ca78b7aa9a

📥 Commits

Reviewing files that changed from the base of the PR and between d15a962 and cd9805e.

📒 Files selected for processing (1)
  • platforms/holyship/src/fleet/holyshipper-fleet-manager.ts



sourcery-ai bot commented Apr 18, 2026


Reviewer's Guide

This PR fixes a port mismatch between the fleet URL template and holyshipper containers by updating the holyshipper process environment to listen on port 3100 instead of 8080, ensuring waitForReady health checks succeed.

Sequence diagram for holyshipper provisioning and waitForReady health check

```mermaid
sequenceDiagram
    actor User
    participant Core as Core_Service
    participant Fleet as Fleet_Manager
    participant Docker as Docker_Engine
    participant Holy as Holyshipper_Container

    User->>Core: Request new holyshipper instance
    Core->>Fleet: Fleet.create(entityId)
    Note over Core,Fleet: Core sets instance.url = http://<name>:3100
    Fleet->>Docker: Start container holyship-<entityId>
    activate Docker
    Docker-->>Holy: Run holyshipper with env
    Note over Holy: HOLYSHIP_GATEWAY_URL, HOLYSHIP_ENTITY_ID, PORT=3100
    deactivate Docker

    Core->>Holy: waitForReady GET /health on port 3100
    alt PORT is 3100 (after fix)
        Holy-->>Core: 200 OK
        Core-->>User: Provisioning succeeds
    else PORT was 8080 (before fix)
        Holy-->>Holy: Listening on port 8080
        Core-->>Core: Retries until 30s timeout
        Core-->>User: Provisioning fails, teardown instance
    end
```

File-Level Changes

Align holyshipper container listening port with Fleet.create URL expectations (platforms/holyship/src/fleet/holyshipper-fleet-manager.ts):

  • Update the PORT environment variable passed into holyshipper from 8080 to 3100 so it matches the Fleet.create URL template.
  • Add a clarifying in-code comment documenting why holyshipper must listen on 3100 (waitForReady times out when the ports differ).


Copilot AI left a comment


Pull request overview

Fixes a readiness/health-check failure when provisioning holyshipper worker containers via core’s fleet API by aligning the container’s listening port with core’s default Instance.url template (http://<name>:3100).

Changes:

  • Set PORT=3100 for holyshipper containers spawned by HolyshipperFleetManager.
  • Add inline rationale documenting why the port must match core’s fleet URL template to avoid waitForReady timeouts.



@sourcery-ai sourcery-ai bot left a comment


Hey - I've left some high level feedback:

  • Since 3100 is now assumed both here and in Fleet.create, consider extracting the port into a shared constant/config so that future changes to the default port don't silently desynchronize these code paths.
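One way the suggested shared constant could look. The module layout and names here are illustrative only, not something in the repo:

```typescript
// shared/fleet-port.ts (hypothetical module): a single source of truth for
// the default instance port, so Fleet.create and the fleet manager can't
// silently drift apart again.
const FLEET_INSTANCE_PORT = 3100;

// Core's Fleet.create would build Instance.url from it...
const instanceUrl = (name: string): string =>
  `http://${name}:${FLEET_INSTANCE_PORT}`;

// ...and HolyshipperFleetManager would derive the container PORT env
// value from the same constant.
const instancePortEnv = (): string => String(FLEET_INSTANCE_PORT);
```

With both code paths reading one constant, changing the default port becomes a one-line edit instead of a cross-repo scavenger hunt.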


greptile-apps bot commented Apr 18, 2026

Greptile Summary

Fixes a port mismatch where holyshipper containers were spawned with PORT=8080 but core's Fleet.create constructs instance.url = http://<name>:3100 (fleet.ts line 101), causing waitForReady to poll the wrong port and time out after 30 seconds. The fix is a single-line change to PORT: "3100" in holyshipper-fleet-manager.ts, confirmed by the core source and production logs.

Confidence Score: 5/5

Safe to merge — the fix is a targeted, correct one-line change with no side effects.

The change aligns the container's listening port with what core's Fleet.create uses as the fallback URL (confirmed at fleet.ts:101). No other files have the stale port, the comment explains the reasoning clearly, and the production symptom exactly matches the described root cause.

No files require special attention.

Important Files Changed

Filename Overview
platforms/holyship/src/fleet/holyshipper-fleet-manager.ts PORT env var corrected from 8080 to 3100 to match core's Fleet.create URL template; change is minimal, well-commented, and directly resolves the described 30s timeout regression.

Sequence Diagram

```mermaid
sequenceDiagram
    participant H as HolyshipperFleetManager
    participant C as Core (Fleet.create)
    participant CT as Container (holyshipper)

    H->>C: createContainer(name, image, env={PORT:"3100"})
    C->>CT: docker run --env PORT=3100 ...
    Note over CT: holyshipper listens on :3100
    C-->>H: instance = { url: "http://hs-xxx:3100" }
    H->>CT: GET http://hs-xxx:3100/health (waitForReady)
    CT-->>H: 200 OK
    H->>CT: POST /credentials
    H->>CT: POST /checkout
    H-->>H: ProvisionResult { runnerUrl: "http://hs-xxx:3100" }
```


@TSavo TSavo merged commit 7f75945 into main Apr 18, 2026
13 of 14 checks passed
@TSavo TSavo deleted the fix/holyshipper-port-3100 branch April 18, 2026 01:29
