
Add nginx load balancing support for multi-instance Streamlit deployments#336

Merged
t0mdavid-m merged 7 commits into main from claude/docker-load-balancer-h8VWZ
Feb 9, 2026

Conversation


@t0mdavid-m t0mdavid-m commented Feb 9, 2026

Summary

This PR adds optional nginx load balancing to support running multiple Streamlit server instances in parallel, improving scalability and availability. The feature is disabled by default (single-instance mode) and can be enabled by setting the STREAMLIT_SERVER_COUNT environment variable to a value greater than 1.

Key Changes

  • Added nginx installation to both Dockerfile and Dockerfile_simple for load balancing capabilities
  • Created entrypoint scripts (entrypoint.sh and entrypoint_simple.sh) that replace inline shell script generation:
    • entrypoint.sh: Full-featured version with Redis, RQ workers, cron, and load balancing support
    • entrypoint_simple.sh: Lightweight version with cron and load balancing support
  • Implemented dynamic nginx configuration that:
    • Generates upstream server blocks based on STREAMLIT_SERVER_COUNT
    • Uses ip_hash for sticky sessions (critical for Streamlit's in-process state management)
    • Configures WebSocket support with proper headers and long timeouts
    • Disables buffering for streaming responses
  • Added STREAMLIT_SERVER_COUNT environment variable (default: 1) to control the number of Streamlit instances
  • Updated docker-compose.yml to expose the new environment variable configuration

Implementation Details

  • When STREAMLIT_SERVER_COUNT=1 (default), Streamlit runs directly on port 8501 with no load balancer overhead
  • When STREAMLIT_SERVER_COUNT>1, multiple Streamlit instances start on internal ports (8510+) and nginx proxies requests on port 8501
  • Sticky sessions via ip_hash ensure users maintain session state across requests
  • WebSocket connections are properly configured for Streamlit's real-time features
  • Entrypoint scripts are now external files for better maintainability and clarity
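The single- vs. multi-instance dispatch described above can be sketched as a small shell function. This is a hypothetical illustration, not the PR's actual entrypoint: it echoes the commands it would run so the control flow is visible, whereas the real script launches the instances and execs nginx or Streamlit.

```shell
#!/bin/bash
# Hypothetical sketch of the entrypoint dispatch described above.
# It prints the commands it would run instead of executing them.
dispatch() {
    local count=${1:-${STREAMLIT_SERVER_COUNT:-1}}
    local base_port=8510 i
    if [ "$count" -gt 1 ]; then
        # Multi-instance: N Streamlit processes on internal ports 8510+.
        for i in $(seq 0 $((count - 1))); do
            echo "streamlit run app.py --server.port $((base_port + i)) --server.address 127.0.0.1 &"
        done
        echo "nginx -g 'daemon off;'  # proxies :8501 -> :${base_port}..$((base_port + count - 1))"
    else
        # Default: one instance directly on 8501, no nginx overhead.
        echo "streamlit run app.py  # single instance on :8501"
    fi
}

dispatch "${STREAMLIT_SERVER_COUNT:-1}"
```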

https://claude.ai/code/session_018VEL5xKZfe4LCcUa8iUHJ9

Summary by CodeRabbit

  • New Features

    • nginx-backed load balancing to distribute traffic across multiple app instances.
    • Introduced STREAMLIT_SERVER_COUNT (default: 1) to toggle single- vs. multi-instance modes.
  • Chores

    • Container startup now uses a generated entrypoint script to support both single-instance and nginx load-balanced operation.
    • Orchestration config adjusted to rely on the image’s default startup behavior combined with STREAMLIT_SERVER_COUNT.


coderabbitai bot commented Feb 9, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

📝 Walkthrough

Dockerfiles add nginx and a generated /app/entrypoint.sh, plus a public env var STREAMLIT_SERVER_COUNT (default 1). At startup the entrypoint either runs a single Streamlit on 8501 or launches multiple Streamlit processes (8510+) and nginx on 8501 as a load balancer.

Changes

  • Dockerfile (primary, run-app stage): install nginx, add the STREAMLIT_SERVER_COUNT env var (default 1), create /app/entrypoint.sh via heredoc and chmod +x, and implement the runtime logic: when count > 1, generate the nginx upstream config, spawn Streamlit processes on ports 8510+, and start nginx on 8501; otherwise run a single Streamlit on 8501.
  • Dockerfile_simple (simplified variant): mirror the changes from the primary Dockerfile: install nginx, expose STREAMLIT_SERVER_COUNT, and replace the prior entrypoint with a heredoc-generated /app/entrypoint.sh that preserves cron startup and supports single- or nginx-backed multi-instance operation.
  • docker-compose.yml: remove the explicit streamlit run command and add STREAMLIT_SERVER_COUNT=1 (commented out); startup behavior is delegated to the image entrypoint and environment flag rather than an explicit command.
  • Manifest / metadata (inline): +39/+46 lines across manifests, reflecting the added env var and entrypoint script generation and updating run-app startup declarations.

Sequence Diagram(s)

sequenceDiagram
    participant Container as Container Start
    participant Entrypoint as Entrypoint Script
    participant Nginx as Nginx (8501)
    participant S0 as Streamlit (8510 / 8501)
    participant Sx as Streamlit (8511+)
    participant Client as Client

    Container->>Entrypoint: run image CMD -> entrypoint
    Entrypoint->>Entrypoint: read STREAMLIT_SERVER_COUNT
    alt STREAMLIT_SERVER_COUNT > 1
        Entrypoint->>Entrypoint: generate nginx upstream (8510, 8511...)
        Entrypoint->>S0: start Streamlit on 8510
        Entrypoint->>Sx: start additional Streamlit instances
        Entrypoint->>Nginx: write nginx.conf and start nginx on 8501
        Client->>Nginx: HTTP request to 8501
        Nginx->>S0: proxy to chosen instance
        S0-->>Nginx: response
        Nginx-->>Client: response
    else STREAMLIT_SERVER_COUNT == 1
        Entrypoint->>S0: start single Streamlit on 8501
        Client->>S0: direct request to 8501
        S0-->>Client: response
    end

Poem

🐇
I nibbled code and found a door,
Streams multiply and hop some more,
Nginx hums at eighty-five-oh-one,
Ports parade beneath the sun,
Hop, balance, serve — the batch is done!

🚥 Pre-merge checks | ✅ 3 passed

  • Description Check — ✅ Passed (check skipped: CodeRabbit's high-level summary is enabled)
  • Title Check — ✅ Passed (the title 'Add nginx load balancing support for multi-instance Streamlit deployments' accurately and clearly describes the primary change in the pull request)
  • Docstring Coverage — ✅ Passed (no functions found in the changed files to evaluate; docstring coverage check skipped)



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 5

🤖 Fix all issues with AI agents
In `@entrypoint_simple.sh`:
- Around line 33-54: The current nginx upstream uses ip_hash inside the upstream
block for streamlit_backend which pins sessions by client IP and will misroute
multiple users sharing an IP (e.g., behind NAT or another proxy); replace
ip_hash-based affinity by either removing ip_hash entirely to allow normal
round-robin load balancing, or implement a cookie-based sticky session solution
(e.g., nginx-sticky-module or commercial sticky cookie) in the upstream
configuration for streamlit_backend, or instead move session affinity out of the
proxy and configure Streamlit to use an external session-state backend like
Redis; update the upstream streamlit_backend block and related proxy_pass
handling accordingly.
- Around line 9-11: Validate STREAMLIT_SERVER_COUNT before using it in the
numeric comparison: ensure SERVER_COUNT (the variable set from
STREAMLIT_SERVER_COUNT) contains only digits (e.g., with a regex check like [[
"$SERVER_COUNT" =~ ^[0-9]+$ ]] or a case statement) and if it is not numeric,
set SERVER_COUNT to a safe default (e.g., 1) or exit with a clear error; then
use the existing if [ "$SERVER_COUNT" -gt 1 ] test unchanged. Update the
assignment and add the numeric guard near where SERVER_COUNT is set so invalid
values (like "auto") won't cause the shell arithmetic comparison to fail.
- Around line 74-85: The current loop launches Streamlit instances in background
(variables SERVER_COUNT, BASE_PORT, PORT) and then uses a fixed sleep before
starting nginx, which provides no readiness or supervision; change this by
adding a per-instance readiness loop that polls each Streamlit health endpoint
(e.g., http://127.0.0.1:$PORT/_stcore/health) with curl -sf and a timeout/retry
backoff and only proceed to start nginx when all ports respond healthy, and add
a simple supervisor/trap: track PIDs of background streamlit processes, install
a trap to kill children on exit and run a wait loop that exits the container if
any Streamlit PID dies (or restarts it), so nginx (exec nginx -g 'daemon off;')
is only started after readiness and the container fails fast on instance crash.

In `@entrypoint.sh`:
- Around line 19-24: The RQ workers are started in the background without
supervision; update the worker startup loop (where WORKER_COUNT and the "rq
worker openms-workflows --url $REDIS_URL --name worker-$i &" command are used)
to record each child PID (e.g., append $! to a PID file or an array) and add a
trap/wait loop that monitors child exits (using trap '...' SIGCHLD or a wait -n
loop) to log failures and restart the specific worker process when it dies;
ensure restart logic uses the same invocation and preserves the worker name
(worker-$i) and logs PID changes so crashes are detected and workers are
relaunched.
- Around line 12-17: The Redis readiness loop using "until redis-cli ping" can
hang indefinitely; modify the entrypoint.sh loop (the until ... redis-cli ping
... done block) to enforce a retry limit by introducing a counter and a
max-retries variable (configurable via an env var like REDIS_WAIT_MAX_RETRIES
with a sensible default), incrementing the counter each iteration, and breaking
with a non-zero exit and a clear error message (e.g., "Redis did not become
ready after X attempts") when the limit is reached; keep the existing sleep
interval and successful "Redis is ready" echo on success.
🧹 Nitpick comments (2)
entrypoint_simple.sh (1)

1-89: Significant code duplication with entrypoint.sh.

The entire nginx configuration generation and multi-instance startup logic (lines 11–85) is duplicated verbatim between this file and entrypoint.sh. If the nginx config or startup logic needs to change, both files must be updated in lockstep.

Consider extracting the shared nginx/load-balancer logic into a common helper script (e.g., start_load_balancer.sh) that both entrypoints source.
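A minimal sketch of that extraction, under stated assumptions: the file and helper names (start_load_balancer.sh, run_streamlit, generate_nginx_conf, start_instances) are illustrative, not from the PR — the point is that both entrypoints source one file and call one function.

```shell
#!/bin/bash
# Hypothetical shared helper, e.g. /app/start_load_balancer.sh, sourced by
# both entrypoints. generate_nginx_conf and start_instances stand in for the
# config-generation and startup loops currently duplicated in both scripts.
run_streamlit() {
    local count=${STREAMLIT_SERVER_COUNT:-1}
    if [ "$count" -gt 1 ]; then
        generate_nginx_conf "$count"   # write /etc/nginx/nginx.conf upstreams
        start_instances "$count" 8510  # background N instances on 8510+
        exec nginx -g 'daemon off;'
    else
        exec streamlit run app.py
    fi
}

# entrypoint_simple.sh would then reduce to:
#   set -e
#   service cron start
#   source /app/start_load_balancer.sh
#   run_streamlit
#
# entrypoint.sh keeps its Redis/RQ setup and ends the same way.
```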

Dockerfile_simple (1)

28-28: nginx is installed unconditionally, even when load balancing is disabled (default).

Since the default STREAMLIT_SERVER_COUNT=1 means nginx is never used, this adds ~5–10 MB to every image. This is a minor trade-off for simplicity, but worth noting — if image size matters, nginx could be installed conditionally or in a separate build target.

Comment on lines 9 to 11
SERVER_COUNT=${STREAMLIT_SERVER_COUNT:-1}

if [ "$SERVER_COUNT" -gt 1 ]; then

⚠️ Potential issue | 🟡 Minor

No input validation on STREAMLIT_SERVER_COUNT.

If a user sets STREAMLIT_SERVER_COUNT to a non-numeric value (e.g., "auto"), the [ "$SERVER_COUNT" -gt 1 ] comparison will produce a shell error and the container will exit due to set -e. Consider adding a guard:

Proposed fix
 SERVER_COUNT=${STREAMLIT_SERVER_COUNT:-1}
+if ! [[ "$SERVER_COUNT" =~ ^[0-9]+$ ]]; then
+    echo "ERROR: STREAMLIT_SERVER_COUNT must be a positive integer, got '$SERVER_COUNT'" >&2
+    exit 1
+fi
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

Before:

SERVER_COUNT=${STREAMLIT_SERVER_COUNT:-1}
if [ "$SERVER_COUNT" -gt 1 ]; then

After:

SERVER_COUNT=${STREAMLIT_SERVER_COUNT:-1}
if ! [[ "$SERVER_COUNT" =~ ^[0-9]+$ ]]; then
    echo "ERROR: STREAMLIT_SERVER_COUNT must be a positive integer, got '$SERVER_COUNT'" >&2
    exit 1
fi
if [ "$SERVER_COUNT" -gt 1 ]; then

Comment on lines 33 to 54
upstream streamlit_backend {
    # Sticky sessions: ensure a user always reaches the same Streamlit instance.
    # This is critical because Streamlit maintains per-session state in-process.
    ip_hash;

${UPSTREAM_SERVERS} }

map \$http_upgrade \$connection_upgrade {
    default upgrade;
    '' close;
}

server {
    listen 8501;

    location / {
        proxy_pass http://streamlit_backend;
        proxy_http_version 1.1;

        # WebSocket support (required for Streamlit)
        proxy_set_header Upgrade \$http_upgrade;
        proxy_set_header Connection \$connection_upgrade;

⚠️ Potential issue | 🟡 Minor

ip_hash breaks when clients share an IP or are behind a downstream load balancer.

ip_hash uses the client's IP for sticky sessions. In environments where multiple users share the same public IP (corporate NAT, another reverse proxy in front), all those users will be pinned to the same Streamlit instance, negating the load-balancing benefit and potentially mixing sessions.

If the deployment will sit behind another proxy, consider using a cookie-based sticky session mechanism (e.g., nginx sticky cookie in the commercial version, or the nginx-sticky-module for OSS) or Streamlit's built-in external session-state backend (e.g., Redis) to decouple session affinity from the proxy layer.
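Where the specific failure mode is a single trusted proxy in front (rather than many users behind one NAT), one OSS-nginx-only alternative — not suggested by the review, and only a sketch — is to key affinity on the forwarded client address using the stock `hash ... consistent` directive from ngx_http_upstream_module, generated the same way the PR builds its upstream block:

```shell
#!/bin/bash
# Hypothetical sketch: hash on X-Forwarded-For instead of the TCP peer
# address. Only helps when a *trusted* downstream proxy sets the header;
# it does not fix the shared-NAT case described above. The server list
# stands in for the PR's generated UPSTREAM_SERVERS.
UPSTREAM_SERVERS='    server 127.0.0.1:8510;
    server 127.0.0.1:8511;'

cat > /tmp/upstream.conf <<EOF
upstream streamlit_backend {
    # Affinity follows the forwarded client IP, so the proxy's own
    # address no longer pins every user to one instance.
    hash \$http_x_forwarded_for consistent;

${UPSTREAM_SERVERS}
}
EOF

cat /tmp/upstream.conf
```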


Comment on lines 74 to 85
for i in $(seq 0 $((SERVER_COUNT - 1))); do
    PORT=$((BASE_PORT + i))
    echo "Starting Streamlit instance on port $PORT..."
    streamlit run app.py --server.port $PORT --server.address 127.0.0.1 &
done

# Give Streamlit instances a moment to start
sleep 2

# Start nginx in foreground as the main process
echo "Starting nginx load balancer on port 8501..."
exec nginx -g 'daemon off;'

⚠️ Potential issue | 🟠 Major

Backgrounded Streamlit instances are unsupervised; no health-check before starting nginx.

  1. Streamlit instances are launched with & but nothing monitors them. If one crashes, nginx will route traffic to a dead upstream and return 502s indefinitely.
  2. sleep 2 is a fixed delay rather than an actual readiness check. On slow or resource-constrained hosts, 2 seconds may not be enough.

Consider:

  • Adding a readiness loop that polls each Streamlit port (e.g., curl -sf http://127.0.0.1:$PORT/_stcore/health) before starting nginx.
  • Using a lightweight process supervisor (e.g., supervisord, or a trap + wait loop) so that if any Streamlit instance dies, the container can either restart it or exit cleanly.
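Both suggestions can be combined in one hedged sketch. The /_stcore/health endpoint and base port 8510 are from the PR; the function names, retry counts, and the `wait -n` supervision (bash ≥ 5.1 when given PIDs) are illustrative assumptions, not the PR's implementation:

```shell
#!/bin/bash
# Hypothetical readiness + supervision sketch for the backgrounded
# Streamlit instances discussed above.

# Poll one instance's health endpoint until it answers or retries run out.
wait_for_health() {
    local port=$1 retries=${2:-30} i
    for ((i = 1; i <= retries; i++)); do
        if curl -sf "http://127.0.0.1:${port}/_stcore/health" > /dev/null; then
            return 0
        fi
        sleep 1
    done
    echo "ERROR: instance on port ${port} never became healthy" >&2
    return 1
}

start_instances() {
    local count=$1 base_port=${2:-8510} i port pids=()
    # Kill all children if the entrypoint itself exits.
    trap 'kill "${pids[@]}" 2>/dev/null' EXIT
    for ((i = 0; i < count; i++)); do
        port=$((base_port + i))
        streamlit run app.py --server.port "$port" --server.address 127.0.0.1 &
        pids+=($!)
    done
    # Only proceed once every upstream answers its health check.
    for ((i = 0; i < count; i++)); do
        wait_for_health $((base_port + i)) || exit 1
    done
    # Run nginx as another supervised child; wait -n returns as soon as
    # any listed child exits, so the container fails fast on a crash.
    nginx -g 'daemon off;' &
    pids+=($!)
    wait -n "${pids[@]}"
    echo "A supervised process exited; shutting down" >&2
    exit 1
}
```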

entrypoint.sh Outdated
Comment on lines 12 to 17
# Wait for Redis to be ready
until redis-cli ping > /dev/null 2>&1; do
    echo "Waiting for Redis..."
    sleep 1
done
echo "Redis is ready"

⚠️ Potential issue | 🟠 Major

Redis readiness loop has no timeout — container can hang indefinitely.

If Redis fails to start (e.g., misconfiguration, port conflict), this loop will spin forever and the container will appear stuck. Add a retry limit:

Proposed fix
 # Wait for Redis to be ready
-until redis-cli ping > /dev/null 2>&1; do
+RETRIES=0
+MAX_RETRIES=30
+until redis-cli ping > /dev/null 2>&1; do
+    RETRIES=$((RETRIES + 1))
+    if [ "$RETRIES" -ge "$MAX_RETRIES" ]; then
+        echo "ERROR: Redis failed to start after $MAX_RETRIES seconds" >&2
+        exit 1
+    fi
     echo "Waiting for Redis..."
     sleep 1
 done
Suggested change

Before:

# Wait for Redis to be ready
until redis-cli ping > /dev/null 2>&1; do
    echo "Waiting for Redis..."
    sleep 1
done
echo "Redis is ready"

After:

# Wait for Redis to be ready
RETRIES=0
MAX_RETRIES=30
until redis-cli ping > /dev/null 2>&1; do
    RETRIES=$((RETRIES + 1))
    if [ "$RETRIES" -ge "$MAX_RETRIES" ]; then
        echo "ERROR: Redis failed to start after $MAX_RETRIES seconds" >&2
        exit 1
    fi
    echo "Waiting for Redis..."
    sleep 1
done
echo "Redis is ready"

entrypoint.sh Outdated
Comment on lines 19 to 24
# Start RQ worker(s) in background
WORKER_COUNT=${RQ_WORKER_COUNT:-1}
echo "Starting $WORKER_COUNT RQ worker(s)..."
for i in $(seq 1 $WORKER_COUNT); do
    rq worker openms-workflows --url $REDIS_URL --name worker-$i &
done

⚠️ Potential issue | 🟡 Minor

RQ workers are fire-and-forget — no monitoring or restart on failure.

Similar to the Streamlit instances, RQ workers are backgrounded and never checked. If a worker crashes, the job queue will silently stop processing. Consider logging worker PIDs and/or using a trap to detect child exits.
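One way to sketch that: give each worker its own small restart loop instead of a bare `&`. The queue name, --url flag, and worker-$i naming follow the PR; the supervise_worker helper and its restart cap are illustrative assumptions.

```shell
#!/bin/bash
# Hypothetical restart wrapper for the RQ workers discussed above.
supervise_worker() {
    local name=$1 max_restarts=${2:-0}   # 0 = restart forever
    local restarts=0 pid
    while true; do
        rq worker openms-workflows --url "$REDIS_URL" --name "$name" &
        pid=$!
        echo "worker ${name} running as PID ${pid}"
        wait "$pid"
        restarts=$((restarts + 1))
        if [ "$max_restarts" -gt 0 ] && [ "$restarts" -ge "$max_restarts" ]; then
            echo "worker ${name} exceeded ${max_restarts} restarts; giving up" >&2
            return 1
        fi
        echo "worker ${name} (PID ${pid}) exited; restarting" >&2
        sleep 1
    done
}

# In entrypoint.sh the startup loop would then become:
#   for i in $(seq 1 "${RQ_WORKER_COUNT:-1}"); do
#       supervise_worker "worker-$i" &
#   done
```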


When STREAMLIT_SERVER_COUNT > 1, the entrypoint dynamically generates an
nginx config and launches multiple Streamlit instances on internal ports
(8510+), with nginx on port 8501 using ip_hash sticky sessions for
WebSocket compatibility. Default (STREAMLIT_SERVER_COUNT=1) preserves
existing behavior with no nginx overhead.

https://claude.ai/code/session_018VEL5xKZfe4LCcUa8iUHJ9
@t0mdavid-m t0mdavid-m force-pushed the claude/docker-load-balancer-h8VWZ branch from b58b03c to be23790 on February 9, 2026 at 14:10

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 6

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
Dockerfile (1)

162-219: 🛠️ Refactor suggestion | 🟠 Major

Same inline-script and maintainability concerns as Dockerfile_simple — extract to an external entrypoint.sh file.

This is the same pattern flagged in Dockerfile_simple. The PR description states entrypoint scripts were "replaced with external entrypoint scripts" but both Dockerfiles still generate them inline. Extracting to a COPY'd file is even more important here due to the additional Redis/RQ logic increasing the script's complexity.

🤖 Fix all issues with AI agents
In `@Dockerfile`:
- Around line 211-213: Replace the static "sleep 2" before the "exec nginx -g
'daemon off;'" startup with a health-check loop that polls Streamlit's
/_stcore/health endpoint on each internal port (the same ports your nginx
upstreams use) and only proceeds to start nginx when all endpoints respond
healthy; implement a loop that iterates the port list, uses a lightweight HTTP
check (curl -sf or wget --spider) with a short retry/backoff and overall timeout
per port, logs failures and retries, and exits with non-zero if any port never
becomes healthy, then call the existing "exec nginx -g 'daemon off;'" once all
checks pass.
- Around line 187-218: The Dockerfile spawns multiple background services
(Redis, RQ workers, multiple Streamlit instances created from the SERVER_COUNT
loop, and nginx started with exec nginx -g "daemon off;") without a proper
init/supervisor to reap zombies or restart failed services; replace the current
ad-hoc backgrounding with a real process supervisor (e.g., supervisord) or at
minimum use tini as PID 1: add tini as the ENTRYPOINT and install/configure
supervisord to run and manage redis, the RQ worker processes, each streamlit
instance (those started in the for loop using streamlit run app.py --server.port
$PORT --server.address 127.0.0.1), and nginx (instead of exec nginx -g "daemon
off;"), or create a supervisord config that defines programs for redis,
rqworker, streamlit instances (parametrized by SERVER_COUNT), and nginx so they
are supervised, can restart, and are correctly reaped.

In `@Dockerfile_simple`:
- Line 102: Validate the STREAMLIT_SERVER_COUNT/SERVER_COUNT env var before
using the numeric comparison in the entrypoint: check that "$SERVER_COUNT"
matches a positive-integer regex and is >=1, and if not print a clear error to
stderr and exit non‑zero; then you can safely use the existing conditional that
contains [ "$SERVER_COUNT" -gt 1 ]. Ensure you update the variable name
references (STREAMLIT_SERVER_COUNT -> SERVER_COUNT) consistently and place the
validation before the line with [ "$SERVER_COUNT" -gt 1 ].
- Around line 116-125: The startup loop that launches multiple Streamlit
instances by backgrounding "streamlit run app.py ... &" (using SERVER_COUNT and
BASE_PORT) leaves those children orphaned when the script execs nginx ("exec
nginx -g \"daemon off;\""), causing zombie processes and no restarts; replace
this pattern by running a proper init/reaper (add tini as PID 1 or run the
container with --init) and supervise/restart the Streamlit workers instead of
backgrounding them — e.g., use a lightweight supervisor (supervisord) or
implement a restart loop/trap in the entrypoint to launch managed workers and
have the supervisor start nginx (or run nginx under the supervisor) so Streamlit
processes are reaped and automatically restarted on crash.
- Around line 92-131: The Dockerfile currently builds a fragile inline
entrypoint by echoing a large heredoc into /app/entrypoint.sh (see the block
creating /app/entrypoint.sh which references STREAMLIT_SERVER_COUNT,
UPSTREAM_SERVERS, BASE_PORT and the nginx config); replace this by moving the
entire script to a new file named entrypoint_simple.sh in the repo, COPY that
file into the image in the Dockerfile, chmod +x it, and change the Dockerfile to
use that script as the container entrypoint (avoid the echo -e generation and
inline nginx config), keeping the same runtime logic (cron start, multi-instance
loop, nginx exec or single exec streamlit) so symbols like
STREAMLIT_SERVER_COUNT, BASE_PORT, and the nginx upstream/template remain
unchanged.
- Around line 123-125: The Dockerfile currently uses a hardcoded "sleep 2"
before starting nginx which can cause 502s; replace that with a readiness loop
that polls Streamlit's health endpoint (/_stcore/health) on the Streamlit
port(s) using curl and only proceeds to run the final exec nginx -g "daemon
off;" once a successful HTTP 200 response is received (with a timeout and
retry/backoff to avoid infinite loops); remove the static sleep and ensure the
loop logs retries and exits non-zero on overall timeout so nginx (the existing
exec command) only starts when Streamlit is ready.
🧹 Nitpick comments (1)
Dockerfile (1)

196-199: PORT variable in the loop shadows the build ARG PORT=8501 — no functional issue but potentially confusing.

The entrypoint script uses PORT as a loop variable for the internal Streamlit ports (8510+), while the Dockerfile declares ARG PORT=8501 at line 13. Since the ARG doesn't persist into the entrypoint's runtime environment, there's no conflict, but using a different name (e.g., INSTANCE_PORT) would improve clarity.

Comment on lines 187 to 218
# Load balancer setup\n\
SERVER_COUNT=${STREAMLIT_SERVER_COUNT:-1}\n\
\n\
if [ "$SERVER_COUNT" -gt 1 ]; then\n\
    echo "Starting $SERVER_COUNT Streamlit instances with nginx load balancer..."\n\
\n\
    # Generate nginx upstream block\n\
    UPSTREAM_SERVERS=""\n\
    BASE_PORT=8510\n\
    for i in $(seq 0 $((SERVER_COUNT - 1))); do\n\
        PORT=$((BASE_PORT + i))\n\
        UPSTREAM_SERVERS="${UPSTREAM_SERVERS} server 127.0.0.1:${PORT};\\n"\n\
    done\n\
\n\
    # Write nginx config\n\
    echo -e "worker_processes auto;\\npid /run/nginx.pid;\\n\\nevents {\\n worker_connections 1024;\\n}\\n\\nhttp {\\n upstream streamlit_backend {\\n ip_hash;\\n${UPSTREAM_SERVERS} }\\n\\n map \\$http_upgrade \\$connection_upgrade {\\n default upgrade;\\n \\x27\\x27 close;\\n }\\n\\n server {\\n listen 8501;\\n\\n location / {\\n proxy_pass http://streamlit_backend;\\n proxy_http_version 1.1;\\n proxy_set_header Upgrade \\$http_upgrade;\\n proxy_set_header Connection \\$connection_upgrade;\\n proxy_set_header Host \\$host;\\n proxy_set_header X-Real-IP \\$remote_addr;\\n proxy_set_header X-Forwarded-For \\$proxy_add_x_forwarded_for;\\n proxy_set_header X-Forwarded-Proto \\$scheme;\\n proxy_read_timeout 86400;\\n proxy_send_timeout 86400;\\n proxy_buffering off;\\n }\\n }\\n}" > /etc/nginx/nginx.conf\n\
\n\
    # Start Streamlit instances on internal ports (localhost only)\n\
    for i in $(seq 0 $((SERVER_COUNT - 1))); do\n\
        PORT=$((BASE_PORT + i))\n\
        echo "Starting Streamlit instance on port $PORT..."\n\
        streamlit run app.py --server.port $PORT --server.address 127.0.0.1 &\n\
    done\n\
\n\
    sleep 2\n\
    echo "Starting nginx load balancer on port 8501..."\n\
    exec nginx -g "daemon off;"\n\
else\n\
    # Single instance mode (default) - run Streamlit directly on port 8501\n\
    echo "Starting Streamlit app..."\n\
    exec streamlit run app.py\n\
fi\n\

⚠️ Potential issue | 🟠 Major

Compounded unsupervised-process problem: Redis + RQ workers + Streamlit instances + nginx.

In multi-instance mode, this container runs at least 4+ background daemons (Redis, N RQ workers, M Streamlit instances) with nginx as PID 1. None of the background processes have supervision or restart capability, and nginx won't reap zombie children.

This is the same issue flagged in Dockerfile_simple but amplified here. A process supervisor like supervisord would be strongly recommended for this container, or at minimum tini as the init process.
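Following the Dockerfiles' existing generate-config-at-startup pattern, the supervisord route could be sketched as below. Everything here is illustrative (program names, /tmp path, and the numprocs/%(process_num) port scheme); numprocs_start=10 makes %(process_num) expand to 10, 11, …, yielding ports 8510+ as in the PR.

```shell
#!/bin/bash
# Hypothetical sketch: generate a supervisord config so Redis/RQ/Streamlit/
# nginx are all supervised, restarted on crash, and reaped correctly.
SERVER_COUNT=${STREAMLIT_SERVER_COUNT:-1}

cat > /tmp/supervisord.conf <<EOF
[supervisord]
nodaemon=true

[program:streamlit]
command=streamlit run app.py --server.port=85%(process_num)02d --server.address=127.0.0.1
process_name=streamlit-%(process_num)02d
numprocs=${SERVER_COUNT}
numprocs_start=10
autorestart=true

[program:nginx]
command=nginx -g "daemon off;"
autorestart=true
EOF

# The entrypoint would then end with:
#   exec supervisord -c /tmp/supervisord.conf
```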


Dockerfile Outdated
Comment on lines 211 to 213
    sleep 2\n\
    echo "Starting nginx load balancer on port 8501..."\n\
    exec nginx -g "daemon off;"\n\

⚠️ Potential issue | 🟠 Major

Same sleep 2 readiness concern as Dockerfile_simple.

Replace with a health-check loop polling Streamlit's /_stcore/health endpoint on each internal port before starting nginx. See the proposed fix in the Dockerfile_simple review.


Comment on lines 92 to 131
RUN echo -e '#!/bin/bash\n\
set -e\n\
source /root/miniforge3/bin/activate streamlit-env\n\
\n\
# Start cron for workspace cleanup\n\
service cron start\n\
\n\
# Load balancer setup\n\
SERVER_COUNT=${STREAMLIT_SERVER_COUNT:-1}\n\
\n\
if [ "$SERVER_COUNT" -gt 1 ]; then\n\
echo "Starting $SERVER_COUNT Streamlit instances with nginx load balancer..."\n\
\n\
# Generate nginx upstream block\n\
UPSTREAM_SERVERS=""\n\
BASE_PORT=8510\n\
for i in $(seq 0 $((SERVER_COUNT - 1))); do\n\
PORT=$((BASE_PORT + i))\n\
UPSTREAM_SERVERS="${UPSTREAM_SERVERS} server 127.0.0.1:${PORT};\\n"\n\
done\n\
\n\
# Write nginx config\n\
echo -e "worker_processes auto;\\npid /run/nginx.pid;\\n\\nevents {\\n worker_connections 1024;\\n}\\n\\nhttp {\\n upstream streamlit_backend {\\n ip_hash;\\n${UPSTREAM_SERVERS} }\\n\\n map \\$http_upgrade \\$connection_upgrade {\\n default upgrade;\\n \\x27\\x27 close;\\n }\\n\\n server {\\n listen 8501;\\n\\n location / {\\n proxy_pass http://streamlit_backend;\\n proxy_http_version 1.1;\\n proxy_set_header Upgrade \\$http_upgrade;\\n proxy_set_header Connection \\$connection_upgrade;\\n proxy_set_header Host \\$host;\\n proxy_set_header X-Real-IP \\$remote_addr;\\n proxy_set_header X-Forwarded-For \\$proxy_add_x_forwarded_for;\\n proxy_set_header X-Forwarded-Proto \\$scheme;\\n proxy_read_timeout 86400;\\n proxy_send_timeout 86400;\\n proxy_buffering off;\\n }\\n }\\n}" > /etc/nginx/nginx.conf\n\
\n\
# Start Streamlit instances on internal ports (localhost only)\n\
for i in $(seq 0 $((SERVER_COUNT - 1))); do\n\
PORT=$((BASE_PORT + i))\n\
echo "Starting Streamlit instance on port $PORT..."\n\
streamlit run app.py --server.port $PORT --server.address 127.0.0.1 &\n\
done\n\
\n\
sleep 2\n\
echo "Starting nginx load balancer on port 8501..."\n\
exec nginx -g "daemon off;"\n\
else\n\
# Single instance mode (default) - run Streamlit directly on port 8501\n\
echo "Starting Streamlit app..."\n\
exec streamlit run app.py\n\
fi\n\
' > /app/entrypoint.sh

🛠️ Refactor suggestion | 🟠 Major

Massive inline script is fragile and hard to maintain — consider an external file.

The entrypoint script is generated via a single echo -e with deeply nested escaping (especially the nginx config on line 114). This makes the script nearly impossible to read, debug, or modify safely. A single misplaced \n or \\ will silently produce a broken script or nginx config at runtime, with no build-time feedback.

Consider using a separate entrypoint_simple.sh file and COPYing it into the image (which is what the PR description says was done, but the actual code still uses inline generation).

🛠️ Proposed approach

Create a file entrypoint_simple.sh alongside the Dockerfile with the script contents, then in the Dockerfile:

-RUN echo -e '#!/bin/bash\n\
-set -e\n\
-...long inline script...
-' > /app/entrypoint.sh
+COPY entrypoint_simple.sh /app/entrypoint.sh

This gives you:

  • Syntax highlighting and linting in your editor/CI
  • Shell linting via shellcheck
  • Readable nginx config generation
  • Easier diffs in future PRs
🤖 Prompt for AI Agents
In `@Dockerfile_simple` around lines 92-131, the Dockerfile currently builds a
fragile inline entrypoint by echoing a large heredoc into /app/entrypoint.sh
(see the block creating /app/entrypoint.sh which references
STREAMLIT_SERVER_COUNT, UPSTREAM_SERVERS, BASE_PORT and the nginx config);
replace this by moving the entire script to a new file named
entrypoint_simple.sh in the repo, COPY that file into the image in the
Dockerfile, chmod +x it, and change the Dockerfile to use that script as the
container entrypoint (avoid the echo -e generation and inline nginx config),
keeping the same runtime logic (cron start, multi-instance loop, nginx exec or
single exec streamlit) so symbols like STREAMLIT_SERVER_COUNT, BASE_PORT, and
the nginx upstream/template remain unchanged.

# Load balancer setup\n\
SERVER_COUNT=${STREAMLIT_SERVER_COUNT:-1}\n\
\n\
if [ "$SERVER_COUNT" -gt 1 ]; then\n\

⚠️ Potential issue | 🟡 Minor

Non-numeric STREAMLIT_SERVER_COUNT will crash the entrypoint with an unhelpful error.

The [ "$SERVER_COUNT" -gt 1 ] comparison will fail with a confusing error if the env var is set to a non-integer value (e.g., "auto" or ""). Consider adding input validation early in the script.

🛠️ Proposed validation
# Validate SERVER_COUNT is a positive integer
if ! [[ "$SERVER_COUNT" =~ ^[0-9]+$ ]] || [ "$SERVER_COUNT" -lt 1 ]; then
    echo "ERROR: STREAMLIT_SERVER_COUNT must be a positive integer, got: '$SERVER_COUNT'" >&2
    exit 1
fi
🤖 Prompt for AI Agents
In `@Dockerfile_simple` at line 102, validate the
STREAMLIT_SERVER_COUNT/SERVER_COUNT env var before using the numeric comparison
in the entrypoint: check that "$SERVER_COUNT" matches a positive-integer regex
and is >=1, and if not print a clear error to stderr and exit non‑zero; then you
can safely use the existing conditional that contains [ "$SERVER_COUNT" -gt 1 ].
Ensure you update the variable name references (STREAMLIT_SERVER_COUNT ->
SERVER_COUNT) consistently and place the validation before the line with [
"$SERVER_COUNT" -gt 1 ].
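
As a self-contained sketch of that validation (the helper name `validate_count` is illustrative, not part of the PR):

```shell
#!/bin/sh
# Illustrative helper: succeeds only for a positive integer (POSIX sh, no bashisms).
validate_count() {
    case "$1" in
        ''|*[!0-9]*) return 1 ;;  # empty, or contains a non-digit
    esac
    [ "$1" -ge 1 ]
}

SERVER_COUNT="${STREAMLIT_SERVER_COUNT:-1}"
if ! validate_count "$SERVER_COUNT"; then
    echo "ERROR: STREAMLIT_SERVER_COUNT must be a positive integer, got: '$SERVER_COUNT'" >&2
    exit 1
fi
```

Placing this at the top of the entrypoint makes a value like `auto` or `-1` fail fast with a clear message instead of crashing later at the `-gt` comparison.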

Comment on lines 116 to 125
# Start Streamlit instances on internal ports (localhost only)\n\
for i in $(seq 0 $((SERVER_COUNT - 1))); do\n\
PORT=$((BASE_PORT + i))\n\
echo "Starting Streamlit instance on port $PORT..."\n\
streamlit run app.py --server.port $PORT --server.address 127.0.0.1 &\n\
done\n\
\n\
sleep 2\n\
echo "Starting nginx load balancer on port 8501..."\n\
exec nginx -g "daemon off;"\n\

⚠️ Potential issue | 🟠 Major

Backgrounded Streamlit processes lack supervision and will become zombies under nginx PID 1.

When SERVER_COUNT > 1, Streamlit instances are forked into the background (&), then exec nginx replaces the shell as PID 1. This creates two problems:

  1. Zombie processes: nginx won't wait() on the orphaned Streamlit children. If any Streamlit process exits, it becomes a zombie.
  2. No restart on crash: If a Streamlit instance crashes, it stays down permanently with no supervision.

Consider using tini (or docker run --init) as PID 1 to handle zombie reaping, and a lightweight process supervisor (e.g., supervisord) or a trap-based loop to restart crashed instances.

🤖 Prompt for AI Agents
In `@Dockerfile_simple` around lines 116-125, the startup loop that launches
multiple Streamlit instances by backgrounding "streamlit run app.py ... &"
(using SERVER_COUNT and BASE_PORT) leaves those children orphaned when the
script execs nginx ("exec nginx -g \"daemon off;\""), causing zombie processes
and no restarts; replace this pattern by running a proper init/reaper (add tini
as PID 1 or run the container with --init) and supervise/restart the Streamlit
workers instead of backgrounding them — e.g., use a lightweight supervisor
(supervisord) or implement a restart loop/trap in the entrypoint to launch
managed workers and have the supervisor start nginx (or run nginx under the
supervisor) so Streamlit processes are reaped and automatically restarted on
crash.
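
A trap-based loop could be as small as the sketch below. The function name and retry policy are assumptions for illustration; a real deployment would more likely reach for supervisord or tini as the comment suggests.

```shell
#!/bin/sh
# Illustrative restart helper: rerun a command until it succeeds,
# giving up after a fixed number of failures.
supervise() {
    retries="$1"; shift
    attempt=0
    until "$@"; do
        attempt=$((attempt + 1))
        if [ "$attempt" -ge "$retries" ]; then
            echo "giving up on '$*' after $retries failures" >&2
            return 1
        fi
        sleep 1
    done
}
```

In the entrypoint, each worker could then be started as `supervise 100 streamlit run app.py --server.port $PORT --server.address 127.0.0.1 &` so a crashed instance is relaunched instead of staying down permanently.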

Comment on lines 123 to 125
sleep 2\n\
echo "Starting nginx load balancer on port 8501..."\n\
exec nginx -g "daemon off;"\n\

⚠️ Potential issue | 🟠 Major

Hardcoded sleep 2 is unreliable as a readiness gate.

Streamlit may take longer than 2 seconds to start (especially under load or on slower hardware). nginx will begin proxying immediately, resulting in 502 errors for early requests. Consider a readiness loop that polls the Streamlit ports before starting nginx:

🛠️ Proposed readiness check
-    sleep 2
+    # Wait for all Streamlit instances to be ready
+    for i in $(seq 0 $((SERVER_COUNT - 1))); do
+        PORT=$((BASE_PORT + i))
+        echo "Waiting for Streamlit on port $PORT..."
+        until curl -s -o /dev/null http://127.0.0.1:$PORT/_stcore/health; do
+            sleep 1
+        done
+    done

Note: This requires curl to be installed (it's already in the apt-get line). The /_stcore/health endpoint is Streamlit's built-in health check.

🤖 Prompt for AI Agents
In `@Dockerfile_simple` around lines 123-125, the Dockerfile currently uses a
hardcoded "sleep 2" before starting nginx which can cause 502s; replace that
with a readiness loop that polls Streamlit's health endpoint (/_stcore/health)
on the Streamlit port(s) using curl and only proceeds to run the final exec
nginx -g "daemon off;" once a successful HTTP 200 response is received (with a
timeout and retry/backoff to avoid infinite loops); remove the static sleep and
ensure the loop logs retries and exits non-zero on overall timeout so nginx (the
existing exec command) only starts when Streamlit is ready.
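
A version of that readiness gate with the requested overall timeout might look like this (function name and timings are illustrative; `/_stcore/health` is Streamlit's built-in health endpoint):

```shell
#!/bin/sh
# Illustrative readiness gate: poll a URL until it answers 2xx,
# failing after a deadline so a dead instance cannot hang startup forever.
wait_healthy() {
    url="$1"; deadline="${2:-60}"
    start=$(date +%s)
    until curl -sf -o /dev/null "$url"; do
        if [ $(( $(date +%s) - start )) -ge "$deadline" ]; then
            echo "timed out after ${deadline}s waiting for $url" >&2
            return 1
        fi
        echo "waiting for $url ..." >&2
        sleep 1
    done
}
```

The entrypoint would then iterate the internal ports and run `wait_healthy "http://127.0.0.1:$PORT/_stcore/health" 60 || exit 1` for each before reaching `exec nginx -g "daemon off;"`.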

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

🤖 Fix all issues with AI agents
In `@Dockerfile`:
- Line 203: The nginx upstream block currently uses ip_hash which pins by
$remote_addr and will break behind a cloud LB/NAT; in the upstream named
streamlit_backend replace ip_hash with a header-based consistent hash (e.g. use
hash $http_x_real_ip consistent) and ensure the proxy setup forwards/relies on
X-Real-IP (see proxy_set_header X-Real-IP) so backend affinity uses the real
client IP header rather than $remote_addr.

In `@Dockerfile_simple`:
- Line 115: The generated nginx upstream uses ip_hash which hashes $remote_addr
and will collapse clients behind a proxy/NAT; update the upstream block for
streamlit_backend to use a consistent hash on the forwarded client IP instead
(e.g., replace the ip_hash directive in the upstream streamlit_backend with a
hash directive that uses $http_x_forwarded_for and the consistent keyword such
as hash $http_x_forwarded_for consistent;) and ensure the existing
proxy_set_header X-Real-IP / X-Forwarded-For headers remain so the backend
selection uses the original client IP.
🧹 Nitpick comments (1)
Dockerfile (1)

187-219: Load balancer logic is copy-pasted from Dockerfile_simple — extract to a shared script.

Lines 187–219 are nearly identical to lines 99–131 in Dockerfile_simple. If a bug is fixed in one (e.g., the sleep 2 readiness issue, input validation, or the nginx config), it must be manually replicated in the other — a classic DRY violation.

Since the previous review already recommended extracting entrypoint scripts to external files, consider also sharing the load-balancer setup logic between the two entrypoints (e.g., a setup-nginx.sh helper sourced by both entrypoint.sh and entrypoint_simple.sh).

Dockerfile Outdated
\n\
# Write nginx config\n\
mkdir -p /etc/nginx\n\
echo -e "worker_processes auto;\\npid /run/nginx.pid;\\n\\nevents {\\n worker_connections 1024;\\n}\\n\\nhttp {\\n upstream streamlit_backend {\\n ip_hash;\\n${UPSTREAM_SERVERS} }\\n\\n map \\$http_upgrade \\$connection_upgrade {\\n default upgrade;\\n \\x27\\x27 close;\\n }\\n\\n server {\\n listen 8501;\\n\\n location / {\\n proxy_pass http://streamlit_backend;\\n proxy_http_version 1.1;\\n proxy_set_header Upgrade \\$http_upgrade;\\n proxy_set_header Connection \\$connection_upgrade;\\n proxy_set_header Host \\$host;\\n proxy_set_header X-Real-IP \\$remote_addr;\\n proxy_set_header X-Forwarded-For \\$proxy_add_x_forwarded_for;\\n proxy_set_header X-Forwarded-Proto \\$scheme;\\n proxy_read_timeout 86400;\\n proxy_send_timeout 86400;\\n proxy_buffering off;\\n }\\n }\\n}" > /etc/nginx/nginx.conf\n\

⚠️ Potential issue | 🟡 Minor

Same ip_hash concern as Dockerfile_simple — will break behind an upstream proxy.

ip_hash hashes $remote_addr. When this container runs behind a cloud LB or NAT, all requests share the same peer IP and get pinned to a single backend. See the Dockerfile_simple review comment for the recommended hash $http_x_real_ip consistent; alternative.

🤖 Prompt for AI Agents
In `@Dockerfile` at line 203, the nginx upstream block currently uses ip_hash
which pins by $remote_addr and will break behind a cloud LB/NAT; in the upstream
named streamlit_backend replace ip_hash with a header-based consistent hash
(e.g. use hash $http_x_real_ip consistent) and ensure the proxy setup
forwards/relies on X-Real-IP (see proxy_set_header X-Real-IP) so backend
affinity uses the real client IP header rather than $remote_addr.

\n\
# Write nginx config\n\
mkdir -p /etc/nginx\n\
echo -e "worker_processes auto;\\npid /run/nginx.pid;\\n\\nevents {\\n worker_connections 1024;\\n}\\n\\nhttp {\\n upstream streamlit_backend {\\n ip_hash;\\n${UPSTREAM_SERVERS} }\\n\\n map \\$http_upgrade \\$connection_upgrade {\\n default upgrade;\\n \\x27\\x27 close;\\n }\\n\\n server {\\n listen 8501;\\n\\n location / {\\n proxy_pass http://streamlit_backend;\\n proxy_http_version 1.1;\\n proxy_set_header Upgrade \\$http_upgrade;\\n proxy_set_header Connection \\$connection_upgrade;\\n proxy_set_header Host \\$host;\\n proxy_set_header X-Real-IP \\$remote_addr;\\n proxy_set_header X-Forwarded-For \\$proxy_add_x_forwarded_for;\\n proxy_set_header X-Forwarded-Proto \\$scheme;\\n proxy_read_timeout 86400;\\n proxy_send_timeout 86400;\\n proxy_buffering off;\\n }\\n }\\n}" > /etc/nginx/nginx.conf\n\

⚠️ Potential issue | 🟡 Minor

ip_hash will collapse all clients to one backend when behind an upstream proxy or NAT.

ip_hash hashes $remote_addr, which is the direct peer IP. If this container sits behind a cloud load balancer, CDN, or corporate NAT, all users share the same $remote_addr and will always hit the same Streamlit instance — defeating load balancing entirely.

Consider using the hash directive with the forwarded client IP instead:

Proposed nginx upstream change (inside the generated config)
     upstream streamlit_backend {
-        ip_hash;
+        hash $http_x_real_ip consistent;
         server 127.0.0.1:8510;
         ...
     }

Alternatively, hash $http_x_forwarded_for consistent; works if X-Real-IP is not set by the outer proxy. The consistent keyword enables consistent hashing so adding/removing backends doesn't remap all sessions.

🤖 Prompt for AI Agents
In `@Dockerfile_simple` at line 115, the generated nginx upstream uses ip_hash
which hashes $remote_addr and will collapse clients behind a proxy/NAT; update
the upstream block for streamlit_backend to use a consistent hash on the
forwarded client IP instead (e.g., replace the ip_hash directive in the upstream
streamlit_backend with a hash directive that uses $http_x_forwarded_for and the
consistent keyword such as hash $http_x_forwarded_for consistent;) and ensure
the existing proxy_set_header X-Real-IP / X-Forwarded-For headers remain so the
backend selection uses the original client IP.

ip_hash pins all users behind the same NAT/VPN/reverse-proxy to a
single backend, defeating the load balancer. least_conn distributes
new connections to the instance with fewest active connections, and
once a WebSocket is established it stays on that backend for the
session lifetime, so sticky sessions are not needed.

https://claude.ai/code/session_018VEL5xKZfe4LCcUa8iUHJ9
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

🤖 Fix all issues with AI agents
In `@Dockerfile`:
- Line 203: The nginx upstream for streamlit_backend currently uses
"least_conn", which breaks Streamlit's in-process sessions; update the upstream
definition in the Dockerfile to enable session affinity for "upstream
streamlit_backend" (e.g., replace "least_conn" with "ip_hash" or a sticky
session directive such as the upstream sticky module) so reconnecting users are
routed to the same backend instance; ensure the change is applied where
"upstream streamlit_backend" and "least_conn" appear in the Dockerfile.

In `@Dockerfile_simple`:
- Line 115: The upstream directive uses "least_conn" which breaks Streamlit
session affinity (upstream block named streamlit_backend); change it to a sticky
hashing strategy such as replacing least_conn with "ip_hash" (or use "hash
$cookie_streamlit_session consistent" / "hash $http_x_forwarded_for consistent"
for proxied setups) so repeated requests and reconnects are routed to the same
backend and users don't lose in-process Streamlit state; update the upstream
block for streamlit_backend accordingly and ensure any proxy headers used (e.g.,
X-Forwarded-For) match the chosen hash strategy.
🧹 Nitpick comments (1)
Dockerfile (1)

162-220: Near-complete duplication of entrypoint logic between Dockerfile and Dockerfile_simple.

The load-balancing block (nginx config generation, Streamlit instance loop, readiness wait, nginx exec) is copy-pasted across both Dockerfiles. Any bug fix or improvement must be applied in two places. Consider extracting the shared logic into a common script (e.g., start-load-balancer.sh) that both entrypoints source or call.

Dockerfile Outdated
\n\
# Write nginx config\n\
mkdir -p /etc/nginx\n\
echo -e "worker_processes auto;\\npid /run/nginx.pid;\\n\\nevents {\\n worker_connections 1024;\\n}\\n\\nhttp {\\n upstream streamlit_backend {\\n least_conn;\\n${UPSTREAM_SERVERS} }\\n\\n map \\$http_upgrade \\$connection_upgrade {\\n default upgrade;\\n \\x27\\x27 close;\\n }\\n\\n server {\\n listen 8501;\\n\\n location / {\\n proxy_pass http://streamlit_backend;\\n proxy_http_version 1.1;\\n proxy_set_header Upgrade \\$http_upgrade;\\n proxy_set_header Connection \\$connection_upgrade;\\n proxy_set_header Host \\$host;\\n proxy_set_header X-Real-IP \\$remote_addr;\\n proxy_set_header X-Forwarded-For \\$proxy_add_x_forwarded_for;\\n proxy_set_header X-Forwarded-Proto \\$scheme;\\n proxy_read_timeout 86400;\\n proxy_send_timeout 86400;\\n proxy_buffering off;\\n }\\n }\\n}" > /etc/nginx/nginx.conf\n\

⚠️ Potential issue | 🔴 Critical

Same least_conn session-affinity issue as Dockerfile_simple.

Streamlit is stateful — least_conn will route reconnecting users to different backends, losing their in-process session state. See the detailed comment on Dockerfile_simple line 115.

🤖 Prompt for AI Agents
In `@Dockerfile` at line 203, the nginx upstream for streamlit_backend currently
uses "least_conn", which breaks Streamlit's in-process sessions; update the
upstream definition in the Dockerfile to enable session affinity for "upstream
streamlit_backend" (e.g., replace "least_conn" with "ip_hash" or a sticky
session directive such as the upstream sticky module) so reconnecting users are
routed to the same backend instance; ensure the change is applied where
"upstream streamlit_backend" and "least_conn" appear in the Dockerfile.

\n\
# Write nginx config\n\
mkdir -p /etc/nginx\n\
echo -e "worker_processes auto;\\npid /run/nginx.pid;\\n\\nevents {\\n worker_connections 1024;\\n}\\n\\nhttp {\\n upstream streamlit_backend {\\n least_conn;\\n${UPSTREAM_SERVERS} }\\n\\n map \\$http_upgrade \\$connection_upgrade {\\n default upgrade;\\n \\x27\\x27 close;\\n }\\n\\n server {\\n listen 8501;\\n\\n location / {\\n proxy_pass http://streamlit_backend;\\n proxy_http_version 1.1;\\n proxy_set_header Upgrade \\$http_upgrade;\\n proxy_set_header Connection \\$connection_upgrade;\\n proxy_set_header Host \\$host;\\n proxy_set_header X-Real-IP \\$remote_addr;\\n proxy_set_header X-Forwarded-For \\$proxy_add_x_forwarded_for;\\n proxy_set_header X-Forwarded-Proto \\$scheme;\\n proxy_read_timeout 86400;\\n proxy_send_timeout 86400;\\n proxy_buffering off;\\n }\\n }\\n}" > /etc/nginx/nginx.conf\n\

⚠️ Potential issue | 🔴 Critical

least_conn breaks Streamlit session affinity — users will lose state on reconnect.

Streamlit stores session state in-process. With least_conn, when a user refreshes the page or their WebSocket reconnects (network hiccup, browser tab sleep), the new connection is routed to whichever backend has the fewest connections — likely a different instance. This silently destroys the user's session state (uploaded files, workflow progress, widget values).

The commit message notes that "established WebSocket connections remain bound to their backend," which is true, but any new connection (page refresh, network interruption, new browser tab) will be load-balanced afresh, losing the session.

Neither least_conn nor ip_hash is ideal here. Consider ip_hash (acceptable for most deployments) or hash $cookie_streamlit_session consistent with a sticky cookie if you need to handle clients behind shared NATs:

Proposed approach using ip_hash (simplest fix)

Replace least_conn with ip_hash in the upstream block. For deployments behind a reverse proxy, use hash $http_x_forwarded_for consistent instead.

🤖 Prompt for AI Agents
In `@Dockerfile_simple` at line 115, the upstream directive uses "least_conn"
which breaks Streamlit session affinity (upstream block named
streamlit_backend); change it to a sticky hashing strategy such as replacing
least_conn with "ip_hash" (or use "hash $cookie_streamlit_session consistent" /
"hash $http_x_forwarded_for consistent" for proxied setups) so repeated requests
and reconnects are routed to the same backend and users don't lose in-process
Streamlit state; update the upstream block for streamlit_backend accordingly and
ensure any proxy headers used (e.g., X-Forwarded-For) match the chosen hash
strategy.

nginx defaults to 1MB max body size, which blocks Streamlit file
uploads with a 400 error. Set to 0 (unlimited) to let Streamlit
enforce its own 200MB limit from config.toml.

https://claude.ai/code/session_018VEL5xKZfe4LCcUa8iUHJ9
least_conn routes each HTTP request independently, so the file upload
POST (/_stcore/upload_file) can land on a different backend than the
WebSocket session, causing a 400 error.

Use hash $remote_addr$http_x_forwarded_for consistent instead:
- Provides session affinity so uploads hit the correct backend
- Behind a reverse proxy: XFF header differentiates real client IPs
- Direct connections: falls back to remote_addr (like ip_hash)
- "consistent" minimizes redistribution when backends are added/removed

https://claude.ai/code/session_018VEL5xKZfe4LCcUa8iUHJ9
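
Rendered into the generated upstream block, that directive change is small (server ports illustrative):

```nginx
upstream streamlit_backend {
    # One hash key covers both cases: behind a proxy the XFF header
    # differentiates clients; on direct connections it is empty and
    # the key degenerates to $remote_addr, matching ip_hash behavior.
    hash $remote_addr$http_x_forwarded_for consistent;
    server 127.0.0.1:8510;
    server 127.0.0.1:8511;
}
```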
Replace ip_hash/hash-on-IP with cookie-based session affinity using
nginx's built-in map and $request_id:

- map $cookie_stroute $route_key: if browser has a "stroute" cookie,
  reuse its value; otherwise fall back to $request_id (a unique random
  hex string nginx generates per-request)
- hash $route_key consistent: route based on the cookie/random value
- add_header Set-Cookie on every response to persist the routing key

This ensures each browser gets its own sticky backend regardless of
source IP, fixing both:
- File uploads (POST must hit the same backend as the WebSocket session)
- Load distribution when all users share the same IP (NAT/VPN/proxy)

No new packages required - uses only built-in nginx directives.

https://claude.ai/code/session_018VEL5xKZfe4LCcUa8iUHJ9
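
The map/hash scheme described in that commit could render roughly as follows. This is a sketch using only the directives the commit names (`map`, `hash … consistent`, `add_header Set-Cookie`); ports, cookie attributes, and the surrounding proxy headers are illustrative, not the PR's exact generated config.

```nginx
http {
    # Reuse the browser's "stroute" cookie if present; otherwise fall
    # back to nginx's per-request random $request_id.
    map $cookie_stroute $route_key {
        ""      $request_id;
        default $cookie_stroute;
    }

    upstream streamlit_backend {
        hash $route_key consistent;
        server 127.0.0.1:8510;
        server 127.0.0.1:8511;
    }

    server {
        listen 8501;
        location / {
            proxy_pass http://streamlit_backend;
            # Persist the routing key so every later request (WebSocket
            # reconnects, file-upload POSTs) hashes to the same backend.
            add_header Set-Cookie "stroute=$route_key; Path=/; HttpOnly";
        }
    }
}
```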
@coderabbitai

coderabbitai bot commented Feb 9, 2026

Caution

Failed to replace (edit) comment. This is likely due to insufficient permissions or the comment being deleted.

Error details
{"name":"HttpError","status":500,"request":{"method":"PATCH","url":"https://api.github.com/repos/OpenMS/streamlit-template/issues/comments/3871811057","headers":{"accept":"application/vnd.github.v3+json","user-agent":"octokit.js/0.0.0-development octokit-core.js/7.0.6 Node.js/24","authorization":"token [REDACTED]","content-type":"application/json; charset=utf-8"},"body":{"body":"<!-- This is an auto-generated comment: summarize by coderabbit.ai -->\n<!-- This is an auto-generated comment: review paused by coderabbit.ai -->\n\n> [!NOTE]\n> ## Reviews paused\n> \n> It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the `reviews.auto_review.auto_pause_after_reviewed_commits` setting.\n> \n> Use the following commands to manage reviews:\n> - `@coderabbitai resume` to resume automatic reviews.\n> - `@coderabbitai review` to trigger a single review.\n> \n> Use the checkboxes below for quick actions:\n> - [ ] <!-- {\"checkboxId\": \"7f6cc2e2-2e4e-497a-8c31-c9e4573e93d1\"} --> ▶️ Resume reviews\n> - [ ] <!-- {\"checkboxId\": \"e9bb8d72-00e8-4f67-9cb2-caf3b22574fe\"} --> 🔍 Trigger review\n\n<!-- end of auto-generated comment: review paused by coderabbit.ai -->\n<!-- walkthrough_start -->\n\n<details>\n<summary>📝 Walkthrough</summary>\n\n## Walkthrough\n\nDockerfiles add nginx and a generated /app/entrypoint.sh, plus a public env var STREAMLIT_SERVER_COUNT (default 1). 
At startup the entrypoint either runs a single Streamlit on 8501 or launches multiple Streamlit processes (8510+) and nginx on 8501 as a load balancer.\n\n## Changes\n\n|Cohort / File(s)|Summary|\n|---|---|\n|**Primary Dockerfile (run-app stage)** <br> `Dockerfile`|Install nginx, add `STREAMLIT_SERVER_COUNT` env (default 1), create `/app/entrypoint.sh` via heredoc and `chmod +x`, implement runtime logic: when count > 1 generate nginx upstream config, spawn Streamlit processes on ports 8510+, and start nginx on 8501; otherwise run single Streamlit on 8501.|\n|**Simplified Dockerfile variant** <br> `Dockerfile_simple`|Mirror changes from primary Dockerfile: install nginx, expose `STREAMLIT_SERVER_COUNT`, replace prior entrypoint with a heredoc-generated `/app/entrypoint.sh` that preserves cron startup and supports single or nginx-backed multi-instance operation.|\n|**Compose / service config** <br> `docker-compose.yml`|Remove explicit `streamlit run` command and set `STREAMLIT_SERVER_COUNT=1` (commented). 
Delegate startup behavior to image entrypoint and environment flag rather than an explicit command.|\n|**Manifest / metadata** <br> `manifest_file` (inline) |Lines changed reflect added env var and entrypoint/script generation: +39/+46 lines across manifests, updating run-app startup declarations.|\n\n## Sequence Diagram(s)\n\n```mermaid\nsequenceDiagram\n    participant Container as Container Start\n    participant Entrypoint as Entrypoint Script\n    participant Nginx as Nginx (8501)\n    participant S0 as Streamlit (8510 / 8501)\n    participant Sx as Streamlit (8511+)\n    participant Client as Client\n\n    Container->>Entrypoint: run image CMD -> entrypoint\n    Entrypoint->>Entrypoint: read STREAMLIT_SERVER_COUNT\n    alt STREAMLIT_SERVER_COUNT > 1\n        Entrypoint->>Entrypoint: generate nginx upstream (8510, 8511...)\n        Entrypoint->>S0: start Streamlit on 8510\n        Entrypoint->>Sx: start additional Streamlit instances\n        Entrypoint->>Nginx: write nginx.conf and start nginx on 8501\n        Client->>Nginx: HTTP request to 8501\n        Nginx->>S0: proxy to chosen instance\n        S0-->>Nginx: response\n        Nginx-->>Client: response\n    else STREAMLIT_SERVER_COUNT == 1\n        Entrypoint->>S0: start single Streamlit on 8501\n        Client->>S0: direct request to 8501\n        S0-->>Client: response\n    end\n```\n\n## Poem\n\n> 🐇  \n> I nibbled code and found a door,  \n> Streams multiply and hop some more,  \n> Nginx hums at eighty-five-oh-one,  \n> Ports parade beneath the sun,  \n> Hop, balance, serve — the batch is done!\n\n</details>\n\n<!-- walkthrough_end -->\n\n<!-- pre_merge_checks_walkthrough_start -->\n\n<details>\n<summary>🚥 Pre-merge checks | ✅ 3</summary>\n\n<details>\n<summary>✅ Passed checks (3 passed)</summary>\n\n|     Check name     | Status   | Explanation                                                                                                                                                    |\n| 
:----------------: | :------- | :------ |
| Description Check | ✅ Passed | Check skipped – CodeRabbit's high-level summary is enabled. |
| Title check | ✅ Passed | The title 'Add nginx load balancing support for multi-instance Streamlit deployments' accurately and clearly describes the primary change in the pull request. |
| Docstring Coverage | ✅ Passed | No functions found in the changed files to evaluate docstring coverage; the docstring coverage check was skipped. |

The comment-update request was retried 3 times (retry-after 16 s); the final response from `https://api.github.com/repos/OpenMS/streamlit-template/issues/comments/3871811057` was HTTP 500 with an empty body (Mon, 09 Feb 2026 16:24:10 GMT).
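The failed comment edit above was retried three times before GitHub's API returned a final HTTP 500. A minimal sketch of such a retry loop is below; `patch_with_retries` and its `send` callback are hypothetical stand-ins for illustration, not Octokit's actual implementation:

```python
import time

def patch_with_retries(send, retries=3, retry_after=16, sleep=time.sleep):
    """Call `send` (a stand-in for an HTTP PATCH), retrying on 5xx responses.

    Retries up to `retries` times, waiting `retry_after` seconds between
    attempts, and returns the final response plus the number of retries used.
    """
    attempts = 0
    while True:
        response = send()
        if response["status"] < 500 or attempts >= retries:
            return response, attempts
        attempts += 1
        sleep(retry_after)

# Simulate a server that fails twice before succeeding; skip real sleeping.
outcomes = iter([{"status": 500}, {"status": 500}, {"status": 200}])
response, attempts = patch_with_retries(lambda: next(outcomes), sleep=lambda _: None)
```

In the payload above the server kept returning 500, so all 3 retries were exhausted and the error surfaced as the bot comment.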

3 similar comments
@coderabbitai

coderabbitai bot commented Feb 9, 2026

Caution

Failed to replace (edit) comment. This is likely due to insufficient permissions or the comment being deleted.


@coderabbitai

coderabbitai bot commented Feb 9, 2026

Caution

Failed to replace (edit) comment. This is likely due to insufficient permissions or the comment being deleted.

1i8QuQJPq3QqJllWeTdAz1KB0BxFmP0xYwnM2gbfN8mQ7JeGJvEp+rp3SfSiIZAKvo1LFiwOLTWl5ry3i4CA7ASFLQoVXmEXKECYRZHujCSM4gYxZhoBQWOutg6ZHjuff8GRfTUEgInG+ik1LUWylS52+xdr6WbOSYsyBmUB1jHSxaocMxWiwCVWIsB8ClEUh9fltFnYZ0gsWCgdd/iKSaAbd69hEiBzVSHXZ5RI48CDMKm2d4QpwAqPYzCkTHZ6P+S8ZA7zUifMstZTQfUIT1PqRasFUUPoGNxF1TI9FRB4A3IpWozxArfShVNRWgosyNHIPYKiNFnmcOSlgah90mKwFRn65IFB7qhmrFDbJuSNIUFxFkJBKAcrcF5ixH+JAh1COpWizcJSjKFsYLdFQSF1C0PMJYHwhQsqdwmqxTJzRRBxC4Xu6ZgsgwVT4O8YGkULLiHECEkKxFWlE3YB0smFMLR0C4G7TGBKcZa3tsJMZJNYhTLtt0G9TToXwUgJ2GctIIU2FDDk1IeTp4kFGCFKwjS71MVtFgIJgQf33r4uRKt1EbTPRXBWpIwbuadWBUmpmXjXn0AjdwKN5yY0OXPuK9QucIUBoiV4ZczDXzMEUJAZOYROPcfI50PjaY6j0HwKw4IFaKgvNPXaIwm7IBnEwPAUs/8ABikSfCx1kMfCgRhZjBEzMR0g0oZMDBVGsMA+oDAuKcQvWxInoG2Xwt4Hzrj3GeJ0z4/RzgAmMDfCRxAoS/CUoSf6RAshSgkBhDmPEmG/xlEQfWoubL7q0BqZmFpxs2nvs6eTbpaNen/o1oBol/o5n3UWX5ZZ5Q1OnF0xgVGskC2ZkPbhELZHvlXOGTdPI/tCSOcozW7YQrgjqHJo0Y+yBvI6V2tghGUrUUxHRQulZfBcVkEaxjMWLWvZXhvOqRSiM0BqUNsq1l0cOUOlQQmBFrrNWKSi5K5R0q4qsNpfGYqcFDXMONSi50GdggOq2SQHZ/4Mx5HEXm0M9r0dggoPQH7i0VooknqchiZBKXGpSDwagwl0sD09RBKCyq0d7IqN0Z2roMK0HE74x1qONX7OtLu4CM2/pbhuLiX03k530AYF4TAKQhu7h5MgeiOcZeEwwxkGnAhDmKz4B9bYNwoRT1LDr8s3BsjDpm0jw95A9FfN4wZ7d7L637shRUJQCvcyEatKwi9GQr0NNvc0zo62n1EQwK+9pdWv2gVcwAAz/Tdz2gyk/CfCUF8bMYcN4fDzxy5FFq3UZ5j4rgSf5PO+Lw5JPRgt7hlM7gQc0CDB9hnP2GcnZ7OOe2wllzXBk7amGJ57zvn/NgCMEzeCYBWb6sIZzZx4Wt2RbrXzPxBi4vOafcNuIpOvd0RCK0SKTQeHG9YhHInTQZs7j3FDZRVXCZx8/Q11PHsBla0XOfYICuMS+kFjJGoHSDEVYHYEQFMiLFQB33WjKF12t31wVmOWyEZSpUANPwomdWDFDCzAkBxGK39FqDNl9GAkPTwkwwwVA3FDOGggUGYDUHIHoFvwdxICdxj2qzfSuSHWngzkPQ6wWRt2iSiT0RaEpy1UP3INIEoKnVpkwOcCtzoMbzqWYUPRWghCpnxg4Jfy6S8DzTp0oEhWUUf2PyaUTCwJ/hwOCDwIYHrToUM13D1TqUTy4EdnIBCjOAcOk1wiIHtHLFqG5jHQryiTcKgFpFf2QlqHClI1cJIF73IH72CVc2H083VCME7EGi3m4XEUnQhFYLonBiDC4FmGiX01dx3QG090PR9xPQ93PWP0vTU2vQL3vUjySxChnDtFiIMCMybxrhQliEyyQE4gMAc3iPi0SKH21BSLCz8y/CsRsSYTU1YWbHYSB00VHm0R/hexi0MW3CYBMRUDUHMRkXmLkUUPUEHEhEQEHFqFyLcloBwjgMyHoQMDOIEH1GpAECBAGBRAECyCBGGBVBIBIGpBIHVAEBHwYAYG1ENAEFoAYAGBlDQG1CyGpBVBRIYDtlkUYQORlAGAAHYV
RbYVR8T9R1RqRhg0AZQ6ggQVRhg8SBBhgjoGBUTtQBB1Q0AgRtRVN8TLFDAzigQ0A0A0SCTaABhtQZQlBrYQSlAZR9Rhh8SSB5TqRqQmwVSVQVQbYZQ5RtQ+TXicSGAgR8SgRvimxhhwSBh1R8SBh9Qsh9QwSBBqR1RDQVR/A0B5T1R1RaB1QGBhgviopsSIBIBhguTvTqRtRqQ0A6hjSfTSSVQsgZR4zGSR8BgVTtRaAiTkz3MES9T9SgyeELiribiSA7i6BBxKc9SzippN5KBSBBxXxRB01Hj5DcyABvLiSAJxJAWwGWEqfEWgHaMAzoKwfVKUJxLgJsDwa1DspxISREDwWgXsuCWwcc7IWIaciYLsxAI4FMaiUCDAVcycjczs6sWgFDDAdCOMRIRAMUYsPEVcmoaueSGc08889wXALwW8xsh8kMEgZ8zc180MaCaQUvQjL8+8ic9cv8mcitfsvsSeP+OMVclxf8zsuIUocC3EWSBwQoRAVcgAbQ7ImHbImFIs7IbLxBnHqBIGQuAsoio0I3FDvNxCcVQrItnLKHLDwsBF/LYtIqcUAMwHrWQswvsGl1SE3G2nERsEOPUAwQQBckJBLJIC5k31ixHSKW6VoA0FYqIv4qkyUGQrUmcHDECF0rIs3PSA8mCFiEwqorYGQqUHopok7j8zIoAF8+KSKLKnEKLcR7KaKuAnF3zkI/LzKfKNpBQfyny9LNzBLfDXKgq20H1kIAByFLalDFfyMpJIBwQMS6HIPgVnIXVVPWJoFoNoZFVK9AKEpKGgM2ErBXOoCgBqpy6tfXCaYtaiLfcYkjCFVQj4SdKuGuHSvizcgywKzs4yjlRIcK9ik2PbRKWoQ8qCsazsqymKWy5igK5ClKmivSzyvS7y9ivynaoK9Ca4X0HaFMTDOa/iyK7iv0Xi2Kzs+K4SoKjo0Sckf3fqLtfqqFHfegaBT3Es8DJ6JmS6qGJgG60gQsSocS7gX0CGmoGXfAGGioPy0al6pxBGBZPARKzs+kCVb3HESGogZAQqw/LIUMMQM9CKSeX0TJGEdLQCRALIeQPgtG5QApE3aQPVBcrGiyzsiaoyky2atapxDamyjwOy6ixy0mlGwIPCg6jsgAXVQqcXQtwFsDotAoJt8v1AYCUERNLBlAEARLtJlHxO1AYFJNoHBidMpKyDBEdLIRVKZIRJIGGCGG5I5NECUBHyyHVDxKyGTPCs1rdG1psBCsmt8sdIYAhO5ORKiAjKBARP1CRMVNoCZPjKtrFIYHtNpIEGpPVHNMNCYhsP1AGFtmpDFJiHxNoHxIRIVHDuRshiIGuu5pIEqXZVjn5BoFXNbPcoMGHrzKgGrLYAoDrL8uuIrMDKgBWPwEHBzEFBIGbJoGbMuheNbIjtKCsGbHdB8FwFklLIHNHnUB2lDFwHHP1FHrOMXuXoPrXo2jXrnqsSAA== -->\n\n<!-- internal state end -->"},"request":{"retryCount":3,"retries":3,"retryAfter":16}},"response":{"url":"https://api.github.com/repos/OpenMS/streamlit-template/issues/comments/3871811057","status":500,"headers":{"access-control-allow-origin":"*","access-control-expose-headers":"ETag, Link, Location, Retry-After, X-GitHub-OTP, X-RateLimit-Limit, X-RateLimit-Remaining, X-RateLimit-Used, X-RateLimit-Resource, X-RateLimit-Reset, X-OAuth-Scopes, X-Accepted-OAuth-Scopes, X-Poll-Interval, X-GitHub-Media-Type, X-GitHub-SSO, 
X-GitHub-Request-Id, Deprecation, Sunset","content-length":"0","content-security-policy":"default-src 'none'","content-type":"application/json; charset=utf-8","date":"Mon, 09 Feb 2026 16:24:10 GMT","referrer-policy":"origin-when-cross-origin, strict-origin-when-cross-origin","server":"github.com","strict-transport-security":"max-age=31536000; includeSubdomains; preload","vary":"Accept-Encoding, Accept, X-Requested-With","x-accepted-github-permissions":"issues=write; pull_requests=write","x-content-type-options":"nosniff","x-frame-options":"deny","x-github-api-version-selected":"2022-11-28","x-github-media-type":"github.v3; format=json","x-github-request-id":"0841:3D8207:10F1DDF:48F9F73:698A0A28","x-ratelimit-limit":"6100","x-ratelimit-remaining":"5875","x-ratelimit-reset":"1770654859","x-ratelimit-resource":"core","x-ratelimit-used":"225","x-xss-protection":"0"},"data":""}}

@coderabbitai

coderabbitai bot commented Feb 9, 2026

Caution

Failed to replace (edit) comment (HTTP 500 from the GitHub API). This is likely due to insufficient permissions or the comment being deleted. The walkthrough the bot attempted to post is reproduced below.

📝 Walkthrough

The Dockerfiles add nginx and a generated `/app/entrypoint.sh`, plus a public env var `STREAMLIT_SERVER_COUNT` (default 1). At startup the entrypoint either runs a single Streamlit on 8501, or launches multiple Streamlit processes (ports 8510+) with nginx on 8501 as a load balancer.

Changes

| Cohort / File(s) | Summary |
|---|---|
| **Primary Dockerfile (run-app stage)** <br> `Dockerfile` | Install nginx, add `STREAMLIT_SERVER_COUNT` env (default 1), create `/app/entrypoint.sh` via heredoc and `chmod +x`. At runtime, when count > 1: generate the nginx upstream config, spawn Streamlit processes on ports 8510+, and start nginx on 8501; otherwise run a single Streamlit on 8501. |
| **Simplified Dockerfile variant** <br> `Dockerfile_simple` | Mirror the changes from the primary Dockerfile: install nginx, expose `STREAMLIT_SERVER_COUNT`, and replace the prior entrypoint with a heredoc-generated `/app/entrypoint.sh` that preserves cron startup and supports single- or nginx-backed multi-instance operation. |
| **Compose / service config** <br> `docker-compose.yml` | Remove the explicit `streamlit run` command and set `STREAMLIT_SERVER_COUNT=1` (commented). Startup behavior is delegated to the image entrypoint and the environment flag rather than an explicit command. |
| **Manifest / metadata** <br> `manifest_file` (inline) | Lines changed reflect the added env var and entrypoint/script generation: +39/+46 lines across manifests, updating run-app startup declarations. |

Sequence diagram

```mermaid
sequenceDiagram
    participant Container as Container Start
    participant Entrypoint as Entrypoint Script
    participant Nginx as Nginx (8501)
    participant S0 as Streamlit (8510 / 8501)
    participant Sx as Streamlit (8511+)
    participant Client as Client

    Container->>Entrypoint: run image CMD -> entrypoint
    Entrypoint->>Entrypoint: read STREAMLIT_SERVER_COUNT
    alt STREAMLIT_SERVER_COUNT > 1
        Entrypoint->>Entrypoint: generate nginx upstream (8510, 8511...)
        Entrypoint->>S0: start Streamlit on 8510
        Entrypoint->>Sx: start additional Streamlit instances
        Entrypoint->>Nginx: write nginx.conf and start nginx on 8501
        Client->>Nginx: HTTP request to 8501
        Nginx->>S0: proxy to chosen instance
        S0-->>Nginx: response
        Nginx-->>Client: response
    else STREAMLIT_SERVER_COUNT == 1
        Entrypoint->>S0: start single Streamlit on 8501
        Client->>S0: direct request to 8501
        S0-->>Client: response
    end
```

🚥 Pre-merge checks | ✅ 3 passed

| Check name | Status | Explanation |
|---|---|---|
| Description check | ✅ Passed | Check skipped; CodeRabbit's high-level summary is enabled. |
| Title check | ✅ Passed | The title 'Add nginx load balancing support for multi-instance Streamlit deployments' accurately and clearly describes the primary change in the pull request. |
| Docstring coverage | ✅ Passed | No functions found in the changed files to evaluate docstring coverage; check skipped. |

> 🐇
> I nibbled code and found a door,
> Streams multiply and hop some more,
> Nginx hums at eighty-five-oh-one,
> Ports parade beneath the sun,
> Hop, balance, serve — the batch is done!
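The multi-instance branch described in the walkthrough (generate an `ip_hash` upstream for ports 8510+, add WebSocket headers, disable buffering) can be sketched as a small POSIX-sh helper. This is a hedged illustration: the function name `generate_nginx_conf`, the port numbers, and the exact directives are assumptions for clarity, not the PR's actual entrypoint code.

```shell
# Sketch: render an nginx config for N Streamlit instances on ports 8510+.
# All names here are illustrative; the real entrypoint.sh may differ.
generate_nginx_conf() {
    count="$1"
    base_port=8510
    echo "upstream streamlit {"
    echo "    ip_hash;  # sticky sessions: a given client IP always hits the same instance"
    i=0
    while [ "$i" -lt "$count" ]; do
        echo "    server 127.0.0.1:$((base_port + i));"
        i=$((i + 1))
    done
    echo "}"
    # Quoted heredoc so nginx variables like $http_upgrade are not expanded by sh.
    cat <<'EOF'
server {
    listen 8501;
    location / {
        proxy_pass http://streamlit;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;   # WebSocket upgrade
        proxy_set_header Connection "upgrade";
        proxy_read_timeout 86400;                 # keep long-lived WS connections open
        proxy_buffering off;                      # stream responses without buffering
    }
}
EOF
}

# Example: render a config for three instances (ports 8510-8512).
generate_nginx_conf 3
```

In the real entrypoint this output would be written to an nginx conf path before starting the Streamlit processes in the background and running nginx in the foreground; with `STREAMLIT_SERVER_COUNT=1` none of this runs and Streamlit binds 8501 directly.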

@t0mdavid-m t0mdavid-m merged commit 126a74b into main Feb 9, 2026
1 of 2 checks passed

2 participants