Description
The current AI Workbench tutorial for running a multi-container web application with Docker Compose (e.g. Flask + Redis) is missing critical information.
Following the tutorial verbatim results in a running Compose stack that is not accessible, with no clear indication of the additional Workbench-specific steps required.
This caused significant confusion because the containers are healthy and running, yet the application cannot be accessed via browser or localhost.
Environment
- NVIDIA AI Workbench (desktop app)
- Docker Compose (managed by Workbench)
- Example stack: Flask + Redis
- Host: macOS
- Target: Remote system (DGX Spark)
What the tutorial implies
The tutorial suggests that:
- Starting Docker Compose is sufficient to run the web app
- Port mappings (ports: in docker-compose.yml) are enough to make the app accessible
- The user can verify the app via localhost:
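For context, this is the shape of compose file the tutorial describes. A minimal sketch, not the tutorial's exact file; service names, image, and port are illustrative:

```yaml
# Illustrative sketch of the tutorial's Flask + Redis stack (names/port assumed).
services:
  web:
    build: .
    ports:
      - "5000:5000"   # the tutorial implies this mapping alone makes the app reachable from the host
  redis:
    image: redis:alpine
```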
Actual behavior
After following the tutorial:
- Docker Compose is running
- All services (web, redis) are healthy
- Flask is listening correctly on 0.0.0.0
- But the application is not accessible:
  - curl localhost: fails
  - “Open in Location” shows no configured locations
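To illustrate why the stack looks healthy from inside the container, here is a minimal stdlib sketch (no Flask required, standing in for the tutorial's app) of the kind of check that passes: a server bound to 0.0.0.0 answers loopback requests, which suggests the missing piece is the host-to-container route, not the app itself.

```python
# Minimal stand-in for the tutorial's Flask app (hypothetical, stdlib only):
# binding to 0.0.0.0 and answering a loopback request, as the real app does.
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):  # keep request logging quiet
        pass

# Bind to all interfaces (port 0 = pick any free port), like Flask's host="0.0.0.0"
server = HTTPServer(("0.0.0.0", 0), Handler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# The in-container loopback request succeeds even when the host cannot reach
# the service, so the service itself is not the problem.
body = urllib.request.urlopen(f"http://127.0.0.1:{port}/").read()
print(body.decode())  # prints "ok"
server.shutdown()
```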
From a user's perspective everything looks correct: containers are running and there are no errors. There is no feedback explaining why the app isn't reachable. Users naturally try docker compose ps, but the Docker CLI is not available inside the container.
The failure mode feels like a bug, not a missing configuration step. This is especially confusing for users coming from standard Docker / Compose workflows.
Happy to clarify or test any proposed documentation changes if helpful. Thank you!