diff --git a/README.md b/README.md
index 1001791..834f122 100644
--- a/README.md
+++ b/README.md
@@ -29,6 +29,14 @@ $ deadline bundle gui-submit job_bundles/gui_control_showcase
 
 ![deadline bundle gui-submit showcase](.images/deadline-bundle-gui-submit-showcase.png)
 
+## Container samples
+
+The [containers](containers) directory contains Dockerfiles for building
+container images compatible with Deadline Cloud worker environments. The
+[al2023-deadline](containers/al2023-deadline/) sample replicates the
+service-managed fleet worker AMI package set on Amazon Linux 2023, useful for
+building and testing conda packages or other software locally.
+
 ## Conda recipes
 
 The [conda_recipes](conda_recipes) directory contains samples and tooling for building conda packages for your
diff --git a/containers/README.md b/containers/README.md
new file mode 100644
index 0000000..f8bdc8d
--- /dev/null
+++ b/containers/README.md
@@ -0,0 +1,18 @@
+# AWS Deadline Cloud container samples
+
+The container samples in this directory provide Dockerfiles and related resources
+for building container images compatible with
+[AWS Deadline Cloud](https://aws.amazon.com/deadline-cloud/) worker environments.
+
+Use these to build and test software locally with the same system libraries,
+toolchains, and runtime environment as Deadline Cloud workers.
+
+## Samples
+
+### AL2023 worker-equivalent image
+
+The [al2023-deadline](al2023-deadline/) sample provides a Dockerfile that
+replicates the package set of the Deadline Cloud service-managed fleet (SMF)
+worker AMI on top of the base Amazon Linux 2023 image. Use it to build and test
+[conda packages](../conda_recipes/) or other software that must be compatible
+with the worker runtime.
diff --git a/containers/al2023-deadline/README.md b/containers/al2023-deadline/README.md
new file mode 100644
index 0000000..9b698c6
--- /dev/null
+++ b/containers/al2023-deadline/README.md
@@ -0,0 +1,72 @@
+# AL2023 Deadline Cloud worker-equivalent image
+
+This Dockerfile replicates the package set of an April 2026 snapshot of the
+AWS Deadline Cloud service-managed fleet (SMF) worker AMI on top of the base
+Amazon Linux 2023 image. It is a point-in-time capture; the actual worker AMI
+may drift as packages are added or updated.
+
+## Use cases
+
+- Build and test [conda packages](../../conda_recipes/) with the same GLIBC
+  version, system libraries, and runtime environment as real workers.
+- Reproduce worker-side build or runtime failures locally.
+- Validate that your software dependencies are satisfied by the worker
+  environment before submitting jobs.
+
+## What's included
+
+The image installs packages in layered groups matching the worker AMI:
+
+| Layer | Contents |
+|-------|----------|
+| Core system tools | `jq`, `git`, `wget`, `unzip`, `vim`, `sudo`, `rsync`, … |
+| Build toolchain | GCC, G++, `binutils`, `kernel-headers`, `glibc-devel`, `zlib-devel` |
+| X11 / Mesa / OpenGL | Headless rendering dependencies (`libX11`, `mesa-libGL`, `libglvnd`, …) |
+| Image / media libs | `libjpeg-turbo`, `libpng`, `libtiff`, `libwebp` |
+| Networking / NFS / security | NFS utils, NSS, SSSD, `openssh-clients`, `iptables-nft` |
+| Python 3.11 | `python3.11`, `pip`, `setuptools` |
+| Docker / containerd | For container-in-container workflows (host socket mount or `--privileged`) |
+| Misc | AWS CLI v2, Boost, jemalloc, TBB, and other libraries present on the worker |
+
+## Building the image
+
+```bash
+docker build -f Dockerfile.worker-equivalent -t al2023-deadline:latest .
+```
+
+## Running a container
+
+```bash
+# Interactive shell
+docker run --rm -it al2023-deadline:latest
+
+# Build a conda package inside the container
+docker run --rm -v "$PWD":/work -w /work al2023-deadline:latest \
+    bash -c "pip3.11 install conda-build && conda build my-recipe/"
+```
+
+## GPU support
+
+For NVIDIA GPU support, add the NVIDIA container toolkit repository before
+building:
+
+```dockerfile
+RUN dnf config-manager --add-repo \
+    https://nvidia.github.io/libnvidia-container/stable/rpm/nvidia-container-toolkit.repo \
+    && dnf install -y nvidia-container-toolkit
+```
+
+Then run the container with `--gpus all` (the NVIDIA Container Toolkit must
+also be installed and configured on the Docker host; installing it in the
+image alone is not sufficient):
+
+```bash
+docker run --rm --gpus all al2023-deadline:latest nvidia-smi
+```
+
+## Limitations
+
+- This is a **point-in-time snapshot** (April 2026). The actual SMF worker AMI
+  may have newer or additional packages.
+- Docker-in-Docker is not enabled by default. Mount the host Docker socket or
+  use `--privileged` if you need it.
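As a sketch of the "validate dependencies" use case in the al2023-deadline README: export the image's installed package list once, then diff it against the packages your software needs. The export command assumes the `al2023-deadline:latest` image built above; the `required.txt`/`available.txt` file names and the package lists below are hypothetical, for illustration only.

```shell
# One-time export of the image's package names (assumes the image built above):
#   docker run --rm al2023-deadline:latest rpm -qa --qf '%{NAME}\n' | sort > available.txt

# Hypothetical lists standing in for real exports:
printf 'libjpeg-turbo\nlibpng\n' | sort > required.txt
printf 'libjpeg-turbo\nlibpng\nmesa-libGL\n' | sort > available.txt

# comm -23 prints lines present only in required.txt, i.e. missing packages.
missing=$(comm -23 required.txt available.txt)
if [ -z "$missing" ]; then
  echo "all required packages present"   # → printed for the sample lists above
else
  echo "missing packages:"
  echo "$missing"
fi
```

Because the comparison runs on plain sorted text files, the same check works against any package inventory, not just `rpm -qa` output.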