This repository was archived by the owner on Jan 3, 2024. It is now read-only.

Intentionally optimize Docker image #18

@pihart

The current Docker image is huge—even with recent optimizations (#13, #16), it is around 3.9 GB!

Part of the reason is simply that the image contains so many packages. But another part is that it is built around the existing Puppet provisioner, which isn't amenable to the kinds of optimizations that container images are.

For example, in a Dockerfile it is much easier to install some software and then copy over only the relevant portions, leaving behind caches and even parts of the program that will never be used. You can download each dependency in a parallel stage and selectively copy it into the final image, dramatically improving build and rebuild performance. Better yet, you can copy software directly from prebuilt official images; the software providers have done the hard work of isolating the necessary components, and the software is already built.
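A minimal multi-stage sketch of the idea (the image tags, URL, and paths below are illustrative assumptions, not taken from this repository's Dockerfile):

```dockerfile
# Fetch one dependency in its own stage; its caches never reach the final image.
FROM curlimages/curl:latest AS fetch-tool
# Hypothetical download URL, shown only to illustrate the pattern.
RUN curl -fsSL -o /tmp/tool.tar.gz https://example.com/tool.tar.gz \
 && mkdir -p /tmp/tool && tar -xzf /tmp/tool.tar.gz -C /tmp/tool

FROM ubuntu:22.04
# Copy only the relevant portion of the download; everything else is left behind.
COPY --from=fetch-tool /tmp/tool/bin/tool /usr/local/bin/tool
# Copy prebuilt software straight from an official image instead of installing it.
COPY --from=golang:1.21 /usr/local/go /usr/local/go
ENV PATH="/usr/local/go/bin:${PATH}"
```

Because each stage builds independently, BuildKit can fetch them in parallel, and a change to one dependency only invalidates that stage's cache rather than the whole build.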

Intentionally optimizing for containerized workloads might be worthwhile. This is best done together with #17; see that issue for the general strategy, as well as other suggestions such as slim image variants.[^1]
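For instance, slim variants of official images drop documentation, build toolchains, and other extras. The tag and sizes below are illustrative assumptions, not recommendations taken from #17:

```dockerfile
# python:3.12 is on the order of 1 GB; python:3.12-slim is closer to 150 MB
# (sizes approximate and subject to change).
FROM python:3.12-slim
```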

[^1]: But with a lot more work, it is also possible to rewrite the Dockerfile so that it completely bypasses Puppet, while still having students use Puppet in a VM.
