> [!IMPORTANT]
> This repository is a fork of DownUnderCTF's kube-ctf, containing a subset of their services and setup.
This repository contains the Kubernetes challenge manager from kube-ctf modified to fit into the CTF Pilot ecosystem.
The Challenge manager is used to spin up containers for isolated challenges, giving every team a unique instance of a challenge.
- A request is made by the CTFd scoreboard (in reality it can be whatever system you choose).
- The team ID or user ID is hashed together with the challenge name and a secret to produce a container ID. This ensures that each user gets a unique hostname, and that if the challenge is relaunched, the user keeps the same hostname.
- The template is fetched from the Kubernetes control plane. We then check whether the template can be executed at the current time.
- The container ID, owner ID, and expiry date are injected into the template.
- This template is then applied using the Kubernetes API.
- If the player wants to extend the challenge expiry, they may do so by clicking the extend button. This resets the challenge's expiry back to its original duration.
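The container ID derivation in the second step can be sketched as follows. This is a hypothetical illustration, not the exact kube-ctf implementation: the hash construction (HMAC-SHA256), the message layout, and the 16-character truncation are all assumptions.

```python
import hashlib
import hmac

def container_id(owner_id: str, challenge_name: str, secret: str) -> str:
    """Derive a deterministic, DNS-friendly container ID for a team/challenge pair."""
    # Hypothetical sketch: HMAC keyed with the container secret over the
    # owner and challenge name; the real derivation may differ.
    message = f"{owner_id}:{challenge_name}".encode()
    digest = hmac.new(secret.encode(), message, hashlib.sha256).hexdigest()
    # Truncate to a short label usable as a hostname component.
    return digest[:16]
```

Because the result depends only on the inputs, relaunching a challenge with the same owner and secret reproduces the same hostname.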
In hindsight, we could have used Kubernetes Secrets for some of the config values, which we will probably do at some point, but this is quick and dirty.
# Environment variables
```sh
KUBECTF_NAMESPACE="default" # namespace to run the isolated challenges in
KUBECTF_BASE_DOMAIN="example.com" # base domain of the isolated challenges
KUBECTF_API_DOMAIN="challenge-manager.${BASE_DOMAIN}" # api domain of the isolated challenges
KUBECTF_MAX_OWNER_DEPLOYMENTS="0" # maximum number of deployments per team; 0 is unlimited
KUBECTF_AUTH_SECRET="keyboard-cat" # secret to sign the JWT for CTFd
KUBECTF_CONTAINER_SECRET="keyboard-cat" # secret to generate the container IDs
KUBECTF_REGISTRY_PREFIX="gcr.io/downunderctf" # container registry prefix exposed through the handlebars variable registry_prefix
```

Containers are cleaned up using kube-janitor, a job that runs periodically and removes resources that are past a certain expiry date, as we can't rely on players to remember to shut down their resources.
An example template has been provided in the root of this repository at templates/whoami/kube-isolated.yaml.
Currently the system expects at least the values below to be defined for every resource. The challenge manager
does not validate that these exist at runtime (something we're looking at fixing), but
will fail to renew or remove the resource later on if they are missing.
```yaml
labels:
  challenges.ctfpilot.com/type: misc
  challenges.ctfpilot.com/name: challenge
  instanced.challenges.ctfpilot.com/deployment: "{{ deployment_id }}"
  instanced.challenges.ctfpilot.com/owner: "{{ owner_id }}"
annotations:
  janitor/expires: "{{ expires }}"
```
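The injection step that fills these placeholders can be sketched as plain string substitution. This is a hypothetical illustration: the real manager uses handlebars templating, and the 30-minute default TTL shown here is an assumption, not a value from the source.

```python
from datetime import datetime, timedelta, timezone

def render_template(template: str, deployment_id: str, owner_id: str,
                    ttl: timedelta = timedelta(minutes=30)) -> str:
    # Hypothetical sketch: substitute the handlebars-style placeholders
    # shown above. The TTL default is an assumed value.
    expires = datetime.now(timezone.utc) + ttl
    return (template
            .replace("{{ deployment_id }}", deployment_id)
            .replace("{{ owner_id }}", owner_id)
            .replace("{{ expires }}", expires.strftime("%Y-%m-%dT%H:%M:%SZ")))
```

The rendered manifest is then applied through the Kubernetes API, as described in the flow above.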
It may be resource heavy to spin up separate databases for each team, especially in larger CTFs. This system has worked well for hosting DUCTF, and we have load tested it up to 1000 concurrent containers on Google Kubernetes Engine. It can probably support more, and should only be limited by how fast the control plane can handle requests.
We welcome contributions of all kinds, from code and documentation to bug reports and feedback!
Please check the Contribution Guidelines (CONTRIBUTING.md) for details on how to contribute.
To maintain the ability to distribute contributions across all our licensing models, all code contributions require signing a Contributor License Agreement (CLA).
You can review the CLA here. CLA signing happens automatically when you create your first pull request.
To administrate the CLA signing process, we are using CLA assistant lite.
A copy of the CLA document is also included in this repository as CLA.md.
Signatures are stored in the cla repository.
This repository is licensed under the MIT License, in accordance with the kube-ctf licensing.
You can find the full license in the LICENSE file.
We expect all contributors to adhere to our Code of Conduct to ensure a welcoming and inclusive environment for all.