diff --git a/.github/workflows/python-ci.yml b/.github/workflows/python-ci.yml new file mode 100644 index 0000000000..28fa32cebc --- /dev/null +++ b/.github/workflows/python-ci.yml @@ -0,0 +1,73 @@ +name: Python CI (Lab03) + +on: + push: + branches: [ "main", "master", "lab3", "lab03" ] + paths: + - "app_python/**" + - ".github/workflows/python-ci.yml" + pull_request: + paths: + - "app_python/**" + - ".github/workflows/python-ci.yml" + +concurrency: + group: python-ci-${{ github.ref }} + cancel-in-progress: true + +jobs: + test-lint: + runs-on: ubuntu-latest + defaults: + run: + working-directory: app_python + + steps: + - uses: actions/checkout@v4 + + - name: Set up Python + uses: actions/setup-python@v5 + with: + python-version: "3.11" + cache: "pip" + cache-dependency-path: "app_python/requirements.txt" + + - name: Install deps + run: | + python -m pip install --upgrade pip + pip install -r requirements.txt + + - name: Lint (ruff) + run: ruff check . + + - name: Tests + run: pytest -q + + docker-build-push: + needs: test-lint + runs-on: ubuntu-latest + if: github.event_name == 'push' && (github.ref_name == 'main' || github.ref_name == 'master' || github.ref_name == 'lab3' || github.ref_name == 'lab03') + + steps: + - uses: actions/checkout@v4 + + - name: Set version (CalVer) + run: | + echo "CALVER=$(date +%Y.%m)" >> $GITHUB_ENV + echo "BUILD=${{ github.run_number }}" >> $GITHUB_ENV + + - name: Login to Docker Hub + uses: docker/login-action@v3 + with: + username: ${{ secrets.DOCKER_USERNAME }} + password: ${{ secrets.DOCKER_PASSWORD }} + + - name: Build & Push + uses: docker/build-push-action@v6 + with: + context: ./app_python + file: ./app_python/Dockerfile + push: true + tags: | + ${{ secrets.DOCKER_USERNAME }}/devops-info-service:${{ env.CALVER }}.${{ env.BUILD }} + ${{ secrets.DOCKER_USERNAME }}/devops-info-service:latest diff --git a/.gitignore b/.gitignore new file mode 100644 index 0000000000..2c181ea163 --- /dev/null +++ b/.gitignore @@ -0,0 +1,23 @@ +__pycache__/ +*.py[cod] +venv/ +__MACOSX/ +*.log + +.vscode/ +.idea/ + +.obsidian + +.DS_Store +.env + +terraform/.terraform/ +terraform/.terraform.lock.hcl +terraform/terraform.tfstate +terraform/terraform.tfstate.* +**/.terraform/* +**/*.tfstate +**/*.tfstate.* + +pulumi/venv/ \ No newline at end of file diff --git a/README.md b/README.md index 0b159ed716..371d51f456 100644 --- a/README.md +++ b/README.md @@ -1,81 +1,271 @@ -# DevOps Engineering Labs +# DevOps Engineering: Core Practices -## Introduction +[![Labs](https://img.shields.io/badge/Labs-18-blue)](#labs) +[![Exam](https://img.shields.io/badge/Exam-Optional-green)](#exam-alternative) +[![Duration](https://img.shields.io/badge/Duration-18%20Weeks-lightgrey)](#course-roadmap) -Welcome to the DevOps Engineering course labs! These hands-on labs are designed to guide you through various aspects of DevOps practices and principles. As you progress through the labs, you'll gain practical experience in application development, containerization, testing, infrastructure setup, CI/CD processes, and more. +Master **production-grade DevOps practices** through hands-on labs. Build, containerize, deploy, monitor, and scale applications using industry-standard tools. 
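The workflow above derives the image version from the calendar date plus the Actions run number. A minimal local sketch of how that tag is assembled (`GITHUB_RUN_NUMBER` is injected by GitHub Actions and is stubbed here; the image name is illustrative):

```bash
# Reproduce the CalVer tag the workflow builds from date + run number.
GITHUB_RUN_NUMBER="${GITHUB_RUN_NUMBER:-42}"   # stub outside of CI
CALVER="$(date +%Y.%m)"                        # e.g. 2025.01
echo "devops-info-service:${CALVER}.${GITHUB_RUN_NUMBER}"
```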
-## Lab Syllabus +--- -Lab 1: Web Application Development -Lab 2: Containerization -Lab 3: Continuous Integration -Lab 4: Infrastructure as Code & Terraform -Lab 5: Configuration Management -Lab 6: Ansible Automation -Lab 7: Observability, Logging, Loki Stack -Lab 8: Monitoring & Prometheus -Lab 9: Kubernetes & Declarative Manifests -Lab 10: Helm Charts & Library Charts -Lab 11: Kubernetes Secrets Management (Vault, ConfigMaps) -Lab 12: Kubernetes ConfigMaps & Environment Variables -Lab 13: GitOps with ArgoCD -Lab 14: StatefulSet Optimization -Lab 15: Kubernetes Monitoring & Init Containers -Lab 16: IPFS & Fleek Decentralization +## Quick Start -## Architecture +1. **Fork** this repository +2. **Clone** your fork locally +3. **Start with Lab 1** and progress sequentially +4. **Submit PRs** for each lab (details below) -This repository has a master branch containing an introduction. Each new lab assignment will be added as a markdown file with a lab number. +--- -## Rules +## Course Roadmap -To successfully complete the labs and pass the course, follow these rules: +| Week | Lab | Topic | Key Technologies | +|------|-----|-------|------------------| +| 1 | 1 | Web Application Development | Python/Go, Best Practices | +| 2 | 2 | Containerization | Docker, Multi-stage Builds | +| 3 | 3 | Continuous Integration | GitHub Actions, Snyk | +| 4 | 4 | Infrastructure as Code | Terraform, Cloud Providers | +| 5 | 5 | Configuration Management | Ansible Basics | +| 6 | 6 | Continuous Deployment | Ansible Advanced | +| 7 | 7 | Logging | Promtail, Loki, Grafana | +| 8 | 8 | Monitoring | Prometheus, Grafana | +| 9 | 9 | Kubernetes Basics | Minikube, Deployments, Services | +| 10 | 10 | Helm Charts | Templating, Hooks | +| 11 | 11 | Secrets Management | K8s Secrets, HashiCorp Vault | +| 12 | 12 | Configuration & Storage | ConfigMaps, PVCs | +| 13 | 13 | GitOps | ArgoCD | +| 14 | 14 | Progressive Delivery | Argo Rollouts | +| 15 | 15 | StatefulSets | Persistent Storage, Headless Services | +| 16 | 16 | Cluster Monitoring | Kube-Prometheus, Init Containers | +| โ€” | **Exam Alternative Labs** | | | +| 17 | 17 | Edge Deployment | Fly.io, Global Distribution | +| 18 | 18 | Decentralized Storage | 4EVERLAND, IPFS, Web3 | -1. **Lab Dependency:** Complete the labs in order; each lab builds upon the previous one. -2. **Submission and Grading:** Submit your solutions as pull requests (PRs) to the master branch of this repository. You need at least 6/10 points for each lab to pass. -3. **Fork Repository:** Fork this repository to your workspace to create your own version for solving the labs. -4. **Recommended Workflow:** Build your solutions incrementally. Complete lab N based on lab N-1. -5. **PR Creation:** Create a PR from your fork to the master branch of this repository and from your fork's branch to your fork's master branch. -6. **Wait for Grade:** Once your PR is created, wait for your lab to be reviewed and graded. +--- -### Example for the first lab +## Grading -1. Fork this repository. -2. Checkout to the lab1 branch. -3. Complete the lab1 tasks. -4. Push the code to your repository. -5. Create a PR to the master branch of this repository from your fork's lab1 branch. -6. Create a PR to the master branch of your repository from your lab1 branch. -7. Wait for your grade. 
+### Grade Composition -## Grading and Grades Distribution +| Component | Weight | Points | +|-----------|--------|--------| +| **Labs (16 required)** | 80% | 160 pts | +| **Final Exam** | 20% | 40 pts | +| **Bonus Tasks** | Extra | +40 pts max | +| **Total** | 100% | 200 pts | -Your final grade will be determined based on labs and a final exam: +### Exam Alternative -- Labs: 70% of your final grade. -- Final Exam: 30% of your final grade. +Don't want to take the exam? Complete **both** bonus labs: -Grade ranges: +| Lab | Topic | Points | +|-----|-------|--------| +| **Lab 17** | Fly.io Edge Deployment | 20 pts | +| **Lab 18** | 4EVERLAND & IPFS | 20 pts | -- [90-100] - A -- [75-90) - B -- [60-75) - C -- [0-60) - D +**Requirements:** +- Complete both labs (17 + 18 = 40 pts, replaces exam) +- Minimum 16/20 on each lab +- Deadline: **1 week before exam date** +- Can still take exam if you need more points for desired grade -### Labs Grading +
+๐Ÿ“Š Grade Scale -Each lab is worth 10 points. Completing main tasks correctly earns you 10 points. Completing bonus tasks correctly adds 2.5 points. You can earn a maximum of 12.5 points per lab by completing all main and bonus tasks. +| Grade | Points | Percentage | +|-------|--------|------------| +| **A** | 180-200+ | 90-100% | +| **B** | 150-179 | 75-89% | +| **C** | 120-149 | 60-74% | +| **D** | 0-119 | 0-59% | -Finishing all bonus tasks lets you skip the exam and grants you 5 extra points. Incomplete bonus tasks require you to take the exam, which could save you from failing it. +**Minimum to Pass:** 120 points (60%) ->The labs account for 70% of your final grade. With 14 labs in total, each lab contributes 5% to your final grade. Completing all main tasks in a lab earns you the maximum 10 points, which corresponds to 5% of your final grade. ->If you successfully complete all bonus tasks, you'll earn an additional 2.5 points, totaling 12.5 points for that lab, or 6.25% of your final grade. Over the course of all 14 labs, the cumulative points from bonus tasks add up to 87.5% of your final grade. ->Additionally, a 5% bonus is granted for successfully finishing all bonus tasks, ensuring that if you successfully complete everything, your final grade will be 92.5%, which corresponds to an A grade. +
-## Deadlines and Labs Distribution +
+๐Ÿ“ˆ Grade Examples -Each week, two new labs will be available. You'll have one week to submit your solutions. Refer to Moodle for presentation slides and deadlines. +**Scenario 1: Labs + Exam** +``` +Labs: 16 ร— 9 = 144 pts +Bonus: 5 labs ร— 2.5 = 12.5 pts +Exam: 35/40 pts +Total: 191.5 pts = 96% (A) +``` -## Submission Policy +**Scenario 2: Labs + Exam Alternative** +``` +Labs: 16 ร— 9 = 144 pts +Bonus: 8 labs ร— 2.5 = 20 pts +Lab 17: 18 pts +Lab 18: 17 pts +Total: 199 pts = 99.5% (A) +``` -Submitting your lab results on time is crucial for your grading. Late submissions receive a maximum score of 6 points for the corresponding lab. Remember, completing all labs is necessary to successfully pass the course. +
+ +--- + +## Lab Structure + +Each lab is worth **10 points** (main tasks) + **2.5 points** (bonus). + +- **Minimum passing score:** 6/10 per lab +- **Late submissions:** Max 6/10 (within 1 week) +- **Very late (>1 week):** Not accepted + +
+๐Ÿ“‹ Lab Categories + +**Foundation (Labs 1-2)** +- Web app development +- Docker containerization + +**CI/CD & Infrastructure (Labs 3-4)** +- GitHub Actions +- Terraform + +**Configuration Management (Labs 5-6)** +- Ansible playbooks and roles + +**Observability (Labs 7-8)** +- Loki logging stack +- Prometheus monitoring + +**Kubernetes Core (Labs 9-12)** +- K8s basics, Helm +- Secrets, ConfigMaps + +**Advanced Kubernetes (Labs 13-16)** +- ArgoCD, Argo Rollouts +- StatefulSets, Monitoring + +**Exam Alternative (Labs 17-18)** +- Fly.io, 4EVERLAND/IPFS + +
+ +--- + +## How to Submit + +```bash +# 1. Create branch +git checkout -b lab1 + +# 2. Complete lab tasks + +# 3. Commit and push +git add . +git commit -m "Complete lab1" +git push -u origin lab1 + +# 4. Create TWO Pull Requests: +# PR #1: your-fork:lab1 โ†’ course-repo:master +# PR #2: your-fork:lab1 โ†’ your-fork:master +``` + +
+๐Ÿ“ Submission Checklist + +- [ ] All main tasks completed +- [ ] Documentation files created +- [ ] Screenshots where required +- [ ] Code tested and working +- [ ] Markdown validated ([linter](https://dlaa.me/markdownlint/)) +- [ ] Both PRs created + +
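One way to run the Markdown check from the checklist locally (a sketch assuming Node.js is installed; `markdownlint-cli` is the command-line wrapper for the linter linked above):

```bash
# Lint every Markdown file before opening the PRs.
npx markdownlint-cli "**/*.md" --ignore node_modules
```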
+ +--- + +## Resources + +
+๐Ÿ› ๏ธ Required Tools + +| Tool | Purpose | +|------|---------| +| Git | Version control | +| Docker | Containerization | +| kubectl | Kubernetes CLI | +| Helm | K8s package manager | +| Minikube | Local K8s cluster | +| Terraform | Infrastructure as Code | +| Ansible | Configuration management | + +
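A quick way to confirm the whole toolchain is installed before starting Lab 1 (a sketch; version output will differ per machine):

```bash
# Report any required tool that is missing from PATH.
for tool in git docker kubectl helm minikube terraform ansible; do
  command -v "$tool" >/dev/null || echo "missing: $tool"
done
git --version && docker --version && kubectl version --client
```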
+ +
+๐Ÿ“š Documentation Links + +**Core:** +- [Docker](https://docs.docker.com/) +- [Kubernetes](https://kubernetes.io/docs/) +- [Helm](https://helm.sh/docs/) + +**CI/CD:** +- [GitHub Actions](https://docs.github.com/en/actions) +- [Terraform](https://www.terraform.io/docs) +- [Ansible](https://docs.ansible.com/) + +**Observability:** +- [Prometheus](https://prometheus.io/docs/) +- [Grafana](https://grafana.com/docs/) + +**Advanced:** +- [ArgoCD](https://argo-cd.readthedocs.io/) +- [Argo Rollouts](https://argoproj.github.io/argo-rollouts/) +- [HashiCorp Vault](https://developer.hashicorp.com/vault/docs) + +
+ +
+๐Ÿ’ก Tips for Success + +1. **Start early** - Don't wait until deadline +2. **Read instructions fully** before starting +3. **Test everything** before submitting +4. **Document as you go** - Don't leave it for the end +5. **Ask questions early** - Don't wait until last minute +6. **Use proper Git workflow** - Branches, commits, PRs + +
+ +
+🔧 Common Issues
+
+**Docker:**
+- Daemon not running → Start Docker Desktop
+- Permission denied → Add user to docker group
+
+**Minikube:**
+- Won't start → Try `--driver=docker`
+- Resource issues → Allocate more memory/CPU
+
+**Kubernetes:**
+- ImagePullBackOff → Check image name/registry
+- CrashLoopBackOff → Check logs: `kubectl logs <pod-name>`
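The fixes above, as concrete commands (a sketch; `my-pod` is an illustrative pod name):

```bash
# Docker: let the current user run docker without sudo (re-login afterwards).
sudo usermod -aG docker "$USER"

# Minikube: use the Docker driver and allocate more resources.
minikube start --driver=docker --memory=4096 --cpus=2

# Kubernetes: inspect a failing pod and its previous crash logs.
kubectl describe pod my-pod
kubectl logs my-pod --previous
```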
+ +--- + +## Course Completion + +After completing all 16 core labs (+ optional Labs 17-18), you'll have: + +โœ… Full-stack DevOps expertise +โœ… Production-ready portfolio with 16-18 projects +โœ… Container and Kubernetes mastery +โœ… CI/CD pipeline experience +โœ… Infrastructure as Code skills +โœ… Monitoring and observability knowledge +โœ… GitOps workflow experience + +--- + +**Ready to begin? Start with [Lab 1](labs/lab01.md)!** + +Questions? Check the course Moodle page or ask during office hours. diff --git a/app_go/.dockerignore b/app_go/.dockerignore new file mode 100644 index 0000000000..333846391d --- /dev/null +++ b/app_go/.dockerignore @@ -0,0 +1,13 @@ +# Build outputs +devops-info-service +*.exe +*.out + +# Git / editor / OS +.git/ +.DS_Store +.idea/ +.vscode/ + +# Docs/tests not needed in image build context (optional) +docs/ diff --git a/app_go/Dockerfile b/app_go/Dockerfile new file mode 100644 index 0000000000..c90ba9e50f --- /dev/null +++ b/app_go/Dockerfile @@ -0,0 +1,35 @@ +# syntax=docker/dockerfile:1 + +# --- Stage 1: Builder --- +FROM golang:1.23.5-alpine3.21 AS builder + +WORKDIR /src + +# Cache deps first +COPY go.mod ./ +RUN go mod download + +# Copy source +COPY main.go ./ + +# Build a static binary (smaller + easier to run in minimal images) +ARG TARGETOS +ARG TARGETARCH + +RUN CGO_ENABLED=0 GOOS=${TARGETOS:-linux} GOARCH=${TARGETARCH:-arm64} \ + go build -trimpath -ldflags="-s -w" -o /out/devops-info-service . + +# --- Stage 2: Runtime (minimal, non-root) --- +FROM gcr.io/distroless/static-debian12:nonroot AS runtime + +WORKDIR /app + +COPY --from=builder /out/devops-info-service /app/devops-info-service + +EXPOSE 8080 + +ENV HOST=0.0.0.0 \ + PORT=8080 + +# distroless:nonroot already runs as nonroot +ENTRYPOINT ["/app/devops-info-service"] diff --git a/app_go/README.md b/app_go/README.md new file mode 100644 index 0000000000..9e7e356fb0 --- /dev/null +++ b/app_go/README.md @@ -0,0 +1,53 @@ +# DevOps Info Service (Go) โ€” Bonus Task + +## Overview +A Go implementation of the DevOps Info Service. It provides two endpoints: +- `GET /` returns service/system/runtime/request information in JSON +- `GET /health` returns a simple health status JSON + +## Prerequisites +- Go installed (check with `go version`) + +## Run (from source) +```bash +go run . +``` +By default the service listens onย `0.0.0.0:8080`. + +### Custom configuration + +```bash +HOST=127.0.0.1 PORT=9090 go run . +``` + +## Build (binary) + +```bash +go build -o devops-info-service +``` + +## Run (binary) + +```bash +./devops-info-service +``` + +### Custom configuration (binary) + +```bash +HOST=127.0.0.1 PORT=9090 ./devops-info-service +``` + +## API Endpoints + +### GET / + +```bash +curl -s http://127.0.0.1:8080/ | python -m json.tool +``` + +### GET /health + +```bash +curl -s http://127.0.0.1:8080/health | python -m json.tool +``` diff --git a/app_go/devops-info-service b/app_go/devops-info-service new file mode 100755 index 0000000000..2b6d5eb4fd Binary files /dev/null and b/app_go/devops-info-service differ diff --git a/app_go/docs/GO.md b/app_go/docs/GO.md new file mode 100644 index 0000000000..dd58a9af72 --- /dev/null +++ b/app_go/docs/GO.md @@ -0,0 +1,7 @@ +# Why Go (Compiled Language) + +I chose Go for the compiled-language bonus task because: +- Go compiles into a single binary, which is convenient for deployment. +- It has a minimal standard library for building HTTP services (`net/http`). +- Itโ€™s fast to build and run and is commonly used in infrastructure/DevOps tooling. 
+- Small, self-contained binaries work well with Docker multi-stage builds in later labs.
\ No newline at end of file
diff --git a/app_go/docs/LAB01.md b/app_go/docs/LAB01.md
new file mode 100644
index 0000000000..25a045574f
--- /dev/null
+++ b/app_go/docs/LAB01.md
@@ -0,0 +1,49 @@
+# LAB01 — Bonus Task (Go)
+
+## Implemented Endpoints
+- `GET /` — returns service, system, runtime, request info + endpoints list (JSON)
+- `GET /health` — returns health status + timestamp + uptime_seconds (JSON)
+
+The JSON structure matches the Python version (same top-level fields and same key layout inside each section).
+
+## How to Run (from source)
+```bash
+go run .
+```
+#### Test:
+
+```bash
+curl -s http://127.0.0.1:8080/ | python -m json.tool
+curl -s http://127.0.0.1:8080/health | python -m json.tool
+```
+
+## How to Build and Run (binary)
+
+#### Build:
+
+```bash
+go build -o devops-info-service
+ls -lh devops-info-service
+```
+
+#### Run:
+
+```bash
+./devops-info-service
+```
+
+#### Test binary:
+
+```bash
+curl -s http://127.0.0.1:8080/health | python -m json.tool
+```
+
+## Screenshots
+
+Screenshots are stored in `docs/screenshots/`:
+
+Recommended set:
+- `01-go-run.png` — running from source (`go run .`)
+- `02-main-endpoint.png` — `GET /` output
+- `03-health-endpoint.png` — `GET /health` output
+- `04-go-build.png` — `go build` + `ls -lh` showing binary size
+- `05-binary-run.png` — running compiled binary (`./devops-info-service`)
+- `06-binary-health.png` — health check from binary run
\ No newline at end of file
diff --git a/app_go/docs/LAB02.md b/app_go/docs/LAB02.md
new file mode 100644
index 0000000000..8c261d923f
--- /dev/null
+++ b/app_go/docs/LAB02.md
@@ -0,0 +1,146 @@
+# LAB02 (Bonus) — Multi-Stage Docker Build (Go)
+
+This document explains how I containerized the compiled Go application using a **multi-stage Docker build** to minimize the final image size and reduce the runtime attack surface.
+
+---
+
+## Multi-Stage Strategy
+
+### Stage 1 — Builder
+
+- **Image:** `golang:1.23.5-alpine3.21`
+
+- **Purpose:** provides the Go toolchain needed to compile the application.
+
+- **Output artifact:** a single Linux binary at `/out/devops-info-service`.
+
+
+Key choices and why they matter:
+
+- **Dependency caching:** `go mod download` runs right after copying `go.mod`, before copying source code.
+  This means dependency download is cached and not repeated on every code change.
+
+- **Static binary:** `CGO_ENABLED=0` builds a static binary, which makes it suitable for minimal runtime images.
+
+- **Smaller binary:** `-ldflags="-s -w"` strips debug symbols to reduce binary size.
+
+
+### Stage 2 — Runtime
+
+- **Image:** `gcr.io/distroless/static-debian12:nonroot`
+
+- **Purpose:** run only the binary (no shell, no package manager, minimal filesystem).
+
+- **Security benefit:** default **non-root** user and fewer components installed → smaller attack surface.
+
+
+---
+
+## Size Comparison (builder vs final)
+
+Final image size from `docker images`:
+
+```
+REPOSITORY         TAG     IMAGE ID      CREATED        SIZE
+lab02-go           latest  0f3bc22c104c  2 minutes ago  13.4MB
+lab02-go-builder   latest  a52c4160b20d  2 minutes ago  468MB
+```
+
+
+**Analysis:**
+The multi-stage Go image is much smaller because the final runtime stage contains only the compiled binary and a minimal runtime filesystem. The builder stage contains the full Go toolchain and is not shipped.
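To reproduce this comparison locally, each stage can be tagged and inspected separately (a sketch; it assumes the Dockerfile in this PR and is run from the repo root):

```bash
# Build the builder stage as its own image, then the final image.
docker build --target builder -t lab02-go-builder -f app_go/Dockerfile app_go
docker build -t lab02-go -f app_go/Dockerfile app_go

# Compare sizes and see what each layer contributes.
docker images | grep lab02-go
docker history lab02-go
```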
+ +--- + +## Dockerfile Walkthrough + +Key parts of the Dockerfile and purpose: + +```dockerfile +FROM golang:1.23.5-alpine3.21 AS builder +WORKDIR /src + +COPY go.mod ./ +RUN go mod download +``` + +- Copies onlyย `go.mod`ย first and downloads dependencies โ†’ maximizes Docker layer caching. + + +```dockerfile +COPY main.go ./ + +ARG TARGETOS +ARG TARGETARCH + +RUN CGO_ENABLED=0 GOOS=${TARGETOS:-linux} GOARCH=${TARGETARCH:-arm64} \ + go build -trimpath -ldflags="-s -w" -o /out/devops-info-service . +``` + +- Copies the source and compiles aย **static**ย binary. + +- Usesย `TARGETOS/TARGETARCH`ย to build correctly on different platforms (important on Apple Silicon / arm64). + + +```dockerfile +FROM gcr.io/distroless/static-debian12:nonroot AS runtime +WORKDIR /app +COPY --from=builder /out/devops-info-service /app/devops-info-service +EXPOSE 8080 ENV HOST=0.0.0.0 PORT=8080 +ENTRYPOINT ["/app/devops-info-service"] +``` + +- Distroless runtime is minimal and runs as non-root. + +- Only the binary is copied into the final image. + +- Port and env vars match the app defaults. + + +--- + +## Build & Run Evidence + +### Build + +```shell +docker build -t lab02-go -f app_go/Dockerfile app_go [+] +Building ... FINISHED +=> naming to docker.io/library/lab02-go:latest +``` + +### Run + +```shell +docker run --rm -p 8080:8080 lab02-go +``` + +### Test endpoints + +```shell +curl http://localhost:8080/ +# returned JSON with service/system/runtime/request information +``` + +(Optionally) + +```shell +curl http://localhost:8080/health +``` + +--- + +## Why Multi-Stage Builds Matter (Compiled Languages) + +- **Smaller images:**ย faster pulls, less storage, faster deploys (final image is ~13.4MB). + +- **Security:**ย runtime image excludes compilers, shells, package managers โ†’ fewer vulnerabilities and lower attack surface. + +- **Clear separation:**ย build-time vs run-time concerns are isolated. + + +Trade-offs: + +- Dockerfile becomes slightly more complex. + +- Debugging inside distroless containers is harder (no shell), so logs/metrics are preferred. 
\ No newline at end of file diff --git a/app_go/docs/screenshots/01-go-run.png b/app_go/docs/screenshots/01-go-run.png new file mode 100644 index 0000000000..c4c509db6b Binary files /dev/null and b/app_go/docs/screenshots/01-go-run.png differ diff --git a/app_go/docs/screenshots/02-main-endpoint.png b/app_go/docs/screenshots/02-main-endpoint.png new file mode 100644 index 0000000000..b3feb72f57 Binary files /dev/null and b/app_go/docs/screenshots/02-main-endpoint.png differ diff --git a/app_go/docs/screenshots/03-health-endpoint.png b/app_go/docs/screenshots/03-health-endpoint.png new file mode 100644 index 0000000000..da87ce83f7 Binary files /dev/null and b/app_go/docs/screenshots/03-health-endpoint.png differ diff --git a/app_go/docs/screenshots/04-go-build.png b/app_go/docs/screenshots/04-go-build.png new file mode 100644 index 0000000000..a3b615a218 Binary files /dev/null and b/app_go/docs/screenshots/04-go-build.png differ diff --git a/app_go/docs/screenshots/05-binary-run.png b/app_go/docs/screenshots/05-binary-run.png new file mode 100644 index 0000000000..ce3bea4ff6 Binary files /dev/null and b/app_go/docs/screenshots/05-binary-run.png differ diff --git a/app_go/docs/screenshots/06-binary-health.png b/app_go/docs/screenshots/06-binary-health.png new file mode 100644 index 0000000000..68dc2894a7 Binary files /dev/null and b/app_go/docs/screenshots/06-binary-health.png differ diff --git a/app_go/go.mod b/app_go/go.mod new file mode 100644 index 0000000000..3d6605d68d --- /dev/null +++ b/app_go/go.mod @@ -0,0 +1,3 @@ +module devops-info-service-go + +go 1.23 diff --git a/app_go/main.go b/app_go/main.go new file mode 100644 index 0000000000..7c57e28961 --- /dev/null +++ b/app_go/main.go @@ -0,0 +1,139 @@ +package main + +import ( + "encoding/json" + "fmt" + "log" + "net" + "net/http" + "os" + "runtime" + "time" +) + +var startTime = time.Now().UTC() + +type Endpoint struct { + Path string `json:"path"` + Method string `json:"method"` + Description string `json:"description"` +} + +type ResponseRoot struct { + Service map[string]any `json:"service"` + System map[string]any `json:"system"` + Runtime map[string]any `json:"runtime"` + Request map[string]any `json:"request"` + Endpoints []Endpoint `json:"endpoints"` +} + +func uptimeSeconds() int { + return int(time.Since(startTime).Seconds()) +} + +func uptimeHuman(sec int) string { + h := sec / 3600 + m := (sec % 3600) / 60 + return fmt.Sprintf("%d hour(s), %d minute(s)", h, m) +} + +func hostname() string { + h, err := os.Hostname() + if err != nil { + return "" + } + return h +} + +func getClientIP(r *http.Request) string { + xff := r.Header.Get("X-Forwarded-For") + if xff != "" { + for i := 0; i < len(xff); i++ { + if xff[i] == ',' { + return xff[:i] + } + } + return xff + } + + host, _, err := net.SplitHostPort(r.RemoteAddr) + if err != nil { + return r.RemoteAddr + } + return host +} + +func writeJSON(w http.ResponseWriter, status int, v any) { + w.Header().Set("Content-Type", "application/json") + w.WriteHeader(status) + enc := json.NewEncoder(w) + enc.SetIndent("", " ") + _ = enc.Encode(v) +} + +func mainHandler(w http.ResponseWriter, r *http.Request) { + up := uptimeSeconds() + + resp := ResponseRoot{ + Service: map[string]any{ + "name": "devops-info-service", + "version": "1.0.0", + "description": "DevOps course info service", + "framework": "Go net/http", + }, + System: map[string]any{ + "hostname": hostname(), + "platform": runtime.GOOS, + "platform_version": "", + "architecture": runtime.GOARCH, + "cpu_count": 
runtime.NumCPU(), + "python_version": runtime.Version(), + }, + Runtime: map[string]any{ + "uptime_seconds": up, + "uptime_human": uptimeHuman(up), + "current_time": time.Now().UTC().Format(time.RFC3339Nano), + "timezone": "UTC", + }, + Request: map[string]any{ + "client_ip": getClientIP(r), + "user_agent": r.UserAgent(), + "method": r.Method, + "path": r.URL.Path, + }, + Endpoints: []Endpoint{ + {Path: "/", Method: "GET", Description: "Service information"}, + {Path: "/health", Method: "GET", Description: "Health check"}, + }, + } + + log.Printf("Request: %s %s", r.Method, r.URL.Path) + writeJSON(w, http.StatusOK, resp) +} + +func healthHandler(w http.ResponseWriter, r *http.Request) { + log.Printf("Request: %s %s", r.Method, r.URL.Path) + writeJSON(w, http.StatusOK, map[string]any{ + "status": "healthy", + "timestamp": time.Now().UTC().Format(time.RFC3339Nano), + "uptime_seconds": uptimeSeconds(), + }) +} + +func main() { + port := os.Getenv("PORT") + if port == "" { + port = "8080" + } + host := os.Getenv("HOST") + if host == "" { + host = "0.0.0.0" + } + + http.HandleFunc("/", mainHandler) + http.HandleFunc("/health", healthHandler) + + addr := host + ":" + port + log.Printf("Starting Go app on %s", addr) + log.Fatal(http.ListenAndServe(addr, nil)) +} diff --git a/app_python/.dockerignore b/app_python/.dockerignore new file mode 100644 index 0000000000..b3d91b6ca5 --- /dev/null +++ b/app_python/.dockerignore @@ -0,0 +1,23 @@ +# Python cache / bytecode +__pycache__/ +*.py[cod] + +# Virtual environments +venv/ +.venv/ + +# Editor/OS files +.DS_Store +.idea/ +.vscode/ + +# Tests and docs are not needed at runtime +tests/ +docs/ + +# Git +.git/ +.gitignore + +# Misc +*.log diff --git a/app_python/.gitignore b/app_python/.gitignore new file mode 100644 index 0000000000..236b92ec13 --- /dev/null +++ b/app_python/.gitignore @@ -0,0 +1,13 @@ +__pycache__/ +*.py[cod] +venv/ +__MACOSX/ +*.log + +.vscode/ +.idea/ + +.obsidian + +.DS_Store +.env \ No newline at end of file diff --git a/app_python/Dockerfile b/app_python/Dockerfile new file mode 100644 index 0000000000..42002e9f4a --- /dev/null +++ b/app_python/Dockerfile @@ -0,0 +1,29 @@ +# syntax=docker/dockerfile:1 + +FROM python:3.13.1-slim AS runtime + +# Prevent Python from writing .pyc files and ensure logs are unbuffered +ENV PYTHONDONTWRITEBYTECODE=1 \ + PYTHONUNBUFFERED=1 + +WORKDIR /app + +# Create a dedicated, non-root user +RUN groupadd --system app && useradd --system --gid app --uid 10001 --create-home app + +# Install dependencies first for better layer caching +COPY requirements.txt . +RUN pip install --no-cache-dir -r requirements.txt + +# Copy only the application code (no venv/tests/docs) +COPY --chown=app:app app.py . + +EXPOSE 5000 + +USER app + +# Flask app binds to 0.0.0.0 by default in this project (HOST env) +ENV HOST=0.0.0.0 \ + PORT=5000 + +CMD ["python", "app.py"] diff --git a/app_python/README.md b/app_python/README.md new file mode 100644 index 0000000000..74462d6f89 --- /dev/null +++ b/app_python/README.md @@ -0,0 +1,63 @@ +![CI](https://github.com/ostxxp/DevOps-Core-Course/actions/workflows/python-ci.yml/badge.svg) + +# DevOps Info Service (Labs 01โ€“03) + +## Overview +Simple web service that returns service, system, runtime and request information. 
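Based on the handlers in `app.py` (added later in this diff), a quick smoke test against a running instance looks like this (a sketch; the timestamp and uptime values are illustrative):

```bash
curl -s http://127.0.0.1:5000/health | python -m json.tool
# {
#     "status": "healthy",
#     "timestamp": "2025-01-15T12:00:00+00:00",
#     "uptime_seconds": 42
# }
```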
+## Prerequisites
+- Python 3.11+
+- pip
+
+## Installation
+
+```bash
+python -m venv venv
+source venv/bin/activate
+pip install -r requirements.txt
+```
+
+## Running the Application
+
+```bash
+python app.py
+PORT=8080 python app.py
+HOST=127.0.0.1 PORT=3000 python app.py
+```
+
+## API Endpoints
+
+- `GET /` - Service and system information
+- `GET /health` - Health check
+
+## Configuration
+
+| Variable | Default | Description |
+|---|---|---|
+| HOST | 0.0.0.0 | Bind host |
+| PORT | 5000 | Bind port |
+| DEBUG | False | Flask debug mode |
+
+## Docker
+
+This app can be containerized and run with Docker.
+
+**Build (pattern):**
+
+- `docker build -t <image>:<tag> -f app_python/Dockerfile app_python`
+
+**Run (pattern):**
+
+- `docker run --rm -p <host-port>:5000 <image>:<tag>`
+- Optional envs: `-e PORT=5000 -e HOST=0.0.0.0`
+
+**Test endpoints (pattern):**
+
+- `curl http://localhost:<host-port>/`
+- `curl http://localhost:<host-port>/health`
+
+**Pull from Docker Hub (pattern):**
+
+- `docker pull <dockerhub-user>/<image>:<tag>`
+- then run it with the same `docker run -p ...` pattern
+
+Example:
+`docker run --rm -p 5000:5000 ostxxp/devops-lab02-python:latest`
+
diff --git a/app_python/app.py b/app_python/app.py
new file mode 100644
index 0000000000..8514d3da91
--- /dev/null
+++ b/app_python/app.py
@@ -0,0 +1,115 @@
+import logging
+import os
+import platform
+import socket
+from datetime import datetime, timezone
+
+from flask import Flask, jsonify, request
+
+logging.basicConfig(
+    level=logging.INFO,
+    format="%(asctime)s - %(name)s - %(levelname)s - %(message)s",
+)
+logger = logging.getLogger("devops-info-service")
+
+HOST = os.getenv("HOST", "0.0.0.0")
+PORT = int(os.getenv("PORT", "5000"))
+DEBUG = os.getenv("DEBUG", "False").lower() == "true"
+
+app = Flask(__name__)
+START_TIME = datetime.now(timezone.utc)
+
+
+def _uptime_seconds() -> int:
+    return int((datetime.now(timezone.utc) - START_TIME).total_seconds())
+
+
+def _uptime_human(seconds: int) -> str:
+    hours = seconds // 3600
+    minutes = (seconds % 3600) // 60
+    return f"{hours} hour(s), {minutes} minute(s)"
+
+
+def get_system_info() -> dict:
+    return {
+        "hostname": socket.gethostname(),
+        "platform": platform.system(),
+        "platform_version": platform.version(),
+        "architecture": platform.machine(),
+        "cpu_count": os.cpu_count() or 0,
+        "python_version": platform.python_version(),
+    }
+
+
+def get_request_info() -> dict:
+    forwarded_for = request.headers.get("X-Forwarded-For", "")
+    client_ip = forwarded_for.split(",")[0].strip() if forwarded_for else request.remote_addr
+
+    return {
+        "client_ip": client_ip or "",
+        "user_agent": request.headers.get("User-Agent", ""),
+        "method": request.method,
+        "path": request.path,
+    }
+
+
+def list_endpoints() -> list:
+    return [
+        {"path": "/", "method": "GET", "description": "Service information"},
+        {"path": "/health", "method": "GET", "description": "Health check"},
+    ]
+
+
+@app.get("/")
+def index():
+    logger.info("Request: %s %s", request.method, request.path)
+
+    uptime_sec = _uptime_seconds()
+    payload = {
+        "service": {
+            "name": "devops-info-service",
+            "version": "1.0.0",
+            "description": "DevOps course info service",
+            "framework": "Flask",
+        },
+        "system": get_system_info(),
+        "runtime": {
+            "uptime_seconds": uptime_sec,
+            "uptime_human": _uptime_human(uptime_sec),
+            "current_time": datetime.now(timezone.utc).isoformat(),
+            "timezone": "UTC",
+        },
+        "request": get_request_info(),
+        "endpoints": list_endpoints(),
+    }
+    return jsonify(payload), 200
+
+
+@app.get("/health")
+def health():
+    logger.info("Request: %s %s", request.method, request.path)
+
+    return (
+        jsonify(
+ { + "status": "healthy", + "timestamp": datetime.now(timezone.utc).isoformat(), + "uptime_seconds": _uptime_seconds(), + } + ), + 200, + ) + +@app.errorhandler(404) +def not_found(_err): + return jsonify({"error": "Not Found", "message": "Endpoint does not exist"}), 404 + + +@app.errorhandler(500) +def internal_error(_err): + return jsonify({"error": "Internal Server Error", "message": "An unexpected error occurred"}), 500 + + +if __name__ == "__main__": + logger.info("Starting app on %s:%s (debug=%s)", HOST, PORT, DEBUG) + app.run(host=HOST, port=PORT, debug=DEBUG) diff --git a/app_python/docs/LAB01.md b/app_python/docs/LAB01.md new file mode 100644 index 0000000000..1d18796cfa --- /dev/null +++ b/app_python/docs/LAB01.md @@ -0,0 +1,65 @@ +# Lab 1 + +## 1) Framework Selection +**Chosen framework:** Flask + +**Why Flask:** +- Minimal setup and easy to understand for a beginner +- Perfect for a small service with only a couple endpoints +- Clear request handling and simple JSON responses + +| Framework | Pros | Cons | +|---|---|---| +| Flask | Simple, lightweight, easy learning curve | Less โ€œbuilt-inโ€ features than Django | +| FastAPI | Great docs, async-ready, OpenAPI | Slightly more concepts (typing, ASGI) | +| Django | Full-featured framework | Overkill for this small service | + +## 2) Best Practices Applied +### Clean Code Organization +- Separate helper functions: system info, request info, uptime +- Clear naming and small functions + +### Configuration via Environment Variables +- `HOST`, `PORT`, `DEBUG` are read from environment variables + +### Error Handling +- Custom JSON responses for 404 and 500 errors + +### Logging +- Basic logging configured (INFO level) +- Logs requests to `/` and `/health` + +## 3) API Documentation +### GET / +Returns service metadata, system information, runtime info and request details. + +Example test: +```bash +curl -s http://127.0.0.1:5000/ | python -m json.tool +``` +### GET /health + +Returns health status, timestamp and uptime. + +Example test: + +`curl -s http://127.0.0.1:5000/health | python -m json.tool` + +## 4) Testing Evidence + +Screenshots are stored in: +`docs/screenshots/` +- `main-endpoint.png`ย โ€” main endpoint JSON output +- `health-check.png`ย โ€” health endpoint JSON output + +## 5) Challenges & Solutions + +- **Challenge:**ย Understanding required JSON structure + **Solution:**ย Implemented endpoints step-by-step and validated output using curl + json.tool. +- **Challenge:**ย Making the service configurable + **Solution:**ย Added environment variablesย `HOST`ย andย `PORT`ย and verified by running on port 8080. + +## 6) GitHub Community + +Starring repositories helps bookmark useful projects and signals appreciation to maintainers, improving open-source discovery. +Following developers (professor/TAs/classmates) helps networking and makes it easier to learn from othersโ€™ activity and collaborate in team projects. \ No newline at end of file diff --git a/app_python/docs/LAB02.md b/app_python/docs/LAB02.md new file mode 100644 index 0000000000..5ccd48c535 --- /dev/null +++ b/app_python/docs/LAB02.md @@ -0,0 +1,195 @@ +# LAB02 โ€” Docker Containerization (Python) + +This document explains how the Lab 1 Python application was containerized using Docker best practices and published to Docker Hub.ย  + +--- + +## Docker Best Practices Applied + +### 1) Pinning a specific base image version + +**What I did:** +Used a pinned slim Python image:ย `python:3.13.1-slim`. 
+**Why it matters:**
+Pinning a specific image version guarantees reproducible builds and protects the application from unexpected breaking changes introduced by upstream image updates.
+
+**Snippet:**
+
+```dockerfile
+FROM python:3.13.1-slim
+```
+
+---
+
+### 2) Running as a non-root user
+
+**What I did:**
+Created a dedicated system user and switched to it using the `USER` directive.
+
+**Why it matters:**
+Running containers as non-root significantly reduces the impact of a potential container compromise by following the principle of least privilege.
+
+**Snippet:**
+
+```dockerfile
+RUN groupadd --system app && useradd --system --gid app --uid 10001 --create-home app
+USER app
+```
+
+---
+
+### 3) Proper layer ordering (cache optimization)
+
+**What I did:**
+Copied `requirements.txt` first, installed dependencies, and only then copied the application source code.
+
+**Why it matters:**
+Docker caches layers. Since dependencies change less frequently than application code, this approach speeds up rebuilds by reusing cached dependency layers.
+
+**Snippet:**
+
+```dockerfile
+COPY requirements.txt .
+RUN pip install --no-cache-dir -r requirements.txt
+COPY --chown=app:app app.py .
+```
+
+---
+
+### 4) Copying only necessary files + `.dockerignore`
+
+**What I did:**
+Copied only required files (`requirements.txt`, `app.py`) and excluded unnecessary files using `.dockerignore`.
+
+**Why it matters:**
+A smaller build context leads to faster builds, smaller images, and prevents accidental inclusion of sensitive or irrelevant files.
+
+**`.dockerignore` excerpt:**
+
+```
+venv/
+tests/
+docs/
+.git/
+__pycache__/
+```
+
+---
+
+### 5) Minimal runtime image
+
+**What I did:**
+Used the `slim` Python image and disabled pip cache during dependency installation.
+
+**Why it matters:**
+Smaller images reduce download time, storage usage, and overall attack surface.
+
+---
+
+## Image Information & Decisions
+
+- **Base image:** `python:3.13.1-slim`
+  **Justification:** Official Python image with minimal footprint while fully supporting Flask.
+
+- **Exposed port:** `5000` (matches application default)
+
+- **Final image size:** `214 MB`
+
+- **Layer structure:**
+
+  1. Base image
+
+  2. Dependency installation
+
+  3. Application source code
+
+
+---
+
+## Build & Run Process
+
+### Build output
+
+```shell
+docker build -t lab02-python -f app_python/Dockerfile app_python
+```
+
+```
+Successfully built 4e71b36e52d3
+Successfully tagged lab02-python:latest
+```
+
+---
+
+### Run output
+
+```shell
+docker run --rm -p 5000:5000 lab02-python
+Running on http://0.0.0.0:5000
+```
+
+---
+
+### Endpoint tests
+
+```shell
+curl http://localhost:5000/
+HTTP/1.1 200 OK
+```
+
+```shell
+curl http://localhost:5000/health
+{"status":"healthy"}
+```
+
+---
+
+## Docker Hub
+
+- **Repository URL:**
+  https://hub.docker.com/r/ostxxp/devops-lab02-python
+
+
+### Tagging strategy
+
+The image was tagged using the pattern:
+
+`<dockerhub-user>/<repository>:<tag>`
+
+For this lab, the image was published as:
+
+`ostxxp/devops-lab02-python:latest`
+
+This strategy ensures global uniqueness in Docker Hub and allows future versioned releases alongside the `latest` tag.
+
+---
+
+## Technical Analysis
+
+### Why does this Dockerfile work the way it does?
+
+The Dockerfile installs dependencies first to leverage caching, runs the application as a non-root user for security, and exposes the correct port to allow access via Docker port mapping.
+
+### What would happen if the layer order changed?
+If application code were copied before installing dependencies, Docker would invalidate the cache on every code change, forcing a full reinstall of dependencies and slowing down rebuilds.
+
+### Security considerations implemented
+
+- Non-root container user
+
+- Minimal base image
+
+- Limited copied files
+
+- No pip cache retained
+
+
+### How does `.dockerignore` improve the build?
+
+It reduces the build context size, speeds up the build process, and prevents unnecessary or sensitive files from being included in the image.
+
+---
+
+## Challenges & Solutions
+
+- **Issue:** Understanding Docker image caching behavior.
+
+- **Solution:** Reordered layers and tested rebuild performance.
+
+- **What I learned:** Proper Dockerfile structure directly impacts performance, security, and maintainability.
\ No newline at end of file
diff --git a/app_python/docs/LAB03.md b/app_python/docs/LAB03.md
new file mode 100644
index 0000000000..d4f09805bc
--- /dev/null
+++ b/app_python/docs/LAB03.md
@@ -0,0 +1,92 @@
+# LAB03 — CI/CD with GitHub Actions
+
+## 1. Overview
+
+- Testing framework: **pytest**
+  - Chosen for simple syntax and good Flask integration.
+- Linting tool: **ruff**
+  - Fast Python linter, easy to integrate in CI.
+- CI: GitHub Actions
+  - Runs on push and pull request for `app_python/**`
+- Versioning strategy: **Calendar Versioning (CalVer)**
+  - Format: `YYYY.MM.<build>`
+  - Also tags image as `latest`
+- Docker image:
+  - `<dockerhub-user>/devops-info-service`
+
+---
+
+## 2. Local Testing
+
+Run locally:
+
+```bash
+cd app_python
+pip install -r requirements.txt
+ruff check .
+pytest -q
+```
+
+Example output:
+
+`3 passed in 0.24s`
+
+---
+
+## 3. Workflow Evidence
+
+- ✅ Tests and lint pass in GitHub Actions
+
+- ✅ Docker image built and pushed automatically
+
+- ✅ Tags created:
+
+  - `YYYY.MM.<build>`
+
+  - `latest`
+
+- ✅ CI status badge added to README
+
+
+---
+
+## 4. CI Best Practices Implemented
+
+- **Dependency caching** (`cache: pip`)
+  Speeds up repeated workflow runs.
+
+- **Job dependency (`needs`)**
+  Docker build runs only if tests and lint pass.
+
+- **Path filters**
+  Workflow runs only when `app_python/**` changes.
+
+- **Concurrency cancel**
+  Cancels outdated runs on same branch.
+
+
+---
+
+## 5. Key Decisions
+
+### Why CalVer?
+
+This project is a service (not a library), so breaking changes are not critical for version communication.
+CalVer makes versioning simple and automatically generated in CI.
+
+### Docker Tags
+
+- `YYYY.MM.<build>`
+
+- `latest`
+
+
+### What is tested?
+ +- `GET /` + +- `GET /health` + +- 404 error handling + +- JSON response structure \ No newline at end of file diff --git a/app_python/docs/screenshots/01-main-endpoint.png b/app_python/docs/screenshots/01-main-endpoint.png new file mode 100644 index 0000000000..dcc5cb65de Binary files /dev/null and b/app_python/docs/screenshots/01-main-endpoint.png differ diff --git a/app_python/docs/screenshots/02-health-check.png b/app_python/docs/screenshots/02-health-check.png new file mode 100644 index 0000000000..0df0550336 Binary files /dev/null and b/app_python/docs/screenshots/02-health-check.png differ diff --git a/app_python/docs/screenshots/03-formatted-output.png b/app_python/docs/screenshots/03-formatted-output.png new file mode 100644 index 0000000000..fc962b6bc1 Binary files /dev/null and b/app_python/docs/screenshots/03-formatted-output.png differ diff --git a/app_python/requirements.txt b/app_python/requirements.txt new file mode 100644 index 0000000000..4bad402958 --- /dev/null +++ b/app_python/requirements.txt @@ -0,0 +1,4 @@ +Flask==3.1.0 +pytest +pytest-cov +ruff \ No newline at end of file diff --git a/app_python/tests/conftest.py b/app_python/tests/conftest.py new file mode 100644 index 0000000000..db710a6dd6 --- /dev/null +++ b/app_python/tests/conftest.py @@ -0,0 +1,5 @@ +import os +import sys + +# add app_python/ to import path so "import app" works +sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), ".."))) diff --git a/app_python/tests/test_api.py b/app_python/tests/test_api.py new file mode 100644 index 0000000000..a00cc43368 --- /dev/null +++ b/app_python/tests/test_api.py @@ -0,0 +1,53 @@ +import pytest +from app import app + + +@pytest.fixture +def client(): + app.config["TESTING"] = True + with app.test_client() as c: + yield c + + +def test_root_ok_and_structure(client): + r = client.get("/") + assert r.status_code == 200 + + data = r.get_json() + assert isinstance(data, dict) + + # service + assert "service" in data + assert data["service"]["framework"] == "Flask" + + # system + assert "system" in data + assert "hostname" in data["system"] + assert "python_version" in data["system"] + + # runtime + assert "runtime" in data + assert "uptime_seconds" in data["runtime"] + assert "current_time" in data["runtime"] + + # endpoints + assert "endpoints" in data + assert isinstance(data["endpoints"], list) + + +def test_health_ok_and_structure(client): + r = client.get("/health") + assert r.status_code == 200 + + data = r.get_json() + assert isinstance(data, dict) + assert data["status"] == "healthy" + assert "timestamp" in data + assert "uptime_seconds" in data + + +def test_404_json(client): + r = client.get("/nope") + assert r.status_code == 404 + data = r.get_json() + assert data["error"] == "Not Found" \ No newline at end of file diff --git a/lab1.md b/lab1.md deleted file mode 100644 index 30b74c95f5..0000000000 --- a/lab1.md +++ /dev/null @@ -1,65 +0,0 @@ -# Lab 1: Web Application Development - -## Overview - -In this lab assignment, you will develop a simple web application using Python and best practices. You will also have the opportunity to create a bonus web application using a different programming language. Follow the tasks below to complete the lab assignment. - -## Task 1: Python Web Application - -**6 Points:** - -1. Create `app_python` Folder: - - Create a folder named `app_python` to contain your Python web application files. - - Inside the `app_python` folder, create a file named `PYTHON.md`. - -2. 
Develop and Test Python Web Application: - - Develop a Python web application that displays the current time in Moscow. - - Choose a suitable framework for your web application and justify your choice in the `PYTHON.md` file. - - Implement best practices in your code and follow coding standards. - - Test your application to ensure the displayed time updates upon page refreshing. - -## Task 2: Well Decorated Description - -**4 Points:** - -1. Update `PYTHON.md`: - - Describe best practices applied in the web application. - - Explain how you followed coding standards, implemented testing, and ensured code quality. - -2. Create `README.md` in `app_python` folder: - - Use a Markdown template to document the Python web application. - -3. Ensure: - - Maintain a clean `.gitignore` file. - - Use a concise `requirements.txt` file for required dependencies. - -### List of Requirements - -- MSK Time timezone set up -- 2 PRs created -- README includes Overview -- Nice Markdown decoration -- Local installation details in README - -## Bonus Task: Additional Web Application - -**2.5 Points:** - -1. Create `app_*` Folder: - - Create a folder named `app_*` in the main project directory, replacing `*` with a programming language of your choice (other than Python). - - Inside the `app_*` folder, create a file named `*`.md. - -2. Develop Your Own Web App: - - Create a web application using the programming language you chose. - - Decide what your web application will display or do, and use your creativity. - -3. Follow Main Task Steps: - - Implement your bonus web application following the same suggestions and steps as the main Python web application task. - -### Guidelines - -- Use proper Markdown formatting and structure for the documentation files. We will use [online one](https://dlaa.me/markdownlint/) to check your `.md` files. -- Organize the files within the lab folder using appropriate naming conventions. -- Create a PR from your fork to the master branch of this repository and from your fork's branch to your fork's master branch with your completed lab assignment. - -> Note: Apply best practices, coding standards, and testing to your Python web application. Explore creativity in your bonus web application, and document your process using Markdown. diff --git a/lab10.md b/lab10.md deleted file mode 100644 index c472086168..0000000000 --- a/lab10.md +++ /dev/null @@ -1,91 +0,0 @@ -# Lab 10: Introduction to Helm - -## Overview - -In this lab, you will become familiar with Helm, set up a local development environment, and generate manifests for your application. - -## Task 1: Helm Setup and Chart Creation - -**6 Points:** - -1. Learn About Helm: - - Begin by exploring the architecture and concepts of Helm: - - [Helm Architecture](https://helm.sh/docs/topics/architecture/) - - [Understanding Helm Charts](https://helm.sh/docs/topics/charts/) - -2. Install Helm: - - Install Helm using the instructions provided: - - [Helm Installation](https://helm.sh/docs/intro/install/) - - [Chart Repository Initialization](https://helm.sh/docs/intro/quickstart/#initialize-a-helm-chart-repository) - -3. Create Your Own Helm Chart: - - Generate a Helm chart for your application. - - Inside the `k8s` folder, create a Helm chart template by using the command `helm create your-app`. - - Replace the default repository and tag inside the `values.yaml` file with your repository name. - - Modify the `containerPort` setting in the `deployment.yml` file. 
- - If you encounter issues with `livenessProbe` and `readinessProbe`, you can comment them out. - - > For troubleshooting, you can use the `minikube dashboard` command. - -4. Install Your Helm Chart: - - Install your custom Helm chart and ensure that all services are healthy. Verify this by checking the `Workloads` page in the Minikube dashboard. - -5. Access Your Application: - - Confirm that your application is accessible by running the `minikube service your_service_name` command. - -6. Create a HELM.md File: - - Construct a `HELM.md` file and provide the output of the `kubectl get pods,svc` command within it. - -## Task 2: Helm Chart Hooks - -**4 Points:** - -1. Learn About Chart Hooks: - - Familiarize yourself with [Helm Chart Hooks](https://helm.sh/docs/topics/charts_hooks/). - -2. Implement Helm Chart Hooks: - - Develop pre-install and post-install pods within your Helm chart, without adding any complex logic (e.g., use "sleep 20"). You can refer to [Example 1 in the guide](https://www.golinuxcloud.com/kubernetes-helm-hooks-examples/). - -3. Troubleshoot Hooks: - - Execute the following commands to troubleshoot your hooks: - 1. `helm lint ` - 2. `helm install --dry-run helm-hooks ` - 3. `kubectl get po` - -4. Provide Output: - - Execute the following commands and include their output in your report: - 1. `kubectl get po` - 2. `kubectl describe po ` - 3. `kubectl describe po ` - -5. Hook Delete Policy: - - Implement a hook delete policy to remove the hook once it has executed successfully. - -**List of Requirements:** - -- Helm Chart with Hooks implemented, including the hook delete policy. -- Output of the `kubectl get pods,svc` command in `HELM.md`. -- Output of all commands from the step 4 of Task 2 in `HELM.md`. - -## Bonus Task: Helm Library Chart - -**To Earn 2.5 Additional Points:** - -1. Helm Chart for Extra App: - - Prepare a Helm chart for an additional application. - -2. Helm Library Charts: - - Get acquainted with [Helm Library Charts](https://helm.sh/docs/topics/library_charts/). - -3. Create a Library Chart: - - Develop a simple library chart that includes a "labels" template. You can follow the steps outlined in [the Using Library Charts guide](https://austindewey.com/2020/08/17/how-to-reduce-helm-chart-boilerplate-with-library-charts/). Use this library chart for both of your applications. - -### Guidelines - -- Ensure your documentation is clear and well-structured. -- Include all the necessary components. -- Follow appropriate file and folder naming conventions. -- Create and participate in PRs for the peer review process. -- Create pull requests (PRs) as needed: from your fork to the main branch of this repository, and from your fork's branch to your fork's master branch. - -> Note: Detailed documentation is crucial to ensure that your Helm deployment and hooks function as expected. Engage with the bonus tasks to further enhance your understanding and application deployment skills. diff --git a/lab11.md b/lab11.md deleted file mode 100644 index 4994bb1a80..0000000000 --- a/lab11.md +++ /dev/null @@ -1,85 +0,0 @@ -# Lab 11: Kubernetes Secrets and Hashicorp Vault - -## Overview - -In this lab, you will learn how to manage sensitive data, such as passwords, tokens, or keys, within Kubernetes. Additionally, you will configure CPU and memory limits for your application. - -## Task 1: Kubernetes Secrets and Resource Management - -**6 Points:** - -1. 
Create a Secret Using `kubectl`: - - Learn about Kubernetes Secrets and create a secret using the `kubectl` command: - - [Kubernetes Secrets](https://kubernetes.io/docs/concepts/configuration/secret/) - - [Managing Secrets with kubectl](https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-kubectl/#decoding-secret) - -2. Verify and Decode Your Secret: - - Confirm and decode the secret, then create an `11.md` file within the `k8s` folder. Provide the output of the necessary commands inside this file. - -3. Manage Secrets with Helm: - - Use Helm to manage your secrets. - - Create a `secrets.yaml` file in the `templates` folder. - - Define a `secret` object within this YAML file. - - Add an `env` field to your `Deployment`. The path to update is: `spec.template.spec.containers.env`. - - > Refer to this [Helm Secrets Video](https://www.youtube.com/watch?v=hRSlKRvYe1A) for guidance. - - - Update your Helm deployment as instructed in the video. - - Retrieve the list of pods using the command `kubectl get po`. Use the name of the pod as proof of your success within the report. - - Verify your secret inside the pod, for example: `kubectl exec demo-5f898f5f4c-2gpnd -- printenv | grep MY_PASS`. Share this output in `11.md`. - -## Task 2: Vault Secret Management System - -**4 Points:** - -1. Install Vault Using Helm Chart: - - Install Vault using a Helm chart. Follow the steps provided in this guide: - - [Vault Installation Guide](https://developer.hashicorp.com/vault/tutorials/kubernetes/kubernetes-sidecar#install-the-vault-helm-chart) - -2. Follow the Tutorial with Your Helm Chart: - - Adapt the tutorial to work with your Helm chart, including the following steps: - - [Set a Secret in Vault](https://developer.hashicorp.com/vault/tutorials/kubernetes/kubernetes-sidecar#set-a-secret-in-vault) - - [Configure Kubernetes Authentication](https://developer.hashicorp.com/vault/tutorials/kubernetes/kubernetes-sidecar#configure-kubernetes-authentication) - - Be cautious with the service account. If you used `helm create ...`, it will be created automatically. In the guide, they create it manually. - - [Manually Define a Kubernetes Service Account](https://developer.hashicorp.com/vault/tutorials/kubernetes/kubernetes-sidecar#define-a-kubernetes-service-account) - -3. Implement Vault Secrets in Your Helm Chart: - - Use the steps from the guide as an example for your Helm chart: - - [Update values.yaml](https://developer.hashicorp.com/vault/tutorials/kubernetes/kubernetes-sidecar#launch-an-application) - - [Add Labels](https://developer.hashicorp.com/vault/tutorials/kubernetes/kubernetes-sidecar#inject-secrets-into-the-pod) - - Test to ensure your credentials are injected successfully. Use the `kubectl exec -it -- bash` command to access the container. Verify the injected secrets using `cat /path/to/your/secret` and `df -h`. Share the output in the `11.md` report. - - Apply a template as described in the guide. Test the updates as you did in the previous step and provide the outputs in `11.md`. - -**List of Requirements:** - -- Proof of work with a secret in `11.md` for the Task 1 - steps 2 and 3. -- `secrets.yaml` file. -- Resource requests and limits for CPU and memory. -- Vault configuration implemented, with proofs in `11.md`. - -## Bonus Task: Resource Management and Environment Variables - -**2.5 Points:** - -1. 
Read About Resource Management: - - Familiarize yourself with resource management in Kubernetes: - - [Resource Management](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/) - -2. Set Up Requests and Limits for CPU and Memory for Both Helm Charts: - - Configure resource requests and limits for CPU and memory for your application. - - Test to ensure these configurations work correctly. - -3. Add Environment Variables for Your Containers for Both Helm Charts: - - Read about Kubernetes environment variables: - - [Kubernetes Environment Variables](https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/) - - Update your Helm chart with several environment variables using named templates. Move these variables to the `_helpers.tpl` file: - - [Helm Named Templates](https://helm.sh/docs/chart_template_guide/named_templates/) - -### Guidelines - -- Ensure that your documentation is clear and organized. -- Include all the necessary components. -- Follow appropriate file and folder naming conventions. -- Create pull requests (PRs) as needed: from your fork to the main branch of this repository, and from your fork's branch to your fork's master branch. - -> Note: Thorough documentation is essential to demonstrate your success in managing secrets and resource allocation in Kubernetes. Explore the bonus tasks to enhance your skills further. diff --git a/lab12.md b/lab12.md deleted file mode 100644 index efb72a29ec..0000000000 --- a/lab12.md +++ /dev/null @@ -1,68 +0,0 @@ -# Lab 12: Kubernetes ConfigMaps - -## Overview - -In this lab, you'll delve into Kubernetes ConfigMaps, focusing on managing non-confidential data and upgrading your application for persistence. ConfigMaps provide a way to decouple configuration artifacts from image content, allowing you to manage configuration data separately from the application. - -## Task 1: Upgrade Application for Persistence - -**6 Points:** - -1. Upgrade Your Application: - - Modify your application to: - - Implement a counter logic in your application to keep track of the number of times it's accessed. - - Save the counter number in the `visits` file. - - Introduce a new endpoint `/visits` to display the recorded visits. - - Test the changes: - - Update your `docker-compose.yml` to include a new volume with your `visits` file. - - Verify that the enhancements work as expected, you must see the updated number in the `visits` file on the host machine. - - Update the `README.md` for your application. - -## Task 2: ConfigMap Implementation - -**4 Points:** - -1. Understand ConfigMaps: - - Read about ConfigMaps in Kubernetes: - - [ConfigMaps](https://kubernetes.io/docs/concepts/configuration/configmap/) - -2. Mount a Config File: - - Create a `files` folder with a `config.json` file. - - Populate `config.json` with data in JSON format. - - Use Helm to mount `config.json`: - - Create a `configMap` manifest, extracting data from `config.json` using `.Files.Get`. - - Update `deployment.yaml` with `Volumes` and `VolumeMounts`. - - [Example](https://carlos.mendible.com/2019/02/10/kubernetes-mount-file-pod-with-configmap/) - - Install the updated Helm chart and verify success: - - Retrieve the list of pods: `kubectl get po`. - - Use the pod name as proof of successful deployment. - - Check the ConfigMap inside the pod, e.g., `kubectl exec demo-758cc4d7c4-cxnrn -- cat /config.json`. - -3. Documentation: - - Create `12.md` in the `k8s` folder and include the output of relevant commands. 
**List of Requirements:**

- `config.json` in the `files` folder.
- `configMap` retrieving data from `config.json` using `.Files.Get`.
- `Volumes` and `VolumeMounts` in `deployment.yaml`.
- `12.md` documenting the results of commands.

## Bonus Task: ConfigMap via Environment Variables

**2.5 Points:**

1. Upgrade Bonus App:
   - Implement persistence logic in your bonus app.

2. ConfigMap via Environment Variables:
   - Utilize a ConfigMap via environment variables in a running container using the `envFrom` property.
   - Provide proof with the output of the `env` command inside your container.

### Guidelines

- Maintain clear and organized documentation.
- Use appropriate naming conventions for files and folders.
- For your repository PR, ensure it's from the `lab12` branch to the main branch.

> Note: Clear documentation is crucial to demonstrate successful data persistence and ConfigMap utilization in Kubernetes. Explore the bonus tasks to further enhance your skills.

diff --git a/lab13.md b/lab13.md deleted file mode 100644 index e6f6c919f8..0000000000 --- a/lab13.md +++ /dev/null @@ -1,212 +0,0 @@

# Lab 13: ArgoCD for GitOps Deployment

## Overview

In this lab, you will implement ArgoCD to automate Kubernetes application deployments using GitOps principles. You'll install ArgoCD via Helm, configure it to manage your Python app, and simulate production-like workflows.

## Task 1: Deploy and Configure ArgoCD

**6 Points:**

1. Install ArgoCD via Helm
   - Add the ArgoCD Helm repository:

     ```bash
     helm repo add argo https://argoproj.github.io/argo-helm
     ```

     [ArgoCD Helm Chart Docs](https://github.com/argoproj/argo-helm)

   - Install ArgoCD:

     ```bash
     helm install argo argo/argo-cd --namespace argocd --create-namespace
     ```

     [ArgoCD Installation Guide](https://argo-cd.readthedocs.io/en/stable/getting_started/)

   - Verify installation:

     ```bash
     kubectl wait --for=condition=ready pod -l app.kubernetes.io/name=argocd-server -n argocd --timeout=90s
     ```

2. Install ArgoCD CLI
   - Install the ArgoCD CLI tool (required for command-line interactions):

     ```bash
     # For macOS (Homebrew):
     brew install argocd

     # For Linux (including Debian/Ubuntu), download the release binary:
     curl -sSL -o argocd https://github.com/argoproj/argo-cd/releases/latest/download/argocd-linux-amd64
     chmod +x argocd
     sudo mv argocd /usr/local/bin/
     ```

     [ArgoCD CLI Docs](https://argo-cd.readthedocs.io/en/stable/cli_installation/)

   - Verify CLI installation:

     ```bash
     argocd version
     ```

3. Access the ArgoCD UI
   - Forward the ArgoCD server port:

     ```bash
     kubectl port-forward svc/argocd-server -n argocd 8080:443 &
     ```

   - Log in using the initial admin password:

     ```bash
     # Retrieve the password:
     kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 --decode

     # Log in via CLI:
     argocd login localhost:8080 --insecure
     ```

     [ArgoCD Authentication Docs](https://argo-cd.readthedocs.io/en/stable/user-guide/accessing/)

4. Configure Python App Sync
   - Create an ArgoCD folder: add an `ArgoCD` folder in your `k8s` directory for ArgoCD manifests.
   - Define the ArgoCD Application: create `argocd-python-app.yaml` in the `ArgoCD` folder:

     ```yaml
     apiVersion: argoproj.io/v1alpha1
     kind: Application
     metadata:
       name: python-app
       namespace: argocd
     spec:
       project: default
       source:
         repoURL: https://github.com//S25-core-course-labs.git
         targetRevision: lab13
         path:
         helm:
           valueFiles:
             - values.yaml
       destination:
         server: https://kubernetes.default.svc
         namespace: default
       syncPolicy:
         automated:
           selfHeal: true
     ```

     [ArgoCD Application Manifest Docs](https://argo-cd.readthedocs.io/en/stable/operator-manual/declarative_setup/)

   - Apply the configuration:

     ```bash
     kubectl apply -f ArgoCD/argocd-python-app.yaml
     ```

   - Verify sync:

     ```bash
     argocd app sync python-app
     argocd app status python-app
     ```

5. Test Sync Workflow
   - Modify `values.yaml` (e.g., update `replicaCount`).
   - Commit and push the changes to the target branch from the config.
   - Observe ArgoCD auto-sync the update:

     ```bash
     argocd app status python-app
     ```

## Task 2: Multi-Environment Deployment & Auto-Sync

**4 Points:**

1. Set Up Multi-Environment Configurations
   - Extend your Python app's Helm chart to support `dev` and `prod` environments.
   - Create environment-specific values files (`values-dev.yaml`, `values-prod.yaml`).

2. Create Namespaces

   ```bash
   kubectl create namespace dev
   kubectl create namespace prod
   ```

3. Deploy Multi-Environment via ArgoCD
   - Define two ArgoCD applications with auto-sync, `argocd-python-dev.yaml` and `argocd-python-prod.yaml`, structured like the Task 1 manifest.

4. Enable Auto-Sync
   - Test auto-sync by updating `values-prod.yaml` and pushing to Git.

5. Self-Heal Testing
   - Test 1: Manual Override of Replica Count
     1. Modify the deployment's replica count manually:

        ```bash
        kubectl patch deployment python-app-prod -n prod --patch '{"spec":{"replicas": 3}}'
        ```

     2. Observe ArgoCD auto-revert the change (this requires `selfHeal: true` under `syncPolicy.automated`; without self-heal, ArgoCD does not undo manual changes to the live state):

        ```bash
        argocd app sync python-app-prod
        argocd app status python-app-prod
        ```

   - Test 2: Delete a Pod (Replica)
     1. Delete a pod in the `prod` namespace:

        ```bash
        kubectl delete pod -n prod -l
        ```

     2. Verify Kubernetes recreates the pod to match the deployment's `replicaCount`:

        ```bash
        kubectl get pods -n prod -w
        ```

     3. Confirm ArgoCD shows no drift (since pod deletions don't affect the desired state):

        ```bash
        argocd app diff python-app-prod
        ```

6. Documentation
   - In `13.md`, include:
     - Output of `kubectl get pods -n prod` before and after pod deletion.
     - Screenshots of the ArgoCD UI showing sync status and the dashboard after both tests.
     - Explanation of how ArgoCD handles configuration drift vs. runtime events.

## Bonus Task: Sync Your Bonus App with ArgoCD

**2.5 Points:**

1. Configure ArgoCD for Bonus App
   - Create an `argocd--app.yaml` similar to Task 1, pointing to your bonus app's Helm chart folder.
   - Sync and validate deployment with:

     ```bash
     kubectl get pods -n
     ```

### Guidelines

- Follow the [ArgoCD docs](https://argo-cd.readthedocs.io/) for advanced configurations.
- Use consistent naming conventions (e.g., `lab13` branch for Git commits).
- Document all steps in `13.md` (include diffs, outputs, and UI screenshots).
- For your repository PR, ensure it's from the `lab13` branch to the main branch.

> **Note**: This lab emphasizes GitOps workflows, environment isolation, and automation.
Mastery of ArgoCD will streamline your CI/CD pipelines in real-world scenarios. diff --git a/lab14.md b/lab14.md deleted file mode 100644 index d1d6ba51cd..0000000000 --- a/lab14.md +++ /dev/null @@ -1,106 +0,0 @@ -# Lab 14: Kubernetes StatefulSet - -## Overview - -In this lab, you'll explore Kubernetes StatefulSets, focusing on managing stateful applications with guarantees about the ordering and uniqueness of a set of Pods. - -## Task 1: Implement StatefulSet in Helm Chart - -**6 Points:** - -1. Understand StatefulSets: - - Read about StatefulSet objects: - - [Concept](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/) - - [Tutorial](https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/) - -2. Update Helm Chart: - - Rename `deployment.yml` to `statefulset.yml`. - - Create a manifest for StatefulSet following the tutorial. - - Test with command: `helm install --dry-run --debug name_of_your_chart path_to_your_chart`. - - Fix any issues and deploy it. - - Apply best practices by moving values to variables in `values.yml` meaningfully. - -## Task 2: StatefulSet Exploration and Optimization - -**4 Points:** - -1. Research and Documentation: - - Create `14.md` report. - - Include the output of `kubectl get po,sts,svc,pvc` commands. - - Use `minikube service name_of_your_statefulset` command to access your app. - - Access the root path of your app from different tabs and modes in your browser. - - Check the content of your file in each pod, e.g., `kubectl exec pod/demo-0 -- cat visits`, and provide the output for all replicas. - - Describe and explain differences in the report. - -2. Persistent Storage Validation - - Delete a pod: - - ```bash - kubectl delete pod app-stateful-0 - ``` - - - Verify that the PVC and data persist: - - ```bash - kubectl get pvc - kubectl exec app-stateful-0 -- cat /data/visits - ``` - -3. Headless Service Access - - Access pods via DNS: - - ```bash - kubectl exec app-stateful-0 -- nslookup app-stateful-1.app-stateful - ``` - - - Document DNS resolution in `14.md`. - -4. Monitoring & Alerts - - Add liveness/readiness probes to your StatefulSet. - - Describe in `14.md`: - - How probes ensure pod health. - - Why theyโ€™re critical for stateful apps. - -5. Ordering Guarantee and Parallel Operations: - - Explain why ordering guarantees are unnecessary for your app. - - Implement a way to instruct the StatefulSet controller to launch or terminate all Pods in parallel. - -**List of Requirements:** - -- Outputs of commands in `14.md`. -- Results of the "number of visits" command for each pod, with an explanation in `14.md`. -- Answers to questions in point 2 of `14.md`. -- Implementation of parallel launch and terminate. - -## Bonus Task: Update Strategies - -**2.5 Points:** - -1. Apply StatefulSet to Bonus App - - Convert your bonus appโ€™s Helm chart to use a StatefulSet. - -2. Explore Update Strategies - - Implement Rolling Updates: - - ```yaml - spec: - updateStrategy: - type: RollingUpdate - rollingUpdate: - partition: 1 - ``` - - - Test Canaries: - Update a subset of pods first. - - - Document in `14.md`: - - Explain `OnDelete`, `RollingUpdate`, and their use cases. - - Compare with Deployment update strategies. - -### Guidelines - -- Maintain clear and organized documentation. -- Use appropriate naming conventions for files and folders. -- For your repository PR, ensure it's from the `lab14` branch to the main branch. 
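The parallel launch and termination asked for in Task 2 maps to a single StatefulSet field; a minimal excerpt (the `app-stateful` name mirrors the commands above and is illustrative):

```yaml
# statefulset.yml excerpt: create and delete all Pods at once instead of ordinally
spec:
  serviceName: app-stateful
  replicas: 3
  podManagementPolicy: Parallel
```

Note that `Parallel` only affects initial launch and scale operations; rolling updates still proceed one Pod at a time according to the update strategy.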
- -> Note: Understanding StatefulSets and their optimization is crucial for managing stateful applications in Kubernetes. Explore the bonus tasks to further enhance your skills. diff --git a/lab15.md b/lab15.md deleted file mode 100644 index 887587145d..0000000000 --- a/lab15.md +++ /dev/null @@ -1,78 +0,0 @@ -# Lab 15: Kubernetes Monitoring and Init Containers - -## Overview - -In this lab, you will explore Kubernetes cluster monitoring using Prometheus with the Kube Prometheus Stack. Additionally, you'll delve into the concept of Init Containers in Kubernetes. - -## Task 1: Kubernetes Cluster Monitoring with Prometheus - -**6 Points:** - -1. This lab was tested on a specific version of components: - - Minikube v1.33.0 - - Minikube kubectl v1.28.3 - - kube-prometheus-stack-57.2.0 v0.72.0 - - the minikube start command - `minikube start --driver=docker --container-runtime=containerd` - -2. Read about `Kube Prometheus Stack`: - - [Helm chart with installation guide](https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack) - - [Explanation of components](https://github.com/prometheus-operator/kube-prometheus#kubeprometheus) - -3. Describe Components: - - Create `15.md` and detail the components of the Kube Prometheus Stack, explaining their roles and functions. Avoid direct copy-pasting; provide a personal understanding. - -4. Install Helm Charts: - - Install the Kube Prometheus Stack to your Kubernetes cluster. - - Install your app's Helm chart. - - Provide the output of the `kubectl get po,sts,svc,pvc,cm` command in the report and explain each part. - -5. Utilize Grafana Dashboards: - - Access Grafana using `minikube service monitoring-grafana`. - - Explore existing dashboards to find information about your cluster: - 1. Check CPU and Memory consumption of your StatefulSet. - 2. Identify Pods with higher and lower CPU usage in the default namespace. - 3. Monitor node memory usage in percentage and megabytes. - 4. Count the number of pods and containers managed by the Kubelet service. - 5. Evaluate network usage of Pods in the default namespace. - 6. Determine the number of active alerts; also check the Web UI with `minikube service monitoring-kube-prometheus-alertmanager`. - - Provide answers to all these points in the report. - -## Task 2: Init Containers - -**4 Points:** - -1. Read about `Init Containers`: - - [Concept](https://kubernetes.io/docs/concepts/workloads/pods/init-containers/) - - [Tutorial](https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-initialization/#create-a-pod-that-has-an-init-container) - -2. Implement Init Container: - - Create a new Volume. - - Implement an Init container to download any file using `wget` (you can use a site from the example). - - Provide proof of success, e.g., `kubectl exec pod/demo-0 -- cat /test.html`. - -**List of Requirements:** - -- Detailed explanation of monitoring stack components in `15.md`. -- Output and explanation of `kubectl get po,sts,svc,pvc,cm`. -- Answers to all 6 questions from point 4 in `15.md`. -- Implementation of Init Container. -- Proof of Init Container downloading a file. - -## Bonus Task: App Metrics & Multiple Init Containers - -**2.5 Points:** - -1. App Metrics: - - Fetch metrics from your app and provide proof. - -2. Init Container Queue: - - Create a queue of three Init containers, with any logic like adding new lines to the same file. - - Provide proof using the `cat` tool. - -### Guidelines - -- Ensure clear and organized documentation. 
-- Use appropriate naming conventions for files and folders. -- For your repository PR, ensure it's from the `lab15` branch to the main branch. - -> Note: Demonstrate successful implementation and understanding of Kubernetes monitoring and Init Containers. Take your time to explore the bonus tasks for additional learning opportunities. diff --git a/lab16.md b/lab16.md deleted file mode 100644 index 37912fc50b..0000000000 --- a/lab16.md +++ /dev/null @@ -1,75 +0,0 @@ -# Lab 16: IPFS and Fleek - -In this lab, you will explore essential DevOps tools and set up a project on the Fleek service. Follow the tasks below to complete the lab assignment. - -## Task 1: Set Up an IPFS Gateway Using Docker - -Objective: Understand and implement an IPFS gateway using Docker, upload a file, and verify it via an IPFS cluster. - -1. Set Up IPFS Gateway: - - Install Docker on your machine if it's not already installed. - - [Docker Installation Guide](https://docs.docker.com/get-docker/) - - - Pull the IPFS Docker image and run an IPFS container: - - ```sh - docker pull ipfs/go-ipfs - docker run -d --name ipfs_host -v /path/to/folder/with/file:/export -v ipfs_data:/data/ipfs -p 8080:8080 -p 4001:4001 -p 5001:5001 ipfs/go-ipfs - ``` - - - Verify the IPFS container is running: - - ```sh - docker ps - ``` - -2. Upload a File to IPFS: - - Open a browser and access the IPFS web UI: - - ```sh - http://127.0.0.1:5001/webui/ - ``` - - - Explore the web UI and wait for 5 minutes to sync up with the network. - - Upload any file via the web UI. - - Use the obtained hash to access the file via any public IPFS gateway. Here are a few options: - - [IPFS.io Gateway](https://ipfs.io/ipfs/) - - [Cloudflare IPFS Gateway](https://cloudflare-ipfs.com/ipfs/) - - [Infura IPFS Gateway](https://ipfs.infura.io/ipfs/) - - - Append your file hash to any of the gateway URLs to verify your file is accessible. Note that it may fail due to network overload, so don't worry if you can't reach it. - -3. Documentation: - - Create a `submission2.md` file. - - Share information about connected peers and bandwidth in your report. - - Provide the hash and the URLs used to verify the file on the IPFS gateways. - -## Task 2: Set Up Project on Fleek.xyz - -Objective: Set up a project on the Fleek service and share the IPFS link. - -1. Research: - - Understand what IPFS is and its purpose. - - Explore Fleek's features. - -2. Set Up: - - Sign up for a Fleek account if you haven't already. - - Use your fork of the Labs repository as your project source. Optionally, set up your own website (notify us in advance). - - Configure the project settings on Fleek. - - Deploy the Labs repository to Fleek, ensuring it is uploaded to IPFS. - -3. Documentation: - - Share the IPFS link and domain of the deployed project in the `submission2.md` file. - -## Additional Resources - -- [IPFS Documentation](https://docs.ipfs.io/) -- [Fleek Documentation](https://docs.fleek.xyz/) - -### Guidelines - -- Use proper Markdown formatting for documentation files. -- Organize files with appropriate naming conventions. -- Create a Pull Request to the main branch of the repository with your completed lab assignment. - -> Note: Actively explore and document your findings to gain hands-on experience with IPFS and Fleek. diff --git a/lab16/index.html b/lab16/index.html deleted file mode 100644 index acce39eee3..0000000000 --- a/lab16/index.html +++ /dev/null @@ -1,303 +0,0 @@ - - - - - - DevOps Engineering Expert Track - - - - -
[Deleted file lab16/index.html: a static course landing page whose markup was stripped during extraction. Recoverable text content:]

- Hero: "Master Modern DevOps Practices"; tagline "16 hands-on labs covering Kubernetes, Terraform, CI/CD, and more"; call to action "Start Free Trial →"
- "Why This Course?" cards: "16 Advanced Labs" (Build production-ready systems from scratch), "Industry-Standard Tools" (Terraform, ArgoCD, Prometheus, Vault, and more), "Job-Ready Skills" (Learn tools used by top tech companies)
- "Lab Syllabus (2025 Edition)": the same 16-lab list as in README.md
- "Learning Progression": Phase 1: Foundations (Labs 1-6): Web Dev → Containers → CI/CD → IaC → Ansible; Phase 2: Observability (Labs 7-8): Logging → Monitoring → Loki/Prometheus; Phase 3: Kubernetes Mastery (Labs 9-12): Deployments → Helm → Secrets → ConfigMaps; Phase 4: Expert Track (Labs 13-16): GitOps → StatefulSets → IPFS → Final Project
- - - - - - \ No newline at end of file diff --git a/lab2.md b/lab2.md deleted file mode 100644 index ff71bc227d..0000000000 --- a/lab2.md +++ /dev/null @@ -1,85 +0,0 @@ -# Lab 2: Containerization - Docker - -## Overview - -In this lab assignment, you will learn to containerize applications using Docker, while focusing on best practices. Additionally, you will explore Docker multi-stage builds. Follow the tasks below to complete the lab assignment. - -## Task 1: Dockerize Your Application - -**6 Points:** - -1. Create a `Dockerfile`: - - Inside the `app_python` folder, craft a `Dockerfile` for your application. - - Research and implement Docker best practices. Utilize a Dockerfile linter for quality assurance. - -2. Build and Test Docker Image: - - Build a Docker image using your Dockerfile. - - Thoroughly test the image to ensure it functions correctly. - -3. Push Image to Docker Hub: - - If you lack a public Docker Hub account, create one. - - Push your Docker image to your public Docker Hub account. - -4. Run and Verify Docker Image: - - Retrieve the Docker image from your Docker Hub account. - - Execute the image and validate its functionality. - -## Task 2: Docker Best Practices - -**4 Points:** - -1. Enhance your docker image by implementing [Docker Best Practices](https://docs.docker.com/build/building/best-practices/). - - No root user inside, or you will get no points at all. - -2. Write `DOCKER.md`: - - Inside the `app_python` folder, create a `DOCKER.md` file. - - Elaborate on the best practices you employed within your Dockerfile. - - Implementing and listing numerous Docker best practices will earn you more points. - -3. Enhance the README.md: - - Update the `README.md` file in the `app_python` folder. - - Include a dedicated `Docker` section, explaining your containerized application and providing clear instructions for execution. - - How to build? - - How to pull? - - How to run? - -### List of Requirements - -- Rootless container. -- Use COPY, but only specific files. -- Layer sanity. -- Use `.dockerignore`. -- Use a precise version of your base image and language, example `python:3-alpine3.15`. - -## Bonus Task: Multi-Stage Builds Exploration - -**2.5 Points:** - -1. Dockerize Previous App: - - Craft a `Dockerfile` for the application from the prior lab. - - Place this Dockerfile within the corresponding `app_*` folder. - -2. Follow Main Task Guidelines: - - Apply the same steps and suggestions as in the primary Dockerization task. - -3. Study Docker Multi-Stage Builds: - - Familiarize yourself with Docker multi-stage builds. - - Consider implementing multi-stage builds, only if they enhance your project's structure and efficiency. - -4. Study Distroless Images: - - Explore how to use Distroless images by reviewing the official documentation: [GoogleContainerTools/distroless](https://github.com/GoogleContainerTools/distroless). - - Create new `distroless.Dockerfile` files for your Python app and your second app. - - Use the `nonroot` tag for both images to ensure they run with non-root privileges. - - Verify that the applications work correctly with the Distroless images. - - Compare the sizes of your previous Docker images with the new Distroless-based images. - - In the `DOCKER.md` file, describe the differences between the Distroless images and your previous images. Explain why these differences exist (e.g., smaller size, reduced attack surface, etc.). - - Include a screenshot of your final results (e.g., image sizes). 
   - Add a new section to the `README.md` file titled "Distroless Image Version".

### Guidelines

- Utilize appropriate Markdown formatting and structure for all documentation.
- Organize files within the lab folder with suitable naming conventions.
- Create pull requests (PRs) as needed: from your fork to the main branch of this repository, and from your fork's branch to your fork's master branch.

> Note: Utilize Docker to containerize your application, adhering to best practices. Explore Docker multi-stage builds for a deeper understanding, and document your process using Markdown.

diff --git a/lab3.md b/lab3.md deleted file mode 100644 index 2f4899750a..0000000000 --- a/lab3.md +++ /dev/null @@ -1,53 +0,0 @@

# Lab 3: Continuous Integration Lab

## Overview

In this lab assignment, you will delve into continuous integration (CI) practices by focusing on code testing, setting up GitHub Actions CI, and optimizing workflows. Additionally, you will have the opportunity to explore bonus tasks to enhance your CI knowledge. Follow the tasks below to complete the lab assignment.

## Task 1: Code Testing and GitHub Actions CI

**6 Points:**

1. Code Testing:
   - Begin by researching and implementing best practices for code testing.
   - Write comprehensive unit tests for your application.
   - In the `PYTHON.md` file, describe the unit tests you've created and the best practices you applied.
   - Enhance the `README.md` file by adding a "Unit Tests" section.

2. Set Up GitHub Actions CI:
   - Create a CI workflow using GitHub Actions to build and test your Python project. Refer to the [official GitHub Actions documentation](https://docs.github.com/en/actions/automating-builds-and-tests/building-and-testing-python) for guidance.
   - Ensure your CI workflow includes at least three essential steps: Dependencies, Linter, and Tests.
   - Integrate Docker-related steps into your CI workflow, at least two steps: Login and Build & Push. You can refer to the [Docker GitHub Actions documentation](https://docs.docker.com/ci-cd/github-actions/) for assistance.
   - Update the `README.md` file to provide information about your CI workflow.

## Task 2: CI Workflow Improvements

**4 Points:**

1. Workflow Enhancements:
   - Add a workflow status badge to your repository for visibility.
   - Dive into best practices for CI workflows and apply them to optimize your existing workflow.
   - Utilize build cache to enhance workflow efficiency.
   - Create a `CI.md` file and document the best practices you've implemented.

2. Implement Snyk Vulnerability Checks:
   - Integrate Snyk into your CI workflow to identify and address vulnerabilities in your projects, as sketched after the bonus task below. You can refer to the [Python example](https://github.com/snyk/actions/tree/master/python-3.8) for guidance, and check [another option](https://docs.snyk.io/integrations/snyk-ci-cd-integrations/github-actions-integration#use-your-own-development-environment) for installing dependencies if you face any issues.

## Bonus Task

**2.5 Points:**

1. Follow the Main Task Steps:
   - Apply the same steps as in the primary CI task to set up CI workflows for an extra application. You can find useful examples in the [GitHub Actions starter workflows](https://github.com/actions/starter-workflows/tree/main/ci).

2. CI Workflow Improvements:
   1. Python App CI: Configure the CI workflow to run only when changes occur in the `app_python` folder.
   2. Extra Language App CI: Configure the CI workflow to run only when changes occur in the `app_` folder.
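For the Snyk check from Task 2, a sketch of one possible extra workflow step, following the linked python-3.8 example; the `SNYK_TOKEN` repository secret is an assumption you must create yourself:

```yaml
# One possible Snyk step inside the existing test job
- name: Snyk vulnerability scan
  uses: snyk/actions/python-3.8@master
  env:
    SNYK_TOKEN: ${{ secrets.SNYK_TOKEN }}
```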
### Guidelines

- Use proper Markdown formatting and structure for all documentation files.
- Organize files within the lab folder with suitable naming conventions.
- Create pull requests (PRs) as needed: from your fork to the main branch of this repository, and from your fork's branch to your fork's master branch.

> Note: Implement CI best practices, optimize your workflows, and explore bonus tasks to deepen your understanding of continuous integration.

diff --git a/lab4.md b/lab4.md deleted file mode 100644 index e88b5e63e5..0000000000 --- a/lab4.md +++ /dev/null @@ -1,84 +0,0 @@

# Lab 4: Infrastructure as Code Lab

## Overview

In this lab assignment, you will explore Infrastructure as Code (IaC) using Terraform. You'll build Docker and AWS infrastructures and dive into managing GitHub repositories through Terraform. Additionally, there are bonus tasks to enhance your Terraform skills. Follow the tasks below to complete the lab assignment.

## Task 1: Introduction to Terraform

**6 Points:**

0. You will need a VPN tool for this lab.

1. Get Familiar with Terraform:
   - Begin by familiarizing yourself with Terraform by reading the [introduction](https://www.terraform.io/intro/index.html) and exploring [best practices](https://www.terraform.io/docs/cloud/guides/recommended-practices/index.html).

2. Set Up Terraform Workspace:
   - Create a `terraform` folder to organize your Terraform workspaces.
   - Inside the `terraform` folder, create a file named `TF.md`.

3. Docker Infrastructure Using Terraform:
   - Follow the [Docker tutorial](https://learn.hashicorp.com/collections/terraform/docker-get-started) for building a Docker infrastructure with Terraform.
   - Perform the following tasks as instructed in the tutorial:
     - Install Terraform.
     - Build the Infrastructure.
     - Provide the output of the following commands in the `TF.md` file:

       ```sh
       terraform state show
       terraform state list
       ```

     - Document a part of the log with the applied changes.
     - Utilize input variables to rename your Docker container.
     - Finish the tutorial and provide the output of the `terraform output` command in the `TF.md` file.

4. Yandex Cloud Infrastructure Using Terraform:
   - Create an account on [Yandex Cloud](https://cloud.yandex.com/).
   - Check for available free-tier options and select a free VM instance suitable for this lab.
   - Follow the [Yandex Quickstart Guide](https://yandex.cloud/en-ru/docs/tutorials/infrastructure-management/terraform-quickstart#linux_1) to set up and configure Terraform for managing Yandex Cloud resources.
   - Document the entire process, including setup steps, configurations, and any challenges encountered, in the `TF.md` file.

5. [Optional] AWS Infrastructure Using Terraform:
   - Follow the [AWS tutorial](https://learn.hashicorp.com/tutorials/terraform/aws-build?in=terraform/aws-get-started) alongside the instructions from the previous step.

## Task 2: Terraform for GitHub

**4 Points:**

1. GitHub Infrastructure Using Terraform:
   - Utilize the [GitHub provider for Terraform](https://registry.terraform.io/providers/integrations/github/latest/docs).
   - Create a directory inside the `terraform` folder specifically for managing your GitHub project infrastructure.
   - Build GitHub infrastructure following a reference like [this example](https://dev.to/pwd9000/manage-and-maintain-github-with-terraform-2k86).
Prepare `.tf` files that include: - - Repository name - - Repository description - - Visibility settings - - Default branch - - Branch protection rule for the default branch - - Avoid placing your token as a variable in the code; instead, use an environment variable. - -2. Import Existing Repository: - - Use the `terraform import` command to import your current GitHub repository into your Terraform configuration. No need to create a new one. Example: `terraform import "github_repository.core-course-labs" "core-course-labs"`. - -3. Apply Terraform Changes: - - Apply changes from your Terraform configuration to your GitHub repository. - -4. Document Best Practices: - - Provide Terraform-related best practices that you applied in the `TF.md` file. - -## Bonus Task: Adding Teams - -**2.5 Points:** - -1. GitHub Teams Using Terraform: - - You need to create a new organization. - - Extend your Terraform configuration to add several teams to your GitHub repository, each with different levels of access. - - Apply the changes and ensure they take effect in your GitHub repository. - -### Guidelines - -- Use proper Markdown formatting and structure for documentation files. -- Organize files within the lab folder with suitable naming conventions. -- Create pull requests (PRs) as needed: from your fork to the main branch of this repository, and from your fork's branch to your fork's master branch. - -> Note: Dive into Terraform to manage infrastructures efficiently. Explore the AWS and Docker tutorials, and don't forget to document your process and best practices in the `TF.md` file. diff --git a/lab5.md b/lab5.md deleted file mode 100644 index bead18ceea..0000000000 --- a/lab5.md +++ /dev/null @@ -1,141 +0,0 @@ -# Lab 5: Ansible and Docker Deployment - -## Overview - -In this lab, you will get acquainted with Ansible, a powerful configuration management and automation tool. Your objective is to use Ansible to deploy Docker on a newly created cloud VM. This knowledge will be essential for your application deployment in the next lab. - -## Task 1: Initial Setup - -**6 Points:** - -1. Repository Structure: - - Organize your repository following the recommended structure below: - - ```sh - . - |-- README.md - |-- ansible - | |-- inventory - | | `-- default_aws_ec2.yml - | |-- playbooks - | | `-- dev - | | `-- main.yaml - | |-- roles - | | |-- docker - | | | |-- defaults - | | | | `-- main.yml - | | | |-- handlers - | | | | `-- main.yml - | | | |-- tasks - | | | | |-- install_compose.yml - | | | | |-- install_docker.yml - | | | | `-- main.yml - | | | `-- README.md - | | `-- web_app - | | |-- defaults - | | | `-- main.yml - | | |-- handlers - | | | `-- main.yml - | | |-- meta - | | | `-- main.yml - | | |-- tasks - | | | `-- main.yml - | | `-- templates - | | `-- docker-compose.yml.j2 - | `-- ansible.cfg - |-- app_go - |-- app_python - `-- terraform - ``` - -2. Installation and Introduction: - - Install Ansible and familiarize yourself with its basics. You can follow the [Ansible installation guide](https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html). - -3. Use an Existing Ansible Role for Docker: - - Utilize an existing Ansible role for Docker from `ansible-galaxy` as a template. You can explore [this Docker role](https://github.com/geerlingguy/ansible-role-docker) as an example. - -4. Create a Playbook and Testing: - - Develop an Ansible playbook for deploying Docker. - - Test your playbook to ensure it works as expected. 
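The playbook from step 4 can start as small as this sketch; the host group, privilege escalation, and the galaxy role name from step 3 are assumptions to adjust to your inventory:

```yaml
# ansible/playbooks/dev/main.yaml: install Docker on the cloud VM via the galaxy role
- name: Provision Docker on cloud VM
  hosts: all
  become: true
  roles:
    - role: geerlingguy.docker
```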
- -## Task 2: Custom Docker Role - -**4 Points:** - -1. Create Your Custom Docker Role: - - Develop a custom Ansible role for Docker with the following tasks: - 1. Install Docker and Docker Compose. - 2. Update your playbook to utilize this custom role. [Tricks and Tips](https://docs.ansible.com/ansible/latest/user_guide/playbooks_best_practices.html). - 3. Test your playbook with the custom role to ensure successful deployment. - 4. Make sure the role has a task to configure Docker to start on boot (`systemctl enable docker`). - 5. Include a task to add the current user to the `docker` group to avoid using `sudo` for Docker commands. - -2. Documentation: - - Develop an `ANSIBLE.md` file in the `ansible` folder to document your Ansible-related work. - - Create a `README.md` file in the `ansible/roles/docker` folder. - - Use a Markdown template to describe your Docker role, its requirements and usage. - - Example `README.md` template for the Docker role: - - ```markdown - # Docker Role - - This role installs and configures Docker and Docker Compose. - - ## Requirements - - - Ansible 2.9+ - - Ubuntu 22.04 - - ## Role Variables - - - `docker_version`: The version of Docker to install (default: `latest`). - - `docker_compose_version`: The version of Docker Compose to install (default: `1.29.2`). - - ## Example Playbook - - ```yaml - - hosts: all - roles: - - role: docker - ``` - -3. Deployment Output: - - Execute your playbook to deploy the Docker role. - - Provide the last 50 lines of the output from your deployment command in the `ANSIBLE.md` file. - - Use the `--check` flag with `ansible-playbook` to perform a dry run and verify changes before applying them. - - Example command: - - ```sh - ansible-playbook --diff - ``` - -4. **Inventory Details:** - - Execute the following command `ansible-inventory -i .yaml --list` and provide its output in the `ANSIBLE.md` file. - - Validate the inventory file using `ansible-inventory -i .yaml --graph` to visualize the inventory structure. - - Ensure you have documented the inventory information. - -## Bonus Task: Dynamic Inventory - -**2.5 Points:** - -1. Set up Dynamic Inventory: - - Implement dynamic inventory for your cloud environment, if available. - - You may explore ready-made solutions for dynamic inventories: - - - [AWS Example](https://docs.ansible.com/ansible/latest/collections/amazon/aws/aws_ec2_inventory.html) - - [Yandex Cloud (Note: Not tested)](https://github.com/rodion-goritskov/yacloud_compute) - - Implementing dynamic inventory can enhance your automation capabilities. - -2. Secure Docker Configuration: - - Add a task to configure Docker security settings, disable root access. - - Use the `copy` module and modify the `daemon.json` file. - -### Guidelines - -- Use proper Markdown formatting and structure for documentation files. -- Organize files within the lab folder with suitable naming conventions. -- Create pull requests (PRs) as needed: from your fork to the main branch of this repository, and from your fork's branch to your fork's master branch. - -> Note: Ensure that your repository is well-structured, follow Ansible best practices, and provide clear documentation for a successful submission. diff --git a/lab6.md b/lab6.md deleted file mode 100644 index cc8249390d..0000000000 --- a/lab6.md +++ /dev/null @@ -1,139 +0,0 @@ -# Lab 6: Ansible and Application Deployment - -## Overview - -In this lab, you will utilize Ansible to set up a Continuous Deployment (CD) process for your application. 
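Task 1 below asks for a `web_app` role that pulls your image and starts a container; a minimal sketch of such a tasks file (module names come from the `community.docker` collection, and all variable names are assumptions defined in `defaults/main.yml`):

```yaml
# roles/web_app/tasks/main.yml: pull the image, then run the container
- name: Pull application image
  community.docker.docker_image:
    name: "{{ docker_image }}"
    source: pull

- name: Start application container
  community.docker.docker_container:
    name: "{{ container_name }}"
    image: "{{ docker_image }}"
    ports:
      - "{{ app_port }}:5000"
    restart_policy: always
```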
- -## Task 1: Application Deployment - -**6 Points:** - -1. Create an Ansible Role: - - Develop an Ansible role specifically for deploying your application's Docker image, it can be done manually or via `ansible-galaxy init roles/web_app`. Call it `web_app`. - - Define variables in `roles/web_app/defaults/main.yml`. - - Add tasks to `roles/web_app/tasks/main.yml` to pull the Docker image and start the container. - - > Managing just a container is bad practice, you can omit it and move to the Task 2 directly. - -2. Update the Playbook: - - Modify your Ansible playbook to integrate the new role you've created for Docker image deployment. - -3. Deployment Output: - - Execute your playbook to deploy the role. - - Provide the last 50 lines of the output from your deployment command in the `ANSIBLE.md` file. - -## Task 2: Ansible Best Practices - -**4 Points:** - -1. Group Tasks with Blocks: - - Organize related tasks within your playbooks using Ansible blocks. - - Implement logical blocks. For example: - - ```yaml - - name: Setup Docker Environment - block: - - name: Install Docker - apt: - name: docker.io - state: present - - - name: Start Docker Service - service: - name: docker - state: started - enabled: yes - tags: - - setup - ``` - -2. Role Dependency: - - Set the role dependency for your `web_app` role to include the `docker` role. - - Specify dependencies in `roles/web_app/meta/main.yml`. - -3. Apply Tags: - - Implement Ansible tags to group tasks logically and enable selective execution. For example: - - ```yaml - - name: Pull Docker image - docker_image: - name: "{{ docker_image }}" - source: pull - tags: - - docker - ``` - - - Run specific tags. For example: - - ```bash - ansible-playbook site.yml --tags docker - ``` - -4. Wipe Logic: - - Create a wipe logic in `roles/web_app/tasks/0-wipe.yml`. This should include removing your Docker container and all related files. - - Ensure that this wipe process can be enabled or disabled by using a variable, for example, `web_app_full_wipe=true`. - -5. Separate Tag for Wipe: - - Utilize a distinct tag for the **Wipe** section of your Ansible playbook. This allows you to run the wipe tasks independently from the main tasks. - -6. Docker Compose File: - - Write a Jinja2 template (`roles/web_app/templates/docker-compose.yml.j2`). For example: - - ```yaml - version: '3' - services: - app: - image: "{{ docker_image }}" - ports: - - "{{ app_port }}:80" - ``` - - - Deliver the template using the `template` module in `roles/web_app/tasks/main.yml`. - - Suggested structure: - - ```sh - . - |-- defaults - | `-- main.yml - |-- meta - | `-- main.yml - |-- tasks - | |-- 0-wipe.yml - | `-- main.yml - `-- templates - `-- docker-compose.yml.j2 - ``` - -7. Create `README.md`: - - Create a `README.md` file in the `ansible/roles/web_app` folder. - - Use a suggested Docker Markdown template from the previous lab to describe your role, its requirements and usage. - -## Bonus Task: CD Improvement - -**2.5 Points:** - -1. Create an Extra Playbook: - - Develop an additional Ansible playbook specifically for your bonus application. - - You can reuse the existing Ansible role you created for your primary application or create a new one. - - Suggested structure: - - ```sh - . - `--ansible - `-- playbooks - `-- dev - |-- app_python - | `-- main.yaml - `-- app_go - `-- main.yaml - ``` - -### Guidelines - -- Use proper Markdown formatting and structure for documentation files. -- Organize files within the lab folder with suitable naming conventions. 
-- Create pull requests (PRs) as needed: from your fork to the main branch of this repository, and from your fork's branch to your fork's master branch. -- Follow the suggested structure for your Ansible roles, tasks, and templates. -- Utilize Ansible best practices such as grouping tasks with blocks, applying tags, and separating roles logically. - -> Note: Apply diligence to your Ansible implementation, follow best practices, and clearly document your work to achieve the best results in this lab assignment. diff --git a/lab7.md b/lab7.md deleted file mode 100644 index 48e65eb202..0000000000 --- a/lab7.md +++ /dev/null @@ -1,59 +0,0 @@ -# Lab 7: Monitoring and Logging - -## Overview - -In this lab, you will become familiar with a logging stack that includes Promtail, Loki, and Grafana. Your goal is to create a Docker Compose configuration and configuration files to set up this logging stack. - -## Task 1: Logging Stack Setup - -**6 Points:** - -1. Study the Logging Stack: - - Begin by researching the components of the logging stack: - - [Grafana Webinar: Loki Getting Started](https://grafana.com/go/webinar/loki-getting-started/) - - [Loki Overview](https://grafana.com/docs/loki/latest/overview/) - - [Loki GitHub Repository](https://github.com/grafana/loki) - -2. Create a Monitoring Folder: - - Start by creating a new folder named `monitoring` in your project directory. - -3. Docker Compose Configuration: - - Inside the `monitoring` folder, prepare a `docker-compose.yml` file that defines the entire logging stack along with your application. - - To assist you in this task, refer to these resources for sample Docker Compose configurations: - - [Example Docker Compose Configuration from Loki Repository](https://github.com/grafana/loki/blob/main/production/docker-compose.yaml) - - [Promtail Configuration Example](https://github.com/black-rosary/loki-nginx/blob/master/promtail/promtail.yml) (Adapt it as needed) - -4. Testing: - - Verify that the configured logging stack and your application work as expected. - -## Task 2: Documentation and Reporting - -**4 Points:** - -1. Logging Stack Report: - - Create a new file named `LOGGING.md` to document how the logging stack you've set up functions. - - Provide detailed explanations of each component's role within the stack. - -2. Screenshots: - - Capture screenshots that demonstrate the successful operation of your logging stack. - - Include these screenshots in your `LOGGING.md` report for reference. - -## Bonus Task: Additional Configuration - -**2.5 Points:** - -1. Integrating Your Extra App: - - Extend the `docker-compose.yml` configuration to include your additional application. - -2. Configure Stack for Comprehensive Logging: - - Modify the logging stack's configuration to collect logs from all containers defined in the `docker-compose.yml`. - - Include screenshots in your `LOGGING.md` report to demonstrate your success. - -### Guidelines - -- Ensure that your documentation in `LOGGING.md` is well-structured and comprehensible. -- Follow proper naming conventions for files and folders. -- Use code blocks and Markdown formatting where appropriate. -- Create pull requests (PRs) as needed: from your fork to the main branch of this repository, and from your fork's branch to your fork's master branch. - -> Note: Thoroughly document your work, and ensure the logging stack functions correctly. Utilize the bonus points opportunity to enhance your understanding and the completeness of your setup. 
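To make the stack's shape concrete, a pared-down compose sketch of the three components plus your app; image tags, ports, the app image name, and the promtail config path are all assumptions to adapt from the linked examples:

```yaml
# monitoring/docker-compose.yml: minimal shape of the logging stack
services:
  app:
    image: your-dockerhub-user/app_python:latest  # placeholder image name
  loki:
    image: grafana/loki:2.9.4
    ports:
      - "3100:3100"
  promtail:
    image: grafana/promtail:2.9.4
    volumes:
      - /var/lib/docker/containers:/var/lib/docker/containers:ro  # read container logs from the host
      - ./promtail/promtail.yml:/etc/promtail/config.yml
    command: -config.file=/etc/promtail/config.yml
  grafana:
    image: grafana/grafana:10.3.1
    ports:
      - "3000:3000"
```

Promtail tails the container log files and pushes them to Loki; Grafana then queries Loki as a data source.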
diff --git a/lab8.md b/lab8.md deleted file mode 100644 index 8eb0752ec7..0000000000 --- a/lab8.md +++ /dev/null @@ -1,71 +0,0 @@ -# Lab 8: Monitoring with Prometheus - -## Overview - -In this lab, you will become acquainted with Prometheus, set it up, and configure applications to collect metrics. - -## Task 1: Prometheus Setup - -**6 Points:** - -1. Learn About Prometheus: - - Begin by reading about Prometheus and its fundamental concepts: - - [Prometheus Overview](https://prometheus.io/docs/introduction/overview/) - - [Prometheus Naming Best Practices](https://prometheus.io/docs/practices/naming/) - -2. Integration with Docker Compose: - - Expand your existing `docker-compose.yml` file from the previous lab to include Prometheus. - -3. Prometheus Configuration: - - Configure Prometheus to collect metrics from both Loki and Prometheus containers. - -4. Verify Prometheus Targets: - - Access `http://localhost:9090/targets` to ensure that Prometheus is correctly scraping metrics. - - Capture screenshots that confirm the successful setup and place them in a file named `METRICS.md` within the monitoring folder. - -## Task 2: Dashboard and Configuration Enhancements - -**4 Points:** - -1. Grafana Dashboards: - - Set up dashboards in Grafana for both Loki and Prometheus. - - You can use examples as references: - - [Example Dashboard for Loki](https://grafana.com/grafana/dashboards/13407) - - [Example Dashboard for Prometheus](https://grafana.com/grafana/dashboards/3662) - - Capture screenshots displaying your successful dashboard configurations and include them in `METRICS.md`. - -2. Service Configuration Updates: - - Enhance the configuration of all services in the `docker-compose.yml` file: - - Add log rotation mechanisms. - - Specify memory limits for containers. - - Ensure these changes are documented within your `METRICS.md` file. - -3. Metrics Gathering: - - Extend Prometheus to gather metrics from all services defined in the `docker-compose.yml` file. - -## Bonus Task: Metrics and Health Checks - -**To Earn 2.5 Additional Points:** - -1. Application Metrics: - - Integrate metrics into your applications. You can refer to Python examples like: - - [Monitoring a Synchronous Python Web Application](https://dzone.com/articles/monitoring-your-synchronous-python-web-application) - - [Metrics Monitoring in Python](https://opensource.com/article/18/4/metrics-monitoring-and-python) - -2. Obtain Application Metrics: - - Configure your applications to export metrics. - -3. METRICS.md Update: - - Document your progress with the bonus tasks, including screenshots, in the `METRICS.md` file. - -4. Health Checks: - - Further enhance the `docker-compose.yml` file's service configurations by adding health checks for the containers. - -### Guidelines - -- Maintain a well-structured and comprehensible `METRICS.md` document. -- Adhere to file and folder naming conventions. -- Utilize code blocks and Markdown formatting where appropriate. -- Create pull requests (PRs) as needed: from your fork to the main branch of this repository, and from your fork's branch to your fork's master branch. - -> Note: Ensure thorough documentation of your work, and guarantee that Prometheus correctly collects metrics. Take advantage of the bonus tasks to deepen your understanding and enhance the completeness of your setup. 
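For Task 1 step 3, a minimal scrape configuration sketch; the `loki` target assumes the compose service names from the previous lab, so adjust the targets to your setup:

```yaml
# monitoring/prometheus/prometheus.yml: scrape Prometheus itself and Loki
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: prometheus
    static_configs:
      - targets: ["localhost:9090"]  # Prometheus scrapes itself inside its own container
  - job_name: loki
    static_configs:
      - targets: ["loki:3100"]  # reachable by compose service name on the shared network
```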
diff --git a/lab9.md b/lab9.md deleted file mode 100644 index 5493f042a6..0000000000 --- a/lab9.md +++ /dev/null @@ -1,76 +0,0 @@ -# Lab 9: Introduction to Kubernetes - -## Overview - -In this lab, you will explore Kubernetes, set up a local development environment, and create manifests for your application. - -## Task 1: Kubernetes Setup and Basic Deployment - -**6 Points:** - -1. Learn About Kubernetes: - - Begin by studying the fundamentals of Kubernetes: - - [What is Kubernetes](https://kubernetes.io/docs/concepts/overview/what-is-kubernetes/) - - [Kubernetes Components](https://kubernetes.io/docs/concepts/overview/components/) - -2. Install Kubernetes Tools: - - Install `kubectl` and `minikube`, essential tools for managing Kubernetes. - - [Kubernetes Tools](https://kubernetes.io/docs/tasks/tools/) - -3. Deploy Your Application: - - Deploy your application within the Minikube cluster using the `kubectl create` command. Create a `Deployment` resource for your app. - - [Example of Creating a Deployment](https://kubernetes.io/docs/tutorials/hello-minikube/#create-a-deployment) - - [Deployment Overview](https://kubernetes.io/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro/) - -4. Access Your Application: - - Make your application accessible from outside the Kubernetes virtual network. Achieve this by creating a `Service` resource. - - [Example of Creating a Service](https://kubernetes.io/docs/tutorials/hello-minikube/#create-a-service) - - [Service Overview](https://kubernetes.io/docs/tutorials/kubernetes-basics/expose/expose-intro/) - -5. Create a Kubernetes Folder: - - Establish a `k8s` folder within your repository. - - Create a `README.md` report within this folder and include the output of the `kubectl get pods,svc` command. - -6. Cleanup: - - Remove the `Deployment` and `Service` resources that you created, maintaining a tidy Kubernetes environment. - -## Task 2: Declarative Kubernetes Manifests - -**4 Points:** - -1. Manifest Files for Your Application: - - As a more efficient and structured approach, employ configuration files to deploy your application. - - Create a `deployment.yml` manifest file that describes your app's deployment, specifying at least 3 replicas. - - [Kubernetes Deployment](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/) - - [Declarative Management of Kubernetes Objects Using Configuration Files](https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/) - -2. Service Manifest: - - Develop a `service.yml` manifest file for your application. - -3. Manifest Files in `k8s` Folder: - - Store these manifest files in the `k8s` folder of your repository. - - Additionally, provide the output of the `kubectl get pods,svc` command in the `README.md` report. - - Include the output of the `minikube service --all` command and the result from your browser, with a screenshot demonstrating that the IP matches the output of `minikube service --all`. - -## Bonus Task: Additional Configuration and Ingress - -**To Earn 2.5 Additional Points:** - -1. Manifests for Extra App: - - Create `deployment` and `service` manifests for an additional application. - -2. Ingress Manifests: - - Construct [Ingress manifests](https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/) for your applications. - -3. Application Availability Check: - - Utilize `curl` or a similar tool to verify the availability of your applications. Include the output in the report. 
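For the declarative manifests in Task 2, a bare-bones sketch; the image and resource names are placeholders, and the labels must match between the Deployment selector and the Service:

```yaml
# k8s/deployment.yml: three replicas of the app (image name is a placeholder)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-python
spec:
  replicas: 3
  selector:
    matchLabels:
      app: app-python
  template:
    metadata:
      labels:
        app: app-python
    spec:
      containers:
        - name: app-python
          image: your-dockerhub-user/app_python:latest
          ports:
            - containerPort: 5000
---
# k8s/service.yml: expose the pods outside the cluster via NodePort
apiVersion: v1
kind: Service
metadata:
  name: app-python
spec:
  type: NodePort
  selector:
    app: app-python
  ports:
    - port: 5000
      targetPort: 5000
```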
- -**Guidelines:** - -- Maintain a clear and well-structured `README.md` document. -- Ensure that all required components are included. -- Adhere to file and folder naming conventions. -- Create and participate in PRs to facilitate the peer review process. -- Create pull requests (PRs) as needed: from your fork to the main branch of this repository, and from your fork's branch to your fork's master branch. - -> Note: Detailed documentation is crucial to ensure that your Kubernetes deployment is fully functional and accessible. Engage with the bonus tasks to further enhance your understanding and application deployment skills. diff --git a/labs/lab01.md b/labs/lab01.md new file mode 100644 index 0000000000..18c9ff6c43 --- /dev/null +++ b/labs/lab01.md @@ -0,0 +1,693 @@ +# Lab 1 โ€” DevOps Info Service: Web Application Development + +![difficulty](https://img.shields.io/badge/difficulty-beginner-success) +![topic](https://img.shields.io/badge/topic-Web%20Development-blue) +![points](https://img.shields.io/badge/points-10%2B2.5-orange) +![languages](https://img.shields.io/badge/languages-Python%20|%20Go-informational) + +> Build a DevOps info service that reports system information and health status. This service will evolve throughout the course into a comprehensive monitoring tool. + +## Overview + +Create a **DevOps Info Service** - a web application providing detailed information about itself and its runtime environment. This foundation will grow throughout the course as you add containerization, CI/CD, monitoring, and persistence. + +**What You'll Learn:** +- Web framework selection and implementation +- System introspection and API design +- Python best practices and documentation +- Foundation for future DevOps tooling + +**Tech Stack:** Python 3.11+ | Flask 3.1 or FastAPI 0.115 + +--- + +## Tasks + +### Task 1 โ€” Python Web Application (6 pts) + +Build a production-ready Python web service with comprehensive system information. + +#### 1.1 Project Structure + +Create this structure: + +``` +app_python/ +โ”œโ”€โ”€ app.py # Main application +โ”œโ”€โ”€ requirements.txt # Dependencies +โ”œโ”€โ”€ .gitignore # Git ignore +โ”œโ”€โ”€ README.md # App documentation +โ”œโ”€โ”€ tests/ # Unit tests (Lab 3) +โ”‚ โ””โ”€โ”€ __init__.py +โ””โ”€โ”€ docs/ # Lab documentation + โ”œโ”€โ”€ LAB01.md # Your lab submission + โ””โ”€โ”€ screenshots/ # Proof of work + โ”œโ”€โ”€ 01-main-endpoint.png + โ”œโ”€โ”€ 02-health-check.png + โ””โ”€โ”€ 03-formatted-output.png +``` + +#### 1.2 Choose Web Framework + +Select and justify your choice: +- **Flask** - Lightweight, easy to learn +- **FastAPI** - Modern, async, auto-documentation +- **Django** - Full-featured, includes ORM + +Document your decision in `app_python/docs/LAB01.md`. 
+ +#### 1.3 Implement Main Endpoint: `GET /` + +Return comprehensive service and system information: + +```json +{ + "service": { + "name": "devops-info-service", + "version": "1.0.0", + "description": "DevOps course info service", + "framework": "Flask" + }, + "system": { + "hostname": "my-laptop", + "platform": "Linux", + "platform_version": "Ubuntu 24.04", + "architecture": "x86_64", + "cpu_count": 8, + "python_version": "3.13.1" + }, + "runtime": { + "uptime_seconds": 3600, + "uptime_human": "1 hour, 0 minutes", + "current_time": "2026-01-07T14:30:00.000Z", + "timezone": "UTC" + }, + "request": { + "client_ip": "127.0.0.1", + "user_agent": "curl/7.81.0", + "method": "GET", + "path": "/" + }, + "endpoints": [ + {"path": "/", "method": "GET", "description": "Service information"}, + {"path": "/health", "method": "GET", "description": "Health check"} + ] +} +``` + +
+๐Ÿ’ก Implementation Hints + +**Get System Information:** +```python +import platform +import socket +from datetime import datetime + +hostname = socket.gethostname() +platform_name = platform.system() +architecture = platform.machine() +python_version = platform.python_version() +``` + +**Calculate Uptime:** +```python +start_time = datetime.now() + +def get_uptime(): + delta = datetime.now() - start_time + seconds = int(delta.total_seconds()) + hours = seconds // 3600 + minutes = (seconds % 3600) // 60 + return { + 'seconds': seconds, + 'human': f"{hours} hours, {minutes} minutes" + } +``` + +**Request Information:** +```python +# Flask +request.remote_addr # Client IP +request.headers.get('User-Agent') # User agent +request.method # HTTP method +request.path # Request path + +# FastAPI +request.client.host +request.headers.get('user-agent') +request.method +request.url.path +``` + +
+ +#### 1.4 Implement Health Check: `GET /health` + +Simple health endpoint for monitoring: + +```json +{ + "status": "healthy", + "timestamp": "2024-01-15T14:30:00.000Z", + "uptime_seconds": 3600 +} +``` + +Return HTTP 200 for healthy status. This will be used for Kubernetes probes in Lab 9. + +
+๐Ÿ’ก Implementation Hints + +```python +# Flask +@app.route('/health') +def health(): + return jsonify({ + 'status': 'healthy', + 'timestamp': datetime.now(timezone.utc).isoformat(), + 'uptime_seconds': get_uptime()['seconds'] + }) + +# FastAPI +@app.get("/health") +def health(): + return { + 'status': 'healthy', + 'timestamp': datetime.now(timezone.utc).isoformat(), + 'uptime_seconds': get_uptime()['seconds'] + } +``` + +
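For context on why `/health` should stay cheap and dependency-free: in Lab 9 a Kubernetes probe will poll it on a schedule, and a slow or failing response gets the container restarted. A hedged sketch of such a probe (port and timings are illustrative only):

```yaml
# Container spec excerpt: Kubernetes polls GET /health and restarts the container on repeated failure
livenessProbe:
  httpGet:
    path: /health
    port: 5000
  initialDelaySeconds: 5
  periodSeconds: 10
```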
+ +#### 1.5 Configuration + +Make your app configurable via environment variables: + +```python +import os + +HOST = os.getenv('HOST', '0.0.0.0') +PORT = int(os.getenv('PORT', 5000)) +DEBUG = os.getenv('DEBUG', 'False').lower() == 'true' +``` + +**Test:** +```bash +python app.py # Default: 0.0.0.0:5000 +PORT=8080 python app.py # Custom port +HOST=127.0.0.1 PORT=3000 python app.py +``` + +--- + +### Task 2 โ€” Documentation & Best Practices (4 pts) + +#### 2.1 Application README (`app_python/README.md`) + +Create user-facing documentation: + +**Required Sections:** +1. **Overview** - What the service does +2. **Prerequisites** - Python version, dependencies +3. **Installation** + ```bash + python -m venv venv + source venv/bin/activate + pip install -r requirements.txt + ``` +4. **Running the Application** + ```bash + python app.py + # Or with custom config + PORT=8080 python app.py + ``` +5. **API Endpoints** + - `GET /` - Service and system information + - `GET /health` - Health check +6. **Configuration** - Environment variables table + +#### 2.2 Best Practices + +Implement these in your code: + +**1. Clean Code Organization** +- Clear function names +- Proper imports grouping +- Comments only where needed +- Follow PEP 8 + +
+๐Ÿ’ก Example Structure + +```python +""" +DevOps Info Service +Main application module +""" +import os +import socket +import platform +from datetime import datetime, timezone +from flask import Flask, jsonify, request + +app = Flask(__name__) + +# Configuration +HOST = os.getenv('HOST', '0.0.0.0') +PORT = int(os.getenv('PORT', 5000)) + +# Application start time +START_TIME = datetime.now(timezone.utc) + +def get_system_info(): + """Collect system information.""" + return { + 'hostname': socket.gethostname(), + 'platform': platform.system(), + 'architecture': platform.machine(), + 'python_version': platform.python_version() + } + +@app.route('/') +def index(): + """Main endpoint - service and system information.""" + # Implementation +``` + +
+ +**2. Error Handling** + +
+๐Ÿ’ก Implementation + +```python +@app.errorhandler(404) +def not_found(error): + return jsonify({ + 'error': 'Not Found', + 'message': 'Endpoint does not exist' + }), 404 + +@app.errorhandler(500) +def internal_error(error): + return jsonify({ + 'error': 'Internal Server Error', + 'message': 'An unexpected error occurred' + }), 500 +``` + +
+ +**3. Logging** + +
+๐Ÿ’ก Implementation
+
+```python
+import logging
+
+logging.basicConfig(
+    level=logging.INFO,
+    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s'
+)
+logger = logging.getLogger(__name__)
+
+logger.info('Application starting...')
+
+# `request` only exists inside a request context, so log it from a view
+# function or a @app.before_request hook, not at module level:
+logger.debug(f'Request: {request.method} {request.path}')
+```
+
+

+ +**4. Dependencies (`requirements.txt`)** + +```txt +# Web Framework +Flask==3.1.0 +# or +fastapi==0.115.0 +uvicorn[standard]==0.32.0 # Includes performance extras +``` + +Pin exact versions for reproducibility. + +**5. Git Ignore (`.gitignore`)** + +```gitignore +# Python +__pycache__/ +*.py[cod] +venv/ +*.log + +# IDE +.vscode/ +.idea/ + +# OS +.DS_Store +``` + +#### 2.3 Lab Submission (`app_python/docs/LAB01.md`) + +Document your implementation: + +**Required Sections:** +1. **Framework Selection** + - Your choice and why + - Comparison table with alternatives +2. **Best Practices Applied** + - List practices with code examples + - Explain importance of each +3. **API Documentation** + - Request/response examples + - Testing commands +4. **Testing Evidence** + - Screenshots showing endpoints work + - Terminal output +5. **Challenges & Solutions** + - Problems encountered + - How you solved them + +**Required Screenshots:** +- Main endpoint showing complete JSON +- Health check response +- Formatted/pretty-printed output + +#### 2.4 GitHub Community Engagement + +**Objective:** Explore GitHub's social features that support collaboration and discovery. + +**Actions Required:** +1. **Star** the course repository +2. **Star** the [simple-container-com/api](https://github.com/simple-container-com/api) project โ€” a promising open-source tool for container management +3. **Follow** your professor and TAs on GitHub: + - Professor: [@Cre-eD](https://github.com/Cre-eD) + - TA: [@marat-biriushev](https://github.com/marat-biriushev) + - TA: [@pierrepicaud](https://github.com/pierrepicaud) +4. **Follow** at least 3 classmates from the course + +**Document in LAB01.md:** + +Add a "GitHub Community" section (after Challenges & Solutions) with 1-2 sentences explaining: +- Why starring repositories matters in open source +- How following developers helps in team projects and professional growth + +
+๐Ÿ’ก GitHub Social Features + +**Why Stars Matter:** + +**Discovery & Bookmarking:** +- Stars help you bookmark interesting projects for later reference +- Star count indicates project popularity and community trust +- Starred repos appear in your GitHub profile, showing your interests + +**Open Source Signal:** +- Stars encourage maintainers (shows appreciation) +- High star count attracts more contributors +- Helps projects gain visibility in GitHub search and recommendations + +**Professional Context:** +- Shows you follow best practices and quality projects +- Indicates awareness of industry tools and trends + +**Why Following Matters:** + +**Networking:** +- See what other developers are working on +- Discover new projects through their activity +- Build professional connections beyond the classroom + +**Learning:** +- Learn from others' code and commits +- See how experienced developers solve problems +- Get inspiration for your own projects + +**Collaboration:** +- Stay updated on classmates' work +- Easier to find team members for future projects +- Build a supportive learning community + +**Career Growth:** +- Follow thought leaders in your technology stack +- See trending projects in real-time +- Build visibility in the developer community + +**GitHub Best Practices:** +- Star repos you find useful (not spam) +- Follow developers whose work interests you +- Engage meaningfully with the community +- Your GitHub activity shows employers your interests and involvement + +
+ +--- + +## Bonus Task โ€” Compiled Language (2.5 pts) + +Implement the same service in a compiled language to prepare for multi-stage Docker builds (Lab 2). + +**Choose One:** +- **Go** (Recommended) - Small binaries, fast compilation +- **Rust** - Memory safety, modern features +- **Java/Spring Boot** - Enterprise standard +- **C#/ASP.NET Core** - Cross-platform .NET + +**Structure:** + +``` +app_go/ (or app_rust, app_java, etc.) +โ”œโ”€โ”€ main.go +โ”œโ”€โ”€ go.mod +โ”œโ”€โ”€ README.md +โ””โ”€โ”€ docs/ + โ”œโ”€โ”€ LAB01.md # Implementation details + โ”œโ”€โ”€ GO.md # Language justification + โ””โ”€โ”€ screenshots/ +``` + +**Requirements:** +- Same two endpoints: `/` and `/health` +- Same JSON structure +- Document build process +- Compare binary size to Python + +
+๐Ÿ’ก Go Example Skeleton
+
+```go
+package main
+
+import (
+    "encoding/json"
+    "log"
+    "net/http"
+    "os"
+    "runtime"
+    "time"
+)
+
+type Service struct {
+    Name    string `json:"name"`
+    Version string `json:"version"`
+}
+
+type System struct {
+    Platform     string `json:"platform"`
+    Architecture string `json:"architecture"`
+    CPUCount     int    `json:"cpu_count"`
+}
+
+type Runtime struct {
+    UptimeSeconds int `json:"uptime_seconds"`
+}
+
+type Request struct {
+    Method string `json:"method"`
+    Path   string `json:"path"`
+}
+
+type ServiceInfo struct {
+    Service Service `json:"service"`
+    System  System  `json:"system"`
+    Runtime Runtime `json:"runtime"`
+    Request Request `json:"request"`
+}
+
+var startTime = time.Now()
+
+func mainHandler(w http.ResponseWriter, r *http.Request) {
+    info := ServiceInfo{
+        Service: Service{
+            Name:    "devops-info-service",
+            Version: "1.0.0",
+        },
+        System: System{
+            Platform:     runtime.GOOS,
+            Architecture: runtime.GOARCH,
+            CPUCount:     runtime.NumCPU(),
+        },
+        // ... implement rest (Runtime and Request stay zero values until you do)
+    }
+
+    w.Header().Set("Content-Type", "application/json")
+    json.NewEncoder(w).Encode(info)
+}
+
+func healthHandler(w http.ResponseWriter, r *http.Request) {
+    w.Header().Set("Content-Type", "application/json")
+    json.NewEncoder(w).Encode(map[string]any{
+        "status":         "healthy",
+        "uptime_seconds": int(time.Since(startTime).Seconds()),
+    })
+}
+
+func main() {
+    http.HandleFunc("/", mainHandler)
+    http.HandleFunc("/health", healthHandler)
+
+    port := os.Getenv("PORT")
+    if port == "" {
+        port = "8080"
+    }
+
+    log.Fatal(http.ListenAndServe(":"+port, nil))
+}
+```
+
+

+ +--- + +## How to Submit + +1. **Create Branch:** + ```bash + git checkout -b lab01 + ``` + +2. **Commit Work:** + ```bash + git add app_python/ + git commit -m "feat: implement lab01 devops info service" + git push -u origin lab01 + ``` + +3. **Create Pull Requests:** + - **PR #1:** `your-fork:lab01` โ†’ `course-repo:master` + - **PR #2:** `your-fork:lab01` โ†’ `your-fork:master` + +4. **Verify:** + - All files present + - Screenshots included + - Documentation complete + +--- + +## Acceptance Criteria + +### Main Tasks (10 points) + +**Application Functionality (3 pts):** +- [ ] Service runs without errors +- [ ] `GET /` returns all required fields: + - [ ] Service metadata (name, version, description, framework) + - [ ] System info (hostname, platform, architecture, CPU, Python version) + - [ ] Runtime info (uptime, current time, timezone) + - [ ] Request info (client IP, user agent, method, path) + - [ ] Endpoints list +- [ ] `GET /health` returns status and uptime +- [ ] Configurable via environment variables (PORT, HOST) + +**Code Quality (2 pts):** +- [ ] Clean code structure +- [ ] PEP 8 compliant +- [ ] Error handling implemented +- [ ] Logging configured + +**Documentation (3 pts):** +- [ ] `app_python/README.md` complete with all sections +- [ ] `app_python/docs/LAB01.md` includes: + - [ ] Framework justification + - [ ] Best practices documentation + - [ ] API examples + - [ ] Testing evidence + - [ ] Challenges solved + - [ ] GitHub Community section (why stars/follows matter) +- [ ] All 3 required screenshots present +- [ ] Course repository starred +- [ ] simple-container-com/api repository starred +- [ ] Professor and TAs followed on GitHub +- [ ] At least 3 classmates followed on GitHub + +**Configuration (2 pts):** +- [ ] `requirements.txt` with pinned versions +- [ ] `.gitignore` properly configured +- [ ] Environment variables working + +### Bonus Task (2.5 points) + +- [ ] Compiled language app implements both endpoints +- [ ] Same JSON structure as Python version +- [ ] `app_/README.md` with build/run instructions +- [ ] `app_/docs/GO.md` with language justification +- [ ] `app_/docs/LAB01.md` with implementation details +- [ ] Screenshots showing compilation and execution + +--- + +## Rubric + +| Criteria | Points | Description | +|----------|--------|-------------| +| **Functionality** | 3 pts | Both endpoints work with complete, correct data | +| **Code Quality** | 2 pts | Clean, organized, follows Python standards | +| **Documentation** | 3 pts | Complete README and lab submission docs | +| **Configuration** | 2 pts | Dependencies, environment vars, .gitignore | +| **Bonus** | 2.5 pts | Compiled language implementation | +| **Total** | 12.5 pts | 10 pts required + 2.5 pts bonus | + +**Grading Scale:** +- **10/10:** Perfect implementation, excellent documentation +- **8-9/10:** All works, good docs, minor improvements possible +- **6-7/10:** Core functionality present, basic documentation +- **<6/10:** Missing features or documentation, needs revision + +--- + +## Resources + +
+๐Ÿ“š Python Web Frameworks + +- [Flask 3.1 Documentation](https://flask.palletsprojects.com/en/latest/) +- [Flask Quickstart](https://flask.palletsprojects.com/en/latest/quickstart/) +- [FastAPI Documentation](https://fastapi.tiangolo.com/) +- [FastAPI Tutorial](https://fastapi.tiangolo.com/tutorial/first-steps/) +- [Django 5.1 Documentation](https://docs.djangoproject.com/en/5.1/) + +
+ +
+๐Ÿ Python Best Practices + +- [PEP 8 Style Guide](https://pep8.org/) +- [Python Logging Tutorial](https://docs.python.org/3/howto/logging.html) +- [Python platform module](https://docs.python.org/3/library/platform.html) +- [Python socket module](https://docs.python.org/3/library/socket.html) + +
+ +
+๐Ÿ”ง Compiled Languages (Bonus) + +- [Go Web Development](https://golang.org/doc/articles/wiki/) +- [Go net/http Package](https://pkg.go.dev/net/http) +- [Rust Web Frameworks](https://www.arewewebyet.org/) +- [Spring Boot Quickstart](https://spring.io/quickstart) +- [ASP.NET Core Tutorial](https://docs.microsoft.com/aspnet/core/) + +
+ +
+๐Ÿ› ๏ธ Development Tools + +- [Postman](https://www.postman.com/) - API testing +- [HTTPie](https://httpie.io/) - Command-line HTTP client +- [curl](https://curl.se/) - Data transfer tool +- [jq](https://stedolan.github.io/jq/) - JSON processor + +
+ +--- + +## Looking Ahead + +This service evolves throughout the course: + +- **Lab 2:** Containerize with Docker, multi-stage builds +- **Lab 3:** Add unit tests and CI/CD pipeline +- **Lab 8:** Add `/metrics` endpoint for Prometheus +- **Lab 9:** Deploy to Kubernetes using `/health` probes +- **Lab 12:** Add `/visits` endpoint with file persistence +- **Lab 13:** Multi-environment deployment with GitOps + +--- + +**Good luck!** ๐Ÿš€ + +> **Remember:** Keep it simple, write clean code, and document thoroughly. This foundation will carry through all 16 labs! diff --git a/labs/lab02.md b/labs/lab02.md new file mode 100644 index 0000000000..1c3e032f89 --- /dev/null +++ b/labs/lab02.md @@ -0,0 +1,366 @@ +# Lab 2 โ€” Docker Containerization + +![difficulty](https://img.shields.io/badge/difficulty-beginner-success) +![topic](https://img.shields.io/badge/topic-Containerization-blue) +![points](https://img.shields.io/badge/points-10%2B2.5-orange) +![tech](https://img.shields.io/badge/tech-Docker-informational) + +> Containerize your Python app from Lab 1 using Docker best practices and publish it to Docker Hub. + +## Overview + +Take your Lab 1 application and package it into a Docker container. Learn image optimization, security basics, and the Docker workflow used in production. + +**What You'll Learn:** +- Writing production-ready Dockerfiles +- Docker best practices and security +- Image optimization techniques +- Docker Hub workflow + +**Tech Stack:** Docker 25+ | Python 3.13-slim | Multi-stage builds + +--- + +## Tasks + +### Task 1 โ€” Create Dockerfile (4 pts) + +**Objective:** Write a Dockerfile that containerizes your Python app following best practices. + +Create `app_python/Dockerfile` with these requirements: + +**Must Have:** +- Non-root user (mandatory) +- Specific base image version (e.g., `python:3.13-slim` or `python:3.12-slim`) +- Only copy necessary files +- Proper layer ordering +- `.dockerignore` file + +**Your app should work the same way in the container as it did locally.** + +
+๐Ÿ’ก Dockerfile Concepts & Resources + +**Key Dockerfile Instructions to Research:** +- `FROM` - Choose your base image (look at python:3.13-slim, python:3.12-slim, python:3.13-alpine) +- `RUN` - Execute commands (creating users, installing packages) +- `WORKDIR` - Set working directory +- `COPY` - Copy files into the image +- `USER` - Switch to non-root user +- `EXPOSE` - Document which port your app uses +- `CMD` - Define how to start your application + +**Critical Concepts:** +- **Layer Caching**: Why does the order of COPY commands matter? +- **Non-root User**: How do you create and switch to a non-root user? +- **Base Image Selection**: What's the difference between slim, alpine, and full images? +- **Dependency Installation**: Why copy requirements.txt separately from application code? + +**Resources:** +- [Dockerfile Reference](https://docs.docker.com/reference/dockerfile/) +- [Best Practices Guide](https://docs.docker.com/build/building/best-practices/) +- [Python Image Variants](https://hub.docker.com/_/python) - Use 3.13-slim or 3.12-slim + +**Think About:** +- What happens if you copy all files before installing dependencies? +- Why shouldn't you run as root? +- How does layer caching speed up rebuilds? + +
+ +
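+If it helps to see the concepts above in one place, here is a minimal sketch (not a complete solution; the file names `requirements.txt` and `app.py` are assumptions from Lab 1, so adapt them to your app):
+
+```dockerfile
+# Pin a specific, slim base image
+FROM python:3.13-slim
+
+# Create an unprivileged user before copying anything
+RUN useradd --create-home appuser
+
+WORKDIR /app
+
+# Copy requirements first: this layer stays cached until requirements.txt changes
+COPY requirements.txt .
+RUN pip install --no-cache-dir -r requirements.txt
+
+# Copy only the application code, not the whole build context
+COPY app.py .
+
+USER appuser
+EXPOSE 5000
+CMD ["python", "app.py"]
+```
+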
+๐Ÿ’ก .dockerignore Concepts + +**Purpose:** Prevent unnecessary files from being sent to Docker daemon during build (faster builds, smaller context). + +**What Should You Exclude?** +Think about what doesn't need to be in your container: +- Development artifacts (like Python's `__pycache__`, `*.pyc`) +- Version control files (`.git` directory) +- IDE configuration files +- Virtual environments (`venv/`, `.venv/`) +- Documentation that's not needed at runtime +- Test files (if not running tests in container) + +**Key Question:** Why does excluding files from the build context matter for build speed? + +**Resources:** +- [.dockerignore Documentation](https://docs.docker.com/engine/reference/builder/#dockerignore-file) +- Look at your `.gitignore` for inspiration - many patterns overlap + +**Exercise:** Start minimal and add exclusions as needed, rather than copying a huge list you don't understand. + +
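+As a starting point for that exercise, a deliberately minimal first pass might be (extend it only as your project grows):
+
+```gitignore
+# Version control and editor config
+.git
+.vscode/
+
+# Python artifacts rebuilt inside the image anyway
+__pycache__/
+*.py[cod]
+venv/
+
+# Secrets and logs never belong in an image
+.env
+*.log
+```
+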
+ +**Test Your Container:** + +You should be able to: +1. Build your image using the `docker build` command +2. Run a container from your image with proper port mapping +3. Access your application endpoints from the host machine + +Verify that your application works the same way in the container as it did locally. + +--- + +### Task 2 โ€” Docker Hub (2 pts) + +**Objective:** Publish your image to Docker Hub. + +**Requirements:** +1. Create a Docker Hub account (if you don't have one) +2. Tag your image with your Docker Hub username +3. Authenticate with Docker Hub +4. Push your image to the registry +5. Verify the image is publicly accessible + +**Documentation Required:** +- Terminal output showing successful push +- Docker Hub repository URL +- Explanation of your tagging strategy + +
+๐Ÿ’ก Docker Hub Resources + +**Useful Commands:** +- `docker tag` - Tag images for registry push +- `docker login` - Authenticate with Docker Hub +- `docker push` - Upload image to registry +- `docker pull` - Download image from registry + +**Resources:** +- [Docker Hub Quickstart](https://docs.docker.com/docker-hub/quickstart/) +- [Docker Tag Reference](https://docs.docker.com/reference/cli/docker/image/tag/) +- [Best Practices for Tagging](https://docs.docker.com/build/building/best-practices/#tagging) + +
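+Putting Tasks 1 and 2 together, the end-to-end flow looks roughly like this (a sketch: the image name `devops-info-service`, port 5000, and the tag are placeholders for your own values):
+
+```bash
+# Task 1: build the image and smoke-test it locally
+docker build -t devops-info-service ./app_python
+docker run -d -p 5000:5000 --name info devops-info-service
+curl http://localhost:5000/health
+
+# Task 2: tag for your Docker Hub namespace, authenticate, and push
+docker tag devops-info-service <your-dockerhub-username>/devops-info-service:1.0.0
+docker login
+docker push <your-dockerhub-username>/devops-info-service:1.0.0
+```
+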
+ +--- + +### Task 3 โ€” Documentation (4 pts) + +**Objective:** Document your Docker implementation with focus on understanding and decisions. + +#### 3.1 Update `app_python/README.md` + +Add a **Docker** section explaining how to use your containerized application. Include command patterns (not exact commands) for: +- Building the image locally +- Running a container +- Pulling from Docker Hub + +#### 3.2 Create `app_python/docs/LAB02.md` + +Document your implementation with these sections: + +**Required Sections:** + +1. **Docker Best Practices Applied** + - List each practice you implemented (non-root user, layer caching, .dockerignore, etc.) + - Explain WHY each matters (not just what it does) + - Include relevant Dockerfile snippets with explanations + +2. **Image Information & Decisions** + - Base image chosen and justification (why this specific version?) + - Final image size and your assessment + - Layer structure explanation + - Optimization choices you made + +3. **Build & Run Process** + - Complete terminal output from your build process + - Terminal output showing container running + - Terminal output from testing endpoints (curl/httpie) + - Docker Hub repository URL + +4. **Technical Analysis** + - Why does your Dockerfile work the way it does? + - What would happen if you changed the layer order? + - What security considerations did you implement? + - How does .dockerignore improve your build? + +5. **Challenges & Solutions** + - Issues encountered during implementation + - How you debugged and resolved them + - What you learned from the process + +--- + +## Bonus Task โ€” Multi-Stage Build (2.5 pts) + +**Objective:** Containerize your compiled language app (from Lab 1 bonus) using multi-stage builds. + +**Why Multi-Stage?** Separate build environment from runtime โ†’ smaller final image. + +**Example Flow:** +1. **Stage 1 (Builder):** Compile the app (large image with compilers) +2. **Stage 2 (Runtime):** Copy only the binary (small image, no build tools) + +
+๐Ÿ’ก Multi-Stage Build Concepts + +**The Problem:** Compiled language images include the entire compiler/SDK in the final image (huge!). + +**The Solution:** Use multiple `FROM` statements: +- **Stage 1 (Builder)**: Use full SDK image, compile your application +- **Stage 2 (Runtime)**: Use minimal base image, copy only the compiled binary + +**Key Concepts to Research:** +- How to name build stages (`AS builder`) +- How to copy files from previous stages (`COPY --from=builder`) +- Choosing runtime base images (alpine, distroless, scratch) +- Static vs dynamic compilation (affects what base image you can use) + +**Questions to Explore:** +- What's the size difference between your builder and final image? +- Why can't you just use the builder image as your final image? +- What security benefits come from smaller images? +- Can you use `FROM scratch`? Why or why not? + +**Resources:** +- [Multi-Stage Builds Documentation](https://docs.docker.com/build/building/multi-stage/) +- [Distroless Base Images](https://github.com/GoogleContainerTools/distroless) +- Language-specific: Search "Go static binary Docker" or "Rust alpine Docker" + +**Challenge:** Try to get your final image under 20MB. + +
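+A sketch of the two-stage pattern for Go (assuming the Lab 1 bonus layout with `main.go` and `go.mod` at the root of `app_go/`; the Go and base image versions here are examples, not requirements):
+
+```dockerfile
+# Stage 1: build with the full Go toolchain
+FROM golang:1.23 AS builder
+WORKDIR /src
+COPY go.mod ./
+RUN go mod download
+COPY . .
+# CGO_ENABLED=0 yields a static binary that runs on minimal base images
+RUN CGO_ENABLED=0 go build -o /app .
+
+# Stage 2: ship only the binary, no compiler or shell
+FROM gcr.io/distroless/static-debian12
+COPY --from=builder /app /app
+EXPOSE 8080
+ENTRYPOINT ["/app"]
+```
+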
+ +**Requirements:** +- Multi-stage Dockerfile in `app_go/` (or your chosen language) +- Working containerized application +- Documentation in `app_go/docs/LAB02.md` explaining: + - Your multi-stage build strategy + - Size comparison with analysis (builder vs final image) + - Why multi-stage builds matter for compiled languages + - Terminal output showing build process and image sizes + - Technical explanation of each stage's purpose + +**Bonus Points Given For:** +- Significant size reduction achieved with clear metrics +- Deep understanding of multi-stage build benefits +- Analysis of security implications (smaller attack surface) +- Explanation of trade-offs and decisions made + +--- + +## How to Submit + +1. **Create Branch:** Create a new branch called `lab02` + +2. **Commit Work:** + - Add your changes (app_python/ directory with Dockerfile, .dockerignore, updated docs) + - Commit with a descriptive message following conventional commits format + - Push to your fork + +3. **Create Pull Requests:** + - **PR #1:** `your-fork:lab02` โ†’ `course-repo:master` + - **PR #2:** `your-fork:lab02` โ†’ `your-fork:master` + +--- + +## Acceptance Criteria + +### Main Tasks (10 points) + +**Dockerfile (4 pts):** +- [ ] Dockerfile exists in `app_python/` +- [ ] Uses specific base image version +- [ ] Runs as non-root user (USER directive) +- [ ] Proper layer ordering (dependencies before code) +- [ ] Only copies necessary files +- [ ] `.dockerignore` file present +- [ ] Image builds successfully +- [ ] Container runs and app works + +**Docker Hub (2 pts):** +- [ ] Image pushed to Docker Hub +- [ ] Image is publicly accessible +- [ ] Correct tagging used +- [ ] Can pull and run from Docker Hub + +**Documentation (4 pts):** +- [ ] `app_python/README.md` has Docker section with command patterns +- [ ] `app_python/docs/LAB02.md` complete with: + - [ ] Best practices explained with WHY (not just what) + - [ ] Image information and justifications for choices + - [ ] Terminal output from build, run, and testing + - [ ] Technical analysis demonstrating understanding + - [ ] Challenges and solutions documented + - [ ] Docker Hub repository URL provided + +### Bonus Task (2.5 points) + +- [ ] Multi-stage Dockerfile for compiled language app +- [ ] Working containerized application +- [ ] Documentation in `app_/docs/LAB02.md` with: + - [ ] Multi-stage strategy explained + - [ ] Terminal output showing image sizes (builder vs final) + - [ ] Analysis of size reduction and why it matters + - [ ] Technical explanation of each stage + - [ ] Security benefits discussed + +--- + +## Rubric + +| Criteria | Points | Description | +|----------|--------|-------------| +| **Dockerfile** | 4 pts | Correct, secure, optimized | +| **Docker Hub** | 2 pts | Successfully published | +| **Documentation** | 4 pts | Complete and clear | +| **Bonus** | 2.5 pts | Multi-stage implementation | +| **Total** | 12.5 pts | 10 pts required + 2.5 pts bonus | + +**Grading:** +- **10/10:** Perfect Dockerfile, deep understanding demonstrated, excellent analysis +- **8-9/10:** Working container, good practices, solid understanding shown +- **6-7/10:** Container works, basic security, surface-level explanations +- **<6/10:** Missing requirements, runs as root, copy-paste without understanding + +--- + +## Resources + +
+๐Ÿ“š Docker Documentation + +- [Dockerfile Best Practices](https://docs.docker.com/build/building/best-practices/) +- [Dockerfile Reference](https://docs.docker.com/reference/dockerfile/) +- [Multi-Stage Builds](https://docs.docker.com/build/building/multi-stage/) +- [.dockerignore](https://docs.docker.com/reference/dockerfile/#dockerignore-file) +- [Docker Build Guide](https://docs.docker.com/build/guide/) + +
+ +
+๐Ÿ”’ Security Resources + +- [Docker Security Best Practices](https://docs.docker.com/build/building/best-practices/#security) +- [Snyk Docker Security](https://snyk.io/learn/docker-security-scanning/) +- [Why Non-Root Containers](https://docs.docker.com/build/building/best-practices/#user) +- [Distroless Images](https://github.com/GoogleContainerTools/distroless) - Minimal base images + +
+ +
+๐Ÿ› ๏ธ Tools + +- [Hadolint](https://github.com/hadolint/hadolint) - Dockerfile linter +- [Dive](https://github.com/wagoodman/dive) - Explore image layers +- [Docker Hub](https://hub.docker.com/) - Container registry + +
+ +--- + +## Looking Ahead + +- **Lab 3:** CI/CD will automatically build these Docker images +- **Lab 7-8:** Deploy containers with docker-compose for logging/monitoring +- **Lab 9:** Run these containers in Kubernetes +- **Lab 13:** ArgoCD will deploy containerized apps automatically + +--- + +**Good luck!** ๐Ÿš€ + +> **Remember:** Understanding beats copy-paste. Explain your decisions, not just your actions. Run as non-root or no points! diff --git a/labs/lab03.md b/labs/lab03.md new file mode 100644 index 0000000000..9824e934b3 --- /dev/null +++ b/labs/lab03.md @@ -0,0 +1,931 @@ +# Lab 3 โ€” Continuous Integration (CI/CD) + +![difficulty](https://img.shields.io/badge/difficulty-beginner-success) +![topic](https://img.shields.io/badge/topic-CI/CD-blue) +![points](https://img.shields.io/badge/points-10%2B2.5-orange) +![tech](https://img.shields.io/badge/tech-GitHub%20Actions-informational) + +> Automate your Python app testing and Docker builds with GitHub Actions CI/CD pipeline. + +## Overview + +Take your containerized app from Labs 1-2 and add automated testing and deployment. Learn how CI/CD catches bugs early, ensures code quality, and automates the Docker build/push workflow. + +**What You'll Learn:** +- Writing effective unit tests +- GitHub Actions workflow syntax +- CI/CD best practices (caching, matrix builds, security scanning) +- Automated Docker image publishing +- Continuous integration for multiple applications + +**Tech Stack:** GitHub Actions | pytest 8+ | Python 3.11+ | Snyk | Docker + +**Connection to Previous Labs:** +- **Lab 1:** Test the endpoints you created +- **Lab 2:** Automate the Docker build/push workflow +- **Lab 4+:** This CI pipeline will run for all future labs + +--- + +## Tasks + +### Task 1 โ€” Unit Testing (3 pts) + +**Objective:** Write comprehensive unit tests for your Python application to ensure reliability. + +**Requirements:** + +1. **Choose a Testing Framework** + - Research Python testing frameworks (pytest, unittest, etc.) + - Select one and justify your choice + - Install it in your `requirements.txt` or create `requirements-dev.txt` + +2. **Write Unit Tests** + - Create `app_python/tests/` directory + - Write tests for **all** your endpoints: + - `GET /` - Verify JSON structure and required fields + - `GET /health` - Verify health check response + - Test both successful responses and error cases + - Aim for meaningful test coverage (not just basic smoke tests) + +3. **Run Tests Locally** + - Verify all tests pass locally before CI setup + - Document how to run tests in your README + +
+๐Ÿ’ก Testing Framework Guidance + +**Popular Python Testing Frameworks:** + +**pytest (Recommended):** +- Pros: Simple syntax, powerful fixtures, excellent plugin ecosystem +- Cons: Additional dependency +- Use case: Most modern Python projects + +**unittest:** +- Pros: Built into Python (no extra dependencies) +- Cons: More verbose, less modern features +- Use case: Minimal dependency projects + +**Key Testing Concepts to Research:** +- Test fixtures and setup/teardown +- Mocking external dependencies +- Testing HTTP endpoints (test client usage) +- Test coverage measurement +- Assertions and expected vs actual results + +**What Should You Test?** +- Correct HTTP status codes (200, 404, 500) +- Response data structure (JSON fields present) +- Response data types (strings, integers, etc.) +- Edge cases (invalid requests, missing data) +- Error handling (what happens when things fail?) + +**Questions to Consider:** +- How do you test a Flask/FastAPI app without starting the server? +- Should you test that `hostname` returns your actual hostname, or just that the field exists? +- How do you simulate different client IPs or user agents in tests? + +**Resources:** +- [Pytest Documentation](https://docs.pytest.org/) +- [Flask Testing](https://flask.palletsprojects.com/en/stable/testing/) +- [FastAPI Testing](https://fastapi.tiangolo.com/tutorial/testing/) +- [Python unittest](https://docs.python.org/3/library/unittest.html) + +**Anti-Patterns to Avoid:** +- Testing framework functionality instead of your code +- Tests that always pass regardless of implementation +- Tests with no assertions +- Tests that depend on external services + +
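+To make this concrete, a first pytest file for the Flask variant might look like this (a sketch assuming your app object is importable as `app` from `app.py` and that the root response contains a `system` object; adjust names to your actual JSON):
+
+```python
+# app_python/tests/test_app.py
+from app import app
+
+
+def test_index_returns_expected_fields():
+    client = app.test_client()  # in-process test client, no server needed
+    response = client.get('/')
+    assert response.status_code == 200
+    data = response.get_json()
+    # Assert the field exists, not its machine-specific value
+    assert 'hostname' in data['system']
+
+
+def test_health_reports_healthy():
+    client = app.test_client()
+    response = client.get('/health')
+    assert response.status_code == 200
+    assert response.get_json()['status'] == 'healthy'
+```
+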
+ +**What to Document:** +- Your testing framework choice and why +- Test structure explanation +- How to run tests locally +- Terminal output showing all tests passing + +--- + +### Task 2 โ€” GitHub Actions CI Workflow (4 pts) + +**Objective:** Create a GitHub Actions workflow that automatically tests your code and builds Docker images with proper versioning. + +**Requirements:** + +1. **Create Workflow File** + - Create `.github/workflows/python-ci.yml` in your repository + - Name your workflow descriptively + +2. **Implement Essential CI Steps** + + Your workflow must include these logical stages: + + **a) Code Quality & Testing:** + - Install dependencies + - Run a linter (pylint, flake8, black, ruff, etc.) + - Run your unit tests + + **b) Docker Build & Push with Versioning:** + - Authenticate with Docker Hub + - Build your Docker image + - Tag with proper version strategy (see versioning section below) + - Push to Docker Hub with multiple tags + +3. **Versioning Strategy** + + Choose **one** versioning approach and implement it: + + **Option A: Semantic Versioning (SemVer)** + - Version format: `v1.2.3` (major.minor.patch) + - Use git tags for releases + - Tag images like: `username/app:1.2.3`, `username/app:1.2`, `username/app:latest` + - **When to use:** Traditional software releases with breaking changes + + **Option B: Calendar Versioning (CalVer)** + - Version format: `2024.01.15` or `2024.01` (year.month.day or year.month) + - Based on release date + - Tag images like: `username/app:2024.01`, `username/app:latest` + - **When to use:** Time-based releases, continuous deployment + + **Required:** + - Document which strategy you chose and why + - Implement it in your CI workflow + - Show at least 2 tags per image (e.g., version + latest) + +4. **Workflow Triggers** + - Configure when the workflow runs (push, pull request, etc.) + - Consider which branches should trigger builds + +5. **Testing the Workflow** + - Push your workflow file and verify it runs + - Fix any issues that arise + - Ensure all steps complete successfully + - Verify Docker Hub shows your version tags + +
+๐Ÿ’ก GitHub Actions Concepts + +**Core Concepts to Research:** + +**Workflow Anatomy:** +- `name` - What is your workflow called? +- `on` - When does it run? (push, pull_request, schedule, etc.) +- `jobs` - What work needs to be done? +- `steps` - Individual commands within a job +- `runs-on` - What OS environment? (ubuntu-latest, etc.) + +**Key Questions:** +- Should you run CI on every push, or only on pull requests? +- What happens if tests fail? Should the workflow continue? +- How do you access secrets (like Docker Hub credentials) securely? +- Why might you want multiple jobs vs multiple steps in one job? + +**Python CI Steps Pattern:** +```yaml +# This is a pattern, not exact copy-paste code +# Research the actual syntax and actions needed + +- Set up Python environment +- Install dependencies +- Run linter +- Run tests +``` + +**Docker CI Steps Pattern:** +```yaml +# This is a pattern, not exact copy-paste code +# Research the actual actions and their parameters + +- Log in to Docker Hub +- Extract metadata for tags +- Build and push Docker image +``` + +**Important Concepts:** +- **Actions Marketplace:** Reusable actions (actions/checkout@v4, actions/setup-python@v5, docker/build-push-action@v6) +- **Secrets:** How to store Docker Hub credentials securely +- **Job Dependencies:** Can one job depend on another succeeding? +- **Matrix Builds:** Testing multiple Python versions (optional but good to know) +- **Caching:** Speed up workflows by caching dependencies (we'll add this in Task 3) + +**Resources:** +- [GitHub Actions Documentation](https://docs.github.com/en/actions) +- [Building and Testing Python](https://docs.github.com/en/actions/automating-builds-and-tests/building-and-testing-python) +- [Publishing Docker Images](https://docs.docker.com/ci-cd/github-actions/) +- [GitHub Actions Marketplace](https://github.com/marketplace?type=actions) +- [Workflow Syntax](https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions) + +**Security Best Practices:** +- Never hardcode passwords or tokens in workflow files +- Use GitHub Secrets for sensitive data +- Understand when secrets are exposed to pull requests from forks +- Use `secrets.GITHUB_TOKEN` for GitHub API access (auto-provided) + +**Docker Hub Authentication:** +You'll need to create a Docker Hub access token and add it as a GitHub Secret. Research: +- How to create Docker Hub access tokens +- How to add secrets to your GitHub repository +- How to reference secrets in workflow files (hint: `${{ secrets.NAME }}`) + +
+ +
+๐Ÿ’ก Versioning Strategy Guidance + +**Semantic Versioning (SemVer):** + +**Format:** MAJOR.MINOR.PATCH (e.g., 1.2.3) +- **MAJOR:** Breaking changes (incompatible API changes) +- **MINOR:** New features (backward-compatible) +- **PATCH:** Bug fixes (backward-compatible) + +**Implementation Approaches:** +1. **Manual Git Tags:** Create git tags (v1.0.0) and reference in workflow +2. **Automated from Commits:** Parse conventional commits to bump version +3. **GitHub Releases:** Trigger on release creation + +**Docker Tagging Example:** +- `username/app:1.2.3` (full version) +- `username/app:1.2` (minor version, rolling) +- `username/app:1` (major version, rolling) +- `username/app:latest` (latest stable) + +**Pros:** Clear when breaking changes occur, industry standard for libraries +**Cons:** Requires discipline to follow rules correctly + +--- + +**Calendar Versioning (CalVer):** + +**Common Formats:** +- `YYYY.MM.DD` (e.g., 2024.01.15) - Daily releases +- `YYYY.MM.MICRO` (e.g., 2024.01.0) - Monthly with patch number +- `YYYY.0M` (e.g., 2024.01) - Monthly releases + +**Implementation Approaches:** +1. **Date-based:** Generate from current date in workflow +2. **Git SHA:** Combine with short commit SHA (2024.01-a1b2c3d) +3. **Build Number:** Use GitHub run number (2024.01.42) + +**Docker Tagging Example:** +- `username/app:2024.01` (month version) +- `username/app:2024.01.123` (with build number) +- `username/app:latest` (latest build) + +**Pros:** No ambiguity, good for continuous deployment, easier to remember +**Cons:** Doesn't indicate breaking changes + +--- + +**How to Implement in CI:** + +**Using docker/metadata-action:** +```yaml +# Pattern - research actual syntax +- name: Docker metadata + uses: docker/metadata-action + with: + # Define your tagging strategy here + # Can reference git tags, dates, commit SHAs +``` + +**Manual Tagging:** +```yaml +# Pattern - research actual syntax +- name: Generate version + run: echo "VERSION=$(date +%Y.%m.%d)" >> $GITHUB_ENV + +- name: Build and push + # Use ${{ env.VERSION }} in tags +``` + +**Questions to Consider:** +- How often will you release? (Daily? Per feature? Monthly?) +- Do users need to know about breaking changes explicitly? +- Are you building a library (use SemVer) or a service (CalVer works)? +- How will you track what's in each version? + +**Resources:** +- [Semantic Versioning](https://semver.org/) +- [Calendar Versioning](https://calver.org/) +- [Docker Metadata Action](https://github.com/docker/metadata-action) +- [Conventional Commits](https://www.conventionalcommits.org/) (for automated SemVer) + +
+ +
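+To make the metadata-action approach above concrete, a SemVer setup driven by git tags might look like this (a sketch, not the only way; the image name is a placeholder, and the pattern assumes you push tags like `v1.2.3`):
+
+```yaml
+# A v1.2.3 tag produces 1.2.3 and 1.2; the default branch also gets latest
+- name: Docker metadata
+  id: meta
+  uses: docker/metadata-action@v5
+  with:
+    images: <your-dockerhub-username>/devops-info-service
+    tags: |
+      type=semver,pattern={{version}}
+      type=semver,pattern={{major}}.{{minor}}
+      type=raw,value=latest,enable={{is_default_branch}}
+
+- name: Build and push
+  uses: docker/build-push-action@v6
+  with:
+    context: ./app_python
+    push: true
+    tags: ${{ steps.meta.outputs.tags }}
+```
+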
+๐Ÿ’ก Debugging GitHub Actions + +**Common Issues & How to Debug:** + +**Workflow Won't Trigger:** +- Check your `on:` configuration +- Verify you pushed to the correct branch +- Look at Actions tab for filtering options + +**Steps Failing:** +- Click into the failed step to see full logs +- Check for typos in action names or parameters +- Verify secrets are configured correctly +- Test commands locally first + +**Docker Build Fails:** +- Ensure Dockerfile is in the correct location +- Check context path in build step +- Verify base image exists and is accessible +- Test Docker build locally first + +**Authentication Issues:** +- Verify secret names match exactly (case-sensitive) +- Check that Docker Hub token has write permissions +- Ensure you're using `docker/login-action` correctly + +**Debugging Techniques:** +- Add `run: echo "Debug message"` steps to understand workflow state +- Use `run: env` to see available environment variables +- Check Actions tab for detailed logs +- Enable debug logging (add `ACTIONS_RUNNER_DEBUG` secret = true) + +
+ +**What to Document:** +- Your workflow trigger strategy and reasoning +- Why you chose specific actions from the marketplace +- Your Docker tagging strategy (latest? version tags? commit SHA?) +- Link to successful workflow run in GitHub Actions tab +- Terminal output or screenshot of green checkmark + +--- + +### Task 3 โ€” CI Best Practices & Security (3 pts) + +**Objective:** Optimize your CI workflow and add security scanning. + +**Requirements:** + +1. **Add Status Badge** + - Add a GitHub Actions status badge to your `app_python/README.md` + - The badge should show the current workflow status (passing/failing) + +2. **Implement Dependency Caching** + - Add caching for Python dependencies to speed up workflow + - Measure and document the speed improvement + +3. **Add Security Scanning with Snyk** + - Integrate Snyk vulnerability scanning into your workflow + - Configure it to check for vulnerabilities in your dependencies + - Document any vulnerabilities found and how you addressed them + +4. **Apply CI Best Practices** + - Research and implement at least 3 additional CI best practices + - Document which practices you applied and why they matter + +
+๐Ÿ’ก CI Best Practices Guidance
+
+**Dependency Caching:**
+
+Caching speeds up workflows by reusing previously downloaded dependencies.
+
+**Key Concepts:**
+- What should be cached? (pip packages, Docker layers, etc.)
+- What's the cache key? (based on requirements.txt hash)
+- When does cache become invalid?
+- How much time does caching save?
+
+**Actions to Research:**
+- `actions/cache` for general caching
+- `actions/setup-python` has built-in cache support
+
+**Questions to Explore:**
+- Where are Python packages stored that should be cached?
+- How do you measure cache hit vs cache miss?
+- What happens if requirements.txt changes?
+
+**Status Badges:**
+
+Show workflow status directly in your README.
+
+**Format Pattern:**
+```markdown
+![Workflow Name](https://github.com/username/repo/actions/workflows/workflow-file.yml/badge.svg)
+```
+
+Research how to:
+- Get the correct badge URL for your workflow
+- Make badges clickable (link to Actions tab)
+- Display specific branch status
+
+**CI Best Practices to Consider:**
+
+Research and choose at least 3 to implement:
+
+1. **Fail Fast:** Stop workflow on first failure
+2. **Matrix Builds:** Test multiple Python versions (3.12, 3.13)
+3. **Job Dependencies:** Don't push Docker if tests fail
+4. **Conditional Steps:** Only push on main branch
+5. **Pull Request Checks:** Require passing CI before merge
+6. **Workflow Concurrency:** Cancel outdated workflow runs
+7. **Docker Layer Caching:** Cache Docker build layers
+8. **Environment Variables:** Use env for repeated values
+9. **Secrets Scanning:** Prevent committing secrets
+10. **YAML Validation:** Lint your workflow files
+
+**Resources:**
+- [Usage Limits, Billing, and Administration](https://docs.github.com/en/actions/learn-github-actions/usage-limits-billing-and-administration#usage-limits)
+- [Caching Dependencies](https://docs.github.com/en/actions/using-workflows/caching-dependencies-to-speed-up-workflows)
+- [Security Hardening](https://docs.github.com/en/actions/security-guides/security-hardening-for-github-actions)
+
+

+ +
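+As one concrete example, matrix builds with fail-fast behavior (practices 1 and 2 above) take only a few lines (a sketch assuming a pytest-based job):
+
+```yaml
+jobs:
+  test:
+    runs-on: ubuntu-latest
+    strategy:
+      fail-fast: true              # cancel remaining matrix jobs on first failure
+      matrix:
+        python-version: ["3.12", "3.13"]
+    steps:
+      - uses: actions/checkout@v4
+      - uses: actions/setup-python@v5
+        with:
+          python-version: ${{ matrix.python-version }}
+          cache: "pip"             # built-in pip dependency caching
+      - run: pip install -r requirements.txt
+      - run: pytest -q
+```
+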
+๐Ÿ’ก Snyk Integration Guidance + +**What is Snyk?** + +Snyk is a security tool that scans your dependencies for known vulnerabilities. + +**Key Concepts:** +- Vulnerability databases (CVEs) +- Severity levels (low, medium, high, critical) +- Automated dependency updates +- Security advisories + +**Integration Options:** + +1. **Snyk GitHub Action:** + - Use `snyk/actions` from GitHub Marketplace + - Requires Snyk API token (free tier available) + - Can fail builds on vulnerabilities + +2. **Snyk CLI in Workflow:** + - Install Snyk CLI in workflow + - Run `snyk test` command + - More flexible but requires setup + +**Setup Steps:** +1. Create free Snyk account +2. Get API token from Snyk dashboard +3. Add token as GitHub Secret +4. Add Snyk step to workflow +5. Configure severity threshold (what level fails the build?) + +**Questions to Explore:** +- Should every vulnerability fail your build? +- What if vulnerabilities have no fix available? +- How do you handle false positives? +- When should you break the build vs just warn? + +**Resources:** +- [Snyk GitHub Actions](https://github.com/snyk/actions) +- [Snyk Python Example](https://github.com/snyk/actions/tree/master/python) +- [Snyk Documentation](https://docs.snyk.io/integrations/ci-cd-integrations/github-actions-integration) + +**Common Issues:** +- Dependencies not installed before Snyk runs +- API token not configured correctly +- Overly strict severity settings breaking builds +- Virtual environment confusion + +**What to Document:** +- Your severity threshold decision and reasoning +- Any vulnerabilities found and your response +- Whether you fail builds on vulnerabilities or just warn + +
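+A typical integration via the official action looks roughly like this (a sketch; the severity threshold is a decision you should document, and dependencies must be installed in an earlier step so Snyk can resolve them):
+
+```yaml
+- name: Snyk dependency scan
+  uses: snyk/actions/python@master
+  env:
+    SNYK_TOKEN: ${{ secrets.SNYK_TOKEN }}
+  with:
+    args: --severity-threshold=high   # only fail on high/critical findings
+```
+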
+ +**What to Document:** +- Status badge in README (visible proof it works) +- Caching implementation and speed improvement metrics +- CI best practices you applied with explanations +- Snyk integration results and vulnerability handling +- Terminal output showing improved workflow performance + +--- + +## Bonus Task โ€” Multi-App CI with Path Filters + Test Coverage (2.5 pts) + +**Objective:** Set up CI for your compiled language app with intelligent path-based triggers AND add test coverage tracking. + +**Part 1: Multi-App CI (1.5 pts)** + +1. **Create Second CI Workflow** + - Create `.github/workflows/-ci.yml` for your Go/Rust/Java app + - Implement similar CI steps (lint, test, build Docker image) + - Use language-specific actions and best practices + - Apply versioning strategy (SemVer or CalVer) consistently + +2. **Implement Path-Based Triggers** + - Python workflow should only run when `app_python/` files change + - Compiled language workflow should only run when `app_/` files change + - Neither should run when only docs or other files change + +3. **Optimize for Multiple Apps** + - Ensure both workflows can run in parallel + - Consider using workflow templates (DRY principle) + - Document the benefits of path-based triggers + +**Part 2: Test Coverage Badge (1 pt)** + +4. **Add Coverage Tracking** + - Install coverage tool (`pytest-cov` for Python, coverage tool for your other language) + - Generate coverage reports in CI workflow + - Integrate with codecov.io or coveralls.io (free for public repos) + - Add coverage badge to README showing percentage + +5. **Coverage Goals** + - Document your current coverage percentage + - Identify what's not covered and why + - Set a coverage threshold in CI (e.g., fail if below 70%) + +
+๐Ÿ’ก Path Filters & Multi-App CI + +**Why Path Filters?** + +In a monorepo with multiple apps, you don't want to run Python CI when only Go code changes. + +**Path Filter Syntax:** +```yaml +on: + push: + paths: + - 'app_python/**' + - '.github/workflows/python-ci.yml' +``` + +**Key Concepts:** +- Glob patterns for path matching +- When to include workflow file itself +- Exclude patterns (paths-ignore) +- How to test path filters + +**Questions to Explore:** +- Should changes to README.md trigger CI? +- Should changes to the root .gitignore trigger CI? +- What about changes to both apps in one commit? +- How do you test that path filters work correctly? + +**Multi-Language CI Patterns:** + +**For Go:** +- actions/setup-go +- golangci-lint for linting +- go test for testing +- Multi-stage Docker builds (from Lab 2 bonus) + +**For Rust:** +- actions-rs/toolchain +- cargo clippy for linting +- cargo test for testing +- cargo-audit for security + +**For Java:** +- actions/setup-java +- Maven or Gradle for build +- Checkstyle or SpotBugs for linting +- JUnit tests + +**Workflow Reusability:** + +Consider: +- Reusable workflows (call one workflow from another) +- Composite actions (bundle steps together) +- Workflow templates (DRY for similar workflows) + +**Resources:** +- [Path Filters](https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#onpushpull_requestpaths) +- [Reusable Workflows](https://docs.github.com/en/actions/using-workflows/reusing-workflows) +- [Starter Workflows](https://github.com/actions/starter-workflows/tree/main/ci) + +
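+To answer the README question above concretely, an ignore-based filter is one option (note that `paths` and `paths-ignore` cannot be combined for the same event):
+
+```yaml
+on:
+  push:
+    paths-ignore:
+      - "**.md"      # documentation-only changes skip CI
+      - "docs/**"
+```
+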
+ +
+๐Ÿ’ก Test Coverage Tracking
+
+**What is Test Coverage?**
+
+Coverage measures what percentage of your code is executed by your tests. High coverage = more code is tested.
+
+**Why Coverage Matters:**
+- Identifies untested code paths
+- Prevents regressions (changes breaking untested code)
+- Increases confidence in refactoring
+- Industry standard quality metric
+
+**Coverage Tools by Language:**
+
+**Python (pytest-cov):**
+```bash
+# Install
+pip install pytest-cov
+
+# Run with coverage
+pytest --cov=app_python --cov-report=xml --cov-report=term
+
+# Generates coverage.xml for upload
+```
+
+**Go (built-in):**
+```bash
+go test -coverprofile=coverage.out ./...
+go tool cover -html=coverage.out
+```
+
+**Rust (tarpaulin):**
+```bash
+cargo install cargo-tarpaulin
+cargo tarpaulin --out Xml
+```
+
+**Java (JaCoCo with Maven/Gradle):**
+```bash
+mvn test jacoco:report
+# or
+gradle test jacocoTestReport
+```
+
+**Integration Services:**
+
+**Codecov (Recommended):**
+- Free for public repos
+- Beautiful visualizations
+- PR comments with coverage diff
+- Setup: Sign in with GitHub, add repo, upload coverage report
+
+**Coveralls:**
+- Alternative to Codecov
+- Similar features
+- Different UI
+
+**Coverage in CI Workflow:**
+```yaml
+# Pattern for Python (research actual syntax)
+- name: Run tests with coverage
+  run: pytest --cov=. --cov-report=xml
+
+- name: Upload to Codecov
+  uses: codecov/codecov-action@v4
+  with:
+    files: ./coverage.xml   # v4 takes `files` (comma-separated), not `file`
+    token: ${{ secrets.CODECOV_TOKEN }}
+```
+
+**Coverage Badge:**
+```markdown
+![Coverage](https://codecov.io/gh/username/repo/branch/main/graph/badge.svg)
+```
+
+**Setting Coverage Thresholds:**
+
+You can fail CI if coverage drops below a threshold:
+
+```ini
+# In pytest.ini (for pyproject.toml, use the [tool.pytest.ini_options] table instead)
+[pytest]
+addopts = --cov=. --cov-fail-under=70
+```
+
+**Questions to Consider:**
+- What's a reasonable coverage target? (70%? 80%? 90%?)
+- Should you aim for 100% coverage? (Usually no - diminishing returns)
+- What code is OK to leave untested? (Error handlers, config, main)
+- How do you test hard-to-reach code paths?
+
+**Best Practices:**
+- Don't chase 100% coverage blindly
+- Focus on testing critical business logic
+- Integration points should have high coverage
+- Simple getters/setters can be skipped
+- Measure coverage trends, not just absolute numbers
+
+**Resources:**
+- [Codecov Documentation](https://docs.codecov.com/)
+- [pytest-cov Documentation](https://pytest-cov.readthedocs.io/)
+- [Go Coverage](https://go.dev/blog/cover)
+- [Cargo Tarpaulin](https://github.com/xd009642/tarpaulin)
+- [JaCoCo](https://www.jacoco.org/)
+
+

+ +**What to Document:** +- Second workflow implementation with language-specific best practices +- Path filter configuration and testing proof +- Benefits analysis: Why path filters matter in monorepos +- Example showing workflows running independently +- Terminal output or Actions tab showing selective triggering +- **Coverage integration:** Screenshot/link to codecov/coveralls dashboard +- **Coverage analysis:** Current percentage, what's covered/not covered, your threshold + +--- + +## How to Submit + +1. **Create Branch:** + - Create a new branch called `lab03` + - Develop your CI workflows on this branch + +2. **Commit Work:** + - Add workflow files (`.github/workflows/`) + - Add test files (`app_python/tests/`) + - Add documentation (`app_python/docs/LAB03.md`) + - Commit with descriptive message following conventional commits + +3. **Verify CI Works:** + - Push to your fork and verify workflows run + - Check that all jobs pass + - Review workflow logs for any issues + +4. **Create Pull Requests:** + - **PR #1:** `your-fork:lab03` โ†’ `course-repo:master` + - **PR #2:** `your-fork:lab03` โ†’ `your-fork:master` + - CI should run automatically on your PRs + +--- + +## Acceptance Criteria + +### Main Tasks (10 points) + +**Unit Testing (3 pts):** +- [ ] Testing framework chosen with justification +- [ ] Tests exist in `app_python/tests/` directory +- [ ] All endpoints have test coverage +- [ ] Tests pass locally (terminal output provided) +- [ ] README updated with testing instructions + +**GitHub Actions CI (4 pts):** +- [ ] Workflow file exists at `.github/workflows/python-ci.yml` +- [ ] Workflow includes: dependency installation, linting, testing +- [ ] Workflow includes: Docker Hub login, build, and push +- [ ] Versioning strategy chosen (SemVer or CalVer) and implemented +- [ ] Docker images tagged with at least 2 tags (e.g., version + latest) +- [ ] Workflow triggers configured appropriately +- [ ] All workflow steps pass successfully +- [ ] Docker Hub shows versioned images +- [ ] Link to successful workflow run provided + +**CI Best Practices (3 pts):** +- [ ] Status badge added to README and working +- [ ] Dependency caching implemented with performance metrics +- [ ] Snyk security scanning integrated +- [ ] At least 3 CI best practices applied +- [ ] Documentation complete (see Documentation Requirements section) + +### Bonus Task (2.5 points) + +**Part 1: Multi-App CI (1.5 pts)** +- [ ] Second workflow created for compiled language app (`.github/workflows/-ci.yml`) +- [ ] Language-specific linting and testing implemented +- [ ] Versioning strategy applied to second app +- [ ] Path filters configured for both workflows +- [ ] Path filters tested and proven to work (workflows run selectively) +- [ ] Both workflows can run in parallel +- [ ] Documentation explains benefits and shows selective triggering + +**Part 2: Test Coverage (1 pt)** +- [ ] Coverage tool integrated (`pytest-cov` or equivalent) +- [ ] Coverage reports generated in CI workflow +- [ ] Codecov or Coveralls integration complete +- [ ] Coverage badge added to README +- [ ] Coverage threshold set in CI (optional but recommended) +- [ ] Documentation includes coverage analysis (percentage, what's covered/not) + +--- + +## Documentation Requirements + +Create `app_python/docs/LAB03.md` with these sections: + +### 1. Overview +- Testing framework used and why you chose it +- What endpoints/functionality your tests cover +- CI workflow trigger configuration (when does it run?) 
+- Versioning strategy chosen (SemVer or CalVer) and rationale + +### 2. Workflow Evidence +``` +Provide links/terminal output for: +- โœ… Successful workflow run (GitHub Actions link) +- โœ… Tests passing locally (terminal output) +- โœ… Docker image on Docker Hub (link to your image) +- โœ… Status badge working in README +``` + +### 3. Best Practices Implemented +Quick list with one-sentence explanations: +- **Practice 1:** Why it helps +- **Practice 2:** Why it helps +- **Practice 3:** Why it helps +- **Caching:** Time saved (before vs after) +- **Snyk:** Any vulnerabilities found? Your action taken + +### 4. Key Decisions +Answer these briefly (2-3 sentences each): +- **Versioning Strategy:** SemVer or CalVer? Why did you choose it for your app? +- **Docker Tags:** What tags does your CI create? (e.g., latest, version number, etc.) +- **Workflow Triggers:** Why did you choose those triggers? +- **Test Coverage:** What's tested vs not tested? + +### 5. Challenges (Optional) +- Any issues you encountered and how you fixed them +- Keep it brief - bullet points are fine + +--- + +## Rubric + +| Criteria | Points | Description | +|----------|--------|-------------| +| **Unit Testing** | 3 pts | Comprehensive tests, good coverage | +| **CI Workflow** | 4 pts | Complete, functional, automated | +| **Best Practices** | 3 pts | Optimized, secure, well-documented | +| **Bonus** | 2.5 pts | Multi-app CI with path filters | +| **Total** | 12.5 pts | 10 pts required + 2.5 pts bonus | + +**Grading:** +- **10/10:** All tasks complete, CI works flawlessly, clear documentation, meaningful tests +- **8-9/10:** CI works, good test coverage, best practices applied, solid documentation +- **6-7/10:** CI functional, basic tests, some best practices, minimal documentation +- **<6/10:** CI broken or missing steps, poor tests, incomplete work + +**Quick Checklist for Full Points:** +- โœ… Tests actually test your endpoints (not just imports) +- โœ… CI workflow runs and passes +- โœ… Docker image builds and pushes successfully +- โœ… At least 3 best practices applied (caching, Snyk, status badge, etc.) +- โœ… Documentation complete but concise (no essay needed!) +- โœ… Links/evidence provided (workflow runs, Docker Hub, etc.) + +**Documentation Should Take:** 15-30 minutes to write, 5 minutes to review + +--- + +## Resources + +
+๐Ÿ“š GitHub Actions Documentation + +- [GitHub Actions Quickstart](https://docs.github.com/en/actions/quickstart) +- [Workflow Syntax](https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions) +- [Building and Testing Python](https://docs.github.com/en/actions/automating-builds-and-tests/building-and-testing-python) +- [Publishing Docker Images](https://docs.docker.com/ci-cd/github-actions/) +- [GitHub Actions Marketplace](https://github.com/marketplace?type=actions) + +
+ +
+๐Ÿงช Testing Resources + +- [Pytest Documentation](https://docs.pytest.org/) +- [Flask Testing Guide](https://flask.palletsprojects.com/en/stable/testing/) +- [FastAPI Testing Guide](https://fastapi.tiangolo.com/tutorial/testing/) +- [Python Testing Best Practices](https://realpython.com/python-testing/) + +
+ +
+๐Ÿ”’ Security & Quality + +- [Snyk GitHub Actions](https://github.com/snyk/actions) +- [Snyk Python Integration](https://docs.snyk.io/integrations/ci-cd-integrations/github-actions-integration) +- [GitHub Security Best Practices](https://docs.github.com/en/actions/security-guides/security-hardening-for-github-actions) +- [Dependency Scanning](https://docs.github.com/en/code-security/supply-chain-security) + +
+ +
+โšก Performance & Optimization + +- [Caching Dependencies](https://docs.github.com/en/actions/using-workflows/caching-dependencies-to-speed-up-workflows) +- [Docker Build Cache](https://docs.docker.com/build/cache/) +- [Workflow Optimization](https://docs.github.com/en/actions/learn-github-actions/usage-limits-billing-and-administration) + +
+ +
+๐Ÿ› ๏ธ CI/CD Tools + +- [act](https://github.com/nektos/act) - Run GitHub Actions locally +- [actionlint](https://github.com/rhysd/actionlint) - Lint workflow files +- [GitHub CLI](https://cli.github.com/) - Manage workflows from terminal + +
+ +--- + +## Looking Ahead + +- **Lab 4-6:** CI will validate your Terraform and Ansible code +- **Lab 7-8:** CI will run integration tests with logging/metrics +- **Lab 9-10:** CI will validate Kubernetes manifests and Helm charts +- **Lab 13:** ArgoCD will deploy what CI builds (GitOps!) +- **All Future Labs:** This pipeline is your safety net for changes + +--- + +**Good luck!** ๐Ÿš€ + +> **Remember:** CI isn't about having green checkmarksโ€”it's about catching problems before they reach production. Focus on meaningful tests and understanding why each practice matters. Think like a DevOps engineer: automate everything, fail fast, and learn from failures. diff --git a/labs/lab04.md b/labs/lab04.md new file mode 100644 index 0000000000..eefa858953 --- /dev/null +++ b/labs/lab04.md @@ -0,0 +1,1510 @@ +# Lab 4 โ€” Infrastructure as Code (Terraform & Pulumi) + +![difficulty](https://img.shields.io/badge/difficulty-beginner-success) +![topic](https://img.shields.io/badge/topic-Infrastructure%20as%20Code-blue) +![points](https://img.shields.io/badge/points-10%2B2.5-orange) +![tech](https://img.shields.io/badge/tech-Terraform%20%7C%20Pulumi-informational) + +> Provision cloud infrastructure using code with Terraform and Pulumi, comparing both approaches. + +## Overview + +Learn Infrastructure as Code (IaC) by creating virtual machines in the cloud using two popular tools: Terraform (declarative, HCL) and Pulumi (imperative, real programming languages). + +**What You'll Learn:** +- Terraform fundamentals and HCL syntax +- Pulumi fundamentals and infrastructure with code +- Cloud provider APIs and resources +- Infrastructure lifecycle management +- IaC best practices and validation +- Comparing IaC tools and approaches + +**Connection to Previous Labs:** +- **Lab 2:** Created Docker images - now we'll provision infrastructure to run them +- **Lab 3:** CI/CD for applications - now we'll add CI/CD for infrastructure +- **Lab 5:** Ansible will provision software on these VMs (you'll need a VM ready!) + +**Tech Stack:** Terraform 1.9+ | Pulumi 3.x | Yandex Cloud / AWS + +**Why Two Tools?** +By using both Terraform and Pulumi for the same task, you'll understand: +- Different IaC philosophies (declarative vs imperative) +- Tool trade-offs and use cases +- How to evaluate IaC tools for your needs + +**Important for Lab 5:** +The VM you create in this lab will be used in **Lab 5 (Ansible)** for configuration management. You have two options: +- **Option A (Recommended):** Keep your cloud VM running until you complete Lab 5 +- **Option B:** Use a local VM (see Local VM Alternative section below) + +If you choose to destroy your cloud VM after Lab 4, you can easily recreate it later using your Terraform/Pulumi code! 
+
+---
+
+## Important: Cloud Provider Selection
+
+### Recommended for Russia: Yandex Cloud
+
+Yandex Cloud offers a free tier and is accessible in Russia:
+- 1 VM with 20% vCPU, 1 GB RAM (free tier)
+- 10 GB SSD storage
+- No credit card required initially
+
+### Alternative Cloud Providers
+
+If Yandex Cloud is unavailable, choose any of these:
+
+**VK Cloud (Russia):**
+- Russian cloud provider
+- Free trial with bonus credits
+- Good documentation in Russian
+
+**AWS (Amazon Web Services):**
+- 750 hours/month free tier (t2.micro)
+- Most popular globally
+- Extensive documentation
+
+**GCP (Google Cloud Platform):**
+- $300 free credits for 90 days
+- Always-free tier for e2-micro
+- Modern interface
+
+**Azure (Microsoft):**
+- $200 free credits for 30 days
+- Free tier for B1s instances
+- Good Windows support
+
+**DigitalOcean:**
+- Simple pricing and interface
+- $200 free credits with GitHub Student Pack
+- Beginner-friendly
+
+### Cost Management 🚨
+
+**IMPORTANT - Read This:**
+- ✅ **Use smallest/free tier instances only**
+- ✅ **Run `terraform destroy` when done testing**
+- ✅ **Consider keeping VM for Lab 5 to avoid recreation**
+- ✅ **Set billing alerts if available**
+- ✅ **If not using for Lab 5, delete resources after lab completion**
+- ❌ **Never commit cloud credentials to Git**
+
+---
+
+## Local VM Alternative
+
+If you cannot or prefer not to use cloud providers, you can use a local VM instead. This VM will need to meet specific requirements for Lab 5 (Ansible).
+
+### Option 1: VirtualBox/VMware VM
+
+**Requirements:**
+- Ubuntu 24.04 LTS (recommended) or Ubuntu 22.04 LTS
+- 1 GB RAM minimum (2 GB recommended)
+- 10 GB disk space
+- Network adapter in Bridged mode (or NAT with port forwarding)
+- SSH server installed and configured
+- Your SSH public key added to `~/.ssh/authorized_keys`
+- Static or predictable IP address
+
+**Setup Steps:**
+```bash
+# Install SSH server (if not installed)
+sudo apt update
+sudo apt install openssh-server
+
+# Add your SSH public key
+mkdir -p ~/.ssh
+echo "your-public-key-here" >> ~/.ssh/authorized_keys
+chmod 700 ~/.ssh
+chmod 600 ~/.ssh/authorized_keys
+
+# Verify SSH access from your host machine
+ssh username@vm-ip-address
+```
+
+### Option 2: Vagrant VM
+
+**Requirements:**
+- Vagrant installed on your machine
+- VirtualBox (or another Vagrant provider)
+
+**Basic Vagrantfile:**
+```ruby
+Vagrant.configure("2") do |config|
+  config.vm.box = "bento/ubuntu-24.04" # Ubuntu 24.04 LTS (Canonical publishes no official ubuntu/noble64 box)
+  # Or use "ubuntu/jammy64" for Ubuntu 22.04 LTS
+  config.vm.network "private_network", ip: "192.168.56.10"
+  config.vm.provider "virtualbox" do |vb|
+    vb.memory = "2048"
+  end
+end
+```
+
+### Option 3: WSL2 (Windows Subsystem for Linux)
+
+**Note:** WSL2 can work but has networking limitations. A Bridged mode VM is preferred.
+
+**If using local VM:**
+- You can skip Terraform/Pulumi cloud provider setup
+- Document your local VM setup instead
+- For Task 1, show VM creation (manual or Vagrant)
+- For Task 2, you can skip Pulumi (or use Pulumi to manage Vagrant)
+- Focus on understanding IaC concepts with cloud provider research
+
+**Recommended Approach:**
+Even with a local VM, complete the Terraform/Pulumi tasks with a cloud provider to gain real IaC experience. You can destroy the cloud VM after Lab 4 and use your local VM for Lab 5.
+
+---
+
+## Tasks
+
+### Task 1 — Terraform VM Creation (4 pts)
+
+**Objective:** Create a virtual machine using Terraform on your chosen cloud provider.
+
+**Requirements:**
+
+1. 
**Setup Terraform**
+   - Install Terraform CLI
+   - Choose and configure your cloud provider
+   - Set up authentication (access keys, service accounts, etc.)
+   - Initialize Terraform
+
+2. **Define Infrastructure**
+
+   Create a `terraform/` directory with the following resources:
+
+   **Minimum Required Resources:**
+   - **VM/Compute Instance** (smallest free tier size)
+   - **Network/VPC** (if required by provider)
+   - **Security Group/Firewall Rules** (see the sketch after this list):
+     - Allow SSH (port 22) from your IP
+     - Allow HTTP (port 80)
+     - Allow custom port 5000 (for future app deployment)
+   - **Public IP Address** (to access VM remotely)
+
+3. **Configuration Best Practices**
+   - Use variables for configurable values (region, instance type, etc.)
+   - Use outputs to display important information (public IP, etc.)
+   - Add appropriate tags/labels for resource identification
+   - Use `.gitignore` for sensitive files
+
+4. **Apply Infrastructure**
+   - Run `terraform plan` to preview changes
+   - Review the plan carefully
+   - Apply infrastructure
+   - Verify VM is accessible via SSH
+   - Document the public IP and connection method
+
+5. **State Management**
+   - Keep state file local (for now)
+   - Understand what the state file contains
+   - **Never commit `terraform.tfstate` to Git**
+
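+The firewall rules in requirement 2 might look like the following minimal sketch, assuming AWS; the resource names and the example IP are hypothetical, and other providers use their own firewall resources (see the provider guides below):
+
+```hcl
+resource "aws_security_group" "lab04" {
+  name   = "lab04-sg"
+  vpc_id = aws_vpc.main.id # assumes an aws_vpc named "main" defined elsewhere
+
+  ingress {
+    description = "SSH from my IP only"
+    from_port   = 22
+    to_port     = 22
+    protocol    = "tcp"
+    cidr_blocks = ["203.0.113.7/32"] # replace with your own /32
+  }
+
+  ingress {
+    description = "HTTP"
+    from_port   = 80
+    to_port     = 80
+    protocol    = "tcp"
+    cidr_blocks = ["0.0.0.0/0"]
+  }
+
+  ingress {
+    description = "App port for later labs"
+    from_port   = 5000
+    to_port     = 5000
+    protocol    = "tcp"
+    cidr_blocks = ["0.0.0.0/0"]
+  }
+
+  egress {
+    from_port   = 0
+    to_port     = 0
+    protocol    = "-1" # all protocols
+    cidr_blocks = ["0.0.0.0/0"]
+  }
+}
+```
+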
+๐Ÿ’ก Terraform Fundamentals + +**What is Terraform?** + +Terraform is a declarative IaC tool that lets you define infrastructure in configuration files (HCL - HashiCorp Configuration Language). + +**Key Concepts:** + +**Providers:** +- Plugins that interact with cloud APIs +- Each cloud has its own provider (yandex, aws, google, azurerm) +- Configure authentication and region + +**Resources:** +- Infrastructure components (VMs, networks, firewalls) +- Format: `resource "type" "name" { ... }` +- Each resource has required and optional arguments + +**Data Sources:** +- Query existing infrastructure +- Example: Find latest Ubuntu image ID +- Format: `data "type" "name" { ... }` + +**Variables:** +- Make configurations reusable +- Define in `variables.tf` +- Set values in `terraform.tfvars` (gitignored!) +- Reference: `var.variable_name` + +**Outputs:** +- Display important values after apply +- Example: VM public IP +- Define in `outputs.tf` + +**State File:** +- Tracks real infrastructure +- Maps config to reality +- **Never commit to Git** (contains sensitive data) +- Add to `.gitignore` + +**Typical Workflow:** +```bash +terraform init # Initialize provider plugins +terraform fmt # Format code +terraform validate # Check syntax +terraform plan # Preview changes +terraform apply # Create/update infrastructure +terraform destroy # Delete all infrastructure +``` + +**Resources:** +- [Terraform Documentation](https://developer.hashicorp.com/terraform/docs) +- [Terraform Registry](https://registry.terraform.io/) - Provider docs +- [HCL Syntax](https://developer.hashicorp.com/terraform/language/syntax) + +
+ +
+โ˜๏ธ Yandex Cloud Terraform Guide + +**Yandex Cloud Setup:** + +**Authentication:** +- Create service account in Yandex Cloud Console +- Generate authorized key (JSON) +- Set key file path or use environment variables + +**Provider Configuration Pattern:** +```hcl +terraform { + required_providers { + yandex = { + source = "yandex-cloud/yandex" + } + } +} + +provider "yandex" { + # Configuration here (zone, folder_id, etc.) +} +``` + +**Key Resources:** +- `yandex_compute_instance` - Virtual machine +- `yandex_vpc_network` - Virtual private cloud +- `yandex_vpc_subnet` - Subnet within VPC +- `yandex_vpc_security_group` - Firewall rules + +**Free Tier Instance:** +- Platform: standard-v2 +- Cores: 2 (core_fraction = 20%) +- Memory: 1 GB +- Boot disk: 10 GB HDD + +**SSH Access:** +- Add SSH public key to `metadata` +- Use `ssh-keys` metadata field +- Connect: `ssh @` + +**Resources:** +- [Yandex Cloud Terraform Provider](https://registry.terraform.io/providers/yandex-cloud/yandex/latest/docs) +- [Getting Started Guide](https://cloud.yandex.com/en/docs/tutorials/infrastructure-management/terraform-quickstart) +- [Compute Instance Example](https://registry.terraform.io/providers/yandex-cloud/yandex/latest/docs/resources/compute_instance) + +
+ +
+โ˜๏ธ AWS Terraform Guide + +**AWS Setup:** + +**Authentication:** +- Create IAM user with EC2 permissions +- Generate access key ID and secret access key +- Configure AWS CLI or use environment variables +- Never hardcode credentials + +**Provider Configuration Pattern:** +```hcl +terraform { + required_providers { + aws = { + source = "hashicorp/aws" + } + } +} + +provider "aws" { + region = var.region # e.g., "us-east-1" +} +``` + +**Key Resources:** +- `aws_instance` - EC2 instance +- `aws_vpc` - Virtual Private Cloud +- `aws_subnet` - Subnet within VPC +- `aws_security_group` - Firewall rules +- `aws_key_pair` - SSH key + +**Free Tier Instance:** +- Instance type: t2.micro +- AMI: Amazon Linux 2 or Ubuntu (find with data source) +- 750 hours/month free for 12 months +- 30 GB storage included + +**Data Source for AMI:** +Use `aws_ami` data source to find latest Ubuntu image dynamically + +**Resources:** +- [AWS Provider Documentation](https://registry.terraform.io/providers/hashicorp/aws/latest/docs) +- [EC2 Instance Resource](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/instance) +- [AWS Free Tier](https://aws.amazon.com/free/) + +
+ +
+โ˜๏ธ GCP Terraform Guide + +**GCP Setup:** + +**Authentication:** +- Create service account in Google Cloud Console +- Download JSON key file +- Set `GOOGLE_APPLICATION_CREDENTIALS` environment variable +- Enable Compute Engine API + +**Provider Configuration Pattern:** +```hcl +terraform { + required_providers { + google = { + source = "hashicorp/google" + } + } +} + +provider "google" { + project = var.project_id + region = var.region +} +``` + +**Key Resources:** +- `google_compute_instance` - VM instance +- `google_compute_network` - VPC network +- `google_compute_subnetwork` - Subnet +- `google_compute_firewall` - Firewall rules + +**Free Tier Instance:** +- Machine type: e2-micro +- Zone: us-central1-a (or other free tier zone) +- Always free (within limits) +- Boot disk: 30 GB standard persistent disk + +**Resources:** +- [Google Provider Documentation](https://registry.terraform.io/providers/hashicorp/google/latest/docs) +- [Compute Instance Resource](https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/compute_instance) +- [GCP Free Tier](https://cloud.google.com/free) + +
+ +
+โ˜๏ธ Other Cloud Providers + +**Azure:** +- Provider: `azurerm` +- Resource: `azurerm_linux_virtual_machine` +- Free tier: B1s instance +- [Azure Provider Docs](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs) + +**VK Cloud:** +- Based on OpenStack +- Provider: OpenStack provider +- [VK Cloud Documentation](https://mcs.mail.ru/help/) + +**DigitalOcean:** +- Provider: `digitalocean` +- Resource: `digitalocean_droplet` +- Simple and beginner-friendly +- [DigitalOcean Provider Docs](https://registry.terraform.io/providers/digitalocean/digitalocean/latest/docs) + +**Questions to Explore:** +- What's the smallest instance size for your provider? +- How do you find the right OS image ID? +- What authentication method does your provider use? +- How do you add SSH keys to instances? + +
+ +
+๐Ÿ”’ Security Best Practices + +**Credentials Management:** + +**โŒ NEVER DO THIS:** +```hcl +provider "aws" { + access_key = "AKIAIOSFODNN7EXAMPLE" # NEVER! + secret_key = "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY" # NEVER! +} +``` + +**โœ… DO THIS INSTEAD:** + +**Option 1: Environment Variables** +```bash +export AWS_ACCESS_KEY_ID="your-key" +export AWS_SECRET_ACCESS_KEY="your-secret" +# Provider will auto-detect +``` + +**Option 2: Credentials File** +```bash +# ~/.aws/credentials (for AWS) +[default] +aws_access_key_id = your-key +aws_secret_access_key = your-secret +``` + +**Option 3: terraform.tfvars (gitignored)** +```hcl +# terraform.tfvars (add to .gitignore!) +access_key = "your-key" +secret_key = "your-secret" +``` + +**Files to Add to .gitignore:** +``` +# Terraform +*.tfstate +*.tfstate.* +.terraform/ +terraform.tfvars +*.tfvars +.terraform.lock.hcl + +# Cloud credentials +*.pem +*.key +*.json # Service account keys +credentials +``` + +**SSH Key Management:** +- Generate SSH key pair locally +- Add public key to cloud provider +- Keep private key secure (never commit) +- Use `chmod 600` on private key file + +**Security Group Rules:** +- Restrict SSH to your IP only (not 0.0.0.0/0) +- Only open ports you need +- Document why each port is open + +
+ +
+๐Ÿ“ Terraform Project Structure + +**Recommended Structure:** + +``` +terraform/ +โ”œโ”€โ”€ .gitignore # Ignore state, credentials +โ”œโ”€โ”€ main.tf # Main resources +โ”œโ”€โ”€ variables.tf # Input variables +โ”œโ”€โ”€ outputs.tf # Output values +โ”œโ”€โ”€ terraform.tfvars # Variable values (gitignored!) +โ””โ”€โ”€ README.md # Setup instructions +``` + +**What Goes in Each File:** + +**main.tf:** +- Provider configuration +- Resource definitions +- Data sources + +**variables.tf:** +- Variable declarations +- Descriptions +- Default values (non-sensitive only) + +**outputs.tf:** +- Important values to display +- VM IP addresses +- Connection strings + +**terraform.tfvars:** +- Actual variable values +- Secrets and credentials +- **MUST be in .gitignore** + +**Alternative: Single File** +For small projects, you can put everything in `main.tf`, but multi-file is more maintainable. + +
+ +**What to Document:** +- Cloud provider chosen and why +- Terraform version used +- Resources created (VM size, region, etc.) +- Public IP address of created VM +- SSH connection command +- Terminal output from `terraform plan` and `terraform apply` +- Proof of SSH access to VM + +--- + +### Task 2 โ€” Pulumi VM Creation (4 pts) + +**Objective:** Destroy the Terraform VM and recreate the same infrastructure using Pulumi. + +**Requirements:** + +1. **Cleanup Terraform Infrastructure** + - Run `terraform destroy` to delete all resources + - Verify all resources are deleted in cloud console + - Document the cleanup process + +2. **Setup Pulumi** + - Install Pulumi CLI + - Choose a programming language (Python recommended, or TypeScript, Go, C#, Java) + - Initialize a new Pulumi project + - Configure cloud provider + +3. **Recreate Same Infrastructure** + + Create a `pulumi/` directory with equivalent resources: + + **Same Resources as Task 1:** + - VM/Compute Instance (same size) + - Network/VPC + - Security Group/Firewall (same rules) + - Public IP Address + + **Goal:** Functionally identical infrastructure, different tool + +4. **Apply Infrastructure** + - Run `pulumi preview` to see planned changes + - Apply infrastructure with `pulumi up` + - Verify VM is accessible via SSH + - Document the public IP + +5. **Compare Experience** + - What was easier/harder than Terraform? + - How does the code differ? + - Which approach do you prefer and why? + +
+๐Ÿ’ก Pulumi Fundamentals + +**What is Pulumi?** + +Pulumi is an imperative IaC tool that lets you write infrastructure using real programming languages (Python, TypeScript, Go, etc.). + +**Key Differences from Terraform:** + +| Aspect | Terraform | Pulumi | +|--------|-----------|--------| +| **Language** | HCL (declarative) | Python, JS, Go, etc. (imperative) | +| **State** | Local or remote state file | Pulumi Cloud (free tier) or self-hosted | +| **Logic** | Limited (count, for_each) | Full programming language | +| **Testing** | External tools | Native unit tests | +| **Secrets** | Plain in state | Encrypted by default | + +**Key Concepts:** + +**Resources:** +- Similar to Terraform, but defined in code +- Example (Python): `vm = compute.Instance("my-vm", ...)` + +**Stacks:** +- Like Terraform workspaces +- Separate environments (dev, staging, prod) +- Each has its own config and state + +**Outputs:** +- Return values from your program +- Example: `pulumi.export("ip", vm.public_ip)` + +**Config:** +- Per-stack configuration +- Set with: `pulumi config set key value` +- Access in code: `config.get("key")` + +**Typical Workflow:** +```bash +pulumi new