diff --git a/.github/pull_request_template.md b/.github/pull_request_template.md
new file mode 100644
index 00000000..fd476164
--- /dev/null
+++ b/.github/pull_request_template.md
@@ -0,0 +1,17 @@
+## Goal
+[Describe the goal of this PR in one or two sentences.]
+
+## Changes
+- [List the key changes or modifications made in the code.]
+- [Highlight any significant refactoring or architectural decisions.]
+
+## Testing
+[Provide clear instructions on how to test the changes locally.]
+
+## Artifacts & Screenshots
+[Provide screenshots of your work (they may also be embedded in the submission file).]
+
+### Checklist:
+- [ ] Clear title and description
+- [ ] Documentation/README updated if needed
+- [ ] No secrets or large temporary files
\ No newline at end of file
diff --git a/assets-for-labs/image.png b/assets-for-labs/image.png
new file mode 100644
index 00000000..3f9c3c73
Binary files /dev/null and b/assets-for-labs/image.png differ
diff --git a/labs/lab1.md b/labs/lab1.md
deleted file mode 100644
index 43053d85..00000000
--- a/labs/lab1.md
+++ /dev/null
@@ -1,276 +0,0 @@
-# Lab 1 — Setup OWASP Juice Shop & PR Workflow
-
-
-
-
-
-> **Goal:** Run OWASP Juice Shop locally, complete a triage report, and standardize PR submissions.
-> **Deliverable:** A PR from `feature/lab1` to the course repo with `labs/submission1.md` containing triage report and PR template setup. Submit the PR link via Moodle.
-
----
-
-## Overview
-
-In this lab you will practice:
-- Launching **OWASP Juice Shop** for security testing
-- Capturing a **triage report** covering version, URL, health check, exposure, risks, and next actions
-- Bootstrapping a **repeatable PR workflow** with a template
-
-> We **do not** copy Juice Shop code into the repo. You'll run the official Docker image and keep **only lab artifacts** in your fork.
-
----
-
-## Tasks
-
-### Task 1 — OWASP Juice Shop Deployment (5 pts)
-
-**Objective:** Run Juice Shop locally and complete a Triage report capturing deployment, health check, exposure, and top risks.
-
-#### 1.1: Deploy Juice Shop Container
-
-```bash
-docker run -d --name juice-shop \
- -p 127.0.0.1:3000:3000 \
- bkimminich/juice-shop:v19.0.0
-```
-
-#### 1.2: Initial Verification
-
-- Browse to `http://localhost:3000` and confirm the app loads
-- Verify API responds: `curl -s http://127.0.0.1:3000/rest/products | head`
-
-#### 1.3: Complete Triage Report
-
-Create `labs/submission1.md` using this template:
-
-```markdown
-# Triage Report — OWASP Juice Shop
-
-## Scope & Asset
-- Asset: OWASP Juice Shop (local lab instance)
-- Image: bkimminich/juice-shop:v19.0.0
-- Release link/date: —
-- Image digest (optional):
-
-## Environment
-- Host OS:
-- Docker:
-
-## Deployment Details
-- Run command used: `docker run -d --name juice-shop -p 127.0.0.1:3000:3000 bkimminich/juice-shop:v19.0.0`
-- Access URL: http://127.0.0.1:3000
-- Network exposure: 127.0.0.1 only [ ] Yes [ ] No (explain if No)
-
-## Health Check
-- Page load: attach screenshot of home page (path or embed)
-- API check: first 5–10 lines from `curl -s http://127.0.0.1:3000/rest/products | head`
-
-## Surface Snapshot (Triage)
-- Login/Registration visible: [ ] Yes [ ] No — notes: <...>
-- Product listing/search present: [ ] Yes [ ] No — notes: <...>
-- Admin or account area discoverable: [ ] Yes [ ] No — notes: <...>
-- Client-side errors in console: [ ] Yes [ ] No — notes: <...>
-- Security headers (quick look — optional): `curl -I http://127.0.0.1:3000` → CSP/HSTS present? notes: <...>
-
-## Risks Observed (Top 3)
-1)
-2)
-3)
-```
-
-In `labs/submission1.md`, document:
-- Complete triage report using provided template
-- Screenshots or API output demonstrating working deployment
-- Environment details and security observations
-- Analysis of top 3 security risks identified during assessment
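-
-The submission file can be scaffolded from the command line so the environment fields are captured at triage time. A minimal sketch (run from the repo root; the pre-filled fields are a convenience, not part of the required template):
-
-```bash
-#!/usr/bin/env bash
-set -euo pipefail
-
-mkdir -p labs
-host_os="$(uname -srm)"
-docker_ver="$(docker --version 2>/dev/null || echo 'docker not found')"
-
-# Scaffold the report header; paste the rest of the triage template below it
-cat > labs/submission1.md <<EOF
-# Triage Report — OWASP Juice Shop
-
-## Environment
-- Host OS: ${host_os}
-- Docker: ${docker_ver}
-EOF
-
-echo "Scaffolded labs/submission1.md"
-```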
-
----
-
-### Task 2 — PR Template Setup (4 pts)
-
-**Objective:** Standardize submissions so every lab PR has the same sections and checks.
-
-#### 2.1: Create PR Template
-
-Create `.github/pull_request_template.md` with:
-- Sections: **Goal**, **Changes**, **Testing**, **Artifacts & Screenshots**
-- Checklist (3 items): clear title, docs updated if needed, no secrets/large temp files
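-
-Before committing, the template can be self-checked for the required sections. A minimal sketch (run from the repo root; it only greps for the section titles listed above):
-
-```bash
-#!/usr/bin/env bash
-set -euo pipefail
-
-tpl=".github/pull_request_template.md"
-if [ -f "$tpl" ]; then
-  for section in "## Goal" "## Changes" "## Testing" "## Artifacts & Screenshots"; do
-    grep -qF "$section" "$tpl" && echo "OK      $section" || echo "MISSING $section"
-  done
-  # Expect at least 3 unchecked checklist items
-  n="$(grep -c '^- \[ \]' "$tpl" || true)"
-  echo "Checklist items: $n"
-else
-  echo "Template not found at $tpl (create it on main first)"
-fi
-```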
-
-```bash
-# Commit message example:
-git commit -m "docs: add PR template"
-```
-
-#### 2.2: Verify Template Application
-
-```bash
-git checkout -b feature/lab1
-git add labs/submission1.md
-git commit -m "docs(lab1): add submission1 triage report"
-git push -u origin feature/lab1
-```
-
-Verify that:
-- The PR description auto-fills with the template sections and checklist
-- You can fill in **Goal / Changes / Testing / Artifacts & Screenshots** and tick the checkboxes
-- Screenshots and the API snippet are embedded in `labs/submission1.md`
-
-In `labs/submission1.md`, document:
-- PR template creation process and verification
-- Evidence that template auto-fills correctly
-- Analysis of how templates improve collaboration workflow
-
-
-> **One-time Bootstrap Note:** GitHub loads PR templates from the **default branch of your fork (`main`)**. Add the template to `main` first, then open your lab PR from `feature/lab1`.
-
-
-
----
-
-### Task 6 — GitHub Community Engagement (1 pt)
-
-**Objective:** Explore GitHub's social features that support collaboration and discovery.
-
-**Actions Required:**
-1. **Star** the course repository
-2. **Star** the [simple-container-com/api](https://github.com/simple-container-com/api) project — a promising open-source tool for container management
-3. **Follow** your professor and TAs on GitHub:
- - Professor: [@Cre-eD](https://github.com/Cre-eD)
- - TA: [@marat-biriushev](https://github.com/marat-biriushev)
- - TA: [@pierrepicaud](https://github.com/pierrepicaud)
-4. **Follow** at least 3 classmates from the course
-
-**Document in labs/submission1.md:**
-
-Add a "GitHub Community" section (after Challenges & Solutions) with 1-2 sentences explaining:
-- Why starring repositories matters in open source
-- How following developers helps in team projects and professional growth
-
-
-#### 💡 GitHub Social Features
-
-**Why Stars Matter:**
-
-**Discovery & Bookmarking:**
-- Stars help you bookmark interesting projects for later reference
-- Star count indicates project popularity and community trust
-- Starred repos appear in your GitHub profile, showing your interests
-
-**Open Source Signal:**
-- Stars encourage maintainers (shows appreciation)
-- High star count attracts more contributors
-- Helps projects gain visibility in GitHub search and recommendations
-
-**Professional Context:**
-- Shows you follow best practices and quality projects
-- Indicates awareness of industry tools and trends
-
-**Why Following Matters:**
-
-**Networking:**
-- See what other developers are working on
-- Discover new projects through their activity
-- Build professional connections beyond the classroom
-
-**Learning:**
-- Learn from others' code and commits
-- See how experienced developers solve problems
-- Get inspiration for your own projects
-
-**Collaboration:**
-- Stay updated on classmates' work
-- Easier to find team members for future projects
-- Build a supportive learning community
-
-**Career Growth:**
-- Follow thought leaders in your technology stack
-- See trending projects in real-time
-- Build visibility in the developer community
-
-**GitHub Best Practices:**
-- Star repos you find useful (not spam)
-- Follow developers whose work interests you
-- Engage meaningfully with the community
-- Your GitHub activity shows employers your interests and involvement
-
-
-
----
-
-## How to Submit
-
-1. Create a branch for this lab and push it to your fork:
-
- ```bash
- git switch -c feature/lab1
- # create labs/submission1.md with your findings
- git add labs/submission1.md
- git commit -m "docs: add lab1 submission"
- git push -u origin feature/lab1
- ```
-
-2. Open a PR from your fork's `feature/lab1` branch → **course repository's main branch**.
-
-3. In the PR description, include:
-
- ```text
- - [x] Task 1 done — OWASP Juice Shop deployment + triage report
- - [x] Task 2 done — PR template setup + verification
- - [x] Task 6 done — GitHub community engagement
- ```
-
-4. **Copy the PR URL** and submit it via **Moodle before the deadline**.
-
----
-
-## Acceptance Criteria
-
-- ✅ Branch `feature/lab1` exists with commits for each task
-- ✅ File `labs/submission1.md` contains the required deliverables for Tasks 1, 2, and 6
-- ✅ OWASP Juice Shop successfully deployed and documented
-- ✅ File `.github/pull_request_template.md` exists on **main** branch
-- ✅ GitHub community engagement completed (stars and follows)
-- ✅ PR from `feature/lab1` → **course repo main branch** is open
-- ✅ PR link submitted via Moodle before the deadline
-- ✅ **No Juice Shop source code** copied into repo—only lab artifacts
-
----
-
-## Rubric (10 pts)
-
-| Criterion | Points |
-| -------------------------------------------------------- | -----: |
-| Task 1 — OWASP Juice Shop deployment + triage report | **5** |
-| Task 2 — PR template setup + verification | **4** |
-| Task 6 — GitHub community engagement | **1** |
-| **Total** | **10** |
-
----
-
-## Guidelines
-
-- Use clear Markdown headers to organize sections in `submission1.md`
-- Include both command outputs and written analysis for each task
-- Document deployment process and security observations
-- Ensure screenshots and evidence demonstrate working setup
-
-
-### Security Notes
-
-- Always bind to `127.0.0.1` to avoid exposing the app beyond localhost
-- Pin specific Docker image versions for reproducibility
-- Never commit application source code—only lab artifacts and reports
-
-
-
-
-### Deployment Tips
-
-- Check GitHub Releases page for specific version dates and notes
-- Verify API endpoints respond before completing triage report
-- Document all observed security issues in the triage template
-- Keep deployment commands simple and well-documented
-
-
\ No newline at end of file
diff --git a/labs/lab10.md b/labs/lab10.md
deleted file mode 100644
index 378b56a5..00000000
--- a/labs/lab10.md
+++ /dev/null
@@ -1,209 +0,0 @@
-# Lab 10 — Vulnerability Management & Response with DefectDojo
-
-
-
-
-
-> Goal: Stand up DefectDojo locally, import prior lab findings (ZAP, Semgrep, Trivy/Grype, Nuclei), and produce a stakeholder-ready reporting & metrics package.
-> Deliverable: A PR from `feature/lab10` with `labs/submission10.md` summarizing setup evidence, import results, metrics snapshot highlights, and links to exported artifacts. Submit the PR link via Moodle.
-
----
-
-## Overview
-
-In this lab you will practice:
-- Standing up OWASP DefectDojo locally via Docker Compose
-- Organizing findings across products/engagements/tests
-- Importing findings from multiple tools (ZAP, Semgrep, Trivy, Nuclei)
-- Generating reports that non-technical stakeholders can consume
-- Deriving basic program metrics (open/closed status, severity mix, SLA outlook)
-
-> Primary platform: OWASP DefectDojo (open source, 2025)
-
----
-
-## Prerequisites
-
-- Docker with Compose V2 (`docker compose` available)
-- `git`, `curl`, `jq`
-- Prior lab outputs available locally (paths below)
-
-Create working directories:
-```bash
-mkdir -p labs/lab10/{setup,imports,report}
-```
-
----
-
-## Tasks
-
-### Task 1 — DefectDojo Local Setup (2 pts)
-Objective: Run DefectDojo locally and prepare the structure for managing findings.
-
-#### 1.1: Clone and start DefectDojo
-```bash
-# Clone upstream
-git clone https://github.com/DefectDojo/django-DefectDojo.git labs/lab10/setup/django-DefectDojo
-cd labs/lab10/setup/django-DefectDojo
-
-# Optional: check compose compatibility
-./docker/docker-compose-check.sh || true
-
-# Build and start (first run can take a bit)
-docker compose build
-docker compose up -d
-
-# Verify containers are healthy
-docker compose ps
-# UI: http://localhost:8080
-```
-
-#### 1.2: Get admin credentials (no manual superuser needed)
-```bash
-# Watch initializer logs until the admin password is printed
-docker compose logs -f initializer
-# In a second terminal, extract the password once available:
-docker compose logs initializer | grep "Admin password:"
-
-# Login to the UI at http://localhost:8080 with:
-# Username: admin
-# Password: (copied from the initializer logs above)
-```
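-
-If you prefer to capture the password into a shell variable, the log line can be parsed directly. A minimal sketch (the exact log wording can differ between DefectDojo releases; adjust the sed pattern if needed):
-
-```bash
-# Extract the generated admin password from the initializer logs
-DD_ADMIN_PASS="$(docker compose logs initializer 2>/dev/null \
-  | sed -n 's/.*Admin password: //p' | tail -n1)"
-echo "Password captured: ${DD_ADMIN_PASS:+yes}"
-```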
----
-
-### Task 2 — Import Prior Findings (4 pts)
-Objective: Import findings from your previous labs into the engagement.
-
-Use the importer script below; no manual API calls are required. The script will auto‑create the product type/product/engagement if missing.
-
-#### 2.1: Get API token and set variables
-```bash
-# In the UI: Profile → API v2 Key → copy your token
-export DD_API="http://localhost:8080/api/v2"
-export DD_TOKEN="REPLACE_WITH_YOUR_API_TOKEN"
-
-# Target context (adjust names if you prefer)
-export DD_PRODUCT_TYPE="Engineering"
-export DD_PRODUCT="Juice Shop"
-export DD_ENGAGEMENT="Labs Security Testing"
-# The import script will auto-detect importer names from your instance.
-```
-
-#### 2.2: Required reports (expected paths)
-- ZAP: `labs/lab5/zap/zap-report-noauth.json`
-- Semgrep: `labs/lab5/semgrep/semgrep-results.json`
-- Trivy: `labs/lab4/trivy/trivy-vuln-detailed.json`
-- Nuclei: `labs/lab5/nuclei/nuclei-results.json`
-- Grype (optional): `labs/lab4/syft/grype-vuln-results.json`
-
-#### 2.3: Run the importer script
-```bash
-bash labs/lab10/imports/run-imports.sh
-```
-The script auto-detects importer names, auto-creates context if missing, imports any reports found at the paths above, and saves responses under `labs/lab10/imports/`.
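-
-A quick pre-flight check shows which of the expected reports the importer will actually find (paths as listed in 2.2):
-
-```bash
-# List which prior-lab reports exist before importing
-for f in \
-  labs/lab5/zap/zap-report-noauth.json \
-  labs/lab5/semgrep/semgrep-results.json \
-  labs/lab4/trivy/trivy-vuln-detailed.json \
-  labs/lab5/nuclei/nuclei-results.json \
-  labs/lab4/syft/grype-vuln-results.json; do
-  if [ -f "$f" ]; then echo "FOUND   $f"; else echo "MISSING $f"; fi
-done
-```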
----
-
-### Task 3 — Reporting & Program Metrics (4 pts)
-Objective: Turn raw imports into an easy-to-understand report and metrics package that a stakeholder can consume without prior Dojo experience.
-
-#### 3.1: Create a baseline progress snapshot
-- From the engagement dashboard, note the counts for Active, Verified, and Mitigated findings.
-- Use the “Filters” sidebar to group by severity; grab a screenshot or jot the numbers.
-- Record the snapshot using the template below:
- ```bash
- mkdir -p labs/lab10/report
- cat > labs/lab10/report/metrics-snapshot.md <<'EOF'
- # Metrics Snapshot — Lab 10
-
- - Date captured:
- - Active findings:
- - Critical:
- - High:
- - Medium:
- - Low:
- - Informational:
- - Verified vs. Mitigated notes:
- EOF
- ```
-
-#### 3.2: Generate governance-ready artifacts
-- In the Engagement → Reports page, choose a human-readable template (Executive, Detailed, or similar) and generate a PDF or HTML report.
- - Save it to `labs/lab10/report/dojo-report.pdf` or `.html`.
-- Download the “Findings list (CSV)” from the same page and store it as `labs/lab10/report/findings.csv` for spreadsheet analysis.
-
-#### 3.3: Extract key metrics for `labs/submission10.md`
-- From the report or dashboard, capture:
- - Open vs. Closed counts by severity.
- - Findings per tool (ZAP, Semgrep, Trivy, Nuclei, and Grype).
- - Any SLA breaches or items due within the next 14 days.
- - Top recurring CWE/OWASP categories.
-- Summarize these in prose (3–5 bullet points) inside `labs/submission10.md`.
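-
-The severity mix can also be pulled straight from the CSV. A minimal sketch (it assumes a severity column; check the header row with `head -1 findings.csv` and adjust the column index `$3` to match your export):
-
-```bash
-# Count findings per severity from the exported CSV (skip the header row)
-tail -n +2 labs/lab10/report/findings.csv \
-  | awk -F',' '{ counts[$3]++ } END { for (s in counts) print s, counts[s] }' \
-  | sort
-```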
-
-Deliverables for this task:
-- `labs/lab10/report/metrics-snapshot.md`
-- `labs/lab10/report/dojo-report.(pdf|html)`
-- `labs/lab10/report/findings.csv`
-- Metric summary bullets in `labs/submission10.md`
-
----
-
-## Acceptance Criteria
-
-- ✅ DefectDojo runs locally and an admin user can log in
-- ✅ Product Type, Product, and Engagement are configured
-- ✅ Imports completed for ZAP, Semgrep, Trivy (plus Nuclei/Grype if available)
-- ✅ Reporting artifacts generated: metrics snapshot, Dojo report, findings CSV, and summary bullets in `labs/submission10.md`
-- ✅ All artifacts saved under `labs/lab10/`
-
----
-
-## How to Submit
-
-1. Create a branch for this lab and push it to your fork:
-```bash
-git switch -c feature/lab10
-# create labs/submission10.md with your findings
-git add labs/lab10/ labs/submission10.md
-git commit -m "docs: lab10 — DefectDojo vuln management"
-git push -u origin feature/lab10
-```
-2. Open a PR from your fork’s `feature/lab10` → course repo’s `main`.
-3. Include this checklist in the PR description:
-```text
-- [x] Task 1 — Dojo setup and structure
-- [x] Task 2 — Imports completed (multi-tool)
-- [x] Task 3 — Report + metrics package
-```
-4. Submit the PR URL via Moodle before the deadline.
-
----
-
-## Rubric (10 pts)
-
-| Criterion | Points |
-| ------------------------------------------------------------ | -----: |
-| Task 1 — DefectDojo local setup | 2.0 |
-| Task 2 — Import prior findings (multi-tool) | 4.0 |
-| Task 3 — Reporting & metrics package | 4.0 |
-| Total | 10.0 |
-
----
-
-## Guidelines
-
-- Keep sensitive data out of uploads; use lab outputs only
-- Prefer JSON formats for robust importer support
-- If you explore deduplication, note the algorithm choice (helps explain numbers)
-- Be explicit when marking false positives (add justification)
-- Keep SLAs realistic but time-bound; reference calendar dates
-
-
-### References
-
-- DefectDojo: https://github.com/DefectDojo/django-DefectDojo
-- Importers list: check your UI Import Scan page for exact `scan_type` names
-- Local API v2 docs: http://localhost:8080/api/v2/doc/ (after startup)
-- Official docs (Open Source): https://docs.defectdojo.com/en/open_source/
-- CVSS v3.1 Calculator: https://www.first.org/cvss/calculator/3.1
-
-
diff --git a/labs/lab10/imports/run-imports.sh b/labs/lab10/imports/run-imports.sh
deleted file mode 100644
index 0f0e33c9..00000000
--- a/labs/lab10/imports/run-imports.sh
+++ /dev/null
@@ -1,134 +0,0 @@
-#!/usr/bin/env bash
-set -euo pipefail
-
-# Batch import helper for Lab 10
-# - Auto-detects scan_type names from your Dojo instance
-# - Imports whichever files exist among ZAP, Semgrep, Trivy, Nuclei (and optional Grype)
-#
-# Usage:
-# export DD_API="http://localhost:8080/api/v2"
-# export DD_TOKEN=""
-# # Optional overrides (defaults shown)
-# export DD_PRODUCT_TYPE="${DD_PRODUCT_TYPE:-Engineering}"
-# export DD_PRODUCT="${DD_PRODUCT:-Juice Shop}"
-# export DD_ENGAGEMENT="${DD_ENGAGEMENT:-Labs Security Testing}"
-# bash labs/lab10/imports/run-imports.sh
-
-here_dir="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
-out_dir="$here_dir"
-
-require_env() {
- local name="$1"
- if [[ -z "${!name:-}" ]]; then
- echo "ERROR: env var $name is required" >&2
- exit 1
- fi
-}
-
-require_env DD_API
-require_env DD_TOKEN
-
-DD_PRODUCT_TYPE="${DD_PRODUCT_TYPE:-Engineering}"
-DD_PRODUCT="${DD_PRODUCT:-Juice Shop}"
-DD_ENGAGEMENT="${DD_ENGAGEMENT:-Labs Security Testing}"
-
-echo "Using context:"
-echo " DD_API=$DD_API"
-echo " DD_PRODUCT_TYPE=$DD_PRODUCT_TYPE"
-echo " DD_PRODUCT=$DD_PRODUCT"
-echo " DD_ENGAGEMENT=$DD_ENGAGEMENT"
-
-have_jq=true
-command -v jq >/dev/null 2>&1 || have_jq=false
-if ! $have_jq; then
- echo "WARN: jq not found; falling back to defaults for scan_type names." >&2
-fi
-
-# Discover scan type names from your instance if jq is available
-SCAN_ZAP="${SCAN_ZAP:-}"
-SCAN_SEMGREP="${SCAN_SEMGREP:-}"
-SCAN_TRIVY="${SCAN_TRIVY:-}"
-SCAN_NUCLEI="${SCAN_NUCLEI:-}"
-
-if $have_jq; then
- echo "Discovering importer names from /test_types/ ..."
- mapfile -t types < <(curl -sS -H "Authorization: Token $DD_TOKEN" "$DD_API/test_types/?limit=2000" | jq -r '.results[].name')
- choose_type() {
- local pat="$1"
- local fallback="$2"
- local val=""
- for t in "${types[@]}"; do
- if [[ "$t" =~ $pat ]]; then val="$t"; break; fi
- done
- if [[ -z "$val" ]]; then val="$fallback"; fi
- echo "$val"
- }
- SCAN_ZAP="${SCAN_ZAP:-$(choose_type '^ZAP' 'ZAP Scan')}"
- SCAN_SEMGREP="${SCAN_SEMGREP:-$(choose_type '^Semgrep' 'Semgrep JSON Report')}"
- SCAN_TRIVY="${SCAN_TRIVY:-$(choose_type '^Trivy' 'Trivy Scan')}"
- SCAN_NUCLEI="${SCAN_NUCLEI:-$(choose_type '^Nuclei' 'Nuclei Scan')}"
- # Grype importer (commonly named "Anchore Grype")
- if [[ -z "${SCAN_GRYPE:-}" ]]; then
- SCAN_GRYPE=$(printf '%s\n' "${types[@]}" | grep -i '^Anchore Grype' | head -n1)
- if [[ -z "$SCAN_GRYPE" ]]; then
- SCAN_GRYPE=$(printf '%s\n' "${types[@]}" | grep -i 'Grype' | head -n1)
- fi
- fi
-else
- SCAN_ZAP="${SCAN_ZAP:-ZAP Scan}"
- SCAN_SEMGREP="${SCAN_SEMGREP:-Semgrep JSON Report}"
- SCAN_TRIVY="${SCAN_TRIVY:-Trivy Scan}"
- SCAN_NUCLEI="${SCAN_NUCLEI:-Nuclei Scan}"
-fi
-SCAN_GRYPE="${SCAN_GRYPE:-Anchore Grype}"
-
-echo "Importer names:"
-echo " ZAP = $SCAN_ZAP"
-echo " Semgrep = $SCAN_SEMGREP"
-echo " Trivy = $SCAN_TRIVY"
-echo " Nuclei = $SCAN_NUCLEI"
-echo " Grype = $SCAN_GRYPE"
-
-import_scan() {
- local scan_type="$1"; shift
- local file="$1"; shift
- if [[ ! -f "$file" ]]; then
- echo "SKIP: $scan_type file not found: $file"
- return 0
- fi
- local base out
- base="$(basename "$file")"
- out="$out_dir/import-${base//[^A-Za-z0-9_.-]/_}.json"
- echo "Importing $scan_type from $file"
- curl -sS -X POST "$DD_API/import-scan/" \
- -H "Authorization: Token $DD_TOKEN" \
- -F "scan_type=$scan_type" \
- -F "file=@$file" \
- -F "product_type_name=$DD_PRODUCT_TYPE" \
- -F "product_name=$DD_PRODUCT" \
- -F "engagement_name=$DD_ENGAGEMENT" \
- -F "auto_create_context=true" \
- -F "minimum_severity=Info" \
- -F "close_old_findings=false" \
- -F "push_to_jira=false" \
- | tee "$out"
-}
-
-# Candidate paths per tool
-zap_file="labs/lab5/zap/zap-report-noauth.json"
-semgrep_file="labs/lab5/semgrep/semgrep-results.json"
-trivy_file="labs/lab4/trivy/trivy-vuln-detailed.json"
-nuclei_file="labs/lab5/nuclei/nuclei-results.json"
-
-# Grype
-grype_file="labs/lab4/syft/grype-vuln-results.json"
-
-import_scan "$SCAN_ZAP" "$zap_file"
-import_scan "$SCAN_SEMGREP" "$semgrep_file"
-import_scan "$SCAN_TRIVY" "$trivy_file"
-import_scan "$SCAN_NUCLEI" "$nuclei_file"
-
-# Grype
-import_scan "$SCAN_GRYPE" "$grype_file"
-
-echo "Done. Import responses saved under $out_dir"
diff --git a/labs/lab11.md b/labs/lab11.md
deleted file mode 100644
index 4ada0627..00000000
--- a/labs/lab11.md
+++ /dev/null
@@ -1,285 +0,0 @@
-# Lab 11 — Reverse Proxy Hardening: Nginx Security Headers, TLS, and Rate Limiting
-
-
-
-
-
-> Goal: Place OWASP Juice Shop behind an Nginx reverse proxy and harden it with security headers, TLS, and request rate limiting — without changing app code.
-> Deliverable: A PR from `feature/lab11` with `labs/submission11.md` including command evidence, header/TLS scans, rate-limit test results, and a short analysis of trade-offs.
-
----
-
-## Overview
-
-You will:
-- Deploy Juice Shop behind a reverse proxy using Docker Compose
-- Add and verify essential security headers (XFO, XCTO, HSTS, Referrer-Policy, Permissions-Policy, COOP/CORP)
-- Enable TLS with a local self-signed certificate and verify configuration
-- Implement request rate limiting and timeouts to reduce brute-force/DoS risk
-
-This lab is designed to be practical and educational, focusing on changes operations teams can make without touching application code.
-
----
-
-## Prerequisites
-
-Before starting, ensure you have:
-- ✅ Docker installed and running (`docker --version`)
-- ✅ Docker Compose installed (`docker compose version`)
-- ✅ `curl` and `jq` for testing and JSON parsing
-- ✅ At least 2GB free disk space
-- ✅ ~45-60 minutes available
-
-**Quick Setup Check:**
-```bash
-# Pull images in advance (optional)
-docker pull bkimminich/juice-shop:v19.0.0
-docker pull nginx:stable-alpine
-docker pull alpine:latest
-docker pull drwetter/testssl.sh:latest
-
-# Create working directories
-mkdir -p labs/lab11/{reverse-proxy/certs,logs,analysis}
-```
-
-**Files provided in this repo:**
-- `labs/lab11/docker-compose.yml` - Stack configuration
-- `labs/lab11/reverse-proxy/nginx.conf` - Pre-configured with security headers, TLS, rate limiting
-
----
-
-## Tasks
-
-### Task 1 — Reverse Proxy Compose Setup (2 pts)
-⏱️ **Estimated time:** 10 minutes
-
-**Objective:** Run Juice Shop behind Nginx (no app port exposed directly).
-
-#### 1.1: Prepare certs and start the stack
-```bash
-# Navigate to lab11 directory
-cd labs/lab11
-
-# Generate a local self-signed cert with SAN for localhost so Nginx can start
-# Write the SAN config on the host first
-cat > reverse-proxy/certs/san.cnf << 'EOF'
-[ req ]
-default_bits = 2048
-distinguished_name = req_distinguished_name
-x509_extensions = v3_req
-prompt = no
-
-[ req_distinguished_name ]
-CN = localhost
-
-[ v3_req ]
-subjectAltName = @alt_names
-
-[ alt_names ]
-DNS.1 = localhost
-IP.1 = 127.0.0.1
-IP.2 = ::1
-EOF
-
-# Then generate the key and certificate inside a throwaway Alpine container
-docker run --rm -v "$(pwd)/reverse-proxy/certs":/certs \
-  alpine:latest \
-  sh -c "apk add --no-cache openssl && \
-    openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
-      -keyout /certs/localhost.key -out /certs/localhost.crt \
-      -config /certs/san.cnf -extensions v3_req"
-
-# Start services
-docker compose up -d
-docker compose ps
-
-# Verify HTTP (should redirect to HTTPS)
-curl -s -o /dev/null -w "HTTP %{http_code}\n" http://localhost:8080/
-```
-
-Expected: `HTTP 308` (redirect to HTTPS).
-
-#### 1.2: Confirm no direct app exposure
-```bash
-# Only Nginx should have published host ports; Juice Shop should have none
-docker compose ps
-```
-
-In `labs/submission11.md`, document:
-
-**Task 1 Requirements:**
- - Explain why reverse proxies are valuable for security (TLS termination, security headers injection, request filtering, single access point)
- - Explain why hiding direct app ports reduces attack surface
- - Include the `docker compose ps` output showing only Nginx has published host ports (Juice Shop shows none)
-
----
-
-### Task 2 — Security Headers (3 pts)
-⏱️ **Estimated time:** 10 minutes
-
-**Objective:** Review the essential headers at the proxy and verify they’re present over HTTP/HTTPS.
-
-Headers configured in `nginx.conf`:
- - `X-Frame-Options: DENY`
- - `X-Content-Type-Options: nosniff`
- - `Referrer-Policy: strict-origin-when-cross-origin`
- - `Permissions-Policy: camera=(), geolocation=(), microphone=()`
- - `Cross-Origin-Opener-Policy: same-origin`
- - `Cross-Origin-Resource-Policy: same-origin`
- - `Content-Security-Policy-Report-Only: default-src 'self'; img-src 'self' data:; script-src 'self' 'unsafe-inline' 'unsafe-eval'; style-src 'self' 'unsafe-inline'`
-
-Note: CSP is set in Report-Only mode to avoid breaking Juice Shop functionality.
-
-#### 2.1: Verify headers (HTTP)
-```bash
-curl -sI http://localhost:8080/ | tee analysis/headers-http.txt
-```
-
-#### 2.2: Verify headers (after TLS in Task 3)
-```bash
-curl -skI https://localhost:8443/ | tee analysis/headers-https.txt
-```
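-
-A small script over the saved headers makes this check repeatable. A minimal sketch (assumes `analysis/headers-https.txt` from step 2.2; it only tests header presence, not values):
-
-```bash
-#!/usr/bin/env bash
-set -euo pipefail
-
-hdrs="${1:-analysis/headers-https.txt}"
-[ -f "$hdrs" ] || { echo "No header capture at $hdrs"; hdrs=/dev/null; }
-
-for h in X-Frame-Options X-Content-Type-Options Strict-Transport-Security \
-         Referrer-Policy Permissions-Policy \
-         Cross-Origin-Opener-Policy Cross-Origin-Resource-Policy; do
-  if grep -qi "^$h:" "$hdrs"; then echo "OK      $h"; else echo "MISSING $h"; fi
-done
-```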
-
-In `labs/submission11.md`, document:
-
-**Task 2 Requirements:**
- - Paste relevant security headers from `headers-https.txt`
- - For each header, explain what it protects against:
- - **X-Frame-Options**: ---
- - **X-Content-Type-Options**: ---
- - **Strict-Transport-Security (HSTS)**: ---
- - **Referrer-Policy**: ---
- - **Permissions-Policy**: ---
- - **COOP/CORP**: ---
- - **CSP-Report-Only**: ---
----
-
-### Task 3 — TLS, HSTS, Rate Limiting & Timeouts (5 pts)
-⏱️ **Estimated time:** 20 minutes
-
-**Objective:** Confirm HTTPS and HSTS behavior, scan TLS, and validate rate limiting and timeouts to reduce brute-force and slowloris risks.
-
-#### 3.1: Scan TLS (testssl.sh)
-Use one of the following, depending on your OS:
-```bash
-# Linux: use host networking to reach localhost:8443
-docker run --rm --network host drwetter/testssl.sh:latest https://localhost:8443 \
- | tee analysis/testssl.txt
-
-# Mac/Windows (Docker Desktop): target host.docker.internal
-docker run --rm drwetter/testssl.sh:latest https://host.docker.internal:8443 \
- | tee analysis/testssl.txt
-```
-
----
-
-#### 3.2: Validate rate limiting on login
-Login rate limit is configured on `/rest/user/login` with Nginx `limit_req` and `limit_req_status 429`.
-
-##### Trigger rate limiting
-```bash
-for i in $(seq 1 12); do \
- curl -sk -o /dev/null -w "%{http_code}\n" \
- -H 'Content-Type: application/json' \
- -X POST https://localhost:8443/rest/user/login \
- -d '{"email":"a@a","password":"a"}'; \
-done | tee analysis/rate-limit-test.txt
-```
-Expected: Some responses return `429` once the burst+rate thresholds are exceeded.
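-
-The distribution of status codes in the captured output can be tallied directly (run from `labs/lab11`, where `analysis/rate-limit-test.txt` was written):
-
-```bash
-# Tally allowed vs rate-limited responses from the rate-limit test
-sort analysis/rate-limit-test.txt | uniq -c
-# Expect a few allowed responses first, then 429s once the limit trips
-```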
-
-In `labs/submission11.md`, document:
-
-**Task 3 Requirements:**
-- TLS/testssl summary:
- - Summarize TLS protocol support from testssl scan (which versions are enabled)
- - List cipher suites that are supported
- - Explain why TLSv1.2+ is required (prefer TLSv1.3)
- - Note any warnings or vulnerabilities from testssl output
- - Confirm HSTS header appears only on HTTPS responses (not HTTP)
-
-Note on dev certificates: On localhost you should still expect these “NOT ok” items with a self‑signed cert: chain of trust (self‑signed), OCSP/CRL/CT/CAA, and OCSP stapling not offered. To eliminate them, either trust a local CA (e.g., mkcert) or use a real domain and a public CA (e.g., Let’s Encrypt) and then enable OCSP stapling (comments in nginx.conf).
-
-- Rate limiting & timeouts:
- - Show rate-limit test output (how many 200s vs 429s)
- - Explain the rate limit configuration: `rate=10r/m`, `burst=5`, and why these values balance security vs usability
- - Explain timeout settings in nginx.conf: `client_body_timeout`, `client_header_timeout`, `proxy_read_timeout`, `proxy_send_timeout`, with trade-offs
- - Paste relevant lines from access.log showing 429 responses
-
----
-
-## Acceptance Criteria
-
-- ✅ Nginx reverse proxy running; Juice Shop not directly exposed
-- ✅ Security headers present over HTTP/HTTPS; HSTS only on HTTPS
-- ✅ TLS enabled and scanned; HSTS verified; outputs captured
-- ✅ Rate limiting returns 429 on excessive login attempts; logs captured; timeouts discussed
-- ✅ All outputs committed under `labs/lab11/`
-
----
-
-## Cleanup
-
-After completing the lab:
-
-```bash
-# Stop and remove containers
-cd labs/lab11 # if not already there
-docker compose down
-
-# Optional: Remove generated certificates
-# rm -rf labs/lab11/reverse-proxy/certs/*
-
-# Check disk space
-docker system df
-```
-
----
-
-## How to Submit
-
-1. Create a branch and push it to your fork:
-```bash
-git switch -c feature/lab11
-# create labs/submission11.md with your findings
-git add labs/lab11/ labs/submission11.md
-git commit -m "docs: add lab11 — nginx reverse proxy hardening"
-git push -u origin feature/lab11
-```
-2. Open a PR from your fork’s `feature/lab11` → course repo’s `main`.
-3. In the PR description include:
-```text
-- [x] Task 1 — Reverse proxy compose setup
-- [x] Task 2 — Security headers verification
-- [x] Task 3 — TLS + HSTS + rate limiting + timeouts (+ testssl)
-```
-4. Submit the PR URL via Moodle before the deadline.
-
----
-
-## Rubric (10 pts)
-
-| Criterion | Points |
-| ----------------------------------------------------- | -----: |
-| Task 1 — Reverse proxy compose setup | 2.0 |
-| Task 2 — Security headers (HTTP/HTTPS) | 3.0 |
-| Task 3 — TLS, HSTS, rate limiting & timeouts | 5.0 |
-| Total | 10.0 |
-
----
-
-## Guidelines
-
-- Keep app container internal; only expose Nginx ports to host
-- Use `add_header ... always;` so headers appear even on errors/redirects
-- Place HSTS only on HTTPS server blocks
-- Start CSP in Report-Only and iterate; Juice Shop is JS-heavy and can break under strict CSP
-- Choose rate limits that balance security and usability; document your rationale
-
-
-### Resources
-
-- Nginx security headers: https://nginx.org/en/docs/http/ngx_http_headers_module.html
-- TLS config guidelines: https://ssl-config.mozilla.org/
-- testssl.sh: https://github.com/drwetter/testssl.sh
-- Permissions Policy: https://www.w3.org/TR/permissions-policy-1/
-
-
diff --git a/labs/lab11/docker-compose.yml b/labs/lab11/docker-compose.yml
deleted file mode 100644
index da5002c1..00000000
--- a/labs/lab11/docker-compose.yml
+++ /dev/null
@@ -1,19 +0,0 @@
-services:
- juice:
- image: bkimminich/juice-shop:v19.0.0
- restart: unless-stopped
- expose:
- - "3000"
-
- nginx:
- image: nginx:stable-alpine
- restart: unless-stopped
- depends_on:
- - juice
- ports:
- - "8080:8080" # HTTP (will redirect to HTTPS)
- - "8443:8443" # HTTPS
- volumes:
- - ./reverse-proxy/nginx.conf:/etc/nginx/nginx.conf:ro
- - ./reverse-proxy/certs:/etc/nginx/certs:ro
- - ./logs:/var/log/nginx:rw
diff --git a/labs/lab11/reverse-proxy/nginx.conf b/labs/lab11/reverse-proxy/nginx.conf
deleted file mode 100644
index b90f6c47..00000000
--- a/labs/lab11/reverse-proxy/nginx.conf
+++ /dev/null
@@ -1,127 +0,0 @@
-user nginx;
-worker_processes auto;
-
-events { worker_connections 1024; }
-
-http {
- include /etc/nginx/mime.types;
- default_type application/octet-stream;
- sendfile on;
- keepalive_timeout 10;
- server_tokens off;
- gzip off;
-
- # Security-focused logs
- log_format security '$remote_addr - $remote_user [$time_local] '
- '"$request" $status $body_bytes_sent '
- '"$http_referer" "$http_user_agent" '
- 'rt=$request_time uct=$upstream_connect_time '
- 'urt=$upstream_response_time';
- access_log /var/log/nginx/access.log security;
- error_log /var/log/nginx/error.log warn;
-
- # Upstream app
- upstream juice {
- server juice:3000;
- keepalive 32;
- }
-
- # Rate limit zone for login
- # ~10 req/min per IP, burst of 5
- limit_req_zone $binary_remote_addr zone=login:10m rate=10r/m;
- limit_req_status 429;
-
- map $http_upgrade $connection_upgrade { default upgrade; '' close; }
-
- # Common proxy settings
- proxy_set_header Host $host;
- proxy_set_header X-Real-IP $remote_addr;
- proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
- proxy_set_header X-Forwarded-Proto $scheme;
- proxy_http_version 1.1;
- proxy_set_header Connection $connection_upgrade;
- proxy_set_header Upgrade $http_upgrade;
- # Prevent upstream TLS BREACH vector by disabling compression from upstream
- proxy_set_header Accept-Encoding "";
- proxy_read_timeout 30s;
- proxy_send_timeout 30s;
- proxy_connect_timeout 5s;
- proxy_hide_header X-Powered-By;
- # Hide upstream headers to avoid duplicates and enforce policy at the proxy
- proxy_hide_header X-Frame-Options;
- proxy_hide_header X-Content-Type-Options;
- proxy_hide_header Referrer-Policy;
- proxy_hide_header Permissions-Policy;
- proxy_hide_header Cross-Origin-Opener-Policy;
- proxy_hide_header Cross-Origin-Resource-Policy;
- proxy_hide_header Content-Security-Policy;
- proxy_hide_header Content-Security-Policy-Report-Only;
- proxy_hide_header Access-Control-Allow-Origin;
-
- # HTTP server (redirect to HTTPS)
- server {
- listen 8080;
- listen [::]:8080;
- server_name _;
-
- # Core headers (also on redirects)
- add_header X-Frame-Options "DENY" always;
- add_header X-Content-Type-Options "nosniff" always;
- add_header Referrer-Policy "strict-origin-when-cross-origin" always;
- add_header Permissions-Policy "camera=(), geolocation=(), microphone=()" always;
- add_header Cross-Origin-Opener-Policy "same-origin" always;
- add_header Cross-Origin-Resource-Policy "same-origin" always;
- add_header Content-Security-Policy-Report-Only "default-src 'self'; img-src 'self' data:; script-src 'self' 'unsafe-inline' 'unsafe-eval'; style-src 'self' 'unsafe-inline'" always;
-
- return 308 https://$host:8443$request_uri;
- }
-
- # HTTPS server
- server {
- listen 8443 ssl;
- listen [::]:8443 ssl;
- http2 on;
- server_name _;
-
- ssl_certificate /etc/nginx/certs/localhost.crt;
- ssl_certificate_key /etc/nginx/certs/localhost.key;
- ssl_session_timeout 10m;
- ssl_session_cache shared:SSL:10m;
- ssl_protocols TLSv1.2 TLSv1.3;
- ssl_ciphers "TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256:TLS_AES_128_GCM_SHA256:EECDH+AESGCM:EDH+AESGCM";
- ssl_prefer_server_ciphers on;
- ssl_stapling off;
- # If using a publicly-trusted certificate, you may enable OCSP stapling:
- # ssl_stapling on;
- # ssl_stapling_verify on;
- # resolver 1.1.1.1 8.8.8.8 valid=300s;
- # resolver_timeout 5s;
- # ssl_trusted_certificate /etc/ssl/certs/ca-certificates.crt;
-
- client_max_body_size 2m;
- client_body_timeout 10s;
- client_header_timeout 10s;
- keepalive_timeout 10s;
- send_timeout 10s;
-
- # Security headers (include HSTS here only)
- add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;
- add_header X-Frame-Options "DENY" always;
- add_header X-Content-Type-Options "nosniff" always;
- add_header Referrer-Policy "strict-origin-when-cross-origin" always;
- add_header Permissions-Policy "camera=(), geolocation=(), microphone=()" always;
- add_header Cross-Origin-Opener-Policy "same-origin" always;
- add_header Cross-Origin-Resource-Policy "same-origin" always;
- add_header Content-Security-Policy-Report-Only "default-src 'self'; img-src 'self' data:; script-src 'self' 'unsafe-inline' 'unsafe-eval'; style-src 'self' 'unsafe-inline'" always;
-
- location = /rest/user/login {
- limit_req zone=login burst=5 nodelay;
- limit_req_log_level warn;
- proxy_pass http://juice;
- }
-
- location / {
- proxy_pass http://juice;
- }
- }
-}
diff --git a/labs/lab12.md b/labs/lab12.md
deleted file mode 100644
index 1bf1af41..00000000
--- a/labs/lab12.md
+++ /dev/null
@@ -1,401 +0,0 @@
-# Lab 12 — Kata Containers: VM-backed Container Sandboxing (Local)
-
-
-
-
-
-> Goal: Run OWASP Juice Shop under Kata Containers to experience VM-backed container isolation, compare it with the default runc runtime, and document security/operational trade-offs.
-> Deliverable: A PR from `feature/lab12` with `labs/submission12.md` containing setup evidence, runtime comparisons (runc vs kata), isolation tests, and a brief performance summary with recommendations.
-
----
-
-## Overview
-
-In this lab you will practice:
-- Installing/Configuring Kata Containers as a Docker/containerd runtime (Linux)
-- Running the same workload (Juice Shop) with `runc` vs `kata-runtime`
-- Observing isolation differences (guest kernel, process visibility, restricted operations)
-- Measuring basic performance characteristics and trade-offs
-
-> VM-backed sandboxes like Kata place each container/pod inside a lightweight VM, adding a strong isolation boundary while preserving container UX.
-
----
-
-## Prerequisites
-
-Before starting, ensure you have:
-- ✅ Linux host with hardware virtualization enabled (Intel VT-x or AMD-V)
-  - Check: `grep -Ec '(vmx|svm)' /proc/cpuinfo` (should return a count > 0)
- - Nested virtualization required if running inside a VM
-- ✅ containerd (1.7+) and nerdctl (1.7+) with root/sudo privileges
-- ✅ `jq`, `curl`, and `awk` installed
-- ✅ At least 4GB RAM and 10GB free disk space
-- ✅ ~60-90 minutes available (installation can take time)
-
-Install containerd + nerdctl (example on Debian/Ubuntu):
-```bash
-sudo apt-get update && sudo apt-get install -y containerd
-sudo containerd config default | sudo tee /etc/containerd/config.toml >/dev/null
-sudo systemctl enable --now containerd
-
-# Install nerdctl (binary)
-VER=2.2.0
-curl -fL -o /tmp/nerdctl.tgz "https://github.com/containerd/nerdctl/releases/download/v${VER}/nerdctl-${VER}-linux-amd64.tar.gz"
-sudo tar -C /usr/local/bin -xzf /tmp/nerdctl.tgz nerdctl && rm /tmp/nerdctl.tgz
-
-containerd --version
-sudo nerdctl --version
-
-# Prepare working directories
-mkdir -p labs/lab12/{setup,runc,kata,isolation,bench,analysis}
-```
-
-If you plan to use the Kata assets installer, ensure `zstd` is available for extracting the release tarball:
-```bash
-sudo apt-get install -y zstd jq
-```
-
----
-
-## Tasks
-
-### Task 1 — Install and Configure Kata (2 pts)
-⏱️ **Estimated time:** 20-30 minutes
-
-**Objective:** Install Kata and make it available to containerd (nerdctl) as `io.containerd.kata.v2`.
-
-#### 1.1: Install Kata
-
-- Build the Kata Rust runtime in a container and copy the shim to your host:
-
-```bash
-# Build inside a Rust container; output goes to labs/lab12/setup/kata-out/
-bash labs/lab12/setup/build-kata-runtime.sh
-
-# Install the shim onto your host PATH (requires sudo)
-sudo install -m 0755 labs/lab12/setup/kata-out/containerd-shim-kata-v2 /usr/local/bin/
-command -v containerd-shim-kata-v2 && containerd-shim-kata-v2 --version | tee labs/lab12/setup/kata-built-version.txt
-```
-
-Notes:
-- The runtime alone is not sufficient; Kata also needs a guest kernel + rootfs image. Prefer your distro packages for these artifacts, or follow the upstream docs to obtain them. If you already have Kata installed, replacing just the shim binary is typically sufficient for this lab.
-
-- Install Kata assets and default config (runtime-rs):
-```bash
-sudo bash labs/lab12/scripts/install-kata-assets.sh # downloads kata-static and wires configuration
-```
- - If you see an error like "load TOML config failed" when running a Kata container, it means the default configuration file is missing. The script above creates `/etc/kata-containers/runtime-rs/configuration.toml` pointing to the installed defaults.
-
-#### 1.2: Configure containerd + nerdctl
-- Enable `io.containerd.kata.v2` per Kata docs (Kata 3’s shim is `containerd-shim-kata-v2`).
-- Minimal config example for config version 3 (containerd 2.x and newer):
-```toml
-[plugins.'io.containerd.cri.v1.runtime'.containerd.runtimes.kata]
- runtime_type = 'io.containerd.kata.v2'
-```
- - Legacy configs may use:
-```toml
-[plugins.'io.containerd.grpc.v1.cri'.containerd.runtimes.kata]
- runtime_type = 'io.containerd.kata.v2'
-```
-
-Automated update (recommended):
-```bash
-sudo bash labs/lab12/scripts/configure-containerd-kata.sh # updates /etc/containerd/config.toml
-```
-- Restart and verify a test container:
-```bash
-sudo systemctl restart containerd
-sudo nerdctl run --rm --runtime io.containerd.kata.v2 alpine:3.19 uname -a
-```
-
-In `labs/submission12.md`, document:
-
-**Task 1 Requirements:**
-- Show the shim `containerd-shim-kata-v2 --version`
-- Show a successful test run with `sudo nerdctl run --runtime io.containerd.kata.v2 ...`
-
----
-
-### Task 2 — Run and Compare Containers (runc vs kata) (3 pts)
-⏱️ **Estimated time:** 15-20 minutes
-
-**Objective:** Run workloads with both runtimes and compare their environments.
-
-#### 2.1: Start runc container (Juice Shop)
-```bash
-# runc (default under nerdctl) - full application
-sudo nerdctl run -d --name juice-runc -p 3012:3000 bkimminich/juice-shop:v19.0.0
-
-# Wait for readiness
-sleep 10
-curl -s -o /dev/null -w "juice-runc: HTTP %{http_code}\n" http://localhost:3012 | tee labs/lab12/runc/health.txt
-```
-
-#### 2.2: Run Kata containers (Alpine-based tests)
-
-> **Note:** Due to a known issue with nerdctl + Kata runtime-rs v3 and long-running detached containers,
-> we'll use short-lived Alpine containers for Kata demonstrations.
-
-```bash
-echo "=== Kata Container Tests ==="
-sudo nerdctl run --rm --runtime io.containerd.kata.v2 alpine:3.19 uname -a | tee labs/lab12/kata/test1.txt
-sudo nerdctl run --rm --runtime io.containerd.kata.v2 alpine:3.19 uname -r | tee labs/lab12/kata/kernel.txt
-sudo nerdctl run --rm --runtime io.containerd.kata.v2 alpine:3.19 sh -c "grep 'model name' /proc/cpuinfo | head -1" | tee labs/lab12/kata/cpu.txt
-```
-
-#### 2.3: Kernel comparison (Key finding)
-
-```bash
-echo "=== Kernel Version Comparison ===" | tee labs/lab12/analysis/kernel-comparison.txt
-echo -n "Host kernel (runc uses this): " | tee -a labs/lab12/analysis/kernel-comparison.txt
-uname -r | tee -a labs/lab12/analysis/kernel-comparison.txt
-
-echo -n "Kata guest kernel: " | tee -a labs/lab12/analysis/kernel-comparison.txt
-sudo nerdctl run --rm --runtime io.containerd.kata.v2 alpine:3.19 cat /proc/version | tee -a labs/lab12/analysis/kernel-comparison.txt
-```
-
-#### 2.4: CPU virtualization check
-
-```bash
-echo "=== CPU Model Comparison ===" | tee labs/lab12/analysis/cpu-comparison.txt
-echo "Host CPU:" | tee -a labs/lab12/analysis/cpu-comparison.txt
-grep "model name" /proc/cpuinfo | head -1 | tee -a labs/lab12/analysis/cpu-comparison.txt
-
-echo "Kata VM CPU:" | tee -a labs/lab12/analysis/cpu-comparison.txt
-sudo nerdctl run --rm --runtime io.containerd.kata.v2 alpine:3.19 sh -c "grep 'model name' /proc/cpuinfo | head -1" | tee -a labs/lab12/analysis/cpu-comparison.txt
-```
-
-In `labs/submission12.md`, document:
-
-**Task 2 Requirements:**
-- Show juice-runc health check (HTTP 200 from port 3012)
-- Show Kata containers running successfully with `--runtime io.containerd.kata.v2`
-- Compare kernel versions:
- - runc uses host kernel (same as `uname -r`)
- - Kata uses separate guest kernel (6.12.47 or similar)
-- Compare CPU models (real vs virtualized)
-- Explain isolation implications:
- - **runc**: ?
- - **Kata**: ?
-
----
-
-### Task 3 — Isolation Tests (3 pts)
-⏱️ **Estimated time:** 15 minutes
-
-**Objective:** Observe and compare isolation characteristics between runc and Kata.
-
-#### 3.1: Kernel ring buffer (dmesg) access
-
-This demonstrates the most significant isolation difference:
-
-```bash
-echo "=== dmesg Access Test ===" | tee labs/lab12/isolation/dmesg.txt
-
-echo "Kata VM (separate kernel boot logs):" | tee -a labs/lab12/isolation/dmesg.txt
-sudo nerdctl run --rm --runtime io.containerd.kata.v2 alpine:3.19 dmesg 2>&1 | head -5 | tee -a labs/lab12/isolation/dmesg.txt
-```
-
-**Key observation:** Kata containers show VM boot logs, proving they run in a separate kernel.
-
-#### 3.2: /proc filesystem visibility
-
-```bash
-echo "=== /proc Entries Count ===" | tee labs/lab12/isolation/proc.txt
-
-echo -n "Host: " | tee -a labs/lab12/isolation/proc.txt
-ls /proc | wc -l | tee -a labs/lab12/isolation/proc.txt
-
-echo -n "Kata VM: " | tee -a labs/lab12/isolation/proc.txt
-sudo nerdctl run --rm --runtime io.containerd.kata.v2 alpine:3.19 sh -c "ls /proc | wc -l" | tee -a labs/lab12/isolation/proc.txt
-```
-
-#### 3.3: Network interfaces
-
-```bash
-echo "=== Network Interfaces ===" | tee labs/lab12/isolation/network.txt
-
-echo "Kata VM network:" | tee -a labs/lab12/isolation/network.txt
-sudo nerdctl run --rm --runtime io.containerd.kata.v2 alpine:3.19 ip addr | tee -a labs/lab12/isolation/network.txt
-```
-
-#### 3.4: Kernel modules
-
-```bash
-echo "=== Kernel Modules Count ===" | tee labs/lab12/isolation/modules.txt
-
-echo -n "Host kernel modules: " | tee -a labs/lab12/isolation/modules.txt
-ls /sys/module | wc -l | tee -a labs/lab12/isolation/modules.txt
-
-echo -n "Kata guest kernel modules: " | tee -a labs/lab12/isolation/modules.txt
-sudo nerdctl run --rm --runtime io.containerd.kata.v2 alpine:3.19 sh -c "ls /sys/module 2>/dev/null | wc -l" | tee -a labs/lab12/isolation/modules.txt
-```
-
-In `labs/submission12.md`, document:
-
-**Task 3 Requirements:**
-- Show dmesg output differences (Kata shows VM boot logs, proving separate kernel)
-- Compare /proc filesystem visibility
-- Show network interface configuration in Kata VM
-- Compare kernel module counts (host vs guest VM)
-- Explain isolation boundary differences:
- - **runc**: ?
- - **kata**: ?
-- Discuss security implications:
- - Container escape in runc = ?
- - Container escape in Kata = ?
-
----
-
-### Task 4 — Performance Comparison (2 pts)
-⏱️ **Estimated time:** 10 minutes
-
-**Objective:** Compare startup time and overhead between runc and Kata.
-
-#### 4.1: Container startup time comparison
-
-```bash
-echo "=== Startup Time Comparison ===" | tee labs/lab12/bench/startup.txt
-
-echo "runc:" | tee -a labs/lab12/bench/startup.txt
-{ time sudo nerdctl run --rm alpine:3.19 echo "test" ; } 2>&1 | grep real | tee -a labs/lab12/bench/startup.txt
-
-echo "Kata:" | tee -a labs/lab12/bench/startup.txt
-{ time sudo nerdctl run --rm --runtime io.containerd.kata.v2 alpine:3.19 echo "test" ; } 2>&1 | grep real | tee -a labs/lab12/bench/startup.txt
-```
-
-#### 4.2: HTTP response latency (juice-runc only)
-
-```bash
-echo "=== HTTP Latency Test (juice-runc) ===" | tee labs/lab12/bench/http-latency.txt
-out="labs/lab12/bench/curl-3012.txt"
-: > "$out"
-
-for i in $(seq 1 50); do
- curl -s -o /dev/null -w "%{time_total}\n" http://localhost:3012/ >> "$out"
-done
-
-echo "Results for port 3012 (juice-runc):" | tee -a labs/lab12/bench/http-latency.txt
-awk 'NR==1 {min=$1; max=$1} {s+=$1; n++; if($1<min)min=$1; if($1>max)max=$1}
-  END {if(n>0) printf "avg=%.4fs min=%.4fs max=%.4fs n=%d\n", s/n, min, max, n}' "$out" | tee -a labs/lab12/bench/http-latency.txt
-```
-
-In `labs/submission12.md`, document:
-
-**Task 4 Requirements:**
-- Show startup time comparison (runc: <1s, Kata: 3-5s)
-- Show HTTP latency for juice-runc baseline
-- Analyze performance tradeoffs:
- - **Startup overhead**: ?
- - **Runtime overhead**: ?
- - **CPU overhead**: ?
-- Interpret when to use each:
- - **Use runc when**: ?
- - **Use Kata when**: ?
-
----
-
-## Acceptance Criteria
-
-- ✅ Kata shim installed and verified (`containerd-shim-kata-v2 --version`)
-- ✅ containerd configured; runtime `io.containerd.kata.v2` used for the Kata test containers
-- ✅ runc container reachable over HTTP and Kata test containers run successfully; environment differences captured
-- ✅ Isolation tests executed and results summarized
-- ✅ Basic latency snapshot recorded and discussed
-- ✅ All artifacts saved under `labs/lab12/` and committed
-
----
-
-## Known Issues and Troubleshooting
-
-### nerdctl + Kata runtime-rs detached container issue
-
-**Symptom:** Long-running detached containers fail with:
-```
-FATA[0001] failed to create shim task: Others("failed to handle message create container
-Caused by:
- 0: open stdout
- 1: No such file or directory (os error 2)
-```
-
-**Root Cause:** Race condition in logging initialization between nerdctl and Kata runtime-rs v3.
-
-**Workarounds:**
-1. Use short-lived/interactive containers (as in this lab)
-2. Use Kubernetes with Kata (fully supported)
-3. Use Docker with older Kata versions
-4. Use containerd's `ctr` command directly
-
-**Status:** Known issue, fix expected in future releases.
-
-### Verifying Kata is working
-
-If you encounter issues, verify Kata basics:
-
-```bash
-# Test simple execution
-sudo nerdctl run --rm --runtime io.containerd.kata.v2 alpine:3.19 echo "Kata works"
-
-# Check kernel version (should be 6.12.47 or similar, NOT your host kernel)
-sudo nerdctl run --rm --runtime io.containerd.kata.v2 alpine:3.19 uname -r
-
-# Check Kata shim
-ls -la /usr/local/bin/containerd-shim-kata-v2
-containerd-shim-kata-v2 --version
-
-# Check containerd logs
-sudo journalctl -u containerd -n 50 --no-pager | grep -i kata
-```
-
----
-
-## How to Submit
-
-1. Create a branch and push it to your fork:
-```bash
-git switch -c feature/lab12
-# create labs/submission12.md with your findings
-git add labs/lab12/ labs/submission12.md
-git commit -m "docs: add lab12 — kata containers sandboxing"
-git push -u origin feature/lab12
-```
-2. Open a PR from your fork’s `feature/lab12` → course repo’s `main`.
-3. In the PR description include:
-```text
-- [x] Task 1 — Kata install + runtime config
-- [x] Task 2 — runc vs kata runtime comparison
-- [x] Task 3 — Isolation tests
-- [x] Task 4 — Basic performance snapshot
-```
-4. Submit the PR URL via Moodle before the deadline.
-
----
-
-## Rubric (10 pts)
-
-| Criterion | Points |
-| ------------------------------------------------------ | -----: |
-| Task 1 — Install + Configure Kata | 2.0 |
-| Task 2 — Run and Compare (runc vs kata) | 3.0 |
-| Task 3 — Isolation Tests | 3.0 |
-| Task 4 — Performance Snapshot | 2.0 |
-| Total | 10.0 |
-
----
-
-## Guidelines
-
-- Prefer non-privileged containers; avoid `--privileged` unless a test explicitly calls for it
-- Use containerd+nerdctl with `io.containerd.kata.v2` per Kata 3 docs (Docker `--runtime=kata` is legacy)
-- Nested virtualization must be enabled if inside a VM (check your cloud provider or hypervisor settings)
-- Use clear, concise evidence in `submission12.md` and focus your analysis on isolation trade-offs vs operational overhead
-
-
-## References
-
-- Kata Containers: https://github.com/kata-containers/kata-containers
-- Install docs (Kata 3): https://github.com/kata-containers/kata-containers/tree/main/docs/install
-- containerd runtime config: https://github.com/kata-containers/kata-containers/tree/main/docs
-
-
diff --git a/labs/lab12/scripts/configure-containerd-kata.sh b/labs/lab12/scripts/configure-containerd-kata.sh
deleted file mode 100755
index 163133af..00000000
--- a/labs/lab12/scripts/configure-containerd-kata.sh
+++ /dev/null
@@ -1,94 +0,0 @@
-#!/usr/bin/env bash
-set -euo pipefail
-
-# configure-containerd-kata.sh
-# Idempotently ensure containerd has the Kata runtime configured:
-# [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.kata]
-# runtime_type = "io.containerd.kata.v2"
-#
-# Usage:
-# sudo bash labs/lab12/scripts/configure-containerd-kata.sh
-
-CONF_DEFAULT="/etc/containerd/config.toml"
-# Allow override via $CONF or first CLI arg
-CONF="${CONF:-${1:-$CONF_DEFAULT}}"
-TMP=$(mktemp)
-
-backup() {
- if [ -f "$CONF" ]; then
- cp -a "$CONF" "${CONF}.$(date +%Y%m%d%H%M%S).bak"
- fi
-}
-
-ensure_default() {
- if [ ! -s "$CONF" ]; then
- echo "Generating default containerd config at $CONF" >&2
- mkdir -p "$(dirname "$CONF")"
- containerd config default > "$CONF"
- fi
-}
-
-detect_header() {
- # Prefer v3 split-CRI path if present; otherwise fallback to grpc path
- if grep -q "^\[plugins\.'io\.containerd\.cri\.v1\.runtime'\]" "$CONF"; then
- echo "[plugins.'io.containerd.cri.v1.runtime'.containerd.runtimes.kata]"
- else
- echo "[plugins.'io.containerd.grpc.v1.cri'.containerd.runtimes.kata]"
- fi
-}
-
-insert_or_update_kata() {
- local header
- header=$(detect_header)
- local value=" runtime_type = 'io.containerd.kata.v2'"
-
- # Process file: update runtime_type inside the kata table if it exists,
- # otherwise append a new table at the end.
- awk -v hdr="$header" -v val="$value" '
- BEGIN { inside=0; updated=0 }
- {
- if ($0 == hdr) {
- print $0; inside=1; next
- }
- if (inside) {
- if ($0 ~ /^\[/) {
- if (!updated) print val
- inside=0
- print $0
- next
- }
- if ($0 ~ /^\s*runtime_type\s*=\s*/){
- print val; updated=1; next
- }
- print $0; next
- }
- print $0
- }
-  END {
-    # If the kata table was the last table and lacked runtime_type, add it here.
-    if (inside && !updated) print val
-    # If the header never appeared at all, the grep fallback below appends the table.
-  }
- ' "$CONF" > "$TMP"
-
- if ! grep -qF "$header" "$TMP"; then
- {
- printf '\n%s\n%s\n' "$header" "$value"
- } >> "$TMP"
- fi
-
- install -m 0644 "$TMP" "$CONF"
-}
-
-main() {
- backup
- ensure_default
- insert_or_update_kata
- echo "Updated $CONF with Kata runtime: io.containerd.kata.v2" >&2
- echo "Restart containerd to apply: sudo systemctl restart containerd" >&2
-}
-
-main "$@"
diff --git a/labs/lab12/scripts/install-kata-assets.sh b/labs/lab12/scripts/install-kata-assets.sh
deleted file mode 100755
index c3c586d9..00000000
--- a/labs/lab12/scripts/install-kata-assets.sh
+++ /dev/null
@@ -1,79 +0,0 @@
-#!/usr/bin/env bash
-set -euo pipefail
-
-# install-kata-assets.sh
-# Download and install Kata Containers static assets (kernel, rootfs image,
-# default runtime-rs configuration) under /opt/kata, and ensure a
-# configuration file exists in an expected path for runtime-rs.
-#
-# Usage:
-# sudo bash labs/lab12/scripts/install-kata-assets.sh [KATA_VER]
-#
-# Notes:
-# - Requires: curl, jq, tar (with zstd support), and root privileges.
-# - Creates or updates a symlink at:
-# /etc/kata-containers/runtime-rs/configuration.toml
-# pointing to the installed default configuration.
-
-VER_ARG=${1:-}
-ARCH=$(uname -m)
-case ${ARCH} in
- x86_64) ARCH=amd64 ;;
- aarch64|arm64) ARCH=arm64 ;;
- *) echo "Unsupported architecture: $(uname -m)" >&2; exit 1 ;;
-esac
-
-if [[ -n "${VER_ARG}" ]]; then
- KATA_VER=$(echo "${VER_ARG}" | sed -E 's/^v//')
-else
- KATA_VER=$(curl -fsSL https://api.github.com/repos/kata-containers/kata-containers/releases/latest | jq -r .tag_name)
- KATA_VER=${KATA_VER#v}
-fi
-
-ASSET_URL="https://github.com/kata-containers/kata-containers/releases/download/${KATA_VER}/kata-static-${KATA_VER}-${ARCH}.tar.zst"
-
-echo "Installing Kata static assets ${KATA_VER} for ${ARCH}" >&2
-TMP_TAR=$(mktemp --suffix=.tar.zst)
-curl -fL -o "${TMP_TAR}" "${ASSET_URL}"
-
-# Extract to root; archive lays files under /opt/kata, /usr/local/bin, etc.
-# Prefer explicit decompressor if available to avoid tar invoking external zstd unexpectedly.
-if command -v zstd >/dev/null 2>&1; then
- zstd -d -c "${TMP_TAR}" | tar -xf - -C /
-elif command -v unzstd >/dev/null 2>&1; then
- unzstd -c "${TMP_TAR}" | tar -xf - -C /
-elif tar --help 2>/dev/null | grep -q -- '--zstd'; then
- tar --zstd -xf "${TMP_TAR}" -C /
-else
- echo "Missing zstd support to extract ${TMP_TAR}." >&2
- echo "Install the zstd package (e.g., sudo apt-get update && sudo apt-get install -y zstd) and re-run." >&2
- exit 1
-fi
-rm -f "${TMP_TAR}"
-
-# Link configuration to an expected path for runtime-rs
-sudo mkdir -p /etc/kata-containers/runtime-rs
-SRC_CANDIDATES=(
- "/opt/kata/share/defaults/kata-containers/runtime-rs/configuration-dragonball.toml"
- "/opt/kata/share/defaults/kata-containers/configuration-dragonball.toml"
- "/opt/kata/share/defaults/kata-containers/runtime-rs/configuration.toml"
- "/usr/share/defaults/kata-containers/runtime-rs/configuration.toml"
-)
-
-for src in "${SRC_CANDIDATES[@]}"; do
- if [[ -f "$src" ]]; then
- ln -sf "$src" /etc/kata-containers/runtime-rs/configuration.toml
- echo "Linked runtime-rs config -> $src" >&2
- break
- fi
-done
-
-if [[ ! -f /etc/kata-containers/runtime-rs/configuration.toml ]]; then
- echo "Warning: could not find a default runtime-rs configuration in known locations." >&2
- echo "Check /opt/kata/share/defaults/kata-containers/ and create: /etc/kata-containers/runtime-rs/configuration.toml" >&2
- exit 1
-fi
-
-echo "Kata assets installed. Restart containerd and test a kata container." >&2
-echo " sudo systemctl restart containerd" >&2
-echo " sudo nerdctl run --rm --runtime io.containerd.kata.v2 alpine:3.19 uname -a" >&2
diff --git a/labs/lab12/setup/build-kata-runtime.sh b/labs/lab12/setup/build-kata-runtime.sh
deleted file mode 100644
index b909a410..00000000
--- a/labs/lab12/setup/build-kata-runtime.sh
+++ /dev/null
@@ -1,56 +0,0 @@
-#!/usr/bin/env bash
-set -euo pipefail
-
-# Build Kata Containers 3.x Rust runtime (containerd-shim-kata-v2)
-# inside a temporary Rust toolchain container, and place the binary
-# into the provided output directory. This avoids installing build
-# dependencies on the host.
-#
-# Usage:
-# bash labs/lab12/setup/build-kata-runtime.sh
-# # result: labs/lab12/setup/kata-out/containerd-shim-kata-v2
-
-ROOT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")"/../.. && pwd)"
-WORK_DIR="${ROOT_DIR}/lab12/setup/kata-build"
-OUT_DIR="${ROOT_DIR}/lab12/setup/kata-out"
-
-mkdir -p "${WORK_DIR}" "${OUT_DIR}"
-
-echo "Building Kata runtime in Docker..." >&2
-docker run --rm \
- -e CARGO_NET_GIT_FETCH_WITH_CLI=true \
- -v "${WORK_DIR}":/work \
- -v "${OUT_DIR}":/out \
- rust:1.75-bookworm bash -lc '
- set -euo pipefail
- apt-get update && apt-get install -y --no-install-recommends \
- git make gcc pkg-config ca-certificates musl-tools libseccomp-dev && \
- update-ca-certificates || true
-
- # Ensure cargo/rustup are available
- export PATH=/usr/local/cargo/bin:$PATH
- rustc --version; cargo --version; rustup --version || true
-
- cd /work
- if [ ! -d kata-containers ]; then
- git clone --depth 1 https://github.com/kata-containers/kata-containers.git
- fi
- cd kata-containers/src/runtime-rs
-
- # Add MUSL target for static build expected by runtime Makefile
- rustup target add x86_64-unknown-linux-musl || true
-
- # Build the runtime (shim v2)
- make
-
- # Collect the produced binary
- f=$(find target -type f -name containerd-shim-kata-v2 | head -n1)
- if [ -z "$f" ]; then
- echo "ERROR: built binary not found" >&2; exit 1
- fi
- install -m 0755 "$f" /out/containerd-shim-kata-v2
- strip /out/containerd-shim-kata-v2 || true
- /out/containerd-shim-kata-v2 --version || true
- '
-
-echo "Done. Binary saved to: ${OUT_DIR}/containerd-shim-kata-v2" >&2
diff --git a/labs/lab2.md b/labs/lab2.md
deleted file mode 100644
index e66fb964..00000000
--- a/labs/lab2.md
+++ /dev/null
@@ -1,189 +0,0 @@
-# Lab 2 — Threat Modeling with Threagile
-
-
-
-
-> **Goal:** Model OWASP Juice Shop (`bkimminich/juice-shop:v19.0.0`) deployment and generate an automation-first threat model with Threagile.
-> **Deliverable:** A PR from `feature/lab2` to the course repo with `labs/submission2.md` containing Threagile outputs and risk analysis. Submit the PR link via Moodle.
-
----
-
-## Overview
-
-In this lab you will practice:
-- Creating an **as-code** model with **Threagile** and automatically generating **risk reports + diagrams** from YAML
-- Making security-relevant model changes and demonstrating how they **impact the risk landscape**
-- Analyzing threat model outputs and documenting security findings systematically
-
-> Keep using the Juice Shop from Lab 1 (`:19.0.0`) as your target application.
-
----
-
-## Tasks
-
-### Task 1 — Threagile Baseline Model (6 pts)
-
-**Objective:** Use the provided Threagile model to generate a PDF report + diagrams and analyze the baseline risk posture.
-
-#### 1.1: Generate Baseline Threat Model
-
-```bash
-mkdir -p labs/lab2/baseline labs/lab2/secure
-
-docker run --rm -v "$(pwd)":/app/work threagile/threagile \
- -model /app/work/labs/lab2/threagile-model.yaml \
- -output /app/work/labs/lab2/baseline \
- -generate-risks-excel=false -generate-tags-excel=false
-```
-
-#### 1.2: Verify Generated Outputs
-
-Expected files in `labs/lab2/baseline/`:
-- `report.pdf` — full PDF report (includes diagrams)
-- Diagrams: data-flow & data-asset diagrams (PNG)
-- Risk exports: `risks.json`, `stats.json`, `technical-assets.json`
-
-#### 1.3: Risk Analysis and Documentation
-
-Calculate composite scores using these weights:
-- Severity: critical (5) > high (4) > elevated (3) > medium (2) > low (1)
-- Likelihood: very-likely (4) > likely (3) > possible (2) > unlikely (1)
-- Impact: high (3) > medium (2) > low (1)
-- **Composite score** = `Severity*100 + Likelihood*10 + Impact`
-
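As a quick sanity check of the formula, the ratings below are illustrative: a risk rated severity critical (5), likelihood likely (3), impact medium (2) scores 5·100 + 3·10 + 2 = 532.

```bash
# Composite score = Severity*100 + Likelihood*10 + Impact
# Illustrative ratings: severity=critical (5), likelihood=likely (3), impact=medium (2)
sev=5; lik=3; imp=2
echo $(( sev * 100 + lik * 10 + imp ))   # → 532
```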
-In `labs/submission2.md`, document:
-- **Top 5 Risks** table with Severity, Category, Asset, Likelihood, Impact
-- Risk ranking methodology and composite score calculations
-- Analysis of critical security concerns identified
-- Screenshots or references to generated diagrams
-
----
-
-### Task 2 — HTTPS Variant & Risk Comparison (4 pts)
-
-**Objective:** Create a secure variant of the model and demonstrate how security controls affect the threat landscape.
-
-#### 2.1: Create Secure Model Variant
-
-Copy the baseline model and make these specific changes:
-- **User Browser → communication_links → Direct to App**: set `protocol: https`
-- **Reverse Proxy → communication_links**: set `protocol: https`
-- **Persistent Storage**: set `encryption: transparent`
-- Save as: `labs/lab2/threagile-model.secure.yaml`
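The edits might look like the excerpt below. This is only a sketch: the asset and link names must match the identifiers in your baseline model, and the field names follow Threagile's model schema; the `# was:` comments mark the assumed baseline values.

```yaml
technical_assets:
  user-browser:
    # ...
    communication_links:
      direct-to-app:
        protocol: https        # was: http
  persistent-storage:
    # ...
    encryption: transparent    # was: none
```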
-
-#### 2.2: Generate Secure Variant Analysis
-
-```bash
-docker run --rm -v "$(pwd)":/app/work threagile/threagile \
- -model /app/work/labs/lab2/threagile-model.secure.yaml \
- -output /app/work/labs/lab2/secure \
- -generate-risks-excel=false -generate-tags-excel=false
-```
-
-#### 2.3: Generate Risk Comparison
-
-```bash
-jq -n \
- --slurpfile b labs/lab2/baseline/risks.json \
- --slurpfile s labs/lab2/secure/risks.json '
-def tally(x):
-(x | group_by(.category) | map({ (.[0].category): length }) | add) // {};
-(tally($b[0])) as $B |
-(tally($s[0])) as $S |
-(($B + $S) | keys | sort) as $cats |
-[
-"| Category | Baseline | Secure | Δ |",
-"|---|---:|---:|---:|"
-] + (
-$cats | map(
-"| " + . + " | " +
-(($B[.] // 0) | tostring) + " | " +
-(($S[.] // 0) | tostring) + " | " +
-(((($S[.] // 0) - ($B[.] // 0))) | tostring) + " |"
-)
-) | .[]'
-```
-
-In `labs/submission2.md`, document:
-- **Risk Category Delta Table** (Baseline vs Secure vs Δ)
-- **Delta Run Explanation** covering:
- - Specific changes made to the model
- - Observed results in risk categories
- - Analysis of why these changes reduced/modified risks
-- Comparison of diagrams between baseline and secure variants
-
----
-
-## How to Submit
-
-1. Create a branch for this lab and push it to your fork:
-
- ```bash
- git switch -c feature/lab2
- # create labs/submission2.md with your findings
- git add labs/submission2.md labs/lab2/
- git commit -m "docs: add lab2 submission"
- git push -u origin feature/lab2
- ```
-
-2. Open a PR from your fork's `feature/lab2` branch → **course repository's main branch**.
-
-3. In the PR description, include:
-
- ```text
- - [x] Task 1 done — Threagile baseline model + risk analysis
- - [x] Task 2 done — HTTPS variant + risk comparison
- ```
-
-4. **Copy the PR URL** and submit it via **Moodle before the deadline**.
-
----
-
-## Acceptance Criteria
-
-- ✅ Branch `feature/lab2` exists with commits for each task
-- ✅ File `labs/submission2.md` contains required analysis for Tasks 1-2
-- ✅ Threagile baseline and secure models successfully generated
-- ✅ Both `labs/lab2/baseline/` and `labs/lab2/secure/` folders contain complete outputs
-- ✅ Top 5 risks analysis and risk category delta comparison documented
-- ✅ PR from `feature/lab2` → **course repo main branch** is open
-- ✅ PR link submitted via Moodle before the deadline
-
----
-
-## Rubric (10 pts)
-
-| Criterion | Points |
-| ------------------------------------------------------------ | -----: |
-| Task 1 — Threagile baseline model + risk analysis | **6** |
-| Task 2 — HTTPS variant + risk comparison analysis | **4** |
-| **Total** | **10** |
-
----
-
-## Guidelines
-
-- Use clear Markdown headers to organize sections in `submission2.md`
-- Include both command outputs and written analysis for each task
-- Document threat modeling process and security findings systematically
-- Ensure all generated artifacts are properly committed to the repository
-
-
-### Threat Modeling Notes
-
-- Model exactly the architecture you're running from Lab 1 (localhost deployment)
-- Use consistent asset/link names between baseline and secure models for accurate diffs
-- Focus on actionable security insights rather than comprehensive risk catalogs
-
-### Technical Tips
-
-- Verify report PDFs open correctly and diagrams render properly
-- Use the provided jq command exactly as shown for consistent delta tables
-- Keep explanations concise; one-page summaries are more valuable than detailed reports
-- Check that the Threagile Docker container has proper file permissions for output generation
-
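One quick way to sanity-check a generated PDF without opening a viewer is to look for the `%PDF` magic bytes at the start of the file (a small helper sketch; the path below assumes the lab's output layout):

```shell
check_pdf() {
  # PDF files begin with the magic bytes "%PDF"
  if [ "$(head -c 4 "$1" 2>/dev/null)" = "%PDF" ]; then
    echo "valid"
  else
    echo "invalid"
  fi
}

check_pdf labs/lab2/baseline/report.pdf
```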
-
diff --git a/labs/lab2/baseline/data-asset-diagram.png b/labs/lab2/baseline/data-asset-diagram.png
new file mode 100644
index 00000000..4457d768
Binary files /dev/null and b/labs/lab2/baseline/data-asset-diagram.png differ
diff --git a/labs/lab2/baseline/data-flow-diagram.png b/labs/lab2/baseline/data-flow-diagram.png
new file mode 100644
index 00000000..a8803816
Binary files /dev/null and b/labs/lab2/baseline/data-flow-diagram.png differ
diff --git a/labs/lab2/baseline/report.pdf b/labs/lab2/baseline/report.pdf
new file mode 100644
index 00000000..4eb1adb8
Binary files /dev/null and b/labs/lab2/baseline/report.pdf differ
diff --git a/labs/lab2/baseline/risks.json b/labs/lab2/baseline/risks.json
new file mode 100644
index 00000000..21c99d9b
--- /dev/null
+++ b/labs/lab2/baseline/risks.json
@@ -0,0 +1 @@
+[{"category":"unencrypted-asset","risk_status":"unchecked","severity":"medium","exploitation_likelihood":"unlikely","exploitation_impact":"medium","title":"\u003cb\u003eUnencrypted Technical Asset\u003c/b\u003e named \u003cb\u003eJuice Shop Application\u003c/b\u003e","synthetic_id":"unencrypted-asset@juice-shop","most_relevant_data_asset":"","most_relevant_technical_asset":"juice-shop","most_relevant_trust_boundary":"","most_relevant_shared_runtime":"","most_relevant_communication_link":"","data_breach_probability":"improbable","data_breach_technical_assets":["juice-shop"]},{"category":"unencrypted-asset","risk_status":"unchecked","severity":"medium","exploitation_likelihood":"unlikely","exploitation_impact":"medium","title":"\u003cb\u003eUnencrypted Technical Asset\u003c/b\u003e named \u003cb\u003ePersistent Storage\u003c/b\u003e","synthetic_id":"unencrypted-asset@persistent-storage","most_relevant_data_asset":"","most_relevant_technical_asset":"persistent-storage","most_relevant_trust_boundary":"","most_relevant_shared_runtime":"","most_relevant_communication_link":"","data_breach_probability":"improbable","data_breach_technical_assets":["persistent-storage"]},{"category":"missing-identity-store","risk_status":"unchecked","severity":"medium","exploitation_likelihood":"unlikely","exploitation_impact":"medium","title":"\u003cb\u003eMissing Identity Store\u003c/b\u003e in the threat model (referencing asset \u003cb\u003eReverse Proxy\u003c/b\u003e as an example)","synthetic_id":"missing-identity-store@reverse-proxy","most_relevant_data_asset":"","most_relevant_technical_asset":"reverse-proxy","most_relevant_trust_boundary":"","most_relevant_shared_runtime":"","most_relevant_communication_link":"","data_breach_probability":"improbable","data_breach_technical_assets":[]},{"category":"unnecessary-technical-asset","risk_status":"unchecked","severity":"low","exploitation_likelihood":"unlikely","exploitation_impact":"low","title":"\u003cb\u003eUnnecessary Technical 
Asset\u003c/b\u003e named \u003cb\u003ePersistent Storage\u003c/b\u003e","synthetic_id":"unnecessary-technical-asset@persistent-storage","most_relevant_data_asset":"","most_relevant_technical_asset":"persistent-storage","most_relevant_trust_boundary":"","most_relevant_shared_runtime":"","most_relevant_communication_link":"","data_breach_probability":"improbable","data_breach_technical_assets":["persistent-storage"]},{"category":"unnecessary-technical-asset","risk_status":"unchecked","severity":"low","exploitation_likelihood":"unlikely","exploitation_impact":"low","title":"\u003cb\u003eUnnecessary Technical Asset\u003c/b\u003e named \u003cb\u003eUser Browser\u003c/b\u003e","synthetic_id":"unnecessary-technical-asset@user-browser","most_relevant_data_asset":"","most_relevant_technical_asset":"user-browser","most_relevant_trust_boundary":"","most_relevant_shared_runtime":"","most_relevant_communication_link":"","data_breach_probability":"improbable","data_breach_technical_assets":["user-browser"]},{"category":"cross-site-scripting","risk_status":"unchecked","severity":"elevated","exploitation_likelihood":"likely","exploitation_impact":"medium","title":"\u003cb\u003eCross-Site Scripting (XSS)\u003c/b\u003e risk at \u003cb\u003eJuice Shop Application\u003c/b\u003e","synthetic_id":"cross-site-scripting@juice-shop","most_relevant_data_asset":"","most_relevant_technical_asset":"juice-shop","most_relevant_trust_boundary":"","most_relevant_shared_runtime":"","most_relevant_communication_link":"","data_breach_probability":"possible","data_breach_technical_assets":["juice-shop"]},{"category":"cross-site-request-forgery","risk_status":"unchecked","severity":"medium","exploitation_likelihood":"very-likely","exploitation_impact":"low","title":"\u003cb\u003eCross-Site Request Forgery (CSRF)\u003c/b\u003e risk at \u003cb\u003eJuice Shop Application\u003c/b\u003e via \u003cb\u003eDirect to App (no proxy)\u003c/b\u003e from \u003cb\u003eUser 
Browser\u003c/b\u003e","synthetic_id":"cross-site-request-forgery@juice-shop@user-browser\u003edirect-to-app-no-proxy","most_relevant_data_asset":"","most_relevant_technical_asset":"juice-shop","most_relevant_trust_boundary":"","most_relevant_shared_runtime":"","most_relevant_communication_link":"user-browser\u003edirect-to-app-no-proxy","data_breach_probability":"improbable","data_breach_technical_assets":["juice-shop"]},{"category":"cross-site-request-forgery","risk_status":"unchecked","severity":"medium","exploitation_likelihood":"very-likely","exploitation_impact":"low","title":"\u003cb\u003eCross-Site Request Forgery (CSRF)\u003c/b\u003e risk at \u003cb\u003eJuice Shop Application\u003c/b\u003e via \u003cb\u003eTo App\u003c/b\u003e from \u003cb\u003eReverse Proxy\u003c/b\u003e","synthetic_id":"cross-site-request-forgery@juice-shop@reverse-proxy\u003eto-app","most_relevant_data_asset":"","most_relevant_technical_asset":"juice-shop","most_relevant_trust_boundary":"","most_relevant_shared_runtime":"","most_relevant_communication_link":"reverse-proxy\u003eto-app","data_breach_probability":"improbable","data_breach_technical_assets":["juice-shop"]},{"category":"container-baseimage-backdooring","risk_status":"unchecked","severity":"medium","exploitation_likelihood":"unlikely","exploitation_impact":"medium","title":"\u003cb\u003eContainer Base Image Backdooring\u003c/b\u003e risk at \u003cb\u003eJuice Shop Application\u003c/b\u003e","synthetic_id":"container-baseimage-backdooring@juice-shop","most_relevant_data_asset":"","most_relevant_technical_asset":"juice-shop","most_relevant_trust_boundary":"","most_relevant_shared_runtime":"","most_relevant_communication_link":"","data_breach_probability":"probable","data_breach_technical_assets":["juice-shop"]},{"category":"missing-build-infrastructure","risk_status":"unchecked","severity":"medium","exploitation_likelihood":"unlikely","exploitation_impact":"medium","title":"\u003cb\u003eMissing Build 
Infrastructure\u003c/b\u003e in the threat model (referencing asset \u003cb\u003eJuice Shop Application\u003c/b\u003e as an example)","synthetic_id":"missing-build-infrastructure@juice-shop","most_relevant_data_asset":"","most_relevant_technical_asset":"juice-shop","most_relevant_trust_boundary":"","most_relevant_shared_runtime":"","most_relevant_communication_link":"","data_breach_probability":"improbable","data_breach_technical_assets":[]},{"category":"missing-waf","risk_status":"unchecked","severity":"low","exploitation_likelihood":"unlikely","exploitation_impact":"low","title":"\u003cb\u003eMissing Web Application Firewall (WAF)\u003c/b\u003e risk at \u003cb\u003eJuice Shop Application\u003c/b\u003e","synthetic_id":"missing-waf@juice-shop","most_relevant_data_asset":"","most_relevant_technical_asset":"juice-shop","most_relevant_trust_boundary":"","most_relevant_shared_runtime":"","most_relevant_communication_link":"","data_breach_probability":"improbable","data_breach_technical_assets":["juice-shop"]},{"category":"unencrypted-communication","risk_status":"unchecked","severity":"elevated","exploitation_likelihood":"likely","exploitation_impact":"high","title":"\u003cb\u003eUnencrypted Communication\u003c/b\u003e named \u003cb\u003eDirect to App (no proxy)\u003c/b\u003e between \u003cb\u003eUser Browser\u003c/b\u003e and \u003cb\u003eJuice Shop Application\u003c/b\u003e transferring authentication data (like credentials, token, session-id, 
etc.)","synthetic_id":"unencrypted-communication@user-browser\u003edirect-to-app-no-proxy@user-browser@juice-shop","most_relevant_data_asset":"","most_relevant_technical_asset":"user-browser","most_relevant_trust_boundary":"","most_relevant_shared_runtime":"","most_relevant_communication_link":"user-browser\u003edirect-to-app-no-proxy","data_breach_probability":"possible","data_breach_technical_assets":["juice-shop"]},{"category":"unencrypted-communication","risk_status":"unchecked","severity":"elevated","exploitation_likelihood":"likely","exploitation_impact":"medium","title":"\u003cb\u003eUnencrypted Communication\u003c/b\u003e named \u003cb\u003eTo App\u003c/b\u003e between \u003cb\u003eReverse Proxy\u003c/b\u003e and \u003cb\u003eJuice Shop Application\u003c/b\u003e","synthetic_id":"unencrypted-communication@reverse-proxy\u003eto-app@reverse-proxy@juice-shop","most_relevant_data_asset":"","most_relevant_technical_asset":"reverse-proxy","most_relevant_trust_boundary":"","most_relevant_shared_runtime":"","most_relevant_communication_link":"reverse-proxy\u003eto-app","data_breach_probability":"possible","data_breach_technical_assets":["juice-shop"]},{"category":"missing-authentication-second-factor","risk_status":"unchecked","severity":"medium","exploitation_likelihood":"unlikely","exploitation_impact":"medium","title":"\u003cb\u003eMissing Two-Factor Authentication\u003c/b\u003e covering communication link \u003cb\u003eDirect to App (no proxy)\u003c/b\u003e from \u003cb\u003eUser Browser\u003c/b\u003e to \u003cb\u003eJuice Shop 
Application\u003c/b\u003e","synthetic_id":"missing-authentication-second-factor@user-browser\u003edirect-to-app-no-proxy@user-browser@juice-shop","most_relevant_data_asset":"","most_relevant_technical_asset":"juice-shop","most_relevant_trust_boundary":"","most_relevant_shared_runtime":"","most_relevant_communication_link":"user-browser\u003edirect-to-app-no-proxy","data_breach_probability":"possible","data_breach_technical_assets":["juice-shop"]},{"category":"missing-authentication-second-factor","risk_status":"unchecked","severity":"medium","exploitation_likelihood":"unlikely","exploitation_impact":"medium","title":"\u003cb\u003eMissing Two-Factor Authentication\u003c/b\u003e covering communication link \u003cb\u003eTo App\u003c/b\u003e from \u003cb\u003eUser Browser\u003c/b\u003e forwarded via \u003cb\u003eReverse Proxy\u003c/b\u003e to \u003cb\u003eJuice Shop Application\u003c/b\u003e","synthetic_id":"missing-authentication-second-factor@reverse-proxy\u003eto-app@reverse-proxy@juice-shop","most_relevant_data_asset":"","most_relevant_technical_asset":"juice-shop","most_relevant_trust_boundary":"","most_relevant_shared_runtime":"","most_relevant_communication_link":"reverse-proxy\u003eto-app","data_breach_probability":"possible","data_breach_technical_assets":["juice-shop"]},{"category":"missing-hardening","risk_status":"unchecked","severity":"medium","exploitation_likelihood":"likely","exploitation_impact":"low","title":"\u003cb\u003eMissing Hardening\u003c/b\u003e risk at \u003cb\u003eJuice Shop 
Application\u003c/b\u003e","synthetic_id":"missing-hardening@juice-shop","most_relevant_data_asset":"","most_relevant_technical_asset":"juice-shop","most_relevant_trust_boundary":"","most_relevant_shared_runtime":"","most_relevant_communication_link":"","data_breach_probability":"improbable","data_breach_technical_assets":["juice-shop"]},{"category":"missing-hardening","risk_status":"unchecked","severity":"medium","exploitation_likelihood":"likely","exploitation_impact":"low","title":"\u003cb\u003eMissing Hardening\u003c/b\u003e risk at \u003cb\u003ePersistent Storage\u003c/b\u003e","synthetic_id":"missing-hardening@persistent-storage","most_relevant_data_asset":"","most_relevant_technical_asset":"persistent-storage","most_relevant_trust_boundary":"","most_relevant_shared_runtime":"","most_relevant_communication_link":"","data_breach_probability":"improbable","data_breach_technical_assets":["persistent-storage"]},{"category":"missing-authentication","risk_status":"unchecked","severity":"elevated","exploitation_likelihood":"likely","exploitation_impact":"medium","title":"\u003cb\u003eMissing Authentication\u003c/b\u003e covering communication link \u003cb\u003eTo App\u003c/b\u003e from \u003cb\u003eReverse Proxy\u003c/b\u003e to \u003cb\u003eJuice Shop Application\u003c/b\u003e","synthetic_id":"missing-authentication@reverse-proxy\u003eto-app@reverse-proxy@juice-shop","most_relevant_data_asset":"","most_relevant_technical_asset":"juice-shop","most_relevant_trust_boundary":"","most_relevant_shared_runtime":"","most_relevant_communication_link":"reverse-proxy\u003eto-app","data_breach_probability":"possible","data_breach_technical_assets":["juice-shop"]},{"category":"unnecessary-data-transfer","risk_status":"unchecked","severity":"low","exploitation_likelihood":"unlikely","exploitation_impact":"low","title":"\u003cb\u003eUnnecessary Data Transfer\u003c/b\u003e of \u003cb\u003eTokens \u0026 Sessions\u003c/b\u003e data at \u003cb\u003eUser Browser\u003c/b\u003e from/to 
\u003cb\u003eJuice Shop Application\u003c/b\u003e","synthetic_id":"unnecessary-data-transfer@tokens-sessions@user-browser@juice-shop","most_relevant_data_asset":"tokens-sessions","most_relevant_technical_asset":"user-browser","most_relevant_trust_boundary":"","most_relevant_shared_runtime":"","most_relevant_communication_link":"","data_breach_probability":"improbable","data_breach_technical_assets":["user-browser"]},{"category":"unnecessary-data-transfer","risk_status":"unchecked","severity":"low","exploitation_likelihood":"unlikely","exploitation_impact":"low","title":"\u003cb\u003eUnnecessary Data Transfer\u003c/b\u003e of \u003cb\u003eTokens \u0026 Sessions\u003c/b\u003e data at \u003cb\u003eUser Browser\u003c/b\u003e from/to \u003cb\u003eReverse Proxy\u003c/b\u003e","synthetic_id":"unnecessary-data-transfer@tokens-sessions@user-browser@reverse-proxy","most_relevant_data_asset":"tokens-sessions","most_relevant_technical_asset":"user-browser","most_relevant_trust_boundary":"","most_relevant_shared_runtime":"","most_relevant_communication_link":"","data_breach_probability":"improbable","data_breach_technical_assets":["user-browser"]},{"category":"server-side-request-forgery","risk_status":"unchecked","severity":"medium","exploitation_likelihood":"likely","exploitation_impact":"low","title":"\u003cb\u003eServer-Side Request Forgery (SSRF)\u003c/b\u003e risk at \u003cb\u003eJuice Shop Application\u003c/b\u003e server-side web-requesting the target \u003cb\u003eWebhook Endpoint\u003c/b\u003e via \u003cb\u003eTo Challenge 
WebHook\u003c/b\u003e","synthetic_id":"server-side-request-forgery@juice-shop@webhook-endpoint@juice-shop\u003eto-challenge-webhook","most_relevant_data_asset":"","most_relevant_technical_asset":"juice-shop","most_relevant_trust_boundary":"","most_relevant_shared_runtime":"","most_relevant_communication_link":"juice-shop\u003eto-challenge-webhook","data_breach_probability":"possible","data_breach_technical_assets":["juice-shop"]},{"category":"server-side-request-forgery","risk_status":"unchecked","severity":"medium","exploitation_likelihood":"likely","exploitation_impact":"low","title":"\u003cb\u003eServer-Side Request Forgery (SSRF)\u003c/b\u003e risk at \u003cb\u003eReverse Proxy\u003c/b\u003e server-side web-requesting the target \u003cb\u003eJuice Shop Application\u003c/b\u003e via \u003cb\u003eTo App\u003c/b\u003e","synthetic_id":"server-side-request-forgery@reverse-proxy@juice-shop@reverse-proxy\u003eto-app","most_relevant_data_asset":"","most_relevant_technical_asset":"reverse-proxy","most_relevant_trust_boundary":"","most_relevant_shared_runtime":"","most_relevant_communication_link":"reverse-proxy\u003eto-app","data_breach_probability":"possible","data_breach_technical_assets":["reverse-proxy"]},{"category":"missing-vault","risk_status":"unchecked","severity":"medium","exploitation_likelihood":"unlikely","exploitation_impact":"medium","title":"\u003cb\u003eMissing Vault (Secret Storage)\u003c/b\u003e in the threat model (referencing asset \u003cb\u003eJuice Shop Application\u003c/b\u003e as an example)","synthetic_id":"missing-vault@juice-shop","most_relevant_data_asset":"","most_relevant_technical_asset":"juice-shop","most_relevant_trust_boundary":"","most_relevant_shared_runtime":"","most_relevant_communication_link":"","data_breach_probability":"improbable","data_breach_technical_assets":[]}]
\ No newline at end of file
diff --git a/labs/lab2/baseline/stats.json b/labs/lab2/baseline/stats.json
new file mode 100644
index 00000000..88cd78be
--- /dev/null
+++ b/labs/lab2/baseline/stats.json
@@ -0,0 +1 @@
+{"risks":{"critical":{"accepted":0,"false-positive":0,"in-discussion":0,"in-progress":0,"mitigated":0,"unchecked":0},"elevated":{"accepted":0,"false-positive":0,"in-discussion":0,"in-progress":0,"mitigated":0,"unchecked":4},"high":{"accepted":0,"false-positive":0,"in-discussion":0,"in-progress":0,"mitigated":0,"unchecked":0},"low":{"accepted":0,"false-positive":0,"in-discussion":0,"in-progress":0,"mitigated":0,"unchecked":5},"medium":{"accepted":0,"false-positive":0,"in-discussion":0,"in-progress":0,"mitigated":0,"unchecked":14}}}
\ No newline at end of file
diff --git a/labs/lab2/baseline/technical-assets.json b/labs/lab2/baseline/technical-assets.json
new file mode 100644
index 00000000..45457f1e
--- /dev/null
+++ b/labs/lab2/baseline/technical-assets.json
@@ -0,0 +1 @@
+{"juice-shop":{"Id":"juice-shop","Title":"Juice Shop Application","Description":"OWASP Juice Shop server (Node.js/Express, v19.0.0).","Usage":0,"Type":1,"Size":2,"Technology":6,"Machine":2,"Internet":false,"MultiTenant":false,"Redundant":false,"CustomDevelopedParts":true,"OutOfScope":false,"UsedAsClientByHuman":false,"Encryption":0,"JustificationOutOfScope":"","Owner":"Lab Owner","Confidentiality":1,"Integrity":2,"Availability":2,"JustificationCiaRating":"In-scope web application (contains all business logic and vulnerabilities by design).","Tags":["app","nodejs"],"DataAssetsProcessed":["user-accounts","orders","product-catalog","tokens-sessions"],"DataAssetsStored":["logs"],"DataFormatsAccepted":[0],"CommunicationLinks":[{"Id":"juice-shop\u003eto-challenge-webhook","SourceId":"juice-shop","TargetId":"webhook-endpoint","Title":"To Challenge WebHook","Description":"Optional outbound callback (HTTP POST) to external WebHook when a challenge is solved.","Protocol":2,"Tags":["egress"],"VPN":false,"IpFiltered":false,"Readonly":false,"Authentication":0,"Authorization":0,"Usage":0,"DataAssetsSent":["orders"],"DataAssetsReceived":null,"DiagramTweakWeight":1,"DiagramTweakConstraint":true}],"DiagramTweakOrder":0,"RAA":70.02881844380403},"persistent-storage":{"Id":"persistent-storage","Title":"Persistent Storage","Description":"Host-mounted volume for database, file uploads, and logs.","Usage":1,"Type":2,"Size":3,"Technology":10,"Machine":1,"Internet":false,"MultiTenant":false,"Redundant":false,"CustomDevelopedParts":false,"OutOfScope":false,"UsedAsClientByHuman":false,"Encryption":0,"JustificationOutOfScope":"","Owner":"Lab Owner","Confidentiality":1,"Integrity":2,"Availability":2,"JustificationCiaRating":"Local disk storage for the container – not directly exposed, but if compromised it contains sensitive data (database and 
logs).","Tags":["storage","volume"],"DataAssetsProcessed":[],"DataAssetsStored":["logs","user-accounts","orders","product-catalog"],"DataFormatsAccepted":[3],"CommunicationLinks":[],"DiagramTweakOrder":0,"RAA":100},"reverse-proxy":{"Id":"reverse-proxy","Title":"Reverse Proxy","Description":"Optional reverse proxy (e.g., Nginx) for TLS termination and adding security headers.","Usage":0,"Type":1,"Size":2,"Technology":20,"Machine":1,"Internet":false,"MultiTenant":false,"Redundant":false,"CustomDevelopedParts":false,"OutOfScope":false,"UsedAsClientByHuman":false,"Encryption":1,"JustificationOutOfScope":"","Owner":"Lab Owner","Confidentiality":1,"Integrity":2,"Availability":2,"JustificationCiaRating":"Not exposed to internet directly; improves security of inbound traffic.","Tags":["optional","proxy"],"DataAssetsProcessed":["product-catalog","tokens-sessions"],"DataAssetsStored":[],"DataFormatsAccepted":[0],"CommunicationLinks":[{"Id":"reverse-proxy\u003eto-app","SourceId":"reverse-proxy","TargetId":"juice-shop","Title":"To App","Description":"Proxy forwarding to app (HTTP on 3000 internally).","Protocol":1,"Tags":[],"VPN":false,"IpFiltered":false,"Readonly":false,"Authentication":0,"Authorization":0,"Usage":0,"DataAssetsSent":["tokens-sessions"],"DataAssetsReceived":["product-catalog"],"DiagramTweakWeight":1,"DiagramTweakConstraint":true}],"DiagramTweakOrder":0,"RAA":9.623538157950035},"user-browser":{"Id":"user-browser","Title":"User Browser","Description":"End-user web browser (client).","Usage":0,"Type":0,"Size":0,"Technology":2,"Machine":1,"Internet":true,"MultiTenant":false,"Redundant":false,"CustomDevelopedParts":false,"OutOfScope":false,"UsedAsClientByHuman":true,"Encryption":0,"JustificationOutOfScope":"","Owner":"External User","Confidentiality":0,"Integrity":1,"Availability":1,"JustificationCiaRating":"Client controlled by end user (potentially an 
attacker).","Tags":["actor","user"],"DataAssetsProcessed":[],"DataAssetsStored":[],"DataFormatsAccepted":[0],"CommunicationLinks":[{"Id":"user-browser\u003eto-reverse-proxy-preferred","SourceId":"user-browser","TargetId":"reverse-proxy","Title":"To Reverse Proxy (preferred)","Description":"User browser to reverse proxy (HTTPS on 443).","Protocol":2,"Tags":["primary"],"VPN":false,"IpFiltered":false,"Readonly":false,"Authentication":2,"Authorization":2,"Usage":0,"DataAssetsSent":["tokens-sessions"],"DataAssetsReceived":["product-catalog"],"DiagramTweakWeight":1,"DiagramTweakConstraint":true},{"Id":"user-browser\u003edirect-to-app-no-proxy","SourceId":"user-browser","TargetId":"juice-shop","Title":"Direct to App (no proxy)","Description":"Direct browser access to app (HTTP on 3000).","Protocol":1,"Tags":["direct"],"VPN":false,"IpFiltered":false,"Readonly":false,"Authentication":2,"Authorization":2,"Usage":0,"DataAssetsSent":["tokens-sessions"],"DataAssetsReceived":["product-catalog"],"DiagramTweakWeight":1,"DiagramTweakConstraint":true}],"DiagramTweakOrder":0,"RAA":25.859639506459924},"webhook-endpoint":{"Id":"webhook-endpoint","Title":"Webhook Endpoint","Description":"External WebHook service (3rd-party, if configured for integrations).","Usage":0,"Type":0,"Size":0,"Technology":14,"Machine":1,"Internet":true,"MultiTenant":true,"Redundant":true,"CustomDevelopedParts":false,"OutOfScope":true,"UsedAsClientByHuman":false,"Encryption":0,"JustificationOutOfScope":"Third-party service to receive notifications (not under our control).","Owner":"Third-Party","Confidentiality":1,"Integrity":1,"Availability":1,"JustificationCiaRating":"External service that receives data (like order or challenge info). Treated as a trusted integration point but could be abused if misconfigured.","Tags":["saas","webhook"],"DataAssetsProcessed":["orders"],"DataAssetsStored":[],"DataFormatsAccepted":[0],"CommunicationLinks":[],"DiagramTweakOrder":0,"RAA":1}}
\ No newline at end of file
diff --git a/labs/lab2/secure/data-asset-diagram.png b/labs/lab2/secure/data-asset-diagram.png
new file mode 100644
index 00000000..aacf4016
Binary files /dev/null and b/labs/lab2/secure/data-asset-diagram.png differ
diff --git a/labs/lab2/secure/data-flow-diagram.png b/labs/lab2/secure/data-flow-diagram.png
new file mode 100644
index 00000000..5ead09e2
Binary files /dev/null and b/labs/lab2/secure/data-flow-diagram.png differ
diff --git a/labs/lab2/secure/report.pdf b/labs/lab2/secure/report.pdf
new file mode 100644
index 00000000..b830f7fe
Binary files /dev/null and b/labs/lab2/secure/report.pdf differ
diff --git a/labs/lab2/secure/risks.json b/labs/lab2/secure/risks.json
new file mode 100644
index 00000000..2088ecaa
--- /dev/null
+++ b/labs/lab2/secure/risks.json
@@ -0,0 +1 @@
+[{"category":"missing-vault","risk_status":"unchecked","severity":"medium","exploitation_likelihood":"unlikely","exploitation_impact":"medium","title":"\u003cb\u003eMissing Vault (Secret Storage)\u003c/b\u003e in the threat model (referencing asset \u003cb\u003eJuice Shop Application\u003c/b\u003e as an example)","synthetic_id":"missing-vault@juice-shop","most_relevant_data_asset":"","most_relevant_technical_asset":"juice-shop","most_relevant_trust_boundary":"","most_relevant_shared_runtime":"","most_relevant_communication_link":"","data_breach_probability":"improbable","data_breach_technical_assets":[]},{"category":"missing-authentication-second-factor","risk_status":"unchecked","severity":"medium","exploitation_likelihood":"unlikely","exploitation_impact":"medium","title":"\u003cb\u003eMissing Two-Factor Authentication\u003c/b\u003e covering communication link \u003cb\u003eDirect to App (no proxy)\u003c/b\u003e from \u003cb\u003eUser Browser\u003c/b\u003e to \u003cb\u003eJuice Shop Application\u003c/b\u003e","synthetic_id":"missing-authentication-second-factor@user-browser\u003edirect-to-app-no-proxy@user-browser@juice-shop","most_relevant_data_asset":"","most_relevant_technical_asset":"juice-shop","most_relevant_trust_boundary":"","most_relevant_shared_runtime":"","most_relevant_communication_link":"user-browser\u003edirect-to-app-no-proxy","data_breach_probability":"possible","data_breach_technical_assets":["juice-shop"]},{"category":"missing-authentication-second-factor","risk_status":"unchecked","severity":"medium","exploitation_likelihood":"unlikely","exploitation_impact":"medium","title":"\u003cb\u003eMissing Two-Factor Authentication\u003c/b\u003e covering communication link \u003cb\u003eTo App\u003c/b\u003e from \u003cb\u003eUser Browser\u003c/b\u003e forwarded via \u003cb\u003eReverse Proxy\u003c/b\u003e to \u003cb\u003eJuice Shop 
Application\u003c/b\u003e","synthetic_id":"missing-authentication-second-factor@reverse-proxy\u003eto-app@reverse-proxy@juice-shop","most_relevant_data_asset":"","most_relevant_technical_asset":"juice-shop","most_relevant_trust_boundary":"","most_relevant_shared_runtime":"","most_relevant_communication_link":"reverse-proxy\u003eto-app","data_breach_probability":"possible","data_breach_technical_assets":["juice-shop"]},{"category":"unnecessary-technical-asset","risk_status":"unchecked","severity":"low","exploitation_likelihood":"unlikely","exploitation_impact":"low","title":"\u003cb\u003eUnnecessary Technical Asset\u003c/b\u003e named \u003cb\u003ePersistent Storage\u003c/b\u003e","synthetic_id":"unnecessary-technical-asset@persistent-storage","most_relevant_data_asset":"","most_relevant_technical_asset":"persistent-storage","most_relevant_trust_boundary":"","most_relevant_shared_runtime":"","most_relevant_communication_link":"","data_breach_probability":"improbable","data_breach_technical_assets":["persistent-storage"]},{"category":"unnecessary-technical-asset","risk_status":"unchecked","severity":"low","exploitation_likelihood":"unlikely","exploitation_impact":"low","title":"\u003cb\u003eUnnecessary Technical Asset\u003c/b\u003e named \u003cb\u003eUser Browser\u003c/b\u003e","synthetic_id":"unnecessary-technical-asset@user-browser","most_relevant_data_asset":"","most_relevant_technical_asset":"user-browser","most_relevant_trust_boundary":"","most_relevant_shared_runtime":"","most_relevant_communication_link":"","data_breach_probability":"improbable","data_breach_technical_assets":["user-browser"]},{"category":"container-baseimage-backdooring","risk_status":"unchecked","severity":"medium","exploitation_likelihood":"unlikely","exploitation_impact":"medium","title":"\u003cb\u003eContainer Base Image Backdooring\u003c/b\u003e risk at \u003cb\u003eJuice Shop 
Application\u003c/b\u003e","synthetic_id":"container-baseimage-backdooring@juice-shop","most_relevant_data_asset":"","most_relevant_technical_asset":"juice-shop","most_relevant_trust_boundary":"","most_relevant_shared_runtime":"","most_relevant_communication_link":"","data_breach_probability":"probable","data_breach_technical_assets":["juice-shop"]},{"category":"missing-build-infrastructure","risk_status":"unchecked","severity":"medium","exploitation_likelihood":"unlikely","exploitation_impact":"medium","title":"\u003cb\u003eMissing Build Infrastructure\u003c/b\u003e in the threat model (referencing asset \u003cb\u003eJuice Shop Application\u003c/b\u003e as an example)","synthetic_id":"missing-build-infrastructure@juice-shop","most_relevant_data_asset":"","most_relevant_technical_asset":"juice-shop","most_relevant_trust_boundary":"","most_relevant_shared_runtime":"","most_relevant_communication_link":"","data_breach_probability":"improbable","data_breach_technical_assets":[]},{"category":"unencrypted-asset","risk_status":"unchecked","severity":"medium","exploitation_likelihood":"unlikely","exploitation_impact":"medium","title":"\u003cb\u003eUnencrypted Technical Asset\u003c/b\u003e named \u003cb\u003eJuice Shop Application\u003c/b\u003e","synthetic_id":"unencrypted-asset@juice-shop","most_relevant_data_asset":"","most_relevant_technical_asset":"juice-shop","most_relevant_trust_boundary":"","most_relevant_shared_runtime":"","most_relevant_communication_link":"","data_breach_probability":"improbable","data_breach_technical_assets":["juice-shop"]},{"category":"missing-identity-store","risk_status":"unchecked","severity":"medium","exploitation_likelihood":"unlikely","exploitation_impact":"medium","title":"\u003cb\u003eMissing Identity Store\u003c/b\u003e in the threat model (referencing asset \u003cb\u003eReverse Proxy\u003c/b\u003e as an 
example)","synthetic_id":"missing-identity-store@reverse-proxy","most_relevant_data_asset":"","most_relevant_technical_asset":"reverse-proxy","most_relevant_trust_boundary":"","most_relevant_shared_runtime":"","most_relevant_communication_link":"","data_breach_probability":"improbable","data_breach_technical_assets":[]},{"category":"cross-site-scripting","risk_status":"unchecked","severity":"elevated","exploitation_likelihood":"likely","exploitation_impact":"medium","title":"\u003cb\u003eCross-Site Scripting (XSS)\u003c/b\u003e risk at \u003cb\u003eJuice Shop Application\u003c/b\u003e","synthetic_id":"cross-site-scripting@juice-shop","most_relevant_data_asset":"","most_relevant_technical_asset":"juice-shop","most_relevant_trust_boundary":"","most_relevant_shared_runtime":"","most_relevant_communication_link":"","data_breach_probability":"possible","data_breach_technical_assets":["juice-shop"]},{"category":"server-side-request-forgery","risk_status":"unchecked","severity":"medium","exploitation_likelihood":"likely","exploitation_impact":"low","title":"\u003cb\u003eServer-Side Request Forgery (SSRF)\u003c/b\u003e risk at \u003cb\u003eJuice Shop Application\u003c/b\u003e server-side web-requesting the target \u003cb\u003eWebhook Endpoint\u003c/b\u003e via \u003cb\u003eTo Challenge WebHook\u003c/b\u003e","synthetic_id":"server-side-request-forgery@juice-shop@webhook-endpoint@juice-shop\u003eto-challenge-webhook","most_relevant_data_asset":"","most_relevant_technical_asset":"juice-shop","most_relevant_trust_boundary":"","most_relevant_shared_runtime":"","most_relevant_communication_link":"juice-shop\u003eto-challenge-webhook","data_breach_probability":"possible","data_breach_technical_assets":["juice-shop"]},{"category":"server-side-request-forgery","risk_status":"unchecked","severity":"medium","exploitation_likelihood":"likely","exploitation_impact":"low","title":"\u003cb\u003eServer-Side Request Forgery (SSRF)\u003c/b\u003e risk at \u003cb\u003eReverse 
Proxy\u003c/b\u003e server-side web-requesting the target \u003cb\u003eJuice Shop Application\u003c/b\u003e via \u003cb\u003eTo App\u003c/b\u003e","synthetic_id":"server-side-request-forgery@reverse-proxy@juice-shop@reverse-proxy\u003eto-app","most_relevant_data_asset":"","most_relevant_technical_asset":"reverse-proxy","most_relevant_trust_boundary":"","most_relevant_shared_runtime":"","most_relevant_communication_link":"reverse-proxy\u003eto-app","data_breach_probability":"possible","data_breach_technical_assets":["reverse-proxy"]},{"category":"missing-waf","risk_status":"unchecked","severity":"low","exploitation_likelihood":"unlikely","exploitation_impact":"low","title":"\u003cb\u003eMissing Web Application Firewall (WAF)\u003c/b\u003e risk at \u003cb\u003eJuice Shop Application\u003c/b\u003e","synthetic_id":"missing-waf@juice-shop","most_relevant_data_asset":"","most_relevant_technical_asset":"juice-shop","most_relevant_trust_boundary":"","most_relevant_shared_runtime":"","most_relevant_communication_link":"","data_breach_probability":"improbable","data_breach_technical_assets":["juice-shop"]},{"category":"missing-authentication","risk_status":"unchecked","severity":"elevated","exploitation_likelihood":"likely","exploitation_impact":"medium","title":"\u003cb\u003eMissing Authentication\u003c/b\u003e covering communication link \u003cb\u003eTo App\u003c/b\u003e from \u003cb\u003eReverse Proxy\u003c/b\u003e to \u003cb\u003eJuice Shop 
Application\u003c/b\u003e","synthetic_id":"missing-authentication@reverse-proxy\u003eto-app@reverse-proxy@juice-shop","most_relevant_data_asset":"","most_relevant_technical_asset":"juice-shop","most_relevant_trust_boundary":"","most_relevant_shared_runtime":"","most_relevant_communication_link":"reverse-proxy\u003eto-app","data_breach_probability":"possible","data_breach_technical_assets":["juice-shop"]},{"category":"unnecessary-data-transfer","risk_status":"unchecked","severity":"low","exploitation_likelihood":"unlikely","exploitation_impact":"low","title":"\u003cb\u003eUnnecessary Data Transfer\u003c/b\u003e of \u003cb\u003eTokens \u0026 Sessions\u003c/b\u003e data at \u003cb\u003eUser Browser\u003c/b\u003e from/to \u003cb\u003eJuice Shop Application\u003c/b\u003e","synthetic_id":"unnecessary-data-transfer@tokens-sessions@user-browser@juice-shop","most_relevant_data_asset":"tokens-sessions","most_relevant_technical_asset":"user-browser","most_relevant_trust_boundary":"","most_relevant_shared_runtime":"","most_relevant_communication_link":"","data_breach_probability":"improbable","data_breach_technical_assets":["user-browser"]},{"category":"unnecessary-data-transfer","risk_status":"unchecked","severity":"low","exploitation_likelihood":"unlikely","exploitation_impact":"low","title":"\u003cb\u003eUnnecessary Data Transfer\u003c/b\u003e of \u003cb\u003eTokens \u0026 Sessions\u003c/b\u003e data at \u003cb\u003eUser Browser\u003c/b\u003e from/to \u003cb\u003eReverse 
Proxy\u003c/b\u003e","synthetic_id":"unnecessary-data-transfer@tokens-sessions@user-browser@reverse-proxy","most_relevant_data_asset":"tokens-sessions","most_relevant_technical_asset":"user-browser","most_relevant_trust_boundary":"","most_relevant_shared_runtime":"","most_relevant_communication_link":"","data_breach_probability":"improbable","data_breach_technical_assets":["user-browser"]},{"category":"cross-site-request-forgery","risk_status":"unchecked","severity":"medium","exploitation_likelihood":"very-likely","exploitation_impact":"low","title":"\u003cb\u003eCross-Site Request Forgery (CSRF)\u003c/b\u003e risk at \u003cb\u003eJuice Shop Application\u003c/b\u003e via \u003cb\u003eDirect to App (no proxy)\u003c/b\u003e from \u003cb\u003eUser Browser\u003c/b\u003e","synthetic_id":"cross-site-request-forgery@juice-shop@user-browser\u003edirect-to-app-no-proxy","most_relevant_data_asset":"","most_relevant_technical_asset":"juice-shop","most_relevant_trust_boundary":"","most_relevant_shared_runtime":"","most_relevant_communication_link":"user-browser\u003edirect-to-app-no-proxy","data_breach_probability":"improbable","data_breach_technical_assets":["juice-shop"]},{"category":"cross-site-request-forgery","risk_status":"unchecked","severity":"medium","exploitation_likelihood":"very-likely","exploitation_impact":"low","title":"\u003cb\u003eCross-Site Request Forgery (CSRF)\u003c/b\u003e risk at \u003cb\u003eJuice Shop Application\u003c/b\u003e via \u003cb\u003eTo App\u003c/b\u003e from \u003cb\u003eReverse 
Proxy\u003c/b\u003e","synthetic_id":"cross-site-request-forgery@juice-shop@reverse-proxy\u003eto-app","most_relevant_data_asset":"","most_relevant_technical_asset":"juice-shop","most_relevant_trust_boundary":"","most_relevant_shared_runtime":"","most_relevant_communication_link":"reverse-proxy\u003eto-app","data_breach_probability":"improbable","data_breach_technical_assets":["juice-shop"]},{"category":"missing-hardening","risk_status":"unchecked","severity":"medium","exploitation_likelihood":"likely","exploitation_impact":"low","title":"\u003cb\u003eMissing Hardening\u003c/b\u003e risk at \u003cb\u003eJuice Shop Application\u003c/b\u003e","synthetic_id":"missing-hardening@juice-shop","most_relevant_data_asset":"","most_relevant_technical_asset":"juice-shop","most_relevant_trust_boundary":"","most_relevant_shared_runtime":"","most_relevant_communication_link":"","data_breach_probability":"improbable","data_breach_technical_assets":["juice-shop"]},{"category":"missing-hardening","risk_status":"unchecked","severity":"medium","exploitation_likelihood":"likely","exploitation_impact":"low","title":"\u003cb\u003eMissing Hardening\u003c/b\u003e risk at \u003cb\u003ePersistent Storage\u003c/b\u003e","synthetic_id":"missing-hardening@persistent-storage","most_relevant_data_asset":"","most_relevant_technical_asset":"persistent-storage","most_relevant_trust_boundary":"","most_relevant_shared_runtime":"","most_relevant_communication_link":"","data_breach_probability":"improbable","data_breach_technical_assets":["persistent-storage"]}]
\ No newline at end of file
diff --git a/labs/lab2/secure/stats.json b/labs/lab2/secure/stats.json
new file mode 100644
index 00000000..c19a18a6
--- /dev/null
+++ b/labs/lab2/secure/stats.json
@@ -0,0 +1 @@
+{"risks":{"critical":{"accepted":0,"false-positive":0,"in-discussion":0,"in-progress":0,"mitigated":0,"unchecked":0},"elevated":{"accepted":0,"false-positive":0,"in-discussion":0,"in-progress":0,"mitigated":0,"unchecked":2},"high":{"accepted":0,"false-positive":0,"in-discussion":0,"in-progress":0,"mitigated":0,"unchecked":0},"low":{"accepted":0,"false-positive":0,"in-discussion":0,"in-progress":0,"mitigated":0,"unchecked":5},"medium":{"accepted":0,"false-positive":0,"in-discussion":0,"in-progress":0,"mitigated":0,"unchecked":13}}}
\ No newline at end of file
diff --git a/labs/lab2/secure/technical-assets.json b/labs/lab2/secure/technical-assets.json
new file mode 100644
index 00000000..a082acb4
--- /dev/null
+++ b/labs/lab2/secure/technical-assets.json
@@ -0,0 +1 @@
+{"juice-shop":{"Id":"juice-shop","Title":"Juice Shop Application","Description":"OWASP Juice Shop server (Node.js/Express, v19.0.0).","Usage":0,"Type":1,"Size":2,"Technology":6,"Machine":2,"Internet":false,"MultiTenant":false,"Redundant":false,"CustomDevelopedParts":true,"OutOfScope":false,"UsedAsClientByHuman":false,"Encryption":0,"JustificationOutOfScope":"","Owner":"Lab Owner","Confidentiality":1,"Integrity":2,"Availability":2,"JustificationCiaRating":"In-scope web application (contains all business logic and vulnerabilities by design).","Tags":["app","nodejs"],"DataAssetsProcessed":["user-accounts","orders","product-catalog","tokens-sessions"],"DataAssetsStored":["logs"],"DataFormatsAccepted":[0],"CommunicationLinks":[{"Id":"juice-shop\u003eto-challenge-webhook","SourceId":"juice-shop","TargetId":"webhook-endpoint","Title":"To Challenge WebHook","Description":"Optional outbound callback (HTTP POST) to external WebHook when a challenge is solved.","Protocol":2,"Tags":["egress"],"VPN":false,"IpFiltered":false,"Readonly":false,"Authentication":0,"Authorization":0,"Usage":0,"DataAssetsSent":["orders"],"DataAssetsReceived":null,"DiagramTweakWeight":1,"DiagramTweakConstraint":true}],"DiagramTweakOrder":0,"RAA":70.02881844380403},"persistent-storage":{"Id":"persistent-storage","Title":"Persistent Storage","Description":"Host-mounted volume for database, file uploads, and logs.","Usage":1,"Type":2,"Size":3,"Technology":10,"Machine":1,"Internet":false,"MultiTenant":false,"Redundant":false,"CustomDevelopedParts":false,"OutOfScope":false,"UsedAsClientByHuman":false,"Encryption":1,"JustificationOutOfScope":"","Owner":"Lab Owner","Confidentiality":1,"Integrity":2,"Availability":2,"JustificationCiaRating":"Local disk storage for the container – not directly exposed, but if compromised it contains sensitive data (database and 
logs).","Tags":["storage","volume"],"DataAssetsProcessed":[],"DataAssetsStored":["logs","user-accounts","orders","product-catalog"],"DataFormatsAccepted":[3],"CommunicationLinks":[],"DiagramTweakOrder":0,"RAA":100},"reverse-proxy":{"Id":"reverse-proxy","Title":"Reverse Proxy","Description":"Optional reverse proxy (e.g., Nginx) for TLS termination and adding security headers.","Usage":0,"Type":1,"Size":2,"Technology":20,"Machine":1,"Internet":false,"MultiTenant":false,"Redundant":false,"CustomDevelopedParts":false,"OutOfScope":false,"UsedAsClientByHuman":false,"Encryption":1,"JustificationOutOfScope":"","Owner":"Lab Owner","Confidentiality":1,"Integrity":2,"Availability":2,"JustificationCiaRating":"Not exposed to internet directly; improves security of inbound traffic.","Tags":["optional","proxy"],"DataAssetsProcessed":["product-catalog","tokens-sessions"],"DataAssetsStored":[],"DataFormatsAccepted":[0],"CommunicationLinks":[{"Id":"reverse-proxy\u003eto-app","SourceId":"reverse-proxy","TargetId":"juice-shop","Title":"To App","Description":"Proxy forwarding to app (HTTP on 3000 internally).","Protocol":2,"Tags":[],"VPN":false,"IpFiltered":false,"Readonly":false,"Authentication":0,"Authorization":0,"Usage":0,"DataAssetsSent":["tokens-sessions"],"DataAssetsReceived":["product-catalog"],"DiagramTweakWeight":1,"DiagramTweakConstraint":true}],"DiagramTweakOrder":0,"RAA":9.623538157950035},"user-browser":{"Id":"user-browser","Title":"User Browser","Description":"End-user web browser (client).","Usage":0,"Type":0,"Size":0,"Technology":2,"Machine":1,"Internet":true,"MultiTenant":false,"Redundant":false,"CustomDevelopedParts":false,"OutOfScope":false,"UsedAsClientByHuman":true,"Encryption":0,"JustificationOutOfScope":"","Owner":"External User","Confidentiality":0,"Integrity":1,"Availability":1,"JustificationCiaRating":"Client controlled by end user (potentially an 
attacker).","Tags":["actor","user"],"DataAssetsProcessed":[],"DataAssetsStored":[],"DataFormatsAccepted":[0],"CommunicationLinks":[{"Id":"user-browser\u003eto-reverse-proxy-preferred","SourceId":"user-browser","TargetId":"reverse-proxy","Title":"To Reverse Proxy (preferred)","Description":"User browser to reverse proxy (HTTPS on 443).","Protocol":2,"Tags":["primary"],"VPN":false,"IpFiltered":false,"Readonly":false,"Authentication":2,"Authorization":2,"Usage":0,"DataAssetsSent":["tokens-sessions"],"DataAssetsReceived":["product-catalog"],"DiagramTweakWeight":1,"DiagramTweakConstraint":true},{"Id":"user-browser\u003edirect-to-app-no-proxy","SourceId":"user-browser","TargetId":"juice-shop","Title":"Direct to App (no proxy)","Description":"Direct browser access to app (HTTP on 3000).","Protocol":2,"Tags":["direct"],"VPN":false,"IpFiltered":false,"Readonly":false,"Authentication":2,"Authorization":2,"Usage":0,"DataAssetsSent":["tokens-sessions"],"DataAssetsReceived":["product-catalog"],"DiagramTweakWeight":1,"DiagramTweakConstraint":true}],"DiagramTweakOrder":0,"RAA":25.859639506459924},"webhook-endpoint":{"Id":"webhook-endpoint","Title":"Webhook Endpoint","Description":"External WebHook service (3rd-party, if configured for integrations).","Usage":0,"Type":0,"Size":0,"Technology":14,"Machine":1,"Internet":true,"MultiTenant":true,"Redundant":true,"CustomDevelopedParts":false,"OutOfScope":true,"UsedAsClientByHuman":false,"Encryption":0,"JustificationOutOfScope":"Third-party service to receive notifications (not under our control).","Owner":"Third-Party","Confidentiality":1,"Integrity":1,"Availability":1,"JustificationCiaRating":"External service that receives data (like order or challenge info). Treated as a trusted integration point but could be abused if misconfigured.","Tags":["saas","webhook"],"DataAssetsProcessed":["orders"],"DataAssetsStored":[],"DataFormatsAccepted":[0],"CommunicationLinks":[],"DiagramTweakOrder":0,"RAA":1}}
\ No newline at end of file
diff --git a/labs/lab2/threagile-model.secure.yaml b/labs/lab2/threagile-model.secure.yaml
new file mode 100644
index 00000000..d449bdfa
--- /dev/null
+++ b/labs/lab2/threagile-model.secure.yaml
@@ -0,0 +1,429 @@
+threagile_version: 1.0.0
+
+title: OWASP Juice Shop — Local Lab Threat Model
+date: 2025-09-18
+
+author:
+ name: Student Name
+ homepage: https://example.edu
+
+management_summary_comment: >
+ Threat model for a local OWASP Juice Shop setup. Users access the app
+ either directly via HTTP on port 3000 or through an optional reverse proxy that
+ terminates TLS and adds security headers. The app runs in a container
+ and writes data to a host-mounted volume (for database, uploads, logs).
+ Optional outbound notifications (e.g., a challenge-solution WebHook) can be configured for integrations.
+
+business_criticality: important # archive, operational, important, critical, mission-critical
+
+business_overview:
+ description: >
+ Training environment for DevSecOps. This model covers a deliberately vulnerable
+ web application (OWASP Juice Shop) running locally in a Docker container. The focus is on a minimal architecture, STRIDE threat analysis, and actionable mitigations for the identified risks.
+
+ images: []
+ # - dfd.png: Data Flow Diagram (if exported from the tool)
+
+technical_overview:
+ description: >
+ A user’s web browser connects to the Juice Shop application (Node.js/Express server) either directly on **localhost:3000** (HTTP) or via a **reverse proxy** on ports 80/443 (with HTTPS). The Juice Shop server may issue outbound requests to external services (e.g., a configured **WebHook** for solved challenge notifications). All application data (the SQLite database, file uploads, logs) is stored on the host’s filesystem via a mounted volume. Key trust boundaries include the **Internet** (user & external services) → **Host** (local machine/VM) → **Container Network** (isolated app container).
+ images: []
+
+questions:
+ Do you expose port 3000 beyond localhost?: ""
+ Do you use a reverse proxy with TLS and security headers?: ""
+ Are any outbound integrations (webhooks) configured?: ""
+ Is any sensitive data stored in logs or files?: ""
+
+abuse_cases:
+ Credential Stuffing / Brute Force: >
+ Attackers make repeated login attempts to guess credentials or exhaust system resources.
+ Stored XSS via Product Reviews: >
+ Malicious scripts are inserted into product reviews, getting stored and executed in other users’ browsers.
+ SSRF via Outbound Requests: >
+ Server-side requests (e.g. profile image URL fetch or WebHook callback) are abused to access internal network resources.
+
+security_requirements:
+ TLS in transit: Enforce HTTPS for user traffic via a TLS-terminating reverse proxy with strong ciphers and certificate management.
+ AuthZ on sensitive routes: Implement strict server-side authorization checks (role/permission) on admin or sensitive functionalities.
+ Rate limiting & lockouts: Apply rate limiting and account lockout policies to mitigate brute-force and automated attacks on authentication and expensive operations.
+ Secure headers: Add security headers (HSTS, CSP, X-Frame-Options, X-Content-Type-Options, etc.) at the proxy or app to mitigate client-side attacks.
+ Secrets management: Protect secret keys and credentials (JWT signing keys, OAuth client secrets) – keep them out of code repos and avoid logging them.
+
+tags_available:
+ # Relevant technologies and environment tags
+ - docker
+ - nodejs
+ # Data and asset tags
+ - pii
+ - auth
+ - tokens
+ - logs
+ - public
+ - actor
+ - user
+ - optional
+ - proxy
+ - app
+ - storage
+ - volume
+ - saas
+ - webhook
+ # Communication tags
+ - primary
+ - direct
+ - egress
+
+# =========================
+# DATA ASSETS
+# =========================
+data_assets:
+
+ User Accounts:
+ id: user-accounts
+ description: "User profile data, credential hashes, emails."
+ usage: business
+ tags: ["pii", "auth"]
+ origin: user-supplied
+ owner: Lab Owner
+ quantity: many
+ confidentiality: confidential
+ integrity: critical
+ availability: important
+ justification_cia_rating: >
+ Contains personal identifiers and authentication data. High confidentiality is required to protect user privacy, and integrity is critical to prevent account takeovers.
+
+ Orders:
+ id: orders
+ description: "Order history, addresses, and payment metadata (no raw card numbers)."
+ usage: business
+ tags: ["pii"]
+ origin: application
+ owner: Lab Owner
+ quantity: many
+ confidentiality: confidential
+ integrity: important
+ availability: important
+ justification_cia_rating: >
+ Contains users’ personal data and business transaction records. Integrity and confidentiality are important to prevent fraud or privacy breaches.
+
+ Product Catalog:
+ id: product-catalog
+ description: "Product information (names, descriptions, prices) available to all users."
+ usage: business
+ tags: ["public"]
+ origin: application
+ owner: Lab Owner
+ quantity: many
+ confidentiality: public
+ integrity: important
+ availability: important
+ justification_cia_rating: >
+ Product data is intended to be public, but its integrity is important (to avoid defacement or price manipulation that could mislead users).
+
+ Tokens & Sessions:
+ id: tokens-sessions
+ description: "Session identifiers, JWTs for authenticated sessions, CSRF tokens."
+ usage: business
+ tags: ["auth", "tokens"]
+ origin: application
+ owner: Lab Owner
+ quantity: many
+ confidentiality: confidential
+ integrity: important
+ availability: important
+ justification_cia_rating: >
+ If session tokens are compromised, attackers can hijack user sessions. They must be kept confidential and intact; availability is less critical (tokens can be reissued).
+
+ Logs:
+ id: logs
+ description: "Application and access logs (may inadvertently contain PII or secrets)."
+ usage: devops
+ tags: ["logs"]
+ origin: application
+ owner: Lab Owner
+ quantity: many
+ confidentiality: internal
+ integrity: important
+ availability: important
+ justification_cia_rating: >
+ Logs are for internal use (troubleshooting, monitoring). They should not be exposed publicly, and sensitive data should be sanitized to protect confidentiality.
+
+# =========================
+# TECHNICAL ASSETS
+# =========================
+technical_assets:
+
+ User Browser:
+ id: user-browser
+ description: "End-user web browser (client)."
+ type: external-entity
+ usage: business
+ used_as_client_by_human: true
+ out_of_scope: false
+ justification_out_of_scope:
+ size: system
+ technology: browser
+ tags: ["actor", "user"]
+ internet: true
+ machine: virtual
+ encryption: none
+ owner: External User
+ confidentiality: public
+ integrity: operational
+ availability: operational
+ justification_cia_rating: "Client controlled by end user (potentially an attacker)."
+ multi_tenant: false
+ redundant: false
+ custom_developed_parts: false
+ data_assets_processed: []
+ data_assets_stored: []
+ data_formats_accepted:
+ - json
+ communication_links:
+ To Reverse Proxy (preferred):
+ target: reverse-proxy
+ description: "User browser to reverse proxy (HTTPS on 443)."
+ protocol: https
+ authentication: session-id
+ authorization: enduser-identity-propagation
+ tags: ["primary"]
+ vpn: false
+ ip_filtered: false
+ readonly: false
+ usage: business
+ data_assets_sent:
+ - tokens-sessions
+ data_assets_received:
+ - product-catalog
+ Direct to App (no proxy):
+ target: juice-shop
+ description: "Direct browser access to app (HTTP on 3000)."
+ protocol: https
+ authentication: session-id
+ authorization: enduser-identity-propagation
+ tags: ["direct"]
+ vpn: false
+ ip_filtered: false
+ readonly: false
+ usage: business
+ data_assets_sent:
+ - tokens-sessions
+ data_assets_received:
+ - product-catalog
+
+ Reverse Proxy:
+ id: reverse-proxy
+ description: "Optional reverse proxy (e.g., Nginx) for TLS termination and adding security headers."
+ type: process
+ usage: business
+ used_as_client_by_human: false
+ out_of_scope: false
+ justification_out_of_scope:
+ size: application
+ technology: reverse-proxy
+ tags: ["optional", "proxy"]
+ internet: false
+ machine: virtual
+ encryption: transparent
+ owner: Lab Owner
+ confidentiality: internal
+ integrity: important
+ availability: important
+ justification_cia_rating: "Not exposed to internet directly; improves security of inbound traffic."
+ multi_tenant: false
+ redundant: false
+ custom_developed_parts: false
+ data_assets_processed:
+ - product-catalog
+ - tokens-sessions
+ data_assets_stored: []
+ data_formats_accepted:
+ - json
+ communication_links:
+ To App:
+ target: juice-shop
+ description: "Proxy forwarding to app (HTTP on 3000 internally)."
+ protocol: https
+ authentication: none
+ authorization: none
+ tags: []
+ vpn: false
+ ip_filtered: false
+ readonly: false
+ usage: business
+ data_assets_sent:
+ - tokens-sessions
+ data_assets_received:
+ - product-catalog
+
+ Juice Shop Application:
+ id: juice-shop
+ description: "OWASP Juice Shop server (Node.js/Express, v19.0.0)."
+ type: process
+ usage: business
+ used_as_client_by_human: false
+ out_of_scope: false
+ justification_out_of_scope:
+ size: application
+ technology: web-server
+ tags: ["app", "nodejs"]
+ internet: false
+ machine: container
+ encryption: none
+ owner: Lab Owner
+ confidentiality: internal
+ integrity: important
+ availability: important
+ justification_cia_rating: "In-scope web application (contains all business logic and vulnerabilities by design)."
+ multi_tenant: false
+ redundant: false
+ custom_developed_parts: true
+ data_assets_processed:
+ - user-accounts
+ - orders
+ - product-catalog
+ - tokens-sessions
+ data_assets_stored:
+ - logs
+ data_formats_accepted:
+ - json
+ communication_links:
+ To Challenge WebHook:
+ target: webhook-endpoint
+ description: "Optional outbound callback (HTTP POST) to external WebHook when a challenge is solved."
+ protocol: https
+ authentication: none
+ authorization: none
+ tags: ["egress"]
+ vpn: false
+ ip_filtered: false
+ readonly: false
+ usage: business
+ data_assets_sent:
+ - orders
+
+ Persistent Storage:
+ id: persistent-storage
+ description: "Host-mounted volume for database, file uploads, and logs."
+ type: datastore
+ usage: devops
+ used_as_client_by_human: false
+ out_of_scope: false
+ justification_out_of_scope:
+ size: component
+ technology: file-server
+ tags: ["storage", "volume"]
+ internet: false
+ machine: virtual
+ encryption: transparent
+ owner: Lab Owner
+ confidentiality: internal
+ integrity: important
+ availability: important
+ justification_cia_rating: "Local disk storage for the container – not directly exposed, but if compromised it contains sensitive data (database and logs)."
+ multi_tenant: false
+ redundant: false
+ custom_developed_parts: false
+ data_assets_processed: []
+ data_assets_stored:
+ - logs
+ - user-accounts
+ - orders
+ - product-catalog
+ data_formats_accepted:
+ - file
+ communication_links: {}
+
+ Webhook Endpoint:
+ id: webhook-endpoint
+ description: "External WebHook service (3rd-party, if configured for integrations)."
+ type: external-entity
+ usage: business
+ used_as_client_by_human: false
+ out_of_scope: true
+ justification_out_of_scope: "Third-party service to receive notifications (not under our control)."
+ size: system
+ technology: web-service-rest
+ tags: ["saas", "webhook"]
+ internet: true
+ machine: virtual
+ encryption: none
+ owner: Third-Party
+ confidentiality: internal
+ integrity: operational
+ availability: operational
+ justification_cia_rating: "External service that receives data (like order or challenge info). Treated as a trusted integration point but could be abused if misconfigured."
+ multi_tenant: true
+ redundant: true
+ custom_developed_parts: false
+ data_assets_processed:
+ - orders
+ data_assets_stored: []
+ data_formats_accepted:
+ - json
+ communication_links: {}
+
+# =========================
+# TRUST BOUNDARIES
+# =========================
+trust_boundaries:
+
+ Internet:
+ id: internet
+ description: "Untrusted public network (Internet)."
+ type: network-dedicated-hoster
+ tags: []
+ technical_assets_inside:
+ - user-browser
+ - webhook-endpoint
+ trust_boundaries_nested:
+ - host
+
+ Host:
+ id: host
+ description: "Local host machine / VM running the Docker environment."
+ type: network-dedicated-hoster
+ tags: []
+ technical_assets_inside:
+ - reverse-proxy
+ - persistent-storage
+ trust_boundaries_nested:
+ - container-network
+
+ Container Network:
+ id: container-network
+ description: "Docker container network (isolated internal network for containers)."
+ type: network-dedicated-hoster
+ tags: []
+ technical_assets_inside:
+ - juice-shop
+ trust_boundaries_nested: []
+
+# =========================
+# SHARED RUNTIMES
+# =========================
+shared_runtimes:
+
+ Docker Host:
+ id: docker-host
+ description: "Docker Engine and default bridge network on the host."
+ tags: ["docker"]
+ technical_assets_running:
+ - juice-shop
+ # If the reverse proxy is containerized, include it:
+ # - reverse-proxy
+
+# =========================
+# INDIVIDUAL RISK CATEGORIES (optional)
+# =========================
+individual_risk_categories: {}
+
+# =========================
+# RISK TRACKING (optional)
+# =========================
+risk_tracking: {}
+
+# (Optional diagram layout tweaks can be added here)
+#diagram_tweak_edge_layout: spline
+#diagram_tweak_layout_left_to_right: true
\ No newline at end of file
diff --git a/labs/lab2/threagile-model.yaml b/labs/lab2/threagile-model.yaml
index 85c01a79..30e7c0a0 100644
--- a/labs/lab2/threagile-model.yaml
+++ b/labs/lab2/threagile-model.yaml
@@ -426,4 +426,4 @@ risk_tracking: {}
# (Optional diagram layout tweaks can be added here)
#diagram_tweak_edge_layout: spline
-#diagram_tweak_layout_left_to_right: true
+#diagram_tweak_layout_left_to_right: true
\ No newline at end of file
diff --git a/labs/lab3.md b/labs/lab3.md
deleted file mode 100644
index 47ec06df..00000000
--- a/labs/lab3.md
+++ /dev/null
@@ -1,265 +0,0 @@
-# Lab 3 — Secure Git
-
-
-
-
-
-> **Goal:** Practice secure Git fundamentals: signed commits and pre-commit secret scanning.
-> **Deliverable:** A PR from `feature/lab3` to the course repo with `labs/submission3.md` containing secure Git practices implementation. Submit the PR link via Moodle.
-
----
-
-## Overview
-
-In this lab you will practice:
-- Verifying commit authenticity with **SSH commit signing**
-- Preventing secrets exposure with **pre-commit scanning** (TruffleHog + Gitleaks)
-- Implementing automated security controls in development workflows
-
----
-
-## Tasks
-
-### Task 1 — SSH Commit Signature Verification (5 pts)
-
-**Objective:** Configure SSH commit signing to verify commit authenticity and integrity.
-
-#### 1.1: Research Commit Signing Benefits
-
-Study why commit signing is crucial for verifying the integrity and authenticity of commits:
-- [GitHub Docs on SSH Commit Verification](https://docs.github.com/en/authentication/managing-commit-signature-verification/about-commit-signature-verification)
-- [Atlassian Guide to SSH and Git](https://confluence.atlassian.com/bitbucketserver/sign-commits-and-tags-with-ssh-keys-1305971205.html)
-
-#### 1.2: Configure SSH Commit Signing
-
-1. **Generate SSH Key (Option A - Recommended):**
-
- ```sh
- ssh-keygen -t ed25519 -C "your_email@example.com"
- ```
-
-2. **Use Existing SSH Key (Option B):**
-
- Use an existing SSH key and add it to GitHub as a signing key.
-
-3. **Configure Git for SSH Signing:**
-
- ```sh
- git config --global gpg.format ssh
- git config --global user.signingkey ~/.ssh/id_ed25519.pub
- git config --global commit.gpgSign true
- ```
-
-#### 1.3: Create Signed Commit
-
-```sh
-git commit -S -m "docs: add commit signing summary"
-```
-
-In `labs/submission3.md`, document:
-- Summary explaining the benefits of signing commits for security
-- Evidence of successful SSH key setup and configuration
-- Analysis: "Why is commit signing critical in DevSecOps workflows?"
-- Screenshots or verification of the "Verified" badge on GitHub
-
----
-
-### Task 2 — Pre-commit Secret Scanning (5 pts)
-
-**Objective:** Implement a local Git pre-commit hook that scans staged changes for secrets using Dockerized TruffleHog and Gitleaks.
-
-#### 2.1: Create Pre-commit Hook
-
-1. **Setup Pre-commit Hook File:**
-
- Create `.git/hooks/pre-commit` with the following content:
-
- ```bash
- #!/usr/bin/env bash
- set -euo pipefail
- echo "[pre-commit] scanning staged files for secrets…"
-
- # Collect staged files (added/changed)
- mapfile -t STAGED < <(git diff --cached --name-only --diff-filter=ACM)
- if [ ${#STAGED[@]} -eq 0 ]; then
- echo "[pre-commit] no staged files; skipping scans"
- exit 0
- fi
-
- FILES=()
- for f in "${STAGED[@]}"; do
- [ -f "$f" ] && FILES+=("$f")
- done
- if [ ${#FILES[@]} -eq 0 ]; then
- echo "[pre-commit] no regular files to scan; skipping"
- exit 0
- fi
-
- echo "[pre-commit] Files to scan: ${FILES[*]}"
-
- NON_LECTURES_FILES=()
- LECTURES_FILES=()
- for f in "${FILES[@]}"; do
- if [[ "$f" == lectures/* ]]; then
- LECTURES_FILES+=("$f")
- else
- NON_LECTURES_FILES+=("$f")
- fi
- done
-
- echo "[pre-commit] Non-lectures files: ${NON_LECTURES_FILES[*]:-none}"
- echo "[pre-commit] Lectures files: ${LECTURES_FILES[*]:-none}"
-
- TRUFFLEHOG_FOUND_SECRETS=false
- if [ ${#NON_LECTURES_FILES[@]} -gt 0 ]; then
- echo "[pre-commit] TruffleHog scan on non-lectures files…"
-
- set +e
- TRUFFLEHOG_OUTPUT=$(docker run --rm -v "$(pwd):/repo" -w /repo \
- trufflesecurity/trufflehog:latest \
- filesystem --fail "${NON_LECTURES_FILES[@]}" 2>&1)
- TRUFFLEHOG_EXIT_CODE=$?
- set -e
- echo "$TRUFFLEHOG_OUTPUT"
-
- if [ $TRUFFLEHOG_EXIT_CODE -ne 0 ]; then
- echo "[pre-commit] ✖ TruffleHog detected potential secrets in non-lectures files"
- TRUFFLEHOG_FOUND_SECRETS=true
- else
- echo "[pre-commit] ✓ TruffleHog found no secrets in non-lectures files"
- fi
- else
- echo "[pre-commit] Skipping TruffleHog (only lectures files staged)"
- fi
-
- echo "[pre-commit] Gitleaks scan on staged files…"
- GITLEAKS_FOUND_SECRETS=false
- GITLEAKS_FOUND_IN_LECTURES=false
-
- for file in "${FILES[@]}"; do
- echo "[pre-commit] Scanning $file with Gitleaks..."
-
- GITLEAKS_RESULT=$(docker run --rm -v "$(pwd):/repo" -w /repo \
- zricethezav/gitleaks:latest \
- detect --source="$file" --no-git --verbose --exit-code=0 --no-banner 2>&1 || true)
-
- if [ -n "$GITLEAKS_RESULT" ] && echo "$GITLEAKS_RESULT" | grep -q -E "(Finding:|WRN leaks found)"; then
- echo "Gitleaks found secrets in $file:"
- echo "$GITLEAKS_RESULT"
- echo "---"
-
- if [[ "$file" == lectures/* ]]; then
- echo "⚠️ Secrets found in lectures directory - allowing as educational content"
- GITLEAKS_FOUND_IN_LECTURES=true
- else
- echo "✖ Secrets found in non-excluded file: $file"
- GITLEAKS_FOUND_SECRETS=true
- fi
- else
- echo "[pre-commit] No secrets found in $file"
- fi
- done
-
- echo ""
- echo "[pre-commit] === SCAN SUMMARY ==="
- echo "TruffleHog found secrets in non-lectures files: $TRUFFLEHOG_FOUND_SECRETS"
- echo "Gitleaks found secrets in non-lectures files: $GITLEAKS_FOUND_SECRETS"
- echo "Gitleaks found secrets in lectures files: $GITLEAKS_FOUND_IN_LECTURES"
- echo ""
-
- if [ "$TRUFFLEHOG_FOUND_SECRETS" = true ] || [ "$GITLEAKS_FOUND_SECRETS" = true ]; then
- echo -e "✖ COMMIT BLOCKED: Secrets detected in non-excluded files." >&2
- echo "Fix or unstage the offending files and try again." >&2
- exit 1
- elif [ "$GITLEAKS_FOUND_IN_LECTURES" = true ]; then
- echo "⚠️ Secrets found only in lectures directory (educational content) - allowing commit."
- fi
-
- echo "✓ No secrets detected in non-excluded files; proceeding with commit."
- exit 0
- ```
-
-2. **Make Hook Executable:**
-
- ```bash
- chmod +x .git/hooks/pre-commit
- ```
-
-#### 2.2: Test Secret Detection
-
-Verify hook functionality:
- Add a test secret (e.g., a fake AWS access key) to a file and stage it
- Attempt to commit; the commit should be blocked by TruffleHog or Gitleaks
- Remove or redact the secret, then commit again to confirm success
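-
-A minimal walkthrough of this test sequence might look like the following (the file name is illustrative, and the key is AWS's documented example access key, not a real credential):
-
-```bash
-# Stage a fake secret and confirm the hook blocks it
-echo "aws_access_key_id = AKIAIOSFODNN7EXAMPLE" > config-test.txt
-git add config-test.txt
-git commit -m "test: trigger secret scan"        # expected: hook exits 1, commit blocked
-
-# Clean up and confirm a normal commit succeeds
-git restore --staged config-test.txt
-rm config-test.txt
-git commit --allow-empty -m "test: clean commit" # expected: hook passes, commit created
-```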
-
-In `labs/submission3.md`, document:
-- Pre-commit hook setup process and configuration
-- Evidence of successful secret detection blocking commits
-- Test results showing both blocked and successful commits
-- Analysis of how automated secret scanning prevents security incidents
-
----
-
-## How to Submit
-
-1. Create a branch for this lab and push it to your fork:
-
- ```bash
- git switch -c feature/lab3
- # create labs/submission3.md with your findings
- git add labs/submission3.md
- git commit -m "docs: add lab3 submission"
- git push -u origin feature/lab3
- ```
-
-2. Open a PR from your fork's `feature/lab3` branch → **course repository's main branch**.
-
-3. In the PR description, include:
-
- ```text
- - [x] Task 1 done — SSH commit signing setup
- - [x] Task 2 done — Pre-commit secrets scanning setup
- ```
-
-4. **Copy the PR URL** and submit it via **Moodle before the deadline**.
-
----
-
-## Acceptance Criteria
-
-- ✅ Branch `feature/lab3` exists with commits for each task
-- ✅ File `labs/submission3.md` contains required analysis for both tasks
-- ✅ At least one commit shows **"Verified"** (signed via SSH) on GitHub
-- ✅ Local `.git/hooks/pre-commit` runs TruffleHog and Gitleaks via Docker and blocks secrets
-- ✅ PR from `feature/lab3` → **course repo main branch** is open
-- ✅ PR link submitted via Moodle before the deadline
-
----
-
-## Rubric (10 pts)
-
-| Criterion | Points |
-| ------------------------------------------------ | -----: |
-| Task 1 — SSH commit signing setup + analysis | **5** |
-| Task 2 — Pre-commit secrets scanning setup | **5** |
-| **Total** | **10** |
-
----
-
-## Guidelines
-
-- Use clear Markdown headers to organize sections in `submission3.md`
-- Include both command outputs and written analysis for each task
-- Document security configurations and testing procedures thoroughly
-- Demonstrate both successful and blocked operations for secret scanning
-
-
-**Security Configuration Notes**
-
-- Ensure the email on your commits matches your GitHub account for proper verification
-- Verify `gpg.format` is set to `ssh` for proper signing configuration
-- Test pre-commit hooks thoroughly with both legitimate and test secret content
-- Docker Desktop/Engine must be running for secret scanning tools
-- Ensure all commits are properly signed for verification on GitHub
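-
-For reference, the `gpg.format` setup mentioned above can be sketched as follows (the key path is illustrative; point it at your own public SSH key):
-
-```bash
-# One-time SSH commit-signing configuration
-git config --global gpg.format ssh
-git config --global user.signingkey ~/.ssh/id_ed25519.pub
-git config --global commit.gpgsign true
-
-# Verify the configuration took effect
-git config --global --get gpg.format    # should print: ssh
-```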
-
-
diff --git a/labs/lab4.md b/labs/lab4.md
deleted file mode 100644
index 12cc043d..00000000
--- a/labs/lab4.md
+++ /dev/null
@@ -1,325 +0,0 @@
-# Lab 4 — SBOM Generation & Software Composition Analysis
-
-
-
-
-
-> **Goal:** Generate Software Bills of Materials (SBOMs) for OWASP Juice Shop using Syft and Trivy, perform comprehensive Software Composition Analysis with Grype and Trivy, then compare the toolchain capabilities.
-> **Deliverable:** A PR from `feature/lab4` to the course repo with `labs/submission4.md` containing SBOM analysis, SCA findings, and comprehensive toolchain comparison. Submit the PR link via Moodle.
-
----
-
-## Overview
-
-In this lab you will practice:
-- Generating **SBOMs** with **Syft** and **Trivy** using Docker images for consistency
-- Performing **Software Composition Analysis (SCA)** with **Grype** (Anchore) and **Trivy**
-- **Comprehensive feature comparison** between **Syft+Grype** vs **Trivy all-in-one** approaches
-- **License analysis**, **vulnerability management**, and **supply chain security assessment**
-
-> Continue using the OWASP Juice Shop from previous labs (`bkimminich/juice-shop:v19.0.0`) as your target application.
-
----
-
-## Tasks
-
-### Task 1 — SBOM Generation with Syft and Trivy (4 pts)
-
-**Objective:** Generate comprehensive SBOMs using both Syft and Trivy Docker images, extracting maximum metadata including licenses, file information, and dependency relationships.
-
-#### 1.1: Setup SBOM Generation Environment
-
-```bash
-# Prepare working directory
-mkdir -p labs/lab4/{syft,trivy,comparison,analysis}
-
-# Pull required Docker images
-docker pull anchore/syft:latest
-docker pull aquasec/trivy:latest
-docker pull anchore/grype:latest
-```
-
-#### 1.2: Comprehensive SBOM Generation with Syft
-
-```bash
-# Syft native JSON format (most detailed)
-docker run --rm -v /var/run/docker.sock:/var/run/docker.sock \
- -v "$(pwd)":/tmp anchore/syft:latest \
- bkimminich/juice-shop:v19.0.0 -o syft-json=/tmp/labs/lab4/syft/juice-shop-syft-native.json
-
-# Human-readable table
-docker run --rm -v /var/run/docker.sock:/var/run/docker.sock \
- -v "$(pwd)":/tmp anchore/syft:latest \
- bkimminich/juice-shop:v19.0.0 -o table=/tmp/labs/lab4/syft/juice-shop-syft-table.txt
-
-# Extract licenses from the native JSON format
-echo "Extracting licenses from Syft SBOM..." > labs/lab4/syft/juice-shop-licenses.txt
-jq -r '.artifacts[] | select(.licenses != null and (.licenses | length > 0)) | "\(.name) | \(.version) | \(.licenses | map(.value) | join(", "))"' \
- labs/lab4/syft/juice-shop-syft-native.json >> labs/lab4/syft/juice-shop-licenses.txt
-```
-
-#### 1.3: Comprehensive SBOM Generation with Trivy
-
-```bash
-# SBOM with license information
-docker run --rm -v /var/run/docker.sock:/var/run/docker.sock \
- -v "$(pwd)":/tmp aquasec/trivy:latest image \
- --format json --output /tmp/labs/lab4/trivy/juice-shop-trivy-detailed.json \
- --list-all-pkgs bkimminich/juice-shop:v19.0.0
-
-# Human-readable table with package details
-docker run --rm -v /var/run/docker.sock:/var/run/docker.sock \
- -v "$(pwd)":/tmp aquasec/trivy:latest image \
- --format table --output /tmp/labs/lab4/trivy/juice-shop-trivy-table.txt \
- --list-all-pkgs bkimminich/juice-shop:v19.0.0
-```
-
-#### 1.4: SBOM Analysis and Extraction
-
-```bash
-# Component Analysis
-echo "=== SBOM Component Analysis ===" > labs/lab4/analysis/sbom-analysis.txt
-echo "" >> labs/lab4/analysis/sbom-analysis.txt
-echo "Syft Package Counts:" >> labs/lab4/analysis/sbom-analysis.txt
-jq -r '.artifacts[] | .type' labs/lab4/syft/juice-shop-syft-native.json | sort | uniq -c >> labs/lab4/analysis/sbom-analysis.txt
-
-echo "" >> labs/lab4/analysis/sbom-analysis.txt
-echo "Trivy Package Counts:" >> labs/lab4/analysis/sbom-analysis.txt
-jq -r '.Results[] as $result | $result.Packages[]? | "\($result.Target // "Unknown") - \(.Type // "unknown")"' \
- labs/lab4/trivy/juice-shop-trivy-detailed.json | sort | uniq -c >> labs/lab4/analysis/sbom-analysis.txt
-
-# License Extraction
-echo "" >> labs/lab4/analysis/sbom-analysis.txt
-echo "=== License Analysis ===" >> labs/lab4/analysis/sbom-analysis.txt
-echo "" >> labs/lab4/analysis/sbom-analysis.txt
-echo "Syft Licenses:" >> labs/lab4/analysis/sbom-analysis.txt
-jq -r '.artifacts[]? | select(.licenses != null) | .licenses[]? | .value' \
- labs/lab4/syft/juice-shop-syft-native.json | sort | uniq -c >> labs/lab4/analysis/sbom-analysis.txt
-
-echo "" >> labs/lab4/analysis/sbom-analysis.txt
-echo "Trivy Licenses (OS Packages):" >> labs/lab4/analysis/sbom-analysis.txt
-jq -r '.Results[] | select(.Class // "" | contains("os-pkgs")) | .Packages[]? | select(.Licenses != null) | .Licenses[]?' \
- labs/lab4/trivy/juice-shop-trivy-detailed.json | sort | uniq -c >> labs/lab4/analysis/sbom-analysis.txt
-
-echo "" >> labs/lab4/analysis/sbom-analysis.txt
-echo "Trivy Licenses (Node.js):" >> labs/lab4/analysis/sbom-analysis.txt
-jq -r '.Results[] | select(.Class // "" | contains("lang-pkgs")) | .Packages[]? | select(.Licenses != null) | .Licenses[]?' \
- labs/lab4/trivy/juice-shop-trivy-detailed.json | sort | uniq -c >> labs/lab4/analysis/sbom-analysis.txt
-```
-
-In `labs/submission4.md`, document:
-- **Package Type Distribution** comparison between Syft and Trivy
-- **Dependency Discovery Analysis** - which tool found more/better dependency data
-- **License Discovery Analysis** - which tool found more/better license data
-
----
-
-### Task 2 — Software Composition Analysis with Grype and Trivy (3 pts)
-
-**Objective:** Perform comprehensive vulnerability analysis using both Grype (designed for Syft SBOMs) and Trivy's built-in vulnerability scanning.
-
-#### 2.1: SCA with Grype (Anchore)
-
-```bash
-# Scan using the Syft-generated SBOM
-docker run --rm -v "$(pwd)":/tmp anchore/grype:latest \
- sbom:/tmp/labs/lab4/syft/juice-shop-syft-native.json \
- -o json > labs/lab4/syft/grype-vuln-results.json
-
-# Human-readable vulnerability report
-docker run --rm -v "$(pwd)":/tmp anchore/grype:latest \
- sbom:/tmp/labs/lab4/syft/juice-shop-syft-native.json \
- -o table > labs/lab4/syft/grype-vuln-table.txt
-```
-
-#### 2.2: SCA with Trivy (All-in-One)
-
-```bash
-# Full vulnerability scan with detailed output
-docker run --rm -v /var/run/docker.sock:/var/run/docker.sock \
- -v "$(pwd)":/tmp aquasec/trivy:latest image \
- --format json --output /tmp/labs/lab4/trivy/trivy-vuln-detailed.json \
- bkimminich/juice-shop:v19.0.0
-
-# Secrets scanning
-docker run --rm -v /var/run/docker.sock:/var/run/docker.sock \
- -v "$(pwd)":/tmp aquasec/trivy:latest image \
- --scanners secret --format table \
- --output /tmp/labs/lab4/trivy/trivy-secrets.txt \
- bkimminich/juice-shop:v19.0.0
-
-# License compliance scanning
-docker run --rm -v /var/run/docker.sock:/var/run/docker.sock \
- -v "$(pwd)":/tmp aquasec/trivy:latest image \
- --scanners license --format json \
- --output /tmp/labs/lab4/trivy/trivy-licenses.json \
- bkimminich/juice-shop:v19.0.0
-```
-
-#### 2.3: Vulnerability Analysis and Risk Assessment
-
-```bash
-# Count vulnerabilities by severity
-echo "=== Vulnerability Analysis ===" > labs/lab4/analysis/vulnerability-analysis.txt
-echo "" >> labs/lab4/analysis/vulnerability-analysis.txt
-echo "Grype Vulnerabilities by Severity:" >> labs/lab4/analysis/vulnerability-analysis.txt
-jq -r '.matches[]? | .vulnerability.severity' labs/lab4/syft/grype-vuln-results.json | sort | uniq -c >> labs/lab4/analysis/vulnerability-analysis.txt
-
-echo "" >> labs/lab4/analysis/vulnerability-analysis.txt
-echo "Trivy Vulnerabilities by Severity:" >> labs/lab4/analysis/vulnerability-analysis.txt
-jq -r '.Results[]?.Vulnerabilities[]? | .Severity' labs/lab4/trivy/trivy-vuln-detailed.json | sort | uniq -c >> labs/lab4/analysis/vulnerability-analysis.txt
-
-# License comparison summary
-echo "" >> labs/lab4/analysis/vulnerability-analysis.txt
-echo "=== License Analysis Summary ===" >> labs/lab4/analysis/vulnerability-analysis.txt
-echo "Tool Comparison:" >> labs/lab4/analysis/vulnerability-analysis.txt
-if [ -f labs/lab4/syft/juice-shop-syft-native.json ]; then
- syft_licenses=$(jq -r '.artifacts[] | select(.licenses != null) | .licenses[].value' labs/lab4/syft/juice-shop-syft-native.json 2>/dev/null | sort | uniq | wc -l)
- echo "- Syft found $syft_licenses unique license types" >> labs/lab4/analysis/vulnerability-analysis.txt
-fi
-if [ -f labs/lab4/trivy/trivy-licenses.json ]; then
- trivy_licenses=$(jq -r '.Results[].Licenses[]?.Name' labs/lab4/trivy/trivy-licenses.json 2>/dev/null | sort | uniq | wc -l)
- echo "- Trivy found $trivy_licenses unique license types" >> labs/lab4/analysis/vulnerability-analysis.txt
-fi
-```
-
-In `labs/submission4.md`, document:
-- **SCA Tool Comparison** - vulnerability detection capabilities
-- **Critical Vulnerabilities Analysis** - top 5 most critical findings with remediation
-- **License Compliance Assessment** - risky licenses and compliance recommendations
-- **Additional Security Features** - secrets scanning results
-
----
-
-### Task 3 — Toolchain Comparison: Syft+Grype vs Trivy All-in-One (3 pts)
-
-**Objective:** Comprehensive comparison of the specialized toolchain (Syft+Grype) versus the integrated solution (Trivy) across multiple dimensions.
-
-#### 3.1: Accuracy and Coverage Analysis
-
-```bash
-# Compare package detection
-echo "=== Package Detection Comparison ===" > labs/lab4/comparison/accuracy-analysis.txt
-
-# Extract unique packages from each tool
-jq -r '.artifacts[] | "\(.name)@\(.version)"' labs/lab4/syft/juice-shop-syft-native.json | sort > labs/lab4/comparison/syft-packages.txt
-jq -r '.Results[]?.Packages[]? | "\(.Name)@\(.Version)"' labs/lab4/trivy/juice-shop-trivy-detailed.json | sort > labs/lab4/comparison/trivy-packages.txt
-
-# Find packages detected by both tools
-comm -12 labs/lab4/comparison/syft-packages.txt labs/lab4/comparison/trivy-packages.txt > labs/lab4/comparison/common-packages.txt
-
-# Find packages unique to each tool
-comm -23 labs/lab4/comparison/syft-packages.txt labs/lab4/comparison/trivy-packages.txt > labs/lab4/comparison/syft-only.txt
-comm -13 labs/lab4/comparison/syft-packages.txt labs/lab4/comparison/trivy-packages.txt > labs/lab4/comparison/trivy-only.txt
-
-echo "Packages detected by both tools: $(wc -l < labs/lab4/comparison/common-packages.txt)" >> labs/lab4/comparison/accuracy-analysis.txt
-echo "Packages only detected by Syft: $(wc -l < labs/lab4/comparison/syft-only.txt)" >> labs/lab4/comparison/accuracy-analysis.txt
-echo "Packages only detected by Trivy: $(wc -l < labs/lab4/comparison/trivy-only.txt)" >> labs/lab4/comparison/accuracy-analysis.txt
-
-# Compare vulnerability findings
-echo "" >> labs/lab4/comparison/accuracy-analysis.txt
-echo "=== Vulnerability Detection Overlap ===" >> labs/lab4/comparison/accuracy-analysis.txt
-
-# Extract CVE IDs
-jq -r '.matches[]? | .vulnerability.id' labs/lab4/syft/grype-vuln-results.json | sort | uniq > labs/lab4/comparison/grype-cves.txt
-jq -r '.Results[]?.Vulnerabilities[]? | .VulnerabilityID' labs/lab4/trivy/trivy-vuln-detailed.json | sort | uniq > labs/lab4/comparison/trivy-cves.txt
-
-echo "CVEs found by Grype: $(wc -l < labs/lab4/comparison/grype-cves.txt)" >> labs/lab4/comparison/accuracy-analysis.txt
-echo "CVEs found by Trivy: $(wc -l < labs/lab4/comparison/trivy-cves.txt)" >> labs/lab4/comparison/accuracy-analysis.txt
-echo "Common CVEs: $(comm -12 labs/lab4/comparison/grype-cves.txt labs/lab4/comparison/trivy-cves.txt | wc -l)" >> labs/lab4/comparison/accuracy-analysis.txt
-```
-
-In `labs/submission4.md`, document:
-- **Accuracy Analysis** - package detection and vulnerability overlap quantified
-- **Tool Strengths and Weaknesses** - practical observations from your testing
-- **Use Case Recommendations** - when to choose Syft+Grype vs Trivy
-- **Integration Considerations** - CI/CD, automation, and operational aspects
-
----
-
-## How to Submit
-
-1. Create a branch for this lab and push it to your fork:
-
- ```bash
- git switch -c feature/lab4
- # create labs/submission4.md with your findings
- git add labs/submission4.md labs/lab4/
- git commit -m "docs: add lab4 submission - SBOM generation and SCA comparison"
- git push -u origin feature/lab4
- ```
-
-2. Open a PR from your fork's `feature/lab4` branch → **course repository's main branch**.
-
-3. In the PR description, include:
-
- ```text
- - [x] Task 1 done — SBOM Generation with Syft and Trivy
- - [x] Task 2 done — SCA with Grype and Trivy
- - [x] Task 3 done — Comprehensive Toolchain Comparison
- ```
-
-4. **Copy the PR URL** and submit it via **Moodle before the deadline**.
-
----
-
-## Acceptance Criteria
-
-- ✅ Branch `feature/lab4` exists with commits for each task
-- ✅ File `labs/submission4.md` contains required analysis for Tasks 1-3
-- ✅ SBOM generation completed successfully with both Syft and Trivy
-- ✅ Comprehensive SCA performed with both Grype and Trivy vulnerability scanning
-- ✅ Quantitative toolchain comparison completed with accuracy analysis
-- ✅ All generated SBOMs, vulnerability reports, and analysis files committed
-- ✅ PR from `feature/lab4` → **course repo main branch** is open
-- ✅ PR link submitted via Moodle before the deadline
-
----
-
-## Rubric (10 pts)
-
-| Criterion | Points |
-| ---------------------------------------------------------------- | -----: |
-| Task 1 — SBOM generation with Syft and Trivy + analysis | **4** |
-| Task 2 — SCA with Grype and Trivy + vulnerability assessment | **3** |
-| Task 3 — Comprehensive toolchain comparison + recommendations | **3** |
-| **Total** | **10** |
-
----
-
-## Guidelines
-
-- Use clear Markdown headers to organize sections in `submission4.md`
-- Include both quantitative metrics and qualitative analysis for each task
-- Document all Docker commands used and any issues encountered
-- Provide actionable security recommendations based on findings
-- Focus on practical insights over theoretical comparisons
-
-
-**SBOM Quality Notes**
-
-- NYU research (SBOMit project) shows metadata-based SBOM generation has accuracy limitations
-- Pay attention to packages detected by one tool but not the other - document these discrepancies
-- Consider the "lying SBOM" problem when evaluating tool accuracy
-
-
-
-
-**SCA Best Practices**
-
-- Always cross-reference critical vulnerabilities between tools before taking action
-- Evaluate both direct and transitive dependency risks in your analysis
-- Consider CVSS scores, exploitability, and context when prioritizing vulnerabilities
-- Document false positives and tool-specific detection patterns
-
-
-
-
-**Comparison Methodology**
-
-- Use consistent container image and execution environment for fair comparison
-- Focus on practical operational differences, not just feature checklists
-- Consider maintenance overhead and community support in your analysis
-- Provide specific use case recommendations based on quantitative findings
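-
-One concrete way to capture these operational differences is a rough wall-clock comparison on the same image (the commands mirror Task 1; output is discarded since only timing matters here):
-
-```bash
-time docker run --rm -v /var/run/docker.sock:/var/run/docker.sock \
-  anchore/syft:latest bkimminich/juice-shop:v19.0.0 -o syft-json > /dev/null
-
-time docker run --rm -v /var/run/docker.sock:/var/run/docker.sock \
-  aquasec/trivy:latest image --format json bkimminich/juice-shop:v19.0.0 > /dev/null
-```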
-
-
diff --git a/labs/lab5.md b/labs/lab5.md
deleted file mode 100644
index 472f9367..00000000
--- a/labs/lab5.md
+++ /dev/null
@@ -1,480 +0,0 @@
-# Lab 5 — Security Analysis: SAST & DAST of OWASP Juice Shop
-
-
-
-
-
-> **Goal:** Perform Static Application Security Testing (SAST) using Semgrep and Dynamic Application Security Testing (DAST) using multiple tools (ZAP, Nuclei, Nikto, SQLmap) against OWASP Juice Shop to identify security vulnerabilities and compare tool effectiveness.
-> **Deliverable:** A PR from `feature/lab5` to the course repo with `labs/submission5.md` containing SAST findings, DAST results from multiple tools, and security recommendations. Submit the PR link via Moodle.
-
----
-
-## Overview
-
-In this lab you will practice:
-- Performing **Static Application Security Testing (SAST)** with **Semgrep** using Docker containers
-- Conducting **Dynamic Application Security Testing (DAST)** using multiple specialized tools
-- **Tool comparison analysis** between different DAST tools (ZAP, Nuclei, Nikto, SQLmap)
-- **Vulnerability correlation** between SAST and DAST findings
-- **Security tool selection** for different vulnerability types
-
-These skills are essential for DevSecOps integration and security testing automation.
-
-> Use the OWASP Juice Shop (`bkimminich/juice-shop:v19.0.0`) as your target application, with access to its source code for SAST analysis.
-
----
-
-## Tasks
-
-### Task 1 — Static Application Security Testing with Semgrep (3 pts)
-⏱️ **Estimated time:** 15-20 minutes
-
-**Objective:** Perform SAST analysis using Semgrep to identify security vulnerabilities in the OWASP Juice Shop source code.
-
-#### 1.1: Setup SAST Environment
-
-1. **Prepare Working Directory and Clone Source:**
-
- ```bash
- mkdir -p labs/lab5/{semgrep,zap,nuclei,nikto,sqlmap,analysis}
- git clone https://github.com/juice-shop/juice-shop.git --depth 1 --branch v19.0.0 labs/lab5/semgrep/juice-shop
- ```
-
-#### 1.2: SAST Analysis with Semgrep
-
-1. **Run Semgrep Security Scan:**
-
- ```bash
- docker run --rm -v "$(pwd)/labs/lab5/semgrep/juice-shop":/src \
- -v "$(pwd)/labs/lab5/semgrep":/output \
- semgrep/semgrep:latest \
- semgrep --config=p/security-audit --config=p/owasp-top-ten \
- --json --output=/output/semgrep-results.json /src
-
- # Generate human-readable security report
- docker run --rm -v "$(pwd)/labs/lab5/semgrep/juice-shop":/src \
- -v "$(pwd)/labs/lab5/semgrep":/output \
- semgrep/semgrep:latest \
- semgrep --config=p/security-audit --config=p/owasp-top-ten \
- --text --output=/output/semgrep-report.txt /src
- ```
-
-
-#### 1.3: SAST Results Analysis
-
-1. **Analyze SAST Results:**
-
- ```bash
- echo "=== SAST Analysis Report ===" > labs/lab5/analysis/sast-analysis.txt
- jq '.results | length' labs/lab5/semgrep/semgrep-results.json >> labs/lab5/analysis/sast-analysis.txt
- ```
-
-In `labs/submission5.md`, document:
-
-**Required Sections:**
-
-1. SAST Tool Effectiveness:
- - Describe what types of vulnerabilities Semgrep detected
- - Evaluate coverage (how many files scanned, how many findings)
-
-2. Critical Vulnerability Analysis:
- - List **5 most critical findings** from Semgrep results
- - For each vulnerability include:
- - Vulnerability type (e.g., SQL Injection, Hardcoded Secret)
- - File path and line number
- - Severity level
-
----
-
-### Task 2 — Dynamic Application Security Testing with Multiple Tools (5 pts)
-⏱️ **Estimated time:** 60-90 minutes (most of this is unattended scan time)
-
-**Objective:** Perform DAST analysis using ZAP (with authentication) and specialized tools (Nuclei, Nikto, SQLmap) to compare their effectiveness.
-
-#### 2.1: Setup DAST Environment
-
-1. **Start OWASP Juice Shop:**
-
- ```bash
- docker run -d --name juice-shop-lab5 -p 3000:3000 bkimminich/juice-shop:v19.0.0
-
- # Wait for application to start
- sleep 10
-
- # Verify it's running
- curl -s http://localhost:3000 | head -n 5
- ```
-
-
-#### 2.2: OWASP ZAP Unauthenticated Scanning
-⏱️ ~5 minutes
-
-1. **Run Unauthenticated ZAP Baseline Scan:**
-
- ```bash
- # Baseline scan without authentication
- docker run --rm --network host \
- -v "$(pwd)/labs/lab5/zap":/zap/wrk/:rw \
- zaproxy/zap-stable:latest \
- zap-baseline.py -t http://localhost:3000 \
- -r report-noauth.html -J zap-report-noauth.json
- ```
-
-
- > This baseline scan discovers vulnerabilities in publicly accessible endpoints.
-
-#### 2.3: OWASP ZAP Authenticated Scanning
-⏱️ ~20-30 minutes
-
-> **⚠️ Important:** Authenticated scanning uses ZAP's Automation Framework. The configuration file is pre-created in `labs/lab5/scripts/zap-auth.yaml` for consistency.
-
-1. **Verify Authentication Endpoint:**
-
- ```bash
- # Test login with admin credentials (default Juice Shop account)
- curl -s -X POST http://localhost:3000/rest/user/login \
- -H 'Content-Type: application/json' \
- -d '{"email":"admin@juice-sh.op","password":"admin123"}' | jq '.authentication.token'
- ```
-
- You should see a JWT token returned, confirming the endpoint works.
-
-2. **Run Authenticated ZAP Scan:**
-
- ```bash
- docker run --rm --network host \
- -v "$(pwd)/labs/lab5":/zap/wrk/:rw \
- zaproxy/zap-stable:latest \
- zap.sh -cmd \
- -autorun /zap/wrk/scripts/zap-auth.yaml
- ```
-
-
- 📝 **ZAP Configuration Explained**
-
- The `labs/lab5/scripts/zap-auth.yaml` file configures:
- - **Authentication**: JSON-based login with admin credentials
- - **Session Management**: Cookie-based session tracking
- - **Verification**: Uses regex to detect successful login (looks for "authentication" in response)
- - **Scanning Jobs**: Spider → AJAX Spider → Passive Scan → Active Scan → Report Generation
- - **AJAX Spider**: Discovers ~10x more URLs by executing JavaScript (finds dynamic endpoints)
-
-
-
-
- **What this scan discovers:**
- - 🔓 **Authenticated endpoints** like `/rest/admin/application-configuration`
- - 🛒 **User-specific features** (basket, orders, payment, profile)
- - 🔐 **Admin panel** vulnerabilities
- - 📊 **10x more URLs** than unauthenticated scan (AJAX spider finds ~1,200 URLs)
-
- **Expected output:**
- ```
- Job spider found 112 URLs
- Job spiderAjax found 1,199 URLs # AJAX spider discovers much more!
- Job report generated report /zap/wrk/report-auth.html
- ```
-
- > **Key Insight:** Look for `http://localhost:3000/rest/admin/` endpoints in the output - these prove authentication is working!
-
-3. **Compare Authenticated vs Unauthenticated Scans:**
-
- Run: `bash labs/lab5/scripts/compare_zap.sh`
-
-#### 2.4: Multi-Tool Specialized Scanning
-
-> **💡 Networking Note:** Docker networking varies by tool:
-> - `--network host`: Shares host's network (ZAP, Nuclei, Nikto)
-> - `--network container:NAME`: Shares another container's network namespace (SQLmap)
->
-> Use the pattern shown in each command for best compatibility.
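-
-A quick way to sanity-check both patterns before the long scans (the `curlimages/curl` image is an assumption; any image with curl works, and the container name comes from step 2.1):
-
-```bash
-# Host networking: the tool reaches the host's localhost:3000
-docker run --rm --network host curlimages/curl:latest \
-  -s -o /dev/null -w "%{http_code}\n" http://localhost:3000
-
-# Shared namespace: the tool reaches Juice Shop's own localhost:3000
-docker run --rm --network container:juice-shop-lab5 curlimages/curl:latest \
-  -s -o /dev/null -w "%{http_code}\n" http://localhost:3000
-```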
-
-1. **Nuclei Template-Based Scan:** ⏱️ ~5 minutes
-
- ```bash
- docker run --rm --network host \
- -v "$(pwd)/labs/lab5/nuclei":/app \
- projectdiscovery/nuclei:latest \
- -ut -u http://localhost:3000 \
- -jsonl -o /app/nuclei-results.json
- ```
-
-
-2. **Nikto Web Server Scan:** ⏱️ ~5-10 minutes
-
- ```bash
- docker run --rm --network host \
- -v "$(pwd)/labs/lab5/nikto":/tmp \
- sullo/nikto:latest \
- -h http://localhost:3000 -o /tmp/nikto-results.txt
- ```
-
-
-3. **SQLmap SQL Injection Test:** ⏱️ ~10-20 minutes per endpoint
-
- > **Network Solution:** Share the network namespace with Juice Shop container so SQLmap can access `localhost:3000`
-
- ```bash
- # Test both vulnerable endpoints - Search (GET) and Login (POST JSON)
- docker run --rm \
- --network container:juice-shop-lab5 \
- -v "$(pwd)/labs/lab5/sqlmap":/output \
- sqlmapproject/sqlmap \
- -u "http://localhost:3000/rest/products/search?q=*" \
- --dbms=sqlite --batch --level=3 --risk=2 \
- --technique=B --threads=5 --output-dir=/output
-
- docker run --rm \
- --network container:juice-shop-lab5 \
- -v "$(pwd)/labs/lab5/sqlmap":/output \
- sqlmapproject/sqlmap \
- -u "http://localhost:3000/rest/user/login" \
- --data '{"email":"*","password":"test"}' \
- --method POST \
- --headers='Content-Type: application/json' \
- --dbms=sqlite --batch --level=5 --risk=3 \
- --technique=BT --threads=5 --output-dir=/output \
- --dump
- ```
-
-
- **How this works:**
-
- **Networking:**
- - `--network container:juice-shop-lab5` - Shares network namespace with Juice Shop container
- - Inside SQLmap container, `localhost:3000` now directly reaches Juice Shop (no DNS/port forwarding needed)
-
- **Endpoint 1 - Search (GET):**
- - URL: `http://localhost:3000/rest/products/search?q=*`
- - `*` marks the `q` parameter for injection testing
- - `--technique=B` - Boolean-based blind SQL injection (true/false responses)
- - Faster scan, detects logic-based vulnerabilities
-
- **Endpoint 2 - Login (POST JSON):**
- - URL: `http://localhost:3000/rest/user/login`
- - Tests JSON `email` parameter with `*` marker
- - `--technique=BT` - Boolean + Time-based blind SQL injection
- - Time-based detects when responses take longer (SQL `SLEEP()` commands)
- - More thorough, bypasses authentication without valid credentials
-
- **Database & Extraction:**
- - `--dbms=sqlite` - Optimizes for SQLite-specific syntax (Juice Shop uses SQLite)
- - `--dump` - Automatically extracts database contents after confirming vulnerability
- - Will extract Users table including emails and bcrypt password hashes
-
- **Shell escaping:**
- Single quotes `'...'` wrap the JSON `--data` payload, so the double quotes inside the JSON need no extra escaping
-
- > **Expected Results:** SQLmap will find SQL injection in both endpoints and extract ~20 user accounts with hashed passwords. Scan duration: 10-20 minutes.
-
-#### 2.5: DAST Results Analysis
-
-1. **Compare Tool Results:**
-
- Run: `bash labs/lab5/scripts/summarize_dast.sh`
-
-In `labs/submission5.md`, document:
-
-**Required Sections:**
-
-1. Authenticated vs Unauthenticated Scanning:
- - Compare URL discovery count (use numbers from `compare_zap.sh` output)
- - List examples of admin/authenticated endpoints discovered
- - Explain why authenticated scanning matters for security testing
-
-2. Tool Comparison Matrix:
- - Create a comparison table with columns: Tool | Findings | Severity Breakdown | Best Use Case
- - Include all 4 DAST tools: ZAP, Nuclei, Nikto, SQLmap
- - Use actual numbers from your scan outputs
-
-3. Tool-Specific Strengths:
- - Describe what each tool excels at:
- - **ZAP**: (e.g., comprehensive scanning, authentication support)
- - **Nuclei**: (e.g., speed, known CVE detection)
- - **Nikto**: (e.g., server misconfiguration)
- - **SQLmap**: (e.g., deep SQL injection analysis)
- - Provide 1-2 example findings from each tool
-
----
-
-### Task 3 — SAST/DAST Correlation and Security Assessment (2 pts)
-⏱️ **Estimated time:** 20-30 minutes
-
-**Objective:** Correlate findings from SAST and DAST approaches to provide comprehensive security assessment.
-
-#### 3.1: SAST/DAST Correlation
-
-1. **Create Correlation Analysis:**
-
- ```bash
- echo "=== SAST/DAST Correlation Report ===" > labs/lab5/analysis/correlation.txt
-
- # Count SAST findings
- sast_count=$(jq '.results | length' labs/lab5/semgrep/semgrep-results.json 2>/dev/null || echo "0")
-
- # Count DAST findings from all tools
- zap_med=$(grep -o "class=\"risk-2\"" labs/lab5/zap/report-auth.html 2>/dev/null | wc -l)
- zap_high=$(grep -o "class=\"risk-3\"" labs/lab5/zap/report-auth.html 2>/dev/null | wc -l)
- zap_total=$(( (zap_med / 2) + (zap_high / 2) ))  # each alert appears twice in the HTML report
- nuclei_count=$(cat labs/lab5/nuclei/nuclei-results.json 2>/dev/null | wc -l)
- nikto_count=$(awk '/^\+ /{n++} END{print n+0}' labs/lab5/nikto/nikto-results.txt 2>/dev/null || echo "0")
-
- # Count SQLmap findings
- sqlmap_csv=$(find labs/lab5/sqlmap -name "results-*.csv" 2>/dev/null | head -1)
- if [ -f "$sqlmap_csv" ]; then
- sqlmap_count=$(tail -n +2 "$sqlmap_csv" | grep -v '^$' | wc -l)
- else
- sqlmap_count=0
- fi
-
- echo "Security Testing Results Summary:" >> labs/lab5/analysis/correlation.txt
- echo "" >> labs/lab5/analysis/correlation.txt
- echo "SAST (Semgrep): $sast_count code-level findings" >> labs/lab5/analysis/correlation.txt
- echo "DAST (ZAP authenticated): $zap_total alerts" >> labs/lab5/analysis/correlation.txt
- echo "DAST (Nuclei): $nuclei_count template matches" >> labs/lab5/analysis/correlation.txt
- echo "DAST (Nikto): $nikto_count server issues" >> labs/lab5/analysis/correlation.txt
- echo "DAST (SQLmap): $sqlmap_count SQL injection vulnerabilities" >> labs/lab5/analysis/correlation.txt
- echo "" >> labs/lab5/analysis/correlation.txt
-
- echo "Key Insights:" >> labs/lab5/analysis/correlation.txt
- echo "" >> labs/lab5/analysis/correlation.txt
- echo "SAST (Static Analysis):" >> labs/lab5/analysis/correlation.txt
- echo " - Finds code-level vulnerabilities before deployment" >> labs/lab5/analysis/correlation.txt
- echo " - Detects: hardcoded secrets, SQL injection patterns, insecure crypto" >> labs/lab5/analysis/correlation.txt
- echo " - Fast feedback in development phase" >> labs/lab5/analysis/correlation.txt
- echo "" >> labs/lab5/analysis/correlation.txt
- echo "DAST (Dynamic Analysis):" >> labs/lab5/analysis/correlation.txt
- echo " - Finds runtime configuration and deployment issues" >> labs/lab5/analysis/correlation.txt
- echo " - Detects: missing security headers, authentication flaws, server misconfigs" >> labs/lab5/analysis/correlation.txt
- echo " - Authenticated scanning reveals 60%+ more attack surface" >> labs/lab5/analysis/correlation.txt
- echo "" >> labs/lab5/analysis/correlation.txt
- echo "Recommendation: Use BOTH approaches for comprehensive security coverage" >> labs/lab5/analysis/correlation.txt
-
- cat labs/lab5/analysis/correlation.txt
- ```
-
-
-
-In `labs/submission5.md`, document:
-
-**Required Sections:**
-
-1. SAST vs DAST Comparison:
- - Compare total findings: SAST count vs combined DAST count
- - Identify 2-3 vulnerability types found ONLY by SAST
- - Identify 2-3 vulnerability types found ONLY by DAST
- - Explain why each approach finds different things
-
----
-
-## Acceptance Criteria
-
-- ✅ Branch `feature/lab5` exists with commits for each task.
-- ✅ File `labs/submission5.md` contains required analysis for Tasks 1-3.
-- ✅ SAST analysis completed with Semgrep.
-- ✅ DAST analysis completed with ZAP and specialized tools.
-- ✅ SAST/DAST correlation analysis completed.
-- ✅ All generated reports, configurations, and analysis files committed.
-- ✅ PR from `feature/lab5` → **course repo main branch** is open.
-- ✅ PR link submitted via Moodle before the deadline.
-
----
-
-## Cleanup
-
-After completing the lab:
-
-```bash
-# Stop and remove containers
-docker stop juice-shop-lab5
-docker rm juice-shop-lab5
-
-# Optional: Remove large source code directory (~200MB)
-# rm -rf labs/lab5/semgrep/juice-shop
-
-# Check disk space recovered
-docker system df
-```
-
----
-
-## How to Submit
-
-1. Create a branch for this lab and push it to your fork:
-
- ```bash
- git switch -c feature/lab5
- # create labs/submission5.md with your findings
- git add labs/submission5.md labs/lab5/
- git commit -m "docs: add lab5 submission - SAST/multi-approach DAST security analysis"
- git push -u origin feature/lab5
- ```
-
-2. Open a PR from your fork's `feature/lab5` branch → **course repository's main branch**.
-
-3. In the PR description, include:
-
- ```text
- - [x] Task 1 done — SAST Analysis with Semgrep
- - [x] Task 2 done — DAST Analysis (ZAP + Nuclei + Nikto + SQLmap)
- - [x] Task 3 done — SAST/DAST Correlation
- ```
-
-4. **Copy the PR URL** and submit it via **Moodle before the deadline**.
-
----
-
-## Rubric (10 pts)
-
-| Criterion | Points |
-| ------------------------------------------------------------------- | -----: |
-| Task 1 — SAST with Semgrep + basic analysis | **3** |
-| Task 2 — DAST analysis (ZAP + Nuclei + Nikto + SQLmap) + comparison | **5** |
-| Task 3 — SAST/DAST correlation + recommendations | **2** |
-| **Total** | **10** |
-
----
-
-## Guidelines
-
-- Use clear Markdown headers to organize sections in `submission5.md`.
-- Include evidence from tool outputs to support your analysis.
-- Focus on practical insights about when to use each tool in a DevSecOps workflow.
-- Provide actionable security recommendations based on findings.
-
-
-**Tool Comparison Reference**
-
-**SAST Tool:**
-- **Semgrep**: Static code analysis using pattern-based security rulesets
-
-**DAST Tools:**
-- **ZAP**: Comprehensive web application scanner with integrated reporting
-- **Nuclei**: Fast template-based vulnerability scanner with community templates
-- **Nikto**: Web server vulnerability scanner for server misconfigurations
-- **SQLmap**: Specialized SQL injection testing tool
-
-**Tool Selection in DevSecOps:**
-- **Semgrep**: Early in development pipeline (pre-commit, PR checks)
-- **ZAP**: Staging/QA environment for comprehensive web app testing
-- **Nuclei**: Quick scans for known CVEs in any environment
-- **Nikto**: Web server security assessment during deployment
-- **SQLmap**: Targeted SQL injection testing when SAST/DAST indicate database issues
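-
-As a sketch of the "early in the pipeline" stage above, a minimal GitHub Actions job could run Semgrep on every pull request (workflow and job names here are illustrative, not part of the lab):
-
-```yaml
-# .github/workflows/semgrep.yml (illustrative)
-name: semgrep-pr-check
-on: [pull_request]
-jobs:
-  semgrep:
-    runs-on: ubuntu-latest
-    steps:
-      - uses: actions/checkout@v4
-      - run: pip install semgrep
-      # "auto" pulls community rulesets matching the repo's languages;
-      # --error makes the job fail when findings are reported
-      - run: semgrep --config auto --error .
-```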
-
-
-
-
-**Expected Vulnerability Categories**
-
-**SAST typically finds:**
-- Hardcoded credentials and API keys in source code
-- Insecure cryptographic usage patterns
-- Code-level injection vulnerabilities (SQL, command, etc.)
-- Path traversal and insecure file handling
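-
-For the hardcoded-credential case, a minimal custom Semgrep rule might look like this (rule id and message are illustrative):
-
-```yaml
-rules:
-  - id: hardcoded-password
-    pattern: password = "..."
-    message: Hardcoded password detected in source code
-    languages: [python]
-    severity: ERROR
-```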
-
-**DAST typically finds:**
-- Authentication and session management issues
-- Runtime configuration problems (security headers, SSL/TLS)
-- XSS, CSRF, and other runtime exploitation vectors
-- Information disclosure through HTTP responses
-
-
diff --git a/labs/lab6.md b/labs/lab6.md
deleted file mode 100644
index a772e4a4..00000000
--- a/labs/lab6.md
+++ /dev/null
@@ -1,568 +0,0 @@
-# Lab 6 — Infrastructure-as-Code Security: Scanning & Policy Enforcement
-
-
-
-
-
-> **Goal:** Perform security analysis on vulnerable Infrastructure-as-Code using multiple scanning tools (tfsec, Checkov, Terrascan for Terraform; KICS for Pulumi and Ansible) and conduct comparative analysis to identify misconfigurations and security issues.
-> **Deliverable:** A PR from `feature/lab6` to the course repo with `labs/submission6.md` containing IaC security findings, tool comparison analysis, and security insights. Submit the PR link via Moodle.
-
----
-
-## Overview
-
-In this lab you will practice:
-- Scanning **Terraform** infrastructure code with multiple security tools (tfsec, Checkov, Terrascan)
-- Analyzing **Pulumi** infrastructure code with **KICS (Checkmarx)**, an open-source scanner with first-class Pulumi YAML support
-- Scanning **Ansible** playbooks with **KICS** for security issues and misconfigurations
-- Comparing different IaC security scanners to evaluate their effectiveness
-- Analyzing critical security vulnerabilities and developing remediation strategies
-- Recommending tool selection strategies for real-world DevSecOps pipelines
-
-These skills are essential for shift-left security in infrastructure deployment and DevSecOps automation.
-
-> You will work with intentionally vulnerable Terraform, Pulumi, and Ansible code provided in the course repository to practice identifying and fixing security misconfigurations.
-
----
-
-## Tasks
-
-### Task 1 — Terraform & Pulumi Security Scanning (5 pts)
-
-**Objective:** Scan vulnerable Terraform and Pulumi configurations using multiple security tools to compare effectiveness and identify infrastructure security issues.
-
-#### 1.1: Setup Scanning Environment
-
-1. **Prepare Analysis Directory:**
-
- ```bash
- # Create analysis directory for all scan results
- mkdir -p labs/lab6/analysis
- ```
-
-
-**Vulnerable IaC Code Structure**
-
-**Location:** `labs/lab6/vulnerable-iac/`
-
-**Terraform code** (`labs/lab6/vulnerable-iac/terraform/`):
-- `main.tf` - AWS infrastructure with public S3 buckets, hardcoded credentials
-- `security_groups.tf` - Overly permissive security rules (0.0.0.0/0)
-- `database.tf` - Unencrypted RDS instances, public databases
-- `iam.tf` - Wildcard IAM permissions, privilege escalation
-- `variables.tf` - Insecure default values, hardcoded secrets
-
-**Pulumi code** (`labs/lab6/vulnerable-iac/pulumi/`):
-- `__main__.py` - Python-based infrastructure with 21 security issues
-- `Pulumi.yaml` - Configuration with default secret values
-- `Pulumi-vulnerable.yaml` - YAML-based Pulumi manifest (for KICS scanning)
-- Includes: public S3, open security groups, unencrypted databases
-
-**Ansible code** (`labs/lab6/vulnerable-iac/ansible/`):
-- `deploy.yml` - Hardcoded secrets, poor command execution
-- `configure.yml` - Weak SSH config, security misconfigurations
-- `inventory.ini` - Credentials in plaintext
-
-**Total: 80+ intentional security vulnerabilities across all frameworks**
-
-> **Note:** Pulumi code includes both Python and YAML formats. KICS is used for Pulumi scanning due to its first-class support for Pulumi YAML manifests and comprehensive query catalog.
-
-
-
-#### 1.2: Scan with tfsec
-
-1. **Run tfsec Security Scanner:**
-
- ```bash
- # Scan Terraform code with tfsec (JSON output)
- docker run --rm -v "$(pwd)/labs/lab6/vulnerable-iac/terraform":/src \
- aquasec/tfsec:latest /src \
- --format json > labs/lab6/analysis/tfsec-results.json
-
- # Generate readable report
- docker run --rm -v "$(pwd)/labs/lab6/vulnerable-iac/terraform":/src \
- aquasec/tfsec:latest /src > labs/lab6/analysis/tfsec-report.txt
- ```
-
-#### 1.3: Scan with Checkov
-
-1. **Run Checkov Security Scanner:**
-
- ```bash
- # Scan with Checkov (JSON output)
- docker run --rm -v "$(pwd)/labs/lab6/vulnerable-iac/terraform":/tf \
- bridgecrew/checkov:latest \
- -d /tf --framework terraform \
- -o json > labs/lab6/analysis/checkov-terraform-results.json
-
- # Generate readable report
- docker run --rm -v "$(pwd)/labs/lab6/vulnerable-iac/terraform":/tf \
- bridgecrew/checkov:latest \
- -d /tf --framework terraform \
- --compact > labs/lab6/analysis/checkov-terraform-report.txt
- ```
-
-#### 1.4: Scan with Terrascan
-
-1. **Run Terrascan Security Scanner:**
-
- ```bash
- # Scan with Terrascan (JSON output)
- docker run --rm -v "$(pwd)/labs/lab6/vulnerable-iac/terraform":/iac \
- tenable/terrascan:latest scan \
- -i terraform -d /iac \
- -o json > labs/lab6/analysis/terrascan-results.json
-
- # Generate readable report
- docker run --rm -v "$(pwd)/labs/lab6/vulnerable-iac/terraform":/iac \
- tenable/terrascan:latest scan \
- -i terraform -d /iac \
- -o human > labs/lab6/analysis/terrascan-report.txt
- ```
-
-#### 1.5: Terraform Scanning Analysis
-
-1. **Compare Tool Results:**
-
- ```bash
- echo "=== Terraform Security Analysis ===" > labs/lab6/analysis/terraform-comparison.txt
-
- # Count findings from each tool
- tfsec_count=$(jq '.results | length' labs/lab6/analysis/tfsec-results.json 2>/dev/null || echo "0")
- checkov_count=$(jq '.summary.failed' labs/lab6/analysis/checkov-terraform-results.json 2>/dev/null || echo "0")
- terrascan_count=$(jq '.results.scan_summary.violated_policies' labs/lab6/analysis/terrascan-results.json 2>/dev/null || echo "0")
-
- echo "tfsec findings: $tfsec_count" >> labs/lab6/analysis/terraform-comparison.txt
- echo "Checkov findings: $checkov_count" >> labs/lab6/analysis/terraform-comparison.txt
- echo "Terrascan findings: $terrascan_count" >> labs/lab6/analysis/terraform-comparison.txt
- ```
-
-#### 1.6: Scan Pulumi Code with KICS
-
-1. **Scan Pulumi with KICS (Checkmarx):**
-
- KICS provides first-class Pulumi support and scans Pulumi YAML manifests directly. It includes a comprehensive catalog of Pulumi-specific security queries for AWS, Azure, GCP, and Kubernetes resources.
-
- ```bash
- # Scan Pulumi with KICS (JSON and HTML reports)
- docker run -t --rm -v "$(pwd)/labs/lab6/vulnerable-iac/pulumi":/src \
- checkmarx/kics:latest \
- scan -p /src -o /src/kics-report --report-formats json,html
-
- # Move reports to analysis directory
- sudo mv labs/lab6/vulnerable-iac/pulumi/kics-report/results.json labs/lab6/analysis/kics-pulumi-results.json
- sudo mv labs/lab6/vulnerable-iac/pulumi/kics-report/results.html labs/lab6/analysis/kics-pulumi-report.html
-
- # Generate readable summary (console output)
- docker run -t --rm -v "$(pwd)/labs/lab6/vulnerable-iac/pulumi":/src \
- checkmarx/kics:latest \
- scan -p /src --minimal-ui > labs/lab6/analysis/kics-pulumi-report.txt 2>&1 || true
- ```
-
-2. **Analyze Pulumi Results:**
-
- ```bash
- echo "=== Pulumi Security Analysis (KICS) ===" > labs/lab6/analysis/pulumi-analysis.txt
-
- # KICS JSON structure: queries_total, queries_failed (vulnerabilities found)
- high_severity=$(jq '.severity_counters.HIGH // 0' labs/lab6/analysis/kics-pulumi-results.json 2>/dev/null || echo "0")
- medium_severity=$(jq '.severity_counters.MEDIUM // 0' labs/lab6/analysis/kics-pulumi-results.json 2>/dev/null || echo "0")
- low_severity=$(jq '.severity_counters.LOW // 0' labs/lab6/analysis/kics-pulumi-results.json 2>/dev/null || echo "0")
- total_findings=$(jq '.total_counter // 0' labs/lab6/analysis/kics-pulumi-results.json 2>/dev/null || echo "0")
-
- echo "KICS Pulumi findings: $total_findings" >> labs/lab6/analysis/pulumi-analysis.txt
- echo " HIGH severity: $high_severity" >> labs/lab6/analysis/pulumi-analysis.txt
- echo " MEDIUM severity: $medium_severity" >> labs/lab6/analysis/pulumi-analysis.txt
- echo " LOW severity: $low_severity" >> labs/lab6/analysis/pulumi-analysis.txt
- ```
-
-In `labs/submission6.md`, document:
-- **Terraform Tool Comparison** - Effectiveness of tfsec vs. Checkov vs. Terrascan
-- **Pulumi Security Analysis** - Findings from KICS on Pulumi code
-- **Terraform vs. Pulumi** - Compare security issues between Terraform's declarative HCL and Pulumi's programmatic (Python) and YAML approaches
-- **KICS Pulumi Support** - Evaluate KICS's Pulumi-specific query catalog
-- **Critical Findings** - At least 5 significant security issues
-- **Tool Strengths** - What each tool excels at detecting
-
----
-
-### Task 2 — Ansible Security Scanning with KICS (2 pts)
-
-**Objective:** Scan vulnerable Ansible playbooks using KICS to identify security issues, misconfigurations, and best practice violations.
-
-#### 2.1: Scan Ansible Playbooks with KICS
-
-
-**Vulnerable Ansible Code Structure**
-
-The provided Ansible code includes common security issues:
-- `deploy.yml` - Playbook with hardcoded secrets
-- `configure.yml` - Tasks without `no_log` for sensitive operations
-- `inventory.ini` - Insecure inventory configuration
-
-KICS provides comprehensive Ansible security queries for:
-- Secrets management issues
-- Command execution vulnerabilities
-- File permissions and access control
-- Authentication and access issues
-- Insecure configurations
-
-
-
-1. **Run KICS Security Scanner for Ansible:**
-
- KICS auto-detects Ansible playbooks and applies Ansible-specific security queries:
-
- ```bash
- # Scan Ansible playbooks with KICS (JSON and HTML reports)
- docker run -t --rm -v "$(pwd)/labs/lab6/vulnerable-iac/ansible":/src \
- checkmarx/kics:latest \
- scan -p /src -o /src/kics-report --report-formats json,html
-
- # Move reports to analysis directory
- sudo mv labs/lab6/vulnerable-iac/ansible/kics-report/results.json labs/lab6/analysis/kics-ansible-results.json
- sudo mv labs/lab6/vulnerable-iac/ansible/kics-report/results.html labs/lab6/analysis/kics-ansible-report.html
-
- # Generate readable summary
- docker run -t --rm -v "$(pwd)/labs/lab6/vulnerable-iac/ansible":/src \
- checkmarx/kics:latest \
- scan -p /src --minimal-ui > labs/lab6/analysis/kics-ansible-report.txt 2>&1 || true
- ```
-
-#### 2.2: Ansible Security Analysis
-
-1. **Analyze KICS Ansible Results:**
-
- ```bash
- echo "=== Ansible Security Analysis (KICS) ===" > labs/lab6/analysis/ansible-analysis.txt
-
- # Count findings by severity
- high_severity=$(jq '.severity_counters.HIGH // 0' labs/lab6/analysis/kics-ansible-results.json 2>/dev/null || echo "0")
- medium_severity=$(jq '.severity_counters.MEDIUM // 0' labs/lab6/analysis/kics-ansible-results.json 2>/dev/null || echo "0")
- low_severity=$(jq '.severity_counters.LOW // 0' labs/lab6/analysis/kics-ansible-results.json 2>/dev/null || echo "0")
- total_findings=$(jq '.total_counter // 0' labs/lab6/analysis/kics-ansible-results.json 2>/dev/null || echo "0")
-
- echo "KICS Ansible findings: $total_findings" >> labs/lab6/analysis/ansible-analysis.txt
- echo " HIGH severity: $high_severity" >> labs/lab6/analysis/ansible-analysis.txt
- echo " MEDIUM severity: $medium_severity" >> labs/lab6/analysis/ansible-analysis.txt
- echo " LOW severity: $low_severity" >> labs/lab6/analysis/ansible-analysis.txt
- ```
-
-In `labs/submission6.md`, document:
-- **Ansible Security Issues** - Key security problems identified by KICS
-- **Best Practice Violations** - Explain at least 3 violations and their security impact
-- **KICS Ansible Queries** - Evaluate the types of security checks KICS performs
-- **Remediation Steps** - How to fix the identified issues
-
----
-
-### Task 3 — Comparative Tool Analysis & Security Insights (3 pts)
-
-**Objective:** Analyze and compare the effectiveness of different IaC security scanning tools to understand their strengths and weaknesses, and develop insights for tool selection in real-world scenarios.
-
-#### 3.1: Create Comprehensive Tool Comparison
-
-1. **Generate Summary Statistics:**
-
- ```bash
- # Generate comprehensive comparison statistics
- echo "=== Comprehensive Tool Comparison ===" > labs/lab6/analysis/tool-comparison.txt
-
- # Terraform tools
- tfsec_count=$(jq '.results | length' labs/lab6/analysis/tfsec-results.json 2>/dev/null || echo "0")
- checkov_tf_count=$(jq '.summary.failed' labs/lab6/analysis/checkov-terraform-results.json 2>/dev/null || echo "0")
- terrascan_count=$(jq '.results.scan_summary.violated_policies' labs/lab6/analysis/terrascan-results.json 2>/dev/null || echo "0")
-
- # Pulumi tool
- kics_pulumi_count=$(jq '.total_counter // 0' labs/lab6/analysis/kics-pulumi-results.json 2>/dev/null || echo "0")
-
- # Ansible tool
- kics_ansible_count=$(jq '.total_counter // 0' labs/lab6/analysis/kics-ansible-results.json 2>/dev/null || echo "0")
-
- echo "Terraform Scanning Results:" >> labs/lab6/analysis/tool-comparison.txt
- echo " - tfsec: $tfsec_count findings" >> labs/lab6/analysis/tool-comparison.txt
- echo " - Checkov: $checkov_tf_count findings" >> labs/lab6/analysis/tool-comparison.txt
- echo " - Terrascan: $terrascan_count findings" >> labs/lab6/analysis/tool-comparison.txt
- echo "" >> labs/lab6/analysis/tool-comparison.txt
- echo "Pulumi Scanning Results (KICS): $kics_pulumi_count findings" >> labs/lab6/analysis/tool-comparison.txt
- echo "Ansible Scanning Results (KICS): $kics_ansible_count findings" >> labs/lab6/analysis/tool-comparison.txt
- ```
-
-2. **Create Tool Effectiveness Matrix:**
-
- In `labs/submission6.md`, create a comprehensive comparison table:
-
- | Criterion | tfsec | Checkov | Terrascan | KICS |
- |-----------|-------|---------|-----------|------|
- | **Total Findings** | # | # | # | # (Pulumi + Ansible) |
- | **Scan Speed** | Fast/Medium/Slow | | | |
- | **False Positives** | Low/Med/High | | | |
- | **Report Quality** | ⭐-⭐⭐⭐⭐ | | | |
- | **Ease of Use** | ⭐-⭐⭐⭐⭐ | | | |
- | **Documentation** | ⭐-⭐⭐⭐⭐ | | | |
- | **Platform Support** | Terraform only | Multiple | Multiple | Multiple |
- | **Output Formats** | JSON, text, SARIF, etc | | | |
- | **CI/CD Integration** | Easy/Medium/Hard | | | |
- | **Unique Strengths** | | | | |
-
-#### 3.2: Vulnerability Category Analysis
-
-1. **Categorize Findings by Security Domain:**
-
- In `labs/submission6.md`, analyze tool performance across security categories:
-
- | Security Category | tfsec | Checkov | Terrascan | KICS (Pulumi) | KICS (Ansible) | Best Tool |
- |------------------|-------|---------|-----------|---------------|----------------|----------|
- | **Encryption Issues** | ? | ? | ? | ? | N/A | ? |
- | **Network Security** | ? | ? | ? | ? | ? | ? |
- | **Secrets Management** | ? | ? | ? | ? | ? | ? |
- | **IAM/Permissions** | ? | ? | ? | ? | ? | ? |
- | **Access Control** | ? | ? | ? | ? | ? | ? |
- | **Compliance/Best Practices** | ? | ? | ? | ? | ? | ? |
-
- **Instructions:**
- - Review the JSON/HTML reports from each tool
- - Count findings in each security category
- - Identify which tools excel at detecting specific issue types
- - Note unique findings detected by only one tool
-
-In `labs/submission6.md`, document:
-- **Tool Comparison Matrix** - Comprehensive evaluation with all metrics
-- **Category Analysis** - Tool performance across security domains
-- **Top 5 Critical Findings** - Detailed analysis with remediation code examples
-- **Tool Selection Guide** - Recommendations for different use cases
-- **Lessons Learned** - Insights about tool effectiveness, false positives, and limitations
-- **CI/CD Integration Strategy** - Practical multi-stage pipeline recommendations
-- **Justification** - Explain your reasoning for tool choices and strategy
-
----
-
-## Acceptance Criteria
-
-- ✅ Branch `feature/lab6` exists with commits for each task.
-- ✅ File `labs/submission6.md` contains required analysis for Tasks 1-3.
-- ✅ Terraform scanned with tfsec, Checkov, and Terrascan.
-- ✅ Pulumi scanned with KICS (Checkmarx).
-- ✅ Ansible playbooks scanned with KICS (Checkmarx).
-- ✅ Comparative analysis completed with tool evaluation matrices.
-- ✅ All scan results and analysis outputs committed.
-- ✅ PR from `feature/lab6` → **course repo main branch** is open.
-- ✅ PR link submitted via Moodle before the deadline.
-
----
-
-## How to Submit
-
-1. Create a branch for this lab and push it to your fork:
-
- ```bash
- git switch -c feature/lab6
- # create labs/submission6.md with your findings
- git add labs/submission6.md labs/lab6/analysis/
- git commit -m "docs: add lab6 submission - IaC security scanning and comparative analysis"
- git push -u origin feature/lab6
- ```
-
-2. Open a PR from your fork's `feature/lab6` branch → **course repository's main branch**.
-
-3. In the PR description, include:
-
- ```text
- - [x] Task 1 done — Terraform & Pulumi scanning with multiple tools
- - [x] Task 2 done — Ansible security analysis
- - [x] Task 3 done — Comparative tool analysis and security insights
- ```
-
-4. **Copy the PR URL** and submit it via **Moodle before the deadline**.
-
----
-
-## Rubric (10 pts)
-
-| Criterion | Points |
-| ---------------------------------------------------------------- | -----: |
-| Task 1 — Terraform & Pulumi scanning (multiple tools) + analysis | **5** |
-| Task 2 — Ansible scanning (KICS) + remediation | **2** |
-| Task 3 — Comparative analysis + security insights | **3** |
-| **Total** | **10** |
-
----
-
-## Guidelines
-
-- Use clear Markdown headers to organize sections in `submission6.md`.
-- Include evidence from tool outputs (JSON excerpts, command outputs) to support your analysis.
-- Focus on practical insights about tool selection for IaC security.
-- Provide actionable remediation steps for identified issues.
-- Document any challenges encountered with different tools.
-
-
-**Directory Structure After Lab**
-
-```
-labs/lab6/
-├── vulnerable-iac/ # Vulnerable code to scan (DO NOT MODIFY)
-│ ├── terraform/ # 30 Terraform vulnerabilities
-│ ├── pulumi/ # 21 Pulumi vulnerabilities
-│ ├── ansible/ # 26 Ansible vulnerabilities
-│ └── README.md # Vulnerability catalog
-├── analysis/ # All scan results and analysis
-│ ├── tfsec-results.json
-│ ├── tfsec-report.txt
-│ ├── checkov-terraform-results.json
-│ ├── checkov-terraform-report.txt
-│ ├── terrascan-results.json
-│ ├── terrascan-report.txt
-│ ├── kics-pulumi-results.json
-│ ├── kics-pulumi-report.html
-│ ├── kics-pulumi-report.txt
-│ ├── kics-ansible-results.json
-│ ├── kics-ansible-report.html
-│ ├── kics-ansible-report.txt
-│ ├── tool-comparison.txt
-│ ├── terraform-comparison.txt
-│ ├── pulumi-analysis.txt
-│ └── ansible-analysis.txt
-└── submission6.md # Your submission document
-```
-
-
-
-
-**Common Issues & Troubleshooting**
-
-**Issue: Docker volume mount permission errors**
-```bash
-# Solution: run commands from the project root so the $(pwd) volume mount resolves correctly
-pwd # Should show path ending in the course repo name
-```
-
-**Issue: jq command not found**
-```bash
-# Install jq for JSON parsing
-# macOS: brew install jq
-# Ubuntu: sudo apt-get install jq
-# Windows WSL: sudo apt-get install jq
-```
-
-**Issue: Checkov output format**
-```bash
-# Checkov uses -o flag for output format (json, cli, sarif, etc.)
-# Use shell redirection (>) to save output to a specific file
-# Example: -o json > output.json
-```
-
-**Issue: KICS exits with non-zero exit code**
-```bash
-# This is expected when vulnerabilities are found
-# We use "|| true" to continue execution and capture the output
-# Alternative: Use --ignore-on-exit results to suppress non-zero exit codes
-```
-
-**Issue: KICS not detecting Pulumi files**
-```bash
-# Ensure Pulumi-vulnerable.yaml is in the scan directory
-# KICS auto-detects Pulumi YAML files by extension and content
-# Verify KICS version supports Pulumi (v1.6.x+)
-# Use: docker run --rm checkmarx/kics:latest version
-```
-
-**Issue: KICS report directory not found**
-```bash
-# KICS creates the report directory automatically
-# Ensure you're using -o flag with proper path: -o /src/kics-report
-# The container path must match the mounted volume
-```
-
-
-
-
-**Tool Comparison Reference**
-
-**Terraform Scanning Tools:**
-- **tfsec**: Fast, Terraform-specific scanner with low false positives
-- **Checkov**: Policy-as-code approach with 1000+ built-in policies (supports Terraform, CloudFormation, K8s, Docker)
-- **Terrascan**: OPA-based scanner with compliance framework mapping
-
-**Pulumi Scanning:**
-- **KICS (Checkmarx)**: Open-source scanner with first-class Pulumi YAML support
- - Dedicated Pulumi queries catalog for AWS, Azure, GCP, and Kubernetes
- - Auto-detects Pulumi platform
- - Announced Pulumi support in v1.6.x with continued expansion
- - Provides JSON, HTML, SARIF, and console output formats
-
-**Ansible Scanning:**
-- **KICS (Checkmarx)**: Open-source scanner with comprehensive Ansible security queries
- - Dedicated Ansible queries catalog
- - Detects secrets management issues, command injection, insecure configurations
- - Same tool across Terraform, Pulumi, and Ansible for consistency
-
-**Tool Selection Guidelines:**
-- **tfsec**: Use for fast CI/CD scans, Terraform-specific checks
-- **Checkov**: Use for comprehensive Terraform and multi-framework coverage (CloudFormation, K8s, Docker)
-- **KICS**: Use for Pulumi and Ansible scanning with first-class support and extensive query catalog
- - Provides unified scanning across Pulumi and Ansible
- - Comprehensive security queries for AWS, Azure, GCP, Kubernetes resources
- - Single tool for consistency across multiple IaC frameworks
-- **Terrascan**: Use for compliance-focused scanning (PCI-DSS, HIPAA, etc.)
-- **Conftest**: Use for custom organizational policy enforcement across all IaC types
-
-
-
-
-**Common IaC Security Issues**
-
-**Common Terraform & Pulumi Issues:**
-- Unencrypted S3 buckets and RDS instances
-- Security groups allowing 0.0.0.0/0 access
-- Hardcoded credentials and secrets
-- Missing resource tags for governance
-- Overly permissive IAM policies
-- Publicly accessible databases
-
-**Common Ansible Issues:**
-- Hardcoded passwords in playbooks
-- Missing `no_log` on sensitive tasks
-- Overly permissive file permissions (0777)
-- Using `shell` instead of proper modules
-- Missing `become` privilege escalation controls
-- Unencrypted Ansible Vault or missing vault usage
-
-**Security Requirements to Enforce:**
-- Encryption requirements (at-rest and in-transit)
-- Network segmentation and access controls
-- Tagging standards for governance and cost allocation
-- Region restrictions and compliance requirements
-- IAM least-privilege principles
-- Regular security assessments and audits
-
-
-
-
-**Remediation Best Practices**
-
-**Terraform & Pulumi Remediation:**
-- Enable S3 bucket encryption with `server_side_encryption_configuration`
-- Restrict security group ingress to specific CIDR blocks
-- Use AWS Secrets Manager or Parameter Store for credentials
-- Add required tags to all resources
-- Implement least-privilege IAM policies
-- Set RDS instances to `storage_encrypted = true`
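-
-A minimal HCL sketch of the S3 encryption and RDS points above (resource names are illustrative, not the exact lab code):
-
-```hcl
-# Encrypt the S3 bucket at rest (AES-256 example)
-resource "aws_s3_bucket_server_side_encryption_configuration" "example" {
-  bucket = aws_s3_bucket.example.id
-
-  rule {
-    apply_server_side_encryption_by_default {
-      sse_algorithm = "AES256"
-    }
-  }
-}
-
-# Enable storage encryption and disable public access on the RDS instance
-resource "aws_db_instance" "example" {
-  # ... engine, instance_class, etc.
-  storage_encrypted   = true
-  publicly_accessible = false
-}
-```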
-
-**Ansible Remediation:**
-- Use Ansible Vault for all secrets: `ansible-vault encrypt vars.yml`
-- Add `no_log: true` to tasks handling sensitive data
-- Set proper file permissions (0644 for configs, 0600 for keys)
-- Use Ansible modules instead of `shell`/`command` where possible
-- Implement proper `become` with specific users
-- Regular security updates in playbooks
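-
-A short playbook fragment illustrating several of these practices (task content is illustrative):
-
-```yaml
-# illustrative remediation patterns
-- name: Create application user
-  ansible.builtin.user:
-    name: appuser
-    password: "{{ vaulted_password }}"  # value stored via ansible-vault
-  no_log: true                          # keep secrets out of task logs
-
-- name: Deploy config with safe permissions
-  ansible.builtin.copy:
-    src: app.conf
-    dest: /etc/app/app.conf
-    mode: "0644"                        # not 0777
-```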
-
-**Security Scanning Best Practices:**
-- Integrate security scanning into CI/CD pipelines
-- Use pre-commit hooks for early detection
-- Run multiple tools for comprehensive coverage
-- Regularly update scanning tools and their rule sets
-- Document and track remediation progress
-- Establish SLAs for fixing critical/high severity issues
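-
-For the pre-commit hook point above, a minimal `.pre-commit-config.yaml` could wire Checkov into every commit (the `rev` value is a placeholder; pin a current release tag):
-
-```yaml
-repos:
-  - repo: https://github.com/bridgecrewio/checkov
-    rev: 3.2.0   # placeholder; pin to a real release
-    hooks:
-      - id: checkov
-```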
-
-
diff --git a/labs/lab6/vulnerable-iac/README.md b/labs/lab6/vulnerable-iac/README.md
deleted file mode 100644
index 65cc1e17..00000000
--- a/labs/lab6/vulnerable-iac/README.md
+++ /dev/null
@@ -1,284 +0,0 @@
-# Vulnerable Infrastructure-as-Code for Lab 6
-
-⚠️ **WARNING: This directory contains intentionally vulnerable code for educational purposes only!**
-
-## Overview
-
-This directory contains deliberately insecure Terraform, Pulumi, and Ansible code designed for Lab 6 - Infrastructure-as-Code Security. Students will use security scanning tools to identify and understand these vulnerabilities.
-
-## ⚠️ DO NOT USE IN PRODUCTION!
-
-**These files contain serious security vulnerabilities and should NEVER be used in real environments.**
-
----
-
-## 📂 Directory Structure
-
-```
-vulnerable-iac/
-├── terraform/
-│ ├── main.tf # Public S3 buckets, hardcoded credentials
-│ ├── security_groups.tf # Overly permissive firewall rules
-│ ├── database.tf # Unencrypted databases, weak configurations
-│ ├── iam.tf # Wildcard IAM permissions
-│ └── variables.tf # Insecure default values
-├── pulumi/
-│ ├── __main__.py # Python-based infrastructure with 21 security issues
-│ ├── Pulumi.yaml # Config with default secret values
-│ ├── Pulumi-vulnerable.yaml # YAML-based Pulumi manifest (for KICS scanning)
-│ └── requirements.txt # Python dependencies
-└── ansible/
- ├── deploy.yml # Hardcoded secrets, poor practices
- ├── configure.yml # Weak SSH config, security misconfigurations
- └── inventory.ini # Credentials in plaintext
-```
-
----
-
-## 🔴 Terraform Vulnerabilities (30 issues)
-
-### Authentication & Credentials
-1. Hardcoded AWS access key in provider configuration
-2. Hardcoded AWS secret key in provider configuration
-9. Hardcoded database password in plain text
-30. Hardcoded API key in variables with default value
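-
-The hardcoded-credential pattern these entries refer to looks roughly like this (illustrative, not the exact lab code; the values are AWS's documented fake examples):
-
-```hcl
-provider "aws" {
-  region     = "us-east-1"
-  access_key = "AKIAIOSFODNN7EXAMPLE"          # vulnerability: hardcoded access key
-  secret_key = "wJalrXUtnFEMI/K7MDENG/EXAMPLE" # vulnerability: hardcoded secret key
-}
-```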
-
-### Storage Security
-2. S3 bucket with public-read ACL
-3. S3 bucket without encryption configuration
-4. S3 bucket public access block disabled
-16. DynamoDB table without encryption
-
-### Network Security
-5. Security group allowing all traffic from 0.0.0.0/0
-6. SSH (port 22) accessible from anywhere
-6. RDP (port 3389) accessible from anywhere
-7. MySQL (port 3306) exposed to internet
-7. PostgreSQL (port 5432) exposed to internet
-
-### Database Security
-8. RDS instance without storage encryption
-10. RDS instance publicly accessible
-11. RDS backup retention set to 0 (no backups)
-12. RDS deletion protection disabled
-14. RDS multi-AZ disabled (no high availability)
-15. RDS auto minor version upgrade disabled
-
-### IAM & Permissions
-18. IAM policy with wildcard (*) actions and resources
-19. IAM role with full S3 access on all resources
-20. IAM user with inline policy granting excessive permissions
-21. IAM access keys created for service account
-22. IAM credentials exposed in outputs without sensitive flag
-23. IAM policy allowing privilege escalation paths
-
-### Configuration Management
-24. No region validation for resource deployment
-25. Weak default password in variables
-26. Public access enabled by default
-27. Encryption disabled by default
-28. SSH allowed from anywhere by default
-29. Backup retention days set to 0 by default
-
----
-
-## 🔴 Pulumi Vulnerabilities (21+ issues)
-
-> **Note:** Pulumi code is provided in both Python (`__main__.py`) and YAML (`Pulumi-vulnerable.yaml`) formats. The YAML format is used for KICS scanning, which has first-class Pulumi YAML support.
-
-### Authentication & Credentials
-1. Hardcoded AWS access key in provider
-2. Hardcoded AWS secret key in provider
-3. Hardcoded database password in code
-4. Hardcoded API key in code
-21. Default config values with secrets in Pulumi.yaml
-
-### Storage Security
-3. S3 bucket with public-read ACL
-4. S3 bucket without encryption configuration
-17. DynamoDB table without server-side encryption
-18. DynamoDB table without point-in-time recovery
-19. EBS volume without encryption
-
-### Network Security
-5. Security group allowing all traffic from 0.0.0.0/0
-6. SSH and RDP accessible from anywhere
-
-### Database Security
-7. RDS instance without storage encryption
-8. RDS instance publicly accessible
-9. RDS backup retention set to 0 (no backups)
-10. RDS deletion protection disabled
-
-### IAM & Permissions
-11. IAM policy with wildcard (*) actions and resources
-12. IAM role with full S3 access on all resources
-16. Lambda function with overly permissive IAM role
-
-### Compute Security
-13. EC2 instance without root volume encryption
-14. Secrets exposed in EC2 user data
-
-### Secrets Management
-15. Secrets exposed in Pulumi outputs (not marked as secret)
-
-### Logging & Monitoring
-20. CloudWatch log group without retention policy
-20. CloudWatch log group without KMS encryption
-
----
-
-## 🔴 Ansible Vulnerabilities (26 issues)
-
-### Secrets Management
-1. Hardcoded database password in playbook vars
-2. Hardcoded API key in playbook vars
-3. Database connection string with credentials
-20. SSL private key in plaintext
-38. Global variables with secrets in inventory
-41. Production using same credentials as development
-
-### Command Execution
-4. Using shell module instead of proper apt module
-5. MySQL command with password visible in logs
-10. Downloading and executing script without verification
-17. Shell command with potential injection vulnerability
-32. Using raw module to flush firewall rules
-
-### File Permissions & Access
-6. Configuration file with 0777 permissions (world-writable)
-7. SSH private key with 0644 permissions (should be 0600)
-16. Downloaded file with 0777 permissions
-
-### Authentication & Access Control
-21. SELinux disabled
-22. Passwordless sudo for all commands
-23. SSH PermitRootLogin enabled
-23. SSH PasswordAuthentication enabled
-23. SSH PermitEmptyPasswords enabled
-34. Authorized key added for root user
-
-### Logging & Monitoring
-5. Sensitive command without no_log flag
-13. Password hashing without no_log
-14. Debug output exposing secrets
-18. Password visible in task name
-26. Passwords logged in plaintext files
-
-### Network Security
-9. Firewall (ufw) disabled
-25. Application listening on 0.0.0.0 (all interfaces)
-
-### Credential Management
-11. Git credentials hardcoded in repository URL
-35. Credentials in inventory file
-36. Using root user with password authentication
-37. SSH private key path in plaintext inventory
-
-### Configuration Security
-15. Using 'latest' instead of pinned versions
-24. Installing unnecessary development tools on production
-28. Insecure temp file handling with predictable names
-29. No timeout for long-running tasks
-31. Fetching sensitive files without encryption
-33. No checksum validation for templates
-39. Insecure SSH connection settings (StrictHostKeyChecking=no)
-40. No connection timeout configured
-
-### Error Handling
-12. Ignoring errors for critical database migrations
-30. No proper error handling in assertions
-
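-Most of the hardcoded-credential findings above are caught by pattern matching rather than semantic analysis. A rough sketch (the patterns are illustrative and far cruder than what real scanners ship):
-
-```python
-import re
-
-# Toy secret detector for inventory/playbook text (illustrative patterns only)
-SECRET_PATTERNS = [
-    r"ansible_(ssh_)?pass(word)?=\S+",  # inventory-style inline passwords
-    r"sk_live_\w+",                     # live API key prefix
-]
-
-line = "web1.example.com ansible_user=root ansible_password=RootPass123!"
-print([p for p in SECRET_PATTERNS if re.search(p, line)])
-```
-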
----
-
-## 🛠️ Tools to Use
-
-Students should scan this code with:
-
-### Terraform
-- **tfsec**: Fast Terraform security scanner
-- **Checkov**: Policy-as-code security scanner
-- **Terrascan**: OPA-based compliance scanner
-
-### Pulumi
-- **KICS (Checkmarx)**: Open-source scanner with first-class Pulumi YAML support
- - Dedicated Pulumi queries catalog (AWS/Azure/GCP/Kubernetes)
- - Auto-detects Pulumi platform
- - Provides comprehensive security analysis
-
-### Ansible
-- **KICS (Checkmarx)**: Open-source scanner with comprehensive Ansible security queries
- - Dedicated Ansible queries catalog
- - Auto-detects Ansible playbooks
- - Provides comprehensive security analysis
-
-### Policy-as-Code
-- **Conftest/OPA**: Custom policy enforcement for all IaC types
-
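-A Conftest run can be sketched as follows (the Rego rule, file layout, and parsed-input shape are assumptions — adapt them to your repo):
-
-```bash
-# Minimal Rego policy denying wildcard IAM actions (illustrative)
-mkdir -p policy
-cat > policy/iam.rego <<'EOF'
-package main
-
-deny[msg] {
-  some name
-  policy := input.resource.aws_iam_policy[name].policy
-  contains(policy, `"Action": "*"`)
-  msg := sprintf("IAM policy %q allows all actions", [name])
-}
-EOF
-
-# Evaluate the policy against the Terraform directory
-docker run --rm -v "$(pwd)":/project openpolicyagent/conftest:latest \
-  test --policy /project/policy /project/labs/lab6/terraform
-```
-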
----
-
-## 📋 Expected Student Outcomes
-
-Students should:
-1. Identify all 80+ security vulnerabilities across Terraform, Pulumi, and Ansible code
- - Note: Pulumi code includes both Python and YAML formats for comprehensive analysis
-2. Compare detection capabilities of different tools
-3. Compare security issues between declarative (Terraform HCL) and programmatic (Pulumi Python/YAML) IaC
-4. Evaluate KICS's first-class Pulumi support and query catalog
-5. Understand false positives vs true positives
-6. Write custom policies to catch organizational-specific issues
-7. Provide remediation steps for each vulnerability class
-8. Recommend tool selection strategies for CI/CD pipelines
-
----
-
-## 🔧 How to Use (Students)
-
-```bash
-# Copy vulnerable code to your lab directory
-cp -r vulnerable-iac/terraform/* labs/lab6/terraform/
-cp -r vulnerable-iac/pulumi/* labs/lab6/pulumi/
-cp -r vulnerable-iac/ansible/* labs/lab6/ansible/
-
-# Scan with multiple tools (see lab6.md for commands)
-docker run --rm -v "$(pwd)/labs/lab6/terraform":/src aquasec/tfsec:latest /src
-docker run --rm -v "$(pwd)/labs/lab6/terraform":/tf bridgecrew/checkov:latest -d /tf
-docker run -t --rm -v "$(pwd)/labs/lab6/pulumi":/src checkmarx/kics:latest scan -p /src -o /src/kics-report --report-formats json,html
-# ... and more
-```
-
----
-
-## 📚 Learning Resources
-
-- [OWASP Infrastructure as Code Security](https://owasp.org/www-project-devsecops/)
-- [Terraform Security Best Practices](https://www.terraform.io/docs/cloud/guides/recommended-practices/index.html)
-- [Pulumi Security Best Practices](https://www.pulumi.com/docs/guides/crossguard/)
-- [Ansible Security Best Practices](https://docs.ansible.com/ansible/latest/user_guide/playbooks_best_practices.html)
-- [CIS AWS Foundations Benchmark](https://www.cisecurity.org/benchmark/amazon_web_services)
-- [CIS Distribution Independent Linux Benchmark](https://www.cisecurity.org/benchmark/distribution_independent_linux)
-
----
-
-## 🔒 Security Notice
-
-**These files are for educational purposes only. They contain intentional security vulnerabilities that would compromise real systems. Never deploy this code to any environment connected to the internet or containing real data.**
-
----
-
-## ✅ Validation
-
-To verify students have completed the lab successfully, check that they:
-- [ ] Identified at least 20 Terraform vulnerabilities
-- [ ] Identified at least 15 Pulumi vulnerabilities
-- [ ] Identified at least 15 Ansible vulnerabilities
-- [ ] Compared at least 4 scanning tools (e.g., tfsec, Checkov, and Terrascan for Terraform; KICS for Pulumi and Ansible)
-- [ ] Analyzed differences between Terraform (HCL) and Pulumi (Python/YAML) security issues
-- [ ] Evaluated KICS's Pulumi-specific query catalog and platform support
-- [ ] Created at least 3 custom OPA policies
-- [ ] Provided remediation guidance
-- [ ] Explained tool selection rationale
-
----
-
-*Lab created for F25-DevSecOps-Intro course*
diff --git a/labs/lab6/vulnerable-iac/ansible/configure.yml b/labs/lab6/vulnerable-iac/ansible/configure.yml
deleted file mode 100644
index e87a4c4a..00000000
--- a/labs/lab6/vulnerable-iac/ansible/configure.yml
+++ /dev/null
@@ -1,140 +0,0 @@
----
-# Vulnerable Configuration Playbook for Lab 6
-
-- name: Configure web servers (VULNERABLE)
- hosts: all
- become: true
- gather_facts: yes
-
- vars:
- # SECURITY ISSUE #20 - Plaintext secrets
- ssl_private_key: |
- -----BEGIN PRIVATE KEY-----
- MIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQC7VJTUt...
- -----END PRIVATE KEY-----
-
- admin_password: "Admin123!"
-
- tasks:
- # SECURITY ISSUE #21 - No SELinux or AppArmor
- - name: Disable SELinux
- selinux:
- state: disabled
- when: ansible_os_family == "RedHat"
-
- # SECURITY ISSUE #22 - Permissive sudo configuration
- - name: Configure sudo for app user
- lineinfile:
- path: /etc/sudoers
- line: 'appuser ALL=(ALL) NOPASSWD: ALL'
- validate: '/usr/sbin/visudo -cf %s'
- # Allowing passwordless sudo for all commands!
-
- # SECURITY ISSUE #23 - Weak SSH configuration
- - name: Configure SSH
- lineinfile:
- path: /etc/ssh/sshd_config
- regexp: "{{ item.regexp }}"
- line: "{{ item.line }}"
- loop:
- - { regexp: '^PermitRootLogin', line: 'PermitRootLogin yes' } # Should be 'no'!
- - { regexp: '^PasswordAuthentication', line: 'PasswordAuthentication yes' } # Should be 'no'!
- - { regexp: '^PermitEmptyPasswords', line: 'PermitEmptyPasswords yes' } # Should be 'no'!
- notify: restart sshd
-
- # SECURITY ISSUE #24 - Installing unnecessary packages
- - name: Install all development tools
- apt:
- name:
- - build-essential
- - gcc
- - g++
- - gdb
- - strace
- - tcpdump
- state: present
- # Development tools on production server!
-
- # SECURITY ISSUE #25 - Exposing application on all interfaces
- - name: Configure application to listen on all interfaces
- lineinfile:
- path: /etc/myapp/config.yml
- regexp: '^listen:'
- line: 'listen: 0.0.0.0:8080'
- # Should bind to specific interface or localhost
-
- # SECURITY ISSUE #26 - Logging sensitive information
- - name: Log database connection
- lineinfile:
- path: /var/log/myapp/app.log
- line: "Database connection: postgresql://admin:{{ admin_password }}@localhost/myapp"
- create: yes
- # Logging password in plaintext!
-
-    # SECURITY ISSUE #27 - Writing credentials to a world-readable file without no_log
-    - name: Set API credentials
-      shell: echo "API_KEY={{ api_key }}" >> /etc/environment
- # Credentials in command output
-
- # SECURITY ISSUE #28 - Insecure temp file handling
- - name: Create temporary file
- shell: echo "{{ admin_password }}" > /tmp/password.txt
- # Password in temp file with predictable name!
-
- # SECURITY ISSUE #29 - No timeout for long-running tasks
- - name: Wait for service
- wait_for:
- port: 8080
- delay: 10
- timeout: 0 # Wait forever!
-
- # SECURITY ISSUE #30 - Using assert without proper error handling
- - name: Check configuration
- assert:
- that:
- - ansible_os_family == "Debian"
- fail_msg: "Unsupported OS"
- # Exposing system information in error
-
- # SECURITY ISSUE #31 - Fetching files without encryption
- - name: Backup configuration
- fetch:
- src: /etc/myapp/config.env
- dest: /backups/
- flat: yes
- # Transferring sensitive config in plaintext!
-
- # SECURITY ISSUE #32 - Using raw module
- - name: Execute raw command
- raw: iptables -F # Flush all firewall rules!
- # Should use proper firewall modules
-
- # SECURITY ISSUE #33 - No checksum validation for templates
- - name: Deploy configuration template
- template:
- src: app.conf.j2
- dest: /etc/nginx/sites-available/app.conf
- mode: '0644'
- # No backup, no validation before deployment
- notify: reload nginx
-
- # SECURITY ISSUE #34 - Authorized_key with wrong permissions
- - name: Add authorized key
- authorized_key:
- user: root
- key: "{{ lookup('file', '/tmp/id_rsa.pub') }}"
- state: present
- # Adding key for root user!
-
- handlers:
- - name: restart sshd
- service:
- name: sshd
- state: restarted
- # No validation of sshd config before restart!
-
- - name: reload nginx
- service:
- name: nginx
- state: reloaded
- # No config test before reload!
diff --git a/labs/lab6/vulnerable-iac/ansible/deploy.yml b/labs/lab6/vulnerable-iac/ansible/deploy.yml
deleted file mode 100644
index 9f7edece..00000000
--- a/labs/lab6/vulnerable-iac/ansible/deploy.yml
+++ /dev/null
@@ -1,128 +0,0 @@
----
-# Vulnerable Ansible Playbook for Lab 6
-# This file contains intentional security issues for educational purposes
-# DO NOT use this playbook in production!
-
-- name: Deploy web application (VULNERABLE)
- hosts: webservers
- become: yes
-
- vars:
- # SECURITY ISSUE #1 - Hardcoded password in playbook!
- db_password: "SuperSecret123!"
- # SECURITY ISSUE #2 - Hardcoded API key!
- api_key: "sk_live_1234567890abcdef"
- # SECURITY ISSUE #3 - Database connection string with credentials
- db_connection: "postgresql://admin:password123@db.example.com:5432/myapp"
-
- tasks:
- # SECURITY ISSUE #4 - Using shell instead of proper module
- - name: Install packages with shell
- shell: apt-get install -y nginx mysql-client
- # Should use apt module instead
-
- # SECURITY ISSUE #5 - Command with hardcoded password visible in logs
- - name: Set database password
- command: mysql -u root -p{{ db_password }} -e "CREATE DATABASE myapp;"
- # Missing no_log: true - password will appear in logs!
-
- # SECURITY ISSUE #6 - File with overly permissive permissions
- - name: Create config file
- copy:
- content: |
- DB_PASSWORD={{ db_password }}
- API_KEY={{ api_key }}
- dest: /etc/myapp/config.env
- mode: '0777' # World readable/writable!
- owner: root
- group: root
-
- # SECURITY ISSUE #7 - SSH key with wrong permissions
- - name: Deploy SSH key
- copy:
- src: files/id_rsa
- dest: /root/.ssh/id_rsa
- mode: '0644' # Should be 0600!
- owner: root
- group: root
-
- # SECURITY ISSUE #8 - Running command as root without necessity
- - name: Create application directory
- command: mkdir -p /var/www/myapp
- become: yes
- become_user: root
- # Should use file module and run as regular user
-
- # SECURITY ISSUE #9 - Disabling firewall
- - name: Disable firewall
- service:
- name: ufw
- state: stopped
- enabled: no
- # Should never disable firewall!
-
- # SECURITY ISSUE #10 - Downloading and executing script without verification
- - name: Download and run setup script
- shell: curl http://example.com/setup.sh | bash
- # No HTTPS, no checksum verification!
-
- # SECURITY ISSUE #11 - Git clone with hardcoded credentials in URL
- - name: Clone repository
- git:
- repo: 'https://username:password@github.com/company/repo.git'
- dest: /var/www/myapp
- # Credentials in URL!
-
- # SECURITY ISSUE #12 - Ignoring errors
- - name: Run database migration
- command: /usr/local/bin/migrate
- ignore_errors: yes
- # Should not ignore errors for critical tasks
-
- # SECURITY ISSUE #13 - Using deprecated bare variables
- - name: Create user
- user:
- name: "{{ username }}" # Variable not defined, will fail
- password: "{{ password | password_hash('sha512') }}"
- # Password operation without no_log!
-
- # SECURITY ISSUE #14 - Debug statement exposing secrets
- - name: Debug configuration
- debug:
- msg: "Database: {{ db_connection }}, API Key: {{ api_key }}"
- # Exposing secrets in debug output!
-
- # SECURITY ISSUE #15 - Using latest version (non-deterministic)
- - name: Install application
- apt:
- name: myapp
- state: latest # Should pin specific version
- update_cache: yes
-
- # SECURITY ISSUE #16 - No validation of downloaded files
- - name: Download application
- get_url:
- url: http://example.com/app.tar.gz # HTTP not HTTPS!
- dest: /tmp/app.tar.gz
- # No checksum validation!
- mode: '0777'
-
- # SECURITY ISSUE #17 - Running with shell expansion
- - name: Process files
- shell: rm -rf {{ user_input }}/*
- # Shell injection risk if user_input is not sanitized!
-
- # SECURITY ISSUE #18 - Synchronous password in task name
- - name: Set password to SuperSecret123!
- # Task name exposes password!
- user:
- name: appuser
- password: "{{ 'SuperSecret123!' | password_hash('sha512') }}"
-
- handlers:
- # SECURITY ISSUE #19 - Handler without proper service check
- - name: restart nginx
- service:
- name: nginx
- state: restarted
- # No validation that nginx is properly configured before restart
diff --git a/labs/lab6/vulnerable-iac/ansible/inventory.ini b/labs/lab6/vulnerable-iac/ansible/inventory.ini
deleted file mode 100644
index 8dc56497..00000000
--- a/labs/lab6/vulnerable-iac/ansible/inventory.ini
+++ /dev/null
@@ -1,38 +0,0 @@
-# Vulnerable Ansible Inventory for Lab 6
-# SECURITY ISSUE #35 - Credentials in inventory file!
-
-[webservers]
-web1.example.com ansible_user=root ansible_password=RootPass123!
-web2.example.com ansible_user=root ansible_ssh_pass=RootPass123!
-
-[databases]
-# SECURITY ISSUE #36 - Using root user and default port
-db1.example.com ansible_user=root ansible_port=22 ansible_password=DbPass123!
-
-[appservers]
-# SECURITY ISSUE #37 - Private key path in plaintext
-app1.example.com ansible_user=deploy ansible_ssh_private_key_file=/tmp/insecure_key
-
-[all:vars]
-# SECURITY ISSUE #38 - Global variables with secrets
-ansible_become_password=Sudo123!
-db_admin_password=AdminDB123!
-api_secret_key=sk_live_abcdef1234567890
-
-# SECURITY ISSUE #39 - Using insecure connection settings
-ansible_ssh_common_args='-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null'
-
-# SECURITY ISSUE #40 - No connection timeout
-ansible_connection=ssh
-ansible_ssh_timeout=0
-
-[production:children]
-webservers
-databases
-appservers
-
-[production:vars]
-environment=production
-# SECURITY ISSUE #41 - Production using same credentials as dev
-admin_user=admin
-admin_pass=AdminProd123!
diff --git a/labs/lab6/vulnerable-iac/pulumi/Pulumi-vulnerable.yaml b/labs/lab6/vulnerable-iac/pulumi/Pulumi-vulnerable.yaml
deleted file mode 100644
index 469d7624..00000000
--- a/labs/lab6/vulnerable-iac/pulumi/Pulumi-vulnerable.yaml
+++ /dev/null
@@ -1,279 +0,0 @@
-#
-# Vulnerable Pulumi YAML Configuration for Lab 6
-# This file contains intentional security issues for educational purposes
-# DO NOT use this code in production!
-#
-# KICS Documentation: https://docs.kics.io
-# Pulumi YAML Queries: https://docs.kics.io/latest/queries/pulumi-queries/
-#
-
-name: vulnerable-pulumi-lab6
-runtime: yaml
-description: Intentionally vulnerable Pulumi infrastructure for Lab 6 - DO NOT USE IN PRODUCTION!
-
-variables:
- # SECURITY ISSUE #1 - Hardcoded database password
- dbPassword: "SuperSecret123!"
-
- # SECURITY ISSUE #2 - Hardcoded API key
- apiKey: "sk_live_1234567890abcdef"
-
- awsRegion: "us-east-1"
-
-resources:
- # SECURITY ISSUE #3 - Public S3 bucket
- publicBucket:
- type: aws:s3:Bucket
- properties:
- bucket: my-public-bucket-pulumi-yaml
- acl: public-read # Public access!
- tags:
- Name: "Public Bucket"
- # Missing required tags: Environment, Owner, CostCenter
-
- # SECURITY ISSUE #4 - S3 bucket without encryption
- unencryptedBucket:
- type: aws:s3:Bucket
- properties:
- bucket: my-unencrypted-bucket-pulumi-yaml
- acl: private
- versioning:
- enabled: false # Versioning disabled
- tags:
- Name: "Unencrypted Bucket"
- # No serverSideEncryptionConfiguration!
-
- # SECURITY ISSUE #5 - Security group allowing all traffic from anywhere
- allowAllSg:
- type: aws:ec2:SecurityGroup
- properties:
- name: allow-all-sg-yaml
- description: "Allow all inbound traffic"
- vpcId: vpc-12345678
- ingress:
- - description: "Allow all traffic"
- fromPort: 0
- toPort: 65535
- protocol: "-1" # All protocols
- cidrBlocks:
- - "0.0.0.0/0" # From anywhere!
- egress:
- - fromPort: 0
- toPort: 0
- protocol: "-1"
- cidrBlocks:
- - "0.0.0.0/0"
- tags:
- Name: "Allow All Security Group"
-
- # SECURITY ISSUE #6 - SSH and RDP open to the world
- sshOpenSg:
- type: aws:ec2:SecurityGroup
- properties:
- name: ssh-open-sg-yaml
- description: "SSH and RDP from anywhere"
- vpcId: vpc-12345678
- ingress:
- - description: "SSH from anywhere"
- fromPort: 22
- toPort: 22
- protocol: tcp
- cidrBlocks:
- - "0.0.0.0/0" # SSH from anywhere!
- - description: "RDP from anywhere"
- fromPort: 3389
- toPort: 3389
- protocol: tcp
- cidrBlocks:
- - "0.0.0.0/0" # RDP from anywhere!
- tags:
- Name: "SSH Open"
-
- # SECURITY ISSUE #7 & #8 - Unencrypted and publicly accessible RDS instance
- unencryptedDb:
- type: aws:rds:Instance
- properties:
- identifier: mydb-unencrypted-pulumi-yaml
- engine: postgres
- engineVersion: "13.7"
- instanceClass: db.t3.micro
- allocatedStorage: 20
- username: admin
- password: ${dbPassword} # Using hardcoded password!
- storageEncrypted: false # SECURITY ISSUE #7 - No encryption!
- publiclyAccessible: true # SECURITY ISSUE #8 - Public access!
- skipFinalSnapshot: true
- backupRetentionPeriod: 0 # SECURITY ISSUE #9 - No backups!
- deletionProtection: false # SECURITY ISSUE #10
- vpcSecurityGroupIds:
- - ${allowAllSg.id}
- tags:
- Name: "Unencrypted Database"
-
- # SECURITY ISSUE #11 - IAM policy with wildcard permissions
- adminPolicy:
- type: aws:iam:Policy
- properties:
- name: admin-policy-yaml
- description: "Policy with wildcard permissions"
- policy:
- fn::toJSON:
- Version: "2012-10-17"
- Statement:
- - Effect: Allow
- Action: "*" # Wildcard action!
- Resource: "*" # Wildcard resource!
-
- # SECURITY ISSUE #12 - IAM role with overly permissive S3 access
- appRole:
- type: aws:iam:Role
- properties:
- name: app-role-yaml
- assumeRolePolicy:
- fn::toJSON:
- Version: "2012-10-17"
- Statement:
- - Action: sts:AssumeRole
- Effect: Allow
- Principal:
- Service: ec2.amazonaws.com
-
- s3FullAccessPolicy:
- type: aws:iam:RolePolicy
- properties:
- name: s3-full-access-yaml
- role: ${appRole.id}
- policy:
- fn::toJSON:
- Version: "2012-10-17"
- Statement:
- - Effect: Allow
- Action: "s3:*" # Full S3 access!
- Resource: "*" # All resources!
-
- # SECURITY ISSUE #13 & #14 - EC2 instance without encryption and secrets in user data
- unencryptedInstance:
- type: aws:ec2:Instance
- properties:
- ami: ami-0c55b159cbfafe1f0
- instanceType: t2.micro
- vpcSecurityGroupIds:
- - ${sshOpenSg.id}
-      userData:
-        fn::join:
-          - "\n"
-          - - "#!/bin/bash"
-            - "echo 'DB_PASSWORD=${dbPassword}' > /etc/app/config" # Password in user data!
-            - "echo 'API_KEY=${apiKey}' >> /etc/app/config"
- tags:
- Name: "Unencrypted Instance"
- # No root block device encryption specified!
-
- # SECURITY ISSUE #16 - Lambda function with overly permissive IAM role
- lambdaRole:
- type: aws:iam:Role
- properties:
- name: lambda-role-yaml
- assumeRolePolicy:
- fn::toJSON:
- Version: "2012-10-17"
- Statement:
- - Action: sts:AssumeRole
- Effect: Allow
- Principal:
- Service: lambda.amazonaws.com
-
- lambdaPolicy:
- type: aws:iam:RolePolicy
- properties:
- name: lambda-policy-yaml
- role: ${lambdaRole.id}
- policy:
- fn::toJSON:
- Version: "2012-10-17"
- Statement:
- - Effect: Allow
- Action:
- - "s3:*"
- - "dynamodb:*"
- - "rds:*"
- - "ec2:*"
- Resource: "*"
-
- # SECURITY ISSUE #17 & #18 - DynamoDB table without encryption or PITR
- unencryptedTable:
- type: aws:dynamodb:Table
- properties:
- name: my-table-pulumi-yaml
- attributes:
- - name: id
- type: S
- hashKey: id
- billingMode: PAY_PER_REQUEST
- pointInTimeRecovery:
- enabled: false # SECURITY ISSUE #18 - No PITR
- tags:
- Name: "Unencrypted Table"
- # No serverSideEncryption specified! SECURITY ISSUE #17
-
- # SECURITY ISSUE #19 - EBS volume without encryption
- unencryptedVolume:
- type: aws:ebs:Volume
- properties:
- availabilityZone: us-east-1a
- size: 10
- encrypted: false # No encryption!
- tags:
- Name: "Unencrypted Volume"
-
- # SECURITY ISSUE #20 - CloudWatch log group without retention or KMS encryption
- logGroup:
- type: aws:cloudwatch:LogGroup
- properties:
- name: /aws/app/logs-yaml
- retentionInDays: 0 # Logs never expire - cost and compliance issue
- # No kmsKeyId specified - no encryption!
-
- # SECURITY ISSUE #21 - EKS cluster without encryption
- eksCluster:
- type: aws:eks:Cluster
- properties:
- name: vulnerable-eks-yaml
- roleArn: ${appRole.arn}
- vpcConfig:
- subnetIds:
- - subnet-12345678
- - subnet-87654321
- endpointPublicAccess: true # Public access enabled
- publicAccessCidrs:
- - "0.0.0.0/0" # Accessible from anywhere!
- # No encryptionConfig specified!
-
-# SECURITY ISSUE #15 - Exposing secrets in outputs (not marked as secret)
-outputs:
- bucketName:
- value: ${publicBucket.id}
-
- dbEndpoint:
- value: ${unencryptedDb.endpoint}
-
- # These outputs expose sensitive data!
- dbPassword:
- value: ${dbPassword}
-
- apiKey:
- value: ${apiKey}
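-
-  # Hedged fix sketch: wrap sensitive outputs with fn::secret so Pulumi masks them, e.g.
-  # dbPassword:
-  #   value:
-  #     fn::secret: ${dbPassword}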
-
- region:
- value: ${awsRegion}
-
-# Configuration with default secret values - SECURITY ISSUE
-config:
- aws:region:
- default: us-east-1
-
- # Should not have defaults for sensitive values!
- db_password:
- default: "DefaultPass123!"
-
- api_key:
- default: "sk_default_key"
diff --git a/labs/lab6/vulnerable-iac/pulumi/Pulumi.yaml b/labs/lab6/vulnerable-iac/pulumi/Pulumi.yaml
deleted file mode 100644
index f0d65d01..00000000
--- a/labs/lab6/vulnerable-iac/pulumi/Pulumi.yaml
+++ /dev/null
@@ -1,12 +0,0 @@
-name: vulnerable-pulumi-lab6
-runtime: python
-description: Intentionally vulnerable Pulumi infrastructure for Lab 6 - DO NOT USE IN PRODUCTION!
-
-config:
- # SECURITY ISSUE #21 - Default config values with secrets
- aws:region:
- default: us-east-1
- db_password:
- default: "DefaultPass123!" # Should not have default for passwords!
- api_key:
- default: "sk_default_key" # Should not have default for secrets!
diff --git a/labs/lab6/vulnerable-iac/pulumi/__main__.py b/labs/lab6/vulnerable-iac/pulumi/__main__.py
deleted file mode 100644
index 57f3d669..00000000
--- a/labs/lab6/vulnerable-iac/pulumi/__main__.py
+++ /dev/null
@@ -1,248 +0,0 @@
-"""
-Vulnerable Pulumi Infrastructure Code for Lab 6
-This file contains intentional security issues for educational purposes
-DO NOT use this code in production!
-
-Language: Python
-Cloud: AWS
-"""
-
-import pulumi
-import pulumi_aws as aws
-
-# SECURITY ISSUE #1 - Hardcoded AWS credentials
-aws_provider = aws.Provider("aws-provider",
- region="us-east-1",
- access_key="AKIAIOSFODNN7EXAMPLE", # Hardcoded!
- secret_key="wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY" # Hardcoded!
-)
-
-# SECURITY ISSUE #2 - Hardcoded secrets in config
-config = pulumi.Config()
-db_password = "SuperSecret123!" # Should use config.require_secret()
-api_key = "sk_live_1234567890abcdef" # Hardcoded API key
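-
-# Hedged sketch of the secure alternative (assumes the values were set with
-# `pulumi config set --secret db_password`, etc.):
-# db_password = config.require_secret("db_password")
-# api_key = config.require_secret("api_key")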
-
-# SECURITY ISSUE #3 - Public S3 bucket
-public_bucket = aws.s3.Bucket("public-bucket",
- bucket="my-public-bucket-pulumi",
- acl="public-read", # Public access!
- tags={
- "Name": "Public Bucket",
- # Missing required tags: Environment, Owner
- }
-)
-
-# SECURITY ISSUE #4 - S3 bucket without encryption
-unencrypted_bucket = aws.s3.Bucket("unencrypted-bucket",
- bucket="my-unencrypted-bucket-pulumi",
- acl="private",
- # No server_side_encryption_configuration!
- versioning=aws.s3.BucketVersioningArgs(
- enabled=False # Versioning disabled
- ),
- tags={
- "Name": "Unencrypted Bucket"
- }
-)
-
-# SECURITY ISSUE #5 - Security group allowing all traffic from anywhere
-allow_all_sg = aws.ec2.SecurityGroup("allow-all-sg",
- description="Allow all inbound traffic",
- vpc_id="vpc-12345678",
- ingress=[
- aws.ec2.SecurityGroupIngressArgs(
- description="Allow all traffic",
- from_port=0,
- to_port=65535,
- protocol="-1", # All protocols
- cidr_blocks=["0.0.0.0/0"], # From anywhere!
- )
- ],
- egress=[
- aws.ec2.SecurityGroupEgressArgs(
- from_port=0,
- to_port=0,
- protocol="-1",
- cidr_blocks=["0.0.0.0/0"],
- )
- ],
- tags={
- "Name": "Allow All Security Group"
- }
-)
-
-# SECURITY ISSUE #6 - SSH open to the world
-ssh_open_sg = aws.ec2.SecurityGroup("ssh-open-sg",
- description="SSH from anywhere",
- vpc_id="vpc-12345678",
- ingress=[
- aws.ec2.SecurityGroupIngressArgs(
- description="SSH from anywhere",
- from_port=22,
- to_port=22,
- protocol="tcp",
- cidr_blocks=["0.0.0.0/0"], # SSH from anywhere!
- ),
- aws.ec2.SecurityGroupIngressArgs(
- description="RDP from anywhere",
- from_port=3389,
- to_port=3389,
- protocol="tcp",
- cidr_blocks=["0.0.0.0/0"], # RDP from anywhere!
- )
- ],
- tags={
- "Name": "SSH Open"
- }
-)
-
-# SECURITY ISSUE #7 - Unencrypted RDS instance
-unencrypted_db = aws.rds.Instance("unencrypted-db",
- identifier="mydb-unencrypted-pulumi",
- engine="postgres",
- engine_version="13.7",
- instance_class="db.t3.micro",
- allocated_storage=20,
- username="admin",
- password=db_password, # Hardcoded password from above!
- storage_encrypted=False, # No encryption!
- publicly_accessible=True, # SECURITY ISSUE #8 - Public access!
- skip_final_snapshot=True,
- backup_retention_period=0, # SECURITY ISSUE #9 - No backups!
- deletion_protection=False, # SECURITY ISSUE #10
- vpc_security_group_ids=[allow_all_sg.id],
- tags={
- "Name": "Unencrypted Database"
- }
-)
-
-# SECURITY ISSUE #11 - IAM policy with wildcard permissions
-admin_policy = aws.iam.Policy("admin-policy",
-    description="Policy with wildcard permissions",
-    policy="""{
-        "Version": "2012-10-17",
-        "Statement": [{
-            "Effect": "Allow",
-            "Action": "*",
-            "Resource": "*"
-        }]
-    }"""  # static policy document; no Output wrapping needed
-)
-
-# SECURITY ISSUE #12 - IAM role with overly permissive S3 access
-app_role = aws.iam.Role("app-role",
-    assume_role_policy="""{
-        "Version": "2012-10-17",
-        "Statement": [{
-            "Action": "sts:AssumeRole",
-            "Effect": "Allow",
-            "Principal": {
-                "Service": "ec2.amazonaws.com"
-            }
-        }]
-    }"""
-)
-
-s3_full_access_policy = aws.iam.RolePolicy("s3-full-access",
-    role=app_role.id,
-    policy="""{
-        "Version": "2012-10-17",
-        "Statement": [{
-            "Effect": "Allow",
-            "Action": "s3:*",
-            "Resource": "*"
-        }]
-    }"""
-)
-
-# SECURITY ISSUE #13 - EC2 instance without encryption
-unencrypted_instance = aws.ec2.Instance("unencrypted-instance",
- ami="ami-0c55b159cbfafe1f0",
- instance_type="t2.micro",
- vpc_security_group_ids=[ssh_open_sg.id],
- # No root_block_device encryption!
- user_data=f"""#!/bin/bash
- echo "DB_PASSWORD={db_password}" > /etc/app/config # SECURITY ISSUE #14 - Password in user data!
- echo "API_KEY={api_key}" >> /etc/app/config
- """,
- tags={
- "Name": "Unencrypted Instance"
- }
-)
-
-# SECURITY ISSUE #15 - Exposing secrets in outputs (not marked as secret)
-pulumi.export("bucket_name", public_bucket.id)
-pulumi.export("db_endpoint", unencrypted_db.endpoint)
-pulumi.export("db_password", db_password) # Exposing password!
-pulumi.export("api_key", api_key) # Exposing API key!
-
-# SECURITY ISSUE #16 - Lambda function with overly permissive IAM role
-lambda_role = aws.iam.Role("lambda-role",
-    assume_role_policy="""{
-        "Version": "2012-10-17",
-        "Statement": [{
-            "Action": "sts:AssumeRole",
-            "Effect": "Allow",
-            "Principal": {
-                "Service": "lambda.amazonaws.com"
-            }
-        }]
-    }"""
-)
-
-lambda_policy = aws.iam.RolePolicy("lambda-policy",
-    role=lambda_role.id,
-    policy="""{
-        "Version": "2012-10-17",
-        "Statement": [{
-            "Effect": "Allow",
-            "Action": [
-                "s3:*",
-                "dynamodb:*",
-                "rds:*",
-                "ec2:*"
-            ],
-            "Resource": "*"
-        }]
-    }"""
-)
-
-# SECURITY ISSUE #17 - DynamoDB table without encryption
-unencrypted_table = aws.dynamodb.Table("unencrypted-table",
- name="my-table-pulumi",
- attributes=[
- aws.dynamodb.TableAttributeArgs(
- name="id",
- type="S",
- )
- ],
- hash_key="id",
- billing_mode="PAY_PER_REQUEST",
- # No server_side_encryption!
- point_in_time_recovery=aws.dynamodb.TablePointInTimeRecoveryArgs(
- enabled=False # SECURITY ISSUE #18 - No PITR
- ),
- tags={
- "Name": "Unencrypted Table"
- }
-)
-
-# SECURITY ISSUE #19 - EBS volume without encryption
-unencrypted_volume = aws.ebs.Volume("unencrypted-volume",
- availability_zone="us-east-1a",
- size=10,
- encrypted=False, # No encryption!
- tags={
- "Name": "Unencrypted Volume"
- }
-)
-
-# SECURITY ISSUE #20 - CloudWatch log group without retention
-log_group = aws.cloudwatch.LogGroup("app-logs",
- name="/aws/app/logs",
- retention_in_days=0, # Logs never expire - cost and compliance issue
- # No KMS encryption!
-)
-
-print("⚠️  WARNING: This Pulumi stack contains 20 intentional security vulnerabilities!")
-print(" This is for educational purposes only - DO NOT deploy to production!")
diff --git a/labs/lab6/vulnerable-iac/pulumi/requirements.txt b/labs/lab6/vulnerable-iac/pulumi/requirements.txt
deleted file mode 100644
index 5f626e8c..00000000
--- a/labs/lab6/vulnerable-iac/pulumi/requirements.txt
+++ /dev/null
@@ -1,2 +0,0 @@
-pulumi>=3.0.0,<4.0.0
-pulumi-aws>=6.0.0,<7.0.0
diff --git a/labs/lab6/vulnerable-iac/terraform/database.tf b/labs/lab6/vulnerable-iac/terraform/database.tf
deleted file mode 100644
index 1897f378..00000000
--- a/labs/lab6/vulnerable-iac/terraform/database.tf
+++ /dev/null
@@ -1,92 +0,0 @@
-# Vulnerable Database Configuration for Lab 6
-# Contains unencrypted databases and poor security practices
-
-# Unencrypted RDS instance - SECURITY ISSUE #8
-resource "aws_db_instance" "unencrypted_db" {
- identifier = "mydb-unencrypted"
- engine = "postgres"
- engine_version = "13.7"
- instance_class = "db.t3.micro"
- allocated_storage = 20
-
- username = "admin"
- password = "SuperSecretPassword123!" # SECURITY ISSUE #9 - Hardcoded password!
-
- storage_encrypted = false # No encryption!
-
- publicly_accessible = true # SECURITY ISSUE #10 - Public access!
-
- skip_final_snapshot = true
-
- # No backup configuration
- backup_retention_period = 0 # SECURITY ISSUE #11 - No backups!
-
- # Missing monitoring
- enabled_cloudwatch_logs_exports = []
-
- # No deletion protection
- deletion_protection = false # SECURITY ISSUE #12
-
- # Using default security group
- vpc_security_group_ids = [aws_security_group.database_exposed.id]
-
- tags = {
- Name = "Unencrypted Database"
- # Missing required tags
- }
-}
-
-# Database with weak configuration - SECURITY ISSUE #13
-resource "aws_db_instance" "weak_db" {
- identifier = "mydb-weak"
- engine = "mysql"
- engine_version = "5.7.38" # Old version with known vulnerabilities
- instance_class = "db.t3.micro"
- allocated_storage = 20
-
- username = "root" # Using default admin username
- password = "password123" # Weak password!
-
- storage_encrypted = true
- kms_key_id = "" # Empty KMS key - using default key
-
- publicly_accessible = false
-
- # Multi-AZ disabled
- multi_az = false # SECURITY ISSUE #14 - No high availability
-
- # Auto minor version upgrade disabled
- auto_minor_version_upgrade = false # SECURITY ISSUE #15
-
- # No performance insights
- performance_insights_enabled = false
-
- skip_final_snapshot = true
-
- tags = {
- Name = "Weak Database"
- }
-}
-
-# DynamoDB table without encryption - SECURITY ISSUE #16
-resource "aws_dynamodb_table" "unencrypted_table" {
- name = "my-table"
- billing_mode = "PAY_PER_REQUEST"
- hash_key = "id"
-
- attribute {
- name = "id"
- type = "S"
- }
-
- # No server_side_encryption configuration!
-
- # No point-in-time recovery
- point_in_time_recovery {
- enabled = false # SECURITY ISSUE #17
- }
-
- tags = {
- Name = "Unencrypted DynamoDB Table"
- }
-}
diff --git a/labs/lab6/vulnerable-iac/terraform/iam.tf b/labs/lab6/vulnerable-iac/terraform/iam.tf
deleted file mode 100644
index 8ac6746f..00000000
--- a/labs/lab6/vulnerable-iac/terraform/iam.tf
+++ /dev/null
@@ -1,125 +0,0 @@
-# Vulnerable IAM Configuration for Lab 6
-# Contains overly permissive IAM policies
-
-# IAM policy with wildcard permissions - SECURITY ISSUE #18
-resource "aws_iam_policy" "admin_policy" {
- name = "overly-permissive-policy"
- description = "Policy with wildcard permissions"
-
- policy = jsonencode({
- Version = "2012-10-17"
- Statement = [
- {
- Effect = "Allow"
- Action = "*" # All actions allowed!
- Resource = "*" # On all resources!
- }
- ]
- })
-}
-
-# IAM role with full S3 access - SECURITY ISSUE #19
-resource "aws_iam_role" "app_role" {
- name = "application-role"
-
- assume_role_policy = jsonencode({
- Version = "2012-10-17"
- Statement = [
- {
- Action = "sts:AssumeRole"
- Effect = "Allow"
- Principal = {
- Service = "ec2.amazonaws.com"
- }
- }
- ]
- })
-}
-
-resource "aws_iam_role_policy" "s3_full_access" {
- name = "s3-full-access"
- role = aws_iam_role.app_role.id
-
- policy = jsonencode({
- Version = "2012-10-17"
- Statement = [
- {
- Effect = "Allow"
- Action = [
- "s3:*" # All S3 actions!
- ]
- Resource = "*" # On all buckets!
- }
- ]
- })
-}
-
-# IAM user with inline policy - SECURITY ISSUE #20
-resource "aws_iam_user" "service_account" {
- name = "service-account"
- path = "/system/"
-
- tags = {
- Name = "Service Account"
- }
-}
-
-resource "aws_iam_user_policy" "service_policy" {
- name = "service-inline-policy"
- user = aws_iam_user.service_account.name
-
- policy = jsonencode({
- Version = "2012-10-17"
- Statement = [
- {
- Effect = "Allow"
- Action = [
- "ec2:*", # Full EC2 access
- "s3:*", # Full S3 access
- "rds:*" # Full RDS access
- ]
- Resource = "*"
- }
- ]
- })
-}
-
-# Access key for IAM user - SECURITY ISSUE #21
-resource "aws_iam_access_key" "service_key" {
- user = aws_iam_user.service_account.name
-}
-
-# Output sensitive data - SECURITY ISSUE #22
-output "access_key_id" {
- value = aws_iam_access_key.service_key.id
- # Should be marked as sensitive!
-}
-
-output "secret_access_key" {
- value = aws_iam_access_key.service_key.secret
- # Exposing secret key in output!
-}
-
-# IAM policy allowing privilege escalation - SECURITY ISSUE #23
-resource "aws_iam_policy" "privilege_escalation" {
- name = "potential-privilege-escalation"
- description = "Policy that allows privilege escalation"
-
- policy = jsonencode({
- Version = "2012-10-17"
- Statement = [
- {
- Effect = "Allow"
- Action = [
- "iam:CreatePolicy",
- "iam:CreateUser",
- "iam:AttachUserPolicy",
- "iam:AttachRolePolicy",
- "iam:PutUserPolicy",
- "iam:PutRolePolicy"
- ]
- Resource = "*"
- }
- ]
- })
-}
diff --git a/labs/lab6/vulnerable-iac/terraform/main.tf b/labs/lab6/vulnerable-iac/terraform/main.tf
deleted file mode 100644
index 027cdecf..00000000
--- a/labs/lab6/vulnerable-iac/terraform/main.tf
+++ /dev/null
@@ -1,43 +0,0 @@
-# Vulnerable Terraform Configuration for Lab 6
-# This file contains intentional security issues for educational purposes
-# DO NOT use this code in production!
-
-provider "aws" {
- region = "us-east-1"
- # Hardcoded credentials - SECURITY ISSUE #1
- access_key = "AKIAIOSFODNN7EXAMPLE"
- secret_key = "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
-}
-
-# Public S3 bucket - SECURITY ISSUE #2
-resource "aws_s3_bucket" "public_data" {
- bucket = "my-public-bucket-lab6"
- acl = "public-read" # Public access enabled!
-
- tags = {
- Name = "Public Data Bucket"
- # Missing required tags: Environment, Owner, CostCenter
- }
-}
-
-# S3 bucket without encryption - SECURITY ISSUE #3
-resource "aws_s3_bucket" "unencrypted_data" {
- bucket = "my-unencrypted-bucket-lab6"
- acl = "private"
-
- # No server_side_encryption_configuration!
-
- versioning {
- enabled = false # Versioning disabled
- }
-}
-
-# S3 bucket with public access block disabled - SECURITY ISSUE #4
-resource "aws_s3_bucket_public_access_block" "bad_config" {
- bucket = aws_s3_bucket.public_data.id
-
- block_public_acls = false # Should be true
- block_public_policy = false # Should be true
- ignore_public_acls = false # Should be true
- restrict_public_buckets = false # Should be true
-}
diff --git a/labs/lab6/vulnerable-iac/terraform/security_groups.tf b/labs/lab6/vulnerable-iac/terraform/security_groups.tf
deleted file mode 100644
index 88f706d5..00000000
--- a/labs/lab6/vulnerable-iac/terraform/security_groups.tf
+++ /dev/null
@@ -1,92 +0,0 @@
-# Vulnerable Security Groups for Lab 6
-# Contains overly permissive network rules
-
-# Security group allowing ALL traffic from anywhere - SECURITY ISSUE #5
-resource "aws_security_group" "allow_all" {
- name = "allow-all-traffic"
- description = "Allow all inbound traffic from anywhere"
- vpc_id = "vpc-12345678"
-
- ingress {
- description = "Allow all traffic"
- from_port = 0
- to_port = 65535
- protocol = "-1" # All protocols
- cidr_blocks = ["0.0.0.0/0"] # From anywhere!
- }
-
- egress {
- from_port = 0
- to_port = 0
- protocol = "-1"
- cidr_blocks = ["0.0.0.0/0"]
- }
-
- tags = {
- Name = "Allow All Security Group"
- }
-}
-
-# Security group with SSH open to the world - SECURITY ISSUE #6
-resource "aws_security_group" "ssh_open" {
- name = "ssh-from-anywhere"
- description = "SSH access from anywhere"
- vpc_id = "vpc-12345678"
-
- ingress {
- description = "SSH from anywhere"
- from_port = 22
- to_port = 22
- protocol = "tcp"
- cidr_blocks = ["0.0.0.0/0"] # SSH from anywhere!
- }
-
- ingress {
- description = "RDP from anywhere"
- from_port = 3389
- to_port = 3389
- protocol = "tcp"
- cidr_blocks = ["0.0.0.0/0"] # RDP from anywhere!
- }
-
- egress {
- from_port = 0
- to_port = 0
- protocol = "-1"
- cidr_blocks = ["0.0.0.0/0"]
- }
-
- tags = {
- Name = "SSH Open Security Group"
- }
-}
-
-# Security group with database ports exposed - SECURITY ISSUE #7
-resource "aws_security_group" "database_exposed" {
- name = "database-public"
- description = "Database accessible from internet"
- vpc_id = "vpc-12345678"
-
- ingress {
- description = "MySQL from anywhere"
- from_port = 3306
- to_port = 3306
- protocol = "tcp"
- cidr_blocks = ["0.0.0.0/0"] # Database exposed!
- }
-
- ingress {
- description = "PostgreSQL from anywhere"
- from_port = 5432
- to_port = 5432
- protocol = "tcp"
- cidr_blocks = ["0.0.0.0/0"] # Database exposed!
- }
-
- egress {
- from_port = 0
- to_port = 0
- protocol = "-1"
- cidr_blocks = ["0.0.0.0/0"]
- }
-}
diff --git a/labs/lab6/vulnerable-iac/terraform/variables.tf b/labs/lab6/vulnerable-iac/terraform/variables.tf
deleted file mode 100644
index df4547a7..00000000
--- a/labs/lab6/vulnerable-iac/terraform/variables.tf
+++ /dev/null
@@ -1,75 +0,0 @@
-# Vulnerable Variables Configuration for Lab 6
-# Contains insecure default values
-
-variable "aws_region" {
- description = "AWS region"
- type = string
- default = "us-east-1"
- # No validation for approved regions - SECURITY ISSUE #24
-}
-
-variable "db_password" {
- description = "Database password"
- type = string
- default = "changeme123" # SECURITY ISSUE #25 - Weak default password!
- # Should not have a default value for passwords!
- # Should be marked as sensitive!
-}
-
-variable "environment" {
- description = "Environment name"
- type = string
- default = "production" # Defaulting to production is risky
- # No validation
-}
-
-variable "enable_public_access" {
- description = "Enable public access to resources"
- type = bool
- default = true # SECURITY ISSUE #26 - Public access enabled by default!
-}
-
-variable "enable_encryption" {
- description = "Enable encryption"
- type = bool
- default = false # SECURITY ISSUE #27 - Encryption disabled by default!
-}
-
-variable "allowed_ssh_cidr" {
- description = "CIDR blocks allowed for SSH"
- type = list(string)
- default = ["0.0.0.0/0"] # SECURITY ISSUE #28 - Allows SSH from anywhere!
-}
-
-variable "backup_retention_days" {
- description = "Number of days to retain backups"
- type = number
- default = 0 # SECURITY ISSUE #29 - No backups by default!
-}
-
-variable "api_key" {
- description = "API key for external service"
- type = string
- default = "sk_test_1234567890abcdef" # SECURITY ISSUE #30 - Hardcoded API key!
- # Should not have default, should be sensitive
-}
-
-# No validation constraints on critical variables
-variable "instance_type" {
- description = "EC2 instance type"
- type = string
- default = "t2.micro"
- # No validation - could use expensive instance types
-}
-
-variable "allowed_regions" {
- description = "List of allowed AWS regions"
- type = list(string)
- default = ["us-east-1", "us-west-2", "eu-west-1"]
- # Not enforced anywhere in the code
-}
-
-# Missing required variables
-# - No variable for required resource tags
-# - No variable for KMS key IDs
-# - No variable for logging configuration
diff --git a/labs/lab7.md b/labs/lab7.md
deleted file mode 100644
index 48c23111..00000000
--- a/labs/lab7.md
+++ /dev/null
@@ -1,414 +0,0 @@
-# Lab 7 — Container Security: Image Scanning & Deployment Hardening
-
-
-
-
-
-> **Goal:** Analyze container images for vulnerabilities, audit Docker host security, and compare secure deployment configurations.
-> **Deliverable:** A PR from `feature/lab7` to the course repo with `labs/submission7.md` containing vulnerability analysis, CIS benchmark results, and deployment security comparison. Submit the PR link via Moodle.
-
----
-
-## Overview
-
-In this lab you will practice:
-- **Container image vulnerability scanning** using next-generation tools (Docker Scout, Snyk)
-- **Docker security benchmarking** with CIS Docker Benchmark compliance assessment
-- **Secure container deployment** analysis and configuration comparison
-- **Container security assessment** using modern scanning and analysis tools
-- **Security configuration impact** analysis for production deployments
-
-These skills are essential for implementing container security in DevSecOps pipelines and production environments.
-
-> Target application: OWASP Juice Shop (`bkimminich/juice-shop:v19.0.0`)
-
----
-
-## Prerequisites
-
-### Docker Scout CLI Setup
-
-Docker Scout requires authentication and a CLI plugin installation.
-
-#### Step 1: Install Docker Scout CLI Plugin
-
-**For Linux/macOS:**
-```bash
-curl -sSfL https://raw.githubusercontent.com/docker/scout-cli/main/install.sh | sh -s --
-```
-
-**Verify installation:**
-```bash
-docker scout version
-```
-
-You should see output like: `version: v1.x.x`
-
-#### Step 2: Docker Hub Authentication
-
-Docker Scout requires a Docker Hub account and Personal Access Token (PAT).
-
-**Create account and generate PAT:**
-
-1. **Create Docker Hub account** (if needed): Visit https://hub.docker.com
-
-2. **Generate PAT:**
-
- - Log in → Account Settings → Security → Personal Access Tokens
- - Click **New Access Token**
- - Description: `Lab7 Docker Scout Access`
- - Permissions: **Read-only** is sufficient for this lab (least privilege; broader scopes are only needed if you push images to Docker Hub)
- - Click **Generate** and copy the token immediately
-
-3. **Authenticate:**
-
- ```bash
- docker login
- # Username: your-docker-hub-username
- # Password: paste-your-PAT (not your password!)
- ```
-
-4. **Verify access:**
-
- ```bash
- docker scout quickview busybox:latest
- # Should display vulnerability scan results
- ```
-
-**Why PAT over password?**
-- Limited scope permissions for least privilege
-- Easy to revoke without changing account password
-- Required for SSO-enabled organizations
-- Better audit trail
-
-Learn more: https://docs.docker.com/go/access-tokens/
-
----
-
-## Tasks
-
-### Task 1 — Image Vulnerability & Configuration Analysis (3 pts)
-
-**Objective:** Scan container images for vulnerabilities and configuration issues.
-
-#### 1.1: Setup Working Directory
-
-```bash
-mkdir -p labs/lab7/{scanning,hardening,analysis}
-cd labs/lab7
-```
-
-#### 1.2: Vulnerability Scanning
-
-```bash
-# Pull the image to scan locally
-docker pull bkimminich/juice-shop:v19.0.0
-
-# Detailed CVE analysis
-docker scout cves bkimminich/juice-shop:v19.0.0 | tee scanning/scout-cves.txt
-```
-
-**Understanding the output:**
-- **C/H/M/L** = Critical/High/Medium/Low severity counts
-- Look for CVE IDs, affected packages, and potential impact
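-
-If your Scout CLI version supports the `--only-severity` filter (check `docker scout cves --help`), a narrowed report is convenient for the submission:
-
-```bash
-# Sketch: list only Critical/High CVEs (flag availability depends on Scout CLI version)
-docker scout cves --only-severity critical,high \
-  bkimminich/juice-shop:v19.0.0 | tee scanning/scout-cves-high.txt
-```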
-
-#### 1.3: Snyk comparison
-
-```bash
-# Requires Snyk account: https://snyk.io
-# Set token: export SNYK_TOKEN=your-token
-docker run --rm \
- -e SNYK_TOKEN \
- -v /var/run/docker.sock:/var/run/docker.sock \
- snyk/snyk:docker snyk test --docker bkimminich/juice-shop:v19.0.0 --severity-threshold=high \
- | tee scanning/snyk-results.txt
-```
-
-#### 1.4: Configuration Assessment
-
-```bash
-# Scan for security and best practice issues
-docker run --rm -v /var/run/docker.sock:/var/run/docker.sock \
- goodwithtech/dockle:latest \
- bkimminich/juice-shop:v19.0.0 | tee scanning/dockle-results.txt
-```
-
-**Look for:**
-- **FATAL/WARN** issues about running as root
-- Exposed secrets in environment variables
-- Missing security configurations
-- File permission issues
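-
-A quick way to tally findings per level from the saved report (a sketch; assumes `scanning/dockle-results.txt` exists from the step above and that Dockle lines start with the level name):
-
-```bash
-# Count FATAL/WARN/INFO/PASS lines to quote in the submission
-grep -oE '^(FATAL|WARN|INFO|PASS)' scanning/dockle-results.txt | sort | uniq -c
-```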
-
-**📊 Document in `labs/submission7.md`:**
-
-1. **Top 5 Critical/High Vulnerabilities**
- - CVE ID, affected package, severity, and impact
-
-2. **Dockle Configuration Findings**
- - List FATAL and WARN issues
- - Explain why each is a security concern
-
-3. **Security Posture Assessment**
- - Does the image run as root?
- - What security improvements would you recommend?
-
----
-
-### Task 2 — Docker Host Security Benchmarking (3 pts)
-
-**Objective:** Audit Docker host configuration against CIS Docker Benchmark.
-
-#### 2.1: Run CIS Docker Benchmark
-
-```bash
-# Run CIS Docker Benchmark security audit
-docker run --rm --net host --pid host --userns host --cap-add audit_control \
- -e DOCKER_CONTENT_TRUST=$DOCKER_CONTENT_TRUST \
- -v /var/lib:/var/lib:ro \
- -v /var/run/docker.sock:/var/run/docker.sock:ro \
- -v /usr/lib/systemd:/usr/lib/systemd:ro \
- -v /etc:/etc:ro --label docker_bench_security \
- docker/docker-bench-security | tee hardening/docker-bench-results.txt
-```
-
-**Understanding the output:**
-- **[PASS]** - Security control properly configured
-- **[WARN]** - Potential issue requiring review
-- **[FAIL]** - Security control not properly configured
-- **[INFO]** - Informational (no action needed)
-
-**Key sections:**
-1. Host Configuration
-2. Docker daemon configuration
-3. Docker daemon configuration files
-4. Container Images and Build Files
-5. Container Runtime
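-
-The summary statistics can be pulled straight from the saved log (a sketch; assumes `hardening/docker-bench-results.txt` exists from step 2.1):
-
-```bash
-# Tally [PASS]/[WARN]/[FAIL]/[INFO] markers for the submission
-grep -oE '\[(PASS|WARN|FAIL|INFO)\]' hardening/docker-bench-results.txt | sort | uniq -c
-```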
-
-**📊 Document in `labs/submission7.md`:**
-
-1. **Summary Statistics**
- - Total PASS/WARN/FAIL/INFO counts
-
-2. **Analysis of Failures** (if any)
- - List failures and explain security impact
- - Propose specific remediation steps
-
----
-
-### Task 3 — Deployment Security Configuration Analysis (4 pts)
-
-**Objective:** Compare deployment configurations to understand security hardening trade-offs.
-
-#### 3.1: Deploy Three Security Profiles
-
-```bash
-# Profile 1: Default (baseline)
-docker run -d --name juice-default -p 3001:3000 \
- bkimminich/juice-shop:v19.0.0
-
-# Profile 2: Hardened (security restrictions)
-docker run -d --name juice-hardened -p 3002:3000 \
- --cap-drop=ALL \
- --security-opt=no-new-privileges \
- --memory=512m \
- --cpus=1.0 \
- bkimminich/juice-shop:v19.0.0
-
-# Profile 3: Production (maximum hardening)
-# Note: Docker applies its default seccomp profile automatically;
-# --security-opt seccomp=<profile.json> is only needed for a custom profile
-docker run -d --name juice-production -p 3003:3000 \
- --cap-drop=ALL \
- --cap-add=NET_BIND_SERVICE \
- --security-opt=no-new-privileges \
- --memory=512m \
- --memory-swap=512m \
- --cpus=1.0 \
- --pids-limit=100 \
- --restart=on-failure:3 \
- bkimminich/juice-shop:v19.0.0
-
-# Wait for startup
-sleep 15
-
-# Verify all containers are running
-docker ps -a --filter name=juice-
-```
-
-#### 3.2: Compare Configurations
-
-```bash
-# Test functionality
-echo "=== Functionality Test ===" | tee analysis/deployment-comparison.txt
-curl -s -o /dev/null -w "Default: HTTP %{http_code}\n" http://localhost:3001 | tee -a analysis/deployment-comparison.txt
-curl -s -o /dev/null -w "Hardened: HTTP %{http_code}\n" http://localhost:3002 | tee -a analysis/deployment-comparison.txt
-curl -s -o /dev/null -w "Production: HTTP %{http_code}\n" http://localhost:3003 | tee -a analysis/deployment-comparison.txt
-
-# Check resource usage
-echo "" | tee -a analysis/deployment-comparison.txt
-echo "=== Resource Usage ===" | tee -a analysis/deployment-comparison.txt
-docker stats --no-stream --format "table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}\t{{.MemPerc}}" \
- juice-default juice-hardened juice-production | tee -a analysis/deployment-comparison.txt
-
-# Inspect security settings
-echo "" | tee -a analysis/deployment-comparison.txt
-echo "=== Security Configurations ===" | tee -a analysis/deployment-comparison.txt
-for container in juice-default juice-hardened juice-production; do
- echo "" | tee -a analysis/deployment-comparison.txt
- echo "Container: $container" | tee -a analysis/deployment-comparison.txt
- docker inspect $container --format 'CapDrop: {{.HostConfig.CapDrop}}
-SecurityOpt: {{.HostConfig.SecurityOpt}}
-Memory: {{.HostConfig.Memory}}
-CPU: {{.HostConfig.CpuQuota}}
-PIDs: {{.HostConfig.PidsLimit}}
-Restart: {{.HostConfig.RestartPolicy.Name}}' | tee -a analysis/deployment-comparison.txt
-done
-```
-
-#### 3.3: Cleanup
-
-```bash
-docker stop juice-default juice-hardened juice-production
-docker rm juice-default juice-hardened juice-production
-```
-
-**📊 Document in `labs/submission7.md`:**
-
-#### 1. Configuration Comparison Table
-
-Create a table from `docker inspect` output comparing all three profiles:
-- Capabilities (dropped/added)
-- Security options
-- Resource limits (memory, CPU, PIDs)
-- Restart policy
-
-#### 2. Security Measure Analysis
-
-Research and explain EACH security flag:
-
-**a) `--cap-drop=ALL` and `--cap-add=NET_BIND_SERVICE`**
-- What are Linux capabilities? (Research this!)
-- What attack vector does dropping ALL capabilities prevent?
-- Why do we need to add back NET_BIND_SERVICE?
-- What's the security trade-off?
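-
-Capabilities are tracked as a per-process bitmap; a small sketch of checking one bit (the mask below is the commonly cited Docker default capability set, used here purely as an example value):
-
-```bash
-# NET_BIND_SERVICE is capability bit 10
-MASK=0x00000000a80425fb   # example CapEff value, as shown in /proc/<pid>/status
-if (( (MASK >> 10) & 1 )); then echo "NET_BIND_SERVICE present"; fi
-```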
-
-**b) `--security-opt=no-new-privileges`**
-- What does this flag do? (Look it up!)
-- What type of attack does it prevent?
-- Are there any downsides to enabling it?
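-
-One way to observe the flag from inside a container (illustrative; requires Docker):
-
-```bash
-docker run --rm --security-opt=no-new-privileges alpine:3.19 \
-  sh -c 'grep NoNewPrivs /proc/self/status'   # expect: NoNewPrivs: 1
-```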
-
-**c) `--memory=512m` and `--cpus=1.0`**
-- What happens if a container doesn't have resource limits?
-- What attack does memory limiting prevent?
-- What's the risk of setting limits too low?
-
-**d) `--pids-limit=100`**
-- What is a fork bomb?
-- How does PID limiting help?
-- How to determine the right limit?
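-
-A safe way to watch the PID limit bite (illustrative; requires Docker):
-
-```bash
-# With --pids-limit=5, the six background sleeps cannot all start;
-# fork failures should appear once the limit is reached
-docker run --rm --pids-limit=5 alpine:3.19 \
-  sh -c 'for i in 1 2 3 4 5 6; do sleep 5 & done; wait'
-```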
-
-**e) `--restart=on-failure:3`**
-- What does this policy do?
-- When is auto-restart beneficial? When is it risky?
-- Compare `on-failure` vs `always`
-
-#### 3. Critical Thinking Questions
-
-1. **Which profile for DEVELOPMENT? Why?**
-
-2. **Which profile for PRODUCTION? Why?**
-
-3. **What real-world problem do resource limits solve?**
-
-4. **If an attacker exploits Default vs Production, what actions are blocked in Production?**
-
-5. **What additional hardening would you add?**
-
-
----
-
-## Acceptance Criteria
-
-- ✅ Branch `feature/lab7` exists with commits for each task
-- ✅ File `labs/submission7.md` contains required analysis for Tasks 1-3
-- ✅ Vulnerability scanning completed with Docker Scout
-- ✅ CIS Docker Benchmark audit completed
-- ✅ Deployment security comparison completed
-- ✅ All scan outputs committed to `labs/lab7/`
-- ✅ PR from `feature/lab7` → **course repo main branch** is open
-- ✅ PR link submitted via Moodle before the deadline
-
----
-
-## How to Submit
-
-1. Create a branch for this lab and push it to your fork:
-
- ```bash
- git switch -c feature/lab7
- # create labs/submission7.md with your findings
- git add labs/submission7.md labs/lab7/
- git commit -m "docs: add lab7 submission - container security analysis"
- git push -u origin feature/lab7
- ```
-
-2. Open a PR from your fork's `feature/lab7` branch → **course repository's main branch**.
-
-3. In the PR description, include:
-
- ```text
- - [x] Task 1 done — Advanced Image Security & Configuration Analysis
- - [x] Task 2 done — Docker Security Benchmarking & Assessment
- - [x] Task 3 done — Secure Container Deployment Analysis
- ```
-
-4. **Copy the PR URL** and submit it via **Moodle before the deadline**.
-
----
-
-## Rubric (10 pts)
-
-| Criterion | Points |
-| ---------------------------------------------------------------- | -----: |
-| Task 1 — Image vulnerability & configuration analysis | **3** |
-| Task 2 — Docker host security benchmarking | **3** |
-| Task 3 — Deployment security configuration analysis | **4** |
-| **Total** | **10** |
-
----
-
-## Guidelines
-
-- Use clear Markdown headers to organize sections in `submission7.md`
-- Include evidence from tool outputs to support your analysis
-- Research security concepts thoroughly—don't copy-paste
-- Focus on understanding trade-offs between security and usability
-
-
-## Container Security Resources
-
-**Documentation:**
-- [Docker Security](https://docs.docker.com/engine/security/)
-- [CIS Docker Benchmark](https://www.cisecurity.org/benchmark/docker)
-- [Linux Capabilities](https://man7.org/linux/man-pages/man7/capabilities.7.html)
-
-
-
-
-## Expected Security Findings
-
-**Image Vulnerabilities:**
-- Outdated base packages with CVEs
-- Vulnerable dependencies
-- Missing security patches
-
-**CIS Benchmark:**
-- Insecure daemon configuration
-- Missing resource limits
-- Excessive privileges
-
-**Deployment Gaps:**
-- Running as root
-- Unnecessary capabilities
-- No resource limits
-
-
\ No newline at end of file
diff --git a/labs/lab8.md b/labs/lab8.md
deleted file mode 100644
index 1fda7f2c..00000000
--- a/labs/lab8.md
+++ /dev/null
@@ -1,318 +0,0 @@
-# Lab 8 — Software Supply Chain Security: Signing, Verification, and Attestations
-
-
-
-
-
-> Goal: Sign and verify container images, attach and verify attestations (SBOM/provenance), and optionally sign non-container artifacts — all locally, without code changes.
-> Deliverable: A PR from `feature/lab8` with `labs/submission8.md` containing signing/verification logs, attestation evidence, and a short analysis. Submit the PR link via Moodle.
-
----
-
-## Overview
-
-In this lab you will practice:
-- Image signing/verification with Cosign against a local registry
-- Attestations (SBOM and provenance) and payload inspection
-- Optional artifact (blob) signing for non-container assets
-
-Context: Cosign is a widely used OSS tool for image signing and attestations. If you produced SBOMs in Lab 4, reuse them here for attestations.
-
-> Target application: `bkimminich/juice-shop:v19.0.0`
-
----
-
-## Prerequisites
-
-- Docker (Docker Desktop or Engine) and internet access
-- `jq` for JSON processing
-- Cosign installed (binary)
- - See: https://docs.sigstore.dev/cosign/system_config/installation/
- - Verify: `cosign version`
-
-Install Cosign (quick start):
-
-```bash
-# Linux x86_64 (install to /usr/local/bin)
-curl -sSL -o cosign "https://github.com/sigstore/cosign/releases/latest/download/cosign-linux-amd64"
-chmod +x cosign && sudo mv cosign /usr/local/bin/
-
-# Verify
-cosign version
-```
-
-Docs:
-- Install guide: https://docs.sigstore.dev/cosign/system_config/installation/
-- Releases: https://github.com/sigstore/cosign/releases/latest
-
-Prepare working directories:
-```bash
-mkdir -p labs/lab8/{registry,signing,attest,analysis,artifacts}
-```
-
----
-
-## Tasks
-
-### Task 1 — Local Registry, Signing & Verification (4 pts)
-**Objective:** Push the image to a local registry, sign it with Cosign, and verify the signature, including a tamper demonstration.
-
-#### 1.1: Pull and push to local registry
-```bash
-# Pull target image
-docker pull bkimminich/juice-shop:v19.0.0
-
-# Start local registry on localhost:5000 (Distribution v3)
-docker run -d --restart=always -p 5000:5000 --name registry registry:3
-
-# Tag and push the image to the local registry
-docker tag bkimminich/juice-shop:v19.0.0 localhost:5000/juice-shop:v19.0.0
-docker push localhost:5000/juice-shop:v19.0.0
-
-# Recommended: use a digest reference (from the local registry) instead of a tag
-DIGEST=$(curl -sI \
- -H 'Accept: application/vnd.docker.distribution.manifest.v2+json' \
- http://localhost:5000/v2/juice-shop/manifests/v19.0.0 \
- | tr -d '\r' | awk -F': ' '/Docker-Content-Digest/ {print $2}')
-REF="localhost:5000/juice-shop@${DIGEST}"
-echo "Using digest ref: $REF" | tee labs/lab8/analysis/ref.txt
-```
-
-#### 1.2: Generate a Cosign key pair for signing
-```bash
-cd labs/lab8/signing
-cosign generate-key-pair
-cd -
-# This creates cosign.key (private key) and cosign.pub (public key)
-# You will be prompted to set a passphrase for the private key
-```
-
-#### 1.3: Sign and verify the image
-```bash
-# Sign the image using your private key
-cosign sign --yes \
- --allow-insecure-registry \
- --tlog-upload=false \
- --key labs/lab8/signing/cosign.key \
- "$REF"
-
-# Verify the signature using your public key and save the output
-cosign verify \
- --allow-insecure-registry \
- --insecure-ignore-tlog \
- --key labs/lab8/signing/cosign.pub \
- "$REF"
-```
-
-> Note for students:
-> - This verify flow is valid for a local, insecure registry: it verifies by digest reference and passes `--allow-insecure-registry`.
-> - Cosign prints a transparency-log warning because the image was signed with `--tlog-upload=false`; `--insecure-ignore-tlog` tells Cosign to skip Rekor transparency log verification in this lab context.
-> - For production: remove `--insecure-ignore-tlog`, sign without `--tlog-upload=false` (so the signature is recorded in Rekor), avoid insecure registries, and always verify/sign by digest (not by tag).
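-
-For contrast, a production-style verification might look like this (sketch only; `registry.example.com` and the digest are placeholders):
-
-```bash
-cosign verify \
-  --key cosign.pub \
-  registry.example.com/team/app@sha256:<digest>
-# no --insecure-* flags: the Rekor transparency log entry is verified too
-```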
-
-#### 1.4: Tamper demonstration
-```bash
-docker pull busybox:latest
-docker tag busybox:latest localhost:5000/juice-shop:v19.0.0
-docker push localhost:5000/juice-shop:v19.0.0
-
-# IMPORTANT: Re-resolve the tag to the NEW digest from the local registry
-DIGEST_AFTER=$(curl -sI \
- -H 'Accept: application/vnd.docker.distribution.manifest.v2+json' \
- http://localhost:5000/v2/juice-shop/manifests/v19.0.0 \
- | tr -d '\r' | awk -F': ' '/Docker-Content-Digest/ {print $2}')
-REF_AFTER="localhost:5000/juice-shop@${DIGEST_AFTER}"
-echo "After tamper digest ref: $REF_AFTER" | tee labs/lab8/analysis/ref-after-tamper.txt
-
-# Verify should now FAIL for the new digest (not signed with your key)
-cosign verify \
- --allow-insecure-registry \
- --insecure-ignore-tlog \
- --key labs/lab8/signing/cosign.pub \
- "$REF_AFTER"
-
-# Sanity check: verifying the ORIGINAL digest still succeeds (supply chain best practice)
-cosign verify \
- --allow-insecure-registry \
- --insecure-ignore-tlog \
- --key labs/lab8/signing/cosign.pub \
- "$REF"
-```
-
-In `labs/submission8.md`, explain how signing protects against tag tampering and what “subject digest” means.
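-
-A "subject digest" is simply the SHA-256 of the manifest bytes the signature covers; a minimal illustration:
-
-```bash
-# Any change to the manifest bytes yields a different digest, which is
-# why a signature pinned to a digest survives tag tampering
-printf '{"schemaVersion":2}' | sha256sum | awk '{print "sha256:" $1}'
-```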
-
----
-
-### Task 2 — Attestations: SBOM (reuse) & Provenance (4 pts)
-
-**Objective:** Attach and verify attestations (SBOM and simple provenance) to the image and inspect the attestation envelope.
-
-If you no longer have the Lab 4 SBOM, regenerate it with Syft first:
-
-```bash
-mkdir -p labs/lab4/syft
-docker run --rm -v /var/run/docker.sock:/var/run/docker.sock \
- -v "$(pwd)":/tmp anchore/syft:latest \
- "$REF" -o syft-json=/tmp/labs/lab4/syft/juice-shop-syft-native.json
-```
-
-Generate CycloneDX SBOM for attestation:
-
-```bash
-# Convert the Syft-native SBOM from Lab 4 → CycloneDX JSON
-docker run --rm \
- -v "$(pwd)/labs/lab4/syft":/in:ro \
- -v "$(pwd)/labs/lab8/attest":/out \
- anchore/syft:latest \
- convert /in/juice-shop-syft-native.json -o cyclonedx-json=/out/juice-shop.cdx.json
-```
-
-#### 2.1: SBOM as an attestation (CycloneDX)
-```bash
-# Example using CycloneDX SBOM created above
-cosign attest --yes \
- --allow-insecure-registry \
- --tlog-upload=false \
- --key labs/lab8/signing/cosign.key \
- --predicate labs/lab8/attest/juice-shop.cdx.json \
- --type cyclonedx \
- "$REF"
-
-# Verify the SBOM attestation
-cosign verify-attestation \
- --allow-insecure-registry \
- --insecure-ignore-tlog \
- --key labs/lab8/signing/cosign.pub \
- --type cyclonedx \
- "$REF" \
- | tee labs/lab8/attest/verify-sbom-attestation.txt
-```
-
-#### 2.2: Simple provenance attestation
-
-```bash
-# Create a minimal SLSA v0.2-style provenance predicate (matching cosign's
-# `slsaprovenance` type) with a proper RFC3339 timestamp
-BUILD_TS=$(date -u +%Y-%m-%dT%H:%M:%SZ)
-cat > labs/lab8/attest/provenance.json << EOF
-{
- "_type": "https://slsa.dev/provenance/v0.2",
- "buildType": "manual-local-demo",
- "builder": {"id": "student@local"},
- "invocation": {"parameters": {"image": "${REF}"}},
- "metadata": {"buildStartedOn": "${BUILD_TS}", "completeness": {"parameters": true}}
-}
-EOF
-
-cosign attest --yes \
- --allow-insecure-registry \
- --tlog-upload=false \
- --key labs/lab8/signing/cosign.key \
- --predicate labs/lab8/attest/provenance.json \
- --type slsaprovenance \
- "$REF"
-
-# Verify the provenance attestation
-cosign verify-attestation \
- --allow-insecure-registry \
- --insecure-ignore-tlog \
- --key labs/lab8/signing/cosign.pub \
- --type slsaprovenance \
- "$REF" | tee labs/lab8/attest/verify-provenance.txt
-```
-
-In `labs/submission8.md`, document:
- - How attestations differ from signatures
- - What information the SBOM attestation contains
- - What provenance attestations provide for supply chain security
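-
-To inspect the attestation payload with `jq` (the envelope is DSSE: JSON carrying a base64-encoded payload), a sketch, with flags as in the verify commands above:
-
-```bash
-cosign download attestation --allow-insecure-registry "$REF" \
-  | jq -r '.payload' | base64 -d | jq -r '.predicateType'
-# predicate types such as https://cyclonedx.org/bom should print
-```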
----
-
-### Task 3 — Artifact (Blob/Tarball) Signing (2 pts)
-
-**Objective:** Sign a non-container artifact (e.g., a tarball) and verify the signature.
-
-```bash
-echo "sample content $(date -u)" > labs/lab8/artifacts/sample.txt
-tar -czf labs/lab8/artifacts/sample.tar.gz -C labs/lab8/artifacts sample.txt
-
-# Cosign sign-blob with a bundle (recommended: keeps signature and
-# verification material in a single file)
-cosign sign-blob \
- --yes \
- --tlog-upload=false \
- --key labs/lab8/signing/cosign.key \
- --bundle labs/lab8/artifacts/sample.tar.gz.bundle \
- labs/lab8/artifacts/sample.tar.gz
-
-cosign verify-blob \
- --key labs/lab8/signing/cosign.pub \
- --bundle labs/lab8/artifacts/sample.tar.gz.bundle \
- labs/lab8/artifacts/sample.tar.gz | tee labs/lab8/artifacts/verify-blob.txt
-```
-
-In `labs/submission8.md`, document:
- - Use cases for signing non-container artifacts (e.g., release binaries, configuration files)
- - How blob signing differs from container image signing
-
----
-
-## Acceptance Criteria
-
-- ✅ `labs/submission8.md` includes analysis and evidence for Tasks 1–3
-- ✅ Image pushed to local registry; Cosign signature created and verified
-- ✅ Tamper scenario demonstrated and explained
-- ✅ At least one attestation attached and verified (SBOM or provenance); payload inspected with `jq`
-- ✅ Artifact signing performed and verified
-- ✅ All outputs saved under `labs/lab8/` and committed
-
----
-
-## How to Submit
-
-1. Create a branch for this lab and push it to your fork:
-
-```bash
-git switch -c feature/lab8
-# create labs/submission8.md with your findings
-git add labs/lab8/ labs/submission8.md
-git commit -m "docs: add lab8 submission — signing + attestations"
-git push -u origin feature/lab8
-```
-
-2. Open a PR from your fork’s `feature/lab8` → course repo’s `main`.
-3. Include this checklist in the PR description:
-
-```text
-- [x] Task 1 — Local registry, signing, verification (+ tamper demo)
-- [x] Task 2 — Attestations (SBOM or provenance) + payload inspection
-- [x] Task 3 — Artifact signing (blob/tarball)
-```
-
-4. Submit the PR URL via Moodle before the deadline.
-
----
-
-## Rubric (10 pts)
-
-| Criterion | Points |
-| ------------------------------------------------------------- | -----: |
-| Task 1 — Local Registry, Signing & Verification | 4.0 |
-| Task 2 — SBOM/Provenance Attestations (verify + inspect) | 4.0 |
-| Task 3 — Artifact (Blob/Tarball) Signing | 2.0 |
-| Total | 10.0 |
-
----
-
-## Guidelines
-
-- Use the Cosign binary (most widely tested in 2025 for local flows)
-- Keep keys out of version control; commit only logs and reports
-- Use strong passphrases; rotate and store securely
-- Reuse your Lab 4 SBOM for attestation if possible; otherwise create a minimal predicate JSON
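If you don't have a Lab 4 SBOM to reuse, a minimal custom predicate is just a small JSON file; the field names below are purely illustrative, not a required schema:

```json
{
  "builder": "local-lab",
  "artifact": "labs/lab8/artifacts/sample.tar.gz",
  "reviewedBy": "student",
  "timestamp": "2025-01-01T00:00:00Z"
}
```

Cosign can attach arbitrary JSON like this as an attestation payload using `--type custom` together with `--predicate`.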
-
-
-## References
-
-- Cosign install: https://docs.sigstore.dev/cosign/system_config/installation/
-- in-toto/attestations: https://github.com/in-toto/attestation
-- CycloneDX: https://cyclonedx.org/
-- SPDX: https://spdx.dev/
-
-
-
-
diff --git a/labs/lab9.md b/labs/lab9.md
deleted file mode 100644
index 91e20bef..00000000
--- a/labs/lab9.md
+++ /dev/null
@@ -1,239 +0,0 @@
-# Lab 9 — Monitoring & Compliance: Falco Runtime Detection + Conftest Policies
-
-
-
-
-
-> Goal: Detect suspicious container behavior with Falco and enforce deployment hardening via policy-as-code using Conftest (Rego) — all runnable locally.
-> Deliverable: A PR from `feature/lab9` with `labs/submission9.md` containing Falco alert evidence, custom rule/tuning notes, Conftest test results, and analysis of provided manifests/policies. Submit the PR link via Moodle.
-
----
-
-## Overview
-
-In this lab you will practice:
-- Runtime threat detection for containers with Falco (eBPF)
-- Writing/customizing Falco rules and tuning noise/false positives
-- Policy-as-code with Conftest (OPA/Rego) against Kubernetes manifests
-- Analyzing how security policies enforce deployment hardening best practices
-
-> Runtime target for Task 1: BusyBox helper container (`alpine:3.19`).
-
----
-
-## Prerequisites
-
-- Docker (or Docker Desktop)
-- `jq`
-- `kubectl` or a local K8s cluster (kind/minikube) is NOT required: Conftest runs offline against YAML
-
-Prepare working directories:
-```bash
-mkdir -p labs/lab9/{falco/{rules,logs},analysis}
-```
-
----
-
-## Tasks
-
-### Task 1 — Runtime Security Detection with Falco (6 pts)
-**Objective:** Run Falco with modern eBPF, trigger alerts from a shell-enabled BusyBox container, and add one custom rule with basic tuning.
-
-#### 1.1: Start a shell-enabled helper container
-```bash
-# Use Alpine (BusyBox) to trigger events — no app needed
-docker run -d --name lab9-helper alpine:3.19 sleep 1d
-```
-
-#### 1.2: Run Falco (containerized) with modern eBPF
-```bash
-# Start Falco container (JSON output to stdout)
-docker run -d --name falco \
- --privileged \
- -v /proc:/host/proc:ro \
- -v /boot:/host/boot:ro \
- -v /lib/modules:/host/lib/modules:ro \
- -v /usr:/host/usr:ro \
- -v /var/run/docker.sock:/host/var/run/docker.sock \
- -v "$(pwd)/labs/lab9/falco/rules":/etc/falco/rules.d:ro \
- falcosecurity/falco:latest \
- falco -U \
- -o json_output=true \
- -o time_format_iso_8601=true
-
-# Follow Falco logs
-docker logs -f falco | tee labs/lab9/falco/logs/falco.log &
-```
-
-Note: The official Falco image defaults to the modern eBPF engine (engine.kind=modern_ebpf). No extra flag is needed beyond running with the required privileges and mounts.
-
-#### 1.3: Trigger two baseline alerts
-```bash
-# A) Terminal shell inside container (expected rule: Terminal shell in container)
-docker exec -it lab9-helper /bin/sh -lc 'echo hello-from-shell'
-
-# B) Container drift: write under a binary directory
-# Writes to /usr/local/bin should trigger Falco's drift detection
-docker exec --user 0 lab9-helper /bin/sh -lc 'echo boom > /usr/local/bin/drift.txt'
-```
-
-#### 1.4: Add one custom Falco rule and validate
-Create `labs/lab9/falco/rules/custom-rules.yaml`:
-```yaml
-# Detect new writable file under /usr/local/bin inside any container
-- rule: Write Binary Under UsrLocalBin
- desc: Detects writes under /usr/local/bin inside any container
- condition: evt.type in (open, openat, openat2, creat) and
- evt.is_open_write=true and
- fd.name startswith /usr/local/bin/ and
- container.id != host
- output: >
- Falco Custom: File write in /usr/local/bin (container=%container.name user=%user.name file=%fd.name flags=%evt.arg.flags)
- priority: WARNING
- tags: [container, compliance, drift]
-```
-
-Falco auto-reloads rules in `/etc/falco/rules.d`. If you don't see your custom alert after a minute, force a reload:
-```bash
-docker kill --signal=SIGHUP falco && sleep 2
-```
-
-Validate the custom rule by triggering another write:
-```bash
-# This should trigger BOTH the built-in drift rule AND your custom rule
-docker exec --user 0 lab9-helper /bin/sh -lc 'echo custom-test > /usr/local/bin/custom-rule.txt'
-```
-
-#### 1.5: Generate Falco test events
-```bash
-# Falco event generator creates a short burst of detectable actions
-docker run --rm --name eventgen \
- --privileged \
- -v /proc:/host/proc:ro -v /dev:/host/dev \
- falcosecurity/event-generator:latest run syscall
-```
-
-**What this does:** executes a curated set of syscalls (e.g., fileless execution, sensitive file reads) that should surface as Falco alerts, confirming your Falco setup is working correctly.
-
-
-In `labs/submission9.md`, document:
-- Baseline alerts observed from `falco.log`
-- Your custom rule’s purpose and when it should/shouldn’t fire
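Because `json_output=true` makes each alert a single JSON line, plain `grep` on `falco.log` is enough to extract the evidence; the sample line below is hypothetical, shaped like real Falco output:

```shell
# Hypothetical alert line in the JSON shape Falco emits with -o json_output=true
sample='{"priority":"Warning","rule":"Write Binary Under UsrLocalBin","output":"Falco Custom: File write in /usr/local/bin"}'

# Keep only alerts from the custom rule for the submission
echo "$sample" | grep '"rule":"Write Binary Under UsrLocalBin"'
```

Against the real log, the same filter is `grep '"rule":"Write Binary Under UsrLocalBin"' labs/lab9/falco/logs/falco.log`.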
-
----
-
-### Task 2 — Policy-as-Code with Conftest (Rego) (4 pts)
-**Objective:** Run provided security policies against K8s manifests, analyze policy violations, and understand how hardening satisfies compliance requirements.
-
-#### 2.1: Review provided Kubernetes manifests
-Open and review the provided manifests:
-- `labs/lab9/manifests/k8s/juice-unhardened.yaml` (baseline — do NOT edit)
-- `labs/lab9/manifests/k8s/juice-hardened.yaml` (compliant version)
-
-Compare both manifests to understand what hardening changes were applied.
-
-#### 2.2: Review provided Conftest Rego policies
-Examine the provided security policies:
-- `labs/lab9/policies/k8s-security.rego` — enforces Kubernetes security best practices
-- `labs/lab9/policies/compose-security.rego` — enforces Docker Compose security patterns
-
-These policies check for common misconfigurations like running as root, missing resource limits, privileged containers, etc.
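For orientation before running them, here is the shape these rules take; this is the `:latest`-tag check from the provided `k8s-security.rego`:

```rego
package k8s.security

# Deny Deployments whose containers use the mutable :latest tag
deny contains msg if {
    input.kind == "Deployment"
    c := input.spec.template.spec.containers[_]
    endswith(c.image, ":latest")
    msg := sprintf("container %q uses disallowed :latest tag", [c.name])
}
```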
-
-#### 2.3: Run Conftest against both manifests
-```bash
-# Test unhardened manifest (expect policy violations)
-docker run --rm -v "$(pwd)/labs/lab9":/project \
- openpolicyagent/conftest:latest \
- test /project/manifests/k8s/juice-unhardened.yaml -p /project/policies --all-namespaces | tee labs/lab9/analysis/conftest-unhardened.txt
-
-# Test hardened manifest (should pass or only warnings)
-docker run --rm -v "$(pwd)/labs/lab9":/project \
- openpolicyagent/conftest:latest \
- test /project/manifests/k8s/juice-hardened.yaml -p /project/policies --all-namespaces | tee labs/lab9/analysis/conftest-hardened.txt
-
-# Test Docker Compose manifest
-docker run --rm -v "$(pwd)/labs/lab9":/project \
- openpolicyagent/conftest:latest \
- test /project/manifests/compose/juice-compose.yml -p /project/policies --all-namespaces | tee labs/lab9/analysis/conftest-compose.txt
-```
-
-In `labs/submission9.md`, document:
-- The policy violations from the unhardened manifest and why each matters for security
-- The specific hardening changes in the hardened manifest that satisfy policies
-- Analysis of the Docker Compose manifest results
-
----
-
-## Acceptance Criteria
-
-- ✅ Branch `feature/lab9` contains Falco setup, logs, and a custom rule file
-- ✅ At least two Falco alerts captured and explained (baseline + custom)
-- ✅ Conftest policies reviewed and tested against manifests
-- ✅ Unhardened K8s manifest fails; hardened manifest passes (warnings OK)
-- ✅ `labs/submission9.md` includes evidence and analysis for both tasks
-
----
-
-## How to Submit
-
-1. Create a branch and push it to your fork:
-```bash
-git switch -c feature/lab9
-# create labs/submission9.md with your findings
-git add labs/lab9/ labs/submission9.md
-git commit -m "docs: add lab9 — falco runtime + conftest policies"
-git push -u origin feature/lab9
-```
-2. Open a PR from your fork’s `feature/lab9` → course repo’s `main`.
-3. In the PR description include:
-```text
-- [x] Task 1 — Falco runtime detection (alerts + custom rule)
-- [x] Task 2 — Conftest policies (fail→pass hardening)
-```
-4. Submit the PR URL via Moodle before the deadline.
-
----
-
-## Rubric (10 pts)
-
-| Criterion | Points |
-| --------------------------------------------------------------- | -----: |
-| Task 1 — Falco runtime detection + custom rule | 6.0 |
-| Task 2 — Conftest policies + hardened manifests | 4.0 |
-| Total | 10.0 |
-
----
-
-## Guidelines
-
-- Keep Falco running while you trigger events; copy only relevant alert lines into your submission
-- Place custom Falco rules under `labs/lab9/falco/rules/` and commit them
-- Conftest “deny” enforces hard requirements; “warn” provides guidance without failing
-- Aim for minimal, practical policies that reflect production hardening baselines
-
-
-## References
-
-- Falco: https://falco.org/docs/
-- Falco container: https://github.com/falcosecurity/falco
-- Event Generator: https://github.com/falcosecurity/event-generator
-- Conftest: https://github.com/open-policy-agent/conftest
-- OPA/Rego: https://www.openpolicyagent.org/docs/
-
-
-
-
-## Troubleshooting
-
-- Falco engine: If Falco logs show that modern eBPF is unsupported, switch engine with: `-o engine.kind=ebpf` in the `docker run` command.
-- Permissions: Ensure Docker is running and you can run privileged containers. If `--privileged` or mounts fail, try a Linux host or WSL2.
-- Container context: For drift tests, write from inside the container (not `docker cp`) so Falco reports a non-host `container.id`.
-- Conftest: If pulling the image fails, try specifying a version tag, e.g., `openpolicyagent/conftest:v0.63.0`.
-
-
-
-### Cleanup
-```bash
-docker rm -f falco lab9-helper 2>/dev/null || true
-```
diff --git a/labs/lab9/manifests/compose/juice-compose.yml b/labs/lab9/manifests/compose/juice-compose.yml
deleted file mode 100644
index acfcb64b..00000000
--- a/labs/lab9/manifests/compose/juice-compose.yml
+++ /dev/null
@@ -1,11 +0,0 @@
-services:
- juice:
- image: bkimminich/juice-shop:v19.0.0
- ports: ["3006:3000"]
- user: "10001:10001"
- read_only: true
- tmpfs: ["/tmp"]
- security_opt:
- - no-new-privileges:true
- cap_drop: ["ALL"]
-
diff --git a/labs/lab9/manifests/k8s/juice-hardened.yaml b/labs/lab9/manifests/k8s/juice-hardened.yaml
deleted file mode 100644
index 10521196..00000000
--- a/labs/lab9/manifests/k8s/juice-hardened.yaml
+++ /dev/null
@@ -1,45 +0,0 @@
-apiVersion: apps/v1
-kind: Deployment
-metadata:
- name: juice-hardened
-spec:
- replicas: 1
- selector:
- matchLabels: { app: juice }
- template:
- metadata:
- labels: { app: juice }
- spec:
- containers:
- - name: juice
- image: bkimminich/juice-shop:v19.0.0
- securityContext:
- runAsNonRoot: true
- allowPrivilegeEscalation: false
- readOnlyRootFilesystem: true
- capabilities:
- drop: ["ALL"]
- resources:
- requests: { cpu: "100m", memory: "256Mi" }
- limits: { cpu: "500m", memory: "512Mi" }
- ports:
- - containerPort: 3000
- readinessProbe:
- httpGet: { path: /, port: 3000 }
- initialDelaySeconds: 5
- periodSeconds: 10
- livenessProbe:
- httpGet: { path: /, port: 3000 }
- initialDelaySeconds: 10
- periodSeconds: 20
----
-apiVersion: v1
-kind: Service
-metadata:
- name: juice-hardened
-spec:
- selector: { app: juice }
- ports:
- - port: 80
- targetPort: 3000
-
diff --git a/labs/lab9/manifests/k8s/juice-unhardened.yaml b/labs/lab9/manifests/k8s/juice-unhardened.yaml
deleted file mode 100644
index 94e96fc3..00000000
--- a/labs/lab9/manifests/k8s/juice-unhardened.yaml
+++ /dev/null
@@ -1,28 +0,0 @@
-apiVersion: apps/v1
-kind: Deployment
-metadata:
- name: juice-unhardened
-spec:
- replicas: 1
- selector:
- matchLabels: { app: juice }
- template:
- metadata:
- labels: { app: juice }
- spec:
- containers:
- - name: juice
- image: bkimminich/juice-shop:latest
- ports:
- - containerPort: 3000
----
-apiVersion: v1
-kind: Service
-metadata:
- name: juice-unhardened
-spec:
- selector: { app: juice }
- ports:
- - port: 80
- targetPort: 3000
-
diff --git a/labs/lab9/policies/compose-security.rego b/labs/lab9/policies/compose-security.rego
deleted file mode 100644
index eebca76b..00000000
--- a/labs/lab9/policies/compose-security.rego
+++ /dev/null
@@ -1,33 +0,0 @@
-package compose.security
-
-containers := input.services
-
-# Helper: true if array arr contains value v
-has_value(arr, v) if {
- some i
- arr[i] == v
-}
-
-deny contains msg if {
- svc := containers[_]
- not svc.user
- msg := "services must set an explicit non-root user"
-}
-
-deny contains msg if {
- svc := containers[_]
- not svc.read_only
- msg := "services must set read_only: true"
-}
-
-deny contains msg if {
- svc := containers[_]
- not has_value(svc.cap_drop, "ALL")
- msg := "services must drop ALL capabilities"
-}
-
-warn contains msg if {
- svc := containers[_]
- not has_value(svc.security_opt, "no-new-privileges:true")
- msg := "services should enable no-new-privileges"
-}
diff --git a/labs/lab9/policies/k8s-security.rego b/labs/lab9/policies/k8s-security.rego
deleted file mode 100644
index 041ff3bd..00000000
--- a/labs/lab9/policies/k8s-security.rego
+++ /dev/null
@@ -1,88 +0,0 @@
-package k8s.security
-
-# Helper: true if array arr contains value v
-has_value(arr, v) if {
- some i
- arr[i] == v
-}
-
-# No :latest tags
-deny contains msg if {
- input.kind == "Deployment"
- c := input.spec.template.spec.containers[_]
- endswith(c.image, ":latest")
- msg := sprintf("container %q uses disallowed :latest tag", [c.name])
-}
-
-# Require essential securityContext settings
-deny contains msg if {
- input.kind == "Deployment"
- c := input.spec.template.spec.containers[_]
- not c.securityContext.runAsNonRoot
- msg := sprintf("container %q must set runAsNonRoot: true", [c.name])
-}
-
-deny contains msg if {
- input.kind == "Deployment"
- c := input.spec.template.spec.containers[_]
- not c.securityContext.allowPrivilegeEscalation == false
- msg := sprintf("container %q must set allowPrivilegeEscalation: false", [c.name])
-}
-
-deny contains msg if {
- input.kind == "Deployment"
- c := input.spec.template.spec.containers[_]
- not c.securityContext.readOnlyRootFilesystem == true
- msg := sprintf("container %q must set readOnlyRootFilesystem: true", [c.name])
-}
-
-deny contains msg if {
- input.kind == "Deployment"
- c := input.spec.template.spec.containers[_]
- not has_value(c.securityContext.capabilities.drop, "ALL")
- msg := sprintf("container %q must drop ALL capabilities", [c.name])
-}
-
-# Require CPU/Memory requests and limits
-deny contains msg if {
- input.kind == "Deployment"
- c := input.spec.template.spec.containers[_]
- not c.resources.requests.cpu
- msg := sprintf("container %q missing resources.requests.cpu", [c.name])
-}
-
-deny contains msg if {
- input.kind == "Deployment"
- c := input.spec.template.spec.containers[_]
- not c.resources.requests.memory
- msg := sprintf("container %q missing resources.requests.memory", [c.name])
-}
-
-deny contains msg if {
- input.kind == "Deployment"
- c := input.spec.template.spec.containers[_]
- not c.resources.limits.cpu
- msg := sprintf("container %q missing resources.limits.cpu", [c.name])
-}
-
-deny contains msg if {
- input.kind == "Deployment"
- c := input.spec.template.spec.containers[_]
- not c.resources.limits.memory
- msg := sprintf("container %q missing resources.limits.memory", [c.name])
-}
-
-# Recommend probes
-warn contains msg if {
- input.kind == "Deployment"
- c := input.spec.template.spec.containers[_]
- not c.readinessProbe
- msg := sprintf("container %q should define readinessProbe", [c.name])
-}
-
-warn contains msg if {
- input.kind == "Deployment"
- c := input.spec.template.spec.containers[_]
- not c.livenessProbe
- msg := sprintf("container %q should define livenessProbe", [c.name])
-}
diff --git a/labs/submission1.md b/labs/submission1.md
new file mode 100644
index 00000000..3a13d0bf
--- /dev/null
+++ b/labs/submission1.md
@@ -0,0 +1,72 @@
+# Lab 1
+
+## Task 1
+
+### Triage Report — OWASP Juice Shop
+
+#### Scope & Asset
+- Asset: OWASP Juice Shop (local lab instance)
+- Image: bkimminich/juice-shop:v19.0.0
+- Release link/date: [GitHub Releases](https://github.com/juice-shop/juice-shop/releases/tag/v19.0.0) — 2023-11-22
+- Image digest:
+
+#### Environment
+- Host OS: Windows 10 Pro
+- Docker: 28.3.2
+
+#### Deployment Details
+- Run command used: `docker run -d --name juice-shop -p 127.0.0.1:3000:3000 bkimminich/juice-shop:v19.0.0`
+- Access URL: http://127.0.0.1:3000
+- Network exposure: 127.0.0.1 only [x] Yes [ ] No (explain if No)
+
+#### Health Check
+- Page load: 
+- API check:
+ ```
+ [
+ {
+ "id": 1,
+ "name": "Apple Juice",
+ "description": "The all-time classic.",
+ "price": 1.99,
+ "image": "apple_juice.jpg"
+ },
+ {
+ "id": 2,
+ ...
+ }
+ ]
+ ```
+
+#### Surface Snapshot (Triage)
+- Login/Registration visible: [x] Yes [ ] No — notes: in the upper right corner
+- Product listing/search present: [x] Yes [ ] No
+- Admin or account area discoverable: [ ] Yes [x] No
+- Client-side errors in console: [ ] Yes [x] No
+- Security headers (quick look — optional): `curl -I http://127.0.0.1:3000` → CSP/HSTS present? notes: not present
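To make the header check repeatable, save the `curl -I` response and grep it; the headers below are a hypothetical sample consistent with the observation that CSP/HSTS were absent:

```shell
# Hypothetical response headers, as if saved from: curl -I http://127.0.0.1:3000
headers='HTTP/1.1 200 OK
X-Content-Type-Options: nosniff
X-Frame-Options: SAMEORIGIN'

# Count CSP/HSTS headers; 0 confirms they are not set
echo "$headers" | grep -ciE '^(content-security-policy|strict-transport-security):' || true
```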
+
+#### Risks Observed (Top 3)
+1) No brute-force protection: the login form has no CAPTCHA or delay on multiple attempts.
+2) Lack of HTTP security headers: CSP and HSTS are not configured, which increases the risk of XSS and MITM attacks.
+3) Unrestricted user registration: there is no email verification, which enables spam and fake accounts.
+
+## Task 2
+
+### Process
+1. Created the file `.github/pull_request_template.md` in the `.github/` directory on the `main` branch
+2. Added sections: Goal, Changes, Testing, Artifacts & Screenshots
+3. Added a three-point checklist
+
+### Evidence
+When opening a PR on GitHub, the description field is automatically pre-filled with the template's sections.
+
+### Workflow Improvement
+PR templates standardize the review process, reduce the number of forgotten sections, and speed up verification.
+
+## Task 3
+
+### Why Star Repositories?
+Stars on GitHub are a way to bookmark interesting projects, show support for their maintainers, and save repositories for later use. Starring also makes projects more visible in the community.
+
+### Why Follow Developers?
+Following developers lets you see their activity, learn from their code, find inspiration, and grow your professional network.
\ No newline at end of file
diff --git a/labs/submission2.md b/labs/submission2.md
new file mode 100644
index 00000000..e8f5ccc5
--- /dev/null
+++ b/labs/submission2.md
@@ -0,0 +1,67 @@
+# Lab 2
+
+## Task 1
+
+### Screenshots of generated diagrams
+
+
+
+
+
+
+### Top 5 Risks table
+
+| Severity | Category | Asset | Likelihood | Impact | Score |
+|----------|---------------------------|--------------|------------|--------|-------|
+| elevated | unencrypted-communication | user-browser | likely | high | 433 |
+| elevated | missing-authentication | juice-shop | likely | medium | 432 |
+| elevated | cross-site-scripting | juice-shop | likely | medium | 432 |
+| elevated | unencrypted-communication | reverse-proxy| likely | medium | 432 |
+| medium | cross-site-request-forgery | juice-shop | very-likely| low | 241 |
+
+The score in the table is formed by concatenating the severity, likelihood, and impact ranks:
+- Severity: critical (5) > elevated (4) > high (3) > medium (2) > low (1)
+- Likelihood: very-likely (4) > likely (3) > possible (2) > unlikely (1)
+- Impact: high (3) > medium (2) > low (1)
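Worked example for the top row (elevated = 4, likely = 3, high = 3): the three ranks are concatenated into the score.

```shell
severity=4    # elevated
likelihood=3  # likely
impact=3      # high
score="${severity}${likelihood}${impact}"
echo "$score"   # prints 433, matching the first table row
```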
+
+### Analysis of critical security concerns identified
+
+1. Unencrypted-communication: Data transfer between the user's browser and the application takes place over HTTP without encryption
+2. Missing-authentication: Juice Shop has insufficient authentication for some functions
+3. Cross-site-scripting: Application is vulnerable to XSS attacks
+4. Cross-site-request-forgery: Juice Shop is vulnerable to CSRF attacks
+
+## Task 2
+
+### Screenshots of generated diagrams
+
+
+
+
+
+### Risk Category Delta Table
+| Category | Baseline | Secure | Δ |
+|------------------------------------|---------:|-------:|----:|
+| container-baseimage-backdooring | 1 | 1 | 0 |
+| cross-site-request-forgery | 2 | 2 | 0 |
+| cross-site-scripting | 1 | 1 | 0 |
+| missing-authentication | 1 | 1 | 0 |
+| missing-authentication-second-factor | 2 | 2 | 0 |
+| missing-build-infrastructure | 1 | 1 | 0 |
+| missing-hardening | 2 | 2 | 0 |
+| missing-identity-store | 1 | 1 | 0 |
+| missing-vault | 1 | 1 | 0 |
+| missing-waf | 1 | 1 | 0 |
+| server-side-request-forgery | 2 | 2 | 0 |
+| unencrypted-asset | 2 | 1 | -1 |
+| unencrypted-communication | 2 | 0 | -2 |
+| unnecessary-data-transfer | 2 | 2 | 0 |
+| unnecessary-technical-asset | 2 | 2 | 0 |
+
+### Delta Run
+
+- **Change Made**: Implemented HTTPS encryption for communication links between User Browser, Reverse Proxy, and Juice Shop Application.
+
+- **Observed Result**: Reduced unencrypted-communication risks from 2 to 0 (Δ = -2), while maintaining other risk categories at baseline levels.
+
+- **Why**: HTTPS encryption protects sensitive authentication data and improves the confidentiality of user credentials and session tokens.
\ No newline at end of file