MyApp is a full-stack task management application built with modern DevOps practices.
Frontend       : React 18 + Vite + Nginx
Backend        : Node.js + Express
Database       : PostgreSQL 16
Containers     : Docker
Orchestration  : Kubernetes (K8s)
Config Mgmt    : Kustomize
CI/CD          : GitHub Actions
Networking     : Nginx Ingress + cert-manager
Secrets/Config : Kubernetes Secrets & ConfigMaps
        Internet User
              ↓
  Ingress (nginx-ingress-controller)
   ├─ TLS/HTTPS (cert-manager + Let's Encrypt)
   ├─ Rate limiting
   └─ Route based on path
              ↓
        ┌─────┴──────┐
        ↓            ↓
    Frontend      Backend
    (React +     (Node.js +
     Nginx)       Express)
                     │ SQL
                     ↓
                 PostgreSQL
                 (StatefulSet)
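How the Ingress ties routing, rate limiting, and TLS together can be sketched as a manifest. The host, secret, and issuer names below are illustrative assumptions, not values taken from this repo:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-ingress
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod      # assumed issuer name; cert-manager provisions the cert
    nginx.ingress.kubernetes.io/limit-rps: "10"           # rate limiting (ingress-nginx annotation)
spec:
  ingressClassName: nginx
  tls:
    - hosts: [myapp.example.com]
      secretName: myapp-tls                               # assumed TLS secret name
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: backend
                port:
                  number: 8080
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend
                port:
                  number: 80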
Frontend:
- Type: Deployment
- Replicas: 2-3 (dev), 3-10 (prod with HPA)
- Image: React app built with Vite, served by Nginx
- Port: 80 (internal)
- Resources:
- Dev: 128Mi memory, 100m CPU
- Prod: 512Mi memory, 500m CPU
- Health Check: Nginx defaults (serving static content)
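Based on the specs above, frontend-deployment.yaml plausibly looks like the following sketch (labels and the image name are assumptions):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
        - name: frontend
          image: frontend:latest       # tag overridden per environment
          ports:
            - containerPort: 80        # Nginx serves the static build here
          resources:
            requests:
              cpu: 100m
              memory: 128Mi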
Backend:
- Type: Deployment
- Replicas: 2 (dev), 3-10 (prod with HPA)
- Image: Node.js 20 Alpine
- Port: 8080
- Health Check: GET /health endpoint
- Resources:
- Dev: 128Mi memory, 100m CPU
- Prod: 1Gi memory, 1000m CPU
PostgreSQL:
- Type: StatefulSet
- Replicas: 1 (single database)
- Image: PostgreSQL 16 Alpine
- Port: 5432
- Storage: Persistent Volume Claim (10Gi)
- Security:
- Runs as non-root user (postgres:999)
- Credentials from Secrets
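A plausible sketch of postgres-statefulset.yaml matching these specs (label names and Secret keys follow the examples later in this document; treat the details as assumptions):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  serviceName: postgres
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      securityContext:
        runAsNonRoot: true
        runAsUser: 999                 # postgres user in the Alpine image
        fsGroup: 999
      containers:
        - name: postgres
          image: postgres:16-alpine
          ports:
            - containerPort: 5432
          env:
            - name: POSTGRES_USER
              valueFrom:
                secretKeyRef:
                  name: db-credentials
                  key: DB_USER
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: db-credentials
                  key: DB_PASSWORD
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi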
Networking & Scaling:
- Ingress: Routes external traffic to services
- Services:
- Frontend: ClusterIP (internal only)
- Backend: ClusterIP (internal only)
- PostgreSQL: ClusterIP (internal only)
- Network Policies: Restrict pod-to-pod communication
- HPA (Horizontal Pod Autoscaler): For backend and frontend
- Target: CPU 80% utilization
- Min Replicas: 2 (dev), 3 (prod)
- Max Replicas: 10 (prod)
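In manifest form, the backend HPA could look like this sketch (autoscaling/v2 with the 80% CPU target; min/max shown are the production values from the list above):

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: backend-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: backend
  minReplicas: 3                       # 2 in dev
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80       # scale out above 80% CPU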
Frontend Structure:

frontend/
├── Dockerfile # Multi-stage build
├── nginx.conf # Nginx configuration
├── package.json # Dependencies: React, Vite
├── vite.config.js # Vite bundler config
├── index.html # HTML entry point
└── src/
├── App.jsx # Root React component
├── main.jsx # React app entry
├── components/ # React components
│ ├── TaskForm.jsx
│ ├── TaskList.jsx
│ └── Stats.jsx
├── styles/
│ └── index.css # Global styles
└── test/
├── App.test.js
└── setup.js
Build Process:
- Install dependencies (npm ci)
- Build for production (vite build)
- Serve with Nginx (simple HTTP server)
- Final image: ~50-100MB
Environment Variables:
VITE_API_URL: Backend API endpoint (e.g., http://localhost:8080)
Backend Structure:

backend/
├── Dockerfile # Multi-stage build
├── package.json # Dependencies: Express, pg
├── src/
│ ├── server.js # Express app entry
│ ├── app.js # Express configuration
│ ├── routes/
│ │ ├── health.js # Health check endpoint
│ │ └── tasks.js # Task CRUD endpoints
│ ├── middleware/
│ │ └── errorHandler.js # Error handling
│ └── db/
│ ├── connection.js # PostgreSQL connection
│ └── migrate.js # Database migrations
├── __tests__/
│ └── api.test.js # API tests
└── coverage/ # Test coverage reports
Build Process:
- Install dependencies (npm ci in the build/test stage; npm ci --omit=dev in the runtime stage)
- Run tests (jest)
- Copy code to runtime stage
- Final image: ~180-250MB
API Endpoints:
GET    /health         - Health check (used by K8s readiness probe)
GET    /api/tasks      - List all tasks
POST   /api/tasks      - Create task
PUT    /api/tasks/:id  - Update task
DELETE /api/tasks/:id  - Delete task
Environment Variables:
PORT: 8080
DATABASE_URL: PostgreSQL connection string
NODE_ENV: production/development
DB_USER, DB_PASSWORD: Database credentials
Kubernetes Manifests:

k8s/
├── base/
│ ├── kustomization.yaml # Base kustomization file
│ ├── namespace.yaml # myapp namespace definition
│ ├── backend-deployment.yaml # Backend deployment spec
│ ├── backend-service.yaml # Backend service
│ ├── backend-hpa.yaml # Backend auto-scaling
│ ├── backend-pdb.yaml # Pod disruption budget
│ ├── frontend-deployment.yaml # Frontend deployment spec
│ ├── frontend-service.yaml # Frontend service
│ ├── frontend-hpa.yaml # Frontend auto-scaling
│ ├── postgres-statefulset.yaml # Database statefulset
│ ├── postgres-service.yaml # Database service
│ ├── configmap.yaml # Environment configs
│ ├── secrets.yaml # Base64-encoded secrets
│ ├── service-accounts.yaml # RBAC service accounts
│ ├── ingress.yaml # External access routing
│ └── network-policies.yaml # Pod communication rules
│
└── overlays/
├── dev/
│ └── kustomization.yaml # Dev overrides: 1 replica, small resources
└── prod/
└── kustomization.yaml # Prod overrides: 3+ replicas with HPA, large resources
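The dev overlay mentioned above could be as small as this sketch (the replicas transformer is one way to express the "1 replica" override; the exact contents are an assumption):

# k8s/overlays/dev/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: myapp-dev
resources:
  - ../../base
replicas:
  - name: backend
    count: 1
  - name: frontend
    count: 1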
Key Files Explained:

backend-deployment.yaml (simplified):
replicas: 2                    # Default (overridden per environment)
strategy: RollingUpdate        # Update pods gradually
selector: app=backend          # Pod label to select
containers:
  - name: backend
    image: backend:latest      # Overridden per environment
    port: 8080
    resources:
      requests:                # Minimum guaranteed resources
        cpu: 100m
        memory: 128Mi
      limits:                  # Maximum resources allowed
        cpu: 500m
        memory: 512Mi
    livenessProbe:             # Restart if unhealthy
      httpGet: /health
      failureThreshold: 3
    readinessProbe:            # Remove from load balancer if not ready
      httpGet: /health
      initialDelaySeconds: 5

base/kustomization.yaml:
resources:
- namespace.yaml
- backend-deployment.yaml
- backend-service.yaml
- backend-hpa.yaml
- frontend-deployment.yaml
- frontend-service.yaml
- postgres-statefulset.yaml
- ingress.yaml
# ... etc
# This lists all K8s resources to deploy
# Overlays will patch/override specific values

overlays/prod/kustomization.yaml:
patchesStrategicMerge:
  - |-
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: backend
    spec:
      replicas: 3              # Override: more replicas for production
      template:
        spec:
          containers:
            - name: backend
              resources:
                limits:
                  cpu: "1"     # More CPU for production
                  memory: 1Gi  # More memory for production

# Patches override base values
# Final manifest = base + patches

docker-compose.yaml:
services:
  frontend:
    build: ./frontend
    ports:
      - "3000:3000"
    environment:
      - VITE_API_URL=http://localhost:8080
    volumes:
      - ./frontend/src:/app/src    # Live reload
  backend:
    build: ./backend
    ports:
      - "8080:8080"
    environment:
      - DATABASE_URL=postgresql://myapp:secret@postgres:5432/myapp
    depends_on:
      postgres:
        condition: service_healthy
  postgres:
    image: postgres:16-alpine
    environment:
      - POSTGRES_DB=myapp
      - POSTGRES_USER=myapp
      - POSTGRES_PASSWORD=secret
    ports:
      - "5432:5432"
    volumes:
      - postgres_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U myapp"]
      interval: 10s
      timeout: 5s
      retries: 5
volumes:
  postgres_data:

Usage:
docker-compose up --build # Start all services
docker-compose logs -f backend # View logs
docker-compose down -v           # Stop and clean up

Trigger: Push to any branch or PR to main
Jobs (in order):

1. test: Run automated tests
   - Runs: npm test (both frontend and backend)
   - Condition: Runs on all branches
   - Passes: Code is ready to build

2. build-and-push: Build Docker images
   - Runs: docker build & docker push
   - Condition: Only if tests pass
   - Output: Images pushed to GHCR

3. validate-k8s: Validate Kubernetes manifests
   - Runs: kustomize build & kubeconform
   - Condition: Checks syntax
   - Output: Ensures K8s files are valid

4. deploy-dev: Deploy to dev namespace
   - Runs: Deploys to dev namespace for verification
   - Condition: Automatic on main branch push
   - Output: Tests deployment process

5. deploy-prod: Deploy to production cluster
   - Runs: Deploys to the real production cluster
   - Condition: Automatic after dev passes (no approval needed)
   - Output: Shows frontend URL
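A hedged skeleton of what .github/workflows/ci-cd.yaml might contain for the first two jobs (action versions and step details are assumptions; the remaining jobs chain together via needs in the same way):

name: CI/CD
on:
  push:
  pull_request:
    branches: [main]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci && npm test          # run in backend/ and frontend/
        working-directory: backend
  build-and-push:
    needs: test                          # only runs if tests pass
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write
    steps:
      - uses: actions/checkout@v4
      - uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - uses: docker/build-push-action@v5
        with:
          context: backend               # repeated for frontend
          push: true
          tags: ghcr.io/${{ github.repository }}/backend:${{ github.sha }}
  # validate-k8s, deploy-dev, and deploy-prod follow with their own needs: chains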
Development Environment:
- Cluster: Local (docker-compose or local kind)
- Namespace: myapp-dev
- Replicas: 1 per service
- Resources: Minimal
- Database: Local PostgreSQL
- Update Strategy: Manual
- Purpose: Quick iteration, testing locally
Production Environment:
- Cluster: Real Kubernetes cluster (AWS/GCP/Azure/etc.)
- Namespace: myapp-production
- Replicas: 3+ (auto-scales to 10 with HPA)
- Resources: Large (1 CPU, 1Gi RAM per pod)
- Database: Real database (can be managed database like RDS)
- Update Strategy: Rolling updates (zero downtime)
- Monitoring: Enabled
- Purpose: Live application serving real users
Development Workflow:

Developer writes code locally
↓
docker-compose up
↓
Test locally at http://localhost:3000
↓
Verify changes work
↓
git add .
git commit -m "Add new feature"
git push origin main
↓
GitHub detects push to main
↓
Triggers .github/workflows/ci-cd.yaml
test job:

GitHub Actions spins up ubuntu-latest runner
↓
Checks out code
↓
Installs Node.js
↓
npm test (runs tests in backend/ and frontend/)
↓
If tests pass → proceed to build
If tests fail → stop, notify developer
build-and-push job:

For each service (frontend, backend):
↓
Docker setup (QEMU, Buildx for multi-arch)
↓
docker build (multi-stage process)
Stage 1: Install dependencies
Stage 2: Run tests (if they fail, no image is created)
Stage 3: Create runtime image
↓
docker push → GHCR (GitHub Container Registry)
↓
Trivy security scan → Check for vulnerabilities
validate-k8s job:

kustomize build k8s/overlays/dev
↓
Merges base/ + overlays/dev/
↓
Outputs final YAML manifest
↓
kubeconform validates against K8s schema
↓
If invalid → stop, show errors
If valid → proceed to deploy
deploy-dev job:

kubectl apply -k k8s/overlays/dev
↓
Kustomize merges base + dev overrides
↓
kubectl deploys all resources:
- Namespace
- ConfigMaps & Secrets
- Deployments (frontend, backend)
- StatefulSet (PostgreSQL)
- Services
- Ingress
- Network Policies
↓
kubectl rollout status (wait for pods ready)
↓
Pods starting → pulling images → becoming ready
↓
Show deployment info in logs
deploy-prod job:

If the KUBE_CONFIG_PROD secret is configured:
↓
kubectl config set (connect to production cluster)
↓
kustomize edit set image (update image tag)
↓
kubectl apply -k k8s/overlays/prod
↓
Rolling update starts:
Create new pod with new image
↓
Old pod still running (zero downtime)
↓
Health checks pass
↓
Remove old pod
↓
Repeat for each pod
↓
kubectl rollout status (monitor)
↓
If healthy → success
If unhealthy → automatic rollback
↓
Show frontend URL in logs
GitHub Actions output shows:
🌐 Frontend URL: https://myapp.example.com
User opens browser → visits URL → uses new feature
Environment Variables Reference:

Frontend:
VITE_API_URL=http://localhost:8080 # Backend API endpoint
VITE_ENV=development|production # App environment
Backend:
PORT=8080 # HTTP port
NODE_ENV=production|development # Node environment
DATABASE_URL=postgresql://user:pass@host:5432/db
DB_USER=myapp # Database user
DB_PASSWORD=secret # Database password
DB_HOST=postgres # Database host
DB_PORT=5432 # Database port
DB_NAME=myapp # Database name
Secrets:
Located in: k8s/base/secrets.yaml
Stored as: Base64-encoded values (encoding, not encryption)
Referenced in: Deployments as environment variables
db-credentials:
DB_USER: myapp
DB_PASSWORD: secret
ghcr-secret:
username: github_username
password: github_token
server: ghcr.io
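Using stringData, a sketch of the db-credentials Secret (the values shown are the same placeholders used above; the API server stores them base64-encoded):

apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
stringData:                              # written in plain text; stored base64-encoded
  DB_USER: myapp
  DB_PASSWORD: secret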
ConfigMaps:
Located in: k8s/base/configmap.yaml
Referenced in: Deployments as environment variables
ENVIRONMENT: production
LOG_LEVEL: info
DATABASE_HOST: postgres.myapp-production.svc.cluster.local
FRONTEND_URL: https://myapp.example.com
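Inside a Deployment's container spec, these values are typically wired in like this sketch (the ConfigMap name app-config is an assumption):

env:
  - name: LOG_LEVEL
    valueFrom:
      configMapKeyRef:
        name: app-config                 # assumed ConfigMap name
        key: LOG_LEVEL
  - name: DB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: db-credentials
        key: DB_PASSWORD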
Health Probes:

livenessProbe:
  httpGet:
    path: /health
    port: 8080
  failureThreshold: 3
  periodSeconds: 10
# If 3 consecutive probes fail → Kubernetes kills the pod and restarts it

readinessProbe:
  httpGet:
    path: /health
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 5
# If not ready → removed from the load balancer (no traffic sent)

Viewing Logs:

# View logs from deployment
kubectl logs deployment/backend -n myapp-production -f
# View logs from specific pod
kubectl logs pod/backend-xyz -n myapp-production
# Previous crash logs
kubectl logs --previous pod/backend-xyz -n myapp-production
# List pod names with JSONPath (kubectl logs itself has no JSON output flag)
kubectl get pods -n myapp-production -o jsonpath='{.items[*].metadata.name}'

Resource Usage:

# Pod resource usage
kubectl top pods -n myapp-production
# Node resource usage
kubectl top nodes
# HPA status
kubectl get hpa -n myapp-production
kubectl describe hpa backend-hpa -n myapp-production

Events:

# Recent events (errors, warnings)
kubectl get events -n myapp-production --sort-by='.lastTimestamp'
# Watch events in real-time
kubectl get events -w -n myapp-production

Security:

- PostgreSQL credentials stored in Kubernetes Secrets
- GitHub Token for registry authentication
- RBAC: Service accounts with specific permissions
- Network Policies: Restrict pod-to-pod traffic (see the sketch after this list)
- Frontend can talk to Backend (through Ingress)
- Backend can talk to PostgreSQL
- PostgreSQL only talks to Backend
- Non-root users (appuser, postgres)
- Resource limits prevent resource exhaustion
- Read-only root filesystem (where possible)
- Security context: fsGroup, runAsNonRoot
- Never store secrets in Git
- Use Kubernetes Secrets for sensitive data
- In production: Consider HashiCorp Vault or AWS Secrets Manager
- cert-manager + Let's Encrypt
- Automatic certificate renewal
- HSTS enabled (force HTTPS)
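As a sketch, the "PostgreSQL only talks to Backend" rule above could be enforced with a policy like this (label selectors are assumptions):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: postgres-allow-backend-only
spec:
  podSelector:
    matchLabels:
      app: postgres                      # applies to the database pods
  policyTypes: [Ingress]
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: backend               # only backend pods may connect
      ports:
        - protocol: TCP
          port: 5432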
Pods stuck in ImagePullBackOff:
Cause: Image not found in GHCR
Solution: Check image name/tag, verify it was pushed
kubectl describe pod POD_NAME
# Check Events section for details
CrashLoopBackOff:
Cause: App exits or crashes on startup
Solution: Check logs
kubectl logs --previous pod/POD_NAME
# Look for error messages
OOMKilled / hitting resource limits:
Cause: App using more resources than its limits allow
Solution: Either optimize app or increase limits
kubectl set resources deployment backend --limits=cpu=2000m,memory=2Gi
Backend can't reach the database:
Cause: Network issue or database down
Solution:
1. Check postgres pod: kubectl get statefulset postgres
2. Check network policy: kubectl get networkpolicies
3. Test from inside pod: kubectl exec -it POD_NAME -- /bin/sh
Pods slow to become ready:
Cause: Pod startup time, image download, etc.
Solution:
1. Check resource requests/limits
2. Check health probe configuration
3. Use lightweight base images (Alpine)
4. Pre-pull images on nodes
Scaling:

# Vertical: give each pod more resources
kubectl set resources deployment backend --limits=cpu=2000m,memory=2Gi

# Horizontal: run more replicas manually
kubectl scale deployment backend --replicas=10
# Or automatically with HPA
kubectl autoscale deployment backend --min=2 --max=10 --cpu-percent=80

Database Performance:
- Add indexes to frequently queried columns
- Use connection pooling (PgBouncer)
- Consider read replicas
- Use managed database (RDS, Cloud SQL) in production
Learning Path:
- Understand Docker (build, run, images, registries)
- Understand Kubernetes basics (pods, deployments, services)
- Run docker-compose locally
- Push code and watch CI/CD run
- Learn kubectl commands by heart
- Understand Kustomize overlays
- Configure production cluster
- Deploy to production manually
- Set up monitoring (Prometheus, Grafana)
- Learn network policies
- Implement auto-scaling
- Set up logging aggregation
- Infrastructure as Code (Terraform)
- Service mesh (Istio)
- GitOps (ArgoCD)
- Disaster recovery & backup strategies
Further Reading:
- Kubernetes Documentation
- Docker Documentation
- GitHub Actions Docs
- Kustomize
- CNCF Certification Programs
You now have a production-ready DevOps infrastructure! 🚀
Everything you need:
✅ Local development (docker-compose)
✅ Automated testing
✅ Automated building
✅ Automated deployment
✅ Multiple environments
✅ Zero-downtime updates
✅ Auto-scaling
✅ High availability
✅ Security best practices
✅ Monitoring readiness
Go build amazing things! 🎉