From 1f3647d3ec395c1696f14aa19f6331be7b534ee8 Mon Sep 17 00:00:00 2001 From: Muhamad Sazwan Bin Ismail Date: Thu, 6 Nov 2025 13:12:18 +0800 Subject: [PATCH] Add deployment guide for Node.js app on GKE MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Added a comprehensive guide for deploying a Node.js application to Google Kubernetes Engine (GKE), including prerequisites, setup steps, Docker configuration, and deployment instructions. # Complete Node.js App Deployment Guide for Google Kubernetes Engine (GKE) ## πŸ“‹ Table of Contents 1. [Prerequisites](#prerequisites) 2. [Application Setup](#application-setup) 3. [Docker Configuration](#docker-configuration) 4. [GKE Cluster Setup](#gke-cluster-setup) 5. [Kubernetes Manifests](#kubernetes-manifests) 6. [Deployment Process](#deployment-process) 7. [Monitoring & Scaling](#monitoring--scaling) 8. [CI/CD Automation](#cicd-automation) 9. [Troubleshooting](#troubleshooting) ## 🎯 Prerequisites ### Required Tools Installation ```bash # Install Google Cloud CLI curl https://sdk.cloud.google.com | bash exec -l $SHELL # Install kubectl gcloud components install kubectl # Install Docker # On macOS: brew install --cask docker # On Ubuntu: sudo apt-get update && sudo apt-get install -y docker.io # Verify installations gcloud --version kubectl version --client docker --version ``` ### Google Cloud Setup ```bash # Authenticate with GCP gcloud auth login # Set your project gcloud config set project YOUR_PROJECT_ID # Enable required APIs gcloud services enable \ container.googleapis.com \ containerregistry.googleapis.com \ cloudbuild.googleapis.com \ compute.googleapis.com ``` ## πŸš€ Application Setup ### Project Structure ``` nodejs-gke-app/ β”œβ”€β”€ src/ β”‚ β”œβ”€β”€ app.js β”‚ β”œβ”€β”€ routes/ β”‚ β”‚ β”œβ”€β”€ api.js β”‚ β”‚ └── health.js β”‚ └── middleware/ β”‚ └── security.js β”œβ”€β”€ tests/ β”‚ └── app.test.js β”œβ”€β”€ Dockerfile β”œβ”€β”€ .dockerignore β”œβ”€β”€ package.json β”œβ”€β”€ cloudbuild.yaml β”œβ”€β”€ k8s/ β”‚ β”œβ”€β”€ namespace.yaml β”‚ β”œβ”€β”€ deployment.yaml β”‚ β”œβ”€β”€ service.yaml β”‚ β”œβ”€β”€ hpa.yaml β”‚ └── configmap.yaml └── README.md ``` ### Package.json ```json { "name": "nodejs-gke-app", "version": "1.0.0", "description": "Production Node.js app for GKE", "main": "src/app.js", "scripts": { "start": "node src/app.js", "dev": "nodemon src/app.js", "test": "jest --coverage", "docker:build": "docker build -t nodejs-gke-app .", "docker:run": "docker run -p 8080:8080 nodejs-gke-app", "lint": "eslint src/", "security:audit": "npm audit --audit-level high" }, "dependencies": { "express": "^4.18.2", "helmet": "^7.0.0", "cors": "^2.8.5", "compression": "^1.7.4", "morgan": "^1.10.0", "express-rate-limit": "^6.8.1", "express-async-errors": "^3.1.1" }, "devDependencies": { "jest": "^29.5.0", "supertest": "^6.3.3", "nodemon": "^2.0.22", "eslint": "^8.45.0" }, "engines": { "node": ">=18.0.0", "npm": ">=9.0.0" } } ``` ### Production-Ready Node.js App (src/app.js) ```javascript require('express-async-errors'); const express = require('express'); const helmet = require('helmet'); const cors = require('cors'); const compression = require('compression'); const morgan = require('morgan'); const rateLimit = require('express-rate-limit'); const app = express(); const PORT = process.env.PORT || 8080; // Security middleware app.use(helmet({ contentSecurityPolicy: { directives: { defaultSrc: ["'self'"], styleSrc: ["'self'", "'unsafe-inline'"], scriptSrc: ["'self'"], imgSrc: ["'self'", "data:", "https:"] 
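      // Directives not listed here (connect-src, font-src, etc.) fall back to
      // Helmet's built-in defaults; extend this list to match the domains your
      // frontend actually loads assets from.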
} } })); app.use(cors({ origin: process.env.ALLOWED_ORIGINS || '*', credentials: true })); app.use(compression()); app.use(morgan('combined')); // Rate limiting const limiter = rateLimit({ windowMs: 15 * 60 * 1000, max: process.env.RATE_LIMIT_MAX || 100, message: 'Too many requests from this IP' }); app.use(limiter); // Body parsing app.use(express.json({ limit: '10mb' })); app.use(express.urlencoded({ extended: true })); // Routes app.use('/health', require('./routes/health')); app.use('/api', require('./routes/api')); // Root endpoint app.get('/', (req, res) => { res.json({ message: 'πŸš€ Node.js App Running on GKE', version: process.env.APP_VERSION || '1.0.0', environment: process.env.NODE_ENV, timestamp: new Date().toISOString(), uptime: process.uptime() }); }); // Error handling middleware app.use((error, req, res, next) => { console.error('Error:', error.stack); res.status(500).json({ error: 'Internal Server Error', message: process.env.NODE_ENV === 'production' ? 'Something went wrong!' : error.message }); }); // 404 handler app.use('*', (req, res) => { res.status(404).json({ error: 'Route not found', path: req.originalUrl }); }); // Graceful shutdown process.on('SIGTERM', () => { console.log('Received SIGTERM, starting graceful shutdown'); server.close(() => { console.log('Process terminated'); process.exit(0); }); }); const server = app.listen(PORT, '0.0.0.0', () => { console.log(` πŸš€ Server running on port ${PORT} πŸ“Š Environment: ${process.env.NODE_ENV || 'development'} πŸ•’ Started at: ${new Date().toISOString()} `); }); module.exports = app; ``` ### Health Routes (src/routes/health.js) ```javascript const express = require('express'); const router = express.Router(); router.get('/', (req, res) => { const healthcheck = { status: 'OK', timestamp: new Date().toISOString(), uptime: process.uptime(), memory: process.memoryUsage(), environment: process.env.NODE_ENV, nodeVersion: process.version }; res.status(200).json(healthcheck); }); router.get('/ready', (req, res) => { // Add your readiness checks here (database, external services, etc.) res.status(200).json({ status: 'READY' }); }); router.get('/live', (req, res) => { // Liveness check res.status(200).json({ status: 'ALIVE' }); }); module.exports = router; ``` ### API Routes (src/routes/api.js) ```javascript const express = require('express'); const router = express.Router(); router.get('/info', (req, res) => { res.json({ app: 'Node.js GKE Application', version: '1.0.0', description: 'Production-ready Node.js app deployed on GKE', features: [ 'Docker containerization', 'Kubernetes deployment', 'Health checks', 'Auto-scaling', 'Monitoring' ] }); }); router.get('/users', async (req, res) => { // Example API endpoint const users = [ { id: 1, name: 'John Doe', email: 'john@example.com' }, { id: 2, name: 'Jane Smith', email: 'jane@example.com' } ]; res.json(users); }); module.exports = router; ``` ## 🐳 Docker Configuration ### Multi-Stage Dockerfile ```dockerfile # Build stage FROM node:18-alpine AS builder WORKDIR /app # Copy package files COPY package*.json ./ COPY .npmrc ./ # Install all dependencies (including dev dependencies for building) RUN npm ci # Copy source code COPY . . 
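# (Optional) run lint and tests inside the builder stage so a broken build never
# produces a runtime image; both scripts are defined in package.json above.
# RUN npm run lint && npm test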
# Remove dev dependencies for production RUN npm prune --production # Runtime stage FROM node:18-alpine AS runtime # Install security updates and curl for health checks RUN apk update && apk upgrade && apk add --no-cache curl # Create non-root user RUN addgroup -g 1001 -S nodejs && \ adduser -S nodejs -u 1001 WORKDIR /app # Copy from builder stage COPY --from=builder --chown=nodejs:nodejs /app/node_modules ./node_modules COPY --chown=nodejs:nodejs /app/package*.json ./ COPY --chown=nodejs:nodejs /app/src ./src # Create logs directory RUN mkdir -p logs && chown -R nodejs:nodejs logs # Switch to non-root user USER nodejs # Expose port EXPOSE 8080 # Health check HEALTHCHECK --interval=30s --timeout=3s --start-period=40s --retries=3 \ CMD curl -f http://localhost:8080/health || exit 1 # Start the application CMD ["node", "src/app.js"] ``` ### .dockerignore ``` node_modules npm-debug.log .git .gitignore README.md .env .nyc_output coverage .dockerignore Dockerfile .cloudbuild tests jest.config.js .eslintrc.js Dockerfile .docker .vscode .idea *.log ``` ## ☸️ Kubernetes Manifests ### Namespace (k8s/namespace.yaml) ```yaml apiVersion: v1 kind: Namespace metadata: name: nodejs-production labels: name: nodejs-production environment: production ``` ### ConfigMap (k8s/configmap.yaml) ```yaml apiVersion: v1 kind: ConfigMap metadata: name: nodejs-app-config namespace: nodejs-production data: NODE_ENV: "production" PORT: "8080" LOG_LEVEL: "info" RATE_LIMIT_MAX: "100" ALLOWED_ORIGINS: "*" ``` ### Deployment (k8s/deployment.yaml) ```yaml apiVersion: apps/v1 kind: Deployment metadata: name: nodejs-app namespace: nodejs-production labels: app: nodejs-app version: v1.0.0 spec: replicas: 3 selector: matchLabels: app: nodejs-app strategy: type: RollingUpdate rollingUpdate: maxSurge: 1 maxUnavailable: 0 template: metadata: labels: app: nodejs-app version: v1.0.0 annotations: prometheus.io/scrape: "true" prometheus.io/port: "8080" prometheus.io/path: "/metrics" spec: containers: - name: nodejs-app image: gcr.io/YOUR_PROJECT_ID/nodejs-gke-app:latest ports: - containerPort: 8080 name: http env: - name: NODE_ENV valueFrom: configMapKeyRef: name: nodejs-app-config key: NODE_ENV - name: PORT valueFrom: configMapKeyRef: name: nodejs-app-config key: PORT - name: APP_VERSION value: "v1.0.0" resources: requests: memory: "128Mi" cpu: "100m" limits: memory: "256Mi" cpu: "200m" livenessProbe: httpGet: path: /health/live port: 8080 scheme: HTTP initialDelaySeconds: 30 periodSeconds: 10 timeoutSeconds: 5 failureThreshold: 3 readinessProbe: httpGet: path: /health/ready port: 8080 scheme: HTTP initialDelaySeconds: 5 periodSeconds: 5 timeoutSeconds: 3 failureThreshold: 1 startupProbe: httpGet: path: /health port: 8080 scheme: HTTP initialDelaySeconds: 10 periodSeconds: 5 failureThreshold: 10 securityContext: runAsNonRoot: true runAsUser: 1001 allowPrivilegeEscalation: false readOnlyRootFilesystem: true capabilities: drop: - ALL --- apiVersion: policy/v1 kind: PodDisruptionBudget metadata: name: nodejs-app-pdb namespace: nodejs-production spec: minAvailable: 2 selector: matchLabels: app: nodejs-app ``` ### Service (k8s/service.yaml) ```yaml apiVersion: v1 kind: Service metadata: name: nodejs-app-service namespace: nodejs-production labels: app: nodejs-app annotations: cloud.google.com/load-balancer-type: "External" cloud.google.com/neg: '{"ingress": true}' spec: type: LoadBalancer selector: app: nodejs-app ports: - name: http port: 80 targetPort: 8080 protocol: TCP sessionAffinity: None ``` ### Horizontal Pod Autoscaler 
(k8s/hpa.yaml) ```yaml apiVersion: autoscaling/v2 kind: HorizontalPodAutoscaler metadata: name: nodejs-app-hpa namespace: nodejs-production spec: scaleTargetRef: apiVersion: apps/v1 kind: Deployment name: nodejs-app minReplicas: 2 maxReplicas: 10 metrics: - type: Resource resource: name: cpu target: type: Utilization averageUtilization: 70 - type: Resource resource: name: memory target: type: Utilization averageUtilization: 80 behavior: scaleDown: stabilizationWindowSeconds: 300 policies: - type: Percent value: 50 periodSeconds: 60 scaleUp: stabilizationWindowSeconds: 60 policies: - type: Percent value: 100 periodSeconds: 60 ``` ## πŸ—οΈ GKE Cluster Setup ### Production Cluster Creation ```bash #!/bin/bash PROJECT_ID="YOUR_PROJECT_ID" CLUSTER_NAME="nodejs-production-cluster" REGION="us-central1" ZONE="us-central1-a" echo "Creating production GKE cluster..." gcloud container clusters create $CLUSTER_NAME \ --project=$PROJECT_ID \ --zone=$ZONE \ --num-nodes=2 \ --machine-type=e2-medium \ --disk-size=50GB \ --disk-type=pd-ssd \ --enable-ip-alias \ --enable-autoscaling \ --min-nodes=1 \ --max-nodes=5 \ --enable-autorepair \ --enable-autoupgrade \ --shielded-integrity-monitoring \ --shielded-secure-boot \ --release-channel=regular \ --workload-pool=$PROJECT_ID.svc.id.goog \ --enable-stackdriver-kubernetes echo "Getting cluster credentials..." gcloud container clusters get-credentials $CLUSTER_NAME \ --zone $ZONE \ --project $PROJECT_ID echo "Verifying cluster connection..." kubectl get nodes echo "Cluster created successfully!" ``` ### Advanced Cluster with Node Pools ```bash #!/bin/bash PROJECT_ID="YOUR_PROJECT_ID" CLUSTER_NAME="nodejs-advanced-cluster" REGION="us-central1" # Create cluster with minimal system node pool gcloud container clusters create $CLUSTER_NAME \ --project=$PROJECT_ID \ --region=$REGION \ --node-locations=us-central1-a,us-central1-b \ --num-nodes=1 \ --machine-type=e2-small \ --enable-ip-alias # Add application node pool gcloud container node-pools create app-pool \ --cluster=$CLUSTER_NAME \ --region=$REGION \ --num-nodes=2 \ --machine-type=e2-medium \ --enable-autoscaling \ --min-nodes=1 \ --max-nodes=5 \ --disk-size=50GB \ --disk-type=pd-ssd \ --node-labels=environment=production,workload=app # Add monitoring node pool gcloud container node-pools create monitoring-pool \ --cluster=$CLUSTER_NAME \ --region=$REGION \ --num-nodes=1 \ --machine-type=e2-small \ --node-taints=monitoring=true:NoSchedule \ --node-labels=environment=production,workload=monitoring echo "Cluster and node pools created successfully!" ``` ## πŸš€ Deployment Process ### Manual Deployment Script ```bash #!/bin/bash set -e PROJECT_ID="YOUR_PROJECT_ID" CLUSTER_NAME="nodejs-production-cluster" ZONE="us-central1-a" APP_NAME="nodejs-gke-app" VERSION="v1.0.0" echo "πŸš€ Starting Node.js App Deployment to GKE..." # Build Docker image echo "πŸ“¦ Building Docker image..." docker build -t $APP_NAME . # Test the image locally (optional) echo "πŸ§ͺ Testing image locally..." docker run -d --name test-app -p 8080:8080 $APP_NAME sleep 10 curl -f http://localhost:8080/health || (echo "❌ Local test failed"; exit 1) docker stop test-app && docker rm test-app # Tag and push to GCR echo "🏷️ Tagging image..." docker tag $APP_NAME gcr.io/$PROJECT_ID/$APP_NAME:latest docker tag $APP_NAME gcr.io/$PROJECT_ID/$APP_NAME:$VERSION echo "πŸ“€ Pushing to Google Container Registry..." 
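# Pushing to gcr.io requires Docker to be authenticated with your Google account;
# if this is the first push from this machine, run `gcloud auth configure-docker` once.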
docker push gcr.io/$PROJECT_ID/$APP_NAME:latest docker push gcr.io/$PROJECT_ID/$APP_NAME:$VERSION # Connect to GKE cluster echo "πŸ”— Connecting to GKE cluster..." gcloud container clusters get-credentials $CLUSTER_NAME \ --zone $ZONE \ --project $PROJECT_ID # Create namespace if not exists kubectl apply -f k8s/namespace.yaml # Deploy application echo "πŸ“ Deploying application..." kubectl apply -f k8s/configmap.yaml kubectl apply -f k8s/deployment.yaml kubectl apply -f k8s/service.yaml kubectl apply -f k8s/hpa.yaml # Wait for deployment to complete echo "⏳ Waiting for deployment to be ready..." kubectl rollout status deployment/nodejs-app -n nodejs-production --timeout=600s # Get external IP EXTERNAL_IP=$(kubectl get service nodejs-app-service -n nodejs-production -o jsonpath='{.status.loadBalancer.ingress[0].ip}') echo "βœ… Deployment completed successfully!" echo "🌐 Your application is now live: http://$EXTERNAL_IP" echo "πŸ“Š Health check: http://$EXTERNAL_IP/health" echo "πŸ” API info: http://$EXTERNAL_IP/api/info" # Run tests against deployed application echo "πŸ§ͺ Running post-deployment tests..." curl -f http://$EXTERNAL_IP/health && echo "βœ… Health check passed" curl -f http://$EXTERNAL_IP/api/info && echo "βœ… API check passed" ``` ### Automated CI/CD with Cloud Build #### cloudbuild.yaml ```yaml steps: # Build the container image - name: 'gcr.io/cloud-builders/docker' args: - 'build' - '--no-cache' - '-t' - 'gcr.io/$PROJECT_ID/nodejs-gke-app:$COMMIT_SHA' - '-t' - 'gcr.io/$PROJECT_ID/nodejs-gke-app:latest' - '.' # Run tests - name: 'gcr.io/cloud-builders/docker' args: - 'run' - '--rm' - 'gcr.io/$PROJECT_ID/nodejs-gke-app:$COMMIT_SHA' - 'npm' - 'test' # Push the container image to Container Registry - name: 'gcr.io/cloud-builders/docker' args: ['push', 'gcr.io/$PROJECT_ID/nodejs-gke-app:$COMMIT_SHA'] - name: 'gcr.io/cloud-builders/docker' args: ['push', 'gcr.io/$PROJECT_ID/nodejs-gke-app:latest'] # Deploy to GKE - name: 'gcr.io/cloud-builders/gke-deploy' args: - 'run' - '--filename=k8s/' - '--image=gcr.io/$PROJECT_ID/nodejs-gke-app:$COMMIT_SHA' - '--location=us-central1-a' - '--cluster=nodejs-production-cluster' # Run integration tests - name: 'gcr.io/cloud-builders/curl' args: - '-f' - 'http://$(kubectl get service nodejs-app-service -n nodejs-production -o jsonpath='{.status.loadBalancer.ingress[0].ip}')/health' waitFor: ['-'] entrypoint: 'bash' images: - 'gcr.io/$PROJECT_ID/nodejs-gke-app:$COMMIT_SHA' - 'gcr.io/$PROJECT_ID/nodejs-gke-app:latest' options: logging: CLOUD_LOGGING_ONLY ``` ## πŸ“Š Monitoring & Scaling ### Application Monitoring ```bash # View application logs kubectl logs -n nodejs-production -l app=nodejs-app --tail=50 # Stream logs in real-time kubectl logs -n nodejs-production -l app=nodejs-app -f # View pod resource usage kubectl top pods -n nodejs-production # View node resource usage kubectl top nodes # Describe service to get endpoints kubectl describe service nodejs-app-service -n nodejs-production ``` ### Performance Testing ```bash # Install k6 for load testing (optional) brew install k6 # Create load test script (loadtest.js) echo " import http from 'k6/http'; import { check, sleep } from 'k6'; export const options = { stages: [ { duration: '2m', target: 100 }, // ramp up to 100 users { duration: '5m', target: 100 }, // stay at 100 users { duration: '2m', target: 0 }, // ramp down to 0 users ], }; export default function () { const res = http.get('http://YOUR_EXTERNAL_IP/health'); check(res, { 'status is 200': (r) => r.status === 200, 'response time < 
200ms': (r) => r.timings.duration < 200, }); sleep(1); } " > loadtest.js # Run load test k6 run loadtest.js ``` ## πŸ”„ Update and Rollback ### Application Updates ```bash #!/bin/bash # Build new version docker build -t gcr.io/YOUR_PROJECT_ID/nodejs-gke-app:v2.0.0 . # Push new version docker push gcr.io/YOUR_PROJECT_ID/nodejs-gke-app:v2.0.0 # Update deployment kubectl set image deployment/nodejs-app \ nodejs-app=gcr.io/YOUR_PROJECT_ID/nodejs-gke-app:v2.0.0 \ -n nodejs-production # Monitor rollout kubectl rollout status deployment/nodejs-app -n nodejs-production # View rollout history kubectl rollout history deployment/nodejs-app -n nodejs-production ``` ### Rollback if Needed ```bash # Rollback to previous version kubectl rollout undo deployment/nodejs-app -n nodejs-production # Rollback to specific revision kubectl rollout undo deployment/nodejs-app --to-revision=1 -n nodejs-production ``` ## πŸ—‘οΈ Cleanup Script ```bash #!/bin/bash PROJECT_ID="YOUR_PROJECT_ID" CLUSTER_NAME="nodejs-production-cluster" ZONE="us-central1-a" echo "🧹 Cleaning up GKE deployment..." # Delete Kubernetes resources kubectl delete -f k8s/ --ignore-not-found=true # Wait for resources to be deleted sleep 30 # Delete the cluster echo "πŸ—‘οΈ Deleting cluster $CLUSTER_NAME..." gcloud container clusters delete $CLUSTER_NAME \ --zone $ZONE \ --quiet # Delete container images echo "πŸ—‘οΈ Deleting container images..." gcloud container images delete gcr.io/$PROJECT_ID/nodejs-gke-app:latest --quiet gcloud container images delete gcr.io/$PROJECT_ID/nodejs-gke-app:v1.0.0 --quiet echo "βœ… Cleanup completed!" ``` ## πŸ› οΈ Troubleshooting Common Issues ### Common Commands for Debugging ```bash # Check cluster status gcloud container clusters list # Check node status kubectl get nodes # Check all resources in namespace kubectl get all -n nodejs-production # Describe pod for details kubectl describe pod -n nodejs-production -l app=nodejs-app # Check events kubectl get events -n nodejs-production --sort-by=.metadata.creationTimestamp # Check service endpoints kubectl get endpoints nodejs-app-service -n nodejs-production # Check ingress status (if using) kubectl get ingress -n nodejs-production # Debug networking kubectl run -it --rm debug --image=busybox -n nodejs-production -- sh ``` ### Common Issues and Solutions 1. **Image Pull Errors**: Check GCR permissions and image tags 2. **Resource Limits**: Adjust CPU/memory requests in deployment 3. **Health Check Failures**: Verify liveness/readiness probe endpoints 4. **Service Not Accessible**: Check LoadBalancer status and firewall rules 5. **Pod CrashLoopBackOff**: Check application logs and resource limits This comprehensive guide provides everything needed to deploy, manage, and scale Node.js applications on GKE with production best practices! --- Google Cloud project | 307 +++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 307 insertions(+) create mode 100644 Google Cloud project diff --git a/Google Cloud project b/Google Cloud project new file mode 100644 index 0000000..0066d2b --- /dev/null +++ b/Google Cloud project @@ -0,0 +1,307 @@ +I'll help you build and deploy an application to Google Kubernetes Engine (GKE). Here's a complete step-by-step guide: + +## Prerequisites + +1. **Google Cloud Account** with billing enabled +2. **Google Cloud SDK** installed +3. **Docker** installed locally +4. 
**kubectl** installed + +## Step 1: Set Up Your Project + +```bash +# Set your Google Cloud project +gcloud config set project YOUR_PROJECT_ID + +# Enable required APIs +gcloud services enable container.googleapis.com +gcloud services enable containerregistry.googleapis.com +``` + +## Step 2: Create a Simple Application + +Create a basic Node.js application: + +**package.json:** +```json +{ + "name": "gke-app", + "version": "1.0.0", + "description": "Sample app for GKE deployment", + "main": "server.js", + "scripts": { + "start": "node server.js" + }, + "dependencies": { + "express": "^4.18.2" + } +} +``` + +**server.js:** +```javascript +const express = require('express'); +const app = express(); +const PORT = process.env.PORT || 8080; + +app.get('/', (req, res) => { + res.send(` + + +

+      <h1>Hello from GKE!</h1>
+      <p>Server time: ${new Date().toISOString()}</p>
+      <p>Host: ${process.env.HOSTNAME || 'unknown'}</p>
+ + + `); +}); + +app.get('/health', (req, res) => { + res.json({ status: 'healthy', timestamp: new Date().toISOString() }); +}); + +app.listen(PORT, () => { + console.log(`Server running on port ${PORT}`); +}); +``` + +## Step 3: Create Dockerfile + +**Dockerfile:** +```dockerfile +# Use the official Node.js runtime as base image +FROM node:18-alpine + +# Set working directory in container +WORKDIR /app + +# Copy package files +COPY package*.json ./ + +# Install dependencies +RUN npm install --production + +# Copy application code +COPY . . + +# Expose port +EXPOSE 8080 + +# Start the application +CMD ["npm", "start"] +``` + +**.dockerignore:** +``` +node_modules +npm-debug.log +.git +.gitignore +README.md +Dockerfile +.dockerignore +``` + +## Step 4: Build and Test Docker Image Locally + +```bash +# Build the Docker image +docker build -t my-node-app . + +# Test locally +docker run -p 8080:8080 my-node-app + +# Visit http://localhost:8080 to verify it works +``` + +## Step 5: Push to Google Container Registry (GCR) + +```bash +# Configure Docker to use gcloud as credential helper +gcloud auth configure-docker + +# Tag the image for GCR +docker tag my-node-app gcr.io/YOUR_PROJECT_ID/my-node-app:v1.0 + +# Push to GCR +docker push gcr.io/YOUR_PROJECT_ID/my-node-app:v1.0 +``` + +## Step 6: Create a GKE Cluster + +```bash +# Create a cluster +gcloud container clusters create my-gke-cluster \ + --num-nodes=2 \ + --zone=us-central1-a \ + --machine-type=e2-small + +# Get cluster credentials +gcloud container clusters get-credentials my-gke-cluster \ + --zone=us-central1-a +``` + +## Step 7: Create Kubernetes Deployment Files + +**deployment.yaml:** +```yaml +apiVersion: apps/v1 +kind: Deployment +metadata: + name: my-node-app + labels: + app: my-node-app +spec: + replicas: 3 + selector: + matchLabels: + app: my-node-app + template: + metadata: + labels: + app: my-node-app + spec: + containers: + - name: my-node-app + image: gcr.io/YOUR_PROJECT_ID/my-node-app:v1.0 + ports: + - containerPort: 8080 + env: + - name: NODE_ENV + value: "production" + resources: + requests: + memory: "128Mi" + cpu: "100m" + limits: + memory: "256Mi" + cpu: "200m" + livenessProbe: + httpGet: + path: /health + port: 8080 + initialDelaySeconds: 5 + periodSeconds: 10 + readinessProbe: + httpGet: + path: /health + port: 8080 + initialDelaySeconds: 5 + periodSeconds: 10 +``` + +**service.yaml:** +```yaml +apiVersion: v1 +kind: Service +metadata: + name: my-node-app-service + labels: + app: my-node-app +spec: + type: LoadBalancer + selector: + app: my-node-app + ports: + - port: 80 + targetPort: 8080 + protocol: TCP +``` + +## Step 8: Deploy to GKE + +```bash +# Apply the deployment +kubectl apply -f deployment.yaml + +# Apply the service +kubectl apply -f service.yaml + +# Check deployment status +kubectl get deployments + +# Check pods +kubectl get pods + +# Check service (get external IP) +kubectl get services +``` + +## Step 9: Verify Deployment + +```bash +# Get the external IP +kubectl get service my-node-app-service + +# Test the application (replace EXTERNAL_IP with actual IP) +curl http://EXTERNAL_IP + +# View pod logs +kubectl logs -l app=my-node-app --tail=50 +``` + +## Step 10: Scaling and Updates + +```bash +# Scale the deployment +kubectl scale deployment my-node-app --replicas=5 + +# Update the application (after building new image) +docker build -t gcr.io/YOUR_PROJECT_ID/my-node-app:v2.0 . 
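+# Tagging the update as v2.0 leaves v1.0 untouched in the registry, which is
+# what keeps the rollback command below straightforward.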
+docker push gcr.io/YOUR_PROJECT_ID/my-node-app:v2.0 +kubectl set image deployment/my-node-app my-node-app=gcr.io/YOUR_PROJECT_ID/my-node-app:v2.0 + +# Rollback if needed +kubectl rollout undo deployment/my-node-app +``` + +## Step 11: Clean Up + +```bash +# Delete the deployment and service +kubectl delete -f deployment.yaml +kubectl delete -f service.yaml + +# Delete the cluster +gcloud container clusters delete my-gke-cluster --zone=us-central1-a + +# Delete the container image +gcloud container images delete gcr.io/YOUR_PROJECT_ID/my-node-app:v1.0 --force-delete-tags +``` + +## Automation Script + +Create a **deploy.sh** script for automation: + +```bash +#!/bin/bash +PROJECT_ID="YOUR_PROJECT_ID" +IMAGE_NAME="my-node-app" +VERSION="v1.0" +CLUSTER_NAME="my-gke-cluster" +ZONE="us-central1-a" + +echo "Building Docker image..." +docker build -t $IMAGE_NAME . + +echo "Tagging image for GCR..." +docker tag $IMAGE_NAME gcr.io/$PROJECT_ID/$IMAGE_NAME:$VERSION + +echo "Pushing to GCR..." +docker push gcr.io/$PROJECT_ID/$IMAGE_NAME:$VERSION + +echo "Deploying to GKE..." +gcloud container clusters get-credentials $CLUSTER_NAME --zone $ZONE +kubectl apply -f deployment.yaml +kubectl apply -f service.yaml + +echo "Deployment complete!" +kubectl get services +``` + +Make it executable: +```bash +chmod +x deploy.sh +``` + +This complete workflow shows you how to containerize an application, push it to Google Container Registry, and deploy it to a GKE cluster with proper configuration for production use.
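+
+**Note:** `deploy.sh` always applies the image tag that is hard-coded in `deployment.yaml`, so if you bump `VERSION` the new image is pushed but not rolled out. A minimal way to keep the two in sync (assuming the container in the manifest is still named `my-node-app`) is to follow the `kubectl apply` step with an explicit image update:
+
+```bash
+# Point the deployment at the tag this script just built and pushed,
+# then wait for the rollout to finish before reporting success.
+kubectl set image deployment/my-node-app \
+  my-node-app=gcr.io/$PROJECT_ID/$IMAGE_NAME:$VERSION
+kubectl rollout status deployment/my-node-app
+```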