This project is all about taking a standard Three-Tier app (React Frontend, Node.js Backend, MongoDB) and moving it from a manual setup to a fully automated DevSecOps pipeline.
I started by using Terraform to build all the infrastructure on AWS automatically.
- VPC & Network: I created a custom network (VPC). The Load Balancer sits in the public subnets, and the worker nodes are hidden in private subnets for security.
- EKS Cluster: I spun up a Kubernetes cluster (EKS) to run the actual application containers.
- Permissions: I set up specific IAM roles so Jenkins can push images and EKS can pull them without using hardcoded keys.
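As a rough sketch, the network and cluster pieces above can be expressed in Terraform like this (the module names are the community terraform-aws-modules, but every name, CIDR, and AZ here is an illustrative placeholder, not my exact config):

```hcl
# Sketch only: a VPC with public + private subnets, and an EKS cluster
# whose nodes live in the private subnets.
module "vpc" {
  source = "terraform-aws-modules/vpc/aws"

  name            = "three-tier-vpc"
  cidr            = "10.0.0.0/16"
  azs             = ["us-east-1a", "us-east-1b"]
  public_subnets  = ["10.0.1.0/24", "10.0.2.0/24"]     # Load Balancer lives here
  private_subnets = ["10.0.101.0/24", "10.0.102.0/24"] # worker nodes hide here
}

module "eks" {
  source = "terraform-aws-modules/eks/aws"

  cluster_name = "three-tier-eks"
  vpc_id       = module.vpc.vpc_id
  subnet_ids   = module.vpc.private_subnets
}
```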
I used Jenkins to handle the heavy lifting of building and testing the code.
- Code Checks: Whenever I push code to GitHub, SonarQube scans it to make sure there are no bugs or security holes.
- Building the App: Jenkins takes the code, wraps it in a Docker container, and gives it a unique tag (like a version number).
- Storing Images: These images are pushed to AWS ECR (Elastic Container Registry) for safekeeping.
- Updating Manifests: The cool part: Jenkins automatically edits my deployment.yaml file in GitHub with the new image tag. This triggers the next step.
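The manifest-update step boils down to a one-line substitution. Here is a hypothetical version of what Jenkins runs (the registry URL and tag are placeholders, and I create a sample deployment.yaml just for illustration):

```shell
# Placeholder values — in Jenkins, TAG would be ${BUILD_NUMBER}
IMAGE="123456789012.dkr.ecr.us-east-1.amazonaws.com/frontend"
TAG="42"

# Sample manifest line, standing in for the real deployment.yaml
printf 'image: %s:41\n' "$IMAGE" > deployment.yaml

# Point the manifest at the freshly built image
sed -i "s|image: ${IMAGE}:.*|image: ${IMAGE}:${TAG}|" deployment.yaml

cat deployment.yaml
```

In the real pipeline this edit is followed by a `git commit` and `git push`, and that pushed change is what Argo CD reacts to.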
Here are the specific pipelines I built:
I don't just ship code; I verify it. SonarQube scans every single commit to check for bugs, vulnerabilities, and messy code.
Instead of running kubectl apply manually, I used Argo CD.
- GitOps: Argo CD watches my GitHub repo. As soon as Jenkins updates that manifest file, Argo CD sees the change.
- Auto-Sync: It immediately pulls the new changes and updates the live cluster. This ensures what's in my code is exactly what's running in production.
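A minimal Argo CD Application that implements this watch-and-sync loop could look like the following (the repo URL, path, and namespaces are placeholders for my actual values):

```yaml
# Sketch: tell Argo CD which repo/path to watch and where to deploy it
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: three-tier-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/<your-user>/<your-repo>.git
    targetRevision: main
    path: kubernetes-manifests        # where Jenkins updates deployment.yaml
  destination:
    server: https://kubernetes.default.svc
    namespace: three-tier
  syncPolicy:
    automated:
      prune: true      # delete resources removed from Git
      selfHeal: true   # revert manual drift back to what Git says
```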
I kept the database separate for better management.
- The compute runs on AWS EKS.
- The database is on MongoDB Atlas (cloud-managed).
- Security: I didn't hardcode passwords. I used Kubernetes Secrets to inject the credentials safely into the backend.
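As a sketch, the wiring looks like this (the Secret name and the Atlas connection string are placeholders; the second fragment sits inside the backend Deployment's container spec):

```yaml
# Secret holding the MongoDB Atlas connection string
apiVersion: v1
kind: Secret
metadata:
  name: mongo-credentials
type: Opaque
stringData:
  MONGO_URI: mongodb+srv://<user>:<password>@<cluster>.mongodb.net/app
---
# In the backend container spec: read the secret as an env var,
# so no password ever lands in the manifest or the image.
env:
  - name: MONGO_URI
    valueFrom:
      secretKeyRef:
        name: mongo-credentials
        key: MONGO_URI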
I set up a full monitoring stack because we need to know if things break.
- Prometheus: Collects all the data (metrics) from the cluster.
- Grafana: Turns that data into nice graphs. I can see CPU usage, memory, and network traffic.
- Alerts: If a pod crashes or gets stuck, the stack fires an alert so I know right away.
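For example, a crash-loop alert in Prometheus rule format could look like this (the thresholds are illustrative; the metric comes from kube-state-ics's standard exporter, kube-state-metrics):

```yaml
# Sketch: fire when a container restarts more than 3 times in 10 minutes
groups:
  - name: pod-health
    rules:
      - alert: PodCrashLooping
        expr: increase(kube_pod_container_status_restarts_total[10m]) > 3
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "Pod {{ $labels.pod }} is restarting repeatedly"
```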
While building this, I hit a really annoying error. The frontend just wouldn't load, and the Load Balancer showed a 502 Bad Gateway.
I was used to React apps running on port 3000, so I configured all my Kubernetes files (Service, Ingress) to point to port 3000. But this project uses Vite, and Vite defaults to port 5173. So, the Load Balancer was knocking on port 3000, but the app was listening on 5173. Nobody was answering, so the connection failed.
I had to align everything to use the correct port.
1. Changed Kubernetes Config

I went into my frontend/service.yaml and told it to send traffic to the real port:

```yaml
ports:
  - port: 3000        # keep 3000 for internal talking
    targetPort: 5173  # point to Vite!
```

2. Exposed Vite to the World
By default, Vite only listens to localhost. This means it ignores requests coming from outside the container (like from the Load Balancer).
I fixed this by updating the start script in package.json:
```json
"dev": "vite --host 0.0.0.0 --port 5173"
```

The --host 0.0.0.0 part is crucial — it lets outside traffic in.
3. Allowed My Domain
Finally, Vite has a security setting that blocked my custom domain edureliefsl.xyz. I added this to vite.config.js (it lives under the server option) to whitelist it:

```js
server: {
  allowedHosts: ['edureliefsl.xyz']
}
```

Once I did these three things, the "Unhealthy" status turned "Healthy", and the site came online!









