This project began as a containerized full-stack YOLO application built with Docker and Docker Compose, then automated using Vagrant and Ansible, and has now been extended to Kubernetes (GKE/Minikube) for full orchestration and scalability.
Final Project live server: http://34.123.108.130/
Install Docker Engine by following the instructions here:
Before you start, ensure you are logged in to Docker Hub.
Create the client Dockerfile and test the build with the following command:

```shell
docker build -t nancynaomy/naomi-yolo-client:v1.0.0 client/
```

The initial image size may be large, so optimization is recommended. After testing several base images, using alpine:3.16.7 resulted in a significantly smaller image:

With further optimization, the image size was reduced to 60.9 MB:
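One common way to achieve this kind of size reduction is a multi-stage build. The sketch below is an illustration only, not the exact Dockerfile used in this project — the Node-based build tooling, file layout, and `serve` runtime are all assumptions:

```dockerfile
# Build stage: compile the React client (Node toolchain assumed)
FROM node:16-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build

# Runtime stage: serve only the static build from a minimal alpine image
FROM alpine:3.16.7
RUN apk add --no-cache nodejs npm && npm install -g serve
WORKDIR /app
COPY --from=build /app/build ./build
EXPOSE 3000
CMD ["serve", "-s", "build", "-l", "3000"]
```

Because the final stage starts from bare alpine and copies in only the built assets, the node_modules tree and build toolchain never reach the final image.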

Similarly, to build the backend image, use the following command:

```shell
docker build -t nancynaomy/naomi-yolo-backend:v1.0.0 backend/
```

The image is created successfully:

Create a docker-compose.yaml file in the root folder:

```text
yolo
└── compose.yaml
```

- Define the client container as a service:
  - Specify the build directory.
  - Bind the port: `ports: - "3000:3000"`
  - Attach to a network.
- Define the backend container similarly and attach it to the same network.
- Define a database container and connect it to the backend container using a separate network.
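Putting the steps above together, the compose file might look roughly like this (a sketch — the service, network, and volume names here are assumptions, not the exact file used):

```yaml
services:
  client:
    build: ./client            # build directory for the frontend
    ports:
      - "3000:3000"            # bind host port 3000 to the container
    networks:
      - app-network
  backend:
    build: ./backend
    ports:
      - "5000:5000"
    networks:
      - app-network            # shared with the client
      - db-network             # shared with the database
  mongo:
    image: mongo
    volumes:
      - mongo-data:/data/db    # persist database files
    networks:
      - db-network

networks:
  app-network:
  db-network:

volumes:
  mongo-data:
```

The two networks keep the database reachable only from the backend, while the client and backend share app-network.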
The frontend and backend Docker images have been successfully created:
Next, we used Docker Compose to pull the database and build the services:

```shell
docker compose up --build
```

A successful build. The website is now accessible at 0.0.0.0:3000:

The image is persistent even if you restart the container.
My final images
This repo automates deploying a containerized YOLO e-commerce site using Infrastructure as Code. Vagrant creates the VM, Ansible installs and configures everything, and Docker runs the app services — so you can get the entire system up and running with a single vagrant up.
1. Vagrant creates the VM.
2. Ansible installs dependencies and configures the application stack.
3. Docker runs the app services.

All of this enables you to get the entire system up and running with a single command:

```shell
vagrant up
```
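A minimal Vagrantfile for this flow could look like the following sketch — the box name, forwarded port, and playbook path are assumptions, not the exact file in this repo:

```ruby
Vagrant.configure("2") do |config|
  # Ubuntu base box (assumed)
  config.vm.box = "ubuntu/focal64"

  # Forward the frontend port so the site is reachable from the host
  config.vm.network "forwarded_port", guest: 3000, host: 3000

  # Hand all provisioning over to the Ansible playbook
  config.vm.provision "ansible" do |ansible|
    ansible.playbook = "playbook.yml"
  end
end
```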
1. Common Role
- Sets up the essential system tools and base configuration for Docker deployment.

Tasks
- Configures base dependencies and updates system packages.
- Installs git and docker.io.
- Downloads and installs the latest version of Docker Compose and grants the appropriate rights to the owner.
- Starts and enables the Docker service.
- Creates a Docker network (yolo-network) to link all containers.
- Clones the YOLO application repository from GitHub into /home/vagrant/yolo-app.
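The common role's tasks could be expressed roughly as follows, using standard Ansible and community.docker modules (a sketch, not the exact main.yml — in particular the repository URL is a placeholder):

```yaml
- name: Update apt cache and install base packages
  ansible.builtin.apt:
    name: [git, docker.io]
    state: present
    update_cache: yes

- name: Start and enable the Docker service
  ansible.builtin.service:
    name: docker
    state: started
    enabled: yes

- name: Create the shared Docker network
  community.docker.docker_network:
    name: yolo-network

- name: Clone the YOLO application repository
  ansible.builtin.git:
    repo: https://github.com/example/yolo.git   # placeholder URL
    dest: /home/vagrant/yolo-app
```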
2. Database Role
- Deploys the MongoDB database container.

Tasks
- Pulls the official MongoDB image (mongo:3.0).
- Creates a container named mongo-db.
- Exposes port 27017 internally for database communication.
- Mounts a persistent volume mongo-data for data storage.
- Connects the MongoDB container to the shared Docker network.
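As a sketch, the database role's container task might look like this (using the community.docker collection; not the exact main.yml):

```yaml
- name: Run the MongoDB container
  community.docker.docker_container:
    name: mongo-db
    image: mongo:3.0
    networks:
      - name: yolo-network      # shared network from the common role
    volumes:
      - mongo-data:/data/db     # persistent volume for database files
    exposed_ports:
      - "27017"                 # internal-only, not published to the host
```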
3. Backend Role
- Runs the backend image that communicates with the database.

Tasks
- Pulls the backend image: namanoo/naomi-yolo-backend:v1.0.0
- Creates a container named yolo_backend-container.
- Exposes port 5000 for API access.
- Links to the database container through the app-network network.
4. Frontend Role
- Deploys the client React application.

Tasks
- Pulls the frontend image from Docker Hub: namanoo/naomi-yolo-client:v1.0.0
- Creates a container called yolo-frontend1.
- Maps guest port 80 to host port 3000.
- Connects to the same network as the backend container for communication.
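The frontend role's container task could be sketched like so (again a community.docker sketch, not the exact main.yml):

```yaml
- name: Run the frontend container
  community.docker.docker_container:
    name: yolo-frontend1
    image: namanoo/naomi-yolo-client:v1.0.0
    networks:
      - name: app-network       # same network as the backend container
    published_ports:
      - "3000:80"               # host port 3000 -> guest port 80
```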
Below is part of the project structure used for the Ansible setup:

```text
├── roles/
│   ├── backend
│   │   └── tasks
│   │       └── main.yml
│   ├── common
│   │   └── tasks
│   │       └── main.yml
│   ├── database
│   │   └── tasks
│   │       └── main.yml
│   └── frontend
│       └── tasks
│           └── main.yml
├── Vagrantfile
├── ansible.cfg
├── playbook.yml
├── README.md
└── vars/
    └── main.yml
```

The Kubernetes stage deploys the YOLO app using a multi-pod architecture:
- MongoDB as a StatefulSet for persistent database storage
- Backend as a Deployment (2 replicas) connected to Mongo via internal DNS
- Frontend as a Deployment exposed via NodePort or Ingress
- ConfigMap and Secret used for environment configuration and credentials
- Namespace (yolo) used to isolate all project resources
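The MongoDB piece of this architecture might be sketched as the following StatefulSet (an illustrative sketch, not the exact mongo-statefulset.yaml — labels and the PVC name are assumptions, the 1Gi request follows the summary in this README):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo
  namespace: yolo
spec:
  serviceName: mongo            # headless service for stable DNS
  replicas: 1
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      labels:
        app: mongo
    spec:
      containers:
        - name: mongo
          image: docker.io/namanoo/naomi-yolo-mongodb:v1.0.2
          ports:
            - containerPort: 27017
  volumeClaimTemplates:
    - metadata:
        name: mongo-data        # PVC created per replica
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
```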
| Object Type | Name | Purpose |
|---|---|---|
| Namespace | yolo | Logical separation of all YOLO components |
| ConfigMap | yolo-backend-config | Stores non-sensitive app settings (PORT, CLIENT_ORIGIN) |
| Secret | yolo-secrets | Stores DB credentials (MONGO_ROOT_USERNAME, MONGO_ROOT_PASSWORD) |
| StatefulSet | mongo | Persistent MongoDB pod with attached PVC (1Gi–5Gi) |
| Deployment | yolo-backend | Backend API deployment (2 replicas) |
| Deployment | yolo-frontend | React frontend deployment |
| Service | mongo | ClusterIP service for DB access |
| Service | yolo-backend | ClusterIP service for internal API access |
| Service | yolo-frontend | NodePort service for browser access |
| Ingress | yolo-ingress | Optional HTTP routing for external access |
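The ConfigMap and Secret could be sketched as follows — the key names match the summary above, but every value here is a placeholder, not a real credential or the exact configmap-secret.yaml:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: yolo-backend-config
  namespace: yolo
data:
  PORT: "5000"                        # placeholder value
  CLIENT_ORIGIN: "http://yolo-frontend"
---
apiVersion: v1
kind: Secret
metadata:
  name: yolo-secrets
  namespace: yolo
type: Opaque
stringData:
  MONGO_ROOT_USERNAME: admin          # placeholder value
  MONGO_ROOT_PASSWORD: changeme       # placeholder value
```

The backend Deployment would then consume these via `envFrom` or individual `valueFrom` references, keeping credentials out of the pod spec itself.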
```text
manifests
├── 00-namespace.yaml
├── backend-deployment.yaml
├── configmap-secret.yaml
├── ingress.yaml
├── mongo-statefulset.yaml
├── secrets.yaml
└── web-deployment.yaml
```

- Create the namespace and apply the manifests:
```shell
kubectl apply -f manifests/
```

- Check resources.
- Access the frontend:
  - If using NodePort, run:

    ```shell
    minikube service yolo-frontend -n yolo
    ```

    The frontend will open in a browser.
  - If using Ingress, add the host entry to /etc/hosts.
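For the Ingress path, the /etc/hosts entry maps the Minikube IP to the Ingress host. Both values below are placeholders — get the real IP from `minikube ip` and the hostname from ingress.yaml:

```text
192.168.49.2  yolo.local
```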
Below is a summary of the containerized services deployed on Google Kubernetes Engine (GKE) as part of the YOLO application. Each service runs in its own pod and is managed via Kubernetes manifests, using images hosted on Docker Hub.
| Service | Image |
|---|---|
| Frontend | docker.io/namanoo/naomi-yolo-client:v1.0.2 |
| Backend | docker.io/namanoo/naomi-yolo-backend:v1.0.2 |
| MongoDB | docker.io/namanoo/naomi-yolo-mongodb:v1.0.2 |
These containers are orchestrated via Kubernetes Deployments and a StatefulSet (for MongoDB), with services exposing them internally and externally through a GKE Ingress for public access.
Create the cluster:

```shell
gcloud container clusters create yolo-cluster \
  --zone us-central1-a \
  --num-nodes 3 \
  --machine-type e2-medium \
  --disk-type pd-standard \
  --disk-size 30 \
  --addons HttpLoadBalancing
```

Fetch credentials so kubectl targets the new cluster:

```shell
gcloud container clusters get-credentials yolo-cluster --zone us-central1-a --project my-yolo-project-ip4
```

- Apply the manifests and verify the services:

```shell
kubectl apply -f manifests/
kubectl get svc -n yolo
```

Naomy Nancy














