```bash
# Create a simple pod
kubectl run nginx-pod --image=nginx

# Create a pod manifest without launching it
kubectl run nginx --image=nginx --dry-run=client -o yaml

# Check the image used by a pod
kubectl get pod <pod-name> -o jsonpath="{.spec.containers[*].image}"

# Delete a pod
kubectl delete pod webapp
```
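To see what the jsonpath query above actually returns, you can mimic it locally on a saved pod manifest. This is a hedged sketch: `/tmp/pod.json` and its contents are hypothetical, and the `sed` extraction is illustrative only; use kubectl's jsonpath against a real cluster in practice.

```shell
# Hypothetical saved pod manifest (what `kubectl get pod -o json` would include)
cat <<'EOF' > /tmp/pod.json
{"spec": {"containers": [{"name": "nginx", "image": "nginx:1.25"}]}}
EOF

# Extract the image field, mirroring {.spec.containers[*].image}
IMAGE=$(sed -n 's/.*"image": "\([^"]*\)".*/\1/p' /tmp/pod.json)
echo "$IMAGE"   # prints the container image, e.g. nginx:1.25
```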
```bash
# Create a pod using a YAML file (even if the image is wrong)
vim redis-pod.yaml
```

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: redis
spec:
  containers:
  - name: redis
    image: redis123 # wrong image intentionally
```

- Pod: Smallest deployable unit; runs containers.
- ReplicaSet: Ensures N pods are running; restarts crashed pods.
- Deployment: Manages ReplicaSets; supports updates, rollbacks.
Use Deployments in real projects.
- ReplicaSet: Ensures N identical pods
- Deployment: Manages ReplicaSets (rolling updates, rollbacks)
- StatefulSet: Ordered/stable pods (with identity)
- DaemonSet: Runs a pod on every node
- Job/CronJob: Run tasks to completion or on a schedule
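Job and CronJob appear in the list above but are not shown elsewhere in these notes. A minimal hedged sketch of a CronJob manifest (the name, schedule, and command are illustrative assumptions):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello-cron            # hypothetical name
spec:
  schedule: "*/5 * * * *"     # standard cron syntax: every 5 minutes
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            command: ["echo", "hello"]
          restartPolicy: OnFailure   # Jobs require OnFailure or Never
```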
```bash
kubectl apply -f /root/replicaset-definition-1.yaml
```

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: replicaset-2
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myimage
```

```bash
kubectl get rs
kubectl describe rs new-replica-set
kubectl edit rs new-replica-set
```

```bash
# Method 1: scale command
kubectl scale replicaset new-replica-set --replicas=2

# Method 2: edit directly
kubectl edit replicaset new-replica-set
# change replicas: 4 -> 2
```

```bash
kubectl delete replicaset replicaset-1 replicaset-2
```

```bash
kubectl create deployment --image=nginx nginx
```
```bash
# with 4 replicas
kubectl create deployment --image=nginx nginx --replicas=4 --dry-run=client -o yaml > nginx-deploy.yaml
kubectl apply -f nginx-deploy.yaml
```

```bash
kubectl get deployments
kubectl describe deployment <deployment-name>
```

```bash
kubectl apply -f /root/deployment-definition-1.yaml
```

Note: ensure `kind` is capitalized: `kind: Deployment`.
```bash
# Pods stuck in ImagePullBackOff
kubectl describe pod <pod-name>
# Check the Events section for image errors
```

```bash
kubectl get pods -o wide | grep ^newpods-
kubectl get pods -o wide
```

- Use Deployments over raw ReplicaSets
- Pods with invalid images show `ImagePullBackOff`
- YAML definitions must have correct selectors & labels
- Use `--dry-run=client -o yaml` to safely generate YAML before applying
- Always match `selector.matchLabels` with `template.metadata.labels` in ReplicaSets
A Kubernetes Service is a stable virtual IP (ClusterIP) that provides reliable access to a set of Pods, even if they come and go.
- ClusterIP (default): internal access only (inside the cluster)
- NodePort: opens a specific port on all Nodes for external access
- LoadBalancer: provisioned by cloud providers for external traffic
- ExternalName: maps service to a DNS name outside the cluster
- Load balance traffic across Pods using labels
- Decouple access from Pod IPs (which change)
- Enable stable discovery and communication
```bash
kubectl get services
kubectl get service <name> -o yaml
kubectl get endpoints <service-name>
```

```
NAME         ENDPOINTS                         AGE
kubernetes   172.17.0.2:6443,172.17.0.3:6443   25m
```

Each IP:PORT = 1 endpoint.
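Since each IP:PORT pair is one endpoint, you can count them by splitting the ENDPOINTS column on commas. A small shell sketch using the sample values above (hardcoded here for illustration; in practice the column would come from `kubectl get endpoints`):

```shell
# Sample ENDPOINTS column value from the output above
ENDPOINTS="172.17.0.2:6443,172.17.0.3:6443"

# One endpoint per comma-separated IP:PORT entry
COUNT=$(printf '%s\n' "$ENDPOINTS" | tr ',' '\n' | grep -c .)
echo "$COUNT"   # prints 2
```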
```bash
kubectl apply -f service-definition-1.yaml
```

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: myapp
  ports:
  - port: 80
    targetPort: 8080
  type: ClusterIP
```

This routes traffic from port 80 of the service to port 8080 of Pods with label `app=myapp`.
```yaml
apiVersion: v1                 # API version used for the Service object
kind: Service                  # Resource type is a Service
metadata:
  name: webapp-service         # Name of the service
  namespace: default           # Namespace where it will be created
spec:
  type: NodePort               # Exposes service on each Node's IP at a static port
  selector:                    # Matches pods with this label
    name: simple-webapp        # Targets pods with label name=simple-webapp
  ports:
  - port: 8080                 # Port exposed by the service (inside the cluster)
    targetPort: 8080           # Port on the container to forward to
    nodePort: 30080            # External port on the node (for public access)
```

```
User (via browser or curl)
        |
        v
NodeIP:30080 (nodePort)
        |
        v
Service: webapp-service
  - type: NodePort
  - port: 8080
        |
        v
Pod(s) with label: name=simple-webapp
  - containerPort: 8080
```

- User accesses `http://<NodeIP>:30080`
- NodePort forwards to the Service `port: 8080`
- Service forwards to `targetPort: 8080` on the matching Pods
- Default service type is `ClusterIP`
- To get endpoints: `kubectl get endpoints <svc>`
- Create services with YAML and `kubectl apply -f`
- Each endpoint = an IP:Port combo linked to the selected Pods
A Namespace is like a folder for your Kubernetes resources.
- Organize resources (pods, services, etc.)
- Isolate environments (e.g., `dev`, `test`, `prod`)
- Enable multi-team or multi-tenant clusters
- Apply resource limits and RBAC rules

Built-in namespaces:
- `default`: standard workspace
- `kube-system`: core system components
- `kube-public`: readable by all users
- `kube-node-lease`: for node heartbeat tracking
```bash
kubectl get namespaces
kubectl create namespace my-namespace
kubectl delete namespace my-namespace
kubectl get pods -n my-namespace
```

```bash
# Accessing pods in specific namespaces
kubectl get pods --namespace=dev
kubectl get pods --namespace=prod
kubectl get pods --namespace=default
kubectl get pods
kubectl get pods --all-namespaces

# Change default namespace for the current context
kubectl config set-context $(kubectl config current-context) --namespace=dev
kubectl config set-context $(kubectl config current-context) --namespace=prod
```

Use `-n` or `--namespace` to target a specific namespace.
A ResourceQuota restricts the total CPU, memory, and pod count usage per namespace.
- Prevent resource hogging
- Enforce fair sharing
- Ensure all teams declare resource limits in specs
```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota        # Quota name
  namespace: dev             # Applies to the 'dev' namespace
spec:
  hard:
    pods: "10"               # Max 10 pods allowed
    requests.cpu: "4"        # Total CPU requested ≤ 4 cores
    requests.memory: 5Gi     # Total memory requested ≤ 5Gi
    limits.cpu: "10"         # Max CPU limit ≤ 10 cores
    limits.memory: 10Gi      # Max memory limit ≤ 10Gi
```

```bash
kubectl create -f compute-quota.yaml
```

- Quotas apply at the namespace level
- Resource usage across all nodes is aggregated per namespace
- Nodes are not limited, just the namespace usage totals
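When a quota sets CPU/memory requests and limits as above, pods created in that namespace must declare their own requests and limits or the API server will reject them. A hedged sketch of a conforming pod spec (the pod name and values are illustrative assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: quota-friendly-pod   # hypothetical name
  namespace: dev             # must be in the quota'd namespace
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:              # counted against requests.cpu / requests.memory
        cpu: "250m"
        memory: 256Mi
      limits:                # counted against limits.cpu / limits.memory
        cpu: "500m"
        memory: 512Mi
```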
🔎 Question: What DNS name should the Blue app use to access the `db-service` in its own namespace `marketing`?

- Short DNS (within the same namespace): `db-service`
- Full DNS (from any namespace): `db-service.marketing.svc.cluster.local`

Use the short name when the client pod is in the same namespace.
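The full service DNS name follows a fixed pattern: `<service>.<namespace>.svc.<cluster-domain>`. A small shell sketch assembling it (assumes the default `cluster.local` cluster domain, which can differ per cluster):

```shell
SVC=db-service
NS=marketing
DOMAIN=cluster.local   # default cluster domain; check your cluster's DNS config
FQDN="${SVC}.${NS}.svc.${DOMAIN}"
echo "$FQDN"   # prints db-service.marketing.svc.cluster.local
```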
**Imperative:**
- You tell Kubernetes what to do right now using the CLI.
- Fast and direct, but not easily repeatable or versioned.

**Declarative:**
- You describe the desired state in a YAML file.
- Great for version control, automation, and repeatability.
```bash
kubectl run nginx --image=nginx                       # Run a pod
kubectl create deployment nginx --image=nginx         # Create a deployment
kubectl expose deployment nginx --port=80             # Expose service
kubectl edit deployment nginx                         # Edit live object
kubectl scale deployment nginx --replicas=5           # Scale replicas
kubectl set image deployment nginx nginx=nginx:1.18   # Update image
```

```bash
kubectl create -f nginx.yaml    # Create from YAML
kubectl replace -f nginx.yaml   # Replace
kubectl delete -f nginx.yaml    # Delete
```

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
    type: front-end
spec:
  containers:
  - name: nginx-container
    image: nginx
```

```yaml
...
spec:
  containers:
  - name: nginx-container
    image: nginx:1.18
...
```

| Type | Command Example | Stored In | Best For |
|---|---|---|---|
| Imperative | `kubectl run`, `kubectl edit` | In-memory | Quick fixes, testing |
| Declarative | `kubectl apply -f file.yaml` | Source files | CI/CD, automation |
- Use `--dry-run=client -o yaml` to generate resource definitions without creating them:

```bash
kubectl run nginx --image=nginx --dry-run=client -o yaml > nginx.yaml
```

Modify and apply:

```bash
kubectl apply -f nginx.yaml
```

```bash
kubectl run nginx --image=nginx
kubectl run nginx --image=nginx --dry-run=client -o yaml
kubectl create deployment nginx --image=nginx
kubectl scale deployment nginx --replicas=4
kubectl create deployment nginx --image=nginx --dry-run=client -o yaml > nginx-deployment.yaml
kubectl expose pod redis --port=6379 --name=redis-service --type=ClusterIP
kubectl expose pod nginx --type=NodePort --port=80 --name=nginx-service
```

To specify a nodePort, generate the YAML first, then edit:

```bash
kubectl expose pod nginx --type=NodePort --port=80 --dry-run=client -o yaml > svc.yaml
```

Add:

```yaml
nodePort: 30080
```

Then:

```bash
kubectl apply -f svc.yaml
```

```bash
kubectl run nginx-pod --image=nginx:alpine
kubectl run redis --image=redis:alpine --labels=tier=db
kubectl run redis --image=redis:alpine --dry-run=client -o yaml > redis.yaml
# edit file to add:
# metadata.labels.tier: db
kubectl apply -f redis.yaml
kubectl expose pod redis --name=redis-service --port=6379 --target-port=6379 --type=ClusterIP
kubectl create deployment webapp --image=kodekloud/webapp-color --replicas=3
kubectl create namespace dev-ns
kubectl create deployment redis-deploy --image=redis --replicas=2 -n dev-ns
kubectl run httpd --image=httpd:alpine --port=80
kubectl expose pod httpd --port=80 --target-port=80 --name=httpd --type=ClusterIP
```

Imperative is great for speed and experimentation. Declarative is essential for automation, consistency, and production-grade deployments.
When you use `kubectl apply -f file.yaml`, Kubernetes performs a 3-way merge between:

- Your local file – the new desired state (e.g. `nginx.yaml`)
- Last-applied-configuration – what you applied last time (stored as an annotation on the live object)
- Live object configuration – the current state of the object in the cluster
This is the YAML you apply:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
  - name: nginx-container
    image: nginx:1.19
```

This is stored as an annotation in the live object under:

```yaml
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: '{ ... json ... }'
```

This helps kubectl detect what changed between the last apply and the current YAML.
This is what actually runs in the cluster. It may have been changed manually or by controllers:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
  - name: nginx-container
    image: nginx:1.19
status:
  conditions:
  - type: Initialized
    status: "True"
```

On `kubectl apply`, kubectl will:

- Compare your YAML with the last-applied-configuration.
- Detect changes.
- Patch only the differences into the live object.
- Update the annotation with the new `last-applied-configuration`.
- Manual changes to the live object not reflected in YAML will be overwritten.
- Always re-apply using updated YAMLs to ensure drift doesn't happen.
- For safer updates: avoid editing live objects manually (e.g., with `kubectl edit`).
- Always use `kubectl apply` for declarative workflows.
- Use `kubectl diff -f file.yaml` to preview changes.
- If needed, use `kubectl replace` for a full overwrite (not recommended for live apps).
```bash
kubectl apply -f my.yaml            # Apply declarative config
kubectl get pod myapp-pod -o yaml   # View live object
kubectl annotate ...                # View/edit annotations
```

```bash
kubectl get pod myapp-pod -o json | jq -r '.metadata.annotations["kubectl.kubernetes.io/last-applied-configuration"]' | jq .
```

This will pretty-print the last-applied config from the annotation (in JSON).
By understanding the 3-way diff and how kubectl apply works, you can safely manage your YAML-driven configurations with confidence. 💡
Imperative commands are direct kubectl commands used to create, update, or delete Kubernetes resources without YAML manifests. Great for quick testing and small tasks.
```bash
kubectl run <pod-name> --image=<image-name>
# Example:
kubectl run nginx-pod --image=nginx:alpine
```

```bash
kubectl run <pod-name> --image=<image-name> --labels=key=value
# Example:
kubectl run redis --image=redis:alpine --labels=tier=db
```

```bash
kubectl expose pod <pod-name> --port=<port> --name=<service-name>
# Example:
kubectl expose pod redis --port=6379 --name=redis-service
```

```bash
kubectl create deployment <name> --image=<image-name> --replicas=<n>
# Example:
kubectl create deployment webapp --image=kodekloud/webapp-color --replicas=3
```

```bash
kubectl scale deployment <name> --replicas=<n>
```

```bash
kubectl run <pod-name> --image=<image-name> --dry-run=client -o yaml > pod.yaml
# Edit pod.yaml as needed, e.g., add containerPort, labels, etc.
kubectl apply -f pod.yaml
```

```bash
kubectl create namespace <namespace-name>
# Example:
kubectl create namespace dev-ns
```

```bash
kubectl create deployment <name> --image=<image-name> --replicas=<n> -n <namespace>
# Example:
kubectl create deployment redis-deploy --image=redis --replicas=2 -n dev-ns
```

```bash
kubectl get pods
kubectl get svc
kubectl get deployments
kubectl get namespaces
kubectl get pods -n <namespace>
```

```bash
kubectl describe pod <pod-name>
kubectl describe svc <service-name>
```

```bash
kubectl delete pod <pod-name>
kubectl delete deployment <name>
kubectl delete svc <name>
kubectl delete namespace <namespace-name>
```
## 🧭 Scheduling (CKA Cheatsheet)
This section covers Kubernetes pod scheduling essentials, especially useful for CKA exam scenarios.
---
### 🔍 Investigating Pending Pods
```bash
kubectl describe pod <pod-name>
```

- Check for `Node: <none>` → unscheduled
- Look for events at the bottom (taints, no resources, etc.)
```bash
kubectl get nodes
kubectl describe node <node-name>
```

- Confirm node readiness and check for taints

```bash
kubectl get pods -n kube-system
```

- Ensure all control plane components (especially `kube-scheduler`) are running
- 🚫 Missing scheduler → `kube-scheduler` not running
- 🚫 Taints on nodes → no matching tolerations
- 🚫 No nodes available → all unschedulable
- 🚫 Resource requests too high → can't fit on any node
- ⚠️ NodeSelector/Affinity mismatch
- Assign a pod to a specific node:

```yaml
spec:
  nodeName: node01
```

- Apply the YAML:

```bash
kubectl apply -f nginx.yaml
```

- Check placement:

```bash
kubectl get pods -o wide
```

To schedule on the control-plane node (typically tainted with NoSchedule):

```yaml
spec:
  nodeName: controlplane
  tolerations:
  - key: "node-role.kubernetes.io/control-plane"
    operator: "Exists"
    effect: "NoSchedule"
```

Apply and verify:

```bash
kubectl delete pod nginx
kubectl apply -f nginx.yaml
kubectl get pods -o wide
```

```bash
kubectl get pods -A                                        # All namespaces
kubectl describe node <node>                               # See taints, allocatable resources
kubectl get events --sort-by=.metadata.creationTimestamp   # View recent events
```

✅ Tip for exam: If a pod is Pending with no events, check if the scheduler is even running.
| Feature | Labels | Annotations |
|---|---|---|
| Purpose | Identify, group, and select | Store arbitrary non-identifying metadata |
| Queryable | ✅ Yes (e.g. via `kubectl get`) | ❌ No |
| Used by | Selectors (e.g. Services, RS) | Internal tools, clients, audits |
- Key-value pairs attached to objects (pods, nodes, etc.)
- Used to organize, filter, and select resources
```yaml
metadata:
  labels:
    app: myapp
    role: frontend
```

Attach labels to Pods, Deployments, ReplicaSets, etc.
Used to select resources matching specific labels.
Examples:
```bash
kubectl get pods --selector app=myapp
kubectl get pods -l app=myapp,role=frontend
```

ReplicaSet:
- Has its own `labels:` (for metadata)
- Uses `selector.matchLabels:` to match pods

Pods (template):

- Labels under `template.metadata.labels` must match the selector of the ReplicaSet
```yaml
# replicaset-example.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: rs-webapp
  labels:
    app: webapp
    role: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp      # Must match selector
        role: frontend
    spec:
      containers:
      - name: webserver
        image: nginx
```

- Attach arbitrary metadata to objects
- Not used for selectors or grouping
```yaml
metadata:
  annotations:
    release.version: "v2.1"
```

Useful for:
- Build info
- Tooling metadata
- Documentation
Count all pods in the dev environment:

```bash
kubectl get pods -l env=dev --no-headers | wc -l
```

Count all pods in the finance business unit:

```bash
kubectl get pods -l bu=finance --no-headers | wc -l
```

Count all objects in the prod environment:

```bash
kubectl get all -A -l env=prod --no-headers | wc -l
```

Find the pod in the prod environment, finance BU, frontend role:

```bash
kubectl get pods -l env=prod,bu=finance,role=frontend
```

Original (incorrect):
```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: rs-broken
spec:
  replicas: 2
  selector:
    matchLabels:
      role: api
  template:
    metadata:
      labels:
        role: db   # ❌ does not match selector
```

Fixed version:
```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: rs-fixed
spec:
  replicas: 2
  selector:
    matchLabels:
      role: api
  template:
    metadata:
      labels:
        role: api   # ✅ matches selector
    spec:
      containers:
      - name: backend
        image: httpd
```

```bash
kubectl apply -f rs-fixed.yaml
kubectl get rs
kubectl get pods -l role=api
```