This project deploys GKE and additional components to GCP using GitHub Actions or, alternatively, Azure DevOps Pipelines, backed by Terraform and Ansible. The Jumphost is equipped with tooling such as kubectl, Helm, ArgoCD and sample K8S manifests. Both GKE and the Jumphost are pre-configured to deliver logs and metrics to GCP Cloud Logging and Cloud Monitoring.
The project assumes a private GKE cluster not exposed to the external world. However, the GKE setup is preconfigured to permit external access to the API server on demand by running `gcloud container clusters update CLUSTER --master-authorized-networks CIDR` or by updating the respective Terraform variables (see the sketch below). Documentation and provisioning code reflect the GKE isolation; normally many things would be easier with a public GKE cluster.
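A minimal sketch of toggling API-server access from a workstation, assuming a regional cluster and placeholder names/CIDR; recent gcloud versions typically require `--enable-master-authorized-networks` alongside the CIDR list:

```bash
# Allow API-server access from a single workstation IP (placeholder values)
gcloud container clusters update my-gke-cluster \
  --region europe-central2 \
  --enable-master-authorized-networks \
  --master-authorized-networks 203.0.113.10/32

# Revoke external access again when finished
gcloud container clusters update my-gke-cluster \
  --region europe-central2 \
  --no-enable-master-authorized-networks
```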
The project assumes pre-existing GCP projects (a limitation of the GCP Free Tier).
- create a GCP project, e.g. `workload-318005`, with activated billing and a few APIs enabled:

  ```bash
  gcloud beta billing projects link PROJECT_ID --billing-account=BILLING_ACCOUNT
  gcloud services enable \
    cloudresourcemanager.googleapis.com \
    secretmanager.googleapis.com
  ```
- create a GCP Service account (SA) and store the SA JSON key file (see the SA sketch after this list).
- create a GCP Cloud Storage bucket for the Terraform state (tfstate) in the Workload project:

  ```bash
  gsutil mb -p workload-318005 -c standard -l europe-central2 -b on gs://tfstate_PROJECT_ID_gke-deployer
  ```
- create a GitHub Actions Secret `GCP_SA`. Remove newlines before importing the JSON file into the GitHub UI:

  ```bash
  jq -c . GCP_SA.json
  ```
- create a GitHub Actions Secret `GCP_SSH_PRIVATE_KEY` for Jumphost access. GitHub Actions supports multiline secrets, which is not the case for Azure DevOps. Either way, consider storing multiline values as base64-encoded strings and decoding them in the pipeline (see the base64 sketch after this list).
- Deploy GKE and components using GitHub Actions or, alternatively, Azure DevOps Pipelines.
- Deploy an Argo CD Application manifest pointing at your application (see the Application sketch after this list).
- Infrastructure logging and monitoring work out of the box. Application monitoring using `kube-prometheus-stack` requires configuring custom ServiceMonitors and/or PodMonitors, as used here (see the ServiceMonitor sketch after this list).
- Deploy the sample application using ArgoCD.
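A minimal sketch of creating the SA and its JSON key from the list above (the SA name and the broad `roles/editor` binding are assumptions; narrow the roles down for real use):

```bash
# Create the deployer SA in the workload project (names are illustrative)
gcloud iam service-accounts create gke-deployer \
  --project workload-318005 --display-name "GKE deployer"

# Grant the roles required by Terraform/Ansible (example: editor)
gcloud projects add-iam-policy-binding workload-318005 \
  --member "serviceAccount:gke-deployer@workload-318005.iam.gserviceaccount.com" \
  --role "roles/editor"

# Download the JSON key used by the pipelines
gcloud iam service-accounts keys create GCP_SA.json \
  --iam-account gke-deployer@workload-318005.iam.gserviceaccount.com
```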
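A minimal sketch of the base64 round-trip for the multiline SSH key secret mentioned above (file and secret names are illustrative):

```bash
# Locally: encode the private key into a single line and store it as a secret
base64 -w0 ~/.ssh/jumphost_id_rsa

# In the pipeline: decode the secret back into a key file before use
echo "$GCP_SSH_PRIVATE_KEY" | base64 -d > jumphost_id_rsa
chmod 600 jumphost_id_rsa
```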
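A minimal sketch of an Argo CD Application manifest for the deployment step above (repo URL, path and namespaces are placeholders):

```bash
cat <<'EOF' | kubectl apply -f -
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: sample-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/YOUR_ORG/YOUR_APP_REPO.git
    targetRevision: main
    path: k8s
  destination:
    server: https://kubernetes.default.svc
    namespace: sample-app
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
EOF
```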
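A minimal sketch of a custom ServiceMonitor for kube-prometheus-stack, as referenced in the monitoring item above (labels, namespace and port name are assumptions and must match your Service and the Prometheus selector):

```bash
cat <<'EOF' | kubectl apply -f -
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: sample-app
  namespace: monitoring
  labels:
    release: kube-prometheus-stack   # must match the Prometheus serviceMonitorSelector
spec:
  selector:
    matchLabels:
      app: sample-app
  namespaceSelector:
    matchNames:
      - sample-app
  endpoints:
    - port: http-metrics
      path: /metrics
      interval: 30s
EOF
```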
Branches are organized as dev/stage/prod. The branch name is passed to the INFRA_ENV variable within the CI/CD workflow. Based on the INFRA_ENV variable, Terraform decides which *.tfvars file to use; Ansible utilizes the same variable as well.
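A minimal sketch of how a pipeline step can consume `INFRA_ENV` (the variable and file-name convention follow the description above; the actual workflow steps in this repo may differ):

```bash
# INFRA_ENV is derived from the branch name (dev/stage/prod)
INFRA_ENV="${GITHUB_REF_NAME:-dev}"

terraform init
terraform plan -var-file="${INFRA_ENV}.tfvars" -out=tfplan
terraform apply tfplan
```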
- Terraform `import` updates only the state file. Add the corresponding configuration block to the Terraform code, otherwise be prepared for the imported object to be deleted on the next `terraform apply` (see the sketch below).
- Application monitoring is not deployed yet in the Free Tier setup. Include e.g. https://github.com/jkosik/kube-prometheus-stack in the stack and define the process: who will deploy and customize alerts, app developers or the infra team?
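A minimal sketch of the import workflow for an existing GKE cluster (the resource address and import ID are illustrative and depend on the provider version and module layout):

```bash
# 1. Declare the matching resource block in the Terraform code first,
#    e.g. resource "google_container_cluster" "gke" { ... }
# 2. Import the existing object into the state
terraform import google_container_cluster.gke \
  projects/PROJECT_ID/locations/europe-central2/clusters/CLUSTER_NAME

# 3. Confirm the configuration matches the imported state
terraform plan
```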
In production, consider building a Master GCP Project to create and manage workload GCP Projects and workload SAs. Normally the Master GCP Project would contain the SA used for running Terraform provisioning of the workload GCP Projects and the resources within them. The GCP Free Tier does not allow an SA to create other GCP Projects; the workaround is to precreate the workload GCP Project and workload SA manually in advance. The ClusterAPI project could be a long-term way to go though: a Management CAPI cluster in every public cloud to deploy and bootstrap K8S clusters there.
This project uses SealedSecrets. The SealedSecrets controller is deployed to GKE and GKE specifics are configured as well. The kubeseal client is preconfigured on the Jumphost. SealedSecrets also supports offline sealing and "bring your own certificates", a consideration for multicloud and private clouds.
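A minimal sketch of sealing a Secret from the Jumphost, including the offline "bring your own certificate" variant (secret names and values are placeholders):

```bash
# Online sealing against the in-cluster controller
kubectl create secret generic db-creds --dry-run=client \
  --from-literal=password=changeme -o yaml > secret.yaml
kubeseal --format yaml < secret.yaml > sealed-secret.yaml
kubectl apply -f sealed-secret.yaml

# Offline sealing: fetch the public cert once, then seal without cluster access
kubeseal --fetch-cert > sealedsecrets-cert.pem
kubeseal --cert sealedsecrets-cert.pem --format yaml < secret.yaml > sealed-secret.yaml
```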
Instead of Terraform you can use a gcloud-powered deployment pipeline. Update `other/gke-deploy-gcloud/gke.vars` and run `gke-deploy-gcloud/deployment-local.sh` to build GKE from the console. Optionally use GitHub Actions.
`--region` creates an HA cluster, `--zone` creates a zonal non-HA GKE. `export GOOGLE_CREDENTIALS=GCP_SA.json` is sufficient for Terraform; gcloud needs `gcloud auth activate-service-account ...`
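A minimal sketch of the two authentication paths, assuming the `GCP_SA.json` key file from the earlier steps:

```bash
# Terraform (google provider) picks up the key file directly
export GOOGLE_CREDENTIALS=GCP_SA.json

# gcloud needs the service account activated explicitly
gcloud auth activate-service-account --key-file=GCP_SA.json
gcloud config set project workload-318005
```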
For more complex use cases, use a dynamic inventory for GCP and parse the output if needed:
```bash
ansible-galaxy collection install google.cloud
ansible-inventory -i inventory-dynamic-gcp.yaml --list
ansible -i inventory-dynamic-gcp.yaml all -m ping
```
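A minimal sketch of what `inventory-dynamic-gcp.yaml` can contain, using the `google.cloud.gcp_compute` inventory plugin (project, zone and key-file values are assumptions):

```bash
cat > inventory-dynamic-gcp.yaml <<'EOF'
plugin: google.cloud.gcp_compute
projects:
  - workload-318005
zones:
  - europe-central2-a
auth_kind: serviceaccount
service_account_file: GCP_SA.json
hostnames:
  - name
EOF
```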
