> **Note:** This repository contains the guide documentation source. To view the guide in published form, view it on the Open Liberty website.
Explore how to deploy microservices to Google Kubernetes Engine (GKE) on Google Cloud Platform (GCP).
You will learn how to deploy two microservices in Open Liberty containers to a Kubernetes cluster on Google Kubernetes Engine (GKE).
Kubernetes is an open source container orchestrator that automates many tasks involved in deploying, managing, and scaling containerized applications. If you would like to learn more about Kubernetes, check out the Deploying microservices to Kubernetes guide.
There are different cloud-based solutions for running your Kubernetes workloads. A cloud-based infrastructure enables you to focus on developing your microservices without worrying about low-level infrastructure details for deployment. Using a cloud helps you to easily scale and manage your microservices in a high-availability setup.
Google Cloud Platform offers a managed Kubernetes service called Google Kubernetes Engine (GKE). Using GKE simplifies the process of running Kubernetes on Google Cloud Platform without needing to install or maintain your own Kubernetes control plane. It provides a hosted Kubernetes cluster that you can deploy your microservices to. You will use GKE with a Google Container Registry (GCR). GCR is a private registry that is used to store and distribute your container images. Note, since GKE is not free, a small cost is associated with running this guide. See the official GKE pricing documentation for more details.
The two microservices you will deploy are called system and inventory.
The system microservice returns the JVM system properties of the running container.
It also returns the name of the pod in the HTTP header, making replicas easy to distinguish from each other.
The inventory microservice adds the properties from the system microservice to the inventory.
This demonstrates how communication can be established between pods inside a cluster.
Before you begin, the following additional tools need to be installed:

- **Google Account:** To run this guide and use Google Cloud Platform, you need a Google account. If you do not have an account already, navigate to the Google account sign-up page to create one.
- **Google Cloud Platform Account:** Visit the Google Cloud Platform console to link your Google account to Google Cloud Platform. A free trial Google Cloud Platform account provides $300 of credit over 12 months and has a sufficient amount of resources to run this guide. Sign up for a free trial account to try Google Cloud Platform for free.
- **Google Cloud SDK - CLI:** You need the `gcloud` command-line tool that is included in the Google Cloud SDK. See the official Install the Cloud SDK: Command Line Interface documentation for information about setting up the Google Cloud Platform CLI for your platform. To verify that `gcloud` is installed correctly, run the following command:

        gcloud info

- **kubectl:** You need the Kubernetes command-line tool `kubectl` to interact with your Kubernetes cluster. If you do not have `kubectl` installed already, use the Google Cloud Platform CLI to download and install it with the following command:

        gcloud components install kubectl
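As a quick sanity check before you continue, you can verify that the tools this guide relies on are available on your PATH. This is a minimal sketch; the tool list covers only the three command-line tools used in this guide:

```shell
# Check that the CLI tools used in this guide are on the PATH.
for tool in gcloud kubectl mvn; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: NOT FOUND"
  fi
done
```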
To create a Google Cloud project, first initialize the Google Cloud SDK by performing the gcloud initial setup.
The gcloud init command launches an interactive setup that creates or modifies configuration for gcloud,
such as setting the user account or specifying the project to use:
    gcloud init

Follow the prompts to log in with your Google Cloud Platform account. This authorizes the Google Cloud SDK to access Google Cloud Platform using your account credentials.
If you do not have any projects on your account, you will be prompted to create one. Otherwise, select the option to create a new project.
You will need to specify a Project ID for your project. Enter a Project ID that is unique within Google Cloud and matches the pattern described in the prompt.
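If you want to pre-check a candidate ID before entering it, the documented constraints (6 to 30 characters; lowercase letters, digits, and hyphens; starting with a letter and not ending with a hyphen) can be approximated with a regular expression. A sketch, using an assumed example ID:

```shell
# Approximate check of the Project ID pattern described in the prompt:
# 6-30 characters; lowercase letters, digits, and hyphens;
# must start with a letter and must not end with a hyphen.
CANDIDATE=my-sample-project  # assumed example ID; use your own
if echo "$CANDIDATE" | grep -Eq '^[a-z][a-z0-9-]{4,28}[a-z0-9]$'; then
  echo "valid"
else
  echo "invalid"
fi
```

Note that this only checks the format; the ID must also be unique within Google Cloud, which only the interactive prompt can confirm.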
If the Project ID is available to use, you see the following output:
Your current project has been set to: [project-id]. ... Your Google Cloud SDK is configured and ready to use!
Make sure that billing is enabled for your project so that you can use its Google Cloud services. Follow the Modify a Project’s Billing Settings documentation to enable billing for your Google Cloud project.
To run this guide, you need to use certain Google Cloud services, such as the
Compute Engine API, Cloud Build API, and the Kubernetes Engine API.
You will use the Compute Engine API to set the default Compute Engine region and zone where the
resources for your cloud deployments are hosted.
The Cloud Build API allows you to build container images and push them to a Google Container Registry.
Your private container registry manages and stores the container images that you build in later steps.
To deploy your application to Google Kubernetes Engine (GKE), you will need to enable the Kubernetes Engine API.
The container images that you build will run on a Google Kubernetes Engine cluster.
Enable the necessary Google Cloud APIs for your project using the gcloud services enable command.
To see a list of Google Cloud APIs and services that are available for your project, run the following command:
    gcloud services list --available

You see an output similar to the following:
    NAME                                     TITLE
    abusiveexperiencereport.googleapis.com   Abusive Experience Report API
    cloudbuild.googleapis.com                Cloud Build API
    composer.googleapis.com                  Cloud Composer API
    compute.googleapis.com                   Compute Engine API
    computescanning.googleapis.com           Compute Scanning API
    contacts.googleapis.com                  Contacts API
    container.googleapis.com                 Kubernetes Engine API
    containeranalysis.googleapis.com         Container Analysis API
    containerregistry.googleapis.com         Container Registry API
The NAME field is the value that you need to pass into the gcloud services enable command to enable an API.
Run the following command to enable the Compute Engine API, Cloud Build API, and the Kubernetes Engine API:
    gcloud services enable compute.googleapis.com cloudbuild.googleapis.com container.googleapis.com

A Compute Engine region is a geographical location used to host your Compute Engine resources.
Each region is composed of multiple zones. For example, the asia-east1 region is divided into
multiple zones: asia-east1-a, asia-east1-b, and asia-east1-c.
Some resources are limited to specific regions or zones, and other resources are available across all regions.
See the
Global, Regional, and Zonal Resources
documentation for more details.
If resources are created without specifying a region or zone, these new resources run in the default location for your project. The metadata for your resources is stored at this Google Cloud location.
See the list of available zones and their corresponding regions for your project:
    gcloud compute zones list

You see an output similar to the following:
    NAME                        REGION                    STATUS
    us-west1-b                  us-west1                  UP
    us-west1-c                  us-west1                  UP
    us-west1-a                  us-west1                  UP
    europe-west1-b              europe-west1              UP
    europe-west1-d              europe-west1              UP
    europe-west1-c              europe-west1              UP
    asia-east1-b                asia-east1                UP
    asia-east1-a                asia-east1                UP
    asia-east1-c                asia-east1                UP
    southamerica-east1-b        southamerica-east1        UP
    southamerica-east1-c        southamerica-east1        UP
    southamerica-east1-a        southamerica-east1        UP
    northamerica-northeast1-a   northamerica-northeast1   UP
    northamerica-northeast1-b   northamerica-northeast1   UP
    northamerica-northeast1-c   northamerica-northeast1   UP
To set the default Compute Engine region and zone, run the gcloud config set compute command.
Remember to replace [region] and [zone] with a region and zone that are available for your project.
Make sure that your zone is within the region that you set.
    gcloud config set compute/region [region]
    gcloud config set compute/zone [zone]

The starting Java project, which you can find in the start directory, is a multi-module Maven
project. It is made up of the system and inventory microservices. Each microservice exists in its own directory,
start/system and start/inventory. Both of these directories contain a Dockerfile, which is necessary
for building the container images. If you’re unfamiliar with Dockerfiles, check out the
Containerizing microservices guide.
Navigate to the start directory and run the following command:
    mvn package

Now that your microservices are packaged, build your container images using Google Cloud Build.
Instead of having to install Docker locally to containerize your application, you can use Cloud Build’s
`gcloud builds submit --tag` command to build a Docker image from a Dockerfile and push the image to a container registry.
Using Cloud Build is similar to running the docker build and docker push commands.
The `gcloud builds submit --tag` command must be run from the directory that contains the Dockerfile.
You will build images for system and inventory by running the `gcloud builds submit --tag` command
from both the start/system and start/inventory directories.
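Because the two builds differ only in the microservice name, the commands can be derived from a single template. The sketch below only prints the commands to run; `my-sample-project` is an assumed placeholder for your Project ID:

```shell
# Print the Cloud Build command for each microservice.
# PROJECT_ID is an assumed example; substitute your own Project ID,
# for example via: PROJECT_ID=$(gcloud config get-value project)
PROJECT_ID=my-sample-project
for svc in system inventory; do
  echo "gcloud builds submit --tag gcr.io/${PROJECT_ID}/${svc}:1.0-SNAPSHOT start/${svc}"
done
```

Because `gcloud builds submit` accepts an optional source directory as a positional argument, passing `start/system` or `start/inventory` lets you run both builds from the start directory instead of changing into each one.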
Navigate to the start/system directory.
Build the system image and push it to your container registry using Cloud Build.
Your container registry is located at gcr.io/[project-id].
Replace [project-id] with the Project ID that you previously defined for your Google Cloud project.
To get the Project ID for your project, run the gcloud config get-value project command.
    gcloud builds submit --tag gcr.io/[project-id]/system:1.0-SNAPSHOT

If the system image builds and pushes successfully, you see the following output:
    DONE
    ------------------------------------------------------------------------------------------------------------------------
    ID                                    CREATE_TIME                DURATION  SOURCE                                                                                   IMAGES                                   STATUS
    30a71b4c-3481-48da-9faa-63f689316c3b  2020-02-12T16:22:33+00:00  1M37S     gs://[project-id]_cloudbuild/source/1581524552.36-65181b73aa63423998ae8ecdfbaeddff.tgz   gcr.io/[project-id]/system:1.0-SNAPSHOT  SUCCESS
Navigate to the start/inventory directory.
Build the inventory image and push it to your container registry using Cloud Build:
    gcloud builds submit --tag gcr.io/[project-id]/inventory:1.0-SNAPSHOT

You see the following output:
    DONE
    ------------------------------------------------------------------------------------------------------------------------
    ID                                    CREATE_TIME                DURATION  SOURCE                                                                                   IMAGES                                      STATUS
    edbf9f6f-f01b-46cf-a998-594ad2df9bb3  2020-02-12T16:25:49+00:00  1M11S     gs://[project-id]_cloudbuild/source/1581524748.42-445ddab4cd3b4ba18e28a965e3942cea.tgz   gcr.io/[project-id]/inventory:1.0-SNAPSHOT  SUCCESS
To verify that the images are built, run the following command to list all existing container images for your project:
    gcloud container images list

Your two images, system and inventory, should appear in the list of all container images:
    NAME
    gcr.io/[project-id]/inventory
    gcr.io/[project-id]/system
To create your GKE cluster, use the gcloud container clusters create command.
When the cluster is created, the command outputs information about the cluster.
You might need to wait while your cluster is being created.
Replace [cluster-name] with a name that you want for your cluster.
The name for your cluster must only contain lowercase alphanumeric characters and -,
and must start with a letter and end with an alphanumeric character.
    gcloud container clusters create [cluster-name] --num-nodes 1

When your cluster is successfully created, you see the following output:
    NAME            LOCATION  MASTER_VERSION  MASTER_IP     MACHINE_TYPE   NODE_VERSION    NUM_NODES  STATUS
    [cluster-name]  [zone]    1.13.11-gke.23  35.203.77.52  n1-standard-1  1.13.11-gke.23  1          RUNNING
Since a zone was not specified, your cluster was created in the default zone that you previously defined.
The option --num-nodes creates a cluster with a certain number of nodes in the Kubernetes node pool.
By default, if this option is excluded, three nodes are assigned to the node pool.
You created a single-node cluster since this application does not require a large amount of resources.
Run the following command to check the status of the available node in your GKE cluster:
    kubectl get nodes

The kubectl get nodes command outputs information about the node. The STATUS of the node should be Ready.
    NAME                                            STATUS   ROLES    AGE   VERSION
    gke-[cluster-name]-default-pool-be4471fe-qnl6   Ready    <none>   46s   v1.13.11-gke.23
Now that your container images are built and you have created a Kubernetes cluster, you can deploy the images using a Kubernetes resource definition.
A Kubernetes resource definition is a YAML file that contains a description of all your
deployments, services, and any other resources that you want to deploy. All resources can
also be deleted from the cluster by using the same YAML file that you used to deploy them.
The kubernetes.yaml resource definition file is provided for you. If you are interested
in learning more about the Kubernetes resource definition, check out the
Deploying microservices to Kubernetes
guide.
Update the kubernetes.yaml file.
Replace [project-id] with your Project ID.
You can get the Project ID for your project by running the gcloud config get-value project command.
kubernetes.yaml
link:finish/kubernetes.yaml[role=include]

The image is the name and tag of the container image that you want
to use for the container. The kubernetes.yaml file references the images that you pushed to your registry
for the system and inventory repositories.
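The [project-id] substitution can also be scripted. A minimal sketch, applied here to a single sample line so the effect is visible; `my-sample-project` is an assumed ID, and to update the real file you could point the same sed expression at kubernetes.yaml:

```shell
# Substitute an assumed Project ID into a sample image reference.
PROJECT_ID=my-sample-project
echo "image: gcr.io/[project-id]/system:1.0-SNAPSHOT" \
  | sed "s/\[project-id\]/${PROJECT_ID}/g"
# prints: image: gcr.io/my-sample-project/system:1.0-SNAPSHOT
```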
The service that is used to expose your deployments has a type of NodePort.
This means you can access these services from outside of your cluster via a specific port.
You can expose your services in other ways, such as by using a LoadBalancer service type or an Ingress.
In production, you would most likely use an Ingress.
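For reference, a NodePort service in a resource definition has roughly the following shape. This is a hypothetical sketch, not the contents of the provided kubernetes.yaml; the names and ports are assumptions based on the values used elsewhere in this guide (the `system-service` name, node port 31000, and Open Liberty's default HTTP port 9080):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: system-service
spec:
  type: NodePort
  selector:
    app: system        # matches the pod labels of the system deployment
  ports:
  - port: 9080         # port that the service exposes inside the cluster
    targetPort: 9080   # container port that traffic is forwarded to
    nodePort: 31000    # externally reachable port on each node
```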
To deploy your microservices to Google Kubernetes Engine, you need Kubernetes to create
the resources that are defined in the kubernetes.yaml file.
Navigate to the start directory and run the following command to deploy the resources defined in the kubernetes.yaml file:
    kubectl apply -f kubernetes.yaml

You will see an output similar to the following:
    deployment.apps/system-deployment created
    deployment.apps/inventory-deployment created
    service/system-service created
    service/inventory-service created
Run the following command to check the status of your pods:
    kubectl get pods

If all the pods are healthy and running, you see an output similar to the following:
    NAME                                    READY   STATUS    RESTARTS   AGE
    system-deployment-6bd97d9bf6-4ccds      1/1     Running   0          15s
    inventory-deployment-645767664f-nbtd9   1/1     Running   0          15s
To try out your microservices, you need to allow TCP traffic on your node ports, 31000 and 32000,
for the system and inventory microservices.
Create a firewall rule to allow TCP traffic on your node ports:
    gcloud compute firewall-rules create sys-node-port --allow tcp:31000
    gcloud compute firewall-rules create inv-node-port --allow tcp:32000

Take note of the EXTERNAL-IP in the output of the following command. It is the hostname that you will later substitute into [hostname]:
    kubectl get nodes -o wide

You see an output similar to the following:

    NAME                                  STATUS   ROLES    AGE   VERSION           INTERNAL-IP   EXTERNAL-IP
    gke-[cluster-name]-default-pool-be4   Ready    <none>   14m   v1.13.11-gke.23   10.162.0.2    35.203.106.216
To access your microservices, point your browser to the following URLs, substituting the appropriate [hostname] value:
- http://[hostname]:31000/system/properties
- http://[hostname]:32000/inventory/systems
In the first URL, you see a result in JSON format with the system properties of the container JVM. The second URL returns an empty list, which is expected because no system properties are stored in the inventory yet.
Point your browser to the http://[hostname]:32000/inventory/systems/system-service URL. When you visit this URL, these system
properties are automatically stored in the inventory. Go back to http://[hostname]:32000/inventory/systems and
you see a new entry for system-service.
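You can exercise the same flow from a terminal with curl. The sketch below only prints the curl commands rather than running them; the IP is the example EXTERNAL-IP shown earlier, so substitute your own:

```shell
# Print curl commands for the request flow described above.
# HOST is an assumed example; use the EXTERNAL-IP from `kubectl get nodes -o wide`.
HOST=35.203.106.216
for url in \
  "http://${HOST}:31000/system/properties" \
  "http://${HOST}:32000/inventory/systems/system-service" \
  "http://${HOST}:32000/inventory/systems"; do
  echo "curl -s $url"
done
```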
A few tests are included for you to test the basic functionality of the microservices. If a test failure occurs, then you might have introduced a bug into the code. To run the tests, wait for all pods to be in the ready state before you proceed further.
pom.xml
link:finish/inventory/pom.xml[role=include]

The default properties that are defined in the pom.xml file are:
| Property | Description |
|---|---|
| `cluster.ip` | IP or hostname for your cluster |
| `system.kube.service` | Name of the Kubernetes Service wrapping the `system` pods, `system-service` by default |
| `system.node.port` | The NodePort of the Kubernetes Service `system-service`, 31000 by default |
| `inventory.node.port` | The NodePort of the Kubernetes Service `inventory-service`, 32000 by default |
Run the Maven failsafe:integration-test goal to test your microservices, replacing [hostname]
with the value that you determined in the previous section:

    mvn failsafe:integration-test -Dcluster.ip=[hostname]

If the tests pass, you see an output similar to the following for each service:
    -------------------------------------------------------
     T E S T S
    -------------------------------------------------------
    Running it.io.openliberty.guides.system.SystemEndpointIT
    Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.673 sec - in it.io.openliberty.guides.system.SystemEndpointIT

    Results:

    Tests run: 1, Failures: 0, Errors: 0, Skipped: 0
    -------------------------------------------------------
     T E S T S
    -------------------------------------------------------
    Running it.io.openliberty.guides.inventory.InventoryEndpointIT
    Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.222 sec - in it.io.openliberty.guides.inventory.InventoryEndpointIT

    Results:

    Tests run: 4, Failures: 0, Errors: 0, Skipped: 0
It is important to clean up your resources when you are finished with the guide so that you do not incur extra charges for ongoing usage.
When you no longer need your deployed microservices, you can delete all Kubernetes resources
by running the kubectl delete command:

    kubectl delete -f kubernetes.yaml

Since you are done testing your cluster, clean up all of its related resources by using the gcloud container clusters delete command:

    gcloud container clusters delete [cluster-name]

Remove the container images from the container registry:

    gcloud container images delete gcr.io/[project-id]/system:1.0-SNAPSHOT gcr.io/[project-id]/inventory:1.0-SNAPSHOT

Delete your Google Cloud project:

    gcloud projects delete [project-id]

You have just deployed two microservices running in Open Liberty to Google Kubernetes Engine (GKE). You also
learned how to use kubectl to deploy your microservices on a Kubernetes cluster.