About
Made with:
- Adoptium Temurin OpenJDK 17.0.8
- Spring Boot v3.1.2
- Gradle 8.2.1
- IntelliJ IDEA 2023.1 (Ultimate Edition)
 
Download, install, and initialize the gcloud SDK on your local machine
Refer to the gcloud CLI documentation to complete this step.
Install the gcloud SDK to the user's home directory (e.g., /Users/USERNAME/google-cloud-sdk).
When it's finished installing, add the gcloud executable to your system's $PATH and run the command:
gcloud init

gcloud CLI: Application Default Credentials (ADC) usage
gcloud auth login
gcloud auth application-default login

gcloud CLI: Generate an Application Default Credentials (ADC) access token
If you're running the application locally, you can use the following command to generate an access token using Application Default Credentials (ADC):
gcloud auth application-default print-access-token

To store the token in an environment variable:

export GCP_ACCESS_TOKEN="$(gcloud auth application-default print-access-token)"

gcloud CLI: Generate an access token for service account impersonation
Run this command to generate an access token for a specific GCP service account:
export GCP_ACCESS_TOKEN=$(gcloud auth print-access-token --impersonate-service-account='GCP_SA_EMAIL_ADDRESS')

Replace the following:
GCP_SA_EMAIL_ADDRESS: the email address of the service account to impersonate.
Example:
export GCP_ACCESS_TOKEN=$(gcloud auth print-access-token --impersonate-service-account='gcp-tekton-sa@lift-with-your-legs-123456.iam.gserviceaccount.com')

Create and store a service account key
This section refers to usage of a GCP service account key (.json) file stored on your local file system.
To map a local gcloud installation to a volume on a container instance running the application, include the -v parameter in the docker run command that starts the instance, as described below.
Assuming the user's service account key file is stored in the same directory as their local gcloud installation:
/Users/USERNAME/.config/gcloud
export LOCAL_GCLOUD_AUTH_DIRECTORY=$HOME/.config/gcloud

and the target volume on the container instance is:
/root/.config/gcloud
export CONTAINER_GCLOUD_AUTH_DIRECTORY=/root/.config/gcloud

the command to run the container instance would be:
docker run --rm -it \
  -e GCP_SA_KEY_PATH=${GCP_SA_KEY_PATH} \
  -e GCP_ACCESS_TOKEN=${GCP_ACCESS_TOKEN} \
  -e GCP_PROJECT_ID=${GCP_PROJECT_ID} \
  -e BQ_DATASET=${BQ_DATASET} \
  -e BQ_TABLE=${BQ_TABLE} \
  -v ${LOCAL_GCLOUD_AUTH_DIRECTORY}:${CONTAINER_GCLOUD_AUTH_DIRECTORY} \
  -v ${LOCAL_MAVEN_REPOSITORY}:${CONTAINER_MAVEN_REPOSITORY} \
  java17-spring-gradle-bigquery-reference

Replace the following in the path to the gcloud directory:
USERNAME: the current OS user's username
so that the path to the service account key file is correct, e.g.:
/Users/squidmin/.config/gcloud/sa-private-key.json
Read here for more information about creating service account keys.
Read here for more information about run config CLI arguments.
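The docker run and activate-service-account commands above both reference GCP_SA_KEY_PATH. A minimal sketch of exporting it, assuming the key file name sa-private-key.json and the local gcloud config directory shown above (both hypothetical, adjust for your setup):

```shell
# Point GCP_SA_KEY_PATH at the service account key inside the local
# gcloud config directory, reusing the same variable the volume mount uses.
export LOCAL_GCLOUD_AUTH_DIRECTORY="${HOME}/.config/gcloud"
export GCP_SA_KEY_PATH="${LOCAL_GCLOUD_AUTH_DIRECTORY}/sa-private-key.json"
echo "${GCP_SA_KEY_PATH}"
```

Because the path is built from $HOME, the same export line works regardless of the OS username.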
Activate GCP service account
gcloud auth activate-service-account --key-file=${GCP_SA_KEY_PATH}

Replace the following:
GCP_SA_KEY_PATH: path to the user's service account key file.
Example:
gcloud auth activate-service-account --key-file='/Users/squidmin/.config/gcloud/sa-private-key.json'

Set the active GCP project
gcloud config set project ${GCP_PROJECT_ID}

List available gcloud SDK components
gcloud components list

Update gcloud SDK components
gcloud components update

CLI reference table: Run configuration
TODO
Build JAR
./gradlew clean build

Skip tests:

./gradlew clean build -x test

Compile test classes without running them:

./gradlew clean build testClasses -x test

Add manifest file
jar -cmvf \
  ./build/tmp/jar/MANIFEST.MF \
  ./build/libs/java17-spring-gradle-bigquery-reference-${REVISION}.jar \
  ./build/classes/java/main/org/squidmin/java/spring/gradle/bigquery/Java17SpringGradleBigQueryReferenceApplication.class

Build container image
docker build \
  --build-arg GCP_SA_KEY_PATH=${GCP_SA_KEY_PATH} \
  --build-arg GCP_PROJECT_ID=${GCP_PROJECT_ID} \
  --build-arg BQ_DATASET=${BQ_DATASET} \
  --build-arg BQ_TABLE=${BQ_TABLE} \
  -t java17-spring-gradle-bigquery-reference .

Run container
docker run --rm -it \
  -e GCP_SA_KEY_PATH=${GCP_SA_KEY_PATH} \
  -e GCP_ACCESS_TOKEN=${GCP_ACCESS_TOKEN} \
  -e GCP_PROJECT_ID=${GCP_PROJECT_ID} \
  -e BQ_DATASET=${BQ_DATASET} \
  -e BQ_TABLE=${BQ_TABLE} \
  -v ${LOCAL_GCLOUD_AUTH_DIRECTORY}:${CONTAINER_GCLOUD_AUTH_DIRECTORY} \
  -v ${LOCAL_MAVEN_REPOSITORY}:${CONTAINER_MAVEN_REPOSITORY} \
  java17-spring-gradle-bigquery-reference

Run jar
exec java -jar \
  -Dspring.profiles.active=local \
  ./build/libs/java17-spring-gradle-bigquery-reference-${REVISION}.jar

Why use exec with Java applications?

In Docker containers: Commonly seen in Dockerfiles for Java applications. It ensures that the Java application receives Unix signals directly because it runs as the container's PID 1 process. This is important for graceful shutdown and handling other system signals.
In scripts: To ensure that the Java application is the only running process and to handle signals and exit codes directly, without a shell in between.
In summary, exec java -jar is a powerful combination used to execute Java applications packaged as JAR files directly as the main process, ensuring that they handle system signals directly and that their lifecycle is tightly coupled with the lifecycle of the shell or container they run in.
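The replacement behavior of exec can be seen with any command, not just the JVM; this sketch substitutes echo for java so it runs anywhere:

```shell
# `exec` replaces the current shell process with the given command, so any
# statement written after it never runs. This is the same mechanism that
# makes `exec java -jar ...` run as the container's PID 1 process.
sh -c 'exec echo "shell replaced"; echo "never reached"'
```

Only "shell replaced" is printed: the second echo is unreachable because the shell that would have run it no longer exists.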
List datasets
bq ls --filter labels.key:value \
  --max_results ${MAX_RESULTS} \
  --format=prettyjson \
  --project_id ${GCP_PROJECT_ID}

Replace the following:
key:value: a label key and value, if applicable.
MAX_RESULTS: an integer representing the number of datasets to list.
GCP_PROJECT_ID: the name of the GCP project to target.
Examples:
bq ls --format=pretty

Create a dataset
Refer to the GCP documentation for creating datasets.
Examples:
bq --location=us mk \
  --dataset \
  --default_partition_expiration=3600 \
  --default_table_expiration=3600 \
  --description="An example." \
  --label=test_label_1:test_value_1 \
  --label=test_label_2:test_value_2 \
  --max_time_travel_hours=168 \
  --storage_billing_model=LOGICAL \
  ${GCP_PROJECT_ID}:${BQ_DATASET}

The Cloud Key Management Service (KMS) key parameter (KMS_KEY_NAME) can be specified.
This parameter is used to pass the name of the default Cloud Key Management Service key used to protect newly created tables in this dataset.
You cannot create a Google-encrypted table in a dataset with this parameter set.
bq --location=us mk \
  --dataset \
  --default_kms_key=${KMS_KEY_NAME} \
  ... \
  ${GCP_PROJECT_ID}:${BQ_DATASET}

Delete a dataset
Refer to the GCP documentation for deleting a dataset.
Remove all tables in the dataset (-r flag):
bq rm -r -f -d ${GCP_PROJECT_ID}:${BQ_DATASET}

Create a table with a configured schema
Create an empty table with an inline schema definition
bq mk --table ${GCP_PROJECT_ID}:${BQ_DATASET}.${BQ_TABLE} ${SCHEMA}

Replace the following:
GCP_PROJECT_ID: the name of the GCP project to target.
BQ_DATASET: the name of the BigQuery dataset to target.
BQ_TABLE: the name of the BigQuery table to target.
SCHEMA: an inline schema definition.
Example:
bq mk --table \
  example-project-id:test_dataset_integration.test_table_integration \
  id:STRING,creation_timestamp:DATETIME,last_update_timestamp:DATETIME,column_a:STRING,column_b:BOOL

For an example JSON schema file, refer to: /schema/example.json.
Create an empty table with a JSON schema file
bq mk --table \
  ${GCP_PROJECT_ID}:${BQ_DATASET}.${BQ_TABLE} \
  path_to_schema_file

Example:
bq mk --table \
  example-project-id:test_dataset_integration.test_table_integration \
  ./schema/example.json

Create a table with CSV data
bq --location=location load \
  --source_format=${FORMAT} \
  ${GCP_PROJECT_ID}:${BQ_DATASET}.${BQ_TABLE} \
  path_to_data_file \
  path_to_schema_file

Example:
bq --location=us load \
  --source_format=CSV \
  example-project-id:test_dataset_integration.test_table_integration \
  ./csv/example.csv \
  ./schema/example.json

Refer to the BigQuery documentation: Details of loading CSV data.
Delete a table
bq rm --table ${BQ_DATASET}.${BQ_TABLE}

Show table schema
Example:
bq show \
  --schema \
  --format=prettyjson \
  example-project-id:test_dataset_integration.test_table_integration

The table schema can be written to a file:
bq show \
  --schema \
  --format=prettyjson \
  example-project-id:test_dataset_integration.test_table_integration \
  > ./schema/example_show-write.json

Modify table schemas
bq update \
  ${GCP_PROJECT_ID}:test_dataset_integration.test_table_integration \
  ./schema/example_update.json

Refer to the GCP documentation on modifying table schemas.
Insert data into a table
Examples:
Insert for known values:
bq insert test_dataset_integration.test_table_integration ./json/example.json

Specify a template suffix (--template_suffix or -x):
bq insert --ignore_unknown_values \
  --template_suffix=_insert \
  test_dataset_integration.test_table_integration \
  ./json/example.json

Refer to the bq insert documentation.
Run an interactive query
bq query \
  --use_legacy_sql=false \
  'query_string'

Example:
bq query \
  --use_legacy_sql=false \
  'SELECT
    id, fieldC
  FROM
    `example-project-id.test_dataset_integration.test_table_integration`
  LIMIT
    3;'