diff --git a/content/en/docs/apis/porch/_index.md b/content/en/docs/apis/porch/_index.md index 0c133962..428367bb 100644 --- a/content/en/docs/apis/porch/_index.md +++ b/content/en/docs/apis/porch/_index.md @@ -2,6 +2,9 @@ title: "Package Orchestration API Specifications" type: docs weight: 5 -description: Reference for the Nephio Porch APIs +description: Reference for the Porch APIs --- -{{< iframe src="https://doc.crds.dev/github.com/nephio-project/porch@v1.4.0" sub="https://doc.crds.dev/github.com/nephio-project/porch@v1.4.0">}} +For detailed API and CRD specifications, please refer to: + +- [Porch Aggregated API type definitions](https://docs.porch.nephio.org/docs/7_cli_api/api-ref/) +- [Porch CRD Reference](https://docs.porch.nephio.org/docs/7_cli_api/crd-ref/) diff --git a/content/en/docs/glossary-abbreviations.md b/content/en/docs/glossary-abbreviations.md index 24e667ef..d8d64533 100644 --- a/content/en/docs/glossary-abbreviations.md +++ b/content/en/docs/glossary-abbreviations.md @@ -45,7 +45,7 @@ services, applications, etc.) which: * abstracts configuration file structure and storage from operations that act upon the configuration data; clients manipulating configuration data don’t need to directly interact with storage (git, container images) -Source of definition and more information about Configuration as Data can be found in the [kpt documentation](/content/en/docs/porch/config-as-data.md). +Source of definition and more information about Configuration as Data can be found in the [kpt documentation](https://kpt.dev/book/02-concepts/#configuration-as-data-key-principles). ## Controller This term comes from Kubernetes where diff --git a/content/en/docs/guides/install-guides/install-on-byoc.md b/content/en/docs/guides/install-guides/install-on-byoc.md index 29609b28..d58fc2f9 100644 --- a/content/en/docs/guides/install-guides/install-on-byoc.md +++ b/content/en/docs/guides/install-guides/install-on-byoc.md @@ -21,7 +21,7 @@ your environment and choices. 
- *kubectl* [installed ](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/)on your workstation - *kpt* [installed](https://kpt.dev/installation/kpt-cli) on your workstation (version v1.0.0-beta.43 or later) - - *porchctl* [installed](/content/en/docs/porch/user-guides/porchctl-cli-guide.md) on your workstation + - *porchctl* [installed](https://docs.porch.nephio.org/docs/3_getting_started/installing-porchctl/) on your workstation - Sudo-less *docker*, *Podman*, or *nerdctl*. If using *Podman* or *nerdctl*, you must set the [`KPT_FN_RUNTIME`](https://kpt.dev/reference/cli/fn/render/?id=environment-variables) diff --git a/content/en/docs/guides/install-guides/install-on-multiple-vm.md b/content/en/docs/guides/install-guides/install-on-multiple-vm.md index 489d690e..51299693 100644 --- a/content/en/docs/guides/install-guides/install-on-multiple-vm.md +++ b/content/en/docs/guides/install-guides/install-on-multiple-vm.md @@ -19,7 +19,7 @@ weight: 7 * Kubernetes version 1.26+ * *kubectl* [installed ](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/) * *kpt* [installed](https://kpt.dev/installation/kpt-cli) (version v1.0.0-beta.43 or later) -* *porchctl* [installed](/content/en/docs/porch/user-guides/porchctl-cli-guide.md) on your workstation +* *porchctl* [installed](https://docs.porch.nephio.org/docs/3_getting_started/installing-porchctl/) on your workstation ## Installation of the management cluster diff --git a/content/en/docs/guides/user-guides/usecase-user-guides/exercise-3-fluxcd-wl.md b/content/en/docs/guides/user-guides/usecase-user-guides/exercise-3-fluxcd-wl.md index 36cdfdb3..ab590114 100644 --- a/content/en/docs/guides/user-guides/usecase-user-guides/exercise-3-fluxcd-wl.md +++ b/content/en/docs/guides/user-guides/usecase-user-guides/exercise-3-fluxcd-wl.md @@ -64,7 +64,7 @@ oai-core-packages git Package false True https://github Once *Ready*, we can utilize blueprint packages from these upstream repositories. 
-In this example, we will use the [Porch package variant controller](/content/en/docs/porch/package-variant.md#core-concepts) +In this example, we will use the [Porch package variant controller](https://docs.porch.nephio.org/docs/5_architecture_and_components/relevant_old_docs/package-variant/#core-concepts) to deploy the new Workload Cluster. This fully automates the onboarding process, including the auto approval and publishing of the new package. diff --git a/content/en/docs/guides/user-guides/usecase-user-guides/exercise-5-argocd-wl.md b/content/en/docs/guides/user-guides/usecase-user-guides/exercise-5-argocd-wl.md index d64f69b6..a62dda30 100644 --- a/content/en/docs/guides/user-guides/usecase-user-guides/exercise-5-argocd-wl.md +++ b/content/en/docs/guides/user-guides/usecase-user-guides/exercise-5-argocd-wl.md @@ -161,7 +161,7 @@ oai-core-packages git Package false True https://github Once *Ready*, we can utilize blueprint packages from these upstream repositories. -In this example, we will use the [Porch package variant controller](/content/en/docs/porch/package-variant.md#core-concepts) +In this example, we will use the [Porch package variant controller](https://docs.porch.nephio.org/docs/5_architecture_and_components/relevant_old_docs/package-variant/#core-concepts) to deploy the new Workload Cluster. This fully automates the onboarding process, including the auto approval and publishing of the new package. 
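The package variant controller acts on *PackageVariant* resources, each of which pairs an upstream (blueprint) package with a downstream (deployment) package that the controller creates and keeps up to date. As a rough sketch only (the repository names, package names, and revision below are illustrative, not the ones used in these exercises), such a resource looks like:

```yaml
apiVersion: config.porch.kpt.dev/v1alpha1
kind: PackageVariant
metadata:
  name: example-variant
spec:
  upstream:
    repo: blueprints          # repository holding the blueprint package
    package: example-package
    revision: v1
  downstream:
    repo: deployments         # repository the variant is cloned into
    package: example-package
```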
diff --git a/content/en/docs/porch/_index.md b/content/en/docs/porch/_index.md index 8425fd58..c9130a83 100644 --- a/content/en/docs/porch/_index.md +++ b/content/en/docs/porch/_index.md @@ -1,12 +1,16 @@ --- -title: "Porch documentation" +title: "Porch" type: docs weight: 6 -description: Documentation of Porch --- - ## Overview +{{% alert title="Note" color="primary" %}} + +**The Porch documentation has been moved to [https://docs.porch.nephio.org/](https://docs.porch.nephio.org/).** + +{{% /alert %}} + Porch is “kpt-as-a-service”, providing opinionated package management, manipulation, and lifecycle operations in a Kubernetes-based API. This allows automation of these operations using standard Kubernetes controller techniques. @@ -20,4 +24,4 @@ was decided that Porch would not be part of the kpt project and the code was don Porch is maintained by the Nephio community. Porch will evolve with Nephio and its architecture and implementation will be updated to meet the functional and non-functional requirements on it -and on Nephio as a whole. \ No newline at end of file +and on Nephio as a whole. diff --git a/content/en/docs/porch/config-as-data.md b/content/en/docs/porch/config-as-data.md deleted file mode 100644 index 0b882807..00000000 --- a/content/en/docs/porch/config-as-data.md +++ /dev/null @@ -1,156 +0,0 @@ ---- -title: "Configuration as Data" -type: docs -weight: 1 -description: ---- - -## Why - -This document provides background context for Package Orchestration, which is further elaborated in a dedicated -[document](package-orchestration.md). - -## Configuration as Data - -Configuration as Data is an approach to management of configuration (incl. -configuration of infrastructure, policy, services, applications, etc.) 
which:

* makes configuration data the source of truth, stored separately from the live
  state
* uses a uniform, serializable data model to represent configuration
* separates code that acts on the configuration from the data and from packages
  / bundles of the data
* abstracts configuration file structure and storage from operations that act
  upon the configuration data; clients manipulating configuration data don’t
  need to directly interact with storage (git, container images)

![CaD Overview](/static/images/porch/CaD-Overview.svg)

## Key Principles

A system based on CaD should observe the following key principles:

* secrets should be stored separately, in a secret-focused storage
  system ([example](https://cert-manager.io/))
* stores a versioned history of configuration changes by change sets to bundles
  of related configuration data
* relies on uniformity and consistency of the configuration format, including
  type metadata, to enable pattern-based operations on the configuration data,
  along the lines of duck typing
* separates schemas for the configuration data from the data, and relies on
  schema information for strongly typed operations and to disambiguate data
  structures and other variations within the model
* decouples abstractions of configuration from collections of configuration data
* represents abstractions of configuration generators as data with schemas, like
  other configuration data
* finds, filters / queries / selects, and/or validates configuration data that
  can be operated on by given code (functions)
* finds and/or filters / queries / selects code (functions) that can operate on
  resource types contained within a body of configuration data
* actuation (reconciliation of configuration data with live state) is separate
  from transformation of configuration data, and is driven by the declarative
  data model
* transformations, particularly value propagation, are preferable to wholesale
  configuration
  generation except when the expansion is dramatic (say, >10x)
* transformation input generation should usually be decoupled from propagation
* deployment context inputs should be taken from well defined “provider context”
  objects
* identifiers and references should be declarative
* live state should be linked back to sources of truth (configuration)

## KRM CaD

Our implementation of the Configuration as Data approach
([kpt](https://kpt.dev),
[Config Sync](https://cloud.google.com/anthos-config-management/docs/config-sync-overview),
and [Package Orchestration](https://github.com/nephio-project/porch))
is built on the foundation of the
[Kubernetes Resource Model](https://github.com/kubernetes/design-proposals-archive/blob/main/architecture/resource-management.md)
(KRM).

{{% alert title="Note" color="primary" %}}

Even though KRM is not a requirement of Config as Data (just like
Python or Go templates or Jinja are not specifically
requirements for [IaC](https://en.wikipedia.org/wiki/Infrastructure_as_code)), the choice of
another foundational config representation format would necessitate
implementing adapters for all types of infrastructure and applications
configured, including Kubernetes, CRDs, GCP resources and more. Likewise, the choice
of another configuration format would require redesign of a number of the
configuration management mechanisms that have already been designed for KRM,
such as 3-way merge, structural merge patch, schema descriptions, resource
metadata, references, status conventions, etc.
{{% /alert %}}

**KRM CaD** is therefore a specific approach to implementing *Configuration as Data* which:

* uses [KRM](https://github.com/kubernetes/design-proposals-archive/blob/main/architecture/resource-management.md)
  as the configuration serialization data model
* uses [Kptfile](https://kpt.dev/reference/schema/kptfile/) to store package metadata
* uses [ResourceList](https://kpt.dev/reference/schema/resource-list/) as a serialized package wire-format
* uses a function `ResourceList → ResultList` (a *kpt* function) as the foundational, composable unit of
  package-manipulation code (note that other forms of code can manipulate packages as well, e.g. UIs, custom algorithms
  not necessarily packaged and used as kpt functions)

and provides the following basic functionality:

* load a serialized package from a repository (as a ResourceList) (examples of a repository may be one or more of: local
  HDD, Git repository, OCI, Cloud Storage, etc.)
* save a serialized package (as a ResourceList) to a package repository
* evaluate a function on a serialized package (ResourceList)
* [render](https://kpt.dev/book/04-using-functions/01-declarative-function-execution) a package (evaluate functions
  declared within the package itself)
* create a new (empty) package
* fork (or clone) an existing package from one package repository (called upstream) to another (called downstream)
* delete a package from a repository
* associate a version with the package; guarantee immutability of packages with an assigned version
* incorporate changes from a new version of an upstream package into a new version of a downstream package (3-way merge)
* revert to a prior version of a package

## Value

The Config as Data approach enables key capabilities that are available in other
configuration management approaches only to a lesser extent, or not
at all:
* simplified authoring of configuration using a variety of methods and sources
* WYSIWYG interaction with configuration using a simple data serialization format rather than a code-like format
* layering of interoperable interface surfaces (notably GUI) over declarative configuration mechanisms rather than
  forcing choices between exclusive alternatives (exclusively UI/CLI or IaC initially followed by exclusively
  UI/CLI or exclusively IaC)
* the ability to apply UX techniques to simplify configuration authoring and viewing
* compared to imperative tools (e.g., UI, CLI) that directly modify the live state via APIs, CaD enables versioning,
  undo, audits of configuration history, review/approval, pre-deployment preview, validation, safety checks,
  constraint-based policy enforcement, and disaster recovery
* bulk changes to configuration data in their sources of truth
* injection of configuration to address horizontal concerns
* merging of multiple sources of truth
* state export to reusable blueprints without manual templatization
* cooperative editing of configuration by humans and automation, such as for security remediation (which is usually
  implemented against live-state APIs)
* reusability of configuration transformation code across multiple bodies of configuration data containing the same
  resource types, amortizing the effort of writing, testing, and documenting the code
* combination of independent configuration transformations
* implementation of config transformations using the languages of choice, including both programming and scripting
  approaches
* reducing the frequency of changes to existing transformation code
* separation of roles between developer and non-developer configuration users
* defragmenting the configuration transformation ecosystem
* admission control and invariant enforcement on sources of truth
* maintaining variants of configuration blueprints without one-size-fits-all full struct-constructor-style
  parameterization and without manually constructing and maintaining patches
* drift detection and remediation for most of the desired state via continuous reconciliation using apply and/or for
  specific attributes via targeted mutation of the sources of truth

## Related Articles

For more information about Configuration as Data and the Kubernetes Resource Model,
visit the following links:

* [Rationale for kpt](https://kpt.dev/guides/rationale)
* [Understanding Configuration as Data](https://cloud.google.com/blog/products/containers-kubernetes/understanding-configuration-as-data-in-kubernetes)
  blog post.
* [Kubernetes Resource Model](https://cloud.google.com/blog/topics/developers-practitioners/build-platform-krm-part-1-whats-platform)
  blog post series
diff --git a/content/en/docs/porch/contributors-guide/_index.md b/content/en/docs/porch/contributors-guide/_index.md
deleted file mode 100644
index bae9fa65..00000000
--- a/content/en/docs/porch/contributors-guide/_index.md
+++ /dev/null
@@ -1,116 +0,0 @@
---
title: "Porch Contributor Guide"
type: docs
weight: 7
description:
---

## Changing Porch API

If you change the API resources, in `api/porch/.../*.go`, update the generated code by running:

```sh
make generate
```

## Components

Porch comprises several software components:

* [api](https://github.com/nephio-project/porch/tree/main/api): Definition of the KRM API supported by the Porch
  extension apiserver
* [porchctl](https://github.com/nephio-project/porch/tree/main/cmd/porchctl): CLI tool for administration of
  Porch `Repository` and `PackageRevision` custom resources.
-* [apiserver](https://github.com/nephio-project/porch/tree/main/pkg/apiserver): The Porch apiserver implementation, REST - handlers, Porch `main` function -* [engine](https://github.com/nephio-project/porch/tree/main/pkg/engine): Core logic of Package Orchestration - - operations on package contents -* [func](https://github.com/nephio-project/porch/tree/main/func): KRM function evaluator microservice; exposes GRPC API -* [repository](https://github.com/nephio-project/porch/blob/main/pkg/repository): Repository integration package -* [git](https://github.com/nephio-project/porch/tree/main/pkg/externalrepo/git): Integration with Git repository. -* [oci](https://github.com/nephio-project/porch/tree/main/pkg/externalrepo/oci): Integration with OCI repository. -* [cache](https://github.com/nephio-project/porch/tree/main/pkg/cache): Package caching. -* [controllers](https://github.com/nephio-project/porch/tree/main/controllers): `Repository` CRD. No controller; - Porch apiserver watches these resources for changes as repositories are (un-)registered. -* [test](https://github.com/nephio-project/porch/tree/main/test): Test Git Server for Porch e2e testing, and - [e2e](https://github.com/nephio-project/porch/tree/main/test/e2e) tests. - -## Running Porch - -See dedicated documentation on running Porch: - -* [locally](../running-porch/running-locally.md) -* [on GKE](../running-porch/running-on-GKE.md) - -## Build the Container Images - -Build Docker images of Porch components: - -```sh -# Build Images -make build-images - -# Push Images to Docker Registry -make push-images - -# Supported make variables: -# IMAGE_TAG - image tag, i.e. 
'latest' (defaults to 'latest')
# GCP_PROJECT_ID - GCP project hosting gcr.io repository (will translate to gcr.io/${GCP_PROJECT_ID})
# IMAGE_REPO - overwrites the default image repository

# Example:
IMAGE_TAG=$(git rev-parse --short HEAD) make push-images
```

## Running Locally

Follow the [Running Porch Locally](../running-porch/running-locally.md) guide to run Porch locally.

## Debugging

To debug Porch, run it locally as described in [running-locally.md](../running-porch/running-locally.md), exit the porch server running
in the shell, and launch Porch under the debugger. A VS Code debug session is pre-configured in
[launch.json](https://github.com/nephio-project/porch/blob/main/.vscode/launch.json).

Update the launch arguments to suit your needs.

## Code Pointers

Some useful code pointers:

* Porch REST API handlers in [registry/porch](https://github.com/nephio-project/porch/tree/main/pkg/registry/porch),
  for example [packagerevision.go](https://github.com/nephio-project/porch/tree/main/pkg/registry/porch/packagerevision.go)
* Background task handling cache updates in [background.go](https://github.com/nephio-project/porch/tree/main/pkg/registry/porch/background.go)
* Git repository integration in [pkg/git](https://github.com/nephio-project/porch/tree/main/pkg/externalrepo/git)
* OCI repository integration in [pkg/oci](https://github.com/nephio-project/porch/tree/main/pkg/externalrepo/oci)
* CaD Engine in [engine](https://github.com/nephio-project/porch/tree/main/pkg/engine)
* e2e tests in [e2e](https://github.com/nephio-project/porch/tree/main/test/e2e). See more on testing below.

## Running Tests

All tests can be run using `make test`. Individual tests can be run using `go test`.
End-to-end tests assume that a Porch instance is running and that `KUBECONFIG` is configured
to point to it.
The tests automatically detect whether they are running against
Porch on the local machine or in a k8s cluster, start a Git server appropriately,
and then run the test suite against the Porch instance.

## Makefile Targets

* `make generate`: generates code based on the Porch API definitions (runs the k8s code generators)
* `make tidy`: tidies all Porch modules
* `make fmt`: formats golang sources
* `make build-images`: builds Porch Docker images
* `make push-images`: builds and pushes Porch Docker images
* `make deployment-config`: customizes the configuration which installs Porch
  in a k8s cluster with correct image names, annotations, and service accounts.
  The deployment-ready configuration is copied into `./.build/deploy`
* `make deploy`: deploys Porch in the k8s cluster configured with the current kubectl context
* `make push-and-deploy`: builds, pushes Porch Docker images, creates the deployment configuration, and deploys Porch
* `make` or `make all`: builds and runs Porch [locally](../running-porch/running-locally.md)
* `make test`: runs tests

## VS Code

[VS Code](https://code.visualstudio.com/) works well for editing and debugging.
Open VS Code from the root folder of the Porch repository and it will work out of the box. The folder contains the
configuration needed to launch the different functions of Porch.
diff --git a/content/en/docs/porch/contributors-guide/dev-process.md b/content/en/docs/porch/contributors-guide/dev-process.md
deleted file mode 100644
index 06a40dce..00000000
--- a/content/en/docs/porch/contributors-guide/dev-process.md
+++ /dev/null
@@ -1,281 +0,0 @@
---
title: "Development process"
type: docs
weight: 3
description:
---

After you have run the setup script as explained in the [environment setup](environment-setup.md), you are ready to start the actual development of Porch. That process involves, among other things, a combination of the tasks explained below.
## Build and deploy all of porch

The following command will rebuild all of porch and deploy all of its components into your porch-test kind cluster (created in the [environment setup](environment-setup.md)):

```bash
make run-in-kind
```

## Troubleshoot the porch API server

There are several ways to develop, test and troubleshoot the porch API server. In this chapter we describe an option where all other parts of porch are running in the porch-test kind cluster, but the porch API server is running locally on your machine, typically in an IDE.

The following command will rebuild and deploy porch, except for the porch API server component, and will also prepare your environment for connecting the local API server with the in-cluster components.

```bash
make run-in-kind-no-server
```

After issuing this command you are expected to start the porch API server locally on your machine (outside of the kind cluster); probably in your IDE, potentially in a debugger.

### Configure VS Code to run the Porch (API)server

The simplest way to run the porch API server is to launch it in a VS Code IDE, as described by the following process:

1. Open the *porch.code-workspace* file in the root of the porch git repository.

1. Edit your local *.vscode/launch.json* file as follows: change the `--kubeconfig` argument of the Launch Server
   configuration to point to a *KUBECONFIG* file that has the kind cluster set as its current context.

{{% alert title="Note" color="primary" %}}

   If your current *KUBECONFIG* environment variable already points to the porch-test kind cluster, then you don't have to touch anything.

   {{% /alert %}}

1. Launch the Porch server locally in VS Code by selecting the *Launch Server* configuration on the VS Code
   *Run and Debug* window. For more information please refer to the
   [VS Code debugging documentation](https://code.visualstudio.com/docs/editor/debugging).
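For orientation only, a Launch Server entry in *launch.json* with its `--kubeconfig` argument pointed at the kind cluster might look something like the sketch below. The program path, mode, and kubeconfig path shown here are illustrative assumptions, not the exact contents of the repository's *launch.json*:

```json
{
    "name": "Launch Server",
    "type": "go",
    "request": "launch",
    "mode": "auto",
    "program": "${workspaceFolder}/cmd/porch",
    "args": [
        "--kubeconfig=/home/ubuntu/.kube/config"
    ]
}
```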
- -### Check to ensure that the API server is serving requests: - -```bash -curl https://localhost:4443/apis/porch.kpt.dev/v1alpha1 -k -``` - -
-Sample output - -```json -{ - "kind": "APIResourceList", - "apiVersion": "v1", - "groupVersion": "porch.kpt.dev/v1alpha1", - "resources": [ - { - "name": "packagerevisionresources", - "singularName": "", - "namespaced": true, - "kind": "PackageRevisionResources", - "verbs": [ - "get", - "list", - "patch", - "update" - ] - }, - { - "name": "packagerevisions", - "singularName": "", - "namespaced": true, - "kind": "PackageRevision", - "verbs": [ - "create", - "delete", - "get", - "list", - "patch", - "update", - "watch" - ] - }, - { - "name": "packagerevisions/approval", - "singularName": "", - "namespaced": true, - "kind": "PackageRevision", - "verbs": [ - "get", - "patch", - "update" - ] - }, - { - "name": "packages", - "singularName": "", - "namespaced": true, - "kind": "Package", - "verbs": [ - "create", - "delete", - "get", - "list", - "patch", - "update" - ] - } - ] -} -``` - -
## Troubleshoot the porch controllers

There are several ways to develop, test and troubleshoot the porch controllers (i.e. *PackageVariant*, *PackageVariantSet*). In this chapter we describe an option where all other parts of porch are running in the porch-test kind cluster, but the process hosting all porch controllers is running locally on your machine.

The following command will rebuild and deploy porch, except for the porch-controllers component:

```bash
make run-in-kind-no-controllers
```

After issuing this command you are expected to start the porch controllers process locally on your machine (outside of
the kind cluster); probably in your IDE, potentially in a debugger. If you are using VS Code you can use the
**Launch Controllers** configuration that is defined in the
[launch.json](https://github.com/nephio-project/porch/blob/main/.vscode/launch.json) file of the porch git repository.

## Run the unit tests

```bash
make test
```

## Run the end-to-end tests

To run the end-to-end tests against the Kubernetes API server that *KUBECONFIG* points to, simply issue:

```bash
make test-e2e
```

To run the end-to-end tests against a clean deployment, issue:

```bash
make test-e2e-clean
```

This will:
- create a brand new kind cluster,
- rebuild porch,
- deploy the newly built porch into the new cluster,
- run the end-to-end tests against it, and
- delete the kind cluster if all tests passed.

This process closely mimics the end-to-end tests that are run against your PR on GitHub.
In order to run just one particular test case you can execute something similar to this:

```bash
E2E=1 go test -v ./test/e2e -run TestE2E/PorchSuite/TestPackageRevisionInMultipleNamespaces
```

or this:

```bash
E2E=1 go test -v ./test/e2e/cli -run TestPorch/rpkg-lifecycle
```

To run the end-to-end tests on your local machine against a Porch server running in VS Code, be aware of the following if the tests fail to run:
- Set the actual load balancer IP address for the function runner in your *launch.json*, for example
  "--function-runner=172.18.255.201:9445"
- Clear the git cache of your Porch workspace before every test run, for example
  `rm -fr /.cache/git/*`

## Run the load test

A script is provided to run a Porch load test against the Kubernetes API server that *KUBECONFIG* points to.

```bash
porch % scripts/run-load-test.sh -h

run-load-test.sh - runs a load test on porch

 usage: run-load-test.sh [-options]

 options
  -h                        - this help message
  -s hostname               - the host name of the git server for porch git repositories
  -r repo-count             - the number of repositories to create during the test, a positive integer
  -p package-count          - the number of packages to create in each repo during the test, a positive integer
  -e package-revision-count - the number of packagerevisions to create on each package during the test, a positive integer
  -f result-file            - the file where the raw results will be stored, defaults to load_test_results.txt
  -o repo-result-file       - the file where the results by reop will be stored, defaults to load_test_repo_results.csv
  -l log-file               - the file where the test log will be stored, defaults to load_test.log
  -y                        - dirty mode, do not clean up after tests
```

The load test creates, copies, proposes and approves `repo-count` repositories, each with `package-count` packages,
with `package-revision-count` package revisions created for each package.
The script initializes or copies each
package revision in turn. It adds a pipeline with two "apply-replacements" kpt functions to the Kptfile of each
package revision. It updates the package revision, and then proposes and approves it.

The load test script creates repositories on the git server at `hostname`, so its URL will be `http://nephio:secret@hostname:3000/nephio/`.
The script expects a git server to be running at that URL.

The `result-file` is a text file containing the time it takes for a package to move from being initialized or
copied to being approved. It also records the time it takes to propose-delete and delete each package revision.

The `repo-result-file` is a CSV file that tabulates the results from `result-file` into columns for each repository created.

For example:

```bash
porch % scripts/run-load-test.sh -s 172.18.255.200 -r 4 -p 2 -e 3
running load test towards git server http://nephio:secret@172.18.255.200:3000/nephio/
 4 repositories will be created
 2 packages in each repo
 3 pacakge revisions in each package
 results will be stored in "load_test_results.txt"
 repo results will be stored in "load_test_repo_results.csv"
 the log will be stored in "load_test.log"
load test towards git server http://nephio:secret@172.18.255.200:3000/nephio/ completed
```

In the load test above, a total of 24 package revisions were created and deleted.

|REPO-1-TEST|REPO-1-TIME|REPO-2-TEST|REPO-2-TIME|REPO-3-TEST|REPO-3-TIME|REPO-4-TEST|REPO-4-TIME|
|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|
|1:1|1.951846|1:1|1.922723|1:1|2.019615|1:1|1.992746|
|1:2|1.762657|1:2|1.864306|1:2|1.873962|1:2|1.846436|
|1:3|1.807281|1:3|1.930068|1:3|1.860375|1:3|1.881649|
|2:1|1.829227|2:1|1.904997|2:1|1.956160|2:1|1.988209|
|2:2|1.803494|2:2|1.912169|2:2|1.915905|2:2|1.902103|
|2:3|1.816716|2:3|1.948171|2:3|1.931904|2:3|1.952902|
|del-6a0b3…|.918442|del-e757b…|.904881|del-d39cd…|.944850|del-6222f…|.911060|
|del-378a4…|.831815|del-9211c…|.866386|del-316a5…|.898638|del-31d9f…|.895919|
|del-89073…|.874867|del-97d45…|.876450|del-830e0…|.905896|del-7d411…|.866947|
|del-4756f…|.850528|del-c95db…|.903599|del-4c450…|.884997|del-587f8…|.842529|
|del-9860a…|.887118|del-9c1b9…|1.018930|del-66ae…|.929470|del-6ae3d…|.905359|
|del-a11e5…|.845834|del-71540…|.899935|del-8d1e8…|.891296|del-9e2bb…|.864382|
|del-1d789…|.851242|del-ffdc3…|.897862|del-75e45…|.852323|del-82eef…|.916630|
|del-8ae7e…|.872696|del-58097…|.894618|del-d164f…|.852093|del-9da24…|.849919|

## Switching between tasks

The `make run-in-kind`, `make run-in-kind-no-server` and `make run-in-kind-no-controllers` commands can be executed right after each other. No clean-up or restart is required between them. The make scripts will intelligently make the necessary changes to your current porch deployment in kind (e.g. removing or re-adding the porch API server).

You can always find the configuration of your current deployment in *.build/deploy*.

You can always use `make test` and `make test-e2e` to test your current setup, no matter which of the above detailed configurations it is.

## Getting to know the make targets

Try: `make help`

## Restart with a clean slate

Sometimes the development kind cluster gets cluttered and you may experience weird behavior from porch.
In this case you might want to restart with a clean slate.
First, delete the development kind cluster with the following command:

```bash
kind delete cluster --name porch-test
```

Then re-run the [setup script](https://github.com/nephio-project/porch/blob/main/scripts/setup-dev-env.sh):

```bash
./scripts/setup-dev-env.sh
```

Finally, deploy porch into the kind cluster by any of the methods explained above.

diff --git a/content/en/docs/porch/contributors-guide/environment-setup-vm.md b/content/en/docs/porch/contributors-guide/environment-setup-vm.md
deleted file mode 100644
index 31fe187d..00000000
--- a/content/en/docs/porch/contributors-guide/environment-setup-vm.md
+++ /dev/null
@@ -1,162 +0,0 @@
---
title: "Setting up a VM environment"
type: docs
weight: 2
description:
---

This tutorial gives short instructions on how to set up a development environment for Porch on a Nephio VM. It outlines the steps to
get a [kind](https://kind.sigs.k8s.io/) cluster up and running that a Porch instance running in Visual Studio Code
can connect to and interact with. If you are not familiar with how Porch works, it is highly recommended that you go
through the [Starting with Porch tutorial](../user-guides/install-and-using-porch.md) before going through this one.

## Setting up the environment

1. The first step is to install the Nephio sandbox environment on your VM using the procedure described in
[Installation on a single VM](../../guides/install-guides/install-on-single-vm.md). In short, log onto your VM and give the command
below:

```bash
wget -O - https://raw.githubusercontent.com/nephio-project/test-infra/main/e2e/provision/init.sh | \
sudo NEPHIO_DEBUG=false \
     NEPHIO_BRANCH=main \
     NEPHIO_USER=ubuntu \
     bash
```
- -```bash -echo '' >> ~/.bashrc -echo 'source <(kubectl completion bash)' >> ~/.bashrc -echo 'source <(kpt completion bash)' >> ~/.bashrc -echo 'source <(porchctl completion bash)' >> ~/.bashrc -echo '' >> ~/.bashrc -echo 'alias h=history' >> ~/.bashrc -echo 'alias k=kubectl' >> ~/.bashrc -echo '' >> ~/.bashrc -echo 'complete -o default -F __start_kubectl k' >> ~/.bashrc - -sudo usermod -a -G syslog ubuntu -sudo usermod -a -G docker ubuntu -``` - -3. Log out of your VM and log in again so that the group changes on the *ubuntu* user are picked up. - -```bash -> exit - -> ssh ubuntu@thevmhostname -> groups -ubuntu adm dialout cdrom floppy sudo audio dip video plugdev syslog netdev lxd docker -``` - -4. Install *go* so that you can build Porch on the VM: - -```bash -wget -O - https://go.dev/dl/go1.22.5.linux-amd64.tar.gz | sudo tar -C /usr/local -zxvf - - -echo '' >> ~/.profile -echo '# set PATH for go' >> ~/.profile -echo 'if [ -d "/usr/local/go" ]' >> ~/.profile -echo 'then' >> ~/.profile -echo ' PATH="/usr/local/go/bin:$PATH"' >> ~/.profile -echo 'fi' >> ~/.profile -``` - -5. Log out of your VM and log in again so that the *go* is added to your path. Verify that *go* is in the path: - -```bash -> exit - -> ssh ubuntu@thevmhostname - -> go version -go version go1.22.5 linux/amd64 -``` - -6. Install *go delve* for debugging on the VM: - -```bash -go install -v github.com/go-delve/delve/cmd/dlv@latest -``` - -7. Clone Porch onto the VM - -```bash -mkdir -p git/github/nephio-project -cd ~/git/github/nephio-project - -# Clone porch -git clone https://github.com/nephio-project/porch.git -cd porch -``` - -8. Change the Kind cluster name in the Porch Makefile to match the Kind cluster name on the VM: - -```bash -sed -i "s/^KIND_CONTEXT_NAME ?= porch-test$/KIND_CONTEXT_NAME ?= "$(kind get clusters)"/" Makefile -``` - -9. 
Expose the Porch function runner so that the Porch server running in VS Code can access it:
-
-```bash
-kubectl expose svc -n porch-system function-runner --name=xfunction-runner --type=LoadBalancer --load-balancer-ip='172.18.0.202'
-```
-
-10. Set the *KUBECONFIG* and *FUNCTION_RUNNER_IP* environment variables in the *.profile* file.
-    You **must** do this step before connecting with VS Code because VS Code caches the environment on the server. If you
-    want to change the values of these variables subsequently, you must restart the VM server.
-
-    ```bash
-    echo '' >> ~/.profile
-    echo 'export KUBECONFIG="/home/ubuntu/.kube/config"' >> ~/.profile
-    echo 'export FUNCTION_RUNNER_IP="172.18.0.202"' >> ~/.profile
-    ```
-
-You have now set up the VM so that it can be used for remote debugging of Porch.
-
-## Setting up VS Code
-
-Use the
-[VS Code Remote SSH](https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-ssh)
-plugin to debug from VS Code running on your local machine towards a VM. Detailed documentation
-on the plugin and its use is available on the
-[Remote Development using SSH](https://code.visualstudio.com/docs/remote/ssh) page in the VS Code
-documentation.
-
-1. Use the **Connect to a remote host** instructions on the
-[Remote Development using SSH](https://code.visualstudio.com/docs/remote/ssh) page to connect to your VM.
-
-2. Click **Open Folder** and browse to the Porch code on the VM, */home/ubuntu/git/github/nephio-project/porch* in this
-   case:
-
-![Browse to Porch code](/static/images/porch/contributor/01_VSCodeOpenPorchFolder.png)
-
-3. VS Code now opens the Porch project on the VM.
-
-![Porch code is open](/static/images/porch/contributor/02_VSCodeConnectedPorch.png)
-
-4. We now need to install support for *go* debugging in VS Code. Trigger this by launching a debug configuration in
-   VS Code.
-   Here we use the **Launch Override Server** configuration.
-
-![Launch the Override Server VS Code debug configuration](/static/images/porch/contributor/03_LaunchOverrideServer.png)
-
-5. VS Code complains that *go* debugging is not supported; click the **Install go Extension** button.
-
-![VS Code go debugging not supported message](/static/images/porch/contributor/04_GoDebugNotSupportedPopup.png)
-
-6. VS Code automatically presents the Go debugging plugin for installation. Click the **Install** button.
-
-![VS Code Go debugging plugin selected](/static/images/porch/contributor/05_GoExtensionAutoSelected.png)
-
-7. VS Code installs the plugin.
-
-![VS Code Go debugging plugin installed](/static/images/porch/contributor/06_GoExtensionInstalled.png)
-
-You have now set up VS Code so that it can be used for remote debugging of Porch.
-
-## Getting started with actual development
-
-You can find a detailed description of the actual development process [here](dev-process.md).
diff --git a/content/en/docs/porch/contributors-guide/environment-setup.md b/content/en/docs/porch/contributors-guide/environment-setup.md
deleted file mode 100644
index 90d42c15..00000000
--- a/content/en/docs/porch/contributors-guide/environment-setup.md
+++ /dev/null
@@ -1,251 +0,0 @@
----
-title: "Setting up a local environment"
-type: docs
-weight: 2
-description:
----
-
-This tutorial gives short instructions on how to set up a development environment for Porch on your local machine. It
-outlines the steps to get a [kind](https://kind.sigs.k8s.io/) cluster up and running that a Porch instance running in
-Visual Studio Code can connect to and interact with. If you are not familiar with how porch works, it is highly
-recommended that you go through the [Starting with Porch tutorial](../user-guides/install-and-using-porch.md) before
-going through this one.
-
-{{% alert title="Note" color="primary" %}}
-
-As your development environment, you can run the code on a remote VM and use the
-[VS Code Remote SSH](https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-ssh)
-plugin to connect to it.
-
-{{% /alert %}}
-
-## Extra steps for MacOS users
-
-The *make deployment-config* target generates the deployment files for porch. The scripts called by this make target
-use recent *bash* additions, while macOS ships with *bash* 3.x.x.
-
-1. Install *bash* 4.x.x or later using Homebrew, see
-   [this post for details](https://apple.stackexchange.com/questions/193411/update-bash-to-version-4-0-on-osx)
-2. Ensure that */opt/homebrew/bin* is earlier in your path than */bin* and */usr/bin*
-
-{{% alert title="Note" color="primary" %}}
-
-The changes above **permanently** change the *bash* version for **all** applications and may cause side
-effects.
-
-{{% /alert %}}
-
-## Setup the environment automatically
-
-The [*./scripts/setup-dev-env.sh*](https://github.com/nephio-project/porch/blob/main/scripts/setup-dev-env.sh) setup
-script automatically builds a porch development environment.
-
-{{% alert title="Note" color="primary" %}}
-
-This is only one of many possible ways of building a working porch development environment, so feel free
-to customize it to suit your needs.
-
-{{% /alert %}}
-
-The setup script will perform the following steps:
-
-1. Install a kind cluster. The name of the cluster is read from the PORCH_TEST_CLUSTER environment variable; otherwise
-   it defaults to porch-test. The configuration of the cluster is taken from
-   [here](https://github.com/nephio-project/porch/blob/main/deployments/local/kind_porch_test_cluster.yaml).
-1. Install the MetalLB load balancer into the cluster, in order to allow LoadBalancer typed Services to work properly.
-1. Install the Gitea git server into the cluster. 
This can be used to test porch during development, but it is not used
-   in automated end-to-end tests. Gitea is exposed to the host via port 3000, where its GUI is accessible
-   (username: nephio, password: secret).
-   {{% alert title="Note" color="primary" %}}
-
-   If you are using WSL2 (Windows Subsystem for Linux), then Gitea is also accessible from the Windows host.
-
-   {{% /alert %}}
-1. Generate the PKI resources (key pairs and certificates) required for end-to-end tests.
-1. Build the porch CLI binary. The result will be generated as *.build/porchctl*.
-
-That's it! If you want to run the steps manually, please use the code of the script as a detailed description.
-
-The setup script is idempotent in the sense that you can rerun it without cleaning up first. This also means that if the
-script is interrupted for any reason, and you run it again, it should effectively continue the process where it left off.
-
-## Extra manual steps
-
-Copy the *.build/porchctl* binary (that was built by the setup script) to somewhere in your $PATH, or add the *.build*
-directory to your PATH.
-
-## Build and deploy porch
-
-You can build all of porch, and also deploy it into your newly created kind cluster, with this command:
-
-```bash
-make run-in-kind
-```
-
-See more advanced variants of this command in the [detailed description of the development process](dev-process.md).
-
-## Check that everything works as expected
-
-At this point you are basically ready to start developing porch, but before you start it is worth checking that
-everything works as expected. 
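-
-Before running the API-level checks below, you can do a quick sanity check that the porch pods themselves came up
-in the kind cluster (the *porch-system* namespace is assumed here, matching the sample outputs in this guide):
-
-```bash
-# All porch pods should reach the Running state
-kubectl get pods -n porch-system
-```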
- -### Check that the APIservice is ready - -```bash -kubectl get apiservice v1alpha1.porch.kpt.dev -``` - -Sample output: - -```bash -NAME SERVICE AVAILABLE AGE -v1alpha1.porch.kpt.dev porch-system/api True 18m -``` - -### Check the porch api-resources - -```bash -kubectl api-resources | grep porch -``` - -Sample output: - -```bash -packagerevs config.porch.kpt.dev/v1alpha1 true PackageRev -packagevariants config.porch.kpt.dev/v1alpha1 true PackageVariant -packagevariantsets config.porch.kpt.dev/v1alpha2 true PackageVariantSet -repositories config.porch.kpt.dev/v1alpha1 true Repository -packagerevisionresources porch.kpt.dev/v1alpha1 true PackageRevisionResources -packagerevisions porch.kpt.dev/v1alpha1 true PackageRevision -packages porch.kpt.dev/v1alpha1 true PorchPackage -``` - -## Create Repositories using your local Porch server - -To connect Porch to Gitea, follow [step 7 in the Starting with Porch](../user-guides/install-and-using-porch.md) -tutorial to create the repositories in Porch. - -You will notice logging messages in VS Code when you run the `kubectl apply -f porch-repositories.yaml` command. - -You can check that your locally running Porch server has created the repositories by running the `porchctl` command: - -```bash -porchctl repo get -A -``` - -Sample output: - -```bash -NAME TYPE CONTENT DEPLOYMENT READY ADDRESS -external-blueprints git Package false True https://github.com/nephio-project/free5gc-packages.git -management git Package false True http://172.18.255.200:3000/nephio/management.git -``` - -You can also check the repositories using *kubectl*. - -```bash -kubectl get repositories -n porch-demo -``` - -Sample output: - -```bash -NAME TYPE CONTENT DEPLOYMENT READY ADDRESS -external-blueprints git Package false True https://github.com/nephio-project/free5gc-packages.git -management git Package false True http://172.18.255.200:3000/nephio/management.git -``` - -You now have a locally running Porch (API)server. Happy developing! 
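-
-For reference, the repositories shown above are plain *Repository* custom resources in the
-*config.porch.kpt.dev/v1alpha1* API group. A minimal sketch of such a resource is shown below; the Gitea URL matches
-the sample output above, while the branch, directory, and secret name are illustrative assumptions:
-
-```yaml
-apiVersion: config.porch.kpt.dev/v1alpha1
-kind: Repository
-metadata:
-  name: management
-  namespace: porch-demo
-spec:
-  type: git
-  content: Package
-  deployment: false
-  git:
-    repo: http://172.18.255.200:3000/nephio/management.git
-    branch: main       # assumed branch
-    directory: /
-    secretRef:
-      name: gitea-credentials   # hypothetical secret holding the git credentials
-```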
-
-## Restart from scratch
-
-Sometimes the development cluster gets cluttered and you may experience weird behavior from porch.
-In this case you might want to restart from scratch by deleting the development cluster with the following
-command:
-
-```bash
-kind delete cluster --name porch-test
-```
-
-and running the [setup script](https://github.com/nephio-project/porch/blob/main/scripts/setup-dev-env.sh) again:
-
-```bash
-./scripts/setup-dev-env.sh
-```
-
-## Getting started with actual development
-
-You can find a detailed description of the actual development process [here](dev-process.md).
-
-## Enabling Open Telemetry/Jaeger tracing
-
-### Enabling tracing on a Porch deployment
-
-Follow the steps below to enable Open Telemetry/Jaeger tracing on your Porch deployment.
-
-1. Apply the Porch *deployment.yaml* manifest for Jaeger:
-
-```bash
-kubectl apply -f https://raw.githubusercontent.com/nephio-project/porch/refs/heads/main/deployments/tracing/deployment.yaml
-```
-
-2. Add the environment variable *OTEL* to the porch-server manifest:
-
-```bash
-kubectl edit deployment -n porch-system porch-server
-```
-
-```yaml
-env:
-- name: OTEL
-  value: otel://jaeger-oltp:4317
-```
-
-3. Set up port forwarding of the Jaeger HTTP port to your local machine:
-
-```bash
-kubectl port-forward -n porch-system service/jaeger-http 16686
-```
-
-4. Open the Jaeger UI in your browser at *http://localhost:16686*
-
-### Enable tracing on a local Porch server
-
-Follow the steps below to enable Open Telemetry/Jaeger tracing on a porch server running locally on your machine, such
-as in VS Code.
-
-1. Download the Jaeger binary tarball for your local machine architecture from
-   [the Jaeger download page](https://www.jaegertracing.io/download/#binaries) and untar the tarball in a suitable
-   directory.
-
-2. Run Jaeger:
-
-```bash
-cd jaeger
-./jaeger-all-in-one
-```
-
-3. 
Configure the Porch server to output Open Telemetry traces:
-    - Set the *OTEL* environment variable to point at the Jaeger server.
-    - In *.vscode/launch.json*:
-
-```json
-"env": {
-    ...
-    ...
-    "OTEL": "otel://localhost:4317",
-    ...
-    ...
-}
-```
-
-    - In a shell:
-
-```bash
-export OTEL="otel://localhost:4317"
-```
-
-4. Open the Jaeger UI in your browser at *http://localhost:16686*
-
-5. Run the Porch Server.
-
diff --git a/content/en/docs/porch/function-runner-pod-templates.md b/content/en/docs/porch/function-runner-pod-templates.md
deleted file mode 100644
index 85c000aa..00000000
--- a/content/en/docs/porch/function-runner-pod-templates.md
+++ /dev/null
@@ -1,98 +0,0 @@
----
-title: "Function runner pod templating"
-type: docs
-weight: 4
-description:
----
-
-## Why
-
-`porch-fn-runner` implements a simplistic function-as-a-service for executing kpt functions, running the needed kpt
-functions wrapped in a GRPC server. The function runner starts a number of function evaluator pods, one for each of the
-kpt functions, along with a front-end service pointing to its respective pod. As with any operator that manages pods,
-it is good to provide some templating and parametrization capabilities for the pods that the function runner manages.
-
-## Contract for writing pod templates
-
-The following contract needs to be fulfilled by any function evaluator pod template:
-
-1. There is a container named "function".
-2. The entrypoint of the "function" container will start the wrapper GRPC server.
-3. The image of the "function" container can be set to the kpt function's image without impacting starting the
-   entrypoint.
-4. The arguments of the "function" container can be appended with the entries from the Dockerfile ENTRYPOINT of the kpt
-   function image.
-
-## Enabling pod templating on function runner
-
-A ConfigMap with the pod template should be created in the namespace where the porch-fn-runner pod is running.
-The ConfigMap's name should then be passed to the function runner as the value of the `--function-pod-template`
-command line argument in its pod spec:
-
-```yaml
-...
-spec:
-  serviceAccountName: porch-fn-runner
-  containers:
-  - name: function-runner
-    image: gcr.io/example-google-project-id/porch-function-runner:latest
-    imagePullPolicy: IfNotPresent
-    command:
-    - /server
-    - --config=/config.yaml
-    - --functions=/functions
-    - --pod-namespace=porch-fn-system
-    - --function-pod-template=kpt-function-eval-pod-template
-    env:
-    - name: WRAPPER_SERVER_IMAGE
-      value: gcr.io/example-google-project-id/porch-wrapper-server:latest
-    ports:
-    - containerPort: 9445
-    # Add grpc readiness probe to ensure the cache is ready
-    readinessProbe:
-      exec:
-        command:
-        - /grpc-health-probe
-        - -addr
-        - localhost:9445
-...
-```
-
-## Example pod template
-
-The pod template ConfigMap below matches the default behavior:
-
-```yaml
-apiVersion: v1
-kind: ConfigMap
-metadata:
-  name: kpt-function-eval-pod-template
-data:
-  template: |
-    apiVersion: v1
-    kind: Pod
-    metadata:
-      annotations:
-        cluster-autoscaler.kubernetes.io/safe-to-evict: "true"
-    spec:
-      initContainers:
-      - name: copy-wrapper-server
-        image: gcr.io/example-google-project-id/porch-wrapper-server:latest
-        command:
-        - cp
-        - -a
-        - /wrapper-server/.
-        - /wrapper-server-tools
-        volumeMounts:
-        - name: wrapper-server-tools
-          mountPath: /wrapper-server-tools
-      containers:
-      - name: function
-        image: image-replaced-by-kpt-func-image
-        command:
-        - /wrapper-server-tools/wrapper-server
-        volumeMounts:
-        - name: wrapper-server-tools
-          mountPath: /wrapper-server-tools
-      volumes:
-      - name: wrapper-server-tools
-        emptyDir: {}
-```
diff --git a/content/en/docs/porch/package-orchestration.md b/content/en/docs/porch/package-orchestration.md
deleted file mode 100644
index 9e899713..00000000
--- a/content/en/docs/porch/package-orchestration.md
+++ /dev/null
@@ -1,408 +0,0 @@
----
-title: "Package Orchestration"
-type: docs
-weight: 2
-description:
----
-
-## Why
-
-People who want to take advantage of the benefits of [Configuration as Data](config-as-data.md) can do so today using
-the [kpt](https://kpt.dev) CLI and the kpt function ecosystem, including its [functions catalog](https://catalog.kpt.dev/).
-Package authoring is possible using a variety of editors with [YAML](https://yaml.org/) support. That said, a delightful
-UI experience of WYSIWYG package authoring which supports the broader package lifecycle, including package authoring
-with *guardrails*, approval workflow, package deployment, and more, is not yet available.
-
-*Package Orchestration* (Porch) is part of the Nephio implementation of a Configuration as Data approach. It offers an
-API and a CLI that enable building that delightful UI experience for supporting the configuration lifecycle.
-
-## Core Concepts
-
-This section briefly describes the core concepts of package orchestration:
-
-***Package***: A package is a collection of related configuration files containing the configuration of [KRM][krm]
-**resources**. Specifically, configuration packages are [kpt packages](https://kpt.dev/).
-
-***Repository***: Repositories store packages, for example in [git][] or [OCI][oci]. 
([more details](#repositories))
-
-Packages are sequentially ***versioned***; multiple versions of the same package may exist in a repository.
-([more details](#package-versioning))
-
-A package may have a link (URL) to an ***upstream package*** (a specific version) from which it was cloned.
-([more details](#package-relationships))
-
-A package may be in one of several lifecycle stages:
-
-* ***Draft*** - the package is being created or edited. The package contents can be modified, but the package is not
-  ready to be used (i.e. deployed)
-* ***Proposed*** - the author of the package has proposed that the package be published
-* ***Published*** - the changes to the package have been approved and the package is ready to be used. Published
-  packages can be deployed or cloned
-
-***Functions*** (specifically, [KRM functions][krm functions]) can be applied to packages to mutate or validate the
-resources within them. Functions can be applied to a package to perform specific package mutations while editing a
-package draft, and functions can be added to the package's Kptfile [pipeline][].
-
-A repository can be designated as a ***deployment repository***. *Published* packages in a deployment repository are
-considered deployment-ready. 
([more details](#deployment)) - -## Core Components of Configuration as Data Implementation - -The Core implementation of Configuration as Data, *CaD Core*, is a set of components and APIs which collectively enable: - -* Registration of repositories (Git, OCI) containing kpt packages and the discovery of packages -* Management of package lifecycles, including authoring, versioning, deletion, creation and mutations of a package draft, - process of proposing the package draft, and publishing of the approved package -* Package lifecycle operations such as: - - * assisted or automated rollout of package upgrade when a new version of the upstream package version becomes - available (3 way merge) - * rollback of a package to previous version -* Deployment of packages from deployment repositories and observability of their deployment status -* Permission model that allows role-based access control - -### High-Level Architecture - -At the high level, the Core CaD functionality comprises: - -* a generic (i.e. not task-specific) package orchestration service implementing - - * package repository management - * package discovery, authoring and lifecycle management - -* [porchctl](user-guides/porchctl-cli-guide.md) - a Git-native, schema-aware, extensible client-side tool for managing KRM packages -* a GitOps-based deployment mechanism (for example [configsync][]), which distributes and deploys configuration, and - provides observability of the status of deployed resources -* a task-specific UI supporting repository management, package discovery, authoring, and lifecycle - -![CaD Core Architecture](/static/images/porch/CaD-Core-Architecture.svg) - -## CaD Concepts Elaborated - -Concepts briefly introduced above are elaborated in more detail in this section. - -### Repositories - -Porch and [configsync][] currently integrate with [git][] repositories, and there is an existing design to add OCI -support to kpt. 
Initially, the Package Orchestration service will prioritize integration with [git][], and support for
-additional repository types may be added in the future as required.
-
-Requirements applicable to all repositories include the ability to store packages, their versions, and sufficient
-metadata associated with a package to capture:
-
-* package dependency relationships (upstream - downstream)
-* package lifecycle state (draft, proposed, published)
-* package purpose (base package)
-* (optionally) customer-defined attributes
-
-At repository registration, customers must be able to specify details needed to store packages in appropriate locations
-in the repository. For example, registration of a Git repository must accept a branch and a directory.
-
-{{% alert title="Note" color="primary" %}}
-
-A user role with sufficient permissions can register a package or function repository, including repositories
-containing functions authored by the customer or other providers. Since the functions in the registered repositories
-become discoverable, customers must be aware of the implications of registering function repositories and trust the
-contents thereof.
-
-{{% /alert %}}
-
-### Package Versioning
-
-Packages are sequentially versioned. The important requirements are:
-
-* the ability to compare any two versions of a package and determine whether one is newer than, equal to, or older
-  than the other
-* the ability to support automatic assignment of versions
-* the ability to support [optimistic concurrency][optimistic-concurrency] of package changes via version numbers
-* a simple model which easily supports automation
-
-We use a simple integer sequence to represent package versions.
-
-### Package Relationships
-
-Kpt packages support the concept of ***upstream***. When a package is cloned from another, the new package
-(called the ***downstream*** package) maintains an upstream link to the specific version of the package from which it
-was cloned. 
If a new version of the upstream package becomes available, the upstream link can be used to update the downstream
-package.
-
-### Deployment
-
-The deployment mechanism is responsible for deploying configuration packages from a repository and affecting the live
-state. Because the configuration is stored in standard repositories (Git, and in the future OCI), the deployment
-component is pluggable. By default, [configsync][] is the deployment mechanism used by the CaD Core implementation, but
-others can be used as well.
-
-Here we highlight some key attributes of the deployment mechanism and its integration within the CaD Core:
-
-* _Published_ packages in a deployment repository are considered ready to be deployed
-* configsync supports deploying individual packages and whole repositories. For Git specifically, that translates to a
-  requirement to be able to specify repository, branch/tag/ref, and directory when instructing configsync to deploy a
-  package.
-* _Draft_ packages need to be identified in such a way that configsync can easily avoid deploying them.
-* configsync needs to be able to pin to specific versions of deployable packages in order to orchestrate rollouts and
-  rollbacks. This means it must be possible to GET a specific version of a package.
-* configsync needs to be able to discover when new versions are available for deployment.
-
-## Package Orchestration - Porch
-
-Having established the context of the CaD Core components and the overall architecture, the remainder of the document
-will focus on **Porch** - the Package Orchestration service.
-
-To reiterate, the role of the Package Orchestration service among the CaD Core components covers:
-
-* [Repository Management](#repository-management)
-* [Package Discovery](#package-discovery)
-* [Package Authoring](#package-authoring) and Lifecycle
-
-In the following sections we'll expand on each of these areas. 
The term _client_ used in these sections can be
-either a person interacting with the UI, such as a web application or a command-line tool, or an automated agent or
-process.
-
-### Repository Management
-
-The repository management functionality of the Package Orchestration service enables the client to:
-
-* register, unregister, and update the registration of repositories, and discover registered repositories. Git
-  repository integration will be available first, with OCI and possibly more delivered in subsequent releases.
-* manage repository-wide upstream/downstream relationships, i.e. designate a default upstream repository from which
-  packages will be cloned.
-* annotate a repository with metadata, such as whether the repository contains deployment-ready packages; metadata can
-  be application or customer specific
-
-### Package Discovery
-
-The package discovery functionality of the Package Orchestration service enables the client to:
-
-* browse packages in a repository
-* discover configuration packages in registered repositories and sort/filter based on the repository containing the
-  package, package metadata, version, and package lifecycle stage (draft, proposed, published)
-* retrieve the resources and metadata of an individual package, including the latest version or any specific version
-  or draft of a package, for the purpose of introspection of a single package or for comparison of the contents of
-  multiple versions of a package, or related packages
-* enumerate _upstream_ packages available for creating (cloning) a _downstream_ package
-* identify downstream packages that need to be upgraded after a change is made to an upstream package
-* identify all deployment-ready packages in a deployment repository that are ready to be synced to a deployment target
-  by configsync
-* identify new versions of packages in a deployment repository that can be rolled out to a deployment target by
-  configsync
-
-### Package Authoring
-
-The package authoring and lifecycle functionality of the 
Package Orchestration service enables the client to:
-
-* Create a package _draft_ via one of the following means:
-
-  * an empty draft 'from scratch' (`porchctl rpkg init`)
-  * a clone of an upstream package (`porchctl rpkg clone`) from either a
-    registered upstream repository or from another accessible, unregistered, repository
-  * edit an existing package (`porchctl rpkg pull`)
-  * roll back / restore a package to any of its previous versions
-    (`porchctl rpkg pull` of a previous version)
-
-* Push changes to a package _draft_. In general, mutations include adding/modifying/deleting any part of the package's
-  contents. Some specific examples include:
-
-  * add/change/delete package metadata (i.e. some properties in the `Kptfile`)
-  * add/change/delete resources in the package
-  * add function mutators/validators to the package's pipeline
-  * add/change/delete a sub-package
-  * retrieve the contents of the package for arbitrary client-side mutations (`porchctl rpkg pull`)
-  * update/replace the package contents with new contents, for example the results of client-side mutations by a UI
-    (`porchctl rpkg push`)
-
-* Rebase a package onto another upstream base package or onto a newer version of the same package (to
-  aid with conflict resolution during the process of publishing a draft package)
-
-* Get feedback during package authoring, and assistance in recovery from merge conflicts, invalid package changes, and
-  guardrail violations
-
-* Propose that a _draft_ package be _published_.
-* Apply arbitrary decision criteria and, by a manual or automated action, approve (or reject) the proposal that a
-  _draft_ package be _published_.
-* Perform bulk operations such as:
-
-  * Assisted/automated update (upgrade, rollback) of groups of packages matching specific criteria (i.e. 
a base package
-has a new version, or a specific base package version has a vulnerability and should be rolled back)
-  * Proposed change validation (pre-validating a change that adds a validator function to a base package)
-
-* Delete an existing package.
-
-#### Authoring & Latency
-
-An important goal of the Package Orchestration service is to support the building of task-specific UIs. In order to
-deliver the low-latency user experience acceptable for UI interactions, the innermost authoring loop (depicted below)
-requires:
-
-* high performance access to the package store (load/save package) with caching
-* low latency execution of mutations and transformations on the package contents
-* low latency [KRM function][krm functions] evaluation and package rendering (evaluation of the package's function
-  pipelines)
-
-![Inner Loop](/static/images/porch/Porch-Inner-Loop.svg)
-
-#### Authoring & Access Control
-
-A client can assign actors (persons, service accounts) to roles that determine which operations they are allowed to
-perform in order to satisfy the requirements of the basic roles. For example, only permitted roles can:
-
-* manipulate repository registration, and the enforcement of repository-wide invariants and guardrails
-* create a draft of a package and propose that the draft be published
-* approve (or reject) the proposal to publish a draft package
-* clone a package from a specific upstream repository
-* perform bulk operations such as a rollout upgrade of downstream packages, including rollouts across multiple
-  downstream repositories
-* etc.
-
-### Porch Architecture
-
-The Package Orchestration service, **Porch**, is designed to be hosted in a [Kubernetes](https://kubernetes.io/)
-cluster.
-
-The overall architecture is shown below, and also includes existing components (k8s apiserver and configsync). 
-
-![Porch Architecture](/static/images/porch/Porch-Architecture.svg)
-
-In addition to satisfying the requirements highlighted above, the focus of the architecture was to:
-
-* establish clear components and interfaces
-* support a low-latency package authoring experience required by the UIs
-
-The Porch components are:
-
-#### Porch Server
-
-The Porch server is implemented as a [Kubernetes extension API server][apiserver]. The benefits of using a Kubernetes
-extension API server are:
-
-* well-defined and familiar API style
-* availability of generated clients
-* integration with the existing Kubernetes ecosystem and tools such as the `kubectl` CLI and
-  [RBAC](https://kubernetes.io/docs/reference/access-authn-authz/rbac/)
-* avoids the requirement to open another network port to access a separate endpoint running inside the k8s cluster
-  (this is a distinct advantage over GRPC, which we considered as an alternative approach)
-
-Resources implemented by Porch include:
-
-* `PackageRevision` - represents the _metadata_ of the configuration package revision stored in a _package_ repository.
-* `PackageRevisionResources` - represents the _contents_ of the package revision
-
-Note that each configuration package revision is represented by a _pair_ of resources which each present a different
-view (or [representation][]) of the same underlying package revision.
-
-Repository registration is supported by a `Repository` [custom resource][crds].
-
-**Porch server** itself comprises several key components, including:
-
-* The *Porch aggregated apiserver* which implements the integration into the main Kubernetes apiserver, and directly
-  serves API requests for the `PackageRevision` and `PackageRevisionResources` resources.
* The package orchestration *engine*, which implements the package lifecycle operations and the package mutation
  workflows.
* The *CaD Library*, which implements specific package manipulation algorithms, such as package rendering (evaluation
  of the package's function *pipeline*) and initialization of a new package. The CaD Library is shared with `kpt`,
  where it likewise provides the core package manipulation algorithms.
* The *package cache*, which enables both local caching and abstract manipulation of packages and their contents,
  irrespective of the underlying storage mechanism (Git or OCI).
* The *repository adapters* for Git and OCI, which implement the specific logic of interacting with those types of
  package repositories.
* The *function runtime*, which implements support for evaluating [kpt functions][functions] and a multi-tier cache of
  functions to support low-latency function evaluation.

#### Function Runner

The **function runner** is a separate service responsible for evaluating [kpt functions][functions]. The function
runner exposes a [GRPC](https://grpc.io/) endpoint which enables evaluating a kpt function on a provided configuration
package.

GRPC was chosen for the function runner service because the [requirements](#grpc-api) that informed the choice of a KRM
API for the Package Orchestration service do not apply here: the function runner is an internal microservice, an
implementation detail not exposed to external callers. This makes GRPC perfectly suitable.

The function runner also maintains a cache of functions to support low-latency function evaluation.
It achieves this through the two mechanisms available to it for evaluating a function:

The **Executable Evaluation** approach executes the function within the pod runtime, through shell-based invocation of
the function binary; the function binaries are bundled inside the function runner image itself.

The **Pod Evaluation** approach is used when the invoked function is not available via the Executable Evaluation
approach. The function runner pod starts the function pod corresponding to the invoked function, along with a front-end
service. Once the pod and service are up and running, its exposed GRPC endpoint is invoked for function evaluation,
passing the input package. For this mechanism, the function runner reads the list of functions and their images from a
config file supplied at startup, and spawns a function pod, along with a corresponding front-end service, for each
configured function. These function pods/services are terminated by the function runner after a pre-configured period
of inactivity (default 30 minutes) and recreated on the next invocation.
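Whichever mechanism is used, the contract between the function runner and a kpt function is the KRM function wire
format: the function consumes and emits a `ResourceList` wrapping the package's resources, with an optional
`functionConfig`. A minimal sketch of such an input (the resource and functionConfig contents are illustrative, not
taken from Porch):

```yaml
apiVersion: config.kubernetes.io/v1
kind: ResourceList
items:                          # the package's resources
- apiVersion: v1
  kind: ConfigMap
  metadata:
    name: example-cm            # hypothetical package resource
  data:
    foo: bar
functionConfig:                 # optional per-invocation function configuration
  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: set-labels-config
  data:
    app: backend
```

The function returns a `ResourceList` of the same shape, with the items mutated (or validation results attached).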
#### CaD Library

The [kpt](https://kpt.dev/) CLI already implements foundational package manipulation algorithms in order to provide the
command line user experience, including:

* [kpt pkg init](https://kpt.dev/reference/cli/pkg/init/) - create an empty, valid, KRM package
* [kpt pkg get](https://kpt.dev/reference/cli/pkg/get/) - create a downstream package by cloning an upstream package;
  set up the upstream reference of the downstream package
* [kpt pkg update](https://kpt.dev/reference/cli/pkg/update/) - update the downstream package with changes from a new
  version of the upstream, using a 3-way merge
* [kpt fn eval](https://kpt.dev/reference/cli/fn/eval/) - evaluate a kpt function on a package
* [kpt fn render](https://kpt.dev/reference/cli/fn/render/) - render the package by executing the function pipeline of
  the package and its nested packages
* [kpt fn source](https://kpt.dev/reference/cli/fn/source/) and [kpt fn sink](https://kpt.dev/reference/cli/fn/sink/) -
  read a package from local disk as a `ResourceList`, and write a package represented as a `ResourceList` to local disk

The same set of primitives forms the foundational building blocks of the package orchestration service. Further, the
package orchestration service combines these primitives into higher-level operations (for example, the package
orchestrator renders packages automatically on changes; future versions will support bulk operations such as the
upgrade of multiple packages).

The implementation of the package manipulation primitives in kpt was refactored (with the initial refactoring
completed, and more to be performed as needed) in order to:

* create a reusable CaD library, usable by both the kpt CLI and the Package Orchestration service
* create abstractions for dependencies which differ between the CLI and Porch, most notably the dependency on Docker
  for function evaluation, and the dependency on the local file system for package rendering.
Over time, the CaD Library will provide the package manipulation primitives:

* create a valid empty package (init)
* update package upstream pointers (get)
* perform a 3-way merge (update)
* render - the core package rendering algorithm, using a pluggable function evaluator to support:

  * function evaluation via Docker (used by the kpt CLI)
  * function evaluation via an RPC to a service or an appropriate function sandbox
  * high-performance evaluation of trusted, built-in functions without a sandbox

* heal configuration (restore comments after a lossy transformation)

and both the kpt CLI and Porch will consume the library. This approach allows leveraging the investment already made in
the high-quality package manipulation primitives, and enables functional parity between the kpt CLI and the Package
Orchestration service.

## User Guide

Find the Porch User Guide in a dedicated
[document](https://github.com/kptdev/kpt/blob/main/site/guides/porch-user-guide.md).

## Open Issues/Questions

### Deployment Rollouts & Orchestration

__Not Yet Resolved__

Cross-cluster rollouts and orchestration of deployment activity. For example, a package is deployed by Config Sync in
cluster A, and only on success is the same (or a different) package deployed by Config Sync in cluster B.

## Alternatives Considered

### GRPC API

We considered the use of [GRPC](https://grpc.io/) for the Porch API.
The primary advantages of implementing Porch as a Kubernetes extension apiserver are:

* customers don't have to open another port to their Kubernetes cluster, and can reuse their existing infrastructure
* customers can likewise reuse the existing, familiar, Kubernetes tooling ecosystem


[krm]: https://github.com/kubernetes/design-proposals-archive/blob/main/architecture/resource-management.md
[functions]: https://kpt.dev/book/02-concepts/03-functions
[krm functions]: https://github.com/kubernetes-sigs/kustomize/blob/master/cmd/config/docs/api-conventions/functions-spec.md
[pipeline]: https://kpt.dev/book/04-using-functions/01-declarative-function-execution
[Config Sync]: https://cloud.google.com/anthos-config-management/docs/config-sync-overview
[kpt]: https://kpt.dev/
[git]: https://git-scm.org/
[optimistic-concurrency]: https://en.wikipedia.org/wiki/Optimistic_concurrency_control
[apiserver]: https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/
[representation]: https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#differing-representations
[crds]: https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/
[oci]: https://github.com/opencontainers/image-spec/blob/main/spec.md

diff --git a/content/en/docs/porch/package-variant.md b/content/en/docs/porch/package-variant.md
deleted file mode 100644
index 27312c2f..00000000
--- a/content/en/docs/porch/package-variant.md
+++ /dev/null
@@ -1,1219 +0,0 @@
---
title: "Package Variant Controller"
type: docs
weight: 3
description:
---

## Why

When deploying workloads across large fleets of clusters, it is often necessary to modify the workload configuration
for a specific cluster. Additionally, those workloads may evolve over time with security or other patches that require
updates.
[Configuration as Data](config-as-data.md) in general, and [Package Orchestration](package-orchestration.md) in
particular, can assist with this. However, they are still centered around the manual, one-by-one hydration and
configuration of a workload.

This proposal introduces concepts and a set of resources for automating the creation and lifecycle management of
package variants. These are designed to address several different dimensions of scalability:

- Number of different workloads for a given cluster
- Number of clusters across which those workloads are deployed
- Different types or characteristics of those clusters
- Complexity of the organizations deploying those workloads
- Changes to those workloads over time

## See Also

- [Package Orchestration](package-orchestration.md)
- [#3347](https://github.com/GoogleContainerTools/kpt/issues/3347) Bulk package creation
- [#3243](https://github.com/GoogleContainerTools/kpt/issues/3243) Support bulk package upgrades
- [#3488](https://github.com/GoogleContainerTools/kpt/issues/3488) Porch: BaseRevision controller aka Fan Out
  controller - but more
- [Managing Package
  Revisions](https://docs.google.com/document/d/1EzUUDxLm5jlEG9d47AQOxA2W6HmSWVjL1zqyIFkqV1I/edit?usp=sharing)
- [Porch UpstreamPolicy Resource
  API](https://docs.google.com/document/d/1OxNon_1ri4YOqNtEQivBgeRzIPuX9sOyu-nYukjwN1Q/edit?usp=sharing&resourcekey=0-2nDYYH5Kw58IwCatA4uDQw)

## Core Concepts

For this solution, workloads are represented by packages. A package is a more general concept, being an arbitrary
bundle of resources, and is therefore sufficient to solve the originally stated problem.

The basic idea here is to introduce a *PackageVariant* resource that manages the derivation of a variant of a package
from the original source package, and manages the evolution of that variant over time. This effectively automates the
human-centered process for variant creation that one might use with *kpt*:

1. Clone an upstream package locally
1. Make changes to the local package, setting values in resources and executing KRM functions
1. Push the package to a new repository, and tag it as a new version

Similarly, *PackageVariant* can manage the process of updating a package when a new version of the upstream package is
published. In the human-centered workflow, a user would use `kpt pkg update` to pull in changes to their derivative
package. When using a *PackageVariant* resource, the change would be made to the upstream specification in the
resource, and the controller would propose a new Draft package reflecting the outcome of `kpt pkg update`.

By automating this process, we open up the possibility of performing systematic changes that tie back to our different
dimensions of scalability. We can use data about the specific variant we are creating to look up additional context in
the Porch cluster, and copy that information into the variant. That context is a well-structured resource, not simply
key/value pairs. KRM functions within the package can interpret the resource, modifying other resources in the package
accordingly. The context can come from multiple sources that vary differently along those dimensions of scalability.
For example, one piece of information may vary by region, another by individual site, another by cloud provider, and
yet another based on whether we are deploying to development, staging, or production. By utilizing resources in the
Porch cluster as our input model, we can represent this complexity in a manageable model that is reused across many
packages, rather than scattered in package-specific templates or key/value pairs without any structure. KRM functions,
also reused across packages but configured as needed for the specific package, are used to interpret the resources
within the package.
This decouples the authoring of the packages, the creation of the input model, and the deploy-time use of that input
model within the packages, allowing those activities to be performed by different teams or organizations.

We refer to the mechanism described above as configuration injection. It enables dynamic, context-aware creation of
variants. Another way to think about it is as a continuous reconciliation, much like other Kubernetes controllers. In
this case, the inputs are a parent package *P* and a context *C* (which may be a collection of many independent
resources), with the output being the derived package *D*. When a new version of *C* is created by updates to
in-cluster resources, we get a new revision of *D*, customized based upon the updated context. Similarly, the user (or
an automation) can monitor for new versions of *P*; when one arrives, the *PackageVariant* can be updated to point to
that new version, resulting in a newly proposed Draft of *D*, updated to reflect the upstream changes. This will be
explained in more detail below.

This proposal also introduces a way to "fan out", or create multiple *PackageVariant* resources declaratively, based
upon a list or selector, with the *PackageVariantSet* resource. This is combined with the injection mechanism to enable
the generation of large sets of variants that are specialized to a particular target repository, cluster, or other
resource.

## Basic Package Cloning

The *PackageVariant* resource controls the creation and lifecycle of a variant of a package. That is, it defines the
original (upstream) package, the new (downstream) package, and the changes (mutations) that need to be made to
transform the upstream into the downstream. It also allows the user to specify policies around the adoption, deletion,
and update of package revisions that are under the control of the package variant controller.

The simple clone operation is shown in *Figure 1*.
| ![Figure 1: Basic Package Cloning](/static/images/porch/packagevariant-clone.png) | ![Legend](/static/images/porch/packagevariant-legend.png) |
| :---: | :---: |
| *Figure 1: Basic Package Cloning* | *Legend* |

{{% alert title="Note" color="primary" %}}

*Proposal* and *approval* are not handled by the package variant controller. Those are left to humans or other
controllers. The exception is the proposal of deletion (there is no concept of a "Draft" deletion), which the package
variant controller will do, depending upon the specified deletion policy.

{{% /alert %}}

### PackageRevision Metadata

The package variant controller utilizes the Porch APIs. This means that it is not just doing a clone operation, but is
in fact creating a Porch *PackageRevision* resource. In particular, that resource can contain Kubernetes metadata that
is not part of the package as stored in the repository.

Some of that metadata is necessary for the management of the *PackageRevision* by the package variant controller - for
example, the owner reference indicating which *PackageVariant* created the *PackageRevision*. These are not under the
user's control. However, the *PackageVariant* resource does make the annotations and labels of the *PackageRevision*
available as values that may be controlled during the creation of the *PackageRevision*. This can assist in additional
automation workflows.

## Introducing Variance

Just cloning is not that interesting, so the *PackageVariant* resource also allows you to control various ways of
mutating the original package to create the variant.

### Package Context[^porch17]

Every *kpt* package that is fetched with `--for-deployment` will contain a ConfigMap called *kptfile.kpt.dev*.
Analogously, when Porch creates a package in a deployment repository, it will create this ConfigMap, if it does not
already exist.
*Kpt* (or Porch) will automatically add a `name` key to the ConfigMap data, with the value of the package name. This
ConfigMap can then be used as input to functions in the *kpt* function pipeline.

This process holds true for package revisions created via the package variant controller as well. Additionally, the
author of the *PackageVariant* resource can specify additional key-value pairs to insert into the package context, as
shown in *Figure 2*.

| ![Figure 2: Package Context Mutation](/static/images/porch/packagevariant-context.png) |
| :---: |
| *Figure 2: Package Context Mutation* |

While this is convenient, it can easily be abused, leading to over-parameterization. The preferred approach is
configuration injection, as described below, since it allows inputs to adhere to a well-defined, reusable schema,
rather than simple key/value pairs.

### Kptfile Function Pipeline Editing[^porch18]

In the manual workflow, one of the ways we edit packages is by running KRM functions imperatively. *PackageVariant*
offers a similar capability, by allowing the user to add functions to the beginning of the downstream package *Kptfile*
mutators pipeline. These functions will then execute before the functions present in the upstream pipeline. It is not
exactly the same as running functions imperatively, because they will also be run in every subsequent execution of the
downstream package function pipeline. But it can achieve the same goals.

For example, consider an upstream package that includes a Namespace resource. In many organizations, the deployer of
the workload may not have the permissions to provision cluster-scoped resources like namespaces. This means that they
would not be able to use this upstream package without removing the Namespace resource (assuming that they only have
access to a pipeline that deploys with constrained permissions).
By adding a function that removes Namespace resources, along with a call to set-namespace, they can take advantage of
the upstream package.

Similarly, the *Kptfile* pipeline editing feature provides an easy mechanism for the deployer to create and set the
namespace, if their downstream package application pipeline allows it, as seen in *Figure 3*.[^setns]

| ![Figure 3: KRM Function Pipeline Editing](/static/images/porch/packagevariant-function.png) |
| :---: |
| *Figure 3: Kptfile Function Pipeline Editing* |

### Configuration Injection[^porch18]

Adding values to the package context, or functions to the pipeline, works for configuration that is under the control
of the creator of the *PackageVariant* resource. However, in more advanced use cases, we may need to specialize the
package based upon other contextual information. This particularly comes into play when the user deploying the workload
does not have direct control over the context in which it is being deployed. For example, one part of the organization
may manage the infrastructure - that is, the cluster in which we are deploying the workload - and another part the
actual workload. We would like to be able to pull in inputs specified by the infrastructure team automatically, based
upon the cluster to which we are deploying the workload, or perhaps the region in which that cluster is deployed.

To facilitate this, the package variant controller can "inject" configuration directly into the package. This means it
will use information specific to this instance of the package to look up a resource in the Porch cluster, and copy that
information into the package. Of course, the package has to be ready to receive this information. So, there is a
protocol for facilitating this dance:

- Packages may contain resources annotated with *kpt.dev/config-injection*
- Often, these will also be *config.kubernetes.io/local-config* resources, as they are likely just used by local
  functions as input.
  But this is not mandatory.
- The package variant controller will look for any resource in the Kubernetes cluster matching the Group, Version, and
  Kind of the package resource, and satisfying the injection selector.
- The package variant controller will copy the spec field from the matching in-cluster resource to the in-package
  resource, or the data field in the case of a ConfigMap.

| ![Figure 4: Configuration Injection](/static/images/porch/packagevariant-config-injection.png) |
| :---: |
| *Figure 4: Configuration Injection* |

{{% alert title="Note" color="primary" %}}

Because we are injecting data from the Kubernetes cluster, we can also monitor that data for changes. For
each resource we inject, the package variant controller will establish a Kubernetes "watch" on the resource (or perhaps
on the collection of such resources). A change to that resource will result in a new Draft package with the updated
configuration injected.

{{% /alert %}}

There are a number of additional details that will be described in the detailed design below, along with the specific
API definition.

## Lifecycle Management

### Upstream Changes

The package variant controller allows you to specify a specific upstream package revision to clone, or you can specify
a floating tag[^notimplemented].

If you specify a specific upstream revision, then the downstream will not be changed unless the *PackageVariant*
resource itself is modified to point to a new revision. That is, the user must edit the *PackageVariant* and change the
upstream package reference. When that is done, the package variant controller will update any existing Draft package
under its ownership by doing the equivalent of a `kpt pkg update`, to update the downstream to be based upon the new
upstream revision. If a Draft does not exist, then the package variant controller will create a new Draft based on the
current published downstream, and apply the `kpt pkg update`.
This updated Draft must then be proposed and approved like any other package change.

If a floating tag is used, then explicit modification of the *PackageVariant* is not needed. Rather, when the floating
tag is moved to a new tagged revision of the upstream package, the package variant controller will notice, and
automatically propose an update to that revision. For example, the upstream package author may designate three floating
tags: stable, beta, and alpha. The upstream package author can move these tags to specific revisions, and any
*PackageVariant* resources tracking them will propose updates to their downstream packages.

### Adoption and Deletion Policies

When a *PackageVariant* resource is created, it will have a particular repository and package name as the downstream.
The adoption policy controls whether the package variant controller takes over an existing package with that name, in
that repository.

Analogously, when a *PackageVariant* resource is deleted, a decision must be made about whether or not to delete the
downstream package. This is controlled by the deletion policy.

## Fan Out of Variant Generation[^pvsimpl]

When used with a single package, the package variant controller mostly helps us handle the time dimension - producing
new versions of a package as the upstream changes, or as injected resources are updated. It can also be useful for
automating common, systematic changes made when bringing an external package into an organization, or an organizational
package into a team repository.

That is useful, but not extremely compelling by itself. More interesting is when we use *PackageVariant* as a primitive
for automations that act on other dimensions of scale. That means writing controllers that emit *PackageVariant*
resources.
For example, we can create a controller that instantiates a *PackageVariant* for each developer in our organization, or
we can create a controller to manage *PackageVariants* across environments. The ability to not only clone a package,
but to make systematic changes to that package, enables flexible automation.

Workload controllers in Kubernetes are a useful analogy. In Kubernetes, we have different workload controllers, such as
Deployment, StatefulSet, and DaemonSet. Ultimately, all of these result in Pods; however, the decisions about what Pods
to create, how to schedule them across Nodes, how to configure those Pods, and how to manage those Pods as changes
happen are very different with each workload controller. Similarly, we can build different controllers to handle
different ways in which we want to generate *PackageRevisions*. The *PackageVariant* resource provides a convenient
primitive for all of those controllers, allowing them to leverage a range of well-defined operations to mutate the
packages as needed.

A common need is the ability to generate many variants of a package based on a simple list of some entity. Some
examples include generating package variants to spin up development environments for each developer in an organization;
instantiating the same package, with slight configuration changes, across a fleet of clusters; or instantiating some
package per customer.

The package variant set controller is designed to fill this common need. This controller consumes *PackageVariantSet*
resources, and outputs *PackageVariant* resources.
The *PackageVariantSet* defines:

- the upstream package
- targeting criteria
- a template for generating one *PackageVariant* per target

Three types of targeting are supported:

- An explicit list of repositories and package names
- A label selector for Repository objects
- An arbitrary object selector

Rules for generating a *PackageVariant* are associated with a list of targets using a template. That template can have
explicit values for various *PackageVariant* fields, or it can use
[Common Expression Language (CEL)](https://github.com/google/cel-go) expressions to specify the field values.

*Figure 5* shows an example of creating *PackageVariant* resources based upon an explicit list of repositories. In this
example, for the *cluster-01* and *cluster-02* repositories, no template is defined for the resulting
*PackageVariants*; they simply take the defaults. However, for *cluster-03*, a template is defined to change the
downstream package name to *bar*.

| ![Figure 5: PackageVariantSet with Repository List](/static/images/porch/packagevariantset-target-list.png) |
| :---: |
| *Figure 5: PackageVariantSet with Repository List* |

It is also possible to target the same package to a repository more than once, using different names. This is useful,
for example, if the package is used to provision namespaces and you would like to provision many namespaces in the same
cluster. It is also useful if a repository is shared across multiple clusters. In *Figure 6*, two *PackageVariant*
resources for creating the *foo* package in the repository cluster-01 are generated, one for each listed package name.
Since no packageNames field is listed for cluster-02, only one instance is created for that repository.
| ![Figure 6: PackageVariantSet with Package List](/static/images/porch/packagevariantset-target-list-with-packages.png) |
| :---: |
| *Figure 6: PackageVariantSet with Package List* |

*Figure 7* shows an example that combines a repository label selector with configuration injection that varies based
upon the target. The template for the *PackageVariant* includes a CEL expression for one of the injectors, so that the
injection varies systematically based upon the attributes of the target.

| ![Figure 7: PackageVariantSet with Repository Selector](/static/images/porch/packagevariantset-target-repo-selector.png) |
| :---: |
| *Figure 7: PackageVariantSet with Repository Selector* |

## Detailed Design

### PackageVariant API

The Go types below define the PackageVariantSpec.

```go
type PackageVariantSpec struct {
	Upstream   *Upstream   `json:"upstream,omitempty"`
	Downstream *Downstream `json:"downstream,omitempty"`

	AdoptionPolicy AdoptionPolicy `json:"adoptionPolicy,omitempty"`
	DeletionPolicy DeletionPolicy `json:"deletionPolicy,omitempty"`

	Labels      map[string]string `json:"labels,omitempty"`
	Annotations map[string]string `json:"annotations,omitempty"`

	PackageContext *PackageContext     `json:"packageContext,omitempty"`
	Pipeline       *kptfilev1.Pipeline `json:"pipeline,omitempty"`
	Injectors      []InjectionSelector `json:"injectors,omitempty"`
}

type Upstream struct {
	Repo     string `json:"repo,omitempty"`
	Package  string `json:"package,omitempty"`
	Revision string `json:"revision,omitempty"`
}

type Downstream struct {
	Repo    string `json:"repo,omitempty"`
	Package string `json:"package,omitempty"`
}

type PackageContext struct {
	Data       map[string]string `json:"data,omitempty"`
	RemoveKeys []string          `json:"removeKeys,omitempty"`
}

type InjectionSelector struct {
	Group   *string `json:"group,omitempty"`
	Version *string `json:"version,omitempty"`
	Kind    *string `json:"kind,omitempty"`
	Name    string  `json:"name"`
}
```

#### Basic Spec Fields

The Upstream and Downstream fields specify the source package, and the destination repository and package name. The
Repo fields refer to the names of Porch Repository resources in the same namespace as the *PackageVariant* resource.
The Downstream does not contain a revision, because the package variant controller will only create Draft packages.
The Revision of the eventual *PackageRevision* resource will be determined by Porch at the time of approval.

The Labels and Annotations fields list metadata to include on the created *PackageRevision*. These values are set
*only* at the time a Draft package is created. They are ignored for subsequent operations, even if the *PackageVariant*
itself has been modified. This means users are free to change these values on the *PackageRevision*; the package
variant controller will not touch them again.

AdoptionPolicy controls how the package variant controller behaves if it finds an existing *PackageRevision* Draft
matching the Downstream. If the AdoptionPolicy is adoptExisting, then the package variant controller will take
ownership of the Draft, associating it with this *PackageVariant*. This means that it will begin to reconcile the
Draft, just as if it had created it in the first place. An AdoptionPolicy of adoptNone (the default) will simply ignore
any matching Drafts that were not created by the controller.

DeletionPolicy controls how the package variant controller behaves with respect to the *PackageRevisions* that it has
created when the *PackageVariant* resource itself is deleted. A value of delete (the default) will delete the
*PackageRevision* (potentially removing it from a running cluster, if the downstream package has been deployed). A
value of orphan will remove the owner references and leave the *PackageRevisions* in place.
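Putting the basic spec fields together, a minimal *PackageVariant* might look as follows. This is a sketch based on the
types above; the repository, package, and revision names are illustrative:

```yaml
apiVersion: config.porch.kpt.dev/v1alpha1
kind: PackageVariant
metadata:
  namespace: default
  name: my-pv
spec:
  upstream:
    repo: blueprints              # name of a Porch Repository resource (illustrative)
    package: basens
    revision: v1
  downstream:
    repo: deployments             # name of a Porch Repository resource (illustrative)
    package: my-basens            # no revision: the controller only creates Drafts
  adoptionPolicy: adoptExisting   # default is adoptNone
  deletionPolicy: orphan          # default is delete
  labels:
    team: platform                # applied to the created PackageRevision at creation time
```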
#### Package Context Injection

*PackageVariant* resource authors may specify key-value pairs in the spec.packageContext.data field of the resource.
These key-value pairs will be automatically added to the data of the *kptfile.kpt.dev* ConfigMap, if it exists.

Specifying the key `name` is invalid and must fail validation of the *PackageVariant*. This key is reserved for *kpt*
or Porch to set to the package name. Similarly, `package-path` is reserved, and will result in an error.

The spec.packageContext.removeKeys field can also be used to specify a list of keys that the package variant controller
should remove from the data field of the *kptfile.kpt.dev* ConfigMap.

When creating or updating a package, the package variant controller will ensure that:

- The *kptfile.kpt.dev* ConfigMap exists, failing if not
- All of the key-value pairs in spec.packageContext.data exist in the data field of the ConfigMap
- None of the keys listed in spec.packageContext.removeKeys exist in the ConfigMap

{{% alert title="Note" color="primary" %}}

If a user adds a key via *PackageVariant*, then changes the *PackageVariant* to no longer add that key, it will
NOT be removed automatically, unless the user also lists the key in the removeKeys list. This avoids the need to track
which keys were added by *PackageVariant*.

{{% /alert %}}

Similarly, if a user manually adds a key in the downstream that is also listed in the removeKeys field, the package
variant controller will remove that key the next time it needs to update the downstream package. There will be no
attempt to coordinate the "ownership" of these keys.

If the controller is unable to modify the ConfigMap for some reason, this is considered an error, and should prevent
the generation of the Draft. This will result in the condition Ready being set to False.
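For example, a *PackageVariant* spec fragment using these fields might look like the following (the key names are
illustrative; note that `name` and `package-path` may not be used):

```yaml
spec:
  packageContext:
    data:
      environment: staging    # added to the kptfile.kpt.dev ConfigMap data
    removeKeys:
    - debug                   # removed from the ConfigMap data, if present
```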
- -#### Kptfile Function Pipeline Editing - -*PackageVariant* resource creators may specify a list of KRM functions to add to the beginning of the *Kptfile's* pipeline. -These functions are listed in the field spec.pipeline, which is a -[Pipeline](https://github.com/GoogleContainerTools/kpt/blob/cf1f326486214f6b4469d8432287a2fa705b48f5/pkg/api/kptfile/v1/types.go#L236), -just as in the *Kptfile*. The user can therefore prepend both validators and mutators. - -Functions added in this way are always added to the *beginning* of the *Kptfile* pipeline. In order to enable management -of the list on subsequent reconciliations, functions added by the package variant controller will use the Name field -of the [Function](https://github.com/GoogleContainerTools/kpt/blob/cf1f326486214f6b4469d8432287a2fa705b48f5/pkg/api/kptfile/v1/types.go#L283). -In the *Kptfile*, each function will be named as the dot-delimited concatenation of *PackageVariant*, the name of the -*PackageVariant* resource, the function name as specified in the pipeline of the *PackageVariant* resource (if present), and -the positional location of the function in the array. - -For example, if the *PackageVariant* resource contains: - -```yaml -apiVersion: config.porch.kpt.dev/v1alpha1 -kind: PackageVariant -metadata: - namespace: default - name: my-pv -spec: - ... 
- pipeline: - mutators: - - image: gcr.io/kpt-fn/set-namespace:v0.1 - configMap: - namespace: my-ns - name: my-func - - image: gcr.io/kpt-fn/set-labels:v0.1 - configMap: - app: foo -``` - -Then the resulting *Kptfile* will have these two entries prepended to its mutators list: - -```yaml - pipeline: - mutators: - - image: gcr.io/kpt-fn/set-namespace:v0.1 - configMap: - namespace: my-ns - name: PackageVariant.my-pv.my-func.0 - - image: gcr.io/kpt-fn/set-labels:v0.1 - configMap: - app: foo - name: PackageVariant.my-pv..1 -``` - -During subsequent reconciliations, this allows the controller to identify the functions within its control, remove them -all, and re-add them based on its updated content. By including the *PackageVariant* name, we enable chains of -*PackageVariants* to add functions, so long as the user is careful about their choice of resource names and avoids -conflicts. - -If the controller is unable to modify the Pipeline for some reason, this is considered an error and should prevent -generation of the Draft. This will result in the condition Ready being set to False. - -#### Configuration Injection Details - -As described [above](#configuration-injection), configuration injection is a process whereby in-package resources are -matched to in-cluster resources, and the spec of the in-cluster resources is copied to the in-package resource. - -Configuration injection is controlled by a combination of in-package resources with annotations, and *injectors* -(also known as *injection selectors*) defined on the *PackageVariant* resource. Package authors control the injection -points they allow in their packages, by flagging specific resources as *injection points* with an annotation. Creators -of the *PackageVariant* resource specify how to map in-cluster resources to those injection points using the injection -selectors. Injection selectors are defined in the spec.injectors field of the *PackageVariant*. 
This field is an ordered -array of structs containing a GVK (group, version, kind) tuple as separate fields, and name. Only the name is required. -To identify a match, all fields present must match the in-cluster object, and all *GVK* fields present must match the -in-package resource. In general the name will not match the in-package resource; this is discussed in more detail below. - -The annotations, along with the GVK of the annotated resource, allow a package to "advertise" the injections it can -accept and understand. These injection points effectively form a configuration API for the package, and the injection -selectors provide a way for the *PackageVariant* author to specify the inputs for those APIs from the possible values in -the management cluster. If we define those APIs carefully, they can be used across many packages; since they are -KRM resources, we can apply versioning and schema validation to them as well. This creates a more maintainable, -automatable set of APIs for package customization than simple key/value pairs. - -As an example, we may define a GVK that contains service endpoints that many applications use. In each application -package, we would then include an instance of that resource, say called "service-endpoints", and configure a function to -propagate the values from that resource to others within our package. As those endpoints may vary by region, in our -Porch cluster we can create an instance of this GVK for each region: "useast1-service-endpoints", -"useast2-service-endpoints", "uswest1-service-endpoints", etc. When we instantiate the *PackageVariant* for a cluster, we -want to inject the resource corresponding to the region in which the cluster exists. Thus, for each cluster we will -create a *PackageVariant* resource pointing to the upstream package, but with injection selector name values that are -specific to the region for that cluster. 
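-
-As a sketch of the regional example above, the *PackageVariant* for a cluster in the useast1 region might carry an
-injection selector like the following (the GVK, repository names, and resource names are hypothetical):
-
-```yaml
-apiVersion: config.porch.kpt.dev/v1alpha1
-kind: PackageVariant
-metadata:
-  namespace: default
-  name: cluster-a-foo
-spec:
-  upstream:
-    repo: example-repo
-    package: foo
-    revision: v1
-  downstream:
-    repo: cluster-a
-    package: foo
-  injectors:
-  - group: bigco.com
-    version: v1
-    kind: ServiceEndpoints
-    name: useast1-service-endpoints
-```
-
-The equivalent *PackageVariant* for a uswest1 cluster would differ only in its downstream and in the name field of its
-injector.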
-
-It is important to realize that the name of the in-package resource and the in-cluster resource need not match. In fact,
-it would be an unusual coincidence if they did match. The names in the package are the same across *PackageVariants*
-using that upstream, but we want to inject different resources for each such *PackageVariant*. We also do not want to
-change the name in the package, because it likely has meaning within the package and will be used by functions in the
-package. Also, different owners control the names of the in-package and in-cluster resources. The names in the package
-are in the control of the package author. The names in the cluster are in the control of whoever populates the cluster
-(for example, some infrastructure team). The selector is the glue between them, and is in control of the *PackageVariant*
-resource creator.
-
-The GVK, on the other hand, has to be the same for the in-package resource and the in-cluster resource, because it tells
-us the API schema for the resource. Also, the namespace of the in-cluster object needs to be the same as the
-*PackageVariant* resource, or we could leak resources from namespaces to which our *PackageVariant* user does not have
-access.
-
-With that understanding, the injection process works as follows:
-
-1. The controller will examine all in-package resources, looking for those with an annotation named
-   *kpt.dev/config-injection*, with one of the following values: required or optional. We will call these "injection
-   points". It is the responsibility of the package author to define these injection points, and to specify which are
-   required and which are optional. Optional injection points are a way of specifying default values.
-1. For each injection point, a condition will be created *in the downstream PackageRevision*, with ConditionType set to
-   the dot-delimited concatenation of config.injection, with the in-package resource kind and name, and the value set
-   to False. 
Note that since the package author controls the name of the resource, kind and name are sufficient to
-   disambiguate the injection point. We will call this ConditionType the "injection point ConditionType".
-1. For each required injection point, the injection point ConditionType will be added to the *PackageRevision*
-   readinessGates by the package variant controller. Optional injection points' ConditionTypes must not be added to
-   the readinessGates by the package variant controller, but humans or other actors may do so at a later date, and the
-   package variant controller should not remove them on subsequent reconciliations. Also, this relies upon
-   readinessGates gating publishing the package to a *deployment* repository, but not gating publishing to a blueprint
-   repository.
-1. The injection processing will proceed as follows. For each injection point:
-
-   - The controller will identify all in-cluster objects in the same namespace as the *PackageVariant* resource, with GVK
-     matching the injection point (the in-package resource). If the controller is unable to load these objects (e.g.,
-     there are none and the CRD is not installed), the injection point ConditionType will be set to False, with a
-     message indicating the error, and processing proceeds to the next injection point. Note that for optional
-     injection this may be an acceptable outcome, so it does not interfere with overall generation of the Draft.
-   - The controller will look through the list of injection selectors in order, checking whether any of the in-cluster
-     objects match the selector. If so, that in-cluster object is selected, and processing of the list of injection
-     selectors stops. Note that the namespace is set based upon the *PackageVariant* resource, the GVK is set based upon
-     the in-package resource, and all selectors require name. Thus, at most one match is possible for any given
-     selector. 
Also note that *all fields present in the selector* must match the in-cluster resource, and only the
-     *GVK fields present in the selector* must match the in-package resource.
-   - If no in-cluster object is selected, the injection point ConditionType will be set to False with a message that
-     no matching in-cluster resource was found, and processing proceeds to the next injection point.
-   - If a matching in-cluster object is selected, then it is injected as follows:
-
-     - For ConfigMap resources, the data field from the in-cluster resource is copied to the data field of the
-       in-package resource (the injection point), overwriting it.
-     - For other resource types, the spec field from the in-cluster resource is copied to the spec field of the
-       in-package resource (the injection point), overwriting it.
-     - An annotation with name *kpt.dev/injected-resource-name* and value set to the name of the in-cluster resource is
-       added (or overwritten) in the in-package resource.
-
-If the overall injection cannot be completed for some reason, or if any of the below problems exist in the upstream
-package, it is considered an error and should prevent generation of the Draft:
-
-  - There is a resource annotated as an injection point but having an invalid annotation value (i.e., other than
-    required or optional).
-  - There are ambiguous condition types due to conflicting GVK and name values; these must be disambiguated in the
-    upstream package.
-
-This will result in the condition Ready being set to False.
-
-{{% alert title="Note" color="primary" %}}
-
-Whether or not all required injection points are fulfilled does not affect the *PackageVariant* conditions,
-only the *PackageRevision* conditions.
-
-{{% /alert %}}
-
-**A Further Note on Selectors**
-
-Allowing the selector to specify GVK, not just name, enables more precise selection. This is a
-way to constrain the injections that will be done. 
That is, if the package has 10 different objects with -config-injection annotation, the *PackageVariant* could say it only wants to replace certain GVKs, allowing better -control. - -Consider, for example, if the cluster contains these resources: - -- GVK1 foo -- GVK1 bar -- GVK2 foo -- GVK2 bar - -If we could only define injection selectors based upon name, it would be impossible to ever inject one GVK with *foo* -and another with *bar*. Instead, by using GVK, we can accomplish this with a list of selectors like: - - - GVK1 foo - - GVK2 bar - -That said, often name will be sufficiently unique when combined with the in-package resource GVK, and so making the -selector GVK optional is more convenient. This allows a single injector to apply to multiple injection points with -different GVKs. - -#### Order of Mutations - -During creation, the first thing the controller does is clone the upstream package to create the downstream package. - -For update, first note that changes to the downstream *PackageRevision* can be triggered for several different reasons: - -1. The *PackageVariant* resource is updated, which could change any of the options for introducing variance, or could also - change the upstream package revision referenced. -1. A new revision of the upstream package has been selected due to a floating tag change, or due to a force retagging of - the upstream. -1. An injected in-cluster object is updated. - -The downstream *PackageRevision* may have been updated by humans or other automation actors since creation, so we cannot -simply recreate the downstream *PackageRevision* from scratch when one of these changes happens. Instead, the controller -must maintain the later edits by doing the equivalent of a `kpt pkg update`, in the case of changes to the upstream for -any reason. Any other changes require reapplication of the *PackageVariant* functionality. 
With that understanding, we can
-see that the controller will perform mutations on the downstream package in this order, for both creation and update:
-
-1. Create (via Clone) or Update (via `kpt pkg update` equivalent)
-
-   - This is done by the Porch server, not by the package variant controller directly.
-   - This means that Porch will run the *Kptfile* pipeline after clone or update.
-
-1. Package variant controller applies configured mutations
-
-   - Package Context Injections
-   - *Kptfile* KRM Function Pipeline Additions/Changes
-   - Config Injection
-
-1. Package variant controller saves the *PackageRevision* and *PackageRevisionResources*.
-
-   - Porch server executes the *Kptfile* pipeline
-
-The package variant controller mutations edit resources (including the *Kptfile*), based on the contents of the
-*PackageVariant* and the injected in-cluster resources, but cannot affect one another. The results of those mutations
-throughout the rest of the package are materialized by the execution of the *Kptfile* pipeline during the save
-operation.
-
-#### PackageVariant Status
-
-PackageVariant sets the following status conditions:
-
- - **Stalled** is set to True if there has been a failure that most likely requires user intervention.
- - **Ready** is set to True if the last reconciliation successfully produced an up-to-date Draft.
-
-The *PackageVariant* resource will also contain a DownstreamTargets field, containing a list of downstream Draft and
-Proposed *PackageRevisions* owned by this *PackageVariant* resource, or the latest Published *PackageRevision* if there
-are none in Draft or Proposed state. Typically, there is only a single Draft, but use of the adoptExisting value for
-AdoptionPolicy could result in multiple Drafts being owned by the same *PackageVariant*.
-
-### PackageVariantSet API[^pvsimpl]
-
-The Go types below define the `PackageVariantSetSpec`. 
-
-```go
-// PackageVariantSetSpec defines the desired state of PackageVariantSet
-type PackageVariantSetSpec struct {
-  Upstream *pkgvarapi.Upstream `json:"upstream,omitempty"`
-  Targets  []Target            `json:"targets,omitempty"`
-}
-
-type Target struct {
-  // Exactly one of Repositories, RepositorySelector, and ObjectSelector must be
-  // populated
-  // option 1: an explicit repositories and package names
-  Repositories []RepositoryTarget `json:"repositories,omitempty"`
-
-  // option 2: a label selector against a set of repositories
-  RepositorySelector *metav1.LabelSelector `json:"repositorySelector,omitempty"`
-
-  // option 3: a selector against a set of arbitrary objects
-  ObjectSelector *ObjectSelector `json:"objectSelector,omitempty"`
-
-  // Template specifies how to generate a PackageVariant from a target
-  Template *PackageVariantTemplate `json:"template,omitempty"`
-}
-```
-
-At the highest level, a *PackageVariantSet* is just an upstream, and a list of targets. For each target, there is a set of
-criteria for generating a list, and a set of rules (a template) for creating a *PackageVariant* from each list entry.
-
-Since template is optional, let's start by describing the different types of targets, and how the criteria in each are
-used to generate a list that seeds the *PackageVariant* resources.
-
-The Target structure must include exactly one of three different ways of generating the list. The first is a simple
-list of repositories and package names for each of those repositories[^repo-pkg-expr]. The package name list is
-needed for use cases in which you want to repeatedly instantiate the same package in a single repository. For example,
-if a repository represents the contents of a cluster, you may want to instantiate a namespace package once for each
-namespace, with a name matching the namespace. 
-
-This example shows using the repositories field:
-
-```yaml
-apiVersion: config.porch.kpt.dev/v1alpha2
-kind: PackageVariantSet
-metadata:
-  namespace: default
-  name: example
-spec:
-  upstream:
-    repo: example-repo
-    package: foo
-    revision: v1
-  targets:
-  - repositories:
-    - name: cluster-01
-    - name: cluster-02
-    - name: cluster-03
-      packageNames:
-      - foo-a
-      - foo-b
-      - foo-c
-    - name: cluster-04
-      packageNames:
-      - foo-a
-      - foo-b
-```
-
-In this case, *PackageVariant* resources are created for each of these pairs of downstream repositories and package
-names:
-
-| Repository | Package Name |
-| ---------- | ------------ |
-| cluster-01 | foo          |
-| cluster-02 | foo          |
-| cluster-03 | foo-a        |
-| cluster-03 | foo-b        |
-| cluster-03 | foo-c        |
-| cluster-04 | foo-a        |
-| cluster-04 | foo-b        |
-
-All of those *PackageVariants* have the same upstream.
-
-The second targeting option is a label selector against Porch Repository objects, along with a list of package
-names. Those packages will be instantiated in each matching repository. Just like in the first example, not listing a
-package name defaults to one package, with the same name as the upstream package. 
Suppose, for example, we have these -four repositories defined in our Porch cluster: - -| Repository | Labels | -| ---------- | ------------------------------------- | -| cluster-01 | region=useast1, env=prod, org=hr | -| cluster-02 | region=uswest1, env=prod, org=finance | -| cluster-03 | region=useast2, env=prod, org=hr | -| cluster-04 | region=uswest1, env=prod, org=hr | - -If we create a *PackageVariantSet* with the following spec: - -```yaml -spec: - upstream: - repo: example-repo - package: foo - revision: v1 - targets: - - repositorySelector: - matchLabels: - env: prod - org: hr - - repositorySelector: - matchLabels: - region: uswest1 - packageNames: - - foo-a - - foo-b - - foo-c -``` - -then *PackageVariant* resources will be created with these repository and package names: - -| Repository | Package Name | -| ---------- | ------------ | -| cluster-01 | foo | -| cluster-03 | foo | -| cluster-04 | foo | -| cluster-02 | foo-a | -| cluster-02 | foo-b | -| cluster-02 | foo-c | -| cluster-04 | foo-a | -| cluster-04 | foo-b | -| cluster-04 | foo-c | - -Finally, the third possibility allows the use of *arbitrary* resources in the Porch cluster as targeting criteria. The -objectSelector looks like this: - -```yaml -spec: - upstream: - repo: example-repo - package: foo - revision: v1 - targets: - - objectSelector: - apiVersion: krm-platform.bigco.com/v1 - kind: Team - matchLabels: - org: hr - role: dev -``` - -It works exactly like the repository selector - in fact the repository selector is equivalent to the object selector -with the apiVersion and kind values set to point to Porch Repository resources. That is, the repository name comes -from the object name, and the package names come from the listed package names. In the description of the template, we -will see how to derive different repository names from the objects. - -#### PackageVariant Template - -As previously discussed, the list entries generated by the target criteria result in *PackageVariant* entries. 
If no
-template is specified, then *PackageVariant* defaults are used, along with the downstream repository name and package name
-as described in the previous section. The template allows the user to have control over all of the values in the
-resulting *PackageVariant*. The template API is shown below.
-
-```go
-type PackageVariantTemplate struct {
-  // Downstream allows overriding the default downstream package and repository name
-  // +optional
-  Downstream *DownstreamTemplate `json:"downstream,omitempty"`
-
-  // AdoptionPolicy allows overriding the PackageVariant adoption policy
-  // +optional
-  AdoptionPolicy *pkgvarapi.AdoptionPolicy `json:"adoptionPolicy,omitempty"`
-
-  // DeletionPolicy allows overriding the PackageVariant deletion policy
-  // +optional
-  DeletionPolicy *pkgvarapi.DeletionPolicy `json:"deletionPolicy,omitempty"`
-
-  // Labels allows specifying the spec.Labels field of the generated PackageVariant
-  // +optional
-  Labels map[string]string `json:"labels,omitempty"`
-
-  // LabelExprs allows specifying the spec.Labels field of the generated PackageVariant
-  // using CEL to dynamically create the keys and values. Entries in this field take precedence over
-  // those with the same keys that are present in Labels.
-  // +optional
-  LabelExprs []MapExpr `json:"labelExprs,omitempty"`
-
-  // Annotations allows specifying the spec.Annotations field of the generated PackageVariant
-  // +optional
-  Annotations map[string]string `json:"annotations,omitempty"`
-
-  // AnnotationExprs allows specifying the spec.Annotations field of the generated PackageVariant
-  // using CEL to dynamically create the keys and values. Entries in this field take precedence over
-  // those with the same keys that are present in Annotations. 
- // +optional - AnnotationExprs []MapExpr `json:"annotationExprs,omitempty"` - - // PackageContext allows specifying the spec.PackageContext field of the generated PackageVariant - // +optional - PackageContext *PackageContextTemplate `json:"packageContext,omitempty"` - - // Pipeline allows specifying the spec.Pipeline field of the generated PackageVariant - // +optional - Pipeline *PipelineTemplate `json:"pipeline,omitempty"` - - // Injectors allows specifying the spec.Injectors field of the generated PackageVariant - // +optional - Injectors []InjectionSelectorTemplate `json:"injectors,omitempty"` -} - -// DownstreamTemplate is used to calculate the downstream field of the resulting -// package variants. Only one of Repo and RepoExpr may be specified; -// similarly only one of Package and PackageExpr may be specified. -type DownstreamTemplate struct { - Repo *string `json:"repo,omitempty"` - Package *string `json:"package,omitempty"` - RepoExpr *string `json:"repoExpr,omitempty"` - PackageExpr *string `json:"packageExpr,omitempty"` -} - -// PackageContextTemplate is used to calculate the packageContext field of the -// resulting package variants. The plain fields and Exprs fields will be -// merged, with the Exprs fields taking precedence. -type PackageContextTemplate struct { - Data map[string]string `json:"data,omitempty"` - RemoveKeys []string `json:"removeKeys,omitempty"` - DataExprs []MapExpr `json:"dataExprs,omitempty"` - RemoveKeyExprs []string `json:"removeKeyExprs,omitempty"` -} - -// InjectionSelectorTemplate is used to calculate the injectors field of the -// resulting package variants. Exactly one of the Name and NameExpr fields must -// be specified. The other fields are optional. 
-type InjectionSelectorTemplate struct {
-  Group   *string `json:"group,omitempty"`
-  Version *string `json:"version,omitempty"`
-  Kind    *string `json:"kind,omitempty"`
-  Name    *string `json:"name,omitempty"`
-
-  NameExpr *string `json:"nameExpr,omitempty"`
-}
-
-// MapExpr is used for various fields to calculate map entries. Only one of
-// Key and KeyExpr may be specified; similarly only one of Value and ValueExpr
-// may be specified.
-type MapExpr struct {
-  Key       *string `json:"key,omitempty"`
-  Value     *string `json:"value,omitempty"`
-  KeyExpr   *string `json:"keyExpr,omitempty"`
-  ValueExpr *string `json:"valueExpr,omitempty"`
-}
-
-// PipelineTemplate is used to calculate the pipeline field of the resulting
-// package variants.
-type PipelineTemplate struct {
-  // Validators is used to calculate the pipeline.validators field of the
-  // resulting package variants.
-  // +optional
-  Validators []FunctionTemplate `json:"validators,omitempty"`
-
-  // Mutators is used to calculate the pipeline.mutators field of the
-  // resulting package variants.
-  // +optional
-  Mutators []FunctionTemplate `json:"mutators,omitempty"`
-}
-
-// FunctionTemplate is used in generating KRM function pipeline entries; that
-// is, it is used to generate Kptfile Function objects.
-type FunctionTemplate struct {
-  kptfilev1.Function `json:",inline"`
-
-  // ConfigMapExprs allows use of CEL to dynamically create the keys and values in the
-  // function config ConfigMap. Entries in this field take precedence over those with
-  // the same keys that are present in ConfigMap.
-  // +optional
-  ConfigMapExprs []MapExpr `json:"configMapExprs,omitempty"`
-}
-```
-
-This is a fairly complicated structure. To make it more understandable, the first thing to notice is that many fields
-have a plain version, and an Expr version. The plain version is used when the value is static across all the
-*PackageVariants*; the Expr version is used when the value needs to vary across *PackageVariants*. 
- -Let's consider a simple example. Suppose we have a package for provisioning namespaces called "base-ns". We want to -instantiate this several times in the *cluster-01* repository. We could do this with this *PackageVariantSet*: - -```yaml -apiVersion: config.porch.kpt.dev/v1alpha2 -kind: PackageVariantSet -metadata: - namespace: default - name: example -spec: - upstream: - repo: platform-catalog - package: base-ns - revision: v1 - targets: - - repositories: - - name: cluster-01 - packageNames: - - ns-1 - - ns-2 - - ns-3 -``` - -That will produce three *PackageVariant* resources with the same upstream, all with the same downstream repo, and each -with a different downstream package name. If we also want to set some labels identically across the packages, we can -do that with the template.labels field: - -```yaml -apiVersion: config.porch.kpt.dev/v1alpha2 -kind: PackageVariantSet -metadata: - namespace: default - name: example -spec: - upstream: - repo: platform-catalog - package: base-ns - revision: v1 - targets: - - repositories: - - name: cluster-01 - packageNames: - - ns-1 - - ns-2 - - ns-3 - template: - labels: - package-type: namespace - org: hr -``` - -The resulting *PackageVariant* resources will include labels in their spec, and will be identical other than their -names and the downstream.package: - -```yaml -apiVersion: config.porch.kpt.dev/v1alpha1 -kind: PackageVariant -metadata: - namespace: default - name: example-aaaa -spec: - upstream: - repo: platform-catalog - package: base-ns - revision: v1 - downstream: - repo: cluster-01 - package: ns-1 - labels: - package-type: namespace - org: hr ---- -apiVersion: config.porch.kpt.dev/v1alpha1 -kind: PackageVariant -metadata: - namespace: default - name: example-aaab -spec: - upstream: - repo: platform-catalog - package: base-ns - revision: v1 - downstream: - repo: cluster-01 - package: ns-2 - labels: - package-type: namespace - org: hr ---- - -apiVersion: config.porch.kpt.dev/v1alpha1 -kind: PackageVariant 
-metadata:
-  namespace: default
-  name: example-aaac
-spec:
-  upstream:
-    repo: platform-catalog
-    package: base-ns
-    revision: v1
-  downstream:
-    repo: cluster-01
-    package: ns-3
-  labels:
-    package-type: namespace
-    org: hr
-```
-
-When using other targeting means, the use of the Expr fields becomes more likely, because we have more possible
-sources for different field values. The Expr values are all
-[Common Expression Language (CEL)](https://github.com/google/cel-go) expressions, rather than static values. This allows
-the user to construct values based upon various fields of the targets. Consider again the RepositorySelector example,
-where we have these repositories in the cluster:
-
-| Repository | Labels                                |
-| ---------- | ------------------------------------- |
-| cluster-01 | region=useast1, env=prod, org=hr      |
-| cluster-02 | region=uswest1, env=prod, org=finance |
-| cluster-03 | region=useast2, env=prod, org=hr      |
-| cluster-04 | region=uswest1, env=prod, org=hr      |
-
-If we create a *PackageVariantSet* with the following spec, we can use the Expr fields to add labels to the
-*PackageVariantSpecs* (and thus to the resulting *PackageRevisions* later) that vary based on cluster. We can also use
-this to vary the injectors defined for each *PackageVariant*, resulting in each *PackageRevision* having different
-resources injected. This spec:
-
-```yaml
-spec:
-  upstream:
-    repo: example-repo
-    package: foo
-    revision: v1
-  targets:
-  - repositorySelector:
-      matchLabels:
-        env: prod
-        org: hr
-    template:
-      labelExprs:
-      - key: org
-        valueExpr: "repository.labels['org']"
-      injectorExprs:
-      - nameExpr: "repository.labels['region'] + '-endpoints'"
-```
-
-will result in three *PackageVariant* resources, one for each Repository with the labels env=prod and org=hr. 
The labels
-and injectors fields of the *PackageVariantSpec* will be different for each of these *PackageVariants*, as determined by
-the use of the Expr fields in the template, as shown here:
-
-```yaml
-apiVersion: config.porch.kpt.dev/v1alpha1
-kind: PackageVariant
-metadata:
-  namespace: default
-  name: example-aaaa
-spec:
-  upstream:
-    repo: example-repo
-    package: foo
-    revision: v1
-  downstream:
-    repo: cluster-01
-    package: foo
-  labels:
-    org: hr
-  injectors:
-  - name: useast1-endpoints
----
-apiVersion: config.porch.kpt.dev/v1alpha1
-kind: PackageVariant
-metadata:
-  namespace: default
-  name: example-aaab
-spec:
-  upstream:
-    repo: example-repo
-    package: foo
-    revision: v1
-  downstream:
-    repo: cluster-03
-    package: foo
-  labels:
-    org: hr
-  injectors:
-  - name: useast2-endpoints
----
-apiVersion: config.porch.kpt.dev/v1alpha1
-kind: PackageVariant
-metadata:
-  namespace: default
-  name: example-aaac
-spec:
-  upstream:
-    repo: example-repo
-    package: foo
-    revision: v1
-  downstream:
-    repo: cluster-04
-    package: foo
-  labels:
-    org: hr
-  injectors:
-  - name: uswest1-endpoints
-```
-
-Since the injectors are different for each *PackageVariant*, the resulting *PackageRevisions* will each have different
-resources injected.
-
-When CEL expressions are evaluated, they have an environment associated with them. That is, there are certain objects
-that are accessible within the CEL expression. For CEL expressions used in the *PackageVariantSet* template field,
-the following variables are available:
-
-| CEL Variable   | Variable Contents                                            |
-| -------------- | ------------------------------------------------------------ |
-| repoDefault    | The default repository name based on the targeting criteria. |
-| packageDefault | The default package name based on the targeting criteria.    |
-| upstream       | The upstream *PackageRevision*.                              |
-| repository     | The downstream Repository.                                   |
-| target         | The target object (details vary; see below).                 |
-
-There is one expression that is an exception to the table above. Since the repository value corresponds to the
-Repository of the downstream, we must first evaluate the downstream.repoExpr expression to find that repository.
-Thus, for that expression only, repository is not a valid variable.
-
-The target variable is available across all CEL expressions; its contents vary depending on the type of target, as
-follows:
-
-| Target Type         | target Variable Contents                                                                       |
-| ------------------- | ---------------------------------------------------------------------------------------------- |
-| Repo/Package List   | A struct with two fields: repo and package, the same as the repoDefault and packageDefault values. |
-| Repository Selector | The Repository selected by the selector. Although not recommended, this could be different from the repository value, which can be altered with downstream.repo or downstream.repoExpr. |
-| Object Selector     | The Object selected by the selector.                                                           |
-
-For the various resource variables - upstream, repository, and target - arbitrary access to all fields of the
-object could lead to security concerns. Therefore, only a subset of the data is available for use in CEL expressions.
-Specifically, the following fields: name, namespace, labels, and annotations.
-
-Given the slight quirk with the repoExpr, it may be helpful to state the processing flow for the template evaluation:
-
-1. The upstream *PackageRevision* is loaded. It must be in the same namespace as the *PackageVariantSet*[^multi-ns-reg].
-1. The targets are determined.
-1. For each target:
-
-   1. The CEL environment is prepared with repoDefault, packageDefault, upstream, and target variables.
-   1. The downstream repository is determined and loaded, as follows:
-
-      - If present, downstream.repoExpr is evaluated using the CEL environment, and the result used as the downstream
-        repository name.
-      - Otherwise, if downstream.repo is set, that is used as the downstream repository name.
-      - If neither is present, the default repository name based on the target is used (i.e., the same value as the
-        repoDefault variable).
-      - The resulting downstream repository name is used to load the corresponding Repository object in the same
-        namespace as the *PackageVariantSet*.
-
-   1. The downstream Repository is added to the CEL environment.
-   1. All other CEL expressions are evaluated.
-
-1. Note that if any of the resources (e.g., the upstream *PackageRevision*, or the downstream Repository) are not found
-   or otherwise fail to load, processing stops and a failure condition is raised. Similarly, if a CEL expression
-   cannot be properly evaluated due to syntax or other reasons, processing stops and a failure condition is raised.
-
-#### Other Considerations
-
-It would appear convenient to automatically inject the *PackageVariantSet* targeting resource. However, it is better to
-require that the package advertise the ways it accepts injections (i.e., the GVKs it understands), and only inject those.
-This keeps the separation of concerns cleaner; the package does not build in an awareness of the context in which it
-expects to be deployed. For example, a package should not accept a Porch Repository resource just because that happens
-to be the targeting mechanism. That would make the package unusable in other contexts.
-
-#### PackageVariantSet Status
-
-The *PackageVariantSet* status uses these conditions:
-
-  - Stalled is set to True if there has been a failure that most likely requires user intervention.
-  - Ready is set to True if the last reconciliation successfully reconciled all targeted *PackageVariant* resources.
-
-## Future Considerations
-- As an alternative to the floating tag proposal, we may instead want to have a separate tag tracking controller that
-  can update PV and PVS resources to tweak their upstream as the tag moves.
-- Installing a collection of packages across a set of clusters, or performing the same mutations to each package in a
-  collection, is only supported by creating multiple *PackageVariant* / *PackageVariantSet* resources. Options to consider
-  for these use cases:
-
-  - upstreams listing multiple packages.
-  - Label selector against *PackageRevisions*. This does not seem that useful, as *PackageRevisions* are highly reusable
-    and would likely be composed in many different ways.
-  - A *PackageRevisionSet* resource that simply contained a list of Upstream structures and could be used as an Upstream.
-    This is functionally equivalent to the upstreams option, but that list is reusable across resources.
-  - Listing multiple *PackageRevisionSets* in the upstream would be nice as well.
-  - Any or all of these could be implemented in *PackageVariant*, *PackageVariantSet*, or both.
-
-## Footnotes
-
-[^porch17]: Implemented in Porch v0.0.17.
-[^porch18]: Coming in Porch v0.0.18.
-[^notimplemented]: Proposed here but not yet implemented as of Porch v0.0.18.
-[^setns]: As of this writing, the set-namespace function does not have a create option. This should be added to
-    avoid the user needing to also use the `upsert-resource` function. Such a common operation should be simple for users.
-[^pvsimpl]: This document describes *PackageVariantSet* v1alpha2, which will be available starting with Porch v0.0.18.
-    In Porch v0.0.16 and 17, the v1alpha1 implementation is available, but it is a somewhat different API, without
-    support for CEL or any injection. It is focused only on fan-out targeting, and uses a [slightly different targeting API](https://github.com/nephio-project/porch/blob/main/controllers/packagevariants/api/v1alpha1/packagevariant_types.go).
-[^repo-pkg-expr]: This is not exactly correct.
As we will see later in the template discussion, the repository
-and package names listed are actually just defaults for the template; they can be further manipulated in the template
-to reference different downstream repositories and package names. The same is true for the repositories selected via
-the `repositorySelector` option. However, this can be ignored for now.
-[^multi-ns-reg]: Note that the same upstream repository can be registered in multiple namespaces without a problem. This
-    simplifies access controls, avoiding the need for cross-namespace relationships between Repositories and other Porch
-    resources.
diff --git a/content/en/docs/porch/running-porch/_index.md b/content/en/docs/porch/running-porch/_index.md
deleted file mode 100644
index 4c68b980..00000000
--- a/content/en/docs/porch/running-porch/_index.md
+++ /dev/null
@@ -1,7 +0,0 @@
----
-title: "Running Porch"
-type: docs
-weight: 6
-description:
----
-
diff --git a/content/en/docs/porch/running-porch/running-locally.md b/content/en/docs/porch/running-porch/running-locally.md
deleted file mode 100644
index 9d6a519a..00000000
--- a/content/en/docs/porch/running-porch/running-locally.md
+++ /dev/null
@@ -1,125 +0,0 @@
----
-title: "Running Porch Locally"
-type: docs
-weight: 1
-description:
----
-
-## Prerequisites
-
-To run Porch locally, you will need:
-
-* Linux machine (technically it is possible to run Porch locally on a Mac but
-  due to differences in Docker between Linux and Mac, the Porch scripts are
-  confirmed to work on Linux)
-* [go 1.23.5](https://go.dev/dl/) or newer
-* [docker](https://docs.docker.com/get-docker/)
-* [git](https://git-scm.com/)
-* [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/)
-* [make](https://www.gnu.org/software/make/)
-
-## Getting Started
-
-Clone the porch repository.
- -```sh -git clone https://github.com/nephio-project/porch.git -``` - -Download dependencies: - -```sh -cd porch - -make tidy -``` - -## Running Porch - -Porch consists of: -* k8s extension apiserver [porch](https://github.com/nephio-project/porch/tree/main/pkg/apiserver) -* kpt function evaluator [func](https://github.com/nephio-project/porch/tree/main/func) -* k8s [controllers](https://github.com/nephio-project/porch/tree/main/controllers) - -In addition, to run Porch locally, we need to run the main k8s apiserver and its backing storage, etcd. - -To build and run Porch locally in one command, run: - -```sh -# Start Porch in one command: -make all -``` - -This will: - -* create Docker network named *porch* -* build and start etcd Docker container -* build and start main k8s apiserver Docker container -* build and start the kpt function evaluator microservice - [func](https://github.com/nephio-project/porch/tree/main/func) Docker container -* build Porch binary and run it locally -* configure Porch as the extension apiserver - -{{% alert title="Note" color="primary" %}} - -This command does not build and start the Porch k8s controllers. Those -are not required for basic package orchestration but are required for deploying packages. - -{{% /alert %}} - -You can also run the commands individually which can be useful when developing, -in particular building and running Porch extension apiserver. - -```sh -# Create Porch network -make network - -# Build and start etcd container -make start-etcd - -# Build and start main apiserver container -make start-kube-apiserver - -# Build and start kpt function evaluator microservice Docker container -make start-function-runner - -# Build and start Porch on your local machine. -make run-local -``` - -Porch will run directly on your local machine and API requests will be forwarded to it from the -main apiserver. 
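The forwarding relies on Kubernetes API aggregation: an APIService object registers the porch.kpt.dev group with the main apiserver, which then delegates requests for that group to the Porch server. The following is only a rough sketch of that wiring; the service name and namespace shown are assumptions based on a typical in-cluster install, not taken from the local setup scripts.

```yaml
# Sketch only: the service name/namespace below are assumptions.
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1alpha1.porch.kpt.dev
spec:
  group: porch.kpt.dev
  version: v1alpha1
  # The main apiserver proxies requests for this group/version
  # to the referenced service (the Porch server).
  service:
    name: api
    namespace: porch-system
  groupPriorityMinimum: 1000
  versionPriority: 15
```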
Configure `kubectl` context to interact with the main k8s apiserver running as -Docker container: - -```sh -export KUBECONFIG=${PWD}/deployments/local/kubeconfig - -# Confirm Porch is running -kubectl api-resources | grep porch - -packagerevs config.porch.kpt.dev/v1alpha1 true PackageRev -repositories config.porch.kpt.dev/v1alpha1 true Repository -packagerevisionresources porch.kpt.dev/v1alpha1 true PackageRevisionResources -packagerevisions porch.kpt.dev/v1alpha1 true PackageRevision -packages porch.kpt.dev/v1alpha1 true PorchPackage -``` - -## Restarting Porch - -If you make code changes, an expedient way to rebuild and restart porch is: - -* Stop Porch running in the shell session (Ctrl+C) -* Run `make run-local` again to rebuild and restart Porch - -## Stopping Porch - -To stop Porch and all associated Docker containers, including the Docker network, run: - -```sh -make stop -``` - -## Troubleshooting - -If you run into issues that look like *git: authentication required*, make sure you have SSH -keys set up on your local machine. diff --git a/content/en/docs/porch/running-porch/running-on-GKE.md b/content/en/docs/porch/running-porch/running-on-GKE.md deleted file mode 100644 index 26c49e4b..00000000 --- a/content/en/docs/porch/running-porch/running-on-GKE.md +++ /dev/null @@ -1,246 +0,0 @@ ---- -title: "Running Porch on GKE" -type: docs -weight: 2 -description: ---- - -You can install Porch by either using one of the -[released versions](https://github.com/nephio-project/porch/releases), or building Porch from sources. - -## Prerequisites - -{{% alert title="Note" color="primary" %}} - -Porch should run on any Kubernetes cluster and should work on any cloud. We have just started by documenting one -known-good configuration: GCP and GKE. We would welcome comparable installation instructions or feedback from people -that try it out on other clouds / configurations. 
- -{{% /alert %}} - -To run one of the [released versions](https://github.com/nephio-project/porch/releases) of Porch on GKE, you will -need: - -* A [GCP Project](https://console.cloud.google.com/projectcreate) -* [gcloud](https://cloud.google.com/sdk/docs/install) -* [kubectl](https://kubernetes.io/docs/tasks/tools/); you can install it via `gcloud components install kubectl` -* [porchctl](https://github.com/nephio-project/porch/releases/download/dev/porchctl.tgz) -* Command line utilities such as *curl*, *tar* - -To build and run Porch on GKE, you will also need: - -* A container registry which will work with your GKE cluster. - [Artifact Registry](https://console.cloud.google.com/artifacts) or - [Container Registry](https://console.cloud.google.com/gcr) work well though you can use others too. -* [go 1.21](https://go.dev/dl/) or newer -* [docker](https://docs.docker.com/get-docker/) -* [Configured docker credential helper](https://cloud.google.com/sdk/gcloud/reference/auth/configure-docker) -* [git](https://git-scm.com/) -* [make](https://www.gnu.org/software/make/) - -## Getting Started - -Make sure your gcloud is configured with your project (alternatively, you can augment all following gcloud -commands below with --project flag): - -```bash -gcloud config set project YOUR_GCP_PROJECT -``` - -Select a GKE cluster or create a new one: - -```bash -gcloud services enable container.googleapis.com -gcloud container clusters create-auto --region us-central1 porch-dev -``` -{{% alert title="Note" color="primary" %}} - -For development of Porch, in particular for running Porch tests, Standard GKE cluster is currently preferable. 
Select a -[GCP region](https://cloud.google.com/compute/docs/regions-zones#available) that works best for your needs: - - ```bash -gcloud services enable container.googleapis.com -gcloud container clusters create --region us-central1 porch-dev -``` - -And ensure *kubectl* is targeting your GKE cluster: - -```bash -gcloud container clusters get-credentials --region us-central1 porch-dev -``` -{{% /alert %}} - -## Run Released Version of Porch - -To run a released version of Porch, download the release configuration bundle from -[Porch release page](https://github.com/nephio-project/porch/releases). - -Untar and apply the *porch_blueprint.tar.gz* configuration bundle. This will install: - -* Porch server -* [configsync](https://kpt.dev/gitops/configsync/) - -```bash -mkdir porch-install -tar xzf ~/Downloads/porch_blueprint.tar.gz -C porch-install -kubectl apply -f porch-install -kubectl wait deployment --for=condition=Available porch-server -n porch-system -``` - -You can verify that Porch is running by querying the api-resources: - -```bash -kubectl api-resources | grep porch -``` -Expected output will include: - -```bash -repositories config.porch.kpt.dev/v1alpha1 true Repository -packagerevisionresources porch.kpt.dev/v1alpha1 true PackageRevisionResources -packagerevisions porch.kpt.dev/v1alpha1 true PackageRevision -``` - -To install configsync: - -```bash -echo " -apiVersion: configmanagement.gke.io/v1 -kind: ConfigManagement -metadata: - name: config-management -spec: - enableMultiRepo: true -" | kubectl apply -f - -``` - -## Run Custom Build of Porch - -To run custom build of Porch, you will need additional [prerequisites](#prerequisites). The commands below use -[Google Container Registry](https://console.cloud.google.com/gcr). - -Clone this repository into *${GOPATH}/src/github.com/GoogleContainerTools/kpt*. 
-
-```bash
-git clone https://github.com/GoogleContainerTools/kpt.git "${GOPATH}/src/github.com/GoogleContainerTools/kpt"
-```
-
-[Configure](https://cloud.google.com/sdk/gcloud/reference/auth/configure-docker) docker credential helper for your
-repository.
-
-If your use case doesn't require Porch to interact with GCP container registries, you can build and deploy Porch by
-running the following command. It will build and push Porch Docker images into (by default) Google Container Registry,
-named `gcr.io/YOUR-PROJECT-ID/porch-server:SHORT-COMMIT-SHA` (the example shown is the Porch server image).
-
-```bash
-IMAGE_TAG=$(git rev-parse --short HEAD) make push-and-deploy-no-sa
-```
-
-If you want to use a different repository, you can set the IMAGE_REPO variable
-(see [Makefile](https://github.com/nephio-project/porch/blob/main/Makefile#L33) for details).
-
-The `make push-and-deploy-no-sa` target will install Porch but not configsync. You can install configsync in your k8s
-cluster manually following the
-[documentation](https://github.com/GoogleContainerTools/kpt-config-sync/blob/main/docs/installation.md).
-
-{{% alert title="Note" color="primary" %}}
-
-The -no-sa (no service account) targets create Porch deployment
-configuration which does not associate Kubernetes service accounts with GCP
-service accounts. This is sufficient for Porch to integrate with Git repositories
-using Basic Auth, for example GitHub.
-
-As above, you can verify that Porch is running by querying the api-resources:
-
-```bash
-kubectl api-resources | grep porch
-```
-{{% /alert %}}
-
-### Workload Identity
-
-[Workload Identity](https://cloud.google.com/kubernetes-engine/docs/concepts/workload-identity) is a simple way to
-access Google Cloud services from Porch.
-
-#### Google Cloud Source Repositories
-
-[Cloud Source Repositories](https://cloud.google.com/source-repositories) can be accessed using workload identity,
- -To set it up, create the necessary service accounts and give it the required roles: - -```bash -GCP_PROJECT_ID=$(gcloud config get-value project) - -# Create GCP service account (GSA) for Porch server. -gcloud iam service-accounts create porch-server - -# We want to create and delete images. Assign IAM roles to allow repository -# administration. -gcloud projects add-iam-policy-binding ${GCP_PROJECT_ID} \ - --member "serviceAccount:porch-server@${GCP_PROJECT_ID}.iam.gserviceaccount.com" \ - --role "roles/source.admin" - -gcloud iam service-accounts add-iam-policy-binding porch-server@${GCP_PROJECT_ID}.iam.gserviceaccount.com \ - --role roles/iam.workloadIdentityUser \ - --member "serviceAccount:${GCP_PROJECT_ID}.svc.id.goog[porch-system/porch-server]" - -# We need to associate the Kubernetes Service Account (KSA) -# with the GSA by annotating the KSA. -kubectl annotate serviceaccount porch-server -n porch-system \ - iam.gke.io/gcp-service-account=porch-server@${GCP_PROJECT_ID}.iam.gserviceaccount.com -``` - -Build Porch, push images, and deploy Porch server and controllers using the `make` target that adds workload identity -service account annotations: - -```bash -IMAGE_TAG=$(git rev-parse --short HEAD) make push-and-deploy -``` - -As above, you can verify that Porch is running by querying the api-resources: - -```bash -kubectl api-resources | grep porch -``` - -To register a repository, use the following command: - -```bash -porchctl repo register --repo-workload-identity --namespace=default https://source.developers.google.com/p//r/ -``` - -#### OCI - -To integrate with OCI repositories such as -[Artifact Registry](https://console.cloud.google.com/artifacts) or -[Container Registry](https://console.cloud.google.com/gcr), Porch relies on -[workload identity](https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity). 
- -For that use case, create service accounts and assign roles: - -```bash -GCP_PROJECT_ID=$(gcloud config get-value project) - -# Create GCP service account for Porch server. -gcloud iam service-accounts create porch-server -# Create GCP service account for Porch sync controller. -gcloud iam service-accounts create porch-sync - -# We want to create and delete images. Assign IAM roles to allow repository -# administration. -gcloud projects add-iam-policy-binding ${GCP_PROJECT_ID} \ - --member "serviceAccount:porch-server@${GCP_PROJECT_ID}.iam.gserviceaccount.com" \ - --role "roles/artifactregistry.repoAdmin" - -gcloud iam service-accounts add-iam-policy-binding porch-server@${GCP_PROJECT_ID}.iam.gserviceaccount.com \ - --role roles/iam.workloadIdentityUser \ - --member "serviceAccount:${GCP_PROJECT_ID}.svc.id.goog[porch-system/porch-server]" - -gcloud projects add-iam-policy-binding ${GCP_PROJECT_ID} \ - --member "serviceAccount:porch-sync@${GCP_PROJECT_ID}.iam.gserviceaccount.com" \ - --role "roles/artifactregistry.reader" - -gcloud iam service-accounts add-iam-policy-binding porch-sync@${GCP_PROJECT_ID}.iam.gserviceaccount.com \ - --role roles/iam.workloadIdentityUser \ - --member "serviceAccount:${GCP_PROJECT_ID}.svc.id.goog[porch-system/porch-controllers]" -``` diff --git a/content/en/docs/porch/user-guides/_index.md b/content/en/docs/porch/user-guides/_index.md deleted file mode 100644 index c8a18209..00000000 --- a/content/en/docs/porch/user-guides/_index.md +++ /dev/null @@ -1,7 +0,0 @@ ---- -title: "Porch user guides" -type: docs -weight: 6 -description: ---- - diff --git a/content/en/docs/porch/user-guides/git-authentication-config.md b/content/en/docs/porch/user-guides/git-authentication-config.md deleted file mode 100644 index 7fd0b170..00000000 --- a/content/en/docs/porch/user-guides/git-authentication-config.md +++ /dev/null @@ -1,155 +0,0 @@ ---- -title: "Authenticating to Remote Git Repositories" -type: docs -weight: 4 -description: "" ---- - -## 
Porch Server to Git Interaction
-
-The Porch server handles interaction with its associated git repositories through Repository CRs (Custom Resources), which act as the link between the Porch server and the git repositories that it interacts with and stores packages in.
-
-More information on porch repositories can be found [here](../package-orchestration.md#repositories).
-
-There are two main methods of authenticating to a git repository, plus an additional TLS configuration:
-
-1. Basic Authentication
-2. Bearer Token Authentication
-3. HTTPS/TLS Configuration
-
-### Basic Authentication
-
-A Porch Repository object can be created with the `porchctl repo reg porch-test-repository -n porch-test http://example-ip:example-port/repo.git --repo-basic-password=password --repo-basic-username=username` command, which creates both a secret and a Repository object.
-
-The basic authentication secret must meet the following criteria:
-
-- Exist in the same namespace as the Repository CR (Custom Resource) that requires it.
-- Have data keys named *username* and *password* containing the relevant information.
-- Be of type *kubernetes.io/basic-auth*.
-
-The value used in the *password* field can be substituted for a base64 encoded Personal Access Token (PAT) from the git instance being used.
An example of this can be found [here](./porchctl-cli-guide.md#repository-registration).
-
-This is the equivalent of doing a `kubectl apply -f` on a yaml file with the following content (assuming the porch-test namespace exists on the cluster):
-
-```yaml
-apiVersion: v1
-kind: Secret
-metadata:
-  name: git-auth-secret
-  namespace: porch-test
-data:
-  username: base-64-encoded-username
-  password: base-64-encoded-password # or base64-encoded-PAT
-type: kubernetes.io/basic-auth
-
----
-apiVersion: config.porch.kpt.dev/v1alpha1
-kind: Repository
-
-metadata:
-  name: porch-test-repository
-  namespace: porch-test
-
-spec:
-  description: porch test repository
-  content: Package
-  deployment: false
-  type: git
-  git:
-    repo: http://example-ip:example-port/repo.git
-    directory: /
-    branch: main
-    secretRef:
-      name: git-auth-secret
-```
-
-When the Porch server interacts with a git instance through this http-basic-auth configuration, it does so over HTTP. An example HTTP request using this configuration can be seen below.
-
-```logs
-PUT
-https://example-ip/apis/config.porch.kpt.dev/v1alpha1/namespaces/porch-test/repositories/porch-test-repo/status
-Request Headers:
-  User-Agent: __debug_bin1520795790/v0.0.0 (linux/amd64) kubernetes/$Format
-  Authorization: Basic bmVwaGlvOnNlY3JldA==
-  Accept: application/json, */*
-  Content-Type: application/json
-```
-
-where *bmVwaGlvOnNlY3JldA==* is base64 encoded in the format *username:password* and after base64 decoding becomes *nephio:secret*. For simple personal access token login, the password section can be substituted with the PAT token.
-
-### Bearer Token Authentication
-
-Authentication to the git repository can instead use a bearer token, configured by altering the secret used in the Porch Repository object.
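As with any Kubernetes Secret, the token stored in the secret's data field must be base64 encoded. A quick sketch of producing that value on the command line (the token here is a made-up placeholder, not a real credential):

```shell
# Hypothetical token value, for illustration only.
TOKEN="my-example-git-token"

# Kubernetes Secret data values must be base64 encoded; use printf
# rather than echo so a trailing newline is not encoded into the value.
ENCODED=$(printf '%s' "$TOKEN" | base64)
echo "$ENCODED"

# Decoding recovers the original token.
printf '%s' "$ENCODED" | base64 -d
```

The resulting value is what goes into the secret's *bearerToken* data field.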
-
-The bearer token authentication secret must meet the following criteria:
-
-- Exist in the same namespace as the Repository CR (Custom Resource) that requires it.
-- Have a data key named *bearerToken* containing the relevant git token information.
-- Be of type *Opaque*.
-
-For example:
-
-```yaml
-apiVersion: v1
-kind: Secret
-metadata:
-  name: git-auth-secret
-  namespace: porch-test
-data:
-  bearerToken: base-64-encoded-bearer-token
-type: Opaque
-```
-
-When the Porch server interacts with a git instance through this http-token-auth configuration, it does so over HTTP. An example HTTP request using this configuration can be seen below.
-
-```logs
-PUT https://example-ip/apis/config.porch.kpt.dev/v1alpha1/namespaces/porch-test/repositories/porch-test-repo/status
-Request Headers:
-  User-Agent: __debug_bin1520795790/v0.0.0 (linux/amd64) kubernetes/$Format
-  Authorization: Bearer 4764aacf8cc6d72cab58e96ad6fd3e3746648655
-  Accept: application/json, */*
-  Content-Type: application/json
-```
-
-where *4764aacf8cc6d72cab58e96ad6fd3e3746648655* in the Authorization header is a PAT token, but can be whichever type of bearer token is accepted by the user's git instance.
-
-{{% alert title="Note" color="primary" %}}
-Note that the Porch server caches the authentication credentials from the secret. If the secret's contents are updated, the cached (old) credentials may therefore still be the ones used for authentication.
-
-When the cached credentials are no longer valid, the Porch server queries the secret again to pick up the new credentials.
-
-If the new credentials are valid, they become the new cached authentication credentials.
-{{% /alert %}}
-
-### HTTPS/TLS Configuration
-
-To enable the Porch server to communicate with a custom git deployment over HTTPS, we must:
-
-1. Provide an additional argument flag *use-git-cabundle=true* to the porch-server deployment.
-2. 
Provide an additional Kubernetes secret containing the relevant certificate chain in the form of a cabundle. - -The secret itself must meet the following criteria: - -- Exist in the same namespace as the Repository CR that requires it. -- Be named specifically \-ca-bundle. -- Have a Data key named *ca.crt* containing the relevant ca certificate (chain). - -For example, a Git Repository is hosted over HTTPS at the URL: `https://my-gitlab.com/joe.bloggs/blueprints.git` - -Before creating the new Repository in the **GitLab** namespace, we must create a secret that fulfils the criteria above. - -`kubectl create secret generic gitlab-ca-bundle --namespace=gitlab --from-file=ca.crt` - -Which would produce the following: - -```yaml -apiVersion: v1 -kind: Secret -metadata: - name: gitlab-ca-bundle - namespace: gitlab -type: Opaque -data: - ca.crt: FAKE1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUNuakNDQWdHZ0F3SUJBZ0lRTEdmUytUK3YyRDZDczh1MVBlUUlKREFLQmdncWhrak9QUVFEQkRBZE1Sc3cKR1FZRFZRUURFeEpqWlhKMExXMWhibUZuWlhJdWJHOWpZV3d3SGhjTk1qUXdOVE14TVRFeU5qTXlXaGNOTWpRdwpPREk1TVRFeU5qTXlXakFWTVJNd0VRWURWUVFGRXdveE1qTTBOVFkzT0Rrd01JSUJJakFOQmdrcWhraUc5dzBCCkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQXhCUUtWMEVzQ1JOOGxuV3lQR1ZWNXJwam5QZkI2emszK0N4cEp2NVMKUWhpMG1KbDI0elV1WWZjRzNxdFUva1NuREdjK3NQRUY0RmlOcUlsSTByWHBQSXBPazhKbjEvZU1VT3RkZUUyNgpSWEZBWktjeDVvdUJyZVNja3hsN2RPVkJnOE1EM1h5RU1PQU5nM0hJZ1J4ZWx2U2p1dy8vMURhSlRnK0lBS0dUCkgrOVlRVFcrZDIwSk5wQlR3NkdnQlRsYmdqL2FMRWEwOXVYSVBjK0JUSkpXRThIeDhkVjFNbEtHRFlDU29qZFgKbG9TN1FIa0dsSVk3M0NPZVVGWEVnTlFVVmZaZHdreXNsT3F4WmdXUTNZTFZHcEFyRitjOVdyUGpQQU5NQWtORQpPdHRvaG8zTlRxQ3FST3JEa0RMYWdsU1BKSUd1K25TcU5veVVxSUlWWkV5R1dRSURBUUFCbzJBd1hqQU9CZ05WCkhROEJBZjhFQkFNQ0JhQXdEQVlEVlIwVEFRSC9CQUl3QURBZkJnTlZIU01FR0RBV2dCUitFZTVDTnVJSkcwZjkKV3J3VzdqYUZFeVdzb1RBZEJnTlZIUkVFRmpBVWdoSm5hWFJzWVdJdVpYaGhiWEJzWlM1amIyMHdDZ1lJS29aSQp6ajBFQXdRRGdZb0FNSUdHQWtGLzRyNUM4bnkwdGVIMVJlRzdDdXJHYk02SzMzdTFDZ29GTkthajIva2ovYzlhCnZwODY0eFJKM2ZVSXZGMEtzL1dNUHNad2w2bjMxUWtXT2VpM01aYWtBUUpCREw0Kyt4UUxkMS
9uVWdqOW1zN2MKUUx3NXVEMGxqU0xrUS9mOTJGYy91WHc4QWVDck5XcVRqcDEycDJ6MkUzOXRyWWc1a2UvY2VTaWFPUm16eUJuTwpTUTg9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0= -``` diff --git a/content/en/docs/porch/user-guides/install-and-using-porch.md b/content/en/docs/porch/user-guides/install-and-using-porch.md deleted file mode 100644 index bb9f6859..00000000 --- a/content/en/docs/porch/user-guides/install-and-using-porch.md +++ /dev/null @@ -1,2099 +0,0 @@ ---- -title: "Install and use Porch" -type: docs -weight: 1 -description: "A tutorial to install and use Porch" ---- - -This tutorial is a guide to installing and using Porch. It is based on the -[Porch demo produced by Tal Liron of Google](https://github.com/tliron/klab/tree/main/environments/porch-demo). Users -should be comfortable using *git*, *docker*, and *kubernetes*. - -See also [the Nephio Learning Resource](https://github.com/nephio-project/docs/blob/main/learning.md) page for -background help and information. - -## Prerequisites - -The tutorial can be executed on a Linux VM or directly on a laptop. It has been verified to execute on a MacBook Pro M1 -machine and an Ubuntu 20.04 VM. - -The following software should be installed prior to running through the tutorial: - -1. [git](https://git-scm.com/) -2. [Docker](https://www.docker.com/get-started/) -3. [kubectl](https://kubernetes.io/docs/reference/kubectl/) -4. [kind](https://kind.sigs.k8s.io/) -5. [kpt](https://github.com/kptdev/kpt) -6. [The go programming language](https://go.dev/) -7. [Visual Studio Code](https://code.visualstudio.com/download) -8. 
[VS Code extensions for go](https://code.visualstudio.com/docs/languages/go) - -## Clone the repository and cd into the tutorial - -```bash -git clone https://github.com/nephio-project/porch.git - -cd porch/examples/tutorials/starting-with-porch/ -``` - -## Create the Kind clusters for management and edge1 - -Create the clusters: - -```bash -kind create cluster --config=kind_management_cluster.yaml -kind create cluster --config=kind_edge1_cluster.yaml -``` - -Output the *kubectl* configuration for the clusters: - -```bash -kind get kubeconfig --name=management > ~/.kube/kind-management-config -kind get kubeconfig --name=edge1 > ~/.kube/kind-edge1-config -``` - -Toggling *kubectl* between the clusters: - -```bash -export KUBECONFIG=~/.kube/kind-management-config - -export KUBECONFIG=~/.kube/kind-edge1-config -``` - -## Install MetalLB on the management cluster - -Install the MetalLB load balancer on the management cluster to expose services: - -```bash -export KUBECONFIG=~/.kube/kind-management-config -kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.12/config/manifests/metallb-native.yaml -kubectl wait --namespace metallb-system \ - --for=condition=ready pod \ - --selector=component=controller \ - --timeout=90s -``` - -Check the subnet that is being used by the kind network in docker - -```bash -docker network inspect kind | grep Subnet -``` - -Sample output: - -```yaml -"Subnet": "172.18.0.0/16", -"Subnet": "fc00:f853:ccd:e793::/64" -``` - -Edit the *metallb-conf.yaml* file and ensure the spec.addresses range is in the IPv4 subnet being used by the kind network in docker. - -```yaml -... -spec: - addresses: - - 172.18.255.200-172.18.255.250 -... 
-``` - -Apply the MetalLB configuration: - -```bash -kubectl apply -f metallb-conf.yaml -``` - -## Deploy and set up Gitea on the management cluster using kpt - -Get the *gitea kpt* package: - -```bash -export KUBECONFIG=~/.kube/kind-management-config - -cd kpt_packages - -kpt pkg get https://github.com/nephio-project/catalog/tree/main/distros/sandbox/gitea -``` - -Comment out the preconfigured IP address from the *gitea/service-gitea.yaml* file in the *gitea kpt* package: - -```bash -11c11 -< metallb.universe.tf/loadBalancerIPs: 172.18.0.200 ---- -> # metallb.universe.tf/loadBalancerIPs: 172.18.0.200 -``` - -Now render, init and apply the *gitea kpt* package: - -```bash -kpt fn render gitea -kpt live init gitea # You only need to do this command once -kpt live apply gitea -``` - -Once the package is applied, all the Gitea pods should come up and you should be able to reach the Gitea UI on the -exposed IP Address/port of the Gitea service. - -```bash -kubectl get svc -n gitea gitea - -NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE -gitea LoadBalancer 10.96.243.120 172.18.255.200 22:31305/TCP,3000:31102/TCP 10m -``` - -The UI is available at http://172.18.255.200:3000 in the example above. - -To login to Gitea, use the credentials nephio:secret. - -## Create repositories on Gitea for management and edge1 - -On the Gitea UI, click the **+** opposite **Repositories** and fill in the form for both the *management* and *edge1* -repositories. 
Use default values except for the following fields:
-
-- Repository Name: "management" or "edge1"
-- Description: Something appropriate
-
-Alternatively, we can create the repositories via curl:
-
-```bash
-curl -k -H "content-type: application/json" "http://nephio:secret@172.18.255.200:3000/api/v1/user/repos" --data '{"name":"management"}'
-
-curl -k -H "content-type: application/json" "http://nephio:secret@172.18.255.200:3000/api/v1/user/repos" --data '{"name":"edge1"}'
-```
-
-Check the repositories:
-
-```bash
-curl -k -H "content-type: application/json" "http://nephio:secret@172.18.255.200:3000/api/v1/user/repos" | grep -Po '"name": *\K"[^"]*"'
-```
-
-Now initialize both repositories with an initial commit.
-
-Initialize the *management* repository:
-
-```bash
-cd ../repos
-git clone http://172.18.255.200:3000/nephio/management
-cd management
-
-touch README.md
-git init
-git checkout -b main
-git config user.name nephio
-git add README.md
-
-git commit -m "first commit"
-git remote remove origin
-git remote add origin http://nephio:secret@172.18.255.200:3000/nephio/management.git
-git remote -v
-git push -u origin main
-cd ..
-```
-
-Initialize the *edge1* repository:
-
-```bash
-git clone http://172.18.255.200:3000/nephio/edge1
-cd edge1
-
-touch README.md
-git init
-git checkout -b main
-git config user.name nephio
-git add README.md
-
-git commit -m "first commit"
-git remote remove origin
-git remote add origin http://nephio:secret@172.18.255.200:3000/nephio/edge1.git
-git remote -v
-git push -u origin main
-cd ../../
-```
-
-## Install Porch
-
-We will use the *Porch Kpt* package from the Nephio catalog repository.
-
-```bash
-cd kpt_packages
-
-kpt pkg get https://github.com/nephio-project/catalog/tree/main/nephio/core/porch
-```
-
-Now we can install Porch. We render the *kpt* package and then init and apply it.
-
-```bash
-kpt fn render porch
-kpt live init porch # You only need to run this command once
-kpt live apply porch
-```
-
-Check that the Porch pods are running on the management cluster:
-
-```bash
-kubectl get pod -n porch-system
-NAME READY STATUS RESTARTS AGE
-function-runner-7994f65554-nrzdh 1/1 Running 0 81s
-function-runner-7994f65554-txh9l 1/1 Running 0 81s
-porch-controllers-7fb4497b77-2r2r6 1/1 Running 0 81s
-porch-server-68bfdddbbf-pfqsm 1/1 Running 0 81s
-```
-
-Check that the Porch CRDs and other resources have been created:
-
-```bash
-kubectl api-resources | grep porch
-packagerevs config.porch.kpt.dev/v1alpha1 true PackageRev
-packagevariants config.porch.kpt.dev/v1alpha1 true PackageVariant
-packagevariantsets config.porch.kpt.dev/v1alpha2 true PackageVariantSet
-repositories config.porch.kpt.dev/v1alpha1 true Repository
-packagerevisionresources porch.kpt.dev/v1alpha1 true PackageRevisionResources
-packagerevisions porch.kpt.dev/v1alpha1 true PackageRevision
-packages porch.kpt.dev/v1alpha1 true Package
-```
-
-## Connect the Gitea repositories to Porch
-
-Create a demo namespace:
-
-```bash
-kubectl create namespace porch-demo
-```
-
-Create a secret for the Gitea credentials in the demo namespace:
-
-```bash
-kubectl create secret generic gitea \
-  --namespace=porch-demo \
-  --type=kubernetes.io/basic-auth \
-  --from-literal=username=nephio \
-  --from-literal=password=secret
-```
-
-Now, define the Gitea repositories in Porch:
-
-```bash
-kubectl apply -f porch-repositories.yaml
-```
-
-Check that the repositories have been correctly created:
-
-```bash
-kubectl get repositories -n porch-demo
-NAME TYPE CONTENT DEPLOYMENT READY ADDRESS
-edge1 git Package true True http://172.18.255.200:3000/nephio/edge1.git
-external-blueprints git Package false True https://github.com/nephio-project/free5gc-packages.git
-management git Package false True http://172.18.255.200:3000/nephio/management.git
-```
-
-## Configure configsync on the workload cluster
-
-
-configsync is installed on the edge1 cluster so that it syncs the contents of the *edge1* repository onto the edge1
-workload cluster. We will use the configsync package from Nephio.
-
-```bash
-export KUBECONFIG=~/.kube/kind-edge1-config
-
-cd kpt_packages
-
-kpt pkg get https://github.com/nephio-project/catalog/tree/main/nephio/core/configsync
-kpt fn render configsync
-kpt live init configsync
-kpt live apply configsync
-```
-
-Check that the configsync pods are up and running:
-
-```bash
-kubectl get pod -n config-management-system
-NAME READY STATUS RESTARTS AGE
-config-management-operator-6946b77565-f45pc 1/1 Running 0 118m
-reconciler-manager-5b5d8557-gnhb2 2/2 Running 0 118m
-```
-
-Now, we need to set up a RootSync CR to synchronize the *edge1* repository:
-
-```bash
-kpt pkg get https://github.com/nephio-project/catalog/tree/main/nephio/optional/rootsync
-```
-
-Edit the *rootsync/package-context.yaml* file to set the name of the cluster and repository that we are syncing:
-
-```bash
-9c9
-< name: example-rootsync
----
-> name: edge1
-```
-
-Render the package. 
This configures the *rootsync/rootsync.yaml* file in the Kpt package: - -```bash -kpt fn render rootsync -``` - -Edit the *rootsync/rootsync.yaml* file to set the IP address of Gitea and to turn off authentication for accessing -Gitea: - -```bash -11c11 -< repo: http://172.18.0.200:3000/nephio/example-cluster-name.git ---- -> repo: http://172.18.255.200:3000/nephio/edge1.git -13,15c13,16 -< auth: token -< secretRef: -< name: example-cluster-name-access-token-configsync ---- -> auth: none -> # auth: token -> # secretRef: -> # name: edge1-access-token-configsync -``` - -Initialize and apply RootSync: - -```bash -export KUBECONFIG=~/.kube/kind-edge1-config - -kpt live init rootsync # This command is only needed once -kpt live apply rootsync -``` - -Check that the RootSync CR is created: - -```bash -kubectl get rootsync -n config-management-system -NAME RENDERINGCOMMIT RENDERINGERRORCOUNT SOURCECOMMIT SOURCEERRORCOUNT SYNCCOMMIT SYNCERRORCOUNT -edge1 613eb1ad5632d95c4336894f8a128cc871fb3266 613eb1ad5632d95c4336894f8a128cc871fb3266 613eb1ad5632d95c4336894f8a128cc871fb3266 -``` - -Check that configsync is synchronized with the repository on the management cluster: - -```bash -kubectl get pod -n config-management-system -l app=reconciler -NAME READY STATUS RESTARTS AGE -root-reconciler-edge1-68576f878c-92k54 4/4 Running 0 2d17h - -kubectl logs -n config-management-system root-reconciler-edge1-68576f878c-92k54 -c git-sync -f - -``` - -The result should be similar to: - -```bash -INFO: detected pid 1, running init handler -I0105 17:50:11.472934 15 cmd.go:48] "level"=5 "msg"="running command" "cwd"="" "cmd"="git config --global gc.autoDetach false" -I0105 17:50:11.493046 15 cmd.go:48] "level"=5 "msg"="running command" "cwd"="" "cmd"="git config --global gc.pruneExpire now" -I0105 17:50:11.513487 15 main.go:473] "level"=0 "msg"="starting up" "pid"=15 "args"=["/git-sync","--root=/repo/source","--dest=rev","--max-sync-failures=30","--error-file=error.json","--v=5"] -I0105 
17:50:11.514044 15 main.go:923] "level"=0 "msg"="cloning repo" "origin"="http://172.18.255.200:3000/nephio/edge1.git" "path"="/repo/source" -I0105 17:50:11.514061 15 cmd.go:48] "level"=5 "msg"="running command" "cwd"="" "cmd"="git clone -v --no-checkout -b main --depth 1 http://172.18.255.200:3000/nephio/edge1.git /repo/source" -I0105 17:50:11.706506 15 cmd.go:48] "level"=5 "msg"="running command" "cwd"="/repo/source" "cmd"="git rev-parse HEAD" -I0105 17:50:11.729292 15 main.go:737] "level"=0 "msg"="syncing git" "rev"="HEAD" "hash"="385295a2143f10a6cda0cf4609c45d7499185e01" -I0105 17:50:11.729332 15 cmd.go:48] "level"=5 "msg"="running command" "cwd"="/repo/source" "cmd"="git fetch -f --tags --depth 1 http://172.18.255.200:3000/nephio/edge1.git main" -I0105 17:50:11.920110 15 cmd.go:48] "level"=5 "msg"="running command" "cwd"="/repo/source" "cmd"="git cat-file -t 385295a2143f10a6cda0cf4609c45d7499185e01" -I0105 17:50:11.945545 15 cmd.go:48] "level"=5 "msg"="running command" "cwd"="/repo/source" "cmd"="git rev-parse 385295a2143f10a6cda0cf4609c45d7499185e01" -I0105 17:50:11.967150 15 main.go:726] "level"=1 "msg"="removing worktree" "path"="/repo/source/385295a2143f10a6cda0cf4609c45d7499185e01" -I0105 17:50:11.967359 15 cmd.go:48] "level"=5 "msg"="running command" "cwd"="/repo/source" "cmd"="git worktree prune" -I0105 17:50:11.987522 15 cmd.go:48] "level"=5 "msg"="running command" "cwd"="/repo/source" "cmd"="git worktree add --detach /repo/source/385295a2143f10a6cda0cf4609c45d7499185e01 385295a2143f10a6cda0cf4609c45d7499185e01 --no-checkout" -I0105 17:50:12.057698 15 main.go:772] "level"=0 "msg"="adding worktree" "path"="/repo/source/385295a2143f10a6cda0cf4609c45d7499185e01" "branch"="origin/main" -I0105 17:50:12.057988 15 cmd.go:48] "level"=5 "msg"="running command" "cwd"="/repo/source/385295a2143f10a6cda0cf4609c45d7499185e01" "cmd"="git reset --hard 385295a2143f10a6cda0cf4609c45d7499185e01" -I0105 17:50:12.099783 15 main.go:833] "level"=0 "msg"="reset worktree to 
hash" "path"="/repo/source/385295a2143f10a6cda0cf4609c45d7499185e01" "hash"="385295a2143f10a6cda0cf4609c45d7499185e01" -I0105 17:50:12.099805 15 main.go:838] "level"=0 "msg"="updating submodules" -I0105 17:50:12.099976 15 cmd.go:48] "level"=5 "msg"="running command" "cwd"="/repo/source/385295a2143f10a6cda0cf4609c45d7499185e01" "cmd"="git submodule update --init --recursive --depth 1" -I0105 17:50:12.442466 15 main.go:694] "level"=1 "msg"="creating tmp symlink" "root"="/repo/source/" "dst"="385295a2143f10a6cda0cf4609c45d7499185e01" "src"="tmp-link" -I0105 17:50:12.442494 15 cmd.go:48] "level"=5 "msg"="running command" "cwd"="/repo/source/" "cmd"="ln -snf 385295a2143f10a6cda0cf4609c45d7499185e01 tmp-link" -I0105 17:50:12.453694 15 main.go:699] "level"=1 "msg"="renaming symlink" "root"="/repo/source/" "old_name"="tmp-link" "new_name"="rev" -I0105 17:50:12.453718 15 cmd.go:48] "level"=5 "msg"="running command" "cwd"="/repo/source/" "cmd"="mv -T tmp-link rev" -I0105 17:50:12.467904 15 cmd.go:48] "level"=5 "msg"="running command" "cwd"="/repo/source" "cmd"="git gc --auto" -I0105 17:50:12.492329 15 cmd.go:48] "level"=5 "msg"="running command" "cwd"="/repo/source" "cmd"="git cat-file -t HEAD" -I0105 17:50:12.518878 15 cmd.go:48] "level"=5 "msg"="running command" "cwd"="/repo/source" "cmd"="git rev-parse HEAD" -I0105 17:50:12.540979 15 main.go:585] "level"=1 "msg"="next sync" "wait_time"=15000000000 -I0105 17:50:27.553609 15 cmd.go:48] "level"=5 "msg"="running command" "cwd"="/repo/source/rev" "cmd"="git rev-parse HEAD" -I0105 17:50:27.600401 15 cmd.go:48] "level"=5 "msg"="running command" "cwd"="/repo/source/rev" "cmd"="git ls-remote -q http://172.18.255.200:3000/nephio/edge1.git refs/heads/main" -I0105 17:50:27.694035 15 main.go:1065] "level"=1 "msg"="no update required" "rev"="HEAD" "local"="385295a2143f10a6cda0cf4609c45d7499185e01" "remote"="385295a2143f10a6cda0cf4609c45d7499185e01" -I0105 17:50:27.694159 15 main.go:585] "level"=1 "msg"="next sync" 
"wait_time"=15000000000 -I0105 17:50:42.695482 15 cmd.go:48] "level"=5 "msg"="running command" "cwd"="/repo/source/rev" "cmd"="git rev-parse HEAD" -I0105 17:50:42.733276 15 cmd.go:48] "level"=5 "msg"="running command" "cwd"="/repo/source/rev" "cmd"="git ls-remote -q http://172.18.255.200:3000/nephio/edge1.git refs/heads/main" -I0105 17:50:42.826422 15 main.go:1065] "level"=1 "msg"="no update required" "rev"="HEAD" "local"="385295a2143f10a6cda0cf4609c45d7499185e01" "remote"="385295a2143f10a6cda0cf4609c45d7499185e01" -I0105 17:50:42.826611 15 main.go:585] "level"=1 "msg"="next sync" "wait_time"=15000000000 - -....... - -I0108 11:04:05.935586 15 cmd.go:48] "level"=5 "msg"="running command" "cwd"="/repo/source/rev" "cmd"="git rev-parse HEAD" -I0108 11:04:05.981750 15 cmd.go:48] "level"=5 "msg"="running command" "cwd"="/repo/source/rev" "cmd"="git ls-remote -q http://172.18.255.200:3000/nephio/edge1.git refs/heads/main" -I0108 11:04:06.079536 15 main.go:1065] "level"=1 "msg"="no update required" "rev"="HEAD" "local"="385295a2143f10a6cda0cf4609c45d7499185e01" "remote"="385295a2143f10a6cda0cf4609c45d7499185e01" -I0108 11:04:06.079599 15 main.go:585] "level"=1 "msg"="next sync" "wait_time"=15000000000 -``` - -## Exploring the Porch resources - -We have configured three repositories in Porch: - -```bash -kubectl get repositories -n porch-demo -NAME TYPE CONTENT DEPLOYMENT READY ADDRESS -edge1 git Package true True http://172.18.255.200:3000/nephio/edge1.git -external-blueprints git Package false True https://github.com/nephio-project/free5gc-packages.git -management git Package false True http://172.18.255.200:3000/nephio/management.git -``` - -A repository is a CR of the Porch Repository CRD. 
You can examine the *repositories.config.porch.kpt.dev* CRD with
-either of the following commands (both of which are rather verbose):
-
-```bash
-kubectl get crd -n porch-system repositories.config.porch.kpt.dev -o yaml
-kubectl describe crd -n porch-system repositories.config.porch.kpt.dev
-```
-
-You can examine any other CRD using the commands above, changing the CRD name and namespace as appropriate.
-
-The full list of resources in the *porch.kpt.dev* API group is as follows:
-
-```bash
-kubectl api-resources --api-group=porch.kpt.dev
-NAME SHORTNAMES APIVERSION NAMESPACED KIND
-packagerevisionresources porch.kpt.dev/v1alpha1 true PackageRevisionResources
-packagerevisions porch.kpt.dev/v1alpha1 true PackageRevision
-packages porch.kpt.dev/v1alpha1 true Package
-```
-
-The PackageRevision resource is used to keep track of the revisions (or versions) of each package found in the repositories.
-
-```bash
-kubectl get packagerevision -n porch-demo
-NAME PACKAGE WORKSPACENAME REVISION LATEST LIFECYCLE REPOSITORY
-external-blueprints-922121d0bcdd56bfa8cae6c375720e2b5f358ab0 free5gc-cp main main false Published external-blueprints
-external-blueprints-dabbc422fdf0b8e5942e767d929b524e25f7eef9 free5gc-cp v1 v1 true Published external-blueprints
-external-blueprints-716aae722092dbbb9470e56079b90ad76ec8f0d5 free5gc-operator main main false Published external-blueprints
-external-blueprints-d65dc89f7a2472650651e9aea90edfcc81a9afc6 free5gc-operator v1 v1 false Published external-blueprints
-external-blueprints-9fee880e8fa52066f052c9cae7aac2e2bc1b5a54 free5gc-operator v2 v2 false Published external-blueprints
-external-blueprints-91d60ee31d2d0a1a6d5f1807593d5419434accd3 free5gc-operator v3 v3 false Published external-blueprints
-external-blueprints-21f19a0641cf520e7dc6268e64c58c2c30c27036 free5gc-operator v4 v4 false Published external-blueprints
-external-blueprints-bf2e7522ee92680bd49571ab309e3f61320cf36d free5gc-operator v5 v5 true Published external-blueprints
-external-blueprints-c1b9ecb73118e001ab1d1213e6a2c94ab67a0939 
free5gc-upf main main false Published external-blueprints -external-blueprints-5d48b1516e7b1ea15830ffd76b230862119981bd free5gc-upf v1 v1 true Published external-blueprints -external-blueprints-ed97798b46b36d135cf23d813eccad4857dff90f pkg-example-amf-bp main main false Published external-blueprints -external-blueprints-ed744bfdf4a4d15d4fcf3c46fde27fd6ac32d180 pkg-example-amf-bp v1 v1 false Published external-blueprints -external-blueprints-5489faa80782f91f1a07d04e206935d14c1eb24c pkg-example-amf-bp v2 v2 false Published external-blueprints -external-blueprints-16e2255bd433ef532684a3c1434ae0bede175107 pkg-example-amf-bp v3 v3 false Published external-blueprints -external-blueprints-7689cc6c953fa83ea61283983ce966dcdffd9bae pkg-example-amf-bp v4 v4 false Published external-blueprints -external-blueprints-caff9609883eea7b20b73b7425e6694f8eb6adc3 pkg-example-amf-bp v5 v5 true Published external-blueprints -external-blueprints-00b6673c438909975548b2b9f20c2e1663161815 pkg-example-smf-bp main main false Published external-blueprints -external-blueprints-4f7dfbede99dc08f2b5144ca550ca218109c52f2 pkg-example-smf-bp v1 v1 false Published external-blueprints -external-blueprints-3d9ab8f61ce1d35e264d5719d4b3c0da1ab02328 pkg-example-smf-bp v2 v2 false Published external-blueprints -external-blueprints-2006501702e105501784c78be9e7d57e426d85e8 pkg-example-smf-bp v3 v3 false Published external-blueprints -external-blueprints-c97ed7c13b3aa47cb257217f144960743aec1253 pkg-example-smf-bp v4 v4 false Published external-blueprints -external-blueprints-3bd78e46b014dac5cc0c58788c1820d043d61569 pkg-example-smf-bp v5 v5 true Published external-blueprints -external-blueprints-c3f660848d9d7a4df5481ec2e06196884778cd84 pkg-example-upf-bp main main false Published external-blueprints -external-blueprints-4cb00a17c1ee2585d6c187ba4d0211da960c0940 pkg-example-upf-bp v1 v1 false Published external-blueprints -external-blueprints-5903efe295026124e6fea926df154a72c5bd1ea9 pkg-example-upf-bp v2 v2 false 
Published external-blueprints
-external-blueprints-16142d8d23c1b8e868a9524a1b21634c79b432d5 pkg-example-upf-bp v3 v3 false Published external-blueprints
-external-blueprints-60ef45bb8f55b63556e7467f16088325022a7ece pkg-example-upf-bp v4 v4 false Published external-blueprints
-external-blueprints-7757966cc7b965f1b9372370a4b382c8375a2b40 pkg-example-upf-bp v5 v5 true Published external-blueprints
-```
-
-The PackageRevisionResources resource is an aggregated API resource that Porch uses to serve the contents of a package
-revision directly from its repository.
-
-```bash
-kubectl get packagerevisionresources -n porch-demo
-NAME PACKAGE WORKSPACENAME REVISION REPOSITORY FILES
-external-blueprints-922121d0bcdd56bfa8cae6c375720e2b5f358ab0 free5gc-cp main main external-blueprints 28
-external-blueprints-dabbc422fdf0b8e5942e767d929b524e25f7eef9 free5gc-cp v1 v1 external-blueprints 28
-external-blueprints-716aae722092dbbb9470e56079b90ad76ec8f0d5 free5gc-operator main main external-blueprints 14
-external-blueprints-d65dc89f7a2472650651e9aea90edfcc81a9afc6 free5gc-operator v1 v1 external-blueprints 11
-external-blueprints-9fee880e8fa52066f052c9cae7aac2e2bc1b5a54 free5gc-operator v2 v2 external-blueprints 11
-external-blueprints-91d60ee31d2d0a1a6d5f1807593d5419434accd3 free5gc-operator v3 v3 external-blueprints 14
-external-blueprints-21f19a0641cf520e7dc6268e64c58c2c30c27036 free5gc-operator v4 v4 external-blueprints 14
-external-blueprints-bf2e7522ee92680bd49571ab309e3f61320cf36d free5gc-operator v5 v5 external-blueprints 14
-external-blueprints-c1b9ecb73118e001ab1d1213e6a2c94ab67a0939 free5gc-upf main main external-blueprints 6
-external-blueprints-5d48b1516e7b1ea15830ffd76b230862119981bd free5gc-upf v1 v1 external-blueprints 6
-external-blueprints-ed97798b46b36d135cf23d813eccad4857dff90f pkg-example-amf-bp main main external-blueprints 16
-external-blueprints-ed744bfdf4a4d15d4fcf3c46fde27fd6ac32d180 pkg-example-amf-bp v1 v1 external-blueprints 7 
-external-blueprints-5489faa80782f91f1a07d04e206935d14c1eb24c pkg-example-amf-bp v2 v2 external-blueprints 8 -external-blueprints-16e2255bd433ef532684a3c1434ae0bede175107 pkg-example-amf-bp v3 v3 external-blueprints 16 -external-blueprints-7689cc6c953fa83ea61283983ce966dcdffd9bae pkg-example-amf-bp v4 v4 external-blueprints 16 -external-blueprints-caff9609883eea7b20b73b7425e6694f8eb6adc3 pkg-example-amf-bp v5 v5 external-blueprints 16 -external-blueprints-00b6673c438909975548b2b9f20c2e1663161815 pkg-example-smf-bp main main external-blueprints 17 -external-blueprints-4f7dfbede99dc08f2b5144ca550ca218109c52f2 pkg-example-smf-bp v1 v1 external-blueprints 8 -external-blueprints-3d9ab8f61ce1d35e264d5719d4b3c0da1ab02328 pkg-example-smf-bp v2 v2 external-blueprints 9 -external-blueprints-2006501702e105501784c78be9e7d57e426d85e8 pkg-example-smf-bp v3 v3 external-blueprints 17 -external-blueprints-c97ed7c13b3aa47cb257217f144960743aec1253 pkg-example-smf-bp v4 v4 external-blueprints 17 -external-blueprints-3bd78e46b014dac5cc0c58788c1820d043d61569 pkg-example-smf-bp v5 v5 external-blueprints 17 -external-blueprints-c3f660848d9d7a4df5481ec2e06196884778cd84 pkg-example-upf-bp main main external-blueprints 17 -external-blueprints-4cb00a17c1ee2585d6c187ba4d0211da960c0940 pkg-example-upf-bp v1 v1 external-blueprints 8 -external-blueprints-5903efe295026124e6fea926df154a72c5bd1ea9 pkg-example-upf-bp v2 v2 external-blueprints 8 -external-blueprints-16142d8d23c1b8e868a9524a1b21634c79b432d5 pkg-example-upf-bp v3 v3 external-blueprints 17 -external-blueprints-60ef45bb8f55b63556e7467f16088325022a7ece pkg-example-upf-bp v4 v4 external-blueprints 17 -external-blueprints-7757966cc7b965f1b9372370a4b382c8375a2b40 pkg-example-upf-bp v5 v5 external-blueprints 17 -``` - -Let's examine the *free5gc-cp v1* package. - -The PackageRevision CR name for *free5gc-cp v1* is external-blueprints-dabbc422fdf0b8e5942e767d929b524e25f7eef9. 
-
-```bash
-kubectl get packagerevision -n porch-demo external-blueprints-dabbc422fdf0b8e5942e767d929b524e25f7eef9 -o yaml
-```
-
-```yaml
-apiVersion: porch.kpt.dev/v1alpha1
-kind: PackageRevision
-metadata:
-  creationTimestamp: "2023-06-13T13:35:34Z"
-  labels:
-    kpt.dev/latest-revision: "true"
-  name: external-blueprints-dabbc422fdf0b8e5942e767d929b524e25f7eef9
-  namespace: porch-demo
-  resourceVersion: 5fc9561dcd4b2630704c192e89887490e2ff3c61
-  uid: uid:free5gc-cp:v1
-spec:
-  lifecycle: Published
-  packageName: free5gc-cp
-  repository: external-blueprints
-  revision: v1
-  workspaceName: v1
-status:
-  publishTimestamp: "2023-06-13T13:35:34Z"
-  publishedBy: dnaleksandrov@gmail.com
-  upstreamLock: {}
-```
-
-Getting the *PackageRevisionResources* resource pulls the package from its repository, with each file serialized into a
-name-value map of resources in its spec.
-
-
-Open this to see the command and the result - -```bash -kubectl get packagerevisionresources -n porch-demo external-blueprints-dabbc422fdf0b8e5942e767d929b524e25f7eef9 -o yaml -``` -```yaml -apiVersion: porch.kpt.dev/v1alpha1 -kind: PackageRevisionResources -metadata: - creationTimestamp: "2023-06-13T13:35:34Z" - name: external-blueprints-dabbc422fdf0b8e5942e767d929b524e25f7eef9 - namespace: porch-demo - resourceVersion: 5fc9561dcd4b2630704c192e89887490e2ff3c61 - uid: uid:free5gc-cp:v1 -spec: - packageName: free5gc-cp - repository: external-blueprints - resources: - Kptfile: | - apiVersion: kpt.dev/v1 - kind: Kptfile - metadata: - name: free5gc-cp - annotations: - config.kubernetes.io/local-config: "true" - info: - description: this package represents free5gc NFs, which are required to perform E2E conn testing - pipeline: - mutators: - - image: gcr.io/kpt-fn/set-namespace:v0.4.1 - configPath: package-context.yaml - README.md: "# free5gc-cp\n\n## Description\nPackage representing free5gc control - plane NFs.\n\nPackage definition is based on [Towards5gs helm charts](https://github.com/Orange-OpenSource/towards5gs-helm), - \nand service level configuration is preserved as defined there.\n\n### Network - Functions (NFs)\n\nfree5gc project implements following NFs:\n\n\n| NF | Description - | local-config |\n| --- | --- | --- |\n| AMF | Access and Mobility Management - Function | true |\n| AUSF | Authentication Server Function | false |\n| NRF - | Network Repository Function | false |\n| NSSF | Network Slice Selection Function - | false |\n| PCF | Policy Control Function | false |\n| SMF | Session Management - Function | true |\n| UDM | Unified Data Management | false |\n| UDR | Unified - Data Repository | false |\n\nalso Database and Web UI is defined:\n\n| Service - | Description | local-config |\n| --- | --- | --- |\n| mongodb | Database to - store free5gc data | false |\n| webui | UI used to register UE | false |\n\nNote: - `local-config: true` indicates that this 
resources won't be deployed to the - workload cluster\n\n### Dependencies\n\n- `mongodb` requires `Persistent Volume`. - We need to assure that dynamic PV provisioning will be available on the cluster\n- - `NRF` should be running before other NFs will be instantiated\n - all NFs - packages contain `wait-nrf` init-container\n- `NRF` and `WEBUI` require DB\n - \ - packages contain `wait-mongodb` init-container\n- `WEBUI` service is exposed - as `NodePort` \n - will be used to register UE on the free5gc side\n- Communication - via `SBI` between NFs and communication with `mongodb` is defined using K8s - `ClusterIP` services\n - it forces you to deploy all NFs on a single cluster - or consider including `service mesh` in a multi-cluster scenario\n\n## Usage\n\n### - Fetch the package\n`kpt pkg get REPO_URI[.git]/PKG_PATH[@VERSION] free5gc-cp`\n\nDetails: - https://kpt.dev/reference/cli/pkg/get/\n\n### View package content\n`kpt pkg - tree free5gc-cp`\n\nDetails: https://kpt.dev/reference/cli/pkg/tree/\n\n### - Apply the package\n```\nkpt live init free5gc-cp\nkpt live apply free5gc-cp - --reconcile-timeout=2m --output=table\n```\n\nDetails: https://kpt.dev/reference/cli/live/\n\n" - ausf/ausf-configmap.yaml: "---\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: - ausf-configmap\n labels:\n app.kubernetes.io/version: \"v3.1.1\"\n app: - free5gc\ndata:\n ausfcfg.yaml: |\n info:\n version: 1.0.2\n description: - AUSF initial local configuration\n\n configuration:\n serviceNameList:\n - \ - nausf-auth\n \n sbi:\n scheme: http\n registerIPv4: - ausf-nausf # IP used to register to NRF\n bindingIPv4: 0.0.0.0 # - IP used to bind the service\n port: 80\n tls:\n key: - config/TLS/ausf.key\n pem: config/TLS/ausf.pem\n \n nrfUri: - http://nrf-nnrf:8000\n plmnSupportList:\n - mcc: 208\n mnc: - 93\n - mcc: 123\n mnc: 45\n groupId: ausfGroup001\n eapAkaSupiImsiPrefix: - false\n\n logger:\n AUSF:\n ReportCaller: false\n debugLevel: - info\n" - ausf/ausf-deployment.yaml: 
"---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n - \ name: free5gc-ausf\n labels:\n app.kubernetes.io/version: \"v3.1.1\"\n - \ project: free5gc\n nf: ausf\nspec:\n replicas: 1\n selector:\n matchLabels:\n - \ project: free5gc\n nf: ausf\n template:\n metadata:\n labels:\n - \ project: free5gc\n nf: ausf\n spec:\n initContainers:\n - \ - name: wait-nrf\n image: towards5gs/initcurl:1.0.0\n env:\n - \ - name: DEPENDENCIES\n value: http://nrf-nnrf:8000\n command: - ['sh', '-c', 'set -x; for dependency in $DEPENDENCIES; do while [ $(curl --insecure - --connect-timeout 1 -s -o /dev/null -w \"%{http_code}\" $dependency) -ne 200 - ]; do echo waiting for dependencies; sleep 1; done; done;']\n \n containers:\n - \ - name: ausf\n image: towards5gs/free5gc-ausf:v3.1.1\n imagePullPolicy: - IfNotPresent\n securityContext:\n {}\n ports:\n - - containerPort: 80\n command: [\"./ausf\"]\n args: [\"-c\", \"../config/ausfcfg.yaml\"]\n - \ env:\n - name: GIN_MODE\n value: release\n volumeMounts:\n - \ - mountPath: /free5gc/config/\n name: ausf-volume\n resources:\n - \ limits:\n cpu: 100m\n memory: 128Mi\n - \ requests:\n cpu: 100m\n memory: 128Mi\n - \ dnsPolicy: ClusterFirst\n restartPolicy: Always\n\n volumes:\n - \ - name: ausf-volume\n projected:\n sources:\n - - configMap:\n name: ausf-configmap\n" - ausf/ausf-service.yaml: | - --- - apiVersion: v1 - kind: Service - metadata: - name: ausf-nausf - labels: - app.kubernetes.io/version: "v3.1.1" - project: free5gc - nf: ausf - spec: - type: ClusterIP - ports: - - port: 80 - targetPort: 80 - protocol: TCP - name: http - selector: - project: free5gc - nf: ausf - mongodb/dep-sts.yaml: "---\napiVersion: apps/v1\nkind: StatefulSet\nmetadata:\n - \ name: mongodb\n namespace: default\n labels:\n app.kubernetes.io/name: - mongodb\n app.kubernetes.io/instance: free5gc\n app.kubernetes.io/component: - mongodb\nspec:\n serviceName: mongodb\n updateStrategy:\n type: RollingUpdate\n - \ selector:\n matchLabels:\n app.kubernetes.io/name: 
mongodb\n app.kubernetes.io/instance: - free5gc\n app.kubernetes.io/component: mongodb\n template:\n metadata:\n - \ labels:\n app.kubernetes.io/name: mongodb\n app.kubernetes.io/instance: - free5gc\n app.kubernetes.io/component: mongodb\n spec:\n \n serviceAccountName: - mongodb\n affinity:\n podAffinity:\n podAntiAffinity:\n preferredDuringSchedulingIgnoredDuringExecution:\n - \ - podAffinityTerm:\n labelSelector:\n matchLabels:\n - \ app.kubernetes.io/name: mongodb\n app.kubernetes.io/instance: - free5gc\n app.kubernetes.io/component: mongodb\n namespaces:\n - \ - \"default\"\n topologyKey: kubernetes.io/hostname\n - \ weight: 1\n nodeAffinity:\n \n securityContext:\n - \ fsGroup: 1001\n sysctls: []\n containers:\n - name: - mongodb\n image: docker.io/bitnami/mongodb:4.4.4-debian-10-r0\n imagePullPolicy: - \"IfNotPresent\"\n securityContext:\n runAsNonRoot: true\n - \ runAsUser: 1001\n env:\n - name: BITNAMI_DEBUG\n - \ value: \"false\"\n - name: ALLOW_EMPTY_PASSWORD\n value: - \"yes\"\n - name: MONGODB_SYSTEM_LOG_VERBOSITY\n value: - \"0\"\n - name: MONGODB_DISABLE_SYSTEM_LOG\n value: - \"no\"\n - name: MONGODB_ENABLE_IPV6\n value: \"no\"\n - \ - name: MONGODB_ENABLE_DIRECTORY_PER_DB\n value: \"no\"\n - \ ports:\n - name: mongodb\n containerPort: - 27017\n livenessProbe:\n exec:\n command:\n - \ - mongo\n - --disableImplicitSessions\n - - --eval\n - \"db.adminCommand('ping')\"\n initialDelaySeconds: - 30\n periodSeconds: 10\n timeoutSeconds: 5\n successThreshold: - 1\n failureThreshold: 6\n readinessProbe:\n exec:\n - \ command:\n - bash\n - -ec\n - - |\n mongo --disableImplicitSessions $TLS_OPTIONS --eval 'db.hello().isWritablePrimary - || db.hello().secondary' | grep -q 'true'\n initialDelaySeconds: - 5\n periodSeconds: 10\n timeoutSeconds: 5\n successThreshold: - 1\n failureThreshold: 6\n resources:\n limits: - {}\n requests: {}\n volumeMounts:\n - name: datadir\n - \ mountPath: /bitnami/mongodb/data/db/\n subPath: \n - \ volumes:\n volumeClaimTemplates:\n - 
metadata:\n name: datadir\n - \ spec:\n accessModes:\n - \"ReadWriteOnce\"\n resources:\n - \ requests:\n storage: \"6Gi\"\n" - mongodb/serviceaccount.yaml: | - --- - apiVersion: v1 - kind: ServiceAccount - metadata: - name: mongodb - namespace: default - labels: - app.kubernetes.io/name: mongodb - app.kubernetes.io/instance: free5gc - secrets: - - name: mongodb - mongodb/svc.yaml: | - --- - apiVersion: v1 - kind: Service - metadata: - name: mongodb - namespace: default - labels: - app.kubernetes.io/name: mongodb - app.kubernetes.io/instance: free5gc - app.kubernetes.io/component: mongodb - spec: - type: ClusterIP - ports: - - name: mongodb - port: 27017 - targetPort: mongodb - nodePort: null - selector: - app.kubernetes.io/name: mongodb - app.kubernetes.io/instance: free5gc - app.kubernetes.io/component: mongodb - namespace.yaml: | - apiVersion: v1 - kind: Namespace - metadata: - name: example - labels: - pod-security.kubernetes.io/warn: "privileged" - pod-security.kubernetes.io/audit: "privileged" - pod-security.kubernetes.io/enforce: "privileged" - nrf/nrf-configmap.yaml: "---\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: - nrf-configmap\n labels:\n app.kubernetes.io/version: \"v3.1.1\"\n app: - free5gc\ndata:\n nrfcfg.yaml: |\n info:\n version: 1.0.1\n description: - NRF initial local configuration\n \n configuration:\n MongoDBName: - free5gc\n MongoDBUrl: mongodb://mongodb:27017\n\n serviceNameList:\n - \ - nnrf-nfm\n - nnrf-disc\n\n sbi:\n scheme: http\n - \ registerIPv4: nrf-nnrf # IP used to serve NFs or register to another - NRF\n bindingIPv4: 0.0.0.0 # IP used to bind the service\n port: - 8000\n tls:\n key: config/TLS/nrf.key\n pem: config/TLS/nrf.pem\n - \ DefaultPlmnId:\n mcc: 208\n mnc: 93\n\n logger:\n NRF:\n - \ ReportCaller: false\n debugLevel: info\n" - nrf/nrf-deployment.yaml: "---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n - \ name: free5gc-nrf\n labels:\n app.kubernetes.io/version: \"v3.1.1\"\n - \ project: free5gc\n nf: 
nrf\nspec:\n replicas: 1\n selector:\n matchLabels:\n - \ project: free5gc\n nf: nrf\n template:\n metadata:\n labels:\n - \ project: free5gc\n nf: nrf\n spec:\n initContainers:\n - \ - name: wait-mongo\n image: busybox:1.32.0\n env:\n - - name: DEPENDENCIES\n value: mongodb:27017\n command: [\"sh\", - \"-c\", \"until nc -z $DEPENDENCIES; do echo waiting for the MongoDB; sleep - 2; done;\"]\n containers:\n - name: nrf\n image: towards5gs/free5gc-nrf:v3.1.1\n - \ imagePullPolicy: IfNotPresent\n securityContext:\n {}\n - \ ports:\n - containerPort: 8000\n command: [\"./nrf\"]\n - \ args: [\"-c\", \"../config/nrfcfg.yaml\"]\n env: \n - - name: DB_URI\n value: mongodb://mongodb/free5gc\n - name: - GIN_MODE\n value: release\n volumeMounts:\n - mountPath: - /free5gc/config/\n name: nrf-volume\n resources:\n limits:\n - \ cpu: 100m\n memory: 128Mi\n requests:\n - \ cpu: 100m\n memory: 128Mi\n readinessProbe:\n - \ initialDelaySeconds: 0\n periodSeconds: 1\n timeoutSeconds: - 1\n failureThreshold: 40\n successThreshold: 1\n httpGet:\n - \ scheme: \"HTTP\"\n port: 8000\n livenessProbe:\n - \ initialDelaySeconds: 120\n periodSeconds: 10\n timeoutSeconds: - 10\n failureThreshold: 3\n successThreshold: 1\n httpGet:\n - \ scheme: \"HTTP\"\n port: 8000\n dnsPolicy: ClusterFirst\n - \ restartPolicy: Always\n\n volumes:\n - name: nrf-volume\n projected:\n - \ sources:\n - configMap:\n name: nrf-configmap\n" - nrf/nrf-service.yaml: | - --- - apiVersion: v1 - kind: Service - metadata: - name: nrf-nnrf - labels: - app.kubernetes.io/version: "v3.1.1" - project: free5gc - nf: nrf - spec: - type: ClusterIP - ports: - - port: 8000 - targetPort: 8000 - protocol: TCP - name: http - selector: - project: free5gc - nf: nrf - nssf/nssf-configmap.yaml: "---\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: - nssf-configmap\n labels:\n app.kubernetes.io/version: \"v3.1.1\"\n app: - free5gc\ndata:\n nssfcfg.yaml: |\n info:\n version: 1.0.1\n description: - NSSF initial local configuration\n\n 
configuration:\n serviceNameList:\n - \ - nnssf-nsselection\n - nnssf-nssaiavailability\n\n sbi:\n - \ scheme: http\n registerIPv4: nssf-nnssf # IP used to register - to NRF\n bindingIPv4: 0.0.0.0 # IP used to bind the service\n port: - 80\n tls:\n key: config/TLS/nssf.key\n pem: config/TLS/nssf.pem\n - \ \n nrfUri: http://nrf-nnrf:8000\n \n nsiList:\n - - snssai:\n sst: 1\n nsiInformationList:\n - nrfId: - http://nrf-nnrf:8000/nnrf-nfm/v1/nf-instances\n nsiId: 10\n - - snssai:\n sst: 1\n sd: 1\n nsiInformationList:\n - \ - nrfId: http://nrf-nnrf:8000/nnrf-nfm/v1/nf-instances\n nsiId: - 11\n - snssai:\n sst: 1\n sd: 2\n nsiInformationList:\n - \ - nrfId: http://nrf-nnrf:8000/nnrf-nfm/v1/nf-instances\n nsiId: - 12\n - nrfId: http://nrf-nnrf:8000/nnrf-nfm/v1/nf-instances\n nsiId: - 12\n - snssai:\n sst: 1\n sd: 3\n nsiInformationList:\n - \ - nrfId: http://nrf-nnrf:8000/nnrf-nfm/v1/nf-instances\n nsiId: - 13\n - snssai:\n sst: 2\n nsiInformationList:\n - - nrfId: http://nrf-nnrf:8000/nnrf-nfm/v1/nf-instances\n nsiId: 20\n - \ - snssai:\n sst: 2\n sd: 1\n nsiInformationList:\n - \ - nrfId: http://nrf-nnrf:8000/nnrf-nfm/v1/nf-instances\n nsiId: - 21\n - snssai:\n sst: 1\n sd: 010203\n nsiInformationList:\n - \ - nrfId: http://nrf-nnrf:8000/nnrf-nfm/v1/nf-instances\n nsiId: - 22\n amfSetList:\n - amfSetId: 1\n amfList:\n - - ffa2e8d7-3275-49c7-8631-6af1df1d9d26\n - 0e8831c3-6286-4689-ab27-1e2161e15cb1\n - \ - a1fba9ba-2e39-4e22-9c74-f749da571d0d\n nrfAmfSet: http://nrf-nnrf:8081/nnrf-nfm/v1/nf-instances\n - \ supportedNssaiAvailabilityData:\n - tai:\n plmnId:\n - \ mcc: 466\n mnc: 92\n tac: - 33456\n supportedSnssaiList:\n - sst: 1\n sd: - 1\n - sst: 1\n sd: 2\n - sst: - 2\n sd: 1\n - tai:\n plmnId:\n mcc: - 466\n mnc: 92\n tac: 33457\n supportedSnssaiList:\n - \ - sst: 1\n - sst: 1\n sd: 1\n - \ - sst: 1\n sd: 2\n - amfSetId: 2\n nrfAmfSet: - http://nrf-nnrf:8084/nnrf-nfm/v1/nf-instances\n supportedNssaiAvailabilityData:\n - \ - tai:\n plmnId:\n mcc: 466\n mnc: - 92\n 
tac: 33456\n supportedSnssaiList:\n - - sst: 1\n - sst: 1\n sd: 1\n - - sst: 1\n sd: 3\n - sst: 2\n sd: - 1\n - tai:\n plmnId:\n mcc: 466\n - \ mnc: 92\n tac: 33458\n supportedSnssaiList:\n - \ - sst: 1\n - sst: 1\n sd: 1\n - \ - sst: 2\n nssfName: NSSF\n supportedPlmnList:\n - - mcc: 208\n mnc: 93\n supportedNssaiInPlmnList:\n - plmnId:\n - \ mcc: 208\n mnc: 93\n supportedSnssaiList:\n - \ - sst: 1\n sd: 010203\n - sst: 1\n sd: - 112233\n - sst: 1\n sd: 3\n - sst: 2\n sd: - 1\n - sst: 2\n sd: 2\n amfList:\n - nfId: - 469de254-2fe5-4ca0-8381-af3f500af77c\n supportedNssaiAvailabilityData:\n - \ - tai:\n plmnId:\n mcc: 466\n mnc: - 92\n tac: 33456\n supportedSnssaiList:\n - - sst: 1\n - sst: 1\n sd: 2\n - - sst: 2\n - tai:\n plmnId:\n mcc: - 466\n mnc: 92\n tac: 33457\n supportedSnssaiList:\n - \ - sst: 1\n sd: 1\n - sst: 1\n - \ sd: 2\n - nfId: fbe604a8-27b2-417e-bd7c-8a7be2691f8d\n - \ supportedNssaiAvailabilityData:\n - tai:\n plmnId:\n - \ mcc: 466\n mnc: 92\n tac: - 33458\n supportedSnssaiList:\n - sst: 1\n - - sst: 1\n sd: 1\n - sst: 1\n sd: - 3\n - sst: 2\n - tai:\n plmnId:\n mcc: - 466\n mnc: 92\n tac: 33459\n supportedSnssaiList:\n - \ - sst: 1\n - sst: 1\n sd: 1\n - \ - sst: 2\n - sst: 2\n sd: 1\n - \ - nfId: b9e6e2cb-5ce8-4cb6-9173-a266dd9a2f0c\n supportedNssaiAvailabilityData:\n - \ - tai:\n plmnId:\n mcc: 466\n mnc: - 92\n tac: 33456\n supportedSnssaiList:\n - - sst: 1\n - sst: 1\n sd: 1\n - - sst: 1\n sd: 2\n - sst: 2\n - tai:\n - \ plmnId:\n mcc: 466\n mnc: - 92\n tac: 33458\n supportedSnssaiList:\n - - sst: 1\n - sst: 1\n sd: 1\n - - sst: 2\n - sst: 2\n sd: 1\n taList:\n - - tai:\n plmnId:\n mcc: 466\n mnc: 92\n tac: - 33456\n accessType: 3GPP_ACCESS\n supportedSnssaiList:\n - - sst: 1\n - sst: 1\n sd: 1\n - sst: 1\n sd: - 2\n - sst: 2\n - tai:\n plmnId:\n mcc: - 466\n mnc: 92\n tac: 33457\n accessType: 3GPP_ACCESS\n - \ supportedSnssaiList:\n - sst: 1\n - sst: 1\n - \ sd: 1\n - sst: 1\n sd: 2\n - - sst: 2\n - tai:\n plmnId:\n mcc: 466\n mnc: - 92\n 
tac: 33458\n accessType: 3GPP_ACCESS\n supportedSnssaiList:\n - \ - sst: 1\n - sst: 1\n sd: 1\n - - sst: 1\n sd: 3\n - sst: 2\n restrictedSnssaiList:\n - \ - homePlmnId:\n mcc: 310\n mnc: 560\n - \ sNssaiList:\n - sst: 1\n sd: 3\n - \ - tai:\n plmnId:\n mcc: 466\n mnc: - 92\n tac: 33459\n accessType: 3GPP_ACCESS\n supportedSnssaiList:\n - \ - sst: 1\n - sst: 1\n sd: 1\n - - sst: 2\n - sst: 2\n sd: 1\n restrictedSnssaiList:\n - \ - homePlmnId:\n mcc: 310\n mnc: 560\n - \ sNssaiList:\n - sst: 2\n sd: 1\n - \ mappingListFromPlmn:\n - operatorName: NTT Docomo\n homePlmnId:\n - \ mcc: 440\n mnc: 10\n mappingOfSnssai:\n - - servingSnssai:\n sst: 1\n sd: 1\n homeSnssai:\n - \ sst: 1\n sd: 1\n - servingSnssai:\n - \ sst: 1\n sd: 2\n homeSnssai:\n sst: - 1\n sd: 3\n - servingSnssai:\n sst: - 1\n sd: 3\n homeSnssai:\n sst: 1\n - \ sd: 4\n - servingSnssai:\n sst: 2\n - \ sd: 1\n homeSnssai:\n sst: 2\n sd: - 2\n - operatorName: AT&T Mobility\n homePlmnId:\n mcc: - 310\n mnc: 560\n mappingOfSnssai:\n - servingSnssai:\n - \ sst: 1\n sd: 1\n homeSnssai:\n sst: - 1\n sd: 2\n - servingSnssai:\n sst: - 1\n sd: 2\n homeSnssai:\n sst: 1\n - \ sd: 3 \n\n logger:\n NSSF:\n ReportCaller: - false\n debugLevel: info\n" - nssf/nssf-deployment.yaml: "---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n - \ name: free5gc-nssf\n labels:\n app.kubernetes.io/version: \"v3.1.1\"\n - \ project: free5gc\n nf: nssf\nspec:\n replicas: 1\n selector:\n matchLabels:\n - \ project: free5gc\n nf: nssf\n template:\n metadata:\n labels:\n - \ project: free5gc\n nf: nssf\n spec:\n initContainers:\n - \ - name: wait-nrf\n image: towards5gs/initcurl:1.0.0\n env:\n - \ - name: DEPENDENCIES\n value: http://nrf-nnrf:8000\n command: - ['sh', '-c', 'set -x; for dependency in $DEPENDENCIES; do while [ $(curl --insecure - --connect-timeout 1 -s -o /dev/null -w \"%{http_code}\" $dependency) -ne 200 - ]; do echo waiting for dependencies; sleep 1; done; done;']\n\n containers:\n - \ - name: nssf\n image: 
towards5gs/free5gc-nssf:v3.1.1\n imagePullPolicy: - IfNotPresent\n securityContext:\n {}\n ports:\n - - containerPort: 80\n command: [\"./nssf\"]\n args: [\"-c\", \"../config/nssfcfg.yaml\"]\n - \ env: \n - name: GIN_MODE\n value: release\n volumeMounts:\n - \ - mountPath: /free5gc/config/\n name: nssf-volume\n resources:\n - \ limits:\n cpu: 100m\n memory: 128Mi\n - \ requests:\n cpu: 100m\n memory: 128Mi\n - \ dnsPolicy: ClusterFirst\n restartPolicy: Always\n\n volumes:\n - \ - name: nssf-volume\n projected:\n sources:\n - - configMap:\n name: nssf-configmap\n" - nssf/nssf-service.yaml: | - --- - apiVersion: v1 - kind: Service - metadata: - name: nssf-nnssf - labels: - app.kubernetes.io/version: "v3.1.1" - project: free5gc - nf: nssf - spec: - type: ClusterIP - ports: - - port: 80 - targetPort: 80 - protocol: TCP - name: http - selector: - project: free5gc - nf: nssf - package-context.yaml: | - apiVersion: v1 - kind: ConfigMap - metadata: - name: kptfile.kpt.dev - annotations: - config.kubernetes.io/local-config: "true" - data: - name: free5gc - namespace: free5gc - pcf/pcf-configmap.yaml: "---\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: - pcf-configmap\n labels:\n app.kubernetes.io/version: \"v3.1.1\"\n app: - free5gc\ndata:\n pcfcfg.yaml: |\n info:\n version: 1.0.1\n description: - PCF initial local configuration\n\n configuration:\n serviceList:\n - \ - serviceName: npcf-am-policy-control\n - serviceName: npcf-smpolicycontrol\n - \ suppFeat: 3fff\n - serviceName: npcf-bdtpolicycontrol\n - - serviceName: npcf-policyauthorization\n suppFeat: 3\n - serviceName: - npcf-eventexposure\n - serviceName: npcf-ue-policy-control\n\n sbi:\n - \ scheme: http\n registerIPv4: pcf-npcf # IP used to register - to NRF\n bindingIPv4: 0.0.0.0 # IP used to bind the service\n port: - 80\n tls:\n key: config/TLS/pcf.key\n pem: config/TLS/pcf.pem\n - \ \n mongodb: # the mongodb connected by this PCF\n name: - free5gc # name of the mongodb\n url: mongodb://mongodb:27017 - # a 
valid URL of the mongodb\n \n nrfUri: http://nrf-nnrf:8000\n pcfName: - PCF\n timeFormat: 2019-01-02 15:04:05\n defaultBdtRefId: BdtPolicyId-\n - \ locality: area1\n\n logger:\n PCF:\n ReportCaller: false\n - \ debugLevel: info\n" - pcf/pcf-deployment.yaml: | - --- - apiVersion: apps/v1 - kind: Deployment - metadata: - name: free5gc-pcf - labels: - app.kubernetes.io/version: "v3.1.1" - project: free5gc - nf: pcf - spec: - replicas: 1 - selector: - matchLabels: - project: free5gc - nf: pcf - template: - metadata: - labels: - project: free5gc - nf: pcf - spec: - initContainers: - - name: wait-nrf - image: towards5gs/initcurl:1.0.0 - env: - - name: DEPENDENCIES - value: http://nrf-nnrf:8000 - command: ['sh', '-c', 'set -x; for dependency in $DEPENDENCIES; do while [ $(curl --insecure --connect-timeout 1 -s -o /dev/null -w "%{http_code}" $dependency) -ne 200 ]; do echo waiting for dependencies; sleep 1; done; done;'] - - containers: - - name: pcf - image: towards5gs/free5gc-pcf:v3.1.1 - imagePullPolicy: IfNotPresent - ports: - - containerPort: 80 - command: ["./pcf"] - args: ["-c", "../config/pcfcfg.yaml"] - env: - - name: GIN_MODE - value: release - volumeMounts: - - mountPath: /free5gc/config/ - name: pcf-volume - resources: - limits: - cpu: 100m - memory: 128Mi - requests: - cpu: 100m - memory: 128Mi - dnsPolicy: ClusterFirst - restartPolicy: Always - - volumes: - - name: pcf-volume - projected: - sources: - - configMap: - name: pcf-configmap - pcf/pcf-service.yaml: | - --- - apiVersion: v1 - kind: Service - metadata: - name: pcf-npcf - labels: - app.kubernetes.io/version: "v3.1.1" - project: free5gc - nf: pcf - spec: - type: ClusterIP - ports: - - port: 80 - targetPort: 80 - protocol: TCP - name: http - selector: - project: free5gc - nf: pcf - udm/udm-configmap.yaml: "---\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: - udm-configmap\n labels:\n app.kubernetes.io/version: \"v3.1.1\"\n app: - free5gc\ndata:\n udmcfg.yaml: |\n info:\n version: 1.0.2\n 
description: - UDM initial local configuration\n\n configuration:\n serviceNameList:\n - \ - nudm-sdm\n - nudm-uecm\n - nudm-ueau\n - nudm-ee\n - \ - nudm-pp\n \n sbi:\n scheme: http\n registerIPv4: - udm-nudm # IP used to register to NRF\n bindingIPv4: 0.0.0.0 # IP used - to bind the service\n port: 80\n tls:\n key: config/TLS/udm.key\n - \ pem: config/TLS/udm.pem\n \n nrfUri: http://nrf-nnrf:8000\n - \ # test data set from TS33501-f60 Annex C.4\n SuciProfile:\n - - ProtectionScheme: 1 # Protect Scheme: Profile A\n PrivateKey: c53c22208b61860b06c62e5406a7b330c2b577aa5558981510d128247d38bd1d\n - \ PublicKey: 5a8d38864820197c3394b92613b20b91633cbd897119273bf8e4a6f4eec0a650\n - \ - ProtectionScheme: 2 # Protect Scheme: Profile B\n PrivateKey: - F1AB1074477EBCC7F554EA1C5FC368B1616730155E0041AC447D6301975FECDA\n PublicKey: - 0472DA71976234CE833A6907425867B82E074D44EF907DFB4B3E21C1C2256EBCD15A7DED52FCBB097A4ED250E036C7B9C8C7004C4EEDC4F068CD7BF8D3F900E3B4\n\n - \ logger:\n UDM:\n ReportCaller: false\n debugLevel: info\n" - udm/udm-deployment.yaml: "---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n - \ name: free5gc-udm\n labels:\n app.kubernetes.io/version: \"v3.1.1\"\n - \ project: free5gc\n nf: udm\nspec:\n replicas: 1\n selector:\n matchLabels:\n - \ project: free5gc\n nf: udm\n template:\n metadata:\n labels:\n - \ project: free5gc\n nf: udm\n spec:\n initContainers:\n - \ - name: wait-nrf\n image: towards5gs/initcurl:1.0.0\n env:\n - \ - name: DEPENDENCIES\n value: http://nrf-nnrf:8000\n command: - ['sh', '-c', 'set -x; for dependency in $DEPENDENCIES; do while [ $(curl --insecure - --connect-timeout 1 -s -o /dev/null -w \"%{http_code}\" $dependency) -ne 200 - ]; do echo waiting for dependencies; sleep 1; done; done;']\n\n containers:\n - \ - name: udm\n image: towards5gs/free5gc-udm:v3.1.1\n imagePullPolicy: - IfNotPresent\n ports:\n - containerPort: 80\n command: - [\"./udm\"]\n args: [\"-c\", \"../config/udmcfg.yaml\"]\n env: - \n - name: GIN_MODE\n value: 
release\n volumeMounts:\n - \ - mountPath: /free5gc/config/\n name: udm-volume\n resources:\n - \ limits:\n cpu: 100m\n memory: 128Mi\n - \ requests:\n cpu: 100m\n memory: 128Mi\n - \ dnsPolicy: ClusterFirst\n restartPolicy: Always\n\n volumes:\n - \ - name: udm-volume\n projected:\n sources:\n - - configMap:\n name: udm-configmap\n" - udm/udm-service.yaml: | - --- - apiVersion: v1 - kind: Service - metadata: - name: udm-nudm - labels: - app.kubernetes.io/version: "v3.1.1" - project: free5gc - nf: udm - spec: - type: ClusterIP - ports: - - port: 80 - targetPort: 80 - protocol: TCP - name: http - selector: - project: free5gc - nf: udm - udr/udr-configmap.yaml: "---\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: - udr-configmap\n labels:\n app.kubernetes.io/version: \"v3.1.1\"\n app: - free5gc\ndata:\n udrcfg.yaml: |\n info:\n version: 1.0.1\n description: - UDR initial local configuration\n\n configuration:\n sbi:\n scheme: - http\n registerIPv4: udr-nudr # IP used to register to NRF\n bindingIPv4: - 0.0.0.0 # IP used to bind the service\n port: 80\n tls:\n key: - config/TLS/udr.key\n pem: config/TLS/udr.pem\n\n mongodb:\n name: - free5gc\n url: mongodb://mongodb:27017 \n \n nrfUri: - http://nrf-nnrf:8000\n\n logger:\n MongoDBLibrary:\n ReportCaller: - false\n debugLevel: info\n OpenApi:\n ReportCaller: false\n - \ debugLevel: info\n PathUtil:\n ReportCaller: false\n debugLevel: - info\n UDR:\n ReportCaller: false\n debugLevel: info\n" - udr/udr-deployment.yaml: "---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n - \ name: free5gc-udr\n labels:\n app.kubernetes.io/version: \"v3.1.1\"\n - \ project: free5gc\n nf: udr\nspec:\n replicas: 1\n selector:\n matchLabels:\n - \ project: free5gc\n nf: udr\n template:\n metadata:\n labels:\n - \ project: free5gc\n nf: udr\n spec:\n initContainers:\n - \ - name: wait-nrf\n image: towards5gs/initcurl:1.0.0\n env:\n - \ - name: DEPENDENCIES\n value: http://nrf-nnrf:8000\n command: - ['sh', '-c', 'set -x; for dependency 
in $DEPENDENCIES; do while [ $(curl --insecure - --connect-timeout 1 -s -o /dev/null -w \"%{http_code}\" $dependency) -ne 200 - ]; do echo waiting for dependencies; sleep 1; done; done;']\n\n containers:\n - \ - name: udr\n image: towards5gs/free5gc-udr:v3.1.1\n imagePullPolicy: - IfNotPresent\n ports:\n - containerPort: 80\n command: - [\"./udr\"]\n args: [\"-c\", \"../config/udrcfg.yaml\"]\n env: - \n - name: DB_URI\n value: mongodb://mongodb/free5gc\n - - name: GIN_MODE\n value: release\n volumeMounts:\n - - mountPath: /free5gc/config/\n name: udr-volume\n resources:\n - \ limits:\n cpu: 100m\n memory: 128Mi\n - \ requests:\n cpu: 100m\n memory: 128Mi\n - \ dnsPolicy: ClusterFirst\n restartPolicy: Always\n\n volumes:\n - \ - name: udr-volume\n projected:\n sources:\n - - configMap:\n name: udr-configmap\n" - udr/udr-service.yaml: | - --- - apiVersion: v1 - kind: Service - metadata: - name: udr-nudr - labels: - app.kubernetes.io/version: "v3.1.1" - project: free5gc - nf: udr - spec: - type: ClusterIP - ports: - - port: 80 - targetPort: 80 - protocol: TCP - name: http - selector: - project: free5gc - nf: udr - webui/webui-configmap.yaml: "---\napiVersion: v1\nkind: ConfigMap\nmetadata:\n - \ name: webui-configmap\n labels:\n app.kubernetes.io/version: \"v3.1.1\"\n - \ app: free5gc\ndata:\n webuicfg.yaml: |\n info:\n version: 1.0.0\n - \ description: WEBUI initial local configuration\n\n configuration:\n - \ mongodb:\n name: free5gc\n url: mongodb://mongodb:27017\n - \ \n logger:\n WEBUI:\n ReportCaller: false\n debugLevel: - info\n" - webui/webui-deployment.yaml: | - --- - apiVersion: apps/v1 - kind: Deployment - metadata: - name: free5gc-webui - labels: - app.kubernetes.io/version: "v3.1.1" - project: free5gc - nf: webui - spec: - replicas: 1 - selector: - matchLabels: - project: free5gc - nf: webui - template: - metadata: - labels: - project: free5gc - nf: webui - spec: - initContainers: - - name: wait-mongo - image: busybox:1.32.0 - env: - - name: DEPENDENCIES 
- value: mongodb:27017 - command: ["sh", "-c", "until nc -z $DEPENDENCIES; do echo waiting for the MongoDB; sleep 2; done;"] - containers: - - name: webui - image: towards5gs/free5gc-webui:v3.1.1 - imagePullPolicy: IfNotPresent - ports: - - containerPort: 5000 - command: ["./webconsole"] - args: ["-c", "../config/webuicfg.yaml"] - env: - - name: GIN_MODE - value: release - volumeMounts: - - mountPath: /free5gc/config/ - name: webui-volume - resources: - limits: - cpu: 100m - memory: 128Mi - requests: - cpu: 100m - memory: 128Mi - readinessProbe: - initialDelaySeconds: 0 - periodSeconds: 1 - timeoutSeconds: 1 - failureThreshold: 40 - successThreshold: 1 - httpGet: - scheme: HTTP - port: 5000 - livenessProbe: - initialDelaySeconds: 120 - periodSeconds: 10 - timeoutSeconds: 10 - failureThreshold: 3 - successThreshold: 1 - httpGet: - scheme: HTTP - port: 5000 - dnsPolicy: ClusterFirst - restartPolicy: Always - - volumes: - - name: webui-volume - projected: - sources: - - configMap: - name: webui-configmap - webui/webui-service.yaml: | - --- - apiVersion: v1 - kind: Service - metadata: - name: webui-service - labels: - app.kubernetes.io/version: "v3.1.1" - project: free5gc - nf: webui - spec: - type: NodePort - ports: - - port: 5000 - targetPort: 5000 - nodePort: 30500 - protocol: TCP - name: http - selector: - project: free5gc - nf: webui - revision: v1 - workspaceName: v1 -status: - renderStatus: - error: "" - result: - exitCode: 0 - metadata: - creationTimestamp: null -``` -
- -## The porchctl command - -The `porchctl` command is an administration command for acting on Porch `Repository` (repo) and `PackageRevision` (rpkg) -CRs. See its [documentation for usage information](porchctl-cli-guide.md). - -Check that porchctl lists our repositories: - -```bash -porchctl repo -n porch-demo get -NAME                  TYPE   CONTENT   DEPLOYMENT   READY   ADDRESS -edge1                 git    Package   true         True    http://172.18.255.200:3000/nephio/edge1.git -external-blueprints   git    Package   false        True    https://github.com/nephio-project/free5gc-packages.git -management            git    Package   false        True    http://172.18.255.200:3000/nephio/management.git -``` -
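Each repository listed above is represented by a *Repository* CR in the management cluster. For orientation, a registration for the *edge1* deployment repository might look like the following sketch (the *secretRef* name is an assumption; the URL matches the listing above):

```yaml
apiVersion: config.porch.kpt.dev/v1alpha1
kind: Repository
metadata:
  name: edge1
  namespace: porch-demo
spec:
  type: git
  content: Package
  deployment: true # marks this as a deployment repository
  git:
    repo: http://172.18.255.200:3000/nephio/edge1.git
    branch: main
    secretRef:
      name: gitea-auth # assumed secret holding the Gitea credentials
```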
-Check that porchctl lists our remote packages (PackageRevisions): - -``` -porchctl rpkg -n porch-demo get -NAME PACKAGE WORKSPACENAME REVISION LATEST LIFECYCLE REPOSITORY -external-blueprints-922121d0bcdd56bfa8cae6c375720e2b5f358ab0 free5gc-cp main main false Published external-blueprints -external-blueprints-dabbc422fdf0b8e5942e767d929b524e25f7eef9 free5gc-cp v1 v1 true Published external-blueprints -external-blueprints-716aae722092dbbb9470e56079b90ad76ec8f0d5 free5gc-operator main main false Published external-blueprints -external-blueprints-d65dc89f7a2472650651e9aea90edfcc81a9afc6 free5gc-operator v1 v1 false Published external-blueprints -external-blueprints-9fee880e8fa52066f052c9cae7aac2e2bc1b5a54 free5gc-operator v2 v2 false Published external-blueprints -external-blueprints-91d60ee31d2d0a1a6d5f1807593d5419434accd3 free5gc-operator v3 v3 false Published external-blueprints -external-blueprints-21f19a0641cf520e7dc6268e64c58c2c30c27036 free5gc-operator v4 v4 false Published external-blueprints -external-blueprints-bf2e7522ee92680bd49571ab309e3f61320cf36d free5gc-operator v5 v5 true Published external-blueprints -external-blueprints-c1b9ecb73118e001ab1d1213e6a2c94ab67a0939 free5gc-upf main main false Published external-blueprints -external-blueprints-5d48b1516e7b1ea15830ffd76b230862119981bd free5gc-upf v1 v1 true Published external-blueprints -external-blueprints-ed97798b46b36d135cf23d813eccad4857dff90f pkg-example-amf-bp main main false Published external-blueprints -external-blueprints-ed744bfdf4a4d15d4fcf3c46fde27fd6ac32d180 pkg-example-amf-bp v1 v1 false Published external-blueprints -external-blueprints-5489faa80782f91f1a07d04e206935d14c1eb24c pkg-example-amf-bp v2 v2 false Published external-blueprints -external-blueprints-16e2255bd433ef532684a3c1434ae0bede175107 pkg-example-amf-bp v3 v3 false Published external-blueprints -external-blueprints-7689cc6c953fa83ea61283983ce966dcdffd9bae pkg-example-amf-bp v4 v4 false Published external-blueprints 
-external-blueprints-caff9609883eea7b20b73b7425e6694f8eb6adc3 pkg-example-amf-bp v5 v5 true Published external-blueprints -external-blueprints-00b6673c438909975548b2b9f20c2e1663161815 pkg-example-smf-bp main main false Published external-blueprints -external-blueprints-4f7dfbede99dc08f2b5144ca550ca218109c52f2 pkg-example-smf-bp v1 v1 false Published external-blueprints -external-blueprints-3d9ab8f61ce1d35e264d5719d4b3c0da1ab02328 pkg-example-smf-bp v2 v2 false Published external-blueprints -external-blueprints-2006501702e105501784c78be9e7d57e426d85e8 pkg-example-smf-bp v3 v3 false Published external-blueprints -external-blueprints-c97ed7c13b3aa47cb257217f144960743aec1253 pkg-example-smf-bp v4 v4 false Published external-blueprints -external-blueprints-3bd78e46b014dac5cc0c58788c1820d043d61569 pkg-example-smf-bp v5 v5 true Published external-blueprints -external-blueprints-c3f660848d9d7a4df5481ec2e06196884778cd84 pkg-example-upf-bp main main false Published external-blueprints -external-blueprints-4cb00a17c1ee2585d6c187ba4d0211da960c0940 pkg-example-upf-bp v1 v1 false Published external-blueprints -external-blueprints-5903efe295026124e6fea926df154a72c5bd1ea9 pkg-example-upf-bp v2 v2 false Published external-blueprints -external-blueprints-16142d8d23c1b8e868a9524a1b21634c79b432d5 pkg-example-upf-bp v3 v3 false Published external-blueprints -external-blueprints-60ef45bb8f55b63556e7467f16088325022a7ece pkg-example-upf-bp v4 v4 false Published external-blueprints -external-blueprints-7757966cc7b965f1b9372370a4b382c8375a2b40 pkg-example-upf-bp v5 v5 true Published external-blueprints -``` -
- -The output above is similar to the output of `kubectl get packagerevision -n porch-demo` shown earlier. - -## Creating a blueprint in Porch - -### Blueprint with no Kpt pipelines - -Create a new package in our *management* repository using the sample *network-function* package provided. This network -function package is a demo Kpt package that installs [Nginx](https://github.com/nginx). - -``` -porchctl -n porch-demo rpkg init network-function --repository=management --workspace=v1 -management-8b80738a6e0707e3718ae1db3668d0b8ca3f1c82 created -porchctl -n porch-demo rpkg get --name network-function -NAME                                                  PACKAGE            WORKSPACENAME   REVISION   LATEST   LIFECYCLE   REPOSITORY -management-8b80738a6e0707e3718ae1db3668d0b8ca3f1c82   network-function   v1                         false    Draft       management -``` - -This command creates a new *PackageRevision* CR in Porch and also creates a branch called *network-function/v1* in our -Gitea *management* repository. Use the Gitea web UI to confirm that the branch has been created and note that it only has -default content as yet. - -We now pull the package we have initialized from Porch. - -``` -porchctl -n porch-demo rpkg pull management-8b80738a6e0707e3718ae1db3668d0b8ca3f1c82 blueprints/initialized/network-function -``` - -We update the initialized package and add our local changes. -``` -cp blueprints/local-changes/network-function/* blueprints/initialized/network-function -``` - -Now, we push the package contents to Porch: -``` -porchctl -n porch-demo rpkg push management-8b80738a6e0707e3718ae1db3668d0b8ca3f1c82 blueprints/initialized/network-function -``` - -Check the Gitea web UI to confirm that the package contents have been pushed. - -Now we propose and approve the package. 
- -``` -porchctl -n porch-demo rpkg propose management-8b80738a6e0707e3718ae1db3668d0b8ca3f1c82 -management-8b80738a6e0707e3718ae1db3668d0b8ca3f1c82 proposed - -porchctl -n porch-demo rpkg get --name network-function -NAME                                                  PACKAGE            WORKSPACENAME   REVISION   LATEST   LIFECYCLE   REPOSITORY -management-8b80738a6e0707e3718ae1db3668d0b8ca3f1c82   network-function   v1                         false    Proposed    management - -porchctl -n porch-demo rpkg approve management-8b80738a6e0707e3718ae1db3668d0b8ca3f1c82 -management-8b80738a6e0707e3718ae1db3668d0b8ca3f1c82 approved - -porchctl -n porch-demo rpkg get --name network-function -NAME                                                  PACKAGE            WORKSPACENAME   REVISION   LATEST   LIFECYCLE   REPOSITORY -management-8b80738a6e0707e3718ae1db3668d0b8ca3f1c82   network-function   v1              v1         true     Published   management - -``` - -Once we approve the package, it is merged into the main branch in the *management* repository and the branch called -*network-function/v1* in that repository is removed. Use the Gitea UI to verify this. We now have our blueprint package in our -*management* repository and we can deploy this package into workload clusters. - -### Blueprint with a Kpt pipeline - -The second blueprint in the *blueprint* directory is called *network-function-auto-namespace*. This network -function is exactly the same as the *network-function* package except that it has a Kpt function that automatically -creates a namespace using the name configured in the *name* field of the *package-context.yaml* file. Note that no -namespace is defined in the metadata of the *deployment.yaml* file of this Kpt package. - -We use the same sequence of commands again to publish our blueprint package for *network-function-auto-namespace*. 
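For reference, the pipeline in the *Kptfile* of such a package might look like the sketch below; the *set-namespace* function can take its namespace value from the *data.name* field of the package context ConfigMap (the exact function version used here is an assumption):

```yaml
apiVersion: kpt.dev/v1
kind: Kptfile
metadata:
  name: network-function-auto-namespace
pipeline:
  mutators:
    # set-namespace reads the namespace from the kptfile.kpt.dev ConfigMap
    # in package-context.yaml (its data.name field)
    - image: gcr.io/kpt-fn/set-namespace:v0.4.1
      configPath: package-context.yaml
```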
- -``` -porchctl -n porch-demo rpkg init network-function-auto-namespace --repository=management --workspace=v1 -management-c97bc433db93f2e8a3d413bed57216c2a72fc7e3 created - -porchctl -n porch-demo rpkg pull management-c97bc433db93f2e8a3d413bed57216c2a72fc7e3 blueprints/initialized/network-function-auto-namespace - -cp blueprints/local-changes/network-function-auto-namespace/* blueprints/initialized/network-function-auto-namespace - -porchctl -n porch-demo rpkg push management-c97bc433db93f2e8a3d413bed57216c2a72fc7e3 blueprints/initialized/network-function-auto-namespace -``` - -Examine the *drafts/network-function-auto-namespace/v1* branch in Gitea. Notice that the set-namespace Kpt function in -the pipeline in the *Kptfile* has set the namespace in the *deployment.yaml* file to the value default-namespace-name, -which it read from the *package-context.yaml* file. - -Now we propose and approve the package. - -``` -porchctl -n porch-demo rpkg propose management-c97bc433db93f2e8a3d413bed57216c2a72fc7e3 -management-c97bc433db93f2e8a3d413bed57216c2a72fc7e3 proposed - -porchctl -n porch-demo rpkg approve management-c97bc433db93f2e8a3d413bed57216c2a72fc7e3 -management-c97bc433db93f2e8a3d413bed57216c2a72fc7e3 approved - -porchctl -n porch-demo rpkg get --name network-function-auto-namespace -NAME PACKAGE WORKSPACENAME REVISION LATEST LIFECYCLE REPOSITORY -management-f9a6f2802111b9e81c296422c03aae279725f6df network-function-auto-namespace v1 main false Published management -management-c97bc433db93f2e8a3d413bed57216c2a72fc7e3 network-function-auto-namespace v1 v1 true Published management - -``` - -## Deploying a blueprint onto a workload cluster - -### Blueprint with no Kpt pipelines - -The process of deploying a blueprint package from our *management* repository clones the package, then modifies it for use on -the workload cluster. The cloned package is then initialized, pushed, proposed, and approved onto the *edge1* repository. 
-Remember that the *edge1* repository is being monitored by configsync from the edge1 cluster, so once the package appears in -the *edge1* repository on the management cluster, it will be pulled by configsync and applied to the edge1 cluster. - -``` -porchctl -n porch-demo rpkg pull management-8b80738a6e0707e3718ae1db3668d0b8ca3f1c82 tmp_packages_for_deployment/edge1-network-function-a.clone.tmp - -find tmp_packages_for_deployment/edge1-network-function-a.clone.tmp - -tmp_packages_for_deployment/edge1-network-function-a.clone.tmp -tmp_packages_for_deployment/edge1-network-function-a.clone.tmp/deployment.yaml -tmp_packages_for_deployment/edge1-network-function-a.clone.tmp/.KptRevisionMetadata -tmp_packages_for_deployment/edge1-network-function-a.clone.tmp/README.md -tmp_packages_for_deployment/edge1-network-function-a.clone.tmp/Kptfile -tmp_packages_for_deployment/edge1-network-function-a.clone.tmp/package-context.yaml -``` -The package we created in the last section is cloned. We now remove the original metadata from the package. -``` -rm tmp_packages_for_deployment/edge1-network-function-a.clone.tmp/.KptRevisionMetadata -``` - -We use a *kpt* function to change the namespace that will be used for the deployment of the network function. 
- -``` -kpt fn eval --image=gcr.io/kpt-fn/set-namespace:v0.4.1 tmp_packages_for_deployment/edge1-network-function-a.clone.tmp -- namespace=edge1-network-function-a - -[RUNNING] "gcr.io/kpt-fn/set-namespace:v0.4.1" -[PASS] "gcr.io/kpt-fn/set-namespace:v0.4.1" in 300ms -  Results: -    [info]: namespace "" updated to "edge1-network-function-a", 1 value(s) changed -``` - -We now initialize and push the package to the *edge1* repository: - -``` -porchctl -n porch-demo rpkg init edge1-network-function-a --repository=edge1 --workspace=v1 -edge1-d701be9b849b8b8724a6e052cbc74ca127b737c3 created - -porchctl -n porch-demo rpkg pull edge1-d701be9b849b8b8724a6e052cbc74ca127b737c3 tmp_packages_for_deployment/edge1-network-function-a - -cp tmp_packages_for_deployment/edge1-network-function-a.clone.tmp/* tmp_packages_for_deployment/edge1-network-function-a -rm -fr tmp_packages_for_deployment/edge1-network-function-a.clone.tmp - -porchctl -n porch-demo rpkg push edge1-d701be9b849b8b8724a6e052cbc74ca127b737c3 tmp_packages_for_deployment/edge1-network-function-a - -porchctl -n porch-demo rpkg get --name edge1-network-function-a -NAME                                             PACKAGE              WORKSPACENAME   REVISION   LATEST   LIFECYCLE   REPOSITORY -edge1-d701be9b849b8b8724a6e052cbc74ca127b737c3   network-function-a   v1                         false    Draft       edge1 -``` - -You can verify that the package is in the *network-function-a/v1* branch of the deployment repository using the Gitea web UI. - - -Check that the *edge1-network-function-a* package is not deployed on the edge1 cluster yet: -``` -export KUBECONFIG=~/.kube/kind-edge1-config - -kubectl get pod -n edge1-network-function-a -No resources found in edge1-network-function-a namespace. - -``` - -We now propose and approve the deployment package, which merges the package into the *edge1* repository and triggers -configsync to apply the package to the edge1 cluster. 
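For context, Config Sync on the edge1 cluster typically watches the deployment repository through a *RootSync* object along these lines (a sketch only; the object name and the authentication details are assumptions):

```yaml
apiVersion: configsync.gke.io/v1beta1
kind: RootSync
metadata:
  name: edge1-sync # assumed name
  namespace: config-management-system
spec:
  sourceFormat: unstructured
  git:
    repo: http://172.18.255.200:3000/nephio/edge1.git
    branch: main
    auth: token # assumed; depends on how Gitea access is set up
    secretRef:
      name: gitea-token # assumed secret name
```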
- -``` -export KUBECONFIG=~/.kube/kind-management-config - -porchctl -n porch-demo rpkg propose edge1-d701be9b849b8b8724a6e052cbc74ca127b737c3 -edge1-d701be9b849b8b8724a6e052cbc74ca127b737c3 proposed - -porchctl -n porch-demo rpkg approve edge1-d701be9b849b8b8724a6e052cbc74ca127b737c3 -edge1-d701be9b849b8b8724a6e052cbc74ca127b737c3 approved - -porchctl -n porch-demo rpkg get --name edge1-network-function-a -NAME                                             PACKAGE              WORKSPACENAME   REVISION   LATEST   LIFECYCLE   REPOSITORY -edge1-d701be9b849b8b8724a6e052cbc74ca127b737c3   network-function-a   v1              v1         true     Published   edge1 -``` - -We can now check that the *network-function-a* package is deployed on the edge1 cluster and that the pod is running: -``` -export KUBECONFIG=~/.kube/kind-edge1-config - -kubectl get pod -n edge1-network-function-a -No resources found in edge1-network-function-a namespace. - -kubectl get pod -n edge1-network-function-a -NAME                               READY   STATUS              RESTARTS   AGE -network-function-9779fc9f5-4rqp2   0/1     ContainerCreating   0          9s - -kubectl get pod -n edge1-network-function-a -NAME                               READY   STATUS    RESTARTS   AGE -network-function-9779fc9f5-4rqp2   1/1     Running   0          44s -``` - -### Blueprint with a Kpt pipeline - -The process for deploying a blueprint with a *Kpt* pipeline runs the Kpt pipeline automatically with whatever configuration we give it. Rather than explicitly running a *Kpt* function to change the namespace, we will specify the namespace as configuration and the pipeline will apply it to the deployment. 
- -``` -porchctl -n porch-demo rpkg pull management-c97bc433db93f2e8a3d413bed57216c2a72fc7e3 tmp_packages_for_deployment/edge1-network-function-auto-namespace-a.clone.tmp - -find tmp_packages_for_deployment/edge1-network-function-auto-namespace-a.clone.tmp - -tmp_packages_for_deployment/edge1-network-function-auto-namespace-a.clone.tmp -tmp_packages_for_deployment/edge1-network-function-auto-namespace-a.clone.tmp/deployment.yaml -tmp_packages_for_deployment/edge1-network-function-auto-namespace-a.clone.tmp/.KptRevisionMetadata -tmp_packages_for_deployment/edge1-network-function-auto-namespace-a.clone.tmp/README.md -tmp_packages_for_deployment/edge1-network-function-auto-namespace-a.clone.tmp/Kptfile -tmp_packages_for_deployment/edge1-network-function-auto-namespace-a.clone.tmp/package-context.yaml -``` - -We now remove the original metadata from the package. -``` -rm tmp_packages_for_deployment/edge1-network-function-auto-namespace-a.clone.tmp/.KptRevisionMetadata -``` - -The package we created in the last section is cloned. We now initialize and push the package to the *edge1* repository: - -``` -porchctl -n porch-demo rpkg init edge1-network-function-auto-namespace-a --repository=edge1 --workspace=v1 -edge1-48997da49ca0a733b0834c1a27943f1a0e075180 created - -porchctl -n porch-demo rpkg pull edge1-48997da49ca0a733b0834c1a27943f1a0e075180 tmp_packages_for_deployment/edge1-network-function-auto-namespace-a - -cp tmp_packages_for_deployment/edge1-network-function-auto-namespace-a.clone.tmp/* tmp_packages_for_deployment/edge1-network-function-auto-namespace-a -rm -fr tmp_packages_for_deployment/edge1-network-function-auto-namespace-a.clone.tmp -``` - - -We now simply configure the namespace we want to apply. 
Edit the *tmp_packages_for_deployment/edge1-network-function-auto-namespace-a/package-context.yaml* file and set the namespace to use: - -``` -8c8 -< name: default-namespace-name ---- -> name: edge1-network-function-auto-namespace-a -``` - -We now push the package to the *edge1* repository: - -``` -porchctl -n porch-demo rpkg push edge1-48997da49ca0a733b0834c1a27943f1a0e075180 tmp_packages_for_deployment/edge1-network-function-auto-namespace-a -[RUNNING] "gcr.io/kpt-fn/set-namespace:v0.4.1" -[PASS] "gcr.io/kpt-fn/set-namespace:v0.4.1" -  Results: -    [info]: namespace "default-namespace-name" updated to "edge1-network-function-auto-namespace-a", 1 value(s) changed - -porchctl -n porch-demo rpkg get --name edge1-network-function-auto-namespace-a -``` - -You can verify that the package is in the *network-function-auto-namespace-a/v1* branch of the deployment repository using the -Gitea web UI. You can see that the Kpt pipeline ran and set the edge1-network-function-auto-namespace-a namespace in -the *deployment.yaml* file on the *drafts/edge1-network-function-auto-namespace-a/v1* branch on the *edge1* repository in -Gitea. - -Check that the *edge1-network-function-auto-namespace-a* package is not deployed on the edge1 cluster yet: -``` -export KUBECONFIG=~/.kube/kind-edge1-config - -kubectl get pod -n edge1-network-function-auto-namespace-a -No resources found in edge1-network-function-auto-namespace-a namespace. - -``` - -We now propose and approve the deployment package, which merges the package into the *edge1* repository and triggers -configsync to apply the package to the edge1 cluster. 
-
-```
-export KUBECONFIG=~/.kube/kind-management-config
-
-porchctl -n porch-demo rpkg propose edge1-48997da49ca0a733b0834c1a27943f1a0e075180
-edge1-48997da49ca0a733b0834c1a27943f1a0e075180 proposed
-
-porchctl -n porch-demo rpkg approve edge1-48997da49ca0a733b0834c1a27943f1a0e075180
-edge1-48997da49ca0a733b0834c1a27943f1a0e075180 approved
-
-porchctl -n porch-demo rpkg get --name edge1-network-function-auto-namespace-a
-NAME                                             PACKAGE                                   WORKSPACENAME   REVISION   LATEST   LIFECYCLE   REPOSITORY
-edge1-48997da49ca0a733b0834c1a27943f1a0e075180   edge1-network-function-auto-namespace-a   v1              v1         true     Published   edge1
-```
-
-We can now check that the *edge1-network-function-auto-namespace-a* package is deployed on the edge1 cluster and that
-the pod is running:
-
-```
-export KUBECONFIG=~/.kube/kind-edge1-config
-
-kubectl get pod -n edge1-network-function-auto-namespace-a
-No resources found in edge1-network-function-auto-namespace-a namespace.
-
-kubectl get pod -n edge1-network-function-auto-namespace-a
-NAME                                               READY   STATUS              RESTARTS   AGE
-network-function-auto-namespace-85bc658d67-rbzt6   0/1     ContainerCreating   0          3s
-
-kubectl get pod -n edge1-network-function-auto-namespace-a
-NAME                                               READY   STATUS    RESTARTS   AGE
-network-function-auto-namespace-85bc658d67-rbzt6   1/1     Running   0          10s
-```
-
-## Deploying using Package Variant Sets
-
-### Simple PackageVariantSet
-
-The *PackageVariantSet* CR in *simple-variant.yaml* is defined as:
-
-```yaml
-apiVersion: config.porch.kpt.dev/v1alpha2
-kind: PackageVariantSet
-
-metadata:
-  name: network-function
-  namespace: porch-demo
-
-spec:
-  upstream:
-    repo: management
-    package: network-function
-    revision: v1
-  targets:
-  - repositories:
-    - name: edge1
-      packageNames:
-      - network-function-b
-      - network-function-c
-```
-
-In this very simple *PackageVariantSet*, the *network-function* package in the *management* repository is cloned into
-the *edge1* repository as the *network-function-b* and *network-function-c* package variants.
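The fan-out is easy to picture: the package variant controller creates one *PackageVariant* per (repository, package
name) pair in the targets list, naming each one after the *PackageVariantSet*, the downstream repository, and the
downstream package. A minimal shell sketch of that expansion (illustrative only; the real controller also attaches
labels and owner references, and disambiguates colliding names with a hash suffix):

```shell
# Expand a PackageVariantSet target into one PackageVariant name per
# (repository, packageName) pair, as the controller does conceptually.
pvs_name="network-function"
repositories="edge1"
package_names="network-function-b network-function-c"

variants=""
for repo in $repositories; do
  for pkg in $package_names; do
    variant="${pvs_name}-${repo}-${pkg}"
    variants="${variants}${variant} "
    echo "$variant"
  done
done
```

The two names printed match the *PackageVariant* CRs we see when we examine the cluster below.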
-
-{{% alert title="Note" color="primary" %}}
-
-This simple package variant set does not specify any configuration changes. Normally, as well as cloning and renaming,
-configuration changes would be applied on a package variant.
-
-Use `kubectl explain PackageVariantSet` to get help on the structure of the PackageVariantSet CRD.
-
-{{% /alert %}}
-
-Applying the PackageVariantSet creates the new packages as draft packages:
-
-```bash
-kubectl apply -f simple-variant.yaml
-
-kubectl get PackageRevisions -n porch-demo | grep -v 'external-blueprints'
-NAME                                                  PACKAGE              WORKSPACENAME      REVISION   LATEST   LIFECYCLE   REPOSITORY
-edge1-bc8294d121360ad305c9a826a8734adcf5f1b9c0        network-function-a   v1                 main       false    Published   edge1
-edge1-9b4b4d99c43b5c5c8489a47bbce9a61f79904946        network-function-a   v1                 v1         true     Published   edge1
-edge1-a31b56c7db509652f00724dd49746660757cd98a        network-function-b   packagevariant-1              false    Draft       edge1
-edge1-ee14f7ce850ddb0a380cf201d86f48419dc291f4        network-function-c   packagevariant-1              false    Draft       edge1
-management-49580fc22bcf3bf51d334a00b6baa41df597219e   network-function     v1                 main       false    Published   management
-management-8b80738a6e0707e3718ae1db3668d0b8ca3f1c82   network-function     v1                 v1         true     Published   management
-
-porchctl -n porch-demo rpkg get --name network-function-b
-NAME                                             PACKAGE              WORKSPACENAME      REVISION   LATEST   LIFECYCLE   REPOSITORY
-edge1-a31b56c7db509652f00724dd49746660757cd98a   network-function-b   packagevariant-1              false    Draft       edge1
-
-porchctl -n porch-demo rpkg get --name network-function-c
-NAME                                             PACKAGE              WORKSPACENAME      REVISION   LATEST   LIFECYCLE   REPOSITORY
-edge1-ee14f7ce850ddb0a380cf201d86f48419dc291f4   network-function-c   packagevariant-1              false    Draft       edge1
-```
-
-We can see that our two new packages are created as draft packages on the *edge1* repository. We can also examine the
-*PackageVariant* CRs that have been created:
-
-```bash
-kubectl get PackageVariant -n porch-demo
-NAME                                        AGE
-network-function-edge1-network-function-b   76s
-network-function-edge1-network-function-c   76s
-```
-
-It is also interesting to examine the YAML of the *PackageVariant* CRs:
-
-```yaml
-kubectl get PackageVariant -n porch-demo -o yaml
-apiVersion: v1
-items:
-- apiVersion: config.porch.kpt.dev/v1alpha1
-  kind: PackageVariant
-  metadata:
-    creationTimestamp: "2024-01-09T15:00:00Z"
-    finalizers:
-    - config.porch.kpt.dev/packagevariants
-    generation: 1
-    labels:
-      config.porch.kpt.dev/packagevariantset: a923d4fc-a3a7-437c-84d1-52b30dd6cf49
-    name: network-function-edge1-network-function-b
-    namespace: porch-demo
-    ownerReferences:
-    - apiVersion: config.porch.kpt.dev/v1alpha2
-      controller: true
-      kind: PackageVariantSet
-      name: network-function
-      uid: a923d4fc-a3a7-437c-84d1-52b30dd6cf49
-    resourceVersion: "237053"
-    uid: 7a81099c-5a0b-49d8-b73c-48e33cd134e5
-  spec:
-    downstream:
-      package: network-function-b
-      repo: edge1
-    upstream:
-      package: network-function
-      repo: management
-      revision: v1
-  status:
-    conditions:
-    - lastTransitionTime: "2024-01-09T15:00:00Z"
-      message: all validation checks passed
-      reason: Valid
-      status: "False"
-      type: Stalled
-    - lastTransitionTime: "2024-01-09T15:00:31Z"
-      message: successfully ensured downstream package variant
-      reason: NoErrors
-      status: "True"
-      type: Ready
-    downstreamTargets:
-    - name: edge1-a31b56c7db509652f00724dd49746660757cd98a
-- apiVersion: config.porch.kpt.dev/v1alpha1
-  kind: PackageVariant
-  metadata:
-    creationTimestamp: "2024-01-09T15:00:00Z"
-    finalizers:
-    - config.porch.kpt.dev/packagevariants
-    generation: 1
-    labels:
-      config.porch.kpt.dev/packagevariantset: a923d4fc-a3a7-437c-84d1-52b30dd6cf49
-    name: network-function-edge1-network-function-c
-    namespace: porch-demo
-    ownerReferences:
-    - apiVersion: config.porch.kpt.dev/v1alpha2
-      controller: true
-      kind: PackageVariantSet
-      name: network-function
-      uid: a923d4fc-a3a7-437c-84d1-52b30dd6cf49
-    resourceVersion: "237056"
-    uid: da037d0a-9a7a-4e85-842c-1324e9da819a
-  spec:
-    downstream:
-      package: network-function-c
-      repo: edge1
-    upstream:
-      package: network-function
-      repo: management
-      revision: v1
-  status:
-    conditions:
-    - lastTransitionTime: "2024-01-09T15:00:01Z"
-      message: all validation checks passed
-      reason: Valid
-      status: "False"
-      type: Stalled
-    - lastTransitionTime: "2024-01-09T15:00:31Z"
-      message: successfully ensured downstream package variant
-      reason: NoErrors
-      status: "True"
-      type: Ready
-    downstreamTargets:
-    - name: edge1-ee14f7ce850ddb0a380cf201d86f48419dc291f4
-kind: List
-metadata:
-  resourceVersion: ""
-```
-
-We now want to customize and deploy our two packages. To do this we must pull the packages locally, run the *kpt*
-functions, and then push the rendered packages back up to the *edge1* repository.
-
-```bash
-porchctl rpkg pull edge1-a31b56c7db509652f00724dd49746660757cd98a tmp_packages_for_deployment/edge1-network-function-b --namespace=porch-demo
-kpt fn eval --image=gcr.io/kpt-fn/set-namespace:v0.4.1 tmp_packages_for_deployment/edge1-network-function-b -- namespace=network-function-b
-porchctl rpkg push edge1-a31b56c7db509652f00724dd49746660757cd98a tmp_packages_for_deployment/edge1-network-function-b --namespace=porch-demo
-
-porchctl rpkg pull edge1-ee14f7ce850ddb0a380cf201d86f48419dc291f4 tmp_packages_for_deployment/edge1-network-function-c --namespace=porch-demo
-kpt fn eval --image=gcr.io/kpt-fn/set-namespace:v0.4.1 tmp_packages_for_deployment/edge1-network-function-c -- namespace=network-function-c
-porchctl rpkg push edge1-ee14f7ce850ddb0a380cf201d86f48419dc291f4 tmp_packages_for_deployment/edge1-network-function-c --namespace=porch-demo
-```
-
-Check that the namespace has been updated on the two packages in the *edge1* repository using the Gitea web UI.
-
-Now our two packages are ready for deployment:
-
-```bash
-porchctl rpkg propose edge1-a31b56c7db509652f00724dd49746660757cd98a --namespace=porch-demo
-edge1-a31b56c7db509652f00724dd49746660757cd98a proposed
-
-porchctl rpkg approve edge1-a31b56c7db509652f00724dd49746660757cd98a --namespace=porch-demo
-edge1-a31b56c7db509652f00724dd49746660757cd98a approved
-
-porchctl rpkg propose edge1-ee14f7ce850ddb0a380cf201d86f48419dc291f4 --namespace=porch-demo
-edge1-ee14f7ce850ddb0a380cf201d86f48419dc291f4 proposed
-
-porchctl rpkg approve edge1-ee14f7ce850ddb0a380cf201d86f48419dc291f4 --namespace=porch-demo
-edge1-ee14f7ce850ddb0a380cf201d86f48419dc291f4 approved
-```
-
-We can now check that the *network-function-b* and *network-function-c* packages are deployed on the edge1 cluster and
-that the pods are running:
-
-```bash
-export KUBECONFIG=~/.kube/kind-edge1-config
-
-kubectl get pod -A | egrep '(NAMESPACE|network-function)'
-NAMESPACE            NAME                               READY   STATUS    RESTARTS   AGE
-network-function-a   network-function-9779fc9f5-2tswc   1/1     Running   0          19h
-network-function-b   network-function-9779fc9f5-6zwhh   1/1     Running   0          76s
-network-function-c   network-function-9779fc9f5-h7nsb   1/1     Running   0          41s
-```
-
-### Using a PackageVariantSet to automatically set the package name and package namespace
-
-The *PackageVariantSet* CR in *name-namespace-variant.yaml* is defined as:
-
-```yaml
-apiVersion: config.porch.kpt.dev/v1alpha2
-kind: PackageVariantSet
-metadata:
-  name: network-function-auto-namespace
-  namespace: porch-demo
-spec:
-  upstream:
-    repo: management
-    package: network-function-auto-namespace
-    revision: v1
-  targets:
-  - repositories:
-    - name: edge1
-      packageNames:
-      - network-function-auto-namespace-x
-      - network-function-auto-namespace-y
-    template:
-      downstream:
-        packageExpr: "target.package + '-cumulonimbus'"
-```
-
-In this *PackageVariantSet*, the *network-function-auto-namespace* package in the *management* repository is cloned into
-the *edge1* repository as the *network-function-auto-namespace-x* and *network-function-auto-namespace-y* package
-variants, similar to the *PackageVariantSet* in *simple-variant.yaml*.
-
-An extra template section is provided for the repositories in the *PackageVariantSet*:
-
-```yaml
-template:
-  downstream:
-    packageExpr: "target.package + '-cumulonimbus'"
-```
-
-This template means that each package in the *spec.targets.repositories.packageNames* list will have the suffix
-*-cumulonimbus* added to its name. This allows us to automatically generate unique package names. Applying the
-*PackageVariantSet* also automatically sets a unique namespace for each network function, because it triggers the Kpt
-pipeline in the *network-function-auto-namespace* *Kpt* package, which generates a unique namespace for each deployed
-package.
-
-{{% alert title="Note" color="primary" %}}
-
-Many other mutations can be performed using a *PackageVariantSet*. Use `kubectl explain PackageVariantSet` to get help
-on the structure of the *PackageVariantSet* CRD and to see the various mutations that are possible.
-
-{{% /alert %}}
-
-Applying the *PackageVariantSet* creates the new packages as draft packages:
-
-```bash
-kubectl apply -f name-namespace-variant.yaml
-packagevariantset.config.porch.kpt.dev/network-function-auto-namespace created
-
-kubectl get -n porch-demo PackageVariantSet network-function-auto-namespace
-NAME                              AGE
-network-function-auto-namespace   38s
-
-kubectl get PackageRevisions -n porch-demo | grep auto-namespace
-edge1-1f521f05a684adfa8562bf330f7bc72b50e21cc5        edge1-network-function-auto-namespace-a          v1                 main   false   Published   edge1
-edge1-48997da49ca0a733b0834c1a27943f1a0e075180        edge1-network-function-auto-namespace-a          v1                 v1     true    Published   edge1
-edge1-009659a8532552b86263434f68618554e12f4f7c        network-function-auto-namespace-x-cumulonimbus   packagevariant-1          false   Draft       edge1
-edge1-77dbfed49b6cb0723b7c672b224de04c0cead67e        network-function-auto-namespace-y-cumulonimbus   packagevariant-1          false   Draft       edge1
-management-f9a6f2802111b9e81c296422c03aae279725f6df   network-function-auto-namespace                  v1                 main   false   Published   management
-management-c97bc433db93f2e8a3d413bed57216c2a72fc7e3   network-function-auto-namespace                  v1                 v1     true    Published   management
-```
-
-{{% alert title="Note" color="primary" %}}
-
-The suffix *-cumulonimbus* has been appended to the package names.
-
-{{% /alert %}}
-
-Examine the *edge1* repository on Gitea and you should see two new draft branches:
-
-- drafts/network-function-auto-namespace-x-cumulonimbus/packagevariant-1
-- drafts/network-function-auto-namespace-y-cumulonimbus/packagevariant-1
-
-In these packages, you will see that:
-
-1. The package name has been generated as network-function-auto-namespace-x-cumulonimbus and
-   network-function-auto-namespace-y-cumulonimbus in all files in the packages
-2. The namespace has been generated as network-function-auto-namespace-x-cumulonimbus and
-   network-function-auto-namespace-y-cumulonimbus respectively in the *deployment.yaml* files
-3. The PackageVariant has set the data.name field as network-function-auto-namespace-x-cumulonimbus and
-   network-function-auto-namespace-y-cumulonimbus respectively in the *package-context.yaml* files
-
-This has all been performed automatically; we have not had to perform the
-`porchctl rpkg pull`/`kpt fn eval`/`porchctl rpkg push` combination of commands to make these changes as we had to in
-the *simple-variant.yaml* case above.
-
-Now, let us explore the packages further:
-
-```bash
-porchctl -n porch-demo rpkg get --name network-function-auto-namespace-x-cumulonimbus
-NAME                                             PACKAGE                                          WORKSPACENAME      REVISION   LATEST   LIFECYCLE   REPOSITORY
-edge1-009659a8532552b86263434f68618554e12f4f7c   network-function-auto-namespace-x-cumulonimbus   packagevariant-1              false    Draft       edge1
-
-porchctl -n porch-demo rpkg get --name network-function-auto-namespace-y-cumulonimbus
-NAME                                             PACKAGE                                          WORKSPACENAME      REVISION   LATEST   LIFECYCLE   REPOSITORY
-edge1-77dbfed49b6cb0723b7c672b224de04c0cead67e   network-function-auto-namespace-y-cumulonimbus   packagevariant-1              false    Draft       edge1
-```
-
-We can see that our two new packages are created as draft packages on the edge1 repository. We can also examine the
-*PackageVariant* CRs that have been created:
-
-```bash
-kubectl get PackageVariant -n porch-demo
-NAME                                                               AGE
-network-function-auto-namespace-edge1-network-function-35079f9f    3m41s
-network-function-auto-namespace-edge1-network-function-d521d2c0    3m41s
-network-function-edge1-network-function-b                          38m
-network-function-edge1-network-function-c                          38m
-```
-
-It is also interesting to examine the YAML of a *PackageVariant*:
-
-```yaml
-kubectl get PackageVariant -n porch-demo network-function-auto-namespace-edge1-network-function-35079f9f -o yaml
-apiVersion: config.porch.kpt.dev/v1alpha1
-kind: PackageVariant
-metadata:
-  creationTimestamp: "2024-01-24T15:10:19Z"
-  finalizers:
-  - config.porch.kpt.dev/packagevariants
-  generation: 1
-  labels:
-    config.porch.kpt.dev/packagevariantset: 71edbdff-21c1-45f4-b9cb-6d2ecfc3da4e
-  name: network-function-auto-namespace-edge1-network-function-35079f9f
-  namespace: porch-demo
-  ownerReferences:
-  - apiVersion: config.porch.kpt.dev/v1alpha2
-    controller: true
-    kind: PackageVariantSet
-    name: network-function-auto-namespace
-    uid: 71edbdff-21c1-45f4-b9cb-6d2ecfc3da4e
-  resourceVersion: "404083"
-  uid: 5ae69c2d-6aac-4942-b717-918325650190
-spec:
-  downstream:
-    package: network-function-auto-namespace-x-cumulonimbus
-    repo: edge1
-  upstream:
-    package: network-function-auto-namespace
-    repo: management
-    revision: v1
-status:
-  conditions:
-  - lastTransitionTime: "2024-01-24T15:10:19Z"
-    message: all validation checks passed
-    reason: Valid
-    status: "False"
-    type: Stalled
-  - lastTransitionTime: "2024-01-24T15:10:49Z"
-    message: successfully ensured downstream package variant
-    reason: NoErrors
-    status: "True"
-    type: Ready
-  downstreamTargets:
-  - name: edge1-009659a8532552b86263434f68618554e12f4f7c
-```
-
-Our two packages are ready for deployment:
-
-```bash
-porchctl rpkg propose edge1-009659a8532552b86263434f68618554e12f4f7c --namespace=porch-demo
-edge1-009659a8532552b86263434f68618554e12f4f7c proposed
-
-porchctl rpkg approve edge1-009659a8532552b86263434f68618554e12f4f7c --namespace=porch-demo
-edge1-009659a8532552b86263434f68618554e12f4f7c approved
-
-porchctl rpkg propose edge1-77dbfed49b6cb0723b7c672b224de04c0cead67e --namespace=porch-demo
-edge1-77dbfed49b6cb0723b7c672b224de04c0cead67e proposed
-
-porchctl rpkg approve edge1-77dbfed49b6cb0723b7c672b224de04c0cead67e --namespace=porch-demo
-edge1-77dbfed49b6cb0723b7c672b224de04c0cead67e approved
-```
-
-We can now check that the packages are deployed on the edge1 cluster and that the pods are running:
-
-```bash
-export KUBECONFIG=~/.kube/kind-edge1-config
-
-kubectl get pod -A | egrep '(NAMESPACE|network-function)'
-NAMESPACE                                 NAME                                               READY   STATUS    RESTARTS      AGE
-edge1-network-function-a                  network-function-9779fc9f5-87scj                   1/1     Running   1 (2d1h ago)  4d22h
-edge1-network-function-auto-namespace-a   network-function-auto-namespace-85bc658d67-rbzt6   1/1     Running   1 (2d1h ago)  4d22h
-network-function-b                        network-function-9779fc9f5-twh2g                   1/1     Running   0             45m
-network-function-c                        network-function-9779fc9f5-whhr8                   1/1     Running   0             44m
-
-kubectl get pod -A | egrep '(NAMESPACE|network-function)'
-NAMESPACE                                        NAME                                               READY   STATUS              RESTARTS      AGE
-edge1-network-function-a                         network-function-9779fc9f5-87scj                   1/1     Running             1 (2d1h ago)  4d22h
-edge1-network-function-auto-namespace-a          network-function-auto-namespace-85bc658d67-rbzt6   1/1     Running             1 (2d1h ago)  4d22h
-network-function-auto-namespace-x-cumulonimbus   network-function-auto-namespace-85bc658d67-86gml   0/1     ContainerCreating   0             1s
-network-function-b                               network-function-9779fc9f5-twh2g                   1/1     Running             0             45m
-network-function-c                               network-function-9779fc9f5-whhr8                   1/1     Running             0             44m
-
-kubectl get pod -A | egrep '(NAMESPACE|network-function)'
-NAMESPACE                                        NAME                                               READY   STATUS    RESTARTS      AGE
-edge1-network-function-a                         network-function-9779fc9f5-87scj                   1/1     Running   1 (2d1h ago)  4d22h
-edge1-network-function-auto-namespace-a          network-function-auto-namespace-85bc658d67-rbzt6   1/1     Running   1 (2d1h ago)  4d22h
-network-function-auto-namespace-x-cumulonimbus   network-function-auto-namespace-85bc658d67-86gml   1/1     Running   0             10s
-network-function-b                               network-function-9779fc9f5-twh2g                   1/1     Running   0             45m
-network-function-c                               network-function-9779fc9f5-whhr8                   1/1     Running   0             45m
-
-kubectl get pod -A | egrep '(NAMESPACE|network-function)'
-NAMESPACE                                        NAME                                               READY   STATUS    RESTARTS      AGE
-edge1-network-function-a                         network-function-9779fc9f5-87scj                   1/1     Running   1 (2d1h ago)  4d22h
-edge1-network-function-auto-namespace-a          network-function-auto-namespace-85bc658d67-rbzt6   1/1     Running   1 (2d1h ago)  4d22h
-network-function-auto-namespace-x-cumulonimbus   network-function-auto-namespace-85bc658d67-86gml   1/1     Running   0             50s
-network-function-b                               network-function-9779fc9f5-twh2g                   1/1     Running   0             46m
-network-function-c                               network-function-9779fc9f5-whhr8                   1/1     Running   0             45m
-
-kubectl get pod -A | egrep '(NAMESPACE|network-function)'
-NAMESPACE                                        NAME                                               READY   STATUS              RESTARTS      AGE
-edge1-network-function-a                         network-function-9779fc9f5-87scj                   1/1     Running             1 (2d1h ago)  4d22h
-edge1-network-function-auto-namespace-a          network-function-auto-namespace-85bc658d67-rbzt6   1/1     Running             1 (2d1h ago)  4d22h
-network-function-auto-namespace-x-cumulonimbus   network-function-auto-namespace-85bc658d67-86gml   1/1     Running             0             51s
-network-function-auto-namespace-y-cumulonimbus   network-function-auto-namespace-85bc658d67-tp5m8   0/1     ContainerCreating   0             1s
-network-function-b                               network-function-9779fc9f5-twh2g                   1/1     Running             0             46m
-network-function-c                               network-function-9779fc9f5-whhr8                   1/1     Running             0             45m
-
-kubectl get pod -A | egrep '(NAMESPACE|network-function)'
-NAMESPACE                                        NAME                                               READY   STATUS    RESTARTS      AGE
-edge1-network-function-a                         network-function-9779fc9f5-87scj                   1/1     Running   1 (2d1h ago)  4d22h
-edge1-network-function-auto-namespace-a          network-function-auto-namespace-85bc658d67-rbzt6   1/1     Running   1 (2d1h ago)  4d22h
-network-function-auto-namespace-x-cumulonimbus   network-function-auto-namespace-85bc658d67-86gml   1/1     Running   0             54s
-network-function-auto-namespace-y-cumulonimbus   network-function-auto-namespace-85bc658d67-tp5m8   1/1     Running   0             4s
-network-function-b                               network-function-9779fc9f5-twh2g                   1/1     Running   0             46m
-network-function-c                               network-function-9779fc9f5-whhr8                   1/1     Running   0             45m
-```
diff --git a/content/en/docs/porch/user-guides/porchctl-cli-guide.md b/content/en/docs/porch/user-guides/porchctl-cli-guide.md
deleted file mode 100644
index d5424a7d..00000000
--- a/content/en/docs/porch/user-guides/porchctl-cli-guide.md
+++ /dev/null
@@ -1,481 +0,0 @@
----
-title: "Using the Porch CLI tool"
-type: docs
-weight: 2
-description:
----
-
-## Setting up the porchctl CLI
-
-When Porch was ported to Nephio, the `kpt alpha rpkg` commands in kpt were moved into a new command called `porchctl`.
-
-To use it locally, [download](https://github.com/nephio-project/porch/releases/tag/dev), unpack and add it to your PATH.
-
-{{% alert title="Note" color="primary" %}}
-
-Installation of Porch, including its prerequisites, is covered in a [dedicated document](install-and-using-porch.md).
-
-{{% /alert %}}
-
-*Optional*: Generate the autocompletion script for the specified shell to add to your sh profile.
-
-```
-porchctl completion bash
-```
-
-The `porchctl` command is an administration command for acting on Porch *Repository* (repo) and *PackageRevision* (rpkg)
-CRs.
-
-The commands for administering repositories are:
-
-| Command               | Description                    |
-| --------------------- | ------------------------------ |
-| `porchctl repo get`   | List registered repositories.  |
-| `porchctl repo reg`   | Register a package repository. |
-| `porchctl repo unreg` | Unregister a repository.       |
-
-The commands for administering package revisions are:
-
-| Command                        | Description                                                                             |
-| ------------------------------ | --------------------------------------------------------------------------------------- |
-| `porchctl rpkg approve`        | Approve a proposal to publish a package revision.                                       |
-| `porchctl rpkg clone`          | Create a clone of an existing package revision.                                         |
-| `porchctl rpkg copy`           | Create a new package revision from an existing one.                                     |
-| `porchctl rpkg del`            | Delete a package revision.                                                              |
-| `porchctl rpkg get`            | List package revisions in registered repositories.                                      |
-| `porchctl rpkg init`           | Initialize a new package in a repository.                                               |
-| `porchctl rpkg propose`        | Propose that a package revision should be published.                                    |
-| `porchctl rpkg propose-delete` | Propose deletion of a published package revision.                                       |
-| `porchctl rpkg pull`           | Pull the content of a package revision.                                                 |
-| `porchctl rpkg push`           | Push resources to a package revision.                                                   |
-| `porchctl rpkg reject`         | Reject a proposal to publish or delete a package revision.                              |
-| `porchctl rpkg update`         | Update a downstream package revision to a more recent revision of its upstream package. |
-
-## Using the porchctl CLI
-
-### Guide prerequisites
-
-* [kubectl](https://kubernetes.io/docs/tasks/tools/#kubectl)
-
-Make sure that your `kubectl` context is set up to interact with the correct Kubernetes instance (see the
-[installation instructions](install-and-using-porch.md) or the [running-locally](../running-porch/running-locally.md)
-guide for details).
-
-To check whether `kubectl` is configured with your Porch cluster (or local instance), run:
-
-```bash
-kubectl api-resources | grep porch
-```
-
-You should see the following API resources listed:
-
-```bash
-repositories                Config.porch.kpt.dev/v1alpha1   true   Repository
-packagerevisionresources    porch.kpt.dev/v1alpha1          true   PackageRevisionResources
-packagerevisions            porch.kpt.dev/v1alpha1          true   PackageRevision
-```
-
-## Porch Resources
-
-The Porch server manages the following resources:
-
-1. `repositories`: a repository (Git or OCI) can be registered with Porch to support discovery or management of KRM
-   configuration packages in those repositories.
-2. `packagerevisions`: a specific revision of a KRM configuration package managed by Porch in one of the registered
-   repositories. This resource represents a _metadata view_ of the KRM configuration package.
-3. `packagerevisionresources`: this resource represents the contents of the configuration package (the KRM resources
-   contained in the package).
-
-{{% alert title="Note" color="primary" %}}
-
-`packagerevisions` and `packagerevisionresources` represent different _views_ of the same underlying KRM
-configuration package. `packagerevisions` represents the package metadata, and `packagerevisionresources` represents the
-package content. The matching resources share the same `name` (as well as API group and version:
-`porch.kpt.dev/v1alpha1`) and differ in resource kind (`PackageRevision` and `PackageRevisionResources` respectively).
-
-{{% /alert %}}
-
-## Repository Registration
-
-To use Porch with a Git repository, you will need:
-
-* A Git repository for your blueprints.
-* A [Personal Access Token](https://github.com/settings/tokens) (when using a GitHub repository) for Porch to
-  authenticate with the repository. Porch requires the 'repo' scope.
-* Or Basic Auth credentials for Porch to authenticate with the repository.
-
-To use Porch with an OCI repository ([Artifact Registry](https://console.cloud.google.com/artifacts) or
-[Google Container Registry](https://cloud.google.com/container-registry)), first make sure to:
-
-* Enable [workload identity](https://cloud.google.com/kubernetes-engine/docs/concepts/workload-identity) for Porch
-* Assign appropriate roles to the Porch workload identity service account
-  (`iam.gke.io/gcp-service-account=porch-server@$(GCP_PROJECT_ID).iam.gserviceaccount.com`)
-  to have the appropriate level of access to your OCI repository.
-
-Use the `porchctl repo register` command to register your repository with Porch:
-
-```bash
-
-GITHUB_USERNAME=
-GITHUB_TOKEN=
-
-$ porchctl repo register \
-   --namespace default \
-   --repo-basic-username=${GITHUB_USERNAME} \
-   --repo-basic-password=${GITHUB_TOKEN} \
-   https://github.com/${GITHUB_USERNAME}/blueprints.git
-```
-
-All command line flags supported:
-
-* `--directory` - Directory within the repository where to look for packages.
-* `--branch` - Branch in the repository where finalized packages are committed (defaults to `main`).
-* `--name` - Name of the package repository Kubernetes resource. If unspecified, it will default to the name portion
-  (last segment) of the repository URL (`blueprints` in the example above).
-* `--description` - Brief description of the package repository.
-* `--deployment` - Boolean value; if specified, the repository is a deployment repository; published packages in a
-  deployment repository are considered deployment-ready.
-* `--repo-basic-username` - Username for repository authentication using basic auth.
-* `--repo-basic-password` - Password for repository authentication using basic auth.
-
-Additionally, the common `kubectl` command line flags for controlling aspects of interaction with the Kubernetes
-apiserver, logging, and more are supported (this is true for all `porchctl` CLI commands which interact with Porch).
-
-Use the `porchctl repo get` command to query registered repositories:
-
-```bash
-$ porchctl repo get
-
-NAME          TYPE   CONTENT   DEPLOYMENT   READY   ADDRESS
-blueprints    git    Package                True    https://github.com/platkrm/blueprints.git
-deployments   git    Package   true         True    https://github.com/platkrm/deployments.git
-```
-
-The `porchctl get` commands support the common `kubectl`
-[flags](https://kubernetes.io/docs/reference/kubectl/cheatsheet/#formatting-output) to format output, for example
-`porchctl repo get --output=yaml`.
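For instance, the YAML view of a registered Git repository is the underlying *Repository* CR. A rough sketch of what
registration produces (field names follow the `config.porch.kpt.dev/v1alpha1` API, but the exact shape may vary by
Porch version; `blueprints-auth` is a hypothetical secret holding the basic-auth credentials):

```yaml
apiVersion: config.porch.kpt.dev/v1alpha1
kind: Repository
metadata:
  name: blueprints
  namespace: default
spec:
  type: git
  content: Package
  git:
    repo: https://github.com/example/blueprints.git
    branch: main
    secretRef:
      name: blueprints-auth
```

Registering with `--deployment` would additionally set `spec.deployment: true`, which is what the `DEPLOYMENT` column
in the listing above reflects.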
-
-The command `porchctl repo unregister` can be used to unregister a repository:
-
-```bash
-$ porchctl repo unregister deployments --namespace default
-```
-
-## Package Discovery And Introspection
-
-The `porchctl rpkg` command group contains commands for interacting with packages managed by the Package Orchestration
-service. The `r` prefix used in the command group name stands for 'remote'.
-
-The `porchctl rpkg get` command lists the packages in registered repositories:
-
-```bash
-$ porchctl rpkg get
-
-NAME                                                  PACKAGE   WORKSPACENAME   REVISION   LATEST   LIFECYCLE   REPOSITORY
-blueprints-0349d71330b89ee48ac85167598ef23021fd0484   basens    main            main       false    Published   blueprints
-blueprints-2e47615fda05664491f72c58b8ab658683afa036   basens    v1              v1         true     Published   blueprints
-blueprints-7e2fe44bfdbb744d49bdaaaeac596200102c5f7c   istions   main            main       false    Published   blueprints
-blueprints-ac6e872be4a4a3476922deca58cca3183b16a5f7   istions   v1              v1         false    Published   blueprints
-blueprints-421a5b5e43b03bc697d96f471929efc6ba3f54b3   istions   v2              v2         true     Published   blueprints
-...
-```
-
-The `LATEST` column indicates whether the package revision is the latest among the revisions of the same package. In the
-output above, `v2` is the latest revision of the `istions` package and `v1` is the latest revision of the `basens`
-package.
-
-The `LIFECYCLE` column indicates the lifecycle stage of the package revision: `Draft`, `Proposed` or `Published`.
-
-The `REVISION` column indicates the revision of the package. Revisions are assigned when a package is `Published` and
-start at `v1`.
-
-The `WORKSPACENAME` column indicates the workspace name of the package. The workspace name is assigned when a draft
-revision is created and is used as the branch name for proposed and draft package revisions. The workspace name
-must be unique among package revisions in the same package.
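The long `NAME` values in the listing above are hash-based: as the note below explains, hierarchical package paths
cannot be used directly as Kubernetes resource names. Purely to illustrate the shape of such names (the real input to
Porch's hash is an implementation detail, so this is not how Porch actually computes them), a deterministic
*repository-hash* name can be derived like this:

```shell
# Derive a DNS-safe, deterministic resource name of the form
# <repository>-<40 hex chars> from the package coordinates.
# Illustrative only: Porch's actual hash input is an implementation detail.
repo="blueprints"
pkg="istions"
workspace="v2"

hash=$(printf '%s/%s@%s' "$repo" "$pkg" "$workspace" | sha1sum | cut -c1-40)
name="${repo}-${hash}"
echo "$name"
```

The point is that the name is stable for a given package coordinate, DNS-label safe, and collision-resistant, which is
exactly what the Kubernetes naming constraints require.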
-
-{{% alert title="Note" color="primary" %}}
-
-Packages exist in a hierarchical directory structure maintained by the underlying repository such as git, or in a
-filesystem bundle of OCI images. The hierarchical, filesystem-compatible names of packages do not satisfy the
-Kubernetes naming [constraints](https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names).
-Therefore, the names of the Kubernetes resources representing package revisions are computed as a hash.
-
-{{% /alert %}}
-
-Simple filtering of package revisions by name (substring) and revision (exact match) is supported by the CLI using
-`--name` and `--revision` flags:
-
-```bash
-$ porchctl rpkg get --name istio --revision=v2
-
-NAME                                                  PACKAGE   WORKSPACENAME   REVISION   LATEST   LIFECYCLE   REPOSITORY
-blueprints-421a5b5e43b03bc697d96f471929efc6ba3f54b3   istions   v2              v2         true     Published   blueprints
-```
-
-The common `kubectl` flags that control output format are available as well:
-
-```bash
-$ porchctl rpkg get blueprints-421a5b5e43b03bc697d96f471929efc6ba3f54b3 -ndefault -oyaml
-
-apiVersion: porch.kpt.dev/v1alpha1
-kind: PackageRevision
-metadata:
-  labels:
-    kpt.dev/latest-revision: "true"
-  name: blueprints-421a5b5e43b03bc697d96f471929efc6ba3f54b3
-  namespace: default
-spec:
-  lifecycle: Published
-  packageName: istions
-  repository: blueprints
-  revision: v2
-  workspaceName: v2
-...
-```
-
-The `porchctl rpkg pull` command can be used to read the package resources.
-
-The command can be used to print the package revision resources as `ResourceList` to `stdout`, which enables
-[chaining](https://kpt.dev/book/04-using-functions/02-imperative-function-execution?id=chaining-functions-using-the-unix-pipe)
-evaluation of functions on the package revision pulled from the Package Orchestration server.
-
-```bash
-$ porchctl rpkg pull blueprints-421a5b5e43b03bc697d96f471929efc6ba3f54b3 -ndefault
-
-apiVersion: config.kubernetes.io/v1
-kind: ResourceList
-items:
-- apiVersion: kpt.dev/v1
-  kind: Kptfile
-  metadata:
-    name: istions
-...
-```
-
-Or, the package contents can be saved on local disk for direct introspection or editing:
-
-```bash
-$ porchctl rpkg pull blueprints-421a5b5e43b03bc697d96f471929efc6ba3f54b3 ./istions -ndefault
-
-$ find istions
-
-istions
-istions/istions.yaml
-istions/README.md
-istions/Kptfile
-istions/package-context.yaml
-...
-```
-
-## Authoring Packages
-
-Several commands in the `porchctl rpkg` group support package authoring:
-
-* `init` - Initializes a new package revision in the target repository.
-* `clone` - Creates a clone of a source package in the target repository.
-* `copy` - Creates a new package revision from an existing one.
-* `push` - Pushes package resources into a remote package.
-* `del` - Deletes one or more packages in registered repositories.
-
-The `porchctl rpkg init` command can be used to initialize a new package revision. The Porch server creates and
-initializes a new package (as a draft) and saves it in the specified repository:
-
-```bash
-$ porchctl rpkg init new-package --repository=deployments --workspace=v1 -ndefault
-
-deployments-c32b851b591b860efda29ba0e006725c8c1f7764 created
-
-$ porchctl rpkg get
-
-NAME                                                   PACKAGE       WORKSPACENAME   REVISION   LATEST   LIFECYCLE   REPOSITORY
-deployments-c32b851b591b860efda29ba0e006725c8c1f7764   new-package   v1                         false    Draft       deployments
-...
-```
-
-The new package is created in the `Draft` lifecycle stage. This is also true for all other commands that create a new
-package revision (`init`, `clone` and `copy`).
-
-Additional flags supported by the `porchctl rpkg init` command are:
-
-* `--repository` - Repository in which the package will be created.
-* `--workspace` - Workspace of the new package.
-* `--description` - Short description of the package.
-* `--keywords` - List of keywords for the package.
-* `--site` - Link to a page with information about the package.
-
-
-Use the `porchctl rpkg clone` command to create a _downstream_ package by cloning an _upstream_ package:
-
-```bash
-$ porchctl rpkg clone blueprints-421a5b5e43b03bc697d96f471929efc6ba3f54b3 istions-clone \
-  --repository=deployments -ndefault
-deployments-11ca1db650fa4bfa33deeb7f488fbdc50cdb3b82 created
-
-# Confirm the package revision was created
-porchctl rpkg get deployments-11ca1db650fa4bfa33deeb7f488fbdc50cdb3b82 -ndefault
-NAME PACKAGE WORKSPACENAME REVISION LATEST LIFECYCLE REPOSITORY
-deployments-11ca1db650fa4bfa33deeb7f488fbdc50cdb3b82 istions-clone v1 false Draft deployments
-```
-
-`porchctl rpkg clone` can also be used to clone packages that are in repositories not registered with Porch, for
-example:
-
-```bash
-$ porchctl rpkg clone \
-  https://github.com/GoogleCloudPlatform/blueprints.git cloned-bucket \
-  --directory=catalog/bucket \
-  --ref=main \
-  --repository=deployments \
-  --namespace=default
-deployments-e06c2f6ec1afdd8c7d977fcf204e4d543778ddac created
-
-# Confirm the package revision was created
-porchctl rpkg get deployments-e06c2f6ec1afdd8c7d977fcf204e4d543778ddac -ndefault
-NAME PACKAGE WORKSPACENAME REVISION LATEST LIFECYCLE REPOSITORY
-deployments-e06c2f6ec1afdd8c7d977fcf204e4d543778ddac cloned-bucket v1 false Draft deployments
-```
-
-The flags supported by the `porchctl rpkg clone` command are:
-
-* `--directory` - Directory within the upstream repository where the upstream
-  package is located.
-* `--ref` - Ref in the upstream repository where the upstream package is
-  located. This can be a branch, tag, or SHA.
-* `--repository` - Repository to which the package will be cloned (downstream
-  repository).
-* `--workspace` - Workspace to assign to the downstream package.
-* `--strategy` - Update strategy that should be used when updating this package;
-  one of: `resource-merge`, `fast-forward`, `force-delete-replace`.
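For context, kpt records the upstream link and the chosen update strategy in the downstream package's *Kptfile*. A Kptfile produced by the unregistered-repository clone above might look roughly like the following (the exact fields Porch writes can vary by version; this is an illustrative sketch, not captured output):

```yaml
apiVersion: kpt.dev/v1
kind: Kptfile
metadata:
  name: cloned-bucket
upstream:
  type: git
  git:
    repo: https://github.com/GoogleCloudPlatform/blueprints.git
    directory: /catalog/bucket
    ref: main
  updateStrategy: resource-merge
```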
-
-
-The `porchctl rpkg copy` command can be used to create a new revision of an existing package. It is a means of
-modifying an already published package revision.
-
-```bash
-$ porchctl rpkg copy \
-  blueprints-421a5b5e43b03bc697d96f471929efc6ba3f54b3 \
-  --workspace=v3 -ndefault
-
-# Confirm the package revision was created
-$ porchctl rpkg get blueprints-bf11228f80de09f1a5dd9374dc92ebde3b503689 -ndefault
-NAME PACKAGE WORKSPACENAME REVISION LATEST LIFECYCLE REPOSITORY
-blueprints-bf11228f80de09f1a5dd9374dc92ebde3b503689 istions v3 false Draft blueprints
-```
-
-The `porchctl rpkg push` command can be used to update the resources (package contents) of a package _draft_:
-
-```bash
-$ porchctl rpkg pull \
-  deployments-c32b851b591b860efda29ba0e006725c8c1f7764 ./new-package -ndefault
-
-# Make edits using your favorite YAML editor, for example adding a new resource
-$ cat <<EOF > ./new-package/config-map.yaml
-apiVersion: v1
-kind: ConfigMap
-metadata:
-  name: example-config-map
-data:
-  color: orange
-EOF
-
-# Push the updated contents to the Package Orchestration server, updating the
-# package contents.
-$ porchctl rpkg push \
-  deployments-c32b851b591b860efda29ba0e006725c8c1f7764 ./new-package -ndefault
-
-# Confirm that the remote package now includes the new ConfigMap resource
-$ porchctl rpkg pull deployments-c32b851b591b860efda29ba0e006725c8c1f7764 -ndefault
-
-apiVersion: config.kubernetes.io/v1
-kind: ResourceList
-items:
-...
-- apiVersion: v1
-  kind: ConfigMap
-  metadata:
-    name: example-config-map
-  data:
-    color: orange
-...
-```
-
-A package revision can be deleted using the `porchctl rpkg del` command:
-
-```bash
-# Delete a package revision
-$ porchctl rpkg del blueprints-bf11228f80de09f1a5dd9374dc92ebde3b503689 -ndefault
-
-blueprints-bf11228f80de09f1a5dd9374dc92ebde3b503689 deleted
-```
-
-## Package Lifecycle and Approval Flow
-
-Authoring is performed on the package revisions in the _Draft_ lifecycle stage. 
Before a package can be deployed or
-cloned, it must be _Published_. The approval flow is the process by which the package is advanced from the _Draft_
-state through the _Proposed_ state and finally to the _Published_ lifecycle stage.
-
-The commands used to manage package lifecycle stages include:
-
-* `propose` - Proposes to finalize a package revision draft.
-* `approve` - Approves a proposal to finalize a package revision.
-* `reject` - Rejects a proposal to finalize a package revision.
-
-In the [Authoring Packages](#authoring-packages) section above we created several _draft_ packages; in this section
-we will create proposals for publishing some of them.
-
-```bash
-# List package revisions to identify relevant drafts:
-$ porchctl rpkg get
-NAME PACKAGE WORKSPACENAME REVISION LATEST LIFECYCLE REPOSITORY
-...
-deployments-e06c2f6ec1afdd8c7d977fcf204e4d543778ddac cloned-bucket v1 false Draft deployments
-deployments-11ca1db650fa4bfa33deeb7f488fbdc50cdb3b82 istions-clone v1 false Draft deployments
-deployments-c32b851b591b860efda29ba0e006725c8c1f7764 new-package v1 false Draft deployments
-
-# Propose two package revisions to be published
-$ porchctl rpkg propose \
-  deployments-11ca1db650fa4bfa33deeb7f488fbdc50cdb3b82 \
-  deployments-c32b851b591b860efda29ba0e006725c8c1f7764 \
-  -ndefault
-
-deployments-11ca1db650fa4bfa33deeb7f488fbdc50cdb3b82 proposed
-deployments-c32b851b591b860efda29ba0e006725c8c1f7764 proposed
-
-# Confirm the package revisions are now Proposed
-$ porchctl rpkg get
-NAME PACKAGE WORKSPACENAME REVISION LATEST LIFECYCLE REPOSITORY
-...
-deployments-e06c2f6ec1afdd8c7d977fcf204e4d543778ddac cloned-bucket v1 false Draft deployments
-deployments-11ca1db650fa4bfa33deeb7f488fbdc50cdb3b82 istions-clone v1 false Proposed deployments
-deployments-c32b851b591b860efda29ba0e006725c8c1f7764 new-package v1 false Proposed deployments
-```
-
-At this point, a person in the _platform administrator_ role, or even an automated process, will review and either
-approve or reject the proposals. To aid with the decision, the platform administrator may inspect the package contents
-using the commands above, such as `porchctl rpkg pull`.
-
-```bash
-# Approve a proposal to publish a package revision
-$ porchctl rpkg approve deployments-11ca1db650fa4bfa33deeb7f488fbdc50cdb3b82 -ndefault
-deployments-11ca1db650fa4bfa33deeb7f488fbdc50cdb3b82 approved
-
-# Reject a proposal to publish a package revision
-$ porchctl rpkg reject deployments-c32b851b591b860efda29ba0e006725c8c1f7764 -ndefault
-deployments-c32b851b591b860efda29ba0e006725c8c1f7764 rejected
-```
-
-Now the user can confirm the lifecycle stages of the package revisions:
-
-```bash
-# Confirm package revision lifecycle stages after approvals:
-$ porchctl rpkg get
-NAME PACKAGE WORKSPACENAME REVISION LATEST LIFECYCLE REPOSITORY
-...
-deployments-e06c2f6ec1afdd8c7d977fcf204e4d543778ddac cloned-bucket v1 false Draft deployments
-deployments-11ca1db650fa4bfa33deeb7f488fbdc50cdb3b82 istions-clone v1 v1 true Published deployments
-deployments-c32b851b591b860efda29ba0e006725c8c1f7764 new-package v1 false Draft deployments
-```
-
-Observe that the rejected proposal returned the package revision to the _Draft_ lifecycle stage. The package whose
-proposal was approved is now in the _Published_ state.
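The lifecycle transitions exercised above form a small state machine: _Draft_ advances to _Proposed_ via `propose`, and a _Proposed_ revision either advances to _Published_ via `approve` or returns to _Draft_ via `reject`. The following shell sketch is purely illustrative (it is not part of `porchctl`) and encodes those transitions:

```shell
# Illustrative model of the approval-flow state machine (not porchctl itself):
#   Draft --propose--> Proposed --approve--> Published
#                      Proposed --reject---> Draft
next_lifecycle() {
  case "$1:$2" in
    Draft:propose)    echo "Proposed" ;;
    Proposed:approve) echo "Published" ;;
    Proposed:reject)  echo "Draft" ;;
    *) echo "invalid transition: $1 + $2" >&2; return 1 ;;
  esac
}

next_lifecycle Draft propose     # prints Proposed
next_lifecycle Proposed approve  # prints Published
next_lifecycle Proposed reject   # prints Draft
```

Any other combination, such as proposing an already published revision, is rejected as an invalid transition.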
\ No newline at end of file
diff --git a/content/en/docs/porch/user-guides/using-authenticated-private-registries.md b/content/en/docs/porch/user-guides/using-authenticated-private-registries.md
deleted file mode 100644
index 08a0a45c..00000000
--- a/content/en/docs/porch/user-guides/using-authenticated-private-registries.md
+++ /dev/null
@@ -1,160 +0,0 @@
----
-title: "Using authenticated private registries with the Porch function runner"
-type: docs
-weight: 4
-description: ""
----
-
-The Porch function runner pulls kpt function images from registries and uses them for rendering kpt packages in Porch. By default, the function runner fetches kpt function images from public container registries such as [GCR](https://gcr.io/kpt-fn/); the configuration options described here are not required for such public registries.
-
-## 1. Configuring function runner to operate with private container registries
-
-This section describes how to set up authentication for private container registries that contain kpt functions, whether hosted online (e.g. GitHub's GHCR) or locally (e.g. Harbor or JFrog), when they require authentication (username/password or token).
-
-To enable the Porch function runner to pull kpt function images from authenticated private registries, the following is required:
-
-1. Creating a Kubernetes secret from a JSON file that follows the Docker configuration schema, containing valid credentials for each authenticated registry.
-2. Mounting this new secret as a volume on the function runner.
-3. Configuring private registry functionality in the function runner's arguments:
-    1. Enabling the functionality using the argument *--enable-private-registries*.
-    2. Providing the path and name of the mounted secret using the arguments *--registry-auth-secret-path* and *--registry-auth-secret-name* respectively.
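The credentials referenced in step 1 are `username:password` pairs, base64-encoded under the *auth* key of the Docker config. The encoding can be reproduced locally; the credentials below are obviously placeholders:

```shell
# Encode placeholder credentials the way a Docker config.json expects them.
printf '%s' 'my_username:my_password' | base64
# prints bXlfdXNlcm5hbWU6bXlfcGFzc3dvcmQ=
```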
-
-### 1.1 Kubernetes secret setup for private registry using docker configuration
-
-An example Docker *config.json* file is shown below. The base64-encoded value *bXlfdXNlcm5hbWU6bXlfcGFzc3dvcmQ=* of the *auth* key decodes to *my_username:my_password*, which is the format used by the configuration when authenticating.
-
-```json
-{
-    "auths": {
-        "https://index.docker.io/v1/": {
-            "auth": "bXlfdXNlcm5hbWU6bXlfcGFzc3dvcmQ="
-        },
-        "ghcr.io": {
-            "auth": "bXlfdXNlcm5hbWU6bXlfcGFzc3dvcmQ="
-        }
-    }
-}
-```
-
-A quick way to generate this secret from your Docker *config.json* is to run the following command:
-
-```bash
-kubectl create secret generic <secret-name> --from-file=.dockerconfigjson=/path/to/your/config.json --type=kubernetes.io/dockerconfigjson --dry-run=client -o yaml -n porch-system
-```
-
-{{% alert title="Note" color="primary" %}}
-The secret must be in the same namespace as the function runner deployment. By default, this is the *porch-system* namespace.
-{{% /alert %}}
-
-This should generate a secret template, similar to the one below, which you can add to the *2-function-runner.yaml* file in the Porch catalog package found [here](https://github.com/nephio-project/catalog/tree/main/nephio/core/porch)
-
-```yaml
-apiVersion: v1
-data:
-  .dockerconfigjson: <base64-encoded-docker-config>
-kind: Secret
-metadata:
-  creationTimestamp: null
-  name: <secret-name>
-  namespace: porch-system
-type: kubernetes.io/dockerconfigjson
-```
-
-### 1.2 Mounting docker configuration secret to the function runner
-
-Next you must mount the secret as a volume on the function runner deployment. Add the following sections to the Deployment object in the *2-function-runner.yaml* file:
-
-```yaml
-    volumeMounts:
-      - mountPath: /var/tmp/auth-secret
-        name: docker-config
-        readOnly: true
-volumes:
-  - name: docker-config
-    secret:
-      secretName: <secret-name>
-```
-
-You may specify your desired paths for each `mountPath:` so long as the function runner can access them.
-
-{{% alert title="Note" color="primary" %}}
-The chosen `mountPath:` should use its own, dedicated sub-directory, so that it does not overwrite the access permissions of the existing directory. For example, if you wish to mount under `/var/tmp`, use a dedicated sub-directory such as `mountPath: /var/tmp/auth-secret`.
-{{% /alert %}}
-
-### 1.3 Configuring function runner arguments for private registries
-
-Lastly, you must enable the private registry functionality and provide the path and name of the secret. Add the `--enable-private-registries`, `--registry-auth-secret-path` and `--registry-auth-secret-name` arguments to the function-runner Deployment object in the *2-function-runner.yaml* file:
-
-```yaml
-command:
-  - --enable-private-registries=true
-  - --registry-auth-secret-path=/var/tmp/auth-secret/.dockerconfigjson
-  - --registry-auth-secret-name=<secret-name>
-```
-
-The `--enable-private-registries`, `--registry-auth-secret-path` and `--registry-auth-secret-name` arguments have default values of *false*, */var/tmp/auth-secret/.dockerconfigjson* and *auth-secret* respectively; however, these should be overridden to enable the functionality and match your configuration.
-
-With this last step, if your Porch package uses kpt function images stored in a private registry (for example `- image: ghcr.io/private-registry/set-namespace:customv2`), the function runner will use the secret information to replicate your secret in the `porch-fn-system` namespace and specify it as an `imagePullSecret` for the function pods, as documented [here](https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/).
-
-## 2. Configuring function runner to use custom TLS for private container registries
-
-If your private container registry uses a custom certificate for TLS, extra configuration is required for the function runner to integrate with it:
-
-1. Creating a Kubernetes secret using TLS information valid for all private registries you wish to use.
-2. 
Mounting the secret containing the registries' TLS information to the function runner, similarly to step 2 in section 1.
-3. Enabling TLS functionality and providing the path of the mounted secret to the function runner using the arguments *--enable-private-registries-tls* and *--tls-secret-path* respectively.
-
-### 2.1 Kubernetes secret layout for TLS certificate
-
-A typical secret containing TLS information takes a format similar to the following:
-
-```yaml
-apiVersion: v1
-kind: Secret
-metadata:
-  name: <tls-secret-name>
-  namespace: porch-system
-data:
-  <key-name>: <base64-encoded-certificate>
-type: kubernetes.io/tls
-```
-
-{{% alert title="Note" color="primary" %}}
-The certificate content must be in PEM (Privacy Enhanced Mail) format, and the *<key-name>* must be *ca.crt* or *ca.pem*. No other values are accepted.
-{{% /alert %}}
-
-### 2.2 Mounting TLS certificate secret to the function runner
-
-The TLS secret must then be mounted onto the function runner, similarly to how the docker configuration secret was mounted in section 1.2:
-
-```yaml
-    volumeMounts:
-      - mountPath: /var/tmp/tls-secret/
-        name: tls-registry-config
-        readOnly: true
-volumes:
-  - name: tls-registry-config
-    secret:
-      secretName: <tls-secret-name>
-```
-
-### 2.3 Configuring function runner arguments for TLS on private registries
-
-The *--enable-private-registries-tls* and *--tls-secret-path* arguments are only required if a private registry has TLS enabled. They indicate to the function runner that it should attempt authentication to the registry using TLS, and that it should use the TLS certificate information found at the path provided in *--tls-secret-path*.
-
-```yaml
-command:
-  - --enable-private-registries-tls=true
-  - --tls-secret-path=/var/tmp/tls-secret/
-```
-
-The *--enable-private-registries-tls* and *--tls-secret-path* arguments have default values of *false* and */var/tmp/tls-secret/* respectively; however, these should be configured by the user and are only necessary when using a private registry secured with TLS.
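Because only *ca.crt* or *ca.pem* keys containing PEM data are accepted, a quick local sanity check of the certificate file before creating the secret can save a debugging round-trip. The check below is illustrative and not part of Porch; the file name and dummy certificate body are placeholders:

```shell
# Create a dummy PEM-framed file standing in for a real CA certificate,
# then verify it carries the PEM header the secret requires.
printf '%s\n' '-----BEGIN CERTIFICATE-----' 'MIIB...base64-cert-body...' '-----END CERTIFICATE-----' > ca.crt

if head -n 1 ca.crt | grep -q 'BEGIN CERTIFICATE'; then
  echo "ca.crt looks like PEM"
else
  echo "ca.crt is not PEM" >&2
fi
# prints: ca.crt looks like PEM
```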
-
-### Function runner logic flow when TLS registries are enabled
-
-Note that enabling TLS registry functionality makes the function runner attempt a connection to the registry referenced by the kpt package, using the mounted TLS certificate. If this certificate is invalid for that registry, it tries again using the intermediate certificates stored on the machine for TLS with well-known websites (e.g. GitHub). If this also fails, it attempts to connect without TLS; if this last resort fails, it returns an error to the user.
-
-{{% alert title="Note" color="primary" %}}
-It is vital that the user has pre-configured the Kubernetes node on which the function runner is operating with the same TLS certificate information as is used in the TLS secret. If this is not configured correctly, then even if the certificate is correctly configured in the function runner, the kpt function will not run: the function runner will be able to pull the image, but the KRM function pod created to run the function will fail with the error *x509 certificate signed by unknown authority*.
-This pre-configuration setup is heavily cluster/implementation-dependent; consult your cluster's specific documentation about adding self-signed certificates or private/internal CA certs to your cluster.
-{{% /alert %}}
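The fallback order described above can be sketched as a simple control flow. The helper functions here are stand-ins for the real connection attempts, so this is only a model of the ordering, not the function runner's actual code:

```shell
# Stubs simulating each connection attempt; in the real function runner these
# are pulls against the registry using different trust configurations.
pull_with_custom_ca()  { [ "$1" = "registry.internal" ]; }   # mounted TLS secret
pull_with_system_cas() { [ "$1" = "ghcr.io" ]; }             # machine's well-known CAs
pull_without_tls()     { [ "$1" = "plain-http.local" ]; }    # last resort, no TLS

try_pull() {
  pull_with_custom_ca "$1"  && { echo "pulled $1 via custom CA"; return 0; }
  pull_with_system_cas "$1" && { echo "pulled $1 via system CAs"; return 0; }
  pull_without_tls "$1"     && { echo "pulled $1 without TLS"; return 0; }
  echo "error: cannot reach $1" >&2
  return 1
}

try_pull ghcr.io   # prints: pulled ghcr.io via system CAs
```

Only when all three attempts fail does an error propagate back to the user, which matches the behavior described above.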