From 61754e28ee3883316d8004d46e8b73e064dd4069 Mon Sep 17 00:00:00 2001
From: cnp-autobot <85171364+cnp-autobot@users.noreply.github.com>
Date: Fri, 21 Nov 2025 11:57:04 +0000
Subject: [PATCH 1/9] Sync EnterpriseDB/cloud-native-postgres
product/pg4k/v1.28.0-rc1
---
.../postgres_for_kubernetes/1/bootstrap.mdx | 2 +-
.../1/connection_pooling.mdx | 134 +-
.../1/database_import.mdx | 96 +-
.../1/declarative_database_management.mdx | 154 +-
.../postgres_for_kubernetes/1/failover.mdx | 41 +-
.../1/installation_upgrade.mdx | 2 +-
.../1/kubectl-plugin.mdx | 30 +-
.../1/labels_annotations.mdx | 29 +
.../1/logical_replication.mdx | 46 +-
.../postgres_for_kubernetes/1/monitoring.mdx | 90 +-
.../1/operator_capability_levels.mdx | 3 +-
.../1/operator_conf.mdx | 1 +
.../1/pg4k.v1/index.mdx | 380 +-
.../1/pg4k.v1/v1.28.0-rc1.mdx | 6829 +++++++++++++++++
.../postgres_for_kubernetes/1/postgis.mdx | 2 +-
.../1/preview_version.mdx | 9 +-
.../postgres_for_kubernetes/1/recovery.mdx | 4 +-
.../1/rolling_update.mdx | 34 +-
.../postgres_for_kubernetes/1/samples.mdx | 7 +
.../cluster-example-security-context.yaml | 44 +
.../cluster-example-syncreplicas-quorum.yaml | 3 +-
.../1/samples/pooler-external.yaml | 1 -
.../1/samples/postgis-example.yaml | 2 +-
.../1/ssl_connections.mdx | 2 +-
24 files changed, 7786 insertions(+), 159 deletions(-)
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.28.0-rc1.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-security-context.yaml
diff --git a/product_docs/docs/postgres_for_kubernetes/1/bootstrap.mdx b/product_docs/docs/postgres_for_kubernetes/1/bootstrap.mdx
index ea5469111c..77babfb342 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/bootstrap.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/1/bootstrap.mdx
@@ -661,7 +661,7 @@ spec:
```
All the requirements must be met for the clone operation to work, including
-the same PostgreSQL version (in our case 18.0).
+the same PostgreSQL version (in our case 18.1).
#### TLS certificate authentication
diff --git a/product_docs/docs/postgres_for_kubernetes/1/connection_pooling.mdx b/product_docs/docs/postgres_for_kubernetes/1/connection_pooling.mdx
index 4fa07449e6..6236cfe295 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/connection_pooling.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/1/connection_pooling.mdx
@@ -14,6 +14,10 @@ between your applications and a PostgreSQL service, for example, the `rw`
service. It creates a separate, scalable, configurable, and highly available
database access layer.
+!!! Warning
+ {{name.ln}} requires the `auth_dbname` feature in PgBouncer.
+ Make sure to use a PgBouncer container image version **1.19 or higher**.
+
## Architecture
The following diagram highlights how introducing a database access layer based
@@ -51,7 +55,7 @@ spec:
!!! Important
The pooler name can't be the same as any cluster name in the same namespace.
-This example creates a `Pooler` resource called `pooler-example-rw`
+This example creates a `Pooler` resource called `pooler-example-rw`
that's strictly associated with the Postgres `Cluster` resource called
`cluster-example`. It points to the primary, identified by the read/write
service (`rw`, therefore `cluster-example-rw`).
@@ -78,23 +82,24 @@ the configuration files used with PgBouncer.
## Pooler resource lifecycle
-`Pooler` resources aren't cluster-managed resources. You create poolers
-manually when they're needed. You can also deploy multiple poolers per
+`Pooler` resources are not managed automatically by the operator. You create
+them manually when needed, and you can deploy multiple poolers for the same
PostgreSQL cluster.
-What's important is that the life cycles of the `Cluster` and the `Pooler`
-resources are currently independent. Deleting the cluster doesn't imply the
-deletion of the pooler, and vice versa.
+The key point to understand is that the lifecycles of the `Cluster` and
+`Pooler` resources are independent. Deleting a cluster does not automatically
+remove its poolers, and deleting a pooler does not affect the cluster.
-!!! Important
- Once you know how a pooler works, you have full freedom in terms of
- possible architectures. You can have clusters without poolers, clusters with
- a single pooler, or clusters with several poolers, that is, one per application.
+!!! Info
+ Once you are familiar with how poolers work, you have complete flexibility
+ in designing your architecture. You can run clusters without poolers, clusters
+ with a single pooler, or clusters with multiple poolers (for example, one per
+ application).
!!! Important
- When the operator is upgraded, the pooler pods will undergo a rolling
- upgrade. This is necessary to ensure that the instance manager within the
- pooler pods is also upgraded.
+ When the operator itself is upgraded, pooler pods will also undergo a
+ rolling upgrade. This ensures that the instance manager inside the pooler
+ pods is upgraded consistently.
## Security
@@ -102,78 +107,94 @@ Any PgBouncer pooler is transparently integrated with {{name.ln}} support for
in-transit encryption by way of TLS connections, both on the client
(application) and server (PostgreSQL) side of the pool.
-Specifically, PgBouncer reuses the certificates of the PostgreSQL server. It
-also uses TLS client certificate authentication to connect to the PostgreSQL
-server to run the `auth_query` for clients' password authentication (see
-[Authentication](#authentication)).
-
-Containers run as the pgbouncer system user, and access to the `pgbouncer`
-database is allowed only by way of local connections, through peer authentication.
+Containers run as the `pgbouncer` system user, and access to the `pgbouncer`
+administration database is allowed only by way of local connections, through
+peer authentication.
### Certificates
-By default, a PgBouncer pooler uses the same certificates that are used by the
-cluster. However, if you provide those certificates, the pooler accepts secrets
-with the following formats:
+By default, a PgBouncer pooler reuses the same certificates as the PostgreSQL
+cluster. It relies on TLS client certificate authentication to connect to the
+PostgreSQL server and run the `auth_query` used for client password
+authentication (see ["Authentication"](#authentication)).
+
+Supplying your own secrets disables the built-in integration. From that point
+on, you take full control of, and responsibility for, managing authentication.
+Supported secret formats are:
1. Basic Auth
2. TLS
3. Opaque
-In the Opaque case, it looks for the following specific keys that need to be used:
+For Opaque secrets, the `Pooler` resource expects the following keys:
-- tls.crt
-- tls.key
+- `tls.crt`
+- `tls.key`
-So you can treat this secret as a TLS secret, and start from there.
+In practice, this means you can treat an Opaque secret as a TLS secret,
+starting from the same structure.
## Authentication
-Password-based authentication is the only supported method for clients of
-PgBouncer in {{name.ln}}.
+### Default authentication method
+
+By default, {{name.ln}} natively supports password-based authentication for
+PgBouncer clients connecting to the PostgreSQL database.
-Internally, the implementation relies on PgBouncer's `auth_user` and
-`auth_query` options. Specifically, the operator:
+This built-in mechanism leverages PgBouncer’s `auth_dbname` (introduced in
+version 1.19), together with the `auth_user` and `auth_query` options.
+
+!!! Important
+ If you provide your own certificate secrets, the built-in integration is
+ disabled. In that case, you are fully responsible for configuring and
+ managing PgBouncer authentication.
-- Creates a standard user called `cnp_pooler_pgbouncer` in the PostgreSQL server
+The built-in integration performs the following tasks:
+
+- Creates a dedicated user called `cnp_pooler_pgbouncer` in the PostgreSQL
+ server
- Creates the lookup function in the `postgres` database and grants execution
- privileges to the cnp_pooler_pgbouncer user (PoLA)
+  privileges to `cnp_pooler_pgbouncer`, following the principle of least authority (PoLA)
- Issues a TLS certificate for this user
-- Sets `cnp_pooler_pgbouncer` as the `auth_user`
-- Configures PgBouncer to use the TLS certificate to authenticate
- `cnp_pooler_pgbouncer` against the PostgreSQL server
-- Removes all the above when it detects that a cluster doesn't have
- any pooler associated to it
+- Configures PgBouncer to use `cnp_pooler_pgbouncer` as the `auth_user` and
+ `postgres` as the `auth_dbname`
+- Configures PgBouncer to authenticate `cnp_pooler_pgbouncer` against
+ PostgreSQL using the issued TLS certificate
+- Cleans up all of the above automatically when no poolers are associated with
+ the cluster
-!!! Important
- If you specify your own secrets, the operator doesn't automatically
- integrate the pooler.
+#### SQL instructions
-To manually integrate the pooler, if you specified your own
-secrets, you must run the following queries from inside your cluster.
+As part of the built-in integration, {{name.ln}} automatically executes a set
+of SQL statements during reconciliation. These statements are run by the
+instance manager using the `postgres` user against the `postgres` database.
-First, you must create the role:
+Role creation:
```sql
CREATE ROLE cnp_pooler_pgbouncer WITH LOGIN;
```
-Then, for each application database, grant the permission for
-`cnp_pooler_pgbouncer` to connect to it:
+Grant access to the `postgres` database:
```sql
-GRANT CONNECT ON DATABASE { database name here } TO cnp_pooler_pgbouncer;
+GRANT CONNECT ON DATABASE postgres TO cnp_pooler_pgbouncer;
```
-Finally, as a *superuser* connect in each application database, and then create
-the authentication function inside each of the application databases:
+Create the lookup function for password verification. This function is created
+in the `postgres` database with `SECURITY DEFINER` privileges and is used by
+PgBouncer’s `auth_query` option:
```sql
CREATE OR REPLACE FUNCTION public.user_search(uname TEXT)
RETURNS TABLE (usename name, passwd text)
LANGUAGE sql SECURITY DEFINER AS
'SELECT usename, passwd FROM pg_catalog.pg_shadow WHERE usename=$1;';
+```
+
+Restrict and grant permissions on the lookup function:
+```sql
REVOKE ALL ON FUNCTION public.user_search(text)
FROM public;
@@ -181,15 +202,18 @@ GRANT EXECUTE ON FUNCTION public.user_search(text)
TO cnp_pooler_pgbouncer;
```
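+
+Taken together, these objects map onto PgBouncer settings conceptually
+equivalent to the following fragment (illustrative only; the operator
+generates and manages its own configuration files):
+
+```ini
+[pgbouncer]
+auth_user = cnp_pooler_pgbouncer
+auth_dbname = postgres
+auth_query = SELECT usename, passwd FROM public.user_search($1)
+```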
-!!! Important
- Given that `user_search` is a `SECURITY DEFINER` function, you need to
- create it through a role with `SUPERUSER` privileges, such as the `postgres`
- user.
+### Custom authentication method
+
+Providing your own certificate secrets disables the built-in integration.
+
+This gives you the flexibility, and the responsibility, to manage the
+authentication process yourself. You can follow the instructions above to
+replicate a setup similar to the built-in one.
## Pod templates
You can take advantage of pod templates specification in the `template`
-section of a `Pooler` resource. For details, see
+section of a `Pooler` resource. For details, see
[`PoolerSpec`](pg4k.v1.md#postgresql-k8s-enterprisedb-io-v1-PoolerSpec) in the API reference.
Using templates, you can configure pods as you like, including fine control
@@ -320,7 +344,7 @@ replicas).
If your infrastructure spans multiple availability zones with high latency
across them, be aware of network hops. Consider, for example, the case of your
application running in zone 2, connecting to PgBouncer running in zone 3, and
- pointing to the PostgreSQL primary in zone 1.
+ pointing to the PostgreSQL primary in zone 1.
## PgBouncer configuration options
diff --git a/product_docs/docs/postgres_for_kubernetes/1/database_import.mdx b/product_docs/docs/postgres_for_kubernetes/1/database_import.mdx
index bd1336fddc..d3bb769d43 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/database_import.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/1/database_import.mdx
@@ -251,7 +251,7 @@ There are a few things you need to be aware of when using the `monolith` type:
- The `SUPERUSER` option is removed from any imported role
- Wildcard `"*"` can be used as the only element in the `databases` and/or
`roles` arrays to import every object of the kind; When matching databases
- the wildcard will ignore the `postgres` database, template databases,
+ the wildcard will ignore the `postgres` database, template databases
and those databases not allowing connections
- After the clone procedure is done, `ANALYZE VERBOSE` is executed for every
database.
@@ -394,36 +394,84 @@ unnecessary writes in the checkpoint area by tuning Postgres GUCs like
`shared_buffers`, `max_wal_size`, `checkpoint_timeout` directly in the
`Cluster` configuration.
-## Customizing `pg_dump` and `pg_restore` Behavior
+## Customizing `pg_dump` and `pg_restore` behavior
-You can customize the behavior of `pg_dump` and `pg_restore` by specifying
-additional options using the `pgDumpExtraOptions` and `pgRestoreExtraOptions`
-parameters. For instance, you can enable parallel jobs to speed up data
-import/export processes, as shown in the following example:
+You can customize the behavior of `pg_dump` and `pg_restore` by specifying additional
+options using the `pgDumpExtraOptions` and `pgRestoreExtraOptions` parameters.
+This is especially useful for improving performance or managing import/export complexity.
+
+For example, enabling parallel jobs can significantly speed up data transfer:
```yaml
- #
- bootstrap:
- initdb:
- import:
- type: microservice
- databases:
- - app
- source:
- externalCluster: cluster-example
- pgDumpExtraOptions:
- - '--jobs=2'
- pgRestoreExtraOptions:
+bootstrap:
+ initdb:
+ import:
+ type: microservice
+ databases:
+ - app
+ source:
+ externalCluster: cluster-example
+ pgDumpExtraOptions:
+ - '--jobs=2'
+ pgRestoreExtraOptions:
+ - '--jobs=2'
+```
+
+### Stage-specific `pg_restore` options
+
+For more granular control over the import process, {{name.ln}} supports
+stage-specific `pg_restore` options for the following phases:
+
+- `pre-data` – e.g., schema definitions
+- `data` – e.g., table contents
+- `post-data` – e.g., indexes, constraints and triggers
+
+By specifying options for each phase, you can optimize parallelism and apply
+flags tailored to the nature of the objects being restored.
+
+```yaml
+bootstrap:
+ initdb:
+ import:
+ type: microservice
+ schemaOnly: false
+ databases:
+ - mynewdb
+ source:
+ externalCluster: sourcedb-external
+ pgRestorePredataOptions:
+ - '--jobs=1'
+ pgRestoreDataOptions:
+ - '--jobs=4'
+ pgRestorePostdataOptions:
- '--jobs=2'
- #
```
+In the example above:
+
+- `--jobs=1` is applied to the `pre-data` stage to preserve the ordering of
+ schema creation.
+- `--jobs=4` increases parallelism during the `data` stage, speeding up large
+ data imports.
+- `--jobs=2` balances performance and dependency handling in the `post-data`
+ stage.
+
+These stage-specific settings are particularly valuable for large databases or
+resource-sensitive environments where tuning concurrency can significantly
+improve performance.
+
+!!! Note
+ When provided, stage-specific options take precedence over the general
+ `pgRestoreExtraOptions`.
+
!!! Warning
- Use the `pgDumpExtraOptions` and `pgRestoreExtraOptions` fields with
- caution and at your own risk. These options are not validated or verified by
- the operator, and some configurations may conflict with its intended
- functionality or behavior. Always test thoroughly in a safe and controlled
- environment before applying them in production.
+ The `pgDumpExtraOptions`, `pgRestoreExtraOptions`, and all stage-specific
+ restore options (`pgRestorePredataOptions`, `pgRestoreDataOptions`,
+ `pgRestorePostdataOptions`) are passed directly to the underlying PostgreSQL
+ tools without validation by the operator. Certain flags may conflict with the
+ operator’s intended functionality or design. Use these options with caution
+ and always test them thoroughly in a safe, controlled environment before
+ applying them in production.
## Online Import and Upgrades
diff --git a/product_docs/docs/postgres_for_kubernetes/1/declarative_database_management.mdx b/product_docs/docs/postgres_for_kubernetes/1/declarative_database_management.mdx
index 643604c658..dbb8fe9e66 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/declarative_database_management.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/1/declarative_database_management.mdx
@@ -44,8 +44,8 @@ spec:
cluster:
name: cluster-example
extensions:
- - name: bloom
- ensure: present
+ - name: bloom
+ ensure: present
```
When applied, this manifest creates a `Database` object called
@@ -185,8 +185,8 @@ with a list of extension specifications, as shown in the following example:
# ...
spec:
extensions:
- - name: bloom
- ensure: present
+ - name: bloom
+ ensure: present
# ...
```
@@ -240,8 +240,8 @@ with a list of schema specifications, as shown in the following example:
# ...
spec:
schemas:
- - name: app
- owner: app
+ - name: app
+ owner: app
# ...
```
@@ -260,6 +260,148 @@ Each schema entry supports the following properties:
[`DROP SCHEMA`](https://www.postgresql.org/docs/current/sql-dropschema.html),
[`ALTER SCHEMA`](https://www.postgresql.org/docs/current/sql-alterschema.html).
+## Managing Foreign Data Wrappers (FDWs) in a Database
+
+!!! Info
+ Foreign Data Wrappers (FDWs) are database-scoped objects that typically
+ require superuser privileges to create or modify. {{name.ln}} provides a
+ declarative API for managing FDWs, enabling users to define and maintain them
+ in a controlled, Kubernetes-native way without directly executing SQL commands
+ or escalating privileges.
+
+{{name.ln}} enables seamless and automated management of PostgreSQL foreign
+data wrappers in the target database using declarative configuration.
+
+To enable this feature, define the `spec.fdws` field with a list of FDW
+specifications, as shown in the following example:
+
+```yaml
+# ...
+spec:
+ fdws:
+ - name: postgres_fdw
+ usage:
+ - name: app
+ type: grant
+# ...
+```
+
+Each FDW entry supports the following properties:
+
+- `name`: The name of the foreign data wrapper **(mandatory)**.
+- `ensure`: Indicates whether the FDW should be `present` or `absent` in the
+ database (default is `present`).
+- `handler`: The name of the handler function used by the FDW. If not
+ specified, the default handler defined by the FDW extension (if any) will be
+ used.
+- `validator`: The name of the validator function used by the FDW. If not
+ specified, the default validator defined by the FDW extension (if any) will
+ be used.
+- `owner`: The owner of the FDW **(must be a superuser)**.
+- `usage`: The list of `USAGE` permissions of the FDW, with the following fields:
+  - `name`: The name of the role to which the usage permission should be
+    granted or from which it should be revoked **(mandatory)**.
+  - `type`: The type of the usage permission. Supports `grant` and `revoke`.
+- `options`: A map of FDW-specific options to manage, where each key is the
+ name of an option. Each option supports the following fields:
+ - `value`: The string value of the option.
+ - `ensure`: Indicates whether the option should be `present` or `absent`.
+
+!!! Info
+    Setting `handler` or `validator` to `"-"` removes the handler or
+    validator from the FDW, respectively. This follows the PostgreSQL
+    convention, where `"-"` denotes the absence of a handler or validator.
+
+!!! Warning
+    PostgreSQL restricts ownership of foreign data wrappers to **roles with
+    superuser privileges**. Attempts to assign ownership to a non-superuser
+    (for example, an application role) are ignored or rejected. By default,
+    foreign data wrappers are owned by the `postgres` user.
+
+The operator reconciles only the FDWs explicitly listed in `spec.fdws`. Any
+existing FDWs not declared in this list are left untouched.
+
+!!! Info
+ {{name.ln}} manages FDWs using PostgreSQL's native SQL commands:
+ [`CREATE FOREIGN DATA WRAPPER`](https://www.postgresql.org/docs/current/sql-createforeigndatawrapper.html),
+ [`ALTER FOREIGN DATA WRAPPER`](https://www.postgresql.org/docs/current/sql-alterforeigndatawrapper.html),
+ and [`DROP FOREIGN DATA WRAPPER`](https://www.postgresql.org/docs/current/sql-dropforeigndatawrapper.html).
+ The `ALTER` command supports option updates.
+
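+For illustration, the `postgres_fdw` example above corresponds roughly to the
+following SQL (a sketch, assuming the `app` role already exists):
+
+```sql
+-- handler and validator default to those provided by the FDW
+-- extension, if any
+CREATE FOREIGN DATA WRAPPER postgres_fdw;
+
+-- a usage entry with "type: grant" becomes
+GRANT USAGE ON FOREIGN DATA WRAPPER postgres_fdw TO app;
+
+-- while "type: revoke" would become
+-- REVOKE USAGE ON FOREIGN DATA WRAPPER postgres_fdw FROM app;
+```
+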
+### Managing Foreign Servers in a Database
+
+{{name.ln}} provides seamless, automated management of PostgreSQL foreign
+servers in a target database using declarative configuration.
+
+A **foreign server** encapsulates the connection details that a foreign data
+wrapper (FDW) uses to access an external data source. For user-specific
+connection details, you can define [user mappings](https://www.postgresql.org/docs/current/sql-createusermapping.html).
+
+!!! Important
+ {{name.ln}} does not currently support declarative configuration of user mappings.
+ However, once an FDW and its foreign server are defined, you can grant
+ usage privileges to a standard database role. This allows you to manage user
+ mappings as part of your SQL schema, without requiring superuser privileges.
+
+To enable this feature, declare the `spec.servers` field in a `Database`
+resource with a list of foreign server specifications, for example:
+
+```yaml
+# ...
+spec:
+ servers:
+ - name: angus
+ fdw: postgres_fdw
+ ensure: present
+ usage:
+ - name: app
+ type: grant
+ options:
+ - name: host
+ value: angus-rw
+ - name: dbname
+ value: app
+# ...
+```
+
+Each foreign server entry supports the following properties:
+
+- `name`: The name of the foreign server **(mandatory)**.
+- `fdw`: The name of the foreign data wrapper the server belongs to
+ **(mandatory)**.
+- `ensure`: Whether the foreign server should be `present` or `absent` in the
+ database (default: `present`).
+- `usage`: The list of `USAGE` permissions of the foreign server, with the
+ following fields:
+  - `name`: The name of the role to which the usage permission should be
+    granted or from which it should be revoked **(mandatory)**.
+  - `type`: The type of the usage permission. Supports `grant` and `revoke`.
+- `options`: A list of FDW-specific option specifications.
+ Each entry in the list supports the following keys:
+ - `name`: The name of the option **(mandatory)**.
+ - `value`: The string value of the option.
+ - `ensure`: Indicates whether the option should be `present` or `absent`.
+
+!!! Important
+    The `fdw` field must reference an existing foreign data wrapper already
+    defined in the database. If the specified FDW does not exist, the foreign
+    server will not be created.
+
+!!! Info
+    {{name.ln}} manages foreign servers using PostgreSQL’s native SQL commands:
+    [`CREATE SERVER`](https://www.postgresql.org/docs/current/sql-createserver.html),
+    [`ALTER SERVER`](https://www.postgresql.org/docs/current/sql-alterserver.html), and
+    [`DROP SERVER`](https://www.postgresql.org/docs/current/sql-dropserver.html).
+    The `ALTER SERVER` command is used to update server options.
+
+The operator reconciles **only** the foreign servers explicitly listed in
+`spec.servers`. Any existing servers not included in this list are left
+unchanged.
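+
+For reference, the `angus` server declaration above corresponds roughly to the
+following SQL (a sketch; the user mapping at the end is an example of what you
+can manage yourself through plain SQL once `USAGE` has been granted):
+
+```sql
+CREATE SERVER angus FOREIGN DATA WRAPPER postgres_fdw
+  OPTIONS (host 'angus-rw', dbname 'app');
+
+GRANT USAGE ON FOREIGN SERVER angus TO app;
+
+-- user mappings are not managed declaratively; a role with USAGE
+-- on the server can create its own mapping ('changeme' is a placeholder):
+CREATE USER MAPPING FOR app SERVER angus
+  OPTIONS (user 'app', password 'changeme');
+```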
+
## Limitations and Caveats
### Renaming a database
diff --git a/product_docs/docs/postgres_for_kubernetes/1/failover.mdx b/product_docs/docs/postgres_for_kubernetes/1/failover.mdx
index d72675f1c5..631952adb8 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/failover.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/1/failover.mdx
@@ -137,13 +137,42 @@ the instance to promote, and it does not occur otherwise.
This feature allows users to choose their preferred trade-off between data
durability and data availability.
-Failover quorum can be enabled by setting the annotation
-`alpha.k8s.enterprisedb.io/failoverQuorum="true"` in the `Cluster` resource.
+Failover quorum can be enabled by setting the
+`.spec.postgresql.synchronous.failoverQuorum` field to `true`:
-!!! info
- When this feature is out of the experimental phase, the annotation
- `alpha.k8s.enterprisedb.io/failoverQuorum` will be replaced by a configuration option in
- the `Cluster` resource.
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+ name: cluster-example
+spec:
+ instances: 3
+
+ postgresql:
+ synchronous:
+ method: any
+ number: 1
+ failoverQuorum: true
+
+ storage:
+ size: 1Gi
+```
+
+For backward compatibility, the legacy annotation
+`alpha.k8s.enterprisedb.io/failoverQuorum` is still supported by the admission webhook and
+takes precedence over the `Cluster` spec option:
+
+- If the annotation evaluates to `"true"` and a synchronous replication stanza
+ is present, the webhook automatically sets
+ `.spec.postgresql.synchronous.failoverQuorum` to `true`.
+- If the annotation evaluates to `"false"`, the feature is always disabled.
+
+!!! Important
+ Because the annotation overrides the spec, we recommend that users of this
+ experimental feature migrate to the native
+ `.spec.postgresql.synchronous.failoverQuorum` option and remove the annotation
+ from their manifests. The annotation is **deprecated** and will be removed in a
+ future release.
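+
+For clusters still relying on the legacy annotation, the deprecated form looks
+like the following (shown here only to help identify manifests that should be
+migrated):
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+  name: cluster-example
+  annotations:
+    alpha.k8s.enterprisedb.io/failoverQuorum: "true"
+spec:
+  instances: 3
+  postgresql:
+    synchronous:
+      method: any
+      number: 1
+```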
### How it works
diff --git a/product_docs/docs/postgres_for_kubernetes/1/installation_upgrade.mdx b/product_docs/docs/postgres_for_kubernetes/1/installation_upgrade.mdx
index 03ebfe047b..2987b103a8 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/installation_upgrade.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/1/installation_upgrade.mdx
@@ -69,7 +69,7 @@ You can install the manifest for the latest version of the operator by running:
```sh
kubectl apply --server-side -f \
- https://get.enterprisedb.io/pg4k/pg4k-1.27.1.yaml
+ https://get.enterprisedb.io/pg4k/pg4k-1.28.0-rc1.yaml
```
You can verify that with:
diff --git a/product_docs/docs/postgres_for_kubernetes/1/kubectl-plugin.mdx b/product_docs/docs/postgres_for_kubernetes/1/kubectl-plugin.mdx
index 09044c837f..41b6e6ec3c 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/kubectl-plugin.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/1/kubectl-plugin.mdx
@@ -35,11 +35,11 @@ them in your systems.
#### Debian packages
-For example, let's install the 1.27.1 release of the plugin, for an Intel based
+For example, let's install the 1.28.0-rc1 release of the plugin, for an Intel based
64 bit server. First, we download the right `.deb` file.
```sh
-wget https://github.com/EnterpriseDB/kubectl-cnp/releases/download/v1.27.1/kubectl-cnp_1.27.1_linux_x86_64.deb \
+wget https://github.com/EnterpriseDB/kubectl-cnp/releases/download/v1.28.0-rc1/kubectl-cnp_1.28.0-rc1_linux_x86_64.deb \
--output-document kube-plugin.deb
```
@@ -50,17 +50,17 @@ $ sudo dpkg -i kube-plugin.deb
Selecting previously unselected package cnp.
(Reading database ... 6688 files and directories currently installed.)
Preparing to unpack kube-plugin.deb ...
-Unpacking kubectl-cnp (1.27.1) ...
-Setting up kubectl-cnp (1.27.1) ...
+Unpacking kubectl-cnp (1.28.0-rc1) ...
+Setting up kubectl-cnp (1.28.0-rc1) ...
```
#### RPM packages
-As in the example for `.rpm` packages, let's install the 1.27.1 release for an
+Similarly, for `.rpm` packages, let's install the 1.28.0-rc1 release for an
Intel 64 bit machine. Note the `--output` flag to provide a file name.
```sh
-curl -L https://github.com/EnterpriseDB/kubectl-cnp/releases/download/v1.27.1/kubectl-cnp_1.27.1_linux_x86_64.rpm \
+curl -L https://github.com/EnterpriseDB/kubectl-cnp/releases/download/v1.28.0-rc1/kubectl-cnp_1.28.0-rc1_linux_x86_64.rpm \
--output kube-plugin.rpm
```
@@ -74,7 +74,7 @@ Dependencies resolved.
Package Architecture Version Repository Size
====================================================================================================
Installing:
- cnp x86_64 1.27.1-1 @commandline 20 M
+ cnp x86_64 1.28.0-rc1-1 @commandline 20 M
Transaction Summary
====================================================================================================
@@ -243,9 +243,9 @@ sandbox-3 0/604DE38 0/604DE38 0/604DE38 0/604DE38 00:00:00 00:00:00 00
Instances status
Name Current LSN Replication role Status QoS Manager Version Node
---- ----------- ---------------- ------ --- --------------- ----
-sandbox-1 0/604DE38 Primary OK BestEffort 1.27.1 k8s-eu-worker
-sandbox-2 0/604DE38 Standby (async) OK BestEffort 1.27.1 k8s-eu-worker2
-sandbox-3 0/604DE38 Standby (async) OK BestEffort 1.27.1 k8s-eu-worker
+sandbox-1 0/604DE38 Primary OK BestEffort 1.28.0-rc1 k8s-eu-worker
+sandbox-2 0/604DE38 Standby (async) OK BestEffort 1.28.0-rc1 k8s-eu-worker2
+sandbox-3 0/604DE38 Standby (async) OK BestEffort 1.28.0-rc1 k8s-eu-worker
```
If you require more detailed status information, use the `--verbose` option (or
@@ -299,9 +299,9 @@ sandbox-primary primary 1 1 1
Instances status
Name Current LSN Replication role Status QoS Manager Version Node
---- ----------- ---------------- ------ --- --------------- ----
-sandbox-1 0/6053720 Primary OK BestEffort 1.27.1 k8s-eu-worker
-sandbox-2 0/6053720 Standby (async) OK BestEffort 1.27.1 k8s-eu-worker2
-sandbox-3 0/6053720 Standby (async) OK BestEffort 1.27.1 k8s-eu-worker
+sandbox-1 0/6053720 Primary OK BestEffort 1.28.0-rc1 k8s-eu-worker
+sandbox-2 0/6053720 Standby (async) OK BestEffort 1.28.0-rc1 k8s-eu-worker2
+sandbox-3 0/6053720 Standby (async) OK BestEffort 1.28.0-rc1 k8s-eu-worker
```
With an additional `-v` (e.g. `kubectl cnp status sandbox -v -v`), you can
@@ -524,12 +524,12 @@ Archive: report_operator_.zip
```output
====== Begin of Previous Log =====
-2023-03-28T12:56:41.251711811Z {"level":"info","ts":"2023-03-28T12:56:41Z","logger":"setup","msg":"Starting EDB Postgres for Kubernetes Operator","version":"1.27.1","build":{"Version":"1.27.1+dev107","Commit":"cc9bab17","Date":"2023-03-28"}}
+2023-03-28T12:56:41.251711811Z {"level":"info","ts":"2023-03-28T12:56:41Z","logger":"setup","msg":"Starting EDB Postgres for Kubernetes Operator","version":"1.28.0-rc1","build":{"Version":"1.28.0-rc1+dev107","Commit":"cc9bab17","Date":"2023-03-28"}}
2023-03-28T12:56:41.251851909Z {"level":"info","ts":"2023-03-28T12:56:41Z","logger":"setup","msg":"Starting pprof HTTP server","addr":"0.0.0.0:6060"}
====== End of Previous Log =====
-2023-03-28T12:57:09.854306024Z {"level":"info","ts":"2023-03-28T12:57:09Z","logger":"setup","msg":"Starting EDB Postgres for Kubernetes Operator","version":"1.27.1","build":{"Version":"1.27.1+dev107","Commit":"cc9bab17","Date":"2023-03-28"}}
+2023-03-28T12:57:09.854306024Z {"level":"info","ts":"2023-03-28T12:57:09Z","logger":"setup","msg":"Starting EDB Postgres for Kubernetes Operator","version":"1.28.0-rc1","build":{"Version":"1.28.0-rc1+dev107","Commit":"cc9bab17","Date":"2023-03-28"}}
2023-03-28T12:57:09.854363943Z {"level":"info","ts":"2023-03-28T12:57:09Z","logger":"setup","msg":"Starting pprof HTTP server","addr":"0.0.0.0:6060"}
```
diff --git a/product_docs/docs/postgres_for_kubernetes/1/labels_annotations.mdx b/product_docs/docs/postgres_for_kubernetes/1/labels_annotations.mdx
index 67c6dd00ee..f3b331a2b0 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/labels_annotations.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/1/labels_annotations.mdx
@@ -112,6 +112,28 @@ instead
`k8s.enterprisedb.io/instanceRole`
: Whether the instance running in a pod is a `primary` or a `replica`.
+`app.kubernetes.io/managed-by`
+: Name of the manager. It will always be `cloudnative-pg`.
+ Available across all {{name.ln}} managed resources.
+
+`app.kubernetes.io/name`
+: Name of the application. It will always be `postgresql`.
+ Available on pods, jobs, deployments, services, persistentVolumeClaims, volumeSnapshots,
+ podDisruptionBudgets, podMonitors.
+
+`app.kubernetes.io/component`
+: Name of the component (`database`, `pooler`, ...).
+ Available on pods, jobs, deployments, services, persistentVolumeClaims, volumeSnapshots,
+ podDisruptionBudgets, podMonitors.
+
+`app.kubernetes.io/instance`
+: Name of the owning `Cluster` resource.
+ Available on pods, jobs, deployments, services, volumeSnapshots, podDisruptionBudgets, podMonitors.
+
+`app.kubernetes.io/version`
+: Major version of PostgreSQL.
+ Available on pods, jobs, services, volumeSnapshots, podDisruptionBudgets, podMonitors.
+
## Predefined annotations
{{name.ln}} manages the following predefined annotations:
@@ -260,6 +282,13 @@ operations. Use this setting with caution and at your own risk.
`kubectl.kubernetes.io/restartedAt`
: When available, the time of last requested restart of a Postgres cluster.
+`alpha.k8s.enterprisedb.io/unrecoverable`
+: Experimental annotation applied to a `Pod` running a PostgreSQL instance.
+ It instructs the operator to delete the `Pod` and all its associated PVCs.
+ The instance will then be recreated according to the configured join
+ strategy. This annotation can only be used on instances that are neither the
+ current primary nor the designated target primary.
+
## Prerequisites
By default, no label or annotation defined in the cluster's metadata is
diff --git a/product_docs/docs/postgres_for_kubernetes/1/logical_replication.mdx b/product_docs/docs/postgres_for_kubernetes/1/logical_replication.mdx
index 4d568e22d7..f30cceaaf7 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/logical_replication.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/1/logical_replication.mdx
@@ -88,12 +88,46 @@ In the above example:
- It includes all tables (`spec.target.allTables: true`) from the `app`
database (`spec.dbname`).
+### Fine-grained control over publication tables
+
+While the `allTables` option provides a convenient way to replicate all tables
+in a database, PostgreSQL version 15 and later introduce enhanced flexibility
+through the [`CREATE PUBLICATION`](https://www.postgresql.org/docs/current/sql-createpublication.html)
+command. This allows you to precisely define which tables, or even which types
+of data changes, should be included in a publication.
+
!!! Important
- While `allTables` simplifies configuration, PostgreSQL offers fine-grained
- control for replicating specific tables or targeted data changes. For advanced
- configurations, consult the [PostgreSQL documentation](https://www.postgresql.org/docs/current/logical-replication.html).
- Additionally, refer to the [{{name.ln}} API reference](pg4k.v1.md#postgresql-k8s-enterprisedb-io-v1-PublicationTarget)
- for details on declaratively customizing replication targets.
+ If you are using PostgreSQL versions earlier than 15, review the syntax and
+ options available for `CREATE PUBLICATION` in your specific release. Some
+ parameters and features may not be supported.
+
+For complex or tailored replication setups, refer to the
+[PostgreSQL logical replication documentation](https://www.postgresql.org/docs/current/logical-replication.html).
+
+Additionally, refer to the [{{name.ln}} API reference](pg4k.v1.md#postgresql-k8s-enterprisedb-io-v1-PublicationTarget)
+for details on declaratively customizing replication targets.
+
+The following example defines a publication that replicates all tables in the
+`portal` schema of the `app` database, along with the `users` table from the
+`access` schema:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Publication
+metadata:
+ name: publisher
+spec:
+ cluster:
+ name: freddie
+ dbname: app
+ name: publisher
+ target:
+ objects:
+ - tablesInSchema: portal
+ - table:
+ name: users
+ schema: access
+```
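For reference, the manifest above corresponds roughly to the following SQL, using the PostgreSQL 15+ `CREATE PUBLICATION` syntax (a sketch of what the operator applies, not the literal statement it generates):

```sql
CREATE PUBLICATION publisher
  FOR TABLES IN SCHEMA portal,
      TABLE access.users;
```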
### Required Fields in the `Publication` Manifest
@@ -400,6 +434,8 @@ metadata:
spec:
instances: 1
+ imageName: docker.enterprisedb.com/k8s/postgresql:18-standard-ub9
+
storage:
size: 1Gi
diff --git a/product_docs/docs/postgres_for_kubernetes/1/monitoring.mdx b/product_docs/docs/postgres_for_kubernetes/1/monitoring.mdx
index f08331a6c3..3ce3fb8cce 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/monitoring.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/1/monitoring.mdx
@@ -22,7 +22,7 @@ more `ConfigMap` or `Secret` resources (see the
!!! Important
{{name.ln}}, by default, installs a set of [predefined metrics](#default-set-of-metrics)
- in a `ConfigMap` named `default-monitoring`.
+ in a `ConfigMap` named `postgresql-operator-default-monitoring`.
!!! Info
You can inspect the exported metrics by following the instructions in
@@ -59,6 +59,20 @@ by specifying a list of one or more databases in the `target_databases` option.
with Prometheus and Grafana, you can find a quick setup guide
in [Part 4 of the quickstart](quickstart.md#part-4-monitor-clusters-with-prometheus-and-grafana)
+### Output caching
+
+By default, the outputs of monitoring queries are cached for thirty
+seconds. This enhances resource efficiency and avoids having PostgreSQL
+run the monitoring queries every time the Prometheus endpoint is
+scraped.
+
+The cache can be observed through the `cache_hits`, `cache_misses`,
+and `last_update_timestamp` metrics.
+
+Setting `cluster.spec.monitoring.metricsQueriesTTL` to zero disables
+the cache; the queries then run on every scrape of the metrics
+endpoint.
+
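The TTL is set in the cluster manifest. As a sketch (the cluster name and sizes are illustrative), extending the cache to one minute could look like:

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: cluster-example
spec:
  instances: 3
  storage:
    size: 1Gi
  monitoring:
    # Cache metrics query results for 60 seconds instead of the
    # default 30. Setting this to 0 disables caching entirely and
    # reruns the queries on every scrape.
    metricsQueriesTTL: 60s
```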
### Monitoring with the Prometheus operator
You can monitor a specific PostgreSQL cluster using the
@@ -233,7 +247,7 @@ cnp_collector_up{cluster="cluster-example"} 1
# HELP cnp_collector_postgres_version Postgres version
# TYPE cnp_collector_postgres_version gauge
-cnp_collector_postgres_version{cluster="cluster-example",full="18.0"} 18.0
+cnp_collector_postgres_version{cluster="cluster-example",full="17.6"} 17.6
# HELP cnp_collector_last_failed_backup_timestamp The last failed backup as a unix timestamp (Deprecated)
# TYPE cnp_collector_last_failed_backup_timestamp gauge
@@ -763,6 +777,78 @@ spec:
EOF
```
+### Enabling TLS for operator metrics
+
+By default, the operator exposes its metrics over HTTP on port `8080`.
+To secure this endpoint with TLS, follow these steps:
+
+1. Create a Kubernetes Secret containing the TLS certificate (`tls.crt`) and
+ private key (`tls.key`).
+2. Mount the Secret into the operator Pod.
+3. Set the `METRICS_CERT_DIR` environment variable to point to the directory
+ where the certificates are mounted.
+
+Example `Secret` definition:
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+ name: postgresql-operator-metrics-cert
+ namespace: postgresql-operator-system
+type: kubernetes.io/tls
+data:
+ tls.crt:
+ tls.key:
+```
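
Rather than base64-encoding the certificate files by hand, an equivalent Secret can be generated with `kubectl` (assuming `tls.crt` and `tls.key` exist in the current directory):

```shell
kubectl create secret tls postgresql-operator-metrics-cert \
  --cert=tls.crt --key=tls.key \
  --namespace postgresql-operator-system
```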
+
+Next, update the operator deployment to mount the secret and configure the
+environment variable:
+
+```yaml
+spec:
+ template:
+ spec:
+ containers:
+ - name: manager
+ env:
+ - name: METRICS_CERT_DIR
+ value: /run/secrets/k8s.enterprisedb.io/metrics
+ volumeMounts:
+ - mountPath: /run/secrets/k8s.enterprisedb.io/metrics
+ name: metrics-certificates
+ readOnly: true
+ volumes:
+ - name: metrics-certificates
+ secret:
+ secretName: postgresql-operator-metrics-cert
+ defaultMode: 420
+```
+
+!!! Note
+ When `METRICS_CERT_DIR` is set, the operator automatically enables TLS for
+ the metrics server. You must also update your PodMonitor configuration to
+ use the `https` scheme.
+
+Example `PodMonitor` configuration with TLS enabled:
+
+```yaml
+apiVersion: monitoring.coreos.com/v1
+kind: PodMonitor
+metadata:
+ name: postgresql-operator-controller-manager
+ namespace: postgresql-operator-system
+spec:
+ selector:
+ matchLabels:
+ app.kubernetes.io/name: cloud-native-postgresql
+ podMetricsEndpoints:
+ - port: metrics
+ scheme: https
+ tlsConfig:
+ insecureSkipVerify: true # or configure proper CA validation
+```
+
## How to inspect the exported metrics
In this section we provide basic instructions on how to inspect
diff --git a/product_docs/docs/postgres_for_kubernetes/1/operator_capability_levels.mdx b/product_docs/docs/postgres_for_kubernetes/1/operator_capability_levels.mdx
index d8823b98c9..0a1dec1cdd 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/operator_capability_levels.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/1/operator_capability_levels.mdx
@@ -152,7 +152,8 @@ required, as part of the bootstrap.
Additional databases can be created or managed via
[declarative database management](declarative_database_management.md) using
-the `Database` CRD, also supporting extensions and schemas.
+the `Database` CRD, also supporting extensions, schemas, foreign data wrappers
+(FDW), and foreign servers.
Although no configuration is required to run the cluster, you can customize
both PostgreSQL runtime configuration and PostgreSQL host-based
diff --git a/product_docs/docs/postgres_for_kubernetes/1/operator_conf.mdx b/product_docs/docs/postgres_for_kubernetes/1/operator_conf.mdx
index b64d837913..2cf42fdce9 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/operator_conf.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/1/operator_conf.mdx
@@ -55,6 +55,7 @@ The operator looks for the following environment variables to be defined in the
| `INHERITED_LABELS` | List of label names that, when defined in a `Cluster` metadata, will be inherited by all the generated resources, including pods |
| `INSTANCES_ROLLOUT_DELAY` | The duration (in seconds) to wait between roll-outs of individual PostgreSQL instances within the same cluster during an operator upgrade. The default value is `0`, meaning no delay between upgrades of instances in the same PostgreSQL cluster. |
| `KUBERNETES_CLUSTER_DOMAIN` | Defines the domain suffix for service FQDNs within the Kubernetes cluster. If left unset, it defaults to "cluster.local". |
+| `METRICS_CERT_DIR` | The directory where TLS certificates for the operator metrics server are stored. When set, enables TLS for the metrics endpoint on port 8080. The directory must contain `tls.crt` and `tls.key` files following standard Kubernetes TLS secret conventions. If not set, the metrics server operates without TLS (default behavior). |
| `MONITORING_QUERIES_CONFIGMAP` | The name of a ConfigMap in the operator's namespace with a set of default queries (to be specified under the key `queries`) to be applied to all created Clusters |
| `MONITORING_QUERIES_SECRET` | The name of a Secret in the operator's namespace with a set of default queries (to be specified under the key `queries`) to be applied to all created Clusters |
| `OPERATOR_IMAGE_NAME` | The name of the operator image used to bootstrap Pods. Defaults to the image specified during installation. |
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/index.mdx b/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/index.mdx
index 3df447f0e7..480d183f27 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/index.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/index.mdx
@@ -1,8 +1,9 @@
---
-title: API Reference - v1.27.1
+title: API Reference - v1.28.0-rc1
originalFilePath: src/pg4k.v1.md
navTitle: API Reference
navigation:
+ - v1.28.0-rc1
- v1.27.1
- v1.27.0
- v1.26.1
@@ -1169,8 +1170,8 @@ EPAS and for EPAS defaults to true
[]string
- The list of options that must be passed to initdb when creating the cluster.
-Deprecated: This could lead to inconsistent configurations,
+ The list of options that must be passed to initdb when creating the cluster.
+Deprecated: This could lead to inconsistent configurations,
please use the explicit provided parameters instead.
If defined, explicit values will be ignored.
|
@@ -2049,6 +2050,25 @@ sources to the pods to be used by Env
Defaults to: RuntimeDefault
+podSecurityContext
+core/v1.PodSecurityContext
+ |
+
+ Override the PodSecurityContext applied to every Pod of the cluster.
+When set, this overrides the operator's default PodSecurityContext for the cluster.
+If omitted, the operator defaults are used.
+This field doesn't have any effect if SecurityContextConstraints are present.
+ |
+
+securityContext
+core/v1.SecurityContext
+ |
+
+ Override the SecurityContext applied to every Container in the Pod of the cluster.
+When set, this overrides the operator's default Container SecurityContext.
+If omitted, the operator defaults are used.
+ |
+
tablespaces
[]TablespaceConfiguration
|
@@ -2546,8 +2566,12 @@ PostgreSQL cluster from an existing storage
- [ExtensionSpec](#postgresql-k8s-enterprisedb-io-v1-ExtensionSpec)
+- [FDWSpec](#postgresql-k8s-enterprisedb-io-v1-FDWSpec)
+
- [SchemaSpec](#postgresql-k8s-enterprisedb-io-v1-SchemaSpec)
+- [ServerSpec](#postgresql-k8s-enterprisedb-io-v1-ServerSpec)
+
DatabaseObjectSpec contains the fields which are common to every
database object
@@ -2558,17 +2582,17 @@ database object
string
- Name of the extension/schema
+ Name of the object (extension, schema, FDW, server)
|
ensure
EnsureOption
|
- Specifies whether an extension/schema should be present or absent in
-the database. If set to present, the extension/schema will be
-created if it does not exist. If set to absent, the
-extension/schema will be removed if it exists.
+ Specifies whether an object (e.g., a schema) should be present or absent
+in the database. If set to present, the object will be created if
+it does not exist. If set to absent, the object will be
+removed if it exists.
|
@@ -2837,6 +2861,20 @@ tablespace used for objects created in this database.
The list of extensions to be managed in the database
+fdws
+[]FDWSpec
+ |
+
+ The list of foreign data wrappers to be managed in the database
+ |
+
+servers
+[]ServerSpec
+ |
+
+ The list of foreign servers to be managed in the database
+ |
+
@@ -2889,6 +2927,20 @@ desired state that was synchronized
Extensions is the status of the managed extensions
+fdws
+[]DatabaseObjectStatus
+ |
+
+ FDWs is the status of the managed FDWs
+ |
+
+servers
+[]DatabaseObjectStatus
+ |
+
+ Servers is the status of the managed servers
+ |
+
@@ -2962,6 +3014,8 @@ desired state that was synchronized
- [DatabaseSpec](#postgresql-k8s-enterprisedb-io-v1-DatabaseSpec)
+- [OptionSpec](#postgresql-k8s-enterprisedb-io-v1-OptionSpec)
+
- [RoleConfiguration](#postgresql-k8s-enterprisedb-io-v1-RoleConfiguration)
EnsureOption represents whether we should enforce the presence or absence of
@@ -3178,6 +3232,69 @@ of WAL archiving and backups for this external cluster
+
+
+## FDWSpec
+
+**Appears in:**
+
+- [DatabaseSpec](#postgresql-k8s-enterprisedb-io-v1-DatabaseSpec)
+
+FDWSpec configures a Foreign Data Wrapper in a database
+
+
+| Field | Description |
+
+DatabaseObjectSpec
+DatabaseObjectSpec
+ |
+(Members of DatabaseObjectSpec are embedded into this type.)
+ Common fields
+ |
+
+handler
+string
+ |
+
+ Name of the handler function (e.g., "postgres_fdw_handler").
+This will be empty if no handler is specified. In that case,
+the default handler is registered when the FDW extension is created.
+ |
+
+validator
+string
+ |
+
+ Name of the validator function (e.g., "postgres_fdw_validator").
+This will be empty if no validator is specified. In that case,
+the default validator is registered when the FDW extension is created.
+ |
+
+owner
+string
+ |
+
+ Owner specifies the database role that will own the Foreign Data Wrapper.
+The role must have superuser privileges in the target database.
+ |
+
+options
+[]OptionSpec
+ |
+
+ Options specifies the configuration options for the FDW.
+ |
+
+usage
+[]UsageSpec
+ |
+
+ List of roles for which USAGE privileges on the FDW are granted or revoked.
+ |
+
+
+
+
## FailoverQuorumStatus
@@ -3372,20 +3489,58 @@ database right after is imported - to be used with extreme care
[]string
- List of custom options to pass to the pg_dump command. IMPORTANT:
-Use these options with caution and at your own risk, as the operator
-does not validate their content. Be aware that certain options may
-conflict with the operator's intended functionality or design.
+ List of custom options to pass to the pg_dump command.
+IMPORTANT: Use with caution. The operator does not validate these options,
+and certain flags may interfere with its intended functionality or design.
+You are responsible for ensuring that the provided options are compatible
+with your environment and desired behavior.
|
pgRestoreExtraOptions
[]string
|
- List of custom options to pass to the pg_restore command. IMPORTANT:
-Use these options with caution and at your own risk, as the operator
-does not validate their content. Be aware that certain options may
-conflict with the operator's intended functionality or design.
+ List of custom options to pass to the pg_restore command.
+IMPORTANT: Use with caution. The operator does not validate these options,
+and certain flags may interfere with its intended functionality or design.
+You are responsible for ensuring that the provided options are compatible
+with your environment and desired behavior.
+ |
+
+pgRestorePredataOptions
+[]string
+ |
+
+ Custom options to pass to the pg_restore command during the pre-data
+section. This setting overrides the generic pgRestoreExtraOptions value.
+IMPORTANT: Use with caution. The operator does not validate these options,
+and certain flags may interfere with its intended functionality or design.
+You are responsible for ensuring that the provided options are compatible
+with your environment and desired behavior.
+ |
+
+pgRestoreDataOptions
+[]string
+ |
+
+ Custom options to pass to the pg_restore command during the data
+section. This setting overrides the generic pgRestoreExtraOptions value.
+IMPORTANT: Use with caution. The operator does not validate these options,
+and certain flags may interfere with its intended functionality or design.
+You are responsible for ensuring that the provided options are compatible
+with your environment and desired behavior.
+ |
+
+pgRestorePostdataOptions
+[]string
+ |
+
+ Custom options to pass to the pg_restore command during the post-data
+section. This setting overrides the generic pgRestoreExtraOptions value.
+IMPORTANT: Use with caution. The operator does not validate these options,
+and certain flags may interfere with its intended functionality or design.
+You are responsible for ensuring that the provided options are compatible
+with your environment and desired behavior.
|
@@ -3967,6 +4122,17 @@ you need this functionality, you can create a PodMonitor manually.
you need this functionality, you can create a PodMonitor manually.
+metricsQueriesTTL
+meta/v1.Duration
+ |
+
+ The interval during which metrics computed from queries are considered current.
+Once it is exceeded, a new scrape will trigger a rerun
+of the queries.
+If not set, defaults to 30 seconds, in line with Prometheus scraping defaults.
+Setting this to zero disables the caching mechanism and can cause heavy load on the PostgreSQL server.
+ |
+
@@ -4050,6 +4216,48 @@ possible. false by default.
+
+
+## OptionSpec
+
+**Appears in:**
+
+- [FDWSpec](#postgresql-k8s-enterprisedb-io-v1-FDWSpec)
+
+- [ServerSpec](#postgresql-k8s-enterprisedb-io-v1-ServerSpec)
+
+OptionSpec holds the name, value and the ensure field for an option
+
+
+| Field | Description |
+
+name [Required]
+string
+ |
+
+ Name of the option
+ |
+
+value [Required]
+string
+ |
+
+ Value of the option
+ |
+
+ensure
+EnsureOption
+ |
+
+ Specifies whether an option should be present or absent in
+the database. If set to present, the option will be
+created if it does not exist. If set to absent, the
+option will be removed if it exists.
+ |
+
+
+
+
## PasswordState
@@ -4158,6 +4366,39 @@ by pgbouncer
The pool mode. Default: session.
+serverTLSSecret
+github.com/cloudnative-pg/machinery/pkg/api.LocalObjectReference
+ |
+
+ ServerTLSSecret, when pointing to a TLS secret, provides PgBouncer's
+server_tls_key_file and server_tls_cert_file, used when
+authenticating against PostgreSQL.
+ |
+
+serverCASecret
+github.com/cloudnative-pg/machinery/pkg/api.LocalObjectReference
+ |
+
+ ServerCASecret provides PgBouncer’s server_tls_ca_file, the root
+CA for validating PostgreSQL certificates
+ |
+
+clientCASecret
+github.com/cloudnative-pg/machinery/pkg/api.LocalObjectReference
+ |
+
+ ClientCASecret provides PgBouncer’s client_tls_ca_file, the root
+CA for validating client certificates
+ |
+
+clientTLSSecret
+github.com/cloudnative-pg/machinery/pkg/api.LocalObjectReference
+ |
+
+ ClientTLSSecret provides PgBouncer’s client_tls_key_file (private key)
+and client_tls_cert_file (certificate) used to accept client connections
+ |
+
authQuerySecret
github.com/cloudnative-pg/machinery/pkg/api.LocalObjectReference
|
@@ -4166,6 +4407,7 @@ by pgbouncer
query. In case it is specified, also an AuthQuery
(e.g. "SELECT usename, passwd FROM pg_catalog.pg_shadow WHERE usename=$1")
has to be specified and no automatic CNP Cluster integration will be triggered.
+Deprecated.
authQuery
@@ -4461,6 +4703,13 @@ part for now.
| Field | Description |
+clientTLS
+SecretVersion
+ |
+
+ The client TLS secret version
+ |
+
serverTLS
SecretVersion
|
@@ -5843,6 +6092,51 @@ Map keys are the secret names, map values are the versions
+
+
+## ServerSpec
+
+**Appears in:**
+
+- [DatabaseSpec](#postgresql-k8s-enterprisedb-io-v1-DatabaseSpec)
+
+ServerSpec configures a server of a foreign data wrapper
+
+
+| Field | Description |
+
+DatabaseObjectSpec
+DatabaseObjectSpec
+ |
+(Members of DatabaseObjectSpec are embedded into this type.)
+ Common fields
+ |
+
+fdw [Required]
+string
+ |
+
+ The name of the Foreign Data Wrapper (FDW)
+ |
+
+options
+[]OptionSpec
+ |
+
+ Options specifies the configuration options for the server
+(key is the option name, value is the option value).
+ |
+
+usage
+[]UsageSpec
+ |
+
+ List of roles for which USAGE privileges on the server are granted or revoked.
+ |
+
+
+
+
## ServiceAccountTemplate
@@ -6296,6 +6590,15 @@ to allow for operational continuity. This setting is only applicable if both
standbyNamesPre and standbyNamesPost are unset (empty).
|
+failoverQuorum
+bool
+ |
+
+ FailoverQuorum enables a quorum-based check before failover, improving
+data durability and safety during failover events in {{name.ln}}-managed
+PostgreSQL clusters.
+ |
+
@@ -6510,6 +6813,51 @@ in synchronous replica election in case of failures
+
+
+## UsageSpec
+
+**Appears in:**
+
+- [FDWSpec](#postgresql-k8s-enterprisedb-io-v1-FDWSpec)
+
+- [ServerSpec](#postgresql-k8s-enterprisedb-io-v1-ServerSpec)
+
+UsageSpec configures a USAGE privilege for a foreign data wrapper
+
+
+| Field | Description |
+
+name [Required]
+string
+ |
+
+ Name of the usage
+ |
+
+type
+UsageSpecType
+ |
+
+ The type of usage
+ |
+
+
+
+
+
+
+## UsageSpecType
+
+(Alias of `string`)
+
+**Appears in:**
+
+- [UsageSpec](#postgresql-k8s-enterprisedb-io-v1-UsageSpec)
+
+UsageSpecType describes the type of usage specified in the usage field of the
+Database object.
+
## VolumeSnapshotConfiguration
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.28.0-rc1.mdx b/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.28.0-rc1.mdx
new file mode 100644
index 0000000000..991e15c754
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.28.0-rc1.mdx
@@ -0,0 +1,6829 @@
+---
+title: API Reference - v1.28.0-rc1
+navTitle: v1.28.0-rc1
+pdfExclude: 'true'
+
+---
+
+Package v1 contains API Schema definitions for the postgresql v1 API group
+
+## Resource Types
+
+- [Backup](#postgresql-k8s-enterprisedb-io-v1-Backup)
+- [Cluster](#postgresql-k8s-enterprisedb-io-v1-Cluster)
+- [ClusterImageCatalog](#postgresql-k8s-enterprisedb-io-v1-ClusterImageCatalog)
+- [Database](#postgresql-k8s-enterprisedb-io-v1-Database)
+- [FailoverQuorum](#postgresql-k8s-enterprisedb-io-v1-FailoverQuorum)
+- [ImageCatalog](#postgresql-k8s-enterprisedb-io-v1-ImageCatalog)
+- [Pooler](#postgresql-k8s-enterprisedb-io-v1-Pooler)
+- [Publication](#postgresql-k8s-enterprisedb-io-v1-Publication)
+- [ScheduledBackup](#postgresql-k8s-enterprisedb-io-v1-ScheduledBackup)
+- [Subscription](#postgresql-k8s-enterprisedb-io-v1-Subscription)
+
+
+
+## Backup
+
+A Backup resource is a request for a PostgreSQL backup by the user.
+
+
+| Field | Description |
+
+apiVersion [Required] string | postgresql.k8s.enterprisedb.io/v1 |
+kind [Required] string | Backup |
+metadata [Required]
+meta/v1.ObjectMeta
+ |
+
+ No description provided.Refer to the Kubernetes API documentation for the fields of the metadata field. |
+
+spec [Required]
+BackupSpec
+ |
+
+ Specification of the desired behavior of the backup.
+More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
+ |
+
+status
+BackupStatus
+ |
+
+ Most recently observed status of the backup. This data may not be up to
+date. Populated by the system. Read-only.
+More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
+ |
+
+
+
+
+
+
+## Cluster
+
+Cluster defines the API schema for a highly available PostgreSQL database cluster
+managed by {{name.ln}}.
+
+
+| Field | Description |
+
+apiVersion [Required] string | postgresql.k8s.enterprisedb.io/v1 |
+kind [Required] string | Cluster |
+metadata [Required]
+meta/v1.ObjectMeta
+ |
+
+ No description provided.Refer to the Kubernetes API documentation for the fields of the metadata field. |
+
+spec [Required]
+ClusterSpec
+ |
+
+ Specification of the desired behavior of the cluster.
+More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
+ |
+
+status
+ClusterStatus
+ |
+
+ Most recently observed status of the cluster. This data may not be up
+to date. Populated by the system. Read-only.
+More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
+ |
+
+
+
+
+
+
+## ClusterImageCatalog
+
+ClusterImageCatalog is the Schema for the clusterimagecatalogs API
+
+
+| Field | Description |
+
+apiVersion [Required] string | postgresql.k8s.enterprisedb.io/v1 |
+kind [Required] string | ClusterImageCatalog |
+metadata [Required]
+meta/v1.ObjectMeta
+ |
+
+ No description provided.Refer to the Kubernetes API documentation for the fields of the metadata field. |
+
+spec [Required]
+ImageCatalogSpec
+ |
+
+ Specification of the desired behavior of the ClusterImageCatalog.
+More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
+ |
+
+
+
+
+
+
+## Database
+
+Database is the Schema for the databases API
+
+
+| Field | Description |
+
+apiVersion [Required] string | postgresql.k8s.enterprisedb.io/v1 |
+kind [Required] string | Database |
+metadata [Required]
+meta/v1.ObjectMeta
+ |
+
+ No description provided.Refer to the Kubernetes API documentation for the fields of the metadata field. |
+
+spec [Required]
+DatabaseSpec
+ |
+
+ Specification of the desired Database.
+More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
+ |
+
+status
+DatabaseStatus
+ |
+
+ Most recently observed status of the Database. This data may not be up to
+date. Populated by the system. Read-only.
+More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
+ |
+
+
+
+
+
+
+## FailoverQuorum
+
+**Appears in:**
+
+FailoverQuorum contains the information about the current failover
+quorum status of a PG cluster. It is updated by the instance manager
+of the primary node and reset to zero by the operator to trigger
+an update.
+
+
+| Field | Description |
+
+apiVersion [Required] string | postgresql.k8s.enterprisedb.io/v1 |
+kind [Required] string | FailoverQuorum |
+metadata [Required]
+meta/v1.ObjectMeta
+ |
+
+ No description provided.Refer to the Kubernetes API documentation for the fields of the metadata field. |
+
+status
+FailoverQuorumStatus
+ |
+
+ Most recently observed status of the failover quorum.
+ |
+
+
+
+
+
+
+## ImageCatalog
+
+ImageCatalog is the Schema for the imagecatalogs API
+
+
+| Field | Description |
+
+apiVersion [Required] string | postgresql.k8s.enterprisedb.io/v1 |
+kind [Required] string | ImageCatalog |
+metadata [Required]
+meta/v1.ObjectMeta
+ |
+
+ No description provided.Refer to the Kubernetes API documentation for the fields of the metadata field. |
+
+spec [Required]
+ImageCatalogSpec
+ |
+
+ Specification of the desired behavior of the ImageCatalog.
+More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
+ |
+
+
+
+
+
+
+## Pooler
+
+Pooler is the Schema for the poolers API
+
+
+| Field | Description |
+
+apiVersion [Required] string | postgresql.k8s.enterprisedb.io/v1 |
+kind [Required] string | Pooler |
+metadata [Required]
+meta/v1.ObjectMeta
+ |
+
+ No description provided.Refer to the Kubernetes API documentation for the fields of the metadata field. |
+
+spec [Required]
+PoolerSpec
+ |
+
+ Specification of the desired behavior of the Pooler.
+More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
+ |
+
+status
+PoolerStatus
+ |
+
+ Most recently observed status of the Pooler. This data may not be up to
+date. Populated by the system. Read-only.
+More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
+ |
+
+
+
+
+
+
+## Publication
+
+Publication is the Schema for the publications API
+
+
+| Field | Description |
+
+apiVersion [Required] string | postgresql.k8s.enterprisedb.io/v1 |
+kind [Required] string | Publication |
+metadata [Required]
+meta/v1.ObjectMeta
+ |
+
+ No description provided.Refer to the Kubernetes API documentation for the fields of the metadata field. |
+
+spec [Required]
+PublicationSpec
+ |
+
+ No description provided. |
+
+status [Required]
+PublicationStatus
+ |
+
+ No description provided. |
+
+
+
+
+
+
+## ScheduledBackup
+
+ScheduledBackup is the Schema for the scheduledbackups API
+
+
+| Field | Description |
+
+apiVersion [Required] string | postgresql.k8s.enterprisedb.io/v1 |
+kind [Required] string | ScheduledBackup |
+metadata [Required]
+meta/v1.ObjectMeta
+ |
+
+ No description provided.Refer to the Kubernetes API documentation for the fields of the metadata field. |
+
+spec [Required]
+ScheduledBackupSpec
+ |
+
+ Specification of the desired behavior of the ScheduledBackup.
+More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
+ |
+
+status
+ScheduledBackupStatus
+ |
+
+ Most recently observed status of the ScheduledBackup. This data may not be up
+to date. Populated by the system. Read-only.
+More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
+ |
+
+
+
+
+
+
+## Subscription
+
+Subscription is the Schema for the subscriptions API
+
+
+| Field | Description |
+
+apiVersion [Required] string | postgresql.k8s.enterprisedb.io/v1 |
+kind [Required] string | Subscription |
+metadata [Required]
+meta/v1.ObjectMeta
+ |
+
+ No description provided.Refer to the Kubernetes API documentation for the fields of the metadata field. |
+
+spec [Required]
+SubscriptionSpec
+ |
+
+ No description provided. |
+
+status [Required]
+SubscriptionStatus
+ |
+
+ No description provided. |
+
+
+
+
+
+
+## AffinityConfiguration
+
+**Appears in:**
+
+- [ClusterSpec](#postgresql-k8s-enterprisedb-io-v1-ClusterSpec)
+
+AffinityConfiguration contains the info we need to create the
+affinity rules for Pods
+
+
+| Field | Description |
+
+enablePodAntiAffinity
+bool
+ |
+
+ Activates anti-affinity for the pods. The operator will define pods
+anti-affinity unless this field is explicitly set to false
+ |
+
+topologyKey
+string
+ |
+
+ TopologyKey to use for anti-affinity configuration. See k8s documentation
+for more info on that
+ |
+
+nodeSelector
+map[string]string
+ |
+
+ NodeSelector is a map of key-value pairs used to define the nodes on which
+the pods can run.
+More info: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
+ |
+
+nodeAffinity
+core/v1.NodeAffinity
+ |
+
+ NodeAffinity describes node affinity scheduling rules for the pod.
+More info: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity
+ |
+
+tolerations
+[]core/v1.Toleration
+ |
+
+ Tolerations is a list of Tolerations that should be set for all the pods, in order to allow them to run
+on tainted nodes.
+More info: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/
+ |
+
+podAntiAffinityType
+string
+ |
+
+ PodAntiAffinityType allows the user to decide whether pod anti-affinity between cluster instances has to be
+considered a strong requirement during scheduling or not. Allowed values are: "preferred" (default if empty) or
+"required". Setting it to "required" could lead to instances remaining pending until new Kubernetes nodes are
+added if all the existing nodes don't match the required pod anti-affinity rule.
+More info:
+https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity
+ |
+
+additionalPodAntiAffinity
+core/v1.PodAntiAffinity
+ |
+
+ AdditionalPodAntiAffinity allows specifying pod anti-affinity terms to be added to the ones generated
+by the operator if EnablePodAntiAffinity is set to true (default) or to be used exclusively if set to false.
+ |
+
+additionalPodAffinity
+core/v1.PodAffinity
+ |
+
+ AdditionalPodAffinity allows specifying pod affinity terms to be passed to all the cluster's pods.
+ |
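+
+A sketch of how these fields combine in a Cluster manifest (names, label
+keys, and taint values are illustrative):
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+  name: cluster-example
+spec:
+  instances: 3
+  storage:
+    size: 1Gi
+  affinity:
+    enablePodAntiAffinity: true
+    topologyKey: kubernetes.io/hostname
+    podAntiAffinityType: required
+    tolerations:
+      - key: dedicated
+        operator: Equal
+        value: postgres
+        effect: NoSchedule
+```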
+
+
+
+
+
+
+## AvailableArchitecture
+
+**Appears in:**
+
+- [ClusterStatus](#postgresql-k8s-enterprisedb-io-v1-ClusterStatus)
+
+AvailableArchitecture represents the state of a cluster's architecture
+
+
+| Field | Description |
+
+goArch [Required]
+string
+ |
+
+ GoArch is the name of the executable architecture
+ |
+
+hash [Required]
+string
+ |
+
+ Hash is the hash of the executable
+ |
+
+
+
+
+
+
+## BackupConfiguration
+
+**Appears in:**
+
+- [ClusterSpec](#postgresql-k8s-enterprisedb-io-v1-ClusterSpec)
+
+BackupConfiguration defines how backups of the cluster are taken.
+The supported backup methods are BarmanObjectStore and VolumeSnapshot.
+For details and examples refer to the Backup and Recovery section of the
+documentation
+
+
+| Field | Description |
+
+volumeSnapshot
+VolumeSnapshotConfiguration
+ |
+
+ VolumeSnapshot provides the configuration for the execution of volume snapshot backups.
+ |
+
+barmanObjectStore
+github.com/cloudnative-pg/barman-cloud/pkg/api.BarmanObjectStoreConfiguration
+ |
+
+ The configuration for the barman-cloud tool suite
+ |
+
+retentionPolicy
+string
+ |
+
+ RetentionPolicy is the retention policy to be used for backups
+and WALs (e.g. '60d'). The retention policy is expressed in the form
+of XXu where XX is a positive integer and u is in [dwm] -
+days, weeks, months.
+It's currently only applicable when using the BarmanObjectStore method.
+ |
+
+target
+BackupTarget
+ |
+
+ The policy to decide which instance should perform backups. Available
+options are empty string, which will default to prefer-standby policy,
+primary to have backups run always on primary instances, prefer-standby
+to have backups run preferably on the most updated standby, if available.
+ |
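+
+For example, a sketch of a backup stanza using the BarmanObjectStore method
+(bucket, secret names, and secret keys are hypothetical):
+
+```yaml
+spec:
+  backup:
+    retentionPolicy: "30d"
+    target: prefer-standby
+    barmanObjectStore:
+      destinationPath: s3://my-bucket/backups
+      s3Credentials:
+        accessKeyId:
+          name: aws-creds
+          key: ACCESS_KEY_ID
+        secretAccessKey:
+          name: aws-creds
+          key: ACCESS_SECRET_KEY
+```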
+
+
+
+
+
+
+## BackupMethod
+
+(Alias of `string`)
+
+**Appears in:**
+
+- [BackupSpec](#postgresql-k8s-enterprisedb-io-v1-BackupSpec)
+
+- [BackupStatus](#postgresql-k8s-enterprisedb-io-v1-BackupStatus)
+
+- [ScheduledBackupSpec](#postgresql-k8s-enterprisedb-io-v1-ScheduledBackupSpec)
+
+BackupMethod defines the way of executing the physical base backups of
+the selected PostgreSQL instance
+
+
+
+## BackupPhase
+
+(Alias of `string`)
+
+**Appears in:**
+
+- [BackupStatus](#postgresql-k8s-enterprisedb-io-v1-BackupStatus)
+
+BackupPhase is the phase of the backup
+
+
+
+## BackupPluginConfiguration
+
+**Appears in:**
+
+- [BackupSpec](#postgresql-k8s-enterprisedb-io-v1-BackupSpec)
+
+- [ScheduledBackupSpec](#postgresql-k8s-enterprisedb-io-v1-ScheduledBackupSpec)
+
+BackupPluginConfiguration contains the backup configuration used by
+the backup plugin
+
+
+| Field | Description |
+
+name [Required]
+string
+ |
+
+ Name is the name of the plugin managing this backup
+ |
+
+parameters
+map[string]string
+ |
+
+ Parameters are the configuration parameters passed to the backup
+plugin for this backup
+ |
+
+
+
+
+
+
+## BackupSnapshotElementStatus
+
+**Appears in:**
+
+- [BackupSnapshotStatus](#postgresql-k8s-enterprisedb-io-v1-BackupSnapshotStatus)
+
+BackupSnapshotElementStatus is a volume snapshot that is part of a volume snapshot method backup
+
+
+| Field | Description |
+
+name [Required]
+string
+ |
+
+ Name is the snapshot resource name
+ |
+
+type [Required]
+string
+ |
+
+ Type is the role of the snapshot in the cluster, such as PG_DATA, PG_WAL and PG_TABLESPACE
+ |
+
+tablespaceName
+string
+ |
+
+ TablespaceName is the name of the snapshotted tablespace. Only set
+when type is PG_TABLESPACE
+ |
+
+
+
+
+
+
+## BackupSnapshotStatus
+
+**Appears in:**
+
+- [BackupStatus](#postgresql-k8s-enterprisedb-io-v1-BackupStatus)
+
+BackupSnapshotStatus contains the fields exclusive to the volumeSnapshot method backup
+
+
+
+
+
+## BackupSource
+
+**Appears in:**
+
+- [BootstrapRecovery](#postgresql-k8s-enterprisedb-io-v1-BootstrapRecovery)
+
+BackupSource contains the backup we need to restore from, plus some
+information that could be needed to correctly restore it.
+
+
+
+
+
+## BackupSpec
+
+**Appears in:**
+
+- [Backup](#postgresql-k8s-enterprisedb-io-v1-Backup)
+
+BackupSpec defines the desired state of Backup
+
+
+| Field | Description |
+
+cluster [Required]
+github.com/cloudnative-pg/machinery/pkg/api.LocalObjectReference
+ |
+
+ The cluster to backup
+ |
+
+target
+BackupTarget
+ |
+
+ The policy to decide which instance should perform this backup. If empty,
+it defaults to cluster.spec.backup.target.
+Available options are empty string, primary and prefer-standby.
+primary to have backups run always on primary instances,
+prefer-standby to have backups run preferably on the most updated
+standby, if available.
+ |
+
+method
+BackupMethod
+ |
+
+ The backup method to be used, possible options are barmanObjectStore,
+volumeSnapshot or plugin. Defaults to: barmanObjectStore.
+ |
+
+pluginConfiguration
+BackupPluginConfiguration
+ |
+
+ Configuration parameters passed to the plugin managing this backup
+ |
+
+online
+bool
+ |
+
+ Whether the default type of backup with volume snapshots is
+online/hot (true, default) or offline/cold (false).
+Overrides the default setting specified in the cluster field '.spec.backup.volumeSnapshot.online'
+ |
+
+onlineConfiguration
+OnlineConfiguration
+ |
+
+ Configuration parameters to control the online/hot backup with volume snapshots.
+Overrides the default settings specified in the cluster '.backup.volumeSnapshot.onlineConfiguration' stanza
+ |
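+
+A minimal on-demand Backup sketch (resource and cluster names are
+illustrative):
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Backup
+metadata:
+  name: backup-example
+spec:
+  cluster:
+    name: cluster-example
+  method: barmanObjectStore
+  target: prefer-standby
+```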
+
+
+
+
+
+
+## BackupStatus
+
+**Appears in:**
+
+- [Backup](#postgresql-k8s-enterprisedb-io-v1-Backup)
+
+BackupStatus defines the observed state of Backup
+
+
+| Field | Description |
+
+BarmanCredentials
+github.com/cloudnative-pg/barman-cloud/pkg/api.BarmanCredentials
+ |
+(Members of BarmanCredentials are embedded into this type.)
+ The potential credentials for each cloud provider
+ |
+
+majorVersion [Required]
+int
+ |
+
+ The PostgreSQL major version that was running when the
+backup was taken.
+ |
+
+endpointCA
+github.com/cloudnative-pg/machinery/pkg/api.SecretKeySelector
+ |
+
+ EndpointCA store the CA bundle of the barman endpoint.
+Useful when using self-signed certificates to avoid
+errors with certificate issuer and barman-cloud-wal-archive.
+ |
+
+endpointURL
+string
+ |
+
+ Endpoint to be used to upload data to the cloud,
+overriding the automatic endpoint discovery
+ |
+
+destinationPath
+string
+ |
+
+ The path where to store the backup (e.g. s3://bucket/path/to/folder).
+This path, with different destination folders, will be used for WALs
+and for data. This may not be populated in case of errors.
+ |
+
+serverName
+string
+ |
+
+ The server name on S3, the cluster name is used if this
+parameter is omitted
+ |
+
+encryption
+string
+ |
+
+ Encryption method required by the S3 API
+ |
+
+backupId
+string
+ |
+
+ The ID of the Barman backup
+ |
+
+backupName
+string
+ |
+
+ The Name of the Barman backup
+ |
+
+phase
+BackupPhase
+ |
+
+ The last backup status
+ |
+
+startedAt
+meta/v1.Time
+ |
+
+ When the backup was started
+ |
+
+stoppedAt
+meta/v1.Time
+ |
+
+ When the backup was terminated
+ |
+
+beginWal
+string
+ |
+
+ The starting WAL
+ |
+
+endWal
+string
+ |
+
+ The ending WAL
+ |
+
+beginLSN
+string
+ |
+
+ The starting xlog
+ |
+
+endLSN
+string
+ |
+
+ The ending xlog
+ |
+
+error
+string
+ |
+
+ The detected error
+ |
+
+commandOutput
+string
+ |
+
+ Unused. Retained for compatibility with old versions.
+ |
+
+commandError
+string
+ |
+
+ The backup command output in case of error
+ |
+
+backupLabelFile
+[]byte
+ |
+
+ Backup label file content as returned by Postgres in case of online (hot) backups
+ |
+
+tablespaceMapFile
+[]byte
+ |
+
+ Tablespace map file content as returned by Postgres in case of online (hot) backups
+ |
+
+instanceID
+InstanceID
+ |
+
+ Information to identify the instance where the backup has been taken from
+ |
+
+snapshotBackupStatus
+BackupSnapshotStatus
+ |
+
+ Status of the volumeSnapshot backup
+ |
+
+method
+BackupMethod
+ |
+
+ The backup method being used
+ |
+
+online
+bool
+ |
+
+ Whether the backup was online/hot (true) or offline/cold (false)
+ |
+
+pluginMetadata
+map[string]string
+ |
+
+ A map containing the plugin metadata
+ |
+
+
+
+
+
+
+## BackupTarget
+
+(Alias of `string`)
+
+**Appears in:**
+
+- [BackupConfiguration](#postgresql-k8s-enterprisedb-io-v1-BackupConfiguration)
+
+- [BackupSpec](#postgresql-k8s-enterprisedb-io-v1-BackupSpec)
+
+- [ScheduledBackupSpec](#postgresql-k8s-enterprisedb-io-v1-ScheduledBackupSpec)
+
+BackupTarget describes the preferred targets for a backup
+
+
+
+## BootstrapConfiguration
+
+**Appears in:**
+
+- [ClusterSpec](#postgresql-k8s-enterprisedb-io-v1-ClusterSpec)
+
+BootstrapConfiguration contains information about how to create the PostgreSQL
+cluster. Only a single bootstrap method can be defined among the supported
+ones. initdb will be used as the bootstrap method if left
+unspecified. Refer to the Bootstrap page of the documentation for more
+information.
+
+
+| Field | Description |
+
+initdb
+BootstrapInitDB
+ |
+
+ Bootstrap the cluster via initdb
+ |
+
+recovery
+BootstrapRecovery
+ |
+
+ Bootstrap the cluster from a backup
+ |
+
+pg_basebackup
+BootstrapPgBaseBackup
+ |
+
+ Bootstrap the cluster taking a physical backup of another compatible
+PostgreSQL instance
+ |
+
+
+
+
+
+
+## BootstrapInitDB
+
+**Appears in:**
+
+- [BootstrapConfiguration](#postgresql-k8s-enterprisedb-io-v1-BootstrapConfiguration)
+
+BootstrapInitDB is the configuration of the bootstrap process when
+initdb is used
+Refer to the Bootstrap page of the documentation for more information.
+
+
+| Field | Description |
+
+database
+string
+ |
+
+ Name of the database used by the application. Default: app.
+ |
+
+owner
+string
+ |
+
+ Name of the owner of the database in the instance to be used
+by applications. Defaults to the value of the database key.
+ |
+
+secret
+github.com/cloudnative-pg/machinery/pkg/api.LocalObjectReference
+ |
+
+ Name of the secret containing the initial credentials for the
+owner of the user database. If empty a new secret will be
+created from scratch
+ |
+
+redwood
+bool
+ |
+
+ Whether to enable/disable Redwood compatibility. Requires
+EPAS; for EPAS it defaults to true
+ |
+
+options
+[]string
+ |
+
+ The list of options that must be passed to initdb when creating the cluster.
+Deprecated: This could lead to inconsistent configurations,
+please use the explicit provided parameters instead.
+If defined, explicit values will be ignored.
+ |
+
+dataChecksums
+bool
+ |
+
+ Whether the -k option should be passed to initdb,
+enabling checksums on data pages (default: false)
+ |
+
+encoding
+string
+ |
+
+ The value to be passed as option --encoding for initdb (default: UTF8)
+ |
+
+localeCollate
+string
+ |
+
+ The value to be passed as option --lc-collate for initdb (default: C)
+ |
+
+localeCType
+string
+ |
+
+ The value to be passed as option --lc-ctype for initdb (default: C)
+ |
+
+locale
+string
+ |
+
+ Sets the default collation order and character classification in the new database.
+ |
+
+localeProvider
+string
+ |
+
+ This option sets the locale provider for databases created in the new cluster.
+Available from PostgreSQL 16.
+ |
+
+icuLocale
+string
+ |
+
+ Specifies the ICU locale when the ICU provider is used.
+This option requires localeProvider to be set to icu.
+Available from PostgreSQL 15.
+ |
+
+icuRules
+string
+ |
+
+ Specifies additional collation rules to customize the behavior of the default collation.
+This option requires localeProvider to be set to icu.
+Available from PostgreSQL 16.
+ |
+
+builtinLocale
+string
+ |
+
+ Specifies the locale name when the builtin provider is used.
+This option requires localeProvider to be set to builtin.
+Available from PostgreSQL 17.
+ |
+
+walSegmentSize
+int
+ |
+
+ The value in megabytes (1 to 1024) to be passed to the --wal-segsize
+option for initdb (default: empty, resulting in PostgreSQL default: 16MB)
+ |
+
+postInitSQL
+[]string
+ |
+
+ List of SQL queries to be executed as a superuser in the postgres
+database right after the cluster has been created - to be used with extreme care
+(by default empty)
+ |
+
+postInitApplicationSQL
+[]string
+ |
+
+ List of SQL queries to be executed as a superuser in the application
+database right after the cluster has been created - to be used with extreme care
+(by default empty)
+ |
+
+postInitTemplateSQL
+[]string
+ |
+
+ List of SQL queries to be executed as a superuser in the template1
+database right after the cluster has been created - to be used with extreme care
+(by default empty)
+ |
+
+import
+Import
+ |
+
+ Bootstraps the new cluster by importing data from an existing PostgreSQL
+instance using logical backup (pg_dump and pg_restore)
+ |
+
+postInitApplicationSQLRefs
+SQLRefs
+ |
+
+ List of references to ConfigMaps or Secrets containing SQL files
+to be executed as a superuser in the application database right after
+the cluster has been created. The references are processed in a specific order:
+first, all Secrets are processed, followed by all ConfigMaps.
+Within each group, the processing order follows the sequence specified
+in their respective arrays.
+(by default empty)
+ |
+
+postInitTemplateSQLRefs
+SQLRefs
+ |
+
+ List of references to ConfigMaps or Secrets containing SQL files
+to be executed as a superuser in the template1 database right after
+the cluster has been created. The references are processed in a specific order:
+first, all Secrets are processed, followed by all ConfigMaps.
+Within each group, the processing order follows the sequence specified
+in their respective arrays.
+(by default empty)
+ |
+
+postInitSQLRefs
+SQLRefs
+ |
+
+ List of references to ConfigMaps or Secrets containing SQL files
+to be executed as a superuser in the postgres database right after
+the cluster has been created. The references are processed in a specific order:
+first, all Secrets are processed, followed by all ConfigMaps.
+Within each group, the processing order follows the sequence specified
+in their respective arrays.
+(by default empty)
+ |
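+
+A sketch of an initdb bootstrap combining a few of the options above
+(database, owner, and the SQL statement are illustrative):
+
+```yaml
+spec:
+  bootstrap:
+    initdb:
+      database: app
+      owner: app
+      encoding: UTF8
+      dataChecksums: true
+      # runs as superuser in the postgres database; use with care
+      postInitSQL:
+        - CREATE ROLE reader NOLOGIN
+```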
+
+
+
+
+
+
+## BootstrapPgBaseBackup
+
+**Appears in:**
+
+- [BootstrapConfiguration](#postgresql-k8s-enterprisedb-io-v1-BootstrapConfiguration)
+
+BootstrapPgBaseBackup contains the configuration required to take
+a physical backup of an existing PostgreSQL cluster
+
+
+| Field | Description |
+
+source [Required]
+string
+ |
+
+ The name of the server of which we need to take a physical backup
+ |
+
+database
+string
+ |
+
+ Name of the database used by the application. Default: app.
+ |
+
+owner
+string
+ |
+
+ Name of the owner of the database in the instance to be used
+by applications. Defaults to the value of the database key.
+ |
+
+secret
+github.com/cloudnative-pg/machinery/pkg/api.LocalObjectReference
+ |
+
+ Name of the secret containing the initial credentials for the
+owner of the user database. If empty a new secret will be
+created from scratch
+ |
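+
+A sketch of a pg_basebackup bootstrap; the source must be listed under
+externalClusters (host and secret names are hypothetical):
+
+```yaml
+spec:
+  bootstrap:
+    pg_basebackup:
+      source: source-cluster
+  externalClusters:
+    - name: source-cluster
+      connectionParameters:
+        host: source-cluster-rw.default.svc
+        user: streaming_replica
+        sslmode: verify-full
+      sslKey:
+        name: source-cluster-replication
+        key: tls.key
+      sslCert:
+        name: source-cluster-replication
+        key: tls.crt
+      sslRootCert:
+        name: source-cluster-ca
+        key: ca.crt
+```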
+
+
+
+
+
+
+## BootstrapRecovery
+
+**Appears in:**
+
+- [BootstrapConfiguration](#postgresql-k8s-enterprisedb-io-v1-BootstrapConfiguration)
+
+BootstrapRecovery contains the configuration required to restore
+from an existing cluster using 3 methodologies: external cluster,
+volume snapshots or backup objects. Full recovery and Point-In-Time
+Recovery are supported.
+The method can also be used to create clusters in continuous recovery
+(replica clusters), also supporting cascading replication when instances > 1.
+Once the cluster exits recovery, the password for the superuser
+will be changed through the provided secret.
+Refer to the Bootstrap page of the documentation for more information.
+
+
+
+| Field | Description |
+
+backup
+BackupSource
+ |
+
+ The backup object containing the physical base backup from which to
+initiate the recovery procedure.
+Mutually exclusive with source and volumeSnapshots.
+ |
+
+source
+string
+ |
+
+ The external cluster whose backup we will restore. This is also
+used as the name of the folder under which the backup is stored,
+so it must be set to the name of the source cluster.
+Mutually exclusive with backup.
+ |
+
+volumeSnapshots
+DataSource
+ |
+
+ The static PVC data source(s) from which to initiate the
+recovery procedure. Currently supporting VolumeSnapshot
+and PersistentVolumeClaim resources that map an existing
+PVC group, compatible with {{name.ln}}, and taken with
+a cold backup copy on a fenced Postgres instance (limitation
+which will be removed in the future when online backup
+will be implemented).
+Mutually exclusive with backup.
+ |
+
+recoveryTarget
+RecoveryTarget
+ |
+
+ By default, the recovery process applies all the available
+WAL files in the archive (full recovery). However, you can also
+end the recovery as soon as a consistent state is reached or
+recover to a point-in-time (PITR) by specifying a RecoveryTarget object,
+as expected by PostgreSQL (e.g., timestamp, transaction Id, LSN, ...).
+More info: https://www.postgresql.org/docs/current/runtime-config-wal.html#RUNTIME-CONFIG-WAL-RECOVERY-TARGET
+ |
+
+database
+string
+ |
+
+ Name of the database used by the application. Default: app.
+ |
+
+owner
+string
+ |
+
+ Name of the owner of the database in the instance to be used
+by applications. Defaults to the value of the database key.
+ |
+
+secret
+github.com/cloudnative-pg/machinery/pkg/api.LocalObjectReference
+ |
+
+ Name of the secret containing the initial credentials for the
+owner of the user database. If empty a new secret will be
+created from scratch
+ |
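+
+A point-in-time recovery sketch from an existing Backup object (names and
+the target timestamp are illustrative):
+
+```yaml
+spec:
+  bootstrap:
+    recovery:
+      backup:
+        name: backup-example
+      recoveryTarget:
+        targetTime: "2025-04-01 10:00:00+00"
+```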
+
+
+
+
+
+
+## CatalogImage
+
+**Appears in:**
+
+- [ImageCatalogSpec](#postgresql-k8s-enterprisedb-io-v1-ImageCatalogSpec)
+
+CatalogImage defines the image and major version
+
+
+| Field | Description |
+
+image [Required]
+string
+ |
+
+ The image reference
+ |
+
+major [Required]
+int
+ |
+
+ The PostgreSQL major version of the image. Must be unique within the catalog.
+ |
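+
+A sketch of an ImageCatalog listing one image per major version, together
+with the imageCatalogRef stanza a Cluster would use to select one (image
+tags and names are illustrative):
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: ImageCatalog
+metadata:
+  name: postgresql-catalog
+spec:
+  images:
+    - major: 17
+      image: quay.io/enterprisedb/postgresql:17.5
+    - major: 18
+      image: quay.io/enterprisedb/postgresql:18.1
+---
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+  name: cluster-example
+spec:
+  instances: 3
+  storage:
+    size: 1Gi
+  imageCatalogRef:
+    apiGroup: postgresql.k8s.enterprisedb.io
+    kind: ImageCatalog
+    name: postgresql-catalog
+    major: 18
+```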
+
+
+
+
+
+
+## CertificatesConfiguration
+
+**Appears in:**
+
+- [CertificatesStatus](#postgresql-k8s-enterprisedb-io-v1-CertificatesStatus)
+
+- [ClusterSpec](#postgresql-k8s-enterprisedb-io-v1-ClusterSpec)
+
+CertificatesConfiguration contains the needed configurations to handle server certificates.
+
+
+| Field | Description |
+
+serverCASecret
+string
+ |
+
+ The secret containing the Server CA certificate. If not defined, a new secret will be created
+with a self-signed CA and will be used to generate the TLS certificate ServerTLSSecret.
+
+Contains:
+
+
+
+ca.crt: CA that should be used to validate the server certificate,
+used as sslrootcert in client connection strings.
+ca.key: key used to generate Server SSL certs, if ServerTLSSecret is provided,
+this can be omitted.
+
+ |
+
+serverTLSSecret
+string
+ |
+
+ The secret of type kubernetes.io/tls containing the server TLS certificate and key that will be set as
+ssl_cert_file and ssl_key_file so that clients can connect to postgres securely.
+If not defined, ServerCASecret must also provide ca.key, and a new secret will be
+created using the provided CA.
+ |
+
+replicationTLSSecret
+string
+ |
+
+ The secret of type kubernetes.io/tls containing the client certificate to authenticate as
+the streaming_replica user.
+If not defined, ClientCASecret must also provide ca.key, and a new secret will be
+created using the provided CA.
+ |
+
+clientCASecret
+string
+ |
+
+ The secret containing the Client CA certificate. If not defined, a new secret will be created
+with a self-signed CA and will be used to generate all the client certificates.
+
+Contains:
+
+
+
+ca.crt: CA that should be used to validate the client certificates,
+used as ssl_ca_file of all the instances.
+ca.key: key used to generate client certificates, if ReplicationTLSSecret is provided,
+this can be omitted.
+
+ |
+
+serverAltDNSNames
+[]string
+ |
+
+ The list of the server alternative DNS names to be added to the generated server TLS certificates, when required.
+ |
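+
+A sketch of user-provided server and client certificates (secret names are
+hypothetical; each secret must contain the keys described above):
+
+```yaml
+spec:
+  certificates:
+    serverCASecret: cluster-example-server-ca
+    serverTLSSecret: cluster-example-server-tls
+    clientCASecret: cluster-example-client-ca
+    replicationTLSSecret: cluster-example-replication-tls
+    serverAltDNSNames:
+      - cluster-example.example.com
+```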
+
+
+
+
+
+
+## CertificatesStatus
+
+**Appears in:**
+
+- [ClusterStatus](#postgresql-k8s-enterprisedb-io-v1-ClusterStatus)
+
+CertificatesStatus contains configuration certificates and related expiration dates.
+
+
+| Field | Description |
+
+CertificatesConfiguration
+CertificatesConfiguration
+ |
+(Members of CertificatesConfiguration are embedded into this type.)
+ Needed configurations to handle server certificates, initialized with default values, if needed.
+ |
+
+expirations
+map[string]string
+ |
+
+ Expiration dates for all certificates.
+ |
+
+
+
+
+
+
+## ClusterMonitoringTLSConfiguration
+
+**Appears in:**
+
+- [MonitoringConfiguration](#postgresql-k8s-enterprisedb-io-v1-MonitoringConfiguration)
+
+ClusterMonitoringTLSConfiguration is the type containing the TLS configuration
+for the cluster's monitoring
+
+
+| Field | Description |
+
+enabled
+bool
+ |
+
+ Enable TLS for the monitoring endpoint.
+Changing this option will force a rollout of all instances.
+ |
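+
+A minimal sketch enabling TLS on the monitoring endpoint (remember that
+changing this flag forces a rollout of all instances):
+
+```yaml
+spec:
+  monitoring:
+    tls:
+      enabled: true
+```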
+
+
+
+
+
+
+## ClusterSpec
+
+**Appears in:**
+
+- [Cluster](#postgresql-k8s-enterprisedb-io-v1-Cluster)
+
+ClusterSpec defines the desired state of a PostgreSQL cluster managed by
+{{name.ln}}.
+
+
+| Field | Description |
+
+description
+string
+ |
+
+ Description of this PostgreSQL cluster
+ |
+
+inheritedMetadata
+EmbeddedObjectMetadata
+ |
+
+ Metadata that will be inherited by all objects related to the Cluster
+ |
+
+imageName
+string
+ |
+
+ Name of the container image, supporting both tags (<image>:<tag>)
+and digests for deterministic and repeatable deployments
+(<image>:<tag>@sha256:<digestValue>)
+ |
+
+imageCatalogRef
+ImageCatalogRef
+ |
+
+ Defines the major PostgreSQL version we want to use within an ImageCatalog
+ |
+
+imagePullPolicy
+core/v1.PullPolicy
+ |
+
+ Image pull policy.
+One of Always, Never or IfNotPresent.
+If not defined, it defaults to IfNotPresent.
+Cannot be updated.
+More info: https://kubernetes.io/docs/concepts/containers/images#updating-images
+ |
+
+schedulerName
+string
+ |
+
+ If specified, the pod will be dispatched by the specified Kubernetes
+scheduler. If not specified, the pod will be dispatched by the default
+scheduler. More info:
+https://kubernetes.io/docs/concepts/scheduling-eviction/kube-scheduler/
+ |
+
+postgresUID
+int64
+ |
+
+ The UID of the postgres user inside the image, defaults to 26
+ |
+
+postgresGID
+int64
+ |
+
+ The GID of the postgres user inside the image, defaults to 26
+ |
+
+instances [Required]
+int
+ |
+
+ Number of instances required in the cluster
+ |
+
+minSyncReplicas
+int
+ |
+
+ Minimum number of instances required in synchronous replication with the
+primary. Undefined or 0 allow writes to complete when no standby is
+available.
+ |
+
+maxSyncReplicas
+int
+ |
+
+ The target value for the synchronous replication quorum, which can be
+decreased if the number of ready standbys is lower than this.
+Undefined or 0 disable synchronous replication.
+ |
+
+postgresql
+PostgresConfiguration
+ |
+
+ Configuration of the PostgreSQL server
+ |
+
+replicationSlots
+ReplicationSlotsConfiguration
+ |
+
+ Replication slots management configuration
+ |
+
+bootstrap
+BootstrapConfiguration
+ |
+
+ Instructions to bootstrap this cluster
+ |
+
+replica
+ReplicaClusterConfiguration
+ |
+
+ Replica cluster configuration
+ |
+
+superuserSecret
+github.com/cloudnative-pg/machinery/pkg/api.LocalObjectReference
+ |
+
+ The secret containing the superuser password. If not defined a new
+secret will be created with a randomly generated password
+ |
+
+enableSuperuserAccess
+bool
+ |
+
+ When this option is enabled, the operator will use the SuperuserSecret
+to update the postgres user password (if the secret is
+not present, the operator will automatically create one). When this
+option is disabled, the operator will ignore the SuperuserSecret content, delete
+it when automatically created, and then blank the password of the postgres
+user by setting it to NULL. Disabled by default.
+ |
+
+certificates
+CertificatesConfiguration
+ |
+
+ The configuration for the CA and related certificates
+ |
+
+imagePullSecrets
+[]github.com/cloudnative-pg/machinery/pkg/api.LocalObjectReference
+ |
+
+ The list of pull secrets to be used to pull the images. If the license key
+contains a pull secret, that secret will be automatically included.
+ |
+
+storage
+StorageConfiguration
+ |
+
+ Configuration of the storage of the instances
+ |
+
+serviceAccountTemplate
+ServiceAccountTemplate
+ |
+
+ Configure the generation of the service account
+ |
+
+walStorage
+StorageConfiguration
+ |
+
+ Configuration of the storage for PostgreSQL WAL (Write-Ahead Log)
+ |
+
+ephemeralVolumeSource
+core/v1.EphemeralVolumeSource
+ |
+
+ EphemeralVolumeSource allows the user to configure the source of ephemeral volumes.
+ |
+
+startDelay
+int32
+ |
+
+ The time in seconds that is allowed for a PostgreSQL instance to
+successfully start up (default 3600).
+The startup probe failure threshold is derived from this value using the formula:
+ceiling(startDelay / 10).
+ |
+
+stopDelay
+int32
+ |
+
+ The time in seconds that is allowed for a PostgreSQL instance to
+gracefully shutdown (default 1800)
+ |
+
+smartStopDelay
+int32
+ |
+
+ Deprecated: please use SmartShutdownTimeout instead
+ |
+
+smartShutdownTimeout
+int32
+ |
+
+ The time in seconds that controls the window of time reserved for the smart shutdown of Postgres to complete.
+Make sure you reserve enough time for the operator to request a fast shutdown of Postgres
+(that is: stopDelay - smartShutdownTimeout). Default is 180 seconds.
+ |
+
+switchoverDelay
+int32
+ |
+
+ The time in seconds that is allowed for a primary PostgreSQL instance
+to gracefully shutdown during a switchover.
+Default value is 3600 seconds (1 hour).
+ |
+
+failoverDelay
+int32
+ |
+
+ The amount of time (in seconds) to wait before triggering a failover
+after the primary PostgreSQL instance in the cluster was detected
+to be unhealthy
+ |
+
+livenessProbeTimeout
+int32
+ |
+
+ LivenessProbeTimeout is the time (in seconds) that is allowed for a PostgreSQL instance
+to successfully respond to the liveness probe (default 30).
+The Liveness probe failure threshold is derived from this value using the formula:
+ceiling(livenessProbeTimeout / 10).
+ |
+
+affinity
+AffinityConfiguration
+ |
+
+ Affinity/Anti-affinity rules for Pods
+ |
+
+topologySpreadConstraints
+[]core/v1.TopologySpreadConstraint
+ |
+
+ TopologySpreadConstraints specifies how to spread matching pods among the given topology.
+More info:
+https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/
+ |
+
+resources
+core/v1.ResourceRequirements
+ |
+
+ Resources requirements of every generated Pod. Please refer to
+https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
+for more information.
+ |
+
+ephemeralVolumesSizeLimit
+EphemeralVolumesSizeLimitConfiguration
+ |
+
+ EphemeralVolumesSizeLimit allows the user to set the limits for the ephemeral
+volumes
+ |
+
+priorityClassName
+string
+ |
+
+ Name of the priority class which will be used in every generated Pod, if the PriorityClass
+specified does not exist, the pod will not be able to schedule. Please refer to
+https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/#priorityclass
+for more information
+ |
+
+primaryUpdateStrategy
+PrimaryUpdateStrategy
+ |
+
+ Deployment strategy to follow to upgrade the primary server during a rolling
+update procedure, after all replicas have been successfully updated:
+it can be automated (unsupervised - default) or manual (supervised)
+ |
+
+primaryUpdateMethod
+PrimaryUpdateMethod
+ |
+
+ Method to follow to upgrade the primary server during a rolling
+update procedure, after all replicas have been successfully updated:
+it can be with a switchover (switchover) or in-place (restart - default)
+ |
+
+backup
+BackupConfiguration
+ |
+
+ The configuration to be used for backups
+ |
+
+nodeMaintenanceWindow
+NodeMaintenanceWindow
+ |
+
+ Define a maintenance window for the Kubernetes nodes
+ |
+
+licenseKey
+string
+ |
+
+ The license key of the cluster. When empty, the cluster operates in
+trial mode and after the expiry date (default 30 days) the operator
+will cease any reconciliation attempt. For details, please refer to
+the license agreement that comes with the operator.
+ |
+
+licenseKeySecret
+core/v1.SecretKeySelector
+ |
+
+ The reference to the license key. When this is set, it takes precedence over LicenseKey.
+ |
+
+monitoring
+MonitoringConfiguration
+ |
+
+ The configuration of the monitoring infrastructure of this cluster
+ |
+
+externalClusters
+[]ExternalCluster
+ |
+
+ The list of external clusters which are used in the configuration
+ |
+
+logLevel
+string
+ |
+
+ The instances' log level, one of the following values: error, warning, info (default), debug, trace
+ |
+
+projectedVolumeTemplate
+core/v1.ProjectedVolumeSource
+ |
+
+ Template to be used to define projected volumes, projected volumes will be mounted
+under /projected base folder
+ |
+
+env
+[]core/v1.EnvVar
+ |
+
+ Env follows the Env format to pass environment variables
+to the pods created in the cluster
+ |
+
+envFrom
+[]core/v1.EnvFromSource
+ |
+
+ EnvFrom follows the EnvFrom format to pass environment variables
+sources to the pods to be used by Env
+ |
+
+managed
+ManagedConfiguration
+ |
+
+ The configuration that is used by the portions of PostgreSQL that are managed by the instance manager
+ |
+
+seccompProfile
+core/v1.SeccompProfile
+ |
+
+ The SeccompProfile applied to every Pod and Container.
+Defaults to: RuntimeDefault
+ |
+
+podSecurityContext
+core/v1.PodSecurityContext
+ |
+
+ Override the PodSecurityContext applied to every Pod of the cluster.
+When set, this overrides the operator's default PodSecurityContext for the cluster.
+If omitted, the operator defaults are used.
+This field doesn't have any effect if SecurityContextConstraints are present.
+ |
+
+securityContext
+core/v1.SecurityContext
+ |
+
+ Override the SecurityContext applied to every Container in the Pod of the cluster.
+When set, this overrides the operator's default Container SecurityContext.
+If omitted, the operator defaults are used.
+ |
+
+tablespaces
+[]TablespaceConfiguration
+ |
+
+ The tablespaces configuration
+ |
+
+enablePDB
+bool
+ |
+
+ Manage the PodDisruptionBudget resources within the cluster. When
+configured as true (default setting), the pod disruption budgets
+will safeguard the primary node from being terminated. Conversely,
+setting it to false will result in the absence of any
+PodDisruptionBudget resource, permitting the shutdown of all nodes
+hosting the PostgreSQL cluster. This latter configuration is
+advisable for any PostgreSQL cluster employed for
+development/staging purposes.
+ |
+
+plugins
+[]PluginConfiguration
+ |
+
+ The plugins configuration, containing
+any plugin to be loaded with the corresponding configuration
+ |
+
+probes
+ProbesConfiguration
+ |
+
+ The configuration of the probes to be injected
+in the PostgreSQL Pods.
+ |
+
+
+
+
+
+
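+As a sketch of how some of the fields above combine, the following minimal `Cluster` manifest disables the PodDisruptionBudget for a staging cluster and makes the default seccomp profile explicit (names and sizes are placeholders, not recommendations):
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+  name: cluster-staging        # placeholder name
+spec:
+  instances: 3
+  enablePDB: false             # no PodDisruptionBudget; advisable only for dev/staging
+  seccompProfile:
+    type: RuntimeDefault       # already the default, shown here for clarity
+  logLevel: info               # one of: error, warning, info (default), debug, trace
+  storage:
+    size: 1Gi                  # placeholder size
+```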
+## ClusterStatus
+
+**Appears in:**
+
+- [Cluster](#postgresql-k8s-enterprisedb-io-v1-Cluster)
+
+ClusterStatus defines the observed state of a PostgreSQL cluster managed by
+{{name.ln}}.
+
+
+| Field | Description |
+
+instances
+int
+ |
+
+ The total number of PVC Groups detected in the cluster. It may differ from the number of existing instance pods.
+ |
+
+readyInstances
+int
+ |
+
+ The total number of ready instances in the cluster. It is equal to the number of ready instance pods.
+ |
+
+instancesStatus
+map[PodStatus][]string
+ |
+
+ InstancesStatus indicates in which status the instances are
+ |
+
+instancesReportedState
+map[PodName]InstanceReportedState
+ |
+
+ The reported state of the instances during the last reconciliation loop
+ |
+
+managedRolesStatus
+ManagedRoles
+ |
+
+ ManagedRolesStatus reports the state of the managed roles in the cluster
+ |
+
+tablespacesStatus
+[]TablespaceState
+ |
+
+ TablespacesStatus reports the state of the declarative tablespaces in the cluster
+ |
+
+timelineID
+int
+ |
+
+ The timeline of the Postgres cluster
+ |
+
+topology
+Topology
+ |
+
+ Instances topology.
+ |
+
+latestGeneratedNode
+int
+ |
+
+ ID of the latest generated node (used to avoid node name clashing)
+ |
+
+currentPrimary
+string
+ |
+
+ Current primary instance
+ |
+
+targetPrimary
+string
+ |
+
+ Target primary instance; this differs from the current primary
+during a switchover or a failover
+ |
+
+lastPromotionToken
+string
+ |
+
+ LastPromotionToken is the last verified promotion token that
+was used to promote a replica cluster
+ |
+
+pvcCount
+int32
+ |
+
+ How many PVCs have been created by this cluster
+ |
+
+jobCount
+int32
+ |
+
+ How many Jobs have been created by this cluster
+ |
+
+danglingPVC
+[]string
+ |
+
+ List of all the PVCs created by this cluster and still available
+which are not attached to a Pod
+ |
+
+resizingPVC
+[]string
+ |
+
+ List of all the PVCs that have ResizingPVC condition.
+ |
+
+initializingPVC
+[]string
+ |
+
+ List of all the PVCs that are being initialized by this cluster
+ |
+
+healthyPVC
+[]string
+ |
+
+ List of all the PVCs that are neither dangling nor initializing
+ |
+
+unusablePVC
+[]string
+ |
+
+ List of all the PVCs that are unusable because another PVC is missing
+ |
+
+licenseStatus
+github.com/EnterpriseDB/cloud-native-postgres/pkg/licensekey.Status
+ |
+
+ Status of the license
+ |
+
+writeService
+string
+ |
+
+ Current write pod
+ |
+
+readService
+string
+ |
+
+ Current list of read pods
+ |
+
+phase
+string
+ |
+
+ Current phase of the cluster
+ |
+
+phaseReason
+string
+ |
+
+ Reason for the current phase
+ |
+
+secretsResourceVersion
+SecretsResourceVersion
+ |
+
+ The list of resource versions of the secrets
+managed by the operator. Every change here is done in the
+interest of the instance manager, which will refresh the
+secret data
+ |
+
+configMapResourceVersion
+ConfigMapResourceVersion
+ |
+
+ The list of resource versions of the configmaps,
+managed by the operator. Every change here is done in the
+interest of the instance manager, which will refresh the
+configmap data
+ |
+
+certificates
+CertificatesStatus
+ |
+
+ The configuration for the CA and related certificates, initialized with defaults.
+ |
+
+firstRecoverabilityPoint
+string
+ |
+
+ The first recoverability point, stored as a date in RFC3339 format.
+This field is calculated from the content of FirstRecoverabilityPointByMethod.
+Deprecated: the field is not set for backup plugins.
+ |
+
+firstRecoverabilityPointByMethod
+map[BackupMethod]meta/v1.Time
+ |
+
+ The first recoverability point, stored as a date in RFC3339 format, per backup method type.
+Deprecated: the field is not set for backup plugins.
+ |
+
+lastSuccessfulBackup
+string
+ |
+
+ Last successful backup, stored as a date in RFC3339 format.
+This field is calculated from the content of LastSuccessfulBackupByMethod.
+Deprecated: the field is not set for backup plugins.
+ |
+
+lastSuccessfulBackupByMethod
+map[BackupMethod]meta/v1.Time
+ |
+
+ Last successful backup, stored as a date in RFC3339 format, per backup method type.
+Deprecated: the field is not set for backup plugins.
+ |
+
+lastFailedBackup
+string
+ |
+
+ Last failed backup, stored as a date in RFC3339 format.
+Deprecated: the field is not set for backup plugins.
+ |
+
+cloudNativePostgresqlCommitHash
+string
+ |
+
+ The commit hash of the operator build that is currently running
+ |
+
+currentPrimaryTimestamp
+string
+ |
+
+ The timestamp when the last actual promotion to primary has occurred
+ |
+
+currentPrimaryFailingSinceTimestamp
+string
+ |
+
+ The timestamp when the primary was detected to be unhealthy.
+This field is reported when .spec.failoverDelay is populated or during online upgrades
+ |
+
+targetPrimaryTimestamp
+string
+ |
+
+ The timestamp when the last request for a new primary has occurred
+ |
+
+poolerIntegrations
+PoolerIntegrations
+ |
+
+ The integration needed by poolers referencing the cluster
+ |
+
+cloudNativePostgresqlOperatorHash
+string
+ |
+
+ The hash of the binary of the operator
+ |
+
+availableArchitectures
+[]AvailableArchitecture
+ |
+
+ AvailableArchitectures reports the available architectures of a cluster
+ |
+
+conditions
+[]meta/v1.Condition
+ |
+
+ Conditions for cluster object
+ |
+
+instanceNames
+[]string
+ |
+
+ List of instance names in the cluster
+ |
+
+onlineUpdateEnabled
+bool
+ |
+
+ OnlineUpdateEnabled shows if the online upgrade is enabled inside the cluster
+ |
+
+image
+string
+ |
+
+ Image contains the image name used by the pods
+ |
+
+pgDataImageInfo
+ImageInfo
+ |
+
+ PGDataImageInfo contains the details of the latest image that has run on the current data directory.
+ |
+
+pluginStatus
+[]PluginStatus
+ |
+
+ PluginStatus is the status of the loaded plugins
+ |
+
+switchReplicaClusterStatus
+SwitchReplicaClusterStatus
+ |
+
+ SwitchReplicaClusterStatus is the status of the switch to replica cluster
+ |
+
+demotionToken
+string
+ |
+
+ DemotionToken is a JSON token containing the information
+from pg_controldata such as Database system identifier, Latest checkpoint's
+TimeLineID, Latest checkpoint's REDO location, Latest checkpoint's REDO
+WAL file, and Time of latest checkpoint
+ |
+
+systemID
+string
+ |
+
+ SystemID is the latest detected PostgreSQL SystemID
+ |
+
+
+
+
+
+
+## ConfigMapResourceVersion
+
+**Appears in:**
+
+- [ClusterStatus](#postgresql-k8s-enterprisedb-io-v1-ClusterStatus)
+
+ConfigMapResourceVersion contains the resource versions of the config maps
+managed by the operator
+
+
+| Field | Description |
+
+metrics
+map[string]string
+ |
+
+ A map with the versions of all the config maps used to pass metrics.
+Map keys are the config map names, map values are the versions
+ |
+
+
+
+
+
+
+## DataDurabilityLevel
+
+(Alias of `string`)
+
+**Appears in:**
+
+- [SynchronousReplicaConfiguration](#postgresql-k8s-enterprisedb-io-v1-SynchronousReplicaConfiguration)
+
+DataDurabilityLevel specifies how strictly to enforce synchronous replication
+when cluster instances are unavailable. Options are required or preferred.
+
+
+
+## DataSource
+
+**Appears in:**
+
+- [BootstrapRecovery](#postgresql-k8s-enterprisedb-io-v1-BootstrapRecovery)
+
+DataSource contains the configuration required to bootstrap a
+PostgreSQL cluster from an existing storage
+
+
+
+
+
+## DatabaseObjectSpec
+
+**Appears in:**
+
+- [ExtensionSpec](#postgresql-k8s-enterprisedb-io-v1-ExtensionSpec)
+
+- [FDWSpec](#postgresql-k8s-enterprisedb-io-v1-FDWSpec)
+
+- [SchemaSpec](#postgresql-k8s-enterprisedb-io-v1-SchemaSpec)
+
+- [ServerSpec](#postgresql-k8s-enterprisedb-io-v1-ServerSpec)
+
+DatabaseObjectSpec contains the fields which are common to every
+database object
+
+
+| Field | Description |
+
+name [Required]
+string
+ |
+
+ Name of the object (extension, schema, FDW, server)
+ |
+
+ensure
+EnsureOption
+ |
+
+ Specifies whether an object (e.g., a schema) should be present or absent
+in the database. If set to present, the object will be created if
+it does not exist. If set to absent, the extension/schema will be
+removed if it exists.
+ |
+
+
+
+
+
+
+## DatabaseObjectStatus
+
+**Appears in:**
+
+- [DatabaseStatus](#postgresql-k8s-enterprisedb-io-v1-DatabaseStatus)
+
+DatabaseObjectStatus is the status of the managed database objects
+
+
+| Field | Description |
+
+name [Required]
+string
+ |
+
+ The name of the object
+ |
+
+applied [Required]
+bool
+ |
+
+ True if the object has been installed successfully in
+the database
+ |
+
+message
+string
+ |
+
+ Message is the object reconciliation message
+ |
+
+
+
+
+
+
+## DatabaseReclaimPolicy
+
+(Alias of `string`)
+
+**Appears in:**
+
+- [DatabaseSpec](#postgresql-k8s-enterprisedb-io-v1-DatabaseSpec)
+
+DatabaseReclaimPolicy describes a policy for end-of-life maintenance of databases.
+
+
+
+## DatabaseRoleRef
+
+**Appears in:**
+
+- [TablespaceConfiguration](#postgresql-k8s-enterprisedb-io-v1-TablespaceConfiguration)
+
+DatabaseRoleRef is a reference to a role available inside PostgreSQL
+
+
+| Field | Description |
+
+name
+string
+ |
+
+ No description provided. |
+
+
+
+
+
+
+## DatabaseSpec
+
+**Appears in:**
+
+- [Database](#postgresql-k8s-enterprisedb-io-v1-Database)
+
+DatabaseSpec is the specification of a PostgreSQL Database, built around the
+CREATE DATABASE, ALTER DATABASE, and DROP DATABASE SQL commands of
+PostgreSQL.
+
+
+| Field | Description |
+
+cluster [Required]
+core/v1.LocalObjectReference
+ |
+
+ The name of the PostgreSQL cluster hosting the database.
+ |
+
+ensure
+EnsureOption
+ |
+
+ Ensure the PostgreSQL database is present or absent - defaults to "present".
+ |
+
+name [Required]
+string
+ |
+
+ The name of the database to create inside PostgreSQL. This setting cannot be changed.
+ |
+
+owner [Required]
+string
+ |
+
+ Maps to the OWNER parameter of CREATE DATABASE.
+Maps to the OWNER TO command of ALTER DATABASE.
+The role name of the user who owns the database inside PostgreSQL.
+ |
+
+template
+string
+ |
+
+ Maps to the TEMPLATE parameter of CREATE DATABASE. This setting
+cannot be changed. The name of the template from which to create
+this database.
+ |
+
+encoding
+string
+ |
+
+ Maps to the ENCODING parameter of CREATE DATABASE. This setting
+cannot be changed. Character set encoding to use in the database.
+ |
+
+locale
+string
+ |
+
+ Maps to the LOCALE parameter of CREATE DATABASE. This setting
+cannot be changed. Sets the default collation order and character
+classification in the new database.
+ |
+
+localeProvider
+string
+ |
+
+ Maps to the LOCALE_PROVIDER parameter of CREATE DATABASE. This
+setting cannot be changed. This option sets the locale provider for
+databases created in the new cluster. Available from PostgreSQL 16.
+ |
+
+localeCollate
+string
+ |
+
+ Maps to the LC_COLLATE parameter of CREATE DATABASE. This
+setting cannot be changed.
+ |
+
+localeCType
+string
+ |
+
+ Maps to the LC_CTYPE parameter of CREATE DATABASE. This setting
+cannot be changed.
+ |
+
+icuLocale
+string
+ |
+
+ Maps to the ICU_LOCALE parameter of CREATE DATABASE. This
+setting cannot be changed. Specifies the ICU locale when the ICU
+provider is used. This option requires localeProvider to be set to
+icu. Available from PostgreSQL 15.
+ |
+
+icuRules
+string
+ |
+
+ Maps to the ICU_RULES parameter of CREATE DATABASE. This setting
+cannot be changed. Specifies additional collation rules to customize
+the behavior of the default collation. This option requires
+localeProvider to be set to icu. Available from PostgreSQL 16.
+ |
+
+builtinLocale
+string
+ |
+
+ Maps to the BUILTIN_LOCALE parameter of CREATE DATABASE. This
+setting cannot be changed. Specifies the locale name when the
+builtin provider is used. This option requires localeProvider to
+be set to builtin. Available from PostgreSQL 17.
+ |
+
+collationVersion
+string
+ |
+
+ Maps to the COLLATION_VERSION parameter of CREATE DATABASE. This
+setting cannot be changed.
+ |
+
+isTemplate
+bool
+ |
+
+ Maps to the IS_TEMPLATE parameter of CREATE DATABASE and ALTER DATABASE. If true, this database is considered a template and can
+be cloned by any user with CREATEDB privileges.
+ |
+
+allowConnections
+bool
+ |
+
+ Maps to the ALLOW_CONNECTIONS parameter of CREATE DATABASE and
+ALTER DATABASE. If false, no one can connect to this database.
+ |
+
+connectionLimit
+int
+ |
+
+ Maps to the CONNECTION LIMIT clause of CREATE DATABASE and
+ALTER DATABASE. How many concurrent connections can be made to
+this database. -1 (the default) means no limit.
+ |
+
+tablespace
+string
+ |
+
+ Maps to the TABLESPACE parameter of CREATE DATABASE.
+Maps to the SET TABLESPACE command of ALTER DATABASE.
+The name of the tablespace (in PostgreSQL) that will be associated
+with the new database. This tablespace will be the default
+tablespace used for objects created in this database.
+ |
+
+databaseReclaimPolicy
+DatabaseReclaimPolicy
+ |
+
+ The policy for end-of-life maintenance of this database.
+ |
+
+schemas
+[]SchemaSpec
+ |
+
+ The list of schemas to be managed in the database
+ |
+
+extensions
+[]ExtensionSpec
+ |
+
+ The list of extensions to be managed in the database
+ |
+
+fdws
+[]FDWSpec
+ |
+
+ The list of foreign data wrappers to be managed in the database
+ |
+
+servers
+[]ServerSpec
+ |
+
+ The list of foreign servers to be managed in the database
+ |
+
+
+
+
+
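+A hedged sketch of a `Database` resource using the fields above (cluster, database, schema, and extension names are placeholders):
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Database
+metadata:
+  name: db-app                 # placeholder
+spec:
+  cluster:
+    name: cluster-example      # placeholder cluster name
+  name: app                    # cannot be changed afterwards
+  owner: app
+  encoding: UTF8               # cannot be changed afterwards
+  connectionLimit: -1          # the default: no limit
+  schemas:
+    - name: reporting
+      ensure: present
+  extensions:
+    - name: pg_stat_statements
+      ensure: present
+```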
+
+## DatabaseStatus
+
+**Appears in:**
+
+- [Database](#postgresql-k8s-enterprisedb-io-v1-Database)
+
+DatabaseStatus defines the observed state of Database
+
+
+| Field | Description |
+
+observedGeneration
+int64
+ |
+
+ A sequence number representing the latest
+desired state that was synchronized
+ |
+
+applied
+bool
+ |
+
+ Applied is true if the database was reconciled correctly
+ |
+
+message
+string
+ |
+
+ Message is the reconciliation output message
+ |
+
+schemas
+[]DatabaseObjectStatus
+ |
+
+ Schemas is the status of the managed schemas
+ |
+
+extensions
+[]DatabaseObjectStatus
+ |
+
+ Extensions is the status of the managed extensions
+ |
+
+fdws
+[]DatabaseObjectStatus
+ |
+
+ FDWs is the status of the managed FDWs
+ |
+
+servers
+[]DatabaseObjectStatus
+ |
+
+ Servers is the status of the managed servers
+ |
+
+
+
+
+
+
+## EPASConfiguration
+
+**Appears in:**
+
+- [PostgresConfiguration](#postgresql-k8s-enterprisedb-io-v1-PostgresConfiguration)
+
+EPASConfiguration contains EDB Postgres Advanced Server specific configurations
+
+
+| Field | Description |
+
+audit
+bool
+ |
+
+ If true, enables edb_audit logging
+ |
+
+tde
+TDEConfiguration
+ |
+
+ TDE configuration
+ |
+
+
+
+
+
+
+## EmbeddedObjectMetadata
+
+**Appears in:**
+
+- [ClusterSpec](#postgresql-k8s-enterprisedb-io-v1-ClusterSpec)
+
+EmbeddedObjectMetadata contains metadata to be inherited by all resources related to a Cluster
+
+
+| Field | Description |
+
+labels
+map[string]string
+ |
+
+ No description provided. |
+
+annotations
+map[string]string
+ |
+
+ No description provided. |
+
+
+
+
+
+
+## EnsureOption
+
+(Alias of `string`)
+
+**Appears in:**
+
+- [DatabaseObjectSpec](#postgresql-k8s-enterprisedb-io-v1-DatabaseObjectSpec)
+
+- [DatabaseSpec](#postgresql-k8s-enterprisedb-io-v1-DatabaseSpec)
+
+- [OptionSpec](#postgresql-k8s-enterprisedb-io-v1-OptionSpec)
+
+- [RoleConfiguration](#postgresql-k8s-enterprisedb-io-v1-RoleConfiguration)
+
+EnsureOption represents whether we should enforce the presence or absence of
+a Role in a PostgreSQL instance
+
+
+
+## EphemeralVolumesSizeLimitConfiguration
+
+**Appears in:**
+
+- [ClusterSpec](#postgresql-k8s-enterprisedb-io-v1-ClusterSpec)
+
+EphemeralVolumesSizeLimitConfiguration contains the configuration of the ephemeral
+storage
+
+
+
+
+
+## ExtensionConfiguration
+
+**Appears in:**
+
+- [PostgresConfiguration](#postgresql-k8s-enterprisedb-io-v1-PostgresConfiguration)
+
+ExtensionConfiguration is the configuration used to add
+PostgreSQL extensions to the Cluster.
+
+
+| Field | Description |
+
+name [Required]
+string
+ |
+
+ The name of the extension, required
+ |
+
+image [Required]
+core/v1.ImageVolumeSource
+ |
+
+ The image containing the extension, required
+ |
+
+extension_control_path
+[]string
+ |
+
+ The list of directories inside the image which should be added to extension_control_path.
+If not defined, defaults to "/share".
+ |
+
+dynamic_library_path
+[]string
+ |
+
+ The list of directories inside the image which should be added to dynamic_library_path.
+If not defined, defaults to "/lib".
+ |
+
+ld_library_path
+[]string
+ |
+
+ The list of directories inside the image which should be added to ld_library_path.
+ |
+
+
+
+
+
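+For illustration only, an `ExtensionConfiguration` entry could be declared as follows, assuming the `extensions` list lives under `spec.postgresql` as indicated by the PostgresConfiguration reference above (the extension name and image reference are placeholders; the paths shown are the documented defaults):
+
+```yaml
+spec:
+  postgresql:
+    extensions:
+      - name: my_extension
+        image:
+          reference: registry.example.com/my-extension:1.0
+        extension_control_path:
+          - /share             # the default when not defined
+        dynamic_library_path:
+          - /lib               # the default when not defined
+```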
+
+## ExtensionSpec
+
+**Appears in:**
+
+- [DatabaseSpec](#postgresql-k8s-enterprisedb-io-v1-DatabaseSpec)
+
+ExtensionSpec configures an extension in a database
+
+
+| Field | Description |
+
+DatabaseObjectSpec
+DatabaseObjectSpec
+ |
+(Members of DatabaseObjectSpec are embedded into this type.)
+ Common fields
+ |
+
+version [Required]
+string
+ |
+
+ The version of the extension to install. If empty, the operator will
+install the default version (whatever is specified in the
+extension's control file)
+ |
+
+schema [Required]
+string
+ |
+
+ The name of the schema in which to install the extension's objects,
+in case the extension allows its contents to be relocated. If not
+specified (default), and the extension's control file does not
+specify a schema either, the current default object creation schema
+is used.
+ |
+
+
+
+
+
+
+## ExternalCluster
+
+**Appears in:**
+
+- [ClusterSpec](#postgresql-k8s-enterprisedb-io-v1-ClusterSpec)
+
+ExternalCluster represents the connection parameters to an
+external cluster which is used in the other sections of the configuration
+
+
+| Field | Description |
+
+name [Required]
+string
+ |
+
+ The server name, required
+ |
+
+connectionParameters
+map[string]string
+ |
+
+ The list of connection parameters, such as dbname, host, username, etc
+ |
+
+sslCert
+core/v1.SecretKeySelector
+ |
+
+ The reference to an SSL certificate to be used to connect to this
+instance
+ |
+
+sslKey
+core/v1.SecretKeySelector
+ |
+
+ The reference to an SSL private key to be used to connect to this
+instance
+ |
+
+sslRootCert
+core/v1.SecretKeySelector
+ |
+
+ The reference to an SSL CA public key to be used to connect to this
+instance
+ |
+
+password
+core/v1.SecretKeySelector
+ |
+
+ The reference to the password to be used to connect to the server.
+If a password is provided, {{name.ln}} creates a PostgreSQL
+passfile at /controller/external/NAME/pass (where "NAME" is the
+cluster's name). This passfile is automatically referenced in the
+connection string when establishing a connection to the remote
+PostgreSQL server from the current PostgreSQL Cluster. This ensures
+secure and efficient password management for external clusters.
+ |
+
+barmanObjectStore
+github.com/cloudnative-pg/barman-cloud/pkg/api.BarmanObjectStoreConfiguration
+ |
+
+ The configuration for the barman-cloud tool suite
+ |
+
+plugin [Required]
+PluginConfiguration
+ |
+
+ The configuration of the plugin that is taking care
+of WAL archiving and backups for this external cluster
+ |
+
+
+
+
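+As an illustrative sketch, an `externalClusters` entry combining connection parameters with a password Secret (host and Secret names are placeholders) might look like:
+
+```yaml
+spec:
+  externalClusters:
+    - name: origin
+      connectionParameters:
+        host: pg.example.com       # placeholder host
+        user: streaming_replica
+        dbname: postgres
+      password:
+        name: origin-credentials   # placeholder Secret name
+        key: password
+```
+
+With a password set, the operator creates a passfile at `/controller/external/origin/pass`, as described above.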
+
+
+## FDWSpec
+
+**Appears in:**
+
+- [DatabaseSpec](#postgresql-k8s-enterprisedb-io-v1-DatabaseSpec)
+
+FDWSpec configures a Foreign Data Wrapper in a database
+
+
+| Field | Description |
+
+DatabaseObjectSpec
+DatabaseObjectSpec
+ |
+(Members of DatabaseObjectSpec are embedded into this type.)
+ Common fields
+ |
+
+handler
+string
+ |
+
+ Name of the handler function (e.g., "postgres_fdw_handler").
+This will be empty if no handler is specified. In that case,
+the default handler is registered when the FDW extension is created.
+ |
+
+validator
+string
+ |
+
+ Name of the validator function (e.g., "postgres_fdw_validator").
+This will be empty if no validator is specified. In that case,
+the default validator is registered when the FDW extension is created.
+ |
+
+owner
+string
+ |
+
+ Owner specifies the database role that will own the Foreign Data Wrapper.
+The role must have superuser privileges in the target database.
+ |
+
+options
+[]OptionSpec
+ |
+
+ Options specifies the configuration options for the FDW.
+ |
+
+usage
+[]UsageSpec
+ |
+
+ List of roles for which USAGE privileges on the FDW are granted or revoked.
+ |
+
+
+
+
+
+
+## FailoverQuorumStatus
+
+**Appears in:**
+
+- [FailoverQuorum](#postgresql-k8s-enterprisedb-io-v1-FailoverQuorum)
+
+FailoverQuorumStatus is the latest observed status of the failover
+quorum of the PG cluster.
+
+
+| Field | Description |
+
+method
+string
+ |
+
+ Contains the latest reported Method value.
+ |
+
+standbyNames
+[]string
+ |
+
+ StandbyNames is the list of potentially synchronous
+instance names.
+ |
+
+standbyNumber
+int
+ |
+
+ StandbyNumber is the number of synchronous standbys that transactions
+need to wait for replies from.
+ |
+
+primary
+string
+ |
+
+ Primary is the name of the primary instance that updated
+this object the latest time.
+ |
+
+
+
+
+
+
+## ImageCatalogRef
+
+**Appears in:**
+
+- [ClusterSpec](#postgresql-k8s-enterprisedb-io-v1-ClusterSpec)
+
+ImageCatalogRef defines the reference to a major version in an ImageCatalog
+
+
+| Field | Description |
+
+TypedLocalObjectReference
+core/v1.TypedLocalObjectReference
+ |
+(Members of TypedLocalObjectReference are embedded into this type.)
+ No description provided. |
+
+major [Required]
+int
+ |
+
+ The major version of PostgreSQL we want to use from the ImageCatalog
+ |
+
+
+
+
+
+
+## ImageCatalogSpec
+
+**Appears in:**
+
+- [ClusterImageCatalog](#postgresql-k8s-enterprisedb-io-v1-ClusterImageCatalog)
+
+- [ImageCatalog](#postgresql-k8s-enterprisedb-io-v1-ImageCatalog)
+
+ImageCatalogSpec defines the desired ImageCatalog
+
+
+| Field | Description |
+
+images [Required]
+[]CatalogImage
+ |
+
+ List of CatalogImages available in the catalog
+ |
+
+
+
+
+
+
+## ImageInfo
+
+**Appears in:**
+
+- [ClusterStatus](#postgresql-k8s-enterprisedb-io-v1-ClusterStatus)
+
+ImageInfo contains the information about a PostgreSQL image
+
+
+| Field | Description |
+
+image [Required]
+string
+ |
+
+ Image is the image name
+ |
+
+majorVersion [Required]
+int
+ |
+
+ MajorVersion is the major version of the image
+ |
+
+
+
+
+
+
+## Import
+
+**Appears in:**
+
+- [BootstrapInitDB](#postgresql-k8s-enterprisedb-io-v1-BootstrapInitDB)
+
+Import contains the configuration to initialize a database from a logical snapshot of an externalCluster
+
+
+| Field | Description |
+
+source [Required]
+ImportSource
+ |
+
+ The source of the import
+ |
+
+type [Required]
+SnapshotType
+ |
+
+ The import type. Can be microservice or monolith.
+ |
+
+databases [Required]
+[]string
+ |
+
+ The databases to import
+ |
+
+roles
+[]string
+ |
+
+ The roles to import
+ |
+
+postImportApplicationSQL
+[]string
+ |
+
+ List of SQL queries to be executed as a superuser in the application
+database right after it is imported - to be used with extreme care
+(by default empty). Only available in microservice type.
+ |
+
+schemaOnly
+bool
+ |
+
+ When set to true, only the pre-data and post-data sections of
+pg_restore are invoked, avoiding data import. Default: false.
+ |
+
+pgDumpExtraOptions
+[]string
+ |
+
+ List of custom options to pass to the pg_dump command.
+IMPORTANT: Use with caution. The operator does not validate these options,
+and certain flags may interfere with its intended functionality or design.
+You are responsible for ensuring that the provided options are compatible
+with your environment and desired behavior.
+ |
+
+pgRestoreExtraOptions
+[]string
+ |
+
+ List of custom options to pass to the pg_restore command.
+IMPORTANT: Use with caution. The operator does not validate these options,
+and certain flags may interfere with its intended functionality or design.
+You are responsible for ensuring that the provided options are compatible
+with your environment and desired behavior.
+ |
+
+pgRestorePredataOptions
+[]string
+ |
+
+ Custom options to pass to the pg_restore command during the pre-data
+section. This setting overrides the generic pgRestoreExtraOptions value.
+IMPORTANT: Use with caution. The operator does not validate these options,
+and certain flags may interfere with its intended functionality or design.
+You are responsible for ensuring that the provided options are compatible
+with your environment and desired behavior.
+ |
+
+pgRestoreDataOptions
+[]string
+ |
+
+ Custom options to pass to the pg_restore command during the data
+section. This setting overrides the generic pgRestoreExtraOptions value.
+IMPORTANT: Use with caution. The operator does not validate these options,
+and certain flags may interfere with its intended functionality or design.
+You are responsible for ensuring that the provided options are compatible
+with your environment and desired behavior.
+ |
+
+pgRestorePostdataOptions
+[]string
+ |
+
+ Custom options to pass to the pg_restore command during the post-data
+section. This setting overrides the generic pgRestoreExtraOptions value.
+IMPORTANT: Use with caution. The operator does not validate these options,
+and certain flags may interfere with its intended functionality or design.
+You are responsible for ensuring that the provided options are compatible
+with your environment and desired behavior.
+ |
+
+
+
+
+
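+A minimal sketch of a microservice-type import built from the fields above (names are placeholders; `origin` must match an entry in `spec.externalClusters`):
+
+```yaml
+spec:
+  bootstrap:
+    initdb:
+      database: app
+      owner: app
+      import:
+        type: microservice
+        databases:
+          - app
+        source:
+          externalCluster: origin
+        schemaOnly: false          # the default; set true to skip data import
+```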
+
+## ImportSource
+
+**Appears in:**
+
+- [Import](#postgresql-k8s-enterprisedb-io-v1-Import)
+
+ImportSource describes the source for the logical snapshot
+
+
+| Field | Description |
+
+externalCluster [Required]
+string
+ |
+
+ The name of the externalCluster used for import
+ |
+
+
+
+
+
+
+## InstanceID
+
+**Appears in:**
+
+- [BackupStatus](#postgresql-k8s-enterprisedb-io-v1-BackupStatus)
+
+InstanceID contains the information to identify an instance
+
+
+| Field | Description |
+
+podName
+string
+ |
+
+ The pod name
+ |
+
+ContainerID
+string
+ |
+
+ The container ID
+ |
+
+
+
+
+
+
+## InstanceReportedState
+
+**Appears in:**
+
+- [ClusterStatus](#postgresql-k8s-enterprisedb-io-v1-ClusterStatus)
+
+InstanceReportedState describes the last reported state of an instance during a reconciliation loop
+
+
+| Field | Description |
+
+isPrimary [Required]
+bool
+ |
+
+ Indicates if the instance is the primary one
+ |
+
+timeLineID
+int
+ |
+
+ Indicates which TimelineID the instance is on
+ |
+
+ip [Required]
+string
+ |
+
+ IP address of the instance
+ |
+
+
+
+
+
+
+## IsolationCheckConfiguration
+
+**Appears in:**
+
+- [LivenessProbe](#postgresql-k8s-enterprisedb-io-v1-LivenessProbe)
+
+IsolationCheckConfiguration contains the configuration for the isolation check
+functionality in the liveness probe
+
+
+| Field | Description |
+
+enabled
+bool
+ |
+
+ Whether primary isolation checking is enabled for the liveness probe
+ |
+
+requestTimeout
+int
+ |
+
+ Timeout in milliseconds for requests during the primary isolation check
+ |
+
+connectionTimeout
+int
+ |
+
+ Timeout in milliseconds for connections during the primary isolation check
+ |
+
+
+
+
+
+
+## LDAPBindAsAuth
+
+**Appears in:**
+
+- [LDAPConfig](#postgresql-k8s-enterprisedb-io-v1-LDAPConfig)
+
+LDAPBindAsAuth provides the required fields to use the
+bind authentication for LDAP
+
+
+| Field | Description |
+
+prefix
+string
+ |
+
+ Prefix for the bind authentication option
+ |
+
+suffix
+string
+ |
+
+ Suffix for the bind authentication option
+ |
+
+
+
+
+
+
+## LDAPBindSearchAuth
+
+**Appears in:**
+
+- [LDAPConfig](#postgresql-k8s-enterprisedb-io-v1-LDAPConfig)
+
+LDAPBindSearchAuth provides the required fields to use
+the bind+search LDAP authentication process
+
+
+| Field | Description |
+
+baseDN
+string
+ |
+
+ Root DN to begin the user search
+ |
+
+bindDN
+string
+ |
+
+ DN of the user to bind to the directory
+ |
+
+bindPassword
+core/v1.SecretKeySelector
+ |
+
+ Secret with the password for the user to bind to the directory
+ |
+
+searchAttribute
+string
+ |
+
+ Attribute to match against the username
+ |
+
+searchFilter
+string
+ |
+
+ Search filter to use when doing the search+bind authentication
+ |
+
+
+
+
+
+
+## LDAPConfig
+
+**Appears in:**
+
+- [PostgresConfiguration](#postgresql-k8s-enterprisedb-io-v1-PostgresConfiguration)
+
+LDAPConfig contains the parameters needed for LDAP authentication
+
+
+| Field | Description |
+
+server
+string
+ |
+
+ LDAP hostname or IP address
+ |
+
+port
+int
+ |
+
+ LDAP server port
+ |
+
+scheme
+LDAPScheme
+ |
+
+ LDAP scheme to be used; possible options are ldap and ldaps
+ |
+
+bindAsAuth
+LDAPBindAsAuth
+ |
+
+ Bind as authentication configuration
+ |
+
+bindSearchAuth
+LDAPBindSearchAuth
+ |
+
+ Bind+Search authentication configuration
+ |
+
+tls
+bool
+ |
+
+ Set to 'true' to enable LDAP over TLS; defaults to 'false'
+ |
+
+
+
+
+
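+An illustrative bind+search configuration combining the fields above (server, DNs, and Secret name are placeholders):
+
+```yaml
+spec:
+  postgresql:
+    ldap:
+      server: ldap.example.com     # placeholder
+      port: 389
+      scheme: ldap
+      bindSearchAuth:
+        baseDN: ou=people,dc=example,dc=com
+        bindDN: cn=admin,dc=example,dc=com
+        bindPassword:
+          name: ldap-secret        # placeholder Secret
+          key: password
+        searchAttribute: uid
+```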
+
+## LDAPScheme
+
+(Alias of `string`)
+
+**Appears in:**
+
+- [LDAPConfig](#postgresql-k8s-enterprisedb-io-v1-LDAPConfig)
+
+LDAPScheme defines the possible schemes for LDAP
+
+
+
+## LivenessProbe
+
+**Appears in:**
+
+- [ProbesConfiguration](#postgresql-k8s-enterprisedb-io-v1-ProbesConfiguration)
+
+LivenessProbe is the configuration of the liveness probe
+
+
+| Field | Description |
+
+Probe
+Probe
+ |
+(Members of Probe are embedded into this type.)
+ Probe is the standard probe configuration
+ |
+
+isolationCheck
+IsolationCheckConfiguration
+ |
+
+ Configure the feature that extends the liveness probe for a primary
+instance. In addition to the basic checks, this verifies whether the
+primary is isolated from the Kubernetes API server and from its
+replicas, ensuring that it can be safely shut down if network
+partition or API unavailability is detected. Enabled by default.
+ |
+
+
+
+
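+As a sketch, the isolation check can be tuned through the liveness probe configuration injected in the PostgreSQL Pods (the timeout values below are illustrative, not recommendations):
+
+```yaml
+spec:
+  probes:
+    liveness:
+      isolationCheck:
+        enabled: true              # enabled by default
+        requestTimeout: 1000       # milliseconds
+        connectionTimeout: 1000    # milliseconds
+```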
+
+
+## ManagedConfiguration
+
+**Appears in:**
+
+- [ClusterSpec](#postgresql-k8s-enterprisedb-io-v1-ClusterSpec)
+
+ManagedConfiguration represents the portions of PostgreSQL that are managed
+by the instance manager
+
+
+| Field | Description |
+
+roles
+[]RoleConfiguration
+ |
+
+ Database roles managed by the Cluster
+ |
+
+services
+ManagedServices
+ |
+
+ Services managed by the Cluster
+ |
+
+
+
+
+
+
+## ManagedRoles
+
+**Appears in:**
+
+- [ClusterStatus](#postgresql-k8s-enterprisedb-io-v1-ClusterStatus)
+
+ManagedRoles tracks the status of a cluster's managed roles
+
+
+| Field | Description |
+
+byStatus
+map[RoleStatus][]string
+ |
+
+ ByStatus gives the list of roles in each state
+ |
+
+cannotReconcile
+map[string][]string
+ |
+
+ CannotReconcile lists roles that cannot be reconciled in PostgreSQL,
+with an explanation of the cause
+ |
+
+passwordStatus
+map[string]PasswordState
+ |
+
+ PasswordStatus gives the last transaction id and password secret version for each managed role
+ |
+
+
+
+
+
+
+## ManagedService
+
+**Appears in:**
+
+- [ManagedServices](#postgresql-k8s-enterprisedb-io-v1-ManagedServices)
+
+ManagedService represents a specific service managed by the cluster.
+It includes the type of service and its associated template specification.
+
+
+| Field | Description |
+
+selectorType [Required]
+ServiceSelectorType
+ |
+
+ SelectorType specifies the type of selectors that the service will have.
+Valid values are "rw", "r", and "ro", representing read-write, read, and read-only services.
+ |
+
+updateStrategy
+ServiceUpdateStrategy
+ |
+
+ UpdateStrategy describes how the service differences should be reconciled
+ |
+
+serviceTemplate [Required]
+ServiceTemplateSpec
+ |
+
+ ServiceTemplate is the template specification for the service.
+ |
+
+
+
+
+
+
+## ManagedServices
+
+**Appears in:**
+
+- [ManagedConfiguration](#postgresql-k8s-enterprisedb-io-v1-ManagedConfiguration)
+
+ManagedServices represents the services managed by the cluster.
+
+
+| Field | Description |
+| ----- | ----------- |
+| `disabledDefaultServices` `[]ServiceSelectorType` | DisabledDefaultServices is a list of service types that are disabled by default. Valid values are "r" and "ro", representing read and read-only services. |
+| `additional` `[]ManagedService` | Additional is a list of additional managed services specified by the user. |
+
+
+
+
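+As an illustrative sketch (the cluster and service names are hypothetical), a
+`ManagedServices` configuration could disable the default read-only service and
+add an extra LoadBalancer service on top of the read-write selector:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+  name: cluster-example
+spec:
+  instances: 3
+  managed:
+    services:
+      # Default services that should not be created
+      disabledDefaultServices: ["ro"]
+      additional:
+        - selectorType: rw
+          serviceTemplate:
+            metadata:
+              name: cluster-example-rw-lb
+            spec:
+              type: LoadBalancer
+  storage:
+    size: 1Gi
+```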
+
+
+## Metadata
+
+**Appears in:**
+
+- [PodTemplateSpec](#postgresql-k8s-enterprisedb-io-v1-PodTemplateSpec)
+
+- [ServiceAccountTemplate](#postgresql-k8s-enterprisedb-io-v1-ServiceAccountTemplate)
+
+- [ServiceTemplateSpec](#postgresql-k8s-enterprisedb-io-v1-ServiceTemplateSpec)
+
+Metadata is a structure similar to the metav1.ObjectMeta, but still
+parseable by controller-gen to create a suitable CRD for the user.
+The comment of PodTemplateSpec has an explanation of why we are
+not using the core data types.
+
+
+| Field | Description |
+| ----- | ----------- |
+| `name` `string` | The name of the resource. Only supported for certain types |
+| `labels` `map[string]string` | Map of string keys and values that can be used to organize and categorize (scope and select) objects. May match selectors of replication controllers and services. More info: http://kubernetes.io/docs/user-guide/labels |
+| `annotations` `map[string]string` | Annotations is an unstructured key value map stored with a resource that may be set by external tools to store and retrieve arbitrary metadata. They are not queryable and should be preserved when modifying objects. More info: http://kubernetes.io/docs/user-guide/annotations |
+
+
+
+
+
+
+## MonitoringConfiguration
+
+**Appears in:**
+
+- [ClusterSpec](#postgresql-k8s-enterprisedb-io-v1-ClusterSpec)
+
+MonitoringConfiguration is the type containing all the monitoring
+configuration for a certain cluster
+
+
+| Field | Description |
+| ----- | ----------- |
+| `disableDefaultQueries` `bool` | Whether the default queries should be injected. Set it to true if you don't want to inject default queries into the cluster. Default: false. |
+| `customQueriesConfigMap` `[]github.com/cloudnative-pg/machinery/pkg/api.ConfigMapKeySelector` | The list of config maps containing the custom queries |
+| `customQueriesSecret` `[]github.com/cloudnative-pg/machinery/pkg/api.SecretKeySelector` | The list of secrets containing the custom queries |
+| `enablePodMonitor` `bool` | Enable or disable the PodMonitor. Deprecated: This feature will be removed in an upcoming release. If you need this functionality, you can create a PodMonitor manually. |
+| `tls` `ClusterMonitoringTLSConfiguration` | Configure TLS communication for the metrics endpoint. Changing the tls.enabled option will force a rollout of all instances. |
+| `podMonitorMetricRelabelings` `[]github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring/v1.RelabelConfig` | The list of metric relabelings for the PodMonitor. Applied to samples before ingestion. Deprecated: This feature will be removed in an upcoming release. If you need this functionality, you can create a PodMonitor manually. |
+| `podMonitorRelabelings` `[]github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring/v1.RelabelConfig` | The list of relabelings for the PodMonitor. Applied to samples before scraping. Deprecated: This feature will be removed in an upcoming release. If you need this functionality, you can create a PodMonitor manually. |
+| `metricsQueriesTTL` `meta/v1.Duration` | The interval during which metrics computed from queries are considered current. Once it is exceeded, a new scrape will trigger a rerun of the queries. If not set, defaults to 30 seconds, in line with Prometheus scraping defaults. Setting this to zero disables the caching mechanism and can cause heavy load on the PostgreSQL server. |
+
+
+
+
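+As a hedged sketch (the ConfigMap name and key are hypothetical), a monitoring
+section combining custom queries, TLS on the metrics endpoint, and a longer
+query cache could look like this:
+
+```yaml
+spec:
+  monitoring:
+    customQueriesConfigMap:
+      - name: example-monitoring
+        key: custom-queries
+    tls:
+      # Changing this forces a rollout of all instances
+      enabled: true
+    # Cache query-based metrics for 60 seconds between reruns
+    metricsQueriesTTL: 60s
+```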
+
+
+## NodeMaintenanceWindow
+
+**Appears in:**
+
+- [ClusterSpec](#postgresql-k8s-enterprisedb-io-v1-ClusterSpec)
+
+NodeMaintenanceWindow contains information that the operator
+will use while upgrading the underlying node.
+This option is only useful when the chosen storage prevents the Pods
+from being freely moved across nodes.
+
+
+| Field | Description |
+| ----- | ----------- |
+| `reusePVC` `bool` | Reuse the existing PVC (wait for the node to come up again) or not (recreate it elsewhere - when instances >1) |
+| `inProgress` `bool` | Is there a node maintenance activity in progress? |
+
+
+
+
+
+
+## OnlineConfiguration
+
+**Appears in:**
+
+- [BackupSpec](#postgresql-k8s-enterprisedb-io-v1-BackupSpec)
+
+- [ScheduledBackupSpec](#postgresql-k8s-enterprisedb-io-v1-ScheduledBackupSpec)
+
+- [VolumeSnapshotConfiguration](#postgresql-k8s-enterprisedb-io-v1-VolumeSnapshotConfiguration)
+
+OnlineConfiguration contains the configuration parameters for the online volume snapshot
+
+
+| Field | Description |
+| ----- | ----------- |
+| `waitForArchive` `bool` | If false, the function will return immediately after the backup is completed, without waiting for WAL to be archived. This behavior is only useful with backup software that independently monitors WAL archiving. Otherwise, WAL required to make the backup consistent might be missing and make the backup useless. By default, or when this parameter is true, pg_backup_stop will wait for WAL to be archived when archiving is enabled. On a standby, this means that it will wait only when archive_mode = always. If write activity on the primary is low, it may be useful to run pg_switch_wal on the primary in order to trigger an immediate segment switch. |
+| `immediateCheckpoint` `bool` | Control whether the I/O workload for the backup initial checkpoint will be limited, according to the checkpoint_completion_target setting on the PostgreSQL server. If set to true, an immediate checkpoint will be used, meaning PostgreSQL will complete the checkpoint as soon as possible. false by default. |
+
+
+
+
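+As an illustrative sketch (the snapshot class name is an assumption, not part
+of this reference), these parameters typically appear under a volume snapshot
+backup configuration:
+
+```yaml
+spec:
+  backup:
+    volumeSnapshot:
+      className: csi-snapclass   # hypothetical VolumeSnapshotClass name
+      online: true
+      onlineConfiguration:
+        # Complete the initial checkpoint as fast as possible
+        immediateCheckpoint: true
+        # Wait for the required WAL to be archived before declaring success
+        waitForArchive: true
+```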
+
+
+## OptionSpec
+
+**Appears in:**
+
+- [FDWSpec](#postgresql-k8s-enterprisedb-io-v1-FDWSpec)
+
+- [ServerSpec](#postgresql-k8s-enterprisedb-io-v1-ServerSpec)
+
+OptionSpec holds the name, value and the ensure field for an option
+
+
+| Field | Description |
+| ----- | ----------- |
+| `name` [Required] `string` | Name of the option |
+| `value` [Required] `string` | Value of the option |
+| `ensure` `EnsureOption` | Specifies whether an option should be present or absent in the database. If set to present, the option will be created if it does not exist. If set to absent, the option will be removed if it exists. |
+
+
+
+
+
+
+## PasswordState
+
+**Appears in:**
+
+- [ManagedRoles](#postgresql-k8s-enterprisedb-io-v1-ManagedRoles)
+
+PasswordState represents the state of the password of a managed RoleConfiguration
+
+
+| Field | Description |
+| ----- | ----------- |
+| `transactionID` `int64` | the last transaction ID to affect the role definition in PostgreSQL |
+| `resourceVersion` `string` | the resource version of the password secret |
+
+
+
+
+
+
+## PgBouncerIntegrationStatus
+
+**Appears in:**
+
+- [PoolerIntegrations](#postgresql-k8s-enterprisedb-io-v1-PoolerIntegrations)
+
+PgBouncerIntegrationStatus encapsulates the needed integration for the pgbouncer poolers referencing the cluster
+
+
+| Field | Description |
+| ----- | ----------- |
+| `secrets` `[]string` | No description provided. |
+
+
+
+
+
+
+## PgBouncerPoolMode
+
+(Alias of `string`)
+
+**Appears in:**
+
+- [PgBouncerSpec](#postgresql-k8s-enterprisedb-io-v1-PgBouncerSpec)
+
+PgBouncerPoolMode is the mode of PgBouncer
+
+
+
+## PgBouncerSecrets
+
+**Appears in:**
+
+- [PoolerSecrets](#postgresql-k8s-enterprisedb-io-v1-PoolerSecrets)
+
+PgBouncerSecrets contains the versions of the secrets used
+by pgbouncer
+
+
+| Field | Description |
+| ----- | ----------- |
+| `authQuery` `SecretVersion` | The auth query secret version |
+
+
+
+
+
+
+## PgBouncerSpec
+
+**Appears in:**
+
+- [PoolerSpec](#postgresql-k8s-enterprisedb-io-v1-PoolerSpec)
+
+PgBouncerSpec defines how to configure PgBouncer
+
+
+| Field | Description |
+| ----- | ----------- |
+| `poolMode` `PgBouncerPoolMode` | The pool mode. Default: session. |
+| `serverTLSSecret` `github.com/cloudnative-pg/machinery/pkg/api.LocalObjectReference` | ServerTLSSecret, when pointing to a TLS secret, provides pgbouncer's server_tls_key_file and server_tls_cert_file, used when authenticating against PostgreSQL. |
+| `serverCASecret` `github.com/cloudnative-pg/machinery/pkg/api.LocalObjectReference` | ServerCASecret provides PgBouncer's server_tls_ca_file, the root CA for validating PostgreSQL certificates |
+| `clientCASecret` `github.com/cloudnative-pg/machinery/pkg/api.LocalObjectReference` | ClientCASecret provides PgBouncer's client_tls_ca_file, the root CA for validating client certificates |
+| `clientTLSSecret` `github.com/cloudnative-pg/machinery/pkg/api.LocalObjectReference` | ClientTLSSecret provides PgBouncer's client_tls_key_file (private key) and client_tls_cert_file (certificate) used to accept client connections |
+| `authQuerySecret` `github.com/cloudnative-pg/machinery/pkg/api.LocalObjectReference` | The credentials of the user that needs to be used for the authentication query. In case it is specified, an AuthQuery (e.g. "SELECT usename, passwd FROM pg_catalog.pg_shadow WHERE usename=$1") also has to be specified, and no automatic CNP Cluster integration will be triggered. Deprecated. |
+| `authQuery` `string` | The query that will be used to download the hash of the password of a certain user. Default: "SELECT usename, passwd FROM public.user_search($1)". In case it is specified, an AuthQuerySecret also has to be specified, and no automatic CNP Cluster integration will be triggered. |
+| `parameters` `map[string]string` | Additional parameters to be passed to PgBouncer - please check the CNP documentation for a list of options you can configure |
+| `pg_hba` `[]string` | PostgreSQL Host Based Authentication rules (lines to be appended to the pg_hba.conf file) |
+| `paused` `bool` | When set to true, PgBouncer will disconnect from the PostgreSQL server, first waiting for all queries to complete, and pause all new client connections until this value is set to false (default). Internally, the operator calls PgBouncer's PAUSE and RESUME commands. |
+
+
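+Putting several of these fields together, a minimal `Pooler` manifest might
+look like the following sketch (cluster name and parameter values are
+illustrative):
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Pooler
+metadata:
+  name: pooler-example-rw
+spec:
+  cluster:
+    name: cluster-example
+  instances: 3
+  type: rw
+  pgbouncer:
+    poolMode: session
+    parameters:
+      max_client_conn: "1000"
+      default_pool_size: "10"
+```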
+
+
+
+
+## PluginConfiguration
+
+**Appears in:**
+
+- [ClusterSpec](#postgresql-k8s-enterprisedb-io-v1-ClusterSpec)
+
+- [ExternalCluster](#postgresql-k8s-enterprisedb-io-v1-ExternalCluster)
+
+PluginConfiguration specifies a plugin that needs to be loaded for this
+cluster to be reconciled
+
+
+| Field | Description |
+| ----- | ----------- |
+| `name` [Required] `string` | Name is the plugin name |
+| `enabled` `bool` | Enabled is true if this plugin will be used |
+| `isWALArchiver` `bool` | Marks the plugin as the WAL archiver. At most one plugin can be designated as a WAL archiver. This cannot be enabled if the .spec.backup.barmanObjectStore configuration is present. |
+| `parameters` `map[string]string` | Parameters is the configuration of the plugin |
+
+
+
+
+
+
+## PluginStatus
+
+**Appears in:**
+
+- [ClusterStatus](#postgresql-k8s-enterprisedb-io-v1-ClusterStatus)
+
+PluginStatus is the status of a loaded plugin
+
+
+| Field | Description |
+| ----- | ----------- |
+| `name` [Required] `string` | Name is the name of the plugin |
+| `version` [Required] `string` | Version is the version of the plugin loaded by the latest reconciliation loop |
+| `capabilities` `[]string` | Capabilities are the list of capabilities of the plugin |
+| `operatorCapabilities` `[]string` | OperatorCapabilities are the list of capabilities of the plugin regarding the reconciler |
+| `walCapabilities` `[]string` | WALCapabilities are the list of capabilities of the plugin regarding the WAL management |
+| `backupCapabilities` `[]string` | BackupCapabilities are the list of capabilities of the plugin regarding the Backup management |
+| `restoreJobHookCapabilities` `[]string` | RestoreJobHookCapabilities are the list of capabilities of the plugin regarding the RestoreJobHook management |
+| `status` `string` | Status contains the status reported by the plugin through the SetStatusInCluster interface |
+
+
+
+
+
+
+## PodTemplateSpec
+
+**Appears in:**
+
+- [PoolerSpec](#postgresql-k8s-enterprisedb-io-v1-PoolerSpec)
+
+PodTemplateSpec is a structure allowing the user to set
+a template for Pod generation.
+Unfortunately we can't use the corev1.PodTemplateSpec
+type because the generated CRD won't have the field for the
+metadata section.
+References:
+https://github.com/kubernetes-sigs/controller-tools/issues/385
+https://github.com/kubernetes-sigs/controller-tools/issues/448
+https://github.com/prometheus-operator/prometheus-operator/issues/3041
+
+
+| Field | Description |
+| ----- | ----------- |
+| `metadata` `Metadata` | Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata |
+| `spec` `core/v1.PodSpec` | Specification of the desired behavior of the pod. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status |
+
+
+
+
+
+
+## PodTopologyLabels
+
+(Alias of `map[string]string`)
+
+**Appears in:**
+
+- [Topology](#postgresql-k8s-enterprisedb-io-v1-Topology)
+
+PodTopologyLabels represent the topology of a Pod. map[labelName]labelValue
+
+
+
+## PoolerIntegrations
+
+**Appears in:**
+
+- [ClusterStatus](#postgresql-k8s-enterprisedb-io-v1-ClusterStatus)
+
+PoolerIntegrations encapsulates the needed integration for the poolers referencing the cluster
+
+
+
+
+
+## PoolerMonitoringConfiguration
+
+**Appears in:**
+
+- [PoolerSpec](#postgresql-k8s-enterprisedb-io-v1-PoolerSpec)
+
+PoolerMonitoringConfiguration is the type containing all the monitoring
+configuration for a certain Pooler.
+Mirrors the Cluster's MonitoringConfiguration but without the custom queries
+part for now.
+
+
+
+
+
+## PoolerSecrets
+
+**Appears in:**
+
+- [PoolerStatus](#postgresql-k8s-enterprisedb-io-v1-PoolerStatus)
+
+PoolerSecrets contains the versions of all the secrets used
+
+
+| Field | Description |
+| ----- | ----------- |
+| `clientTLS` `SecretVersion` | The client TLS secret version |
+| `serverTLS` `SecretVersion` | The server TLS secret version |
+| `serverCA` `SecretVersion` | The server CA secret version |
+| `clientCA` `SecretVersion` | The client CA secret version |
+| `pgBouncerSecrets` `PgBouncerSecrets` | The version of the secrets used by PgBouncer |
+
+
+
+
+
+
+## PoolerSpec
+
+**Appears in:**
+
+- [Pooler](#postgresql-k8s-enterprisedb-io-v1-Pooler)
+
+PoolerSpec defines the desired state of Pooler
+
+
+| Field | Description |
+| ----- | ----------- |
+| `cluster` [Required] `github.com/cloudnative-pg/machinery/pkg/api.LocalObjectReference` | This is the cluster reference on which the Pooler will work. Pooler name should never match with any cluster name within the same namespace. |
+| `type` `PoolerType` | Type of service to forward traffic to. Default: rw. |
+| `instances` `int32` | The number of replicas we want. Default: 1. |
+| `template` `PodTemplateSpec` | The template of the Pod to be created |
+| `pgbouncer` [Required] `PgBouncerSpec` | The PgBouncer configuration |
+| `deploymentStrategy` `apps/v1.DeploymentStrategy` | The deployment strategy to use for pgbouncer to replace existing pods with new ones |
+| `monitoring` `PoolerMonitoringConfiguration` | The configuration of the monitoring infrastructure of this pooler. Deprecated: This feature will be removed in an upcoming release. If you need this functionality, you can create a PodMonitor manually. |
+| `serviceTemplate` `ServiceTemplateSpec` | Template for the Service to be created |
+
+
+
+
+
+
+## PoolerStatus
+
+**Appears in:**
+
+- [Pooler](#postgresql-k8s-enterprisedb-io-v1-Pooler)
+
+PoolerStatus defines the observed state of Pooler
+
+
+| Field | Description |
+| ----- | ----------- |
+| `secrets` `PoolerSecrets` | The resource version of the config object |
+| `instances` `int32` | The number of pods trying to be scheduled |
+
+
+
+
+
+
+## PoolerType
+
+(Alias of `string`)
+
+**Appears in:**
+
+- [PoolerSpec](#postgresql-k8s-enterprisedb-io-v1-PoolerSpec)
+
+PoolerType is the type of the connection pool, meaning the service
+we are targeting. Allowed values are rw and ro.
+
+
+
+## PostgresConfiguration
+
+**Appears in:**
+
+- [ClusterSpec](#postgresql-k8s-enterprisedb-io-v1-ClusterSpec)
+
+PostgresConfiguration defines the PostgreSQL configuration
+
+
+| Field | Description |
+| ----- | ----------- |
+| `parameters` `map[string]string` | PostgreSQL configuration options (postgresql.conf) |
+| `synchronous` `SynchronousReplicaConfiguration` | Configuration of the PostgreSQL synchronous replication feature |
+| `pg_hba` `[]string` | PostgreSQL Host Based Authentication rules (lines to be appended to the pg_hba.conf file) |
+| `pg_ident` `[]string` | PostgreSQL User Name Maps rules (lines to be appended to the pg_ident.conf file) |
+| `epas` `EPASConfiguration` | EDB Postgres Advanced Server specific configurations |
+| `syncReplicaElectionConstraint` `SyncReplicaElectionConstraints` | Requirements to be met by sync replicas. This will affect how the "synchronous_standby_names" parameter will be set up. |
+| `shared_preload_libraries` `[]string` | Lists of shared preload libraries to add to the default ones |
+| `ldap` `LDAPConfig` | Options to specify LDAP configuration |
+| `promotionTimeout` `int32` | Specifies the maximum number of seconds to wait when promoting an instance to primary. Default value is 40000000, greater than one year in seconds, big enough to simulate an infinite timeout |
+| `enableAlterSystem` `bool` | If this parameter is true, the user will be able to invoke ALTER SYSTEM on this {{name.ln}} Cluster. This should only be used for debugging and troubleshooting. Defaults to false. |
+| `extensions` `[]ExtensionConfiguration` | The configuration of the extensions to be added |
+
+
+
+
+
+
+## PrimaryUpdateMethod
+
+(Alias of `string`)
+
+**Appears in:**
+
+- [ClusterSpec](#postgresql-k8s-enterprisedb-io-v1-ClusterSpec)
+
+PrimaryUpdateMethod contains the method to use when upgrading
+the primary server of the cluster as part of rolling updates
+
+
+
+## PrimaryUpdateStrategy
+
+(Alias of `string`)
+
+**Appears in:**
+
+- [ClusterSpec](#postgresql-k8s-enterprisedb-io-v1-ClusterSpec)
+
+PrimaryUpdateStrategy contains the strategy to follow when upgrading
+the primary server of the cluster as part of rolling updates
+
+
+
+## Probe
+
+**Appears in:**
+
+- [LivenessProbe](#postgresql-k8s-enterprisedb-io-v1-LivenessProbe)
+
+- [ProbeWithStrategy](#postgresql-k8s-enterprisedb-io-v1-ProbeWithStrategy)
+
+Probe describes a health check to be performed against a container to determine whether it is
+alive or ready to receive traffic.
+
+
+| Field | Description |
+| ----- | ----------- |
+| `initialDelaySeconds` `int32` | Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes |
+| `timeoutSeconds` `int32` | Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes |
+| `periodSeconds` `int32` | How often (in seconds) to perform the probe. Defaults to 10 seconds. Minimum value is 1. |
+| `successThreshold` `int32` | Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. |
+| `failureThreshold` `int32` | Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. |
+| `terminationGracePeriodSeconds` `int64` | Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be a non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling the ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. |
+
+
+
+
+
+
+## ProbeStrategyType
+
+(Alias of `string`)
+
+**Appears in:**
+
+- [ProbeWithStrategy](#postgresql-k8s-enterprisedb-io-v1-ProbeWithStrategy)
+
+ProbeStrategyType is the type of the strategy used to declare a PostgreSQL instance
+ready
+
+
+
+## ProbeWithStrategy
+
+**Appears in:**
+
+- [ProbesConfiguration](#postgresql-k8s-enterprisedb-io-v1-ProbesConfiguration)
+
+ProbeWithStrategy is the configuration of the startup probe
+
+
+
+
+
+## ProbesConfiguration
+
+**Appears in:**
+
+- [ClusterSpec](#postgresql-k8s-enterprisedb-io-v1-ClusterSpec)
+
+ProbesConfiguration represent the configuration for the probes
+to be injected in the PostgreSQL Pods
+
+
+| Field | Description |
+| ----- | ----------- |
+| `startup` [Required] `ProbeWithStrategy` | The startup probe configuration |
+| `liveness` [Required] `LivenessProbe` | The liveness probe configuration |
+| `readiness` [Required] `ProbeWithStrategy` | The readiness probe configuration |
+
+
+
+
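+As a sketch under the field definitions above (the timing values are
+illustrative, not defaults you must change), probes can be tuned on a cluster
+like this:
+
+```yaml
+spec:
+  probes:
+    readiness:
+      periodSeconds: 10
+      failureThreshold: 3
+    liveness:
+      periodSeconds: 10
+      timeoutSeconds: 5
+```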
+
+
+## PublicationReclaimPolicy
+
+(Alias of `string`)
+
+**Appears in:**
+
+- [PublicationSpec](#postgresql-k8s-enterprisedb-io-v1-PublicationSpec)
+
+PublicationReclaimPolicy defines a policy for end-of-life maintenance of Publications.
+
+
+
+## PublicationSpec
+
+**Appears in:**
+
+- [Publication](#postgresql-k8s-enterprisedb-io-v1-Publication)
+
+PublicationSpec defines the desired state of Publication
+
+
+| Field | Description |
+| ----- | ----------- |
+| `cluster` [Required] `core/v1.LocalObjectReference` | The name of the PostgreSQL cluster that identifies the "publisher" |
+| `name` [Required] `string` | The name of the publication inside PostgreSQL |
+| `dbname` [Required] `string` | The name of the database where the publication will be installed in the "publisher" cluster |
+| `parameters` `map[string]string` | Publication parameters part of the WITH clause as expected by PostgreSQL CREATE PUBLICATION command |
+| `target` [Required] `PublicationTarget` | Target of the publication as expected by PostgreSQL CREATE PUBLICATION command |
+| `publicationReclaimPolicy` `PublicationReclaimPolicy` | The policy for end-of-life maintenance of this publication |
+
+
+
+
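+For instance (cluster, database, and table names are hypothetical), a
+`Publication` resource targeting a single table could be declared as:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Publication
+metadata:
+  name: publication-example
+spec:
+  cluster:
+    name: cluster-example
+  name: pub
+  dbname: app
+  target:
+    objects:
+      - table:
+          name: orders
+          schema: public
+```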
+
+
+## PublicationStatus
+
+**Appears in:**
+
+- [Publication](#postgresql-k8s-enterprisedb-io-v1-Publication)
+
+PublicationStatus defines the observed state of Publication
+
+
+| Field | Description |
+| ----- | ----------- |
+| `observedGeneration` `int64` | A sequence number representing the latest desired state that was synchronized |
+| `applied` `bool` | Applied is true if the publication was reconciled correctly |
+| `message` `string` | Message is the reconciliation output message |
+
+
+
+
+
+
+## PublicationTarget
+
+**Appears in:**
+
+- [PublicationSpec](#postgresql-k8s-enterprisedb-io-v1-PublicationSpec)
+
+PublicationTarget is what this publication should publish
+
+
+| Field | Description |
+| ----- | ----------- |
+| `allTables` `bool` | Marks the publication as one that replicates changes for all tables in the database, including tables created in the future. Corresponding to FOR ALL TABLES in PostgreSQL. |
+| `objects` `[]PublicationTargetObject` | Just the following schema objects |
+
+
+
+
+
+
+## PublicationTargetObject
+
+**Appears in:**
+
+- [PublicationTarget](#postgresql-k8s-enterprisedb-io-v1-PublicationTarget)
+
+PublicationTargetObject is an object to publish
+
+
+| Field | Description |
+| ----- | ----------- |
+| `tablesInSchema` `string` | Marks the publication as one that replicates changes for all tables in the specified list of schemas, including tables created in the future. Corresponding to FOR TABLES IN SCHEMA in PostgreSQL. |
+| `table` `PublicationTargetTable` | Specifies a list of tables to add to the publication. Corresponding to FOR TABLE in PostgreSQL. |
+
+
+
+
+
+
+## PublicationTargetTable
+
+**Appears in:**
+
+- [PublicationTargetObject](#postgresql-k8s-enterprisedb-io-v1-PublicationTargetObject)
+
+PublicationTargetTable is a table to publish
+
+
+| Field | Description |
+| ----- | ----------- |
+| `only` `bool` | Whether to limit to the table only or include all its descendants |
+| `name` [Required] `string` | The table name |
+| `schema` `string` | The schema name |
+| `columns` `[]string` | The columns to publish |
+
+
+
+
+
+
+## RecoveryTarget
+
+**Appears in:**
+
+- [BootstrapRecovery](#postgresql-k8s-enterprisedb-io-v1-BootstrapRecovery)
+
+RecoveryTarget allows you to configure the point at which the recovery process
+will stop. All the target options except TargetTLI are mutually exclusive.
+
+
+| Field | Description |
+| ----- | ----------- |
+| `backupID` `string` | The ID of the backup from which to start the recovery process. If empty (default) the operator will automatically detect the backup based on targetTime or targetLSN if specified. Otherwise use the latest available backup in chronological order. |
+| `targetTLI` `string` | The target timeline ("latest" or a positive integer) |
+| `targetXID` `string` | The target transaction ID |
+| `targetName` `string` | The target name (to be previously created with pg_create_restore_point) |
+| `targetLSN` `string` | The target LSN (Log Sequence Number) |
+| `targetTime` `string` | The target time as a timestamp in the RFC3339 standard |
+| `targetImmediate` `bool` | End recovery as soon as a consistent state is reached |
+| `exclusive` `bool` | Set the target to be exclusive. If omitted, defaults to false, so that in Postgres, recovery_target_inclusive will be true |
+
+
+
+
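+Since all target options except `targetTLI` are mutually exclusive, a
+point-in-time recovery bootstrap sets exactly one of them. A hedged sketch
+(the external cluster name and timestamp are illustrative):
+
+```yaml
+spec:
+  bootstrap:
+    recovery:
+      source: origin-cluster
+      recoveryTarget:
+        # Stop replaying WAL at this point in time (RFC3339)
+        targetTime: "2025-11-21T10:00:00Z"
+```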
+
+
+## ReplicaClusterConfiguration
+
+**Appears in:**
+
+- [ClusterSpec](#postgresql-k8s-enterprisedb-io-v1-ClusterSpec)
+
+ReplicaClusterConfiguration encapsulates the configuration of a replica
+cluster
+
+
+| Field | Description |
+| ----- | ----------- |
+| `self` `string` | Self defines the name of this cluster. It is used to determine if this is a primary or a replica cluster, comparing it with primary |
+| `primary` `string` | Primary defines which Cluster is defined to be the primary in the distributed PostgreSQL cluster, based on the topology specified in externalClusters |
+| `source` [Required] `string` | The name of the external cluster which is the replication origin |
+| `enabled` `bool` | If replica mode is enabled, this cluster will be a replica of an existing cluster. Replica cluster can be created from a recovery object store or via streaming through pg_basebackup. Refer to the Replica clusters page of the documentation for more information. |
+| `promotionToken` `string` | A demotion token generated by an external cluster used to check if the promotion requirements are met. |
+| `minApplyDelay` `meta/v1.Duration` | When replica mode is enabled, this parameter allows you to replay transactions only when the system time is at least the configured time past the commit time. This provides an opportunity to correct data loss errors. Note that when this parameter is set, a promotion token cannot be used. |
+
+
+
+
+
+
+## ReplicationSlotsConfiguration
+
+**Appears in:**
+
+- [ClusterSpec](#postgresql-k8s-enterprisedb-io-v1-ClusterSpec)
+
+ReplicationSlotsConfiguration encapsulates the configuration
+of replication slots
+
+
+| Field | Description |
+| ----- | ----------- |
+| `highAvailability` `ReplicationSlotsHAConfiguration` | Replication slots for high availability configuration |
+| `updateInterval` `int` | Standby will update the status of the local replication slots every updateInterval seconds (default 30). |
+| `synchronizeReplicas` `SynchronizeReplicasConfiguration` | Configures the synchronization of the user defined physical replication slots |
+
+
+
+
+
+
+## ReplicationSlotsHAConfiguration
+
+**Appears in:**
+
+- [ReplicationSlotsConfiguration](#postgresql-k8s-enterprisedb-io-v1-ReplicationSlotsConfiguration)
+
+ReplicationSlotsHAConfiguration encapsulates the configuration
+of the replication slots that are automatically managed by
+the operator to control the streaming replication connections
+with the standby instances for high availability (HA) purposes.
+Replication slots are a PostgreSQL feature that makes sure
+that PostgreSQL automatically keeps WAL files in the primary
+when a streaming client (in this specific case a replica that
+is part of the HA cluster) gets disconnected.
+
+
+| Field | Description |
+| ----- | ----------- |
+| `enabled` `bool` | If enabled (default), the operator will automatically manage replication slots on the primary instance and use them in streaming replication connections with all the standby instances that are part of the HA cluster. If disabled, the operator will not take advantage of replication slots in streaming connections with the replicas. This feature also controls replication slots in replica cluster, from the designated primary to its cascading replicas. |
+| `slotPrefix` `string` | Prefix for replication slots managed by the operator for HA. It may only contain lower case letters, numbers, and the underscore character. This can only be set at creation time. By default set to _cnp_. |
+| `synchronizeLogicalDecoding` `bool` | When enabled, the operator automatically manages synchronization of logical decoding (replication) slots across high-availability clusters. Requires one of the following conditions: PostgreSQL version 17 or later, or PostgreSQL version < 17 with the pg_failover_slots extension enabled. |
+
+
+
+
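+A minimal sketch of HA replication slot settings on a cluster (the values
+shown match the documented defaults):
+
+```yaml
+spec:
+  replicationSlots:
+    highAvailability:
+      enabled: true
+      slotPrefix: _cnp_
+    # Standbys refresh local slot status every 30 seconds
+    updateInterval: 30
+```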
+
+
+## RoleConfiguration
+
+**Appears in:**
+
+- [ManagedConfiguration](#postgresql-k8s-enterprisedb-io-v1-ManagedConfiguration)
+
+RoleConfiguration is the representation, in Kubernetes, of a PostgreSQL role
+with the additional field Ensure specifying whether to ensure the presence or
+absence of the role in the database
+The defaults of the CREATE ROLE command are applied
+Reference: https://www.postgresql.org/docs/current/sql-createrole.html
+
+
+| Field | Description |
+| ----- | ----------- |
+| `name` [Required] `string` | Name of the role |
+| `comment` `string` | Description of the role |
+| `ensure` `EnsureOption` | Ensure the role is present or absent - defaults to "present" |
+| `passwordSecret` `github.com/cloudnative-pg/machinery/pkg/api.LocalObjectReference` | Secret containing the password of the role (if present). If null, the password will be ignored unless DisablePassword is set |
+| `connectionLimit` `int64` | If the role can log in, this specifies how many concurrent connections the role can make. -1 (the default) means no limit. |
+| `validUntil` `meta/v1.Time` | Date and time after which the role's password is no longer valid. When omitted, the password will never expire (default). |
+| `inRoles` `[]string` | List of one or more existing roles to which this role will be immediately added as a new member. Default empty. |
+| `inherit` `bool` | Whether a role "inherits" the privileges of roles it is a member of. Default is true. |
+| `disablePassword` `bool` | DisablePassword indicates that a role's password should be set to NULL in Postgres |
+| `superuser` `bool` | Whether the role is a superuser who can override all access restrictions within the database - superuser status is dangerous and should be used only when really needed. You must yourself be a superuser to create a new superuser. Default is false. |
+| `createdb` `bool` | When set to true, the role being defined will be allowed to create new databases. Specifying false (default) will deny a role the ability to create databases. |
+| `createrole` `bool` | Whether the role will be permitted to create, alter, drop, comment on, change the security label for, and grant or revoke membership in other roles. Default is false. |
+| `login` `bool` | Whether the role is allowed to log in. A role having the login attribute can be thought of as a user. Roles without this attribute are useful for managing database privileges, but are not users in the usual sense of the word. Default is false. |
+| `replication` `bool` | Whether a role is a replication role. A role must have this attribute (or be a superuser) in order to be able to connect to the server in replication mode (physical or logical replication) and in order to be able to create or drop replication slots. A role having the replication attribute is a very highly privileged role, and should only be used on roles actually used for replication. Default is false. |
+| `bypassrls` `bool` | Whether a role bypasses every row-level security (RLS) policy. Default is false. |
+
+
+
+
+
+
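+As an illustrative sketch (the role name, membership, and secret name below are
+hypothetical), a role managed with these fields could be declared in the
+cluster specification as follows:
+
+```yaml
+spec:
+  managed:
+    roles:
+      - name: dante
+        ensure: present
+        comment: Dante Alighieri
+        login: true
+        connectionLimit: 10
+        inRoles:
+          - pg_monitor
+        passwordSecret:
+          name: dante-password
+```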
+## SQLRefs
+
+**Appears in:**
+
+- [BootstrapInitDB](#postgresql-k8s-enterprisedb-io-v1-BootstrapInitDB)
+
+SQLRefs holds references to ConfigMaps or Secrets
+containing SQL files. The references are processed in a specific order:
+first, all Secrets are processed, followed by all ConfigMaps.
+Within each group, the processing order follows the sequence specified
+in their respective arrays.
+
+
+
+
+
+## ScheduledBackupSpec
+
+**Appears in:**
+
+- [ScheduledBackup](#postgresql-k8s-enterprisedb-io-v1-ScheduledBackup)
+
+ScheduledBackupSpec defines the desired state of ScheduledBackup
+
+
+| Field | Description |
+
+suspend
+bool
+ |
+
+ Whether this scheduled backup is suspended
+ |
+
+immediate
+bool
+ |
+
+ Whether the first backup should start immediately after the resource is created
+ |
+
+schedule [Required]
+string
+ |
+
+ The schedule does not follow the same format used in Kubernetes CronJobs
+as it includes an additional seconds specifier,
+see https://pkg.go.dev/github.com/robfig/cron#hdr-CRON_Expression_Format
+ |
+
+cluster [Required]
+github.com/cloudnative-pg/machinery/pkg/api.LocalObjectReference
+ |
+
+ The cluster to backup
+ |
+
+backupOwnerReference
+string
+ |
+
+ Indicates which ownerReference should be put inside the created backup resources.
+
+- none: no owner reference for created backup objects (same behavior as before the field was introduced)
+- self: sets the ScheduledBackup object as owner of the backup
+- cluster: sets the cluster as owner of the backup
+
+ |
+
+target
+BackupTarget
+ |
+
+ The policy to decide which instance should perform this backup. If empty,
+it defaults to cluster.spec.backup.target.
+Available options are empty string, primary and prefer-standby.
+primary to have backups run always on primary instances,
+prefer-standby to have backups run preferably on the most updated
+standby, if available.
+ |
+
+method
+BackupMethod
+ |
+
+ The backup method to be used, possible options are barmanObjectStore,
+volumeSnapshot or plugin. Defaults to: barmanObjectStore.
+ |
+
+pluginConfiguration
+BackupPluginConfiguration
+ |
+
+ Configuration parameters passed to the plugin managing this backup
+ |
+
+online
+bool
+ |
+
+ Whether the default type of backup with volume snapshots is
+online/hot (true, default) or offline/cold (false)
+Overrides the default setting specified in the cluster field '.spec.backup.volumeSnapshot.online'
+ |
+
+onlineConfiguration
+OnlineConfiguration
+ |
+
+ Configuration parameters to control the online/hot backup with volume snapshots
+Overrides the default settings specified in the cluster '.backup.volumeSnapshot.onlineConfiguration' stanza
+ |
+
+
+
+
+
+
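+For illustration, a minimal ScheduledBackup using these fields might look as
+follows (resource and cluster names are hypothetical); note the six-field cron
+expression, whose first field is seconds:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: ScheduledBackup
+metadata:
+  name: backup-example
+spec:
+  # Six-field cron expression: runs daily at midnight
+  schedule: "0 0 0 * * *"
+  immediate: true
+  backupOwnerReference: self
+  cluster:
+    name: cluster-example
+```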
+## ScheduledBackupStatus
+
+**Appears in:**
+
+- [ScheduledBackup](#postgresql-k8s-enterprisedb-io-v1-ScheduledBackup)
+
+ScheduledBackupStatus defines the observed state of ScheduledBackup
+
+
+| Field | Description |
+
+lastCheckTime
+meta/v1.Time
+ |
+
+ The latest time the schedule was checked
+ |
+
+lastScheduleTime
+meta/v1.Time
+ |
+
+ The last time a backup was successfully scheduled.
+ |
+
+nextScheduleTime
+meta/v1.Time
+ |
+
+ The next time a backup is scheduled to run
+ |
+
+
+
+
+
+
+## SchemaSpec
+
+**Appears in:**
+
+- [DatabaseSpec](#postgresql-k8s-enterprisedb-io-v1-DatabaseSpec)
+
+SchemaSpec configures a schema in a database
+
+
+| Field | Description |
+
+DatabaseObjectSpec
+DatabaseObjectSpec
+ |
+(Members of DatabaseObjectSpec are embedded into this type.)
+ Common fields
+ |
+
+owner [Required]
+string
+ |
+
+ The role name of the user who owns the schema inside PostgreSQL.
+It maps to the AUTHORIZATION parameter of CREATE SCHEMA and the
+OWNER TO command of ALTER SCHEMA.
+ |
+
+
+
+
+
+
+## SecretVersion
+
+**Appears in:**
+
+- [PgBouncerSecrets](#postgresql-k8s-enterprisedb-io-v1-PgBouncerSecrets)
+
+- [PoolerSecrets](#postgresql-k8s-enterprisedb-io-v1-PoolerSecrets)
+
+SecretVersion contains a secret name and its ResourceVersion
+
+
+| Field | Description |
+
+name
+string
+ |
+
+ The name of the secret
+ |
+
+version
+string
+ |
+
+ The ResourceVersion of the secret
+ |
+
+
+
+
+
+
+## SecretsResourceVersion
+
+**Appears in:**
+
+- [ClusterStatus](#postgresql-k8s-enterprisedb-io-v1-ClusterStatus)
+
+SecretsResourceVersion is the resource versions of the secrets
+managed by the operator
+
+
+| Field | Description |
+
+superuserSecretVersion
+string
+ |
+
+ The resource version of the "postgres" user secret
+ |
+
+replicationSecretVersion
+string
+ |
+
+ The resource version of the "streaming_replica" user secret
+ |
+
+applicationSecretVersion
+string
+ |
+
+ The resource version of the "app" user secret
+ |
+
+managedRoleSecretVersion
+map[string]string
+ |
+
+ The resource versions of the managed roles secrets
+ |
+
+caSecretVersion
+string
+ |
+
+ Unused. Retained for compatibility with old versions.
+ |
+
+clientCaSecretVersion
+string
+ |
+
+ The resource version of the PostgreSQL client-side CA secret
+ |
+
+serverCaSecretVersion
+string
+ |
+
+ The resource version of the PostgreSQL server-side CA secret
+ |
+
+serverSecretVersion
+string
+ |
+
+ The resource version of the PostgreSQL server-side secret
+ |
+
+barmanEndpointCA
+string
+ |
+
+ The resource version of the Barman Endpoint CA if provided
+ |
+
+externalClusterSecretVersion
+map[string]string
+ |
+
+ The resource versions of the external cluster secrets
+ |
+
+metrics
+map[string]string
+ |
+
+ A map with the versions of all the secrets used to pass metrics.
+Map keys are the secret names, map values are the versions
+ |
+
+
+
+
+
+
+## ServerSpec
+
+**Appears in:**
+
+- [DatabaseSpec](#postgresql-k8s-enterprisedb-io-v1-DatabaseSpec)
+
+ServerSpec configures a server of a foreign data wrapper
+
+
+| Field | Description |
+
+DatabaseObjectSpec
+DatabaseObjectSpec
+ |
+(Members of DatabaseObjectSpec are embedded into this type.)
+ Common fields
+ |
+
+fdw [Required]
+string
+ |
+
+ The name of the Foreign Data Wrapper (FDW)
+ |
+
+options
+[]OptionSpec
+ |
+
+ Options specifies the configuration options for the server
+(key is the option name, value is the option value).
+ |
+
+usage
+[]UsageSpec
+ |
+
+ List of roles for which USAGE privileges on the server are granted or revoked.
+ |
+
+
+
+
+
+
+## ServiceAccountTemplate
+
+**Appears in:**
+
+- [ClusterSpec](#postgresql-k8s-enterprisedb-io-v1-ClusterSpec)
+
+ServiceAccountTemplate contains the template needed to generate the service accounts
+
+
+| Field | Description |
+
+metadata [Required]
+Metadata
+ |
+
+ Metadata are the metadata to be used for the generated
+service account
+ |
+
+
+
+
+
+
+## ServiceSelectorType
+
+(Alias of `string`)
+
+**Appears in:**
+
+- [ManagedService](#postgresql-k8s-enterprisedb-io-v1-ManagedService)
+
+- [ManagedServices](#postgresql-k8s-enterprisedb-io-v1-ManagedServices)
+
+ServiceSelectorType describes a valid value for generating the service selectors.
+It indicates which type of service the selector applies to, such as read-write, read, or read-only
+
+
+
+## ServiceTemplateSpec
+
+**Appears in:**
+
+- [ManagedService](#postgresql-k8s-enterprisedb-io-v1-ManagedService)
+
+- [PoolerSpec](#postgresql-k8s-enterprisedb-io-v1-PoolerSpec)
+
+ServiceTemplateSpec is a structure allowing the user to set
+a template for Service generation.
+
+
+| Field | Description |
+
+metadata
+Metadata
+ |
+
+ Standard object's metadata.
+More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
+ |
+
+spec
+core/v1.ServiceSpec
+ |
+
+ Specification of the desired behavior of the service.
+More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
+ |
+
+
+
+
+
+
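+As a sketch of how a service template can be used, the following hypothetical
+snippet (service name and selector are illustrative) attaches a LoadBalancer
+template to an additional managed service:
+
+```yaml
+spec:
+  managed:
+    services:
+      additional:
+        - selectorType: rw
+          serviceTemplate:
+            metadata:
+              name: cluster-example-rw-lb
+            spec:
+              type: LoadBalancer
+```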
+## ServiceUpdateStrategy
+
+(Alias of `string`)
+
+**Appears in:**
+
+- [ManagedService](#postgresql-k8s-enterprisedb-io-v1-ManagedService)
+
+ServiceUpdateStrategy describes how the changes to the managed service should be handled
+
+
+
+## SnapshotOwnerReference
+
+(Alias of `string`)
+
+**Appears in:**
+
+- [VolumeSnapshotConfiguration](#postgresql-k8s-enterprisedb-io-v1-VolumeSnapshotConfiguration)
+
+SnapshotOwnerReference defines the reference type for the owner of the snapshot.
+This specifies which owner the processed resources should relate to.
+
+
+
+## SnapshotType
+
+(Alias of `string`)
+
+**Appears in:**
+
+- [Import](#postgresql-k8s-enterprisedb-io-v1-Import)
+
+SnapshotType is a type of allowed import
+
+
+
+## StorageConfiguration
+
+**Appears in:**
+
+- [ClusterSpec](#postgresql-k8s-enterprisedb-io-v1-ClusterSpec)
+
+- [TablespaceConfiguration](#postgresql-k8s-enterprisedb-io-v1-TablespaceConfiguration)
+
+StorageConfiguration is the configuration used to create and reconcile PVCs,
+usable for WAL volumes, PGDATA volumes, or tablespaces
+
+
+| Field | Description |
+
+storageClass
+string
+ |
+
+ StorageClass to use for PVCs. Applied after
+evaluating the PVC template, if available.
+If not specified, the generated PVCs will use the
+default storage class
+ |
+
+size
+string
+ |
+
+ Size of the storage. Required if not already specified in the PVC template.
+Changes to this field are automatically reapplied to the created PVCs.
+Size cannot be decreased.
+ |
+
+resizeInUseVolumes
+bool
+ |
+
+ Resize existing PVCs; defaults to true
+ |
+
+pvcTemplate
+core/v1.PersistentVolumeClaimSpec
+ |
+
+ Template to be used to generate the Persistent Volume Claim
+ |
+
+
+
+
+
+
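+For example, a storage configuration for PGDATA and WAL volumes might look
+like this (the storage class name is illustrative):
+
+```yaml
+spec:
+  storage:
+    storageClass: standard
+    size: 1Gi
+    resizeInUseVolumes: true
+  walStorage:
+    storageClass: standard
+    size: 1Gi
+```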
+## SubscriptionReclaimPolicy
+
+(Alias of `string`)
+
+**Appears in:**
+
+- [SubscriptionSpec](#postgresql-k8s-enterprisedb-io-v1-SubscriptionSpec)
+
+SubscriptionReclaimPolicy describes a policy for end-of-life maintenance of Subscriptions.
+
+
+
+## SubscriptionSpec
+
+**Appears in:**
+
+- [Subscription](#postgresql-k8s-enterprisedb-io-v1-Subscription)
+
+SubscriptionSpec defines the desired state of Subscription
+
+
+| Field | Description |
+
+cluster [Required]
+core/v1.LocalObjectReference
+ |
+
+ The name of the PostgreSQL cluster that identifies the "subscriber"
+ |
+
+name [Required]
+string
+ |
+
+ The name of the subscription inside PostgreSQL
+ |
+
+dbname [Required]
+string
+ |
+
+ The name of the database where the publication will be installed in
+the "subscriber" cluster
+ |
+
+parameters
+map[string]string
+ |
+
+ Subscription parameters included in the WITH clause of the PostgreSQL
+CREATE SUBSCRIPTION command. Most parameters cannot be changed
+after the subscription is created and will be ignored if modified
+later, except for a limited set documented at:
+https://www.postgresql.org/docs/current/sql-altersubscription.html#SQL-ALTERSUBSCRIPTION-PARAMS-SET
+ |
+
+publicationName [Required]
+string
+ |
+
+ The name of the publication inside the PostgreSQL database in the
+"publisher"
+ |
+
+publicationDBName
+string
+ |
+
+ The name of the database containing the publication on the external
+cluster. Defaults to the one in the external cluster definition.
+ |
+
+externalClusterName [Required]
+string
+ |
+
+ The name of the external cluster with the publication ("publisher")
+ |
+
+subscriptionReclaimPolicy
+SubscriptionReclaimPolicy
+ |
+
+ The policy for end-of-life maintenance of this subscription
+ |
+
+
+
+
+
+
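+For illustration, a Subscription tying a "subscriber" cluster to a publication
+defined on an external "publisher" cluster might be declared as follows (all
+names are hypothetical):
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Subscription
+metadata:
+  name: subscription-example
+spec:
+  cluster:
+    name: destination-cluster
+  name: sub
+  dbname: app
+  publicationName: pub
+  externalClusterName: source-cluster
+```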
+## SubscriptionStatus
+
+**Appears in:**
+
+- [Subscription](#postgresql-k8s-enterprisedb-io-v1-Subscription)
+
+SubscriptionStatus defines the observed state of Subscription
+
+
+| Field | Description |
+
+observedGeneration
+int64
+ |
+
+ A sequence number representing the latest
+desired state that was synchronized
+ |
+
+applied
+bool
+ |
+
+ Applied is true if the subscription was reconciled correctly
+ |
+
+message
+string
+ |
+
+ Message is the reconciliation output message
+ |
+
+
+
+
+
+
+## SwitchReplicaClusterStatus
+
+**Appears in:**
+
+- [ClusterStatus](#postgresql-k8s-enterprisedb-io-v1-ClusterStatus)
+
+SwitchReplicaClusterStatus contains all the statuses regarding the switch of a cluster to a replica cluster
+
+
+| Field | Description |
+
+inProgress
+bool
+ |
+
+ InProgress indicates if there is an ongoing procedure of switching a cluster to a replica cluster.
+ |
+
+
+
+
+
+
+## SyncReplicaElectionConstraints
+
+**Appears in:**
+
+- [PostgresConfiguration](#postgresql-k8s-enterprisedb-io-v1-PostgresConfiguration)
+
+SyncReplicaElectionConstraints contains the constraints for synchronous replica election.
+For the anti-affinity parameters, two instances are considered in the same location
+if all the label values match.
+In the future, restricting synchronous replica election by name will be supported.
+
+
+| Field | Description |
+
+nodeLabelsAntiAffinity
+[]string
+ |
+
+ A list of node label values to extract and compare to evaluate if the pods reside in the same topology or not
+ |
+
+enabled [Required]
+bool
+ |
+
+ This flag enables the constraints for sync replicas
+ |
+
+
+
+
+
+
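+As a sketch, these constraints are set under the cluster's PostgreSQL
+configuration (the node label below is illustrative):
+
+```yaml
+spec:
+  postgresql:
+    syncReplicaElectionConstraint:
+      enabled: true
+      nodeLabelsAntiAffinity:
+        - topology.kubernetes.io/zone
+```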
+## SynchronizeReplicasConfiguration
+
+**Appears in:**
+
+- [ReplicationSlotsConfiguration](#postgresql-k8s-enterprisedb-io-v1-ReplicationSlotsConfiguration)
+
+SynchronizeReplicasConfiguration contains the configuration for the synchronization of user defined
+physical replication slots
+
+
+| Field | Description |
+
+enabled [Required]
+bool
+ |
+
+ When set to true, every replication slot that is on the primary is synchronized on each standby
+ |
+
+excludePatterns
+[]string
+ |
+
+ List of regular expression patterns to match the names of replication slots to be excluded (by default empty)
+ |
+
+
+
+
+
+
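+For example, synchronization of user-defined replication slots can be enabled
+while excluding slots matching a pattern (the pattern is illustrative):
+
+```yaml
+spec:
+  replicationSlots:
+    synchronizeReplicas:
+      enabled: true
+      excludePatterns:
+        - "^manual_"
+```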
+## SynchronousReplicaConfiguration
+
+**Appears in:**
+
+- [PostgresConfiguration](#postgresql-k8s-enterprisedb-io-v1-PostgresConfiguration)
+
+SynchronousReplicaConfiguration contains the configuration of the
+PostgreSQL synchronous replication feature.
+Important: at this moment, .spec.minSyncReplicas and .spec.maxSyncReplicas
+must also be taken into account.
+
+
+| Field | Description |
+
+method [Required]
+SynchronousReplicaConfigurationMethod
+ |
+
+ Method to select synchronous replication standbys from the listed
+servers, accepting 'any' (quorum-based synchronous replication) or
+'first' (priority-based synchronous replication) as values.
+ |
+
+number [Required]
+int
+ |
+
+ Specifies the number of synchronous standby servers that
+transactions must wait for responses from.
+ |
+
+maxStandbyNamesFromCluster
+int
+ |
+
+ Specifies the maximum number of local cluster pods that can be
+automatically included in the synchronous_standby_names option in
+PostgreSQL.
+ |
+
+standbyNamesPre
+[]string
+ |
+
+ A user-defined list of application names to be added to
+synchronous_standby_names before local cluster pods (the order is
+only useful for priority-based synchronous replication).
+ |
+
+standbyNamesPost
+[]string
+ |
+
+ A user-defined list of application names to be added to
+synchronous_standby_names after local cluster pods (the order is
+only useful for priority-based synchronous replication).
+ |
+
+dataDurability
+DataDurabilityLevel
+ |
+
+ If set to "required", data durability is strictly enforced. Write operations
+with synchronous commit settings (on, remote_write, or remote_apply) will
+block if there are insufficient healthy replicas, ensuring data persistence.
+If set to "preferred", data durability is maintained when healthy replicas
+are available, but the required number of instances will adjust dynamically
+if replicas become unavailable. This setting relaxes strict durability enforcement
+to allow for operational continuity. This setting is only applicable if both
+standbyNamesPre and standbyNamesPost are unset (empty).
+ |
+
+failoverQuorum
+bool
+ |
+
+ FailoverQuorum enables a quorum-based check before failover, improving
+data durability and safety during failover events in {{name.ln}}-managed
+PostgreSQL clusters.
+ |
+
+
+
+
+
+
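+For illustration, quorum-based synchronous replication waiting for one standby,
+with strict data durability, could be configured as:
+
+```yaml
+spec:
+  postgresql:
+    synchronous:
+      method: any
+      number: 1
+      dataDurability: required
+```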
+## SynchronousReplicaConfigurationMethod
+
+(Alias of `string`)
+
+**Appears in:**
+
+- [SynchronousReplicaConfiguration](#postgresql-k8s-enterprisedb-io-v1-SynchronousReplicaConfiguration)
+
+SynchronousReplicaConfigurationMethod configures whether to use
+quorum based replication or a priority list
+
+
+
+## TDEConfiguration
+
+**Appears in:**
+
+- [EPASConfiguration](#postgresql-k8s-enterprisedb-io-v1-EPASConfiguration)
+
+TDEConfiguration contains the Transparent Data Encryption configuration
+
+
+| Field | Description |
+
+enabled
+bool
+ |
+
+ True if TDE should be enabled
+ |
+
+secretKeyRef
+core/v1.SecretKeySelector
+ |
+
+ Reference to the secret that contains the encryption key
+ |
+
+wrapCommand
+core/v1.SecretKeySelector
+ |
+
+ WrapCommand is the encrypt command provided by the user
+ |
+
+unwrapCommand
+core/v1.SecretKeySelector
+ |
+
+ UnwrapCommand is the decryption command provided by the user
+ |
+
+passphraseCommand
+core/v1.SecretKeySelector
+ |
+
+ PassphraseCommand is the command executed to get the passphrase that will be
+passed to the OpenSSL command to encrypt and decrypt
+ |
+
+
+
+
+
+
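+As a sketch, TDE can be enabled under the EPAS configuration by referencing a
+secret holding the encryption key (the secret name and key are illustrative):
+
+```yaml
+spec:
+  postgresql:
+    epas:
+      tde:
+        enabled: true
+        secretKeyRef:
+          name: tde-key
+          key: key
+```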
+## TablespaceConfiguration
+
+**Appears in:**
+
+- [ClusterSpec](#postgresql-k8s-enterprisedb-io-v1-ClusterSpec)
+
+TablespaceConfiguration is the configuration of a tablespace, and includes
+the storage specification for the tablespace
+
+
+| Field | Description |
+
+name [Required]
+string
+ |
+
+ The name of the tablespace
+ |
+
+storage [Required]
+StorageConfiguration
+ |
+
+ The storage configuration for the tablespace
+ |
+
+owner
+DatabaseRoleRef
+ |
+
+ Owner is the PostgreSQL user owning the tablespace
+ |
+
+temporary
+bool
+ |
+
+ When set to true, the tablespace will be added as a temp_tablespaces
+entry in PostgreSQL, and will be available to automatically house temp
+database objects, or other temporary files. Please refer to PostgreSQL
+documentation for more information on the temp_tablespaces GUC.
+ |
+
+
+
+
+
+
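+For illustration, a tablespace with its own storage and owner might be declared
+as follows (the tablespace name and owner are hypothetical):
+
+```yaml
+spec:
+  tablespaces:
+    - name: atablespace
+      storage:
+        size: 1Gi
+      owner:
+        name: app
+      temporary: false
+```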
+## TablespaceState
+
+**Appears in:**
+
+- [ClusterStatus](#postgresql-k8s-enterprisedb-io-v1-ClusterStatus)
+
+TablespaceState represents the state of a tablespace in a cluster
+
+
+| Field | Description |
+
+name [Required]
+string
+ |
+
+ Name is the name of the tablespace
+ |
+
+owner
+string
+ |
+
+ Owner is the PostgreSQL user owning the tablespace
+ |
+
+state [Required]
+TablespaceStatus
+ |
+
+ State is the latest reconciliation state
+ |
+
+error
+string
+ |
+
+ Error is the reconciliation error, if any
+ |
+
+
+
+
+
+
+## TablespaceStatus
+
+(Alias of `string`)
+
+**Appears in:**
+
+- [TablespaceState](#postgresql-k8s-enterprisedb-io-v1-TablespaceState)
+
+TablespaceStatus represents the status of a tablespace in the cluster
+
+
+
+## Topology
+
+**Appears in:**
+
+- [ClusterStatus](#postgresql-k8s-enterprisedb-io-v1-ClusterStatus)
+
+Topology contains the cluster topology
+
+
+| Field | Description |
+
+instances
+map[PodName]PodTopologyLabels
+ |
+
+ Instances contains the pod topology of the instances
+ |
+
+nodesUsed
+int32
+ |
+
+ NodesUsed represents the count of distinct nodes accommodating the instances.
+A value of '1' suggests that all instances are hosted on a single node,
+implying the absence of High Availability (HA). Ideally, this value should
+be the same as the number of instances in the Postgres HA cluster, implying
+shared nothing architecture on the compute side.
+ |
+
+successfullyExtracted
+bool
+ |
+
+ SuccessfullyExtracted indicates if the topology data was extracted. It is useful for enacting
+fallback behaviors in synchronous replica election in case of failures
+ |
+
+
+
+
+
+
+## UsageSpec
+
+**Appears in:**
+
+- [FDWSpec](#postgresql-k8s-enterprisedb-io-v1-FDWSpec)
+
+- [ServerSpec](#postgresql-k8s-enterprisedb-io-v1-ServerSpec)
+
+UsageSpec configures a usage for a foreign data wrapper
+
+
+| Field | Description |
+
+name [Required]
+string
+ |
+
+ Name of the usage
+ |
+
+type
+UsageSpecType
+ |
+
+ The type of usage
+ |
+
+
+
+
+
+
+## UsageSpecType
+
+(Alias of `string`)
+
+**Appears in:**
+
+- [UsageSpec](#postgresql-k8s-enterprisedb-io-v1-UsageSpec)
+
+UsageSpecType describes the type of usage specified in the usage field of the
+Database object.
+
+
+
+## VolumeSnapshotConfiguration
+
+**Appears in:**
+
+- [BackupConfiguration](#postgresql-k8s-enterprisedb-io-v1-BackupConfiguration)
+
+VolumeSnapshotConfiguration represents the configuration for the execution of snapshot backups.
+
+
+| Field | Description |
+
+labels
+map[string]string
+ |
+
+ Labels are key-value pairs that will be added to the .metadata.labels of the snapshot resources.
+ |
+
+annotations
+map[string]string
+ |
+
+ Annotations are key-value pairs that will be added to the .metadata.annotations of the snapshot resources.
+ |
+
+className
+string
+ |
+
+ ClassName specifies the Snapshot Class to be used for the PG_DATA PersistentVolumeClaim.
+It is the default class for the other types if no specific class is present
+ |
+
+walClassName
+string
+ |
+
+ WalClassName specifies the Snapshot Class to be used for the PG_WAL PersistentVolumeClaim.
+ |
+
+tablespaceClassName
+map[string]string
+ |
+
+ TablespaceClassName specifies the Snapshot Class to be used for the tablespaces.
+It defaults to the PGDATA Snapshot Class, if set.
+ |
+
+snapshotOwnerReference
+SnapshotOwnerReference
+ |
+
+ SnapshotOwnerReference indicates the type of owner reference the snapshot should have
+ |
+
+online
+bool
+ |
+
+ Whether the default type of backup with volume snapshots is
+online/hot (true, default) or offline/cold (false)
+ |
+
+onlineConfiguration
+OnlineConfiguration
+ |
+
+ Configuration parameters to control the online/hot backup with volume snapshots
+ |
+
+
+
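+For example, a cluster configured for online volume snapshot backups with a
+specific snapshot class might use (the class name is illustrative):
+
+```yaml
+spec:
+  backup:
+    volumeSnapshot:
+      className: csi-snapclass
+      online: true
+      snapshotOwnerReference: cluster
+```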
diff --git a/product_docs/docs/postgres_for_kubernetes/1/postgis.mdx b/product_docs/docs/postgres_for_kubernetes/1/postgis.mdx
index 4163661461..90b28df474 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/postgis.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/1/postgis.mdx
@@ -122,7 +122,7 @@ values from the ones in this document):
```console
$ kubectl cnpg psql postgis-example -- app
-psql (18.0 (Debian 18.0-1.pgdg13+3))
+psql (17.6 (Debian 17.6-1.pgdg13+3))
Type "help" for help.
app=# SELECT * FROM pg_available_extensions WHERE name ~ '^postgis' ORDER BY 1;
diff --git a/product_docs/docs/postgres_for_kubernetes/1/preview_version.mdx b/product_docs/docs/postgres_for_kubernetes/1/preview_version.mdx
index 79e31a5d83..e4a5e5a900 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/preview_version.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/1/preview_version.mdx
@@ -38,4 +38,11 @@ are not backwards compatible and could be removed entirely.
## Current Preview Version
-There is no current preview. We'll update this page when the next one is ready!
\ No newline at end of file
+A preview version is currently available.
+
+The current preview version is **1.28.0-rc1**.
+
+For more information on the current preview version and how to test, please view the links below:
+
+- [Announcement](https://cloudnative-pg.io/releases/cloudnative-pg-1-28.0-rc1-released/)
+- [Documentation](https://cloudnative-pg.io/documentation/preview/)
diff --git a/product_docs/docs/postgres_for_kubernetes/1/recovery.mdx b/product_docs/docs/postgres_for_kubernetes/1/recovery.mdx
index 01987f8014..746880804d 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/recovery.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/1/recovery.mdx
@@ -64,7 +64,7 @@ purpose. The following example shows how to configure one for Azure Blob
Storage:
```yaml
-apiVersion: barmancloud.k8s.enterprisedb.io/v1
+apiVersion: barmancloud.cnpg.io/v1
kind: ObjectStore
metadata:
name: cluster-example-backup
@@ -524,7 +524,7 @@ completed**:
2. If the `app` user does not exist, it will be created.
3. If the `app` user is not the owner of the `app` database, ownership will be
granted to the `app` user.
-4. If the `username` value matches the `owner` value in the secret, the
+4. If the `owner` value matches the `username` value in the secret, the
password for the application user (the `app` user in this case) will be
updated to the `password` value in the secret.
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rolling_update.mdx b/product_docs/docs/postgres_for_kubernetes/1/rolling_update.mdx
index b2716a2fb6..a0193821df 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/rolling_update.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/1/rolling_update.mdx
@@ -5,32 +5,30 @@ originalFilePath: 'src/rolling_update.md'
-The operator allows changing the PostgreSQL version used in a cluster while
-applications are running against it.
+The operator allows you to change the PostgreSQL version used in a cluster
+while applications continue running against it.
-!!! Important
- Only upgrades for PostgreSQL minor releases are supported.
+Rolling upgrades are triggered when:
-Rolling upgrades are started when:
+- you change the `imageName` attribute in the cluster specification;
-- the user changes the `imageName` attribute of the cluster specification;
+- you change the list of extension images in the `.spec.postgresql.extensions`
+ stanza of the cluster specification;
-- the [image catalog](image_catalog.md) is updated with a new image for the major used by the cluster;
+- the [image catalog](image_catalog.md) is updated with a new image for the
+ major version used by the cluster;
-- a change in the PostgreSQL configuration requires a restart to be
- applied;
+- a change in the PostgreSQL configuration requires a restart to apply;
-- a change on the `Cluster` `.spec.resources` values
+- you change the `Cluster` `.spec.resources` values;
-- a change in size of the persistent volume claim on AKS
+- the operator is updated, ensuring Pods run the latest instance manager
+ (unless [in-place updates are enabled](installation_upgrade.md#in-place-updates-of-the-instance-manager)).
-- after the operator is updated, to ensure the Pods run the latest instance
- manager (unless [in-place updates are enabled](installation_upgrade.md#in-place-updates-of-the-instance-manager)).
+During a rolling upgrade, the operator upgrades all replicas one Pod at a time,
+starting from the one with the highest serial.
-The operator starts upgrading all the replicas, one Pod at a time, and begins
-from the one with the highest serial.
-
-The primary is the last node to be upgraded.
+The primary is always the last node to be upgraded.
Rolling updates are configurable and can be either entirely automated
(`unsupervised`) or requiring human intervention (`supervised`).
@@ -39,7 +37,7 @@ The upgrade keeps the {{name.ln}} identity, without re-cloning the
data. Pods will be deleted and created again with the same PVCs and a new
image, if required.
-During the rolling update procedure, each service endpoints move to reflect the
+During the rolling update procedure, each service's endpoint moves to reflect the
cluster's status, so that applications can ignore the node that is being
updated.
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples.mdx b/product_docs/docs/postgres_for_kubernetes/1/samples.mdx
index 2a03ee73be..fff2f040d7 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/samples.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/1/samples.mdx
@@ -66,6 +66,13 @@ your PostgreSQL cluster.
an EPAS 15 cluster with TDE. Note that you will need access credentials
to download the image used.
+## Security
+
+**Sample cluster with custom security contexts**
+: [`cluster-example-security-context.yaml`](../samples/cluster-example-security-context.yaml)
+ A cluster demonstrating how to customize both Pod and Container security contexts.
+ This is useful when working with Pod Security Standards or meeting specific security requirements.
+
## Backups
**Customized storage class and backups**
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-security-context.yaml b/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-security-context.yaml
new file mode 100644
index 0000000000..85e17fba6f
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-security-context.yaml
@@ -0,0 +1,44 @@
+# Example of PostgreSQL cluster with custom security contexts
+#
+# This example demonstrates how to customize both PodSecurityContext and
+# Container SecurityContext for a PostgreSQL cluster. This is particularly
+# useful when working with Pod Security Standards (PSS) or when you need
+# to meet specific security requirements.
+#
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+ name: cluster-security-context
+spec:
+ instances: 3
+
+ # Storage configuration
+ storage:
+ size: 1Gi
+
+ # Custom PodSecurityContext
+ # This will be applied to all pods in the cluster and merged with operator defaults.
+ # Only RunAsUser, RunAsGroup, and SeccompProfile are merged from defaults if not specified.
+ podSecurityContext:
+ runAsUser: 26
+ runAsGroup: 26
+ fsGroup: 26
+ runAsNonRoot: true
+ supplementalGroups: [1000, 2000]
+ fsGroupChangePolicy: "OnRootMismatch"
+
+ # Custom Container SecurityContext
+ # This will be applied to all containers in the cluster pods and merged with operator defaults.
+ # The operator provides secure defaults for all fields, which will be used if not explicitly set.
+ securityContext:
+ allowPrivilegeEscalation: false
+ # Note: capabilities are not merged with operator defaults.
+ # If specified, they fully replace any defaults.
+ capabilities:
+ drop:
+ - ALL
+ add:
+ - NET_BIND_SERVICE
+ privileged: false
+ readOnlyRootFilesystem: true
+ runAsNonRoot: true
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-syncreplicas-quorum.yaml b/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-syncreplicas-quorum.yaml
index ba932af00b..8f1fbef3c4 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-syncreplicas-quorum.yaml
+++ b/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-syncreplicas-quorum.yaml
@@ -2,8 +2,6 @@ apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
name: cluster-example
- annotations:
- alpha.k8s.enterprisedb.io/failoverQuorum: "true"
spec:
instances: 3
@@ -11,6 +9,7 @@ spec:
synchronous:
method: any
number: 1
+ failoverQuorum: true
storage:
size: 1Gi
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/pooler-external.yaml b/product_docs/docs/postgres_for_kubernetes/1/samples/pooler-external.yaml
index 227fdb6142..9bf16d51ec 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/samples/pooler-external.yaml
+++ b/product_docs/docs/postgres_for_kubernetes/1/samples/pooler-external.yaml
@@ -18,4 +18,3 @@ spec:
parameters:
max_client_conn: "1000"
default_pool_size: "10"
-
\ No newline at end of file
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/postgis-example.yaml b/product_docs/docs/postgres_for_kubernetes/1/samples/postgis-example.yaml
index 5e1b28177c..c7642d7f18 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/samples/postgis-example.yaml
+++ b/product_docs/docs/postgres_for_kubernetes/1/samples/postgis-example.yaml
@@ -4,7 +4,7 @@ metadata:
name: postgis-example
spec:
instances: 1
- imageName: ghcr.io/cloudnative-pg/postgis:18-3.6-system-trixie
+ imageName: docker.enterprisedb.com/k8s/postgresql:18-postgis-ubi9
storage:
size: 1Gi
postgresql:
diff --git a/product_docs/docs/postgres_for_kubernetes/1/ssl_connections.mdx b/product_docs/docs/postgres_for_kubernetes/1/ssl_connections.mdx
index d6001769a8..23f0cb72c3 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/ssl_connections.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/1/ssl_connections.mdx
@@ -178,7 +178,7 @@ Output:
version
--------------------------------------------------------------------------------------
------------------
-PostgreSQL 18.0 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 8.3.1 20191121 (Red Hat
+PostgreSQL 18.1 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 8.3.1 20191121 (Red Hat
8.3.1-5), 64-bit
(1 row)
```
From 434db70abe821a731ba8bc185c1e24bb7e18ea84 Mon Sep 17 00:00:00 2001
From: Josh Heyer
Date: Mon, 1 Dec 2025 03:24:25 +0000
Subject: [PATCH 2/9] RC release goes under "preview"
---
.../docs/postgres_for_kubernetes/{1 => preview}/addons.mdx | 0
.../docs/postgres_for_kubernetes/{1 => preview}/applications.mdx | 0
.../docs/postgres_for_kubernetes/{1 => preview}/architecture.mdx | 0
.../docs/postgres_for_kubernetes/{1 => preview}/backup.mdx | 0
.../{1 => preview}/backup_barmanobjectstore.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/backup_recovery.mdx | 0
.../{1 => preview}/backup_volumesnapshot.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/before_you_start.mdx | 0
.../docs/postgres_for_kubernetes/{1 => preview}/benchmarking.mdx | 0
.../docs/postgres_for_kubernetes/{1 => preview}/bootstrap.mdx | 0
.../docs/postgres_for_kubernetes/{1 => preview}/certificates.mdx | 0
.../docs/postgres_for_kubernetes/{1 => preview}/cluster_conf.mdx | 0
.../{1 => preview}/cncf-projects/cilium.mdx | 0
.../{1 => preview}/cncf-projects/external-secrets.mdx | 0
.../{1 => preview}/cncf-projects/index.mdx | 0
.../docs/postgres_for_kubernetes/{1 => preview}/cnp_i.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/connection_pooling.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/container_images.mdx | 0
.../docs/postgres_for_kubernetes/{1 => preview}/controller.mdx | 0
.../docs/postgres_for_kubernetes/{1 => preview}/css/override.css | 0
.../postgres_for_kubernetes/{1 => preview}/database_import.mdx | 0
.../{1 => preview}/declarative_database_management.mdx | 0
.../{1 => preview}/declarative_hibernation.mdx | 0
.../{1 => preview}/declarative_role_management.mdx | 0
.../{1 => preview}/default-monitoring.yaml | 0
.../docs/postgres_for_kubernetes/{1 => preview}/evaluation.mdx | 0
.../docs/postgres_for_kubernetes/{1 => preview}/failover.mdx | 0
.../docs/postgres_for_kubernetes/{1 => preview}/failure_modes.mdx | 0
product_docs/docs/postgres_for_kubernetes/{1 => preview}/faq.mdx | 0
.../docs/postgres_for_kubernetes/{1 => preview}/fencing.mdx | 0
.../docs/postgres_for_kubernetes/{1 => preview}/image_catalog.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/images/apps-in-k8s.png | 0
.../{1 => preview}/images/apps-outside-k8s.png | 0
.../{1 => preview}/images/architecture-in-k8s.png | 0
.../{1 => preview}/images/architecture-r.png | 0
.../{1 => preview}/images/architecture-read-only.png | 0
.../{1 => preview}/images/architecture-rw.png | 0
.../{1 => preview}/images/grafana-local.png | 0
.../{1 => preview}/images/ironbank/pulling-the-image.png | 0
.../{1 => preview}/images/k8s-architecture-2-az.png | 0
.../{1 => preview}/images/k8s-architecture-3-az.png | 0
.../{1 => preview}/images/k8s-architecture-multi.png | 0
.../{1 => preview}/images/k8s-pg-architecture.png | 0
.../{1 => preview}/images/microservice-import.png | 0
.../{1 => preview}/images/monolith-import.png | 0
.../{1 => preview}/images/multi-cluster.png | 0
.../{1 => preview}/images/network-storage-architecture.png | 0
.../{1 => preview}/images/openshift/alerts-openshift.png | 0
.../images/openshift/oc_installation_screenshot_1.png | 0
.../images/openshift/oc_installation_screenshot_2.png | 0
.../images/openshift/openshift-operatorgroup-error.png | 0
.../{1 => preview}/images/openshift/openshift-rbac.png | 0
.../images/openshift/openshift-webconsole-allnamespaces.png | 0
.../images/openshift/openshift-webconsole-multinamespace.png | 0
.../openshift/openshift-webconsole-singlenamespace-list.png | 0
.../images/openshift/openshift-webconsole-singlenamespace.png | 0
.../{1 => preview}/images/openshift/operatorhub_1.png | 0
.../{1 => preview}/images/openshift/operatorhub_2.png | 0
.../{1 => preview}/images/openshift/prometheus-queries.png | 0
.../{1 => preview}/images/operator-capability-level.png | 0
.../postgres_for_kubernetes/{1 => preview}/images/pgadmin4.png | 0
.../{1 => preview}/images/pgbouncer-architecture-rw.png | 0
.../{1 => preview}/images/pgbouncer-pooler-image.png | 0
.../{1 => preview}/images/pgbouncer-pooler-template.png | 0
.../{1 => preview}/images/prometheus-local.png | 0
.../images/public-cloud-architecture-storage-replication.png | 0
.../{1 => preview}/images/public-cloud-architecture.png | 0
.../{1 => preview}/images/shared-nothing-architecture.png | 0
.../{1 => preview}/images/write_bw.1-2Draw.png | 0
.../{1 => preview}/imagevolume_extensions.mdx | 0
.../docs/postgres_for_kubernetes/{1 => preview}/index.mdx | 0
.../{1 => preview}/installation_upgrade.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/instance_manager.mdx | 0
.../docs/postgres_for_kubernetes/{1 => preview}/iron-bank.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/kubectl-plugin.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/kubernetes_upgrade.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/labels_annotations.mdx | 0
.../docs/postgres_for_kubernetes/{1 => preview}/license_keys.mdx | 0
.../docs/postgres_for_kubernetes/{1 => preview}/logging.mdx | 0
.../{1 => preview}/logical_replication.mdx | 0
.../{1 => preview}/migrating_edb_registries.mdx | 0
.../docs/postgres_for_kubernetes/{1 => preview}/monitoring.mdx | 0
.../docs/postgres_for_kubernetes/{1 => preview}/networking.mdx | 0
.../docs/postgres_for_kubernetes/{1 => preview}/object_stores.mdx | 0
.../docs/postgres_for_kubernetes/{1 => preview}/openshift.mdx | 0
.../{1 => preview}/operator_capability_levels.mdx | 0
.../docs/postgres_for_kubernetes/{1 => preview}/operator_conf.mdx | 0
.../docs/postgres_for_kubernetes/{1 => preview}/pg4k.v1/index.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/pg4k.v1/v0.6.0.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/pg4k.v1/v0.7.0.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/pg4k.v1/v0.8.0.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.0.0.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.1.0.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.10.0.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.11.0.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.12.0.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.13.0.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.14.0.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.15.0.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.15.1.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.15.2.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.15.3.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.15.4.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.15.5.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.16.0.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.16.1.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.16.2.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.16.3.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.16.4.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.16.5.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.17.0.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.17.1.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.17.2.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.17.3.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.17.4.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.17.5.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.18.0.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.18.1.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.18.10.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.18.11.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.18.12.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.18.13.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.18.2.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.18.3.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.18.4.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.18.5.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.18.6.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.18.7.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.18.8.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.18.9.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.19.0.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.19.1.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.19.2.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.19.3.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.19.4.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.19.5.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.19.6.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.2.0.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.2.1.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.20.0.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.20.1.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.20.2.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.20.3.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.20.4.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.20.5.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.20.6.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.21.0.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.21.1.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.21.2.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.21.3.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.21.4.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.21.5.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.21.6.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.22.0.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.22.1.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.22.2.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.22.3.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.22.4.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.22.5.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.22.6.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.22.7.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.22.8.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.22.9.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.23.0.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.23.1.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.23.2.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.23.3.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.23.4.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.23.5.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.23.6.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.24.0.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.24.1.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.24.2.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.24.3.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.25.0.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.25.1.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.26.0.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.26.1.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.27.0.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.27.1.mdx | 0
.../{1 => preview}/pg4k.v1/v1.28.0-rc1.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.3.0.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.4.0.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.5.0.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.5.1.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.6.0.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.7.0.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.7.1.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.8.0.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.9.0.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.9.1.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.9.2.mdx | 0
.../docs/postgres_for_kubernetes/{1 => preview}/postgis.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/postgres_upgrades.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/postgresql_conf.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/preview_version.mdx | 0
.../{1 => preview}/private_edb_registries.mdx | 0
.../docs/postgres_for_kubernetes/{1 => preview}/quickstart.mdx | 0
.../docs/postgres_for_kubernetes/{1 => preview}/recovery.mdx | 0
.../{1 => preview}/rel_notes/0_0_1_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/0_1_0_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/0_2_0_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/0_3_0_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/0_4_0_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/0_5_0_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/0_6_0_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/0_7_0_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/0_8_0_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_0_0_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_10_0_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_11_0_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_12_0_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_13_0_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_14_0_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_15_0_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_15_1_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_15_2_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_15_3_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_15_4_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_15_5_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_16_0_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_16_1_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_16_2_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_16_3_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_16_4_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_16_5_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_17_0_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_17_1_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_17_2_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_17_3_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_17_4_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_17_5_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_18_0_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_18_10_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_18_11_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_18_12_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_18_13_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_18_1_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_18_2_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_18_3_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_18_4_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_18_5_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_18_6_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_18_7_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_18_8_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_18_9_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_19_0_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_19_1_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_19_2_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_19_3_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_19_4_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_19_5_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_19_6_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_1_0_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_20_0_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_20_1_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_20_2_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_20_3_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_20_4_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_20_5_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_20_6_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_21_0_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_21_1_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_21_2_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_21_3_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_21_4_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_21_5_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_21_6_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_22_0_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_22_10_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_22_11_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_22_1_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_22_2_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_22_3_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_22_4_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_22_5_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_22_6_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_22_7_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_22_8_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_22_9_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_23_0_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_23_1_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_23_2_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_23_3_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_23_4_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_23_5_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_23_6_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_24_0_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_24_1_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_24_2_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_24_3_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_24_4_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_25_0_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_25_1_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_25_2_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_25_3_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_25_4_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_26_0_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_26_1_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_26_2_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_27_0_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_27_1_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_2_0_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_2_1_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_3_0_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_4_0_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_5_0_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_5_1_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_6_0_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_7_0_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_7_1_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_8_0_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_9_0_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_9_1_rel_notes.mdx | 0
.../{1 => preview}/rel_notes/1_9_2_rel_notes.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/rel_notes/index.mdx | 0
.../{1 => preview}/rel_notes/src/1.22.10_rel_notes.yml | 0
.../{1 => preview}/rel_notes/src/1.22.11_rel_notes.yml | 0
.../{1 => preview}/rel_notes/src/1.24.4_rel_notes.yml | 0
.../{1 => preview}/rel_notes/src/1.25.2_rel_notes.yml | 0
.../{1 => preview}/rel_notes/src/1.25.3_rel_notes.yml | 0
.../{1 => preview}/rel_notes/src/1.25.4_rel_notes.yml | 0
.../{1 => preview}/rel_notes/src/1.26.0_rel_notes.yml | 0
.../{1 => preview}/rel_notes/src/1.26.1_rel_notes.yml | 0
.../{1 => preview}/rel_notes/src/1.26.2_rel_notes.yml | 0
.../{1 => preview}/rel_notes/src/1.27.0_rel_notes.yml | 0
.../{1 => preview}/rel_notes/src/1.27.1_rel_notes.yml | 0
.../postgres_for_kubernetes/{1 => preview}/rel_notes/src/meta.yml | 0
.../postgres_for_kubernetes/{1 => preview}/replica_cluster.mdx | 0
.../docs/postgres_for_kubernetes/{1 => preview}/replication.mdx | 0
.../{1 => preview}/resource_management.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/rolling_update.mdx | 0
.../docs/postgres_for_kubernetes/{1 => preview}/samples.mdx | 0
.../{1 => preview}/samples/backup-example.yaml | 0
.../{1 => preview}/samples/backup-with-volume-snapshot.yaml | 0
.../{1 => preview}/samples/cluster-additional-volumes.yaml | 0
.../{1 => preview}/samples/cluster-advanced-initdb.yaml | 0
.../{1 => preview}/samples/cluster-backup-aws-inherit.yaml | 0
.../{1 => preview}/samples/cluster-backup-azure-inherit.yaml | 0
.../{1 => preview}/samples/cluster-backup-retention-30d.yaml | 0
.../{1 => preview}/samples/cluster-clone-basicauth.yaml | 0
.../{1 => preview}/samples/cluster-clone-tls.yaml | 0
.../{1 => preview}/samples/cluster-example-bis-restore-cr.yaml | 0
.../{1 => preview}/samples/cluster-example-bis-restore.yaml | 0
.../{1 => preview}/samples/cluster-example-bis.yaml | 0
.../{1 => preview}/samples/cluster-example-catalog.yaml | 0
.../{1 => preview}/samples/cluster-example-cert-manager.yaml | 0
.../{1 => preview}/samples/cluster-example-custom.yaml | 0
.../{1 => preview}/samples/cluster-example-epas.yaml | 0
.../samples/cluster-example-external-backup-adapter-cluster.yaml | 0
.../samples/cluster-example-external-backup-adapter.yaml | 0
.../{1 => preview}/samples/cluster-example-full.yaml | 0
.../{1 => preview}/samples/cluster-example-initdb-icu.yaml | 0
.../{1 => preview}/samples/cluster-example-initdb-sql-refs.yaml | 0
.../{1 => preview}/samples/cluster-example-initdb.yaml | 0
.../samples/cluster-example-logical-destination.yaml | 0
.../{1 => preview}/samples/cluster-example-logical-source.yaml | 0
.../{1 => preview}/samples/cluster-example-managed-services.yaml | 0
.../{1 => preview}/samples/cluster-example-monitoring.yaml | 0
.../{1 => preview}/samples/cluster-example-pg-hba.yaml | 0
.../{1 => preview}/samples/cluster-example-pge.yaml | 0
.../{1 => preview}/samples/cluster-example-projected-volume.yaml | 0
.../samples/cluster-example-replica-from-backup-simple.yaml | 0
.../samples/cluster-example-replica-from-volume-snapshot.yaml | 0
.../{1 => preview}/samples/cluster-example-replica-streaming.yaml | 0
.../{1 => preview}/samples/cluster-example-secret.yaml | 0
.../{1 => preview}/samples/cluster-example-security-context.yaml | 0
.../{1 => preview}/samples/cluster-example-sync-az.yaml | 0
.../samples/cluster-example-syncreplicas-explicit.yaml | 0
.../samples/cluster-example-syncreplicas-legacy.yaml | 0
.../samples/cluster-example-syncreplicas-quorum.yaml | 0
.../{1 => preview}/samples/cluster-example-tde.yaml | 0
.../{1 => preview}/samples/cluster-example-trigger-backup.yaml | 0
.../{1 => preview}/samples/cluster-example-wal-storage.yaml | 0
.../samples/cluster-example-with-backup-scaleway.yaml | 0
.../{1 => preview}/samples/cluster-example-with-backup.yaml | 0
.../{1 => preview}/samples/cluster-example-with-probes.yaml | 0
.../{1 => preview}/samples/cluster-example-with-roles.yaml | 0
.../samples/cluster-example-with-tablespaces-backup.yaml | 0
.../{1 => preview}/samples/cluster-example-with-tablespaces.yaml | 0
.../samples/cluster-example-with-volume-snapshot.yaml | 0
.../{1 => preview}/samples/cluster-example.yaml | 0
.../{1 => preview}/samples/cluster-expose-service.yaml | 0
.../samples/cluster-import-schema-only-basicauth.yaml | 0
.../{1 => preview}/samples/cluster-import-snapshot-basicauth.yaml | 0
.../{1 => preview}/samples/cluster-import-snapshot-tls.yaml | 0
.../{1 => preview}/samples/cluster-pvc-template.yaml | 0
.../{1 => preview}/samples/cluster-replica-async.yaml | 0
.../{1 => preview}/samples/cluster-replica-basicauth.yaml | 0
.../samples/cluster-replica-from-backup-other-namespace.yaml | 0
.../{1 => preview}/samples/cluster-replica-restore.yaml | 0
.../{1 => preview}/samples/cluster-replica-tls.yaml | 0
.../{1 => preview}/samples/cluster-restore-external-cluster.yaml | 0
.../{1 => preview}/samples/cluster-restore-pitr.yaml | 0
.../{1 => preview}/samples/cluster-restore-snapshot-full.yaml | 0
.../{1 => preview}/samples/cluster-restore-snapshot-pitr.yaml | 0
.../{1 => preview}/samples/cluster-restore-snapshot.yaml | 0
.../{1 => preview}/samples/cluster-restore-with-tablespaces.yaml | 0
.../{1 => preview}/samples/cluster-restore.yaml | 0
.../{1 => preview}/samples/cluster-storage-class-with-backup.yaml | 0
.../{1 => preview}/samples/cluster-storage-class.yaml | 0
.../{1 => preview}/samples/database-example-fail.yaml | 0
.../{1 => preview}/samples/database-example-icu.yaml | 0
.../{1 => preview}/samples/database-example.yaml | 0
.../{1 => preview}/samples/dc/cluster-dc-a.yaml | 0
.../{1 => preview}/samples/dc/cluster-dc-b.yaml | 0
.../{1 => preview}/samples/dc/cluster-test.yaml | 0
.../{1 => preview}/samples/k9s/plugins.yml | 0
.../{1 => preview}/samples/monitoring/alerts.yaml | 0
.../{1 => preview}/samples/monitoring/kube-stack-config.yaml | 0
.../{1 => preview}/samples/monitoring/podmonitor.yaml | 0
.../{1 => preview}/samples/monitoring/prometheusrule.yaml | 0
.../{1 => preview}/samples/networkpolicy-example.yaml | 0
.../{1 => preview}/samples/pooler-basic-auth.yaml | 0
.../{1 => preview}/samples/pooler-deployment-strategy.yaml | 0
.../{1 => preview}/samples/pooler-external.yaml | 0
.../{1 => preview}/samples/pooler-tls.yaml | 0
.../{1 => preview}/samples/postgis-example.yaml | 0
.../{1 => preview}/samples/publication-example-objects.yaml | 0
.../{1 => preview}/samples/publication-example.yaml | 0
.../{1 => preview}/samples/scheduled-backup-example.yaml | 0
.../{1 => preview}/samples/subscription-example.yaml | 0
.../{1 => preview}/samples/subscription.yaml | 0
.../docs/postgres_for_kubernetes/{1 => preview}/scheduling.mdx | 0
.../docs/postgres_for_kubernetes/{1 => preview}/security.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/service_management.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/ssl_connections.mdx | 0
.../docs/postgres_for_kubernetes/{1 => preview}/storage.mdx | 0
.../docs/postgres_for_kubernetes/{1 => preview}/tablespaces.mdx | 0
product_docs/docs/postgres_for_kubernetes/{1 => preview}/tde.mdx | 0
.../postgres_for_kubernetes/{1 => preview}/troubleshooting.mdx | 0
.../docs/postgres_for_kubernetes/{1 => preview}/use_cases.mdx | 0
.../docs/postgres_for_kubernetes/{1 => preview}/wal_archiving.mdx | 0
433 files changed, 0 insertions(+), 0 deletions(-)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/addons.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/applications.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/architecture.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/backup.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/backup_barmanobjectstore.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/backup_recovery.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/backup_volumesnapshot.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/before_you_start.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/benchmarking.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/bootstrap.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/certificates.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/cluster_conf.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/cncf-projects/cilium.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/cncf-projects/external-secrets.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/cncf-projects/index.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/cnp_i.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/connection_pooling.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/container_images.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/controller.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/css/override.css (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/database_import.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/declarative_database_management.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/declarative_hibernation.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/declarative_role_management.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/default-monitoring.yaml (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/evaluation.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/failover.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/failure_modes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/faq.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/fencing.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/image_catalog.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/images/apps-in-k8s.png (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/images/apps-outside-k8s.png (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/images/architecture-in-k8s.png (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/images/architecture-r.png (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/images/architecture-read-only.png (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/images/architecture-rw.png (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/images/grafana-local.png (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/images/ironbank/pulling-the-image.png (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/images/k8s-architecture-2-az.png (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/images/k8s-architecture-3-az.png (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/images/k8s-architecture-multi.png (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/images/k8s-pg-architecture.png (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/images/microservice-import.png (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/images/monolith-import.png (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/images/multi-cluster.png (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/images/network-storage-architecture.png (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/images/openshift/alerts-openshift.png (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/images/openshift/oc_installation_screenshot_1.png (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/images/openshift/oc_installation_screenshot_2.png (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/images/openshift/openshift-operatorgroup-error.png (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/images/openshift/openshift-rbac.png (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/images/openshift/openshift-webconsole-allnamespaces.png (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/images/openshift/openshift-webconsole-multinamespace.png (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/images/openshift/openshift-webconsole-singlenamespace-list.png (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/images/openshift/openshift-webconsole-singlenamespace.png (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/images/openshift/operatorhub_1.png (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/images/openshift/operatorhub_2.png (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/images/openshift/prometheus-queries.png (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/images/operator-capability-level.png (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/images/pgadmin4.png (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/images/pgbouncer-architecture-rw.png (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/images/pgbouncer-pooler-image.png (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/images/pgbouncer-pooler-template.png (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/images/prometheus-local.png (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/images/public-cloud-architecture-storage-replication.png (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/images/public-cloud-architecture.png (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/images/shared-nothing-architecture.png (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/images/write_bw.1-2Draw.png (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/imagevolume_extensions.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/index.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/installation_upgrade.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/instance_manager.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/iron-bank.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/kubectl-plugin.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/kubernetes_upgrade.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/labels_annotations.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/license_keys.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/logging.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/logical_replication.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/migrating_edb_registries.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/monitoring.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/networking.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/object_stores.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/openshift.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/operator_capability_levels.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/operator_conf.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/pg4k.v1/index.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/pg4k.v1/v0.6.0.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/pg4k.v1/v0.7.0.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/pg4k.v1/v0.8.0.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.0.0.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.1.0.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.10.0.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.11.0.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.12.0.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.13.0.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.14.0.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.15.0.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.15.1.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.15.2.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.15.3.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.15.4.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.15.5.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.16.0.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.16.1.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.16.2.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.16.3.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.16.4.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.16.5.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.17.0.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.17.1.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.17.2.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.17.3.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.17.4.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.17.5.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.18.0.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.18.1.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.18.10.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.18.11.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.18.12.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.18.13.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.18.2.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.18.3.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.18.4.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.18.5.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.18.6.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.18.7.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.18.8.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.18.9.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.19.0.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.19.1.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.19.2.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.19.3.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.19.4.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.19.5.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.19.6.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.2.0.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.2.1.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.20.0.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.20.1.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.20.2.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.20.3.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.20.4.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.20.5.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.20.6.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.21.0.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.21.1.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.21.2.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.21.3.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.21.4.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.21.5.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.21.6.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.22.0.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.22.1.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.22.2.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.22.3.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.22.4.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.22.5.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.22.6.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.22.7.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.22.8.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.22.9.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.23.0.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.23.1.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.23.2.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.23.3.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.23.4.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.23.5.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.23.6.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.24.0.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.24.1.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.24.2.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.24.3.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.25.0.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.25.1.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.26.0.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.26.1.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.27.0.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.27.1.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.28.0-rc1.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.3.0.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.4.0.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.5.0.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.5.1.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.6.0.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.7.0.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.7.1.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.8.0.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.9.0.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.9.1.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/pg4k.v1/v1.9.2.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/postgis.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/postgres_upgrades.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/postgresql_conf.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/preview_version.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/private_edb_registries.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/quickstart.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/recovery.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/0_0_1_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/0_1_0_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/0_2_0_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/0_3_0_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/0_4_0_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/0_5_0_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/0_6_0_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/0_7_0_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/0_8_0_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_0_0_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_10_0_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_11_0_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_12_0_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_13_0_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_14_0_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_15_0_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_15_1_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_15_2_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_15_3_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_15_4_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_15_5_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_16_0_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_16_1_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_16_2_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_16_3_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_16_4_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_16_5_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_17_0_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_17_1_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_17_2_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_17_3_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_17_4_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_17_5_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_18_0_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_18_10_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_18_11_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_18_12_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_18_13_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_18_1_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_18_2_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_18_3_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_18_4_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_18_5_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_18_6_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_18_7_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_18_8_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_18_9_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_19_0_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_19_1_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_19_2_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_19_3_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_19_4_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_19_5_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_19_6_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_1_0_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_20_0_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_20_1_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_20_2_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_20_3_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_20_4_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_20_5_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_20_6_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_21_0_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_21_1_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_21_2_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_21_3_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_21_4_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_21_5_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_21_6_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_22_0_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_22_10_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_22_11_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_22_1_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_22_2_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_22_3_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_22_4_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_22_5_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_22_6_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_22_7_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_22_8_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_22_9_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_23_0_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_23_1_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_23_2_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_23_3_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_23_4_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_23_5_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_23_6_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_24_0_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_24_1_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_24_2_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_24_3_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_24_4_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_25_0_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_25_1_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_25_2_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_25_3_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_25_4_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_26_0_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_26_1_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_26_2_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_27_0_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_27_1_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_2_0_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_2_1_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_3_0_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_4_0_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_5_0_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_5_1_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_6_0_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_7_0_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_7_1_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_8_0_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_9_0_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_9_1_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/1_9_2_rel_notes.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/index.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/src/1.22.10_rel_notes.yml (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/src/1.22.11_rel_notes.yml (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/src/1.24.4_rel_notes.yml (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/src/1.25.2_rel_notes.yml (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/src/1.25.3_rel_notes.yml (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/src/1.25.4_rel_notes.yml (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/src/1.26.0_rel_notes.yml (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/src/1.26.1_rel_notes.yml (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/src/1.26.2_rel_notes.yml (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/src/1.27.0_rel_notes.yml (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/src/1.27.1_rel_notes.yml (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rel_notes/src/meta.yml (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/replica_cluster.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/replication.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/resource_management.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/rolling_update.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/samples.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/samples/backup-example.yaml (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/samples/backup-with-volume-snapshot.yaml (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/samples/cluster-additional-volumes.yaml (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/samples/cluster-advanced-initdb.yaml (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/samples/cluster-backup-aws-inherit.yaml (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/samples/cluster-backup-azure-inherit.yaml (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/samples/cluster-backup-retention-30d.yaml (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/samples/cluster-clone-basicauth.yaml (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/samples/cluster-clone-tls.yaml (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/samples/cluster-example-bis-restore-cr.yaml (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/samples/cluster-example-bis-restore.yaml (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/samples/cluster-example-bis.yaml (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/samples/cluster-example-catalog.yaml (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/samples/cluster-example-cert-manager.yaml (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/samples/cluster-example-custom.yaml (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/samples/cluster-example-epas.yaml (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/samples/cluster-example-external-backup-adapter-cluster.yaml (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/samples/cluster-example-external-backup-adapter.yaml (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/samples/cluster-example-full.yaml (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/samples/cluster-example-initdb-icu.yaml (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/samples/cluster-example-initdb-sql-refs.yaml (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/samples/cluster-example-initdb.yaml (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/samples/cluster-example-logical-destination.yaml (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/samples/cluster-example-logical-source.yaml (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/samples/cluster-example-managed-services.yaml (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/samples/cluster-example-monitoring.yaml (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/samples/cluster-example-pg-hba.yaml (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/samples/cluster-example-pge.yaml (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/samples/cluster-example-projected-volume.yaml (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/samples/cluster-example-replica-from-backup-simple.yaml (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/samples/cluster-example-replica-from-volume-snapshot.yaml (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/samples/cluster-example-replica-streaming.yaml (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/samples/cluster-example-secret.yaml (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/samples/cluster-example-security-context.yaml (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/samples/cluster-example-sync-az.yaml (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/samples/cluster-example-syncreplicas-explicit.yaml (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/samples/cluster-example-syncreplicas-legacy.yaml (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/samples/cluster-example-syncreplicas-quorum.yaml (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/samples/cluster-example-tde.yaml (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/samples/cluster-example-trigger-backup.yaml (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/samples/cluster-example-wal-storage.yaml (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/samples/cluster-example-with-backup-scaleway.yaml (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/samples/cluster-example-with-backup.yaml (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/samples/cluster-example-with-probes.yaml (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/samples/cluster-example-with-roles.yaml (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/samples/cluster-example-with-tablespaces-backup.yaml (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/samples/cluster-example-with-tablespaces.yaml (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/samples/cluster-example-with-volume-snapshot.yaml (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/samples/cluster-example.yaml (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/samples/cluster-expose-service.yaml (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/samples/cluster-import-schema-only-basicauth.yaml (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/samples/cluster-import-snapshot-basicauth.yaml (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/samples/cluster-import-snapshot-tls.yaml (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/samples/cluster-pvc-template.yaml (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/samples/cluster-replica-async.yaml (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/samples/cluster-replica-basicauth.yaml (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/samples/cluster-replica-from-backup-other-namespace.yaml (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/samples/cluster-replica-restore.yaml (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/samples/cluster-replica-tls.yaml (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/samples/cluster-restore-external-cluster.yaml (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/samples/cluster-restore-pitr.yaml (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/samples/cluster-restore-snapshot-full.yaml (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/samples/cluster-restore-snapshot-pitr.yaml (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/samples/cluster-restore-snapshot.yaml (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/samples/cluster-restore-with-tablespaces.yaml (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/samples/cluster-restore.yaml (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/samples/cluster-storage-class-with-backup.yaml (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/samples/cluster-storage-class.yaml (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/samples/database-example-fail.yaml (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/samples/database-example-icu.yaml (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/samples/database-example.yaml (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/samples/dc/cluster-dc-a.yaml (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/samples/dc/cluster-dc-b.yaml (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/samples/dc/cluster-test.yaml (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/samples/k9s/plugins.yml (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/samples/monitoring/alerts.yaml (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/samples/monitoring/kube-stack-config.yaml (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/samples/monitoring/podmonitor.yaml (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/samples/monitoring/prometheusrule.yaml (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/samples/networkpolicy-example.yaml (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/samples/pooler-basic-auth.yaml (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/samples/pooler-deployment-strategy.yaml (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/samples/pooler-external.yaml (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/samples/pooler-tls.yaml (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/samples/postgis-example.yaml (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/samples/publication-example-objects.yaml (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/samples/publication-example.yaml (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/samples/scheduled-backup-example.yaml (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/samples/subscription-example.yaml (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/samples/subscription.yaml (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/scheduling.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/security.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/service_management.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/ssl_connections.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/storage.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/tablespaces.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/tde.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/troubleshooting.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/use_cases.mdx (100%)
rename product_docs/docs/postgres_for_kubernetes/{1 => preview}/wal_archiving.mdx (100%)
diff --git a/product_docs/docs/postgres_for_kubernetes/1/addons.mdx b/product_docs/docs/postgres_for_kubernetes/preview/addons.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/addons.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/addons.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/applications.mdx b/product_docs/docs/postgres_for_kubernetes/preview/applications.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/applications.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/applications.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/architecture.mdx b/product_docs/docs/postgres_for_kubernetes/preview/architecture.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/architecture.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/architecture.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/backup.mdx b/product_docs/docs/postgres_for_kubernetes/preview/backup.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/backup.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/backup.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/backup_barmanobjectstore.mdx b/product_docs/docs/postgres_for_kubernetes/preview/backup_barmanobjectstore.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/backup_barmanobjectstore.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/backup_barmanobjectstore.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/backup_recovery.mdx b/product_docs/docs/postgres_for_kubernetes/preview/backup_recovery.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/backup_recovery.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/backup_recovery.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/backup_volumesnapshot.mdx b/product_docs/docs/postgres_for_kubernetes/preview/backup_volumesnapshot.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/backup_volumesnapshot.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/backup_volumesnapshot.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/before_you_start.mdx b/product_docs/docs/postgres_for_kubernetes/preview/before_you_start.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/before_you_start.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/before_you_start.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/benchmarking.mdx b/product_docs/docs/postgres_for_kubernetes/preview/benchmarking.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/benchmarking.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/benchmarking.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/bootstrap.mdx b/product_docs/docs/postgres_for_kubernetes/preview/bootstrap.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/bootstrap.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/bootstrap.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/certificates.mdx b/product_docs/docs/postgres_for_kubernetes/preview/certificates.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/certificates.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/certificates.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/cluster_conf.mdx b/product_docs/docs/postgres_for_kubernetes/preview/cluster_conf.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/cluster_conf.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/cluster_conf.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/cncf-projects/cilium.mdx b/product_docs/docs/postgres_for_kubernetes/preview/cncf-projects/cilium.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/cncf-projects/cilium.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/cncf-projects/cilium.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/cncf-projects/external-secrets.mdx b/product_docs/docs/postgres_for_kubernetes/preview/cncf-projects/external-secrets.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/cncf-projects/external-secrets.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/cncf-projects/external-secrets.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/cncf-projects/index.mdx b/product_docs/docs/postgres_for_kubernetes/preview/cncf-projects/index.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/cncf-projects/index.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/cncf-projects/index.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/cnp_i.mdx b/product_docs/docs/postgres_for_kubernetes/preview/cnp_i.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/cnp_i.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/cnp_i.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/connection_pooling.mdx b/product_docs/docs/postgres_for_kubernetes/preview/connection_pooling.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/connection_pooling.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/connection_pooling.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/container_images.mdx b/product_docs/docs/postgres_for_kubernetes/preview/container_images.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/container_images.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/container_images.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/controller.mdx b/product_docs/docs/postgres_for_kubernetes/preview/controller.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/controller.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/controller.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/css/override.css b/product_docs/docs/postgres_for_kubernetes/preview/css/override.css
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/css/override.css
rename to product_docs/docs/postgres_for_kubernetes/preview/css/override.css
diff --git a/product_docs/docs/postgres_for_kubernetes/1/database_import.mdx b/product_docs/docs/postgres_for_kubernetes/preview/database_import.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/database_import.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/database_import.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/declarative_database_management.mdx b/product_docs/docs/postgres_for_kubernetes/preview/declarative_database_management.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/declarative_database_management.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/declarative_database_management.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/declarative_hibernation.mdx b/product_docs/docs/postgres_for_kubernetes/preview/declarative_hibernation.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/declarative_hibernation.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/declarative_hibernation.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/declarative_role_management.mdx b/product_docs/docs/postgres_for_kubernetes/preview/declarative_role_management.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/declarative_role_management.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/declarative_role_management.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/default-monitoring.yaml b/product_docs/docs/postgres_for_kubernetes/preview/default-monitoring.yaml
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/default-monitoring.yaml
rename to product_docs/docs/postgres_for_kubernetes/preview/default-monitoring.yaml
diff --git a/product_docs/docs/postgres_for_kubernetes/1/evaluation.mdx b/product_docs/docs/postgres_for_kubernetes/preview/evaluation.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/evaluation.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/evaluation.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/failover.mdx b/product_docs/docs/postgres_for_kubernetes/preview/failover.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/failover.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/failover.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/failure_modes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/failure_modes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/failure_modes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/failure_modes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/faq.mdx b/product_docs/docs/postgres_for_kubernetes/preview/faq.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/faq.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/faq.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/fencing.mdx b/product_docs/docs/postgres_for_kubernetes/preview/fencing.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/fencing.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/fencing.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/image_catalog.mdx b/product_docs/docs/postgres_for_kubernetes/preview/image_catalog.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/image_catalog.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/image_catalog.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/images/apps-in-k8s.png b/product_docs/docs/postgres_for_kubernetes/preview/images/apps-in-k8s.png
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/images/apps-in-k8s.png
rename to product_docs/docs/postgres_for_kubernetes/preview/images/apps-in-k8s.png
diff --git a/product_docs/docs/postgres_for_kubernetes/1/images/apps-outside-k8s.png b/product_docs/docs/postgres_for_kubernetes/preview/images/apps-outside-k8s.png
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/images/apps-outside-k8s.png
rename to product_docs/docs/postgres_for_kubernetes/preview/images/apps-outside-k8s.png
diff --git a/product_docs/docs/postgres_for_kubernetes/1/images/architecture-in-k8s.png b/product_docs/docs/postgres_for_kubernetes/preview/images/architecture-in-k8s.png
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/images/architecture-in-k8s.png
rename to product_docs/docs/postgres_for_kubernetes/preview/images/architecture-in-k8s.png
diff --git a/product_docs/docs/postgres_for_kubernetes/1/images/architecture-r.png b/product_docs/docs/postgres_for_kubernetes/preview/images/architecture-r.png
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/images/architecture-r.png
rename to product_docs/docs/postgres_for_kubernetes/preview/images/architecture-r.png
diff --git a/product_docs/docs/postgres_for_kubernetes/1/images/architecture-read-only.png b/product_docs/docs/postgres_for_kubernetes/preview/images/architecture-read-only.png
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/images/architecture-read-only.png
rename to product_docs/docs/postgres_for_kubernetes/preview/images/architecture-read-only.png
diff --git a/product_docs/docs/postgres_for_kubernetes/1/images/architecture-rw.png b/product_docs/docs/postgres_for_kubernetes/preview/images/architecture-rw.png
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/images/architecture-rw.png
rename to product_docs/docs/postgres_for_kubernetes/preview/images/architecture-rw.png
diff --git a/product_docs/docs/postgres_for_kubernetes/1/images/grafana-local.png b/product_docs/docs/postgres_for_kubernetes/preview/images/grafana-local.png
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/images/grafana-local.png
rename to product_docs/docs/postgres_for_kubernetes/preview/images/grafana-local.png
diff --git a/product_docs/docs/postgres_for_kubernetes/1/images/ironbank/pulling-the-image.png b/product_docs/docs/postgres_for_kubernetes/preview/images/ironbank/pulling-the-image.png
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/images/ironbank/pulling-the-image.png
rename to product_docs/docs/postgres_for_kubernetes/preview/images/ironbank/pulling-the-image.png
diff --git a/product_docs/docs/postgres_for_kubernetes/1/images/k8s-architecture-2-az.png b/product_docs/docs/postgres_for_kubernetes/preview/images/k8s-architecture-2-az.png
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/images/k8s-architecture-2-az.png
rename to product_docs/docs/postgres_for_kubernetes/preview/images/k8s-architecture-2-az.png
diff --git a/product_docs/docs/postgres_for_kubernetes/1/images/k8s-architecture-3-az.png b/product_docs/docs/postgres_for_kubernetes/preview/images/k8s-architecture-3-az.png
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/images/k8s-architecture-3-az.png
rename to product_docs/docs/postgres_for_kubernetes/preview/images/k8s-architecture-3-az.png
diff --git a/product_docs/docs/postgres_for_kubernetes/1/images/k8s-architecture-multi.png b/product_docs/docs/postgres_for_kubernetes/preview/images/k8s-architecture-multi.png
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/images/k8s-architecture-multi.png
rename to product_docs/docs/postgres_for_kubernetes/preview/images/k8s-architecture-multi.png
diff --git a/product_docs/docs/postgres_for_kubernetes/1/images/k8s-pg-architecture.png b/product_docs/docs/postgres_for_kubernetes/preview/images/k8s-pg-architecture.png
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/images/k8s-pg-architecture.png
rename to product_docs/docs/postgres_for_kubernetes/preview/images/k8s-pg-architecture.png
diff --git a/product_docs/docs/postgres_for_kubernetes/1/images/microservice-import.png b/product_docs/docs/postgres_for_kubernetes/preview/images/microservice-import.png
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/images/microservice-import.png
rename to product_docs/docs/postgres_for_kubernetes/preview/images/microservice-import.png
diff --git a/product_docs/docs/postgres_for_kubernetes/1/images/monolith-import.png b/product_docs/docs/postgres_for_kubernetes/preview/images/monolith-import.png
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/images/monolith-import.png
rename to product_docs/docs/postgres_for_kubernetes/preview/images/monolith-import.png
diff --git a/product_docs/docs/postgres_for_kubernetes/1/images/multi-cluster.png b/product_docs/docs/postgres_for_kubernetes/preview/images/multi-cluster.png
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/images/multi-cluster.png
rename to product_docs/docs/postgres_for_kubernetes/preview/images/multi-cluster.png
diff --git a/product_docs/docs/postgres_for_kubernetes/1/images/network-storage-architecture.png b/product_docs/docs/postgres_for_kubernetes/preview/images/network-storage-architecture.png
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/images/network-storage-architecture.png
rename to product_docs/docs/postgres_for_kubernetes/preview/images/network-storage-architecture.png
diff --git a/product_docs/docs/postgres_for_kubernetes/1/images/openshift/alerts-openshift.png b/product_docs/docs/postgres_for_kubernetes/preview/images/openshift/alerts-openshift.png
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/images/openshift/alerts-openshift.png
rename to product_docs/docs/postgres_for_kubernetes/preview/images/openshift/alerts-openshift.png
diff --git a/product_docs/docs/postgres_for_kubernetes/1/images/openshift/oc_installation_screenshot_1.png b/product_docs/docs/postgres_for_kubernetes/preview/images/openshift/oc_installation_screenshot_1.png
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/images/openshift/oc_installation_screenshot_1.png
rename to product_docs/docs/postgres_for_kubernetes/preview/images/openshift/oc_installation_screenshot_1.png
diff --git a/product_docs/docs/postgres_for_kubernetes/1/images/openshift/oc_installation_screenshot_2.png b/product_docs/docs/postgres_for_kubernetes/preview/images/openshift/oc_installation_screenshot_2.png
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/images/openshift/oc_installation_screenshot_2.png
rename to product_docs/docs/postgres_for_kubernetes/preview/images/openshift/oc_installation_screenshot_2.png
diff --git a/product_docs/docs/postgres_for_kubernetes/1/images/openshift/openshift-operatorgroup-error.png b/product_docs/docs/postgres_for_kubernetes/preview/images/openshift/openshift-operatorgroup-error.png
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/images/openshift/openshift-operatorgroup-error.png
rename to product_docs/docs/postgres_for_kubernetes/preview/images/openshift/openshift-operatorgroup-error.png
diff --git a/product_docs/docs/postgres_for_kubernetes/1/images/openshift/openshift-rbac.png b/product_docs/docs/postgres_for_kubernetes/preview/images/openshift/openshift-rbac.png
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/images/openshift/openshift-rbac.png
rename to product_docs/docs/postgres_for_kubernetes/preview/images/openshift/openshift-rbac.png
diff --git a/product_docs/docs/postgres_for_kubernetes/1/images/openshift/openshift-webconsole-allnamespaces.png b/product_docs/docs/postgres_for_kubernetes/preview/images/openshift/openshift-webconsole-allnamespaces.png
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/images/openshift/openshift-webconsole-allnamespaces.png
rename to product_docs/docs/postgres_for_kubernetes/preview/images/openshift/openshift-webconsole-allnamespaces.png
diff --git a/product_docs/docs/postgres_for_kubernetes/1/images/openshift/openshift-webconsole-multinamespace.png b/product_docs/docs/postgres_for_kubernetes/preview/images/openshift/openshift-webconsole-multinamespace.png
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/images/openshift/openshift-webconsole-multinamespace.png
rename to product_docs/docs/postgres_for_kubernetes/preview/images/openshift/openshift-webconsole-multinamespace.png
diff --git a/product_docs/docs/postgres_for_kubernetes/1/images/openshift/openshift-webconsole-singlenamespace-list.png b/product_docs/docs/postgres_for_kubernetes/preview/images/openshift/openshift-webconsole-singlenamespace-list.png
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/images/openshift/openshift-webconsole-singlenamespace-list.png
rename to product_docs/docs/postgres_for_kubernetes/preview/images/openshift/openshift-webconsole-singlenamespace-list.png
diff --git a/product_docs/docs/postgres_for_kubernetes/1/images/openshift/openshift-webconsole-singlenamespace.png b/product_docs/docs/postgres_for_kubernetes/preview/images/openshift/openshift-webconsole-singlenamespace.png
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/images/openshift/openshift-webconsole-singlenamespace.png
rename to product_docs/docs/postgres_for_kubernetes/preview/images/openshift/openshift-webconsole-singlenamespace.png
diff --git a/product_docs/docs/postgres_for_kubernetes/1/images/openshift/operatorhub_1.png b/product_docs/docs/postgres_for_kubernetes/preview/images/openshift/operatorhub_1.png
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/images/openshift/operatorhub_1.png
rename to product_docs/docs/postgres_for_kubernetes/preview/images/openshift/operatorhub_1.png
diff --git a/product_docs/docs/postgres_for_kubernetes/1/images/openshift/operatorhub_2.png b/product_docs/docs/postgres_for_kubernetes/preview/images/openshift/operatorhub_2.png
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/images/openshift/operatorhub_2.png
rename to product_docs/docs/postgres_for_kubernetes/preview/images/openshift/operatorhub_2.png
diff --git a/product_docs/docs/postgres_for_kubernetes/1/images/openshift/prometheus-queries.png b/product_docs/docs/postgres_for_kubernetes/preview/images/openshift/prometheus-queries.png
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/images/openshift/prometheus-queries.png
rename to product_docs/docs/postgres_for_kubernetes/preview/images/openshift/prometheus-queries.png
diff --git a/product_docs/docs/postgres_for_kubernetes/1/images/operator-capability-level.png b/product_docs/docs/postgres_for_kubernetes/preview/images/operator-capability-level.png
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/images/operator-capability-level.png
rename to product_docs/docs/postgres_for_kubernetes/preview/images/operator-capability-level.png
diff --git a/product_docs/docs/postgres_for_kubernetes/1/images/pgadmin4.png b/product_docs/docs/postgres_for_kubernetes/preview/images/pgadmin4.png
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/images/pgadmin4.png
rename to product_docs/docs/postgres_for_kubernetes/preview/images/pgadmin4.png
diff --git a/product_docs/docs/postgres_for_kubernetes/1/images/pgbouncer-architecture-rw.png b/product_docs/docs/postgres_for_kubernetes/preview/images/pgbouncer-architecture-rw.png
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/images/pgbouncer-architecture-rw.png
rename to product_docs/docs/postgres_for_kubernetes/preview/images/pgbouncer-architecture-rw.png
diff --git a/product_docs/docs/postgres_for_kubernetes/1/images/pgbouncer-pooler-image.png b/product_docs/docs/postgres_for_kubernetes/preview/images/pgbouncer-pooler-image.png
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/images/pgbouncer-pooler-image.png
rename to product_docs/docs/postgres_for_kubernetes/preview/images/pgbouncer-pooler-image.png
diff --git a/product_docs/docs/postgres_for_kubernetes/1/images/pgbouncer-pooler-template.png b/product_docs/docs/postgres_for_kubernetes/preview/images/pgbouncer-pooler-template.png
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/images/pgbouncer-pooler-template.png
rename to product_docs/docs/postgres_for_kubernetes/preview/images/pgbouncer-pooler-template.png
diff --git a/product_docs/docs/postgres_for_kubernetes/1/images/prometheus-local.png b/product_docs/docs/postgres_for_kubernetes/preview/images/prometheus-local.png
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/images/prometheus-local.png
rename to product_docs/docs/postgres_for_kubernetes/preview/images/prometheus-local.png
diff --git a/product_docs/docs/postgres_for_kubernetes/1/images/public-cloud-architecture-storage-replication.png b/product_docs/docs/postgres_for_kubernetes/preview/images/public-cloud-architecture-storage-replication.png
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/images/public-cloud-architecture-storage-replication.png
rename to product_docs/docs/postgres_for_kubernetes/preview/images/public-cloud-architecture-storage-replication.png
diff --git a/product_docs/docs/postgres_for_kubernetes/1/images/public-cloud-architecture.png b/product_docs/docs/postgres_for_kubernetes/preview/images/public-cloud-architecture.png
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/images/public-cloud-architecture.png
rename to product_docs/docs/postgres_for_kubernetes/preview/images/public-cloud-architecture.png
diff --git a/product_docs/docs/postgres_for_kubernetes/1/images/shared-nothing-architecture.png b/product_docs/docs/postgres_for_kubernetes/preview/images/shared-nothing-architecture.png
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/images/shared-nothing-architecture.png
rename to product_docs/docs/postgres_for_kubernetes/preview/images/shared-nothing-architecture.png
diff --git a/product_docs/docs/postgres_for_kubernetes/1/images/write_bw.1-2Draw.png b/product_docs/docs/postgres_for_kubernetes/preview/images/write_bw.1-2Draw.png
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/images/write_bw.1-2Draw.png
rename to product_docs/docs/postgres_for_kubernetes/preview/images/write_bw.1-2Draw.png
diff --git a/product_docs/docs/postgres_for_kubernetes/1/imagevolume_extensions.mdx b/product_docs/docs/postgres_for_kubernetes/preview/imagevolume_extensions.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/imagevolume_extensions.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/imagevolume_extensions.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/index.mdx b/product_docs/docs/postgres_for_kubernetes/preview/index.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/index.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/index.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/installation_upgrade.mdx b/product_docs/docs/postgres_for_kubernetes/preview/installation_upgrade.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/installation_upgrade.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/installation_upgrade.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/instance_manager.mdx b/product_docs/docs/postgres_for_kubernetes/preview/instance_manager.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/instance_manager.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/instance_manager.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/iron-bank.mdx b/product_docs/docs/postgres_for_kubernetes/preview/iron-bank.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/iron-bank.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/iron-bank.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/kubectl-plugin.mdx b/product_docs/docs/postgres_for_kubernetes/preview/kubectl-plugin.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/kubectl-plugin.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/kubectl-plugin.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/kubernetes_upgrade.mdx b/product_docs/docs/postgres_for_kubernetes/preview/kubernetes_upgrade.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/kubernetes_upgrade.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/kubernetes_upgrade.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/labels_annotations.mdx b/product_docs/docs/postgres_for_kubernetes/preview/labels_annotations.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/labels_annotations.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/labels_annotations.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/license_keys.mdx b/product_docs/docs/postgres_for_kubernetes/preview/license_keys.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/license_keys.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/license_keys.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/logging.mdx b/product_docs/docs/postgres_for_kubernetes/preview/logging.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/logging.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/logging.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/logical_replication.mdx b/product_docs/docs/postgres_for_kubernetes/preview/logical_replication.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/logical_replication.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/logical_replication.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/migrating_edb_registries.mdx b/product_docs/docs/postgres_for_kubernetes/preview/migrating_edb_registries.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/migrating_edb_registries.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/migrating_edb_registries.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/monitoring.mdx b/product_docs/docs/postgres_for_kubernetes/preview/monitoring.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/monitoring.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/monitoring.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/networking.mdx b/product_docs/docs/postgres_for_kubernetes/preview/networking.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/networking.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/networking.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/object_stores.mdx b/product_docs/docs/postgres_for_kubernetes/preview/object_stores.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/object_stores.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/object_stores.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/openshift.mdx b/product_docs/docs/postgres_for_kubernetes/preview/openshift.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/openshift.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/openshift.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/operator_capability_levels.mdx b/product_docs/docs/postgres_for_kubernetes/preview/operator_capability_levels.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/operator_capability_levels.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/operator_capability_levels.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/operator_conf.mdx b/product_docs/docs/postgres_for_kubernetes/preview/operator_conf.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/operator_conf.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/operator_conf.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/index.mdx b/product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/index.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/index.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/index.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v0.6.0.mdx b/product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v0.6.0.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v0.6.0.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v0.6.0.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v0.7.0.mdx b/product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v0.7.0.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v0.7.0.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v0.7.0.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v0.8.0.mdx b/product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v0.8.0.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v0.8.0.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v0.8.0.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.0.0.mdx b/product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.0.0.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.0.0.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.0.0.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.1.0.mdx b/product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.1.0.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.1.0.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.1.0.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.10.0.mdx b/product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.10.0.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.10.0.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.10.0.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.11.0.mdx b/product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.11.0.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.11.0.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.11.0.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.12.0.mdx b/product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.12.0.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.12.0.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.12.0.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.13.0.mdx b/product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.13.0.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.13.0.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.13.0.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.14.0.mdx b/product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.14.0.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.14.0.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.14.0.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.15.0.mdx b/product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.15.0.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.15.0.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.15.0.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.15.1.mdx b/product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.15.1.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.15.1.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.15.1.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.15.2.mdx b/product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.15.2.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.15.2.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.15.2.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.15.3.mdx b/product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.15.3.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.15.3.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.15.3.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.15.4.mdx b/product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.15.4.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.15.4.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.15.4.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.15.5.mdx b/product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.15.5.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.15.5.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.15.5.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.16.0.mdx b/product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.16.0.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.16.0.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.16.0.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.16.1.mdx b/product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.16.1.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.16.1.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.16.1.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.16.2.mdx b/product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.16.2.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.16.2.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.16.2.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.16.3.mdx b/product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.16.3.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.16.3.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.16.3.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.16.4.mdx b/product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.16.4.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.16.4.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.16.4.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.16.5.mdx b/product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.16.5.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.16.5.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.16.5.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.17.0.mdx b/product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.17.0.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.17.0.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.17.0.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.17.1.mdx b/product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.17.1.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.17.1.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.17.1.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.17.2.mdx b/product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.17.2.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.17.2.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.17.2.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.17.3.mdx b/product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.17.3.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.17.3.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.17.3.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.17.4.mdx b/product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.17.4.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.17.4.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.17.4.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.17.5.mdx b/product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.17.5.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.17.5.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.17.5.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.18.0.mdx b/product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.18.0.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.18.0.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.18.0.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.18.1.mdx b/product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.18.1.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.18.1.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.18.1.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.18.10.mdx b/product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.18.10.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.18.10.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.18.10.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.18.11.mdx b/product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.18.11.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.18.11.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.18.11.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.18.12.mdx b/product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.18.12.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.18.12.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.18.12.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.18.13.mdx b/product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.18.13.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.18.13.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.18.13.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.18.2.mdx b/product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.18.2.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.18.2.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.18.2.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.18.3.mdx b/product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.18.3.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.18.3.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.18.3.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.18.4.mdx b/product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.18.4.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.18.4.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.18.4.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.18.5.mdx b/product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.18.5.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.18.5.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.18.5.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.18.6.mdx b/product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.18.6.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.18.6.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.18.6.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.18.7.mdx b/product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.18.7.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.18.7.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.18.7.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.18.8.mdx b/product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.18.8.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.18.8.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.18.8.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.18.9.mdx b/product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.18.9.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.18.9.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.18.9.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.19.0.mdx b/product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.19.0.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.19.0.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.19.0.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.19.1.mdx b/product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.19.1.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.19.1.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.19.1.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.19.2.mdx b/product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.19.2.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.19.2.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.19.2.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.19.3.mdx b/product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.19.3.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.19.3.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.19.3.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.19.4.mdx b/product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.19.4.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.19.4.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.19.4.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.19.5.mdx b/product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.19.5.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.19.5.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.19.5.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.19.6.mdx b/product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.19.6.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.19.6.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.19.6.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.2.0.mdx b/product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.2.0.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.2.0.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.2.0.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.2.1.mdx b/product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.2.1.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.2.1.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.2.1.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.20.0.mdx b/product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.20.0.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.20.0.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.20.0.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.20.1.mdx b/product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.20.1.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.20.1.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.20.1.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.20.2.mdx b/product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.20.2.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.20.2.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.20.2.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.20.3.mdx b/product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.20.3.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.20.3.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.20.3.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.20.4.mdx b/product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.20.4.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.20.4.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.20.4.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.20.5.mdx b/product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.20.5.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.20.5.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.20.5.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.20.6.mdx b/product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.20.6.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.20.6.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.20.6.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.21.0.mdx b/product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.21.0.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.21.0.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.21.0.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.21.1.mdx b/product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.21.1.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.21.1.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.21.1.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.21.2.mdx b/product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.21.2.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.21.2.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.21.2.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.21.3.mdx b/product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.21.3.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.21.3.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.21.3.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.21.4.mdx b/product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.21.4.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.21.4.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.21.4.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.21.5.mdx b/product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.21.5.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.21.5.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.21.5.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.21.6.mdx b/product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.21.6.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.21.6.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.21.6.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.22.0.mdx b/product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.22.0.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.22.0.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.22.0.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.22.1.mdx b/product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.22.1.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.22.1.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.22.1.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.22.2.mdx b/product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.22.2.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.22.2.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.22.2.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.22.3.mdx b/product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.22.3.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.22.3.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.22.3.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.22.4.mdx b/product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.22.4.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.22.4.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.22.4.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.22.5.mdx b/product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.22.5.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.22.5.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.22.5.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.22.6.mdx b/product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.22.6.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.22.6.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.22.6.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.22.7.mdx b/product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.22.7.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.22.7.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.22.7.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.22.8.mdx b/product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.22.8.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.22.8.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.22.8.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.22.9.mdx b/product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.22.9.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.22.9.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.22.9.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.23.0.mdx b/product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.23.0.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.23.0.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.23.0.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.23.1.mdx b/product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.23.1.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.23.1.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.23.1.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.23.2.mdx b/product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.23.2.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.23.2.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.23.2.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.23.3.mdx b/product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.23.3.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.23.3.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.23.3.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.23.4.mdx b/product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.23.4.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.23.4.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.23.4.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.23.5.mdx b/product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.23.5.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.23.5.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.23.5.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.23.6.mdx b/product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.23.6.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.23.6.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.23.6.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.24.0.mdx b/product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.24.0.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.24.0.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.24.0.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.24.1.mdx b/product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.24.1.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.24.1.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.24.1.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.24.2.mdx b/product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.24.2.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.24.2.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.24.2.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.24.3.mdx b/product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.24.3.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.24.3.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.24.3.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.25.0.mdx b/product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.25.0.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.25.0.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.25.0.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.25.1.mdx b/product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.25.1.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.25.1.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.25.1.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.26.0.mdx b/product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.26.0.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.26.0.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.26.0.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.26.1.mdx b/product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.26.1.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.26.1.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.26.1.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.27.0.mdx b/product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.27.0.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.27.0.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.27.0.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.27.1.mdx b/product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.27.1.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.27.1.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.27.1.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.28.0-rc1.mdx b/product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.28.0-rc1.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.28.0-rc1.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.28.0-rc1.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.3.0.mdx b/product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.3.0.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.3.0.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.3.0.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.4.0.mdx b/product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.4.0.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.4.0.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.4.0.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.5.0.mdx b/product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.5.0.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.5.0.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.5.0.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.5.1.mdx b/product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.5.1.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.5.1.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.5.1.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.6.0.mdx b/product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.6.0.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.6.0.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.6.0.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.7.0.mdx b/product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.7.0.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.7.0.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.7.0.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.7.1.mdx b/product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.7.1.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.7.1.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.7.1.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.8.0.mdx b/product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.8.0.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.8.0.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.8.0.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.9.0.mdx b/product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.9.0.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.9.0.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.9.0.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.9.1.mdx b/product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.9.1.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.9.1.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.9.1.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.9.2.mdx b/product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.9.2.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.9.2.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/pg4k.v1/v1.9.2.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/postgis.mdx b/product_docs/docs/postgres_for_kubernetes/preview/postgis.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/postgis.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/postgis.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/postgres_upgrades.mdx b/product_docs/docs/postgres_for_kubernetes/preview/postgres_upgrades.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/postgres_upgrades.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/postgres_upgrades.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/postgresql_conf.mdx b/product_docs/docs/postgres_for_kubernetes/preview/postgresql_conf.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/postgresql_conf.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/postgresql_conf.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/preview_version.mdx b/product_docs/docs/postgres_for_kubernetes/preview/preview_version.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/preview_version.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/preview_version.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/private_edb_registries.mdx b/product_docs/docs/postgres_for_kubernetes/preview/private_edb_registries.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/private_edb_registries.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/private_edb_registries.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/quickstart.mdx b/product_docs/docs/postgres_for_kubernetes/preview/quickstart.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/quickstart.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/quickstart.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/recovery.mdx b/product_docs/docs/postgres_for_kubernetes/preview/recovery.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/recovery.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/recovery.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/0_0_1_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/0_0_1_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/0_0_1_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/0_0_1_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/0_1_0_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/0_1_0_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/0_1_0_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/0_1_0_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/0_2_0_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/0_2_0_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/0_2_0_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/0_2_0_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/0_3_0_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/0_3_0_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/0_3_0_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/0_3_0_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/0_4_0_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/0_4_0_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/0_4_0_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/0_4_0_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/0_5_0_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/0_5_0_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/0_5_0_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/0_5_0_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/0_6_0_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/0_6_0_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/0_6_0_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/0_6_0_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/0_7_0_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/0_7_0_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/0_7_0_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/0_7_0_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/0_8_0_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/0_8_0_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/0_8_0_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/0_8_0_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_0_0_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_0_0_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_0_0_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_0_0_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_10_0_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_10_0_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_10_0_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_10_0_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_11_0_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_11_0_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_11_0_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_11_0_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_12_0_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_12_0_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_12_0_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_12_0_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_13_0_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_13_0_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_13_0_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_13_0_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_14_0_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_14_0_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_14_0_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_14_0_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_15_0_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_15_0_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_15_0_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_15_0_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_15_1_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_15_1_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_15_1_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_15_1_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_15_2_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_15_2_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_15_2_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_15_2_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_15_3_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_15_3_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_15_3_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_15_3_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_15_4_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_15_4_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_15_4_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_15_4_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_15_5_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_15_5_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_15_5_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_15_5_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_16_0_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_16_0_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_16_0_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_16_0_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_16_1_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_16_1_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_16_1_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_16_1_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_16_2_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_16_2_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_16_2_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_16_2_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_16_3_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_16_3_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_16_3_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_16_3_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_16_4_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_16_4_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_16_4_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_16_4_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_16_5_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_16_5_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_16_5_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_16_5_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_17_0_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_17_0_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_17_0_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_17_0_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_17_1_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_17_1_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_17_1_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_17_1_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_17_2_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_17_2_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_17_2_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_17_2_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_17_3_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_17_3_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_17_3_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_17_3_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_17_4_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_17_4_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_17_4_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_17_4_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_17_5_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_17_5_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_17_5_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_17_5_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_18_0_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_18_0_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_18_0_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_18_0_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_18_10_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_18_10_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_18_10_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_18_10_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_18_11_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_18_11_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_18_11_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_18_11_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_18_12_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_18_12_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_18_12_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_18_12_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_18_13_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_18_13_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_18_13_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_18_13_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_18_1_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_18_1_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_18_1_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_18_1_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_18_2_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_18_2_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_18_2_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_18_2_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_18_3_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_18_3_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_18_3_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_18_3_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_18_4_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_18_4_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_18_4_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_18_4_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_18_5_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_18_5_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_18_5_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_18_5_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_18_6_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_18_6_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_18_6_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_18_6_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_18_7_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_18_7_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_18_7_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_18_7_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_18_8_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_18_8_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_18_8_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_18_8_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_18_9_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_18_9_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_18_9_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_18_9_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_19_0_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_19_0_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_19_0_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_19_0_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_19_1_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_19_1_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_19_1_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_19_1_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_19_2_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_19_2_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_19_2_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_19_2_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_19_3_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_19_3_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_19_3_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_19_3_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_19_4_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_19_4_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_19_4_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_19_4_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_19_5_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_19_5_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_19_5_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_19_5_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_19_6_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_19_6_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_19_6_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_19_6_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_1_0_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_1_0_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_1_0_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_1_0_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_20_0_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_20_0_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_20_0_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_20_0_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_20_1_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_20_1_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_20_1_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_20_1_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_20_2_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_20_2_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_20_2_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_20_2_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_20_3_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_20_3_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_20_3_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_20_3_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_20_4_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_20_4_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_20_4_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_20_4_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_20_5_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_20_5_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_20_5_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_20_5_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_20_6_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_20_6_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_20_6_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_20_6_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_21_0_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_21_0_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_21_0_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_21_0_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_21_1_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_21_1_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_21_1_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_21_1_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_21_2_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_21_2_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_21_2_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_21_2_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_21_3_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_21_3_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_21_3_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_21_3_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_21_4_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_21_4_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_21_4_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_21_4_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_21_5_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_21_5_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_21_5_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_21_5_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_21_6_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_21_6_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_21_6_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_21_6_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_22_0_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_22_0_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_22_0_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_22_0_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_22_10_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_22_10_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_22_10_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_22_10_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_22_11_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_22_11_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_22_11_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_22_11_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_22_1_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_22_1_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_22_1_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_22_1_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_22_2_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_22_2_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_22_2_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_22_2_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_22_3_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_22_3_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_22_3_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_22_3_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_22_4_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_22_4_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_22_4_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_22_4_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_22_5_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_22_5_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_22_5_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_22_5_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_22_6_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_22_6_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_22_6_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_22_6_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_22_7_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_22_7_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_22_7_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_22_7_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_22_8_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_22_8_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_22_8_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_22_8_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_22_9_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_22_9_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_22_9_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_22_9_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_23_0_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_23_0_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_23_0_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_23_0_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_23_1_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_23_1_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_23_1_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_23_1_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_23_2_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_23_2_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_23_2_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_23_2_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_23_3_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_23_3_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_23_3_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_23_3_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_23_4_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_23_4_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_23_4_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_23_4_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_23_5_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_23_5_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_23_5_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_23_5_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_23_6_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_23_6_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_23_6_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_23_6_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_24_0_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_24_0_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_24_0_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_24_0_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_24_1_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_24_1_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_24_1_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_24_1_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_24_2_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_24_2_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_24_2_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_24_2_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_24_3_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_24_3_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_24_3_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_24_3_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_24_4_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_24_4_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_24_4_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_24_4_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_25_0_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_25_0_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_25_0_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_25_0_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_25_1_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_25_1_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_25_1_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_25_1_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_25_2_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_25_2_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_25_2_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_25_2_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_25_3_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_25_3_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_25_3_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_25_3_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_25_4_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_25_4_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_25_4_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_25_4_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_26_0_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_26_0_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_26_0_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_26_0_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_26_1_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_26_1_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_26_1_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_26_1_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_26_2_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_26_2_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_26_2_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_26_2_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_27_0_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_27_0_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_27_0_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_27_0_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_27_1_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_27_1_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_27_1_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_27_1_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_2_0_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_2_0_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_2_0_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_2_0_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_2_1_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_2_1_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_2_1_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_2_1_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_3_0_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_3_0_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_3_0_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_3_0_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_4_0_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_4_0_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_4_0_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_4_0_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_5_0_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_5_0_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_5_0_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_5_0_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_5_1_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_5_1_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_5_1_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_5_1_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_6_0_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_6_0_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_6_0_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_6_0_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_7_0_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_7_0_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_7_0_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_7_0_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_7_1_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_7_1_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_7_1_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_7_1_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_8_0_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_8_0_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_8_0_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_8_0_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_9_0_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_9_0_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_9_0_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_9_0_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_9_1_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_9_1_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_9_1_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_9_1_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_9_2_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_9_2_rel_notes.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_9_2_rel_notes.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_9_2_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/index.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/index.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/index.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/index.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/src/1.22.10_rel_notes.yml b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/src/1.22.10_rel_notes.yml
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/src/1.22.10_rel_notes.yml
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/src/1.22.10_rel_notes.yml
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/src/1.22.11_rel_notes.yml b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/src/1.22.11_rel_notes.yml
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/src/1.22.11_rel_notes.yml
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/src/1.22.11_rel_notes.yml
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/src/1.24.4_rel_notes.yml b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/src/1.24.4_rel_notes.yml
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/src/1.24.4_rel_notes.yml
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/src/1.24.4_rel_notes.yml
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/src/1.25.2_rel_notes.yml b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/src/1.25.2_rel_notes.yml
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/src/1.25.2_rel_notes.yml
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/src/1.25.2_rel_notes.yml
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/src/1.25.3_rel_notes.yml b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/src/1.25.3_rel_notes.yml
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/src/1.25.3_rel_notes.yml
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/src/1.25.3_rel_notes.yml
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/src/1.25.4_rel_notes.yml b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/src/1.25.4_rel_notes.yml
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/src/1.25.4_rel_notes.yml
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/src/1.25.4_rel_notes.yml
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/src/1.26.0_rel_notes.yml b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/src/1.26.0_rel_notes.yml
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/src/1.26.0_rel_notes.yml
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/src/1.26.0_rel_notes.yml
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/src/1.26.1_rel_notes.yml b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/src/1.26.1_rel_notes.yml
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/src/1.26.1_rel_notes.yml
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/src/1.26.1_rel_notes.yml
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/src/1.26.2_rel_notes.yml b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/src/1.26.2_rel_notes.yml
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/src/1.26.2_rel_notes.yml
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/src/1.26.2_rel_notes.yml
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/src/1.27.0_rel_notes.yml b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/src/1.27.0_rel_notes.yml
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/src/1.27.0_rel_notes.yml
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/src/1.27.0_rel_notes.yml
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/src/1.27.1_rel_notes.yml b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/src/1.27.1_rel_notes.yml
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/src/1.27.1_rel_notes.yml
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/src/1.27.1_rel_notes.yml
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/src/meta.yml b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/src/meta.yml
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rel_notes/src/meta.yml
rename to product_docs/docs/postgres_for_kubernetes/preview/rel_notes/src/meta.yml
diff --git a/product_docs/docs/postgres_for_kubernetes/1/replica_cluster.mdx b/product_docs/docs/postgres_for_kubernetes/preview/replica_cluster.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/replica_cluster.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/replica_cluster.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/replication.mdx b/product_docs/docs/postgres_for_kubernetes/preview/replication.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/replication.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/replication.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/resource_management.mdx b/product_docs/docs/postgres_for_kubernetes/preview/resource_management.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/resource_management.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/resource_management.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rolling_update.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rolling_update.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/rolling_update.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/rolling_update.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples.mdx b/product_docs/docs/postgres_for_kubernetes/preview/samples.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/samples.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/samples.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/backup-example.yaml b/product_docs/docs/postgres_for_kubernetes/preview/samples/backup-example.yaml
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/samples/backup-example.yaml
rename to product_docs/docs/postgres_for_kubernetes/preview/samples/backup-example.yaml
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/backup-with-volume-snapshot.yaml b/product_docs/docs/postgres_for_kubernetes/preview/samples/backup-with-volume-snapshot.yaml
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/samples/backup-with-volume-snapshot.yaml
rename to product_docs/docs/postgres_for_kubernetes/preview/samples/backup-with-volume-snapshot.yaml
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-additional-volumes.yaml b/product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-additional-volumes.yaml
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/samples/cluster-additional-volumes.yaml
rename to product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-additional-volumes.yaml
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-advanced-initdb.yaml b/product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-advanced-initdb.yaml
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/samples/cluster-advanced-initdb.yaml
rename to product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-advanced-initdb.yaml
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-backup-aws-inherit.yaml b/product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-backup-aws-inherit.yaml
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/samples/cluster-backup-aws-inherit.yaml
rename to product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-backup-aws-inherit.yaml
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-backup-azure-inherit.yaml b/product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-backup-azure-inherit.yaml
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/samples/cluster-backup-azure-inherit.yaml
rename to product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-backup-azure-inherit.yaml
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-backup-retention-30d.yaml b/product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-backup-retention-30d.yaml
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/samples/cluster-backup-retention-30d.yaml
rename to product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-backup-retention-30d.yaml
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-clone-basicauth.yaml b/product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-clone-basicauth.yaml
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/samples/cluster-clone-basicauth.yaml
rename to product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-clone-basicauth.yaml
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-clone-tls.yaml b/product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-clone-tls.yaml
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/samples/cluster-clone-tls.yaml
rename to product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-clone-tls.yaml
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-bis-restore-cr.yaml b/product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-example-bis-restore-cr.yaml
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-bis-restore-cr.yaml
rename to product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-example-bis-restore-cr.yaml
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-bis-restore.yaml b/product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-example-bis-restore.yaml
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-bis-restore.yaml
rename to product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-example-bis-restore.yaml
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-bis.yaml b/product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-example-bis.yaml
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-bis.yaml
rename to product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-example-bis.yaml
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-catalog.yaml b/product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-example-catalog.yaml
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-catalog.yaml
rename to product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-example-catalog.yaml
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-cert-manager.yaml b/product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-example-cert-manager.yaml
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-cert-manager.yaml
rename to product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-example-cert-manager.yaml
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-custom.yaml b/product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-example-custom.yaml
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-custom.yaml
rename to product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-example-custom.yaml
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-epas.yaml b/product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-example-epas.yaml
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-epas.yaml
rename to product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-example-epas.yaml
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-external-backup-adapter-cluster.yaml b/product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-example-external-backup-adapter-cluster.yaml
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-external-backup-adapter-cluster.yaml
rename to product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-example-external-backup-adapter-cluster.yaml
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-external-backup-adapter.yaml b/product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-example-external-backup-adapter.yaml
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-external-backup-adapter.yaml
rename to product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-example-external-backup-adapter.yaml
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-full.yaml b/product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-example-full.yaml
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-full.yaml
rename to product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-example-full.yaml
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-initdb-icu.yaml b/product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-example-initdb-icu.yaml
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-initdb-icu.yaml
rename to product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-example-initdb-icu.yaml
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-initdb-sql-refs.yaml b/product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-example-initdb-sql-refs.yaml
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-initdb-sql-refs.yaml
rename to product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-example-initdb-sql-refs.yaml
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-initdb.yaml b/product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-example-initdb.yaml
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-initdb.yaml
rename to product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-example-initdb.yaml
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-logical-destination.yaml b/product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-example-logical-destination.yaml
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-logical-destination.yaml
rename to product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-example-logical-destination.yaml
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-logical-source.yaml b/product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-example-logical-source.yaml
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-logical-source.yaml
rename to product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-example-logical-source.yaml
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-managed-services.yaml b/product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-example-managed-services.yaml
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-managed-services.yaml
rename to product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-example-managed-services.yaml
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-monitoring.yaml b/product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-example-monitoring.yaml
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-monitoring.yaml
rename to product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-example-monitoring.yaml
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-pg-hba.yaml b/product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-example-pg-hba.yaml
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-pg-hba.yaml
rename to product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-example-pg-hba.yaml
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-pge.yaml b/product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-example-pge.yaml
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-pge.yaml
rename to product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-example-pge.yaml
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-projected-volume.yaml b/product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-example-projected-volume.yaml
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-projected-volume.yaml
rename to product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-example-projected-volume.yaml
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-replica-from-backup-simple.yaml b/product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-example-replica-from-backup-simple.yaml
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-replica-from-backup-simple.yaml
rename to product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-example-replica-from-backup-simple.yaml
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-replica-from-volume-snapshot.yaml b/product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-example-replica-from-volume-snapshot.yaml
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-replica-from-volume-snapshot.yaml
rename to product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-example-replica-from-volume-snapshot.yaml
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-replica-streaming.yaml b/product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-example-replica-streaming.yaml
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-replica-streaming.yaml
rename to product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-example-replica-streaming.yaml
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-secret.yaml b/product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-example-secret.yaml
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-secret.yaml
rename to product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-example-secret.yaml
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-security-context.yaml b/product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-example-security-context.yaml
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-security-context.yaml
rename to product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-example-security-context.yaml
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-sync-az.yaml b/product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-example-sync-az.yaml
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-sync-az.yaml
rename to product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-example-sync-az.yaml
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-syncreplicas-explicit.yaml b/product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-example-syncreplicas-explicit.yaml
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-syncreplicas-explicit.yaml
rename to product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-example-syncreplicas-explicit.yaml
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-syncreplicas-legacy.yaml b/product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-example-syncreplicas-legacy.yaml
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-syncreplicas-legacy.yaml
rename to product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-example-syncreplicas-legacy.yaml
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-syncreplicas-quorum.yaml b/product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-example-syncreplicas-quorum.yaml
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-syncreplicas-quorum.yaml
rename to product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-example-syncreplicas-quorum.yaml
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-tde.yaml b/product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-example-tde.yaml
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-tde.yaml
rename to product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-example-tde.yaml
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-trigger-backup.yaml b/product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-example-trigger-backup.yaml
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-trigger-backup.yaml
rename to product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-example-trigger-backup.yaml
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-wal-storage.yaml b/product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-example-wal-storage.yaml
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-wal-storage.yaml
rename to product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-example-wal-storage.yaml
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-with-backup-scaleway.yaml b/product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-example-with-backup-scaleway.yaml
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-with-backup-scaleway.yaml
rename to product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-example-with-backup-scaleway.yaml
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-with-backup.yaml b/product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-example-with-backup.yaml
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-with-backup.yaml
rename to product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-example-with-backup.yaml
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-with-probes.yaml b/product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-example-with-probes.yaml
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-with-probes.yaml
rename to product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-example-with-probes.yaml
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-with-roles.yaml b/product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-example-with-roles.yaml
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-with-roles.yaml
rename to product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-example-with-roles.yaml
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-with-tablespaces-backup.yaml b/product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-example-with-tablespaces-backup.yaml
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-with-tablespaces-backup.yaml
rename to product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-example-with-tablespaces-backup.yaml
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-with-tablespaces.yaml b/product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-example-with-tablespaces.yaml
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-with-tablespaces.yaml
rename to product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-example-with-tablespaces.yaml
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-with-volume-snapshot.yaml b/product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-example-with-volume-snapshot.yaml
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-with-volume-snapshot.yaml
rename to product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-example-with-volume-snapshot.yaml
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example.yaml b/product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-example.yaml
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example.yaml
rename to product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-example.yaml
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-expose-service.yaml b/product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-expose-service.yaml
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/samples/cluster-expose-service.yaml
rename to product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-expose-service.yaml
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-import-schema-only-basicauth.yaml b/product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-import-schema-only-basicauth.yaml
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/samples/cluster-import-schema-only-basicauth.yaml
rename to product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-import-schema-only-basicauth.yaml
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-import-snapshot-basicauth.yaml b/product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-import-snapshot-basicauth.yaml
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/samples/cluster-import-snapshot-basicauth.yaml
rename to product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-import-snapshot-basicauth.yaml
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-import-snapshot-tls.yaml b/product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-import-snapshot-tls.yaml
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/samples/cluster-import-snapshot-tls.yaml
rename to product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-import-snapshot-tls.yaml
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-pvc-template.yaml b/product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-pvc-template.yaml
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/samples/cluster-pvc-template.yaml
rename to product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-pvc-template.yaml
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-replica-async.yaml b/product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-replica-async.yaml
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/samples/cluster-replica-async.yaml
rename to product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-replica-async.yaml
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-replica-basicauth.yaml b/product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-replica-basicauth.yaml
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/samples/cluster-replica-basicauth.yaml
rename to product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-replica-basicauth.yaml
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-replica-from-backup-other-namespace.yaml b/product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-replica-from-backup-other-namespace.yaml
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/samples/cluster-replica-from-backup-other-namespace.yaml
rename to product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-replica-from-backup-other-namespace.yaml
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-replica-restore.yaml b/product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-replica-restore.yaml
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/samples/cluster-replica-restore.yaml
rename to product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-replica-restore.yaml
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-replica-tls.yaml b/product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-replica-tls.yaml
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/samples/cluster-replica-tls.yaml
rename to product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-replica-tls.yaml
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-restore-external-cluster.yaml b/product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-restore-external-cluster.yaml
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/samples/cluster-restore-external-cluster.yaml
rename to product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-restore-external-cluster.yaml
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-restore-pitr.yaml b/product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-restore-pitr.yaml
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/samples/cluster-restore-pitr.yaml
rename to product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-restore-pitr.yaml
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-restore-snapshot-full.yaml b/product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-restore-snapshot-full.yaml
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/samples/cluster-restore-snapshot-full.yaml
rename to product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-restore-snapshot-full.yaml
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-restore-snapshot-pitr.yaml b/product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-restore-snapshot-pitr.yaml
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/samples/cluster-restore-snapshot-pitr.yaml
rename to product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-restore-snapshot-pitr.yaml
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-restore-snapshot.yaml b/product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-restore-snapshot.yaml
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/samples/cluster-restore-snapshot.yaml
rename to product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-restore-snapshot.yaml
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-restore-with-tablespaces.yaml b/product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-restore-with-tablespaces.yaml
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/samples/cluster-restore-with-tablespaces.yaml
rename to product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-restore-with-tablespaces.yaml
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-restore.yaml b/product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-restore.yaml
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/samples/cluster-restore.yaml
rename to product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-restore.yaml
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-storage-class-with-backup.yaml b/product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-storage-class-with-backup.yaml
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/samples/cluster-storage-class-with-backup.yaml
rename to product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-storage-class-with-backup.yaml
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-storage-class.yaml b/product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-storage-class.yaml
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/samples/cluster-storage-class.yaml
rename to product_docs/docs/postgres_for_kubernetes/preview/samples/cluster-storage-class.yaml
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/database-example-fail.yaml b/product_docs/docs/postgres_for_kubernetes/preview/samples/database-example-fail.yaml
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/samples/database-example-fail.yaml
rename to product_docs/docs/postgres_for_kubernetes/preview/samples/database-example-fail.yaml
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/database-example-icu.yaml b/product_docs/docs/postgres_for_kubernetes/preview/samples/database-example-icu.yaml
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/samples/database-example-icu.yaml
rename to product_docs/docs/postgres_for_kubernetes/preview/samples/database-example-icu.yaml
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/database-example.yaml b/product_docs/docs/postgres_for_kubernetes/preview/samples/database-example.yaml
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/samples/database-example.yaml
rename to product_docs/docs/postgres_for_kubernetes/preview/samples/database-example.yaml
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/dc/cluster-dc-a.yaml b/product_docs/docs/postgres_for_kubernetes/preview/samples/dc/cluster-dc-a.yaml
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/samples/dc/cluster-dc-a.yaml
rename to product_docs/docs/postgres_for_kubernetes/preview/samples/dc/cluster-dc-a.yaml
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/dc/cluster-dc-b.yaml b/product_docs/docs/postgres_for_kubernetes/preview/samples/dc/cluster-dc-b.yaml
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/samples/dc/cluster-dc-b.yaml
rename to product_docs/docs/postgres_for_kubernetes/preview/samples/dc/cluster-dc-b.yaml
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/dc/cluster-test.yaml b/product_docs/docs/postgres_for_kubernetes/preview/samples/dc/cluster-test.yaml
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/samples/dc/cluster-test.yaml
rename to product_docs/docs/postgres_for_kubernetes/preview/samples/dc/cluster-test.yaml
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/k9s/plugins.yml b/product_docs/docs/postgres_for_kubernetes/preview/samples/k9s/plugins.yml
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/samples/k9s/plugins.yml
rename to product_docs/docs/postgres_for_kubernetes/preview/samples/k9s/plugins.yml
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/monitoring/alerts.yaml b/product_docs/docs/postgres_for_kubernetes/preview/samples/monitoring/alerts.yaml
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/samples/monitoring/alerts.yaml
rename to product_docs/docs/postgres_for_kubernetes/preview/samples/monitoring/alerts.yaml
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/monitoring/kube-stack-config.yaml b/product_docs/docs/postgres_for_kubernetes/preview/samples/monitoring/kube-stack-config.yaml
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/samples/monitoring/kube-stack-config.yaml
rename to product_docs/docs/postgres_for_kubernetes/preview/samples/monitoring/kube-stack-config.yaml
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/monitoring/podmonitor.yaml b/product_docs/docs/postgres_for_kubernetes/preview/samples/monitoring/podmonitor.yaml
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/samples/monitoring/podmonitor.yaml
rename to product_docs/docs/postgres_for_kubernetes/preview/samples/monitoring/podmonitor.yaml
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/monitoring/prometheusrule.yaml b/product_docs/docs/postgres_for_kubernetes/preview/samples/monitoring/prometheusrule.yaml
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/samples/monitoring/prometheusrule.yaml
rename to product_docs/docs/postgres_for_kubernetes/preview/samples/monitoring/prometheusrule.yaml
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/networkpolicy-example.yaml b/product_docs/docs/postgres_for_kubernetes/preview/samples/networkpolicy-example.yaml
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/samples/networkpolicy-example.yaml
rename to product_docs/docs/postgres_for_kubernetes/preview/samples/networkpolicy-example.yaml
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/pooler-basic-auth.yaml b/product_docs/docs/postgres_for_kubernetes/preview/samples/pooler-basic-auth.yaml
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/samples/pooler-basic-auth.yaml
rename to product_docs/docs/postgres_for_kubernetes/preview/samples/pooler-basic-auth.yaml
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/pooler-deployment-strategy.yaml b/product_docs/docs/postgres_for_kubernetes/preview/samples/pooler-deployment-strategy.yaml
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/samples/pooler-deployment-strategy.yaml
rename to product_docs/docs/postgres_for_kubernetes/preview/samples/pooler-deployment-strategy.yaml
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/pooler-external.yaml b/product_docs/docs/postgres_for_kubernetes/preview/samples/pooler-external.yaml
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/samples/pooler-external.yaml
rename to product_docs/docs/postgres_for_kubernetes/preview/samples/pooler-external.yaml
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/pooler-tls.yaml b/product_docs/docs/postgres_for_kubernetes/preview/samples/pooler-tls.yaml
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/samples/pooler-tls.yaml
rename to product_docs/docs/postgres_for_kubernetes/preview/samples/pooler-tls.yaml
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/postgis-example.yaml b/product_docs/docs/postgres_for_kubernetes/preview/samples/postgis-example.yaml
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/samples/postgis-example.yaml
rename to product_docs/docs/postgres_for_kubernetes/preview/samples/postgis-example.yaml
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/publication-example-objects.yaml b/product_docs/docs/postgres_for_kubernetes/preview/samples/publication-example-objects.yaml
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/samples/publication-example-objects.yaml
rename to product_docs/docs/postgres_for_kubernetes/preview/samples/publication-example-objects.yaml
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/publication-example.yaml b/product_docs/docs/postgres_for_kubernetes/preview/samples/publication-example.yaml
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/samples/publication-example.yaml
rename to product_docs/docs/postgres_for_kubernetes/preview/samples/publication-example.yaml
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/scheduled-backup-example.yaml b/product_docs/docs/postgres_for_kubernetes/preview/samples/scheduled-backup-example.yaml
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/samples/scheduled-backup-example.yaml
rename to product_docs/docs/postgres_for_kubernetes/preview/samples/scheduled-backup-example.yaml
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/subscription-example.yaml b/product_docs/docs/postgres_for_kubernetes/preview/samples/subscription-example.yaml
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/samples/subscription-example.yaml
rename to product_docs/docs/postgres_for_kubernetes/preview/samples/subscription-example.yaml
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/subscription.yaml b/product_docs/docs/postgres_for_kubernetes/preview/samples/subscription.yaml
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/samples/subscription.yaml
rename to product_docs/docs/postgres_for_kubernetes/preview/samples/subscription.yaml
diff --git a/product_docs/docs/postgres_for_kubernetes/1/scheduling.mdx b/product_docs/docs/postgres_for_kubernetes/preview/scheduling.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/scheduling.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/scheduling.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/security.mdx b/product_docs/docs/postgres_for_kubernetes/preview/security.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/security.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/security.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/service_management.mdx b/product_docs/docs/postgres_for_kubernetes/preview/service_management.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/service_management.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/service_management.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/ssl_connections.mdx b/product_docs/docs/postgres_for_kubernetes/preview/ssl_connections.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/ssl_connections.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/ssl_connections.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/storage.mdx b/product_docs/docs/postgres_for_kubernetes/preview/storage.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/storage.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/storage.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/tablespaces.mdx b/product_docs/docs/postgres_for_kubernetes/preview/tablespaces.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/tablespaces.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/tablespaces.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/tde.mdx b/product_docs/docs/postgres_for_kubernetes/preview/tde.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/tde.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/tde.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/troubleshooting.mdx b/product_docs/docs/postgres_for_kubernetes/preview/troubleshooting.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/troubleshooting.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/troubleshooting.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/use_cases.mdx b/product_docs/docs/postgres_for_kubernetes/preview/use_cases.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/use_cases.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/use_cases.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/wal_archiving.mdx b/product_docs/docs/postgres_for_kubernetes/preview/wal_archiving.mdx
similarity index 100%
rename from product_docs/docs/postgres_for_kubernetes/1/wal_archiving.mdx
rename to product_docs/docs/postgres_for_kubernetes/preview/wal_archiving.mdx
From a317f2fc1d0ea1e13d73ffb27975df81004f4293 Mon Sep 17 00:00:00 2001
From: Josh Heyer
Date: Mon, 1 Dec 2025 03:44:34 +0000
Subject: [PATCH 3/9] add release notes
---
.../preview/rel_notes/1_22_10_rel_notes.mdx | 2 +-
.../preview/rel_notes/1_22_11_rel_notes.mdx | 2 +-
.../preview/rel_notes/1_24_4_rel_notes.mdx | 2 +-
.../preview/rel_notes/1_25_2_rel_notes.mdx | 2 +-
.../preview/rel_notes/1_25_3_rel_notes.mdx | 2 +-
.../preview/rel_notes/1_25_4_rel_notes.mdx | 2 +-
.../preview/rel_notes/1_26_0_rel_notes.mdx | 2 +-
.../preview/rel_notes/1_26_1_rel_notes.mdx | 2 +-
.../preview/rel_notes/1_26_2_rel_notes.mdx | 2 +-
.../preview/rel_notes/1_27_0_rel_notes.mdx | 2 +-
.../preview/rel_notes/1_27_1_rel_notes.mdx | 2 +-
.../rel_notes/1_28_0-rc1_rel_notes.mdx | 108 ++++++++++++
.../preview/rel_notes/index.mdx | 4 +-
.../rel_notes/src/1.28.0_rel_notes.yml | 161 ++++++++++++++++++
14 files changed, 283 insertions(+), 12 deletions(-)
create mode 100644 product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_28_0-rc1_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/preview/rel_notes/src/1.28.0_rel_notes.yml
diff --git a/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_22_10_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_22_10_rel_notes.mdx
index 9b0e0861e7..ce1599ffa8 100644
--- a/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_22_10_rel_notes.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_22_10_rel_notes.mdx
@@ -2,7 +2,7 @@
# IMPORTANT: Do not edit this file directly - it is generated from yaml source.
title: EDB CloudNativePG Cluster 1.22.10 release notes
navTitle: Version 1.22.10
-originalFilePath: product_docs/docs/postgres_for_kubernetes/1/rel_notes/src/1.22.10_rel_notes.yml
+originalFilePath: product_docs/docs/postgres_for_kubernetes/preview/rel_notes/src/1.22.10_rel_notes.yml
editTarget: originalFilePath
---
diff --git a/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_22_11_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_22_11_rel_notes.mdx
index dd3b954f79..3c6121f4e5 100644
--- a/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_22_11_rel_notes.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_22_11_rel_notes.mdx
@@ -2,7 +2,7 @@
# IMPORTANT: Do not edit this file directly - it is generated from yaml source.
title: EDB CloudNativePG Cluster 1.22.11 release notes
navTitle: Version 1.22.11
-originalFilePath: product_docs/docs/postgres_for_kubernetes/1/rel_notes/src/1.22.11_rel_notes.yml
+originalFilePath: product_docs/docs/postgres_for_kubernetes/preview/rel_notes/src/1.22.11_rel_notes.yml
editTarget: originalFilePath
---
diff --git a/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_24_4_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_24_4_rel_notes.mdx
index 58fc7d3a4c..54b04f2458 100644
--- a/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_24_4_rel_notes.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_24_4_rel_notes.mdx
@@ -2,7 +2,7 @@
# IMPORTANT: Do not edit this file directly - it is generated from yaml source.
title: EDB CloudNativePG Cluster 1.24.4 release notes
navTitle: Version 1.24.4
-originalFilePath: product_docs/docs/postgres_for_kubernetes/1/rel_notes/src/1.24.4_rel_notes.yml
+originalFilePath: product_docs/docs/postgres_for_kubernetes/preview/rel_notes/src/1.24.4_rel_notes.yml
editTarget: originalFilePath
---
diff --git a/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_25_2_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_25_2_rel_notes.mdx
index a5b5bff6b9..623e069432 100644
--- a/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_25_2_rel_notes.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_25_2_rel_notes.mdx
@@ -2,7 +2,7 @@
# IMPORTANT: Do not edit this file directly - it is generated from yaml source.
title: EDB CloudNativePG Cluster 1.25.2 release notes
navTitle: Version 1.25.2
-originalFilePath: product_docs/docs/postgres_for_kubernetes/1/rel_notes/src/1.25.2_rel_notes.yml
+originalFilePath: product_docs/docs/postgres_for_kubernetes/preview/rel_notes/src/1.25.2_rel_notes.yml
editTarget: originalFilePath
---
diff --git a/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_25_3_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_25_3_rel_notes.mdx
index 4150ce2cf0..80a714be43 100644
--- a/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_25_3_rel_notes.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_25_3_rel_notes.mdx
@@ -2,7 +2,7 @@
# IMPORTANT: Do not edit this file directly - it is generated from yaml source.
title: EDB CloudNativePG Cluster 1.25.3 release notes
navTitle: Version 1.25.3
-originalFilePath: product_docs/docs/postgres_for_kubernetes/1/rel_notes/src/1.25.3_rel_notes.yml
+originalFilePath: product_docs/docs/postgres_for_kubernetes/preview/rel_notes/src/1.25.3_rel_notes.yml
editTarget: originalFilePath
---
diff --git a/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_25_4_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_25_4_rel_notes.mdx
index f7de94a2a8..4c92dfe40d 100644
--- a/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_25_4_rel_notes.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_25_4_rel_notes.mdx
@@ -2,7 +2,7 @@
# IMPORTANT: Do not edit this file directly - it is generated from yaml source.
title: EDB CloudNativePG Cluster 1.25.4 release notes
navTitle: Version 1.25.4
-originalFilePath: product_docs/docs/postgres_for_kubernetes/1/rel_notes/src/1.25.4_rel_notes.yml
+originalFilePath: product_docs/docs/postgres_for_kubernetes/preview/rel_notes/src/1.25.4_rel_notes.yml
editTarget: originalFilePath
---
diff --git a/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_26_0_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_26_0_rel_notes.mdx
index 1b55f4f75a..ae25b9954d 100644
--- a/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_26_0_rel_notes.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_26_0_rel_notes.mdx
@@ -2,7 +2,7 @@
# IMPORTANT: Do not edit this file directly - it is generated from yaml source.
title: EDB CloudNativePG Cluster 1.26.0 release notes
navTitle: Version 1.26.0
-originalFilePath: product_docs/docs/postgres_for_kubernetes/1/rel_notes/src/1.26.0_rel_notes.yml
+originalFilePath: product_docs/docs/postgres_for_kubernetes/preview/rel_notes/src/1.26.0_rel_notes.yml
editTarget: originalFilePath
---
diff --git a/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_26_1_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_26_1_rel_notes.mdx
index 2ebd6e17bb..05a4c28880 100644
--- a/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_26_1_rel_notes.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_26_1_rel_notes.mdx
@@ -2,7 +2,7 @@
# IMPORTANT: Do not edit this file directly - it is generated from yaml source.
title: EDB CloudNativePG Cluster 1.26.1 release notes
navTitle: Version 1.26.1
-originalFilePath: product_docs/docs/postgres_for_kubernetes/1/rel_notes/src/1.26.1_rel_notes.yml
+originalFilePath: product_docs/docs/postgres_for_kubernetes/preview/rel_notes/src/1.26.1_rel_notes.yml
editTarget: originalFilePath
---
diff --git a/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_26_2_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_26_2_rel_notes.mdx
index 12bff16042..32d0fe3ec1 100644
--- a/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_26_2_rel_notes.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_26_2_rel_notes.mdx
@@ -2,7 +2,7 @@
# IMPORTANT: Do not edit this file directly - it is generated from yaml source.
title: EDB CloudNativePG Cluster 1.26.2 release notes
navTitle: Version 1.26.2
-originalFilePath: product_docs/docs/postgres_for_kubernetes/1/rel_notes/src/1.26.2_rel_notes.yml
+originalFilePath: product_docs/docs/postgres_for_kubernetes/preview/rel_notes/src/1.26.2_rel_notes.yml
editTarget: originalFilePath
---
diff --git a/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_27_0_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_27_0_rel_notes.mdx
index 61611a4f22..4958fe739b 100644
--- a/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_27_0_rel_notes.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_27_0_rel_notes.mdx
@@ -2,7 +2,7 @@
# IMPORTANT: Do not edit this file directly - it is generated from yaml source.
title: EDB CloudNativePG Cluster 1.27.0 release notes
navTitle: Version 1.27.0
-originalFilePath: product_docs/docs/postgres_for_kubernetes/1/rel_notes/src/1.27.0_rel_notes.yml
+originalFilePath: product_docs/docs/postgres_for_kubernetes/preview/rel_notes/src/1.27.0_rel_notes.yml
editTarget: originalFilePath
---
diff --git a/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_27_1_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_27_1_rel_notes.mdx
index e040b481be..e3dfa8f648 100644
--- a/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_27_1_rel_notes.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_27_1_rel_notes.mdx
@@ -2,7 +2,7 @@
# IMPORTANT: Do not edit this file directly - it is generated from yaml source.
title: EDB CloudNativePG Cluster 1.27.1 release notes
navTitle: Version 1.27.1
-originalFilePath: product_docs/docs/postgres_for_kubernetes/1/rel_notes/src/1.27.1_rel_notes.yml
+originalFilePath: product_docs/docs/postgres_for_kubernetes/preview/rel_notes/src/1.27.1_rel_notes.yml
editTarget: originalFilePath
---
diff --git a/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_28_0-rc1_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_28_0-rc1_rel_notes.mdx
new file mode 100644
index 0000000000..2ebe313573
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/1_28_0-rc1_rel_notes.mdx
@@ -0,0 +1,108 @@
+---
+# IMPORTANT: Do not edit this file directly - it is generated from yaml source.
+title: EDB CloudNativePG Cluster 1.28.0-rc1 release notes
+navTitle: Version 1.28.0-rc1
+originalFilePath: product_docs/docs/postgres_for_kubernetes/preview/rel_notes/src/1.28.0_rel_notes.yml
+editTarget: originalFilePath
+---
+
+Released: 7 November 2025
+
+This release includes the following:
+
+## Features
+
+| Description | Addresses |
+|-------------|-----------|
+| **Quorum-Based Failover Promoted to Stable**<br/>Promoted the quorum-based failover feature, introduced experimentally in 1.27.0, to a stable API. This data-driven failover mechanism is now configured via the `spec.postgresql.synchronous.failoverQuorum` field, graduating from the previous `alpha.k8s.enterprisedb.io/failoverQuorum` annotation. | #8589 |
+| **Declarative Foreign Data Management**<br/>Introduced comprehensive declarative management for Foreign Data Wrappers (FDW) by extending the `Database` CRD. This feature adds the `.spec.fdws` and `.spec.servers` fields, allowing you to manage FDW extensions and their corresponding foreign servers directly from the `Database` resource. This work was implemented by Ying Zhu (@EdwinaZhu) as part of the LFX Mentorship Program 2025 Term 2. | #7942, #8401 |
+
+
+
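+As a minimal sketch of the stable quorum-based failover configuration listed
+above, assuming `failoverQuorum` takes a boolean and sits alongside the
+existing synchronous replication settings (`method`, `number`); the cluster
+name and counts are illustrative:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+  name: cluster-example
+spec:
+  instances: 3
+  postgresql:
+    synchronous:
+      # Quorum-based synchronous replication settings (illustrative values)
+      method: any
+      number: 1
+      # Stable replacement for the alpha.k8s.enterprisedb.io/failoverQuorum annotation
+      failoverQuorum: true
+```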
+## Enhancements
+
+| Description | Addresses |
+|-------------|-----------|
+| **Enabled simultaneous image and configuration changes**, allowing you to update the container image (including PostgreSQL version or extensions) and PostgreSQL configuration settings in the same operation. The operator first applies the image change, followed by the configuration changes in a subsequent rollout, ensuring safe and consistent cluster updates. | #8115 |
+| **Introduced `securityContext` at the pod level and `containerSecurityContext` for individual containers** (including `postgres`, `init`, and sidecars)<br/>This provides granular control over security settings, replacing the previous cluster-wide `postgres` and `operator` user settings. Contributed by @x0ddf. | #6614 |
+| **Adopted standard Kubernetes recommended labels** (e.g., `app.kubernetes.io/name`) for all resources generated by EDB Postgres for Kubernetes (Clusters, Backups, Poolers, etc.). Contributed by @JefeDavis. | #8087 |
+| **Introduced a new caching layer for user-defined monitoring queries** to reduce load on the PostgreSQL database. | #8003 |
+| **Introduced the `alpha.k8s.enterprisedb.io/unrecoverable=true` annotation for replica pods**<br/>When applied, this annotation instructs the operator to permanently delete the instance by removing its Pod and PVCs, after which it will recreate the replica from the primary. | #8178 |
+| Enhanced PgBouncer integration by automatically setting `auth_dbname` to the `pgbouncer` database, simplifying auth setup. | #8671 |
+| **Allowed providing stage-specific `pg_restore` options during database import**<br/>The stage-specific options are `preRestore`, `postRestore`, and `dataRestore`. Contributed by @hanshal101. | #7690 |
+| Added the PostgreSQL `majorVersion` to the `Backup` object's status for easier identification and management. | #8464 |
+
+
+
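+For example, the `unrecoverable` annotation above ends up on the metadata of
+the replica Pod you want the operator to destroy and rebuild (the pod name
+here is hypothetical):
+
+```yaml
+# Fragment of a replica Pod's metadata after annotating it; the operator
+# then deletes the Pod and its PVCs and recreates the replica from the primary.
+metadata:
+  name: cluster-example-2
+  annotations:
+    alpha.k8s.enterprisedb.io/unrecoverable: "true"
+```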
+## Security Fixes
+
+| Description | Addresses |
+|-------------|-----------|
+| **Allowed providing fine-grained custom TLS configurations for PgBouncer**<br/>The `Pooler` CRD was extended with `clientTLSSecret`, `clientCASecret`, `serverTLSSecret`, and `serverCASecret` fields under `.spec.pgbouncer`. These fields enable users to supply their own certificates for both client-to-pooler and pooler-to-server connections, taking precedence over the operator-generated certificates. | #8692 |
+| **Added optional TLS support for the operator's metrics server (port 8080)**<br/>This feature is opt-in and enabled by setting the `METRICS_CERT_DIR` environment variable, which instructs the operator to look for `tls.crt` and `tls.key` files in the specified directory. When unset, the server continues to use HTTP for backward compatibility. | #8997 |
+| **Enabled `cnp_report_operator` to work with minimal permissions by making only the operator deployment required**<br/>All other resources (pods, secrets, config maps, events, webhooks, and OLM data) are now optional and collected on a best-effort basis. The command gracefully handles permission errors for those resources by logging clear warnings and continuing report generation with available data, rather than failing completely. This enables least-privileged access, where users may have limited, namespace-scoped permissions. | #8982 |
+
+
+
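+A minimal `Pooler` sketch using the new TLS fields above; the secret names
+are illustrative, the fields are assumed to take secret names as plain
+strings, and each referenced secret must hold the corresponding certificate
+material:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Pooler
+metadata:
+  name: pooler-example-rw
+spec:
+  cluster:
+    name: cluster-example
+  instances: 1
+  type: rw
+  pgbouncer:
+    poolMode: session
+    # Custom certificates take precedence over operator-generated ones
+    clientTLSSecret: my-client-tls
+    clientCASecret: my-client-ca
+    serverTLSSecret: my-server-tls
+    serverCASecret: my-server-ca
+```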
+## Bug Fixes
+
+| Description | Addresses |
+|-------------|-----------|
+| **Fixed the `CREATE PUBLICATION` SQL generation for multi-table publications to be backward-compatible with PostgreSQL 13+**<br/>The previously generated syntax was only valid for PostgreSQL 15+ and caused syntax errors on older versions. | #8888 |
+| Fixed backup failures in complex pod definitions by reliably selecting the `postgres` container by name instead of by index. | #8964 |
+| **`cnp` plugin: Fixed bugs in `cnp report` log collection, especially when fetching previous logs**<br/>The collector now correctly fetches previous and current logs in separate requests and gracefully handles missing previous logs (e.g., on containers with no restart history), ensuring current logs are always collected. | #8992 |
+
+
+
diff --git a/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/index.mdx b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/index.mdx
index 79ad971553..5fe5fa2e1d 100644
--- a/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/index.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/index.mdx
@@ -7,6 +7,7 @@ indexCards: none
redirects:
- ../release_notes
navigation:
+ - 1_28_0-rc1_rel_notes
- 1_27_1_rel_notes
- 1_27_0_rel_notes
- 1_26_2_rel_notes
@@ -123,7 +124,7 @@ navigation:
- 0_2_0_rel_notes
- 0_1_0_rel_notes
- 0_0_1_rel_notes
-originalFilePath: product_docs/docs/postgres_for_kubernetes/1/rel_notes/src/meta.yml
+originalFilePath: product_docs/docs/postgres_for_kubernetes/preview/rel_notes/src/meta.yml
editTarget: originalFilePath
---
@@ -133,6 +134,7 @@ The EDB Postgres for Kubernetes documentation describes the major version of EDB
| Version | Release date | Upstream merges |
|---|---|---|
+| [1.28.0-rc1](./1_28_0-rc1_rel_notes) | 07 Nov 2025 | Upstream [1.28.0-rc1](https://cloudnative-pg.io/documentation/preview/release_notes/v1.28/) |
| [1.27.1](./1_27_1_rel_notes) | 24 Oct 2025 | Upstream [1.27.1](https://cloudnative-pg.io/documentation/1.27/release_notes/v1.27/) |
| [1.27.0](./1_27_0_rel_notes) | 19 Aug 2025 | Upstream [1.27.0](https://cloudnative-pg.io/documentation/1.27/release_notes/v1.27/) |
| [1.26.2](./1_26_2_rel_notes) | 24 Oct 2025 | Upstream [1.26.2](https://cloudnative-pg.io/documentation/1.26/release_notes/v1.26/) |
diff --git a/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/src/1.28.0_rel_notes.yml b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/src/1.28.0_rel_notes.yml
new file mode 100644
index 0000000000..7753f62b59
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/preview/rel_notes/src/1.28.0_rel_notes.yml
@@ -0,0 +1,161 @@
+# yaml-language-server: $schema=https://raw.githubusercontent.com/EnterpriseDB/docs/refs/heads/develop/tools/automation/generators/relgen/relnote-schema.json
+product: EDB CloudNativePG Cluster
+version: 1.28.0-rc1
+date: 7 November 2025
+intro: |
+ This release includes the following:
+components:
+ "Operator": 1.28.0-rc1
+ "CNP plugin": 1.28.0-rc1
+ upstream-merge: Upstream [1.28.0-rc1](https://cloudnative-pg.io/documentation/preview/release_notes/v1.28/)
+relnotes:
+- relnote: |
+ Quorum-Based Failover Promoted to Stable
+ details: |
+ Promoted the quorum-based failover feature, introduced experimentally in 1.27.0, to a stable API.
+ This data-driven failover mechanism is now configured via the `spec.postgresql.synchronous.failoverQuorum` field, graduating from the previous `alpha.k8s.enterprisedb.io/failoverQuorum` annotation.
+ jira:
+    addresses: "#8589"
+ type: Feature
+ impact: High
+- relnote: |
+ Declarative Foreign Data Management
+ details: |
+ Introduced comprehensive declarative management for Foreign Data Wrappers (FDW) by extending the `Database` CRD.
+ This feature adds the `.spec.fdws` and `.spec.servers` fields, allowing you to manage FDW extensions and their corresponding foreign servers directly from the `Database` resource.
+ This work was implemented by Ying Zhu (@EdwinaZhu) as part of the LFX Mentorship Program 2025 Term 2.
+ jira:
+    addresses: "#7942, #8401"
+ type: Feature
+ impact: High
+- relnote: |
+ Enabled simultaneous image and configuration changes
+ details: |
+ allowing you to update the container image (including PostgreSQL version or extensions) and
+ PostgreSQL configuration settings in the same operation. The operator first
+ applies the image change, followed by the configuration changes in a subsequent
+ rollout, ensuring safe and consistent cluster updates.
+ jira:
+    addresses: "#8115"
+ type: Enhancement
+ impact: High
+- relnote: |
+ Introduced `securityContext` at the pod level and `containerSecurityContext`
+ for individual containers (including `postgres`, `init`, and sidecars).
+ details: |
+ This provides granular control over security settings, replacing the previous
+ cluster-wide `postgres` and `operator` user settings. Contributed by @x0ddf.
+ jira:
+    addresses: "#6614"
+ type: Enhancement
+ impact: High
+- relnote: |
+ Adopted standard Kubernetes recommended labels
+ details: |
+ (e.g., `app.kubernetes.io/name`) for all resources generated by EDB Postgres for Kubernetes
+ (Clusters, Backups, Poolers, etc.). Contributed by @JefeDavis.
+ jira:
+    addresses: "#8087"
+ type: Enhancement
+ impact: High
+- relnote: |
+ Introduced a new caching layer for user-defined monitoring queries
+ details: |
+ to reduce load on the PostgreSQL database.
+ jira:
+    addresses: "#8003"
+ type: Enhancement
+ impact: High
+- relnote: |
+ Introduced the `alpha.k8s.enterprisedb.io/unrecoverable=true` annotation for replica
+ pods.
+ details: |
+ When applied, this annotation instructs the operator to permanently
+ delete the instance by removing its Pod and PVCs, after which it will recreate
+ the replica from the primary.
+ jira:
+    addresses: "#8178"
+ type: Enhancement
+ impact: High
+- relnote: |
+ Enhanced PgBouncer integration by automatically setting `auth_dbname` to the
+ `pgbouncer` database, simplifying auth setup.
+ jira:
+    addresses: "#8671"
+ type: Enhancement
+ impact: High
+- relnote: |
+ Allowed providing stage-specific `pg_restore` options during database import.
+ details: |
+ (Stage-specific options are `preRestore`, `postRestore`, `dataRestore`.) Contributed by
+ @hanshal101.
+ jira:
+    addresses: "#7690"
+ type: Enhancement
+ impact: High
+- relnote: |
+ Added the PostgreSQL `majorVersion` to the `Backup` object's status for easier identification and management.
+ jira:
+    addresses: "#8464"
+ type: Enhancement
+ impact: High
+- relnote: |
+ Allowed providing fine-grained custom TLS configurations for PgBouncer
+ details: |
+ The `Pooler` CRD was extended with `clientTLSSecret`, `clientCASecret`,
+ `serverTLSSecret`, and `serverCASecret` fields under `.spec.pgbouncer`.
+ These fields enable users to supply their own certificates for both
+ client-to-pooler and pooler-to-server connections, taking precedence over the
+ operator-generated certificates.
+ jira:
+    addresses: "#8692"
+ type: Security
+ impact: High
+- relnote: |
+ Added optional TLS support for the operator's metrics server (port 8080)
+ details: |
+ This feature is opt-in and enabled by setting the `METRICS_CERT_DIR`
+ environment variable, which instructs the operator to look for `tls.crt` and
+ `tls.key` files in the specified directory. When unset, the server continues to
+ use HTTP for backward compatibility.
+ jira:
+    addresses: "#8997"
+ type: Security
+ impact: High
+- relnote: |
+ Enabled `cnp_report_operator` to work with minimal permissions by making
+ only the operator deployment required
+ details: |
+ All other resources (pods, secrets, config maps, events, webhooks, and OLM data) are now optional and collected on
+    a best-effort basis. The command gracefully handles permission errors for
+ those resources by logging clear warnings and continuing report generation with
+ available data, rather than failing completely. This enables least-privileged
+ access, where users may have limited, namespace-scoped permissions.
+ jira:
+    addresses: "#8982"
+ type: Security
+ impact: High
+- relnote: |
+ Fixed the `CREATE PUBLICATION` SQL generation for multi-table publications to be backward-compatible with PostgreSQL 13+.
+ details: |
+ The previously generated syntax
+ was only valid for PostgreSQL 15+ and caused syntax errors on older versions.
+ jira:
+    addresses: "#8888"
+ type: Bug Fix
+ impact: High
+- relnote: |
+ Fixed backup failures in complex pod definitions by reliably selecting the `postgres` container by name instead of by index.
+ jira:
+    addresses: "#8964"
+ type: Bug Fix
+ impact: High
+- relnote: |
+ `cnp` plugin: Fixed bugs in `cnp report` log collection, especially when fetching previous logs.
+ details: |
+ The collector now correctly fetches previous and current logs in separate requests and gracefully handles missing previous logs (e.g., on containers with no restart history), ensuring current logs are always collected.
+ component: CNP plugin
+ jira:
+    addresses: "#8992"
+ type: Bug Fix
+ impact: High
From 4002e65dee08f5678c7c01736c72953bf434b1a5 Mon Sep 17 00:00:00 2001
From: Josh Heyer
Date: Mon, 1 Dec 2025 03:53:04 +0000
Subject: [PATCH 4/9] preview banner
---
.../docs/postgres_for_kubernetes/preview/index.mdx | 8 +++++---
1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/product_docs/docs/postgres_for_kubernetes/preview/index.mdx b/product_docs/docs/postgres_for_kubernetes/preview/index.mdx
index a5adbeb5db..029cf03852 100644
--- a/product_docs/docs/postgres_for_kubernetes/preview/index.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/preview/index.mdx
@@ -4,9 +4,11 @@ description: The {{name.ln}} operator is a fork based on CloudNativePG™ which
originalFilePath: src/index.md
indexCards: none
directoryDefaults:
- version: "1.27.1"
-redirects:
- - /postgres_for_kubernetes/preview/:splat
+ version: "1.28.0-rc1"
+ displayBanner: |
+ This documentation covers the upcoming release of
+ {{name.ln}}; you may want to read the docs for
+ the current version
navigation:
- rel_notes
- '!commercial_support.mdx'
From 697edc5058eda5325456ee4ec9eb619d9fe5cdf7 Mon Sep 17 00:00:00 2001
From: Josh Heyer
Date: Mon, 1 Dec 2025 03:57:52 +0000
Subject: [PATCH 5/9] restore CNP 1.27.1
---
.../docs/postgres_for_kubernetes/1/addons.mdx | 574 ++
.../1/applications.mdx | 95 +
.../1/architecture.mdx | 433 ++
.../docs/postgres_for_kubernetes/1/backup.mdx | 488 ++
.../1/backup_barmanobjectstore.mdx | 351 +
.../1/backup_recovery.mdx | 8 +
.../1/backup_volumesnapshot.mdx | 405 +
.../1/before_you_start.mdx | 163 +
.../1/benchmarking.mdx | 209 +
.../postgres_for_kubernetes/1/bootstrap.mdx | 784 ++
.../1/certificates.mdx | 356 +
.../1/cluster_conf.mdx | 152 +
.../1/cncf-projects/cilium.mdx | 264 +
.../1/cncf-projects/external-secrets.mdx | 267 +
.../1/cncf-projects/index.mdx | 7 +
.../docs/postgres_for_kubernetes/1/cnp_i.mdx | 210 +
.../1/connection_pooling.mdx | 693 ++
.../1/container_images.mdx | 68 +
.../postgres_for_kubernetes/1/controller.mdx | 128 +
.../1/css/override.css | 3 +
.../1/database_import.mdx | 443 ++
.../1/declarative_database_management.mdx | 318 +
.../1/declarative_hibernation.mdx | 85 +
.../1/declarative_role_management.mdx | 258 +
.../1/default-monitoring.yaml | 488 ++
.../postgres_for_kubernetes/1/evaluation.mdx | 14 +
.../postgres_for_kubernetes/1/failover.mdx | 363 +
.../1/failure_modes.mdx | 76 +
.../docs/postgres_for_kubernetes/1/faq.mdx | 405 +
.../postgres_for_kubernetes/1/fencing.mdx | 111 +
.../1/image_catalog.mdx | 145 +
.../1/images/apps-in-k8s.png | 3 +
.../1/images/apps-outside-k8s.png | 3 +
.../1/images/architecture-in-k8s.png | 3 +
.../1/images/architecture-r.png | 3 +
.../1/images/architecture-read-only.png | 3 +
.../1/images/architecture-rw.png | 3 +
.../1/images/grafana-local.png | 3 +
.../1/images/ironbank/pulling-the-image.png | 3 +
.../1/images/k8s-architecture-2-az.png | 3 +
.../1/images/k8s-architecture-3-az.png | 3 +
.../1/images/k8s-architecture-multi.png | 3 +
.../1/images/k8s-pg-architecture.png | 3 +
.../1/images/microservice-import.png | 3 +
.../1/images/monolith-import.png | 3 +
.../1/images/multi-cluster.png | 3 +
.../1/images/network-storage-architecture.png | 3 +
.../1/images/openshift/alerts-openshift.png | 3 +
.../oc_installation_screenshot_1.png | 3 +
.../oc_installation_screenshot_2.png | 3 +
.../openshift-operatorgroup-error.png | 3 +
.../1/images/openshift/openshift-rbac.png | 3 +
.../openshift-webconsole-allnamespaces.png | 3 +
.../openshift-webconsole-multinamespace.png | 3 +
...nshift-webconsole-singlenamespace-list.png | 3 +
.../openshift-webconsole-singlenamespace.png | 3 +
.../1/images/openshift/operatorhub_1.png | 3 +
.../1/images/openshift/operatorhub_2.png | 3 +
.../1/images/openshift/prometheus-queries.png | 3 +
.../1/images/operator-capability-level.png | 3 +
.../1/images/pgadmin4.png | 3 +
.../1/images/pgbouncer-architecture-rw.png | 3 +
.../1/images/pgbouncer-pooler-image.png | 3 +
.../1/images/pgbouncer-pooler-template.png | 3 +
.../1/images/prometheus-local.png | 3 +
...cloud-architecture-storage-replication.png | 3 +
.../1/images/public-cloud-architecture.png | 3 +
.../1/images/shared-nothing-architecture.png | 3 +
.../1/images/write_bw.1-2Draw.png | 3 +
.../1/imagevolume_extensions.mdx | 354 +
.../docs/postgres_for_kubernetes/1/index.mdx | 222 +
.../1/installation_upgrade.mdx | 351 +
.../1/instance_manager.mdx | 426 ++
.../postgres_for_kubernetes/1/iron-bank.mdx | 127 +
.../1/kubectl-plugin.mdx | 1510 ++++
.../1/kubernetes_upgrade.mdx | 191 +
.../1/labels_annotations.mdx | 313 +
.../1/license_keys.mdx | 111 +
.../postgres_for_kubernetes/1/logging.mdx | 309 +
.../1/logical_replication.mdx | 464 ++
.../1/migrating_edb_registries.mdx | 198 +
.../postgres_for_kubernetes/1/monitoring.mdx | 945 +++
.../postgres_for_kubernetes/1/networking.mdx | 54 +
.../1/object_stores.mdx | 356 +
.../postgres_for_kubernetes/1/openshift.mdx | 1046 +++
.../1/operator_capability_levels.mdx | 732 ++
.../1/operator_conf.mdx | 203 +
.../1/pg4k.v1/index.mdx | 6586 +++++++++++++++++
.../1/pg4k.v1/v0.6.0.mdx | 344 +
.../1/pg4k.v1/v0.7.0.mdx | 345 +
.../1/pg4k.v1/v0.8.0.mdx | 347 +
.../1/pg4k.v1/v1.0.0.mdx | 347 +
.../1/pg4k.v1/v1.1.0.mdx | 347 +
.../1/pg4k.v1/v1.10.0.mdx | 788 ++
.../1/pg4k.v1/v1.11.0.mdx | 796 ++
.../1/pg4k.v1/v1.12.0.mdx | 810 ++
.../1/pg4k.v1/v1.13.0.mdx | 828 +++
.../1/pg4k.v1/v1.14.0.mdx | 847 +++
.../1/pg4k.v1/v1.15.0.mdx | 893 +++
.../1/pg4k.v1/v1.15.1.mdx | 897 +++
.../1/pg4k.v1/v1.15.2.mdx | 898 +++
.../1/pg4k.v1/v1.15.3.mdx | 898 +++
.../1/pg4k.v1/v1.15.4.mdx | 898 +++
.../1/pg4k.v1/v1.15.5.mdx | 898 +++
.../1/pg4k.v1/v1.16.0.mdx | 947 +++
.../1/pg4k.v1/v1.16.1.mdx | 968 +++
.../1/pg4k.v1/v1.16.2.mdx | 968 +++
.../1/pg4k.v1/v1.16.3.mdx | 968 +++
.../1/pg4k.v1/v1.16.4.mdx | 968 +++
.../1/pg4k.v1/v1.16.5.mdx | 968 +++
.../1/pg4k.v1/v1.17.0.mdx | 983 +++
.../1/pg4k.v1/v1.17.1.mdx | 983 +++
.../1/pg4k.v1/v1.17.2.mdx | 983 +++
.../1/pg4k.v1/v1.17.3.mdx | 983 +++
.../1/pg4k.v1/v1.17.4.mdx | 983 +++
.../1/pg4k.v1/v1.17.5.mdx | 984 +++
.../1/pg4k.v1/v1.18.0.mdx | 1009 +++
.../1/pg4k.v1/v1.18.1.mdx | 1022 +++
.../1/pg4k.v1/v1.18.10.mdx | 4069 ++++++++++
.../1/pg4k.v1/v1.18.11.mdx | 4147 +++++++++++
.../1/pg4k.v1/v1.18.12.mdx | 5023 +++++++++++++
.../1/pg4k.v1/v1.18.13.mdx | 5033 +++++++++++++
.../1/pg4k.v1/v1.18.2.mdx | 1025 +++
.../1/pg4k.v1/v1.18.3.mdx | 1042 +++
.../1/pg4k.v1/v1.18.4.mdx | 1046 +++
.../1/pg4k.v1/v1.18.5.mdx | 1077 +++
.../1/pg4k.v1/v1.18.6.mdx | 1078 +++
.../1/pg4k.v1/v1.18.7.mdx | 4003 ++++++++++
.../1/pg4k.v1/v1.18.8.mdx | 4003 ++++++++++
.../1/pg4k.v1/v1.18.9.mdx | 4054 ++++++++++
.../1/pg4k.v1/v1.19.0.mdx | 1041 +++
.../1/pg4k.v1/v1.19.1.mdx | 1047 +++
.../1/pg4k.v1/v1.19.2.mdx | 1051 +++
.../1/pg4k.v1/v1.19.3.mdx | 1082 +++
.../1/pg4k.v1/v1.19.4.mdx | 1084 +++
.../1/pg4k.v1/v1.19.5.mdx | 4084 ++++++++++
.../1/pg4k.v1/v1.19.6.mdx | 4084 ++++++++++
.../1/pg4k.v1/v1.2.0.mdx | 359 +
.../1/pg4k.v1/v1.2.1.mdx | 359 +
.../1/pg4k.v1/v1.20.0.mdx | 1116 +++
.../1/pg4k.v1/v1.20.1.mdx | 1148 +++
.../1/pg4k.v1/v1.20.2.mdx | 1150 +++
.../1/pg4k.v1/v1.20.3.mdx | 4367 +++++++++++
.../1/pg4k.v1/v1.20.4.mdx | 4367 +++++++++++
.../1/pg4k.v1/v1.20.5.mdx | 4418 +++++++++++
.../1/pg4k.v1/v1.20.6.mdx | 4433 +++++++++++
.../1/pg4k.v1/v1.21.0.mdx | 4393 +++++++++++
.../1/pg4k.v1/v1.21.1.mdx | 4509 +++++++++++
.../1/pg4k.v1/v1.21.2.mdx | 4576 ++++++++++++
.../1/pg4k.v1/v1.21.3.mdx | 4591 ++++++++++++
.../1/pg4k.v1/v1.21.4.mdx | 4653 ++++++++++++
.../1/pg4k.v1/v1.21.5.mdx | 4668 ++++++++++++
.../1/pg4k.v1/v1.21.6.mdx | 4678 ++++++++++++
.../1/pg4k.v1/v1.22.0.mdx | 4742 ++++++++++++
.../1/pg4k.v1/v1.22.1.mdx | 4757 ++++++++++++
.../1/pg4k.v1/v1.22.2.mdx | 4819 ++++++++++++
.../1/pg4k.v1/v1.22.3.mdx | 4834 ++++++++++++
.../1/pg4k.v1/v1.22.4.mdx | 4844 ++++++++++++
.../1/pg4k.v1/v1.22.5.mdx | 4844 ++++++++++++
.../1/pg4k.v1/v1.22.6.mdx | 4843 ++++++++++++
.../1/pg4k.v1/v1.22.7.mdx | 4278 +++++++++++
.../1/pg4k.v1/v1.22.8.mdx | 5905 +++++++++++++++
.../1/pg4k.v1/v1.22.9.mdx | 5905 +++++++++++++++
.../1/pg4k.v1/v1.23.0.mdx | 5225 +++++++++++++
.../1/pg4k.v1/v1.23.1.mdx | 5225 +++++++++++++
.../1/pg4k.v1/v1.23.2.mdx | 5235 +++++++++++++
.../1/pg4k.v1/v1.23.3.mdx | 5235 +++++++++++++
.../1/pg4k.v1/v1.23.4.mdx | 5235 +++++++++++++
.../1/pg4k.v1/v1.23.5.mdx | 4670 ++++++++++++
.../1/pg4k.v1/v1.23.6.mdx | 4806 ++++++++++++
.../1/pg4k.v1/v1.24.0.mdx | 5587 ++++++++++++++
.../1/pg4k.v1/v1.24.1.mdx | 4990 +++++++++++++
.../1/pg4k.v1/v1.24.2.mdx | 5142 +++++++++++++
.../1/pg4k.v1/v1.24.3.mdx | 5150 +++++++++++++
.../1/pg4k.v1/v1.25.0.mdx | 5944 +++++++++++++++
.../1/pg4k.v1/v1.25.1.mdx | 5952 +++++++++++++++
.../1/pg4k.v1/v1.26.0.mdx | 6220 ++++++++++++++++
.../1/pg4k.v1/v1.26.1.mdx | 6235 ++++++++++++++++
.../1/pg4k.v1/v1.27.0.mdx | 6462 ++++++++++++++++
.../1/pg4k.v1/v1.27.1.mdx | 6482 ++++++++++++++++
.../1/pg4k.v1/v1.3.0.mdx | 427 ++
.../1/pg4k.v1/v1.4.0.mdx | 427 ++
.../1/pg4k.v1/v1.5.0.mdx | 535 ++
.../1/pg4k.v1/v1.5.1.mdx | 537 ++
.../1/pg4k.v1/v1.6.0.mdx | 566 ++
.../1/pg4k.v1/v1.7.0.mdx | 570 ++
.../1/pg4k.v1/v1.7.1.mdx | 571 ++
.../1/pg4k.v1/v1.8.0.mdx | 597 ++
.../1/pg4k.v1/v1.9.0.mdx | 598 ++
.../1/pg4k.v1/v1.9.1.mdx | 598 ++
.../1/pg4k.v1/v1.9.2.mdx | 600 ++
.../postgres_for_kubernetes/1/postgis.mdx | 168 +
.../1/postgres_upgrades.mdx | 221 +
.../1/postgresql_conf.mdx | 705 ++
.../1/preview_version.mdx | 41 +
.../1/private_edb_registries.mdx | 150 +
.../postgres_for_kubernetes/1/quickstart.mdx | 390 +
.../postgres_for_kubernetes/1/recovery.mdx | 605 ++
.../1/rel_notes/0_0_1_rel_notes.mdx | 19 +
.../1/rel_notes/0_1_0_rel_notes.mdx | 20 +
.../1/rel_notes/0_2_0_rel_notes.mdx | 15 +
.../1/rel_notes/0_3_0_rel_notes.mdx | 20 +
.../1/rel_notes/0_4_0_rel_notes.mdx | 17 +
.../1/rel_notes/0_5_0_rel_notes.mdx | 19 +
.../1/rel_notes/0_6_0_rel_notes.mdx | 21 +
.../1/rel_notes/0_7_0_rel_notes.mdx | 16 +
.../1/rel_notes/0_8_0_rel_notes.mdx | 17 +
.../1/rel_notes/1_0_0_rel_notes.mdx | 38 +
.../1/rel_notes/1_10_0_rel_notes.mdx | 34 +
.../1/rel_notes/1_11_0_rel_notes.mdx | 44 +
.../1/rel_notes/1_12_0_rel_notes.mdx | 34 +
.../1/rel_notes/1_13_0_rel_notes.mdx | 45 +
.../1/rel_notes/1_14_0_rel_notes.mdx | 40 +
.../1/rel_notes/1_15_0_rel_notes.mdx | 14 +
.../1/rel_notes/1_15_1_rel_notes.mdx | 13 +
.../1/rel_notes/1_15_2_rel_notes.mdx | 25 +
.../1/rel_notes/1_15_3_rel_notes.mdx | 24 +
.../1/rel_notes/1_15_4_rel_notes.mdx | 13 +
.../1/rel_notes/1_15_5_rel_notes.mdx | 22 +
.../1/rel_notes/1_16_0_rel_notes.mdx | 63 +
.../1/rel_notes/1_16_1_rel_notes.mdx | 60 +
.../1/rel_notes/1_16_2_rel_notes.mdx | 13 +
.../1/rel_notes/1_16_3_rel_notes.mdx | 17 +
.../1/rel_notes/1_16_4_rel_notes.mdx | 12 +
.../1/rel_notes/1_16_5_rel_notes.mdx | 12 +
.../1/rel_notes/1_17_0_rel_notes.mdx | 12 +
.../1/rel_notes/1_17_1_rel_notes.mdx | 18 +
.../1/rel_notes/1_17_2_rel_notes.mdx | 12 +
.../1/rel_notes/1_17_3_rel_notes.mdx | 12 +
.../1/rel_notes/1_17_4_rel_notes.mdx | 12 +
.../1/rel_notes/1_17_5_rel_notes.mdx | 12 +
.../1/rel_notes/1_18_0_rel_notes.mdx | 12 +
.../1/rel_notes/1_18_10_rel_notes.mdx | 23 +
.../1/rel_notes/1_18_11_rel_notes.mdx | 27 +
.../1/rel_notes/1_18_12_rel_notes.mdx | 22 +
.../1/rel_notes/1_18_13_rel_notes.mdx | 30 +
.../1/rel_notes/1_18_1_rel_notes.mdx | 12 +
.../1/rel_notes/1_18_2_rel_notes.mdx | 14 +
.../1/rel_notes/1_18_3_rel_notes.mdx | 12 +
.../1/rel_notes/1_18_4_rel_notes.mdx | 12 +
.../1/rel_notes/1_18_5_rel_notes.mdx | 12 +
.../1/rel_notes/1_18_6_rel_notes.mdx | 21 +
.../1/rel_notes/1_18_7_rel_notes.mdx | 57 +
.../1/rel_notes/1_18_8_rel_notes.mdx | 21 +
.../1/rel_notes/1_18_9_rel_notes.mdx | 33 +
.../1/rel_notes/1_19_0_rel_notes.mdx | 14 +
.../1/rel_notes/1_19_1_rel_notes.mdx | 12 +
.../1/rel_notes/1_19_2_rel_notes.mdx | 12 +
.../1/rel_notes/1_19_3_rel_notes.mdx | 12 +
.../1/rel_notes/1_19_4_rel_notes.mdx | 12 +
.../1/rel_notes/1_19_5_rel_notes.mdx | 12 +
.../1/rel_notes/1_19_6_rel_notes.mdx | 12 +
.../1/rel_notes/1_1_0_rel_notes.mdx | 20 +
.../1/rel_notes/1_20_0_rel_notes.mdx | 12 +
.../1/rel_notes/1_20_1_rel_notes.mdx | 12 +
.../1/rel_notes/1_20_2_rel_notes.mdx | 12 +
.../1/rel_notes/1_20_3_rel_notes.mdx | 12 +
.../1/rel_notes/1_20_4_rel_notes.mdx | 12 +
.../1/rel_notes/1_20_5_rel_notes.mdx | 12 +
.../1/rel_notes/1_20_6_rel_notes.mdx | 12 +
.../1/rel_notes/1_21_0_rel_notes.mdx | 12 +
.../1/rel_notes/1_21_1_rel_notes.mdx | 12 +
.../1/rel_notes/1_21_2_rel_notes.mdx | 12 +
.../1/rel_notes/1_21_3_rel_notes.mdx | 12 +
.../1/rel_notes/1_21_4_rel_notes.mdx | 12 +
.../1/rel_notes/1_21_5_rel_notes.mdx | 12 +
.../1/rel_notes/1_21_6_rel_notes.mdx | 12 +
.../1/rel_notes/1_22_0_rel_notes.mdx | 12 +
.../1/rel_notes/1_22_10_rel_notes.mdx | 72 +
.../1/rel_notes/1_22_11_rel_notes.mdx | 50 +
.../1/rel_notes/1_22_1_rel_notes.mdx | 12 +
.../1/rel_notes/1_22_2_rel_notes.mdx | 12 +
.../1/rel_notes/1_22_3_rel_notes.mdx | 12 +
.../1/rel_notes/1_22_4_rel_notes.mdx | 12 +
.../1/rel_notes/1_22_5_rel_notes.mdx | 12 +
.../1/rel_notes/1_22_6_rel_notes.mdx | 24 +
.../1/rel_notes/1_22_7_rel_notes.mdx | 34 +
.../1/rel_notes/1_22_8_rel_notes.mdx | 57 +
.../1/rel_notes/1_22_9_rel_notes.mdx | 80 +
.../1/rel_notes/1_23_0_rel_notes.mdx | 12 +
.../1/rel_notes/1_23_1_rel_notes.mdx | 12 +
.../1/rel_notes/1_23_2_rel_notes.mdx | 12 +
.../1/rel_notes/1_23_3_rel_notes.mdx | 12 +
.../1/rel_notes/1_23_4_rel_notes.mdx | 12 +
.../1/rel_notes/1_23_5_rel_notes.mdx | 12 +
.../1/rel_notes/1_23_6_rel_notes.mdx | 12 +
.../1/rel_notes/1_24_0_rel_notes.mdx | 12 +
.../1/rel_notes/1_24_1_rel_notes.mdx | 12 +
.../1/rel_notes/1_24_2_rel_notes.mdx | 12 +
.../1/rel_notes/1_24_3_rel_notes.mdx | 12 +
.../1/rel_notes/1_24_4_rel_notes.mdx | 21 +
.../1/rel_notes/1_25_0_rel_notes.mdx | 12 +
.../1/rel_notes/1_25_1_rel_notes.mdx | 12 +
.../1/rel_notes/1_25_2_rel_notes.mdx | 21 +
.../1/rel_notes/1_25_3_rel_notes.mdx | 21 +
.../1/rel_notes/1_25_4_rel_notes.mdx | 25 +
.../1/rel_notes/1_26_0_rel_notes.mdx | 21 +
.../1/rel_notes/1_26_1_rel_notes.mdx | 21 +
.../1/rel_notes/1_26_2_rel_notes.mdx | 21 +
.../1/rel_notes/1_27_0_rel_notes.mdx | 66 +
.../1/rel_notes/1_27_1_rel_notes.mdx | 89 +
.../1/rel_notes/1_2_0_rel_notes.mdx | 16 +
.../1/rel_notes/1_2_1_rel_notes.mdx | 15 +
.../1/rel_notes/1_3_0_rel_notes.mdx | 19 +
.../1/rel_notes/1_4_0_rel_notes.mdx | 25 +
.../1/rel_notes/1_5_0_rel_notes.mdx | 34 +
.../1/rel_notes/1_5_1_rel_notes.mdx | 16 +
.../1/rel_notes/1_6_0_rel_notes.mdx | 26 +
.../1/rel_notes/1_7_0_rel_notes.mdx | 29 +
.../1/rel_notes/1_7_1_rel_notes.mdx | 22 +
.../1/rel_notes/1_8_0_rel_notes.mdx | 30 +
.../1/rel_notes/1_9_0_rel_notes.mdx | 23 +
.../1/rel_notes/1_9_1_rel_notes.mdx | 18 +
.../1/rel_notes/1_9_2_rel_notes.mdx | 18 +
.../1/rel_notes/index.mdx | 254 +
.../1/rel_notes/src/1.22.10_rel_notes.yml | 109 +
.../1/rel_notes/src/1.22.11_rel_notes.yml | 57 +
.../1/rel_notes/src/1.24.4_rel_notes.yml | 19 +
.../1/rel_notes/src/1.25.2_rel_notes.yml | 19 +
.../1/rel_notes/src/1.25.3_rel_notes.yml | 19 +
.../1/rel_notes/src/1.25.4_rel_notes.yml | 23 +
.../1/rel_notes/src/1.26.0_rel_notes.yml | 18 +
.../1/rel_notes/src/1.26.1_rel_notes.yml | 18 +
.../1/rel_notes/src/1.26.2_rel_notes.yml | 18 +
.../1/rel_notes/src/1.27.0_rel_notes.yml | 105 +
.../1/rel_notes/src/1.27.1_rel_notes.yml | 186 +
.../1/rel_notes/src/meta.yml | 340 +
.../1/replica_cluster.mdx | 630 ++
.../postgres_for_kubernetes/1/replication.mdx | 851 +++
.../1/resource_management.mdx | 114 +
.../1/rolling_update.mdx | 101 +
.../postgres_for_kubernetes/1/samples.mdx | 228 +
.../1/samples/backup-example.yaml | 7 +
.../samples/backup-with-volume-snapshot.yaml | 8 +
.../1/samples/cluster-additional-volumes.yaml | 25 +
.../1/samples/cluster-advanced-initdb.yaml | 25 +
.../1/samples/cluster-backup-aws-inherit.yaml | 15 +
.../samples/cluster-backup-azure-inherit.yaml | 15 +
.../samples/cluster-backup-retention-30d.yaml | 32 +
.../1/samples/cluster-clone-basicauth.yaml | 29 +
.../1/samples/cluster-clone-tls.yaml | 30 +
.../cluster-example-bis-restore-cr.yaml | 26 +
.../samples/cluster-example-bis-restore.yaml | 43 +
.../1/samples/cluster-example-bis.yaml | 29 +
.../1/samples/cluster-example-catalog.yaml | 28 +
.../samples/cluster-example-cert-manager.yaml | 120 +
.../1/samples/cluster-example-custom.yaml | 28 +
.../1/samples/cluster-example-epas.yaml | 10 +
...ample-external-backup-adapter-cluster.yaml | 35 +
...uster-example-external-backup-adapter.yaml | 10 +
.../1/samples/cluster-example-full.yaml | 110 +
.../1/samples/cluster-example-initdb-icu.yaml | 19 +
.../cluster-example-initdb-sql-refs.yaml | 47 +
.../1/samples/cluster-example-initdb.yaml | 23 +
.../cluster-example-logical-destination.yaml | 43 +
.../cluster-example-logical-source.yaml | 54 +
.../cluster-example-managed-services.yaml | 24 +
.../1/samples/cluster-example-monitoring.yaml | 234 +
.../1/samples/cluster-example-pg-hba.yaml | 12 +
.../1/samples/cluster-example-pge.yaml | 10 +
.../cluster-example-projected-volume.yaml | 43 +
...er-example-replica-from-backup-simple.yaml | 30 +
...-example-replica-from-volume-snapshot.yaml | 54 +
.../cluster-example-replica-streaming.yaml | 38 +
.../1/samples/cluster-example-secret.yaml | 38 +
.../1/samples/cluster-example-sync-az.yaml | 14 +
...cluster-example-syncreplicas-explicit.yaml | 14 +
.../cluster-example-syncreplicas-legacy.yaml | 12 +
.../cluster-example-syncreplicas-quorum.yaml | 16 +
.../1/samples/cluster-example-tde.yaml | 26 +
.../cluster-example-trigger-backup.yaml | 7 +
.../samples/cluster-example-wal-storage.yaml | 10 +
.../cluster-example-with-backup-scaleway.yaml | 23 +
.../samples/cluster-example-with-backup.yaml | 33 +
.../samples/cluster-example-with-probes.yaml | 16 +
.../1/samples/cluster-example-with-roles.yaml | 40 +
...uster-example-with-tablespaces-backup.yaml | 37 +
.../cluster-example-with-tablespaces.yaml | 25 +
.../cluster-example-with-volume-snapshot.yaml | 32 +
.../1/samples/cluster-example.yaml | 10 +
.../1/samples/cluster-expose-service.yaml | 36 +
.../cluster-import-schema-only-basicauth.yaml | 27 +
.../cluster-import-snapshot-basicauth.yaml | 30 +
.../samples/cluster-import-snapshot-tls.yaml | 44 +
.../1/samples/cluster-pvc-template.yaml | 25 +
.../1/samples/cluster-replica-async.yaml | 30 +
.../1/samples/cluster-replica-basicauth.yaml | 33 +
...r-replica-from-backup-other-namespace.yaml | 37 +
.../1/samples/cluster-replica-restore.yaml | 44 +
.../1/samples/cluster-replica-tls.yaml | 34 +
.../cluster-restore-external-cluster.yaml | 28 +
.../1/samples/cluster-restore-pitr.yaml | 17 +
.../cluster-restore-snapshot-full.yaml | 18 +
.../cluster-restore-snapshot-pitr.yaml | 40 +
.../1/samples/cluster-restore-snapshot.yaml | 19 +
.../cluster-restore-with-tablespaces.yaml | 28 +
.../1/samples/cluster-restore.yaml | 14 +
.../cluster-storage-class-with-backup.yaml | 32 +
.../1/samples/cluster-storage-class.yaml | 18 +
.../1/samples/database-example-fail.yaml | 9 +
.../1/samples/database-example-icu.yaml | 16 +
.../1/samples/database-example.yaml | 9 +
.../1/samples/dc/cluster-dc-a.yaml | 71 +
.../1/samples/dc/cluster-dc-b.yaml | 75 +
.../1/samples/dc/cluster-test.yaml | 25 +
.../1/samples/k9s/plugins.yml | 134 +
.../1/samples/monitoring/alerts.yaml | 66 +
.../samples/monitoring/kube-stack-config.yaml | 78 +
.../1/samples/monitoring/podmonitor.yaml | 10 +
.../1/samples/monitoring/prometheusrule.yaml | 71 +
.../1/samples/networkpolicy-example.yaml | 23 +
.../1/samples/pooler-basic-auth.yaml | 15 +
.../1/samples/pooler-deployment-strategy.yaml | 19 +
.../1/samples/pooler-external.yaml | 21 +
.../1/samples/pooler-tls.yaml | 14 +
.../1/samples/postgis-example.yaml | 27 +
.../samples/publication-example-objects.yaml | 16 +
.../1/samples/publication-example.yaml | 11 +
.../1/samples/scheduled-backup-example.yaml | 9 +
.../1/samples/subscription-example.yaml | 11 +
.../1/samples/subscription.yaml | 10 +
.../postgres_for_kubernetes/1/scheduling.mdx | 208 +
.../postgres_for_kubernetes/1/security.mdx | 479 ++
.../1/service_management.mdx | 137 +
.../1/ssl_connections.mdx | 195 +
.../postgres_for_kubernetes/1/storage.mdx | 410 +
.../postgres_for_kubernetes/1/tablespaces.mdx | 366 +
.../docs/postgres_for_kubernetes/1/tde.mdx | 295 +
.../1/troubleshooting.mdx | 899 +++
.../postgres_for_kubernetes/1/use_cases.mdx | 52 +
.../1/wal_archiving.mdx | 55 +
431 files changed, 314201 insertions(+)
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/addons.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/applications.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/architecture.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/backup.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/backup_barmanobjectstore.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/backup_recovery.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/backup_volumesnapshot.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/before_you_start.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/benchmarking.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/bootstrap.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/certificates.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/cluster_conf.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/cncf-projects/cilium.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/cncf-projects/external-secrets.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/cncf-projects/index.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/cnp_i.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/connection_pooling.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/container_images.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/controller.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/css/override.css
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/database_import.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/declarative_database_management.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/declarative_hibernation.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/declarative_role_management.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/default-monitoring.yaml
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/evaluation.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/failover.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/failure_modes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/faq.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/fencing.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/image_catalog.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/images/apps-in-k8s.png
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/images/apps-outside-k8s.png
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/images/architecture-in-k8s.png
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/images/architecture-r.png
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/images/architecture-read-only.png
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/images/architecture-rw.png
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/images/grafana-local.png
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/images/ironbank/pulling-the-image.png
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/images/k8s-architecture-2-az.png
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/images/k8s-architecture-3-az.png
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/images/k8s-architecture-multi.png
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/images/k8s-pg-architecture.png
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/images/microservice-import.png
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/images/monolith-import.png
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/images/multi-cluster.png
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/images/network-storage-architecture.png
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/images/openshift/alerts-openshift.png
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/images/openshift/oc_installation_screenshot_1.png
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/images/openshift/oc_installation_screenshot_2.png
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/images/openshift/openshift-operatorgroup-error.png
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/images/openshift/openshift-rbac.png
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/images/openshift/openshift-webconsole-allnamespaces.png
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/images/openshift/openshift-webconsole-multinamespace.png
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/images/openshift/openshift-webconsole-singlenamespace-list.png
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/images/openshift/openshift-webconsole-singlenamespace.png
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/images/openshift/operatorhub_1.png
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/images/openshift/operatorhub_2.png
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/images/openshift/prometheus-queries.png
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/images/operator-capability-level.png
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/images/pgadmin4.png
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/images/pgbouncer-architecture-rw.png
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/images/pgbouncer-pooler-image.png
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/images/pgbouncer-pooler-template.png
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/images/prometheus-local.png
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/images/public-cloud-architecture-storage-replication.png
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/images/public-cloud-architecture.png
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/images/shared-nothing-architecture.png
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/images/write_bw.1-2Draw.png
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/imagevolume_extensions.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/index.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/installation_upgrade.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/instance_manager.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/iron-bank.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/kubectl-plugin.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/kubernetes_upgrade.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/labels_annotations.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/license_keys.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/logging.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/logical_replication.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/migrating_edb_registries.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/monitoring.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/networking.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/object_stores.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/openshift.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/operator_capability_levels.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/operator_conf.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/index.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v0.6.0.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v0.7.0.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v0.8.0.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.0.0.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.1.0.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.10.0.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.11.0.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.12.0.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.13.0.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.14.0.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.15.0.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.15.1.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.15.2.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.15.3.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.15.4.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.15.5.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.16.0.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.16.1.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.16.2.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.16.3.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.16.4.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.16.5.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.17.0.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.17.1.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.17.2.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.17.3.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.17.4.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.17.5.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.18.0.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.18.1.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.18.10.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.18.11.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.18.12.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.18.13.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.18.2.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.18.3.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.18.4.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.18.5.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.18.6.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.18.7.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.18.8.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.18.9.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.19.0.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.19.1.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.19.2.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.19.3.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.19.4.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.19.5.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.19.6.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.2.0.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.2.1.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.20.0.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.20.1.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.20.2.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.20.3.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.20.4.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.20.5.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.20.6.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.21.0.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.21.1.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.21.2.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.21.3.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.21.4.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.21.5.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.21.6.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.22.0.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.22.1.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.22.2.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.22.3.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.22.4.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.22.5.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.22.6.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.22.7.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.22.8.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.22.9.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.23.0.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.23.1.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.23.2.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.23.3.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.23.4.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.23.5.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.23.6.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.24.0.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.24.1.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.24.2.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.24.3.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.25.0.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.25.1.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.26.0.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.26.1.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.27.0.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.27.1.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.3.0.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.4.0.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.5.0.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.5.1.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.6.0.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.7.0.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.7.1.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.8.0.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.9.0.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.9.1.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.9.2.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/postgis.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/postgres_upgrades.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/postgresql_conf.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/preview_version.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/private_edb_registries.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/quickstart.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/recovery.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/0_0_1_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/0_1_0_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/0_2_0_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/0_3_0_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/0_4_0_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/0_5_0_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/0_6_0_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/0_7_0_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/0_8_0_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_0_0_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_10_0_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_11_0_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_12_0_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_13_0_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_14_0_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_15_0_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_15_1_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_15_2_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_15_3_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_15_4_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_15_5_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_16_0_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_16_1_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_16_2_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_16_3_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_16_4_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_16_5_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_17_0_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_17_1_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_17_2_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_17_3_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_17_4_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_17_5_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_18_0_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_18_10_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_18_11_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_18_12_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_18_13_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_18_1_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_18_2_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_18_3_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_18_4_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_18_5_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_18_6_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_18_7_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_18_8_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_18_9_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_19_0_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_19_1_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_19_2_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_19_3_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_19_4_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_19_5_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_19_6_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_1_0_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_20_0_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_20_1_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_20_2_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_20_3_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_20_4_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_20_5_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_20_6_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_21_0_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_21_1_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_21_2_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_21_3_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_21_4_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_21_5_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_21_6_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_22_0_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_22_10_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_22_11_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_22_1_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_22_2_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_22_3_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_22_4_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_22_5_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_22_6_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_22_7_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_22_8_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_22_9_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_23_0_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_23_1_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_23_2_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_23_3_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_23_4_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_23_5_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_23_6_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_24_0_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_24_1_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_24_2_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_24_3_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_24_4_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_25_0_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_25_1_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_25_2_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_25_3_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_25_4_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_26_0_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_26_1_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_26_2_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_27_0_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_27_1_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_2_0_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_2_1_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_3_0_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_4_0_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_5_0_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_5_1_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_6_0_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_7_0_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_7_1_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_8_0_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_9_0_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_9_1_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_9_2_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/index.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/src/1.22.10_rel_notes.yml
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/src/1.22.11_rel_notes.yml
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/src/1.24.4_rel_notes.yml
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/src/1.25.2_rel_notes.yml
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/src/1.25.3_rel_notes.yml
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/src/1.25.4_rel_notes.yml
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/src/1.26.0_rel_notes.yml
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/src/1.26.1_rel_notes.yml
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/src/1.26.2_rel_notes.yml
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/src/1.27.0_rel_notes.yml
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/src/1.27.1_rel_notes.yml
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/src/meta.yml
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/replica_cluster.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/replication.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/resource_management.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rolling_update.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/samples.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/samples/backup-example.yaml
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/samples/backup-with-volume-snapshot.yaml
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/samples/cluster-additional-volumes.yaml
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/samples/cluster-advanced-initdb.yaml
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/samples/cluster-backup-aws-inherit.yaml
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/samples/cluster-backup-azure-inherit.yaml
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/samples/cluster-backup-retention-30d.yaml
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/samples/cluster-clone-basicauth.yaml
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/samples/cluster-clone-tls.yaml
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-bis-restore-cr.yaml
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-bis-restore.yaml
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-bis.yaml
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-catalog.yaml
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-cert-manager.yaml
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-custom.yaml
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-epas.yaml
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-external-backup-adapter-cluster.yaml
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-external-backup-adapter.yaml
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-full.yaml
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-initdb-icu.yaml
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-initdb-sql-refs.yaml
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-initdb.yaml
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-logical-destination.yaml
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-logical-source.yaml
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-managed-services.yaml
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-monitoring.yaml
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-pg-hba.yaml
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-pge.yaml
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-projected-volume.yaml
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-replica-from-backup-simple.yaml
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-replica-from-volume-snapshot.yaml
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-replica-streaming.yaml
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-secret.yaml
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-sync-az.yaml
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-syncreplicas-explicit.yaml
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-syncreplicas-legacy.yaml
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-syncreplicas-quorum.yaml
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-tde.yaml
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-trigger-backup.yaml
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-wal-storage.yaml
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-with-backup-scaleway.yaml
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-with-backup.yaml
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-with-probes.yaml
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-with-roles.yaml
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-with-tablespaces-backup.yaml
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-with-tablespaces.yaml
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-with-volume-snapshot.yaml
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example.yaml
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/samples/cluster-expose-service.yaml
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/samples/cluster-import-schema-only-basicauth.yaml
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/samples/cluster-import-snapshot-basicauth.yaml
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/samples/cluster-import-snapshot-tls.yaml
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/samples/cluster-pvc-template.yaml
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/samples/cluster-replica-async.yaml
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/samples/cluster-replica-basicauth.yaml
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/samples/cluster-replica-from-backup-other-namespace.yaml
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/samples/cluster-replica-restore.yaml
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/samples/cluster-replica-tls.yaml
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/samples/cluster-restore-external-cluster.yaml
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/samples/cluster-restore-pitr.yaml
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/samples/cluster-restore-snapshot-full.yaml
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/samples/cluster-restore-snapshot-pitr.yaml
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/samples/cluster-restore-snapshot.yaml
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/samples/cluster-restore-with-tablespaces.yaml
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/samples/cluster-restore.yaml
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/samples/cluster-storage-class-with-backup.yaml
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/samples/cluster-storage-class.yaml
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/samples/database-example-fail.yaml
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/samples/database-example-icu.yaml
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/samples/database-example.yaml
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/samples/dc/cluster-dc-a.yaml
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/samples/dc/cluster-dc-b.yaml
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/samples/dc/cluster-test.yaml
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/samples/k9s/plugins.yml
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/samples/monitoring/alerts.yaml
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/samples/monitoring/kube-stack-config.yaml
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/samples/monitoring/podmonitor.yaml
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/samples/monitoring/prometheusrule.yaml
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/samples/networkpolicy-example.yaml
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/samples/pooler-basic-auth.yaml
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/samples/pooler-deployment-strategy.yaml
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/samples/pooler-external.yaml
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/samples/pooler-tls.yaml
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/samples/postgis-example.yaml
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/samples/publication-example-objects.yaml
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/samples/publication-example.yaml
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/samples/scheduled-backup-example.yaml
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/samples/subscription-example.yaml
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/samples/subscription.yaml
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/scheduling.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/security.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/service_management.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/ssl_connections.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/storage.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/tablespaces.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/tde.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/troubleshooting.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/use_cases.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/wal_archiving.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/addons.mdx b/product_docs/docs/postgres_for_kubernetes/1/addons.mdx
new file mode 100644
index 0000000000..73dade696b
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/addons.mdx
@@ -0,0 +1,574 @@
+---
+title: 'Add-ons'
+originalFilePath: 'src/addons.md'
+---
+
+{{name.ln}} supports add-ons that can be enabled on a
+per-cluster basis. These add-ons are:
+
+1. [External backup adapter](#external-backup-adapter)
+2. [Kasten](#kasten)
+3. [Velero](#velero)
+
+!!! Info
+ If you are planning to use Velero in OpenShift, please refer to the
+    [OADP section](openshift.md#oadp-for-velero) in the OpenShift documentation.
+
+All add-ons are automatically available to the operator, but each one must be
+enabled at the cluster level through the `k8s.enterprisedb.io/addons`
+annotation before it can be used.
+
+## External Backup Adapter
+
+The external backup adapter add-ons provide a generic way to integrate
+{{name.ln}} with a third-party backup tool, using configurable
+labels and/or annotations to identify:
+
+- which PVC group to back up
+- which PVCs to exclude, in case the cluster has one or more active replicas
+- the Pod running the PostgreSQL instance that has been selected for the backup
+ (a standby or, if not available, the primary)
+
+You can choose between two add-ons that differ only in how you configure the
+adapter for your backup system:
+
+- `external-backup-adapter`: in case you want to customize the behavior at the
+ operator's configuration level via either a config map or a secret - and share
+ it with all the Postgres clusters that are managed by the operator's deployment
+ (see [the `external-backup-adapter` section below](#the-external-backup-adapter-add-on))
+- `external-backup-adapter-cluster`: in case you want to customize the behavior
+ of the adapter at the Postgres cluster level, through a specific annotation
+ (see [the `external-backup-adapter-cluster` section below](#the-external-backup-adapter-cluster-add-on))
+
+These add-ons allow you to define the names of the annotations that will
+contain the commands to be run, before or after taking a backup, in the pod
+selected by the operator.
+
+As a result, any third-party backup tool for Kubernetes can rely on these
+annotations to coordinate with one or more PostgreSQL clusters.
+
+Recovery simply relies on the operator to reconcile the cluster from an
+existing PVC group.
+
+!!! Important
+ The External Backup Adapter is not a tool to perform backups. It simply
+ provides a generic interface that any third-party backup tool in the Kubernetes
+ space can use. Such tools are responsible for safely storing the PVC
+    and/or the content, and for making it available at recovery time together with
+ all the necessary resource definitions of your Kubernetes cluster.
+
+### Customizing the adapter
+
+As mentioned above, the adapter can be configured in two ways; the way you
+choose determines the actual add-on you need to use in your `Cluster` resource.
+
+If you are planning to define the same behavior for all the Postgres `Cluster`
+resources managed by the operator, we recommend that you use the
+`external-backup-adapter` add-on, and configure the annotations/labels in the
+operator's configuration.
+
+If you are planning to have different behaviors for a subset of the Postgres
+`Cluster` resources that you have, we recommend that you use the
+`external-backup-adapter-cluster` add-on.
+
+Both add-ons share the same capabilities in terms of customization, which needs
+to be defined as a YAML object having the following keys:
+
+- `electedResourcesDecorators`
+- `excludedResourcesDecorators`
+- `excludedResourcesSelector`
+- `backupInstanceDecorators`
+- `preBackupHookConfiguration`
+- `postBackupHookConfiguration`
+
+Each section is explained below. Further down you'll find the instructions on
+how to customize each of the two add-ons, with some examples.
+
+#### The `electedResourcesDecorators` section
+
+This section allows you to configure an array of labels and/or annotations that
+will be put on every elected PVC group.
+
+Each element of the array must have the following fields:
+
+`key`
+: the name of the key for the label or annotation
+
+`metadataType`
+: the type of metadata, either `"label"` or `"annotation"`
+
+`value`
+: the value that will be assigned to the label or annotation
+
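+For instance, a decorator array that adds one label and one annotation to
+every elected PVC group might look like the following sketch (the keys and
+values are illustrative):
+
+```yaml
+electedResourcesDecorators:
+- key: "app.example.com/elected"
+  metadataType: "label"
+  value: "true"
+- key: "app.example.com/elected-by"
+  metadataType: "annotation"
+  value: "backup-tool"
+```
+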
+#### The `excludedResourcesDecorators` section
+
+This section allows you to configure an array of labels and/or annotations that
+will be placed on every excluded pod and PVC.
+
+Each element of the array must have the same fields as the
+`electedResourcesDecorators` section above.
+
+#### The `excludedResourcesSelector` section
+
+This section selects the Pods and PVCs to which the
+`excludedResourcesDecorators` are applied. It accepts a [label selector rule](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#label-selectors)
+as value. When empty, all Pods and PVCs that are not elected will be excluded.
+
+#### The `backupInstanceDecorators` section
+
+This section allows you to configure an array of labels and/or annotations that
+will be placed on the instance that has been selected for the backup by the operator
+and which contains the hooks to be run.
+
+Each element of the array must have the same fields as the
+`electedResourcesDecorators` section above.
+
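+A third-party tool can then use the resulting metadata to find the instance
+that carries the hooks. For example, with a configuration that adds an
+`app.example.com/hasHooks` label, as in the examples further below, the
+selected instance pod would expose metadata along these lines (the pod name
+is illustrative):
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: cluster-example-2
+  labels:
+    app.example.com/hasHooks: "true"
+```
+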
+#### The `preBackupHookConfiguration` section
+
+This section allows you to control the names of the annotations in which the
+operator will place the name of the container, the command to run before taking
+the backup, and the command to run in case of error/abort on the third-party
+tool side. Such metadata will be applied on the instance that's been selected by
+the operator for the backup (see `backupInstanceDecorators` above).
+
+The following fields must be provided:
+
+`container`
+: Specifies where to place the information about the container
+  that will run the pre-backup command. The container name is a fixed value and
+  cannot be configured. The value will be saved in the annotations. To decorate
+  the pod with hooks, refer to `backupInstanceDecorators`.
+
+`command`
+: Specifies where to place the information about the command that
+  will be executed before the backup is taken. The command that will be
+  executed is a fixed value and cannot be configured. The value will be saved
+  in the annotations. To decorate the pod with hooks, refer to
+  `backupInstanceDecorators`.
+
+`onError`
+: Specifies where to place the information about the command that
+  will be executed in case of an error. The command that will be executed is a
+  fixed value and cannot be configured. The value will be saved in the
+  annotations. To decorate the pod with hooks, refer to `backupInstanceDecorators`.
+
+#### The `postBackupHookConfiguration` section
+
+This section allows you to control the names of the annotations in which the
+operator will place the name of the container and the command to run after taking
+the backup. Such metadata will be applied on the instance that's been selected by
+the operator for the backup (see `backupInstanceDecorators` above).
+
+The following fields must be provided:
+
+`container`
+: Specifies where to place the information about the container
+  that will run the post-backup command. The container name is a fixed value
+  and cannot be configured. The value will be saved in the annotations. To
+  decorate the pod with hooks, refer to `backupInstanceDecorators`.
+
+`command`
+: Specifies where to place the information about the command that
+  will be executed after the backup is taken. The command that will be executed
+  is a fixed value and cannot be configured. The value will be saved in the
+  annotations. To decorate the pod with hooks, refer to `backupInstanceDecorators`.
+
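+Combining the pre- and post-backup hook configurations shown in the examples
+below, the operator would place annotations on the elected instance pod along
+these lines (the container name and commands are illustrative placeholders,
+since their actual values are fixed by the operator and not configurable):
+
+```yaml
+metadata:
+  annotations:
+    app.example.com/pre-backup-container: "postgres"
+    app.example.com/pre-backup-command: "<fixed pre-backup command>"
+    app.example.com/pre-backup-on-error: "<fixed on-error command>"
+    app.example.com/post-backup-container: "postgres"
+    app.example.com/post-backup-command: "<fixed post-backup command>"
+```
+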
+### The `external-backup-adapter` add-on
+
+The `external-backup-adapter` add-on can be entirely configured at the
+operator level via the `EXTERNAL_BACKUP_ADDON_CONFIGURATION` field in the
+operator's `ConfigMap`/`Secret`.
+
+For more information, please refer to the provided sample file at the end of
+this section, or the example below:
+
+```yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: postgresql-operator-controller-manager-config
+ namespace: postgresql-operator-system
+data:
+ # ...
+ EXTERNAL_BACKUP_ADDON_CONFIGURATION: |-
+ electedResourcesDecorators:
+ - key: "app.example.com/elected"
+ metadataType: "label"
+ value: "true"
+ excludedResourcesSelector: app=xyz,env=prod
+ excludedResourcesDecorators:
+ - key: "app.example.com/excluded"
+ metadataType: "label"
+ value: "true"
+ - key: "app.example.com/excluded-reason"
+ metadataType: "annotation"
+ value: "Not necessary for backup"
+ backupInstanceDecorators:
+ - key: "app.example.com/hasHooks"
+ metadataType: "label"
+ value: "true"
+ preBackupHookConfiguration:
+ container:
+ key: "app.example.com/pre-backup-container"
+ command:
+ key: "app.example.com/pre-backup-command"
+ onError:
+ key: "app.example.com/pre-backup-on-error"
+ postBackupHookConfiguration:
+ container:
+ key: "app.example.com/post-backup-container"
+ command:
+ key: "app.example.com/post-backup-command"
+```
+
+The add-on can be activated by adding the following annotation to the `Cluster`
+resource:
+
+```yaml
+k8s.enterprisedb.io/addons: '["external-backup-adapter"]'
+```
+
+### The `external-backup-adapter-cluster` add-on
+
+The `external-backup-adapter-cluster` add-on must be configured in each
+`Cluster` resource in which you intend to use it, through the
+`k8s.enterprisedb.io/externalBackupAdapterClusterConfig` annotation, which
+accepts the YAML configuration object as its content, as outlined in the
+following example:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+ name: cluster-example
+ annotations:
+ "k8s.enterprisedb.io/addons": '["external-backup-adapter-cluster"]'
+ "k8s.enterprisedb.io/externalBackupAdapterClusterConfig": |-
+ electedResourcesDecorators:
+ - key: "app.example.com/elected"
+ metadataType: "label"
+ value: "true"
+ excludedResourcesSelector: app=xyz,env=prod
+ excludedResourcesDecorators:
+ - key: "app.example.com/excluded"
+ metadataType: "label"
+ value: "true"
+ - key: "app.example.com/excluded-reason"
+ metadataType: "annotation"
+ value: "Not necessary for backup"
+ backupInstanceDecorators:
+ - key: "app.example.com/hasHooks"
+ metadataType: "label"
+ value: "true"
+ preBackupHookConfiguration:
+ container:
+ key: "app.example.com/pre-backup-container"
+ command:
+ key: "app.example.com/pre-backup-command"
+ onError:
+ key: "app.example.com/pre-backup-on-error"
+ postBackupHookConfiguration:
+ container:
+ key: "app.example.com/post-backup-container"
+ command:
+ key: "app.example.com/post-backup-command"
+spec:
+ instances: 3
+ storage:
+ size: 1Gi
+```
+
+### About the fencing annotation
+
+If the configured external backup adapter backs up annotations, the fencing
+annotation set by the pre-backup hook will persist into the restored cluster.
+After restoring the cluster, you will need to manually remove the fencing
+annotation from the `Cluster` object.
+
+This can be done with the `cnp` plugin for kubectl:
+
+```shell
+kubectl cnp fencing off <cluster-name> "*"
+```
+
+Or, if you don't have the `cnp` plugin, you can remove the fencing annotation
+manually with the following command:
+
+```shell
+kubectl annotate cluster <cluster-name> k8s.enterprisedb.io/fencedInstances-
+```
+
+Please refer to the [fencing documentation](fencing.md) for more information.
+
+### Limitations
+
+Regarding backups, the EDB Postgres for Kubernetes integration with
+`external-backup-adapter` and `external-backup-adapter-cluster` currently
+supports **cold backups** only, also referred to as **offline backups**. This
+means that the selected replica is temporarily fenced so that the third-party
+tool can take a physical snapshot of the PVC group - namely the `PGDATA`
+volume and, where available, the WAL volume.
+
+During this short timeframe, the standby cannot accept read-only connections.
+If no standby is available - usually because the cluster runs a single
+instance - and the `k8s.enterprisedb.io/snapshotAllowColdBackupOnPrimary`
+annotation is set to `enabled`, the add-on will temporarily fence the primary,
+causing downtime in terms of read-write operations. This use case is normally
+limited to development environments.
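+
+For such a development environment, the annotation can be combined with the
+add-on activation in the `Cluster` metadata. A minimal sketch, based on the
+annotations shown elsewhere on this page:
+
+```yaml
+metadata:
+  annotations:
+    k8s.enterprisedb.io/addons: '["external-backup-adapter"]'
+    # Allows the primary of a single-instance cluster to be fenced
+    # for a cold backup, at the cost of read-write downtime
+    k8s.enterprisedb.io/snapshotAllowColdBackupOnPrimary: enabled
+```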
+
+#### Full example of YAML file
+
+Here is a full example of the YAML content to be placed in either:
+
+- the `EXTERNAL_BACKUP_ADDON_CONFIGURATION` option, as part of the
+  operator's configuration process described above, for the
+  `external-backup-adapter` add-on, or
+- the `k8s.enterprisedb.io/externalBackupAdapterClusterConfig` annotation,
+  for the `external-backup-adapter-cluster` add-on
+
+!!! Hint
+ Copy the content below and paste it inside the `ConfigMap` or `Secret` that
+ you use to configure the operator or the annotation in the `Cluster`, making
+ sure you use the `|` character that [YAML reserves for literals](https://yaml.org/spec/1.2.2/#812-literal-style),
+ as well as proper indentation. Use the comments to help you customize the
+ options for your tool.
+
+```yaml
+# An array of labels and/or annotations that will be placed
+# on the elected PVC group
+electedResourcesDecorators:
+ - key: "backup.example.com/elected"
+ metadataType: "label"
+ value: "true"
+
+# An array of labels and/or annotations that will be placed
+# on every excluded pod and PVC
+excludedResourcesDecorators:
+ - key: "backup.example.com/excluded"
+ metadataType: "label"
+ value: "true"
+ - key: "backup.example.com/excluded-reason"
+ metadataType: "annotation"
+ value: "Not necessary for backup"
+
+# A LabelSelector containing the labels being used to filter Pods
+# and PVCs to decorate with excludedResourcesDecorators.
+# It accepts a label selector rule as value.
+# See https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#label-selectors
+# When empty, all the Pods and every PVC that is not elected will be excluded.
+excludedResourcesSelector: app=xyz,env=prod
+
+# An array of labels and/or annotations that will be placed
+# on the instance pod that's been selected for the backup by
+# the operator and which contains the hooks.
+# At least one element is required
+backupInstanceDecorators:
+ - key: "backup.example.com/hasHooks"
+ metadataType: "label"
+ value: "true"
+
+# The pre-backup hook configuration allows you to control the names
+# of the annotations in which the operator will place the container
+# name, the command to run before taking the backup, and the command
+# to run in case of error/abort on the third-party tool side.
+# Such metadata will be applied on the instance that's been selected
+# by the operator for the backup (see `backupInstanceDecorators`)
+preBackupHookConfiguration:
+ # Where to place the information about the container that will run
+ # the pre-backup command. The container name is a fixed value and
+ # cannot be configured. Will be saved in the annotations.
+  # To decorate the pod with the hooks refer to: backupInstanceDecorators
+ container:
+ key: "app.example.com/pre-backup-container"
+ # Where to place the information about the command that will be
+ # executed before the backup is taken. The command is a fixed value
+ # and cannot be configured. Will be saved in the annotations.
+  # To decorate the pod with the hooks refer to: backupInstanceDecorators
+ command:
+ key: "app.example.com/pre-backup-command"
+ # Where to place the information about the command that will be
+ # executed in case of an error on the third-party tool side.
+ # The command is a fixed value and cannot be configured.
+ # Will be saved in the annotations.
+  # To decorate the pod with the hooks refer to: backupInstanceDecorators
+ onError:
+ key: "app.example.com/pre-backup-on-error"
+
+# The post-backup hook configuration allows you to control the names
+# of the annotations in which the operator will place the container
+# name and the command to run after taking the backup.
+# Such metadata will be applied on the instance that's been selected by
+# the operator for the backup (see `backupInstanceDecorators`).
+postBackupHookConfiguration:
+ # Where to place the information about the container that will run
+ # the post-backup command. The container name is a fixed value and
+ # cannot be configured. Will be saved in the annotations.
+  # To decorate the pod with hooks refer to: backupInstanceDecorators
+ container:
+ key: "app.example.com/post-backup-container"
+ # Where to place the information about the command that will be
+ # executed after the backup is taken. The command is a fixed value
+ # and cannot be configured. Will be saved in the annotations.
+  # To decorate the pod with hooks refer to: backupInstanceDecorators
+ command:
+ key: "app.example.com/post-backup-command"
+```
+
+## Kasten
+
+Kasten is a popular data protection tool for Kubernetes, enabling backup and
+restore, disaster recovery, and application mobility for Kubernetes
+applications. For more information, see the [Kasten
+website](https://www.kasten.io/) and the [Kasten by Veeam Implementation
+Guide](/partner_docs/KastenbyVeeam/).
+
+In brief, to enable transparent integration with Kasten on an EDB Postgres for
+Kubernetes Cluster, you just need to add the `kasten` value to the
+`k8s.enterprisedb.io/addons` annotation in a `Cluster` spec. For example:
+
+```yaml
+ kind: Cluster
+ metadata:
+ name: one-instance
+ annotations:
+ k8s.enterprisedb.io/addons: '["kasten"]'
+ k8s.enterprisedb.io/snapshotAllowColdBackupOnPrimary: enabled
+ spec:
+ instances: 1
+ storage:
+ size: 1Gi
+ walStorage:
+ size: 1Gi
+```
+
+Once the cluster is created and healthy, the operator will select the replica
+that is farthest ahead in replication as the designated backup instance, and
+will add Kasten-specific backup hooks to it through annotations and labels.
+
+!!! Important
+ The operator will refuse to shut down a primary instance to take a cold
+ backup unless the Cluster is annotated with
+ `k8s.enterprisedb.io/snapshotAllowColdBackupOnPrimary: enabled`
+
+For further guidance on how to configure and use Kasten, see the Implementation Guide's
+[Configuration](/partner_docs/KastenbyVeeam/04-ConfiguringVeeamKasten/) and
+[Using](/partner_docs/KastenbyVeeam/05-UsingVeeamKasten/) sections.
+
+### Limitations
+
+Regarding backups, the EDB Postgres for Kubernetes integration with Kasten
+currently supports **cold backups** only, also referred to as **offline
+backups**. This means that the selected replica is temporarily fenced so that
+Kasten can take a physical snapshot of the PVC group - namely the `PGDATA`
+volume and, where available, the WAL volume.
+
+During this short timeframe, the standby cannot accept read-only connections.
+If no standby is available - usually because the cluster runs a single
+instance - and the `k8s.enterprisedb.io/snapshotAllowColdBackupOnPrimary`
+annotation is set to `enabled`, Kasten will temporarily fence the primary,
+causing downtime in terms of read-write operations. This use case is normally
+limited to development environments.
+
+In terms of recovery, the integration with Kasten currently supports snapshot
+recovery only; no Point-in-Time Recovery (PITR) is available with the Kasten
+add-on, and the RPO is determined by the frequency of the snapshots in your
+Kasten environment. If your organization relies on Kasten, this is usually
+acceptable, but if you need PITR we recommend you look at the native
+continuous backup method on object stores.
+
+## Velero
+
+Velero is an open-source tool to safely back up, restore, perform disaster
+recovery, and migrate Kubernetes cluster resources and persistent volumes. For
+more information, see the [Velero documentation](https://velero.io/docs/latest/).
+To enable Velero compatibility with an {{name.ln}} Cluster, add
+the `velero` value to the `k8s.enterprisedb.io/addons` annotation in a Cluster
+spec.
+For example:
+
+```yaml
+ kind: Cluster
+ metadata:
+ name: one-instance
+ annotations:
+ k8s.enterprisedb.io/addons: '["velero"]'
+ k8s.enterprisedb.io/snapshotAllowColdBackupOnPrimary: enabled
+ spec:
+ instances: 1
+ storage:
+ size: 1Gi
+ walStorage:
+ size: 1Gi
+```
+
+Once the cluster is created and healthy, the operator will select the replica
+that is farthest ahead in replication as the designated backup instance, and
+will add Velero-specific backup hooks to it as annotations.
+
+These [annotations](https://velero.io/docs/latest/backup-hooks/) are used by
+Velero to run the commands to prepare the Postgres instance to be backed up.
+
+!!! Important
+ The operator will refuse to shut down a primary instance to take a cold
+ backup unless the Cluster is annotated with
+ `k8s.enterprisedb.io/snapshotAllowColdBackupOnPrimary: enabled`
+
+### Limitations
+
+Regarding backups, the EDB Postgres for Kubernetes integration with Velero
+currently supports **cold backups** only, also referred to as **offline
+backups**. This means that the selected replica is temporarily fenced so that
+Velero can take a physical snapshot of the PVC group - namely the `PGDATA`
+volume and, where available, the WAL volume.
+
+During this short timeframe, the standby cannot accept read-only connections.
+If no standby is available - usually because the cluster runs a single
+instance - and the `k8s.enterprisedb.io/snapshotAllowColdBackupOnPrimary`
+annotation is set to `enabled`, Velero will temporarily fence the primary,
+causing downtime in terms of read-write operations. This use case is normally
+limited to development environments.
+
+In terms of recovery, the integration with Velero currently supports snapshot
+recovery only; no Point-in-Time Recovery (PITR) is available with the Velero
+add-on, and the RPO is determined by the frequency of the snapshots in your
+Velero environment. If your organization relies on Velero, this is usually
+acceptable, but if you need PITR we recommend you look at the native
+continuous backup method on object stores.
+
+### Backup
+
+By design, {{name.ln}} offloads as much of the backup functionality as
+possible to Velero, its only requirement being to make the previously
+mentioned backup hooks available. Since {{name.ln}} transparently sets all
+the needed configurations, and the rest is standard Velero, using Velero to
+back up a Postgres cluster is as straightforward as it would be for any other
+object. For example:
+
+```bash
+velero backup create mybackup \
+ --include-namespaces mynamespace \
+ -n velero-install-namespace
+```
+
+This command will create a standard Velero backup using the configured object
+storage and the configured Snapshot API.
+
+!!! Important
+    By default, the Velero add-on excludes only a few resources from the
+    backup operation, namely the pods and PVCs of the instances that have not
+    been selected (as you recall, the operator backs up the PVCs of the
+    selected replica). However, you can use the options of the `velero backup`
+    command to fine-tune the resources you want to be part of your backup.
+
+### Restore
+
+As with backup, the recovery process is a standard Velero procedure. The
+command to restore from a backup created with the above parameters would be:
+
+```bash
+velero create restore myrestore \
+ --from-backup mybackup \
+ -n velero-install-namespace
+```
diff --git a/product_docs/docs/postgres_for_kubernetes/1/applications.mdx b/product_docs/docs/postgres_for_kubernetes/1/applications.mdx
new file mode 100644
index 0000000000..bf1f5c2b67
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/applications.mdx
@@ -0,0 +1,95 @@
+---
+title: 'Connecting from an application'
+originalFilePath: 'src/applications.md'
+---
+
+
+
+Applications are expected to connect to PostgreSQL through the services
+created by {{name.ln}} in the same Kubernetes cluster.
+
+For more information on services and how to manage them, please refer to the
+["Service management"](service_management.md) section.
+
+!!! Hint
+    It is highly recommended to use these services in your applications,
+    and to avoid connecting directly to a specific PostgreSQL instance, as the
+    latter can change during the cluster lifetime.
+
+You can use these services in your applications through:
+
+- DNS resolution
+- environment variables
+
+For the credentials to connect to PostgreSQL, you can
+use the secrets generated by the operator.
+
+!!! Seealso "Connection Pooling"
+ Please refer to the ["Connection Pooling" section](connection_pooling.md) for
+ information about how to take advantage of PgBouncer as a connection pooler,
+ and create an access layer between your applications and the PostgreSQL clusters.
+
+### DNS resolution
+
+You can use the Kubernetes DNS service, which is required by the operator,
+to point to a given server.
+If the application is deployed in the same namespace as the PostgreSQL
+cluster, you can simply use the name of the service.
+If the PostgreSQL cluster resides in a different namespace, you can use the
+full qualifier: `service-name.namespace-name`.
+
+DNS is the preferred and recommended discovery method.
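+
+As a sketch, assuming a `Cluster` named `pg-database` in a hypothetical
+`database` namespace, an application in another namespace could point at the
+read-write service like this:
+
+```yaml
+# Example environment for an application container (names are illustrative)
+env:
+  - name: PGHOST
+    value: pg-database-rw.database  # service-name.namespace-name
+  - name: PGPORT
+    value: "5432"
+```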
+
+### Environment variables
+
+If you deploy your application in the same namespace that contains the
+PostgreSQL cluster, you can also use environment variables to connect to the database.
+
+For example, if your PostgreSQL cluster is called `pg-database`, you can use
+the following environment variables in your applications:
+
+- `PG_DATABASE_R_SERVICE_HOST`: the IP address of the service
+ pointing to all the PostgreSQL instances for read-only workloads
+
+- `PG_DATABASE_RO_SERVICE_HOST`: the IP address of the
+ service pointing to all hot-standby replicas of the cluster
+
+- `PG_DATABASE_RW_SERVICE_HOST`: the IP address of the
+ service pointing to the *primary* instance of the cluster
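+
+For instance, a container deployed in the same namespace as `pg-database`
+could consume the injected variable directly (a sketch; the image name and
+command-line flags are hypothetical):
+
+```yaml
+containers:
+  - name: my-app
+    image: my-app:latest
+    command: ["/bin/sh", "-c"]
+    # The service host variable is injected by Kubernetes at pod startup
+    args:
+      - exec /my-app --db-host "$PG_DATABASE_RW_SERVICE_HOST" --db-port 5432
+```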
+
+### Secrets
+
+The PostgreSQL operator will generate up to two `basic-auth` type secrets for
+every PostgreSQL cluster it deploys:
+
+- `[cluster name]-app` (unless you have provided an existing secret through `.spec.bootstrap.initdb.secret.name`)
+- `[cluster name]-superuser` (if `.spec.enableSuperuserAccess` is set to `true`
+ and you have not specified a different secret using `.spec.superuserSecret`)
+
+Each secret contains the following:
+
+- username
+- password
+- hostname to the RW service
+- port number
+- database name
+- a working [`.pgpass file`](https://www.postgresql.org/docs/current/libpq-pgpass.html)
+- [uri](https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-CONNSTRING)
+- [jdbc-uri](https://jdbc.postgresql.org/documentation/use/#connecting-to-the-database)
+- [fqdn-uri](https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-CONNSTRING)
+- [fqdn-jdbc-uri](https://jdbc.postgresql.org/documentation/use/#connecting-to-the-database)
+
+The FQDN to be used in the URIs is calculated using the Kubernetes cluster
+domain specified in the `KUBERNETES_CLUSTER_DOMAIN` configuration parameter.
+See [the operator configuration documentation](operator_conf.md) for more information
+about that.
+
+The `-app` credentials are the ones that should be used by applications
+connecting to the PostgreSQL cluster, and correspond to the user *owning* the
+database.
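+
+For example, the `pg-database-app` secret could be consumed from an
+application container as follows (a sketch; `username` and `password` are the
+credential items listed above):
+
+```yaml
+env:
+  - name: PGUSER
+    valueFrom:
+      secretKeyRef:
+        name: pg-database-app
+        key: username
+  - name: PGPASSWORD
+    valueFrom:
+      secretKeyRef:
+        name: pg-database-app
+        key: password
+```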
+
+The `-superuser` credentials are intended only for administrative purposes,
+and correspond to the `postgres` user.
+
+!!! Important
+ Superuser access over the network is disabled by default.
diff --git a/product_docs/docs/postgres_for_kubernetes/1/architecture.mdx b/product_docs/docs/postgres_for_kubernetes/1/architecture.mdx
new file mode 100644
index 0000000000..1c5c1b964d
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/architecture.mdx
@@ -0,0 +1,433 @@
+---
+title: 'Architecture'
+originalFilePath: 'src/architecture.md'
+---
+
+
+
+!!! Hint
+ For a deeper understanding, we recommend reading our article on the CNCF
+ blog post titled ["Recommended Architectures for PostgreSQL in Kubernetes"](https://www.cncf.io/blog/2023/09/29/recommended-architectures-for-postgresql-in-kubernetes/),
+ which provides valuable insights into best practices and design
+ considerations for PostgreSQL deployments in Kubernetes.
+
+This documentation page provides an overview of the key architectural
+considerations for implementing a robust business continuity strategy when
+deploying PostgreSQL in Kubernetes. These considerations include:
+
+- **[Deployments in *stretched*](#multi-availability-zone-kubernetes-clusters)
+ vs. [*non-stretched* clusters](#single-availability-zone-kubernetes-clusters)**:
+ Evaluating the differences between deploying in stretched clusters (across 3
+ or more availability zones) versus non-stretched clusters (within a single
+ availability zone).
+- [**Reservation of `postgres` worker nodes**](#reserving-nodes-for-postgresql-workloads): Isolating PostgreSQL workloads by
+ dedicating specific worker nodes to `postgres` tasks, ensuring optimal
+ performance and minimizing interference from other workloads.
+- [**PostgreSQL architectures within a single Kubernetes cluster**](#postgresql-architecture):
+ Designing effective PostgreSQL deployments within a single Kubernetes cluster
+ to meet high availability and performance requirements.
+- [**PostgreSQL architectures across Kubernetes clusters for disaster recovery**](#deployments-across-kubernetes-clusters):
+ Planning and implementing PostgreSQL architectures that span multiple
+ Kubernetes clusters to provide comprehensive disaster recovery capabilities.
+
+## Synchronizing the state
+
+PostgreSQL is a database management system and, as such, it needs to be treated
+as a **stateful workload** in Kubernetes. While stateless applications
+mainly use traffic redirection to achieve High Availability (HA) and
+Disaster Recovery (DR), in the case of a database, state must be replicated in
+multiple locations, preferably in a continuous and instantaneous way, by
+adopting either of the following two strategies:
+
+- *storage-level replication*, normally persistent volumes
+- *application-level replication*, in this specific case PostgreSQL
+
+{{name.ln}} relies on application-level replication, for a simple reason: the
+PostgreSQL database management system comes with robust and reliable
+built-in **physical replication** capabilities based on **Write Ahead Log (WAL)
+shipping**, which have been used in production by millions of users all over
+the world for over a decade.
+
+PostgreSQL supports both asynchronous and synchronous streaming replication
+over the network, as well as asynchronous file-based log shipping (normally
+used as a fallback option, for example, to store WAL files in an object store).
+Replicas are usually called *standby servers* and can also be used for
+read-only workloads, thanks to the *Hot Standby* feature.
+
+!!! Important
+ **We recommend against storage-level replication with PostgreSQL**, although
+ {{name.ln}} allows you to adopt that strategy. For more information, please refer
+ to the talk given by Chris Milsted and Gabriele Bartolini at KubeCon NA 2022 entitled
+ ["Data On Kubernetes, Deploying And Running PostgreSQL And Patterns For Databases In a Kubernetes Cluster"](https://www.youtube.com/watch?v=99uSJXkKpeI&ab_channel=CNCF%5BCloudNativeComputingFoundation%5D)
+ where this topic was covered in detail.
+
+## Kubernetes architecture
+
+Kubernetes natively provides the possibility to span separate physical
+locations - also known as data centers, failure zones, or more frequently
+**availability zones** - connected to each other via redundant, low-latency,
+private network connectivity.
+
+Since Kubernetes is a distributed system, the recommended minimum number of
+availability zones for a Kubernetes cluster is three (3), in order to make
+the control plane resilient to the failure of a single zone.
+For details, please refer to
+["Running in multiple zones"](https://kubernetes.io/docs/setup/best-practices/multiple-zones/).
+This means that **each data center is active at any time** and can run workloads
+simultaneously.
+
+!!! Note
+ Most of the public Cloud Providers' managed Kubernetes services already
+ provide 3 or more availability zones in each region.
+
+### Multi-availability zone Kubernetes clusters
+
+The multi-availability zone Kubernetes architecture with three (3) or more
+zones is the one that we recommend for PostgreSQL usage.
+This scenario is typical of Kubernetes services managed by Cloud Providers.
+
+
+
+Such an architecture enables the {{name.ln}} operator to control the full
+lifecycle of a `Cluster` resource across the zones within a single Kubernetes
+cluster, by treating all the availability zones as active: this includes, among
+other operations,
+[scheduling](scheduling.md) the workloads in a declarative manner (based on
+affinity rules, tolerations and node selectors), automated failover,
+self-healing, and updates. All will work seamlessly across the zones in a single
+Kubernetes cluster.
+
+Please refer to the ["PostgreSQL architecture"](#postgresql-architecture)
+section below for details on how you can design your PostgreSQL clusters within
+the same Kubernetes cluster through shared-nothing deployments at the storage,
+worker node, and availability zone levels.
+
+Additionally, you can leverage [Kubernetes clusters](#deployments-across-kubernetes-clusters)
+to deploy distributed PostgreSQL topologies hosting "passive"
+[PostgreSQL replica clusters](replica_cluster.md) in different regions and
+managing them via declarative configuration. This setup is ideal for disaster
+recovery (DR), read-only operations, or cross-region availability.
+
+!!! Important
+ Each operator deployment can only manage operations within its local
+ Kubernetes cluster. For operations across Kubernetes clusters, such as
+ controlled switchover or unexpected failover, coordination must be handled
+ manually (through GitOps, for example) or by using a higher-level cluster
+ management tool.
+
+
+
+### Single availability zone Kubernetes clusters
+
+If your Kubernetes cluster has only one availability zone, {{name.ln}} still
+provides you with a lot of features to improve HA and DR outcomes for your
+PostgreSQL databases, pushing the single point of failure (SPoF) to the level
+of the zone as much as possible - i.e. the zone must have an outage before your
+{{name.ln}} clusters suffer a failure.
+
+This scenario is typical of self-managed on-premise Kubernetes clusters, where
+only one data center is available.
+
+Single availability zone Kubernetes clusters are the only viable option when
+only **two data centers** are available within reach of a low-latency
+connection (typically in the same metropolitan area). Having only two zones
+prevents the creation of a multi-availability zone Kubernetes cluster, which
+requires a minimum of three zones. As a result, users must create two separate
+Kubernetes clusters in an active/passive configuration, with the second cluster
+primarily used for Disaster Recovery (see
+the [replica cluster feature](replica_cluster.md)).
+
+
+
+!!! Hint
+    If you are at an early stage of your Kubernetes journey, please share this
+    document with your infrastructure team. A two data center setup might
+    simply be the result of a "lift-and-shift" transition to Kubernetes from a
+    traditional bare-metal or VM-based infrastructure, where the benefits that
+    Kubernetes offers in a 3+ zone scenario were not known, or not addressed,
+    at the time the infrastructure architecture was designed. Ultimately, a
+    third physical location connected to the other two might be a valid option
+    for your organization to consider, as it reduces the overall costs of the
+    infrastructure by moving day-to-day complexity from the application level
+    down to the physical infrastructure level.
+
+Please refer to the ["PostgreSQL architecture"](#postgresql-architecture)
+section below for details on how you can design your PostgreSQL clusters within
+your single availability zone Kubernetes cluster through shared-nothing
+deployments at the storage and worker node levels only. For HA, in such a
+scenario it becomes even more important that the PostgreSQL instances be
+located on different worker nodes and do not share the same storage.
+
+For DR, you can push the SPoF above the single zone, by using additional
+[Kubernetes clusters](#deployments-across-kubernetes-clusters) to define a
+distributed topology hosting "passive" [PostgreSQL replica clusters](replica_cluster.md).
+As with other Kubernetes workloads in this scenario, promotion of a Kubernetes
+cluster as primary must be done manually.
+
+Through the [replica cluster feature](replica_cluster.md), you can define a
+distributed PostgreSQL topology and coordinate a controlled switchover between
+data centers by first demoting the primary cluster and then promoting the
+replica cluster, without the need to re-clone the former primary. While this
+controlled switchover is fully declarative, automated failover across
+Kubernetes clusters is not within the scope of {{name.ln}}, as the operator
+can only function within a single Kubernetes cluster.
+
+!!! Important
+ {{name.ln}} provides all the necessary primitives and probes to
+ coordinate PostgreSQL active/passive topologies across different Kubernetes
+ clusters through a higher-level operator or management tool.
+
+### Reserving nodes for PostgreSQL workloads
+
+Whether you're operating in a multi-availability zone environment or, more
+critically, within a single availability zone, we strongly recommend isolating
+PostgreSQL workloads by dedicating specific worker nodes exclusively to
+`postgres` in production. A Kubernetes worker node dedicated to running
+PostgreSQL workloads is referred to as a **Postgres node** or `postgres` node.
+This approach ensures optimal performance and resource allocation for your
+database operations.
+
+!!! Hint
+ As a general rule of thumb, deploy Postgres nodes in multiples of
+ three—ideally with one node per availability zone. Three nodes is
+ an optimal number because it ensures that a PostgreSQL cluster with three
+ instances (one primary and two standby replicas) is distributed across
+ different nodes, enhancing fault tolerance and availability.
+
+In Kubernetes, this can be achieved using node labels and taints in a
+declarative manner, aligning with Infrastructure as Code (IaC) practices:
+labels ensure that a node is capable of running `postgres` workloads, while
+taints help prevent any non-`postgres` workloads from being scheduled on that
+node.
+
+!!! Important
+ This methodology is the most straightforward way to ensure that PostgreSQL
+ workloads are isolated from other workloads in terms of both computing
+ resources and, when using locally attached disks, storage. While different
+ PostgreSQL clusters may share the same node, you can take this a step further
+ by using labels and taints to ensure that a node is dedicated to a single
+ instance of a specific `Cluster`.
+
+#### Proposed node label
+
+{{name.ln}} recommends using the `node-role.kubernetes.io/postgres` label.
+Since this is a reserved label (`*.kubernetes.io`), it can only be applied
+after a worker node is created.
+
+To assign the `postgres` label to a node, use the following command:
+
+```sh
+kubectl label node node-role.kubernetes.io/postgres=
+```
+
+To ensure that a `Cluster` resource is scheduled on a `postgres` node, you must
+correctly configure the `.spec.affinity.nodeSelector` stanza in your manifests.
+Here’s an example:
+
+```yaml
+spec:
+ #
+ affinity:
+ #
+ nodeSelector:
+ node-role.kubernetes.io/postgres: ""
+```
+
+#### Proposed node taint
+
+{{name.ln}} recommends using the `node-role.kubernetes.io/postgres` taint.
+
+To assign the `postgres` taint to a node, use the following command:
+
+```sh
+kubectl taint node node-role.kubernetes.io/postgres=:NoSchedule
+```
+
+To ensure that a `Cluster` resource is scheduled on a node with a `postgres` taint, you must correctly configure the `.spec.affinity.tolerations` stanza in your manifests.
+Here’s an example:
+
+```yaml
+spec:
+ #
+ affinity:
+ #
+ tolerations:
+ - key: node-role.kubernetes.io/postgres
+ operator: Exists
+ effect: NoSchedule
+```
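+
+Putting the two together, a `Cluster` targeting the dedicated `postgres`
+nodes combines both stanzas (a sketch based on the label and taint proposed
+above; the name and sizing are illustrative):
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+  name: cluster-dedicated
+spec:
+  instances: 3
+  storage:
+    size: 1Gi
+  affinity:
+    # Schedule only on nodes labeled as postgres nodes
+    nodeSelector:
+      node-role.kubernetes.io/postgres: ""
+    # Tolerate the taint that keeps other workloads away
+    tolerations:
+      - key: node-role.kubernetes.io/postgres
+        operator: Exists
+        effect: NoSchedule
+```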
+
+## PostgreSQL architecture
+
+{{name.ln}} supports clusters based on asynchronous and synchronous
+streaming replication to manage multiple hot standby replicas within the same
+Kubernetes cluster, with the following specifications:
+
+- One primary, with optional multiple hot standby replicas for HA
+
+- Available services for applications:
+ - `-rw`: applications connect only to the primary instance of the cluster
+ - `-ro`: applications connect only to hot standby replicas for
+      read-only workloads (optional)
+ - `-r`: applications connect to any of the instances for read-only
+ workloads (optional)
+
+- Shared-nothing architecture recommended for better resilience of the PostgreSQL cluster:
+ - PostgreSQL instances should reside on different Kubernetes worker nodes
+ and share only the network - as a result, instances should not share
+ the storage and preferably use local volumes attached to the node they
+ run on
+ - PostgreSQL instances should reside in different availability zones
+ within the same Kubernetes cluster / region
+
+!!! Important
+    You can configure the above services through the `managed.services`
+    section in the `Cluster` configuration, for example by reducing the number
+    of services and selecting the type (the default is `ClusterIP`). For more
+    details, please refer to the
+    ["Service Management" section](service_management.md).
+
+The diagram below provides a simplified view of the recommended
+shared-nothing architecture for a PostgreSQL cluster spanning three different
+availability zones, running on separate nodes, each with dedicated local
+storage for PostgreSQL data.
+
+
+
+{{name.ln}} automatically takes care of updating the above services if
+the topology of the cluster changes. For example, in case of failover, it
+automatically updates the `-rw` service to point to the promoted primary,
+making sure that traffic from the applications is seamlessly redirected.
+
+!!! Seealso "Replication"
+ Please refer to the ["Replication" section](replication.md) for more
+ information about how {{name.ln}} relies on PostgreSQL replication,
+ including synchronous settings.
+
+!!! Seealso "Connecting from an application"
+ Please refer to the ["Connecting from an application" section](applications.md) for
+ information about how to connect to {{name.ln}} from a stateless
+ application within the same Kubernetes cluster.
+
+!!! Seealso "Connection Pooling"
+ Please refer to the ["Connection Pooling" section](connection_pooling.md) for
+ information about how to take advantage of PgBouncer as a connection pooler,
+ and create an access layer between your applications and the PostgreSQL clusters.
+
+### Read-write workloads
+
+Applications can decide to connect to the PostgreSQL instance elected as
+*current primary* by the Kubernetes operator, as depicted in the following
+diagram:
+
+
+
+Applications can use the `-rw` suffix service.
+
+If the primary becomes temporarily or permanently unavailable, {{name.ln}}
+triggers a failover for High Availability purposes, pointing the `-rw`
+service to another instance of the cluster.
+
+### Read-only workloads
+
+!!! Important
+    Applications must be aware of the limitations that
+    [Hot Standby](https://www.postgresql.org/docs/current/hot-standby.html)
+    presents and be familiar with the way PostgreSQL operates when dealing
+    with these workloads.
+
+Applications can access hot standby replicas through the `-ro` service made available
+by the operator. This service enables the application to offload read-only queries from the
+primary node.
+
+The following diagram shows the architecture:
+
+
+
+Applications can also access any PostgreSQL instance through the
+`-r` service.
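+
+As a sketch, for a cluster named `cluster-example`, a stateless application
+could target each service by name (the credentials and database below are
+hypothetical):
+
+```yaml
+# Service names derive from the Cluster name plus the suffix:
+#   cluster-example-rw -> primary (read-write)
+#   cluster-example-ro -> hot standby replicas (read-only)
+#   cluster-example-r  -> any instance (read-only)
+env:
+  - name: DATABASE_URL
+    value: "postgresql://app:password@cluster-example-rw:5432/app"
+```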
+
+## Deployments across Kubernetes clusters
+
+!!! Info
+ {{name.ln}} supports deploying PostgreSQL across multiple Kubernetes
+ clusters through a feature that allows you to define a distributed PostgreSQL
+ topology using replica clusters, as described in this section.
+
+In a distributed PostgreSQL cluster, only a single PostgreSQL instance can
+act as primary at any given time. This means that, at any given time,
+applications can write only within a single Kubernetes cluster.
+
+However, for business continuity objectives it is fundamental to:
+
+- reduce global **recovery point objectives** ([RPO](before_you_start.md#rpo))
+ by storing PostgreSQL backup data in multiple locations, regions and possibly
+ using different providers (Disaster Recovery)
+- reduce global **recovery time objectives** ([RTO](before_you_start.md#rto))
+ by taking advantage of PostgreSQL replication beyond the primary Kubernetes
+ cluster (High Availability)
+
+In order to address the above concerns, {{name.ln}} introduces the concept of
+a PostgreSQL Topology that is distributed across different Kubernetes clusters
+and is made up of a primary PostgreSQL cluster and one or more PostgreSQL
+replica clusters.
+This feature is called **distributed PostgreSQL topology with replica clusters**,
+and it enables multi-cluster deployments in private, public, hybrid, and
+multi-cloud contexts.
+
+A replica cluster is a separate `Cluster` resource that is in continuous
+recovery, replicating from another source, either via WAL shipping from a WAL
+archive or via streaming replication from a primary or a standby (cascading).
+
+The diagram below depicts a PostgreSQL cluster spanning over two different
+Kubernetes clusters, where the primary cluster is in the first Kubernetes
+cluster and the replica cluster is in the second. The second Kubernetes cluster
+acts as the company's disaster recovery cluster, ready to be activated in case
+of disaster and unavailability of the first one.
+
+
+
+A replica cluster can have the same architecture as the primary cluster.
+Instead of a primary instance, a replica cluster has a **designated primary**
+instance, which is a standby server with an arbitrary number of cascading
+standby servers in streaming replication (symmetric architecture).
+
+The designated primary can be promoted at any time, transforming the replica
+cluster into a primary cluster capable of accepting write connections.
+This is typically triggered by:
+
+- **Human decision:** You choose to make the other PostgreSQL cluster (or the
+ entire Kubernetes cluster) the primary. To avoid data loss and ensure that
+ the former primary can follow without being re-cloned (especially with large
+ data sets), you first demote the current primary, then promote the designated
+ primary using the API provided by {{name.ln}}.
+- **Unexpected failure:** If the entire Kubernetes cluster fails, you might
+ experience data loss, but you need to fail over to the other Kubernetes
+ cluster by promoting the PostgreSQL replica cluster.
+
+!!! Warning
+ {{name.ln}} cannot perform any cross-cluster automated failover, as it
+ does not have authority beyond a single Kubernetes cluster. Such operations
+ must be performed manually or delegated to a multi-cluster/federated
+ cluster-aware authority.
+
+!!! Important
+ {{name.ln}} allows you to control the distributed topology via
+ declarative configuration, enabling you to automate these procedures as part of
+ your Infrastructure as Code (IaC) process, including GitOps.
+
+In the example above, the designated primary receives WAL updates via streaming
+replication (`primary_conninfo`). As a fallback, it can retrieve WAL segments
+from an object store using file-based WAL shipping—for instance, with the
+Barman Cloud plugin through `restore_command` and `barman-cloud-wal-restore`.
+
+{{name.ln}} allows you to define topologies with multiple replica clusters.
+You can also define replica clusters with a lower number of replicas, and then
+increase this number when the cluster is promoted to primary.
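+
+As an illustrative sketch of the declarative configuration involved (all
+names, the endpoint, and the storage size are hypothetical; refer to the
+"Replica Clusters" section for the authoritative configuration), a replica
+cluster in a second Kubernetes cluster could look like:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+  name: cluster-dc-b
+spec:
+  instances: 3
+  storage:
+    size: 10Gi
+  replica:
+    self: cluster-dc-b
+    primary: cluster-dc-a   # the cluster currently acting as primary
+    source: cluster-dc-a    # where replication comes from
+  bootstrap:
+    recovery:
+      source: cluster-dc-a
+  externalClusters:
+    - name: cluster-dc-a
+      connectionParameters:
+        host: cluster-dc-a-rw.dc-a.example.com   # hypothetical endpoint
+        user: streaming_replica
+        sslmode: verify-full
+```
+
+Promoting this cluster would then amount to changing `replica.primary` to
+`cluster-dc-b` on both sides, as part of your IaC/GitOps process.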
+
+!!! Seealso "Replica clusters"
+ Please refer to the ["Replica Clusters" section](replica_cluster.md) for
+ more detailed information on how physical replica clusters operate and how to
+ define a distributed topology with read-only clusters across different
+ Kubernetes clusters. This approach can significantly enhance your global
+ disaster recovery and high availability (HA) strategy.
diff --git a/product_docs/docs/postgres_for_kubernetes/1/backup.mdx b/product_docs/docs/postgres_for_kubernetes/1/backup.mdx
new file mode 100644
index 0000000000..60ed5e968f
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/backup.mdx
@@ -0,0 +1,488 @@
+---
+title: 'Backup'
+originalFilePath: 'src/backup.md'
+---
+
+
+
+!!! Info
+ This section covers **physical backups** in PostgreSQL.
+ While PostgreSQL also supports logical backups using the `pg_dump` utility,
+ these are **not suitable for business continuity** and are **not managed** by
+ {{name.ln}}. If you still wish to use `pg_dump`, refer to the
+ [*Troubleshooting / Emergency backup* section](troubleshooting.md#emergency-backup)
+ for guidance.
+
+!!! Important
+ Starting with version 1.26, native backup and recovery capabilities are
+ being **progressively phased out** of the core operator and moved to official
+ CNP-I plugins. This transition aligns with {{name.ln}}' shift towards a
+ **backup-agnostic architecture**, enabled by its extensible
+ interface—**CNP-I**—which standardizes the management of **WAL archiving**,
+ **physical base backups**, and corresponding **recovery processes**.
+
+{{name.ln}} currently supports **physical backups of PostgreSQL clusters** in
+two main ways:
+
+- **Via [CNPG-I](https://github.com/cloudnative-pg/cnpg-i/) plugins**: the
+ {{name.ln}} Community officially supports the [**Barman Cloud Plugin**](https://cloudnative-pg.io/plugin-barman-cloud/)
+ for integration with object storage services.
+
+- **Natively**, with support for:
+
+ - [Object storage via Barman Cloud](backup_barmanobjectstore.md)
+ *(although deprecated from 1.26 in favor of the Barman Cloud Plugin)*
+ - [Kubernetes Volume Snapshots](backup_volumesnapshot.md), if
+ supported by the underlying storage class
+
+Before selecting a backup strategy with {{name.ln}}, it's important to
+familiarize yourself with the foundational concepts covered in the ["Main Concepts"](#main-concepts)
+section. These include WAL archiving, hot and cold backups, performing backups
+from a standby, and more.
+
+## Main Concepts
+
+PostgreSQL natively provides first-class backup and recovery capabilities based
+on file system level (physical) copy. These have been successfully used for
+more than 15 years in mission critical production databases, helping
+organizations all over the world achieve their disaster recovery goals with
+Postgres.
+
+In {{name.ln}}, the backup infrastructure for each PostgreSQL cluster is made
+up of the following resources:
+
+- **WAL archive**: a location containing the WAL files (transactional logs)
+ that are continuously written by Postgres and archived for data durability
+- **Physical base backups**: a copy of all the files that PostgreSQL uses to
+ store the data in the database (primarily the `PGDATA` and any tablespace)
+
+CNP-I provides a generic and extensible interface for managing WAL archiving
+(both archive and restore operations), as well as the base backup and
+corresponding restore processes.
+
+### WAL archive
+
+The WAL archive in PostgreSQL is at the heart of **continuous backup**, and it
+is fundamental for the following reasons:
+
+- **Hot backups**: the possibility to take physical base backups from any
+ instance in the Postgres cluster (either primary or standby) without shutting
+ down the server; they are also known as online backups
+- **Point in Time recovery** (PITR): the possibility to recover at any point in
+ time from the first available base backup in your system
+
+!!! Warning
+ WAL archive alone is useless. Without a physical base backup, you cannot
+ restore a PostgreSQL cluster.
+
+In general, the presence of a WAL archive enhances the resilience of a
+PostgreSQL cluster, allowing each instance to fetch any required WAL file from
+the archive if needed (the WAL archive typically has longer retention periods
+than any Postgres instance, which recycles those files).
+
+This use case can also be extended to [replica clusters](replica_cluster.md),
+as they can simply rely on the WAL archive to synchronize across long
+distances, extending disaster recovery goals across different regions.
+
+When you [configure a WAL archive](wal_archiving.md), {{name.ln}} provides
+out-of-the-box an [RPO](before_you_start.md#rpo) <= 5 minutes for disaster
+recovery, even across regions.
+
+!!! Important
+    Our recommendation is to always set up the WAL archive in production.
+ There are known use cases — normally involving staging and development
+ environments — where none of the above benefits are needed and the WAL
+ archive is not necessary. RPO in this case can be any value, such as
+ 24 hours (daily backups) or infinite (no backup at all).
+
+### Cold and Hot backups
+
+Hot backups have already been defined in the previous section. They require the
+presence of a WAL archive, and they are the norm in any modern database
+management system.
+
+**Cold backups**, also known as offline backups, are instead physical base backups
+taken when the PostgreSQL instance (standby or primary) is shut down. They are
+consistent by definition, and they represent a snapshot of the database at the
+time it was shut down.
+
+As a result, PostgreSQL instances can be restarted from a cold backup without
+the need for a WAL archive, even though they can take advantage of one, if
+available (with all the benefits on the recovery side highlighted in the
+previous section).
+
+In situations with a higher RPO (for example, 1 hour or 24 hours) and
+shorter retention periods, cold backups are a viable option to consider
+for your disaster recovery plans.
+
+## Comparing Available Backup Options: Object Stores vs Volume Snapshots
+
+{{name.ln}} currently supports two main approaches for physical backups:
+
+- **Object store–based backups**, via the [**Barman Cloud
+ Plugin**](https://cloudnative-pg.io/plugin-barman-cloud/) or the
+ [**deprecated native integration**](backup_barmanobjectstore.md)
+- [**Volume Snapshots**](backup_volumesnapshot.md), using the
+ Kubernetes CSI interface and supported storage classes
+
+!!! Important
+ CNP-I is designed to enable third parties to build and integrate their own
+ backup plugins. Over time, we expect the ecosystem of supported backup
+ solutions to grow.
+
+### Object Store–Based Backups
+
+Backups to an object store (e.g. AWS S3, Azure Blob, GCS):
+
+- Always require WAL archiving
+- Support hot backups only
+- Do not support incremental or differential copies
+- Support retention policies
+
+### Volume Snapshots
+
+Native volume snapshots:
+
+- Do not require WAL archiving, though its use is still strongly
+ recommended in production
+- Support incremental and differential copies, depending on the
+ capabilities of the underlying storage class
+- Support both hot and cold backups
+- Do not support retention policies
+
+### Choosing Between the Two
+
+The best approach depends on your environment and operational requirements.
+Consider the following factors:
+
+- **Object store availability**: Ensure your Kubernetes cluster can access a
+ reliable object storage solution, including a stable networking layer.
+- **Storage class capabilities**: Confirm that your storage class supports CSI
+ volume snapshots with incremental/differential features.
+- **Database size**: For very large databases (VLDBs), **volume snapshots are
+ generally preferred** as they enable faster recovery due to copy-on-write
+ technology—this significantly improves your
+ [Recovery Time Objective (RTO)](before_you_start.md#rto).
+- **Data mobility**: Object store–based backups may offer greater flexibility
+ for replicating or storing backups across regions or environments.
+- **Operational familiarity**: Choose the method that aligns best with your
+ team's experience and confidence in managing storage.
+
+### Comparison Summary
+
+| Feature | Object Store | Volume Snapshots |
+| --------------------------------- | :----------: | :------------------: |
+| **WAL archiving** | Required | Recommended^1^ |
+| **Cold backup** | ❌ | ✅ |
+| **Hot backup** | ✅ | ✅ |
+| **Incremental copy** | ❌ | ✅^2^ |
+| **Differential copy** | ❌ | ✅^2^ |
+| **Backup from a standby** | ✅ | ✅ |
+| **Snapshot recovery** | ❌^3^ | ✅ |
+| **Retention policies** | ✅ | ❌ |
+| **Point-in-Time Recovery (PITR)** | ✅ | Requires WAL archive |
+| **Underlying technology** | Barman Cloud | Kubernetes API |
+
+* * *
+
+> **Notes:**
+>
+> 1. WAL archiving must currently use an object store through a plugin (or the
+> deprecated native one).
+> 2. Availability of incremental and differential copies depends on the
+> capabilities of the storage class used for PostgreSQL volumes.
+> 3. Snapshot recovery can be emulated by using the
+> `bootstrap.recovery.recoveryTarget.targetImmediate` option.
+
+## Scheduled Backups
+
+Scheduled backups are the recommended way to implement a reliable backup
+strategy in {{name.ln}}. They are defined using the `ScheduledBackup` custom
+resource.
+
+!!! Info
+ For a complete list of configuration options, refer to the
+ [`ScheduledBackupSpec`](pg4k.v1.md#postgresql-k8s-enterprisedb-io-v1-ScheduledBackupSpec)
+ in the API reference.
+
+### Cron Schedule
+
+The `schedule` field defines **when** the backup should occur, using a
+*six-field cron expression* that includes seconds. This format follows the
+[Go `cron` package specification](https://pkg.go.dev/github.com/robfig/cron#hdr-CRON_Expression_Format).
+
+!!! Warning
+ This format differs from the traditional Unix/Linux `crontab`—it includes a
+ **seconds** field as the first entry.
+
+Example of a daily scheduled backup:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: ScheduledBackup
+metadata:
+ name: backup-example
+spec:
+ schedule: "0 0 0 * * *" # At midnight every day
+ backupOwnerReference: self
+ cluster:
+ name: pg-backup
+ # method: plugin, volumeSnapshot, or barmanObjectStore (default)
+```
+
+The schedule `"0 0 0 * * *"` triggers a backup every day at midnight
+(00:00:00). In Kubernetes CronJobs, the equivalent expression would be `0 0 * * *`,
+since seconds are not supported.
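+
+For reference, a few more six-field expressions in the Go `cron` format
+described above (shown as illustrative fragments, not a complete manifest):
+
+```yaml
+schedule: "0 0 0 * * *"     # every day at 00:00:00
+schedule: "0 30 2 * * 0"    # every Sunday at 02:30:00
+schedule: "0 0 */6 * * *"   # every 6 hours, on the hour
+```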
+
+### Backup Frequency and RTO
+
+!!! Hint
+ The frequency of your backups directly impacts your **Recovery Time Objective**
+ ([RTO](before_you_start.md#rto)).
+
+To optimize your disaster recovery strategy based on continuous backup:
+
+- Regularly test restoring from your backups.
+- Measure the time required for a full recovery.
+- Account for the size of base backups and the number of WAL files that must be
+ retrieved and replayed.
+
+In most cases, a **weekly base backup** is sufficient. It is rare to schedule
+full backups more frequently than once per day.
+
+### Immediate Backup
+
+To trigger a backup immediately when the `ScheduledBackup` is created:
+
+```yaml
+spec:
+ immediate: true
+```
+
+### Pause Scheduled Backups
+
+To temporarily stop scheduled backups from running:
+
+```yaml
+spec:
+ suspend: true
+```
+
+### Backup Owner Reference (`.spec.backupOwnerReference`)
+
+Controls which Kubernetes object is set as the owner of the backup resource:
+
+- `none`: No owner reference (legacy behavior)
+- `self`: The `ScheduledBackup` object becomes the owner
+- `cluster`: The PostgreSQL cluster becomes the owner
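+
+Putting the above fields together, a minimal sketch of a weekly schedule that
+also runs once immediately after creation could look like this (the resource
+name is hypothetical; the cluster name follows the earlier example):
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: ScheduledBackup
+metadata:
+  name: weekly-backup
+spec:
+  schedule: "0 0 0 * * 0"       # every Sunday at midnight (seconds first)
+  immediate: true               # take one backup as soon as this is created
+  suspend: false                # set to true to pause the schedule
+  backupOwnerReference: cluster # the Cluster owns the generated Backups
+  cluster:
+    name: pg-backup
+```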
+
+## On-Demand Backups
+
+On-demand backups allow you to manually trigger a backup operation at any time
+by creating a `Backup` resource.
+
+!!! Info
+ For a full list of available options, see the
+ [`BackupSpec`](pg4k.v1.md#postgresql-k8s-enterprisedb-io-v1-BackupSpec) in the
+ API reference.
+
+### Example: Requesting an On-Demand Backup
+
+To start an on-demand backup, apply a `Backup` request custom resource like the
+following:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Backup
+metadata:
+ name: backup-example
+spec:
+ method: barmanObjectStore
+ cluster:
+ name: pg-backup
+```
+
+In this example, the operator will orchestrate the backup process using the
+`barman-cloud-backup` tool and store the backup in the configured object store.
+
+### Monitoring Backup Progress
+
+You can check the status of the backup using:
+
+```bash
+kubectl describe backup backup-example
+```
+
+While the backup is in progress, you'll see output similar to:
+
+```text
+Name: backup-example
+Namespace: default
+...
+Spec:
+ Cluster:
+ Name: pg-backup
+Status:
+ Phase: running
+ Started At: 2020-10-26T13:57:40Z
+Events:
+```
+
+Once the backup has successfully completed, the `phase` will be set to
+`completed`, and the output will include additional metadata:
+
+```text
+Name: backup-example
+Namespace: default
+...
+Status:
+ Backup Id: 20201026T135740
+ Destination Path: s3://backups/
+ Endpoint URL: http://minio:9000
+ Phase: completed
+ S3 Credentials:
+ Access Key Id:
+ Name: minio
+ Key: ACCESS_KEY_ID
+ Secret Access Key:
+ Name: minio
+ Key: ACCESS_SECRET_KEY
+ Server Name: pg-backup
+ Started At: 2020-10-26T13:57:40Z
+ Stopped At: 2020-10-26T13:57:44Z
+```
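+
+As a convenience sketch, you can also watch the phase non-interactively; the
+`kubectl wait` form below requires kubectl 1.23 or later and assumes the
+backup name used in this example:
+
+```shell
+# List all backups and their current phase
+kubectl get backup
+
+# Block until this backup reaches the "completed" phase (up to 10 minutes)
+kubectl wait --for=jsonpath='{.status.phase}'=completed \
+  backup/backup-example --timeout=10m
+```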
+
+* * *
+
+!!! Important
+ On-demand backups do **not** include Kubernetes secrets for the PostgreSQL
+ superuser or application user. You should ensure these secrets are included in
+ your broader Kubernetes cluster backup strategy.
+
+## Backup Methods
+
+{{name.ln}} currently supports the following backup methods for scheduled
+and on-demand backups:
+
+- `plugin` – Uses a CNP-I plugin (requires `.spec.pluginConfiguration`)
+- `volumeSnapshot` – Uses native [Kubernetes volume snapshots](backup_volumesnapshot.md#how-to-configure-volume-snapshot-backups)
+- `barmanObjectStore` – Uses [Barman Cloud for object storage](backup_barmanobjectstore.md)
+ *(deprecated starting with v1.26 in favor of the
+ [Barman Cloud Plugin](https://cloudnative-pg.io/plugin-barman-cloud/),
+ but still the default for backward compatibility)*
+
+Specify the method using the `.spec.method` field (defaults to
+`barmanObjectStore`).
+
+If your cluster is configured to support volume snapshots, you can enable
+scheduled snapshot backups like this:
+
+```yaml
+spec:
+ method: volumeSnapshot
+```
+
+To use the Barman Cloud Plugin as the backup method, set `method: plugin` and
+configure the plugin accordingly. You can find an example in the
+["Performing a Base Backup" section of the plugin documentation](https://cloudnative-pg.io/plugin-barman-cloud/docs/usage/#performing-a-base-backup).
+
+## Backup from a Standby
+
+Taking a base backup involves reading the entire on-disk data set of a
+PostgreSQL instance, which can introduce I/O contention and impact the
+performance of the active workload.
+
+To reduce this impact, **{{name.ln}} supports taking backups from a standby
+instance**, leveraging PostgreSQL’s built-in capability to perform backups from
+read-only replicas.
+
+By default, backups are performed on the **most up-to-date replica** in the
+cluster. If no replicas are available, the backup will fall back to the
+**primary instance**.
+
+!!! Note
+ The examples in this section are focused on backup target selection and do not
+ take the backup method (`spec.method`) into account, as it is not relevant to
+ the scope being discussed.
+
+### How It Works
+
+When `prefer-standby` is the target (the default behavior), {{name.ln}} will
+attempt to:
+
+1. Identify the most synchronized standby node.
+2. Run the backup process on that standby.
+3. Fall back to the primary if no standbys are available.
+
+This strategy minimizes interference with the primary’s workload.
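+
+The default behavior corresponds to the following (implicit) cluster
+configuration fragment:
+
+```yaml
+spec:
+  backup:
+    target: "prefer-standby"  # back up from the most aligned standby,
+                              # falling back to the primary if none exists
+```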
+
+!!! Warning
+ Although the standby might not always be up to date with the primary,
+ in the time continuum from the first available backup to the last
+ archived WAL this is normally irrelevant. The base backup indeed
+ represents the starting point from which to begin a recovery operation,
+ including PITR. Similarly to what happens with
+ [`pg_basebackup`](https://www.postgresql.org/docs/current/app-pgbasebackup.html),
+ when backing up from an online standby we do not force a switch of the WAL on the
+ primary. This might produce unexpected results in the short term (before
+ `archive_timeout` kicks in) in deployments with low write activity.
+
+### Forcing Backup on the Primary
+
+To always run backups on the primary instance, explicitly set the backup target
+to `primary` in the cluster configuration:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+ [...]
+spec:
+ backup:
+ target: "primary"
+```
+
+!!! Warning
+ Be cautious when using `primary` as the target for **cold backups using
+ volume snapshots**, as this will require shutting down the primary instance
+ temporarily—interrupting all write operations. The same caution applies to
+ single-instance clusters, even if you haven't explicitly set the target.
+
+### Overriding the Cluster-Wide Target
+
+You can override the cluster-level target on a per-backup basis, using either
+`Backup` or `ScheduledBackup` resources. Here's an example of an on-demand
+backup:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Backup
+metadata:
+ [...]
+spec:
+ cluster:
+ name: [...]
+ target: "primary"
+```
+
+In this example, even if the cluster’s default target is `prefer-standby`, the
+backup will be taken from the primary instance.
+
+## Retention Policies
+
+{{name.ln}} is evolving toward a **backup-agnostic architecture**, where
+backup responsibilities are delegated to external **CNP-I plugins**. These
+plugins are expected to offer advanced and customizable data protection
+features, including sophisticated retention management, that go beyond the
+built-in capabilities and scope of {{name.ln}}.
+
+As part of this transition, the `spec.backup.retentionPolicy` field in the
+`Cluster` resource is **deprecated** and will be removed in a future release.
+
+For more details on available retention features, refer to your chosen plugin’s documentation.
+For example: ["Retention Policies" with Barman Cloud Plugin](https://cloudnative-pg.io/plugin-barman-cloud/docs/retention/).
+
+!!! Important
+ Users are encouraged to rely on the retention mechanisms provided by the
+ backup plugin they are using. This ensures better flexibility and consistency
+ with the backup method in use.
diff --git a/product_docs/docs/postgres_for_kubernetes/1/backup_barmanobjectstore.mdx b/product_docs/docs/postgres_for_kubernetes/1/backup_barmanobjectstore.mdx
new file mode 100644
index 0000000000..d975265b4f
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/backup_barmanobjectstore.mdx
@@ -0,0 +1,351 @@
+---
+title: 'Appendix B - Backup on object stores'
+originalFilePath: 'src/appendixes/backup_barmanobjectstore.md'
+---
+
+
+
+!!! Warning
+ As of {{name.ln}} 1.26, **native Barman Cloud support is deprecated** in
+ favor of the **Barman Cloud Plugin**. This page has been moved to the appendix
+ for reference purposes. While the native integration remains functional for
+ now, we strongly recommend beginning a gradual migration to the plugin-based
+ interface after appropriate testing. For guidance, see
+ [Migrating from Built-in {{name.ln}} Backup](https://cloudnative-pg.io/plugin-barman-cloud/docs/migration/).
+
+{{name.ln}} natively supports **online/hot backup** of PostgreSQL
+clusters through continuous physical backup and WAL archiving on an object
+store. This means that the database is always up (no downtime required)
+and that Point In Time Recovery is available.
+
+The operator can orchestrate a continuous backup infrastructure
+that is based on the [Barman Cloud](https://pgbarman.org) tool. Instead
+of using the classical architecture with a Barman server, which
+backs up many PostgreSQL instances, the operator relies on the
+`barman-cloud-wal-archive`, `barman-cloud-check-wal-archive`,
+`barman-cloud-backup`, `barman-cloud-backup-list`, and
+`barman-cloud-backup-delete` tools. As a result, base backups will
+be *tarballs*. Both base backups and WAL files can be compressed
+and encrypted.
+
+This requires an image that includes `barman-cli-cloud`. You can use the
+image `docker.enterprisedb.com/k8s/postgresql` for this purpose, as it is
+composed of a community PostgreSQL image and the latest `barman-cli-cloud`
+package.
+
+!!! Important
+ Always ensure that you are running the latest version of the operands
+ in your system to take advantage of the improvements introduced in
+    Barman Cloud (as well as improve the security aspects of your cluster).
+
+!!! Warning "Changes in Barman Cloud 3.16+ and Bucket Creation"
+ Starting with Barman Cloud 3.16, most Barman Cloud commands no longer
+ automatically create the target bucket, assuming it already exists. Only the
+ `barman-cloud-check-wal-archive` command creates the bucket now. Whenever this
+ is not the first operation run on an empty bucket, {{name.ln}} will throw an
+ error. As a result, to ensure reliable, future-proof operations and avoid
+ potential issues, we strongly recommend that you create and configure your
+ object store bucket *before* creating a `Cluster` resource that references it.
+
+A backup is performed from a primary or a designated primary instance in a
+`Cluster` (please refer to
+[replica clusters](replica_cluster.md)
+for more information about designated primary instances), or alternatively
+on a [standby](backup.md#backup-from-a-standby).
+
+## Common object stores
+
+If you are looking for a specific object store such as
+[AWS S3](object_stores.md#aws-s3),
+[Microsoft Azure Blob Storage](object_stores.md#azure-blob-storage),
+[Google Cloud Storage](object_stores.md#google-cloud-storage), or a compatible
+provider, please refer to [Appendix C - Common object stores for backups](object_stores.md).
+
+## WAL archiving
+
+WAL archiving is the process that feeds a [WAL archive](backup.md#wal-archive)
+in {{name.ln}}.
+
+The WAL archive is defined in the `.spec.backup.barmanObjectStore` stanza of
+a `Cluster` resource.
+
+!!! Info
+ Please refer to [`BarmanObjectStoreConfiguration`](https://pkg.go.dev/github.com/cloudnative-pg/barman-cloud/pkg/api#BarmanObjectStoreConfiguration)
+ in the barman-cloud API for a full list of options.
+
+If required, you can choose to compress WAL files as soon as they
+are uploaded and/or encrypt them:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+[...]
+spec:
+ backup:
+ barmanObjectStore:
+ [...]
+ wal:
+ compression: gzip
+ encryption: AES256
+```
+
+You can configure the encryption directly in your bucket, and the operator
+will use it unless you override it in the cluster configuration.
+
+PostgreSQL implements a sequential archiving scheme, where the
+`archive_command` will be executed sequentially for every WAL
+segment to be archived.
+
+!!! Important
+ By default, {{name.ln}} sets `archive_timeout` to `5min`, ensuring
+ that WAL files, even in case of low workloads, are closed and archived
+ at least every 5 minutes, providing a deterministic time-based value for
+    your Recovery Point Objective ([RPO](before_you_start.md#rpo)). Although you can change the value
+    of the [`archive_timeout` setting in the PostgreSQL configuration](https://www.postgresql.org/docs/current/runtime-config-wal.html#GUC-ARCHIVE-TIMEOUT),
+ our experience suggests that the default value set by the operator is
+ suitable for most use cases.
+
+When the bandwidth between the PostgreSQL instance and the object
+store allows archiving more than one WAL file in parallel, you
+can use the parallel WAL archiving feature of the instance manager
+like in the following example:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+[...]
+spec:
+ backup:
+ barmanObjectStore:
+ [...]
+ wal:
+ compression: gzip
+ maxParallel: 8
+ encryption: AES256
+```
+
+In the previous example, the instance manager optimizes the WAL
+archiving process by archiving in parallel at most eight ready
+WALs, including the one requested by PostgreSQL.
+
+When PostgreSQL requests the archiving of a WAL file that the instance
+manager has already archived as an optimization, that archival request
+is simply dismissed with a positive status.
+
+## Retention policies
+
+{{name.ln}} can manage the automated deletion of backup files from
+the backup object store, using **retention policies** based on the recovery
+window.
+
+Internally, the retention policy feature uses `barman-cloud-backup-delete`
+with `--retention-policy "RECOVERY WINDOW OF {{ retention policy value }} {{ retention policy unit }}"`.
+
+For example, you can define your backups with a retention policy of 30 days as
+follows:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+[...]
+spec:
+ backup:
+ barmanObjectStore:
+ destinationPath: ""
+ s3Credentials:
+ accessKeyId:
+ name: aws-creds
+ key: ACCESS_KEY_ID
+ secretAccessKey:
+ name: aws-creds
+ key: ACCESS_SECRET_KEY
+ retentionPolicy: "30d"
+```
+
+!!! Note "There's more ..."
+ The **recovery window retention policy** is focused on the concept of
+ *Point of Recoverability* (`PoR`), a moving point in time determined by
+ `current time - recovery window`. The *first valid backup* is the first
+ available backup before `PoR` (in reverse chronological order).
+ {{name.ln}} must ensure that we can recover the cluster at
+ any point in time between `PoR` and the latest successfully archived WAL
+ file, starting from the first valid backup. Base backups that are older
+ than the first valid backup will be marked as *obsolete* and permanently
+ removed after the next backup is completed.
+
+## Compression algorithms
+
+{{name.ln}} by default archives backups and WAL files in an
+uncompressed fashion. However, it also supports the following compression
+algorithms via `barman-cloud-backup` (for backups) and
+`barman-cloud-wal-archive` (for WAL files):
+
+- bzip2
+- gzip
+- lz4
+- snappy
+- xz
+- zstd
+
+The compression settings for backups and WALs are independent. See the
+[DataBackupConfiguration](https://pkg.go.dev/github.com/cloudnative-pg/barman-cloud/pkg/api#DataBackupConfiguration) and
+[WALBackupConfiguration](https://pkg.go.dev/github.com/cloudnative-pg/barman-cloud/pkg/api#WalBackupConfiguration) sections in
+the barman-cloud API reference.
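+
+For example, the compression algorithm can be set independently through the
+`compression` option of the `data` and `wal` sections (a minimal sketch, to be
+adapted to your object store configuration):
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+[...]
+spec:
+  backup:
+    barmanObjectStore:
+      [...]
+      data:
+        # Compression algorithm for base backups
+        compression: zstd
+      wal:
+        # Compression algorithm for WAL files
+        compression: gzip
+```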
+
+It is important to note that archival time, restore time, and size vary
+between the algorithms, so choose the compression algorithm that best fits
+your use case.
+
+The Barman team has performed an evaluation of the performance of the supported
+algorithms for Barman Cloud. The following table summarizes a scenario where a
+backup is taken on a local MinIO deployment. The Barman GitHub project includes
+a [deeper analysis](https://github.com/EnterpriseDB/barman/issues/344#issuecomment-992547396).
+
+| Compression | Backup Time (ms) | Restore Time (ms) | Uncompressed size (MB) | Compressed size (MB) | Approx ratio |
+| ----------- | ---------------- | ----------------- | ---------------------- | -------------------- | ------------ |
+| None | 10927 | 7553 | 395 | 395 | 1:1 |
+| bzip2 | 25404 | 13886 | 395 | 67 | 5.9:1 |
+| gzip | 116281 | 3077 | 395 | 91 | 4.3:1 |
+| snappy | 8134 | 8341 | 395 | 166 | 2.4:1 |
+
+## Tagging of backup objects
+
+Barman 2.18 introduces support for tagging backup resources when saving them in
+object stores via `barman-cloud-backup` and `barman-cloud-wal-archive`. As a
+result, if your PostgreSQL container image includes Barman with version 2.18 or
+higher, {{name.ln}} enables you to specify tags as key-value pairs
+for backup objects, namely base backups, WAL files and history files.
+
+You can use two properties in the `.spec.backup.barmanObjectStore` definition:
+
+- `tags`: key-value pair tags to be added to backup objects and archived WAL
+  files in the backup object store
+- `historyTags`: key-value pair tags to be added to archived history files in
+ the backup object store
+
+The following YAML manifest excerpt provides an example of this
+feature:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+[...]
+spec:
+ backup:
+ barmanObjectStore:
+ [...]
+ tags:
+ backupRetentionPolicy: "expire"
+ historyTags:
+ backupRetentionPolicy: "keep"
+```
+
+## Extra options for the backup and WAL commands
+
+You can append additional options to the `barman-cloud-backup` and
+`barman-cloud-wal-archive` commands by using the `additionalCommandArgs`
+property in the `.spec.backup.barmanObjectStore.data` and
+`.spec.backup.barmanObjectStore.wal` sections, respectively. Each property is
+a list of strings that are appended to the corresponding command.
+
+For example, you can use `--read-timeout=60` to customize the connection
+reading timeout.
+
+For the complete list of options supported by the `barman-cloud-backup` and
+`barman-cloud-wal-archive` commands, refer to the
+[official Barman documentation](https://www.pgbarman.org/documentation/).
+
+If an option provided in `additionalCommandArgs` is already present among the
+declared options in its section (`.spec.backup.barmanObjectStore.data` or
+`.spec.backup.barmanObjectStore.wal`), the extra option is ignored.
+
+The following is an example of how to use this property:
+
+For backups:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+[...]
+spec:
+ backup:
+ barmanObjectStore:
+ [...]
+ data:
+ additionalCommandArgs:
+ - "--min-chunk-size=5MB"
+ - "--read-timeout=60"
+```
+
+For WAL files:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+[...]
+spec:
+ backup:
+ barmanObjectStore:
+ [...]
+ wal:
+ additionalCommandArgs:
+ - "--max-concurrency=1"
+ - "--read-timeout=60"
+```
+
+## Recovery from an object store
+
+You can recover from a backup created by Barman Cloud and stored on a supported
+object store. After you define the external cluster, including all the required
+configuration in the `barmanObjectStore` section, you need to reference it in
+the `.spec.recovery.source` option.
+
+This example defines a recovery object store in a blob container in Azure:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+ name: cluster-restore
+spec:
+ [...]
+
+ superuserSecret:
+ name: superuser-secret
+
+ bootstrap:
+ recovery:
+ source: clusterBackup
+
+ externalClusters:
+ - name: clusterBackup
+ barmanObjectStore:
+ destinationPath: https://STORAGEACCOUNTNAME.blob.core.windows.net/CONTAINERNAME/
+ azureCredentials:
+ storageAccount:
+ name: recovery-object-store-secret
+ key: storage_account_name
+ storageKey:
+ name: recovery-object-store-secret
+ key: storage_account_key
+ wal:
+ maxParallel: 8
+```
+
+The previous example assumes that the application database and its owning user
+are named `app` by default. If the PostgreSQL cluster being restored uses
+different names, you must specify these names before exiting the recovery phase,
+as documented in ["Configure the application database"](recovery.md#configure-the-application-database).
+
+!!! Important
+ By default, the `recovery` method strictly uses the `name` of the
+ cluster in the `externalClusters` section as the name of the main folder
+ of the backup data within the object store. This name is normally reserved
+ for the name of the server. You can specify a different folder name
+ using the `barmanObjectStore.serverName` property.
+
+!!! Note
+ This example takes advantage of the parallel WAL restore feature,
+ dedicating up to 8 jobs to concurrently fetch the required WAL files from the
+ archive. This feature can appreciably reduce the recovery time. Make sure that
+ you plan ahead for this scenario and correctly tune the value of this parameter
+ for your environment. It will make a difference when you need it, and you will.
diff --git a/product_docs/docs/postgres_for_kubernetes/1/backup_recovery.mdx b/product_docs/docs/postgres_for_kubernetes/1/backup_recovery.mdx
new file mode 100644
index 0000000000..b90f8c8a30
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/backup_recovery.mdx
@@ -0,0 +1,8 @@
+---
+title: 'Backup and Recovery'
+originalFilePath: 'src/backup_recovery.md'
+---
+
+
+
+[Backup](backup.md) and [recovery](recovery.md) are in two separate sections.
diff --git a/product_docs/docs/postgres_for_kubernetes/1/backup_volumesnapshot.mdx b/product_docs/docs/postgres_for_kubernetes/1/backup_volumesnapshot.mdx
new file mode 100644
index 0000000000..79c27cd877
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/backup_volumesnapshot.mdx
@@ -0,0 +1,405 @@
+---
+title: 'Appendix A - Backup on volume snapshots'
+originalFilePath: 'src/appendixes/backup_volumesnapshot.md'
+---
+
+
+
+!!! Important
+ Please refer to the official Kubernetes documentation for a list of all
+ the supported [Container Storage Interface (CSI) drivers](https://kubernetes-csi.github.io/docs/drivers.html)
+ that provide snapshotting capabilities.
+
+{{name.ln}} is one of the first known database operators to directly
+leverage the Kubernetes-native Volume Snapshot API for both
+backup and recovery operations, in an entirely declarative way.
+
+## About standard Volume Snapshots
+
+Volume snapshotting was first introduced in
+[Kubernetes 1.12 (2018) as alpha](https://kubernetes.io/blog/2018/10/09/introducing-volume-snapshot-alpha-for-kubernetes/),
+promoted to [beta in 1.17 (2019)](https://kubernetes.io/blog/2019/12/09/kubernetes-1-17-feature-cis-volume-snapshot-beta/),
+and [moved to GA in 1.20 (2020)](https://kubernetes.io/blog/2020/12/10/kubernetes-1.20-volume-snapshot-moves-to-ga/).
+It’s now stable, widely available, and standard, providing three custom
+resource definitions: `VolumeSnapshot`, `VolumeSnapshotContent`, and
+`VolumeSnapshotClass`.
+
+This Kubernetes feature defines a generic interface for:
+
+- the creation of a new volume snapshot, starting from a PVC
+- the deletion of an existing snapshot
+- the creation of a new volume from a snapshot
+
+Kubernetes delegates the actual implementation to the underlying CSI drivers
+(not all of them support volume snapshots). Normally, storage classes that
+provide volume snapshotting support **incremental and differential block level
+backup in a transparent way for the application**, delegating the complexity
+and the independent management down the stack, including cross-cluster
+availability of the snapshots.
+
+## Requirements
+
+For Volume Snapshots to work with a {{name.ln}} cluster, you need to ensure
+that each storage class used to dynamically provision the PostgreSQL volumes
+(namely, the `storage` and `walStorage` sections) supports volume snapshots.
+
+Given that instructions vary from storage class to storage class, please
+refer to the documentation of the specific storage class and related CSI
+drivers you have deployed in your Kubernetes system.
+
+Normally, it is the [`VolumeSnapshotClass`](https://kubernetes.io/docs/concepts/storage/volume-snapshot-classes/)
+that is responsible for ensuring that snapshots can be taken from persistent
+volumes of a given storage class, and managed as `VolumeSnapshot` and
+`VolumeSnapshotContent` resources.
+
+!!! Important
+    It is your responsibility to verify with the third-party vendor
+ that volume snapshots are supported. {{name.ln}} only interacts
+ with the Kubernetes API on this matter, and we cannot support issues
+ at the storage level for each specific CSI driver.
+
+## How to configure Volume Snapshot backups
+
+{{name.ln}} allows you to configure a given Postgres cluster for Volume
+Snapshot backups through the `backup.volumeSnapshot` stanza.
+
+!!! Info
+ Please refer to [`VolumeSnapshotConfiguration`](pg4k.v1.md#postgresql-k8s-enterprisedb-io-v1-VolumeSnapshotConfiguration)
+ in the API reference for a full list of options.
+
+A generic example with volume snapshots (assuming that PGDATA and WALs share
+the same storage class) is the following:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+ name: snapshot-cluster
+spec:
+ instances: 3
+
+ storage:
+ storageClass: @STORAGE_CLASS@
+ size: 10Gi
+ walStorage:
+ storageClass: @STORAGE_CLASS@
+ size: 10Gi
+
+ backup:
+ # Volume snapshot backups
+ volumeSnapshot:
+ className: @VOLUME_SNAPSHOT_CLASS_NAME@
+
+ plugins:
+ - name: barman-cloud.cloudnative-pg.io
+ isWALArchiver: true
+ parameters:
+ barmanObjectName: @OBJECTSTORE_NAME@
+```
+
+As you can see, the `backup` section contains both the `volumeSnapshot` stanza
+(controlling physical base backups on volume snapshots) and the
+`plugins` one (controlling the [WAL archive](wal_archiving.md)).
+
+!!! Info
+ Once you have defined the `plugin`, you can decide to use
+ both volume snapshot and plugin backup strategies simultaneously
+ to take physical backups.
+
+The `volumeSnapshot.className` option allows you to reference the default
+`VolumeSnapshotClass` object used for all the storage volumes you have
+defined in your PostgreSQL cluster.
+
+!!! Info
+ In case you are using a different storage class for `PGDATA` and
+ WAL files, you can specify a separate `VolumeSnapshotClass` for
+ that volume through the `walClassName` option (which defaults to
+ the same value as `className`).
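+
+For instance, a cluster using different storage classes for `PGDATA` and WALs
+could reference a distinct snapshot class for each volume (storage and
+snapshot class names below are illustrative):
+
+```yaml
+  # ...
+  storage:
+    storageClass: fast-ssd
+    size: 10Gi
+  walStorage:
+    storageClass: standard
+    size: 10Gi
+  backup:
+    volumeSnapshot:
+      # Snapshot class for the PGDATA volume
+      className: fast-ssd-snapclass
+      # Snapshot class for the WAL volume
+      walClassName: standard-snapclass
+  # ...
+```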
+
+Once a cluster is defined for volume snapshot backups, you need to define
+a `ScheduledBackup` resource that requests such backups on a periodic basis.
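+
+A minimal `ScheduledBackup` for the cluster above could look like the
+following, requesting a volume snapshot backup every day at midnight (the
+schedule uses the six-field cron format with seconds):
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: ScheduledBackup
+metadata:
+  name: snapshot-cluster-daily
+spec:
+  schedule: "0 0 0 * * *"
+  method: volumeSnapshot
+  cluster:
+    name: snapshot-cluster
+```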
+
+## Hot and cold backups
+
+!!! Warning
+ As noted in the [backup document](backup.md), a cold snapshot explicitly
+ set to target the primary will result in the primary being fenced for
+ the duration of the backup, making the cluster read-only during this
+ period. For safety, in a cluster already containing fenced instances, a cold
+ snapshot is rejected.
+
+By default, {{name.ln}} requests an online/hot backup on volume snapshots, using the
+[PostgreSQL defaults of the low-level API for base backups](https://www.postgresql.org/docs/current/continuous-archiving.html#BACKUP-LOWLEVEL-BASE-BACKUP):
+
+- it doesn't request an immediate checkpoint when starting the backup procedure
+- it waits for the WAL archiver to archive the last segment of the backup when
+ terminating the backup procedure
+
+!!! Important
+ The default values are suitable for most production environments. Hot
+ backups are consistent and can be used to perform snapshot recovery, as we
+ ensure WAL retention from the start of the backup through a temporary
+ replication slot. However, our recommendation is to rely on cold backups for
+ that purpose.
+
+You can explicitly change the default behavior through the following options in
+the `.spec.backup.volumeSnapshot` stanza of the `Cluster` resource:
+
+- `online`: accepting `true` (default) or `false` as a value
+- `onlineConfiguration.immediateCheckpoint`: whether you want to request an
+ immediate checkpoint before you start the backup procedure or not;
+ technically, it corresponds to the `fast` argument you pass to the
+ `pg_backup_start`/`pg_start_backup()` function in PostgreSQL, accepting
+ `true` (default) or `false`
+- `onlineConfiguration.waitForArchive`: whether you want to wait for the
+ archiver to process the last segment of the backup or not; technically, it
+ corresponds to the `wait_for_archive` argument you pass to the
+ `pg_backup_stop`/`pg_stop_backup()` function in PostgreSQL, accepting `true`
+ (default) or `false`
+
+If you want to change the default behavior of your Postgres cluster to take
+cold backups by default, all you need to do is add the `online: false` option
+to your manifest, as follows:
+
+```yaml
+ # ...
+ backup:
+ volumeSnapshot:
+ online: false
+ # ...
+```
+
+If you are instead requesting an immediate checkpoint as the default behavior,
+you can add this section:
+
+```yaml
+ # ...
+ backup:
+ volumeSnapshot:
+ online: true
+ onlineConfiguration:
+ immediateCheckpoint: true
+ # ...
+```
+
+### Overriding the default behavior
+
+You can change the default behavior defined in the cluster resource by setting
+different values for `online` and, if needed, `onlineConfiguration` in the `Backup` or `ScheduledBackup` objects.
+
+For example, to issue an on-demand cold backup, you can
+create a `Backup` object with `.spec.online: false`:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Backup
+metadata:
+ name: snapshot-cluster-cold-backup-example
+spec:
+ cluster:
+ name: snapshot-cluster
+ method: volumeSnapshot
+ online: false
+```
+
+Similarly, for a `ScheduledBackup`:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: ScheduledBackup
+metadata:
+ name: snapshot-cluster-cold-backup-example
+spec:
+ schedule: "0 0 0 * * *"
+ backupOwnerReference: self
+ cluster:
+ name: snapshot-cluster
+ method: volumeSnapshot
+ online: false
+```
+
+## Persistence of volume snapshot objects
+
+By default, `VolumeSnapshot` objects created by {{name.ln}} are retained after
+deleting the `Backup` object that originated them, or the `Cluster` they refer to.
+Such behavior is controlled by the `.spec.backup.volumeSnapshot.snapshotOwnerReference`
+option which accepts the following values:
+
+- `none`: no ownership is set, meaning that `VolumeSnapshot` objects persist
+ after the `Backup` and/or the `Cluster` resources are removed
+- `backup`: the `VolumeSnapshot` object is owned by the `Backup` resource that
+ originated it, and when the backup object is removed, the volume snapshot is
+ also removed
+- `cluster`: the `VolumeSnapshot` object is owned by the `Cluster` resource that
+ is backed up, and when the Postgres cluster is removed, the volume snapshot is
+ also removed
+
+When a `VolumeSnapshot` is deleted, the `deletionPolicy` specified in the
+corresponding `VolumeSnapshotContent` is evaluated:
+
+- if set to `Retain`, the `VolumeSnapshotContent` object is kept
+- if set to `Delete`, the `VolumeSnapshotContent` object is removed as well
+
+!!! Warning
+    `VolumeSnapshotContent` objects do not keep all the information regarding the
+    backup and the cluster they refer to (such as the annotations and labels
+    contained in the `VolumeSnapshot` object). Although possible, restoring
+    from just this kind of object might not be straightforward. For this reason,
+    our recommendation is to always back up the `VolumeSnapshot` definitions,
+    for example by using a Kubernetes-level data protection solution.
+
+The value in `VolumeSnapshotContent` is determined by the `deletionPolicy` set
+in the corresponding `VolumeSnapshotClass` definition, which is
+referenced in the `.spec.backup.volumeSnapshot.className` option.
+
+Please refer to the [Kubernetes documentation on Volume Snapshot Classes](https://kubernetes.io/docs/concepts/storage/volume-snapshot-classes/)
+for details on this standard behavior.
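+
+For reference, the `deletionPolicy` is set in the `VolumeSnapshotClass`
+definition, as in this minimal sketch (the driver name is illustrative):
+
+```yaml
+apiVersion: snapshot.storage.k8s.io/v1
+kind: VolumeSnapshotClass
+metadata:
+  name: my-snapshot-class
+# CSI driver providing the snapshot capability
+driver: example.csi.k8s.io
+# Keep the VolumeSnapshotContent when the VolumeSnapshot is deleted
+deletionPolicy: Retain
+```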
+
+## Backup Volume Snapshot Deadlines
+
+{{name.ln}} supports backups using the volume snapshot method. In some
+environments, volume snapshots may encounter temporary issues that can be
+retried.
+
+The `backup.k8s.enterprisedb.io/volumeSnapshotDeadline` annotation defines how long
+{{name.ln}} should continue retrying recoverable errors before marking the
+backup as failed.
+
+You can add the `backup.k8s.enterprisedb.io/volumeSnapshotDeadline` annotation to both
+`Backup` and `ScheduledBackup` resources. For `ScheduledBackup` resources, this
+annotation is automatically inherited by any `Backup` resources created from
+the schedule.
+
+If not specified, the default retry deadline is **10 minutes**.
+
+### Error Handling
+
+When a retryable error occurs during a volume snapshot operation:
+
+1. {{name.ln}} records the time of the first error.
+2. The system retries the operation every **10 seconds**.
+3. If the error persists beyond the specified deadline (or the default 10
+ minutes), the backup is marked as **failed**.
+
+### Retryable Errors
+
+{{name.ln}} treats the following types of errors as retryable:
+
+- **Server timeout errors** (HTTP 408, 429, 500, 502, 503, 504)
+- **Conflicts** (optimistic locking errors)
+- **Internal errors**
+- **Context deadline exceeded errors**
+- **Timeout errors from the CSI snapshot controller**
+
+### Examples
+
+You can add the annotation to a `ScheduledBackup` resource as follows:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: ScheduledBackup
+metadata:
+ name: daily-backup-schedule
+ annotations:
+ backup.k8s.enterprisedb.io/volumeSnapshotDeadline: "20"
+spec:
+ schedule: "0 0 * * *"
+ backupOwnerReference: self
+ method: volumeSnapshot
+ # other configuration...
+```
+
+When you define a `ScheduledBackup` with the annotation, any `Backup` resources
+created from this schedule automatically inherit the specified timeout value.
+
+In the following example, all backups created from the schedule will have a
+30-minute timeout for retrying recoverable snapshot errors.
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: ScheduledBackup
+metadata:
+ name: weekly-backup
+ annotations:
+ backup.k8s.enterprisedb.io/volumeSnapshotDeadline: "30"
+spec:
+ schedule: "0 0 * * 0" # Weekly backup on Sunday
+ method: volumeSnapshot
+ cluster:
+ name: my-postgresql-cluster
+```
+
+Alternatively, you can add the annotation directly to a `Backup` resource:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Backup
+metadata:
+ name: my-backup
+ annotations:
+ backup.k8s.enterprisedb.io/volumeSnapshotDeadline: "15"
+spec:
+ method: volumeSnapshot
+ # other backup configuration...
+```
+
+## Example of Volume Snapshot Backup
+
+The following example shows how to configure volume snapshot base backups on an
+EKS cluster on AWS using the `ebs-sc` storage class and the `csi-aws-vsc`
+volume snapshot class.
+
+!!! Important
+ If you are interested in testing the example, please read
+ ["Volume Snapshots" for the Amazon Elastic Block Store (EBS) CSI driver](https://github.com/kubernetes-sigs/aws-ebs-csi-driver/tree/master/examples/kubernetes/snapshot)
+ for detailed instructions on the installation process for the storage class and the snapshot class.
+
+The following manifest creates a `Cluster` that is ready to be used for volume
+snapshots and that stores the WAL archive in an S3 bucket via IAM Role for
+Service Accounts (IRSA; see [AWS S3](object_stores.md#aws-s3)):
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+ name: hendrix
+spec:
+ instances: 3
+
+ storage:
+ storageClass: ebs-sc
+ size: 10Gi
+ walStorage:
+ storageClass: ebs-sc
+ size: 10Gi
+
+ backup:
+ volumeSnapshot:
+ className: csi-aws-vsc
+
+ plugins:
+ - name: barman-cloud.cloudnative-pg.io
+ isWALArchiver: true
+ parameters:
+ barmanObjectName: @OBJECTSTORE_NAME@
+
+ serviceAccountTemplate:
+ metadata:
+ annotations:
+ eks.amazonaws.com/role-arn: "@ARN@"
+---
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: ScheduledBackup
+metadata:
+ name: hendrix-vs-backup
+spec:
+ cluster:
+ name: hendrix
+ method: volumeSnapshot
+ schedule: '0 0 0 * * *'
+ backupOwnerReference: cluster
+ immediate: true
+```
+
+The last resource defines daily volume snapshot backups at midnight, requesting
+one immediately after the cluster is created.
diff --git a/product_docs/docs/postgres_for_kubernetes/1/before_you_start.mdx b/product_docs/docs/postgres_for_kubernetes/1/before_you_start.mdx
new file mode 100644
index 0000000000..23ef332baa
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/before_you_start.mdx
@@ -0,0 +1,163 @@
+---
+title: 'Before You Start'
+originalFilePath: 'src/before_you_start.md'
+---
+
+
+
+Before we get started, it is essential to go over some terminology that is
+specific to Kubernetes and PostgreSQL.
+
+## Kubernetes terminology
+
+[Node](https://kubernetes.io/docs/concepts/architecture/nodes/)
+: A *node* is a worker machine in Kubernetes, either virtual or physical, where
+ all services necessary to run pods are managed by the control plane node(s).
+
+[Postgres Node](architecture.md#reserving-nodes-for-postgresql-workloads)
+: A *Postgres node* is a Kubernetes worker node dedicated to running PostgreSQL
+ workloads. This is achieved by applying the `node-role.kubernetes.io` label and
+ taint, as [proposed by {{name.ln}}](architecture.md#reserving-nodes-for-postgresql-workloads).
+ It is also referred to as a `postgres` node.
+
+[Pod](https://kubernetes.io/docs/concepts/workloads/pods/pod/)
+: A *pod* is the smallest computing unit that can be deployed in a Kubernetes
+ cluster and is composed of one or more containers that share network and
+ storage.
+
+[Service](https://kubernetes.io/docs/concepts/services-networking/service/)
+: A *service* is an abstraction that exposes an application running on a group
+  of pods as a network service, and standardizes important features such as
+  service discovery across applications, load balancing, failover, and so on.
+
+[Secret](https://kubernetes.io/docs/concepts/configuration/secret/)
+: A *secret* is an object that is designed to store small amounts of sensitive
+ data such as passwords, access keys, or tokens, and use them in pods.
+
+[Storage Class](https://kubernetes.io/docs/concepts/storage/storage-classes/)
+: A *storage class* allows an administrator to define the classes of storage in
+ a cluster, including provisioner (such as AWS EBS), reclaim policies, mount
+ options, volume expansion, and so on.
+
+[Persistent Volume](https://kubernetes.io/docs/concepts/storage/persistent-volumes/)
+: A *persistent volume* (PV) is a resource in a Kubernetes cluster that
+ represents storage that has been either manually provisioned by an
+ administrator or dynamically provisioned by a *storage class* controller. A PV
+ is associated with a pod using a *persistent volume claim* and its lifecycle is
+ independent of any pod that uses it. Normally, a PV is a network volume,
+ especially in the public cloud. A [*local persistent volume*
+ (LPV)](https://kubernetes.io/docs/concepts/storage/volumes/#local) is a
+ persistent volume that exists only on the particular node where the pod that
+ uses it is running.
+
+[Persistent Volume Claim](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims)
+: A *persistent volume claim* (PVC) represents a request for storage, which
+ might include size, access mode, or a particular storage class. Similar to how
+ a pod consumes node resources, a PVC consumes the resources of a PV.
+
+[Namespace](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/)
+: A *namespace* is a logical and isolated subset of a Kubernetes cluster and
+ can be seen as a *virtual cluster* within the wider physical cluster.
+ Namespaces allow administrators to create separated environments based on
+ projects, departments, teams, and so on.
+
+[RBAC](https://kubernetes.io/docs/reference/access-authn-authz/rbac/)
+: *Role Based Access Control* (RBAC), also known as *role-based security*, is a
+ method used in computer systems security to restrict access to the network and
+ resources of a system to authorized users only. Kubernetes has a native API to
+ control roles at the namespace and cluster level and associate them with
+ specific resources and individuals.
+
+[CRD](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/)
+: A *custom resource definition* (CRD) is an extension of the Kubernetes API
+ and allows developers to create new data types and objects, *called custom
+ resources*.
+
+[Operator](https://kubernetes.io/docs/concepts/extend-kubernetes/operator/)
+: An *operator* is a software extension to Kubernetes that uses custom
+  resources to automate the steps that are normally performed by a human
+  operator when managing one or more applications or given services. An
+  operator assists Kubernetes in making sure that the resource's defined state
+  always matches the observed one.
+[`kubectl`](https://kubernetes.io/docs/reference/kubectl/overview/)
+: `kubectl` is the command-line tool used to manage a Kubernetes cluster.
+
+{{name.ln}} requires a Kubernetes version supported by the community. Please refer to the
+["Supported releases"](https://www.enterprisedb.com/resources/platform-compatibility#pgk8s) page for details.
+
+## PostgreSQL terminology
+
+Instance
+: A Postgres server process running and listening on a given pair of IP
+  address(es) and TCP port (usually 5432).
+
+Primary
+: A PostgreSQL instance that can accept both read and write operations.
+
+Replica
+: A PostgreSQL instance that replicates from the only primary instance in a
+  cluster and is kept updated by reading a stream of Write-Ahead Log (WAL)
+  records. A replica is also known as a *standby* or *secondary* server.
+  PostgreSQL relies on physical streaming replication (async/sync) and
+  file-based log shipping (async).
+
+Hot Standby
+: PostgreSQL feature that allows a *replica* to accept read-only workloads.
+
+Cluster
+: Intended here as a High Availability (HA) cluster: a set of PostgreSQL
+  instances made up of a single primary and an optional, arbitrary number of
+  replicas.
+
+Replica Cluster
+: A {{name.ln}} `Cluster` that is in continuous recovery mode from a selected
+ PostgreSQL cluster, normally residing outside the Kubernetes cluster. It is a
+ feature that enables multi-cluster deployments in private, public, hybrid, and
+ multi-cloud contexts.
+
+Designated Primary
+: A PostgreSQL standby instance in a replica cluster that is in continuous
+ recovery from another PostgreSQL cluster and that is designated to become
+ primary in case the replica cluster becomes primary.
+
+Superuser
+: In PostgreSQL a *superuser* is any role with both `LOGIN` and `SUPERUSER`
+ privileges. For security reasons, {{name.ln}} performs administrative tasks
+ by connecting to the `postgres` database as the `postgres` user via `peer`
+ authentication over the local Unix Domain Socket.
+
+[WAL](https://www.postgresql.org/docs/current/wal-intro.html)
+: Write-Ahead Logging (WAL) is a standard method for ensuring data integrity in
+ database management systems.
+
+PVC group
+: A PVC group in {{name.ln}} terminology is a group of related PVCs
+ belonging to the same PostgreSQL instance, namely the main volume containing
+ the PGDATA (`storage`) and the volume for WALs (`walStorage`).
+
+RTO
+: Acronym for "recovery time objective", the amount of time a system can be
+ unavailable without adversely impacting the application.
+
+RPO
+: Acronym for "recovery point objective", a calculation of the level of
+ acceptable data loss following a disaster recovery scenario.
+
+## Cloud terminology
+
+Region
+: A *region* in the Cloud is an isolated and independent geographic area
+ organized in *availability zones*. Zones within a region have very little
+ round-trip network latency.
+
+Zone
+: An *availability zone* in the Cloud (also known as *zone*) is an area in a
+ region where resources can be deployed. Usually, an availability zone
+ corresponds to a data center or an isolated building of the same data center.
+
+## What to do next
+
+Now that you have familiarized yourself with the terminology, you can
+[test {{name.ln}} on your laptop using a local cluster](quickstart.md) before
+deploying the operator in your selected cloud environment.
diff --git a/product_docs/docs/postgres_for_kubernetes/1/benchmarking.mdx b/product_docs/docs/postgres_for_kubernetes/1/benchmarking.mdx
new file mode 100644
index 0000000000..40919e52c2
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/benchmarking.mdx
@@ -0,0 +1,209 @@
+---
+title: 'Benchmarking'
+originalFilePath: 'src/benchmarking.md'
+---
+
+
+
+The CNP kubectl plugin provides an easy way to benchmark a PostgreSQL deployment in Kubernetes using {{name.ln}}.
+
+Benchmarking is focused on two aspects:
+
+- the **database**, by relying on [pgbench](https://www.postgresql.org/docs/current/pgbench.html)
+- the **storage**, by relying on [fio](https://fio.readthedocs.io/en/latest/fio_doc.html)
+
+!!! Important
+ `pgbench` and `fio` must be run in a staging or pre-production environment.
+ Do not use these plugins in a production environment, as it might have
+ catastrophic consequences on your databases and the other
+ workloads/applications that run in the same shared environment.
+
+### pgbench
+
+The `kubectl` CNP plugin command `pgbench` executes a user-defined `pgbench` job
+against an existing Postgres Cluster.
+
+Through the `--dry-run` flag you can generate the manifest of the job for later
+modification/execution.
+
+A common command structure with `pgbench` is the following:
+
+```shell
+kubectl cnp pgbench \
+  -n <namespace> \
+  --job-name <job-name> \
+  --db-name <db-name> \
+  <cluster-name> \
+  -- <pgbench options>
+```
+
+!!! Important
+ Please refer to the [`pgbench` documentation](https://www.postgresql.org/docs/current/pgbench.html)
+ for information about the specific options to be used in your jobs.
+
+This example creates a job called `pgbench-init` that initializes the `app`
+database in a `Cluster` named `cluster-example` for `pgbench` OLTP-like
+workloads, using a scale factor of 1000:
+
+```shell
+kubectl cnp pgbench \
+ --job-name pgbench-init \
+ cluster-example \
+ -- --initialize --scale 1000
+```
+
+!!! Note
+    This will generate a database with 100,000,000 records, taking
+    approximately 13 GB of space on disk.
+
+You can see the progress of the job with:
+
+```shell
+kubectl logs jobs/pgbench-init
+```
+
+The following example creates a job called `pgbench-run` executing `pgbench`
+against the previously initialized database for 30 seconds, using a single
+connection:
+
+```shell
+kubectl cnp pgbench \
+ --job-name pgbench-run \
+ cluster-example \
+ -- --time 30 --client 1 --jobs 1
+```
+
+The next example runs `pgbench` against an existing database by using the
+`--db-name` flag and the `pgbench` namespace:
+
+```shell
+kubectl cnp pgbench \
+ --db-name pgbench \
+ --job-name pgbench-job \
+ cluster-example \
+ -- --time 30 --client 1 --jobs 1
+```
+
+By default, jobs do not expire. You can enable automatic deletion with the
+`--ttl` flag. The job will be deleted after the specified duration (in seconds).
+
+```shell
+kubectl cnp pgbench \
+ --job-name pgbench-run \
+ --ttl 600 \
+ cluster-example \
+ -- --time 30 --client 1 --jobs 1
+```
+
+If you want to run a `pgbench` job on a specific worker node, you can use the
+`--node-selector` option. For example, to run the previous initialization job
+on a node labeled `workload=pgbench`:
+
+```shell
+kubectl cnp pgbench \
+ --db-name pgbench \
+ --job-name pgbench-init \
+ --node-selector workload=pgbench \
+ cluster-example \
+ -- --initialize --scale 1000
+```
+
+The job status can be fetched by running:
+
+```
+kubectl get job/pgbench-job -n <namespace>
+
+NAME          COMPLETIONS   DURATION   AGE
+pgbench-job   1/1           15s        41s
+```
+
+Once the job is completed, the results can be gathered by executing:
+
+```
+kubectl logs job/pgbench-job -n <namespace>
+```
+
+### fio
+
+The kubectl CNP plugin command `fio` executes a fio job with default values
+and read operations.
+Through the `--dry-run` flag you can generate the manifest of the job for later
+modification/execution.
+
+!!! Note
+ The kubectl plugin command `fio` will create a deployment with predefined
+ fio job values using a ConfigMap. If you want to provide custom job values, we
+ recommend generating a manifest using the `--dry-run` flag and providing your
+ custom job values in the generated ConfigMap.
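+
+As a reference, a customized ConfigMap would carry a standard fio job file.
+The following is a minimal sketch: the ConfigMap name, the data key, and the
+job values are assumptions, so always start from the manifest generated with
+`--dry-run`:
+
+```yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: fio-job-config
+data:
+  # A standard fio job file; 8k blocks emulate a PostgreSQL workload
+  job: |
+    [global]
+    ioengine=libaio
+    direct=1
+    bs=8k
+    runtime=60
+    time_based
+    [random-read]
+    rw=randread
+    size=1G
+```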
+
+Example of default usage:
+
+```shell
+kubectl cnp fio
+```
+
+Example with custom values:
+
+```shell
+kubectl cnp fio <fio-job-name> \
+  -n <namespace> \
+  --storageClass <storage-class-name> \
+  --pvcSize <pvc-size>
+```
+
+Example of how to run the `fio` command against a `StorageClass` named
+`standard` and `pvcSize: 2Gi` in the `fio` namespace:
+
+```shell
+kubectl cnp fio fio-job \
+ -n fio \
+ --storageClass standard \
+ --pvcSize 2Gi
+```
+
+The deployment status can be fetched by running:
+
+```shell
+kubectl get deployment/fio-job -n fio
+
+NAME      READY   UP-TO-DATE   AVAILABLE   AGE
+fio-job   1/1     1            1           14s
+```
+
+Running the `fio` plugin command will:
+
+1. Create a PVC
+2. Create a ConfigMap representing the configuration of a fio job
+3. Create a fio deployment composed of a single Pod, which will run fio on
+   the PVC, create graphs after completing the benchmark, and start serving the
+   generated files with a web server. We use the
+   [`fio-tools`](https://github.com/wallnerryan/fio-tools) image for that.
+
+The Pod created by the deployment will be ready when it starts serving the
+results. You can forward the port of the Pod created by the deployment:
+
+```
+kubectl port-forward -n <namespace> deployment/<deployment-name> 8000
+```
+
+and then use a browser and connect to `http://localhost:8000/` to get the data.
+
+The default 8k block size has been chosen to emulate a PostgreSQL workload.
+Disks that cap the amount of available IOPS can show very different throughput
+values when changing this parameter.
+
+Below is an example diagram of sequential writes on a local disk
+mounted on a dedicated Kubernetes node
+(1 hour benchmark):
+
+
+
+After all testing is done, the fio deployment and its resources can be deleted with:
+
+```shell
+kubectl cnp fio --dry-run | kubectl delete -f -
+```
+
+Make sure to use the same name that was used to create the fio deployment, and add the namespace if applicable.
diff --git a/product_docs/docs/postgres_for_kubernetes/1/bootstrap.mdx b/product_docs/docs/postgres_for_kubernetes/1/bootstrap.mdx
new file mode 100644
index 0000000000..ea5469111c
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/bootstrap.mdx
@@ -0,0 +1,784 @@
+---
+title: 'Bootstrap'
+originalFilePath: 'src/bootstrap.md'
+---
+
+
+
+!!! Note
+ When referring to "PostgreSQL cluster" in this section, the same
+ concepts apply to both PostgreSQL and EDB Postgres Advanced Server, unless
+ differently stated.
+
+This section describes the options available to create a new
+PostgreSQL cluster and the design rationale behind them.
+There are primarily two ways to bootstrap a new cluster:
+
+- from scratch (`initdb`)
+- from an existing PostgreSQL cluster, either directly (`pg_basebackup`)
+ or indirectly through a physical base backup (`recovery`)
+
+The `initdb` bootstrap also provides the option to import one or more
+databases from an existing PostgreSQL cluster, even if it's outside
+Kubernetes or running a different major version of PostgreSQL.
+For more detailed information about this feature, please refer to the
+["Importing Postgres databases"](database_import.md) section.
+
+!!! Important
+ Bootstrapping from an existing cluster enables the creation of a
+ **replica cluster**—an independent PostgreSQL cluster that remains in
+ continuous recovery, stays synchronized with the source cluster, and
+ accepts read-only connections.
+ For more details, refer to the [Replica Cluster section](replica_cluster.md).
+
+!!! Warning
+ {{name.ln}} requires both the `postgres` user and database to
+ always exist. Using the local Unix Domain Socket, it needs to connect
+ as the `postgres` user to the `postgres` database via `peer` authentication in
+ order to perform administrative tasks on the cluster.
+ **DO NOT DELETE** the `postgres` user or the `postgres` database!!!
+
+!!! Info
+ {{name.ln}} is gradually introducing support for
+ [Kubernetes' native `VolumeSnapshot` API](https://github.com/cloudnative-pg/cloudnative-pg/issues/2081)
+ for both incremental and differential copy in backup and recovery
+ operations - if supported by the underlying storage classes.
+ Please see ["Recovery from Volume Snapshot objects"](recovery.md#recovery-from-volumesnapshot-objects)
+ for details.
+
+## The `bootstrap` section
+
+The *bootstrap* method can be defined in the `bootstrap` section of the cluster
+specification. {{name.ln}} currently supports the following bootstrap methods:
+
+- `initdb`: initialize a new PostgreSQL cluster (default)
+- `recovery`: create a PostgreSQL cluster by restoring from a base backup of an
+ existing cluster and, if needed, replaying all the available WAL files or up to
+ a given *point in time*
+- `pg_basebackup`: create a PostgreSQL cluster by cloning an existing one of
+ the same major version using `pg_basebackup` through the streaming
+ replication protocol. This method is particularly useful for migrating
+ databases to {{name.ln}}, although meeting all requirements can be
+ challenging. Be sure to review the warnings in the
+ [`pg_basebackup` subsection](#bootstrap-from-a-live-cluster-pg_basebackup)
+ carefully.
+
+Only one bootstrap method can be specified in the manifest.
+Attempting to define multiple bootstrap methods will result in validation errors.
+
+In contrast to the `initdb` method, both `recovery` and `pg_basebackup`
+create a new cluster based on another one (either offline or online) and can be
+used to spin up replica clusters. They both rely on the definition of external
+clusters.
+Refer to the [replica cluster section](replica_cluster.md) for more information.
+
+Given the amount of possible backup methods and combinations of backup
+storage that the {{name.ln}} operator provides for `recovery`, please refer to
+the dedicated ["Recovery" section](recovery.md) for guidance on each method.
+
+!!! Seealso "API reference"
+    Please refer to the ["API reference for the `bootstrap` section"](pg4k.v1.md#postgresql-k8s-enterprisedb-io-v1-BootstrapConfiguration)
+ for more information.
+
+## The `externalClusters` section
+
+The `externalClusters` section of the cluster manifest can be used to configure
+access to one or more PostgreSQL clusters as *sources*.
+The primary use cases include:
+
+1. **Importing Databases:** Specify an external source to be utilized during
+ the [importation of databases](database_import.md) via logical backup and
+ restore, as part of the `initdb` bootstrap method.
+2. **Cross-Region Replication:** Define a cross-region PostgreSQL cluster
+ employing physical replication, capable of extending across distinct Kubernetes
+ clusters or traditional VM/bare-metal environments.
+3. **Recovery from Physical Base Backup:** Recover, fully or at a
+ given Point-In-Time, a PostgreSQL cluster by referencing a physical base
+ backup.
+
+!!! Info
+ Ongoing development will extend the functionality of `externalClusters` to
+ accommodate additional use cases, such as logical replication and foreign
+ servers in future releases.
+
+As far as bootstrapping is concerned, `externalClusters` can be used
+to define the source PostgreSQL cluster for either the `pg_basebackup`
+method or the `recovery` one. An external cluster needs to have:
+
+- a name that identifies the external cluster, to be used as a reference via the
+ `source` option
+- at least one of the following:
+
+ - information about streaming connection
+ - information about the **recovery object store**, which is a Barman Cloud
+ compatible object store that contains:
+ - the WAL archive (required for Point In Time Recovery)
+ - the catalog of physical base backups for the Postgres cluster
+
+!!! Note
+ A recovery object store is normally an AWS S3, Azure Blob Storage,
+ or Google Cloud Storage source that is managed by Barman Cloud.
+
+When only the streaming connection is defined, the source can be used for the
+`pg_basebackup` method. When only the recovery object store is defined, the
+source can be used for the `recovery` method. When both are defined, any of
+the two bootstrap methods can be chosen. The following table summarizes your
+options:
+
+| Content of externalClusters | pg_basebackup | recovery |
+| :-------------------------- | :-----------: | :------: |
+| Only streaming | ✓ | |
+| Only object store | | ✓ |
+| Streaming and object store | ✓ | ✓ |
+
+Furthermore, in the case of `pg_basebackup` or full `recovery` up to the latest
+available point in time, the cluster is eligible for replica cluster mode. This
+means that the cluster is continuously fed from the source, either via
+streaming, via WAL shipping through PostgreSQL's `restore_command`, or both.
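+
+As an illustrative sketch (names, host, and paths are hypothetical), an
+`externalClusters` entry that provides both a streaming connection and a
+recovery object store, using the in-tree Barman Cloud configuration, might
+look like this:
+
+```yaml
+externalClusters:
+  - name: origin
+    # Streaming connection, enabling the pg_basebackup bootstrap method
+    connectionParameters:
+      host: origin.example.com
+      user: streaming_replica
+    # Recovery object store, enabling the recovery bootstrap method
+    barmanObjectStore:
+      destinationPath: s3://backups/origin
+      s3Credentials:
+        accessKeyId:
+          name: origin-backup-creds
+          key: ACCESS_KEY_ID
+        secretAccessKey:
+          name: origin-backup-creds
+          key: SECRET_ACCESS_KEY
+```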
+
+!!! Seealso "API reference"
+    Please refer to the ["API reference for the `externalClusters` section"](pg4k.v1.md#postgresql-k8s-enterprisedb-io-v1-ExternalCluster)
+ for more information.
+
+### Password files
+
+Whenever a password is supplied within an `externalClusters` entry,
+{{name.ln}} autonomously manages a [PostgreSQL password file](https://www.postgresql.org/docs/current/libpq-pgpass.html)
+for it, residing at `/controller/external/NAME/pgpass` in each instance.
+
+This approach enables {{name.ln}} to securely establish connections with an
+external server without exposing any passwords in the connection string.
+Instead, the connection safely references the aforementioned file through the
+`passfile` connection parameter.
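+
+The file follows the standard libpq password file format, with one line per
+connection in the form `hostname:port:database:username:password`. For
+example (host and password below are placeholders):
+
+```
+source-db.foo.com:5432:*:streaming_replica:mypassword
+```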
+
+## Bootstrap an empty cluster (`initdb`)
+
+The `initdb` bootstrap method is used to create a new PostgreSQL cluster from
+scratch. It is the default one unless specified differently.
+
+The following example contains the full structure of the `initdb`
+configuration:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+ name: cluster-example-initdb
+spec:
+ instances: 3
+
+ bootstrap:
+ initdb:
+ database: app
+ owner: app
+ secret:
+ name: app-secret
+
+ storage:
+ size: 1Gi
+```
+
+The above example of bootstrap will:
+
+1. create a new `PGDATA` folder using PostgreSQL's native `initdb` command
+2. create an *unprivileged* user named `app`
+3. set the password of the latter (`app`) using the one in the `app-secret`
+   secret (make sure that the `username` in the secret matches the `owner`)
+4. create a database called `app` owned by the `app` user.
+
+Thanks to the *convention over configuration paradigm*, you can let the
+operator choose a default database name (`app`) and a default application
+user name (same as the database name), as well as randomly generate a
+secure password for both the superuser and the application user in
+PostgreSQL.
+
+Alternatively, you can generate your password, store it as a secret,
+and use it in the PostgreSQL cluster - as described in the above example.
+
+The supplied secret must comply with the specifications of the
+[`kubernetes.io/basic-auth` type](https://kubernetes.io/docs/concepts/configuration/secret/#basic-authentication-secret).
+As a result, the `username` in the secret must match the one of the `owner`
+(for the application secret) and `postgres` for the superuser one.
+
+The following is an example of a `basic-auth` secret:
+
+```yaml
+apiVersion: v1
+data:
+ username: YXBw
+ password: cGFzc3dvcmQ=
+kind: Secret
+metadata:
+ name: app-secret
+type: kubernetes.io/basic-auth
+```
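+
+The values in the `data` stanza are base64-encoded (`YXBw` decodes to `app`,
+`cGFzc3dvcmQ=` to `password`). If you prefer, Kubernetes also accepts plaintext
+values through the `stringData` field, so an equivalent secret is:
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+  name: app-secret
+type: kubernetes.io/basic-auth
+stringData:
+  username: app
+  password: password
+```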
+
+The application database is the one that should be used to store application
+data. Applications should connect to the cluster with the user that owns
+the application database.
+
+!!! Important
+ If you need to create additional users, please refer to
+ ["Declarative database role management"](declarative_role_management.md).
+
+If you don't supply a database name, the operator proceeds by convention:
+it creates the `app` database and adds it to the cluster definition using a
+*defaulting webhook*. The name of the user that owns the database defaults to
+the database name.
+
+The application user is not used internally by the operator, which instead
+relies on the superuser to reconcile the cluster with the desired status.
+
+### Passing Options to `initdb`
+
+The PostgreSQL data directory is initialized using the
+[`initdb` PostgreSQL command](https://www.postgresql.org/docs/current/app-initdb.html).
+
+{{name.ln}} enables you to customize the behavior of `initdb` to modify
+settings such as default locale configurations and data checksums.
+
+!!! Warning
+ {{name.ln}} acts only as a direct proxy to `initdb` for locale-related
+ options, due to the ongoing and significant enhancements in PostgreSQL's locale
+ support. It is your responsibility to ensure that the correct options are
+ provided, following the PostgreSQL documentation, and to verify that the
+ bootstrap process completes successfully.
+
+To include custom options in the `initdb` command, you can use the following
+parameters:
+
+builtinLocale
+: When `builtinLocale` is set to a value, {{name.ln}} passes it to the
+ `--builtin-locale` option in `initdb`. This option controls the builtin locale, as
+ defined in ["Locale Support"](https://www.postgresql.org/docs/current/locale.html)
+ from the PostgreSQL documentation (default: empty). Note that this option requires
+ `localeProvider` to be set to `builtin`. Available from PostgreSQL 17.
+
+dataChecksums
+: When `dataChecksums` is set to `true`, {{name.ln}} invokes the `-k` option in
+ `initdb` to enable checksums on data pages and help detect corruption by the
+ I/O system - that would otherwise be silent (default: `false`).
+
+encoding
+: When `encoding` set to a value, {{name.ln}} passes it to the `--encoding`
+ option in `initdb`, which selects the encoding of the template database
+ (default: `UTF8`).
+
+icuLocale
+: When `icuLocale` is set to a value, {{name.ln}} passes it to the
+ `--icu-locale` option in `initdb`. This option controls the ICU locale, as
+ defined in ["Locale Support"](https://www.postgresql.org/docs/current/locale.html)
+ from the PostgreSQL documentation (default: empty).
+ Note that this option requires `localeProvider` to be set to `icu`.
+ Available from PostgreSQL 15.
+
+icuRules
+: When `icuRules` is set to a value, {{name.ln}} passes it to the
+ `--icu-rules` option in `initdb`. This option controls the ICU locale, as
+ defined in ["Locale
+ Support"](https://www.postgresql.org/docs/current/locale.html) from the
+ PostgreSQL documentation (default: empty). Note that this option requires
+ `localeProvider` to be set to `icu`. Available from PostgreSQL 16.
+
+locale
+: When `locale` is set to a value, {{name.ln}} passes it to the `--locale`
+ option in `initdb`. This option controls the locale, as defined in
+ ["Locale Support"](https://www.postgresql.org/docs/current/locale.html) from
+ the PostgreSQL documentation. By default, the locale parameter is empty. In
+ this case, environment variables such as `LANG` are used to determine the
+ locale. Be aware that these variables can vary between container images,
+ potentially leading to inconsistent behavior.
+
+localeCollate
+: When `localeCollate` is set to a value, {{name.ln}} passes it to the `--lc-collate`
+ option in `initdb`. This option controls the collation order (`LC_COLLATE`
+ subcategory), as defined in ["Locale Support"](https://www.postgresql.org/docs/current/locale.html)
+ from the PostgreSQL documentation (default: `C`).
+
+localeCType
+: When `localeCType` is set to a value, {{name.ln}} passes it to the `--lc-ctype` option in
+ `initdb`. This option controls the collation order (`LC_CTYPE` subcategory), as
+ defined in ["Locale Support"](https://www.postgresql.org/docs/current/locale.html)
+ from the PostgreSQL documentation (default: `C`).
+
+localeProvider
+: When `localeProvider` is set to a value, {{name.ln}} passes it to the `--locale-provider`
+option in `initdb`. This option controls the locale provider, as defined in
+["Locale Support"](https://www.postgresql.org/docs/current/locale.html) from the
+PostgreSQL documentation (default: empty, which means `libc` for PostgreSQL).
+Available from PostgreSQL 15.
+
+walSegmentSize
+: When `walSegmentSize` is set to a value, {{name.ln}} passes it to the `--wal-segsize`
+ option in `initdb` (default: not set - defined by PostgreSQL as 16 megabytes).
+
+!!! Note
+ The only two locale options that {{name.ln}} implements during
+    the `initdb` bootstrap refer to the `LC_COLLATE` and `LC_CTYPE` subcategories.
+ The remaining locale subcategories can be configured directly in the PostgreSQL
+ configuration, using the `lc_messages`, `lc_monetary`, `lc_numeric`, and
+ `lc_time` parameters.
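+
+As an illustrative sketch (the locale value must be available in the container
+image), one of those subcategories can be set through the `postgresql.parameters`
+stanza of the cluster manifest:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+  name: cluster-example-locale
+spec:
+  instances: 3
+  postgresql:
+    parameters:
+      lc_messages: 'en_US.utf8'
+  storage:
+    size: 1Gi
+```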
+
+The following example enables data checksums and sets the default encoding to
+`LATIN1`:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+ name: cluster-example-initdb
+spec:
+ instances: 3
+
+ bootstrap:
+ initdb:
+ database: app
+ owner: app
+ dataChecksums: true
+ encoding: 'LATIN1'
+ storage:
+ size: 1Gi
+```
+
+!!! Warning
+ {{name.ln}} supports another way to customize the behavior of the
+ `initdb` invocation, using the `options` subsection. However, given that there
+ are options that can break the behavior of the operator (such as `--auth` or
+ `-d`), this technique is deprecated and will be removed from future versions of
+ the API.
+
+### Executing Queries After Initialization
+
+You can specify a custom list of queries that will be executed once,
+immediately after the cluster is created and configured. These queries will be
+executed as the *superuser* (`postgres`) against three different databases, in
+this specific order:
+
+1. The `postgres` database (`postInit` section)
+2. The `template1` database (`postInitTemplate` section)
+3. The application database (`postInitApplication` section)
+
+For each of these sections, {{name.ln}} provides two ways to specify custom
+queries, executed in the following order:
+
+- As a list of SQL queries in the cluster's definition (`postInitSQL`,
+ `postInitTemplateSQL`, and `postInitApplicationSQL` stanzas)
+- As a list of Secrets and/or ConfigMaps, each containing a SQL script to be
+ executed (`postInitSQLRefs`, `postInitTemplateSQLRefs`, and
+ `postInitApplicationSQLRefs` stanzas). Secrets are processed before ConfigMaps.
+
+Objects in each list will be processed sequentially.
+
+!!! Warning
+ Use the `postInit`, `postInitTemplate`, and `postInitApplication` options
+ with extreme care, as queries are run as a superuser and can disrupt the entire
+ cluster. An error in any of those queries will interrupt the bootstrap phase,
+ leaving the cluster incomplete and requiring manual intervention.
+
+!!! Important
+ Ensure the existence of entries inside the ConfigMaps or Secrets specified
+ in `postInitSQLRefs`, `postInitTemplateSQLRefs`, and
+ `postInitApplicationSQLRefs`, otherwise the bootstrap will fail. Errors in any
+ of those SQL files will prevent the bootstrap phase from completing
+ successfully.
+
+The following example runs a single SQL query as part of the `postInitSQL`
+stanza:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+ name: cluster-example-initdb
+spec:
+ instances: 3
+
+ bootstrap:
+ initdb:
+ database: app
+ owner: app
+ dataChecksums: true
+ localeCollate: 'en_US'
+ localeCType: 'en_US'
+ postInitSQL:
+ - CREATE DATABASE angus
+ storage:
+ size: 1Gi
+```
+
+The example below relies on `postInitApplicationSQLRefs` to specify a secret
+and a ConfigMap containing the queries to run after the initialization on the
+application database:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+ name: cluster-example-initdb
+spec:
+ instances: 3
+
+ bootstrap:
+ initdb:
+ database: app
+ owner: app
+ postInitApplicationSQLRefs:
+ secretRefs:
+ - name: my-secret
+ key: secret.sql
+ configMapRefs:
+ - name: my-configmap
+ key: configmap.sql
+ storage:
+ size: 1Gi
+```
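+
+The referenced objects must contain the SQL under the specified keys. As a
+sketch (the statement itself is only an example), `my-configmap` could be
+defined as follows:
+
+```yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: my-configmap
+data:
+  configmap.sql: |
+    CREATE SCHEMA IF NOT EXISTS app_schema;
+```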
+
+!!! Note
+ Within SQL scripts, each SQL statement is executed in a single exec on the
+ server according to the [PostgreSQL semantics](https://www.postgresql.org/docs/current/protocol-flow.html#PROTOCOL-FLOW-MULTI-STATEMENT).
+ Comments can be included, but internal commands like `psql` cannot.
+
+### Compatibility Features
+
+EDB Postgres Advanced Server adds many compatibility features to
+community PostgreSQL. You can find more information about them in the
+[EDB Postgres Advanced Server documentation](/epas/latest/).
+
+Those features are enabled by default during cluster creation on EPAS and
+are not supported on the community PostgreSQL image. To disable them,
+set the `redwood` flag to `false` in the `initdb` section,
+as in the following example:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+ name: cluster-example-initdb
+spec:
+ instances: 3
+ imageName:
+
+ bootstrap:
+ initdb:
+ database: app
+ owner: app
+ redwood: false
+ storage:
+ size: 1Gi
+```
+
+## Bootstrap from another cluster
+
+{{name.ln}} enables bootstrapping a cluster starting from
+another one of the same major version.
+This operation can be carried out either connecting directly to the source cluster via
+streaming replication (`pg_basebackup`), or indirectly via an existing
+physical *base backup* (`recovery`).
+
+The source cluster must be defined in the `externalClusters` section, identified
+by `name` (our recommendation is to use the same `name` of the origin cluster).
+
+!!! Important
+ By default the `recovery` method strictly uses the `name` of the
+ cluster in the `externalClusters` section to locate the main folder
+ of the backup data within the object store, which is normally reserved
+ for the name of the server. Backup plugins provide ways to specify a
+ different one. For example, the Barman Cloud Plugin provides the [`serverName` parameter](https://cloudnative-pg.io/plugin-barman-cloud/docs/parameters/)
+ (by default assigned to the value of `name` in the external cluster definition).
+
+### Bootstrap from a backup (`recovery`)
+
+Given the variety of backup methods and combinations of backup storage
+options provided by the {{name.ln}} operator for `recovery`, please refer
+to the dedicated ["Recovery" section](recovery.md) for detailed guidance on
+each method.
+
+### Bootstrap from a live cluster (`pg_basebackup`)
+
+The `pg_basebackup` bootstrap mode allows you to create a new cluster
+(*target*) as an exact physical copy of an existing and **binary-compatible**
+PostgreSQL instance (*source*) managed by {{name.ln}}, using a valid
+*streaming replication* connection. The source instance can either be a primary
+or a standby PostgreSQL server. It’s crucial to thoroughly review the
+requirements section below, as the pros and cons of PostgreSQL physical
+replication fully apply.
+
+The primary use cases for this method include:
+
+- Reporting and business intelligence clusters that need to be regenerated
+ periodically (daily, weekly)
+- Test databases containing live data that require periodic regeneration
+ (daily, weekly, monthly) and anonymization
+- Rapid spin-up of a standalone replica cluster
+- Physical migrations of {{name.ln}} clusters to different namespaces or
+ Kubernetes clusters
+
+!!! Important
+ Avoid using this method, based on physical replication, to migrate an
+ existing PostgreSQL cluster outside of Kubernetes into {{name.ln}}, unless you
+ are completely certain that all [requirements](#requirements) are met and
+ the operation has been
+ thoroughly tested. The {{name.ln}} community does not endorse this approach
+ for such use cases, and recommends using logical import instead. It is
+ exceedingly rare that all requirements for physical replication are met in a
+ way that seamlessly works with {{name.ln}}.
+
+!!! Warning
+ In its current implementation, this method clones the source PostgreSQL
+ instance, thereby creating a *snapshot*. Once the cloning process has finished,
+ the new cluster is immediately started.
+ Refer to ["Current limitations"](#current-limitations) for more details.
+
+Similar to the `recovery` bootstrap method, once the cloning operation is
+complete, the operator takes full ownership of the target cluster, starting
+from the first instance. This includes overriding certain configuration
+parameters as required by {{name.ln}}, resetting the superuser password,
+creating the `streaming_replica` user, managing replicas, and more. The
+resulting cluster operates independently from the source instance.
+
+!!! Important
+ Configuring the network connection between the target and source instances
+ lies outside the scope of {{name.ln}} documentation, as it depends heavily on
+ the specific context and environment.
+
+The streaming replication client on the target instance, managed transparently
+by `pg_basebackup`, can authenticate on the source instance using one of the
+following methods:
+
+1. [Username/password](#usernamepassword-authentication)
+2. [TLS client certificate](#tls-certificate-authentication)
+
+Both authentication methods are detailed below.
+
+#### Requirements
+
+The following requirements apply to the `pg_basebackup` bootstrap method:
+
+- target and source must have the same hardware architecture
+- target and source must have the same major PostgreSQL version
+- target and source must have the same tablespaces
+- source must be configured with enough `max_wal_senders` to grant
+ access from the target for this one-off operation by providing at least
+ one *walsender* for the backup plus one for WAL streaming
+- the network between source and target must be configured to enable the target
+ instance to connect to the PostgreSQL port on the source instance
+- source must have a role with `REPLICATION LOGIN` privileges and must accept
+ connections from the target instance for this role in `pg_hba.conf`, preferably
+ via TLS (see ["About the replication user"](#about-the-replication-user) below)
+- target must be able to successfully connect to the source PostgreSQL instance
+ using a role with `REPLICATION LOGIN` privileges
+
+!!! Seealso
+ For further information, please refer to the
+ ["Planning" section for Warm Standby](https://www.postgresql.org/docs/current/warm-standby.html#STANDBY-PLANNING),
+ the
+ [`pg_basebackup` page](https://www.postgresql.org/docs/current/app-pgbasebackup.html)
+ and the
+ ["High Availability, Load Balancing, and Replication" chapter](https://www.postgresql.org/docs/current/high-availability.html)
+ in the PostgreSQL documentation.
+
+#### About the replication user
+
+As explained in the requirements section, you need to have a user
+with either the `SUPERUSER` or, preferably, just the `REPLICATION`
+privilege in the source instance.
+
+If the source database is created with {{name.ln}}, you
+can reuse the `streaming_replica` user and take advantage of client
+TLS certificates authentication (which, by default, is the only allowed
+connection method for `streaming_replica`).
+
+For all other cases, including outside Kubernetes, please verify that
+you already have a user with the `REPLICATION` privilege, or create
+a new one by following the instructions below.
+
+As `postgres` user on the source system, please run:
+
+```console
+createuser -P --replication streaming_replica
+```
+
+Enter the password at the prompt and save it for later, as you
+will need to add it to a secret in the target instance.
+
+!!! Note
+ Although the name is not important, we will use `streaming_replica`
+ for the sake of simplicity. Feel free to change it as you like,
+ provided you adapt the instructions in the following sections.
+
+#### Username/Password authentication
+
+The first authentication method supported by {{name.ln}}
+with the `pg_basebackup` bootstrap is based on username and password matching.
+
+Make sure you have the following information before you start the procedure:
+
+- location of the source instance, identified by a hostname or an IP address
+ and a TCP port
+- replication username (`streaming_replica` for simplicity)
+- password
+
+You might need to add a line similar to the following to the `pg_hba.conf`
+file on the source PostgreSQL instance:
+
+```
+# A more restrictive rule for TLS and IP of origin is recommended
+host replication streaming_replica all md5
+```
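+
+As suggested by the comment above, a more restrictive rule requiring TLS and
+limiting the origin network (the CIDR below is just an example) could be:
+
+```
+# TYPE  DATABASE     USER               ADDRESS      METHOD
+hostssl replication  streaming_replica  10.0.0.0/8   scram-sha-256
+```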
+
+The following manifest creates a new PostgreSQL 18.1 cluster,
+called `target-db`, using the `pg_basebackup` bootstrap method
+to clone an external PostgreSQL cluster defined as `source-db`
+(in the `externalClusters` array). As you can see, the `source-db`
+definition points to the `source-db.foo.com` host and connects as
+the `streaming_replica` user, whose password is stored in the
+`password` key of the `source-db-replica-user` secret.
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+ name: target-db
+spec:
+ instances: 3
+ imageName: docker.enterprisedb.com/k8s/postgresql:18.1-standard-ubi9
+
+ bootstrap:
+ pg_basebackup:
+ source: source-db
+
+ storage:
+ size: 1Gi
+
+ externalClusters:
+ - name: source-db
+ connectionParameters:
+ host: source-db.foo.com
+ user: streaming_replica
+ password:
+ name: source-db-replica-user
+ key: password
+```
+
+All the requirements must be met for the clone operation to work, including
+the same PostgreSQL version (in our case 18.1).
+
+#### TLS certificate authentication
+
+The second authentication method supported by {{name.ln}}
+with the `pg_basebackup` bootstrap is based on TLS client certificates.
+This is the recommended approach from a security standpoint.
+
+The following example clones an existing PostgreSQL cluster (`cluster-example`)
+in the same Kubernetes cluster.
+
+!!! Note
+ This example can be easily adapted to cover an instance that resides
+ outside the Kubernetes cluster.
+
+The manifest defines a new PostgreSQL 18.1 cluster called `cluster-clone-tls`,
+which is bootstrapped using the `pg_basebackup` method from the `cluster-example`
+external cluster. The host is identified by the read/write service
+in the same cluster, while the `streaming_replica` user is authenticated
+thanks to the provided keys, certificate, and certification authority
+information (respectively in the `cluster-example-replication` and
+`cluster-example-ca` secrets).
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+ name: cluster-clone-tls
+spec:
+ instances: 3
+ imageName: docker.enterprisedb.com/k8s/postgresql:18.1-standard-ubi9
+
+ bootstrap:
+ pg_basebackup:
+ source: cluster-example
+
+ storage:
+ size: 1Gi
+
+ externalClusters:
+ - name: cluster-example
+ connectionParameters:
+ host: cluster-example-rw.default.svc
+ user: streaming_replica
+ sslmode: verify-full
+ sslKey:
+ name: cluster-example-replication
+ key: tls.key
+ sslCert:
+ name: cluster-example-replication
+ key: tls.crt
+ sslRootCert:
+ name: cluster-example-ca
+ key: ca.crt
+```
+
+#### Configure the application database
+
+You can also configure the application database for a cluster bootstrapped
+from a live cluster, just as with the `initdb` and `recovery` bootstrap methods.
+If the new cluster is created as a replica cluster (with replica mode enabled),
+application database configuration is skipped.
+
+!!! Important
+ While the `Cluster` is in recovery mode, no changes to the database,
+ including the catalog, are permitted. This restriction includes any role
+ overrides, which are deferred until the `Cluster` transitions to primary.
+ During the recovery phase, roles remain as defined in the source cluster.
+
+The example below configures the `app` database with the owner `app` and
+the password stored in the provided secret `app-secret`, following the
+bootstrap from a live cluster.
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+[...]
+spec:
+ bootstrap:
+ pg_basebackup:
+ database: app
+ owner: app
+ secret:
+ name: app-secret
+ source: cluster-example
+```
+
+With the above configuration, the following will happen only **after recovery is
+completed**:
+
+1. If the `app` database does not exist, it will be created.
+2. If the `app` user does not exist, it will be created.
+3. If the `app` user is not the owner of the `app` database, ownership will be
+ granted to the `app` user.
+4. If the `username` value matches the `owner` value in the secret, the
+ password for the application user (the `app` user in this case) will be
+ updated to the `password` value in the secret.
+
+#### Current limitations
+
+##### Snapshot copy
+
+The `pg_basebackup` method takes a snapshot of the source instance in the form of
+a PostgreSQL base backup. All transactions written from the start of
+the backup until its successful completion will be streamed to the target
+instance using a second connection (see the `--wal-method=stream` option for
+`pg_basebackup`).
+
+Once the backup is completed, the new instance will be started on a new timeline
+and diverge from the source.
+For this reason, it is advised to stop all write operations to the source database
+before migrating to the target database.
+
+Note that this limitation applies only if the target cluster is not defined as
+a replica cluster.
+
+!!! Important
+ Before you attempt a migration, you must test both the procedure
+ and the applications. In particular, it is fundamental that you run the migration
+ procedure as many times as needed to systematically measure the downtime of your
+ applications in production.
diff --git a/product_docs/docs/postgres_for_kubernetes/1/certificates.mdx b/product_docs/docs/postgres_for_kubernetes/1/certificates.mdx
new file mode 100644
index 0000000000..be88d70c01
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/certificates.mdx
@@ -0,0 +1,356 @@
+---
+title: 'Certificates'
+originalFilePath: 'src/certificates.md'
+---
+
+
+
+{{name.ln}} was designed to natively support TLS certificates.
+To set up a cluster, the operator requires:
+
+- A server certification authority (CA) certificate
+- A server TLS certificate signed by the server CA
+- A client CA certificate
+- A streaming replication client certificate generated by the client CA
+
+!!! Note
+ You can find all the secrets used by the cluster and their expiration dates
+ in the cluster's status.
+
+{{name.ln}} is very flexible when it comes to TLS certificates. It
+primarily operates in two modes:
+
+1. [**Operator managed**](#operator-managed-mode) – Certificates are internally
+ managed by the operator in a fully automated way and signed using a CA created
+ by {{name.ln}}.
+2. [**User provided**](#user-provided-certificates-mode) – Certificates are
+ generated outside the operator and imported in the cluster definition as
+   secrets. {{name.ln}} integrates with [cert-manager](https://cert-manager.io/)
+   (see the [Cert-manager example](#cert-manager-example)).
+
+You can also choose a hybrid approach, where only some of the certificates are
+generated outside CNP.
+
+!!! Note
+ The operator and instances verify server certificates against the CA only,
+ disregarding the DNS name. This approach is due to the typical absence of DNS
+ names in user-provided certificates for the `-rw` service used for
+ communication within the cluster.
+
+## Operator-Managed Mode
+
+By default, the operator automatically generates a single Certificate Authority
+(CA) to issue both client and server certificates. These certificates are
+managed continuously by the operator, with automatic renewal 7 days before
+expiration (within a 90-day validity period).
+
+!!! Info
+ You can adjust this default behavior by configuring the
+ `CERTIFICATE_DURATION` and `EXPIRING_CHECK_THRESHOLD` environment variables.
+ For detailed guidance, refer to the [Operator Configuration](operator_conf.md).
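+
+As a sketch, a ConfigMap like the following could extend the certificate
+validity to 180 days and renew 14 days before expiration. The ConfigMap name
+and namespace assume a default YAML-manifest installation; see the
+[Operator Configuration](operator_conf.md) page for the authoritative details.
+Both values are expressed in days:
+
+```yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: postgresql-operator-controller-manager-config
+  namespace: postgresql-operator-system
+data:
+  CERTIFICATE_DURATION: "180"      # validity of generated certificates, in days
+  EXPIRING_CHECK_THRESHOLD: "14"   # renew this many days before expiration
+```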
+
+!!! Important
+ Certificate renewal does not cause any downtime for the PostgreSQL server,
+ as a simple reload operation is sufficient. However, any user-managed
+ certificates not controlled by {{name.ln}} must be re-issued following the
+ renewal process.
+
+When generating certificates, the operator assumes that the Kubernetes
+cluster's DNS zone is set to `cluster.local` by default. This behavior can be
+customized by setting the `KUBERNETES_CLUSTER_DOMAIN` environment variable. A
+convenient alternative is to use the [operator's configuration capability](operator_conf.md).
+
+### Server certificates
+
+#### Server CA secret
+
+The operator generates a self-signed CA and stores it in a generic secret
+containing the following keys:
+
+- `ca.crt` – CA certificate used to validate the server certificate, used as
+ `sslrootcert` in clients' connection strings.
+- `ca.key` – The key used to sign the server SSL certificate automatically.
+
+#### Server TLS secret
+
+The operator uses the generated self-signed CA to sign a server TLS
+certificate. It's stored in a secret of type `kubernetes.io/tls` and configured
+to be used as `ssl_cert_file` and `ssl_key_file` by the instances. This
+approach enables clients to verify their identity and connect securely.
+
+#### Server alternative DNS names
+
+In addition to the default ones, you can specify DNS server alternative names
+as part of the generated server TLS secret.
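+
+A minimal sketch, assuming a hypothetical external DNS name you want included
+among the certificate's subject alternative names:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+  name: cluster-example
+spec:
+  instances: 3
+  storage:
+    size: 1Gi
+  certificates:
+    serverAltDNSNames:
+    - cluster-example.mydomain.example.com  # hypothetical external DNS name
+```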
+
+### Client certificates
+
+#### Client CA secret
+
+By default, the same self-signed CA as the server CA is used. The public part
+is passed as `ssl_ca_file` to all the instances so it can verify client
+certificates it signed. The private key is stored in the same secret and used
+to sign client certificates generated by the `kubectl cnp` plugin.
+
+#### Client `streaming_replica` certificate
+
+The operator uses the generated self-signed CA to sign a client certificate for
+the user `streaming_replica`, storing it in a secret of type
+`kubernetes.io/tls`. To allow secure connection to the primary instance, this
+certificate is passed as `sslcert` and `sslkey` in the replicas' connection
+strings.
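+
+Conceptually, each replica's resulting connection string to the primary
+resembles the following sketch (the file paths are illustrative placeholders,
+not the actual locations managed by the instance manager):
+
+```text
+host=cluster-example-rw user=streaming_replica sslmode=verify-ca
+  sslcert=<path/to/streaming_replica/tls.crt>
+  sslkey=<path/to/streaming_replica/tls.key>
+  sslrootcert=<path/to/server-ca.crt>
+```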
+
+## User-provided certificates mode
+
+### Server certificates
+
+If required, you can also provide the two server certificates, generating them
+using a separate component such as [cert-manager](https://cert-manager.io/).
+To use a custom server TLS certificate for a cluster, you must specify
+the following parameters:
+
+- `serverTLSSecret` – The name of a secret of type `kubernetes.io/tls`
+ containing the server TLS certificate. It must contain both the standard
+ `tls.crt` and `tls.key` keys.
+- `serverCASecret` – The name of a secret containing the `ca.crt` key.
+
+!!! Note
+ The operator still creates and manages the two secrets related to client
+ certificates.
+
+!!! Note
+ The operator and instances verify server certificates against the CA only,
+ disregarding the DNS name. This approach is due to the typical absence of DNS
+ names in user-provided certificates for the `-rw` service used for
+ communication within the cluster.
+
+!!! Note
+ If you want ConfigMaps and secrets to be reloaded by instances, you can add
+ a label with the key `k8s.enterprisedb.io/reload` to it. Otherwise you must reload the
+ instances using the `kubectl cnp reload` subcommand.
+
+#### Example
+
+Given the following files:
+
+- `server-ca.crt` – The certificate of the CA that signed the server TLS certificate.
+- `server.crt` – The server TLS certificate.
+- `server.key` – The private key of the server TLS certificate.
+
+Create a secret containing the CA certificate:
+
+```sh
+kubectl create secret generic my-postgresql-server-ca \
+ --from-file=ca.crt=./server-ca.crt
+```
+
+Create a secret with the TLS certificate:
+
+```sh
+kubectl create secret tls my-postgresql-server \
+ --cert=./server.crt --key=./server.key
+```
+
+Create a PostgreSQL cluster referencing those secrets:
+
+```bash
+kubectl apply -f - <
+
+## Pod-to-Pod Network Security with {{name.ln}} and Cilium
+
+Kubernetes’ default behavior is to allow traffic between any two Pods in the cluster network.
+Cilium provides advanced L3/L4 network security using the `CiliumNetworkPolicy` resource. This
+enables fine-grained control over network traffic between Pods within a Kubernetes cluster. It is
+especially useful for securing communication between application workloads and backend
+services.
+
+In the following examples, we demonstrate how Cilium can be used to secure a
+{{name.ln}} PostgreSQL instance by restricting ingress traffic to only
+authorized Pods.
+
+!!! Important
+ Before proceeding, ensure that the `cluster-example` Postgres cluster is up
+ and running in your environment.
+
+## Default Deny Behavior in Cilium
+
+By default, Cilium does **not** deny all traffic unless explicitly configured
+to do so. In contrast to Kubernetes NetworkPolicy, which uses a deny-by-default
+model once a policy is present in a namespace, Cilium provides more flexible
+control over default deny behavior.
+
+To enforce a default deny posture with Cilium, you need to explicitly create a
+policy that denies all traffic to a set of Pods unless otherwise allowed. This
+is commonly achieved by using an **empty `ingress` section** in combination
+with `endpointSelector`, or by enabling **`--enable-default-deny`** at the
+Cilium agent level for broader enforcement.
+
+A minimal example of a default deny policy:
+
+```yaml
+apiVersion: cilium.io/v2
+kind: CiliumNetworkPolicy
+metadata:
+ name: default-deny
+ namespace: default
+spec:
+ description: "Default deny all ingress traffic to all Pods in this namespace"
+ endpointSelector: {}
+ ingress: []
+```
+
+## Making Cilium Network Policies work with {{name.ln}} Operator
+
+When working with a network policy, Cilium or not, the first step is to make
+sure that the operator can reach the Pods in the target namespace. This is
+important because the operator needs to be able to perform checks and actions
+on the Pods, and one of those actions requires access to the port `8000` on the
+Pods to get the current status of the PostgreSQL instance running inside.
+
+The following `CiliumNetworkPolicy` allows the operator to access the Pods in
+the target `default` namespace:
+
+```yaml
+apiVersion: cilium.io/v2
+kind: CiliumNetworkPolicy
+metadata:
+ name: postgresql-operator-operator-access
+ namespace: default
+spec:
+ description: "Allow {{name.ln}} operator access to any pod in the target namespace"
+ endpointSelector: {}
+ ingress:
+ - fromEndpoints:
+ - matchLabels:
+ io.kubernetes.pod.namespace: postgresql-operator-system
+ toPorts:
+ - ports:
+ - port: "8000"
+ protocol: TCP
+```
+
+!!! Important
+ The `postgresql-operator-system` namespace is the default namespace for the operator when
+ using the YAML manifests. If the operator was installed using a different
+ process (Helm, OLM, etc.), the namespace may be different. Make sure to adjust
+ the namespace properly.
+
+## Allowing access between cluster Pods
+
+Since the default policy is "deny all", we need to explicitly allow access
+between the cluster Pods in the same namespace. We will improve our previous
+policy by adding the required ingress rule:
+
+```yaml
+apiVersion: cilium.io/v2
+kind: CiliumNetworkPolicy
+metadata:
+ name: cnp-cluster-internal-access
+ namespace: default
+spec:
+ description: "Allow {{name.ln}} operator access and connection between pods in the same namespace"
+ endpointSelector: {}
+ ingress:
+ - fromEndpoints:
+ - matchLabels:
+ io.kubernetes.pod.namespace: postgresql-operator-system
+ - matchLabels:
+ io.kubernetes.pod.namespace: default
+ k8s.enterprisedb.io/cluster: cluster-example
+ toPorts:
+ - ports:
+ - port: "8000"
+ protocol: TCP
+ - port: "5432"
+ protocol: TCP
+```
+
+The policy allows access from `postgresql-operator-system` Pods and from `default` namespace
+Pods that also belong to `cluster-example`. The `matchLabels` selector requires
+Pods to have the complete set of listed labels. Missing even one label means
+the Pod will not match.
+
+## Restricting Access to PostgreSQL with Cilium
+
+In this example, we define a `CiliumNetworkPolicy` that allows only Pods
+labeled `role=backend` in the `default` namespace to connect to a PostgreSQL
+cluster named `cluster-example`. All other ingress traffic is blocked by
+default.
+
+```yaml
+apiVersion: cilium.io/v2
+kind: CiliumNetworkPolicy
+metadata:
+ name: postgres-access-backend-label
+ namespace: default
+spec:
+ description: "Allow PostgreSQL access on port 5432 from Pods with role=backend"
+ endpointSelector:
+ matchLabels:
+ k8s.enterprisedb.io/cluster: cluster-example
+ ingress:
+ - fromEndpoints:
+ - matchLabels:
+ role: backend
+ toPorts:
+ - ports:
+ - port: "5432"
+ protocol: TCP
+```
+
+This `CiliumNetworkPolicy` ensures that only Pods labeled with `role=backend`
+can access the PostgreSQL instance managed by {{name.ln}} via port 5432 in
+the `default` namespace.
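+
+For instance, a client Pod carrying the required label might look like the
+following sketch (the Pod name and image are arbitrary examples):
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: backend-client        # hypothetical client Pod
+  namespace: default
+  labels:
+    role: backend             # grants access under the policy above
+spec:
+  containers:
+  - name: psql
+    image: postgres:18        # any image with a PostgreSQL client
+    command: ["sleep", "infinity"]
+```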
+
+In the following policy, we demonstrate how to allow ingress traffic to port
+5432 of a PostgreSQL cluster named `cluster-example`, only from Pods with the
+label `role=backend` in any namespace.
+
+```yaml
+apiVersion: cilium.io/v2
+kind: CiliumNetworkPolicy
+metadata:
+ name: postgres-access-backend-any-ns
+ namespace: default
+spec:
+ description: "Allow PostgreSQL access on port 5432 from Pods with role=backend in any namespace"
+ endpointSelector:
+ matchLabels:
+ k8s.enterprisedb.io/cluster: cluster-example
+ ingress:
+ - fromEndpoints:
+    - matchLabels:
+        role: backend
+      matchExpressions:
+      - key: io.kubernetes.pod.namespace
+        operator: Exists
+ toPorts:
+ - ports:
+ - port: "5432"
+ protocol: TCP
+```
+
+The following example allows ingress traffic to port 5432 of the
+`cluster-example` cluster (located in the `default` namespace) from any Pods in
+the `backend` namespace.
+
+```yaml
+apiVersion: cilium.io/v2
+kind: CiliumNetworkPolicy
+metadata:
+ name: postgres-access-backend-namespace
+ namespace: default
+spec:
+ description: "Allow PostgreSQL access on port 5432 from any Pods in the backend namespace"
+ endpointSelector:
+ matchLabels:
+ k8s.enterprisedb.io/cluster: cluster-example
+ ingress:
+ - fromEndpoints:
+ - matchLabels:
+ io.kubernetes.pod.namespace: backend
+ toPorts:
+ - ports:
+ - port: "5432"
+ protocol: TCP
+```
+
+Using Cilium’s L3/L4 policy model, we define a `CiliumNetworkPolicy` that
+explicitly allows ingress traffic to cluster Pods only from application Pods in
+the `backend` namespace. All other traffic is implicitly denied unless
+explicitly permitted by additional policies.
+
+The following example allows ingress traffic to port 5432 of the
+`cluster-example` cluster (located in the `default` namespace) from any source
+within the Kubernetes cluster.
+
+```yaml
+apiVersion: cilium.io/v2
+kind: CiliumNetworkPolicy
+metadata:
+ name: postgres-access-cluster-wide
+ namespace: default
+spec:
+ description: "Allow ingress traffic to port 5432 of the cluster-example from any pods within the Kubernetes cluster"
+ endpointSelector:
+ matchLabels:
+ k8s.enterprisedb.io/cluster: cluster-example
+ ingress:
+ - fromEntities:
+ - cluster
+ toPorts:
+ - ports:
+ - port: "5432"
+ protocol: TCP
+```
+
+You may consider using [editor.networkpolicy.io](https://editor.networkpolicy.io/),
+a visual and interactive tool that simplifies the creation and validation of
+Cilium Network Policies. It’s especially helpful for avoiding misconfigurations
+and for understanding traffic rules more clearly by presenting them visually.
+
+With these policies, you've established baseline access controls for
+PostgreSQL. You can layer additional egress or audit rules using Cilium's
+policy language or extend to L7 enforcement with Envoy.
diff --git a/product_docs/docs/postgres_for_kubernetes/1/cncf-projects/external-secrets.mdx b/product_docs/docs/postgres_for_kubernetes/1/cncf-projects/external-secrets.mdx
new file mode 100644
index 0000000000..83e4ce5af0
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/cncf-projects/external-secrets.mdx
@@ -0,0 +1,267 @@
+---
+title: 'External Secrets'
+originalFilePath: 'src/cncf-projects/external-secrets.md'
+---
+
+[External Secrets](https://external-secrets.io/latest/) is a CNCF Sandbox
+project, accepted in 2022 under the sponsorship of TAG Security.
+
+## About
+
+The **External Secrets Operator (ESO)** is a Kubernetes operator that enhances
+secret management by decoupling the storage of secrets from Kubernetes itself.
+It enables seamless synchronization between external secret management systems
+and native Kubernetes `Secret` resources.
+
+ESO supports a wide range of backends, including:
+
+- [HashiCorp Vault](https://www.vaultproject.io/)
+- [AWS Secrets Manager](https://aws.amazon.com/secrets-manager/)
+- [Google Secret Manager](https://cloud.google.com/secret-manager)
+- [Azure Key Vault](https://azure.microsoft.com/en-us/services/key-vault/)
+- [IBM Cloud Secrets Manager](https://www.ibm.com/cloud/secrets-manager)
+
+…and many more. For a full and up-to-date list of supported providers, refer to
+the [official External Secrets documentation](https://external-secrets.io/latest/).
+
+## Integration with PostgreSQL and {{name.ln}}
+
+When it comes to PostgreSQL databases, External Secrets integrates seamlessly
+with {{name.ln}} in two major use cases:
+
+- **Automated password management:** ESO can handle the automatic generation
+ and rotation of database user passwords stored in Kubernetes `Secret`
+ resources, ensuring that applications running inside the cluster always have
+ access to up-to-date credentials.
+
+- **Cross-platform secret access:** It enables transparent synchronization of
+  those passwords with an external Key Management Service (KMS) via a
+  `SecretStore` resource. This allows applications and developers outside the
+  Kubernetes cluster—who may not have access to Kubernetes secrets—to retrieve
+  the database credentials directly from the external KMS.
+
+## Example: Automated Password Management with External Secrets
+
+Let’s walk through how to automatically rotate the password of the `app` user
+every 24 hours in the `cluster-example` Postgres cluster from the
+[quickstart guide](../quickstart.md#part-3-deploy-a-postgresql-cluster).
+
+!!! Important
+ Before proceeding, ensure that the `cluster-example` Postgres cluster is up
+ and running in your environment.
+
+By default, {{name.ln}} generates and manages a Kubernetes `Secret` named
+`cluster-example-app`, which contains the credentials for the `app` user in the
+`cluster-example` cluster. You can read more about this in the
+[“Connecting from an application” section](../applications.md#secrets).
+
+With External Secrets, the goal is to:
+
+1. Define a `Password` generator that specifies how to generate the password.
+2. Create an `ExternalSecret` resource that keeps the `cluster-example-app`
+ secret in sync by updating only the `password` and `pgpass` fields.
+
+### Creating the Password Generator
+
+The following example creates a
+[`Password` generator](https://external-secrets.io/main/api/generator/password/)
+resource named `pg-password-generator` in the `default` namespace. You can
+customize the name and properties to suit your needs:
+
+```yaml
+apiVersion: generators.external-secrets.io/v1alpha1
+kind: Password
+metadata:
+ name: pg-password-generator
+spec:
+ length: 42
+ digits: 5
+ symbols: 5
+ symbolCharacters: "-_$@"
+ noUpper: false
+ allowRepeat: true
+```
+
+This specification defines the characteristics of the generated password,
+including its length and the inclusion of digits, symbols, and uppercase
+letters.
+
+### Creating the External Secret
+
+The example below creates an `ExternalSecret` resource named
+`cluster-example-app-secret`, which refreshes the password every 24 hours. It
+uses a `Merge` policy to update only the specified fields (`password`, `pgpass`,
+`jdbc-uri`, and `uri`) in the `cluster-example-app` secret.
+
+```yaml
+apiVersion: external-secrets.io/v1
+kind: ExternalSecret
+metadata:
+ name: cluster-example-app-secret
+spec:
+ refreshInterval: "24h"
+ target:
+ name: cluster-example-app
+ creationPolicy: Merge
+ template:
+ metadata:
+ labels:
+ k8s.enterprisedb.io/reload: "true"
+ data:
+ password: "{{ .password }}"
+ pgpass: "cluster-example-rw:5432:app:app:{{ .password }}"
+ jdbc-uri: "jdbc:postgresql://cluster-example-rw.default:5432/app?password={{ .password }}&user=app"
+ uri: "postgresql://app:{{ .password }}@cluster-example-rw.default:5432/app"
+ dataFrom:
+ - sourceRef:
+ generatorRef:
+ apiVersion: generators.external-secrets.io/v1alpha1
+ kind: Password
+ name: pg-password-generator
+```
+
+The label `k8s.enterprisedb.io/reload: "true"` ensures that {{name.ln}} triggers a reload
+of the user password in the database when the secret changes.
+
+### Verifying the Configuration
+
+To check that the `ExternalSecret` is correctly synchronizing:
+
+```sh
+kubectl get es cluster-example-app-secret
+```
+
+To observe the password being refreshed in real time, temporarily reduce the
+`refreshInterval` to `30s` and run the following command repeatedly:
+
+```sh
+kubectl get secret cluster-example-app \
+ -o jsonpath="{.data.password}" | base64 -d
+```
+
+You should see the password change every 30 seconds, confirming that the
+rotation is working correctly.
+
+### There's More
+
+While the example above focuses on the default `cluster-example-app` secret
+created by {{name.ln}}, the same approach can be extended to any custom
+secrets or PostgreSQL users you create, allowing you to rotate their
+passwords regularly.
+
+## Example: Integration with an External KMS
+
+One of the most widely used Key Management Service (KMS) providers in the CNCF
+ecosystem is [HashiCorp Vault](https://www.vaultproject.io/). Although Vault is
+licensed under the Business Source License (BUSL), a fully compatible and
+actively maintained open source alternative is available: [OpenBao](https://openbao.org/).
+OpenBao supports all the same interfaces as HashiCorp Vault, making it a true
+drop-in replacement.
+
+In this example, we'll demonstrate how to integrate {{name.ln}},
+External Secrets Operator, and HashiCorp Vault to automatically rotate
+a PostgreSQL password and securely store it in Vault.
+
+!!! Important
+ This example assumes that HashiCorp Vault is already installed and properly
+ configured in your environment, and that your team has the necessary expertise
+ to operate it. There are various ways to deploy Vault, and detailing them is
+ outside the scope of {{name.ln}}. While it's possible to run Vault inside
+ Kubernetes, it is more commonly deployed externally. For detailed instructions,
+ consult the [HashiCorp Vault documentation](https://www.vaultproject.io/docs).
+
+Continuing from the previous example, we will now create the necessary
+`SecretStore` and `PushSecret` resources to complete the integration with
+Vault.
+
+### Creating the `SecretStore`
+
+In this example, we assume that HashiCorp Vault is accessible from within the
+namespace at `http://vault.vault.svc:8200`, and that a Kubernetes `Secret`
+named `vault-token` exists in the same namespace, containing the token used to
+authenticate with Vault.
+
+```yaml
+apiVersion: external-secrets.io/v1
+kind: SecretStore
+metadata:
+ name: vault-backend
+spec:
+ provider:
+ vault:
+ server: "http://vault.vault.svc:8200"
+ path: "secrets"
+ # Specifies the Vault KV secret engine version ("v1" or "v2").
+ # Defaults to "v2" if not set.
+ version: "v2"
+ auth:
+ # References a Kubernetes Secret that contains the Vault token.
+ # See: https://www.vaultproject.io/docs/auth/token
+ tokenSecretRef:
+ name: "vault-token"
+ key: "token"
+---
+apiVersion: v1
+kind: Secret
+metadata:
+ name: vault-token
+data:
+ token: aHZzLioqKioqKio= # hvs.*******
+```
+
+This configuration creates a `SecretStore` resource named `vault-backend`.
+
+!!! Important
+    This example uses basic token-based authentication, which is suitable for
+    testing, API, and CLI use cases. While it is the default method enabled in
+ Vault, it is not recommended for production environments. For production,
+ consider using more secure authentication methods.
+ Refer to the [External Secrets Operator documentation](https://external-secrets.io/latest/provider/hashicorp-vault/)
+ for a full list of supported authentication mechanisms.
+
+!!! Info
+ HashiCorp Vault must have a KV secrets engine enabled at the `secrets` path
+ with version `v2`. If your Vault instance uses a different path or
+ version, be sure to update the `path` and `version` fields accordingly.
+
+### Creating the `PushSecret`
+
+The `PushSecret` resource is used to push a Kubernetes `Secret` to HashiCorp
+Vault. In this simplified example, we'll push the credentials for the `app`
+user of the sample cluster `cluster-example`.
+
+For more details on configuring `PushSecret`, refer to the
+[External Secrets Operator documentation](https://external-secrets.io/latest/api/pushsecret/).
+
+```yaml
+apiVersion: external-secrets.io/v1alpha1
+kind: PushSecret
+metadata:
+ name: pushsecret-example
+spec:
+ deletionPolicy: Delete
+ refreshInterval: 24h
+ secretStoreRefs:
+ - name: vault-backend
+ kind: SecretStore
+ selector:
+ secret:
+ name: cluster-example-app
+ data:
+ - match:
+ remoteRef:
+ remoteKey: cluster-example-app
+```
+
+In this example, the `PushSecret` resource instructs the External Secrets
+Operator to push the Kubernetes `Secret` named `cluster-example-app` to
+HashiCorp Vault (from the previous example). The `remoteKey` defines the name
+under which the secret will be stored in Vault, using the `SecretStore` named
+`vault-backend`.
+
+### Verifying the Configuration
+
+To verify that the `PushSecret` is functioning correctly, navigate to the
+HashiCorp Vault UI. In the `kv` secrets engine at the path `secrets`, you
+should find a secret named `cluster-example-app`, corresponding to the
+`remoteKey` defined above.
diff --git a/product_docs/docs/postgres_for_kubernetes/1/cncf-projects/index.mdx b/product_docs/docs/postgres_for_kubernetes/1/cncf-projects/index.mdx
new file mode 100644
index 0000000000..b02c028f67
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/cncf-projects/index.mdx
@@ -0,0 +1,7 @@
+---
+title: CNCF Projects Integrations
+indexCards: simple
+navigation:
+- external-secrets
+- cilium
+---
diff --git a/product_docs/docs/postgres_for_kubernetes/1/cnp_i.mdx b/product_docs/docs/postgres_for_kubernetes/1/cnp_i.mdx
new file mode 100644
index 0000000000..755451a9dc
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/cnp_i.mdx
@@ -0,0 +1,210 @@
+---
+title: 'CNP-I'
+originalFilePath: 'src/cnp_i.md'
+---
+
+
+
+The **CloudNativePG Interface** ([CNPG-I](https://github.com/cloudnative-pg/cnpg-i))
+is a standard way to extend and customize {{name.ln}} without modifying its
+core codebase.
+
+## Why CNP-I?
+
+{{name.ln}} supports a wide range of use cases, but sometimes its built-in
+functionality isn’t enough, or adding certain features directly to the main
+project isn’t practical.
+
+Before CNP-I, users had two main options:
+
+- Fork the project to add custom behavior, or
+- Extend the upstream codebase by writing custom components on top of it.
+
+Both approaches created maintenance overhead, slowed upgrades, and delayed delivery of critical features.
+
+CNP-I solves these problems by providing a stable, gRPC-based integration
+point for extending {{name.ln}} at key points in a cluster’s lifecycle—such
+as backups, recovery, and sub-resource reconciliation—without disrupting the
+core project.
+
+CNP-I can extend:
+
+- The operator, and/or
+- The instance manager running inside PostgreSQL pods.
+
+## Registering a plugin
+
+CNP-I is inspired by the Kubernetes
+[Container Storage Interface (CSI)](https://kubernetes.io/blog/2019/01/15/container-storage-interface-ga/).
+The operator communicates with registered plugins using **gRPC**, following the
+[CNPG-I protocol](https://github.com/cloudnative-pg/cnpg-i/blob/main/docs/protocol.md).
+
+{{name.ln}} discovers plugins **at startup**. You can register them in one of two ways:
+
+- Sidecar container – run the plugin inside the operator’s Deployment
+- Standalone Deployment – run the plugin as a separate workload in the same
+ namespace
+
+In both cases, the plugin must be packaged as a container image.
+
+### Sidecar Container
+
+When running as a sidecar, the plugin must expose its gRPC server via a **Unix
+domain socket**. This socket must be placed in a directory shared with the
+operator container, mounted at the path set in `PLUGIN_SOCKET_DIR` (default:
+`/plugin`).
+
+Example:
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: controller-manager
+spec:
+ template:
+ spec:
+ containers:
+ - image: cloudnative-pg:latest
+ [...]
+ name: manager
+ volumeMounts:
+ - mountPath: /plugins
+ name: cnpg-i-plugins
+
+ - image: cnpg-i-plugin-example:latest
+ name: cnpg-i-plugin-example
+ volumeMounts:
+ - mountPath: /plugins
+ name: cnpg-i-plugins
+ volumes:
+ - name: cnpg-i-plugins
+ emptyDir: {}
+```
+
+### Standalone Deployment (recommended)
+
+Running a plugin as its own Deployment decouples its lifecycle from the
+operator’s and allows independent scaling. In this setup, the plugin exposes a
+TCP gRPC endpoint behind a Service, with **mTLS** for secure communication.
+
+!!! Warning
+ {{name.ln}} does **not** discover plugins dynamically. If you deploy a new
+ plugin, you must **restart the operator** to detect it.
+
+Example Deployment:
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: cnpg-i-plugin-example
+spec:
+ template:
+ [...]
+ spec:
+ containers:
+ - name: cnpg-i-plugin-example
+ image: cnpg-i-plugin-example:latest
+ ports:
+ - containerPort: 9090
+ protocol: TCP
+```
+
+The related Service for the plugin must include:
+
+- The label `k8s.enterprisedb.io/pluginName`, set to the plugin name, which is
+  required for {{name.ln}} to discover the plugin
+- The annotation `k8s.enterprisedb.io/pluginPort`, set to the port where the
+  plugin’s gRPC server is exposed
+
+Example Service:
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+ annotations:
+ k8s.enterprisedb.io/pluginPort: "9090"
+ labels:
+ k8s.enterprisedb.io/pluginName: cnpg-i-plugin-example.my-org.io
+ name: cnpg-i-plugin-example
+spec:
+ ports:
+ - port: 9090
+ protocol: TCP
+ targetPort: 9090
+ selector:
+ app: cnpg-i-plugin-example
+```
+
+### Configuring TLS Certificates
+
+When a plugin runs as a `Deployment`, communication with {{name.ln}} happens
+over the network. To secure it, **mTLS is enforced**, requiring TLS
+certificates for both sides.
+
+Certificates must be stored as [Kubernetes TLS Secrets](https://kubernetes.io/docs/concepts/configuration/secret/#tls-secrets)
+and referenced in the plugin’s Service annotations
+(`k8s.enterprisedb.io/pluginClientSecret` and `k8s.enterprisedb.io/pluginServerSecret`):
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+ annotations:
+ k8s.enterprisedb.io/pluginClientSecret: cnpg-i-plugin-example-client-tls
+ k8s.enterprisedb.io/pluginServerSecret: cnpg-i-plugin-example-server-tls
+ k8s.enterprisedb.io/pluginPort: "9090"
+ name: barman-cloud
+ namespace: postgresql-operator-system
+spec:
+ [...]
+```
+
+!!! Note
+ You can provide your own certificate bundles, but the recommended method is
+ to use [Cert-manager](https://cert-manager.io).
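+
+The recommended cert-manager approach can be sketched as follows. Assuming an
+`Issuer` named `my-issuer` already exists in the namespace, a hypothetical
+`Certificate` generating the server-side secret referenced above might look
+like this:
+
+```yaml
+apiVersion: cert-manager.io/v1
+kind: Certificate
+metadata:
+  name: cnpg-i-plugin-example-server-tls
+  namespace: postgresql-operator-system
+spec:
+  # Secret name matching the k8s.enterprisedb.io/pluginServerSecret annotation
+  secretName: cnpg-i-plugin-example-server-tls
+  # DNS name of the plugin Service
+  dnsNames:
+    - cnpg-i-plugin-example.postgresql-operator-system.svc
+  issuerRef:
+    name: my-issuer
+    kind: Issuer
+```
+
+A similar `Certificate` would produce the client-side secret, so that both
+ends of the mTLS channel are covered.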
+
+## Using a plugin
+
+To enable a plugin, configure the `.spec.plugins` section in your `Cluster`
+resource. Refer to the {{name.ln}} API Reference for the full
+[PluginConfiguration](https://cloudnative-pg.io/documentation/current/cloudnative-pg.v1/#postgresql-k8s-enterprisedb-io-v1-PluginConfiguration)
+specification.
+
+Example:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+ name: cluster-with-plugins
+spec:
+ instances: 1
+ storage:
+ size: 1Gi
+ plugins:
+ - name: cnpg-i-plugin-example.my-org.io
+ enabled: true
+ parameters:
+ key1: value1
+ key2: value2
+```
+
+Each plugin may have its own parameters—check the plugin’s documentation for
+details. The `name` field in `spec.plugins` depends on how the plugin is
+deployed:
+
+- Sidecar container: use the Unix socket file name
+- Deployment: use the value from the Service’s `k8s.enterprisedb.io/pluginName` label
+
+## Community plugins
+
+The CNP-I protocol has quickly become a proven and reliable pattern for
+extending {{name.ln}} while keeping the core project maintainable.
+Over time, the community has built and shared plugins that address real-world
+needs and serve as examples for developers.
+
+For a complete and up-to-date list of plugins built with CNP-I, please refer to the
+[CNPG-I GitHub page](https://github.com/cloudnative-pg/cnpg-i?tab=readme-ov-file#projects-built-with-cnpg-i).
diff --git a/product_docs/docs/postgres_for_kubernetes/1/connection_pooling.mdx b/product_docs/docs/postgres_for_kubernetes/1/connection_pooling.mdx
new file mode 100644
index 0000000000..4fa07449e6
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/connection_pooling.mdx
@@ -0,0 +1,693 @@
+---
+title: 'Connection pooling'
+originalFilePath: 'src/connection_pooling.md'
+---
+
+
+
+{{name.ln}} provides native support for connection pooling with
+[PgBouncer](https://www.pgbouncer.org/), one of the most popular open source
+connection poolers for PostgreSQL, through the `Pooler` custom resource definition (CRD).
+
+In brief, a pooler in {{name.ln}} is a deployment of PgBouncer pods that sits
+between your applications and a PostgreSQL service, for example, the `rw`
+service. It creates a separate, scalable, configurable, and highly available
+database access layer.
+
+## Architecture
+
+The following diagram highlights how introducing a database access layer based
+on PgBouncer changes the architecture of {{name.ln}}. Instead of directly
+connecting to the PostgreSQL primary service, applications can connect to the
+equivalent service for PgBouncer. This ability enables reuse of existing
+connections for faster performance and better resource management on the
+PostgreSQL side.
+
+
+
+## Quick start
+
+This example shows how {{name.ln}} implements a PgBouncer pooler:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Pooler
+metadata:
+ name: pooler-example-rw
+spec:
+ cluster:
+ name: cluster-example
+
+ instances: 3
+ type: rw
+ pgbouncer:
+ poolMode: session
+ parameters:
+ max_client_conn: "1000"
+ default_pool_size: "10"
+```
+
+!!! Important
+ The pooler name can't be the same as any cluster name in the same namespace.
+
+This example creates a `Pooler` resource called `pooler-example-rw`
+that's strictly associated with the Postgres `Cluster` resource called
+`cluster-example`. It points to the primary, identified by the read/write
+service (`rw`, therefore `cluster-example-rw`).
+
+The `Pooler` resource must live in the same namespace as the Postgres cluster.
+It consists of a Kubernetes deployment of 3 pods running the
+[latest stable image of PgBouncer](https://quay.io/enterprisedb/pgbouncer),
+configured with the [`session` pooling mode](https://www.pgbouncer.org/config.html#pool-mode)
+and accepting up to 1000 connections each. The default pool size is 10
+user/database pairs toward PostgreSQL.
+
+!!! Important
+    The `Pooler` resource sets only the `*` fallback database in PgBouncer. This
+    setting means that all parameters in the connection strings passed from the client are
+ relayed to the PostgreSQL server. For details, see ["Section \[databases\]"
+ in the PgBouncer documentation](https://www.pgbouncer.org/config.html#section-databases).
+
+{{name.ln}} also creates a secret with the same name as the pooler containing
+the configuration files used with PgBouncer.
+
+!!! Seealso "API reference"
+ For details, see [`PgBouncerSpec`](pg4k.v1.md#postgresql-k8s-enterprisedb-io-v1-PgBouncerSpec)
+ in the API reference.
+
+## Pooler resource lifecycle
+
+`Pooler` resources aren't cluster-managed resources. You create poolers
+manually when they're needed. You can also deploy multiple poolers per
+PostgreSQL cluster.
+
+What's important is that the life cycles of the `Cluster` and the `Pooler`
+resources are currently independent. Deleting the cluster doesn't imply the
+deletion of the pooler, and vice versa.
+
+!!! Important
+ Once you know how a pooler works, you have full freedom in terms of
+ possible architectures. You can have clusters without poolers, clusters with
+ a single pooler, or clusters with several poolers, that is, one per application.
+
+!!! Important
+ When the operator is upgraded, the pooler pods will undergo a rolling
+ upgrade. This is necessary to ensure that the instance manager within the
+ pooler pods is also upgraded.
+
+## Security
+
+Any PgBouncer pooler is transparently integrated with {{name.ln}} support for
+in-transit encryption by way of TLS connections, both on the client
+(application) and server (PostgreSQL) side of the pool.
+
+Specifically, PgBouncer reuses the certificates of the PostgreSQL server. It
+also uses TLS client certificate authentication to connect to the PostgreSQL
+server to run the `auth_query` for clients' password authentication (see
+[Authentication](#authentication)).
+
+Containers run as the pgbouncer system user, and access to the `pgbouncer`
+database is allowed only by way of local connections, through peer authentication.
+
+### Certificates
+
+By default, a PgBouncer pooler uses the same certificates that are used by the
+cluster. However, if you provide your own certificates, the pooler accepts
+secrets in the following formats:
+
+1. Basic Auth
+2. TLS
+3. Opaque
+
+In the Opaque case, the pooler looks for the following specific keys:
+
+- `tls.crt`
+- `tls.key`
+
+So you can treat this secret as a TLS secret, and start from there.
+
+## Authentication
+
+Password-based authentication is the only supported method for clients of
+PgBouncer in {{name.ln}}.
+
+Internally, the implementation relies on PgBouncer's `auth_user` and
+`auth_query` options. Specifically, the operator:
+
+- Creates a standard user called `cnp_pooler_pgbouncer` in the PostgreSQL server
+- Creates the lookup function in the `postgres` database and grants execution
+ privileges to the cnp_pooler_pgbouncer user (PoLA)
+- Issues a TLS certificate for this user
+- Sets `cnp_pooler_pgbouncer` as the `auth_user`
+- Configures PgBouncer to use the TLS certificate to authenticate
+ `cnp_pooler_pgbouncer` against the PostgreSQL server
+- Removes all the above when it detects that a cluster doesn't have
+  any pooler associated with it
+
+!!! Important
+ If you specify your own secrets, the operator doesn't automatically
+ integrate the pooler.
+
+To manually integrate the pooler, if you specified your own
+secrets, you must run the following queries from inside your cluster.
+
+First, you must create the role:
+
+```sql
+CREATE ROLE cnp_pooler_pgbouncer WITH LOGIN;
+```
+
+Then, for each application database, grant the permission for
+`cnp_pooler_pgbouncer` to connect to it:
+
+```sql
+GRANT CONNECT ON DATABASE { database name here } TO cnp_pooler_pgbouncer;
+```
+
+Finally, as a *superuser* connect in each application database, and then create
+the authentication function inside each of the application databases:
+
+```sql
+CREATE OR REPLACE FUNCTION public.user_search(uname TEXT)
+ RETURNS TABLE (usename name, passwd text)
+ LANGUAGE sql SECURITY DEFINER AS
+ 'SELECT usename, passwd FROM pg_catalog.pg_shadow WHERE usename=$1;';
+
+REVOKE ALL ON FUNCTION public.user_search(text)
+ FROM public;
+
+GRANT EXECUTE ON FUNCTION public.user_search(text)
+ TO cnp_pooler_pgbouncer;
+```
+
+!!! Important
+ Given that `user_search` is a `SECURITY DEFINER` function, you need to
+ create it through a role with `SUPERUSER` privileges, such as the `postgres`
+ user.
+
+## Pod templates
+
+You can take advantage of pod templates specification in the `template`
+section of a `Pooler` resource. For details, see
+[`PoolerSpec`](pg4k.v1.md#postgresql-k8s-enterprisedb-io-v1-PoolerSpec) in the API reference.
+
+Using templates, you can configure pods as you like, including fine control
+over affinity and anti-affinity rules for pods and nodes. By default,
+containers use images from `quay.io/enterprisedb/pgbouncer`.
+
+This example shows a `Pooler` specifying `podAntiAffinity`:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Pooler
+metadata:
+ name: pooler-example-rw
+spec:
+ cluster:
+ name: cluster-example
+ instances: 3
+ type: rw
+
+ template:
+ metadata:
+ labels:
+ app: pooler
+ spec:
+ containers: []
+ affinity:
+ podAntiAffinity:
+ requiredDuringSchedulingIgnoredDuringExecution:
+ - labelSelector:
+ matchExpressions:
+ - key: app
+ operator: In
+ values:
+ - pooler
+ topologyKey: "kubernetes.io/hostname"
+```
+
+!!! Note
+ Explicitly set `.spec.template.spec.containers` to `[]` when not modified,
+ as it's a required field for a `PodSpec`. If `.spec.template.spec.containers`
+    isn't set, the Kubernetes API server returns the following error when trying to
+    apply the manifest: `error validating "pooler.yaml": error validating data:
+ ValidationError(Pooler.spec.template.spec): missing required field
+ "containers"`
+
+This example sets resources and changes the used image:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Pooler
+metadata:
+ name: pooler-example-rw
+spec:
+ cluster:
+ name: cluster-example
+ instances: 3
+ type: rw
+
+ template:
+ metadata:
+ labels:
+ app: pooler
+ spec:
+ containers:
+ - name: pgbouncer
+ image: my-pgbouncer:latest
+ resources:
+ requests:
+ cpu: "0.1"
+ memory: 100Mi
+ limits:
+ cpu: "0.5"
+ memory: 500Mi
+```
+
+## Service Template
+
+Sometimes your pooler requires different labels or annotations, or even a
+different service type. You can achieve that by using the `serviceTemplate` field:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Pooler
+metadata:
+ name: pooler-example-rw
+spec:
+ cluster:
+ name: cluster-example
+ instances: 3
+ type: rw
+ serviceTemplate:
+ metadata:
+ labels:
+ app: pooler
+ spec:
+ type: LoadBalancer
+ pgbouncer:
+ poolMode: session
+ parameters:
+ max_client_conn: "1000"
+ default_pool_size: "10"
+```
+
+The operator by default adds a `ServicePort` with the following data:
+
+```yaml
+ ports:
+ - name: pgbouncer
+ port: 5432
+ protocol: TCP
+ targetPort: pgbouncer
+```
+
+!!! Warning
+    Specifying a `ServicePort` with the name `pgbouncer` or the port `5432` prevents the default `ServicePort` from being added.
+    This is because `ServicePort` entries with the same `name` or `port` aren't allowed in Kubernetes and result in errors.
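+
+If you deliberately want to replace the default entry, you can declare your own
+`ServicePort` in the `serviceTemplate`. The following sketch (the `6432` port
+is an arbitrary assumption) exposes PgBouncer on a non-default port:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Pooler
+metadata:
+  name: pooler-example-rw
+spec:
+  cluster:
+    name: cluster-example
+  instances: 3
+  type: rw
+  serviceTemplate:
+    spec:
+      ports:
+        # Replaces the default entry, since it reuses the name "pgbouncer"
+        - name: pgbouncer
+          port: 6432
+          protocol: TCP
+          targetPort: pgbouncer
+  pgbouncer:
+    poolMode: session
+```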
+
+## High availability (HA)
+
+Thanks to Kubernetes deployments, you can configure your pooler to run on a
+single instance or over multiple pods. The exposed service ensures that your
+clients are randomly distributed over the available pods running PgBouncer,
+which then manages and reuses connections toward the underlying server (if
+using the `rw` service) or servers (if using the `ro` service with multiple
+replicas).
+
+!!! Warning
+ If your infrastructure spans multiple availability zones with high latency
+ across them, be aware of network hops. Consider, for example, the case of your
+ application running in zone 2, connecting to PgBouncer running in zone 3, and
+ pointing to the PostgreSQL primary in zone 1.
+
+## PgBouncer configuration options
+
+The operator manages most of the [configuration options for PgBouncer](https://www.pgbouncer.org/config.html),
+allowing you to modify only a subset of them.
+
+!!! Warning
+ You are responsible for correctly setting the value of each option, as the
+ operator doesn't validate them.
+
+These are the PgBouncer options you can customize, with links to the PgBouncer
+documentation for each parameter. Unless stated otherwise, the default values
+are the ones directly set by PgBouncer.
+
+- [`auth_type`](https://www.pgbouncer.org/config.html#auth_type)
+- [`application_name_add_host`](https://www.pgbouncer.org/config.html#application_name_add_host)
+- [`autodb_idle_timeout`](https://www.pgbouncer.org/config.html#autodb_idle_timeout)
+- [`cancel_wait_timeout`](https://www.pgbouncer.org/config.html#cancel_wait_timeout)
+- [`client_idle_timeout`](https://www.pgbouncer.org/config.html#client_idle_timeout)
+- [`client_login_timeout`](https://www.pgbouncer.org/config.html#client_login_timeout)
+- [`client_tls_sslmode`](https://www.pgbouncer.org/config.html#client_tls_sslmode)
+- [`default_pool_size`](https://www.pgbouncer.org/config.html#default_pool_size)
+- [`disable_pqexec`](https://www.pgbouncer.org/config.html#disable_pqexec)
+- [`dns_max_ttl`](https://www.pgbouncer.org/config.html#dns_max_ttl)
+- [`dns_nxdomain_ttl`](https://www.pgbouncer.org/config.html#dns_nxdomain_ttl)
+- [`idle_transaction_timeout`](https://www.pgbouncer.org/config.html#idle_transaction_timeout)
+- [`ignore_startup_parameters`](https://www.pgbouncer.org/config.html#ignore_startup_parameters):
+ to be appended to `extra_float_digits,options` - required by {{name.ln}}
+- [`listen_backlog`](https://www.pgbouncer.org/config.html#listen_backlog)
+- [`log_connections`](https://www.pgbouncer.org/config.html#log_connections)
+- [`log_disconnections`](https://www.pgbouncer.org/config.html#log_disconnections)
+- [`log_pooler_errors`](https://www.pgbouncer.org/config.html#log_pooler_errors)
+- [`log_stats`](https://www.pgbouncer.org/config.html#log_stats): by default
+ disabled (`0`), given that statistics are already collected by the Prometheus
+ export as described in the ["Monitoring"](#monitoring) section below
+- [`max_client_conn`](https://www.pgbouncer.org/config.html#max_client_conn)
+- [`max_db_connections`](https://www.pgbouncer.org/config.html#max_db_connections)
+- [`max_packet_size`](https://www.pgbouncer.org/config.html#max_packet_size)
+- [`max_prepared_statements`](https://www.pgbouncer.org/config.html#max_prepared_statements)
+- [`max_user_connections`](https://www.pgbouncer.org/config.html#max_user_connections)
+- [`min_pool_size`](https://www.pgbouncer.org/config.html#min_pool_size)
+- [`pkt_buf`](https://www.pgbouncer.org/config.html#pkt_buf)
+- [`query_timeout`](https://www.pgbouncer.org/config.html#query_timeout)
+- [`query_wait_timeout`](https://www.pgbouncer.org/config.html#query_wait_timeout)
+- [`reserve_pool_size`](https://www.pgbouncer.org/config.html#reserve_pool_size)
+- [`reserve_pool_timeout`](https://www.pgbouncer.org/config.html#reserve_pool_timeout)
+- [`sbuf_loopcnt`](https://www.pgbouncer.org/config.html#sbuf_loopcnt)
+- [`server_check_delay`](https://www.pgbouncer.org/config.html#server_check_delay)
+- [`server_check_query`](https://www.pgbouncer.org/config.html#server_check_query)
+- [`server_connect_timeout`](https://www.pgbouncer.org/config.html#server_connect_timeout)
+- [`server_fast_close`](https://www.pgbouncer.org/config.html#server_fast_close)
+- [`server_idle_timeout`](https://www.pgbouncer.org/config.html#server_idle_timeout)
+- [`server_lifetime`](https://www.pgbouncer.org/config.html#server_lifetime)
+- [`server_login_retry`](https://www.pgbouncer.org/config.html#server_login_retry)
+- [`server_reset_query`](https://www.pgbouncer.org/config.html#server_reset_query)
+- [`server_reset_query_always`](https://www.pgbouncer.org/config.html#server_reset_query_always)
+- [`server_round_robin`](https://www.pgbouncer.org/config.html#server_round_robin)
+- [`server_tls_ciphers`](https://www.pgbouncer.org/config.html#server_tls_ciphers)
+- [`server_tls_protocols`](https://www.pgbouncer.org/config.html#server_tls_protocols)
+- [`server_tls_sslmode`](https://www.pgbouncer.org/config.html#server_tls_sslmode)
+- [`stats_period`](https://www.pgbouncer.org/config.html#stats_period)
+- [`suspend_timeout`](https://www.pgbouncer.org/config.html#suspend_timeout)
+- [`tcp_defer_accept`](https://www.pgbouncer.org/config.html#tcp_defer_accept)
+- [`tcp_keepalive`](https://www.pgbouncer.org/config.html#tcp_keepalive)
+- [`tcp_keepcnt`](https://www.pgbouncer.org/config.html#tcp_keepcnt)
+- [`tcp_keepidle`](https://www.pgbouncer.org/config.html#tcp_keepidle)
+- [`tcp_keepintvl`](https://www.pgbouncer.org/config.html#tcp_keepintvl)
+- [`tcp_user_timeout`](https://www.pgbouncer.org/config.html#tcp_user_timeout)
+- [`tcp_socket_buffer`](https://www.pgbouncer.org/config.html#tcp_socket_buffer)
+- [`track_extra_parameters`](https://www.pgbouncer.org/config.html#track_extra_parameters)
+- [`verbose`](https://www.pgbouncer.org/config.html#verbose)
+
+Customizations of the PgBouncer configuration are written declaratively in the
+`.spec.pgbouncer.parameters` map.
+
+The operator reacts to the changes in the pooler specification, and every
+PgBouncer instance reloads the updated configuration without disrupting the
+service.
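+
+For instance, a purely illustrative `.spec.pgbouncer.parameters` map combining
+a few of the options above might look like this (the values are assumptions,
+not recommendations):
+
+```yaml
+  pgbouncer:
+    poolMode: transaction
+    parameters:
+      max_client_conn: "500"
+      default_pool_size: "25"
+      # Close server connections after 5 minutes of inactivity
+      server_idle_timeout: "300"
+      # Give up if no server connection becomes available within 60 seconds
+      query_wait_timeout: "60"
+```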
+
+!!! Warning
+ Every PgBouncer pod has the same configuration, aligned
+ with the parameters in the specification. A mistake in these
+ parameters might disrupt the operability of the whole pooler.
+ The operator doesn't validate the value of any option.
+
+## Monitoring
+
+The PgBouncer implementation of the `Pooler` comes with a default
+Prometheus exporter. It makes several metrics with the `cnp_pgbouncer_`
+prefix available by running:
+
+- `SHOW LISTS` (prefix: `cnp_pgbouncer_lists`)
+- `SHOW POOLS` (prefix: `cnp_pgbouncer_pools`)
+- `SHOW STATS` (prefix: `cnp_pgbouncer_stats`)
+
+Like the {{name.ln}} instance, the exporter runs on port
+`9127` of each pod running PgBouncer and also provides metrics related to the
+Go runtime (with the prefix `go_*`).
+
+!!! Info
+ You can inspect the exported metrics on a pod running PgBouncer. For instructions, see
+ [How to inspect the exported metrics](monitoring.md/#how-to-inspect-the-exported-metrics).
+ Make sure that you use the correct IP and the `9127` port.
+
+This example shows the output for `cnp_pgbouncer` metrics:
+
+```text
+# HELP cnp_pgbouncer_collection_duration_seconds Collection time duration in seconds
+# TYPE cnp_pgbouncer_collection_duration_seconds gauge
+cnp_pgbouncer_collection_duration_seconds{collector="Collect.up"} 0.002338805
+# HELP cnp_pgbouncer_collection_errors_total Total errors occurred accessing PostgreSQL for metrics.
+# TYPE cnp_pgbouncer_collection_errors_total counter
+cnp_pgbouncer_collection_errors_total{collector="sql: Scan error on column index 16, name \"load_balance_hosts\": converting NULL to int is unsupported"} 5
+# HELP cnp_pgbouncer_collections_total Total number of times PostgreSQL was accessed for metrics.
+# TYPE cnp_pgbouncer_collections_total counter
+cnp_pgbouncer_collections_total 5
+# HELP cnp_pgbouncer_last_collection_error 1 if the last collection ended with error, 0 otherwise.
+# TYPE cnp_pgbouncer_last_collection_error gauge
+cnp_pgbouncer_last_collection_error 0
+# HELP cnp_pgbouncer_lists_databases Count of databases.
+# TYPE cnp_pgbouncer_lists_databases gauge
+cnp_pgbouncer_lists_databases 1
+# HELP cnp_pgbouncer_lists_dns_names Count of DNS names in the cache.
+# TYPE cnp_pgbouncer_lists_dns_names gauge
+cnp_pgbouncer_lists_dns_names 0
+# HELP cnp_pgbouncer_lists_dns_pending Not used.
+# TYPE cnp_pgbouncer_lists_dns_pending gauge
+cnp_pgbouncer_lists_dns_pending 0
+# HELP cnp_pgbouncer_lists_dns_queries Count of in-flight DNS queries.
+# TYPE cnp_pgbouncer_lists_dns_queries gauge
+cnp_pgbouncer_lists_dns_queries 0
+# HELP cnp_pgbouncer_lists_dns_zones Count of DNS zones in the cache.
+# TYPE cnp_pgbouncer_lists_dns_zones gauge
+cnp_pgbouncer_lists_dns_zones 0
+# HELP cnp_pgbouncer_lists_free_clients Count of free clients.
+# TYPE cnp_pgbouncer_lists_free_clients gauge
+cnp_pgbouncer_lists_free_clients 49
+# HELP cnp_pgbouncer_lists_free_servers Count of free servers.
+# TYPE cnp_pgbouncer_lists_free_servers gauge
+cnp_pgbouncer_lists_free_servers 0
+# HELP cnp_pgbouncer_lists_login_clients Count of clients in login state.
+# TYPE cnp_pgbouncer_lists_login_clients gauge
+cnp_pgbouncer_lists_login_clients 0
+# HELP cnp_pgbouncer_lists_pools Count of pools.
+# TYPE cnp_pgbouncer_lists_pools gauge
+cnp_pgbouncer_lists_pools 1
+# HELP cnp_pgbouncer_lists_used_clients Count of used clients.
+# TYPE cnp_pgbouncer_lists_used_clients gauge
+cnp_pgbouncer_lists_used_clients 1
+# HELP cnp_pgbouncer_lists_used_servers Count of used servers.
+# TYPE cnp_pgbouncer_lists_used_servers gauge
+cnp_pgbouncer_lists_used_servers 0
+# HELP cnp_pgbouncer_lists_users Count of users.
+# TYPE cnp_pgbouncer_lists_users gauge
+cnp_pgbouncer_lists_users 2
+# HELP cnp_pgbouncer_pools_cl_active Client connections that are linked to server connection and can process queries.
+# TYPE cnp_pgbouncer_pools_cl_active gauge
+cnp_pgbouncer_pools_cl_active{database="pgbouncer",user="pgbouncer"} 1
+# HELP cnp_pgbouncer_pools_cl_active_cancel_req Client connections that have forwarded query cancellations to the server and are waiting for the server response.
+# TYPE cnp_pgbouncer_pools_cl_active_cancel_req gauge
+cnp_pgbouncer_pools_cl_active_cancel_req{database="pgbouncer",user="pgbouncer"} 0
+# HELP cnp_pgbouncer_pools_cl_cancel_req Client connections that have not forwarded query cancellations to the server yet.
+# TYPE cnp_pgbouncer_pools_cl_cancel_req gauge
+cnp_pgbouncer_pools_cl_cancel_req{database="pgbouncer",user="pgbouncer"} 0
+# HELP cnp_pgbouncer_pools_cl_waiting Client connections that have sent queries but have not yet got a server connection.
+# TYPE cnp_pgbouncer_pools_cl_waiting gauge
+cnp_pgbouncer_pools_cl_waiting{database="pgbouncer",user="pgbouncer"} 0
+# HELP cnp_pgbouncer_pools_cl_waiting_cancel_req Client connections that have not forwarded query cancellations to the server yet.
+# TYPE cnp_pgbouncer_pools_cl_waiting_cancel_req gauge
+cnp_pgbouncer_pools_cl_waiting_cancel_req{database="pgbouncer",user="pgbouncer"} 0
+# HELP cnp_pgbouncer_pools_load_balance_hosts Number of hosts not load balancing between hosts
+# TYPE cnp_pgbouncer_pools_load_balance_hosts gauge
+cnp_pgbouncer_pools_load_balance_hosts{database="pgbouncer",user="pgbouncer"} 0
+# HELP cnp_pgbouncer_pools_maxwait How long the first (oldest) client in the queue has waited, in seconds. If this starts increasing, then the current pool of servers does not handle requests quickly enough. The reason may be either an overloaded server or just too small of a pool_size setting.
+# TYPE cnp_pgbouncer_pools_maxwait gauge
+cnp_pgbouncer_pools_maxwait{database="pgbouncer",user="pgbouncer"} 0
+# HELP cnp_pgbouncer_pools_maxwait_us Microsecond part of the maximum waiting time.
+# TYPE cnp_pgbouncer_pools_maxwait_us gauge
+cnp_pgbouncer_pools_maxwait_us{database="pgbouncer",user="pgbouncer"} 0
+# HELP cnp_pgbouncer_pools_pool_mode The pooling mode in use. 1 for session, 2 for transaction, 3 for statement, -1 if unknown
+# TYPE cnp_pgbouncer_pools_pool_mode gauge
+cnp_pgbouncer_pools_pool_mode{database="pgbouncer",user="pgbouncer"} 3
+# HELP cnp_pgbouncer_pools_sv_active Server connections that are linked to a client.
+# TYPE cnp_pgbouncer_pools_sv_active gauge
+cnp_pgbouncer_pools_sv_active{database="pgbouncer",user="pgbouncer"} 0
+# HELP cnp_pgbouncer_pools_sv_active_cancel Server connections that are currently forwarding a cancel request
+# TYPE cnp_pgbouncer_pools_sv_active_cancel gauge
+cnp_pgbouncer_pools_sv_active_cancel{database="pgbouncer",user="pgbouncer"} 0
+# HELP cnp_pgbouncer_pools_sv_idle Server connections that are unused and immediately usable for client queries.
+# TYPE cnp_pgbouncer_pools_sv_idle gauge
+cnp_pgbouncer_pools_sv_idle{database="pgbouncer",user="pgbouncer"} 0
+# HELP cnp_pgbouncer_pools_sv_login Server connections currently in the process of logging in.
+# TYPE cnp_pgbouncer_pools_sv_login gauge
+cnp_pgbouncer_pools_sv_login{database="pgbouncer",user="pgbouncer"} 0
+# HELP cnp_pgbouncer_pools_sv_tested Server connections that are currently running either server_reset_query or server_check_query.
+# TYPE cnp_pgbouncer_pools_sv_tested gauge
+cnp_pgbouncer_pools_sv_tested{database="pgbouncer",user="pgbouncer"} 0
+# HELP cnp_pgbouncer_pools_sv_used Server connections that have been idle for more than server_check_delay, so they need server_check_query to run on them before they can be used again.
+# TYPE cnp_pgbouncer_pools_sv_used gauge
+cnp_pgbouncer_pools_sv_used{database="pgbouncer",user="pgbouncer"} 0
+# HELP cnp_pgbouncer_pools_sv_wait_cancels Servers that normally could become idle, but are waiting to do so until all in-flight cancel requests have completed that were sent to cancel a query on this server.
+# TYPE cnp_pgbouncer_pools_sv_wait_cancels gauge
+cnp_pgbouncer_pools_sv_wait_cancels{database="pgbouncer",user="pgbouncer"} 0
+# HELP cnp_pgbouncer_stats_avg_bind_count Average number of prepared statements readied for execution by clients and forwarded to PostgreSQL by pgbouncer.
+# TYPE cnp_pgbouncer_stats_avg_bind_count gauge
+cnp_pgbouncer_stats_avg_bind_count{database="pgbouncer"} 0
+# HELP cnp_pgbouncer_stats_avg_client_parse_count Average number of prepared statements created by clients.
+# TYPE cnp_pgbouncer_stats_avg_client_parse_count gauge
+cnp_pgbouncer_stats_avg_client_parse_count{database="pgbouncer"} 0
+# HELP cnp_pgbouncer_stats_avg_query_count Average queries per second in last stat period.
+# TYPE cnp_pgbouncer_stats_avg_query_count gauge
+cnp_pgbouncer_stats_avg_query_count{database="pgbouncer"} 0
+# HELP cnp_pgbouncer_stats_avg_query_time Average query duration, in microseconds.
+# TYPE cnp_pgbouncer_stats_avg_query_time gauge
+cnp_pgbouncer_stats_avg_query_time{database="pgbouncer"} 0
+# HELP cnp_pgbouncer_stats_avg_recv Average received (from clients) bytes per second.
+# TYPE cnp_pgbouncer_stats_avg_recv gauge
+cnp_pgbouncer_stats_avg_recv{database="pgbouncer"} 0
+# HELP cnp_pgbouncer_stats_avg_sent Average sent (to clients) bytes per second.
+# TYPE cnp_pgbouncer_stats_avg_sent gauge
+cnp_pgbouncer_stats_avg_sent{database="pgbouncer"} 0
+# HELP cnp_pgbouncer_stats_avg_server_parse_count Average number of prepared statements created by pgbouncer on a server.
+# TYPE cnp_pgbouncer_stats_avg_server_parse_count gauge
+cnp_pgbouncer_stats_avg_server_parse_count{database="pgbouncer"} 0
+# HELP cnp_pgbouncer_stats_avg_wait_time Time spent by clients waiting for a server, in microseconds (average per second).
+# TYPE cnp_pgbouncer_stats_avg_wait_time gauge
+cnp_pgbouncer_stats_avg_wait_time{database="pgbouncer"} 0
+# HELP cnp_pgbouncer_stats_avg_xact_count Average transactions per second in last stat period.
+# TYPE cnp_pgbouncer_stats_avg_xact_count gauge
+cnp_pgbouncer_stats_avg_xact_count{database="pgbouncer"} 0
+# HELP cnp_pgbouncer_stats_avg_xact_time Average transaction duration, in microseconds.
+# TYPE cnp_pgbouncer_stats_avg_xact_time gauge
+cnp_pgbouncer_stats_avg_xact_time{database="pgbouncer"} 0
+# HELP cnp_pgbouncer_stats_total_bind_count Total number of prepared statements readied for execution by clients and forwarded to PostgreSQL by pgbouncer
+# TYPE cnp_pgbouncer_stats_total_bind_count gauge
+cnp_pgbouncer_stats_total_bind_count{database="pgbouncer"} 0
+# HELP cnp_pgbouncer_stats_total_client_parse_count Total number of prepared statements created by clients.
+# TYPE cnp_pgbouncer_stats_total_client_parse_count gauge
+cnp_pgbouncer_stats_total_client_parse_count{database="pgbouncer"} 0
+# HELP cnp_pgbouncer_stats_total_query_count Total number of SQL queries pooled by pgbouncer.
+# TYPE cnp_pgbouncer_stats_total_query_count gauge
+cnp_pgbouncer_stats_total_query_count{database="pgbouncer"} 15
+# HELP cnp_pgbouncer_stats_total_query_time Total number of microseconds spent by pgbouncer when actively connected to PostgreSQL, executing queries.
+# TYPE cnp_pgbouncer_stats_total_query_time gauge
+cnp_pgbouncer_stats_total_query_time{database="pgbouncer"} 0
+# HELP cnp_pgbouncer_stats_total_received Total volume in bytes of network traffic received by pgbouncer.
+# TYPE cnp_pgbouncer_stats_total_received gauge
+cnp_pgbouncer_stats_total_received{database="pgbouncer"} 0
+# HELP cnp_pgbouncer_stats_total_sent Total volume in bytes of network traffic sent by pgbouncer.
+# TYPE cnp_pgbouncer_stats_total_sent gauge
+cnp_pgbouncer_stats_total_sent{database="pgbouncer"} 0
+# HELP cnp_pgbouncer_stats_total_server_parse_count Total number of prepared statements created by pgbouncer on a server.
+# TYPE cnp_pgbouncer_stats_total_server_parse_count gauge
+cnp_pgbouncer_stats_total_server_parse_count{database="pgbouncer"} 0
+# HELP cnp_pgbouncer_stats_total_wait_time Time spent by clients waiting for a server, in microseconds.
+# TYPE cnp_pgbouncer_stats_total_wait_time gauge
+cnp_pgbouncer_stats_total_wait_time{database="pgbouncer"} 0
+# HELP cnp_pgbouncer_stats_total_xact_count Total number of SQL transactions pooled by pgbouncer.
+# TYPE cnp_pgbouncer_stats_total_xact_count gauge
+cnp_pgbouncer_stats_total_xact_count{database="pgbouncer"} 15
+# HELP cnp_pgbouncer_stats_total_xact_time Total number of microseconds spent by pgbouncer when connected to PostgreSQL in a transaction, either idle in transaction or executing queries.
+# TYPE cnp_pgbouncer_stats_total_xact_time gauge
+cnp_pgbouncer_stats_total_xact_time{database="pgbouncer"} 0
+```
+
+!!! Info
+    For a better understanding of the metrics, refer to the PgBouncer documentation.
+
+As with clusters, a specific pooler can be monitored using the
+[Prometheus operator's](https://github.com/prometheus-operator/prometheus-operator)
+[`PodMonitor` resource](https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/api-reference/api.md#monitoring.coreos.com/v1.PodMonitor).
+
+You can deploy a `PodMonitor` for a specific pooler using the following basic example, and change it as needed:
+
+```yaml
+apiVersion: monitoring.coreos.com/v1
+kind: PodMonitor
+metadata:
+ name:
+spec:
+ selector:
+ matchLabels:
+ k8s.enterprisedb.io/poolerName:
+ podMetricsEndpoints:
+ - port: metrics
+```
+
+### Deprecation of Automatic `PodMonitor` Creation
+
+!!!warning "Feature Deprecation Notice"
+ The `.spec.monitoring.enablePodMonitor` field in the `Pooler` resource is
+ now deprecated and will be removed in a future version of the operator.
+
+If you are currently using this feature, we strongly recommend you either
+remove or set `.spec.monitoring.enablePodMonitor` to `false` and manually
+create a `PodMonitor` resource for your pooler as described above.
+This change ensures that you have complete ownership of your monitoring
+configuration, preventing it from being managed or overwritten by the operator.
+
+## Logging
+
+Logs are directly sent to standard output, in JSON format, like in the
+following example:
+
+```json
+{
+ "level": "info",
+ "ts": SECONDS.MICROSECONDS,
+ "msg": "record",
+ "pipe": "stderr",
+ "record": {
+ "timestamp": "YYYY-MM-DD HH:MM:SS.MS UTC",
+ "pid": "",
+ "level": "LOG",
+ "msg": "kernel file descriptor limit: 1048576 (hard: 1048576); max_client_conn: 100, max expected fd use: 112"
+ }
+}
+```
+
+## Pausing connections
+
+The `Pooler` specification allows you to take advantage of PgBouncer's `PAUSE`
+and `RESUME` commands, using only declarative configuration. You can do this
+using the `paused` option, which by default is set to `false`. When set to
+`true`, the operator internally invokes the `PAUSE` command in PgBouncer,
+which:
+
+1. Closes all active connections toward the PostgreSQL server, after waiting
+ for the queries to complete
+2. Pauses any new connection coming from the client
+
+When the `paused` option is reset to `false`, the operator invokes the
+`RESUME` command in PgBouncer, reopening the taps toward the PostgreSQL
+service defined in the `Pooler` resource.
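+
+As a minimal sketch (the pooler and cluster names are placeholders), pausing a
+pooler declaratively could look like this:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Pooler
+metadata:
+  name: pooler-example-rw
+spec:
+  cluster:
+    name: cluster-example
+  instances: 1
+  type: rw
+  # Setting this back to false triggers the RESUME command
+  paused: true
+  pgbouncer:
+    poolMode: session
+```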
+
+!!! Seealso "PAUSE"
+ For more information, see
+ [`PAUSE` in the PgBouncer documentation](https://www.pgbouncer.org/usage.html#pause-db).
+
+!!! Important
+ In future versions, the switchover operation will be fully integrated
+ with the PgBouncer pooler and take advantage of the `PAUSE`/`RESUME`
+ features to reduce the perceived downtime by client applications.
+ Currently, you can achieve the same results by setting the `paused`
+ attribute to `true`, issuing the switchover command through the
+ [`cnp` plugin](kubectl-plugin.md#promote), and then restoring the `paused`
+ attribute to `false`.
+
+## Limitations
+
+### Single PostgreSQL cluster
+
+The current implementation of the pooler is designed to work as part of a
+specific {{name.ln}} cluster (a service). It isn't currently possible to
+create a pooler that spans multiple clusters.
+
+### Controlled configurability
+
+{{name.ln}} transparently manages several configuration options that are used
+for the PgBouncer layer to communicate with PostgreSQL. Such options aren't
+configurable from outside and include TLS certificates, authentication
+settings, the `databases` section, and the `users` section. Also, given the
+specific use case of a single PostgreSQL cluster, the adopted criterion is
+to explicitly list the options that users can configure.
+
+!!! Note
+    The adopted solution likely addresses the majority of use cases. It
+    leaves room for the future implementation of a separate operator for
+    PgBouncer to cover more advanced and customized scenarios.
diff --git a/product_docs/docs/postgres_for_kubernetes/1/container_images.mdx b/product_docs/docs/postgres_for_kubernetes/1/container_images.mdx
new file mode 100644
index 0000000000..a532e091ae
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/container_images.mdx
@@ -0,0 +1,68 @@
+---
+title: 'Container Image Requirements'
+originalFilePath: 'src/container_images.md'
+---
+
+
+
+The {{name.ln}} operator for Kubernetes is designed to work with any
+compatible PostgreSQL container image that meets the following requirements:
+
+- PostgreSQL executables must be available in the system path:
+ - `initdb`
+ - `postgres`
+ - `pg_ctl`
+ - `pg_controldata`
+ - `pg_basebackup`
+- Proper locale settings configured
+
+Optional Components:
+
+- [PGAudit](https://www.pgaudit.org/) extension (only required if audit logging
+ is needed)
+- `du` (used for `kubectl cnp status`)
+
+!!! Important
+ Only [PostgreSQL versions officially supported by PGDG](https://postgresql.org/) are allowed.
+
+!!! Info
+ Barman Cloud executables are no longer required in {{name.ln}}. The
+ recommended approach is to use the dedicated [Barman Cloud Plugin](https://github.com/cloudnative-pg/plugin-barman-cloud).
+
+No entry point or command is required in the image definition. {{name.ln}}
+automatically overrides it with its instance manager.
+
+!!! Warning
+ {{name.ln}} only supports **Primary with multiple/optional Hot Standby
+ Servers architecture** for PostgreSQL application container images.
+
+## Image Tag Requirements
+
+To ensure the operator makes informed decisions, it must accurately detect the
+PostgreSQL major version. This detection can occur in two ways:
+
+1. Utilizing the `major` field of the `imageCatalogRef`, if defined.
+2. Auto-detecting the major version from the image tag of the `imageName` if
+ not explicitly specified.
+
+For auto-detection to work, the image tag must adhere to a specific format. It
+must begin with a valid PostgreSQL major version number (e.g., 15.6 or
+16), optionally followed by a dot and the patch level.
+
+Following this, the tag can include any character combination valid and
+accepted in a Docker tag, preceded by a dot, an underscore, or a minus sign.
+
+Examples of accepted image tags:
+
+- `12.1`
+- `13.3.2.1-1`
+- `13.4`
+- `14`
+- `15.5-10`
+- `16.0`
+
+!!! Warning
+ `latest` is not considered a valid tag for the image.
+
+!!! Note
+ Image tag requirements do not apply for images defined in a catalog.
diff --git a/product_docs/docs/postgres_for_kubernetes/1/controller.mdx b/product_docs/docs/postgres_for_kubernetes/1/controller.mdx
new file mode 100644
index 0000000000..6d33d7e139
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/controller.mdx
@@ -0,0 +1,128 @@
+---
+title: 'Custom Pod Controller'
+originalFilePath: 'src/controller.md'
+---
+
+
+
+Kubernetes uses the
+[Controller pattern](https://kubernetes.io/docs/concepts/architecture/controller/)
+to align the current cluster state with the desired one.
+
+Stateful applications are usually managed with the
+[`StatefulSet`](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/)
+controller, which creates and reconciles a set of Pods built from the same
+specification, and assigns them a sticky identity.
+
+{{name.ln}} implements its own custom controller to manage PostgreSQL
+instances, instead of relying on the `StatefulSet` controller.
+While bringing more complexity to the implementation, this design choice
+provides the operator with more flexibility in how the cluster is managed,
+while remaining transparent about the topology of PostgreSQL clusters.
+
+Like many design choices, this one involves compromises. The following
+sections discuss a few points where we believe this choice has made the
+implementation of {{name.ln}} more reliable and easier to understand.
+
+## PVC resizing
+
+This is a well-known limitation of `StatefulSet`: it does not support resizing
+PVCs. This is inconvenient for a database, as resizing volumes requires
+convoluted workarounds.
+
+In contrast, {{name.ln}} leverages the configured storage class to
+manage the underlying PVCs directly, and can handle PVC resizing if
+the storage class supports it.
+
+## Primary Instances versus Replicas
+
+The `StatefulSet` controller is designed to create a set of Pods
+from just one template. Given that we use one `Pod` per PostgreSQL instance,
+we have two kinds of Pods:
+
+1. primary instance (only one)
+2. replicas (multiple, optional)
+
+This difference is relevant when deciding the correct deployment strategy to
+execute for a given operation.
+
+Some operations should be performed on the replicas first,
+and then on the primary, but only after an updated replica is promoted
+as the new primary.
+For example, when you want to apply a different PostgreSQL image version,
+or when you increase configuration parameters like `max_connections` (which are
+[treated specially by PostgreSQL because {{name.ln}} uses hot standby
+replicas](https://www.postgresql.org/docs/current/hot-standby.html)).
+
+While doing that, {{name.ln}} considers the PostgreSQL instance's
+role - and not just its serial number.
+
+Sometimes the operator needs to follow the opposite process: work on the
+primary first and then on the replicas. For example, when you
+lower `max_connections`. In that case, {{name.ln}} will:
+
+- apply the new setting to the primary instance
+- restart it
+- apply the new setting on the replicas
+
+The `StatefulSet` controller, being application-independent, can't
+incorporate this behavior, which is specific to PostgreSQL's native
+replication technology.
+
+## Coherence of PVCs
+
+PostgreSQL instances can be configured to work with multiple PVCs: this is how
+WAL storage can be separated from `PGDATA`.
+
+The two data stores need to be coherent from the PostgreSQL point of view,
+as they're used simultaneously. If you delete the PVC corresponding to
+the WAL storage of an instance, the PVC where `PGDATA` is stored will not be
+usable anymore.
+
+This behavior is specific to PostgreSQL and is not implemented in the
+`StatefulSet` controller - the latter not being application specific.
+
+If a user dropped a PVC, a `StatefulSet` would simply recreate it, leading
+to a corrupted PostgreSQL instance.
+
+{{name.ln}} would instead classify the remaining PVC as unusable, and
+start creating a new pair of PVCs for another instance to join the cluster
+correctly.
+
+## Local storage, remote storage, and database size
+
+Sometimes you need to take down a Kubernetes node to do an upgrade.
+After the upgrade, depending on your upgrade strategy, the updated node
+could go up again, or a new node could replace it.
+
+Supposing the unavailable node was hosting a PostgreSQL instance,
+depending on your database size and your cloud infrastructure, you
+may prefer to choose one of the following actions:
+
+1. drop the PVC and the Pod residing on the downed node;
+ create a new PVC cloning the data from another PVC;
+ after that, schedule a Pod for it
+
+2. drop the Pod, schedule the Pod in a different node, and mount
+ the PVC from there
+
+3. leave the Pod and the PVC as they are, and wait for the node to
+ be back up.
+
+The first solution is practical when your database size permits, allowing
+you to immediately bring back the desired number of replicas.
+
+The second solution is only feasible when you're not using the storage of the
+local node, and re-mounting the PVC in another host is possible in a reasonable
+amount of time (which only you and your organization know).
+
+The third solution is appropriate when the database is big and uses local
+node storage for maximum performance and data durability.
+
+The {{name.ln}} controller implements all these strategies so that the
+user can select the preferred behavior at the cluster level (read the
+["Kubernetes upgrade"](kubernetes_upgrade.md) section for details).
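+
+For instance, the third strategy above (wait for the node to come back) can be
+requested through the `nodeMaintenanceWindow` stanza of the `Cluster`. A
+minimal sketch, with a placeholder cluster name:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+  name: cluster-example
+spec:
+  instances: 3
+  storage:
+    size: 1Gi
+  nodeMaintenanceWindow:
+    inProgress: true
+    # Keep the existing PVC and wait for the node to be back up
+    reusePVC: true
+```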
+
+Being generic, the `StatefulSet` doesn't allow this level of
+customization.
diff --git a/product_docs/docs/postgres_for_kubernetes/1/css/override.css b/product_docs/docs/postgres_for_kubernetes/1/css/override.css
new file mode 100644
index 0000000000..f7389b7b39
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/css/override.css
@@ -0,0 +1,3 @@
+.wy-table-responsive table td, .wy-table-responsive table th {
+ white-space: normal;
+}
diff --git a/product_docs/docs/postgres_for_kubernetes/1/database_import.mdx b/product_docs/docs/postgres_for_kubernetes/1/database_import.mdx
new file mode 100644
index 0000000000..bd1336fddc
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/database_import.mdx
@@ -0,0 +1,443 @@
+---
+title: 'Importing Postgres databases'
+originalFilePath: 'src/database_import.md'
+---
+
+
+
+This section describes how to import one or more existing PostgreSQL
+databases inside a brand new {{name.ln}} cluster.
+
+The import operation is based on the concept of online logical backups in PostgreSQL,
+and relies on `pg_dump` via a network connection to the origin host, and `pg_restore`.
+Thanks to native Multi-Version Concurrency Control (MVCC) and snapshots,
+PostgreSQL enables taking consistent backups over the network, in a concurrent
+manner, without stopping any write activity.
+
+Logical backups are also the most common, flexible and reliable technique to
+perform major upgrades of PostgreSQL versions.
+
+As a result, the instructions in this section are suitable for both:
+
+- importing one or more databases from an existing PostgreSQL instance, even
+ outside Kubernetes
+- importing the database from any PostgreSQL version to one that is either the
+ same or newer, enabling *major upgrades* of PostgreSQL (e.g. from version 13.x
+ to version 17.x)
+
+!!! Warning
+ When performing major upgrades of PostgreSQL you are responsible for making
+ sure that applications are compatible with the new version and that the
+ upgrade path of the objects contained in the database (including extensions) is
+ feasible.
+
+In both cases, the operation is performed on a consistent **snapshot** of the
+origin database.
+
+!!! Important
+    For this reason, we suggest that you stop write operations on the source
+    before the final import in the `Cluster` resource, as changes made to the
+    source database after the start of the backup will not be present in the
+    destination cluster - hence why this feature is referred to as "offline
+    import" or "offline major upgrade".
+
+## How it works
+
+Conceptually, the import requires you to create a new cluster from scratch
+(*destination cluster*), using the [`initdb` bootstrap method](bootstrap.md),
+and then complete the `initdb.import` subsection to import objects from an
+existing Postgres cluster (*source cluster*). As per PostgreSQL recommendation,
+we suggest that the PostgreSQL major version of the *destination cluster* be
+greater than or equal to that of the *source cluster*.
+
+{{name.ln}} provides two main ways to import objects from the source cluster
+into the destination cluster:
+
+- **microservice approach**: the destination cluster is designed to host a
+ single application database owned by the specified application user, as
+ recommended by the {{name.ln}} project
+
+- **monolith approach**: the destination cluster is designed to host multiple
+ databases and different users, imported from the source cluster
+
+The first import method is available via the `microservice` type, the
+second via the `monolith` type.
+
+!!! Warning
+ It is your responsibility to ensure that the destination cluster can
+ access the source cluster with a superuser or a user having enough
+ privileges to take a logical backup with `pg_dump`. Please refer to the
+ [PostgreSQL documentation on `pg_dump`](https://www.postgresql.org/docs/current/app-pgdump.html)
+ for further information.
+
+## The `microservice` type
+
+With the microservice approach, you can specify a single database you want to
+import from the source cluster into the destination cluster. The operation is
+performed in the following steps:
+
+- `initdb` bootstrap of the new cluster
+- export of the selected database (in `initdb.import.databases`) using
+ `pg_dump -Fd`
+- import of the database using `pg_restore --no-acl --no-owner` into the
+ `initdb.database` (application database) owned by the `initdb.owner` user
+- cleanup of the database dump file
+- optional execution of user-defined SQL queries in the application
+  database via the `postImportApplicationSQL` parameter
+- execution of `ANALYZE VERBOSE` on the imported database
+
+In the figure below, a single PostgreSQL cluster containing *N* databases is
+imported into separate {{name.ln}} clusters, with each cluster using a
+microservice import for one of the *N* source databases.
+
+
+
+For example, the YAML below creates a new 3 instance PostgreSQL cluster (latest
+available major version at the time the operator was released) called
+`cluster-microservice` that imports the `angus` database from the
+`cluster-pg96` cluster (with the unsupported PostgreSQL 9.6), by connecting to
+the `postgres` database using the `postgres` user, via the password stored in
+the `cluster-pg96-superuser` secret.
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+ name: cluster-microservice
+spec:
+ instances: 3
+
+ bootstrap:
+ initdb:
+ import:
+ type: microservice
+ databases:
+ - angus
+ source:
+ externalCluster: cluster-pg96
+ #postImportApplicationSQL:
+ #- |
+ # INSERT YOUR SQL QUERIES HERE
+ storage:
+ size: 1Gi
+ externalClusters:
+ - name: cluster-pg96
+ connectionParameters:
+ # Use the correct IP or host name for the source database
+ host: pg96.local
+ user: postgres
+ dbname: postgres
+ password:
+ name: cluster-pg96-superuser
+ key: password
+```
+
+!!! Warning
+ The example above deliberately uses a source database running a version of
+ PostgreSQL that is not supported anymore by the Community, and consequently by
+ {{name.ln}}.
+ Data export from the source instance is performed using the version of
+ `pg_dump` in the destination cluster, which must be a supported one, and
+ equal or greater than the source one.
+ Based on our experience, this way of exporting data should work on older
+ and unsupported versions of Postgres too, giving you the chance to move your
+ legacy data to a better system, inside Kubernetes.
+ This is the main reason why we used 9.6 in the examples of this section.
+ We'd be interested to hear from you, should you experience any issues in this area.
+
+There are a few things you need to be aware of when using the `microservice` type:
+
+- It requires an `externalCluster` that points to an existing PostgreSQL
+ instance containing the data to import (for more information, please refer to
+ ["The `externalClusters` section"](bootstrap.md#the-externalclusters-section))
+- Traffic must be allowed between the Kubernetes cluster and the
+ `externalCluster` during the operation
+- Connection to the source database must be granted with the specified user
+ that needs to run `pg_dump` and read roles information (*superuser* is OK)
+- Currently, the `pg_dump -Fd` result is stored temporarily inside the `dumps`
+ folder in the `PGDATA` volume, so there should be enough available space to
+ temporarily contain the dump result on the assigned node, as well as the
+ restored data and indexes. Once the import operation is completed, this
+ folder is automatically deleted by the operator.
+- Only one database can be specified inside the `initdb.import.databases` array
+- Roles are not imported - and as such they cannot be specified inside `initdb.import.roles`
+
+!!! Hint
+ The microservice approach adheres to {{name.ln}} conventions and defaults
+ for the destination cluster. If you do not set `initdb.database` or
+ `initdb.owner` for the destination cluster, both parameters will default to
+ `app`.
+
+## The `monolith` type
+
+With the monolith approach, you can specify a set of roles and databases you
+want to import from the source cluster into the destination cluster.
+The operation is performed in the following steps:
+
+- `initdb` bootstrap of the new cluster
+- export and import of the selected roles
+- export of the selected databases (in `initdb.import.databases`), one at a time,
+ using `pg_dump -Fd`
+- create each of the selected databases and import data using `pg_restore`
+- run `ANALYZE` on each imported database
+- cleanup of the database dump files
+
+
+
+For example, the YAML below creates a new 3 instance PostgreSQL cluster (latest
+available major version at the time the operator was released) called
+`cluster-monolith` that imports the `accountant` and the `bank_user` roles,
+as well as the `accounting`, `banking`, `resort` databases from the
+`cluster-pg96` cluster (with the unsupported PostgreSQL 9.6), by connecting to
+the `postgres` database using the `postgres` user, via the password stored in
+the `cluster-pg96-superuser` secret.
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+ name: cluster-monolith
+spec:
+ instances: 3
+ bootstrap:
+ initdb:
+ import:
+ type: monolith
+ databases:
+ - accounting
+ - banking
+ - resort
+ roles:
+ - accountant
+ - bank_user
+ source:
+ externalCluster: cluster-pg96
+ storage:
+ size: 1Gi
+ externalClusters:
+ - name: cluster-pg96
+ connectionParameters:
+ # Use the correct IP or host name for the source database
+ host: pg96.local
+ user: postgres
+ dbname: postgres
+ sslmode: require
+ password:
+ name: cluster-pg96-superuser
+ key: password
+```
+
+There are a few things you need to be aware of when using the `monolith` type:
+
+- It requires an `externalCluster` that points to an existing PostgreSQL
+ instance containing the data to import (for more information, please refer to
+ ["The `externalClusters` section"](bootstrap.md#the-externalclusters-section))
+- Traffic must be allowed between the Kubernetes cluster and the
+ `externalCluster` during the operation
+- Connection to the source database must be granted with the specified user
+ that needs to run `pg_dump` and retrieve roles information (*superuser* is
+ OK)
+- Currently, the `pg_dump -Fd` result is stored temporarily inside the `dumps`
+ folder in the `PGDATA` volume of the destination cluster's instances, so
+ there should be enough available space to
+ temporarily contain the dump result on the assigned node, as well as the
+ restored data and indexes. Once the import operation is completed, this
+ folder is automatically deleted by the operator.
+- At least one database must be specified in the `initdb.import.databases` array
+- Any role that is required by the imported databases must be specified inside
+ `initdb.import.roles`, with the limitations below:
+ - The following roles, if present, are not imported:
+ `postgres`, `streaming_replica`, `cnp_pooler_pgbouncer`
+ - The `SUPERUSER` option is removed from any imported role
+- Wildcard `"*"` can be used as the only element in the `databases` and/or
+  `roles` arrays to import every object of that kind; when matching databases,
+  the wildcard ignores the `postgres` database, template databases,
+  and databases that do not allow connections
+- After the clone procedure is done, `ANALYZE VERBOSE` is executed for every
+ database.
+- The `postImportApplicationSQL` field is not supported
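+
+For example, the wildcard import described above could be sketched as follows
+(connection details for the `cluster-pg96` external cluster are omitted):
+
+```yaml
+  bootstrap:
+    initdb:
+      import:
+        type: monolith
+        databases:
+          - "*"
+        roles:
+          - "*"
+        source:
+          externalCluster: cluster-pg96
+```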
+
+!!! Hint
+ The databases and their owners are preserved exactly as they exist in the
+ source cluster—no `app` database or user will be created during import. If your
+ `bootstrap.initdb` stanza specifies custom `database` and `owner` values that
+ do not match any of the databases or users being imported, the instance
+ manager will create a new, empty application database and owner role with those
+ specified names, while leaving the imported databases and owners unchanged.
+
+## A practical example
+
+There is nothing to stop you from using the `monolith` approach to import a
+single database. It is interesting to see how the results of doing so would
+differ from using the `microservice` approach.
+
+Given a source cluster, for example the following, with a database named
+`mydb` owned by role `me`:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+ name: cluster-example
+spec:
+ instances: 1
+
+ postgresql:
+ pg_hba:
+ - host all all all trust
+
+ storage:
+ size: 1Gi
+
+ bootstrap:
+ initdb:
+ database: mydb
+ owner: me
+```
+
+We can import it via `microservice`:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+ name: cluster-example-microservice
+spec:
+ instances: 1
+
+ storage:
+ size: 1Gi
+
+ bootstrap:
+ initdb:
+ import:
+ type: microservice
+ databases:
+ - mydb
+ source:
+ externalCluster: cluster-example
+
+ externalClusters:
+ - name: cluster-example
+ connectionParameters:
+ host: cluster-example-rw
+ dbname: postgres
+```
+
+as well as via monolith:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+ name: cluster-example-monolith
+spec:
+ instances: 1
+
+ storage:
+ size: 1Gi
+
+ bootstrap:
+ initdb:
+ import:
+ type: monolith
+ databases:
+ - mydb
+ roles:
+ - me
+ source:
+ externalCluster: cluster-example
+
+ externalClusters:
+ - name: cluster-example
+ connectionParameters:
+ host: cluster-example-rw
+ dbname: postgres
+```
+
+In both cases, the database's contents will be imported, but:
+
+- In the microservice case, the imported database's name and owner both become
+  `app`, or whatever values are set for the `database` and `owner` fields in
+  the `bootstrap.initdb` stanza.
+- In the monolith case, the database and owner are kept exactly as in the source
+ cluster, i.e. `mydb` and `me` respectively. No `app` database nor user will be
+ created. If there are custom settings for `database` and `owner` in the
+ `bootstrap.initdb` stanza that don't match the source databases/owners to
+ import, the instance manager will create a new empty application database and
+ owner role, but will leave the imported databases/owners intact.
+
+## Import optimizations
+
+During the logical import of a database, {{name.ln}} optimizes the
+configuration of PostgreSQL in order to prioritize speed over data
+durability, by forcing:
+
+- `archive_mode` to `off`
+- `fsync` to `off`
+- `full_page_writes` to `off`
+- `max_wal_senders` to `0`
+- `wal_level` to `minimal`
+
+Before completing the import job, {{name.ln}} restores the expected
+configuration, then runs `initdb --sync-only` to ensure that data is
+permanently written on disk.
+
+!!! Important
+ WAL archiving, if requested, and WAL level will be honored after the
+ database import process has completed. Similarly, replicas will be cloned
+ after the bootstrap phase, when the actual cluster resource starts.
+
+There are other optimizations you can do during the import phase. Although this
+topic is beyond the scope of {{name.ln}}, we recommend that you reduce
+unnecessary writes in the checkpoint area by tuning Postgres GUCs like
+`shared_buffers`, `max_wal_size`, `checkpoint_timeout` directly in the
+`Cluster` configuration.
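+
+As a sketch, such settings can be declared in the `postgresql.parameters`
+section of the `Cluster` resource (the values below are purely illustrative
+and should be tuned to your workload and resources):
+
+```yaml
+  postgresql:
+    parameters:
+      shared_buffers: "512MB"
+      max_wal_size: "4GB"
+      checkpoint_timeout: "15min"
+```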
+
+## Customizing `pg_dump` and `pg_restore` Behavior
+
+You can customize the behavior of `pg_dump` and `pg_restore` by specifying
+additional options using the `pgDumpExtraOptions` and `pgRestoreExtraOptions`
+parameters. For instance, you can enable parallel jobs to speed up data
+import/export processes, as shown in the following example:
+
+```yaml
+ #
+ bootstrap:
+ initdb:
+ import:
+ type: microservice
+ databases:
+ - app
+ source:
+ externalCluster: cluster-example
+ pgDumpExtraOptions:
+ - '--jobs=2'
+ pgRestoreExtraOptions:
+ - '--jobs=2'
+ #
+```
+
+!!! Warning
+ Use the `pgDumpExtraOptions` and `pgRestoreExtraOptions` fields with
+ caution and at your own risk. These options are not validated or verified by
+ the operator, and some configurations may conflict with its intended
+ functionality or behavior. Always test thoroughly in a safe and controlled
+ environment before applying them in production.
+
+## Online Import and Upgrades
+
+Logical replication offers a powerful way to import any PostgreSQL database
+accessible over the network using the following approach:
+
+- **Import Bootstrap with Schema-Only Option**: Initialize the schema in the
+ target database before replication begins.
+- **`Subscription` Resource**: Set up continuous replication to synchronize
+ data changes.
+
+This technique can also be leveraged for performing major PostgreSQL upgrades
+with minimal downtime, making it ideal for seamless migrations and system
+upgrades.
+
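+A high-level sketch of the replication piece, assuming the destination was
+bootstrapped with `initdb.import` and `schemaOnly` enabled, and that a
+matching `Publication` exists on the source (`sub`, `pub`, `app`, and the
+cluster names are placeholders):
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Subscription
+metadata:
+  name: sub
+spec:
+  name: sub
+  dbname: app
+  publicationName: pub
+  externalClusterName: cluster-source
+  cluster:
+    name: cluster-destination
+```
+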
+For more details, including limitations and best practices, refer to the
+[Logical Replication](logical_replication.md) section in the documentation.
diff --git a/product_docs/docs/postgres_for_kubernetes/1/declarative_database_management.mdx b/product_docs/docs/postgres_for_kubernetes/1/declarative_database_management.mdx
new file mode 100644
index 0000000000..643604c658
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/declarative_database_management.mdx
@@ -0,0 +1,318 @@
+---
+title: 'PostgreSQL Database Management'
+originalFilePath: 'src/declarative_database_management.md'
+---
+
+
+
+{{name.ln}} simplifies PostgreSQL database provisioning by automatically
+creating an application database named `app` by default. This default behavior
+is explained in the ["Bootstrap an Empty Cluster"](bootstrap.md#bootstrap-an-empty-cluster-initdb)
+section.
+
+For more advanced use cases, {{name.ln}} introduces **declarative database
+management**, which empowers users to define and control the lifecycle of
+PostgreSQL databases using the `Database` Custom Resource Definition (CRD).
+This method seamlessly integrates with Kubernetes, providing a scalable,
+automated, and consistent approach to managing PostgreSQL databases.
+
+* * *
+
+## Key Concepts
+
+### Scope of Management
+
+!!! Important
+ {{name.ln}} manages **global objects** in PostgreSQL clusters, including
+ databases, roles, and tablespaces. However, it does **not** manage database content
+ beyond extensions and schemas (e.g., tables). To manage database content, use specialized
+ tools or rely on the applications themselves.
+
+### Declarative `Database` Manifest
+
+The following example demonstrates how a `Database` resource interacts with a
+`Cluster`:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Database
+metadata:
+ name: cluster-example-one
+spec:
+ name: one
+ owner: app
+ cluster:
+ name: cluster-example
+ extensions:
+ - name: bloom
+ ensure: present
+```
+
+When applied, this manifest creates a `Database` object called
+`cluster-example-one` requesting a database named `one`, owned by the `app`
+role, in the `cluster-example` PostgreSQL cluster.
+
+!!! Info
+    Please refer to the [API reference](pg4k.v1.md#postgresql-k8s-enterprisedb-io-v1-DatabaseSpec)
+    for the full list of attributes you can define for each `Database` object.
+
+### Required Fields in the `Database` Manifest
+
+- `metadata.name`: Unique name of the Kubernetes object within its namespace.
+- `spec.name`: Name of the database as it will appear in PostgreSQL.
+- `spec.owner`: PostgreSQL role that owns the database.
+- `spec.cluster.name`: Name of the target PostgreSQL cluster.
+
+The `Database` object must reference a specific `Cluster`, determining where
+the database will be created. It is managed by the cluster's primary instance,
+ensuring the database is created or updated as needed.
+
+!!! Info
+ The distinction between `metadata.name` and `spec.name` allows multiple
+ `Database` resources to reference databases with the same name across different
+ {{name.ln}} clusters in the same Kubernetes namespace.
+
+## Reserved Database Names
+
+PostgreSQL automatically creates databases such as `postgres`, `template0`, and
+`template1`. These names are reserved and cannot be used for new `Database`
+objects in {{name.ln}}.
+
+!!! Important
+ Creating a `Database` with `spec.name` set to `postgres`, `template0`, or
+ `template1` is not allowed.
+
+## Reconciliation and Status
+
+Once a `Database` object is reconciled successfully:
+
+- `status.applied` will be set to `true`.
+- `status.observedGeneration` will match the `metadata.generation` of the last
+ applied configuration.
+
+Example of a reconciled `Database` object:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Database
+metadata:
+ generation: 1
+ name: cluster-example-one
+spec:
+ cluster:
+ name: cluster-example
+ name: one
+ owner: app
+status:
+ observedGeneration: 1
+ applied: true
+```
+
+If an error occurs during reconciliation, `status.applied` will be `false`, and
+an error message will be included in the `status.message` field.
+
+## Deleting a Database
+
+{{name.ln}} supports two methods for database deletion:
+
+1. Using the `delete` reclaim policy
+2. Declaratively setting the database's `ensure` field to `absent`
+
+### Deleting via `delete` Reclaim Policy
+
+The `databaseReclaimPolicy` field determines the behavior when a `Database`
+object is deleted:
+
+- `retain` (default): The database remains in PostgreSQL for manual management.
+- `delete`: The database is automatically removed from PostgreSQL.
+
+Example:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Database
+metadata:
+ name: cluster-example-two
+spec:
+ databaseReclaimPolicy: delete
+ name: two
+ owner: app
+ cluster:
+ name: cluster-example
+```
+
+Deleting this `Database` object will automatically remove the `two` database
+from the `cluster-example` cluster.
+
+### Declaratively Setting `ensure: absent`
+
+To remove a database, set the `ensure` field to `absent`, as in the following
+example:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Database
+metadata:
+ name: cluster-example-database-to-drop
+spec:
+ cluster:
+ name: cluster-example
+ name: database-to-drop
+ owner: app
+ ensure: absent
+```
+
+This manifest ensures that the `database-to-drop` database is removed from the
+`cluster-example` cluster.
+
+## Managing Extensions in a Database
+
+!!! Info
+ While extensions are database-scoped rather than global objects,
+ {{name.ln}} provides a declarative interface for managing them. This approach
+ is necessary because installing certain extensions may require superuser
+ privileges, which {{name.ln}} recommends disabling by default. By leveraging
+ this API, users can efficiently manage extensions in a scalable and controlled
+ manner without requiring elevated privileges.
+
+{{name.ln}} simplifies and automates the management of PostgreSQL extensions within the
+target database.
+
+To enable this feature, define the `spec.extensions` field
+with a list of extension specifications, as shown in the following example:
+
+```yaml
+# ...
+spec:
+ extensions:
+ - name: bloom
+ ensure: present
+# ...
+```
+
+Each extension entry supports the following properties:
+
+- `name` *(mandatory)*: The name of the extension.
+- `ensure`: Specifies whether the extension should be present or absent in the
+ database:
+ - `present`: Ensures that the extension is installed (default).
+ - `absent`: Ensures that the extension is removed.
+- `version`: The specific version of the extension to install or
+ upgrade to.
+- `schema`: The schema in which the extension should be installed.
+
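+For instance, the following sketch combines these properties (the extension,
+version, and schema values are illustrative):
+
+```yaml
+# ...
+spec:
+  extensions:
+    - name: pg_trgm
+      ensure: present
+      version: "1.6"
+      schema: app
+# ...
+```
+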
+!!! Info
+    {{name.ln}} manages extensions using the following PostgreSQL SQL commands:
+ [`CREATE EXTENSION`](https://www.postgresql.org/docs/current/sql-createextension.html),
+ [`DROP EXTENSION`](https://www.postgresql.org/docs/current/sql-dropextension.html),
+ [`ALTER EXTENSION`](https://www.postgresql.org/docs/current/sql-alterextension.html)
+ (limited to `UPDATE TO` and `SET SCHEMA`).
+
+The operator reconciles only the extensions explicitly listed in
+`spec.extensions`. Any existing extensions not specified in this list remain
+unchanged.
+
+!!! Warning
+ Before the introduction of declarative extension management, {{name.ln}}
+ did not offer a straightforward way to create extensions through configuration.
+ To address this, the ["managed extensions"](postgresql_conf.md#managed-extensions)
+ feature was introduced, enabling the automated and transparent management
+ of key extensions like `pg_stat_statements`. Currently, it is your
+ responsibility to ensure there are no conflicts between extension support in
+ the `Database` CRD and the managed extensions feature.
+
+## Managing Schemas in a Database
+
+!!! Info
+ Schema management in PostgreSQL is an exception to {{name.ln}}' primary
+ focus on managing global objects. Since schemas exist within a database, they
+ are typically managed as part of the application development process. However,
+    {{name.ln}} provides a declarative interface for schema management, primarily
+    to complement its support for deploying extensions within schemas.
+
+{{name.ln}} simplifies and automates the management of PostgreSQL schemas within the
+target database.
+
+To enable this feature, define the `spec.schemas` field
+with a list of schema specifications, as shown in the following example:
+
+```yaml
+# ...
+spec:
+ schemas:
+ - name: app
+ owner: app
+# ...
+```
+
+Each schema entry supports the following properties:
+
+- `name` *(mandatory)*: The name of the schema.
+- `owner`: The owner of the schema.
+- `ensure`: Specifies whether the schema should be present or absent in the
+ database:
+    - `present`: Ensures that the schema is created (default).
+ - `absent`: Ensures that the schema is removed.
+
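+For example, the following sketch ensures that a schema named `legacy` (an
+illustrative name) is dropped from the database:
+
+```yaml
+# ...
+spec:
+  schemas:
+    - name: legacy
+      ensure: absent
+# ...
+```
+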
+!!! Info
+    {{name.ln}} manages schemas using the following PostgreSQL SQL commands:
+ [`CREATE SCHEMA`](https://www.postgresql.org/docs/current/sql-createschema.html),
+ [`DROP SCHEMA`](https://www.postgresql.org/docs/current/sql-dropschema.html),
+ [`ALTER SCHEMA`](https://www.postgresql.org/docs/current/sql-alterschema.html).
+
+## Limitations and Caveats
+
+### Renaming a database
+
+While {{name.ln}} adheres to PostgreSQL’s
+[CREATE DATABASE](https://www.postgresql.org/docs/current/sql-createdatabase.html) and
+[ALTER DATABASE](https://www.postgresql.org/docs/current/sql-alterdatabase.html)
+commands, **renaming databases is not supported**.
+Attempting to modify `spec.name` in an existing `Database` object will result
+in rejection by Kubernetes.
+
+### Creating vs. Altering a Database
+
+- For new databases, {{name.ln}} uses the `CREATE DATABASE` statement.
+- For existing databases, `ALTER DATABASE` is used to apply changes.
+
+Note that these two Postgres commands differ: in particular, the options
+accepted by `ALTER DATABASE` are a subset of those accepted by
+`CREATE DATABASE`.
+
+!!! Warning
+ Some fields, such as encoding and collation settings, are immutable in
+ PostgreSQL. Attempts to modify these fields on existing databases will be
+ ignored.
+
+### Replica Clusters
+
+Database objects declared on replica clusters cannot be enforced, as replicas
+lack write privileges. These objects will remain in a pending state until the
+replica is promoted.
+
+### Conflict Resolution
+
+If two `Database` objects in the same namespace manage the same PostgreSQL
+database (i.e., identical `spec.name` and `spec.cluster.name`), the second
+object will be rejected.
+
+Example status message:
+
+```yaml
+status:
+ applied: false
+ message: 'reconciliation error: database "one" is already managed by Database object "cluster-example-one"'
+```
+
+### Postgres Version Differences
+
+{{name.ln}} adheres to PostgreSQL's capabilities. For example, features like
+`ICU_RULES` introduced in PostgreSQL 16 are unavailable in earlier versions.
+Errors from PostgreSQL will be reflected in the `Database` object's `status`.
+
+### Manual Changes
+
+{{name.ln}} does not overwrite manual changes to databases. Once reconciled,
+a `Database` object will not be reapplied unless its `metadata.generation`
+changes, giving flexibility for direct PostgreSQL modifications.
diff --git a/product_docs/docs/postgres_for_kubernetes/1/declarative_hibernation.mdx b/product_docs/docs/postgres_for_kubernetes/1/declarative_hibernation.mdx
new file mode 100644
index 0000000000..961ab6a9d0
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/declarative_hibernation.mdx
@@ -0,0 +1,85 @@
+---
+title: 'Declarative hibernation'
+originalFilePath: 'src/declarative_hibernation.md'
+---
+
+
+
+{{name.ln}} is designed to keep PostgreSQL clusters up, running, and available
+at all times.
+
+However, some workloads require the database to be up only while the workload
+is active. Batch-driven solutions are one such case: the database needs to be
+up only when the batch process is running.
+
+The declarative hibernation feature enables saving CPU power by removing the
+database Pods, while keeping the database PVCs.
+
+## Hibernation
+
+To hibernate a cluster, set the `k8s.enterprisedb.io/hibernation=on` annotation:
+
+```sh
+$ kubectl annotate cluster <cluster-name> --overwrite k8s.enterprisedb.io/hibernation=on
+```
+
+A hibernated cluster won't have any running Pods, while the PVCs are retained
+so that the cluster can be rehydrated at a later time. Replica PVCs will be
+kept in addition to the primary's PVC.
+
+The hibernation procedure will delete the primary Pod and then the replica
+Pods, avoiding switchover, to ensure the replicas are kept in sync.
+
+The hibernation status can be monitored by looking for the `k8s.enterprisedb.io/hibernation`
+condition:
+
+```sh
+$ kubectl get cluster <cluster-name> -o "jsonpath={.status.conditions[?(.type==\"k8s.enterprisedb.io/hibernation\")]}"
+
+{
+ "lastTransitionTime":"2023-03-05T16:43:35Z",
+ "message":"Cluster has been hibernated",
+ "reason":"Hibernated",
+ "status":"True",
+ "type":"k8s.enterprisedb.io/hibernation"
+}
+```
+
+The hibernation status can also be read with the `status` sub-command of the
+`cnp` plugin for `kubectl`:
+
+```sh
+$ kubectl cnp status
+Cluster Summary
+Name: cluster-example
+Namespace: default
+PostgreSQL Image: docker.enterprisedb.com/k8s/postgresql:18.1-standard-ubi9
+Primary instance: cluster-example-2
+Status: Cluster in healthy state
+Instances: 3
+Ready instances: 0
+
+Hibernation
+Status Hibernated
+Message Cluster has been hibernated
+Time 2023-03-05 16:43:35 +0000 UTC
+[..]
+```
+
+## Rehydration
+
+To rehydrate a cluster, either set the `k8s.enterprisedb.io/hibernation` annotation to `off`:
+
+```sh
+$ kubectl annotate cluster <cluster-name> --overwrite k8s.enterprisedb.io/hibernation=off
+```
+
+Or, just unset it altogether:
+
+```sh
+$ kubectl annotate cluster <cluster-name> k8s.enterprisedb.io/hibernation-
+```
+
+The Pods will be recreated and the cluster will resume operation.
diff --git a/product_docs/docs/postgres_for_kubernetes/1/declarative_role_management.mdx b/product_docs/docs/postgres_for_kubernetes/1/declarative_role_management.mdx
new file mode 100644
index 0000000000..7615d372e4
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/declarative_role_management.mdx
@@ -0,0 +1,258 @@
+---
+title: 'PostgreSQL Role Management'
+originalFilePath: 'src/declarative_role_management.md'
+---
+
+
+
+From its inception, {{name.ln}} has managed the creation of specific roles
+required in PostgreSQL instances:
+
+- some reserved users, such as the `postgres` superuser, `streaming_replica`
+ and `cnp_pooler_pgbouncer` (when the PgBouncer `Pooler` is used)
+- The application user, set as the low-privilege owner of the application database
+
+This process is described in the ["Bootstrap"](bootstrap.md) section.
+
+With the `managed` stanza in the cluster spec, {{name.ln}} now provides full
+lifecycle management for roles specified in `.spec.managed.roles`.
+
+This feature enables declarative management of existing roles, as well as the
+creation of new roles if they are not already present in the database. Role
+creation will occur *after* the database bootstrapping is complete.
+
+An example manifest for a cluster with declarative role management can be found
+in the file [`cluster-example-with-roles.yaml`](../samples/cluster-example-with-roles.yaml).
+
+Here is an excerpt from that file:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+spec:
+ managed:
+ roles:
+ - name: dante
+ ensure: present
+ comment: Dante Alighieri
+ login: true
+ superuser: false
+ inRoles:
+ - pg_monitor
+ - pg_signal_backend
+```
+
+The role specification in `.spec.managed.roles` adheres to the
+[PostgreSQL structure and naming conventions](https://www.postgresql.org/docs/current/sql-createrole.html).
+Please refer to the [API reference](pg4k.v1.md#postgresql-k8s-enterprisedb-io-v1-RoleConfiguration) for
+the full list of attributes you can define for each role.
+
+A few points are worth noting:
+
+1. The `ensure` attribute is **not** part of PostgreSQL. It enables declarative
+ role management to create and remove roles. The two possible values are
+ `present` (the default) and `absent`.
+2. The `inherit` attribute is true by default, following PostgreSQL conventions.
+3. The `connectionLimit` attribute defaults to -1, in line with PostgreSQL conventions.
+4. Role membership with `inRoles` defaults to no memberships.
+
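+Making these defaults explicit, the `dante` role from the excerpt above could
+equivalently be written as follows (a sketch):
+
+```yaml
+  managed:
+    roles:
+      - name: dante
+        ensure: present       # default
+        comment: Dante Alighieri
+        login: true
+        superuser: false
+        inherit: true         # default, following PostgreSQL conventions
+        connectionLimit: -1   # default: no connection limit
+        inRoles:
+          - pg_monitor
+          - pg_signal_backend
+```
+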
+Declarative role management ensures that PostgreSQL instances align with the
+spec. If a user modifies role attributes directly in the database, the
+{{name.ln}} operator will revert those changes during the next reconciliation
+cycle.
+
+## Password management
+
+The declarative role management feature includes reconciling of role passwords.
+Passwords are managed in fundamentally different ways in the Kubernetes world
+and in PostgreSQL, and as a result there are a few things to note.
+
+Managed role configurations may optionally specify the name of a
+**Secret** where the username and password are stored (encoded in Base64
+as is usual in Kubernetes). For example:
+
+```yaml
+ managed:
+ roles:
+ - name: dante
+ ensure: present
+ [… snipped …]
+ passwordSecret:
+ name: cluster-example-dante
+```
+
+This assumes the existence of a Secret called `cluster-example-dante`,
+containing a username and password. The username must match the role we
+are setting the password for. For example:
+
+```yaml
+apiVersion: v1
+data:
+ username: ZGFudGU=
+ password: ZGFudGU=
+kind: Secret
+metadata:
+ name: cluster-example-dante
+ labels:
+ k8s.enterprisedb.io/reload: "true"
+type: kubernetes.io/basic-auth
+```
+
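+The Base64 values in the Secret above can be produced with the standard
+`base64` tool, for example:
+
+```sh
+# Encode the value without a trailing newline, as stored in the Secret
+printf %s 'dante' | base64
+# prints: ZGFudGU=
+```
+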
+If there is no `passwordSecret` specified for a role, the instance manager will
+not try to CREATE / ALTER the role with a password. This is in keeping with
+PostgreSQL conventions, where ALTER will not update passwords unless explicitly
+directed to with `WITH PASSWORD`.
+
+If a role was initially created with a password and you later want to set that
+password to NULL in PostgreSQL, you must be explicit about it.
+To distinguish "no password provided in spec" from "set the password to NULL",
+use the `disablePassword` field.
+
+Suppose we want the `dante` role to have no password in the database. In that
+case, we would specify the following:
+
+```yaml
+ managed:
+ roles:
+ - name: dante
+ ensure: present
+ [… snipped …]
+ disablePassword: true
+```
+
+NOTE: it is considered an error to set both `passwordSecret` and
+`disablePassword` on a given role.
+This configuration will be rejected by the validation webhook.
+
+### Password expiry, `VALID UNTIL`
+
+The `VALID UNTIL` role attribute in PostgreSQL controls password expiry. Roles
+created without `VALID UNTIL` specified get NULL by default in PostgreSQL,
+meaning that their password will never expire.
+
+PostgreSQL uses a timestamp type for `VALID UNTIL`, which includes support for
+the value `'infinity'` indicating that the password never expires. Please see the
+[PostgreSQL documentation](https://www.postgresql.org/docs/current/datatype-datetime.html)
+for reference.
+
+With declarative role management, the `validUntil` attribute for managed roles
+controls password expiry. `validUntil` can either:
+
+- be set to a Kubernetes timestamp, or
+- be omitted (defaulting to `null`)
+
+In the first case, the given `validUntil` timestamp will be set in the database
+as the `VALID UNTIL` attribute of the role.
+
+In the second case (omitted `validUntil`) the operator will ensure the password
+never expires, mirroring the behavior of PostgreSQL. Specifically:
+
+- for a new role, it will omit the `VALID UNTIL` clause in the role
+  creation statement
+- for an existing role, it will set `VALID UNTIL` to `infinity` if `VALID
+  UNTIL` was not already set to `NULL` in the database (this is due to
+  PostgreSQL not allowing `VALID UNTIL NULL` in the `ALTER ROLE` SQL statement)
+
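+For example, the following sketch sets an explicit expiry (the timestamp is
+illustrative):
+
+```yaml
+  managed:
+    roles:
+      - name: dante
+        ensure: present
+        passwordSecret:
+          name: cluster-example-dante
+        validUntil: "2030-12-31T23:59:59Z"
+```
+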
+!!! Warning
+ New roles created without `passwordSecret` will have a `NULL` password
+ inside PostgreSQL.
+
+### Hashed passwords
+
+You can also provide pre-encrypted passwords by specifying the password
+in MD5/SCRAM-SHA-256 hash format:
+
+```yaml
+kind: Secret
+type: kubernetes.io/basic-auth
+metadata:
+ name: cluster-example-cavalcanti
+ labels:
+ k8s.enterprisedb.io/reload: "true"
+apiVersion: v1
+stringData:
+ username: cavalcanti
+  password: SCRAM-SHA-256$<iteration count>:<salt>$<StoredKey>:<ServerKey>
+```
+
+## Unrealizable role configurations
+
+In some cases, PostgreSQL cannot honor a command and will reject it. Please
+refer to the
+[PostgreSQL documentation on error codes](https://www.postgresql.org/docs/current/errcodes-appendix.html)
+for details.
+
+Role operations can produce such fundamental errors.
+Two examples:
+
+- We ask PostgreSQL to create the role `petrarca` as a member of the role
+ (group) `poets`, but `poets` does not exist.
+- We ask PostgreSQL to drop the role `dante`, but the role `dante` is the owner
+ of the database `inferno`.
+
+These fundamental errors cannot be fixed by the database or by the {{name.ln}}
+operator without clarification from a human administrator. The two examples
+above could be fixed by creating the role `poets` or dropping the database
+`inferno` respectively, but they might have originated from human error, and
+in that case, the proposed "fix" might be the wrong thing to do.
+
+{{name.ln}} will record when such fundamental errors occur, and will display
+them in the cluster Status. Which segues into…
+
+## Status of managed roles
+
+The Cluster status includes a section for the managed roles' status, as shown
+below:
+
+```yaml
+status:
+ […snipped…]
+ managedRolesStatus:
+ byStatus:
+ not-managed:
+ - app
+ pending-reconciliation:
+ - dante
+ - petrarca
+ reconciled:
+ - ariosto
+ reserved:
+ - postgres
+ - streaming_replica
+ cannotReconcile:
+ dante:
+ - 'could not perform DELETE on role dante: owner of database inferno'
+ petrarca:
+ - 'could not perform UPDATE_MEMBERSHIPS on role petrarca: role "poets" does not exist'
+```
+
+Note the special sub-section `cannotReconcile` for operations the database (and
+{{name.ln}}) cannot honor, and which require human intervention.
+
+This section covers roles reserved for operator use and those that are **not**
+under declarative management, providing a comprehensive view of the roles in
+the database instances.
+
+The [kubectl plugin](kubectl-plugin.md) also shows the status of managed roles
+in its `status` sub-command:
+
+```txt
+Managed roles status
+Status Roles
+------ -----
+pending-reconciliation petrarca
+reconciled app,dante
+reserved postgres,streaming_replica
+
+Irreconcilable roles
+Role Errors
+---- ------
+petrarca could not perform UPDATE_MEMBERSHIPS on role petrarca: role "poets" does not exist
+```
+
+!!! Important
+ In terms of backward compatibility, declarative role management is designed
+ to ignore roles that exist in the database but are not included in the spec.
+ The lifecycle of these roles will continue to be managed within PostgreSQL,
+ allowing {{name.ln}} users to adopt this feature at their convenience.
diff --git a/product_docs/docs/postgres_for_kubernetes/1/default-monitoring.yaml b/product_docs/docs/postgres_for_kubernetes/1/default-monitoring.yaml
new file mode 100644
index 0000000000..1a22775737
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/default-monitoring.yaml
@@ -0,0 +1,488 @@
+---
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: default-monitoring
+ labels:
+ k8s.enterprisedb.io/reload: ""
+data:
+ queries: |
+ backends:
+ query: |
+ SELECT sa.datname
+ , sa.usename
+ , sa.application_name
+ , states.state
+ , COALESCE(sa.count, 0) AS total
+ , COALESCE(sa.max_tx_secs, 0) AS max_tx_duration_seconds
+ FROM ( VALUES ('active')
+ , ('idle')
+ , ('idle in transaction')
+ , ('idle in transaction (aborted)')
+ , ('fastpath function call')
+ , ('disabled')
+ ) AS states(state)
+ LEFT JOIN (
+ SELECT datname
+ , state
+ , usename
+ , COALESCE(application_name, '') AS application_name
+ , COUNT(*)
+ , COALESCE(EXTRACT (EPOCH FROM (max(now() - xact_start))), 0) AS max_tx_secs
+ FROM pg_catalog.pg_stat_activity
+ GROUP BY datname, state, usename, application_name
+ ) sa ON states.state = sa.state
+ WHERE sa.usename IS NOT NULL
+ metrics:
+ - datname:
+ usage: "LABEL"
+ description: "Name of the database"
+ - usename:
+ usage: "LABEL"
+ description: "Name of the user"
+ - application_name:
+ usage: "LABEL"
+ description: "Name of the application"
+ - state:
+ usage: "LABEL"
+ description: "State of the backend"
+ - total:
+ usage: "GAUGE"
+ description: "Number of backends"
+ - max_tx_duration_seconds:
+ usage: "GAUGE"
+ description: "Maximum duration of a transaction in seconds"
+
+ backends_waiting:
+ query: |
+ SELECT count(*) AS total
+ FROM pg_catalog.pg_locks blocked_locks
+ JOIN pg_catalog.pg_locks blocking_locks
+ ON blocking_locks.locktype = blocked_locks.locktype
+ AND blocking_locks.database IS NOT DISTINCT FROM blocked_locks.database
+ AND blocking_locks.relation IS NOT DISTINCT FROM blocked_locks.relation
+ AND blocking_locks.page IS NOT DISTINCT FROM blocked_locks.page
+ AND blocking_locks.tuple IS NOT DISTINCT FROM blocked_locks.tuple
+ AND blocking_locks.virtualxid IS NOT DISTINCT FROM blocked_locks.virtualxid
+ AND blocking_locks.transactionid IS NOT DISTINCT FROM blocked_locks.transactionid
+ AND blocking_locks.classid IS NOT DISTINCT FROM blocked_locks.classid
+ AND blocking_locks.objid IS NOT DISTINCT FROM blocked_locks.objid
+ AND blocking_locks.objsubid IS NOT DISTINCT FROM blocked_locks.objsubid
+ AND blocking_locks.pid != blocked_locks.pid
+ JOIN pg_catalog.pg_stat_activity blocking_activity ON blocking_activity.pid = blocking_locks.pid
+ WHERE NOT blocked_locks.granted
+ metrics:
+ - total:
+ usage: "GAUGE"
+ description: "Total number of backends that are currently waiting on other queries"
+
+ pg_database:
+ query: |
+ SELECT datname
+ , pg_catalog.pg_database_size(datname) AS size_bytes
+ , pg_catalog.age(datfrozenxid) AS xid_age
+ , pg_catalog.mxid_age(datminmxid) AS mxid_age
+ FROM pg_catalog.pg_database
+ WHERE datallowconn
+ metrics:
+ - datname:
+ usage: "LABEL"
+ description: "Name of the database"
+ - size_bytes:
+ usage: "GAUGE"
+ description: "Disk space used by the database"
+ - xid_age:
+ usage: "GAUGE"
+ description: "Number of transactions from the frozen XID to the current one"
+ - mxid_age:
+ usage: "GAUGE"
+ description: "Number of multiple transactions (Multixact) from the frozen XID to the current one"
+
+ pg_postmaster:
+ query: |
+ SELECT EXTRACT(EPOCH FROM pg_postmaster_start_time) AS start_time
+ FROM pg_catalog.pg_postmaster_start_time()
+ metrics:
+ - start_time:
+ usage: "GAUGE"
+ description: "Time at which postgres started (based on epoch)"
+
+ pg_replication:
+ query: "SELECT CASE WHEN (
+ NOT pg_catalog.pg_is_in_recovery()
+ OR pg_catalog.pg_last_wal_receive_lsn() = pg_catalog.pg_last_wal_replay_lsn())
+ THEN 0
+ ELSE GREATEST (0,
+ EXTRACT(EPOCH FROM (now() - pg_catalog.pg_last_xact_replay_timestamp())))
+ END AS lag,
+ pg_catalog.pg_is_in_recovery() AS in_recovery,
+ EXISTS (TABLE pg_stat_wal_receiver) AS is_wal_receiver_up,
+ (SELECT count(*) FROM pg_catalog.pg_stat_replication) AS streaming_replicas"
+ metrics:
+ - lag:
+ usage: "GAUGE"
+ description: "Replication lag behind primary in seconds"
+ - in_recovery:
+ usage: "GAUGE"
+ description: "Whether the instance is in recovery"
+ - is_wal_receiver_up:
+ usage: "GAUGE"
+ description: "Whether the instance wal_receiver is up"
+ - streaming_replicas:
+ usage: "GAUGE"
+ description: "Number of streaming replicas connected to the instance"
+
+ pg_replication_slots:
+ query: |
+ SELECT slot_name,
+ slot_type,
+ database,
+ active,
+ (CASE pg_catalog.pg_is_in_recovery()
+ WHEN TRUE THEN pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_last_wal_receive_lsn(), restart_lsn)
+ ELSE pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_current_wal_lsn(), restart_lsn)
+ END) as pg_wal_lsn_diff
+ FROM pg_catalog.pg_replication_slots
+ WHERE NOT temporary
+ metrics:
+ - slot_name:
+ usage: "LABEL"
+ description: "Name of the replication slot"
+ - slot_type:
+ usage: "LABEL"
+ description: "Type of the replication slot"
+ - database:
+ usage: "LABEL"
+ description: "Name of the database"
+ - active:
+ usage: "GAUGE"
+ description: "Flag indicating whether the slot is active"
+ - pg_wal_lsn_diff:
+ usage: "GAUGE"
+ description: "Replication lag in bytes"
+
+ pg_stat_archiver:
+ query: |
+ SELECT archived_count
+ , failed_count
+ , COALESCE(EXTRACT(EPOCH FROM (now() - last_archived_time)), -1) AS seconds_since_last_archival
+ , COALESCE(EXTRACT(EPOCH FROM (now() - last_failed_time)), -1) AS seconds_since_last_failure
+ , COALESCE(EXTRACT(EPOCH FROM last_archived_time), -1) AS last_archived_time
+ , COALESCE(EXTRACT(EPOCH FROM last_failed_time), -1) AS last_failed_time
+ , COALESCE(CAST(CAST('x'||pg_catalog.right(pg_catalog.split_part(last_archived_wal, '.', 1), 16) AS pg_catalog.bit(64)) AS pg_catalog.int8), -1) AS last_archived_wal_start_lsn
+ , COALESCE(CAST(CAST('x'||pg_catalog.right(pg_catalog.split_part(last_failed_wal, '.', 1), 16) AS pg_catalog.bit(64)) AS pg_catalog.int8), -1) AS last_failed_wal_start_lsn
+ , EXTRACT(EPOCH FROM stats_reset) AS stats_reset_time
+ FROM pg_catalog.pg_stat_archiver
+ metrics:
+ - archived_count:
+ usage: "COUNTER"
+ description: "Number of WAL files that have been successfully archived"
+ - failed_count:
+ usage: "COUNTER"
+ description: "Number of failed attempts for archiving WAL files"
+ - seconds_since_last_archival:
+ usage: "GAUGE"
+ description: "Seconds since the last successful archival operation"
+ - seconds_since_last_failure:
+ usage: "GAUGE"
+ description: "Seconds since the last failed archival operation"
+ - last_archived_time:
+ usage: "GAUGE"
+ description: "Epoch of the last time WAL archiving succeeded"
+ - last_failed_time:
+ usage: "GAUGE"
+ description: "Epoch of the last time WAL archiving failed"
+ - last_archived_wal_start_lsn:
+ usage: "GAUGE"
+ description: "Archived WAL start LSN"
+ - last_failed_wal_start_lsn:
+ usage: "GAUGE"
+ description: "Last failed WAL LSN"
+ - stats_reset_time:
+ usage: "GAUGE"
+ description: "Time at which these statistics were last reset"
+
+ pg_stat_bgwriter:
+ runonserver: "<17.0.0"
+ query: |
+ SELECT checkpoints_timed
+ , checkpoints_req
+ , checkpoint_write_time
+ , checkpoint_sync_time
+ , buffers_checkpoint
+ , buffers_clean
+ , maxwritten_clean
+ , buffers_backend
+ , buffers_backend_fsync
+ , buffers_alloc
+ FROM pg_catalog.pg_stat_bgwriter
+ metrics:
+ - checkpoints_timed:
+ usage: "COUNTER"
+ description: "Number of scheduled checkpoints that have been performed"
+ - checkpoints_req:
+ usage: "COUNTER"
+ description: "Number of requested checkpoints that have been performed"
+ - checkpoint_write_time:
+ usage: "COUNTER"
+ description: "Total amount of time that has been spent in the portion of checkpoint processing where files are written to disk, in milliseconds"
+ - checkpoint_sync_time:
+ usage: "COUNTER"
+ description: "Total amount of time that has been spent in the portion of checkpoint processing where files are synchronized to disk, in milliseconds"
+ - buffers_checkpoint:
+ usage: "COUNTER"
+ description: "Number of buffers written during checkpoints"
+ - buffers_clean:
+ usage: "COUNTER"
+ description: "Number of buffers written by the background writer"
+ - maxwritten_clean:
+ usage: "COUNTER"
+ description: "Number of times the background writer stopped a cleaning scan because it had written too many buffers"
+ - buffers_backend:
+ usage: "COUNTER"
+ description: "Number of buffers written directly by a backend"
+ - buffers_backend_fsync:
+ usage: "COUNTER"
+ description: "Number of times a backend had to execute its own fsync call (normally the background writer handles those even when the backend does its own write)"
+ - buffers_alloc:
+ usage: "COUNTER"
+ description: "Number of buffers allocated"
+
+ pg_stat_bgwriter_17:
+ runonserver: ">=17.0.0"
+ name: pg_stat_bgwriter
+ query: |
+ SELECT buffers_clean
+ , maxwritten_clean
+ , buffers_alloc
+ , EXTRACT(EPOCH FROM stats_reset) AS stats_reset_time
+ FROM pg_catalog.pg_stat_bgwriter
+ metrics:
+ - buffers_clean:
+ usage: "COUNTER"
+ description: "Number of buffers written by the background writer"
+ - maxwritten_clean:
+ usage: "COUNTER"
+ description: "Number of times the background writer stopped a cleaning scan because it had written too many buffers"
+ - buffers_alloc:
+ usage: "COUNTER"
+ description: "Number of buffers allocated"
+ - stats_reset_time:
+ usage: "GAUGE"
+ description: "Time at which these statistics were last reset"
+
+ pg_stat_checkpointer:
+ runonserver: ">=17.0.0"
+ query: |
+ SELECT num_timed AS checkpoints_timed
+ , num_requested AS checkpoints_req
+ , restartpoints_timed
+ , restartpoints_req
+ , restartpoints_done
+ , write_time
+ , sync_time
+ , buffers_written
+ , EXTRACT(EPOCH FROM stats_reset) AS stats_reset_time
+ FROM pg_catalog.pg_stat_checkpointer
+ metrics:
+ - checkpoints_timed:
+ usage: "COUNTER"
+ description: "Number of scheduled checkpoints that have been performed"
+ - checkpoints_req:
+ usage: "COUNTER"
+ description: "Number of requested checkpoints that have been performed"
+ - restartpoints_timed:
+ usage: "COUNTER"
+ description: "Number of scheduled restartpoints due to timeout or after a failed attempt to perform it"
+ - restartpoints_req:
+ usage: "COUNTER"
+ description: "Number of requested restartpoints that have been performed"
+ - restartpoints_done:
+ usage: "COUNTER"
+ description: "Number of restartpoints that have been performed"
+ - write_time:
+ usage: "COUNTER"
+ description: "Total amount of time that has been spent in the portion of processing checkpoints and restartpoints where files are written to disk, in milliseconds"
+ - sync_time:
+ usage: "COUNTER"
+ description: "Total amount of time that has been spent in the portion of processing checkpoints and restartpoints where files are synchronized to disk, in milliseconds"
+ - buffers_written:
+ usage: "COUNTER"
+ description: "Number of buffers written during checkpoints and restartpoints"
+ - stats_reset_time:
+ usage: "GAUGE"
+ description: "Time at which these statistics were last reset"
+
+ pg_stat_database:
+ query: |
+ SELECT datname
+ , xact_commit
+ , xact_rollback
+ , blks_read
+ , blks_hit
+ , tup_returned
+ , tup_fetched
+ , tup_inserted
+ , tup_updated
+ , tup_deleted
+ , conflicts
+ , temp_files
+ , temp_bytes
+ , deadlocks
+ , blk_read_time
+ , blk_write_time
+ FROM pg_catalog.pg_stat_database
+ metrics:
+ - datname:
+ usage: "LABEL"
+ description: "Name of this database"
+ - xact_commit:
+ usage: "COUNTER"
+ description: "Number of transactions in this database that have been committed"
+ - xact_rollback:
+ usage: "COUNTER"
+ description: "Number of transactions in this database that have been rolled back"
+ - blks_read:
+ usage: "COUNTER"
+ description: "Number of disk blocks read in this database"
+ - blks_hit:
+ usage: "COUNTER"
+ description: "Number of times disk blocks were found already in the buffer cache, so that a read was not necessary (this only includes hits in the PostgreSQL buffer cache, not the operating system's file system cache)"
+ - tup_returned:
+ usage: "COUNTER"
+ description: "Number of rows returned by queries in this database"
+ - tup_fetched:
+ usage: "COUNTER"
+ description: "Number of rows fetched by queries in this database"
+ - tup_inserted:
+ usage: "COUNTER"
+ description: "Number of rows inserted by queries in this database"
+ - tup_updated:
+ usage: "COUNTER"
+ description: "Number of rows updated by queries in this database"
+ - tup_deleted:
+ usage: "COUNTER"
+ description: "Number of rows deleted by queries in this database"
+ - conflicts:
+ usage: "COUNTER"
+ description: "Number of queries canceled due to conflicts with recovery in this database"
+ - temp_files:
+ usage: "COUNTER"
+ description: "Number of temporary files created by queries in this database"
+ - temp_bytes:
+ usage: "COUNTER"
+ description: "Total amount of data written to temporary files by queries in this database"
+ - deadlocks:
+ usage: "COUNTER"
+ description: "Number of deadlocks detected in this database"
+ - blk_read_time:
+ usage: "COUNTER"
+ description: "Time spent reading data file blocks by backends in this database, in milliseconds"
+ - blk_write_time:
+ usage: "COUNTER"
+ description: "Time spent writing data file blocks by backends in this database, in milliseconds"
+
+ pg_stat_replication:
+ primary: true
+ query: |
+ SELECT usename
+ , COALESCE(application_name, '') AS application_name
+ , COALESCE(client_addr::text, '') AS client_addr
+ , COALESCE(client_port::text, '') AS client_port
+ , EXTRACT(EPOCH FROM backend_start) AS backend_start
+ , COALESCE(pg_catalog.age(backend_xmin), 0) AS backend_xmin_age
+ , pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_current_wal_lsn(), sent_lsn) AS sent_diff_bytes
+ , pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_current_wal_lsn(), write_lsn) AS write_diff_bytes
+ , pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_current_wal_lsn(), flush_lsn) AS flush_diff_bytes
+ , COALESCE(pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_current_wal_lsn(), replay_lsn),0) AS replay_diff_bytes
+ , COALESCE((EXTRACT(EPOCH FROM write_lag)),0)::float AS write_lag_seconds
+ , COALESCE((EXTRACT(EPOCH FROM flush_lag)),0)::float AS flush_lag_seconds
+ , COALESCE((EXTRACT(EPOCH FROM replay_lag)),0)::float AS replay_lag_seconds
+ FROM pg_catalog.pg_stat_replication
+ metrics:
+ - usename:
+ usage: "LABEL"
+ description: "Name of the replication user"
+ - application_name:
+ usage: "LABEL"
+ description: "Name of the application"
+ - client_addr:
+ usage: "LABEL"
+ description: "Client IP address"
+ - client_port:
+ usage: "LABEL"
+ description: "Client TCP port"
+ - backend_start:
+ usage: "COUNTER"
+ description: "Time when this process was started"
+ - backend_xmin_age:
+ usage: "COUNTER"
+ description: "The age of this standby's xmin horizon"
+ - sent_diff_bytes:
+ usage: "GAUGE"
+ description: "Difference in bytes from the last write-ahead log location sent on this connection"
+ - write_diff_bytes:
+ usage: "GAUGE"
+ description: "Difference in bytes from the last write-ahead log location written to disk by this standby server"
+ - flush_diff_bytes:
+ usage: "GAUGE"
+ description: "Difference in bytes from the last write-ahead log location flushed to disk by this standby server"
+ - replay_diff_bytes:
+ usage: "GAUGE"
+ description: "Difference in bytes from the last write-ahead log location replayed into the database on this standby server"
+ - write_lag_seconds:
+ usage: "GAUGE"
+ description: "Time elapsed between flushing recent WAL locally and receiving notification that this standby server has written it"
+ - flush_lag_seconds:
+ usage: "GAUGE"
+ description: "Time elapsed between flushing recent WAL locally and receiving notification that this standby server has written and flushed it"
+ - replay_lag_seconds:
+ usage: "GAUGE"
+ description: "Time elapsed between flushing recent WAL locally and receiving notification that this standby server has written, flushed and applied it"
+
+ pg_settings:
+ query: |
+ SELECT name,
+ CASE setting WHEN 'on' THEN '1' WHEN 'off' THEN '0' ELSE setting END AS setting
+ FROM pg_catalog.pg_settings
+ WHERE vartype IN ('integer', 'real', 'bool')
+ ORDER BY 1
+ metrics:
+ - name:
+ usage: "LABEL"
+ description: "Name of the setting"
+ - setting:
+ usage: "GAUGE"
+ description: "Setting value"
+
+ pg_extensions:
+ query: |
+ SELECT
+ current_database() as datname,
+ name as extname,
+ default_version,
+ installed_version,
+ CASE
+ WHEN default_version = installed_version THEN 0
+ ELSE 1
+ END AS update_available
+ FROM pg_catalog.pg_available_extensions
+ WHERE installed_version IS NOT NULL
+ metrics:
+ - datname:
+ usage: "LABEL"
+ description: "Name of the database"
+ - extname:
+ usage: "LABEL"
+ description: "Extension name"
+ - default_version:
+ usage: "LABEL"
+ description: "Default version"
+ - installed_version:
+ usage: "LABEL"
+ description: "Installed version"
+ - update_available:
+ usage: "GAUGE"
+ description: "An update is available"
+ target_databases:
+ - '*'
diff --git a/product_docs/docs/postgres_for_kubernetes/1/evaluation.mdx b/product_docs/docs/postgres_for_kubernetes/1/evaluation.mdx
new file mode 100644
index 0000000000..cce96c3f69
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/evaluation.mdx
@@ -0,0 +1,14 @@
+---
+title: 'Free evaluation'
+originalFilePath: 'src/evaluation.md'
+---
+
+{{name.ln}} is available for a free evaluation.
+
+Use your EDB account to evaluate {{name.ln}}. If you don't have an account, [register](https://www.enterprisedb.com/accounts/register) for one. Then follow the [installation guide](installation_upgrade.md) to install the operator, using the access token you obtained from your EDB account.
+
+## Evaluating using PostgreSQL
+
+By default, {{name.ln}} installs the latest available version of Community PostgreSQL.
+
+PostgreSQL container images are available at [quay.io/enterprisedb/postgresql](https://quay.io/repository/enterprisedb/postgresql).
diff --git a/product_docs/docs/postgres_for_kubernetes/1/failover.mdx b/product_docs/docs/postgres_for_kubernetes/1/failover.mdx
new file mode 100644
index 0000000000..d72675f1c5
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/failover.mdx
@@ -0,0 +1,363 @@
+---
+title: 'Automated failover'
+originalFilePath: 'src/failover.md'
+---
+
+
+
+In the case of unexpected errors on the primary for longer than the
+`.spec.failoverDelay` (by default `0` seconds), the cluster will go into
+**failover mode**. This may happen, for example, when:
+
+- The primary pod has a disk failure
+- The primary pod is deleted
+- The `postgres` container on the primary has any kind of sustained failure
+
+In the failover scenario, the primary cannot be assumed to be working properly.
+
+After cases like the ones above, the readiness probe for the primary pod will start
+failing. This will be picked up in the controller's reconciliation loop. The
+controller will initiate the failover process, in two steps:
+
+1. First, it will mark the `TargetPrimary` as `pending`. This change of state will
+   force the primary pod to shut down, to ensure the WAL receivers on the replicas
+ will stop. The cluster will be marked in failover phase ("Failing over").
+2. Once all WAL receivers are stopped, there will be a leader election, and a
+ new primary will be named. The chosen instance will initiate promotion to
+ primary, and, after this is completed, the cluster will resume normal operations.
+ Meanwhile, the former primary pod will restart, detect that it is no longer
+ the primary, and become a replica node.
+
+!!! Important
+ The two-phase procedure helps ensure the WAL receivers can stop in an orderly
+ fashion, and that the failing primary will not start streaming WALs again upon
+ restart. These safeguards prevent timeline discrepancies between the new primary
+ and the replicas.
+
+During the time the failing primary is being shut down:
+
+1. It will first try a PostgreSQL *fast shutdown*, with
+   `.spec.switchoverDelay` seconds as the timeout. This graceful shutdown will attempt
+   to archive pending WALs.
+2. If the fast shutdown fails, or its timeout is exceeded, a PostgreSQL
+   *immediate shutdown* is initiated.
+
+!!! Info
+ "Fast" mode does not wait for PostgreSQL clients to disconnect and will
+ terminate an online backup in progress. All active transactions are rolled back
+ and clients are forcibly disconnected, then the server is shut down.
+ "Immediate" mode will abort all PostgreSQL server processes immediately,
+ without a clean shutdown.
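+
+The shutdown timeout can be set in the cluster definition. The following is a
+minimal sketch; the cluster name, instance count, storage size, and timeout
+value are illustrative:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+  name: cluster-example
+spec:
+  instances: 3
+  # Seconds granted to the fast shutdown of a failing primary
+  # before an immediate shutdown is forced
+  switchoverDelay: 40
+  storage:
+    size: 1Gi
+```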
+
+## RTO and RPO impact
+
+Failover may result in the service being impacted ([RTO](before_you_start.md#rto))
+and/or data being lost ([RPO](before_you_start.md#rpo)):
+
+1. During the time when the primary has started to fail, and before the controller
+ starts failover procedures, queries in transit, WAL writes, checkpoints and
+ similar operations, may fail.
+2. Once the fast shutdown command has been issued, the cluster will no longer
+ accept connections, so service will be impacted but no data
+ will be lost.
+3. If the fast shutdown fails, the immediate shutdown will stop any pending
+ processes, including WAL writing. Data may be lost.
+4. During the time the primary is shutting down and a new primary hasn't yet
+ started, the cluster will operate without a primary and thus be impaired - but
+ with no data loss.
+
+!!! Note
+ The timeout that controls fast shutdown is set by `.spec.switchoverDelay`,
+ as in the case of a switchover. Increasing the time for fast shutdown is safer
+ from an RPO point of view, but possibly delays the return to normal operation -
+ negatively affecting RTO.
+
+!!! Warning
+ As already mentioned in the ["Instance Manager" section](instance_manager.md)
+ when explaining the switchover process, the `.spec.switchoverDelay` option
+ affects the RPO and RTO of your PostgreSQL database. Setting it to a low value
+ might favor RTO over RPO, but can lead to data loss at the cluster and/or backup
+ level. Conversely, setting it to a high value might remove the risk of data
+ loss, while leaving the cluster without an active primary for a longer time
+ during the switchover.
+
+## Delayed failover
+
+As mentioned above, the `.spec.failoverDelay` option allows you to delay the start
+of the failover procedure by a number of seconds after the primary has been
+detected to be unhealthy. By default, this setting is set to `0`, triggering the
+failover procedure immediately.
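+
+For example, to wait two minutes before starting the failover procedure — a
+minimal sketch, with an illustrative cluster name and delay value:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+  name: cluster-example-delayed
+spec:
+  instances: 3
+  # Wait 120 seconds after the primary is detected unhealthy
+  # before starting the failover procedure
+  failoverDelay: 120
+  storage:
+    size: 1Gi
+```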
+
+Sometimes failing over to a new primary can be more disruptive than waiting
+for the primary to come back online. This is especially true of network
+disruptions where multiple tiers are affected (e.g., downstream logical
+subscribers) or when the time to perform the failover is longer than the
+expected outage.
+
+The delayed failover option provides a mechanism to
+prevent premature failover during short-lived network or node instability.
+
+## Failover Quorum (Quorum-based Failover)
+
+!!! Warning
+ *Failover quorum* is an experimental feature introduced in version 1.27.0.
+ Use with caution in production environments.
+
+Failover quorum is a mechanism that enhances data durability and safety during
+failover events in {{name.ln}}-managed PostgreSQL clusters.
+
+Quorum-based failover allows the controller to determine whether to promote a replica
+to primary based on the state of a quorum of replicas.
+This is useful when stronger data durability is required than that offered
+by [synchronous replication](replication.md#synchronous-replication) and
+default automated failover procedures.
+
+When synchronous replication is not enabled, some data loss is expected and
+accepted during failover, as a replica may lag behind the primary when
+promoted.
+
+With synchronous replication enabled, the guarantee is that the application
+will not receive explicit acknowledgment of the successful commit of a
+transaction until the WAL data is known to be safely received by all required
+synchronous standbys.
+This is not enough to guarantee that the operator is able to promote the most
+advanced replica.
+
+For example, in a three-node cluster with synchronous replication set to `ANY 1
+(...)`, data is written to the primary and one standby before a commit is
+acknowledged. If both the primary and the aligned standby become unavailable
+(such as during a network partition), the remaining replica may not have the
+latest data. Promoting it could lose some data that the application considered
+committed.
+
+Quorum-based failover addresses this risk by ensuring that failover only occurs
+if the operator can confirm the presence of all synchronously committed data in
+the instance to promote, and it does not occur otherwise.
+
+This feature allows users to choose their preferred trade-off between data
+durability and data availability.
+
+Failover quorum can be enabled by setting the annotation
+`alpha.k8s.enterprisedb.io/failoverQuorum="true"` in the `Cluster` resource.
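+
+For example — a minimal sketch, with an illustrative cluster name and a basic
+synchronous replication setup:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+  name: cluster-example-quorum
+  annotations:
+    # Experimental: enable quorum-based failover
+    alpha.k8s.enterprisedb.io/failoverQuorum: "true"
+spec:
+  instances: 3
+  postgresql:
+    synchronous:
+      method: any
+      number: 1
+  storage:
+    size: 1Gi
+```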
+
+!!! info
+ When this feature is out of the experimental phase, the annotation
+ `alpha.k8s.enterprisedb.io/failoverQuorum` will be replaced by a configuration option in
+ the `Cluster` resource.
+
+### How it works
+
+Before promoting a replica to primary, the operator performs a quorum check,
+following the principles of the Dynamo `R + W > N` consistency model[^1].
+
+In the quorum failover, these values assume the following meaning:
+
+- `R` is the number of *promotable replicas* (read quorum);
+- `W` is the number of replicas that must acknowledge the write before the
+ `COMMIT` is returned to the client (write quorum);
+- `N` is the total number of potentially synchronous replicas.
+
+*Promotable replicas* are replicas that:
+
+- are part of the cluster;
+- are able to report their state to the operator;
+- are potentially synchronous.
+
+If `R + W > N`, then we can be sure that among the promotable replicas there is
+at least one that has confirmed all the synchronous commits, and we can safely
+promote it to primary. If this is not the case, the controller will not promote
+any replica to primary, and will wait for the situation to change.
+
+Users can force a promotion of a replica to primary through the
+`kubectl cnp promote` command even if the quorum check is failing.
+
+!!! Warning
+ Manual promotion should only be used as a last resort. Before proceeding,
+ make sure you fully understand the risk of data loss and carefully consider the
+ consequences of prioritizing the resumption of write workloads for your
+ applications.
+
+An additional CRD is used to track the quorum state of the cluster. A `Cluster`
+with quorum failover enabled has a `FailoverQuorum` resource with the same
+name as the `Cluster` resource. The `FailoverQuorum` CR is created by the
+controller when quorum failover is enabled; it is updated by the primary
+instance during its reconciliation loop and read by the operator during quorum
+checks. It tracks the latest known configuration of the synchronous
+replication.
+
+!!! Important
+ Users should not modify the `FailoverQuorum` resource directly. During
+ PostgreSQL configuration changes, when it is not possible to determine the
+ configuration, the `FailoverQuorum` resource will be reset, preventing any
+ failover until the new configuration is applied.
+
+The `FailoverQuorum` resource works in conjunction with PostgreSQL synchronous
+replication.
+
+!!! Warning
+ There is no guarantee that `COMMIT` operations returned to the
+ client but that have not been performed synchronously, such as those made
+ explicitly disabling synchronous replication with
+ `SET synchronous_commit TO local`, will be present on a promoted replica.
+
+### Quorum Failover Example Scenarios
+
+In the following scenarios, `R` is the number of promotable replicas, `W` is
+the number of replicas that must acknowledge a write before commit, and `N` is
+the total number of potentially synchronous replicas. The "Failover" column
+indicates whether failover is allowed under quorum failover rules.
+
+#### Scenario 1: Three-node cluster, failing pod(s)
+
+A cluster with `instances: 3`, `synchronous.number=1`, and
+`dataDurability=required`.
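+
+Such a cluster might be declared as follows — a sketch, with an illustrative
+cluster name and storage size:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+  name: cluster-example-scenario1
+  annotations:
+    # Experimental: enable quorum-based failover
+    alpha.k8s.enterprisedb.io/failoverQuorum: "true"
+spec:
+  instances: 3
+  postgresql:
+    synchronous:
+      method: any
+      number: 1
+      dataDurability: required
+  storage:
+    size: 1Gi
+```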
+
+- If only the primary fails, two promotable replicas remain (R=2).
+ Since `R + W > N` (2 + 1 > 2), failover is allowed and safe.
+- If both the primary and one replica fail, only one promotable replica
+ remains (R=1). Since `R + W = N` (1 + 1 = 2), failover is not allowed to
+ prevent possible data loss.
+
+| R | W | N | Failover |
+| :-: | :-: | :-: | :------: |
+| 2 | 1 | 2 | ✅ |
+| 1 | 1 | 2 | ❌ |
+
+#### Scenario 2: Three-node cluster, network partition
+
+A cluster with `instances: 3`, `synchronous.number: 1`, and
+`dataDurability: required` experiences a network partition.
+
+- If the operator can communicate with the primary, no failover occurs. The
+ cluster can be impacted if the primary cannot reach any standby, since it
+ won't commit transactions due to synchronous replication requirements.
+- If the operator cannot reach the primary but can reach both replicas (R=2),
+ failover is allowed. If the operator can reach only one replica (R=1),
+  failover is not allowed, as the unreachable replica may be the synchronous one.
+
+| R | W | N | Failover |
+| :-: | :-: | :-: | :------: |
+| 2 | 1 | 2 | ✅ |
+| 1 | 1 | 2 | ❌ |
+
+#### Scenario 3: Five-node cluster, network partition
+
+A cluster with `instances: 5`, `synchronous.number=2`, and
+`dataDurability=required` experiences a network partition.
+
+- If the operator can communicate with the primary, no failover occurs. The
+ cluster can be impacted if the primary cannot reach at least two standbys,
+  since it won't commit transactions due to synchronous replication
+ requirements.
+- If the operator cannot reach the primary but can reach at least three
+ replicas (R=3), failover is allowed. If the operator can reach only two
+  replicas (R=2), failover is not allowed, as a synchronous standby may be
+  among the unreachable ones.
+
+| R | W | N | Failover |
+| :-: | :-: | :-: | :------: |
+| 3 | 2 | 4 | ✅ |
+| 2 | 2 | 4 | ❌ |
+
+#### Scenario 4: Three-node cluster with remote synchronous replicas
+
+A cluster with `instances: 3` and remote synchronous replicas defined in
+`standbyNamesPre` or `standbyNamesPost`. We assume that the primary is failing.
+
+This scenario requires an important consideration. Replicas listed in
+`standbyNamesPre` or `standbyNamesPost` are not counted in
+`R` (they cannot be promoted), but are included in `N` (they may have received
+synchronous writes). So, if
+`synchronous.number <= len(standbyNamesPre) + len(standbyNamesPost)`, failover
+is not possible, as no local replica can be guaranteed to have the required
+data. The operator prevents such configurations during validation, but some
+invalid configurations are shown below for clarity.
+
+**Example configurations:**
+
+Configuration #1 (valid):
+
+```yaml
+instances: 3
+postgresql:
+ synchronous:
+ method: any
+ number: 2
+ standbyNamesPre:
+ - angus
+```
+
+In this configuration, when the primary fails, `R = 2` (the local replicas),
+`W = 2`, and `N = 3` (2 local replicas + 1 remote), allowing failover.
+If an additional replica fails (`R = 1`), failover is not allowed.
+
+| R | W | N | Failover |
+| :-: | :-: | :-: | :------: |
+| 2 | 2 | 3 | ✅ |
+| 1 | 2 | 3 | ❌ |
+
+Configuration #2 (invalid):
+
+```yaml
+instances: 3
+postgresql:
+ synchronous:
+ method: any
+ number: 1
+ maxStandbyNamesFromCluster: 1
+ standbyNamesPre:
+ - angus
+```
+
+In this configuration, `R = 1` (the single local replica that can be
+synchronous, as `maxStandbyNamesFromCluster: 1` limits the local candidates to
+one), `W = 1`, and `N = 2` (1 local replica + 1 remote).
+Failover is not possible in this setup, so quorum failover cannot be
+enabled with this configuration.
+
+| R | W | N | Failover |
+| :-: | :-: | :-: | :------: |
+| 1 | 1 | 2 | ❌ |
+
+Configuration #3 (invalid):
+
+```yaml
+instances: 3
+postgresql:
+ synchronous:
+ method: any
+ number: 1
+ maxStandbyNamesFromCluster: 0
+ standbyNamesPre:
+ - angus
+ - malcolm
+```
+
+In this configuration, `R = 0` (no local replica is a synchronous candidate),
+`W = 1`, and `N = 2` (0 local replicas + 2 remote).
+Failover is not possible in this setup, so quorum failover cannot be
+enabled with this configuration.
+
+| R | W | N | Failover |
+| :-: | :-: | :-: | :------: |
+| 0 | 1 | 2 | ❌ |
+
+#### Scenario 5: Three-node cluster, preferred data durability, network partition
+
+Consider a cluster with `instances: 3`, `synchronous.number=1`, and
+`dataDurability=preferred` that experiences a network partition.
+
+- If the operator can communicate with both the primary and the API server,
+ the primary continues to operate, removing unreachable standbys from the
+ `synchronous_standby_names` set.
+- If the primary cannot reach the operator or API server, a quorum check is
+ performed. The `FailoverQuorum` status cannot have changed, as the primary cannot
+ have received new configuration. If the operator can reach both replicas,
+ failover is allowed (`R=2`). If only one replica is reachable (`R=1`),
+ failover is not allowed.
+
+| R | W | N | Failover |
+| :-: | :-: | :-: | :------: |
+| 2 | 1 | 2 | ✅ |
+| 1 | 1 | 2 | ❌ |
+
+[^1]: [Dynamo: Amazon’s highly available key-value store](https://www.amazon.science/publications/dynamo-amazons-highly-available-key-value-store)
diff --git a/product_docs/docs/postgres_for_kubernetes/1/failure_modes.mdx b/product_docs/docs/postgres_for_kubernetes/1/failure_modes.mdx
new file mode 100644
index 0000000000..7adcdfb2fd
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/failure_modes.mdx
@@ -0,0 +1,76 @@
+---
+title: 'Failure Modes'
+originalFilePath: 'src/failure_modes.md'
+---
+
+
+
+!!! Note
+ In previous versions of {{name.ln}}, this page included specific failure
+ scenarios. Since these largely follow standard Kubernetes behavior, we have
+ streamlined the content to avoid duplication of information that belongs to the
+ underlying Kubernetes stack and is not specific to {{name.ln}}.
+
+{{name.ln}} adheres to standard Kubernetes principles for self-healing and
+high availability. We assume familiarity with core Kubernetes concepts such as
+storage classes, PVCs, nodes, and Pods. For {{name.ln}}-specific details,
+refer to the ["Postgres Instance Manager" section](instance_manager.md), which
+covers startup, liveness, and readiness probes, as well as the
+[self-healing](#self-healing) section below.
+
+!!! Important
+ If you are running {{name.ln}} in production, we strongly recommend
+ seeking [professional support](https://cloudnative-pg.io/support/).
+
+## Self-Healing
+
+### Primary Failure
+
+If the primary Pod fails:
+
+- The operator promotes the most up-to-date standby with the lowest replication
+ lag.
+- The `-rw` service is updated to point to the new primary.
+- The failed Pod is removed from the `-r` and `-rw` services.
+- Standby Pods begin replicating from the new primary.
+- The former primary uses `pg_rewind` to re-synchronize if its PVC is available;
+ otherwise, a new standby is created from a backup of the new primary.
+
+### Standby Failure
+
+If a standby Pod fails:
+
+- It is removed from the `-r` and `-ro` services.
+- The Pod is restarted using its PVC if available; otherwise, a new Pod is
+ created from a backup of the current primary.
+- Once ready, the Pod is re-added to the `-r` and `-ro` services.
+
+## Manual Intervention
+
+For failure scenarios not covered by automated recovery, manual intervention
+may be required.
+
+!!! Important
+ Do not perform manual operations without [professional support](https://cloudnative-pg.io/support/).
+
+### Disabling Reconciliation
+
+To temporarily disable the reconciliation loop for a PostgreSQL cluster, use
+the `k8s.enterprisedb.io/reconciliationLoop` annotation:
+
+```yaml
+metadata:
+ name: cluster-example-no-reconcile
+ annotations:
+ k8s.enterprisedb.io/reconciliationLoop: "disabled"
+spec:
+ # ...
+```
+
+Use this annotation **with extreme caution** and only during emergency
+operations.
+
+!!! Warning
+ This annotation should be removed as soon as the issue is resolved. Leaving
+ it in place prevents the operator from executing self-healing actions,
+ including failover.
diff --git a/product_docs/docs/postgres_for_kubernetes/1/faq.mdx b/product_docs/docs/postgres_for_kubernetes/1/faq.mdx
new file mode 100644
index 0000000000..9448432e55
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/faq.mdx
@@ -0,0 +1,405 @@
+---
+title: 'Frequently Asked Questions (FAQ)'
+originalFilePath: 'src/faq.md'
+---
+
+
+
+## Running PostgreSQL in Kubernetes
+
+**Everyone knows that stateful workloads like PostgreSQL cannot run in
+Kubernetes. Why do you say the contrary?**
+
+An [*independent research survey commissioned by the Data on Kubernetes
+Community*](https://dok.community/dokc-2021-report/) in September 2021
+revealed that half of the respondents run most of their production
+workloads on Kubernetes. 90% of them believe that Kubernetes is ready
+for stateful workloads, and 70% of them run databases in production.
+Databases like Postgres. However, according to them, significant
+challenges remain, such as the knowledge gap (Kubernetes and Cloud
+Native, in general, have a steep learning curve) and the quality of
+Kubernetes operators. The latter is the reason why we believe that an
+operator like {{name.ln}} highly contributes to the success
+of your project.
+
+For database fanatics like us, a real game-changer has been the
+introduction of the support for local persistent volumes in
+[*Kubernetes 1.14 in April 2019*](https://kubernetes.io/blog/2019/04/04/kubernetes-1.14-local-persistent-volumes-ga/).
+
+**{{name.ln}} is built on immutable application containers.
+What does it mean?**
+
+According to the microservice architectural pattern, a container is
+designed to run a single application or process. As a result, such
+container images are built to run the main application as the
+single entry point (the so-called PID 1 process).
+
+In Kubernetes terms, the application is referred to as workload.
+Workloads can be stateless like a web application server or stateful like a
+database. Mapping this concept to PostgreSQL, an immutable application
+container is a single "postgres" process that is running and
+tied to a single and specific version - the one in the immutable
+container image.
+
+No other processes, such as SSH, systemd, or syslog, are allowed.
+
+Immutable Application Containers are in contrast with Mutable System
+Containers, which are still a very common way to interpret and use
+containers.
+
+Immutable means that a container won't be modified during its life: no
+updates, no patches, no configuration changes. If you must update the
+application code or apply a patch, you build a new image and redeploy
+it. Immutability makes deployments safer and more repeatable.
+
+For more information, please refer to
+[*"Why EDB chose immutable application containers"*](https://www.enterprisedb.com/blog/why-edb-chose-immutable-application-containers).
+
+**What does Cloud Native mean?**
+
+The Cloud Native Computing Foundation defines the term
+"[*Cloud Native*](https://github.com/cncf/toc/blob/main/DEFINITION.md)".
+However, since the start of the Cloud Native PostgreSQL/{{name.ln}} operator
+at 2ndQuadrant, the development team has been interpreting Cloud Native
+as three main concepts:
+
+1. An existing, healthy, genuine, and prosperous DevOps culture, founded
+ on people, as well as principles and processes, which enables teams
+ and organizations (as teams of teams) to continuously change so to
+ innovate and accelerate the delivery of outcomes and produce value
+ for the business in safer, more efficient, and more engaging ways
+2. A microservice architecture that is based on Immutable Application
+ Containers
+3. A way to manage and orchestrate these containers, such as Kubernetes
+
+Currently, the standard de facto for container orchestration is
+Kubernetes, which automates the deployment, administration and
+scalability of Cloud Native Applications.
+
+Another definition of Cloud Native that resonates with us is the one
+defined by Ibryam and Huß in
+[*"Kubernetes Patterns", published by O'Reilly*](https://www.oreilly.com/library/view/kubernetes-patterns/9781492050278/):
+
+> Principles, Patterns, Tools to automate containerized microservices at scale
+
+**Can I run {{name.ln}} on bare metal Kubernetes?**
+
+Yes, definitely. You can run Kubernetes on bare metal. And you can dedicate one
+or more physical worker nodes with locally attached storage to PostgreSQL
+workloads for maximum and predictable I/O performance.
+
+The actual Cloud Native PostgreSQL project, from which {{name.ln}}
+originated, was born after a pilot project in 2019 that benchmarked storage and
+PostgreSQL on the same bare metal server, first directly in Linux, and then
+inside Kubernetes. As expected, the experiment showed only negligible
+performance impact introduced by the container running in Kubernetes through
+local persistent volumes, allowing the Cloud Native initiative to continue.
+
+**Why should I use PostgreSQL replication instead of file system
+replication?**
+
+Please read the ["Architecture: Synchronizing the state"](architecture.md#synchronizing-the-state)
+section.
+
+**Why should I use an operator instead of running PostgreSQL as a
+container?**
+
+The most basic approach to running PostgreSQL in Kubernetes is to have a
+pod, which is the smallest unit of deployment in Kubernetes, running a
+Postgres container with no replica. The volume hosting the Postgres data
+directory is mounted on the pod, and it usually resides on network
+storage. In this case, Kubernetes restarts the pod in case of a
+problem or moves it to another Kubernetes node.
+
+The most sophisticated approach is to run PostgreSQL using an operator.
+An operator is an extension of the Kubernetes controller and defines how
+a complex application works in business continuity contexts. The
+operator pattern is currently state of the art in Kubernetes for
+this purpose. An operator simulates the work of a human operator in an
+automated and programmatic way.
+
+Postgres is a complex application, and an operator not only needs to
+deploy a cluster (the first step), but also properly react after
+unexpected events. The typical example is that of a failover.
+
+An operator relies on Kubernetes for capabilities like self-healing,
+scalability, replication, high availability, backup, recovery, updates,
+access, resource control, storage management, and so on. It also
+facilitates the integration of a PostgreSQL cluster in the log
+management and monitoring infrastructure.
+
+{{name.ln}} enables the definition of the desired state of a
+PostgreSQL cluster via declarative configuration. Kubernetes
+continuously makes sure that the current state of the infrastructure
+matches the desired one through reconciliation loops initiated by the
+Kubernetes controller. If the desired state and the actual state don't
+match, reconciliation loops trigger self-healing procedures. That's
+where an operator like {{name.ln}} comes into play.
+
+**Are there any other operators for Postgres out there?**
+
+Yes, of course. And our advice is that you look at all of them and compare
+them with {{name.ln}} before making your decision. You will see that
+most of these operators use an external failover management tool (Patroni
+or similar) and rely on StatefulSets.
+
+Here is a non-exhaustive list, in chronological order of their
+publication on GitHub:
+
+- [Crunchy Data Postgres Operator](https://github.com/CrunchyData/postgres-operator) (2017)
+- [Zalando Postgres Operator](https://github.com/zalando/postgres-operator) (2017)
+- [Stackgres](https://github.com/ongres/stackgres) (2020)
+- [Percona Operator for PostgreSQL](https://github.com/percona/percona-postgresql-operator) (2021)
+- [Kubegres](https://github.com/reactive-tech/kubegres) (2021)
+
+Feel free to report any relevant missing entry as a PR.
+
+!!! Info
+ The [Data on Kubernetes Community](https://dok.community)
+ (which includes some of our maintainers) is working on an independent and
+ vendor neutral project to list the operators called
+ [Operator Feature Matrix](https://github.com/dokc/operator-feature-matrix).
+
+**You say that {{name.ln}} is a fully declarative operator.
+What do you mean by that?**
+
+The easiest way is to explain declarative configuration through an
+example that highlights the differences with imperative configuration.
+In an imperative context, the state is defined as a series of tasks to
+be executed in sequence. So, we can get a three-node PostgreSQL cluster
+by creating the first instance, configuring the replication, cloning a
+second instance, and the third one.
+
+In a declarative approach, the state of a system is defined using
+configuration, namely: there's a PostgreSQL 13 cluster with two replicas.
+This approach highly simplifies change management operations, and when
+these are stored in source control systems like Git, it enables the
+Infrastructure as Code capability. And Kubernetes takes it further than
+deployment, as it makes sure that our request is fulfilled at any time.
+
+**What are the required skills to run PostgreSQL on Kubernetes?**
+
+Running PostgreSQL on Kubernetes requires both PostgreSQL and Kubernetes
+skills in your DevOps team. The best experience is when database
+administrators familiarize themselves with Kubernetes core concepts
+and are able to interact with Kubernetes administrators.
+
+Our advice for everyone who wants to fully exploit Cloud Native
+PostgreSQL is to acquire "Certified Kubernetes Administrator (CKA)"
+status from the CNCF certification program.
+
+**Why isn't {{name.ln}} using StatefulSets?**
+
+{{name.ln}} does not rely on `StatefulSet` resources, and
+instead manages the underlying PVCs directly by leveraging the selected
+storage class for dynamic provisioning. Please refer to the
+["Custom Pod Controller"](controller.md) section for details and reasons behind
+this decision.
+
+## High availability
+
+**What happens to the PostgreSQL clusters when the operator pod dies or is
+not available for a certain amount of time?**
+
+The {{name.ln}} operator is, among other things, responsible for self-healing
+capabilities. As such, these capabilities might not be available during an
+outage of the operator.
+
+However, assuming that the outage does not affect the nodes where PostgreSQL
+clusters are running, the database will continue to serve normal operations,
+through the relevant Kubernetes services. Moreover, the [instance manager](instance_manager.md),
+which runs inside each PostgreSQL pod, will still work, making sure that the
+database server is up, including accessory services like logging, export of
+metrics, continuous archiving of WAL files, etc.
+
+To summarize: an outage of the operator does not necessarily imply a
+PostgreSQL database outage; it's like running a database without a DBA or
+system administrator.
+
+**What are the reasons behind {{name.ln}} not relying on a failover
+management tool like Patroni, repmgr, or Stolon?**
+
+Although part of the team that develops {{name.ln}} has been heavily
+involved in repmgr in the past, we decided to take a different approach
+and directly extend the Kubernetes controller and rely on the Kubernetes API
+server to hold the status of a Postgres cluster, and use it as the only source
+of truth to:
+
+- control High Availability of a Postgres cluster primarily via automated
+ failover and switchover, coordinating itself with the [instance manager](instance_manager.md)
+- control the Kubernetes services, that is, the entry points for your
+  applications
+
+**Should I manually resync a former primary with the new one following a
+failover?**
+
+No. The operator does that automatically for you, and relies on `pg_rewind` to
+synchronize the former primary with the new one.
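+
+For example, after a failover you can verify which instance is the current
+primary and whether the former primary has rejoined as a replica. The
+following is a sketch, assuming the `cnp` plugin is installed and the cluster
+is named `cluster-example`:
+
+```shell
+# show the current primary and the replication status
+# of all the instances in the cluster
+kubectl cnp status cluster-example
+```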
+
+
+
+## Database management
+
+**Why should I use PostgreSQL?**
+
+We believe that PostgreSQL is the equivalent in the database area of
+what Linux represents in the operating system space. The latest major
+version of Postgres is version 18, which ships out of the box with:
+
+- native streaming replication, both physical and logical
+- continuous hot backup and point in time recovery
+- declarative partitioning for horizontal table partitioning, which is
+ a very well-known technique in the database area to improve vertical
+ scalability on a single instance
+- extensibility, with extensions like [PostGIS](postgis.md) for geographical
+ databases
+- parallel queries for vertical scalability
+- JSON support, unleashing the multi-model hybrid database for both
+ structured and unstructured data queried via standard SQL
+
+And so on ...
+
+**How many databases should be hosted in a single PostgreSQL instance?**
+
+Our recommendation is to dedicate a single PostgreSQL cluster
+(intended as primary and multiple standby servers) to a single database,
+entirely managed by a single microservice application. However, by
+leveraging the "postgres" superuser, it is possible to create as many
+users and databases as desired (subject to the available resources).
+
+The reason for this recommendation lies in the Cloud Native concept,
+based on microservices. In a pure microservice architecture, the
+microservice itself should own the data it manages exclusively.
+These could be flat files, queues, key-value stores, or, in our case, a
+PostgreSQL relational database containing both structured and
+unstructured data. The general idea is that only the microservice can
+access the database, including schema management and migrations.
+
+{{name.ln}} has been designed to work this way out of the
+box, by default creating an application user and an application database
+owned by the aforementioned application user.
+
+Reserving a PostgreSQL instance to a single microservice-owned database
+enhances:
+
+- resource management: in PostgreSQL, CPU and memory constrained
+  resources are generally handled at the instance level, not the
+  database level, making it easier to integrate with Kubernetes
+  resource management policies at the pod level
+- physical continuous backup and Point-In-Time-Recovery (PITR): given
+ that PostgreSQL handles continuous backup and recovery at the
+ instance level, having one database per instance simplifies PITR
+ operations, differentiates retention policy management, and
+ increases data protection of backups
+- application updates: enables each application to decide its update
+  policies without impacting other databases owned by different
+  applications
+- database updates: each application can decide which PostgreSQL
+  version to use and, independently, when to upgrade to a different
+  major version of PostgreSQL and under what conditions (e.g., cutover
+  time)
+
+**Is there an upper limit in database size for not considering Kubernetes?**
+
+No, as Kubernetes is no different from virtual machines and bare metal in
+this regard.
+Practically, however, it depends on the available resources of your Kubernetes
+cluster. Our advice with very large databases (VLDB) is to consider a shared
+nothing architecture, where a Kubernetes worker node is dedicated to a single
+Postgres instance, with dedicated storage.
+We proved that this extreme architectural pattern works when we benchmarked
+[running PostgreSQL on bare metal Kubernetes with local persistent
+volumes](https://www.2ndquadrant.com/en/blog/local-persistent-volumes-and-postgresql-usage-in-kubernetes/).
+Tablespaces and horizontal partitioning are data modeling techniques that you
+can use to improve the vertical scalability of your databases.
+
+**How can I specify a time zone in the PostgreSQL cluster?**
+
+PostgreSQL has extensive support for time zones, as explained in the official
+documentation:
+
+- [Date time data types](https://www.postgresql.org/docs/current/datatype-datetime.html)
+- [Client connections config options](https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-TIMEZONE)
+
+Although time zones can be set at the session or transaction level, and even
+as part of a query in PostgreSQL, a very common way is to set them up
+globally. With
+{{name.ln}} you can configure the cluster level time zone in the
+`.spec.postgresql.parameters` section as in the following example:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+ name: pg-italy
+spec:
+ instances: 1
+
+ postgresql:
+ parameters:
+ timezone: "Europe/Rome"
+
+ storage:
+ size: 1Gi
+```
+
+The time zone can be verified with:
+
+```console
+$ kubectl exec -ti pg-italy-1 -c postgres -- psql -x -c "SHOW timezone"
+-[ RECORD 1 ]---------
+TimeZone | Europe/Rome
+```
+
+**What is the recommended architecture for best business continuity
+outcomes?**
+
+As covered in the ["Architecture"](architecture.md) section, the main
+recommendation is to adopt shared nothing architectures as much as possible, by
+leveraging the native capabilities and resources that Kubernetes provides in a
+single cluster, namely:
+
+- availability zones: spread your instances across different availability zones
+ in the same Kubernetes cluster
+- worker nodes: as a consequence, make sure that your Postgres instances reside
+ on different Kubernetes worker nodes
+- storage: use dedicated storage for each worker node running Postgres
+
+Use at least one standby, preferably at least two, so that you can configure
+synchronous replication in the cluster, introducing [RPO](before_you_start.md#rpo)=0
+for high availability.
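+
+As a sketch, quorum-based synchronous replication can be requested
+declaratively on a three-instance cluster (the cluster name is hypothetical;
+refer to the synchronous replication documentation for the full set of
+options):
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+  name: cluster-example
+spec:
+  instances: 3
+  postgresql:
+    synchronous:
+      method: any   # quorum-based synchronous replication
+      number: 1     # require at least one synchronous standby
+  storage:
+    size: 1Gi
+```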
+
+If you do not have availability zones (normally the case with on-premises
+installations), rely on separation at the worker node and storage level.
+
+Properly set up continuous backup on a local/regional object store.
+
+The same architecture used in a single Kubernetes cluster can be replicated
+in another Kubernetes cluster (normally in another geographical area or region)
+through the [replica cluster](replica_cluster.md) feature, providing disaster
+recovery and high availability at global scale.
+
+You can use the WAL archive in the primary object store to feed the replica in
+the other region, without having to provide a streaming connection, if the default
+maximum RPO of 5 minutes is enough for you.
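+
+The following is a minimal sketch of such a replica cluster fed exclusively
+from the WAL archive; the source cluster name, object store path, and
+credentials secret are hypothetical, so refer to the
+[replica cluster](replica_cluster.md) documentation for the complete
+configuration:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+  name: cluster-dr
+spec:
+  instances: 3
+  bootstrap:
+    recovery:
+      source: cluster-primary
+  replica:
+    enabled: true
+    source: cluster-primary
+  externalClusters:
+  - name: cluster-primary
+    barmanObjectStore:
+      destinationPath: s3://backups/cluster-primary  # hypothetical path
+      s3Credentials:
+        accessKeyId:
+          name: backup-creds      # hypothetical secret
+          key: ACCESS_KEY_ID
+        secretAccessKey:
+          name: backup-creds
+          key: ACCESS_SECRET_KEY
+  storage:
+    size: 1Gi
+```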
+
+**How can instances be stopped or started?**
+
+Please look at ["Fencing"](fencing.md) or ["Hibernation"](declarative_hibernation.md).
+
+**What are the global objects such as roles and databases that are
+automatically created by {{name.ln}}?**
+
+The operator automatically creates a user for the application (by default
+called `app`) and a database for the application (by default called `app`)
+which is owned by the aforementioned user.
+
+This way, the database is ready for a microservice adoption, as developers
+can control migrations using the `app` user, without requiring *superuser*
+access.
+
+Teams can then create another user for read-write operations through the
+["Declarative role management"](declarative_role_management.md) feature
+and assign the required `GRANT` to the tables.
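+
+As a sketch, such a role could be declared in the `Cluster` spec; the role
+name and secret below are hypothetical, and the required `GRANT` statements
+still need to be executed in the database:
+
+```yaml
+spec:
+  managed:
+    roles:
+    - name: app_readwrite          # hypothetical role name
+      ensure: present
+      login: true
+      passwordSecret:
+        name: app-readwrite-secret # hypothetical secret
+```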
+
diff --git a/product_docs/docs/postgres_for_kubernetes/1/fencing.mdx b/product_docs/docs/postgres_for_kubernetes/1/fencing.mdx
new file mode 100644
index 0000000000..ed7fa0d9db
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/fencing.mdx
@@ -0,0 +1,111 @@
+---
+title: 'Fencing'
+originalFilePath: 'src/fencing.md'
+---
+
+
+
+Fencing in {{name.ln}} is the process of protecting the data in one, several,
+or even all instances of a PostgreSQL cluster when they
+appear to be malfunctioning. When an instance is fenced, the PostgreSQL server
+process (`postmaster`) is guaranteed to be shut down, while the pod is kept running.
+This makes sure that, until the fence is lifted, data on the pod is not modified by
+PostgreSQL and that the file system can be investigated for debugging and
+troubleshooting purposes.
+
+## How to fence instances
+
+In {{name.ln}} you can fence:
+
+- a specific instance
+- a list of instances
+- an entire Postgres `Cluster`
+
+Fencing is controlled through the content of the `k8s.enterprisedb.io/fencedInstances`
+annotation, which expects a JSON formatted list of instance names.
+If the annotation is set to `'["*"]'`, a singleton list with a wildcard, the
+whole cluster is fenced.
+If the annotation is set to an empty JSON list, the operator behaves as if the
+annotation was not set.
+
+For example:
+
+- `k8s.enterprisedb.io/fencedInstances: '["cluster-example-1"]'` will fence just
+ the `cluster-example-1` instance
+
+- `k8s.enterprisedb.io/fencedInstances: '["cluster-example-1","cluster-example-2"]'`
+ will fence the `cluster-example-1` and `cluster-example-2` instances
+
+- `k8s.enterprisedb.io/fencedInstances: '["*"]'` will fence every instance in
+ the cluster.
+
+The annotation can be manually set on the Kubernetes object, for example via
+the `kubectl annotate` command, or in a transparent way using the
+`kubectl cnp fencing on` subcommand:
+
+```shell
+# to fence only one instance
+kubectl cnp fencing on cluster-example 1
+
+# to fence all the instances in a Cluster
+kubectl cnp fencing on cluster-example "*"
+```
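+
+The equivalent manual annotation can be sketched as follows, assuming a
+cluster named `cluster-example`:
+
+```shell
+# fence a single instance by setting the annotation directly
+kubectl annotate clusters.postgresql.k8s.enterprisedb.io cluster-example \
+  'k8s.enterprisedb.io/fencedInstances=["cluster-example-1"]' \
+  --overwrite
+```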
+
+Here is an example of a `Cluster` with an instance that was previously fenced:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+ annotations:
+ k8s.enterprisedb.io/fencedInstances: '["cluster-example-1"]'
+[...]
+```
+
+## How to lift fencing
+
+Fencing can be lifted by clearing the annotation, or by setting it to a
+different value.
+
+As with fencing, this can be done either manually with `kubectl annotate`, or
+using the `kubectl cnp fencing` subcommand as follows:
+
+```shell
+# to lift the fencing only for one instance
+# N.B.: at the moment this won't work if the whole cluster was fenced previously,
+# in that case you will have to manually set the annotation as explained above
+kubectl cnp fencing off cluster-example 1
+
+# to lift the fencing for all the instances in a Cluster
+kubectl cnp fencing off cluster-example "*"
+```
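+
+When the whole cluster was fenced and you need to clear the annotation
+manually, a sketch (assuming a cluster named `cluster-example`):
+
+```shell
+# remove the fencing annotation entirely (the trailing "-" deletes it)
+kubectl annotate clusters.postgresql.k8s.enterprisedb.io cluster-example \
+  k8s.enterprisedb.io/fencedInstances-
+```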
+
+## How fencing works
+
+Once an instance is set for fencing, the procedure to shut down the
+`postmaster` process is initiated, identical to the one used during a
+switchover. This consists of an initial fast shutdown with a timeout set to
+`.spec.switchoverDelay`, followed by an immediate shutdown. Then:
+
+- the Pod will be kept alive
+
+- the Pod won't be marked as *Ready*
+
+- all the changes that don't require the Postgres instance to be up will be
+ reconciled, including:
+ - configuration files
+ - certificates and all the cryptographic material
+
+- metrics will not be collected, except `cnp_collector_fencing_on` which will be
+ set to 1
+
+!!! Warning
+    If a **primary instance** is fenced, its postmaster process
+    is shut down but no failover is performed, interrupting the operations of
+    the applications. When the fence is lifted, the primary instance is
+    started up again without a failover being performed.
+
+    For this reason, we advise users to fence primary instances only if strictly required.
+
+If a fenced instance is deleted, the pod will be recreated normally, but the
+postmaster won't be started. This can be extremely helpful when instances
+are `Crashlooping`.
diff --git a/product_docs/docs/postgres_for_kubernetes/1/image_catalog.mdx b/product_docs/docs/postgres_for_kubernetes/1/image_catalog.mdx
new file mode 100644
index 0000000000..899734aae7
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/image_catalog.mdx
@@ -0,0 +1,145 @@
+---
+title: 'Image Catalog'
+originalFilePath: 'src/image_catalog.md'
+---
+
+
+
+`ImageCatalog` and `ClusterImageCatalog` are essential resources that empower
+you to define images for creating a `Cluster`.
+
+The key distinction lies in their scope: an `ImageCatalog` is namespaced, while
+a `ClusterImageCatalog` is cluster-scoped.
+
+Both share a common structure, comprising a list of images, each equipped with
+a `major` field indicating the major version of the image.
+
+!!! Warning
+ The operator places trust in the user-defined major version and refrains
+ from conducting any PostgreSQL version detection. It is the user's
+ responsibility to ensure alignment between the declared major version in
+ the catalog and the PostgreSQL image.
+
+The `major` field's value must remain unique within a catalog, preventing
+duplication across images. Distinct catalogs, however, may
+expose different images under the same `major` value.
+
+**Example of a Namespaced `ImageCatalog`:**
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: ImageCatalog
+metadata:
+ name: postgresql
+ namespace: default
+spec:
+  images:
+  - major: 15
+    image: docker.enterprisedb.com/k8s/postgresql:15.14-standard-ubi9
+  - major: 16
+    image: docker.enterprisedb.com/k8s/postgresql:16.10-standard-ubi9
+  - major: 17
+    image: docker.enterprisedb.com/k8s/postgresql:17.6-standard-ubi9
+  - major: 18
+    image: docker.enterprisedb.com/k8s/postgresql:18.1-standard-ubi9
+```
+
+**Example of a Cluster-Wide Catalog using `ClusterImageCatalog` Resource:**
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: ClusterImageCatalog
+metadata:
+ name: postgresql
+spec:
+ images:
+ - major: 15
+ image: docker.enterprisedb.com/k8s/postgresql:15.14-standard-ubi9
+ - major: 16
+ image: docker.enterprisedb.com/k8s/postgresql:16.10-standard-ubi9
+ - major: 17
+ image: docker.enterprisedb.com/k8s/postgresql:17.6-standard-ubi9
+ - major: 18
+ image: docker.enterprisedb.com/k8s/postgresql:18.1-standard-ubi9
+```
+
+A `Cluster` resource can reference either an `ImageCatalog`
+(as in the following example) or a `ClusterImageCatalog` to specify
+the desired image.
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+ name: cluster-example
+spec:
+ instances: 3
+ imageCatalogRef:
+ apiGroup: postgresql.k8s.enterprisedb.io
+ # Change the following to `ClusterImageCatalog` if needed
+ kind: ImageCatalog
+ name: postgresql
+ major: 16
+ storage:
+ size: 1Gi
+```
+
+Clusters referencing these catalogs are continuously monitored: any change to
+an image in a catalog automatically triggers an update of
+**all clusters** referencing that specific entry.
+
+## {{name.ln}} Catalogs
+
+The {{name.ln}} project maintains `ClusterImageCatalog` manifests for all
+supported images.
+
+These catalogs are regularly updated and published in the
+[artifacts repository](https://github.com/cloudnative-pg/artifacts/tree/main/image-catalogs).
+
+Each catalog corresponds to a specific combination of image type (e.g.
+`minimal`) and Debian release (e.g. `trixie`). It lists the most up-to-date
+container images for every supported PostgreSQL major version.
+
+By installing these catalogs, cluster administrators can ensure that their
+PostgreSQL clusters are automatically updated to the latest patch release
+within a given PostgreSQL major version, for the selected Debian distribution
+and image type.
+
+For example, to install the latest catalog for the `minimal` PostgreSQL
+container images on Debian `trixie`, run:
+
+```shell
+kubectl apply -f \
+ https://raw.githubusercontent.com/cloudnative-pg/artifacts/refs/heads/main/image-catalogs/catalog-minimal-trixie.yaml
+```
+
+You can install all the available catalogs by using the `kustomization` file
+present in the `image-catalogs` directory:
+
+```shell
+kubectl apply -k https://github.com/cloudnative-pg/artifacts//image-catalogs?ref=main
+```
+
+You can then view all the catalogs deployed with:
+
+```shell
+kubectl get clusterimagecatalogs.postgresql.k8s.enterprisedb.io
+```
+
+For example, you can create a cluster with the latest `minimal` image for PostgreSQL 18 on `trixie` with:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+ name: angus
+spec:
+ instances: 3
+ imageCatalogRef:
+ apiGroup: postgresql.k8s.enterprisedb.io
+ kind: ClusterImageCatalog
+ name: postgresql-minimal-trixie
+ major: 18
+ storage:
+ size: 1Gi
+```
diff --git a/product_docs/docs/postgres_for_kubernetes/1/images/apps-in-k8s.png b/product_docs/docs/postgres_for_kubernetes/1/images/apps-in-k8s.png
new file mode 100644
index 0000000000..832dcb3c59
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/images/apps-in-k8s.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:afe49c1bcdb498302c3cf0af1bd058b43ca98a0a4de15c25e354912443d58eb0
+size 45106
diff --git a/product_docs/docs/postgres_for_kubernetes/1/images/apps-outside-k8s.png b/product_docs/docs/postgres_for_kubernetes/1/images/apps-outside-k8s.png
new file mode 100644
index 0000000000..4259c49ec5
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/images/apps-outside-k8s.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1e687abe20e25f9589a094860769d2272ade598ecd643035712caa6a9b620e42
+size 54998
diff --git a/product_docs/docs/postgres_for_kubernetes/1/images/architecture-in-k8s.png b/product_docs/docs/postgres_for_kubernetes/1/images/architecture-in-k8s.png
new file mode 100644
index 0000000000..7d93840420
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/images/architecture-in-k8s.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7237ce42adb430948f5dea1e7449760843679cee00cb8d12f4aa82df1f11a8da
+size 116084
diff --git a/product_docs/docs/postgres_for_kubernetes/1/images/architecture-r.png b/product_docs/docs/postgres_for_kubernetes/1/images/architecture-r.png
new file mode 100644
index 0000000000..fd2ec56913
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/images/architecture-r.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8ba71048bf84260cf81c9d351cf9e8b4ccf40484b121eebc04a97a59044d4867
+size 100517
diff --git a/product_docs/docs/postgres_for_kubernetes/1/images/architecture-read-only.png b/product_docs/docs/postgres_for_kubernetes/1/images/architecture-read-only.png
new file mode 100644
index 0000000000..6c19b0cb2d
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/images/architecture-read-only.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1a0c1d859d922680eb77fb07cb3e9b0a0023fc5e31592849abcf43fd31f4a160
+size 95360
diff --git a/product_docs/docs/postgres_for_kubernetes/1/images/architecture-rw.png b/product_docs/docs/postgres_for_kubernetes/1/images/architecture-rw.png
new file mode 100644
index 0000000000..e95d0f887a
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/images/architecture-rw.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:448e13c114845e12cacb9e8e74bb540556ccac383333e20f21bf416427c973b8
+size 84508
diff --git a/product_docs/docs/postgres_for_kubernetes/1/images/grafana-local.png b/product_docs/docs/postgres_for_kubernetes/1/images/grafana-local.png
new file mode 100644
index 0000000000..4788aab1c7
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/images/grafana-local.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6a3f34d6c747cba079665d94b19db1e562ca11526ff442e5a85a5e80b78cef14
+size 398383
diff --git a/product_docs/docs/postgres_for_kubernetes/1/images/ironbank/pulling-the-image.png b/product_docs/docs/postgres_for_kubernetes/1/images/ironbank/pulling-the-image.png
new file mode 100644
index 0000000000..a3dfebb508
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/images/ironbank/pulling-the-image.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a9cb5bda073e0e46d58b62167e20931db93f1f7d9ceb2d64efe8fc990913237f
+size 84974
diff --git a/product_docs/docs/postgres_for_kubernetes/1/images/k8s-architecture-2-az.png b/product_docs/docs/postgres_for_kubernetes/1/images/k8s-architecture-2-az.png
new file mode 100644
index 0000000000..94e83d98eb
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/images/k8s-architecture-2-az.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:34fef36277b75d487881855084959d393f88947b583f7eede432230632e38ad3
+size 100839
diff --git a/product_docs/docs/postgres_for_kubernetes/1/images/k8s-architecture-3-az.png b/product_docs/docs/postgres_for_kubernetes/1/images/k8s-architecture-3-az.png
new file mode 100644
index 0000000000..bbc0f09f6b
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/images/k8s-architecture-3-az.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4b5abe82c6febf14dc1c2c09fe5c40f129e70053fefe654983e64bac0ab301a4
+size 119593
diff --git a/product_docs/docs/postgres_for_kubernetes/1/images/k8s-architecture-multi.png b/product_docs/docs/postgres_for_kubernetes/1/images/k8s-architecture-multi.png
new file mode 100644
index 0000000000..51a22831b4
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/images/k8s-architecture-multi.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7abed062c67cca40349271f22d28595c4e18ddbd6a3da6b62570e8e19590edb2
+size 137762
diff --git a/product_docs/docs/postgres_for_kubernetes/1/images/k8s-pg-architecture.png b/product_docs/docs/postgres_for_kubernetes/1/images/k8s-pg-architecture.png
new file mode 100644
index 0000000000..33fc6164bb
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/images/k8s-pg-architecture.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e4a3524b47563edb6b5372d1c2d651768b899f0b415be230751eb131be4eb60c
+size 256099
diff --git a/product_docs/docs/postgres_for_kubernetes/1/images/microservice-import.png b/product_docs/docs/postgres_for_kubernetes/1/images/microservice-import.png
new file mode 100644
index 0000000000..0a46f5795c
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/images/microservice-import.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:83e6dd482dc826745b118e43eb97ffcf3c4b7177faf02be178eab8c542934114
+size 66198
diff --git a/product_docs/docs/postgres_for_kubernetes/1/images/monolith-import.png b/product_docs/docs/postgres_for_kubernetes/1/images/monolith-import.png
new file mode 100644
index 0000000000..1e0d26fe57
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/images/monolith-import.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4ddcad2179ee80b4023f3b72a45a083b0a21b00c645cc1263dd26499d5dbbd1a
+size 56714
diff --git a/product_docs/docs/postgres_for_kubernetes/1/images/multi-cluster.png b/product_docs/docs/postgres_for_kubernetes/1/images/multi-cluster.png
new file mode 100644
index 0000000000..cbee5229bc
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/images/multi-cluster.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:35bfcbf1a93d05eac13ca242b0681df3cdbc6887e7e232884cb7e7eb78adea9a
+size 202406
diff --git a/product_docs/docs/postgres_for_kubernetes/1/images/network-storage-architecture.png b/product_docs/docs/postgres_for_kubernetes/1/images/network-storage-architecture.png
new file mode 100644
index 0000000000..32cf8f7a31
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/images/network-storage-architecture.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:df55506cdc6337da76b3c48458d6de33839d74570f000eb6be6357611dc6a182
+size 214390
diff --git a/product_docs/docs/postgres_for_kubernetes/1/images/openshift/alerts-openshift.png b/product_docs/docs/postgres_for_kubernetes/1/images/openshift/alerts-openshift.png
new file mode 100644
index 0000000000..6b8c6f612d
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/images/openshift/alerts-openshift.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:93579c3a3e6391b323f8a0600d60306d42b178d5c2ce66e4d1a42376ba529321
+size 173985
diff --git a/product_docs/docs/postgres_for_kubernetes/1/images/openshift/oc_installation_screenshot_1.png b/product_docs/docs/postgres_for_kubernetes/1/images/openshift/oc_installation_screenshot_1.png
new file mode 100644
index 0000000000..4e2c7aae2b
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/images/openshift/oc_installation_screenshot_1.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1a08e22ec632f797ebdcc9e72da73946ec380b68cbbd851d77da9cfe7eb96a98
+size 34584
diff --git a/product_docs/docs/postgres_for_kubernetes/1/images/openshift/oc_installation_screenshot_2.png b/product_docs/docs/postgres_for_kubernetes/1/images/openshift/oc_installation_screenshot_2.png
new file mode 100644
index 0000000000..4ce24413f1
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/images/openshift/oc_installation_screenshot_2.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6efd07bcd16fc88709413bdab1235a577e6903410a3061a0295b87c647faed98
+size 85968
diff --git a/product_docs/docs/postgres_for_kubernetes/1/images/openshift/openshift-operatorgroup-error.png b/product_docs/docs/postgres_for_kubernetes/1/images/openshift/openshift-operatorgroup-error.png
new file mode 100644
index 0000000000..e9f22e0844
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/images/openshift/openshift-operatorgroup-error.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1f6a88e427fbcf5814e47138317844b90e2913d72ad9acd8d301e6810706855e
+size 34699
diff --git a/product_docs/docs/postgres_for_kubernetes/1/images/openshift/openshift-rbac.png b/product_docs/docs/postgres_for_kubernetes/1/images/openshift/openshift-rbac.png
new file mode 100644
index 0000000000..0c28046875
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/images/openshift/openshift-rbac.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6def4b5bdb14a7d8ce3c3a2687bf5a4609c8bd79680e7386b03234fdc698f4d6
+size 167580
diff --git a/product_docs/docs/postgres_for_kubernetes/1/images/openshift/openshift-webconsole-allnamespaces.png b/product_docs/docs/postgres_for_kubernetes/1/images/openshift/openshift-webconsole-allnamespaces.png
new file mode 100644
index 0000000000..72e2f3839d
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/images/openshift/openshift-webconsole-allnamespaces.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:68a599db7dab771c1580182e45f4f0afc6bb0dd0252090299889674c98a0eca0
+size 79549
diff --git a/product_docs/docs/postgres_for_kubernetes/1/images/openshift/openshift-webconsole-multinamespace.png b/product_docs/docs/postgres_for_kubernetes/1/images/openshift/openshift-webconsole-multinamespace.png
new file mode 100644
index 0000000000..99ab9a72ff
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/images/openshift/openshift-webconsole-multinamespace.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a5219672700425805ac2550a57fbfa50b6538358417a4579e539a1449bf674aa
+size 80596
diff --git a/product_docs/docs/postgres_for_kubernetes/1/images/openshift/openshift-webconsole-singlenamespace-list.png b/product_docs/docs/postgres_for_kubernetes/1/images/openshift/openshift-webconsole-singlenamespace-list.png
new file mode 100644
index 0000000000..e016bb9579
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/images/openshift/openshift-webconsole-singlenamespace-list.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:99ef83c8a9bd9b623d88c2a4c37b950dc5782f026a0b10b4347977e14c8319fb
+size 64964
diff --git a/product_docs/docs/postgres_for_kubernetes/1/images/openshift/openshift-webconsole-singlenamespace.png b/product_docs/docs/postgres_for_kubernetes/1/images/openshift/openshift-webconsole-singlenamespace.png
new file mode 100644
index 0000000000..32fc972ca3
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/images/openshift/openshift-webconsole-singlenamespace.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0f93dda137ef656faca200386a740c89566c1c33b84382c65b7f66a84d959059
+size 82620
diff --git a/product_docs/docs/postgres_for_kubernetes/1/images/openshift/operatorhub_1.png b/product_docs/docs/postgres_for_kubernetes/1/images/openshift/operatorhub_1.png
new file mode 100644
index 0000000000..72cceec0aa
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/images/openshift/operatorhub_1.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7a821983dc2af2a32a752e5e437153f6ac7c4985da9d78dc4dbb48c0a44891b8
+size 33589
diff --git a/product_docs/docs/postgres_for_kubernetes/1/images/openshift/operatorhub_2.png b/product_docs/docs/postgres_for_kubernetes/1/images/openshift/operatorhub_2.png
new file mode 100644
index 0000000000..e0e65d7379
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/images/openshift/operatorhub_2.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:dc2d785d25376be97e97159f2b4721d02494894d47b2d5db7615218f95022c22
+size 105235
diff --git a/product_docs/docs/postgres_for_kubernetes/1/images/openshift/prometheus-queries.png b/product_docs/docs/postgres_for_kubernetes/1/images/openshift/prometheus-queries.png
new file mode 100644
index 0000000000..b8786c85f9
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/images/openshift/prometheus-queries.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1330ef1e7247431cc392ef98154651e26af52ea03ea8c2d2192cd5dd7b6ea901
+size 91027
diff --git a/product_docs/docs/postgres_for_kubernetes/1/images/operator-capability-level.png b/product_docs/docs/postgres_for_kubernetes/1/images/operator-capability-level.png
new file mode 100644
index 0000000000..6d66fd552e
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/images/operator-capability-level.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:89951317d0ffadd45c1f9f8b810f3c536d12635ce2abf5527d77ef1f7952d7ad
+size 155789
diff --git a/product_docs/docs/postgres_for_kubernetes/1/images/pgadmin4.png b/product_docs/docs/postgres_for_kubernetes/1/images/pgadmin4.png
new file mode 100644
index 0000000000..934cfc8f61
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/images/pgadmin4.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:658332686b109a1cf62aa84b16a14b78c5c0ad131386fd16332805dfc4f1bf59
+size 126821
diff --git a/product_docs/docs/postgres_for_kubernetes/1/images/pgbouncer-architecture-rw.png b/product_docs/docs/postgres_for_kubernetes/1/images/pgbouncer-architecture-rw.png
new file mode 100644
index 0000000000..efceb9ce26
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/images/pgbouncer-architecture-rw.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:27b9d1ec02c4a0f527ed3dc04c535542d2279cf327382746c742cf28b06ef735
+size 169722
diff --git a/product_docs/docs/postgres_for_kubernetes/1/images/pgbouncer-pooler-image.png b/product_docs/docs/postgres_for_kubernetes/1/images/pgbouncer-pooler-image.png
new file mode 100644
index 0000000000..b5e67e7c24
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/images/pgbouncer-pooler-image.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:18b7f09c4c6dc18d98cb90ea3449dfdaa4dae2649e9ee90cd8b087b64b48dd34
+size 32737
diff --git a/product_docs/docs/postgres_for_kubernetes/1/images/pgbouncer-pooler-template.png b/product_docs/docs/postgres_for_kubernetes/1/images/pgbouncer-pooler-template.png
new file mode 100644
index 0000000000..7ac00a187d
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/images/pgbouncer-pooler-template.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ee0afb7ffa1755928b999b5ab4a64f26fc46aaa69439c43f7249e7931c08767f
+size 24875
diff --git a/product_docs/docs/postgres_for_kubernetes/1/images/prometheus-local.png b/product_docs/docs/postgres_for_kubernetes/1/images/prometheus-local.png
new file mode 100644
index 0000000000..4acf664377
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/images/prometheus-local.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:19488063b3f0de9576c68fdc22130a14f0b1032e2448b862c270571207ad86d1
+size 125996
diff --git a/product_docs/docs/postgres_for_kubernetes/1/images/public-cloud-architecture-storage-replication.png b/product_docs/docs/postgres_for_kubernetes/1/images/public-cloud-architecture-storage-replication.png
new file mode 100644
index 0000000000..5a03f36bba
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/images/public-cloud-architecture-storage-replication.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:430aa00da8133731ddd105c632d39afed9df6622433cd90ead5c2e742ffa59f8
+size 106922
diff --git a/product_docs/docs/postgres_for_kubernetes/1/images/public-cloud-architecture.png b/product_docs/docs/postgres_for_kubernetes/1/images/public-cloud-architecture.png
new file mode 100644
index 0000000000..c875100533
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/images/public-cloud-architecture.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b26aa19f5af2e0ed31a200cc49936c11f09326733e8bed66620092e805c9705a
+size 162809
diff --git a/product_docs/docs/postgres_for_kubernetes/1/images/shared-nothing-architecture.png b/product_docs/docs/postgres_for_kubernetes/1/images/shared-nothing-architecture.png
new file mode 100644
index 0000000000..3aa9f1544a
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/images/shared-nothing-architecture.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:82acabb6b2eae79a4aeddd568e47a6212a99245df35ef9242ca0b0fc93a1c994
+size 156302
diff --git a/product_docs/docs/postgres_for_kubernetes/1/images/write_bw.1-2Draw.png b/product_docs/docs/postgres_for_kubernetes/1/images/write_bw.1-2Draw.png
new file mode 100644
index 0000000000..683a652236
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/images/write_bw.1-2Draw.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:45e0fd29b8188f5596088c4c5e9a31c8a564ab499304e35635a6aa645c90670d
+size 15555
diff --git a/product_docs/docs/postgres_for_kubernetes/1/imagevolume_extensions.mdx b/product_docs/docs/postgres_for_kubernetes/1/imagevolume_extensions.mdx
new file mode 100644
index 0000000000..6029c7f09b
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/imagevolume_extensions.mdx
@@ -0,0 +1,354 @@
+---
+title: 'Image Volume Extensions'
+originalFilePath: 'src/imagevolume_extensions.md'
+---
+
+
+
+{{name.ln}} supports the **dynamic loading of PostgreSQL extensions** into a
+`Cluster` at Pod startup using the [Kubernetes `ImageVolume` feature](https://kubernetes.io/docs/tasks/configure-pod-container/image-volumes/)
+and the `extension_control_path` GUC introduced in PostgreSQL 18, to which this
+project contributed.
+
+This feature allows you to mount a [PostgreSQL extension](https://www.postgresql.org/docs/current/extend-extensions.html),
+packaged as an OCI-compliant container image, as a read-only and immutable
+volume inside a running pod at a known filesystem path.
+
+You can make the extension available either globally, using the
+[`shared_preload_libraries` option](postgresql_conf.md#shared-preload-libraries),
+or at the database level through the `CREATE EXTENSION` command. For the
+latter, you can use the [`Database` resource’s declarative extension management](declarative_database_management.md#managing-extensions-in-a-database)
+to ensure consistent, automated extension setup within your PostgreSQL
+databases.
+
+## Benefits
+
+Image volume extensions decouple the distribution of PostgreSQL operand
+container images from the distribution of extensions. This eliminates the
+need to define and embed extensions at build time within your PostgreSQL
+images—a major adoption blocker for PostgreSQL as a containerized workload,
+including from a security and supply chain perspective.
+
+As a result, you can:
+
+- Use the [official PostgreSQL `minimal` operand images](https://github.com/enterprisedb/docker-postgres?tab=readme-ov-file#minimal-images)
+ provided by {{name.ln}}.
+- Dynamically add the extensions you need to your `Cluster` definitions,
+ without rebuilding or maintaining custom PostgreSQL images.
+- Reduce your operational surface by using immutable, minimal, and secure base
+ images while adding only the extensions required for each workload.
+
+Extension images must be built according to the
+[documented specifications](#image-specifications).
+
+## Requirements
+
+To use image volume extensions with {{name.ln}}, you need:
+
+- **PostgreSQL 18 or later**, with support for `extension_control_path`.
+- **Kubernetes 1.33**, with the `ImageVolume` feature gate enabled.
+- **{{name.ln}}-compatible extension container images**, ensuring:
+ - Matching PostgreSQL major version of the `Cluster` resource.
+ - Compatible operating system distribution of the `Cluster` resource.
+ - Matching CPU architecture of the `Cluster` resource.
+
+## How it works
+
+Extension images are defined in the `.spec.postgresql.extensions` stanza of a
+`Cluster` resource, which accepts an ordered list of extensions to be added to
+the PostgreSQL cluster.
+
+!!! Info
+ For field-level details, see the
+ [API reference for `ExtensionConfiguration`](pg4k.v1.md#postgresql-k8s-enterprisedb-io-v1-ExtensionConfiguration).
+
+Each image volume is mounted at `/extensions/<name>/`, where `<name>` is the
+value of the extension's `name` field.
+
+By default, {{name.ln}} automatically manages the relevant GUCs, setting:
+
+- `extension_control_path` to `/extensions/<name>/share`, allowing
+  PostgreSQL to locate any extension control file within `/extensions/<name>/share/extension`
+- `dynamic_library_path` to `/extensions/<name>/lib`
+
+These values are appended in the order in which the extensions are defined in
+the `extensions` list, ensuring deterministic path resolution within
+PostgreSQL. This allows PostgreSQL to discover and load the extension without
+requiring manual configuration inside the pod.
+
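+For example, with two hypothetical extensions named `foo` and `bar` declared in
+that order, the resulting configuration would look roughly as follows
+(illustrative values only, assuming the built-in `$system` and `$libdir`
+defaults are preserved):
+
+```
+extension_control_path = '$system:/extensions/foo/share:/extensions/bar/share'
+dynamic_library_path = '$libdir:/extensions/foo/lib:/extensions/bar/lib'
+```
+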
+!!! Info
+ Depending on how your extension container images are built and their layout,
+ you may need to adjust the default `extension_control_path` and
+ `dynamic_library_path` values to match the image structure.
+
+!!! Important
+ If the extension image includes shared libraries, they must be compiled
+ with the same PostgreSQL major version, operating system distribution, and CPU
+ architecture as the PostgreSQL container image used by your cluster, to ensure
+ compatibility and prevent runtime issues.
+
+## How to add a new extension
+
+Adding an extension to a database in {{name.ln}} involves a few steps:
+
+1. Define the extension image in the `Cluster` resource so that PostgreSQL can
+ discover and load it.
+2. Add the library to [`shared_preload_libraries`](postgresql_conf.md#shared-preload-libraries)
+ if the extension requires it.
+3. Declare the extension in the `Database` resource where you want it
+ installed, if the extension supports `CREATE EXTENSION`.
+
+!!! Warning
+ Avoid making changes to extension images and PostgreSQL configuration
+ settings (such as `shared_preload_libraries`) simultaneously.
+ First, allow the pod to roll out with the new extension image, then update
+ the PostgreSQL configuration.
+ This limitation will be addressed in a future release of {{name.ln}}.
+
+For illustration purposes, this guide uses a simple, fictitious extension named
+`foo` that supports `CREATE EXTENSION`.
+
+### Adding a new extension to a `Cluster` resource
+
+You can add an `ImageVolume`-based extension to a `Cluster` using the
+`.spec.postgresql.extensions` stanza. For example:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+ name: foo-18
+spec:
+ # ...
+ postgresql:
+ extensions:
+ - name: foo
+ image:
+ reference: # registry path for your extension image
+ # ...
+```
+
+The `name` field is **mandatory** and **must be unique within the cluster**, as
+it determines the mount path (`/extensions/foo` in this example). It must
+consist of *lowercase alphanumeric characters or hyphens (`-`)* and must start
+and end with an alphanumeric character.
+
+The `image` stanza follows the [Kubernetes `ImageVolume` API](https://kubernetes.io/docs/tasks/configure-pod-container/image-volumes/).
+The `reference` must point to a valid container registry path for the extension
+image.
+
+!!! Important
+ When a new extension is added to a running `Cluster`, {{name.ln}} will
+ automatically trigger a [rolling update](rolling_update.md) to attach the new
+ image volume to each pod. Before adding a new extension in production,
+ ensure you have thoroughly tested it in a staging environment to prevent
+ configuration issues that could leave your PostgreSQL cluster in an unhealthy
+ state.
+
+Once mounted, {{name.ln}} will automatically configure PostgreSQL by appending:
+
+- `/extensions/foo/share` to `extension_control_path`
+- `/extensions/foo/lib` to `dynamic_library_path`
+
+This ensures that the PostgreSQL container is ready to serve the `foo`
+extension when requested by a database, as described in the next section. The
+`CREATE EXTENSION foo` command, triggered automatically during the
+[reconciliation of the `Database` resource](declarative_database_management.md#managing-extensions-in-a-database),
+will work without additional configuration, as PostgreSQL will locate:
+
+- the extension control file at `/extensions/foo/share/extension/foo.control`
+- the shared library at `/extensions/foo/lib/foo.so`
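+
+If the hypothetical `foo` extension also required its library to be preloaded,
+you would add it to `shared_preload_libraries` in a separate step from the
+extension image change, as recommended in the warning above. A sketch:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+  name: foo-18
+spec:
+  # ...
+  postgresql:
+    shared_preload_libraries:
+      - foo
+    extensions:
+      - name: foo
+        image:
+          reference: # registry path for your extension image
+```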
+
+### Adding a new extension to a `Database` resource
+
+Once the extension is available in the PostgreSQL instance, you can leverage
+declarative databases to [manage the lifecycle of your extensions](declarative_database_management.md#managing-extensions-in-a-database)
+within the target database.
+
+Continuing with the `foo` example, you can request the installation of the
+`foo` extension in the `app` database of the `foo-18` cluster using the
+following resource definition:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Database
+metadata:
+ name: foo-app
+spec:
+ name: app
+ owner: app
+ cluster:
+ name: foo-18
+ extensions:
+ - name: foo
+ version: 1.0
+```
+
+{{name.ln}} will automatically reconcile this resource, executing the
+`CREATE EXTENSION foo` command inside the `app` database if it is not
+already installed, ensuring your desired state is maintained without manual
+intervention.
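+
+To remove the extension declaratively, the same resource can set the
+extension's `ensure` field to `absent` (it defaults to `present`), causing the
+operator to drop the extension from the database during reconciliation:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Database
+metadata:
+  name: foo-app
+spec:
+  name: app
+  owner: app
+  cluster:
+    name: foo-18
+  extensions:
+    - name: foo
+      ensure: absent
+```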
+
+## Advanced Topics
+
+In some cases, the default expected structure may be insufficient for your
+extension image, particularly when:
+
+- The extension requires additional system libraries.
+- Multiple extensions are bundled in the same image.
+- The image uses a custom directory structure.
+
+Following the *"convention over configuration"* paradigm, {{name.ln}} allows
+you to finely control the configuration of each extension image through the
+following fields:
+
+- `extension_control_path`: A list of relative paths within the container image
+ to be appended to PostgreSQL’s `extension_control_path`, allowing it to
+ locate extension control files.
+- `dynamic_library_path`: A list of relative paths within the container image
+ to be appended to PostgreSQL’s `dynamic_library_path`, enabling it to locate
+ shared library files for extensions.
+- `ld_library_path`: A list of relative paths within the container image to be
+ appended to the `LD_LIBRARY_PATH` environment variable of the instance
+ manager process, allowing PostgreSQL to locate required system libraries at
+ runtime.
+
+This flexibility enables you to support complex or non-standard extension
+images while maintaining clarity and predictability.
+
+### Setting Custom Paths
+
+If your extension image does not use the default `lib` and `share` directories
+for its libraries and control files, you can override the defaults by
+explicitly setting `extension_control_path` and `dynamic_library_path`.
+
+For example:
+
+```yaml
+spec:
+ postgresql:
+ extensions:
+ - name: my-extension
+ extension_control_path:
+ - my/share/path
+ dynamic_library_path:
+ - my/lib/path
+ image:
+ reference: # registry path for your extension image
+```
+
+{{name.ln}} will configure PostgreSQL with:
+
+- `/extensions/my-extension/my/share/path` appended to `extension_control_path`
+- `/extensions/my-extension/my/lib/path` appended to `dynamic_library_path`
+
+This allows PostgreSQL to discover your extension’s control files and shared
+libraries correctly, even with a non-standard layout.
+
+### Multi-extension Images
+
+You may need to include multiple extensions within the same container image,
+adopting a structure where each extension’s files reside in its own
+subdirectory.
+
+For example, to package PostGIS and pgRouting together in a single image, each
+in its own subdirectory:
+
+```yaml
+# ...
+spec:
+ # ...
+ postgresql:
+ extensions:
+ - name: geospatial
+ extension_control_path:
+ - postgis/share
+ - pgrouting/share
+ dynamic_library_path:
+ - postgis/lib
+ - pgrouting/lib
+ # ...
+ image:
+ reference: # registry path for your geospatial image
+ # ...
+ # ...
+ # ...
+```
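+
+A single `Database` resource can then request both extensions shipped in the
+image (the cluster name below is hypothetical):
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Database
+metadata:
+  name: geospatial-app
+spec:
+  name: app
+  owner: app
+  cluster:
+    name: cluster-example
+  extensions:
+    - name: postgis
+    - name: pgrouting
+```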
+
+### Including System Libraries
+
+Some extensions, such as PostGIS, require system libraries that may not be
+present in the base PostgreSQL image. To support these requirements, you can
+package the necessary libraries within your extension container image and make
+them available to PostgreSQL using the `ld_library_path` field.
+
+For example, if your extension image includes a `system` directory with the
+required libraries:
+
+```yaml
+# ...
+spec:
+ # ...
+ postgresql:
+ extensions:
+ - name: postgis
+ # ...
+ ld_library_path:
+ - syslib
+ image:
+ reference: # registry path for your PostGIS image
+ # ...
+ # ...
+ # ...
+```
+
+{{name.ln}} will set the `LD_LIBRARY_PATH` environment variable to include
+`/extensions/postgis/syslib`, allowing PostgreSQL to locate and load these
+system libraries at runtime.
+
+!!! Important
+ Since `ld_library_path` must be set when the PostgreSQL process starts,
+ changing this value requires a **cluster restart** for the new value to take effect.
+ {{name.ln}} does not currently trigger this restart automatically; you will need to
+    manually restart the cluster (e.g., using `kubectl cnp restart`) after modifying `ld_library_path`.
+
+## Image Specifications
+
+A standard extension container image for {{name.ln}} includes two
+required directories at its root:
+
+- `/share/`: contains an `extension` subdirectory with the extension control
+  file (e.g. `<extension>.control`) and the corresponding SQL files.
+- `/lib/`: contains the extension’s shared library (e.g. `<extension>.so`) as
+  well as any other required libraries.
+
+Following this structure ensures that the extension will be automatically
+discoverable and usable by PostgreSQL within {{name.ln}} without requiring
+manual configuration.
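+
+For instance, a minimal image for the hypothetical `foo` extension would
+contain files laid out as follows:
+
+```
+/share/extension/foo.control
+/share/extension/foo--1.0.sql
+/lib/foo.so
+```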
+
+!!! Important
+ We encourage PostgreSQL extension developers to publish OCI-compliant extension
+ images following this layout as part of their artifact distribution, making
+ their extensions easily consumable within Kubernetes environments.
+ Ideally, extension images should target a specific operating system
+ distribution and architecture, be tied to a particular PostgreSQL version, and
+ be built using the distribution’s native packaging system (for example, using
+ Debian or RPM packages). This approach ensures consistency, security, and
+ compatibility with the PostgreSQL images used in your clusters.
+
+## Caveats
+
+Currently, adding, removing, or updating an extension image triggers a
+restart of the PostgreSQL pods. This behavior is inherited from how
+[image volumes](https://kubernetes.io/docs/tasks/configure-pod-container/image-volumes/)
+work in Kubernetes.
+
+Before performing an extension update, ensure you have:
+
+- Thoroughly tested the update process in a staging environment.
+- Verified that the extension image contains the required upgrade path between
+ the currently installed version and the target version.
+- Updated the `version` field for the extension in the relevant `Database`
+ resource definition to align with the new version in the image.
+
+These steps help prevent downtime or data inconsistencies in your PostgreSQL
+clusters during extension updates.
diff --git a/product_docs/docs/postgres_for_kubernetes/1/index.mdx b/product_docs/docs/postgres_for_kubernetes/1/index.mdx
new file mode 100644
index 0000000000..a5adbeb5db
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/index.mdx
@@ -0,0 +1,222 @@
+---
+title: '{{name.ln}}'
+description: The {{name.ln}} operator is a fork based on CloudNativePG™ which provides additional value such as compatibility with Oracle databases and additional supported platforms such as IBM Power and OpenShift.
+originalFilePath: src/index.md
+indexCards: none
+directoryDefaults:
+ version: "1.27.1"
+redirects:
+ - /postgres_for_kubernetes/preview/:splat
+navigation:
+ - rel_notes
+ - '!commercial_support.mdx'
+ - '!release_notes*'
+ - '!e2e.mdx'
+ - '!supported_releases.mdx'
+ - '#Getting Started'
+ - before_you_start
+ - use_cases
+ - architecture
+ - installation_upgrade
+ - quickstart
+ - preview_version
+ - '#Configuration'
+ - postgresql_conf
+ - operator_conf
+ - cluster_conf
+ - samples
+ - '#Using'
+ - bootstrap
+ - image_catalog
+ - database_import
+ - security
+ - instance_manager
+ - scheduling
+ - resource_management
+ - failure_modes
+ - rolling_update
+ - replication
+ - logical_replication
+ - backup
+ - recovery
+ - wal_archiving
+ - service_management
+ - declarative_role_management
+ - declarative_database_management
+ - tablespaces
+ - storage
+ - labels_annotations
+ - monitoring
+ - logging
+ - certificates
+ - ssl_connections
+ - applications
+ - connection_pooling
+ - replica_cluster
+ - kubernetes_upgrade
+ - postgres_upgrades
+ - expose_pg_services
+ - kubectl-plugin
+ - failover
+ - fencing
+ - declarative_hibernation
+ - postgis
+ - container_images
+ - imagevolume_extensions
+ - cnp_i
+ - controller
+ - networking
+ - benchmarking
+ - '#EDB Enhancements'
+ - evaluation
+ - license_keys
+ - private_edb_registries
+ - openshift
+ - iron-bank
+ - tde
+ - addons
+ - '#Reference'
+ - operator_capability_levels
+ - faq
+ - troubleshooting
+ - migrating_edb_registries
+ - pg4k.v1
+ - backup_recovery
+ - cncf-projects
+ - '#Appendix'
+ - backup_volumesnapshot
+ - backup_barmanobjectstore
+ - object_stores
+navRootedTo: /edb-postgres-ai/platforms-and-tools/kubernetes
+pdf: true
+---
+
+The {{name.ln}} operator is a fork based on [CloudNativePG™](https://cloudnative-pg.io).
+It provides additional value such as compatibility with Oracle using EDB
+Postgres Advanced Server and additional supported platforms such as IBM Power
+and OpenShift. It is designed, developed, and supported by EDB and covers the
+full lifecycle of highly available Postgres database clusters with a
+primary/standby architecture, using native streaming replication.
+
+{{name.ln}} was made generally available on February 4, 2021. Earlier versions were made available to selected customers prior to the GA release.
+
+!!! Note
+
+ The operator has been renamed from Cloud Native PostgreSQL. Existing users
+ of Cloud Native PostgreSQL will not experience any change, as the underlying
+ components and resources have not changed.
+
+## Key features in common with CloudNativePG™
+
+- Kubernetes API integration for high availability
+ - CloudNativePG™ uses the `postgresql.cnpg.io/v1` API version
+ - {{name.ln}} uses the `postgresql.k8s.enterprisedb.io/v1` API version
+- Self-healing through failover and automated recreation of replicas
+- Capacity management with scale up/down capabilities
+- Planned switchovers for scheduled maintenance
+- Read-only and read-write Kubernetes services definitions
+- Rolling updates for Postgres minor versions and operator upgrades
+- Continuous backup and point-in-time recovery
+- Connection Pooling with PgBouncer
+- Integrated metrics exporter out of the box
+- PostgreSQL replication across multiple Kubernetes clusters
+- Separate volume for WAL files
+
+## Features unique to EDB Postgres for Kubernetes
+
+- [Long Term Support](#long-term-support)
+- Support on IBM Power and z/Linux through partnership with IBM
+- [Oracle compatibility](/epas/latest/fundamentals/epas_fundamentals/epas_compat_ora_dev_guide/) through EDB Postgres Advanced Server
+- [Transparent Data Encryption (TDE)](/tde/latest/) through EDB Postgres Advanced Server
+- Cold backup support with Kasten and Velero/OADP
+- Generic adapter for third-party Kubernetes backup tools
+
+You can [evaluate {{name.ln}} for free](evaluation.md) as part of a trial subscription.
+You need a valid EDB subscription to use {{name.ln}} in production.
+
+!!! Note
+
+ Based on the [Operator Capability Levels model](operator_capability_levels.md),
+ users can expect a **"Level V - Auto Pilot"** set of capabilities from the
+ {{name.ln}} Operator.
+
+### Long Term Support
+
+EDB is committed to declaring a Long Term Support (LTS) version of
+{{name.ln}} annually. 1.25 is the current LTS version; 1.18 was the first.
+Each LTS version receives maintenance releases and is supported for an
+additional 12 months beyond the last community release of CloudNativePG for
+the same version.
+
+For example, the 1.22 release of CloudNativePG reached End-of-Life on July
+24, 2024, for the open source community.
+Because it was declared an LTS version of {{name.ln}}, it
+will be supported for an additional 12 months, until July 24, 2025.
+
+In addition, customers will always have at least 6 months to move between LTS
+versions. For example, the 1.25 LTS version was released on December 23, 2024,
+giving users ample time to migrate from the 1.22 LTS ahead of its end-of-life
+in July 2025.
+
+While we encourage customers to regularly upgrade to the latest version of the operator to take
+advantage of new features, having LTS versions allows customers desiring additional stability to stay on the same
+version for 12-18 months before upgrading.
+
+## Licensing
+
+{{name.ln}} works with PostgreSQL, EDB Postgres Extended, and EDB Postgres
+Advanced Server, and is available under the
+[EDB Limited Use License](https://www.enterprisedb.com/limited-use-license).
+
+You can [evaluate {{name.ln}} for free](evaluation.md) as part of a trial subscription.
+You need a valid EDB subscription to use {{name.ln}} in production.
+
+## Supported releases and Kubernetes distributions
+
+For a list of the minor releases of {{name.ln}} that are
+supported by EDB, please refer to the
+["Platform Compatibility"](https://www.enterprisedb.com/resources/platform-compatibility#pgk8s)
+page. Here you can also find which Kubernetes distributions and versions are
+supported for each of them and the EOL dates.
+
+### Multiple architectures
+
+The {{name.ln}} Operator container images support the multi-arch
+format for the following platforms: `linux/amd64`, `linux/arm64`, `linux/ppc64le`, `linux/s390x`.
+
+!!! Warning
+
+ {{name.ln}} requires that all nodes in a Kubernetes cluster have the
+ same CPU architecture, thus a hybrid CPU architecture Kubernetes cluster is not
+ supported. Additionally, EDB supports `linux/ppc64le` and `linux/s390x` architectures
+ on OpenShift only.
+
+## Supported Postgres versions
+
+The following versions of Postgres are currently supported by version 1.25:
+
+- PostgreSQL: 13 - 17
+- EDB Postgres Advanced: 13 - 17
+- EDB Postgres Extended: 13 - 17
+
+PostgreSQL and EDB Postgres Advanced are available on the following platforms:
+`linux/amd64`, `linux/ppc64le`, `linux/s390x`.
+In addition, PostgreSQL is also supported on `linux/arm64`.
+EDB Postgres Extended is supported only on `linux/amd64`.
+EDB supports operand images for `linux/ppc64le` and `linux/s390x` architectures
+on OpenShift only.
+
+## About this guide
+
+Follow the instructions in the ["Quickstart"](quickstart.md) to test {{name.ln}}
+on a local Kubernetes cluster using Kind or Minikube.
+
+In case you are not familiar with some basic terminology on Kubernetes and PostgreSQL,
+please consult the ["Before you start" section](before_you_start.md).
+
+!!! Note
+
+ Although the guide primarily addresses Kubernetes, all concepts can
+ be extended to OpenShift as well.
+
+*[Postgres, PostgreSQL and the Slonik Logo](https://www.postgresql.org/about/policies/trademarks/)
+are trademarks or registered trademarks of the PostgreSQL Community Association
+of Canada, and used with their permission.*
diff --git a/product_docs/docs/postgres_for_kubernetes/1/installation_upgrade.mdx b/product_docs/docs/postgres_for_kubernetes/1/installation_upgrade.mdx
new file mode 100644
index 0000000000..03ebfe047b
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/installation_upgrade.mdx
@@ -0,0 +1,351 @@
+---
+title: 'Installation and upgrades'
+originalFilePath: 'src/installation_upgrade.md'
+---
+
+
+
+!!! Seealso "OpenShift"
+ For instructions on how to install Cloud Native PostgreSQL on Red Hat
+ OpenShift Container Platform, please refer to the ["OpenShift"](openshift.md)
+ section.
+
+!!! Warning
+    OLM (via [operatorhub.io](https://operatorhub.io/)) is no longer supported
+ as an installation method for {{name.ln}}.
+
+## Installation on Kubernetes
+
+### Obtaining an EDB subscription token
+
+!!! Important
+ You must obtain an EDB subscription token to install {{name.ln}}. Without a token, you will not be able to access the EDB private software repositories.
+
+Installing {{name.ln}} requires an EDB Repos 2.0 token to gain access to the EDB private software repositories.
+For instructions on obtaining this token, see: [Get your token](/repos/getting_started/with_web/get_your_token/).
+
+Then set the Repos 2.0 token as an environment variable `EDB_SUBSCRIPTION_TOKEN`:
+
+```shell
+EDB_SUBSCRIPTION_TOKEN=<your-subscription-token>
+```
+
+!!! Warning
+ The token is sensitive information. Please ensure that you don't expose it to unauthorized users.
+
+You can now proceed with the installation.
+
+### Using the Helm Chart
+
+The operator can be installed using the provided [Helm chart](https://github.com/EnterpriseDB/edb-postgres-for-kubernetes-charts).
+
+### Directly using the operator manifest
+
+#### Install the EDB pull secret
+
+Before installing {{name.ln}}, you need to create a pull secret for EDB software in the `postgresql-operator-system` namespace.
+
+The pull secret needs to be saved in the namespace where the operator will reside. Create the `postgresql-operator-system` namespace using this command:
+
+```shell
+kubectl create namespace postgresql-operator-system
+```
+
+To create the pull secret itself, run the following command:
+
+```shell
+kubectl create secret -n postgresql-operator-system docker-registry edb-pull-secret \
+ --docker-server=docker.enterprisedb.com \
+ --docker-username=k8s \
+ --docker-password="$EDB_SUBSCRIPTION_TOKEN"
+```
+
+#### Install the operator
+
+Now that the pull-secret has been added to the namespace, the operator can be installed like any other resource in Kubernetes,
+through a YAML manifest applied via `kubectl`.
+
+You can install the manifest for the latest version of the operator by running:
+
+```sh
+kubectl apply --server-side -f \
+ https://get.enterprisedb.io/pg4k/pg4k-1.27.1.yaml
+```
+
+You can verify that with:
+
+```sh
+kubectl rollout status deployment \
+ -n postgresql-operator-system postgresql-operator-controller-manager
+```
+
+### Using the `cnp` plugin for `kubectl`
+
+You can use the `cnp` plugin to override the default configuration options
+that are in the static manifests.
+
+For example, to generate the default latest manifest but change the watch
+namespaces to only be a specific namespace, you could run:
+
+```shell
+kubectl cnp install generate \
+ --watch-namespace "specific-namespace" \
+ > cnp_for_specific_namespace.yaml
+```
+
+See the ["`cnp` plugin"](./kubectl-plugin.md#generation-of-installation-manifests) documentation
+for a more comprehensive example.
+
+!!! Warning
+ If you are deploying {{name.ln}} on GKE and get an error (`... failed to
+ call webhook...`), be aware that by default traffic between worker nodes
+ and control plane is blocked by the firewall except for a few specific
+ ports, as explained in the official
+ [docs](https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters#add_firewall_rules)
+ and by this
+ [issue](https://github.com/cloudnative-pg/cloudnative-pg/issues/1360).
+ You'll need to either change the `targetPort` in the webhook service, to be
+ one of the allowed ones, or open the webhooks' port (`9443`) on the
+ firewall.
+
+## Details about the deployment
+
+In Kubernetes, the operator is by default installed in the `postgresql-operator-system`
+namespace as a Kubernetes `Deployment`. The name of this deployment
+depends on the installation method.
+When installed through the manifest or the `cnp` plugin, by default, it is called `postgresql-operator-controller-manager`.
+When installed via Helm, by default, the deployment name is derived from the Helm release name with the suffix `-edb-postgres-for-kubernetes` appended (e.g., `<release-name>-edb-postgres-for-kubernetes`).
+
+!!! Note
+ With Helm you can customize the name of the deployment via the
+ `fullnameOverride` field in the [*"values.yaml"* file](https://helm.sh/docs/chart_template_guide/values_files/).
+
+You can get more information using the `describe` command in `kubectl`:
+
+```sh
+$ kubectl get deployments -n postgresql-operator-system
+NAME                READY   UP-TO-DATE   AVAILABLE   AGE
+<deployment-name>   1/1     1            1           18m
+```
+
+```sh
+kubectl describe deploy \
+  -n postgresql-operator-system \
+  postgresql-operator-controller-manager
+```
+
+As with any Deployment, it sits on top of a ReplicaSet and supports rolling
+upgrades. The default configuration of the {{name.ln}} operator
+comes with a Deployment of a single replica, which is suitable for most
+installations. In case the node where the pod is running is not reachable
+anymore, the pod will be rescheduled on another node.
+
+If you require high availability at the operator level, it is possible to
+specify multiple replicas in the Deployment configuration - given that the
+operator supports leader election. Also, you can take advantage of taints and
+tolerations to make sure that the operator does not run on the same nodes where
+the actual PostgreSQL clusters are running (this might even include the control
+plane for self-managed Kubernetes installations).
+
+!!! Seealso "Operator configuration"
+ You can change the default behavior of the operator by overriding
+ some default options. For more information, please refer to the
+ ["Operator configuration"](operator_conf.md) section.
+
+## Upgrades
+
+!!! Warning CRITICAL WARNING: UPGRADING OPERATORS
+
+    OpenShift users, or any customer attempting an operator upgrade, MUST
+    configure the new unified repository pull secret
+    (`docker.enterprisedb.com/k8s`) before running the upgrade. If the old,
+    deprecated repository path is still in use during the upgrade, image pulls
+    will fail, leading to deployment failure and potential downtime. Follow
+    the [Central Migration Guide](migrating_edb_registries) first.
+
+!!! Important
+ Please carefully read the [release notes](rel_notes)
+ before performing an upgrade as some versions might require
+ extra steps.
+
+Upgrading {{name.ln}} operator is a two-step process:
+
+1. upgrade the controller and the related Kubernetes resources
+2. upgrade the instance manager running in every PostgreSQL pod
+
+Unless stated otherwise in the release notes, the first step is normally done
+by applying the manifest of the newer version for plain Kubernetes
+installations, or by using the native package manager of your distribution
+(please follow the instructions in the sections above).
+
+The second step is automatically triggered after updating the controller. By
+default, this initiates a rolling update of every deployed PostgreSQL cluster,
+upgrading one instance at a time to use the new instance manager. The rolling
+update concludes with a switchover, which is governed by the
+`primaryUpdateStrategy` option. The default value, `unsupervised`, completes
+the switchover automatically. If set to `supervised`, the user must manually
+promote the new primary instance using the `cnp` plugin for `kubectl`.
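+
+When running with `supervised`, the promotion can be performed with the `cnp`
+plugin. For example, with hypothetical cluster and instance names:
+
+```shell
+# Promote instance "cluster-example-2" in cluster "cluster-example"
+kubectl cnp promote cluster-example cluster-example-2
+```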
+
+!!! Seealso "Rolling updates"
+ This process is discussed in-depth on the [Rolling Updates](rolling_update.md) page.
+
+!!! Important
+ In case `primaryUpdateStrategy` is set to the default value of `unsupervised`,
+ an upgrade of the operator will trigger a switchover on your PostgreSQL cluster,
+    causing a (normally negligible) downtime. If your PostgreSQL cluster has
+    only one instance, the instance will be restarted automatically, as the
+    `supervised` value is not supported for `primaryUpdateStrategy` in that
+    case. In either case, your applications will have to reconnect to
+    PostgreSQL.
+
+The default rolling update behavior can be replaced with in-place updates of
+the instance manager. This approach does not require a restart of the
+PostgreSQL instance, thereby avoiding a switchover within the cluster. This
+feature, which is disabled by default, is described in detail below.
+
+### Spread Upgrades
+
+By default, all PostgreSQL clusters are rolled out simultaneously, which may
+lead to a spike in resource usage, especially when managing multiple clusters.
+{{name.ln}} provides two configuration options at the [operator level](operator_conf.md)
+that allow you to introduce delays between cluster roll-outs or even between
+instances within the same cluster, helping to distribute resource usage over
+time:
+
+- `CLUSTERS_ROLLOUT_DELAY`: Defines the number of seconds to wait between
+ roll-outs of different PostgreSQL clusters (default: `0`).
+- `INSTANCES_ROLLOUT_DELAY`: Defines the number of seconds to wait between
+ roll-outs of individual instances within the same PostgreSQL cluster (default:
+ `0`).
+
+### In-place updates of the instance manager
+
+By default, {{name.ln}} issues a rolling update of the cluster
+every time the operator is updated. The new instance manager shipped with the
+operator is added to each PostgreSQL pod via an init container.
+
+However, this behavior can be changed via configuration to enable in-place
+updates of the instance manager, which is the PID 1 process that keeps the
+container alive.
+
+Internally, each instance manager in {{name.ln}} supports the injection of a
+new executable that replaces the existing one after successfully completing an
+integrity verification phase and gracefully terminating all internal processes.
+Upon restarting with the new binary, the instance manager seamlessly adopts the
+already running *postmaster*.
+
+As a result, the PostgreSQL process is unaffected by the update, removing the
+need to perform a switchover. The trade-off is that the Pod is modified after
+it starts, breaking the concept of immutability.
+
+You can enable this feature by setting the `ENABLE_INSTANCE_MANAGER_INPLACE_UPDATES`
+environment variable to `'true'` in the
+[operator configuration](operator_conf.md#available-options).
+
+The in-place upgrade process will not change the init container image inside the
+Pods. Therefore, the Pod definition will not reflect the current version of the
+operator.
+
+### Compatibility among versions
+
+{{name.ln}} follows semantic versioning. Every release of the
+operator within the same API version is compatible with the previous one.
+The current API version is v1, corresponding to versions 1.x.y of the operator.
+
+In addition to new features, new versions of the operator contain bug fixes and
+stability enhancements. Because of this, **we strongly encourage users to upgrade
+to the latest version of the operator**, as each version is released in order to
+maintain the most secure and stable Postgres environment.
+
+{{name.ln}} currently releases new versions of the operator at
+least monthly. If you are unable to apply updates as each version becomes
+available, we recommend upgrading through each version in sequential order
+rather than skipping versions.
+
+The [release notes](rel_notes) page contains a detailed list of the
+changes introduced in every released version of {{name.ln}},
+and it must be read before upgrading to a newer version of the software.
+
+Most versions are directly upgradable and in that case, applying the newer
+manifest for plain Kubernetes installations or using the native package
+manager of the chosen distribution is enough.
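+
+For a plain Kubernetes installation, that typically means re-applying the
+manifest of the target release (the version in the URL below is only an
+example; pick the release you are upgrading to):
+
+```shell
+kubectl apply --server-side -f \
+  https://get.enterprisedb.io/cnp/postgresql-operator-1.27.1.yaml
+```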
+
+When versions are not directly upgradable, the old version needs to be
+removed before installing the new one. This won't affect user data but
+only the operator itself.
+
+
+
+### Upgrading to 1.27 from a previous minor version
+
+!!! Important
+ We strongly recommend that all {{name.ln}} users upgrade to version
+ 1.27.0, or at least to the latest stable version of your current minor release
+ (e.g., 1.26.1).
+
+Version 1.27 introduces a change in the default behavior of the
+[liveness probe](instance_manager.md#liveness-probe): it now enforces the
+[shutdown of an isolated primary](instance_manager.md#primary-isolation)
+within the `livenessProbeTimeout` (30 seconds).
+
+If this behavior is not suitable for your environment, you can disable the
+*isolation check* in the liveness probe with the following configuration:
+
+```yaml
+spec:
+ probes:
+ liveness:
+ isolationCheck:
+ enabled: false
+```
+
+### Upgrading to 1.26 from a previous minor version
+
+!!! Warning
+ Due to changes in the startup probe for the manager component
+ ([#6623](https://github.com/EnterpriseDB/cloud-native-postgres/pull/6623)),
+ upgrading the operator will trigger a restart of your PostgreSQL clusters,
+ even if in-place updates are enabled (`ENABLE_INSTANCE_MANAGER_INPLACE_UPDATES=true`).
+ Your applications will need to reconnect to PostgreSQL after the upgrade.
+
+#### Deprecation of backup metrics and fields in the `Cluster` `.status`
+
+With the transition to a backup and recovery agnostic approach based on CNP-I
+plugins in {{name.ln}}, which began with version 1.26.0 for Barman Cloud, we
+are starting the deprecation period for the following fields in the `.status`
+section of the `Cluster` resource:
+
+- `firstRecoverabilityPoint`
+- `firstRecoverabilityPointByMethod`
+- `lastSuccessfulBackup`
+- `lastSuccessfulBackupByMethod`
+- `lastFailedBackup`
+
+The following Prometheus metrics are also deprecated:
+
+- `cnp_collector_first_recoverability_point`
+- `cnp_collector_last_failed_backup_timestamp`
+- `cnp_collector_last_available_backup_timestamp`
+
+!!! Warning
+ If you have migrated to a plugin-based backup and recovery solution such as
+ Barman Cloud, these fields and metrics are no longer synchronized and will
+ not be updated. Users still relying on the in-core support for Barman Cloud
+ and volume snapshots can continue to use these fields for the time being.
+
+Under the new plugin-based approach, multiple backup methods can operate
+simultaneously, each with its own timeline for backup and recovery. For
+example, some plugins may provide snapshots without WAL archiving, while others
+support continuous archiving.
+
+Because of this flexibility, maintaining centralized status fields in the
+`Cluster` resource could be misleading or confusing, as they would not
+accurately represent the state across all configured backup methods.
+For this reason, these fields are being deprecated.
+
+Instead, each plugin is responsible for exposing its own backup status
+information and providing metrics back to the instance manager for monitoring
+and operational awareness.
+
+#### Declarative Hibernation in the `cnp` plugin
+
+In this release, the `cnp` plugin for `kubectl` transitions from an imperative
+to a [declarative approach for cluster hibernation](declarative_hibernation.md).
+The `hibernate on` and `hibernate off` commands are now convenient shortcuts
+that apply declarative changes to enable or disable hibernation.
+The `hibernate status` command has been removed, as its purpose is now
+fulfilled by the standard `status` command.
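+
+For example, with a hypothetical cluster named `cluster-example`:
+
+```shell
+kubectl cnp hibernate on cluster-example    # declaratively enable hibernation
+kubectl cnp hibernate off cluster-example   # resume the cluster
+kubectl cnp status cluster-example          # replaces "hibernate status"
+```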
diff --git a/product_docs/docs/postgres_for_kubernetes/1/instance_manager.mdx b/product_docs/docs/postgres_for_kubernetes/1/instance_manager.mdx
new file mode 100644
index 0000000000..105509650e
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/instance_manager.mdx
@@ -0,0 +1,426 @@
+---
+title: 'Postgres instance manager'
+originalFilePath: 'src/instance_manager.md'
+---
+
+
+
+{{name.ln}} does not rely on an external tool for failover management.
+It relies solely on the Kubernetes API server and a key native component:
+the **Postgres instance manager**.
+
+The instance manager takes care of the entire lifecycle of the PostgreSQL
+server process (also known as `postmaster`).
+
+When you create a new cluster, the operator creates one Pod per instance.
+The `.spec.instances` field specifies how many instances to create.
+
+Each Pod will start the instance manager as the parent process (PID 1) for the
+main container, which in turn runs the PostgreSQL instance. During the lifetime
+of the Pod, the instance manager acts as a backend to handle the
+[startup, liveness and readiness probes](https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes).
+
+## Startup Probe
+
+The startup probe ensures that a PostgreSQL instance, whether a primary or
+standby, has fully started.
+
+!!! Info
+ By default, the startup probe uses
+ [`pg_isready`](https://www.postgresql.org/docs/current/app-pg-isready.html).
+ However, the behavior can be customized by specifying a different startup
+ strategy.
+
+While the startup probe is running, the liveness and readiness probes remain
+disabled. Following Kubernetes standards, if the startup probe fails, the
+kubelet will terminate the container, which will then be restarted.
+
+The `.spec.startDelay` parameter specifies the maximum time, in seconds,
+allowed for the startup probe to succeed.
+
+By default, the `startDelay` is set to `3600` seconds. It is recommended to
+adjust this setting based on the time PostgreSQL needs to fully initialize in
+your specific environment.
+
+!!! Warning
+ Setting `.spec.startDelay` too low can cause the liveness probe to activate
+ prematurely, potentially resulting in unnecessary Pod restarts if PostgreSQL
+ hasn’t fully initialized.
+
+{{name.ln}} configures the startup probe with the following default parameters:
+
+```yaml
+failureThreshold: FAILURE_THRESHOLD
+periodSeconds: 10
+successThreshold: 1
+timeoutSeconds: 5
+```
+
+The `failureThreshold` value is automatically calculated by dividing
+`startDelay` by `periodSeconds`. For example, the default `startDelay` of
+`3600` with a `periodSeconds` of `10` yields a `failureThreshold` of `360`.
+
+You can customize any of the probe settings in the `.spec.probes.startup`
+section of your configuration.
+
+!!! Warning
+ Be sure that any custom probe settings are tailored to your cluster's
+ operational requirements to avoid unintended disruptions.
+
+!!! Info
+ For more details on probe configuration, refer to the
+ [probe API documentation](pg4k.v1.md#postgresql-k8s-enterprisedb-io-v1-ProbeWithStrategy).
+
+If you manually specify `.spec.probes.startup.failureThreshold`, it will
+override the default behavior and disable the automatic use of `startDelay`.
+
+For example, the following configuration explicitly sets custom probe
+parameters, bypassing `startDelay`:
+
+```yaml
+# ... snip
+spec:
+ probes:
+ startup:
+ periodSeconds: 3
+ timeoutSeconds: 3
+ failureThreshold: 10
+```
+
+### Startup Probe Strategy
+
+In certain scenarios, you may need to customize the startup strategy for your
+PostgreSQL cluster. For example, you might delay marking a replica as started
+until it begins streaming from the primary or define a replication lag
+threshold that must be met before considering the replica ready.
+
+To accommodate these requirements, {{name.ln}} extends the
+`.spec.probes.startup` stanza with two optional parameters:
+
+- `type`: specifies the criteria for considering the probe successful. Accepted
+ values, in increasing order of complexity/depth, include:
+
+ - `pg_isready`: marks the probe as successful when the `pg_isready` command
+ exits with `0`. This is the default for primary instances and replicas.
+ - `query`: marks the probe as successful when a basic query is executed on
+ the `postgres` database locally.
+ - `streaming`: marks the probe as successful when the replica begins
+ streaming from its source and meets the specified lag requirements (details
+ below).
+
+- `maximumLag`: defines the maximum acceptable replication lag, measured in bytes
+ (expressed as Kubernetes quantities). This parameter is only applicable when
+ `type` is set to `streaming`. If `maximumLag` is not specified, the replica is
+ considered successfully started as soon as it begins streaming.
+
+!!! Important
+ The `.spec.probes.startup.maximumLag` option is validated and enforced only
+ during the startup phase of the pod, meaning it applies exclusively when the
+ replica is starting.
+
+!!! Warning
+ Incorrect configuration of the `maximumLag` option can cause continuous
+ failures of the startup probe, leading to repeated replica restarts. Ensure
+ you understand how this option works and configure appropriate values for
+ `failureThreshold` and `periodSeconds` to give the replica enough time to
+ catch up with its source.
+
+The following example requires a replica to have a maximum lag of 16Mi from the
+source to be considered started:
+
+```yaml
+#
+probes:
+ startup:
+ type: streaming
+ maximumLag: 16Mi
+```
+
+## Liveness Probe
+
+The liveness probe begins after the startup probe successfully completes. Its
+primary role is to ensure the PostgreSQL instance manager is operating
+correctly.
+
+Following Kubernetes standards, if the liveness probe fails, the kubelet will
+terminate the container, which will then be restarted.
+
+The amount of time before a Pod is classified as not alive is configurable via
+the `.spec.livenessProbeTimeout` parameter.
+
+{{name.ln}} configures the liveness probe with the following default
+parameters:
+
+```yaml
+failureThreshold: FAILURE_THRESHOLD
+periodSeconds: 10
+successThreshold: 1
+timeoutSeconds: 5
+```
+
+The `failureThreshold` value is automatically calculated by dividing
+`livenessProbeTimeout` by `periodSeconds`.
+
+By default, `.spec.livenessProbeTimeout` is set to `30` seconds. This means the
+liveness probe will report a failure if it detects three consecutive probe
+failures, with a 10-second interval between each check.
+
+You can customize any of the probe settings in the `.spec.probes.liveness`
+section of your configuration.
+
+!!! Warning
+ Be sure that any custom probe settings are tailored to your cluster's
+ operational requirements to avoid unintended disruptions.
+
+!!! Info
+ For more details on probe configuration, refer to the
+ [probe API documentation](pg4k.v1.md#postgresql-k8s-enterprisedb-io-v1-Probe).
+
+If you manually specify `.spec.probes.liveness.failureThreshold`, it will
+override the default behavior and disable the automatic use of
+`livenessProbeTimeout`.
+
+For example, the following configuration explicitly sets custom probe
+parameters, bypassing `livenessProbeTimeout`:
+
+```yaml
+# ... snip
+spec:
+ probes:
+ liveness:
+ periodSeconds: 3
+ timeoutSeconds: 3
+ failureThreshold: 10
+```
+
+### Primary Isolation
+
+{{name.ln}} 1.27 introduces an additional behavior for the liveness
+probe of a PostgreSQL primary, which will report a failure if **both** of the
+following conditions are met:
+
+1. The instance manager cannot reach the Kubernetes API server
+2. The instance manager cannot reach **any** other instance via the instance manager’s REST API
+
+The effect of this behavior is to consider an isolated primary to be not alive and subsequently **shut it down** when the liveness probe fails.
+
+It is **enabled by default** and can be disabled by adding the following:
+
+```yaml
+spec:
+ probes:
+ liveness:
+ isolationCheck:
+ enabled: false
+```
+
+!!! Important
+ Be aware that the default liveness probe settings—automatically derived from `livenessProbeTimeout`—might
+ be aggressive (30 seconds). As such, we recommend explicitly setting the
+ liveness probe configuration to suit your environment.
+
+The spec also accepts two optional network settings: `requestTimeout`
+and `connectionTimeout`, both defaulting to `1000` (in milliseconds).
+In cloud environments, you may need to increase these values.
+For example:
+
+```yaml
+spec:
+ probes:
+ liveness:
+ isolationCheck:
+ enabled: true
+ requestTimeout: "2000"
+ connectionTimeout: "2000"
+```
+
+## Readiness Probe
+
+The readiness probe starts once the startup probe has successfully completed.
+Its primary purpose is to check whether the PostgreSQL instance is ready to
+accept traffic and serve requests at any point during the pod's lifecycle.
+
+!!! Info
+ By default, the readiness probe uses
+ [`pg_isready`](https://www.postgresql.org/docs/current/app-pg-isready.html).
+ However, the behavior can be customized by specifying a different readiness
+ strategy.
+
+Following Kubernetes standards, if the readiness probe fails, the pod will be
+marked unready and will not receive traffic from any services. An unready pod
+is also ineligible for promotion during automated failover scenarios.
+
+{{name.ln}} uses the following default configuration for the readiness probe:
+
+```yaml
+failureThreshold: 3
+periodSeconds: 10
+successThreshold: 1
+timeoutSeconds: 5
+```
+
+If the default settings do not suit your requirements, you can fully customize
+the readiness probe by specifying parameters in the `.spec.probes.readiness`
+stanza. For example:
+
+```yaml
+# ... snip
+spec:
+ probes:
+ readiness:
+ periodSeconds: 3
+ timeoutSeconds: 3
+ failureThreshold: 10
+```
+
+!!! Warning
+ Ensure that any custom probe settings are aligned with your cluster’s
+ operational requirements to prevent unintended disruptions.
+
+!!! Info
+ For more information on configuring probes, see the
+ [probe API](pg4k.v1.md#postgresql-k8s-enterprisedb-io-v1-ProbeWithStrategy).
+
+### Readiness Probe Strategy
+
+In certain scenarios, you may need to customize the readiness strategy for your
+cluster. For example, you might delay marking a replica as ready until it
+begins streaming from the primary or define a maximum replication lag threshold
+before considering the replica ready.
+
+To accommodate these requirements, {{name.ln}} extends the
+`.spec.probes.readiness` stanza with two optional parameters: `type` and
+`maximumLag`. Please refer to the [Startup Probe Strategy](#startup-probe-strategy)
+section for detailed information on these options.
+
+!!! Important
+ Unlike the startup probe, the `.spec.probes.readiness.maximumLag` option is
+ continuously monitored. A lagging replica may become unready if this setting is
+ not appropriately tuned.
+
+!!! Warning
+    Incorrect configuration of the `maximumLag` option can lead to repeated
+    readiness probe failures, with serious consequences such as:
+
+    - Exclusion of the replica from key operator features, such as promotion
+      during failover or participation in the synchronous replication quorum.
+    - Disruptions in read/read-only services.
+    - In scenarios with longer failover times, replicas might be declared
+      unready, leading to a cluster stall that requires manual intervention.
+
+    Use the `streaming` and `maximumLag` options with extreme caution. If
+    you're unfamiliar with PostgreSQL replication, rely on the default
+    strategy. Seek professional advice if unsure.
+
+The following example requires a replica to have a maximum lag of 64Mi from the
+source to be considered ready. It also allows approximately 300 seconds (30
+failures × 10 seconds) for the replica to catch up before it is marked unready:
+
+```yaml
+#
+probes:
+ readiness:
+ type: streaming
+ maximumLag: 64Mi
+ failureThreshold: 30
+ periodSeconds: 10
+```
+
+## Shutdown control
+
+When a Pod running Postgres is deleted, either manually or by Kubernetes
+following a node drain operation, the kubelet will send a termination signal to the
+instance manager, and the instance manager will take care of shutting down
+PostgreSQL in an appropriate way.
+The `.spec.smartShutdownTimeout` and `.spec.stopDelay` options, expressed in seconds,
+control the amount of time given to PostgreSQL to shut down. The values default
+to 180 and 1800 seconds, respectively.
+
+The shutdown procedure is composed of two steps:
+
+1. The instance manager first issues a `CHECKPOINT`, then initiates a **smart**
+ shut down, disallowing any new connection to PostgreSQL. This step will last
+ for up to `.spec.smartShutdownTimeout` seconds.
+
+2. If PostgreSQL is still up, the instance manager requests a **fast**
+ shut down, terminating any existing connection and exiting promptly.
+ If the instance is archiving and/or streaming WAL files, the process
+ will wait for up to the remaining time set in `.spec.stopDelay` to complete the
+ operation and then forcibly shut down. Such a timeout needs to be at least 15
+ seconds.
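+
+For instance, to give PostgreSQL one minute of smart shutdown before falling
+back to a fast shutdown, with a five-minute overall budget (illustrative
+values only):
+
+```yaml
+spec:
+  smartShutdownTimeout: 60
+  stopDelay: 300
+```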
+
+!!! Important
+ In order to avoid any data loss in the Postgres cluster, which impacts
+ the database [RPO](before_you_start.md#rpo), don't delete the Pod where
+ the primary instance is running. In this case, perform a switchover to
+ another instance first.
+
+### Shutdown of the primary during a switchover
+
+During a switchover, the shutdown procedure slightly differs from the general
+case. The instance manager of the former primary first issues a `CHECKPOINT`,
+then initiates a **fast** shutdown of PostgreSQL before the designated new
+primary is promoted, ensuring that all data are safely available on the new
+primary.
+
+For this reason, the `.spec.switchoverDelay`, expressed in seconds, controls
+the time given to the former primary to shut down gracefully and archive all
+the WAL files. By default it is set to `3600` (1 hour).
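+
+For example, to cap the former primary's shutdown and WAL-archiving window at
+ten minutes (an illustrative value, not a recommendation):
+
+```yaml
+spec:
+  switchoverDelay: 600
+```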
+
+!!! Warning
+ The `.spec.switchoverDelay` option affects the [RPO](before_you_start.md#rpo)
+ and [RTO](before_you_start.md#rto) of your PostgreSQL database. Setting it to
+    a low value might favor RTO over RPO, but could lead to data loss at the
+    cluster and/or backup level. Conversely, setting it to a high value
+    removes the risk of data loss, but may leave the cluster without an
+    active primary for a longer time during the switchover.
+
+## Failover
+
+In case of primary pod failure, the cluster will go into failover mode.
+Please refer to the ["Failover" section](failover.md) for details.
+
+## Disk Full Failure
+
+Storage exhaustion is a well known issue for PostgreSQL clusters.
+The [PostgreSQL documentation](https://www.postgresql.org/docs/current/diskusage.html#DISK-FULL)
+highlights the possible failure scenarios and the importance of monitoring disk
+usage to prevent it from becoming full.
+
+The same applies to {{name.ln}} and Kubernetes as well: the
+["Monitoring" section](monitoring.md#predefined-set-of-metrics)
+provides details on checking the disk space used by WAL segments and standard
+metrics on disk usage exported to Prometheus.
+
+!!! Important
+ In a production system, it is critical to monitor the database
+ continuously. Exhausted disk storage can lead to a database server shutdown.
+
+!!! Note
+ The detection of exhausted storage relies on a storage class that
+ accurately reports disk size and usage. This may not be the case in simulated
+ Kubernetes environments like Kind or with test storage class implementations
+ such as `csi-driver-host-path`.
+
+If the disk containing the WALs becomes full and no more WAL segments can be
+stored, PostgreSQL will stop working. {{name.ln}} correctly detects this issue
+by verifying that there is enough space to store the next WAL segment,
+and avoids triggering a failover, which could complicate recovery.
+
+This gives a human administrator the opportunity to address the root cause.
+
+In such a case, if supported by the storage class, the quickest course of action
+is currently to:
+
+1. Expand the storage size of the full PVC
+2. Increase the size in the `Cluster` resource to the same value
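+
+As a sketch, assuming a hypothetical cluster named `cluster-example` whose
+storage class supports volume expansion, the two steps could look like:
+
+```shell
+# 1. Expand the PVC of the affected instance
+kubectl patch pvc cluster-example-1 --type merge \
+  -p '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'
+
+# 2. Align the size declared in the Cluster resource
+kubectl patch cluster cluster-example --type merge \
+  -p '{"spec":{"storage":{"size":"20Gi"}}}'
+```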
+
+Once the issue is resolved and there is sufficient free space for WAL segments,
+the Pod will restart and the cluster will become healthy.
+
+See also the ["Volume expansion" section](storage.md#volume-expansion) of the
+documentation.
diff --git a/product_docs/docs/postgres_for_kubernetes/1/iron-bank.mdx b/product_docs/docs/postgres_for_kubernetes/1/iron-bank.mdx
new file mode 100644
index 0000000000..e67d872c1b
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/iron-bank.mdx
@@ -0,0 +1,127 @@
+---
+title: 'Iron Bank'
+originalFilePath: 'src/iron-bank.md'
+---
+
+{{name.ln}} ({{name.abbr}}) is available on [Iron Bank](https://p1.dso.mil/services/iron-bank).
+As you can read in the
+[overview page](https://docs-ironbank.dso.mil/overview/):
+
+> Iron Bank is the DoD's source for hardened containers.
+>
+> [… snipped …]
+>
+> Iron Bank ultimately is for anyone to consume or contribute. However, we specifically target the following personas:
+>
+> - DoD organizations wishing to consume hardened containers and Iron Banks BoE (Body of Evidence) for each container
+> - DoD organizations wishing to help contribute to containers (e.g. bug fixes, new applications, updates)
+> - DoD Authorization Officials wishing to understand the risks associated with applications
+> - Commercial vendors wishing to bring their application to the DoD
+
+Iron Bank is a part of DoD's [Platform One](https://p1.dso.mil/).
+
+You will need your Iron Bank credentials to access the Iron Bank page for
+[{{name.ln}}](https://repo1.dso.mil/dsop/enterprisedb/edb-pg4k-operator).
+
+## Pulling the EDB {{name.abbr}} and operand images from Iron Bank
+
+The images are pulled from the separate [Iron Bank container registry](https://registry1.dso.mil/).
+To be able to pull images from the Iron Bank registry, please follow the
+[instructions from Iron Bank](https://docs-ironbank.dso.mil/tutorials/image-pull/).
+
+Specifically, you will need to use your
+[registry1](https://registry1.dso.mil/harbor/projects)
+credentials to pull images.
+
+To find the desired operator or operand images, we recommend using the search
+tool with the string `enterprisedb`, filtering by `Tags` and looking for
+`stable`. From there, you can get the instructions to pull the image:
+
+
+
+For example, to pull the EPAS 16 operand from Iron Bank, you can run:
+
+```bash
+docker pull registry1.dso.mil/ironbank/enterprisedb/edb-postgres-advanced-16:16
+```
+
+If you want to pick a more specific tag or use a specific SHA, you need to find it from the [Harbor page](https://registry1.dso.mil/harbor/projects/3/repositories/enterprisedb%2Fedb-postgres-advanced-16/artifacts-tab).
+
+## Installing the {{name.abbr}} operator using the Iron Bank image
+
+For installation, you will need a deployment manifest that points to your Iron Bank image.
+You can take the deployment manifest from the [installation instructions for EDB {{name.abbr}}](/postgres_for_kubernetes/latest/installation_upgrade/).
+For example, for the 1.22.0 release, the manifest is available at
+`https://get.enterprisedb.io/cnp/postgresql-operator-1.22.0.yaml`.
+There are a couple of places where you will need to set the image path for the Iron Bank image.
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ labels:
+ app.kubernetes.io/name: cloud-native-postgresql
+ name: postgresql-operator-controller-manager
+ namespace: postgresql-operator-system
+spec:
+ [… snipped …]
+ template:
+ metadata:
+ labels:
+ app.kubernetes.io/name: cloud-native-postgresql
+ spec:
+ containers:
+ - args:
+ - controller
+ [… snipped …]
+ env:
+ - name: PULL_SECRET_NAME
+ value: postgresql-operator-pull-secret
+ - name: OPERATOR_IMAGE_NAME
+ value:
+
+ [… snipped …]
+ image:
+```
+
+If you wish for the operator to be deployed from Iron Bank directly, you will
+need to create and set the pull secret with the credentials to the registry,
+as described above.
+
+It may be easier to get the image from Iron Bank with the instructions on the
+site, and from there, re-tag and publish it to a local registry, or push it
+directly to your Kubernetes nodes.
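+
+For instance (the tag and local registry below are placeholders for your own):
+
+```shell
+docker pull registry1.dso.mil/ironbank/enterprisedb/edb-pg4k-operator:1.22.0
+docker tag registry1.dso.mil/ironbank/enterprisedb/edb-pg4k-operator:1.22.0 \
+  registry.internal.example.com/enterprisedb/edb-pg4k-operator:1.22.0
+docker push registry.internal.example.com/enterprisedb/edb-pg4k-operator:1.22.0
+```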
+
+Once you have this in place, you can apply your manifest normally with
+`kubectl apply -f`, as described in the [installation instructions](/postgres_for_kubernetes/latest/installation_upgrade/).
+
+## Deploying clusters with EPAS operands using Iron Bank images
+
+To deploy a cluster using the EPAS [operand](/postgres_for_kubernetes/latest/private_edb_registries/#operand-images), you must reference the Iron Bank operand image appropriately in the `Cluster` resource YAML.
+For example, to deploy a {{name.abbr}} Cluster using the EPAS 16 operand:
+
+1. Create or edit a `Cluster` resource YAML file with the following content:
+
+ ```yaml
+ apiVersion: postgresql.k8s.enterprisedb.io/v1
+ kind: Cluster
+ metadata:
+ name: cluster-example-full
+ spec:
+ imageName: registry1.dso.mil/ironbank/enterprisedb/edb-postgres-advanced-17:17
+ imagePullSecrets:
+   - name: my-ironbank-secret
+ ```
+
+2. Apply the YAML:
+
+ ```
+ kubectl apply -f
+ ```
+
+3. Verify the status of the resource:
+
+ ```
+ kubectl get clusters
+ ```
diff --git a/product_docs/docs/postgres_for_kubernetes/1/kubectl-plugin.mdx b/product_docs/docs/postgres_for_kubernetes/1/kubectl-plugin.mdx
new file mode 100644
index 0000000000..09044c837f
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/kubectl-plugin.mdx
@@ -0,0 +1,1510 @@
+---
+title: '{{name.ln}} Plugin'
+originalFilePath: 'src/kubectl-plugin.md'
+deepToC: true
+---
+
+{{name.ln}} provides a plugin for `kubectl` to manage a cluster in Kubernetes.
+The plugin also works with `oc` in an OpenShift environment.
+
+## Install
+
+You can install the `cnp` plugin using a variety of methods.
+
+!!! Note
+ For air-gapped systems, installation via package managers, using previously
+ downloaded files, may be a good option.
+
+### Via the installation script
+
+```sh
+curl -sSfL \
+ https://github.com/EnterpriseDB/kubectl-cnp/raw/main/install.sh | \
+ sudo sh -s -- -b /usr/local/bin
+```
+
+### Using the Debian or RedHat packages
+
+In the
+[releases section of the GitHub repository](https://github.com/EnterpriseDB/kubectl-cnp/releases),
+you can navigate to any release of interest (pick the same or a newer release
+than your {{name.ln}} operator), where you will find an **Assets**
+section containing pre-built packages for a variety of systems.
+You can then follow standard practices and instructions to install
+them on your systems.
+
+#### Debian packages
+
+For example, let's install the 1.27.1 release of the plugin for an Intel-based
+64-bit server. First, we download the right `.deb` file.
+
+```sh
+wget https://github.com/EnterpriseDB/kubectl-cnp/releases/download/v1.27.1/kubectl-cnp_1.27.1_linux_x86_64.deb \
+ --output-document kube-plugin.deb
+```
+
+Then, with superuser privileges, install from the local file using `dpkg`:
+
+```console
+$ sudo dpkg -i kube-plugin.deb
+Selecting previously unselected package cnp.
+(Reading database ... 6688 files and directories currently installed.)
+Preparing to unpack kube-plugin.deb ...
+Unpacking kubectl-cnp (1.27.1) ...
+Setting up kubectl-cnp (1.27.1) ...
+```
+
+#### RPM packages
+
+As with the Debian package, let's install the 1.27.1 release of the `.rpm`
+package for an Intel 64-bit machine. Note the `--output` flag used to provide a
+file name.
+
+```sh
+curl -L https://github.com/EnterpriseDB/kubectl-cnp/releases/download/v1.27.1/kubectl-cnp_1.27.1_linux_x86_64.rpm \
+ --output kube-plugin.rpm
+```
+
+Then, with superuser privileges, install with `yum`, and you're ready to use it:
+
+```console
+$ sudo yum --disablerepo=* localinstall kube-plugin.rpm
+Failed to set locale, defaulting to C.UTF-8
+Dependencies resolved.
+====================================================================================================
+ Package Architecture Version Repository Size
+====================================================================================================
+Installing:
+ cnp x86_64 1.27.1-1 @commandline 20 M
+
+Transaction Summary
+====================================================================================================
+Install 1 Package
+
+Total size: 20 M
+Installed size: 78 M
+Is this ok [y/N]: y
+```
+
+### Supported Architectures
+
+{{name.ln}} Plugin is currently built for the following
+operating system and architectures:
+
+* Linux
+ * amd64
+ * arm 5/6/7
+ * arm64
+ * s390x
+ * ppc64le
+* macOS
+ * amd64
+ * arm64
+* Windows
+ * 386
+ * amd64
+ * arm 5/6/7
+ * arm64
+
+### Configuring auto-completion
+
+To configure auto-completion for the plugin, a helper shell script needs to be
+installed into your current PATH. Assuming the latter contains `/usr/local/bin`,
+this can be done with the following commands:
+
+```shell
+cat > kubectl_complete-cnp <<EOF
+#!/usr/bin/env sh
+
+# Call the __complete command passing it all arguments
+kubectl cnp __complete "\$@"
+EOF
+
+chmod +x kubectl_complete-cnp
+
+# And move it to a location in your PATH
+sudo mv kubectl_complete-cnp /usr/local/bin
+```
+
+!!! Note
+ The plugin automatically detects if the standard output channel is connected to a terminal.
+ In such cases, it may add ANSI colors to the command output. To disable colors, use the
+ `--color=never` option with the command.
+
+### Generation of installation manifests
+
+The `cnp` plugin can be used to generate the YAML manifest for the
+installation of the operator. This option would typically be used if you want
+to override some default configurations such as number of replicas,
+installation namespace, namespaces to watch, and so on.
+
+For details and available options, run:
+
+```sh
+kubectl cnp install generate --help
+```
+
+The main options are:
+
+- `-n`: specifies the namespace in which to install the operator (default:
+ `postgresql-operator-system`).
+- `--control-plane`: if set to true, the operator deployment will include a
+ toleration and affinity for `node-role.kubernetes.io/control-plane`.
+- `--replicas`: sets the number of replicas in the deployment.
+- `--watch-namespace`: specifies a comma-separated list of namespaces to watch
+ (default: all namespaces).
+- `--version`: defines the minor version of the operator to be installed, such
+ as `1.23`. If a minor version is specified, the plugin installs the latest
+ patch version of that minor version. If no version is supplied, the plugin
+ installs the latest `MAJOR.MINOR.PATCH` version of the operator.
+
+For example, the following `generate` command produces a YAML manifest that
+installs the operator:
+
+```sh
+kubectl cnp install generate \
+ -n king \
+ --version 1.23 \
+ --replicas 3 \
+ --watch-namespace "albert, bb, freddie" \
+ > operator.yaml
+```
+
+The flags in the above command have the following meaning:
+
+- `-n king`: installs the {{name.abbr}} operator into the `king` namespace
+- `--version 1.23`: installs the latest patch version for minor version 1.23
+- `--replicas 3`: installs the operator with 3 replicas
+- `--watch-namespace "albert, bb, freddie"`: has the operator watch for
+  changes in the `albert`, `bb`, and `freddie` namespaces only
+
+### Status
+
+The `status` command provides an overview of the current status of your
+cluster, including:
+
+* **general information**: name of the cluster, PostgreSQL's system ID, number of
+ instances, current timeline and position in the WAL
+* **backup**: point of recoverability, and WAL archiving status as returned by
+ the `pg_stat_archiver` view from the primary - or designated primary in the
+ case of a replica cluster
+* **streaming replication**: information taken directly from the `pg_stat_replication`
+ view on the primary instance
+* **instances**: information about each Postgres instance, taken directly by each
+ instance manager; in the case of a standby, the `Current LSN` field corresponds
+ to the latest write-ahead log location that has been replayed during recovery
+ (replay LSN).
+
+!!! Important
+    The status information above is taken at different times and from different
+    locations, resulting in slightly inconsistent returned values. For example,
+    the `Current Write LSN` location in the main header might differ
+    from the `Current LSN` field in the instances status, as they are taken at
+    two different times.
+
+```sh
+kubectl cnp status sandbox
+```
+
+```output
+Cluster Summary
+Name: default/sandbox
+System ID: 7423474350493388827
+PostgreSQL Image: docker.enterprisedb.com/k8s/edb-postgres-extended:18.1-standard-ubi9
+Primary instance: sandbox-1
+Primary start time: 2024-10-08 18:31:57 +0000 UTC (uptime 1m14s)
+Status: Cluster in healthy state
+Instances: 3
+Ready instances: 3
+Size: 126M
+Current Write LSN: 0/604DE38 (Timeline: 1 - WAL File: 000000010000000000000006)
+
+Continuous Backup status
+Not configured
+
+Streaming Replication status
+Replication Slots Enabled
+Name Sent LSN Write LSN Flush LSN Replay LSN Write Lag Flush Lag Replay Lag State Sync State Sync Priority Replication Slot
+---- -------- --------- --------- ---------- --------- --------- ---------- ----- ---------- ------------- ----------------
+sandbox-2 0/604DE38 0/604DE38 0/604DE38 0/604DE38 00:00:00 00:00:00 00:00:00 streaming async 0 active
+sandbox-3 0/604DE38 0/604DE38 0/604DE38 0/604DE38 00:00:00 00:00:00 00:00:00 streaming async 0 active
+
+Instances status
+Name Current LSN Replication role Status QoS Manager Version Node
+---- ----------- ---------------- ------ --- --------------- ----
+sandbox-1 0/604DE38 Primary OK BestEffort 1.27.1 k8s-eu-worker
+sandbox-2 0/604DE38 Standby (async) OK BestEffort 1.27.1 k8s-eu-worker2
+sandbox-3 0/604DE38 Standby (async) OK BestEffort 1.27.1 k8s-eu-worker
+```
+
+If you require more detailed status information, use the `--verbose` option (or
+`-v` for short). The level of detail increases each time the flag is repeated:
+
+```sh
+kubectl cnp status sandbox --verbose
+```
+
+```output
+Cluster Summary
+Name: default/sandbox
+System ID: 7423474350493388827
+PostgreSQL Image: docker.enterprisedb.com/k8s/edb-postgres-extended:18.1-standard-ubi8
+Primary instance: sandbox-1
+Primary start time: 2024-10-08 18:31:57 +0000 UTC (uptime 2m4s)
+Status: Cluster in healthy state
+Instances: 3
+Ready instances: 3
+Size: 126M
+Current Write LSN: 0/6053720 (Timeline: 1 - WAL File: 000000010000000000000006)
+
+Continuous Backup status
+Not configured
+
+Physical backups
+No running physical backups found
+
+Streaming Replication status
+Replication Slots Enabled
+Name Sent LSN Write LSN Flush LSN Replay LSN Write Lag Flush Lag Replay Lag State Sync State Sync Priority Replication Slot Slot Restart LSN Slot WAL Status Slot Safe WAL Size
+---- -------- --------- --------- ---------- --------- --------- ---------- ----- ---------- ------------- ---------------- ---------------- --------------- ------------------
+sandbox-2 0/6053720 0/6053720 0/6053720 0/6053720 00:00:00 00:00:00 00:00:00 streaming async 0 active 0/6053720 reserved NULL
+sandbox-3 0/6053720 0/6053720 0/6053720 0/6053720 00:00:00 00:00:00 00:00:00 streaming async 0 active 0/6053720 reserved NULL
+
+Unmanaged Replication Slot Status
+No unmanaged replication slots found
+
+Managed roles status
+No roles managed
+
+Tablespaces status
+No managed tablespaces
+
+Pod Disruption Budgets status
+Name Role Expected Pods Current Healthy Minimum Desired Healthy Disruptions Allowed
+---- ---- ------------- --------------- ----------------------- -------------------
+sandbox replica 2 2 1 1
+sandbox-primary primary 1 1 1 0
+
+Instances status
+Name Current LSN Replication role Status QoS Manager Version Node
+---- ----------- ---------------- ------ --- --------------- ----
+sandbox-1 0/6053720 Primary OK BestEffort 1.27.1 k8s-eu-worker
+sandbox-2 0/6053720 Standby (async) OK BestEffort 1.27.1 k8s-eu-worker2
+sandbox-3 0/6053720 Standby (async) OK BestEffort 1.27.1 k8s-eu-worker
+```
+
+With an additional `-v` (e.g. `kubectl cnp status sandbox -v -v`), you can
+also view PostgreSQL configuration, HBA settings, and certificates.
+
+The command also supports output in `yaml` and `json` format.
+
+### Promote
+
+The `promote` command promotes a pod in the cluster to primary, so you
+can start maintenance work or test a switchover scenario in your cluster:
+
+```sh
+kubectl cnp promote cluster-example cluster-example-2
+```
+
+Or you can use the instance number to promote:
+
+```sh
+kubectl cnp promote cluster-example 2
+```
+
+### Certificates
+
+Clusters created using the {{name.ln}} operator work with a CA to sign
+a TLS authentication certificate.
+
+To get a certificate, you need to provide a name for the secret that will store
+the credentials, the cluster name, and a user for this certificate:
+
+```sh
+kubectl cnp certificate cluster-cert --cnp-cluster cluster-example --cnp-user appuser
+```
+
+After the secret is created, you can retrieve it using `kubectl`:
+
+```sh
+kubectl get secret cluster-cert
+```
+
+And view its content in plain text using the following command:
+
+```sh
+kubectl get secret cluster-cert -o json | jq -r '.data | map(@base64d) | .[]'
+```
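+
+For illustration, the decoding step the `jq` pipeline above performs is plain
+base64: every value in a Secret's `data` map is base64-encoded, and `@base64d`
+decodes each one. A minimal local sketch, using a sample string rather than a
+real certificate:
+
+```shell
+# Each value in a Kubernetes Secret's data map is base64-encoded.
+# Decode a sample value locally (sample string, not a real certificate):
+encoded="aGVsbG8tY2VydA=="
+printf '%s' "$encoded" | base64 -d   # prints: hello-cert
+```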
+
+### Restart
+
+The `kubectl cnp restart` command can be used in two cases:
+
+- requesting the operator to orchestrate a rollout restart
+  for a certain cluster. This is useful to apply
+  configuration changes to cluster-dependent objects, such as ConfigMaps
+  containing custom monitoring queries.
+
+- requesting a single instance restart, either in-place if the instance is
+  the cluster's primary, or by deleting and recreating the pod if
+  it is a replica.
+
+```sh
+# this command will restart a whole cluster in a rollout fashion
+kubectl cnp restart [clusterName]
+
+# this command will restart a single instance, according to the policy above
+kubectl cnp restart [clusterName] [pod]
+```
+
+If the in-place restart is requested but the change cannot be applied without
+a switchover, the switchover will take precedence over the in-place restart. A
+common case for this is a minor upgrade of the PostgreSQL image.
+
+!!! Note
+    If you want ConfigMaps and Secrets to be **automatically** reloaded
+    by instances, you can add a label with the key `k8s.enterprisedb.io/reload`
+    to them.
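+
+As a sketch, a ConfigMap carrying that label might look like the following.
+The ConfigMap name and data key are hypothetical; per the note above, what
+matters is the presence of the `k8s.enterprisedb.io/reload` label key (shown
+here with an empty value):
+
+```yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: custom-monitoring-queries   # hypothetical name
+  labels:
+    k8s.enterprisedb.io/reload: ""  # presence of this key enables auto-reload
+data:
+  custom-queries: |
+    # your monitoring queries here
+```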
+
+### Reload
+
+The `kubectl cnp reload` command requests the operator to trigger a reconciliation
+loop for a certain cluster. This is useful to apply configuration changes
+to cluster-dependent objects, such as ConfigMaps containing custom monitoring queries.
+
+The following command will reload all configurations for a given cluster:
+
+```sh
+kubectl cnp reload [cluster_name]
+```
+
+### Maintenance
+
+The `kubectl cnp maintenance` command helps you modify one or more clusters
+across namespaces, setting the maintenance window values. It changes
+the following fields:
+
+* `.spec.nodeMaintenanceWindow.inProgress`
+* `.spec.nodeMaintenanceWindow.reusePVC`
+
+It accepts `set` and `unset` as arguments: `set` sets `inProgress` to `true`,
+while `unset` sets it to `false`.
+
+By default, `reusePVC` is always set to `false` unless the `--reusePVC` flag is passed.
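+
+As a sketch, after a `kubectl cnp maintenance set`, each matched `Cluster`
+would carry a spec fragment like the following, with the field values behaving
+as described above:
+
+```yaml
+spec:
+  nodeMaintenanceWindow:
+    inProgress: true   # set to true by `set`, back to false by `unset`
+    reusePVC: false    # stays false unless the --reusePVC flag is passed
+```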
+
+The plugin will ask for confirmation, showing the list of clusters to modify
+and their new values; if you accept, the change will be applied to all the
+clusters in the list.
+
+To put all the PostgreSQL clusters in your Kubernetes cluster into maintenance
+mode, just run the following command:
+
+```sh
+kubectl cnp maintenance set --all-namespaces
+```
+
+And you'll get the list of all the clusters to update:
+
+```output
+The following are the new values for the clusters
+Namespace Cluster Name Maintenance reusePVC
+--------- ------------ ----------- --------
+default cluster-example true false
+default pg-backup true false
+test cluster-example true false
+Do you want to proceed? [y/n]: y
+```
+
+### Report
+
+The `kubectl cnp report` command bundles various pieces
+of information into a ZIP file.
+It aims to provide the needed context to debug problems
+with clusters in production.
+
+It has two sub-commands: `operator` and `cluster`.
+
+#### report Operator
+
+The `operator` sub-command requests the operator to provide information
+regarding the operator deployment, configuration and events.
+
+!!! Important
+    All confidential information in Secrets and ConfigMaps is REDACTED.
+    The Data map will show the **keys**, but the values will be empty.
+    The `-S` / `--stopRedaction` flag disables the redaction and shows the
+    values. Use it at your own risk: this will expose private data.
+
+!!! Note
+    By default, operator logs are not collected, but you can enable operator
+    log collection with the `--logs` flag.
+
+* **deployment information**: the operator Deployment and operator Pod
+* **configuration**: the Secrets and ConfigMaps in the operator namespace
+* **events**: the Events in the operator namespace
+* **webhook configuration**: the mutating and validating webhook configurations
+* **webhook service**: the webhook service
+* **logs**: logs for the operator Pod (optional, off by default) in JSON-lines format
+
+The command will generate a ZIP file containing various manifests, in YAML
+format by default (settable to JSON with the `-o` flag).
+Use the `-f` flag to name a result file explicitly. If the `-f` flag is not used, a
+default time-stamped filename is created for the zip file.
+
+!!! Note
+    The report plugin obeys `kubectl` conventions, and will look for objects constrained
+    by namespace. The {{name.abbr}} Operator will generally not be installed in the same
+    namespace as the clusters.
+    E.g. the default installation namespace is `postgresql-operator-system`.
+
+```sh
+kubectl cnp report operator -n [namespace]
+```
+
+results in
+
+```output
+Successfully written report to "report_operator_.zip" (format: "yaml")
+```
+
+With the `-f` flag set:
+
+```sh
+kubectl cnp report operator -n [namespace] -f reportRedacted.zip
+```
+
+Unzipping the file will produce a time-stamped top-level folder to keep the
+directory tidy:
+
+```sh
+unzip reportRedacted.zip
+```
+
+will result in:
+
+```output
+Archive: reportRedacted.zip
+ creating: report_operator_/
+ creating: report_operator_/manifests/
+ inflating: report_operator_/manifests/deployment.yaml
+ inflating: report_operator_/manifests/operator-pod.yaml
+ inflating: report_operator_/manifests/events.yaml
+ inflating: report_operator_/manifests/validating-webhook-configuration.yaml
+ inflating: report_operator_/manifests/mutating-webhook-configuration.yaml
+ inflating: report_operator_/manifests/webhook-service.yaml
+ inflating: report_operator_/manifests/postgresql-operator-ca-secret.yaml
+ inflating: report_operator_/manifests/postgresql-operator-webhook-cert.yaml
+```
+
+If you activated the `--logs` option, you'd see an extra subdirectory:
+
+```output
+Archive: report_operator_.zip
+
+ creating: report_operator_/operator-logs/
+ inflating: report_operator_/operator-logs/postgresql-operator-controller-manager-66fb98dbc5-pxkmh-logs.jsonl
+```
+
+!!! Note
+ The plugin will try to get the PREVIOUS operator's logs, which is helpful
+ when investigating restarted operators.
+ In all cases, it will also try to get the CURRENT operator logs. If current
+ and previous logs are available, it will show them both.
+
+```output
+====== Begin of Previous Log =====
+2023-03-28T12:56:41.251711811Z {"level":"info","ts":"2023-03-28T12:56:41Z","logger":"setup","msg":"Starting EDB Postgres for Kubernetes Operator","version":"1.27.1","build":{"Version":"1.27.1+dev107","Commit":"cc9bab17","Date":"2023-03-28"}}
+2023-03-28T12:56:41.251851909Z {"level":"info","ts":"2023-03-28T12:56:41Z","logger":"setup","msg":"Starting pprof HTTP server","addr":"0.0.0.0:6060"}
+
+
+====== End of Previous Log =====
+2023-03-28T12:57:09.854306024Z {"level":"info","ts":"2023-03-28T12:57:09Z","logger":"setup","msg":"Starting EDB Postgres for Kubernetes Operator","version":"1.27.1","build":{"Version":"1.27.1+dev107","Commit":"cc9bab17","Date":"2023-03-28"}}
+2023-03-28T12:57:09.854363943Z {"level":"info","ts":"2023-03-28T12:57:09Z","logger":"setup","msg":"Starting pprof HTTP server","addr":"0.0.0.0:6060"}
+```
+
+If the operator hasn't been restarted, you'll still see the `====== Begin …`
+and `====== End …` guards, with no content inside.
+
+You can verify that the confidential information is REDACTED by default:
+
+```sh
+cd report_operator_/manifests/
+head postgresql-operator-ca-secret.yaml
+```
+
+```yaml
+data:
+ ca.crt: ""
+ ca.key: ""
+metadata:
+ creationTimestamp: "2022-03-22T10:42:28Z"
+ managedFields:
+ - apiVersion: v1
+ fieldsType: FieldsV1
+ fieldsV1:
+```
+
+With the `-S` (`--stopRedaction`) option activated, secrets are shown:
+
+```sh
+kubectl cnp report operator -n [namespace] -f reportNonRedacted.zip -S
+```
+
+You'll get a reminder that you're about to view confidential information:
+
+```output
+WARNING: secret Redaction is OFF. Use it with caution
+Successfully written report to "reportNonRedacted.zip" (format: "yaml")
+```
+
+```sh
+unzip reportNonRedacted.zip
+head postgresql-operator-ca-secret.yaml
+```
+
+```yaml
+data:
+ ca.crt: LS0tLS1CRUdJTiBD…
+ ca.key: LS0tLS1CRUdJTiBF…
+metadata:
+ creationTimestamp: "2022-03-22T10:42:28Z"
+ managedFields:
+ - apiVersion: v1
+ fieldsType: FieldsV1
+```
+
+#### report Cluster
+
+The `cluster` sub-command gathers the following:
+
+* **cluster resources**: the cluster information, same as `kubectl get cluster -o yaml`
+* **cluster pods**: pods in the cluster namespace matching the cluster name
+* **cluster jobs**: jobs, if any, in the cluster namespace matching the cluster name
+* **events**: events in the cluster namespace
+* **pod logs**: logs for the cluster Pods (optional, off by default) in JSON-lines format
+* **job logs**: logs for the Pods created by jobs (optional, off by default) in JSON-lines format
+
+The `cluster` sub-command accepts the `-f` and `-o` flags, as the `operator` sub-command does.
+If the `-f` flag is not used, a default timestamped report name will be used.
+Note that the cluster information does not contain configuration Secrets / ConfigMaps,
+so the `-S` flag is disabled.
+
+!!! Note
+    By default, cluster logs are not collected, but you can enable cluster
+    log collection with the `--logs` flag.
+
+Usage:
+
+```sh
+kubectl cnp report cluster [flags]
+```
+
+Note that, unlike the `operator` sub-command, for the `cluster` sub-command you
+need to provide the cluster name, and very likely the namespace, unless the cluster
+is in the default one.
+
+```sh
+kubectl cnp report cluster example -f report.zip -n example-namespace
+```
+
+and then:
+
+```sh
+unzip report.zip
+```
+
+```output
+Archive: report.zip
+ creating: report_cluster_example_/
+ creating: report_cluster_example_/manifests/
+ inflating: report_cluster_example_/manifests/cluster.yaml
+ inflating: report_cluster_example_/manifests/cluster-pods.yaml
+ inflating: report_cluster_example_/manifests/cluster-jobs.yaml
+ inflating: report_cluster_example_/manifests/events.yaml
+```
+
+Remember that you can use the `--logs` flag to add the pod and job logs to the ZIP.
+
+```sh
+kubectl cnp report cluster example -n example-namespace --logs
+```
+
+will result in:
+
+```output
+Successfully written report to "report_cluster_example_.zip" (format: "yaml")
+```
+
+```sh
+unzip report_cluster_.zip
+```
+
+```output
+Archive: report_cluster_example_.zip
+ creating: report_cluster_example_/
+ creating: report_cluster_example_/manifests/
+ inflating: report_cluster_example_/manifests/cluster.yaml
+ inflating: report_cluster_example_/manifests/cluster-pods.yaml
+ inflating: report_cluster_example_/manifests/cluster-jobs.yaml
+ inflating: report_cluster_example_/manifests/events.yaml
+ creating: report_cluster_example_/logs/
+ inflating: report_cluster_example_/logs/cluster-example-full-1.jsonl
+ creating: report_cluster_example_/job-logs/
+ inflating: report_cluster_example_/job-logs/cluster-example-full-1-initdb-qnnvw.jsonl
+ inflating: report_cluster_example_/job-logs/cluster-example-full-2-join-tvj8r.jsonl
+```
+
+##### OpenShift support
+
+The `report operator` sub-command will automatically detect whether the cluster
+is running on OpenShift; if so, it will gather the Cluster Service Version and
+the Install Plan, and add them to the zip under the `openshift`
+sub-folder.
+
+!!! Note
+    The namespace becomes very important on OpenShift. The default namespace
+    for OpenShift in CNP is `openshift-operators`. Many (most) clients will use
+    a different namespace for the CNP operator.
+
+```sh
+kubectl cnp report operator -n openshift-operators
+```
+
+results in
+
+```output
+Successfully written report to "report_operator_.zip" (format: "yaml")
+```
+
+You can find the OpenShift-related files in the `openshift` sub-folder:
+
+```sh
+unzip report_operator_.zip
+cd report_operator_/
+cd openshift
+head clusterserviceversions.yaml
+```
+
+```yaml
+apiVersion: operators.coreos.com/v1alpha1
+items:
+- apiVersion: operators.coreos.com/v1alpha1
+ kind: ClusterServiceVersion
+ metadata:
+ annotations:
+ alm-examples: |-
+ [
+ {
+ "apiVersion": "postgresql.k8s.enterprisedb.io/v1",
+```
+
+### Logs
+
+The `kubectl cnp logs` command allows you to follow the logs of a collection
+of pods related to {{name.ln}} in a single go.
+
+It currently has two sub-commands: `cluster` and `pretty`.
+
+#### Cluster logs
+
+The `cluster` sub-command gathers all the pod logs for a cluster in a single
+stream or file.
+This means that you can get all the pod logs in a single terminal window, with a
+single invocation of the command.
+
+As in all the cnp plugin sub-commands, you can get instructions and help with
+the `-h` flag:
+
+`kubectl cnp logs cluster -h`
+
+The `logs` command will display logs in JSON-lines format, unless the
+`--timestamps` flag is used, in which case a human-readable timestamp will be
+prepended to each line. Those lines will no longer be valid JSON,
+and tools such as `jq` may not work as desired.
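+
+Since JSON-lines output is one JSON object per line, ordinary line-oriented
+tools can filter it. A minimal sketch with sample lines (illustrative, not
+real operator output):
+
+```shell
+# Filter error-level entries out of a JSON-lines stream with grep.
+# In a real pipeline, the printf would be replaced by
+# `kubectl cnp logs cluster <cluster>`:
+printf '%s\n' \
+  '{"level":"info","msg":"startup complete"}' \
+  '{"level":"error","msg":"connection refused"}' |
+  grep '"level":"error"'
+```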
+
+If the `logs cluster` sub-command is given the `-f` flag (aka `--follow`), it
+will follow the cluster pod logs, and will also watch for any new pods created
+in the cluster after the command has been invoked.
+Any new pods found, including pods that have been restarted or re-created,
+will also have their logs followed.
+The logs will be displayed in the terminal's standard-out.
+This command will only exit when the cluster has no more pods left, or when it
+is interrupted by the user.
+
+If `logs` is called without the `-f` option, it will read the logs from all
+cluster pods until the time of invocation and display them in the terminal's
+standard-out, then exit.
+The `-o` or `--output` flag can be provided, to specify the name
+of the file where the logs should be saved, instead of displaying over
+standard-out.
+The `--tail` flag can be used to specify how many log lines will be retrieved
+from each pod in the cluster. By default, the `logs cluster` sub-command will
+display all the logs from each pod in the cluster. If combined with the "follow"
+flag `-f`, the number of logs specified by `--tail` will be retrieved until the
+current time, and from then the new logs will be followed.
+
+!!! Note
+    Unlike other `cnp` plugin commands, the `-f` here is used to denote "follow"
+    rather than specify a file. This keeps with the convention of `kubectl logs`,
+    which takes `-f` to mean the logs should be followed.
+
+Usage:
+
+```sh
+kubectl cnp logs cluster [flags]
+```
+
+Using the `-f` option to follow:
+
+```sh
+kubectl cnp logs cluster cluster-example -f
+```
+
+Using `--tail` option to display 3 lines from each pod and the `-f` option
+to follow:
+
+```sh
+kubectl cnp logs cluster cluster-example -f --tail 3
+```
+
+```output
+{"level":"info","ts":"2023-06-30T13:37:33Z","logger":"postgres","msg":"2023-06-30 13:37:33.142 UTC [26] LOG: ending log output to stderr","source":"/controller/log/postgres","logging_pod":"cluster-example-3"}
+{"level":"info","ts":"2023-06-30T13:37:33Z","logger":"postgres","msg":"2023-06-30 13:37:33.142 UTC [26] HINT: Future log output will go to log destination \"csvlog\".","source":"/controller/log/postgres","logging_pod":"cluster-example-3"}
+…
+…
+```
+
+Using the `--output` (`-o`) option to save the logs to a file:
+
+```console
+$ kubectl cnp logs cluster cluster-example --output my-cluster.log
+
+Successfully written logs to "my-cluster.log"
+```
+
+#### Pretty
+
+The `pretty` sub-command reads a log stream from standard input, formats it
+into a human-readable output, and attempts to sort the entries by timestamp.
+
+It can be used in combination with `kubectl cnp logs cluster`, as
+shown in the following example:
+
+```console
+$ kubectl cnp logs cluster cluster-example | kubectl cnp logs pretty
+2024-10-15T17:35:00.336 INFO cluster-example-1 instance-manager Starting {{name.ln}} Instance Manager
+2024-10-15T17:35:00.336 INFO cluster-example-1 instance-manager Checking for free disk space for WALs before starting PostgreSQL
+2024-10-15T17:35:00.347 INFO cluster-example-1 instance-manager starting tablespace manager
+2024-10-15T17:35:00.347 INFO cluster-example-1 instance-manager starting external server manager
+[...]
+```
+
+Alternatively, it can be used in combination with other commands that produce
+{{name.abbr}} logs in JSON format, such as `stern`, or `kubectl logs`, as in the
+following example:
+
+```console
+$ kubectl logs cluster-example-1 | kubectl cnp logs pretty
+2024-10-15T17:35:00.336 INFO cluster-example-1 instance-manager Starting {{name.ln}} Instance Manager
+2024-10-15T17:35:00.336 INFO cluster-example-1 instance-manager Checking for free disk space for WALs before starting PostgreSQL
+2024-10-15T17:35:00.347 INFO cluster-example-1 instance-manager starting tablespace manager
+2024-10-15T17:35:00.347 INFO cluster-example-1 instance-manager starting external server manager
+[...]
+```
+
+The `pretty` sub-command also supports advanced log filtering, allowing users
+to display logs for specific pods or loggers, or to filter logs by severity
+level.
+Here's an example:
+
+```console
+$ kubectl cnp logs cluster cluster-example | kubectl cnp logs pretty --pods cluster-example-1 --loggers postgres --log-level info
+2024-10-15T17:35:00.509 INFO cluster-example-1 postgres 2024-10-15 17:35:00.509 UTC [29] LOG: redirecting log output to logging collector process
+2024-10-15T17:35:00.509 INFO cluster-example-1 postgres 2024-10-15 17:35:00.509 UTC [29] HINT: Future log output will appear in directory "/controller/log"...
+2024-10-15T17:35:00.510 INFO cluster-example-1 postgres 2024-10-15 17:35:00.509 UTC [29] LOG: ending log output to stderr
+2024-10-15T17:35:00.510 INFO cluster-example-1 postgres ending log output to stderr
+[...]
+```
+
+The `pretty` sub-command will try to sort the log stream,
+to make logs easier to reason about. In order to achieve this, it gathers the
+logs into groups, and within groups it sorts by timestamp. This is the only
+way to sort interactively, as `pretty` may be piped from a command in "follow"
+mode. The sub-command will add a group separator line, `---`, at the end of
+each sorted group. The size of the grouping can be configured via the
+`--sorting-group-size` flag (default: 1000), as illustrated in the following example:
+
+```console
+$ kubectl cnp logs cluster cluster-example | kubectl cnp logs pretty --sorting-group-size=3
+2024-10-15T17:35:20.426 INFO cluster-example-2 instance-manager Starting {{name.ln}} Instance Manager
+2024-10-15T17:35:20.426 INFO cluster-example-2 instance-manager Checking for free disk space for WALs before starting PostgreSQL
+2024-10-15T17:35:20.438 INFO cluster-example-2 instance-manager starting tablespace manager
+---
+2024-10-15T17:35:20.438 INFO cluster-example-2 instance-manager starting external server manager
+2024-10-15T17:35:20.438 INFO cluster-example-2 instance-manager starting controller-runtime manager
+2024-10-15T17:35:20.439 INFO cluster-example-2 instance-manager Starting EventSource
+---
+[...]
+```
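+
+The ordering within each group can be sketched with standard tools: entries
+are simply ordered by their leading timestamp field, and ISO-8601 timestamps
+sort correctly as plain strings. A minimal local illustration on sample
+out-of-order lines (not real `pretty` internals):
+
+```shell
+# Sort one batch of log lines by the leading ISO-8601 timestamp field:
+printf '%s\n' \
+  '2024-10-15T17:35:20.438 starting tablespace manager' \
+  '2024-10-15T17:35:20.426 Starting Instance Manager' \
+  '2024-10-15T17:35:20.430 Checking free disk space' |
+  sort -k1,1
+```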
+
+To explore all available options, use the `-h` flag for detailed explanations
+of the supported flags and their usage.
+
+!!! Info
+ You can also increase the verbosity of the log by adding more `-v` options.
+
+### Destroy
+
+The `kubectl cnp destroy` command helps remove an instance and all the
+associated PVCs from a Kubernetes cluster.
+
+The optional `--keep-pvc` flag, if specified, allows you to keep the PVCs,
+while removing all `metadata.ownerReferences` that were set by the instance.
+Additionally, the `k8s.enterprisedb.io/pvcStatus` label on the PVCs will change from
+`ready` to `detached` to signify that they are no longer in use.
+
+Running the command again without the `--keep-pvc` flag will remove the
+detached PVCs.
+
+Usage:
+
+```sh
+kubectl cnp destroy [CLUSTER_NAME] [INSTANCE_ID]
+```
+
+The following example removes the `cluster-example-2` pod and the associated
+PVCs:
+
+```sh
+kubectl cnp destroy cluster-example 2
+```
+
+### Cluster hibernation
+
+Sometimes you may want to suspend the execution of a {{name.ln}} `Cluster`
+while retaining its data, then resume its activity at a later time. We've
+called this feature **cluster hibernation**.
+
+Hibernation is only available via the `kubectl cnp hibernate [on|off]`
+commands.
+
+Hibernating a {{name.ln}} cluster means destroying all the resources
+generated by the cluster, except the PVCs that belong to the PostgreSQL primary
+instance.
+
+You can hibernate a cluster with:
+
+```sh
+kubectl cnp hibernate on [cluster_name]
+```
+
+This will:
+
+1. shut down every PostgreSQL instance
+2. detach the PVCs containing the data of the primary instance, and annotate
+ them with the latest database status and the latest cluster configuration
+3. delete the `Cluster` resource, including every generated resource - except
+ the aforementioned PVCs
+
+When hibernated, a {{name.ln}} cluster is represented by just a group of
+PVCs, in which the one containing the `PGDATA` is annotated with the latest
+available status, including content from `pg_controldata`.
+
+!!! Warning
+ A cluster with fenced instances cannot be hibernated, as fencing is
+ itself part of the hibernation procedure.
+
+If an error occurs, the operator will not be able to revert the procedure. You
+can still force the operation with:
+
+```sh
+kubectl cnp hibernate on cluster-example --force
+```
+
+A hibernated cluster can be resumed with:
+
+```sh
+kubectl cnp hibernate off <cluster_name>
+```
+
+Once the cluster has been hibernated, it's possible to show the last
+configuration and the status that PostgreSQL had after it was shut down.
+That can be done with:
+
+```sh
+kubectl cnp status <cluster_name>
+```
+
+### Benchmarking the database with pgbench
+
+Pgbench can be run against an existing PostgreSQL cluster with the following
+command:
+
+```sh
+kubectl cnp pgbench <cluster_name> -- --time 30 --client 1 --jobs 1
+```
+
+Refer to the [Benchmarking pgbench section](benchmarking.md#pgbench) for more
+details.
+
+### Benchmarking the storage with fio
+
+fio can be run on an existing storage class with the following command:
+
+```sh
+kubectl cnp fio <fio_job_name> -n <namespace>
+```
+
+Refer to the [Benchmarking fio section](benchmarking.md#fio) for more details.
+
+### Requesting a new physical backup
+
+The `kubectl cnp backup` command requests a new physical backup for
+an existing Postgres cluster by creating a new `Backup` resource.
+
+!!! Info
+ From release 1.21, the `backup` command accepts a new flag, `-m`,
+ to specify the backup method.
+ To request a backup using volume snapshots, set `-m volumeSnapshot`.
+
+The following example requests an on-demand backup for a given cluster:
+
+```sh
+kubectl cnp backup [cluster_name]
+```
+
+or, if using volume snapshots:
+
+```sh
+kubectl cnp backup [cluster_name] -m volumeSnapshot
+```
+
+The created backup will be named after the request time:
+
+```console
+$ kubectl cnp backup cluster-example
+backup/cluster-example-20230121002300 created
+```
+
+By default, a newly created backup will use the backup target policy defined
+in the cluster to choose which instance to run on.
+However, you can override this policy with the `--backup-target` option.
+
+In the case of volume snapshot backups, you can also use the `--online` option
+to request an online/hot backup or an offline/cold one. Additionally, you can
+tune online backups by explicitly setting the `--immediate-checkpoint` and
+`--wait-for-archive` options.
+
+The ["Backup" section](./backup.md) contains more information about
+the configuration settings.
+
+### Launching psql
+
+The `kubectl cnp psql` command starts a new PostgreSQL interactive front-end
+process (psql) connected to an existing Postgres cluster, as if you were running
+it from the actual pod. This means that you will be using the `postgres` user.
+
+!!! Important
+ As you will be connecting as the `postgres` user, in production environments
+ this method should be used with extreme care, by authorized personnel only.
+
+```console
+$ kubectl cnp psql cluster-example
+
+psql (17.0 (Debian 17.0-1.pgdg110+1))
+Type "help" for help.
+
+postgres=#
+```
+
+By default, the command connects to the primary instance. You can choose to
+work against a replica instead by using the `--replica` option:
+
+```console
+$ kubectl cnp psql --replica cluster-example
+
+psql (17.0 (Debian 17.0-1.pgdg110+1))
+
+Type "help" for help.
+
+postgres=# select pg_is_in_recovery();
+ pg_is_in_recovery
+-------------------
+ t
+(1 row)
+
+postgres=# \q
+```
+
+This command starts `kubectl exec`; the `kubectl` executable must be
+available in your `PATH` for it to work correctly.
+
+!!!Note
+When connecting to instances running on OpenShift, you must explicitly
+pass a username to the `psql` command, because of a [security measure built into
+OpenShift](https://cloud.redhat.com/blog/a-guide-to-openshift-and-uids):
+
+```sh
+kubectl cnp psql cluster-example -- -U postgres
+```
+!!!
+
+### Snapshotting a Postgres cluster
+
+!!! Warning
+ The `kubectl cnp snapshot` command has been removed.
+ Please use the [`backup` command](#requesting-a-new-physical-backup) to request
+ backups using volume snapshots.
+
+### Using pgAdmin4 for evaluation/demonstration purposes only
+
+[pgAdmin](https://www.pgadmin.org/) stands as the most popular and feature-rich
+open-source administration and development platform for PostgreSQL.
+For more information on the project, please refer to the official
+[documentation](https://www.pgadmin.org/docs/).
+
+Given that the pgAdmin Development Team maintains official Docker container
+images, you can install pgAdmin in your environment as a standard
+Kubernetes deployment.
+
+!!! Important
+ Deployment of pgAdmin in Kubernetes production environments is beyond the
+ scope of this document and, more broadly, of the {{name.ln}} project.
+
+However, **for the purposes of demonstration and evaluation**, {{name.ln}}
+offers a suitable solution. The `cnp` plugin implements the `pgadmin4`
+command, providing a straightforward method to connect to a given database
+`Cluster` and navigate its content in a local environment such as `kind`.
+
+For example, you can install a demo deployment of pgAdmin4 for the
+`cluster-example` cluster as follows:
+
+```sh
+kubectl cnp pgadmin4 cluster-example
+```
+
+This command will produce:
+
+```output
+ConfigMap/cluster-example-pgadmin4 created
+Deployment/cluster-example-pgadmin4 created
+Service/cluster-example-pgadmin4 created
+Secret/cluster-example-pgadmin4 created
+
+[...]
+```
+
+After deploying pgAdmin, forward the port using kubectl and connect
+through your browser by following the on-screen instructions.
+
+
+
+As usual, you can use the `--dry-run` option to generate the YAML file:
+
+```sh
+kubectl cnp pgadmin4 --dry-run cluster-example
+```
+
+pgAdmin4 can be installed in either desktop or server mode, with the default
+being server.
+
+In `server` mode, authentication is required using a randomly generated password,
+and users must manually specify the database to connect to.
+
+On the other hand, `desktop` mode initiates a pgAdmin web interface without
+requiring authentication. It automatically connects to the `app` database as the
+`app` user, making it ideal for quick demos, such as on a local deployment using
+`kind`:
+
+```sh
+kubectl cnp pgadmin4 --mode desktop cluster-example
+```
+
+After concluding your demo, remove the pgAdmin deployment by
+executing:
+
+```sh
+kubectl cnp pgadmin4 --dry-run cluster-example | kubectl delete -f -
+```
+
+!!! Warning
+ Never deploy pgAdmin in production using the plugin.
+
+### Logical Replication Publications
+
+The `cnp publication` command group is designed to streamline the creation and
+removal of [PostgreSQL logical replication publications](https://www.postgresql.org/docs/current/logical-replication-publication.html).
+Be aware that these commands are primarily intended for assisting in the
+creation of logical replication publications, particularly on remote PostgreSQL
+databases.
+
+!!! Warning
+ It is crucial to have a solid understanding of both the capabilities and
+ limitations of PostgreSQL's native logical replication system before using
+ these commands.
+ In particular, be mindful of the [logical replication restrictions](https://www.postgresql.org/docs/current/logical-replication-restrictions.html).
+
+#### Creating a new publication
+
+To create a logical replication publication, use the `cnp publication create`
+command. The basic structure of this command is as follows:
+
+```sh
+kubectl cnp publication create <local_cluster> \
+  --publication <publication_name> \
+  [--external-cluster <external_cluster_name>]
+  [options]
+```
+
+There are two primary use cases:
+
+- With `--external-cluster`: Use this option to create a publication on an
+  external cluster (i.e. one defined in the `externalClusters` stanza). The
+  commands will be issued from the `<local_cluster>`, but the publication will
+  be for the data in `<external_cluster_name>`.
+
+- Without `--external-cluster`: Use this option to create a publication in the
+  `<local_cluster>` PostgreSQL `Cluster` (by default, the `app` database).
+
+!!! Warning
+ When connecting to an external cluster, ensure that the specified user has
+ sufficient permissions to execute the `CREATE PUBLICATION` command.
+
+You have several options, similar to the [`CREATE PUBLICATION`](https://www.postgresql.org/docs/current/sql-createpublication.html)
+command, to define the group of tables to replicate. Notable options include:
+
+- If you specify the `--all-tables` option, you create a publication `FOR ALL TABLES`.
+- Alternatively, you can specify multiple occurrences of:
+ - `--table`: Add a specific table (with an expression) to the publication.
+ - `--schema`: Include all tables in the specified database schema (available
+ from PostgreSQL 15).
+
+The `--dry-run` option enables you to preview the SQL commands that the plugin
+will execute.
+
+For additional information and detailed instructions, type the following
+command:
+
+```sh
+kubectl cnp publication create --help
+```
+
+##### Example
+
+Given a `source-cluster` and a `destination-cluster`, we would like to create a
+publication for the data on `source-cluster`.
+The `destination-cluster` has an entry in the `externalClusters` stanza pointing
+to `source-cluster`.
+
+We can run:
+
+```sh
+kubectl cnp publication create destination-cluster \
+ --external-cluster=source-cluster --all-tables
+```
+
+which will create a publication for all tables on `source-cluster`, running
+the SQL commands on the `destination-cluster`.
+
+Or instead, we can run:
+
+```sh
+kubectl cnp publication create source-cluster \
+ --publication=app --all-tables
+```
+
+which will create a publication named `app` for all the tables in the
+`source-cluster`, running the SQL commands on the source cluster.
+
+!!! Info
+ There are two sample files that have been provided for illustration and inspiration:
+ [logical-source](../samples/cluster-example-logical-source.yaml) and
+ [logical-destination](../samples/cluster-example-logical-destination.yaml).
+
+#### Dropping a publication
+
+The `cnp publication drop` command seamlessly complements the `create` command
+by offering similar key options, including the publication name, cluster name,
+and an optional external cluster. You can drop a `PUBLICATION` with the
+following command structure:
+
+```sh
+kubectl cnp publication drop <local_cluster> \
+  --publication <publication_name> \
+  [--external-cluster <external_cluster_name>]
+  [options]
+```
+
+To access further details and precise instructions, use the following command:
+
+```sh
+kubectl cnp publication drop --help
+```
+
+### Logical Replication Subscriptions
+
+The `cnp subscription` command group is a dedicated set of commands designed
+to simplify the creation and removal of
+[PostgreSQL logical replication subscriptions](https://www.postgresql.org/docs/current/logical-replication-subscription.html).
+These commands are specifically crafted to aid in the establishment of logical
+replication subscriptions, especially when dealing with remote PostgreSQL
+databases.
+
+!!! Warning
+ Before using these commands, it is essential to have a comprehensive
+ understanding of both the capabilities and limitations of PostgreSQL's
+ native logical replication system.
+ In particular, be mindful of the [logical replication restrictions](https://www.postgresql.org/docs/current/logical-replication-restrictions.html).
+
+In addition to subscription management, we provide a helpful command for
+synchronizing all sequences from the source cluster. While its applicability
+may vary, this command can be particularly useful in scenarios involving major
+upgrades or data import from remote servers.
+
+#### Creating a new subscription
+
+To create a logical replication subscription, use the `cnp subscription create`
+command. The basic structure of this command is as follows:
+
+```sh
+kubectl cnp subscription create <local_cluster> \
+  --subscription <subscription_name> \
+  --publication <publication_name> \
+  --external-cluster <external_cluster_name> \
+  [options]
+```
+
+This command configures a subscription directed towards the specified
+publication in the designated external cluster, as defined in the
+`externalClusters` stanza of the `<local_cluster>`.
+
+For additional information and detailed instructions, type the following
+command:
+
+```sh
+kubectl cnp subscription create --help
+```
+
+##### Example
+
+As in the section on publications, we have a `source-cluster` and a
+`destination-cluster`, and we have already created a publication called
+`app`.
+
+The following command:
+
+```sh
+kubectl cnp subscription create destination-cluster \
+ --external-cluster=source-cluster \
+ --publication=app --subscription=app
+```
+
+will create a subscription for `app` on the destination cluster.
+
+!!! Warning
+ Prioritize testing subscriptions in a non-production environment to ensure
+ their effectiveness and identify any potential issues before implementing them
+ in a production setting.
+
+!!! Info
+ There are two sample files that have been provided for illustration and inspiration:
+ [logical-source](../samples/cluster-example-logical-source.yaml) and
+ [logical-destination](../samples/cluster-example-logical-destination.yaml).
+
+#### Dropping a subscription
+
+The `cnp subscription drop` command seamlessly complements the `create` command.
+You can drop a `SUBSCRIPTION` with the following command structure:
+
+```sh
+kubectl cnp subscription drop <local_cluster> \
+  --subscription <subscription_name> \
+  [options]
+```
+
+To access further details and precise instructions, use the following command:
+
+```sh
+kubectl cnp subscription drop --help
+```
+
+#### Synchronizing sequences
+
+One notable constraint of PostgreSQL logical replication, implemented through
+publications and subscriptions, is the lack of sequence synchronization. This
+becomes particularly relevant when utilizing logical replication for live
+database migration, especially to a higher version of PostgreSQL. A crucial
+step in this process involves updating sequences before transitioning
+applications to the new database (*cutover*).
+
+To address this limitation, the `cnp subscription sync-sequences` command
+offers a solution. This command establishes a connection with the source
+database, retrieves all relevant sequences, and subsequently updates local
+sequences with matching identities (based on database schema and sequence
+name).
+
+You can use the command as shown below:
+
+```sh
+kubectl cnp subscription sync-sequences <local_cluster> \
+  --subscription <subscription_name>
+```
+
+For comprehensive details and specific instructions, utilize the following
+command:
+
+```sh
+kubectl cnp subscription sync-sequences --help
+```
+
+##### Example
+
+As in the previous sections for publication and subscription, we have
+a `source-cluster` and a `destination-cluster`. The publication and the
+subscription, both called `app`, are already present.
+
+The following command will synchronize the sequences involved in the
+`app` subscription, from the source cluster into the destination cluster.
+
+```sh
+kubectl cnp subscription sync-sequences destination-cluster \
+ --subscription=app
+```
+
+!!! Warning
+ Prioritize testing subscriptions in a non-production environment to
+ guarantee their effectiveness and detect any potential issues before deploying
+ them in a production setting.
+
+## Integration with K9s
+
+The `cnp` plugin can be easily integrated in [K9s](https://k9scli.io/), a
+popular terminal-based UI to interact with Kubernetes clusters.
+
+See [`k9s/plugins.yml`](../samples/k9s/plugins.yml) for details.
+
+## Permissions required by the plugin
+
+The plugin requires a set of Kubernetes permissions that depends on the command
+to execute. These permissions may affect resources and sub-resources like Pods,
+PDBs, PVCs, and enable actions like `get`, `delete`, `patch`. The following
+table contains the full details:
+
+| Command | Resource Permissions |
+| :-------------- | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
+| backup | clusters: get<br/>backups: create |
+| certificate | clusters: get<br/>secrets: get,create |
+| destroy | pods: get,delete<br/>jobs: delete,list<br/>PVCs: list,delete,update |
+| fencing | clusters: get,patch<br/>pods: get |
+| fio | PVCs: create<br/>configmaps: create<br/>deployment: create |
+| hibernate | clusters: get,patch,delete<br/>pods: list,get,delete<br/>pods/exec: create<br/>jobs: list<br/>PVCs: get,list,update,patch,delete |
+| install | none |
+| logs | clusters: get<br/>pods: list<br/>pods/log: get |
+| maintenance | clusters: get,patch,list |
+| pgadmin4 | clusters: get<br/>configmaps: create<br/>deployments: create<br/>services: create<br/>secrets: create |
+| pgbench | clusters: get<br/>jobs: create |
+| promote | clusters: get<br/>clusters/status: patch<br/>pods: get |
+| psql | pods: get,list<br/>pods/exec: create |
+| publication | clusters: get<br/>pods: get,list<br/>pods/exec: create |
+| reload | clusters: get,patch |
+| report cluster | clusters: get<br/>pods: list<br/>pods/log: get<br/>jobs: list<br/>events: list<br/>PVCs: list |
+| report operator | configmaps: get<br/>deployments: get<br/>events: list<br/>pods: list<br/>pods/log: get<br/>secrets: get<br/>services: get<br/>mutatingwebhookconfigurations: list[^1]<br/>validatingwebhookconfigurations: list[^1]<br/>If OLM is present on the K8s cluster, also:<br/>clusterserviceversions: list<br/>installplans: list<br/>subscriptions: list |
+| restart | clusters: get,patch<br/>pods: get,delete |
+| status | clusters: get<br/>pods: list<br/>pods/exec: create<br/>pods/proxy: create<br/>PDBs: list |
+| subscription | clusters: get<br/>pods: get,list<br/>pods/exec: create |
+| version | none |
+
+[^1]: These permissions are cluster scoped and require a `ClusterRole`.
+
+///Footnotes Go Here///
+
+Additionally, granting the `list` permission on the `clusters` resource
+enables autocompletion for multiple commands.
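+
+As a sketch, a `Role` granting only this autocompletion capability could look
+like the following (the role name is illustrative):
+
+```yaml
+apiVersion: rbac.authorization.k8s.io/v1
+kind: Role
+metadata:
+  name: cnp-autocomplete
+rules:
+  - verbs:
+    - list
+    apiGroups:
+    - postgresql.k8s.enterprisedb.io
+    resources:
+    - clusters
+```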
+
+### Role examples
+
+It is possible to create roles with restricted permissions.
+The following example creates a role that only has access to the cluster logs:
+
+```yaml
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: Role
+metadata:
+ name: cnp-log
+rules:
+ - verbs:
+ - get
+ apiGroups:
+ - postgresql.k8s.enterprisedb.io
+ resources:
+ - clusters
+ - verbs:
+ - list
+ apiGroups:
+ - ''
+ resources:
+ - pods
+ - verbs:
+ - get
+ apiGroups:
+ - ''
+ resources:
+ - pods/log
+```
+
+The next example shows a role with the minimal permissions required to get
+the cluster status using the plugin's `status` command:
+
+```yaml
+apiVersion: rbac.authorization.k8s.io/v1
+kind: Role
+metadata:
+ name: cnp-status
+rules:
+ - verbs:
+ - get
+ apiGroups:
+ - postgresql.k8s.enterprisedb.io
+ resources:
+ - clusters
+ - verbs:
+ - list
+ apiGroups:
+ - ''
+ resources:
+ - pods
+ - verbs:
+ - create
+ apiGroups:
+ - ''
+ resources:
+ - pods/exec
+ - verbs:
+ - create
+ apiGroups:
+ - ''
+ resources:
+ - pods/proxy
+ - verbs:
+ - list
+ apiGroups:
+ - policy
+ resources:
+ - poddisruptionbudgets
+```
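+
+Note that a `Role` takes effect only once it is bound to a subject. As a
+sketch, the following `RoleBinding` grants the `cnp-status` role above to a
+hypothetical user `alice`:
+
+```yaml
+apiVersion: rbac.authorization.k8s.io/v1
+kind: RoleBinding
+metadata:
+  name: cnp-status-alice
+roleRef:
+  apiGroup: rbac.authorization.k8s.io
+  kind: Role
+  name: cnp-status
+subjects:
+  - apiGroup: rbac.authorization.k8s.io
+    kind: User
+    name: alice
+```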
+
+!!! Important
+ Keeping the verbs restricted per `resources` and per `apiGroups` helps to
+ prevent inadvertently granting more than intended permissions.
diff --git a/product_docs/docs/postgres_for_kubernetes/1/kubernetes_upgrade.mdx b/product_docs/docs/postgres_for_kubernetes/1/kubernetes_upgrade.mdx
new file mode 100644
index 0000000000..297818dc7d
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/kubernetes_upgrade.mdx
@@ -0,0 +1,191 @@
+---
+title: 'Kubernetes Upgrade and Maintenance'
+originalFilePath: 'src/kubernetes_upgrade.md'
+---
+
+
+
+Maintaining an up-to-date Kubernetes cluster is crucial for ensuring optimal
+performance and security, particularly for self-managed clusters, especially
+those running on bare metal infrastructure. Regular updates help address
+technical debt and mitigate business risks, despite the controlled downtimes
+associated with temporarily removing a node from the cluster for maintenance
+purposes. For further insights on embracing risk in operations, refer to the
+["Embracing Risk"](https://landing.google.com/sre/sre-book/chapters/embracing-risk/)
+chapter from the Site Reliability Engineering book.
+
+## Importance of Regular Updates
+
+Updating Kubernetes involves planning and executing maintenance tasks, such as
+applying security updates to underlying Linux servers, replacing malfunctioning
+hardware components, or upgrading the cluster to the latest Kubernetes version.
+These activities are essential for maintaining a robust and secure
+infrastructure.
+
+## Maintenance Operations in a Cluster
+
+Typically, maintenance operations are carried out on one node at a time, following a [structured process](https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/):
+
+1. eviction of workloads (`drain`): workloads are gracefully moved away from
+ the node to be updated, ensuring a smooth transition.
+2. performing the operation: the actual maintenance operation, such as a
+ system update or hardware replacement, is executed.
+3. rejoining the node to the cluster (`uncordon`): the updated node is
+ reintegrated into the cluster, ready to resume its responsibilities.
+
+This process requires either stopping workloads for the entire upgrade duration
+or migrating them to other nodes in the cluster.
+
+## Temporary PostgreSQL Cluster Degradation
+
+While the standard approach ensures service reliability and leverages
+Kubernetes' self-healing capabilities, there are scenarios where operating with
+a temporarily degraded cluster may be acceptable. This is particularly relevant
+for PostgreSQL clusters relying on **node-local storage**, where the storage is
+local to the Kubernetes worker node running the PostgreSQL database. Node-local
+storage, or simply *local storage*, is employed to enhance performance.
+
+!!! Note
+ If your database files reside on shared storage accessible over the
+ network, the default self-healing behavior of the operator can efficiently
+ handle scenarios where volumes are reused by pods on different nodes after a
+ drain operation. In such cases, you can skip the remaining sections of this
+ document.
+
+## Pod Disruption Budgets
+
+By default, {{name.ln}} safeguards Postgres cluster operations. If a node is
+to be drained and contains a cluster's primary instance, a switchover happens
+ahead of the drain. Once the instance in the node is downgraded to replica, the
+draining can resume.
+For single-instance clusters, a switchover is not possible, so {{name.ln}}
+will prevent draining the node where the instance is housed.
+Additionally, in clusters with 3 or more instances, {{name.ln}} guarantees that
+only one replica at a time is gracefully shut down during a drain operation.
+
+Each PostgreSQL `Cluster` is equipped with two associated `PodDisruptionBudget`
+resources - you can easily confirm it with the `kubectl get pdb` command.
+
+Our recommendation is to leave pod disruption budgets enabled for every
+production Postgres cluster. This can be effortlessly managed by toggling the
+`.spec.enablePDB` option, as detailed in the
+[API reference](pg4k.v1.md#postgresql-k8s-enterprisedb-io-v1-ClusterSpec).
+
+## PostgreSQL Clusters used for Development or Testing
+
+For PostgreSQL clusters used for development purposes, often consisting of
+a single instance, it is essential to disable pod disruption budgets. Failure
+to do so will prevent the node hosting that cluster from being drained.
+
+The following example illustrates how to disable pod disruption budgets for a
+1-instance development cluster:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+ name: dev
+spec:
+ instances: 1
+ enablePDB: false
+
+ storage:
+ size: 1Gi
+```
+
+This configuration ensures smoother maintenance procedures without restrictions
+on draining the node during development activities.
+
+## Node Maintenance Window
+
+!!! Important
+ While {{name.ln}} will continue supporting the node maintenance window,
+ it is currently recommended to transition to direct control of pod disruption
+ budgets, as explained in the previous section. This section is retained
+ mainly for backward compatibility.
+
+Prior to release 1.23, {{name.ln}} had just one declarative mechanism to manage
+Kubernetes upgrades when dealing with local storage: you had to temporarily put
+the cluster in **maintenance mode** through the `nodeMaintenanceWindow` option
+to prevent standard self-healing procedures from kicking in while, for example,
+enlarging the partition on the physical node or updating the node itself.
+
+!!! Warning
+ Limit the duration of the maintenance window to the shortest
+ amount of time possible. In this phase, some of the expected
+ behaviors of Kubernetes are either disabled or running with
+ some limitations, including self-healing, rolling updates,
+ and Pod disruption budget.
+
+The `nodeMaintenanceWindow` option of the cluster has two further
+settings:
+
+`inProgress`:
+Boolean value that states whether the maintenance window for the nodes
+is currently in progress. By default, it is set to `off`.
+During the maintenance window, the `reusePVC` option below is
+evaluated by the operator.
+
+`reusePVC`:
+Boolean value that defines whether an existing PVC is reused
+during the maintenance operation. By default, it is set to `on`.
+When **enabled**, Kubernetes waits for the node to come up
+again and then reuses the existing PVC; the `PodDisruptionBudget`
+policy is temporarily removed.
+When **disabled**, Kubernetes forces the recreation of the
+Pod on a different node with a new PVC by relying on
+PostgreSQL's physical streaming replication, then destroys
+the old PVC together with the Pod. This scenario is generally
+not recommended unless the database is small and re-cloning
+the new PostgreSQL instance takes less time than waiting for the
+node to come back. This behavior does **not** apply to clusters with
+only one instance and `reusePVC` disabled: see the section below.
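+
+As a declarative sketch (the cluster name and storage size are illustrative),
+a maintenance window that reuses the existing PVCs can be requested as
+follows:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+  name: cluster-example
+spec:
+  instances: 3
+  nodeMaintenanceWindow:
+    inProgress: true
+    reusePVC: true
+
+  storage:
+    size: 1Gi
+```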
+
+!!! Note
+ When performing the `kubectl drain` command, you will need
+ to add the `--delete-emptydir-data` option.
+ Don't be afraid: it refers to another volume internally used
+ by the operator - not the PostgreSQL data directory.
+
+!!! Important
+ `PodDisruptionBudget` management can be disabled by setting the
+ `.spec.enablePDB` field to `false`. In that case, the operator won't
+ create `PodDisruptionBudgets` and will delete them if they were
+ previously created.
+
+### Single instance clusters with `reusePVC` set to `false`
+
+!!! Important
+ We recommend always creating clusters with more than one
+ instance in order to guarantee high availability.
+
+Deleting the only PostgreSQL instance in a single-instance cluster with
+`reusePVC` set to `false` would imply that all data is lost. Therefore,
+{{name.ln}} prevents users from draining nodes that might be running such
+instances, even in maintenance mode.
+
+However, in case maintenance is required for such a node, you have two options:
+
+1. Enable `reusePVC`, accepting the downtime
+2. Replicate the instance on a different node and switch over the primary
+
+If database service downtime is acceptable for your environment,
+draining the node is as simple as setting the `nodeMaintenanceWindow` to
+`inProgress: true` and `reusePVC: true`. This will allow the instance to
+be deleted and recreated as soon as the original PVC is available
+(e.g. with node-local storage, as soon as the node is back up).
+
+Otherwise, you will have to scale up the cluster, creating a new instance
+on a different node and promoting it to primary, in order to shut down the
+original instance on the node undergoing maintenance. The only downtime in
+this case is the duration of the switchover.
+
+A possible approach could be:
+
+1. Cordon the node on which the current instance is running.
+2. Scale up the cluster to 2 instances; this could take some time depending
+   on the database size.
+3. As soon as the new instance is running, the operator will automatically
+   perform a switchover, given that the current primary is running on a
+   cordoned node.
+4. Scale the cluster back down to a single instance; this will delete the old
+   instance.
+5. The old primary's node can now be drained successfully, while leaving the
+   new primary running on a new node.
diff --git a/product_docs/docs/postgres_for_kubernetes/1/labels_annotations.mdx b/product_docs/docs/postgres_for_kubernetes/1/labels_annotations.mdx
new file mode 100644
index 0000000000..67c6dd00ee
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/labels_annotations.mdx
@@ -0,0 +1,313 @@
+---
+title: 'Labels and annotations'
+originalFilePath: 'src/labels_annotations.md'
+---
+
+
+
+Resources in Kubernetes are organized in a flat structure, with no hierarchical
+information or relationship between them. However, such resources and objects
+can be linked together and put in relationship through *labels* and
+*annotations*.
+
+!!! info
+ For more information, see the Kubernetes documentation on
+ [annotations](https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/) and
+ [labels](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/).
+
+In brief:
+
+- An annotation is used to assign additional non-identifying information to
+ resources with the goal of facilitating integration with external tools.
+- A label is used to group objects and query them through the Kubernetes native
+ selector capability.
+
+You can select one or more labels or annotations to use
+in your {{name.ln}} deployments. Then you need to configure the operator
+so that when you define these labels or annotations in a cluster's metadata,
+they're inherited by all resources created by it (including pods).
+
+!!! Note
+ Label and annotation inheritance is the technique adopted by {{name.ln}}
+ instead of alternative approaches such as pod templates.
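+
+For example, assuming the operator has been configured to inherit them,
+user-defined labels and annotations set in the cluster's metadata, like the
+illustrative `environment` label and `categories` annotation below, are
+propagated to all the resources the cluster generates:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+  name: cluster-example
+  labels:
+    environment: demo
+  annotations:
+    categories: database
+spec:
+  instances: 3
+
+  storage:
+    size: 1Gi
+```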
+
+## Predefined labels
+
+{{name.ln}} manages the following predefined labels:
+
+`k8s.enterprisedb.io/backupDate`
+: The date of the backup in ISO 8601 format (`YYYYMMDD`).
+ This label is available only on `VolumeSnapshot` resources.
+
+`k8s.enterprisedb.io/backupName`
+: Backup identifier.
+ This label is available only on `VolumeSnapshot` resources.
+
+`k8s.enterprisedb.io/backupMonth`
+: The year/month when a backup was taken.
+ This label is available only on `VolumeSnapshot` resources.
+
+`k8s.enterprisedb.io/backupTimeline`
+: The timeline of the instance when a backup was taken.
+ This label is available only on `VolumeSnapshot` resources.
+
+`k8s.enterprisedb.io/backupYear`
+: The year a backup was taken.
+ This label is available only on `VolumeSnapshot` resources.
+
+`k8s.enterprisedb.io/cluster`
+: Name of the cluster.
+
+`k8s.enterprisedb.io/immediateBackup`
+: Applied to a `Backup` resource if the backup is the first one created from
+ a `ScheduledBackup` object having `immediate` set to `true`.
+
+`k8s.enterprisedb.io/instanceName`
+: Name of the PostgreSQL instance (replaces the old and
+ deprecated `postgresql` label).
+
+`k8s.enterprisedb.io/jobRole`
+: Role of the job (that is, `import`, `initdb`, `join`, ...)
+
+`k8s.enterprisedb.io/majorVersion`
+: Integer PostgreSQL major version of the backup's data directory (for example, `17`).
+This label is available only on `VolumeSnapshot` resources.
+
+`k8s.enterprisedb.io/onlineBackup`
+: Whether the backup is online (hot) or taken when Postgres is down (cold).
+ This label is available only on `VolumeSnapshot` resources.
+
+`postgresql` - **deprecated**
+: Name of the PostgreSQL instance. Use `k8s.enterprisedb.io/instanceName`
+  instead.
+
+`k8s.enterprisedb.io/podRole`
+: Distinguishes pods dedicated to pooler deployment from those used for
+ database instances.
+
+`k8s.enterprisedb.io/poolerName`
+: Name of the PgBouncer pooler.
+
+`k8s.enterprisedb.io/pvcRole`
+: Purpose of the PVC, such as `PG_DATA` or `PG_WAL`.
+
+`k8s.enterprisedb.io/reload`
+: Available on `ConfigMap` and `Secret` resources. When set to `true`,
+ a change in the resource is automatically reloaded by the operator.
+
+`k8s.enterprisedb.io/userType`
+: Specifies the type of PostgreSQL user associated with the
+ `Secret`, either `superuser` (Postgres superuser access) or `app`
+ (application-level user in {{name.ln}} terminology), and is limited to the
+ default users created by {{name.ln}} (typically `postgres` and `app`).
+
+`role` - **deprecated**
+: Whether the instance running in a pod is a `primary` or a `replica`.
+  This label is deprecated; use `k8s.enterprisedb.io/instanceRole` instead.
+
+`k8s.enterprisedb.io/scheduled-backup`
+: When available, name of the `ScheduledBackup` resource that created a given
+ `Backup` object.
+
+`k8s.enterprisedb.io/instanceRole`
+: Whether the instance running in a pod is a `primary` or a `replica`.
+
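+These predefined labels work with standard Kubernetes selectors. As an
+illustration only (the cluster and service names below are hypothetical, and
+{{name.ln}} already provides an `-rw` service for the primary), a custom
+`Service` could target the current primary like this:
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+  name: cluster-example-primary-custom
+spec:
+  selector:
+    k8s.enterprisedb.io/cluster: cluster-example
+    k8s.enterprisedb.io/instanceRole: primary
+  ports:
+    - port: 5432
+      targetPort: 5432
+```
+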
+## Predefined annotations
+
+{{name.ln}} manages the following predefined annotations:
+
+`container.apparmor.security.beta.kubernetes.io/*`
+: Name of the AppArmor profile to apply to the named container.
+ See [AppArmor](security.md#restricting-pod-access-using-apparmor)
+ for details.
+
+`k8s.enterprisedb.io/backupEndTime`
+: The time a backup ended.
+ This annotation is available only on `VolumeSnapshot` resources.
+
+`k8s.enterprisedb.io/backupEndWAL`
+: The WAL at the conclusion of a backup.
+ This annotation is available only on `VolumeSnapshot` resources.
+
+`k8s.enterprisedb.io/backupStartTime`
+: The time a backup started.
+
+`k8s.enterprisedb.io/backupStartWAL`
+: The WAL at the start of a backup.
+ This annotation is available only on `VolumeSnapshot` resources.
+
+`k8s.enterprisedb.io/coredumpFilter`
+: Filter to control the coredump of Postgres processes, expressed with a
+ bitmask. By default it's set to `0x31` to exclude shared memory
+ segments from the dump. See [PostgreSQL core dumps](troubleshooting.md#postgresql-core-dumps)
+ for more information.
+
+`k8s.enterprisedb.io/clusterManifest`
+: Manifest of the `Cluster` owning this resource (such as a PVC). This
+  annotation replaces the old, deprecated
+  `k8s.enterprisedb.io/hibernateClusterManifest` annotation.
+
+`k8s.enterprisedb.io/fencedInstances`
+: List of the instances that need to be fenced, expressed in JSON format.
+ The whole cluster is fenced if the list contains the `*` element.
+
+`k8s.enterprisedb.io/forceLegacyBackup`
+: Applied to a `Cluster` resource for testing purposes only, to
+ simulate the behavior of `barman-cloud-backup` prior to version 3.4 (Jan 2023)
+ when the `--name` option wasn't available.
+
+`k8s.enterprisedb.io/hash`
+: The hash value of the resource.
+
+`k8s.enterprisedb.io/hibernation`
+: Applied to a `Cluster` resource to control the [declarative hibernation feature](declarative_hibernation.md).
+ Allowed values are `on` and `off`.
+
+`k8s.enterprisedb.io/managedSecrets`
+: Pull secrets managed by the operator and automatically set in the
+ `ServiceAccount` resources for each Postgres cluster.
+
+`k8s.enterprisedb.io/nodeSerial`
+: On a pod resource, identifies the serial number of the instance within the
+ Postgres cluster.
+
+`k8s.enterprisedb.io/operatorVersion`
+: Version of the operator.
+
+`k8s.enterprisedb.io/pgControldata`
+: Output of the `pg_controldata` command. This annotation replaces the old,
+ deprecated `k8s.enterprisedb.io/hibernatePgControlData` annotation.
+
+`k8s.enterprisedb.io/podEnvHash`
+: Deprecated, as the `k8s.enterprisedb.io/podSpec` annotation now also contains the pod environment.
+
+`k8s.enterprisedb.io/podPatch`
+: Can be applied to a `Cluster` resource. When set to a JSON-patch formatted
+  patch, the patch is applied to the instance pods.
+
+  **⚠️ WARNING:** This feature may introduce discrepancies between the
+  operator's expectations and Kubernetes behavior. Use it with caution and
+  only as a last resort.
+
+  **IMPORTANT:** Adding or changing this annotation doesn't trigger a rolling
+  deployment of the generated pods. You can trigger one manually with
+  `kubectl cnp restart`.
+
+`k8s.enterprisedb.io/podSpec`
+: Snapshot of the `spec` of the pod generated by the operator. This annotation replaces
+ the old, deprecated `k8s.enterprisedb.io/podEnvHash` annotation.
+
+`k8s.enterprisedb.io/poolerSpecHash`
+: Hash of the pooler resource.
+
+`k8s.enterprisedb.io/pvcStatus`
+: Current status of the PVC: `initializing`, `ready`, or `detached`.
+
+`k8s.enterprisedb.io/reconcilePodSpec`
+: Can be applied to a `Cluster` or `Pooler` resource to prevent restarts.
+
+  When set to `disabled` on a `Cluster`, the operator prevents instances
+  from restarting due to changes in the pod spec. This includes changes to:
+
+  - Topology or affinity
+  - Scheduler
+  - Volumes or containers
+
+  When set to `disabled` on a `Pooler`, the operator restricts any
+  modifications to the deployment specification, except for changes to
+  `spec.instances`.
+
+`k8s.enterprisedb.io/reconciliationLoop`
+: When set to `disabled` on a `Cluster`, the operator prevents the
+ reconciliation loop from running.
+
+`k8s.enterprisedb.io/reloadedAt`
+: Contains the latest cluster `reload` time. `reload` is triggered by the user through a plugin.
+
+`k8s.enterprisedb.io/skipEmptyWalArchiveCheck`
+: When set to `enabled` on a `Cluster` resource, the operator disables the check
+ that ensures that the WAL archive is empty before writing data. Use at your own
+ risk.
+
+`k8s.enterprisedb.io/skipWalArchiving`
+: When set to `enabled` on a `Cluster` resource, the operator disables WAL archiving.
+ This will set `archive_mode` to `off` and require a restart of all PostgreSQL
+ instances. Use at your own risk.
+
+`k8s.enterprisedb.io/snapshotStartTime`
+: The time a snapshot started.
+
+`k8s.enterprisedb.io/snapshotEndTime`
+: The time a snapshot was marked as ready to use.
+
+`k8s.enterprisedb.io/validation`
+: When set to `disabled` on a {{name.ln}}-managed custom resource, the
+  validation webhook allows all changes without restriction.
+
+  **⚠️ WARNING:** Disabling validation may permit unsafe or destructive
+  operations. Use this setting with caution and at your own risk.
+
+`k8s.enterprisedb.io/volumeSnapshotDeadline`
+: Applied to `Backup` and `ScheduledBackup` resources, allows you to control
+  how long the operator retries recoverable errors before considering the
+  volume snapshot backup failed. Expressed in minutes; defaults to 10.
+
+`kubectl.kubernetes.io/restartedAt`
+: When available, the time of last requested restart of a Postgres cluster.
+
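+Several of these annotations are set directly in the `Cluster` metadata. The
+following sketch, with a hypothetical cluster name, fences one instance and
+pauses the reconciliation loop:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+  name: cluster-example
+  annotations:
+    # Fence a single instance; use '["*"]' to fence the whole cluster
+    k8s.enterprisedb.io/fencedInstances: '["cluster-example-1"]'
+    # Temporarily stop the operator from reconciling this cluster
+    k8s.enterprisedb.io/reconciliationLoop: disabled
+spec:
+  # ...
+```
+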
+## Prerequisites
+
+By default, no label or annotation defined in the cluster's metadata is
+inherited by the associated resources.
+To enable label/annotation inheritance, follow the
+instructions provided in [Operator configuration](operator_conf.md).
+
+The following example continues from that configuration, limiting inheritance to:
+
+- Annotations: `categories`
+- Labels: `app`, `environment`, and `workload`
+
+!!! Note
+ Feel free to select the names that most suit your context for both
+ annotations and labels. You can also use wildcards
+ in naming and adopt strategies like using `mycompany/*` for all labels
+ or setting annotations starting with `mycompany/` to be inherited.
+
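+Such a restriction can be expressed through the `INHERITED_ANNOTATIONS` and
+`INHERITED_LABELS` options described in
+[Operator configuration](operator_conf.md). As a sketch, assuming the default
+YAML-manifest installation (the namespace may differ in your environment):
+
+```yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: postgresql-operator-controller-manager-config
+  namespace: postgresql-operator-system
+data:
+  INHERITED_ANNOTATIONS: categories
+  INHERITED_LABELS: environment, workload, app
+```
+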
+## Defining cluster's metadata
+
+When defining the cluster, before any resource is deployed, you can
+set the metadata as follows:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+ name: cluster-example
+ annotations:
+ categories: database
+ labels:
+ environment: production
+ workload: database
+ app: sso
+spec:
+ # ...
+```
+
+Once the cluster is deployed, you can verify, for example, that the labels
+were correctly set in the pods:
+
+```shell
+kubectl get pods --show-labels
+```
+
+## Current limitations
+
+Currently, {{name.ln}} doesn't automatically propagate label or
+annotation deletions. Therefore, when an annotation or label that was
+previously propagated to the underlying pods is removed from a cluster, the
+operator doesn't remove it from the associated resources.
diff --git a/product_docs/docs/postgres_for_kubernetes/1/license_keys.mdx b/product_docs/docs/postgres_for_kubernetes/1/license_keys.mdx
new file mode 100644
index 0000000000..2a6e9d1450
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/license_keys.mdx
@@ -0,0 +1,111 @@
+---
+title: 'License and License keys'
+originalFilePath: 'src/license_keys.md'
+---
+
+License keys are a legacy management mechanism for {{name.ln}}. You do not need a license key if you have installed using an EDB subscription token, and in this case, the licensing commands in this section can be ignored.
+
+If you are not using an EDB subscription token and installing from public repositories, then you will need a license key. The only exception is when you run the operator with Community PostgreSQL: in this case, if the license key is unset, a cluster starts with the default trial license, which automatically expires after 30 days. This is not the recommended way of trialing {{name.ln}}; see the [installation guide](installation_upgrade.md) for the recommended options.
+
+!!! Warning CRITICAL WARNING: UPGRADING OPERATORS
+
+ OpenShift users, or any customer attempting an operator upgrade, MUST configure the new unified repository pull secret (docker.enterprisedb.com/k8s) before running the upgrade. If the old, deprecated repository path is still in use during the upgrade process, image pull failure will occur, leading to deployment failure and potential downtime. Follow the [Central Migration Guide](migrating_edb_registries) first.
+
+The following documentation is only for users who have installed the operator using a license key.
+
+## Company level license keys
+
+A license key allows you to create an unlimited number of PostgreSQL
+clusters in your installation.
+
+The license key needs to be available in a `Secret` in the same namespace where
+the operator is deployed (`ConfigMap` is also available, but not recommended
+for a license key).
+
+!!! Seealso "Operator configuration"
+ For more information, refer to [Operator configuration](operator_conf.md).
+
+Once the company level license is installed, the validity of the
+license key can be checked inside the cluster status.
+
+```sh
+kubectl get cluster cluster-example -o yaml
+[...]
+status:
+ [...]
+ licenseStatus:
+ licenseExpiration: "2021-11-06T09:36:02Z"
+ licenseStatus: Trial
+ valid: true
+ isImplicit: false
+ isTrial: true
+[...]
+```
+
+### Kubernetes installations via YAML manifest
+
+When the operator is installed in Kubernetes using the YAML manifest,
+it is deployed by default in the `postgresql-operator-system` namespace.
+
+Given the namespace name, and the license key, you can create
+the config map with the following command:
+
+```
+kubectl create configmap -n [NAMESPACE_NAME_HERE] \
+ postgresql-operator-controller-manager-config \
+ --from-literal=EDB_LICENSE_KEY=[LICENSE_KEY_HERE]
+```
+
+Operator pods will need to be recreated to apply the new configuration. You
+can use the following command:
+
+```sh
+kubectl rollout restart deployment -n [NAMESPACE_NAME_HERE] \
+ postgresql-operator-controller-manager
+```
+
+## Cluster level license keys
+
+Each `Cluster` resource has a `licenseKey` parameter in its definition.
+You can find the expiration date, as well as more information about the license,
+in the cluster status:
+
+```sh
+kubectl get cluster cluster-example -o yaml
+[...]
+status:
+ [...]
+ licenseStatus:
+ licenseExpiration: "2021-11-06T09:36:02Z"
+ licenseStatus: Trial
+ valid: true
+ isImplicit: false
+ isTrial: true
+[...]
+```
+
+A cluster license key can be updated with a new one at any moment, to extend
+the expiration date or move the cluster to a production license.
+
+## License key secret at cluster level
+
+Each `Cluster` resource can also have a `licenseKeySecret` parameter, which contains
+the name and key of a secret. That secret contains the license key provided by EDB.
+
+This field takes precedence over `licenseKey`: the license is refreshed
+when you change the secret, for example to extend the expiration date or to
+switch from a trial license to a production license.
+
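+As a sketch, assuming a secret named `license-key` that stores the key under
+`licenseKey` (both names are illustrative):
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+  name: license-key
+stringData:
+  licenseKey: <LICENSE_KEY_HERE>
+---
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+  name: cluster-example
+spec:
+  instances: 3
+  licenseKeySecret:
+    name: license-key
+    key: licenseKey
+  storage:
+    size: 1Gi
+```
+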
+{{name.ln}} is distributed under the EDB Limited Usage License
+Agreement, available at [enterprisedb.com/limited-use-license](https://www.enterprisedb.com/limited-use-license).
+
+{{name.ln}}: Copyright (C) 2019-2022 EnterpriseDB Corporation.
+
+## What happens when a license expires
+
+After the license expires, the operator ceases any reconciliation
+attempts on the cluster and no longer manages its status. This also
+suspends any self-healing and high availability capabilities, such as
+automated failover and switchover.
+
+The pods and the data will still be available.
diff --git a/product_docs/docs/postgres_for_kubernetes/1/logging.mdx b/product_docs/docs/postgres_for_kubernetes/1/logging.mdx
new file mode 100644
index 0000000000..0fc52adb72
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/logging.mdx
@@ -0,0 +1,309 @@
+---
+title: 'Logging'
+originalFilePath: 'src/logging.md'
+---
+
+
+
+{{name.ln}} outputs logs in JSON format directly to standard output, including
+PostgreSQL logs, without persisting them to storage for security reasons. This
+design facilitates seamless integration with most Kubernetes-compatible log
+management tools, including command line ones like
+[stern](https://github.com/stern/stern).
+
+!!! Important
+ Long-term storage and management of logs are outside the scope of the
+ operator and should be handled at the Kubernetes infrastructure level.
+ For more information, see the
+ [Kubernetes Logging Architecture](https://kubernetes.io/docs/concepts/cluster-administration/logging/)
+ documentation.
+
+Each log entry includes the following fields:
+
+- `level` – The log level (e.g., `info`, `notice`).
+- `ts` – The timestamp.
+- `logger` – The type of log (e.g., `postgres`, `pg_controldata`).
+- `msg` – The log message, or the keyword `record` if the message is in JSON
+ format.
+- `record` – The actual record, with a structure that varies depending on the
+ `logger` type.
+- `logging_pod` – The name of the pod where the log was generated.
+
+!!! Info
+ If your log ingestion system requires custom field names, you can rename
+ the `level` and `ts` fields using the `log-field-level` and
+ `log-field-timestamp` flags in the operator controller. This can be configured
+    by editing the `Deployment` definition of the operator.
+
+## Cluster Logs
+
+You can configure the log level for the instance pods in the cluster
+specification using the `logLevel` option. Available log levels are: `error`,
+`warning`, `info` (default), `debug`, and `trace`.
+
+!!! Important
+ Currently, the log level can only be set at the time the instance starts.
+ Changes to the log level in the cluster specification after the cluster has
+ started will only apply to new pods, not existing ones.
+
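+For example, a minimal cluster definition running with `debug` level logging:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+  name: cluster-example
+spec:
+  instances: 3
+  logLevel: debug
+  storage:
+    size: 1Gi
+```
+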
+## Operator Logs
+
+The logs produced by the operator pod can be configured with log
+levels, same as instance pods: `error`, `warning`, `info` (default), `debug`,
+and `trace`.
+
+The log level for the operator can be configured by editing the `Deployment`
+definition of the operator and setting the `--log-level` command line argument
+to the desired value.
+
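+As an illustrative excerpt only (the container name and the rest of the
+argument list depend on your installation):
+
+```yaml
+# Excerpt of the operator Deployment
+spec:
+  template:
+    spec:
+      containers:
+        - name: manager
+          args:
+            # ... existing arguments ...
+            - --log-level=debug
+```
+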
+## PostgreSQL Logs
+
+Each PostgreSQL log entry is a JSON object with the `logger` key set to
+`postgres`. The structure of the log entries is as follows:
+
+```json
+{
+ "level": "info",
+ "ts": 1619781249.7188137,
+ "logger": "postgres",
+ "msg": "record",
+ "record": {
+ "log_time": "2021-04-30 11:14:09.718 UTC",
+ "user_name": "",
+ "database_name": "",
+ "process_id": "25",
+ "connection_from": "",
+ "session_id": "608be681.19",
+ "session_line_num": "1",
+ "command_tag": "",
+ "session_start_time": "2021-04-30 11:14:09 UTC",
+ "virtual_transaction_id": "",
+ "transaction_id": "0",
+ "error_severity": "LOG",
+ "sql_state_code": "00000",
+ "message": "database system was interrupted; last known up at 2021-04-30 11:14:07 UTC",
+ "detail": "",
+ "hint": "",
+ "internal_query": "",
+ "internal_query_pos": "",
+ "context": "",
+ "query": "",
+ "query_pos": "",
+ "location": "",
+ "application_name": "",
+ "backend_type": "startup"
+ },
+  "logging_pod": "cluster-example-1"
+}
+```
+
+!!! Info
+ Internally, the operator uses PostgreSQL's CSV log format. For more details,
+ refer to the [PostgreSQL documentation on CSV log format](https://www.postgresql.org/docs/current/runtime-config-logging.html).
+
+## PGAudit Logs
+
+{{name.ln}} offers seamless and native support for
+[PGAudit](https://www.pgaudit.org/) on PostgreSQL clusters.
+
+To enable PGAudit, add the necessary `pgaudit` parameters in the `postgresql`
+section of the cluster configuration.
+
+!!! Important
+ The PGAudit library must be added to `shared_preload_libraries`.
+ {{name.ln}} automatically manages this based on the presence of `pgaudit.*`
+ parameters in the PostgreSQL configuration. The operator handles both the
+ addition and removal of the library from `shared_preload_libraries`.
+
+Additionally, the operator manages the creation and removal of the PGAudit
+extension across all databases within the cluster.
+
+!!! Important
+ {{name.ln}} executes the `CREATE EXTENSION` and `DROP EXTENSION` commands
+ in all databases within the cluster that accept connections.
+
+The following example demonstrates a PostgreSQL `Cluster` deployment with
+PGAudit enabled and configured:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+ name: cluster-example
+spec:
+ instances: 3
+
+ postgresql:
+ parameters:
+ "pgaudit.log": "all, -misc"
+ "pgaudit.log_catalog": "off"
+ "pgaudit.log_parameter": "on"
+ "pgaudit.log_relation": "on"
+
+ storage:
+ size: 1Gi
+```
+
+The audit CSV log entries generated by PGAudit are parsed and routed to
+standard output in JSON format, similar to all other logs:
+
+- `.logger` is set to `pgaudit`.
+- `.msg` is set to `record`.
+- `.record` contains the entire parsed record as a JSON object. This structure
+ resembles that of `logging_collector` logs, with the exception of
+ `.record.audit`, which contains the PGAudit CSV message formatted as a JSON
+ object.
+
+This example shows sample log entries:
+
+```json
+{
+ "level": "info",
+ "ts": 1627394507.8814096,
+ "logger": "pgaudit",
+ "msg": "record",
+ "record": {
+ "log_time": "2021-07-27 14:01:47.881 UTC",
+ "user_name": "postgres",
+ "database_name": "postgres",
+ "process_id": "203",
+ "connection_from": "[local]",
+ "session_id": "610011cb.cb",
+ "session_line_num": "1",
+ "command_tag": "SELECT",
+ "session_start_time": "2021-07-27 14:01:47 UTC",
+ "virtual_transaction_id": "3/336",
+ "transaction_id": "0",
+ "error_severity": "LOG",
+ "sql_state_code": "00000",
+ "backend_type": "client backend",
+ "audit": {
+ "audit_type": "SESSION",
+ "statement_id": "1",
+ "substatement_id": "1",
+ "class": "READ",
+ "command": "SELECT FOR KEY SHARE",
+ "statement": "SELECT pg_current_wal_lsn()",
+ "parameter": ""
+ }
+ },
+  "logging_pod": "cluster-example-1"
+}
+```
+
+See the
+[PGAudit documentation](https://github.com/pgaudit/pgaudit/blob/master/README.md#format)
+for more details about each field in a record.
+
+## EDB Audit logs
+
+Clusters that are running on EDB Postgres Advanced Server (EPAS)
+can enable [EDB Audit](/epas/latest/epas_security_guide/05_edb_audit_logging/) as follows:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+ name: cluster-example
+spec:
+ instances: 3
+ imageName: docker.enterprisedb.com/k8s/edb-postgres-advanced:18-standard-ubi9
+
+ postgresql:
+ epas:
+ audit: true
+
+ storage:
+ size: 1Gi
+```
+
+Setting `.spec.postgresql.epas.audit: true` enforces the following parameters:
+
+```text
+edb_audit = 'csv'
+edb_audit_destination = 'file'
+edb_audit_directory = '/controller/log'
+edb_audit_filename = 'edb_audit'
+edb_audit_rotation_day = 'none'
+edb_audit_rotation_seconds = '0'
+edb_audit_rotation_size = '0'
+edb_audit_tag = ''
+edb_log_every_bulk_value = 'false'
+```
+
+Other parameters can be passed via `.spec.postgresql.parameters` as usual.
+
+The audit CSV logs are parsed and routed to stdout in JSON format, similarly to all the remaining logs:
+
+- `.logger` set to `edb_audit`
+- `.msg` set to `record`
+- `.record` containing the whole parsed record as a JSON object
+
+See the example below:
+
+```json
+{
+ "level": "info",
+ "ts": 1624629110.7641866,
+ "logger": "edb_audit",
+ "msg": "record",
+ "record": {
+ "log_time": "2021-06-25 13:51:50.763 UTC",
+ "user_name": "postgres",
+ "database_name": "postgres",
+ "process_id": "68",
+ "connection_from": "[local]",
+ "session_id": "60d5df76.44",
+ "session_line_num": "5",
+ "process_status": "idle in transaction",
+ "session_start_time": "2021-06-25 13:51:50 UTC",
+ "virtual_transaction_id": "3/93",
+ "transaction_id": "1183",
+ "error_severity": "AUDIT",
+ "sql_state_code": "00000",
+ "message": "statement: GRANT EXECUTE ON function pg_catalog.pg_read_binary_file(text) TO \"streaming_replica\"",
+ "detail": "",
+ "hint": "",
+ "internal_query": "",
+ "internal_query_pos": "",
+ "context": "",
+ "query": "",
+ "query_pos": "",
+ "location": "",
+ "application_name": "",
+ "backend_type": "client backend",
+ "command_tag": "GRANT",
+ "audit_tag": "",
+ "type": "grant"
+ },
+  "logging_pod": "cluster-example-1"
+}
+```
+
+See EDB [Audit file](/epas/latest/epas_security_guide/05_edb_audit_logging/)
+for more details about the records' fields.
+
+## Other Logs
+
+All logs generated by the operator and its instances are in JSON format, with
+the `logger` field indicating the process that produced them. The possible
+`logger` values are as follows:
+
+- `barman-cloud-wal-archive`: logs from `barman-cloud-wal-archive`
+- `barman-cloud-wal-restore`: logs from `barman-cloud-wal-restore`
+- `edb_audit`: from the EDB Audit extension
+- `initdb`: logs from running `initdb`
+- `pg_basebackup`: logs from running `pg_basebackup`
+- `pg_controldata`: logs from running `pg_controldata`
+- `pg_ctl`: logs from running any `pg_ctl` subcommand
+- `pg_rewind`: logs from running `pg_rewind`
+- `pgaudit`: logs from the PGAudit extension
+- `postgres`: logs from the `postgres` instance (with `msg` distinct from
+ `record`)
+- `wal-archive`: logs from the `wal-archive` subcommand of the instance manager
+- `wal-restore`: logs from the `wal-restore` subcommand of the instance manager
+- `instance-manager`: from the [PostgreSQL instance manager](./instance_manager.md)
+
+With the exception of `postgres` and `edb_audit`, which follow a specific
+structure, all other `logger` values contain the `msg` field with the escaped
+message that is logged.
diff --git a/product_docs/docs/postgres_for_kubernetes/1/logical_replication.mdx b/product_docs/docs/postgres_for_kubernetes/1/logical_replication.mdx
new file mode 100644
index 0000000000..4d568e22d7
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/logical_replication.mdx
@@ -0,0 +1,464 @@
+---
+title: 'Logical Replication'
+originalFilePath: 'src/logical_replication.md'
+---
+
+
+
+PostgreSQL extends its replication capabilities beyond physical replication,
+which operates at the level of exact block addresses and byte-by-byte copying,
+by offering [logical replication](https://www.postgresql.org/docs/current/logical-replication.html).
+Logical replication replicates data objects and their changes based on a
+defined replication identity, typically the primary key.
+
+Logical replication uses a publish-and-subscribe model, where subscribers
+connect to publications on a publisher node. Subscribers pull data changes from
+these publications and can re-publish them, enabling cascading replication and
+complex topologies.
+
+!!! Important
+ To protect your logical replication subscribers after a failover of the
+ publisher cluster in {{name.ln}}, ensure that replication slot
+ synchronization for logical decoding is enabled. Without this, your logical
+ replication clients may lose data and fail to continue seamlessly after a
+ failover. For configuration details, see
+ ["Replication: Logical Decoding Slot Synchronization"](replication.md#logical-decoding-slot-synchronization).
+
+This flexible model is particularly useful for:
+
+- Online data migrations
+- Live PostgreSQL version upgrades
+- Data distribution across systems
+- Real-time analytics
+- Integration with external applications
+
+!!! Info
+ For more details, examples, and limitations, please refer to the
+ [official PostgreSQL documentation on Logical Replication](https://www.postgresql.org/docs/current/logical-replication.html).
+
+**{{name.ln}}** enhances this capability by providing declarative support for
+key PostgreSQL logical replication objects:
+
+- **Publications** via the `Publication` resource
+- **Subscriptions** via the `Subscription` resource
+
+## Publications
+
+In PostgreSQL's publish-and-subscribe replication model, a
+[**publication**](https://www.postgresql.org/docs/current/logical-replication-publication.html)
+is the source of data changes. It acts as a logical container for the change
+sets (also known as *replication sets*) generated from one or more tables within
+a database. Publications can be defined on any PostgreSQL 10+ instance acting
+as the *publisher*, including instances managed by popular DBaaS solutions in the
+public cloud. Each publication is tied to a single database and provides
+fine-grained control over which tables and changes are replicated.
+
+For publishers outside Kubernetes, you can [create publications using SQL](https://www.postgresql.org/docs/current/sql-createpublication.html)
+or leverage the [`cnp publication create` plugin command](kubectl-plugin.md#logical-replication-publications).
+
+When managing `Cluster` objects with **{{name.ln}}**, PostgreSQL publications
+can be defined declaratively through the `Publication` resource.
+
+!!! Info
+ Please refer to the [API reference](pg4k.v1.md#postgresql-k8s-enterprisedb-io-v1-Publication)
+ for the full list of attributes you can define for each `Publication` object.
+
+Suppose you have a cluster named `freddie` and want to replicate all tables in
+the `app` database. Here's a `Publication` manifest:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Publication
+metadata:
+ name: freddie-publisher
+spec:
+ cluster:
+ name: freddie
+ dbname: app
+ name: publisher
+ target:
+ allTables: true
+```
+
+In the above example:
+
+- The publication object is named `freddie-publisher` (`metadata.name`).
+- The publication is created via the primary of the `freddie` cluster
+ (`spec.cluster.name`) with name `publisher` (`spec.name`).
+- It includes all tables (`spec.target.allTables: true`) from the `app`
+ database (`spec.dbname`).
+
+!!! Important
+ While `allTables` simplifies configuration, PostgreSQL offers fine-grained
+ control for replicating specific tables or targeted data changes. For advanced
+ configurations, consult the [PostgreSQL documentation](https://www.postgresql.org/docs/current/logical-replication.html).
+ Additionally, refer to the [{{name.ln}} API reference](pg4k.v1.md#postgresql-k8s-enterprisedb-io-v1-PublicationTarget)
+ for details on declaratively customizing replication targets.
+
+### Required Fields in the `Publication` Manifest
+
+The following fields are required for a `Publication` object:
+
+- `metadata.name`: Unique name for the Kubernetes `Publication` object.
+- `spec.cluster.name`: Name of the PostgreSQL cluster.
+- `spec.dbname`: Database name where the publication is created.
+- `spec.name`: Publication name in PostgreSQL.
+- `spec.target`: Specifies the tables or changes to include in the publication.
+
+The `Publication` object must reference a specific `Cluster`, determining where
+the publication will be created. It is managed by the cluster's primary instance,
+ensuring the publication is created or updated as needed.
+
+### Reconciliation and Status
+
+After creating a `Publication`, {{name.ln}} manages it on the primary
+instance of the specified cluster. Following a successful reconciliation cycle,
+the `Publication` status will reflect the following:
+
+- `applied: true` indicates that the configuration has been successfully
+  applied.
+- `observedGeneration` matches `metadata.generation`, confirming the applied
+ configuration corresponds to the most recent changes.
+
+If an error occurs during reconciliation, `status.applied` will be `false`, and
+an error message will be included in the `status.message` field.
+
+### Removing a publication
+
+The `publicationReclaimPolicy` field controls the behavior when deleting a
+`Publication` object:
+
+- `retain` (default): Leaves the publication in PostgreSQL for manual
+ management.
+- `delete`: Automatically removes the publication from PostgreSQL.
+
+Consider the following example:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Publication
+metadata:
+ name: freddie-publisher
+spec:
+ cluster:
+ name: freddie
+ dbname: app
+ name: publisher
+ target:
+ allTables: true
+ publicationReclaimPolicy: delete
+```
+
+In this case, deleting the `Publication` object also removes the `publisher`
+publication from the `app` database of the `freddie` cluster.
+
+## Subscriptions
+
+In PostgreSQL's publish-and-subscribe replication model, a
+[**subscription**](https://www.postgresql.org/docs/current/logical-replication-subscription.html)
+represents the downstream component that consumes data changes.
+A subscription establishes the connection to a publisher's database and
+specifies the set of publications (one or more) it subscribes to. Subscriptions
+can be created on any supported PostgreSQL instance acting as the *subscriber*.
+
+!!! Important
+ Since schema definitions are not replicated, the subscriber must have the
+ corresponding tables already defined before data replication begins.
+
+{{name.ln}} simplifies subscription management by enabling you to define them
+declaratively using the `Subscription` resource.
+
+!!! Info
+ Please refer to the [API reference](pg4k.v1.md#postgresql-k8s-enterprisedb-io-v1-Subscription)
+ for the full list of attributes you can define for each `Subscription` object.
+
+Suppose you want to replicate changes from the `publisher` publication on the
+`app` database of the `freddie` cluster (*publisher*) to the `app` database of
+the `king` cluster (*subscriber*). Here's an example of a `Subscription`
+manifest:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Subscription
+metadata:
+ name: freddie-to-king-subscription
+spec:
+ cluster:
+ name: king
+ dbname: app
+ name: subscriber
+ externalClusterName: freddie
+ publicationName: publisher
+```
+
+In the above example:
+
+- The subscription object is named `freddie-to-king-subscription` (`metadata.name`).
+- The subscription is created in the `app` database (`spec.dbname`) of the
+ `king` cluster (`spec.cluster.name`), with name `subscriber` (`spec.name`).
+- It connects to the `publisher` publication in the external `freddie` cluster,
+ referenced by `spec.externalClusterName`.
+
+To facilitate this setup, the `freddie` external cluster must be defined in the
+`king` cluster's configuration. Below is an example excerpt showing how to
+define the external cluster in the `king` manifest:
+
+```yaml
+externalClusters:
+ - name: freddie
+ connectionParameters:
+ host: freddie-rw.default.svc
+ user: postgres
+ dbname: app
+```
+
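+If the publisher requires password authentication, the external cluster
+definition can also reference a Kubernetes secret. A sketch, where the secret
+name and key are illustrative:
+
+```yaml
+externalClusters:
+  - name: freddie
+    connectionParameters:
+      host: freddie-rw.default.svc
+      user: postgres
+      dbname: app
+    password:
+      name: freddie-superuser
+      key: password
+```
+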
+!!! Info
+ For more details on configuring the `externalClusters` section, see the
+ ["Bootstrap" section](bootstrap.md#the-externalclusters-section) of the
+ documentation.
+
+As you can see, a subscription can connect to any PostgreSQL database
+accessible over the network. This flexibility allows you to seamlessly migrate
+your data into Kubernetes with nearly zero downtime. It’s an excellent option
+for transitioning from various environments, including popular cloud-based
+Database-as-a-Service (DBaaS) platforms.
+
+### Required Fields in the `Subscription` Manifest
+
+The following fields are mandatory for defining a `Subscription` object:
+
+- `metadata.name`: A unique name for the Kubernetes `Subscription` object
+ within its namespace.
+- `spec.cluster.name`: The name of the PostgreSQL cluster where the
+ subscription will be created.
+- `spec.dbname`: The name of the database in which the subscription will be
+ created.
+- `spec.name`: The name of the subscription as it will appear in PostgreSQL.
+- `spec.externalClusterName`: The name of the external cluster, as defined in
+ the `spec.cluster.name` cluster's configuration. This references the
+ publisher database.
+- `spec.publicationName`: The name of the publication in the publisher database
+ to which the subscription will connect.
+
+The `Subscription` object must reference a specific `Cluster`, determining
+where the subscription will be managed. {{name.ln}} ensures that the
+subscription is created or updated on the primary instance of the specified
+cluster.
+
+### Reconciliation and Status
+
+After creating a `Subscription`, {{name.ln}} manages it on the primary
+instance of the specified cluster. Following a successful reconciliation cycle,
+the `Subscription` status will reflect the following:
+
+- `applied: true` indicates that the configuration has been successfully
+  applied.
+- `observedGeneration` matches `metadata.generation`, confirming that the
+  applied configuration corresponds to the most recent changes.
+
+If an error occurs during reconciliation, `status.applied` will be `false`, and
+an error message will be included in the `status.message` field.
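+
+For example, once the `Subscription` from the manifest above has been
+reconciled, you can inspect these fields with `kubectl` (a sketch; adjust the
+object name and namespace to your environment):
+
+```console
+kubectl get subscription freddie-to-king-subscription \
+  -o jsonpath='{.status.applied}{"\n"}'
+true
+```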
+
+### Removing a Subscription
+
+The `subscriptionReclaimPolicy` field controls the behavior when deleting a
+`Subscription` object:
+
+- `retain` (default): Leaves the subscription in PostgreSQL for manual
+ management.
+- `delete`: Automatically removes the subscription from PostgreSQL.
+
+Consider the following example:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Subscription
+metadata:
+ name: freddie-to-king-subscription
+spec:
+ cluster:
+ name: king
+ dbname: app
+ name: subscriber
+ externalClusterName: freddie
+ publicationName: publisher
+ subscriptionReclaimPolicy: delete
+```
+
+In this case, deleting the `Subscription` object also removes the `subscriber`
+subscription from the `app` database of the `king` cluster.
+
+### Resilience to Failovers
+
+To ensure that your logical replication subscriptions remain operational after
+a failover of the publisher, configure {{name.ln}} to synchronize logical
+decoding slots across the cluster. For detailed instructions, see
+[Logical Decoding Slot Synchronization](replication.md#logical-decoding-slot-synchronization).
+
+## Limitations
+
+Logical replication in PostgreSQL has some inherent limitations, as outlined in
+the [official documentation](https://www.postgresql.org/docs/current/logical-replication-restrictions.html).
+Notably, the following objects are not replicated:
+
+- **Database schema and DDL commands**
+- **Sequence data**
+- **Large objects**
+
+### Addressing Schema Replication
+
+The first limitation, related to schema replication, can be easily addressed
+using {{name.ln}}' capabilities. For instance, you can leverage the `import`
+bootstrap feature to copy the schema of the tables you need to replicate.
+Alternatively, you can manually create the schema as you would for any
+PostgreSQL database.
+
+### Handling Sequences
+
+While sequences are not automatically kept in sync through logical replication,
+{{name.ln}} provides a solution for live migrations:
+you can use the [`cnp` plugin](kubectl-plugin.md#synchronizing-sequences)
+to synchronize sequence values, ensuring consistency between the publisher and
+subscriber databases.
+
+## Example of live migration and major Postgres upgrade with logical replication
+
+To highlight the powerful capabilities of logical replication, this example
+demonstrates how to replicate data from a publisher database (`freddie`) to a
+subscriber database (`king`) that can run a newer major PostgreSQL version.
+This setup can be deployed in your Kubernetes cluster for
+evaluation and hands-on learning.
+
+This example illustrates how logical replication facilitates live migrations
+and upgrades between PostgreSQL versions while ensuring data consistency. By
+combining logical replication with {{name.ln}}, you can easily set up,
+manage, and evaluate such scenarios in a Kubernetes environment.
+
+### Step 1: Setting Up the Publisher (`freddie`)
+
+The first step involves creating a `freddie` PostgreSQL cluster (the manifest
+below uses the `18-standard-ubi9` image; pick the version you are migrating
+from). The cluster contains a single instance and includes an `app` database
+initialized with a table, `n`, storing 10,000 numbers. A logical replication
+publication named `publisher` is also configured to include all tables in the
+database.
+
+Here’s the manifest for setting up the `freddie` cluster and its publication
+resource:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+ name: freddie
+spec:
+ instances: 1
+
+ imageName: docker.enterprisedb.com/k8s/postgresql:18-standard-ubi9
+
+ storage:
+ size: 1Gi
+
+ bootstrap:
+ initdb:
+ postInitApplicationSQL:
+ - CREATE TABLE n (i SERIAL PRIMARY KEY, m INTEGER)
+ - INSERT INTO n (m) (SELECT generate_series(1, 10000))
+ - ALTER TABLE n OWNER TO app
+
+ managed:
+ roles:
+ - name: app
+ login: true
+ replication: true
+---
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Publication
+metadata:
+ name: freddie-publisher
+spec:
+ cluster:
+ name: freddie
+ dbname: app
+ name: publisher
+ target:
+ allTables: true
+```
+
+### Step 2: Setting Up the Subscriber (`king`)
+
+Next, create the `king` PostgreSQL cluster, running the latest version of
+PostgreSQL. This cluster initializes by importing the schema from the `app`
+database on the `freddie` cluster using the external cluster configuration. A
+`Subscription` resource, `freddie-to-king-subscription`, is then configured to
+consume changes published by the `publisher` on `freddie`.
+
+Below is the manifest for setting up the `king` cluster and its subscription:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+ name: king
+spec:
+ instances: 1
+
+ storage:
+ size: 1Gi
+
+ bootstrap:
+ initdb:
+ import:
+ type: microservice
+ schemaOnly: true
+ databases:
+ - app
+ source:
+ externalCluster: freddie
+
+ externalClusters:
+ - name: freddie
+ connectionParameters:
+ host: freddie-rw.default.svc
+ user: app
+ dbname: app
+ password:
+ name: freddie-app
+ key: password
+---
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Subscription
+metadata:
+ name: freddie-to-king-subscription
+spec:
+ cluster:
+ name: king
+ dbname: app
+ name: subscriber
+ externalClusterName: freddie
+ publicationName: publisher
+```
+
+Once the `king` cluster is running, you can verify that the replication is
+working by connecting to the `app` database and counting the records in the `n`
+table. The following example uses the `psql` command provided by the `cnp`
+plugin for simplicity:
+
+```console
+kubectl cnp psql king -- app -qAt -c 'SELECT count(*) FROM n'
+10000
+```
+
+This command should return `10000`, confirming that the data from the `freddie`
+cluster has been successfully replicated to the `king` cluster.
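+
+To confirm that replication is continuous rather than a one-off copy, you can
+insert another row on the publisher and repeat the count on the subscriber
+after a short delay (a sketch, assuming the clusters from this example; the
+first command's output is omitted):
+
+```console
+kubectl cnp psql freddie -- app -qAt -c 'INSERT INTO n (m) VALUES (0)'
+kubectl cnp psql king -- app -qAt -c 'SELECT count(*) FROM n'
+10001
+```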
+
+Using the `cnp` plugin, you can also synchronize existing sequences to ensure
+consistency between the publisher and subscriber. The example below
+demonstrates how to synchronize a sequence for the `king` cluster:
+
+```console
+kubectl cnp subscription sync-sequences king --subscription=subscriber
+SELECT setval('"public"."n_i_seq"', 10000);
+
+10000
+```
+
+This command updates the sequence `n_i_seq` in the `king` cluster to match the
+current value, ensuring it is in sync with the source database.
diff --git a/product_docs/docs/postgres_for_kubernetes/1/migrating_edb_registries.mdx b/product_docs/docs/postgres_for_kubernetes/1/migrating_edb_registries.mdx
new file mode 100644
index 0000000000..4a2db6ae65
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/migrating_edb_registries.mdx
@@ -0,0 +1,198 @@
+---
+title: Migrating to the unified private EDB container registry
+navTitle: Registry migration
+description: Migrate a {{name.ln}} system from a previous tier-based registry configuration to EDB's unified container image registry
+---
+
+## What's changing
+
+Previously, EDB published operator and operand images selectively to one of several private repositories: `k8s_standard`, `k8s_enterprise`, `k8s_standard_pgd`, and `k8s_enterprise_pgd`. Your repository was determined by your subscription level and the operators you wanted to use. Also, public registries may have been used for some operand images. In particular, EDB provided numerous images via public repositories hosted on quay.io.
+
+Now, the {{name.ln}} and EDB Postgres Distributed for Kubernetes operators and associated operands are provided in a single repository, `k8s`. If your subscription includes access to a given image, you can pull it from this repository. The old repositories remain available for now, but new images are published to the unified repository. Thus, future operator and operand upgrades will require migrating to the unified repository.
+
+### Summary of changes
+
+| Old repository path | New unified repository path |
+|--------------------------------------------|-----------------------------|
+| docker.enterprisedb.com/k8s_enterprise | docker.enterprisedb.com/k8s |
+| docker.enterprisedb.com/k8s_standard | docker.enterprisedb.com/k8s |
+| docker.enterprisedb.com/k8s_enterprise_pgd | docker.enterprisedb.com/k8s |
+| docker.enterprisedb.com/k8s_standard_pgd | docker.enterprisedb.com/k8s |
+| docker.enterprisedb.com/anything_else | docker.enterprisedb.com/k8s |
+| quay.io/enterprisedb | docker.enterprisedb.com/k8s |
+
+## Scope of changes
+
+- **Pull secrets.** Because the old system relied on specifying the desired repository by username when authenticating, update the credentials stored in secrets that Kubernetes uses to pull images.
+- **Helm values.** If you're using Helm charts, update the `image.repository` and `image.imageCredentials.username` values.
+- **Manifest image names.** If you're using custom manifests that specify one of the old repository paths as part of an `imageName` or `images[].image` value, update those as well.
+
+## Migration processes
+
+### Updating credentials
+
+In all cases, begin by updating any stored credentials used by your system.
+
+Collect the following information:
+
+- Your [EDB account token](/repos/getting_started/with_web/get_your_token/)
+- The name of the repository: `k8s`
+- The repository server: `docker.enterprisedb.com`
+
+The instructions that follow assume your token is in an environment variable named `EDB_SUBSCRIPTION_TOKEN`.
+
+#### Pull secrets
+
+If you're working directly with operator manifests, update pull secrets in the relevant namespaces.
+
+For {{name.ln}}, the default namespace is `postgresql-operator-system`. To create or update the pull secret in this namespace, run:
+
+```shell
+kubectl create secret -n postgresql-operator-system docker-registry edb-pull-secret \
+ --docker-server=docker.enterprisedb.com \
+ --docker-username=k8s \
+ --docker-password=${EDB_SUBSCRIPTION_TOKEN} \
+ --dry-run=client -o=json \
+ | kubectl apply -f -
+```
+
+!!! Tip
+ These examples overwrite existing secrets. If you have additional credentials stored (for example, to authenticate with internal repositories), instead unpack the value and update only the credentials for `.auths["docker.enterprisedb.com"]`, leaving other servers untouched.
+
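+A minimal, illustrative sketch of that selective update (assuming `jq` is
+available; the file below is a fabricated stand-in for your decoded secret):
+
+```shell
+# Example .dockerconfigjson holding credentials for two registries.
+cat > /tmp/dockerconfig.json <<'EOF'
+{"auths":{"internal.example.com":{"auth":"aW50ZXJuYWw="},"docker.enterprisedb.com":{"auth":"b2xk"}}}
+EOF
+
+EDB_SUBSCRIPTION_TOKEN=my-token
+NEW_AUTH=$(printf 'k8s:%s' "$EDB_SUBSCRIPTION_TOKEN" | base64 | tr -d '\n')
+
+# Replace only the docker.enterprisedb.com entry; other servers are untouched.
+jq --arg auth "$NEW_AUTH" \
+   '.auths["docker.enterprisedb.com"] = {"username": "k8s", "auth": $auth}' \
+   /tmp/dockerconfig.json > /tmp/dockerconfig.updated.json
+```
+
+You can then load the updated file back into the secret with `kubectl create
+secret generic ... --from-file=.dockerconfigjson=...` and `kubectl apply`.
+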
+For EDB Postgres Distributed for Kubernetes, the default namespace is `pgd-operator-system`. To create or update the pull secret in this namespace, run:
+
+```shell
+kubectl create secret -n pgd-operator-system docker-registry edb-pull-secret \
+ --docker-server=docker.enterprisedb.com \
+ --docker-username=k8s \
+ --docker-password=${EDB_SUBSCRIPTION_TOKEN} \
+ --dry-run=client -o=json \
+ | kubectl apply -f -
+```
+
+!!! Tip
+ If you're using alternative namespaces, adjust these commands as appropriate.
+
+#### OpenShift pull secrets
+
+The relevant pull secrets live in the `openshift-operators` namespace on OpenShift.
+
+For {{name.ln}} and operand images, you'll need to update `postgresql-operator-pull-secret`:
+
+```shell
+oc create secret -n openshift-operators docker-registry postgresql-operator-pull-secret \
+ --docker-server=docker.enterprisedb.com \
+ --docker-username=k8s \
+ --docker-password=${EDB_SUBSCRIPTION_TOKEN} \
+ --dry-run=client -o=json \
+ | oc apply -f -
+```
+
+If you're using the EDB Postgres Distributed for Kubernetes operator, you'll ***also*** need to update `pgd-operator-pull-secret`:
+
+```shell
+oc create secret -n openshift-operators docker-registry pgd-operator-pull-secret \
+ --docker-server=docker.enterprisedb.com \
+ --docker-username=k8s \
+ --docker-password=${EDB_SUBSCRIPTION_TOKEN} \
+ --dry-run=client -o=json \
+ | oc apply -f -
+```
+
+### Updating Helm values
+
+For your {{name.ln}} Helm releases, you'll need to update override values for both `image.repository` and `image.imageCredentials.username`:
+
+```shell
+helm upgrade --reuse-values \
+ --set image.repository=docker.enterprisedb.com/k8s/edb-postgres-for-kubernetes \
+ --set image.imageCredentials.username=k8s \
+ edb-pg4k \
+ edb/edb-postgres-for-kubernetes
+```
+
+(Here, `edb-pg4k` and `edb/edb-postgres-for-kubernetes` are the release name
+and chart reference used in other examples; substitute your own values.)
+
+For your EDB Postgres Distributed for Kubernetes Helm releases, you'll need to override values for both `global.image.repository` and `image.imageCredentials.username`:
+
+```shell
+helm upgrade --reuse-values \
+ --set global.image.repository=docker.enterprisedb.com/k8s \
+ --set image.imageCredentials.username=k8s \
+ edb-pg4k-pgd \
+ edb/edb-postgres-distributed-for-kubernetes
+```
+
+(Here, `edb-pg4k-pgd` and `edb/edb-postgres-distributed-for-kubernetes` are
+the release name and chart reference used in other examples; substitute your
+own values.)
+
+### Updating image paths in manifests
+
+For operators or operands deployed via custom manifests (including the examples contained within current and past versions of this documentation), you'll need to edit those manifests directly, updating any image references that include the server and repository name as part of the image name.
+
+Refer to the repository paths under [Summary of changes](#summary-of-changes) for a full list of paths that you'll need to update if found among your manifests.
+
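+If your manifests live in files, you can locate and rewrite the old paths with
+standard tools. The sketch below uses a fabricated sample file; note that image
+*tags* may also need updating, since images in the unified repository use tags
+with suffixes such as `-standard-ubi9`:
+
+```shell
+# Sample manifest with an outdated image path (illustrative only).
+cat > /tmp/cluster.yaml <<'EOF'
+spec:
+  imageName: docker.enterprisedb.com/k8s_enterprise/edb-postgres-extended:18-standard-ubi9
+EOF
+
+# Locate references to any of the old repository paths.
+grep -En 'docker\.enterprisedb\.com/k8s_[a-z_]+|quay\.io/enterprisedb' /tmp/cluster.yaml
+
+# Rewrite them to the unified `k8s` repository; review the result before applying.
+sed -E -i \
+  -e 's#docker\.enterprisedb\.com/k8s_[a-z_]+#docker.enterprisedb.com/k8s#g' \
+  -e 's#quay\.io/enterprisedb#docker.enterprisedb.com/k8s#g' \
+  /tmp/cluster.yaml
+```
+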
+#### Example: imageName
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+ name: postgresql-extended-cluster
+spec:
+ instances: 3
+ imageName: docker.enterprisedb.com/k8s_enterprise/edb-postgres-extended:18-standard-ubi9
+```
+
+The cluster manifest above deploys three EDB Postgres Extended instances. The image path referenced in `spec.imageName` is out of date and needs to be updated:
+
+```yaml
+ imageName: docker.enterprisedb.com/k8s/edb-postgres-extended:18-standard-ubi9
+```
+
+Deploy this update in the usual manner (`kubectl apply ...`).
+
+#### Example: ImageCatalog images
+
+Consider this (old, outdated) sample ImageCatalog manifest:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: ImageCatalog
+metadata:
+ name: postgresql
+ namespace: default
+spec:
+ images:
+ - major: 13
+ image: quay.io/enterprisedb/postgresql:13.21
+ - major: 14
+ image: quay.io/enterprisedb/postgresql:14.18
+ - major: 15
+ image: quay.io/enterprisedb/postgresql:15.13
+ - major: 16
+ image: quay.io/enterprisedb/postgresql:16.9
+ - major: 17
+ image: quay.io/enterprisedb/postgresql:17.5
+```
+
+The ImageCatalog manifest above defines a set of images to be referenced elsewhere. The image paths referenced in `spec.images[].image` values are out of date and need to be updated:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: ImageCatalog
+metadata:
+ name: postgresql
+ namespace: default
+spec:
+ images:
+ - major: 15
+ image: docker.enterprisedb.com/k8s/postgresql:15.14-standard-ubi9
+ - major: 16
+ image: docker.enterprisedb.com/k8s/postgresql:16.10-standard-ubi9
+ - major: 17
+ image: docker.enterprisedb.com/k8s/postgresql:17.6-standard-ubi9
+ - major: 18
+ image: docker.enterprisedb.com/k8s/postgresql:18.1-standard-ubi9
+```
+
+Deploy this update in the usual manner (`kubectl apply ...`).
+
diff --git a/product_docs/docs/postgres_for_kubernetes/1/monitoring.mdx b/product_docs/docs/postgres_for_kubernetes/1/monitoring.mdx
new file mode 100644
index 0000000000..f08331a6c3
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/monitoring.mdx
@@ -0,0 +1,945 @@
+---
+title: 'Monitoring'
+originalFilePath: 'src/monitoring.md'
+---
+
+
+
+!!! Important
+ Installing Prometheus and Grafana is beyond the scope of this project.
+ We assume they are correctly installed in your system. However, for
+ experimentation we provide instructions in
+ [Part 4 of the Quickstart](quickstart.md#part-4-monitor-clusters-with-prometheus-and-grafana).
+
+## Monitoring Instances
+
+For each PostgreSQL instance, the operator provides an exporter of metrics for
+[Prometheus](https://prometheus.io/) via HTTP or HTTPS, on port 9187, named `metrics`.
+The operator comes with a [predefined set of metrics](#predefined-set-of-metrics), as well as a highly
+configurable and customizable system to define additional queries via one or
+more `ConfigMap` or `Secret` resources (see the
+["User defined metrics" section](#user-defined-metrics) below for details).
+
+!!! Important
+ {{name.ln}}, by default, installs a set of [predefined metrics](#default-set-of-metrics)
+ in a `ConfigMap` named `default-monitoring`.
+
+!!! Info
+ You can inspect the exported metrics by following the instructions in
+ the ["How to inspect the exported metrics"](#how-to-inspect-the-exported-metrics)
+ section below.
+
+All monitoring queries that are performed on PostgreSQL are:
+
+- atomic (one transaction per query)
+- executed with the `pg_monitor` role
+- executed with `application_name` set to `cnp_metrics_exporter`
+- executed as user `postgres`
+
+Please refer to the "Predefined Roles" section in PostgreSQL
+[documentation](https://www.postgresql.org/docs/current/predefined-roles.html)
+for details on the `pg_monitor` role.
+
+Queries, by default, are run against the *main database*, as defined by
+the specified `bootstrap` method of the `Cluster` resource, according
+to the following logic:
+
+- using `initdb`: queries will be run by default against the specified database
+ in `initdb.database`, or `app` if not specified
+- using `recovery`: queries will be run by default against the specified database
+ in `recovery.database`, or `postgres` if not specified
+- using `pg_basebackup`: queries will be run by default against the specified database
+ in `pg_basebackup.database`, or `postgres` if not specified
+
+The default database can always be overridden for a given user-defined metric,
+by specifying a list of one or more databases in the `target_databases` option.
+
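+For instance, a user-defined metric that runs against two databases could look
+like this (a minimal sketch: the query, table, and database names are
+hypothetical; see the ["User defined metrics" section](#user-defined-metrics)
+for the full syntax):
+
+```yaml
+sample_rows:
+  query: "SELECT count(*) AS rows FROM my_table"
+  target_databases:
+    - app
+    - sales
+  metrics:
+    - rows:
+        usage: "GAUGE"
+        description: "Number of rows in my_table"
+```
+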
+!!! Seealso "Prometheus/Grafana"
+ If you are interested in evaluating the integration of {{name.ln}}
+ with Prometheus and Grafana, you can find a quick setup guide
+ in [Part 4 of the quickstart](quickstart.md#part-4-monitor-clusters-with-prometheus-and-grafana)
+
+### Monitoring with the Prometheus operator
+
+You can monitor a specific PostgreSQL cluster using the
+[Prometheus Operator's](https://github.com/prometheus-operator/prometheus-operator)
+[`PodMonitor` resource](https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/api-reference/api.md#monitoring.coreos.com/v1.PodMonitor).
+
+The recommended approach is to manually create and manage a `PodMonitor` for
+each {{name.ln}} cluster. This method provides you with full control over the
+monitoring configuration and lifecycle.
+
+#### Creating a `PodMonitor`
+
+To monitor your cluster, define a `PodMonitor` resource as follows. Be sure to
+deploy it in the same namespace where your Prometheus Operator is configured to
+find `PodMonitor` resources.
+
+```yaml
+apiVersion: monitoring.coreos.com/v1
+kind: PodMonitor
+metadata:
+ name: cluster-example
+spec:
+ selector:
+ matchLabels:
+ k8s.enterprisedb.io/cluster: cluster-example
+ podMetricsEndpoints:
+ - port: metrics
+```
+
+!!! important "Important Configuration Details"
+ - `metadata.name`: Give your `PodMonitor` a unique name.
+ - `spec.namespaceSelector`: Use this to specify the namespace where
+ your PostgreSQL cluster is running.
+    - `spec.selector.matchLabels`: You must use the
+      `k8s.enterprisedb.io/cluster: <cluster name>` label to correctly target
+      the PostgreSQL instances.
+
+#### Deprecation of Automatic `PodMonitor` Creation
+
+!!!warning "Feature Deprecation Notice"
+ The `.spec.monitoring.enablePodMonitor` field in the `Cluster` resource is
+ now deprecated and will be removed in a future version of the operator.
+
+If you are currently using this feature, we strongly recommend that you remove
+`.spec.monitoring.enablePodMonitor` (or set it to `false`) and manually
+create a `PodMonitor` resource for your cluster as described above.
+This change ensures that you have complete ownership of your monitoring
+configuration, preventing it from being managed or overwritten by the operator.
+
+### Enabling TLS on the Metrics Port
+
+To enable TLS communication on the metrics port, configure the `.spec.monitoring.tls.enabled`
+setting to `true`. This setup ensures that the metrics exporter uses the same
+server certificate used by PostgreSQL to secure communication on port 5432.
+
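+A minimal excerpt of a `Cluster` manifest with this setting (the cluster name
+is illustrative):
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+  name: cluster-example
+spec:
+  instances: 3
+  monitoring:
+    tls:
+      enabled: true
+```
+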
+!!! Important
+ Changing the `.spec.monitoring.tls.enabled` setting will trigger a rolling restart of the Cluster.
+
+If the `PodMonitor` is managed by the operator (`.spec.monitoring.enablePodMonitor` set to `true`),
+it will automatically contain the necessary configurations to access the metrics via TLS.
+
+To manually deploy a `PodMonitor` suitable for reading metrics via TLS, define it as follows and
+adjust as needed:
+
+```yaml
+apiVersion: monitoring.coreos.com/v1
+kind: PodMonitor
+metadata:
+ name: cluster-example
+spec:
+ selector:
+ matchLabels:
+ "k8s.enterprisedb.io/cluster": cluster-example
+ podMetricsEndpoints:
+ - port: metrics
+ scheme: https
+ tlsConfig:
+ ca:
+ secret:
+ name: cluster-example-ca
+ key: ca.crt
+ serverName: cluster-example-rw
+```
+
+!!! Important
+ Ensure you modify the example above with a unique name, as well as the
+ correct Cluster's namespace and labels (e.g., `cluster-example`).
+
+!!! Important
+ The `serverName` field in the metrics endpoint must match one of the names
+ defined in the server certificate. If the default certificate is in use,
+    the `serverName` value should be in the format `<cluster name>-rw`.
+
+### Predefined set of metrics
+
+Every PostgreSQL instance exporter automatically exposes a set of predefined
+metrics, which can be classified in two major categories:
+
+- PostgreSQL related metrics, starting with `cnp_collector_*`, including:
+
+ - number of WAL files and total size on disk
+ - number of `.ready` and `.done` files in the archive status folder
+ - requested minimum and maximum number of synchronous replicas, as well as
+ the expected and actually observed values
+ - number of distinct nodes accommodating the instances
+ - timestamps indicating last failed and last available backup, as well
+ as the first point of recoverability for the cluster
+ - flag indicating if replica cluster mode is enabled or disabled
+ - flag indicating if a manual switchover is required
+ - flag indicating if fencing is enabled or disabled
+
+- Go runtime related metrics, starting with `go_*`
+
+Below is a sample of the metrics returned by the `localhost:9187/metrics`
+endpoint of an instance. As you can see, the Prometheus format is
+self-documenting:
+
+```text
+# HELP cnp_collector_collection_duration_seconds Collection time duration in seconds
+# TYPE cnp_collector_collection_duration_seconds gauge
+cnp_collector_collection_duration_seconds{collector="Collect.up"} 0.0031393
+
+# HELP cnp_collector_collections_total Total number of times PostgreSQL was accessed for metrics.
+# TYPE cnp_collector_collections_total counter
+cnp_collector_collections_total 2
+
+# HELP cnp_collector_fencing_on 1 if the instance is fenced, 0 otherwise
+# TYPE cnp_collector_fencing_on gauge
+cnp_collector_fencing_on 0
+
+# HELP cnp_collector_nodes_used NodesUsed represents the count of distinct nodes accommodating the instances. A value of '-1' suggests that the metric is not available. A value of '1' suggests that all instances are hosted on a single node, implying the absence of High Availability (HA). Ideally this value should match the number of instances in the cluster.
+# TYPE cnp_collector_nodes_used gauge
+cnp_collector_nodes_used 3
+
+# HELP cnp_collector_last_collection_error 1 if the last collection ended with error, 0 otherwise.
+# TYPE cnp_collector_last_collection_error gauge
+cnp_collector_last_collection_error 0
+
+# HELP cnp_collector_manual_switchover_required 1 if a manual switchover is required, 0 otherwise
+# TYPE cnp_collector_manual_switchover_required gauge
+cnp_collector_manual_switchover_required 0
+
+# HELP cnp_collector_pg_wal Total size in bytes of WAL segments in the '/var/lib/postgresql/data/pgdata/pg_wal' directory computed as (wal_segment_size * count)
+# TYPE cnp_collector_pg_wal gauge
+cnp_collector_pg_wal{value="count"} 9
+cnp_collector_pg_wal{value="slots_max"} NaN
+cnp_collector_pg_wal{value="keep"} 32
+cnp_collector_pg_wal{value="max"} 64
+cnp_collector_pg_wal{value="min"} 5
+cnp_collector_pg_wal{value="size"} 1.50994944e+08
+cnp_collector_pg_wal{value="volume_max"} 128
+cnp_collector_pg_wal{value="volume_size"} 2.147483648e+09
+
+# HELP cnp_collector_pg_wal_archive_status Number of WAL segments in the '/var/lib/postgresql/data/pgdata/pg_wal/archive_status' directory (ready, done)
+# TYPE cnp_collector_pg_wal_archive_status gauge
+cnp_collector_pg_wal_archive_status{value="done"} 6
+cnp_collector_pg_wal_archive_status{value="ready"} 0
+
+# HELP cnp_collector_replica_mode 1 if the cluster is in replica mode, 0 otherwise
+# TYPE cnp_collector_replica_mode gauge
+cnp_collector_replica_mode 0
+
+# HELP cnp_collector_sync_replicas Number of requested synchronous replicas (synchronous_standby_names)
+# TYPE cnp_collector_sync_replicas gauge
+cnp_collector_sync_replicas{value="expected"} 0
+cnp_collector_sync_replicas{value="max"} 0
+cnp_collector_sync_replicas{value="min"} 0
+cnp_collector_sync_replicas{value="observed"} 0
+
+# HELP cnp_collector_up 1 if PostgreSQL is up, 0 otherwise.
+# TYPE cnp_collector_up gauge
+cnp_collector_up{cluster="cluster-example"} 1
+
+# HELP cnp_collector_postgres_version Postgres version
+# TYPE cnp_collector_postgres_version gauge
+cnp_collector_postgres_version{cluster="cluster-example",full="18.0"} 18.0
+
+# HELP cnp_collector_last_failed_backup_timestamp The last failed backup as a unix timestamp (Deprecated)
+# TYPE cnp_collector_last_failed_backup_timestamp gauge
+cnp_collector_last_failed_backup_timestamp 0
+
+# HELP cnp_collector_last_available_backup_timestamp The last available backup as a unix timestamp (Deprecated)
+# TYPE cnp_collector_last_available_backup_timestamp gauge
+cnp_collector_last_available_backup_timestamp 1.63238406e+09
+
+# HELP cnp_collector_first_recoverability_point The first point of recoverability for the cluster as a unix timestamp (Deprecated)
+# TYPE cnp_collector_first_recoverability_point gauge
+cnp_collector_first_recoverability_point 1.63238406e+09
+
+# HELP cnp_collector_lo_pages Estimated number of pages in the pg_largeobject table
+# TYPE cnp_collector_lo_pages gauge
+cnp_collector_lo_pages{datname="app"} 0
+cnp_collector_lo_pages{datname="postgres"} 78
+
+# HELP cnp_collector_wal_buffers_full Number of times WAL data was written to disk because WAL buffers became full. Only available on PG 14+
+# TYPE cnp_collector_wal_buffers_full gauge
+cnp_collector_wal_buffers_full{stats_reset="2023-06-19T10:51:27.473259Z"} 6472
+
+# HELP cnp_collector_wal_bytes Total amount of WAL generated in bytes. Only available on PG 14+
+# TYPE cnp_collector_wal_bytes gauge
+cnp_collector_wal_bytes{stats_reset="2023-06-19T10:51:27.473259Z"} 1.0035147e+07
+
+# HELP cnp_collector_wal_fpi Total number of WAL full page images generated. Only available on PG 14+
+# TYPE cnp_collector_wal_fpi gauge
+cnp_collector_wal_fpi{stats_reset="2023-06-19T10:51:27.473259Z"} 1474
+
+# HELP cnp_collector_wal_records Total number of WAL records generated. Only available on PG 14+
+# TYPE cnp_collector_wal_records gauge
+cnp_collector_wal_records{stats_reset="2023-06-19T10:51:27.473259Z"} 26178
+
+# HELP cnp_collector_wal_sync Number of times WAL files were synced to disk via issue_xlog_fsync request (if fsync is on and wal_sync_method is either fdatasync, fsync or fsync_writethrough, otherwise zero). Only available on PG 14+
+# TYPE cnp_collector_wal_sync gauge
+cnp_collector_wal_sync{stats_reset="2023-06-19T10:51:27.473259Z"} 37
+
+# HELP cnp_collector_wal_sync_time Total amount of time spent syncing WAL files to disk via issue_xlog_fsync request, in milliseconds (if track_wal_io_timing is enabled, fsync is on, and wal_sync_method is either fdatasync, fsync or fsync_writethrough, otherwise zero). Only available on PG 14+
+# TYPE cnp_collector_wal_sync_time gauge
+cnp_collector_wal_sync_time{stats_reset="2023-06-19T10:51:27.473259Z"} 0
+
+# HELP cnp_collector_wal_write Number of times WAL buffers were written out to disk via XLogWrite request. Only available on PG 14+
+# TYPE cnp_collector_wal_write gauge
+cnp_collector_wal_write{stats_reset="2023-06-19T10:51:27.473259Z"} 7243
+
+# HELP cnp_collector_wal_write_time Total amount of time spent writing WAL buffers to disk via XLogWrite request, in milliseconds (if track_wal_io_timing is enabled, otherwise zero). This includes the sync time when wal_sync_method is either open_datasync or open_sync. Only available on PG 14+
+# TYPE cnp_collector_wal_write_time gauge
+cnp_collector_wal_write_time{stats_reset="2023-06-19T10:51:27.473259Z"} 0
+
+# HELP cnp_last_error 1 if the last collection ended with error, 0 otherwise.
+# TYPE cnp_last_error gauge
+cnp_last_error 0
+
+# HELP go_gc_duration_seconds A summary of the pause duration of garbage collection cycles.
+# TYPE go_gc_duration_seconds summary
+go_gc_duration_seconds{quantile="0"} 5.01e-05
+go_gc_duration_seconds{quantile="0.25"} 7.27e-05
+go_gc_duration_seconds{quantile="0.5"} 0.0001748
+go_gc_duration_seconds{quantile="0.75"} 0.0002959
+go_gc_duration_seconds{quantile="1"} 0.0012776
+go_gc_duration_seconds_sum 0.0035741
+go_gc_duration_seconds_count 13
+
+# HELP go_goroutines Number of goroutines that currently exist.
+# TYPE go_goroutines gauge
+go_goroutines 25
+
+# HELP go_info Information about the Go environment.
+# TYPE go_info gauge
+go_info{version="go1.20.5"} 1
+
+# HELP go_memstats_alloc_bytes Number of bytes allocated and still in use.
+# TYPE go_memstats_alloc_bytes gauge
+go_memstats_alloc_bytes 4.493744e+06
+
+# HELP go_memstats_alloc_bytes_total Total number of bytes allocated, even if freed.
+# TYPE go_memstats_alloc_bytes_total counter
+go_memstats_alloc_bytes_total 2.1698216e+07
+
+# HELP go_memstats_buck_hash_sys_bytes Number of bytes used by the profiling bucket hash table.
+# TYPE go_memstats_buck_hash_sys_bytes gauge
+go_memstats_buck_hash_sys_bytes 1.456234e+06
+
+# HELP go_memstats_frees_total Total number of frees.
+# TYPE go_memstats_frees_total counter
+go_memstats_frees_total 172118
+
+# HELP go_memstats_gc_cpu_fraction The fraction of this program's available CPU time used by the GC since the program started.
+# TYPE go_memstats_gc_cpu_fraction gauge
+go_memstats_gc_cpu_fraction 1.0749468700447189e-05
+
+# HELP go_memstats_gc_sys_bytes Number of bytes used for garbage collection system metadata.
+# TYPE go_memstats_gc_sys_bytes gauge
+go_memstats_gc_sys_bytes 5.530048e+06
+
+# HELP go_memstats_heap_alloc_bytes Number of heap bytes allocated and still in use.
+# TYPE go_memstats_heap_alloc_bytes gauge
+go_memstats_heap_alloc_bytes 4.493744e+06
+
+# HELP go_memstats_heap_idle_bytes Number of heap bytes waiting to be used.
+# TYPE go_memstats_heap_idle_bytes gauge
+go_memstats_heap_idle_bytes 5.8236928e+07
+
+# HELP go_memstats_heap_inuse_bytes Number of heap bytes that are in use.
+# TYPE go_memstats_heap_inuse_bytes gauge
+go_memstats_heap_inuse_bytes 7.528448e+06
+
+# HELP go_memstats_heap_objects Number of allocated objects.
+# TYPE go_memstats_heap_objects gauge
+go_memstats_heap_objects 26306
+
+# HELP go_memstats_heap_released_bytes Number of heap bytes released to OS.
+# TYPE go_memstats_heap_released_bytes gauge
+go_memstats_heap_released_bytes 5.7401344e+07
+
+# HELP go_memstats_heap_sys_bytes Number of heap bytes obtained from system.
+# TYPE go_memstats_heap_sys_bytes gauge
+go_memstats_heap_sys_bytes 6.5765376e+07
+
+# HELP go_memstats_last_gc_time_seconds Number of seconds since 1970 of last garbage collection.
+# TYPE go_memstats_last_gc_time_seconds gauge
+go_memstats_last_gc_time_seconds 1.6311727586032727e+09
+
+# HELP go_memstats_lookups_total Total number of pointer lookups.
+# TYPE go_memstats_lookups_total counter
+go_memstats_lookups_total 0
+
+# HELP go_memstats_mallocs_total Total number of mallocs.
+# TYPE go_memstats_mallocs_total counter
+go_memstats_mallocs_total 198424
+
+# HELP go_memstats_mcache_inuse_bytes Number of bytes in use by mcache structures.
+# TYPE go_memstats_mcache_inuse_bytes gauge
+go_memstats_mcache_inuse_bytes 14400
+
+# HELP go_memstats_mcache_sys_bytes Number of bytes used for mcache structures obtained from system.
+# TYPE go_memstats_mcache_sys_bytes gauge
+go_memstats_mcache_sys_bytes 16384
+
+# HELP go_memstats_mspan_inuse_bytes Number of bytes in use by mspan structures.
+# TYPE go_memstats_mspan_inuse_bytes gauge
+go_memstats_mspan_inuse_bytes 191896
+
+# HELP go_memstats_mspan_sys_bytes Number of bytes used for mspan structures obtained from system.
+# TYPE go_memstats_mspan_sys_bytes gauge
+go_memstats_mspan_sys_bytes 212992
+
+# HELP go_memstats_next_gc_bytes Number of heap bytes when next garbage collection will take place.
+# TYPE go_memstats_next_gc_bytes gauge
+go_memstats_next_gc_bytes 8.689632e+06
+
+# HELP go_memstats_other_sys_bytes Number of bytes used for other system allocations.
+# TYPE go_memstats_other_sys_bytes gauge
+go_memstats_other_sys_bytes 2.566622e+06
+
+# HELP go_memstats_stack_inuse_bytes Number of bytes in use by the stack allocator.
+# TYPE go_memstats_stack_inuse_bytes gauge
+go_memstats_stack_inuse_bytes 1.343488e+06
+
+# HELP go_memstats_stack_sys_bytes Number of bytes obtained from system for stack allocator.
+# TYPE go_memstats_stack_sys_bytes gauge
+go_memstats_stack_sys_bytes 1.343488e+06
+
+# HELP go_memstats_sys_bytes Number of bytes obtained from system.
+# TYPE go_memstats_sys_bytes gauge
+go_memstats_sys_bytes 7.6891144e+07
+
+# HELP go_threads Number of OS threads created.
+# TYPE go_threads gauge
+go_threads 18
+```
+
+!!! Note
+    `cnp_collector_postgres_version` is a GaugeVec metric containing the
+    `Major.Minor` version of Postgres (either PostgreSQL or EPAS). The full
+    semantic version `Major.Minor.Patch` is available in its label named
+    `full`.
+
+!!! Warning
+ The metrics `cnp_collector_last_failed_backup_timestamp`,
+ `cnp_collector_last_available_backup_timestamp`, and
+ `cnp_collector_first_recoverability_point` have been deprecated starting
+ from version 1.26. These metrics will continue to function with native backup
+ solutions such as in-core Barman Cloud (deprecated) and volume snapshots. Note
+ that for these cases, `cnp_collector_first_recoverability_point` and
+ `cnp_collector_last_available_backup_timestamp` will remain zero until the
+ first backup is completed to the object store. This is separate from WAL
+ archiving.
+
+### User defined metrics
+
+This feature is currently in *beta* state and the format is inspired by the
+[queries.yaml file (release 0.12)](https://github.com/prometheus-community/postgres_exporter/blob/v0.12.1/queries.yaml)
+of the PostgreSQL Prometheus Exporter.
+
+Users can define custom metrics by referencing a `ConfigMap`/`Secret` in the `Cluster` definition
+under the `.spec.monitoring.customQueriesConfigMap` or `customQueriesSecret` section, as in the following example:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+ name: cluster-example
+ namespace: test
+spec:
+ instances: 3
+
+ storage:
+ size: 1Gi
+
+ monitoring:
+ customQueriesConfigMap:
+ - name: example-monitoring
+ key: custom-queries
+```
+
+The `customQueriesConfigMap`/`customQueriesSecret` sections contain a list of
+`ConfigMap`/`Secret` references specifying the key in which the custom queries are defined.
+Note that the referenced resources must be created **in the same namespace as the Cluster** resource.
+
+!!! Note
+    If you want ConfigMaps and Secrets to be **automatically** reloaded by instances, you can
+    add a label with key `k8s.enterprisedb.io/reload` to them; otherwise, you will have to reload
+    the instances using the `kubectl cnp reload` subcommand.
+
+!!! Important
+    When a user defined metric overwrites an already existing metric, the
+    instance manager prints a JSON warning log containing the message
+    `Query with the same name already found. Overwriting the existing one.`
+    and a `queryName` key with the name of the overwritten query.
+
+#### Example of a user defined metric
+
+Here you can see an example of a `ConfigMap` containing a single custom query,
+referenced by the `Cluster` example above:
+
+```yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: example-monitoring
+ namespace: test
+ labels:
+ k8s.enterprisedb.io/reload: ""
+data:
+ custom-queries: |
+ pg_replication:
+ query: "SELECT CASE WHEN NOT pg_is_in_recovery()
+ THEN 0
+ ELSE GREATEST (0,
+ EXTRACT(EPOCH FROM (now() - pg_last_xact_replay_timestamp())))
+ END AS lag,
+ pg_is_in_recovery() AS in_recovery,
+ EXISTS (TABLE pg_stat_wal_receiver) AS is_wal_receiver_up,
+ (SELECT count(*) FROM pg_stat_replication) AS streaming_replicas"
+
+ metrics:
+ - lag:
+ usage: "GAUGE"
+ description: "Replication lag behind primary in seconds"
+ - in_recovery:
+ usage: "GAUGE"
+ description: "Whether the instance is in recovery"
+ - is_wal_receiver_up:
+ usage: "GAUGE"
+ description: "Whether the instance wal_receiver is up"
+ - streaming_replicas:
+ usage: "GAUGE"
+ description: "Number of streaming replicas connected to the instance"
+```
+
+A list of basic monitoring queries can be found in the
+[`default-monitoring.yaml` file](../default-monitoring.yaml)
+that is already installed in your {{name.ln}} deployment (see ["Default set of metrics"](#default-set-of-metrics)).
+
+#### Example of a user defined metric with predicate query
+
+The `predicate_query` option allows the user to execute the `query` to collect the metrics only under the specified conditions.
+To do so, the user needs to provide a predicate query that returns at most one row with a single `boolean` column.
+
+The predicate query is executed in the same transaction as the main query and against the same databases.
+
+```yaml
+some_query: |
+ predicate_query: |
+ SELECT
+ some_bool as predicate
+ FROM some_table
+ query: |
+ SELECT
+ count(*) as rows
+ FROM some_table
+ metrics:
+ - rows:
+ usage: "GAUGE"
+ description: "number of rows"
+```
+
+#### Example of a user defined metric running on multiple databases
+
+If the `target_databases` option lists more than one database,
+the metric is collected from each of them.
+
+Database auto-discovery can be enabled for a specific query by specifying a
+*shell-like pattern* (i.e., containing `*`, `?` or `[]`) in the list of
+`target_databases`. If provided, the operator will expand the list of target
+databases by adding all the databases returned by the execution of `SELECT
+datname FROM pg_database WHERE datallowconn AND NOT datistemplate` and matching
+the pattern according to [path.Match()](https://pkg.go.dev/path#Match) rules.
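+
+Since the expansion follows Go's `path.Match` semantics (linked above), the
+following sketch shows how a pattern from `target_databases` is applied to the
+discovered database names; the names and patterns here are purely illustrative:
+
+```go
+package main
+
+import (
+	"fmt"
+	"path"
+)
+
+func main() {
+	// Hypothetical result of:
+	//   SELECT datname FROM pg_database WHERE datallowconn AND NOT datistemplate
+	databases := []string{"albert", "bb", "freddie", "postgres"}
+
+	// Expand shell-like patterns as they could appear in `target_databases`
+	for _, pattern := range []string{"*", "a*"} {
+		var matched []string
+		for _, db := range databases {
+			// path.Match reports whether the name matches the pattern
+			if ok, _ := path.Match(pattern, db); ok {
+				matched = append(matched, db)
+			}
+		}
+		fmt.Println(pattern, "->", matched)
+	}
+}
+```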
+
+!!! Note
+ The `*` character has a [special meaning](https://yaml.org/spec/1.2/spec.html#id2786448) in yaml,
+ so you need to quote (`"*"`) the `target_databases` value when it includes such a pattern.
+
+It is recommended that you always include the name of the database
+in the returned labels, for example using the `current_database()` function
+as in the following example:
+
+```yaml
+some_query: |
+ query: |
+ SELECT
+ current_database() as datname,
+ count(*) as rows
+ FROM some_table
+ metrics:
+ - datname:
+ usage: "LABEL"
+ description: "Name of current database"
+ - rows:
+ usage: "GAUGE"
+ description: "number of rows"
+ target_databases:
+ - albert
+ - bb
+ - freddie
+```
+
+This will result in the following metrics being exposed:
+
+```text
+cnp_some_query_rows{datname="albert"} 2
+cnp_some_query_rows{datname="bb"} 5
+cnp_some_query_rows{datname="freddie"} 10
+```
+
+Here is an example of a query with auto-discovery enabled which also
+runs on the `template1` database (otherwise not returned by the
+aforementioned query):
+
+```yaml
+some_query: |
+ query: |
+ SELECT
+ current_database() as datname,
+ count(*) as rows
+ FROM some_table
+ metrics:
+ - datname:
+ usage: "LABEL"
+ description: "Name of current database"
+ - rows:
+ usage: "GAUGE"
+ description: "number of rows"
+ target_databases:
+ - "*"
+ - "template1"
+```
+
+The above example will produce the following metrics (provided the databases exist):
+
+```text
+cnp_some_query_rows{datname="albert"} 2
+cnp_some_query_rows{datname="bb"} 5
+cnp_some_query_rows{datname="freddie"} 10
+cnp_some_query_rows{datname="template1"} 7
+cnp_some_query_rows{datname="postgres"} 42
+```
+
+### Structure of a user defined metric
+
+Every custom query has the following basic structure:
+
+```yaml
+<MetricName>:
+  query: "<SQLQuery>"
+  metrics:
+    - <ColumnName>:
+        usage: "<MetricType>"
+        description: "<MetricDescription>"
+```
+
+Here is a short description of all the available fields:
+
+- `<MetricName>`: the name of the Prometheus metric
+  - `name`: override `<MetricName>`, if defined
+  - `query`: the SQL query to run on the target database to generate the metrics
+  - `primary`: whether to run the query only on the primary instance
+  - `master`: same as `primary` (for compatibility with the Prometheus PostgreSQL exporter's syntax - deprecated)
+  - `runonserver`: a semantic version range to limit the versions of PostgreSQL the query should run on
+    (e.g. `">=11.0.0"` or `">=12.0.0 <=15.0.0"`)
+  - `target_databases`: a list of databases to run the `query` against,
+    or a [shell-like pattern](#example-of-a-user-defined-metric-running-on-multiple-databases)
+    to enable auto-discovery. Overwrites the default database if provided.
+  - `predicate_query`: a SQL query that returns at most one row and one `boolean` column to run on the target database.
+    The system evaluates the predicate and, if `true`, executes the `query`.
+  - `metrics`: section containing a list of all exported columns, defined as follows:
+    - `<ColumnName>`: the name of the column returned by the query
+      - `name`: override `<ColumnName>` in the metric, if defined
+      - `usage`: one of the values described below
+      - `description`: the metric's description
+      - `metrics_mapping`: the optional column mapping when `usage` is set to `MAPPEDMETRIC`
+
+The possible values for `usage` are:
+
+| Column Usage Label | Description |
+| :----------------- | :------------------------------------------------------- |
+| `DISCARD` | this column should be ignored |
+| `LABEL` | use this column as a label |
+| `COUNTER` | use this column as a counter |
+| `GAUGE` | use this column as a gauge |
+| `MAPPEDMETRIC` | use this column with the supplied mapping of text values |
+| `DURATION` | use this column as a text duration (in milliseconds) |
+| `HISTOGRAM` | use this column as a histogram |
+
+Please visit the ["Metric Types" page](https://prometheus.io/docs/concepts/metric_types/)
+from the Prometheus documentation for more information.
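+
+As a sketch of how the optional fields above fit together (the query name,
+SQL, and version range below are hypothetical), a single custom query could be
+defined as:
+
+```yaml
+backend_sessions:
+  query: "SELECT count(*) AS sessions FROM pg_stat_activity"
+  primary: true
+  runonserver: ">=13.0.0"
+  metrics:
+    - sessions:
+        usage: "GAUGE"
+        description: "Number of backend sessions"
+```
+
+Here the query runs only on the primary, and only on PostgreSQL 13 or later.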
+
+### Output of a user defined metric
+
+Custom defined metrics are returned by the Prometheus exporter endpoint (`:9187/metrics`)
+with the following format:
+
+```text
+cnp_<MetricName>_<ColumnName>{<LabelColumnName>=<LabelValue> ... } <ColumnValue>
+```
+
+!!! Note
+    `LabelColumnName`s are the columns whose `usage` is set to `LABEL`; their
+    values are exported as the corresponding `LabelValue`s.
+
+Considering the `pg_replication` example above, the exporter's endpoint would
+return the following output when invoked:
+
+```text
+# HELP cnp_pg_replication_in_recovery Whether the instance is in recovery
+# TYPE cnp_pg_replication_in_recovery gauge
+cnp_pg_replication_in_recovery 0
+# HELP cnp_pg_replication_lag Replication lag behind primary in seconds
+# TYPE cnp_pg_replication_lag gauge
+cnp_pg_replication_lag 0
+# HELP cnp_pg_replication_streaming_replicas Number of streaming replicas connected to the instance
+# TYPE cnp_pg_replication_streaming_replicas gauge
+cnp_pg_replication_streaming_replicas 2
+# HELP cnp_pg_replication_is_wal_receiver_up Whether the instance wal_receiver is up
+# TYPE cnp_pg_replication_is_wal_receiver_up gauge
+cnp_pg_replication_is_wal_receiver_up 0
+```
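+
+Metrics exposed this way can feed Prometheus alerting rules. As an
+illustrative sketch (the rule name, threshold, and duration are hypothetical),
+a `PrometheusRule` fragment built on the `cnp_pg_replication_lag` gauge above
+might look like:
+
+```yaml
+groups:
+  - name: cnp-replication
+    rules:
+      - alert: HighReplicationLag
+        expr: cnp_pg_replication_lag > 30
+        for: 5m
+        labels:
+          severity: warning
+        annotations:
+          summary: "Streaming replica is lagging more than 30 seconds"
+```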
+
+### Default set of metrics
+
+The operator can be configured to automatically inject into a Cluster a set of
+monitoring queries defined in a ConfigMap or a Secret, inside the operator's namespace.
+You have to set the `MONITORING_QUERIES_CONFIGMAP` or
+`MONITORING_QUERIES_SECRET` key in the ["operator configuration"](operator_conf.md)
+to the name of the ConfigMap or the Secret, respectively;
+the operator will then use the content of the `queries` key.
+
+Any change to the `queries` content will be immediately reflected on all the
+deployed Clusters using it.
+
+The operator installation manifests come with a predefined ConfigMap,
+called `postgresql-operator-default-monitoring`, to be used by all Clusters.
+`MONITORING_QUERIES_CONFIGMAP` is by default set to `postgresql-operator-default-monitoring` in the operator configuration.
+
+If you want to disable the default set of metrics, you can:
+
+- disable it at operator level: set the `MONITORING_QUERIES_CONFIGMAP`/`MONITORING_QUERIES_SECRET` key to `""`
+ (empty string), in the operator ConfigMap. Changes to operator ConfigMap require an operator restart.
+- disable it for a specific Cluster: set `.spec.monitoring.disableDefaultQueries` to `true` in the Cluster.
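+
+For instance, a minimal sketch of a `Cluster` opting out of the default
+metrics via the second option:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+  name: cluster-example
+spec:
+  instances: 3
+  storage:
+    size: 1Gi
+  monitoring:
+    disableDefaultQueries: true
+```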
+
+!!! Important
+    The ConfigMap or Secret specified via `MONITORING_QUERIES_CONFIGMAP`/`MONITORING_QUERIES_SECRET`
+    will always be copied to the Cluster's namespace with a fixed name: `postgresql-operator-default-monitoring`.
+    Therefore, if you intend to use the default metrics, you should not create a ConfigMap
+    with this name in the cluster's namespace.
+
+### Differences with the Prometheus Postgres exporter
+
+{{name.ln}} is inspired by the PostgreSQL Prometheus Exporter, but
+presents some differences. In particular, the `cache_seconds` field is not implemented
+in the {{name.ln}} exporter.
+
+## Monitoring the {{name.ln}} operator
+
+The operator internally exposes [Prometheus](https://prometheus.io/) metrics
+via HTTP on port 8080, named `metrics`.
+
+!!! Info
+ You can inspect the exported metrics by following the instructions in
+ the ["How to inspect the exported metrics"](#how-to-inspect-the-exported-metrics)
+ section below.
+
+Currently, the operator exposes default `kubebuilder` metrics. See
+[kubebuilder documentation](https://book.kubebuilder.io/reference/metrics.html)
+for more details.
+
+### Monitoring the operator with Prometheus
+
+The operator can be monitored using the
+[Prometheus Operator](https://github.com/prometheus-operator/prometheus-operator) by defining a
+[PodMonitor](https://github.com/prometheus-operator/prometheus-operator/blob/v0.47.1/Documentation/api.md#podmonitor)
+pointing to the operator pod(s), as follows (note it's applied in the same
+namespace as the operator):
+
+```yaml
+kubectl -n postgresql-operator-system apply -f - <<EOF
+apiVersion: monitoring.coreos.com/v1
+kind: PodMonitor
+metadata:
+  name: postgresql-operator-controller-manager
+spec:
+  selector:
+    matchLabels:
+      app.kubernetes.io/name: cloud-native-postgresql
+  podMetricsEndpoints:
+    - port: metrics
+EOF
+```
+
+## How to inspect the exported metrics
+
+This section provides some basic instructions on how to inspect the metrics
+exported by a specific PostgreSQL instance manager (primary or replica) or by
+the operator.
+
+### Using port forwarding
+
+A quick way to inspect the operator metrics is to forward TCP port 8080 of the
+operator deployment to your local machine:
+
+```shell
+kubectl port-forward -n postgresql-operator-system deployment/postgresql-operator-controller-manager 8080:8080
+```
+
+With port forwarding active, the metrics are easily viewable on a browser at
+[`localhost:8080/metrics`](http://localhost:8080/metrics).
+
+### Using curl
+
+Create the `curl` pod with the following command:
+
+```yaml
+kubectl apply -f - <<EOF
+apiVersion: v1
+kind: Pod
+metadata:
+  name: curl
+spec:
+  containers:
+  - name: curl
+    image: curlimages/curl:8.2.1
+    command: ['sleep', '3600']
+EOF
+```
+
+In order to inspect the metrics exported by an instance, connect to port 9187
+of the target pod. This is the generic command to be run (make sure you use
+the correct IP for the pod):
+
+```shell
+kubectl exec -ti curl -- curl -s <pod_ip>:9187/metrics
+```
+
+For example, if your PostgreSQL cluster is called `cluster-example` and
+you want to retrieve the exported metrics of the first pod in the cluster,
+you can run the following command to programmatically get the IP of
+that pod:
+
+```shell
+POD_IP=$(kubectl get pod cluster-example-1 --template '{{.status.podIP}}')
+```
+
+And then run:
+
+```shell
+kubectl exec -ti curl -- curl -s ${POD_IP}:9187/metrics
+```
+
+If you enabled TLS metrics, run instead:
+
+```shell
+kubectl exec -ti curl -- curl -sk https://${POD_IP}:9187/metrics
+```
+
+To access the metrics of the operator, you need to point
+to the pod where the operator is running, and use TCP port 8080 as target.
+
+When you're done inspecting metrics, please remember to delete the `curl` pod:
+
+```shell
+kubectl delete pod curl
+```
+
+## Auxiliary resources
+
+!!! Important
+    These resources are provided for illustration and experimentation, and do
+    not represent any kind of recommendation for your production system.
+
+In the [`doc/src/samples/monitoring/`](https://github.com/EnterpriseDB/docs/tree/main/product_docs/docs/postgres_for_kubernetes/1/samples/monitoring)
+directory you will find a series of sample files for observability.
+Please refer to [Part 4 of the quickstart](quickstart.md#part-4-monitor-clusters-with-prometheus-and-grafana)
+section for context:
+
+- `kube-stack-config.yaml`: a configuration file for the kube-stack helm chart
+ installation. It ensures that Prometheus listens for all PodMonitor resources.
+- `prometheusrule.yaml`: a `PrometheusRule` with alerts for {{name.ln}}.
+ NOTE: this does not include inter-operation with notification services. Please refer
+ to the [Prometheus documentation](https://prometheus.io/docs/alerting/latest/alertmanager/).
+- `podmonitor.yaml`: a `PodMonitor` for the {{name.ln}} Operator deployment.
+
+In addition, we provide the "raw" sources for the Prometheus alert rules in the
+`alerts.yaml` file.
+
+A Grafana dashboard for {{name.ln}} clusters and the operator is kept in the
+dedicated repository [`cloudnative-pg/grafana-dashboards`](https://github.com/cloudnative-pg/grafana-dashboards/tree/main)
+as a dashboard JSON configuration:
+[`grafana-dashboard.json`](https://github.com/cloudnative-pg/grafana-dashboards/blob/main/charts/cluster/grafana-dashboard.json).
+The file can be downloaded, and imported into Grafana
+(menus: Dashboard > New > Import).
+
+For a general reference on the settings available on `kube-prometheus-stack`,
+you can execute `helm show values prometheus-community/kube-prometheus-stack`.
+Please refer to the
+[kube-prometheus-stack](https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack)
+page for more detail.
+
+## Monitoring on OpenShift
+
+Starting with OpenShift 4.6, there is a complete monitoring stack called
+["Monitoring for user-defined projects"](https://docs.openshift.com/container-platform/4.6/monitoring/enabling-monitoring-for-user-defined-projects.html)
+which can be enabled by cluster administrators. {{name.ln}} will
+automatically create a `PodMonitor` object if the option
+`spec.monitoring.enablePodMonitor` of the `Cluster` definition is set to
+`true`.
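+
+A minimal sketch of a `Cluster` enabling this option (cluster name and sizing
+are illustrative):
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+  name: cluster-sample
+spec:
+  instances: 3
+  storage:
+    size: 1Gi
+  monitoring:
+    enablePodMonitor: true
+```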
+
+To enable cluster wide `user-defined` monitoring you must first create a
+`ConfigMap` with the name `cluster-monitoring-config` in the
+`openshift-monitoring` namespace/project with the following content:
+
+```yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: cluster-monitoring-config
+ namespace: openshift-monitoring
+data:
+ config.yaml: |
+ enableUserWorkload: true
+```
+
+If the `ConfigMap` already exists, just add the variable `enableUserWorkload: true`.
+
+!!! Important
+    This will enable monitoring for the whole cluster. If it is needed only
+    for one namespace/project, please refer to the official Red Hat documentation
+    or talk to your cluster administrator.
+
+After that, just create the proper PodMonitor in the namespace/project with
+something similar to this:
+
+```yaml
+apiVersion: monitoring.coreos.com/v1
+kind: PodMonitor
+metadata:
+ name: cluster-sample
+spec:
+ selector:
+ matchLabels:
+ postgresql: cluster-sample
+ podMetricsEndpoints:
+ - port: metrics
+```
+
+!!! Note
+    We currently don’t use `ServiceMonitor` because our service doesn’t define
+    a port pointing to the metrics. If we added a metrics port, this could
+    expose sensitive data.
diff --git a/product_docs/docs/postgres_for_kubernetes/1/networking.mdx b/product_docs/docs/postgres_for_kubernetes/1/networking.mdx
new file mode 100644
index 0000000000..114a1eb799
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/networking.mdx
@@ -0,0 +1,54 @@
+---
+title: 'Networking'
+originalFilePath: 'src/networking.md'
+---
+
+
+
+{{name.ln}} assumes the underlying Kubernetes cluster has the required
+connectivity already set up.
+Networking on Kubernetes is an important and extended topic; please refer to
+the [Kubernetes documentation](https://kubernetes.io/docs/concepts/services-networking/) for further information.
+
+If you're following the quickstart guide to install {{name.ln}} on a local KinD or K3d cluster, you should not encounter any networking issues as neither
+platform will add any networking restrictions by default.
+
+However, when deploying {{name.ln}} on existing infrastructure, networking
+restrictions might be in place that could impair the communication of the
+operator with PostgreSQL clusters.
+Specifically, existing [Network Policies](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
+might restrict certain types of traffic.
+
+Or, you might be interested in adding network policies in your environment for
+increased security.
+As mentioned in the [security document](security.md), please ensure the operator can reach every cluster pod on ports 8000 and 5432, and that pods can connect to each other.
+
+## Cross-namespace network policy for the operator
+
+Following the quickstart guide, or using the Helm chart for deployment, will install the operator in
+a dedicated namespace (`postgresql-operator-system` by default).
+We recommend that you create clusters in a different namespace.
+
+The operator *must* be able to connect to cluster pods.
+This might be precluded if there is a `NetworkPolicy` restricting
+cross-namespace traffic.
+
+For example, the
+[Kubernetes guide on network policies](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
+contains an example policy denying all ingress traffic by default.
+
+If your local Kubernetes setup has this kind of restrictive network policy, you
+will need to create a `NetworkPolicy` to explicitly allow connection from the
+operator namespace and pod to the cluster namespace and pods. You can find an example in the
+[`networkpolicy-example.yaml`](../samples/networkpolicy-example.yaml) file in this repository.
+Please note, you'll need to adjust the cluster name and cluster namespace to
+match your specific setup, and also the operator namespace if it is not
+the default namespace.
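+
+As a rough sketch of such a policy (the namespace names and pod labels are
+hypothetical and must be adapted to your setup), allowing ingress from the
+operator namespace to the cluster pods on the ports mentioned in the security
+document could look like:
+
+```yaml
+apiVersion: networking.k8s.io/v1
+kind: NetworkPolicy
+metadata:
+  name: allow-operator-to-cluster
+  namespace: cluster-namespace
+spec:
+  podSelector:
+    matchLabels:
+      k8s.enterprisedb.io/cluster: cluster-example
+  ingress:
+    - from:
+        - namespaceSelector:
+            matchLabels:
+              kubernetes.io/metadata.name: postgresql-operator-system
+      ports:
+        - port: 8000
+        - port: 5432
+```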
+
+## Cross-cluster networking
+
+While [bootstrapping](bootstrap.md) from another cluster or when using the `externalClusters` section,
+ensure connectivity among all clusters, object stores, and namespaces involved.
+
+Again, we refer you to the [Kubernetes documentation](https://kubernetes.io/docs/concepts/services-networking/)
+for setup information.
diff --git a/product_docs/docs/postgres_for_kubernetes/1/object_stores.mdx b/product_docs/docs/postgres_for_kubernetes/1/object_stores.mdx
new file mode 100644
index 0000000000..70584f76c2
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/object_stores.mdx
@@ -0,0 +1,356 @@
+---
+title: 'Appendix C - Common object stores for backups'
+originalFilePath: 'src/appendixes/object_stores.md'
+---
+
+
+
+!!! Warning
+ As of {{name.ln}} 1.26, **native Barman Cloud support is deprecated** in
+ favor of the **Barman Cloud Plugin**. While the native integration remains
+ functional for now, we strongly recommend beginning a gradual migration to
+ the plugin-based interface after appropriate testing. The Barman Cloud
+ Plugin documentation describes
+ [how to use common object stores](https://cloudnative-pg.io/plugin-barman-cloud/docs/object_stores/).
+
+You can store the [backup](backup.md) files in any service that is supported
+by the Barman Cloud infrastructure. That is:
+
+- [Amazon S3](#aws-s3)
+- [Microsoft Azure Blob Storage](#azure-blob-storage)
+- [Google Cloud Storage](#google-cloud-storage)
+
+You can also use any compatible implementation of the supported services.
+
+The required setup depends on the chosen storage provider and is
+discussed in the following sections.
+
+## AWS S3
+
+[AWS Simple Storage Service (S3)](https://aws.amazon.com/s3/) is
+a very popular object storage service offered by Amazon.
+
+As far as {{name.ln}} backup is concerned, you can define the permissions to
+store backups in S3 buckets in two ways:
+
+- If {{name.ln}} is running in EKS, you may want to use the
+  [IRSA authentication method](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html)
+- Alternatively, you can use the `ACCESS_KEY_ID` and `ACCESS_SECRET_KEY` credentials
+
+### AWS Access key
+
+You will need the following information about your environment:
+
+- `ACCESS_KEY_ID`: the ID of the access key that will be used
+ to upload files into S3
+
+- `ACCESS_SECRET_KEY`: the secret part of the access key mentioned above
+
+- `ACCESS_SESSION_TOKEN`: the optional session token, in case it is required
+
+The access key used must have permission to upload files into
+the bucket. You must then create a Kubernetes secret with the
+credentials, which you can do with the following command:
+
+```sh
+kubectl create secret generic aws-creds \
+  --from-literal=ACCESS_KEY_ID=<access key here> \
+  --from-literal=ACCESS_SECRET_KEY=<secret key here>
+# --from-literal=ACCESS_SESSION_TOKEN=<session token here> # if required
+```
+
+The credentials will be stored inside Kubernetes and will be encrypted
+if encryption at rest is configured in your installation.
+
+Once that secret has been created, you can configure your cluster like in
+the following example:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+[...]
+spec:
+ backup:
+ barmanObjectStore:
+      destinationPath: "<destination path here>"
+ s3Credentials:
+ accessKeyId:
+ name: aws-creds
+ key: ACCESS_KEY_ID
+ secretAccessKey:
+ name: aws-creds
+ key: ACCESS_SECRET_KEY
+```
+
+The destination path can be any URL pointing to a folder where
+the instance can upload the WAL files, e.g.
+`s3://BUCKET_NAME/path/to/folder`.
+
+### IAM Role for Service Account (IRSA)
+
+In order to use IRSA, you need to set an `annotation` in the `ServiceAccount` of
+the Postgres cluster.
+
+You can configure {{name.ln}} to inject it using the `serviceAccountTemplate`
+stanza:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+[...]
+spec:
+ serviceAccountTemplate:
+ metadata:
+ annotations:
+ eks.amazonaws.com/role-arn: arn:[...]
+ [...]
+```
+
+### S3 lifecycle policy
+
+Barman Cloud writes objects to S3, then does not update them until they are
+deleted by the Barman Cloud retention policy. A recommended approach for an S3
+lifecycle policy is to expire the current version of objects a few days longer
+than the Barman retention policy, enable object versioning, and expire
+non-current versions after a number of days. Such a policy protects against
+accidental deletion, and also allows for restricting permissions to the
+{{name.ln}} workload so that it may delete objects from S3 without granting
+permissions to permanently delete objects.
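+
+As a sketch of such a lifecycle configuration (the rule ID and retention
+periods are illustrative, assuming a 30-day Barman retention policy on a
+versioned bucket):
+
+```json
+{
+  "Rules": [
+    {
+      "ID": "protect-barman-objects",
+      "Status": "Enabled",
+      "Filter": {},
+      "Expiration": { "Days": 35 },
+      "NoncurrentVersionExpiration": { "NoncurrentDays": 7 }
+    }
+  ]
+}
+```
+
+With this sketch, current objects expire a few days after Barman would have
+deleted them anyway, and accidentally deleted objects remain recoverable as
+non-current versions for a week.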
+
+### Other S3-compatible Object Storages providers
+
+In case you're using S3-compatible object storage, like **MinIO** or
+**Linode Object Storage**, you can specify an endpoint instead of using the
+default S3 one.
+
+The following example uses a bucket hosted by **Linode** in the region
+`us-east1`:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+[...]
+spec:
+ backup:
+ barmanObjectStore:
+ destinationPath: "s3://bucket/"
+ endpointURL: "https://us-east1.linodeobjects.com"
+ s3Credentials:
+ [...]
+```
+
+In case you're using **Digital Ocean Spaces**, you will have to use the path-style syntax.
+The following example uses a bucket in **Digital Ocean Spaces** in the region `SFO3`:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+[...]
+spec:
+ backup:
+ barmanObjectStore:
+ destinationPath: "s3://[your-bucket-name]/[your-backup-folder]/"
+ endpointURL: "https://sfo3.digitaloceanspaces.com"
+ s3Credentials:
+ [...]
+```
+
+### Using Object Storage with a private CA
+
+Suppose you configure an Object Storage provider which uses a certificate
+signed with a private CA, for example when using OpenShift or MinIO via HTTPS. In that case,
+you need to set the option `endpointCA` inside `barmanObjectStore` referring
+to a secret containing the CA bundle, so that Barman can verify the certificate
+correctly.
+You can find instructions on creating a secret using your cert files in the
+[certificates](certificates.md#example) document.
+Once you have created the secret, you can populate the `endpointCA` as in the
+following example:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+[...]
+spec:
+ [...]
+ backup:
+ barmanObjectStore:
+      endpointURL: <endpoint URL here>
+ endpointCA:
+ name: my-ca-secret
+ key: ca.crt
+```
+
+!!! Note
+ If you want ConfigMaps and Secrets to be **automatically** reloaded by instances, you can
+ add a label with key `k8s.enterprisedb.io/reload` to the Secrets/ConfigMaps. Otherwise, you will have to reload
+ the instances using the `kubectl cnp reload` subcommand.
+
+## Azure Blob Storage
+
+[Azure Blob Storage](https://azure.microsoft.com/en-us/services/storage/blobs/) is the
+object storage service provided by Microsoft.
+
+In order to access your storage account for backup and recovery of
+{{name.ln}} managed databases, you will need one of the following
+combinations of credentials:
+
+- [Connection String](https://docs.microsoft.com/en-us/azure/storage/common/storage-configure-connection-string#configure-a-connection-string-for-an-azure-storage-account)
+- Storage account name and [Storage account access key](https://docs.microsoft.com/en-us/azure/storage/common/storage-account-keys-manage)
+- Storage account name and [Storage account SAS Token](https://docs.microsoft.com/en-us/azure/storage/blobs/sas-service-create)
+- Storage account name and [Azure AD Workload Identity](https://azure.github.io/azure-workload-identity/docs/introduction.html)
+ properly configured.
+
+Using **Azure AD Workload Identity**, you can avoid saving the credentials into a Kubernetes Secret,
+and have a Cluster configuration adding the `inheritFromAzureAD` as follows:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+[...]
+spec:
+ backup:
+ barmanObjectStore:
+      destinationPath: "<destination path here>"
+ azureCredentials:
+ inheritFromAzureAD: true
+```
+
+On the other hand, when using either the **Storage account access key** or the **Storage account SAS Token**,
+the credentials need to be stored inside a Kubernetes Secret, adding data entries only when
+needed. The following command performs that:
+
+```sh
+kubectl create secret generic azure-creds \
+  --from-literal=AZURE_STORAGE_ACCOUNT=<storage account name> \
+  --from-literal=AZURE_STORAGE_KEY=<storage account key> \
+  --from-literal=AZURE_STORAGE_SAS_TOKEN=<SAS token> \
+  --from-literal=AZURE_STORAGE_CONNECTION_STRING=<connection string>
+```
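
+
+Only the entries for the authentication method you actually choose are needed.
+As a sketch, a hypothetical Secret holding just the connection string could be
+created as follows (the value shown is a placeholder):
+
+```sh
+kubectl create secret generic azure-creds \
+  --from-literal=AZURE_STORAGE_CONNECTION_STRING="<connection string>"
+```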
+
+The credentials will be encrypted at rest, provided this feature is enabled
+in the Kubernetes cluster in use.
+
+Given the previous secret, the provided credentials can be injected inside the cluster
+configuration:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+[...]
+spec:
+ backup:
+ barmanObjectStore:
+ destinationPath: ""
+ azureCredentials:
+ connectionString:
+ name: azure-creds
+      key: AZURE_STORAGE_CONNECTION_STRING
+ storageAccount:
+ name: azure-creds
+ key: AZURE_STORAGE_ACCOUNT
+ storageKey:
+ name: azure-creds
+ key: AZURE_STORAGE_KEY
+ storageSasToken:
+ name: azure-creds
+ key: AZURE_STORAGE_SAS_TOKEN
+```
+
+When using Azure Blob Storage, the `destinationPath` has the following
+structure:
+
+```sh
+<http|https>://<account-name>.<service-name>.core.windows.net/<resource-path>
+```
+
+where `<resource-path>` is `<container>/<blob>`. The **account name**,
+also called **storage account name**, is part of the host name.
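
+
+For example, with a hypothetical storage account named `mystorageaccount` and
+a container named `backups`, the `barmanObjectStore` section of the Cluster
+spec could look like:
+
+```yaml
+  backup:
+    barmanObjectStore:
+      destinationPath: "https://mystorageaccount.blob.core.windows.net/backups/my-cluster"
+```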
+
+### Other Azure Blob Storage compatible providers
+
+If you are using a different implementation of the Azure Blob Storage APIs,
+the `destinationPath` will have the following structure:
+
+```sh
+<http|https>://<local-machine-address>:<port>/<account-name>/<resource-path>
+```
+
+In that case, `<account-name>` is the first component of the path.
+
+This is required if you are testing the Azure support via the Azure Storage
+Emulator or [Azurite](https://github.com/Azure/Azurite).
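
+
+For example, when testing against Azurite's well-known development storage
+account `devstoreaccount1`, the destination path could look like the following
+sketch (the host name and container are assumptions that depend on how you
+deploy Azurite):
+
+```yaml
+  backup:
+    barmanObjectStore:
+      destinationPath: "http://azurite:10000/devstoreaccount1/backups"
+```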
+
+## Google Cloud Storage
+
+Currently, the {{name.ln}} operator supports two authentication methods for
+[Google Cloud Storage](https://cloud.google.com/storage/):
+
+- the first one assumes that the pod is running inside a Google Kubernetes Engine cluster
+- the second one leverages the environment variable `GOOGLE_APPLICATION_CREDENTIALS`
+
+### Running inside Google Kubernetes Engine
+
+When running inside Google Kubernetes Engine you can configure your backups to
+simply rely on [Workload Identity](https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity),
+without having to set any credentials. In particular, you need to:
+
+- set `.spec.backup.barmanObjectStore.googleCredentials.gkeEnvironment` to `true`
+- set the `iam.gke.io/gcp-service-account` annotation in the `serviceAccountTemplate` stanza
+
+Please use the following example as a reference:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+[...]
+spec:
+ [...]
+ backup:
+ barmanObjectStore:
+      destinationPath: "gs://<destination path here>"
+ googleCredentials:
+ gkeEnvironment: true
+
+ serviceAccountTemplate:
+ metadata:
+ annotations:
+ iam.gke.io/gcp-service-account: [...].iam.gserviceaccount.com
+ [...]
+```
+
+### Using authentication
+
+Following the [instructions from Google](https://cloud.google.com/docs/authentication/getting-started),
+you will get a JSON file containing all the information required to authenticate.
+
+The content of the JSON file must be provided using a `Secret` that can be created
+with the following command:
+
+```shell
+kubectl create secret generic backup-creds --from-file=gcsCredentials=gcs_credentials_file.json
+```
+
+This will create the `Secret` with the name `backup-creds`, to be used in the YAML manifest like this:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+[...]
+spec:
+ backup:
+ barmanObjectStore:
+      destinationPath: "gs://<destination path here>"
+ googleCredentials:
+ applicationCredentials:
+ name: backup-creds
+ key: gcsCredentials
+```
+
+Now the operator will use the credentials to authenticate against Google Cloud Storage.
+
+!!! Important
+    This authentication method creates a JSON file inside the container with all
+    the information needed to access your Google Cloud Storage bucket. As a
+    consequence, anyone who gains access to the pod will also have write
+    permissions to the bucket.
diff --git a/product_docs/docs/postgres_for_kubernetes/1/openshift.mdx b/product_docs/docs/postgres_for_kubernetes/1/openshift.mdx
new file mode 100644
index 0000000000..b97fe36e27
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/openshift.mdx
@@ -0,0 +1,1046 @@
+---
+title: 'Red Hat OpenShift'
+originalFilePath: 'src/openshift.md'
+---
+
+
+{{name.ln}} is certified to run on
+[Red Hat OpenShift Container Platform (OCP) version 4.x](https://www.openshift.com/products/container-platform)
+and is available directly from the
+[Red Hat Catalog](https://catalog.redhat.com/software/operators/detail/5fb41c88abd2a6f7dbe1b37b).
+
+The goal of this section is to help you decide the best installation method for
+{{name.ln}} based on your organization's security and access
+control policies.
+
+The first and critical step is to design the [architecture](#architecture) of
+your PostgreSQL clusters in your OpenShift environment.
+
+Once the architecture is clear, you can proceed with the installation. {{name.ln}} can be installed and managed via:
+
+- [OpenShift web console](#installation-via-web-console)
+- [OpenShift command-line interface (CLI)](#installation-via-the-oc-cli) called `oc`, for full control
+
+{{name.ln}} supports all available install modes defined by
+OpenShift:
+
+- cluster-wide, in all namespaces
+- local, in a single namespace
+- local, watching multiple namespaces (only available using `oc`)
+
+!!! Note
+ A project is a Kubernetes namespace with additional annotations, and is the
+ central vehicle by which access to resources for regular users is managed.
+
+In most cases, the default cluster-wide installation of {{name.ln}}
+is the recommended one, with either central management of PostgreSQL clusters
+or delegated management (limited to specific users/projects according to RBAC
+definitions - see ["Important OpenShift concepts"](#important-openshift-concepts)
+and ["Users and Permissions"](#users-and-permissions) below).
+
+!!! Important
+ Both the installation and upgrade processes require access to an OpenShift
+ Container Platform cluster using an account with `cluster-admin` permissions.
+ From ["Default cluster roles"](https://docs.openshift.com/container-platform/4.16/authentication/using-rbac.html#default-roles_using-rbac),
+ a `cluster-admin` is *"a super-user that can perform any action in any
+ project. When bound to a user with a local binding, they have full control over
+ quota and every action on every resource in the project"*.
+
+## Architecture
+
+The same concepts that have been included in the generic
+[Kubernetes/PostgreSQL architecture page](architecture.md)
+apply for OpenShift as well.
+
+Here as well, the critical factor is the number of availability
+zones or data centers for your OpenShift environment.
+
+As outlined in the
+["Disaster Recovery Strategies for Applications Running on OpenShift"](https://cloud.redhat.com/blog/disaster-recovery-strategies-for-applications-running-on-openshift)
+blog article written by Raffaele Spazzoli back in 2020 about stateful
+applications, in order to fully exploit {{name.ln}}, you need
+to plan, design and implement an OpenShift cluster spanning 3 or more
+availability zones. While this doesn't pose an issue in most of the public
+cloud provider deployments, it is definitely a challenge in on-premise
+scenarios.
+
+If your OpenShift cluster has only **one availability zone**, the zone is your
+Single Point of Failure (SPoF) from a High Availability standpoint - provided
+that you have wisely adopted a share-nothing architecture, making sure that
+your PostgreSQL clusters have at least one standby (two if using synchronous
+replication), and that each PostgreSQL instance runs on a different Kubernetes
+worker node using different storage. Make sure that continuous backup data is
+stored additionally in a storage service outside the OpenShift cluster,
+allowing you to perform Disaster Recovery operations beyond your data center.
+
+Most likely you will have another OpenShift cluster in another data center,
+either in the same metropolitan area or in another region, in an active/passive
+strategy. You can set up an independent ["Replica cluster"](replica_cluster.md),
+with the understanding that this is primarily a Disaster Recovery solution - very
+effective but with some limitations that require manual intervention, as
+explained in the feature page. The same solution can be applied to additional
+OpenShift clusters, even in a cascading manner.
+
+On the other hand, if your OpenShift cluster spans **multiple availability
+zones** in a region, you can fully leverage the capabilities of the operator for
+resilience and self-healing, and the region can become your SPoF, i.e.
+it would take a full region outage to bring down your cluster.
+Moreover, you can take advantage of multiple OpenShift clusters in different
+regions by setting up replica clusters, as previously mentioned.
+
+### Reserving Nodes for PostgreSQL Workloads
+
+For optimal performance and resource allocation in your PostgreSQL database
+operations, it is highly recommended to isolate PostgreSQL workloads by
+dedicating specific worker nodes solely to `postgres` in production. This is
+particularly crucial whether you're operating in a single availability zone or
+a multi-availability zone environment.
+
+A worker node in OpenShift that is dedicated to running PostgreSQL workloads is
+commonly referred to as a **Postgres node** or `postgres` node.
+
+This dedicated approach ensures that your PostgreSQL workloads are not
+competing for resources with other applications, leading to enhanced stability
+and performance.
+
+For further details, please refer to the ["Reserving Nodes for PostgreSQL Workloads" section within the broader "Architecture"](architecture.md#reserving-nodes-for-postgresql-workloads)
+documentation. The primary difference when working in OpenShift involves how
+labels and taints are applied to the nodes, as described below.
+
+To label a node as a `postgres` node, execute the following command:
+
+```sh
+oc label node <node-name> node-role.kubernetes.io/postgres=
+```
+
+To apply a `postgres` taint to a node, use the following command:
+
+```sh
+oc adm taint node <node-name> node-role.kubernetes.io/postgres=:NoSchedule
+```
+
+By correctly labeling and tainting your nodes, you ensure that only PostgreSQL
+workloads are scheduled on these dedicated nodes via affinity and tolerations,
+reinforcing the stability and performance of your database environment.
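
+
+As a minimal sketch, assuming the label and taint applied above, a `Cluster`
+could be confined to `postgres` nodes through its `affinity` stanza (adapt the
+key names to your environment):
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+[...]
+spec:
+  affinity:
+    nodeSelector:
+      node-role.kubernetes.io/postgres: ""
+    tolerations:
+      - key: node-role.kubernetes.io/postgres
+        operator: Exists
+        effect: NoSchedule
+```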
+
+## Important OpenShift concepts
+
+To understand how the {{name.ln}} operator fits in an OpenShift environment,
+you must familiarize yourself with the following Kubernetes-related topics:
+
+- Operators
+- Authentication
+- Authorization via Role-based Access Control (RBAC)
+- Service Accounts and Users
+- Rules, Roles and Bindings
+- Cluster RBAC vs local RBAC through projects
+
+This is especially true if you are not comfortable with the elevated
+permissions required by the default cluster-wide installation of the operator.
+
+We have also selected the diagram below from the OpenShift documentation, as it
+clearly illustrates the relationships between cluster roles, local roles,
+cluster role bindings, local role bindings, users, groups and service accounts.
+
+
+The ["Predefined RBAC objects" section](#predefined-rbac-objects)
+below contains important information about how {{name.ln}} adheres
+to Kubernetes and OpenShift RBAC implementation, covering default installed
+cluster roles, roles, service accounts.
+
+If you are familiar with the above concepts, you can proceed directly to the
+selected installation method. Otherwise, we recommend that you read the
+following resources taken from the OpenShift documentation and the Red Hat
+blog:
+
+- ["Operator Lifecycle Manager (OLM) concepts and resources"](https://docs.openshift.com/container-platform/4.16/operators/understanding/olm/olm-understanding-olm.html)
+- ["Understanding authentication"](https://docs.openshift.com/container-platform/4.16/authentication/understanding-authentication.html)
+- ["Role-based access control (RBAC)"](https://docs.openshift.com/container-platform/4.16/authentication/using-rbac.html),
+ covering rules, roles and bindings for authorization, as well as cluster RBAC vs local RBAC through projects
+- ["Default project service accounts and roles"](https://docs.openshift.com/container-platform/4.16/authentication/using-service-accounts-in-applications.html#service-accounts-default_using-service-accounts)
+- ["With Kubernetes Operators comes great responsibility" blog article](https://www.redhat.com/en/blog/kubernetes-operators-comes-great-responsibility)
+
+### Cluster Service Version (CSV)
+
+Technically, the operator is designed to run in OpenShift via the Operator
+Lifecycle Manager (OLM), according to the Cluster Service Version (CSV) defined
+by EDB.
+
+The CSV is a YAML manifest that defines not only the user interfaces (available
+through the web dashboard), but also the RBAC rules required by the operator
+and the custom resources defined and owned by the operator (such as the
+`Cluster` one, for example). The CSV also defines the available `installModes`
+for the operator, namely: `AllNamespaces` (cluster-wide), `SingleNamespace`
+(single project), `MultiNamespace` (multi-project), and `OwnNamespace`.
+
+!!! Seealso "There's more ..."
+ You can find out more about CSVs and install modes by reading
+ ["Operator group membership"](https://docs.openshift.com/container-platform/4.16/operators/understanding/olm/olm-understanding-operatorgroups.html#olm-operatorgroups-membership_olm-understanding-operatorgroups)
+ and ["Defining cluster service versions (CSVs)"](https://docs.openshift.com/container-platform/4.16/operators/operator_sdk/osdk-generating-csvs.html)
+ from the OpenShift documentation.
+
+### Limitations for multi-tenant management
+
+Red Hat OpenShift Container Platform provides limited support for
+simultaneously installing different variations of an operator on a single
+cluster. Like any other operator, {{name.ln}} becomes an extension
+of the control plane. As the control plane is shared among all tenants
+(projects) of an OpenShift cluster, operators too become shared resources in a
+multi-tenant environment.
+
+Operator Lifecycle Manager (OLM) can install operators multiple times in
+different namespaces, with one important limitation: they all need to share the
+same API version of the operator.
+
+For more information, please refer to
+["Operator groups"](https://docs.openshift.com/container-platform/4.16/operators/understanding/olm/olm-understanding-operatorgroups.html)
+in OpenShift documentation.
+
+## Channels
+
+{{name.ln}} is distributed through the following
+[OLM channels](https://olm.operatorframework.io/docs/best-practices/channel-naming/),
+each serving a distinct purpose:
+
+- `candidate`: this channel provides early access to the next potential `fast`
+ release. It includes the latest pre-release versions with new features and
+ fixes, but is considered experimental and not supported. Use this channel only
+ for testing and validation purposes—**not in production environments**. Versions in
+ `candidate` may not appear in other channels if no further updates are
+ recommended.
+
+- `fast`: designed for users who want timely access to the latest stable
+ features and patches. The head of the `fast` channel always points to the
+ latest **patch release** of the latest **minor release** of EDB Postgres for
+ Kubernetes.
+
+- `stable`: similar to `fast`, but restricted to the latest **minor release**
+ currently under EDB’s Long Term Support (LTS) policy. Designed for users who
+ require predictable updates and official support while benefiting from ongoing
+ stability and maintenance.
+
+- `stable-vX.Y`: tracks the latest patch release within a specific minor
+ version (e.g., `stable-v1.26`). These channels are ideal for environments
+ that require version pinning and predictable updates within a stable minor
+ release.
+
+The `fast` and `stable` channels may span multiple minor versions, whereas
+each `stable-vX.Y` channel is limited to patch updates within a specific minor
+release.
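
+
+For example, to pin your installation to a hypothetical 1.26 minor release,
+you would set the channel in the operator's `Subscription` object (described
+later in this page) as follows:
+
+```yaml
+spec:
+  channel: stable-v1.26
+  [...]
+```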
+
+**{{name.ln}}** follows *trunk-based development* and
+*continuous delivery* principles. As a result, we generally recommend using the
+`fast` channel to stay current with the latest stable improvements and fixes.
+
+## Installation via web console
+
+### Ensuring access to EDB private registry
+
+!!! Important
+ You'll need access to the private EDB repository where both the operator
+ and operand images are stored. Access requires a valid
+ [EDB subscription plan](https://www.enterprisedb.com/products/plans-comparison).
+ Please refer to ["Accessing EDB private image registries"](private_edb_registries.md) for further details.
+
+!!! Warning CRITICAL WARNING: UPGRADING OPERATORS
+
+ OpenShift users, or any customer attempting an operator upgrade, MUST configure the new unified repository pull secret (docker.enterprisedb.com/k8s) before running the upgrade. If the old, deprecated repository path is still in use during the upgrade process, image pull failure will occur, leading to deployment failure and potential downtime. Follow the [Central Migration Guide](migrating_edb_registries) first.
+
+The OpenShift install will use pull secrets in order to access the
+operand and operator images, which are held in a private repository.
+
+Once you have credentials to the private repository, you will need to create
+a pull secret in the `openshift-operators` namespace, named:
+
+- `postgresql-operator-pull-secret`, for the {{name.ln}} operator images
+
+You can create this secret using the `oc create` command by replacing `<TOKEN>` with
+the repository token for your EDB account, as explained in
+[Get your token](/repos/getting_started/with_web/get_your_token/).
+
+```sh
+oc create secret docker-registry postgresql-operator-pull-secret \
+ -n openshift-operators \
+ --docker-server=docker.enterprisedb.com \
+ --docker-username=k8s \
+  --docker-password="<TOKEN>"
+```
+
+The {{name.ln}} operator can be found in the Red Hat OperatorHub
+directly from your OpenShift dashboard.
+
+1. Navigate in the web console to the `Operators -> OperatorHub` page:
+
+ 
+
+2. Scroll in the `Database` section or type a keyword into the `Filter by keyword`
+ box (in this case, "PostgreSQL") to find the {{name.ln}}
+ Operator, then select it:
+
+ 
+
+3. Read the information about the Operator and select `Install`.
+
+4. The following `Operator installation` page expects you to choose:
+
+ - the installation mode: [cluster-wide](#cluster-wide-installation) or
+ [single namespace](#single-project-installation) installation
+ - the update channel (see [the "Channels" section](#channels) for more
+ information - if unsure, pick `fast`)
+ - the approval strategy, following the availability on the market place of
+ a new release of the operator, certified by Red Hat:
+ - `Automatic`: OLM automatically upgrades the running operator with the
+ new version
+ - `Manual`: OpenShift waits for human intervention, by requiring an
+ approval in the `Installed Operators` section
+
+ !!! Important
+ The process of the operator upgrade is described in the
+ ["Upgrades" section](installation_upgrade.md#upgrades).
+
+!!! Important
+ It is possible to install the operator in a single project
+ (technically speaking: `OwnNamespace` install mode) multiple times
+ in the same cluster. There will be an operator installation in every namespace,
+ with different upgrade policies as long as the API is the same (see
+ ["Limitations for multi-tenant management"](#limitations-for-multi-tenant-management)).
+
+Choosing between cluster-wide and local installation of the operator is a
+critical turning point. Attempting to install the operator globally while a
+local installation exists is blocked with the error below. If you want to
+proceed, you first need to remove every local installation of the operator.
+
+
+
+### Cluster-wide installation
+
+With cluster-wide installation, you are asking OpenShift to install the
+Operator in the default `openshift-operators` namespace and to make it
+available to all the projects in the cluster. This is the default and normally
+recommended approach to install {{name.ln}}.
+
+!!! Warning
+ This doesn't mean that every user in the OpenShift cluster can use the EDB Postgres for
+ Kubernetes Operator, deploy a `Cluster` object or even see the `Cluster` objects that
+ are running in their own namespaces. There are some special roles that users must
+ have in the namespace in order to interact with {{name.ln}}' managed
+ custom resources - primarily the `Cluster` one. Please refer to the
+ ["Users and Permissions" section below](#users-and-permissions) for details.
+
+From the web console, select `All namespaces on the cluster (default)` as
+`Installation mode`:
+
+
+
+As a result, the operator will be visible in every namespace. Otherwise, as with any
+other OpenShift operator, check the logs in any pods in the `openshift-operators`
+project on the `Workloads → Pods` page that are reporting issues to troubleshoot further.
+
+!!! Important "Beware"
+ By choosing the cluster-wide installation you cannot easily move to a
+ single project installation at a later time.
+
+### Single project installation
+
+With single project installation, you are asking OpenShift to install the
+Operator in a given namespace, and to make it available to that project only.
+
+!!! Warning
+ This doesn't mean that every user in the namespace can use the EDB Postgres for
+ Kubernetes Operator, deploy a `Cluster` object or even see the `Cluster` objects that
+ are running in the namespace. Similarly to the cluster-wide installation mode,
+ there are some special roles that users must have in the namespace in order to
+ interact with {{name.ln}}' managed custom resources - primarily the `Cluster`
+ one. Please refer to the ["Users and Permissions" section below](#users-and-permissions)
+ for details.
+
+From the web console, select `A specific namespace on the cluster` as
+`Installation mode`, then pick the target namespace (in our example
+`proj-dev`):
+
+
+
+As a result, the operator will be visible in the selected namespace only. You
+can verify this from the `Installed operators` page:
+
+
+
+In case of a problem, from the `Workloads → Pods` page check the logs in any
+pods in the selected installation namespace that are reporting issues to
+troubleshoot further.
+
+!!! Important "Beware"
+ By choosing the single project installation you cannot easily move to a
+ cluster-wide installation at a later time.
+
+This installation process can be repeated in multiple namespaces in the same
+OpenShift cluster, enabling independent installations of the operator in
+different projects. In this case, make sure you read
+["Limitations for multi-tenant management"](#limitations-for-multi-tenant-management).
+
+## Installation via the `oc` CLI
+
+!!! Important
+ Please refer to the ["Installing the OpenShift CLI" section below](#installing-the-openshift-cli-oc)
+ for information on how to install the `oc` command-line interface.
+
+!!! Warning CRITICAL WARNING: UPGRADING OPERATORS
+
+ OpenShift users, or any customer attempting an operator upgrade, MUST configure the new unified repository pull secret (docker.enterprisedb.com/k8s) before running the upgrade. If the old, deprecated repository path is still in use during the upgrade process, image pull failure will occur, leading to deployment failure and potential downtime. Follow the [Central Migration Guide](migrating_edb_registries) first.
+
+Instead of using the OpenShift Container Platform web console, you can install
+the {{name.ln}} Operator from the OperatorHub and create a
+subscription using the `oc` command-line interface. Through the `oc` CLI you
+can install the operator in all namespaces, a single namespace or multiple
+namespaces.
+
+!!! Warning
+ Multiple namespace installation is currently supported by OpenShift.
+ However, [definition of multiple target namespaces for an operator may be removed in future versions of OpenShift](https://docs.openshift.com/container-platform/4.16/operators/understanding/olm/olm-understanding-operatorgroups.html#olm-operatorgroups-target-namespace_olm-understanding-operatorgroups).
+
+This section primarily covers the installation of the operator in multiple
+projects with a simple example, by creating `OperatorGroup` and
+`Subscription` objects.
+
+!!! Info
+ In our example, we will install the operator in the `my-operators`
+ namespace and make it only available in the `web-staging`, `web-prod`,
+ `bi-staging`, and `bi-prod` namespaces. Feel free to change the names of the
+ projects as you like or add/remove some namespaces.
+
+1. Check that the `cloud-native-postgresql` operator is available from the
+ OperatorHub:
+
+ ```
+ oc get packagemanifests -n openshift-marketplace cloud-native-postgresql
+ ```
+
+2. Inspect the operator to verify the installation modes (`MultiNamespace` in
+ particular) and the available channels:
+
+ ```
+ oc describe packagemanifests -n openshift-marketplace cloud-native-postgresql
+ ```
+
+3. Create an `OperatorGroup` object in the `my-operators` namespace so that it
+ targets the `web-staging`, `web-prod`, `bi-staging`, and `bi-prod` namespaces:
+
+ ```
+ apiVersion: operators.coreos.com/v1
+ kind: OperatorGroup
+ metadata:
+ name: cloud-native-postgresql
+ namespace: my-operators
+ spec:
+ targetNamespaces:
+ - web-staging
+ - web-prod
+ - bi-staging
+ - bi-prod
+ ```
+
+ !!! Important
+ Alternatively, you can list namespaces using a label selector, as explained in
+ ["Target namespace selection"](https://docs.openshift.com/container-platform/4.16/operators/understanding/olm/olm-understanding-operatorgroups.html#olm-operatorgroups-target-namespace_olm-understanding-operatorgroups).
+
+4. Create a `Subscription` object in the `my-operators` namespace to subscribe
+ to the `fast` channel of the `cloud-native-postgresql` operator that is
+ available in the `certified-operators` source of the `openshift-marketplace`
+ (as previously located in steps 1 and 2):
+
+ ```
+ apiVersion: operators.coreos.com/v1alpha1
+ kind: Subscription
+ metadata:
+ name: cloud-native-postgresql
+ namespace: my-operators
+ spec:
+ channel: fast
+ name: cloud-native-postgresql
+ source: certified-operators
+ sourceNamespace: openshift-marketplace
+ ```
+
+5. Use `oc apply -f` with the above YAML file definitions for the
+ `OperatorGroup` and `Subscription` objects.
+
+The method described in this section can be very powerful in conjunction with
+proper `RoleBinding` objects, as it enables mapping {{name.ln}}'
+predefined `ClusterRole`s to specific users in selected namespaces.
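
+
+As a sketch of what this could look like, the following `RoleBinding` grants a
+hypothetical user `alice` the predefined `edit` cluster role on `Cluster`
+resources, limited to the `web-staging` project (the user and namespace names
+are assumptions):
+
+```yaml
+apiVersion: rbac.authorization.k8s.io/v1
+kind: RoleBinding
+metadata:
+  name: cnp-cluster-edit
+  namespace: web-staging
+subjects:
+  - kind: User
+    name: alice
+    apiGroup: rbac.authorization.k8s.io
+roleRef:
+  kind: ClusterRole
+  name: clusters.postgresql.k8s.enterprisedb.io-v1-edit
+  apiGroup: rbac.authorization.k8s.io
+```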
+
+!!! Info
+ The above instructions can also be used for single project binding. The
+ only difference is the number of specified target namespaces (one) and,
+ possibly, the namespace of the operator group (ideally, the same as the target
+ namespace).
+
+The result of the above operation can also be verified from the web console, as
+shown in the image below.
+
+
+
+### Cluster-wide installation with `oc`
+
+If you prefer, you can also use `oc` to install the operator globally, by
+taking advantage of the default `OperatorGroup` called `global-operators` in
+the `openshift-operators` namespace, and create a new `Subscription` object for
+the `cloud-native-postgresql` operator in the same namespace:
+
+```yaml
+apiVersion: operators.coreos.com/v1alpha1
+kind: Subscription
+metadata:
+ name: cloud-native-postgresql
+ namespace: openshift-operators
+spec:
+ channel: fast
+ name: cloud-native-postgresql
+ source: certified-operators
+ sourceNamespace: openshift-marketplace
+```
+
+Once you run `oc apply -f` with the above YAML file, the operator will be available in all namespaces.
+
+### Installing the OpenShift CLI (`oc`)
+
+The `oc` command represents the OpenShift command-line interface (CLI). It is
+highly recommended to install it on your system. Below you will find a basic set of
+instructions to install `oc` from your OpenShift dashboard.
+
+First, select the question mark at the top right corner of the dashboard:
+
+
+
+Then follow the instructions you are given, by downloading the binary that
+suits your needs in terms of operating system and architecture:
+
+
+
+!!! Seealso "OpenShift CLI"
+ For more detailed and updated information, please refer to the official
+ [OpenShift CLI documentation](https://docs.openshift.com/container-platform/4.16/cli_reference/openshift_cli/getting-started-cli.html)
+ directly maintained by Red Hat.
+
+## Predefined RBAC objects
+
+{{name.ln}} comes with a predefined set of resources that play an
+important role when it comes to RBAC policy configuration.
+
+### Custom Resource Definitions (CRD)
+
+The {{name.ln}} operator owns the following custom resource
+definitions (CRD):
+
+- `Backup`
+- `Cluster`
+- `Pooler`
+- `ScheduledBackup`
+- `ImageCatalog`
+- `ClusterImageCatalog`
+
+You can verify this by running:
+
+```sh
+oc get customresourcedefinitions.apiextensions.k8s.io | grep postgresql
+```
+
+which returns something similar to:
+
+```console
+backups.postgresql.k8s.enterprisedb.io 20YY-MM-DDTHH:MM:SSZ
+clusterimagecatalogs.postgresql.k8s.enterprisedb.io 20YY-MM-DDTHH:MM:SSZ
+clusters.postgresql.k8s.enterprisedb.io 20YY-MM-DDTHH:MM:SSZ
+imagecatalogs.postgresql.k8s.enterprisedb.io 20YY-MM-DDTHH:MM:SSZ
+poolers.postgresql.k8s.enterprisedb.io 20YY-MM-DDTHH:MM:SSZ
+scheduledbackups.postgresql.k8s.enterprisedb.io 20YY-MM-DDTHH:MM:SSZ
+```
+
+### Service accounts
+
+The namespace where the operator has been installed (by default
+`openshift-operators`) contains the following predefined service accounts:
+`builder`, `default`, `deployer`, and most importantly
+`postgresql-operator-manager` (managed by the CSV).
+
+!!! Important
+ Service accounts in Kubernetes are namespaced resources. Unless explicitly
+ authorized, a service account cannot be accessed outside the defined namespace.
+
+You can verify this by running:
+
+```sh
+oc get serviceaccounts -n openshift-operators
+```
+
+which returns something similar to:
+
+```console
+NAME SECRETS AGE
+builder 2 ...
+default 2 ...
+deployer 2 ...
+postgresql-operator-manager 2 ...
+```
+
+The `default` service account is automatically created by Kubernetes and
+present in every namespace. The `builder` and `deployer` service accounts are
+automatically created by OpenShift (see ["Default project service accounts and roles"](https://docs.openshift.com/container-platform/4.16/authentication/using-service-accounts-in-applications.html#default-service-accounts-and-roles_using-service-accounts)).
+
+The `postgresql-operator-manager` service account is the one used by the
+{{name.ln}} operator to work as part of the Kubernetes/OpenShift control
+plane in managing PostgreSQL clusters.
+
+!!! Important
+ Do not delete the `postgresql-operator-manager` ServiceAccount as it can
+ stop the operator from working.
+
+### Cluster roles
+
+The Operator Lifecycle Manager (OLM) automatically creates a set of cluster
+role objects to facilitate role binding definitions and granular implementation
+of RBAC policies. Some cluster roles have rules that apply to Custom Resource
+Definitions that are part of {{name.ln}}, while others apply to resources
+that are part of the broader Kubernetes/OpenShift realm.
+
+#### Cluster roles on EDB Postgres® AI for CloudNativePG™ Cluster CRDs
+
+For [every CRD owned by {{name.ln}}' CSV](#custom-resource-definitions-crd),
+OLM deploys some predefined cluster roles that can be used by customer facing
+users and service accounts. In particular:
+
+- a role for the full administration of the resource (`admin` suffix)
+- a role to edit the resource (`edit` suffix)
+- a role to view the resource (`view` suffix)
+- a role to view the actual CRD (`crdview` suffix)
+
+!!! Important
+    Cluster roles are not a security threat per se. They are the recommended way
+ in OpenShift to define templates for roles to be later "bound" to actual users
+ in a specific project or globally. Indeed, cluster roles can be used in
+ conjunction with `ClusterRoleBinding` objects for global permissions or with
+ `RoleBinding` objects for local permissions. This makes it possible to reuse
+ cluster roles across multiple projects while enabling customization within
+ individual projects through local roles.
+
+You can verify the list of predefined cluster roles by running:
+
+```sh
+oc get clusterroles | grep postgresql
+```
+
+which returns something similar to:
+
+```console
+backups.postgresql.k8s.enterprisedb.io-v1-admin YYYY-MM-DDTHH:MM:SSZ
+backups.postgresql.k8s.enterprisedb.io-v1-crdview YYYY-MM-DDTHH:MM:SSZ
+backups.postgresql.k8s.enterprisedb.io-v1-edit YYYY-MM-DDTHH:MM:SSZ
+backups.postgresql.k8s.enterprisedb.io-v1-view YYYY-MM-DDTHH:MM:SSZ
+cloud-native-postgresql.VERSION-HASH YYYY-MM-DDTHH:MM:SSZ
+clusterimagecatalogs.postgresql.k8s.enterprisedb.io-v1-admin YYYY-MM-DDTHH:MM:SSZ
+clusterimagecatalogs.postgresql.k8s.enterprisedb.io-v1-crdview YYYY-MM-DDTHH:MM:SSZ
+clusterimagecatalogs.postgresql.k8s.enterprisedb.io-v1-edit YYYY-MM-DDTHH:MM:SSZ
+clusterimagecatalogs.postgresql.k8s.enterprisedb.io-v1-view YYYY-MM-DDTHH:MM:SSZ
+clusters.postgresql.k8s.enterprisedb.io-v1-admin YYYY-MM-DDTHH:MM:SSZ
+clusters.postgresql.k8s.enterprisedb.io-v1-crdview YYYY-MM-DDTHH:MM:SSZ
+clusters.postgresql.k8s.enterprisedb.io-v1-edit YYYY-MM-DDTHH:MM:SSZ
+clusters.postgresql.k8s.enterprisedb.io-v1-view YYYY-MM-DDTHH:MM:SSZ
+imagecatalogs.postgresql.k8s.enterprisedb.io-v1-admin YYYY-MM-DDTHH:MM:SSZ
+imagecatalogs.postgresql.k8s.enterprisedb.io-v1-crdview YYYY-MM-DDTHH:MM:SSZ
+imagecatalogs.postgresql.k8s.enterprisedb.io-v1-edit YYYY-MM-DDTHH:MM:SSZ
+imagecatalogs.postgresql.k8s.enterprisedb.io-v1-view YYYY-MM-DDTHH:MM:SSZ
+poolers.postgresql.k8s.enterprisedb.io-v1-admin YYYY-MM-DDTHH:MM:SSZ
+poolers.postgresql.k8s.enterprisedb.io-v1-crdview YYYY-MM-DDTHH:MM:SSZ
+poolers.postgresql.k8s.enterprisedb.io-v1-edit YYYY-MM-DDTHH:MM:SSZ
+poolers.postgresql.k8s.enterprisedb.io-v1-view YYYY-MM-DDTHH:MM:SSZ
+scheduledbackups.postgresql.k8s.enterprisedb.io-v1-admin YYYY-MM-DDTHH:MM:SSZ
+scheduledbackups.postgresql.k8s.enterprisedb.io-v1-crdview YYYY-MM-DDTHH:MM:SSZ
+scheduledbackups.postgresql.k8s.enterprisedb.io-v1-edit YYYY-MM-DDTHH:MM:SSZ
+scheduledbackups.postgresql.k8s.enterprisedb.io-v1-view YYYY-MM-DDTHH:MM:SSZ
+```
+
+You can inspect an actual role as any other Kubernetes resource with the `get`
+command. For example:
+
+```sh
+oc get -o yaml clusterrole clusters.postgresql.k8s.enterprisedb.io-v1-admin
+```
+
+Looking at the abridged output below, you can see that the
+`clusters.postgresql.k8s.enterprisedb.io-v1-admin` cluster role enables
+everything on the `cluster` resource defined by the
+`postgresql.k8s.enterprisedb.io` API group:
+
+```yaml
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+ name: clusters.postgresql.k8s.enterprisedb.io-v1-admin
+rules:
+- apiGroups:
+ - postgresql.k8s.enterprisedb.io
+ resources:
+ - clusters
+ verbs:
+ - '*'
+```
+
+!!! Seealso "There's more ..."
+ If you are interested in the actual implementation of RBAC by an
+ OperatorGroup, please refer to the
+ ["OperatorGroup: RBAC" section from the Operator Lifecycle Manager documentation](https://olm.operatorframework.io/docs/concepts/crds/operatorgroup/#rbac).
+
+#### Cluster roles on Kubernetes CRDs
+
+When installing a `Subscription` object in a given namespace (e.g.
+`openshift-operators` for cluster-wide installation of the operator), OLM also
+creates a cluster role that is used to grant permissions to the
+`postgresql-operator-manager` service account that the operator uses. The name
+of this cluster role varies, as it depends on the installed version of the
+operator and the time of installation.
+
+You can retrieve it by running the following command:
+
+```sh
+oc get clusterrole --selector=olm.owner.kind=ClusterServiceVersion
+```
+
+You can then use the name returned by the above query (which should have the
+form of `cloud-native-postgresql.VERSION-HASH`) to look at the rules, resources
+and verbs via the `describe` command:
+
+```sh
+oc describe clusterrole cloud-native-postgresql.VERSION-HASH
+```
+
+```console
+Name: cloud-native-postgresql.VERSION-HASH
+Labels: olm.owner=cloud-native-postgresql.VERSION
+ olm.owner.kind=ClusterServiceVersion
+ olm.owner.namespace=openshift-operators
+ operators.coreos.com/cloud-native-postgresql.openshift-operators=
+Annotations: <none>
+PolicyRule:
+ Resources Non-Resource URLs Resource Names Verbs
+ --------- ----------------- -------------- -----
+ configmaps [] [] [create delete get list patch update watch]
+ secrets [] [] [create delete get list patch update watch]
+ services [] [] [create delete get list patch update watch]
+ deployments.apps [] [] [create delete get list patch update watch]
+ poddisruptionbudgets.policy [] [] [create delete get list patch update watch]
+ backups.postgresql.k8s.enterprisedb.io [] [] [create delete get list patch update watch]
+ clusters.postgresql.k8s.enterprisedb.io [] [] [create delete get list patch update watch]
+ poolers.postgresql.k8s.enterprisedb.io [] [] [create delete get list patch update watch]
+ scheduledbackups.postgresql.k8s.enterprisedb.io [] [] [create delete get list patch update watch]
+ persistentvolumeclaims [] [] [create delete get list patch watch]
+ pods/exec [] [] [create delete get list patch watch]
+ pods [] [] [create delete get list patch watch]
+ jobs.batch [] [] [create delete get list patch watch]
+ podmonitors.monitoring.coreos.com [] [] [create delete get list patch watch]
+ serviceaccounts [] [] [create get list patch update watch]
+ rolebindings.rbac.authorization.k8s.io [] [] [create get list patch update watch]
+ roles.rbac.authorization.k8s.io [] [] [create get list patch update watch]
+ leases.coordination.k8s.io [] [] [create get update]
+ events [] [] [create patch]
+ mutatingwebhookconfigurations.admissionregistration.k8s.io [] [] [get list update]
+ validatingwebhookconfigurations.admissionregistration.k8s.io [] [] [get list update]
+ customresourcedefinitions.apiextensions.k8s.io [] [] [get list update]
+ namespaces [] [] [get list watch]
+ nodes [] [] [get list watch]
+ clusters.postgresql.k8s.enterprisedb.io/status [] [] [get patch update watch]
+ poolers.postgresql.k8s.enterprisedb.io/status [] [] [get patch update watch]
+ configmaps/status [] [] [get patch update]
+ secrets/status [] [] [get patch update]
+ backups.postgresql.k8s.enterprisedb.io/status [] [] [get patch update]
+ scheduledbackups.postgresql.k8s.enterprisedb.io/status [] [] [get patch update]
+ pods/status [] [] [get]
+ clusters.postgresql.k8s.enterprisedb.io/finalizers [] [] [update]
+ poolers.postgresql.k8s.enterprisedb.io/finalizers [] [] [update]
+```
+
+!!! Important
+ The above permissions are exclusively reserved for the operator's service
+ account to interact with the Kubernetes API server. They are not directly
+ accessible by users of the operator, who interact only with `Cluster`,
+ `Pooler`, `Backup`, and `ScheduledBackup` resources (see
+ ["Cluster roles on {{name.ln}} CRDs"](#cluster-roles-on-edb-postgres-ai-for-cloudnativepg-cluster-crds)).
+
+The operator automates, in a declarative way, many operations related to
+PostgreSQL management that would otherwise require manual and imperative
+intervention. These operations also include security-related matters at the
+RBAC level (e.g. service accounts), the pod level (e.g. security context
+constraints), and the Postgres level (e.g. TLS certificates).
+
+For more information about the reasons why the operator needs these elevated
+permissions, please refer to the
+["Security / Cluster / RBAC" section](security.md#role-based-access-control-rbac).
+
+## Users and Permissions
+
+A very common way to use the {{name.ln}} operator is to rely on the
+`cluster-admin` role and manage resources centrally.
+
+Alternatively, you can use the RBAC framework made available by
+Kubernetes/OpenShift, as with any other operator or resources.
+
+For example, you might be interested in binding the
+`clusters.postgresql.k8s.enterprisedb.io-v1-admin` cluster role to specific
+groups or users in a specific namespace, as any other cloud native application.
+The following example binds that cluster role to a specific user in the
+`web-prod` project:
+
+```yaml
+kind: RoleBinding
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+ name: web-prod-admin
+ namespace: web-prod
+subjects:
+ - kind: User
+ apiGroup: rbac.authorization.k8s.io
+ name: mario@cioni.org
+roleRef:
+ apiGroup: rbac.authorization.k8s.io
+ kind: ClusterRole
+ name: clusters.postgresql.k8s.enterprisedb.io-v1-admin
+```
+
+The same process can be repeated with any other predefined `ClusterRole`.
+
+If, on the other hand, you prefer not to use cluster roles, you can create
+specific namespaced roles, as in this example:
+
+```yaml
+apiVersion: rbac.authorization.k8s.io/v1
+kind: Role
+metadata:
+ name: web-prod-view
+ namespace: web-prod
+rules:
+- apiGroups:
+ - postgresql.k8s.enterprisedb.io
+ resources:
+ - clusters
+ verbs:
+ - get
+ - list
+ - watch
+```
+
+Then, assign this role to a given user:
+
+```yaml
+apiVersion: rbac.authorization.k8s.io/v1
+kind: RoleBinding
+metadata:
+ name: web-prod-view
+ namespace: web-prod
+roleRef:
+ apiGroup: rbac.authorization.k8s.io
+ kind: Role
+ name: web-prod-view
+subjects:
+- apiGroup: rbac.authorization.k8s.io
+ kind: User
+ name: web-prod-developer1
+```
+
+This final example creates a role with administration permissions (`verbs`
+set to `*`) on all the resources managed by the operator in that namespace
+(`web-prod`):
+
+```yaml
+apiVersion: rbac.authorization.k8s.io/v1
+kind: Role
+metadata:
+ name: web-prod-admin
+ namespace: web-prod
+rules:
+- apiGroups:
+ - postgresql.k8s.enterprisedb.io
+ resources:
+ - clusters
+ - backups
+ - scheduledbackups
+ - poolers
+ verbs:
+ - '*'
+```
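+
+To take effect, this role must also be bound to a subject. As an illustrative
+sketch (the `db-admins` group name is hypothetical), the binding could look
+like:
+
+```yaml
+apiVersion: rbac.authorization.k8s.io/v1
+kind: RoleBinding
+metadata:
+  name: web-prod-admin
+  namespace: web-prod
+roleRef:
+  apiGroup: rbac.authorization.k8s.io
+  kind: Role
+  name: web-prod-admin
+subjects:
+- apiGroup: rbac.authorization.k8s.io
+  kind: Group
+  name: db-admins
+```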
+
+## Pod Security Standards
+
+{{name.ln}} on OpenShift works with the `restricted-v2` SCC
+(`SecurityContextConstraints`).
+
+Developed with a security focus from the beginning, and always adhering to
+the Red Hat Certification process, {{name.ln}} works under the new SCCs
+introduced in OpenShift 4.11.
+
+By default, {{name.ln}} will drop all capabilities. This
+ensures that during its lifecycle the operator will never make use of any
+unsafe capabilities.
+
+On OpenShift, each Pod inherits its `SecurityContext.SeccompProfile` from the
+OpenShift deployment, which in turn is set by the Pod Security Admission
+Controller.
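+
+As an illustration only (this is a sketch of the constraints that
+`restricted-v2` enforces, not the operator's literal manifest), the effective
+container security context is equivalent to something like:
+
+```yaml
+securityContext:
+  allowPrivilegeEscalation: false
+  runAsNonRoot: true
+  capabilities:
+    drop:
+      - ALL
+  seccompProfile:
+    type: RuntimeDefault
+```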
+
+!!! Note
+ Even if `nonroot-v2` and `hostnetwork-v2` are qualified as less restricted
+ SCCs, we don't run tests on them, and therefore we cannot guarantee that these
+ SCCs will work. That being said, `nonroot-v2` and `hostnetwork-v2` are a subset
+ of rules in `restricted-v2` so there is no reason to believe that they would
+ not work.
+
+## Customization of the Pooler image
+
+By default, the `Pooler` resource creates pods having the `pgbouncer` container
+that runs with the `quay.io/enterprisedb/pgbouncer` image.
+
+!!! Note "There's more"
+ For more details about pod customization for the pooler, please refer to
+ the ["Pod templates"](connection_pooling.md#pod-templates) section in the
+ connection pooling documentation.
+
+You can change the image name from the advanced interface, specifically by
+opening the *"Template"* section, then selecting *"Add container"*
+under *"Spec > Containers"*:
+
+
+
+Then:
+
+- set `pgbouncer` as the name of the container (required field in the pod template)
+- set the *"Image"* field as desired
+
+
+
+## OADP for Velero
+
+We recommend using the
+[OpenShift API for Data Protection (OADP)](https://github.com/openshift/oadp-operator)
+operator to manage [Velero](https://velero.io/) in OpenShift environments.
+Specific details about how {{name.ln}} integrates with Velero can be found in
+the [Velero section](addons.md#velero) of the Addons documentation.
+The [OADP operator](https://docs.openshift.com/container-platform/latest/backup_and_restore/application_backup_and_restore/installing/about-installing-oadp.html)
+is a community operator that is not directly supported by EDB. It is not
+required to use Velero with EDB Postgres, but it is a convenient way to
+install Velero on OpenShift.
+
+## Monitoring and metrics
+
+OpenShift includes a [Prometheus](https://prometheus.io) installation out of
+the box that can be leveraged for user-defined projects, including {{name.ln}}.
+
+Grafana integration is out of scope for this guide, as Grafana is no longer
+included with OpenShift.
+
+In this section, we show you how to get started with basic observability,
+leveraging the default OpenShift installation.
+
+Please refer to the [OpenShift monitoring stack overview](https://docs.openshift.com/container-platform/4.16/observability/monitoring/monitoring-overview.html)
+for further background.
+
+Depending on your OpenShift configuration, you may need to do a bit of setup
+before you can monitor your {{name.ln}} clusters.
+
+You will need to have your OpenShift configured to
+[enable monitoring for user-defined projects](https://docs.openshift.com/container-platform/4.16/observability/monitoring/enabling-monitoring-for-user-defined-projects.html).
+
+Check, perhaps with your OpenShift administrator, whether your installation
+has the `cluster-monitoring-config` ConfigMap and, if so, whether user
+workload monitoring is enabled.
+
+You can check for the presence of this ConfigMap (note that it should be in
+the `openshift-monitoring` namespace):
+
+```sh
+oc -n openshift-monitoring get configmap cluster-monitoring-config
+```
+
+To enable user workload monitoring, `oc apply` or `oc edit` the ConfigMap so
+that it looks like the following:
+
+```yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: cluster-monitoring-config
+ namespace: openshift-monitoring
+data:
+ config.yaml: |
+ enableUserWorkload: true
+```
+
+After `enableUserWorkload` is set, several monitoring components will be
+created in the `openshift-user-workload-monitoring` namespace.
+
+```sh
+$ oc -n openshift-user-workload-monitoring get pod
+NAME READY STATUS RESTARTS AGE
+prometheus-operator-58768d7cc-28xb5 2/2 Running 0 5h10m
+prometheus-user-workload-0 6/6 Running 0 5h10m
+prometheus-user-workload-1 6/6 Running 0 5h10m
+thanos-ruler-user-workload-0 3/3 Running 0 5h10m
+thanos-ruler-user-workload-1 3/3 Running 0 5h10m
+```
+
+You should now be able to see metrics from any cluster that enables them.
+
+For example, we can create the following cluster with monitoring enabled in
+the `foo` namespace:
+
+```sh
+kubectl apply -n foo -f - <<EOF
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+  name: cluster-with-metrics
+spec:
+  instances: 3
+  storage:
+    size: 1Gi
+  monitoring:
+    enablePodMonitor: true
+EOF
+```
+
+The operator looks for the `PULL_SECRET_NAME` secret in the namespace where
+you installed the operator. If the operator cannot find that secret, it
+ignores the configuration parameter.
+
+## Defining an operator config map
+
+The example below customizes the behavior of the operator by defining a
+default license key (namely a company key) and the label/annotation names to
+be inherited by the resources created by any `Cluster` object that is
+deployed at a later time, by enabling
+[in-place updates for the instance
+manager](installation_upgrade.md#in-place-updates-of-the-instance-manager),
+and by spreading out upgrades.
+
+```yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: postgresql-operator-controller-manager-config
+ namespace: postgresql-operator-system
+data:
+ CLUSTERS_ROLLOUT_DELAY: '60'
+ ENABLE_INSTANCE_MANAGER_INPLACE_UPDATES: 'true'
+ EDB_LICENSE_KEY: <license key>
+ INHERITED_ANNOTATIONS: categories
+ INHERITED_LABELS: environment, workload, app
+ INSTANCES_ROLLOUT_DELAY: '10'
+```
+
+## Defining an operator secret
+
+The example below customizes the behavior of the operator by defining a
+default license key (namely a company key) and the label/annotation names to
+be inherited by the resources created by any `Cluster` object that is
+deployed at a later time, by enabling
+[in-place updates for the instance
+manager](installation_upgrade.md#in-place-updates-of-the-instance-manager),
+and by spreading out upgrades.
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+ name: postgresql-operator-controller-manager-config
+ namespace: postgresql-operator-system
+type: Opaque
+stringData:
+ CLUSTERS_ROLLOUT_DELAY: '60'
+ ENABLE_INSTANCE_MANAGER_INPLACE_UPDATES: 'true'
+ EDB_LICENSE_KEY: <license key>
+ INHERITED_ANNOTATIONS: categories
+ INHERITED_LABELS: environment, workload, app
+ INSTANCES_ROLLOUT_DELAY: '10'
+```
+
+## Restarting the operator to reload configs
+
+For the change to be effective, you need to recreate the operator pods to
+reload the config map. If you have installed the operator on Kubernetes
+using the manifest you can do that by issuing:
+
+```shell
+kubectl rollout restart deployment \
+ -n postgresql-operator-system \
+ postgresql-operator-controller-manager
+```
+
+Otherwise, if you have installed the operator using OLM, or you are running
+on OpenShift, run the following command, specifying the namespace in which
+the operator is installed:
+
+```shell
+kubectl delete pods -n [NAMESPACE_NAME_HERE] \
+ -l app.kubernetes.io/name=cloud-native-postgresql
+```
+
+!!! Warning
+ Customizations will be applied only to `Cluster` resources created
+ after the reload of the operator deployment.
+
+Following the above example, if the `Cluster` definition contains a `categories`
+annotation and any of the `environment`, `workload`, or `app` labels, these will
+be inherited by all the resources generated by the deployment.
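+
+As an illustrative sketch (the cluster name here is hypothetical), a
+`Cluster` defined as follows would have its `categories` annotation and its
+`environment` and `app` labels propagated to the resources the operator
+creates for it:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+  name: inheritance-example
+  annotations:
+    categories: database
+  labels:
+    environment: production
+    app: web
+spec:
+  instances: 3
+  storage:
+    size: 1Gi
+```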
+
+## Profiling tools
+
+The operator can expose a pprof HTTP server on `localhost:6060`.
+To enable it, edit the operator deployment and add the flag
+`--pprof-server=true` to the container args:
+
+```shell
+kubectl edit deployment -n postgresql-operator-system postgresql-operator-controller-manager
+```
+
+Add `--pprof-server=true` to the args list, for example:
+
+```yaml
+ containers:
+ - args:
+ - controller
+ - --enable-leader-election
+ - --config-map-name=postgresql-operator-controller-manager-config
+ - --secret-name=postgresql-operator-controller-manager-config
+ - --log-level=info
+ - --pprof-server=true # relevant line
+ command:
+ - /manager
+```
+
+After saving, the deployment will roll out and the new pod will
+have the pprof server enabled.
+
+!!! Important
+ The pprof server only serves plain HTTP on port `6060`.
+
+To access the pprof endpoints from your local machine, use
+port-forwarding:
+
+```shell
+kubectl port-forward -n postgresql-operator-system deploy/postgresql-operator-controller-manager 6060
+curl -sS http://localhost:6060/debug/pprof/
+go tool pprof http://localhost:6060/debug/pprof/profile?seconds=30
+```
+
+You can also access pprof from your browser at
+`http://localhost:6060/debug/pprof/` while the port-forward is active.
+
+!!! Warning
+ The example above uses `kubectl port-forward` for local testing only.
+ This is **not** the intended way to expose the feature in production.
+ Treat pprof as a sensitive debugging interface and never expose it publicly.
+ If you must access it remotely, secure it with proper network policies and access controls.
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/index.mdx b/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/index.mdx
new file mode 100644
index 0000000000..3df447f0e7
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/index.mdx
@@ -0,0 +1,6586 @@
+---
+title: API Reference - v1.27.1
+originalFilePath: src/pg4k.v1.md
+navTitle: API Reference
+navigation:
+ - v1.27.1
+ - v1.27.0
+ - v1.26.1
+ - v1.26.0
+ - v1.25.1
+ - v1.25.0
+ - v1.24.3
+ - v1.24.2
+ - v1.24.1
+ - v1.24.0
+ - v1.23.6
+ - v1.23.5
+ - v1.23.4
+ - v1.23.3
+ - v1.23.2
+ - v1.23.1
+ - v1.23.0
+ - v1.22.9
+ - v1.22.8
+ - v1.22.7
+ - v1.22.6
+ - v1.22.5
+ - v1.22.4
+ - v1.22.3
+ - v1.22.2
+ - v1.22.1
+ - v1.22.0
+ - v1.21.6
+ - v1.21.5
+ - v1.21.4
+ - v1.21.3
+ - v1.21.2
+ - v1.21.1
+ - v1.21.0
+ - v1.20.6
+ - v1.20.5
+ - v1.20.4
+ - v1.20.3
+ - v1.20.2
+ - v1.20.1
+ - v1.20.0
+ - v1.19.6
+ - v1.19.5
+ - v1.19.4
+ - v1.19.3
+ - v1.19.2
+ - v1.19.1
+ - v1.19.0
+ - v1.18.13
+ - v1.18.12
+ - v1.18.11
+ - v1.18.10
+ - v1.18.9
+ - v1.18.8
+ - v1.18.7
+ - v1.18.6
+ - v1.18.5
+ - v1.18.4
+ - v1.18.3
+ - v1.18.2
+ - v1.18.1
+ - v1.18.0
+ - v1.17.5
+ - v1.17.4
+ - v1.17.3
+ - v1.17.2
+ - v1.17.1
+ - v1.17.0
+ - v1.16.5
+ - v1.16.4
+ - v1.16.3
+ - v1.16.2
+ - v1.16.1
+ - v1.16.0
+ - v1.15.5
+ - v1.15.4
+ - v1.15.3
+ - v1.15.2
+ - v1.15.1
+ - v1.15.0
+ - v1.14.0
+ - v1.13.0
+ - v1.12.0
+ - v1.11.0
+ - v1.10.0
+ - v1.9.2
+ - v1.9.1
+ - v1.9.0
+ - v1.8.0
+ - v1.7.1
+ - v1.7.0
+ - v1.6.0
+ - v1.5.1
+ - v1.5.0
+ - v1.4.0
+ - v1.3.0
+ - v1.2.1
+ - v1.2.0
+ - v1.1.0
+ - v1.0.0
+ - v0.8.0
+ - v0.7.0
+ - v0.6.0
+
+---
+
+Package v1 contains API Schema definitions for the postgresql v1 API group
+
+## Resource Types
+
+- [Backup](#postgresql-k8s-enterprisedb-io-v1-Backup)
+- [Cluster](#postgresql-k8s-enterprisedb-io-v1-Cluster)
+- [ClusterImageCatalog](#postgresql-k8s-enterprisedb-io-v1-ClusterImageCatalog)
+- [Database](#postgresql-k8s-enterprisedb-io-v1-Database)
+- [FailoverQuorum](#postgresql-k8s-enterprisedb-io-v1-FailoverQuorum)
+- [ImageCatalog](#postgresql-k8s-enterprisedb-io-v1-ImageCatalog)
+- [Pooler](#postgresql-k8s-enterprisedb-io-v1-Pooler)
+- [Publication](#postgresql-k8s-enterprisedb-io-v1-Publication)
+- [ScheduledBackup](#postgresql-k8s-enterprisedb-io-v1-ScheduledBackup)
+- [Subscription](#postgresql-k8s-enterprisedb-io-v1-Subscription)
+
+
+
+## Backup
+
+A Backup resource is a request for a PostgreSQL backup by the user.
+
+
+| Field | Description |
+| ----- | ----------- |
+| `apiVersion` [Required] `string` | `postgresql.k8s.enterprisedb.io/v1` |
+| `kind` [Required] `string` | `Backup` |
+| `metadata` [Required] `meta/v1.ObjectMeta` | Refer to the Kubernetes API documentation for the fields of the `metadata` field. |
+| `spec` [Required] [`BackupSpec`](#postgresql-k8s-enterprisedb-io-v1-BackupSpec) | Specification of the desired behavior of the backup. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status |
+| `status` [`BackupStatus`](#postgresql-k8s-enterprisedb-io-v1-BackupStatus) | Most recently observed status of the backup. This data may not be up to date. Populated by the system. Read-only. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status |
+
+
+
+
+
+
+## Cluster
+
+Cluster defines the API schema for a highly available PostgreSQL database cluster
+managed by {{name.ln}}.
+
+
+| Field | Description |
+| ----- | ----------- |
+| `apiVersion` [Required] `string` | `postgresql.k8s.enterprisedb.io/v1` |
+| `kind` [Required] `string` | `Cluster` |
+| `metadata` [Required] `meta/v1.ObjectMeta` | Refer to the Kubernetes API documentation for the fields of the `metadata` field. |
+| `spec` [Required] [`ClusterSpec`](#postgresql-k8s-enterprisedb-io-v1-ClusterSpec) | Specification of the desired behavior of the cluster. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status |
+| `status` [`ClusterStatus`](#postgresql-k8s-enterprisedb-io-v1-ClusterStatus) | Most recently observed status of the cluster. This data may not be up to date. Populated by the system. Read-only. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status |
+
+
+
+
+
+
+## ClusterImageCatalog
+
+ClusterImageCatalog is the Schema for the clusterimagecatalogs API
+
+
+| Field | Description |
+| ----- | ----------- |
+| `apiVersion` [Required] `string` | `postgresql.k8s.enterprisedb.io/v1` |
+| `kind` [Required] `string` | `ClusterImageCatalog` |
+| `metadata` [Required] `meta/v1.ObjectMeta` | Refer to the Kubernetes API documentation for the fields of the `metadata` field. |
+| `spec` [Required] [`ImageCatalogSpec`](#postgresql-k8s-enterprisedb-io-v1-ImageCatalogSpec) | Specification of the desired behavior of the ClusterImageCatalog. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status |
+
+
+
+
+
+
+## Database
+
+Database is the Schema for the databases API
+
+
+| Field | Description |
+| ----- | ----------- |
+| `apiVersion` [Required] `string` | `postgresql.k8s.enterprisedb.io/v1` |
+| `kind` [Required] `string` | `Database` |
+| `metadata` [Required] `meta/v1.ObjectMeta` | Refer to the Kubernetes API documentation for the fields of the `metadata` field. |
+| `spec` [Required] [`DatabaseSpec`](#postgresql-k8s-enterprisedb-io-v1-DatabaseSpec) | Specification of the desired Database. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status |
+| `status` [`DatabaseStatus`](#postgresql-k8s-enterprisedb-io-v1-DatabaseStatus) | Most recently observed status of the Database. This data may not be up to date. Populated by the system. Read-only. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status |
+
+
+
+
+
+
+## FailoverQuorum
+
+FailoverQuorum contains the information about the current failover
+quorum status of a PG cluster. It is updated by the instance manager
+of the primary node and reset to zero by the operator to trigger
+an update.
+
+
+| Field | Description |
+| ----- | ----------- |
+| `apiVersion` [Required] `string` | `postgresql.k8s.enterprisedb.io/v1` |
+| `kind` [Required] `string` | `FailoverQuorum` |
+| `metadata` [Required] `meta/v1.ObjectMeta` | Refer to the Kubernetes API documentation for the fields of the `metadata` field. |
+| `status` [`FailoverQuorumStatus`](#postgresql-k8s-enterprisedb-io-v1-FailoverQuorumStatus) | Most recently observed status of the failover quorum. |
+
+
+
+
+
+
+## ImageCatalog
+
+ImageCatalog is the Schema for the imagecatalogs API
+
+
+| Field | Description |
+| ----- | ----------- |
+| `apiVersion` [Required] `string` | `postgresql.k8s.enterprisedb.io/v1` |
+| `kind` [Required] `string` | `ImageCatalog` |
+| `metadata` [Required] `meta/v1.ObjectMeta` | Refer to the Kubernetes API documentation for the fields of the `metadata` field. |
+| `spec` [Required] [`ImageCatalogSpec`](#postgresql-k8s-enterprisedb-io-v1-ImageCatalogSpec) | Specification of the desired behavior of the ImageCatalog. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status |
+
+
+
+
+
+
+## Pooler
+
+Pooler is the Schema for the poolers API
+
+
+| Field | Description |
+| ----- | ----------- |
+| `apiVersion` [Required] `string` | `postgresql.k8s.enterprisedb.io/v1` |
+| `kind` [Required] `string` | `Pooler` |
+| `metadata` [Required] `meta/v1.ObjectMeta` | Refer to the Kubernetes API documentation for the fields of the `metadata` field. |
+| `spec` [Required] [`PoolerSpec`](#postgresql-k8s-enterprisedb-io-v1-PoolerSpec) | Specification of the desired behavior of the Pooler. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status |
+| `status` [`PoolerStatus`](#postgresql-k8s-enterprisedb-io-v1-PoolerStatus) | Most recently observed status of the Pooler. This data may not be up to date. Populated by the system. Read-only. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status |
+
+
+
+
+
+
+## Publication
+
+Publication is the Schema for the publications API
+
+
+| Field | Description |
+| ----- | ----------- |
+| `apiVersion` [Required] `string` | `postgresql.k8s.enterprisedb.io/v1` |
+| `kind` [Required] `string` | `Publication` |
+| `metadata` [Required] `meta/v1.ObjectMeta` | Refer to the Kubernetes API documentation for the fields of the `metadata` field. |
+| `spec` [Required] [`PublicationSpec`](#postgresql-k8s-enterprisedb-io-v1-PublicationSpec) | No description provided. |
+| `status` [Required] [`PublicationStatus`](#postgresql-k8s-enterprisedb-io-v1-PublicationStatus) | No description provided. |
+
+
+
+
+
+
+## ScheduledBackup
+
+ScheduledBackup is the Schema for the scheduledbackups API
+
+
+| Field | Description |
+| ----- | ----------- |
+| `apiVersion` [Required] `string` | `postgresql.k8s.enterprisedb.io/v1` |
+| `kind` [Required] `string` | `ScheduledBackup` |
+| `metadata` [Required] `meta/v1.ObjectMeta` | Refer to the Kubernetes API documentation for the fields of the `metadata` field. |
+| `spec` [Required] [`ScheduledBackupSpec`](#postgresql-k8s-enterprisedb-io-v1-ScheduledBackupSpec) | Specification of the desired behavior of the ScheduledBackup. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status |
+| `status` [`ScheduledBackupStatus`](#postgresql-k8s-enterprisedb-io-v1-ScheduledBackupStatus) | Most recently observed status of the ScheduledBackup. This data may not be up to date. Populated by the system. Read-only. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status |
+
+
+
+
+
+
+## Subscription
+
+Subscription is the Schema for the subscriptions API
+
+
+| Field | Description |
+| ----- | ----------- |
+| `apiVersion` [Required] `string` | `postgresql.k8s.enterprisedb.io/v1` |
+| `kind` [Required] `string` | `Subscription` |
+| `metadata` [Required] `meta/v1.ObjectMeta` | Refer to the Kubernetes API documentation for the fields of the `metadata` field. |
+| `spec` [Required] [`SubscriptionSpec`](#postgresql-k8s-enterprisedb-io-v1-SubscriptionSpec) | No description provided. |
+| `status` [Required] [`SubscriptionStatus`](#postgresql-k8s-enterprisedb-io-v1-SubscriptionStatus) | No description provided. |
+
+
+
+
+
+
+## AffinityConfiguration
+
+**Appears in:**
+
+- [ClusterSpec](#postgresql-k8s-enterprisedb-io-v1-ClusterSpec)
+
+AffinityConfiguration contains the info we need to create the
+affinity rules for Pods
+
+
+| Field | Description |
+| ----- | ----------- |
+| `enablePodAntiAffinity` `bool` | Activates anti-affinity for the pods. The operator will define pods anti-affinity unless this field is explicitly set to false. |
+| `topologyKey` `string` | TopologyKey to use for anti-affinity configuration. See k8s documentation for more info on that. |
+| `nodeSelector` `map[string]string` | NodeSelector is a map of key-value pairs used to define the nodes on which the pods can run. More info: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/ |
+| `nodeAffinity` `core/v1.NodeAffinity` | NodeAffinity describes node affinity scheduling rules for the pod. More info: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity |
+| `tolerations` `[]core/v1.Toleration` | Tolerations is a list of Tolerations that should be set for all the pods, in order to allow them to run on tainted nodes. More info: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/ |
+| `podAntiAffinityType` `string` | PodAntiAffinityType allows the user to decide whether pod anti-affinity between cluster instances has to be considered a strong requirement during scheduling or not. Allowed values are: "preferred" (default if empty) or "required". Setting it to "required" could lead to instances remaining pending until new Kubernetes nodes are added, if all the existing nodes don't match the required pod anti-affinity rule. More info: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity |
+| `additionalPodAntiAffinity` `core/v1.PodAntiAffinity` | AdditionalPodAntiAffinity allows to specify pod anti-affinity terms to be added to the ones generated by the operator if EnablePodAntiAffinity is set to true (default), or to be used exclusively if set to false. |
+| `additionalPodAffinity` `core/v1.PodAffinity` | AdditionalPodAffinity allows to specify pod affinity terms to be passed to all the cluster's pods. |
+
+
+
+
+
+
+## AvailableArchitecture
+
+**Appears in:**
+
+- [ClusterStatus](#postgresql-k8s-enterprisedb-io-v1-ClusterStatus)
+
+AvailableArchitecture represents the state of a cluster's architecture
+
+
+| Field | Description |
+| ----- | ----------- |
+| `goArch` [Required] `string` | GoArch is the name of the executable architecture. |
+| `hash` [Required] `string` | Hash is the hash of the executable. |
+
+
+
+
+
+
+## BackupConfiguration
+
+**Appears in:**
+
+- [ClusterSpec](#postgresql-k8s-enterprisedb-io-v1-ClusterSpec)
+
+BackupConfiguration defines how the backups of the cluster are taken.
+The supported backup methods are BarmanObjectStore and VolumeSnapshot.
+For details and examples refer to the Backup and Recovery section of the
+documentation.
+
+
+| Field | Description |
+| ----- | ----------- |
+| `volumeSnapshot` [`VolumeSnapshotConfiguration`](#postgresql-k8s-enterprisedb-io-v1-VolumeSnapshotConfiguration) | VolumeSnapshot provides the configuration for the execution of volume snapshot backups. |
+| `barmanObjectStore` `github.com/cloudnative-pg/barman-cloud/pkg/api.BarmanObjectStoreConfiguration` | The configuration for the barman-cloud tool suite. |
+| `retentionPolicy` `string` | RetentionPolicy is the retention policy to be used for backups and WALs (i.e. '60d'). The retention policy is expressed in the form of `XXu`, where `XX` is a positive integer and `u` is one of `[dwm]` (days, weeks, months). It's currently only applicable when using the BarmanObjectStore method. |
+| `target` [`BackupTarget`](#postgresql-k8s-enterprisedb-io-v1-BackupTarget) | The policy to decide which instance should perform backups. Available options are the empty string (which defaults to the `prefer-standby` policy), `primary` to have backups always run on primary instances, and `prefer-standby` to have backups run preferably on the most updated standby, if available. |
+
+
+
+
+
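+As a hedged example of how these fields combine in a `Cluster` manifest (bucket, path, and secret names are illustrative, not from this reference):
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+  name: cluster-example
+spec:
+  instances: 3
+  storage:
+    size: 1Gi
+  backup:
+    # Keep backups and WALs for 30 days, preferring standbys as backup target
+    retentionPolicy: "30d"
+    target: prefer-standby
+    barmanObjectStore:
+      destinationPath: s3://my-bucket/backups
+      s3Credentials:
+        accessKeyId:
+          name: aws-creds
+          key: ACCESS_KEY_ID
+        secretAccessKey:
+          name: aws-creds
+          key: ACCESS_SECRET_KEY
+```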
+
+## BackupMethod
+
+(Alias of `string`)
+
+**Appears in:**
+
+- [BackupSpec](#postgresql-k8s-enterprisedb-io-v1-BackupSpec)
+
+- [BackupStatus](#postgresql-k8s-enterprisedb-io-v1-BackupStatus)
+
+- [ScheduledBackupSpec](#postgresql-k8s-enterprisedb-io-v1-ScheduledBackupSpec)
+
+BackupMethod defines the way of executing the physical base backups of
+the selected PostgreSQL instance
+
+
+
+## BackupPhase
+
+(Alias of `string`)
+
+**Appears in:**
+
+- [BackupStatus](#postgresql-k8s-enterprisedb-io-v1-BackupStatus)
+
+BackupPhase is the phase of the backup
+
+
+
+## BackupPluginConfiguration
+
+**Appears in:**
+
+- [BackupSpec](#postgresql-k8s-enterprisedb-io-v1-BackupSpec)
+
+- [ScheduledBackupSpec](#postgresql-k8s-enterprisedb-io-v1-ScheduledBackupSpec)
+
+BackupPluginConfiguration contains the backup configuration used by
+the backup plugin
+
+
+| Field | Description |
+| ----- | ----------- |
+| `name` [Required] (`string`) | Name is the name of the plugin managing this backup |
+| `parameters` (`map[string]string`) | Parameters are the configuration parameters passed to the backup plugin for this backup |
+
+
+
+
+
+
+## BackupSnapshotElementStatus
+
+**Appears in:**
+
+- [BackupSnapshotStatus](#postgresql-k8s-enterprisedb-io-v1-BackupSnapshotStatus)
+
+BackupSnapshotElementStatus is a volume snapshot that is part of a volume snapshot method backup
+
+
+| Field | Description |
+| ----- | ----------- |
+| `name` [Required] (`string`) | Name is the snapshot resource name |
+| `type` [Required] (`string`) | Type is the role of the snapshot in the cluster, such as PG_DATA, PG_WAL and PG_TABLESPACE |
+| `tablespaceName` (`string`) | TablespaceName is the name of the snapshotted tablespace. Only set when type is PG_TABLESPACE |
+
+
+
+
+
+
+## BackupSnapshotStatus
+
+**Appears in:**
+
+- [BackupStatus](#postgresql-k8s-enterprisedb-io-v1-BackupStatus)
+
+BackupSnapshotStatus contains the fields exclusive to the volumeSnapshot backup method
+
+
+
+
+
+## BackupSource
+
+**Appears in:**
+
+- [BootstrapRecovery](#postgresql-k8s-enterprisedb-io-v1-BootstrapRecovery)
+
+BackupSource contains the backup we need to restore from, plus some
+information that could be needed to correctly restore it.
+
+
+
+
+
+## BackupSpec
+
+**Appears in:**
+
+- [Backup](#postgresql-k8s-enterprisedb-io-v1-Backup)
+
+BackupSpec defines the desired state of Backup
+
+
+| Field | Description |
+| ----- | ----------- |
+| `cluster` [Required] (`github.com/cloudnative-pg/machinery/pkg/api.LocalObjectReference`) | The cluster to backup |
+| `target` (`BackupTarget`) | The policy to decide which instance should perform this backup. If empty, it defaults to cluster.spec.backup.target. Available options are empty string, primary and prefer-standby: primary to have backups always run on primary instances, prefer-standby to have backups run preferably on the most updated standby, if available. |
+| `method` (`BackupMethod`) | The backup method to be used; possible options are barmanObjectStore, volumeSnapshot or plugin. Defaults to: barmanObjectStore. |
+| `pluginConfiguration` (`BackupPluginConfiguration`) | Configuration parameters passed to the plugin managing this backup |
+| `online` (`bool`) | Whether the default type of backup with volume snapshots is online/hot (true, default) or offline/cold (false). Overrides the default setting specified in the cluster field '.spec.backup.volumeSnapshot.online' |
+| `onlineConfiguration` (`OnlineConfiguration`) | Configuration parameters to control the online/hot backup with volume snapshots. Overrides the default settings specified in the cluster '.backup.volumeSnapshot.onlineConfiguration' stanza |
+
+
+
+
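+For instance, a minimal on-demand `Backup` resource using these fields might look like this (resource and cluster names are illustrative):
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Backup
+metadata:
+  name: backup-example
+spec:
+  cluster:
+    name: cluster-example
+  method: barmanObjectStore
+  target: prefer-standby
+```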
+
+
+## BackupStatus
+
+**Appears in:**
+
+- [Backup](#postgresql-k8s-enterprisedb-io-v1-Backup)
+
+BackupStatus defines the observed state of Backup
+
+
+| Field | Description |
+| ----- | ----------- |
+| `BarmanCredentials` (`github.com/cloudnative-pg/barman-cloud/pkg/api.BarmanCredentials`) | (Members of BarmanCredentials are embedded into this type.) The potential credentials for each cloud provider |
+| `majorVersion` [Required] (`int`) | The PostgreSQL major version that was running when the backup was taken. |
+| `endpointCA` (`github.com/cloudnative-pg/machinery/pkg/api.SecretKeySelector`) | EndpointCA stores the CA bundle of the barman endpoint. Useful when using self-signed certificates to avoid errors with certificate issuer and barman-cloud-wal-archive. |
+| `endpointURL` (`string`) | Endpoint to be used to upload data to the cloud, overriding the automatic endpoint discovery |
+| `destinationPath` (`string`) | The path where to store the backup (i.e. s3://bucket/path/to/folder); this path, with different destination folders, will be used for WALs and for data. This may not be populated in case of errors. |
+| `serverName` (`string`) | The server name on S3; the cluster name is used if this parameter is omitted |
+| `encryption` (`string`) | Encryption method required by the S3 API |
+| `backupId` (`string`) | The ID of the Barman backup |
+| `backupName` (`string`) | The Name of the Barman backup |
+| `phase` (`BackupPhase`) | The last backup status |
+| `startedAt` (`meta/v1.Time`) | When the backup was started |
+| `stoppedAt` (`meta/v1.Time`) | When the backup was terminated |
+| `beginWal` (`string`) | The starting WAL |
+| `endWal` (`string`) | The ending WAL |
+| `beginLSN` (`string`) | The starting xlog |
+| `endLSN` (`string`) | The ending xlog |
+| `error` (`string`) | The detected error |
+| `commandOutput` (`string`) | Unused. Retained for compatibility with old versions. |
+| `commandError` (`string`) | The backup command output in case of error |
+| `backupLabelFile` (`[]byte`) | Backup label file content as returned by Postgres in case of online (hot) backups |
+| `tablespaceMapFile` (`[]byte`) | Tablespace map file content as returned by Postgres in case of online (hot) backups |
+| `instanceID` (`InstanceID`) | Information to identify the instance where the backup has been taken from |
+| `snapshotBackupStatus` (`BackupSnapshotStatus`) | Status of the volumeSnapshot backup |
+| `method` (`BackupMethod`) | The backup method being used |
+| `online` (`bool`) | Whether the backup was online/hot (true) or offline/cold (false) |
+| `pluginMetadata` (`map[string]string`) | A map containing the plugin metadata |
+
+
+
+
+
+
+## BackupTarget
+
+(Alias of `string`)
+
+**Appears in:**
+
+- [BackupConfiguration](#postgresql-k8s-enterprisedb-io-v1-BackupConfiguration)
+
+- [BackupSpec](#postgresql-k8s-enterprisedb-io-v1-BackupSpec)
+
+- [ScheduledBackupSpec](#postgresql-k8s-enterprisedb-io-v1-ScheduledBackupSpec)
+
+BackupTarget describes the preferred targets for a backup
+
+
+
+## BootstrapConfiguration
+
+**Appears in:**
+
+- [ClusterSpec](#postgresql-k8s-enterprisedb-io-v1-ClusterSpec)
+
+BootstrapConfiguration contains information about how to create the PostgreSQL
+cluster. Only a single bootstrap method can be defined among the supported
+ones. initdb will be used as the bootstrap method if left
+unspecified. Refer to the Bootstrap page of the documentation for more
+information.
+
+
+| Field | Description |
+| ----- | ----------- |
+| `initdb` (`BootstrapInitDB`) | Bootstrap the cluster via initdb |
+| `recovery` (`BootstrapRecovery`) | Bootstrap the cluster from a backup |
+| `pg_basebackup` (`BootstrapPgBaseBackup`) | Bootstrap the cluster taking a physical backup of another compatible PostgreSQL instance |
+
+
+
+
+
+
+## BootstrapInitDB
+
+**Appears in:**
+
+- [BootstrapConfiguration](#postgresql-k8s-enterprisedb-io-v1-BootstrapConfiguration)
+
+BootstrapInitDB is the configuration of the bootstrap process when
+initdb is used
+Refer to the Bootstrap page of the documentation for more information.
+
+
+| Field | Description |
+| ----- | ----------- |
+| `database` (`string`) | Name of the database used by the application. Default: app. |
+| `owner` (`string`) | Name of the owner of the database in the instance to be used by applications. Defaults to the value of the database key. |
+| `secret` (`github.com/cloudnative-pg/machinery/pkg/api.LocalObjectReference`) | Name of the secret containing the initial credentials for the owner of the user database. If empty, a new secret will be created from scratch |
+| `redwood` (`bool`) | If we need to enable/disable Redwood compatibility. Requires EPAS and for EPAS defaults to true |
+| `options` (`[]string`) | The list of options that must be passed to initdb when creating the cluster. Deprecated: this could lead to inconsistent configurations; please use the explicitly provided parameters instead. If defined, explicit values will be ignored. |
+| `dataChecksums` (`bool`) | Whether the -k option should be passed to initdb, enabling checksums on data pages (default: false) |
+| `encoding` (`string`) | The value to be passed as option --encoding for initdb (default: UTF8) |
+| `localeCollate` (`string`) | The value to be passed as option --lc-collate for initdb (default: C) |
+| `localeCType` (`string`) | The value to be passed as option --lc-ctype for initdb (default: C) |
+| `locale` (`string`) | Sets the default collation order and character classification in the new database. |
+| `localeProvider` (`string`) | This option sets the locale provider for databases created in the new cluster. Available from PostgreSQL 16. |
+| `icuLocale` (`string`) | Specifies the ICU locale when the ICU provider is used. This option requires localeProvider to be set to icu. Available from PostgreSQL 15. |
+| `icuRules` (`string`) | Specifies additional collation rules to customize the behavior of the default collation. This option requires localeProvider to be set to icu. Available from PostgreSQL 16. |
+| `builtinLocale` (`string`) | Specifies the locale name when the builtin provider is used. This option requires localeProvider to be set to builtin. Available from PostgreSQL 17. |
+| `walSegmentSize` (`int`) | The value in megabytes (1 to 1024) to be passed to the --wal-segsize option for initdb (default: empty, resulting in the PostgreSQL default: 16MB) |
+| `postInitSQL` (`[]string`) | List of SQL queries to be executed as a superuser in the postgres database right after the cluster has been created - to be used with extreme care (by default empty) |
+| `postInitApplicationSQL` (`[]string`) | List of SQL queries to be executed as a superuser in the application database right after the cluster has been created - to be used with extreme care (by default empty) |
+| `postInitTemplateSQL` (`[]string`) | List of SQL queries to be executed as a superuser in the template1 database right after the cluster has been created - to be used with extreme care (by default empty) |
+| `import` (`Import`) | Bootstraps the new cluster by importing data from an existing PostgreSQL instance using logical backup (pg_dump and pg_restore) |
+| `postInitApplicationSQLRefs` (`SQLRefs`) | List of references to ConfigMaps or Secrets containing SQL files to be executed as a superuser in the application database right after the cluster has been created. The references are processed in a specific order: first, all Secrets are processed, followed by all ConfigMaps. Within each group, the processing order follows the sequence specified in their respective arrays. (by default empty) |
+| `postInitTemplateSQLRefs` (`SQLRefs`) | List of references to ConfigMaps or Secrets containing SQL files to be executed as a superuser in the template1 database right after the cluster has been created. The references are processed in a specific order: first, all Secrets are processed, followed by all ConfigMaps. Within each group, the processing order follows the sequence specified in their respective arrays. (by default empty) |
+| `postInitSQLRefs` (`SQLRefs`) | List of references to ConfigMaps or Secrets containing SQL files to be executed as a superuser in the postgres database right after the cluster has been created. The references are processed in a specific order: first, all Secrets are processed, followed by all ConfigMaps. Within each group, the processing order follows the sequence specified in their respective arrays. (by default empty) |
+
+
+
+
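+A minimal sketch of an `initdb` bootstrap using some of the fields above (names and queries are illustrative):
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+  name: cluster-example
+spec:
+  instances: 3
+  storage:
+    size: 1Gi
+  bootstrap:
+    initdb:
+      database: app
+      owner: app
+      dataChecksums: true
+      encoding: UTF8
+      postInitSQL:
+        - CREATE EXTENSION IF NOT EXISTS pg_stat_statements
+```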
+
+
+## BootstrapPgBaseBackup
+
+**Appears in:**
+
+- [BootstrapConfiguration](#postgresql-k8s-enterprisedb-io-v1-BootstrapConfiguration)
+
+BootstrapPgBaseBackup contains the configuration required to take
+a physical backup of an existing PostgreSQL cluster
+
+
+| Field | Description |
+| ----- | ----------- |
+| `source` [Required] (`string`) | The name of the server of which we need to take a physical backup |
+| `database` (`string`) | Name of the database used by the application. Default: app. |
+| `owner` (`string`) | Name of the owner of the database in the instance to be used by applications. Defaults to the value of the database key. |
+| `secret` (`github.com/cloudnative-pg/machinery/pkg/api.LocalObjectReference`) | Name of the secret containing the initial credentials for the owner of the user database. If empty, a new secret will be created from scratch |
+
+
+
+
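+A hedged sketch of a `pg_basebackup` bootstrap; the `externalClusters` entry and its secret names follow the conventions of the Bootstrap documentation and are illustrative:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+  name: cluster-clone
+spec:
+  instances: 3
+  storage:
+    size: 1Gi
+  bootstrap:
+    pg_basebackup:
+      # Must match the name of an entry in externalClusters
+      source: cluster-example
+  externalClusters:
+    - name: cluster-example
+      connectionParameters:
+        host: cluster-example-rw
+        user: streaming_replica
+      sslKey:
+        name: cluster-example-replication
+        key: tls.key
+      sslCert:
+        name: cluster-example-replication
+        key: tls.crt
+      sslRootCert:
+        name: cluster-example-ca
+        key: ca.crt
+```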
+
+
+## BootstrapRecovery
+
+**Appears in:**
+
+- [BootstrapConfiguration](#postgresql-k8s-enterprisedb-io-v1-BootstrapConfiguration)
+
+BootstrapRecovery contains the configuration required to restore
+from an existing cluster using 3 methodologies: external cluster,
+volume snapshots or backup objects. Full recovery and Point-In-Time
+Recovery are supported.
+The method can also be used to create clusters in continuous recovery
+(replica clusters), also supporting cascading replication when instances > 1.
+Once the cluster exits recovery, the password for the superuser
+will be changed through the provided secret.
+Refer to the Bootstrap page of the documentation for more information.
+
+
+
+| Field | Description |
+| ----- | ----------- |
+| `backup` (`BackupSource`) | The backup object containing the physical base backup from which to initiate the recovery procedure. Mutually exclusive with source and volumeSnapshots. |
+| `source` (`string`) | The external cluster whose backup we will restore. This is also used as the name of the folder under which the backup is stored, so it must be set to the name of the source cluster. Mutually exclusive with backup. |
+| `volumeSnapshots` (`DataSource`) | The static PVC data source(s) from which to initiate the recovery procedure. Currently supporting VolumeSnapshot and PersistentVolumeClaim resources that map an existing PVC group, compatible with {{name.ln}}, and taken with a cold backup copy on a fenced Postgres instance (limitation which will be removed in the future when online backup will be implemented). Mutually exclusive with backup. |
+| `recoveryTarget` (`RecoveryTarget`) | By default, the recovery process applies all the available WAL files in the archive (full recovery). However, you can also end the recovery as soon as a consistent state is reached or recover to a point-in-time (PITR) by specifying a RecoveryTarget object, as expected by PostgreSQL (i.e., timestamp, transaction Id, LSN, ...). More info: https://www.postgresql.org/docs/current/runtime-config-wal.html#RUNTIME-CONFIG-WAL-RECOVERY-TARGET |
+| `database` (`string`) | Name of the database used by the application. Default: app. |
+| `owner` (`string`) | Name of the owner of the database in the instance to be used by applications. Defaults to the value of the database key. |
+| `secret` (`github.com/cloudnative-pg/machinery/pkg/api.LocalObjectReference`) | Name of the secret containing the initial credentials for the owner of the user database. If empty, a new secret will be created from scratch |
+
+
+
+
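+A hedged sketch of a point-in-time recovery from an object-store backup of an external cluster (bucket, secret names, and the target time are illustrative):
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+  name: cluster-restored
+spec:
+  instances: 3
+  storage:
+    size: 1Gi
+  bootstrap:
+    recovery:
+      source: cluster-example
+      recoveryTarget:
+        # Stop replaying WALs at this point in time (PITR)
+        targetTime: "2025-01-01 12:00:00.000000+00"
+  externalClusters:
+    - name: cluster-example
+      barmanObjectStore:
+        destinationPath: s3://my-bucket/backups
+        s3Credentials:
+          accessKeyId:
+            name: aws-creds
+            key: ACCESS_KEY_ID
+          secretAccessKey:
+            name: aws-creds
+            key: ACCESS_SECRET_KEY
+```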
+
+
+## CatalogImage
+
+**Appears in:**
+
+- [ImageCatalogSpec](#postgresql-k8s-enterprisedb-io-v1-ImageCatalogSpec)
+
+CatalogImage defines the image and major version
+
+
+| Field | Description |
+| ----- | ----------- |
+| `image` [Required] (`string`) | The image reference |
+| `major` [Required] (`int`) | The PostgreSQL major version of the image. Must be unique within the catalog. |
+
+
+
+
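+`CatalogImage` entries are listed under `spec.images` of an `ImageCatalog` (or `ClusterImageCatalog`); a sketch with illustrative image tags:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: ImageCatalog
+metadata:
+  name: postgresql-catalog
+spec:
+  images:
+    - major: 16
+      image: quay.io/enterprisedb/postgresql:16.4
+    - major: 17
+      image: quay.io/enterprisedb/postgresql:17.0
+```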
+
+
+## CertificatesConfiguration
+
+**Appears in:**
+
+- [CertificatesStatus](#postgresql-k8s-enterprisedb-io-v1-CertificatesStatus)
+
+- [ClusterSpec](#postgresql-k8s-enterprisedb-io-v1-ClusterSpec)
+
+CertificatesConfiguration contains the needed configurations to handle server certificates.
+
+
+| Field | Description |
+| ----- | ----------- |
+| `serverCASecret` (`string`) | The secret containing the Server CA certificate. If not defined, a new secret will be created with a self-signed CA and will be used to generate the TLS certificate ServerTLSSecret.<br/><br/>Contains:<br/>- `ca.crt`: CA that should be used to validate the server certificate, used as `sslrootcert` in client connection strings.<br/>- `ca.key`: key used to generate Server SSL certs; if ServerTLSSecret is provided, this can be omitted. |
+| `serverTLSSecret` (`string`) | The secret of type kubernetes.io/tls containing the server TLS certificate and key that will be set as ssl_cert_file and ssl_key_file so that clients can connect to postgres securely. If not defined, ServerCASecret must also provide ca.key, and a new secret will be created using the provided CA. |
+| `replicationTLSSecret` (`string`) | The secret of type kubernetes.io/tls containing the client certificate to authenticate as the streaming_replica user. If not defined, ClientCASecret must also provide ca.key, and a new secret will be created using the provided CA. |
+| `clientCASecret` (`string`) | The secret containing the Client CA certificate. If not defined, a new secret will be created with a self-signed CA and will be used to generate all the client certificates.<br/><br/>Contains:<br/>- `ca.crt`: CA that should be used to validate the client certificates, used as `ssl_ca_file` of all the instances.<br/>- `ca.key`: key used to generate client certificates; if ReplicationTLSSecret is provided, this can be omitted. |
+| `serverAltDNSNames` (`[]string`) | The list of the server alternative DNS names to be added to the generated server TLS certificates, when required. |
+
+
+
+
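+A hedged sketch of custom server certificates wired through `.spec.certificates` (secret names and DNS names are illustrative):
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+  name: cluster-example
+spec:
+  instances: 3
+  storage:
+    size: 1Gi
+  certificates:
+    serverCASecret: my-server-ca
+    serverTLSSecret: my-server-tls
+    serverAltDNSNames:
+      - cluster-example.db.example.com
+```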
+
+
+## CertificatesStatus
+
+**Appears in:**
+
+- [ClusterStatus](#postgresql-k8s-enterprisedb-io-v1-ClusterStatus)
+
+CertificatesStatus contains configuration certificates and related expiration dates.
+
+
+| Field | Description |
+| ----- | ----------- |
+| `CertificatesConfiguration` (`CertificatesConfiguration`) | (Members of CertificatesConfiguration are embedded into this type.) Needed configurations to handle server certificates, initialized with default values, if needed. |
+| `expirations` (`map[string]string`) | Expiration dates for all certificates. |
+
+
+
+
+
+
+## ClusterMonitoringTLSConfiguration
+
+**Appears in:**
+
+- [MonitoringConfiguration](#postgresql-k8s-enterprisedb-io-v1-MonitoringConfiguration)
+
+ClusterMonitoringTLSConfiguration is the type containing the TLS configuration
+for the cluster's monitoring
+
+
+| Field | Description |
+| ----- | ----------- |
+| `enabled` (`bool`) | Enable TLS for the monitoring endpoint. Changing this option will force a rollout of all instances. |
+
+
+
+
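+Assuming this type is exposed as the `tls` stanza of `.spec.monitoring` (see `MonitoringConfiguration`), enabling TLS on the metrics endpoint would look like:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+  name: cluster-example
+spec:
+  instances: 3
+  storage:
+    size: 1Gi
+  monitoring:
+    tls:
+      # Forces a rollout of all instances when changed
+      enabled: true
+```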
+
+
+## ClusterSpec
+
+**Appears in:**
+
+- [Cluster](#postgresql-k8s-enterprisedb-io-v1-Cluster)
+
+ClusterSpec defines the desired state of a PostgreSQL cluster managed by
+{{name.ln}}.
+
+
+| Field | Description |
+| ----- | ----------- |
+| `description` (`string`) | Description of this PostgreSQL cluster |
+| `inheritedMetadata` (`EmbeddedObjectMetadata`) | Metadata that will be inherited by all objects related to the Cluster |
+| `imageName` (`string`) | Name of the container image, supporting both tags (`<image>:<tag>`) and digests for deterministic and repeatable deployments (`<image>:<tag>@sha256:<digestValue>`) |
+| `imageCatalogRef` (`ImageCatalogRef`) | Defines the major PostgreSQL version we want to use within an ImageCatalog |
+| `imagePullPolicy` (`core/v1.PullPolicy`) | Image pull policy. One of Always, Never or IfNotPresent. If not defined, it defaults to IfNotPresent. Cannot be updated. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images |
+| `schedulerName` (`string`) | If specified, the pod will be dispatched by the specified Kubernetes scheduler. If not specified, the pod will be dispatched by the default scheduler. More info: https://kubernetes.io/docs/concepts/scheduling-eviction/kube-scheduler/ |
+| `postgresUID` (`int64`) | The UID of the postgres user inside the image, defaults to 26 |
+| `postgresGID` (`int64`) | The GID of the postgres user inside the image, defaults to 26 |
+| `instances` [Required] (`int`) | Number of instances required in the cluster |
+| `minSyncReplicas` (`int`) | Minimum number of instances required in synchronous replication with the primary. Undefined or 0 allow writes to complete when no standby is available. |
+| `maxSyncReplicas` (`int`) | The target value for the synchronous replication quorum, that can be decreased if the number of ready standbys is lower than this. Undefined or 0 disable synchronous replication. |
+| `postgresql` (`PostgresConfiguration`) | Configuration of the PostgreSQL server |
+| `replicationSlots` (`ReplicationSlotsConfiguration`) | Replication slots management configuration |
+| `bootstrap` (`BootstrapConfiguration`) | Instructions to bootstrap this cluster |
+| `replica` (`ReplicaClusterConfiguration`) | Replica cluster configuration |
+| `superuserSecret` (`github.com/cloudnative-pg/machinery/pkg/api.LocalObjectReference`) | The secret containing the superuser password. If not defined, a new secret will be created with a randomly generated password |
+| `enableSuperuserAccess` (`bool`) | When this option is enabled, the operator will use the SuperuserSecret to update the postgres user password (if the secret is not present, the operator will automatically create one). When this option is disabled, the operator will ignore the SuperuserSecret content, delete it when automatically created, and then blank the password of the postgres user by setting it to NULL. Disabled by default. |
+| `certificates` (`CertificatesConfiguration`) | The configuration for the CA and related certificates |
+| `imagePullSecrets` (`[]github.com/cloudnative-pg/machinery/pkg/api.LocalObjectReference`) | The list of pull secrets to be used to pull the images. If the license key contains a pull secret, that secret will be automatically included. |
+| `storage` (`StorageConfiguration`) | Configuration of the storage of the instances |
+| `serviceAccountTemplate` (`ServiceAccountTemplate`) | Configure the generation of the service account |
+| `walStorage` (`StorageConfiguration`) | Configuration of the storage for PostgreSQL WAL (Write-Ahead Log) |
+| `ephemeralVolumeSource` (`core/v1.EphemeralVolumeSource`) | EphemeralVolumeSource allows the user to configure the source of ephemeral volumes. |
+| `startDelay` (`int32`) | The time in seconds that is allowed for a PostgreSQL instance to successfully start up (default 3600). The startup probe failure threshold is derived from this value using the formula: ceiling(startDelay / 10). |
+| `stopDelay` (`int32`) | The time in seconds that is allowed for a PostgreSQL instance to gracefully shut down (default 1800) |
+| `smartStopDelay` (`int32`) | Deprecated: please use SmartShutdownTimeout instead |
+| `smartShutdownTimeout` (`int32`) | The time in seconds that controls the window of time reserved for the smart shutdown of Postgres to complete. Make sure you reserve enough time for the operator to request a fast shutdown of Postgres (that is: stopDelay - smartShutdownTimeout). Default is 180 seconds. |
+| `switchoverDelay` (`int32`) | The time in seconds that is allowed for a primary PostgreSQL instance to gracefully shut down during a switchover. Default value is 3600 seconds (1 hour). |
+| `failoverDelay` (`int32`) | The amount of time (in seconds) to wait before triggering a failover after the primary PostgreSQL instance in the cluster was detected to be unhealthy |
+| `livenessProbeTimeout` (`int32`) | LivenessProbeTimeout is the time (in seconds) that is allowed for a PostgreSQL instance to successfully respond to the liveness probe (default 30). The liveness probe failure threshold is derived from this value using the formula: ceiling(livenessProbeTimeout / 10). |
+| `affinity` (`AffinityConfiguration`) | Affinity/Anti-affinity rules for Pods |
+| `topologySpreadConstraints` (`[]core/v1.TopologySpreadConstraint`) | TopologySpreadConstraints specifies how to spread matching pods among the given topology. More info: https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/ |
+| `resources` (`core/v1.ResourceRequirements`) | Resources requirements of every generated Pod. Please refer to https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ for more information. |
+| `ephemeralVolumesSizeLimit` (`EphemeralVolumesSizeLimitConfiguration`) | EphemeralVolumesSizeLimit allows the user to set the limits for the ephemeral volumes |
+| `priorityClassName` (`string`) | Name of the priority class which will be used in every generated Pod; if the PriorityClass specified does not exist, the pod will not be able to schedule. Please refer to https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/#priorityclass for more information |
+| `primaryUpdateStrategy` (`PrimaryUpdateStrategy`) | Deployment strategy to follow to upgrade the primary server during a rolling update procedure, after all replicas have been successfully updated: it can be automated (unsupervised - default) or manual (supervised) |
+| `primaryUpdateMethod` (`PrimaryUpdateMethod`) | Method to follow to upgrade the primary server during a rolling update procedure, after all replicas have been successfully updated: it can be with a switchover (switchover) or in-place (restart - default) |
+| `backup` (`BackupConfiguration`) | The configuration to be used for backups |
+| `nodeMaintenanceWindow` (`NodeMaintenanceWindow`) | Define a maintenance window for the Kubernetes nodes |
+| `licenseKey` (`string`) | The license key of the cluster. When empty, the cluster operates in trial mode and after the expiry date (default 30 days) the operator will cease any reconciliation attempt. For details, please refer to the license agreement that comes with the operator. |
+| `licenseKeySecret` (`core/v1.SecretKeySelector`) | The reference to the license key. When this is set, it takes precedence over LicenseKey. |
+| `monitoring` (`MonitoringConfiguration`) | The configuration of the monitoring infrastructure of this cluster |
+| `externalClusters` (`[]ExternalCluster`) | The list of external clusters which are used in the configuration |
+| `logLevel` (`string`) | The instances' log level, one of the following values: error, warning, info (default), debug, trace |
+| `projectedVolumeTemplate` (`core/v1.ProjectedVolumeSource`) | Template to be used to define projected volumes; projected volumes will be mounted under the /projected base folder |
+| `env` (`[]core/v1.EnvVar`) | Env follows the Env format to pass environment variables to the pods created in the cluster |
+| `envFrom` (`[]core/v1.EnvFromSource`) | EnvFrom follows the EnvFrom format to pass environment variables sources to the pods, to be used by Env |
+| `managed` (`ManagedConfiguration`) | The configuration that is used by the portions of PostgreSQL that are managed by the instance manager |
+| `seccompProfile` (`core/v1.SeccompProfile`) | The SeccompProfile applied to every Pod and Container. Defaults to: RuntimeDefault |
+| `tablespaces` (`[]TablespaceConfiguration`) | The tablespaces configuration |
+| `enablePDB` (`bool`) | Manage the PodDisruptionBudget resources within the cluster. When configured as true (default setting), the pod disruption budgets will safeguard the primary node from being terminated. Conversely, setting it to false will result in the absence of any PodDisruptionBudget resource, permitting the shutdown of all nodes hosting the PostgreSQL cluster. This latter configuration is advisable for any PostgreSQL cluster employed for development/staging purposes. |
+| `plugins` (`[]PluginConfiguration`) | The plugins configuration, containing any plugin to be loaded with the corresponding configuration |
+| `probes` (`ProbesConfiguration`) | The configuration of the probes to be injected in the PostgreSQL Pods. |
+
+
+
+
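+A minimal `Cluster` manifest touching a few of the fields above (image tag and sizes are illustrative):
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+  name: cluster-example
+spec:
+  instances: 3
+  imageName: quay.io/enterprisedb/postgresql:17.0
+  primaryUpdateStrategy: unsupervised
+  primaryUpdateMethod: switchover
+  enableSuperuserAccess: false
+  logLevel: info
+  resources:
+    requests:
+      cpu: "1"
+      memory: 1Gi
+  storage:
+    size: 10Gi
+```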
+
+
+## ClusterStatus
+
+**Appears in:**
+
+- [Cluster](#postgresql-k8s-enterprisedb-io-v1-Cluster)
+
+ClusterStatus defines the observed state of a PostgreSQL cluster managed by
+{{name.ln}}.
+
+
+| Field | Description |
+| ----- | ----------- |
+| `instances` (`int`) | The total number of PVC Groups detected in the cluster. It may differ from the number of existing instance pods. |
+| `readyInstances` (`int`) | The total number of ready instances in the cluster. It is equal to the number of ready instance pods. |
+| `instancesStatus` (`map[PodStatus][]string`) | InstancesStatus indicates in which status the instances are |
+| `instancesReportedState` (`map[PodName]InstanceReportedState`) | The reported state of the instances during the last reconciliation loop |
+| `managedRolesStatus` (`ManagedRoles`) | ManagedRolesStatus reports the state of the managed roles in the cluster |
+| `tablespacesStatus` (`[]TablespaceState`) | TablespacesStatus reports the state of the declarative tablespaces in the cluster |
+| `timelineID` (`int`) | The timeline of the Postgres cluster |
+| `topology` (`Topology`) | Instances topology. |
+| `latestGeneratedNode` (`int`) | ID of the latest generated node (used to avoid node name clashing) |
+| `currentPrimary` (`string`) | Current primary instance |
+| `targetPrimary` (`string`) | Target primary instance; this is different from the previous one during a switchover or a failover |
+| `lastPromotionToken` (`string`) | LastPromotionToken is the last verified promotion token that was used to promote a replica cluster |
+| `pvcCount` (`int32`) | How many PVCs have been created by this cluster |
+| `jobCount` (`int32`) | How many Jobs have been created by this cluster |
+| `danglingPVC` (`[]string`) | List of all the PVCs created by this cluster and still available which are not attached to a Pod |
+| `resizingPVC` (`[]string`) | List of all the PVCs that have the ResizingPVC condition. |
+| `initializingPVC` (`[]string`) | List of all the PVCs that are being initialized by this cluster |
+| `healthyPVC` (`[]string`) | List of all the PVCs neither dangling nor initializing |
+| `unusablePVC` (`[]string`) | List of all the PVCs that are unusable because another PVC is missing |
+| `licenseStatus` (`github.com/EnterpriseDB/cloud-native-postgres/pkg/licensekey.Status`) | Status of the license |
+| `writeService` (`string`) | Current write pod |
+| `readService` (`string`) | Current list of read pods |
+| `phase` (`string`) | Current phase of the cluster |
+| `phaseReason` (`string`) | Reason for the current phase |
+| `secretsResourceVersion` (`SecretsResourceVersion`) | The list of resource versions of the secrets managed by the operator. Every change here is done in the interest of the instance manager, which will refresh the secret data |
+| `configMapResourceVersion` (`ConfigMapResourceVersion`) | The list of resource versions of the configmaps managed by the operator. Every change here is done in the interest of the instance manager, which will refresh the configmap data |
+| `certificates` (`CertificatesStatus`) | The configuration for the CA and related certificates, initialized with defaults. |
+| `firstRecoverabilityPoint` (`string`) | The first recoverability point, stored as a date in RFC3339 format. This field is calculated from the content of FirstRecoverabilityPointByMethod. Deprecated: the field is not set for backup plugins. |
+| `firstRecoverabilityPointByMethod` (`map[BackupMethod]meta/v1.Time`) | The first recoverability point, stored as a date in RFC3339 format, per backup method type. Deprecated: the field is not set for backup plugins. |
+| `lastSuccessfulBackup` (`string`) | Last successful backup, stored as a date in RFC3339 format. This field is calculated from the content of LastSuccessfulBackupByMethod. Deprecated: the field is not set for backup plugins. |
+| `lastSuccessfulBackupByMethod` (`map[BackupMethod]meta/v1.Time`) | Last successful backup, stored as a date in RFC3339 format, per backup method type. Deprecated: the field is not set for backup plugins. |
+| `lastFailedBackup` (`string`) | Last failed backup, stored as a date in RFC3339 format. Deprecated: the field is not set for backup plugins. |
+| `cloudNativePostgresqlCommitHash` (`string`) | The commit hash of the operator that is running |
+| `currentPrimaryTimestamp` (`string`) | The timestamp when the last actual promotion to primary has occurred |
+
+currentPrimaryFailingSinceTimestamp
+string
+ |
+
+ The timestamp when the primary was detected to be unhealthy
+This field is reported when .spec.failoverDelay is populated or during online upgrades
+ |
+
+targetPrimaryTimestamp
+string
+ |
+
+ The timestamp when the last request for a new primary has occurred
+ |
+
+poolerIntegrations
+PoolerIntegrations
+ |
+
+ The integration needed by poolers referencing the cluster
+ |
+
+cloudNativePostgresqlOperatorHash
+string
+ |
+
+ The hash of the binary of the operator
+ |
+
+availableArchitectures
+[]AvailableArchitecture
+ |
+
+ AvailableArchitectures reports the available architectures of a cluster
+ |
+
+conditions
+[]meta/v1.Condition
+ |
+
+ Conditions for cluster object
+ |
+
+instanceNames
+[]string
+ |
+
+ List of instance names in the cluster
+ |
+
+onlineUpdateEnabled
+bool
+ |
+
+ OnlineUpdateEnabled shows if the online upgrade is enabled inside the cluster
+ |
+
+image
+string
+ |
+
+ Image contains the image name used by the pods
+ |
+
+pgDataImageInfo
+ImageInfo
+ |
+
+ PGDataImageInfo contains the details of the latest image that has run on the current data directory.
+ |
+
+pluginStatus
+[]PluginStatus
+ |
+
+ PluginStatus is the status of the loaded plugins
+ |
+
+switchReplicaClusterStatus
+SwitchReplicaClusterStatus
+ |
+
+ SwitchReplicaClusterStatus is the status of the switch to replica cluster
+ |
+
+demotionToken
+string
+ |
+
+ DemotionToken is a JSON token containing the information
+from pg_controldata such as Database system identifier, Latest checkpoint's
+TimeLineID, Latest checkpoint's REDO location, Latest checkpoint's REDO
+WAL file, and Time of latest checkpoint
+ |
+
+systemID
+string
+ |
+
+ SystemID is the latest detected PostgreSQL SystemID
+ |
+
+
+
+
+
+
+## ConfigMapResourceVersion
+
+**Appears in:**
+
+- [ClusterStatus](#postgresql-k8s-enterprisedb-io-v1-ClusterStatus)
+
+ConfigMapResourceVersion contains the resource versions of the config maps
+managed by the operator
+
+
+| Field | Description |
+|-------|-------------|
+| `metrics`<br/>`map[string]string` | A map with the versions of all the config maps used to pass metrics. Map keys are the config map names, map values are the versions |
+
+
+
+
+
+
+## DataDurabilityLevel
+
+(Alias of `string`)
+
+**Appears in:**
+
+- [SynchronousReplicaConfiguration](#postgresql-k8s-enterprisedb-io-v1-SynchronousReplicaConfiguration)
+
+DataDurabilityLevel specifies how strictly to enforce synchronous replication
+when cluster instances are unavailable. Options are `required` or `preferred`.
+
+
+
+## DataSource
+
+**Appears in:**
+
+- [BootstrapRecovery](#postgresql-k8s-enterprisedb-io-v1-BootstrapRecovery)
+
+DataSource contains the configuration required to bootstrap a
+PostgreSQL cluster from existing storage
+
+
+
+
+
+## DatabaseObjectSpec
+
+**Appears in:**
+
+- [ExtensionSpec](#postgresql-k8s-enterprisedb-io-v1-ExtensionSpec)
+
+- [SchemaSpec](#postgresql-k8s-enterprisedb-io-v1-SchemaSpec)
+
+DatabaseObjectSpec contains the fields which are common to every
+database object
+
+
+| Field | Description |
+|-------|-------------|
+| `name` [Required]<br/>`string` | Name of the extension/schema |
+| `ensure`<br/>`EnsureOption` | Specifies whether an extension/schema should be present or absent in the database. If set to `present`, the extension/schema will be created if it does not exist. If set to `absent`, the extension/schema will be removed if it exists. |
+
+
+
+
+
+
+## DatabaseObjectStatus
+
+**Appears in:**
+
+- [DatabaseStatus](#postgresql-k8s-enterprisedb-io-v1-DatabaseStatus)
+
+DatabaseObjectStatus is the status of the managed database objects
+
+
+| Field | Description |
+|-------|-------------|
+| `name` [Required]<br/>`string` | The name of the object |
+| `applied` [Required]<br/>`bool` | True if the object has been installed successfully in the database |
+| `message`<br/>`string` | Message is the object reconciliation message |
+
+
+
+
+
+
+## DatabaseReclaimPolicy
+
+(Alias of `string`)
+
+**Appears in:**
+
+- [DatabaseSpec](#postgresql-k8s-enterprisedb-io-v1-DatabaseSpec)
+
+DatabaseReclaimPolicy describes a policy for end-of-life maintenance of databases.
+
+
+
+## DatabaseRoleRef
+
+**Appears in:**
+
+- [TablespaceConfiguration](#postgresql-k8s-enterprisedb-io-v1-TablespaceConfiguration)
+
+DatabaseRoleRef is a reference to a role available inside PostgreSQL
+
+
+| Field | Description |
+|-------|-------------|
+| `name`<br/>`string` | No description provided. |
+
+
+
+
+
+
+## DatabaseSpec
+
+**Appears in:**
+
+- [Database](#postgresql-k8s-enterprisedb-io-v1-Database)
+
+DatabaseSpec is the specification of a Postgresql Database, built around the
+CREATE DATABASE, ALTER DATABASE, and DROP DATABASE SQL commands of
+PostgreSQL.
+
+
+| Field | Description |
+|-------|-------------|
+| `cluster` [Required]<br/>`core/v1.LocalObjectReference` | The name of the PostgreSQL cluster hosting the database. |
+| `ensure`<br/>`EnsureOption` | Ensure the PostgreSQL database is present or absent - defaults to "present". |
+| `name` [Required]<br/>`string` | The name of the database to create inside PostgreSQL. This setting cannot be changed. |
+| `owner` [Required]<br/>`string` | Maps to the `OWNER` parameter of `CREATE DATABASE` and to the `OWNER TO` command of `ALTER DATABASE`. The role name of the user who owns the database inside PostgreSQL. |
+| `template`<br/>`string` | Maps to the `TEMPLATE` parameter of `CREATE DATABASE`. This setting cannot be changed. The name of the template from which to create this database. |
+| `encoding`<br/>`string` | Maps to the `ENCODING` parameter of `CREATE DATABASE`. This setting cannot be changed. Character set encoding to use in the database. |
+| `locale`<br/>`string` | Maps to the `LOCALE` parameter of `CREATE DATABASE`. This setting cannot be changed. Sets the default collation order and character classification in the new database. |
+| `localeProvider`<br/>`string` | Maps to the `LOCALE_PROVIDER` parameter of `CREATE DATABASE`. This setting cannot be changed. This option sets the locale provider for databases created in the new cluster. Available from PostgreSQL 16. |
+| `localeCollate`<br/>`string` | Maps to the `LC_COLLATE` parameter of `CREATE DATABASE`. This setting cannot be changed. |
+| `localeCType`<br/>`string` | Maps to the `LC_CTYPE` parameter of `CREATE DATABASE`. This setting cannot be changed. |
+| `icuLocale`<br/>`string` | Maps to the `ICU_LOCALE` parameter of `CREATE DATABASE`. This setting cannot be changed. Specifies the ICU locale when the ICU provider is used. This option requires `localeProvider` to be set to `icu`. Available from PostgreSQL 15. |
+| `icuRules`<br/>`string` | Maps to the `ICU_RULES` parameter of `CREATE DATABASE`. This setting cannot be changed. Specifies additional collation rules to customize the behavior of the default collation. This option requires `localeProvider` to be set to `icu`. Available from PostgreSQL 16. |
+| `builtinLocale`<br/>`string` | Maps to the `BUILTIN_LOCALE` parameter of `CREATE DATABASE`. This setting cannot be changed. Specifies the locale name when the builtin provider is used. This option requires `localeProvider` to be set to `builtin`. Available from PostgreSQL 17. |
+| `collationVersion`<br/>`string` | Maps to the `COLLATION_VERSION` parameter of `CREATE DATABASE`. This setting cannot be changed. |
+| `isTemplate`<br/>`bool` | Maps to the `IS_TEMPLATE` parameter of `CREATE DATABASE` and `ALTER DATABASE`. If true, this database is considered a template and can be cloned by any user with `CREATEDB` privileges. |
+| `allowConnections`<br/>`bool` | Maps to the `ALLOW_CONNECTIONS` parameter of `CREATE DATABASE` and `ALTER DATABASE`. If false, then no one can connect to this database. |
+| `connectionLimit`<br/>`int` | Maps to the `CONNECTION LIMIT` clause of `CREATE DATABASE` and `ALTER DATABASE`. How many concurrent connections can be made to this database. -1 (the default) means no limit. |
+| `tablespace`<br/>`string` | Maps to the `TABLESPACE` parameter of `CREATE DATABASE` and to the `SET TABLESPACE` command of `ALTER DATABASE`. The name of the tablespace (in PostgreSQL) that will be associated with the new database. This tablespace will be the default tablespace used for objects created in this database. |
+| `databaseReclaimPolicy`<br/>`DatabaseReclaimPolicy` | The policy for end-of-life maintenance of this database. |
+| `schemas`<br/>`[]SchemaSpec` | The list of schemas to be managed in the database |
+| `extensions`<br/>`[]ExtensionSpec` | The list of extensions to be managed in the database |
+
+
+
+
+
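+For illustration, a minimal `Database` manifest combining these fields might look like the following sketch (the cluster reference, database name, owner, and managed objects are hypothetical):
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Database
+metadata:
+  name: db-one
+spec:
+  cluster:
+    name: cluster-example
+  name: one
+  owner: app
+  schemas:
+    - name: reporting
+      ensure: present
+  extensions:
+    - name: bloom
+      ensure: present
+```
+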
+
+## DatabaseStatus
+
+**Appears in:**
+
+- [Database](#postgresql-k8s-enterprisedb-io-v1-Database)
+
+DatabaseStatus defines the observed state of Database
+
+
+| Field | Description |
+|-------|-------------|
+| `observedGeneration`<br/>`int64` | A sequence number representing the latest desired state that was synchronized |
+| `applied`<br/>`bool` | Applied is true if the database was reconciled correctly |
+| `message`<br/>`string` | Message is the reconciliation output message |
+| `schemas`<br/>`[]DatabaseObjectStatus` | Schemas is the status of the managed schemas |
+| `extensions`<br/>`[]DatabaseObjectStatus` | Extensions is the status of the managed extensions |
+
+
+
+
+
+
+## EPASConfiguration
+
+**Appears in:**
+
+- [PostgresConfiguration](#postgresql-k8s-enterprisedb-io-v1-PostgresConfiguration)
+
+EPASConfiguration contains EDB Postgres Advanced Server specific configurations
+
+
+| Field | Description |
+|-------|-------------|
+| `audit`<br/>`bool` | If true, enables `edb_audit` logging |
+| `tde`<br/>`TDEConfiguration` | TDE configuration |
+
+
+
+
+
+
+## EmbeddedObjectMetadata
+
+**Appears in:**
+
+- [ClusterSpec](#postgresql-k8s-enterprisedb-io-v1-ClusterSpec)
+
+EmbeddedObjectMetadata contains metadata to be inherited by all resources related to a Cluster
+
+
+| Field | Description |
+|-------|-------------|
+| `labels`<br/>`map[string]string` | No description provided. |
+| `annotations`<br/>`map[string]string` | No description provided. |
+
+
+
+
+
+
+## EnsureOption
+
+(Alias of `string`)
+
+**Appears in:**
+
+- [DatabaseObjectSpec](#postgresql-k8s-enterprisedb-io-v1-DatabaseObjectSpec)
+
+- [DatabaseSpec](#postgresql-k8s-enterprisedb-io-v1-DatabaseSpec)
+
+- [RoleConfiguration](#postgresql-k8s-enterprisedb-io-v1-RoleConfiguration)
+
+EnsureOption represents whether we should enforce the presence or absence of
+a Role in a PostgreSQL instance
+
+
+
+## EphemeralVolumesSizeLimitConfiguration
+
+**Appears in:**
+
+- [ClusterSpec](#postgresql-k8s-enterprisedb-io-v1-ClusterSpec)
+
+EphemeralVolumesSizeLimitConfiguration contains the configuration of the ephemeral
+storage
+
+
+
+
+
+## ExtensionConfiguration
+
+**Appears in:**
+
+- [PostgresConfiguration](#postgresql-k8s-enterprisedb-io-v1-PostgresConfiguration)
+
+ExtensionConfiguration is the configuration used to add
+PostgreSQL extensions to the Cluster.
+
+
+| Field | Description |
+|-------|-------------|
+| `name` [Required]<br/>`string` | The name of the extension, required |
+| `image` [Required]<br/>`core/v1.ImageVolumeSource` | The image containing the extension, required |
+| `extension_control_path`<br/>`[]string` | The list of directories inside the image which should be added to `extension_control_path`. If not defined, defaults to "/share". |
+| `dynamic_library_path`<br/>`[]string` | The list of directories inside the image which should be added to `dynamic_library_path`. If not defined, defaults to "/lib". |
+| `ld_library_path`<br/>`[]string` | The list of directories inside the image which should be added to `ld_library_path`. |
+
+
+
+
+
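+As a sketch of how these fields fit into a `Cluster`, the following hypothetical fragment mounts an extension image (the extension name and image reference are assumptions, not shipped artifacts):
+
+```yaml
+spec:
+  postgresql:
+    extensions:
+      - name: my_extension
+        image:
+          reference: registry.example.com/extensions/my-extension:1.0
+        extension_control_path:
+          - /share
+        dynamic_library_path:
+          - /lib
+```
+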
+
+## ExtensionSpec
+
+**Appears in:**
+
+- [DatabaseSpec](#postgresql-k8s-enterprisedb-io-v1-DatabaseSpec)
+
+ExtensionSpec configures an extension in a database
+
+
+| Field | Description |
+|-------|-------------|
+| `DatabaseObjectSpec`<br/>`DatabaseObjectSpec` | (Members of `DatabaseObjectSpec` are embedded into this type.) Common fields |
+| `version` [Required]<br/>`string` | The version of the extension to install. If empty, the operator will install the default version (whatever is specified in the extension's control file) |
+| `schema` [Required]<br/>`string` | The name of the schema in which to install the extension's objects, in case the extension allows its contents to be relocated. If not specified (default), and the extension's control file does not specify a schema either, the current default object creation schema is used. |
+
+
+
+
+
+
+## ExternalCluster
+
+**Appears in:**
+
+- [ClusterSpec](#postgresql-k8s-enterprisedb-io-v1-ClusterSpec)
+
+ExternalCluster represents the connection parameters to an
+external cluster which is used in the other sections of the configuration
+
+
+| Field | Description |
+|-------|-------------|
+| `name` [Required]<br/>`string` | The server name, required |
+| `connectionParameters`<br/>`map[string]string` | The list of connection parameters, such as dbname, host, username, etc |
+| `sslCert`<br/>`core/v1.SecretKeySelector` | The reference to an SSL certificate to be used to connect to this instance |
+| `sslKey`<br/>`core/v1.SecretKeySelector` | The reference to an SSL private key to be used to connect to this instance |
+| `sslRootCert`<br/>`core/v1.SecretKeySelector` | The reference to an SSL CA public key to be used to connect to this instance |
+| `password`<br/>`core/v1.SecretKeySelector` | The reference to the password to be used to connect to the server. If a password is provided, {{name.ln}} creates a PostgreSQL passfile at /controller/external/NAME/pass (where "NAME" is the cluster's name). This passfile is automatically referenced in the connection string when establishing a connection to the remote PostgreSQL server from the current PostgreSQL Cluster. This ensures secure and efficient password management for external clusters. |
+| `barmanObjectStore`<br/>`github.com/cloudnative-pg/barman-cloud/pkg/api.BarmanObjectStoreConfiguration` | The configuration for the barman-cloud tool suite |
+| `plugin` [Required]<br/>`PluginConfiguration` | The configuration of the plugin that is taking care of WAL archiving and backups for this external cluster |
+
+
+
+
+
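+A minimal `externalClusters` entry built from these fields might be sketched as follows (the host, user, and secret names are hypothetical):
+
+```yaml
+spec:
+  externalClusters:
+    - name: origin
+      connectionParameters:
+        host: origin-db.example.com
+        user: postgres
+        dbname: app
+      password:
+        name: origin-superuser
+        key: password
+```
+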
+
+## FailoverQuorumStatus
+
+**Appears in:**
+
+- [FailoverQuorum](#postgresql-k8s-enterprisedb-io-v1-FailoverQuorum)
+
+FailoverQuorumStatus is the latest observed status of the failover
+quorum of the PG cluster.
+
+
+| Field | Description |
+|-------|-------------|
+| `method`<br/>`string` | Contains the latest reported Method value. |
+| `standbyNames`<br/>`[]string` | StandbyNames is the list of potentially synchronous instance names. |
+| `standbyNumber`<br/>`int` | StandbyNumber is the number of synchronous standbys that transactions need to wait for replies from. |
+| `primary`<br/>`string` | Primary is the name of the primary instance that most recently updated this object. |
+
+
+
+
+
+
+## ImageCatalogRef
+
+**Appears in:**
+
+- [ClusterSpec](#postgresql-k8s-enterprisedb-io-v1-ClusterSpec)
+
+ImageCatalogRef defines the reference to a major version in an ImageCatalog
+
+
+| Field | Description |
+|-------|-------------|
+| `TypedLocalObjectReference`<br/>`core/v1.TypedLocalObjectReference` | (Members of `TypedLocalObjectReference` are embedded into this type.) No description provided. |
+| `major` [Required]<br/>`int` | The major version of PostgreSQL we want to use from the ImageCatalog |
+
+
+
+
+
+
+## ImageCatalogSpec
+
+**Appears in:**
+
+- [ClusterImageCatalog](#postgresql-k8s-enterprisedb-io-v1-ClusterImageCatalog)
+
+- [ImageCatalog](#postgresql-k8s-enterprisedb-io-v1-ImageCatalog)
+
+ImageCatalogSpec defines the desired ImageCatalog
+
+
+| Field | Description |
+|-------|-------------|
+| `images` [Required]<br/>`[]CatalogImage` | List of CatalogImages available in the catalog |
+
+
+
+
+
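+As an illustration, an `ImageCatalog` built from these fields might look like this sketch (the image references are hypothetical placeholders):
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: ImageCatalog
+metadata:
+  name: postgresql
+spec:
+  images:
+    - major: 16
+      image: registry.example.com/postgresql:16.6
+    - major: 17
+      image: registry.example.com/postgresql:17.2
+```
+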
+
+## ImageInfo
+
+**Appears in:**
+
+- [ClusterStatus](#postgresql-k8s-enterprisedb-io-v1-ClusterStatus)
+
+ImageInfo contains the information about a PostgreSQL image
+
+
+| Field | Description |
+|-------|-------------|
+| `image` [Required]<br/>`string` | Image is the image name |
+| `majorVersion` [Required]<br/>`int` | MajorVersion is the major version of the image |
+
+
+
+
+
+
+## Import
+
+**Appears in:**
+
+- [BootstrapInitDB](#postgresql-k8s-enterprisedb-io-v1-BootstrapInitDB)
+
+Import contains the configuration to initialize a database from a logical snapshot of an externalCluster
+
+
+| Field | Description |
+|-------|-------------|
+| `source` [Required]<br/>`ImportSource` | The source of the import |
+| `type` [Required]<br/>`SnapshotType` | The import type. Can be `microservice` or `monolith`. |
+| `databases` [Required]<br/>`[]string` | The databases to import |
+| `roles`<br/>`[]string` | The roles to import |
+| `postImportApplicationSQL`<br/>`[]string` | List of SQL queries to be executed as a superuser in the application database right after it is imported - to be used with extreme care (by default empty). Only available in microservice type. |
+| `schemaOnly`<br/>`bool` | When set to true, only the pre-data and post-data sections of pg_restore are invoked, avoiding data import. Default: false. |
+| `pgDumpExtraOptions`<br/>`[]string` | List of custom options to pass to the pg_dump command. IMPORTANT: Use these options with caution and at your own risk, as the operator does not validate their content. Be aware that certain options may conflict with the operator's intended functionality or design. |
+| `pgRestoreExtraOptions`<br/>`[]string` | List of custom options to pass to the pg_restore command. IMPORTANT: Use these options with caution and at your own risk, as the operator does not validate their content. Be aware that certain options may conflict with the operator's intended functionality or design. |
+
+
+
+
+
+
+## ImportSource
+
+**Appears in:**
+
+- [Import](#postgresql-k8s-enterprisedb-io-v1-Import)
+
+ImportSource describes the source for the logical snapshot
+
+
+| Field | Description |
+|-------|-------------|
+| `externalCluster` [Required]<br/>`string` | The name of the externalCluster used for import |
+
+
+
+
+
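+Putting `Import` and `ImportSource` together, a microservice-type import might be sketched as follows (the external cluster and database names are hypothetical):
+
+```yaml
+spec:
+  bootstrap:
+    initdb:
+      database: app
+      owner: app
+      import:
+        type: microservice
+        databases:
+          - app
+        source:
+          externalCluster: origin
+```
+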
+
+## InstanceID
+
+**Appears in:**
+
+- [BackupStatus](#postgresql-k8s-enterprisedb-io-v1-BackupStatus)
+
+InstanceID contains the information to identify an instance
+
+
+| Field | Description |
+|-------|-------------|
+| `podName`<br/>`string` | The pod name |
+| `ContainerID`<br/>`string` | The container ID |
+
+
+
+
+
+
+## InstanceReportedState
+
+**Appears in:**
+
+- [ClusterStatus](#postgresql-k8s-enterprisedb-io-v1-ClusterStatus)
+
+InstanceReportedState describes the last reported state of an instance during a reconciliation loop
+
+
+| Field | Description |
+|-------|-------------|
+| `isPrimary` [Required]<br/>`bool` | Indicates if an instance is the primary one |
+| `timeLineID`<br/>`int` | Indicates which TimelineID the instance is on |
+| `ip` [Required]<br/>`string` | IP address of the instance |
+
+
+
+
+
+
+## IsolationCheckConfiguration
+
+**Appears in:**
+
+- [LivenessProbe](#postgresql-k8s-enterprisedb-io-v1-LivenessProbe)
+
+IsolationCheckConfiguration contains the configuration for the isolation check
+functionality in the liveness probe
+
+
+| Field | Description |
+|-------|-------------|
+| `enabled`<br/>`bool` | Whether primary isolation checking is enabled for the liveness probe |
+| `requestTimeout`<br/>`int` | Timeout in milliseconds for requests during the primary isolation check |
+| `connectionTimeout`<br/>`int` | Timeout in milliseconds for connections during the primary isolation check |
+
+
+
+
+
+
+## LDAPBindAsAuth
+
+**Appears in:**
+
+- [LDAPConfig](#postgresql-k8s-enterprisedb-io-v1-LDAPConfig)
+
+LDAPBindAsAuth provides the required fields to use the
+bind authentication for LDAP
+
+
+| Field | Description |
+|-------|-------------|
+| `prefix`<br/>`string` | Prefix for the bind authentication option |
+| `suffix`<br/>`string` | Suffix for the bind authentication option |
+
+
+
+
+
+
+## LDAPBindSearchAuth
+
+**Appears in:**
+
+- [LDAPConfig](#postgresql-k8s-enterprisedb-io-v1-LDAPConfig)
+
+LDAPBindSearchAuth provides the required fields to use
+the bind+search LDAP authentication process
+
+
+| Field | Description |
+|-------|-------------|
+| `baseDN`<br/>`string` | Root DN to begin the user search |
+| `bindDN`<br/>`string` | DN of the user to bind to the directory |
+| `bindPassword`<br/>`core/v1.SecretKeySelector` | Secret with the password for the user to bind to the directory |
+| `searchAttribute`<br/>`string` | Attribute to match against the username |
+| `searchFilter`<br/>`string` | Search filter to use when doing the search+bind authentication |
+
+
+
+
+
+
+## LDAPConfig
+
+**Appears in:**
+
+- [PostgresConfiguration](#postgresql-k8s-enterprisedb-io-v1-PostgresConfiguration)
+
+LDAPConfig contains the parameters needed for LDAP authentication
+
+
+| Field | Description |
+|-------|-------------|
+| `server`<br/>`string` | LDAP hostname or IP address |
+| `port`<br/>`int` | LDAP server port |
+| `scheme`<br/>`LDAPScheme` | LDAP scheme to be used; possible options are `ldap` and `ldaps` |
+| `bindAsAuth`<br/>`LDAPBindAsAuth` | Bind as authentication configuration |
+| `bindSearchAuth`<br/>`LDAPBindSearchAuth` | Bind+Search authentication configuration |
+| `tls`<br/>`bool` | Set to 'true' to enable LDAP over TLS. 'false' is default |
+
+
+
+
+
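+A hypothetical bind+search setup using these fields could be sketched as follows (the server, DNs, and secret names are assumptions):
+
+```yaml
+spec:
+  postgresql:
+    ldap:
+      server: ldap.example.com
+      port: 636
+      scheme: ldaps
+      bindSearchAuth:
+        baseDN: dc=example,dc=com
+        bindDN: cn=admin,dc=example,dc=com
+        bindPassword:
+          name: ldap-bind-secret
+          key: password
+        searchAttribute: uid
+```
+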
+
+## LDAPScheme
+
+(Alias of `string`)
+
+**Appears in:**
+
+- [LDAPConfig](#postgresql-k8s-enterprisedb-io-v1-LDAPConfig)
+
+LDAPScheme defines the possible schemes for LDAP
+
+
+
+## LivenessProbe
+
+**Appears in:**
+
+- [ProbesConfiguration](#postgresql-k8s-enterprisedb-io-v1-ProbesConfiguration)
+
+LivenessProbe is the configuration of the liveness probe
+
+
+| Field | Description |
+|-------|-------------|
+| `Probe`<br/>`Probe` | (Members of `Probe` are embedded into this type.) Probe is the standard probe configuration |
+| `isolationCheck`<br/>`IsolationCheckConfiguration` | Configure the feature that extends the liveness probe for a primary instance. In addition to the basic checks, this verifies whether the primary is isolated from the Kubernetes API server and from its replicas, ensuring that it can be safely shut down if a network partition or API unavailability is detected. Enabled by default. |
+
+
+
+
+
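+For example, the isolation check might be tuned through the liveness probe as in this sketch (the timeout values are arbitrary assumptions):
+
+```yaml
+spec:
+  probes:
+    liveness:
+      isolationCheck:
+        enabled: true
+        requestTimeout: 1000
+        connectionTimeout: 1000
+```
+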
+
+## ManagedConfiguration
+
+**Appears in:**
+
+- [ClusterSpec](#postgresql-k8s-enterprisedb-io-v1-ClusterSpec)
+
+ManagedConfiguration represents the portions of PostgreSQL that are managed
+by the instance manager
+
+
+| Field | Description |
+|-------|-------------|
+| `roles`<br/>`[]RoleConfiguration` | Database roles managed by the Cluster |
+| `services`<br/>`ManagedServices` | Services managed by the Cluster |
+
+
+
+
+
+
+## ManagedRoles
+
+**Appears in:**
+
+- [ClusterStatus](#postgresql-k8s-enterprisedb-io-v1-ClusterStatus)
+
+ManagedRoles tracks the status of a cluster's managed roles
+
+
+| Field | Description |
+|-------|-------------|
+| `byStatus`<br/>`map[RoleStatus][]string` | ByStatus gives the list of roles in each state |
+| `cannotReconcile`<br/>`map[string][]string` | CannotReconcile lists roles that cannot be reconciled in PostgreSQL, with an explanation of the cause |
+| `passwordStatus`<br/>`map[string]PasswordState` | PasswordStatus gives the last transaction id and password secret version for each managed role |
+
+
+
+
+
+
+## ManagedService
+
+**Appears in:**
+
+- [ManagedServices](#postgresql-k8s-enterprisedb-io-v1-ManagedServices)
+
+ManagedService represents a specific service managed by the cluster.
+It includes the type of service and its associated template specification.
+
+
+| Field | Description |
+|-------|-------------|
+| `selectorType` [Required]<br/>`ServiceSelectorType` | SelectorType specifies the type of selectors that the service will have. Valid values are "rw", "r", and "ro", representing read-write, read, and read-only services. |
+| `updateStrategy`<br/>`ServiceUpdateStrategy` | UpdateStrategy describes how the service differences should be reconciled |
+| `serviceTemplate` [Required]<br/>`ServiceTemplateSpec` | ServiceTemplate is the template specification for the service. |
+
+
+
+
+
+
+## ManagedServices
+
+**Appears in:**
+
+- [ManagedConfiguration](#postgresql-k8s-enterprisedb-io-v1-ManagedConfiguration)
+
+ManagedServices represents the services managed by the cluster.
+
+
+| Field | Description |
+|-------|-------------|
+| `disabledDefaultServices`<br/>`[]ServiceSelectorType` | DisabledDefaultServices is a list of service types that are disabled by default. Valid values are "r", and "ro", representing read, and read-only services. |
+| `additional`<br/>`[]ManagedService` | Additional is a list of additional managed services specified by the user. |
+
+
+
+
+
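+A sketch of these fields in a `Cluster`, disabling the read-only service and adding a hypothetical load-balancer service (the service name and strategy are assumptions):
+
+```yaml
+spec:
+  managed:
+    services:
+      disabledDefaultServices:
+        - ro
+      additional:
+        - selectorType: rw
+          updateStrategy: patch
+          serviceTemplate:
+            metadata:
+              name: cluster-example-rw-lb
+            spec:
+              type: LoadBalancer
+```
+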
+
+## Metadata
+
+**Appears in:**
+
+- [PodTemplateSpec](#postgresql-k8s-enterprisedb-io-v1-PodTemplateSpec)
+
+- [ServiceAccountTemplate](#postgresql-k8s-enterprisedb-io-v1-ServiceAccountTemplate)
+
+- [ServiceTemplateSpec](#postgresql-k8s-enterprisedb-io-v1-ServiceTemplateSpec)
+
+Metadata is a structure similar to the metav1.ObjectMeta, but still
+parseable by controller-gen to create a suitable CRD for the user.
+The comment of PodTemplateSpec has an explanation of why we are
+not using the core data types.
+
+
+| Field | Description |
+|-------|-------------|
+| `name`<br/>`string` | The name of the resource. Only supported for certain types |
+| `labels`<br/>`map[string]string` | Map of string keys and values that can be used to organize and categorize (scope and select) objects. May match selectors of replication controllers and services. More info: http://kubernetes.io/docs/user-guide/labels |
+| `annotations`<br/>`map[string]string` | Annotations is an unstructured key value map stored with a resource that may be set by external tools to store and retrieve arbitrary metadata. They are not queryable and should be preserved when modifying objects. More info: http://kubernetes.io/docs/user-guide/annotations |
+
+
+
+
+
+
+## MonitoringConfiguration
+
+**Appears in:**
+
+- [ClusterSpec](#postgresql-k8s-enterprisedb-io-v1-ClusterSpec)
+
+MonitoringConfiguration is the type containing all the monitoring
+configuration for a certain cluster
+
+
+
+
+
+## NodeMaintenanceWindow
+
+**Appears in:**
+
+- [ClusterSpec](#postgresql-k8s-enterprisedb-io-v1-ClusterSpec)
+
+NodeMaintenanceWindow contains information that the operator
+will use while upgrading the underlying node.
+This option is only useful when the chosen storage prevents the Pods
+from being freely moved across nodes.
+
+
+| Field | Description |
+|-------|-------------|
+| `reusePVC`<br/>`bool` | Reuse the existing PVC (wait for the node to come up again) or not (recreate it elsewhere - when instances >1) |
+| `inProgress`<br/>`bool` | Is there a node maintenance activity in progress? |
+
+
+
+
+
+
+## OnlineConfiguration
+
+**Appears in:**
+
+- [BackupSpec](#postgresql-k8s-enterprisedb-io-v1-BackupSpec)
+
+- [ScheduledBackupSpec](#postgresql-k8s-enterprisedb-io-v1-ScheduledBackupSpec)
+
+- [VolumeSnapshotConfiguration](#postgresql-k8s-enterprisedb-io-v1-VolumeSnapshotConfiguration)
+
+OnlineConfiguration contains the configuration parameters for the online volume snapshot
+
+
+| Field | Description |
+|-------|-------------|
+| `waitForArchive`<br/>`bool` | If false, the function will return immediately after the backup is completed, without waiting for WAL to be archived. This behavior is only useful with backup software that independently monitors WAL archiving. Otherwise, WAL required to make the backup consistent might be missing and make the backup useless. By default, or when this parameter is true, pg_backup_stop will wait for WAL to be archived when archiving is enabled. On a standby, this means that it will wait only when archive_mode = always. If write activity on the primary is low, it may be useful to run pg_switch_wal on the primary in order to trigger an immediate segment switch. |
+| `immediateCheckpoint`<br/>`bool` | Control whether the I/O workload for the backup initial checkpoint will be limited, according to the checkpoint_completion_target setting on the PostgreSQL server. If set to true, an immediate checkpoint will be used, meaning PostgreSQL will complete the checkpoint as soon as possible. false by default. |
+
+
+
+
+
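+These parameters are typically set under a volume snapshot backup configuration, as in this sketch (the snapshot class name is a hypothetical placeholder):
+
+```yaml
+spec:
+  backup:
+    volumeSnapshot:
+      className: csi-snapclass
+      online: true
+      onlineConfiguration:
+        immediateCheckpoint: true
+        waitForArchive: true
+```
+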
+
+## PasswordState
+
+**Appears in:**
+
+- [ManagedRoles](#postgresql-k8s-enterprisedb-io-v1-ManagedRoles)
+
+PasswordState represents the state of the password of a managed RoleConfiguration
+
+
+| Field | Description |
+|-------|-------------|
+| `transactionID`<br/>`int64` | The last transaction ID to affect the role definition in PostgreSQL |
+| `resourceVersion`<br/>`string` | The resource version of the password secret |
+
+
+
+
+
+
+## PgBouncerIntegrationStatus
+
+**Appears in:**
+
+- [PoolerIntegrations](#postgresql-k8s-enterprisedb-io-v1-PoolerIntegrations)
+
+PgBouncerIntegrationStatus encapsulates the needed integration for the pgbouncer poolers referencing the cluster
+
+
+| Field | Description |
+|-------|-------------|
+| `secrets`<br/>`[]string` | No description provided. |
+
+
+
+
+
+
+## PgBouncerPoolMode
+
+(Alias of `string`)
+
+**Appears in:**
+
+- [PgBouncerSpec](#postgresql-k8s-enterprisedb-io-v1-PgBouncerSpec)
+
+PgBouncerPoolMode is the mode of PgBouncer
+
+
+
+## PgBouncerSecrets
+
+**Appears in:**
+
+- [PoolerSecrets](#postgresql-k8s-enterprisedb-io-v1-PoolerSecrets)
+
+PgBouncerSecrets contains the versions of the secrets used
+by pgbouncer
+
+
+| Field | Description |
+|-------|-------------|
+| `authQuery`<br/>`SecretVersion` | The auth query secret version |
+
+
+
+
+
+
+## PgBouncerSpec
+
+**Appears in:**
+
+- [PoolerSpec](#postgresql-k8s-enterprisedb-io-v1-PoolerSpec)
+
+PgBouncerSpec defines how to configure PgBouncer
+
+
+| Field | Description |
+|-------|-------------|
+| `poolMode`<br/>`PgBouncerPoolMode` | The pool mode. Default: `session`. |
+| `authQuerySecret`<br/>`github.com/cloudnative-pg/machinery/pkg/api.LocalObjectReference` | The credentials of the user that need to be used for the authentication query. In case it is specified, also an `authQuery` (e.g. "SELECT usename, passwd FROM pg_catalog.pg_shadow WHERE usename=$1") has to be specified and no automatic CNP Cluster integration will be triggered. |
+| `authQuery`<br/>`string` | The query that will be used to download the hash of the password of a certain user. Default: "SELECT usename, passwd FROM public.user_search($1)". In case it is specified, also an `authQuerySecret` has to be specified and no automatic CNP Cluster integration will be triggered. |
+| `parameters`<br/>`map[string]string` | Additional parameters to be passed to PgBouncer - please check the CNP documentation for a list of options you can configure |
+| `pg_hba`<br/>`[]string` | PostgreSQL Host Based Authentication rules (lines to be appended to the pg_hba.conf file) |
+| `paused`<br/>`bool` | When set to true, PgBouncer will disconnect from the PostgreSQL server, first waiting for all queries to complete, and pause all new client connections until this value is set to false (default). Internally, the operator calls PgBouncer's PAUSE and RESUME commands. |
+
+
+
+
+
+
+## PluginConfiguration
+
+**Appears in:**
+
+- [ClusterSpec](#postgresql-k8s-enterprisedb-io-v1-ClusterSpec)
+
+- [ExternalCluster](#postgresql-k8s-enterprisedb-io-v1-ExternalCluster)
+
+PluginConfiguration specifies a plugin that needs to be loaded for this
+cluster to be reconciled
+
+
+| Field | Description |
+| ----- | ----------- |
+| `name` [Required] <br /> `string` | Name is the plugin name |
+| `enabled` <br /> `bool` | Enabled is true if this plugin will be used |
+| `isWALArchiver` <br /> `bool` | Marks the plugin as the WAL archiver. At most one plugin can be designated as a WAL archiver. This cannot be enabled if the `.spec.backup.barmanObjectStore` configuration is present. |
+| `parameters` <br /> `map[string]string` | Parameters is the configuration of the plugin |
+
+
+
+
+
+
+## PluginStatus
+
+**Appears in:**
+
+- [ClusterStatus](#postgresql-k8s-enterprisedb-io-v1-ClusterStatus)
+
+PluginStatus is the status of a loaded plugin
+
+
+| Field | Description |
+| ----- | ----------- |
+| `name` [Required] <br /> `string` | Name is the name of the plugin |
+| `version` [Required] <br /> `string` | Version is the version of the plugin loaded by the latest reconciliation loop |
+| `capabilities` <br /> `[]string` | Capabilities is the list of capabilities of the plugin |
+| `operatorCapabilities` <br /> `[]string` | OperatorCapabilities is the list of capabilities of the plugin regarding the reconciler |
+| `walCapabilities` <br /> `[]string` | WALCapabilities is the list of capabilities of the plugin regarding WAL management |
+| `backupCapabilities` <br /> `[]string` | BackupCapabilities is the list of capabilities of the plugin regarding Backup management |
+| `restoreJobHookCapabilities` <br /> `[]string` | RestoreJobHookCapabilities is the list of capabilities of the plugin regarding RestoreJobHook management |
+| `status` <br /> `string` | Status contains the status reported by the plugin through the SetStatusInCluster interface |
+
+
+
+
+
+
+## PodTemplateSpec
+
+**Appears in:**
+
+- [PoolerSpec](#postgresql-k8s-enterprisedb-io-v1-PoolerSpec)
+
+PodTemplateSpec is a structure allowing the user to set
+a template for Pod generation.
+Unfortunately we can't use the corev1.PodTemplateSpec
+type because the generated CRD won't have the field for the
+metadata section.
+References:
+https://github.com/kubernetes-sigs/controller-tools/issues/385
+https://github.com/kubernetes-sigs/controller-tools/issues/448
+https://github.com/prometheus-operator/prometheus-operator/issues/3041
+
+
+| Field | Description |
+| ----- | ----------- |
+| `metadata` <br /> `Metadata` | Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata |
+| `spec` <br /> `core/v1.PodSpec` | Specification of the desired behavior of the pod. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status |
+
+
+
+
+
+
+## PodTopologyLabels
+
+(Alias of `map[string]string`)
+
+**Appears in:**
+
+- [Topology](#postgresql-k8s-enterprisedb-io-v1-Topology)
+
+PodTopologyLabels represent the topology of a Pod. map[labelName]labelValue
+
+
+
+## PoolerIntegrations
+
+**Appears in:**
+
+- [ClusterStatus](#postgresql-k8s-enterprisedb-io-v1-ClusterStatus)
+
+PoolerIntegrations encapsulates the needed integration for the poolers referencing the cluster
+
+
+
+
+
+## PoolerMonitoringConfiguration
+
+**Appears in:**
+
+- [PoolerSpec](#postgresql-k8s-enterprisedb-io-v1-PoolerSpec)
+
+PoolerMonitoringConfiguration is the type containing all the monitoring
+configuration for a certain Pooler.
+Mirrors the Cluster's MonitoringConfiguration but without the custom queries
+part for now.
+
+
+
+
+
+## PoolerSecrets
+
+**Appears in:**
+
+- [PoolerStatus](#postgresql-k8s-enterprisedb-io-v1-PoolerStatus)
+
+PoolerSecrets contains the versions of all the secrets used
+
+
+| Field | Description |
+| ----- | ----------- |
+| `serverTLS` <br /> `SecretVersion` | The server TLS secret version |
+| `serverCA` <br /> `SecretVersion` | The server CA secret version |
+| `clientCA` <br /> `SecretVersion` | The client CA secret version |
+| `pgBouncerSecrets` <br /> `PgBouncerSecrets` | The version of the secrets used by PgBouncer |
+
+
+
+
+
+
+## PoolerSpec
+
+**Appears in:**
+
+- [Pooler](#postgresql-k8s-enterprisedb-io-v1-Pooler)
+
+PoolerSpec defines the desired state of Pooler
+
+
+| Field | Description |
+| ----- | ----------- |
+| `cluster` [Required] <br /> `github.com/cloudnative-pg/machinery/pkg/api.LocalObjectReference` | This is the cluster reference on which the Pooler will work. The Pooler name must not match any cluster name within the same namespace. |
+| `type` <br /> `PoolerType` | Type of service to forward traffic to. Default: `rw`. |
+| `instances` <br /> `int32` | The number of replicas we want. Default: 1. |
+| `template` <br /> `PodTemplateSpec` | The template of the Pod to be created |
+| `pgbouncer` [Required] <br /> `PgBouncerSpec` | The PgBouncer configuration |
+| `deploymentStrategy` <br /> `apps/v1.DeploymentStrategy` | The deployment strategy to use for pgbouncer to replace existing pods with new ones |
+| `monitoring` <br /> `PoolerMonitoringConfiguration` | The configuration of the monitoring infrastructure of this pooler. Deprecated: this feature will be removed in an upcoming release. If you need this functionality, you can create a PodMonitor manually. |
+| `serviceTemplate` <br /> `ServiceTemplateSpec` | Template for the Service to be created |
+
+
+
+
+
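+As an illustration of how PoolerSpec and PgBouncerSpec fit together, a minimal Pooler manifest might look like the following (names and parameter values are placeholders, not defaults from this reference):
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Pooler
+metadata:
+  name: pooler-example-rw
+spec:
+  cluster:
+    name: cluster-example   # must not match the Pooler name
+  type: rw                  # service to forward traffic to (default: rw)
+  instances: 3              # number of replicas (default: 1)
+  pgbouncer:
+    poolMode: session       # the default pool mode
+    parameters:
+      max_client_conn: "1000"   # illustrative PgBouncer parameter
+```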
+
+## PoolerStatus
+
+**Appears in:**
+
+- [Pooler](#postgresql-k8s-enterprisedb-io-v1-Pooler)
+
+PoolerStatus defines the observed state of Pooler
+
+
+| Field | Description |
+| ----- | ----------- |
+| `secrets` <br /> `PoolerSecrets` | The resource version of the config object |
+| `instances` <br /> `int32` | The number of pods trying to be scheduled |
+
+
+
+
+
+
+## PoolerType
+
+(Alias of `string`)
+
+**Appears in:**
+
+- [PoolerSpec](#postgresql-k8s-enterprisedb-io-v1-PoolerSpec)
+
+PoolerType is the type of the connection pool, meaning the service
+we are targeting. Allowed values are rw and ro.
+
+
+
+## PostgresConfiguration
+
+**Appears in:**
+
+- [ClusterSpec](#postgresql-k8s-enterprisedb-io-v1-ClusterSpec)
+
+PostgresConfiguration defines the PostgreSQL configuration
+
+
+| Field | Description |
+| ----- | ----------- |
+| `parameters` <br /> `map[string]string` | PostgreSQL configuration options (postgresql.conf) |
+| `synchronous` <br /> `SynchronousReplicaConfiguration` | Configuration of the PostgreSQL synchronous replication feature |
+| `pg_hba` <br /> `[]string` | PostgreSQL Host Based Authentication rules (lines to be appended to the `pg_hba.conf` file) |
+| `pg_ident` <br /> `[]string` | PostgreSQL User Name Maps rules (lines to be appended to the `pg_ident.conf` file) |
+| `epas` <br /> `EPASConfiguration` | EDB Postgres Advanced Server specific configurations |
+| `syncReplicaElectionConstraint` <br /> `SyncReplicaElectionConstraints` | Requirements to be met by sync replicas. This will affect how the `synchronous_standby_names` parameter will be set up. |
+| `shared_preload_libraries` <br /> `[]string` | Lists of shared preload libraries to add to the default ones |
+| `ldap` <br /> `LDAPConfig` | Options to specify LDAP configuration |
+| `promotionTimeout` <br /> `int32` | Specifies the maximum number of seconds to wait when promoting an instance to primary. Default value is 40000000, greater than one year in seconds, big enough to simulate an infinite timeout. |
+| `enableAlterSystem` <br /> `bool` | If this parameter is true, the user will be able to invoke ALTER SYSTEM on this {{name.ln}} Cluster. This should only be used for debugging and troubleshooting. Defaults to false. |
+| `extensions` <br /> `[]ExtensionConfiguration` | The configuration of the extensions to be added |
+
+
+
+
+
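+These fields are typically set under a Cluster's `.spec.postgresql` stanza. A hedged sketch (all parameter values and names are illustrative):
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+  name: cluster-example
+spec:
+  instances: 3
+  postgresql:
+    parameters:
+      max_connections: "200"        # illustrative postgresql.conf setting
+    pg_hba:
+      - host all all 10.0.0.0/8 scram-sha-256   # appended to pg_hba.conf
+    shared_preload_libraries:
+      - pg_stat_statements          # added to the default list
+  storage:
+    size: 1Gi
+```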
+
+## PrimaryUpdateMethod
+
+(Alias of `string`)
+
+**Appears in:**
+
+- [ClusterSpec](#postgresql-k8s-enterprisedb-io-v1-ClusterSpec)
+
+PrimaryUpdateMethod contains the method to use when upgrading
+the primary server of the cluster as part of rolling updates
+
+
+
+## PrimaryUpdateStrategy
+
+(Alias of `string`)
+
+**Appears in:**
+
+- [ClusterSpec](#postgresql-k8s-enterprisedb-io-v1-ClusterSpec)
+
+PrimaryUpdateStrategy contains the strategy to follow when upgrading
+the primary server of the cluster as part of rolling updates
+
+
+
+## Probe
+
+**Appears in:**
+
+- [LivenessProbe](#postgresql-k8s-enterprisedb-io-v1-LivenessProbe)
+
+- [ProbeWithStrategy](#postgresql-k8s-enterprisedb-io-v1-ProbeWithStrategy)
+
+Probe describes a health check to be performed against a container to determine whether it is
+alive or ready to receive traffic.
+
+
+| Field | Description |
+| ----- | ----------- |
+| `initialDelaySeconds` <br /> `int32` | Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes |
+| `timeoutSeconds` <br /> `int32` | Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes |
+| `periodSeconds` <br /> `int32` | How often (in seconds) to perform the probe. Defaults to 10 seconds. Minimum value is 1. |
+| `successThreshold` <br /> `int32` | Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. |
+| `failureThreshold` <br /> `int32` | Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. |
+| `terminationGracePeriodSeconds` <br /> `int64` | Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be a non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling the ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. |
+
+
+
+
+
+
+## ProbeStrategyType
+
+(Alias of `string`)
+
+**Appears in:**
+
+- [ProbeWithStrategy](#postgresql-k8s-enterprisedb-io-v1-ProbeWithStrategy)
+
+ProbeStrategyType is the type of the strategy used to declare a PostgreSQL instance
+ready
+
+
+
+## ProbeWithStrategy
+
+**Appears in:**
+
+- [ProbesConfiguration](#postgresql-k8s-enterprisedb-io-v1-ProbesConfiguration)
+
+ProbeWithStrategy is the configuration of the startup probe
+
+
+
+
+
+## ProbesConfiguration
+
+**Appears in:**
+
+- [ClusterSpec](#postgresql-k8s-enterprisedb-io-v1-ClusterSpec)
+
+ProbesConfiguration represent the configuration for the probes
+to be injected in the PostgreSQL Pods
+
+
+| Field | Description |
+| ----- | ----------- |
+| `startup` [Required] <br /> `ProbeWithStrategy` | The startup probe configuration |
+| `liveness` [Required] <br /> `LivenessProbe` | The liveness probe configuration |
+| `readiness` [Required] <br /> `ProbeWithStrategy` | The readiness probe configuration |
+
+
+
+
+
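+Assuming probe overrides are set under the Cluster's `.spec.probes` stanza (the path is an assumption based on where ProbesConfiguration appears), a sketch with illustrative values:
+
+```yaml
+spec:
+  probes:
+    readiness:
+      initialDelaySeconds: 10
+      periodSeconds: 10      # default
+      failureThreshold: 3    # default
+    liveness:
+      timeoutSeconds: 5
+```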
+
+## PublicationReclaimPolicy
+
+(Alias of `string`)
+
+**Appears in:**
+
+- [PublicationSpec](#postgresql-k8s-enterprisedb-io-v1-PublicationSpec)
+
+PublicationReclaimPolicy defines a policy for end-of-life maintenance of Publications.
+
+
+
+## PublicationSpec
+
+**Appears in:**
+
+- [Publication](#postgresql-k8s-enterprisedb-io-v1-Publication)
+
+PublicationSpec defines the desired state of Publication
+
+
+| Field | Description |
+| ----- | ----------- |
+| `cluster` [Required] <br /> `core/v1.LocalObjectReference` | The name of the PostgreSQL cluster that identifies the "publisher" |
+| `name` [Required] <br /> `string` | The name of the publication inside PostgreSQL |
+| `dbname` [Required] <br /> `string` | The name of the database where the publication will be installed in the "publisher" cluster |
+| `parameters` <br /> `map[string]string` | Publication parameters part of the WITH clause as expected by the PostgreSQL CREATE PUBLICATION command |
+| `target` [Required] <br /> `PublicationTarget` | Target of the publication as expected by the PostgreSQL CREATE PUBLICATION command |
+| `publicationReclaimPolicy` <br /> `PublicationReclaimPolicy` | The policy for end-of-life maintenance of this publication |
+
+
+
+
+
+
+## PublicationStatus
+
+**Appears in:**
+
+- [Publication](#postgresql-k8s-enterprisedb-io-v1-Publication)
+
+PublicationStatus defines the observed state of Publication
+
+
+| Field | Description |
+| ----- | ----------- |
+| `observedGeneration` <br /> `int64` | A sequence number representing the latest desired state that was synchronized |
+| `applied` <br /> `bool` | Applied is true if the publication was reconciled correctly |
+| `message` <br /> `string` | Message is the reconciliation output message |
+
+
+
+
+
+
+## PublicationTarget
+
+**Appears in:**
+
+- [PublicationSpec](#postgresql-k8s-enterprisedb-io-v1-PublicationSpec)
+
+PublicationTarget is what this publication should publish
+
+
+| Field | Description |
+| ----- | ----------- |
+| `allTables` <br /> `bool` | Marks the publication as one that replicates changes for all tables in the database, including tables created in the future. Corresponds to FOR ALL TABLES in PostgreSQL. |
+| `objects` <br /> `[]PublicationTargetObject` | Just the following schema objects |
+
+
+
+
+
+
+## PublicationTargetObject
+
+**Appears in:**
+
+- [PublicationTarget](#postgresql-k8s-enterprisedb-io-v1-PublicationTarget)
+
+PublicationTargetObject is an object to publish
+
+
+| Field | Description |
+| ----- | ----------- |
+| `tablesInSchema` <br /> `string` | Marks the publication as one that replicates changes for all tables in the specified list of schemas, including tables created in the future. Corresponds to FOR TABLES IN SCHEMA in PostgreSQL. |
+| `table` <br /> `PublicationTargetTable` | Specifies a list of tables to add to the publication. Corresponds to FOR TABLE in PostgreSQL. |
+
+
+
+
+
+
+## PublicationTargetTable
+
+**Appears in:**
+
+- [PublicationTargetObject](#postgresql-k8s-enterprisedb-io-v1-PublicationTargetObject)
+
+PublicationTargetTable is a table to publish
+
+
+| Field | Description |
+| ----- | ----------- |
+| `only` <br /> `bool` | Whether to limit to the table only or include all its descendants |
+| `name` [Required] <br /> `string` | The table name |
+| `schema` <br /> `string` | The schema name |
+| `columns` <br /> `[]string` | The columns to publish |
+
+
+
+
+
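+Putting PublicationSpec and its target types together, an illustrative Publication manifest (all names are placeholders) might be:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Publication
+metadata:
+  name: publication-example
+spec:
+  cluster:
+    name: cluster-example        # the "publisher" cluster
+  name: pub_app                  # publication name inside PostgreSQL
+  dbname: app
+  target:
+    objects:
+      - table:
+          schema: public
+          name: orders
+          columns: ["id", "status"]   # optional column list
+```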
+
+## RecoveryTarget
+
+**Appears in:**
+
+- [BootstrapRecovery](#postgresql-k8s-enterprisedb-io-v1-BootstrapRecovery)
+
+RecoveryTarget allows you to configure the point at which the recovery
+process stops. All the target options except TargetTLI are mutually exclusive.
+
+
+| Field | Description |
+| ----- | ----------- |
+| `backupID` <br /> `string` | The ID of the backup from which to start the recovery process. If empty (default), the operator automatically detects the backup based on targetTime or targetLSN, if specified; otherwise it uses the latest available backup in chronological order. |
+| `targetTLI` <br /> `string` | The target timeline ("latest" or a positive integer) |
+| `targetXID` <br /> `string` | The target transaction ID |
+| `targetName` <br /> `string` | The target name (to be previously created with pg_create_restore_point) |
+| `targetLSN` <br /> `string` | The target LSN (Log Sequence Number) |
+| `targetTime` <br /> `string` | The target time as a timestamp in the RFC3339 standard |
+| `targetImmediate` <br /> `bool` | End recovery as soon as a consistent state is reached |
+| `exclusive` <br /> `bool` | Set the target to be exclusive. If omitted, defaults to false, so that in Postgres, `recovery_target_inclusive` will be true. |
+
+
+
+
+
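+As a sketch, a point-in-time recovery target set under `bootstrap.recovery` (the timestamp and surrounding values are illustrative, and the recovery source configuration is elided):
+
+```yaml
+spec:
+  bootstrap:
+    recovery:
+      # ...recovery source configuration elided...
+      recoveryTarget:
+        targetTime: "2025-01-01T10:00:00Z"   # RFC3339 timestamp
+        targetTLI: latest
+```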
+
+## ReplicaClusterConfiguration
+
+**Appears in:**
+
+- [ClusterSpec](#postgresql-k8s-enterprisedb-io-v1-ClusterSpec)
+
+ReplicaClusterConfiguration encapsulates the configuration of a replica
+cluster
+
+
+| Field | Description |
+| ----- | ----------- |
+| `self` <br /> `string` | Self defines the name of this cluster. It is used to determine if this is a primary or a replica cluster, comparing it with `primary`. |
+| `primary` <br /> `string` | Primary defines which Cluster is defined to be the primary in the distributed PostgreSQL cluster, based on the topology specified in `externalClusters` |
+| `source` [Required] <br /> `string` | The name of the external cluster which is the replication origin |
+| `enabled` <br /> `bool` | If replica mode is enabled, this cluster will be a replica of an existing cluster. A replica cluster can be created from a recovery object store or via streaming through pg_basebackup. Refer to the Replica clusters page of the documentation for more information. |
+| `promotionToken` <br /> `string` | A demotion token generated by an external cluster used to check if the promotion requirements are met |
+| `minApplyDelay` <br /> `meta/v1.Duration` | When replica mode is enabled, this parameter allows you to replay transactions only when the system time is at least the configured time past the commit time. This provides an opportunity to correct data loss errors. Note that when this parameter is set, a promotion token cannot be used. |
+
+
+
+
+
+
+## ReplicationSlotsConfiguration
+
+**Appears in:**
+
+- [ClusterSpec](#postgresql-k8s-enterprisedb-io-v1-ClusterSpec)
+
+ReplicationSlotsConfiguration encapsulates the configuration
+of replication slots
+
+
+| Field | Description |
+| ----- | ----------- |
+| `highAvailability` <br /> `ReplicationSlotsHAConfiguration` | Replication slots for high availability configuration |
+| `updateInterval` <br /> `int` | Standby will update the status of the local replication slots every `updateInterval` seconds (default 30). |
+| `synchronizeReplicas` <br /> `SynchronizeReplicasConfiguration` | Configures the synchronization of the user defined physical replication slots |
+
+
+
+
+
+
+## ReplicationSlotsHAConfiguration
+
+**Appears in:**
+
+- [ReplicationSlotsConfiguration](#postgresql-k8s-enterprisedb-io-v1-ReplicationSlotsConfiguration)
+
+ReplicationSlotsHAConfiguration encapsulates the configuration
+of the replication slots that are automatically managed by
+the operator to control the streaming replication connections
+with the standby instances for high availability (HA) purposes.
+Replication slots are a PostgreSQL feature that makes sure
+that PostgreSQL automatically keeps WAL files in the primary
+when a streaming client (in this specific case a replica that
+is part of the HA cluster) gets disconnected.
+
+
+| Field | Description |
+| ----- | ----------- |
+| `enabled` <br /> `bool` | If enabled (default), the operator will automatically manage replication slots on the primary instance and use them in streaming replication connections with all the standby instances that are part of the HA cluster. If disabled, the operator will not take advantage of replication slots in streaming connections with the replicas. This feature also controls replication slots in replica clusters, from the designated primary to its cascading replicas. |
+| `slotPrefix` <br /> `string` | Prefix for replication slots managed by the operator for HA. It may only contain lower case letters, numbers, and the underscore character. This can only be set at creation time. By default set to `_cnp_`. |
+| `synchronizeLogicalDecoding` <br /> `bool` | When enabled, the operator automatically manages synchronization of logical decoding (replication) slots across high-availability clusters. Requires either PostgreSQL version 17 or later, or an earlier PostgreSQL version with the pg_failover_slots extension enabled. |
+
+
+
+
+
+
+## RoleConfiguration
+
+**Appears in:**
+
+- [ManagedConfiguration](#postgresql-k8s-enterprisedb-io-v1-ManagedConfiguration)
+
+RoleConfiguration is the representation, in Kubernetes, of a PostgreSQL role,
+with the additional field Ensure specifying whether to ensure the presence or
+absence of the role in the database.
+The defaults of the CREATE ROLE command are applied.
+Reference: https://www.postgresql.org/docs/current/sql-createrole.html
+
+
+| Field | Description |
+| ----- | ----------- |
+| `name` [Required] <br /> `string` | Name of the role |
+| `comment` <br /> `string` | Description of the role |
+| `ensure` <br /> `EnsureOption` | Ensure the role is present or absent. Defaults to "present". |
+| `passwordSecret` <br /> `github.com/cloudnative-pg/machinery/pkg/api.LocalObjectReference` | Secret containing the password of the role (if present). If null, the password will be ignored unless DisablePassword is set. |
+| `connectionLimit` <br /> `int64` | If the role can log in, this specifies how many concurrent connections the role can make. -1 (the default) means no limit. |
+| `validUntil` <br /> `meta/v1.Time` | Date and time after which the role's password is no longer valid. When omitted, the password will never expire (default). |
+| `inRoles` <br /> `[]string` | List of one or more existing roles to which this role will be immediately added as a new member. Default empty. |
+| `inherit` <br /> `bool` | Whether a role "inherits" the privileges of roles it is a member of. Default is true. |
+| `disablePassword` <br /> `bool` | DisablePassword indicates that a role's password should be set to NULL in Postgres |
+| `superuser` <br /> `bool` | Whether the role is a superuser who can override all access restrictions within the database. Superuser status is dangerous and should be used only when really needed. You must yourself be a superuser to create a new superuser. Default is false. |
+| `createdb` <br /> `bool` | When set to true, the role being defined will be allowed to create new databases. Specifying false (default) will deny a role the ability to create databases. |
+| `createrole` <br /> `bool` | Whether the role will be permitted to create, alter, drop, comment on, change the security label for, and grant or revoke membership in other roles. Default is false. |
+| `login` <br /> `bool` | Whether the role is allowed to log in. A role having the login attribute can be thought of as a user. Roles without this attribute are useful for managing database privileges, but are not users in the usual sense of the word. Default is false. |
+| `replication` <br /> `bool` | Whether a role is a replication role. A role must have this attribute (or be a superuser) in order to be able to connect to the server in replication mode (physical or logical replication) and in order to be able to create or drop replication slots. A role having the replication attribute is a very highly privileged role, and should only be used on roles actually used for replication. Default is false. |
+| `bypassrls` <br /> `bool` | Whether a role bypasses every row-level security (RLS) policy. Default is false. |
+
+
+
+
+
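+RoleConfiguration entries are listed under the managed configuration of a Cluster (commonly `.spec.managed.roles`). An illustrative sketch, with placeholder role and secret names:
+
+```yaml
+spec:
+  managed:
+    roles:
+      - name: app_reader
+        ensure: present          # default
+        login: true
+        inRoles:
+          - pg_read_all_data     # illustrative parent role
+        passwordSecret:
+          name: app-reader-password
+        connectionLimit: 10
+```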
+
+## SQLRefs
+
+**Appears in:**
+
+- [BootstrapInitDB](#postgresql-k8s-enterprisedb-io-v1-BootstrapInitDB)
+
+SQLRefs holds references to ConfigMaps or Secrets
+containing SQL files. The references are processed in a specific order:
+first, all Secrets are processed, followed by all ConfigMaps.
+Within each group, the processing order follows the sequence specified
+in their respective arrays.
+
+
+
+
+
+## ScheduledBackupSpec
+
+**Appears in:**
+
+- [ScheduledBackup](#postgresql-k8s-enterprisedb-io-v1-ScheduledBackup)
+
+ScheduledBackupSpec defines the desired state of ScheduledBackup
+
+
+| Field | Description |
+| ----- | ----------- |
+| `suspend` <br /> `bool` | Whether this backup is suspended or not |
+| `immediate` <br /> `bool` | Whether the first backup should start immediately after creation or not |
+| `schedule` [Required] <br /> `string` | The schedule does not follow the same format used in Kubernetes CronJobs, as it includes an additional seconds specifier. See https://pkg.go.dev/github.com/robfig/cron#hdr-CRON_Expression_Format |
+| `cluster` [Required] <br /> `github.com/cloudnative-pg/machinery/pkg/api.LocalObjectReference` | The cluster to backup |
+| `backupOwnerReference` <br /> `string` | Indicates which ownerReference should be put inside the created backup resources. <br /> - `none`: no owner reference for created backup objects (same behavior as before the field was introduced) <br /> - `self`: sets the ScheduledBackup object as owner of the backup <br /> - `cluster`: sets the cluster as owner of the backup |
+| `target` <br /> `BackupTarget` | The policy to decide which instance should perform this backup. If empty, it defaults to `cluster.spec.backup.target`. Available options are empty string, `primary` and `prefer-standby`. `primary` to have backups run always on primary instances, `prefer-standby` to have backups run preferably on the most updated standby, if available. |
+| `method` <br /> `BackupMethod` | The backup method to be used, possible options are `barmanObjectStore`, `volumeSnapshot` or `plugin`. Defaults to: `barmanObjectStore`. |
+| `pluginConfiguration` <br /> `BackupPluginConfiguration` | Configuration parameters passed to the plugin managing this backup |
+| `online` <br /> `bool` | Whether the default type of backup with volume snapshots is online/hot (true, default) or offline/cold (false). Overrides the default setting specified in the cluster field `.spec.backup.volumeSnapshot.online`. |
+| `onlineConfiguration` <br /> `OnlineConfiguration` | Configuration parameters to control the online/hot backup with volume snapshots. Overrides the default settings specified in the cluster `.backup.volumeSnapshot.onlineConfiguration` stanza. |
+
+
+
+
+
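+An illustrative ScheduledBackup manifest using the six-field cron format (note the leading seconds specifier; names and values are placeholders):
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: ScheduledBackup
+metadata:
+  name: backup-example
+spec:
+  # six fields: seconds minutes hours day-of-month month day-of-week
+  schedule: "0 0 2 * * *"        # every day at 02:00:00
+  cluster:
+    name: cluster-example
+  backupOwnerReference: self
+  immediate: true                # run the first backup right after creation
+```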
+
+## ScheduledBackupStatus
+
+**Appears in:**
+
+- [ScheduledBackup](#postgresql-k8s-enterprisedb-io-v1-ScheduledBackup)
+
+ScheduledBackupStatus defines the observed state of ScheduledBackup
+
+
+| Field | Description |
+| ----- | ----------- |
+| `lastCheckTime` <br /> `meta/v1.Time` | The latest time the schedule was checked |
+| `lastScheduleTime` <br /> `meta/v1.Time` | The last time a backup was successfully scheduled |
+| `nextScheduleTime` <br /> `meta/v1.Time` | The next time a backup will run |
+
+
+
+
+
+
+## SchemaSpec
+
+**Appears in:**
+
+- [DatabaseSpec](#postgresql-k8s-enterprisedb-io-v1-DatabaseSpec)
+
+SchemaSpec configures a schema in a database
+
+
+| Field | Description |
+| ----- | ----------- |
+| `DatabaseObjectSpec` <br /> `DatabaseObjectSpec` | (Members of `DatabaseObjectSpec` are embedded into this type.) Common fields |
+| `owner` [Required] <br /> `string` | The role name of the user who owns the schema inside PostgreSQL. It maps to the AUTHORIZATION parameter of CREATE SCHEMA and the OWNER TO command of ALTER SCHEMA. |
+
+
+
+
+
+
+## SecretVersion
+
+**Appears in:**
+
+- [PgBouncerSecrets](#postgresql-k8s-enterprisedb-io-v1-PgBouncerSecrets)
+
+- [PoolerSecrets](#postgresql-k8s-enterprisedb-io-v1-PoolerSecrets)
+
+SecretVersion contains a secret name and its ResourceVersion
+
+
+| Field | Description |
+| ----- | ----------- |
+| `name` <br /> `string` | The name of the secret |
+| `version` <br /> `string` | The ResourceVersion of the secret |
+
+
+
+
+
+
+## SecretsResourceVersion
+
+**Appears in:**
+
+- [ClusterStatus](#postgresql-k8s-enterprisedb-io-v1-ClusterStatus)
+
+SecretsResourceVersion is the resource versions of the secrets
+managed by the operator
+
+
+| Field | Description |
+| ----- | ----------- |
+| `superuserSecretVersion` <br /> `string` | The resource version of the "postgres" user secret |
+| `replicationSecretVersion` <br /> `string` | The resource version of the "streaming_replica" user secret |
+| `applicationSecretVersion` <br /> `string` | The resource version of the "app" user secret |
+| `managedRoleSecretVersion` <br /> `map[string]string` | The resource versions of the managed roles secrets |
+| `caSecretVersion` <br /> `string` | Unused. Retained for compatibility with old versions. |
+| `clientCaSecretVersion` <br /> `string` | The resource version of the PostgreSQL client-side CA secret |
+| `serverCaSecretVersion` <br /> `string` | The resource version of the PostgreSQL server-side CA secret |
+| `serverSecretVersion` <br /> `string` | The resource version of the PostgreSQL server-side secret |
+| `barmanEndpointCA` <br /> `string` | The resource version of the Barman Endpoint CA, if provided |
+| `externalClusterSecretVersion` <br /> `map[string]string` | The resource versions of the external cluster secrets |
+| `metrics` <br /> `map[string]string` | A map with the versions of all the secrets used to pass metrics. Map keys are the secret names, map values are the versions. |
+
+
+
+
+
+
+## ServiceAccountTemplate
+
+**Appears in:**
+
+- [ClusterSpec](#postgresql-k8s-enterprisedb-io-v1-ClusterSpec)
+
+ServiceAccountTemplate contains the template needed to generate the service accounts
+
+
+| Field | Description |
+| ----- | ----------- |
+| `metadata` [Required] <br /> `Metadata` | The metadata to be used for the generated service account |
+
+
+
+
+
+
+## ServiceSelectorType
+
+(Alias of `string`)
+
+**Appears in:**
+
+- [ManagedService](#postgresql-k8s-enterprisedb-io-v1-ManagedService)
+
+- [ManagedServices](#postgresql-k8s-enterprisedb-io-v1-ManagedServices)
+
+ServiceSelectorType describes a valid value for generating the service selectors.
+It indicates which type of service the selector applies to, such as read-write, read, or read-only
+
+
+
+## ServiceTemplateSpec
+
+**Appears in:**
+
+- [ManagedService](#postgresql-k8s-enterprisedb-io-v1-ManagedService)
+
+- [PoolerSpec](#postgresql-k8s-enterprisedb-io-v1-PoolerSpec)
+
+ServiceTemplateSpec is a structure allowing the user to set
+a template for Service generation.
+
+
+| Field | Description |
+| ----- | ----------- |
+| `metadata` <br /> `Metadata` | Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata |
+| `spec` <br /> `core/v1.ServiceSpec` | Specification of the desired behavior of the service. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status |
+
+
+
+
+
+
+## ServiceUpdateStrategy
+
+(Alias of `string`)
+
+**Appears in:**
+
+- [ManagedService](#postgresql-k8s-enterprisedb-io-v1-ManagedService)
+
+ServiceUpdateStrategy describes how the changes to the managed service should be handled
+
+
+
+## SnapshotOwnerReference
+
+(Alias of `string`)
+
+**Appears in:**
+
+- [VolumeSnapshotConfiguration](#postgresql-k8s-enterprisedb-io-v1-VolumeSnapshotConfiguration)
+
+SnapshotOwnerReference defines the reference type for the owner of the snapshot.
+This specifies which owner the processed resources should relate to.
+
+
+
+## SnapshotType
+
+(Alias of `string`)
+
+**Appears in:**
+
+- [Import](#postgresql-k8s-enterprisedb-io-v1-Import)
+
+SnapshotType is a type of allowed import
+
+
+
+## StorageConfiguration
+
+**Appears in:**
+
+- [ClusterSpec](#postgresql-k8s-enterprisedb-io-v1-ClusterSpec)
+
+- [TablespaceConfiguration](#postgresql-k8s-enterprisedb-io-v1-TablespaceConfiguration)
+
+StorageConfiguration is the configuration used to create and reconcile PVCs,
+usable for WAL volumes, PGDATA volumes, or tablespaces
+
+
+| Field | Description |
+| ----- | ----------- |
+| `storageClass` <br /> `string` | StorageClass to use for PVCs. Applied after evaluating the PVC template, if available. If not specified, the generated PVCs will use the default storage class. |
+| `size` <br /> `string` | Size of the storage. Required if not already specified in the PVC template. Changes to this field are automatically reapplied to the created PVCs. Size cannot be decreased. |
+| `resizeInUseVolumes` <br /> `bool` | Resize existent PVCs, defaults to true |
+| `pvcTemplate` <br /> `core/v1.PersistentVolumeClaimSpec` | Template to be used to generate the Persistent Volume Claim |
+
+
+
+
+
+
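+For illustration, a minimal sketch of how these fields are typically set under
+`spec.storage` of a `Cluster` resource (the storage class name is a placeholder
+for whatever your Kubernetes environment provides):
+
+```yaml
+# Hypothetical example: "standard" is a placeholder storage class name.
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+  name: cluster-example
+spec:
+  instances: 3
+  storage:
+    storageClass: standard
+    size: 1Gi
+    resizeInUseVolumes: true
+```
+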
+## SubscriptionReclaimPolicy
+
+(Alias of `string`)
+
+**Appears in:**
+
+- [SubscriptionSpec](#postgresql-k8s-enterprisedb-io-v1-SubscriptionSpec)
+
+SubscriptionReclaimPolicy describes a policy for end-of-life maintenance of Subscriptions.
+
+
+
+## SubscriptionSpec
+
+**Appears in:**
+
+- [Subscription](#postgresql-k8s-enterprisedb-io-v1-Subscription)
+
+SubscriptionSpec defines the desired state of Subscription
+
+
+| Field | Description |
+| ----- | ----------- |
+| `cluster` [Required] *core/v1.LocalObjectReference* | The name of the PostgreSQL cluster that identifies the "subscriber" |
+| `name` [Required] *string* | The name of the subscription inside PostgreSQL |
+| `dbname` [Required] *string* | The name of the database where the publication will be installed in the "subscriber" cluster |
+| `parameters` *map[string]string* | Subscription parameters included in the WITH clause of the PostgreSQL CREATE SUBSCRIPTION command. Most parameters cannot be changed after the subscription is created and will be ignored if modified later, except for a limited set documented at: https://www.postgresql.org/docs/current/sql-altersubscription.html#SQL-ALTERSUBSCRIPTION-PARAMS-SET |
+| `publicationName` [Required] *string* | The name of the publication inside the PostgreSQL database in the "publisher" |
+| `publicationDBName` *string* | The name of the database containing the publication on the external cluster. Defaults to the one in the external cluster definition. |
+| `externalClusterName` [Required] *string* | The name of the external cluster with the publication ("publisher") |
+| `subscriptionReclaimPolicy` *SubscriptionReclaimPolicy* | The policy for end-of-life maintenance of this subscription |
+
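+As a sketch of how these fields fit together, a hypothetical `Subscription`
+resource might look like this (cluster, subscription, and publication names
+are placeholders):
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Subscription
+metadata:
+  name: subscription-example
+spec:
+  cluster:
+    name: cluster-dest            # the "subscriber" cluster
+  name: sub                       # subscription name inside PostgreSQL
+  dbname: app                     # database hosting the subscription
+  publicationName: pub            # publication defined on the "publisher"
+  externalClusterName: cluster-source
+  parameters:
+    copy_data: "true"             # passed to the WITH clause of CREATE SUBSCRIPTION
+```
+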
+## SubscriptionStatus
+
+**Appears in:**
+
+- [Subscription](#postgresql-k8s-enterprisedb-io-v1-Subscription)
+
+SubscriptionStatus defines the observed state of Subscription
+
+
+| Field | Description |
+| ----- | ----------- |
+| `observedGeneration` *int64* | A sequence number representing the latest desired state that was synchronized |
+| `applied` *bool* | Applied is true if the subscription was reconciled correctly |
+| `message` *string* | Message is the reconciliation output message |
+
+## SwitchReplicaClusterStatus
+
+**Appears in:**
+
+- [ClusterStatus](#postgresql-k8s-enterprisedb-io-v1-ClusterStatus)
+
+SwitchReplicaClusterStatus contains all the statuses regarding the switch of a cluster to a replica cluster
+
+
+| Field | Description |
+| ----- | ----------- |
+| `inProgress` *bool* | InProgress indicates if there is an ongoing procedure of switching a cluster to a replica cluster. |
+
+## SyncReplicaElectionConstraints
+
+**Appears in:**
+
+- [PostgresConfiguration](#postgresql-k8s-enterprisedb-io-v1-PostgresConfiguration)
+
+SyncReplicaElectionConstraints contains the constraints for sync replicas election.
+For the anti-affinity parameters, two instances are considered in the same
+location if all the label values match.
+In the future, synchronous replica election restriction by name will be supported.
+
+
+| Field | Description |
+| ----- | ----------- |
+| `nodeLabelsAntiAffinity` *[]string* | A list of node label values to extract and compare to evaluate if the pods reside in the same topology or not |
+| `enabled` [Required] *bool* | This flag enables the constraints for sync replicas |
+
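+As a sketch, these constraints are set in the cluster spec; the stanza below
+assumes the `syncReplicaElectionConstraint` field under `spec.postgresql` and
+uses a common topology label as an example:
+
+```yaml
+spec:
+  postgresql:
+    syncReplicaElectionConstraint:
+      enabled: true
+      nodeLabelsAntiAffinity:
+        - topology.kubernetes.io/zone   # instances sharing this label value
+                                        # are treated as the same location
+```
+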
+## SynchronizeReplicasConfiguration
+
+**Appears in:**
+
+- [ReplicationSlotsConfiguration](#postgresql-k8s-enterprisedb-io-v1-ReplicationSlotsConfiguration)
+
+SynchronizeReplicasConfiguration contains the configuration for the synchronization of user defined
+physical replication slots
+
+
+| Field | Description |
+| ----- | ----------- |
+| `enabled` [Required] *bool* | When set to true, every replication slot that is on the primary is synchronized on each standby |
+| `excludePatterns` *[]string* | List of regular expression patterns to match the names of replication slots to be excluded (by default empty) |
+
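+A minimal sketch, assuming the `synchronizeReplicas` field under
+`spec.replicationSlots` (the exclusion pattern is a hypothetical example):
+
+```yaml
+spec:
+  replicationSlots:
+    synchronizeReplicas:
+      enabled: true
+      excludePatterns:
+        - "^temp_"   # do not synchronize slots whose name starts with temp_
+```
+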
+## SynchronousReplicaConfiguration
+
+**Appears in:**
+
+- [PostgresConfiguration](#postgresql-k8s-enterprisedb-io-v1-PostgresConfiguration)
+
+SynchronousReplicaConfiguration contains the configuration of the
+PostgreSQL synchronous replication feature.
+Important: at this moment, .spec.minSyncReplicas and .spec.maxSyncReplicas
+must also be taken into account.
+
+
+| Field | Description |
+| ----- | ----------- |
+| `method` [Required] *SynchronousReplicaConfigurationMethod* | Method to select synchronous replication standbys from the listed servers, accepting 'any' (quorum-based synchronous replication) or 'first' (priority-based synchronous replication) as values. |
+| `number` [Required] *int* | Specifies the number of synchronous standby servers that transactions must wait for responses from. |
+| `maxStandbyNamesFromCluster` *int* | Specifies the maximum number of local cluster pods that can be automatically included in the `synchronous_standby_names` option in PostgreSQL. |
+| `standbyNamesPre` *[]string* | A user-defined list of application names to be added to `synchronous_standby_names` before local cluster pods (the order is only useful for priority-based synchronous replication). |
+| `standbyNamesPost` *[]string* | A user-defined list of application names to be added to `synchronous_standby_names` after local cluster pods (the order is only useful for priority-based synchronous replication). |
+| `dataDurability` *DataDurabilityLevel* | If set to "required", data durability is strictly enforced. Write operations with synchronous commit settings (on, remote_write, or remote_apply) will block if there are insufficient healthy replicas, ensuring data persistence. If set to "preferred", data durability is maintained when healthy replicas are available, but the required number of instances will adjust dynamically if replicas become unavailable. This setting relaxes strict durability enforcement to allow for operational continuity. This setting is only applicable if both standbyNamesPre and standbyNamesPost are unset (empty). |
+
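+A minimal sketch of quorum-based synchronous replication, assuming the
+`synchronous` field under `spec.postgresql`:
+
+```yaml
+spec:
+  instances: 3
+  postgresql:
+    synchronous:
+      method: any            # quorum-based synchronous replication
+      number: 1              # transactions wait for one synchronous standby
+      dataDurability: required
+```
+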
+## SynchronousReplicaConfigurationMethod
+
+(Alias of `string`)
+
+**Appears in:**
+
+- [SynchronousReplicaConfiguration](#postgresql-k8s-enterprisedb-io-v1-SynchronousReplicaConfiguration)
+
+SynchronousReplicaConfigurationMethod configures whether to use
+quorum based replication or a priority list
+
+
+
+## TDEConfiguration
+
+**Appears in:**
+
+- [EPASConfiguration](#postgresql-k8s-enterprisedb-io-v1-EPASConfiguration)
+
+TDEConfiguration contains the Transparent Data Encryption configuration
+
+
+| Field | Description |
+| ----- | ----------- |
+| `enabled` *bool* | True if we want to have TDE enabled |
+| `secretKeyRef` *core/v1.SecretKeySelector* | Reference to the secret that contains the encryption key |
+| `wrapCommand` *core/v1.SecretKeySelector* | WrapCommand is the encrypt command provided by the user |
+| `unwrapCommand` *core/v1.SecretKeySelector* | UnwrapCommand is the decryption command provided by the user |
+| `passphraseCommand` *core/v1.SecretKeySelector* | PassphraseCommand is the command executed to get the passphrase that will be passed to the OpenSSL command to encrypt and decrypt |
+
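+A minimal sketch, assuming the `tde` field under `spec.postgresql.epas`
+(the secret name and key are placeholders):
+
+```yaml
+spec:
+  postgresql:
+    epas:
+      tde:
+        enabled: true
+        secretKeyRef:
+          name: tde-key-secret   # hypothetical secret holding the encryption key
+          key: key
+```
+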
+## TablespaceConfiguration
+
+**Appears in:**
+
+- [ClusterSpec](#postgresql-k8s-enterprisedb-io-v1-ClusterSpec)
+
+TablespaceConfiguration is the configuration of a tablespace, and includes
+the storage specification for the tablespace
+
+
+| Field | Description |
+| ----- | ----------- |
+| `name` [Required] *string* | The name of the tablespace |
+| `storage` [Required] *StorageConfiguration* | The storage configuration for the tablespace |
+| `owner` *DatabaseRoleRef* | Owner is the PostgreSQL user owning the tablespace |
+| `temporary` *bool* | When set to true, the tablespace will be added as a `temp_tablespaces` entry in PostgreSQL, and will be available to automatically house temp database objects, or other temporary files. Please refer to PostgreSQL documentation for more information on the `temp_tablespaces` GUC. |
+
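+A sketch of how tablespaces are declared under `spec.tablespaces` of a
+`Cluster` resource (names and sizes are placeholders):
+
+```yaml
+spec:
+  tablespaces:
+    - name: tbs1
+      storage:
+        size: 1Gi
+    - name: scratch
+      temporary: true            # added to temp_tablespaces in PostgreSQL
+      storage:
+        size: 1Gi
+```
+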
+## TablespaceState
+
+**Appears in:**
+
+- [ClusterStatus](#postgresql-k8s-enterprisedb-io-v1-ClusterStatus)
+
+TablespaceState represents the state of a tablespace in a cluster
+
+
+| Field | Description |
+| ----- | ----------- |
+| `name` [Required] *string* | Name is the name of the tablespace |
+| `owner` *string* | Owner is the PostgreSQL user owning the tablespace |
+| `state` [Required] *TablespaceStatus* | State is the latest reconciliation state |
+| `error` *string* | Error is the reconciliation error, if any |
+
+## TablespaceStatus
+
+(Alias of `string`)
+
+**Appears in:**
+
+- [TablespaceState](#postgresql-k8s-enterprisedb-io-v1-TablespaceState)
+
+TablespaceStatus represents the status of a tablespace in the cluster
+
+
+
+## Topology
+
+**Appears in:**
+
+- [ClusterStatus](#postgresql-k8s-enterprisedb-io-v1-ClusterStatus)
+
+Topology contains the cluster topology
+
+
+| Field | Description |
+| ----- | ----------- |
+| `instances` *map[PodName]PodTopologyLabels* | Instances contains the pod topology of the instances |
+| `nodesUsed` *int32* | NodesUsed represents the count of distinct nodes accommodating the instances. A value of '1' suggests that all instances are hosted on a single node, implying the absence of High Availability (HA). Ideally, this value should be the same as the number of instances in the Postgres HA cluster, implying a shared-nothing architecture on the compute side. |
+| `successfullyExtracted` *bool* | SuccessfullyExtracted indicates if the topology data was extracted. It is useful to enact fallback behaviors in synchronous replica election in case of failures |
+
+## VolumeSnapshotConfiguration
+
+**Appears in:**
+
+- [BackupConfiguration](#postgresql-k8s-enterprisedb-io-v1-BackupConfiguration)
+
+VolumeSnapshotConfiguration represents the configuration for the execution of snapshot backups.
+
+
+| Field | Description |
+| ----- | ----------- |
+| `labels` *map[string]string* | Labels are key-value pairs that will be added to .metadata.labels of snapshot resources. |
+| `annotations` *map[string]string* | Annotations are key-value pairs that will be added to .metadata.annotations of snapshot resources. |
+| `className` *string* | ClassName specifies the Snapshot Class to be used for the PG_DATA PersistentVolumeClaim. It is the default class for the other types if no specific class is present. |
+| `walClassName` *string* | WalClassName specifies the Snapshot Class to be used for the PG_WAL PersistentVolumeClaim. |
+| `tablespaceClassName` *map[string]string* | TablespaceClassName specifies the Snapshot Class to be used for the tablespaces. Defaults to the PGDATA Snapshot Class, if set. |
+| `snapshotOwnerReference` *SnapshotOwnerReference* | SnapshotOwnerReference indicates the type of owner reference the snapshot should have. |
+| `online` *bool* | Whether the default type of backup with volume snapshots is online/hot (true, default) or offline/cold (false). |
+| `onlineConfiguration` *OnlineConfiguration* | Configuration parameters to control the online/hot backup with volume snapshots. |
+
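+A minimal sketch, assuming the `volumeSnapshot` field under `spec.backup`
+(the VolumeSnapshotClass name and label are placeholders):
+
+```yaml
+spec:
+  backup:
+    volumeSnapshot:
+      className: csi-snapclass       # hypothetical VolumeSnapshotClass
+      online: true                   # online/hot backup (the default)
+      snapshotOwnerReference: cluster
+      labels:
+        backup-tier: nightly         # hypothetical label added to snapshots
+```
+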
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v0.6.0.mdx b/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v0.6.0.mdx
new file mode 100644
index 0000000000..33e74c97c0
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v0.6.0.mdx
@@ -0,0 +1,344 @@
+---
+title: 'API Reference - v0.6.0'
+navTitle: v0.6.0
+pdfExclude: true
+---
+
+Cloud Native PostgreSQL extends the Kubernetes API by defining the following
+custom resources:
+
+- [Backup](#backup)
+- [Cluster](#cluster)
+- [ScheduledBackup](#scheduledbackup)
+
+All the resources are defined in the `postgresql.k8s.enterprisedb.io/v1alpha1`
+API.
+
+Please refer to the ["Configuration Samples" page](../samples.md) of the
+documentation for examples of usage.
+
+Below you will find a description of the defined resources:
+
+
+
+
+
+- [Backup](#backup)
+- [BackupList](#backuplist)
+- [BackupSpec](#backupspec)
+- [BackupStatus](#backupstatus)
+- [AffinityConfiguration](#affinityconfiguration)
+- [BackupConfiguration](#backupconfiguration)
+- [BarmanObjectStoreConfiguration](#barmanobjectstoreconfiguration)
+- [BootstrapConfiguration](#bootstrapconfiguration)
+- [BootstrapInitDB](#bootstrapinitdb)
+- [BootstrapRecovery](#bootstraprecovery)
+- [Cluster](#cluster)
+- [ClusterList](#clusterlist)
+- [ClusterSpec](#clusterspec)
+- [ClusterStatus](#clusterstatus)
+- [DataBackupConfiguration](#databackupconfiguration)
+- [NodeMaintenanceWindow](#nodemaintenancewindow)
+- [PostgresConfiguration](#postgresconfiguration)
+- [RecoveryTarget](#recoverytarget)
+- [RollingUpdateStatus](#rollingupdatestatus)
+- [S3Credentials](#s3credentials)
+- [StorageConfiguration](#storageconfiguration)
+- [WalBackupConfiguration](#walbackupconfiguration)
+- [ScheduledBackup](#scheduledbackup)
+- [ScheduledBackupList](#scheduledbackuplist)
+- [ScheduledBackupSpec](#scheduledbackupspec)
+- [ScheduledBackupStatus](#scheduledbackupstatus)
+
+## Backup
+
+Backup is the Schema for the backups API
+
+| Field | Description | Scheme | Required |
+| -------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------ | -------- |
+| metadata | | [metav1.ObjectMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#objectmeta-v1-meta) | false |
+| spec | Specification of the desired behavior of the backup. More info: | [BackupSpec](#backupspec) | false |
+| status | Most recently observed status of the backup. This data may not be up to date. Populated by the system. Read-only. More info: | [BackupStatus](#backupstatus) | false |
+
+## BackupList
+
+BackupList contains a list of Backup
+
+| Field | Description | Scheme | Required |
+| -------- | ------------------------------------------------------------------------------------------------------------------------------------ | -------------------------------------------------------------------------------------------------------- | -------- |
+| metadata | Standard list metadata. More info: | [metav1.ListMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#listmeta-v1-meta) | false |
+| items | List of backups | [][Backup](#backup) | true |
+
+## BackupSpec
+
+BackupSpec defines the desired state of Backup
+
+| Field | Description | Scheme | Required |
+| ------- | --------------------- | ---------------------------------------------------------------------------------------------------------------------------- | -------- |
+| cluster | The cluster to backup | [v1.LocalObjectReference](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#localobjectreference-v1-core) | false |
+
+## BackupStatus
+
+BackupStatus defines the observed state of Backup
+
+| Field | Description | Scheme | Required |
+| --------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------ | ------------------------------- | -------- |
+| s3Credentials | The credentials to use to upload data to S3 | [S3Credentials](#s3credentials) | true |
+| endpointURL | Endpoint to be used to upload data to the cloud, overriding the automatic endpoint discovery | string | false |
+| destinationPath | The path where to store the backup (i.e. s3://bucket/path/to/folder) this path, with different destination folders, will be used for WALs and for data | string | true |
+| serverName | The server name on S3, the cluster name is used if this parameter is omitted | string | false |
+| encryption | Encryption method required to S3 API | string | false |
+| backupId | The ID of the Barman backup | string | false |
+| phase | The last backup status | BackupPhase | false |
+| startedAt | When the backup was started | \*metav1.Time | false |
+| stoppedAt | When the backup was terminated | \*metav1.Time | false |
+| error | The detected error | string | false |
+| commandOutput | The backup command output | string | false |
+| commandError    | The backup command error output | string | false |
+
+## AffinityConfiguration
+
+AffinityConfiguration contains the info we need to create the affinity rules for Pods
+
+| Field | Description | Scheme | Required |
+| --------------------- | ----------------------------------------------------------------------------------------------- | ------ | -------- |
+| enablePodAntiAffinity | Should we enable anti affinity or not? | bool | true |
+| topologyKey | TopologyKey to use for anti-affinity configuration. See k8s documentation for more info on that | string | true |
+
+## BackupConfiguration
+
+BackupConfiguration defines how backups of the cluster are taken. Currently the only supported backup method is barmanObjectStore. For details and examples refer to the Backup and Recovery section of the documentation
+
+| Field | Description | Scheme | Required |
+| ----------------- | ------------------------------------------------- | ------------------------------------------------------------------- | -------- |
+| barmanObjectStore | The configuration for the barman-cloud tool suite | \*[BarmanObjectStoreConfiguration](#barmanobjectstoreconfiguration) | false |
+
+## BarmanObjectStoreConfiguration
+
+BarmanObjectStoreConfiguration contains the backup configuration using Barman against an S3-compatible object storage
+
+| Field | Description | Scheme | Required |
+| --------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------- | -------- |
+| s3Credentials | The credentials to use to upload data to S3 | [S3Credentials](#s3credentials) | true |
+| endpointURL | Endpoint to be used to upload data to the cloud, overriding the automatic endpoint discovery | string | false |
+| destinationPath | The path where to store the backup (i.e. s3://bucket/path/to/folder) this path, with different destination folders, will be used for WALs and for data | string | true |
+| serverName | The server name on S3, the cluster name is used if this parameter is omitted | string | false |
+| wal | The configuration for the backup of the WAL stream. When not defined, WAL files will be stored uncompressed and may be unencrypted in the object store, according to the bucket default policy. | \*[WalBackupConfiguration](#walbackupconfiguration) | false |
+| data | The configuration to be used to backup the data files When not defined, base backups files will be stored uncompressed and may be unencrypted in the object store, according to the bucket default policy. | \*[DataBackupConfiguration](#databackupconfiguration) | false |
+
+## BootstrapConfiguration
+
+BootstrapConfiguration contains information about how to create the PostgreSQL cluster. Only a single bootstrap method can be defined among the supported ones. `initdb` will be used as the bootstrap method if left unspecified. Refer to the Bootstrap page of the documentation for more information.
+
+| Field | Description | Scheme | Required |
+| -------- | ----------------------------------- | ----------------------------------------- | -------- |
+| initdb | Bootstrap the cluster via initdb | \*[BootstrapInitDB](#bootstrapinitdb) | false |
+| recovery | Bootstrap the cluster from a backup | \*[BootstrapRecovery](#bootstraprecovery) | false |
+
+## BootstrapInitDB
+
+BootstrapInitDB is the configuration of the bootstrap process when initdb is used Refer to the Bootstrap page of the documentation for more information.
+
+| Field | Description | Scheme | Required |
+| -------- | -------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------- | -------- |
+| database | Name of the database used by the application. Default: `app`. | string | true |
+| owner | Name of the owner of the database in the instance to be used by applications. Defaults to the value of the `database` key. | string | true |
+| secret | Name of the secret containing the initial credentials for the owner of the user database. If empty a new secret will be created from scratch | \*corev1.LocalObjectReference | false |
+| redwood | If we need to enable/disable Redwood compatibility. Requires EPAS and for EPAS defaults to true | \*bool | false |
+| options | The list of options that must be passed to initdb when creating the cluster | \[]string | false |
+
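+A minimal sketch of an `initdb` bootstrap for this legacy `v1alpha1` API
+(names and sizes are placeholders):
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1alpha1
+kind: Cluster
+metadata:
+  name: cluster-example
+spec:
+  instances: 3
+  bootstrap:
+    initdb:
+      database: app    # the default application database
+      owner: app       # defaults to the value of `database`
+  storage:
+    size: 1Gi
+```
+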
+## BootstrapRecovery
+
+BootstrapRecovery contains the configuration required to restore from a backup with the specified name and, after changing the password to the one chosen for the superuser, use it to bootstrap a full cluster by cloning all the instances from the restored primary. Refer to the Bootstrap page of the documentation for more information.
+
+| Field | Description | Scheme | Required |
+| -------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------- | -------- |
+| backup | The backup we need to restore | corev1.LocalObjectReference | true |
+| recoveryTarget | By default, recovery ends as soon as a consistent state is reached, which means at the end of a backup. This option allows you to fine-tune the recovery process | \*[RecoveryTarget](#recoverytarget) | false |
+
+## Cluster
+
+Cluster is the Schema for the postgresql API
+
+| Field | Description | Scheme | Required |
+| -------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------ | -------- |
+| metadata | | [metav1.ObjectMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#objectmeta-v1-meta) | false |
+| spec | Specification of the desired behavior of the cluster. More info: | [ClusterSpec](#clusterspec) | false |
+| status | Most recently observed status of the cluster. This data may not be up to date. Populated by the system. Read-only. More info: | [ClusterStatus](#clusterstatus) | false |
+
+## ClusterList
+
+ClusterList contains a list of Cluster
+
+| Field | Description | Scheme | Required |
+| -------- | ------------------------------------------------------------------------------------------------------------------------------------ | -------------------------------------------------------------------------------------------------------- | -------- |
+| metadata | Standard list metadata. More info: | [metav1.ListMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#listmeta-v1-meta) | false |
+| items | List of clusters | [][Cluster](#cluster) | true |
+
+## ClusterSpec
+
+ClusterSpec defines the desired state of Cluster
+
+| Field | Description | Scheme | Required |
+| --------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------- | -------- |
+| description | Description of this PostgreSQL cluster | string | false |
+| imageName | Name of the container image | string | false |
+| postgresUID | The UID of the `postgres` user inside the image, defaults to `26` | int64 | false |
+| postgresGID | The GID of the `postgres` user inside the image, defaults to `26` | int64 | false |
+| instances | Number of instances required in the cluster | int32 | true |
+| minSyncReplicas | Minimum number of instances required in synchronous replication with the primary. Undefined or 0 allow writes to complete when no standby is available. | int32 | false |
+| maxSyncReplicas | The target value for the synchronous replication quorum, that can be decreased if the number of ready standbys is lower than this. Undefined or 0 disable synchronous replication. | int32 | false |
+| postgresql | Configuration of the PostgreSQL server | [PostgresConfiguration](#postgresconfiguration) | false |
+| bootstrap | Instructions to bootstrap this cluster | \*[BootstrapConfiguration](#bootstrapconfiguration) | false |
+| superuserSecret | The secret containing the superuser password. If not defined a new secret will be created with a randomly generated password | \*corev1.LocalObjectReference | false |
+| imagePullSecrets | The list of pull secrets to be used to pull the images. If the license key contains a pull secret that secret will be automatically included. | \[]corev1.LocalObjectReference | false |
+| storage | Configuration of the storage of the instances | [StorageConfiguration](#storageconfiguration) | false |
+| startDelay | The time in seconds that is allowed for a PostgreSQL instance to successfully start up (default 30) | int32 | false |
+| stopDelay | The time in seconds that is allowed for a PostgreSQL instance node to gracefully shutdown (default 30) | int32 | false |
+| affinity | Affinity/Anti-affinity rules for Pods | [AffinityConfiguration](#affinityconfiguration) | false |
+| resources | Resources requirements of every generated Pod. Please refer to for more information. | corev1.ResourceRequirements | false |
+| primaryUpdateStrategy | Strategy to follow to upgrade the primary server during a rolling update procedure, after all replicas have been successfully updated: it can be automated (`unsupervised` - default) or manual (`supervised`) | PrimaryUpdateStrategy | false |
+| backup | The configuration to be used for backups | \*[BackupConfiguration](#backupconfiguration) | false |
+| nodeMaintenanceWindow | Define a maintenance window for the Kubernetes nodes | \*[NodeMaintenanceWindow](#nodemaintenancewindow) | false |
+| licenseKey | The license key of the cluster. When empty, the cluster operates in trial mode and after the expiry date (default 30 days) the operator will cease any reconciliation attempt. For details, please refer to the license agreement that comes with the operator. | string | false |
+
+## ClusterStatus
+
+ClusterStatus defines the observed state of Cluster
+
+| Field | Description | Scheme | Required |
+| ------------------- | -------------------------------------------------------------------------------------------------- | ---------------------------- | -------- |
+| instances | Total number of instances in the cluster | int32 | false |
+| readyInstances | Total number of ready instances in the cluster | int32 | false |
+| instancesStatus | Instances status | map[utils.PodStatus][]string | false |
+| latestGeneratedNode | ID of the latest generated node (used to avoid node name clashing) | int32 | false |
+| currentPrimary | Current primary instance | string | false |
+| targetPrimary | Target primary instance, this is different from the previous one during a switchover or a failover | string | false |
+| pvcCount | How many PVCs have been created by this cluster | int32 | false |
+| jobCount | How many Jobs have been created by this cluster | int32 | false |
+| danglingPVC | List of all the PVCs created by this cluster and still available which are not attached to a Pod | \[]string | false |
+| licenseStatus | Status of the license | licensekey.Status | false |
+| writeService | Current write pod | string | false |
+| readService | Current list of read pods | string | false |
+| phase | Current phase of the cluster | string | false |
+| phaseReason | Reason for the current phase | string | false |
+
+## DataBackupConfiguration
+
+DataBackupConfiguration is the configuration of the backup of the data directory
+
+| Field | Description | Scheme | Required |
+| ------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------- | -------- |
+| compression | Compress a backup file (a tar file per tablespace) while streaming it to the object store. Available options are empty string (no compression, default), `gzip` or `bzip2`. | CompressionType | false |
+| encryption          | Whether to force the encryption of files (if the bucket is not already configured for that). Allowed options are empty string (use the bucket policy, default), `AES256` and `aws:kms` | EncryptionType | false |
+| immediateCheckpoint | Control whether the I/O workload for the backup initial checkpoint will be limited, according to the `checkpoint_completion_target` setting on the PostgreSQL server. If set to true, an immediate checkpoint will be used, meaning PostgreSQL will complete the checkpoint as soon as possible. `false` by default. | bool | false |
+| jobs | The number of parallel jobs to be used to upload the backup, defaults to 2 | \*int32 | false |
+
+## NodeMaintenanceWindow
+
+NodeMaintenanceWindow contains information that the operator will use while upgrading the underlying node.
+
+This option is only useful when using local storage, as the Pods can't be freely moved between nodes in that configuration.
+
+| Field | Description | Scheme | Required |
+| ---------- | ------------------------------------------------------------------------------------------ | ------ | -------- |
+| inProgress | Is there a node maintenance activity in progress? | bool | true |
+| reusePVC | Reuse the existing PVC (wait for the node to come up again) or not (recreate it elsewhere) | \*bool | true |
+
+## PostgresConfiguration
+
+PostgresConfiguration defines the PostgreSQL configuration
+
+| Field | Description | Scheme | Required |
+| ---------- | ----------------------------------------------------------------------------------------- | ----------------- | -------- |
+| parameters | PostgreSQL configuration options (postgresql.conf) | map[string]string | false |
+| pg_hba | PostgreSQL Host Based Authentication rules (lines to be appended to the pg_hba.conf file) | \[]string | false |
+
+## RecoveryTarget
+
+RecoveryTarget allows you to configure the point at which the recovery process will stop. All the target options except TargetTLI are mutually exclusive.
+
+| Field | Description | Scheme | Required |
+| --------------- | ------------------------------------------------------------------------- | ------ | -------- |
+| targetTLI       | The target timeline ("latest", "current" or a positive integer) | string | false |
+| targetXID | The target transaction ID | string | false |
+| targetName | The target name (to be previously created with `pg_create_restore_point`) | string | false |
+| targetLSN | The target LSN (Log Sequence Number) | string | false |
+| targetTime | The target time, in any unambiguous representation allowed by PostgreSQL | string | false |
+| targetImmediate | End recovery as soon as a consistent state is reached | \*bool | false |
+| exclusive | Set the target to be exclusive (defaults to true) | \*bool | false |
+
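+A sketch of a point-in-time recovery bootstrap using a recovery target
+(the backup name and timestamp are placeholders):
+
+```yaml
+spec:
+  bootstrap:
+    recovery:
+      backup:
+        name: backup-example                     # hypothetical Backup resource
+      recoveryTarget:
+        targetTime: "2020-11-26 15:22:00.00000+00"
+```
+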
+## RollingUpdateStatus
+
+RollingUpdateStatus contains the information about an instance which is being updated
+
+| Field | Description | Scheme | Required |
+| --------- | ----------------------------------- | ----------- | -------- |
+| imageName | The image which we put into the Pod | string | true |
+| startedAt | When the update has been started | metav1.Time | false |
+
+## S3Credentials
+
+S3Credentials is the type for the credentials to be used to upload files to S3
+
+| Field | Description | Scheme | Required |
+| --------------- | -------------------------------------- | ------------------------ | -------- |
+| accessKeyId | The reference to the access key id | corev1.SecretKeySelector | true |
+| secretAccessKey | The reference to the secret access key | corev1.SecretKeySelector | true |
+
+## StorageConfiguration
+
+StorageConfiguration is the configuration of the storage of the PostgreSQL instances
+
+| Field | Description | Scheme | Required |
+| ------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ---------------------------------- | -------- |
+| storageClass | StorageClass to use for database data (`PGDATA`). Applied after evaluating the PVC template, if available. If not specified, generated PVCs will be satisfied by the default storage class | \*string | false |
+| size | Size of the storage. Required if not already specified in the PVC template. | string | true |
+| pvcTemplate | Template to be used to generate the Persistent Volume Claim | \*corev1.PersistentVolumeClaimSpec | false |
+
+## WalBackupConfiguration
+
+WalBackupConfiguration is the configuration of the backup of the WAL stream
+
+| Field | Description | Scheme | Required |
+| ----------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------- | -------- |
+| compression | Compress a WAL file before sending it to the object store. Available options are empty string (no compression, default), `gzip` or `bzip2`. | CompressionType | false |
+| encryption  | Whether to force the encryption of files (if the bucket is not already configured for that). Allowed options are empty string (use the bucket policy, default), `AES256` and `aws:kms` | EncryptionType | false |
+
+## ScheduledBackup
+
+ScheduledBackup is the Schema for the scheduledbackups API
+
+| Field | Description | Scheme | Required |
+| -------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------ | -------- |
+| metadata | | [metav1.ObjectMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#objectmeta-v1-meta) | false |
+| spec | Specification of the desired behavior of the ScheduledBackup. More info: | [ScheduledBackupSpec](#scheduledbackupspec) | false |
+| status | Most recently observed status of the ScheduledBackup. This data may not be up to date. Populated by the system. Read-only. More info: | [ScheduledBackupStatus](#scheduledbackupstatus) | false |
+
+## ScheduledBackupList
+
+ScheduledBackupList contains a list of ScheduledBackup
+
+| Field | Description | Scheme | Required |
+| -------- | ------------------------------------------------------------------------------------------------------------------------------------ | -------------------------------------------------------------------------------------------------------- | -------- |
+| metadata | Standard list metadata. More info: | [metav1.ListMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#listmeta-v1-meta) | false |
+| items | List of clusters | [][ScheduledBackup](#scheduledbackup) | true |
+
+## ScheduledBackupSpec
+
+ScheduledBackupSpec defines the desired state of ScheduledBackup
+
+| Field | Description | Scheme | Required |
+| -------- | ---------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------- | -------- |
+| suspend  | If this backup is suspended or not | \*bool | false |
+| schedule | The schedule in Cron format, see . | string | true |
+| cluster | The cluster to backup | [v1.LocalObjectReference](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#localobjectreference-v1-core) | false |
+
+## ScheduledBackupStatus
+
+ScheduledBackupStatus defines the observed state of ScheduledBackup
+
+| Field | Description | Scheme | Required |
+| ---------------- | -------------------------------------------------------------------------- | ------------- | -------- |
+| lastCheckTime    | The latest time the schedule was checked | \*metav1.Time | false |
+| lastScheduleTime | Information when was the last time that backup was successfully scheduled. | \*metav1.Time | false |
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v0.7.0.mdx b/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v0.7.0.mdx
new file mode 100644
index 0000000000..534240c3fe
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v0.7.0.mdx
@@ -0,0 +1,345 @@
+---
+title: 'API Reference - v0.7.0'
+navTitle: v0.7.0
+pdfExclude: true
+---
+
+Cloud Native PostgreSQL extends the Kubernetes API by defining the following
+custom resources:
+
+- [Backup](#backup)
+- [Cluster](#cluster)
+- [ScheduledBackup](#scheduledbackup)
+
+All the resources are defined in the `postgresql.k8s.enterprisedb.io/v1alpha1`
+API.
+
+Please refer to the ["Configuration Samples" page](../samples.md) of the
+documentation for examples of usage.
+
+Below you will find a description of the defined resources:
+
+
+
+
+
+- [Backup](#backup)
+- [BackupList](#backuplist)
+- [BackupSpec](#backupspec)
+- [BackupStatus](#backupstatus)
+- [AffinityConfiguration](#affinityconfiguration)
+- [BackupConfiguration](#backupconfiguration)
+- [BarmanObjectStoreConfiguration](#barmanobjectstoreconfiguration)
+- [BootstrapConfiguration](#bootstrapconfiguration)
+- [BootstrapInitDB](#bootstrapinitdb)
+- [BootstrapRecovery](#bootstraprecovery)
+- [Cluster](#cluster)
+- [ClusterList](#clusterlist)
+- [ClusterSpec](#clusterspec)
+- [ClusterStatus](#clusterstatus)
+- [DataBackupConfiguration](#databackupconfiguration)
+- [NodeMaintenanceWindow](#nodemaintenancewindow)
+- [PostgresConfiguration](#postgresconfiguration)
+- [RecoveryTarget](#recoverytarget)
+- [RollingUpdateStatus](#rollingupdatestatus)
+- [S3Credentials](#s3credentials)
+- [StorageConfiguration](#storageconfiguration)
+- [WalBackupConfiguration](#walbackupconfiguration)
+- [ScheduledBackup](#scheduledbackup)
+- [ScheduledBackupList](#scheduledbackuplist)
+- [ScheduledBackupSpec](#scheduledbackupspec)
+- [ScheduledBackupStatus](#scheduledbackupstatus)
+
+## Backup
+
+Backup is the Schema for the backups API
+
+| Field | Description | Scheme | Required |
+| -------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------ | -------- |
+| metadata | | [metav1.ObjectMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#objectmeta-v1-meta) | false |
+| spec | Specification of the desired behavior of the backup. More info: | [BackupSpec](#backupspec) | false |
+| status | Most recently observed status of the backup. This data may not be up to date. Populated by the system. Read-only. More info: | [BackupStatus](#backupstatus) | false |
+
+## BackupList
+
+BackupList contains a list of Backup
+
+| Field | Description | Scheme | Required |
+| -------- | ------------------------------------------------------------------------------------------------------------------------------------ | -------------------------------------------------------------------------------------------------------- | -------- |
+| metadata | Standard list metadata. More info: | [metav1.ListMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#listmeta-v1-meta) | false |
+| items | List of backups | [][Backup](#backup) | true |
+
+## BackupSpec
+
+BackupSpec defines the desired state of Backup
+
+| Field | Description | Scheme | Required |
+| ------- | --------------------- | ---------------------------------------------------------------------------------------------------------------------------- | -------- |
+| cluster | The cluster to backup | [v1.LocalObjectReference](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#localobjectreference-v1-core) | false |
+
+## BackupStatus
+
+BackupStatus defines the observed state of Backup
+
+| Field | Description | Scheme | Required |
+| --------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------ | ------------------------------- | -------- |
+| s3Credentials | The credentials to use to upload data to S3 | [S3Credentials](#s3credentials) | true |
+| endpointURL | Endpoint to be used to upload data to the cloud, overriding the automatic endpoint discovery | string | false |
+| destinationPath | The path where to store the backup (i.e., s3://bucket/path/to/folder). This path, with different destination folders, will be used for WALs and for data | string | true |
+| serverName | The server name on S3, the cluster name is used if this parameter is omitted | string | false |
+| encryption      | Encryption method required by the S3 API | string | false |
+| backupId | The ID of the Barman backup | string | false |
+| phase | The last backup status | BackupPhase | false |
+| startedAt | When the backup was started | \*metav1.Time | false |
+| stoppedAt | When the backup was terminated | \*metav1.Time | false |
+| error | The detected error | string | false |
+| commandOutput | The backup command output | string | false |
+| commandError    | The backup command error output | string | false |
+
+## AffinityConfiguration
+
+AffinityConfiguration contains the info we need to create the affinity rules for Pods
+
+| Field | Description | Scheme | Required |
+| --------------------- | ----------------------------------------------------------------------------------------------- | ------ | -------- |
+| enablePodAntiAffinity | Should we enable anti-affinity or not? | bool | true |
+| topologyKey | TopologyKey to use for anti-affinity configuration. See k8s documentation for more info on that | string | true |
+
+## BackupConfiguration
+
+BackupConfiguration defines how backups of the cluster are taken. Currently the only supported backup method is barmanObjectStore. For details and examples refer to the Backup and Recovery section of the documentation
+
+| Field | Description | Scheme | Required |
+| ----------------- | ------------------------------------------------- | ------------------------------------------------------------------- | -------- |
+| barmanObjectStore | The configuration for the barman-cloud tool suite | \*[BarmanObjectStoreConfiguration](#barmanobjectstoreconfiguration) | false |
+
+## BarmanObjectStoreConfiguration
+
+BarmanObjectStoreConfiguration contains the backup configuration using Barman against an S3-compatible object storage
+
+| Field | Description | Scheme | Required |
+| --------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------- | -------- |
+| s3Credentials | The credentials to use to upload data to S3 | [S3Credentials](#s3credentials) | true |
+| endpointURL | Endpoint to be used to upload data to the cloud, overriding the automatic endpoint discovery | string | false |
+| destinationPath | The path where to store the backup (i.e., s3://bucket/path/to/folder). This path, with different destination folders, will be used for WALs and for data | string | true |
+| serverName | The server name on S3, the cluster name is used if this parameter is omitted | string | false |
+| wal | The configuration for the backup of the WAL stream. When not defined, WAL files will be stored uncompressed and may be unencrypted in the object store, according to the bucket default policy. | \*[WalBackupConfiguration](#walbackupconfiguration) | false |
+| data            | The configuration to be used to back up the data files. When not defined, base backup files will be stored uncompressed and may be unencrypted in the object store, according to the bucket default policy. | \*[DataBackupConfiguration](#databackupconfiguration) | false |
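+
+As a hedged illustration, a `barmanObjectStore` fragment of a `Cluster` spec assembled from the fields above (the bucket path, Secret name and keys are placeholders, not defaults from this reference):
+
+```yaml
+backup:
+  barmanObjectStore:
+    destinationPath: s3://bucket/path/to/folder
+    s3Credentials:
+      accessKeyId:
+        name: aws-creds          # placeholder Secret name
+        key: ACCESS_KEY_ID       # placeholder key within the Secret
+      secretAccessKey:
+        name: aws-creds
+        key: ACCESS_SECRET_KEY
+    wal:
+      compression: gzip          # optional, defaults to no compression
+```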
+
+## BootstrapConfiguration
+
+BootstrapConfiguration contains information about how to create the PostgreSQL cluster. Only a single bootstrap method can be defined among the supported ones. `initdb` will be used as the bootstrap method if left unspecified. Refer to the Bootstrap page of the documentation for more information.
+
+| Field | Description | Scheme | Required |
+| -------- | ----------------------------------- | ----------------------------------------- | -------- |
+| initdb | Bootstrap the cluster via initdb | \*[BootstrapInitDB](#bootstrapinitdb) | false |
+| recovery | Bootstrap the cluster from a backup | \*[BootstrapRecovery](#bootstraprecovery) | false |
+
+## BootstrapInitDB
+
+BootstrapInitDB is the configuration of the bootstrap process when initdb is used. Refer to the Bootstrap page of the documentation for more information.
+
+| Field | Description | Scheme | Required |
+| -------- | -------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------- | -------- |
+| database | Name of the database used by the application. Default: `app`. | string | true |
+| owner | Name of the owner of the database in the instance to be used by applications. Defaults to the value of the `database` key. | string | true |
+| secret   | Name of the secret containing the initial credentials for the owner of the user database. If empty, a new secret will be created from scratch | \*corev1.LocalObjectReference | false |
+| redwood  | Whether to enable or disable Redwood compatibility. Requires EPAS; for EPAS it defaults to true | \*bool | false |
+| options | The list of options that must be passed to initdb when creating the cluster | \[]string | false |
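+
+For illustration, a minimal `initdb` bootstrap fragment of a `Cluster` spec using the fields above (`app` is the documented default; the `options` entry is only an example):
+
+```yaml
+bootstrap:
+  initdb:
+    database: app
+    owner: app
+    options:
+      - "--encoding=UTF8"   # example initdb option
+```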
+
+## BootstrapRecovery
+
+BootstrapRecovery contains the configuration required to restore a backup with the specified name and, after changing the password to the one chosen for the superuser, use it to bootstrap a full cluster by cloning all the instances from the restored primary. Refer to the Bootstrap page of the documentation for more information.
+
+| Field | Description | Scheme | Required |
+| -------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------- | -------- |
+| backup | The backup we need to restore | corev1.LocalObjectReference | true |
+| recoveryTarget | By default the recovery will end as soon as a consistent state is reached: in this case, that means at the end of a backup. This option allows you to fine-tune the recovery process | \*[RecoveryTarget](#recoverytarget) | false |
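+
+As a sketch, a recovery bootstrap fragment referencing an existing `Backup` resource (the backup name is a placeholder):
+
+```yaml
+bootstrap:
+  recovery:
+    backup:
+      name: backup-example   # placeholder Backup resource name
+```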
+
+## Cluster
+
+Cluster is the Schema for the postgresql API
+
+| Field | Description | Scheme | Required |
+| -------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------ | -------- |
+| metadata | | [metav1.ObjectMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#objectmeta-v1-meta) | false |
+| spec | Specification of the desired behavior of the cluster. More info: | [ClusterSpec](#clusterspec) | false |
+| status | Most recently observed status of the cluster. This data may not be up to date. Populated by the system. Read-only. More info: | [ClusterStatus](#clusterstatus) | false |
+
+## ClusterList
+
+ClusterList contains a list of Cluster
+
+| Field | Description | Scheme | Required |
+| -------- | ------------------------------------------------------------------------------------------------------------------------------------ | -------------------------------------------------------------------------------------------------------- | -------- |
+| metadata | Standard list metadata. More info: | [metav1.ListMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#listmeta-v1-meta) | false |
+| items | List of clusters | [][Cluster](#cluster) | true |
+
+## ClusterSpec
+
+ClusterSpec defines the desired state of Cluster
+
+| Field | Description | Scheme | Required |
+| --------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------- | -------- |
+| description | Description of this PostgreSQL cluster | string | false |
+| imageName | Name of the container image | string | false |
+| postgresUID | The UID of the `postgres` user inside the image, defaults to `26` | int64 | false |
+| postgresGID | The GID of the `postgres` user inside the image, defaults to `26` | int64 | false |
+| instances | Number of instances required in the cluster | int32 | true |
+| minSyncReplicas | Minimum number of instances required in synchronous replication with the primary. Undefined or 0 allow writes to complete when no standby is available. | int32 | false |
+| maxSyncReplicas | The target value for the synchronous replication quorum, that can be decreased if the number of ready standbys is lower than this. Undefined or 0 disable synchronous replication. | int32 | false |
+| postgresql | Configuration of the PostgreSQL server | [PostgresConfiguration](#postgresconfiguration) | false |
+| bootstrap | Instructions to bootstrap this cluster | \*[BootstrapConfiguration](#bootstrapconfiguration) | false |
+| superuserSecret | The secret containing the superuser password. If not defined a new secret will be created with a randomly generated password | \*corev1.LocalObjectReference | false |
+| imagePullSecrets      | The list of pull secrets to be used to pull the images. If the license key contains a pull secret, that secret will be automatically included. | \[]corev1.LocalObjectReference | false |
+| storage | Configuration of the storage of the instances | [StorageConfiguration](#storageconfiguration) | false |
+| startDelay | The time in seconds that is allowed for a PostgreSQL instance to successfully start up (default 30) | int32 | false |
+| stopDelay | The time in seconds that is allowed for a PostgreSQL instance node to gracefully shutdown (default 30) | int32 | false |
+| affinity | Affinity/Anti-affinity rules for Pods | [AffinityConfiguration](#affinityconfiguration) | false |
+| resources | Resources requirements of every generated Pod. Please refer to for more information. | corev1.ResourceRequirements | false |
+| primaryUpdateStrategy | Strategy to follow to upgrade the primary server during a rolling update procedure, after all replicas have been successfully updated: it can be automated (`unsupervised` - default) or manual (`supervised`) | PrimaryUpdateStrategy | false |
+| backup | The configuration to be used for backups | \*[BackupConfiguration](#backupconfiguration) | false |
+| nodeMaintenanceWindow | Define a maintenance window for the Kubernetes nodes | \*[NodeMaintenanceWindow](#nodemaintenancewindow) | false |
+| licenseKey | The license key of the cluster. When empty, the cluster operates in trial mode and after the expiry date (default 30 days) the operator will cease any reconciliation attempt. For details, please refer to the license agreement that comes with the operator. | string | false |
+
+## ClusterStatus
+
+ClusterStatus defines the observed state of Cluster
+
+| Field | Description | Scheme | Required |
+| ------------------- | -------------------------------------------------------------------------------------------------- | ---------------------------- | -------- |
+| instances | Total number of instances in the cluster | int32 | false |
+| readyInstances | Total number of ready instances in the cluster | int32 | false |
+| instancesStatus | Instances status | map[utils.PodStatus][]string | false |
+| latestGeneratedNode | ID of the latest generated node (used to avoid node name clashing) | int32 | false |
+| currentPrimary | Current primary instance | string | false |
+| targetPrimary | Target primary instance, this is different from the previous one during a switchover or a failover | string | false |
+| pvcCount | How many PVCs have been created by this cluster | int32 | false |
+| jobCount | How many Jobs have been created by this cluster | int32 | false |
+| danglingPVC | List of all the PVCs created by this cluster and still available which are not attached to a Pod | \[]string | false |
+| licenseStatus | Status of the license | licensekey.Status | false |
+| writeService | Current write pod | string | false |
+| readService | Current list of read pods | string | false |
+| phase | Current phase of the cluster | string | false |
+| phaseReason | Reason for the current phase | string | false |
+
+## DataBackupConfiguration
+
+DataBackupConfiguration is the configuration of the backup of the data directory
+
+| Field | Description | Scheme | Required |
+| ------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------- | -------- |
+| compression | Compress a backup file (a tar file per tablespace) while streaming it to the object store. Available options are empty string (no compression, default), `gzip` or `bzip2`. | CompressionType | false |
+| encryption          | Whether to force the encryption of files (if the bucket is not already configured for that). Allowed options are empty string (use the bucket policy, default), `AES256` and `aws:kms` | EncryptionType | false |
+| immediateCheckpoint | Control whether the I/O workload for the backup initial checkpoint will be limited, according to the `checkpoint_completion_target` setting on the PostgreSQL server. If set to true, an immediate checkpoint will be used, meaning PostgreSQL will complete the checkpoint as soon as possible. `false` by default. | bool | false |
+| jobs | The number of parallel jobs to be used to upload the backup, defaults to 2 | \*int32 | false |
+
+## NodeMaintenanceWindow
+
+NodeMaintenanceWindow contains information that the operator will use while upgrading the underlying node.
+
+This option is only useful when using local storage, as the Pods can't be freely moved between nodes in that configuration.
+
+| Field | Description | Scheme | Required |
+| ---------- | ------------------------------------------------------------------------------------------ | ------ | -------- |
+| inProgress | Is there a node maintenance activity in progress? | bool | true |
+| reusePVC | Reuse the existing PVC (wait for the node to come up again) or not (recreate it elsewhere) | \*bool | true |
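+
+For example, an illustrative `Cluster` spec fragment declaring a maintenance window that keeps the existing PVCs while a node is drained:
+
+```yaml
+nodeMaintenanceWindow:
+  inProgress: true
+  reusePVC: true    # wait for the node to come back instead of recreating the PVC elsewhere
+```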
+
+## PostgresConfiguration
+
+PostgresConfiguration defines the PostgreSQL configuration
+
+| Field | Description | Scheme | Required |
+| ---------- | ----------------------------------------------------------------------------------------- | ----------------- | -------- |
+| parameters | PostgreSQL configuration options (postgresql.conf) | map[string]string | false |
+| pg_hba | PostgreSQL Host Based Authentication rules (lines to be appended to the pg_hba.conf file) | \[]string | false |
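+
+An illustrative `postgresql` fragment of a `Cluster` spec (the parameter value and `pg_hba` rule are examples, not recommendations):
+
+```yaml
+postgresql:
+  parameters:
+    max_connections: "100"          # goes into postgresql.conf
+  pg_hba:
+    - host all all 10.0.0.0/8 md5   # appended to pg_hba.conf
+```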
+
+## RecoveryTarget
+
+RecoveryTarget allows you to configure the point where the recovery process will stop. All the target options except TargetTLI are mutually exclusive.
+
+| Field | Description | Scheme | Required |
+| --------------- | ------------------------------------------------------------------------- | ------ | -------- |
+| targetTLI | The target timeline (\\"latest\\", \\"current\\" or a positive integer) | string | false |
+| targetXID | The target transaction ID | string | false |
+| targetName | The target name (to be previously created with `pg_create_restore_point`) | string | false |
+| targetLSN | The target LSN (Log Sequence Number) | string | false |
+| targetTime | The target time, in any unambiguous representation allowed by PostgreSQL | string | false |
+| targetImmediate | End recovery as soon as a consistent state is reached | \*bool | false |
+| exclusive | Set the target to be exclusive (defaults to true) | \*bool | false |
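+
+A hedged example of a point-in-time recovery target inside a recovery bootstrap (the backup name and timestamp are placeholders):
+
+```yaml
+bootstrap:
+  recovery:
+    backup:
+      name: backup-example
+    recoveryTarget:
+      targetTime: "2021-04-21 12:00:00.00000+00"
+```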
+
+## RollingUpdateStatus
+
+RollingUpdateStatus contains the information about an instance which is being updated
+
+| Field | Description | Scheme | Required |
+| --------- | ----------------------------------- | ----------- | -------- |
+| imageName | The image which we put into the Pod | string | true |
+| startedAt | When the update has been started | metav1.Time | false |
+
+## S3Credentials
+
+S3Credentials is the type for the credentials to be used to upload files to S3
+
+| Field | Description | Scheme | Required |
+| --------------- | -------------------------------------- | ------------------------ | -------- |
+| accessKeyId | The reference to the access key id | corev1.SecretKeySelector | true |
+| secretAccessKey | The reference to the secret access key | corev1.SecretKeySelector | true |
+
+## StorageConfiguration
+
+StorageConfiguration is the configuration of the storage of the PostgreSQL instances
+
+| Field | Description | Scheme | Required |
+| ------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ---------------------------------- | -------- |
+| storageClass | StorageClass to use for database data (`PGDATA`). Applied after evaluating the PVC template, if available. If not specified, generated PVCs will be satisfied by the default storage class | \*string | false |
+| size | Size of the storage. Required if not already specified in the PVC template. Changes to this field are automatically reapplied to the created PVCs. Size cannot be decreased. | string | true |
+| resizeInUseVolumes | Resize existing PVCs; defaults to true | \*bool | false |
+| pvcTemplate | Template to be used to generate the Persistent Volume Claim | \*corev1.PersistentVolumeClaimSpec | false |
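+
+An illustrative `storage` fragment of a `Cluster` spec (the class name and size are placeholders):
+
+```yaml
+storage:
+  storageClass: standard   # omit to fall back to the default storage class
+  size: 1Gi
+```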
+
+## WalBackupConfiguration
+
+WalBackupConfiguration is the configuration of the backup of the WAL stream
+
+| Field | Description | Scheme | Required |
+| ----------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------- | -------- |
+| compression | Compress a WAL file before sending it to the object store. Available options are empty string (no compression, default), `gzip` or `bzip2`. | CompressionType | false |
+| encryption  | Whether to force the encryption of files (if the bucket is not already configured for that). Allowed options are empty string (use the bucket policy, default), `AES256` and `aws:kms` | EncryptionType | false |
+
+## ScheduledBackup
+
+ScheduledBackup is the Schema for the scheduledbackups API
+
+| Field | Description | Scheme | Required |
+| -------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------ | -------- |
+| metadata | | [metav1.ObjectMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#objectmeta-v1-meta) | false |
+| spec | Specification of the desired behavior of the ScheduledBackup. More info: | [ScheduledBackupSpec](#scheduledbackupspec) | false |
+| status | Most recently observed status of the ScheduledBackup. This data may not be up to date. Populated by the system. Read-only. More info: | [ScheduledBackupStatus](#scheduledbackupstatus) | false |
+
+## ScheduledBackupList
+
+ScheduledBackupList contains a list of ScheduledBackup
+
+| Field | Description | Scheme | Required |
+| -------- | ------------------------------------------------------------------------------------------------------------------------------------ | -------------------------------------------------------------------------------------------------------- | -------- |
+| metadata | Standard list metadata. More info: | [metav1.ListMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#listmeta-v1-meta) | false |
+| items | List of clusters | [][ScheduledBackup](#scheduledbackup) | true |
+
+## ScheduledBackupSpec
+
+ScheduledBackupSpec defines the desired state of ScheduledBackup
+
+| Field | Description | Scheme | Required |
+| -------- | ---------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------- | -------- |
+| suspend  | If this backup is suspended or not | \*bool | false |
+| schedule | The schedule in Cron format, see . | string | true |
+| cluster | The cluster to backup | [v1.LocalObjectReference](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#localobjectreference-v1-core) | false |
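+
+A sketch of a complete `ScheduledBackup` manifest built from the fields above (the resource names and the cron expression are placeholders):
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1alpha1
+kind: ScheduledBackup
+metadata:
+  name: backup-example
+spec:
+  schedule: "0 0 0 * * *"   # placeholder cron expression
+  cluster:
+    name: cluster-example   # placeholder Cluster name
+```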
+
+## ScheduledBackupStatus
+
+ScheduledBackupStatus defines the observed state of ScheduledBackup
+
+| Field | Description | Scheme | Required |
+| ---------------- | -------------------------------------------------------------------------- | ------------- | -------- |
+| lastCheckTime    | The latest time the schedule was checked | \*metav1.Time | false |
+| lastScheduleTime | Information when was the last time that backup was successfully scheduled. | \*metav1.Time | false |
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v0.8.0.mdx b/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v0.8.0.mdx
new file mode 100644
index 0000000000..bedb31ad45
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v0.8.0.mdx
@@ -0,0 +1,347 @@
+---
+title: 'API Reference - v0.8.0'
+navTitle: v0.8.0
+pdfExclude: true
+---
+
+Cloud Native PostgreSQL extends the Kubernetes API by defining the following
+custom resources:
+
+- [Backup](#backup)
+- [Cluster](#cluster)
+- [ScheduledBackup](#scheduledbackup)
+
+All the resources are defined in the `postgresql.k8s.enterprisedb.io/v1`
+API.
+
+Please refer to the ["Configuration Samples" page](../samples.md) of the
+documentation for examples of usage.
+
+Below you will find a description of the defined resources:
+
+
+
+
+
+- [Backup](#backup)
+- [BackupList](#backuplist)
+- [BackupSpec](#backupspec)
+- [BackupStatus](#backupstatus)
+- [AffinityConfiguration](#affinityconfiguration)
+- [BackupConfiguration](#backupconfiguration)
+- [BarmanObjectStoreConfiguration](#barmanobjectstoreconfiguration)
+- [BootstrapConfiguration](#bootstrapconfiguration)
+- [BootstrapInitDB](#bootstrapinitdb)
+- [BootstrapRecovery](#bootstraprecovery)
+- [Cluster](#cluster)
+- [ClusterList](#clusterlist)
+- [ClusterSpec](#clusterspec)
+- [ClusterStatus](#clusterstatus)
+- [DataBackupConfiguration](#databackupconfiguration)
+- [NodeMaintenanceWindow](#nodemaintenancewindow)
+- [PostgresConfiguration](#postgresconfiguration)
+- [RecoveryTarget](#recoverytarget)
+- [RollingUpdateStatus](#rollingupdatestatus)
+- [S3Credentials](#s3credentials)
+- [StorageConfiguration](#storageconfiguration)
+- [WalBackupConfiguration](#walbackupconfiguration)
+- [ScheduledBackup](#scheduledbackup)
+- [ScheduledBackupList](#scheduledbackuplist)
+- [ScheduledBackupSpec](#scheduledbackupspec)
+- [ScheduledBackupStatus](#scheduledbackupstatus)
+
+## Backup
+
+Backup is the Schema for the backups API
+
+| Field | Description | Scheme | Required |
+| -------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------ | -------- |
+| metadata | | [metav1.ObjectMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#objectmeta-v1-meta) | false |
+| spec | Specification of the desired behavior of the backup. More info: | [BackupSpec](#backupspec) | false |
+| status | Most recently observed status of the backup. This data may not be up to date. Populated by the system. Read-only. More info: | [BackupStatus](#backupstatus) | false |
+
+## BackupList
+
+BackupList contains a list of Backup
+
+| Field | Description | Scheme | Required |
+| -------- | ------------------------------------------------------------------------------------------------------------------------------------ | -------------------------------------------------------------------------------------------------------- | -------- |
+| metadata | Standard list metadata. More info: | [metav1.ListMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#listmeta-v1-meta) | false |
+| items | List of backups | [][Backup](#backup) | true |
+
+## BackupSpec
+
+BackupSpec defines the desired state of Backup
+
+| Field | Description | Scheme | Required |
+| ------- | --------------------- | ---------------------------------------------------------------------------------------------------------------------------- | -------- |
+| cluster | The cluster to backup | [v1.LocalObjectReference](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#localobjectreference-v1-core) | false |
+
+## BackupStatus
+
+BackupStatus defines the observed state of Backup
+
+| Field | Description | Scheme | Required |
+| --------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------ | ------------------------------- | -------- |
+| s3Credentials | The credentials to use to upload data to S3 | [S3Credentials](#s3credentials) | true |
+| endpointURL | Endpoint to be used to upload data to the cloud, overriding the automatic endpoint discovery | string | false |
+| destinationPath | The path where to store the backup (e.g. s3://bucket/path/to/folder); this path, with different destination folders, will be used for WALs and for data | string | true |
+| serverName | The server name on S3, the cluster name is used if this parameter is omitted | string | false |
+| encryption | Encryption method required by the S3 API | string | false |
+| backupId | The ID of the Barman backup | string | false |
+| phase | The last backup status | BackupPhase | false |
+| startedAt | When the backup was started | \*metav1.Time | false |
+| stoppedAt | When the backup was terminated | \*metav1.Time | false |
+| error | The detected error | string | false |
+| commandOutput | The backup command output | string | false |
+| commandError | The backup command error output | string | false |
+
+## AffinityConfiguration
+
+AffinityConfiguration contains the info we need to create the affinity rules for Pods
+
+| Field | Description | Scheme | Required |
+| --------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------- | -------- |
+| enablePodAntiAffinity | Activates anti-affinity for the pods. The operator will define pod anti-affinity unless this field is explicitly set to false | \*bool | false |
+| topologyKey | TopologyKey to use for anti-affinity configuration. See the Kubernetes documentation for more information | string | true |
+| nodeSelector | NodeSelector is a map of key-value pairs used to define the nodes on which the pods can run. More info: | map[string]string | false |
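+
+For instance, these fields can be combined in the cluster spec as follows
+(the node label used in `nodeSelector` is a hypothetical example):
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+  name: cluster-example
+spec:
+  instances: 3
+  affinity:
+    # Spread instances across nodes (at most one per hostname)
+    enablePodAntiAffinity: true
+    topologyKey: kubernetes.io/hostname
+    # Only schedule on nodes carrying this (hypothetical) label
+    nodeSelector:
+      workload: postgres
+```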
+
+## BackupConfiguration
+
+BackupConfiguration defines how backups of the cluster are taken. Currently the only supported backup method is barmanObjectStore. For details and examples, refer to the Backup and Recovery section of the documentation
+
+| Field | Description | Scheme | Required |
+| ----------------- | ------------------------------------------------- | ------------------------------------------------------------------- | -------- |
+| barmanObjectStore | The configuration for the barman-cloud tool suite | \*[BarmanObjectStoreConfiguration](#barmanobjectstoreconfiguration) | false |
+
+## BarmanObjectStoreConfiguration
+
+BarmanObjectStoreConfiguration contains the backup configuration using Barman against an S3-compatible object storage
+
+| Field | Description | Scheme | Required |
+| --------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------- | -------- |
+| s3Credentials | The credentials to use to upload data to S3 | [S3Credentials](#s3credentials) | true |
+| endpointURL | Endpoint to be used to upload data to the cloud, overriding the automatic endpoint discovery | string | false |
+| destinationPath | The path where to store the backup (e.g. s3://bucket/path/to/folder); this path, with different destination folders, will be used for WALs and for data | string | true |
+| serverName | The server name on S3, the cluster name is used if this parameter is omitted | string | false |
+| wal | The configuration for the backup of the WAL stream. When not defined, WAL files will be stored uncompressed and may be unencrypted in the object store, according to the bucket default policy. | \*[WalBackupConfiguration](#walbackupconfiguration) | false |
+| data | The configuration to be used to back up the data files. When not defined, base backup files will be stored uncompressed and may be unencrypted in the object store, according to the bucket default policy. | \*[DataBackupConfiguration](#databackupconfiguration) | false |
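+
+As a sketch, these options map to the cluster manifest as follows (the bucket
+path, Secret name, and Secret keys are placeholders):
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+  name: cluster-example
+spec:
+  instances: 3
+  backup:
+    barmanObjectStore:
+      destinationPath: "s3://backups/cluster-example"
+      s3Credentials:
+        accessKeyId:
+          name: aws-creds        # hypothetical Secret name
+          key: ACCESS_KEY_ID
+        secretAccessKey:
+          name: aws-creds
+          key: ACCESS_SECRET_KEY
+      # Compress both WAL files and base backup files before upload
+      wal:
+        compression: gzip
+      data:
+        compression: gzip
+        immediateCheckpoint: true
+```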
+
+## BootstrapConfiguration
+
+BootstrapConfiguration contains information about how to create the PostgreSQL cluster. Only a single bootstrap method can be defined among the supported ones. `initdb` will be used as the bootstrap method if left unspecified. Refer to the Bootstrap page of the documentation for more information.
+
+| Field | Description | Scheme | Required |
+| -------- | ----------------------------------- | ----------------------------------------- | -------- |
+| initdb | Bootstrap the cluster via initdb | \*[BootstrapInitDB](#bootstrapinitdb) | false |
+| recovery | Bootstrap the cluster from a backup | \*[BootstrapRecovery](#bootstraprecovery) | false |
+
+## BootstrapInitDB
+
+BootstrapInitDB is the configuration of the bootstrap process when initdb is used. Refer to the Bootstrap page of the documentation for more information.
+
+| Field | Description | Scheme | Required |
+| -------- | -------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------- | -------- |
+| database | Name of the database used by the application. Default: `app`. | string | true |
+| owner | Name of the owner of the database in the instance to be used by applications. Defaults to the value of the `database` key. | string | true |
+| secret | Name of the secret containing the initial credentials for the owner of the user database. If empty a new secret will be created from scratch | \*corev1.LocalObjectReference | false |
+| redwood | Whether to enable or disable Redwood compatibility. Requires EPAS; for EPAS it defaults to true | \*bool | false |
+| options | The list of options that must be passed to initdb when creating the cluster | \[]string | false |
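+
+A minimal `initdb` bootstrap under the Cluster resource, assuming a
+pre-existing Secret (the secret name is a placeholder):
+
+```yaml
+spec:
+  bootstrap:
+    initdb:
+      database: app
+      owner: app
+      # Optional: initial credentials for the owner; if omitted,
+      # a new secret is created from scratch
+      secret:
+        name: app-user-secret
+```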
+
+## BootstrapRecovery
+
+BootstrapRecovery contains the configuration required to restore a backup with the specified name. After changing the password to the one chosen for the superuser, the operator will use the restored data to bootstrap a full cluster, cloning all the instances from the restored primary. Refer to the Bootstrap page of the documentation for more information.
+
+| Field | Description | Scheme | Required |
+| -------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------- | -------- |
+| backup | The backup we need to restore | corev1.LocalObjectReference | true |
+| recoveryTarget | By default, the recovery will end as soon as a consistent state is reached, meaning at the end of a backup. This option allows you to fine-tune the recovery process | \*[RecoveryTarget](#recoverytarget) | false |
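+
+A sketch of a `recovery` bootstrap referencing an existing Backup resource
+(the backup name and target time are placeholders):
+
+```yaml
+spec:
+  bootstrap:
+    recovery:
+      backup:
+        name: backup-example
+      recoveryTarget:
+        # Stop replaying WAL at this point in time
+        targetTime: "2020-11-26 15:22:00.00000+00"
+```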
+
+## Cluster
+
+Cluster is the Schema for the PostgreSQL API
+
+| Field | Description | Scheme | Required |
+| -------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------ | -------- |
+| metadata | | [metav1.ObjectMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#objectmeta-v1-meta) | false |
+| spec | Specification of the desired behavior of the cluster. More info: | [ClusterSpec](#clusterspec) | false |
+| status | Most recently observed status of the cluster. This data may not be up to date. Populated by the system. Read-only. More info: | [ClusterStatus](#clusterstatus) | false |
+
+## ClusterList
+
+ClusterList contains a list of Cluster
+
+| Field | Description | Scheme | Required |
+| -------- | ------------------------------------------------------------------------------------------------------------------------------------ | -------------------------------------------------------------------------------------------------------- | -------- |
+| metadata | Standard list metadata. More info: | [metav1.ListMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#listmeta-v1-meta) | false |
+| items | List of clusters | [][Cluster](#cluster) | true |
+
+## ClusterSpec
+
+ClusterSpec defines the desired state of Cluster
+
+| Field | Description | Scheme | Required |
+| --------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------- | -------- |
+| description | Description of this PostgreSQL cluster | string | false |
+| imageName | Name of the container image | string | false |
+| postgresUID | The UID of the `postgres` user inside the image, defaults to `26` | int64 | false |
+| postgresGID | The GID of the `postgres` user inside the image, defaults to `26` | int64 | false |
+| instances | Number of instances required in the cluster | int32 | true |
+| minSyncReplicas | Minimum number of instances required in synchronous replication with the primary. Undefined or 0 allow writes to complete when no standby is available. | int32 | false |
+| maxSyncReplicas | The target value for the synchronous replication quorum, that can be decreased if the number of ready standbys is lower than this. Undefined or 0 disable synchronous replication. | int32 | false |
+| postgresql | Configuration of the PostgreSQL server | [PostgresConfiguration](#postgresconfiguration) | false |
+| bootstrap | Instructions to bootstrap this cluster | \*[BootstrapConfiguration](#bootstrapconfiguration) | false |
+| superuserSecret | The secret containing the superuser password. If not defined a new secret will be created with a randomly generated password | \*corev1.LocalObjectReference | false |
+| imagePullSecrets | The list of pull secrets to be used to pull the images. If the license key contains a pull secret that secret will be automatically included. | \[]corev1.LocalObjectReference | false |
+| storage | Configuration of the storage of the instances | [StorageConfiguration](#storageconfiguration) | false |
+| startDelay | The time in seconds that is allowed for a PostgreSQL instance to successfully start up (default 30) | int32 | false |
+| stopDelay | The time in seconds that is allowed for a PostgreSQL instance node to gracefully shut down (default 30) | int32 | false |
+| affinity | Affinity/Anti-affinity rules for Pods | [AffinityConfiguration](#affinityconfiguration) | false |
+| resources | Resources requirements of every generated Pod. Please refer to for more information. | corev1.ResourceRequirements | false |
+| primaryUpdateStrategy | Strategy to follow to upgrade the primary server during a rolling update procedure, after all replicas have been successfully updated: it can be automated (`unsupervised` - default) or manual (`supervised`) | PrimaryUpdateStrategy | false |
+| backup | The configuration to be used for backups | \*[BackupConfiguration](#backupconfiguration) | false |
+| nodeMaintenanceWindow | Define a maintenance window for the Kubernetes nodes | \*[NodeMaintenanceWindow](#nodemaintenancewindow) | false |
+| licenseKey | The license key of the cluster. When empty, the cluster operates in trial mode and after the expiry date (default 30 days) the operator will cease any reconciliation attempt. For details, please refer to the license agreement that comes with the operator. | string | false |
+
+## ClusterStatus
+
+ClusterStatus defines the observed state of Cluster
+
+| Field | Description | Scheme | Required |
+| ------------------- | -------------------------------------------------------------------------------------------------- | ---------------------------- | -------- |
+| instances | Total number of instances in the cluster | int32 | false |
+| readyInstances | Total number of ready instances in the cluster | int32 | false |
+| instancesStatus | Instances status | map[utils.PodStatus][]string | false |
+| latestGeneratedNode | ID of the latest generated node (used to avoid node name clashing) | int32 | false |
+| currentPrimary | Current primary instance | string | false |
+| targetPrimary | Target primary instance, this is different from the previous one during a switchover or a failover | string | false |
+| pvcCount | How many PVCs have been created by this cluster | int32 | false |
+| jobCount | How many Jobs have been created by this cluster | int32 | false |
+| danglingPVC | List of all the PVCs created by this cluster and still available which are not attached to a Pod | \[]string | false |
+| licenseStatus | Status of the license | licensekey.Status | false |
+| writeService | Current write pod | string | false |
+| readService | Current list of read pods | string | false |
+| phase | Current phase of the cluster | string | false |
+| phaseReason | Reason for the current phase | string | false |
+
+## DataBackupConfiguration
+
+DataBackupConfiguration is the configuration of the backup of the data directory
+
+| Field | Description | Scheme | Required |
+| ------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------- | -------- |
+| compression | Compress a backup file (a tar file per tablespace) while streaming it to the object store. Available options are empty string (no compression, default), `gzip` or `bzip2`. | CompressionType | false |
+| encryption | Whether to force the encryption of files (if the bucket is not already configured for that). Allowed options are empty string (use the bucket policy, default), `AES256` and `aws:kms` | EncryptionType | false |
+| immediateCheckpoint | Control whether the I/O workload for the backup initial checkpoint will be limited, according to the `checkpoint_completion_target` setting on the PostgreSQL server. If set to true, an immediate checkpoint will be used, meaning PostgreSQL will complete the checkpoint as soon as possible. `false` by default. | bool | false |
+| jobs | The number of parallel jobs to be used to upload the backup, defaults to 2 | \*int32 | false |
+
+## NodeMaintenanceWindow
+
+NodeMaintenanceWindow contains information that the operator will use while upgrading the underlying node.
+
+This option is only useful when the chosen storage prevents the Pods from being freely moved across nodes.
+
+| Field | Description | Scheme | Required |
+| ---------- | ------------------------------------------------------------------------------------------ | ------ | -------- |
+| inProgress | Is there a node maintenance activity in progress? | bool | true |
+| reusePVC | Reuse the existing PVC (wait for the node to come up again) or not (recreate it elsewhere) | \*bool | true |
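+
+For example, before draining a node you could declare the maintenance window
+as in progress and ask the operator to wait for the node to come back,
+reusing the existing PVC:
+
+```yaml
+spec:
+  nodeMaintenanceWindow:
+    inProgress: true
+    # Wait for the node to come up again instead of
+    # recreating the PVC elsewhere
+    reusePVC: true
+```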
+
+## PostgresConfiguration
+
+PostgresConfiguration defines the PostgreSQL configuration
+
+| Field | Description | Scheme | Required |
+| ---------- | ----------------------------------------------------------------------------------------- | ----------------- | -------- |
+| parameters | PostgreSQL configuration options (postgresql.conf) | map[string]string | false |
+| pg_hba | PostgreSQL Host Based Authentication rules (lines to be appended to the pg_hba.conf file) | \[]string | false |
+
+## RecoveryTarget
+
+RecoveryTarget allows you to configure the point at which the recovery process will stop. All the target options except TargetTLI are mutually exclusive.
+
+| Field | Description | Scheme | Required |
+| --------------- | ------------------------------------------------------------------------- | ------ | -------- |
+| targetTLI | The target timeline ("latest", "current" or a positive integer) | string | false |
+| targetXID | The target transaction ID | string | false |
+| targetName | The target name (to be previously created with `pg_create_restore_point`) | string | false |
+| targetLSN | The target LSN (Log Sequence Number) | string | false |
+| targetTime | The target time, in any unambiguous representation allowed by PostgreSQL | string | false |
+| targetImmediate | End recovery as soon as a consistent state is reached | \*bool | false |
+| exclusive | Set the target to be exclusive (defaults to true) | \*bool | false |
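+
+Since all options except `targetTLI` are mutually exclusive, a recovery
+target picks exactly one of them; for example, stopping at a named restore
+point (the restore point name is a placeholder):
+
+```yaml
+recoveryTarget:
+  # Restore point previously created with pg_create_restore_point
+  targetName: before-migration
+```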
+
+## RollingUpdateStatus
+
+RollingUpdateStatus contains the information about an instance which is being updated
+
+| Field | Description | Scheme | Required |
+| --------- | ----------------------------------- | ----------- | -------- |
+| imageName | The image which we put into the Pod | string | true |
+| startedAt | When the update has been started | metav1.Time | false |
+
+## S3Credentials
+
+S3Credentials is the type for the credentials to be used to upload files to S3
+
+| Field | Description | Scheme | Required |
+| --------------- | -------------------------------------- | ------------------------ | -------- |
+| accessKeyId | The reference to the access key id | corev1.SecretKeySelector | true |
+| secretAccessKey | The reference to the secret access key | corev1.SecretKeySelector | true |
+
+## StorageConfiguration
+
+StorageConfiguration is the configuration of the storage of the PostgreSQL instances
+
+| Field | Description | Scheme | Required |
+| ------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ---------------------------------- | -------- |
+| storageClass | StorageClass to use for database data (`PGDATA`). Applied after evaluating the PVC template, if available. If not specified, generated PVCs will be satisfied by the default storage class | \*string | false |
+| size | Size of the storage. Required if not already specified in the PVC template. Changes to this field are automatically reapplied to the created PVCs. Size cannot be decreased. | string | true |
+| resizeInUseVolumes | Resize existing PVCs, defaults to true | \*bool | false |
+| pvcTemplate | Template to be used to generate the Persistent Volume Claim | \*corev1.PersistentVolumeClaimSpec | false |
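+
+As an illustration (the storage class name is environment-specific):
+
+```yaml
+spec:
+  storage:
+    storageClass: standard
+    # Can later be increased, never decreased
+    size: 1Gi
+    resizeInUseVolumes: true
+```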
+
+## WalBackupConfiguration
+
+WalBackupConfiguration is the configuration of the backup of the WAL stream
+
+| Field | Description | Scheme | Required |
+| ----------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------- | -------- |
+| compression | Compress a WAL file before sending it to the object store. Available options are empty string (no compression, default), `gzip` or `bzip2`. | CompressionType | false |
+| encryption | Whether to force the encryption of files (if the bucket is not already configured for that). Allowed options are empty string (use the bucket policy, default), `AES256` and `aws:kms` | EncryptionType | false |
+
+## ScheduledBackup
+
+ScheduledBackup is the Schema for the scheduledbackups API
+
+| Field | Description | Scheme | Required |
+| -------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------ | -------- |
+| metadata | | [metav1.ObjectMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#objectmeta-v1-meta) | false |
+| spec | Specification of the desired behavior of the ScheduledBackup. More info: | [ScheduledBackupSpec](#scheduledbackupspec) | false |
+| status | Most recently observed status of the ScheduledBackup. This data may not be up to date. Populated by the system. Read-only. More info: | [ScheduledBackupStatus](#scheduledbackupstatus) | false |
+
+## ScheduledBackupList
+
+ScheduledBackupList contains a list of ScheduledBackup
+
+| Field | Description | Scheme | Required |
+| -------- | ------------------------------------------------------------------------------------------------------------------------------------ | -------------------------------------------------------------------------------------------------------- | -------- |
+| metadata | Standard list metadata. More info: | [metav1.ListMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#listmeta-v1-meta) | false |
+| items | List of scheduled backups | [][ScheduledBackup](#scheduledbackup) | true |
+
+## ScheduledBackupSpec
+
+ScheduledBackupSpec defines the desired state of ScheduledBackup
+
+| Field | Description | Scheme | Required |
+| -------- | ---------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------- | -------- |
+| suspend | If this backup is suspended or not | \*bool | false |
+| schedule | The schedule in Cron format, see . | string | true |
+| cluster | The cluster to backup | [v1.LocalObjectReference](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#localobjectreference-v1-core) | false |
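+
+For instance, a ScheduledBackup manifest using these fields might look like
+this (the resource name, schedule, and target cluster are illustrative, and
+the exact cron syntax accepted depends on the operator version):
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: ScheduledBackup
+metadata:
+  name: backup-example-nightly
+spec:
+  # Cron-format schedule; here, a nightly backup at midnight
+  schedule: "0 0 * * *"
+  suspend: false
+  cluster:
+    name: cluster-example
+```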
+
+## ScheduledBackupStatus
+
+ScheduledBackupStatus defines the observed state of ScheduledBackup
+
+| Field | Description | Scheme | Required |
+| ---------------- | -------------------------------------------------------------------------- | ------------- | -------- |
+| lastCheckTime | The latest time the schedule was checked | \*metav1.Time | false |
+| lastScheduleTime | Information when was the last time that backup was successfully scheduled. | \*metav1.Time | false |
+| nextScheduleTime | Next time we will run a backup | \*metav1.Time | false |
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.0.0.mdx b/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.0.0.mdx
new file mode 100644
index 0000000000..0288033898
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.0.0.mdx
@@ -0,0 +1,347 @@
+---
+title: 'API Reference - v1.0.0'
+navTitle: v1.0.0
+pdfExclude: true
+---
+
+Cloud Native PostgreSQL extends the Kubernetes API by defining the following
+custom resources:
+
+- [Backup](#backup)
+- [Cluster](#cluster)
+- [ScheduledBackup](#scheduledbackup)
+
+All the resources are defined in the `postgresql.k8s.enterprisedb.io/v1`
+API.
+
+Please refer to the ["Configuration Samples" page](../samples.md) of the
+documentation for examples of usage.
+
+Below you will find a description of the defined resources:
+
+
+
+
+
+- [Backup](#backup)
+- [BackupList](#backuplist)
+- [BackupSpec](#backupspec)
+- [BackupStatus](#backupstatus)
+- [AffinityConfiguration](#affinityconfiguration)
+- [BackupConfiguration](#backupconfiguration)
+- [BarmanObjectStoreConfiguration](#barmanobjectstoreconfiguration)
+- [BootstrapConfiguration](#bootstrapconfiguration)
+- [BootstrapInitDB](#bootstrapinitdb)
+- [BootstrapRecovery](#bootstraprecovery)
+- [Cluster](#cluster)
+- [ClusterList](#clusterlist)
+- [ClusterSpec](#clusterspec)
+- [ClusterStatus](#clusterstatus)
+- [DataBackupConfiguration](#databackupconfiguration)
+- [NodeMaintenanceWindow](#nodemaintenancewindow)
+- [PostgresConfiguration](#postgresconfiguration)
+- [RecoveryTarget](#recoverytarget)
+- [RollingUpdateStatus](#rollingupdatestatus)
+- [S3Credentials](#s3credentials)
+- [StorageConfiguration](#storageconfiguration)
+- [WalBackupConfiguration](#walbackupconfiguration)
+- [ScheduledBackup](#scheduledbackup)
+- [ScheduledBackupList](#scheduledbackuplist)
+- [ScheduledBackupSpec](#scheduledbackupspec)
+- [ScheduledBackupStatus](#scheduledbackupstatus)
+
+## Backup
+
+Backup is the Schema for the backups API
+
+| Field | Description | Scheme | Required |
+| -------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------ | -------- |
+| metadata | | [metav1.ObjectMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#objectmeta-v1-meta) | false |
+| spec | Specification of the desired behavior of the backup. More info: | [BackupSpec](#backupspec) | false |
+| status | Most recently observed status of the backup. This data may not be up to date. Populated by the system. Read-only. More info: | [BackupStatus](#backupstatus) | false |
+
+## BackupList
+
+BackupList contains a list of Backup
+
+| Field | Description | Scheme | Required |
+| -------- | ------------------------------------------------------------------------------------------------------------------------------------ | -------------------------------------------------------------------------------------------------------- | -------- |
+| metadata | Standard list metadata. More info: | [metav1.ListMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#listmeta-v1-meta) | false |
+| items | List of backups | [][Backup](#backup) | true |
+
+## BackupSpec
+
+BackupSpec defines the desired state of Backup
+
+| Field | Description | Scheme | Required |
+| ------- | --------------------- | ---------------------------------------------------------------------------------------------------------------------------- | -------- |
+| cluster | The cluster to backup | [v1.LocalObjectReference](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#localobjectreference-v1-core) | false |
+
+## BackupStatus
+
+BackupStatus defines the observed state of Backup
+
+| Field | Description | Scheme | Required |
+| --------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------ | ------------------------------- | -------- |
+| s3Credentials | The credentials to use to upload data to S3 | [S3Credentials](#s3credentials) | true |
+| endpointURL | Endpoint to be used to upload data to the cloud, overriding the automatic endpoint discovery | string | false |
+| destinationPath | The path where to store the backup (e.g. s3://bucket/path/to/folder); this path, with different destination folders, will be used for WALs and for data | string | true |
+| serverName | The server name on S3, the cluster name is used if this parameter is omitted | string | false |
+| encryption | Encryption method required by the S3 API | string | false |
+| backupId | The ID of the Barman backup | string | false |
+| phase | The last backup status | BackupPhase | false |
+| startedAt | When the backup was started | \*metav1.Time | false |
+| stoppedAt | When the backup was terminated | \*metav1.Time | false |
+| error | The detected error | string | false |
+| commandOutput | The backup command output | string | false |
+| commandError | The backup command error output | string | false |
+
+## AffinityConfiguration
+
+AffinityConfiguration contains the info we need to create the affinity rules for Pods
+
+| Field | Description | Scheme | Required |
+| --------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------- | -------- |
+| enablePodAntiAffinity | Activates anti-affinity for the pods. The operator will define pod anti-affinity unless this field is explicitly set to false | \*bool | false |
+| topologyKey | TopologyKey to use for anti-affinity configuration. See the Kubernetes documentation for more information | string | true |
+| nodeSelector | NodeSelector is a map of key-value pairs used to define the nodes on which the pods can run. More info: | map[string]string | false |
+
+## BackupConfiguration
+
+BackupConfiguration defines how backups of the cluster are taken. Currently the only supported backup method is barmanObjectStore. For details and examples, refer to the Backup and Recovery section of the documentation
+
+| Field | Description | Scheme | Required |
+| ----------------- | ------------------------------------------------- | ------------------------------------------------------------------- | -------- |
+| barmanObjectStore | The configuration for the barman-cloud tool suite | \*[BarmanObjectStoreConfiguration](#barmanobjectstoreconfiguration) | false |
+
+## BarmanObjectStoreConfiguration
+
+BarmanObjectStoreConfiguration contains the backup configuration using Barman against an S3-compatible object storage
+
+| Field | Description | Scheme | Required |
+| --------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------- | -------- |
+| s3Credentials | The credentials to use to upload data to S3 | [S3Credentials](#s3credentials) | true |
+| endpointURL | Endpoint to be used to upload data to the cloud, overriding the automatic endpoint discovery | string | false |
+| destinationPath | The path where to store the backup (i.e. `s3://bucket/path/to/folder`); this path, with different destination folders, will be used for WALs and for data | string | true |
+| serverName | The server name on S3, the cluster name is used if this parameter is omitted | string | false |
+| wal | The configuration for the backup of the WAL stream. When not defined, WAL files will be stored uncompressed and may be unencrypted in the object store, according to the bucket default policy. | \*[WalBackupConfiguration](#walbackupconfiguration) | false |
+| data | The configuration to be used to backup the data files. When not defined, base backup files will be stored uncompressed and may be unencrypted in the object store, according to the bucket default policy. | \*[DataBackupConfiguration](#databackupconfiguration) | false |
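+
+As a sketch, a `barmanObjectStore` configuration inside a Cluster spec might
+look like the following (the bucket path, endpoint, and secret names are
+illustrative assumptions, not defaults):
+
+```yaml
+backup:
+  barmanObjectStore:
+    destinationPath: "s3://backups/cluster-example"   # placeholder bucket/path
+    endpointURL: "https://s3.example.com"             # placeholder endpoint
+    s3Credentials:
+      accessKeyId:
+        name: aws-creds          # assumed Secret name
+        key: ACCESS_KEY_ID
+      secretAccessKey:
+        name: aws-creds
+        key: ACCESS_SECRET_KEY
+    wal:
+      compression: gzip
+    data:
+      compression: gzip
+      immediateCheckpoint: true
+```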
+
+## BootstrapConfiguration
+
+BootstrapConfiguration contains information about how to create the PostgreSQL cluster. Only a single bootstrap method can be defined among the supported ones. `initdb` will be used as the bootstrap method if left unspecified. Refer to the Bootstrap page of the documentation for more information.
+
+| Field | Description | Scheme | Required |
+| -------- | ----------------------------------- | ----------------------------------------- | -------- |
+| initdb | Bootstrap the cluster via initdb | \*[BootstrapInitDB](#bootstrapinitdb) | false |
+| recovery | Bootstrap the cluster from a backup | \*[BootstrapRecovery](#bootstraprecovery) | false |
+
+## BootstrapInitDB
+
+BootstrapInitDB is the configuration of the bootstrap process when initdb is used. Refer to the Bootstrap page of the documentation for more information.
+
+| Field | Description | Scheme | Required |
+| -------- | -------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------- | -------- |
+| database | Name of the database used by the application. Default: `app`. | string | true |
+| owner | Name of the owner of the database in the instance to be used by applications. Defaults to the value of the `database` key. | string | true |
+| secret | Name of the secret containing the initial credentials for the owner of the user database. If empty a new secret will be created from scratch | \*corev1.LocalObjectReference | false |
+| redwood | If we need to enable/disable Redwood compatibility. Requires EPAS and for EPAS defaults to true | \*bool | false |
+| options | The list of options that must be passed to initdb when creating the cluster | \[]string | false |
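+
+For example, a minimal `initdb` bootstrap could be sketched as follows (the
+database and owner names are illustrative):
+
+```yaml
+bootstrap:
+  initdb:
+    database: app
+    owner: app
+    options:
+      - "--encoding=UTF8"
+```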
+
+## BootstrapRecovery
+
+BootstrapRecovery contains the configuration required to restore the backup with the specified name. After changing the password to the one chosen for the superuser, it bootstraps a full cluster by cloning all the instances from the restored primary. Refer to the Bootstrap page of the documentation for more information.
+
+| Field | Description | Scheme | Required |
+| -------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------- | -------- |
+| backup | The backup we need to restore | corev1.LocalObjectReference | true |
+| recoveryTarget | By default the recovery will end as soon as a consistent state is reached: in this case that means at the end of a backup. This option allows you to fine-tune the recovery process | \*[RecoveryTarget](#recoverytarget) | false |
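+
+A sketch of a recovery bootstrap, assuming a Backup resource named
+`backup-example` already exists (name and target time are placeholders):
+
+```yaml
+bootstrap:
+  recovery:
+    backup:
+      name: backup-example   # assumed existing Backup resource
+    recoveryTarget:
+      targetTime: "2021-04-21 12:00:00.000+00"
+```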
+
+## Cluster
+
+Cluster is the Schema for the PostgreSQL API
+
+| Field | Description | Scheme | Required |
+| -------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------ | -------- |
+| metadata | | [metav1.ObjectMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#objectmeta-v1-meta) | false |
+| spec | Specification of the desired behavior of the cluster. More info: | [ClusterSpec](#clusterspec) | false |
+| status | Most recently observed status of the cluster. This data may not be up to date. Populated by the system. Read-only. More info: | [ClusterStatus](#clusterstatus) | false |
+
+## ClusterList
+
+ClusterList contains a list of Cluster
+
+| Field | Description | Scheme | Required |
+| -------- | ------------------------------------------------------------------------------------------------------------------------------------ | -------------------------------------------------------------------------------------------------------- | -------- |
+| metadata | Standard list metadata. More info: | [metav1.ListMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#listmeta-v1-meta) | false |
+| items | List of clusters | [][Cluster](#cluster) | true |
+
+## ClusterSpec
+
+ClusterSpec defines the desired state of Cluster
+
+| Field | Description | Scheme | Required |
+| --------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------- | -------- |
+| description | Description of this PostgreSQL cluster | string | false |
+| imageName | Name of the container image | string | false |
+| postgresUID | The UID of the `postgres` user inside the image, defaults to `26` | int64 | false |
+| postgresGID | The GID of the `postgres` user inside the image, defaults to `26` | int64 | false |
+| instances | Number of instances required in the cluster | int32 | true |
+| minSyncReplicas | Minimum number of instances required in synchronous replication with the primary. Undefined or 0 allows writes to complete when no standby is available. | int32 | false |
+| maxSyncReplicas | The target value for the synchronous replication quorum, which can be decreased if the number of ready standbys is lower than this. Undefined or 0 disables synchronous replication. | int32 | false |
+| postgresql | Configuration of the PostgreSQL server | [PostgresConfiguration](#postgresconfiguration) | false |
+| bootstrap | Instructions to bootstrap this cluster | \*[BootstrapConfiguration](#bootstrapconfiguration) | false |
+| superuserSecret | The secret containing the superuser password. If not defined a new secret will be created with a randomly generated password | \*corev1.LocalObjectReference | false |
+| imagePullSecrets | The list of pull secrets to be used to pull the images. If the license key contains a pull secret that secret will be automatically included. | \[]corev1.LocalObjectReference | false |
+| storage | Configuration of the storage of the instances | [StorageConfiguration](#storageconfiguration) | false |
+| startDelay | The time in seconds that is allowed for a PostgreSQL instance to successfully start up (default 30) | int32 | false |
+| stopDelay | The time in seconds that is allowed for a PostgreSQL instance node to gracefully shutdown (default 30) | int32 | false |
+| affinity | Affinity/Anti-affinity rules for Pods | [AffinityConfiguration](#affinityconfiguration) | false |
+| resources | Resources requirements of every generated Pod. Please refer to for more information. | corev1.ResourceRequirements | false |
+| primaryUpdateStrategy | Strategy to follow to upgrade the primary server during a rolling update procedure, after all replicas have been successfully updated: it can be automated (`unsupervised` - default) or manual (`supervised`) | PrimaryUpdateStrategy | false |
+| backup | The configuration to be used for backups | \*[BackupConfiguration](#backupconfiguration) | false |
+| nodeMaintenanceWindow | Define a maintenance window for the Kubernetes nodes | \*[NodeMaintenanceWindow](#nodemaintenancewindow) | false |
+| licenseKey | The license key of the cluster. When empty, the cluster operates in trial mode and after the expiry date (default 30 days) the operator will cease any reconciliation attempt. For details, please refer to the license agreement that comes with the operator. | string | false |
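+
+Putting the required fields together, a minimal Cluster manifest might look
+like the following (the name and storage size are illustrative):
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+  name: cluster-example
+spec:
+  instances: 3
+  storage:
+    size: 1Gi
+```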
+
+## ClusterStatus
+
+ClusterStatus defines the observed state of Cluster
+
+| Field | Description | Scheme | Required |
+| ------------------- | -------------------------------------------------------------------------------------------------- | ---------------------------- | -------- |
+| instances | Total number of instances in the cluster | int32 | false |
+| readyInstances | Total number of ready instances in the cluster | int32 | false |
+| instancesStatus | Instances status | map[utils.PodStatus][]string | false |
+| latestGeneratedNode | ID of the latest generated node (used to avoid node name clashing) | int32 | false |
+| currentPrimary | Current primary instance | string | false |
+| targetPrimary | Target primary instance, this is different from the previous one during a switchover or a failover | string | false |
+| pvcCount | How many PVCs have been created by this cluster | int32 | false |
+| jobCount | How many Jobs have been created by this cluster | int32 | false |
+| danglingPVC | List of all the PVCs created by this cluster and still available which are not attached to a Pod | \[]string | false |
+| licenseStatus | Status of the license | licensekey.Status | false |
+| writeService | Current write pod | string | false |
+| readService | Current list of read pods | string | false |
+| phase | Current phase of the cluster | string | false |
+| phaseReason | Reason for the current phase | string | false |
+
+## DataBackupConfiguration
+
+DataBackupConfiguration is the configuration of the backup of the data directory
+
+| Field | Description | Scheme | Required |
+| ------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------- | -------- |
+| compression | Compress a backup file (a tar file per tablespace) while streaming it to the object store. Available options are empty string (no compression, default), `gzip` or `bzip2`. | CompressionType | false |
+| encryption | Whether to force the encryption of files (if the bucket is not already configured for that). Allowed options are empty string (use the bucket policy, default), `AES256` and `aws:kms` | EncryptionType | false |
+| immediateCheckpoint | Control whether the I/O workload for the backup initial checkpoint will be limited, according to the `checkpoint_completion_target` setting on the PostgreSQL server. If set to true, an immediate checkpoint will be used, meaning PostgreSQL will complete the checkpoint as soon as possible. `false` by default. | bool | false |
+| jobs | The number of parallel jobs to be used to upload the backup, defaults to 2 | \*int32 | false |
+
+## NodeMaintenanceWindow
+
+NodeMaintenanceWindow contains information that the operator will use while upgrading the underlying node.
+
+This option is only useful when the chosen storage prevents the Pods from being freely moved across nodes.
+
+| Field | Description | Scheme | Required |
+| ---------- | ------------------------------------------------------------------------------------------ | ------ | -------- |
+| inProgress | Is there a node maintenance activity in progress? | bool | true |
+| reusePVC | Reuse the existing PVC (wait for the node to come up again) or not (recreate it elsewhere) | \*bool | true |
+
+## PostgresConfiguration
+
+PostgresConfiguration defines the PostgreSQL configuration
+
+| Field | Description | Scheme | Required |
+| ---------- | ----------------------------------------------------------------------------------------- | ----------------- | -------- |
+| parameters | PostgreSQL configuration options (postgresql.conf) | map[string]string | false |
+| pg_hba | PostgreSQL Host Based Authentication rules (lines to be appended to the pg_hba.conf file) | \[]string | false |
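+
+For instance, both fields can be set as follows (the values are illustrative,
+not tuning recommendations):
+
+```yaml
+postgresql:
+  parameters:
+    shared_buffers: "256MB"
+    max_connections: "200"
+  pg_hba:
+    - host all all 10.0.0.0/8 md5
+```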
+
+## RecoveryTarget
+
+RecoveryTarget allows you to configure the point at which the recovery process will stop. All the target options except `targetTLI` are mutually exclusive.
+
+| Field | Description | Scheme | Required |
+| --------------- | ------------------------------------------------------------------------- | ------ | -------- |
+| targetTLI | The target timeline ("latest", "current", or a positive integer) | string | false |
+| targetXID | The target transaction ID | string | false |
+| targetName | The target name (to be previously created with `pg_create_restore_point`) | string | false |
+| targetLSN | The target LSN (Log Sequence Number) | string | false |
+| targetTime | The target time, in any unambiguous representation allowed by PostgreSQL | string | false |
+| targetImmediate | End recovery as soon as a consistent state is reached | \*bool | false |
+| exclusive | Set the target to be exclusive (defaults to true) | \*bool | false |
+
+## RollingUpdateStatus
+
+RollingUpdateStatus contains the information about an instance which is being updated
+
+| Field | Description | Scheme | Required |
+| --------- | ----------------------------------- | ----------- | -------- |
+| imageName | The image which we put into the Pod | string | true |
+| startedAt | When the update has been started | metav1.Time | false |
+
+## S3Credentials
+
+S3Credentials is the type for the credentials to be used to upload files to S3
+
+| Field | Description | Scheme | Required |
+| --------------- | -------------------------------------- | ------------------------ | -------- |
+| accessKeyId | The reference to the access key id | corev1.SecretKeySelector | true |
+| secretAccessKey | The reference to the secret access key | corev1.SecretKeySelector | true |
+
+## StorageConfiguration
+
+StorageConfiguration is the configuration of the storage of the PostgreSQL instances
+
+| Field | Description | Scheme | Required |
+| ------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ---------------------------------- | -------- |
+| storageClass | StorageClass to use for database data (`PGDATA`). Applied after evaluating the PVC template, if available. If not specified, generated PVCs will be satisfied by the default storage class | \*string | false |
+| size | Size of the storage. Required if not already specified in the PVC template. Changes to this field are automatically reapplied to the created PVCs. Size cannot be decreased. | string | true |
+| resizeInUseVolumes | Resize existing PVCs, defaults to true | \*bool | false |
+| pvcTemplate | Template to be used to generate the Persistent Volume Claim | \*corev1.PersistentVolumeClaimSpec | false |
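+
+A minimal storage section might look like this (the storage class name is an
+assumption about the target Kubernetes environment):
+
+```yaml
+storage:
+  storageClass: standard   # assumed storage class in your cluster
+  size: 10Gi
+  resizeInUseVolumes: true
+```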
+
+## WalBackupConfiguration
+
+WalBackupConfiguration is the configuration of the backup of the WAL stream
+
+| Field | Description | Scheme | Required |
+| ----------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------- | -------- |
+| compression | Compress a WAL file before sending it to the object store. Available options are empty string (no compression, default), `gzip` or `bzip2`. | CompressionType | false |
+| encryption | Whether to force the encryption of files (if the bucket is not already configured for that). Allowed options are empty string (use the bucket policy, default), `AES256` and `aws:kms` | EncryptionType | false |
+
+## ScheduledBackup
+
+ScheduledBackup is the Schema for the scheduledbackups API
+
+| Field | Description | Scheme | Required |
+| -------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------ | -------- |
+| metadata | | [metav1.ObjectMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#objectmeta-v1-meta) | false |
+| spec | Specification of the desired behavior of the ScheduledBackup. More info: | [ScheduledBackupSpec](#scheduledbackupspec) | false |
+| status | Most recently observed status of the ScheduledBackup. This data may not be up to date. Populated by the system. Read-only. More info: | [ScheduledBackupStatus](#scheduledbackupstatus) | false |
+
+## ScheduledBackupList
+
+ScheduledBackupList contains a list of ScheduledBackup
+
+| Field | Description | Scheme | Required |
+| -------- | ------------------------------------------------------------------------------------------------------------------------------------ | -------------------------------------------------------------------------------------------------------- | -------- |
+| metadata | Standard list metadata. More info: | [metav1.ListMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#listmeta-v1-meta) | false |
+| items | List of clusters | [][ScheduledBackup](#scheduledbackup) | true |
+
+## ScheduledBackupSpec
+
+ScheduledBackupSpec defines the desired state of ScheduledBackup
+
+| Field | Description | Scheme | Required |
+| -------- | ---------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------- | -------- |
+| suspend | If this backup is suspended or not | \*bool | false |
+| schedule | The schedule in Cron format, see . | string | true |
+| cluster | The cluster to backup | [v1.LocalObjectReference](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#localobjectreference-v1-core) | false |
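+
+A ScheduledBackup tying these fields together might look like the following
+(names are placeholders; the six-field schedule shown assumes the operator's
+cron parser includes a leading seconds field):
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: ScheduledBackup
+metadata:
+  name: backup-nightly
+spec:
+  schedule: "0 0 0 * * *"   # midnight, assuming seconds-first cron format
+  cluster:
+    name: cluster-example
+```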
+
+## ScheduledBackupStatus
+
+ScheduledBackupStatus defines the observed state of ScheduledBackup
+
+| Field | Description | Scheme | Required |
+| ---------------- | -------------------------------------------------------------------------- | ------------- | -------- |
+| lastCheckTime | The latest time the schedule was checked | \*metav1.Time | false |
+| lastScheduleTime | The last time a backup was successfully scheduled. | \*metav1.Time | false |
+| nextScheduleTime | Next time we will run a backup | \*metav1.Time | false |
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.1.0.mdx b/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.1.0.mdx
new file mode 100644
index 0000000000..ccca8df96b
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.1.0.mdx
@@ -0,0 +1,347 @@
+---
+title: 'API Reference - v1.1.0'
+navTitle: v1.1.0
+pdfExclude: true
+---
+
+Cloud Native PostgreSQL extends the Kubernetes API defining the following
+custom resources:
+
+- [Backup](#backup)
+- [Cluster](#cluster)
+- [ScheduledBackup](#scheduledbackup)
+
+All the resources are defined in the `postgresql.k8s.enterprisedb.io/v1`
+API.
+
+Please refer to the ["Configuration Samples" page](../samples.md) of the
+documentation for examples of usage.
+
+Below you will find a description of the defined resources:
+
+
+
+
+
+- [Backup](#backup)
+- [BackupList](#backuplist)
+- [BackupSpec](#backupspec)
+- [BackupStatus](#backupstatus)
+- [AffinityConfiguration](#affinityconfiguration)
+- [BackupConfiguration](#backupconfiguration)
+- [BarmanObjectStoreConfiguration](#barmanobjectstoreconfiguration)
+- [BootstrapConfiguration](#bootstrapconfiguration)
+- [BootstrapInitDB](#bootstrapinitdb)
+- [BootstrapRecovery](#bootstraprecovery)
+- [Cluster](#cluster)
+- [ClusterList](#clusterlist)
+- [ClusterSpec](#clusterspec)
+- [ClusterStatus](#clusterstatus)
+- [DataBackupConfiguration](#databackupconfiguration)
+- [NodeMaintenanceWindow](#nodemaintenancewindow)
+- [PostgresConfiguration](#postgresconfiguration)
+- [RecoveryTarget](#recoverytarget)
+- [RollingUpdateStatus](#rollingupdatestatus)
+- [S3Credentials](#s3credentials)
+- [StorageConfiguration](#storageconfiguration)
+- [WalBackupConfiguration](#walbackupconfiguration)
+- [ScheduledBackup](#scheduledbackup)
+- [ScheduledBackupList](#scheduledbackuplist)
+- [ScheduledBackupSpec](#scheduledbackupspec)
+- [ScheduledBackupStatus](#scheduledbackupstatus)
+
+## Backup
+
+Backup is the Schema for the backups API
+
+| Field | Description | Scheme | Required |
+| -------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------ | -------- |
+| metadata | | [metav1.ObjectMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#objectmeta-v1-meta) | false |
+| spec | Specification of the desired behavior of the backup. More info: | [BackupSpec](#backupspec) | false |
+| status | Most recently observed status of the backup. This data may not be up to date. Populated by the system. Read-only. More info: | [BackupStatus](#backupstatus) | false |
+
+## BackupList
+
+BackupList contains a list of Backup
+
+| Field | Description | Scheme | Required |
+| -------- | ------------------------------------------------------------------------------------------------------------------------------------ | -------------------------------------------------------------------------------------------------------- | -------- |
+| metadata | Standard list metadata. More info: | [metav1.ListMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#listmeta-v1-meta) | false |
+| items | List of backups | [][Backup](#backup) | true |
+
+## BackupSpec
+
+BackupSpec defines the desired state of Backup
+
+| Field | Description | Scheme | Required |
+| ------- | --------------------- | ---------------------------------------------------------------------------------------------------------------------------- | -------- |
+| cluster | The cluster to backup | [v1.LocalObjectReference](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#localobjectreference-v1-core) | false |
+
+## BackupStatus
+
+BackupStatus defines the observed state of Backup
+
+| Field | Description | Scheme | Required |
+| --------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------ | ------------------------------- | -------- |
+| s3Credentials | The credentials to use to upload data to S3 | [S3Credentials](#s3credentials) | true |
+| endpointURL | Endpoint to be used to upload data to the cloud, overriding the automatic endpoint discovery | string | false |
+| destinationPath | The path where to store the backup (i.e. `s3://bucket/path/to/folder`); this path, with different destination folders, will be used for WALs and for data | string | true |
+| serverName | The server name on S3, the cluster name is used if this parameter is omitted | string | false |
+| encryption | Encryption method required to S3 API | string | false |
+| backupId | The ID of the Barman backup | string | false |
+| phase | The last backup status | BackupPhase | false |
+| startedAt | When the backup was started | \*metav1.Time | false |
+| stoppedAt | When the backup was terminated | \*metav1.Time | false |
+| error | The detected error | string | false |
+| commandOutput | The backup command output | string | false |
+| commandError | The backup command error output | string | false |
+
+## AffinityConfiguration
+
+AffinityConfiguration contains the info we need to create the affinity rules for Pods
+
+| Field | Description | Scheme | Required |
+| --------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------- | -------- |
+| enablePodAntiAffinity | Activates anti-affinity for the pods. The operator will define pods anti-affinity unless this field is explicitly set to false | \*bool | false |
+| topologyKey | TopologyKey to use for anti-affinity configuration. See k8s documentation for more info on that | string | true |
+| nodeSelector | NodeSelector is a map of key-value pairs used to define the nodes on which the pods can run. More info: | map[string]string | false |
+
+## BackupConfiguration
+
+BackupConfiguration defines how the backups of the cluster are taken. Currently, the only supported backup method is `barmanObjectStore`. For details and examples, refer to the Backup and Recovery section of the documentation.
+
+| Field | Description | Scheme | Required |
+| ----------------- | ------------------------------------------------- | ------------------------------------------------------------------- | -------- |
+| barmanObjectStore | The configuration for the barman-cloud tool suite | \*[BarmanObjectStoreConfiguration](#barmanobjectstoreconfiguration) | false |
+
+## BarmanObjectStoreConfiguration
+
+BarmanObjectStoreConfiguration contains the backup configuration using Barman against an S3-compatible object storage
+
+| Field | Description | Scheme | Required |
+| --------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------- | -------- |
+| s3Credentials | The credentials to use to upload data to S3 | [S3Credentials](#s3credentials) | true |
+| endpointURL | Endpoint to be used to upload data to the cloud, overriding the automatic endpoint discovery | string | false |
+| destinationPath | The path where to store the backup (i.e. `s3://bucket/path/to/folder`); this path, with different destination folders, will be used for WALs and for data | string | true |
+| serverName | The server name on S3, the cluster name is used if this parameter is omitted | string | false |
+| wal | The configuration for the backup of the WAL stream. When not defined, WAL files will be stored uncompressed and may be unencrypted in the object store, according to the bucket default policy. | \*[WalBackupConfiguration](#walbackupconfiguration) | false |
+| data | The configuration to be used to backup the data files. When not defined, base backup files will be stored uncompressed and may be unencrypted in the object store, according to the bucket default policy. | \*[DataBackupConfiguration](#databackupconfiguration) | false |
+
+## BootstrapConfiguration
+
+BootstrapConfiguration contains information about how to create the PostgreSQL cluster. Only a single bootstrap method can be defined among the supported ones. `initdb` will be used as the bootstrap method if left unspecified. Refer to the Bootstrap page of the documentation for more information.
+
+| Field | Description | Scheme | Required |
+| -------- | ----------------------------------- | ----------------------------------------- | -------- |
+| initdb | Bootstrap the cluster via initdb | \*[BootstrapInitDB](#bootstrapinitdb) | false |
+| recovery | Bootstrap the cluster from a backup | \*[BootstrapRecovery](#bootstraprecovery) | false |
+
+## BootstrapInitDB
+
+BootstrapInitDB is the configuration of the bootstrap process when initdb is used. Refer to the Bootstrap page of the documentation for more information.
+
+| Field | Description | Scheme | Required |
+| -------- | -------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------- | -------- |
+| database | Name of the database used by the application. Default: `app`. | string | true |
+| owner | Name of the owner of the database in the instance to be used by applications. Defaults to the value of the `database` key. | string | true |
+| secret | Name of the secret containing the initial credentials for the owner of the user database. If empty a new secret will be created from scratch | \*corev1.LocalObjectReference | false |
+| redwood | If we need to enable/disable Redwood compatibility. Requires EPAS and for EPAS defaults to true | \*bool | false |
+| options | The list of options that must be passed to initdb when creating the cluster | \[]string | false |
+
+## BootstrapRecovery
+
+BootstrapRecovery contains the configuration required to restore the backup with the specified name. After changing the password to the one chosen for the superuser, it bootstraps a full cluster by cloning all the instances from the restored primary. Refer to the Bootstrap page of the documentation for more information.
+
+| Field | Description | Scheme | Required |
+| -------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------- | -------- |
+| backup | The backup we need to restore | corev1.LocalObjectReference | true |
+| recoveryTarget | By default the recovery will end as soon as a consistent state is reached: in this case that means at the end of a backup. This option allows you to fine-tune the recovery process | \*[RecoveryTarget](#recoverytarget) | false |
+
+## Cluster
+
+Cluster is the Schema for the PostgreSQL API
+
+| Field | Description | Scheme | Required |
+| -------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------ | -------- |
+| metadata | | [metav1.ObjectMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#objectmeta-v1-meta) | false |
+| spec | Specification of the desired behavior of the cluster. More info: | [ClusterSpec](#clusterspec) | false |
+| status | Most recently observed status of the cluster. This data may not be up to date. Populated by the system. Read-only. More info: | [ClusterStatus](#clusterstatus) | false |
+
+## ClusterList
+
+ClusterList contains a list of Cluster
+
+| Field | Description | Scheme | Required |
+| -------- | ------------------------------------------------------------------------------------------------------------------------------------ | -------------------------------------------------------------------------------------------------------- | -------- |
+| metadata | Standard list metadata. More info: | [metav1.ListMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#listmeta-v1-meta) | false |
+| items | List of clusters | [][Cluster](#cluster) | true |
+
+## ClusterSpec
+
+ClusterSpec defines the desired state of Cluster
+
+| Field | Description | Scheme | Required |
+| --------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------- | -------- |
+| description | Description of this PostgreSQL cluster | string | false |
+| imageName | Name of the container image | string | false |
+| postgresUID | The UID of the `postgres` user inside the image, defaults to `26` | int64 | false |
+| postgresGID | The GID of the `postgres` user inside the image, defaults to `26` | int64 | false |
+| instances | Number of instances required in the cluster | int32 | true |
+| minSyncReplicas | Minimum number of instances required in synchronous replication with the primary. Undefined or 0 allows writes to complete when no standby is available. | int32 | false |
+| maxSyncReplicas | The target value for the synchronous replication quorum, which can be decreased if the number of ready standbys is lower than this. Undefined or 0 disables synchronous replication. | int32 | false |
+| postgresql | Configuration of the PostgreSQL server | [PostgresConfiguration](#postgresconfiguration) | false |
+| bootstrap | Instructions to bootstrap this cluster | \*[BootstrapConfiguration](#bootstrapconfiguration) | false |
+| superuserSecret | The secret containing the superuser password. If not defined a new secret will be created with a randomly generated password | \*corev1.LocalObjectReference | false |
+| imagePullSecrets | The list of pull secrets to be used to pull the images. If the license key contains a pull secret that secret will be automatically included. | \[]corev1.LocalObjectReference | false |
+| storage | Configuration of the storage of the instances | [StorageConfiguration](#storageconfiguration) | false |
+| startDelay | The time in seconds that is allowed for a PostgreSQL instance to successfully start up (default 30) | int32 | false |
+| stopDelay | The time in seconds that is allowed for a PostgreSQL instance node to gracefully shutdown (default 30) | int32 | false |
+| affinity | Affinity/Anti-affinity rules for Pods | [AffinityConfiguration](#affinityconfiguration) | false |
+| resources | Resources requirements of every generated Pod. Please refer to for more information. | corev1.ResourceRequirements | false |
+| primaryUpdateStrategy | Strategy to follow to upgrade the primary server during a rolling update procedure, after all replicas have been successfully updated: it can be automated (`unsupervised` - default) or manual (`supervised`) | PrimaryUpdateStrategy | false |
+| backup | The configuration to be used for backups | \*[BackupConfiguration](#backupconfiguration) | false |
+| nodeMaintenanceWindow | Define a maintenance window for the Kubernetes nodes | \*[NodeMaintenanceWindow](#nodemaintenancewindow) | false |
+| licenseKey | The license key of the cluster. When empty, the cluster operates in trial mode and after the expiry date (default 30 days) the operator will cease any reconciliation attempt. For details, please refer to the license agreement that comes with the operator. | string | false |
+
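As an illustration of how these fields fit together, here is a minimal `Cluster` manifest; the metadata name, storage size, and secret name are placeholders, not defaults from this reference:

+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+  name: cluster-example        # hypothetical name
+spec:
+  instances: 3                 # required
+  description: "Example cluster"
+  storage:
+    size: 1Gi                  # required if not set in a PVC template
+  superuserSecret:
+    name: superuser-secret     # optional; generated if omitted
+```
+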
+## ClusterStatus
+
+ClusterStatus defines the observed state of Cluster
+
+| Field | Description | Scheme | Required |
+| ------------------- | -------------------------------------------------------------------------------------------------- | ---------------------------- | -------- |
+| instances | Total number of instances in the cluster | int32 | false |
+| readyInstances | Total number of ready instances in the cluster | int32 | false |
+| instancesStatus | Instances status | map[utils.PodStatus][]string | false |
+| latestGeneratedNode | ID of the latest generated node (used to avoid node name clashing) | int32 | false |
+| currentPrimary | Current primary instance | string | false |
+| targetPrimary | Target primary instance, this is different from the previous one during a switchover or a failover | string | false |
+| pvcCount | How many PVCs have been created by this cluster | int32 | false |
+| jobCount | How many Jobs have been created by this cluster | int32 | false |
+| danglingPVC | List of all the PVCs created by this cluster and still available which are not attached to a Pod | \[]string | false |
+| licenseStatus | Status of the license | licensekey.Status | false |
+| writeService | Current write pod | string | false |
+| readService | Current list of read pods | string | false |
+| phase | Current phase of the cluster | string | false |
+| phaseReason | Reason for the current phase | string | false |
+
+## DataBackupConfiguration
+
+DataBackupConfiguration is the configuration of the backup of the data directory
+
+| Field | Description | Scheme | Required |
+| ------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------- | -------- |
+| compression | Compress a backup file (a tar file per tablespace) while streaming it to the object store. Available options are empty string (no compression, default), `gzip` or `bzip2`. | CompressionType | false |
+| encryption | Whether to force the encryption of files (if the bucket is not already configured for that). Allowed options are empty string (use the bucket policy, default), `AES256` and `aws:kms` | EncryptionType | false |
+| immediateCheckpoint | Control whether the I/O workload for the backup initial checkpoint will be limited, according to the `checkpoint_completion_target` setting on the PostgreSQL server. If set to true, an immediate checkpoint will be used, meaning PostgreSQL will complete the checkpoint as soon as possible. `false` by default. | bool | false |
+| jobs | The number of parallel jobs to be used to upload the backup, defaults to 2 | \*int32 | false |
+
+## NodeMaintenanceWindow
+
+NodeMaintenanceWindow contains information that the operator will use while upgrading the underlying node.
+
+This option is only useful when the chosen storage prevents the Pods from being freely moved across nodes.
+
+| Field | Description | Scheme | Required |
+| ---------- | ------------------------------------------------------------------------------------------ | ------ | -------- |
+| inProgress | Is there a node maintenance activity in progress? | bool | true |
+| reusePVC | Reuse the existing PVC (wait for the node to come up again) or not (recreate it elsewhere) | \*bool | true |
+
+## PostgresConfiguration
+
+PostgresConfiguration defines the PostgreSQL configuration
+
+| Field | Description | Scheme | Required |
+| ---------- | ----------------------------------------------------------------------------------------- | ----------------- | -------- |
+| parameters | PostgreSQL configuration options (postgresql.conf) | map[string]string | false |
+| pg_hba | PostgreSQL Host Based Authentication rules (lines to be appended to the pg_hba.conf file) | \[]string | false |
+
+## RecoveryTarget
+
+RecoveryTarget allows you to configure the point at which the recovery process will stop. All the target options except TargetTLI are mutually exclusive.
+
+| Field | Description | Scheme | Required |
+| --------------- | ------------------------------------------------------------------------- | ------ | -------- |
+| targetTLI | The target timeline (`latest`, `current`, or a positive integer) | string | false |
+| targetXID | The target transaction ID | string | false |
+| targetName | The target name (to be previously created with `pg_create_restore_point`) | string | false |
+| targetLSN | The target LSN (Log Sequence Number) | string | false |
+| targetTime | The target time, in any unambiguous representation allowed by PostgreSQL | string | false |
+| targetImmediate | End recovery as soon as a consistent state is reached | \*bool | false |
+| exclusive | Set the target to be exclusive (defaults to true) | \*bool | false |
+
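Because the target options are mutually exclusive, a recovery target sets exactly one of them (plus, optionally, `targetTLI`). A sketch of a point-in-time recovery, with a hypothetical backup name and timestamp:

+```yaml
+spec:
+  bootstrap:
+    recovery:
+      backup:
+        name: backup-example                       # hypothetical backup name
+      recoveryTarget:
+        targetTime: "2021-04-21 12:00:00.00000+00" # any unambiguous PostgreSQL timestamp
+        exclusive: true                            # defaults to true
+```
+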
+## RollingUpdateStatus
+
+RollingUpdateStatus contains the information about an instance which is being updated
+
+| Field | Description | Scheme | Required |
+| --------- | ----------------------------------- | ----------- | -------- |
+| imageName | The image which we put into the Pod | string | true |
+| startedAt | When the update has been started | metav1.Time | false |
+
+## S3Credentials
+
+S3Credentials is the type for the credentials to be used to upload files to S3
+
+| Field | Description | Scheme | Required |
+| --------------- | -------------------------------------- | ------------------------ | -------- |
+| accessKeyId | The reference to the access key id | corev1.SecretKeySelector | true |
+| secretAccessKey | The reference to the secret access key | corev1.SecretKeySelector | true |
+
+## StorageConfiguration
+
+StorageConfiguration is the configuration of the storage of the PostgreSQL instances
+
+| Field | Description | Scheme | Required |
+| ------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ---------------------------------- | -------- |
+| storageClass | StorageClass to use for database data (`PGDATA`). Applied after evaluating the PVC template, if available. If not specified, generated PVCs will be satisfied by the default storage class | \*string | false |
+| size | Size of the storage. Required if not already specified in the PVC template. Changes to this field are automatically reapplied to the created PVCs. Size cannot be decreased. | string | true |
+| resizeInUseVolumes | Resize existent PVCs, defaults to true | \*bool | false |
+| pvcTemplate | Template to be used to generate the Persistent Volume Claim | \*corev1.PersistentVolumeClaimSpec | false |
+
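A sketch of the `storage` stanza inside a cluster spec; the storage class name is a placeholder and only `size` is required when no PVC template is given:

+```yaml
+spec:
+  storage:
+    storageClass: standard     # hypothetical class; default class used if omitted
+    size: 1Gi                  # can only be increased later, never decreased
+    resizeInUseVolumes: true   # default
+```
+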
+## WalBackupConfiguration
+
+WalBackupConfiguration is the configuration of the backup of the WAL stream
+
+| Field | Description | Scheme | Required |
+| ----------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------- | -------- |
+| compression | Compress a WAL file before sending it to the object store. Available options are empty string (no compression, default), `gzip` or `bzip2`. | CompressionType | false |
+| encryption | Whether to force the encryption of files (if the bucket is not already configured for that). Allowed options are empty string (use the bucket policy, default), `AES256` and `aws:kms` | EncryptionType | false |
+
+## ScheduledBackup
+
+ScheduledBackup is the Schema for the scheduledbackups API
+
+| Field | Description | Scheme | Required |
+| -------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------ | -------- |
+| metadata | | [metav1.ObjectMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#objectmeta-v1-meta) | false |
+| spec | Specification of the desired behavior of the ScheduledBackup. More info: | [ScheduledBackupSpec](#scheduledbackupspec) | false |
+| status | Most recently observed status of the ScheduledBackup. This data may not be up to date. Populated by the system. Read-only. More info: | [ScheduledBackupStatus](#scheduledbackupstatus) | false |
+
+## ScheduledBackupList
+
+ScheduledBackupList contains a list of ScheduledBackup
+
+| Field | Description | Scheme | Required |
+| -------- | ------------------------------------------------------------------------------------------------------------------------------------ | -------------------------------------------------------------------------------------------------------- | -------- |
+| metadata | Standard list metadata. More info: | [metav1.ListMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#listmeta-v1-meta) | false |
+| items | List of scheduled backups | [][ScheduledBackup](#scheduledbackup) | true |
+
+## ScheduledBackupSpec
+
+ScheduledBackupSpec defines the desired state of ScheduledBackup
+
+| Field | Description | Scheme | Required |
+| -------- | ---------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------- | -------- |
+| suspend | If this backup is suspended or not | \*bool | false |
+| schedule | The schedule in Cron format, see . | string | true |
+| cluster | The cluster to backup | [v1.LocalObjectReference](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#localobjectreference-v1-core) | false |
+
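A sketch of a `ScheduledBackup` manifest, assuming the operator's cron parser accepts a six-field expression with a leading seconds field (the names are placeholders):

+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: ScheduledBackup
+metadata:
+  name: backup-example          # hypothetical name
+spec:
+  schedule: "0 0 0 * * *"       # assumed six-field cron: daily at midnight
+  cluster:
+    name: cluster-example       # hypothetical cluster name
+  suspend: false
+```
+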
+## ScheduledBackupStatus
+
+ScheduledBackupStatus defines the observed state of ScheduledBackup
+
+| Field | Description | Scheme | Required |
+| ---------------- | -------------------------------------------------------------------------- | ------------- | -------- |
+| lastCheckTime | The latest time the schedule was checked | \*metav1.Time | false |
+| lastScheduleTime | The last time a backup was successfully scheduled | \*metav1.Time | false |
+| nextScheduleTime | Next time we will run a backup | \*metav1.Time | false |
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.10.0.mdx b/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.10.0.mdx
new file mode 100644
index 0000000000..65cde47b35
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.10.0.mdx
@@ -0,0 +1,788 @@
+---
+title: 'API Reference - v1.10.0'
+navTitle: v1.10.0
+pdfExclude: true
+---
+
+Cloud Native PostgreSQL extends the Kubernetes API defining the following
+custom resources:
+
+- [Backup](#backup)
+- [Cluster](#cluster)
+- [ScheduledBackup](#scheduledbackup)
+
+All the resources are defined in the `postgresql.k8s.enterprisedb.io/v1`
+API.
+
+Please refer to the ["Configuration Samples" page](../samples.md) of the
+documentation for examples of usage.
+
+Below you will find a description of the defined resources:
+
+
+
+- [AffinityConfiguration](#AffinityConfiguration)
+- [AzureCredentials](#AzureCredentials)
+- [Backup](#Backup)
+- [BackupConfiguration](#BackupConfiguration)
+- [BackupList](#BackupList)
+- [BackupSource](#BackupSource)
+- [BackupSpec](#BackupSpec)
+- [BackupStatus](#BackupStatus)
+- [BarmanObjectStoreConfiguration](#BarmanObjectStoreConfiguration)
+- [BootstrapConfiguration](#BootstrapConfiguration)
+- [BootstrapInitDB](#BootstrapInitDB)
+- [BootstrapPgBaseBackup](#BootstrapPgBaseBackup)
+- [BootstrapRecovery](#BootstrapRecovery)
+- [CertificatesConfiguration](#CertificatesConfiguration)
+- [CertificatesStatus](#CertificatesStatus)
+- [Cluster](#Cluster)
+- [ClusterList](#ClusterList)
+- [ClusterSpec](#ClusterSpec)
+- [ClusterStatus](#ClusterStatus)
+- [ConfigMapKeySelector](#ConfigMapKeySelector)
+- [ConfigMapResourceVersion](#ConfigMapResourceVersion)
+- [DataBackupConfiguration](#DataBackupConfiguration)
+- [EPASConfiguration](#EPASConfiguration)
+- [ExternalCluster](#ExternalCluster)
+- [InstanceID](#InstanceID)
+- [LocalObjectReference](#LocalObjectReference)
+- [MonitoringConfiguration](#MonitoringConfiguration)
+- [NodeMaintenanceWindow](#NodeMaintenanceWindow)
+- [PgBouncerIntegrationStatus](#PgBouncerIntegrationStatus)
+- [PgBouncerSecrets](#PgBouncerSecrets)
+- [PgBouncerSpec](#PgBouncerSpec)
+- [PodMeta](#PodMeta)
+- [PodTemplateSpec](#PodTemplateSpec)
+- [Pooler](#Pooler)
+- [PoolerIntegrations](#PoolerIntegrations)
+- [PoolerList](#PoolerList)
+- [PoolerSecrets](#PoolerSecrets)
+- [PoolerSpec](#PoolerSpec)
+- [PoolerStatus](#PoolerStatus)
+- [PostgresConfiguration](#PostgresConfiguration)
+- [RecoveryTarget](#RecoveryTarget)
+- [ReplicaClusterConfiguration](#ReplicaClusterConfiguration)
+- [RollingUpdateStatus](#RollingUpdateStatus)
+- [S3Credentials](#S3Credentials)
+- [ScheduledBackup](#ScheduledBackup)
+- [ScheduledBackupList](#ScheduledBackupList)
+- [ScheduledBackupSpec](#ScheduledBackupSpec)
+- [ScheduledBackupStatus](#ScheduledBackupStatus)
+- [SecretKeySelector](#SecretKeySelector)
+- [SecretVersion](#SecretVersion)
+- [SecretsResourceVersion](#SecretsResourceVersion)
+- [StorageConfiguration](#StorageConfiguration)
+- [WalBackupConfiguration](#WalBackupConfiguration)
+
+
+
+## AffinityConfiguration
+
+AffinityConfiguration contains the info we need to create the affinity rules for Pods
+
+| Name | Description | Type |
+| --------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------ |
+| `enablePodAntiAffinity ` | Activates anti-affinity for the pods. The operator will define pods anti-affinity unless this field is explicitly set to false | \*bool |
+| `topologyKey ` | TopologyKey to use for anti-affinity configuration. See k8s documentation for more info on that - *mandatory* | string |
+| `nodeSelector ` | NodeSelector is map of key-value pairs used to define the nodes on which the pods can run. More info: | map[string]string |
+| `tolerations ` | Tolerations is a list of Tolerations that should be set for all the pods, in order to allow them to run on tainted nodes. More info: | \[]corev1.Toleration |
+| `podAntiAffinityType ` | PodAntiAffinityType allows the user to decide whether pod anti-affinity between cluster instances has to be considered a strong requirement during scheduling or not. Allowed values are: "preferred" (default if empty) or "required". Setting it to "required" could lead to instances remaining pending until new kubernetes nodes are added if all the existing nodes don't match the required pod anti-affinity rule. More info: | string |
+| `additionalPodAntiAffinity` | AdditionalPodAntiAffinity allows specifying pod anti-affinity terms to be added to the ones generated by the operator if EnablePodAntiAffinity is set to true (default) or to be used exclusively if set to false. | \*corev1.PodAntiAffinity |
+| `additionalPodAffinity ` | AdditionalPodAffinity allows specifying pod affinity terms to be passed to all the cluster's pods. | \*corev1.PodAffinity |
+
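A sketch of the `affinity` stanza inside a cluster spec; the node label is a placeholder and the commented values show the documented defaults:

+```yaml
+spec:
+  affinity:
+    enablePodAntiAffinity: true            # default unless explicitly false
+    topologyKey: kubernetes.io/hostname    # spread instances across nodes
+    podAntiAffinityType: preferred         # default; "required" may leave pods pending
+    nodeSelector:
+      workload: database                   # hypothetical node label
+```
+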
+
+
+## AzureCredentials
+
+AzureCredentials is the type for the credentials to be used to upload files to Azure Blob Storage. The connection string contains all the information needed. If the connection string is not specified, we'll need the storage account name and also one (and only one) of:
+
+- storageKey
+- storageSasToken
+
+| Name | Description | Type |
+| ------------------ | --------------------------------------------------------------------------------- | ----------------------------------------- |
+| `connectionString` | The connection string to be used | [\*SecretKeySelector](#SecretKeySelector) |
+| `storageAccount ` | The storage account where to upload data | [\*SecretKeySelector](#SecretKeySelector) |
+| `storageKey ` | The storage account key to be used in conjunction with the storage account name | [\*SecretKeySelector](#SecretKeySelector) |
+| `storageSasToken ` | A shared-access-signature to be used in conjunction with the storage account name | [\*SecretKeySelector](#SecretKeySelector) |
+
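A sketch of Azure credentials under `barmanObjectStore`, using the connection-string variant; the secret name, key, and destination path are hypothetical:

+```yaml
+spec:
+  backup:
+    barmanObjectStore:
+      destinationPath: "https://account.blob.core.windows.net/container/path"  # hypothetical
+      azureCredentials:
+        connectionString:
+          name: azure-creds                # hypothetical secret name
+          key: AZURE_CONNECTION_STRING     # hypothetical key in the secret
+```
+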
+
+
+## Backup
+
+Backup is the Schema for the backups API
+
+| Name | Description | Type |
+| ---------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------ |
+| `metadata` | | [metav1.ObjectMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#objectmeta-v1-meta) |
+| `spec ` | Specification of the desired behavior of the backup. More info: | [BackupSpec](#BackupSpec) |
+| `status ` | Most recently observed status of the backup. This data may not be up to date. Populated by the system. Read-only. More info: | [BackupStatus](#BackupStatus) |
+
+
+
+## BackupConfiguration
+
+BackupConfiguration defines how the backups of the cluster are taken. Currently the only supported backup method is barmanObjectStore. For details and examples refer to the Backup and Recovery section of the documentation.
+
+| Name | Description | Type |
+| ------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------- |
+| `barmanObjectStore` | The configuration for the barman-cloud tool suite | [\*BarmanObjectStoreConfiguration](#BarmanObjectStoreConfiguration) |
+| `retentionPolicy ` | RetentionPolicy is the retention policy to be used for backups and WALs (e.g. '60d'). The retention policy is expressed in the form of `XXu` where `XX` is a positive integer and `u` is in `[dwm]` - days, weeks, months. | string |
+
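To illustrate the `XXu` retention format in context, a sketch of the `backup` stanza (bucket path is a placeholder):

+```yaml
+spec:
+  backup:
+    retentionPolicy: "30d"    # 30 days; "4w" (weeks) and "3m" (months) also match XXu
+    barmanObjectStore:
+      destinationPath: "s3://bucket/path/to/folder"   # hypothetical bucket
+```
+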
+
+
+## BackupList
+
+BackupList contains a list of Backup
+
+| Name | Description | Type |
+| ---------- | ------------------------------------------------------------------------------------------------------------------------------------------------ | -------------------------------------------------------------------------------------------------------- |
+| `metadata` | Standard list metadata. More info: | [metav1.ListMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#listmeta-v1-meta) |
+| `items ` | List of backups - *mandatory* | [\[\]Backup](#Backup) |
+
+
+
+## BackupSource
+
+BackupSource contains the backup we need to restore from, plus some information that could be needed to correctly restore it.
+
+| Name | Description | Type |
+| ------------ | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------- |
+| `endpointCA` | EndpointCA store the CA bundle of the barman endpoint. Useful when using self-signed certificates to avoid errors with certificate issuer and barman-cloud-wal-archive | [\*SecretKeySelector](#SecretKeySelector) |
+
+
+
+## BackupSpec
+
+BackupSpec defines the desired state of Backup
+
+| Name | Description | Type |
+| --------- | --------------------- | --------------------------------------------- |
+| `cluster` | The cluster to backup | [LocalObjectReference](#LocalObjectReference) |
+
+
+
+## BackupStatus
+
+BackupStatus defines the observed state of Backup
+
+| Name | Description | Type |
+| ------------------ | -------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------- |
+| `s3Credentials ` | The credentials to be used to upload data to S3 | [\*S3Credentials](#S3Credentials) |
+| `azureCredentials` | The credentials to be used to upload data to Azure Blob Storage | [\*AzureCredentials](#AzureCredentials) |
+| `endpointURL ` | Endpoint to be used to upload data to the cloud, overriding the automatic endpoint discovery | string |
+| `destinationPath ` | The path where to store the backup (i.e. s3://bucket/path/to/folder) this path, with different destination folders, will be used for WALs and for data - *mandatory* | string |
+| `serverName ` | The server name on S3, the cluster name is used if this parameter is omitted | string |
+| `encryption ` | Encryption method required to S3 API | string |
+| `backupId ` | The ID of the Barman backup | string |
+| `phase ` | The last backup status | BackupPhase |
+| `startedAt ` | When the backup was started | [\*metav1.Time](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#time-v1-meta) |
+| `stoppedAt ` | When the backup was terminated | [\*metav1.Time](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#time-v1-meta) |
+| `beginWal ` | The starting WAL | string |
+| `endWal ` | The ending WAL | string |
+| `beginLSN ` | The starting xlog | string |
+| `endLSN ` | The ending xlog | string |
+| `error ` | The detected error | string |
+| `commandOutput ` | Unused. Retained for compatibility with old versions. | string |
+| `commandError ` | The backup command output in case of error | string |
+| `instanceID ` | Information to identify the instance where the backup has been taken from | [\*InstanceID](#InstanceID) |
+
+
+
+## BarmanObjectStoreConfiguration
+
+BarmanObjectStoreConfiguration contains the backup configuration using Barman against an S3-compatible object storage
+
+| Name | Description | Type |
+| ------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ----------------------------------------------------- |
+| `s3Credentials ` | The credentials to use to upload data to S3 | [\*S3Credentials](#S3Credentials) |
+| `azureCredentials` | The credentials to use to upload data in Azure Blob Storage | [\*AzureCredentials](#AzureCredentials) |
+| `endpointURL ` | Endpoint to be used to upload data to the cloud, overriding the automatic endpoint discovery | string |
+| `endpointCA ` | EndpointCA store the CA bundle of the barman endpoint. Useful when using self-signed certificates to avoid errors with certificate issuer and barman-cloud-wal-archive | [\*SecretKeySelector](#SecretKeySelector) |
+| `destinationPath ` | The path where to store the backup (i.e. s3://bucket/path/to/folder) this path, with different destination folders, will be used for WALs and for data - *mandatory* | string |
+| `serverName ` | The server name on S3, the cluster name is used if this parameter is omitted | string |
+| `wal ` | The configuration for the backup of the WAL stream. When not defined, WAL files will be stored uncompressed and may be unencrypted in the object store, according to the bucket default policy. | [\*WalBackupConfiguration](#WalBackupConfiguration) |
+| `data ` | The configuration to be used to backup the data files When not defined, base backups files will be stored uncompressed and may be unencrypted in the object store, according to the bucket default policy. | [\*DataBackupConfiguration](#DataBackupConfiguration) |
+
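A sketch of a full Barman object store configuration with S3 credentials; the secret names, keys, and endpoint are placeholders:

+```yaml
+spec:
+  backup:
+    barmanObjectStore:
+      destinationPath: "s3://bucket/path/to/folder"   # mandatory
+      endpointURL: "https://s3.example.com"           # hypothetical custom endpoint
+      s3Credentials:
+        accessKeyId:
+          name: aws-creds                             # hypothetical secret name
+          key: ACCESS_KEY_ID
+        secretAccessKey:
+          name: aws-creds
+          key: ACCESS_SECRET_KEY
+      wal:
+        compression: gzip                             # WALs stored uncompressed if omitted
+```
+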
+
+
+## BootstrapConfiguration
+
+BootstrapConfiguration contains information about how to create the PostgreSQL cluster. Only a single bootstrap method can be defined among the supported ones. `initdb` will be used as the bootstrap method if left unspecified. Refer to the Bootstrap page of the documentation for more information.
+
+| Name | Description | Type |
+| --------------- | ---------------------------------------------------------------------------------------- | ------------------------------------------------- |
+| `initdb ` | Bootstrap the cluster via initdb | [\*BootstrapInitDB](#BootstrapInitDB) |
+| `recovery ` | Bootstrap the cluster from a backup | [\*BootstrapRecovery](#BootstrapRecovery) |
+| `pg_basebackup` | Bootstrap the cluster taking a physical backup of another compatible PostgreSQL instance | [\*BootstrapPgBaseBackup](#BootstrapPgBaseBackup) |
+
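Since only a single bootstrap method can be set, a manifest picks exactly one of the three keys; a sketch using `pg_basebackup` with a hypothetical source name:

+```yaml
+spec:
+  bootstrap:
+    pg_basebackup:
+      source: cluster-source   # hypothetical name of a compatible source server
+```
+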
+
+
+## BootstrapInitDB
+
+BootstrapInitDB is the configuration of the bootstrap process when initdb is used. Refer to the Bootstrap page of the documentation for more information.
+
+| Name | Description | Type |
+| --------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------- |
+| `database ` | Name of the database used by the application. Default: `app`. - *mandatory* | string |
+| `owner ` | Name of the owner of the database in the instance to be used by applications. Defaults to the value of the `database` key. - *mandatory* | string |
+| `secret ` | Name of the secret containing the initial credentials for the owner of the user database. If empty a new secret will be created from scratch | [\*LocalObjectReference](#LocalObjectReference) |
+| `redwood ` | If we need to enable/disable Redwood compatibility. Requires EPAS and for EPAS defaults to true | \*bool |
+| `options ` | The list of options that must be passed to initdb when creating the cluster. Deprecated: This could lead to inconsistent configurations, please use the explicit provided parameters instead. If defined, explicit values will be ignored. | \[]string |
+| `dataChecksums ` | Whether the `-k` option should be passed to initdb, enabling checksums on data pages (default: `false`) | \*bool |
+| `encoding ` | The value to be passed as option `--encoding` for initdb (default:`UTF8`) | string |
+| `localeCollate ` | The value to be passed as option `--lc-collate` for initdb (default:`C`) | string |
+| `localeCType ` | The value to be passed as option `--lc-ctype` for initdb (default:`C`) | string |
+| `walSegmentSize ` | The value in megabytes (1 to 1024) to be passed to the `--wal-segsize` option for initdb (default: empty, resulting in PostgreSQL default: 16MB) | int |
+| `postInitSQL ` | List of SQL queries to be executed as a superuser immediately after the cluster has been created - to be used with extreme care (by default empty) | \[]string |
+| `postInitTemplateSQL` | List of SQL queries to be executed as a superuser in the `template1` after the cluster has been created - to be used with extreme care (by default empty) | \[]string |
+
+
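A sketch of an `initdb` bootstrap stanza; `app`/`app` echo the documented defaults, while the `postInitSQL` statement is a hypothetical example of a query run with superuser rights:

+```yaml
+spec:
+  bootstrap:
+    initdb:
+      database: app            # default
+      owner: app               # defaults to the database name
+      dataChecksums: true      # passes -k to initdb
+      encoding: UTF8           # default
+      postInitSQL:
+        - CREATE EXTENSION IF NOT EXISTS pgaudit   # hypothetical; use with extreme care
+```
+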
+
+## BootstrapPgBaseBackup
+
+BootstrapPgBaseBackup contains the configuration required to take a physical backup of an existing PostgreSQL cluster
+
+| Name | Description | Type |
+| -------- | ------------------------------------------------------------------------------- | ------ |
+| `source` | The name of the server of which we need to take a physical backup - *mandatory* | string |
+
+
+
+## BootstrapRecovery
+
+BootstrapRecovery contains the configuration required to restore the backup with the specified name. After changing the superuser password to the chosen one, the operator uses the restored primary to bootstrap a full cluster, cloning all the instances from it. Refer to the Bootstrap page of the documentation for more information.
+
+| Name | Description | Type |
+| ---------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------- |
+| `backup ` | The backup we need to restore | [\*BackupSource](#BackupSource) |
+| `source ` | The external cluster whose backup we will restore. This is also used as the name of the folder under which the backup is stored, so it must be set to the name of the source cluster | string |
+| `recoveryTarget` | By default, the recovery process applies all the available WAL files in the archive (full recovery). However, you can also end the recovery as soon as a consistent state is reached or recover to a point-in-time (PITR) by specifying a `RecoveryTarget` object, as expected by PostgreSQL (e.g., timestamp, transaction Id, LSN, ...). More info: | [\*RecoveryTarget](#RecoveryTarget) |
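+
+A minimal sketch of a recovery bootstrap with a point-in-time target, assuming a Backup resource named `backup-example` already exists (both names are illustrative):
+
+```yaml
+spec:
+  bootstrap:
+    recovery:
+      backup:
+        name: backup-example        # hypothetical Backup resource to restore
+      recoveryTarget:
+        targetTime: "2025-11-20 12:00:00.000000+00"  # stop recovery at this point in time
+```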
+
+
+
+## CertificatesConfiguration
+
+CertificatesConfiguration contains the needed configurations to handle server certificates.
+
+| Name | Description | Type |
+| ---------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | --------- |
+| `serverCASecret ` | The secret containing the Server CA certificate. If not defined, a new secret will be created with a self-signed CA and will be used to generate the TLS certificate ServerTLSSecret.<br/>Contains:<br/>- `ca.crt`: CA that should be used to validate the server certificate, used as `sslrootcert` in client connection strings.<br/>- `ca.key`: key used to generate Server SSL certs; if ServerTLSSecret is provided, this can be omitted. | string |
+| `serverTLSSecret ` | The secret of type kubernetes.io/tls containing the server TLS certificate and key that will be set as `ssl_cert_file` and `ssl_key_file` so that clients can connect to postgres securely. If not defined, ServerCASecret must provide also `ca.key` and a new secret will be created using the provided CA. | string |
+| `replicationTLSSecret` | The secret of type kubernetes.io/tls containing the client certificate to authenticate as the `streaming_replica` user. If not defined, ClientCASecret must provide also `ca.key`, and a new secret will be created using the provided CA. | string |
+| `clientCASecret ` | The secret containing the Client CA certificate. If not defined, a new secret will be created with a self-signed CA and will be used to generate all the client certificates.<br/>Contains:<br/>- `ca.crt`: CA that should be used to validate the client certificates, used as `ssl_ca_file` of all the instances.<br/>- `ca.key`: key used to generate client certificates; if ReplicationTLSSecret is provided, this can be omitted. | string |
+| `serverAltDNSNames ` | The list of the server alternative DNS names to be added to the generated server TLS certificates, when required. | \[]string |
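+
+A minimal sketch of a custom certificate setup, where `my-server-ca` and `my-server-tls` are hypothetical secret names:
+
+```yaml
+spec:
+  certificates:
+    serverCASecret: my-server-ca      # secret with ca.crt (and ca.key, if serverTLSSecret is omitted)
+    serverTLSSecret: my-server-tls    # secret of type kubernetes.io/tls
+    serverAltDNSNames:
+      - cluster-example.example.com   # extra DNS name for the server certificate
+```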
+
+
+
+## CertificatesStatus
+
+CertificatesStatus contains configuration certificates and related expiration dates.
+
+| Name | Description | Type |
+| ------------- | -------------------------------------- | ----------------- |
+| `expirations` | Expiration dates for all certificates. | map[string]string |
+
+
+
+## Cluster
+
+Cluster is the Schema for the PostgreSQL API
+
+| Name | Description | Type |
+| ---------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------ |
+| `metadata` | | [metav1.ObjectMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#objectmeta-v1-meta) |
+| `spec ` | Specification of the desired behavior of the cluster. More info: | [ClusterSpec](#ClusterSpec) |
+| `status ` | Most recently observed status of the cluster. This data may not be up to date. Populated by the system. Read-only. More info: | [ClusterStatus](#ClusterStatus) |
+
+
+
+## ClusterList
+
+ClusterList contains a list of Cluster
+
+| Name | Description | Type |
+| ---------- | ------------------------------------------------------------------------------------------------------------------------------------------------ | -------------------------------------------------------------------------------------------------------- |
+| `metadata` | Standard list metadata. More info: | [metav1.ListMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#listmeta-v1-meta) |
+| `items ` | List of clusters - *mandatory* | [\[\]Cluster](#Cluster) |
+
+
+
+## ClusterSpec
+
+ClusterSpec defines the desired state of Cluster
+
+| Name | Description | Type |
+| ----------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------- |
+| `description ` | Description of this PostgreSQL cluster | string |
+| `imageName ` | Name of the container image, supporting both tags (`:`) and digests for deterministic and repeatable deployments (`:@sha256:`) | string |
+| `imagePullPolicy ` | Image pull policy. One of `Always`, `Never` or `IfNotPresent`. If not defined, it defaults to `IfNotPresent`. Cannot be updated. More info: | corev1.PullPolicy |
+| `postgresUID ` | The UID of the `postgres` user inside the image, defaults to `26` | int64 |
+| `postgresGID ` | The GID of the `postgres` user inside the image, defaults to `26` | int64 |
+| `instances ` | Number of instances required in the cluster - *mandatory* | int32 |
+| `minSyncReplicas ` | Minimum number of instances required in synchronous replication with the primary. Undefined or 0 allows writes to complete when no standby is available. | int32 |
+| `maxSyncReplicas ` | The target value for the synchronous replication quorum, which can be decreased if the number of ready standbys is lower than this value. Undefined or 0 disables synchronous replication. | int32 |
+| `postgresql ` | Configuration of the PostgreSQL server | [PostgresConfiguration](#PostgresConfiguration) |
+| `bootstrap ` | Instructions to bootstrap this cluster | [\*BootstrapConfiguration](#BootstrapConfiguration) |
+| `replica ` | Replica cluster configuration | [\*ReplicaClusterConfiguration](#ReplicaClusterConfiguration) |
+| `superuserSecret ` | The secret containing the superuser password. If not defined a new secret will be created with a randomly generated password | [\*LocalObjectReference](#LocalObjectReference) |
+| `enableSuperuserAccess` | When this option is enabled, the operator will use the `SuperuserSecret` to update the `postgres` user password (if the secret is not present, the operator will automatically create one). When this option is disabled, the operator will ignore the `SuperuserSecret` content, delete it when automatically created, and then blank the password of the `postgres` user by setting it to `NULL`. Enabled by default. | \*bool |
+| `certificates ` | The configuration for the CA and related certificates | [\*CertificatesConfiguration](#CertificatesConfiguration) |
+| `imagePullSecrets ` | The list of pull secrets to be used to pull the images. If the license key contains a pull secret that secret will be automatically included. | [\[\]LocalObjectReference](#LocalObjectReference) |
+| `storage ` | Configuration of the storage of the instances | [StorageConfiguration](#StorageConfiguration) |
+| `startDelay ` | The time in seconds that is allowed for a PostgreSQL instance to successfully start up (default 30) | int32 |
+| `stopDelay ` | The time in seconds that is allowed for a PostgreSQL instance node to gracefully shutdown (default 30) | int32 |
+| `affinity ` | Affinity/Anti-affinity rules for Pods | [AffinityConfiguration](#AffinityConfiguration) |
+| `resources ` | Resources requirements of every generated Pod. Please refer to for more information. | [corev1.ResourceRequirements](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#resourcerequirements-v1-core) |
+| `primaryUpdateStrategy` | Strategy to follow to upgrade the primary server during a rolling update procedure, after all replicas have been successfully updated: it can be automated (`unsupervised` - default) or manual (`supervised`) | PrimaryUpdateStrategy |
+| `backup ` | The configuration to be used for backups | [\*BackupConfiguration](#BackupConfiguration) |
+| `nodeMaintenanceWindow` | Define a maintenance window for the Kubernetes nodes | [\*NodeMaintenanceWindow](#NodeMaintenanceWindow) |
+| `licenseKey ` | The license key of the cluster. When empty, the cluster operates in trial mode and after the expiry date (default 30 days) the operator will cease any reconciliation attempt. For details, please refer to the license agreement that comes with the operator. | string |
+| `licenseKeySecret ` | The reference to the license key. When this is set, it takes precedence over LicenseKey. | [\*corev1.SecretKeySelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#secretkeyselector-v1-core) |
+| `monitoring ` | The configuration of the monitoring infrastructure of this cluster | [\*MonitoringConfiguration](#MonitoringConfiguration) |
+| `externalClusters ` | The list of external clusters which are used in the configuration | [\[\]ExternalCluster](#ExternalCluster) |
+| `logLevel ` | The instances' log level, one of the following values: error, info (default), debug, trace | string |
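+
+A minimal sketch of a Cluster manifest using a few of the fields above (the name `cluster-example` is illustrative):
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+  name: cluster-example
+spec:
+  instances: 3          # mandatory
+  minSyncReplicas: 1
+  maxSyncReplicas: 2
+  storage:
+    size: 1Gi
+  logLevel: info
+```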
+
+
+
+## ClusterStatus
+
+ClusterStatus defines the observed state of Cluster
+
+| Name | Description | Type |
+| ----------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------- |
+| `instances ` | Total number of instances in the cluster | int32 |
+| `readyInstances ` | Total number of ready instances in the cluster | int32 |
+| `instancesStatus ` | Instances status | map[utils.PodStatus][]string |
+| `latestGeneratedNode ` | ID of the latest generated node (used to avoid node name clashing) | int32 |
+| `currentPrimary ` | Current primary instance | string |
+| `targetPrimary ` | Target primary instance, this is different from the previous one during a switchover or a failover | string |
+| `pvcCount ` | How many PVCs have been created by this cluster | int32 |
+| `jobCount ` | How many Jobs have been created by this cluster | int32 |
+| `danglingPVC ` | List of all the PVCs created by this cluster and still available which are not attached to a Pod | \[]string |
+| `initializingPVC ` | List of all the PVCs that are being initialized by this cluster | \[]string |
+| `healthyPVC ` | List of all the PVCs not dangling nor initializing | \[]string |
+| `licenseStatus ` | Status of the license | licensekey.Status |
+| `writeService ` | Current write pod | string |
+| `readService ` | Current list of read pods | string |
+| `phase ` | Current phase of the cluster | string |
+| `phaseReason ` | Reason for the current phase | string |
+| `secretsResourceVersion ` | The list of resource versions of the secrets managed by the operator. Every change here is done in the interest of the instance manager, which will refresh the secret data | [SecretsResourceVersion](#SecretsResourceVersion) |
+| `configMapResourceVersion ` | The list of resource versions of the configmaps, managed by the operator. Every change here is done in the interest of the instance manager, which will refresh the configmap data | [ConfigMapResourceVersion](#ConfigMapResourceVersion) |
+| `certificates ` | The configuration for the CA and related certificates, initialized with defaults. | [CertificatesStatus](#CertificatesStatus) |
+| `firstRecoverabilityPoint ` | The first recoverability point, stored as a date in RFC3339 format | string |
+| `cloudNativePostgresqlCommitHash ` | The commit hash of the operator build that is running | string |
+| `currentPrimaryTimestamp ` | The timestamp when the last actual promotion to primary has occurred | string |
+| `targetPrimaryTimestamp ` | The timestamp when the last request for a new primary has occurred | string |
+| `poolerIntegrations ` | The integration needed by poolers referencing the cluster | [\*PoolerIntegrations](#PoolerIntegrations) |
+| `cloudNativePostgresqlOperatorHash` | The hash of the binary of the operator | string |
+| `onlineUpdateEnabled ` | OnlineUpdateEnabled shows if the online upgrade is enabled inside the cluster | bool |
+
+
+
+## ConfigMapKeySelector
+
+ConfigMapKeySelector contains enough information to let you locate the key of a ConfigMap
+
+| Name | Description | Type |
+| ----- | ------------------------------- | ------ |
+| `key` | The key to select - *mandatory* | string |
+
+
+
+## ConfigMapResourceVersion
+
+ConfigMapResourceVersion contains the resource versions of the config maps managed by the operator
+
+| Name | Description | Type |
+| --------- | ----------------------------------------------------------------------------------------------------------------------------------- | ----------------- |
+| `metrics` | A map with the versions of all the config maps used to pass metrics. Map keys are the config map names, map values are the versions | map[string]string |
+
+
+
+## DataBackupConfiguration
+
+DataBackupConfiguration is the configuration of the backup of the data directory
+
+| Name | Description | Type |
+| --------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------- |
+| `compression ` | Compress a backup file (a tar file per tablespace) while streaming it to the object store. Available options are empty string (no compression, default), `gzip` or `bzip2`. | CompressionType |
+| `encryption ` | Whether to force the encryption of files (if the bucket is not already configured for that). Allowed options are empty string (use the bucket policy, default), `AES256` and `aws:kms` | EncryptionType |
+| `immediateCheckpoint` | Control whether the I/O workload for the backup initial checkpoint will be limited, according to the `checkpoint_completion_target` setting on the PostgreSQL server. If set to true, an immediate checkpoint will be used, meaning PostgreSQL will complete the checkpoint as soon as possible. `false` by default. | bool |
+| `jobs ` | The number of parallel jobs to be used to upload the backup, defaults to 2 | \*int32 |
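+
+A minimal sketch of how these fields might appear under `spec.backup.barmanObjectStore.data` (the bucket path is a placeholder):
+
+```yaml
+spec:
+  backup:
+    barmanObjectStore:
+      destinationPath: s3://backups/cluster-example   # hypothetical object store path
+      data:
+        compression: gzip        # compress each tablespace tar file while streaming
+        immediateCheckpoint: true
+        jobs: 2                  # parallel upload jobs
+```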
+
+
+
+## EPASConfiguration
+
+EPASConfiguration contains EDB Postgres Advanced Server specific configurations
+
+| Name | Description | Type |
+| ------- | --------------------------------- | ---- |
+| `audit` | If true enables edb_audit logging | bool |
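+
+A minimal sketch of enabling `edb_audit` logging, assuming the EPAS block sits under `spec.postgresql`:
+
+```yaml
+spec:
+  postgresql:
+    epas:
+      audit: true   # enable edb_audit logging
+```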
+
+
+
+## ExternalCluster
+
+ExternalCluster represents the connection parameters to an external cluster which is used in the other sections of the configuration
+
+| Name | Description | Type |
+| ---------------------- | ------------------------------------------------------------------------------------------ | ---------------------------------------------------------------------------------------------------------------------------- |
+| `name ` | The server name, required - *mandatory* | string |
+| `connectionParameters` | The list of connection parameters, such as dbname, host, username, etc | map[string]string |
+| `sslCert ` | The reference to an SSL certificate to be used to connect to this instance | [\*corev1.SecretKeySelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#secretkeyselector-v1-core) |
+| `sslKey ` | The reference to an SSL private key to be used to connect to this instance | [\*corev1.SecretKeySelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#secretkeyselector-v1-core) |
+| `sslRootCert ` | The reference to an SSL CA public key to be used to connect to this instance | [\*corev1.SecretKeySelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#secretkeyselector-v1-core) |
+| `password ` | The reference to the password to be used to connect to the server | [\*corev1.SecretKeySelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#secretkeyselector-v1-core) |
+| `barmanObjectStore ` | The configuration for the barman-cloud tool suite | [\*BarmanObjectStoreConfiguration](#BarmanObjectStoreConfiguration) |
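+
+A minimal sketch of an external cluster entry; `cluster-source` and the secret `cluster-source-credentials` are hypothetical names:
+
+```yaml
+spec:
+  externalClusters:
+    - name: cluster-source
+      connectionParameters:
+        host: pg.example.com
+        user: postgres
+        dbname: postgres
+      password:
+        name: cluster-source-credentials   # secret holding the password
+        key: password
+```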
+
+
+
+## InstanceID
+
+InstanceID contains the information to identify an instance
+
+| Name | Description | Type |
+| ------------- | ---------------- | ------ |
+| `podName ` | The pod name | string |
+| `ContainerID` | The container ID | string |
+
+
+
+## LocalObjectReference
+
+LocalObjectReference contains enough information to let you locate a local object with a known type inside the same namespace
+
+| Name | Description | Type |
+| ------ | ----------------------------------- | ------ |
+| `name` | Name of the referent. - *mandatory* | string |
+
+
+
+## MonitoringConfiguration
+
+MonitoringConfiguration is the type containing all the monitoring configuration for a certain cluster
+
+| Name | Description | Type |
+| ------------------------ | --------------------------------------------------------------- | ------------------------------------------------- |
+| `disableDefaultQueries ` | Whether the default queries should be injected. Default: false. | \*bool |
+| `customQueriesConfigMap` | The list of config maps containing the custom queries | [\[\]ConfigMapKeySelector](#ConfigMapKeySelector) |
+| `customQueriesSecret ` | The list of secrets containing the custom queries | [\[\]SecretKeySelector](#SecretKeySelector) |
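+
+A minimal sketch of wiring custom metrics queries from a ConfigMap; `example-monitoring` and `custom-queries` are illustrative names:
+
+```yaml
+spec:
+  monitoring:
+    disableDefaultQueries: false
+    customQueriesConfigMap:
+      - name: example-monitoring   # ConfigMap containing the queries
+        key: custom-queries        # key inside the ConfigMap
+```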
+
+
+
+## NodeMaintenanceWindow
+
+NodeMaintenanceWindow contains information that the operator will use while upgrading the underlying node.
+
+This option is only useful when the chosen storage prevents the Pods from being freely moved across nodes.
+
+| Name | Description | Type |
+| ------------ | -------------------------------------------------------------------------------------------------------- | ------ |
+| `inProgress` | Is there a node maintenance activity in progress? - *mandatory* | bool |
+| `reusePVC ` | Reuse the existing PVC (wait for the node to come up again) or not (recreate it elsewhere) - *mandatory* | \*bool |
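+
+A minimal sketch of declaring a maintenance window that keeps the existing PVC while the node is drained:
+
+```yaml
+spec:
+  nodeMaintenanceWindow:
+    inProgress: true
+    reusePVC: true   # wait for the node to come back instead of recreating the PVC elsewhere
+```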
+
+
+
+## PgBouncerIntegrationStatus
+
+PgBouncerIntegrationStatus encapsulates the needed integration for the pgbouncer poolers referencing the cluster
+
+| Name | Description | Type |
+| --------- | ----------- | --------- |
+| `secrets` |             | \[]string |
+
+
+
+## PgBouncerSecrets
+
+PgBouncerSecrets contains the versions of the secrets used by pgbouncer
+
+| Name | Description | Type |
+| ----------- | ----------------------------- | ------------------------------- |
+| `authQuery` | The auth query secret version | [SecretVersion](#SecretVersion) |
+
+
+
+## PgBouncerSpec
+
+PgBouncerSpec defines how to configure PgBouncer
+
+| Name | Description | Type |
+| ----------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------- |
+| `poolMode ` | The pool mode - *mandatory* | PgBouncerPoolMode |
+| `authQuerySecret` | The credentials of the user to be used for the authentication query. If specified, an AuthQuery (e.g. "SELECT usename, passwd FROM pg_shadow WHERE usename=$1") must also be specified, and no automatic CNP Cluster integration will be triggered. | [\*LocalObjectReference](#LocalObjectReference) |
+| `authQuery ` | The query that will be used to download the hash of the password of a certain user. Default: "SELECT usename, passwd FROM user_search($1)". If specified, an AuthQuerySecret must also be specified, and no automatic CNP Cluster integration will be triggered. | string |
+| `parameters ` | Additional parameters to be passed to PgBouncer - please check the CNP documentation for a list of options you can configure | map[string]string |
+| `paused ` | When set to `true`, PgBouncer will disconnect from the PostgreSQL server, first waiting for all queries to complete, and pause all new client connections until this value is set to `false` (default). Internally, the operator calls PgBouncer's `PAUSE` and `RESUME` commands. | \*bool |
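+
+A minimal sketch of a Pooler resource using this spec; `pooler-example-rw` and `cluster-example` are illustrative names:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Pooler
+metadata:
+  name: pooler-example-rw
+spec:
+  cluster:
+    name: cluster-example
+  instances: 3
+  type: rw
+  pgbouncer:
+    poolMode: session        # mandatory
+    parameters:
+      max_client_conn: "1000"
+```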
+
+
+
+## PodMeta
+
+PodMeta is a structure similar to the metav1.ObjectMeta, but still parseable by controller-gen to create a suitable CRD for the user. The comment of PodTemplateSpec has an explanation of why we are not using the core data types.
+
+| Name | Description | Type |
+| ------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ----------------- |
+| `labels ` | Map of string keys and values that can be used to organize and categorize (scope and select) objects. May match selectors of replication controllers and services. More info: | map[string]string |
+| `annotations` | Annotations is an unstructured key value map stored with a resource that may be set by external tools to store and retrieve arbitrary metadata. They are not queryable and should be preserved when modifying objects. More info: | map[string]string |
+
+