1 change: 1 addition & 0 deletions docs/annotations.md
@@ -83,6 +83,7 @@ Use **Annotations** when:
|`percona.com/issue-vault-token: "true"`|
| `service.beta.kubernetes.io/aws-load-balancer-backend-protocol` | Services | Specifies the protocol for AWS load balancers | http, http-test |
| `service.beta.kubernetes.io/aws-load-balancer-backend` | Services | Specifies the backend type for AWS load balancers | test-type |
| `percona.com/headless-service` | Services | Exposes ProxySQL or HAProxy as a headless service | true |

## Setting labels and annotations in the Custom Resource

Expand Down
44 changes: 24 additions & 20 deletions docs/haproxy-conf.md
@@ -21,9 +21,7 @@ $ kubectl patch pxc cluster1 --type=merge --patch '{

!!! warning

Switching from ProxySQL to HAProxy will cause Percona XtraDB Cluster Pods
restart. Switching from HAProxy to ProxySQL is not possible, and if you need
ProxySQL, this should be configured at cluster creation time.
Switching from ProxySQL to HAProxy will cause downtime because the Operator needs to reconfigure the proxy Pods.

## HAProxy services

@@ -46,7 +44,7 @@ The `cluster1-haproxy` service listens on the following ports:
The [haproxy.enabled](operator.md#haproxyexposeprimaryenabled)
Custom Resource option enables or disables `cluster1-haproxy` service.

By default, the `cluster1-haproxy` service points to the number zero Percona XtraDB Cluster member (`cluster1-pxc-0`), when this member is available. If a zero member is not available, members are selected in descending order of their
By default, the `cluster1-haproxy` service points to the first Percona XtraDB Cluster member (`cluster1-pxc-0`), when this member is available. If it is not available, members are selected in descending order of their
numbers: `cluster1-pxc-2`, then `cluster1-pxc-1`. This service
can be used for both read and write load, or it can also be used just for
write load (single writer mode) in setups with split write and read loads.
@@ -64,26 +62,32 @@ the Round Robin load balancing algorithm.
**Don't use it for write requests**.

The [haproxy.exposeReplicas.enabled](operator.md#haproxyexposereplicasenabled)
Custom Resource option enables or disables `cluster1-haproxy-replicas` service (on by default).

!!! note
### Expose HAProxy as a headless service

You may want to configure the HAProxy service as a [headless Service :octicons-link-external-16:](https://kubernetes.io/docs/concepts/services-networking/service/#headless-services). For example, applications may need direct DNS access to individual HAProxy Pods, such as in a multi-tenant setup or in advanced networking scenarios.

To enable HAProxy as a headless service, add the `percona.com/headless-service: true` [annotation](annotations.md) to the following options in the Custom Resource:

<a name="headless-service"> If you need to configure `cluster1-haproxy` and
`cluster1-haproxy-replicas` as a [headless Service :octicons-link-external-16:](https://kubernetes.io/docs/concepts/services-networking/service/#headless-services)
(e.g. to use on the tenant network), add the following [annotation](annotations.md)
in the Custom Resource metadata section of the `deploy/cr.yaml`:

```yaml
apiVersion: pxc.percona.com/v1
kind: PerconaXtraDBCluster
metadata:
name: cluster1
annotations:
percona.com/headless-service: true
...
* `haproxy.exposePrimary.annotations` key to expose the primary HAProxy Pod
* `haproxy.exposeReplicas.annotations` key to expose the HAProxy replica Pods

```yaml
spec:
  haproxy:
    exposePrimary:
      enabled: true
      annotations:
        percona.com/headless-service: "true"
    exposeReplicas:
      enabled: true
      annotations:
        percona.com/headless-service: "true"
```

This annotation works only at service creation time and can't be added later.
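A headless Service has no cluster IP: DNS queries for the Service name return the individual Pod IPs rather than a single virtual IP. With the annotation applied, the resulting `cluster1-haproxy` Service would look roughly like this (a sketch; the selector labels and port list here are assumptions, not copied from the Operator):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: cluster1-haproxy
  annotations:
    percona.com/headless-service: "true"
spec:
  clusterIP: None            # headless: no virtual IP is allocated
  selector:
    app.kubernetes.io/instance: cluster1     # assumed labels
    app.kubernetes.io/component: haproxy
  ports:
    - name: mysql
      port: 3306
```

You can confirm the Service is headless by checking that its `CLUSTER-IP` column shows `None` in `kubectl get svc` output.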

### Upgrade behavior

When the cluster with HAProxy is upgraded, the following steps
take place. First, reader members are upgraded one by one: the Operator waits
67 changes: 67 additions & 0 deletions docs/operator.md
@@ -1812,6 +1812,73 @@ A secret with environment variables, see [Define environment variables](containe
| ----------- | ---------- |
| :material-code-string: string | `my-env-var-secrets` |

### `proxysql.scheduler.enabled`

Enables the external ProxySQL scheduler for even distribution of read/write traffic across Percona XtraDB Cluster nodes. Available since Operator version 1.19.0.

See [ProxySQL scheduler](proxysql-conf.md#proxysql-scheduler) for more information.

| Value type | Example |
| ----------- | ---------- |
| :material-toggle-switch-outline: boolean | `true` |


### `proxysql.scheduler.writerIsAlsoReader`

Controls whether the writer node is included in the read pool. When set to `false`, the writer node is excluded from receiving read queries. If the cluster loses its last reader, the writer is automatically elected as a reader regardless of this setting. Available since Operator version 1.19.0.

| Value type | Example |
| ----------- | ---------- |
| :material-toggle-switch-outline: boolean | `true` |

### `proxysql.scheduler.checkTimeoutMilliseconds`

The maximum time (in milliseconds) allowed for checking a backend PXC node. If a node's check exceeds this timeout, the node is not processed in that round. Available since Operator version 1.19.0.

| Value type | Example |
| ----------- | ---------- |
| :material-numeric-1-box: int | `2000` |

### `proxysql.scheduler.successThreshold`

The number of successful checks required before a failed node is restored to the pool. Available since Operator version 1.19.0.

| Value type | Example |
| ----------- | ---------- |
| :material-numeric-1-box: int | `1` |

### `proxysql.scheduler.failureThreshold`

The number of failed checks required before a node is marked as DOWN and removed from the pool. Available since Operator version 1.19.0.

| Value type | Example |
| ----------- | ---------- |
| :material-numeric-1-box: int | `3` |
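The two thresholds act as a debounce on node state: a node flips to DOWN only after `failureThreshold` consecutive failed checks, and flips back to UP only after `successThreshold` consecutive successful checks. A conceptual sketch of this logic (an illustration only, not the scheduler's actual code):

```python
class NodeHealth:
    """Debounced UP/DOWN state driven by consecutive check results."""

    def __init__(self, success_threshold=1, failure_threshold=3):
        self.success_threshold = success_threshold
        self.failure_threshold = failure_threshold
        self.up = True          # nodes start in the pool
        self._successes = 0
        self._failures = 0

    def record(self, check_ok: bool) -> bool:
        """Feed one check result; return the (possibly updated) state."""
        if check_ok:
            self._successes += 1
            self._failures = 0
            if not self.up and self._successes >= self.success_threshold:
                self.up = True      # restore the node to the pool
        else:
            self._failures += 1
            self._successes = 0
            if self.up and self._failures >= self.failure_threshold:
                self.up = False     # mark DOWN, remove from the pool
        return self.up

node = NodeHealth(success_threshold=1, failure_threshold=3)
results = [False, False, True, False, False, False, True]
states = [node.record(r) for r in results]
# two failures are not enough; a success resets the counter; the third
# consecutive failure marks the node DOWN, and one success restores it
```

Note that any successful check resets the failure counter, so only *consecutive* failures count toward `failureThreshold`.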

### `proxysql.scheduler.pingTimeoutMilliseconds`

The connection timeout (in milliseconds) used to test the connection to a PXC server. Available since Operator version 1.19.0.

| Value type | Example |
| ----------- | ---------- |
| :material-numeric-1-box: int | `1000` |

### `proxysql.scheduler.nodeCheckIntervalMilliseconds`

How frequently (in milliseconds) the scheduler runs to check node health and update the server configuration. Available since Operator version 1.19.0.

| Value type | Example |
| ----------- | ---------- |
| :material-numeric-1-box: int | `2000` |

### `proxysql.scheduler.maxConnections`

The maximum number of connections from ProxySQL to each backend PXC server. Available since Operator version 1.19.0.

| Value type | Example |
| ----------- | ---------- |
| :material-numeric-1-box: int | `1000` |
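Put together, the scheduler options above could be set in the `proxysql` section of `deploy/cr.yaml` like this (a sketch that reuses the example values from the tables above; they are illustrations, not tuned recommendations):

```yaml
spec:
  proxysql:
    enabled: true
    scheduler:
      enabled: true
      writerIsAlsoReader: true
      checkTimeoutMilliseconds: 2000
      successThreshold: 1
      failureThreshold: 3
      pingTimeoutMilliseconds: 1000
      nodeCheckIntervalMilliseconds: 2000
      maxConnections: 1000
```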

### `proxysql.priorityClassName`

The [Kubernetes Pod Priority class :octicons-link-external-16:](https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#priorityclass) for ProxySQL.