From 166947906ff4fb6edcc553609d2dd8eb6a18ed08 Mon Sep 17 00:00:00 2001 From: Anastasia Alexadrova Date: Thu, 27 Nov 2025 13:50:08 +0100 Subject: [PATCH 1/5] K8SPXC-735 Documented new ProxySQL scheduler options and behavior in operator and configuration files. Added details on enabling the scheduler, its impact on load balancing, and configuration examples for better clarity. modified: docs/operator.md modified: docs/proxysql-conf.md --- docs/operator.md | 67 ++++++++ docs/proxysql-conf.md | 379 +++++++++++++++++++++--------------------- 2 files changed, 254 insertions(+), 192 deletions(-) diff --git a/docs/operator.md b/docs/operator.md index a91509e8..187e5b42 100644 --- a/docs/operator.md +++ b/docs/operator.md @@ -1812,6 +1812,73 @@ A secret with environment variables, see [Define environment variables](containe | ----------- | ---------- | | :material-code-string: string | `my-env-var-secrets` | +### `proxysql.scheduler.enabled` + +Enables the external ProxySQL scheduler for even distribution of read/write traffic across Percona XtraDB Cluster nodes. Available since Operator version 1.19.0 + +See [ProxySQL scheduler](proxysql-conf.md#proxysql-scheduler) for more information. + +| Value type | Example | +| ----------- | ---------- | +| :material-toggle-switch-outline: boolean | `true` | + + +### `proxysql.scheduler.writerIsAlsoReader` + +Controls whether the writer node is included in the read pool. When set to `false`, the writer node is excluded from receiving read queries. If the cluster loses its last reader, the writer is automatically elected as a reader regardless of this setting. Available since Operator version 1.19.0 + +| Value type | Example | +| ----------- | ---------- | +| :material-toggle-switch-outline: boolean | `true` | + +### `proxysql.scheduler.checkTimeoutMilliseconds` + +The maximum time (in milliseconds) allowed for checking a backend PXC node. If checking a node exceeds this timeout, it is not processed. Available since Operator version 1.19.0 + +| Value type | Example | +| ----------- | ---------- | +| :material-numeric-1-box: int | `2000` | + +### `proxysql.scheduler.successThreshold` + +The number of successful checks required before a failed node is restored to the pool. Available since Operator version 1.19.0 + +| Value type | Example | +| ----------- | ---------- | +| :material-numeric-1-box: int | `1` | + +### `proxysql.scheduler.failureThreshold` + +The number of failed checks required before a node is marked as DOWN and removed from the pool. Available since Operator version 1.19.0 + +| Value type | Example | +| ----------- | ---------- | +| :material-numeric-1-box: int | `3` | + +### `proxysql.scheduler.pingTimeoutMilliseconds` + +The connection timeout (in milliseconds) used to test the connection to a PXC server. Available since Operator version 1.19.0 + +| Value type | Example | +| ----------- | ---------- | +| :material-numeric-1-box: int | `1000` | + +### `proxysql.scheduler.nodeCheckIntervalMilliseconds` + +How frequently (in milliseconds) the scheduler runs to check node health and update the server configuration. Available since Operator version 1.19.0 + +| Value type | Example | +| ----------- | ---------- | +| :material-numeric-1-box: int | `2000` | + +### `proxysql.scheduler.maxConnections` + +The maximum number of connections from ProxySQL to each backend PXC server. 
Available since Operator version 1.19.0 + +| Value type | Example | +| ----------- | ---------- | +| :material-numeric-1-box: int | `1000` | + ### `proxysql.priorityClassName` The [Kubernetes Pod Priority class :octicons-link-external-16:](https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#priorityclass) for ProxySQL. diff --git a/docs/proxysql-conf.md b/docs/proxysql-conf.md index 26c0038b..0c7bb45f 100644 --- a/docs/proxysql-conf.md +++ b/docs/proxysql-conf.md @@ -1,15 +1,10 @@ -# Configuring Load Balancing with ProxySQL +# Configuring load balancing with ProxySQL -You can use either [HAProxy :octicons-link-external-16:](https://haproxy.org) or [ProxySQL :octicons-link-external-16:](https://proxysql.com/) for load balancing and proxy services. +You can use either [HAProxy :octicons-link-external-16:](https://haproxy.org) or [ProxySQL :octicons-link-external-16:](https://proxysql.com/) for load balancing and proxy services. Control which one to use via the `haproxy.enabled` and `proxysql.enabled` options in the `deploy/cr.yaml` configuration file. -You can control which one to use: enable or disable the -`haproxy.enabled` and `proxysql.enabled` options in the `deploy/cr.yaml` -configuration file. +!!! warning -!!! warning - - You can enable ProxySQL only when you create a cluster. For a running cluster you can enable only HAProxy. Also note, if you have already enabled HAProxy, the switch from it to ProxySQL is not - possible. + You can enable ProxySQL only during cluster creation. For existing clusters, you can enable only HAProxy. If HAProxy is already enabled, you cannot switch to ProxySQL later. ## `cluster1-proxysql` service @@ -19,62 +14,42 @@ The `cluster1-proxysql` service listens on the following ports: * `33062` is the port to connect to the MySQL Administrative Interface * `6070` is the port to connect to the built-in Prometheus exporter to gather ProxySQL statistics and manage the ProxySQL observability stack -The `cluster1-proxysql` service uses the number zero Percona XtraDB Cluster member -(`cluster1-pxc-0` by default) as the writer. - -[proxysql.expose.enabled](operator.md#proxysqlexposeenabled) Custom Resource -option enables or disables the `cluster1-proxysql` service. +The `cluster1-proxysql` service uses the first Percona XtraDB Cluster member (`cluster1-pxc-0` by default) as the writer. Use the [proxysql.expose.enabled](operator.md#proxysqlexposeenabled) Custom Resource option to enable or disable this service. -!!! note +### Headless ProxySQL service - If you need to configure ProxySQL service as a - [headless Service :octicons-link-external-16:](https://kubernetes.io/docs/concepts/services-networking/service/#headless-services) - (e.g. to use on the tenant network), add the following [annotation](annotations.md) - in the Custom Resource metadata section of the `deploy/cr.yaml`: - - ```yaml - apiVersion: pxc.percona.com/v1 - kind: PerconaXtraDBCluster - metadata: - name: cluster1 - annotations: - percona.com/headless-service: true - ... - ``` +You may want to configure the ProxySQL service as a [headless Service :octicons-link-external-16:](https://kubernetes.io/docs/concepts/services-networking/service/#headless-services). For example, if you have applications that need direct DNS access to individual ProxySQL pods, such as when running in a multi-tenant setup or when handling advanced networking scenarios. - This annotation works only at service creation time and can't be added later. 
+To enable a headless ProxySQL service, add the `percona.com/headless-service: true` [annotation](annotations.md) in the Custom Resource metadata section of the `deploy/cr.yaml` file. Note that this annotation takes effect only at service creation time, so you need to set it when first creating the cluster.
 
-When a cluster with ProxySQL is upgraded, the following steps
-take place. First, reader members are upgraded one by one: the Operator waits
-until the upgraded member shows up in ProxySQL with online status, and then
-proceeds to upgrade the next member. When the upgrade is finished for all
-the readers, then the writer Percona XtraDB Cluster member is finally upgraded.
+```yaml
+apiVersion: pxc.percona.com/v1
+kind: PerconaXtraDBCluster
+metadata:
+  name: cluster1
+  annotations:
+    percona.com/headless-service: true
+  ...
+```
 
-!!! note
+### Upgrade behavior
 
-    when both ProxySQL and Percona XtraDB Cluster are upgraded, they are
-    upgraded in parallel.
+During cluster upgrades with ProxySQL, the Operator upgrades reader members one by one, waiting for each to show as online in ProxySQL before proceeding. After all readers are upgraded, the writer member is upgraded last. When both ProxySQL and Percona XtraDB Cluster are upgraded, they are upgraded in parallel.
 
 ## Passing custom configuration options to ProxySQL
 
-You can pass custom configuration to ProxySQL
-
-* edit the `deploy/cr.yaml` file,
-
-* use a ConfigMap,
-
-* use a Secret object.
+You can pass custom configuration to ProxySQL in these ways:
 
-!!! note
+* by editing the `deploy/cr.yaml` file,
+* by using a ConfigMap,
+* by using a Secret object.
 
-    If you specify a custom ProxySQL configuration in this way, ProxySQL
-    will try to merge the passed parameters with the previously set
-    configuration parameters, if any. If ProxySQL fails to merge some option,
-    you will see a warning in its log.
+ProxySQL attempts to merge custom configuration with existing settings. If merging fails for any option, ProxySQL logs a warning.
 
 ### Edit the `deploy/cr.yaml` file
 
-You can add options from the [proxysql.cnf :octicons-link-external-16:](https://proxysql.com/documentation/configuring-proxysql/) configuration file by editing the `proxysql.configuration` key in the `deploy/cr.yaml` file.
+Add options from the [proxysql.cnf :octicons-link-external-16:](https://proxysql.com/documentation/configuring-proxysql/) configuration file by editing the `proxysql.configuration` key in `deploy/cr.yaml`.
+
 Here is an example:
 
 ```yaml
@@ -137,201 +112,221 @@ proxysql:
 
 ### Use a ConfigMap
 
-You can use a configmap and the cluster restart to reset configuration
-options. A configmap allows Kubernetes to pass or update configuration
-data inside a containerized application.
-
-Use the `kubectl` command to create the configmap from external
-resources, for more information see [Configure a Pod to use a
-ConfigMap :octicons-link-external-16:](https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#create-a-configmap).
+A configmap allows Kubernetes to pass or update configuration
+data inside a containerized application. When you apply a ConfigMap, the cluster restarts.
 
-For example, you define a `proxysql.cnf` configuration file with the following
-setting:
+See [Configure a Pod to use a
+ConfigMap :octicons-link-external-16:](https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#create-a-configmap) for information on how to create a ConfigMap.
---8<-- "proxysql-config.txt" -You can create a configmap from the `proxysql.cnf` file with the -`kubectl create configmap` command. +Here's the example configuration. -You should use the combination of the cluster name with the `-proxysql` -suffix as the naming convention for the configmap. To find the cluster -name, you can use the following command: +1. Create a `proxysql.cnf` configuration file: -``` {.bash data-prompt="$" } -$ kubectl get pxc -``` + --8<-- "proxysql-config.txt" -The syntax for `kubectl create configmap` command is: +2. Find your cluster name: -```default -$ kubectl create configmap -proxysql -``` + ``` {.bash data-prompt="$" } + $ kubectl get pxc + ``` -The following example defines `cluster1-proxysql` as the configmap name and -the `proxysql.cnf` file as the data source: +3. Create the ConfigMap using the cluster name with the `-proxysql` suffix: -``` {.bash data-prompt="$" } -$ kubectl create configmap cluster1-proxysql --from-file=proxysql.cnf -``` + ``` {.bash data-prompt="$" } + $ kubectl create configmap cluster1-proxysql --from-file=proxysql.cnf + ``` -To view the created configmap, use the following command: +4. Verify the ConfigMap: -``` {.bash data-prompt="$" } -$ kubectl describe configmaps cluster1-proxysql -``` + ``` {.bash data-prompt="$" } + $ kubectl describe configmaps cluster1-proxysql + ``` -### Use a Secret Object +### Use a Secret object -The Operator can also store configuration options in [Kubernetes Secrets :octicons-link-external-16:](https://kubernetes.io/docs/concepts/configuration/secret/). -This can be useful if you need additional protection for some sensitive data. +Store configuration options in [Kubernetes Secrets :octicons-link-external-16:](https://kubernetes.io/docs/concepts/configuration/secret/) for additional protection of sensitive data. -You should create a Secret object with a specific name, composed of your cluster +The Secret name must be composed of your cluster name and the `proxysql` suffix. -!!! note - - To find the cluster name, you can use the following command: +1. Find your cluster name: ``` {.bash data-prompt="$" } $ kubectl get pxc ``` -Configuration options should be put inside a specific key inside of the `data` -section. The name of this key is `proxysql.cnf` for ProxySQL Pods. +2. Create a `proxysql.cnf` configuration file with your options: + + --8<-- "proxysql-config.txt" -Actual options should be encoded with [Base64 :octicons-link-external-16:](https://en.wikipedia.org/wiki/Base64). +3. Encode the configuration file with [Base64 :octicons-link-external-16:](https://en.wikipedia.org/wiki/Base64): -For example, let’s define a `proxysql.cnf` configuration file and put there -options we used in the previous example: + === "in Linux" ---8<-- "proxysql-config.txt" + ``` {.bash data-prompt="$" } + $ cat proxysql.cnf | base64 --wrap=0 + ``` + === "in macOS" -You can get a Base64 encoded string from your options via the command line as -follows: + ``` {.bash data-prompt="$" } + $ cat proxysql.cnf | base64 + ``` -=== "in Linux" +4. Create a Secret object with a name composed of your cluster name and the `proxysql` suffix. Put the Base64-encoded configuration in the `data` section under the `proxysql.cnf` key. 
Example `deploy/my-proxysql-secret.yaml`: + + ```yaml + apiVersion: v1 + kind: Secret + metadata: + name: cluster1-proxysql + data: + proxysql.cnf: "ZGF0YWRpcj0iL3Zhci9saWIvcHJveHlzcWwiCgphZG1pbl92YXJpYWJsZXMgPQp7CiBhZG1pbl9j\ + cmVkZW50aWFscz0icHJveHlhZG1pbjphZG1pbl9wYXNzd29yZCIKIG15c3FsX2lmYWNlcz0iMC4w\ + LjAuMDo2MDMyIgogcmVmcmVzaF9pbnRlcnZhbD0yMDAwCgogY2x1c3Rlcl91c2VybmFtZT0icHJv\ + eHlhZG1pbiIKIGNsdXN0ZXJfcGFzc3dvcmQ9ImFkbWluX3Bhc3N3b3JkIgogY2x1c3Rlcl9jaGVj\ + a19pbnRlcnZhbF9tcz0yMDAKIGNsdXN0ZXJfY2hlY2tfc3RhdHVzX2ZyZXF1ZW5jeT0xMDAKIGNs\ + dXN0ZXJfbXlzcWxfcXVlcnlfcnVsZXNfc2F2ZV90b19kaXNrPXRydWUKIGNsdXN0ZXJfbXlzcWxf\ + c2VydmVyc19zYXZlX3RvX2Rpc2s9dHJ1ZQogY2x1c3Rlcl9teXNxbF91c2Vyc19zYXZlX3RvX2Rp\ + c2s9dHJ1ZQogY2x1c3Rlcl9wcm94eXNxbF9zZXJ2ZXJzX3NhdmVfdG9fZGlzaz10cnVlCiBjbHVz\ + dGVyX215c3FsX3F1ZXJ5X3J1bGVzX2RpZmZzX2JlZm9yZV9zeW5jPTEKIGNsdXN0ZXJfbXlzcWxf\ + c2VydmVyc19kaWZmc19iZWZvcmVfc3luYz0xCiBjbHVzdGVyX215c3FsX3VzZXJzX2RpZmZzX2Jl\ + Zm9yZV9zeW5jPTEKIGNsdXN0ZXJfcHJveHlzcWxfc2VydmVyc19kaWZmc19iZWZvcmVfc3luYz0x\ + Cn0KCm15c3FsX3ZhcmlhYmxlcz0KewogbW9uaXRvcl9wYXNzd29yZD0ibW9uaXRvciIKIG1vbml0\ + b3JfZ2FsZXJhX2hlYWx0aGNoZWNrX2ludGVydmFsPTEwMDAKIHRocmVhZHM9MgogbWF4X2Nvbm5l\ + Y3Rpb25zPTIwNDgKIGRlZmF1bHRfcXVlcnlfZGVsYXk9MAogZGVmYXVsdF9xdWVyeV90aW1lb3V0\ + PTEwMDAwCiBwb2xsX3RpbWVvdXQ9MjAwMAogaW50ZXJmYWNlcz0iMC4wLjAuMDozMzA2IgogZGVm\ + YXVsdF9zY2hlbWE9ImluZm9ybWF0aW9uX3NjaGVtYSIKIHN0YWNrc2l6ZT0xMDQ4NTc2CiBjb25u\ + ZWN0X3RpbWVvdXRfc2VydmVyPTEwMDAwCiBtb25pdG9yX2hpc3Rvcnk9NjAwMDAKIG1vbml0b3Jf\ + Y29ubmVjdF9pbnRlcnZhbD0yMDAwMAogbW9uaXRvcl9waW5nX2ludGVydmFsPTEwMDAwCiBwaW5n\ + X3RpbWVvdXRfc2VydmVyPTIwMAogY29tbWFuZHNfc3RhdHM9dHJ1ZQogc2Vzc2lvbnNfc29ydD10\ + cnVlCiBoYXZlX3NzbD10cnVlCiBzc2xfcDJzX2NhPSIvZXRjL3Byb3h5c3FsL3NzbC1pbnRlcm5h\ + bC9jYS5jcnQiCiBzc2xfcDJzX2NlcnQ9Ii9ldGMvcHJveHlzcWwvc3NsLWludGVybmFsL3Rscy5j\ + cnQiCiBzc2xfcDJzX2tleT0iL2V0Yy9wcm94eXNxbC9zc2wtaW50ZXJuYWwvdGxzLmtleSIKIHNz\ + bF9wMnNfY2lwaGVyPSJFQ0RIRS1SU0EtQUVTMTI4LUdDTS1TSEEyNTYiCn0K" + ``` + +5. Apply the Secret: ``` {.bash data-prompt="$" } - $ cat proxysql.cnf | base64 --wrap=0 + $ kubectl create -f deploy/my-proxysql-secret.yaml ``` -=== "in macOS" +6. Restart Percona XtraDB Cluster to apply the configuration changes. + +## Accessing the ProxySQL Admin Interface + +Use the [ProxySQL admin interface :octicons-link-external-16:](https://www.percona.com/blog/2017/06/07/proxysql-admin-interface-not-typical-mysql-server/) to configure ProxySQL settings by connecting via the MySQL protocol. + +1. Find the ProxySQL Pod name: ``` {.bash data-prompt="$" } - $ cat proxysql.cnf | base64 + $ kubectl get pods ``` -!!! note + ??? example "Sample output" + + ```{.text .no-copy} + NAME READY STATUS + RESTARTS AGE + cluster1-pxc-node-0 1/1 Running + 0 5m + cluster1-pxc-node-1 1/1 Running + 0 4m + cluster1-pxc-node-2 1/1 Running + 0 2m + cluster1-proxysql-0 1/1 Running + 0 5m + percona-xtradb-cluster-operator-dc67778fd-qtspz 1/1 Running + 0 6m + ``` + +2. Get the admin password: - Similarly, you can read the list of options from a Base64 encoded - string: + ``` {.bash data-prompt="$" } + $ kubectl get secrets $(kubectl get pxc -o jsonpath='{.items[].spec.secretsName}') -o template='{{'{{'}} .data.proxyadmin | base64decode {{'}}'}}' + ``` + +3. Connect to ProxySQL. 
Replace `cluster1-proxysql-0` with your Pod name and `admin_password` with the retrieved password: ``` {.bash data-prompt="$" } - $ echo "ZGF0YWRpcj0iL3Zhci9saWIvcHJveHlzcWwiCgphZG1pbl92YXJpYWJsZXMgPQp7CiBhZG1pbl9j\ - cmVkZW50aWFscz0icHJveHlhZG1pbjphZG1pbl9wYXNzd29yZCIKIG15c3FsX2lmYWNlcz0iMC4w\ - LjAuMDo2MDMyIgogcmVmcmVzaF9pbnRlcnZhbD0yMDAwCgogY2x1c3Rlcl91c2VybmFtZT0icHJv\ - eHlhZG1pbiIKIGNsdXN0ZXJfcGFzc3dvcmQ9ImFkbWluX3Bhc3N3b3JkIgogY2x1c3Rlcl9jaGVj\ - a19pbnRlcnZhbF9tcz0yMDAKIGNsdXN0ZXJfY2hlY2tfc3RhdHVzX2ZyZXF1ZW5jeT0xMDAKIGNs\ - dXN0ZXJfbXlzcWxfcXVlcnlfcnVsZXNfc2F2ZV90b19kaXNrPXRydWUKIGNsdXN0ZXJfbXlzcWxf\ - c2VydmVyc19zYXZlX3RvX2Rpc2s9dHJ1ZQogY2x1c3Rlcl9teXNxbF91c2Vyc19zYXZlX3RvX2Rp\ - c2s9dHJ1ZQogY2x1c3Rlcl9wcm94eXNxbF9zZXJ2ZXJzX3NhdmVfdG9fZGlzaz10cnVlCiBjbHVz\ - dGVyX215c3FsX3F1ZXJ5X3J1bGVzX2RpZmZzX2JlZm9yZV9zeW5jPTEKIGNsdXN0ZXJfbXlzcWxf\ - c2VydmVyc19kaWZmc19iZWZvcmVfc3luYz0xCiBjbHVzdGVyX215c3FsX3VzZXJzX2RpZmZzX2Jl\ - Zm9yZV9zeW5jPTEKIGNsdXN0ZXJfcHJveHlzcWxfc2VydmVyc19kaWZmc19iZWZvcmVfc3luYz0x\ - Cn0KCm15c3FsX3ZhcmlhYmxlcz0KewogbW9uaXRvcl9wYXNzd29yZD0ibW9uaXRvciIKIG1vbml0\ - b3JfZ2FsZXJhX2hlYWx0aGNoZWNrX2ludGVydmFsPTEwMDAKIHRocmVhZHM9MgogbWF4X2Nvbm5l\ - Y3Rpb25zPTIwNDgKIGRlZmF1bHRfcXVlcnlfZGVsYXk9MAogZGVmYXVsdF9xdWVyeV90aW1lb3V0\ - PTEwMDAwCiBwb2xsX3RpbWVvdXQ9MjAwMAogaW50ZXJmYWNlcz0iMC4wLjAuMDozMzA2IgogZGVm\ - YXVsdF9zY2hlbWE9ImluZm9ybWF0aW9uX3NjaGVtYSIKIHN0YWNrc2l6ZT0xMDQ4NTc2CiBjb25u\ - ZWN0X3RpbWVvdXRfc2VydmVyPTEwMDAwCiBtb25pdG9yX2hpc3Rvcnk9NjAwMDAKIG1vbml0b3Jf\ - Y29ubmVjdF9pbnRlcnZhbD0yMDAwMAogbW9uaXRvcl9waW5nX2ludGVydmFsPTEwMDAwCiBwaW5n\ - X3RpbWVvdXRfc2VydmVyPTIwMAogY29tbWFuZHNfc3RhdHM9dHJ1ZQogc2Vzc2lvbnNfc29ydD10\ - cnVlCiBoYXZlX3NzbD10cnVlCiBzc2xfcDJzX2NhPSIvZXRjL3Byb3h5c3FsL3NzbC1pbnRlcm5h\ - bC9jYS5jcnQiCiBzc2xfcDJzX2NlcnQ9Ii9ldGMvcHJveHlzcWwvc3NsLWludGVybmFsL3Rscy5j\ - cnQiCiBzc2xfcDJzX2tleT0iL2V0Yy9wcm94eXNxbC9zc2wtaW50ZXJuYWwvdGxzLmtleSIKIHNz\ - bF9wMnNfY2lwaGVyPSJFQ0RIRS1SU0EtQUVTMTI4LUdDTS1TSEEyNTYiCn0K" | base64 --decode + $ kubectl exec -it cluster1-proxysql-0 -- mysql -h127.0.0.1 -P6032 -uproxyadmin -padmin_password ``` -Finally, use a yaml file to create the Secret object. 
For example, you can -create a `deploy/my-proxysql-secret.yaml` file with the following contents: +## ProxySQL scheduler -```yaml -apiVersion: v1 -kind: Secret -metadata: - name: cluster1-proxysql -data: - proxysql.cnf: "ZGF0YWRpcj0iL3Zhci9saWIvcHJveHlzcWwiCgphZG1pbl92YXJpYWJsZXMgPQp7CiBhZG1pbl9j\ - cmVkZW50aWFscz0icHJveHlhZG1pbjphZG1pbl9wYXNzd29yZCIKIG15c3FsX2lmYWNlcz0iMC4w\ - LjAuMDo2MDMyIgogcmVmcmVzaF9pbnRlcnZhbD0yMDAwCgogY2x1c3Rlcl91c2VybmFtZT0icHJv\ - eHlhZG1pbiIKIGNsdXN0ZXJfcGFzc3dvcmQ9ImFkbWluX3Bhc3N3b3JkIgogY2x1c3Rlcl9jaGVj\ - a19pbnRlcnZhbF9tcz0yMDAKIGNsdXN0ZXJfY2hlY2tfc3RhdHVzX2ZyZXF1ZW5jeT0xMDAKIGNs\ - dXN0ZXJfbXlzcWxfcXVlcnlfcnVsZXNfc2F2ZV90b19kaXNrPXRydWUKIGNsdXN0ZXJfbXlzcWxf\ - c2VydmVyc19zYXZlX3RvX2Rpc2s9dHJ1ZQogY2x1c3Rlcl9teXNxbF91c2Vyc19zYXZlX3RvX2Rp\ - c2s9dHJ1ZQogY2x1c3Rlcl9wcm94eXNxbF9zZXJ2ZXJzX3NhdmVfdG9fZGlzaz10cnVlCiBjbHVz\ - dGVyX215c3FsX3F1ZXJ5X3J1bGVzX2RpZmZzX2JlZm9yZV9zeW5jPTEKIGNsdXN0ZXJfbXlzcWxf\ - c2VydmVyc19kaWZmc19iZWZvcmVfc3luYz0xCiBjbHVzdGVyX215c3FsX3VzZXJzX2RpZmZzX2Jl\ - Zm9yZV9zeW5jPTEKIGNsdXN0ZXJfcHJveHlzcWxfc2VydmVyc19kaWZmc19iZWZvcmVfc3luYz0x\ - Cn0KCm15c3FsX3ZhcmlhYmxlcz0KewogbW9uaXRvcl9wYXNzd29yZD0ibW9uaXRvciIKIG1vbml0\ - b3JfZ2FsZXJhX2hlYWx0aGNoZWNrX2ludGVydmFsPTEwMDAKIHRocmVhZHM9MgogbWF4X2Nvbm5l\ - Y3Rpb25zPTIwNDgKIGRlZmF1bHRfcXVlcnlfZGVsYXk9MAogZGVmYXVsdF9xdWVyeV90aW1lb3V0\ - PTEwMDAwCiBwb2xsX3RpbWVvdXQ9MjAwMAogaW50ZXJmYWNlcz0iMC4wLjAuMDozMzA2IgogZGVm\ - YXVsdF9zY2hlbWE9ImluZm9ybWF0aW9uX3NjaGVtYSIKIHN0YWNrc2l6ZT0xMDQ4NTc2CiBjb25u\ - ZWN0X3RpbWVvdXRfc2VydmVyPTEwMDAwCiBtb25pdG9yX2hpc3Rvcnk9NjAwMDAKIG1vbml0b3Jf\ - Y29ubmVjdF9pbnRlcnZhbD0yMDAwMAogbW9uaXRvcl9waW5nX2ludGVydmFsPTEwMDAwCiBwaW5n\ - X3RpbWVvdXRfc2VydmVyPTIwMAogY29tbWFuZHNfc3RhdHM9dHJ1ZQogc2Vzc2lvbnNfc29ydD10\ - cnVlCiBoYXZlX3NzbD10cnVlCiBzc2xfcDJzX2NhPSIvZXRjL3Byb3h5c3FsL3NzbC1pbnRlcm5h\ - bC9jYS5jcnQiCiBzc2xfcDJzX2NlcnQ9Ii9ldGMvcHJveHlzcWwvc3NsLWludGVybmFsL3Rscy5j\ - cnQiCiBzc2xfcDJzX2tleT0iL2V0Yy9wcm94eXNxbC9zc2wtaW50ZXJuYWwvdGxzLmtleSIKIHNz\ - bF9wMnNfY2lwaGVyPSJFQ0RIRS1SU0EtQUVTMTI4LUdDTS1TSEEyNTYiCn0K" -``` +By default, the Operator uses the internal ProxySQL scheduler for load balancing. In some cases, this scheduler may not fully recognize the cluster topology, directing both read and write traffic to the primary Pod. This can reduce scalability and efficiency and may increase the risk of overload and downtime. -When ready, apply it with the following command: +To address this limitation, the Operator is integrated with the [`pxc_scheduler_handler` :octicons-link-external-16:](https://docs.percona.com/proxysql/psh-overview.html) tool starting with version 1.19.0. This external ProxySQL scheduler ensures the read/write splitting is distributed as follows: -``` {.bash data-prompt="$" } -$ kubectl create -f deploy/my-proxysql-secret.yaml -``` +* **SELECT queries** (without `FOR UPDATE`) are sent evenly to all PXC nodes or to all nodes except the primary, depending on your configuration +* **Non-SELECT queries** and **SELECT FOR UPDATE** queries are sent to the primary node +* The scheduler automatically manages the primary node, ensuring only one primary exists at a time -!!! note +As a result, you achieve: - Do not forget to restart Percona XtraDB Cluster to ensure the - cluster has updated the configuration. 
+* Better performance through faster query processing and increased throughput +* Higher reliability by preventing single-node bottlenecks and points of failure +* Healthier cluster through early detection of replication lag and node issues +* Efficient resource utilization +* Improved user experience with consistent, predictable response times -## Accessing the ProxySQL Admin Interface +### Enable the scheduler -You can use [ProxySQL admin interface :octicons-link-external-16:](https://www.percona.com/blog/2017/06/07/proxysql-admin-interface-not-typical-mysql-server/) to configure its settings. +The scheduler is disabled by default to maintain backward compatibility. You can enable it by setting `proxysql.scheduler.enabled=true` in your Custom Resource. -Configuring ProxySQL in this way means connecting to it using the MySQL -protocol, and two things are needed to do it: +1. Edit the `deploy/cr.yaml` file and add the scheduler configuration: -* the ProxySQL Pod name + ```yaml + proxysql: + enabled: true + size: 3 + image: percona/percona-xtradb-cluster-operator:{{ release }}-proxysql + scheduler: + enabled: true + ``` + +2. Apply the configuration: -* the ProxySQL admin password + ```{.bash data-prompt="$" } + $ kubectl apply -f deploy/cr.yaml -n + ``` -You can find out ProxySQL Pod name with the `kubectl get pods` command, -which will have the following output: +When the scheduler is enabled, you should see: -```default -$ kubectl get pods -NAME READY STATUS RESTARTS AGE -cluster1-pxc-node-0 1/1 Running 0 5m -cluster1-pxc-node-1 1/1 Running 0 4m -cluster1-pxc-node-2 1/1 Running 0 2m -cluster1-proxysql-0 1/1 Running 0 5m -percona-xtradb-cluster-operator-dc67778fd-qtspz 1/1 Running 0 6m -``` +* **Hostgroup 10** (readers): All PXC nodes with equal or weighted distribution +* **Hostgroup 11** (writer): Only the current writer node (typically `pod-0`) with a high weight (1000000) -The next command will print you the needed admin password: +You can also test read load balancing by running multiple `SELECT` queries and checking which node they hit: -```default -$ kubectl get secrets $(kubectl get pxc -o jsonpath='{.items[].spec.secretsName}') -o template='{{'{{'}} .data.proxyadmin | base64decode {{'}}'}}' +```{.bash data-prompt="$" } +$ for i in $(seq 100); do + kubectl exec -i cluster1-pxc-0 -c pxc -- mysql -uroot -proot_password \ + --host cluster1-proxysql -Ne "SELECT VARIABLE_VALUE FROM \ + performance_schema.global_variables WHERE VARIABLE_NAME = 'wsrep_node_name' LIMIT 1" \ + 2>/dev/null + done | sort -n | uniq -c ``` -When both Pod name and admin password are known, connect to the ProxySQL as -follows, substituting `cluster1-proxysql-0` with the actual Pod name and -`admin_password` with the actual password: +You should see queries distributed across multiple nodes instead of all going to `cluster1-pxc-0`. + +## Scheduler behavior + +After you enabled the scheduler, it works as follows: + +* **Writer node**: The scheduler sets `pod-0` (the first PXC Pod) as the writer node by default. The scheduler ensures only one writer exists at any time. As long as `pod-0` is available, it remains the writer. + +* **Failover**: If `pod-0` becomes unavailable, the scheduler automatically promotes another Pod to be the writer. The scheduler uses weighted hostgroups to ensure all ProxySQL instances promote the same Pod during failover, preventing split-brain scenarios. + +* **ProxySQL clustering**: When the scheduler is enabled, ProxySQL clustering is automatically disabled. 
This is because the scheduler and ProxySQL clustering do not work well together. The `proxysql-monit` sidecar container is removed from ProxySQL Pods, and each ProxySQL instance manages its own `mysql_servers` configuration independently. + +!!! warning + + When the scheduler is enabled, ProxySQL clustering is disabled. Each ProxySQL instance manages its own server configuration independently. This ensures proper read/write splitting but means ProxySQL instances do not share configuration. + +By default, ProxySQL scheduler will distribute read requests evenly across all your cluster nodes. You can exclude the primary from processing reads and reserve it only for accepting write requests by setting the `writerIsAlsoReader` option to `false`. + +You can additionally fine-tune the scheduler's behavior for your workload and deployment scenario. See the [Custom resource](operator.md#proxysqlschedulerenabled) reference for a complete list of available options. -```default -$ kubectl exec -it cluster1-proxysql-0 -- mysql -h127.0.0.1 -P6032 -uproxyadmin -padmin_password -``` From 6c549f1d3c53b6d496a3719ecab81393e7005d62 Mon Sep 17 00:00:00 2001 From: Anastasia Alexandrova Date: Thu, 27 Nov 2025 14:56:42 +0100 Subject: [PATCH 2/5] Update docs/proxysql-conf.md Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com> --- docs/proxysql-conf.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/proxysql-conf.md b/docs/proxysql-conf.md index 0c7bb45f..30b97d58 100644 --- a/docs/proxysql-conf.md +++ b/docs/proxysql-conf.md @@ -314,7 +314,7 @@ You should see queries distributed across multiple nodes instead of all going to ## Scheduler behavior -After you enabled the scheduler, it works as follows: +After you enable the scheduler, it works as follows: * **Writer node**: The scheduler sets `pod-0` (the first PXC Pod) as the writer node by default. The scheduler ensures only one writer exists at any time. As long as `pod-0` is available, it remains the writer. From d7b63314ac63bdad1e8d8683151b7052360473c9 Mon Sep 17 00:00:00 2001 From: Anastasia Alexandrova Date: Thu, 27 Nov 2025 14:56:48 +0100 Subject: [PATCH 3/5] Update docs/proxysql-conf.md Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com> --- docs/proxysql-conf.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/proxysql-conf.md b/docs/proxysql-conf.md index 30b97d58..1545efb4 100644 --- a/docs/proxysql-conf.md +++ b/docs/proxysql-conf.md @@ -326,7 +326,7 @@ After you enable the scheduler, it works as follows: When the scheduler is enabled, ProxySQL clustering is disabled. Each ProxySQL instance manages its own server configuration independently. This ensures proper read/write splitting but means ProxySQL instances do not share configuration. -By default, ProxySQL scheduler will distribute read requests evenly across all your cluster nodes. You can exclude the primary from processing reads and reserve it only for accepting write requests by setting the `writerIsAlsoReader` option to `false`. +By default, the ProxySQL scheduler distributes read requests evenly across all your cluster nodes. You can exclude the primary from processing reads and reserve it only for accepting write requests by setting the `writerIsAlsoReader` option to `false`. You can additionally fine-tune the scheduler's behavior for your workload and deployment scenario. See the [Custom resource](operator.md#proxysqlschedulerenabled) reference for a complete list of available options. 
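For reference, a `deploy/cr.yaml` fragment that sets the scheduler options documented in this patch might look like the following sketch. The values shown are the example values from the Custom Resource reference above, not tuned recommendations; adjust them for your workload:

```yaml
proxysql:
  enabled: true
  scheduler:
    enabled: true
    writerIsAlsoReader: true
    checkTimeoutMilliseconds: 2000
    successThreshold: 1
    failureThreshold: 3
    pingTimeoutMilliseconds: 1000
    nodeCheckIntervalMilliseconds: 2000
    maxConnections: 1000
```

With the scheduler running, you can inspect the resulting layout from the ProxySQL admin interface (port `6032`, using the connection command shown earlier), for example with `SELECT hostgroup_id, hostname, status, weight FROM runtime_mysql_servers;`: readers should appear in hostgroup `10` and the single writer in hostgroup `11`.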
From 84ccf9e07aa295ccdd2ddc1964e3c2ddccf99be6 Mon Sep 17 00:00:00 2001 From: Anastasia Alexadrova Date: Fri, 28 Nov 2025 12:39:40 +0100 Subject: [PATCH 4/5] Fixed CR option for the annotation, updated conf examples, removed migration warning --- docs/annotations.md | 1 + docs/haproxy-conf.md | 43 ++++++++++++++++++++++++------------------- docs/proxysql-conf.md | 20 ++++++++------------ 3 files changed, 33 insertions(+), 31 deletions(-) diff --git a/docs/annotations.md b/docs/annotations.md index 5298b394..2215a516 100644 --- a/docs/annotations.md +++ b/docs/annotations.md @@ -83,6 +83,7 @@ Use **Annotations** when: |`percona.com/issue-vault-token: "true"`| | `service.beta.kubernetes.io/aws-load-balancer-backend-protocol` | Services | Specifies the protocol for AWS load balancers | http, http-test | | `service.beta.kubernetes.io/aws-load-balancer-backend` | Services | Specifies the backend type for AWS load balancers | test-type | +| `percona.com/headless-service` | Services | Exposes ProxySQL or HAProxy as a headless service | true | ## Setting labels and annotations in the Custom Resource diff --git a/docs/haproxy-conf.md b/docs/haproxy-conf.md index fcc73fe8..e12f2b34 100644 --- a/docs/haproxy-conf.md +++ b/docs/haproxy-conf.md @@ -22,8 +22,7 @@ $ kubectl patch pxc cluster1 --type=merge --patch '{ !!! warning Switching from ProxySQL to HAProxy will cause Percona XtraDB Cluster Pods - restart. Switching from HAProxy to ProxySQL is not possible, and if you need - ProxySQL, this should be configured at cluster creation time. + restart. ## HAProxy services @@ -46,7 +45,7 @@ The `cluster1-haproxy` service listens on the following ports: The [haproxy.enabled](operator.md#haproxyexposeprimaryenabled) Custom Resource option enables or disables `cluster1-haproxy` service. -By default, the `cluster1-haproxy` service points to the number zero Percona XtraDB Cluster member (`cluster1-pxc-0`), when this member is available. If a zero member is not available, members are selected in descending order of their +By default, the `cluster1-haproxy` service points to the first Percona XtraDB Cluster member (`cluster1-pxc-0`), when this member is available. If it is not available, members are selected in descending order of their numbers: `cluster1-pxc-2`, then `cluster1-pxc-1`. This service can be used for both read and write load, or it can also be used just for write load (single writer mode) in setups with split write and read loads. @@ -64,26 +63,32 @@ the Round Robin load balancing algorithm. **Don't use it for write requests**. The [haproxy.exposeReplicas.enabled](operator.md#haproxyexposereplicasenabled) -Custom Resource option enables or disables `cluster1-haproxy-replicas` service (on by default). +Custom Resource option enables or disables `cluster1-haproxy-replicas` service (on by default). -!!! note +### Expose HAProxy as a headless service + +You may want to configure the HAProxy service as a [headless Service :octicons-link-external-16:](https://kubernetes.io/docs/concepts/services-networking/service/#headless-services). For example, if you have applications that need direct DNS access to individual HAProxy pods, such as when running in a multi-tenant setup or when handling advanced networking scenarios. 
+ +To enable HAProxy as a headless service, add the `percona.com/headless-service: true` [annotation](annotations.md) to the following options in the Custom Resource: - If you need to configure `cluster1-haproxy` and - `cluster1-haproxy-replicas` as a [headless Service :octicons-link-external-16:](https://kubernetes.io/docs/concepts/services-networking/service/#headless-services) - (e.g. to use on the tenant network), add the following [annotation](annotations.md) - in the Custom Resource metadata section of the `deploy/cr.yaml`: - - ```yaml - apiVersion: pxc.percona.com/v1 - kind: PerconaXtraDBCluster - metadata: - name: cluster1 - annotations: - percona.com/headless-service: true - ... +* `haproxy.exposePrimary.annotations` key to expose the primary HAProxy Pod +* `haproxy.exposeReplicas.annotations` key to expose the HAProxy replica Pods + + ```yaml + spec: + haproxy: + exposePrimary: + enabled: true + annotations: percona.com/headless-service: true + .... + exposeReplicas: + enabled: true + annotations: percona.com/headless-service: true ``` - This annotation works only at service creation time and can't be added later. +This annotation works only at service creation time and can't be added later. + +### Upgrade behavior When the cluster with HAProxy is upgraded, the following steps take place. First, reader members are upgraded one by one: the Operator waits diff --git a/docs/proxysql-conf.md b/docs/proxysql-conf.md index 1545efb4..0b4b778d 100644 --- a/docs/proxysql-conf.md +++ b/docs/proxysql-conf.md @@ -2,15 +2,11 @@ You can use either [HAProxy :octicons-link-external-16:](https://haproxy.org) or [ProxySQL :octicons-link-external-16:](https://proxysql.com/) for load balancing and proxy services. Control which one to use via the `haproxy.enabled` and `proxysql.enabled` options in the `deploy/cr.yaml` configuration file. -!!! warning - - You can enable ProxySQL only during cluster creation. For existing clusters, you can enable only HAProxy. If HAProxy is already enabled, you cannot switch to ProxySQL later. - ## `cluster1-proxysql` service The `cluster1-proxysql` service listens on the following ports: -* `3306` is the default MySQL port. It is used by the mysql client, MySQL Connectors, and utilities such as mysqldump and mysqlpump +* `3306` is the default MySQL port. It is used by the mysql client, MySQL Connectors, and utilities such as `mysqldump` and `mysqlpump` * `33062` is the port to connect to the MySQL Administrative Interface * `6070` is the port to connect to the built-in Prometheus exporter to gather ProxySQL statistics and manage the ProxySQL observability stack @@ -20,15 +16,15 @@ The `cluster1-proxysql` service uses the first Percona XtraDB Cluster member (`c You may want to configure the ProxySQL service as a [headless Service :octicons-link-external-16:](https://kubernetes.io/docs/concepts/services-networking/service/#headless-services). For example, if you have applications that need direct DNS access to individual ProxySQL pods, such as when running in a multi-tenant setup or when handling advanced networking scenarios. -To enable a headless ProxySQL service, add the `percona.com/headless-service: true` [annotation](annotations.md) in the Custom Resource metadata section of the `deploy/cr.yaml` file. Note that this annotation takes effect only at service creation time, so you need to set it when first creating the cluster. 
+To enable a headless ProxySQL service, add the `percona.com/headless-service: true` [annotation](annotations.md) in the `proxysql.expose.annotations` key of the `deploy/cr.yaml` file. Note that this annotation takes effect only at service creation time, so you need to set it when first creating the cluster.
 
 ```yaml
-apiVersion: pxc.percona.com/v1
-kind: PerconaXtraDBCluster
-metadata:
-  name: cluster1
-  annotations:
-    percona.com/headless-service: true
+spec:
+  proxysql:
+    expose:
+      enabled: true
+      annotations:
+        percona.com/headless-service: true
   ...
 ```
 

From f1a0981bb5806dc62f3a8755595efd2cf2ac9dea Mon Sep 17 00:00:00 2001
From: Anastasia Alexadrova
Date: Mon, 1 Dec 2025 14:40:27 +0100
Subject: [PATCH 5/5] Fixed warning about restart to downtime because of proxy
 reconfiguration

---
 docs/haproxy-conf.md | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/docs/haproxy-conf.md b/docs/haproxy-conf.md
index e12f2b34..a300521e 100644
--- a/docs/haproxy-conf.md
+++ b/docs/haproxy-conf.md
@@ -21,8 +21,7 @@ $ kubectl patch pxc cluster1 --type=merge --patch '{
 
 !!! warning
 
-    Switching from ProxySQL to HAProxy will cause Percona XtraDB Cluster Pods
-    restart.
+    Switching from ProxySQL to HAProxy will cause downtime because the Operator needs to reconfigure the proxy Pods.
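This warning sits next to the `kubectl patch` command that switches a running cluster from ProxySQL to HAProxy. A sketch of such a patch, assuming the default cluster name `cluster1` and the `haproxy.enabled` and `proxysql.enabled` Custom Resource options described in `proxysql-conf.md`, could look like this:

``` {.bash data-prompt="$" }
# Enable HAProxy and disable ProxySQL on a running cluster; this triggers
# the proxy Pod reconfiguration (and downtime) that the warning refers to.
$ kubectl patch pxc cluster1 --type=merge --patch '{
    "spec": {
      "haproxy":  { "enabled": true },
      "proxysql": { "enabled": false }
    }}'
```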