@renecannao renecannao released this 09 Apr 14:20

v2.2.1 — InnoDB Cluster & CI Reliability Fixes

This release fixes critical issues with InnoDB Cluster deployment and improves CI reliability across all topologies.

InnoDB Cluster — Complete Rework

InnoDB Cluster deployment was fundamentally broken in v2.2.0. This release rewrites the initialization flow so it works correctly end-to-end with both MySQL 8.4 and 9.5.

  • Let MySQL Shell manage Group Replication from scratch — The previous approach pre-started GR via initialize_nodes and then tried to have mysqlsh adopt it, which caused infinite loops ("unmanaged replication group"), access denied errors, and errant GTID conflicts. Now init_cluster handles everything: reset GTIDs, configure all instances, create the cluster, and add replicas. (aa12a0c)

  • Fix mysqlsh installation (missing libexec/ directory) — mysqlsh requires libexec/mysqlsh/ alongside lib/mysqlsh/ to start. Without it: "libexec folder not found, shell installation likely invalid". (c62b470)

  • Remove invalid --interactive flag — dba configure-instance in CLI mode (--) does not support --interactive, causing "The following option is invalid: --interactive". (aa12a0c)

MySQL Router

  • Fix Router start hanging forever — RunCmd(start.sh) blocked indefinitely because the backgrounded mysqlrouter process inherited stdout/stderr pipes, preventing Go's cmd.Wait() from returning. Now launches mysqlrouter directly via exec.Command().Start(). (2cc282d)
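
The underlying mechanism is easy to reproduce in plain shell: a backgrounded child that inherits the captured stdout keeps the pipe's write end open, so the reader blocks until the child exits. A minimal sketch (the sleep durations are illustrative, not from dbdeployer):

```shell
#!/bin/sh
# A backgrounded child inheriting stdout holds the capture pipe open,
# so the $( ... ) below blocks until the child exits -- the same
# mechanism that kept Go's cmd.Wait() from returning.
start=$(date +%s)
out=$( (sleep 3 & echo started) )          # blocks ~3s: sleep holds the pipe
blocked=$(( $(date +%s) - start ))

# Fix: detach the child's stdio so the pipe closes immediately,
# analogous to launching mysqlrouter via exec.Command().Start().
start=$(date +%s)
out=$( (sleep 3 >/dev/null 2>&1 & echo started) )
instant=$(( $(date +%s) - start ))

echo "blocked=${blocked}s instant=${instant}s"
```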

  • Fix Router port extraction in CI — ls file && grep file captured both the file path and the port number into the variable, causing mysql to receive a multi-line value as the port. (1481128)
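
The capture bug reproduces with any file; the port value and temp path below are illustrative:

```shell
#!/bin/sh
# `ls file && grep file` inside $( ) captures BOTH commands' stdout,
# so the variable ends up with two lines: the path, then the port.
tmp=$(mktemp)
echo "port=6446" > "$tmp"

broken=$(ls "$tmp" && grep -o '[0-9][0-9]*' "$tmp")   # path + port (2 lines)
fixed=$(grep -o '[0-9][0-9]*' "$tmp")                 # just the port

echo "broken captured $(printf '%s\n' "$broken" | wc -l) lines"
echo "fixed: $fixed"
rm -f "$tmp"
```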

ProxySQL + InnoDB Cluster

  • Fix --with-proxysql for InnoDB Cluster topology — ProxySQL setup used the standard replication path (rsandbox_*/master/) but InnoDB Cluster uses ic_msb_*/node1/ as the primary. Added topology-aware path resolution. (417b262)
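
Topology-aware resolution can be sketched as a simple dispatch on topology name, using the directory layouts named above; primary_dir and the version string are hypothetical:

```shell
#!/bin/sh
# Sketch: pick the primary node's sandbox directory based on topology.
# Directory patterns (rsandbox_*/master, ic_msb_*/node1) are from the
# release notes; the helper name and version argument are illustrative.
primary_dir() {
  topology=$1
  version=$2
  case "$topology" in
    innodb-cluster) echo "ic_msb_${version}/node1" ;;   # GR primary
    *)              echo "rsandbox_${version}/master" ;; # classic replication
  esac
}

primary_dir innodb-cluster 8_4_0
primary_dir replication 8_4_0
```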

  • Fix ProxySQL GR monitor seeing all nodes as offline — The rsandbox monitor user lacked SELECT on performance_schema, so ProxySQL couldn't query replication_group_members to determine writer/reader roles. All servers ended up in the offline hostgroup (3), causing "Max connect timeout reached while reaching hostgroup 0". (4ed75c5)

Fan-in & Multi-Source Replication

  • Fix fan-in Unknown database error — In fan-in topology, node1 and node2 are independent masters. Creating the same database on both caused replication conflicts on the slave. Now uses separate databases per master (fanin_test vs fanin_test2). (a76d93a)

CI Reliability

  • Replace sleep + single-check with retry loops — All replication verification steps now retry up to 10 times with 2-second intervals instead of relying on a fixed sleep. (9304310)
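
The pattern looks roughly like this — check_replicated stands in for the real mysql verification query, and the interval is shortened from the CI's 2 seconds:

```shell
#!/bin/sh
# Retry loop sketch: poll up to 10 times instead of one fixed sleep
# followed by a single check.
attempts=0
check_replicated() {            # stand-in: succeeds from the 3rd call on
  attempts=$((attempts + 1))
  [ "$attempts" -ge 3 ]
}

i=0
until check_replicated; do
  i=$((i + 1))
  [ "$i" -ge 10 ] && { echo "replication never caught up"; exit 1; }
  sleep 0.2                     # the CI uses 2s intervals
done
echo "replicated after $attempts checks"
```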

  • Fix set -e killing retry loops — GitHub Actions uses implicit set -e. When mysql returns non-zero inside a retry loop (e.g., database doesn't exist yet), the entire step would exit. Added || true to all RESULT=$(...) assignments. (037e197)
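
Under set -e, a command-substitution assignment inherits the substituted command's exit status, so a single failed probe aborts the step; `|| true` absorbs it. A minimal reproduction (false stands in for the failing mysql call):

```shell
#!/bin/sh
set -e   # GitHub Actions applies this implicitly to shell steps

# Without `|| true`, RESULT=$(false) would exit the script here,
# because the assignment's status is the substituted command's status.
RESULT=$(false) || true
echo "step survived; RESULT='${RESULT}'"
```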

  • Fix grep -v Warning exiting under set -e — grep -v Warning returns exit code 1 when there are no Warning lines to filter, killing the CI step. Wrapped in { grep -v Warning || true; }. (2cc282d)
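
grep exits 1 whenever it selects no lines, so filtering an all-Warning stream fails even though nothing went wrong; the brace group absorbs the status:

```shell
#!/bin/sh
set -e
# Every input line is filtered out, so a bare `grep -v Warning` would
# exit 1 and kill the step under set -e; `|| true` rescues it.
out=$(printf 'Warning: using a password\n' | { grep -v Warning || true; })
echo "still running; out='${out}'"
```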

  • Remove MariaDB 11.4 from CI — MariaDB 11.4.9 has an authentication bug where slave nodes fail with Access denied for user 'msandbox'. Tracked in #82. MariaDB 10.11.9 continues to pass. (a76d93a)

Install

# Linux (amd64)
curl -L -o dbdeployer https://github.com/ProxySQL/dbdeployer/releases/download/v2.2.1/dbdeployer-2.2.1-linux-amd64
chmod +x dbdeployer
sudo mv dbdeployer /usr/local/bin/

# macOS (Apple Silicon)
curl -L -o dbdeployer https://github.com/ProxySQL/dbdeployer/releases/download/v2.2.1/dbdeployer-2.2.1-darwin-arm64
chmod +x dbdeployer
sudo mv dbdeployer /usr/local/bin/

Full Changelog: v2.2.0...v2.2.1