Add 'dbdeployer deploy proxysql' and --with-proxysql for replication #46

renecannao merged 2 commits into master from
Conversation
Pull request overview
Adds first-class ProxySQL deployment support to DBDeployer, including a standalone deploy proxysql command and an option to deploy a topology-aware ProxySQL sidecar with replication sandboxes.
Changes:

- Introduces `dbdeployer deploy proxysql` with flags for admin port, admin credentials, and skipping start.
- Adds `--with-proxysql` to `dbdeployer deploy replication` to deploy ProxySQL alongside a replication sandbox.
- Adds a `sandbox.DeployProxySQLForTopology()` helper to create, configure, and start a ProxySQL sandbox from an existing MySQL topology.
Reviewed changes
Copilot reviewed 3 out of 3 changed files in this pull request and generated 11 comments.
| File | Description |
|---|---|
| `sandbox/proxysql_topology.go` | New helper to deploy ProxySQL configured with master/slave hostgroups based on topology ports. |
| `cmd/replication.go` | Adds the `--with-proxysql` flag and post-deploy logic to deploy ProxySQL for the created replication sandbox. |
| `cmd/deploy_proxysql.go` | New `deploy proxysql` subcommand to deploy a standalone ProxySQL sandbox using the provider. |
```go
}

if _, err := p.FindBinary(""); err != nil {
	common.Exitf(1, "proxysql binary not found in PATH: %s", err)
```
The error from p.FindBinary already includes "proxysql binary not found in PATH". Wrapping it with the same phrase here results in a duplicated message ("... in PATH: proxysql binary not found in PATH: ..."). Consider using a shorter wrapper like "proxysql binary not found" or just printing err directly.
```diff
-	common.Exitf(1, "proxysql binary not found in PATH: %s", err)
+	common.Exitf(1, "%s", err)
```
```go
flags := cmd.Flags()
port, _ := flags.GetInt("port")
adminUser, _ := flags.GetString("admin-user")
adminPassword, _ := flags.GetString("admin-password")
skipStart, _ := flags.GetBool("skip-start")
```
This command always uses the requested/default admin port (6032) without checking for conflicts or reserving the required two-port range (admin + MySQL interface). Consider using `common.GetInstalledPorts` + `common.FindFreePort` (with `howMany=2`), at least when the user did not explicitly set `--port`.
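The suggested logic can be sketched with a self-contained stand-in for `common.FindFreePort`. The `findFreePortRange` helper below is hypothetical (not dbdeployer's actual implementation); it only mirrors the base-port / installed-ports / how-many shape of the call used elsewhere in this PR, and shows why asking for a range of 2 matters:

```go
package main

import "fmt"

// findFreePortRange is a hypothetical stand-in for dbdeployer's
// common.FindFreePort: starting at base, it returns the first port p
// such that all of p..p+howMany-1 are absent from installed.
func findFreePortRange(base int, installed []int, howMany int) (int, error) {
	used := make(map[int]bool, len(installed))
	for _, p := range installed {
		used[p] = true
	}
	for p := base; p+howMany-1 <= 65535; p++ {
		free := true
		for i := 0; i < howMany; i++ {
			if used[p+i] {
				free = false
				break
			}
		}
		if free {
			return p, nil
		}
	}
	return 0, fmt.Errorf("no free range of %d ports starting at %d", howMany, base)
}

func main() {
	// 6032 and 6034 are taken, so the first free 2-port range
	// (admin + MySQL interface) starting at 6032 is 6035-6036.
	port, err := findFreePortRange(6032, []int{6032, 6034}, 2)
	fmt.Println(port, err) // → 6035 <nil>
}
```

Note that with `howMany=1` the same inputs would return 6033, whose neighbor 6034 is busy; requesting the full range avoids that start failure.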
```go
package sandbox

import (
```
New sandbox file is missing the standard DBDeployer Apache 2.0 license header comment block that is present at the top of other files in this package (e.g. sandbox/sandbox.go). Please add the header for consistency and licensing clarity.
```go
proxysqlPort = 6032
// Try to find a free port
freePort, err := common.FindFreePort(proxysqlPort, []int{}, 1)
if err == nil {
	proxysqlPort = freePort
}
```
When `proxysqlPort == 0`, this calls `common.FindFreePort` with an empty `installedPorts` list and ignores the returned error. This effectively always selects 6032 even if it conflicts with existing sandboxes, and can silently fall back to a busy port. Use the actual installed ports (e.g. from `common.GetInstalledPorts` on the sandbox home) and return an error if a free port cannot be found.
```diff
-	proxysqlPort = 6032
-	// Try to find a free port
-	freePort, err := common.FindFreePort(proxysqlPort, []int{}, 1)
-	if err == nil {
-		proxysqlPort = freePort
-	}
+	// Determine currently used ports from the sandbox home (parent of sandboxDir)
+	sandboxHome := path.Dir(sandboxDir)
+	installedPorts, err := common.GetInstalledPorts(sandboxHome)
+	if err != nil {
+		return fmt.Errorf("cannot determine installed ports in %s: %w", sandboxHome, err)
+	}
+	proxysqlPort = 6032
+	// Try to find a free port starting from the default ProxySQL admin port
+	freePort, err := common.FindFreePort(proxysqlPort, installedPorts, 1)
+	if err != nil {
+		return fmt.Errorf("no free port available for ProxySQL starting at %d: %w", proxysqlPort, err)
+	}
+	proxysqlPort = freePort
```
```go
// Start ProxySQL
if err := p.StartSandbox(proxysqlDir); err != nil {
	return fmt.Errorf("starting ProxySQL: %w", err)
}
```
This helper always starts ProxySQL after creating the sandbox. When used from replication deployment, it should respect the user's --skip-start choice (i.e., avoid starting ProxySQL when the replication sandbox was deployed with SkipStart), or accept a skipStart parameter.
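One way to propagate the choice is a `skipStart` parameter that gates the start step. The sketch below is a hypothetical simplification (the `deployProxySQL` function and its `start` callback stand in for the helper and `p.StartSandbox`), showing only the control flow being suggested:

```go
package main

import "fmt"

// deployProxySQL sketches how the helper could accept a skipStart flag so
// that replication's --skip-start propagates to the ProxySQL sidecar.
// start stands in for the provider's StartSandbox call.
func deployProxySQL(skipStart bool, start func() error) error {
	// ... create and configure the ProxySQL sandbox here ...
	if skipStart {
		fmt.Println("ProxySQL sandbox created; start skipped (--skip-start)")
		return nil
	}
	return start()
}

func main() {
	started := false
	_ = deployProxySQL(true, func() error { started = true; return nil })
	fmt.Println("started:", started) // → started: false
}
```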
```go
// Determine the sandbox directory that was created
sandboxDir := path.Join(sd.SandboxDir, defaults.Defaults().MasterSlavePrefix+common.VersionToName(origin))
if sd.DirName != "" {
	sandboxDir = path.Join(sd.SandboxDir, sd.DirName)
}

// Read port info from child sandbox descriptions
masterDesc, err := common.ReadSandboxDescription(path.Join(sandboxDir, defaults.Defaults().MasterName))
```
The ProxySQL sidecar logic assumes a master/slave replication layout (master directory name + node%d slaves) and constructs sandboxDir using MasterSlavePrefix regardless of the selected --topology. For non-master-slave topologies (group, fan-in, all-masters, pxc, ndb) this path/layout will be wrong and the subsequent ReadSandboxDescription calls will fail. Either restrict --with-proxysql to globals.MasterSlaveLabel, or compute the sandbox directory/layout based on the chosen topology (mirroring sandbox.CreateReplicationSandbox’s topology switch).
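The simpler of the two options (restricting the flag) could look like the sketch below. The `supportsProxySQL` function is hypothetical, and the topology label strings are assumptions matching the labels named in this comment:

```go
package main

import "fmt"

// supportsProxySQL sketches restricting --with-proxysql to the
// master/slave topology, per the review suggestion. The label strings
// are assumed to match dbdeployer's topology labels.
func supportsProxySQL(topology string) error {
	if topology != "master-slave" {
		return fmt.Errorf("--with-proxysql is only supported with the master-slave topology (got %q)", topology)
	}
	return nil
}

func main() {
	fmt.Println(supportsProxySQL("master-slave")) // → <nil>
	fmt.Println(supportsProxySQL("group"))        // → error
}
```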
```go
	slavePorts = append(slavePorts, nodeDesc.Port[0])
}

err = sandbox.DeployProxySQLForTopology(sandboxDir, masterPort, slavePorts, 0, "127.0.0.1")
```
ProxySQL is deployed with host hardcoded to "127.0.0.1". If the replication sandboxes were created with a different bind address (via the global --bind-address flag), ProxySQL may not be able to reach the backends. Prefer using the Host from the sandbox descriptions (masterDesc.Host) or the bind-address value used during deployment.
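A minimal sketch of the suggested fallback, with a hypothetical `backendHost` helper (the real code would read the `Host` field of the sandbox description):

```go
package main

import "fmt"

// backendHost prefers the host recorded in the sandbox description,
// defaulting to 127.0.0.1 only when no host was recorded.
func backendHost(descHost string) string {
	if descHost == "" {
		return "127.0.0.1"
	}
	return descHost
}

func main() {
	fmt.Println(backendHost(""))         // → 127.0.0.1
	fmt.Println(backendHost("10.0.0.5")) // → 10.0.0.5
}
```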
```go
err = sandbox.DeployProxySQLForTopology(sandboxDir, masterPort, slavePorts, 0, "127.0.0.1")
if err != nil {
	common.Exitf(1, "ProxySQL deployment failed: %s", err)
}
```
If the user deployed replication with --skip-start (sd.SkipStart), this currently still deploys and starts ProxySQL. That creates a sidecar pointing at backends that are not running and diverges from the expected semantics of skip-start. Consider skipping ProxySQL start/deployment when skip-start is set, or propagating skip-start down to the helper.
```go
port, _ := flags.GetInt("port")
adminUser, _ := flags.GetString("admin-user")
adminPassword, _ := flags.GetString("admin-password")
skipStart, _ := flags.GetBool("skip-start")

p, err := providers.DefaultRegistry.Get("proxysql")
```
--port can be set to 0/negative, which will produce an invalid config (ProxySQL provider will end up with adminPort=0 and mysqlPort=1). Validate that port is >0 (or explicitly support port=0 as "auto" and implement free-port selection) before continuing.
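The up-front check could be as small as the sketch below; `validateAdminPort` is a hypothetical helper name. Since ProxySQL also binds `port+1` for the MySQL interface, the upper bound is checked for the pair, not just the admin port:

```go
package main

import "fmt"

// validateAdminPort sketches the suggested check: reject non-positive
// ports, and ports whose +1 neighbor (MySQL interface) would exceed
// the valid TCP port range.
func validateAdminPort(port int) error {
	if port <= 0 || port+1 > 65535 {
		return fmt.Errorf("invalid ProxySQL admin port %d: need 0 < port and port+1 <= 65535", port)
	}
	return nil
}

func main() {
	fmt.Println(validateAdminPort(6032)) // → <nil>
	fmt.Println(validateAdminPort(0))    // → error
}
```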
```go
// Try to find a free port
freePort, err := common.FindFreePort(proxysqlPort, []int{}, 1)
```
ProxySQL uses two consecutive ports (adminPort and adminPort+1 for the MySQL interface). The auto-port selection requests only 1 port, so it may choose an admin port whose +1 is already in use, causing start failures. The free-port search should reserve/check a range of 2 ports.
```diff
-	// Try to find a free port
-	freePort, err := common.FindFreePort(proxysqlPort, []int{}, 1)
+	// Try to find a free range of two consecutive ports (admin and MySQL interface)
+	freePort, err := common.FindFreePort(proxysqlPort, []int{}, 2)
```
Summary

- `dbdeployer deploy proxysql` subcommand for standalone ProxySQL sandbox deployment with `--port`, `--admin-user`, `--admin-password`, and `--skip-start` flags
- `--with-proxysql` flag for `dbdeployer deploy replication` that automatically deploys a topology-aware ProxySQL instance alongside MySQL replication, with the master in hostgroup 0 and the slaves in hostgroup 1
- `sandbox.DeployProxySQLForTopology()` helper that creates a ProxySQL sandbox configured with backend servers from an existing MySQL topology

Closes #39
Closes #40
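For context on the hostgroup layout, the backend registration amounts to rows in ProxySQL's standard admin `mysql_servers` table. The `buildBackendSQL` function below is a hypothetical illustration of the statements involved, not the PR's actual implementation:

```go
package main

import "fmt"

// buildBackendSQL sketches registering topology backends in ProxySQL's
// admin interface: the master goes into the writer hostgroup (0) and
// each slave into the reader hostgroup (1), then the new server list is
// loaded to runtime and persisted.
func buildBackendSQL(host string, masterPort int, slavePorts []int) []string {
	stmts := []string{
		fmt.Sprintf("INSERT INTO mysql_servers (hostgroup_id, hostname, port) VALUES (0, '%s', %d);", host, masterPort),
	}
	for _, p := range slavePorts {
		stmts = append(stmts,
			fmt.Sprintf("INSERT INTO mysql_servers (hostgroup_id, hostname, port) VALUES (1, '%s', %d);", host, p))
	}
	return append(stmts, "LOAD MYSQL SERVERS TO RUNTIME;", "SAVE MYSQL SERVERS TO DISK;")
}

func main() {
	for _, s := range buildBackendSQL("127.0.0.1", 20000, []int{20001, 20002}) {
		fmt.Println(s)
	}
}
```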
Test plan

- `go build` succeeds
- `go vet ./...` passes
- `dbdeployer deploy --help` shows the `proxysql` subcommand
- `dbdeployer deploy proxysql --help` shows correct flags and defaults
- `dbdeployer deploy replication --help` shows the `--with-proxysql` flag
- `dbdeployer deploy proxysql` fails with a clear "proxysql binary not found" message when proxysql is not installed