Add "k0sctl exec" (alias ssh) to open a remote console#1027
Conversation
Signed-off-by: Kimmo Lehto <klehto@mirantis.com>
Pull request overview
Adds a new `k0sctl exec` CLI command (alias `ssh`) to open an interactive remote shell or run a non-interactive command across one or more hosts defined in the k0sctl config, which can serve as a building block for future "fetch logs" workflows (e.g., #985).
Changes:

- Register a new top-level `exec` command in the root CLI.
- Implement `k0sctl exec` / `k0sctl ssh` with host filtering (`--address`, `--role`, `--first`) and optional parallel execution (`--parallel`).
- Add command/stdin forwarding and prefixed multi-host output formatting.

For example, `k0sctl exec --role worker -- uname -a` runs a command on every worker, while `k0sctl ssh --address 10.0.0.1` opens an interactive shell on a single host.
Reviewed changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated 4 comments.
| File | Description |
|---|---|
| cmd/root.go | Registers the new exec command in the CLI command list. |
| cmd/exec.go | Implements the new exec/ssh command, including host selection, connect/exec logic, and output handling. |
```go
hosts := cfg.Spec.Hosts

if addresses := ctx.StringSlice("address"); len(addresses) > 0 {
	hosts = hosts.Filter(func(h *cluster.Host) bool {
		for _, a := range addresses {
			if h.Address() == a {
				return true
			}
		}
		return false
	})
}
if role := ctx.String("role"); role != "" {
	hosts = hosts.WithRole(role)
}
if ctx.Bool("first") && len(hosts) > 0 {
	hosts = hosts[:1]
}

if len(hosts) == 0 {
	return fmt.Errorf("no hosts matched the given filters")
}
```
This command introduces new host filtering/execution behavior (address/role/first/parallel, interactive single-host restriction, stdin forwarding), but there are no unit tests covering these cases. Adding cmd-level tests similar to apply/init would help prevent regressions (e.g., filter matching, error messages for 0/many hosts, and parallel vs sequential execution paths).
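For instance, if the filtering were extracted into a small helper, the matching rules could be covered with a focused test. A sketch (the `filterHosts` helper is hypothetical, and constructing hosts via `rig.Connection`/`rig.SSH` mirrors the usual k0sctl config shape but is an assumption here):

```go
package cmd

import (
	"testing"

	"github.com/k0sproject/k0sctl/pkg/apis/k0sctl.k0sproject.io/v1beta1/cluster"
	"github.com/k0sproject/rig"
)

// Covers the --address/--role/--first matching, assuming the inline
// filtering from the Action is lifted into a hypothetical
// filterHosts(hosts, addresses, role, first) helper.
func TestExecHostFiltering(t *testing.T) {
	hosts := cluster.Hosts{
		{Role: "controller", Connection: rig.Connection{SSH: &rig.SSH{Address: "10.0.0.1"}}},
		{Role: "worker", Connection: rig.Connection{SSH: &rig.SSH{Address: "10.0.0.2"}}},
	}

	if got := filterHosts(hosts, []string{"10.0.0.2"}, "", false); len(got) != 1 || got[0].Address() != "10.0.0.2" {
		t.Errorf("expected exactly 10.0.0.2 to match, got %d hosts", len(got))
	}
	if got := filterHosts(hosts, nil, "controller", true); len(got) != 1 {
		t.Errorf("expected a single controller with first=true, got %d hosts", len(got))
	}
	if got := filterHosts(hosts, []string{"192.0.2.1"}, "", false); len(got) != 0 {
		t.Errorf("expected no hosts for an unknown address, got %d", len(got))
	}
}
```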
```go
defer func() {
	_ = hosts.Each(ctx.Context, func(_ context.Context, h *cluster.Host) error {
		h.Disconnect()
		return nil
	})
}()
```
The deferred disconnect uses ctx.Context, so if the command context is canceled (timeout/Ctrl-C), hosts.Each will stop early and some hosts may never be disconnected. Use a non-canceling context for cleanup (e.g., context.WithoutCancel(ctx.Context) or context.Background()) so all connections are reliably closed.
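A minimal sketch of that fix (assuming Go 1.21+ for `context.WithoutCancel`):

```go
// Use a context that survives cancellation of the command context so the
// disconnect loop visits every host even after Ctrl-C or a timeout.
defer func() {
	cleanupCtx := context.WithoutCancel(ctx.Context)
	_ = hosts.Each(cleanupCtx, func(_ context.Context, h *cluster.Host) error {
		h.Disconnect()
		return nil
	})
}()
```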
```go
if err := hosts.ParallelEach(ctx.Context, func(_ context.Context, h *cluster.Host) error {
	log.Debugf("connecting to %s", h.Address())
	if err := h.Connect(); err != nil {
		return fmt.Errorf("failed to connect to %s: %w", h.Address(), err)
	}
	return nil
}); err != nil {
	return err
}
```
Host connections are always established via hosts.ParallelEach, which spawns one goroutine per host with no concurrency limit. This ignores the command’s --parallel flag and can overwhelm SSH/WinRM limits on large clusters. Consider connecting sequentially when --parallel is false, and/or using BatchedParallelEach with a configurable concurrency limit (reusing the existing --concurrency flag or config Spec.Options.Concurrency.Limit).
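One possible shape for this, reusing only calls already present in the diff (a sketch; `BatchedParallelEach` with a bounded limit would be a further refinement, but its signature is not shown here):

```go
// Honor --parallel for the connection phase too: connect sequentially by
// default so large clusters don't open all SSH/WinRM sessions at once.
connect := func(_ context.Context, h *cluster.Host) error {
	log.Debugf("connecting to %s", h.Address())
	if err := h.Connect(); err != nil {
		return fmt.Errorf("failed to connect to %s: %w", h.Address(), err)
	}
	return nil
}
if ctx.Bool("parallel") {
	if err := hosts.ParallelEach(ctx.Context, connect); err != nil {
		return err
	}
} else if err := hosts.Each(ctx.Context, connect); err != nil {
	return err
}
```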
```go
var execCommand = &cli.Command{
	Name:      "exec",
	Aliases:   []string{"ssh"},
	Usage:     "Open a remote terminal or run a command on hosts",
	ArgsUsage: "[-- COMMAND ...]",
	Flags: []cli.Flag{
		configFlag,
		&cli.StringSliceFlag{
			Name:    "address",
			Usage:   "Target host address (can be given multiple times)",
			Aliases: []string{"a"},
		},
		&cli.StringFlag{
			Name:    "role",
			Usage:   "Filter hosts by role",
			Aliases: []string{"r"},
		},
		&cli.BoolFlag{
			Name:    "first",
			Usage:   "Use only the first matching host",
			Aliases: []string{"f"},
		},
		&cli.BoolFlag{
			Name:    "parallel",
			Usage:   "Run command on hosts in parallel",
			Aliases: []string{"p"},
		},
		debugFlag,
		traceFlag,
		redactFlag,
	},
```
The command's Usage text could be more descriptive. Other commands in the codebase provide a concise description of what the command does; consider something like: `Usage: "Open a remote terminal or run a command on cluster hosts",`
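If the intent is richer help output, urfave/cli also supports a longer `Description` field alongside the one-line `Usage`. A sketch (the description text is illustrative, not from the PR):

```go
var execCommand = &cli.Command{
	Name:    "exec",
	Aliases: []string{"ssh"},
	Usage:   "Open a remote terminal or run a command on cluster hosts",
	// Shown in "k0sctl help exec"; the wording here is only an example.
	Description: "Without a command, opens an interactive shell on a single matched host; with a command, runs it on every matched host.",
	// ... flags and Action as in the diff ...
}
```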
```go
var execCommand = &cli.Command{
	Name:      "exec",
	Aliases:   []string{"ssh"},
	Usage:     "Open a remote terminal or run a command on hosts",
	ArgsUsage: "[-- COMMAND ...]",
	Flags: []cli.Flag{
		configFlag,
		&cli.StringSliceFlag{
			Name:    "address",
			Usage:   "Target host address (can be given multiple times)",
			Aliases: []string{"a"},
		},
		&cli.StringFlag{
			Name:    "role",
			Usage:   "Filter hosts by role",
			Aliases: []string{"r"},
		},
		&cli.BoolFlag{
			Name:    "first",
			Usage:   "Use only the first matching host",
			Aliases: []string{"f"},
		},
		&cli.BoolFlag{
			Name:    "parallel",
			Usage:   "Run command on hosts in parallel",
			Aliases: []string{"p"},
		},
		debugFlag,
		traceFlag,
		redactFlag,
	},
	Before: actions(initLogging, initConfig),
	Action: func(ctx *cli.Context) error {
		cfg, err := readConfig(ctx)
		if err != nil {
			return err
		}

		hosts := cfg.Spec.Hosts

		if addresses := ctx.StringSlice("address"); len(addresses) > 0 {
			hosts = hosts.Filter(func(h *cluster.Host) bool {
				for _, a := range addresses {
					if h.Address() == a {
						return true
					}
				}
				return false
			})
		}
		if role := ctx.String("role"); role != "" {
			hosts = hosts.WithRole(role)
		}
		if ctx.Bool("first") && len(hosts) > 0 {
			hosts = hosts[:1]
		}

		if len(hosts) == 0 {
			return fmt.Errorf("no hosts matched the given filters")
		}

		args := ctx.Args().Slice()
		var command string
		if len(args) > 0 {
			quoted := make([]string, len(args))
			for i, a := range args {
				quoted[i] = shellescape.Quote(a)
			}
			command = strings.Join(quoted, " ")
		}

		if command == "" && len(hosts) > 1 {
			return fmt.Errorf("interactive shell requires a single host, %d hosts matched (use --first to select the first one)", len(hosts))
		}

		if err := hosts.ParallelEach(ctx.Context, func(_ context.Context, h *cluster.Host) error {
			log.Debugf("connecting to %s", h.Address())
			if err := h.Connect(); err != nil {
				return fmt.Errorf("failed to connect to %s: %w", h.Address(), err)
			}
			return nil
		}); err != nil {
			return err
		}
		defer func() {
			_ = hosts.Each(ctx.Context, func(_ context.Context, h *cluster.Host) error {
				h.Disconnect()
				return nil
			})
		}()

		if command == "" {
			return hosts[0].ExecInteractive("")
		}

		var stdinData string
		if f, ok := ctx.App.Reader.(*os.File); ok {
			if stat, err := f.Stat(); err == nil && (stat.Mode()&os.ModeCharDevice) == 0 {
				data, err := io.ReadAll(f)
				if err != nil {
					return fmt.Errorf("failed to read stdin: %w", err)
				}
				stdinData = string(data)
			}
		}

		multiHost := len(hosts) > 1
		var mu sync.Mutex
		execOnHost := func(_ context.Context, h *cluster.Host) error {
			var opts []exec.Option
			if stdinData != "" {
				opts = append(opts, exec.Stdin(stdinData))
			}

			output, err := h.ExecOutput(command, opts...)
			if output != "" {
				mu.Lock()
				if multiHost {
					for _, line := range strings.Split(strings.TrimRight(output, "\n"), "\n") {
						fmt.Fprintf(ctx.App.Writer, "%s: %s\n", h.Address(), line)
					}
				} else {
					fmt.Fprint(ctx.App.Writer, output)
					if !strings.HasSuffix(output, "\n") {
						fmt.Fprintln(ctx.App.Writer)
					}
				}
				mu.Unlock()
			}
			if err != nil {
				return fmt.Errorf("%s: command failed: %w", h.Address(), err)
			}
			return nil
		}

		if ctx.Bool("parallel") {
			return hosts.ParallelEach(ctx.Context, execOnHost)
		}
		return hosts.Each(ctx.Context, execOnHost)
	},
}
```
There is no test coverage for the new exec command. Given that the repository has test coverage for other commands (apply_test.go, init_test.go, flags_test.go), the exec command should also have unit tests to verify its functionality, particularly around host filtering logic, command execution paths, and error handling.
```go
		},
		&cli.BoolFlag{
			Name:    "parallel",
			Usage:   "Run command on hosts in parallel",
```
The parallel execution flag is ignored when running in interactive mode (when no command is provided), but this is not documented in the flag's Usage text. Consider adding a note to the Usage that indicates this flag only applies when running a command, not in interactive mode, to avoid user confusion.
| Usage: "Run command on hosts in parallel", | |
| Usage: "Run command on hosts in parallel (only applies when a command is provided, not in interactive mode)", |
Error messages should include the log file location for consistency with other commands that use initLogging. Commands like apply, backup, reset, and kubeconfig all include "log file saved to" in their error messages. Consider wrapping major errors with a reference to the log file location using ctx.Context.Value(ctxLogFileKey{}).(string).
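A sketch of that wrapping at the tail of the Action (assuming `ctxLogFileKey` is the unexported key populated by `initLogging`, as the comment suggests; the error text follows the existing "log file saved to" phrasing):

```go
// Reuses the err variable already in scope in the Action and appends the
// log file location, matching the apply/backup/reset/kubeconfig pattern.
if ctx.Bool("parallel") {
	err = hosts.ParallelEach(ctx.Context, execOnHost)
} else {
	err = hosts.Each(ctx.Context, execOnHost)
}
if err != nil {
	if path, ok := ctx.Context.Value(ctxLogFileKey{}).(string); ok && path != "" {
		return fmt.Errorf("%w - log file saved to %s", err, path)
	}
}
return err
```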
This will likely work as a base for #985.