feat: Daemon-based architecture for terminal management #2

angelsantosa wants to merge 8 commits into `1.0.0-alpha`
Conversation
Introduce a new daemon-based architecture that separates terminal lifecycle management from the GUI application:

**Daemon Module (daemon/):**
- SessionManager: Central orchestrator for all terminal sessions
- PTY lifecycle management with portable_pty
- Event broadcasting via tokio broadcast channels
- Session persistence to disk for daemon restarts
- Notification server for agent hook events
- System tray integration (macOS) with agent status display
- Agent wrappers for Claude Code, OpenCode, Codex with hook support

**Protocol Layer:**
- TCP JSON-RPC IPC on localhost
- DaemonRequest/DaemonResponse message types
- DaemonEvent for terminal output, status, agent status, hooks

**CLI Module (cli/):**
- Daemon lifecycle commands: start, stop, status, restart, logs
- CLI binary installation/uninstallation to PATH
- Support for foreground and background daemon modes
- Development mode with separate data directory

**Runtime Module (runtime/):**
- Runtime configuration management
- Shell override configuration for PTY

**Infrastructure:**
- Binary entry points (ada-cli, ada-daemon)
- Constants for paths and configuration
- Utility functions

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Build Infrastructure:
- build-sidecars.sh: Builds ada-cli and ada-daemon binaries for bundling
- dev-setup.sh: Sets up development environment with daemon
- Update build-signed.sh for new binary structure

Tauri Configuration:
- Add sidecar external binaries (ada-cli, ada-daemon)
- Configure system tray with tray-icon.png
- Add tokio, tray-icon, and other daemon dependencies to Cargo.toml
- Enable macOS system-tray and tray-icon features

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Refactor the Tauri backend to delegate terminal operations to the daemon:

**State Management (state.rs):**
- Add DaemonClient to AppState for IPC communication
- Implement ensure_daemon_and_connect() for auto-starting daemon
- Remove local PTY handle storage (moved to daemon)
- Add app_handle storage for event emission

**Terminal Commands (terminal/commands.rs):**
- Delegate create, write, resize, close operations to daemon
- Simplify terminal lifecycle - daemon handles PTY spawning
- Add switchAgent, markStopped, getHistory commands
- Remove local PTY management code

**Terminal Types (terminal/types.rs):**
- Add TerminalMode (Main, Folder, CurrentBranch, Worktree)
- Add AgentStatus (Idle, Working, Permission, Review)
- Simplify TerminalInfo to match daemon protocol

**PTY Module (terminal/pty.rs):**
- Simplify to just PtyHandle struct (used by daemon)
- Remove shell detection (moved to daemon/shell.rs)

**Client Detection (clients/types.rs):**
- Improve multi-layer detection for macOS GUI apps
- Add resolve_via_shell() for proper PATH handling
- Add common installation paths as fallback

**Project Commands (project/commands.rs):**
- Update to work with daemon-based terminals
- Fix project deletion to close daemon sessions
- Improve error handling for corrupted configs

**lib.rs:**
- Register new commands from daemon and runtime modules
- Setup event forwarding from daemon to frontend

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
**New Types (types.ts):**
- TerminalMode: main, folder, current_branch, worktree
- AgentStatus: idle, working, permission, review
- DaemonStatusInfo, ConnectionState, CliInstallStatus, RuntimeConfig

**New Query Modules:**
- cli.ts: CLI installation status and install/uninstall mutations
- daemon.ts: Daemon status and connection management
- runtime.ts: Runtime configuration queries

**Terminal Queries (terminals.ts):**
- Add useReconnectTerminal for auto-reconnection with history
- Add useSwitchTerminalAgent for changing running agents
- Update mutations to work with daemon protocol
- Add terminalHistoryQueryOptions for scrollback

**Event Handling (tauri-events.ts):**
- Listen for: terminal-output, terminal-status, agent-status-change, hook-event
- Smart output batching with requestAnimationFrame
- Agent status color coding (idle/working/permission/review)
- Cache utilities: clearTerminalOutput, loadTerminalHistory

**Terminal View Component:**
- Complete rewrite for daemon protocol
- Auto-reconnection with 15s timeout and duplicate prevention
- Search functionality (Cmd/Ctrl+F) with xterm SearchAddon
- Delayed input handler attachment until history loads
- Detailed error categorization and status tracking

**Terminal Strip Component:**
- Simplified state management using daemon events
- Main terminal card with agent selector
- Mode badges (main, folder, current_branch, worktree)
- Close confirmation with Shift+click to skip

**New Components:**
- cli-install-card.tsx: CLI installation management UI
- button-variants.ts: Extracted button styling

**Removed:**
- attention-badge.tsx: Replaced by agent status events
- Code cleanup in project-sidebar and routes

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Remove unseen-store.ts: Functionality replaced by daemon agent status events
- Minor eslint config update

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
**Important**: Review skipped. Auto reviews are disabled on base/target branches other than the default branch. Please check the settings in the CodeRabbit UI.
📝 **Walkthrough**

This PR introduces a complete daemon architecture for Ada, moving from in-process terminal management to a dedicated background daemon. It adds new CLI and daemon binaries, build/dev automation scripts, a comprehensive daemon server with IPC and system tray integration, and corresponding frontend refactoring to delegate terminal operations to the daemon instead of maintaining local state.
**Sequence Diagram(s)**

```mermaid
sequenceDiagram
    participant Frontend as Frontend App
    participant AppState as App State
    participant DaemonClient as Daemon Client
    participant TCPServer as TCP Server (IPC)
    participant SessionMgr as Session Manager
    participant PTY as PTY/Shell

    Frontend->>AppState: connect_daemon()
    AppState->>DaemonClient: resolve/spawn daemon
    AppState->>TCPServer: establish TCP connection
    DaemonClient->>TCPServer: subscribe to events

    Note over Frontend,PTY: Terminal Session Creation Flow
    Frontend->>DaemonClient: create_session(request)
    DaemonClient->>TCPServer: send CreateSession request
    TCPServer->>SessionMgr: create_session(request)
    SessionMgr->>PTY: spawn_pty(shell, command)
    SessionMgr->>SessionMgr: start reader thread
    SessionMgr->>TCPServer: emit TerminalStatus event
    TCPServer->>DaemonClient: broadcast event
    DaemonClient->>Frontend: update terminal cache

    Note over Frontend,PTY: Terminal I/O Flow
    Frontend->>DaemonClient: write_to_session(data)
    DaemonClient->>TCPServer: send WriteToSession request
    TCPServer->>SessionMgr: write_to_session(data)
    SessionMgr->>PTY: write data
    PTY-->>SessionMgr: output data
    SessionMgr->>TCPServer: emit TerminalOutput event
    TCPServer->>DaemonClient: broadcast event
    DaemonClient->>Frontend: update output buffer
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 3 passed
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
@coderabbitai review
✅ Actions performed: Review triggered.
Sidecar binaries should not be committed to git:
- Large file sizes (49MB total)
- Platform-specific (aarch64-apple-darwin only)
- Should be built during CI/CD via scripts/build-sidecars.sh

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Actionable comments posted: 58
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src-tauri/src/terminal/pty.rs (1)
20-29: Guard against zero cols/rows before resize. PTY backends often reject zero sizes, and frontend layout can transiently emit 0×0, causing avoidable errors. Add a quick validation that skips the resize or returns a clear error.
Proposed fix
```diff
 pub fn resize_pty(pty_handle: &PtyHandle, cols: u16, rows: u16) -> Result<()> {
+    if cols == 0 || rows == 0 {
+        return Err(Error::TerminalError(
+            "Invalid PTY size: cols/rows must be > 0".to_string(),
+        ));
+    }
     let master = pty_handle.master.lock();
     master
         .resize(PtySize {
             rows,
             cols,
             pixel_width: 0,
             pixel_height: 0,
         })
         .map_err(|e| Error::TerminalError(e.to_string()))?;
     Ok(())
 }
```
🤖 Fix all issues with AI agents
In `@scripts/build-sidecars.sh`:
- Around line 69-74: The fallback chain in the RUSTFLAGS block can leave one
binary built while the other fails (ada-cli vs ada-daemon); change the fallback
to build each binary separately using explicit cargo rustc calls and check their
exit codes individually: run cargo rustc --release --bin ada-cli, test its exit
status and echo a clear error on failure (and optionally delete its artifact),
then run cargo rustc --release --bin ada-daemon and handle its exit status
similarly; update the script block containing the RUSTFLAGS="" cargo build and
the cargo rustc calls to perform per-binary error handling and messaging so
failures are disambiguated and partial state can be cleaned up.
- Around line 19-45: In get_target_triple(), avoid combining declaration and
command substitution (SC2155) by declaring variables first and assigning
afterward: change "local arch=$(uname -m)" and "local os=$(uname -s)" to
separate declarations like "local arch; arch=$(uname -m)" and "local os;
os=$(uname -s)"; keep the rest of the case logic intact so the function behavior
is unchanged.
- Around line 38-44: Update the Windows branch in the platform-detection case
(currently matching MINGW*|MSYS*|CYGWIN*) to also detect and return the Windows
ARM64 triple (e.g., aarch64-pc-windows-msvc) when the shell/uname indicates
ARM64, and replace the generic fallback that echoes "unknown-unknown-unknown"
with an explicit error path that prints a clear message and exits non-zero;
specifically modify the case handling around the MINGW*/MSYS*/CYGWIN* pattern
and remove/replace occurrences of the fallback echo "unknown-unknown-unknown" so
unsupported architectures fail fast with a descriptive error instead of emitting
invalid target triples.
In `@scripts/dev-setup.sh`:
- Around line 74-85: The timestamp-only early-exit using CLI_PATH, DAEMON_PATH,
CLI_DEBUG_PATH and DAEMON_DEBUG_PATH can skip rebuilding when Rust sources
change; remove the early exit branch and ensure the script always runs the build
step (e.g., always invoke cargo build incrementally before copying sidecars) or
replace the timestamp check with a comparison against source mtimes (e.g., check
src/**/*.rs) so sidecar copies are updated; specifically remove the exit 0 under
the "Dev sidecars already set up..." branch (or change logic to fall through to
the cargo build invocation) so stale sidecars cannot persist.
In `@src-tauri/src/cli/daemon.rs`:
- Around line 279-283: The sync probe_port function duplicates logic already
present in the async probe in daemon::client; extract the common TCP-connect
check into a shared utility (e.g., daemon::net::is_port_open or a helper
function in a common module) and have both probe_port (in this file) and the
async probe in daemon::client call that utility (the sync probe can be a thin
wrapper returning the helper's bool). Update imports and remove the duplicated
TcpStream::connect logic from probe_port so only the shared helper contains the
implementation.
- Around line 410-443: tail_follow currently keeps reading an old file handle
and will spin forever after log rotation; update tail_follow to detect rotation
by capturing the watched file's metadata (e.g., device+inode or file size)
before entering the loop and on each Ok(0) (no data) call fs::metadata(path) to
compare against the open file's metadata (via file.metadata()); if metadata
differs (inode/device changed) or size decreased, close and reopen the file,
recreate the io::BufReader, and reset seeking behavior (seek to start or end as
appropriate) so the loop follows the new file handle; update references to the
mutable file/reader variables inside tail_follow to support reopening.
- Around line 231-247: get_status has a TOCTOU race — it reads PID/port
(read_pid/read_port) then checks liveness (is_process_running/probe_port) so the
returned DaemonStatus can be stale if the daemon exits immediately after the
check; update the get_status doc comment to explicitly state this race and that
callers must not assume the status is authoritative, and add guidance that
callers who need stronger guarantees should re-check using is_process_running or
probe_port (or implement retry/timeout logic) or call a new verification helper
you add (e.g., verify_running(pid, port)) to perform an active probe before
acting.
- Around line 263-277: Remove this duplicated is_process_running implementation
and instead call the canonical function from the existing module (the one in
daemon/pid.rs). Replace the local function with an import/use of the existing
is_process_running (e.g., use crate::daemon::pid::is_process_running or the
correct module path) and ensure any cfg gating remains satisfied by relying on
the original's platform handling; update any references in this file to call
that function rather than the duplicate.
In `@src-tauri/src/cli/install.rs`:
- Around line 45-54: The current up_to_date check only handles symlinks via
read_link and thus misses copied binaries; update the logic around up_to_date /
install_path / bundled_path to detect copied installations by comparing the
actual file contents or version info instead of only following symlinks: read
both install_path and bundled_path (when bundled_path is Some) using
std::fs::metadata to ensure files exist, then compare a deterministic signature
(e.g., compute a SHA-256 of each file or use embedded version info) and set
up_to_date true when signatures match; add a helper like compute_file_hash(path:
&Path) -> Result<String, Error> and fallback checks (size + mtime) to avoid
expensive hashing where appropriate, and ensure errors are handled the same way
read_link errors were handled.
- Around line 271-298: The uninstall on Windows launches an elevated PowerShell
via Start-Process but doesn't wait for the elevated process to finish, so the
code should add waiting to the launched process; update the command built in
uninstall_cli_windows (the Command::new("powershell") invocation and the
Start-Process argument string) to include the -Wait flag (e.g. "Start-Process
powershell -Verb RunAs -ArgumentList '-Command', '<script>' -Wait") so the
created elevated PowerShell is awaited by .output(), then keep the existing
output.status.success() check to surface failures from the elevated removal
command.
- Around line 235-269: install_cli_windows currently launches an elevated
PowerShell with Start-Process but doesn’t wait for that elevated process to
finish, so the function can return before the copy completes; update the
Start-Process invocation constructed in install_cli_windows to wait for the
elevated process (e.g. add the -Wait flag or use -PassThru and wait) and ensure
the ArgumentList/quoting still correctly passes the inner script (modify the
format! string used to build the
Command::new("powershell").arg(...).arg(format!(...)) call so it includes -Wait
or uses -PassThru and then block until the elevated process exits), keeping the
existing script variable and error handling in place.
- Around line 161-196: install_cli_unix currently calls macOS-only osascript and
will fail on non-mac Unix; update it so the macOS-specific behavior is guarded
with #[cfg(target_os = "macos")] (e.g. rename to install_cli_macos or add
#[cfg(target_os = "macos")] to install_cli_unix) and either add a Linux
implementation (e.g. install_cli_linux that uses sudo sh -c "mkdir -p
/usr/local/bin && ln -sf '<bundled>' '<INSTALL_PATH>'" via Command and checks
output) or restrict compilation to macOS only and document the limitation; make
sure calls/sites using install_cli_unix/INSTALL_PATH are updated to call the
correct platform-specific function or arranged behind the same cfg attribute.
In `@src-tauri/src/cli/paths.rs`:
- Around line 85-95: The test test_dev_vs_prod_paths can fail in CI when
dirs::data_dir() or dirs::home_dir() return None; update the test to first check
the Options and only assert inequality when at least one side is Some.
Specifically, call data_dir(true)/data_dir(false) and
home_dir(true)/home_dir(false) as before, but for each pair skip the assert_ne!
if both values are None (or alternatively only perform assert_ne! when at least
one Option is Some) so the test is resilient in minimal environments; reference
the existing test function test_dev_vs_prod_paths and the data_dir/home_dir
calls to locate where to add these guards.
- Around line 67-73: The code that looks for the daemon next to the current
executable currently uses candidate.exists(), which returns true for directories
and can produce false-positives; update the check in the resolution logic (the
current_exe / parent / candidate flow that uses exe_name) to use
candidate.is_file() instead of exists() so only regular files are accepted (also
scan for other similar checks in the same resolve_daemon or path resolution
function and replace exists() with is_file() where you intend to match an
executable file).
In `@src-tauri/src/clients/types.rs`:
- Around line 44-58: resolve_via_shell may block indefinitely when executing the
user's shell; modify it to enforce a timeout by running the command with a
deadline. Specifically, in resolve_via_shell (which creates
std::process::Command with shell.path, shell.login_args and "command -v ..."),
replace the blocking cmd.output().ok()? call with a timed wait: either use the
wait-timeout crate to spawn and wait with a timeout and bail out (return None)
on timeout/failure, or spawn the child and wait on a background thread/select
with a Duration (e.g., a few seconds) and kill/terminate the child if the
deadline is exceeded; ensure you still handle non-success status and parse
stdout into PathBuf as before.
- Around line 85-91: The current shell_escape function uses the POSIX pattern
'\'' which breaks in fish; update shell_escape to detect the user's shell (e.g.,
inspect env var SHELL or provide an is_fish_shell helper) and choose escaping
accordingly: keep the existing '\'' behavior for POSIX shells, but for fish
return a double-quoted string where double quotes, backslashes, dollar signs and
backticks are escaped (e.g., replace \ -> \\; " -> \"; $ -> \$; ` -> \` ) and
wrap in quotes, and ensure callers that build custom commands for
ClientType::Custom use this updated shell_escape to get fish-safe escaping.
Ensure the function name shell_escape is updated in-place and any call sites
using it (e.g., where ClientType::Custom commands are composed) continue to call
the new implementation.
In `@src-tauri/src/daemon/client.rs`:
- Around line 410-429: The function latest_daemon_source_mtime currently only
scans src/daemon; modify latest_daemon_source_mtime to consider all Rust sources
that can affect the daemon by scanning multiple paths (e.g., src/daemon plus
shared module folders like src/terminal and specific files like src/error.rs or
simply recurse under src/) instead of a single directory. Update the logic to
iterate recursively (or over a list of directories), filter for .rs files,
collect their metadata.modified() times and return the latest timestamp
(preserving the existing Option<SystemTime> behavior); reference the function
latest_daemon_source_mtime and the current daemon_dir/entries loop to locate
where to replace the single-dir read_dir with a recursive walk or multi-path
iteration and time comparison.
- Around line 343-348: Duplicate data-dir resolution logic exists across
daemon_data_dir, cli::paths::data_dir, tauri_commands::daemon_data_dir, and
state::get_data_dir; extract this logic into a single shared function (e.g., pub
fn resolve_data_dir() or data_dir()) in a common module (such as cli::paths or a
new util/paths module), implement the existing behavior (choose "ada-dev" for
debug_assertions else "ada", use dirs::data_dir() and map None to
Error::ConfigError), export it publicly, and replace the duplicate
implementations with calls to the new shared function (update the callers in
daemon::client::daemon_data_dir, tauri_commands::daemon_data_dir,
cli::paths::data_dir, and state::get_data_dir to use the consolidated function
and adjust imports).
- Around line 387-400: The fallback in resolve_daemon_path currently returns
PathBuf::from(exe_name) which can cause a silent failure later; change
resolve_daemon_path to return a Result::Err when the binary isn't found next to
the current executable (and optionally after searching PATH if desired).
Specifically, in resolve_daemon_path, after computing exe_name and checking
current_exe.parent() and candidate.exists(), return
Err(Error::TerminalError(...)) with a descriptive message that includes exe_name
and the checked candidate path instead of returning Ok(PathBuf::from(exe_name));
update any callers to handle the error accordingly.
- Around line 17-82: The reader task currently just logs "daemon connection
closed" but leaves pending requests hanging; fix by having the reader task (in
connect) cleanly notify and unblock everyone: clone the out_tx into the read
task (out_tx_for_read) and when the read loop ends call
out_tx_for_read.close_channel() to stop the writer, then lock pending_for_read,
drain the HashMap and send an error/closure DaemonResponse via each oneshot
sender so callers get unblocked (use the same DaemonResponse variant your code
expects for errors), and additionally add a lightweight connection state
notifier field to DaemonClient (e.g., a tokio::watch::Sender or
broadcast::Sender named connection_state or connection_notifier) and send a
"lost" state from the reader task so the owner can react/reconnect; update
DaemonClient::connect to initialize and return this notifier alongside out_tx
and pending.
- Around line 84-107: The send_request function can block forever awaiting rx;
wrap the oneshot receive with a timeout (e.g., using tokio::time::timeout with a
sensible Duration like 30s) so the call fails instead of hanging; on timeout
remove the pending entry (self.pending.lock().remove(&id)) and return an
appropriate Error::TerminalError (e.g., "Daemon response timed out"), preserving
the existing success and send-failure cleanup logic in send_request.
In `@src-tauri/src/daemon/env.rs`:
- Around line 73-78: The code hardcodes ":" when prepending ada_bin to PATH;
update build_terminal_env to use std::env::split_paths and std::env::join_paths
so it respects OS-specific separators: convert ada_bin_dir to a PathBuf, take
env.get("PATH") and std::env::split_paths to get an iterator, build a new
Vec<PathBuf> that starts with ada_bin_dir.clone() followed by the split existing
entries (or just the ada_bin_dir if none), then use std::env::join_paths to
produce a platform-correct OsString and insert that into env (replacing the
current format!("{ada_bin}:{path}") logic that references ada_bin, path_value).
In `@src-tauri/src/daemon/logging.rs`:
- Around line 39-52: The tracing subscriber is using
Registry::default().with(...).init(), which will panic if logging is initialized
more than once; change both uses of init() in the ADA_LOG_STDERR branch and the
else branch to try_init() instead so initialization returns a Result and avoids
panics (adjust error handling if needed); update the calls that currently
reference
Registry::default().with(filter).with(file_layer).with(stderr_layer).init() and
Registry::default().with(filter).with(file_layer).init() to use try_init() while
preserving the surrounding logic that returns Some(guard).
In `@src-tauri/src/daemon/persistence.rs`:
- Around line 177-184: Document that truncate_utf8_safe may return the original
bytes slice if no valid UTF-8 boundary is found within the first 4 bytes (i.e.,
the entire slice remains invalid UTF-8), and note that callers (e.g.,
read_scrollback) rely on from_utf8_lossy to handle invalid sequences; add this
explanation as a doc comment on the truncate_utf8_safe function so future
readers understand the edge-case behavior.
- Around line 100-118: write_output currently only flushes metadata when
bytes_since_flush >= 4096 which risks losing updated meta.scrollback_bytes on
crash; make the flush frequency configurable or flush on every write by adding a
configuration field (e.g., flush_every_bytes: usize or flush_on_write: bool) to
the struct and update write_output to consult that config: after updating
bytes_written/bytes_since_flush/meta.scrollback_bytes/meta.last_activity, if
flush_on_write is true OR bytes_since_flush >= flush_every_bytes then call
scrollback_writer.flush()? and save_meta()? and reset bytes_since_flush; keep
existing rotate_scrollback() and error handling unchanged.
- Around line 136-155: rotate_scrollback currently reads and then
truncates/writes back to scrollback.bin which can corrupt the file if the
process crashes mid-operation; change it to perform an atomic replace: flush and
drop the existing self.scrollback_writer to release the file handle, read the
current scrollback.bin, compute keep_from/truncated as before, write truncated
bytes to a temp file in the same directory (e.g., "scrollback.bin.tmp") using
OpenOptions, fsync the temp file, rename the temp file over "scrollback.bin"
(std::fs::rename) to atomically replace it, fsync the session directory, then
reopen self.scrollback_writer as a BufWriter for the replaced scrollback.bin and
update self.bytes_written; keep using ? error propagation so callers still
handle IO errors.
In `@src-tauri/src/daemon/pid.rs`:
- Around line 36-44: The is_process_running function currently treats any error
from kill(Pid::from_raw(pid as i32), None) as “not running”; update it to return
true for Ok(_) and also true when the error is EPERM (permission denied), but
false when the error is ESRCH (no such process). Concretely, in
is_process_running use a match on kill(...): Ok(_) => true,
Err(nix::Error::Sys(Errno::EPERM)) => true, Err(nix::Error::Sys(Errno::ESRCH))
=> false, and any other error should be handled conservatively (return true or
log and return true). Import nix::errno::Errno and keep using nix::unistd::Pid
to locate the change.
- Around line 53-64: The cleanup_stale_pid function currently runs on all
platforms but relies on is_process_running (which is false on non-Unix), causing
live PID/port files to be removed; guard the real cleanup implementation with
#[cfg(unix)] and move the current logic (calling read_pid, is_process_running,
remove_pid and removing the "port" file, and logging) into that Unix-only
implementation, then add a #[cfg(not(unix))] no-op cleanup_stale_pid that does
nothing (or returns early) so non-Unix builds won't delete live PID/port files.
In `@src-tauri/src/daemon/server.rs`:
- Around line 466-474: The Shutdown handler currently spawns a short-timer exit
which can race with sending the DaemonResponse::Ok; instead, change
DaemonRequest::Shutdown handling to send a shutdown signal (e.g., use an
existing or new shutdown_tx channel) to the main accept/serve loop so it can
stop accepting, drain/flush pending responses, and exit gracefully;
specifically, in the DaemonRequest::Shutdown arm send () on shutdown_tx (or
return an error if send fails) and remove the immediate std::process::exit spawn
so the main loop controls process termination.
- Around line 229-252: Replace the immediate std::process::exit(0) in the
TrayCommand::Quit arm with a graceful shutdown sequence: instead of exiting
inside the _tray_task thread, send a shutdown signal over the application's
IPC/shutdown channel (e.g., call the existing shutdown sender like
ipc_shutdown_tx.send(Shutdown) or analogous IPC API) so the main daemon can
cleanup sessions and resources, then break/return from the thread; reference the
spawned thread variable _tray_task, the tray_cmd_rx receiver and the
TrayCommand::Quit variant when making the change.
- Around line 548-552: The write_port_file function writes the port file
non-atomically and can produce partial reads; modify write_port_file to write
the port value using the same atomic_write helper used for the PID file: after
creating daemon_dir call atomic_write(daemon_dir.join("port"),
addr.port().to_string()) (or its bytes) and propagate the Result, keeping
fs::create_dir_all(&daemon_dir)? and returning the atomic_write result; ensure
you reference the existing atomic_write function symbol so imports/visibility
match.
- Around line 283-306: The event forwarding loop in the spawned task
(event_task) currently drops events silently when
serde_json::to_string(&message) fails; update the match arm inside the Ok(event)
branch where DaemonMessage::Event is serialized to log serialization errors
instead of ignoring them: capture the Err from serde_json::to_string(&message)
and call warn! or error! with context (e.g., include the event type or message
debug and the serde error), then continue the loop; keep the existing
out_tx_events.send(json) handling unchanged but ensure serialization failures
are logged so they surface during debugging.
In `@src-tauri/src/daemon/session.rs`:
- Around line 660-670: Add a debug log when send fails due to zero receivers
instead of silently ignoring it: inside the error branch that checks
event_tx.receiver_count() == 0 (the block handling
send(DaemonEvent::TerminalOutput { terminal_id, data: output })), call debug!
with terminal_id and the send error (e) to record that there were no active
listeners; keep the existing warn! for non-zero receiver_count cases unchanged.
- Around line 530-730: spawn_pty is too large and should be broken into smaller
helper functions to improve readability and testability; refactor by extracting:
(1) PTY creation and child spawn into a function like open_and_spawn_pty(shell:
&ShellConfig, terminal: &Terminal, cols: u16, rows: u16) -> (PtyPair, Child)
which contains NativePtySystem::openpty and pair.slave.spawn_command calls, (2)
command preparation into prepare_command(terminal: &Terminal, shell:
&ShellConfig, wrapper_dir: &Path) -> CommandBuilder that builds args, cwd and
env, (3) environment building is already in build_terminal_env but apply it via
the new prepare_command, (4) reader thread logic into spawn_pty_reader(reader:
impl Read + Clone, persistence: Arc<Mutex<SessionPersistence>>, shutdown:
Arc<AtomicBool>, event_tx: BroadcastSender, sessions: SessionsMap, terminal_id:
String, project_id: Option<String>) -> JoinHandle<()> which contains the loop,
persistence writes, event sends and session status update, and (5) writer
acquisition into take_pty_writer(master: PtyMaster) -> Writer; then update
spawn_pty to call these helpers in sequence and simply assemble the PtyHandle
and reader_handle. Keep error handling semantics and log calls (info/warn/error)
inside the extracted helpers and reuse existing symbols: spawn_pty, ShellConfig,
build_terminal_env, SessionPersistence, PtyHandle, pair.master, pair.slave,
event_tx, sessions.
- Around line 306-427: The restart_session function contains many step-by-step
info! logs that are noisy in production; change the detailed progress logs (the
info! calls with messages like "restart_session: acquiring write lock",
"restart_session: write lock acquired", "restart_session: releasing write lock
(phase 1)", "restart_session: acquiring write lock (phase 2)", "restart_session:
PTY spawned successfully", "restart_session: dropping old handles", etc.) to
debug! while keeping high-level lifecycle events (e.g., the initial
"restart_session: starting", error paths that call error!, and the final
success/status emission) at info! so you retain important signals; update calls
inside restart_session and ensure messages such as those referencing
entry.terminal.status, shell detection, persistence reset, and emit_status
remain appropriately leveled.
In `@src-tauri/src/daemon/shell.rs`:
- Around line 12-16: The detect function should validate candidate shell paths
so empty or non-existent overrides don't short-circuit to a bad value; update
detect to ignore shell_override when it's an empty string or when
PathBuf::from(shell_override) does not point to an existing file (and treat
non-files as invalid), and likewise ensure the value returned from
Self::get_user_shell is validated the same way; if no valid candidate remains,
fall back to PathBuf::from("/bin/bash"). Refer to the detect and get_user_shell
functions (and any callers like spawn_command) when implementing the
existence/is_file checks so invalid overrides properly fall back.
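The validation chain described above can be sketched as follows. This is a minimal standalone sketch, not the actual `detect` implementation: `pick_shell` and its two string arguments are hypothetical stand-ins for the override value and the result of `get_user_shell`.

```rust
use std::path::{Path, PathBuf};

// Hypothetical sketch of the validated fallback chain: an override or
// user shell is only accepted if it is a non-empty path to an existing
// regular file; otherwise we fall through to /bin/bash.
fn pick_shell(shell_override: Option<&str>, user_shell: Option<&str>) -> PathBuf {
    let valid = |s: &str| !s.is_empty() && Path::new(s).is_file();
    if let Some(s) = shell_override.filter(|s| valid(s)) {
        return PathBuf::from(s);
    }
    if let Some(s) = user_shell.filter(|s| valid(s)) {
        return PathBuf::from(s);
    }
    PathBuf::from("/bin/bash") // last-resort fallback
}
```

An empty override and a nonexistent `$SHELL` both fall through to the fallback rather than short-circuiting to a bad value.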
In `@src-tauri/src/daemon/tauri_commands.rs`:
- Around line 195-248: In query_daemon_status, after the two stream.write_all
calls call stream.flush() to ensure bytes are sent immediately, and replace the
single reader.read_line call with a loop that accumulates into response and
treats a read_line return of 0 as EOF (break the loop) so you don't hang waiting
for a newline if the daemon closes or sends a partial response; keep the
existing read/write timeouts and then parse whatever was received (handling
parse errors as before). Use the existing symbols stream.flush(),
reader.read_line(...), and the response String in query_daemon_status to
implement these changes.
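The EOF-aware read loop can be sketched generically over any `Read` source (this is a simplified stand-in for the socket handling in `query_daemon_status`; timeouts and JSON parsing are omitted):

```rust
use std::io::{BufRead, BufReader, Read};

// Accumulate into `response` until a complete line arrives or the peer
// closes the connection. A read_line return of Ok(0) means EOF, so we
// break instead of hanging while waiting for a newline that never comes.
fn read_response<R: Read>(stream: R) -> String {
    let mut reader = BufReader::new(stream);
    let mut response = String::new();
    loop {
        let mut line = String::new();
        match reader.read_line(&mut line) {
            Ok(0) => break, // EOF: daemon closed the socket
            Ok(_) => {
                response.push_str(&line);
                if line.ends_with('\n') {
                    break; // got a complete, newline-terminated response
                }
            }
            Err(_) => break, // timeout or I/O error: parse what we have
        }
    }
    response
}
```

The caller then parses whatever was received, handling parse errors as before.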
- Around line 113-142: Replace the blocking std::thread::sleep in the
startup-wait loop with an async sleep so the Tokio executor thread isn't
blocked: in the block after cmd.spawn() (the loop that calls
read_port(cfg!(debug_assertions)) and probe_port(port)) use
tokio::time::sleep(Duration::from_millis(250)).await instead of
std::thread::sleep(...), add the tokio::time::sleep import (or fully qualify
tokio::time::sleep), and ensure the enclosing function is async so awaiting is
allowed; keep the same iteration logic and error return for "Daemon did not
start within 5 seconds".
In `@src-tauri/src/daemon/wrappers.rs`:
- Around line 502-549: The cursor hook created by create_cursor_notify_hook
currently exits on unknown events (the case default uses exit 0), which causes
behavior divergence from Claude/Codex hooks; modify the hook string in
create_cursor_notify_hook so the default case only logs the unknown event and
does not call exit, allowing the script to continue to the final JSON response
and return status "ok" (i.e., remove or replace exit 0 in the case "*" branch so
the script skips sending the curl but still echoes '{"status": "ok"}' and exits
normally).
- Around line 200-335: The hook script in create_claude_notify_hook uses an
undefined $JSON when building ENCODED_PAYLOAD and fallback encoding, causing
empty payloads; update those references to use the actual variable $INPUT (i.e.,
replace "$JSON" with "$INPUT" in the ENCODED_PAYLOAD assignment and its
fallback) so the raw Claude stdin is URL-encoded and sent correctly in the curl
request.
- Around line 457-500: The Gemini hook created in create_gemini_notify_hook
currently exits early on unknown EVENT_TYPE (the exit 0 in the gemini-notify.sh
case), preventing the curl notification to the frontend; update the case
handling so unknown events do not exit but instead map to a generic/raw event
(for example set EVENT to the original EVENT_TYPE or a "Raw" marker) and allow
the script to continue to the curl notification block (keep the existing logging
and curl logic intact) so the frontend receives notifications for all event
types.
- Around line 741-761: In ensure_opencode_plugin, detect when the source file
(src = ada_plugins_dir.join("ada-notify.js")) does not exist and emit a
warning-level log that includes the full src path and a brief hint (e.g. "source
not found; create_opencode_plugin may have failed") instead of silently
returning Ok(()); keep the existing behavior of not erroring, but add the
warning so failures are visible; reference ensure_opencode_plugin, the src/dst
variables and create_opencode_plugin in the message so maintainers can trace the
cause.
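The intended behavior — keep returning `Ok(())` but make the missing source visible — might look like this sketch (`ensure_plugin_copied` is a hypothetical simplification of `ensure_opencode_plugin`, using `eprintln!` in place of the project's logging macros):

```rust
use std::path::Path;

// Sketch: if the plugin source is missing, log a warning that names the
// path and the likely cause, then return Ok(()) as before instead of
// silently doing nothing.
fn ensure_plugin_copied(src: &Path, dst: &Path) -> std::io::Result<()> {
    if !src.exists() {
        eprintln!(
            "warning: ensure_opencode_plugin: source not found at {} \
             (create_opencode_plugin may have failed); skipping copy to {}",
            src.display(),
            dst.display()
        );
        return Ok(());
    }
    std::fs::copy(src, dst)?;
    Ok(())
}
```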
In `@src-tauri/src/project/commands.rs`:
- Around line 361-379: The response currently returns early when is_now_git &&
!project.is_git_repo, which skips populating terminal metadata; change the
control flow so the terminal session enrichment always runs: either move the
block that calls state.get_daemon() / daemon.list_sessions() and sets
project.terminal_ids and project.main_terminal_id to execute before the
early-return condition or remove/adjust that early return so the code that
iterates sessions (checking session.project_id, session.is_main, pushing
session.id) always runs and updates project.terminal_ids and
project.main_terminal_id even when is_now_git && !project.is_git_repo.
In `@src-tauri/src/runtime/commands.rs`:
- Around line 14-21: In set_shell_override, avoid calling state.get_daemon()
twice (which reacquires the lock) by capturing the daemon reference once into a
local variable (e.g., let daemon = state.get_daemon()?;) and then call
daemon.set_shell_override(shell).await? followed by
daemon.get_runtime_config().await; keep using the existing ? error propagation
and the same return type (Result<RuntimeConfig>) so only the duplicated
get_daemon() calls are replaced with the single local variable.
In `@src-tauri/src/state.rs`:
- Around line 251-269: get_connection_state currently does blocking I/O
(std::fs::read_to_string and std::net::TcpStream::connect) which can stall the
caller; change it to a non-blocking approach by either (A) making
get_connection_state async (rename to async fn get_connection_state) and replace
blocking calls with async equivalents (tokio::fs::read_to_string /
tokio::net::TcpStream::connect with a timeout) or (B) keep a synchronous
signature but offload the blocking checks to a background thread / runtime using
spawn_blocking (e.g. tauri::async_runtime::spawn_blocking) and return a
cached/latest ConnectionState immediately while updating the cache
asynchronously; update all callers of get_connection_state and ensure you still
reference the same fields (daemon, data_dir, port_path) and return
ConnectionState::{Connected,Disconnected,NotRunning} consistently after the
non-blocking check.
- Around line 32-40: Constructor new() currently calls
tauri::async_runtime::block_on(Self::ensure_daemon_and_connect(...)) which
blocks the thread; remove the blocking call and make daemon initialization
asynchronous: return State from new() with daemon set to None (or wrap it in a
shared async-aware container like Arc<RwLock<Option<DaemonClient>>>), then spawn
a background task (using tauri::async_runtime::spawn or tokio::spawn) that runs
ensure_daemon_and_connect(app_handle.clone()), logs errors via tracing::error!,
and stores Some(Arc::new(client)) into the shared daemon slot when ready;
alternatively expose an async init method (e.g., async fn init_daemon(&self))
that callers can await instead of blocking in new().
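The non-blocking constructor pattern can be illustrated generically. This sketch substitutes `std::thread::spawn` for `tauri::async_runtime::spawn` and a `String` for `DaemonClient`, so it only demonstrates the shape of the fix: the slot starts empty and a background task fills it when the daemon is ready.

```rust
use std::sync::{Arc, RwLock};
use std::thread;

// Stand-in for AppState: `daemon` starts as None and is populated
// asynchronously instead of block_on-ing in the constructor.
struct AppState {
    daemon: Arc<RwLock<Option<String>>>, // stand-in for Option<DaemonClient>
}

impl AppState {
    fn new() -> Self {
        let slot = Arc::new(RwLock::new(None));
        let bg = Arc::clone(&slot);
        thread::spawn(move || {
            // ... ensure_daemon_and_connect() would run here, logging
            // errors instead of panicking ...
            *bg.write().unwrap() = Some("connected".to_string());
        });
        Self { daemon: slot }
    }
}
```

Callers that need the client check the slot (or await an async `init_daemon`) rather than blocking app startup.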
- Around line 59-96: The async function ensure_daemon_and_connect is performing
blocking I/O (std::fs::read_to_string and std::net::TcpStream::connect) which
can block the runtime; switch to async APIs: use tokio::fs::read_to_string for
reading port_path and use tokio::net::TcpStream::connect (wrapped in
tokio::time::timeout with the same short deadline, e.g. 250ms) when probing
127.0.0.1:port; keep the existing loop and calls to Self::spawn_daemon_via_cli
and DaemonClient::connect but replace the blocking checks with the async
equivalents so ensure_daemon_and_connect remains non-blocking.
In `@src-tauri/src/terminal/commands.rs`:
- Around line 113-122: The current logic in the caller uses
state.get_daemon()?.list_sessions().await? and then .find(|session|
session.project_id == project_id && session.is_main) which allows a race where
two callers both see no main and create duplicates; replace this by calling a
single atomic daemon-side operation (e.g. implement and call a
get_or_create_main_session RPC on the daemon) or serialize via a mutex at the
daemon layer so uniqueness is enforced server-side; specifically, add a daemon
method like get_or_create_main_session(project_id) and invoke that from where
state.get_daemon()/list_sessions() is used so the daemon creates the main
session only if it doesn't exist and returns the existing one atomically.
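The server-side atomicity argument can be sketched with a mutex-guarded map (a minimal stand-in for the daemon's `SessionManager`; the session-id format is invented for illustration):

```rust
use std::collections::HashMap;
use std::sync::Mutex;

// The whole check-then-insert runs under one lock, so two concurrent
// callers can never both observe "no main session" and create duplicates.
struct SessionManager {
    main_sessions: Mutex<HashMap<String, String>>, // project_id -> session_id
}

impl SessionManager {
    fn get_or_create_main_session(&self, project_id: &str) -> String {
        let mut map = self.main_sessions.lock().unwrap();
        map.entry(project_id.to_string())
            .or_insert_with(|| format!("session-for-{}", project_id))
            .clone()
    }
}
```

The GUI caller then invokes this single RPC instead of racing `list_sessions` + create.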
In `@src-tauri/src/terminal/types.rs`:
- Around line 34-39: The deserialization failure comes from CommandSpec missing
defaults; add #[derive(Default)] (or impl Default) for CommandSpec providing
sensible defaults (e.g., empty command String, empty Vec for args, empty HashMap
for env), add #[serde(default)] to the command fields in the Terminal and
TerminalInfo structs so old persisted SessionMeta can deserialize, and ensure
CommandSpec still derives/implements Serialize + Deserialize; update any
constructors if needed to use CommandSpec::default().
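A hypothetical shape for the fixed struct (field names assumed from the prompt; the real `CommandSpec` would additionally derive `Serialize`/`Deserialize`, and the `Terminal`/`TerminalInfo` fields holding it would carry `#[serde(default)]` so old persisted `SessionMeta` still deserializes):

```rust
use std::collections::HashMap;

// Default gives: empty command string, empty args vec, empty env map —
// the "sensible defaults" old session files deserialize into.
#[derive(Debug, Default, Clone, PartialEq)]
struct CommandSpec {
    command: String,              // defaults to ""
    args: Vec<String>,            // defaults to []
    env: HashMap<String, String>, // defaults to {}
}
```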
In `@src/components/cli-install-card.tsx`:
- Around line 13-36: The component silently returns null when the CLI status
query fails because useQuery(cliInstallStatusQuery) ignores isError and error;
update the component to read const { data: status, isLoading, isError, error } =
useQuery(cliInstallStatusQuery) and render an error state (e.g., visible error
message and retry button that calls refetch or triggers install/uninstall) when
isError is true instead of falling through to the dev-mode canInstall check;
ensure canInstall still derives from status?.canInstall and preserve existing
isLoading handling, using the same identifiers (useQuery, cliInstallStatusQuery,
isLoading, isError, error, canInstall, isInstalling, isUninstalling, isBusy) so
the change is minimal and locatable.
In `@src/components/terminal-view.tsx`:
- Around line 40-44: Reset the per-terminal reconnect state whenever the active
terminal changes: add an effect that watches the terminalId and clears
isReconnectingRef and prevStatusRef (set isReconnectingRef.current = false and
prevStatusRef.current = undefined) so stale reconnect/status state from a
previous terminal doesn't affect the newly selected terminal; reference the
existing isReconnectingRef, prevStatusRef and terminalId in the effect (e.g.,
inside the TerminalView component).
- Around line 106-142: The timeout handler clears isReconnectingRef.current
after 10s which can race with the useReconnectTerminal mutation (which times out
at 15s) and allow overlapping reconnects; fix by either matching the timeout to
the mutation timeout or removing the premature reset—update the timeout logic
around timeoutId in the reconnect invocation so that isReconnectingRef.current
is only cleared by the reconnect callbacks (onSuccess/onError) or change the
setTimeout duration to 15000 to match useReconnectTerminal; ensure references to
reconnect, timeoutId, isReconnectingRef, and the onSuccess/onError handlers are
updated accordingly.
In `@src/lib/queries/cli.ts`:
- Around line 8-40: Centralize the repeated query key by introducing a single
exported constant (e.g., CLI_INSTALL_STATUS_KEY) and replace the array literal
usages in cliInstallStatusQuery (queryKey), useInstallCli
(queryClient.setQueryData) and useUninstallCli (queryClient.setQueryData) with
that constant; ensure the new constant is used in queryOptions({ queryKey:
CLI_INSTALL_STATUS_KEY }) and in both onSuccess handlers where
queryClient.setQueryData is called so all three places reference the same
identifier.
In `@src/lib/queries/terminals.ts`:
- Around line 184-209: Remove the styled console.log and console.error debug
statements from the reconnection mutation callbacks: delete the console.log in
the onSuccess callback (the one logging `[RECONNECT MUTATION] onSuccess` and
updatedTerminal.id) and the console.error in onError (the one logging
`[RECONNECT MUTATION] onError` and err); keep the existing logic that updates
data via
queryClient.setQueryData(queryKeys.terminals.detail(updatedTerminal.id),
updatedTerminal) and
queryClient.invalidateQueries(queryKeys.terminals.list(updatedTerminal.project_id))
in onSuccess and retain the onError callback body if it needs to handle errors
(but without console.error), or replace with a proper error handler if required.
- Around line 148-182: The mutationFn contains several styled
console.log/console.error calls around terminalApi.restart and the
timeoutPromise which should be removed or gated for production; update
mutationFn to either delete the three console.* statements (the initial
"RECONNECT MUTATION Calling terminalApi.restart", the "terminalApi.restart
resolved", and the "terminalApi.restart rejected/timed out") or wrap them behind
a runtime debug flag (e.g., if (DEBUG) { ... }) so only non-verbose logs remain
while preserving the timeout logic and existing error flow.
In `@src/lib/tauri-events.ts`:
- Around line 178-245: The handleHookEvent function does expensive cache lookups
and verbose console logging on every hook; guard this behind a dev-only flag by
adding a DEBUG_HOOKS constant (e.g. const DEBUG_HOOKS = import.meta.env.DEV) at
file scope and early-return from handleHookEvent when DEBUG_HOOKS is false, and
move all cache lookups, payload JSON.parse and console.log calls inside that
guarded branch so production runtime avoids the work and logs. Ensure you
reference the existing handleHookEvent function and the queryClient/getQueryData
calls so you only skip the logging and lookups when DEBUG_HOOKS is false.
- Around line 49-85: Module-level outputBuffers and scheduled RAFs persist
across component lifecycles; update the buffering code to track and cancel the
RAF and to clear per-terminal buffers when a terminal is removed. Specifically,
add a flushFrameId variable, set flushFrameId = requestAnimationFrame(...)
inside handleTerminalOutput instead of only toggling flushScheduled, clear
flushFrameId inside flushOutputBuffers, and call
cancelAnimationFrame(flushFrameId) during useTauriEvents cleanup; also ensure
you remove stale entries from outputBuffers (e.g.,
outputBuffers.delete(terminal_id)) when a terminal is closed or when its cache
entry is evicted (hook into the terminal removal/close handler or subscribe to
the QueryClient cache removal) so buffers don’t accumulate for closed terminals.
get_target_triple() {
    local arch=$(uname -m)
    local os=$(uname -s)

    case "$os" in
        Darwin)
            case "$arch" in
                x86_64) echo "x86_64-apple-darwin" ;;
                arm64)  echo "aarch64-apple-darwin" ;;
                *)      echo "unknown-apple-darwin" ;;
            esac
            ;;
        Linux)
            case "$arch" in
                x86_64)  echo "x86_64-unknown-linux-gnu" ;;
                aarch64) echo "aarch64-unknown-linux-gnu" ;;
                *)       echo "unknown-unknown-linux-gnu" ;;
            esac
            ;;
        MINGW*|MSYS*|CYGWIN*)
            echo "x86_64-pc-windows-msvc"
            ;;
        *)
            echo "unknown-unknown-unknown"
            ;;
    esac
}
🧹 Nitpick | 🔵 Trivial
Declare and assign separately to avoid masking return values.
Per shellcheck SC2155, separating declaration from assignment prevents masking errors from uname:
♻️ Suggested fix
get_target_triple() {
- local arch=$(uname -m)
- local os=$(uname -s)
+ local arch
+ local os
+ arch=$(uname -m)
+ os=$(uname -s)
  case "$os" in
🧰 Tools
🪛 Shellcheck (0.11.0)
[warning] 20-20: Declare and assign separately to avoid masking return values.
(SC2155)
[warning] 21-21: Declare and assign separately to avoid masking return values.
(SC2155)
🤖 Prompt for AI Agents
In `@scripts/build-sidecars.sh` around lines 19 - 45, In get_target_triple(),
avoid combining declaration and command substitution (SC2155) by declaring
variables first and assigning afterward: change "local arch=$(uname -m)" and
"local os=$(uname -s)" to separate declarations like "local arch; arch=$(uname
-m)" and "local os; os=$(uname -s)"; keep the rest of the case logic intact so
the function behavior is unchanged.
MINGW*|MSYS*|CYGWIN*)
    echo "x86_64-pc-windows-msvc"
    ;;
*)
    echo "unknown-unknown-unknown"
    ;;
esac
Windows ARM64 not handled; unknown targets will cause downstream failures.
The Windows case only handles x86_64 (via MINGW/MSYS/CYGWIN), missing ARM64. Additionally, falling back to unknown-* triples (lines 28, 35, 42) will cause Cargo to fail or produce incorrectly named binaries.
Consider failing early with a clear error message for unsupported architectures rather than producing a broken build.
🛠️ Suggested improvement
*) echo "unknown-unknown-linux-gnu" ;;
esac
;;
MINGW*|MSYS*|CYGWIN*)
- echo "x86_64-pc-windows-msvc"
+ case "$arch" in
+ x86_64) echo "x86_64-pc-windows-msvc" ;;
+ arm64|aarch64) echo "aarch64-pc-windows-msvc" ;;
+ *) echo "Unsupported Windows architecture: $arch" >&2; exit 1 ;;
+ esac
;;
*)
- echo "unknown-unknown-unknown"
+ echo "Unsupported OS: $os" >&2
+ exit 1
;;
esac
}
🤖 Prompt for AI Agents
In `@scripts/build-sidecars.sh` around lines 38 - 44, Update the Windows branch in
the platform-detection case (currently matching MINGW*|MSYS*|CYGWIN*) to also
detect and return the Windows ARM64 triple (e.g., aarch64-pc-windows-msvc) when
the shell/uname indicates ARM64, and replace the generic fallback that echoes
"unknown-unknown-unknown" with an explicit error path that prints a clear
message and exits non-zero; specifically modify the case handling around the
MINGW*/MSYS*/CYGWIN* pattern and remove/replace occurrences of the fallback echo
"unknown-unknown-unknown" so unsupported architectures fail fast with a
descriptive error instead of emitting invalid target triples.
RUSTFLAGS="" cargo build --release --bin ada-cli --bin ada-daemon 2>&1 || {
    echo "Direct build failed, trying without library..."
    # If that fails, try building with specific features disabled
    cargo rustc --release --bin ada-cli -- && \
    cargo rustc --release --bin ada-daemon --
}
🧹 Nitpick | 🔵 Trivial
Fallback build command chain may leave partial state on failure.
If the first cargo rustc for ada-cli succeeds but ada-daemon fails, the script exits with an error but leaves ada-cli built. The && chain is correct for short-circuiting, but the outer || { ... } block doesn't distinguish between the two binary failures.
Consider adding explicit error messages per binary or building them separately with individual error handling.
🤖 Prompt for AI Agents
In `@scripts/build-sidecars.sh` around lines 69 - 74, The fallback chain in the
RUSTFLAGS block can leave one binary built while the other fails (ada-cli vs
ada-daemon); change the fallback to build each binary separately using explicit
cargo rustc calls and check their exit codes individually: run cargo rustc
--release --bin ada-cli, test its exit status and echo a clear error on failure
(and optionally delete its artifact), then run cargo rustc --release --bin
ada-daemon and handle its exit status similarly; update the script block
containing the RUSTFLAGS="" cargo build and the cargo rustc calls to perform
per-binary error handling and messaging so failures are disambiguated and
partial state can be cleaned up.
# Check if we already have valid binaries that are up-to-date with source
# We check if the binaries in binaries/ are newer than or same age as target/debug binaries
if [[ -f "$CLI_PATH" && -f "$DAEMON_PATH" && -f "$CLI_DEBUG_PATH" && -f "$DAEMON_DEBUG_PATH" ]]; then
    # Check if binaries dir files are at least as new as debug binaries
    if [[ ! "$CLI_DEBUG_PATH" -nt "$CLI_PATH" && ! "$DAEMON_DEBUG_PATH" -nt "$DAEMON_PATH" ]]; then
        echo "Dev sidecars already set up and up-to-date, skipping build..."
        ls -la "$BINARIES_DIR"
        exit 0
    else
        echo "Debug binaries are newer, will update sidecar copies..."
    fi
fi
Up‑to‑date check can skip rebuilding after source changes.
The early exit only compares binary timestamps. If Rust sources change but the debug binaries haven’t been rebuilt yet, the script exits and sidecars stay stale. Prefer always running cargo build (incremental) or compare against source mtimes.
🛠️ Minimal fix (remove early exit)
- if [[ ! "$CLI_DEBUG_PATH" -nt "$CLI_PATH" && ! "$DAEMON_DEBUG_PATH" -nt "$DAEMON_PATH" ]]; then
- echo "Dev sidecars already set up and up-to-date, skipping build..."
- ls -la "$BINARIES_DIR"
- exit 0
- else
- echo "Debug binaries are newer, will update sidecar copies..."
- fi
+ if [[ ! "$CLI_DEBUG_PATH" -nt "$CLI_PATH" && ! "$DAEMON_DEBUG_PATH" -nt "$DAEMON_PATH" ]]; then
+ echo "Sidecars present; running cargo build to pick up any source changes..."
+ else
+ echo "Debug binaries are newer, will update sidecar copies..."
+  fi
🤖 Prompt for AI Agents
In `@scripts/dev-setup.sh` around lines 74 - 85, The timestamp-only early-exit
using CLI_PATH, DAEMON_PATH, CLI_DEBUG_PATH and DAEMON_DEBUG_PATH can skip
rebuilding when Rust sources change; remove the early exit branch and ensure the
script always runs the build step (e.g., always invoke cargo build incrementally
before copying sidecars) or replace the timestamp check with a comparison
against source mtimes (e.g., check src/**/*.rs) so sidecar copies are updated;
specifically remove the exit 0 under the "Dev sidecars already set up..." branch
(or change logic to fall through to the cargo build invocation) so stale
sidecars cannot persist.
let up_to_date = if installed {
    if let Ok(target) = std::fs::read_link(&install_path) {
        bundled_path.as_ref().map(|b| target == *b).unwrap_or(false)
    } else {
        // Not a symlink, might be a copy or different installation
        false
    }
} else {
    false
};
🧹 Nitpick | 🔵 Trivial
up_to_date check only works for symlinks, not copied binaries.
The up_to_date logic uses read_link which only works for symlinks. On Windows (which copies files), or if someone manually copies the binary on Unix, this will always return false. Consider comparing file hashes or version info for a more robust check.
🤖 Prompt for AI Agents
In `@src-tauri/src/cli/install.rs` around lines 45 - 54, The current up_to_date
check only handles symlinks via read_link and thus misses copied binaries;
update the logic around up_to_date / install_path / bundled_path to detect
copied installations by comparing the actual file contents or version info
instead of only following symlinks: read both install_path and bundled_path
(when bundled_path is Some) using std::fs::metadata to ensure files exist, then
compare a deterministic signature (e.g., compute a SHA-256 of each file or use
embedded version info) and set up_to_date true when signatures match; add a
helper like compute_file_hash(path: &Path) -> Result<String, Error> and fallback
checks (size + mtime) to avoid expensive hashing where appropriate, and ensure
errors are handled the same way read_link errors were handled.
export const cliInstallStatusQuery = queryOptions({
  queryKey: ["cli", "installStatus"],
  queryFn: () => invoke<CliInstallStatus>("check_cli_installed"),
  staleTime: 10000, // Cache for 10 seconds
})

/**
 * Mutation to install CLI to PATH
 */
export function useInstallCli() {
  const queryClient = useQueryClient()

  return useMutation({
    mutationFn: () => invoke<CliInstallStatus>("install_cli"),
    onSuccess: (data) => {
      // Update the cache with new status
      queryClient.setQueryData(["cli", "installStatus"], data)
    },
  })
}

/**
 * Mutation to uninstall CLI from PATH
 */
export function useUninstallCli() {
  const queryClient = useQueryClient()

  return useMutation({
    mutationFn: () => invoke<CliInstallStatus>("uninstall_cli"),
    onSuccess: (data) => {
      // Update the cache with new status
      queryClient.setQueryData(["cli", "installStatus"], data)
    },
🧹 Nitpick | 🔵 Trivial
Centralize the CLI install-status query key to avoid drift.
The array literal ["cli", "installStatus"] is repeated across the query and both mutations (lines 9, 24, 39). A shared constant prevents subtle mismatches during refactors.
♻️ Proposed refactor
+const cliInstallStatusKey = ["cli", "installStatus"] as const
+
export const cliInstallStatusQuery = queryOptions({
- queryKey: ["cli", "installStatus"],
+ queryKey: cliInstallStatusKey,
queryFn: () => invoke<CliInstallStatus>("check_cli_installed"),
staleTime: 10000, // Cache for 10 seconds
})
@@
mutationFn: () => invoke<CliInstallStatus>("install_cli"),
onSuccess: (data) => {
// Update the cache with new status
- queryClient.setQueryData(["cli", "installStatus"], data)
+ queryClient.setQueryData(cliInstallStatusKey, data)
},
})
}
@@
mutationFn: () => invoke<CliInstallStatus>("uninstall_cli"),
onSuccess: (data) => {
// Update the cache with new status
- queryClient.setQueryData(["cli", "installStatus"], data)
+ queryClient.setQueryData(cliInstallStatusKey, data)
},
})
}
🤖 Prompt for AI Agents
In `@src/lib/queries/cli.ts` around lines 8 - 40, Centralize the repeated query
key by introducing a single exported constant (e.g., CLI_INSTALL_STATUS_KEY) and
replace the array literal usages in cliInstallStatusQuery (queryKey),
useInstallCli (queryClient.setQueryData) and useUninstallCli
(queryClient.setQueryData) with that constant; ensure the new constant is used
in queryOptions({ queryKey: CLI_INSTALL_STATUS_KEY }) and in both onSuccess
handlers where queryClient.setQueryData is called so all three places reference
the same identifier.
mutationFn: async (terminalId: string) => {
  console.log(
    `%c[RECONNECT MUTATION]%c Calling terminalApi.restart`,
    "background: #8b5cf6; color: white; padding: 2px 6px; border-radius: 3px;",
    "",
    terminalId
  )

  // Add timeout to prevent hanging forever if daemon dies
  const timeoutMs = 15000
  const timeoutPromise = new Promise<never>((_, reject) => {
    setTimeout(() => reject(new Error("Reconnect timed out after 15s")), timeoutMs)
  })

  try {
    const result = await Promise.race([
      terminalApi.restart(terminalId),
      timeoutPromise,
    ])
    console.log(
      `%c[RECONNECT MUTATION]%c terminalApi.restart resolved`,
      "background: #8b5cf6; color: white; padding: 2px 6px; border-radius: 3px;",
      "color: #22c55e;",
      result
    )
    return result
  } catch (err) {
    console.error(
      `%c[RECONNECT MUTATION]%c terminalApi.restart rejected/timed out`,
      "background: #8b5cf6; color: white; padding: 2px 6px; border-radius: 3px;",
      "color: #ef4444;",
      err
    )
    throw err
  }
🧹 Nitpick | 🔵 Trivial
Remove verbose debug logging before merging.
The styled console.log statements (lines 149-154, 167-172, 175-180) are useful for development but will clutter production console output. Consider removing them or gating behind a debug flag.
♻️ Suggested cleanup
return useMutation({
mutationFn: async (terminalId: string) => {
- console.log(
- `%c[RECONNECT MUTATION]%c Calling terminalApi.restart`,
-      "background: #8b5cf6; color: white; padding: 2px 6px; border-radius: 3px;",
- "",
- terminalId
- )
-
// Add timeout to prevent hanging forever if daemon dies
const timeoutMs = 15000
const timeoutPromise = new Promise<never>((_, reject) => {
setTimeout(() => reject(new Error("Reconnect timed out after 15s")), timeoutMs)
})
- try {
- const result = await Promise.race([
- terminalApi.restart(terminalId),
- timeoutPromise,
- ])
- console.log(
- `%c[RECONNECT MUTATION]%c terminalApi.restart resolved`,
-      "background: #8b5cf6; color: white; padding: 2px 6px; border-radius: 3px;",
-      "color: #22c55e;",
- result
- )
- return result
- } catch (err) {
- console.error(
- `%c[RECONNECT MUTATION]%c terminalApi.restart rejected/timed out`,
-      "background: #8b5cf6; color: white; padding: 2px 6px; border-radius: 3px;",
-      "color: #ef4444;",
- err
- )
- throw err
- }
+ return Promise.race([
+ terminalApi.restart(terminalId),
+ timeoutPromise,
+ ])
},
🤖 Prompt for AI Agents
In `@src/lib/queries/terminals.ts` around lines 148 - 182, The mutationFn contains
several styled console.log/console.error calls around terminalApi.restart and
the timeoutPromise which should be removed or gated for production; update
mutationFn to either delete the three console.* statements (the initial
"RECONNECT MUTATION Calling terminalApi.restart", the "terminalApi.restart
resolved", and the "terminalApi.restart rejected/timed out") or wrap them behind
a runtime debug flag (e.g., if (DEBUG) { ... }) so only non-verbose logs remain
while preserving the timeout logic and existing error flow.
```ts
  onSuccess: (updatedTerminal) => {
    console.log(
      `%c[RECONNECT MUTATION]%c onSuccess`,
      "background: #8b5cf6; color: white; padding: 2px 6px; border-radius: 3px;",
      "color: #22c55e;",
      updatedTerminal.id
    )
    // DO NOT clear terminal output - preserve history for reconnect
    // Just update terminal detail and status
    queryClient.setQueryData(
      queryKeys.terminals.detail(updatedTerminal.id),
      updatedTerminal
    )
    queryClient.invalidateQueries({
      queryKey: queryKeys.terminals.list(updatedTerminal.project_id),
    })
  },
  onError: (err) => {
    console.error(
      `%c[RECONNECT MUTATION]%c onError`,
      "background: #8b5cf6; color: white; padding: 2px 6px; border-radius: 3px;",
      "color: #ef4444;",
      err
    )
  },
})
```
🧹 Nitpick | 🔵 Trivial
Remove debug logging from mutation callbacks.
Same as above—the styled console statements in onSuccess and onError should be removed for production.
♻️ Suggested cleanup
```diff
 onSuccess: (updatedTerminal) => {
-  console.log(
-    `%c[RECONNECT MUTATION]%c onSuccess`,
-    "background: #8b5cf6; color: white; padding: 2px 6px; border-radius: 3px;",
-    "color: #22c55e;",
-    updatedTerminal.id
-  )
   // DO NOT clear terminal output - preserve history for reconnect
   // Just update terminal detail and status
   queryClient.setQueryData(
     queryKeys.terminals.detail(updatedTerminal.id),
     updatedTerminal
   )
   queryClient.invalidateQueries({
     queryKey: queryKeys.terminals.list(updatedTerminal.project_id),
   })
 },
-onError: (err) => {
-  console.error(
-    `%c[RECONNECT MUTATION]%c onError`,
-    "background: #8b5cf6; color: white; padding: 2px 6px; border-radius: 3px;",
-    "color: #ef4444;",
-    err
-  )
-},
 })
```
🤖 Prompt for AI Agents
In `@src/lib/queries/terminals.ts` around lines 184 - 209, Remove the styled
console.log and console.error debug statements from the reconnection mutation
callbacks: delete the console.log in the onSuccess callback (the one logging
`[RECONNECT MUTATION] onSuccess` and updatedTerminal.id) and the console.error
in onError (the one logging `[RECONNECT MUTATION] onError` and err); keep the
existing logic that updates data via
queryClient.setQueryData(queryKeys.terminals.detail(updatedTerminal.id),
updatedTerminal) and
queryClient.invalidateQueries(queryKeys.terminals.list(updatedTerminal.project_id))
in onSuccess and retain the onError callback body if it needs to handle errors
(but without console.error), or replace with a proper error handler if required.
```diff
+// Batch terminal output to avoid excessive re-renders
+const outputBuffers = new Map<string, string[]>();
+let flushScheduled = false;
+
+function flushOutputBuffers(queryClient: QueryClient) {
+  flushScheduled = false;
+
+  for (const [terminal_id, chunks] of outputBuffers) {
+    if (chunks.length === 0) continue;
+
+    // Batch append all chunks at once
+    queryClient.setQueryData<string[]>(
+      eventQueryKeys.terminalOutput(terminal_id),
+      (oldData = []) => [...oldData, ...chunks],
+    );
+  }
+
+  outputBuffers.clear();
+}
+
 function handleTerminalOutput(
   queryClient: QueryClient,
-  event: TerminalOutputEvent
+  event: TerminalOutputEvent,
 ) {
-  const { terminal_id, data } = event
+  const { terminal_id, data } = event;

-  // Append output to the terminal's output cache
-  queryClient.setQueryData<string[]>(
-    eventQueryKeys.terminalOutput(terminal_id),
-    (oldData = []) => [...oldData, data]
-  )
+  // Buffer the output chunk
+  const buffer = outputBuffers.get(terminal_id) ?? [];
+  buffer.push(data);
+  outputBuffers.set(terminal_id, buffer);

-  // Mark as unseen (store handles active terminal check internally)
-  unseenStore.markUnseen(terminal_id)
+  // Schedule a flush on next animation frame (batches rapid events)
+  if (!flushScheduled) {
+    flushScheduled = true;
+    requestAnimationFrame(() => flushOutputBuffers(queryClient));
+  }
 }
```
Output buffer is module-level and never cleared on component unmount.
The outputBuffers Map persists across component lifecycles. If a terminal is closed and reopened, stale data could accumulate. Additionally, if the component unmounts during a pending requestAnimationFrame, the callback will still fire.
Proposed improvement
Consider clearing buffers for a terminal when it's removed from cache, and canceling pending animation frames on cleanup:
```ts
let flushFrameId: number | null = null;

function flushOutputBuffers(queryClient: QueryClient) {
  flushScheduled = false;
  flushFrameId = null;
  // ... existing logic
}

// In useTauriEvents cleanup:
return () => {
  if (flushFrameId !== null) {
    cancelAnimationFrame(flushFrameId);
  }
  unlisteners.forEach((unlisten) => {
    unlisten.then((fn) => fn());
  });
};
```
};🤖 Prompt for AI Agents
In `@src/lib/tauri-events.ts` around lines 49 - 85, Module-level outputBuffers and
scheduled RAFs persist across component lifecycles; update the buffering code to
track and cancel the RAF and to clear per-terminal buffers when a terminal is
removed. Specifically, add a flushFrameId variable, set flushFrameId =
requestAnimationFrame(...) inside handleTerminalOutput instead of only toggling
flushScheduled, clear flushFrameId inside flushOutputBuffers, and call
cancelAnimationFrame(flushFrameId) during useTauriEvents cleanup; also ensure
you remove stale entries from outputBuffers (e.g.,
outputBuffers.delete(terminal_id)) when a terminal is closed or when its cache
entry is evicted (hook into the terminal removal/close handler or subscribe to
the QueryClient cache removal) so buffers don’t accumulate for closed terminals.
```ts
function handleHookEvent(queryClient: QueryClient, event: HookEvent) {
  const agentColors: Record<string, string> = {
    claude: "background: #d97706;",
    codex: "background: #059669;",
    opencode: "background: #7c3aed;",
    gemini: "background: #2563eb;",
    cursor: "background: #dc2626;",
    unknown: "background: #6b7280;",
  };

  // Look up project name from cache
  let projectName = event.project_id || "unknown";
  if (event.project_id) {
    const projectData = queryClient.getQueryData<{ name: string }>(
      queryKeys.projects.detail(event.project_id)
    );
    if (projectData?.name) {
      projectName = projectData.name;
    }
  }

  // Look up terminal name from cache
  let terminalName = "unknown";
  const terminalData = queryClient.getQueryData<TerminalInfo>(
    queryKeys.terminals.detail(event.terminal_id)
  );
  if (terminalData?.name) {
    terminalName = terminalData.name;
  } else if (event.project_id) {
    // Try to find in project's terminal list
    const terminals = queryClient.getQueryData<TerminalInfo[]>(
      queryKeys.terminals.list(event.project_id)
    );
    const terminal = terminals?.find((t) => t.id === event.terminal_id);
    if (terminal?.name) {
      terminalName = terminal.name;
    }
  }

  // Look up client display name from cache
  let agentDisplayName = event.agent;
  const clients = queryClient.getQueryData<ClientConfig[]>(queryKeys.clients.list());
  const client = clients?.find((c) => c.id === event.agent || c.name.toLowerCase().includes(event.agent));
  if (client?.name) {
    agentDisplayName = client.name;
  }

  const bgColor = agentColors[event.agent] || agentColors.unknown;

  console.log(
    `%c[HOOK ${event.agent.toUpperCase()}]%c ${event.event}`,
    `${bgColor} color: white; padding: 2px 6px; border-radius: 3px;`,
    "color: #a78bfa; font-weight: bold;",
  );
  console.log("  project:", projectName);
  console.log("  terminal:", terminalName);
  console.log("  agent:", agentDisplayName);
  console.log("  event:", event.event);
  if (event.payload) {
    try {
      const parsed = JSON.parse(event.payload);
      console.log("  payload:", parsed);
    } catch {
      console.log("  payload:", event.payload);
    }
  }
  console.log("  timestamp:", new Date().toISOString());
}
```
🧹 Nitpick | 🔵 Trivial
Hook event handler has heavy console logging that may impact performance in production.
The handleHookEvent function performs multiple cache lookups and extensive console logging for every hook event. Consider gating debug logging behind a development flag.
Proposed improvement
```ts
const DEBUG_HOOKS = import.meta.env.DEV;

function handleHookEvent(queryClient: QueryClient, event: HookEvent) {
  if (!DEBUG_HOOKS) return;
  // ... existing logging logic
}
```
🤖 Prompt for AI Agents
In `@src/lib/tauri-events.ts` around lines 178 - 245, The handleHookEvent function
does expensive cache lookups and verbose console logging on every hook; guard
this behind a dev-only flag by adding a DEBUG_HOOKS constant (e.g. const
DEBUG_HOOKS = import.meta.env.DEV) at file scope and early-return from
handleHookEvent when DEBUG_HOOKS is false, and move all cache lookups, payload
JSON.parse and console.log calls inside that guarded branch so production
runtime avoids the work and logs. Ensure you reference the existing
handleHookEvent function and the queryClient/getQueryData calls so you only skip
the logging and lookups when DEBUG_HOOKS is false.
```rust
/// Get current daemon status
pub fn get_status(dev_mode: bool) -> DaemonStatus {
    let pid = read_pid(dev_mode);
    let port = read_port(dev_mode);

    // Check if process is actually running
    let running = if let Some(pid) = pid {
        is_process_running(pid)
    } else if let Some(port) = port {
        // No PID file but port file exists - probe the port
        probe_port(port)
    } else {
        false
    };

    DaemonStatus { running, pid, port }
}
```
🧹 Nitpick | 🔵 Trivial
Potential race condition in get_status.
There's a TOCTOU race between checking if the process is running and using the status. If the daemon exits immediately after the check, the returned status could be stale. This is acceptable for CLI status display but callers should be aware.
🤖 Prompt for AI Agents
In `@src-tauri/src/cli/daemon.rs` around lines 231 - 247, get_status has a TOCTOU
race — it reads PID/port (read_pid/read_port) then checks liveness
(is_process_running/probe_port) so the returned DaemonStatus can be stale if the
daemon exits immediately after the check; update the get_status doc comment to
explicitly state this race and that callers must not assume the status is
authoritative, and add guidance that callers who need stronger guarantees should
re-check using is_process_running or probe_port (or implement retry/timeout
logic) or call a new verification helper you add (e.g., verify_running(pid,
port)) to perform an active probe before acting.
```rust
/// Check if a process is running by PID
#[cfg(unix)]
fn is_process_running(pid: u32) -> bool {
    use nix::sys::signal::kill;
    use nix::unistd::Pid;

    // Signal 0 doesn't send a signal but checks if process exists
    kill(Pid::from_raw(pid as i32), None).is_ok()
}

#[cfg(not(unix))]
fn is_process_running(_pid: u32) -> bool {
    // On non-Unix, fall back to port probing
    false
}
```
🧹 Nitpick | 🔵 Trivial
Code duplication: is_process_running exists in daemon/pid.rs.
This function duplicates is_process_running from src-tauri/src/daemon/pid.rs. Consider reusing the existing implementation to maintain consistency.
♻️ Suggested refactor to reuse existing function
```diff
+use crate::daemon::pid::is_process_running;
+
-/// Check if a process is running by PID
-#[cfg(unix)]
-fn is_process_running(pid: u32) -> bool {
-    use nix::sys::signal::kill;
-    use nix::unistd::Pid;
-
-    // Signal 0 doesn't send a signal but checks if process exists
-    kill(Pid::from_raw(pid as i32), None).is_ok()
-}
-
-#[cfg(not(unix))]
-fn is_process_running(_pid: u32) -> bool {
-    // On non-Unix, fall back to port probing
-    false
-}
```
🤖 Prompt for AI Agents
In `@src-tauri/src/cli/daemon.rs` around lines 263 - 277, Remove this duplicated
is_process_running implementation and instead call the canonical function from
the existing module (the one in daemon/pid.rs). Replace the local function with
an import/use of the existing is_process_running (e.g., use
crate::daemon::pid::is_process_running or the correct module path) and ensure
any cfg gating remains satisfied by relying on the original's platform handling;
update any references in this file to call that function rather than the
duplicate.
```rust
/// Probe if the daemon port is responding
fn probe_port(port: u16) -> bool {
    use std::net::TcpStream;
    TcpStream::connect(format!("127.0.0.1:{}", port)).is_ok()
}
```
🧹 Nitpick | 🔵 Trivial
Code duplication: probe_port exists in daemon/client.rs.
This synchronous probe_port duplicates similar logic in src-tauri/src/daemon/client.rs (async version). Consider extracting to a shared utility.
🤖 Prompt for AI Agents
In `@src-tauri/src/cli/daemon.rs` around lines 279 - 283, The sync probe_port
function duplicates logic already present in the async probe in daemon::client;
extract the common TCP-connect check into a shared utility (e.g.,
daemon::net::is_port_open or a helper function in a common module) and have both
probe_port (in this file) and the async probe in daemon::client call that
utility (the sync probe can be a thin wrapper returning the helper's bool).
Update imports and remove the duplicated TcpStream::connect logic from
probe_port so only the shared helper contains the implementation.
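One way this extraction could look is sketched below; the `is_port_open` name and module placement are hypothetical, and the sync `probe_port` becomes a thin wrapper (the async probe in `daemon/client.rs` could call the same helper via `spawn_blocking` or keep its own async connect over the same address logic):

```rust
use std::net::{SocketAddr, TcpStream};
use std::time::Duration;

/// Hypothetical shared helper (e.g. in a common `net` module):
/// true if something accepts TCP connections on 127.0.0.1:port.
pub fn is_port_open(port: u16) -> bool {
    let addr = SocketAddr::from(([127, 0, 0, 1], port));
    // Bounded timeout so status checks stay fast even on a filtered port.
    TcpStream::connect_timeout(&addr, Duration::from_millis(500)).is_ok()
}

/// The CLI's sync probe becomes a thin wrapper over the shared helper.
fn probe_port(port: u16) -> bool {
    is_port_open(port)
}
```

Using `connect_timeout` rather than a bare `connect` also fixes a latent issue in the current sync probe: a firewalled port can otherwise stall the status command for the OS default connect timeout.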
```rust
/// Follow a file (like tail -f)
fn tail_follow(path: &Path, initial_lines: usize) -> Result<(), String> {
    // First show initial lines
    tail_file(path, initial_lines)?;

    // Then follow
    println!("--- Following log (Ctrl+C to stop) ---");

    let mut file = fs::File::open(path)
        .map_err(|e| format!("Failed to open log file: {}", e))?;

    // Seek to end
    file.seek(SeekFrom::End(0))
        .map_err(|e| format!("Failed to seek: {}", e))?;

    let mut reader = io::BufReader::new(file);

    loop {
        let mut line = String::new();
        match reader.read_line(&mut line) {
            Ok(0) => {
                // No data - wait and retry
                std::thread::sleep(Duration::from_millis(100));
            }
            Ok(_) => {
                print!("{}", line);
                io::stdout().flush().ok();
            }
            Err(e) => {
                return Err(format!("Error reading log: {}", e));
            }
        }
    }
}
```
tail_follow will spin indefinitely on log rotation.
If the log file is rotated (deleted and recreated), the follow loop will keep reading from the old file handle which won't receive new data. Consider detecting log rotation via inode change or file size decrease.
🤖 Prompt for AI Agents
In `@src-tauri/src/cli/daemon.rs` around lines 410 - 443, tail_follow currently
keeps reading an old file handle and will spin forever after log rotation;
update tail_follow to detect rotation by capturing the watched file's metadata
(e.g., device+inode or file size) before entering the loop and on each Ok(0) (no
data) call fs::metadata(path) to compare against the open file's metadata (via
file.metadata()); if metadata differs (inode/device changed) or size decreased,
close and reopen the file, recreate the io::BufReader, and reset seeking
behavior (seek to start or end as appropriate) so the loop follows the new file
handle; update references to the mutable file/reader variables inside
tail_follow to support reopening.
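A minimal sketch of the rotation check described above, assuming a Unix target (it compares device and inode via `MetadataExt`, with a size-decrease check for in-place truncation); `has_rotated` and its `bytes_read` parameter are illustrative names, not part of the PR:

```rust
use std::fs::{self, File};
use std::io;
use std::os::unix::fs::MetadataExt; // Unix-only: exposes dev()/ino()
use std::path::Path;

/// Returns Ok(true) if the file at `path` is no longer the file behind
/// `open` (rotated to a new inode, deleted, or truncated below our offset).
fn has_rotated(open: &File, path: &Path, bytes_read: u64) -> io::Result<bool> {
    let on_disk = match fs::metadata(path) {
        Ok(m) => m,
        // Path gone entirely: treat as rotation in progress.
        Err(e) if e.kind() == io::ErrorKind::NotFound => return Ok(true),
        Err(e) => return Err(e),
    };
    let held = open.metadata()?;
    // A different device or inode means the path now points at a new file.
    if on_disk.dev() != held.dev() || on_disk.ino() != held.ino() {
        return Ok(true);
    }
    // Same file but smaller than what we already consumed: truncated.
    Ok(on_disk.len() < bytes_read)
}
```

In the `Ok(0)` (no data) branch of the follow loop, a call like `has_rotated(&file, path, offset)` could then decide whether to reopen the path and rebuild the `BufReader` so the loop follows the new file.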
```rust
pub struct DaemonClient {
    out_tx: mpsc::UnboundedSender<String>,
    pending: Arc<Mutex<HashMap<String, oneshot::Sender<DaemonResponse>>>>,
}

impl DaemonClient {
    pub async fn connect(app_handle: AppHandle) -> Result<Self> {
        let port = ensure_daemon_running().await?;
        let addr = format!("127.0.0.1:{port}");
        let stream = TcpStream::connect(&addr)
            .await
            .map_err(|e| Error::TerminalError(e.to_string()))?;
        info!(addr = %addr, "daemon client connected");

        let (reader, writer) = stream.into_split();
        let mut reader = BufReader::new(reader).lines();

        let (out_tx, mut out_rx) = mpsc::unbounded_channel::<String>();
        let pending: Arc<Mutex<HashMap<String, oneshot::Sender<DaemonResponse>>>> =
            Arc::new(Mutex::new(HashMap::new()));

        let pending_for_read = pending.clone();
        let app_handle_for_read = app_handle.clone();

        tokio::spawn(async move {
            let mut writer = writer;
            while let Some(line) = out_rx.recv().await {
                if writer.write_all(line.as_bytes()).await.is_err() {
                    break;
                }
                if writer.write_all(b"\n").await.is_err() {
                    break;
                }
            }
        });

        tokio::spawn(async move {
            while let Ok(Some(line)) = reader.next_line().await {
                let message: DaemonMessage = match serde_json::from_str(&line) {
                    Ok(msg) => msg,
                    Err(err) => {
                        warn!(error = %err, "daemon message parse failed");
                        continue;
                    }
                };

                match message {
                    DaemonMessage::Response { id, response } => {
                        let sender = pending_for_read.lock().remove(&id);
                        if let Some(sender) = sender {
                            let _ = sender.send(response);
                        }
                    }
                    DaemonMessage::Event { event } => {
                        debug!(event = ?event, "daemon event");
                        emit_daemon_event(&app_handle_for_read, event);
                    }
                    _ => {}
                }
            }

            warn!("daemon connection closed");
        });

        Ok(Self { out_tx, pending })
    }
```
Missing connection loss detection and cleanup.
The reader task logs "daemon connection closed" but there's no mechanism to notify the DaemonClient owner or attempt reconnection. Pending requests will hang until their oneshot receivers are dropped.
🔧 Consider adding connection state tracking
```diff
 pub struct DaemonClient {
     out_tx: mpsc::UnboundedSender<String>,
     pending: Arc<Mutex<HashMap<String, oneshot::Sender<DaemonResponse>>>>,
+    connected: Arc<std::sync::atomic::AtomicBool>,
 }

 // In connect():
+let connected = Arc::new(std::sync::atomic::AtomicBool::new(true));
+let connected_for_read = connected.clone();

 // In reader task on connection close:
+connected_for_read.store(false, std::sync::atomic::Ordering::SeqCst);
 warn!("daemon connection closed");

 // In send_request():
+if !self.connected.load(std::sync::atomic::Ordering::SeqCst) {
+    return Err(Error::TerminalError("Daemon connection closed".into()));
+}
```
🤖 Prompt for AI Agents
In `@src-tauri/src/daemon/client.rs` around lines 17 - 82, The reader task
currently just logs "daemon connection closed" but leaves pending requests
hanging; fix by having the reader task (in connect) cleanly notify and unblock
everyone: clone the out_tx into the read task (out_tx_for_read) and when the
read loop ends call out_tx_for_read.close_channel() to stop the writer, then
lock pending_for_read, drain the HashMap and send an error/closure
DaemonResponse via each oneshot sender so callers get unblocked (use the same
DaemonResponse variant your code expects for errors), and additionally add a
lightweight connection state notifier field to DaemonClient (e.g., a
tokio::watch::Sender or broadcast::Sender named connection_state or
connection_notifier) and send a "lost" state from the reader task so the owner
can react/reconnect; update DaemonClient::connect to initialize and return this
notifier alongside out_tx and pending.
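The drain-on-disconnect part can be sketched independently of tokio; in this simplified synchronous version, `std::sync::mpsc::Sender` stands in for the oneshot senders, and `DaemonResponse::Error` is an assumed variant (the real error shape lives in the protocol layer):

```rust
use std::collections::HashMap;
use std::sync::mpsc::Sender;
use std::sync::{Arc, Mutex};

// Assumed stand-in for the real protocol type; the actual error
// variant in the protocol module may be shaped differently.
#[derive(Debug)]
enum DaemonResponse {
    Error(String),
}

type Pending = Arc<Mutex<HashMap<String, Sender<DaemonResponse>>>>;

/// When the reader task sees the connection close, drain every pending
/// request and complete it with an error so callers unblock immediately.
fn fail_pending_on_disconnect(pending: &Pending) {
    let mut map = pending.lock().unwrap();
    for (id, sender) in map.drain() {
        // The receiver may already be dropped; ignore send failures.
        let _ = sender.send(DaemonResponse::Error(format!(
            "daemon connection closed before response to request {id}"
        )));
    }
}
```

Run from the reader task right after the `warn!("daemon connection closed")` log, this turns silently hung `send_request` calls into prompt errors that the UI can surface or retry.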
```rust
fn spawn_pty(
    &self,
    terminal: &mut Terminal,
    cols: u16,
    rows: u16,
    persistence: Arc<Mutex<SessionPersistence>>,
    shutdown: Arc<AtomicBool>,
) -> Result<(PtyHandle, JoinHandle<()>)> {
    info!(terminal_id = %terminal.id, "spawn_pty: starting");

    self.ensure_claude_settings_file();

    let shell = ShellConfig::detect(self.shell_override.read().clone());
    terminal.shell = Some(shell.path.to_string_lossy().to_string());

    info!(
        terminal_id = %terminal.id,
        shell = %shell.path.display(),
        cols = cols,
        rows = rows,
        "spawn_pty: opening PTY"
    );

    let pty_system = NativePtySystem::default();
    let pair = pty_system
        .openpty(PtySize { rows, cols, pixel_width: 0, pixel_height: 0 })
        .map_err(|e| {
            error!(terminal_id = %terminal.id, error = %e, "spawn_pty: failed to open PTY");
            Error::TerminalError(e.to_string())
        })?;

    info!(terminal_id = %terminal.id, "spawn_pty: PTY opened");

    let mut cmd = CommandBuilder::new(&shell.path);
    cmd.args(&shell.login_args);

    if shell.name == "bash" {
        cmd.arg("--rcfile");
        cmd.arg(self.wrapper_dir.join("bash/.bashrc"));
    }

    let command_line = format_command_line(&terminal.command);
    cmd.arg("-c");
    cmd.arg(&command_line);

    info!(
        terminal_id = %terminal.id,
        working_dir = %terminal.working_dir.display(),
        command_line = %command_line,
        "spawn_pty: preparing command"
    );

    cmd.cwd(&terminal.working_dir);

    let env = build_terminal_env(
        &shell,
        &self.wrapper_dir,
        &self.ada_home,
        &self.ada_bin_dir,
        &terminal.id,
        &terminal.project_id,
        self.notification_port,
    );

    for (key, value) in &env {
        cmd.env(key, value);
    }

    for (key, value) in &terminal.command.env {
        cmd.env(key, value);
    }

    info!(terminal_id = %terminal.id, "spawn_pty: spawning command");

    let _child = pair
        .slave
        .spawn_command(cmd)
        .map_err(|e| {
            error!(terminal_id = %terminal.id, error = %e, "spawn_pty: failed to spawn command");
            Error::TerminalError(e.to_string())
        })?;

    info!(terminal_id = %terminal.id, "spawn_pty: command spawned successfully");

    drop(pair.slave);

    info!(terminal_id = %terminal.id, "spawn_pty: cloning reader");

    let mut reader = pair
        .master
        .try_clone_reader()
        .map_err(|e| {
            error!(terminal_id = %terminal.id, error = %e, "spawn_pty: failed to clone reader");
            Error::TerminalError(e.to_string())
        })?;

    info!(terminal_id = %terminal.id, "spawn_pty: reader cloned, spawning reader thread");

    let terminal_id = terminal.id.clone();
    let project_id = terminal.project_id.clone();
    let event_tx = self.event_tx.clone();
    let sessions = self.sessions.clone();

    let reader_handle = thread::Builder::new()
        .name(format!("pty-reader-{}", terminal_id))
        .spawn(move || {
            info!(terminal_id = %terminal_id, "pty-reader: thread started");
            let mut buffer = [0u8; 4096];
            loop {
                // Check shutdown flag before blocking on read
                if shutdown.load(Ordering::SeqCst) {
                    info!(terminal_id = %terminal_id, "pty-reader: shutdown flag set, exiting");
                    break;
                }

                match reader.read(&mut buffer) {
                    Ok(0) => {
                        info!(terminal_id = %terminal_id, "pty-reader: EOF received, exiting");
                        break;
                    }
                    Ok(n) => {
                        let output = String::from_utf8_lossy(&buffer[..n]).to_string();
                        {
                            let mut persistence = persistence.lock();
                            if let Err(e) = persistence.write_output(output.as_bytes()) {
                                warn!(terminal_id = %terminal_id, error = %e, "failed to write output to persistence");
                            }
                        }
                        // Send to broadcast channel - use send() which won't block
                        // If channel is full, old messages are dropped (lagged)
                        if let Err(e) = event_tx.send(DaemonEvent::TerminalOutput {
                            terminal_id: terminal_id.clone(),
                            data: output,
                        }) {
                            // No receivers or lagged - that's okay, output is persisted
                            if event_tx.receiver_count() == 0 {
                                // No receivers at all, that's fine
                            } else {
                                warn!(terminal_id = %terminal_id, error = %e, "failed to send terminal output event");
                            }
                        }
                    }
                    Err(e) => {
                        // Check if this is a normal shutdown
                        if !shutdown.load(Ordering::SeqCst) {
                            warn!(terminal_id = %terminal_id, error = %e, "PTY read error");
                        }
                        break;
                    }
                }
            }

            // Update session status on exit
            info!(terminal_id = %terminal_id, "pty-reader: updating session status to Stopped");
            {
                let mut sessions = sessions.write();
                if let Some(entry) = sessions.get_mut(&terminal_id) {
                    entry.terminal.status = TerminalStatus::Stopped;
                    entry.pty = None;
                    let mut persistence = entry.persistence.lock();
                    let _ = persistence.mark_ended();
                    // Send status event
                    info!(terminal_id = %terminal_id, "pty-reader: sending TerminalStatus::Stopped event");
                    if let Err(e) = event_tx.send(DaemonEvent::TerminalStatus {
                        terminal_id: terminal_id.clone(),
                        project_id: project_id.clone(),
                        status: TerminalStatus::Stopped,
                    }) {
                        warn!(terminal_id = %terminal_id, error = %e, "failed to send terminal status event");
                    }
                } else {
                    warn!(terminal_id = %terminal_id, "pty-reader: session not found on exit");
                }
            }
            info!(terminal_id = %terminal_id, "pty-reader: thread exiting");
        })
        .map_err(|e| {
            error!(error = %e, "spawn_pty: failed to spawn reader thread");
            Error::TerminalError(format!("failed to spawn PTY reader thread: {}", e))
        })?;

    info!(terminal_id = %terminal.id, "spawn_pty: reader thread spawned, taking writer");

    let writer = pair
        .master
        .take_writer()
        .map_err(|e| {
            error!(terminal_id = %terminal.id, error = %e, "spawn_pty: failed to take writer");
            Error::TerminalError(e.to_string())
        })?;

    info!(terminal_id = %terminal.id, "spawn_pty: completed successfully");

    Ok((
        PtyHandle {
            master: Arc::new(Mutex::new(pair.master)),
            writer: Arc::new(Mutex::new(writer)),
        },
        reader_handle,
    ))
}
```
🧹 Nitpick | 🔵 Trivial
spawn_pty implementation is thorough but complex.
The PTY spawn logic correctly sets up environment, shell wrappers, and the reader thread. The reader thread properly handles shutdown signals and broadcasts events. However, the 200+ line method could benefit from extraction of sub-functions.
🤖 Prompt for AI Agents
In `@src-tauri/src/daemon/session.rs` around lines 530 - 730, spawn_pty is too
large and should be broken into smaller helper functions to improve readability
and testability; refactor by extracting: (1) PTY creation and child spawn into a
function like open_and_spawn_pty(shell: &ShellConfig, terminal: &Terminal, cols:
u16, rows: u16) -> (PtyPair, Child) which contains NativePtySystem::openpty and
pair.slave.spawn_command calls, (2) command preparation into
prepare_command(terminal: &Terminal, shell: &ShellConfig, wrapper_dir: &Path) ->
CommandBuilder that builds args, cwd and env, (3) environment building is
already in build_terminal_env but apply it via the new prepare_command, (4)
reader thread logic into spawn_pty_reader(reader: impl Read + Clone,
persistence: Arc<Mutex<SessionPersistence>>, shutdown: Arc<AtomicBool>,
event_tx: BroadcastSender, sessions: SessionsMap, terminal_id: String,
project_id: Option<String>) -> JoinHandle<()> which contains the loop,
persistence writes, event sends and session status update, and (5) writer
acquisition into take_pty_writer(master: PtyMaster) -> Writer; then update
spawn_pty to call these helpers in sequence and simply assemble the PtyHandle
and reader_handle. Keep error handling semantics and log calls (info/warn/error)
inside the extracted helpers and reuse existing symbols: spawn_pty, ShellConfig,
build_terminal_env, SessionPersistence, PtyHandle, pair.master, pair.slave,
event_tx, sessions.
```rust
if let Err(e) = event_tx.send(DaemonEvent::TerminalOutput {
    terminal_id: terminal_id.clone(),
    data: output,
}) {
    // No receivers or lagged - that's okay, output is persisted
    if event_tx.receiver_count() == 0 {
        // No receivers at all, that's fine
    } else {
        warn!(terminal_id = %terminal_id, error = %e, "failed to send terminal output event");
    }
}
```
🧹 Nitpick | 🔵 Trivial
Silent suppression of send errors when no receivers.
When event_tx.receiver_count() == 0, errors are silently ignored. This is acceptable since output is persisted, but a debug! log could help diagnose connection issues.
🤖 Prompt for AI Agents
In `@src-tauri/src/daemon/session.rs` around lines 660-670, add a debug log when
send fails due to zero receivers instead of silently ignoring it: inside the
error branch that checks event_tx.receiver_count() == 0 (the block handling
send(DaemonEvent::TerminalOutput { terminal_id, data: output })), call debug!
with terminal_id and the send error (e) to record that there were no active
listeners; keep the existing warn! for non-zero receiver_count cases unchanged.
```rust
// Always try to connect to daemon, spawn it if needed
let daemon = match tauri::async_runtime::block_on(Self::ensure_daemon_and_connect(app_handle.clone())) {
    Ok(client) => Some(Arc::new(client)),
    Err(e) => {
        tracing::error!(error = %e, "failed to connect to daemon");
        None
    }
};
```
Blocking async call in constructor may cause issues.
tauri::async_runtime::block_on in new() blocks the thread during initialization. If this is called on the main thread, it could cause UI freezes. Consider making initialization lazy or running on a background thread.
🤖 Prompt for AI Agents
In `@src-tauri/src/state.rs` around lines 32-40, the constructor new() currently
calls tauri::async_runtime::block_on(Self::ensure_daemon_and_connect(...)) which
blocks the thread; remove the blocking call and make daemon initialization
asynchronous: return State from new() with daemon set to None (or wrap it in a
shared async-aware container like Arc<RwLock<Option<DaemonClient>>>), then spawn
a background task (using tauri::async_runtime::spawn or tokio::spawn) that runs
ensure_daemon_and_connect(app_handle.clone()), logs errors via tracing::error!,
and stores Some(Arc::new(client)) into the shared daemon slot when ready;
alternatively expose an async init method (e.g., async fn init_daemon(&self))
that callers can await instead of blocking in new().
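The non-blocking construction pattern can be sketched with std types only. This is a minimal illustration, not the PR's code: the `DaemonClient` stand-in and timings are invented, and the real implementation would use `tauri::async_runtime::spawn` with an async `ensure_daemon_and_connect` rather than a raw thread:

```rust
use std::sync::{Arc, RwLock};
use std::thread;
use std::time::Duration;

// Stand-in for the real DaemonClient from the PR.
struct DaemonClient;

struct AppState {
    // Shared slot that the background task fills in once connected.
    daemon: Arc<RwLock<Option<Arc<DaemonClient>>>>,
}

impl AppState {
    /// new() returns immediately; the (possibly slow) daemon handshake
    /// runs in the background and fills the shared slot when ready.
    fn new() -> Self {
        let slot: Arc<RwLock<Option<Arc<DaemonClient>>>> = Arc::new(RwLock::new(None));
        let slot_bg = Arc::clone(&slot);
        thread::spawn(move || {
            // Simulates ensure_daemon_and_connect(); errors would be logged
            // via tracing::error! instead of aborting construction.
            thread::sleep(Duration::from_millis(50));
            *slot_bg.write().unwrap() = Some(Arc::new(DaemonClient));
        });
        Self { daemon: slot }
    }

    fn is_connected(&self) -> bool {
        self.daemon.read().unwrap().is_some()
    }
}

fn main() {
    let state = AppState::new(); // never blocks, even if the daemon is slow
    thread::sleep(Duration::from_millis(200));
    println!("connected: {}", state.is_connected());
}
```

Callers that need the client before the slot is filled can either retry or await an explicit async init method, as the prompt suggests.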
```rust
/// Ensure daemon is running (spawn via CLI if needed) and connect to it
async fn ensure_daemon_and_connect(app_handle: AppHandle) -> Result<DaemonClient> {
    use std::time::Duration;

    if path.extension().is_some_and(|ext| ext == "json") {
        let content = std::fs::read_to_string(&path)?;
        if let Ok(project) = serde_json::from_str::<AdaProject>(&content) {
            self.projects.write().insert(project.id.clone(), project);
    let dev_mode = cfg!(debug_assertions);
    let data_dir = Self::get_data_dir(dev_mode)?;
    let port_path = data_dir.join("daemon/port");

    // Check if daemon is already running
    if let Ok(port_str) = std::fs::read_to_string(&port_path) {
        if let Ok(port) = port_str.trim().parse::<u16>() {
            if std::net::TcpStream::connect(format!("127.0.0.1:{}", port)).is_ok() {
                tracing::info!(port, "daemon already running, connecting");
                return DaemonClient::connect(app_handle).await;
            }
        }
    }

    // Daemon not running, spawn it using CLI
    tracing::info!("daemon not running, spawning via CLI");
    Self::spawn_daemon_via_cli(dev_mode)?;

    // Wait for daemon to start
    for _ in 0..20 {
        tokio::time::sleep(Duration::from_millis(250)).await;

        if let Ok(port_str) = std::fs::read_to_string(&port_path) {
            if let Ok(port) = port_str.trim().parse::<u16>() {
                if std::net::TcpStream::connect(format!("127.0.0.1:{}", port)).is_ok() {
                    tracing::info!(port, "daemon started, connecting");
                    return DaemonClient::connect(app_handle).await;
                }
            }
        }
    }

    Ok(())
    Err(Error::TerminalError("Daemon did not start within 5 seconds".into()))
}
```
🧹 Nitpick | 🔵 Trivial
Mixing sync and async I/O in ensure_daemon_and_connect.
The function uses std::net::TcpStream::connect (blocking) within an async context. While this works due to the short connection timeout, it blocks the async runtime. Consider using tokio::net::TcpStream consistently.
♻️ Suggested async-consistent implementation

```diff
-    if let Ok(port_str) = std::fs::read_to_string(&port_path) {
-        if let Ok(port) = port_str.trim().parse::<u16>() {
-            if std::net::TcpStream::connect(format!("127.0.0.1:{}", port)).is_ok() {
+    if let Ok(port_str) = tokio::fs::read_to_string(&port_path).await {
+        if let Ok(port) = port_str.trim().parse::<u16>() {
+            if tokio::net::TcpStream::connect(format!("127.0.0.1:{}", port)).await.is_ok() {
                 tracing::info!(port, "daemon already running, connecting");
                 return DaemonClient::connect(app_handle).await;
             }
         }
     }
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```rust
/// Ensure daemon is running (spawn via CLI if needed) and connect to it
async fn ensure_daemon_and_connect(app_handle: AppHandle) -> Result<DaemonClient> {
    use std::time::Duration;

    let dev_mode = cfg!(debug_assertions);
    let data_dir = Self::get_data_dir(dev_mode)?;
    let port_path = data_dir.join("daemon/port");

    // Check if daemon is already running
    if let Ok(port_str) = tokio::fs::read_to_string(&port_path).await {
        if let Ok(port) = port_str.trim().parse::<u16>() {
            if tokio::net::TcpStream::connect(format!("127.0.0.1:{}", port)).await.is_ok() {
                tracing::info!(port, "daemon already running, connecting");
                return DaemonClient::connect(app_handle).await;
            }
        }
    }

    // Daemon not running, spawn it using CLI
    tracing::info!("daemon not running, spawning via CLI");
    Self::spawn_daemon_via_cli(dev_mode)?;

    // Wait for daemon to start
    for _ in 0..20 {
        tokio::time::sleep(Duration::from_millis(250)).await;

        if let Ok(port_str) = tokio::fs::read_to_string(&port_path).await {
            if let Ok(port) = port_str.trim().parse::<u16>() {
                if tokio::net::TcpStream::connect(format!("127.0.0.1:{}", port)).await.is_ok() {
                    tracing::info!(port, "daemon started, connecting");
                    return DaemonClient::connect(app_handle).await;
                }
            }
        }
    }

    Err(Error::TerminalError("Daemon did not start within 5 seconds".into()))
}
```
🤖 Prompt for AI Agents
In `@src-tauri/src/state.rs` around lines 59-96, the async function
ensure_daemon_and_connect is performing blocking I/O (std::fs::read_to_string
and std::net::TcpStream::connect) which can block the runtime; switch to async
APIs: use tokio::fs::read_to_string for reading port_path and use
tokio::net::TcpStream::connect (wrapped in tokio::time::timeout with the same
short deadline, e.g. 250ms) when probing 127.0.0.1:port; keep the existing loop
and calls to Self::spawn_daemon_via_cli and DaemonClient::connect but replace
the blocking checks with the async equivalents so ensure_daemon_and_connect
remains non-blocking.
```rust
/// Get the connection state
pub fn get_connection_state(&self) -> ConnectionState {
    if self.daemon.read().is_some() {
        ConnectionState::Connected
    } else {
        // Check if daemon is running but we're not connected
        let port_path = self.data_dir.join("daemon/port");
        if port_path.exists() {
            if let Ok(content) = std::fs::read_to_string(&port_path) {
                if let Ok(port) = content.trim().parse::<u16>() {
                    if std::net::TcpStream::connect(format!("127.0.0.1:{}", port)).is_ok() {
                        return ConnectionState::Disconnected;
                    }
                }
            }
        }
        ConnectionState::NotRunning
    }
}
```
🧹 Nitpick | 🔵 Trivial
get_connection_state performs synchronous I/O.
This method uses blocking std::fs::read_to_string and std::net::TcpStream::connect which could block the calling thread. Consider making this async or caching the state.
🤖 Prompt for AI Agents
In `@src-tauri/src/state.rs` around lines 251-269, get_connection_state
currently does blocking I/O (std::fs::read_to_string and
std::net::TcpStream::connect) which can stall the caller; change it to a
non-blocking approach by either (A) making get_connection_state async (rename to
async fn get_connection_state) and replace blocking calls with async equivalents
(tokio::fs::read_to_string / tokio::net::TcpStream::connect with a timeout) or
(B) keep a synchronous signature but offload the blocking checks to a background
thread / runtime using spawn_blocking (e.g.
tauri::async_runtime::spawn_blocking) and return a cached/latest ConnectionState
immediately while updating the cache asynchronously; update all callers of
get_connection_state and ensure you still reference the same fields (daemon,
data_dir, port_path) and return
ConnectionState::{Connected,Disconnected,NotRunning} consistently after the
non-blocking check.
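Even if the synchronous signature stays, the probe can at least be bounded with `std::net::TcpStream::connect_timeout` so a wedged daemon cannot stall the caller for the OS-default TCP timeout. A minimal std-only sketch (the helper name is hypothetical, not from the PR):

```rust
use std::net::{SocketAddr, TcpStream};
use std::time::Duration;

/// Probe whether anything is listening on 127.0.0.1:port, giving up
/// after `timeout` instead of waiting for the OS-default TCP timeout.
fn daemon_port_reachable(port: u16, timeout: Duration) -> bool {
    let addr = SocketAddr::from(([127, 0, 0, 1], port));
    TcpStream::connect_timeout(&addr, timeout).is_ok()
}

fn main() {
    // Bind an ephemeral port to stand in for a running daemon.
    let listener = std::net::TcpListener::bind("127.0.0.1:0").unwrap();
    let port = listener.local_addr().unwrap().port();
    println!(
        "reachable: {}",
        daemon_port_reachable(port, Duration::from_millis(250))
    );
}
```

This caps the worst-case stall at the chosen timeout; the caching or `spawn_blocking` approaches in the prompt above remain the cleaner fix for UI threads.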
Summary
This PR introduces a daemon-based architecture that fundamentally refactors how Ada manages terminals and AI agents. The new design separates terminal lifecycle management from the GUI application.
Key Changes
1. Daemon Module (src-tauri/src/daemon/)
- SessionManager: Central orchestrator for all terminal sessions
- PTY lifecycle management with portable_pty
2. CLI Module (src-tauri/src/cli/)
- ada daemon start|stop|status|restart|logs
3. Agent Wrappers (daemon/wrappers.rs)
4. Backend Refactoring
- AppState now holds DaemonClient instead of local PTY handles
- ensure_daemon_and_connect() auto-starts and connects to the daemon
5. Frontend Updates
Architecture Diagram
Data Flow
- Session state is persisted to ~/.local/share/ada/daemon/sessions/
Test plan
- ada daemon start|stop|status|restart|logs
Breaking Changes
Summary by CodeRabbit
New Features
Chores