
feat(cli): add multica issue take <id> for local agent takeover #1880

Open

james-heidi wants to merge 10 commits into multica-ai:main from james-heidi:main

Conversation


james-heidi commented Apr 29, 2026

What does this PR do?

Adds a new multica issue take <issue-id> CLI subcommand that resumes the agent's most recent completed run for an issue locally. By default it spawns the matching agent CLI inside the worktree the agent used (with stdin/stdout/stderr passthrough); --print emits the equivalent shell command for eval wrappers, --copy drops it on the system clipboard, and --provider <name> overrides the auto-detected provider.

Local runtimes only. Cloud-runtime tasks don't persist a local session_id + work_dir (PinTaskSession in server/internal/daemon/client.go is daemon-only), so they're skipped with "no completed run found" rather than producing a broken resume command. The constraint is documented in the command's --help. A future remote-takeover feature would need its own attach/exec transport (out of scope here).
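
For concreteness, a minimal sketch of the run picking described above. The TaskRun fields here (SessionID, WorkDir, Status, CompletedAt) are illustrative, not the actual API schema:

package sketch

import "time"

type TaskRun struct {
	SessionID   string
	WorkDir     string
	Status      string
	CompletedAt time.Time
}

// pickLatestCompletedRun returns the most recent completed run that
// carries a local session pin. Cloud-runtime tasks never qualify
// because they don't persist session_id + work_dir.
func pickLatestCompletedRun(runs []TaskRun) (TaskRun, bool) {
	var best TaskRun
	found := false
	for _, r := range runs {
		if r.Status != "completed" || r.SessionID == "" || r.WorkDir == "" {
			continue
		}
		if !found || r.CompletedAt.After(best.CompletedAt) {
			best, found = r, true
		}
	}
	return best, found
}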

Related Issue

N/A

Type of Change

  • Bug fix (non-breaking change that fixes an issue)
  • New feature (non-breaking change that adds functionality)
  • Refactor / code improvement (no behavior change)
  • Documentation update
  • Tests (adding or improving test coverage)
  • CI / infrastructure

Changes Made

  • server/cmd/multica/cmd_issue.go — new issueTakeCmd cobra command with --provider, --print, --copy flags. Auto-detects the provider by resolving /api/issues/<id>/task-runs[0].runtime_id against /api/runtimes. Builds a quoting-safe cd <wd> && <bin> <args> shell line and either spawns the agent (default), prints it, or pipes it into pbcopy/xclip/xsel. The long help text spells out the local-only constraint.
  • Provider→CLI map covers claude, codex, cursor, gemini, opencode, and copilot; a sketch of the map and the quoter follows this list. Unsupported providers exit non-zero after surfacing the session_id + work_dir for manual assembly.
  • Edge cases per spec: missing worktree → warn and run from $PWD; no completed run → exit with "no completed run found"; codex thread expired → propagate codex's stderr verbatim (no retry, since stdio is pass-through). Cloud-runtime tasks fall into the "no completed run found" path because the daemon's session pin never runs.
  • server/cmd/multica/cmd_issue_test.go — 6 new TestIssueTake* end-to-end cases (auto-detect, --print, --copy mutex, unsupported provider, runtime-not-found, --provider override) plus unit tests for the provider map, POSIX shell quoter, and run picker.
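
For reference, a minimal sketch of the provider map and the quoting-safe assembly. The names (providerCLIs, shellQuote, buildTakeCommand) and the per-agent resume flags are assumptions for illustration; the real identifiers live in cmd_issue.go and may differ:

package sketch

import "strings"

// providerCLIs maps a runtime provider to its local binary and the
// flag used to resume a session. The flags shown are assumed, not
// verified against each agent CLI.
var providerCLIs = map[string]struct {
	bin        string
	resumeFlag string
}{
	"claude":   {"claude", "--resume"},
	"codex":    {"codex", "resume"},
	"cursor":   {"cursor-agent", "--resume"},
	"gemini":   {"gemini", "--resume"},
	"opencode": {"opencode", "--session"},
	"copilot":  {"copilot", "--resume"},
}

// shellQuote single-quotes s for POSIX shells, escaping embedded
// single quotes with the standard '\'' sequence.
func shellQuote(s string) string {
	return "'" + strings.ReplaceAll(s, "'", `'\''`) + "'"
}

// buildTakeCommand renders the cd <wd> && <bin> <args> line; ok is
// false for unsupported providers.
func buildTakeCommand(workDir, provider, sessionID string) (cmd string, ok bool) {
	cli, found := providerCLIs[provider]
	if !found {
		return "", false
	}
	return strings.Join([]string{
		"cd", shellQuote(workDir), "&&",
		cli.bin, cli.resumeFlag, shellQuote(sessionID),
	}, " "), true
}

With provider claude, work dir /tmp/wt, and session sess_1, this yields cd '/tmp/wt' && claude --resume 'sess_1'.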

How to Test

# 1) Build the CLI from this branch (no PATH conflict with an installed multica)
make build
./server/bin/multica --version

# 2) Read the constraint
./server/bin/multica issue take --help

# 3) On any LOCAL-runtime issue with a completed agent run, verify --print
#    renders the expected shell command without spawning the agent
./server/bin/multica issue take <issue-id> --print
./server/bin/multica issue take <issue-id> --provider claude --print
./server/bin/multica issue take <issue-id> --provider codex --print

# 4) Verify --copy drops the same command on the clipboard
./server/bin/multica issue take <issue-id> --copy
pbpaste   # macOS

# 5) Default mode actually spawns the agent in the original worktree
./server/bin/multica issue take <issue-id>

# 6) Cloud-runtime issue → expect non-zero exit with "no completed run found"

# 7) Unit tests
cd server && go test ./cmd/multica/ -run TestIssueTake -v
cd server && go test ./...
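
For reference, an illustrative table-driven test for the shellQuote helper sketched under "Changes Made"; the real assertions in cmd_issue_test.go may differ:

package sketch

import "testing"

func TestShellQuote(t *testing.T) {
	cases := []struct{ in, want string }{
		{"/tmp/wt", "'/tmp/wt'"}, // plain path: just wrapped
		{"a b", "'a b'"},         // whitespace survives quoting
		{"it's", `'it'\''s'`},    // embedded single quote escaped
	}
	for _, c := range cases {
		if got := shellQuote(c.in); got != c.want {
			t.Errorf("shellQuote(%q) = %q, want %q", c.in, got, c.want)
		}
	}
}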

Checklist

  • I have included a thinking path that traces from project context to this change
  • I have run tests locally and they pass (go test ./... → 872 passed across 25 packages; go vet ./cmd/multica/ clean)
  • I have added or updated tests where applicable (6 new e2e cases + 4 unit tests)
  • If this change affects the UI, I have included before/after screenshots (N/A — CLI only)
  • I have updated relevant documentation to reflect my changes (--help long text now states the local-only constraint)
  • I have considered and documented any risks above
  • I will address all reviewer comments before requesting merge

AI Disclosure

AI tool used: Claude Code (Opus 4.7).

Prompt / approach: Diagnosed CI failure on the original PR, traced the unused-variable error to commit 55b7e2e which removed the consumer (triggerLabel) but left the declaration. Removed the declaration and refreshed the surrounding comment. Subsequent reviewer question about cloud runtime support produced an audit confirming take is local-only and a doc tweak in --help to say so.

Screenshots (optional)

N/A.

james-heidi and others added 2 commits April 29, 2026 22:12
Adds a one-liner command that resumes the agent's most recent
completed run for an issue: spawns the right CLI in the worktree by
default, with `--print` and `--copy` modes for shell wrappers and
clipboard. `--provider` overrides the runtime auto-detection.

Mapping covers claude, codex, cursor, gemini, opencode, and copilot.
Workdir-gone falls back to the user's cwd; unsupported providers
surface session_id + workdir for manual assembly.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
feat(cli): add `multica issue take <id>` for local agent takeover

vercel Bot commented Apr 29, 2026

@james-heidi is attempting to deploy a commit to the IndexLabs Team on Vercel.

A member of the Team first needs to authorize it.

Commit 55b7e2e dropped the only consumer of `defaultModel` (the
`triggerLabel` expression) but left the `useMemo` declaration in
place, which fails `tsc --noEmit` with TS6133 and breaks CI on any
downstream branch.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
james-heidi and others added 7 commits April 29, 2026 22:46
…t-model

fix(views): remove unused defaultModel declaration in model-picker
# Conflicts:
#	packages/views/agents/components/inspector/model-picker.tsx
Cloud-runtime tasks don't persist a local session_id + work_dir
(`PinTaskSession` is daemon-only), so `take` quietly skips them and
exits with "no completed run found". Make that explicit in the
command's long help so users assigning a cloud agent understand why
nothing happens, instead of treating it as a bug.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
docs(cli): note that issue take only handles local-runtime sessions
Surfaces the existing `multica issue take --print` command as a desktop
overflow menu item so users don't have to drop into a terminal to pick
up where an agent left off. Clicking the entry runs the bundled CLI in
the main process, copies the resolved shell command to the clipboard,
and shows a toast with the workdir.

Visibility: desktop builds only, gated on at least one resumable run
(completed task with a session_id whose runtime provider is in the
supported list — claude/codex/cursor/gemini/opencode/copilot). Web
builds skip the data-fetch and never render the row.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The desktop CLI profile only stores `server_url` + `token`; it never
pins a workspace because the renderer is the source of truth (the user
can switch workspaces inside the app without ever touching the CLI).

Without --workspace-id, `multica issue take <id> --print` issued the
`/api/issues/<id>/task-runs` request with no `X-Workspace-*` header
and the API returned 400 "workspace_id or workspace_slug is required".

Plumb the active workspace id from the renderer through the IPC
channel and into the CLI invocation so the headers are populated.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
feat(desktop): add 'Take Over Locally' issue action
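
For concreteness, a sketch of the CLI side of the workspace-header fix described in the commit above. The header name X-Workspace-Id and the helper name are assumptions (the commit message only says X-Workspace-*):

package sketch

import "net/http"

// newTaskRunsRequest builds the task-runs request, attaching the
// workspace header when an id is known so the API doesn't reject the
// call with 400 "workspace_id or workspace_slug is required".
func newTaskRunsRequest(serverURL, issueID, workspaceID string) (*http.Request, error) {
	req, err := http.NewRequest(http.MethodGet, serverURL+"/api/issues/"+issueID+"/task-runs", nil)
	if err != nil {
		return nil, err
	}
	if workspaceID != "" {
		// "X-Workspace-Id" is a hypothetical name for illustration.
		req.Header.Set("X-Workspace-Id", workspaceID)
	}
	return req, nil
}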
Collaborator

Bohan-J commented Apr 30, 2026

Hi @james-heidi — thanks for putting this together. Before diving into the implementation details, I'd like to step back and discuss the product framing, because I'm not sure this is a direction Multica wants to encourage as a first-class user-facing feature.

What this PR offers, in product terms: when a cloud / daemon agent finishes a run on an issue, the user can take that session and resume it locally — either via multica issue take <id> in a terminal, or via a "Take Over Locally" item in the desktop issue menu that copies a cd … && <agent> --resume <session> line to the clipboard.

The concern: if the user's goal is "I want to talk to the agent some more" or "I want to push it to do one more thing," the most natural path on Multica is to keep the conversation on the issue itself — comment, mention the agent, the next run picks up the resume pointer we already store. That's the affordance the product is built around: issue-as-thread, agent runs as part of that thread, everything visible to collaborators.

A local takeover sits awkwardly next to that:

  1. It silently forks the conversation off-platform. Anything the user does in the local agent session — messages, edits, commits — does not flow back to the issue. Other people on the issue have no idea the work continued. That's a real coordination cost for a collaborative product.
  2. The desktop menu item makes it feel sanctioned. A persistent dropdown entry on every issue invites the muscle memory "click → keep working locally," which over time pulls work out of Multica into private terminal sessions. That's the opposite of what we want.
  3. The set of preconditions is hard for users to reason about (local runtime only, agent CLI installed locally, supported provider, worktree not GC'd). When any one fails, the user gets a terse error and no path forward — a frustrating discovery process for what looks like a simple menu item.
  4. The clipboard-copy flow itself isn't great UX. Click menu → toast → switch to terminal → paste → enter. If we were committed to this being a real product feature, the bar would be "click and you're in a session," not "we hand you a shell snippet."

So my suggestion is to split the PR's value:

  • The CLI command (multica issue take) is genuinely useful as an escape hatch — for power users debugging a run, for engineers dogfooding, for incident scenarios where the agent is stuck and someone needs to drive it manually. I'd be happy to see that part land.
  • The desktop menu entry I'd hold off on, or at minimum hide behind an experimental / advanced-settings toggle. Putting it in the standard issue actions menu signals "this is a normal way to use Multica," which I don't think it is. If we later decide we do want a first-class "continue this agent locally" experience, it deserves a proper design pass: how does the work flow back, do we change issue status, do we lock further auto-triggers while the user is driving, do we embed a PTY rather than shell out, etc.

Curious what use cases you had in mind that aren't well served by just continuing the conversation on the issue — that would help calibrate whether the desktop entry is solving a real problem I'm underweighting, or whether the CLI alone covers it. Happy to dig into the implementation details (a few were noted earlier) once we're aligned on the product shape.

Author

james-heidi commented Apr 30, 2026

Thanks @Bohan-J for the comprehensive review.

Let me start with the concrete case:

When I chat with an agent in an issue, I find it hard to use my local agent MCP servers, so I need to `tap` into the original session to do a deep dive and use the tools I have already configured in my local agent environment (Claude Code or Codex).

For example, I need to read/write context from/to a Notion page.

Reason 1: the core motivation is to keep the native local-agent experience I have already configured; otherwise I'd have to rebuild that setup, and as far as I can tell it's currently impossible to add MCP servers to Multica.

Reason 2: as you mentioned, the need to deep-dive and trace back after an incident.

From there my thinking jumps to this:

A user could resume any session (local/remote) from any agent (local/remote, claude/codex, ...).

Each issue becomes an agent-agnostic session that can be extended by any agent at any time; the issue becomes a recoverable story.
