16 commits
9c8bf00
feat(pipeline): add PipelineSession data model and SessionStore
omsherikar Apr 3, 2026
804bc4e
fix(pipeline-session): add error handling, deepen roundtrip test, add…
omsherikar Apr 3, 2026
a831b5c
feat(pipeline): add RefactronPipeline orchestrator
omsherikar Apr 3, 2026
c86a227
feat(analyze): save PipelineSession after analysis, add --fix-on flag
omsherikar Apr 3, 2026
98e7419
feat(autofix): add --session flag for session-aware pipeline
omsherikar Apr 3, 2026
6ac0831
feat: add refactron status command
omsherikar Apr 3, 2026
1959402
feat(rollback): add --pipeline-session flag for session-aware rollback
omsherikar Apr 3, 2026
1e9a5f3
feat: add refactron run one-shot pipeline command
omsherikar Apr 3, 2026
d75b3ad
test: add full pipeline integration tests + apply black/isort formatting
omsherikar Apr 3, 2026
da7fbed
Merge branch 'main' into feature/connected-pipeline
omsherikar Apr 3, 2026
499819b
fix(analyze): split merged --fix-on and --format click options
omsherikar Apr 3, 2026
ab81338
feat(pipeline): add workspace current session — no session IDs needed…
omsherikar Apr 3, 2026
18908f1
fix: always disable incremental analysis in CLI and pipeline
omsherikar Apr 3, 2026
640e0cd
fix: analyze always queues all issues; autofix filters by --fix-on level
omsherikar Apr 3, 2026
8700632
fix: autofix shows honest breakdown of issues vs fixable issues
omsherikar Apr 3, 2026
4dda068
style: black + isort + flake8 fixes for refactor.py and pipeline.py
omsherikar Apr 3, 2026
62 changes: 60 additions & 2 deletions refactron/cli/analysis.py
@@ -97,6 +97,13 @@
default=False,
help="Disable interactive mode — dump all issues (for CI/CD or piped output)",
)
@click.option(
"--fix-on",
"fix_on",
type=click.Choice(["CRITICAL", "ERROR", "WARNING", "INFO"], case_sensitive=False),
default=None,
help="Auto-queue issues at this level and above for fixing after analysis.",
)
Comment on lines +100 to +106
Contributor
⚠️ Potential issue | 🟠 Major

--fix-on is exposed but never affects the queued session.

The flag is parsed and _FIX_LEVEL_MAP is built, but this path still calls queue_issues(_pipeline_session, _all_issues) with no min_level. The saved session therefore ignores the user's threshold even though the help text says otherwise.

Also applies to: 280-309

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@refactron/cli/analysis.py` around lines 100 - 106, The --fix-on flag is
parsed but never used when saving the queued session; modify the call to
queue_issues so it respects the parsed fix_on threshold by passing the computed
minimum severity (from _FIX_LEVEL_MAP using the fix_on value, e.g.,
_FIX_LEVEL_MAP[fix_on.upper()] when fix_on is not None) as the min_level
argument to queue_issues(_pipeline_session, _all_issues, min_level=...), and
ensure the same change is applied to the other code path(s) mentioned (lines
~280-309) where queue_issues is invoked so the saved session actually filters
issues by the user-specified level.
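The fix described above can be sketched with hypothetical stand-ins (the `queue_issues` signature and the dict-based session here are illustrative, not refactron's actual API): the point is that the parsed `--fix-on` value must reach the queue call as a minimum-severity threshold.

```python
from enum import IntEnum

# Hypothetical stand-ins for refactron's severity levels and queue call;
# names are illustrative, not the project's real API.
class IssueLevel(IntEnum):
    INFO = 0
    WARNING = 1
    ERROR = 2
    CRITICAL = 3

def queue_issues(session, issues, min_level=None):
    """Queue issues, dropping anything below the requested threshold."""
    kept = [i for i in issues if min_level is None or i["level"] >= min_level]
    session["fix_queue"] = kept
    return kept

session = {"fix_queue": []}
issues = [
    {"level": IssueLevel.INFO, "msg": "naming"},
    {"level": IssueLevel.ERROR, "msg": "bad call"},
    {"level": IssueLevel.CRITICAL, "msg": "security"},
]
# With --fix-on ERROR, only ERROR and CRITICAL survive:
queue_issues(session, issues, min_level=IssueLevel.ERROR)
print(len(session["fix_queue"]))  # 2
```

With `min_level=None` (no `--fix-on` given), the current all-issues behavior is preserved.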

@click.option(
"--format",
"output_format",
@@ -123,6 +130,7 @@ def analyze(
environment: Optional[str],
no_cache: bool,
no_interactive: bool,
fix_on: Optional[str] = None,
output_format: str = "text",
fail_on: Optional[str] = None,
) -> None:
@@ -180,8 +188,10 @@ def analyze(
cfg.log_format = log_format
if metrics is not None:
cfg.enable_metrics = metrics
if no_cache:
cfg.enable_incremental_analysis = False
# Always disable incremental analysis in CLI — users expect `analyze` to
# always return all issues, not skip unchanged files silently.
# (Incremental filtering is an optimization for the programmatic API only.)
cfg.enable_incremental_analysis = False

if output_format != "json":
_print_file_count(target_path)
@@ -258,6 +268,54 @@ def analyze(
)
console.print(f" Success rate: {metrics_summary.get('success_rate_percent', 0):.1f}%")

# Exit with error code if critical issues found
should_fail = summary["critical"] > 0

# ── Pipeline session ──────────────────────────────────────────────
from datetime import datetime, timezone

from refactron.core.pipeline import RefactronPipeline
from refactron.core.pipeline_session import PipelineSession, SessionStore

_FIX_LEVEL_MAP = {
"CRITICAL": IssueLevel.CRITICAL,
"ERROR": IssueLevel.ERROR,
"WARNING": IssueLevel.WARNING,
"INFO": IssueLevel.INFO,
}

_target_path = Path(target) if target else Path.cwd()
_project_root = _target_path if _target_path.is_dir() else _target_path.parent
_pipeline = RefactronPipeline(project_root=_project_root)
Comment on lines +287 to +289
Contributor

⚠️ Potential issue | 🟠 Major

Persist the session under the project root, not target.parent.

For nested targets, _project_root becomes the selected directory or file parent, so the session/current pointer lands under that nested path instead of the repository root. refactron status and refactron autofix from the workspace root then won't see the session. Reuse the same project-root detection helper you already use elsewhere instead of rebuilding it from raw target.

Also applies to: 312-313

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@refactron/cli/analysis.py` around lines 285 - 287, The code builds
_project_root from the raw target (using _target_path.parent) which makes nested
targets persist the session under the nested folder; replace that logic with the
existing project-root detection helper used elsewhere (call that helper with the
provided target/_target_path to compute the true repository root) and pass the
returned root into RefactronPipeline(project_root=...) so sessions are persisted
under the repository root; apply the same replacement for the other occurrence
around lines 312-313 (i.e., stop using _target_path.parent and use the shared
project-root helper instead).
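A minimal sketch of the kind of shared root-discovery helper the comment refers to (the marker names and function name are assumptions; refactron's actual helper may differ): walk upward from the target until a repository marker is found, so nested targets still persist their session under the true root.

```python
from pathlib import Path
import tempfile

# Assumed marker set; the real helper's markers may differ.
def find_project_root(start: Path, markers=(".git", "pyproject.toml")) -> Path:
    """Walk upward from start until a repo marker is found; fall back to start."""
    base = start if start.is_dir() else start.parent
    for candidate in (base, *base.parents):
        if any((candidate / m).exists() for m in markers):
            return candidate
    return base

with tempfile.TemporaryDirectory() as tmp:
    root = Path(tmp)
    (root / ".git").mkdir()
    nested = root / "src" / "pkg"
    nested.mkdir(parents=True)
    # A nested target still resolves to the repository root:
    print(find_project_root(nested) == root)  # True
```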


_session_id = SessionStore.make_session_id()
_pipeline_session = PipelineSession(
session_id=_session_id,
target=str(_target_path),
created_at=datetime.now(timezone.utc).isoformat(),
total_files=summary.get("total_files", 0),
total_issues=summary.get("total_issues", 0),
issues_by_level={
"CRITICAL": summary.get("critical", 0),
"ERROR": summary.get("errors", 0),
"WARNING": summary.get("warnings", 0),
"INFO": summary.get("info", 0),
},
)
Comment on lines +292 to +304
Contributor

⚠️ Potential issue | 🟡 Minor

Set the lifecycle state explicitly when creating the session by hand.

RefactronPipeline.analyze() marks new sessions as SessionState.ANALYZED, but this manual construction relies on the dataclass default instead. A freshly analyzed session can then look pending to status and other session readers.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@refactron/cli/analysis.py` around lines 292 - 304, When constructing the
PipelineSession instance by hand, explicitly set its lifecycle/status to
SessionState.ANALYZED (or the dataclass field name used for lifecycle, e.g.,
"status" or "state") so it matches RefactronPipeline.analyze() behavior; update
the PipelineSession(...) call in analysis.py to include the lifecycle field with
SessionState.ANALYZED and import or reference SessionState where used (ensure
you use the same enum member name as in RefactronPipeline.analyze to keep
readers and status checks consistent).
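The pitfall is easy to reproduce with a stripped-down model (these `SessionState`/`PipelineSession` definitions are hypothetical minimal versions, not refactron's real classes): relying on the dataclass default leaves a hand-built, freshly analyzed session looking pending.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical minimal versions of SessionState / PipelineSession to show
# why relying on the dataclass default misreports a fresh analysis.
class SessionState(Enum):
    PENDING = "pending"
    ANALYZED = "analyzed"

@dataclass
class PipelineSession:
    session_id: str
    state: SessionState = SessionState.PENDING

implicit = PipelineSession(session_id="s1")  # looks pending to readers
explicit = PipelineSession(session_id="s2", state=SessionState.ANALYZED)
print(implicit.state.value, explicit.state.value)  # pending analyzed
```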


# Always queue all issues so `autofix` has a full picture.
# `autofix --fix-on` controls which level actually gets applied.
_all_issues = [i for fm in result.file_metrics for i in fm.issues]
_pipeline.queue_issues(_pipeline_session, _all_issues)

_pipeline.store.save(_pipeline_session)
_pipeline.store.set_current(_session_id)

_fixable = len([i for i in _pipeline_session.fix_queue if i.status.value == "pending"])
console.print(f"\n[dim]Session: {_session_id}[/dim]")
if _fixable:
console.print(f"[dim]{_fixable} fixable issues queued → refactron autofix --dry-run[/dim]")

# Exit with error code: --fail-on sets threshold, default is CRITICAL
_LEVEL_RANK = {"INFO": 0, "WARNING": 1, "ERROR": 2, "CRITICAL": 3}
_SUMMARY_KEY = {
14 changes: 14 additions & 0 deletions refactron/cli/main.py
@@ -153,3 +153,17 @@ def main(ctx: click.Context) -> None:
main.add_command(init)
except ImportError:
pass

try:
from refactron.cli.status import status

main.add_command(status)
except ImportError:
pass

try:
from refactron.cli.run import run

main.add_command(run)
except ImportError:
pass
191 changes: 184 additions & 7 deletions refactron/cli/refactor.py
@@ -209,7 +209,7 @@ def refactor(


@click.command()
@click.argument("target", type=click.Path(exists=True))
@click.argument("target", type=click.Path(exists=True), required=False)
@click.option(
"--config",
"-c",
@@ -261,29 +261,161 @@ def refactor(
default=False,
help="Run verification checks (syntax, imports, tests) before applying fixes",
)
@click.option(
"--session",
"session_id",
default=None,
help=(
"Override the active workspace session. If omitted, uses the "
"session set by the last 'refactron analyze' or 'refactron run'."
),
)
@click.option(
"--fix-on",
"fix_on",
type=click.Choice(["CRITICAL", "ERROR", "WARNING", "INFO"], case_sensitive=False),
default="CRITICAL",
show_default=True,
help="Apply only issues at this severity level and above.",
)
def autofix(
target: str,
target: Optional[str],
config: Optional[str],
profile: Optional[str],
environment: Optional[str],
preview: bool,
dry_run: bool,
safety_level: str,
verify: bool,
session_id: Optional[str] = None,
fix_on: str = "CRITICAL",
) -> None:
"""
Automatically fix code issues (Phase 3 feature).
"""Apply fixes from the active pipeline session.

TARGET: Path to file or directory to fix
Automatically reads the current workspace session created by
'refactron analyze' or 'refactron run' — no session ID needed.
Use --session to target a specific session instead.

\b
Typical workflow:
refactron analyze src/ --fix-on CRITICAL # creates session
refactron autofix --dry-run # preview (uses active session)
refactron autofix # apply fixes
refactron rollback # undo if needed

\b
Examples:
refactron autofix myfile.py --preview
refactron autofix myproject/ --apply --safety-level moderate
refactron autofix --dry-run
refactron autofix --session sess_20260404_120000
"""
console.print()
_auth_banner("Auto-fix")
console.print()

# ── Session-aware pipeline ────────────────────────────────────────
from refactron.core.pipeline import RefactronPipeline

_target_path = Path(target) if target else Path.cwd()
_project_root = _target_path if _target_path.is_dir() else _target_path.parent
_pipeline = RefactronPipeline(project_root=_project_root)
Copilot AI Apr 3, 2026

autofix accepts --safety-level, but the session-aware implementation constructs RefactronPipeline(project_root=_project_root) without passing the chosen risk level to the underlying AutoFixEngine. This makes --safety-level a no-op for the new flow. Please map the CLI string to FixRiskLevel and pass it into RefactronPipeline(..., safety_level=...) (or otherwise enforce the safety level).

Suggested change
_pipeline = RefactronPipeline(project_root=_project_root)
_safety_level_map = {
"conservative": FixRiskLevel.CONSERVATIVE,
"moderate": FixRiskLevel.MODERATE,
"aggressive": FixRiskLevel.AGGRESSIVE,
}
_selected_safety_level = _safety_level_map.get(
str(safety_level).lower(), FixRiskLevel.MODERATE
)
_pipeline = RefactronPipeline(
project_root=_project_root, safety_level=_selected_safety_level
)

Copilot uses AI. Check for mistakes.

Comment on lines +315 to +321
Contributor
⚠️ Potential issue | 🟠 Major

The new session-aware path bypasses the command's config and safety settings.

Because this branch returns at Line 378, the old _load_config() call and safety_level mapping are now unreachable. --config, --profile, --environment, and --safety-level no longer affect the new pipeline flow, so the command surface and runtime behavior are out of sync.

Also applies to: 378-404

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@refactron/cli/refactor.py` around lines 306 - 312, The new session-aware
branch constructs RefactronPipeline using _target_path/_project_root and returns
early, which bypasses _load_config(), the safety_level mapping, and flags like
--config, --profile, and --environment; update the flow so that before creating
or returning the RefactronPipeline instance (_pipeline) you call _load_config()
and apply the resolved configuration and safety_level (or pass them into
RefactronPipeline constructor) so the session-aware path honors --config,
--profile, --environment and --safety-level exactly like the original branch
did.

if session_id:
_pipeline_session = _pipeline.store.load(session_id)
if _pipeline_session is None:
console.print(f"[red]Session not found: {session_id}[/red]")
Comment on lines +318 to +325
Contributor
⚠️ Potential issue | 🟠 Major

Resolve sessions/backups from the pipeline root, not the caller's location.

RefactronPipeline persists sessions under its project_root, but autofix --session derives that root from the current target/cwd and rollback --pipeline-session hard-codes Path.cwd() for both SessionStore and BackupManager. Resuming or rolling back the same session from a subdirectory or another shell location will miss the saved session/backups or target the wrong project. Reuse the same root-discovery logic in both flows, or derive it from PipelineSession.target.

Also applies to: 461-478

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@refactron/cli/refactor.py` around lines 300 - 307, The session/backup lookup
is using the caller's CWD instead of the pipeline's project root; update the
code that constructs SessionStore and BackupManager and the session-loading
logic to derive project_root the same way RefactronPipeline does (use the
_target_path/_project_root logic used when instantiating RefactronPipeline or
use PipelineSession.target when resuming), so SessionStore.load(session_id) and
BackupManager operate against the pipeline's project_root rather than
Path.cwd(); apply the same change to the other affected block referenced (around
the code handling rollback / lines 461-478) so all session and backup resolution
consistently uses RefactronPipeline.project_root or PipelineSession.target.

raise SystemExit(1)
else:
# Try workspace current session first (no --session flag needed)
_pipeline_session = _pipeline.store.load_current()
if _pipeline_session is None:
if not target:
console.print(
"[red]No active session. Run 'refactron analyze <target>' first.[/red]"
)
raise SystemExit(1)
console.print("[dim]No session — running fresh analysis...[/dim]")
_pipeline_session = _pipeline.analyze(_target_path)
_pipeline.store.set_current(_pipeline_session.session_id)
if _pipeline._last_result:
_all_issues = [i for fm in _pipeline._last_result.file_metrics for i in fm.issues]
_pipeline.queue_issues(_pipeline_session, _all_issues)

_total_issues = _pipeline_session.total_issues
_fixable = len([i for i in _pipeline_session.fix_queue if i.status.value == "pending"])
_no_fixer = len([i for i in _pipeline_session.fix_queue if i.status.value == "skipped"])
console.print(
f"[dim]Session {_pipeline_session.session_id} · "
f"{_total_issues} issues · {_fixable} have automated fixers · "
f"{_no_fixer} no fixer available[/dim]"
)

# Filter queue by --fix-on level: mark items below threshold as skipped
_LEVEL_RANK = {"INFO": 0, "WARNING": 1, "ERROR": 2, "CRITICAL": 3}
_threshold = _LEVEL_RANK.get(fix_on.upper(), 3)
from refactron.core.pipeline_session import FixStatus as _FixStatus

for _item in _pipeline_session.fix_queue:
if _item.status == _FixStatus.PENDING:
if _LEVEL_RANK.get(_item.level.upper(), 0) < _threshold:
_item.status = _FixStatus.SKIPPED

_pending_count = len([i for i in _pipeline_session.fix_queue if i.status == _FixStatus.PENDING])
Comment on lines +354 to +364
Contributor
⚠️ Potential issue | 🟠 Major

Don't persist --fix-on filtering into the stored queue.

This mutates queued PENDING items to SKIPPED before apply. After one CRITICAL-only run, a later WARNING run against the same session never sees those lower-severity items again.

🧰 Tools
🪛 GitHub Actions: Pre-commit

[error] black failed: files were modified by this hook (reformatted refactron/cli/refactor.py). Re-run pre-commit to apply changes.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@refactron/cli/refactor.py` around lines 354 - 364, The current loop mutates
stored session items (setting _item.status = _FixStatus.SKIPPED) which persists
the --fix-on filter; instead leave _pipeline_session.fix_queue unchanged and
apply the threshold only transiently. Replace the in-place mutation with logic
that computes a local view or predicates: use _LEVEL_RANK, _threshold and
_FixStatus to count pending items whose level meets the threshold (e.g., count
items where item.status == _FixStatus.PENDING and
_LEVEL_RANK.get(item.level.upper(),0) >= _threshold) and pass that
filtered/local view into the apply step or consumer; do not write back SKIPPED
into _pipeline_session.fix_queue. Ensure all references to _pending_count and
any downstream use operate on the transient/filtered view rather than mutated
stored items.
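The transient-view approach can be sketched like this (the dict-shaped queue items and `pending_view` name are illustrative stand-ins): the threshold filter is computed per invocation, and the stored queue is never mutated, so a later lower-severity run still sees every item.

```python
from enum import Enum

class FixStatus(Enum):
    PENDING = "pending"
    SKIPPED = "skipped"

_LEVEL_RANK = {"INFO": 0, "WARNING": 1, "ERROR": 2, "CRITICAL": 3}

def pending_view(queue, fix_on):
    """Transient filtered view of the queue; never writes SKIPPED back."""
    threshold = _LEVEL_RANK[fix_on.upper()]
    return [
        item for item in queue
        if item["status"] is FixStatus.PENDING
        and _LEVEL_RANK[item["level"].upper()] >= threshold
    ]

queue = [
    {"level": "WARNING", "status": FixStatus.PENDING},
    {"level": "CRITICAL", "status": FixStatus.PENDING},
]
print(len(pending_view(queue, "CRITICAL")))  # 1
# A later WARNING-level run against the same session still sees both items:
print(len(pending_view(queue, "WARNING")))  # 2
```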


if _pending_count == 0:
if _no_fixer > 0:
console.print(
f"[yellow]{_no_fixer} issues found but none have automated fixers.[/yellow]\n"
f"[dim]Refactron auto-fixers cover: unused imports, magic numbers, "
f"docstrings, dead code, type hints, sorting, whitespace, quotes, "
f"booleans, f-strings, unused variables, indentation.[/dim]\n"
f"[dim]The issues in this session (complexity, code smell) "
f"require manual refactoring.[/dim]"
)
else:
console.print(
f"[yellow]No fixable issues at {fix_on.upper()} level or above.[/yellow]\n"
f"[dim]Try: refactron autofix --fix-on WARNING[/dim]"
)
return

_pipeline.apply(
_pipeline_session,
dry_run=dry_run,
verify=verify,
)
Comment on lines 286 to +385
Copilot AI Apr 3, 2026
autofix currently ignores the --preview/--apply flag: it always calls _pipeline.apply(..., dry_run=dry_run) regardless of preview. Since preview defaults to True and dry_run defaults to False, running refactron autofix <target> would write changes by default. Please gate writes on preview (e.g., treat preview as dry-run) so the default behavior remains non-destructive unless --apply is explicitly provided.

Comment on lines +381 to +385
Contributor
⚠️ Potential issue | 🔴 Critical

autofix now applies by default even though --preview is still the default.

This branch ignores preview and passes only dry_run into RefactronPipeline.apply(). With the current defaults (preview=True, dry_run=False), refactron autofix writes changes immediately instead of previewing them.

Proposed fix
-    _pipeline.apply(
+    _effective_dry_run = dry_run or preview
+
+    _pipeline.apply(
         _pipeline_session,
-        dry_run=dry_run,
+        dry_run=_effective_dry_run,
         verify=verify,
     )
@@
-    if dry_run:
+    if _effective_dry_run:
         console.print("\n[bold]Dry-run complete[/bold]")

Based on learnings, All refactoring must go through safety-first pipeline: preview → backup → apply → optional rollback.

Also applies to: 358-366

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@refactron/cli/refactor.py` around lines 348 - 352, The call to
_pipeline.apply(...) is missing the preview argument so the CLI's preview flag
is ignored (causing autofix to write changes); update the two places where
_pipeline.apply is invoked (the call using _pipeline_session and the similar
block further down) to pass preview=preview along with dry_run=dry_run and
verify=verify so the RefactronPipeline.apply(preview, dry_run, verify, ...)
safety-first flow (preview → backup → apply → optional rollback) is preserved.


_applied = len(_pipeline_session.applied_fixes)
_blocked = len(_pipeline_session.blocked_fixes)
_skipped = len([i for i in _pipeline_session.fix_queue if i.status.value == "skipped"])

if dry_run:
_diff_items = [i for i in _pipeline_session.fix_queue if i.diff]
if not _diff_items:
console.print(
"\n[dim]Dry-run: no diffs generated "
"(fixers may not support these issue types)[/dim]"
)
else:
console.print(f"\n[bold]Dry-run preview ({len(_diff_items)} changes)[/bold]")
for _item in _diff_items:
console.print(
f"\n [cyan]{_item.file_path}:{_item.line_number}[/cyan] {_item.message}"
)
console.print(_item.diff)
else:
console.print(f"\n[bold green]Applied:[/bold green] {_applied}")
if _blocked:
console.print(f"[bold red]Blocked:[/bold red] {_blocked}")
if _skipped:
console.print(f"[dim]Skipped: {_skipped}[/dim]")
console.print(f"\n[dim]Session: {_pipeline_session.session_id}[/dim]")
if _applied > 0:
console.print(
f"[dim]To undo: refactron rollback --session "
Copilot AI Apr 3, 2026
In the non-dry-run summary, the rollback hint uses refactron rollback --session <id>, but the pipeline-session rollback option added below is --pipeline-session. As written, the printed command will not roll back the pipeline session. Please update the hint to match the actual option name for pipeline sessions.

Suggested change
f"[dim]To undo: refactron rollback --session "
f"[dim]To undo: refactron rollback --pipeline-session "

f"{_pipeline_session.session_id}[/dim]"
)
return

# Setup
target_path = _validate_path(target)
_load_config(config, profile, environment)
@@ -359,12 +491,23 @@ def autofix(
default=False,
help="Clear all backup sessions",
)
@click.option(
"--pipeline-session",
"pipeline_session_id",
default=None,
help=(
"Override the active workspace session to roll back. "
"If omitted, rolls back the current session automatically. "
"Use 'refactron status --list' to see all session IDs."
),
)
def rollback(
session_id: Optional[str],
session: Optional[str],
use_git: bool,
list_sessions: bool,
clear: bool,
pipeline_session_id: Optional[str] = None,
) -> None:
"""
Rollback refactoring changes to restore original files.
@@ -381,6 +524,40 @@ def rollback(
refactron rollback --use-git # Use Git rollback
refactron rollback --clear # Clear all backups
"""
from refactron.core.backup import BackupManager
from refactron.core.pipeline_session import SessionState, SessionStore

_store = SessionStore(root_dir=Path.cwd())

# Use explicit ID, else fall back to active workspace session
_resolved_id = pipeline_session_id or _store.get_current_id()

Comment on lines +533 to +534
Contributor
⚠️ Potential issue | 🟠 Major

Don't clear/close the session unless rollback fully succeeded.

BackupManager.rollback_session() can return failed restores, but this branch still marks the session ROLLED_BACK and unconditionally deletes .refactron/current. That misreports partial rollbacks, and it also clears the active session even when the user explicitly rolled back some other session.

Also applies to: 509-513

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@refactron/cli/refactor.py` around lines 494 - 495, The current code
unconditionally marks a session ROLLED_BACK and removes the active session file
after calling BackupManager.rollback_session(), which can return partial
failures; change the logic in the block around _resolved_id (and the similar
block at lines ~509-513) to inspect the result of
BackupManager.rollback_session() and only (1) set the session state to
ROLLED_BACK and (2) delete .refactron/current when rollback_session reports all
restores succeeded; if rollback_session reports any failures, leave the session
state unchanged (or set an explicit PARTIAL_ROLLBACK status if available) and do
not remove .refactron/current unless _resolved_id matches the currently active
session and the rollback fully succeeded; reference functions/vars:
BackupManager.rollback_session(), _resolved_id, pipeline_session_id,
_store.get_current_id(), and the ROLLED_BACK state to locate and update the code
paths.
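The intended bookkeeping can be captured in a small decision helper (the function name, tuple return, and argument names are illustrative assumptions, not refactron's API): mark the session rolled back only on full success, and clear the current pointer only when the rolled-back session is the active one.

```python
# Illustrative decision helper; names are assumptions, not refactron's API.
def finalize_rollback(failed, resolved_id, current_id):
    """Only mark ROLLED_BACK on full success, and only clear the current
    pointer when the rolled-back session is the active one."""
    fully_ok = len(failed) == 0
    mark_rolled_back = fully_ok
    clear_current = fully_ok and resolved_id == current_id
    return mark_rolled_back, clear_current

print(finalize_rollback([], "s1", "s1"))        # (True, True)
print(finalize_rollback(["a.py"], "s1", "s1"))  # (False, False)
print(finalize_rollback([], "s2", "s1"))        # (True, False)
```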

if _resolved_id:
_pipeline_session = _store.load(_resolved_id)
if _pipeline_session is None:
console.print(f"[red]Session not found: {_resolved_id}[/red]")
raise SystemExit(1)
if not _pipeline_session.applied_fixes:
console.print("[yellow]No applied fixes in this session to roll back.[/yellow]")
return
if not _pipeline_session.backup_session_id:
console.print("[red]Session has no backup ID — cannot roll back.[/red]")
raise SystemExit(1)

_bm = BackupManager(root_dir=Path.cwd())
_restored_count, _failed = _bm.rollback_session(_pipeline_session.backup_session_id)

_pipeline_session.state = SessionState.ROLLED_BACK
_store.save(_pipeline_session)
_store.clear_current()

console.print(
f"[green]Rolled back {_restored_count} file(s) from session " f"{_resolved_id}[/green]"
)
for _f in _failed:
console.print(f"[red] Failed to restore: {_f}[/red]")
return
Comment on lines +529 to +559
Contributor
⚠️ Potential issue | 🔴 Critical

Don't auto-execute pipeline rollback before honoring the explicit rollback mode.

Because _resolved_id is handled before --list, --clear, --use-git, or the legacy positional session id are checked, any active current pipeline session makes this branch run immediately. refactron rollback --list can restore files instead of listing them, and the confirmation prompt is skipped entirely.

🧰 Tools
🪛 GitHub Actions: Pre-commit

[error] black failed: files were modified by this hook (reformatted refactron/cli/refactor.py). Re-run pre-commit to apply changes.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@refactron/cli/refactor.py` around lines 529 - 559, The rollback branch
currently triggers as soon as _resolved_id is set (via pipeline_session_id or
SessionStore.get_current_id()), which runs before handling explicit rollback
modes like --list, --clear, --use-git or legacy positional IDs; move or guard
this logic so it only executes when the user actually requested a rollback
action (e.g., when a specific rollback flag/mode is set), not merely when an
active session exists: modify the control flow around _resolved_id,
SessionStore, and the block that uses BackupManager.rollback_session so that
flag checks for list/clear/use-git/positional-id are evaluated first and only
when none apply and the explicit rollback mode is present do you load the
session, validate backup_session_id, prompt for confirmation, and call
BackupManager.rollback_session; ensure functions/classes referenced are
SessionStore, _resolved_id/pipeline_session_id, BackupManager.rollback_session,
and _pipeline_session are used in the guarded branch.
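The control-flow fix amounts to an ordering rule, sketched here with a hypothetical dispatcher (names are assumptions, not the real CLI internals): explicit modes are evaluated first, and the active-session fallback fires only when none of them apply.

```python
# Illustrative dispatch order: explicit modes win before the implicit
# active-session fallback.
def choose_rollback_mode(list_sessions, clear, use_git, legacy_id, current_id):
    if list_sessions:
        return "list"
    if clear:
        return "clear"
    if use_git:
        return "git"
    if legacy_id:
        return ("backup", legacy_id)
    if current_id:
        return ("pipeline", current_id)
    return "none"

# `rollback --list` with an active session must list, not restore:
print(choose_rollback_mode(True, False, False, None, "sess_1"))  # list
```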


# Support both argument and option for session
target_session = session_id or session
console.print("\n🔄 [bold blue]Refactron Rollback[/bold blue]\n")