RFC: scan findings lifecycle API prototype#121

Draft
MTG-Thomas wants to merge 1 commit into jackmusick:main from MTG-Thomas:codex/nuclei-scan-ingestion-lifecycle
@MTG-Thomas (Contributor)

RFC framing

This is a draft PR attached to #120. Please treat it as a concrete RFC prototype, not a ready-to-merge feature request.

The question for upstream is whether Bifrost should own a small lifecycle primitive for recurring external scan/security observations, or whether this kind of behavior should remain entirely in downstream workspace code.

What this prototype does

  • Adds an admin-only /api/scans/* router for scan runs and findings.
  • Uses existing org-scoped table documents for storage, avoiding a new schema/migration in the first slice.
  • Implements Nuclei ingestion as the first concrete producer.
  • Tracks lifecycle behavior that plain JSON dumping does not cover: occurrence-key hashing, re-alert suppression, resolve-on-absence, incomplete runs, filtering, and bulk state transitions.
  • Wires the router through the normal FastAPI router exports and app registration.
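The lifecycle behaviors listed above can be sketched in a few lines. This is a hypothetical illustration, assuming hashed occurrence keys and a complete-run flag; the names (make_occurrence_key, reconcile) and the key fields are assumptions, not the router's actual API:

```python
import hashlib


def make_occurrence_key(scanner: str, template_id: str, target: str) -> str:
    """Hash the identifying fields of one finding into a stable occurrence key."""
    raw = "|".join((scanner, template_id, target))
    return hashlib.sha1(raw.encode("utf-8")).hexdigest()


def reconcile(existing: dict[str, str], seen: set[str], run_complete: bool) -> dict[str, str]:
    """One ingest pass over an occurrence-key -> state mapping.

    - a key seen again while already 'open' stays 'open' (re-alert suppressed)
    - a new key opens a finding
    - an open key absent from a *complete* run is resolved; incomplete runs
      never auto-resolve, so a partial scan cannot close real findings
    """
    updated = dict(existing)
    for key in seen:
        updated[key] = "open"
    if run_complete:
        for key, state in existing.items():
            if key not in seen and state == "open":
                updated[key] = "resolved"
    return updated
```

The incomplete-run guard is the part plain JSON dumping misses: without it, a scan that dies halfway would silently resolve every finding it never reached.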

What this is not

  • Not an MSP-specific Halo/Autotask workflow.
  • Not a vulnerability scanner bundled into Bifrost.
  • Not a final assertion that the API should remain Nuclei-named.
  • Not a request to merge without deciding whether the upstream shape should be generalized first.

Design questions for maintainers

  • Should this be generalized from nuclei_scans to a broader scanner_findings or security_findings API before merge?
  • Should this stay backed by generic table documents for the first slice, or should core models/contracts/services be introduced up front?
  • Is admin-only access the right initial permission boundary?
  • Should resolve-on-absence be default behavior, or explicit per ingest request?
  • Is this aligned with Bifrost's platform direction, or better kept as workspace-level code using existing tables/workflows?
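On the resolve-on-absence question, one possible shape is an explicit opt-in flag on the ingest payload. This is illustrative only; the field names are assumptions, not this PR's request schema:

```python
from dataclasses import dataclass, field


# Illustrative request shape only; field names are assumptions, not the PR's schema.
@dataclass
class IngestRequest:
    scanner: str                              # e.g. "nuclei"
    findings: list[dict] = field(default_factory=list)
    run_complete: bool = True                 # incomplete runs should never auto-resolve
    resolve_absent: bool = False              # explicit opt-in rather than default behavior
```

Defaulting resolve_absent to False makes resolution a deliberate per-request choice, which is the conservative answer if the upstream shape is generalized beyond Nuclei.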

Validation

  • python -m compileall api/src/routers/nuclei_scans.py api/src/main.py api/src/routers/__init__.py
  • targeted router/model smoke test
  • app route registration smoke for /api/scans/*
  • python -m pytest api/tests/unit/test_nuclei_scans_router.py -q
  • python -m ruff check api/src/routers/nuclei_scans.py api/src/main.py api/src/routers/__init__.py api/tests/unit/test_nuclei_scans_router.py
  • git diff --check

Note: a broad python -m pytest api/tests/unit -q currently stops during collection on Windows because existing process-pool tests import the Unix-only resource module; that failure occurs before this Nuclei test module runs.

Frame the recovered Nuclei ingestion work as a platform RFC for recurring external scan findings lifecycle management.

This keeps the implementation intentionally small: an admin-only FastAPI router backed by existing org-scoped table documents, with run tracking, ingest, filtering, re-alert suppression, resolve-on-absence, and bulk state transitions.

The current concrete producer is Nuclei, but the upstream discussion should decide whether this belongs as-is, should be generalized before merge, or should remain workspace-specific.


import hashlib


def _stable_id(raw: str) -> str:
    # SHA-1 of the raw occurrence key; deterministic across runs for the same input.
    return hashlib.sha1(raw.encode("utf-8")).hexdigest()