Releases: samugit83/redamon
4.0.0 - 2026-04-19
RedAmon 4.0.0: Fireteam + SG-ReAct
RedAmon 4.0.0 ships Fireteam, a scatter-gather multi-agent execution mode built into the core ReAct orchestrator. The root agent can now fan out into N specialist sub-agents that work independent angles of the same objective in parallel, each with its own ReAct loop, inside the same event loop, the same MCP session, and the same Neo4j connection. Zero cross-process serialisation.
The architecture is called SG-ReAct (Scatter-Gather ReAct). It's the first pentesting AI pattern that treats autonomy and safety as orthogonal axes, not opposites: phase gating, RoE enforcement, dangerous-tool confirmation, scope rails, and the recursion ban all sit on the edges of the state graph, while the agent reasons freely inside the nodes.
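In plain asyncio terms, the scatter-gather core reduces to a single `asyncio.gather` over member coroutines running in one event loop. The sketch below is illustrative only — `member_loop` and `fireteam` are hypothetical names, not RedAmon's actual API:

```python
import asyncio

async def member_loop(name, objective):
    # hypothetical stand-in for one specialist's ReAct loop
    # (think / act / observe iterations would run here)
    await asyncio.sleep(0)
    return {"member": name, "objective": objective, "findings": []}

async def fireteam(objective, specialists):
    # scatter: every member is a task in the SAME event loop --
    # no extra processes, so no cross-process serialisation
    tasks = [member_loop(s, objective) for s in specialists]
    results = await asyncio.gather(*tasks)
    # gather: merge each member's results back into shared state
    return {r["member"]: r for r in results}
```

Because members are plain coroutines, they can share the MCP session and Neo4j connection held by the parent task.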
Also in this release: the new built-in XSS attack skill (8-step workflow covering reflected, stored, DOM, blind, WAF bypass, CSP bypass), a full Argentum Digital XSS practice lab (~1,650 LoC embedding every XSS vector the skill can hit), MCP dead-session auto-reconnect, MCP server supervisor with restart-on-crash, and a new PLAN_MAX_PARALLEL_TOOLS concurrency cap that prevents SSE head-of-line blocking under heavy fireteam fan-out.
Benchmarked against PentAGI, PentestGPT, Strix, and Shannon on 82 agentic primitives across 14 architectural dimensions: RedAmon leads at 72.0% coverage, 30 percentage points above the runner-up. Guardrails 8.0/8.0 (nearest 2.0), Domain knowledge 5.0/5.0 (nearest 2.0), Multi-tenancy 4.0/4.0 (nearest 3.0), Memory 6.0/8.0 (+50% over the field). Full methodology in readmes/README.AGENTIC_SYSTEM.md.
What's new
Fireteam (multi-agent deployment)
The root agent can deploy a coordinated team of specialised agent members that work the same target in parallel, each with its own ReAct loop, skill set, and tool budget. Each member runs as a LangGraph subgraph with its own state, reasoning trace, and WebSocket streaming channel. Results are collected by a `fireteam_collect_node` that merges findings back into the shared graph.
- Gating: `FIRETEAM_ENABLED` master switch (default `true`), requires `PERSISTENT_CHECKPOINTER=true`
- 8 project settings: `FIRETEAM_MAX_CONCURRENT` (5), `FIRETEAM_MAX_MEMBERS` (5), `FIRETEAM_MEMBER_MAX_ITERATIONS` (20), `FIRETEAM_TIMEOUT_SEC` (3600), `FIRETEAM_ALLOWED_PHASES`, `FIRETEAM_CONFIRMATION_TIMEOUT_SEC` (600), `FIRETEAM_PROPENSITY` (1-5)
- Mutex groups: `TOOL_MUTEX_GROUPS` serialises singleton tools (`metasploit_console`) across the team
- Dangerous-tool operator gate: per-member approval card on the chat drawer, auto-reject after timeout
- Wave-based `plan_tools` execution: each member can emit a single-turn plan of N independent tools run via `asyncio.gather`
- Webapp UI integration: new Agent Behaviour settings, per-member live badges with spinners and stop buttons, fireteam card in sessions view
- Tests: `test_fireteam_core.py`, `test_fireteam_deploy.py`, `test_fireteam_regressions.py`
PLAN_MAX_PARALLEL_TOOLS setting
Per-wave concurrency cap applied uniformly to root-agent and fireteam-member plan execution. Default 10. Prevents SSE head-of-line blocking on the MCP kali-sandbox stream under heavy concurrency. Prisma field `agentPlanMaxParallelTools`, range 1-50, exposed in Agent Behaviour settings. 13 new tests in `test_plan_parallelism.py`.
MCP dead-session auto-reconnect
MCPToolsManager transparently rebuilds its MultiServerMCPClient when the kali-sandbox SSE stream dies mid-tool-call. Generation counter bumps on every successful get_tools(), asyncio.Lock serialises reconnects, _is_mcp_transport_error walks __cause__/__context__ chain plus ExceptionGroup sub-exceptions. Concurrent fireteam failures collapse to one real rebuild. 48 tests in test_mcp_reconnect.py across 4 classes.
MCP server supervisor
`mcp/servers/run_servers.py` polls `Process.is_alive()` every 5s and automatically respawns any dead child server (network_recon, nuclei, metasploit, nmap, playwright) with a logged restart counter. Also fixed a pre-existing `AssertionError: can only test a child process` on container restart.
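A restart-on-crash supervisor of this shape can be sketched with the stdlib `multiprocessing` module (the helper below is illustrative, not the actual `run_servers.py`):

```python
import multiprocessing as mp
import time

def supervise(targets, poll=5.0, rounds=None):
    # targets: {name: callable} -- one child process per server.
    # Poll is_alive() and respawn any dead child, counting restarts.
    procs = {name: mp.Process(target=fn) for name, fn in targets.items()}
    for p in procs.values():
        p.start()
    restarts = dict.fromkeys(targets, 0)
    done = 0
    while rounds is None or done < rounds:
        time.sleep(poll)
        for name, proc in procs.items():
            if not proc.is_alive():
                procs[name] = mp.Process(target=targets[name])
                procs[name].start()
                restarts[name] += 1   # logged restart counter
        done += 1
    return restarts
```

With `rounds=None` this loops forever, which is the supervisor's normal mode; the parameter exists only so the sketch can terminate.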
Built-in XSS attack skill (Skill #6)
End-to-end Cross-Site Scripting workflow promoted from xss-unclassified fallback to first-class skill. 8-step workflow covering reflected, stored, DOM, and blind XSS. Classified by the Intent Router as a green XSS badge.
- `XSS_TOOLS` (~16 KB): canary reflection sweep (`rEdAm0n1337XsS`), per-char filter probe via `kxss`, context-aware payload selection across 8 contexts, DOM XSS via Playwright monkey-patched sinks, dialog-handler verification, dalfox WAF bypass, impact proof via cookie theft
- `XSS_BLIND_WORKFLOW` (~2.7 KB, opt-in): interactsh-client OOB callbacks for stored XSS in admin contexts
- `XSS_PAYLOAD_REFERENCE` (~5 KB): payloads by context, Brute Logic polyglot, 12-row WAF bypass encoding table, 9-row CSP bypass shortcut table
- 3 project settings: `XSS_DALFOX_ENABLED`, `XSS_BLIND_CALLBACK_ENABLED`, `XSS_CSP_BYPASS_ENABLED`
- Tests: `test_xss_skill.py` with 6 test classes, 46 tests
kxss Go binary in kali-sandbox
Per-character XSS filter probe (`go install github.com/Emoe/kxss@latest`). Reports which dangerous characters survive unfiltered for each parameter. Used by Step 3b of the XSS workflow to eliminate blind tag-spraying.
Argentum Digital comprehensive XSS practice lab
guinea_pigs/dvws-node/xss-lab/, ~1,650 LoC, Node.js + Express + headless Chromium. A fictional B2B consulting firm site that embeds every XSS vector the new skill can exploit, hidden inside normal-looking site features. Zero references to "XSS", "lab", "vulnerable", or "challenge" anywhere on the site.
- 8 reflected contexts, 4 stored surfaces, 7 DOM XSS sinks
- 3 blind XSS surfaces fired by an `admin-bot.js` headless Chromium sidecar on a 30-second moderation queue
- 5 WAF bypass tiers disguised as "search engine generations"
- 6 CSP scenarios disguised as marketing/dashboard/widget pages
- Internal moderation queue at `/argentum/admin/inbox` (loopback or `X-Internal-Bot: 1` only)
- Integrated into the dvws-node guinea pig with nginx proxy on `/argentum/*`
Webapp UI integration
New Cross-Site Scripting toggle in the Built-In Skills section, defaulted to ON. Updated 4 webapp files. Existing project rows in Postgres backfilled with `xss:true`.
Wiki documentation
Agent-Skills.md, Project-Settings-Reference.md, Chat-Skills.md, and Home.md all updated with the new XSS skill.
Changed
- `KNOWN_ATTACK_PATHS` expanded from 5 to 6 entries
- Classification prompt: new XSS section, classifier criteria, unclassified-fallback pruned
- `_inject_builtin_skill_workflow`: new branch for `xss`, gated on `execute_curl`
- `build_attack_path_behavior`: new XSS behavior block
- `tool_registry.py`: `kali_shell` description now lists `dalfox`, `kxss`, `interactsh-client`
- DVWS-Node deploy command now ships the xss-lab source alongside `setup.sh`
- `docker-compose.override.yml`: new `argentum` service on port 3001
Notes
- Major version bump. The new built-in skill expands the agent's first-class attack methodology surface by 20% and ships a brand-new comprehensive practice lab
- Existing projects automatically inherit `xss:true` (Postgres backfill). New projects get it via the Prisma default
- No breaking changes to existing skills, workflows, or APIs
Full technical deep-dive: readmes/README.AGENTIC_SYSTEM.md
Full changelog entry: CHANGELOG.md
RedAmon is built for authorized security testing, education, and research only. Always obtain proper authorization before testing any target you do not own.
3.8.0 - 2026-04-10
Added
- 9 new AI agent tools -- major expansion of the agent's offensive toolkit, all exposed as dedicated MCP tools with full CLI argument passthrough:
- `execute_httpx` -- HTTP probing and fingerprinting (status codes, titles, server headers, tech detection, redirect following)
- `execute_subfinder` -- passive subdomain enumeration via OSINT sources (certificate transparency, DNS datasets, search engines). No traffic to target
- `execute_gau` -- passive URL discovery from Wayback Machine, Common Crawl, AlienVault OTX, and URLScan archives. No traffic to target
- `execute_jsluice` -- JavaScript static analysis for hidden API endpoints, URL paths, query parameters, and secrets (AWS keys, API tokens). Local file analysis only
- `execute_katana` -- web crawling and endpoint/URL discovery with JavaScript parsing and known-file enumeration (robots.txt, sitemap.xml)
- `execute_amass` -- OWASP Amass subdomain enumeration and network mapping (passive + active modes, ASN intel)
- `execute_arjun` -- HTTP parameter discovery by brute-forcing ~25,000 common parameter names (GET, POST, JSON, XML)
- `execute_ffuf` -- web fuzzing for hidden directories, files, virtual hosts, and parameters using FUZZ keyword injection
- URLScan API key integration -- optional API key for enriching `execute_gau` results with URLScan archived data. Configured in Settings, auto-injected into GAU's `~/.gau.toml` config at runtime
- Tool Phase Matrix expansion -- all 9 new tools added to the agent's tool-phase permission matrix with default phase assignments (informational + exploitation). Configurable per-project in the Tool Matrix UI
- Stealth mode rules for all new tools -- each new tool has calibrated stealth-mode restrictions:
  - No restrictions: `execute_subfinder`, `execute_gau`, `execute_jsluice` (passive/local only)
  - Heavily restricted: `execute_httpx` (single target, rate-limited), `execute_katana` (depth 1, rate-limited), `execute_amass` (passive mode only)
  - Forbidden: `execute_arjun`, `execute_ffuf` (inherently noisy brute-force tools)
- Tool registry documentation -- detailed usage guides for all 9 tools in the agent's tool registry, including argument formats, examples, and when-to-use guidance
- Graph empty state component -- new `GraphEmptyState` component replaces the plain text "No data found" message on the graph canvas
Changed
- 15 new pentesting tools in kali-sandbox -- major expansion of the agent's `kali_shell` toolkit, all accessible as Type A tools (no dedicated MCP wrapper needed):
- Web/infra scanning: nikto (web server misconfiguration scanner), whatweb (1800+ plugin tech fingerprinter), testssl.sh (SSL/TLS audit), commix (command injection detection/exploitation), SSTImap (server-side template injection)
- DNS: dnsrecon (zone transfers, SRV records, DNSSEC walk), dnsx (fast bulk DNS resolution, ProjectDiscovery pipeline)
- Windows/AD: enum4linux-ng (SMB/RPC enumeration with JSON output), netexec/nxc (multi-protocol exploitation -- SMB, WinRM, LDAP, MSSQL, RDP), bloodhound-python (AD relationship collection), certipy-ad (AD-CS ESC1-ESC13 attacks), ldapdomaindump (quick LDAP dumps)
- Secrets/passwords: gitleaks (git repo secret scanning), hashid (hash type identification), cewl (custom wordlist generation from target websites)
- `kali_shell` timeout increased -- from 120s to 300s (5 min), enabling tools like nikto, testssl.sh, and bloodhound-python that need more than 2 minutes. Updated across MCP server, tool registry, dev docs, and wiki
- Kali sandbox Dockerfile -- installs subfinder, katana, jsluice (with CGO for tree-sitter), amass, gau, and paramspider. Adds arjun to Python requirements
- `kali_shell` tool description -- restructured into categorized sections (Exploitation, Password cracking, Web/infra, DNS, Windows/AD, API/GraphQL, Secrets, Tunneling) with usage examples for every tool. Added all 15 new tools, restored missing entries (dig, nslookup, smbclient, ngrok, chisel), and expanded the "Do NOT use" list to cover all 17 dedicated MCP tools
- Rules of Engagement (ROE) -- `execute_ffuf` added to the `brute_force` category for ROE blocking
- `redamon.sh` update logic -- agent container now always rebuilds (not just restarts) when any `agentic/` file changes, since source code is baked into the image without a volume mount
- Settings page -- removed "AI Agent" badge from Censys, FOFA, AlienVault OTX, Netlas, VirusTotal, ZoomEye, and Criminal IP API key fields (these keys are used by the Recon Pipeline only, not the agent)
3.2.0 - 2026-03-31
Added
- Uncover Multi-Engine Target Expansion -- ProjectDiscovery's uncover integrated as GROUP 2b in the recon pipeline, running before Shodan and port scanning to expand the target surface. Queries up to 13 search engines simultaneously (Shodan, Censys, FOFA, ZoomEye, Netlas, CriminalIP, Quake, Hunter, PublicWWW, HunterHow, Google Custom Search, Onyphe, Driftnet) to discover exposed hosts, IPs, and endpoints associated with the target domain:
- Smart key reuse: automatically picks up API keys already configured for standalone OSINT enrichment modules -- no extra configuration needed if you already have Shodan/Censys/FOFA/etc. keys
- Docker-in-Docker: runs the `projectdiscovery/uncover:latest` container with a dynamically generated `provider-config.yaml` containing only engines with valid credentials
- Engine-aware parsing: handles per-engine quirks -- Google URL-in-IP field, PublicWWW host-only results (no IP), Censys URL endpoints -- preventing silent data loss
- URL discovery: captures in-scope URLs from engines that populate the `url` field (Censys, PublicWWW, Google), stored as Endpoint nodes in Neo4j
- Pipeline merge: discovered subdomains injected into DNS structures so all downstream modules (port scan, HTTP probe, OSINT enrichment) process them automatically
- Neo4j graph: Subdomain, IP, Port, and Endpoint nodes with source tracking (`uncover_sources`, `source_counts`, `total_raw`, `total_deduped`)
- Frontend: embedded in OsintEnrichmentSection with enable/disable toggle and max results. Settings page groups uncover-specific keys under "Uncover (Multi-Engine Search)" with `Standalone + Uncover` badges on shared keys
- Tests: 80 unit tests across 3 test files
- Centralized IP Filtering (`ip_filter.py`) -- shared module replacing duplicate inline filtering across all OSINT enrichment modules. Filters RFC 1918 private, loopback, link-local, CGNAT, multicast, and reserved ranges plus CDN IPs (detected by Naabu/httpx). Used by all 9 enrichment modules before making external API calls
- Censys Platform API v3 Migration -- migrated from deprecated Basic Auth (API_ID/API_SECRET) to Bearer token auth (CENSYS_API_TOKEN + CENSYS_ORG_ID). Both the recon pipeline and the AI agent tool updated
- CriminalIP Agent Tool -- added `criminalip_lookup` to the AI agent tool registry for interactive IP threat intelligence queries
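The kind of filtering the shared module performs can be sketched with the stdlib `ipaddress` module (`is_scannable` is a hypothetical name, not the module's real API):

```python
import ipaddress

def is_scannable(ip_str, cdn_ips=frozenset()):
    # Drop CDN IPs plus private, loopback, link-local, CGNAT,
    # multicast, and reserved ranges before any external API call.
    if ip_str in cdn_ips:
        return False
    ip = ipaddress.ip_address(ip_str)
    if ip.is_private or ip.is_loopback or ip.is_link_local:
        return False
    if ip.is_multicast or ip.is_reserved:
        return False
    # CGNAT 100.64.0.0/10 (RFC 6598) is not flagged by is_private
    if ip.version == 4 and ip in ipaddress.ip_network("100.64.0.0/10"):
        return False
    return True
```

Centralising this check means every enrichment module applies identical rules instead of drifting copies.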
Fixed
- Silent data loss in uncover (Google/PublicWWW results dropped)
- Graph data loss (sources/source_counts metadata not written to Neo4j)
- Logging format violations (`logger.*` used instead of `print` with the pipeline prefix)
- Missing `uncoverDockerImage` Prisma schema field
- Missing Uncover entries in nodeMapping (SECTION_INPUT_MAP / SECTION_NODE_MAP)
3.0.0 - 2026-03-23
Added
- Custom Nuclei Templates Integration — custom nuclei templates (`mcp/nuclei-templates/`) are now manageable via the UI with per-project selection, dynamically discovered by the agent, and included in automated recon scans:
  - Template Upload UI: upload, view, and delete custom `.yaml`/`.yml` nuclei templates directly from Project Settings → Nuclei → Template Options. Templates are global (shared across all projects). Upload validates nuclei template format (requires `id:` and `info:` with `name:` and `severity:`). API: `GET/POST/DELETE /api/nuclei-templates`
  - Per-project template selection: each template has a checkbox — only checked templates are included in that project's automated scans. Stored as `nucleiSelectedCustomTemplates` `String[]` per project (default: `[]`). Different projects can enable different templates from the same global pool
  - Agent discovery: at startup, the nuclei MCP server scans `/opt/nuclei-templates/` and dynamically appends all template paths (id, severity, name) to the `execute_nuclei` tool description, so the agent automatically knows what custom templates are available
  - Recon pipeline: selected templates are individually passed as `-t /custom-templates/{path}` flags to nuclei. Recon logs list each selected template by name
  - Spring Boot Actuator templates (community PR #69): 7 detection templates with 200+ WAF bypass paths for `/actuator`, `/heapdump`, `/env`, `/jolokia`, `/gateway` endpoints — URL encoding, semicolon injection, path traversal, and alternate base path evasion techniques
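Building the per-template flags is a small pure function; a sketch (helper name hypothetical):

```python
def nuclei_template_args(selected_paths):
    # one -t flag per checked custom template, following the
    # '-t /custom-templates/{path}' convention described above
    args = []
    for path in selected_paths:
        args += ["-t", f"/custom-templates/{path}"]
    return args
```

The resulting list can be appended directly to a nuclei argv.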
- SSL Verify Toggle for OpenAI-compatible LLM Providers (community PR #70) — `sslVerify` boolean (default: `true`) lets users skip SSL certificate verification when connecting to internal/self-hosted LLM endpoints with self-signed certificates. Full stack: Prisma schema, API route, frontend checkbox, agent `httpx.Client(verify=False)` injection.
- Dockerfile `DEBIAN_FRONTEND=noninteractive` (community PR #63) — added to `agentic`, `recon_orchestrator`, and `guinea_pigs` Dockerfiles to suppress interactive `apt-get` prompts during builds.
- ParamSpider Passive Parameter Discovery — mines the Wayback Machine CDX API for historically-documented URLs containing query parameters. Only returns parameterized URLs (with `?key=value`), with values replaced by a configurable placeholder (default `FUZZ`), making results directly usable for fuzzing. Runs in Phase 4 (Resource Enumeration) in parallel with Katana, Hakrawler, and GAU. Passive — no traffic to target. No API keys required. Disabled by default; stealth mode auto-enables it. Full stack integration:
  - Backend: `paramspider_helpers.py` with `run_paramspider_discovery()` (subprocess per domain, stdout + file output parsing, scope filtering, temp dir cleanup) and `merge_paramspider_into_by_base_url()` (sources array merge, parameter enrichment, deduplication)
  - Settings: 3 user-configurable `PARAMSPIDER_*` settings (enabled, placeholder, timeout)
  - Frontend: `ParamSpiderSection.tsx` with enable toggle, placeholder input, timeout setting
  - Stealth mode: auto-enabled (passive tool, queries Wayback Machine only)
  - Tests: 22 unit tests covering merge logic, subprocess mocking, scope filtering, method merging, legacy field migration, settings, stealth overrides
- Arjun Parameter Discovery — discovers hidden HTTP query and body parameters on endpoints by testing ~25,000 common parameter names. Runs in Phase 4 (Resource Enumeration) after FFuf, testing discovered endpoints from crawlers/fuzzers rather than just base URLs. Disabled by default; stealth mode forces passive-only; RoE caps rate. Full stack integration:
  - Backend: `arjun_helpers.py` with multi-method parallel execution via `ThreadPoolExecutor` — each selected method (GET/POST/JSON/XML) runs as a separate Arjun subprocess simultaneously
  - Discovered endpoint feeding: collects full endpoint URLs from Katana + Hakrawler + jsluice + FFuf results, prioritizes API and dynamic endpoints, caps to configurable max (default 50)
  - Settings: 12 user-configurable `ARJUN_*` settings (methods, max endpoints, threads, timeout, chunk size, rate limit, stable mode, passive mode, disable redirects, custom headers)
  - Frontend: `ArjunSection.tsx` with multi-select method checkboxes, max endpoints field, scan parameters, stable/passive/redirect toggles, custom headers textarea
  - Stealth mode: forces `ARJUN_PASSIVE=True` (CommonCrawl/OTX/WaybackMachine only, no active requests to target)
  - Tests: 29 unit tests covering merge logic, multi-method parallel execution, scope filtering, command building, settings consistency, stealth/RoE overrides
- FFuf Directory Fuzzer — brute-force directory/endpoint discovery using wordlists, complementing crawlers (Katana, Hakrawler, GAU) by finding hidden content (admin panels, backup files, configs, undocumented APIs). Runs in Phase 4 (Resource Enumeration) after jsluice and before Kiterunner. Disabled by default; stealth mode disables it; RoE caps rate. Full stack integration:
  - Backend: `ffuf_helpers.py` with `run_ffuf_discovery()`, JSON output parsing, scope filtering, deduplication, and smart fuzzing under crawler-discovered base paths
  - Dockerfile: multi-stage Go 1.22 build compiles FFuf from source, installs 3 SecLists wordlists (`common.txt`, `raft-medium-directories.txt`, `directory-list-2.3-small.txt`)
  - Settings: 16 user-configurable `FFUF_*` settings (threads, rate, timeout, wordlist, match/filter codes, extensions, recursion, auto-calibrate, smart fuzz, custom headers)
  - Frontend: `FfufSection.tsx` with full settings UI, wordlist dropdown (built-in SecLists + custom uploads), custom wordlist upload/delete via API
  - Custom wordlists: upload `.txt` wordlists per-project via `/api/projects/[id]/wordlists` (GET/POST/DELETE), shared between webapp and recon containers via Docker volume mount
  - Validation: frontend form validation for FFuf status codes (100-599), header format, numeric ranges, extensions format, recursion depth (1-5)
  - Tests: 43 unit tests covering helpers, settings, stealth/RoE overrides, sanitization, and CRUD operations
- RedAmon Terminal — interactive PTY shell access to the kali-sandbox container directly from the graph page via xterm.js. Provides a full Kali Linux terminal with all pre-installed pentesting tools (Metasploit, Nmap, Nuclei, Hydra, sqlmap, etc.) without leaving the browser. Architecture: Browser (xterm.js) → WebSocket → Agent FastAPI proxy (`/ws/kali-terminal`) → kali-sandbox terminal server (PTY `/bin/bash` on port 8016):
  - Terminal server: `terminal_server.py` — WebSocket PTY server using `os.fork` + the `pty` module with async I/O via `loop.add_reader()`, connection limits (max 5 sessions), resize validation (clamped 1-500), process group cleanup, and `asyncio.Event` for clean shutdown
  - Agent proxy: `/ws/kali-terminal` WebSocket endpoint in `api.py` — bidirectional relay with proper task cancellation (`asyncio.gather` with `return_exceptions`)
  - Frontend: `KaliTerminal.tsx` — React component with dark Ayu theme, connection status indicator, auto-reconnect with exponential backoff (5 attempts), fullscreen toggle, browser-side keepalive ping (30s), proper xterm.js teardown, ARIA accessibility attributes
  - Docker: port 8016 bound to localhost only (`127.0.0.1:8016:8016`), `TERMINAL_WS_PORT` and `KALI_TERMINAL_WS_URL` env vars
  - Tests: 18 Python + TypeScript unit tests covering resize clamping, connection limits, URL derivation, reconnect logic
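The core `pty` + `loop.add_reader()` mechanism can be sketched as a minimal one-shot runner (illustrative only, not the real `terminal_server.py`; a production server would also relay writes, handle resize, and enforce session limits):

```python
import asyncio
import os
import pty

async def run_in_pty(argv):
    loop = asyncio.get_running_loop()
    pid, master_fd = pty.fork()
    if pid == 0:
        # child: argv becomes the process attached to the slave PTY
        os.execvp(argv[0], argv)
    output = bytearray()
    done = loop.create_future()

    def on_readable():
        # invoked by the event loop whenever the PTY master has data
        try:
            chunk = os.read(master_fd, 4096)
        except OSError:          # EIO when the child side closes
            chunk = b""
        if chunk:
            output.extend(chunk)
        else:
            loop.remove_reader(master_fd)
            done.set_result(bytes(output))

    loop.add_reader(master_fd, on_readable)
    result = await done
    os.waitpid(pid, 0)
    os.close(master_fd)
    return result
```

In a real server, each chunk would be forwarded to the WebSocket instead of buffered, and incoming WebSocket frames would be written back to `master_fd`.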
- "Remote Shells" renamed to "Reverse Shell" — tab renamed for clarity to distinguish from the new RedAmon Terminal tab. The Reverse Shell tab manages agent-opened sessions (meterpreter, netcat, etc.), while RedAmon Terminal provides direct interactive sandbox access.
- Hakrawler Integration — DOM-aware web crawler running as a Docker container (`jauderho/hakrawler`). Runs in parallel with Katana, GAU, and Kiterunner during resource enumeration. Configurable depth, threads, subdomain inclusion, and scope filtering. Disabled automatically in stealth mode.
- jsluice JavaScript Analysis — passive JS analysis tool for extracting URLs, API endpoints, and embedded secrets (AWS keys, GitHub tokens, GCP credentials, etc.) from discovered JavaScript files. Runs sequentially after the parallel crawling phase.
- Secret Node in Neo4j — generic `Secret` node type linked to `BaseURL` via `[:HAS_SECRET]`. Source-agnostic design supports jsluice now and future secret discovery tools. Includes deduplication, severity classification, and redacted samples.
- Hakrawler enabled by default — new projects have Hakrawler and Include Subdomains enabled by default.
- Tool Confirmation Gate — per-tool human-in-the-loop safety gate that pauses the agent before executing dangerous tools (`execute_nmap`, `execute_naabu`, `execute_nuclei`, `execute_curl`, `metasploit_console`, `msf_restart`, `kali_shell`, `execute_code`, `execute_hydra`). Full multi-layer integration:
  - Backend: `DANGEROUS_TOOLS` frozenset in `project_settings.py`, `ToolConfirmationRequest` Pydantic model in `state.py`, two new LangGraph nodes (`await_tool_confirmation`, `process_tool_confirmation`) in `tool_confirmation_nodes.py`
  - Orchestrator: think node detects dangerous tools in both single-tool and plan-wave decisions, sets `awaiting_tool_confirmation` and `tool_confirmation_pending` state, graph pauses at `await_tool_confirmation` (END) and resumes via `process_tool_confirmation` routing to execute_tool/execute_plan (approve), think (reject), or patching tool_args (modify)
  - WebSocket: `tool_confirmation` (client→server) and `tool_confirmation_request` (server→client) message types, `ToolConfirmationMessage` model, `handle_tool_confirmation()` handler with str...
2.3.0 - 2026-03-14
Added
- Global Settings Page — new `/settings` page (gear icon in header) for managing all user-level configuration through the UI. AI provider keys and Tavily API key are configured exclusively here — no `.env` file needed. Two sections:
  - LLM Providers — add, edit, delete, and test LLM provider configurations stored per-user in the database. Supports five provider types:
    - OpenAI, Anthropic, OpenRouter — enter API key, all models auto-discovered
    - AWS Bedrock — enter AWS credentials + region, foundation models auto-discovered
    - OpenAI-Compatible — single endpoint+model configuration with presets for Ollama, vLLM, LM Studio, Groq, Together AI, Fireworks AI, Mistral AI, and Deepinfra. Supports custom base URL, headers, timeout, temperature, and max tokens
  - Tool API Keys — Tavily API key (web search), Shodan API key (internet-wide OSINT), and SerpAPI key (Google dorking)
- Test Connection — each LLM provider can be tested before saving with a "Test Connection" button that sends a simple message and shows the response
- DB-only settings — AI provider keys and Tavily API key are stored exclusively in the database (per-user). No env-var fallback — `.env` is reserved for infrastructure variables only (NVD, tunneling, database credentials, ports)
- Prisma schema — added `UserLlmProvider` and `UserSettings` models with relations to `User`
- Centralized LLM setup — CypherFix triage and codefix orchestrators now use the shared `setup_llm()` function instead of duplicating provider routing logic
- Pentest Report Generation — generate professional, client-ready penetration testing reports as self-contained HTML files from the `/reports` page. Reports compile all reconnaissance data, vulnerability findings, CVE intelligence, attack chain results, and remediation recommendations into an 11-section document (Cover, Executive Summary, Scope & Methodology, Risk Summary, Findings, Other Vulnerability Details, Attack Surface, CVE Intelligence, GitHub Secrets, Attack Chains, Recommendations, Appendix). Features include:
- LLM-generated narratives — when an AI model is configured, six report sections receive detailed prose: executive summary (8–12 paragraphs), scope, risk analysis, findings context, attack surface analysis, and exhaustive prioritized remediation triage. Falls back gracefully to data-only reports when no LLM is available
- Security Posture Radar — inline SVG 6-axis radar chart in the Risk Summary section showing Attack Surface, Vulnerability Density, Exploitability, Certificate Health, Injectable Parameters, and Security Header coverage using logarithmic normalization
- Security Headers Gap Analysis — per-header weighted coverage bars (HSTS, CSP, X-Frame-Options, X-Content-Type-Options, X-XSS-Protection, Referrer-Policy, Permissions-Policy) with color-coded thresholds
- CISA KEV Callout — prominent alert box highlighting Known Exploited Vulnerabilities when present
- Injectable Parameters Breakdown — summary and per-position injection risk analysis with visual bars
- Attack Flow Chains — Technology → CVE → CWE → CAPEC flow table showing complete attack paths
- CDN Coverage visualization — ratio of CDN-fronted vs directly exposed IPs in the Attack Surface section
- Project-specific generation — dedicated project selector dropdown on the reports page (independent of the top bar selection)
- Download and Open — separate buttons to save the HTML file locally or open in a new browser tab
- Print/PDF optimized — page breaks, print-friendly CSS, and clean SVG/CSS bar rendering for `Ctrl+P` export
- Export/Import support — reports (metadata + HTML files) are included in project export ZIP archives and fully restored on import
- Wiki documentation — new Pentest Reports wiki page with example report download
- Target Guardrail — LLM-based safety check that prevents targeting unauthorized domains and IPs. Blocks government sites (`.gov`, `.mil`), major tech companies, financial institutions, social media platforms, and other well-known public services. Two layers: project creation (fail-open) and agent initialization (fail-closed). For IP mode, public IPs are resolved via reverse DNS before evaluation; private/RFC1918 IPs are auto-allowed. Blocked targets show a centered modal with the reason.
- Expanded CPE Technology Mappings — CPE_MAPPINGS table in `recon/helpers/cve_helpers.py` expanded from 82 to 133 entries, significantly improving CVE lookup accuracy for Wappalyzer-detected technologies. New coverage includes:
- CMS: Magento, Ghost, TYPO3, Concrete CMS, Craft CMS, Strapi, Umbraco, Adobe Experience Manager, Sitecore, DNN, Kentico
- Web Frameworks: CodeIgniter, Symfony, CakePHP, Yii, Nuxt.js, Apache Struts, Adobe ColdFusion
- JavaScript Libraries: Moment.js, Lodash, Handlebars, Ember.js, Backbone.js, Dojo, CKEditor, TinyMCE, Prototype
- E-commerce: PrestaShop, OpenCart, osCommerce, Zen Cart, WooCommerce
- Message Boards / Community: Discourse, phpBB, vBulletin, MyBB, Flarum, NodeBB, Mastodon, Mattermost
- Wikis: MediaWiki, Atlassian Confluence, DokuWiki, XWiki
- Issue Trackers / DevOps: Atlassian Jira, Atlassian Bitbucket, Bugzilla, Redmine, Gitea, TeamCity, Artifactory
- Hosting Panels: cPanel, Plesk, DirectAdmin
- Web Servers: OpenResty, Deno, Tengine
- Databases: SQLite, Apache Solr, Adminer
- Security / Network: Kong, F5 BIG-IP, Pulse Secure
- Webmail: Zimbra, SquirrelMail
- 29 new `normalize_product_name()` aliases for Wappalyzer output variations (e.g., "Atlassian Jira" → "jira", "Moment" → "moment.js", "Concrete5" → "concrete cms")
- 6 new `skip_list` entries (Cloudflare, Google Analytics, Google Tag Manager, Facebook Pixel, Hotjar, Google Font API) to avoid wasting NVD API calls on SaaS/CDN technologies
- Insights Dashboard — real-time analytics page (`/insights`) with interactive charts and tables covering attack chains, exploit successes, finding severity, targets attacked, strategic decisions, vulnerability distributions, attack surface composition, and agent activity. All data is pulled directly from the Neo4j graph and organized into sections: Attack Chains & Exploits, Attack Surface, Vulnerabilities & CVE Intelligence, Graph Overview, and Activity & Timeline.
Rules of Engagement (RoE) — upload a RoE document (PDF, TXT, MD, DOCX) at project creation and an LLM auto-parses it into structured settings enforced across the entire platform:
- Document upload & parsing — file upload area in the RoE tab of the project form (create mode only). The agent extracts client info, scope, exclusions, time windows, testing permissions, rate limits, data handling policies, compliance frameworks, and more into 30+ structured fields
- Three enforcement layers — (1) agent prompt injection: structured
RULES OF ENGAGEMENT (MANDATORY)section injected into every reasoning step with excluded hosts, permissions, and constraints; (2) hard gate inexecute_tool_node: deterministic code blocks forbidden tools, forbidden categories, permission flags, and phase cap violations regardless of LLM output; (3) recon pipeline: excluded hosts filtered from target lists, rate limits capped viamin(tool_rate, global_max), time window blocks scan starts outside allowed hours - 30+ RoE project fields — client & engagement info, excluded hosts with reasons, time windows (days/hours/timezone), 6 testing permission toggles (DoS, social engineering, physical access, data exfiltration, account lockout, production testing), forbidden tool/category lists, max severity phase cap, global rate limit, sensitive data handling policy, data retention, encryption requirements, status update frequency, critical finding notification, incident procedure, compliance frameworks, third-party providers, and free-text notes
- RoE Viewer tab on the graph dashboard — formatted read-only view with cards for engagement, scope, exclusions, time window (live ACTIVE/OUTSIDE WINDOW status), testing permissions (green/red badge grid), constraints, data handling, communication, compliance, and notes. Download button for the original uploaded document
- RoE toolbar badge — blue "RoE" badge on the graph toolbar when engagement guardrails are active
- Smart tool restriction parsing — only explicitly banned tools (e.g., "do not use Hydra") are disabled; "discouraged" or "use with caution" language is noted in the prompt but does not disable tools. Phase restrictions use `roeMaxSeverityPhase` instead of stripping phases from individual tools
- Export/import support — the RoE document binary is base64-encoded in project exports and restored on import. All RoE fields are included in the export ZIP
- Cascade deletion — all RoE data (fields + document binary) deleted with the project via Prisma cascade
- One-way at creation only — RoE settings become read-only after project creation to prevent mid-engagement modification
- Based on industry standards: PTES, SANS, NIST SP 800-115, Microsoft RoE, HackerOne, Red Team Guide
- Emergency PAUSE ALL button — red/yellow danger-styled button on the Graph toolbar that instantly freezes every running pipeline (Recon, GVM, GitHub Hunt) and stops all AI agent conversations in one click. Shows "PAUSING..." with a spinner during the operation. Always visible on the toolbar, disabled when nothing is running. New `POST /emergency-stop-all` endpoint on the agent service cancels all active agent tasks via the WebSocket manager
- Wave Runner (Parallel Tool Plans) — when the LLM identifies two or more independent tools that don't depend on each other's outputs, it groups them into a wave and executes them concurrently via `asyncio.gather()` instead of sequentially. K...
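The wave execution described above can be sketched with plain asyncio. This is a minimal illustration, not the shipped code: `run_tool` and the wave shape are hypothetical stand-ins for the real MCP tool dispatch.

```python
import asyncio

async def run_tool(name: str, args: dict) -> str:
    # Hypothetical stand-in for the real MCP tool dispatch
    await asyncio.sleep(0.01)
    return f"{name}: done"

async def run_wave(wave: list[tuple[str, dict]]) -> dict:
    # All tools in a wave are independent, so they run concurrently;
    # return_exceptions=True keeps one failure from cancelling siblings.
    results = await asyncio.gather(
        *(run_tool(name, args) for name, args in wave),
        return_exceptions=True,
    )
    return {name: res for (name, _), res in zip(wave, results)}

wave = [("naabu", {"target": "10.0.0.1"}), ("httpx", {"target": "10.0.0.1"})]
print(asyncio.run(run_wave(wave)))
```

Because every coroutine runs in the same event loop, no cross-process serialisation is needed to merge the wave's results.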
2.2.0 - 2026-03-05
Added
- Pipeline Pause / Resume / Stop Controls — full lifecycle management for all three pipelines (Recon, GVM Scan, GitHub Secret Hunt):
- Pause — freezes the running container via Docker cgroups (`container.pause()`). Zero changes to scan scripts; processes resume exactly where they left off
- Resume — unfreezes the container (`container.unpause()`); log streaming resumes instantly
- Stop — kills the container permanently. Paused containers are unpaused before stopping to avoid cgroup issues. Sub-containers (naabu, httpx, nuclei, etc.) are also cleaned up
- Toolbar UI — when running: spinner + Pause button + Stop button. When paused: Resume button + Stop button. When stopping: "Stopping..." with disabled controls
- Logs drawer controls — pause/resume and stop buttons in the status bar, with a `Paused` status indicator and spinner during stopping
- Optimistic UI — stop button immediately shows "Stopping..." before the API responds
- SSE stays alive during pause and stopping states so logs resume/complete without reconnection
- 6 new backend endpoints (`POST /{recon,gvm,github-hunt}/{projectId}/{pause,resume}`) and 9 new webapp API proxy routes (pause/resume/stop × 3 pipelines)
- Removed the auto-scroll play/pause toggle from the logs drawer (redundant with the "Scroll to bottom" button)
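The cgroup-safe stop order above can be sketched as follows. `FakeContainer` is a stand-in for the Docker SDK's container object; the real code calls docker-py's `container.unpause()` and `container.kill()`.

```python
def stop_pipeline(container) -> None:
    """Stop a pipeline container safely: a paused (cgroup-frozen)
    container must be unpaused before it can be killed."""
    if container.status == "paused":
        container.unpause()
    container.kill()

class FakeContainer:
    # Minimal stand-in for docker.models.containers.Container
    def __init__(self, status: str):
        self.status, self.calls = status, []
    def unpause(self):
        self.calls.append("unpause")
        self.status = "running"
    def kill(self):
        self.calls.append("kill")
        self.status = "exited"

c = FakeContainer("paused")
stop_pipeline(c)
print(c.calls)  # → ['unpause', 'kill']
```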
- IP/CIDR Targeting Mode — start reconnaissance from IP addresses or CIDR ranges instead of a domain:
- "Start from IP" toggle in the Target & Modules tab — switches the project from domain-based to IP-based targeting. Locked after creation (cannot switch modes on existing projects)
- Target IPs / CIDRs textarea — accepts individual IPs (`192.168.1.1`), IPv6 (`2001:db8::1`), and CIDR ranges (`10.0.0.0/24`, `192.168.1.0/28`) with a max /24 (256 hosts) limit per CIDR
- Reverse DNS (PTR) resolution — each IP is resolved to its hostname via PTR records. When no PTR exists, a mock hostname is generated from the IP (e.g., `192-168-1-1`)
- CIDR expansion — CIDR ranges are automatically expanded into individual host IPs (network and broadcast addresses excluded). Original CIDRs are passed to naabu for efficient native scanning
- Full pipeline support — IP-mode projects run the complete 6-phase pipeline: reverse DNS + IP WHOIS → port scan → HTTP probe → resource enumeration (Katana, Kiterunner) → vulnerability scan (Nuclei) → CVE/MITRE enrichment
- Neo4j graph integration — mock Domain node (`ip-targets.{project_id}`) with `ip_mode: true`, Subdomain nodes (real PTR hostnames or IP-based mocks), IP nodes with WHOIS data, and all downstream relationships
- Tenant-scoped Neo4j constraints — IP, Subdomain, BaseURL, Port, Service, and Technology uniqueness constraints are now scoped to `(key, user_id, project_id)`, allowing the same IP/subdomain to exist in different projects without conflicts
- Input validation — new `webapp/src/lib/validation.ts` module with regex validators for IPs, CIDRs, domains, ports, status codes, HTTP headers, GitHub tokens, and more. Validation runs on form submit
- `ipMode` and `targetIps` fields added to the Prisma schema with a database migration
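The CIDR expansion rules above (host-only expansion, /24 cap) can be sketched with the standard `ipaddress` module; the function name and cap parameter are illustrative, not the shipped implementation.

```python
import ipaddress

def expand_cidr(cidr: str, min_prefix: int = 24) -> list[str]:
    """Expand a CIDR into individual host IPs, excluding the network
    and broadcast addresses, with a /24 (256 hosts) cap per range."""
    net = ipaddress.ip_network(cidr, strict=False)
    if net.version == 4 and net.prefixlen < min_prefix:
        raise ValueError(f"{cidr} exceeds the /24 limit (256 hosts)")
    # .hosts() already omits the network and broadcast addresses for IPv4
    return [str(host) for host in net.hosts()]

hosts = expand_cidr("192.168.1.0/28")
print(len(hosts))  # → 14 (16 addresses minus network and broadcast)
```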
- Chisel TCP Tunnel Integration — multi-port reverse tunnel alternative to ngrok for full attack path support:
- chisel (v1.11.4) installed alongside ngrok in kali-sandbox Dockerfile — single binary, supports amd64 and arm64
- Reverse tunnels both port 4444 (handler) and port 8080 (web delivery/HTA) through a single connection to a VPS
- Enables Web Delivery (Method C) and HTA Delivery (Method D) phishing attacks that require two ports — previously blocked with ngrok's single-port limitation
- Stageless Meterpreter payloads required through chisel (staged payloads fail through tunnels — same as ngrok)
- Deterministic endpoint discovery — LHOST derived from the `CHISEL_SERVER_URL` hostname (no API polling needed)
- Auto-reconnect with exponential backoff if the VPS connection drops
- `CHISEL_SERVER_URL` and `CHISEL_AUTH` env vars added to `.env.example` and `docker-compose.yml`
- `_query_chisel_tunnel()` utility in `agentic/utils.py` with `get_session_config_prompt()` integration
- `agentChiselTunnelEnabled` Prisma field with database migration
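Deterministic LHOST discovery amounts to parsing the hostname out of `CHISEL_SERVER_URL` and resolving it. The sketch below shows what a helper like `_query_chisel_tunnel()` has to do; the function body is an assumption, not the shipped code.

```python
import socket
from urllib.parse import urlparse

def derive_lhost(chisel_server_url: str) -> str:
    """Derive LHOST from the CHISEL_SERVER_URL hostname; no tunnel-API
    polling is needed because the endpoint is fixed by configuration."""
    host = urlparse(chisel_server_url).hostname
    try:
        return socket.gethostbyname(host)  # resolve to an IP for payloads
    except socket.gaierror:
        return host  # fall back to the raw hostname

print(derive_lhost("https://127.0.0.1:9000"))  # → 127.0.0.1
```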
- Phishing / Social Engineering Attack Path (`phishing_social_engineering`) — third classified attack path with a mandatory 6-step workflow: target platform selection, handler setup, payload generation, verification, delivery, and session callback:
- Standalone Payloads (Method A): msfvenom-based payload generation for Windows (exe, psh, psh-reflection, vba, hta-psh), Linux (elf, bash, python), macOS (macho), Android (apk), Java (war), and cross-platform (python) — with optional AV evasion via `shikata_ga_nai` encoding
- Malicious Documents (Method B): Metasploit fileformat modules for weaponized Word macro (.docm), Excel macro (.xlsm), PDF (Adobe Reader exploit), RTF (CVE-2017-0199 HTA handler), and LNK shortcut files
- Web Delivery (Method C): fileless one-liner delivery via `exploit/multi/script/web_delivery` supporting Python, PHP, PowerShell, Regsvr32 (AppLocker bypass), pubprn, SyncAppvPublishingServer, and PSH Binary targets
- HTA Delivery (Method D): HTML Application server via `exploit/windows/misc/hta_server` for browser-based payload delivery
- Email Delivery: Python smtplib-based email sending via `execute_code` with per-project SMTP configuration (host, port, user, password, sender, TLS) — the agent asks at runtime if no SMTP settings are configured
- Chat Download: default delivery via a `docker cp` command reported in chat
- New prompt module `phishing_social_engineering_prompts.py` with `PHISHING_SOCIAL_ENGINEERING_TOOLS` (full workflow) and `PHISHING_PAYLOAD_FORMAT_GUIDANCE` (OS-specific format decision tree and msfvenom quick reference)
- LLM classifier updated with phishing keywords and 10 example requests for accurate routing
- `phishing_social_engineering` added to the `KNOWN_ATTACK_PATHS` set and `AttackPathClassification` validator
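The smtplib-based email delivery can be sketched as below. The config-dict shape mirrors the documented per-project SMTP fields (host, port, user, password, sender, TLS) but is an assumption; the shipped code runs inside `execute_code` on the sandbox.

```python
import smtplib
from email.message import EmailMessage

def build_message(sender: str, recipient: str, subject: str, body: str) -> EmailMessage:
    """Assemble the delivery email."""
    msg = EmailMessage()
    msg["From"], msg["To"], msg["Subject"] = sender, recipient, subject
    msg.set_content(body)
    return msg

def deliver(smtp_cfg: dict, msg: EmailMessage) -> None:
    # Network side, shown for completeness; needs a reachable SMTP host.
    with smtplib.SMTP(smtp_cfg["host"], smtp_cfg["port"]) as server:
        if smtp_cfg.get("tls"):
            server.starttls()
        server.login(smtp_cfg["user"], smtp_cfg["password"])
        server.send_message(msg)

msg = build_message("ops@example.com", "user@example.com", "Quarterly update", "See portal.")
print(msg["Subject"])  # → Quarterly update
```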
- ngrok TCP Tunnel Integration — automatic reverse shell tunneling through ngrok for NAT/cloud environments:
- ngrok installed in the kali-sandbox Dockerfile and auto-started in `entrypoint.sh` when the `NGROK_AUTHTOKEN` env var is set
- TCP tunnel on port 4444 with the ngrok API exposed on port 4040
- `_query_ngrok_tunnel()` utility in `agentic/utils.py` that queries the ngrok API, discovers the public TCP endpoint, and resolves the hostname to an IP for targets with limited DNS
- `get_session_config_prompt()` auto-detects LHOST/LPORT from ngrok when enabled — injects a status banner, a dual LHOST/LPORT table (handler vs payload), and enforces REVERSE-only payloads through ngrok
- `is_session_config_complete()` short-circuits to complete when the ngrok tunnel is active
- `NGROK_AUTHTOKEN` added to `.env.example` and `docker-compose.yml` (kali-sandbox env + port 4040 exposed)
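Endpoint discovery through the ngrok local API (`GET http://127.0.0.1:4040/api/tunnels`) boils down to picking the TCP tunnel out of the JSON response. This sketch covers only the parsing step a helper like `_query_ngrok_tunnel()` performs; the response shape follows ngrok's documented local API.

```python
import json
from urllib.parse import urlparse

def parse_tcp_endpoint(tunnels_json: str):
    """Extract (host, port) of the public TCP endpoint from the
    ngrok local API response; returns None when no TCP tunnel is up."""
    data = json.loads(tunnels_json)
    for tunnel in data.get("tunnels", []):
        if tunnel.get("proto") == "tcp":
            url = urlparse(tunnel["public_url"])  # e.g. tcp://0.tcp.ngrok.io:12345
            return url.hostname, url.port
    return None

sample = '{"tunnels": [{"proto": "tcp", "public_url": "tcp://0.tcp.ngrok.io:12345"}]}'
print(parse_tcp_endpoint(sample))  # → ('0.tcp.ngrok.io', 12345)
```

The hostname is then resolved to an IP before being used as LHOST, for targets with limited DNS.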
- Phishing Section in Project Settings — new `PhishingSection` component with an SMTP configuration textarea for per-project email delivery settings
- Tunnel Provider Dropdown — replaced the single "Enable ngrok TCP Tunnel" toggle in Agent Behaviour settings with a Tunnel Provider dropdown (None / ngrok / chisel). Mutually exclusive — selecting one automatically disables the other
- Social Engineering Suggestion Templates — 15 new suggestion buttons in the AI Assistant drawer under a pink "Social Engineering" template group (Mail icon), covering payload generation, malicious documents, web delivery, HTA, email phishing, AV evasion, and more
- Phishing Attack Path Badge — pink "PHISH" badge with an `#ec4899` accent color for phishing sessions in the AI Assistant drawer
- Prisma Migrations — `20260228120000_add_ngrok_tunnel` (agentNgrokTunnelEnabled), `20260228130000_add_phishing_smtp_config` (phishingSmtpConfig), and `20260305145750_add_ip_mode` (ipMode, targetIps) database migrations
- Remote Shells Tab — new "Remote Shells" tab on the graph dashboard for real-time session management:
- Unified view of all active Metasploit sessions (meterpreter, shell), background handlers/jobs, and non-MSF listeners (netcat, socat)
- Sessions auto-detected from the Kali sandbox with 3-second polling and background cache refresh
- Built-in interactive terminal with command history (arrow keys), session-aware prompts, and auto-scroll
- Session actions: kill, upgrade shell to meterpreter, stop background jobs
- Agent busy detection with lock-timeout strategy — session listing always works from cache, interaction retries when lock is available
- Session-to-chat mapping — each session card shows which AI agent chat session created it
- Non-MSF session registration when the agent creates netcat/socat listeners via `kali_shell`
- Command Whisperer — AI-powered NLP-to-command translator in the Remote Shells terminal:
- Natural language input bar (purple accent) above the terminal command line
- Describe what you want in plain English → LLM generates the correct command for the current session type (meterpreter vs shell)
- Uses the project's configured LLM (same model as the AI agent) via a new `/command-whisperer` API endpoint
- Generated commands auto-fill the terminal input for review — no auto-execution
- Metasploit Session Persistence — removed automatic Metasploit restart on new conversations:
- Removed `start_msf_prewarm` call from WebSocket initialization
- Removed `sessions -K` soft-reset on first `metasploit_console` use
- `msf_restart` tool now visible to the AI agent for manual use when a clean state is needed
Changed
- Conflict detection — IP-mode projects skip domain conflict checks entirely (tenant-scoped Neo4j constraints make IP overlap safe across projects). Domain-mode conflict detection unchanged
- HTTP probe scope filtering —...
2.1.0 - 2026-02-27
Added
- CypherFix — Automated Vulnerability Remediation Pipeline — end-to-end system that takes offensive findings from the Neo4j graph and turns them into merged code fixes:
- Triage Agent (`cypherfix_triage/`): AI agent that queries the Neo4j knowledge graph, correlates hundreds of reconnaissance and exploitation findings, deduplicates them, ranks by exploitability and severity, and produces a prioritized remediation plan
- CodeFix Agent (`cypherfix_codefix/`): autonomous code-repair agent that clones the target repository, navigates the codebase with 11 code-aware tools, implements targeted fixes for each triaged vulnerability, and opens a GitHub pull request ready for review and merge
- Real-time WebSocket streaming for both Triage and CodeFix agents with dedicated hooks (`useCypherFixTriageWS`, `useCypherFixCodeFixWS`)
- Remediations API (`/api/remediations/`) and hook (`useRemediations`) for persisting and retrieving remediation results
- CypherFix API routes (`/api/cypherfix/`) for triggering and managing triage and codefix sessions
- Agent-side API endpoints and orchestrator integration in `api.py` and `orchestrator.py`
- CypherFix Tab on Graph Page — new tab (`CypherFixTab/`) in the Graph dashboard providing a dedicated interface to launch triage, review prioritized findings, trigger code fixes, and monitor remediation progress
- CypherFix Settings Section — new `CypherFixSettingsSection` in Project Settings for configuring CypherFix parameters (GitHub repo, branch, AI model, triage/codefix behavior)
- CypherFix Type System (`cypherfix-types.ts`) — shared TypeScript types for triage results, codefix sessions, remediation records, and WebSocket message protocols
- Agentic README Documentation (`agentic/readmes/`) — internal documentation for the agentic module
Changed
- Global Header — updated navigation to include CypherFix access point
- View Tabs — styling updates to accommodate the new CypherFix tab
- Project Form — expanded with CypherFix settings section and updated section exports
- Hooks barrel export — updated `hooks/index.ts` with the new CypherFix and remediation hooks
- Prisma Schema — new fields for CypherFix configuration in the project model
- Agent Requirements — new Python dependencies for CypherFix agents
- Docker Compose — updated service configuration for CypherFix support
- README — version bump to v2.1.0, CypherFix badge added, pipeline description updated
1.3.0 - 2026-02-19
Added
- Multi-Provider LLM Support — the agent now supports 4 AI providers (OpenAI, Anthropic, OpenRouter, AWS Bedrock) with 400+ selectable models. Models are dynamically fetched from each provider's API and cached for 1 hour. The provider is auto-detected via a prefix convention (`openrouter/`, `bedrock/`, `claude-*`, or plain OpenAI)
- Dynamic Model Selector — replaced the hardcoded 11-model dropdown with a searchable, provider-grouped model picker in Project Settings. Type to filter across all providers instantly; each model shows its name, context window, and pricing info
- `GET /models` API Endpoint — new agent endpoint that fetches available models from all configured providers in parallel. Proxied through the webapp at `/api/models`
- `model_providers.py` — new provider discovery module with async fetchers for the OpenAI, Anthropic, OpenRouter, and AWS Bedrock APIs, with in-memory caching (1h TTL)
- Stealth Mode — new per-project toggle that forces the entire pipeline to use only passive and low-noise techniques:
- Recon: disables Kiterunner and banner grabbing, switches Naabu to CONNECT scan with rate limiting, throttles httpx/Katana/Nuclei, disables DAST and interactsh callbacks
- Agent: injects stealth rules into the system prompt — only passive/stealthy methods allowed, agent must refuse if stealth is impossible
- GVM scanning disabled in stealth mode (generates ~50K active probes per target)
- Stealth Mode UI — toggle in Target section of Project Settings with description of what it does
- Kali Sandbox Tooling Expansion — 15+ new packages installed in the Kali container: `netcat`, `socat`, `rlwrap`, `exploitdb`, `john`, `smbclient`, `sqlmap`, `jq`, `gcc`, `g++`, `make`, `perl`, `go`
- `kali_shell` MCP Tool — direct Kali Linux shell command execution, available in all phases
- `execute_code` MCP Tool — run custom Python/Bash exploit scripts on the Kali sandbox
- `msf_restart` MCP Tool — restart the Metasploit RPC daemon when it becomes unresponsive
- `execute_nmap` MCP Tool — deep service analysis, OS fingerprinting, NSE scripts (consolidated from the previous naabu-only setup)
- MCP Server Consolidation — merged the curl and naabu servers into a unified `network_recon_server.py`, added a dedicated `nmap_server.py`, fixed a tool loading race condition
- Failure Loop Detection — the agent detects 3+ consecutive similar failures and injects a pivot warning to break out of unproductive loops
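Failure loop detection reduces to a window check over the execution trace. A minimal sketch follows; the trace shape and function name are hypothetical, and the real similarity test is richer than exact tool-name equality.

```python
def detect_failure_loop(trace: list[tuple[str, bool]], threshold: int = 3) -> bool:
    """True when the last `threshold` steps are failures of the same tool,
    at which point the agent injects a pivot warning into the prompt."""
    if len(trace) < threshold:
        return False
    tail = trace[-threshold:]
    all_failed = all(not ok for _, ok in tail)
    same_tool = len({tool for tool, _ in tail}) == 1
    return all_failed and same_tool

trace = [("sqlmap", True), ("hydra", False), ("hydra", False), ("hydra", False)]
print(detect_failure_loop(trace))  # → True
```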
- Prompt Token Optimization — lazy no-module fallback injection (saves ~1.1K tokens), compact formatting for older execution trace steps (full output only for last 5), trimmed rarely-used wordlist tables
- Metasploit Prewarm — pre-initializes Metasploit console on agent startup to reduce first-use latency
- Markdown Report Export — download the full agent conversation as a formatted Markdown file
- Brute Force & CVE Exploit Settings — new Project Settings sections for configuring brute force speed/wordlist limits and CVE exploit attack path parameters
- Node.js Deserialization Guinea Pig — new test environment for CVE-2017-5941 (node-serialize RCE)
- Phase Tools Tooltip — hover on phase badges to see which MCP tools are available in that phase
- GitHub Secrets Suggestion — new suggestion button in AI Assistant to leverage discovered GitHub secrets during exploitation
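The provider prefix convention described under Multi-Provider LLM Support can be sketched as below; the return labels are illustrative, not the exact strings the orchestrator uses.

```python
def detect_provider(model: str) -> str:
    """Auto-detect the LLM provider from the model-name prefix:
    openrouter/…, bedrock/…, claude-* → Anthropic, else plain OpenAI."""
    if model.startswith("openrouter/"):
        return "openrouter"
    if model.startswith("bedrock/"):
        return "bedrock"
    if model.startswith("claude-"):
        return "anthropic"
    return "openai"

print(detect_provider("claude-sonnet-4"))  # → anthropic
print(detect_provider("openrouter/meta-llama/llama-4-maverick"))  # → openrouter
```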
Changed
- Agent Orchestrator — rewritten `_setup_llm()` with 4-way provider detection (OpenAI, Anthropic, OpenRouter via ChatOpenAI + custom base_url, Bedrock via ChatBedrockConverse with lazy import)
- Model Display — `formatModelDisplay()` helper cleans up prefixed model names in the AI Assistant badge and markdown export (e.g., `openrouter/meta-llama/llama-4-maverick` → `llama-4-maverick (OR)`)
- Prompt Architecture — tool registry extracted into a dedicated `tool_registry.py`; attack path prompts (CVE exploit, brute force, post-exploitation) significantly reworked for better token efficiency and exploitation success rates
- curl-based Exploitation — expanded curl-based vulnerability probing and no-module fallback workflows for when Metasploit modules aren't available
- kali_shell & execute_nuclei — expanded to all phases (previously restricted)
- GVM Button — disabled in stealth mode with tooltip explaining why
- README — extensive updates: 4-provider documentation, AI Model Providers section, Kali sandbox tooling tables, new badges (400+ AI Models, Stealth Mode, Full Kill Chain, 30+ Security Tools, 9000+ Vuln Templates, 170K+ NVTs, 180+ Settings), version bump to v1.3.0
1.2.0 - 2026-02-13
Added
- GVM Vulnerability Scanning — full end-to-end integration of Greenbone Vulnerability Management (GVM/OpenVAS) into the RedAmon pipeline:
- Python scanner module (`gvm_scan/`) with a `GVMScanner` class wrapping the GMP protocol for headless API-based scanning
- Orchestrator endpoints (`/gvm/{id}/start`, `/gvm/{id}/status`, `/gvm/{id}/stop`, `/gvm/{id}/logs`) with SSE log streaming
- Webapp API routes, `useGvmStatus` polling hook, `useGvmSSE` streaming hook, toolbar buttons, and log drawer on the Graph page
- Neo4j graph integration — GVM findings stored as `Vulnerability` nodes (`source="gvm"`) linked to IP/Subdomain via `HAS_VULNERABILITY`, with associated `CVE` nodes
- JSON result download from the Graph page toolbar
- GitHub Secret Hunt — automated secret and credential detection across GitHub organizations and user repositories:
- Python scanner module (`github_secret_hunt/`) with a `GitHubSecretHunter` class supporting 40+ regex patterns for AWS, Azure, GCP, GitHub, Slack, Stripe, database connection strings, CI/CD tokens, cryptographic keys, JWT/Bearer tokens, and more
- High-entropy string detection via Shannon entropy to catch unknown secret formats
- Sensitive filename detection (`.env`, `.pem`, `.key`, credentials files, Kubernetes kubeconfig, Terraform tfvars, etc.)
- Commit history scanning (configurable depth, default 100 commits) and gist scanning
- Organization member repository enumeration with rate-limit handling and exponential backoff
- Orchestrator endpoints (`/github-hunt/{id}/start`, `/github-hunt/{id}/status`, `/github-hunt/{id}/stop`, `/github-hunt/{id}/logs`) with SSE log streaming
- Webapp API routes for start, status, stop, log streaming, and JSON result download
- `useGithubHuntStatus` polling hook and `useGithubHuntSSE` streaming hook for real-time UI updates
- Graph page toolbar integration with start/stop button, log drawer, and result download
- JSON output with statistics (repos scanned, files scanned, commits scanned, gists scanned, secrets found, sensitive files, high-entropy findings)
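High-entropy detection uses the standard Shannon formula over character frequencies — a minimal sketch (the production threshold and tokenization are not shown here):

```python
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character; random-looking tokens (keys,
    secrets) score much higher than natural-language text."""
    if not s:
        return 0.0
    counts = Counter(s)
    n = len(s)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# A credential-shaped token vs. ordinary prose of similar length
print(shannon_entropy("AKIAIOSFODNN7EXAMPLE") > shannon_entropy("hello world hello"))  # → True
```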
- GitHub Hunt Per-Project Settings — GitHub scan configuration is now configurable per-project via the webapp UI:
- New "GitHub" section in Project Settings with token, target org/user, and scan options
- 7 configurable fields: Access Token, Target Organization, Scan Members, Scan Gists, Scan Commits, Max Commits, Output JSON
- `github_secret_hunt/project_settings.py` mirrors the recon/GVM settings pattern (fetch from webapp API, fall back to defaults)
- 7 new Prisma schema fields (`github_access_token`, `github_target_org`, `github_scan_members`, `github_scan_gists`, `github_scan_commits`, `github_max_commits`, `github_output_json`)
- GVM Per-Project Settings — GVM scan configuration is now configurable per-project via the webapp UI:
- New "GVM Scan" tab in Project Settings (between Integrations and Agent Behaviour)
- 5 configurable fields: Scan Profile, Scan Targets Strategy, Task Timeout, Poll Interval, Cleanup After Scan
- `gvm_scan/project_settings.py` mirrors the recon/agentic settings pattern (fetch from webapp API, fall back to defaults)
- Defaults served via the orchestrator `/defaults` endpoint, using `importlib` to avoid a module name collision
- 5 new Prisma schema fields (`gvm_scan_config`, `gvm_scan_targets`, `gvm_task_timeout`, `gvm_poll_interval`, `gvm_cleanup_after_scan`)
Changed
- Webapp Dockerfile — embedded the Prisma CLI in the production image; the entrypoint now runs `prisma db push` automatically on startup, eliminating the separate `webapp-init` container
- Dev Compose — `docker-compose.dev.yml` now runs `prisma db push` before `npm run dev` to ensure the schema is always in sync
- Docker Compose — removed the `webapp-init` service and `webapp_prisma_cache` volume; the webapp handles its own schema migration
Removed
- `webapp-init` service — replaced by automatic migration in the webapp entrypoint (both production and dev modes)
- `gvm_scan/params.py` — hardcoded GVM settings replaced by per-project `project_settings.py`
1.1.0 - 2026-02-08
Added
- Attack Path System — agent now supports dynamic attack path selection with two built-in paths:
- CVE Exploit — automated Metasploit module search, payload configuration, and exploit execution
- Brute Force Credential Guess — service-level brute force with configurable wordlists and max attempts per service
- Agent Guidance — send real-time steering messages to the agent while it works, injected into the system prompt before the next reasoning step
- Agent Stop & Resume — stop the agent at any point and resume from the last LangGraph checkpoint with full context preserved
- Project Creation UI — full frontend project form with all configurable settings sections:
- Naabu (port scanner), Httpx (HTTP prober), Katana (web crawler), GAU (passive URLs), Kiterunner (API discovery), Nuclei (vulnerability scanner), and agent behavior settings
- Agent Settings in Frontend — transferred agent configuration parameters from the hardcoded `params.py` to PostgreSQL, editable via the webapp UI
- Metasploit Progress Streaming — HTTP progress endpoint (port 8013) for real-time MSF command tracking with ANSI escape code cleaning
- Metasploit Session Auto-Reset — `msf_restart()` MCP tool for a clean msfconsole state; auto-reset on first use per chat session
msf_restart()MCP tool for clean msfconsole state; auto-reset on first use per chat session - WebSocket Integration — real-time bidirectional communication between frontend and agent orchestrator
- Markdown Chat UI — react-markdown with syntax highlighting for agent chat messages
- Smart Auto-Scroll — chat only auto-scrolls when user is at the bottom of the conversation
- Connection Status Indicator — color-coded WebSocket connection status (green/red) in the chat interface
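The ANSI escape code cleaning mentioned for Metasploit progress streaming can be sketched with a single regex. This pattern covers CSI sequences (colors, cursor moves), which is the bulk of what msfconsole emits; the shipped cleaner may be broader.

```python
import re

ANSI_CSI = re.compile(r"\x1b\[[0-9;]*[A-Za-z]")

def strip_ansi(text: str) -> str:
    """Remove ANSI CSI escape sequences before streaming MSF output."""
    return ANSI_CSI.sub("", text)

print(strip_ansi("\x1b[4mmsf6\x1b[0m exploit(\x1b[31mhandler\x1b[0m) > run"))
# → msf6 exploit(handler) > run
```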
Changed
- Unified Docker Compose — replaced per-module `.env` files and `start.sh`/`stop.sh` scripts with a single root `docker-compose.yml` and `docker-compose.dev.yml` for full-stack orchestration
- Settings Source of Truth — migrated all recon and agent settings from the hardcoded `params.py` to PostgreSQL via Prisma ORM, fetched at runtime via the webapp API
- Recon Pipeline Improvements — multi-level improvements across all recon modules for reliability and accuracy
- Orchestrator Model Selection — fixed model selection logic in the agent orchestrator
- Frontend Usability — unified RedAmon primary crimson color (#d32f2f), styled message containers with ghost icons and gradient backgrounds, improved markdown heading and list spacing
- Environment Configuration — added a root `.env.example` with all required keys; NVD_API_KEY and Neo4j credentials forwarded from recon-orchestrator to spawned containers
- Webapp Header — replaced the Crosshair icon with a custom logo.png image, bumped the logo text size
Fixed
- Double Approval Dialog — fixed duplicate approval confirmation with ref-based state tracking
- Orchestrator Model Selection — corrected model selection logic when switching between AI providers