Samkit-shah/SecScout


SecScout

What is SecScout?

As autonomous agents and AI-powered security systems become the new standard for pentesting and security reviews, one bottleneck remains constant: they need a deep, structured understanding of the target before they can act.

SecScout solves the cold-start problem.

Point SecScout at a URL and it autonomously:

  • Crawls the web application (login, forms, SPA routes, authenticated pages)
  • Discovers every API endpoint and operation via real network traffic capture
  • Infers logical dependencies between operations (what must be called before what)
  • Computes attack paths — chains of operations from entry point to sensitive resource
  • Outputs a structured context package your security agent can immediately consume

Think of SecScout as the recon and context layer that runs before your security agent does anything. Instead of your agent blindly poking at a target, it starts with a complete, AI-enriched map of the surface.
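The attack-path step above can be pictured as a search over the inferred dependency graph: each operation points to the operations it unlocks, and a path from the entry point to a sensitive resource is a chain of calls an attacker could follow. The sketch below is illustrative only — the `deps` structure, operation names, and `attack_paths` function are hypothetical and not SecScout's actual API.

```python
from collections import deque

def attack_paths(deps, entry, target):
    """Enumerate operation chains from an entry point to a sensitive
    resource by breadth-first search over a dependency graph.

    deps maps each operation to the operations it unlocks
    (hypothetical structure; SecScout's real format may differ).
    """
    paths, queue = [], deque([[entry]])
    while queue:
        path = queue.popleft()
        last = path[-1]
        if last == target:
            paths.append(path)
            continue
        for nxt in deps.get(last, []):
            if nxt not in path:  # skip cycles
                queue.append(path + [nxt])
    return paths

# Hypothetical surface: login unlocks the profile page and the admin
# panel, and the admin panel unlocks a user-data export endpoint.
deps = {
    "POST /login": ["GET /profile", "GET /admin"],
    "GET /admin": ["GET /admin/export"],
}
print(attack_paths(deps, "POST /login", "GET /admin/export"))
# [['POST /login', 'GET /admin', 'GET /admin/export']]
```

BFS yields shortest chains first, which is a natural ordering for a downstream agent that wants the cheapest route to a sensitive resource.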

Branches Overview

The repository is structured around two primary branches that explore different methodologies:

1. agentic Branch

This branch uses an agent-based (agentic) approach.

  • The AI operates iteratively, acting as an autonomous agent that explores the target, plans its next steps, and executes them over multiple turns.
  • Ideal for complex logic flows where discovering one dependency dynamically uncovers the path to the next (e.g., multi-step authentication processes and complex state manipulation).
  • Leverages dynamic planning, giving the system flexibility to adapt to new findings as the scan progresses.
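The multi-turn loop described above can be sketched as a plan-act-observe cycle. Everything here is a stand-in: `plan_next_step` and `execute` are stubbed placeholders for the branch's real planner (an LLM call) and executor (a crawler/HTTP step), and the route names are invented for illustration.

```python
def plan_next_step(state, findings):
    # Stub planner: visit the next unexplored route; None means stop.
    # In the real agentic branch this would be an LLM planning call.
    return state["frontier"].pop(0) if state["frontier"] else None

def execute(route):
    # Stub executor: pretend "/" links to "/login" and "/api".
    links = {"/": ["/login", "/api"]}
    return {"route": route, "new_routes": links.get(route, [])}

def agentic_scan(target, max_turns=10):
    """Minimal plan-act-observe loop: each turn, the agent picks an
    action based on everything found so far, executes it, and feeds
    the observation back into its state for the next turn."""
    findings = []
    state = {"target": target, "frontier": ["/"]}
    for _ in range(max_turns):
        action = plan_next_step(state, findings)
        if action is None:
            break
        observation = execute(action)
        findings.append(observation)
        # New discoveries expand what the agent can explore next.
        state["frontier"].extend(observation["new_routes"])
    return findings
```

The key property is the feedback edge: each observation can change what the planner does next, which is exactly what the one-shot approach below trades away.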

2. llm-oneshot Branch

This branch focuses on a one-shot LLM approach.

  • Replaces the multi-turn agent interaction with single, comprehensive prompts (one-shot inference).
  • Relies heavily on pre-processing, grouping context, and extracting heuristics automatically so that the LLM is provided with everything it needs in a single request.
  • Designed for scenarios where speed, deterministic behavior, and lower API token consumption are prioritized over dynamic iteration.
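As a rough illustration of the one-shot idea, the sketch below assembles pre-processed context into a single prompt string. The prompt template, section names, and `build_oneshot_prompt` helper are all hypothetical — the branch's real preprocessing and prompt format may look quite different.

```python
def build_oneshot_prompt(endpoints, heuristics):
    """Pack all pre-extracted context (endpoints plus heuristics)
    into one comprehensive prompt, so a single LLM request can
    replace the multi-turn agent loop."""
    sections = [
        "You are analyzing a web application. Infer the logical "
        "dependencies between the operations listed below.",
        "## Endpoints",
        *(f"- {method} {path}" for method, path in endpoints),
        "## Heuristics",
        *(f"- {h}" for h in heuristics),
    ]
    return "\n".join(sections)

prompt = build_oneshot_prompt(
    endpoints=[("POST", "/login"), ("GET", "/admin")],
    heuristics=["/admin responds 401 without an authenticated session"],
)
```

Because the prompt is a pure function of the captured data, runs are repeatable and cost a single inference call — the speed and token-consumption advantages the branch is designed for.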
