Plugin dependencies are installed in Docker containers at runtime. For VSCode IntelliSense to work properly with plugin imports, run:

```bash
./install-dev-dependencies.sh
```

This script:

- Merges dependencies from both `PenPal/app/client/package.json` and `PenPal/app/server/package.json`
- Installs all plugin dependencies from `Plugins/*/client/npm-dependencies.txt` files
- Creates a root-level `node_modules` directory with all dependencies for development

Run this script when:

- Adding new plugin dependencies
- Plugin `npm-dependencies.txt` files change
- VSCode can't resolve imports from plugins or the main application

Note: This only affects development - production containers use their own isolated `node_modules`.
PenPal is an all-in-one automation and reporting tool that enables cybersecurity engineers to do more thorough work and produce higher-quality reports by automating many of the most tedious tasks in penetration testing and red teaming. It is built on a pluggable architecture that allows many tools to integrate seamlessly into a structured, opinionated database schema. This enables a consistent approach to targeting, supporting trigger-based automations that perform actions when a condition occurs or on demand.
- Core API for data standardization (Plugin)
- Customers (can have many projects)
- Projects
- Hosts
- Networks (have many hosts)
- Services (ports, etc)
- Vulnerabilities
- Credentials
- Files
- Notes
- Audit trails
- Centralized Job Management System
- Real-time job tracking and monitoring via WebSocket subscriptions
- Multi-stage job support with progress tracking
- Automatic job cleanup and status management
- Web UI for job visualization and filtering
- Plugin integration via Jobs API
- Live navbar job counter with spinning icon for active jobs
- ScanQueue Plugin - Bandwidth Management
- Sequential scan execution to prevent bandwidth conflicts
- Smart job creation with multi-stage progress tracking
- MQTT-triggered scan serialization for network stability
- Eliminates false negatives from concurrent scanning
- User Interface
- Pluggable Dashboard
- Projects Summary Page
- Jobs Monitoring Page with real-time WebSocket updates
- Live job counter in navigation bar
- Project Details Page
- Hosts table with vulnerability counts
- Services table with enrichment and vulnerability counts
- Vulnerabilities dashboard with severity distribution
- Vulnerabilities table with filtering and status management
- Notetaking
- Test Range management interface
- DataStore abstraction layer
- DataStore Adapters
- Mongo Adapter
- Postgres Adapter (Plugin)
- Grepable Filesystem Adapter (Plugin)
- S3 Adapter
- MinIO (Plugin)
- Amazon S3 (Plugin)
- Docker support for plugins
- Report generation
- Ghostwriter (Plugin)
PenPal features an extensible service enrichment architecture that allows plugins to add rich metadata to discovered services. This creates a comprehensive intelligence view by layering data from multiple cybersecurity tools.
- Service Discovery: Tools like Nmap discover services (IP:port combinations)
- Enrichment Plugins: Additional tools (HttpX, etc.) analyze services and add metadata
- Unified View: All enrichment data is displayed in a rich, extensible UI
- Plugin Extensibility: New plugins can register custom display components
- Nmap: Service fingerprinting, version detection, OS detection
- Service names, product versions, banners
- Operating system detection
- Service fingerprints and additional info
- HttpX: HTTP service analysis
- HTTP status codes, content types, page titles
- Technology stack detection (frameworks, servers, etc.)
- Content length, response headers
- Clickable URLs with security validation
- Gobuster: Directory and file enumeration
- Directory discovery with status codes
- File enumeration results
- Wordlist-based scanning with SecLists integration
- Gowitness: Website screenshot capture
- Automated screenshot capture for HTTP services
- Visual documentation of discovered web applications
Services Tab Structure:
- List View: Overview of all services with enrichment count badges
- Enrichments View: Detailed plugin data with custom rich displays
- Graph View: Network topology visualization (coming soon)
Rich Display Components:
- HttpX: Clickable URLs, color-coded HTTP status, technology chips
- Nmap: Service information, version details, fingerprint data
- Default Display: Automatic fallback for any plugin enrichment
Real-time Updates:
- Services UI polls every 15 seconds for new enrichments
- Automatic refresh when new scan data becomes available
- Live enrichment count indicators
The enrichment system is designed for easy extension with the new CoreAPI Enrichment Functions:
```javascript
// ✅ NEW: Simple enrichment API (recommended)
const enrichment_updates = results.map((result) => ({
  host: result.host, // IP address from tool
  port: result.port, // Port number from tool
  ip_protocol: "TCP", // Protocol (TCP/UDP)
  project_id: project_id, // Required for project isolation
  enrichment: {
    plugin_name: "YourPlugin", // Required for GraphQL resolution
    url: result.url, // Tool-specific data
    status_code: result.status_code,
    tech: result.tech,
    // ... other tool-specific fields
  },
}));

// Add enrichments using CoreAPI
const result = await PenPal.API.Services.AddEnrichments(enrichment_updates);
console.log(`Successfully added ${result.accepted.length} enrichments`);

// Client-side: Register custom display
import YourEnrichmentDisplay from "./components/your-enrichment-display.jsx";
PenPal.API.registerEnrichmentDisplay("YourPlugin", YourEnrichmentDisplay);
```

Key Benefits of New API:
- Automatic Service Matching: No need to manually find and match services
- Atomic Operations: Thread-safe enrichment updates with MongoDB atomic operators
- Natural Identifiers: Use host/port/protocol that tools already provide
- Error Handling: Detailed success/failure reporting with rejection reasons
- Project Isolation: Built-in multi-project support
📖 Full Documentation: See Plugins/CoreAPI/README-Enrichment-API.md for complete API reference, migration guide, and best practices.
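The detailed success/failure reporting can be consumed with a small helper. This is a sketch: the `accepted` array appears in the example above, while the `rejected` array and its `reason` field are assumptions based on the API's "rejection reasons" reporting.

```javascript
// Sketch: summarize an AddEnrichments result. The `rejected` shape (with a
// `reason` field) is an assumption, not a documented part of the CoreAPI.
const summarizeEnrichmentResult = (result) => {
  const { accepted = [], rejected = [] } = result;
  return {
    accepted: accepted.length,
    rejected: rejected.length,
    reasons: rejected.map((r) => r.reason),
  };
};

// Example usage with a mocked result object
const summary = summarizeEnrichmentResult({
  accepted: [{ host: "10.0.0.5", port: 80 }],
  rejected: [{ host: "10.0.0.9", port: 81, reason: "no matching service" }],
});
console.log(summary.accepted, summary.rejected); // counts for logging
```

A helper like this keeps plugin logging consistent when some enrichments fail to match a service.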
Enrichment plugins automatically respond to service discovery events:
```javascript
// Subscribe to new services from other plugins
await MQTT.Subscribe(
  PenPal.API.MQTT.Topics.New.Services,
  async ({ service_ids }) => {
    const services = await PenPal.API.Services.GetMany(service_ids);
    // Filter for relevant services and enrich them
    await enrichServices(services);
  }
);
```

This creates an intelligent service discovery chain where each plugin builds upon the discoveries of others, producing comprehensive service intelligence automatically.
PenPal includes a comprehensive vulnerability management system that integrates with security scanning tools to track, manage, and report vulnerabilities across your infrastructure.
- Unified Vulnerability Model: Standardized data model for vulnerabilities across all plugins
- Severity Management: CRITICAL, HIGH, MEDIUM, LOW, INFO severity levels
- Status Tracking: NEW, CONFIRMED, FALSE_POSITIVE, MITIGATED status workflow
- CVE Integration: Automatic CVE ID extraction and tracking
- CVSS Scoring: Support for CVSS scores (0.0-10.0)
- Host/Service Relationships: Link vulnerabilities to affected hosts and services
- Project Isolation: Multi-project vulnerability tracking
- Audit Trail: Complete change history with Annotatable and Auditable interfaces
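The relationship between CVSS scores and severity levels can be illustrated with the standard CVSS v3.1 qualitative rating scale. This helper is a sketch for illustration, not part of the PenPal API; mapping a score of exactly 0.0 to INFO is an assumption.

```javascript
// Sketch: map a CVSS score (0.0-10.0) to PenPal's severity levels using the
// CVSS v3.1 qualitative rating scale. Not part of the PenPal API.
const cvssToSeverity = (score) => {
  if (score < 0 || score > 10) throw new RangeError("CVSS score must be 0.0-10.0");
  if (score === 0) return "INFO"; // assumption: 0.0 ("None") maps to INFO
  if (score < 4.0) return "LOW"; // 0.1-3.9
  if (score < 7.0) return "MEDIUM"; // 4.0-6.9
  if (score < 9.0) return "HIGH"; // 7.0-8.9
  return "CRITICAL"; // 9.0-10.0
};

console.log(cvssToSeverity(7.5)); // HIGH
```

Note this is consistent with the example later in this README, where a CVSS score of 7.5 accompanies a HIGH severity.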
The project view includes a comprehensive vulnerability dashboard:
- Severity Distribution: Visual breakdown of vulnerabilities by severity level
- Status Overview: Current status of all vulnerabilities
- Statistics: Total counts, recent discoveries, and trends
- Quick Filters: Filter by severity, status, or discovery plugin
Advanced table view with:
- Sorting: Sort by severity, status, discovery date, CVE IDs
- Filtering: Filter by severity, status, affected hosts/services, discovery plugin
- Status Management: Update vulnerability status (confirm, mark false positive, mitigate)
- Details View: Expandable rows showing full vulnerability details
- References: Links to vulnerability documentation and advisories
- Nuclei: Automated vulnerability scanning
- Template-based vulnerability detection
- Automatic CVE extraction from templates
- Severity mapping to PenPal's vulnerability model
- MQTT-triggered scanning on HTTP service discovery
- Configurable severity filters and tag exclusions
- Project-level enable/disable configuration
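The automatic CVE extraction described above can be sketched with a simple pattern match over template metadata. This is an illustration under the assumption that CVE IDs appear as standard `CVE-YYYY-NNNN...` strings; the Nuclei plugin's actual extraction code may differ.

```javascript
// Sketch: extract unique, normalized CVE IDs from Nuclei template metadata.
// Illustrative only - the plugin's actual implementation may differ.
const extractCveIds = (text) => {
  const matches = text.match(/CVE-\d{4}-\d{4,}/gi) ?? [];
  // Uppercase and deduplicate while preserving first-seen order
  return [...new Set(matches.map((m) => m.toUpperCase()))];
};

console.log(extractCveIds("Fixes CVE-2023-12345 and cve-2023-12345, see CVE-2021-44228"));
```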
Plugins can create vulnerabilities using the CoreAPI:
```javascript
// Create vulnerability from scan results
const vulnerability = {
  title: "SQL Injection in Login Form",
  description: "The login form is vulnerable to SQL injection attacks",
  severity: "HIGH",
  cveIds: ["CVE-2023-12345"],
  cvssScore: 7.5,
  affectedHostIds: [host_id],
  affectedServiceIds: [service_id],
  discoveredBy: "Nuclei",
  project: project_id,
  status: "NEW",
  references: ["https://example.com/advisory"],
};

const result = await PenPal.API.Vulnerabilities.Insert(vulnerability);
```

Vulnerabilities are automatically linked to hosts and services:
- Host View: Shows vulnerability count badges and filtering
- Service View: Displays vulnerabilities affecting specific services
- Vulnerability Details: Shows all affected hosts and services
- Cross-referencing: Navigate between vulnerabilities and affected assets
Complete GraphQL API for vulnerability management:
```graphql
# Query vulnerabilities
query GetVulnerabilities($projectId: ID!) {
  getVulnerabilitiesByProjectID(project: $projectId) {
    id
    title
    severity
    status
    cveIds
    affectedHosts {
      ip_address
    }
    affectedServices {
      port
    }
  }
}

# Create vulnerability
mutation CreateVulnerability($vulnerability: VulnerabilityInput!) {
  createVulnerability(vulnerability: $vulnerability) {
    id
    title
  }
}

# Update vulnerability status
mutation UpdateVulnerability($vulnerability: VulnerabilityUpdateInput!) {
  updateVulnerability(vulnerability: $vulnerability) {
    id
    status
  }
}
```

PenPal includes a Test Range plugin for managing vulnerable containers and testing environments. This enables security professionals to deploy and manage vulnerable applications for testing and validation purposes.
- Container Management: Start, stop, restart, and remove containers
- Vulhub Integration: Deploy pre-configured vulnerable applications from Vulhub
- Running Containers: Real-time monitoring of active test containers
- Recent Containers: History of recently used containers
- Available Containers: Browse and deploy from available container catalog
- Container Information: Detailed container metadata and status
- Vulnerability Validation: Test vulnerability scanners against known vulnerable applications
- Training Environments: Deploy vulnerable applications for security training
- Tool Testing: Validate security tools against controlled test environments
- Proof of Concept: Demonstrate vulnerabilities in isolated environments
Access the Test Range interface at http://localhost:3000/testrange:
- Running Tab: View and manage currently running containers
- Recent Tab: Browse recently used containers
- Available Tab: Discover and deploy new vulnerable containers
- Real-time Updates: 5-second polling for container status updates
- Base: Foundation plugin providing core services and configuration UI
- CoreAPI: Data standardization, vulnerability management, and API layer
- DataStore: Database abstraction layer with adapter support
- DataStoreMongoAdapter: MongoDB adapter for DataStore
- Docker: Container orchestration and image management
- MQTT: Inter-plugin messaging and event system
- JobsTracker: Centralized job management with real-time monitoring
- ScanQueue: Bandwidth management and sequential scan execution
- FileStore: File storage abstraction layer
- FileStoreMinIOAdapter: MinIO adapter for FileStore
- Ping: ICMP ping sweep for host discovery
- Nmap: Network discovery and port scanning with service detection
- Rustscan: Fast port scanning capabilities
- HttpX: HTTP service discovery and enrichment
- Nuclei: Automated vulnerability scanning with template support
- Gobuster: Directory and file enumeration on HTTP services
- Gowitness: Website screenshot capture and analysis
- TestRange: Vulnerable container management for testing environments
- Tester: Plugin testing and validation framework
- E2ETesting: End-to-end testing infrastructure (foundation)
- Burpsuite for vulnerability scanning
- Eyeballer for searching screenshots for interesting things
- Changeme for default password checking
- Additional vulnerability scanners (Burp Suite, OWASP ZAP, etc.)
- Credential management and storage
- Report generation plugins
- Integration with external vulnerability databases
PenPal depends only on Docker and docker-compose. It works on macOS and may work on Linux; Windows is not currently supported.
A number of services and endpoints are currently interesting/useful. Run PenPal by executing `dev.sh`; any plugins added to the `Plugins` folder are automatically mounted into the container by the docker-compose scripts. Here's a list of interesting URLs:
- Web UI - http://localhost:3000
- Projects - http://localhost:3000/projects
- Jobs Monitor - http://localhost:3000/jobs (with real-time WebSocket updates)
- Test Range - http://localhost:3000/testrange
- Configuration - http://localhost:3000/configuration
- GraphQL Studio - http://localhost:3001/graphql
- GraphQL WebSocket - ws://localhost:3001/graphql (subscriptions)
PenPal includes a comprehensive Jobs API for managing long-running tasks across all plugins. This system provides real-time monitoring, progress tracking, and automatic cleanup of background jobs.
- Centralized Management: All plugin jobs are tracked in one place
- Real-time Monitoring: Live updates with 500ms polling
- Multi-stage Support: Complex jobs can be broken into trackable stages
- Progress Tracking: Visual progress bars and percentage completion
- Standardized Status: Validated status constants prevent inconsistencies
- Automatic Cleanup: Stale jobs are automatically marked as cancelled
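The automatic cleanup behavior can be sketched as a filter over jobs that have not been updated recently. The 5-minute threshold and field names here are assumptions for illustration; the actual logic lives in the JobsTracker plugin, and the lowercase status strings match the constants documented below.

```javascript
// Sketch: find active jobs that have gone stale so they can be marked
// cancelled. Threshold and field names are assumptions, not PenPal's code.
const STALE_MS = 5 * 60 * 1000; // assumed 5-minute staleness threshold

const findStaleJobs = (jobs, now = Date.now()) =>
  jobs.filter(
    (job) =>
      (job.status === "pending" || job.status === "running") &&
      now - job.updated_at > STALE_MS
  );

// Usage: any job returned here would be updated to status "cancelled"
const stale = findStaleJobs([
  { id: 1, status: "running", updated_at: Date.now() - 10 * 60 * 1000 },
  { id: 2, status: "running", updated_at: Date.now() },
  { id: 3, status: "done", updated_at: 0 },
]);
console.log(stale.map((j) => j.id)); // only job 1 is active and stale
```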
Always use the standardized status constants to ensure consistency across plugins:
```javascript
// ✅ CORRECT - Use status constants
PenPal.Jobs.Status.PENDING; // "pending" - Job is queued/waiting
PenPal.Jobs.Status.RUNNING; // "running" - Job is actively executing
PenPal.Jobs.Status.DONE; // "done" - Job completed successfully
PenPal.Jobs.Status.FAILED; // "failed" - Job failed with error
PenPal.Jobs.Status.CANCELLED; // "cancelled" - Job was cancelled

// Check if job is completed
const isFinished = PenPal.Jobs.CompletedStatuses.includes(job.status);

// ❌ WRONG - Don't use hardcoded strings
status: "completed"; // Invalid - use PenPal.Jobs.Status.DONE
status: "finished"; // Invalid - use PenPal.Jobs.Status.DONE
```

- Filtering & History: Filter by active/recent/all jobs with pagination
- Runtime Tracking: See how long jobs have been running
- Completion Times: Track when jobs finished or were cancelled
The Jobs API is available to all plugins through the PenPal.Jobs object:
```javascript
// Create a simple job
const job = await PenPal.Jobs.Create({
  name: "Network Scan",
  statusText: "Starting network scan",
  progress: 0,
});

// Update job progress
await PenPal.Jobs.UpdateProgress(job.id, 50);

// Complete the job
await PenPal.Jobs.Update(job.id, {
  progress: 100,
  status: PenPal.Jobs.Status.DONE,
  statusText: "Scan complete",
});
```

For complex operations, jobs can include multiple stages:
```javascript
const job = await PenPal.Jobs.Create({
  name: "Comprehensive Security Scan",
  stages: [
    {
      name: "Port Scan",
      statusText: "Scanning ports",
      progress: 0,
      status: PenPal.Jobs.Status.PENDING,
    },
    {
      name: "Service Detection",
      statusText: "Detecting services",
      progress: 0,
      status: PenPal.Jobs.Status.PENDING,
    },
    {
      name: "Vulnerability Assessment",
      statusText: "Checking vulnerabilities",
      progress: 0,
      status: PenPal.Jobs.Status.PENDING,
    },
  ],
});

// Update individual stages
await PenPal.Jobs.UpdateStage(job.id, 0, {
  progress: 100,
  status: PenPal.Jobs.Status.DONE,
  statusText: "Port scan complete",
});
```

Access the Jobs Monitor at http://localhost:3000/jobs to:
- View all running and completed jobs in real-time
- Filter jobs by status (Active, Recent, All)
- See detailed progress for multi-stage jobs
- Track job runtime and completion times
- Hide cancelled jobs with toggle option
- Browse job history with pagination
Security tools like Nmap and Rustscan use the Jobs API to provide visibility into scan progress:
```javascript
// Example from Nmap plugin
export const start_detailed_hosts_scan = async (hosts) => {
  const job = await PenPal.Jobs.Create({
    name: `Detailed Host Scan for ${hosts.length} hosts`,
    statusText: "Preparing detailed scan",
    progress: 0,
    stages: [
      {
        name: "Port Scan",
        statusText: "Scanning ports",
        progress: 0,
        status: PenPal.Jobs.Status.PENDING,
      },
      {
        name: "Service Detection",
        statusText: "Detecting services",
        progress: 0,
        status: PenPal.Jobs.Status.PENDING,
      },
      {
        name: "OS Detection",
        statusText: "Identifying operating systems",
        progress: 0,
        status: PenPal.Jobs.Status.PENDING,
      },
    ],
  });
  performScan(hosts, job.id);
  return job.id;
};
```

✅ CRITICAL: Preventing Bandwidth Conflicts via Scan Serialization

The ScanQueue plugin solves a critical infrastructure problem where multiple security tools try to scan simultaneously, causing bandwidth conflicts, false-negative timeouts, and network congestion.
During large network assessments, multiple plugins can trigger scans simultaneously:
- Nmap discovers services via MQTT events
- HttpX immediately tries to scan discovered HTTP services
- Other tools respond to the same discovery events
- Network bottleneck occurs when all tools scan concurrently
- False negatives appear as legitimate services timeout due to network saturation
ScanQueue provides sequential scan execution with comprehensive job tracking:
```javascript
// ✅ CORRECT: Queue scan operations with descriptive names
PenPal.ScanQueue.Add(
  async () => await performScanOperation(args),
  "HttpX Scan (15 services, Project: abc123)"
);

// Operations run sequentially, not concurrently
PenPal.ScanQueue.Add(
  async () => await performNmapScan(hosts),
  "Nmap Host Scan (3 hosts), Project: abc123"
);
```

Sequential Execution: Only one scan operation runs at a time, eliminating bandwidth conflicts
Smart Job Creation: Creates JobsTracker jobs only when queuing occurs (2+ operations), avoiding unnecessary overhead
Descriptive Progress: Rich job names show exactly what's being scanned and for which project
Multi-stage Tracking: Each queued operation becomes a job stage with individual progress tracking
Busy Progress Indication: Uses animated stripe progress bars for operations without detailed progress
Keep-alive System: Prevents job timeout cancellation with periodic 5-second updates
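The keep-alive behavior can be sketched as a periodic timer that touches the job so cleanup does not cancel it. The 5-second interval comes from the text above; the `statusText` payload is an assumption, and `PenPal.Jobs.Update` is the same call used in the Jobs API examples elsewhere in this README.

```javascript
// Sketch: keep a queued job alive with periodic updates. Illustrative only;
// the update payload is an assumption, not ScanQueue's actual code.
const keepAliveTick = (jobsApi, jobId) =>
  jobsApi.Update(jobId, { statusText: "Still processing..." });

const startKeepAlive = (jobsApi, jobId, intervalMs = 5000) => {
  const timer = setInterval(() => keepAliveTick(jobsApi, jobId), intervalMs);
  return () => clearInterval(timer); // call the returned function to stop
};

// Usage with a stand-in jobs API (real code would pass PenPal.Jobs)
const fakeJobs = {
  calls: [],
  Update(id, update) {
    this.calls.push([id, update]);
  },
};
keepAliveTick(fakeJobs, "job-1");
console.log(fakeJobs.calls.length); // 1
```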
HttpX Integration Example:
```javascript
const BatchEnqueue = (BatchArgs) => {
  const totalServices = BatchArgs.reduce(
    (sum, [{ service_ids }]) => sum + service_ids.length,
    0
  );
  const queueName = `HttpX Scan (${totalServices} services, Project: ${project})`;
  PenPal.ScanQueue.Add(
    async () => await start_http_service_scan_batch(BatchArgs),
    queueName
  );
};

// Works with BatchFunction for efficient event processing
await MQTT.Subscribe(
  PenPal.API.MQTT.Topics.New.Services,
  PenPal.Utils.BatchFunction(BatchEnqueue, 1000)
);
```

Nmap Integration Example:
```javascript
const queueHostsScan = (args) => {
  const { project, host_ids } = args;
  const queueName = `Nmap Detailed Host Scan (${host_ids.length} hosts), Project: ${project}`;
  PenPal.ScanQueue.Add(
    async () => await start_detailed_hosts_scan(args),
    queueName
  );
};
```

ScanQueue jobs appear in the JobsTracker UI with rich visual feedback:
Progress Bar Types:
- Orange Striped Bars: "Busy" operations (100% with animated stripes)
- Blue Progress Bars: Real progress with actual percentages
- Green Progress Bars: Completed operations
Stage Status:
- "Processing...": Currently executing with busy stripes
- "Pending": Waiting in queue with 0% progress
- "Completed": Finished successfully with green bar
Reliability: Eliminates false negatives caused by bandwidth saturation
Consistency: Same scan targets produce repeatable results
Visibility: Clear queue progress and operation tracking in web UI
Performance: Optimal network utilization without overwhelming infrastructure
Error Isolation: Failed operations don't affect subsequent queued items
To migrate existing plugins to use ScanQueue:
Step 1: Add Dependency
```json
{
  "name": "YourPlugin",
  "dependsOn": ["CoreAPI@0.1.0", "ScanQueue@0.1.0"]
}
```

Step 2: Wrap Scan Functions
```javascript
// ❌ OLD: Direct execution
await performScan(args);

// ✅ NEW: Queue execution
PenPal.ScanQueue.Add(
  async () => await performScan(args),
  "Descriptive Operation Name"
);
```

The ScanQueue plugin is essential for any PenPal deployment where multiple security tools run concurrently, ensuring reliable, repeatable scanning results without bandwidth conflicts.
PenPal includes a BatchFunction utility (PenPal.Utils.BatchFunction) for batching rapid function calls together, essential for handling high-frequency MQTT events during large scans without overwhelming system resources.
During large network scans, plugins can receive hundreds of rapid MQTT events as services are discovered:
// ❌ Problem: Each event triggers separate processing
```javascript
// ❌ Problem: Each event triggers separate processing
await MQTT.Subscribe(PenPal.API.MQTT.Topics.New.Services, ({ service_ids }) => {
  // This fires 100+ times during a large scan
  processServices(service_ids); // Creates many jobs, containers, etc.
});
```

BatchFunction collects rapid calls and processes them together after a timeout period:
```javascript
// ✅ Solution: Batch events together with 5-second timeout
await MQTT.Subscribe(
  PenPal.API.MQTT.Topics.New.Services,
  PenPal.Utils.BatchFunction(processBatchedServices, 5000)
);

const processBatchedServices = async (batchedArgs) => {
  console.log(`Processing ${batchedArgs.length} batched events`);

  // Deduplicate service IDs across all events
  const allServiceIds = new Set();
  for (const [{ service_ids }] of batchedArgs) {
    service_ids.forEach((id) => allServiceIds.add(id));
  }

  // Process all unique services in one operation
  await processServices(Array.from(allServiceIds));
};
```

- Collect Arguments: Each function call adds its arguments to an internal array
- Reset Timer: Each new call resets the timeout timer
- Execute Handler: After timeout period with no new calls, executes handler with all batched arguments
- Clear Batch: Resets for the next batch cycle
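The mechanics above can be sketched as a minimal implementation. This is an illustration of the technique, not PenPal's actual `BatchFunction` code:

```javascript
// Sketch: collect arguments, reset the timer on each call, then hand the
// whole batch to the handler. Illustrative only - not PenPal's implementation.
const BatchFunction = (handler, timeoutMs) => {
  let batch = [];
  let timer = null;
  return (...args) => {
    batch.push(args); // 1. collect this call's arguments
    if (timer !== null) clearTimeout(timer); // 2. reset the timeout timer
    timer = setTimeout(() => {
      const toProcess = batch;
      batch = []; // 4. clear for the next batch cycle
      timer = null;
      handler(toProcess); // 3. execute handler with all batched argument sets
    }, timeoutMs);
  };
};
```

Each element the handler receives is the argument list of one original call, which is why the examples in this README destructure `batchedArgs` entries as arrays.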
```javascript
const batchedFunction = PenPal.Utils.BatchFunction(handler, timeoutMs);
```

Parameters:

- `handler` - Function that receives an array of batched argument sets
- `timeoutMs` - Timeout in milliseconds to wait after the last call before executing
HttpX Plugin Example - Before and after BatchFunction implementation:
Before (Individual Processing):
- 🔴 200+ separate Docker containers spawned during large scans
- 🔴 200+ individual jobs created
- 🔴 Overwhelming system resources and MQTT broker
- 🔴 Processing duplicate service IDs multiple times
After (Batched Processing):
- ✅ 1 Docker container per project with bulk service list
- ✅ 1 job per project with comprehensive progress tracking
- ✅ Automatic deduplication of service IDs
- ✅ 90%+ reduction in resource usage
Choose timeout values based on your use case:
- 1-2 seconds: Real-time operations requiring quick response
- 5-10 seconds: Service discovery and enrichment (recommended)
- 30+ seconds: Non-critical background processing
- Resource Optimization: Dramatically reduces Docker container and job creation
- Deduplication: Automatically handles duplicate data across events
- Bulk Processing: Enables efficient batch operations
- System Stability: Prevents overwhelming during scan bursts
- Better Performance: 90%+ reduction in overhead for high-frequency events
Converting existing event handlers to use BatchFunction:
```javascript
// Step 1: Modify handler to accept batched arguments
const processBatchedEvents = async (batchedArgs) => {
  for (const [originalArgs] of batchedArgs) {
    // Process each original argument set
    // Or group/deduplicate across all arguments
  }
};

// Step 2: Wrap with BatchFunction
const batchedHandler = PenPal.Utils.BatchFunction(processBatchedEvents, 5000);

// Step 3: Use in MQTT subscriptions
await MQTT.Subscribe(topic, batchedHandler);
```

The BatchFunction utility is essential for building scalable plugins that can handle the rapid event streams generated during large cybersecurity scans.
PenPal includes WebSocket-based GraphQL subscriptions for real-time updates while maintaining full Apollo Client compatibility. This enables live monitoring of jobs, scan progress, and service discoveries without polling.
- WebSocket Transport: Real-time updates via GraphQL subscriptions
- Apollo Client Compatible: Seamless integration with existing queries/mutations
- Split Link Transport: Automatic routing of subscriptions to WebSocket, queries/mutations to HTTP
- Graceful Fallback: Falls back to polling if WebSocket connection fails
- PubSub Events: Server-side event publishing for plugin communications
- GraphQL HTTP: `http://localhost:3001/graphql` (queries, mutations)
- GraphQL WebSocket: `ws://localhost:3001/graphql` (subscriptions)
- Client Auto-routing: Apollo Client automatically chooses transport based on operation type
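The auto-routing decision can be illustrated with a simplified predicate over the operation type. The real client uses Apollo's split link with the parsed operation definition; this sketch reduces it to a string check for illustration:

```javascript
// Sketch: route GraphQL operations to the right transport, mirroring the
// predicate an Apollo split link evaluates. Simplified string check only;
// the real implementation inspects the parsed operation AST.
const isSubscription = (operationText) =>
  operationText.trimStart().startsWith("subscription");

const pickTransport = (operationText) =>
  isSubscription(operationText)
    ? "ws://localhost:3001/graphql" // WebSocket for subscriptions
    : "http://localhost:3001/graphql"; // HTTP for queries and mutations

console.log(pickTransport("subscription ActiveJobsChanged { activeJobsChanged { id } }"));
```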
The Jobs UI includes real-time updates that eliminate the need for manual polling:
```javascript
// JobsCounter component in navbar shows live active job count
const { data } = useSubscription(ACTIVE_JOBS_SUBSCRIPTION, {
  onData: ({ data }) => {
    if (data?.data?.activeJobsChanged) {
      setActiveJobs(data.data.activeJobsChanged);
      setJobCount(data.data.activeJobsChanged.length);
    }
  },
  onError: (error) => {
    console.warn("Subscription failed, falling back to polling:", error);
    // Automatic fallback to polling
  },
});
```

Job Status Updates:
```graphql
subscription ActiveJobsChanged {
  activeJobsChanged {
    id
    name
    plugin
    progress
    status
    updated_at
  }
}
```

Service Discovery Events:
```graphql
subscription NewServicesDiscovered($projectId: ID!) {
  newServicesDiscovered(projectId: $projectId) {
    project_id
    services {
      id
      host_ip
      port
      protocol
      status
    }
  }
}
```

Plugins can publish real-time events using the built-in PubSub system:
```javascript
// Server-side: Publish events when data changes
export const updateJob = async (job_id, updates) => {
  const result = await PenPal.DataStore.updateOne(
    "JobsTracker",
    "Jobs",
    { id: job_id },
    updates
  );

  // Real-time notification
  if (PenPal.PubSub) {
    const updatedJob = await getJob(job_id);
    PenPal.PubSub.publish("JOB_UPDATED", { jobUpdated: updatedJob });

    // Aggregate events for efficiency
    const activeJobs = await getActiveJobs();
    PenPal.PubSub.publish("ACTIVE_JOBS_CHANGED", {
      activeJobsChanged: activeJobs,
    });
  }

  return result;
};
```

Live Job Counter: The navbar displays a real-time badge showing active job count with a spinning icon for running jobs
Instant Updates: Job status changes appear immediately across all connected clients
Service Discovery: New hosts, services, and scan results appear in real-time as they're discovered
Progress Tracking: Multi-stage job progress updates live without page refresh
- Reduced Server Load: Eliminates constant polling requests
- Instant Feedback: Updates appear immediately when events occur
- Bandwidth Efficient: Only sends data when changes happen
- Better UX: Live updates provide immediate feedback on scan progress
PenPal includes a powerful Docker Plugin that provides essential container orchestration capabilities for running cybersecurity tools in isolated environments. This plugin is fundamental for security tools like Nmap, HttpX, Rustscan, and other containerized scanners.
- Automatic Image Building: Builds Docker images from plugin contexts during startup
- Container Lifecycle Management: Start, stop, wait, and manage container execution
- Volume Management: Secure file exchange between host and containers
- Network Isolation: All containers run in the isolated `penpal_penpal` network
- Resource Management: Ephemeral containers with automatic cleanup
- Multi-tool Support: Orchestrates multiple security tools simultaneously
Plugins configure Docker settings in their plugin.js files:
```javascript
// ✅ CORRECT Docker configuration
export const settings = {
  docker: {
    name: "penpal:httpx", // Container image name
    dockercontext: `${__dirname}/docker-context`, // Build context path
  },
};

// ✅ Alternative: Use pre-built images
export const settings = {
  docker: {
    name: "penpal:nmap",
    image: "instrumentisto/nmap:latest", // Pull existing image
  },
};
```

The Docker plugin provides a standardized pattern for running security tools:
```javascript
// ✅ Standard containerized security tool execution
export const performScan = async ({ targets, project_id }) => {
  // 1. Prepare shared volume directory
  const outdir = `/penpal-plugin-share/toolname/${project_id}`;
  PenPal.Utils.MkdirP(outdir);

  // 2. Create input files on host
  const targets_file = path.join(outdir, `targets-${PenPal.Utils.Epoch()}.txt`);
  fs.writeFileSync(targets_file, targets.join("\n"));

  // 3. Define output file path
  const output_file = path.join(outdir, `results-${PenPal.Utils.Epoch()}.json`);

  // 4. Container paths match the host paths because the shared volume
  //    mounts at /penpal-plugin-share in both places
  const container_targets = targets_file;
  const container_output = output_file;

  // 5. Run containerized tool
  const result = await PenPal.Docker.Run({
    image: "penpal:httpx",
    cmd: `-l ${container_targets} -o ${container_output} -json`,
    daemonize: true, // Run in background
    network: "penpal_penpal", // Isolated network
    volume: {
      // Shared volume mount
      name: "penpal_penpal-plugin-share",
      path: "/penpal-plugin-share",
    },
  });

  // 6. Wait for completion
  const container_id = result.stdout.trim();
  await PenPal.Docker.Wait(container_id);

  // 7. Process results
  const results = fs.readFileSync(output_file, "utf8");
  return JSON.parse(results);
};
```

The Docker plugin exposes comprehensive container management APIs:
```javascript
// Container lifecycle
await PenPal.Docker.Run(options); // Create and run container
await PenPal.Docker.Start(container_id); // Start stopped container
await PenPal.Docker.Stop(container_id); // Stop running container
await PenPal.Docker.Wait(container_id); // Wait for completion

// Container operations
await PenPal.Docker.Exec({ container, cmd }); // Execute command in container
await PenPal.Docker.Copy({ container, container_file, output_file }); // Copy files

// Image management
await PenPal.Docker.Build(docker_config); // Build image from context
await PenPal.Docker.Pull({ image }); // Pull pre-built image

// Cleanup
await PenPal.Docker.RemoveContainer(container_id); // Remove container

// Advanced
await PenPal.Docker.Raw(docker_command); // Execute raw docker command
```

Network Isolation:
- All containers run in the `penpal_penpal` network
- Isolated from host network by default
- Can communicate with other PenPal services (databases, APIs)
- No direct internet access unless explicitly configured

Volume Security:

- Shared volumes use specific mount points (`/penpal-plugin-share`)
- No access to host filesystem outside mounted volumes
- Temporary files automatically cleaned up after scans
- Prevents container escape and data exfiltration
Resource Management:
- Containers are ephemeral and removed after use
- No persistent state stored in containers
- Resource limits can be enforced per container
- Automatic cleanup prevents resource exhaustion
The Docker plugin enables seamless integration of popular cybersecurity tools:
Nmap Integration:
```javascript
// Nmap plugin uses Docker for isolated network scanning
const result = await PenPal.Docker.Run({
  image: "penpal:nmap",
  cmd: `-sS -sV -O ${targets} -oX ${output_file}`,
  network: "penpal_penpal",
  volume: { name: "penpal_penpal-plugin-share", path: "/penpal-plugin-share" },
});
```

HttpX Integration:
```javascript
// HttpX plugin uses Docker for HTTP service discovery
const result = await PenPal.Docker.Run({
  image: "penpal:httpx",
  cmd: `-l ${targets_file} -json -title -tech-detect`,
  network: "penpal_penpal",
  volume: { name: "penpal_penpal-plugin-share", path: "/penpal-plugin-share" },
});
```

Rustscan Integration:
// Rustscan plugin uses Docker for fast port scanning
const result = await PenPal.Docker.Run({
image: "penpal:rustscan",
cmd: `-a ${targets} --ports ${ports} -- -sV`,
network: "penpal_penpal",
volume: { name: "penpal_penpal-plugin-share", path: "/penpal-plugin-share" },
});

PenPal plugins use multi-stage Docker builds for security and efficiency:
# ✅ Example: HttpX plugin Dockerfile
FROM golang:1.21-alpine AS builder
WORKDIR /app
RUN go install -v github.com/projectdiscovery/httpx/cmd/httpx@latest
FROM alpine:latest
RUN apk --no-cache add ca-certificates
WORKDIR /root/
COPY --from=builder /go/bin/httpx .
ENTRYPOINT ["./httpx"]

Key principles:
- Multi-stage builds to minimize final image size
- Alpine Linux base images for security and size
- Specific tool versions for reproducibility
- Minimal attack surface with only required dependencies
- Non-root execution where possible
Plugins using Docker must declare the dependency:
{
"name": "HttpX",
"version": "0.1.0",
"dependsOn": ["CoreAPI@0.1.0", "Docker@0.1.0", "JobsTracker@0.1.0"]
}

The Docker plugin automatically:
- Validates plugin Docker configurations during startup
- Builds images from `docker-context/` directories
- Pulls pre-built images if specified
- Caches built images for subsequent runs
- Reports build status and errors
This ensures all required container images are available before plugins attempt to use them.
Shared Volume Pattern:
- All plugins use the `penpal_penpal-plugin-share` volume
- Host path: `/penpal-plugin-share/`
- Container path: `/penpal-plugin-share/`
- Plugin-specific subdirectories: `/penpal-plugin-share/toolname/project_id/`
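The subdirectory convention above can be captured in a small helper; the function name is hypothetical and only illustrates the documented layout:

```javascript
// Hypothetical helper: build a plugin-specific path under the shared volume,
// following the /penpal-plugin-share/toolname/project_id/ convention.
function buildPluginSharePath(toolname, project_id, filename) {
  return `/penpal-plugin-share/${toolname}/${project_id}/${filename}`;
}
```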
File Exchange Pattern:
// ✅ Correct volume path handling
const host_path = "/penpal-plugin-share/httpx/project1/targets.txt";
const container_path = host_path; // Same path due to volume mount
// ❌ Wrong - hardcoded paths won't work
const bad_path = "/tmp/targets.txt"; // Not accessible in container

The Docker plugin integrates with PenPal's monitoring systems:
// ✅ Proper error handling with Jobs API
const job = await PenPal.Jobs.Create({
name: "HTTP Service Scan",
statusText: "Starting containerized scan",
});
try {
const result = await PenPal.Docker.Run(docker_config);
await PenPal.Docker.Wait(result.stdout.trim());
await PenPal.Jobs.Update(job.id, {
status: PenPal.Jobs.Status.DONE,
statusText: "Scan completed successfully",
});
} catch (error) {
await PenPal.Jobs.Update(job.id, {
status: PenPal.Jobs.Status.FAILED,
statusText: `Container execution failed: ${error.message}`,
});
}

The Docker plugin is essential for PenPal's microservices architecture, enabling secure, isolated execution of cybersecurity tools while maintaining seamless integration with the broader platform.
PenPal provides a sophisticated centralized logging system that automatically assigns unique colors to each plugin and ensures consistent formatting across the entire platform. This replaces manual console.log statements with a professional, maintainable logging solution.
- 🎨 Automatic Color Assignment: Each plugin gets a unique, consistent color based on plugin name hash
- 📝 Consistent Formatting: ISO 8601 timestamps and automatic `[PluginName]` prefixes in assigned colors
- 🔧 Easy Integration: File-level logger exports that can be imported anywhere within a plugin
- 🚀 Multiple Log Levels: `log`, `info`, `warn`, `error`, `debug` with appropriate colors
- ⚡ Drop-in Replacement: Simple migration from existing `console.log` statements
Before (Manual Console Logging):
console.log("[HttpX] Starting HTTP scan for 25 targets");
console.error("[HttpX] Scan failed: Connection timeout");
console.log("[+] HttpX scan completed successfully");

After (Centralized Logger):
logger.log("Starting HTTP scan for 25 targets");
logger.error("Scan failed: Connection timeout");
logger.log("Scan completed successfully");

Output:
2024-01-15T10:30:45.123Z [HttpX] Starting HTTP scan for 25 targets
2024-01-15T10:30:46.456Z [HttpX] Scan failed: Connection timeout
2024-01-15T10:30:47.789Z [HttpX] Scan completed successfully
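For intuition, here is a minimal sketch of what a `BuildLogger`-style factory could look like internally. This is an illustration under assumptions, not the real `PenPal.Utils.BuildLogger` (which also handles color assignment):

```javascript
// Illustrative sketch only: ISO 8601 timestamp plus a [PluginName] prefix.
function formatLogLine(plugin_name, msg) {
  return `${new Date().toISOString()} [${plugin_name}] ${msg}`;
}

function BuildLogger(plugin_name) {
  return {
    log: (msg) => console.log(formatLogLine(plugin_name, msg)),
    error: (msg) => console.error(formatLogLine(plugin_name, msg)),
    // info, warn, and debug would follow the same shape
  };
}
```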
1. Create File-Level Logger Export (in plugin.js):
import PenPal from "#penpal/core";
// File-level logger that can be imported by other files
export const YourPluginLogger = PenPal.Utils.BuildLogger("YourPlugin");
const YourPlugin = {
async loadPlugin() {
YourPluginLogger.log("Plugin loading started");
// ... plugin code
return { settings };
},
};
export default YourPlugin;

2. Import in Other Plugin Files:
import { YourPluginLogger as logger } from "./plugin.js";
export const performOperation = async () => {
logger.log("Starting operation");
try {
// Operation logic
logger.info("Operation completed successfully");
} catch (error) {
logger.error("Operation failed:", error.message);
}
};

- Consistent Formatting: All plugins use the same timestamp and prefix format
- Unique Colors: Easy visual identification of different plugins in logs
- Reduced Maintenance: No manual prefix management or formatting
- Better Debugging: Clear plugin attribution for all log messages
- Professional Output: Clean, consistent logging across the entire system
📖 Complete Documentation: See docs/LOGGER.md for full implementation guide, migration steps, API reference, and best practices.
Below is documentation describing how plugins should be structured and what is required. Plugins are loaded dynamically by Vite (client) and Node (server), so simply placing a plugin in the plugins/ folder is enough to get started. Use the penpal-plugin-develop.py Python script to generate a named template in the right place.
python3 penpal-plugin-develop.py --new-plugin --name MySuperCoolAwesomePlugin
Each plugin is required to have three server files: index.js, manifest.json, and plugin.js. In general, index.js registers the plugin, manifest.json describes the plugin, and plugin.js implements the plugin. The simplest possible plugin is shown in the snippets below:
File Structure:
plugins/
|-> Base/
|-> CoreAPI/
|-> YourPlugin/
| |-> install-dependencies.sh (optional shell script that will be automatically called if you need things like npm packages)
| |-> server/
| | |-> index.js
| | |-> manifest.json
| | |-> plugin.js
index.js:
// The code below is used to register a plugin (at runtime), which will then be loaded
// once the main server finishes starting up.
// Overall PenPal coordinating server code
import PenPal from "#penpal/core";
// Plugin-specific info
import Plugin from "./plugin.js";
import Manifest from "./manifest.json" with { type: "json" };
// Register the plugin
PenPal.registerPlugin(Manifest, Plugin);

manifest.json:
{
"name": "MyCoolPlugin",
"version": "0.1.0",
"dependsOn": ["AnotherPlugin@0.1.0"]
}

plugin.js:
// This defines the custom server-side code being run by the plugin. It has GraphQL schemas and resolvers
// in order to interact with the plugged application
import { types, resolvers, loaders } from "./graphql";
const settings = {};
const MyCoolPlugin = {
loadPlugin() {
// Required
return {
graphql: {
// Optional
types, // Optional
resolvers, // Optional
loaders, // Optional
},
settings, // Optional
hooks: {
// Optional
settings: {}, // Optional
postload: () => null, // Optional
startup: () => null, // Optional
},
};
},
};
export default MyCoolPlugin;

PenPal
- `registerPlugin(manifest, plugin)` - this function registers the plugin with PenPal for it to be loaded. It takes two arguments:
  - `manifest` (required) - an object containing descriptive fields about the plugin, defined in the `Manifest` section below
  - `plugin` (required) - an object containing fields that associate with the code of the plugin, defined in the `Plugin` section below
Manifest
- `name` (required) - a `String` that is a unique name for the plugin
- `version` (required) - a `String` in semantic versioning form
- `load` (optional) - a `Boolean` that can be set to `false` to disable and not load a plugin. Defaults to `true`
- `dependsOn` (required) - a `[String]` where each `String` is of the form `name@version` for plugins. Your plugin will not load if any of the dependencies are missing
- `requiresImplementation` (optional) - a `Boolean` specifying whether another plugin must implement this one in order to load. This is currently used by the `DataStore` plugin, which defines a general API for interacting with data store plugins but does not actually implement one.
- `implements` (optional) - a `String` of the form `name@version` that specifies that the plugin implements another plugin's specification. For example, `DataStoreMongoAdapter` implements the `DataStore` specification.
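The required-field rules above can be expressed as a small validation routine. A hedged sketch that mirrors the documented constraints (PenPal's real validation may differ):

```javascript
// Sketch: validate a plugin manifest against the documented requirements.
// This is an illustration, not PenPal's actual validation code.
const SEMVER = /^\d+\.\d+\.\d+$/;

function validateManifest(manifest) {
  const errors = [];
  if (typeof manifest.name !== "string" || manifest.name.length === 0)
    errors.push("name is required and must be a non-empty string");
  if (typeof manifest.version !== "string" || !SEMVER.test(manifest.version))
    errors.push("version must be in semantic versioning form, e.g. 0.1.0");
  if (!Array.isArray(manifest.dependsOn)) {
    errors.push("dependsOn is required and must be an array");
  } else {
    for (const dep of manifest.dependsOn)
      if (!/^[^@]+@\d+\.\d+\.\d+$/.test(dep))
        errors.push(`dependency "${dep}" must be of the form name@version`);
  }
  return errors;
}
```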
Plugin
- `loadPlugin()` - This function takes no arguments and returns one object with `types`, `resolvers`, `loaders`, and `settings` fields to define the schema and resolvers that can be used to interact with the plugin. The settings object contains all of the specific info that defines how the plugin queries will interact with the user interface and other server-side APIs (more on this in the `Settings` section).
The hooks property that is returned from the loadPlugin function allows you to pass in functions that can be called to validate and/or execute code when other plugins are loaded. The three hooks available are described below.
startup - This function takes no arguments but is guaranteed to execute after all other plugins have been loaded and after all core services are running (databases, the GraphQL server, etc).
hooks: {
  startup: () => null,
}

settings - This hook takes an object where each key describes a section of the settings object (described later) and the value is a function that is used to validate the settings in question. For example, the Docker plugin uses this hook in Plugins/Docker/server/plugin.js to check other plugins' usage of the docker field of the settings object.
hooks: {
settings: {
    my_cool_settings_field: check_my_cool_settings_field,
  }
}

postload - This hook will fire after a plugin loads with a single argument of the plugin_name. This can be used to take settings information and do something with it. For example, the DataStore plugin uses this hook in Plugins/DataStore/server/plugin.js to fire a function that creates datastores for each plugin immediately after they are loaded. We do this after the plugin is loaded because we know all of its dependencies exist, and before the startup hook in order to make sure that everything is ready for those hooks to fire.
hooks: {
  postload: (plugin_name) => null,
}

The sections below enumerate the different settings available and what they do. Much of this is subject to change, so take the documentation with a grain of salt and look at examples for current functionality.
To use the automatic configuration page generator, include the following field in the settings object, which allows PenPal to introspect your schema and generate a configuration editor:
{
"configuration": {
"schema_root": "MyCoolPluginConfiguration",
"getter": "getMyCoolPluginConfiguration",
"setter": "setMyCoolPluginConfiguration"
}
}

This section of the settings object is used to automatically generate data stores (using the DataStore API). It can be used for actual PenPal data or just configuration information for your plugin. The datastores field of the settings object is an [Object] where each Object has a name field. The name is automatically prepended with your plugin name, so it is automatically namespaced. There is planned functionality for things like unique data stores for data types (S3 stores for files, relational DB for data, etc), but that is not yet implemented.
{
"datastores": [
{
"name": "YourCollectionName"
}
]
}

This section of the settings object is used to automatically pull docker images (not yet implemented) or build provided docker files (implemented) at runtime. This is an easy way to make sure that your particular plugin is cross-platform and can be executed regardless of where PenPal is running. See the Rustscan Plugin for an example.
The graphql field of the loadPlugin return value can have any of three fields: types, resolvers, and loaders. These are automatically merged into the overall GraphQL schema to add API endpoints that are accessible on the /graphql endpoint.
✅ CRITICAL: Correct GraphQL Structure
Plugins with GraphQL schemas must follow the established loading pattern used by CoreAPI, Nmap, and other plugins:
// ✅ CORRECT: graphql/index.js
export { default as loadGraphQLFiles } from "./schema/index.js";
export { default as resolvers } from "./resolvers.js";
// ❌ WRONG - Don't import from penpal/core
import { loadGraphQLFiles } from "#penpal/core"; // This function doesn't exist!

Required File Structure:
server/graphql/
├── index.js // Main GraphQL exports
├── resolvers.js // Resolver structure
├── schema/
│ ├── index.js // loadGraphQLFiles implementation
│ └── enrichment.schema.graphql // Plugin-specific types (MUST contain valid GraphQL)
└── resolvers/
├── index.js // Resolver exports
└── enrichment.default.js // Plugin resolvers
- All `.graphql` files MUST contain valid GraphQL definitions (types, queries, mutations, etc.)
- Files with only comments will cause "Unexpected &lt;EOF&gt;" syntax errors
- Remove empty schema files or add minimal valid definitions
- Use descriptive filenames like `plugin-name-enrichment.schema.graphql`
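To avoid the empty-file syntax error, even a placeholder schema file should contain at least one valid definition. A minimal sketch (the type and field names here are illustrative, not part of any PenPal plugin):

```graphql
# plugin-name-enrichment.schema.graphql — minimal valid content so the
# file parses; replace the placeholder field with real definitions later
extend type Query {
  enrichmentPluginVersion: String
}
```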
Schema Loading Implementation:
// ✅ CORRECT: graphql/schema/index.js
import PenPal from "#penpal/core";
import { dirname, join } from "path";
import { fileURLToPath } from "url";
const __dirname = dirname(fileURLToPath(import.meta.url));
const cur_dir = join(__dirname, ".");
const loadGraphQLFiles = async () => {
return PenPal.Utils.LoadGraphQLDirectories(cur_dir);
};
export default loadGraphQLFiles;

Resolver Structure:
// ✅ CORRECT: graphql/resolvers.js
import resolvers from "./resolvers/index.js";
export default [
{
Query: {
...resolvers.queries,
},
},
{
Mutation: {
...resolvers.mutations,
},
},
...resolvers.default_resolvers,
...resolvers.scalars,
];
// ✅ CORRECT: graphql/resolvers/index.js
export default {
queries: {
// Custom queries
},
mutations: {
// Custom mutations
},
default_resolvers: [/* resolver functions */],
scalars: [],
};

Plugin Integration:
// ✅ CORRECT: plugin.js
import { loadGraphQLFiles, resolvers } from "./graphql/index.js";
const YourPlugin = {
loadPlugin() {
return {
graphql: {
types: loadGraphQLFiles,
resolvers,
},
};
},
};

This pattern ensures proper GraphQL schema loading and integration with PenPal's plugin system. The PenPal.Utils.LoadGraphQLDirectories() function automatically discovers and loads all .graphql files in the schema directory.
✅ CRITICAL: Plugin Registration Pattern
Every plugin MUST have an index.js file that registers the plugin with PenPal:
// ✅ CORRECT: server/index.js
// Overall PenPal coordinating server code
import PenPal from "#penpal/core";
// Plugin-specific info
import Plugin from "./plugin.js";
import Manifest from "./manifest.json" with { type: "json" };
// Register the plugin
PenPal.registerPlugin(Manifest, Plugin);
// ❌ WRONG - Don't export anything
export default Plugin; // Remove this line

Key Requirements:
- Import `PenPal` from `#penpal/core`
- Import `Plugin` from `./plugin.js`
- Import `Manifest` from `./manifest.json` with JSON assertion
- Call `PenPal.registerPlugin(Manifest, Plugin)`
- No exports needed - registration is a side effect
This registration pattern is what actually loads your plugin into the PenPal system. Without it, your plugin will not be recognized or loaded.
✅ CRITICAL: Correct Subscription Resolver Pattern
For real-time GraphQL subscriptions, use the object pattern with subscribe method:
// ✅ CORRECT: Object resolvers with subscribe method
export default {
jobUpdated: {
subscribe: (parent, args, context) => {
if (!context?.pubsub) {
throw new Error("PubSub not available in subscription context");
}
return context.pubsub.asyncIterator(["JOB_UPDATED"]);
}
},
activeJobsChanged: {
subscribe: (parent, args, context) => {
return context.pubsub.asyncIterator(["ACTIVE_JOBS_CHANGED"]);
}
}
};
// ❌ WRONG: Direct function resolvers (will fail)
export default {
async jobUpdated(parent, args, context) {
return context.pubsub.asyncIterator(["JOB_UPDATED"]); // Causes "must return Async Iterable" error
}
};

Why This Matters:
- GraphQL requires subscription resolvers to return async iterables
- The function pattern fails with "Subscription field must return Async Iterable. Received: undefined."
- The object pattern with a `subscribe` method is the GraphQL specification standard
- Using the wrong pattern causes WebSocket disconnections with 4500 error codes
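The distinction can be seen with plain JavaScript: an async generator satisfies the Async Iterable protocol that the GraphQL executor checks for. A self-contained illustration (this is not PenPal's PubSub implementation, just the shape `subscribe` must return):

```javascript
// Illustration only: an async generator object is a valid Async Iterable,
// which is exactly what a subscription's `subscribe` method must return.
function makeEventStream(events) {
  return (async function* () {
    for (const e of events) yield e; // yields each published payload in order
  })();
}

const resolvers = {
  jobUpdated: {
    // Object pattern: `subscribe` hands back the async iterable itself
    subscribe: () => makeEventStream([{ jobUpdated: { id: "1" } }]),
  },
};
```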
Subscription Schema:
extend type Subscription {
jobUpdated: Job
activeJobsChanged: [Job]
}

Publishing Events:
// Server-side: Publish real-time updates
PenPal.PubSub.publish("JOB_UPDATED", { jobUpdated: updatedJob });
PenPal.PubSub.publish("ACTIVE_JOBS_CHANGED", { activeJobsChanged: activeJobs });
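A client can then consume these published events with a standard subscription operation. A sketch; the field selections assume the Job type exposes `id`, `status`, and `statusText` as used elsewhere in this document:

```graphql
subscription WatchJobs {
  jobUpdated {
    id
    status
    statusText
  }
}
```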