diff --git a/.gitignore b/.gitignore
index 60455b1..f7faa0f 100644
--- a/.gitignore
+++ b/.gitignore
@@ -163,3 +163,6 @@ test.env
 # Misc
 .DS_Store
 memory_bank.md
+/openaudit.egg-info
+/dist
+*.whl
diff --git a/README.md b/README.md
index 1d8f601..47129bd 100644
--- a/README.md
+++ b/README.md
@@ -1,100 +1,91 @@
-# OpenAuditKit
+
-OpenAuditKit is an open-source CLI security audit tool designed to scan your codebase for secrets and configuration vulnerabilities. It emphasizes offline capability, modular design, and secure handling of sensitive data (secret masking).
+

-## Features
-- **Secret Scanning**: Detects API keys and secrets using regex and entropy checks.
-- **Config Scanning**: Identifies misconfigurations in deployment files (e.g., .env, Dockerfile).
-- **Secure**: Secrets are masked in outputs; offline-first design.
-- **Backend Ready**: Feature-based architecture with Pydantic models for easy integration into dashboards or APIs.
-- **Customizable**: Add your own rules! See [Rule Documentation](openopenaudit/rules/README.md).
+# OpenAuditKit
-## 🛡️ Why OpenAuditKit?
+[![PyPI version](https://badge.fury.io/py/openaudit.svg)](https://badge.fury.io/py/openaudit)
+[![Python versions](https://img.shields.io/pypi/pyversions/openaudit.svg)](https://pypi.org/project/openaudit/)
+[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
+[![Website](https://img.shields.io/badge/website-neuralforge.one-blue)](https://neuralforge.one)
+**Next-Gen Security Audit Tool for Modern Codebases.**
+*Powered by AI. Secure by Design. Offline First.*
-## 🎥 Usage Demo
+[🌐 Website](https://neuralforge.one) • [📚 Documentation](https://github.com/neuralforgeone/OpenAuditKit) • [🐛 Report Bug](https://github.com/neuralforgeone/OpenAuditKit/issues)
-
-*(Replace this with your actual usage GIF)*
+
-## Usage
+---
-### Basic Scan
-```bash
-openaudit scan .
-```
+## 🔍 What is OpenAuditKit?
-### 🧠 AI-Powered Analysis
-Unlock advanced capabilities by configuring your OpenAI API key:
+**OpenAuditKit** is not just another linter. It's an intelligent security companion that lives in your terminal. Unlike traditional tools that drown you in false positives, OpenAuditKit combines robust pattern matching (Regex & Entropy) with **Context-Aware AI Agents** to understand *why* a piece of code might be dangerous.
-```bash
-# 1. Configure API Key
-openaudit config set-key sk-your-key-here
+Whether you are a solo developer or part of a large enterprise, OpenAuditKit helps you ship secure code faster.
-# 2. Run Scan with AI Agents
-openaudit scan . --ai
+## ✨ Key Features
-# 3. Explain a specific file
-openaudit explain openaudit/main.py
-```
+| Feature | Description |
+| :--- | :--- |
+| **🕵️ Secret Scanning** | Detects API keys, tokens, and credentials with high-entropy validation. |
+| **⚙️ Config Audit** | Discovers misconfigurations in `Dockerfile`, `.env`, `Kubernetes`, and more. |
+| **🧠 AI Advisory** | **(New)** Integrated AI Agents explain vulnerabilities and suggest fixes. |
+| **🏗️ Architecture Analysis** | AI agents analyze your project structure for design flaws. |
+| **🛡️ Threat Modeling** | Auto-generates STRIDE threat models based on your codebase. |
+| **🔌 Integrations** | Native support for CI/CD pipelines (GitHub Actions, GitLab CI). |
+| **📝 JSON Reporting** | Export findings for easy integration with dashboards like DefectDojo. |
+
+## 🚀 Installation
-**AI Agents:**
-- **Architecture Agent**: Reviews modularity and dependencies.
-- **Cross-File Agent**: Traces dangerous data flows across modules.
-- **Explain Agent**: Provides detailed code explanations.
-- **Secret Agent**: Validates if found secrets are likely real or test data.
-- **Threat Model Agent**: Generates a STRIDE threat model for your project structure.
+Install via pip:
-### JSON Output
```bash
-openaudit scan . --format json --output report.json
+pip install openaudit
```
-## 🛠 Features
+## ⚡ Quick Start
-- **Secret Scanning**: Detects API keys and secrets using regex and entropy checks.
-- **Config Scanning**: Identifies misconfigurations in deployment files (e.g., .env, Dockerfile).
-- **Secure**: Secrets are masked in outputs; offline-first design (unless AI is enabled).
-- **Backend Ready**: Feature-based architecture with Pydantic models for easy integration into dashboards or APIs.
-- **Customizable**: Add your own rules! See [Rule Documentation](openaudit/rules/README.md).
+### 1. Basic Scan
+Run a security scan on your current directory:
-## 🛡️ Why OpenAuditKit?
+```bash
+openaudit scan .
+```
-Often, security tools are either too simple (grep) or too complex (enterprise SAST). OpenAuditKit bridges the gap:
+### 2. Enable AI Superpowers 🧠
+Unlock the full potential with AI agents that analyze architecture and data flow:
-| Feature | OpenAuditKit | Gitleaks | TruffleHog |
-| :--- | :---: | :---: | :---: |
-| **Secret Scanning** | ✅ | ✅ | ✅ |
-| **Config Scanning** | ✅ | ❌ | ❌ |
-| **Offline First** | ✅ | ✅ | ❌ (Often requires API) |
-| **AI Analysis** | ✅ (Optional) | ❌ | ❌ |
-| **Custom Rules** | ✅ (YAML) | ✅ (TOML) | ✅ (Detectors) |
-| **Backend Integration** | ✅ (Pydantic Models) | ❌ | ❌ |
+```bash
+# Set your OpenAI API Key
+openaudit config set-key sk-your-api-key
-### Security Philosophy
-1. **Offline First**: No data leaves your machine unless you explicitly enable AI features.
-2. **Confidence > Noise**: We use entropy checks and specific regexes to minimize false positives.
-3. **Actionable**: Every finding comes with a remediation step.
+# Run an AI-enhanced scan
+openaudit scan . --ai
+```
-## Installation
+### 3. Ask Your Code
+Don't understand a complex file? Let the **Explain Agent** break it down:
```bash
-# From PyPI
-pip install openaudit
-
-# From Source
-git clone https://github.com/neuralforgeone/OpenAuditKit.git
-cd OpenAuditKit
-pip install .
+openaudit explain src/complex_logic.py
```
-## 🚀 CI/CD Integration
+## 📊 Comparison
-OpenAuditKit is designed to run in CI/CD pipelines. Use the `--ci` flag to enable CI mode (exit code 1 on failure, no interactive elements).
+| Feature | OpenAuditKit | Gitleaks | TruffleHog |
+| :--- | :---: | :---: | :---: |
+| **Finding Secrets** | ✅ | ✅ | ✅ |
+| **Config Analysis** | ✅ | ❌ | ❌ |
+| **AI Context Analysis** | ✅ | ❌ | ❌ |
+| **Architecture Review** | ✅ | ❌ | ❌ |
+| **Offline Capabilities** | ✅ | ✅ | ❌* |
+
+*\*TruffleHog often requires API connectivity for verification.*
-### GitHub Actions Example
+## 🤖 CI/CD Integration
-Create `.github/workflows/audit.yml`:
+Secure your pipeline with zero effort. Add this to your `.github/workflows/security.yml`:
```yaml
name: Security Audit
@@ -104,24 +95,29 @@ jobs:
   openaudit:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@v3
-      - uses: actions/setup-python@v4
-        with:
-          python-version: '3.10'
-      - run: pip install openaudit
-      - run: openaudit scan . --ci --fail-on high
+      - uses: actions/checkout@v3
+      - uses: actions/setup-python@v4
+        with:
+          python-version: '3.10'
+      - run: pip install openaudit
+      - run: openaudit scan . --ci --fail-on high --ai
+        env:
+          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }} # Optional for AI features
```
-### Exit Codes
-- `0`: No issues found (or issues below threshold).
-- `1`: Issues found matching or exceeding severity threshold.
+## 🛡️ Security Philosophy
-## 🛠 Development & Testing
+At **NeuralForge**, we believe security tools should be:
+1. **Silent but Deadly:** Only alert on real issues (Low False Positives).
+2. **Educational:** Don't just find bugs, explain them.
+3. **Private:** Your code never leaves your machine unless you explicitly opt in to AI features (and even then, code sent to the AI is redacted by default).
-Run the test suite with coverage:
-```bash
-pip install -e .[dev]
-pytest tests --cov=openaudit
-```
+## 🤝 Contributing
+
+We love contributions! Please check out our [Contributing Guide](CONTRIBUTING.md) to get started.
+
+---
-We enforce a 90% test coverage threshold.
+
diff --git a/build/lib/openaudit/__init__.py b/build/lib/openaudit/__init__.py
deleted file mode 100644
index 3dc1f76..0000000
--- a/build/lib/openaudit/__init__.py
+++ /dev/null
@@ -1 +0,0 @@
-__version__ = "0.1.0"
diff --git a/build/lib/openaudit/main.py b/build/lib/openaudit/main.py
deleted file mode 100644
index 67a93c9..0000000
--- a/build/lib/openaudit/main.py
+++ /dev/null
@@ -1,9 +0,0 @@
-from openaudit.interface.cli.app import app
-
-import sys
-def main():
- print(f"DEBUG: sys.argv = {sys.argv}")
- app()
-
-if __name__ == "__main__":
- main()
diff --git a/build/lib/openaudit/rules/config.yaml b/build/lib/openaudit/rules/config.yaml
deleted file mode 100644
index 923eba0..0000000
--- a/build/lib/openaudit/rules/config.yaml
+++ /dev/null
@@ -1,67 +0,0 @@
-rules:
- # .env Rules
- - id: "CONF_DEBUG_ENABLED"
- description: "Debug mode enabled in configuration"
- regex: "(?i)^\\s*DEBUG\\s*=\\s*(true|1|yes)"
- severity: "high"
- confidence: "high"
- category: "config"
- remediation: "Set DEBUG=False in production environments."
-
- - id: "CONF_DATABASE_URL_UNENCRYPTED"
- description: "Plaintext database URL detected"
- regex: "^\\s*DATABASE_URL\\s*=\\s*(postgres|mysql|mongodb)://"
- severity: "high"
- confidence: "high"
- category: "config"
- remediation: "Use encrypted secrets management or mask credentials."
-
- - id: "CONF_ENV_DEV_IN_PROD"
- description: "Development environment setting detected"
- regex: "(?i)^\\s*ENV\\s*=\\s*(dev|development)"
- severity: "medium"
- confidence: "high"
- category: "config"
- remediation: "Ensure this is not a production environment."
-
- # Dockerfile Rules
- - id: "DOCKER_USER_ROOT"
- description: "Container running as root"
- regex: "^\\s*USER\\s+root"
- severity: "high"
- confidence: "high"
- category: "infrastructure"
- remediation: "Create and switch to a non-root user."
-
- - id: "DOCKER_EXPOSE_ALL"
- description: "Exposing service on all interfaces (0.0.0.0)"
- regex: "^\\s*EXPOSE\\s+.*0\\.0\\.0\\.0"
- severity: "medium"
- confidence: "high"
- category: "infrastructure"
- remediation: "Bind to specific interfaces if possible."
-
- - id: "DOCKER_ADD_COPY_ALL"
- description: "Broad COPY instruction (COPY . /)"
- regex: "^\\s*COPY\\s+\\.\\s+/"
- severity: "low"
- confidence: "medium"
- category: "infrastructure"
- remediation: "Use .dockerignore and copy only necessary files."
-
- # Docker Compose Rules (Regex approximation for simple detection, can be refined with yaml parsing)
- - id: "COMPOSE_RESTART_ALWAYS"
- description: "Restart policy set to always"
- regex: "restart:\\s*always"
- severity: "low"
- confidence: "high"
- category: "infrastructure"
- remediation: "Consider 'on-failure' or specific restart policies."
-
- - id: "COMPOSE_PORT_EXPOSURE"
- description: "Port exposed to host (broad range)"
- regex: "\\s*-\\s*[\"']?0\\.0\\.0\\.0:"
- severity: "medium"
- confidence: "high"
- category: "infrastructure"
- remediation: "Bind ports to localhost (127.0.0.1) if external access is not required."
diff --git a/build/lib/openaudit/rules/secrets.yaml b/build/lib/openaudit/rules/secrets.yaml
deleted file mode 100644
index e349700..0000000
--- a/build/lib/openaudit/rules/secrets.yaml
+++ /dev/null
@@ -1,18 +0,0 @@
-rules:
- - id: "AWS_ACCESS_KEY_ID"
- description: "AWS Access Key ID"
- regex: "(?:A3T[A-Z0-9]|AKIA|AGPA|AIDA|AROA|AIPA|ANPA|ANVA|ASIA)[A-Z0-9]{16}"
- entropy_check: false
- severity: "critical"
- confidence: "high"
- category: "secret"
- remediation: "Revoke the key immediately and rotate credentials."
-
- - id: "GENERIC_API_KEY"
- description: "Potential High Entropy Key"
- regex: "api_key['\"]?\\s*[:=]\\s*['\"]?([A-Za-z0-9_\\-]{32,})"
- entropy_check: true
- severity: "high"
- confidence: "medium"
- category: "secret"
- remediation: "Verify if this is a real secret and move to environment variables."
diff --git a/dist/openaudit-0.1.0-py3-none-any.whl b/dist/openaudit-0.1.0-py3-none-any.whl
deleted file mode 100644
index b4b3fcc..0000000
Binary files a/dist/openaudit-0.1.0-py3-none-any.whl and /dev/null differ
diff --git a/dist/openaudit-0.1.0.tar.gz b/dist/openaudit-0.1.0.tar.gz
deleted file mode 100644
index 8ed2dd7..0000000
Binary files a/dist/openaudit-0.1.0.tar.gz and /dev/null differ
diff --git a/openaudit.egg-info/PKG-INFO b/openaudit.egg-info/PKG-INFO
index 8ce9edd..8f0702d 100644
--- a/openaudit.egg-info/PKG-INFO
+++ b/openaudit.egg-info/PKG-INFO
@@ -2,7 +2,7 @@ Metadata-Version: 2.4
Name: openaudit
Version: 0.1.0
Summary: Offline-first security audit tool (secrets & config scanning) for local codebases.
-Author-email: OpenAuditKit Team
+Author-email: OpenAuditKit Team
License: MIT
Project-URL: Repository, https://github.com/neuralforgeone/OpenAuditKit
Project-URL: Issues, https://github.com/neuralforgeone/OpenAuditKit/issues
@@ -18,105 +18,99 @@ Requires-Dist: rich>=13.0.0
Requires-Dist: pydantic>=2.0.0
Requires-Dist: pathspec>=0.11.0
Requires-Dist: openai>=1.0.0
+Provides-Extra: dev
+Requires-Dist: pytest>=7.0.0; extra == "dev"
+Requires-Dist: pytest-cov>=4.0.0; extra == "dev"
Dynamic: license-file
-# OpenAuditKit
+
-OpenAuditKit is an open-source CLI security audit tool designed to scan your codebase for secrets and configuration vulnerabilities. It emphasizes offline capability, modular design, and secure handling of sensitive data (secret masking).
+

-## Features
-- **Secret Scanning**: Detects API keys and secrets using regex and entropy checks.
-- **Config Scanning**: Identifies misconfigurations in deployment files (e.g., .env, Dockerfile).
-- **Secure**: Secrets are masked in outputs; offline-first design.
-- **Backend Ready**: Feature-based architecture with Pydantic models for easy integration into dashboards or APIs.
-- **Customizable**: Add your own rules! See [Rule Documentation](openopenaudit/rules/README.md).
+# OpenAuditKit
-## 🛡️ Why OpenAuditKit?
+[![PyPI version](https://badge.fury.io/py/openaudit.svg)](https://badge.fury.io/py/openaudit)
+[![Python versions](https://img.shields.io/pypi/pyversions/openaudit.svg)](https://pypi.org/project/openaudit/)
+[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
+[![Website](https://img.shields.io/badge/website-neuralforge.one-blue)](https://neuralforge.one)
+**Next-Gen Security Audit Tool for Modern Codebases.**
+*Powered by AI. Secure by Design. Offline First.*
-## 🎥 Usage Demo
+[🌐 Website](https://neuralforge.one) • [📚 Documentation](https://github.com/neuralforgeone/OpenAuditKit) • [🐛 Report Bug](https://github.com/neuralforgeone/OpenAuditKit/issues)
-
-*(Replace this with your actual usage GIF)*
+
-## Usage
+---
-### Basic Scan
-```bash
-openaudit scan .
-```
+## 🔍 What is OpenAuditKit?
-### 🧠 AI-Powered Analysis
-Unlock advanced capabilities by configuring your OpenAI API key:
+**OpenAuditKit** is not just another linter. It's an intelligent security companion that lives in your terminal. Unlike traditional tools that drown you in false positives, OpenAuditKit combines robust pattern matching (Regex & Entropy) with **Context-Aware AI Agents** to understand *why* a piece of code might be dangerous.
-```bash
-# 1. Configure API Key
-openaudit config set-key sk-your-key-here
+Whether you are a solo developer or part of a large enterprise, OpenAuditKit helps you ship secure code faster.
-# 2. Run Scan with AI Agents
-openaudit scan . --ai
+## ✨ Key Features
-# 3. Explain a specific file
-openaudit explain openaudit/main.py
-```
+| Feature | Description |
+| :--- | :--- |
+| **🕵️ Secret Scanning** | Detects API keys, tokens, and credentials with high-entropy validation. |
+| **⚙️ Config Audit** | Discovers misconfigurations in `Dockerfile`, `.env`, `Kubernetes`, and more. |
+| **🧠 AI Advisory** | **(New)** Integrated AI Agents explain vulnerabilities and suggest fixes. |
+| **🏗️ Architecture Analysis** | AI agents analyze your project structure for design flaws. |
+| **🛡️ Threat Modeling** | Auto-generates STRIDE threat models based on your codebase. |
+| **🔌 Integrations** | Native support for CI/CD pipelines (GitHub Actions, GitLab CI). |
+| **📝 JSON Reporting** | Export findings for easy integration with dashboards like DefectDojo. |
+
+## 🚀 Installation
-**AI Agents:**
-- **Architecture Agent**: Reviews modularity and dependencies.
-- **Cross-File Agent**: Traces dangerous data flows across modules.
-- **Explain Agent**: Provides detailed code explanations.
-- **Secret Agent**: Validates if found secrets are likely real or test data.
-- **Threat Model Agent**: Generates a STRIDE threat model for your project structure.
+Install via pip:
-### JSON Output
```bash
-openaudit scan . --format json --output report.json
+pip install openaudit
```
-## 🛠 Features
+## ⚡ Quick Start
-- **Secret Scanning**: Detects API keys and secrets using regex and entropy checks.
-- **Config Scanning**: Identifies misconfigurations in deployment files (e.g., .env, Dockerfile).
-- **Secure**: Secrets are masked in outputs; offline-first design (unless AI is enabled).
-- **Backend Ready**: Feature-based architecture with Pydantic models for easy integration into dashboards or APIs.
-- **Customizable**: Add your own rules! See [Rule Documentation](openaudit/rules/README.md).
+### 1. Basic Scan
+Run a security scan on your current directory:
-## 🛡️ Why OpenAuditKit?
+```bash
+openaudit scan .
+```
-Often, security tools are either too simple (grep) or too complex (enterprise SAST). OpenAuditKit bridges the gap:
+### 2. Enable AI Superpowers 🧠
+Unlock the full potential with AI agents that analyze architecture and data flow:
-| Feature | OpenAuditKit | Gitleaks | TruffleHog |
-| :--- | :---: | :---: | :---: |
-| **Secret Scanning** | ✅ | ✅ | ✅ |
-| **Config Scanning** | ✅ | ❌ | ❌ |
-| **Offline First** | ✅ | ✅ | ❌ (Often requires API) |
-| **AI Analysis** | ✅ (Optional) | ❌ | ❌ |
-| **Custom Rules** | ✅ (YAML) | ✅ (TOML) | ✅ (Detectors) |
-| **Backend Integration** | ✅ (Pydantic Models) | ❌ | ❌ |
+```bash
+# Set your OpenAI API Key
+openaudit config set-key sk-your-api-key
-### Security Philosophy
-1. **Offline First**: No data leaves your machine unless you explicitly enable AI features.
-2. **Confidence > Noise**: We use entropy checks and specific regexes to minimize false positives.
-3. **Actionable**: Every finding comes with a remediation step.
+# Run an AI-enhanced scan
+openaudit scan . --ai
+```
-## Installation
+### 3. Ask Your Code
+Don't understand a complex file? Let the **Explain Agent** break it down:
```bash
-# From PyPI
-pip install openaudit
-
-# From Source
-git clone https://github.com/neuralforgeone/OpenAuditKit.git
-cd OpenAuditKit
-pip install .
+openaudit explain src/complex_logic.py
```
-## 🚀 CI/CD Integration
+## 📊 Comparison
-OpenAuditKit is designed to run in CI/CD pipelines. Use the `--ci` flag to enable CI mode (exit code 1 on failure, no interactive elements).
+| Feature | OpenAuditKit | Gitleaks | TruffleHog |
+| :--- | :---: | :---: | :---: |
+| **Finding Secrets** | ✅ | ✅ | ✅ |
+| **Config Analysis** | ✅ | ❌ | ❌ |
+| **AI Context Analysis** | ✅ | ❌ | ❌ |
+| **Architecture Review** | ✅ | ❌ | ❌ |
+| **Offline Capabilities** | ✅ | ✅ | ❌* |
+
+*\*TruffleHog often requires API connectivity for verification.*
-### GitHub Actions Example
+## 🤖 CI/CD Integration
-Create `.github/workflows/audit.yml`:
+Secure your pipeline with zero effort. Add this to your `.github/workflows/security.yml`:
```yaml
name: Security Audit
@@ -126,24 +120,29 @@ jobs:
   openaudit:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@v3
-      - uses: actions/setup-python@v4
-        with:
-          python-version: '3.10'
-      - run: pip install openaudit
-      - run: openaudit scan . --ci --fail-on high
+      - uses: actions/checkout@v3
+      - uses: actions/setup-python@v4
+        with:
+          python-version: '3.10'
+      - run: pip install openaudit
+      - run: openaudit scan . --ci --fail-on high --ai
+        env:
+          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }} # Optional for AI features
```
-### Exit Codes
-- `0`: No issues found (or issues below threshold).
-- `1`: Issues found matching or exceeding severity threshold.
+## 🛡️ Security Philosophy
-## 🛠 Development & Testing
+At **NeuralForge**, we believe security tools should be:
+1. **Silent but Deadly:** Only alert on real issues (Low False Positives).
+2. **Educational:** Don't just find bugs, explain them.
+3. **Private:** Your code never leaves your machine unless you explicitly opt in to AI features (and even then, code sent to the AI is redacted by default).
-Run the test suite with coverage:
-```bash
-pip install -e .[dev]
-pytest tests --cov=openaudit
-```
+## 🤝 Contributing
+
+We love contributions! Please check out our [Contributing Guide](CONTRIBUTING.md) to get started.
+
+---
-We enforce a 90% test coverage threshold.
+
diff --git a/openaudit.egg-info/requires.txt b/openaudit.egg-info/requires.txt
index 29d10d7..c673d36 100644
--- a/openaudit.egg-info/requires.txt
+++ b/openaudit.egg-info/requires.txt
@@ -4,3 +4,7 @@ rich>=13.0.0
pydantic>=2.0.0
pathspec>=0.11.0
openai>=1.0.0
+
+[dev]
+pytest>=7.0.0
+pytest-cov>=4.0.0
diff --git a/openaudit/ai/engine.py b/openaudit/ai/engine.py
index 2b0f782..353bddb 100644
--- a/openaudit/ai/engine.py
+++ b/openaudit/ai/engine.py
@@ -48,3 +48,29 @@ def chat_completion(self, system_prompt: str, user_prompt: str, model: str = "gp
# For now, let's log and re-raise to be handled by caller or CLI
raise RuntimeError(f"OpenAI API Error: {str(e)}")
+    def chat_completion_stream(self, system_prompt: str, user_prompt: str, model: str = "gpt-4o"):
+        """
+        Executes a chat completion request with streaming.
+        Yields chunks of the response content.
+        """
+        if not self.client:
+            self._initialize_client()
+        if not self.client:
+            raise RuntimeError("OpenAI API key not configured. Run 'openaudit config set-key <KEY>' or set the OPENAI_API_KEY env var.")
+
+        try:
+            stream = self.client.chat.completions.create(
+                model=model,
+                messages=[
+                    {"role": "system", "content": system_prompt},
+                    {"role": "user", "content": user_prompt}
+                ],
+                temperature=0.2,
+                stream=True
+            )
+            for chunk in stream:
+                if chunk.choices and chunk.choices[0].delta.content:
+                    yield chunk.choices[0].delta.content
+        except OpenAIError as e:
+            raise RuntimeError(f"OpenAI API Error: {str(e)}")
+
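The streaming generator added above is consumed like any Python iterator. A minimal, self-contained sketch of the consumption pattern follows; `fake_stream` is a stand-in for the OpenAI SDK stream and is not part of the engine:

```python
# Sketch: consuming a streaming chat helper like chat_completion_stream.
# fake_stream simulates streamed content chunks; real OpenAI streams also
# emit chunks whose delta content is empty, which must be skipped.

def fake_stream():
    """Yields simulated content chunks, including empty keep-alive deltas."""
    for piece in ["Hello", "", " world", "", "!"]:
        yield piece

def collect(chunks):
    """Accumulate non-empty chunks into the full response text."""
    buffer = []
    for content in chunks:
        if content:  # skip empty delta chunks
            buffer.append(content)
    return "".join(buffer)

if __name__ == "__main__":
    print(collect(fake_stream()))  # -> Hello world!
```

In the CLI this same loop runs inside a renderer, but the iterator contract is identical.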
diff --git a/openaudit/features/explain/agent.py b/openaudit/features/explain/agent.py
index ab841da..5ec0df0 100644
--- a/openaudit/features/explain/agent.py
+++ b/openaudit/features/explain/agent.py
@@ -37,10 +37,31 @@ def run(self, context: PromptContext) -> AIResult:
is_advisory=True
)
except Exception as e:
- return AIResult(
+ return AIResult(
analysis=f"Error: {str(e)}",
risk_score=0.1,
severity=Severity.LOW,
confidence=Confidence.LOW,
is_advisory=True
)
+
+    def stream(self, context: PromptContext):
+        """
+        Stream the explanation.
+        Yields chunks of text.
+        """
+        from openaudit.ai.engine import AIEngine
+        engine = AIEngine()
+
+        if not engine.is_available():
+            yield "AI not configured. Please set API key."
+            return
+
+        system_prompt = "You are a technical expert. Explain the code and identify security risks. Use Markdown."
+        user_prompt = f"Code:\n{context.code_snippet}\n\nExplain and Analyze."
+
+        try:
+            for chunk in engine.chat_completion_stream(system_prompt, user_prompt):
+                yield chunk
+        except Exception as e:
+            yield f"\n\nError during streaming: {str(e)}"
diff --git a/openaudit/features/secrets/agent.py b/openaudit/features/secrets/agent.py
index 74c568f..94606ec 100644
--- a/openaudit/features/secrets/agent.py
+++ b/openaudit/features/secrets/agent.py
@@ -11,22 +11,33 @@ class SecretConfidenceAgent:
def run(self, context: PromptContext) -> AIResult:
from openaudit.ai.engine import AIEngine
+ engine = AIEngine()
if not engine.is_available():
# No fallback, return None to indicate no analysis possible
return None
snippet = context.code_snippet
- system_prompt = "You are a secret scanning expert. Analyze the context of a potential secret. Determine if it is a TEST/MOCK secret or a REAL production secret."
- user_prompt = f"Code Context:\n{snippet}\n\nIs this a real secret? Answer with JSON: {{'is_test': bool, 'reason': str}}"
+ system_prompt = "You are a secret scanning expert. Analyze the context of a potential secret. Determine if it is a TEST/MOCK secret or a REAL production secret. Respond ONLY with valid JSON in the format: {\"is_test\": boolean, \"reason\": \"string\"}"
+ user_prompt = f"Code Context:\n{snippet}\n\nIs this a real secret?"
try:
+ import json
response = engine.chat_completion(system_prompt, user_prompt)
- # Naive parsing for now
- is_test = "true" in response.lower() and "is_test" in response.lower()
+
+ # Clean up potential markdown formatting in response (e.g. ```json ... ```)
+ cleaned_response = response.strip()
+ if cleaned_response.startswith("```"):
+ cleaned_response = cleaned_response.strip("`")
+ if cleaned_response.startswith("json"):
+ cleaned_response = cleaned_response[4:]
+
+ data = json.loads(cleaned_response)
+ is_test = data.get("is_test", False)
+ reason = data.get("reason", "No reason provided.")
if is_test:
return AIResult(
- analysis="AI identified this as a likely TEST/MOCK secret.",
+ analysis=f"AI identified this as a likely TEST/MOCK secret. Reason: {reason}",
risk_score=0.1,
severity=Severity.LOW,
confidence=Confidence.HIGH,
@@ -35,7 +46,7 @@ def run(self, context: PromptContext) -> AIResult:
)
else:
return AIResult(
- analysis="AI identified this as a likely REAL secret.",
+ analysis=f"AI identified this as a likely REAL secret. Reason: {reason}",
risk_score=0.9,
severity=Severity.HIGH,
confidence=Confidence.HIGH,
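The fence-stripping and JSON parsing introduced in this hunk can be exercised in isolation. A hedged standalone sketch of that cleanup logic follows; `parse_agent_json` is an illustrative name, not a function in the package:

```python
import json

def parse_agent_json(response: str) -> dict:
    """Strip optional markdown code fences, then parse the agent's JSON verdict.

    Standalone copy of the cleanup used in SecretConfidenceAgent.run,
    for illustration only.
    """
    cleaned = response.strip()
    if cleaned.startswith("```"):
        cleaned = cleaned.strip("`")  # remove fence backticks from both ends
        if cleaned.startswith("json"):
            cleaned = cleaned[4:]  # drop the fence language tag
    return json.loads(cleaned)

if __name__ == "__main__":
    raw = '```json\n{"is_test": true, "reason": "pytest fixture value"}\n```'
    print(parse_agent_json(raw))
```

Requesting strict JSON in the system prompt and still defensively stripping fences, as the diff does, is a sensible belt-and-braces choice: models frequently wrap JSON in markdown despite instructions.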
diff --git a/openaudit/interface/cli/commands.py b/openaudit/interface/cli/commands.py
index 8c812f6..4a708e0 100644
--- a/openaudit/interface/cli/commands.py
+++ b/openaudit/interface/cli/commands.py
@@ -90,7 +90,6 @@ def scan_command(
ConfigScanner(rules=rules)
]
- # 4. Run Scan
# 4. Run Scan
all_findings = []
@@ -100,115 +99,127 @@ def scan_command(
for scanner in scanners:
all_findings.extend(scanner.scan(context))
else:
- with typer.progressbar(scanners, label="Running Scanners") as progress:
- for scanner in progress:
+ from openaudit.interface.cli.ui import UI
+ with UI.create_progress() as progress:
+ scan_task = progress.add_task("[green]Scanning...", total=len(scanners))
+ for scanner in scanners:
all_findings.extend(scanner.scan(context))
+ progress.update(scan_task, advance=1)
# 4.1 Run AI Agents if enabled
if ai:
- typer.echo("Running AI Agents...")
- # Architecture Agent
- arch_scanner = ArchitectureScanner()
- structure = arch_scanner.scan(context)
-
- arch_agent = ArchitectureAgent()
- # In a real scenario, we might use a proper AIEngine to look this up
- result = arch_agent.run_on_structure(structure)
-
- if result and result.is_advisory:
- # Convert AIResult to Finding
- ai_finding = Finding(
- rule_id=f"AI-{arch_agent.name.upper()}",
- description=f"{result.analysis} Suggested: {result.suggestion}",
- file_path="PROJECT_ROOT",
- line_number=0,
- secret_hash="",
- severity=result.severity,
- confidence=result.confidence,
- category="architecture",
- remediation=result.suggestion or "Review architecture.",
- is_ai_generated=True
- )
- all_findings.append(ai_finding)
-
- # Cross-File Agent
- df_scanner = DataFlowScanner()
- df_graph = df_scanner.scan(context, structure)
+ from openaudit.interface.cli.ui import UI
- cross_agent = CrossFileAgent()
- df_results = cross_agent.run_on_graph(df_graph)
+ UI.header("AI Analysis")
- for res in df_results:
- if res.is_advisory:
- df_finding = Finding(
- rule_id=f"AI-{cross_agent.name.upper()}",
- description=f"{res.analysis} Suggested: {res.suggestion}",
+ # Architecture Agent
+ with UI.console.status("[bold blue]Analyzing Architecture...[/bold blue]"):
+ arch_scanner = ArchitectureScanner()
+ structure = arch_scanner.scan(context)
+
+ arch_agent = ArchitectureAgent()
+ result = arch_agent.run_on_structure(structure)
+
+ if result and result.is_advisory:
+ ai_finding = Finding(
+ rule_id=f"AI-{arch_agent.name.upper()}",
+ description=f"{result.analysis} Suggested: {result.suggestion}",
file_path="PROJECT_ROOT",
line_number=0,
secret_hash="",
- severity=res.severity,
- confidence=res.confidence,
+ severity=result.severity,
+ confidence=result.confidence,
category="architecture",
- remediation=res.suggestion or "Secure data flow.",
+ remediation=result.suggestion or "Review architecture.",
is_ai_generated=True
)
- all_findings.append(df_finding)
+ all_findings.append(ai_finding)
+
+ # Cross-File Agent
+ with UI.console.status("[bold purple]Analyzing Data Flow...[/bold purple]"):
+ df_scanner = DataFlowScanner()
+ df_graph = df_scanner.scan(context, structure)
+
+ cross_agent = CrossFileAgent()
+ df_results = cross_agent.run_on_graph(df_graph)
+
+ for res in df_results:
+ if res.is_advisory:
+ df_finding = Finding(
+ rule_id=f"AI-{cross_agent.name.upper()}",
+ description=f"{res.analysis} Suggested: {res.suggestion}",
+ file_path="PROJECT_ROOT",
+ line_number=0,
+ secret_hash="",
+ severity=res.severity,
+ confidence=res.confidence,
+ category="architecture",
+ remediation=res.suggestion or "Secure data flow.",
+ is_ai_generated=True
+ )
+ all_findings.append(df_finding)
# Threat Modeling Agent
- threat_agent = ThreatModelingAgent()
- tm_results = threat_agent.run_on_structure(structure)
- for res in tm_results:
- if res.is_advisory:
- tm_finding = Finding(
- rule_id=f"AI-THREAT-{res.analysis.split(':')[0]}", # Crude ID generation
- description=f"{res.analysis} {res.suggestion}",
- file_path="PROJECT_ROOT",
- line_number=0,
- secret_hash="",
- severity=res.severity,
- confidence=res.confidence,
- category="architecture",
- remediation=res.suggestion or "Mitigate threat.",
- is_ai_generated=True
- )
- all_findings.append(tm_finding)
+ with UI.console.status("[bold red]Modeling Threats...[/bold red]"):
+ threat_agent = ThreatModelingAgent()
+ tm_results = threat_agent.run_on_structure(structure)
+ for res in tm_results:
+ if res.is_advisory:
+ tm_finding = Finding(
+ rule_id=f"AI-THREAT-{res.analysis.split(':')[0]}", # Crude ID generation
+ description=f"{res.analysis} {res.suggestion}",
+ file_path="PROJECT_ROOT",
+ line_number=0,
+ secret_hash="",
+ severity=res.severity,
+ confidence=res.confidence,
+ category="architecture",
+ remediation=res.suggestion or "Mitigate threat.",
+ is_ai_generated=True
+ )
+ all_findings.append(tm_finding)
# Secret Confidence Agent
- secret_agent = SecretConfidenceAgent()
- for finding in all_findings:
- if finding.category == "secret":
- # Extract context
- code_context = SecretContextExtractor.get_context(finding.file_path, finding.line_number)
- if not code_context:
- continue
-
- # Redact
- redacted_context = Redactor.redact(code_context)
-
- # Analyze
- ctx = PromptContext(
- file_path=finding.file_path,
- code_snippet=redacted_context,
- line_number=finding.line_number
- )
-
- ai_result = secret_agent.run(ctx)
+ secret_findings = [f for f in all_findings if f.category == "secret"]
+ if secret_findings:
+ with UI.create_progress() as progress:
+ secret_task = progress.add_task("[cyan]Verifying Secrets with AI...", total=len(secret_findings))
+ secret_agent = SecretConfidenceAgent()
- if ai_result:
- # Enrich Finding
- finding.description += f" [AI: {ai_result.analysis}]"
- finding.is_ai_generated = True # Tag enriched findings too
+ for finding in secret_findings:
+ # Extract context
+ code_context = SecretContextExtractor.get_context(finding.file_path, finding.line_number)
+ if not code_context:
+ progress.update(secret_task, advance=1)
+ continue
- # If agent is very confident it's a false positive (test), downgrade
- if ai_result.confidence == Confidence.LOW and ai_result.severity == Severity.LOW:
- finding.confidence = Confidence.LOW
- finding.severity = Severity.LOW
- finding.description = f"[ADVISORY] {finding.description}"
-
+ # Redact
+ redacted_context = Redactor.redact(code_context)
+
+ # Analyze
+ ctx = PromptContext(
+ file_path=finding.file_path,
+ code_snippet=redacted_context,
+ line_number=finding.line_number
+ )
+
+ ai_result = secret_agent.run(ctx)
+
+ if ai_result:
+ # Enrich Finding
+ finding.description += f" [AI: {ai_result.analysis}]"
+ finding.is_ai_generated = True # Tag enriched findings too
+
+ # If agent is very confident it's a false positive (test), downgrade
+ if ai_result.confidence == Confidence.LOW and ai_result.severity == Severity.LOW:
+ finding.confidence = Confidence.LOW
+ finding.severity = Severity.LOW
+ finding.description = f"[ADVISORY] {finding.description}"
+
+ progress.update(secret_task, advance=1)
duration = time.time() - start_time
- # 5. Report
# 5. Report
if format == OutputFormat.JSON:
reporter = JSONReporter(output_path=output)
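The AI-verification loop above enriches each secret finding with the agent's analysis and downgrades likely false positives to advisories. A minimal standalone sketch of that enrich-and-downgrade step, using hypothetical stand-ins for the `Finding` model and the agent result (field names here are assumptions for illustration, not OpenAuditKit's actual models):

```python
from dataclasses import dataclass

# Hypothetical stand-ins for openaudit's Finding and AI-result models.
@dataclass
class AIResult:
    confidence: str  # "LOW" / "MEDIUM" / "HIGH"
    severity: str
    analysis: str

@dataclass
class Finding:
    category: str
    description: str
    confidence: str = "HIGH"
    severity: str = "HIGH"
    is_ai_generated: bool = False

def apply_ai_verdict(finding: Finding, ai_result: AIResult) -> Finding:
    # Enrich the finding with the agent's analysis.
    finding.description += f" [AI: {ai_result.analysis}]"
    finding.is_ai_generated = True
    # Downgrade when the agent reports low confidence and low severity,
    # i.e. it believes this is a false positive such as a test fixture.
    if ai_result.confidence == "LOW" and ai_result.severity == "LOW":
        finding.confidence = "LOW"
        finding.severity = "LOW"
        finding.description = f"[ADVISORY] {finding.description}"
    return finding

f = apply_ai_verdict(
    Finding(category="secret", description="AWS key in config.py"),
    AIResult(confidence="LOW", severity="LOW", analysis="Looks like a test fixture"),
)
print(f.description)  # → [ADVISORY] AWS key in config.py [AI: Looks like a test fixture]
```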
@@ -241,9 +252,11 @@ def explain_command(
"""
Explain the code in a specific file using AI.
"""
+ from openaudit.interface.cli.ui import UI
+
target_path = Path(path).absolute()
if not target_path.exists() or not target_path.is_file():
- typer.echo(f"Error: path {path} does not exist or is not a file.")
+ UI.error(f"Error: path {path} does not exist or is not a file.")
raise typer.Exit(code=1)
# Check Consent
@@ -252,7 +265,7 @@ def explain_command(
if confirm:
ConsentManager.grant_consent()
else:
- typer.echo("Consent refused. Exiting.")
+ UI.warning("Consent refused. Exiting.")
raise typer.Exit(code=1)
# Read Content
@@ -264,14 +277,9 @@ def explain_command(
# Run Agent
agent = ExplainAgent()
context = PromptContext(code_snippet=redacted_content, file_path=str(target_path))
- result = agent.run(context)
- # Output
- typer.echo("")
- typer.echo(f"🔍 Analysis for {target_path.name}")
- typer.echo("=========================================")
- typer.echo(result.analysis)
- typer.echo("=========================================")
+ # Stream Output
+ UI.stream_markdown(agent.stream(context), title=f"Analysis for {target_path.name}")
# Config Commands
diff --git a/openaudit/interface/cli/ui.py b/openaudit/interface/cli/ui.py
new file mode 100644
index 0000000..479110f
--- /dev/null
+++ b/openaudit/interface/cli/ui.py
@@ -0,0 +1,61 @@
+from rich.console import Console
+from rich.progress import Progress, SpinnerColumn, TextColumn, BarColumn, TaskProgressColumn
+from rich.live import Live
+from rich.panel import Panel
+from rich.markdown import Markdown
+from typing import Generator, Optional
+
+class UI:
+ """
+ Centralized UI handler using Rich.
+ """
+ console = Console()
+
+ @staticmethod
+ def print(text: str, style: Optional[str] = None):
+ UI.console.print(text, style=style)
+
+ @staticmethod
+ def header(title: str):
+ UI.console.rule(f"[bold blue]{title}[/bold blue]")
+
+ @staticmethod
+ def success(message: str):
+ UI.console.print(f"[bold green]✓[/bold green] {message}")
+
+ @staticmethod
+ def error(message: str):
+ UI.console.print(f"[bold red]✗[/bold red] {message}")
+
+ @staticmethod
+ def warning(message: str):
+ UI.console.print(f"[bold yellow]![/bold yellow] {message}")
+
+ @staticmethod
+ def create_progress():
+ return Progress(
+ SpinnerColumn(),
+ TextColumn("[progress.description]{task.description}"),
+ BarColumn(),
+ TaskProgressColumn(),
+ console=UI.console
+ )
+
+ @staticmethod
+ def stream_markdown(content_generator: Generator[str, None, None], title: str = "Analysis"):
+ """
+ Stream markdown chunks into a live-updating panel and return the full accumulated text.
+ """
+ with Live(console=UI.console, refresh_per_second=10) as live:
+ accumulated_text = ""
+ for chunk in content_generator:
+ accumulated_text += chunk
+ markdown = Markdown(accumulated_text)
+ panel = Panel(markdown, title=title, border_style="blue")
+ live.update(panel)
+
+ # Final render
+ markdown = Markdown(accumulated_text)
+ panel = Panel(markdown, title=title, border_style="green")
+ live.update(panel)
+ return accumulated_text
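The `stream_markdown` helper above accumulates each streamed chunk and re-renders the full text on every update via `live.update`. The core accumulate-and-rerender pattern can be sketched without Rich; here the `render` callback stands in for `live.update(Panel(Markdown(...)))`:

```python
from typing import Callable, Iterable, List

def stream_accumulate(chunks: Iterable[str], render: Callable[[str], None]) -> str:
    """Accumulate streamed chunks, re-rendering the full text after each one."""
    accumulated = ""
    for chunk in chunks:
        accumulated += chunk
        render(accumulated)  # stand-in for live.update(Panel(Markdown(accumulated)))
    return accumulated

frames: List[str] = []
final = stream_accumulate(["# Analysis", "\n", "All clear."], frames.append)
print(final)
print(len(frames))  # → 3 intermediate renders, one per chunk
```

Re-rendering the whole document each time (rather than appending) is what lets partially streamed markdown display with correct formatting throughout.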
diff --git a/openaudit/main.py b/openaudit/main.py
index 67a93c9..a35b2e0 100644
--- a/openaudit/main.py
+++ b/openaudit/main.py
@@ -2,7 +2,6 @@
import sys
def main():
-    print(f"DEBUG: sys.argv = {sys.argv}")
app()
if __name__ == "__main__":
diff --git a/pyproject.toml b/pyproject.toml
index 6ba8b99..c60f176 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -10,7 +10,7 @@ readme = "README.md"
requires-python = ">=3.9"
license = { text = "MIT" }
authors = [
- { name = "OpenAuditKit Team", email = "info@openauditkit.org" }
+ { name = "OpenAuditKit Team", email = "info@neuralforge.one" }
]
urls = { Repository = "https://github.com/neuralforgeone/OpenAuditKit", Issues = "https://github.com/neuralforgeone/OpenAuditKit/issues" }
classifiers = [
@@ -27,6 +27,12 @@ dependencies = [
"openai>=1.0.0"
]
+[project.optional-dependencies]
+dev = [
+ "pytest>=7.0.0",
+ "pytest-cov>=4.0.0"
+]
+
[project.scripts]
openaudit = "openaudit.main:main"
diff --git a/requirements.txt b/requirements.txt
index cb31f29..de77b39 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -3,5 +3,6 @@ pyyaml>=6.0
rich>=13.0.0
pydantic>=2.0.0
pathspec>=0.11.0
+openai>=1.0.0
pytest>=7.0.0
pytest-cov>=4.0.0