Warning
This project is simple, and its results cannot be fully trusted. (I'm actively using this project myself, and I'll continue to improve it whenever I have time.)
CodeSentinel is an AI-powered security auditor designed to scan project directories for malicious intent, dangerous coding practices, and obfuscated payloads. By leveraging Large Language Models (LLMs) and Tree-sitter, it provides both surface-level scans and deep, dependency-aware analysis.
Note
The scan is read-only: no modifications are made to the target files or directories.
Many thanks to Gemini and GPT for their help!
Tip
If you notice any issues or have suggestions and have the time,
please leave them in the Issues section. Thank you.
👉 Project Architecture | 👉 Documents
- AI-Powered Analysis: Uses LLMs to audit code for backdoors, SQL injection, `eval()` usage, and more.
- Deep Analysis Mode: Traces cross-file logic by providing the AI with the context of local dependencies (either full code or skeletal structures).
- Multi-Language Support: Install the corresponding Tree-sitter parser package for each language you want structural dependency analysis for.
- Intelligent Skeletons: Extracts class and function signatures to provide context without exhausting LLM token limits.
- Detailed Reporting: Generates interactive CLI output and structured JSON reports (Full scan vs. Problems only).
- Flexible Backend: Compatible with OpenAI, LM Studio, llama.cpp, and other OpenAI-compatible APIs.
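The "intelligent skeleton" idea above can be sketched in a few lines. The snippet below uses Python's standard `ast` module purely for illustration (CodeSentinel itself uses Tree-sitter so it can work across languages); `extract_skeleton` is a hypothetical helper, not part of the project's API:

```python
import ast
import textwrap

def extract_skeleton(source: str) -> list[str]:
    """Collect class and function signatures, dropping the bodies.

    A minimal illustration of sending the AI structural context
    instead of full source code.
    """
    skeleton = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.ClassDef):
            skeleton.append(f"class {node.name}: ...")
        elif isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            args = ", ".join(a.arg for a in node.args.args)
            skeleton.append(f"def {node.name}({args}): ...")
    return skeleton

code = textwrap.dedent("""
    class PaymentGateway:
        def charge(self, card, amount):
            return api_call(card, amount)
""")
print(extract_skeleton(code))
```

A skeleton like this is usually a small fraction of the original token count, which is what keeps deep mode within LLM context limits.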
- Python 3.10+
- (Optional) A local LLM runner such as llama.cpp or LM Studio, or access to an OpenAI-compatible API
1. Clone the repository:

   ```shell
   git clone https://github.com/yourlayer/CodeSentinel.git
   cd CodeSentinel
   ```

2. Install dependencies:

   ```shell
   python -m venv venv
   source venv/bin/activate
   pip install -r requirements.txt
   ```
Edit config.yaml or use environment variables to configure the scanner:
- `openai_api_key`: Your API key (default: `any-key-for-local`).
- `openai_base_url`: The API endpoint (e.g., `http://localhost:1234/v1` for LM Studio).
- `ai_model`: The name of the model to use.
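Putting the three options together, a minimal `config.yaml` might look like this (the model name below is a placeholder, not a recommendation):

```yaml
openai_api_key: any-key-for-local          # a real key is only needed for hosted APIs
openai_base_url: http://localhost:1234/v1  # e.g. LM Studio's local server
ai_model: my-local-model                   # placeholder model name
```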
Scan a directory using the default configuration:
```shell
python -m src.main --dir ./path/to/project
```

Analyze files along with their local dependencies:

```shell
python -m src.main --dir ./path/to/project --deep
```

- `--dir <path>`, `-d <path>`: Directory to scan (default: current directory).
- `--dry-run`: List files that would be scanned without sending them to the AI.
- `--model <name>`: Override the model specified in config.
- `--url <url>`: Override the API base URL.
- `--full-deps`: In deep mode, include the full source code of dependencies instead of just skeletons.
Reports are saved in the reports/scan_YYYYMMDD_HHMMSS/ directory:
- `full_report.json`: Detailed results for every scanned file.
- `problems_report.json`: Filtered results containing only `[DANGER]` and `[WARNING]` statuses.
- `project_structure.txt`: A text-based visualization of the scanned directory.
- `logs/`: Raw AI interaction logs, mirroring the scanned project's structure.
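To illustrate how the two JSON reports relate, here is a minimal sketch of deriving the problems report from the full one. The entry schema shown is an assumption for illustration; only the `[DANGER]` and `[WARNING]` status labels come from the report description above:

```python
import json

# Hypothetical report entries -- the real schema may differ.
full_report = [
    {"file": "src/app.py",  "status": "[OK]",      "findings": []},
    {"file": "src/db.py",   "status": "[DANGER]",  "findings": ["possible SQL injection"]},
    {"file": "src/util.py", "status": "[WARNING]", "findings": ["eval() usage"]},
]

def filter_problems(entries):
    """Keep only entries flagged [DANGER] or [WARNING]."""
    return [e for e in entries if e["status"] in ("[DANGER]", "[WARNING]")]

problems = filter_problems(full_report)
print(json.dumps(problems, indent=2))
```

The filtered view is convenient for CI pipelines that only need to fail on flagged files.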
Run the test suite:
```shell
venv/bin/python test/main_test.py
```

The legacy unittest discovery command is also supported:

```shell
venv/bin/python -m unittest discover test
```

Documentation maintained by mindofcharles and AI. Last updated: 2026-04-29.