DeepBleep is a high-performance, real-time content moderation engine designed to anonymize human faces and automatically censor NSFW (nudity/violence) content using a scalable Python-to-Web architecture.
- Dual-Use Processing Modes:
- Privacy Mode: Anonymizes human faces and NSFW content with Gaussian blur or pixelation.
- Security Mode: Enhances suspicious objects using Cognitive Augmentation (CLAHE, Unsharp Masking), adds Sci-Fi HUD overlays, Screen Flash illumination, and an automated Alert Capture system.
- Context-Aware Threat Profiles: Scalable architecture to detect and apply logic to specific threats (faces, weapons, etc.).
- Real-time Video Processing: High-speed frame capture and processing using OpenCV.
- AI Models: Integration with YuNet (Face Detection) and NudeNet (NSFW). Future-ready for YOLO/COCO.
- Streaming API: FastAPI-based MJPEG streaming endpoint for real-time monitoring.
- Configurable Pipeline: Easy-to-adjust parameters for detection sensitivity, enhancement levels, and blur intensity.
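As a rough illustration of the two modes' core pixel operations, here is a numpy-only sketch of pixelation (Privacy Mode) and unsharp masking (Security Mode). The function names, block size, and kernel parameters are assumptions for illustration only; the actual pipeline uses OpenCV.

```python
import numpy as np

def pixelate_region(frame: np.ndarray, x: int, y: int, w: int, h: int,
                    block: int = 8) -> np.ndarray:
    """Privacy Mode sketch: pixelate frame[y:y+h, x:x+w] by averaging tiles."""
    region = frame[y:y + h, x:x + w].astype(float)
    for by in range(0, h, block):
        for bx in range(0, w, block):
            tile = region[by:by + block, bx:bx + block]
            tile[...] = tile.mean(axis=(0, 1))  # collapse tile to its mean color
    frame[y:y + h, x:x + w] = region.astype(frame.dtype)
    return frame

def unsharp_mask(img: np.ndarray, amount: float = 1.0, k: int = 3) -> np.ndarray:
    """Security Mode sketch: sharpen by adding back the detail lost to a blur."""
    blurred = img.astype(float).copy()
    r = k // 2
    # Naive box blur over interior pixels (edges kept as-is for brevity).
    for yy in range(r, img.shape[0] - r):
        for xx in range(r, img.shape[1] - r):
            blurred[yy, xx] = img[yy - r:yy + r + 1, xx - r:xx + r + 1].mean(axis=(0, 1))
    sharp = img.astype(float) + amount * (img.astype(float) - blurred)
    return np.clip(sharp, 0, 255).astype(np.uint8)
```

In the real engine these operations would be performed with OpenCV primitives and applied only inside detected bounding boxes, not across the whole frame.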
- Backend: Python 3.12+
- API Framework: FastAPI + Uvicorn
- Computer Vision: OpenCV
- AI Models:
- MediaPipe (Face Detection)
- NudeNet (NSFW Detection)
- DeepFake Detection (See Research Report)
- Package Management: uv
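Since the inference layer wraps several models (face, NSFW, and future YOLO/COCO), a shared detector interface keeps them swappable. The sketch below is hypothetical; the class and field names are assumptions, not DeepBleep's actual API.

```python
from dataclasses import dataclass
from typing import Protocol

@dataclass
class Detection:
    """A detected region with a class label and confidence score."""
    label: str
    confidence: float
    box: tuple[int, int, int, int]  # x, y, width, height

class Detector(Protocol):
    """Common interface a face/NSFW model wrapper could implement."""
    def detect(self, frame) -> list[Detection]: ...

class DummyFaceDetector:
    """Stand-in showing the contract; a real wrapper would call the model here."""
    def detect(self, frame) -> list[Detection]:
        return [Detection("face", 0.99, (10, 10, 64, 64))]
```

With this shape, the processing layer can dispatch on `Detection.label` to pick a threat profile (blur for faces, enhancement for weapons) without knowing which model produced the result.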
- Python 3.12 or higher
- uv (fast Python package installer and resolver)
- Clone the repository

  ```bash
  git clone https://github.com/nhannpl/DeepBleep.git
  cd DeepBleep
  ```

- Install dependencies using uv

  ```bash
  # Initialize and sync dependencies
  uv sync
  ```

- Configure Environment

  ```bash
  cp .env.example .env
  # Edit .env with your specific configurations if needed
  ```
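The only variable this README references elsewhere is `PROCESSING_MODE`; a sketch of what `.env` might contain (the value names shown are illustrative assumptions, not the actual `.env.example` contents):

```ini
# Selects the pipeline behavior: "privacy" or "security" (assumed values)
PROCESSING_MODE=privacy
```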
Start the FastAPI server with hot-reloading enabled:
```bash
uv run uvicorn src.backend.main:app --reload --host 0.0.0.0 --port 8000
```

The API will be available at http://localhost:8000.
Once the server is running, you can access the interactive API documentation:
- Swagger UI: http://localhost:8000/docs
- ReDoc: http://localhost:8000/redoc
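The MJPEG streaming endpoint sends frames as a `multipart/x-mixed-replace` response, where each part carries one JPEG image. The pure-Python splitter below shows how a client could recover individual frames from the raw byte stream; the boundary string `--frame` is an assumption, not confirmed by DeepBleep's API.

```python
def split_mjpeg(buffer: bytes, boundary: bytes = b"--frame") -> list[bytes]:
    """Split a raw MJPEG byte stream into individual JPEG payloads.

    Hypothetical client-side helper; the boundary name is an assumption.
    """
    frames = []
    for part in buffer.split(boundary):
        # Each part holds headers, a blank line, then the JPEG body.
        header_end = part.find(b"\r\n\r\n")
        if header_end == -1:
            continue
        body = part[header_end + 4:].strip(b"\r\n")
        if body.startswith(b"\xff\xd8"):  # JPEG SOI marker
            frames.append(body)
    return frames
```

In practice a client would read the HTTP response incrementally and buffer until each boundary arrives, but the per-part parsing logic is the same.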
To process a video file offline (uses the current PROCESSING_MODE from your config or .env):
```bash
uv run python scripts/process_video.py input.mp4 output.mp4
```

You can run a quick simulation to ensure the Security Processor initializes and works correctly without needing a full video stream:
```bash
uv run python tests/demo_security_mode.py
```

Run the full test suite using pytest:
```bash
uv run pytest
```

Linting and formatting are handled by ruff and black:
```bash
# formatting
uv run black .

# linting
uv run ruff check .
```

Project layout:

```
DeepBleep/
├── src/
│   └── backend/
│       ├── api/               # API endpoints and routes
│       ├── capture/           # Video capture logic
│       ├── inference/         # AI model wrappers (Face, NSFW)
│       ├── processing/        # Image processing (Blur, etc.)
│       └── shared/            # Utilities and constants
├── scripts/                   # Helper scripts
├── tests/                     # Unit and integration tests
├── implementation-plan.md     # Detailed design and plan
└── pyproject.toml             # Project configuration
```
[Add License Here]