DeepBleep

DeepBleep is a high-performance, real-time content moderation engine that anonymizes human faces and automatically censors NSFW content (nudity/violence), built on a scalable Python backend with a web-facing streaming API.

🚀 Features

  • Dual-Use Processing Modes:
    • Privacy Mode: Anonymizes human faces and NSFW content with Gaussian blur or pixelation.
    • Security Mode: Enhances suspicious objects using Cognitive Augmentation (CLAHE, Unsharp Masking), adds Sci-Fi HUD overlays, Screen Flash illumination, and an automated Alert Capture system.
  • Context-Aware Threat Profiles: Scalable architecture to detect and apply logic to specific threats (faces, weapons, etc.).
  • Real-time Video Processing: High-speed frame capture and processing using OpenCV.
  • AI Models: Integration with YuNet (Face Detection) and NudeNet (NSFW). Future-ready for YOLO/COCO.
  • Streaming API: FastAPI-based MJPEG streaming endpoint for real-time monitoring.
  • Configurable Pipeline: Easy-to-adjust parameters for detection sensitivity, enhancement levels, and blur intensity.
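
As a rough illustration of Privacy Mode's pixelation option, the blur can be sketched as block-wise averaging. This is a minimal NumPy sketch, not DeepBleep's actual API; the function name `pixelate` and the block size are illustrative:

```python
import numpy as np

def pixelate(img: np.ndarray, block: int = 8) -> np.ndarray:
    """Pixelate an image by replacing each block x block tile with its mean.

    Illustrative sketch only; assumes the image dimensions are multiples
    of `block`. Works for grayscale (H, W) and color (H, W, C) arrays.
    """
    h, w = img.shape[:2]
    # Average over non-overlapping block x block tiles...
    tiles = img.reshape(h // block, block, w // block, block, *img.shape[2:])
    means = tiles.mean(axis=(1, 3))
    # ...then expand each mean back out to fill its tile.
    out = np.repeat(np.repeat(means, block, axis=0), block, axis=1)
    return out.astype(img.dtype)
```

In the real pipeline this would only be applied inside detected face/NSFW regions, with the block size acting as the "blur intensity" knob.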

🛠️ Technical Stack

  • Backend: Python 3.12+
  • API Framework: FastAPI + Uvicorn
  • Computer Vision: OpenCV
  • AI Models:
    • MediaPipe (Face Detection)
    • NudeNet (NSFW Detection)
    • DeepFake Detection (See Research Report)
  • Package Management: uv

📋 Prerequisites

  • Python 3.12 or higher
  • uv (fast Python package installer and resolver)

⚡ Installation & Setup

  1. Clone the repository

    git clone https://github.com/nhannpl/DeepBleep.git
    cd DeepBleep
  2. Install dependencies using uv

    # Initialize and sync dependencies
    uv sync
  3. Configure Environment

    cp .env.example .env
    # Edit .env with your specific configurations if needed
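
As a rough sketch, `.env` might contain entries like the following. Only `PROCESSING_MODE` is referenced elsewhere in this README; the other variable names are hypothetical and should be checked against `.env.example`:

```ini
# Referenced by the offline processing script (see Usage).
PROCESSING_MODE=privacy

# Hypothetical tuning knobs -- confirm the actual names in .env.example.
DETECTION_THRESHOLD=0.6
BLUR_INTENSITY=31
```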

🏃‍♂️ Usage

Running the Backend Server

Start the FastAPI server with hot-reloading enabled:

uv run uvicorn src.backend.main:app --reload --host 0.0.0.0 --port 8000

The API will be available at http://localhost:8000.
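
The streaming endpoint serves MJPEG, which is plain HTTP with `multipart/x-mixed-replace` framing: each JPEG frame is sent as its own part. The helper below is a hedged sketch of that framing only; the boundary token `frame` is illustrative, and the actual endpoint path should be taken from DeepBleep's API routes:

```python
BOUNDARY = b"frame"  # illustrative boundary token

def mjpeg_part(jpeg_bytes: bytes) -> bytes:
    """Wrap one encoded JPEG frame as a multipart/x-mixed-replace part.

    A streaming endpoint (e.g. FastAPI's StreamingResponse with media type
    'multipart/x-mixed-replace; boundary=frame') would yield one of these
    parts per processed frame, in an endless loop.
    """
    return (
        b"--" + BOUNDARY + b"\r\n"
        b"Content-Type: image/jpeg\r\n"
        + f"Content-Length: {len(jpeg_bytes)}\r\n\r\n".encode()
        + jpeg_bytes
        + b"\r\n"
    )
```

Browsers render such a response as live video, replacing the image each time a new part arrives.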

API Documentation

Once the server is running, you can access the interactive API documentation at FastAPI's default routes:

  • Swagger UI: http://localhost:8000/docs
  • ReDoc: http://localhost:8000/redoc

Offline Video Processing

To process a video file offline (uses the current PROCESSING_MODE from your config or .env):

uv run python scripts/process_video.py input.mp4 output.mp4
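
Conceptually, offline processing is a read-process-write loop over frames. The sketch below shows that shape with a stand-in processor; the real script dispatches on `PROCESSING_MODE` and runs the detection models inside `process_frame`:

```python
from typing import Callable, Iterable, Iterator

import numpy as np

def run_pipeline(
    frames: Iterable[np.ndarray],
    process_frame: Callable[[np.ndarray], np.ndarray],
) -> Iterator[np.ndarray]:
    """Apply a per-frame processor to every frame of a video.

    Sketch of the offline flow: with OpenCV, `frames` would come from
    cv2.VideoCapture(input_path), and each result would be written to a
    cv2.VideoWriter(output_path, ...) instead of being yielded.
    """
    for frame in frames:
        yield process_frame(frame)

# Stand-in processor: darken the frame. A real Privacy Mode processor
# would instead detect faces/NSFW regions and blur or pixelate them.
def dim(frame: np.ndarray) -> np.ndarray:
    return (frame * 0.5).astype(frame.dtype)
```

Keeping the processor a plain frame-to-frame function is what lets the same logic back both the offline script and the live MJPEG stream.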

🧪 Development & Testing

Testing Security Mode

You can run a quick simulation to ensure the Security Processor initializes and works correctly without needing a full video stream:

uv run python tests/demo_security_mode.py

Running the Test Suite

Run the full test suite using pytest:

uv run pytest

Code Quality & Formatting

Linting and formatting are handled by ruff and black:

# formatting
uv run black .

# linting
uv run ruff check .

📂 Project Structure

DeepBleep/
├── src/
│   └── backend/
│       ├── api/            # API endpoints and routes
│       ├── capture/        # Video capture logic
│       ├── inference/      # AI model wrappers (Face, NSFW)
│       ├── processing/     # Image processing (Blur, etc.)
│       └── shared/         # Utilities and constants
├── scripts/                # Helper scripts
├── tests/                  # Unit and integration tests
├── implementation-plan.md  # Detailed design and plan
└── pyproject.toml          # Project configuration

📝 License

[Add License Here]
