akashkumar011/Deep-Facial-Analysis

DeepFake Face iOS Application

Overview

This iOS application uses the front and back cameras of an iPhone 12 or newer simultaneously to create real-time deepfake videos. The app analyzes the source person's face with the back camera and maps those facial features onto the target person captured by the front TrueDepth camera.

Features

  • Real-time face detection and tracking using both front and back cameras
  • Facial feature extraction and mapping
  • Live deepfake video generation
  • Video recording capability
  • Optimized for iPhone 12 with TrueDepth camera

Requirements

  • iPhone 12 or newer with TrueDepth camera
  • iOS 14.0 or later
  • Xcode 12.0 or later

Installation

  1. Clone this repository
  2. Open the project in Xcode
  3. Select your development team in the Signing & Capabilities section
  4. Install required dependencies using CocoaPods or Swift Package Manager (if applicable)
  5. Build and run the application on your device

Running the Project

  1. Prerequisites:

    • Make sure you have Xcode 12.0 or later installed
    • Ensure you have an iPhone 12 or newer with iOS 14.0+ for full functionality
    • Developer account for signing the application
  2. Setting up the Development Environment:

    # Clone the repository
    git clone https://github.com/akashkumar011/Deep-Facial-Analysis.git
    cd Deep-Facial-Analysis
    
    # If using CocoaPods
    pod install
  3. Opening the Project:

    • If using CocoaPods, open the .xcworkspace file
    • If not using CocoaPods, open the .xcodeproj file
  4. Configuration:

    • In Xcode, select your target device (must be a physical device with TrueDepth camera)
    • Go to the Signing & Capabilities tab and select your development team
    • Change the Bundle Identifier to a unique value (e.g. reverse-DNS with your own domain) if the default conflicts with an existing app
  5. Building and Running:

    • Connect your iPhone to your computer
    • Select your device from the device dropdown in Xcode
    • Click the Run button (▶️) or press Cmd+R
    • The first time you run the app on your device, you may need to trust the developer certificate in your device settings
  6. Troubleshooting:

    • If you encounter camera permission issues, ensure the app has camera access in your device settings
    • For performance issues, try adjusting the quality settings in the app's settings panel
    • If the app crashes during initialization, check that you're using a compatible device with TrueDepth camera
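
The camera-permission check mentioned above could be handled with a small helper in the spirit of CameraUtility.swift. This is a sketch, not the project's actual implementation; the function name is an assumption, but the AVFoundation calls are standard:

```swift
import AVFoundation

// Hypothetical helper (modeled on CameraUtility.swift) that checks and,
// if needed, requests camera access before the capture session starts.
func ensureCameraAccess(completion: @escaping (Bool) -> Void) {
    switch AVCaptureDevice.authorizationStatus(for: .video) {
    case .authorized:
        completion(true)
    case .notDetermined:
        // Prompts the user; requires NSCameraUsageDescription in Info.plist.
        AVCaptureDevice.requestAccess(for: .video) { granted in
            DispatchQueue.main.async { completion(granted) }
        }
    default:
        // .denied or .restricted: direct the user to Settings > Privacy.
        completion(false)
    }
}
```

If permission was previously denied, the app cannot re-prompt; the user must re-enable camera access in the Settings app.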

Usage

  1. Launch the app and grant camera permissions when prompted
  2. Position the back camera to capture the source face (the person whose facial features you want to use)
  3. Position yourself in front of the front camera (the target face that will receive the deepfake effect)
  4. Press the "Start" button to begin the deepfake process
  5. The output view will display the deepfake result in real-time
  6. Press the "Stop" button to end the process

Project Structure

  • Controllers: Contains view controllers for the application

    • MainViewController.swift: Main interface controller
  • Models: Contains data models and processing logic

    • FaceTracker.swift: Handles face detection and tracking
    • DeepfakeProcessor.swift: Processes facial data and generates deepfake output
  • Views: Contains custom UI components

    • DeepfakeOutputView.swift: Custom view for displaying the deepfake output
  • Utils: Contains utility classes

    • CameraUtility.swift: Helper functions for camera setup and permissions
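
Since the app captures from the front and back cameras at the same time, the session setup (likely in CameraUtility.swift) would need AVCaptureMultiCamSession rather than a plain AVCaptureSession. The following is a minimal sketch of that setup under assumed names; error handling and output connections are abbreviated:

```swift
import AVFoundation

// Sketch: simultaneous front/back capture via AVCaptureMultiCamSession
// (required for dual-camera capture; available on iOS 13+ hardware that
// supports it). Returns nil if the device cannot run multi-cam.
func makeDualCameraSession() -> AVCaptureMultiCamSession? {
    guard AVCaptureMultiCamSession.isMultiCamSupported else { return nil }
    let session = AVCaptureMultiCamSession()
    session.beginConfiguration()
    defer { session.commitConfiguration() }

    // Front TrueDepth camera (target face) and back camera (source face).
    let specs: [(AVCaptureDevice.DeviceType, AVCaptureDevice.Position)] = [
        (.builtInTrueDepthCamera, .front),
        (.builtInWideAngleCamera, .back),
    ]
    for (type, position) in specs {
        guard let device = AVCaptureDevice.default(type, for: .video, position: position),
              let input = try? AVCaptureDeviceInput(device: device),
              session.canAddInput(input) else { return nil }
        // Multi-cam sessions add inputs without implicit connections so the
        // app can wire each camera to its own output explicitly.
        session.addInputWithNoConnections(input)
    }
    return session
}
```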

Technical Implementation

The application uses several key iOS frameworks:

  • AVFoundation: For camera capture and video processing
  • Vision: For face detection and facial landmark extraction
  • ARKit: For 3D face tracking and depth data processing
  • Metal: For high-performance rendering of the deepfake output
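
As an illustration of the Vision step, detecting facial landmarks in a single video frame might look like the sketch below. The function name and completion shape are assumptions; the Vision request types are the framework's real API:

```swift
import Vision

// Sketch: run facial-landmark detection on one camera frame.
// VNFaceObservation results carry bounding boxes plus landmark regions
// (eyes, nose, mouth contours) used for feature extraction.
func detectLandmarks(in pixelBuffer: CVPixelBuffer,
                     completion: @escaping ([VNFaceObservation]) -> Void) {
    let request = VNDetectFaceLandmarksRequest { request, _ in
        completion(request.results as? [VNFaceObservation] ?? [])
    }
    // Orientation must match how the camera buffer is rotated; .right is
    // a common value for portrait capture but is an assumption here.
    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer,
                                        orientation: .right)
    DispatchQueue.global(qos: .userInitiated).async {
        try? handler.perform([request])
    }
}
```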

The deepfake generation process involves:

  1. Detecting and tracking faces in both camera feeds
  2. Extracting facial landmarks and features from the source face
  3. Mapping these features onto the target face's 3D geometry
  4. Rendering the combined result to create the deepfake effect
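
Step 3 relies on ARKit's face tracking: the TrueDepth camera exposes the target face's mesh and expression coefficients through ARFaceAnchor. A sketch of consuming that data (delegate wiring omitted, renderer hand-off assumed):

```swift
import ARKit

// Sketch: read the target face's 3D geometry and live expression values
// from an ARFaceAnchor delivered by ARSessionDelegate callbacks.
func handle(faceAnchor: ARFaceAnchor) {
    // The tracked face mesh (roughly 1,220 vertices in ARFaceGeometry).
    let vertices = faceAnchor.geometry.vertices

    // Per-expression blend-shape coefficients in 0...1, e.g. jaw open.
    let jawOpen = faceAnchor.blendShapes[.jawOpen]?.floatValue ?? 0

    // A deepfake renderer would re-texture this mesh with the source
    // face's extracted features while preserving these live expressions.
    _ = (vertices.count, jawOpen)
}
```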

Performance Optimizations

The application includes several optimizations to ensure smooth performance on iPhone 12 devices:

  • Frame Rate Throttling: Intelligently limits frame processing based on device capabilities
  • Device-Specific Settings: Automatically adjusts quality settings based on device performance tier
  • Pixel Buffer Pooling: Reuses memory buffers for improved rendering efficiency
  • Dedicated Rendering Queue: Separates rendering from processing for better parallelization
  • Metal Optimizations: Efficient texture caching and command buffer reuse
  • User-Configurable Performance: Settings panel with options for:
    • Target frame rate (24, 30, or 60 FPS)
    • Performance mode toggle
    • Rendering quality adjustments
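
The frame-rate throttling idea can be sketched in a few lines of plain Swift: drop any frame whose timestamp arrives sooner than the target interval allows. The type name is an assumption, not the project's actual class:

```swift
import Foundation

// Minimal sketch of frame-rate throttling: frames arriving faster than
// the target interval are dropped instead of processed.
struct FrameThrottler {
    let targetFPS: Double
    private var lastProcessed: TimeInterval = -.infinity

    init(targetFPS: Double) { self.targetFPS = targetFPS }

    // Returns true if this frame should be processed, false to drop it.
    mutating func shouldProcess(at timestamp: TimeInterval) -> Bool {
        guard timestamp - lastProcessed >= 1.0 / targetFPS else { return false }
        lastProcessed = timestamp
        return true
    }
}
```

At a 30 FPS target, any frame arriving less than ~33 ms after the last processed one is skipped, which keeps the Vision/Metal pipeline from falling behind the capture rate.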

Privacy Considerations

This application processes facial data locally on the device and does not transmit any personal information. The app requires camera permissions to function but does not store facial data permanently unless the user explicitly saves a video recording.

License

This project is licensed under the MIT License - see the LICENSE file for details.
