This iOS application uses the iPhone 12's front and back cameras to create real-time deepfake videos. The app analyzes a person's face using the back camera and applies those facial features to another person captured by the front TrueDepth camera.
- Real-time face detection and tracking using both front and back cameras
- Facial feature extraction and mapping
- Live deepfake video generation
- Video recording capability
- Optimized for iPhone 12 with TrueDepth camera
Requirements:
- iPhone 12 or newer with TrueDepth camera
- iOS 14.0 or later
- Xcode 12.0 or later
Installation:
- Clone this repository
- Open the project in Xcode
- Select your development team in the Signing & Capabilities section
- Install required dependencies using CocoaPods or Swift Package Manager (if applicable)
- Build and run the application on your device
Prerequisites:
- Make sure you have Xcode 12.0 or later installed
- Ensure you have an iPhone 12 or newer with iOS 14.0+ for full functionality
- Developer account for signing the application
Setting up the Development Environment:
```bash
# Clone the repository
git clone https://github.com/yourusername/DeepFakeApp.git
cd DeepFakeApp

# If using CocoaPods
pod install
```
Opening the Project:
- If using CocoaPods, open the .xcworkspace file
- If not using CocoaPods, open the .xcodeproj file
Configuration:
- In Xcode, select your target device (must be a physical device with TrueDepth camera)
- Go to the Signing & Capabilities tab and select your development team
- Ensure the Bundle Identifier is unique, changing it if necessary
Building and Running:
- Connect your iPhone to your computer
- Select your device from the device dropdown in Xcode
- Click the Run button (▶️) or press Cmd+R
- The first time you run the app on your device, you may need to trust the developer certificate in your device settings
Troubleshooting:
- If you encounter camera permission issues, ensure the app has camera access in your device settings
- For performance issues, try adjusting the quality settings in the app's settings panel
- If the app crashes during initialization, check that you're using a compatible device with TrueDepth camera
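The permission check mentioned above can be handled with standard AVFoundation APIs. This is a minimal illustrative sketch (the function name is hypothetical, not taken from the project):

```swift
import AVFoundation

/// Illustrative helper: checks camera authorization and requests it if needed.
/// The completion handler may be called on an arbitrary queue.
func ensureCameraAccess(completion: @escaping (Bool) -> Void) {
    switch AVCaptureDevice.authorizationStatus(for: .video) {
    case .authorized:
        completion(true)
    case .notDetermined:
        // First launch: the system presents the permission prompt.
        AVCaptureDevice.requestAccess(for: .video) { granted in
            completion(granted)
        }
    default:
        // .denied or .restricted: direct the user to Settings > Privacy > Camera.
        completion(false)
    }
}
```

Note that the camera session should only be configured after `completion(true)`; starting capture while the status is `.denied` produces blank frames rather than an error.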
Usage:
- Launch the app and grant camera permissions when prompted
- Position the back camera to capture the source face (the person whose facial features you want to use)
- Position yourself in front of the front camera (the target face that will receive the deepfake effect)
- Press the "Start" button to begin the deepfake process
- The output view displays the deepfake result in real time
- Press the "Stop" button to end the process
Project Structure:
- Controllers: View controllers for the application
  - MainViewController.swift: Main interface controller
- Models: Data models and processing logic
  - FaceTracker.swift: Handles face detection and tracking
  - DeepfakeProcessor.swift: Processes facial data and generates deepfake output
- Views: Custom UI components
  - DeepfakeOutputView.swift: Custom view for displaying the deepfake output
- Utils: Utility classes
  - CameraUtility.swift: Helper functions for camera setup and permissions
The application uses several key iOS frameworks:
- AVFoundation: For camera capture and video processing
- Vision: For face detection and facial landmark extraction
- ARKit: For 3D face tracking and depth data processing
- Metal: For high-performance rendering of the deepfake output
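As a hedged sketch of how the Vision framework is typically used for the detection step (this is illustrative, not the app's actual code; the function name is hypothetical):

```swift
import Vision

/// Illustrative sketch: detect faces and their landmarks in a captured frame.
/// Orientation handling and error reporting are simplified for brevity.
func detectFaceLandmarks(in pixelBuffer: CVPixelBuffer,
                         completion: @escaping ([VNFaceObservation]) -> Void) {
    let request = VNDetectFaceLandmarksRequest { request, _ in
        // Each observation carries a bounding box plus eye, nose, and
        // mouth landmark regions in normalized image coordinates.
        let faces = (request.results as? [VNFaceObservation]) ?? []
        completion(faces)
    }
    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer,
                                        orientation: .leftMirrored, // typical for front camera
                                        options: [:])
    try? handler.perform([request])
}
```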
The deepfake generation process involves:
- Detecting and tracking faces in both camera feeds
- Extracting facial landmarks and features from the source face
- Mapping these features onto the target face's 3D geometry
- Rendering the combined result to create the deepfake effect
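For the 3D face-tracking step, ARKit's face tracking on TrueDepth devices is typically configured along these lines (an illustrative sketch under the assumption that the app uses a standard ARSession; the function name is hypothetical):

```swift
import ARKit

/// Illustrative: start TrueDepth (front-camera) face tracking if supported.
func startFaceTracking(on session: ARSession) {
    guard ARFaceTrackingConfiguration.isSupported else {
        // Device lacks a TrueDepth camera; fall back or show an alert.
        return
    }
    let config = ARFaceTrackingConfiguration()
    config.isLightEstimationEnabled = true
    session.run(config, options: [.resetTracking, .removeExistingAnchors])
}
```

Once running, the session delivers ARFaceAnchor updates whose geometry can drive the mapping step described above.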
The application includes several optimizations to ensure smooth performance on iPhone 12 devices:
- Frame Rate Throttling: Intelligently limits frame processing based on device capabilities
- Device-Specific Settings: Automatically adjusts quality settings based on device performance tier
- Pixel Buffer Pooling: Reuses memory buffers for improved rendering efficiency
- Dedicated Rendering Queue: Separates rendering from processing for better parallelization
- Metal Optimizations: Efficient texture caching and command buffer reuse
- User-Configurable Performance: Settings panel with options for:
  - Target frame rate (24, 30, or 60 FPS)
  - Performance mode toggle
  - Rendering quality adjustments
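The frame-rate throttling idea above can be sketched as a small value type that decides, per incoming frame timestamp, whether to process or drop the frame (illustrative only; the type name is not from the project):

```swift
import Foundation

/// Illustrative frame-rate throttle: admits at most `targetFPS` frames per second.
struct FrameThrottle {
    let targetFPS: Double
    private var lastProcessed: TimeInterval = -.infinity

    init(targetFPS: Double) {
        self.targetFPS = targetFPS
    }

    /// Returns true if enough time has elapsed since the last processed frame.
    mutating func shouldProcess(at timestamp: TimeInterval) -> Bool {
        let minInterval = 1.0 / targetFPS
        guard timestamp - lastProcessed >= minInterval else { return false }
        lastProcessed = timestamp
        return true
    }
}
```

At a 30 FPS target, for example, frames arriving less than about 33 ms after the last processed frame are dropped, keeping the Vision and rendering pipelines from backing up on slower devices.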
This application processes facial data locally on the device and does not transmit any personal information. The app requires camera permissions to function but does not store facial data permanently unless the user explicitly saves a video recording.
This project is licensed under the MIT License - see the LICENSE file for details.