LocalFaceSwap is a Windows-first live face-swap project with two modes:
- A Python app for real model-based face swap from a folder image to your live webcam feed
- A native C++ app for a lighter, non-neural local effect when you want lower overhead and a simple native executable
If your goal is "use my webcam in Google Meet or Zoom, but make my face look like the person in the uploaded image", use the Python app first.
The Python app can:
- Watch the `uploads/` folder and automatically use the newest JPG, PNG, JPEG, BMP, or WEBP image
- Detect the main face in the uploaded image and use it as the source identity
- Read your live webcam feed and swap your face so the live pose and expression drive the output
- Show the result in a local preview window
- Run in the background and publish the swapped feed to a virtual camera for Google Meet, Zoom, Teams, and similar apps
- Output through `OBS Virtual Camera` or `Unity Capture` on Windows
- Mirror or un-mirror the local preview
- Swap only the main face or all detected faces in a frame
- Blend the result with the original frame using opacity control
- Use different ONNX execution providers such as CPU, CUDA, or DirectML when available
- Keep a separate native C++ fallback app for a lighter non-neural effect
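The opacity blend mentioned above can be sketched with plain NumPy. This is a minimal illustration, not the app's actual code (`blend_frames` is a hypothetical helper; the real app may use something like `cv2.addWeighted`):

```python
import numpy as np

def blend_frames(swapped: np.ndarray, original: np.ndarray, opacity: float) -> np.ndarray:
    """Blend the swapped frame over the original webcam frame.
    opacity=1.0 shows only the swap, opacity=0.0 shows only the original."""
    opacity = min(max(opacity, 0.0), 1.0)  # clamp to the documented 0.0..1.0 range
    out = swapped.astype(np.float32) * opacity + original.astype(np.float32) * (1.0 - opacity)
    return out.astype(np.uint8)

# A tiny 2x2 "frame": at opacity 0.5 every channel is the average of both inputs.
a = np.full((2, 2, 3), 200, dtype=np.uint8)
b = np.full((2, 2, 3), 100, dtype=np.uint8)
half = blend_frames(a, b, 0.5)  # every channel becomes 150
```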
For most people, this is the best workflow:
- Install Python 3.11.
- Install OBS Studio if you want the swapped video inside Google Meet or another webcam app.
- Put one face image into `uploads/`.
- Run the Python setup once.
- Start the Python app in virtual camera mode.
- In Google Meet, choose `OBS Virtual Camera` instead of your real webcam.
Requirements:
- Windows
- Python 3.11
- A webcam
- PowerShell
- OBS Studio if you want a virtual camera for Google Meet, Zoom, Teams, or similar apps
Optional but useful:
- A GPU with a supported ONNX Runtime provider for lower latency
- A clean portrait image in `uploads/`
If you only want the local preview window, you can skip OBS Studio.
If you want the swapped output to appear as a camera inside Google Meet:
- Download and install OBS Studio from https://obsproject.com/download.
- Finish the installation normally.
- Fully close Chrome, Edge, and Meet after installation.
- Reopen them only after the LocalFaceSwap virtual camera is running.
LocalFaceSwap reads your real webcam and publishes the processed result to a virtual camera. Because of that, Google Meet should use OBS Virtual Camera, not your physical camera.
Open PowerShell in the project folder and run:
```powershell
.\setup-python.ps1
```

This does the first-time Python setup:
- creates `.venv/`
- upgrades `pip`
- installs all Python packages
- keeps the environment ready for later runs
On the first actual launch, the app will also download:
- the `inswapper_128.onnx` face-swap model into `models/`
- the InsightFace `buffalo_l` analysis models into `models/buffalo_l/`
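If you want to know before launch whether the first-run download will happen, a pre-flight check like this can tell you which model files are still missing. This is an illustrative helper, not part of the app:

```python
from pathlib import Path

# Hypothetical pre-flight check: report which expected model files are missing
# from models/, so you know whether first launch will trigger a download.
def missing_models(models_dir: str = "models") -> list[str]:
    root = Path(models_dir)
    expected = ["inswapper_128.onnx", "buffalo_l"]
    return [name for name in expected if not (root / name).exists()]
```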
If you want to set everything up and immediately start the virtual camera:
```powershell
.\setup-python.ps1 -Run -VirtualCamera -NoPreview
```

What to expect:
- the app loads the newest source image from `uploads/`
- the app opens your real webcam
- the app starts the swap pipeline
- the app publishes the result to `OBS Virtual Camera`
- the terminal should print a line like:

```text
[python-swap] Virtual camera ready: OBS Virtual Camera
```
After the first setup, these are the main commands you will use.
Run with a local preview window:
```powershell
.\.venv\Scripts\python.exe .\python\live_face_swap.py
```

Run with a local preview and virtual camera:

```powershell
.\setup-python.ps1 -Run -VirtualCamera
```

Run in the background for Google Meet or Zoom without a local preview window:

```powershell
.\setup-python.ps1 -Run -VirtualCamera -NoPreview
```

Equivalent direct command:

```powershell
.\.venv\Scripts\python.exe .\python\live_face_swap.py --virtual-camera --virtual-camera-backend=obs --no-preview
```

Use this order each time:
- Put your source image into `uploads/`.
- Start LocalFaceSwap:

```powershell
.\setup-python.ps1 -Run -VirtualCamera -NoPreview
```

- Wait for:

```text
[python-swap] Virtual camera ready: OBS Virtual Camera
```

- Open Google Meet after that.
- In Meet, choose `OBS Virtual Camera` as the camera.
- Do not choose `HP FHD Camera` or your physical webcam.
If Meet only shows the physical webcam:
- Fully close Chrome or Edge.
- Start LocalFaceSwap first.
- Reopen the browser.
- Recheck the camera list in Meet.
If needed, join the meeting first, then open:
More options -> Settings -> Video
Meet sometimes refreshes the device list better there.
- Auto-watches the `uploads/` folder
- Uses the newest supported image automatically
- Detects the best face in the source image
- Detects faces in the live webcam feed
- Swaps the source identity onto the live face
- Keeps live pose and expression from the webcam frame
- Supports local preview mode
- Supports background virtual camera mode
- Supports OBS or Unity Capture backends for virtual camera output
- Supports `auto`, `cpu`, `cuda`, `directml`, `openvino`, `coreml`, and `tensorrt` provider selection when available
- Supports single-face or all-face swap
- Supports adjustable opacity from `0.0` to `1.0`
- Supports mirrored or non-mirrored local preview
- Supports camera index selection
- Supports custom capture resolution, FPS, and detector input size
Setup only:
```powershell
.\setup-python.ps1
```

Setup and run with preview:

```powershell
.\setup-python.ps1 -Run
```

Setup and run with OBS virtual camera:

```powershell
.\setup-python.ps1 -Run -VirtualCamera
```

Setup and run with OBS virtual camera in background:

```powershell
.\setup-python.ps1 -Run -VirtualCamera -NoPreview
```

Python flags and defaults:

- `--uploads-dir=uploads`
- `--camera=0`
- `--width=640`
- `--height=480`
- `--fps=30`
- `--det-size=640`
- `--execution-provider=auto|cpu|cuda|directml|openvino|coreml|tensorrt`
- `--backend=dshow|any`
- `--virtual-camera`
- `--virtual-camera-backend=auto|obs|unitycapture`
- `--virtual-camera-device=<exact device name>`
- `--preview`
- `--no-preview`
- `--mirror`
- `--no-mirror`
- `--swap-all-faces`
- `--opacity=1.0`
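As a rough sketch, a subset of these flags could be wired up with `argparse` like this. The real parser in `live_face_swap.py` may be structured differently; this only illustrates the flag shapes:

```python
import argparse

# Illustrative parser for a few of the documented flags; defaults follow the README.
def build_parser() -> argparse.ArgumentParser:
    p = argparse.ArgumentParser(prog="live_face_swap")
    p.add_argument("--uploads-dir", default="uploads")
    p.add_argument("--camera", type=int, default=0)
    p.add_argument("--width", type=int, default=640)
    p.add_argument("--height", type=int, default=480)
    p.add_argument("--fps", type=int, default=30)
    p.add_argument("--det-size", type=int, default=640)
    p.add_argument("--execution-provider", default="auto",
                   choices=["auto", "cpu", "cuda", "directml",
                            "openvino", "coreml", "tensorrt"])
    p.add_argument("--virtual-camera", action="store_true")
    p.add_argument("--no-preview", action="store_true")
    p.add_argument("--swap-all-faces", action="store_true")
    p.add_argument("--opacity", type=float, default=1.0)
    return p

# Mirrors the background-mode invocation shown above.
args = build_parser().parse_args(["--virtual-camera", "--no-preview", "--det-size=320"])
```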
These keys are available when the local preview window is enabled:
- `q` quit
- `r` rescan `uploads/` immediately
- `m` toggle preview mirror
Use an uploaded image that:
- shows one face clearly
- is reasonably front-facing
- has good lighting
- is not heavily blurred
- does not contain multiple competing faces
- is tightly framed around the face and upper head area when possible
Good source images usually improve results more than any code setting.
If the app feels slow, start with these:
```powershell
.\.venv\Scripts\python.exe .\python\live_face_swap.py --width=320 --height=240 --fps=24 --det-size=320 --virtual-camera --no-preview
```

For smoother runtime:
- keep capture size at `640x480` or lower
- reduce `--det-size` to `320`
- reduce `--fps` to `24`
- keep `--swap-all-faces` off
- use `--no-preview` when you only need Meet or Zoom output
- use `directml` or another hardware provider if available
Example with DirectML:
```powershell
.\.venv\Scripts\python.exe .\python\live_face_swap.py --execution-provider=directml --virtual-camera --no-preview --width=640 --height=480 --fps=24 --det-size=320
```

The Python app works like this:
- It watches the `uploads/` folder and finds the newest supported image.
- It reads that image and detects the best face inside it.
- It opens the webcam with a latest-frame-only capture loop to reduce backlog.
- For each fresh frame, it detects the live face or faces.
- It runs the `inswapper_128.onnx` model to place the uploaded identity onto the live face.
- It optionally blends the result with the original frame using the configured opacity.
- It either shows the result in a local OpenCV preview window, sends it to a virtual camera, or both.
- Google Meet or Zoom reads the virtual camera instead of the physical webcam.
Important behavior:
- The real webcam is owned by LocalFaceSwap.
- Meet should read the virtual camera output, not the real webcam.
- The source image can be changed by dropping a newer image into `uploads/`.
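The "latest-frame-only capture loop" mentioned in the pipeline can be sketched as a reader thread that keeps overwriting a single slot, so the swap pipeline always gets the freshest frame instead of working through a backlog. This is an illustration under stated assumptions, not the app's actual code; `camera` stands in for any object with a `read()` method such as `cv2.VideoCapture`:

```python
import threading

class LatestFrame:
    """Single-slot frame holder: older, unprocessed frames are simply dropped."""
    def __init__(self):
        self._lock = threading.Lock()
        self._frame = None

    def put(self, frame):
        with self._lock:
            self._frame = frame  # overwrite, never queue

    def get(self):
        with self._lock:
            return self._frame

def capture_loop(camera, slot: LatestFrame, stop: threading.Event):
    # Runs on its own thread: read as fast as the camera allows, keep only the newest.
    while not stop.is_set():
        ok, frame = camera.read()
        if ok:
            slot.put(frame)

# Simulate five frames arriving faster than the swap pipeline can process them:
slot = LatestFrame()
for n in range(5):
    slot.put(n)
latest = slot.get()  # only the newest frame (4) remains
```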
OpenCV is used for:
- webcam capture
- image loading
- preview rendering
- text overlay
- frame transforms such as mirroring
- frame blending
Why it is used:
- it gives a simple and fast camera plus image-processing layer
- it is the easiest way to manage preview windows on Windows for this project
NumPy is used for:
- frame storage
- array operations
- efficient image data movement between OpenCV, InsightFace, and pyvirtualcam
Why it is used:
- almost every image-processing library in this stack expects NumPy arrays
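As one concrete example of that data movement: OpenCV delivers frames as BGR `uint8` arrays, while pyvirtualcam expects RGB by default, so the hand-off is typically a cheap channel reversal. A minimal sketch (not the app's actual code):

```python
import numpy as np

# A tiny stand-in for a webcam frame: pure blue in OpenCV's BGR channel order.
bgr = np.zeros((2, 2, 3), dtype=np.uint8)
bgr[..., 0] = 255

# Reverse the channel axis to get RGB; ascontiguousarray forces a real,
# contiguous copy, which some consumers require instead of a strided view.
rgb = np.ascontiguousarray(bgr[:, :, ::-1])
```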
ONNX Runtime is used for:
- running the ONNX models used by the face analysis and swap pipeline
Why it is used:
- it supports CPU and multiple hardware backends
- it is a lightweight inference runtime compared to a full training framework
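How a `--execution-provider` flag might map onto ONNX Runtime provider names could look like the sketch below. The provider name strings are ONNX Runtime's real identifiers, but the selection logic itself is hypothetical; the app's actual logic may differ:

```python
# Map the flag values from this README onto ONNX Runtime provider names,
# always keeping CPUExecutionProvider as a fallback.
_PROVIDER_NAMES = {
    "cuda": "CUDAExecutionProvider",
    "directml": "DmlExecutionProvider",
    "openvino": "OpenVINOExecutionProvider",
    "coreml": "CoreMLExecutionProvider",
    "tensorrt": "TensorrtExecutionProvider",
}

def resolve_providers(flag: str, available: list[str]) -> list[str]:
    """Return an ordered provider list suitable for onnxruntime.InferenceSession."""
    if flag == "auto":
        # Prefer any available hardware provider, then CPU.
        hw = [p for p in _PROVIDER_NAMES.values() if p in available]
        return hw + ["CPUExecutionProvider"]
    wanted = _PROVIDER_NAMES.get(flag)
    if wanted and wanted in available:
        return [wanted, "CPUExecutionProvider"]
    return ["CPUExecutionProvider"]
```

In the real app, `available` would come from `onnxruntime.get_available_providers()`.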
InsightFace is used for:
- face detection
- face recognition embeddings
- landmark extraction
- loading the `inswapper_128.onnx` model
Why it is used:
- it provides the full face analysis plus swap flow needed for a Deep-Live-Cam-style result
pyvirtualcam is used for:
- sending processed frames to a Windows virtual camera device such as `OBS Virtual Camera`
Why it is used:
- it is the bridge that lets Google Meet, Zoom, and Teams see the processed video as a normal camera
Project layout:

- `python/live_face_swap.py`: Python live face-swap app
- `python/requirements.txt`: Python dependencies
- `setup-python.ps1`: PowerShell helper for Python setup and run
- `uploads/`: drop source images here
- `models/`: downloaded swap and analysis models
- `native/src/main.cpp`: native C++ lightweight app
- `build-native.ps1`: one-command native build helper
- `CMakeLists.txt`: native build definition
- `vcpkg.json`: native dependency manifest
- `vcpkg-triplets/x64-windows-release.cmake`: native release triplet
The native app is still included for a lighter local effect path.
Use the C++ app when:
- you want a simpler native executable
- you care more about lightweight local performance than neural realism
- you want a non-Python fallback
Do not expect the same identity transfer quality as the Python app. The C++ version is not a full Deep-Live-Cam neural port.
The C++ app:
- opens the webcam
- tracks the face locally
- watches the `uploads/` folder
- auto-loads the newest uploaded image
- supports face and avatar overlay modes
- supports lightweight runtime controls for scale and offset
- stays smoother on weaker systems than the model-based Python path
First build:
```powershell
.\build-native.ps1
```

Build and run:

```powershell
.\build-native.ps1 -Run
```

Direct run:

```powershell
.\build\native\local_face_filter.exe
```

Lower-load native example:

```powershell
.\build\native\local_face_filter.exe --fps=24 --detect-interval=8 --overlay=face
```

Native hotkeys:

- `q` quit
- `space` lock face using the guide box
- `c` clear the current lock
- `r` rescan `uploads/`
- `1` hat overlay
- `2` glasses overlay
- `3` uploaded face swap
- `4` uploaded avatar swap
- `[` shrink avatar
- `]` enlarge avatar
- `i` move avatar up
- `k` move avatar down
- `j` move avatar left
- `l` move avatar right
- `d` toggle debug box
Native flags:

- `--camera=<index>`
- `--width=<pixels>`
- `--height=<pixels>`
- `--fps=<value>`
- `--detect-interval=<frames>`
- `--overlay=face|avatar|hat|glasses`
- `--backend=dshow|any`
- `--mirror=true|false`
- `--debug=true|false`
- `--uploads-dir=<path>`
The C++ path stays lighter because it:
- uses a latest-frame-only capture loop
- relies on local tracking more than heavy neural inference
- renders a lightweight 2D effect instead of running the full model-based swap pipeline
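The `--detect-interval` idea behind that tracking-over-inference trade-off can be sketched as: run the expensive detector only on every N-th frame and fall back to cheap tracking updates in between. The scheduling below is illustrative (written in Python for readability, even though the native app is C++), and the "detect"/"track" steps are placeholders:

```python
# Run full detection every detect_interval frames; track on the frames between.
def plan_frames(total_frames: int, detect_interval: int) -> list[str]:
    plan = []
    for i in range(total_frames):
        if i % detect_interval == 0:
            plan.append("detect")   # full face detection on this frame
        else:
            plan.append("track")    # lightweight tracking update only
    return plan

# With --detect-interval=4, detection runs on frames 0 and 4 out of eight.
schedule = plan_frames(8, 4)
```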
The Python app downloads models automatically if they are missing.
Files involved:
- `models/inswapper_128.onnx`
- `models/buffalo_l/...`
This means the first launch is slower than later launches.
The Python app polls `uploads/` periodically and always prefers the newest matching file. If you add a new file, it becomes the new source image automatically.
- Preview mode shows a local OpenCV window
- Virtual camera mode publishes frames to OBS Virtual Camera or Unity Capture
- You can use both at the same time
`--no-preview` is useful when you want the app to run quietly in the background
If the app logs `CPUExecutionProvider`, it is running fully on the CPU. That works, but latency will be higher.
If your system supports it, try a hardware execution provider such as `directml` on Windows with an Intel or AMD GPU.
Try this order:
- Close Meet and fully close the browser.
- Start LocalFaceSwap first with:
```powershell
.\setup-python.ps1 -Run -VirtualCamera -NoPreview
```

- Wait for:

```text
[python-swap] Virtual camera ready: OBS Virtual Camera
```
- Reopen the browser and Meet.
- Check the camera list again.
If still needed:
- Join the meeting first.
- Open More options -> Settings -> Video.
- Select `OBS Virtual Camera` there.
If the device still does not appear, reboot Windows once and try again.
That usually means Meet is trying to use the physical webcam while LocalFaceSwap already owns it.
Fix:
- keep LocalFaceSwap running
- switch Meet from the physical webcam to `OBS Virtual Camera`
Try:
- `--width=320 --height=240`
- `--fps=24`
- `--det-size=320`
- `--no-preview`
- `--execution-provider=directml` if supported
Use an uploaded image with:
- one clear face
- good lighting
- minimal background clutter
The app picks the best face in the uploaded image, so crowded photos are a bad source.
Try a different source photo with:
- a larger face
- brighter lighting
- less blur
- less extreme head angle
If you want the raw CMake flow for the native app:
```powershell
$cmake = "C:\Program Files (x86)\Microsoft Visual Studio\18\BuildTools\Common7\IDE\CommonExtensions\Microsoft\CMake\CMake\bin\cmake.exe"
$toolchain = "C:\Program Files (x86)\Microsoft Visual Studio\18\BuildTools\VC\vcpkg\scripts\buildsystems\vcpkg.cmake"
$ninja = "C:\Program Files (x86)\Microsoft Visual Studio\18\BuildTools\Common7\IDE\CommonExtensions\Microsoft\CMake\Ninja\ninja.exe"
$triplets = (Resolve-Path ".\vcpkg-triplets").Path
& $cmake -S . -B build\native -G Ninja -DCMAKE_BUILD_TYPE=Release -DCMAKE_MAKE_PROGRAM="$ninja" -DCMAKE_TOOLCHAIN_FILE="$toolchain" -DVCPKG_OVERLAY_TRIPLETS="$triplets" -DVCPKG_TARGET_TRIPLET=x64-windows-release
& $cmake --build build\native --config Release
```

- The Python app is the real face-swap path, but it still depends on source image quality and model limitations.
- Latency depends heavily on whether you are running on CPU or a hardware execution provider.
- Google Meet and Chrome sometimes cache device lists, so virtual camera visibility can require a full browser restart.
- The native C++ app is intentionally lighter and does not match the Python app's neural swap quality.