Merged
47 commits
85ed8c6
docs: fix image address (#22067)
ZhaiSoul Feb 21, 2026
c9be98f
docs: fix hailo setup numbering error (#22066)
ZhaiSoul Feb 21, 2026
4d51f7a
Fix script for downloading RF-DETR (#22083)
f0ff886f Feb 22, 2026
a7d8d13
docs: Add frame selection and clean copy details to snapshots docs (#…
webmasterkai Feb 23, 2026
984d654
Update line breaks in video_pipeline.md diagram (#21919)
GrumpyMeow Feb 23, 2026
a6e11a5
docs: Add detail to face recognition MQTT update docs (#21942)
webmasterkai Feb 23, 2026
352d271
Update HA docs with MQTT example (#22098)
hawkeye217 Feb 23, 2026
dd8282f
Docs: fix YOLOv9 onnx export (#22107)
tremby Feb 24, 2026
a0d6cb5
Docs updates (#22131)
hawkeye217 Feb 26, 2026
7df3622
updates for yolov9 coral support (#22136)
blakeblackshear Feb 27, 2026
0310a96
Merge pull request #19787 from blakeblackshear/dev
blakeblackshear Feb 27, 2026
96c70ee
fix link to coral yolov9 plus models (#22164)
hawkeye217 Feb 27, 2026
e064024
Fix go2rtc stream alias auth (#22097)
hawkeye217 Feb 28, 2026
c687aa5
Birdseye fixes (#22166)
hawkeye217 Feb 28, 2026
720e949
Fix genai (#22203)
NickM-27 Mar 2, 2026
0dd1e94
update docs for avx cpu system requirements (#22222)
hawkeye217 Mar 3, 2026
d311974
fix menu display conditions (#22237)
hawkeye217 Mar 4, 2026
8c67704
docs: updated the guides detectors section (#22241)
ZhaiSoul Mar 4, 2026
c338533
fix ordering of points in planning setup docs (#22251)
hawkeye217 Mar 4, 2026
ab3cef8
adopt official HA language, change add-on to app (#22258)
hawkeye217 Mar 5, 2026
d1f3a80
call out avx2 requirement (#22305)
hawkeye217 Mar 7, 2026
f316244
Improve playback of videos in Tracking Details (#22301)
hawkeye217 Mar 7, 2026
537e723
Fix/rknn arcface input format master (#22319)
begetan Mar 8, 2026
4e71a83
Fix broken link to Home Assistant apps page (#22320)
ARandomGitHubUser Mar 8, 2026
bde518e
Fix preview retrieval to handle missing previews gracefully (#22331)
hawkeye217 Mar 8, 2026
b6f78bd
fix thumbnail encoding logic (#22329)
hawkeye217 Mar 8, 2026
41e2904
Environment variable fixes (#22335)
hawkeye217 Mar 8, 2026
c4a5ac0
fix go2rtc homekit handling (#22346)
hawkeye217 Mar 9, 2026
1188d87
Save detect dimensions to config on add camera wizard save (#22349)
hawkeye217 Mar 9, 2026
1948086
docs: add highlight magic comments (#22367)
ZhaiSoul Mar 10, 2026
59fc844
Various Fixes (#22376)
NickM-27 Mar 10, 2026
104e623
Filter push notifications by user role camera access (#22385)
hawkeye217 Mar 11, 2026
544d3c6
keep nav buttons visible (#22384)
hawkeye217 Mar 11, 2026
f29ee53
Add handler for license plate which is not expected to be stationary …
NickM-27 Mar 13, 2026
614a6b3
consistently sort class names (#22415)
hawkeye217 Mar 13, 2026
d2b2faa
Update dev contrib docs with Python checks (#22419)
leccelecce Mar 13, 2026
b147b53
add padding to dropdown text (#22420)
hawkeye217 Mar 13, 2026
be79ad8
hide set password menu option when native auth is disabled (#22439)
hawkeye217 Mar 14, 2026
8b035be
disable pip for animated event cards (#22438)
hawkeye217 Mar 14, 2026
3ec2305
sync Tracking Details timeline with keyframe-snapped vod clip start (…
hawkeye217 Mar 15, 2026
65ca90d
Add Strix to third party extensions (#22488)
eduard256 Mar 16, 2026
d4731c1
don't try to run cleanup if frigate is in safe mode (#22492)
hawkeye217 Mar 16, 2026
01c16a9
check for config update before state evaluation (#22495)
hawkeye217 Mar 16, 2026
ae9b307
docs: remove onvif host environment variable (#22517)
ZhaiSoul Mar 18, 2026
e78da27
Restrict /api/config/raw to admin role to prevent credential leak to …
hawkeye217 Mar 18, 2026
d11c269
Fix cross-camera auth in timeline and media endpoints (#22522)
hawkeye217 Mar 19, 2026
416a9b7
Validate preview filename and camera access (#22530)
hawkeye217 Mar 19, 2026
2 changes: 1 addition & 1 deletion Makefile
@@ -1,7 +1,7 @@
default_target: local

COMMIT_HASH := $(shell git log -1 --pretty=format:"%h"|tail -1)
-VERSION = 0.17.0
+VERSION = 0.17.1
IMAGE_REPO ?= ghcr.io/blakeblackshear/frigate
GITHUB_REF_NAME ?= $(shell git rev-parse --abbrev-ref HEAD)
BOARDS= #Initialized empty
32 changes: 17 additions & 15 deletions docker/main/rootfs/etc/s6-overlay/s6-rc.d/go2rtc/run
@@ -55,7 +55,7 @@ function setup_homekit_config() {

if [[ ! -f "${config_path}" ]]; then
echo "[INFO] Creating empty config file for HomeKit..."
-echo '{}' > "${config_path}"
+: > "${config_path}"
fi

# Convert YAML to JSON for jq processing
@@ -65,23 +65,25 @@ function setup_homekit_config() {
return 0
}

-# Use jq to filter and keep only the homekit section
-local cleaned_json="/tmp/cache/homekit_cleaned.json"
-jq '
-# Keep only the homekit section if it exists, otherwise empty object
-if has("homekit") then {homekit: .homekit} else {} end
-' "${temp_json}" > "${cleaned_json}" 2>/dev/null || {
-echo '{}' > "${cleaned_json}"
-}
+# Use jq to extract the homekit section, if it exists
+local homekit_json
+homekit_json=$(jq '
+if has("homekit") then {homekit: .homekit} else null end
+' "${temp_json}" 2>/dev/null) || homekit_json="null"

-# Convert back to YAML and write to the config file
-yq eval -P "${cleaned_json}" > "${config_path}" 2>/dev/null || {
-echo "[WARNING] Failed to convert cleaned config to YAML, creating minimal config"
-echo '{}' > "${config_path}"
-}
+# If no homekit section, write an empty config file
+if [[ "${homekit_json}" == "null" ]]; then
+: > "${config_path}"
+else
+# Convert homekit JSON back to YAML and write to the config file
+echo "${homekit_json}" | yq eval -P - > "${config_path}" 2>/dev/null || {
+echo "[WARNING] Failed to convert cleaned config to YAML, creating minimal config"
+: > "${config_path}"
+}
+fi

# Clean up temp files
-rm -f "${temp_json}" "${cleaned_json}"
+rm -f "${temp_json}"
}

set_libva_version
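The patched script above boils down to an extract-or-fallback pattern: pull one section out of a document, fall back to `null` on any failure, and truncate the config file when nothing was extracted. A dependency-free sketch of that pattern (grep stands in for jq here, and the sample JSON and temp paths are illustrative only):

```shell
#!/bin/sh
# Sketch of the extract-or-fallback pattern from the patched run script.
# grep stands in for jq so the sketch has no dependencies; the JSON and
# file paths are illustrative only.
temp_json=$(mktemp)
config_path=$(mktemp)

printf '{"homekit": {"pairings": []}, "streams": {}}\n' > "${temp_json}"

# Extract the homekit section; on any failure fall back to "null",
# mirroring `|| homekit_json="null"` in the patch.
homekit_json=$(grep -o '"homekit": {[^}]*}' "${temp_json}") || homekit_json="null"

if [ "${homekit_json}" = "null" ]; then
    : > "${config_path}"   # truncate to an empty file, as the patch does
else
    printf '%s\n' "${homekit_json}" > "${config_path}"
fi

cat "${config_path}"
rm -f "${temp_json}" "${config_path}"
# → "homekit": {"pairings": []}
```

The `: > file` idiom the patch introduces truncates a file to zero bytes, which is what go2rtc wants when there is no homekit section, instead of the old `echo '{}'` which left literal JSON in a YAML file.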
12 changes: 10 additions & 2 deletions docs/docs/configuration/advanced.md
@@ -44,13 +44,21 @@ go2rtc:

### `environment_vars`

-This section can be used to set environment variables for those unable to modify the environment of the container, like within Home Assistant OS.
+This section can be used to set environment variables for those unable to modify the environment of the container, like within Home Assistant OS. Docker users should set environment variables in their `docker run` command (`-e FRIGATE_MQTT_PASSWORD=secret`) or `docker-compose.yml` file (`environment:` section) instead. Note that values set here are stored in plain text in your config file, so if the goal is to keep credentials out of your configuration, use Docker environment variables or Docker secrets instead.

+Variables prefixed with `FRIGATE_` can be referenced in config fields that support environment variable substitution (such as MQTT host and credentials, camera stream URLs, and ONVIF host and credentials) using the `{FRIGATE_VARIABLE_NAME}` syntax.

Example:

```yaml
environment_vars:
-  VARIABLE_NAME: variable_value
+  FRIGATE_MQTT_USER: my_mqtt_user
+  FRIGATE_MQTT_PASSWORD: my_mqtt_password

+mqtt:
+  host: "{FRIGATE_MQTT_HOST}"
+  user: "{FRIGATE_MQTT_USER}"
+  password: "{FRIGATE_MQTT_PASSWORD}"
```

#### TensorFlow Thread Configuration
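A minimal sketch of how the `{FRIGATE_VARIABLE_NAME}` syntax documented above behaves. sed stands in for Frigate's internal substitution logic, so this illustrates the syntax only, not Frigate's implementation:

```shell
#!/bin/sh
# Sketch of {FRIGATE_*} placeholder substitution as described above.
# sed stands in for Frigate's internal substitution logic; this is an
# illustration of the syntax, not Frigate's implementation.
FRIGATE_MQTT_USER="my_mqtt_user"
FRIGATE_MQTT_PASSWORD="my_mqtt_password"

line='user: "{FRIGATE_MQTT_USER}" password: "{FRIGATE_MQTT_PASSWORD}"'

resolved=$(printf '%s' "${line}" \
  | sed -e "s/{FRIGATE_MQTT_USER}/${FRIGATE_MQTT_USER}/" \
        -e "s/{FRIGATE_MQTT_PASSWORD}/${FRIGATE_MQTT_PASSWORD}/")

printf '%s\n' "${resolved}"
# → user: "my_mqtt_user" password: "my_mqtt_password"
```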
4 changes: 2 additions & 2 deletions docs/docs/configuration/authentication.md
@@ -86,7 +86,7 @@ Frigate looks for a JWT token secret in the following order:

1. An environment variable named `FRIGATE_JWT_SECRET`
2. A file named `FRIGATE_JWT_SECRET` in the directory specified by the `CREDENTIALS_DIRECTORY` environment variable (defaults to the Docker Secrets directory: `/run/secrets/`)
-3. A `jwt_secret` option from the Home Assistant Add-on options
+3. A `jwt_secret` option from the Home Assistant App options
4. A `.jwt_secret` file in the config directory

If no secret is found on startup, Frigate generates one and stores it in a `.jwt_secret` file in the config directory.
@@ -232,7 +232,7 @@ The viewer role provides read-only access to all cameras in the UI and API. Cust

### Role Configuration Example

-```yaml
+```yaml {11-16}
cameras:
front_door:
# ... camera config
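The four-step secret lookup order documented above can be sketched as a shell function. This is illustrative only (step 3, the Home Assistant App options, is stubbed out), not Frigate's actual code:

```shell
#!/bin/sh
# Illustrative sketch of the documented JWT secret lookup order.
find_jwt_secret() {
    config_dir="$1"
    # 1. Environment variable named FRIGATE_JWT_SECRET
    if [ -n "${FRIGATE_JWT_SECRET}" ]; then
        printf '%s\n' "${FRIGATE_JWT_SECRET}" && return 0
    fi
    # 2. File named FRIGATE_JWT_SECRET in CREDENTIALS_DIRECTORY
    #    (defaults to the Docker Secrets directory /run/secrets/)
    creds_dir="${CREDENTIALS_DIRECTORY:-/run/secrets}"
    if [ -f "${creds_dir}/FRIGATE_JWT_SECRET" ]; then
        cat "${creds_dir}/FRIGATE_JWT_SECRET" && return 0
    fi
    # 3. Home Assistant App options would be consulted here (omitted).
    # 4. A .jwt_secret file in the config directory
    if [ -f "${config_dir}/.jwt_secret" ]; then
        cat "${config_dir}/.jwt_secret" && return 0
    fi
    return 1   # Frigate would generate and store a new secret at this point
}

FRIGATE_JWT_SECRET="from-env"
find_jwt_secret /config
# → from-env
```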
9 changes: 6 additions & 3 deletions docs/docs/configuration/birdseye.md
@@ -24,7 +24,7 @@ A custom icon can be added to the birdseye background by providing a 180x180 ima

If you want to include a camera in Birdseye view only for specific circumstances, or just don't include it at all, the Birdseye setting can be set at the camera level.

-```yaml
+```yaml {8-10,12-14}
# Include all cameras by default in Birdseye view
birdseye:
  enabled: True
@@ -48,6 +48,7 @@ By default birdseye shows all cameras that have had the configured activity in t
```yaml
birdseye:
  enabled: True
  # highlight-next-line
  inactivity_threshold: 15
```

@@ -78,9 +79,11 @@ birdseye:
cameras:
  front:
    birdseye:
      # highlight-next-line
      order: 1
  back:
    birdseye:
      # highlight-next-line
      order: 2
```

@@ -92,7 +95,7 @@ It is possible to limit the number of cameras shown on birdseye at one time. Whe

For example, this can be configured to only show the most recently active camera.

-```yaml
+```yaml {3-4}
birdseye:
  enabled: True
  layout:
@@ -103,7 +106,7 @@ birdseye:

By default birdseye tries to fit 2 cameras in each row and then double in size until a suitable layout is found. The scaling can be configured with a value between 1.0 and 5.0 depending on use case.

-```yaml
+```yaml {3-4}
birdseye:
  enabled: True
  layout:
6 changes: 4 additions & 2 deletions docs/docs/configuration/camera_specific.md
@@ -23,14 +23,15 @@ Some cameras support h265 with different formats, but Safari only supports the a
cameras:
  h265_cam: # <------ Doesn't matter what the camera is called
    ffmpeg:
      # highlight-next-line
      apple_compatibility: true # <- Adds compatibility with MacOS and iPhone
```

## MJPEG Cameras

Note that mjpeg cameras require encoding the video into h264 for recording, and restream roles. This will use significantly more CPU than if the cameras supported h264 feeds directly. It is recommended to use the restream role to create an h264 restream and then use that as the source for ffmpeg.

-```yaml
+```yaml {3,10}
go2rtc:
  streams:
    mjpeg_cam: "ffmpeg:http://your_mjpeg_stream_url#video=h264#hardware" # <- use hardware acceleration to create an h264 stream usable for other components.
@@ -96,6 +97,7 @@ This camera is H.265 only. To be able to play clips on some devices (like MacOs
cameras:
  annkec800: # <------ Name the camera
    ffmpeg:
      # highlight-next-line
      apple_compatibility: true # <- Adds compatibility with MacOS and iPhone
      output_args:
        record: preset-record-generic-audio-aac
@@ -274,7 +276,7 @@ To use a USB camera (webcam) with Frigate, the recommendation is to use go2rtc's

- In your Frigate Configuration File, add the go2rtc stream and roles as appropriate:

-```
+```yaml {4,11-12}
go2rtc:
  streams:
    usb_camera:
2 changes: 1 addition & 1 deletion docs/docs/configuration/cameras.md
@@ -66,7 +66,7 @@ Not every PTZ supports ONVIF, which is the standard protocol Frigate uses to com

Add the onvif section to your camera in your configuration file:

-```yaml
+```yaml {4-8}
cameras:
  back:
    ffmpeg: ...
@@ -7,11 +7,11 @@ Object classification allows you to train a custom MobileNetV2 classification mo

## Minimum System Requirements

-Object classification models are lightweight and run very fast on CPU. Inference should be usable on virtually any machine that can run Frigate.
+Object classification models are lightweight and run very fast on CPU.

Training the model does briefly use a high amount of system resources for about 1–3 minutes per training run. On lower-power devices, training may take longer.

-A CPU with AVX instructions is required for training and inference.
+A CPU with AVX + AVX2 instructions is required for training and inference.

## Classes

@@ -27,7 +27,6 @@ For object classification:
### Classification Type

- **Sub label**:

- Applied to the object’s `sub_label` field.
- Ideal for a single, more specific identity or type.
- Example: `cat` → `Leo`, `Charlie`, `None`.
@@ -119,6 +118,7 @@ Enable debug logs for classification models by adding `frigate.data_processing.r
logger:
  default: info
  logs:
    # highlight-next-line
    frigate.data_processing.real_time.custom_classification: debug
```

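The AVX/AVX2 requirement introduced above can be checked on Linux via `/proc/cpuinfo`. A sketch with a hardcoded, truncated sample flags line so it runs anywhere:

```shell
#!/bin/sh
# Check for the AVX/AVX2 CPU flags required for training and inference.
# On a real Linux host you would read the flags with:
#   flags=$(grep -m1 '^flags' /proc/cpuinfo)
# A truncated sample line is hardcoded here so the sketch runs anywhere.
flags="fpu vme de pse tsc sse sse2 avx avx2"

# Return 0 if word $2 appears in the space-separated list $1.
has_flag() {
    case " $1 " in
        *" $2 "*) return 0 ;;
        *)        return 1 ;;
    esac
}

if has_flag "${flags}" avx && has_flag "${flags}" avx2; then
    echo "AVX + AVX2 available"
else
    echo "AVX2 missing: classification will not run"
fi
# → AVX + AVX2 available
```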
@@ -7,11 +7,11 @@ State classification allows you to train a custom MobileNetV2 classification mod

## Minimum System Requirements

-State classification models are lightweight and run very fast on CPU. Inference should be usable on virtually any machine that can run Frigate.
+State classification models are lightweight and run very fast on CPU.

Training the model does briefly use a high amount of system resources for about 1–3 minutes per training run. On lower-power devices, training may take longer.

-A CPU with AVX instructions is required for training and inference.
+A CPU with AVX + AVX2 instructions is required for training and inference.

## Classes

@@ -85,6 +85,7 @@ Enable debug logs for classification models by adding `frigate.data_processing.r
logger:
  default: info
  logs:
    # highlight-next-line
    frigate.data_processing.real_time.custom_classification: debug
```

5 changes: 2 additions & 3 deletions docs/docs/configuration/face_recognition.md
@@ -32,6 +32,8 @@ All of these features run locally on your system.

## Minimum System Requirements

+A CPU with AVX + AVX2 instructions is required to run Face Recognition.

The `small` model is optimized for efficiency and runs on the CPU, most CPUs should run the model efficiently.

The `large` model is optimized for accuracy, an integrated or discrete GPU / NPU is required. See the [Hardware Accelerated Enrichments](/configuration/hardware_acceleration_enrichments.md) documentation.
@@ -143,17 +145,14 @@ Start with the [Usage](#usage) section and re-read the [Model Requirements](#mod
1. Ensure `person` is being _detected_. A `person` will automatically be scanned by Frigate for a face. Any detected faces will appear in the Recent Recognitions tab in the Frigate UI's Face Library.

If you are using a Frigate+ or `face` detecting model:

- Watch the debug view (Settings --> Debug) to ensure that `face` is being detected along with `person`.
- You may need to adjust the `min_score` for the `face` object if faces are not being detected.

If you are **not** using a Frigate+ or `face` detecting model:

- Check your `detect` stream resolution and ensure it is sufficiently high enough to capture face details on `person` objects.
- You may need to lower your `detection_threshold` if faces are not being detected.

2. Any detected faces will then be _recognized_.

- Make sure you have trained at least one face per the recommendations above.
- Adjust `recognition_threshold` settings per the suggestions [above](#advanced-configuration).

4 changes: 2 additions & 2 deletions docs/docs/configuration/genai/config.md
@@ -109,7 +109,7 @@ genai:
To use a different Gemini-compatible API endpoint, set the `provider_options` with the `base_url` key to your provider's API URL. For example:

-```
+```yaml {4,5}
genai:
  provider: gemini
  ...
@@ -152,7 +152,7 @@ To use a different OpenAI-compatible API endpoint, set the `OPENAI_BASE_URL` env

For OpenAI-compatible servers (such as llama.cpp) that don't expose the configured context size in the API response, you can manually specify the context size in `provider_options`:

-```yaml
+```yaml {5,6}
genai:
  provider: openai
  base_url: http://your-llama-server
5 changes: 3 additions & 2 deletions docs/docs/configuration/genai/review_summaries.md
@@ -80,6 +80,7 @@ By default, review summaries use preview images (cached preview frames) which ha
review:
  genai:
    enabled: true
    # highlight-next-line
    image_source: recordings # Options: "preview" (default) or "recordings"
```

@@ -104,7 +105,7 @@ If recordings are not available for a given time period, the system will automat

Along with the concern of suspicious activity or immediate threat, you may have concerns such as animals in your garden or a gate being left open. These concerns can be configured so that the review summaries will make note of them if the activity requires additional review. For example:

-```yaml
+```yaml {4,5}
review:
  genai:
    enabled: true
@@ -116,7 +117,7 @@ review:

By default, review summaries are generated in English. You can configure Frigate to generate summaries in your preferred language by setting the `preferred_language` option:

-```yaml
+```yaml {4}
review:
  genai:
    enabled: true
5 changes: 1 addition & 4 deletions docs/docs/configuration/hardware_acceleration_enrichments.md
@@ -12,23 +12,20 @@ Some of Frigate's enrichments can use a discrete GPU or integrated GPU for accel
Object detection and enrichments (like Semantic Search, Face Recognition, and License Plate Recognition) are independent features. To use a GPU / NPU for object detection, see the [Object Detectors](/configuration/object_detectors.md) documentation. If you want to use your GPU for any supported enrichments, you must choose the appropriate Frigate Docker image for your GPU / NPU and configure the enrichment according to its specific documentation.

- **AMD**

- ROCm support in the `-rocm` Frigate image is automatically detected for enrichments, but only some enrichment models are available due to ROCm's focus on LLMs and limited stability with certain neural network models. Frigate disables models that perform poorly or are unstable to ensure reliable operation, so only compatible enrichments may be active.

- **Intel**

- OpenVINO will automatically be detected and used for enrichments in the default Frigate image.
- **Note:** Intel NPUs have limited model support for enrichments. GPU is recommended for enrichments when available.

- **Nvidia**

- Nvidia GPUs will automatically be detected and used for enrichments in the `-tensorrt` Frigate image.
- Jetson devices will automatically be detected and used for enrichments in the `-tensorrt-jp6` Frigate image.

- **RockChip**
- RockChip NPU will automatically be detected and used for semantic search v1 and face recognition in the `-rk` Frigate image.

-Utilizing a GPU for enrichments does not require you to use the same GPU for object detection. For example, you can run the `tensorrt` Docker image for enrichments and still use other dedicated hardware like a Coral or Hailo for object detection. However, one combination that is not supported is TensorRT for object detection and OpenVINO for enrichments.
+Utilizing a GPU for enrichments does not require you to use the same GPU for object detection. For example, you can run the `tensorrt` Docker image to run enrichments on an Nvidia GPU and still use other dedicated hardware like a Coral or Hailo for object detection. However, one combination that is not supported is the `tensorrt` image for object detection on an Nvidia GPU and Intel iGPU for enrichments.

:::note
