Commits (70)
c9bd907
Frontend fixes (#22294)
hawkeye217 Mar 6, 2026
c2e667c
Add dynamic configuration for more fields (#22295)
hawkeye217 Mar 6, 2026
dda9f7b
apply filters after clustering (#22308)
hawkeye217 Mar 7, 2026
889dfca
Frontend fixes (#22309)
hawkeye217 Mar 7, 2026
acdfed4
Improve annotation offset UX (#22310)
hawkeye217 Mar 7, 2026
a705f25
Support using GenAI for embeddings / semantic search (#22323)
NickM-27 Mar 8, 2026
ef07563
Update onnx deps to support 50 series GPUs (#22324)
NickM-27 Mar 8, 2026
df27e04
Frontend updates (#22327)
hawkeye217 Mar 8, 2026
b2c7840
Refactor enrichment confg updater (#22325)
hawkeye217 Mar 8, 2026
e930492
Publish license plate box coordinates (#22337)
hawkeye217 Mar 8, 2026
dd9497b
Add ability to delete cameras (#22336)
hawkeye217 Mar 8, 2026
9cbd80d
Add motion previews filter (#22347)
hawkeye217 Mar 9, 2026
119137c
Update Intel Deps (#22351)
NickM-27 Mar 9, 2026
e8b9225
Recordings API and calendar UI performance improvements (#22352)
hawkeye217 Mar 9, 2026
5cec1be
Bump docker/bake-action from 6 to 7 (#22291)
dependabot[bot] Mar 10, 2026
6c47e83
Bump docker/build-push-action from 5 to 7 (#22292)
dependabot[bot] Mar 10, 2026
90ad521
Update translation files
weblate Mar 9, 2026
d88c0e2
Update translation files
weblate Mar 9, 2026
b405877
Update translation files
weblate Mar 9, 2026
7b2c75b
Update translation files
weblate Mar 9, 2026
68fe788
Update translation files
weblate Mar 9, 2026
e1ae326
Update translation files
weblate Mar 9, 2026
3dd584b
Translated using Weblate (German)
weblate Mar 9, 2026
aa5085c
Update translation files
weblate Mar 9, 2026
0a6c440
Added translation using Weblate (Greek)
weblate Mar 9, 2026
6007ec8
Added translation using Weblate (Estonian)
weblate Mar 9, 2026
ec49ccf
Added translation using Weblate (Russian)
weblate Mar 9, 2026
a2eabdb
Translated using Weblate (Romanian)
weblate Mar 9, 2026
0723380
Added translation using Weblate (Bulgarian)
weblate Mar 9, 2026
5f75289
Added translation using Weblate (Ukrainian)
weblate Mar 9, 2026
e6405a2
Added translation using Weblate (Japanese)
weblate Mar 9, 2026
680361c
Translated using Weblate (Catalan)
weblate Mar 9, 2026
5de553c
Added translation using Weblate (Czech)
weblate Mar 9, 2026
fe22d99
Added translation using Weblate (Portuguese)
weblate Mar 9, 2026
acbec0b
Added translation using Weblate (Vietnamese)
weblate Mar 9, 2026
6291107
Added translation using Weblate (Icelandic)
weblate Mar 9, 2026
f79e07d
Added translation using Weblate (Croatian)
weblate Mar 9, 2026
d838d13
Added translation using Weblate (Hungarian)
weblate Mar 9, 2026
bfb275f
Added translation using Weblate (Hindi)
weblate Mar 9, 2026
7f061ba
Added translation using Weblate (Hebrew)
weblate Mar 9, 2026
d8d5f1c
Added translation using Weblate (Malayalam)
weblate Mar 9, 2026
4c5124e
Added translation using Weblate (Polish)
weblate Mar 9, 2026
34b5e3c
Added translation using Weblate (Italian)
weblate Mar 9, 2026
983ee58
Added translation using Weblate (Arabic)
weblate Mar 9, 2026
9d3bc82
Added translation using Weblate (Indonesian)
weblate Mar 9, 2026
00fd1cc
Translated using Weblate (Dutch)
weblate Mar 9, 2026
665fc76
Translated using Weblate (Spanish)
weblate Mar 9, 2026
36efcf9
Translated using Weblate (French)
weblate Mar 9, 2026
f6f4c67
Added translation using Weblate (Swedish)
weblate Mar 9, 2026
dbf9976
Added translation using Weblate (Persian)
weblate Mar 9, 2026
b9a35dc
Added translation using Weblate (Finnish)
weblate Mar 9, 2026
a8ea77b
Added translation using Weblate (Serbian)
weblate Mar 9, 2026
f0ecc5b
Added translation using Weblate (Albanian)
weblate Mar 9, 2026
c0c64fe
Added translation using Weblate (Korean)
weblate Mar 9, 2026
39c878b
Added translation using Weblate (Slovak)
weblate Mar 9, 2026
8d010c5
Added translation using Weblate (Slovenian)
weblate Mar 9, 2026
e5494bb
Added translation using Weblate (Urdu)
weblate Mar 9, 2026
29d071f
Added translation using Weblate (Uzbek)
weblate Mar 9, 2026
d568207
Added translation using Weblate (Chinese (Traditional Han script))
weblate Mar 9, 2026
c575e6b
Added translation using Weblate (Chinese (Simplified Han script))
weblate Mar 9, 2026
925cb04
Added translation using Weblate (Norwegian Bokmål)
weblate Mar 9, 2026
29ffcea
Translated using Weblate (Cantonese (Traditional Han script))
weblate Mar 9, 2026
e75b8ca
Update translation files
weblate Mar 9, 2026
5254bfd
Refactor Review GenAI Prompt (#22353)
NickM-27 Mar 10, 2026
c7b5193
feat: Initial AXERA detector
ZhaiSoul Mar 10, 2026
9dbff56
chore: update pip install URL for axengine package
ZhaiSoul Mar 10, 2026
e562381
Update docker/main/Dockerfile
ZhaiSoul Mar 10, 2026
7e6d15d
Update docs/docs/configuration/object_detectors.md
ZhaiSoul Mar 10, 2026
904e651
Update AXERA section in installation.md
ZhaiSoul Mar 10, 2026
a4dddde
Update axmodel download URL to Hugging Face
Mar 10, 2026
16 changes: 8 additions & 8 deletions .github/workflows/ci.yml
@@ -32,7 +32,7 @@ jobs:
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: Build and push amd64 standard build
uses: docker/build-push-action@v5
uses: docker/build-push-action@v7
with:
context: .
file: docker/main/Dockerfile
@@ -56,7 +56,7 @@ jobs:
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: Build and push arm64 standard build
uses: docker/build-push-action@v5
uses: docker/build-push-action@v7
with:
context: .
file: docker/main/Dockerfile
@@ -67,7 +67,7 @@ jobs:
${{ steps.setup.outputs.image-name }}-standard-arm64
cache-from: type=registry,ref=${{ steps.setup.outputs.cache-name }}-arm64
- name: Build and push RPi build
uses: docker/bake-action@v6
uses: docker/bake-action@v7
with:
source: .
push: true
@@ -96,7 +96,7 @@ jobs:
BASE_IMAGE: nvcr.io/nvidia/tensorrt:23.12-py3-igpu
SLIM_BASE: nvcr.io/nvidia/tensorrt:23.12-py3-igpu
TRT_BASE: nvcr.io/nvidia/tensorrt:23.12-py3-igpu
uses: docker/bake-action@v6
uses: docker/bake-action@v7
with:
source: .
push: true
@@ -124,7 +124,7 @@ jobs:
- name: Build and push TensorRT (x86 GPU)
env:
COMPUTE_LEVEL: "50 60 70 80 90"
uses: docker/bake-action@v6
uses: docker/bake-action@v7
with:
source: .
push: true
@@ -137,7 +137,7 @@ jobs:
- name: AMD/ROCm general build
env:
HSA_OVERRIDE: 0
uses: docker/bake-action@v6
uses: docker/bake-action@v7
with:
source: .
push: true
@@ -163,7 +163,7 @@ jobs:
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: Build and push Rockchip build
uses: docker/bake-action@v6
uses: docker/bake-action@v7
with:
source: .
push: true
@@ -188,7 +188,7 @@ jobs:
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: Build and push Synaptics build
uses: docker/bake-action@v6
uses: docker/bake-action@v7
with:
source: .
push: true
4 changes: 3 additions & 1 deletion .gitignore
@@ -3,6 +3,8 @@ __pycache__
.mypy_cache
*.swp
debug
.claude/*
.mcp.json
.vscode/*
!.vscode/launch.json
config/*
@@ -19,4 +21,4 @@ web/.env
core
!/web/**/*.ts
.idea/*
.ipynb_checkpoints
6 changes: 6 additions & 0 deletions docker/main/Dockerfile
@@ -266,6 +266,12 @@ RUN wget -q https://bootstrap.pypa.io/get-pip.py -O get-pip.py \
RUN --mount=type=bind,from=wheels,source=/wheels,target=/deps/wheels \
pip3 install -U /deps/wheels/*.whl

# Install Axera Engine
RUN pip3 install https://github.com/AXERA-TECH/pyaxengine/releases/download/0.1.3-frigate/axengine-0.1.3-py3-none-any.whl

Copilot AI Mar 10, 2026


The RUN pip3 install line downloads and executes a wheel directly from a third-party GitHub release URL without any integrity verification, which creates a supply-chain risk. If the AXERA-TECH/pyaxengine repository or its release assets were compromised or the tag reused, an attacker could replace the wheel with malicious code that would run during image build and inside the container. To reduce this risk, pin to an immutable, integrity-verified artifact (for example using a trusted package registry and checksum verification or vendoring the wheel into the build context) instead of a mutable GitHub URL.
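One way to address this, sketched below under the assumption that a trusted SHA-256 digest for the wheel is published out of band; the digest shown is a placeholder, not a real value:

```sh
# Sketch: download the wheel, verify it against a known digest, then install.
# <expected-sha256> is a placeholder; substitute the digest obtained from a
# trusted source for the 0.1.3-frigate release.
wget -q https://github.com/AXERA-TECH/pyaxengine/releases/download/0.1.3-frigate/axengine-0.1.3-py3-none-any.whl
echo "<expected-sha256>  axengine-0.1.3-py3-none-any.whl" | sha256sum -c -
pip3 install axengine-0.1.3-py3-none-any.whl
rm axengine-0.1.3-py3-none-any.whl
```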


ENV PATH="${PATH}:/usr/bin/axcl"
ENV LD_LIBRARY_PATH="${LD_LIBRARY_PATH}:/usr/lib/axcl"

# Install MemryX runtime (requires libgomp (OpenMP) in the final docker image)
RUN --mount=type=bind,source=docker/main/install_memryx.sh,target=/deps/install_memryx.sh \
bash -c "bash /deps/install_memryx.sh"
23 changes: 12 additions & 11 deletions docker/main/install_deps.sh
@@ -105,28 +105,29 @@ if [[ "${TARGETARCH}" == "amd64" ]]; then
# install legacy and standard intel icd and level-zero-gpu
# see https://github.com/intel/compute-runtime/blob/master/LEGACY_PLATFORMS.md for more info
# needed core package
wget https://github.com/intel/compute-runtime/releases/download/24.52.32224.5/libigdgmm12_22.5.5_amd64.deb
dpkg -i libigdgmm12_22.5.5_amd64.deb
rm libigdgmm12_22.5.5_amd64.deb
wget https://github.com/intel/compute-runtime/releases/download/25.13.33276.19/libigdgmm12_22.7.0_amd64.deb
dpkg -i libigdgmm12_22.7.0_amd64.deb
rm libigdgmm12_22.7.0_amd64.deb

# legacy packages
wget https://github.com/intel/compute-runtime/releases/download/24.35.30872.36/intel-opencl-icd-legacy1_24.35.30872.36_amd64.deb
wget https://github.com/intel/compute-runtime/releases/download/24.35.30872.36/intel-level-zero-gpu-legacy1_1.5.30872.36_amd64.deb
wget https://github.com/intel/intel-graphics-compiler/releases/download/igc-1.0.17537.24/intel-igc-opencl_1.0.17537.24_amd64.deb
wget https://github.com/intel/intel-graphics-compiler/releases/download/igc-1.0.17537.24/intel-igc-core_1.0.17537.24_amd64.deb
# standard packages
wget https://github.com/intel/compute-runtime/releases/download/24.52.32224.5/intel-opencl-icd_24.52.32224.5_amd64.deb
wget https://github.com/intel/compute-runtime/releases/download/24.52.32224.5/intel-level-zero-gpu_1.6.32224.5_amd64.deb
wget https://github.com/intel/intel-graphics-compiler/releases/download/v2.5.6/intel-igc-opencl-2_2.5.6+18417_amd64.deb
wget https://github.com/intel/intel-graphics-compiler/releases/download/v2.5.6/intel-igc-core-2_2.5.6+18417_amd64.deb
wget https://github.com/intel/compute-runtime/releases/download/25.13.33276.19/intel-opencl-icd_25.13.33276.19_amd64.deb
wget https://github.com/intel/compute-runtime/releases/download/25.13.33276.19/intel-level-zero-gpu_1.6.33276.19_amd64.deb
wget https://github.com/intel/intel-graphics-compiler/releases/download/v2.10.10/intel-igc-opencl-2_2.10.10+18926_amd64.deb
wget https://github.com/intel/intel-graphics-compiler/releases/download/v2.10.10/intel-igc-core-2_2.10.10+18926_amd64.deb
# npu packages
wget https://github.com/oneapi-src/level-zero/releases/download/v1.21.9/level-zero_1.21.9+u22.04_amd64.deb
wget https://github.com/intel/linux-npu-driver/releases/download/v1.17.0/intel-driver-compiler-npu_1.17.0.20250508-14912879441_ubuntu22.04_amd64.deb
wget https://github.com/intel/linux-npu-driver/releases/download/v1.17.0/intel-fw-npu_1.17.0.20250508-14912879441_ubuntu22.04_amd64.deb
wget https://github.com/intel/linux-npu-driver/releases/download/v1.17.0/intel-level-zero-npu_1.17.0.20250508-14912879441_ubuntu22.04_amd64.deb
wget https://github.com/oneapi-src/level-zero/releases/download/v1.28.2/level-zero_1.28.2+u22.04_amd64.deb
wget https://github.com/intel/linux-npu-driver/releases/download/v1.19.0/intel-driver-compiler-npu_1.19.0.20250707-16111289554_ubuntu22.04_amd64.deb
wget https://github.com/intel/linux-npu-driver/releases/download/v1.19.0/intel-fw-npu_1.19.0.20250707-16111289554_ubuntu22.04_amd64.deb
wget https://github.com/intel/linux-npu-driver/releases/download/v1.19.0/intel-level-zero-npu_1.19.0.20250707-16111289554_ubuntu22.04_amd64.deb

dpkg -i *.deb
rm *.deb
apt-get -qq install -f -y
fi

if [[ "${TARGETARCH}" == "arm64" ]]; then
28 changes: 14 additions & 14 deletions docker/tensorrt/requirements-amd64.txt
@@ -1,18 +1,18 @@
# NVidia TensorRT Support (amd64 only)
# Nvidia ONNX Runtime GPU Support
--extra-index-url 'https://pypi.nvidia.com'
cython==3.0.*; platform_machine == 'x86_64'
nvidia_cuda_cupti_cu12==12.5.82; platform_machine == 'x86_64'
nvidia-cublas-cu12==12.5.3.*; platform_machine == 'x86_64'
nvidia-cudnn-cu12==9.3.0.*; platform_machine == 'x86_64'
nvidia-cufft-cu12==11.2.3.*; platform_machine == 'x86_64'
nvidia-curand-cu12==10.3.6.*; platform_machine == 'x86_64'
nvidia_cuda_nvcc_cu12==12.5.82; platform_machine == 'x86_64'
nvidia-cuda-nvrtc-cu12==12.5.82; platform_machine == 'x86_64'
nvidia_cuda_runtime_cu12==12.5.82; platform_machine == 'x86_64'
nvidia_cusolver_cu12==11.6.3.*; platform_machine == 'x86_64'
nvidia_cusparse_cu12==12.5.1.*; platform_machine == 'x86_64'
nvidia_nccl_cu12==2.23.4; platform_machine == 'x86_64'
nvidia_nvjitlink_cu12==12.5.82; platform_machine == 'x86_64'
nvidia-cuda-cupti-cu12==12.9.79; platform_machine == 'x86_64'
nvidia-cublas-cu12==12.9.1.*; platform_machine == 'x86_64'
nvidia-cudnn-cu12==9.19.0.*; platform_machine == 'x86_64'
nvidia-cufft-cu12==11.4.1.*; platform_machine == 'x86_64'
nvidia-curand-cu12==10.3.10.*; platform_machine == 'x86_64'
nvidia-cuda-nvcc-cu12==12.9.86; platform_machine == 'x86_64'
nvidia-cuda-nvrtc-cu12==12.9.86; platform_machine == 'x86_64'
nvidia-cuda-runtime-cu12==12.9.79; platform_machine == 'x86_64'
nvidia-cusolver-cu12==11.7.5.*; platform_machine == 'x86_64'
nvidia-cusparse-cu12==12.5.10.*; platform_machine == 'x86_64'
nvidia-nccl-cu12==2.29.7; platform_machine == 'x86_64'
nvidia-nvjitlink-cu12==12.9.86; platform_machine == 'x86_64'
onnx==1.16.*; platform_machine == 'x86_64'
onnxruntime-gpu==1.22.*; platform_machine == 'x86_64'
onnxruntime-gpu==1.24.*; platform_machine == 'x86_64'
protobuf==3.20.3; platform_machine == 'x86_64'
46 changes: 43 additions & 3 deletions docs/docs/configuration/object_detectors.md
@@ -49,6 +49,11 @@ Frigate supports multiple different detectors that work on different types of ha

- [Synaptics](#synaptics): synap models can run on Synaptics devices (e.g. Astra Machina) with included NPUs.

**AXERA** <CommunityBadge />

- [AXEngine](#axera): axmodels can run on AXERA AI accelerators.

**For Testing**

- [CPU Detector (not recommended for actual use)](#cpu-detector-not-recommended): Use a CPU to run a tflite model; this is not recommended, as in most cases OpenVINO can be used in CPU mode with better results.
@@ -1478,6 +1483,41 @@ model:
input_pixel_format: rgb/bgr # look at the model.json to figure out which to put here
```

## AXERA

Hardware-accelerated object detection is supported on the following SoCs:

- AX650N
- AX8850N

This implementation uses the [AXera Pulsar2 Toolchain](https://huggingface.co/AXERA-TECH/Pulsar2).

See the [installation docs](../frigate/installation.md#axera) for information on configuring the AXERA hardware.

### Configuration

When configuring the AXEngine detector, you must specify the model name via `path`.

#### yolov9

A yolov9 model is provided in the container at `/axmodels` and is used by this detector type by default.

Use the model configuration shown below when using the axengine detector with the default axmodel:

```yaml
detectors:
  axengine:
    type: axengine

model:
  path: frigate-yolov9-tiny
  model_type: yolo-generic
  width: 320
  height: 320
  tensor_format: bgr
  labelmap_path: /labelmap/coco-80.txt
```

# Models

Some model types are not included in Frigate by default.
@@ -1571,12 +1611,12 @@ YOLOv9 model can be exported as ONNX using the command below. You can copy and p
```sh
docker build . --build-arg MODEL_SIZE=t --build-arg IMG_SIZE=320 --output . -f- <<'EOF'
FROM python:3.11 AS build
RUN apt-get update && apt-get install --no-install-recommends -y libgl1 && rm -rf /var/lib/apt/lists/*
COPY --from=ghcr.io/astral-sh/uv:0.8.0 /uv /bin/
RUN apt-get update && apt-get install --no-install-recommends -y cmake libgl1 && rm -rf /var/lib/apt/lists/*
COPY --from=ghcr.io/astral-sh/uv:0.10.4 /uv /bin/
WORKDIR /yolov9
ADD https://github.com/WongKinYiu/yolov9.git .
RUN uv pip install --system -r requirements.txt
RUN uv pip install --system onnx==1.18.0 onnxruntime onnx-simplifier>=0.4.1 onnxscript
RUN uv pip install --system onnx==1.18.0 onnxruntime onnx-simplifier==0.4.* onnxscript
ARG MODEL_SIZE
ARG IMG_SIZE
ADD https://github.com/WongKinYiu/yolov9/releases/download/v0.1/yolov9-${MODEL_SIZE}-converted.pt yolov9-${MODEL_SIZE}.pt
34 changes: 34 additions & 0 deletions docs/docs/configuration/semantic_search.md
@@ -76,6 +76,40 @@ Switching between V1 and V2 requires reindexing your embeddings. The embeddings

:::

### GenAI Provider

Frigate can use a GenAI provider for semantic search embeddings when that provider has the `embeddings` role. Currently, only **llama.cpp** supports multimodal embeddings (both text and images).

To use llama.cpp for semantic search:

1. Configure a GenAI provider in your config with `embeddings` in its `roles`.
2. Set `semantic_search.model` to the GenAI config key (e.g. `default`).
3. Start the llama.cpp server with `--embeddings` and `--mmproj` for image support (a launch sketch follows the config example below):

```yaml
genai:
  default:
    provider: llamacpp
    base_url: http://localhost:8080
    model: your-model-name
    roles:
      - embeddings
      - vision
      - tools

semantic_search:
  enabled: True
  model: default
```

The llama.cpp server must be started with `--embeddings` for the embeddings API, and a multi-modal embeddings model. See the [llama.cpp server documentation](https://github.com/ggml-org/llama.cpp/blob/master/tools/server/README.md) for details.
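As an illustration, a minimal server launch might look like the sketch below; the GGUF file names are placeholders, and the host/port are chosen to match the `base_url` in the example config above:

```sh
# Minimal llama.cpp server launch for multimodal embeddings (file names are
# placeholders). --embeddings exposes the embeddings API; --mmproj loads the
# multimodal projector so images can be embedded alongside text.
./llama-server \
  --model your-embedding-model.gguf \
  --mmproj your-mmproj.gguf \
  --embeddings \
  --host 0.0.0.0 \
  --port 8080
```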

:::note

Switching between Jina models and a GenAI provider requires reindexing. Embeddings from different backends are incompatible.

:::

### GPU Acceleration

The CLIP models are downloaded in ONNX format, and the `large` model can be accelerated using GPU hardware, when available. This depends on the Docker build that is used. You can also target a specific device in a multi-GPU installation.
14 changes: 13 additions & 1 deletion docs/docs/frigate/hardware.md
@@ -103,6 +103,10 @@ Frigate supports multiple different detectors that work on different types of ha

- [Synaptics](#synaptics): synap models can run on Synaptics devices (e.g. Astra Machina) with included NPUs to provide efficient object detection.

**AXERA** <CommunityBadge />

- [AXEngine](#axera): axera models can run on AXERA NPUs via AXEngine, delivering highly efficient object detection.

:::

### Hailo-8
@@ -288,6 +292,14 @@ The inference time of a rk3588 with all 3 cores enabled is typically 25-30 ms fo
| ssd mobilenet | ~ 25 ms |
| yolov5m | ~ 118 ms |

### AXERA

- **AXEngine**: the default model is **yolov9**

| Name | AXERA AX650N/AX8850N Inference Time |
| ---------------- | ----------------------------------- |
| yolov9-tiny | ~ 4 ms |

## What does Frigate use the CPU for and what does it use a detector for? (ELI5 Version)

This is taken from a [user question on reddit](https://www.reddit.com/r/homeassistant/comments/q8mgau/comment/hgqbxh5/?utm_source=share&utm_medium=web2x&context=3). Modified slightly for clarity.
@@ -308,4 +320,4 @@ Basically - When you increase the resolution and/or the frame rate of the stream

YES! The Coral does not help with decoding video streams.

Decompressing video streams takes a significant amount of CPU power. Video compression uses key frames (also known as I-frames) to send a full frame in the video stream. The following frames only include the difference from the key frame, and the CPU has to compile each frame by merging the differences with the key frame. [More detailed explanation](https://support.video.ibm.com/hc/en-us/articles/18106203580316-Keyframes-InterFrame-Video-Compression). Higher resolutions and frame rates mean more processing power is needed to decode the video stream, so try and set them on the camera to avoid unnecessary decoding work.
33 changes: 33 additions & 0 deletions docs/docs/frigate/installation.md
@@ -439,6 +439,39 @@ or add these options to your `docker run` command:

Next, you should configure [hardware object detection](/configuration/object_detectors#synaptics) and [hardware video processing](/configuration/hardware_acceleration_video#synaptics).

### AXERA

AXERA accelerators are available in an M.2 form factor, compatible with both Raspberry Pi and Orange Pi. This form factor has also been successfully tested on x86 platforms, making it a versatile choice for various computing environments.

#### Installation

Using AXERA accelerators requires the installation of the AXCL driver. We provide a convenient Linux script to complete this installation.

Follow these steps for installation:

1. Copy or download [this script](https://github.com/ivanshi1108/assets/releases/download/v0.16.2/user_installation.sh).
2. Make it executable with `sudo chmod +x user_installation.sh`.
3. Run it with `./user_installation.sh`.

#### Setup

To set up Frigate, follow the default installation instructions and use the standard image, for example `ghcr.io/blakeblackshear/frigate:stable`.

Next, grant Docker permissions to access your hardware by adding the following lines to your `docker-compose.yml` file:

```yaml
devices:
  - /dev/axcl_host
  - /dev/ax_mmb_dev
  - /dev/msg_userdev
```

If you are using `docker run`, add these options to your command: `--device /dev/axcl_host --device /dev/ax_mmb_dev --device /dev/msg_userdev`.
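For example, a minimal `docker run` sketch with these devices (the config volume path is a placeholder, and other options such as restart policy are omitted for brevity):

```sh
docker run -d \
  --name frigate \
  --device /dev/axcl_host \
  --device /dev/ax_mmb_dev \
  --device /dev/msg_userdev \
  -v /path/to/your/config:/config \
  ghcr.io/blakeblackshear/frigate:stable
```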

#### Configuration

Finally, configure [hardware object detection](/configuration/object_detectors#axera) to complete the setup.

## Docker

Running through Docker with Docker Compose is the recommended install method.
3 changes: 2 additions & 1 deletion docs/docs/integrations/mqtt.md
@@ -159,7 +159,8 @@ Published when a license plate is recognized on a car object. See the [License P
"plate": "123ABC",
"score": 0.95,
"camera": "driveway_cam",
"timestamp": 1607123958.748393
"timestamp": 1607123958.748393,
"plate_box": [917, 487, 1029, 529] // box coordinates of the detected license plate in the frame
}
```
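As an aside, assuming the box is `[x1, y1, x2, y2]` in frame pixel coordinates (which matches the values above), the plate region can be cropped from a saved frame with a sketch like this; file names are placeholders:

```sh
# Crop the plate region [917, 487, 1029, 529] from a saved frame.
# ffmpeg's crop filter takes width:height:x:y, where width = x2 - x1 = 112
# and height = y2 - y1 = 42.
ffmpeg -i snapshot.jpg -vf "crop=112:42:917:487" plate.jpg
```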
