Commit 6ec9722

Add Python 3.13 support

coremltools versions prior to 9.0 did not support Python 3.13. Now that executorch uses 9.0, we can finally enable building wheels for Python 3.13.
1 parent d318b3b commit 6ec9722

12 files changed (+15, -21 lines)
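The compatibility constraint described in the commit message can be sketched as a small check. This is illustrative only: the `wheel_buildable` helper is hypothetical, and the version cutoffs (coremltools 9.0, Python 3.10–3.13) come from the commit message and the diffs below.

```python
def wheel_buildable(python_version, coremltools_version):
    """Illustrative: Python 3.13 wheels require coremltools >= 9.0;
    Python 3.10-3.12 also worked with earlier coremltools releases."""
    ct_major = int(coremltools_version.split(".")[0])
    if python_version >= (3, 13):
        return ct_major >= 9
    return (3, 10) <= python_version

print(wheel_buildable((3, 13), "8.3.0"))  # False: coremltools < 9.0
print(wheel_buildable((3, 13), "9.0"))    # True: this commit's scenario
```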

.github/workflows/build-wheels-aarch64-linux.yml (1 addition, 1 deletion)

```diff
@@ -32,7 +32,7 @@ jobs:
       test-infra-ref: main
       with-cuda: disabled
       with-rocm: disabled
-      python-versions: '["3.10", "3.11", "3.12"]'
+      python-versions: '["3.10", "3.11", "3.12", "3.13"]'

   build:
     needs: generate-matrix
```

.github/workflows/build-wheels-linux.yml (1 addition, 1 deletion)

```diff
@@ -32,7 +32,7 @@ jobs:
       test-infra-ref: main
       with-cuda: disabled
       with-rocm: disabled
-      python-versions: '["3.10", "3.11", "3.12"]'
+      python-versions: '["3.10", "3.11", "3.12", "3.13"]'

   build:
     needs: generate-matrix
```

.github/workflows/build-wheels-macos.yml (1 addition, 1 deletion)

```diff
@@ -32,7 +32,7 @@ jobs:
       test-infra-ref: main
       with-cuda: disabled
       with-rocm: disabled
-      python-versions: '["3.10", "3.11", "3.12"]'
+      python-versions: '["3.10", "3.11", "3.12", "3.13"]'

   build:
     needs: generate-matrix
```

.github/workflows/build-wheels-windows.yml (1 addition, 1 deletion)

```diff
@@ -27,7 +27,7 @@ jobs:
       test-infra-ref: main
       with-cuda: disabled
       with-rocm: disabled
-      python-versions: '["3.10", "3.11", "3.12"]'
+      python-versions: '["3.10", "3.11", "3.12", "3.13"]'

   build:
     needs: generate-matrix
```

.github/workflows/pull.yml (1 addition, 1 deletion)

```diff
@@ -22,7 +22,7 @@ jobs:
 # strategy:
 #   fail-fast: false
 #   matrix:
-#     python-version: [ "3.10", "3.11", "3.12" ]
+#     python-version: [ "3.10", "3.11", "3.12", "3.13" ]
 # with:
 #   runner: linux.2xlarge
 #   docker-image: ci-image:executorch-ubuntu-22.04-qnn-sdk
```

README-wheel.md (1 addition, 1 deletion)

```diff
@@ -5,7 +5,7 @@ ExecuTorch is to enable wider customization and deployment capabilities of the
 PyTorch programs.
 
 The `executorch` pip package is in beta.
-* Supported python versions: 3.10, 3.11, 3.12
+* Supported python versions: 3.10, 3.11, 3.12, 3.13
 * Compatible systems: Linux x86_64, macOS aarch64
 
 The prebuilt `executorch.runtime` module included in this package provides a way
```

docs/source/backends/coreml/coreml-troubleshooting.md (0 additions, 5 deletions)

```diff
@@ -7,11 +7,6 @@ This page describes common issues that you may encounter when using the Core ML
 
 This happens because the model is in FP16, but Core ML interprets some of the arguments as FP32, which leads to a type mismatch. The solution is to keep the PyTorch model in FP32. Note that the model will still be converted to FP16 during lowering to Core ML unless specified otherwise in the compute_precision [Core ML `CompileSpec`](coreml-partitioner.md#coreml-compilespec). Also see the [related issue in coremltools](https://github.com/apple/coremltools/issues/2480).
 
-2. coremltools/converters/mil/backend/mil/load.py", line 499, in export
-   raise RuntimeError("BlobWriter not loaded")
-
-If you're using Python 3.13, try reducing your python version to Python 3.12. coremltools does not support Python 3.13 per [coremltools issue #2487](https://github.com/apple/coremltools/issues/2487).
-
 ### Issues during runtime
 1. [ETCoreMLModelCompiler.mm:55] [Core ML] Failed to compile model, error = Error Domain=com.apple.mlassetio Code=1 "Failed to parse the model specification. Error: Unable to parse ML Program: at unknown location: Unknown opset 'CoreML7'." UserInfo={NSLocalizedDescription=Failed to par$
```
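The troubleshooting note removed above (downgrade to Python 3.12 when hitting "BlobWriter not loaded") can be expressed as a pre-flight check. A minimal sketch, assuming only what the removed note stated; the `needs_python_downgrade` helper is hypothetical and not part of executorch or coremltools:

```python
def needs_python_downgrade(python_version, coremltools_major):
    """Illustrative: the 'BlobWriter not loaded' export failure occurred on
    Python 3.13 with coremltools releases before 9.0 (coremltools #2487)."""
    return python_version >= (3, 13) and coremltools_major < 9

print(needs_python_downgrade((3, 13), 8))  # True: the failing combination
print(needs_python_downgrade((3, 13), 9))  # False: fixed in coremltools 9.0
```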

docs/source/getting-started.md (1 addition, 1 deletion)

```diff
@@ -8,7 +8,7 @@ This section is intended to describe the necessary steps to take a PyTorch model
 ## System Requirements
 The following are required to install the ExecuTorch host libraries, needed to export models and run from Python. Requirements for target end-user devices are backend dependent. See the appropriate backend documentation for more information.
 
-- Python 3.10 - 3.12
+- Python 3.10 - 3.13
 - g++ version 7 or higher, clang++ version 5 or higher, or another C++17-compatible toolchain.
 - Linux (x86_64 or ARM64), macOS (ARM64), or Windows (x86_64).
 - Intel-based macOS systems require building PyTorch from source (see [Building From Source](using-executorch-building-from-source.md) for instructions).
```
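The host requirement above ("Python 3.10 - 3.13") is an inclusive range and can be checked with a short self-test. A sketch under that assumption; the `host_python_ok` function name is illustrative, not an executorch API:

```python
import sys

SUPPORTED = ((3, 10), (3, 13))  # inclusive range from the docs above

def host_python_ok(version=None):
    """Return True if the given (major, minor) tuple, or the running
    interpreter by default, falls inside the documented range."""
    v = version or sys.version_info[:2]
    return SUPPORTED[0] <= v <= SUPPORTED[1]

print(host_python_ok((3, 11)))  # True
print(host_python_ok((3, 9)))   # False: below the supported floor
```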

docs/source/quick-start-section.md (1 addition, 1 deletion)

```diff
@@ -17,7 +17,7 @@ Follow these guides in order to get started with ExecuTorch:
 
 ## Prerequisites
 
-- Python 3.10-3.12
+- Python 3.10-3.13
 - PyTorch 2.9+
 - Basic familiarity with PyTorch model development
```

docs/source/raspberry_pi_llama_tutorial.md (3 additions, 3 deletions)

```diff
@@ -4,7 +4,7 @@
 
 This tutorial demonstrates how to deploy **Llama models on Raspberry Pi 4/5 devices** using ExecuTorch:
 
-- **Prerequisites**: Linux host machine, Python 3.10-3.12, conda environment, Raspberry Pi 4/5
+- **Prerequisites**: Linux host machine, Python 3.10-3.13, conda environment, Raspberry Pi 4/5
 - **Setup**: Automated cross-compilation using `setup.sh` script for ARM toolchain installation
 - **Export**: Convert Llama models to optimized `.pte` format with quantization options
 - **Deploy**: Transfer binaries to Raspberry Pi and configure runtime libraries
@@ -19,7 +19,7 @@ This tutorial demonstrates how to deploy **Llama models on Raspberry Pi 4/5 devi
 
 **Software Dependencies**:
 
-- **Python 3.10-3.12** (ExecuTorch requirement)
+- **Python 3.10-3.13** (ExecuTorch requirement)
 - **conda** or **venv** for environment management
 - **CMake 3.29.6+**
 - **Git** for repository cloning
@@ -42,7 +42,7 @@ uname -s # Should output: Linux
 uname -m # Should output: x86_64
 
 # Check Python version
-python3 --version # Should be 3.10-3.12
+python3 --version # Should be 3.10-3.13
 
 # Check required tools
 hash cmake git md5sum 2>/dev/null || echo "Missing required tools"
```
