`docs/source/using-executorch-ios.md` (2 additions, 3 deletions)
```diff
@@ -11,8 +11,7 @@ The ExecuTorch Runtime for iOS and macOS is distributed as a collection of prebu
 * `backend_mps` - MPS backend
 * `backend_xnnpack` - XNNPACK backend
 * `kernels_custom` - Custom kernels for LLMs
-* `kernels_optimized` - Optimized kernels
-* `kernels_portable` - Portable kernels (naive implementation used as a reference)
+* `kernels_optimized` - Accelerated generic CPU kernels
 * `kernels_quantized` - Quantized kernels

 Link your binary with the ExecuTorch runtime and any backends or kernels used by the exported ML model. It is recommended to link the core runtime to the components that use ExecuTorch directly, and link kernels and backends against the main app target.
```
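For concreteness, the linking guidance above could be expressed as a Swift Package Manager manifest along the following lines. This is only an illustrative sketch: the package URL, branch, and product names are assumptions (chosen to mirror the framework names listed in the diff), not values confirmed by this document; substitute whatever your ExecuTorch distribution actually provides.

```swift
// swift-tools-version:5.9
// Package.swift — illustrative sketch only. The dependency URL, branch,
// and product names below are assumptions for this example.
import PackageDescription

let package = Package(
    name: "MyApp",
    platforms: [.iOS(.v17), .macOS(.v12)],
    dependencies: [
        // Hypothetical ExecuTorch package dependency.
        .package(url: "https://github.com/pytorch/executorch.git", branch: "main"),
    ],
    targets: [
        .target(
            name: "MyApp",
            dependencies: [
                // Core runtime, linked to the target that calls ExecuTorch directly,
                // plus only the backends/kernels the exported model uses.
                .product(name: "executorch", package: "executorch"),
                .product(name: "backend_xnnpack", package: "executorch"),
                .product(name: "kernels_optimized", package: "executorch"),
            ]
        ),
    ]
)
```

The design point the paragraph makes is selective linking: pulling in only the backends and kernels your exported model needs keeps the app binary smaller than linking every framework unconditionally.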