Commit 45ff957

1.17.3 cherry-picks for ORT Web changes (#19926)
### Description

This PR is a preview of cherry-picks for ort-web to `rel-1.17.3` based on `rel-1.17.2`.

<details>
<summary>Changes of ort-web to cherry-pick</summary>

The following commits are from the main branch. `o` stands for pick, and `x` stands for skip.

```
o 2e0a388 [js/webgpu] Add HardSigmoid support (#19215)
o d226e40 [js/webgpu] set query type in onRunStart (#19202)
o 61610ff [js/webgpu] Add FusedConv clip test case (#18900)
o a33b5bd [JS/WebGPU] Added Uniforms to SkipLayerNorm. (#18788)
o 591f90c [js/webgpu] Fix issue of timestamp query (#19258)
o 7252c6e [WebNN EP] Support WebNN async API with Asyncify (#19145)
o 5b06505 [js/webgpu] Fix Tanh explosion (#19201)
o 656ca66 [js/webgpu] Support uniforms for conv, conv transpose, conv grouped (#18753)
o a3f0e24 [js/webgpu] Support f16 uniform (#19098)
o 9e69606 fix f16 for attention, enable slice and flatten for more types (#19262)
o 624b4e2 [js/webgpu] Remove enableShapesUniforms (#19279)
o 90883a3 [js/webgpu] Add hardSigmoid activation for fusedConv (#19233)
o 85cef0a [js/webgpu] Support capture and replay for jsep (#18989)
o d73131c [js/webgpu] Use DataType as uniform cpu type (#19281)
o dd1f6cc [js/webgpu] resolve codescan alert (#19343)
o 3a2ab19 [js/webgpu] Refactor createTensorShapeVariables (#18883)
o efc17e7 [js/webgpu] Fix the undefined push error (#19366)
x 50806a7 [js/web] support external data in npm test (#19377)
o ccbe264 [js/webgpu] Add LeakyRelu activation for fusedConv (#19369)
o 5ff27ef [js/webgpu] support customop FastGelu (#19392)
x 03be65e [js/web] fix types exports in package.json (#19458)
o 06269a3 [js/webgpu] allow uint8 tensors for webgpu (#19545)
o dfeda90 [JS/WebGPU] Add MatMulNBits (#19446)
o 1b48054 [js/webgpu] Create Split indices helpers by rank, not by shape (#19554)
o 3fe2c13 [js] small fix to workaround formatter (#19400)
x 70567a4 [js/web] use ApiTensor insteadof onnxjs Tensor in TensorResultValidator (#19358)
o 6e04e36 [js/common] upgrade tsc in common from 4.9.5 to 5.2.2 (#19317)
o 58f4921 [js] changes to allow Float16Array if any polyfill is available (#19305)
o 57d6819 [js/web] Fix fused-conv is not included in npm test (#19581)
o ebd220b Misspelling in README.md (#19433)
o 38c3432 Bump ip from 1.1.8 to 1.1.9 in /js/react_native (#19582)
o fe82fcc [js/webgpu] Fix Conv2DTransposeMatMul f16 compilation failure (#19596)
o 76a2a48 Bump ip from 1.1.8 to 1.1.9 in /js/react_native/e2e (#19583)
o 29b1106 [node] Switch to setImmediate to avoid starving the Node.js event loop (#19610)
o ae3d73c [JS/WebGPU] Fix Split and Where to handle corner cases. (#19613)
o aec2389 [js/webgpu] allows a ProgramInfo's RunData to use zero sized output (#19614)
o bb43a0f [js/webgpu] minor fixes to make tinyllama work (#19564)
o 0edb035 [js/web] fix suite test list for zero sized tensor (#19638)
o 3cb81cd [js/common] move 'env.wasm.trace' to 'env.trace' (#19617)
o e30618d [js/webgpu] use Headless for webgpu test by default (#19702)
o f06164e [js/web] transfer input buffer back to caller thread (#19677)
x a788514 [js/web] dump debug logs for karma for diagnose purpose (#19785)
o 24b72d2 [JS/WebGPU] Preserve zero size input tensor dims. (#19737)
o 4538d31 [js/webgpu] expose a few properties in WebGPU API (#19857)
o 53de2d8 [js/webgpu] Enable GroupedConvVectorize path (#19791)
o ed250b8 [JS/WebGPU] Optimize MatMulNBits (#19852)
x e771a76 [js/test] align web test runner flags with ort.env (#19790)
o 79e50ae [js/web] rewrite backend resolve to allow multiple EPs (#19735)
o acb0df2 Fix #19931 broken Get Started link of "ONNX Runtime JavaScript API" page (#19932)
o b29849a [js/common] fix typedoc warnings (#19933)
o afdab62 Bump follow-redirects from 1.15.4 to 1.15.6 in /js/web (#19949)
o 28ad6c3 Bump follow-redirects from 1.15.4 to 1.15.6 in /js/node (#19951)
o 7e0d424 accumulate in fp32 for Reduce* (#19868)
o 4c6a6a3 [js/webgpu] Fix NAN caused by un-initialized buffer in instance-norm (#19387)
o 01c7aaf [js/webgpu] allow setting env.webgpu.adapter (#19940)
o c45cff6 [js/webgpu] fix maxpool / fp16 (#19981)
```
</details>

<details>
<summary>Cherry-pick command lines</summary>

```sh
git cherry-pick 2e0a388
git cherry-pick d226e40
git cherry-pick 61610ff
git cherry-pick a33b5bd
git cherry-pick 591f90c
git cherry-pick 7252c6e
git cherry-pick 5b06505
git cherry-pick 656ca66
git cherry-pick a3f0e24
git cherry-pick 9e69606
git cherry-pick 624b4e2
git cherry-pick 90883a3
git cherry-pick 85cef0a  #<<<<< Note: conflicts
git cherry-pick d73131c
git cherry-pick dd1f6cc
git cherry-pick 3a2ab19
git cherry-pick efc17e7
git cherry-pick ccbe264
git cherry-pick 5ff27ef
git cherry-pick 06269a3
git cherry-pick dfeda90
git cherry-pick 1b48054
git cherry-pick 3fe2c13
git cherry-pick 6e04e36
git cherry-pick 58f4921
git cherry-pick 57d6819
git cherry-pick ebd220b
git cherry-pick 38c3432
git cherry-pick fe82fcc
git cherry-pick 76a2a48
git cherry-pick 29b1106
git cherry-pick ae3d73c
git cherry-pick aec2389
git cherry-pick bb43a0f
git cherry-pick 0edb035
git cherry-pick 3cb81cd
git cherry-pick e30618d
git cherry-pick f06164e
git cherry-pick 24b72d2
git cherry-pick 4538d31
git cherry-pick 53de2d8
git cherry-pick ed250b8
git cherry-pick 79e50ae
git cherry-pick acb0df2
git cherry-pick b29849a
git cherry-pick afdab62
git cherry-pick 28ad6c3
git cherry-pick 7e0d424
git cherry-pick 4c6a6a3
git cherry-pick 01c7aaf
git cherry-pick c45cff6
```
</details>

<details>
<summary>Cherry-pick conflicts</summary>

- 85cef0a #18989 — this change enables the graph capture feature for JSEP, and it was made after the ROCm EP enabled its graph capture feature. However, the ROCm EP graph capture feature is not cherry-picked in rel-1.17.2.
</details>

---------

Signed-off-by: dependabot[bot] <[email protected]>
Co-authored-by: Jiajia Qin <[email protected]>
Co-authored-by: Xu Xing <[email protected]>
Co-authored-by: satyajandhyala <[email protected]>
Co-authored-by: Yang Gu <[email protected]>
Co-authored-by: Wanming Lin <[email protected]>
Co-authored-by: Jiajie Hu <[email protected]>
Co-authored-by: Guenther Schmuelling <[email protected]>
Co-authored-by: Matttttt <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Segev Finer <[email protected]>
Co-authored-by: Belem Zhang <[email protected]>
1 parent 046d06f commit 45ff957

114 files changed: +5493 / -1350 lines changed


js/common/lib/backend-impl.ts

Lines changed: 90 additions & 31 deletions
```diff
@@ -2,6 +2,7 @@
 // Licensed under the MIT License.
 
 import {Backend} from './backend.js';
+import {InferenceSession} from './inference-session.js';
 
 interface BackendInfo {
   backend: Backend;
@@ -10,6 +11,7 @@ interface BackendInfo {
   initPromise?: Promise<void>;
   initialized?: boolean;
   aborted?: boolean;
+  error?: string;
 }
 
 const backends: Map<string, BackendInfo> = new Map();
@@ -60,43 +62,100 @@ export const registerBackend = (name: string, backend: Backend, priority: number
 };
 
 /**
- * Resolve backend by specified hints.
+ * Try to resolve and initialize a backend.
  *
- * @param backendHints - a list of execution provider names to lookup. If omitted use registered backends as list.
- * @returns a promise that resolves to the backend.
+ * @param backendName - the name of the backend.
+ * @returns the backend instance if resolved and initialized successfully, or an error message if failed.
+ */
+const tryResolveAndInitializeBackend = async(backendName: string): Promise<Backend|string> => {
+  const backendInfo = backends.get(backendName);
+  if (!backendInfo) {
+    return 'backend not found.';
+  }
+
+  if (backendInfo.initialized) {
+    return backendInfo.backend;
+  } else if (backendInfo.aborted) {
+    return backendInfo.error!;
+  } else {
+    const isInitializing = !!backendInfo.initPromise;
+    try {
+      if (!isInitializing) {
+        backendInfo.initPromise = backendInfo.backend.init(backendName);
+      }
+      await backendInfo.initPromise;
+      backendInfo.initialized = true;
+      return backendInfo.backend;
+    } catch (e) {
+      if (!isInitializing) {
+        backendInfo.error = `${e}`;
+        backendInfo.aborted = true;
+      }
+      return backendInfo.error!;
+    } finally {
+      delete backendInfo.initPromise;
+    }
+  }
+};
+
+/**
+ * Resolve execution providers from the specific session options.
+ *
+ * @param options - the session options object.
+ * @returns a promise that resolves to a tuple of an initialized backend instance and a session options object with
+ * filtered EP list.
  *
  * @ignore
  */
-export const resolveBackend = async(backendHints: readonly string[]): Promise<Backend> => {
-  const backendNames = backendHints.length === 0 ? backendsSortedByPriority : backendHints;
-  const errors = [];
-  for (const backendName of backendNames) {
-    const backendInfo = backends.get(backendName);
-    if (backendInfo) {
-      if (backendInfo.initialized) {
-        return backendInfo.backend;
-      } else if (backendInfo.aborted) {
-        continue;  // current backend is unavailable; try next
-      }
+export const resolveBackendAndExecutionProviders = async(options: InferenceSession.SessionOptions):
+    Promise<[backend: Backend, options: InferenceSession.SessionOptions]> => {
+      // extract backend hints from session options
+      const eps = options.executionProviders || [];
+      const backendHints = eps.map(i => typeof i === 'string' ? i : i.name);
+      const backendNames = backendHints.length === 0 ? backendsSortedByPriority : backendHints;
 
-      const isInitializing = !!backendInfo.initPromise;
-      try {
-        if (!isInitializing) {
-          backendInfo.initPromise = backendInfo.backend.init(backendName);
+      // try to resolve and initialize all requested backends
+      let backend: Backend|undefined;
+      const errors = [];
+      const availableBackendNames = new Set<string>();
+      for (const backendName of backendNames) {
+        const resolveResult = await tryResolveAndInitializeBackend(backendName);
+        if (typeof resolveResult === 'string') {
+          errors.push({name: backendName, err: resolveResult});
+        } else {
+          if (!backend) {
+            backend = resolveResult;
+          }
+          if (backend === resolveResult) {
+            availableBackendNames.add(backendName);
+          }
         }
-        await backendInfo.initPromise;
-        backendInfo.initialized = true;
-        return backendInfo.backend;
-      } catch (e) {
-        if (!isInitializing) {
-          errors.push({name: backendName, err: e});
+      }
+
+      // if no backend is available, throw error.
+      if (!backend) {
+        throw new Error(`no available backend found. ERR: ${errors.map(e => `[${e.name}] ${e.err}`).join(', ')}`);
+      }
+
+      // for each explicitly requested backend, if it's not available, output warning message.
+      for (const {name, err} of errors) {
+        if (backendHints.includes(name)) {
+          // eslint-disable-next-line no-console
+          console.warn(`removing requested execution provider "${
+              name}" from session options because it is not available: ${err}`);
         }
-        backendInfo.aborted = true;
-      } finally {
-        delete backendInfo.initPromise;
       }
-    }
-  }
 
-  throw new Error(`no available backend found. ERR: ${errors.map(e => `[${e.name}] ${e.err}`).join(', ')}`);
-};
+      const filteredEps = eps.filter(i => availableBackendNames.has(typeof i === 'string' ? i : i.name));
+
+      return [
+        backend, new Proxy(options, {
+          get: (target, prop) => {
+            if (prop === 'executionProviders') {
+              return filteredEps;
+            }
+            return Reflect.get(target, prop);
+          }
+        })
+      ];
+    };
```
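The new `resolveBackendAndExecutionProviders` wraps the caller's session options in a `Proxy` so that reads of `executionProviders` only see EPs whose backend actually initialized, while every other option passes through unchanged. A standalone sketch of that filtering pattern (illustrative only; `SessionOptions` and `filterEps` here are simplified stand-ins, not the library code):

```typescript
// Standalone sketch of the Proxy-based EP filtering used in the diff above.
// `SessionOptions` and `filterEps` are simplified stand-ins, not the real API.
interface SessionOptions {
  executionProviders?: (string | { name: string })[];
  logSeverityLevel?: number;
}

const filterEps = (options: SessionOptions, available: Set<string>): SessionOptions =>
    // The Proxy leaves the caller's options object untouched; only reads of
    // `executionProviders` see the filtered list.
    new Proxy(options, {
      get: (target, prop) => {
        if (prop === 'executionProviders') {
          return (target.executionProviders ?? [])
              .filter(i => available.has(typeof i === 'string' ? i : i.name));
        }
        return Reflect.get(target, prop);
      }
    });

const opts: SessionOptions = { executionProviders: ['webgpu', 'wasm'], logSeverityLevel: 2 };
const filtered = filterEps(opts, new Set(['wasm']));
console.log(filtered.executionProviders);  // unavailable 'webgpu' is dropped
console.log(filtered.logSeverityLevel);    // other options pass through unchanged
```

The design choice here is that the original options object is never mutated; the filtered view only exists on the object handed to the session handler.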

js/common/lib/backend.ts

Lines changed: 3 additions & 3 deletions
```diff
@@ -58,7 +58,7 @@ export interface TrainingSessionHandler extends SessionHandler {
       options: InferenceSession.RunOptions): Promise<SessionHandler.ReturnType>;
 
   getParametersSize(trainableOnly: boolean): Promise<number>;
-  loadParametersBuffer(array: Uint8Array, trainableOnly: boolean): Promise<void>;
+  loadParametersBuffer(buffer: Uint8Array, trainableOnly: boolean): Promise<void>;
   getContiguousParameters(trainableOnly: boolean): Promise<OnnxValue>;
 }
 
@@ -77,8 +77,8 @@ export interface Backend {
       Promise<InferenceSessionHandler>;
 
   createTrainingSessionHandler?
-      (checkpointStateUriOrBuffer: TrainingSession.URIorBuffer, trainModelUriOrBuffer: TrainingSession.URIorBuffer,
-       evalModelUriOrBuffer: TrainingSession.URIorBuffer, optimizerModelUriOrBuffer: TrainingSession.URIorBuffer,
+      (checkpointStateUriOrBuffer: TrainingSession.UriOrBuffer, trainModelUriOrBuffer: TrainingSession.UriOrBuffer,
+       evalModelUriOrBuffer: TrainingSession.UriOrBuffer, optimizerModelUriOrBuffer: TrainingSession.UriOrBuffer,
       options: InferenceSession.SessionOptions): Promise<TrainingSessionHandler>;
 }
 
```

js/common/lib/env.ts

Lines changed: 49 additions & 1 deletion
```diff
@@ -36,6 +36,7 @@ export declare namespace Env {
    /**
     * set or get a boolean value indicating whether to enable trace.
     *
+    * @deprecated Use `env.trace` instead. If `env.trace` is set, this property will be ignored.
     * @defaultValue `false`
     */
    trace?: boolean;
@@ -142,13 +143,52 @@ export declare namespace Env {
       */
      ondata?: (data: WebGpuProfilingData) => void;
    };
+    /**
+     * Set or get the power preference.
+     *
+     * Setting this property only has effect before the first WebGPU inference session is created. The value will be
+     * used as options for `navigator.gpu.requestAdapter()`.
+     *
+     * See {@link https://gpuweb.github.io/gpuweb/#dictdef-gpurequestadapteroptions} for more details.
+     *
+     * @defaultValue `undefined`
+     */
+    powerPreference?: 'low-power'|'high-performance';
+    /**
+     * Set or get the force fallback adapter flag.
+     *
+     * Setting this property only has effect before the first WebGPU inference session is created. The value will be
+     * used as options for `navigator.gpu.requestAdapter()`.
+     *
+     * See {@link https://gpuweb.github.io/gpuweb/#dictdef-gpurequestadapteroptions} for more details.
+     *
+     * @defaultValue `undefined`
+     */
+    forceFallbackAdapter?: boolean;
+    /**
+     * Set or get the adapter for WebGPU.
+     *
+     * Setting this property only has effect before the first WebGPU inference session is created. The value will be
+     * used as the GPU adapter for the underlying WebGPU backend to create GPU device.
+     *
+     * If this property is not set, it will be available to get after the first WebGPU inference session is created. The
+     * value will be the GPU adapter that created by the underlying WebGPU backend.
+     *
+     * When use with TypeScript, the type of this property is `GPUAdapter` defined in "@webgpu/types".
+     * Use `const adapter = env.webgpu.adapter as GPUAdapter;` in TypeScript to access this property with correct type.
+     *
+     * see comments on {@link Tensor.GpuBufferType}
+     */
+    adapter: unknown;
    /**
     * Get the device for WebGPU.
     *
+     * This property is only available after the first WebGPU inference session is created.
+     *
     * When use with TypeScript, the type of this property is `GPUDevice` defined in "@webgpu/types".
     * Use `const device = env.webgpu.device as GPUDevice;` in TypeScript to access this property with correct type.
     *
-     * see comments on {@link GpuBufferType} for more details about why not use types defined in "@webgpu/types".
+     * see comments on {@link Tensor.GpuBufferType} for more details about why not use types defined in "@webgpu/types".
     */
    readonly device: unknown;
    /**
@@ -167,13 +207,21 @@ export interface Env {
   * @defaultValue `'warning'`
   */
  logLevel?: 'verbose'|'info'|'warning'|'error'|'fatal';
+
  /**
   * Indicate whether run in debug mode.
   *
   * @defaultValue `false`
   */
  debug?: boolean;
 
+  /**
+   * set or get a boolean value indicating whether to enable trace.
+   *
+   * @defaultValue `false`
+   */
+  trace?: boolean;
+
  /**
   * Get version of the current package.
   */
```
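The `@deprecated` note above states that when the new top-level `env.trace` is set, the old `env.wasm.trace` is ignored. A minimal standalone sketch of that precedence rule (illustrative only; `EnvLike` and `isTraceEnabled` are hypothetical names, not the library's implementation):

```typescript
// Illustrative precedence check for the trace flag: the new top-level `trace`
// wins when set; the deprecated `wasm.trace` is only consulted as a fallback.
// `EnvLike`/`isTraceEnabled` are hypothetical, not part of onnxruntime-web.
interface EnvLike {
  trace?: boolean;
  wasm: { trace?: boolean };
}

const isTraceEnabled = (env: EnvLike): boolean =>
    typeof env.trace === 'boolean' ? env.trace : (env.wasm.trace ?? false);

console.log(isTraceEnabled({ wasm: { trace: true } }));                // true: fallback applies
console.log(isTraceEnabled({ trace: false, wasm: { trace: true } }));  // false: deprecated flag ignored
```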

js/common/lib/index.ts

Lines changed: 4 additions & 1 deletion
```diff
@@ -11,7 +11,7 @@
  * - [onnxruntime-react-native](https://www.npmjs.com/package/onnxruntime-react-native)
  *
  * See also:
- * - [Get Started](https://onnxruntime.ai/docs/get-started/with-javascript.html)
+ * - [Get Started](https://onnxruntime.ai/docs/get-started/with-javascript/)
  * - [Inference examples](https://github.com/microsoft/onnxruntime-inference-examples/tree/main/js)
  *
  * @packageDocumentation
@@ -21,6 +21,9 @@ export * from './backend.js';
 export * from './env.js';
 export * from './inference-session.js';
 export * from './tensor.js';
+export * from './tensor-conversion.js';
+export * from './tensor-factory.js';
 export * from './trace.js';
+export * from './onnx-model.js';
 export * from './onnx-value.js';
 export * from './training-session.js';
```

js/common/lib/inference-session-impl.ts

Lines changed: 4 additions & 6 deletions
```diff
@@ -1,7 +1,7 @@
 // Copyright (c) Microsoft Corporation. All rights reserved.
 // Licensed under the MIT License.
 
-import {resolveBackend} from './backend-impl.js';
+import {resolveBackendAndExecutionProviders} from './backend-impl.js';
 import {InferenceSessionHandler} from './backend.js';
 import {InferenceSession as InferenceSessionInterface} from './inference-session.js';
 import {OnnxValue} from './onnx-value.js';
@@ -195,11 +195,9 @@ export class InferenceSession implements InferenceSessionInterface {
       throw new TypeError('Unexpected argument[0]: must be \'path\' or \'buffer\'.');
     }
 
-    // get backend hints
-    const eps = options.executionProviders || [];
-    const backendHints = eps.map(i => typeof i === 'string' ? i : i.name);
-    const backend = await resolveBackend(backendHints);
-    const handler = await backend.createInferenceSessionHandler(filePathOrUint8Array, options);
+    // resolve backend, update session options with validated EPs, and create session handler
+    const [backend, optionsWithValidatedEPs] = await resolveBackendAndExecutionProviders(options);
+    const handler = await backend.createInferenceSessionHandler(filePathOrUint8Array, optionsWithValidatedEPs);
     TRACE_FUNC_END();
     return new InferenceSession(handler);
   }
```

js/common/lib/inference-session.ts

Lines changed: 42 additions & 9 deletions
```diff
@@ -111,7 +111,7 @@ export declare namespace InferenceSession {
    optimizedModelFilePath?: string;
 
    /**
-    * Wether enable profiling.
+    * Whether enable profiling.
     *
     * This setting is a placeholder for a future use.
     */
@@ -154,6 +154,12 @@ export declare namespace InferenceSession {
     */
    preferredOutputLocation?: OnnxValueDataLocation|{readonly [outputName: string]: OnnxValueDataLocation};
 
+    /**
+     * Whether enable graph capture.
+     * This setting is available only in ONNXRuntime Web for WebGPU EP.
+     */
+    enableGraphCapture?: boolean;
+
    /**
     * Store configurations for a session. See
     * https://github.com/microsoft/onnxruntime/blob/main/include/onnxruntime/core/session/
@@ -180,22 +186,22 @@ export declare namespace InferenceSession {
  // #region execution providers
 
  // Currently, we have the following backends to support execution providers:
-  // Backend Node.js binding: supports 'cpu' and 'cuda'.
+  // Backend Node.js binding: supports 'cpu', 'dml' (win32), 'coreml' (macOS) and 'cuda' (linux).
  // Backend WebAssembly: supports 'cpu', 'wasm', 'webgpu' and 'webnn'.
  // Backend ONNX.js: supports 'webgl'.
  // Backend React Native: supports 'cpu', 'xnnpack', 'coreml' (iOS), 'nnapi' (Android).
  interface ExecutionProviderOptionMap {
+    coreml: CoreMLExecutionProviderOption;
    cpu: CpuExecutionProviderOption;
-    coreml: CoreMlExecutionProviderOption;
    cuda: CudaExecutionProviderOption;
    dml: DmlExecutionProviderOption;
+    nnapi: NnapiExecutionProviderOption;
    tensorrt: TensorRtExecutionProviderOption;
    wasm: WebAssemblyExecutionProviderOption;
    webgl: WebGLExecutionProviderOption;
-    xnnpack: XnnpackExecutionProviderOption;
    webgpu: WebGpuExecutionProviderOption;
    webnn: WebNNExecutionProviderOption;
-    nnapi: NnapiExecutionProviderOption;
+    xnnpack: XnnpackExecutionProviderOption;
  }
 
  type ExecutionProviderName = keyof ExecutionProviderOptionMap;
@@ -213,10 +219,6 @@ export declare namespace InferenceSession {
    readonly name: 'cuda';
    deviceId?: number;
  }
-  export interface CoreMlExecutionProviderOption extends ExecutionProviderOption {
-    readonly name: 'coreml';
-    coreMlFlags?: number;
-  }
  export interface DmlExecutionProviderOption extends ExecutionProviderOption {
    readonly name: 'dml';
    deviceId?: number;
@@ -247,8 +249,39 @@ export declare namespace InferenceSession {
  }
  export interface CoreMLExecutionProviderOption extends ExecutionProviderOption {
    readonly name: 'coreml';
+    /**
+     * The bit flags for CoreML execution provider.
+     *
+     * ```
+     * COREML_FLAG_USE_CPU_ONLY = 0x001
+     * COREML_FLAG_ENABLE_ON_SUBGRAPH = 0x002
+     * COREML_FLAG_ONLY_ENABLE_DEVICE_WITH_ANE = 0x004
+     * COREML_FLAG_ONLY_ALLOW_STATIC_INPUT_SHAPES = 0x008
+     * COREML_FLAG_CREATE_MLPROGRAM = 0x010
+     * ```
+     *
+     * See include/onnxruntime/core/providers/coreml/coreml_provider_factory.h for more details.
+     *
+     * This flag is available only in ONNXRuntime (Node.js binding).
+     */
+    coreMlFlags?: number;
+    /**
+     * Specify whether to use CPU only in CoreML EP.
+     *
+     * This setting is available only in ONNXRuntime (react-native).
+     */
    useCPUOnly?: boolean;
+    /**
+     * Specify whether to enable CoreML EP on subgraph.
+     *
+     * This setting is available only in ONNXRuntime (react-native).
+     */
    enableOnSubgraph?: boolean;
+    /**
+     * Specify whether to only enable CoreML EP for Apple devices with ANE (Apple Neural Engine).
+     *
+     * This setting is available only in ONNXRuntime (react-native).
+     */
    onlyEnableDeviceWithANE?: boolean;
  }
  export interface NnapiExecutionProviderOption extends ExecutionProviderOption {
```
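The `coreMlFlags` value documented above is a bitmask: individual CoreML EP flags are combined with bitwise OR. A short sketch using the flag values from the doc comment (the constant names mirror the C header; only the numeric values come from the diff, and the session-options shape at the end is a hypothetical usage, not a tested call):

```typescript
// CoreML EP flag values as listed in the doc comment above.
const COREML_FLAG_USE_CPU_ONLY = 0x001;
const COREML_FLAG_ENABLE_ON_SUBGRAPH = 0x002;
const COREML_FLAG_ONLY_ENABLE_DEVICE_WITH_ANE = 0x004;
const COREML_FLAG_ONLY_ALLOW_STATIC_INPUT_SHAPES = 0x008;
const COREML_FLAG_CREATE_MLPROGRAM = 0x010;

// Combine flags with bitwise OR, e.g. request an ML program restricted to
// static input shapes:
const coreMlFlags = COREML_FLAG_CREATE_MLPROGRAM | COREML_FLAG_ONLY_ALLOW_STATIC_INPUT_SHAPES;
console.log('0x' + coreMlFlags.toString(16));  // 0x18

// The session options would then look like (hypothetical usage, shape taken
// from CoreMLExecutionProviderOption above):
// { executionProviders: [{ name: 'coreml', coreMlFlags }] }
```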
