
Commit 61d4d5a

Add device preference use cases to explainer (#855)

Adds a new subsection "Device Preference Use Cases" to the device-selection-explainer.md document. This subsection details several use cases for device selection preferences, mapping them to the preferences discussed in the W3C WebML WG minutes of 2025-05-08 (https://www.w3.org/2025/05/08-webmachinelearning-minutes.html#1db2). The use cases cover:

- Preferring CPU
- Preferring NPU
- Preferring GPU
- Maximizing performance
- Maximizing power efficiency
- Minimizing overall system power

Future-proof device names ("where JS and Wasm execute", "where WebGL and WebGPU programs execute", "other") are used in the descriptions.

Co-authored-by: google-labs-jules[bot] <161369871+google-labs-jules[bot]@users.noreply.github.com>
1 parent e24624b commit 61d4d5a

1 file changed: +26, -0 lines changed

device-selection-explainer.md

@@ -51,6 +51,29 @@ Later the need for explicit device selection support was challenged in [[MLConte
## Key use cases and requirements

### Device Preference Use Cases

A WebNN application may have specific device preferences for model execution. The following use cases map to such preferences, informed by existing APIs such as ONNX Runtime's `OrtExecutionProviderDevicePolicy` [1](https://onnxruntime.ai/docs/api/c/group___global.html#gaf26ca954c79d297a31a66187dd1b4e24):
* **Prefer execution on the main CPU**:
  * *Preference*: `"prefer CPU"`
  * *Description*: The application developer hints that the model should ideally run on the device component primarily responsible for general computation, typically "where JS and Wasm execute". This could be due to the model's characteristics (e.g., heavy control flow or operations best suited for the CPU) or to reserve other accelerators for different tasks.
* **Prefer execution on a Neural Processing Unit (NPU)**:
  * *Preference*: `"prefer NPU"`
  * *Description*: The application developer hints that the model is well suited for an NPU. NPUs are specialized hardware accelerators, distinct from CPUs (typically "where JS and Wasm execute") and GPUs (typically "where WebGL and WebGPU programs execute"). In a future-proof context, NPUs fall under the category of "other" compute devices, encompassing various current and future specialized ML accelerators. This preference is often chosen for models optimized for low power and sustained performance.
* **Prefer execution on a Graphics Processing Unit (GPU)**:
  * *Preference*: `"prefer GPU"`
  * *Description*: The application developer hints that the model should run on the GPU (the device "where WebGL and WebGPU programs execute"). This is common for models with highly parallelizable operations.
* **Maximize Performance**:
  * *Preference*: `"maximum performance"`
  * *Description*: The application developer desires the highest possible throughput or lowest latency for model execution, regardless of power consumption. The underlying system will choose the device or combination of devices (e.g., "where WebGL and WebGPU programs execute", or other specialized hardware) that can achieve this.
* **Maximize Power Efficiency**:
  * *Preference*: `"maximum efficiency"`
  * *Description*: The application developer prioritizes executing the model in the most power-efficient manner, which might involve using an NPU or a low-power mode of the CPU ("where JS and Wasm execute"). This is crucial for battery-constrained devices or long-running tasks.
* **Minimize Overall System Power**:
  * *Preference*: `"minimum overall power"`
  * *Description*: The application developer hints that model execution should contribute as little as possible to overall system power draw. This is a broader consideration than just the model's own efficiency, potentially influencing scheduling and resource allocation across the system. The implementation may choose any device ("where JS and Wasm execute", "where WebGL and WebGPU programs execute", or "other") that best achieves this goal.
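As a rough illustration of how an application might act on these preferences, the sketch below maps the preference strings above onto context-creation options. This is hypothetical: `toContextOptions` is not part of any proposal, and the `deviceType`/`powerPreference` option names are used here only as illustrative placeholders, not as the API this explainer settles on.

```javascript
// Hypothetical helper mapping the preference strings above to
// context-creation options. Neither the preference strings nor this
// mapping is specified by WebNN; "deviceType" and "powerPreference"
// are illustrative option names only.
function toContextOptions(preference) {
  switch (preference) {
    case "prefer CPU":
      return { deviceType: "cpu" };
    case "prefer NPU":
      return { deviceType: "npu" };
    case "prefer GPU":
      return { deviceType: "gpu" };
    case "maximum performance":
      return { powerPreference: "high-performance" };
    case "maximum efficiency":
      return { powerPreference: "low-power" };
    case "minimum overall power":
      // Broader than per-model efficiency; surfaced here as the same
      // low-power hint, leaving the actual device choice to the platform.
      return { powerPreference: "low-power" };
    default:
      // No preference: let the underlying platform choose the device.
      return {};
  }
}

// Usage in a browser with WebNN support (illustrative):
// const context = await navigator.ml.createContext(toContextOptions("prefer NPU"));
```

Note that every branch is a hint, not a guarantee: consistent with the design decisions below, the underlying platform remains free to pick a different device.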
Design decisions may take the following into account:
1. Allow the underlying platform to ultimately choose the compute device.
@@ -191,3 +214,6 @@ Other use cases were raised as well, in [this comment](https://github.com/webmac
> 1. If the user selects to use functionality like background blur, we want to offer the best quality the device can offer. So the product has a small set of candidate models and technologies (WebNN, WebGPU, Wasm) that it has to choose between. Accelerated technologies come with allowance for beefier models.

> 2. The model/tech chooser algorithm needs to be fast, and we need to avoid spending seconds or even hundreds of milliseconds to figure out whether a given model should be able to run accelerated. So, for example, downloading a model in its entirety (which could be large), then compiling and try-running it seems infeasible.
## References

[1] ONNX Runtime, `OrtExecutionProviderDevicePolicy`: https://onnxruntime.ai/docs/api/c/group___global.html#gaf26ca954c79d297a31a66187dd1b4e24
