Hi! This derives from the discussion in #815 (comment).
Ultimately, which devices are used can only be known after a model is provided. This is a different problem from the context-level device query mechanism, so I decided to file a separate issue.
I've tested with CoreML's MLComputePlan, which can give you op-level device selections after a model is compiled (before calling dispatch).
TFLite also reports which delegate is used for each op after a graph is compiled.
So for WebNN, we could attach such information to MLGraph, which represents a compiled graph.
The first piece of high-level information we could attach is the list of devices that will be used to execute the graph, since a graph can sometimes be executed across multiple devices.
```js
const graph = await builder.build({'C': C});
console.log(graph.devices); // ['cpu', 'npu']
```
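For example, an app could check this list after build and decide whether to keep the graph or take another path. A minimal sketch, assuming the proposed devices attribute and a hypothetical app-side fallback policy:

```js
// Sketch only: graph.devices is the attribute proposed above, not part of WebNN today.
const context = await navigator.ml.createContext({powerPreference: 'low-power'});
const builder = new MLGraphBuilder(context);
const a = builder.input('a', {dataType: 'float32', shape: [2, 2]});
const b = builder.input('b', {dataType: 'float32', shape: [2, 2]});
const C = builder.add(a, b);
const graph = await builder.build({'C': C});

if (!graph.devices.includes('npu')) {
  // Hypothetical app policy: if nothing landed on the NPU, record telemetry or
  // switch to another execution path instead of dispatching this graph.
  console.warn('Graph will not use the NPU; devices:', graph.devices);
}
```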
We could further expose op-level device selection with a map or list:
```js
const graph = await builder.build({'C': C});
console.log(graph.devices); // ['cpu', 'npu']
console.log(graph.deviceForOperations);
// {
//   "add_1": "cpu",
//   "conv2d_2": "npu"
// }
```
The only catch is that we would need identifiers for each op. We could either:
1. Auto-generate op identifiers from the op name plus an auto-incrementing index, or
2. Use label and only return ops that have labels defined (see the sketch below).
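A minimal sketch of option 2, reusing the label member that already exists on MLOperatorOptions together with the proposed deviceForOperations attribute; only labeled ops would show up in the result:

```js
// Sketch only: label already exists on MLOperatorOptions; deviceForOperations is
// the proposed (not yet specified) attribute.
const input = builder.input('input', {dataType: 'float32', shape: [1, 3, 224, 224]});
const filter = builder.constant(
    {dataType: 'float32', shape: [16, 3, 3, 3]},
    new Float32Array(16 * 3 * 3 * 3));
const bias = builder.constant({dataType: 'float32', shape: [1]}, new Float32Array([1]));

const conv = builder.conv2d(input, filter, {label: 'conv2d_2'});
const out = builder.add(conv, bias, {label: 'add_1'});
const graph = await builder.build({'out': out});

// Under option 2, only the explicitly labeled ops are reported:
console.log(graph.deviceForOperations); // {"add_1": "cpu", "conv2d_2": "npu"}
```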
Maybe the high-level graph.devices alone is good enough for app developers to make decisions, but I want to show what's possible with current backends right now.