@@ -1209,15 +1209,15 @@ Size: 4

Alignment: 4

- ## <a href="#nn_errno" name="nn_errno"></a> `nn_errno`: Enum(`u16`)
+ ## <a href="#nn_errno" name="nn_errno"></a> `nn_errno`: `Variant`
Error codes returned by functions in this API. This is prefixed to avoid conflicts with the `$errno` in
`typenames.witx`.

Size: 2

Alignment: 2

- ### Variants
+ ### Variant cases
- <a href="#nn_errno.success" name="nn_errno.success"></a> `success`
No error occurred.
@@ -1230,7 +1230,7 @@ Caller module is missing a memory export.

- <a href="#nn_errno.busy" name="nn_errno.busy"></a> `busy`
Device or resource busy.

- ## <a href="#tensor_dimensions" name="tensor_dimensions"></a> `tensor_dimensions`: `Array<u32>`
+ ## <a href="#tensor_dimensions" name="tensor_dimensions"></a> `tensor_dimensions`: `List<u32>`
The dimensions of a tensor.

The array length matches the tensor rank and each element in the array
@@ -1240,14 +1240,14 @@ Size: 8

Alignment: 4

- ## <a href="#tensor_type" name="tensor_type"></a> `tensor_type`: Enum(`u8`)
+ ## <a href="#tensor_type" name="tensor_type"></a> `tensor_type`: `Variant`
The type of the elements in a tensor.

Size: 1

Alignment: 1

- ### Variants
+ ### Variant cases
- <a href="#tensor_type.f16" name="tensor_type.f16"></a> `f16`

- <a href="#tensor_type.f32" name="tensor_type.f32"></a> `f32`
@@ -1256,7 +1256,7 @@ Alignment: 1

- <a href="#tensor_type.i32" name="tensor_type.i32"></a> `i32`

- ## <a href="#tensor_data" name="tensor_data"></a> `tensor_data`: `Array<u8>`
+ ## <a href="#tensor_data" name="tensor_data"></a> `tensor_data`: `List<u8>`
The tensor data

Initially conceived as a sparse representation, each empty cell would be filled with zeroes and
@@ -1269,14 +1269,14 @@ Size: 8

Alignment: 4

- ## <a href="#tensor" name="tensor"></a> `tensor`: Struct
+ ## <a href="#tensor" name="tensor"></a> `tensor`: `Record`
A tensor.

Size: 20

Alignment: 4

- ### Struct members
+ ### Record members
- <a href="#tensor.dimensions" name="tensor.dimensions"></a> `dimensions`: [`tensor_dimensions`](#tensor_dimensions)
Describe the size of the tensor (e.g. 2x2x2x2 -> [2, 2, 2, 2]). To represent a tensor containing a single value,
use `[1]` for the tensor dimensions.
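To make the `tensor` record above concrete, the sketch below builds one in plain Rust. The struct and enum names are stand-ins invented for illustration; the only facts taken from the document are the three members (`dimensions`, `type`, `data`) and the convention that a single value uses `[1]` for its dimensions.

```rust
// Illustrative sketch only: plain Rust types standing in for the `tensor`
// record and its members; the names and in-memory layout here are
// assumptions, not the witx ABI.
#[derive(Debug, Clone, Copy)]
enum TensorType {
    F16,
    F32,
    U8,
    I32,
}

struct Tensor {
    dimensions: Vec<u32>, // tensor_dimensions: one entry per rank, e.g. [1] for a single value
    ty: TensorType,       // tensor_type: the element type of `data`
    data: Vec<u8>,        // tensor_data: the elements flattened to bytes
}

fn main() {
    // A 1x3 tensor of f32 values: the byte length of `data` must equal
    // (product of dimensions) * size_of::<f32>() = 3 * 4 = 12.
    let values = [0.1f32, 0.2, 0.3];
    let data: Vec<u8> = values.iter().flat_map(|v| v.to_le_bytes()).collect();
    let t = Tensor {
        dimensions: vec![1, 3],
        ty: TensorType::F32,
        data,
    };
    assert_eq!(t.data.len(), 12);
    println!("rank = {}, bytes = {}", t.dimensions.len(), t.data.len());
}
```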
@@ -1292,55 +1292,55 @@ Contains the tensor data.

Offset: 12

- ## <a href="#graph_builder" name="graph_builder"></a> `graph_builder`: `Array<u8>`
+ ## <a href="#graph_builder" name="graph_builder"></a> `graph_builder`: `List<u8>`
The graph initialization data. This consists of an array of buffers because implementing backends may encode their
graph IR in parts (e.g. OpenVINO stores its IR and weights separately).

Size: 8

Alignment: 4

- ## <a href="#graph_builder_array" name="graph_builder_array"></a> `graph_builder_array`: `Array<graph_builder>`
+ ## <a href="#graph_builder_array" name="graph_builder_array"></a> `graph_builder_array`: `List<graph_builder>`

Size: 8

Alignment: 4

- ## <a href="#graph" name="graph"></a> `graph`
+ ## <a href="#graph" name="graph"></a> `graph`: `Handle`
An execution graph for performing inference (i.e. a model).

Size: 4

Alignment: 4

### Supertypes
- ## <a href="#graph_encoding" name="graph_encoding"></a> `graph_encoding`: Enum(`u8`)
+ ## <a href="#graph_encoding" name="graph_encoding"></a> `graph_encoding`: `Variant`
Describes the encoding of the graph. This allows the API to be implemented by various backends that encode (i.e.
serialize) their graph IR differently.

Size: 1

Alignment: 1

- ### Variants
+ ### Variant cases
- <a href="#graph_encoding.openvino" name="graph_encoding.openvino"></a> `openvino`
TODO document buffer order

- ## <a href="#execution_target" name="execution_target"></a> `execution_target`: Enum(`u8`)
+ ## <a href="#execution_target" name="execution_target"></a> `execution_target`: `Variant`
Define where the graph should be executed.

Size: 1

Alignment: 1

- ### Variants
+ ### Variant cases
- <a href="#execution_target.cpu" name="execution_target.cpu"></a> `cpu`

- <a href="#execution_target.gpu" name="execution_target.gpu"></a> `gpu`

- <a href="#execution_target.tpu" name="execution_target.tpu"></a> `tpu`

- ## <a href="#graph_execution_context" name="graph_execution_context"></a> `graph_execution_context`
+ ## <a href="#graph_execution_context" name="graph_execution_context"></a> `graph_execution_context`: `Handle`
A $graph_execution_context allows for attaching inputs prior to calling [`compute`](#compute) on a graph and retrieving outputs after
the computation has completed. TODO a handle may not be the right type but we want it to be opaque to users.
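Since `graph_builder_array` is a list of byte buffers, a guest typically assembles it from however many parts the backend's IR comes in. The sketch below is illustrative only; the file names are hypothetical and merely echo the OpenVINO IR-plus-weights example mentioned above.

```rust
use std::fs;

// Sketch only: assembling a `graph_builder_array` as a list of byte buffers.
// The file names are hypothetical; the spec only says that backends may split
// their graph IR into parts, as in the OpenVINO example above.
fn main() -> std::io::Result<()> {
    let ir: Vec<u8> = fs::read("model.xml")?;      // hypothetical IR file
    let weights: Vec<u8> = fs::read("model.bin")?; // hypothetical weights file

    // Each part becomes its own `graph_builder`; together they form the array
    // that would be handed to `load`.
    let builder_array: Vec<Vec<u8>> = vec![ir, weights];
    println!("graph_builder_array with {} buffers", builder_array.len());
    Ok(())
}
```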
@@ -2705,7 +2705,7 @@ Which channels on the socket to shut down.

---

- #### <a href="#load" name="load"></a> `load(builder: graph_builder_array, encoding: graph_encoding, target: execution_target) -> (nn_errno, graph)`
+ #### <a href="#load" name="load"></a> `load(builder: graph_builder_array, encoding: graph_encoding, target: execution_target) -> Result<graph, nn_errno>`
Load an opaque sequence of bytes to use for inference.

This allows runtime implementations to support multiple graph encoding formats. For unsupported graph encodings,
@@ -2722,14 +2722,21 @@ The encoding of the graph.
Where to execute the graph.

##### Results
- - <a href="#load.error" name="load.error"></a> `error`: [`nn_errno`](#nn_errno)
+ - <a href="#load.error" name="load.error"></a> `error`: `Result<graph, nn_errno>`
+
+ ###### Variant Layout
+ - size: 8
+ - align: 4
+ - tag_size: 4
+ ###### Variant cases
+ - <a href="#load.error.ok" name="load.error.ok"></a> `ok`: [`graph`](#graph)

- - <a href="#load.graph" name="load.graph"></a> `graph`: [`graph`](#graph)
+ - <a href="#load.error.err" name="load.error.err"></a> `err`: [`nn_errno`](#nn_errno)

---

- #### <a href="#init_execution_context" name="init_execution_context"></a> `init_execution_context(graph: graph) -> (nn_errno, graph_execution_context)`
+ #### <a href="#init_execution_context" name="init_execution_context"></a> `init_execution_context(graph: graph) -> Result<graph_execution_context, nn_errno>`
TODO Functions like `describe_graph_inputs` and `describe_graph_outputs` (returning
an array of `$tensor_description`s) might be useful for introspecting the graph but are not yet included here.
Create an execution instance of a loaded graph.
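The hunk above changes `load` to return `Result<graph, nn_errno>`. A minimal guest-side sketch of consuming that shape follows; `load_stub` is a local placeholder for the host import, and its errno value and returned handle are invented only so the example compiles and runs.

```rust
// Sketch of consuming the new `load(...) -> Result<graph, nn_errno>` shape.
type NnErrno = u16;
type Graph = u32; // `graph` is an opaque 4-byte handle

fn load_stub(builder: &[Vec<u8>], encoding_supported: bool) -> Result<Graph, NnErrno> {
    if builder.is_empty() || !encoding_supported {
        return Err(1); // stand-in errno for an unsupported or empty graph encoding
    }
    Ok(1) // stand-in graph handle
}

fn main() {
    let builder = vec![vec![0u8; 16]]; // placeholder graph IR bytes
    match load_stub(&builder, true) {
        Ok(graph) => println!("loaded graph handle {}", graph),
        Err(errno) => eprintln!("load failed with nn_errno {}", errno),
    }
}
```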
@@ -2739,14 +2746,21 @@ TODO this may need to accept flags that might affect the compilation or executio
- <a href="#init_execution_context.graph" name="init_execution_context.graph"></a> `graph`: [`graph`](#graph)

##### Results
- - <a href="#init_execution_context.error" name="init_execution_context.error"></a> `error`: [`nn_errno`](#nn_errno)
+ - <a href="#init_execution_context.error" name="init_execution_context.error"></a> `error`: `Result<graph_execution_context, nn_errno>`
+
+ ###### Variant Layout
+ - size: 8
+ - align: 4
+ - tag_size: 4
+ ###### Variant cases
+ - <a href="#init_execution_context.error.ok" name="init_execution_context.error.ok"></a> `ok`: [`graph_execution_context`](#graph_execution_context)

- - <a href="#init_execution_context.context" name="init_execution_context.context"></a> `context`: [`graph_execution_context`](#graph_execution_context)
+ - <a href="#init_execution_context.error.err" name="init_execution_context.error.err"></a> `err`: [`nn_errno`](#nn_errno)

---

- #### <a href="#set_input" name="set_input"></a> `set_input(context: graph_execution_context, index: u32, tensor: tensor) -> nn_errno`
+ #### <a href="#set_input" name="set_input"></a> `set_input(context: graph_execution_context, index: u32, tensor: tensor) -> Result<(), nn_errno>`
Define the inputs to use for inference.

This should return an $nn_errno (TODO define) if the input tensor does not match the expected dimensions and type.
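The Variant Layout numbers added for the `init_execution_context` result (size 8, align 4, tag_size 4) can be checked with a small `repr(C)` model: a 4-byte tag followed by a 4-byte-aligned payload union. This is only an illustration of those numbers; the tag encoding (0 = ok, 1 = err) is assumed, not taken from the spec.

```rust
use std::mem::{align_of, size_of};

// Model of the documented `Result<graph_execution_context, nn_errno>` layout.
#[repr(C)]
union ResultPayload {
    ok: u32,  // graph_execution_context handle
    err: u16, // nn_errno
}

#[repr(C)]
struct ResultRepr {
    tag: u32, // which case is active; occupies the first 4 bytes (tag_size = 4)
    payload: ResultPayload,
}

fn main() {
    assert_eq!(size_of::<ResultRepr>(), 8);
    assert_eq!(align_of::<ResultRepr>(), 4);
    println!("size = 8, align = 4, tag_size = 4, as documented");
}
```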
@@ -2761,12 +2775,21 @@ The index of the input to change.
The tensor to set as the input.

##### Results
- - <a href="#set_input.error" name="set_input.error"></a> `error`: [`nn_errno`](#nn_errno)
+ - <a href="#set_input.error" name="set_input.error"></a> `error`: `Result<(), nn_errno>`
+
+ ###### Variant Layout
+ - size: 8
+ - align: 4
+ - tag_size: 4
+ ###### Variant cases
+ - <a href="#set_input.error.ok" name="set_input.error.ok"></a> `ok`
+
+ - <a href="#set_input.error.err" name="set_input.error.err"></a> `err`: [`nn_errno`](#nn_errno)

---

- #### <a href="#get_output" name="get_output"></a> `get_output(context: graph_execution_context, index: u32, out_buffer: Pointer<u8>, out_buffer_max_size: buffer_size) -> (nn_errno, buffer_size)`
+ #### <a href="#get_output" name="get_output"></a> `get_output(context: graph_execution_context, index: u32, out_buffer: Pointer<u8>, out_buffer_max_size: buffer_size) -> Result<buffer_size, nn_errno>`
Extract the outputs after inference.

This should return an $nn_errno (TODO define) if the inference has not yet run.
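Because `set_input` is expected to fail with an `nn_errno` when the tensor does not match the graph's expected dimensions and type, a guest can cheaply pre-check that a tensor's byte length matches its declared shape before calling it. A sketch, assuming 4-byte `f32` elements and a hypothetical image-shaped input:

```rust
// Guest-side sanity check before `set_input`: data length must equal the
// product of the dimensions times the element size.
fn expected_byte_len(dimensions: &[u32], element_size: usize) -> usize {
    dimensions.iter().map(|&d| d as usize).product::<usize>() * element_size
}

fn main() {
    let dimensions = [1u32, 3, 224, 224];        // hypothetical input shape
    let data = vec![0u8; 1 * 3 * 224 * 224 * 4]; // f32 elements, 4 bytes each

    assert_eq!(expected_byte_len(&dimensions, 4), data.len());
    println!("tensor data length matches its declared shape; safe to call set_input");
}
```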
@@ -2785,15 +2808,22 @@ tensor metadata (i.e. dimension, element type) but this should be added at some
- <a href="#get_output.out_buffer_max_size" name="get_output.out_buffer_max_size"></a> `out_buffer_max_size`: [`buffer_size`](#buffer_size)

##### Results
- - <a href="#get_output.error" name="get_output.error"></a> `error`: [`nn_errno`](#nn_errno)
-
- - <a href="#get_output.bytes_written" name="get_output.bytes_written"></a> `bytes_written`: [`buffer_size`](#buffer_size)
+ - <a href="#get_output.error" name="get_output.error"></a> `error`: `Result<buffer_size, nn_errno>`
The number of bytes of tensor data written to the `$out_buffer`.

+ ###### Variant Layout
+ - size: 8
+ - align: 4
+ - tag_size: 4
+ ###### Variant cases
+ - <a href="#get_output.error.ok" name="get_output.error.ok"></a> `ok`: [`buffer_size`](#buffer_size)
+
+ - <a href="#get_output.error.err" name="get_output.error.err"></a> `err`: [`nn_errno`](#nn_errno)
+

---

- #### <a href="#compute" name="compute"></a> `compute(context: graph_execution_context) -> nn_errno`
+ #### <a href="#compute" name="compute"></a> `compute(context: graph_execution_context) -> Result<(), nn_errno>`
Compute the inference on the given inputs (see [`set_input`](#set_input)).

This should return an $nn_errno (TODO define) if the inputs are not all defined.
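`get_output` follows a caller-supplied-buffer convention: the guest passes `out_buffer` and `out_buffer_max_size`, and on success reads back the number of bytes written. The sketch below mimics that contract with a local stand-in for the host call; the 12-byte output and the errno value are placeholders.

```rust
// Sketch of the `get_output` buffer contract.
fn fill_output(out_buffer: &mut [u8]) -> Result<usize, u16> {
    let produced = [0u8; 12]; // pretend inference produced 12 bytes of tensor data
    if produced.len() > out_buffer.len() {
        return Err(1); // a real implementation would report an nn_errno here
    }
    out_buffer[..produced.len()].copy_from_slice(&produced);
    Ok(produced.len())
}

fn main() {
    let mut out_buffer = vec![0u8; 1024]; // out_buffer_max_size = 1024
    match fill_output(&mut out_buffer) {
        Ok(bytes_written) => {
            out_buffer.truncate(bytes_written); // keep only the bytes actually written
            println!("received {} bytes of tensor data", out_buffer.len());
        }
        Err(errno) => eprintln!("get_output failed with nn_errno {}", errno),
    }
}
```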
@@ -2802,5 +2832,14 @@ This should return an $nn_errno (TODO define) if the inputs are not all defined.
- <a href="#compute.context" name="compute.context"></a> `context`: [`graph_execution_context`](#graph_execution_context)

##### Results
- - <a href="#compute.error" name="compute.error"></a> `error`: [`nn_errno`](#nn_errno)
+ - <a href="#compute.error" name="compute.error"></a> `error`: `Result<(), nn_errno>`
+
+ ###### Variant Layout
+ - size: 8
+ - align: 4
+ - tag_size: 4
+ ###### Variant cases
+ - <a href="#compute.error.ok" name="compute.error.ok"></a> `ok`
+
+ - <a href="#compute.error.err" name="compute.error.err"></a> `err`: [`nn_errno`](#nn_errno)
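Taken together, the functions touched by this diff describe one inference flow: `load`, `init_execution_context`, `set_input`, `compute`, then `get_output`. The sketch below wires that sequence up with local placeholder functions whose shapes mirror the documented signatures; the handle values, encoding/target placeholders, builder bytes, and output bytes are invented so the example compiles and runs.

```rust
// End-to-end sketch of the documented call sequence, using local stubs in
// place of the wasi-nn host imports.
type NnErrno = u16;
type Graph = u32;                 // handle
type GraphExecutionContext = u32; // handle
type GraphEncoding = u8;          // placeholder for the graph_encoding variant
type ExecutionTarget = u8;        // placeholder for the execution_target variant

struct Tensor {
    dimensions: Vec<u32>,
    data: Vec<u8>,
}

fn load(_b: &[Vec<u8>], _e: GraphEncoding, _t: ExecutionTarget) -> Result<Graph, NnErrno> {
    Ok(1)
}
fn init_execution_context(_graph: Graph) -> Result<GraphExecutionContext, NnErrno> {
    Ok(1)
}
fn set_input(_ctx: GraphExecutionContext, _index: u32, _tensor: &Tensor) -> Result<(), NnErrno> {
    Ok(())
}
fn compute(_ctx: GraphExecutionContext) -> Result<(), NnErrno> {
    Ok(())
}
fn get_output(_ctx: GraphExecutionContext, _index: u32, out: &mut [u8]) -> Result<usize, NnErrno> {
    out[..4].copy_from_slice(&0.5f32.to_le_bytes()); // pretend result: one f32
    Ok(4)
}

fn main() -> Result<(), NnErrno> {
    let builder = vec![vec![0u8; 16]]; // placeholder graph IR bytes
    let graph = load(&builder, 0, 0)?;
    let ctx = init_execution_context(graph)?;

    let input = Tensor { dimensions: vec![1, 4], data: vec![0u8; 16] };
    set_input(ctx, 0, &input)?;
    compute(ctx)?;

    let mut out = vec![0u8; 64];
    let written = get_output(ctx, 0, &mut out)?;
    println!("inference produced {} bytes", written);
    Ok(())
}
```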