## Data schema for online scoring

In this section, we document the input data format required to make predictions using a deployed model.

### Input format

The following is the input format needed to generate predictions on any task, using the task-specific model endpoint.
This JSON is a dictionary with the outer key `input_data` and the inner keys `columns` and `data`, as described in the following table. The endpoint accepts a JSON string in this format and converts it into a dataframe of samples required by the scoring script. Each input image in the `request_json["input_data"]["data"]` section of the JSON is a [base64 encoded string](https://docs.python.org/3/library/base64.html#base64.encodebytes).

| Key | Description |
| -------- |----------|
|`input_data`<br> (outer key) | It is the outer key in the JSON request. `input_data` is a dictionary that accepts input image samples. <br>`Required, Dictionary`|
|`columns`<br> (inner key) | Column names used to create the dataframe. It accepts only one column, with `image` as the column name.<br>`Required, List`|
|`data`<br> (inner key) | List of base64 encoded images. <br>`Required, List`|

After we [deploy the MLflow model](how-to-auto-train-image-models.md#register-and-deploy-model), we can use the following code snippet to get predictions for all tasks.
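As a minimal sketch of how the request payload described above can be assembled (the helper name `build_request` and the image path are illustrative, not from the original snippet), the base64 encoding and JSON wrapping might look like this:

```python
import base64
import json


def build_request(image_path):
    """Build the scoring request JSON for a single image.

    The image bytes are base64 encoded, as required by the endpoint,
    and wrapped under the outer key "input_data" with the inner keys
    "columns" and "data" described in the table above.
    """
    with open(image_path, "rb") as f:
        encoded = base64.encodebytes(f.read()).decode("utf-8")
    return json.dumps({
        "input_data": {
            "columns": ["image"],
            "data": [encoded],
        }
    })
```

The resulting string can then be posted to the deployed endpoint (for example with `requests.post`), together with the endpoint's authentication header.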
### Output format

Predictions made on model endpoints follow a different structure depending on the task type. This section explores the output data formats for multi-class and multi-label image classification, object detection, and instance segmentation tasks.

The following schemas are applicable when the input request contains one image.

#### Image classification

The endpoint for image classification returns all the labels in the dataset and their probability scores for the input image, in the following format.

```json
[
   {
      "filename": "/tmp/tmppjr4et28",
      "probs": [
         2.098e-06,
         4.783e-08,
         0.999,
         8.637e-06
      ],
      "labels": [
         "can",
         "carton",
         "milk_bottle",
         "water_bottle"
      ]
   }
]
```
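Since `probs` and `labels` are parallel lists, the predicted class is the label at the index of the highest probability. A small helper (a sketch, assuming the response has been deserialized into a Python list like the one above) illustrates this:

```python
def top_prediction(response):
    """Return the (label, probability) pair with the highest score
    from a multi-class classification response (one entry per image)."""
    result = response[0]
    # Index of the highest probability; labels are parallel to probs.
    best = max(range(len(result["probs"])), key=lambda i: result["probs"][i])
    return result["labels"][best], result["probs"][best]
```

For the sample response above, this returns `("milk_bottle", 0.999)`.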

#### Image classification multi-label

For image classification multi-label, the model endpoint returns labels and their probabilities.

```json
[
   {
      "filename": "/tmp/tmpsdzxlmlm",
      "probs": [
         0.997,
         0.960,
         0.982,
         0.025
      ],
      "labels": [
         "can",
         "carton",
         "milk_bottle",
         "water_bottle"
      ]
   }
]
```
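In the multi-label case, several labels can apply to one image, so the probabilities are typically thresholded rather than argmaxed. A sketch (the threshold value 0.5 is an assumption for illustration, not prescribed by the endpoint):

```python
def predicted_labels(response, threshold=0.5):
    """Select the labels whose probability exceeds the threshold
    in a multi-label classification response (one entry per image)."""
    result = response[0]
    return [label
            for label, prob in zip(result["labels"], result["probs"])
            if prob > threshold]
```

For the sample response above, the default threshold yields `["can", "carton", "milk_bottle"]`.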

#### Object detection

The object detection model returns multiple boxes, each with its scaled top-left and bottom-right coordinates, along with the box label and confidence score.

```json
[
   {
      "filename": "/tmp/tmpdkg2wkdy",
      "boxes": [
         {
            "box": {
               "topX": 0.224,
               "topY": 0.285,
               "bottomX": 0.399,
               "bottomY": 0.620
            },
            "label": "milk_bottle",
            "score": 0.937
         },
         {
            "box": {
               "topX": 0.664,
               "topY": 0.484,
               "bottomX": 0.959,
               "bottomY": 0.812
            },
            "label": "can",
            "score": 0.891
         },
         {
            "box": {
               "topX": 0.423,
               "topY": 0.253,
               "bottomX": 0.632,
               "bottomY": 0.725
            },
            "label": "water_bottle",
            "score": 0.876
         }
      ]
   }
]
```
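The box coordinates are scaled to the range [0, 1] by the image's width and height, so drawing them requires mapping back to pixels. A small helper (a sketch, not part of the original snippet) for that conversion:

```python
def box_to_pixels(box, width, height):
    """Convert a scaled box (coordinates in [0, 1]) to pixel coordinates
    for an image of the given width and height in pixels."""
    return {
        "topX": round(box["topX"] * width),
        "topY": round(box["topY"] * height),
        "bottomX": round(box["bottomX"] * width),
        "bottomY": round(box["bottomY"] * height),
    }
```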

#### Instance segmentation

In instance segmentation, the output consists of multiple boxes, each with its scaled top-left and bottom-right coordinates, labels, confidence scores, and polygons (not masks). Here, the polygon values are in the same format that we discussed in the schema section.