## Data schema for online scoring

In this section, we document the input data format required to make predictions using a deployed model.
### Input format
The following is the input format needed to generate predictions on any task using the task-specific model endpoint.

```json
{
  "input_data": {
    "columns": [
      "image"
    ],
    "data": [
      "image_in_base64_string_format"
    ]
  }
}
```

This JSON is a dictionary with the outer key `input_data` and the inner keys `columns` and `data`, as described in the following table. The endpoint accepts a JSON string in the above format, which is decoded into the dataframe of samples required by the scoring script. Each input image in `request_json["input_data"]["data"]` is a [base64 encoded string](https://docs.python.org/3/library/base64.html#base64.encodebytes).
| Key | Description |
| -------- | ---------- |
| `input_data`<br> (outer key) | The outer key of the JSON request. `input_data` is a dictionary that accepts input image samples. <br>`Required, Dictionary` |
| `columns`<br> (inner key) | Column names used to create the dataframe in the scoring script. It accepts only one column, named `image`. <br>`Required, List` |
| `data`<br> (inner key) | List of base64 encoded images. <br>`Required, List` |

After we [deploy the MLflow model](how-to-auto-train-image-models.md#register-and-deploy-model), we can use the following code snippet to get predictions for all tasks.
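A minimal sketch of such a snippet, assuming the Azure Machine Learning Python SDK v2 and a managed online endpoint; the workspace details, endpoint and deployment names, and `sample_image.jpg` are placeholders.

```python
import base64
import json

from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential

# Connect to the workspace (placeholder identifiers).
ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace>",
)

# Base64 encode the input image, as required by the request schema above.
with open("sample_image.jpg", "rb") as f:
    image_b64 = base64.encodebytes(f.read()).decode("utf-8")

request_json = {
    "input_data": {
        "columns": ["image"],
        "data": [image_b64],
    }
}

# The invoke call reads the request body from a file.
with open("request.json", "w") as f:
    json.dump(request_json, f)

response = ml_client.online_endpoints.invoke(
    endpoint_name="<endpoint-name>",
    deployment_name="<deployment-name>",
    request_file="request.json",
)
print(response)
```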
### Output format

Predictions made on model endpoints follow a different structure depending on the task type. This section explores the output data formats for multi-class image classification, multi-label image classification, object detection, and instance segmentation tasks.

The following schemas are defined for the case of one input image.
#### Image classification

The endpoint for image classification returns all the labels in the dataset and their probability scores for the input image in the following format.
```json
[
  {
    "filename": "/tmp/tmppjr4et28",
    "probs": [
      2.098e-06,
      4.783e-08,
      0.999,
      8.637e-06
    ],
    "labels": [
      "can",
      "carton",
      "milk_bottle",
      "water_bottle"
    ]
  }
]
```
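To consume this response, pair each label with its probability score and take the highest one. A minimal sketch, using the example response above:

```python
import json

# Example response from the endpoint, as shown above.
response = """
[{"filename": "/tmp/tmppjr4et28",
  "probs": [2.098e-06, 4.783e-08, 0.999, 8.637e-06],
  "labels": ["can", "carton", "milk_bottle", "water_bottle"]}]
"""

for prediction in json.loads(response):
    # The most likely class is the label with the highest probability.
    best_label, best_prob = max(
        zip(prediction["labels"], prediction["probs"]),
        key=lambda pair: pair[1],
    )
    print(f"{prediction['filename']}: {best_label} ({best_prob:.3f})")
```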
#### Image classification multi-label

For multi-label image classification, the model endpoint returns labels and their probabilities.
```json
[
  {
    "filename": "/tmp/tmpsdzxlmlm",
    "probs": [
      0.997,
      0.960,
      0.982,
      0.025
    ],
    "labels": [
      "can",
      "carton",
      "milk_bottle",
      "water_bottle"
    ]
  }
]
```
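Unlike the multi-class case, these probabilities aren't reduced to a single argmax; each label is accepted independently by thresholding its score. A minimal sketch, assuming a hypothetical cutoff of 0.5:

```python
import json

# Example response from the endpoint, as shown above.
response = """
[{"filename": "/tmp/tmpsdzxlmlm",
  "probs": [0.997, 0.960, 0.982, 0.025],
  "labels": ["can", "carton", "milk_bottle", "water_bottle"]}]
"""
THRESHOLD = 0.5  # assumed cutoff; tune it for your precision/recall trade-off

for prediction in json.loads(response):
    # Keep every label whose probability clears the threshold.
    detected = [
        label
        for label, prob in zip(prediction["labels"], prediction["probs"])
        if prob >= THRESHOLD
    ]
    print(f"{prediction['filename']}: {detected}")
```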
#### Object detection

The object detection model returns multiple boxes with their scaled top-left and bottom-right coordinates, along with the box label and confidence score.
```json
[
  {
    "filename": "/tmp/tmpdkg2wkdy",
    "boxes": [
      {
        "box": {
          "topX": 0.224,
          "topY": 0.285,
          "bottomX": 0.399,
          "bottomY": 0.620
        },
        "label": "milk_bottle",
        "score": 0.937
      },
      {
        "box": {
          "topX": 0.664,
          "topY": 0.484,
          "bottomX": 0.959,
          "bottomY": 0.812
        },
        "label": "can",
        "score": 0.891
      },
      {
        "box": {
          "topX": 0.423,
          "topY": 0.253,
          "bottomX": 0.632,
          "bottomY": 0.725
        },
        "label": "water_bottle",
        "score": 0.876
      }
    ]
  }
]
```
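Because the box coordinates are scaled to the range [0, 1], they must be multiplied by the image width and height to recover pixel positions. A minimal sketch, assuming a hypothetical 640x480 input image and using the first box from the example above:

```python
import json

# One box trimmed from the example response above.
response = """
[{"filename": "/tmp/tmpdkg2wkdy",
  "boxes": [{"box": {"topX": 0.224, "topY": 0.285,
                     "bottomX": 0.399, "bottomY": 0.620},
             "label": "milk_bottle", "score": 0.937}]}]
"""
image_width, image_height = 640, 480  # assumed size of the input image

for detection in json.loads(response)[0]["boxes"]:
    box = detection["box"]
    # Scale the normalized coordinates back to pixels.
    left, top = box["topX"] * image_width, box["topY"] * image_height
    right, bottom = box["bottomX"] * image_width, box["bottomY"] * image_height
    print(
        f"{detection['label']} ({detection['score']:.3f}): "
        f"({left:.0f}, {top:.0f}) to ({right:.0f}, {bottom:.0f})"
    )
```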
#### Instance segmentation

In instance segmentation, the output consists of multiple boxes with their scaled top-left and bottom-right coordinates, labels, confidence scores, and polygons (not masks). Here, the polygon values are in the same format that we discussed in the schema section.
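The polygon coordinates are scaled the same way as the box coordinates. A minimal sketch of recovering pixel coordinates, assuming each polygon is a flat list of alternating normalized x and y values as in the training schema; the sample polygon and image size below are hypothetical:

```python
def polygon_to_pixels(polygon, image_width, image_height):
    """Convert a flat [x1, y1, x2, y2, ...] normalized polygon to pixel (x, y) pairs."""
    xs, ys = polygon[::2], polygon[1::2]
    return [(x * image_width, y * image_height) for x, y in zip(xs, ys)]

# Hypothetical normalized polygon on an assumed 640x480 image.
print(polygon_to_pixels([0.224, 0.285, 0.399, 0.285, 0.399, 0.620], 640, 480))
```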