
Commit ab608ff

AWS committed
Amazon Lookout for Vision Update: This release introduces support for image segmentation models and updates CPU accelerator options for models hosted on edge devices.
1 parent 49d9395 commit ab608ff

File tree

2 files changed: +70 -8 lines changed

Lines changed: 6 additions & 0 deletions
@@ -0,0 +1,6 @@
+{
+    "type": "feature",
+    "category": "Amazon Lookout for Vision",
+    "contributor": "",
+    "description": "This release introduces support for image segmentation models and updates CPU accelerator options for models hosted on edge devices."
+}

services/lookoutvision/src/main/resources/codegen-resources/service-2.json

Lines changed: 64 additions & 8 deletions
@@ -214,7 +214,7 @@
         {"shape":"ResourceNotFoundException"},
         {"shape":"ThrottlingException"}
       ],
-      "documentation":"<p>Detects anomalies in an image that you supply. </p> <p>The response from <code>DetectAnomalies</code> includes a boolean prediction that the image contains one or more anomalies and a confidence value for the prediction.</p> <note> <p>Before calling <code>DetectAnomalies</code>, you must first start your model with the <a>StartModel</a> operation. You are charged for the amount of time, in minutes, that a model runs and for the number of anomaly detection units that your model uses. If you are not using a model, use the <a>StopModel</a> operation to stop your model. </p> </note> <p>This operation requires permissions to perform the <code>lookoutvision:DetectAnomalies</code> operation.</p>"
+      "documentation":"<p>Detects anomalies in an image that you supply. </p> <p>The response from <code>DetectAnomalies</code> includes a boolean prediction that the image contains one or more anomalies and a confidence value for the prediction. If the model is an image segmentation model, the response also includes segmentation information for each type of anomaly found in the image.</p> <note> <p>Before calling <code>DetectAnomalies</code>, you must first start your model with the <a>StartModel</a> operation. You are charged for the amount of time, in minutes, that a model runs and for the number of anomaly detection units that your model uses. If you are not using a model, use the <a>StopModel</a> operation to stop your model. </p> </note> <p>For more information, see <i>Detecting anomalies in an image</i> in the Amazon Lookout for Vision developer guide.</p> <p>This operation requires permissions to perform the <code>lookoutvision:DetectAnomalies</code> operation.</p>"
     },
     "ListDatasetEntries":{
       "name":"ListDatasetEntries",
@@ -431,19 +431,54 @@
       "error":{"httpStatusCode":403},
       "exception":true
     },
+    "Anomaly":{
+      "type":"structure",
+      "members":{
+        "Name":{
+          "shape":"AnomalyName",
+          "documentation":"<p>The name of an anomaly type found in an image. <code>Name</code> maps to an anomaly type in the training dataset, apart from the anomaly type <code>background</code>. The service automatically inserts the <code>background</code> anomaly type into the response from <code>DetectAnomalies</code>. </p>"
+        },
+        "PixelAnomaly":{
+          "shape":"PixelAnomaly",
+          "documentation":"<p>Information about the pixel mask that covers an anomaly type.</p>"
+        }
+      },
+      "documentation":"<p>Information about an anomaly type found on an image by an image segmentation model. For more information, see <a>DetectAnomalies</a>.</p>"
+    },
     "AnomalyClassFilter":{
       "type":"string",
       "max":10,
       "min":1,
       "pattern":"(normal|anomaly)"
     },
+    "AnomalyList":{
+      "type":"list",
+      "member":{"shape":"Anomaly"}
+    },
+    "AnomalyMask":{
+      "type":"blob",
+      "max":5242880,
+      "min":1
+    },
+    "AnomalyName":{
+      "type":"string",
+      "max":256,
+      "min":1,
+      "pattern":"[a-zA-Z0-9]*"
+    },
     "Boolean":{"type":"boolean"},
     "ClientToken":{
       "type":"string",
       "max":64,
       "min":1,
       "pattern":"^[a-zA-Z0-9-]+$"
     },
+    "Color":{
+      "type":"string",
+      "max":7,
+      "min":7,
+      "pattern":"\\#[a-zA-Z0-9]{6}"
+    },
     "CompilerOptions":{
       "type":"string",
       "max":1024,
@@ -1013,14 +1048,22 @@
       },
       "IsAnomalous":{
         "shape":"Boolean",
-        "documentation":"<p>True if the image contains an anomaly, otherwise false.</p>"
+        "documentation":"<p>True if Amazon Lookout for Vision classifies the image as containing an anomaly, otherwise false.</p>"
       },
       "Confidence":{
         "shape":"Float",
-        "documentation":"<p>The confidence that Amazon Lookout for Vision has in the accuracy of the prediction.</p>"
+        "documentation":"<p>The confidence that Lookout for Vision has in the accuracy of the classification in <code>IsAnomalous</code>.</p>"
+      },
+      "Anomalies":{
+        "shape":"AnomalyList",
+        "documentation":"<p>If the model is an image segmentation model, <code>Anomalies</code> contains a list of anomaly types found in the image. There is one entry for each type of anomaly found (even if multiple instances of an anomaly type exist on the image). The first element in the list is always an anomaly type representing the image background ('background') and shouldn't be considered an anomaly. Amazon Lookout for Vision automatically adds the background anomaly type to the response, and you don't need to declare a background anomaly type in your dataset.</p> <p>If the list has one entry ('background'), no anomalies were found on the image.</p> <p>An image classification model doesn't return an <code>Anomalies</code> list. </p>"
+      },
+      "AnomalyMask":{
+        "shape":"AnomalyMask",
+        "documentation":"<p>If the model is an image segmentation model, <code>AnomalyMask</code> contains pixel masks that cover all anomaly types found on the image. Each anomaly type has a different mask color. To map a color to an anomaly type, see the <code>color</code> field of the <a>PixelAnomaly</a> object.</p> <p>An image classification model doesn't return an <code>AnomalyMask</code>. </p>"
       }
     },
-    "documentation":"<p>The prediction results from a call to <a>DetectAnomalies</a>.</p>"
+    "documentation":"<p>The prediction results from a call to <a>DetectAnomalies</a>. <code>DetectAnomalyResult</code> includes classification information for the prediction (<code>IsAnomalous</code> and <code>Confidence</code>). If the model you use is an image segmentation model, <code>DetectAnomalyResult</code> also includes segmentation information (<code>Anomalies</code> and <code>AnomalyMask</code>). Classification information is calculated separately from segmentation information and you shouldn't assume a relationship between them.</p>"
   },
   "ExceptionString":{"type":"string"},
   "Float":{"type":"float"},
@@ -1033,7 +1076,7 @@
       "members":{
         "CompilerOptions":{
           "shape":"CompilerOptions",
-          "documentation":"<p>Additional compiler options for the Greengrass component. Currently, only NVIDIA Graphics Processing Units (GPU) are supported. If you specify <code>TargetPlatform</code>, you must specify <code>CompilerOptions</code>. If you specify <code>TargetDevice</code>, don't specify <code>CompilerOptions</code>.</p> <p>For more information, see <i>Compiler options</i> in the Amazon Lookout for Vision Developer Guide. </p>"
+          "documentation":"<p>Additional compiler options for the Greengrass component. Currently, only NVIDIA Graphics Processing Units (GPU) and CPU accelerators are supported. If you specify <code>TargetDevice</code>, don't specify <code>CompilerOptions</code>.</p> <p>For more information, see <i>Compiler options</i> in the Amazon Lookout for Vision Developer Guide. </p>"
         },
         "TargetDevice":{
           "shape":"TargetDevice",
@@ -1696,6 +1739,20 @@
       "max":2048,
       "pattern":"^[a-zA-Z0-9\\/\\+\\=]{0,2048}$"
     },
+    "PixelAnomaly":{
+      "type":"structure",
+      "members":{
+        "TotalPercentageArea":{
+          "shape":"Float",
+          "documentation":"<p>The percentage area of the image that the anomaly type covers.</p>"
+        },
+        "Color":{
+          "shape":"Color",
+          "documentation":"<p>A hex color value for the mask that covers an anomaly type. Each anomaly type has a different mask color. The color maps to the color of the anomaly type used in the training dataset. </p>"
+        }
+      },
+      "documentation":"<p>Information about the pixels in an anomaly mask. For more information, see <a>Anomaly</a>. <code>PixelAnomaly</code> is only returned by image segmentation models.</p>"
+    },
     "ProjectArn":{"type":"string"},
     "ProjectDescription":{
       "type":"structure",
@@ -2069,8 +2126,7 @@
       "type":"structure",
       "required":[
         "Os",
-        "Arch",
-        "Accelerator"
+        "Arch"
       ],
       "members":{
         "Os":{
@@ -2083,7 +2139,7 @@
         },
         "Accelerator":{
           "shape":"TargetPlatformAccelerator",
-          "documentation":"<p>The target accelerator for the model. NVIDIA (Nvidia graphics processing unit) is the only accelerator that is currently supported. You must also specify the <code>gpu-code</code>, <code>trt-ver</code>, and <code>cuda-ver</code> compiler options. </p>"
+          "documentation":"<p>The target accelerator for the model. Currently, Amazon Lookout for Vision only supports NVIDIA (Nvidia graphics processing unit) and CPU accelerators. If you specify NVIDIA as an accelerator, you must also specify the <code>gpu-code</code>, <code>trt-ver</code>, and <code>cuda-ver</code> compiler options. If you don't specify an accelerator, Lookout for Vision uses the CPU for compilation and we highly recommend that you use the <a>GreengrassConfiguration$CompilerOptions</a> field. For example, you can use the following compiler options for CPU: </p> <ul> <li> <p> <code>mcpu</code>: CPU micro-architecture. For example, <code>{'mcpu': 'skylake-avx512'}</code> </p> </li> <li> <p> <code>mattr</code>: CPU flags. For example, <code>{'mattr': ['+neon', '+vfpv4']}</code> </p> </li> </ul>"
         }
       },
       "documentation":"<p>The platform on which a model runs on an AWS IoT Greengrass core device.</p>"
