|
12 | 12 | "DescribeModel": "<p>Describes a version of an Amazon Lookout for Vision model.</p> <p>This operation requires permissions to perform the <code>lookoutvision:DescribeModel</code> operation.</p>",
|
13 | 13 | "DescribeModelPackagingJob": "<p>Describes an Amazon Lookout for Vision model packaging job. </p> <p>This operation requires permissions to perform the <code>lookoutvision:DescribeModelPackagingJob</code> operation.</p> <p>For more information, see <i>Using your Amazon Lookout for Vision model on an edge device</i> in the Amazon Lookout for Vision Developer Guide. </p>",
|
14 | 14 | "DescribeProject": "<p>Describes an Amazon Lookout for Vision project.</p> <p>This operation requires permissions to perform the <code>lookoutvision:DescribeProject</code> operation.</p>",
|
15 | | - "DetectAnomalies": "<p>Detects anomalies in an image that you supply. </p> <p>The response from <code>DetectAnomalies</code> includes a boolean prediction that the image contains one or more anomalies and a confidence value for the prediction.</p> <note> <p>Before calling <code>DetectAnomalies</code>, you must first start your model with the <a>StartModel</a> operation. You are charged for the amount of time, in minutes, that a model runs and for the number of anomaly detection units that your model uses. If you are not using a model, use the <a>StopModel</a> operation to stop your model. </p> </note> <p>This operation requires permissions to perform the <code>lookoutvision:DetectAnomalies</code> operation.</p>", |
| 15 | + "DetectAnomalies": "<p>Detects anomalies in an image that you supply. </p> <p>The response from <code>DetectAnomalies</code> includes a boolean prediction that the image contains one or more anomalies and a confidence value for the prediction. If the model is an image segmentation model, the response also includes segmentation information for each type of anomaly found in the image.</p> <note> <p>Before calling <code>DetectAnomalies</code>, you must first start your model with the <a>StartModel</a> operation. You are charged for the amount of time, in minutes, that a model runs and for the number of anomaly detection units that your model uses. If you are not using a model, use the <a>StopModel</a> operation to stop your model. </p> </note> <p>For more information, see <i>Detecting anomalies in an image</i> in the Amazon Lookout for Vision developer guide.</p> <p>This operation requires permissions to perform the <code>lookoutvision:DetectAnomalies</code> operation.</p>", |
16 | 16 | "ListDatasetEntries": "<p>Lists the JSON Lines within a dataset. An Amazon Lookout for Vision JSON Line contains the anomaly information for a single image, including the image location and the assigned label.</p> <p>This operation requires permissions to perform the <code>lookoutvision:ListDatasetEntries</code> operation.</p>",
|
17 | 17 | "ListModelPackagingJobs": "<p> Lists the model packaging jobs created for an Amazon Lookout for Vision project. </p> <p>This operation requires permissions to perform the <code>lookoutvision:ListModelPackagingJobs</code> operation. </p> <p>For more information, see <i>Using your Amazon Lookout for Vision model on an edge device</i> in the Amazon Lookout for Vision Developer Guide. </p>",
|
18 | 18 | "ListModels": "<p>Lists the versions of a model in an Amazon Lookout for Vision project.</p> <p>The <code>ListModels</code> operation is eventually consistent. Recent calls to <code>CreateModel</code> might take a while to appear in the response from <code>ListProjects</code>.</p> <p>This operation requires permissions to perform the <code>lookoutvision:ListModels</code> operation.</p>",
|
|
31 | 31 | "refs": {
|
32 | 32 | }
|
33 | 33 | },
|
| 34 | + "Anomaly": { |
| 35 | + "base": "<p>Information about an anomaly type found on an image by an image segmentation model. For more information, see <a>DetectAnomalies</a>.</p>", |
| 36 | + "refs": { |
| 37 | + "AnomalyList$member": null |
| 38 | + } |
| 39 | + }, |
34 | 40 | "AnomalyClassFilter": {
|
35 | 41 | "base": null,
|
36 | 42 | "refs": {
|
37 | 43 | "ListDatasetEntriesRequest$AnomalyClass": "<p>Specify <code>normal</code> to include only normal images. Specify <code>anomaly</code> to only include anomalous entries. If you don't specify a value, Amazon Lookout for Vision returns normal and anomalous images.</p>"
|
38 | 44 | }
|
39 | 45 | },
|
| 46 | + "AnomalyList": { |
| 47 | + "base": null, |
| 48 | + "refs": { |
| 49 | + "DetectAnomalyResult$Anomalies": "<p>If the model is an image segmentation model, <code>Anomalies</code> contains a list of anomaly types found in the image. There is one entry for each type of anomaly found (even if multiple instances of an anomaly type exist on the image). The first element in the list is always an anomaly type representing the image background ('background') and shouldn't be considered an anomaly. Amazon Lookout for Vision automatically add the background anomaly type to the response, and you don't need to declare a background anomaly type in your dataset.</p> <p>If the list has one entry ('background'), no anomalies were found on the image.</p> <p/> <p>An image classification model doesn't return an <code>Anomalies</code> list. </p>" |
| 50 | + } |
| 51 | + }, |
| 52 | + "AnomalyMask": { |
| 53 | + "base": null, |
| 54 | + "refs": { |
| 55 | + "DetectAnomalyResult$AnomalyMask": "<p>If the model is an image segmentation model, <code>AnomalyMask</code> contains pixel masks that covers all anomaly types found on the image. Each anomaly type has a different mask color. To map a color to an anomaly type, see the <code>color</code> field of the <a>PixelAnomaly</a> object.</p> <p>An image classification model doesn't return an <code>Anomalies</code> list. </p>" |
| 56 | + } |
| 57 | + }, |
| 58 | + "AnomalyName": { |
| 59 | + "base": null, |
| 60 | + "refs": { |
| 61 | + "Anomaly$Name": "<p>The name of an anomaly type found in an image. <code>Name</code> maps to an anomaly type in the training dataset, apart from the anomaly type <code>background</code>. The service automatically inserts the <code>background</code> anomaly type into the response from <code>DetectAnomalies</code>. </p>" |
| 62 | + } |
| 63 | + }, |
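
Taken together, `Anomaly`, `AnomalyList`, `AnomalyMask`, and `AnomalyName` describe the segmentation half of a `DetectAnomalies` response. A sketch of reading them, assuming `result` is the `DetectAnomalyResult` dict from the earlier sketch (using Pillow to view the mask is an assumption):

```python
import io

from PIL import Image  # assumption: Pillow is installed to view the mask

# Segmentation models add an Anomalies list and an AnomalyMask blob.
anomalies = result.get("Anomalies", [])

# The first entry is always the 'background' type and isn't an anomaly.
found = [a for a in anomalies if a["Name"] != "background"]
if not found:
    print("No anomalies found.")
for anomaly in found:
    pixel = anomaly["PixelAnomaly"]
    print(anomaly["Name"], pixel["Color"], pixel["TotalPercentageArea"])

# AnomalyMask holds the mask image bytes; each anomaly type is drawn
# in the color reported by its PixelAnomaly.
if result.get("AnomalyMask"):
    Image.open(io.BytesIO(result["AnomalyMask"])).show()
```
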
40 | 64 | "Boolean": {
|
41 | 65 | "base": null,
|
42 | 66 | "refs": {
|
43 | | - "DetectAnomalyResult$IsAnomalous": "<p>True if the image contains an anomaly, otherwise false.</p>" |
| 67 | + "DetectAnomalyResult$IsAnomalous": "<p>True if Amazon Lookout for Vision classifies the image as containing an anomaly, otherwise false.</p>" |
44 | 68 | }
|
45 | 69 | },
|
46 | 70 | "ClientToken": {
|
|
58 | 82 | "UpdateDatasetEntriesRequest$ClientToken": "<p>ClientToken is an idempotency token that ensures a call to <code>UpdateDatasetEntries</code> completes only once. You choose the value to pass. For example, An issue might prevent you from getting a response from <code>UpdateDatasetEntries</code>. In this case, safely retry your call to <code>UpdateDatasetEntries</code> by using the same <code>ClientToken</code> parameter value.</p> <p>If you don't supply a value for <code>ClientToken</code>, the AWS SDK you are using inserts a value for you. This prevents retries after a network error from making multiple updates with the same dataset entries. You'll need to provide your own value for other use cases. </p> <p>An error occurs if the other input parameters are not the same as in the first request. Using a different value for <code>ClientToken</code> is considered a new call to <code>UpdateDatasetEntries</code>. An idempotency token is active for 8 hours. </p>"
|
59 | 83 | }
|
60 | 84 | },
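
The retry behavior described above is easy to get wrong. A sketch of supplying your own `ClientToken` so a retried `UpdateDatasetEntries` call is applied only once (all names and the JSON Line are placeholders):

```python
import uuid

import boto3
from botocore.exceptions import ConnectionError as BotoConnectionError

client = boto3.client("lookoutvision")

# One token for the whole logical update; reuse it on every retry.
token = str(uuid.uuid4())
changes = b'{"source-ref": "s3://my-bucket/image.jpg"}'  # placeholder JSON Line

for attempt in range(3):
    try:
        client.update_dataset_entries(
            ProjectName="my-project",  # placeholder
            DatasetType="train",
            Changes=changes,
            ClientToken=token,         # same value, so retries are idempotent
        )
        break
    except BotoConnectionError:
        continue  # safe: identical parameters plus the same ClientToken
```
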
|
| 85 | + "Color": { |
| 86 | + "base": null, |
| 87 | + "refs": { |
| 88 | + "PixelAnomaly$Color": "<p>A hex color value for the mask that covers an anomaly type. Each anomaly type has a different mask color. The color maps to the color of the anomaly type used in the training dataset. </p>" |
| 89 | + } |
| 90 | + }, |
61 | 91 | "CompilerOptions": {
|
62 | 92 | "base": null,
|
63 | 93 | "refs": {
|
64 | | - "GreengrassConfiguration$CompilerOptions": "<p>Additional compiler options for the Greengrass component. Currently, only NVIDIA Graphics Processing Units (GPU) are supported. If you specify <code>TargetPlatform</code>, you must specify <code>CompilerOptions</code>. If you specify <code>TargetDevice</code>, don't specify <code>CompilerOptions</code>.</p> <p>For more information, see <i>Compiler options</i> in the Amazon Lookout for Vision Developer Guide. </p>" |
| 94 | + "GreengrassConfiguration$CompilerOptions": "<p>Additional compiler options for the Greengrass component. Currently, only NVIDIA Graphics Processing Units (GPU) and CPU accelerators are supported. If you specify <code>TargetDevice</code>, don't specify <code>CompilerOptions</code>.</p> <p>For more information, see <i>Compiler options</i> in the Amazon Lookout for Vision Developer Guide. </p>" |
65 | 95 | }
|
66 | 96 | },
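
A sketch of the `TargetDevice` path, where `CompilerOptions` must be omitted; project, job, component, and bucket names are placeholder assumptions:

```python
import boto3

client = boto3.client("lookoutvision")

# Package a model as a Greengrass component for a specific device.
# With TargetDevice, CompilerOptions is omitted; with TargetPlatform
# you can (and for CPU targets should) pass CompilerOptions.
client.start_model_packaging_job(
    ProjectName="my-project",
    ModelVersion="1",
    JobName="my-packaging-job",
    Configuration={
        "Greengrass": {
            "TargetDevice": "jetson_xavier",  # so no CompilerOptions here
            "ComponentName": "com.example.lfv.my-model",
            "S3OutputLocation": {"Bucket": "my-bucket", "Prefix": "components/"},
        }
    },
)
```
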
|
67 | 97 | "ComponentDescription": {
|
|
313 | 343 | }
|
314 | 344 | },
|
315 | 345 | "DetectAnomalyResult": {
|
316 | | - "base": "<p>The prediction results from a call to <a>DetectAnomalies</a>.</p>", |
| 346 | + "base": "<p>The prediction results from a call to <a>DetectAnomalies</a>. <code>DetectAnomalyResult</code> includes classification information for the prediction (<code>IsAnomalous</code> and <code>Confidence</code>). If the model you use is an image segementation model, <code>DetectAnomalyResult</code> also includes segmentation information (<code>Anomalies</code> and <code>AnomalyMask</code>). Classification information is calculated separately from segmentation information and you shouldn't assume a relationship between them.</p>", |
317 | 347 | "refs": {
|
318 | 348 | "DetectAnomaliesResponse$DetectAnomalyResult": "<p>The results of the <code>DetectAnomalies</code> operation.</p>"
|
319 | 349 | }
|
|
340 | 370 | "Float": {
|
341 | 371 | "base": null,
|
342 | 372 | "refs": {
|
343 | | - "DetectAnomalyResult$Confidence": "<p>The confidence that Amazon Lookout for Vision has in the accuracy of the prediction.</p>", |
| 373 | + "DetectAnomalyResult$Confidence": "<p>The confidence that Lookout for Vision has in the accuracy of the classification in <code>IsAnomalous</code>.</p>", |
344 | 374 | "ModelPerformance$F1Score": "<p>The overall F1 score metric for the trained model.</p>",
|
345 | 375 | "ModelPerformance$Recall": "<p>The overall recall metric value for the trained model. </p>",
|
346 | | - "ModelPerformance$Precision": "<p>The overall precision metric value for the trained model.</p>" |
| 376 | + "ModelPerformance$Precision": "<p>The overall precision metric value for the trained model.</p>", |
| 377 | + "PixelAnomaly$TotalPercentageArea": "<p>The percentage area of the image that the anomaly type covers.</p>" |
347 | 378 | }
|
348 | 379 | },
|
349 | 380 | "GreengrassConfiguration": {
|
|
651 | 682 | "ListProjectsResponse$NextToken": "<p>If the response is truncated, Amazon Lookout for Vision returns this token that you can use in the subsequent request to retrieve the next set of projects.</p>"
|
652 | 683 | }
|
653 | 684 | },
|
| 685 | + "PixelAnomaly": { |
| 686 | + "base": "<p>Information about the pixels in an anomaly mask. For more information, see <a>Anomaly</a>. <code>PixelAnomaly</code> is only returned by image segmentation models.</p>", |
| 687 | + "refs": { |
| 688 | + "Anomaly$PixelAnomaly": "<p>Information about the pixel mask that covers an anomaly type.</p>" |
| 689 | + } |
| 690 | + }, |
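
For example, you might act only on anomaly types whose mask covers a meaningful share of the image. A sketch assuming `result` from the earlier `DetectAnomalies` sketch and a hypothetical threshold:

```python
AREA_THRESHOLD = 0.01  # hypothetical cutoff, not part of the API

for anomaly in result.get("Anomalies", []):
    if anomaly["Name"] == "background":
        continue  # inserted by the service, not an anomaly
    pixel = anomaly["PixelAnomaly"]
    if pixel["TotalPercentageArea"] > AREA_THRESHOLD:
        print(f"{anomaly['Name']} covers {pixel['TotalPercentageArea']:.4f} "
              f"of the image (mask color {pixel['Color']})")
```
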
654 | 691 | "ProjectArn": {
|
655 | 692 | "base": null,
|
656 | 693 | "refs": {
|
|
874 | 911 | "TargetPlatformAccelerator": {
|
875 | 912 | "base": null,
|
876 | 913 | "refs": {
|
877 | | - "TargetPlatform$Accelerator": "<p>The target accelerator for the model. NVIDIA (Nvidia graphics processing unit) is the only accelerator that is currently supported. You must also specify the <code>gpu-code</code>, <code>trt-ver</code>, and <code>cuda-ver</code> compiler options. </p>" |
| 914 | + "TargetPlatform$Accelerator": "<p>The target accelerator for the model. Currently, Amazon Lookout for Vision only supports NVIDIA (Nvidia graphics processing unit) and CPU accelerators. If you specify NVIDIA as an accelerator, you must also specify the <code>gpu-code</code>, <code>trt-ver</code>, and <code>cuda-ver</code> compiler options. If you don't specify an accelerator, Lookout for Vision uses the CPU for compilation and we highly recommend that you use the <a>GreengrassConfiguration$CompilerOptions</a> field. For example, you can use the following compiler options for CPU: </p> <ul> <li> <p> <code>mcpu</code>: CPU micro-architecture. For example, <code>{'mcpu': 'skylake-avx512'}</code> </p> </li> <li> <p> <code>mattr</code>: CPU flags. For example, <code>{'mattr': ['+neon', '+vfpv4']}</code> </p> </li> </ul>" |
878 | 915 | }
|
879 | 916 | },
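
The two accelerator paths look roughly like this in a `GreengrassConfiguration`; all names are placeholders, and the GPU option values (`gpu-code`, `trt-ver`, `cuda-ver`) are illustrative only and must match your device:

```python
# NVIDIA target: Accelerator set, plus the required GPU compiler options.
gpu_greengrass = {
    "TargetPlatform": {"Os": "LINUX", "Arch": "ARM64", "Accelerator": "NVIDIA"},
    # Illustrative values for a Jetson-class device; verify your device's
    # GPU code, TensorRT version, and CUDA version.
    "CompilerOptions": '{"gpu-code": "sm_72", "trt-ver": "8.2.1", "cuda-ver": "10.2"}',
    "ComponentName": "com.example.lfv.my-model-gpu",  # placeholder
    "S3OutputLocation": {"Bucket": "my-bucket", "Prefix": "components/"},
}

# CPU target: no Accelerator, with the recommended CPU compiler options.
cpu_greengrass = {
    "TargetPlatform": {"Os": "LINUX", "Arch": "X86_64"},
    "CompilerOptions": '{"mcpu": "skylake-avx512"}',
    "ComponentName": "com.example.lfv.my-model-cpu",  # placeholder
    "S3OutputLocation": {"Bucket": "my-bucket", "Prefix": "components/"},
}
```
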
|
880 | 917 | "TargetPlatformArch": {
|