|
1550 | 1550 | "smithy.api#documentation": "<p>\nThe ARN of the Amazon Rekognition Custom Labels project to which you want to assign the dataset.\n</p>", |
1551 | 1551 | "smithy.api#required": {} |
1552 | 1552 | } |
| 1553 | + }, |
| 1554 | + "Tags": { |
| 1555 | + "target": "com.amazonaws.rekognition#TagMap", |
| 1556 | + "traits": { |
| 1557 | + "smithy.api#documentation": "<p>A set of tags (key-value pairs) that you want to attach to the dataset.</p>" |
| 1558 | + } |
1553 | 1559 | } |
1554 | 1560 | }, |
1555 | 1561 | "traits": { |
|
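The first hunk above adds an optional `Tags` member (a `TagMap` of plain string key-value pairs) to the `CreateDatasetRequest` shape. A minimal sketch of how a caller might populate it, assuming the boto3 Rekognition client; the ARN and tag values below are hypothetical placeholders:

```python
# Sketch: build a CreateDataset request that uses the new Tags member.
# The ARN is a made-up placeholder, not a real resource.
PROJECT_ARN = (
    "arn:aws:rekognition:us-east-1:111122223333:"
    "project/my-project/1690000000000"
)

def build_create_dataset_request(project_arn, tags):
    """Assemble the request body; it would be passed as
    boto3.client("rekognition").create_dataset(**request)."""
    return {
        "ProjectArn": project_arn,
        "DatasetType": "TRAIN",
        "Tags": dict(tags),  # TagMap: string-to-string pairs
    }

request = build_create_dataset_request(PROJECT_ARN, {"team": "cv", "env": "dev"})
```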
1729 | 1735 | "traits": { |
1730 | 1736 | "smithy.api#documentation": "<p>Specifies whether automatic retraining should be attempted for the versions of the\n project. Automatic retraining is done as a best effort. Required argument for Content\n Moderation. Applicable only to adapters.</p>" |
1731 | 1737 | } |
| 1738 | + }, |
| 1739 | + "Tags": { |
| 1740 | + "target": "com.amazonaws.rekognition#TagMap", |
| 1741 | + "traits": { |
| 1742 | + "smithy.api#documentation": "<p>A set of tags (key-value pairs) that you want to attach to the project.</p>" |
| 1743 | + } |
1732 | 1744 | } |
1733 | 1745 | }, |
1734 | 1746 | "traits": { |
|
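The hunk above adds the same optional `Tags` member to `CreateProjectRequest`, next to the existing `AutoUpdate` member (best-effort retraining, required for Content Moderation, adapters only). A hedged sketch of assembling such a request; the names and values are illustrative:

```python
def build_create_project_request(project_name, tags, auto_update="ENABLED"):
    """Assemble a CreateProject request that includes the new Tags member.
    AutoUpdate is best-effort retraining and applies only to adapters."""
    return {
        "ProjectName": project_name,
        "AutoUpdate": auto_update,
        "Tags": dict(tags),  # TagMap, same shape as on CreateDataset
    }

req = build_create_project_request("moderation-adapter", {"owner": "ml-team"})
```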
4326 | 4338 | "ModerationLabels": { |
4327 | 4339 | "target": "com.amazonaws.rekognition#ModerationLabels", |
4328 | 4340 | "traits": { |
4329 | | - "smithy.api#documentation": "<p>Array of detected Moderation labels and the time, in milliseconds from the start of the\n video, they were detected.</p>" |
| 4341 | + "smithy.api#documentation": "<p>Array of detected Moderation labels. For video operations, this includes the time, \n in milliseconds from the start of the video, at which they were detected.</p>" |
4330 | 4342 | } |
4331 | 4343 | }, |
4332 | 4344 | "ModerationModelVersion": { |
|
6311 | 6323 | } |
6312 | 6324 | ], |
6313 | 6325 | "traits": { |
6314 | | - "smithy.api#documentation": "<p>Gets the label detection results of a Amazon Rekognition Video analysis started by <a>StartLabelDetection</a>. </p>\n <p>The label detection operation is started by a call to <a>StartLabelDetection</a> which returns a job identifier (<code>JobId</code>). When\n the label detection operation finishes, Amazon Rekognition publishes a completion status to the\n Amazon Simple Notification Service topic registered in the initial call to <code>StartlabelDetection</code>. </p>\n <p>To get the results of the label detection operation, first check that the status value\n published to the Amazon SNS topic is <code>SUCCEEDED</code>. If so, call <a>GetLabelDetection</a> and pass the job identifier (<code>JobId</code>) from the\n initial call to <code>StartLabelDetection</code>.</p>\n <p>\n <code>GetLabelDetection</code> returns an array of detected labels\n (<code>Labels</code>) sorted by the time the labels were detected. You can also sort by the\n label name by specifying <code>NAME</code> for the <code>SortBy</code> input parameter. If\n there is no <code>NAME</code> specified, the default sort is by\n timestamp.</p>\n <p>You can select how results are aggregated by using the <code>AggregateBy</code> input\n parameter. The default aggregation method is <code>TIMESTAMPS</code>. You can also aggregate\n by <code>SEGMENTS</code>, which aggregates all instances of labels detected in a given\n segment. </p>\n <p>The returned Labels array may include the following attributes:</p>\n <ul>\n <li>\n <p>Name - The name of the detected label.</p>\n </li>\n <li>\n <p>Confidence - The level of confidence in the label assigned to a detected object. </p>\n </li>\n <li>\n <p>Parents - The ancestor labels for a detected label. GetLabelDetection returns a hierarchical\n taxonomy of detected labels. For example, a detected car might be assigned the label car.\n The label car has two parent labels: Vehicle (its parent) and Transportation (its\n grandparent). The response includes the all ancestors for a label, where every ancestor is\n a unique label. In the previous example, Car, Vehicle, and Transportation are returned as\n unique labels in the response. </p>\n </li>\n <li>\n <p> Aliases - Possible Aliases for the label. </p>\n </li>\n <li>\n <p>Categories - The label categories that the detected label belongs to.</p>\n </li>\n <li>\n <p>BoundingBox — Bounding boxes are described for all instances of detected common object labels, \n returned in an array of Instance objects. An Instance object contains a BoundingBox object, describing \n the location of the label on the input image. It also includes the confidence for the accuracy of the detected bounding box.</p>\n </li>\n <li>\n <p>Timestamp - Time, in milliseconds from the start of the video, that the label was detected.\n For aggregation by <code>SEGMENTS</code>, the <code>StartTimestampMillis</code>,\n <code>EndTimestampMillis</code>, and <code>DurationMillis</code> structures are what\n define a segment. Although the “Timestamp” structure is still returned with each label,\n its value is set to be the same as <code>StartTimestampMillis</code>.</p>\n </li>\n </ul>\n <p>Timestamp and Bounding box information are returned for detected Instances, only if\n aggregation is done by <code>TIMESTAMPS</code>. If aggregating by <code>SEGMENTS</code>,\n information about detected instances isn’t returned. </p>\n <p>The version of the label model used for the detection is also returned.</p>\n <p>\n <b>Note <code>DominantColors</code> isn't returned for <code>Instances</code>,\n although it is shown as part of the response in the sample seen below.</b>\n </p>\n <p>Use <code>MaxResults</code> parameter to limit the number of labels returned. If\n there are more results than specified in <code>MaxResults</code>, the value of\n <code>NextToken</code> in the operation response contains a pagination token for getting the\n next set of results. To get the next page of results, call <code>GetlabelDetection</code> and\n populate the <code>NextToken</code> request parameter with the token value returned from the\n previous call to <code>GetLabelDetection</code>.</p>", |
| 6326 | + "smithy.api#documentation": "<p>Gets the label detection results of an Amazon Rekognition Video analysis started by <a>StartLabelDetection</a>. </p>\n <p>The label detection operation is started by a call to <a>StartLabelDetection</a> which returns a job identifier (<code>JobId</code>). When\n the label detection operation finishes, Amazon Rekognition publishes a completion status to the\n Amazon Simple Notification Service topic registered in the initial call to <code>StartLabelDetection</code>. </p>\n <p>To get the results of the label detection operation, first check that the status value\n published to the Amazon SNS topic is <code>SUCCEEDED</code>. If so, call <a>GetLabelDetection</a> and pass the job identifier (<code>JobId</code>) from the\n initial call to <code>StartLabelDetection</code>.</p>\n <p>\n <code>GetLabelDetection</code> returns an array of detected labels\n (<code>Labels</code>) sorted by the time the labels were detected. You can also sort by the\n label name by specifying <code>NAME</code> for the <code>SortBy</code> input parameter. If\n there is no <code>NAME</code> specified, the default sort is by\n timestamp.</p>\n <p>You can select how results are aggregated by using the <code>AggregateBy</code> input\n parameter. The default aggregation method is <code>TIMESTAMPS</code>. You can also aggregate\n by <code>SEGMENTS</code>, which aggregates all instances of labels detected in a given\n segment. </p>\n <p>The returned Labels array may include the following attributes:</p>\n <ul>\n <li>\n <p>Name - The name of the detected label.</p>\n </li>\n <li>\n <p>Confidence - The level of confidence in the label assigned to a detected object. </p>\n </li>\n <li>\n <p>Parents - The ancestor labels for a detected label. GetLabelDetection returns a hierarchical\n taxonomy of detected labels. For example, a detected car might be assigned the label car.\n The label car has two parent labels: Vehicle (its parent) and Transportation (its\n grandparent). The response includes all ancestors for a label, where every ancestor is\n a unique label. In the previous example, Car, Vehicle, and Transportation are returned as\n unique labels in the response. </p>\n </li>\n <li>\n <p> Aliases - Possible Aliases for the label. </p>\n </li>\n <li>\n <p>Categories - The label categories that the detected label belongs to.</p>\n </li>\n <li>\n <p>BoundingBox — Bounding boxes are described for all instances of detected common object labels, \n returned in an array of Instance objects. An Instance object contains a BoundingBox object, describing \n the location of the label on the input image. It also includes the confidence for the accuracy of the detected bounding box.</p>\n </li>\n <li>\n <p>Timestamp - Time, in milliseconds from the start of the video, that the label was detected.\n For aggregation by <code>SEGMENTS</code>, the <code>StartTimestampMillis</code>,\n <code>EndTimestampMillis</code>, and <code>DurationMillis</code> structures are what\n define a segment. Although the “Timestamp” structure is still returned with each label,\n its value is set to be the same as <code>StartTimestampMillis</code>.</p>\n </li>\n </ul>\n <p>Timestamp and Bounding box information are returned for detected Instances, only if\n aggregation is done by <code>TIMESTAMPS</code>. If aggregating by <code>SEGMENTS</code>,\n information about detected instances isn’t returned. </p>\n <p>The version of the label model used for the detection is also returned.</p>\n <p>\n <b>Note <code>DominantColors</code> isn't returned for <code>Instances</code>,\n although it is shown as part of the response in the sample seen below.</b>\n </p>\n <p>Use the <code>MaxResults</code> parameter to limit the number of labels returned. If\n there are more results than specified in <code>MaxResults</code>, the value of\n <code>NextToken</code> in the operation response contains a pagination token for getting the\n next set of results. To get the next page of results, call <code>GetLabelDetection</code> and\n populate the <code>NextToken</code> request parameter with the token value returned from the\n previous call to <code>GetLabelDetection</code>.</p>\n <p>If you are retrieving results while using the Amazon Simple Notification Service, note that you will receive an\n \"ERROR\" notification if the job encounters an issue.</p>", |
6315 | 6327 | "smithy.api#paginated": { |
6316 | 6328 | "inputToken": "NextToken", |
6317 | 6329 | "outputToken": "NextToken", |
|
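The GetLabelDetection documentation in the hunk above describes `NextToken` pagination: call again with the returned token until no token comes back. A minimal sketch of that loop, written against any client object exposing `get_label_detection` (the boto3 Rekognition client fits; the helper name and error handling are ours, not part of the API):

```python
def collect_label_detection(client, job_id, max_results=1000,
                            aggregate_by="TIMESTAMPS"):
    """Drain every page of GetLabelDetection by following NextToken."""
    labels, token = [], None
    while True:
        kwargs = {"JobId": job_id, "MaxResults": max_results,
                  "AggregateBy": aggregate_by}
        if token:
            kwargs["NextToken"] = token
        page = client.get_label_detection(**kwargs)
        if page.get("JobStatus") == "FAILED":
            # Mirrors the SNS "ERROR" notification path described above.
            raise RuntimeError(page.get("StatusMessage", "job failed"))
        labels.extend(page.get("Labels", []))
        token = page.get("NextToken")
        if not token:
            return labels
```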