
Commit eb39746

feat(aws-android-sdk-rekognition): update models to latest (#3071)
Co-authored-by: Erica Eaton <[email protected]>
1 parent 98009a9 · commit eb39746

52 files changed: +5130 -196 lines


aws-android-sdk-rekognition/src/main/java/com/amazonaws/services/rekognition/AmazonRekognition.java

Lines changed: 115 additions & 40 deletions
@@ -1554,26 +1554,126 @@ DetectFacesResult detectFaces(DetectFacesRequest detectFacesRequest)
  * For an example, see Analyzing images stored in an Amazon S3 bucket in the
  * Amazon Rekognition Developer Guide.
  * </p>
- * <note>
- * <p>
- * <code>DetectLabels</code> does not support the detection of activities.
- * However, activity detection is supported for label detection in videos.
- * For more information, see StartLabelDetection in the Amazon Rekognition
- * Developer Guide.
- * </p>
- * </note>
  * <p>
  * You pass the input image as base64-encoded image bytes or as a reference
  * to an image in an Amazon S3 bucket. If you use the AWS CLI to call Amazon
  * Rekognition operations, passing image bytes is not supported. The image
  * must be either a PNG or JPEG formatted file.
  * </p>
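
As a minimal sketch of the two input modes described above (the bucket, key, and file path are hypothetical placeholders, and the file-reading helper assumes java.nio is available on the target platform):

    import java.nio.ByteBuffer;
    import java.nio.file.Files;
    import java.nio.file.Paths;

    import com.amazonaws.services.rekognition.model.Image;
    import com.amazonaws.services.rekognition.model.S3Object;

    public final class DetectLabelsInputs {
        // Reference an image already stored in an Amazon S3 bucket.
        static Image imageFromS3() {
            return new Image().withS3Object(new S3Object()
                    .withBucket("my-bucket")          // hypothetical bucket
                    .withName("photos/input.jpg"));   // hypothetical key
        }

        // Or pass the PNG/JPEG bytes directly; the SDK supports this even
        // though the AWS CLI does not.
        static Image imageFromFile(String path) throws java.io.IOException {
            byte[] data = Files.readAllBytes(Paths.get(path));
            return new Image().withBytes(ByteBuffer.wrap(data));
        }
    }
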
  * <p>
+ * <b>Optional Parameters</b>
+ * </p>
+ * <p>
+ * You can specify one or both of the <code>GENERAL_LABELS</code> and
+ * <code>IMAGE_PROPERTIES</code> feature types when calling the DetectLabels
+ * API. Including <code>GENERAL_LABELS</code> ensures that the response
+ * includes the labels detected in the input image, while including
+ * <code>IMAGE_PROPERTIES</code> ensures that the response includes
+ * information about the image quality and color.
+ * </p>
+ * <p>
+ * When using <code>GENERAL_LABELS</code> and/or
+ * <code>IMAGE_PROPERTIES</code>, you can provide filtering criteria to the
+ * Settings parameter. You can filter with sets of individual labels or with
+ * label categories. You can specify inclusive filters, exclusive filters,
+ * or a combination of inclusive and exclusive filters. For more information
+ * on filtering, see <a href=
+ * "https://docs.aws.amazon.com/rekognition/latest/dg/labels-detect-labels-image.html"
+ * >Detecting Labels in an Image</a>.
+ * </p>
+ * <p>
+ * You can specify <code>MinConfidence</code> to control the confidence
+ * threshold for the labels returned. The default is 55%. You can also add
+ * the <code>MaxLabels</code> parameter to limit the number of labels
+ * returned. The default and upper limit is 1000 labels.
+ * </p>
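
A hedged sketch of a request using these optional parameters, continuing the hypothetical DetectLabelsInputs helpers above; it assumes the model classes this commit adds (DetectLabelsSettings, GeneralLabelsSettings, DetectLabelsImagePropertiesSettings), and the filter values are illustrative only:

    import com.amazonaws.services.rekognition.model.DetectLabelsImagePropertiesSettings;
    import com.amazonaws.services.rekognition.model.DetectLabelsRequest;
    import com.amazonaws.services.rekognition.model.DetectLabelsSettings;
    import com.amazonaws.services.rekognition.model.GeneralLabelsSettings;
    import com.amazonaws.services.rekognition.model.Image;

    static DetectLabelsRequest buildRequest(Image image) {
        return new DetectLabelsRequest()
                .withImage(image)
                .withFeatures("GENERAL_LABELS", "IMAGE_PROPERTIES")
                .withMinConfidence(70F)   // default threshold is 55%
                .withMaxLabels(10)        // default and upper limit is 1000
                .withSettings(new DetectLabelsSettings()
                        // Inclusive label filter: only these labels are returned.
                        .withGeneralLabels(new GeneralLabelsSettings()
                                .withLabelInclusionFilters("Car", "Tulip"))
                        // Cap how many dominant colors are computed for the image.
                        .withImageProperties(new DetectLabelsImagePropertiesSettings()
                                .withMaxDominantColors(5)));
    }
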
+ * <p>
+ * <b>Response Elements</b>
+ * </p>
+ * <p>
  * For each object, scene, and concept the API returns one or more labels.
- * Each label provides the object name, and the level of confidence that the
- * image contains the object. For example, suppose the input image has a
- * lighthouse, the sea, and a rock. The response includes all three labels,
- * one for each object.
+ * The API returns the following types of information regarding labels:
+ * </p>
+ * <ul>
+ * <li>
+ * <p>
+ * Name - The name of the detected label.
+ * </p>
+ * </li>
+ * <li>
+ * <p>
+ * Confidence - The level of confidence in the label assigned to a detected
+ * object.
+ * </p>
+ * </li>
+ * <li>
+ * <p>
+ * Parents - The ancestor labels for a detected label. DetectLabels returns
+ * a hierarchical taxonomy of detected labels. For example, a detected car
+ * might be assigned the label car. The label car has two parent labels:
+ * Vehicle (its parent) and Transportation (its grandparent). The response
+ * includes all ancestors for a label, where every ancestor is a unique
+ * label. In the previous example, Car, Vehicle, and Transportation are
+ * returned as unique labels in the response.
+ * </p>
+ * </li>
+ * <li>
+ * <p>
+ * Aliases - Possible aliases for the label.
+ * </p>
+ * </li>
+ * <li>
+ * <p>
+ * Categories - The label categories that the detected label belongs to. A
+ * given label can belong to more than one category.
+ * </p>
+ * </li>
+ * <li>
+ * <p>
+ * BoundingBox - Bounding boxes are described for all instances of detected
+ * common object labels, returned in an array of Instance objects. An
+ * Instance object contains a BoundingBox object, describing the location of
+ * the label on the input image. It also includes the confidence for the
+ * accuracy of the detected bounding box.
+ * </p>
+ * </li>
+ * </ul>
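
To make these response elements concrete, a sketch (continuing the hypothetical helpers above) that walks the returned labels; it assumes the Label accessors added by this model update (getAliases, getCategories) alongside the existing getParents and getInstances. Bounding boxes are returned as ratios of the overall image size, so pixel coordinates need the image dimensions:

    import com.amazonaws.services.rekognition.model.BoundingBox;
    import com.amazonaws.services.rekognition.model.DetectLabelsResult;
    import com.amazonaws.services.rekognition.model.Instance;
    import com.amazonaws.services.rekognition.model.Label;
    import com.amazonaws.services.rekognition.model.LabelAlias;
    import com.amazonaws.services.rekognition.model.LabelCategory;
    import com.amazonaws.services.rekognition.model.Parent;

    static void printLabels(DetectLabelsResult result, int imageWidthPx, int imageHeightPx) {
        for (Label label : result.getLabels()) {
            System.out.println(label.getName() + " (" + label.getConfidence() + "%)");
            for (Parent parent : label.getParents()) {
                System.out.println("  parent: " + parent.getName());
            }
            for (LabelAlias alias : label.getAliases()) {
                System.out.println("  alias: " + alias.getName());
            }
            for (LabelCategory category : label.getCategories()) {
                System.out.println("  category: " + category.getName());
            }
            for (Instance instance : label.getInstances()) {
                BoundingBox box = instance.getBoundingBox();
                // Box fields are ratios in [0, 1]; scale to pixels.
                int left = Math.round(box.getLeft() * imageWidthPx);
                int top = Math.round(box.getTop() * imageHeightPx);
                System.out.println("  instance at (" + left + ", " + top
                        + "), box confidence " + instance.getConfidence());
            }
        }
    }
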
+ * <p>
+ * The API returns the following information regarding the image, as part of
+ * the ImageProperties structure:
+ * </p>
+ * <ul>
+ * <li>
+ * <p>
+ * Quality - Information about the Sharpness, Brightness, and Contrast of
+ * the input image, scored between 0 and 100. Image quality is returned for
+ * the entire image, as well as for the background and the foreground.
+ * </p>
+ * </li>
+ * <li>
+ * <p>
+ * Dominant Color - An array of the dominant colors in the image.
+ * </p>
+ * </li>
+ * <li>
+ * <p>
+ * Foreground - Information about the Sharpness and Brightness of the input
+ * image’s foreground.
+ * </p>
+ * </li>
+ * <li>
+ * <p>
+ * Background - Information about the Sharpness and Brightness of the input
+ * image’s background.
+ * </p>
+ * </li>
+ * </ul>
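
And a companion sketch for the ImageProperties structure described in this list, assuming the accessors added by this model update (getImageProperties, DetectLabelsImageQuality, DominantColor, and the foreground/background wrappers); per the list above, contrast is documented only for the overall image:

    import com.amazonaws.services.rekognition.model.DetectLabelsImageProperties;
    import com.amazonaws.services.rekognition.model.DetectLabelsImageQuality;
    import com.amazonaws.services.rekognition.model.DetectLabelsResult;
    import com.amazonaws.services.rekognition.model.DominantColor;

    static void printImageProperties(DetectLabelsResult result) {
        DetectLabelsImageProperties props = result.getImageProperties();
        if (props == null) {
            return; // only populated when IMAGE_PROPERTIES was requested
        }
        DetectLabelsImageQuality quality = props.getQuality();
        System.out.println("sharpness=" + quality.getSharpness()
                + " brightness=" + quality.getBrightness()
                + " contrast=" + quality.getContrast());
        for (DominantColor color : props.getDominantColors()) {
            System.out.println(color.getSimplifiedColor() + " " + color.getHexCode()
                    + " covers " + color.getPixelPercent() + "% of pixels");
        }
        // Foreground and background carry their own quality summaries.
        System.out.println("foreground sharpness="
                + props.getForeground().getQuality().getSharpness());
        System.out.println("background brightness="
                + props.getBackground().getQuality().getBrightness());
    }
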
+ * <p>
+ * The list of returned labels will include at least one label for every
+ * detected object, along with information about that label. In the
+ * following example, suppose the input image has a lighthouse, the sea, and
+ * a rock. The response includes all three labels, one for each object, as
+ * well as the confidence in the label:
  * </p>
  * <p>
  * <code>{Name: lighthouse, Confidence: 98.4629}</code>
@@ -1585,10 +1685,9 @@ DetectFacesResult detectFaces(DetectFacesRequest detectFacesRequest)
  * <code> {Name: sea,Confidence: 75.061}</code>
  * </p>
  * <p>
- * In the preceding example, the operation returns one label for each of the
- * three objects. The operation can also return multiple labels for the same
- * object in the image. For example, if the input image shows a flower (for
- * example, a tulip), the operation might return the following three labels.
+ * The list of labels can include multiple labels for the same object. For
+ * example, if the input image shows a flower (for example, a tulip), the
+ * operation might return the following three labels.
  * </p>
  * <p>
  * <code>{Name: flower,Confidence: 99.0562}</code>
@@ -1603,37 +1702,13 @@ DetectFacesResult detectFaces(DetectFacesRequest detectFacesRequest)
  * In this example, the detection algorithm more precisely identifies the
  * flower as a tulip.
  * </p>
- * <p>
- * In response, the API returns an array of labels. In addition, the
- * response also includes the orientation correction. Optionally, you can
- * specify <code>MinConfidence</code> to control the confidence threshold
- * for the labels returned. The default is 55%. You can also add the
- * <code>MaxLabels</code> parameter to limit the number of labels returned.
- * </p>
  * <note>
  * <p>
  * If the object detected is a person, the operation doesn't provide the
  * same facial details that the <a>DetectFaces</a> operation provides.
  * </p>
  * </note>
  * <p>
- * <code>DetectLabels</code> returns bounding boxes for instances of common
- * object labels in an array of <a>Instance</a> objects. An
- * <code>Instance</code> object contains a <a>BoundingBox</a> object, for
- * the location of the label on the image. It also includes the confidence
- * by which the bounding box was detected.
- * </p>
- * <p>
- * <code>DetectLabels</code> also returns a hierarchical taxonomy of
- * detected labels. For example, a detected car might be assigned the label
- * <i>car</i>. The label <i>car</i> has two parent labels: <i>Vehicle</i>
- * (its parent) and <i>Transportation</i> (its grandparent). The response
- * returns the entire list of ancestors for a label. Each ancestor is a
- * unique label in the response. In the previous example, <i>Car</i>,
- * <i>Vehicle</i>, and <i>Transportation</i> are returned as unique labels
- * in the response.
- * </p>
- * <p>
  * This is a stateless API operation. That is, the operation does not
  * persist any data.
  * </p>

aws-android-sdk-rekognition/src/main/java/com/amazonaws/services/rekognition/AmazonRekognitionClient.java

Lines changed: 115 additions & 40 deletions
@@ -2391,26 +2391,126 @@ public DetectFacesResult detectFaces(DetectFacesRequest detectFacesRequest)
  * For an example, see Analyzing images stored in an Amazon S3 bucket in the
  * Amazon Rekognition Developer Guide.
  * </p>
- * <note>
- * <p>
- * <code>DetectLabels</code> does not support the detection of activities.
- * However, activity detection is supported for label detection in videos.
- * For more information, see StartLabelDetection in the Amazon Rekognition
- * Developer Guide.
- * </p>
- * </note>
  * <p>
  * You pass the input image as base64-encoded image bytes or as a reference
  * to an image in an Amazon S3 bucket. If you use the AWS CLI to call Amazon
  * Rekognition operations, passing image bytes is not supported. The image
  * must be either a PNG or JPEG formatted file.
  * </p>
  * <p>
+ * <b>Optional Parameters</b>
+ * </p>
+ * <p>
+ * You can specify one or both of the <code>GENERAL_LABELS</code> and
+ * <code>IMAGE_PROPERTIES</code> feature types when calling the DetectLabels
+ * API. Including <code>GENERAL_LABELS</code> ensures that the response
+ * includes the labels detected in the input image, while including
+ * <code>IMAGE_PROPERTIES</code> ensures that the response includes
+ * information about the image quality and color.
+ * </p>
+ * <p>
+ * When using <code>GENERAL_LABELS</code> and/or
+ * <code>IMAGE_PROPERTIES</code>, you can provide filtering criteria to the
+ * Settings parameter. You can filter with sets of individual labels or with
+ * label categories. You can specify inclusive filters, exclusive filters,
+ * or a combination of inclusive and exclusive filters. For more information
+ * on filtering, see <a href=
+ * "https://docs.aws.amazon.com/rekognition/latest/dg/labels-detect-labels-image.html"
+ * >Detecting Labels in an Image</a>.
+ * </p>
+ * <p>
+ * You can specify <code>MinConfidence</code> to control the confidence
+ * threshold for the labels returned. The default is 55%. You can also add
+ * the <code>MaxLabels</code> parameter to limit the number of labels
+ * returned. The default and upper limit is 1000 labels.
+ * </p>
+ * <p>
+ * <b>Response Elements</b>
+ * </p>
+ * <p>
  * For each object, scene, and concept the API returns one or more labels.
- * Each label provides the object name, and the level of confidence that the
- * image contains the object. For example, suppose the input image has a
- * lighthouse, the sea, and a rock. The response includes all three labels,
- * one for each object.
+ * The API returns the following types of information regarding labels:
+ * </p>
+ * <ul>
+ * <li>
+ * <p>
+ * Name - The name of the detected label.
+ * </p>
+ * </li>
+ * <li>
+ * <p>
+ * Confidence - The level of confidence in the label assigned to a detected
+ * object.
+ * </p>
+ * </li>
+ * <li>
+ * <p>
+ * Parents - The ancestor labels for a detected label. DetectLabels returns
+ * a hierarchical taxonomy of detected labels. For example, a detected car
+ * might be assigned the label car. The label car has two parent labels:
+ * Vehicle (its parent) and Transportation (its grandparent). The response
+ * includes all ancestors for a label, where every ancestor is a unique
+ * label. In the previous example, Car, Vehicle, and Transportation are
+ * returned as unique labels in the response.
+ * </p>
+ * </li>
+ * <li>
+ * <p>
+ * Aliases - Possible aliases for the label.
+ * </p>
+ * </li>
+ * <li>
+ * <p>
+ * Categories - The label categories that the detected label belongs to. A
+ * given label can belong to more than one category.
+ * </p>
+ * </li>
+ * <li>
+ * <p>
+ * BoundingBox - Bounding boxes are described for all instances of detected
+ * common object labels, returned in an array of Instance objects. An
+ * Instance object contains a BoundingBox object, describing the location of
+ * the label on the input image. It also includes the confidence for the
+ * accuracy of the detected bounding box.
+ * </p>
+ * </li>
+ * </ul>
+ * <p>
+ * The API returns the following information regarding the image, as part of
+ * the ImageProperties structure:
+ * </p>
+ * <ul>
+ * <li>
+ * <p>
+ * Quality - Information about the Sharpness, Brightness, and Contrast of
+ * the input image, scored between 0 and 100. Image quality is returned for
+ * the entire image, as well as for the background and the foreground.
+ * </p>
+ * </li>
+ * <li>
+ * <p>
+ * Dominant Color - An array of the dominant colors in the image.
+ * </p>
+ * </li>
+ * <li>
+ * <p>
+ * Foreground - Information about the Sharpness and Brightness of the input
+ * image’s foreground.
+ * </p>
+ * </li>
+ * <li>
+ * <p>
+ * Background - Information about the Sharpness and Brightness of the input
+ * image’s background.
+ * </p>
+ * </li>
+ * </ul>
+ * <p>
+ * The list of returned labels will include at least one label for every
+ * detected object, along with information about that label. In the
+ * following example, suppose the input image has a lighthouse, the sea, and
+ * a rock. The response includes all three labels, one for each object, as
+ * well as the confidence in the label:
  * </p>
  * <p>
  * <code>{Name: lighthouse, Confidence: 98.4629}</code>
@@ -2422,10 +2522,9 @@ public DetectFacesResult detectFaces(DetectFacesRequest detectFacesRequest)
  * <code> {Name: sea,Confidence: 75.061}</code>
  * </p>
  * <p>
- * In the preceding example, the operation returns one label for each of the
- * three objects. The operation can also return multiple labels for the same
- * object in the image. For example, if the input image shows a flower (for
- * example, a tulip), the operation might return the following three labels.
+ * The list of labels can include multiple labels for the same object. For
+ * example, if the input image shows a flower (for example, a tulip), the
+ * operation might return the following three labels.
  * </p>
  * <p>
  * <code>{Name: flower,Confidence: 99.0562}</code>
@@ -2440,37 +2539,13 @@ public DetectFacesResult detectFaces(DetectFacesRequest detectFacesRequest)
  * In this example, the detection algorithm more precisely identifies the
  * flower as a tulip.
  * </p>
- * <p>
- * In response, the API returns an array of labels. In addition, the
- * response also includes the orientation correction. Optionally, you can
- * specify <code>MinConfidence</code> to control the confidence threshold
- * for the labels returned. The default is 55%. You can also add the
- * <code>MaxLabels</code> parameter to limit the number of labels returned.
- * </p>
  * <note>
  * <p>
  * If the object detected is a person, the operation doesn't provide the
  * same facial details that the <a>DetectFaces</a> operation provides.
  * </p>
  * </note>
  * <p>
- * <code>DetectLabels</code> returns bounding boxes for instances of common
- * object labels in an array of <a>Instance</a> objects. An
- * <code>Instance</code> object contains a <a>BoundingBox</a> object, for
- * the location of the label on the image. It also includes the confidence
- * by which the bounding box was detected.
- * </p>
- * <p>
- * <code>DetectLabels</code> also returns a hierarchical taxonomy of
- * detected labels. For example, a detected car might be assigned the label
- * <i>car</i>. The label <i>car</i> has two parent labels: <i>Vehicle</i>
- * (its parent) and <i>Transportation</i> (its grandparent). The response
- * returns the entire list of ancestors for a label. Each ancestor is a
- * unique label in the response. In the previous example, <i>Car</i>,
- * <i>Vehicle</i>, and <i>Transportation</i> are returned as unique labels
- * in the response.
- * </p>
- * <p>
  * This is a stateless API operation. That is, the operation does not
  * persist any data.
  * </p>
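
Tying the earlier sketches to this concrete client class, a hedged end-to-end example; the credentials provider, region, and pixel dimensions are placeholders, buildRequest/printLabels/printImageProperties are the hypothetical helpers defined above, and on Android the call must run off the main thread:

    import com.amazonaws.AmazonServiceException;
    import com.amazonaws.auth.AWSCredentialsProvider;
    import com.amazonaws.regions.Region;
    import com.amazonaws.regions.Regions;
    import com.amazonaws.services.rekognition.AmazonRekognitionClient;
    import com.amazonaws.services.rekognition.model.DetectLabelsResult;
    import com.amazonaws.services.rekognition.model.Image;
    import com.amazonaws.services.rekognition.model.InvalidS3ObjectException;

    static void detectAndPrint(AWSCredentialsProvider credentials, Image image) {
        AmazonRekognitionClient client = new AmazonRekognitionClient(credentials);
        client.setRegion(Region.getRegion(Regions.US_EAST_1)); // placeholder region
        try {
            DetectLabelsResult result = client.detectLabels(buildRequest(image));
            printLabels(result, 1920, 1080); // hypothetical pixel dimensions
            printImageProperties(result);
        } catch (InvalidS3ObjectException e) {
            // The referenced S3 object could not be read.
        } catch (AmazonServiceException e) {
            // Other service-side failures (invalid parameters, throttling, ...).
        }
    }
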
