@@ -2391,26 +2391,126 @@ public DetectFacesResult detectFaces(DetectFacesRequest detectFacesRequest)
      * For an example, see Analyzing images stored in an Amazon S3 bucket in the
      * Amazon Rekognition Developer Guide.
      * </p>
-     * <note>
-     * <p>
-     * <code>DetectLabels</code> does not support the detection of activities.
-     * However, activity detection is supported for label detection in videos.
-     * For more information, see StartLabelDetection in the Amazon Rekognition
-     * Developer Guide.
-     * </p>
-     * </note>
      * <p>
      * You pass the input image as base64-encoded image bytes or as a reference
      * to an image in an Amazon S3 bucket. If you use the AWS CLI to call Amazon
      * Rekognition operations, passing image bytes is not supported. The image
      * must be either a PNG or JPEG formatted file.
      * </p>
      * <p>
+     * <b>Optional Parameters</b>
+     * </p>
+     * <p>
+     * You can specify one or both of the <code>GENERAL_LABELS</code> and
+     * <code>IMAGE_PROPERTIES</code> feature types when calling the DetectLabels
+     * API. Including <code>GENERAL_LABELS</code> will ensure the response
+     * includes the labels detected in the input image, while including
+     * <code>IMAGE_PROPERTIES</code> will ensure the response includes
+     * information about the image quality and color.
+     * </p>
+     * <p>
+     * When using <code>GENERAL_LABELS</code> and/or
+     * <code>IMAGE_PROPERTIES</code>, you can provide filtering criteria to the
+     * Settings parameter. You can filter with sets of individual labels or with
+     * label categories. You can specify inclusive filters, exclusive filters,
+     * or a combination of inclusive and exclusive filters. For more information
+     * on filtering, see <a href=
+     * "https://docs.aws.amazon.com/rekognition/latest/dg/labels-detect-labels-image.html"
+     * >Detecting Labels in an Image</a>.
+     * </p>
+     * <p>
+     * You can specify <code>MinConfidence</code> to control the confidence
+     * threshold for the labels returned. The default is 55%. You can also add
+     * the <code>MaxLabels</code> parameter to limit the number of labels
+     * returned. The default and upper limit is 1000 labels.
+     * </p>
+     * <p>
+     * <b>Response Elements</b>
+     * </p>
+     * <p>
      * For each object, scene, and concept the API returns one or more labels.
-     * Each label provides the object name, and the level of confidence that the
-     * image contains the object. For example, suppose the input image has a
-     * lighthouse, the sea, and a rock. The response includes all three labels,
-     * one for each object.
+     * The API returns the following types of information regarding labels:
+     * </p>
+     * <ul>
+     * <li>
+     * <p>
+     * Name - The name of the detected label.
+     * </p>
+     * </li>
+     * <li>
+     * <p>
+     * Confidence - The level of confidence in the label assigned to a detected
+     * object.
+     * </p>
+     * </li>
+     * <li>
+     * <p>
+     * Parents - The ancestor labels for a detected label. DetectLabels returns
+     * a hierarchical taxonomy of detected labels. For example, a detected car
+     * might be assigned the label car. The label car has two parent labels:
+     * Vehicle (its parent) and Transportation (its grandparent). The response
+     * includes all ancestors for a label, where every ancestor is a unique
+     * label. In the previous example, Car, Vehicle, and Transportation are
+     * returned as unique labels in the response.
+     * </p>
+     * </li>
+     * <li>
+     * <p>
+     * Aliases - Possible aliases for the label.
+     * </p>
+     * </li>
+     * <li>
+     * <p>
+     * Categories - The label categories that the detected label belongs to. A
+     * given label can belong to more than one category.
+     * </p>
+     * </li>
+     * <li>
+     * <p>
+     * BoundingBox - Bounding boxes are described for all instances of detected
+     * common object labels, returned in an array of Instance objects. An
+     * Instance object contains a BoundingBox object, describing the location of
+     * the label on the input image. It also includes the confidence for the
+     * accuracy of the detected bounding box.
+     * </p>
+     * </li>
+     * </ul>
+     * <p>
+     * The API returns the following information regarding the image, as part of
+     * the ImageProperties structure:
+     * </p>
+     * <ul>
+     * <li>
+     * <p>
+     * Quality - Information about the Sharpness, Brightness, and Contrast of
+     * the input image, scored from 0 to 100. Image quality is returned for
+     * the entire image, as well as the background and the foreground.
+     * </p>
+     * </li>
+     * <li>
+     * <p>
+     * Dominant Color - An array of the dominant colors in the image.
+     * </p>
+     * </li>
+     * <li>
+     * <p>
+     * Foreground - Information about the Sharpness and Brightness of the input
+     * image's foreground.
+     * </p>
+     * </li>
+     * <li>
+     * <p>
+     * Background - Information about the Sharpness and Brightness of the input
+     * image's background.
+     * </p>
+     * </li>
+     * </ul>
+     * <p>
+     * The list of returned labels will include at least one label for every
+     * detected object, along with information about that label. In the
+     * following example, suppose the input image has a lighthouse, the sea, and
+     * a rock. The response includes all three labels, one for each object, as
+     * well as the confidence in the label:
      * </p>
      * <p>
      * <code>{Name: lighthouse, Confidence: 98.4629}</code>
@@ -2422,10 +2522,9 @@ public DetectFacesResult detectFaces(DetectFacesRequest detectFacesRequest)
      * <code> {Name: sea,Confidence: 75.061}</code>
      * </p>
      * <p>
-     * In the preceding example, the operation returns one label for each of the
-     * three objects. The operation can also return multiple labels for the same
-     * object in the image. For example, if the input image shows a flower (for
-     * example, a tulip), the operation might return the following three labels.
+     * The list of labels can include multiple labels for the same object. For
+     * example, if the input image shows a flower (for example, a tulip), the
+     * operation might return the following three labels.
      * </p>
      * <p>
      * <code>{Name: flower,Confidence: 99.0562}</code>
@@ -2440,37 +2539,13 @@ public DetectFacesResult detectFaces(DetectFacesRequest detectFacesRequest)
      * In this example, the detection algorithm more precisely identifies the
      * flower as a tulip.
      * </p>
-     * <p>
-     * In response, the API returns an array of labels. In addition, the
-     * response also includes the orientation correction. Optionally, you can
-     * specify <code>MinConfidence</code> to control the confidence threshold
-     * for the labels returned. The default is 55%. You can also add the
-     * <code>MaxLabels</code> parameter to limit the number of labels returned.
-     * </p>
      * <note>
      * <p>
      * If the object detected is a person, the operation doesn't provide the
      * same facial details that the <a>DetectFaces</a> operation provides.
      * </p>
      * </note>
      * <p>
-     * <code>DetectLabels</code> returns bounding boxes for instances of common
-     * object labels in an array of <a>Instance</a> objects. An
-     * <code>Instance</code> object contains a <a>BoundingBox</a> object, for
-     * the location of the label on the image. It also includes the confidence
-     * by which the bounding box was detected.
-     * </p>
-     * <p>
-     * <code>DetectLabels</code> also returns a hierarchical taxonomy of
-     * detected labels. For example, a detected car might be assigned the label
-     * <i>car</i>. The label <i>car</i> has two parent labels: <i>Vehicle</i>
-     * (its parent) and <i>Transportation</i> (its grandparent). The response
-     * returns the entire list of ancestors for a label. Each ancestor is a
-     * unique label in the response. In the previous example, <i>Car</i>,
-     * <i>Vehicle</i>, and <i>Transportation</i> are returned as unique labels
-     * in the response.
-     * </p>
-     * <p>
      * This is a stateless API operation. That is, the operation does not
      * persist any data.
      * </p>
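Taken together, the updated Javadoc describes a request/response flow that could be sketched as below. This is a rough sketch only: the model class names (`DetectLabelsFeatureName`, `DetectLabelsSettings`, `GeneralLabelsSettings`, `DetectLabelsImageProperties`) follow the feature described in this change but may differ by SDK version, the bucket/object names are placeholders, and running it requires AWS credentials.

```java
import com.amazonaws.services.rekognition.AmazonRekognition;
import com.amazonaws.services.rekognition.AmazonRekognitionClientBuilder;
import com.amazonaws.services.rekognition.model.*;

public class DetectLabelsSketch {
    public static void main(String[] args) {
        AmazonRekognition rekognition = AmazonRekognitionClientBuilder.defaultClient();

        DetectLabelsRequest request = new DetectLabelsRequest()
                // Reference an image in S3 ("my-bucket"/"photo.jpg" are placeholders);
                // base64-encoded image bytes are also accepted, except from the AWS CLI.
                .withImage(new Image().withS3Object(
                        new S3Object().withBucket("my-bucket").withName("photo.jpg")))
                // Request both label detection and image quality/color information.
                .withFeatures(DetectLabelsFeatureName.GENERAL_LABELS,
                              DetectLabelsFeatureName.IMAGE_PROPERTIES)
                // An inclusive category filter applied through the Settings parameter.
                .withSettings(new DetectLabelsSettings()
                        .withGeneralLabels(new GeneralLabelsSettings()
                                .withLabelCategoryInclusionFilters("Animals and Pets")))
                .withMaxLabels(10)       // default and upper limit is 1000
                .withMinConfidence(75F); // default is 55%

        DetectLabelsResult result = rekognition.detectLabels(request);

        // GENERAL_LABELS output: name, confidence, ancestors, and bounding boxes.
        for (Label label : result.getLabels()) {
            System.out.printf("%s (%.2f%%)%n", label.getName(), label.getConfidence());
            for (Parent parent : label.getParents()) {
                System.out.println("  ancestor: " + parent.getName());
            }
            for (Instance instance : label.getInstances()) {
                System.out.println("  bounding box: " + instance.getBoundingBox());
            }
        }

        // IMAGE_PROPERTIES output: quality scores (0-100), dominant colors, etc.
        DetectLabelsImageProperties props = result.getImageProperties();
        if (props != null && props.getQuality() != null) {
            System.out.println("Sharpness: " + props.getQuality().getSharpness());
        }
    }
}
```

Because only labels matching the inclusion filter are returned, the loop above would print, for example, only pet-related labels with their Vehicle/Transportation-style ancestor chains omitted unless those ancestors also pass the filter.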