998 | 998 | {"shape":"ThrottlingException"},
999 | 999 | {"shape":"ProvisionedThroughputExceededException"}
1000 | 1000 | ],
1001 | | - "documentation":"<p>Starts the running of the version of a model. Starting a model takes a while to complete. To check the current state of the model, use <a>DescribeProjectVersions</a>.</p> <p>Once the model is running, you can detect custom labels in new images by calling <a>DetectCustomLabels</a>.</p> <note> <p>You are charged for the amount of time that the model is running. To stop a running model, call <a>StopProjectVersion</a>.</p> </note> <p>This operation requires permissions to perform the <code>rekognition:StartProjectVersion</code> action.</p>"
| 1001 | + "documentation":"<p>Starts the running of the version of a model. Starting a model takes a while to complete. To check the current state of the model, use <a>DescribeProjectVersions</a>.</p> <p>Once the model is running, you can detect custom labels in new images by calling <a>DetectCustomLabels</a>.</p> <note> <p>You are charged for the amount of time that the model is running. To stop a running model, call <a>StopProjectVersion</a>.</p> </note> <p>For more information, see <i>Running a trained Amazon Rekognition Custom Labels model</i> in the Amazon Rekognition Custom Labels Guide.</p> <p>This operation requires permissions to perform the <code>rekognition:StartProjectVersion</code> action.</p>"
1002 | 1002 | },
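The lifecycle documented above (start the model, poll DescribeProjectVersions until it is RUNNING, stop it when done to end billing) can be sketched as follows. `wait_until_running` is a hypothetical helper, not part of any SDK; the client is assumed to expose `describe_project_versions` the way the boto3 Rekognition client does, and is stubbed here so the flow runs without an AWS account.

```python
import time

def wait_until_running(client, project_arn, version_name, delay=30, max_attempts=40):
    """Poll DescribeProjectVersions until the model version is RUNNING.

    Hypothetical helper: `client` is assumed to behave like a boto3
    Rekognition client for describe_project_versions.
    """
    for _ in range(max_attempts):
        resp = client.describe_project_versions(
            ProjectArn=project_arn, VersionNames=[version_name]
        )
        status = resp["ProjectVersionDescriptions"][0]["Status"]
        if status == "RUNNING":
            return status
        if status in ("FAILED", "STOPPED", "DELETING"):
            raise RuntimeError(f"model entered state {status} instead of RUNNING")
        time.sleep(delay)
    raise TimeoutError("model did not reach RUNNING in time")

# Stub standing in for boto3.client("rekognition"); the status sequence
# below is made up for the demo.
class StubRekognition:
    def __init__(self, statuses):
        self._statuses = iter(statuses)

    def describe_project_versions(self, ProjectArn, VersionNames):
        return {"ProjectVersionDescriptions": [{"Status": next(self._statuses)}]}
```

With a real client you would first call `start_project_version(ProjectVersionArn=...)`, and later `stop_project_version(ProjectVersionArn=...)`, since you are charged for the whole time the model is RUNNING.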
1003 | 1003 | "StartSegmentDetection":{
1004 | 1004 | "name":"StartSegmentDetection",

1806 | 1806 | },
1807 | 1807 | "RegionsOfInterest":{
1808 | 1808 | "shape":"RegionsOfInterest",
1809 | | - "documentation":"<p> Specifies locations in the frames where Amazon Rekognition checks for objects or people. You can specify up to 10 regions of interest. This is an optional parameter for label detection stream processors and should not be used to create a face search stream processor. </p>"
| 1809 | + "documentation":"<p> Specifies locations in the frames where Amazon Rekognition checks for objects or people. You can specify up to 10 regions of interest, and each region has either a polygon or a bounding box. This is an optional parameter for label detection stream processors and should not be used to create a face search stream processor. </p>"
1810 | 1810 | },
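The constraints in that description can be checked client-side before creating a stream processor. A sketch, assuming the wire shape defined elsewhere in this model (a list of dicts, each keyed `BoundingBox` or `Polygon`); the exactly-one-shape rule is read from the wording "either a polygon or a bounding box", and `validate_regions_of_interest` is a hypothetical helper, not an SDK function.

```python
def validate_regions_of_interest(regions):
    """Pre-flight check for the documented RegionsOfInterest limits:
    at most 10 regions, each carrying exactly one of BoundingBox or Polygon.
    """
    if len(regions) > 10:
        raise ValueError("at most 10 regions of interest are allowed")
    for i, region in enumerate(regions):
        # Exactly one shape per region (neither, or both, is rejected here --
        # an assumption based on the documentation wording).
        if ("BoundingBox" in region) == ("Polygon" in region):
            raise ValueError(
                f"region {i}: specify exactly one of BoundingBox or Polygon"
            )
    return regions
```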
1811 | 1811 | "DataSharingPreference":{
1812 | 1812 | "shape":"StreamProcessorDataSharingPreference",

4378 | 4378 | "KmsKeyId":{
4379 | 4379 | "shape":"KmsKeyId",
4380 | 4380 | "documentation":"<p>The identifier for the AWS Key Management Service key (AWS KMS key) that was used to encrypt the model during training. </p>"
| 4381 | + },
| 4382 | + "MaxInferenceUnits":{
| 4383 | + "shape":"InferenceUnits",
| 4384 | + "documentation":"<p>The maximum number of inference units Amazon Rekognition Custom Labels uses to auto-scale the model. For more information, see <a>StartProjectVersion</a>.</p>"
4381 | 4385 | }
4382 | 4386 | },
4383 | 4387 | "documentation":"<p>A description of a version of an Amazon Rekognition Custom Labels model.</p>"

4584 | 4588 | "documentation":"<p> Specifies a shape made up of up to 10 <code>Point</code> objects to define a region of interest. </p>"
4585 | 4589 | }
4586 | 4590 | },
4587 | | - "documentation":"<p>Specifies a location within the frame that Rekognition checks for objects of interest such as text, labels, or faces. It uses a <code>BoundingBox</code> or object or <code>Polygon</code> to set a region of the screen.</p> <p>A word, face, or label is included in the region if it is more than half in that region. If there is more than one region, the word, face, or label is compared with all regions of the screen. Any object of interest that is more than half in a region is kept in the results.</p>"
| 4591 | + "documentation":"<p>Specifies a location within the frame that Rekognition checks for objects of interest such as text, labels, or faces. It uses a <code>BoundingBox</code> or <code>Polygon</code> to set a region of the screen.</p> <p>A word, face, or label is included in the region if it is more than half in that region. If there is more than one region, the word, face, or label is compared with all regions of the screen. Any object of interest that is more than half in a region is kept in the results.</p>"
4588 | 4592 | },
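The "more than half in that region" rule above can be made concrete for BoundingBox regions with a little interval arithmetic (Polygon regions would need polygon clipping, omitted here). This is a sketch of the stated rule, not the service's actual geometry code; boxes use Rekognition's relative Left/Top/Width/Height form, as ratios of the frame size.

```python
def fraction_inside(obj, region):
    """Fraction of `obj`'s area overlapping `region`; both are dicts with
    Left, Top, Width, Height as ratios of the overall frame dimensions."""
    overlap_w = max(0.0, min(obj["Left"] + obj["Width"], region["Left"] + region["Width"])
                    - max(obj["Left"], region["Left"]))
    overlap_h = max(0.0, min(obj["Top"] + obj["Height"], region["Top"] + region["Height"])
                    - max(obj["Top"], region["Top"]))
    area = obj["Width"] * obj["Height"]
    return (overlap_w * overlap_h) / area if area else 0.0

def kept_in_results(obj, regions):
    """Per the documented rule: keep an object of interest if it is more
    than half inside at least one region."""
    return any(fraction_inside(obj, r) > 0.5 for r in regions)
```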
4589 | 4593 | "RegionsOfInterest":{
4590 | 4594 | "type":"list",

5131 | 5135 | },
5132 | 5136 | "MinInferenceUnits":{
5133 | 5137 | "shape":"InferenceUnits",
5134 | | - "documentation":"<p>The minimum number of inference units to use. A single inference unit represents 1 hour of processing and can support up to 5 Transaction Pers Second (TPS). Use a higher number to increase the TPS throughput of your model. You are charged for the number of inference units that you use. </p>"
| 5138 | + "documentation":"<p>The minimum number of inference units to use. A single inference unit represents 1 hour of processing. </p> <p>For information about the number of transactions per second (TPS) that an inference unit can support, see <i>Running a trained Amazon Rekognition Custom Labels model</i> in the Amazon Rekognition Custom Labels Guide. </p> <p>Use a higher number to increase the TPS throughput of your model. You are charged for the number of inference units that you use. </p>"
| 5139 | + },
| 5140 | + "MaxInferenceUnits":{
| 5141 | + "shape":"InferenceUnits",
| 5142 | + "documentation":"<p>The maximum number of inference units to use for auto-scaling the model. If you don't specify a value, Amazon Rekognition Custom Labels doesn't auto-scale the model.</p>"
5135 | 5143 | }
5136 | 5144 | }
5137 | 5145 | },
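The two members above combine into a StartProjectVersion request along these lines. Parameter names match this API model; `start_project_version_params` is a hypothetical helper, and the min/max cross-check is an assumed client-side sanity check, not a documented service rule.

```python
def start_project_version_params(project_version_arn, min_units, max_units=None):
    """Build kwargs for StartProjectVersion. MinInferenceUnits is the
    baseline capacity you are always charged for; omitting MaxInferenceUnits
    means Custom Labels does not auto-scale the model."""
    if min_units < 1:
        raise ValueError("MinInferenceUnits must be at least 1")
    params = {
        "ProjectVersionArn": project_version_arn,
        "MinInferenceUnits": min_units,
    }
    if max_units is not None:
        if max_units < min_units:  # assumed sanity check, not from the docs
            raise ValueError("MaxInferenceUnits must be >= MinInferenceUnits")
        params["MaxInferenceUnits"] = max_units
    return params
```

These kwargs would then be passed to `boto3.client("rekognition").start_project_version(**params)`.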