32 | 32 | {"shape":"ServiceQuotaExceededException"},
33 | 33 | {"shape":"ModelErrorException"}
34 | 34 | ],
35 |    | - "documentation":"<p>Invokes the specified Bedrock model to run inference using the input provided in the request body. You use InvokeModel to run inference for text models, image models, and embedding models.</p> <p>For more information, see <a href=\"https://docs.aws.amazon.com/bedrock/latest/userguide/api-methods-run.html\">Run inference</a> in the Bedrock User Guide.</p> <p>For example requests, see Examples (after the Errors section).</p>"
   | 35 | + "documentation":"<p>Invokes the specified Amazon Bedrock model to run inference using the prompt and inference parameters provided in the request body. You use model inference to generate text, images, and embeddings.</p> <p>For example code, see <i>Invoke model code examples</i> in the <i>Amazon Bedrock User Guide</i>. </p> <p>This operation requires permission for the <code>bedrock:InvokeModel</code> action.</p>"
36 | 36 | },
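The updated `InvokeModel` documentation above can be exercised with a short client sketch. This is a hedged example, not part of the change: it assumes the boto3 `bedrock-runtime` client and the Amazon Titan text request format (`inputText` plus `textGenerationConfig`) described in the linked Inference parameters guide; the model ID and region are placeholders.

```python
import json


def build_titan_body(prompt, max_tokens=256, temperature=0.5):
    """Build a Titan-text request body (format assumed from the
    Inference parameters guide linked in the documentation)."""
    return json.dumps({
        "inputText": prompt,
        "textGenerationConfig": {
            "maxTokenCount": max_tokens,
            "temperature": temperature,
        },
    })


def invoke(prompt, model_id="amazon.titan-text-express-v1", region="us-east-1"):
    """Call InvokeModel; requires AWS credentials granting bedrock:InvokeModel."""
    import boto3  # imported here so the body helper stays dependency-free

    client = boto3.client("bedrock-runtime", region_name=region)
    response = client.invoke_model(
        modelId=model_id,
        body=build_titan_body(prompt),
        contentType="application/json",
        accept="application/json",
    )
    return json.loads(response["body"].read())
```

The body helper is pure stdlib, so the request format can be checked without credentials; only `invoke` touches the network.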
37 | 37 | "InvokeModelWithResponseStream":{
38 | 38 | "name":"InvokeModelWithResponseStream",

55 | 55 | {"shape":"ServiceQuotaExceededException"},
56 | 56 | {"shape":"ModelErrorException"}
57 | 57 | ],
58 |    | - "documentation":"<p>Invoke the specified Bedrock model to run inference using the input provided. Return the response in a stream.</p> <p>For more information, see <a href=\"https://docs.aws.amazon.com/bedrock/latest/userguide/api-methods-run.html\">Run inference</a> in the Bedrock User Guide.</p> <p>For an example request and response, see Examples (after the Errors section).</p>"
   | 58 | + "documentation":"<p>Invoke the specified Amazon Bedrock model to run inference using the prompt and inference parameters provided in the request body. The response is returned in a stream.</p> <p>To see if a model supports streaming, call <a href=\"https://docs.aws.amazon.com/bedrock/latest/APIReference/API_GetFoundationModel.html\">GetFoundationModel</a> and check the <code>responseStreamingSupported</code> field in the response.</p> <note> <p>The CLI doesn't support <code>InvokeModelWithResponseStream</code>.</p> </note> <p>For example code, see <i>Invoke model with streaming code example</i> in the <i>Amazon Bedrock User Guide</i>. </p> <p>This operation requires permissions to perform the <code>bedrock:InvokeModelWithResponseStream</code> action. </p>"
59 | 59 | }
60 | 60 | },
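A streaming call can be sketched the same way. This is a hedged example: it assumes boto3's `invoke_model_with_response_stream`, whose response body yields event dicts shaped like `{"chunk": {"bytes": b"..."}}` per the `ResponseStream` definition later in this file; the decoding helper is an assumption layered on that shape.

```python
import json


def decode_chunk(event):
    """Extract the JSON payload from one streaming event.
    Events are dicts like {"chunk": {"bytes": b"..."}} (shape assumed
    from the ResponseStream definition); non-chunk events return None."""
    chunk = event.get("chunk")
    if chunk is None:
        return None
    return json.loads(chunk["bytes"])


def stream_completion(model_id, body, region="us-east-1"):
    """Yield decoded parts as they arrive. Requires AWS credentials;
    note the CLI doesn't support this operation, per the docs above."""
    import boto3

    client = boto3.client("bedrock-runtime", region_name=region)
    response = client.invoke_model_with_response_stream(
        modelId=model_id, body=body, contentType="application/json"
    )
    for event in response["body"]:
        part = decode_chunk(event)
        if part is not None:
            yield part
```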
61 | 61 | "shapes":{

77 | 77 | "min":0,
78 | 78 | "sensitive":true
79 | 79 | },
   | 80 | + "GuardrailIdentifier":{
   | 81 | + "type":"string",
   | 82 | + "max":2048,
   | 83 | + "min":0,
   | 84 | + "pattern":"(([a-z0-9]+)|(arn:aws(-[^:]+)?:bedrock:[a-z0-9-]{1,20}:[0-9]{12}:guardrail/[a-z0-9]+))"
   | 85 | + },
   | 86 | + "GuardrailVersion":{
   | 87 | + "type":"string",
   | 88 | + "pattern":"(([1-9][0-9]{0,7})|(DRAFT))"
   | 89 | + },
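The two new shapes constrain the guardrail header values with regular expressions. Clients can sanity-check values before sending a request by applying the same patterns, copied verbatim from the shapes above:

```python
import re

# Patterns copied from the GuardrailIdentifier and GuardrailVersion shapes.
GUARDRAIL_IDENTIFIER = re.compile(
    r"(([a-z0-9]+)|(arn:aws(-[^:]+)?:bedrock:[a-z0-9-]{1,20}:[0-9]{12}:guardrail/[a-z0-9]+))"
)
GUARDRAIL_VERSION = re.compile(r"(([1-9][0-9]{0,7})|(DRAFT))")


def is_valid_guardrail(identifier, version):
    """True if both header values satisfy the shape patterns:
    a lowercase ID or a guardrail ARN, and a version 1-99999999 or DRAFT."""
    return (
        GUARDRAIL_IDENTIFIER.fullmatch(identifier) is not None
        and GUARDRAIL_VERSION.fullmatch(version) is not None
    )
```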
80 | 90 | "InternalServerException":{
81 | 91 | "type":"structure",
82 | 92 | "members":{

102 | 112 | "members":{
103 | 113 | "body":{
104 | 114 | "shape":"Body",
105 |     | - "documentation":"<p>Input data in the format specified in the content-type request header. To see the format and content of this field for different models, refer to <a href=\"https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters.html\">Inference parameters</a>.</p>"
    | 115 | + "documentation":"<p>The prompt and inference parameters in the format specified in the <code>contentType</code> in the header. To see the format and content of the request and response bodies for different models, refer to <a href=\"https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters.html\">Inference parameters</a>. For more information, see <a href=\"https://docs.aws.amazon.com/bedrock/latest/userguide/api-methods-run.html\">Run inference</a> in the Bedrock User Guide.</p>"
106 | 116 | },
107 | 117 | "contentType":{
108 | 118 | "shape":"MimeType",

118 | 128 | },
119 | 129 | "modelId":{
120 | 130 | "shape":"InvokeModelIdentifier",
121 |     | - "documentation":"<p>Identifier of the model. </p>",
    | 131 | + "documentation":"<p>The unique identifier of the model to invoke to run inference.</p> <p>The <code>modelId</code> to provide depends on the type of model that you use:</p> <ul> <li> <p>If you use a base model, specify the model ID or its ARN. For a list of model IDs for base models, see <a href=\"https://docs.aws.amazon.com/bedrock/latest/userguide/model-ids.html#model-ids-arns\">Amazon Bedrock base model IDs (on-demand throughput)</a> in the Amazon Bedrock User Guide.</p> </li> <li> <p>If you use a provisioned model, specify the ARN of the Provisioned Throughput. For more information, see <a href=\"https://docs.aws.amazon.com/bedrock/latest/userguide/prov-thru-use.html\">Run inference using a Provisioned Throughput</a> in the Amazon Bedrock User Guide.</p> </li> <li> <p>If you use a custom model, first purchase Provisioned Throughput for it. Then specify the ARN of the resulting provisioned model. For more information, see <a href=\"https://docs.aws.amazon.com/bedrock/latest/userguide/model-customization-use.html\">Use a custom model in Amazon Bedrock</a> in the Amazon Bedrock User Guide.</p> </li> </ul>",
122 | 132 | "location":"uri",
123 | 133 | "locationName":"modelId"
    | 134 | + },
    | 135 | + "trace":{
    | 136 | + "shape":"Trace",
    | 137 | + "documentation":"<p>Specifies whether to enable or disable the Bedrock trace. If enabled, you can see the full Bedrock trace.</p>",
    | 138 | + "location":"header",
    | 139 | + "locationName":"X-Amzn-Bedrock-Trace"
    | 140 | + },
    | 141 | + "guardrailIdentifier":{
    | 142 | + "shape":"GuardrailIdentifier",
    | 143 | + "documentation":"<p>The unique identifier of the guardrail that you want to use. If you don't provide a value, no guardrail is applied to the invocation.</p> <p>An error will be thrown in the following situations.</p> <ul> <li> <p>You don't provide a guardrail identifier but you specify the <code>amazon-bedrock-guardrailConfig</code> field in the request body.</p> </li> <li> <p>You enable the guardrail but the <code>contentType</code> isn't <code>application/json</code>.</p> </li> <li> <p>You provide a guardrail identifier, but <code>guardrailVersion</code> isn't specified.</p> </li> </ul>",
    | 144 | + "location":"header",
    | 145 | + "locationName":"X-Amzn-Bedrock-GuardrailIdentifier"
    | 146 | + },
    | 147 | + "guardrailVersion":{
    | 148 | + "shape":"GuardrailVersion",
    | 149 | + "documentation":"<p>The version number for the guardrail. The value can also be <code>DRAFT</code>.</p>",
    | 150 | + "location":"header",
    | 151 | + "locationName":"X-Amzn-Bedrock-GuardrailVersion"
124 | 152 | }
125 | 153 | },
126 | 154 | "payload":"body"
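The three new request members are carried as HTTP headers, but SDKs surface them as ordinary call parameters. A hedged sketch of assembling the keyword arguments (the camelCase parameter names are assumed to follow boto3's mapping of the member names above, and the version-required rule comes from the guardrail documentation):

```python
def invoke_kwargs(model_id, body, guardrail_id=None, guardrail_version=None,
                  trace=False):
    """Assemble kwargs for invoke_model, adding the optional trace and
    guardrail parameters only when supplied. A guardrail identifier
    without a version is rejected, mirroring the documented error."""
    kwargs = {
        "modelId": model_id,
        "body": body,
        "contentType": "application/json",
    }
    if trace:
        kwargs["trace"] = "ENABLED"  # sent as the X-Amzn-Bedrock-Trace header
    if guardrail_id is not None:
        if guardrail_version is None:
            raise ValueError("guardrailVersion is required with guardrailIdentifier")
        kwargs["guardrailIdentifier"] = guardrail_id
        kwargs["guardrailVersion"] = guardrail_version
    return kwargs
```

The dict can then be splatted into the client call, e.g. `client.invoke_model(**invoke_kwargs(...))`.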

134 | 162 | "members":{
135 | 163 | "body":{
136 | 164 | "shape":"Body",
137 |     | - "documentation":"<p>Inference response from the model in the format specified in the content-type header field. To see the format and content of this field for different models, refer to <a href=\"https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters.html\">Inference parameters</a>.</p>"
    | 165 | + "documentation":"<p>Inference response from the model in the format specified in the <code>contentType</code> header. To see the format and content of the request and response bodies for different models, refer to <a href=\"https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters.html\">Inference parameters</a>.</p>"
138 | 166 | },
139 | 167 | "contentType":{
140 | 168 | "shape":"MimeType",

154 | 182 | "members":{
155 | 183 | "body":{
156 | 184 | "shape":"Body",
157 |     | - "documentation":"<p>Inference input in the format specified by the content-type. To see the format and content of this field for different models, refer to <a href=\"https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters.html\">Inference parameters</a>.</p>"
    | 185 | + "documentation":"<p>The prompt and inference parameters in the format specified in the <code>contentType</code> in the header. To see the format and content of the request and response bodies for different models, refer to <a href=\"https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters.html\">Inference parameters</a>. For more information, see <a href=\"https://docs.aws.amazon.com/bedrock/latest/userguide/api-methods-run.html\">Run inference</a> in the Bedrock User Guide.</p>"
158 | 186 | },
159 | 187 | "contentType":{
160 | 188 | "shape":"MimeType",

170 | 198 | },
171 | 199 | "modelId":{
172 | 200 | "shape":"InvokeModelIdentifier",
173 |     | - "documentation":"<p>Id of the model to invoke using the streaming request.</p>",
    | 201 | + "documentation":"<p>The unique identifier of the model to invoke to run inference.</p> <p>The <code>modelId</code> to provide depends on the type of model that you use:</p> <ul> <li> <p>If you use a base model, specify the model ID or its ARN. For a list of model IDs for base models, see <a href=\"https://docs.aws.amazon.com/bedrock/latest/userguide/model-ids.html#model-ids-arns\">Amazon Bedrock base model IDs (on-demand throughput)</a> in the Amazon Bedrock User Guide.</p> </li> <li> <p>If you use a provisioned model, specify the ARN of the Provisioned Throughput. For more information, see <a href=\"https://docs.aws.amazon.com/bedrock/latest/userguide/prov-thru-use.html\">Run inference using a Provisioned Throughput</a> in the Amazon Bedrock User Guide.</p> </li> <li> <p>If you use a custom model, first purchase Provisioned Throughput for it. Then specify the ARN of the resulting provisioned model. For more information, see <a href=\"https://docs.aws.amazon.com/bedrock/latest/userguide/model-customization-use.html\">Use a custom model in Amazon Bedrock</a> in the Amazon Bedrock User Guide.</p> </li> </ul>",
174 | 202 | "location":"uri",
175 | 203 | "locationName":"modelId"
    | 204 | + },
    | 205 | + "trace":{
    | 206 | + "shape":"Trace",
    | 207 | + "documentation":"<p>Specifies whether to enable or disable the Bedrock trace. If enabled, you can see the full Bedrock trace.</p>",
    | 208 | + "location":"header",
    | 209 | + "locationName":"X-Amzn-Bedrock-Trace"
    | 210 | + },
    | 211 | + "guardrailIdentifier":{
    | 212 | + "shape":"GuardrailIdentifier",
    | 213 | + "documentation":"<p>The unique identifier of the guardrail that you want to use. If you don't provide a value, no guardrail is applied to the invocation.</p> <p>An error is thrown in the following situations.</p> <ul> <li> <p>You don't provide a guardrail identifier but you specify the <code>amazon-bedrock-guardrailConfig</code> field in the request body.</p> </li> <li> <p>You enable the guardrail but the <code>contentType</code> isn't <code>application/json</code>.</p> </li> <li> <p>You provide a guardrail identifier, but <code>guardrailVersion</code> isn't specified.</p> </li> </ul>",
    | 214 | + "location":"header",
    | 215 | + "locationName":"X-Amzn-Bedrock-GuardrailIdentifier"
    | 216 | + },
    | 217 | + "guardrailVersion":{
    | 218 | + "shape":"GuardrailVersion",
    | 219 | + "documentation":"<p>The version number for the guardrail. The value can also be <code>DRAFT</code>.</p>",
    | 220 | + "location":"header",
    | 221 | + "locationName":"X-Amzn-Bedrock-GuardrailVersion"
176 | 222 | }
177 | 223 | },
178 | 224 | "payload":"body"

186 | 232 | "members":{
187 | 233 | "body":{
188 | 234 | "shape":"ResponseStream",
189 |     | - "documentation":"<p>Inference response from the model in the format specified by Content-Type. To see the format and content of this field for different models, refer to <a href=\"https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters.html\">Inference parameters</a>.</p>"
    | 235 | + "documentation":"<p>Inference response from the model in the format specified by the <code>contentType</code> header. To see the format and content of this field for different models, refer to <a href=\"https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters.html\">Inference parameters</a>.</p>"
190 | 236 | },
191 | 237 | "contentType":{
192 | 238 | "shape":"MimeType",

243 | 289 | "documentation":"<p>The original message.</p>"
244 | 290 | }
245 | 291 | },
246 |     | - "documentation":"<p>An error occurred while streaming the response.</p>",
    | 292 | + "documentation":"<p>An error occurred while streaming the response. Retry your request.</p>",
247 | 293 | "error":{
248 | 294 | "httpStatusCode":424,
249 | 295 | "senderFault":true

303 | 349 | "shape":"PayloadPart",
304 | 350 | "documentation":"<p>Content included in the response.</p>"
305 | 351 | },
306 |     | - "internalServerException":{"shape":"InternalServerException"},
307 |     | - "modelStreamErrorException":{"shape":"ModelStreamErrorException"},
308 |     | - "validationException":{"shape":"ValidationException"},
309 |     | - "throttlingException":{"shape":"ThrottlingException"},
310 |     | - "modelTimeoutException":{"shape":"ModelTimeoutException"}
    | 352 | + "internalServerException":{
    | 353 | + "shape":"InternalServerException",
    | 354 | + "documentation":"<p>An internal server error occurred. Retry your request.</p>"
    | 355 | + },
    | 356 | + "modelStreamErrorException":{
    | 357 | + "shape":"ModelStreamErrorException",
    | 358 | + "documentation":"<p>An error occurred while streaming the response. Retry your request.</p>"
    | 359 | + },
    | 360 | + "validationException":{
    | 361 | + "shape":"ValidationException",
    | 362 | + "documentation":"<p>Input validation failed. Check your request parameters and retry the request.</p>"
    | 363 | + },
    | 364 | + "throttlingException":{
    | 365 | + "shape":"ThrottlingException",
    | 366 | + "documentation":"<p>The number or frequency of requests exceeds the limit. Resubmit your request later.</p>"
    | 367 | + },
    | 368 | + "modelTimeoutException":{
    | 369 | + "shape":"ModelTimeoutException",
    | 370 | + "documentation":"<p>The request took too long to process. Processing time exceeded the model timeout length.</p>"
    | 371 | + }
311 | 372 | },
312 | 373 | "documentation":"<p>Definition of content in the response stream.</p>",
313 | 374 | "eventstream":true
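The eventstream above can now carry five documented exception events alongside `chunk` parts. A hedged dispatcher over a raw event dict might look like this (the event keys are taken from the member names above; the dict shape is an assumption about how the SDK surfaces eventstream members):

```python
# Exception event names from the ResponseStream definition above.
STREAM_ERRORS = (
    "internalServerException",
    "modelStreamErrorException",
    "validationException",
    "throttlingException",
    "modelTimeoutException",
)


class StreamError(RuntimeError):
    """Raised when the stream delivers one of the documented error events."""


def dispatch_event(event):
    """Return the chunk payload bytes, or raise StreamError for any of
    the documented exception events."""
    for name in STREAM_ERRORS:
        if name in event:
            raise StreamError(f"{name}: {event[name].get('message', '')}")
    return event["chunk"]["bytes"]
```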

342 | 403 | },
343 | 404 | "exception":true
344 | 405 | },
    | 406 | + "Trace":{
    | 407 | + "type":"string",
    | 408 | + "enum":[
    | 409 | + "ENABLED",
    | 410 | + "DISABLED"
    | 411 | + ]
    | 412 | + },
345 | 413 | "ValidationException":{
346 | 414 | "type":"structure",
347 | 415 | "members":{

355 | 423 | "exception":true
356 | 424 | }
357 | 425 | },
358 |     | - "documentation":"<p>Describes the API operations for running inference using Bedrock models.</p>"
    | 426 | + "documentation":"<p>Describes the API operations for running inference using Amazon Bedrock models.</p>"
359 | 427 | }