
Commit 8fa7e6e

Author: AWS
Amazon Bedrock Runtime Update: This release introduces Guardrails for Amazon Bedrock.

1 parent d5ca29a commit 8fa7e6e

File tree

2 files changed: +89 -15 lines changed

Lines changed: 6 additions & 0 deletions
@@ -0,0 +1,6 @@
+{
+    "type": "feature",
+    "category": "Amazon Bedrock Runtime",
+    "contributor": "",
+    "description": "This release introduces Guardrails for Amazon Bedrock."
+}

services/bedrockruntime/src/main/resources/codegen-resources/service-2.json

Lines changed: 83 additions & 15 deletions
@@ -32,7 +32,7 @@
         {"shape":"ServiceQuotaExceededException"},
         {"shape":"ModelErrorException"}
       ],
-      "documentation":"<p>Invokes the specified Bedrock model to run inference using the input provided in the request body. You use InvokeModel to run inference for text models, image models, and embedding models.</p> <p>For more information, see <a href=\"https://docs.aws.amazon.com/bedrock/latest/userguide/api-methods-run.html\">Run inference</a> in the Bedrock User Guide.</p> <p>For example requests, see Examples (after the Errors section).</p>"
+      "documentation":"<p>Invokes the specified Amazon Bedrock model to run inference using the prompt and inference parameters provided in the request body. You use model inference to generate text, images, and embeddings.</p> <p>For example code, see <i>Invoke model code examples</i> in the <i>Amazon Bedrock User Guide</i>. </p> <p>This operation requires permission for the <code>bedrock:InvokeModel</code> action.</p>"
     },
     "InvokeModelWithResponseStream":{
       "name":"InvokeModelWithResponseStream",
@@ -55,7 +55,7 @@
         {"shape":"ServiceQuotaExceededException"},
         {"shape":"ModelErrorException"}
       ],
-      "documentation":"<p>Invoke the specified Bedrock model to run inference using the input provided. Return the response in a stream.</p> <p>For more information, see <a href=\"https://docs.aws.amazon.com/bedrock/latest/userguide/api-methods-run.html\">Run inference</a> in the Bedrock User Guide.</p> <p>For an example request and response, see Examples (after the Errors section).</p>"
+      "documentation":"<p>Invoke the specified Amazon Bedrock model to run inference using the prompt and inference parameters provided in the request body. The response is returned in a stream.</p> <p>To see if a model supports streaming, call <a href=\"https://docs.aws.amazon.com/bedrock/latest/APIReference/API_GetFoundationModel.html\">GetFoundationModel</a> and check the <code>responseStreamingSupported</code> field in the response.</p> <note> <p>The CLI doesn't support <code>InvokeModelWithResponseStream</code>.</p> </note> <p>For example code, see <i>Invoke model with streaming code example</i> in the <i>Amazon Bedrock User Guide</i>. </p> <p>This operation requires permissions to perform the <code>bedrock:InvokeModelWithResponseStream</code> action. </p>"
     }
   },
   "shapes":{
@@ -77,6 +77,16 @@
       "min":0,
       "sensitive":true
     },
+    "GuardrailIdentifier":{
+      "type":"string",
+      "max":2048,
+      "min":0,
+      "pattern":"(([a-z0-9]+)|(arn:aws(-[^:]+)?:bedrock:[a-z0-9-]{1,20}:[0-9]{12}:guardrail/[a-z0-9]+))"
+    },
+    "GuardrailVersion":{
+      "type":"string",
+      "pattern":"(([1-9][0-9]{0,7})|(DRAFT))"
+    },
     "InternalServerException":{
       "type":"structure",
       "members":{
@@ -102,7 +112,7 @@
       "members":{
         "body":{
           "shape":"Body",
-          "documentation":"<p>Input data in the format specified in the content-type request header. To see the format and content of this field for different models, refer to <a href=\"https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters.html\">Inference parameters</a>.</p>"
+          "documentation":"<p>The prompt and inference parameters in the format specified in the <code>contentType</code> in the header. To see the format and content of the request and response bodies for different models, refer to <a href=\"https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters.html\">Inference parameters</a>. For more information, see <a href=\"https://docs.aws.amazon.com/bedrock/latest/userguide/api-methods-run.html\">Run inference</a> in the Bedrock User Guide.</p>"
         },
         "contentType":{
           "shape":"MimeType",
@@ -118,9 +128,27 @@
         },
         "modelId":{
           "shape":"InvokeModelIdentifier",
-          "documentation":"<p>Identifier of the model. </p>",
+          "documentation":"<p>The unique identifier of the model to invoke to run inference.</p> <p>The <code>modelId</code> to provide depends on the type of model that you use:</p> <ul> <li> <p>If you use a base model, specify the model ID or its ARN. For a list of model IDs for base models, see <a href=\"https://docs.aws.amazon.com/bedrock/latest/userguide/model-ids.html#model-ids-arns\">Amazon Bedrock base model IDs (on-demand throughput)</a> in the Amazon Bedrock User Guide.</p> </li> <li> <p>If you use a provisioned model, specify the ARN of the Provisioned Throughput. For more information, see <a href=\"https://docs.aws.amazon.com/bedrock/latest/userguide/prov-thru-use.html\">Run inference using a Provisioned Throughput</a> in the Amazon Bedrock User Guide.</p> </li> <li> <p>If you use a custom model, first purchase Provisioned Throughput for it. Then specify the ARN of the resulting provisioned model. For more information, see <a href=\"https://docs.aws.amazon.com/bedrock/latest/userguide/model-customization-use.html\">Use a custom model in Amazon Bedrock</a> in the Amazon Bedrock User Guide.</p> </li> </ul>",
           "location":"uri",
           "locationName":"modelId"
+        },
+        "trace":{
+          "shape":"Trace",
+          "documentation":"<p>Specifies whether to enable or disable the Bedrock trace. If enabled, you can see the full Bedrock trace.</p>",
+          "location":"header",
+          "locationName":"X-Amzn-Bedrock-Trace"
+        },
+        "guardrailIdentifier":{
+          "shape":"GuardrailIdentifier",
+          "documentation":"<p>The unique identifier of the guardrail that you want to use. If you don't provide a value, no guardrail is applied to the invocation.</p> <p>An error will be thrown in the following situations.</p> <ul> <li> <p>You don't provide a guardrail identifier but you specify the <code>amazon-bedrock-guardrailConfig</code> field in the request body.</p> </li> <li> <p>You enable the guardrail but the <code>contentType</code> isn't <code>application/json</code>.</p> </li> <li> <p>You provide a guardrail identifier, but <code>guardrailVersion</code> isn't specified.</p> </li> </ul>",
+          "location":"header",
+          "locationName":"X-Amzn-Bedrock-GuardrailIdentifier"
+        },
+        "guardrailVersion":{
+          "shape":"GuardrailVersion",
+          "documentation":"<p>The version number for the guardrail. The value can also be <code>DRAFT</code>.</p>",
+          "location":"header",
+          "locationName":"X-Amzn-Bedrock-GuardrailVersion"
         }
       },
       "payload":"body"
@@ -134,7 +162,7 @@
       "members":{
         "body":{
           "shape":"Body",
-          "documentation":"<p>Inference response from the model in the format specified in the content-type header field. To see the format and content of this field for different models, refer to <a href=\"https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters.html\">Inference parameters</a>.</p>"
+          "documentation":"<p>Inference response from the model in the format specified in the <code>contentType</code> header. To see the format and content of the request and response bodies for different models, refer to <a href=\"https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters.html\">Inference parameters</a>.</p>"
         },
         "contentType":{
           "shape":"MimeType",
@@ -154,7 +182,7 @@
       "members":{
         "body":{
           "shape":"Body",
-          "documentation":"<p>Inference input in the format specified by the content-type. To see the format and content of this field for different models, refer to <a href=\"https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters.html\">Inference parameters</a>.</p>"
+          "documentation":"<p>The prompt and inference parameters in the format specified in the <code>contentType</code> in the header. To see the format and content of the request and response bodies for different models, refer to <a href=\"https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters.html\">Inference parameters</a>. For more information, see <a href=\"https://docs.aws.amazon.com/bedrock/latest/userguide/api-methods-run.html\">Run inference</a> in the Bedrock User Guide.</p>"
         },
         "contentType":{
           "shape":"MimeType",
@@ -170,9 +198,27 @@
         },
         "modelId":{
           "shape":"InvokeModelIdentifier",
-          "documentation":"<p>Id of the model to invoke using the streaming request.</p>",
+          "documentation":"<p>The unique identifier of the model to invoke to run inference.</p> <p>The <code>modelId</code> to provide depends on the type of model that you use:</p> <ul> <li> <p>If you use a base model, specify the model ID or its ARN. For a list of model IDs for base models, see <a href=\"https://docs.aws.amazon.com/bedrock/latest/userguide/model-ids.html#model-ids-arns\">Amazon Bedrock base model IDs (on-demand throughput)</a> in the Amazon Bedrock User Guide.</p> </li> <li> <p>If you use a provisioned model, specify the ARN of the Provisioned Throughput. For more information, see <a href=\"https://docs.aws.amazon.com/bedrock/latest/userguide/prov-thru-use.html\">Run inference using a Provisioned Throughput</a> in the Amazon Bedrock User Guide.</p> </li> <li> <p>If you use a custom model, first purchase Provisioned Throughput for it. Then specify the ARN of the resulting provisioned model. For more information, see <a href=\"https://docs.aws.amazon.com/bedrock/latest/userguide/model-customization-use.html\">Use a custom model in Amazon Bedrock</a> in the Amazon Bedrock User Guide.</p> </li> </ul>",
           "location":"uri",
           "locationName":"modelId"
+        },
+        "trace":{
+          "shape":"Trace",
+          "documentation":"<p>Specifies whether to enable or disable the Bedrock trace. If enabled, you can see the full Bedrock trace.</p>",
+          "location":"header",
+          "locationName":"X-Amzn-Bedrock-Trace"
+        },
+        "guardrailIdentifier":{
+          "shape":"GuardrailIdentifier",
+          "documentation":"<p>The unique identifier of the guardrail that you want to use. If you don't provide a value, no guardrail is applied to the invocation.</p> <p>An error is thrown in the following situations.</p> <ul> <li> <p>You don't provide a guardrail identifier but you specify the <code>amazon-bedrock-guardrailConfig</code> field in the request body.</p> </li> <li> <p>You enable the guardrail but the <code>contentType</code> isn't <code>application/json</code>.</p> </li> <li> <p>You provide a guardrail identifier, but <code>guardrailVersion</code> isn't specified.</p> </li> </ul>",
+          "location":"header",
+          "locationName":"X-Amzn-Bedrock-GuardrailIdentifier"
+        },
+        "guardrailVersion":{
+          "shape":"GuardrailVersion",
+          "documentation":"<p>The version number for the guardrail. The value can also be <code>DRAFT</code>.</p>",
+          "location":"header",
+          "locationName":"X-Amzn-Bedrock-GuardrailVersion"
         }
       },
       "payload":"body"
@@ -186,7 +232,7 @@
       "members":{
         "body":{
           "shape":"ResponseStream",
-          "documentation":"<p>Inference response from the model in the format specified by Content-Type. To see the format and content of this field for different models, refer to <a href=\"https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters.html\">Inference parameters</a>.</p>"
+          "documentation":"<p>Inference response from the model in the format specified by the <code>contentType</code> header. To see the format and content of this field for different models, refer to <a href=\"https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters.html\">Inference parameters</a>.</p>"
         },
         "contentType":{
           "shape":"MimeType",
@@ -243,7 +289,7 @@
           "documentation":"<p>The original message.</p>"
         }
       },
-      "documentation":"<p>An error occurred while streaming the response.</p>",
+      "documentation":"<p>An error occurred while streaming the response. Retry your request.</p>",
      "error":{
        "httpStatusCode":424,
        "senderFault":true
@@ -303,11 +349,26 @@
           "shape":"PayloadPart",
           "documentation":"<p>Content included in the response.</p>"
         },
-        "internalServerException":{"shape":"InternalServerException"},
-        "modelStreamErrorException":{"shape":"ModelStreamErrorException"},
-        "validationException":{"shape":"ValidationException"},
-        "throttlingException":{"shape":"ThrottlingException"},
-        "modelTimeoutException":{"shape":"ModelTimeoutException"}
+        "internalServerException":{
+          "shape":"InternalServerException",
+          "documentation":"<p>An internal server error occurred. Retry your request.</p>"
+        },
+        "modelStreamErrorException":{
+          "shape":"ModelStreamErrorException",
+          "documentation":"<p>An error occurred while streaming the response. Retry your request.</p>"
+        },
+        "validationException":{
+          "shape":"ValidationException",
+          "documentation":"<p>Input validation failed. Check your request parameters and retry the request.</p>"
+        },
+        "throttlingException":{
+          "shape":"ThrottlingException",
+          "documentation":"<p>The number or frequency of requests exceeds the limit. Resubmit your request later.</p>"
+        },
+        "modelTimeoutException":{
+          "shape":"ModelTimeoutException",
+          "documentation":"<p>The request took too long to process. Processing time exceeded the model timeout length.</p>"
+        }
       },
       "documentation":"<p>Definition of content in the response stream.</p>",
       "eventstream":true
@@ -342,6 +403,13 @@
       },
       "exception":true
     },
+    "Trace":{
+      "type":"string",
+      "enum":[
+        "ENABLED",
+        "DISABLED"
+      ]
+    },
     "ValidationException":{
       "type":"structure",
       "members":{
@@ -355,5 +423,5 @@
       "exception":true
     }
   },
-  "documentation":"<p>Describes the API operations for running inference using Bedrock models.</p>"
+  "documentation":"<p>Describes the API operations for running inference using Amazon Bedrock models.</p>"
 }
