
Commit 35fa938

Update SDK models
1 parent cb81883 commit 35fa938

127 files changed: 7139 additions, 545 deletions


Cargo.toml

Lines changed: 3 additions & 3 deletions
@@ -1,11 +1,11 @@
 [workspace]
 resolver = "2"
 exclude = [
-    "examples/examples",
-    "examples/cross_service",
+    "examples/test-utils",
     "examples/lambda",
     "examples/webassembly",
-    "examples/test-utils",
+    "examples/cross_service",
+    "examples/examples",
     "tests/no-default-features",
     "tests/telemetry",
     "tests/webassembly"

aws-models/arc-zonal-shift.json

Lines changed: 8 additions & 2 deletions
@@ -177,7 +177,7 @@
                 }
             },
             "traits": {
-                "smithy.api#documentation": "<p>Information about an autoshift. Amazon Web Services starts an autoshift to temporarily move traffic for a resource \n\t\t\taway from an Availability Zone in an Amazon Web Services Region\n\t\t\twhen Amazon Web Services determines that there's an issue in the Availability Zone that could potentially affect customers.\n\t\t\tYou can configure zonal autoshift in ARC for managed resources in your Amazon Web Services account in a Region. \n\t\t\tSupported Amazon Web Services resources are automatically registered with ARC.</p>\n <p>Autoshifts are temporary. When the Availability Zone recovers, Amazon Web Services ends the autoshift, and \n\t\t\ttraffic for the resource is no longer directed to the other Availability Zones in the Region.</p>\n <p>You can stop an autoshift for a resource by disabling zonal autoshift.</p>"
+                "smithy.api#documentation": "<p>Information about an autoshift. Amazon Web Services starts an autoshift to temporarily move traffic for a resource \n\t\t\taway from an Availability Zone in an Amazon Web Services Region\n\t\t\twhen Amazon Web Services determines that there's an issue in the Availability Zone that could potentially affect customers.\n\t\t\tYou can configure zonal autoshift in ARC for managed resources in your Amazon Web Services account in a Region. \n\t\t\tSupported Amazon Web Services resources are automatically registered with ARC.</p>\n <p>Autoshifts are temporary. When the Availability Zone recovers, Amazon Web Services ends the autoshift, and \n\t\t\ttraffic for the resource is no longer directed to the other Availability Zones in the Region.</p>"
             }
         },
         "com.amazonaws.arczonalshift#AutoshiftTriggerResource": {
@@ -1013,7 +1013,7 @@
                 }
             ],
             "traits": {
-                "smithy.api#documentation": "<p>Lists all active and completed zonal shifts in Amazon Route 53 Application Recovery Controller in your Amazon Web Services account in this Amazon Web Services Region.\n \t\t<code>ListZonalShifts</code> returns customer-initiated zonal shifts, as well as practice run zonal shifts that ARC started on \n \t\tyour behalf for zonal autoshift.</p>\n <p>For more information about listing\n \t\tautoshifts, see <a href=\"https://docs.aws.amazon.com/arc-zonal-shift/latest/api/API_ListAutoshifts.html\">\"&gt;ListAutoshifts</a>.</p>",
+                "smithy.api#documentation": "<p>Lists all active and completed zonal shifts in Amazon Route 53 Application Recovery Controller in your Amazon Web Services account in this Amazon Web Services Region.</p>",
                 "smithy.api#http": {
                     "method": "GET",
                     "uri": "/zonalshifts",
@@ -2594,6 +2594,12 @@
                     "traits": {
                         "smithy.api#enumValue": "FISExperimentUpdateNotAllowed"
                     }
+                },
+                "AUTOSHIFT_UPDATE_NOT_ALLOWED": {
+                    "target": "smithy.api#Unit",
+                    "traits": {
+                        "smithy.api#enumValue": "AutoshiftUpdateNotAllowed"
+                    }
                 }
             }
         },
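The last hunk adds a new `AUTOSHIFT_UPDATE_NOT_ALLOWED` member to a Smithy enum shape, where the `smithy.api#enumValue` trait carries the on-the-wire string. A minimal Python sketch of how such a shape maps member names to wire values (this is an illustration over a trimmed-down model dict, not the actual smithy-rs code generator):

```python
# Illustrative only: extract enum wire values from a Smithy enum shape,
# using a trimmed-down dict shaped like the model JSON above.
shape = {
    "type": "enum",
    "members": {
        "FIS_EXPERIMENT_UPDATE_NOT_ALLOWED": {
            "target": "smithy.api#Unit",
            "traits": {"smithy.api#enumValue": "FISExperimentUpdateNotAllowed"},
        },
        "AUTOSHIFT_UPDATE_NOT_ALLOWED": {
            "target": "smithy.api#Unit",
            "traits": {"smithy.api#enumValue": "AutoshiftUpdateNotAllowed"},
        },
    },
}

def enum_values(shape: dict) -> dict:
    """Map each enum member name to its on-the-wire string value."""
    return {
        name: member["traits"]["smithy.api#enumValue"]
        for name, member in shape.get("members", {}).items()
    }

print(enum_values(shape))
```

Adding a member like this is backward compatible for clients: existing wire values are unchanged, and generated SDK enums typically retain an "unknown value" fallback for servers that send values newer than the client's model.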

aws-models/budgets.json

Lines changed: 528 additions & 5 deletions
Large diffs are not rendered by default.

aws-models/firehose.json

Lines changed: 1 addition & 1 deletion
@@ -962,7 +962,7 @@
                 }
             ],
             "traits": {
-                "smithy.api#documentation": "<p>Creates a Firehose stream.</p>\n <p>By default, you can create up to 50 Firehose streams per Amazon Web Services\n Region.</p>\n <p>This is an asynchronous operation that immediately returns. The initial status of the\n Firehose stream is <code>CREATING</code>. After the Firehose stream is created, its status\n is <code>ACTIVE</code> and it now accepts data. If the Firehose stream creation fails, the\n status transitions to <code>CREATING_FAILED</code>. Attempts to send data to a delivery\n stream that is not in the <code>ACTIVE</code> state cause an exception. To check the state\n of a Firehose stream, use <a>DescribeDeliveryStream</a>.</p>\n <p>If the status of a Firehose stream is <code>CREATING_FAILED</code>, this status\n doesn't change, and you can't invoke <code>CreateDeliveryStream</code> again on it.\n However, you can invoke the <a>DeleteDeliveryStream</a> operation to delete\n it.</p>\n <p>A Firehose stream can be configured to receive records directly\n from providers using <a>PutRecord</a> or <a>PutRecordBatch</a>, or it\n can be configured to use an existing Kinesis stream as its source. To specify a Kinesis\n data stream as input, set the <code>DeliveryStreamType</code> parameter to\n <code>KinesisStreamAsSource</code>, and provide the Kinesis stream Amazon Resource Name\n (ARN) and role ARN in the <code>KinesisStreamSourceConfiguration</code>\n parameter.</p>\n <p>To create a Firehose stream with server-side encryption (SSE) enabled, include <a>DeliveryStreamEncryptionConfigurationInput</a> in your request. This is\n optional. You can also invoke <a>StartDeliveryStreamEncryption</a> to turn on\n SSE for an existing Firehose stream that doesn't have SSE enabled.</p>\n <p>A Firehose stream is configured with a single destination, such as Amazon Simple\n Storage Service (Amazon S3), Amazon Redshift, Amazon OpenSearch Service, Amazon OpenSearch\n Serverless, Splunk, and any custom HTTP endpoint or HTTP endpoints owned by or supported by\n third-party service providers, including Datadog, Dynatrace, LogicMonitor, MongoDB, New\n Relic, and Sumo Logic. You must specify only one of the following destination configuration\n parameters: <code>ExtendedS3DestinationConfiguration</code>,\n <code>S3DestinationConfiguration</code>,\n <code>ElasticsearchDestinationConfiguration</code>,\n <code>RedshiftDestinationConfiguration</code>, or\n <code>SplunkDestinationConfiguration</code>.</p>\n <p>When you specify <code>S3DestinationConfiguration</code>, you can also provide the\n following optional values: BufferingHints, <code>EncryptionConfiguration</code>, and\n <code>CompressionFormat</code>. By default, if no <code>BufferingHints</code> value is\n provided, Firehose buffers data up to 5 MB or for 5 minutes, whichever\n condition is satisfied first. <code>BufferingHints</code> is a hint, so there are some\n cases where the service cannot adhere to these conditions strictly. For example, record\n boundaries might be such that the size is a little over or under the configured buffering\n size. By default, no encryption is performed. We strongly recommend that you enable\n encryption to ensure secure data storage in Amazon S3.</p>\n <p>A few notes about Amazon Redshift as a destination:</p>\n <ul>\n <li>\n <p>An Amazon Redshift destination requires an S3 bucket as intermediate location.\n Firehose first delivers data to Amazon S3 and then uses\n <code>COPY</code> syntax to load data into an Amazon Redshift table. This is\n specified in the <code>RedshiftDestinationConfiguration.S3Configuration</code>\n parameter.</p>\n </li>\n <li>\n <p>The compression formats <code>SNAPPY</code> or <code>ZIP</code> cannot be\n specified in <code>RedshiftDestinationConfiguration.S3Configuration</code> because\n the Amazon Redshift <code>COPY</code> operation that reads from the S3 bucket doesn't\n support these compression formats.</p>\n </li>\n <li>\n <p>We strongly recommend that you use the user name and password you provide\n exclusively with Firehose, and that the permissions for the account are\n restricted for Amazon Redshift <code>INSERT</code> permissions.</p>\n </li>\n </ul>\n <p>Firehose assumes the IAM role that is configured as part of the\n destination. The role should allow the Firehose principal to assume the role,\n and the role should have permissions that allow the service to deliver the data. For more\n information, see <a href=\"https://docs.aws.amazon.com/firehose/latest/dev/controlling-access.html#using-iam-s3\">Grant Firehose Access to an Amazon S3 Destination</a> in the <i>Amazon Firehose Developer Guide</i>.</p>"
+                "smithy.api#documentation": "<p>Creates a Firehose stream.</p>\n <p>By default, you can create up to 5,000 Firehose streams per Amazon Web Services\n Region.</p>\n <p>This is an asynchronous operation that immediately returns. The initial status of the\n Firehose stream is <code>CREATING</code>. After the Firehose stream is created, its status\n is <code>ACTIVE</code> and it now accepts data. If the Firehose stream creation fails, the\n status transitions to <code>CREATING_FAILED</code>. Attempts to send data to a delivery\n stream that is not in the <code>ACTIVE</code> state cause an exception. To check the state\n of a Firehose stream, use <a>DescribeDeliveryStream</a>.</p>\n <p>If the status of a Firehose stream is <code>CREATING_FAILED</code>, this status\n doesn't change, and you can't invoke <code>CreateDeliveryStream</code> again on it.\n However, you can invoke the <a>DeleteDeliveryStream</a> operation to delete\n it.</p>\n <p>A Firehose stream can be configured to receive records directly\n from providers using <a>PutRecord</a> or <a>PutRecordBatch</a>, or it\n can be configured to use an existing Kinesis stream as its source. To specify a Kinesis\n data stream as input, set the <code>DeliveryStreamType</code> parameter to\n <code>KinesisStreamAsSource</code>, and provide the Kinesis stream Amazon Resource Name\n (ARN) and role ARN in the <code>KinesisStreamSourceConfiguration</code>\n parameter.</p>\n <p>To create a Firehose stream with server-side encryption (SSE) enabled, include <a>DeliveryStreamEncryptionConfigurationInput</a> in your request. This is\n optional. You can also invoke <a>StartDeliveryStreamEncryption</a> to turn on\n SSE for an existing Firehose stream that doesn't have SSE enabled.</p>\n <p>A Firehose stream is configured with a single destination, such as Amazon Simple\n Storage Service (Amazon S3), Amazon Redshift, Amazon OpenSearch Service, Amazon OpenSearch\n Serverless, Splunk, and any custom HTTP endpoint or HTTP endpoints owned by or supported by\n third-party service providers, including Datadog, Dynatrace, LogicMonitor, MongoDB, New\n Relic, and Sumo Logic. You must specify only one of the following destination configuration\n parameters: <code>ExtendedS3DestinationConfiguration</code>,\n <code>S3DestinationConfiguration</code>,\n <code>ElasticsearchDestinationConfiguration</code>,\n <code>RedshiftDestinationConfiguration</code>, or\n <code>SplunkDestinationConfiguration</code>.</p>\n <p>When you specify <code>S3DestinationConfiguration</code>, you can also provide the\n following optional values: BufferingHints, <code>EncryptionConfiguration</code>, and\n <code>CompressionFormat</code>. By default, if no <code>BufferingHints</code> value is\n provided, Firehose buffers data up to 5 MB or for 5 minutes, whichever\n condition is satisfied first. <code>BufferingHints</code> is a hint, so there are some\n cases where the service cannot adhere to these conditions strictly. For example, record\n boundaries might be such that the size is a little over or under the configured buffering\n size. By default, no encryption is performed. We strongly recommend that you enable\n encryption to ensure secure data storage in Amazon S3.</p>\n <p>A few notes about Amazon Redshift as a destination:</p>\n <ul>\n <li>\n <p>An Amazon Redshift destination requires an S3 bucket as intermediate location.\n Firehose first delivers data to Amazon S3 and then uses\n <code>COPY</code> syntax to load data into an Amazon Redshift table. This is\n specified in the <code>RedshiftDestinationConfiguration.S3Configuration</code>\n parameter.</p>\n </li>\n <li>\n <p>The compression formats <code>SNAPPY</code> or <code>ZIP</code> cannot be\n specified in <code>RedshiftDestinationConfiguration.S3Configuration</code> because\n the Amazon Redshift <code>COPY</code> operation that reads from the S3 bucket doesn't\n support these compression formats.</p>\n </li>\n <li>\n <p>We strongly recommend that you use the user name and password you provide\n exclusively with Firehose, and that the permissions for the account are\n restricted for Amazon Redshift <code>INSERT</code> permissions.</p>\n </li>\n </ul>\n <p>Firehose assumes the IAM role that is configured as part of the\n destination. The role should allow the Firehose principal to assume the role,\n and the role should have permissions that allow the service to deliver the data. For more\n information, see <a href=\"https://docs.aws.amazon.com/firehose/latest/dev/controlling-access.html#using-iam-s3\">Grant Firehose Access to an Amazon S3 Destination</a> in the <i>Amazon Firehose Developer Guide</i>.</p>"
             }
         },
         "com.amazonaws.firehose#CreateDeliveryStreamInput": {
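The only change in this hunk is buried deep inside the HTML documentation string: the default per-Region Firehose stream quota goes from 50 to 5,000. These `smithy.api#documentation` traits are HTML that the SDK codegen converts into rustdoc; a crude tag-stripper (not the real conversion pipeline) is enough to surface the changed sentence for comparison:

```python
import re

def strip_html(doc: str) -> str:
    """Crudely flatten a smithy.api#documentation HTML string to plain text."""
    text = re.sub(r"<[^>]+>", "", doc)        # drop HTML tags
    return re.sub(r"\s+", " ", text).strip()  # collapse whitespace

# The first sentences of the old and new doc strings from the diff above.
old = "<p>By default, you can create up to 50 Firehose streams per Amazon Web Services\n Region.</p>"
new = "<p>By default, you can create up to 5,000 Firehose streams per Amazon Web Services\n Region.</p>"

print(strip_html(old))
# By default, you can create up to 50 Firehose streams per Amazon Web Services Region.
print(strip_html(new))
# By default, you can create up to 5,000 Firehose streams per Amazon Web Services Region.
```

Because only a documentation trait changed, the regenerated SDK differs only in doc comments; no request or response shapes are affected.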
