+ "smithy.api#documentation": "<p>Creates a Firehose stream.</p>\n <p>By default, you can create up to 5,000 Firehose streams per Amazon Web Services\n Region.</p>\n <p>This is an asynchronous operation that immediately returns. The initial status of the\n Firehose stream is <code>CREATING</code>. After the Firehose stream is created, its status\n is <code>ACTIVE</code> and it now accepts data. If the Firehose stream creation fails, the\n status transitions to <code>CREATING_FAILED</code>. Attempts to send data to a delivery\n stream that is not in the <code>ACTIVE</code> state cause an exception. To check the state\n of a Firehose stream, use <a>DescribeDeliveryStream</a>.</p>\n <p>If the status of a Firehose stream is <code>CREATING_FAILED</code>, this status\n doesn't change, and you can't invoke <code>CreateDeliveryStream</code> again on it.\n However, you can invoke the <a>DeleteDeliveryStream</a> operation to delete\n it.</p>\n <p>A Firehose stream can be configured to receive records directly\n from providers using <a>PutRecord</a> or <a>PutRecordBatch</a>, or it\n can be configured to use an existing Kinesis stream as its source. To specify a Kinesis\n data stream as input, set the <code>DeliveryStreamType</code> parameter to\n <code>KinesisStreamAsSource</code>, and provide the Kinesis stream Amazon Resource Name\n (ARN) and role ARN in the <code>KinesisStreamSourceConfiguration</code>\n parameter.</p>\n <p>To create a Firehose stream with server-side encryption (SSE) enabled, include <a>DeliveryStreamEncryptionConfigurationInput</a> in your request. This is\n optional. 
You can also invoke <a>StartDeliveryStreamEncryption</a> to turn on\n SSE for an existing Firehose stream that doesn't have SSE enabled.</p>\n <p>A Firehose stream is configured with a single destination, such as Amazon Simple\n Storage Service (Amazon S3), Amazon Redshift, Amazon OpenSearch Service, Amazon OpenSearch\n Serverless, Splunk, and any custom HTTP endpoint or HTTP endpoints owned by or supported by\n third-party service providers, including Datadog, Dynatrace, LogicMonitor, MongoDB, New\n Relic, and Sumo Logic. You must specify only one of the following destination configuration\n parameters: <code>ExtendedS3DestinationConfiguration</code>,\n <code>S3DestinationConfiguration</code>,\n <code>ElasticsearchDestinationConfiguration</code>,\n <code>RedshiftDestinationConfiguration</code>, or\n <code>SplunkDestinationConfiguration</code>.</p>\n <p>When you specify <code>S3DestinationConfiguration</code>, you can also provide the\n following optional values: <code>BufferingHints</code>, <code>EncryptionConfiguration</code>, and\n <code>CompressionFormat</code>. By default, if no <code>BufferingHints</code> value is\n provided, Firehose buffers data up to 5 MB or for 5 minutes, whichever\n condition is satisfied first. <code>BufferingHints</code> is a hint, so there are some\n cases where the service cannot adhere to these conditions strictly. For example, record\n boundaries might be such that the size is a little over or under the configured buffering\n size. By default, no encryption is performed. We strongly recommend that you enable\n encryption to ensure secure data storage in Amazon S3.</p>\n <p>A few notes about Amazon Redshift as a destination:</p>\n <ul>\n <li>\n <p>An Amazon Redshift destination requires an S3 bucket as an intermediate location.\n Firehose first delivers data to Amazon S3 and then uses\n <code>COPY</code> syntax to load data into an Amazon Redshift table. 
This is\n specified in the <code>RedshiftDestinationConfiguration.S3Configuration</code>\n parameter.</p>\n </li>\n <li>\n <p>The compression formats <code>SNAPPY</code> or <code>ZIP</code> cannot be\n specified in <code>RedshiftDestinationConfiguration.S3Configuration</code> because\n the Amazon Redshift <code>COPY</code> operation that reads from the S3 bucket doesn't\n support these compression formats.</p>\n </li>\n <li>\n <p>We strongly recommend that you use the user name and password you provide\n exclusively with Firehose, and that the permissions for the account are\n restricted to Amazon Redshift <code>INSERT</code> permissions.</p>\n </li>\n </ul>\n <p>Firehose assumes the IAM role that is configured as part of the\n destination. The role should allow the Firehose principal to assume the role,\n and the role should have permissions that allow the service to deliver the data. For more\n information, see <a href=\"https://docs.aws.amazon.com/firehose/latest/dev/controlling-access.html#using-iam-s3\">Grant Firehose Access to an Amazon S3 Destination</a> in the <i>Amazon Firehose Developer Guide</i>.</p>"
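The documentation string above describes a `CreateDeliveryStream` request with a Kinesis stream source and an S3 destination. A minimal sketch of how such a request could be assembled with boto3 follows; the stream name, account ID, bucket, and role names are placeholders, and the actual API call is left commented out since it requires AWS credentials:

```python
# Sketch of a CreateDeliveryStream request body, assuming a Kinesis stream
# source (DeliveryStreamType=KinesisStreamAsSource) and an extended S3
# destination, as described in the documentation above.
# All ARNs and names below are hypothetical placeholders.
request = {
    "DeliveryStreamName": "example-stream",
    "DeliveryStreamType": "KinesisStreamAsSource",
    "KinesisStreamSourceConfiguration": {
        "KinesisStreamARN": "arn:aws:kinesis:us-east-1:111122223333:stream/source-stream",
        "RoleARN": "arn:aws:iam::111122223333:role/firehose-source-role",
    },
    "ExtendedS3DestinationConfiguration": {
        "RoleARN": "arn:aws:iam::111122223333:role/firehose-delivery-role",
        "BucketARN": "arn:aws:s3:::example-bucket",
        # Optional: buffer up to 5 MB or 300 seconds, whichever comes first
        # (these also happen to be the documented defaults).
        "BufferingHints": {"SizeInMBs": 5, "IntervalInSeconds": 300},
        "CompressionFormat": "GZIP",
    },
}

# With credentials configured, the call would be:
# import boto3
# firehose = boto3.client("firehose")
# firehose.create_delivery_stream(**request)
# Then poll firehose.describe_delivery_stream(DeliveryStreamName="example-stream")
# until DeliveryStreamStatus is ACTIVE (or CREATING_FAILED).
```

Because the operation is asynchronous, production code should poll `DescribeDeliveryStream` until the status leaves `CREATING` before sending records with `PutRecord` or `PutRecordBatch`.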