Merged
manifest.json (2 changes: 1 addition & 1 deletion)
@@ -1,6 +1,6 @@
{
"variables": {
"${LATEST}": "3.325.3"
"${LATEST}": "3.325.5"
},
"endpoints": "https://raw.githubusercontent.com/aws/aws-sdk-php/${LATEST}/src/data/endpoints.json",
"services": {
src/Service/Firehose/CHANGELOG.md (4 changes: 4 additions & 0 deletions)
@@ -2,6 +2,10 @@

## NOT RELEASED

+### Changed
+
+- AWS enhancement: Documentation updates.
+
## 1.3.2

### Changed
@@ -5,7 +5,7 @@
use AsyncAws\Core\Exception\Http\ClientException;

/**
-* Firehose throws this exception when an attempt to put records or to start or stop delivery stream encryption fails.
+* Firehose throws this exception when an attempt to put records or to start or stop Firehose stream encryption fails.
* This happens when the KMS service throws one of the following exception types: `AccessDeniedException`,
* `InvalidStateException`, `DisabledException`, or `NotFoundException`.
*/
@@ -6,7 +6,7 @@

/**
* The service is unavailable. Back off and retry the operation. If you continue to see the exception, throughput limits
-* for the delivery stream may have been exceeded. For more information about limits and how to request an increase, see
+* for the Firehose stream may have been exceeded. For more information about limits and how to request an increase, see
* Amazon Firehose Limits [^1].
*
* [^1]: https://docs.aws.amazon.com/firehose/latest/dev/limits.html
src/Service/Firehose/src/FirehoseClient.php (38 changes: 24 additions & 14 deletions)
@@ -21,21 +21,26 @@
class FirehoseClient extends AbstractApi
{
/**
-* Writes a single data record into an Amazon Firehose delivery stream. To write multiple data records into a delivery
-* stream, use PutRecordBatch. Applications using these operations are referred to as producers.
+* Writes a single data record into an Firehose stream. To write multiple data records into a Firehose stream, use
+* PutRecordBatch. Applications using these operations are referred to as producers.
*
-* By default, each delivery stream can take in up to 2,000 transactions per second, 5,000 records per second, or 5 MB
+* By default, each Firehose stream can take in up to 2,000 transactions per second, 5,000 records per second, or 5 MB
 * per second. If you use PutRecord and PutRecordBatch, the limits are an aggregate across these two operations for each
-* delivery stream. For more information about limits and how to request an increase, see Amazon Firehose Limits [^1].
+* Firehose stream. For more information about limits and how to request an increase, see Amazon Firehose Limits [^1].
*
* Firehose accumulates and publishes a particular metric for a customer account in one minute intervals. It is possible
-* that the bursts of incoming bytes/records ingested to a delivery stream last only for a few seconds. Due to this, the
+* that the bursts of incoming bytes/records ingested to a Firehose stream last only for a few seconds. Due to this, the
* actual spikes in the traffic might not be fully visible in the customer's 1 minute CloudWatch metrics.
*
-* You must specify the name of the delivery stream and the data record when using PutRecord. The data record consists
+* You must specify the name of the Firehose stream and the data record when using PutRecord. The data record consists
* of a data blob that can be up to 1,000 KiB in size, and any kind of data. For example, it can be a segment from a log
* file, geographic location data, website clickstream data, and so on.
*
+* For multi record de-aggregation, you can not put more than 500 records even if the data blob length is less than 1000
+* KiB. If you include more than 500 records, the request succeeds but the record de-aggregation doesn't work as
+* expected and transformation lambda is invoked with the complete base64 encoded data blob instead of de-aggregated
+* base64 decoded records.
*
* Firehose buffers records before delivering them to the destination. To disambiguate the data blobs at the
* destination, a common solution is to use delimiters in the data, such as a newline (`\n`) or some other character
* unique within the data. This allows the consumer application to parse individual data items when reading the data
@@ -45,13 +50,13 @@ class FirehoseClient extends AbstractApi
* applications can use this ID for purposes such as auditability and investigation.
*
* If the `PutRecord` operation throws a `ServiceUnavailableException`, the API is automatically reinvoked (retried) 3
-* times. If the exception persists, it is possible that the throughput limits have been exceeded for the delivery
+* times. If the exception persists, it is possible that the throughput limits have been exceeded for the Firehose
* stream.
*
* Re-invoking the Put API operations (for example, PutRecord and PutRecordBatch) can result in data duplicates. For
* larger data assets, allow for a longer time out before retrying Put API operations.
*
-* Data records sent to Firehose are stored for 24 hours from the time they are added to a delivery stream as it tries
+* Data records sent to Firehose are stored for 24 hours from the time they are added to a Firehose stream as it tries
* to send the records to the destination. If the destination is unreachable for more than 24 hours, the data is no
* longer available.
*
@@ -90,23 +95,28 @@ public function putRecord($input): PutRecordOutput
}
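As a minimal sketch of the producer pattern documented above, assuming the AsyncAws Firehose package is installed and credentials are resolved from the environment; the stream name and payload are hypothetical:

```php
<?php

require __DIR__ . '/vendor/autoload.php';

use AsyncAws\Firehose\FirehoseClient;

// Region and credentials are resolved from the environment by AsyncAws.
$firehose = new FirehoseClient();

// Append a newline delimiter so the consumer can split individual records,
// since Firehose buffers and concatenates records before delivery.
$result = $firehose->putRecord([
    'DeliveryStreamName' => 'my-stream', // hypothetical stream name
    'Record' => [
        'Data' => json_encode(['event' => 'page_view', 'ts' => time()]) . "\n",
    ],
]);

// Each successful put returns a unique record ID that can be logged for auditing.
echo $result->getRecordId(), "\n";
```

This is network-dependent and requires an existing Firehose stream, so it is a sketch rather than a runnable test.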

/**
-* Writes multiple data records into a delivery stream in a single call, which can achieve higher throughput per
-* producer than when writing single records. To write single data records into a delivery stream, use PutRecord.
+* Writes multiple data records into a Firehose stream in a single call, which can achieve higher throughput per
+* producer than when writing single records. To write single data records into a Firehose stream, use PutRecord.
* Applications using these operations are referred to as producers.
*
* Firehose accumulates and publishes a particular metric for a customer account in one minute intervals. It is possible
-* that the bursts of incoming bytes/records ingested to a delivery stream last only for a few seconds. Due to this, the
+* that the bursts of incoming bytes/records ingested to a Firehose stream last only for a few seconds. Due to this, the
* actual spikes in the traffic might not be fully visible in the customer's 1 minute CloudWatch metrics.
*
* For information about service quota, see Amazon Firehose Quota [^1].
*
* Each PutRecordBatch request supports up to 500 records. Each record in the request can be as large as 1,000 KB
* (before base64 encoding), up to a limit of 4 MB for the entire request. These limits cannot be changed.
*
-* You must specify the name of the delivery stream and the data record when using PutRecord. The data record consists
+* You must specify the name of the Firehose stream and the data record when using PutRecord. The data record consists
* of a data blob that can be up to 1,000 KB in size, and any kind of data. For example, it could be a segment from a
* log file, geographic location data, website clickstream data, and so on.
*
+* For multi record de-aggregation, you can not put more than 500 records even if the data blob length is less than 1000
+* KiB. If you include more than 500 records, the request succeeds but the record de-aggregation doesn't work as
+* expected and transformation lambda is invoked with the complete base64 encoded data blob instead of de-aggregated
+* base64 decoded records.
*
* Firehose buffers records before delivering them to the destination. To disambiguate the data blobs at the
* destination, a common solution is to use delimiters in the data, such as a newline (`\n`) or some other character
* unique within the data. This allows the consumer application to parse individual data items when reading the data
@@ -132,12 +142,12 @@ public function putRecord($input): PutRecordOutput
* charges). We recommend that you handle any duplicates at the destination.
*
* If PutRecordBatch throws `ServiceUnavailableException`, the API is automatically reinvoked (retried) 3 times. If the
-* exception persists, it is possible that the throughput limits have been exceeded for the delivery stream.
+* exception persists, it is possible that the throughput limits have been exceeded for the Firehose stream.
*
* Re-invoking the Put API operations (for example, PutRecord and PutRecordBatch) can result in data duplicates. For
* larger data assets, allow for a longer time out before retrying Put API operations.
*
-* Data records sent to Firehose are stored for 24 hours from the time they are added to a delivery stream as it
+* Data records sent to Firehose are stored for 24 hours from the time they are added to a Firehose stream as it
* attempts to send the records to the destination. If the destination is unreachable for more than 24 hours, the data
* is no longer available.
*
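The batch semantics documented above (up to 500 records per request, partial failures reported per entry) can be sketched like this; the stream name is hypothetical, and a real producer would add backoff between retries:

```php
<?php

require __DIR__ . '/vendor/autoload.php';

use AsyncAws\Firehose\FirehoseClient;

$firehose = new FirehoseClient();

// Each batch supports up to 500 records / 4 MB total; delimit each record
// so the consumer can split them after Firehose buffers and concatenates.
$records = [];
foreach (['alpha', 'beta', 'gamma'] as $payload) {
    $records[] = ['Data' => $payload . "\n"];
}

$result = $firehose->putRecordBatch([
    'DeliveryStreamName' => 'my-stream', // hypothetical stream name
    'Records' => $records,
]);

// A non-zero FailedPutCount signals partial failure: response entries keep the
// request order, and failed ones carry an error code instead of a record ID.
if ($result->getFailedPutCount() > 0) {
    $retry = [];
    foreach ($result->getRequestResponses() as $i => $entry) {
        if (null !== $entry->getErrorCode()) {
            $retry[] = $records[$i]; // resend only the failed entries
        }
    }
    // Re-invoke putRecordBatch with $retry after a delay; note that retries
    // can produce duplicates at the destination.
}
```

Resending only the failed entries (rather than the whole batch) limits the duplicates that the docblock warns about.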
src/Service/Firehose/src/Input/PutRecordBatchInput.php (2 changes: 1 addition & 1 deletion)
@@ -11,7 +11,7 @@
final class PutRecordBatchInput extends Input
{
/**
-* The name of the delivery stream.
+* The name of the Firehose stream.
*
* @required
*
src/Service/Firehose/src/Input/PutRecordInput.php (2 changes: 1 addition & 1 deletion)
@@ -11,7 +11,7 @@
final class PutRecordInput extends Input
{
/**
-* The name of the delivery stream.
+* The name of the Firehose stream.
*
* @required
*
@@ -4,7 +4,7 @@

/**
* Contains the result for an individual record from a PutRecordBatch request. If the record is successfully added to
-* your delivery stream, it receives a record ID. If the record fails to be added to your delivery stream, the result
+* your Firehose stream, it receives a record ID. If the record fails to be added to your Firehose stream, the result
* includes an error code and an error message.
*/
final class PutRecordBatchResponseEntry
src/Service/Firehose/src/ValueObject/Record.php (2 changes: 1 addition & 1 deletion)
@@ -5,7 +5,7 @@
use AsyncAws\Core\Exception\InvalidArgument;

/**
-* The unit of data in a delivery stream.
+* The unit of data in a Firehose stream.
*/
final class Record
{
src/Service/Lambda/CHANGELOG.md (4 changes: 4 additions & 0 deletions)
@@ -2,6 +2,10 @@

## NOT RELEASED

+### Changed
+
+- AWS enhancement: Documentation updates.
+
## 2.6.0

### Added
@@ -128,15 +128,25 @@ final class UpdateFunctionConfigurationRequest extends Input
private $deadLetterConfig;

/**
-* The ARN of the Key Management Service (KMS) customer managed key that's used to encrypt your function's environment
-* variables [^1]. When Lambda SnapStart [^2] is activated, Lambda also uses this key is to encrypt your function's
-* snapshot. If you deploy your function using a container image, Lambda also uses this key to encrypt your function
-* when it's deployed. Note that this is not the same key that's used to protect your container image in the Amazon
-* Elastic Container Registry (Amazon ECR). If you don't provide a customer managed key, Lambda uses a default service
-* key.
+* The ARN of the Key Management Service (KMS) customer managed key that's used to encrypt the following resources:
+*
+* - The function's environment variables [^1].
+* - The function's Lambda SnapStart [^2] snapshots.
+* - When used with `SourceKMSKeyArn`, the unzipped version of the .zip deployment package that's used for function
+* invocations. For more information, see Specifying a customer managed key for Lambda [^3].
+* - The optimized version of the container image that's used for function invocations. Note that this is not the same
+* key that's used to protect your container image in the Amazon Elastic Container Registry (Amazon ECR). For more
+* information, see Function lifecycle [^4].
+*
+* If you don't provide a customer managed key, Lambda uses an Amazon Web Services owned key [^5] or an Amazon Web
+* Services managed key [^6].
 *
 * [^1]: https://docs.aws.amazon.com/lambda/latest/dg/configuration-envvars.html#configuration-envvars-encryption
 * [^2]: https://docs.aws.amazon.com/lambda/latest/dg/snapstart-security.html
+* [^3]: https://docs.aws.amazon.com/lambda/latest/dg/encrypt-zip-package.html#enable-zip-custom-encryption
+* [^4]: https://docs.aws.amazon.com/lambda/latest/dg/images-create.html#images-lifecycle
+* [^5]: https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#aws-owned-cmk
+* [^6]: https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#aws-managed-cmk
*
* @var string|null
*/
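The expanded KMS key behavior described in this change can be exercised through the AsyncAws Lambda client as a rough sketch; the function name and key ARN below are hypothetical placeholders:

```php
<?php

require __DIR__ . '/vendor/autoload.php';

use AsyncAws\Lambda\LambdaClient;

$lambda = new LambdaClient();

// Attaching a customer managed key makes Lambda use it for the environment
// variables, SnapStart snapshots, and the optimized container image; omitting
// KMSKeyArn falls back to an AWS owned or AWS managed key.
$config = $lambda->updateFunctionConfiguration([
    'FunctionName' => 'my-function', // hypothetical function name
    'KMSKeyArn' => 'arn:aws:kms:us-east-1:123456789012:key/1234abcd-12ab-34cd-56ef-1234567890ab', // hypothetical ARN
]);

// The returned FunctionConfiguration echoes the configured key.
echo $config->getKmsKeyArn(), "\n";
```

This requires real AWS credentials and an existing function, so it is illustrative only.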
src/Service/Lambda/src/Result/FunctionConfiguration.php (19 changes: 16 additions & 3 deletions)
@@ -152,12 +152,25 @@ class FunctionConfiguration extends Result
private $environment;

/**
-* The KMS key that's used to encrypt the function's environment variables [^1]. When Lambda SnapStart [^2] is
-* activated, this key is also used to encrypt the function's snapshot. This key is returned only if you've configured a
-* customer managed key.
+* The ARN of the Key Management Service (KMS) customer managed key that's used to encrypt the following resources:
+*
+* - The function's environment variables [^1].
+* - The function's Lambda SnapStart [^2] snapshots.
+* - When used with `SourceKMSKeyArn`, the unzipped version of the .zip deployment package that's used for function
+* invocations. For more information, see Specifying a customer managed key for Lambda [^3].
+* - The optimized version of the container image that's used for function invocations. Note that this is not the same
+* key that's used to protect your container image in the Amazon Elastic Container Registry (Amazon ECR). For more
+* information, see Function lifecycle [^4].
+*
+* If you don't provide a customer managed key, Lambda uses an Amazon Web Services owned key [^5] or an Amazon Web
+* Services managed key [^6].
 *
 * [^1]: https://docs.aws.amazon.com/lambda/latest/dg/configuration-envvars.html#configuration-envvars-encryption
 * [^2]: https://docs.aws.amazon.com/lambda/latest/dg/snapstart-security.html
+* [^3]: https://docs.aws.amazon.com/lambda/latest/dg/encrypt-zip-package.html#enable-zip-custom-encryption
+* [^4]: https://docs.aws.amazon.com/lambda/latest/dg/images-create.html#images-lifecycle
+* [^5]: https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#aws-owned-cmk
+* [^6]: https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#aws-managed-cmk
*
* @var string|null
*/
src/Service/Lambda/src/ValueObject/FunctionConfiguration.php (19 changes: 16 additions & 3 deletions)
@@ -135,12 +135,25 @@ final class FunctionConfiguration
private $environment;

/**
-* The KMS key that's used to encrypt the function's environment variables [^1]. When Lambda SnapStart [^2] is
-* activated, this key is also used to encrypt the function's snapshot. This key is returned only if you've configured a
-* customer managed key.
+* The ARN of the Key Management Service (KMS) customer managed key that's used to encrypt the following resources:
+*
+* - The function's environment variables [^1].
+* - The function's Lambda SnapStart [^2] snapshots.
+* - When used with `SourceKMSKeyArn`, the unzipped version of the .zip deployment package that's used for function
+* invocations. For more information, see Specifying a customer managed key for Lambda [^3].
+* - The optimized version of the container image that's used for function invocations. Note that this is not the same
+* key that's used to protect your container image in the Amazon Elastic Container Registry (Amazon ECR). For more
+* information, see Function lifecycle [^4].
+*
+* If you don't provide a customer managed key, Lambda uses an Amazon Web Services owned key [^5] or an Amazon Web
+* Services managed key [^6].
 *
 * [^1]: https://docs.aws.amazon.com/lambda/latest/dg/configuration-envvars.html#configuration-envvars-encryption
 * [^2]: https://docs.aws.amazon.com/lambda/latest/dg/snapstart-security.html
+* [^3]: https://docs.aws.amazon.com/lambda/latest/dg/encrypt-zip-package.html#enable-zip-custom-encryption
+* [^4]: https://docs.aws.amazon.com/lambda/latest/dg/images-create.html#images-lifecycle
+* [^5]: https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#aws-owned-cmk
+* [^6]: https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#aws-managed-cmk
*
* @var string|null
*/