@@ -69,10 +71,14 @@ Segment lets you change these destination settings from the Segment app without
  ## Adding {{ currentIntegration.display_name }} to the integrations object

  To add {{ currentIntegration.display_name }} to the `integrations` JSON object (for example, [to filter data from a specific source](/docs/guides/filtering-data/#filtering-with-the-integrations-object)), use one of the following valid names for this integration:
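The list of valid names sits outside this hunk. As an illustrative sketch only (assuming analytics.js; the `Order Completed` event and its property are made up), filtering an event so it reaches only this integration looks roughly like:

```js
// Hypothetical example: send this event only to the integration named above,
// using one of its valid names in the `integrations` object.
analytics.track('Order Completed', { total: 29.99 }, {
  integrations: {
    'All': false,
    '{{ currentIntegration.display_name }}': true
  }
});
```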
src/config-api/api-design.md (1 addition & 1 deletion)
@@ -43,7 +43,7 @@ You can manage each resource using standard methods:
  | PermissionDenied | 403 Forbidden | 7 | An access token with `write` scope is required for the Create, Update and Delete methods |
  | Not Found | 404 Not Found | 5 | The request or resource could not be found. Either the request method or path is incorrect, or the resource does not exist in this workspace (sometimes because of a typo). |
  | Already Exists | 409 Conflict | 6 | A resource (e.g. source) already exists with the given name |
- | Resource Exhausted | 429 Too Many Requests | 8 | The 60 req / sec rate limit was exhausted |
+ | Resource Exhausted | 429 Too Many Requests | 8 | The 200 req / min rate limit was exhausted |
  | Internal | 500 Internal Server Error | 13 | Segment encountered an error processing the request |
  | Unimplemented | 501 Not Implemented | 12 | The method is not supported or implemented |
  | Unavailable | 503 Service Unavailable | 14 | The API is down |
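As a hedged sketch (not from the Config API docs; the function name and endpoint are placeholders), a client could back off and retry when the 200 req / min limit returns a 429:

```js
// Illustrative only: retry a Config API request with exponential backoff
// whenever the rate limit responds with 429 Too Many Requests.
async function requestWithRetry(url, options = {}, retries = 3) {
  for (let attempt = 0; attempt <= retries; attempt++) {
    const res = await fetch(url, options);
    if (res.status !== 429) return res;
    // Wait 1s, 2s, 4s, ... before trying again.
    await new Promise((resolve) => setTimeout(resolve, 1000 * 2 ** attempt));
  }
  throw new Error('Rate limit still exhausted after retries');
}
```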
src/connections/destinations/add-destination.md (0 additions & 7 deletions)
@@ -88,14 +88,8 @@ Each destination can also have destination settings. These control how Segment t

  ## Connecting one source to multiple instances of a destination

- <!-- LR: 03/04/21 - hiding this for now since it's in limited rollout.
  > note ""
  > Multiple-destination support is available for all Segment customers on all plan tiers.
- -->
-
-
- > info ""
- > Support for connecting to multiple instances of a destination is in public preview. To use this, you must agree to the [(1) Segment First Access](https://segment.com/legal/first-access-beta-preview/) and Beta Terms and Conditions and [(2) Segment Acceptable Use Policy](https://segment.com/legal/acceptable-use-policy/). The feature is being released to different tiers over time. If you see an error message that you can’t connect to multiple instances of the same destination, it is not available yet in your workspace but is coming soon.

  Segment allows you to connect a source to multiple instances of a destination. You can use this to set up a single Segment source that sends data into different instances of your analytics and other tools.

@@ -187,7 +181,6 @@ For the following destinations, a single source can connect to up to 10 instance
src/connections/destinations/catalog/amazon-kinesis-firehose/index.md (73 additions & 63 deletions)
@@ -2,22 +2,23 @@
  rewrite: true
  title: Amazon Kinesis Firehose Destination
  ---
- [Amazon Kinesis Firehose](https://aws.amazon.com/kinesis/data-firehose/)is the easiest way to load streaming data into AWS. It can capture, transform, and load streaming data into Amazon Kinesis Analytics, Amazon S3, Amazon Redshift, and Amazon Elasticsearch Service, enabling near real-time analytics with existing business intelligence tools and dashboards you're already using today. It is a fully managed service that automatically scales to match the throughput of your data and requires no ongoing administration. It can also batch, compress, and encrypt the data before loading it, minimizing the amount of storage used at the destination and increasing security.
+ [Amazon Kinesis Firehose](https://aws.amazon.com/kinesis/data-firehose/) provides a way to load streaming data into AWS. It can capture, transform, and load streaming data into Amazon Kinesis Analytics, Amazon S3, Amazon Redshift, and Amazon Elasticsearch Service, enabling near real-time analytics with the business intelligence tools and dashboards you're already using today. It's a fully managed service that automatically scales to match the throughput of your data and requires no ongoing administration. It can also batch, compress, and encrypt the data before loading it, minimizing the amount of storage used at the destination and increasing security.

  This document was last updated on February 05, 2020. If you notice any gaps, outdated information or simply want to leave some feedback to help us improve our documentation, [let us know](https://segment.com/help/contact)!

  ## Getting Started

  {% include content/connection-modes.md %}

- 1. Create at least one Kinesis Firehose delivery stream. You can follow these [instructions](http://docs.aws.amazon.com/firehose/latest/dev/basic-create.html) to create a new delivery stream.
+ To get started:
+ 1. Create at least one Kinesis Firehose delivery stream. You can follow these [instructions](http://docs.aws.amazon.com/firehose/latest/dev/basic-create.html){:target="_blank"} to create a new delivery stream.
  2. Create an IAM policy.
-    1. Sign in to the [Identity and Access Management (IAM) console](https://console.aws.amazon.com/iam/).
-    2. Follow [these instructions](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_create-console.html#access_policies_create-json-editor) to create an IAM policy on the JSON to allow Segment permission to write to your Kinesis Firehose Stream.
+    1. Sign in to the [Identity and Access Management (IAM) console](https://console.aws.amazon.com/iam/){:target="_blank"}.
+    2. Follow [these instructions](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_create-console.html#access_policies_create-json-editor){:target="_blank"} to create an IAM policy using the JSON editor to allow Segment permission to write to your Kinesis Firehose delivery stream.
     - Use the following template policy in the **Policy Document** field. Be sure to change `{region}`, `{account-id}`, and `{stream-name}` to the applicable values.
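The template policy itself falls outside this hunk. As a rough, hedged sketch (the action names and ARN shape here are assumptions, not the docs' exact template), a policy that lets Segment write to one delivery stream could look like:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "firehose:PutRecord",
        "firehose:PutRecordBatch"
      ],
      "Resource": "arn:aws:firehose:{region}:{account-id}:deliverystream/{stream-name}"
    }
  ]
}
```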
  To begin using the Kinesis Firehose destination, you must first decide which Segment events you would like to route to which Firehose delivery streams. This mapping then needs to be defined in your destination settings.

- Segment `track` events can map based on their **event name**. For example, if you have an event called `User Registered`, and you want these events to be published to a Firehose delivery stream called `new_users`, you would create a row in your destination settings that looks like this:
+ Segment `track` events can map based on their **event name**. For example, if you have an event called `User Registered`, and you want these events to be published to a Firehose delivery stream called `new_users`, create a row in your destination settings that looks like this:

  Any Segment **event type** (for example, `page`, `track`, `identify`, or `screen`) can also be mapped. This enables you to publish all instances of a given Segment event type to a given stream. To do this, create a row with the event type and its corresponding delivery stream:

- Events can be defined **insensitive to case**, so `Page` will be equivalent to `page`. The delivery stream name, however, needs to be formatted exactly as it is on AWS.
+ Events can be defined **insensitive to case**, so `Page` will be equivalent to `page`. The delivery stream name needs to be formatted exactly as it is on AWS.

  If you would like to route all events to a stream, use an `*` as the event name.
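As a quick, hedged illustration (assuming analytics.js; the property is made up), here is a `track` call that the example mapping above would publish to the `new_users` delivery stream:

```js
// With an event-name mapping of `User Registered` -> `new_users`,
// this event would be published to the `new_users` Firehose delivery stream.
analytics.track('User Registered', {
  plan: 'Pro Annual' // hypothetical property; properties pass through unchanged
});
```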
@@ -103,7 +104,7 @@ Let's say you've decided to publish your Segment track events named `User Regist

  The Segment Kinesis destination will issue a `PutRecord` request with the following parameters:

- ```
+ ```js
  firehose.putRecord({
    Record: {
      Data: JSON.stringify(msg) + '\n'

@@ -112,7 +113,7 @@ firehose.putRecord({
  });
  ```

- Segment will append a newline character to each record to allow for easy downstream parsing.
+ Segment appends a newline character to each record to allow for easy downstream parsing.

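As a hedged sketch of what that newline enables (a hypothetical consumer, not part of the docs), a downstream reader can split a delivered batch back into individual Segment messages:

```js
// Illustrative only: split a newline-delimited batch of records
// back into parsed Segment messages.
function parseBatch(buffer) {
  return buffer
    .toString('utf8')
    .split('\n')
    .filter((line) => line.trim().length > 0)
    .map((line) => JSON.parse(line));
}
```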
  ## Group

  Take a look at the [Group method](https://segment.com/docs/connections/spec/group/) to understand what it does. An example group call is shown below:
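The example call itself sits outside this hunk; a generic analytics.js `group` call (illustrative only, with made-up values) looks roughly like:

```js
// Hypothetical group call: associates the current user with an account/group.
analytics.group('0e8c78ea9d97a7b8185e8632', {
  name: 'Initech',
  industry: 'Technology',
  employees: 329
});
```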
  If you have multiple sources using Kinesis/Firehose, you have two options:

  #### Attach multiple sources to your IAM role

- Find the IAM role you created for this destination in the AWS Console in Services > IAM > Roles. Click on the role, and navigate to the **Trust Relationships** tab. Click **Edit trust relationship**. You should see a snippet that looks something like this:
+ To attach multiple sources to your IAM role:
+ 1. Find the IAM role you created for this destination in the AWS Console in **Services > IAM > Roles**.
+ 2. Select the role and navigate to the **Trust Relationships** tab.
+ 3. Click **Edit trust relationship**. You should see a snippet that looks something like this:

+ ```json
+ {
+   "Version": "2012-10-17",
+   "Statement": [
+     {
+       "Effect": "Allow",
+       "Principal": {
+         "AWS": "arn:aws:iam::595280932656:root"
+       },
+       "Action": "sts:AssumeRole",
+       "Condition": {
+         "StringEquals": {
+           "sts:ExternalId": "YOUR_SEGMENT_SOURCE_ID"
+         }
+       }
+     }
+   ]
+ }
+ ```

- Replace that snippet with the following, and replace the contents of the array with all of your source IDs.
+ 4. Replace that snippet with the following, and replace the contents of the array with all of your source IDs.
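The replacement snippet isn't captured in this hunk. As a hedged sketch of what step 4 describes (the exact shape in the docs may differ), the `Condition` block in the trust policy above would carry an array of source IDs instead of a single value:

```json
"Condition": {
  "StringEquals": {
    "sts:ExternalId": ["YOUR_SEGMENT_SOURCE_ID_1", "YOUR_SEGMENT_SOURCE_ID_2"]
  }
}
```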
- If you have so many sources using Kinesis that it is impractical to attach all of their IDs to your IAM role, you can set a single ID to use instead. **This approach requires that you securely store a secret value, so we recommend that you use the method above if at all possible.**
+ If you have so many sources using Kinesis that it's impractical to attach all of their IDs to your IAM role, you can set a single ID to use instead.

- To set this value, go to the Kinesis Firehose destination settings from each of your Segment sources and set the **Secret ID** to a value of your choosing. This value is a secret and should be treated as sensitively as a password. Once all of your sources have been updated to use this value, find the IAM role you created for this destination in the AWS Console in Services > IAM > Roles. Click on the role, and navigate to the **Trust Relationships** tab. Click **Edit trust relationship**. You should see a snippet that looks something like this:
+ To set this value for a single Secret ID:
+ 1. Go to the Kinesis Firehose destination settings for each of your Segment sources.
+ 2. Click **Secret ID** and enter your Workspace ID.
+    * **NOTE:** For security purposes, Segment recommends that you use your Segment Workspace ID as your Secret ID. Using a Secret ID other than your Workspace ID leaves you susceptible to attacks. You can find your Workspace ID under **Settings > Workspace Settings > ID** in the Segment dashboard.
+ 3. Once all of your sources are updated to use this value, find the IAM role you created for this destination in the AWS Console in **Services > IAM > Roles**.
+ 4. Select the role and navigate to the **Trust Relationships** tab.
+ 5. Click **Edit trust relationship**. You should see a snippet that looks something like this:

+ ```json
+ {
+   "Version": "2012-10-17",
+   "Statement": [
+     {
+       "Effect": "Allow",
+       "Principal": {
+         "AWS": "arn:aws:iam::595280932656:root"
+       },
+       "Action": "sts:AssumeRole",
+       "Condition": {
+         "StringEquals": {
+           "sts:ExternalId": "YOUR_SEGMENT_SOURCE_ID"
+         }
+       }
+     }
+   ]
+ }
+ ```

- Replace your source ID (found at "YOUR_SEGMENT_SOURCE_ID") with your secret ID.
+ 6. Replace the value of `sts:ExternalId` (`YOUR_SEGMENT_SOURCE_ID`) with the Secret ID / Workspace ID value from the previous step.