Once you’ve added a model, you need to add a destination.

> If you'd like to request Segment to add a particular destination, please note it on the [feedback form](https://airtable.com/shriQgvkRpBCDN955){:target="_blank"}.

To add your first destination:
1. Navigate to **Connections > Sources** and select the **Reverse ETL** tab.
2. Click **Add Destination**.
3. Select the destination you want to connect to.
4. Select the source you want to connect the destination to.

To edit your mapping:
2. Select the destination with the mapping you want to edit.
3. Select the three dots (**...**) and click **Edit mapping**. If you want to delete your mapping, select **Delete**.

## Data handling
Segment-owned infrastructure stores your data for no more than 14 days; after that, the data is deleted. This means that some data in your sync history, such as query results, detailed error messages, and logs, will be unavailable after 14 days.

You can handle deletion of your data by deleting the data from your warehouse, and Segment will pick up that deletion on your next sync.
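
For example, removing a row from the table your model selects from causes the next sync to emit a corresponding deletion. A minimal sketch, assuming a hypothetical `analytics.active_users` table backing the model:

```sql
-- Hypothetical model source table: once this row is gone, the next sync
-- picks up the deletion for its primary key.
DELETE FROM analytics.active_users
WHERE user_id = 'user_123';
```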

> info ""
> All customer data is encrypted at all times.

## Record diffing
The first time you run a query, Segment stores the unique identifier column or primary key (for example, `segment_id`) and a small checksum for every row in the customer data model within your warehouse. Segment doesn’t duplicate all column values for every row.

On subsequent runs of the query, Segment first performs that same checksumming operation for each row in the new result and uses a `JOIN` with the stored checksum table to create the list of new, updated, and deleted rows and to update the checksum table. This diffing operation is performed entirely in your data warehouse, which ensures that Segment doesn’t ingest data unnecessarily.
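
This is not Segment’s actual implementation, but a minimal sketch of the idea. The state table name (`segment_state.row_checksums`), the model columns, and the checksum expression are all illustrative, and function names like `MD5` and `CONCAT_WS` vary by warehouse:

```sql
-- Rows produced by the model query, reduced to a key plus a checksum.
WITH current_rows AS (
    SELECT
        segment_id,
        MD5(CONCAT_WS('|', email, plan, last_seen)) AS checksum  -- illustrative columns
    FROM analytics.active_users
)
-- New or updated rows: missing from, or differing with, the stored checksums.
SELECT c.segment_id,
       CASE WHEN s.segment_id IS NULL THEN 'new' ELSE 'updated' END AS change_type
FROM current_rows c
LEFT JOIN segment_state.row_checksums s ON s.segment_id = c.segment_id
WHERE s.segment_id IS NULL OR s.checksum <> c.checksum

UNION ALL

-- Deleted rows: present in the stored checksums but absent from the new result.
SELECT s.segment_id,
       'deleted' AS change_type
FROM segment_state.row_checksums s
LEFT JOIN current_rows c ON c.segment_id = s.segment_id
WHERE c.segment_id IS NULL;
```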

If you want to resync individual rows (or all rows) whether they’ve changed or not, delete the corresponding rows in the stored checksum table and Segment handles the rest.
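
Continuing the sketch above (again, the state table name is illustrative), forcing a resync of two specific rows would look something like:

```sql
-- Removing stored checksums makes the next sync treat these rows as new.
DELETE FROM segment_state.row_checksums
WHERE segment_id IN ('user_123', 'user_456');

-- Or clear the table entirely to resync every row:
-- DELETE FROM segment_state.row_checksums;
```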

## Segment Connections destination
If you don’t see your destination listed in the [Reverse ETL catalog], use the [Segment Connections destination](/docs/connections/destinations/catalog/actions-segment/) to send data from your Reverse ETL warehouse to other destinations listed in the destinations catalog.

The Segment Connections destination enables you to mold data extracted from your warehouse into Segment Spec API calls that are then processed by [Segment’s HTTP Tracking API](/docs/connections/sources/catalog/libraries/server/http-api/). The Segment HTTP Tracking API lets you record analytics data from any website or application. The requests hit Segment’s servers, and then Segment routes your data to any destination you want.

> info ""
> If you use the Segment Connections destination, the destination sends data to Segment’s tracking API. This means that new users count as new MTUs and each call counts as an API call. This affects your Reverse ETL usage limits and your Segment costs, as it can drive overages.

## Limits
To provide consistent performance and reliability at scale, Segment enforces default usage and rate limits.

### Usage limits
Reverse ETL usage limits are measured by the number of records processed to each destination, including both successful and failed records. For example, if you processed 50k records to Braze and 50k records to Mixpanel, your total usage is 100k records.

Your plan determines how many Reverse ETL records you can process in one monthly billing cycle. When your limit is reached before the end of your billing period, your syncs will pause and then resume on your next billing cycle.

Plan | Number of Reverse ETL records you can process to each destination per month
---- | ----------------------------------------------------------------------------
Business | 50 x the number of [MTUs](/docs/guides/usage-and-billing/mtus-and-throughput/#what-is-an-mtu) <br>or .25 x the number of monthly API calls

If you’re on a Teams or Business plan, contact your sales representative to upgrade your plan and increase the number of processed Reverse ETL records. If you're on a Free plan, upgrade to the Teams plan in the Segment app.

To see how many records you’ve processed using Reverse ETL, navigate to **Settings > Usage & billing** and select the **Reverse ETL** tab.

### Configuration limits

Name | Details | Limit
--------- | ------- | ------
Model query length | The maximum length for the model SQL query. | 131,072 characters
Model identifier column name length | The maximum length for the ID column name. | 191 characters
Model timestamp column name length | The maximum length for the timestamp column name. | 191 characters
Sync frequency | The shortest possible duration Segment allows between syncs. | 15 minutes

### Extract limits
The extract phase is the time spent connecting to your database, executing the model query, updating internal state tables, and staging the extracted records for loading. There is a 14-day data retention period to support internal disaster recovery and debugging as needed.

Name | Details | Limit
----- | ------- | ------
Record count | The maximum number of records a single sync will process. Note: this is the number of records extracted from the warehouse, not the limit for the number of records loaded to the destination (for example, new/updated/deleted records). | 30 million records
Column count | The maximum number of columns a single sync will process. | 512 columns
Column name length | The maximum length of a record column name. | 128 characters
Record JSON Length | The maximum size for a record when converted to JSON (some of this limit is used by Segment). | 512 KiB
Column JSON Length | The maximum size of any single column value. | 128 KiB