src/connections/storage/warehouses/redshift-tuning.md (+6 -3)
@@ -15,7 +15,10 @@ As your data volume grows and your team writes more queries, you might be runnin
To check whether you're approaching your storage limit, run the following query. It returns the percentage of storage used in your cluster. Segment recommends that you stay below 75-80% of your storage capacity; if you approach that limit, consider adding more nodes to your cluster.
```sql
SELECT sum(pct_used)
FROM svv_table_info;
```
[Learn how to resize your cluster.](http://docs.aws.amazon.com/redshift/latest/mgmt/rs-resize-tutorial.html)
@@ -51,7 +54,7 @@ If you have multiple ETL processes loading into your warehouse at the same time,
If you're a Segment Business Tier customer, you can schedule your sync times under Warehouses Settings.
You also might want to take advantage of Redshift's [Workload Management](http://docs.aws.amazon.com/redshift/latest/dg/c_workload_mngmt_classification.html), which helps ensure that fast-running queries won't get stuck behind long ones.
@@ -70,7 +73,7 @@ Segment's initial recommendation is for 2 WLM queues:
2. leave the default queue with a concurrency of `5`
Generally, Segment is responsible for most writes in the databases Segment connects to, so having a higher concurrency allows Segment to write as fast as possible. If you're also using the same database for your own ETL process, you may want to use the same concurrency for both groups. You may even require additional queues if you have other applications writing to the database.
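If you want to verify how the queues are configured once they're in place, you can inspect Redshift's WLM system tables from SQL. Here's a minimal sketch; `stv_wlm_service_class_config` is a standard Redshift system table, and user-defined queues typically start at service class 6 (exact columns can vary by Redshift version):

```sql
-- List user-defined WLM queues with their concurrency settings.
SELECT service_class,
       num_query_tasks,    -- the queue's concurrency
       query_working_mem,  -- memory allocated per slot
       name
FROM stv_wlm_service_class_config
WHERE service_class >= 6
ORDER BY service_class;
```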
Generating a CSV can take a substantial amount of time for large audiences (around 30 seconds for a formatted CSV with 1 million rows). For CSVs expected to take over 20 seconds, Segment displays an estimated generation time. After you generate the CSV file, leave the modal window open while Segment creates the file. (If the audience recalculates between when you click Generate and when you download the file, you might want to regenerate the file. The CSV is a snapshot from when you clicked Generate, and could be outdated.)
src/getting-started/01-what-is-segment.md (+2 -2)
@@ -7,7 +7,7 @@ In a nutshell, the Segment libraries ([Sources](/docs/connections/sources/catalo
## Overview
[Segment Spec methods](/docs/connections/spec/) are how you collect interaction data from your interfaces, and the [Sources](/docs/connections/sources/) are what you package with your interfaces to collect and route the data.
@@ -71,7 +71,7 @@ Segment maintains a catalog of destinations where you can send your data.
<!--TODO: big list o' destinations image (programmatically update?) should go here-->
src/getting-started/02-simple-install.md (+3 -3)
@@ -51,7 +51,7 @@ Make note of or write down your write key, as you'll need it in the next steps.
Any time you change a library's settings in the Segment App, the write key regenerates.
> info ""
> [Cloud-sources](/docs/connections/sources/about-cloud-sources/) do not have write keys, as they use a token or key from your account with that service. Cloud-sources have other considerations and aren't part of this tutorial.
@@ -388,7 +388,7 @@ Once you've set up your Segment library, and instrumented at least one call, you
The Source Debugger is a real-time tool that helps you confirm that API calls made from your website, mobile app, or servers arrive at your Segment Source, so you can troubleshoot quickly without having to wait for data processing.

The Debugger is separate from your workspace's data pipeline, and is not an exhaustive view of all the events ever sent to your Segment workspace. The Debugger only shows a sample of the events that the Source receives in real time, with a cap of 500 events. The Debugger is a great way to test specific parts of your implementation to validate that events are being fired successfully and arriving at your Source.
@@ -399,7 +399,7 @@ The Debugger shows a live stream of sampled events arriving at the Source, but y
You can search on any information you know is available in an event payload to filter the Debugger and show only matching payloads. You can also use advanced search options to limit the results to a specific event.
src/guides/how-to-guides/automated-multichannel-reengagement.md (+4 -4)
@@ -23,7 +23,7 @@ Before we proceed, it's important to register for these tools and enable them on
## Set it up
When you send tracking data from your app or website to Segment, Segment will send the same data to all of your tools. Segment also collects key messaging events like Push Notification Opened and Email Opened from Braze and Customer.io, respectively, and sends them to your other tools. By defining cohorts based on these events, you can create dynamic campaign audiences that customers enter and exit automatically.
@@ -33,21 +33,21 @@ In each of your destinations—Braze, Facebook, Customer.io, AdRoll—you can cr
In Braze, create a segment of customers who added a product to their cart, but did not check out. The segment definition, in this case, should be people who have performed `Product Added`, but have not performed `Order Completed`. Send a push notification to these customers with a message that the cart was abandoned and that they can complete the transaction with a 10% coupon (or whatever incentive you choose).
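If you also want to sanity-check this cohort from your Segment warehouse, here's a minimal sketch; the `production` schema name is illustrative, and it assumes Segment's usual one-table-per-event warehouse layout:

```sql
-- Users who added a product to their cart but never completed an order.
SELECT DISTINCT pa.user_id
FROM production.product_added pa
LEFT JOIN production.order_completed oc
  ON oc.user_id = pa.user_id
 AND oc.sent_at >= pa.sent_at
WHERE oc.user_id IS NULL;
```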
## 2nd Line of Defense: The Email Reminder
Because Segment automatically collects second-party data from Braze, you now also have push notification event data, like `Push Notification Opened` and `Push Notification Received` in Segment. You can use the `properties` on each of these events to define a property called `campaign_name` so you can tie these activities to a given campaign.
This is helpful because now you can define segments in Customer.io for customers who have triggered `Push Notification Received`, but not `Push Notification Opened`. You've now automated the process of targeting customers who don't open your push notifications. From here, you can create a campaign in Customer.io that sends an email to those people asking them to check their push notifications and offering them a coupon to complete their order.
## 3rd Line of Defense: Paid Advertising
Since Segment collects email event data, like `Email Opened`, from Customer.io, you can similarly create segments in Facebook Ads and AdRoll for when customers don't open your email. Create a segment where users have an `Email Delivered` event, but no `Email Opened` event. When users meet these criteria, they'll get automatically added to your retargeting campaigns. You can then serve them custom creatives about them neglecting to open your emails and, again, perhaps offer them a coupon to complete the transaction.
When users do not open an activation email, we can seamlessly add them to a specific retargeting campaign that contains messaging to remind them to activate.
| Owned (domain, app) | First-party data sources (on-page or in-app analytics) |
| Owned (email, push notifications) | Second-party data sources |
| Earned (blogs, PR, partners, news) | UTM params, deep links on mobile |
| Paid acquisition | UTM params, deep links on mobile |
"Owned" marketing encompasses all activities you have full control over. It can be further split into first- and second-party data. First-party data is customer data generated on your site or in your app. Second-party data is customer data generated when your customers interact with your email or push notifications (e.g. "Email Opened", "Push Notification Received").
@@ -81,7 +88,7 @@ Add the complete URL as the `src` in the image tag.
UTM parameters are query strings added to the end of a URL. When someone clicks a link that includes them, they let the domain owner track where incoming traffic comes from and understand which aspects of their marketing campaigns are driving traffic.
UTM parameters are only used when linking to your site from outside of your domain. When a visitor arrives to your site using a link containing UTM parameters, Segment's client-side analytics.js library will automatically parse the URL's query strings, and store them within the `context` object as outlined [here](https://segment.com/docs/connections/spec/common/#context-fields-automatically-collected). These parameters do not persist to subsequent calls unless you pass them explicitly.
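Once those calls reach your warehouse, Segment flattens the `context.campaign` fields into columns such as `context_campaign_source`, so you can query attribution directly. A minimal sketch, with an illustrative `site_js` schema name:

```sql
-- Page views by campaign source and medium over the last 30 days.
SELECT context_campaign_source AS utm_source,
       context_campaign_medium AS utm_medium,
       count(*)                AS pageviews
FROM site_js.pages
WHERE sent_at > dateadd(day, -30, getdate())
  AND context_campaign_source IS NOT NULL
GROUP BY 1, 2
ORDER BY pageviews DESC;
```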
@@ -133,7 +140,15 @@ Your UTM parameters would match a pattern such as
An example would be a National Toast Day campaign. This campaign would include emails, paid acquisition (via AdRoll and Facebook Ads), organic social (Twitter), and promotional content on partners' blogs.
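For instance, a link in that campaign's email might carry parameters like these (all values illustrative):

```
https://www.toastmates.com/?utm_source=customerio&utm_medium=email&utm_campaign=national-toast-day&utm_content=hero-cta
```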
A consistent UTM naming convention simplifies downstream analysis and makes it easier to query across dimensions, such as which medium or source performed best within a campaign, or which placement of a display ad led to the most conversions.
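For example, with the convention above you could compare mediums within the Toast Day campaign straight from the warehouse (schema and campaign names illustrative):

```sql
-- Which medium drove the most unique visitors for this campaign?
SELECT context_campaign_medium AS utm_medium,
       count(DISTINCT anonymous_id) AS visitors
FROM site_js.pages
WHERE context_campaign_name = 'national-toast-day'
GROUP BY 1
ORDER BY visitors DESC;
```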
src/guides/how-to-guides/dynamic-coupon-program.md (+9 -9)
@@ -38,7 +38,7 @@ We'll conduct a split test (half of the cohort will represent the control group
First, register for an account with Customer.io and Amplitude. Then, enable Customer.io and enable Amplitude on your Segment project. Finally, go into your Customer.io account and enable "sending data to Segment":
[You can find those destination settings in Customer.io here.](https://fly.customer.io/account/integration_settings)
@@ -50,13 +50,13 @@ When everything is enabled, customer event data such as `Order Completed` and `P
Now we define the specific cohort in Customer.io as per our conditions listed earlier: someone who spends over $20 per order and shops over twice a month. In Customer.io, go to "Segments" and "Create Segment":
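If you'd like to estimate the size of this cohort from your Segment warehouse first, here's a minimal sketch; the `production` schema and the `revenue` property are illustrative:

```sql
-- Users with more than two orders in the last 30 days, averaging over $20.
SELECT user_id
FROM production.order_completed
WHERE sent_at > dateadd(day, -30, getdate())
GROUP BY user_id
HAVING count(*) > 2
   AND avg(revenue) > 20;
```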
After this cohort is created, when a customer makes their third purchase in a month and it's over $20, they will be added to this segment.
Next, we'll create a "segment trigger campaign", where Customer.io will send a message the first time someone enters a segment. The segment in this case will be the one we just created: Coupon Loyalty Experiment.
Save the changes and then enable the campaign!
@@ -71,29 +71,29 @@ In Amplitude, we can create a funnel that compares the two cohorts—one who rec
First, let's define a behavioral cohort with the conditions of being loyal customers so we can use it when we analyze the conversion funnel:
We'll also create a second, otherwise identical cohort; the only difference is that these customers did not receive the coupon email. We need this cohort so we can create the conversion funnel for the control group.
After we've created these two cohorts, we'll create two funnel charts. The first funnel will look at the control group. The second funnel will look at the group that received the coupon email.
Resulting in:
We can see that the control group that did not receive the email for the coupon resulted in 233 people visiting the store, with 66 conversions.
The funnel for the group who did receive the emails can be created with these parameters:
Resulting in:
The email itself drove 168 customers to the store, which also saw higher conversions to `Product Added` and ultimately `Order Completed`.
src/guides/how-to-guides/forecast-with-sql.md (+4 -4)
@@ -91,13 +91,13 @@ To retrieve a table with the right columns for analysis, let's use the following SQL
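A minimal sketch of such a query, assuming Segment's one-table-per-event warehouse layout (schema, table, and column names illustrative):

```sql
-- One row per user: email, purchase count, weeks since last purchase,
-- and average order value over a one-year window.
SELECT u.email,
       count(*)                                  AS total_purchases,
       datediff(week, max(o.sent_at), getdate()) AS weeks_since_last_purchase,
       avg(o.revenue)                            AS avg_order_value
FROM production.order_completed o
JOIN production.users u
  ON u.id = o.user_id
WHERE o.sent_at > dateadd(week, -52, getdate())
GROUP BY u.email;
```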
This returns a table where each row is a unique user, and the columns are email, number of purchases within the time window, number of discrete time units since the last purchase, and average order value.
> Here is a screenshot of the first twelve rows returned from the query in Mode Analytics.
Export this data to a CSV, then copy and paste it into the first sheet of the Google Sheet, where the blue type is shown in the screenshot below:
Also be sure to add the total time in days in cell B6. This is important as the second sheet uses this time duration for calculating net present value of future payments.
@@ -107,7 +107,7 @@ After you paste in the CSV from the table into the first tab of the sheet, the n
You can export your Google Sheet as an Excel document. Then, use Excel Solver to minimize the log-likelihood number in cell B5, while keeping the parameters from B1:B4 greater than 0.0001.
After Solver runs, cells B1:B4 will be updated to represent the model's estimates. Now you can hard-code those values back into the Google Sheet. The next sheet relies on these model estimates to calculate the expected purchases per customer.
@@ -184,7 +184,7 @@ Below is a simple query to get a table of a user's actions in rows. Just replace
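A minimal sketch of that query, assuming Segment's standard `tracks` table (the `production` schema name is illustrative):

```sql
-- All tracked events for one user, in chronological order.
SELECT event, sent_at
FROM production.tracks
WHERE user_id = '46X8VF96G6'
ORDER BY sent_at;
```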
The above query, for the user whose `user_id` is `"46X8VF96G6"`, returns the table below:
At Toastmates, most of the customers with the highest forward-looking expected LTV share one thing in common: they average two orders per month with an average purchase size of $20.