
Commit 4af9217

Merge pull request #4327 from segmentio/DOC-521-IG
Remove Compose docs [DOC-521]
2 parents 6447166 + b3968c2 commit 4af9217

File tree

1 file changed: +4 −131 lines
  • src/connections/storage/catalog/postgres


src/connections/storage/catalog/postgres/index.md

Lines changed: 4 additions & 131 deletions
@@ -15,10 +15,8 @@ PostgreSQL is ACID-compliant and transactional. PostgreSQL has updatable views a
 Segment supports the following Postgres database providers:
 - [Heroku](#heroku-postgres)
 - [RDS](#rds-postgres)
-- [Compose](#compose-postgres)*
 
-> note "Deprecation of Compose"
-> On March 1, 2023, [Compose will be deprecated](https://help.compose.com/docs/compose-deprecation){:target="_blank"}. After this date, all databases on Compose will be disabled and deprovisioned. If you need help selecting another Segment-supported Postgres database provider, [contact Segment Support](https://segment.com/help/contact){:target="_blank"}.
+Segment supported a third Postgres provider, Compose, until Compose [was deprecated on March 1, 2023](https://help.compose.com/docs/compose-deprecation){:target="_blank"}. To continue sending your Segment data to a Postgres destination, consider using either [Heroku Postgres](#heroku-postgres) or [Amazon's Relational Database Service](#rds-postgres).
 
 > warning ""
 > Segment only supports these Postgres databases. Postgres databases from other providers aren't guaranteed to work. For questions or concerns about Segment-supported Postgres providers, [contact Segment Support](https://segment.com/help/contact){:target="_blank"}.
@@ -87,136 +85,11 @@ To create a new inbound rule:
 
 8. Click **Save rules**.
 
-## Compose Postgres
-
-> warning "Deprecation of Compose"
-> [Compose will be deprecated](https://help.compose.com/docs/compose-deprecation){:target="_blank"} on March 1, 2023. After this date, all databases on Compose will be disabled and deprovisioned. To continue sending your Segment data to a Postgres destination, consider using either [Heroku Postgres](#heroku-postgres) or [Amazon's Relational Database Service](#rds-postgres).
-
-Compose is the first DBaaS (Database as a Service) of its kind, geared at helping developers spend more time building their applications rather than wrestling with database provisioning and maintenance. Compose provides easy-to-deploy and scalable data stores and services in many flavors: PostgreSQL, MongoDB, RethinkDB, Elasticsearch, Redis, etcd, and RabbitMQ.
-
-Using Compose, companies can deploy databases instantly with backups, monitoring, performance tuning, and a full suite of management tools. Compose Enterprise brings all this to the corporate VPC (virtual private cloud).
-
-Compose uses Segment to hook together web analytics, email, and social tracking, and manages its Segment warehouse on PostgreSQL. Compose is pleased to harness the power of Postgres to query Segment data and create custom reports.
-1. Set up PostgreSQL
-
-If you don't yet have an account with Compose, [sign up](https://www.compose.com/signup){:target="_blank"} and select the PostgreSQL database to get started.
-
-If you're already on Compose but don't yet have a PostgreSQL instance, you can add one from the Deployments page in the management console by clicking "Create Deployment" and selecting PostgreSQL, or just [add a PostgreSQL deployment](https://help.compose.com/docs/postgresql-on-compose){:target="_blank"} to your account.
-
-![](images/compose1.png)
-
-Once your PostgreSQL deployment is spun up, you may want to [create a user](https://www.compose.io/articles/compose-postgresql-making-users-and-more/){:target="_blank"} to be the owner of the database you'll use for Segment. An admin user role is generated when your deployment is initialized, but this user has full privileges for the deployment, so you may want to create additional users with more specific privileges. You may also want to manually scale up your deployment for the initial load of Segment data, since it loads the past two months of data by default; you can scale it back down according to your data needs after the initial load. The management console lets you perform these tasks, monitor your deployments, configure security settings, manage backups, and more.
-
-Now, all you need to do is create a database where your Segment data will live. You can create a database directly from the Data Browser interface in the Compose management console, with a tool such as the [pgAdmin GUI](http://www.pgadmin.org/download/){:target="_blank"}, or programmatically using code you've written. For simplicity, this database is named "segment" and has the "compose" user as its owner. Here is the SQL statement to create the database for Segment data, using the default PostgreSQL arguments (set yours according to your requirements):
-```sql
-CREATE DATABASE segment
-    WITH OWNER = compose
-    ENCODING = 'SQL_ASCII'
-    TABLESPACE = pg_default
-    LC_COLLATE = 'C'
-    LC_CTYPE = 'C'
-    CONNECTION LIMIT = -1;
-```
-
-And that's it! You don't even need to create any tables - Segment will handle that for you.
-2. Browse & Query
-
-And now the fun part: browsing and querying the data!
-
-You'll notice that a new schema has been created in your PostgreSQL database for each source that was synced. Under the production source schema, a number of tables were created. You can see the tables in the Compose data browser "Tables" view:
-
-![](images/compose1.png)
-
-When the Segment data is loaded into the PostgreSQL database, several tables are created by default: `aliases`, `groups`, `identifies`, `pages`, `screens`, and `tracks`. You might also have `accounts` and `users` tables if you use unique calls for groups and for identifies. To learn more about these default tables and their fields, see the [Segment schema documentation](/docs/connections/storage/warehouses/schema/).
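If you want to see exactly which tables a sync created, one option is to query Postgres's `information_schema`. This is a sketch: the `production` schema name comes from the example source on this page, so substitute the schema for your own source.

```sql
-- List every table Segment created under a given source schema.
-- 'production' is the example schema name used on this page;
-- replace it with the schema for your own source.
SELECT table_name
FROM information_schema.tables
WHERE table_schema = 'production'
ORDER BY table_name;
```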
-All of the other tables will be event-specific, named according to the event names and properties you use in your `track` calls. The number of tables depends on the number of unique events you're tracking. For example, Compose makes a track call when customers view their deployments, such as:
-
-```js
-analytics.track('deployments_show', {
-  deployment_name: 'heroic-rabbitmq-62',
-  deployment_type: 'RabbitMQ'
-});
-```
-In the Postgres Segment database, there will then be a table named "deployments_show", which you can query to see how many times a given deployment was viewed:
-
-```sql
-SELECT COUNT(id)
--- Don't forget the schema: FROM <source>.<table>
-FROM production.deployments_show
-WHERE deployment_name = 'heroic-rabbitmq-62';
-```
-
-The result is 18 views in the past two months, by a particular database user. To verify, join to the `identifies` table, which contains user data, through the `user_id` foreign key:
-
-```sql
-SELECT DISTINCT i.name
-FROM production.identifies i
-JOIN production.deployments_show ds ON ds.user_id = i.user_id
-WHERE ds.deployment_name = 'heroic-rabbitmq-62';
-```
-
-A more interesting query for this, however, might be to see how many deployments were created in November using the "deployments_new" event:
-
-```sql
-SELECT COUNT(DISTINCT id)
-FROM production.deployments_new
-WHERE original_timestamp >= '2015-11-01'
-AND original_timestamp < '2015-12-01';
-```
-
-This way, you can create custom reports for analysis of the tracking data, using SQL as simple or as complex as needed, to gain insights that Segment-integrated tracking tools may not easily surface.
-
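As an illustration of such a report, a monthly breakdown might look like the sketch below. It reuses the hypothetical `production.deployments_new` table and `deployment_type` property from the examples above; substitute your own schema, table, and property names.

```sql
-- Monthly deployment counts by type, built from the example
-- deployments_new event table used earlier on this page.
SELECT date_trunc('month', original_timestamp) AS month,
       deployment_type,
       COUNT(DISTINCT id) AS deployments
FROM production.deployments_new
GROUP BY 1, 2
ORDER BY 1, 2;
```

`date_trunc` buckets the event timestamps by calendar month, so one `GROUP BY` covers any date range without hand-writing per-month `WHERE` clauses.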
-### Database set up - Service user and permissions
-
-Once you have your Postgres database running, you should do a few more things before connecting the database to Segment.
-
-Your database probably has an `admin` username and password. While you _could_ give these credentials directly to Segment, for security purposes you should instead create a separate "service" user. Do this for any other third parties who connect to your database. This helps isolate access, and makes it easier to audit which accounts have done what.
-
-To use the SQL commands here, [connect to your database using a command line tool](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.IAMDBAuth.Connecting.AWSCLI.PostgreSQL.html){:target="_blank"} such as the AWS CLI or the psql client.
-
-```sql
--- This command creates a user named "segment" that Segment will use when connecting to your Postgres database.
-CREATE USER segment WITH PASSWORD '<enter password here>';
-
--- This allows the "segment" user to create new schemas and temporary tables on the specified database.
-GRANT CREATE, TEMPORARY ON DATABASE <enter database name here> TO segment;
-```
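To sanity-check the grant, you can ask Postgres whether the new user holds the expected database privileges. This is a sketch using Postgres's built-in `has_database_privilege` function; `your_database` is a placeholder for the database name used in the `GRANT` statement.

```sql
-- Returns true/false for each privilege granted to the "segment" user.
-- Replace 'your_database' with the database name used in the GRANT above.
SELECT has_database_privilege('segment', 'your_database', 'CREATE')    AS can_create,
       has_database_privilege('segment', 'your_database', 'TEMPORARY') AS can_temp;
```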
-
-### Connect with Segment
-
-1. Open Segment in another browser window or tab.
-
-Visit the [Segment Workspaces screen](http://segment.com/workspaces){:target="_blank"} and click the workspace you'd like the database to be associated with.
-
-2. Click **Add Destination**.
-
-In the workspace, you can find this button beside Destinations.
-
-3. Either select the "Warehouses" category from the left-hand sidebar, or use the search field and look for "Postgres".
-
-4. Configure the database connection.
-
-Select the Postgres database. Then, copy the relevant settings into the text fields on this page and click **Connect**.
-
-![](images/segment4.png)
-
-5. Verify that the database connected successfully.
-
-You should see a message indicating that the connection was successful. If not, check that you entered the settings correctly. If it still isn't working, [contact Segment support](https://segment.com/help/contact/){:target="_blank"}.
-
-### Sync schedule
+## Sync schedule
 
 {% include content/warehouse-sync-sched.md %}
 
-![](/docs/connections/destinations/catalog/images/syncsched.png)
+![A screenshot of the sync schedule page. The enable sync schedule is toggled on, and the sync schedule dropdowns are visible.](/docs/connections/destinations/catalog/images/syncsched.png)
 
 
 ## Security
@@ -260,7 +133,7 @@ For more information on single vs double follow [this link](http://blog.lerner.c
 
 ### Can I add an index to my tables?
 
-Yes! You can add indexes to your tables without blocking Segment syncs. However, Segment recommends limiting the number of indexes you have. Postgres's native behavior requires that indexes update as more data is loaded, and this can slow down your Segment syncs.
+Yes, you can add indexes to your tables without blocking Segment syncs. However, Segment recommends limiting the number of indexes you have. Postgres's native behavior requires that indexes update as more data is loaded, and this can slow down your Segment syncs.
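For example, an index on a frequently filtered column might be added as in the sketch below, which reuses the hypothetical `production.deployments_show` table from the examples above. Postgres's `CREATE INDEX CONCURRENTLY` builds the index without taking a write lock, which matters here because an ordinary `CREATE INDEX` would block inserts for the duration of the build.

```sql
-- Build the index without locking the table against writes,
-- so an in-progress Segment sync isn't blocked while it builds.
-- (CONCURRENTLY cannot run inside a transaction block.)
CREATE INDEX CONCURRENTLY idx_deployments_show_name
    ON production.deployments_show (deployment_name);
```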
 
 ## Troubleshooting
 