Commit fab6d19

[clickpipes] FAQ or slot growth (#3048)
1 parent 0662472 commit fab6d19

1 file changed: +39 -1 lines changed

docs/en/integrations/data-ingestion/clickpipes/postgres/faq.md
@@ -63,4 +63,42 @@ Please refer to the [ClickPipes for Postgres: Schema Changes Propagation Support

### What are the costs for ClickPipes for Postgres CDC?

During the preview, ClickPipes is free of cost. Post-GA, pricing is still to be determined. The goal is to make the pricing reasonable and highly competitive compared to external ETL tools.
### My replication slot size is growing or not decreasing; what might be the issue?

If you're noticing that the size of your Postgres replication slot keeps increasing or isn't coming back down, it usually means that **WAL (Write-Ahead Log) records aren't being consumed (or "replayed") quickly enough** by your CDC pipeline or replication process. Below are the most common causes and how you can address them.
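Before working through them, it helps to confirm how much WAL each slot is actually retaining. A minimal sketch, assuming Postgres 10 or later (where `pg_current_wal_lsn()` and `pg_wal_lsn_diff()` are available):

```sql
-- How much WAL is each replication slot holding back from removal?
SELECT
    slot_name,
    active,
    pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn)) AS retained_wal
FROM
    pg_replication_slots;
```

If `retained_wal` for your CDC slot keeps climbing, one of the following causes is usually at play.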
1. **Sudden Spikes in Database Activity**
   - Large batch updates, bulk inserts, or significant schema changes can quickly generate a lot of WAL data.
   - The replication slot will hold these WAL records until they are consumed, causing a temporary spike in size (a way to measure this is sketched below).
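   To confirm such a burst, you can watch how fast the server is writing WAL. A minimal sketch, assuming Postgres 14+ (where the `pg_stat_wal` view exists); on older versions, comparing `pg_current_wal_lsn()` readings over time with `pg_wal_lsn_diff()` gives the same signal:

   ```sql
   -- Cumulative WAL volume since the last stats reset; sample this at
   -- intervals, and a sharp jump in wal_bytes points to a burst of activity.
   SELECT wal_records, wal_bytes, stats_reset
   FROM pg_stat_wal;
   ```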
2. **Long-Running Transactions**
   - An open transaction forces Postgres to keep all WAL segments generated since the transaction began, which can dramatically increase slot size.
   - Set `statement_timeout` and `idle_in_transaction_session_timeout` to reasonable values to prevent transactions from staying open indefinitely (a configuration sketch follows the query below):

   ```sql
   SELECT
       pid,
       state,
       age(now(), xact_start) AS transaction_duration,
       query AS current_query
   FROM
       pg_stat_activity
   WHERE
       xact_start IS NOT NULL
   ORDER BY
       age(now(), xact_start) DESC;
   ```

   Use this query to identify unusually long-running transactions.
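   One way to set those limits cluster-wide; the values are placeholders to tune for your workload:

   ```sql
   -- Example values only: cancel statements running longer than 1h, terminate
   -- sessions idle inside a transaction for over 10min, then reload the config.
   ALTER SYSTEM SET statement_timeout = '1h';
   ALTER SYSTEM SET idle_in_transaction_session_timeout = '10min';
   SELECT pg_reload_conf();
   ```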
3. **Maintenance or Utility Operations (e.g., `pg_repack`)**
   - Tools like `pg_repack` can rewrite entire tables, generating large amounts of WAL data in a short time.
   - Schedule these operations during slower traffic periods or monitor your WAL usage closely while they run.
4. **VACUUM and VACUUM ANALYZE**
   - Although necessary for database health, these operations can create extra WAL traffic, especially if they scan large tables.
   - Consider using autovacuum tuning parameters or scheduling manual VACUUMs during off-peak hours (a per-table example is sketched below).
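   Per-table storage parameters are one knob for this; a sketch with a hypothetical table name and illustrative values:

   ```sql
   -- Hypothetical table and illustrative values: trigger autovacuum earlier
   -- (at ~5% dead rows) but throttle it so the extra WAL arrives gradually.
   ALTER TABLE big_table SET (
       autovacuum_vacuum_scale_factor = 0.05,
       autovacuum_vacuum_cost_delay = 10
   );
   ```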
5. **Replication Consumer Not Actively Reading the Slot**
   - If your CDC pipeline (e.g., ClickPipes) or another replication consumer stops, pauses, or crashes, WAL data will accumulate in the slot.
   - Ensure your pipeline is continuously running and check logs for connectivity or authentication errors (a quick check on the Postgres side is sketched below).
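   To see from the Postgres side whether anything is connected to the slot, a sketch (the slot name is a placeholder; list `pg_replication_slots` to find yours):

   ```sql
   -- active = false on a slot whose retained WAL keeps growing usually means
   -- the consumer is stopped or disconnected; active_pid is the walsender.
   SELECT slot_name, active, active_pid, restart_lsn, confirmed_flush_lsn
   FROM pg_replication_slots
   WHERE slot_name = 'my_clickpipes_slot';  -- placeholder slot name
   ```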
