Commit ce141ec

postgres logs guide
1 parent 105f163 commit ce141ec

7 files changed: +378 -0 lines changed

docs/use-cases/observability/clickstack/integration-examples/index.md

Lines changed: 1 addition & 0 deletions

@@ -19,6 +19,7 @@ Several of these integration guides use ClickStack's built-in OpenTelemetry Coll
 | [Kafka Metrics](/use-cases/observability/clickstack/integrations/kafka-metrics) | Quick start guide for Kafka Metrics |
 | [Nginx Logs](/use-cases/observability/clickstack/integrations/nginx) | Quick start guide for Nginx Logs |
 | [Nginx Traces](/use-cases/observability/clickstack/integrations/nginx-traces) | Quick start guide for Nginx Traces |
+| [PostgreSQL Logs](/use-cases/observability/clickstack/integrations/postgresql-logs) | Quick start guide for PostgreSQL Logs |
 | [PostgreSQL Metrics](/use-cases/observability/clickstack/integrations/postgresql-metrics) | Quick start guide for PostgreSQL Metrics |
 | [Redis Logs](/use-cases/observability/clickstack/integrations/redis) | Quick start guide for Redis Logs |
 | [Redis Metrics](/use-cases/observability/clickstack/integrations/redis-metrics) | Quick start guide for Redis Metrics |

Lines changed: 376 additions & 0 deletions
@@ -0,0 +1,376 @@
---
slug: /use-cases/observability/clickstack/integrations/postgresql-logs
title: 'Monitoring PostgreSQL Logs with ClickStack'
sidebar_label: 'PostgreSQL Logs'
pagination_prev: null
pagination_next: null
description: 'Monitoring PostgreSQL Logs with ClickStack'
doc_type: 'guide'
keywords: ['PostgreSQL', 'Postgres', 'logs', 'OTEL', 'ClickStack', 'database monitoring']
---

import Image from '@theme/IdealImage';
import useBaseUrl from '@docusaurus/useBaseUrl';
import import_dashboard from '@site/static/images/clickstack/import-dashboard.png';
import logs_search_view from '@site/static/images/clickstack/postgres/postgres-logs-search-view.png';
import log_view from '@site/static/images/clickstack/postgres/postgres-log-view.png';
import logs_dashboard from '@site/static/images/clickstack/postgres/postgres-logs-dashboard.png';
import finish_import from '@site/static/images/clickstack/postgres/import-logs-dashboard.png';
import { TrackedLink } from '@site/src/components/GalaxyTrackedLink/GalaxyTrackedLink';

# Monitoring PostgreSQL Logs with ClickStack {#postgres-logs-clickstack}

:::note[TL;DR]
This guide shows you how to monitor PostgreSQL with ClickStack by configuring the OpenTelemetry collector to ingest PostgreSQL server logs. You'll learn how to:

- Configure PostgreSQL to output logs in CSV format for structured parsing
- Create a custom OTel collector configuration for log ingestion
- Deploy ClickStack with your custom configuration
- Use a pre-built dashboard to visualize PostgreSQL log insights (errors, slow queries, connections)

A demo dataset with sample logs is available if you want to test the integration before configuring your production PostgreSQL.

Time Required: 10-15 minutes
:::

## Integration with existing PostgreSQL {#existing-postgres}

This section covers configuring your existing PostgreSQL installation to send logs to ClickStack by modifying the ClickStack OTel collector configuration.

If you would like to try the PostgreSQL logs integration before configuring your own setup, use the preconfigured setup and sample data described in the ["Demo dataset"](/use-cases/observability/clickstack/integrations/postgresql-logs#demo-dataset) section.

##### Prerequisites {#prerequisites}
- A running ClickStack instance
- An existing PostgreSQL installation (version 9.6 or newer)
- Access to modify PostgreSQL configuration files
- Sufficient disk space for log files

<VerticalStepper headerLevel="h4">

#### Configure PostgreSQL logging {#configure-postgres}

PostgreSQL supports multiple log formats. For structured parsing with OpenTelemetry, we recommend the CSV format, which provides consistent, parseable output.

The `postgresql.conf` file is typically located at:
- **Linux (apt/yum)**: `/etc/postgresql/{version}/main/postgresql.conf`
- **macOS (Homebrew)**: `/usr/local/var/postgres/postgresql.conf` or `/opt/homebrew/var/postgres/postgresql.conf`
- **Docker**: configuration is usually set via environment variables or a mounted config file

Add or modify these settings in `postgresql.conf`:

```conf
# Required for CSV logging
logging_collector = on
log_destination = 'csvlog'

# Recommended: connection logging
log_connections = on
log_disconnections = on

# Optional: tune based on your monitoring needs
#log_min_duration_statement = 1000   # Log queries taking more than 1 second
#log_statement = 'ddl'               # Log DDL statements (CREATE, ALTER, DROP)
#log_checkpoints = on                # Log checkpoint activity
#log_lock_waits = on                 # Log lock contention
```

:::note
This guide uses PostgreSQL's `csvlog` format for reliable structured parsing. If you're using the `stderr` or `jsonlog` format, you'll need to adjust the OpenTelemetry collector configuration accordingly.
:::
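
For example, if you run PostgreSQL 15 or newer with `log_destination = 'jsonlog'`, the CSV-specific operators shown later in this guide could be swapped for a JSON parser along these lines. This is only a sketch; the file glob, attribute keys, and timestamp layout are assumptions to verify against your own log output:

```yaml
receivers:
  filelog/postgres:
    include:
      - /var/lib/postgresql/*/main/log/postgresql-*.json   # jsonlog output files; adjust to your installation
    start_at: end
    operators:
      # Each jsonlog line is a JSON object; lift its keys (error_severity, user, dbname, message, ...) into attributes
      - type: json_parser
        parse_from: body
        parse_to: attributes
      # jsonlog records the event time in the "timestamp" field
      - type: time_parser
        parse_from: attributes.timestamp
        layout: '%Y-%m-%d %H:%M:%S.%L %Z'
```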

After making these changes, restart PostgreSQL:

```bash
# For systemd
sudo systemctl restart postgresql

# For Docker
docker restart <postgres-container-name>
```

Verify logs are being written:

```bash
# Default log location on Linux
tail -f /var/lib/postgresql/{version}/main/log/postgresql-*.csv

# macOS Homebrew
tail -f /usr/local/var/postgres/log/postgresql-*.csv
```
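
Optionally, confirm from `psql` that the settings took effect; this assumes a local `postgres` superuser, so adjust the connection options to your environment:

```bash
# Show the effective logging settings; logging_collector should be "on" and log_destination "csvlog"
psql -U postgres -c "SELECT name, setting FROM pg_settings WHERE name IN ('logging_collector', 'log_destination', 'log_directory');"
```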

#### Create custom OTel collector configuration {#custom-otel}

ClickStack allows you to extend the base OpenTelemetry Collector configuration by mounting a custom configuration file and setting an environment variable. The custom configuration is merged with the base configuration managed by HyperDX via OpAMP.

Create a file named `postgres-logs-monitoring.yaml` with the following configuration:

```yaml
receivers:
  filelog/postgres:
    include:
      - /var/lib/postgresql/*/main/log/postgresql-*.csv # Adjust to match your PostgreSQL installation
    start_at: end
    multiline:
      line_start_pattern: '^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}'
    operators:
      - type: csv_parser
        parse_from: body
        parse_to: attributes
        header: 'log_time,user_name,database_name,process_id,connection_from,session_id,session_line_num,command_tag,session_start_time,virtual_transaction_id,transaction_id,error_severity,sql_state_code,message,detail,hint,internal_query,internal_query_pos,context,query,query_pos,location,application_name,backend_type,leader_pid,query_id'
        lazy_quotes: true

      - type: time_parser
        parse_from: attributes.log_time
        layout: '%Y-%m-%d %H:%M:%S.%L %Z'

      - type: add
        field: attributes.source
        value: "postgresql"

      - type: add
        field: resource["service.name"]
        value: "postgresql-production"

service:
  pipelines:
    logs/postgres:
      receivers: [filelog/postgres]
      processors:
        - memory_limiter
        - transform
        - batch
      exporters:
        - clickhouse
```

This configuration:
- Reads PostgreSQL CSV logs from their standard location
- Handles multi-line log entries (errors often span multiple lines)
- Parses the CSV format with all standard PostgreSQL log fields
- Extracts timestamps to preserve original log timing
- Adds a `source: postgresql` attribute for filtering in HyperDX
- Routes logs to the ClickHouse exporter via a dedicated pipeline

:::note
- You only define new receivers and pipelines in the custom config
- The processors (`memory_limiter`, `transform`, `batch`) and exporters (`clickhouse`) are already defined in the base ClickStack configuration; you just reference them by name
- The `csv_parser` operator extracts all standard PostgreSQL CSV log fields into structured attributes
- The `header` above matches the CSV columns emitted by PostgreSQL 14 and newer; older versions emit fewer columns (for example, `leader_pid` and `query_id` were added in version 14), so trim the header to match your version
- This configuration uses `start_at: end` to avoid re-ingesting logs on collector restarts. For testing, change it to `start_at: beginning` to see historical logs immediately.
- Adjust the `include` path to match your PostgreSQL log directory location
:::

#### Configure ClickStack to load custom configuration {#load-custom}

To enable custom collector configuration in your existing ClickStack deployment, you must:

1. Mount the custom config file at `/etc/otelcol-contrib/custom.config.yaml`
2. Set the environment variable `CUSTOM_OTELCOL_CONFIG_FILE=/etc/otelcol-contrib/custom.config.yaml`
3. Mount your PostgreSQL log directory so the collector can read the log files

##### Option 1: Docker Compose {#docker-compose}

Update your ClickStack deployment configuration:
```yaml
services:
  clickstack:
    # ... existing configuration ...
    environment:
      - CUSTOM_OTELCOL_CONFIG_FILE=/etc/otelcol-contrib/custom.config.yaml
      # ... other environment variables ...
    volumes:
      - ./postgres-logs-monitoring.yaml:/etc/otelcol-contrib/custom.config.yaml:ro
      - /var/lib/postgresql:/var/lib/postgresql:ro
      # ... other volumes ...
```

##### Option 2: Docker Run (All-in-One Image) {#all-in-one}

If you're using the all-in-one image with `docker run`:
```bash
docker run --name clickstack \
  -p 8080:8080 -p 4317:4317 -p 4318:4318 \
  -e CUSTOM_OTELCOL_CONFIG_FILE=/etc/otelcol-contrib/custom.config.yaml \
  -v "$(pwd)/postgres-logs-monitoring.yaml:/etc/otelcol-contrib/custom.config.yaml:ro" \
  -v /var/lib/postgresql:/var/lib/postgresql:ro \
  docker.hyperdx.io/hyperdx/hyperdx-all-in-one:latest
```

:::note
Ensure the ClickStack collector has appropriate permissions to read the PostgreSQL log files. In production, use read-only mounts (`:ro`) and follow the principle of least privilege.
:::
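
A quick way to confirm this is to list the log directory from inside the container; the example below uses the `clickstack` container name from the command above, and the path is an assumption to adjust to your log location:

```bash
# Verify the mounted PostgreSQL log files are visible and readable inside the ClickStack container
docker exec clickstack sh -c 'ls -l /var/lib/postgresql/*/main/log/'
```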

#### Verifying logs in HyperDX {#verifying-logs}

Once configured, log into HyperDX and verify that logs are flowing:

1. Navigate to the search view
2. Set the source to `Logs`
3. Filter by `source:postgresql` to see PostgreSQL-specific logs (example filters below)
4. You should see structured log entries with fields like `user_name`, `database_name`, `error_severity`, `message`, and `query`
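
For example, filters along these lines can narrow the view further; this is a sketch that assumes the parsed attribute keys are searchable directly, and exact syntax may vary with your HyperDX version:

```text
source:postgresql error_severity:ERROR       # only error-level entries
source:postgresql database_name:app_db       # entries for one database (app_db is a hypothetical name)
```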

<Image img={logs_search_view} alt="Logs search view"/>

<Image img={log_view} alt="Log view"/>

</VerticalStepper>

## Demo dataset {#demo-dataset}

For users who want to test the PostgreSQL logs integration before configuring their production systems, we provide a sample dataset of pre-generated PostgreSQL logs with realistic patterns.

<VerticalStepper headerLevel="h4">

#### Download the sample dataset {#download-sample}

Download the sample log file:

```bash
curl -O https://datasets-documentation.s3.eu-west-3.amazonaws.com/clickstack-integrations/postgres/postgresql.log
```
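
Optionally, preview the first few lines to confirm the download; the file uses PostgreSQL's CSV log format (as parsed by the demo configuration below) even though it has a `.log` extension:

```bash
# Show the first few CSV-formatted log entries from the sample file
head -n 3 postgresql.log
```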

#### Create test collector configuration {#test-config}

Create a file named `postgres-logs-demo.yaml` with the following configuration:

```yaml
receivers:
  filelog/postgres:
    include:
      - /tmp/postgres-demo/postgresql.log
    start_at: beginning # Read from the beginning for demo data
    multiline:
      line_start_pattern: '^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}'
    operators:
      - type: csv_parser
        parse_from: body
        parse_to: attributes
        header: 'log_time,user_name,database_name,process_id,connection_from,session_id,session_line_num,command_tag,session_start_time,virtual_transaction_id,transaction_id,error_severity,sql_state_code,message,detail,hint,internal_query,internal_query_pos,context,query,query_pos,location,application_name,backend_type,leader_pid,query_id'
        lazy_quotes: true

      - type: time_parser
        parse_from: attributes.log_time
        layout: '%Y-%m-%d %H:%M:%S.%L %Z'

      - type: add
        field: attributes.source
        value: "postgresql-demo"

      - type: add
        field: resource["service.name"]
        value: "postgresql-demo"

service:
  pipelines:
    logs/postgres-demo:
      receivers: [filelog/postgres]
      processors:
        - memory_limiter
        - transform
        - batch
      exporters:
        - clickhouse
```

#### Run ClickStack with demo configuration {#run-demo}

Run ClickStack with the demo logs and configuration:

```bash
docker run --name clickstack-demo \
  -p 8080:8080 -p 4317:4317 -p 4318:4318 \
  -e CUSTOM_OTELCOL_CONFIG_FILE=/etc/otelcol-contrib/custom.config.yaml \
  -v "$(pwd)/postgres-logs-demo.yaml:/etc/otelcol-contrib/custom.config.yaml:ro" \
  -v "$(pwd)/postgresql.log:/tmp/postgres-demo/postgresql.log:ro" \
  docker.hyperdx.io/hyperdx/hyperdx-all-in-one:latest
```

#### Verify logs in HyperDX {#verify-demo-logs}

Once ClickStack is running:

1. Open [HyperDX](http://localhost:8080/) and log in to your account (you may need to create an account first)
2. Navigate to the Search view and set the source to `Logs`
3. Set the time range to **2025-11-10 00:00:00 - 2025-11-11 00:00:00**

<Image img={logs_search_view} alt="Logs search view"/>

<Image img={log_view} alt="Log view"/>

</VerticalStepper>

## Dashboards and visualization {#dashboards}

To help you get started monitoring PostgreSQL with ClickStack, we provide essential visualizations for PostgreSQL logs.

<VerticalStepper headerLevel="h4">

#### <TrackedLink href={useBaseUrl('/examples/postgres-logs-dashboard.json')} download="postgresql-logs-dashboard.json" eventName="docs.postgres_logs_monitoring.dashboard_download">Download</TrackedLink> the dashboard configuration {#download}

#### Import the pre-built dashboard {#import-dashboard}

1. Open HyperDX and navigate to the Dashboards section
2. Click **Import Dashboard** in the upper right corner, under the ellipsis menu

<Image img={import_dashboard} alt="Import dashboard button"/>

3. Upload the `postgresql-logs-dashboard.json` file and click **Finish Import**

<Image img={finish_import} alt="Finish import"/>

#### View the dashboard {#created-dashboard}

The dashboard will be created with all visualizations pre-configured:

<Image img={logs_dashboard} alt="Logs dashboard"/>

:::note
For the demo dataset, ensure the time range is set to **2025-11-10 00:00:00 - 2025-11-11 00:00:00**. The imported dashboard does not have a time range set by default.
:::

</VerticalStepper>

## Troubleshooting {#troubleshooting}

### Custom config not loading {#troubleshooting-not-loading}

Verify the environment variable is set:
```bash
docker exec <container-name> printenv CUSTOM_OTELCOL_CONFIG_FILE
```

Check that the custom config file is mounted and readable:
```bash
docker exec <container-name> cat /etc/otelcol-contrib/custom.config.yaml | head -10
```

### No logs appearing in HyperDX {#no-logs}

Check that the effective config includes your filelog receiver:
```bash
docker exec <container-name> cat /etc/otel/supervisor-data/effective.yaml | grep -A 10 filelog
```

Check for errors in the collector logs:
```bash
docker exec <container-name> cat /etc/otel/supervisor-data/agent.log | grep -i postgres
```

If using the demo dataset, verify the log file is accessible:
```bash
docker exec <container-name> cat /tmp/postgres-demo/postgresql.log | wc -l
```

## Next steps {#next-steps}

After setting up PostgreSQL logs monitoring:

- Set up [alerts](/use-cases/observability/clickstack/alerts) for critical events (connection failures, slow queries, error spikes)
- Correlate logs with [PostgreSQL metrics](/use-cases/observability/clickstack/integrations/postgresql-metrics) for comprehensive database monitoring
- Create custom dashboards for application-specific query patterns
- Configure `log_min_duration_statement` to surface the slow queries that matter for your performance requirements (see the sketch below)
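
For example, a minimal sketch of enabling slow-query logging at a specific threshold without editing `postgresql.conf` directly; the 500 ms threshold and the `postgres` user are arbitrary examples:

```bash
# Log any statement that runs longer than 500 ms, then reload the configuration (no restart needed)
psql -U postgres -c "ALTER SYSTEM SET log_min_duration_statement = '500ms';"
psql -U postgres -c "SELECT pg_reload_conf();"
```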

## Going to production {#going-to-production}

This guide extends ClickStack's built-in OpenTelemetry Collector for quick setup. For production deployments, we recommend running your own OTel Collector and sending data to ClickStack's OTLP endpoint. See [Sending OpenTelemetry data](/use-cases/observability/clickstack/ingesting-data/opentelemetry) for production configuration.
Lines changed: 1 addition & 0 deletions

@@ -0,0 +1 @@
{"version":"0.1.0","name":"PostgreSQL Logs","tiles":[{"id":"1gp532","x":0,"y":0,"w":8,"h":10,"config":{"name":"Log volume over time","source":"Logs","displayType":"line","granularity":"auto","select":[{"aggFn":"count","aggCondition":"","aggConditionLanguage":"lucene","valueExpression":"","alias":"Log volume"}],"where":"","whereLanguage":"lucene"}},{"id":"763sy","x":0,"y":10,"w":8,"h":10,"config":{"name":"Errors over time","source":"Logs","displayType":"stacked_bar","granularity":"auto","select":[{"aggFn":"count","aggCondition":"LogAttributes['error_severity'] IN ('ERROR', 'FATAL')","aggConditionLanguage":"sql","valueExpression":"","alias":"Errors"}],"where":"","whereLanguage":"lucene"}},{"id":"1e6iez","x":16,"y":0,"w":8,"h":10,"config":{"name":"Logs by database","source":"Logs","displayType":"stacked_bar","granularity":"auto","select":[{"aggFn":"count","aggCondition":"LogAttributes['database_name'] != ''","aggConditionLanguage":"sql","valueExpression":""}],"where":"","whereLanguage":"lucene","groupBy":"LogAttributes['database_name']"}},{"id":"lm1o1","x":8,"y":0,"w":8,"h":10,"config":{"name":"Slow Queries","source":"Logs","displayType":"line","granularity":"auto","select":[{"aggFn":"count","aggCondition":"LogAttributes['message'] LIKE '%duration:%'\n AND toFloat64OrZero(extractAll(LogAttributes['message'], 'duration: ([0-9]+)')[1]) > 1000","aggConditionLanguage":"sql","valueExpression":"","alias":"Queries over 1000 ms"}],"where":"","whereLanguage":"lucene"}},{"id":"11ce4l","x":16,"y":10,"w":8,"h":10,"config":{"name":"Query types over time","source":"Logs","displayType":"line","granularity":"auto","select":[{"aggFn":"count","aggCondition":"LogAttributes['command_tag'] != ''","aggConditionLanguage":"sql","valueExpression":""}],"where":"","whereLanguage":"lucene","groupBy":"LogAttributes['command_tag']"}},{"id":"2lh3e","x":8,"y":10,"w":8,"h":10,"config":{"name":"Authentication failures","source":"Logs","displayType":"stacked_bar","granularity":"auto","select":[{"aggFn":"count","aggCondition":"LogAttributes['error_severity'] = 'FATAL'\n AND LogAttributes['message'] LIKE '%authentication failed%'","aggConditionLanguage":"sql","valueExpression":"","alias":"Auth failures"}],"where":"","whereLanguage":"lucene"}}],"filters":[]}
4 image files added (424 KB, 781 KB, 764 KB, 1.55 MB).
