Monitoring PostgreSQL Logs with ClickStack
This guide shows you how to monitor PostgreSQL with ClickStack by configuring the OpenTelemetry collector to ingest PostgreSQL server logs. You'll learn how to:
- Configure PostgreSQL to output logs in CSV format for structured parsing
- Create a custom OTel collector configuration for log ingestion
- Deploy ClickStack with your custom configuration
- Use a pre-built dashboard to visualize PostgreSQL log insights (errors, slow queries, connections)
A demo dataset with sample logs is available if you want to test the integration before configuring your production PostgreSQL.
Time Required: 10-15 minutes
Integration with existing PostgreSQL
This section covers configuring your existing PostgreSQL installation to send logs to ClickStack by modifying the ClickStack OTel collector configuration.
If you would like to test the PostgreSQL logs integration before configuring your own installation, you can use the preconfigured setup and sample data in the "Demo dataset" section.
Prerequisites
- ClickStack instance running
- Existing PostgreSQL installation (version 9.6 or newer)
- Access to modify PostgreSQL configuration files
- Sufficient disk space for log files
Configure PostgreSQL logging
PostgreSQL supports multiple log formats. For structured parsing with OpenTelemetry, we recommend the CSV format (`csvlog`), which produces consistent, machine-parseable output.
The postgresql.conf file is typically located at:
- Linux (apt/yum): `/etc/postgresql/{version}/main/postgresql.conf`
- macOS (Homebrew): `/usr/local/var/postgres/postgresql.conf` or `/opt/homebrew/var/postgres/postgresql.conf`
- Docker: configuration is usually set via environment variables or a mounted config file
Add or modify these settings in postgresql.conf:
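The following is a representative sketch; the rotation and slow-query thresholds are illustrative values to tune for your environment:

```ini
# Run a background collector process that captures log output to files
logging_collector = on

# Write logs in CSV format for structured parsing
log_destination = 'csvlog'

# Relative paths are resolved under the data directory
log_directory = 'log'
log_filename = 'postgresql-%Y-%m-%d_%H%M%S'

# Rotate daily or at 100 MB, whichever comes first
log_rotation_age = 1d
log_rotation_size = 100MB

# Capture connection lifecycle events and statements slower than 1s
log_connections = on
log_disconnections = on
log_min_duration_statement = 1000
```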
This guide uses PostgreSQL's `csvlog` format for reliable structured parsing. If you're using the `stderr` or `jsonlog` formats, you'll need to adjust the OpenTelemetry collector configuration accordingly.
After making these changes, restart PostgreSQL:
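How you restart depends on how PostgreSQL was installed; the service name may carry a version suffix (for example, `postgresql@16` on Homebrew):

```bash
# Linux (systemd)
sudo systemctl restart postgresql

# macOS (Homebrew)
brew services restart postgresql
```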
Verify logs are being written:
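With the relative `log_directory = 'log'` setting above, the CSV files land under the data directory. The path below assumes a Debian-style layout and PostgreSQL 16; adjust it to your installation:

```bash
# List the log directory and confirm .csv files are present
sudo ls -l /var/lib/postgresql/16/main/log/

# Tail the newest file to confirm entries are being appended
sudo tail -f /var/lib/postgresql/16/main/log/postgresql-*.csv
```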
Create custom OTel collector configuration
ClickStack allows you to extend the base OpenTelemetry Collector configuration by mounting a custom configuration file and setting an environment variable. The custom configuration is merged with the base configuration managed by HyperDX via OpAMP.
Create a file named postgres-logs-monitoring.yaml with the following configuration:
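The following is a sketch matching the description below. The `include` path is a placeholder for wherever the logs are mounted into the container, and the CSV header shown is the PostgreSQL 13 column set - older and newer versions have slightly fewer or more columns, so adjust both to your environment:

```yaml
receivers:
  filelog/postgresql:
    # Placeholder: point at your mounted PostgreSQL CSV log files
    include:
      - /var/log/postgresql/*.csv
    # Avoid re-ingesting old entries on collector restarts
    start_at: end
    # Entries begin with a timestamp; stitch continuation lines
    # (multi-line errors, queries) back onto the entry they belong to
    multiline:
      line_start_pattern: '^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}'
    operators:
      # Parse the standard csvlog columns into structured attributes
      - type: csv_parser
        header: log_time,user_name,database_name,process_id,connection_from,session_id,session_line_num,command_tag,session_start_time,virtual_transaction_id,transaction_id,error_severity,sql_state_code,message,detail,hint,internal_query,internal_query_pos,context,query,query_pos,location,application_name,backend_type
        # Preserve the original log timing
        timestamp:
          parse_from: attributes.log_time
          layout: '%Y-%m-%d %H:%M:%S.%L %Z'
      # Tag every record so it can be filtered in HyperDX
      - type: add
        field: attributes.source
        value: postgresql

service:
  pipelines:
    logs/postgresql:
      receivers: [filelog/postgresql]
      # These processors and the exporter are defined in the base
      # ClickStack configuration; they are referenced here by name
      processors: [memory_limiter, transform, batch]
      exporters: [clickhouse]
```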
This configuration:
- Reads PostgreSQL CSV logs from their standard location
- Handles multi-line log entries (errors often span multiple lines)
- Parses CSV format with all standard PostgreSQL log fields
- Extracts timestamps to preserve original log timing
- Adds a `source: postgresql` attribute for filtering in HyperDX
- Routes logs to the ClickHouse exporter via a dedicated pipeline

Note:

- You only define new receivers and pipelines in the custom config
- The processors (`memory_limiter`, `transform`, `batch`) and exporters (`clickhouse`) are already defined in the base ClickStack configuration - you just reference them by name
- The `csv_parser` operator extracts all standard PostgreSQL CSV log fields into structured attributes
- This configuration uses `start_at: end` to avoid re-ingesting logs on collector restarts. For testing, change it to `start_at: beginning` to see historical logs immediately.
- Adjust the `include` path to match your PostgreSQL log directory location
Configure ClickStack to load custom configuration
To enable custom collector configuration in your existing ClickStack deployment, you must:
- Mount the custom config file at `/etc/otelcol-contrib/custom.config.yaml`
- Set the environment variable `CUSTOM_OTELCOL_CONFIG_FILE=/etc/otelcol-contrib/custom.config.yaml`
- Mount your PostgreSQL log directory so the collector can read the log files
Option 1: Docker Compose
Update your ClickStack deployment configuration:
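A minimal sketch of the relevant parts of a Compose service definition; the service name, image reference, and host-side paths are placeholders for your deployment:

```yaml
services:
  clickstack:
    # Keep whatever image and tag your existing deployment uses
    image: docker.hyperdx.io/hyperdx/hyperdx-all-in-one
    environment:
      # Tell the collector to merge the mounted custom config
      - CUSTOM_OTELCOL_CONFIG_FILE=/etc/otelcol-contrib/custom.config.yaml
    volumes:
      # Custom collector configuration, mounted read-only
      - ./postgres-logs-monitoring.yaml:/etc/otelcol-contrib/custom.config.yaml:ro
      # PostgreSQL log directory (host path is a placeholder)
      - /var/lib/postgresql/16/main/log:/var/log/postgresql:ro
```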
Option 2: Docker Run (All-in-One Image)
If you're using the all-in-one image with docker run:
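A sketch under the same assumptions (the image reference, host paths, and published ports are placeholders to match your deployment):

```bash
docker run \
  -e CUSTOM_OTELCOL_CONFIG_FILE=/etc/otelcol-contrib/custom.config.yaml \
  -v "$(pwd)/postgres-logs-monitoring.yaml:/etc/otelcol-contrib/custom.config.yaml:ro" \
  -v /var/lib/postgresql/16/main/log:/var/log/postgresql:ro \
  -p 8080:8080 -p 4317:4317 -p 4318:4318 \
  docker.hyperdx.io/hyperdx/hyperdx-all-in-one
```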
Ensure the ClickStack collector has appropriate permissions to read the PostgreSQL log files. In production, use read-only mounts (`:ro`) and follow the principle of least privilege.
Verify logs in HyperDX
Once configured, log into HyperDX and verify logs are flowing:
- Navigate to the search view
- Set source to Logs
- Filter by `source:postgresql` to see PostgreSQL-specific logs
- You should see structured log entries with fields like `user_name`, `database_name`, `error_severity`, `message`, and `query`
Demo dataset
For users who want to test the PostgreSQL logs integration before configuring their production systems, we provide a sample dataset of pre-generated PostgreSQL logs with realistic patterns.
Create test collector configuration
Create a file named postgres-logs-demo.yaml with the following configuration:
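This mirrors the production configuration sketched earlier; the differences are the `include` path (a placeholder for wherever you mount the sample data) and `start_at: beginning`, so the pre-generated historical entries are ingested:

```yaml
receivers:
  filelog/postgresql_demo:
    # Placeholder: point at the mounted sample data file
    include:
      - /demo-data/postgresql-sample.csv
    # Read from the beginning so the historical demo entries are ingested
    start_at: beginning
    multiline:
      line_start_pattern: '^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}'
    operators:
      - type: csv_parser
        header: log_time,user_name,database_name,process_id,connection_from,session_id,session_line_num,command_tag,session_start_time,virtual_transaction_id,transaction_id,error_severity,sql_state_code,message,detail,hint,internal_query,internal_query_pos,context,query,query_pos,location,application_name,backend_type
        timestamp:
          parse_from: attributes.log_time
          layout: '%Y-%m-%d %H:%M:%S.%L %Z'
      - type: add
        field: attributes.source
        value: postgresql

service:
  pipelines:
    logs/postgresql_demo:
      receivers: [filelog/postgresql_demo]
      processors: [memory_limiter, transform, batch]
      exporters: [clickhouse]
```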
Dashboards and visualization
To help you get started monitoring PostgreSQL with ClickStack, we provide essential visualizations for PostgreSQL logs.
Import the pre-built dashboard
- Open HyperDX and navigate to the Dashboards section
- Click Import Dashboard in the upper right corner under the ellipses
- Upload the `postgresql-logs-dashboard.json` file and click Finish Import
View the dashboard
The dashboard will be created with all visualizations pre-configured:
For the demo dataset, ensure the time range is set to 2025-11-10 00:00:00 - 2025-11-11 00:00:00. The imported dashboard will not have a time range specified by default.
Troubleshooting
Custom config not loading
Verify the environment variable is set:
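For example (the container name `clickstack` is a placeholder for yours):

```bash
docker exec clickstack env | grep CUSTOM_OTELCOL_CONFIG_FILE
```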
Check the custom config file is mounted and readable:
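```bash
docker exec clickstack cat /etc/otelcol-contrib/custom.config.yaml
```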
No logs appearing in HyperDX
Check the effective config includes your filelog receiver:
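The location of the merged effective config can vary between ClickStack versions, so a version-agnostic check is to search the collector config directories for the receiver name:

```bash
docker exec clickstack sh -c 'grep -r "filelog/postgresql" /etc/otel* 2>/dev/null'
```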
Check for errors in the collector logs:
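For example:

```bash
docker logs clickstack 2>&1 | grep -iE "error|filelog"
```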
If using the demo dataset, verify the log file is accessible:
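Using the placeholder path from the demo configuration sketch above:

```bash
docker exec clickstack ls -l /demo-data/postgresql-sample.csv
```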
Next steps
After setting up PostgreSQL logs monitoring:
- Set up alerts for critical events (connection failures, slow queries, error spikes)
- Correlate logs with PostgreSQL metrics for comprehensive database monitoring
- Create custom dashboards for application-specific query patterns
- Configure `log_min_duration_statement` to identify slow queries specific to your performance requirements (see the snippet below)
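For example, to log every statement that runs longer than 500 ms (an illustrative threshold, not a recommendation):

```ini
# postgresql.conf: log statements slower than 500 ms
log_min_duration_statement = 500
```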
Going to production
This guide extends ClickStack's built-in OpenTelemetry Collector for quick setup. For production deployments, we recommend running your own OTel Collector and sending data to ClickStack's OTLP endpoint. See Sending OpenTelemetry data for production configuration.