Monitoring Kafka Logs with ClickStack
Collect and visualize Kafka broker logs (Log4j format) in ClickStack using the OTel filelog receiver. Includes a demo dataset and pre-built dashboard.
Integration with existing Kafka
This section covers configuring your existing Kafka installation to send broker logs to ClickStack by modifying the ClickStack OTel collector configuration. If you would like to try the Kafka logs integration before touching your own setup, use the preconfigured setup and sample data in the "Demo dataset" section below.
Prerequisites
- ClickStack instance running
- Existing Kafka installation (version 2.0 or newer)
- Access to Kafka log files (`server.log`, `controller.log`, etc.)
Verify Kafka logging configuration
Kafka uses Log4j and writes logs to the directory specified by the kafka.logs.dir system property or the LOG_DIR environment variable. Check your log file location:
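One way to locate the log directory is to inspect the running broker process and the `LOG_DIR` variable. The fallback path below is an assumption; adjust it for your distribution:

```shell
# Kafka's startup scripts pass -Dkafka.logs.dir to the broker JVM; read it
# from the running process (requires a running broker on this host):
ps -ef | grep -o 'kafka\.logs\.dir=[^ ]*'

# Or check LOG_DIR, falling back to a common default install path
# (an assumption -- adjust for your distribution):
echo "${LOG_DIR:-/opt/kafka/logs}"
```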
Key Kafka log files:
- `server.log`: General broker logs (startup, connections, replication, errors)
- `controller.log`: Controller-specific events (leader election, partition reassignment)
- `state-change.log`: Partition and replica state transitions
Kafka's default Log4j pattern produces lines like:
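For example, with the default `[%d] %p %m (%c)%n` conversion pattern (illustrative lines; note the stack trace continuing across multiple lines):

```text
[2026-03-09 10:15:42,123] INFO [KafkaServer id=1] started (kafka.server.KafkaServer)
[2026-03-09 10:15:43,456] WARN [ReplicaFetcher replicaId=1, leaderId=2, fetcherId=0] Error in response for fetch request (kafka.server.ReplicaFetcherThread)
java.io.IOException: Connection to 2 was disconnected before the response was read
	at org.apache.kafka.clients.NetworkClientUtils.sendAndReceive(NetworkClientUtils.java:100)
```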
For Docker-based Kafka deployments (e.g., confluentinc/cp-kafka), the default Log4j configuration only includes a console appender — there is no file appender, so logs are written to stdout only. To use the filelog receiver, you'll need to redirect logs to a file, either by adding a file appender to log4j.properties or by piping stdout (e.g., | tee /var/log/kafka/server.log).
Create a custom OTel collector configuration for Kafka
ClickStack allows you to extend the base OpenTelemetry Collector configuration by mounting a custom configuration file and setting an environment variable. The custom configuration is merged with the base configuration managed by HyperDX via OpAMP.
Create a file named kafka-logs-monitoring.yaml with the following configuration:
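The sketch below shows the shape such a config can take. The `include` paths and the regex layout assume the default Log4j pattern and a `/var/log/kafka` log directory; adjust both for your installation:

```yaml
receivers:
  filelog/kafka:
    include:
      - /var/log/kafka/server.log
      - /var/log/kafka/controller.log
      - /var/log/kafka/state-change.log
    # Read existing log content on first start; use `end` in production
    start_at: beginning
    # Lines that do not start with a "[timestamp]" prefix (e.g. stack traces)
    # are treated as continuations of the previous entry
    multiline:
      line_start_pattern: '^\[\d{4}-\d{2}-\d{2}'
    operators:
      # Parse the default Log4j pattern: [timestamp] LEVEL message (logger)
      - type: regex_parser
        regex: '^\[(?P<timestamp>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3})\] (?P<level>[A-Z]+) (?P<message>[\s\S]*)$'
        timestamp:
          parse_from: attributes.timestamp
          layout: '%Y-%m-%d %H:%M:%S,%L'
        severity:
          parse_from: attributes.level

service:
  pipelines:
    logs/kafka:
      receivers: [filelog/kafka]
      # memory_limiter, transform, batch, and clickhouse come from the base
      # ClickStack configuration; they are only referenced here by name
      processors: [memory_limiter, transform, batch]
      exporters: [clickhouse]
```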
- You only define new receivers and pipelines in the custom config. The processors (`memory_limiter`, `transform`, `batch`) and exporters (`clickhouse`) are already defined in the base ClickStack configuration; you just reference them by name.
- The `multiline` configuration ensures stack traces are captured as a single log entry.
- This configuration uses `start_at: beginning` to read all existing logs when the collector starts. For production deployments, change to `start_at: end` to avoid re-ingesting logs on collector restarts.
Configure ClickStack to load custom configuration
To enable custom collector configuration in your existing ClickStack deployment, you must:
- Mount the custom config file at `/etc/otelcol-contrib/custom.config.yaml`
- Set the environment variable `CUSTOM_OTELCOL_CONFIG_FILE=/etc/otelcol-contrib/custom.config.yaml`
- Mount your Kafka log directory so the collector can read the log files
- Docker Compose
- Docker Run (All-in-One Image)
Update your ClickStack deployment configuration:
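A minimal Compose sketch, assuming the all-in-one image and a Kafka log directory at `/var/log/kafka` (service name, ports, and host paths may differ in your deployment):

```yaml
services:
  clickstack:
    image: docker.hyperdx.io/hyperdx/hyperdx-all-in-one
    ports:
      - "8080:8080"
    environment:
      CUSTOM_OTELCOL_CONFIG_FILE: /etc/otelcol-contrib/custom.config.yaml
    volumes:
      - ./kafka-logs-monitoring.yaml:/etc/otelcol-contrib/custom.config.yaml:ro
      # Must match the `include` paths in the filelog receiver
      - /var/log/kafka:/var/log/kafka:ro
```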
If you're using the all-in-one image with docker, run:
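For example (container name, ports, and the host log path are assumptions; adjust as needed):

```shell
docker run --name clickstack \
  -p 8080:8080 \
  -e CUSTOM_OTELCOL_CONFIG_FILE=/etc/otelcol-contrib/custom.config.yaml \
  -v "$(pwd)/kafka-logs-monitoring.yaml:/etc/otelcol-contrib/custom.config.yaml:ro" \
  -v /var/log/kafka:/var/log/kafka:ro \
  docker.hyperdx.io/hyperdx/hyperdx-all-in-one
```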
Ensure the ClickStack collector has appropriate permissions to read the Kafka log files. In production, use read-only mounts (:ro) and follow the principle of least privilege.
Demo dataset
Test the Kafka logs integration with a pre-generated sample dataset before configuring your production systems.
Create test collector configuration
Create a file named kafka-logs-demo.yaml with the following configuration:
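A sketch of the demo config, structured like the production one above. The `/data/kafka-demo` mount point is a hypothetical path chosen for this walkthrough; it just has to match the volume mount used when starting the container:

```yaml
receivers:
  filelog/kafka-demo:
    include:
      - /data/kafka-demo/*.log   # hypothetical mount point for the sample logs
    start_at: beginning          # the demo data is historical, so read from the start
    multiline:
      line_start_pattern: '^\[\d{4}-\d{2}-\d{2}'
    operators:
      - type: regex_parser
        regex: '^\[(?P<timestamp>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3})\] (?P<level>[A-Z]+) (?P<message>[\s\S]*)$'
        timestamp:
          parse_from: attributes.timestamp
          layout: '%Y-%m-%d %H:%M:%S,%L'
        severity:
          parse_from: attributes.level

service:
  pipelines:
    logs/kafka-demo:
      receivers: [filelog/kafka-demo]
      processors: [memory_limiter, transform, batch]
      exporters: [clickhouse]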
Run ClickStack with demo configuration
Run ClickStack with the demo logs and configuration:
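For example, assuming the sample logs were downloaded to a local `kafka-demo-logs/` directory (a hypothetical name for this walkthrough):

```shell
docker run --name clickstack-demo \
  -p 8080:8080 \
  -e CUSTOM_OTELCOL_CONFIG_FILE=/etc/otelcol-contrib/custom.config.yaml \
  -v "$(pwd)/kafka-logs-demo.yaml:/etc/otelcol-contrib/custom.config.yaml:ro" \
  -v "$(pwd)/kafka-demo-logs:/data/kafka-demo:ro" \
  docker.hyperdx.io/hyperdx/hyperdx-all-in-one
```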
Verify logs in HyperDX
Once ClickStack is running:
- Open HyperDX and log in to your account (you may need to create an account first)
- Navigate to the Search view and set the source to `Logs`
- Set the time range to include 2026-03-09 00:00:00 - 2026-03-10 00:00:00 (UTC)
Dashboards and visualization
Import pre-built dashboard
- Open HyperDX and navigate to the Dashboards section.
- Click "Import Dashboard" from the ellipsis menu in the upper-right corner.
- Upload the kafka-logs-dashboard.json file and click "Finish Import".
The dashboard will be created with all visualizations pre-configured.
For the demo dataset, set the time range to include 2026-03-09 00:00:00 - 2026-03-10 00:00:00 (UTC).
Troubleshooting
Verify the effective config includes your filelog receiver:
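One way to do this, assuming the container name used earlier; the effective-config path inside the container is an assumption and may vary between ClickStack versions:

```shell
# Dump the merged (base + custom) collector configuration from the container
# and check that the filelog receiver made it into the effective config.
docker exec clickstack cat /etc/otel/supervisor-data/effective.yaml | grep -A 10 'filelog'
```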
Check for collector errors:
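For example (assuming the container name used earlier):

```shell
# Collector output is part of the container logs; filter for errors and
# any messages mentioning the filelog receiver.
docker logs clickstack 2>&1 | grep -iE 'error|filelog'
```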
Verify Kafka log format matches the expected pattern:
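You can sanity-check a line against the parser's pattern locally. The sample line below is illustrative; substitute the output of `head -n1` on your own `server.log`:

```shell
# Pipe a log line through the same shape of pattern the regex_parser uses;
# a match means timestamp, level, and message will be extracted.
echo '[2026-03-09 10:15:42,123] INFO [KafkaServer id=1] started (kafka.server.KafkaServer)' |
  grep -E '^\[[0-9]{4}-[0-9]{2}-[0-9]{2} [0-9]{2}:[0-9]{2}:[0-9]{2},[0-9]{3}\] [A-Z]+ '
```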
If your Kafka installation uses a custom Log4j pattern, adjust the regex_parser regex accordingly.
Next steps
- Set up alerts for critical events (broker failures, replication errors, consumer group issues)
- Combine with Kafka Metrics for comprehensive Kafka monitoring
- Create additional dashboards for specific use cases (controller events, partition reassignment)
Going to production
This guide extends ClickStack's built-in OpenTelemetry Collector for quick setup. For production deployments, we recommend running your own OTel Collector and sending data to ClickStack's OTLP endpoint. See Sending OpenTelemetry data for production configuration.