Monitoring Kubernetes

This guide shows you how to collect logs and metrics from your Kubernetes system and send them to ClickStack for visualization and analysis. For demo data, we optionally use the ClickStack fork of the official OpenTelemetry demo.

Prerequisites

This guide requires you to have:

  • A Kubernetes cluster (v1.20+ recommended) with at least 32 GiB of RAM and 100 GB of disk space available on one node for ClickHouse.
  • Helm v3+
  • kubectl, configured to interact with your cluster

Deployment options

You can follow this guide using either of the following deployment options:

  • Self-hosted: Deploy ClickStack entirely within your Kubernetes cluster, including:

    • ClickHouse
    • HyperDX
    • MongoDB (used for dashboard state and configuration)
  • Cloud-hosted: Use ClickHouse Cloud, with HyperDX managed externally. This eliminates the need to run ClickHouse or HyperDX inside your cluster.

To simulate application traffic, you can optionally deploy the ClickStack fork of the OpenTelemetry Demo Application. This generates telemetry data including logs, metrics, and traces. If you already have workloads running in your cluster, you can skip this step and monitor existing pods, nodes, and containers.

Install cert-manager (Optional)

If your setup needs TLS certificates, install cert-manager using Helm:
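
A typical Helm-based install looks like the following. The release name and namespace are conventional choices rather than requirements of this guide:

```bash
# Add the Jetstack repository that publishes cert-manager
helm repo add jetstack https://charts.jetstack.io
helm repo update

# Install cert-manager together with its CRDs
# (older chart versions use --set installCRDs=true instead)
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --create-namespace \
  --set crds.enabled=true
```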

Deploy the OpenTelemetry Demo (Optional)

This step is optional and intended for users with no existing pods to monitor. Users with existing services deployed in their Kubernetes environment can skip it; however, the demo includes instrumented microservices that generate trace and session replay data, allowing users to explore all the features of ClickStack.

The following deploys the ClickStack fork of the OpenTelemetry Demo application stack within a Kubernetes cluster, tailored for observability testing and showcasing instrumentation. It includes backend microservices, load generators, telemetry pipelines, supporting infrastructure (e.g., Kafka, Redis), and SDK integrations with ClickStack.
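
A minimal sketch of the deployment is shown below, assuming the demo is applied from a manifest published with the ClickStack fork. The manifest location is a placeholder; use the file referenced in the ClickStack documentation:

```bash
# Create the namespace used throughout this guide
kubectl create namespace otel-demo

# Apply the ClickStack fork of the OpenTelemetry demo
# (<clickstack-otel-demo-manifest>.yaml is a placeholder for the manifest provided with the fork)
kubectl apply --namespace otel-demo -f <clickstack-otel-demo-manifest>.yaml
```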

All services are deployed to the otel-demo namespace. Each deployment includes:

  • Automatic instrumentation with OTel and ClickStack SDKs for traces, metrics, and logs.
  • All services send their telemetry to an OpenTelemetry collector named my-hyperdx-hdx-oss-v2-otel-collector (not yet deployed).
  • Forwarding of resource tags to correlate logs, metrics, and traces via the OTEL_RESOURCE_ATTRIBUTES environment variable.

On deployment of the demo, confirm all pods have been successfully created and are in the Running state:
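
For example:

```bash
kubectl get pods --namespace otel-demo
```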

Demo Architecture

The demo is composed of microservices written in different programming languages that communicate with each other over gRPC and HTTP, together with a load generator that uses Locust to simulate user traffic. The original source code for this demo has been modified to use ClickStack instrumentation.

Credit: https://opentelemetry.io/docs/demo/architecture/

Further details on the demo can be found in the official OpenTelemetry demo documentation.

Add the ClickStack Helm chart repository

To deploy ClickStack, we use the official Helm chart.

This requires us to add the HyperDX Helm repository:
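
For example (the repository URL below is the one published by the HyperDX project; verify it against the ClickStack documentation):

```bash
helm repo add hyperdx https://hyperdxio.github.io/helm-charts
helm repo update
```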

Deploy ClickStack

With the Helm chart installed, you can deploy ClickStack to your cluster. You can either run all components, including ClickHouse and HyperDX, within your Kubernetes environment, or use ClickHouse Cloud, where HyperDX is also available as a managed service.


Self-managed deployment

The following command installs ClickStack to the otel-demo namespace. The helm chart deploys:

  • A ClickHouse instance
  • HyperDX
  • The ClickStack distribution of the OTel collector
  • MongoDB for storage of HyperDX application state
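
A sketch of the install follows, assuming the hdx-oss-v2 chart and a release name of my-hyperdx (consistent with the collector service name used elsewhere in this guide). Append --set overrides, for example for storageClassName, as needed:

```bash
helm install my-hyperdx hyperdx/hdx-oss-v2 \
  --namespace otel-demo \
  --create-namespace
```
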
Note

You might need to adjust the storageClassName according to your Kubernetes cluster configuration.

Users not deploying the OTel demo can modify this command, selecting an appropriate namespace.

ClickStack in production

This chart also installs ClickHouse and the OTel collector. For production, it is recommended that you use the ClickHouse and OTel collector Kubernetes operators and/or ClickHouse Cloud.

To disable the bundled ClickHouse and OTel collector, set the following values:
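
For example (the value keys below are assumptions; confirm the exact keys in the chart's values.yaml):

```bash
helm install my-hyperdx hyperdx/hdx-oss-v2 \
  --namespace otel-demo \
  --create-namespace \
  --set clickhouse.enabled=false \
  --set otel.enabled=false
```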

Using ClickHouse Cloud

If you'd rather use ClickHouse Cloud, you can deploy ClickStack and disable the included ClickHouse.
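
A sketch of such a deployment, disabling only the bundled ClickHouse (the value key is an assumption; the ClickHouse Cloud connection details are then supplied via chart values or in the HyperDX UI):

```bash
helm install my-hyperdx hyperdx/hdx-oss-v2 \
  --namespace otel-demo \
  --create-namespace \
  --set clickhouse.enabled=false
```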

Note

The chart currently always deploys both HyperDX and MongoDB. While these components offer an alternative access path, they are not integrated with ClickHouse Cloud authentication. These components are intended for administrators in this deployment model, providing access to the secure ingestion key needed to ingest through the deployed OTel collector, but should not be exposed to end users.

To verify the deployment status, run the following command and confirm all components are in the Running state. Note that ClickHouse will be absent from this list if you are using ClickHouse Cloud:
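
```bash
kubectl get pods --namespace otel-demo
```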

Access the HyperDX UI

Note

Even when using ClickHouse Cloud, the local HyperDX instance deployed in the Kubernetes cluster is still required. It provides an ingestion key, managed by the OpAMP server bundled with HyperDX, which secures ingestion through the deployed OTel collector - a capability not currently available in the ClickHouse Cloud-hosted version.

For security, the service uses ClusterIP and is not exposed externally by default.

To access the HyperDX UI, port forward the service port 3000 to local port 8080.
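
For example (the service name below assumes a my-hyperdx release of the hdx-oss-v2 chart; check the actual name with kubectl get services):

```bash
kubectl port-forward service/my-hyperdx-hdx-oss-v2-app 8080:3000 --namespace otel-demo
```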

Navigate to http://localhost:8080 to access the HyperDX UI.

Create a user, providing a username and password that meets the complexity requirements.

Retrieve ingestion API key

Ingestion to the OTel collector deployed by the ClickStack Helm chart is secured with an ingestion key.

Navigate to Team Settings and copy the Ingestion API Key from the API Keys section. This API key ensures data ingestion through the OpenTelemetry collector is secure.

Create API Key Kubernetes Secret

Create a new Kubernetes secret containing the Ingestion API Key, and a config map containing the location of the OTel collector deployed with the ClickStack Helm chart. Later components will use these to send data to that collector:
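
A sketch of the two objects follows. The secret, config map, and key names are chosen for illustration; later manifests must reference whatever names you use:

```bash
# Store the Ingestion API Key copied from HyperDX (replace <ingestion-api-key>)
kubectl create secret generic hyperdx-secret \
  --namespace otel-demo \
  --from-literal=HYPERDX_API_KEY=<ingestion-api-key>

# Record the in-cluster address of the ClickStack OTel collector
# (the service name assumes a my-hyperdx release of the hdx-oss-v2 chart)
kubectl create configmap otel-config-vars \
  --namespace otel-demo \
  --from-literal=YOUR_OTEL_COLLECTOR_ENDPOINT=http://my-hyperdx-hdx-oss-v2-otel-collector:4318
```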

Restart the OpenTelemetry demo application pods so they pick up the Ingestion API Key.
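
For example:

```bash
# Restarts every deployment in the namespace, including the ClickStack components
kubectl rollout restart deployment --namespace otel-demo
```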

Trace and log data from demo services should now begin to flow into HyperDX.

Add the OpenTelemetry Helm repo

To collect Kubernetes metrics, we will deploy a standard OTel collector, configuring this to send data securely to our ClickStack collector using the above ingestion API key.

This requires us to add the OpenTelemetry Helm repository:
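
```bash
helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
helm repo update
```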

Deploy Kubernetes collector components

To collect logs and metrics from both the cluster itself and each node, we'll need to deploy two separate OpenTelemetry collectors, each with its own manifest. The two manifests provided - k8s_deployment.yaml and k8s_daemonset.yaml - work together to collect comprehensive telemetry data from your Kubernetes cluster.

  • k8s_deployment.yaml deploys a single OpenTelemetry Collector instance responsible for collecting cluster-wide events and metadata. It gathers Kubernetes events, cluster metrics, and enriches telemetry data with pod labels and annotations. This collector runs as a standalone deployment with a single replica to avoid duplicate data.

  • k8s_daemonset.yaml deploys a DaemonSet-based collector that runs on every node in your cluster. It collects node-level and pod-level metrics, as well as container logs, using components like kubeletstats, hostmetrics, and Kubernetes attribute processors. These collectors enrich logs with metadata and send them to HyperDX using the OTLP exporter.

Together, these manifests enable full-stack observability across the cluster, from infrastructure to application-level telemetry, and send the enriched data to ClickStack for centralized analysis.

First, install the collector as a deployment:

k8s_deployment.yaml
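
A sketch of the install, assuming k8s_deployment.yaml is passed as a values file to the OpenTelemetry collector Helm chart (the release name is illustrative):

```bash
helm install otel-collector-cluster open-telemetry/opentelemetry-collector \
  --namespace otel-demo \
  --values k8s_deployment.yaml
```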

Next, deploy the collector as a DaemonSet for node and pod-level metrics and logs:

k8s_daemonset.yaml
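
Again as a sketch, assuming k8s_daemonset.yaml is passed as a values file to the same chart (release name illustrative):

```bash
helm install otel-collector-daemonset open-telemetry/opentelemetry-collector \
  --namespace otel-demo \
  --values k8s_daemonset.yaml
```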

Explore Kubernetes data in HyperDX

Navigate to your HyperDX UI - either using your Kubernetes-deployed instance or via ClickHouse Cloud.

Using ClickHouse Cloud

If using ClickHouse Cloud, simply log in to your ClickHouse Cloud service and select "HyperDX" from the left menu. You will be automatically authenticated and will not need to create a user.

When prompted to create a datasource, retain all default values within the create source modal, completing the Table field with the value otel_logs to create a logs source. All other settings should be auto-detected, allowing you to click Save New Source.

ClickHouse Cloud HyperDX Datasource

You will also need to create a datasource for traces and metrics.

For example, to create sources for traces and OTel metrics, users can select Create New Source from the top menu.

HyperDX create new source

From here, select the required source type followed by the appropriate table e.g. for traces, select the table otel_traces. All settings should be auto-detected.

HyperDX create trace source
Correlating sources

Note that different data sources in ClickStack—such as logs and traces—can be correlated with each other. To enable this, additional configuration is required on each source. For example, in the logs source, you can specify a corresponding trace source, and vice versa in the traces source. See "Correlated sources" for further details.

Using self-managed deployment

To access the locally deployed HyperDX, you can port forward using the command shown earlier and access HyperDX at http://localhost:8080.

ClickStack in production

In production, we recommend using an ingress with TLS if you are not using HyperDX in ClickHouse Cloud. For example:
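
A sketch using chart values (the ingress value keys and hostname below are assumptions; check the chart's values.yaml, and ensure cert-manager or another certificate source is available):

```bash
helm upgrade my-hyperdx hyperdx/hdx-oss-v2 \
  --namespace otel-demo \
  --reuse-values \
  --set hyperdx.ingress.enabled=true \
  --set hyperdx.ingress.host=hyperdx.example.com \
  --set hyperdx.ingress.tls.enabled=true
```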

To explore the Kubernetes data, navigate to the dedicated preset dashboard at /kubernetes, e.g. http://localhost:8080/kubernetes.

Each of the tabs, Pods, Nodes, and Namespaces, should be populated with data.