
Configuration Options

The following configuration options are available for each component of ClickStack:

Modifying settings

Docker

If using the All in One, HyperDX Only, or Local Mode distribution, simply pass the desired setting via an environment variable, e.g.:
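
A minimal sketch, assuming the standard All in One image and default ports (adjust the image name, ports, and variable to your deployment):

```shell
# Illustrative only: override the HyperDX log level for the All in One image.
docker run \
  -e HYPERDX_LOG_LEVEL=debug \
  -p 8080:8080 -p 4317:4317 -p 4318:4318 \
  docker.hyperdx.io/hyperdx/hyperdx-all-in-one
```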

Docker compose

If using the Docker Compose deployment guide, the .env file can be used to modify settings.
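
A hedged sketch of a .env entry (the variable names come from the application settings listed below; the values are illustrative):

```shell
# .env - picked up by the Docker Compose deployment
HYPERDX_LOG_LEVEL=debug
HYPERDX_APP_PORT=8080
```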

Alternatively, explicitly overwrite settings in the docker-compose.yaml file.
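
For example, a minimal sketch assuming the Compose file defines an app service for the HyperDX UI/API (the service name and values are illustrative):

```yaml
# docker-compose.yaml (excerpt) - hypothetical service name "app"
services:
  app:
    environment:
      HYPERDX_LOG_LEVEL: debug
      HYPERDX_API_PORT: 8000
```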

Helm

Customizing values (Optional)

You can customize settings by using --set flags e.g.
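
A hedged sketch, assuming the chart is installed from the HyperDX Helm repository as hyperdx/hdx-oss-v2 under the release name my-hyperdx (chart name, release name, and value paths are illustrative; check them against your chart's defaults):

```shell
# Illustrative only: override individual values at install/upgrade time.
helm upgrade --install my-hyperdx hyperdx/hdx-oss-v2 \
  --set replicaCount=2 \
  --set hyperdx.logLevel=warn
```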

Alternatively edit the values.yaml. To retrieve the default values:
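
Assuming the same repository and chart names as above:

```shell
helm show values hyperdx/hdx-oss-v2 > values.yaml
```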

Example config:
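
A minimal sketch of a values.yaml override (keys are illustrative and should be checked against the defaults retrieved above):

```yaml
# values.yaml (excerpt) - hypothetical keys
replicaCount: 1
hyperdx:
  appUrl: "https://hyperdx.example.com"
  logLevel: info
```

The file can then be applied with helm upgrade --install my-hyperdx hyperdx/hdx-oss-v2 -f values.yaml.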

HyperDX

Data source settings

HyperDX relies on the user defining a source for each of the Observability data types/pillars:

  • Logs
  • Traces
  • Metrics
  • Sessions

This configuration can be performed inside the application from Team Settings -> Sources.

Each of these sources requires at least one table to be specified on creation, as well as a set of columns that allow HyperDX to query the data.

If using the default OpenTelemetry (OTel) schema distributed with ClickStack, these columns can be automatically inferred for each of the sources. If modifying the schema or using a custom schema, users are required to specify and update these mappings.

note

The default schema for ClickHouse distributed with ClickStack is the schema created by the ClickHouse exporter for the OTel collector. These column names correspond to the official OpenTelemetry specification.

The following settings are available for each source:

Logs

| Setting | Description | Required | Inferred in Default Schema | Inferred Value |
|---|---|---|---|---|
| Name | Source name. | Yes | No | |
| Server Connection | Server connection name. | Yes | No | Default |
| Database | ClickHouse database name. | Yes | Yes | default |
| Table | Target table name. Set to otel_logs if the default schema is used. | Yes | No | |
| Timestamp Column | Datetime column or expression that is part of your primary key. | Yes | Yes | TimestampTime |
| Default Select | Columns shown in default search results. | Yes | Yes | Timestamp, ServiceName, SeverityText, Body |
| Service Name Expression | Expression or column for the service name. | Yes | Yes | ServiceName |
| Log Level Expression | Expression or column for the log level. | Yes | Yes | SeverityText |
| Body Expression | Expression or column for the log message. | Yes | Yes | Body |
| Log Attributes Expression | Expression or column for custom log attributes. | Yes | Yes | LogAttributes |
| Resource Attributes Expression | Expression or column for resource-level attributes. | Yes | Yes | ResourceAttributes |
| Displayed Timestamp Column | Timestamp column used in UI display. | Yes | Yes | ResourceAttributes |
| Correlated Metric Source | Linked metric source (e.g. HyperDX metrics). | No | No | |
| Correlated Trace Source | Linked trace source (e.g. HyperDX traces). | No | No | |
| Trace Id Expression | Expression or column used to extract the trace ID. | Yes | Yes | TraceId |
| Span Id Expression | Expression or column used to extract the span ID. | Yes | Yes | SpanId |
| Implicit Column Expression | Column used for full-text search if no field is specified (Lucene-style). Typically the log body. | Yes | Yes | Body |

Traces

| Setting | Description | Required | Inferred in Default Schema | Inferred Value |
|---|---|---|---|---|
| Name | Source name. | Yes | No | |
| Server Connection | Server connection name. | Yes | No | Default |
| Database | ClickHouse database name. | Yes | Yes | default |
| Table | Target table name. Set to otel_traces if using the default schema. | Yes | Yes | - |
| Timestamp Column | Datetime column or expression that is part of your primary key. | Yes | Yes | Timestamp |
| Timestamp | Alias for Timestamp Column. | Yes | Yes | Timestamp |
| Default Select | Columns shown in default search results. | Yes | Yes | Timestamp, ServiceName as service, StatusCode as level, round(Duration / 1e6) as duration, SpanName |
| Duration Expression | Expression for calculating span duration. | Yes | Yes | Duration |
| Duration Precision | Precision for the duration expression (e.g. nanoseconds, microseconds). | Yes | Yes | ns |
| Trace Id Expression | Expression or column for trace IDs. | Yes | Yes | TraceId |
| Span Id Expression | Expression or column for span IDs. | Yes | Yes | SpanId |
| Parent Span Id Expression | Expression or column for parent span IDs. | Yes | Yes | ParentSpanId |
| Span Name Expression | Expression or column for span names. | Yes | Yes | SpanName |
| Span Kind Expression | Expression or column for span kind (e.g. client, server). | Yes | Yes | SpanKind |
| Correlated Log Source | Optional. Linked log source (e.g. HyperDX logs). | No | No | |
| Correlated Session Source | Optional. Linked session source. | No | No | |
| Correlated Metric Source | Optional. Linked metric source (e.g. HyperDX metrics). | No | No | |
| Status Code Expression | Expression for the span status code. | Yes | Yes | StatusCode |
| Status Message Expression | Expression for the span status message. | Yes | Yes | StatusMessage |
| Service Name Expression | Expression or column for the service name. | Yes | Yes | ServiceName |
| Resource Attributes Expression | Expression or column for resource-level attributes. | Yes | Yes | ResourceAttributes |
| Event Attributes Expression | Expression or column for event attributes. | Yes | Yes | SpanAttributes |
| Span Events Expression | Expression to extract span events. Typically a Nested type column. This allows rendering of exception stack traces with supported language SDKs. | Yes | Yes | Events |
| Implicit Column Expression | Column used for full-text search if no field is specified (Lucene-style). Typically the span name. | Yes | Yes | SpanName |

Metrics

| Setting | Description | Required | Inferred in Default Schema | Inferred Value |
|---|---|---|---|---|
| Name | Source name. | Yes | No | |
| Server Connection | Server connection name. | Yes | No | Default |
| Database | ClickHouse database name. | Yes | Yes | default |
| Gauge Table | Table storing gauge-type metrics. | Yes | No | otel_metrics_gauge |
| Histogram Table | Table storing histogram-type metrics. | Yes | No | otel_metrics_histogram |
| Sum Table | Table storing sum-type (counter) metrics. | Yes | No | otel_metrics_sum |
| Correlated Log Source | Optional. Linked log source (e.g. HyperDX logs). | No | No | |

Sessions

| Setting | Description | Required | Inferred in Default Schema | Inferred Value |
|---|---|---|---|---|
| Name | Source name. | Yes | No | |
| Server Connection | Server connection name. | Yes | No | Default |
| Database | ClickHouse database name. | Yes | Yes | default |
| Table | Target table for session data. Set to hyperdx_sessions if using the default schema. | Yes | Yes | - |
| Timestamp Column | Datetime column or expression that is part of your primary key. | Yes | Yes | TimestampTime |
| Log Attributes Expression | Expression for extracting log-level attributes from session data. | Yes | Yes | LogAttributes |
| LogAttributes | Alias or field reference used to store log attributes. | Yes | Yes | LogAttributes |
| Resource Attributes Expression | Expression for extracting resource-level metadata. | Yes | Yes | ResourceAttributes |
| Correlated Trace Source | Optional. Linked trace source for session correlation. | No | No | |
| Implicit Column Expression | Column used for full-text search when no field is specified (e.g. Lucene-style query parsing). | Yes | Yes | Body |

Correlated sources

To enable full cross-source correlation in ClickStack, users must configure correlated sources for logs, traces, metrics, and sessions. This allows HyperDX to associate related data and provide rich context when rendering events.

  • Logs: Can be correlated with traces and metrics.
  • Traces: Can be correlated with logs, sessions, and metrics.
  • Metrics: Can be correlated with logs.
  • Sessions: Can be correlated with traces.

By setting these correlations, HyperDX can, for example, render relevant logs alongside a trace or surface metric anomalies linked to a session. Proper configuration ensures a unified and contextual observability experience.

For example, the Logs source can be configured with a correlated metric source and a correlated trace source.

Application configuration settings

  • HYPERDX_API_KEY

    • Default: None (required)
    • Description: Authentication key for the HyperDX API.
    • Guidance:
    • Required for telemetry and logging
    • In local development, can be any non-empty value
    • For production, use a secure, unique key
    • Can be obtained from the team settings page after account creation
  • HYPERDX_LOG_LEVEL

    • Default: info
    • Description: Sets the logging verbosity level.
    • Options: debug, info, warn, error
    • Guidance:
    • Use debug for detailed troubleshooting
    • Use info for normal operation
    • Use warn or error in production to reduce log volume
  • HYPERDX_API_PORT

    • Default: 8000
    • Description: Port for the HyperDX API server.
    • Guidance:
    • Ensure this port is available on your host
    • Change if you have port conflicts
    • Must match the port in your API client configurations
  • HYPERDX_APP_PORT

    • Default: 8000
    • Description: Port for the HyperDX frontend app.
    • Guidance:
    • Ensure this port is available on your host
    • Change if you have port conflicts
    • Must be accessible from your browser
  • HYPERDX_APP_URL

    • Default: http://localhost
    • Description: Base URL for the frontend app.
    • Guidance:
    • Set to your domain in production
    • Include protocol (http/https)
    • Don't include trailing slash
  • MONGO_URI

    • Default: mongodb://db:27017/hyperdx
    • Description: MongoDB connection string.
    • Guidance:
    • Use default for local development with Docker
    • For production, use a secure connection string
    • Include authentication if required
    • Example: mongodb://user:pass@host:port/db
  • MINER_API_URL

    • Default: http://miner:5123
    • Description: URL for the log pattern mining service.
    • Guidance:
    • Use default for local development with Docker
    • Set to your miner service URL in production
    • Must be accessible from the API service
  • FRONTEND_URL

    • Default: http://localhost:3000
    • Description: URL for the frontend app.
    • Guidance:
    • Use default for local development
    • Set to your domain in production
    • Must be accessible from the API service
  • OTEL_SERVICE_NAME

    • Default: hdx-oss-api
    • Description: Service name for OpenTelemetry instrumentation.
    • Guidance:
    • Use a descriptive name for your HyperDX service. Applicable if HyperDX self-instruments.
    • Helps identify the HyperDX service in telemetry data
  • NEXT_PUBLIC_OTEL_EXPORTER_OTLP_ENDPOINT

    • Default: http://localhost:4318
    • Description: OpenTelemetry collector endpoint.
    • Guidance:
    • Relevant if self-instrumenting HyperDX.
    • Use default for local development
    • Set to your collector URL in production
    • Must be accessible from your HyperDX service
  • USAGE_STATS_ENABLED

    • Default: true
    • Description: Toggles usage statistics collection.
    • Guidance:
    • Set to false to disable usage tracking
    • Useful for privacy-sensitive deployments
    • Enabled by default to help improve the product
  • IS_OSS

    • Default: true
    • Description: Indicates if running in OSS mode.
    • Guidance:
    • Keep as true for open-source deployments
    • Set to false for enterprise deployments
    • Affects feature availability
  • IS_LOCAL_MODE

    • Default: false
    • Description: Indicates if running in local mode.
    • Guidance:
    • Set to true for local development
    • Disables certain production features
    • Useful for testing and development
  • EXPRESS_SESSION_SECRET

    • Default: hyperdx is cool 👋
    • Description: Secret for Express session management.
    • Guidance:
    • Change in production
    • Use a strong, random string
    • Keep secret and secure
  • ENABLE_SWAGGER

    • Default: false
    • Description: Toggles Swagger API documentation.
    • Guidance:
    • Set to true to enable API documentation
    • Useful for development and testing
    • Disable in production
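
As a combined illustration, a hedged sketch of how several of these settings might be set for a production-style deployment (all values are placeholders):

```shell
# .env (excerpt) - illustrative values only
HYPERDX_API_KEY=<your-api-key>
HYPERDX_LOG_LEVEL=warn
HYPERDX_APP_URL=https://hyperdx.example.com
MONGO_URI=mongodb://user:pass@mongo.internal:27017/hyperdx
EXPRESS_SESSION_SECRET=<long-random-string>
USAGE_STATS_ENABLED=false
```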

OpenTelemetry collector

See "ClickStack OpenTelemetry Collector" for more details.

  • CLICKHOUSE_ENDPOINT

    • Default: None (required) for the standalone image. In the All-in-One and Docker Compose distributions, this is set to the integrated ClickHouse instance.
    • Description: The HTTPS URL of the ClickHouse instance to export telemetry data to.
    • Guidance:
      • Must be a full HTTPS endpoint including port (e.g., https://clickhouse.example.com:8443)
      • Required for the collector to send data to ClickHouse
  • CLICKHOUSE_USER

    • Default: default
    • Description: Username used to authenticate with the ClickHouse instance.
    • Guidance:
      • Ensure the user has INSERT and CREATE TABLE permissions
      • Recommended to create a dedicated user for ingestion
  • CLICKHOUSE_PASSWORD

    • Default: None (required if authentication is enabled)
    • Description: Password for the specified ClickHouse user.
    • Guidance:
      • Required if the user account has a password set
      • Store securely via secrets in production deployments
  • HYPERDX_LOG_LEVEL

    • Default: info
    • Description: Log verbosity level for the collector.
    • Guidance:
      • Accepts values like debug, info, warn, error
      • Use debug during troubleshooting
  • OPAMP_SERVER_URL

    • Default: None (required) for the standalone image. In the All-in-One and Docker Compose distributions, this points to the deployed HyperDX instance.
    • Description: URL of the OpAMP server used to manage the collector (e.g., HyperDX instance). This is port 4320 by default.
    • Guidance:
      • Must point to your HyperDX instance
      • Enables dynamic configuration and secure ingestion
  • HYPERDX_OTEL_EXPORTER_CLICKHOUSE_DATABASE

    • Default: default
    • Description: ClickHouse database the collector writes telemetry data to.
    • Guidance:
      • Set if using a custom database name
      • Ensure the specified user has access to this database
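
For the standalone collector image, a hedged sketch of how these variables might be passed (the image name follows the ClickStack OpenTelemetry collector guide; endpoints, credentials, and ports are placeholders):

```shell
# Illustrative only: run the standalone collector against an external
# ClickHouse instance, managed by a HyperDX OpAMP server.
docker run \
  -e CLICKHOUSE_ENDPOINT=https://clickhouse.example.com:8443 \
  -e CLICKHOUSE_USER=otel_ingest \
  -e CLICKHOUSE_PASSWORD=<password> \
  -e OPAMP_SERVER_URL=http://hyperdx.example.com:4320 \
  -e HYPERDX_OTEL_EXPORTER_CLICKHOUSE_DATABASE=default \
  -p 4317:4317 -p 4318:4318 \
  docker.hyperdx.io/hyperdx/hyperdx-otel-collector
```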

ClickHouse

ClickStack ships with a default ClickHouse configuration designed for multi-terabyte scale, but users are free to modify and optimize it to suit their workload.

To tune ClickHouse effectively, users should understand key storage concepts such as parts, partitions, shards, and replicas, as well as how merges occur at insert time. We recommend reviewing the fundamentals of primary indices, sparse secondary indices, and data skipping indices, along with techniques for managing the data lifecycle, e.g. using TTL.

ClickStack supports schema customization - users may modify column types, extract new fields (e.g. from logs), apply codecs and dictionaries, and accelerate queries using projections.
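
As an illustration, a hedged sketch of one such customization, assuming the default otel_logs table and its LogAttributes map column (the new column name, attribute key, and codec choice are purely illustrative):

```sql
-- Illustrative only: materialize a frequently queried attribute into its own
-- column and apply a compression codec to it.
ALTER TABLE otel_logs
    ADD COLUMN RequestId String
    MATERIALIZED LogAttributes['request_id']
    CODEC(ZSTD(1));
```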

Additionally, materialized views can be used to transform or filter data during ingestion, provided that data is written to the source table of the view and the application reads from the target table.
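
For example, a minimal sketch of an ingest-time filter, assuming the default otel_logs table as the view's source (the target table, columns, and filter condition are hypothetical):

```sql
-- Illustrative only: keep a filtered copy of error logs in a separate table.
-- Data continues to be written to otel_logs (the view's source table), while
-- HyperDX would read from otel_logs_errors (the view's target table).
CREATE TABLE otel_logs_errors
(
    Timestamp DateTime64(9),
    ServiceName LowCardinality(String),
    Body String
)
ENGINE = MergeTree
ORDER BY (ServiceName, Timestamp);

CREATE MATERIALIZED VIEW otel_logs_errors_mv TO otel_logs_errors AS
SELECT Timestamp, ServiceName, Body
FROM otel_logs
WHERE SeverityText = 'ERROR';
```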

For more details, refer to ClickHouse documentation on schema design, indexing strategies, and data management best practices - most of which apply directly to ClickStack deployments.