Pricing
For pricing information, see the ClickHouse Cloud Pricing page. ClickHouse Cloud bills based on the usage of compute, storage, data transfer (egress over the internet and cross-region), and ClickPipes. To understand what can affect your bill and how you can manage your spend, keep reading.
Amazon Web Services (AWS) example
- Prices reflect AWS us-east-1 pricing.
- Explore applicable data transfer and ClickPipes charges here.
Basic: from $66.52 per month
Best for: Departmental use cases with smaller data volumes that do not have hard reliability guarantees.
Basic tier service
- 1 replica x 8 GiB RAM, 2 vCPU
- 500 GB of compressed data
- 500 GB of backup data
- 10 GB of public internet egress data transfer
- 5 GB of cross-region data transfer
Pricing breakdown for this example:
|  | Active 6 hours a day | Active 12 hours a day | Active 24 hours a day |
|---|---|---|---|
| Compute | $39.91 | $79.83 | $159.66 |
| Storage | $25.30 | $25.30 | $25.30 |
| Public internet egress data transfer | $1.15 | $1.15 | $1.15 |
| Cross-region data transfer | $0.16 | $0.16 | $0.16 |
| Total | $66.52 | $106.44 | $186.27 |
Scale (always-on, auto-scaling): from $499.38 per month
Best for: workloads requiring enhanced SLAs (2+ replica services), scalability, and advanced security.
Scale tier service
- Active workload ~100% time
- Auto-scaling maximum configurable to prevent runaway bills
- 100 GB of public internet egress data transfer
- 10 GB of cross-region data transfer
Pricing breakdown for this example:
|  | Example 1 | Example 2 | Example 3 |
|---|---|---|---|
| Compute | 2 replicas x 8 GiB RAM, 2 vCPU: $436.95 | 2 replicas x 16 GiB RAM, 4 vCPU: $873.89 | 3 replicas x 16 GiB RAM, 4 vCPU: $1,310.84 |
| Storage | 1 TB of data + 1 backup: $50.60 | 2 TB of data + 1 backup: $101.20 | 3 TB of data + 1 backup: $151.80 |
| Public internet egress data transfer | $11.52 | $11.52 | $11.52 |
| Cross-region data transfer | $0.31 | $0.31 | $0.31 |
| Total | $499.38 | $986.92 | $1,474.47 |
Enterprise: Starting prices vary
Best for: Large-scale, mission-critical deployments that have stringent security and compliance needs.
Enterprise tier service
- Active workload ~100% time
- 1 TB of public internet egress data transfer
- 500 GB of cross-region data transfer
|  | Example 1 | Example 2 | Example 3 |
|---|---|---|---|
| Compute | 2 replicas x 32 GiB RAM, 8 vCPU: $2,285.60 | 2 replicas x 64 GiB RAM, 16 vCPU: $4,571.19 | 2 replicas x 120 GiB RAM, 30 vCPU: $8,570.99 |
| Storage | 5 TB + 1 backup: $253.00 | 10 TB + 1 backup: $506.00 | 20 TB + 1 backup: $1,012.00 |
| Public internet egress data transfer | $115.20 | $115.20 | $115.20 |
| Cross-region data transfer | $15.60 | $15.60 | $15.60 |
| Total | $2,669.40 | $5,207.99 | $9,713.79 |
Frequently Asked Questions
How is compute metered?
ClickHouse Cloud meters compute on a per-minute basis, in 8 GiB RAM increments. Compute costs vary by tier, region, and cloud service provider.
How is storage on disk calculated?
ClickHouse Cloud uses cloud object storage and usage is metered on the compressed size of data stored in ClickHouse tables. Storage costs are the same across tiers and vary by region and cloud service provider.
Do backups count toward total storage?
Storage and backups are counted towards storage costs and billed separately. All services default to one backup, retained for a day. Users who need additional backups can configure them under the Settings tab of the Cloud Console.
How do I estimate compression?
Compression can vary from dataset to dataset. How much it varies is dependent on how compressible the data is in the first place (number of high vs. low cardinality fields), and how the user sets up the schema (using optional codecs or not, for instance). It can be on the order of 10x for common types of analytical data, but it can be significantly lower or higher as well. See the optimizing documentation for guidance and this Uber blog for a detailed logging use case example. The only practical way to know exactly is to ingest your dataset into ClickHouse and compare the size of the dataset with the size stored in ClickHouse.
You can use a query against the ClickHouse system tables to compare compressed and uncompressed sizes directly, as sketched below.
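A minimal sketch, assuming your data is stored in MergeTree-family tables (whose per-part size counters are exposed in system.parts); adjust the database filter to the database you ingested into:

```sql
-- Compare on-disk (compressed) size with uncompressed size per table
SELECT
    table,
    formatReadableSize(sum(data_compressed_bytes))   AS compressed_size,
    formatReadableSize(sum(data_uncompressed_bytes)) AS uncompressed_size,
    round(sum(data_uncompressed_bytes) / sum(data_compressed_bytes), 2) AS compression_ratio
FROM system.parts
WHERE active AND database = currentDatabase()
GROUP BY table
ORDER BY sum(data_compressed_bytes) DESC;
```

Billable storage is metered on the compressed size, so the compressed_size column is the figure that maps to your storage costs.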
What tools does ClickHouse offer to estimate the cost of running a service in the cloud if I have a self-managed deployment?
The ClickHouse query log captures key metrics that can be used to estimate the cost of running a workload in ClickHouse Cloud. For details on migrating from self-managed to ClickHouse Cloud please refer to the migration documentation, and contact ClickHouse Cloud support if you have further questions.
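As an illustration rather than an official sizing tool, a sketch like the following aggregates resource usage from the query log of a self-managed server; the system.query_log columns used here are standard, but the 7-day window and the daily grouping are arbitrary choices you should adapt to your own workload:

```sql
-- Rough daily resource profile of a self-managed workload,
-- useful as an input when estimating ClickHouse Cloud usage
SELECT
    toDate(event_time) AS day,
    count() AS queries,
    formatReadableSize(sum(read_bytes)) AS data_read,
    formatReadableSize(max(memory_usage)) AS peak_query_memory,
    round(sum(query_duration_ms) / 1000 / 3600, 2) AS total_query_hours
FROM system.query_log
WHERE type = 'QueryFinish'
  AND event_time > now() - INTERVAL 7 DAY
GROUP BY day
ORDER BY day;
```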
What billing options are available for ClickHouse Cloud?
ClickHouse Cloud supports the following billing options:
- Self-service monthly (in USD, via credit card).
- Direct-sales annual / multi-year (through pre-paid "ClickHouse Credits", in USD, with additional payment options).
- Through the AWS, GCP, and Azure marketplaces (either pay-as-you-go (PAYG) or commit to a contract with ClickHouse Cloud through the marketplace).
How long is the billing cycle?
Billing follows a monthly billing cycle and the start date is tracked as the date when the ClickHouse Cloud organization was created.
What controls does ClickHouse Cloud offer to manage costs for Scale and Enterprise services?
- Trial and Annual Commit customers are notified automatically by email when their consumption hits certain thresholds: 50%, 75%, and 90%. This allows users to proactively manage their usage.
- ClickHouse Cloud allows users to set a maximum auto-scaling limit on their compute via the Advanced scaling control; compute is a significant cost factor for analytical workloads.
- The Advanced scaling control lets you set memory limits with an option to control the behavior of pausing/idling during inactivity.
What controls does ClickHouse Cloud offer to manage costs for Basic services?
- The Advanced scaling control lets you control the behavior of pausing/idling during inactivity. Adjusting memory allocation is not supported for Basic services.
- Note that the default setting pauses the service after a period of inactivity.
If I have multiple services, do I get an invoice per service or a consolidated invoice?
A consolidated invoice is generated for all services in a given organization for a billing period.
If I add my credit card and upgrade before my trial period and credits expire, will I be charged?
If you convert from trial to paid before the 30-day trial period ends and still have trial credits remaining, we continue to draw down the trial credits during the initial 30-day trial period and then charge your credit card.
How can I keep track of my spending?
The ClickHouse Cloud console provides a Usage display that details usage per service. This breakdown, organized by usage dimensions, helps you understand the cost associated with each metered unit.
How do I access my invoice for my marketplace subscription to the ClickHouse Cloud service?
All marketplace subscriptions are billed and invoiced by the marketplace. You can view your invoice through the respective cloud provider marketplace directly.
Why do the dates on the Usage statements not match my Marketplace Invoice?
AWS Marketplace billing follows the calendar month cycle. For example, for usage between 01-Dec-2024 and 01-Jan-2025, an invoice is generated between 3-Jan-2025 and 5-Jan-2025.
ClickHouse Cloud usage statements follow a different billing cycle, where usage is metered and reported over 30 days starting from the day of sign-up.
The usage and invoice dates will therefore differ whenever the sign-up date does not fall on the first of the month. Since usage statements track usage by day for a given service, users can rely on statements to see the breakdown of costs.
Are there any restrictions around the usage of prepaid credits?
ClickHouse Cloud prepaid credits (whether direct through ClickHouse, or via a cloud provider's marketplace) can only be leveraged for the terms of the contract. This means they can be applied on the acceptance date, or a future date, and not for any prior periods. Any overages not covered by prepaid credits must be covered by a credit card payment or marketplace monthly billing.
Is there a difference in ClickHouse Cloud pricing, whether paying through the cloud provider marketplace or directly to ClickHouse?
There is no difference in pricing between marketplace billing and signing up directly with ClickHouse. In either case, your usage of ClickHouse Cloud is tracked in terms of ClickHouse Cloud Credits (CHCs), which are metered in the same way and billed accordingly.
How is compute-compute separation billed?
When creating a service in addition to an existing service, you can choose whether the new service should share the same data as the existing one. If it does, the two services form a warehouse: the data is stored once, with multiple compute services accessing it.
Because the data is stored only once, you pay for only one copy of the data even though multiple services access it. You pay for compute as usual; there are no additional fees for compute-compute separation / warehouses. By leveraging shared storage in this deployment, users benefit from cost savings on both storage and backups.
Compute-compute separation can save you a significant amount of ClickHouse Credits in some cases. A good example is the following setup:
- You have ETL jobs that run 24/7 and ingest data into the service. These jobs do not require a lot of memory, so they can run on a small instance with, for example, 32 GiB of RAM.
- A data scientist on the same team has ad hoc reporting requirements and needs to run a query that requires a significant amount of memory (236 GiB), but does not need high availability and can wait and rerun queries if the first run fails.
In this example, as the database administrator, you can do the following:
- Create a small service with two replicas of 16 GiB each; this will satisfy the ETL jobs and provide high availability.
- For the data scientist, create a second service in the same warehouse with a single 236 GiB replica. You can enable idling for this service so you do not pay for it when the data scientist is not using it.
Cost estimation (per month) for this example on the Scale Tier:
- Parent service active 24 hours a day: 2 replicas x 16 GiB RAM, 4 vCPU per replica
- Child service: 1 replica x 236 GiB RAM, 59 vCPU
- 3 TB of compressed data + 1 backup
- 100 GB of public internet egress data transfer
- 50 GB of cross-region data transfer
|  | Child service active 1 hour/day | Child service active 2 hours/day | Child service active 4 hours/day |
|---|---|---|---|
| Compute | $1,142.43 | $1,410.97 | $1,948.05 |
| Storage | $151.80 | $151.80 | $151.80 |
| Public internet egress data transfer | $11.52 | $11.52 | $11.52 |
| Cross-region data transfer | $1.56 | $1.56 | $1.56 |
| Total | $1,307.31 | $1,575.85 | $2,112.93 |
Without warehouses, you would have to pay for the amount of memory the data scientist needs for their queries. However, combining the two services in a warehouse and idling one of them helps you save money.
ClickPipes pricing
ClickPipes for Postgres CDC
This section outlines the pricing model for our Postgres Change Data Capture (CDC) connector in ClickPipes. In designing this model, our goal was to keep pricing highly competitive while staying true to our core vision:
Making it seamless and affordable for customers to move data from Postgres to ClickHouse for real-time analytics.
The connector is over 5x more cost-effective than external ETL tools and similar features in other database platforms.
Postgres CDC usage will start being metered on monthly bills beginning September 1st, 2025, for all customers (both existing and new). Until then, usage is free. Customers have a 3-month window starting May 29 (the GA announcement date) to review and optimize their costs if needed, although we expect most will not need to make any changes.
For example, the external ETL tool Airbyte, which offers similar CDC capabilities, charges $10/GB (excluding credits)—more than 20 times the cost of Postgres CDC in ClickPipes for moving 1TB of data.
Pricing dimensions
There are two main dimensions to pricing:
- Ingested Data: The raw, uncompressed bytes coming from Postgres and ingested into ClickHouse.
- Compute: The compute units provisioned per service to manage Postgres CDC ClickPipes. These are separate from the compute units used by the ClickHouse Cloud service, and this additional compute is dedicated specifically to Postgres CDC ClickPipes. Compute is billed at the service level, not per individual pipe. Each compute unit includes 2 vCPUs and 8 GB of RAM.
Ingested data
The Postgres CDC connector operates in two main phases:
- Initial load / resync: This captures a full snapshot of Postgres tables and occurs when a pipe is first created or re-synced.
- Continuous Replication (CDC): Ongoing replication of changes—such as inserts, updates, deletes, and schema changes—from Postgres to ClickHouse.
In most use cases, continuous replication accounts for over 90% of a ClickPipe life cycle. Because initial loads involve transferring a large volume of data all at once, we offer a lower rate for that phase.
| Phase | Cost |
|---|---|
| Initial load / resync | $0.10 per GB |
| Continuous Replication (CDC) | $0.20 per GB |
Compute
This dimension covers the compute units provisioned per service just for Postgres ClickPipes. Compute is shared across all Postgres pipes within a service. It is provisioned when the first Postgres pipe is created and deallocated when no Postgres CDC pipes remain. The amount of compute provisioned depends on your organization’s tier:
| Tier | Cost |
|---|---|
| Basic Tier | 0.5 compute unit per service ($0.10 per hour) |
| Scale or Enterprise Tier | 1 compute unit per service ($0.20 per hour) |
Example
Let’s say your service is in the Scale tier and has the following setup:
- 2 Postgres ClickPipes running continuous replication
- Each pipe ingests 500 GB of data changes (CDC) per month
- When the first pipe is kicked off, the service provisions 1 compute unit under the Scale Tier for Postgres CDC
Monthly cost breakdown
- Ingested data (CDC): 2 pipes x 500 GB = 1,000 GB at $0.20 per GB = $200
- Compute: 1 compute unit at $0.20 per hour for roughly 730 hours in a month = $146 (compute is shared across both pipes)
- Total monthly cost: approximately $346
ClickPipes for streaming and object storage
This section outlines the pricing model of ClickPipes for streaming and object storage.
What does the ClickPipes pricing structure look like?
It consists of two dimensions:
- Compute: Price per unit per hour. Compute represents the cost of running the ClickPipes replica pods, whether they actively ingest data or not. It applies to all ClickPipes types.
- Ingested data: Price per GB. The ingested data rate applies to all streaming ClickPipes (Kafka, Confluent, Amazon MSK, Amazon Kinesis, Redpanda, WarpStream, Azure Event Hubs) for the data transferred via the replica pods. The ingested data size (GB) is charged based on bytes received from the source (uncompressed or compressed).
What are ClickPipes replicas?
ClickPipes ingests data from remote data sources via a dedicated infrastructure that runs and scales independently of the ClickHouse Cloud service. For this reason, it uses dedicated compute replicas.
What is the default number of replicas and their size?
Each ClickPipe defaults to 1 replica that is provided with 2 GiB of RAM and 0.5 vCPU. This corresponds to 0.25 ClickHouse compute units (1 unit = 8 GiB RAM, 2 vCPUs).
What are the ClickPipes public prices?
- Compute: $0.20 per unit per hour ($0.05 per replica per hour)
- Ingested data: $0.04 per GB
How does it look in an illustrative example?
The following examples assume a single replica unless explicitly mentioned.
|  | 100 GB over 24h | 1 TB over 24h | 10 TB over 24h |
|---|---|---|---|
| Streaming ClickPipe | (0.25 x 0.20 x 24) + (0.04 x 100) = $5.20 | (0.25 x 0.20 x 24) + (0.04 x 1000) = $41.20 | With 4 replicas: (0.25 x 0.20 x 24 x 4) + (0.04 x 10000) = $404.80 |
| Object Storage ClickPipe | (0.25 x 0.20 x 24) = $1.20 | (0.25 x 0.20 x 24) = $1.20 | (0.25 x 0.20 x 24) = $1.20 |
For object storage ClickPipes, only the ClickPipes compute for orchestration is billed; the actual data transfer is performed by the underlying ClickHouse service.
ClickPipes pricing FAQ
Below, you will find frequently asked questions about Postgres CDC ClickPipes and about streaming and object storage ClickPipes.
FAQ for Postgres CDC ClickPipes
Is the ingested data measured in pricing based on compressed or uncompressed size?
The ingested data is measured as uncompressed data coming from Postgres—both during the initial load and CDC (via the replication slot). Postgres does not compress data during transit by default, and ClickPipe processes the raw, uncompressed bytes.
When will Postgres CDC pricing start appearing on my bills?
Postgres CDC ClickPipes pricing begins appearing on monthly bills starting September 1st, 2025, for all customers—both existing and new. Until then, usage is free. Customers have a 3-month window starting from May 29 (the GA announcement date) to review and optimize their usage if needed, although we expect most won’t need to make any changes.
Will I be charged if I pause my pipes?
No data ingestion charges apply while a pipe is paused, since no data is moved. However, compute charges still apply—either 0.5 or 1 compute unit—based on your organization’s tier. This is a fixed service-level cost and applies across all pipes within that service.
How can I estimate my pricing?
The Overview page in ClickPipes provides metrics for both initial load/resync and CDC data volumes. You can estimate your Postgres CDC costs using these metrics in conjunction with the ClickPipes pricing.
Can I scale the compute allocated for Postgres CDC in my service?
By default, compute scaling is not user-configurable. The provisioned resources are sized to handle most customer workloads well. If your use case requires more or less compute, please open a support ticket so we can evaluate your request.
What is the pricing granularity?
- Compute: Billed per hour. Partial hours are rounded up to the next hour.
- Ingested Data: Measured and billed per gigabyte (GB) of uncompressed data.
Can I use my ClickHouse Cloud credits for Postgres CDC via ClickPipes?
Yes. ClickPipes pricing is part of the unified ClickHouse Cloud pricing. Any platform credits you have will automatically apply to ClickPipes usage as well.
How much additional cost should I expect from Postgres CDC ClickPipes in my existing monthly ClickHouse Cloud spend?
The cost varies based on your use case, data volume, and organization tier. That said, most existing customers see an increase of 0–15% relative to their existing monthly ClickHouse Cloud spend post trial. Actual costs may vary depending on your workload: some workloads involve high data volumes with less processing, while others require more processing with less data.
FAQ for streaming and object storage ClickPipes
Why are we introducing a pricing model for ClickPipes now?
We initially launched ClickPipes for free to gather feedback, refine features, and ensure it meets user needs. As the GA platform has grown, it has stood the test of time, moving trillions of rows. Introducing a pricing model allows us to continue improving the service, maintaining the infrastructure, and providing dedicated support and new connectors.
What are ClickPipes replicas?
ClickPipes ingests data from remote data sources via a dedicated infrastructure that runs and scales independently of the ClickHouse Cloud service. For this reason, it uses dedicated compute replicas. The diagrams below show a simplified architecture.
For streaming ClickPipes, ClickPipes replicas access the remote data sources (e.g., a Kafka broker), pull the data, process and ingest it into the destination ClickHouse service.

In the case of object storage ClickPipes, the ClickPipes replica orchestrates the data loading task (identifying files to copy, maintaining the state, and moving partitions), while the data is pulled directly from the ClickHouse service.

What's the default number of replicas and their size?
Each ClickPipe defaults to 1 replica that's provided with 2 GiB of RAM and 0.5 vCPU. This corresponds to 0.25 ClickHouse compute units (1 unit = 8 GiB RAM, 2 vCPUs).
Can ClickPipes replicas be scaled?
ClickPipes for streaming can be scaled horizontally by adding more replicas each with a base unit of 0.25 ClickHouse compute units. Vertical scaling is also available on demand for specific use cases (adding more CPU and RAM per replica).
How many ClickPipes replicas do I need?
It depends on the workload throughput and latency requirements. We recommend starting with the default value of 1 replica, measuring your latency, and adding replicas if needed. Keep in mind that for Kafka ClickPipes, you also have to scale the Kafka broker partitions accordingly. The scaling controls are available under "settings" for each streaming ClickPipe.

What does the ClickPipes pricing structure look like?
It consists of two dimensions:
- Compute: Price per unit per hour. Compute represents the cost of running the ClickPipes replica pods, whether they actively ingest data or not. It applies to all ClickPipes types.
- Ingested data: Price per GB. The ingested data rate applies to all streaming ClickPipes (Kafka, Confluent, Amazon MSK, Amazon Kinesis, Redpanda, WarpStream, Azure Event Hubs) for the data transferred via the replica pods. The ingested data size (GB) is charged based on bytes received from the source (uncompressed or compressed).
What are the ClickPipes public prices?
- Compute: $0.20 per unit per hour ($0.05 per replica per hour)
- Ingested data: $0.04 per GB
How does it look in an illustrative example?
For example, ingesting 1 TB of data over 24 hours with the Kafka connector on a single replica (0.25 compute units) costs: (0.25 x 0.20 x 24) + (0.04 x 1000) = $41.20.
For object storage connectors (S3 and GCS), only the ClickPipes compute cost is incurred, since the ClickPipes pod does not process the data but only orchestrates the transfer, which is performed by the underlying ClickHouse service: (0.25 x 0.20 x 24) = $1.20.
When does the new pricing model take effect?
The new pricing model takes effect for all organizations created after January 27th, 2025.
What happens to current users?
Existing users will have a 60-day grace period where the ClickPipes service continues to be offered for free. Billing will automatically start for ClickPipes for existing users on March 24th, 2025.
How does ClickPipes pricing compare to the market?
The philosophy behind ClickPipes pricing is to cover the operating costs of the platform while offering an easy and reliable way to move data to ClickHouse Cloud. From that angle, our market analysis revealed that we are positioned competitively.