Accelerating analytics with MergeTree
In the previous section, you connected ClickHouse to a data catalog and queried open table formats directly. While querying data in place is convenient, lakehouse formats are not optimized for the low-latency, high-concurrency workloads that power dashboards and operational reporting. For these use cases, loading data into ClickHouse's MergeTree engine delivers dramatically better performance.
MergeTree offers several advantages over reading open table formats directly:
- Sparse primary index - Orders data on disk by a chosen key, allowing ClickHouse to skip over large ranges of irrelevant rows during queries.
- Enhanced data types - Native support for types such as JSON, LowCardinality, and Enum, enabling more compact storage and faster processing.
- Skip indices and full-text indices - Secondary index structures that let ClickHouse skip granules that don't match a query's filter predicates, particularly effective for text search workloads.
- Fast inserts with automatic compaction - ClickHouse is designed for high-throughput inserts and automatically merges data parts in the background, analogous to compaction in open table formats.
- Optimized for concurrent reads - MergeTree's columnar storage layout, combined with multiple caching layers, supports real-time analytical workloads with high concurrency - something open table formats are not designed for.
This guide shows how to load data from a catalog into a MergeTree table using INSERT INTO SELECT for faster analytics.
Connect to the catalog
We'll use the same Unity Catalog connection from the previous guide, connecting via the Iceberg REST endpoint:
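As a sketch, the connection could be established with a `DataLakeCatalog` database pointed at the REST endpoint. The URL, warehouse name, and credentials below are placeholders for your own workspace, and the exact setting names may vary between ClickHouse versions:

```sql
-- Attach Unity Catalog as a ClickHouse database via its Iceberg REST endpoint.
-- All values below are placeholders - substitute your own workspace details.
CREATE DATABASE unity
ENGINE = DataLakeCatalog('https://<workspace>/api/2.1/unity-catalog/iceberg')
SETTINGS
    catalog_type = 'rest',
    warehouse = '<warehouse>',
    catalog_credential = '<client-id>:<client-secret>';
```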
List tables
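Assuming the catalog was attached as a database named `unity` (a hypothetical name), the tables it exposes can be listed with:

```sql
SHOW TABLES FROM unity;
```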
Explore the schema
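The table's columns and types can be inspected with `DESCRIBE`. The table name `ci.test_logs` here is a placeholder - use whichever name appeared in the `SHOW TABLES` output:

```sql
DESCRIBE TABLE unity.`ci.test_logs`;
```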
This table contains ~283 million log rows from ClickHouse CI test runs - a realistic dataset for exploring analytical performance.
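A quick count confirms the dataset's size (again assuming the placeholder table name):

```sql
SELECT count() FROM unity.`ci.test_logs`;
```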
Query over the lakehouse table
Let's run a query that filters logs by thread name and instance type, searches the message text for errors, and groups results by logger:
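The query might look like the following. The filter values come from the workload described in this guide; the table name and `LIMIT` are illustrative:

```sql
SELECT
    logger_name,
    count() AS errors
FROM unity.`ci.test_logs`
WHERE instance_type = 'm6i.4xlarge'   -- filter by instance type
  AND thread_name = 'TCPHandler'      -- filter by thread name
  AND hasToken(message, 'error')      -- token search in the message text
GROUP BY logger_name
ORDER BY errors DESC
LIMIT 10;
```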
The query takes nearly 9 seconds because ClickHouse must perform a full table scan across all Parquet files in object storage. Performance could be improved with partitioning, but columns like logger_name may be too high cardinality to partition effectively. Nor are there any secondary indices, such as text indices, to further prune the data. This is where MergeTree excels.
Load data into MergeTree
Create an optimized table
We create a MergeTree table, taking care to optimize the schema. Notice a few key differences from the Iceberg schema:
- No `Nullable` wrappers - removing `Nullable` improves storage efficiency and query performance.
- `LowCardinality(String)` on the `level`, `instance_type`, `thread_name` and `check_name` columns - dictionary-encodes columns with few distinct values for better compression and faster filtering.
- A full-text index on the `message` column - accelerates token-based text searches like `hasToken(message, 'error')`.
- An `ORDER BY` key of `(instance_type, thread_name, toStartOfMinute(event_time))` - aligns data on disk with common filter patterns so the sparse primary index can skip irrelevant granules.
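A DDL sketch incorporating these choices is shown below. The column list is abridged to the columns this guide names, and the full-text index syntax depends on your ClickHouse version:

```sql
CREATE TABLE ci_logs
(
    event_time    DateTime,
    level         LowCardinality(String),
    instance_type LowCardinality(String),
    thread_name   LowCardinality(String),
    check_name    LowCardinality(String),
    logger_name   String,
    message       String,
    -- Full-text index for token searches like hasToken(message, 'error').
    -- The index type name varies by version ('text' in recent releases;
    -- older ones use 'full_text' behind an experimental setting).
    INDEX text_idx message TYPE text
)
ENGINE = MergeTree
ORDER BY (instance_type, thread_name, toStartOfMinute(event_time));
```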
Insert data from the catalog
Use INSERT INTO SELECT to load the ~283 million rows from the lakehouse table into our ClickHouse table:
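With the placeholder names used earlier, the load is a single statement; ClickHouse streams from object storage and writes MergeTree parts as it goes:

```sql
INSERT INTO ci_logs
SELECT
    event_time, level, instance_type, thread_name,
    check_name, logger_name, message
FROM unity.`ci.test_logs`;
```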
Re-execute the query
If we now run the same query against the MergeTree table we see performance improves dramatically:
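The only change to the query is the table name - everything else is identical to the lakehouse version:

```sql
SELECT
    logger_name,
    count() AS errors
FROM ci_logs
WHERE instance_type = 'm6i.4xlarge'
  AND thread_name = 'TCPHandler'
  AND hasToken(message, 'error')
GROUP BY logger_name
ORDER BY errors DESC
LIMIT 10;
```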
The same query now completes in 0.22 seconds - a ~40x speedup. Two key optimizations drive this improvement:
- Sparse primary index - The `ORDER BY (instance_type, thread_name, ...)` key means ClickHouse can skip directly to granules matching `instance_type = 'm6i.4xlarge'` and `thread_name = 'TCPHandler'`, reducing the rows processed from 283 million to just 14 million.
- Full-text index - The `text_idx` index on the `message` column allows `hasToken(message, 'error')` to resolve via the index rather than scanning every message string, further reducing the data ClickHouse needs to read.
The result is a query that can comfortably power a real-time dashboard - at a scale and latency that querying Parquet files in object storage cannot match.