# Support matrix
This page provides comprehensive support matrices for ClickHouse's lakehouse integrations. It covers the features available for each lakehouse table format, the catalogs ClickHouse can connect to, and the capabilities supported by each catalog.
## Lakehouse format support
ClickHouse integrates with four lakehouse table formats: Apache Iceberg, Delta Lake, Apache Hudi, and Apache Paimon. Select a format below to view its support matrix.
Legend: ✅ Supported | ⚠️ Partial / Experimental | ❌ Not supported
- Apache Iceberg
- Delta Lake
- Apache Hudi
- Apache Paimon
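Each format is exposed through dedicated table functions. As an illustrative sketch (the bucket URL and credentials below are placeholders), reading an Apache Iceberg table from S3 looks like this:

```sql
-- Read an Iceberg table directly from S3.
-- The bucket URL and credentials are placeholders.
SELECT count()
FROM icebergS3('https://my-bucket.s3.amazonaws.com/warehouse/db/table', 'access_key', 'secret_key');
```

The `iceberg()` alias and the backend-specific variants (`icebergAzure()`, `icebergLocal()`, and so on) follow the same pattern.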
### Apache Iceberg

| Feature | Status | Notes |
|---|---|---|
| Storage backends | ||
| AWS S3 | ✅ | Via icebergS3() or iceberg() alias |
| GCS | ✅ | Via icebergS3() or iceberg() alias |
| Azure Blob Storage | ✅ | Via icebergAzure() |
| HDFS | ⚠️ | Via icebergHDFS(). Deprecated. |
| Local filesystem | ✅ | Via icebergLocal() |
| Access methods | ||
| Table function | ✅ | icebergS3() with variants per backend |
| Table engine | ✅ | IcebergS3 with variants per backend |
| Cluster-distributed reads | ✅ | icebergS3Cluster, icebergAzureCluster, icebergHDFSCluster |
| Named collections | ✅ | Defining a named collection |
| Read features | ||
| Read support | ✅ | Full SELECT support with all ClickHouse SQL functions |
| Partition pruning | ✅ | See Partition pruning. |
| Hidden partitioning | ✅ | Iceberg transform-based partitioning supported |
| Partition evolution | ✅ | Reading tables with changing partition specs over time supported |
| Schema evolution | ✅ | Column addition, removal, and reordering. See Schema evolution. |
| Type promotion / widening | ✅ | int → long, float → double, decimal(P,S) → decimal(P',S) where P' > P. See Schema evolution. |
| Time travel / snapshots | ✅ | Via iceberg_timestamp_ms or iceberg_snapshot_id settings. See Time travel. |
| Position deletes | ✅ | See Processing deleted rows. |
| Equality deletes | ✅ | Table engine only, from v25.8+. See Processing deleted rows. |
| Merge-on-read | ⚠️ | Experimental. Supported for delete operations. |
| Format versions | ⚠️ | v1 and v2 supported. v3 not supported. |
| Column statistics | ✅ | |
| Bloom filters / puffin files | ❌ | Bloom filter indexes in puffin files not supported |
| Virtual columns | ✅ | _path, _file, _size, _time, _etag. See Virtual columns. |
| Write features | ||
| Table creation | ✅ | Experimental. Requires allow_insert_into_iceberg = 1. From v25.7+. See Creating a table. |
| INSERT | ✅ | Beta from v26.2. Requires allow_insert_into_iceberg = 1. See Inserting data. |
| DELETE | ✅ | Experimental. Requires allow_insert_into_iceberg = 1. Via ALTER TABLE ... DELETE WHERE. See Deleting data. |
| ALTER TABLE (schema changes) | ✅ | Experimental. Requires allow_insert_into_iceberg = 1. Add, drop, modify, rename columns. See Schema evolution. |
| Compaction | ⚠️ | Experimental. Requires allow_experimental_iceberg_compaction = 1. Merges position delete files into data files. See Compaction. Other Iceberg compaction operations not supported. |
| UPDATE / MERGE | ❌ | Not supported. See Compaction. |
| Copy-on-write | ❌ | Not supported |
| Expire snapshots | ❌ | Not supported |
| Remove orphan files | ❌ | Not supported |
| Writing partitions | ✅ | Supported. |
| Altering partitions | ❌ | Changing the partitioning scheme from ClickHouse is not supported, but ClickHouse can write to Iceberg tables whose partition spec has evolved. |
| Metadata | ||
| Branching and tagging | ❌ | Iceberg branch/tag references not supported |
| Metadata file resolution | ✅ | Metadata files can be resolved through a catalog, directory listing, the version-hint file, or an explicit path. Configurable via iceberg_metadata_file_path and iceberg_metadata_table_uuid. See Metadata file resolution. |
| Data caching | ✅ | Same mechanism as S3/Azure/HDFS storage engines. See Data cache. |
| Metadata caching | ✅ | Manifest and metadata files cached in memory. Enabled by default via use_iceberg_metadata_files_cache. See Metadata cache. |
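As a hedged sketch of the write and time-travel features above (the URL, credentials, values, and timestamp are illustrative, and the write path is experimental, so syntax may change):

```sql
-- Writes are experimental and gated behind a setting.
SET allow_insert_into_iceberg = 1;

-- Attach an existing Iceberg table via the table engine
-- (URL and credentials are placeholders; the schema is
-- inferred from the table's metadata).
CREATE TABLE iceberg_events
ENGINE = IcebergS3('https://my-bucket.s3.amazonaws.com/warehouse/db/events', 'access_key', 'secret_key');

-- Illustrative insert; the values must match the table's schema.
INSERT INTO iceberg_events VALUES (1, 'click');

-- Time travel: read the table as of a past moment.
SELECT count()
FROM iceberg_events
SETTINGS iceberg_timestamp_ms = 1735689600000;
```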
### Delta Lake

From version 25.6, ClickHouse reads Delta Lake tables using the Delta Lake Rust kernel, which provides broader feature support. However, because of known issues when accessing data in Azure Blob Storage, the kernel is disabled for reads from Azure Blob Storage. The matrix below indicates which features require the kernel.
| Feature | Status | Notes |
|---|---|---|
| Storage backends | ||
| AWS S3 | ✅ | Via deltaLake() or deltaLakeS3() |
| GCS | ✅ | Via deltaLake() or deltaLakeS3() |
| Azure Blob Storage | ✅ | Via deltaLakeAzure() |
| HDFS | ❌ | Not supported |
| Local filesystem | ✅ | Via deltaLakeLocal() |
| Access methods | ||
| Table function | ✅ | deltaLake() with variants per backend |
| Table engine | ✅ | DeltaLake |
| Cluster-distributed reads | ✅ | deltaLakeCluster, deltaLakeAzureCluster |
| Named collections | ✅ | Named collection |
| Read features | ||
| Read support | ✅ | Full SELECT support with all ClickHouse SQL functions |
| Partition pruning | ✅ | Requires Delta Kernel. |
| Schema evolution | ✅ | Requires Delta Kernel. |
| Time travel | ✅ | Requires Delta Kernel. |
| Deletion vectors | ✅ | |
| Column mapping | ✅ | |
| Change data feed | ✅ | Requires Delta Kernel. |
| Virtual columns | ✅ | _path, _file, _size, _time, _etag. See Virtual columns. |
| Write features | ||
| INSERT | ✅ | Experimental. Requires allow_experimental_delta_lake_writes = 1. See DeltaLake engine. Requires Delta Kernel. |
| DELETE / UPDATE / MERGE | ❌ | Not supported |
| CREATE empty table | ❌ | Creating a new, empty Delta Lake table is not supported; CREATE TABLE assumes the Delta Lake table already exists on object storage. |
| Caching | ||
| Data caching | ✅ | Same mechanism as S3/Azure/HDFS storage engines. See Data cache. |
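For reference, a minimal Delta Lake read might look like the following (the bucket URL and credentials are placeholders):

```sql
-- Read a Delta Lake table from S3. On Azure Blob Storage the Rust
-- kernel is disabled, so the kernel-dependent features in the matrix
-- above (partition pruning, time travel, ...) do not apply there.
SELECT *
FROM deltaLake('https://my-bucket.s3.amazonaws.com/tables/delta_events', 'access_key', 'secret_key')
LIMIT 10;
```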
### Apache Hudi

| Feature | Status | Notes |
|---|---|---|
| Storage backends | ||
| AWS S3 | ✅ | Via hudi() |
| GCS | ✅ | Via hudi() |
| Azure Blob Storage | ❌ | Not supported |
| HDFS | ❌ | Not supported |
| Local filesystem | ❌ | Not supported |
| Access methods | ||
| Table function | ✅ | hudi() |
| Table engine | ✅ | Hudi |
| Cluster-distributed reads | ✅ | hudiCluster (S3 only) |
| Named collections | ✅ | Hudi arguments |
| Read features | ||
| Read support | ✅ | Full SELECT support with all ClickHouse SQL functions |
| Schema evolution | ❌ | Not supported |
| Time travel | ❌ | Not supported |
| Virtual columns | ✅ | _path, _file, _size, _time, _etag. See Virtual columns. |
| Write features | ||
| INSERT / DELETE / UPDATE | ❌ | Read-only integration |
| Caching | ||
| Data caching | ❌ | Not supported |
### Apache Paimon

| Feature | Status | Notes |
|---|---|---|
| Storage backends | ||
| S3 | ✅ | Experimental. Via paimon() or paimonS3() |
| GCS | ✅ | Experimental. Via paimon() or paimonS3() |
| Azure Blob Storage | ✅ | Experimental. Via paimonAzure() |
| HDFS | ⚠️ | Experimental. Via paimonHDFS(). Deprecated. |
| Local filesystem | ✅ | Experimental. Via paimonLocal() |
| Access methods | ||
| Table function | ✅ | Experimental. paimon() with variants per backend |
| Table engine | ❌ | No dedicated table engine |
| Cluster-distributed reads | ✅ | Experimental. paimonS3Cluster, paimonAzureCluster, paimonHDFSCluster |
| Named collections | ✅ | Experimental. Defining a named collection |
| Read features | ||
| Read support | ✅ | Experimental. Full SELECT support with all ClickHouse SQL functions |
| Schema evolution | ❌ | Not supported |
| Time travel | ❌ | Not supported |
| Virtual columns | ✅ | Experimental. _path, _file, _size, _time, _etag. See Virtual columns. |
| Write features | ||
| INSERT / DELETE / UPDATE | ❌ | Read-only integration |
| Caching | ||
| Data caching | ❌ | Not supported |
## Catalog support
ClickHouse can connect to external data catalogs using the DataLakeCatalog database engine, which exposes the catalog as a ClickHouse database. Tables registered in the catalog appear automatically and can be queried with standard SQL.
The following catalogs are currently supported. Refer to each catalog's reference guide for full setup instructions.
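As a sketch, connecting to an Iceberg REST catalog could look like the following. The endpoint, credentials, setting values, and even the gating setting name are placeholders here; the exact arguments vary per catalog and ClickHouse version, so follow the relevant reference guide.

```sql
-- Enable the experimental catalog database engine (setting name
-- may differ by version; treat this line as illustrative).
SET allow_experimental_database_iceberg = 1;

-- Endpoint, credentials, and settings are placeholders.
CREATE DATABASE lakehouse
ENGINE = DataLakeCatalog('http://rest-catalog:8181/v1', 'access_key', 'secret_key')
SETTINGS catalog_type = 'rest', warehouse = 'demo', storage_endpoint = 'http://minio:9000/lakehouse';

-- Tables registered in the catalog now appear as ClickHouse tables.
SHOW TABLES FROM lakehouse;
```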
| Catalog | Formats | Read | Create table | INSERT | Reference guide |
|---|---|---|---|---|---|
| AWS Glue | Iceberg | ✅ Beta | ❌ | ❌ | Glue catalog guide |
| Databricks Unity | Delta, Iceberg | ✅ Experimental | ❌ | ❌ | Unity catalog guide |
| Iceberg REST | Iceberg | ✅ Beta | ❌ | ❌ | REST catalog guide |
| Lakekeeper | Iceberg | ✅ Experimental | ❌ | ❌ | Lakekeeper catalog guide |
| Project Nessie | Iceberg | ✅ Experimental | ❌ | ❌ | Nessie catalog guide |
| Microsoft OneLake | Iceberg | ✅ Beta | ❌ | ❌ | OneLake catalog guide |
All catalog integrations currently require an experimental or beta setting to be enabled and provide read-only access: tables can be queried but not created or written to through the catalog connection. To load data from a catalog into ClickHouse for faster analytics, use INSERT INTO SELECT as described in the accelerating analytics guide. To write data back to open table formats, create standalone Iceberg tables as described in the writing data guide.
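For example, loading a catalog-backed table into a native MergeTree table for faster analytics could look like this. The database and table names are illustrative; catalog table names typically include their namespace, so they need backquotes.

```sql
-- One-off load from a catalog-backed table into native storage.
CREATE TABLE events_local
ENGINE = MergeTree
ORDER BY tuple()
AS SELECT * FROM lakehouse.`db.events`;

-- Subsequent loads can reuse INSERT INTO ... SELECT.
INSERT INTO events_local
SELECT * FROM lakehouse.`db.events`;
```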