
Support matrix

This page provides comprehensive support matrices for ClickHouse's lakehouse integrations. It covers the features available for each lakehouse table format, the catalogs ClickHouse can connect to, and the capabilities supported by each catalog.

Lakehouse format support

ClickHouse integrates with four lakehouse table formats: Apache Iceberg, Delta Lake, Apache Hudi, and Apache Paimon. The support matrix below covers Apache Iceberg.

Legend: ✅ Supported | ⚠️ Partial / Experimental | ❌ Not supported

| Feature | Status | Notes |
|---|---|---|
| **Storage backends** | | |
| AWS S3 | ✅ | Via `icebergS3()` or the `iceberg()` alias |
| GCS | ✅ | Via `icebergS3()` or the `iceberg()` alias |
| Azure Blob Storage | ✅ | Via `icebergAzure()` |
| HDFS | ⚠️ | Via `icebergHDFS()`. Deprecated. |
| Local filesystem | ✅ | Via `icebergLocal()` |
| **Access methods** | | |
| Table function | ✅ | `icebergS3()` with variants per backend |
| Table engine | ✅ | `IcebergS3` with variants per backend |
| Cluster-distributed reads | ✅ | `icebergS3Cluster`, `icebergAzureCluster`, `icebergHDFSCluster` |
| Named collections | ✅ | Defining a named collection |
| **Read features** | | |
| Read support | ✅ | Full SELECT support with all ClickHouse SQL functions |
| Partition pruning | ✅ | See Partition pruning. |
| Hidden partitioning | ✅ | Iceberg transform-based partitioning supported |
| Partition evolution | ✅ | Reading tables whose partition spec has changed over time is supported |
| Schema evolution | ✅ | Column addition, removal, and reordering. See Schema evolution. |
| Type promotion / widening | ✅ | `int` → `long`, `float` → `double`, `decimal(P,S)` → `decimal(P',S)` where P' > P. See Schema evolution. |
| Time travel / snapshots | ✅ | Via the `iceberg_timestamp_ms` or `iceberg_snapshot_id` settings. See Time travel. |
| Position deletes | ✅ | See Processing deleted rows. |
| Equality deletes | ✅ | Table engine only, from v25.8+. See Processing deleted rows. |
| Merge-on-read | ⚠️ | Experimental. Supported for delete operations. |
| Format versions | ⚠️ | v1 and v2 supported. v3 not supported. |
| **Column statistics** | | |
| Bloom filters / puffin files | ❌ | Bloom filter indexes in puffin files not supported |
| Virtual columns | ✅ | `_path`, `_file`, `_size`, `_time`, `_etag`. See Virtual columns. |
| **Write features** | | |
| Table creation | ⚠️ | Experimental. Requires `allow_insert_into_iceberg = 1`. From v25.7+. See Creating a table. |
| INSERT | ⚠️ | Beta from v26.2. Requires `allow_insert_into_iceberg = 1`. See Inserting data. |
| DELETE | ⚠️ | Experimental. Requires `allow_insert_into_iceberg = 1`. Via `ALTER TABLE ... DELETE WHERE`. See Deleting data. |
| ALTER TABLE (schema changes) | ⚠️ | Experimental. Requires `allow_insert_into_iceberg = 1`. Add, drop, modify, rename columns. See Schema evolution. |
| Compaction | ⚠️ | Experimental. Requires `allow_experimental_iceberg_compaction = 1`. Merges position delete files into data files; other Iceberg compaction operations are not supported. See Compaction. |
| UPDATE / MERGE | ❌ | Not supported. See Compaction. |
| Copy-on-write | ❌ | Not supported |
| Expire snapshots | ❌ | Not supported |
| Remove orphan files | ❌ | Not supported |
| Writing partitions | ✅ | Supported. |
| Altering partitions | ❌ | Changing the partitioning scheme from ClickHouse is not supported. ClickHouse can write to Iceberg tables whose partitioning has evolved. |
| **Metadata** | | |
| Branching and tagging | ❌ | Iceberg branch/tag references not supported |
| Metadata file resolution | ✅ | Resolved via catalogs, simple directory listing, a `version-hint` file, or an explicit path. Configurable via `iceberg_metadata_file_path` and `iceberg_metadata_table_uuid`. See Metadata file resolution. |
| Data caching | ✅ | Same mechanism as the S3/Azure/HDFS storage engines. See Data cache. |
| Metadata caching | ✅ | Manifest and metadata files cached in memory. Enabled by default via `use_iceberg_metadata_files_cache`. See Metadata cache. |
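As a sketch of how the read and write features in the matrix fit together, the following assumes a placeholder S3 bucket path and credentials, and a pre-existing `IcebergS3` table named `iceberg_events`; the function and setting names (`icebergS3()`, `iceberg_timestamp_ms`, `allow_insert_into_iceberg`) are the ones listed above:

```sql
-- Read an Iceberg table on S3 via the icebergS3() table function.
-- The bucket URL and credentials below are placeholders.
SELECT count()
FROM icebergS3('https://my-bucket.s3.amazonaws.com/warehouse/events', '<key>', '<secret>');

-- Time travel: read the table as of a past moment
-- (iceberg_timestamp_ms is milliseconds since the Unix epoch).
SELECT *
FROM icebergS3('https://my-bucket.s3.amazonaws.com/warehouse/events', '<key>', '<secret>')
SETTINGS iceberg_timestamp_ms = 1714000000000
LIMIT 10;

-- Writes are experimental and must be enabled explicitly.
-- Assumes an IcebergS3 table engine named iceberg_events already exists.
INSERT INTO iceberg_events
SETTINGS allow_insert_into_iceberg = 1
VALUES ('2024-05-01 12:00:00', 42);
```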

Catalog support

ClickHouse can connect to external data catalogs using the DataLakeCatalog database engine, which exposes the catalog as a ClickHouse database. Tables registered in the catalog appear automatically and can be queried with standard SQL.
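As a minimal sketch, an Iceberg REST catalog could be attached like this; the endpoint, warehouse name, and setting values are placeholders, and the exact settings vary by catalog type, so consult the reference guides below:

```sql
-- Attach an Iceberg REST catalog as a ClickHouse database.
-- Endpoint and warehouse name are placeholder values.
CREATE DATABASE my_lakehouse
ENGINE = DataLakeCatalog('http://rest-catalog:8181/v1')
SETTINGS catalog_type = 'rest', warehouse = 'my_warehouse';

-- Tables registered in the catalog then appear automatically:
SHOW TABLES FROM my_lakehouse;
```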

The following catalogs are currently supported. Refer to each catalog's reference guide for full setup instructions.

| Catalog | Formats | Read | Create table | INSERT | Reference guide |
|---|---|---|---|---|---|
| AWS Glue | Iceberg | ✅ Beta | ❌ | ❌ | Glue catalog guide |
| Databricks Unity | Delta, Iceberg | ✅ Experimental | ❌ | ❌ | Unity catalog guide |
| Iceberg REST | Iceberg | ✅ Beta | ❌ | ❌ | REST catalog guide |
| Lakekeeper | Iceberg | ✅ Experimental | ❌ | ❌ | Lakekeeper catalog guide |
| Project Nessie | Iceberg | ✅ Experimental | ❌ | ❌ | Nessie catalog guide |
| Microsoft OneLake | Iceberg | ✅ Beta | ❌ | ❌ | OneLake catalog guide |

All catalog integrations currently require an experimental or beta setting to be enabled and expose read-only access: tables can be queried but not created or written to through the catalog connection. To load data from a catalog into ClickHouse for faster analytics, use INSERT INTO SELECT as described in the accelerating analytics guide. To write data back to open table formats, create standalone Iceberg tables as described in the writing data guide.
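The INSERT INTO SELECT pattern mentioned above can be sketched as follows; the database, table, and column names are illustrative placeholders, and the catalog database `my_lakehouse` is assumed to have been created with the DataLakeCatalog engine:

```sql
-- Create a native MergeTree table first (columns are placeholders
-- matching whatever schema the catalog table exposes).
CREATE TABLE local_events
(
    event_time DateTime,
    user_id    UInt64,
    payload    String
)
ENGINE = MergeTree
ORDER BY event_time;

-- Then copy the catalog-backed table into it for faster analytics.
INSERT INTO local_events
SELECT event_time, user_id, payload
FROM my_lakehouse.`db.events`;
```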