Querying open table formats directly
ClickHouse provides table functions for querying data stored in open table formats directly in object storage. This does not require connecting to an external catalog - it queries the data in place, similar to how AWS Athena reads from S3.
You pass the storage path and credentials directly in the function call, and ClickHouse handles the rest. All ClickHouse SQL syntax and functions are available, and queries benefit from ClickHouse's parallelized execution and efficient native Parquet reader.
The steps in this guide can be executed using an existing ClickHouse server installation. For ad hoc querying, you can instead use clickhouse-local and complete the same workflow without running a server. With minor adjustments, the process can also be performed using ClickHouse's in-process engine, chDB.
The following examples use the hits dataset stored in each lakehouse format on S3. For each format, ClickHouse provides dedicated table functions per object storage provider.
- Apache Iceberg
- Delta Lake
- Apache Hudi
- Apache Paimon
The iceberg table function (alias for icebergS3) reads Iceberg tables directly from object storage. Variants exist for each storage backend: icebergS3, icebergAzure, icebergHDFS, and icebergLocal.
Example syntax:
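A minimal invocation might look like the following; the bucket URL and credentials are placeholders, not real values:

```sql
-- Read an Iceberg table directly from S3 (placeholder path and credentials)
SELECT count()
FROM iceberg('https://<bucket>.s3.amazonaws.com/<path-to-table>', '<aws_access_key_id>', '<aws_secret_access_key>');
```

For public buckets, NOSIGN can typically be passed in place of the credentials, as with the s3 table function.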
The S3 variant of the functions can be used for Google Cloud Storage (GCS).
Example:
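A sketch assuming a GCS bucket accessed through its S3-compatible endpoint with HMAC keys (all values are placeholders):

```sql
-- GCS via the S3-compatible XML API endpoint
SELECT count()
FROM icebergS3('https://storage.googleapis.com/<bucket>/<path-to-table>', '<hmac_key>', '<hmac_secret>');
```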
Cluster variant
The icebergS3Cluster function distributes reads across multiple nodes in a ClickHouse cluster. The initiator node establishes connections to all nodes and dispatches data files dynamically. Each worker node requests and processes tasks until all files have been read. icebergCluster is an alias for icebergS3Cluster. Variants also exist for Azure (icebergAzureCluster) and HDFS (icebergHDFSCluster).
Example syntax:
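The call shape is the same as icebergS3 with the cluster name prepended; the values below are placeholders:

```sql
-- First argument names the cluster; the rest match icebergS3
SELECT count()
FROM icebergS3Cluster('<cluster_name>', 'https://<bucket>.s3.amazonaws.com/<path-to-table>', '<aws_access_key_id>', '<aws_secret_access_key>');
```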
Example (ClickHouse Cloud):
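In ClickHouse Cloud, the cluster name `default` is commonly used to address all nodes of the service; the path and credentials below are placeholders:

```sql
-- 'default' distributes the read across all nodes of the Cloud service
SELECT count()
FROM icebergS3Cluster('default', 'https://<bucket>.s3.amazonaws.com/<path-to-table>', '<aws_access_key_id>', '<aws_secret_access_key>');
```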
Table engine
As an alternative to using the table function in every query, you can create a persistent table using the Iceberg table engine. The data still resides in object storage and is read on demand - no data is copied into ClickHouse. The advantage is that the table definition is stored in ClickHouse and can be shared across users and sessions without each user needing to specify the storage path and credentials. Engine variants exist for each storage backend: IcebergS3 (or the Iceberg alias), IcebergAzure, IcebergHDFS, and IcebergLocal.
Both the table engine and the table function support data caching, using the same caching mechanism as the S3, AzureBlobStorage, and HDFS storage engines. Additionally, a metadata cache stores manifest file information in memory, reducing repeated reads of Iceberg metadata. This cache is enabled by default via the use_iceberg_metadata_files_cache setting.
Example syntax:
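A sketch with placeholder values; the column list can be omitted because the schema is inferred from the Iceberg metadata:

```sql
-- Table definition is stored in ClickHouse; data stays in object storage
CREATE TABLE iceberg_hits
ENGINE = IcebergS3('https://<bucket>.s3.amazonaws.com/<path-to-table>', '<aws_access_key_id>', '<aws_secret_access_key>');

SELECT count() FROM iceberg_hits;
```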
The table engine Iceberg is an alias for IcebergS3.
The S3 variant of the table engine can be used for Google Cloud Storage (GCS).
Example:
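A sketch for GCS using the S3-compatible endpoint and HMAC keys (placeholder values):

```sql
CREATE TABLE iceberg_hits_gcs
ENGINE = IcebergS3('https://storage.googleapis.com/<bucket>/<path-to-table>', '<hmac_key>', '<hmac_secret>');
```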
For supported features including partition pruning, schema evolution, time travel, caching, and more, see the support matrix. For full reference, see the iceberg table function and Iceberg table engine documentation.
The deltaLake table function (alias for deltaLakeS3) reads Delta Lake tables from object storage. Variants exist for other backends: deltaLakeAzure and deltaLakeLocal.
Example syntax:
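A minimal call might look like the following; bucket URL and credentials are placeholders:

```sql
-- Read a Delta Lake table directly from S3
SELECT count()
FROM deltaLake('https://<bucket>.s3.amazonaws.com/<path-to-table>', '<aws_access_key_id>', '<aws_secret_access_key>');
```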
The S3 variant of the functions can be used for Google Cloud Storage (GCS).
Example:
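A sketch assuming a GCS bucket with HMAC keys (placeholder values):

```sql
-- GCS via the S3-compatible endpoint
SELECT count()
FROM deltaLake('https://storage.googleapis.com/<bucket>/<path-to-table>', '<hmac_key>', '<hmac_secret>');
```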
Cluster variant
The deltaLakeCluster function distributes reads across multiple nodes in a ClickHouse cluster. The initiator node dispatches data files dynamically to worker nodes for parallel processing. deltaLakeS3Cluster is an alias for deltaLakeCluster. An Azure variant (deltaLakeAzureCluster) is also available.
Example syntax:
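The cluster name is prepended to the usual arguments; all values below are placeholders:

```sql
SELECT count()
FROM deltaLakeCluster('<cluster_name>', 'https://<bucket>.s3.amazonaws.com/<path-to-table>', '<aws_access_key_id>', '<aws_secret_access_key>');
```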
The S3 variant of the functions can be used for Google Cloud Storage (GCS).
Example (ClickHouse Cloud):
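In ClickHouse Cloud, the cluster name `default` is commonly used; path and credentials are placeholders:

```sql
SELECT count()
FROM deltaLakeCluster('default', 'https://<bucket>.s3.amazonaws.com/<path-to-table>', '<aws_access_key_id>', '<aws_secret_access_key>');
```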
Table engine
As an alternative to using the table function in every query, you can create a persistent table using the DeltaLake table engine if you are using S3-compatible storage. The data still resides in object storage and is read on demand - no data is copied into ClickHouse. The advantage is that the table definition is stored in ClickHouse and can be shared across users and sessions without each user needing to specify the storage path and credentials.
Both the table engine and the table function support data caching, using the same caching mechanism as the S3, AzureBlobStorage, and HDFS storage engines.
Example syntax:
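A sketch with placeholder values; the schema is inferred from the Delta Lake metadata:

```sql
CREATE TABLE delta_hits
ENGINE = DeltaLake('https://<bucket>.s3.amazonaws.com/<path-to-table>', '<aws_access_key_id>', '<aws_secret_access_key>');

SELECT count() FROM delta_hits;
```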
This table engine can be used for Google Cloud Storage (GCS).
Example:
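A sketch for GCS through the S3-compatible endpoint (placeholder values):

```sql
CREATE TABLE delta_hits_gcs
ENGINE = DeltaLake('https://storage.googleapis.com/<bucket>/<path-to-table>', '<hmac_key>', '<hmac_secret>');
```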
For supported features including storage backends, caching, and more, see the support matrix. For full reference, see the deltaLake table function and DeltaLake table engine documentation.
The hudi table function reads Hudi tables from S3.
Syntax:
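A minimal call might look like the following; bucket URL and credentials are placeholders:

```sql
-- Read a Hudi table directly from S3
SELECT count()
FROM hudi('https://<bucket>.s3.amazonaws.com/<path-to-table>', '<aws_access_key_id>', '<aws_secret_access_key>');
```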
Cluster variant
The hudiCluster function distributes reads across multiple nodes in a ClickHouse cluster. The initiator node dispatches data files dynamically to worker nodes for parallel processing.
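The call shape mirrors the other cluster functions; a sketch with placeholder values:

```sql
SELECT count()
FROM hudiCluster('<cluster_name>', 'https://<bucket>.s3.amazonaws.com/<path-to-table>', '<aws_access_key_id>', '<aws_secret_access_key>');
```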
Table engine
As an alternative to using the table function in every query, you can create a persistent table using the Hudi table engine. The data still resides in object storage and is read on demand - no data is copied into ClickHouse. The advantage is that the table definition is stored in ClickHouse and can be shared across users and sessions without each user needing to specify the storage path and credentials.
Syntax:
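A sketch with placeholder values; the schema is inferred from the Hudi table metadata:

```sql
CREATE TABLE hudi_hits
ENGINE = Hudi('https://<bucket>.s3.amazonaws.com/<path-to-table>', '<aws_access_key_id>', '<aws_secret_access_key>');
```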
For supported features including storage backends and more, see the support matrix. For full reference, see the hudi table function and Hudi table engine documentation.
The paimon table function (alias for paimonS3) reads Paimon tables from object storage. Variants exist for each storage backend: paimonS3, paimonAzure, paimonHDFS, and paimonLocal.
Syntax:
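A minimal call might look like the following; bucket URL and credentials are placeholders:

```sql
-- Read a Paimon table directly from S3
SELECT count()
FROM paimon('https://<bucket>.s3.amazonaws.com/<path-to-table>', '<aws_access_key_id>', '<aws_secret_access_key>');
```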
Cluster variant
The paimonS3Cluster function distributes reads across multiple nodes in a ClickHouse cluster. The initiator node dispatches data files dynamically to worker nodes for parallel processing. paimonCluster is an alias for paimonS3Cluster. Variants also exist for Azure (paimonAzureCluster) and HDFS (paimonHDFSCluster).
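The call shape mirrors the other cluster functions; a sketch with placeholder values:

```sql
SELECT count()
FROM paimonS3Cluster('<cluster_name>', 'https://<bucket>.s3.amazonaws.com/<path-to-table>', '<aws_access_key_id>', '<aws_secret_access_key>');
```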
Table engine
Paimon does not currently have a dedicated table engine in ClickHouse. Use the table functions above for querying Paimon tables.
For supported features including storage backends and more, see the support matrix. For full reference, see the paimon table function documentation.