
Querying open table formats directly

ClickHouse provides table functions for querying data stored in open table formats directly in object storage. This does not require connecting to an external catalog - it queries the data in place, similar to how AWS Athena reads from S3.

You pass the storage path and credentials directly in the function call, and ClickHouse handles the rest. All ClickHouse SQL syntax and functions are available, and queries benefit from ClickHouse's parallelized execution and efficient native Parquet reader.

Server, local or chDB

The steps in this guide can be executed using an existing ClickHouse server installation. For ad hoc querying, you can instead use clickhouse-local and complete the same workflow without running a server. With minor adjustments, the process can also be performed using ClickHouse's in-process engine, chDB.
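For example, assuming the clickhouse binary is installed and has network access to the public bucket used throughout this guide, the same kind of query can be run ad hoc without a server:

```shell
# Ad hoc query with clickhouse-local: no server process, no persistent state.
# Requires the clickhouse binary and network access to the public S3 bucket.
clickhouse local --query "
SELECT count()
FROM icebergS3('https://datasets-documentation.s3.amazonaws.com/lake_formats/iceberg/')
"
```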

The following examples use the hits dataset stored in each lakehouse format on S3. For each format, dedicated functions exist per object storage provider.

The iceberg table function (alias for icebergS3) reads Iceberg tables directly from object storage. Variants exist for each storage backend: icebergS3, icebergAzure, icebergHDFS, and icebergLocal.

Example syntax:

icebergS3(url [, NOSIGN | access_key_id, secret_access_key [, session_token]] [, format] [, compression_method])

icebergAzure(connection_string|storage_account_url, container_name, blobpath [, account_name] [, account_key] [, format] [, compression_method])

icebergLocal(path_to_table [, format] [, compression_method])

GCS support

The S3 variant of the functions can be used for Google Cloud Storage (GCS).
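For instance, a GCS-hosted Iceberg table can be read through Google's S3-compatible endpoint. The bucket path and HMAC key pair below are placeholders for illustration, not real values:

```sql
-- GCS exposes an S3-compatible XML API at storage.googleapis.com,
-- so the S3 variant of the function works unchanged.
-- 'my-bucket' and the HMAC credentials are hypothetical.
SELECT count()
FROM icebergS3(
    'https://storage.googleapis.com/my-bucket/iceberg/',
    'GOOG1EXAMPLEHMACKEYID',
    'exampleHmacSecret'
)
```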

Example:

SELECT
    url,
    count() AS cnt
FROM icebergS3('https://datasets-documentation.s3.amazonaws.com/lake_formats/iceberg/')
GROUP BY url
ORDER BY cnt DESC
LIMIT 5

┌─url────────────────────────────────────────────────┬─────cnt─┐
│ http://liver.ru/belgorod/page/1006.jки/доп_приборы │ 3288173 │ -- 3.29 million
│ http://kinopoisk.ru                                │ 1625250 │ -- 1.63 million
│ http://bdsm_po_yers=0&with_video                   │  791465 │
│ http://video.yandex                                │  582400 │
│ http://smeshariki.ru/region                        │  514984 │
└────────────────────────────────────────────────────┴─────────┘

5 rows in set. Elapsed: 3.375 sec. Processed 100.00 million rows, 9.98 GB (29.63 million rows/s., 2.96 GB/s.)
Peak memory usage: 10.48 GiB.

Cluster variant

The icebergS3Cluster function distributes reads across multiple nodes in a ClickHouse cluster. The initiator node establishes connections to all nodes and dispatches data files dynamically. Each worker node requests and processes tasks until all files have been read. icebergCluster is an alias for icebergS3Cluster. Variants also exist for Azure (icebergAzureCluster) and HDFS (icebergHDFSCluster).

Example syntax:

icebergS3Cluster(cluster_name, url [, NOSIGN | access_key_id, secret_access_key [, session_token]] [, format] [, compression_method])
-- icebergCluster is an alias for icebergS3Cluster

icebergAzureCluster(cluster_name, connection_string|storage_account_url, container_name, blobpath [, account_name] [, account_key] [, format] [, compression_method])

Example (ClickHouse Cloud):

SELECT
    url,
    count() AS cnt
FROM icebergS3Cluster(
    'default',
    'https://datasets-documentation.s3.amazonaws.com/lake_formats/iceberg/'
)
GROUP BY url
ORDER BY cnt DESC
LIMIT 5

Table engine

As an alternative to using the table function in every query, you can create a persistent table using the Iceberg table engine. The data still resides in object storage and is read on demand - no data is copied into ClickHouse. The advantage is that the table definition is stored in ClickHouse and can be shared across users and sessions without each user needing to specify the storage path and credentials. Engine variants exist for each storage backend: IcebergS3 (or the Iceberg alias), IcebergAzure, IcebergHDFS, and IcebergLocal.

Both the table engine and the table function support data caching, using the same caching mechanism as the S3, AzureBlobStorage, and HDFS storage engines. Additionally, a metadata cache stores manifest file information in memory, reducing repeated reads of Iceberg metadata. This cache is enabled by default via the use_iceberg_metadata_files_cache setting.
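The metadata cache can be toggled per query via the setting named above. For example, to bypass it for a single query (useful when comparing cold-read performance):

```sql
-- Disable the Iceberg metadata file cache for this query only,
-- forcing manifest files to be re-read from object storage.
SELECT count()
FROM icebergS3('https://datasets-documentation.s3.amazonaws.com/lake_formats/iceberg/')
SETTINGS use_iceberg_metadata_files_cache = 0
```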

Example syntax:

The table engine Iceberg is an alias for IcebergS3.

CREATE TABLE iceberg_table
    ENGINE = IcebergS3(url [, NOSIGN | access_key_id, secret_access_key [, session_token]] [, format] [, compression_method])

CREATE TABLE iceberg_table
    ENGINE = IcebergAzure(connection_string|storage_account_url, container_name, blobpath [, account_name] [, account_key] [, format] [, compression_method])

CREATE TABLE iceberg_table
    ENGINE = IcebergLocal(path_to_table [, format] [, compression_method])

GCS support

The S3 variant of the table engine can be used for Google Cloud Storage (GCS).

Example:

CREATE TABLE hits_iceberg
    ENGINE = IcebergS3('https://datasets-documentation.s3.amazonaws.com/lake_formats/iceberg/')

SELECT
    url,
    count() AS cnt
FROM hits_iceberg
GROUP BY url
ORDER BY cnt DESC
LIMIT 5

┌─url────────────────────────────────────────────────┬─────cnt─┐
│ http://liver.ru/belgorod/page/1006.jки/доп_приборы │ 3288173 │
│ http://kinopoisk.ru                                │ 1625250 │
│ http://bdsm_po_yers=0&with_video                   │  791465 │
│ http://video.yandex                                │  582400 │
│ http://smeshariki.ru/region                        │  514984 │
└────────────────────────────────────────────────────┴─────────┘

5 rows in set. Elapsed: 2.737 sec. Processed 100.00 million rows, 9.98 GB (36.53 million rows/s., 3.64 GB/s.)
Peak memory usage: 10.53 GiB.

For supported features including partition pruning, schema evolution, time travel, caching, and more, see the support matrix. For full reference, see the iceberg table function and Iceberg table engine documentation.