# Table Engines for Integrations
ClickHouse provides various means for integrating with external systems, including table engines. As with all other table engines, the configuration is done using `CREATE TABLE` or `ALTER TABLE` queries. From a user's perspective, the configured integration looks like a normal table, but queries to it are proxied to the external system. This transparent querying is one of the key advantages of this approach over alternative integration methods, such as dictionaries or table functions, which require custom query methods on each use.
Page | Description |
---|---|
ExternalDistributed | The ExternalDistributed engine allows performing SELECT queries on data stored on remote MySQL or PostgreSQL servers. It accepts the MySQL or PostgreSQL engine as an argument, so sharding is possible. |
AzureQueue Table Engine | This engine provides an integration with the Azure Blob Storage ecosystem, allowing streaming data import. |
AzureBlobStorage Table Engine | This engine provides an integration with the Azure Blob Storage ecosystem. |
DeltaLake Table Engine | This engine provides a read-only integration with existing Delta Lake tables in Amazon S3. |
EmbeddedRocksDB Engine | This engine allows integrating ClickHouse with RocksDB. |
HDFS | This engine provides integration with the Apache Hadoop ecosystem by allowing you to manage data on HDFS via ClickHouse. This engine is similar to the File and URL engines, but provides Hadoop-specific features. |
Hive | The Hive engine allows you to perform SELECT queries on Hive tables in HDFS. |
Hudi Table Engine | This engine provides a read-only integration with existing Apache Hudi tables in Amazon S3. |
Iceberg Table Engine | This engine provides a read-only integration with existing Apache Iceberg tables in Amazon S3, Azure, HDFS and locally stored tables. |
JDBC | Allows ClickHouse to connect to external databases via JDBC. |
Kafka | The Kafka engine works with Apache Kafka and lets you publish or subscribe to data flows, organize fault-tolerant storage, and process streams as they become available. |
MaterializedPostgreSQL | Creates a ClickHouse table with an initial data dump of a PostgreSQL table and starts the replication process. |
MongoDB | The MongoDB engine is a read-only table engine that allows reading data from a remote MongoDB collection. |
MySQL | The MySQL engine allows you to perform SELECT and INSERT queries on data stored on a remote MySQL server. |
NATS Engine | This engine allows integrating ClickHouse with NATS to publish or subscribe to message subjects, and process new messages as they become available. |
ODBC | Allows ClickHouse to connect to external databases via ODBC. |
PostgreSQL Table Engine | The PostgreSQL engine allows SELECT and INSERT queries on data stored on a remote PostgreSQL server. |
RabbitMQ Engine | This engine allows integrating ClickHouse with RabbitMQ. |
Redis | This engine allows integrating ClickHouse with Redis. |
S3 Table Engine | This engine provides integration with the Amazon S3 ecosystem. Similar to the HDFS engine, but provides S3-specific features. |
S3Queue Table Engine | This engine provides integration with the Amazon S3 ecosystem and allows streaming imports. Similar to the Kafka and RabbitMQ engines, but provides S3-specific features. |
SQLite | The engine allows importing and exporting data to SQLite and supports queries to SQLite tables directly from ClickHouse. |
TimeSeries Engine | A table engine storing time series, i.e. a set of values associated with timestamps and tags (or labels). |
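As a second illustration of the same pattern, a sketch of the S3 table engine exposing an object in a bucket as a queryable table (the bucket URL, file, and columns are hypothetical):

```sql
-- Hypothetical example: read a CSV object in S3 through a table.
-- The bucket URL and column list are placeholders.
CREATE TABLE s3_events
(
    ts DateTime,
    event String
)
ENGINE = S3('https://my-bucket.s3.amazonaws.com/data/events.csv', 'CSVWithNames');

-- Aggregations run in ClickHouse over data fetched from S3.
SELECT event, count() FROM s3_events GROUP BY event;
```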