Table Functions
Table functions are methods for constructing tables.
You can use table functions in the following ways (see the examples after the note below):

- The FROM clause of a SELECT query, as a method for creating a temporary table that is available only in the current query. The table is deleted when the query finishes.
- A CREATE TABLE AS table_function() query, as one of the methods of creating a table.
- An INSERT INTO TABLE FUNCTION query.
Note: You can't use table functions if the allow_ddl setting is disabled.
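As a minimal illustration of the three usage contexts above, the sketch below uses the numbers and file table functions; the file paths and column schemas are placeholders chosen for the example, not part of this reference.

```sql
-- 1. FROM clause: numbers(5) builds a temporary table of integers
--    that exists only for the duration of this query.
SELECT number FROM numbers(5);

-- 2. CREATE TABLE AS table_function(): create a table backed by the
--    file() table function (placeholder path and schema).
CREATE TABLE csv_table AS file('data.csv', 'CSV', 'id UInt32, name String');

-- 3. INSERT INTO TABLE FUNCTION: write rows through a table function.
INSERT INTO TABLE FUNCTION file('out.csv', 'CSV', 'number UInt64')
SELECT number FROM numbers(10);
```

The table below lists the available table functions.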
Page | Description |
---|---|
generateRandom | Generates random data with a given schema. Allows populating test tables with that data. Not all types are supported. |
generate_series (generateSeries) | Returns a table with a single 'generate_series' column (UInt64) that contains integers from start to stop inclusive. |
hdfs | Creates a table from files in HDFS. This table function is similar to the url and file table functions. |
hdfsCluster | Allows processing files from HDFS in parallel from many nodes in a specified cluster. |
hudi | Provides a read-only table-like interface to Apache Hudi tables in Amazon S3. |
hudiCluster | An extension to the hudi table function. Allows processing files from Apache Hudi tables in Amazon S3 in parallel with many nodes in a specified cluster. |
iceberg | Provides a read-only table-like interface to Apache Iceberg tables in Amazon S3, Azure, HDFS or locally stored. |
icebergCluster | An extension to the iceberg table function which allows processing files from Apache Iceberg in parallel from many nodes in a specified cluster. |
input | Allows effectively converting and inserting data sent to the server with a given structure into a table with another structure. |
jdbc | Returns a table that is connected via a JDBC driver. |
loop | The loop table function in ClickHouse is used to return query results in an infinite loop. |
merge | Creates a temporary Merge table. The table structure is taken from the first table encountered that matches the regular expression. |
mergeTreeIndex | Represents the contents of index and marks files of MergeTree tables. It can be used for introspection. |
mongodb | Allows SELECT queries to be performed on data that is stored on a remote MongoDB server. |
mysql | Allows SELECT and INSERT queries to be performed on data that is stored on a remote MySQL server. |
null | Creates a temporary table of the specified structure with the Null table engine. The function is used for the convenience of test writing and demonstrations. |
numbers | Returns a table with a single 'number' column that contains a specifiable range of integers. |
odbc | Returns a table that is connected via ODBC. |
postgresql | Allows SELECT and INSERT queries to be performed on data that is stored on a remote PostgreSQL server. |
redis | This table function allows integrating ClickHouse with Redis. |
remote, remoteSecure | The remote table function allows accessing remote servers on the fly, i.e. without creating a Distributed table. The remoteSecure table function is the same as remote, but over a secure connection. |
s3 | Provides a table-like interface to select/insert files in Amazon S3 and Google Cloud Storage. This table function is similar to the hdfs function, but provides S3-specific features. |
s3Cluster | An extension to the s3 table function, which allows processing files from Amazon S3 and Google Cloud Storage in parallel with many nodes in a specified cluster. |
sqlite | Allows performing queries on data stored in a SQLite database. |
timeSeriesData | Returns the data table used by the table db_name.time_series_table, whose table engine is TimeSeries. |
timeSeriesMetrics | Returns the metrics table used by the table db_name.time_series_table, whose table engine is TimeSeries. |
timeSeriesTags | Returns the tags table used by the table db_name.time_series_table, whose table engine is TimeSeries. |
url | Creates a table from a URL with the given format and structure. |
urlCluster | Allows processing files from a URL in parallel from many nodes in a specified cluster. |
view | Turns a subquery into a table. The function implements views. |
zeros | Used for test purposes as the fastest method to generate many rows. Similar to the system.zeros and system.zeros_mt system tables. |
azureBlobStorage | Provides a table-like interface to select/insert files in Azure Blob Storage. Similar to the s3 function. |
azureBlobStorageCluster | Allows processing files from Azure Blob storage in parallel with many nodes in a specified cluster. |
clusterAllReplicas | Allows accessing all shards (configured in the remote_servers section) of a cluster without creating a Distributed table. |
deltaLake | Provides a read-only table-like interface to the Delta Lake tables in Amazon S3. |
deltaLakeCluster | This is an extension to the deltaLake table function. |
dictionary | Displays the dictionary data as a ClickHouse table. Works the same way as the Dictionary engine. |
executable | The executable table function creates a table based on the output of a user-defined function (UDF) that you define in a script that outputs rows to stdout. |
file | A table function which provides a table-like interface to SELECT from and INSERT into files, similar to the s3 table function. Use file() when working with local files, and s3() when working with buckets in object storage such as S3, GCS, or MinIO. |
fileCluster | Enables simultaneous processing of files matching a specified path across multiple nodes within a cluster. The initiator establishes connections to worker nodes, expands globs in the file path, and delegates file-reading tasks to worker nodes. Each worker node queries the initiator for the next file to process, repeating until all tasks are completed (all files are read). |
format | Parses data from arguments according to the specified input format. If the structure argument is not specified, it is extracted from the data. |
fuzzJSON | Perturbs a JSON string with random variations. |
fuzzQuery | Perturbs the given query string with random variations. |
gcs | Provides a table-like interface to SELECT and INSERT data from Google Cloud Storage. Requires the Storage Object User IAM role. |
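Most of the functions listed above follow the same calling pattern: use them in place of a table name in a query, passing a format and column structure where the function requires one. Below is a minimal sketch using generateRandom and url; the URL and column structures are illustrative placeholders.

```sql
-- Generate a few rows of random test data for an explicitly given schema.
SELECT *
FROM generateRandom('id UInt32, value Float64, name String')
LIMIT 3;

-- Read a remote CSV file over HTTP (URL and column structure are placeholders).
SELECT *
FROM url('https://example.com/data.csv', 'CSV', 'c1 String, c2 UInt32')
LIMIT 10;
```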