Formats for Input and Output Data

ClickHouse can accept and return data in various formats. A format supported for input can be used to parse the data provided to INSERTs, to perform SELECTs from a file-backed table such as File, URL or HDFS, or to read a dictionary. A format supported for output can be used to arrange the results of a SELECT, and to perform INSERTs into a file-backed table. All format names are case-insensitive.

The supported formats are:

TabSeparated
TabSeparatedRaw
TabSeparatedWithNames
TabSeparatedWithNamesAndTypes
TabSeparatedRawWithNames
TabSeparatedRawWithNamesAndTypes
Template
TemplateIgnoreSpaces
CSV
CSVWithNames
CSVWithNamesAndTypes
CustomSeparated
CustomSeparatedWithNames
CustomSeparatedWithNamesAndTypes
SQLInsert
Values
Vertical
JSON
JSONAsString
JSONAsObject
JSONStrings
JSONColumns
JSONColumnsWithMetadata
JSONCompact
JSONCompactStrings
JSONCompactColumns
JSONEachRow
PrettyJSONEachRow
JSONEachRowWithProgress
JSONStringsEachRow
JSONStringsEachRowWithProgress
JSONCompactEachRow
JSONCompactEachRowWithNames
JSONCompactEachRowWithNamesAndTypes
JSONCompactStringsEachRow
JSONCompactStringsEachRowWithNames
JSONCompactStringsEachRowWithNamesAndTypes
JSONObjectEachRow
BSONEachRow
TSKV
Pretty
PrettyNoEscapes
PrettyMonoBlock
PrettyNoEscapesMonoBlock
PrettyCompact
PrettyCompactNoEscapes
PrettyCompactMonoBlock
PrettyCompactNoEscapesMonoBlock
PrettySpace
PrettySpaceNoEscapes
PrettySpaceMonoBlock
PrettySpaceNoEscapesMonoBlock
Prometheus
Protobuf
ProtobufSingle
ProtobufList
Avro
AvroConfluent
Parquet
ParquetMetadata
Arrow
ArrowStream
ORC
One
Npy
RowBinary
RowBinaryWithNames
RowBinaryWithNamesAndTypes
RowBinaryWithDefaults
Native
Null
XML
CapnProto
LineAsString
Regexp
RawBLOB
MsgPack
MySQLDump
DWARF
Markdown
Form

You can control some format processing parameters with the ClickHouse settings. For more information read the Settings section.
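
For example, a format processing setting can be overridden for a single query with the SETTINGS clause; a minimal sketch (the file name is hypothetical):

SELECT * FROM file('data.csv', CSV)
SETTINGS format_csv_delimiter = ';'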

TabSeparated

See TabSeparated

TabSeparatedRaw

See TabSeparatedRaw

TabSeparatedWithNames

See TabSeparatedWithNames

TabSeparatedWithNamesAndTypes

See TabSeparatedWithNamesAndTypes

TabSeparatedRawWithNames

See TabSeparatedRawWithNames

TabSeparatedRawWithNamesAndTypes

See TabSeparatedRawWithNamesAndTypes

Template

See Template

TemplateIgnoreSpaces

See TemplateIgnoreSpaces

TSKV

See TSKV

CSV

See CSV

CSVWithNames

See CSVWithNames

CSVWithNamesAndTypes

See CSVWithNamesAndTypes

CustomSeparated

See CustomSeparated

CustomSeparatedWithNames

See CustomSeparatedWithNames

CustomSeparatedWithNamesAndTypes

See CustomSeparatedWithNamesAndTypes

SQLInsert

See SQLInsert

JSON

See JSON

JSONStrings

See JSONStrings

JSONColumns

See JSONColumns

JSONColumnsWithMetadata

See JSONColumnsWithMetadata

JSONAsString

See JSONAsString

JSONAsObject

See JSONAsObject

JSONCompact

See JSONCompact

JSONCompactStrings

See JSONCompactStrings

JSONCompactColumns

See JSONCompactColumns

JSONEachRow

See JSONEachRow

PrettyJSONEachRow

See PrettyJSONEachRow

JSONStringsEachRow

See JSONStringsEachRow

JSONCompactEachRow

See JSONCompactEachRow

JSONCompactStringsEachRow

See JSONCompactStringsEachRow

JSONEachRowWithProgress

See JSONEachRowWithProgress

JSONStringsEachRowWithProgress

See JSONStringsEachRowWithProgress

JSONCompactEachRowWithNames

See JSONCompactEachRowWithNames

JSONCompactEachRowWithNamesAndTypes

See JSONCompactEachRowWithNamesAndTypes

JSONCompactStringsEachRowWithNames

See JSONCompactStringsEachRowWithNames

JSONCompactStringsEachRowWithNamesAndTypes

See JSONCompactStringsEachRowWithNamesAndTypes

JSONObjectEachRow

See JSONObjectEachRow

JSON Formats Settings

See JSON Format Settings

BSONEachRow

See BSONEachRow

Native

See Native

Null

See Null

Pretty

Outputs data as Unicode-art tables, also using ANSI-escape sequences for setting colours in the terminal. A full grid of the table is drawn, and each row occupies two lines in the terminal. Each result block is output as a separate table. This is necessary so that blocks can be output without buffering results (buffering would be necessary in order to pre-calculate the visible width of all the values).

NULL is output as ᴺᵁᴸᴸ.

Example (shown for the PrettyCompact format):

SELECT * FROM t_null
┌─x─┬────y─┐
│ 1 │ ᴺᵁᴸᴸ │
└───┴──────┘

Rows are not escaped in Pretty* formats. An example is shown for the PrettyCompact format:

SELECT 'String with \'quotes\' and \t character' AS Escaping_test
┌─Escaping_test────────────────────────┐
│ String with 'quotes' and character │
└──────────────────────────────────────┘

To avoid dumping too much data to the terminal, only the first 10,000 rows are printed. If the number of rows is greater than or equal to 10,000, the message “Showed first 10 000” is printed. This format is only appropriate for outputting a query result, but not for parsing (retrieving data to insert in a table).

The Pretty format supports outputting total values (when using WITH TOTALS) and extremes (when ‘extremes’ is set to 1). In these cases, total values and extreme values are output after the main data, in separate tables. Example (shown for the PrettyCompact format):

SELECT EventDate, count() AS c FROM test.hits GROUP BY EventDate WITH TOTALS ORDER BY EventDate FORMAT PrettyCompact
┌──EventDate─┬───────c─┐
│ 2014-03-17 │ 1406958 │
│ 2014-03-18 │ 1383658 │
│ 2014-03-19 │ 1405797 │
│ 2014-03-20 │ 1353623 │
│ 2014-03-21 │ 1245779 │
│ 2014-03-22 │ 1031592 │
│ 2014-03-23 │ 1046491 │
└────────────┴─────────┘

Totals:
┌──EventDate─┬───────c─┐
│ 1970-01-01 │ 8873898 │
└────────────┴─────────┘

Extremes:
┌──EventDate─┬───────c─┐
│ 2014-03-17 │ 1031592 │
│ 2014-03-23 │ 1406958 │
└────────────┴─────────┘

PrettyNoEscapes

Differs from Pretty in that ANSI-escape sequences aren’t used. This is necessary for displaying this format in a browser, as well as for using the ‘watch’ command-line utility.

Example:

$ watch -n1 "clickhouse-client --query='SELECT event, value FROM system.events FORMAT PrettyCompactNoEscapes'"

You can use the HTTP interface for displaying in the browser.

PrettyMonoBlock

Differs from Pretty in that up to 10,000 rows are buffered, then output as a single table, not by blocks.

PrettyNoEscapesMonoBlock

Differs from PrettyNoEscapes in that up to 10,000 rows are buffered, then output as a single table, not by blocks.

PrettyCompact

Differs from Pretty in that the grid is drawn between rows and the result is more compact. This format is used by default in the command-line client in interactive mode.

PrettyCompactNoEscapes

Differs from PrettyCompact in that ANSI-escape sequences aren’t used. This is necessary for displaying this format in a browser, as well as for using the ‘watch’ command-line utility.

PrettyCompactMonoBlock

Differs from PrettyCompact in that up to 10,000 rows are buffered, then output as a single table, not by blocks.

PrettyCompactNoEscapesMonoBlock

Differs from PrettyCompactNoEscapes in that up to 10,000 rows are buffered, then output as a single table, not by blocks.

PrettySpace

Differs from PrettyCompact in that whitespace (space characters) is used instead of the grid.

PrettySpaceNoEscapes

Differs from PrettySpace in that ANSI-escape sequences aren’t used. This is necessary for displaying this format in a browser, as well as for using the ‘watch’ command-line utility.

PrettySpaceMonoBlock

Differs from PrettySpace in that up to 10,000 rows are buffered, then output as a single table, not by blocks.

PrettySpaceNoEscapesMonoBlock

Differs from PrettySpaceNoEscapes in that up to 10,000 rows are buffered, then output as a single table, not by blocks.

Pretty formats settings

RowBinary

Formats and parses data by row in binary format. Rows and values are listed consecutively, without separators. Because the data is binary, the delimiter after FORMAT RowBinary is strictly defined: any number of whitespace characters (' ' space, code 0x20; '\t' tab, code 0x09; '\f' form feed, code 0x0C), followed by exactly one newline sequence (Windows-style "\r\n" or Unix-style '\n'), immediately followed by the binary data. This format is less efficient than the Native format, since it is row-based.

Integers use fixed-length little-endian representation. For example, UInt64 uses 8 bytes. DateTime is represented as UInt32 containing the Unix timestamp as the value. Date is represented as a UInt16 object that contains the number of days since 1970-01-01 as the value. String is represented as a varint length (unsigned LEB128), followed by the bytes of the string. FixedString is represented simply as a sequence of bytes.

Array is represented as a varint length (unsigned LEB128), followed by successive elements of the array.

For NULL support, an additional byte containing 1 or 0 is added before each Nullable value. If 1, then the value is NULL and this byte is interpreted as a separate value. If 0, the value after the byte is not NULL.
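
As an illustration of this layout, the following sketch decodes a hand-built hex blob with the format table function (the column names and values are made up): 42 as a little-endian UInt32, the string 'hello' as a LEB128 length byte followed by its bytes, and a Nullable(Int64) whose leading 01 byte marks it as NULL.

SELECT * FROM format('RowBinary', 'x UInt32, s String, n Nullable(Int64)', x'2A0000000568656C6C6F01')

This should return x = 42, s = 'hello' and n = NULL.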

RowBinaryWithNames

Similar to RowBinary, but with added header:

  • LEB128-encoded number of columns (N)
  • N Strings specifying column names
Note

If the setting input_format_with_names_use_header is set to 1, the columns from the input data will be mapped to the columns of the table by their names; columns with unknown names will be skipped if the setting input_format_skip_unknown_fields is set to 1. Otherwise, the first row will be skipped.

RowBinaryWithNamesAndTypes

Similar to RowBinary, but with added header:

  • LEB128-encoded number of columns (N)
  • N Strings specifying column names
  • N Strings specifying column types
Note

If the setting input_format_with_names_use_header is set to 1, the columns from the input data will be mapped to the columns of the table by their names; columns with unknown names will be skipped if the setting input_format_skip_unknown_fields is set to 1. Otherwise, the first row will be skipped. If the setting input_format_with_types_use_header is set to 1, the types from the input data will be compared with the types of the corresponding columns of the table. Otherwise, the second row will be skipped.

RowBinaryWithDefaults

Similar to RowBinary, but with an extra byte before each column that indicates whether the default value should be used.

Examples:

:) select * from format('RowBinaryWithDefaults', 'x UInt32 default 42, y UInt32', x'010001000000')

┌──x─┬─y─┐
│ 42 │ 1 │
└────┴───┘

For column x there is only one byte, 01, which indicates that the default value should be used; no other data is provided after this byte. For column y the data starts with the byte 00, which indicates that the column has an actual value that should be read from the subsequent data 01000000.

RowBinary format settings

Values

See Values

Vertical

See Vertical

XML

XML format is suitable only for output, not for parsing. Example:

<?xml version='1.0' encoding='UTF-8' ?>
<result>
<meta>
<columns>
<column>
<name>SearchPhrase</name>
<type>String</type>
</column>
<column>
<name>count()</name>
<type>UInt64</type>
</column>
</columns>
</meta>
<data>
<row>
<SearchPhrase></SearchPhrase>
<field>8267016</field>
</row>
<row>
<SearchPhrase>bathroom interior design</SearchPhrase>
<field>2166</field>
</row>
<row>
<SearchPhrase>clickhouse</SearchPhrase>
<field>1655</field>
</row>
<row>
<SearchPhrase>2014 spring fashion</SearchPhrase>
<field>1549</field>
</row>
<row>
<SearchPhrase>freeform photos</SearchPhrase>
<field>1480</field>
</row>
<row>
<SearchPhrase>angelina jolie</SearchPhrase>
<field>1245</field>
</row>
<row>
<SearchPhrase>omsk</SearchPhrase>
<field>1112</field>
</row>
<row>
<SearchPhrase>photos of dog breeds</SearchPhrase>
<field>1091</field>
</row>
<row>
<SearchPhrase>curtain designs</SearchPhrase>
<field>1064</field>
</row>
<row>
<SearchPhrase>baku</SearchPhrase>
<field>1000</field>
</row>
</data>
<rows>10</rows>
<rows_before_limit_at_least>141137</rows_before_limit_at_least>
</result>

If the column name does not have an acceptable format, just ‘field’ is used as the element name. In general, the XML structure follows the JSON structure. Just as for JSON, invalid UTF-8 sequences are changed to the replacement character � so the output text will consist of valid UTF-8 sequences.

In string values, the characters < and & are escaped as &lt; and &amp;.

Arrays are output as <array><elem>Hello</elem><elem>World</elem>...</array>, and tuples as <tuple><elem>Hello</elem><elem>World</elem>...</tuple>.

CapnProto

Not supported in ClickHouse Cloud

CapnProto is a binary message format similar to Protocol Buffers and Thrift, but not like JSON or MessagePack.

CapnProto messages are strictly typed and not self-describing, meaning they need an external schema description. The schema is applied on the fly and cached for each query.

See also Format Schema.

Data Types Matching

The table below shows supported data types and how they match ClickHouse data types in INSERT and SELECT queries.

| CapnProto data type (INSERT) | ClickHouse data type | CapnProto data type (SELECT) |
|------------------------------|----------------------|------------------------------|
| UINT8, BOOL | UInt8 | UINT8 |
| INT8 | Int8 | INT8 |
| UINT16 | UInt16, Date | UINT16 |
| INT16 | Int16 | INT16 |
| UINT32 | UInt32, DateTime | UINT32 |
| INT32 | Int32, Decimal32 | INT32 |
| UINT64 | UInt64 | UINT64 |
| INT64 | Int64, DateTime64, Decimal64 | INT64 |
| FLOAT32 | Float32 | FLOAT32 |
| FLOAT64 | Float64 | FLOAT64 |
| TEXT, DATA | String, FixedString | TEXT, DATA |
| union(T, Void), union(Void, T) | Nullable(T) | union(T, Void), union(Void, T) |
| ENUM | Enum(8/16) | ENUM |
| LIST | Array | LIST |
| STRUCT | Tuple | STRUCT |
| UINT32 | IPv4 | UINT32 |
| DATA | IPv6 | DATA |
| DATA | Int128/UInt128/Int256/UInt256 | DATA |
| DATA | Decimal128/Decimal256 | DATA |
| STRUCT(entries LIST(STRUCT(key Key, value Value))) | Map | STRUCT(entries LIST(STRUCT(key Key, value Value))) |

Integer types can be converted into each other during input/output.

For working with Enum in CapnProto format use the format_capn_proto_enum_comparising_mode setting.
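
For example (a sketch; the table and schema names are hypothetical, and by_names is assumed to be one of the supported modes):

SELECT * FROM capnp_table
FORMAT CapnProto
SETTINGS format_schema = 'schema:Message', format_capn_proto_enum_comparising_mode = 'by_names'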

Arrays can be nested and can have a value of the Nullable type as an argument. Tuple and Map types also can be nested.

Inserting and Selecting Data

You can insert CapnProto data from a file into ClickHouse table by the following command:

$ cat capnproto_messages.bin | clickhouse-client --query "INSERT INTO test.hits SETTINGS format_schema = 'schema:Message' FORMAT CapnProto"

Where schema.capnp looks like this:

struct Message {
    SearchPhrase @0 :Text;
    c @1 :UInt64;
}

You can select data from a ClickHouse table and save them into some file in the CapnProto format by the following command:

$ clickhouse-client --query="SELECT * FROM test.hits FORMAT CapnProto SETTINGS format_schema = 'schema:Message'"

Using autogenerated schema

If you don't have an external CapnProto schema for your data, you can still output/input data in CapnProto format using autogenerated schema. For example:

SELECT * FROM test.hits format CapnProto SETTINGS format_capn_proto_use_autogenerated_schema=1

In this case ClickHouse will autogenerate CapnProto schema according to the table structure using function structureToCapnProtoSchema and will use this schema to serialize data in CapnProto format.

You can also read CapnProto file with autogenerated schema (in this case the file must be created using the same schema):

$ cat hits.bin | clickhouse-client --query "INSERT INTO test.hits SETTINGS format_capn_proto_use_autogenerated_schema=1 FORMAT CapnProto"

The setting format_capn_proto_use_autogenerated_schema is enabled by default and applies if format_schema is not set.

You can also save autogenerated schema in the file during input/output using setting output_format_schema. For example:

SELECT * FROM test.hits format CapnProto SETTINGS format_capn_proto_use_autogenerated_schema=1, output_format_schema='path/to/schema/schema.capnp'

In this case autogenerated CapnProto schema will be saved in file path/to/schema/schema.capnp.

Prometheus

Expose metrics in Prometheus text-based exposition format.

The output table should have a proper structure. The columns name (String) and value (number) are required. Rows may optionally contain help (String) and timestamp (number). The column type (String) is either counter, gauge, histogram, summary, untyped, or empty. Each metric value may also have some labels (Map(String, String)). Several consecutive rows may refer to the same metric with different labels. The table should be sorted by metric name (e.g., with ORDER BY name).

There are special requirements for labels of histogram and summary metrics; see the Prometheus documentation for details. Special rules apply to rows with the labels {'count':''} and {'sum':''}: they are converted to <metric_name>_count and <metric_name>_sum respectively.

Example:

┌─name────────────────────────────────┬─type──────┬─help──────────────────────────────────────┬─labels─────────────────────────┬────value─┬─────timestamp─┐
│ http_request_duration_seconds │ histogram │ A histogram of the request duration. │ {'le':'0.05'} │ 24054 │ 0 │
│ http_request_duration_seconds │ histogram │ │ {'le':'0.1'} │ 33444 │ 0 │
│ http_request_duration_seconds │ histogram │ │ {'le':'0.2'} │ 100392 │ 0 │
│ http_request_duration_seconds │ histogram │ │ {'le':'0.5'} │ 129389 │ 0 │
│ http_request_duration_seconds │ histogram │ │ {'le':'1'} │ 133988 │ 0 │
│ http_request_duration_seconds │ histogram │ │ {'le':'+Inf'} │ 144320 │ 0 │
│ http_request_duration_seconds │ histogram │ │ {'sum':''} │ 53423 │ 0 │
│ http_requests_total │ counter │ Total number of HTTP requests │ {'method':'post','code':'200'} │ 1027 │ 1395066363000 │
│ http_requests_total │ counter │ │ {'method':'post','code':'400'} │ 3 │ 1395066363000 │
│ metric_without_timestamp_and_labels │ │ │ {} │ 12.47 │ 0 │
│ rpc_duration_seconds │ summary │ A summary of the RPC duration in seconds. │ {'quantile':'0.01'} │ 3102 │ 0 │
│ rpc_duration_seconds │ summary │ │ {'quantile':'0.05'} │ 3272 │ 0 │
│ rpc_duration_seconds │ summary │ │ {'quantile':'0.5'} │ 4773 │ 0 │
│ rpc_duration_seconds │ summary │ │ {'quantile':'0.9'} │ 9001 │ 0 │
│ rpc_duration_seconds │ summary │ │ {'quantile':'0.99'} │ 76656 │ 0 │
│ rpc_duration_seconds │ summary │ │ {'count':''} │ 2693 │ 0 │
│ rpc_duration_seconds │ summary │ │ {'sum':''} │ 17560473 │ 0 │
│ something_weird │ │ │ {'problem':'division by zero'} │ inf │ -3982045 │
└─────────────────────────────────────┴───────────┴───────────────────────────────────────────┴────────────────────────────────┴──────────┴───────────────┘

Will be formatted as:

# HELP http_request_duration_seconds A histogram of the request duration.
# TYPE http_request_duration_seconds histogram
http_request_duration_seconds_bucket{le="0.05"} 24054
http_request_duration_seconds_bucket{le="0.1"} 33444
http_request_duration_seconds_bucket{le="0.5"} 129389
http_request_duration_seconds_bucket{le="1"} 133988
http_request_duration_seconds_bucket{le="+Inf"} 144320
http_request_duration_seconds_sum 53423
http_request_duration_seconds_count 144320

# HELP http_requests_total Total number of HTTP requests
# TYPE http_requests_total counter
http_requests_total{code="200",method="post"} 1027 1395066363000
http_requests_total{code="400",method="post"} 3 1395066363000

metric_without_timestamp_and_labels 12.47

# HELP rpc_duration_seconds A summary of the RPC duration in seconds.
# TYPE rpc_duration_seconds summary
rpc_duration_seconds{quantile="0.01"} 3102
rpc_duration_seconds{quantile="0.05"} 3272
rpc_duration_seconds{quantile="0.5"} 4773
rpc_duration_seconds{quantile="0.9"} 9001
rpc_duration_seconds{quantile="0.99"} 76656
rpc_duration_seconds_sum 17560473
rpc_duration_seconds_count 2693

something_weird{problem="division by zero"} +Inf -3982045
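
A query producing output like the above might look as follows (a sketch; the metrics table name is hypothetical and its columns mirror the structure described earlier):

SELECT name, type, help, labels, value, timestamp
FROM metrics
ORDER BY name
FORMAT Prometheus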

Protobuf

See Protobuf

ProtobufSingle

See ProtobufSingle

ProtobufList

See ProtobufList

Avro

Apache Avro is a row-oriented data serialization framework developed within Apache’s Hadoop project.

ClickHouse Avro format supports reading and writing Avro data files.

Data Types Matching

The table below shows supported data types and how they match ClickHouse data types in INSERT and SELECT queries.

| Avro data type (INSERT) | ClickHouse data type | Avro data type (SELECT) |
|-------------------------|----------------------|-------------------------|
| boolean, int, long, float, double | Int(8\16\32), UInt(8\16\32) | int |
| boolean, int, long, float, double | Int64, UInt64 | long |
| boolean, int, long, float, double | Float32 | float |
| boolean, int, long, float, double | Float64 | double |
| bytes, string, fixed, enum | String | bytes or string * |
| bytes, string, fixed | FixedString(N) | fixed(N) |
| enum | Enum(8\16) | enum |
| array(T) | Array(T) | array(T) |
| map(V, K) | Map(V, K) | map(string, K) |
| union(null, T), union(T, null) | Nullable(T) | union(null, T) |
| union(T1, T2, …) ** | Variant(T1, T2, …) | union(T1, T2, …) ** |
| null | Nullable(Nothing) | null |
| int (date) *** | Date, Date32 | int (date) *** |
| long (timestamp-millis) *** | DateTime64(3) | long (timestamp-millis) *** |
| long (timestamp-micros) *** | DateTime64(6) | long (timestamp-micros) *** |
| bytes (decimal) *** | DateTime64(N) | bytes (decimal) *** |
| int | IPv4 | int |
| fixed(16) | IPv6 | fixed(16) |
| bytes (decimal) *** | Decimal(P, S) | bytes (decimal) *** |
| string (uuid) *** | UUID | string (uuid) *** |
| fixed(16) | Int128/UInt128 | fixed(16) |
| fixed(32) | Int256/UInt256 | fixed(32) |
| record | Tuple | record |

* bytes is default, controlled by output_format_avro_string_column_pattern

** Variant type implicitly accepts null as a field value, so for example the Avro union(T1, T2, null) will be converted to Variant(T1, T2). As a result, when producing Avro from ClickHouse, we always have to include the null type in the Avro union type set, as we don't know whether any value is actually null during schema inference.

*** Avro logical types

Unsupported Avro logical data types: time-millis, time-micros, duration

Inserting Data

To insert data from an Avro file into ClickHouse table:

$ cat file.avro | clickhouse-client --query="INSERT INTO {some_table} FORMAT Avro"

The root schema of input Avro file must be of record type.

To find the correspondence between table columns and fields of Avro schema ClickHouse compares their names. This comparison is case-sensitive. Unused fields are skipped.

Data types of ClickHouse table columns can differ from the corresponding fields of the Avro data inserted. When inserting data, ClickHouse interprets data types according to the table above and then casts the data to corresponding column type.

While importing data, when a field is not found in the schema and the setting input_format_avro_allow_missing_fields is enabled, the default value is used instead of raising an error.

Selecting Data

To select data from ClickHouse table into an Avro file:

$ clickhouse-client --query="SELECT * FROM {some_table} FORMAT Avro" > file.avro

Column names must:

  • start with [A-Za-z_]
  • subsequently contain only [A-Za-z0-9_]

Output Avro file compression and sync interval can be configured with output_format_avro_codec and output_format_avro_sync_interval respectively.
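
For example (a sketch; the table and file names are hypothetical, and deflate is assumed to be one of the available codecs):

SELECT * FROM some_table
INTO OUTFILE 'file.avro'
FORMAT Avro
SETTINGS output_format_avro_codec = 'deflate', output_format_avro_sync_interval = 16384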

Example Data

Using the ClickHouse DESCRIBE statement, you can quickly view the inferred schema of an Avro file, as in the following example. The example uses the URL of a publicly accessible Avro file in the ClickHouse S3 public bucket:

DESCRIBE url('https://clickhouse-public-datasets.s3.eu-central-1.amazonaws.com/hits.avro','Avro');
┌─name───────────────────────┬─type────────────┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐
│ WatchID │ Int64 │ │ │ │ │ │
│ JavaEnable │ Int32 │ │ │ │ │ │
│ Title │ String │ │ │ │ │ │
│ GoodEvent │ Int32 │ │ │ │ │ │
│ EventTime │ Int32 │ │ │ │ │ │
│ EventDate │ Date32 │ │ │ │ │ │
│ CounterID │ Int32 │ │ │ │ │ │
│ ClientIP │ Int32 │ │ │ │ │ │
│ ClientIP6 │ FixedString(16) │ │ │ │ │ │
│ RegionID │ Int32 │ │ │ │ │ │
...
│ IslandID │ FixedString(16) │ │ │ │ │ │
│ RequestNum │ Int32 │ │ │ │ │ │
│ RequestTry │ Int32 │ │ │ │ │ │
└────────────────────────────┴─────────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘

AvroConfluent

AvroConfluent supports decoding single-object Avro messages commonly used with Kafka and Confluent Schema Registry.

Each Avro message embeds a schema id that can be resolved to the actual schema with help of the Schema Registry.

Schemas are cached once resolved.

Schema Registry URL is configured with format_avro_schema_registry_url.

Data Types Matching

Same as Avro.

Usage

To quickly verify schema resolution you can use kafkacat with clickhouse-local:

$ kafkacat -b kafka-broker  -C -t topic1 -o beginning -f '%s' -c 3 | clickhouse-local   --input-format AvroConfluent --format_avro_schema_registry_url 'http://schema-registry' -S "field1 Int64, field2 String"  -q 'select *  from table'
1 a
2 b
3 c

To use AvroConfluent with Kafka:

CREATE TABLE topic1_stream
(
field1 String,
field2 String
)
ENGINE = Kafka()
SETTINGS
kafka_broker_list = 'kafka-broker',
kafka_topic_list = 'topic1',
kafka_group_name = 'group1',
kafka_format = 'AvroConfluent';

-- for debug purposes you can set format_avro_schema_registry_url in a session.
-- this way cannot be used in production
SET format_avro_schema_registry_url = 'http://schema-registry';

SELECT * FROM topic1_stream;
Note

The setting format_avro_schema_registry_url needs to be configured in users.xml to maintain its value after a restart. You can also use the format_avro_schema_registry_url setting of the Kafka table engine.

Parquet

Apache Parquet is a columnar storage format widespread in the Hadoop ecosystem. ClickHouse supports read and write operations for this format.

Data Types Matching

The table below shows supported data types and how they match ClickHouse data types in INSERT and SELECT queries.

| Parquet data type (INSERT) | ClickHouse data type | Parquet data type (SELECT) |
|----------------------------|----------------------|----------------------------|
| BOOL | Bool | BOOL |
| UINT8, BOOL | UInt8 | UINT8 |
| INT8 | Int8/Enum8 | INT8 |
| UINT16 | UInt16 | UINT16 |
| INT16 | Int16/Enum16 | INT16 |
| UINT32 | UInt32 | UINT32 |
| INT32 | Int32 | INT32 |
| UINT64 | UInt64 | UINT64 |
| INT64 | Int64 | INT64 |
| FLOAT | Float32 | FLOAT |
| DOUBLE | Float64 | DOUBLE |
| DATE | Date32 | DATE |
| TIME (ms) | DateTime | UINT32 |
| TIMESTAMP, TIME (us, ns) | DateTime64 | TIMESTAMP |
| STRING, BINARY | String | BINARY |
| STRING, BINARY, FIXED_LENGTH_BYTE_ARRAY | FixedString | FIXED_LENGTH_BYTE_ARRAY |
| DECIMAL | Decimal | DECIMAL |
| LIST | Array | LIST |
| STRUCT | Tuple | STRUCT |
| MAP | Map | MAP |
| UINT32 | IPv4 | UINT32 |
| FIXED_LENGTH_BYTE_ARRAY, BINARY | IPv6 | FIXED_LENGTH_BYTE_ARRAY |
| FIXED_LENGTH_BYTE_ARRAY, BINARY | Int128/UInt128/Int256/UInt256 | FIXED_LENGTH_BYTE_ARRAY |

Arrays can be nested and can have a value of the Nullable type as an argument. Tuple and Map types also can be nested.

Unsupported Parquet data types: FIXED_SIZE_BINARY, JSON, UUID, ENUM.

Data types of ClickHouse table columns can differ from the corresponding fields of the Parquet data inserted. When inserting data, ClickHouse interprets data types according to the table above and then casts the data to the data type that is set for the ClickHouse table column.

Inserting and Selecting Data

You can insert Parquet data from a file into ClickHouse table by the following command:

$ cat {filename} | clickhouse-client --query="INSERT INTO {some_table} FORMAT Parquet"

You can select data from a ClickHouse table and save them into some file in the Parquet format by the following command:

$ clickhouse-client --query="SELECT * FROM {some_table} FORMAT Parquet" > {some_file.pq}

To exchange data with Hadoop, you can use HDFS table engine.
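
Parquet files can also be queried in place through the file table function (a sketch; the file name is hypothetical):

SELECT * FROM file('data.parquet', Parquet) LIMIT 10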

Parquet format settings

ParquetMetadata

Special format for reading Parquet file metadata (https://parquet.apache.org/docs/file-format/metadata/). It always outputs one row with the following structure/content:

  • num_columns - the number of columns
  • num_rows - the total number of rows
  • num_row_groups - the total number of row groups
  • format_version - parquet format version, always 1.0 or 2.6
  • metadata_size - the size of the metadata in bytes
  • total_uncompressed_size - total uncompressed bytes size of the data, calculated as the sum of total_byte_size from all row groups
  • total_compressed_size - total compressed bytes size of the data, calculated as the sum of total_compressed_size from all row groups
  • columns - the list of column metadata with the following structure:
    • name - column name
    • path - column path (differs from name for nested columns)
    • max_definition_level - maximum definition level
    • max_repetition_level - maximum repetition level
    • physical_type - column physical type
    • logical_type - column logical type
    • compression - compression used for this column
    • total_uncompressed_size - total uncompressed bytes size of the column, calculated as the sum of total_uncompressed_size of the column from all row groups
    • total_compressed_size - total compressed bytes size of the column, calculated as the sum of total_compressed_size of the column from all row groups
    • space_saved - percent of space saved by compression, calculated as (1 - total_compressed_size/total_uncompressed_size).
    • encodings - the list of encodings used for this column
  • row_groups - the list of row group metadata with the following structure:
    • num_columns - the number of columns in the row group
    • num_rows - the number of rows in the row group
    • total_uncompressed_size - total uncompressed bytes size of the row group
    • total_compressed_size - total compressed bytes size of the row group
    • columns - the list of column chunk metadata with the following structure:
      • name - column name
      • path - column path
      • total_compressed_size - total compressed bytes size of the column
      • total_uncompressed_size - total uncompressed bytes size of the column
      • have_statistics - boolean flag that indicates whether the column chunk metadata contains column statistics
      • statistics - column chunk statistics (all fields are NULL if have_statistics = false) with the following structure:
        • num_values - the number of non-null values in the column chunk
        • null_count - the number of NULL values in the column chunk
        • distinct_count - the number of distinct values in the column chunk
        • min - the minimum value of the column chunk
        • max - the maximum value of the column chunk

Example:

SELECT * FROM file(data.parquet, ParquetMetadata) format PrettyJSONEachRow
{
"num_columns": "2",
"num_rows": "100000",
"num_row_groups": "2",
"format_version": "2.6",
"metadata_size": "577",
"total_uncompressed_size": "282436",
"total_compressed_size": "26633",
"columns": [
{
"name": "number",
"path": "number",
"max_definition_level": "0",
"max_repetition_level": "0",
"physical_type": "INT32",
"logical_type": "Int(bitWidth=16, isSigned=false)",
"compression": "LZ4",
"total_uncompressed_size": "133321",
"total_compressed_size": "13293",
"space_saved": "90.03%",
"encodings": [
"RLE_DICTIONARY",
"PLAIN",
"RLE"
]
},
{
"name": "concat('Hello', toString(modulo(number, 1000)))",
"path": "concat('Hello', toString(modulo(number, 1000)))",
"max_definition_level": "0",
"max_repetition_level": "0",
"physical_type": "BYTE_ARRAY",
"logical_type": "None",
"compression": "LZ4",
"total_uncompressed_size": "149115",
"total_compressed_size": "13340",
"space_saved": "91.05%",
"encodings": [
"RLE_DICTIONARY",
"PLAIN",
"RLE"
]
}
],
"row_groups": [
{
"num_columns": "2",
"num_rows": "65409",
"total_uncompressed_size": "179809",
"total_compressed_size": "14163",
"columns": [
{
"name": "number",
"path": "number",
"total_compressed_size": "7070",
"total_uncompressed_size": "85956",
"have_statistics": true,
"statistics": {
"num_values": "65409",
"null_count": "0",
"distinct_count": null,
"min": "0",
"max": "999"
}
},
{
"name": "concat('Hello', toString(modulo(number, 1000)))",
"path": "concat('Hello', toString(modulo(number, 1000)))",
"total_compressed_size": "7093",
"total_uncompressed_size": "93853",
"have_statistics": true,
"statistics": {
"num_values": "65409",
"null_count": "0",
"distinct_count": null,
"min": "Hello0",
"max": "Hello999"
}
}
]
},
...
]
}

Arrow

See Arrow

ArrowStream

See ArrowStream

ORC

Apache ORC is a columnar storage format widespread in the Hadoop ecosystem.

Data Types Matching

The table below shows supported data types and how they match ClickHouse data types in INSERT and SELECT queries.

| ORC data type (INSERT) | ClickHouse data type | ORC data type (SELECT) |
|------------------------|----------------------|------------------------|
| Boolean | UInt8 | Boolean |
| Tinyint | Int8/UInt8/Enum8 | Tinyint |
| Smallint | Int16/UInt16/Enum16 | Smallint |
| Int | Int32/UInt32 | Int |
| Bigint | Int64/UInt64 | Bigint |
| Float | Float32 | Float |
| Double | Float64 | Double |
| Decimal | Decimal | Decimal |
| Date | Date32 | Date |
| Timestamp | DateTime64 | Timestamp |
| String, Char, Varchar, Binary | String | Binary |
| List | Array | List |
| Struct | Tuple | Struct |
| Map | Map | Map |
| Int | IPv4 | Int |
| Binary | IPv6 | Binary |
| Binary | Int128/UInt128/Int256/UInt256 | Binary |
| Binary | Decimal256 | Binary |

Other types are not supported.

Arrays can be nested and can have a value of the Nullable type as an argument. Tuple and Map types also can be nested.

The data types of ClickHouse table columns do not have to match the corresponding ORC data fields. When inserting data, ClickHouse interprets data types according to the table above and then casts the data to the data type set for the ClickHouse table column.

Inserting Data

You can insert ORC data from a file into ClickHouse table by the following command:

$ cat filename.orc | clickhouse-client --query="INSERT INTO some_table FORMAT ORC"

Selecting Data

You can select data from a ClickHouse table and save them into some file in the ORC format by the following command:

$ clickhouse-client --query="SELECT * FROM {some_table} FORMAT ORC" > {filename.orc}

Arrow format settings

To exchange data with Hadoop, you can use HDFS table engine.

One

Special input format that does not read any data from the file and returns only one row with a column of type UInt8, named dummy, with value 0 (like the system.one table). Can be used with the virtual columns _file/_path to list all files without reading actual data.

Example:

Query:

SELECT _file FROM file('path/to/files/data*', One);

Result:

┌─_file────┐
│ data.csv │
└──────────┘
┌─_file──────┐
│ data.jsonl │
└────────────┘
┌─_file────┐
│ data.tsv │
└──────────┘
┌─_file────────┐
│ data.parquet │
└──────────────┘

Npy

This format is designed to load a NumPy array from a .npy file into ClickHouse. The NumPy file format is a binary format used for efficiently storing arrays of numerical data. During import, ClickHouse treats the top-level dimension as an array of rows with a single column. Supported Npy data types and their corresponding ClickHouse types:

| Npy data type (INSERT) | ClickHouse data type | Npy data type (SELECT) |
|------------------------|----------------------|------------------------|
| i1 | Int8 | i1 |
| i2 | Int16 | i2 |
| i4 | Int32 | i4 |
| i8 | Int64 | i8 |
| u1, b1 | UInt8 | u1 |
| u2 | UInt16 | u2 |
| u4 | UInt32 | u4 |
| u8 | UInt64 | u8 |
| f2, f4 | Float32 | f4 |
| f8 | Float64 | f8 |
| S, U | String | S |
| | FixedString | S |

Example of saving an array in .npy format using Python

import numpy as np
arr = np.array([[[1],[2],[3]],[[4],[5],[6]]])
np.save('example_array.npy', arr)

Example of reading a NumPy file in ClickHouse

Query:

SELECT *
FROM file('example_array.npy', Npy)

Result:

┌─array─────────┐
│ [[1],[2],[3]] │
│ [[4],[5],[6]] │
└───────────────┘

Selecting Data

You can select data from a ClickHouse table and save them into some file in the Npy format by the following command:

$ clickhouse-client --query="SELECT {column} FROM {some_table} FORMAT Npy" > {filename.npy}

LineAsString

In this format, every line of input data is interpreted as a single string value. This format can only be parsed for a table with a single field of type String. The remaining columns must be set to DEFAULT or MATERIALIZED, or omitted.

Example

Query:

DROP TABLE IF EXISTS line_as_string;
CREATE TABLE line_as_string (field String) ENGINE = Memory;
INSERT INTO line_as_string FORMAT LineAsString "I love apple", "I love banana", "I love orange";
SELECT * FROM line_as_string;

Result:

┌─field─────────────────────────────────────────────┐
│ "I love apple", "I love banana", "I love orange"; │
└───────────────────────────────────────────────────┘

Regexp

Each line of imported data is parsed according to the regular expression.

When working with the Regexp format, you can use the following settings:

  • format_regexp - String. Contains regular expression in the re2 format.

  • format_regexp_escaping_rule - String. The following escaping rules are supported:

    • CSV (similarly to CSV)
    • JSON (similarly to JSONEachRow)
    • Escaped (similarly to TSV)
    • Quoted (similarly to Values)
    • Raw (extracts subpatterns as a whole, no escaping rules, similarly to TSVRaw)
  • format_regexp_skip_unmatched - UInt8. Defines the need to throw an exception in case the format_regexp expression does not match the imported data. Can be set to 0 or 1.

Usage

The regular expression from the format_regexp setting is applied to every line of imported data. The number of subpatterns in the regular expression must be equal to the number of columns in the imported dataset.

Lines of the imported data must be separated by newline character '\n' or DOS-style newline "\r\n".

The content of every matched subpattern is parsed with the method of corresponding data type, according to format_regexp_escaping_rule setting.

If the regular expression does not match the line and format_regexp_skip_unmatched is set to 1, the line is silently skipped. Otherwise, an exception is thrown.

Example

Consider the file data.tsv:

id: 1 array: [1,2,3] string: str1 date: 2020-01-01
id: 2 array: [1,2,3] string: str2 date: 2020-01-02
id: 3 array: [1,2,3] string: str3 date: 2020-01-03

and the table:

CREATE TABLE imp_regex_table (id UInt32, array Array(UInt32), string String, date Date) ENGINE = Memory;

Import command:

$ cat data.tsv | clickhouse-client  --query "INSERT INTO imp_regex_table SETTINGS format_regexp='id: (.+?) array: (.+?) string: (.+?) date: (.+?)', format_regexp_escaping_rule='Escaped', format_regexp_skip_unmatched=0 FORMAT Regexp;"

Query:

SELECT * FROM imp_regex_table;

Result:

┌─id─┬─array───┬─string─┬───────date─┐
│ 1 │ [1,2,3] │ str1 │ 2020-01-01 │
│ 2 │ [1,2,3] │ str2 │ 2020-01-02 │
│ 3 │ [1,2,3] │ str3 │ 2020-01-03 │
└────┴─────────┴────────┴────────────┘

Format Schema

The file name containing the format schema is set by the setting format_schema. This setting is required when using one of the formats Cap'n Proto or Protobuf. The format schema is a combination of a file name and the name of a message type in this file, delimited by a colon, e.g. schemafile.proto:MessageType. If the file has the standard extension for the format (for example, .proto for Protobuf), it can be omitted, and in this case the format schema looks like schemafile:MessageType.

If you input or output data via the client in interactive mode, the file name specified in the format schema can contain an absolute path or a path relative to the current directory on the client. If you use the client in batch mode, the path to the schema must be relative, for security reasons.

If you input or output data via the HTTP interface the file name specified in the format schema should be located in the directory specified in format_schema_path in the server configuration.
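
For example, with a hypothetical schema file schemafile.proto containing a message type MessageType, an export query might look like this:

SELECT * FROM some_table
FORMAT Protobuf
SETTINGS format_schema = 'schemafile:MessageType'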

Skipping Errors

Some formats such as CSV, TabSeparated, TSKV, JSONEachRow, Template, CustomSeparated and Protobuf can skip a broken row if a parsing error occurs and continue parsing from the beginning of the next row. See the input_format_allow_errors_num and input_format_allow_errors_ratio settings. Limitations:

  • In case of a parsing error JSONEachRow skips all data until the new line (or EOF), so rows must be delimited by \n to count errors correctly.
  • Template and CustomSeparated use the delimiter after the last column and the delimiter between rows to find the beginning of the next row, so skipping errors works only if at least one of them is not empty.
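
For example, a sketch that tolerates up to 10 broken rows (and at most 10% of all rows) while loading a hypothetical CSV file through the file table function:

INSERT INTO some_table
SELECT * FROM file('data.csv', CSV)
SETTINGS input_format_allow_errors_num = 10, input_format_allow_errors_ratio = 0.1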

RawBLOB

In this format, all input data is read to a single value. It is possible to parse only a table with a single field of type String or similar. The result is output in binary format without delimiters and escaping. If more than one value is output, the format is ambiguous, and it will be impossible to read the data back.

Below is a comparison of the formats RawBLOB and TabSeparatedRaw.

RawBLOB:

  • data is output in binary format, no escaping;
  • there are no delimiters between values;
  • no newline at the end of each value.

TabSeparatedRaw:

  • data is output without escaping;
  • the rows contain values separated by tabs;
  • there is a line feed after the last value in every row.

The following is a comparison of the RawBLOB and RowBinary formats.

RawBLOB:

  • String fields are output without being prefixed by length.

RowBinary:

  • String fields are represented as a length in varint format (unsigned LEB128), followed by the bytes of the string.

When empty data is passed to the RawBLOB input, ClickHouse throws an exception:

Code: 108. DB::Exception: No data to insert

Example

$ clickhouse-client --query "CREATE TABLE {some_table} (a String) ENGINE = Memory;"
$ cat {filename} | clickhouse-client --query="INSERT INTO {some_table} FORMAT RawBLOB"
$ clickhouse-client --query "SELECT * FROM {some_table} FORMAT RawBLOB" | md5sum

Result:

f9725a22f9191e064120d718e26862a9  -

MsgPack

ClickHouse supports reading and writing MessagePack data files.

Data Types Matching

| MessagePack data type (INSERT) | ClickHouse data type | MessagePack data type (SELECT) |
|--------------------------------|----------------------|--------------------------------|
| uint N, positive fixint | UIntN | uint N |
| int N, negative fixint | IntN | int N |
| bool | UInt8 | uint 8 |
| fixstr, str 8, str 16, str 32, bin 8, bin 16, bin 32 | String | bin 8, bin 16, bin 32 |
| fixstr, str 8, str 16, str 32, bin 8, bin 16, bin 32 | FixedString | bin 8, bin 16, bin 32 |
| float 32 | Float32 | float 32 |
| float 64 | Float64 | float 64 |
| uint 16 | Date | uint 16 |
| int 32 | Date32 | int 32 |
| uint 32 | DateTime | uint 32 |
| uint 64 | DateTime64 | uint 64 |
| fixarray, array 16, array 32 | Array/Tuple | fixarray, array 16, array 32 |
| fixmap, map 16, map 32 | Map | fixmap, map 16, map 32 |
| uint 32 | IPv4 | uint 32 |
| bin 8 | String | bin 8 |
| int 8 | Enum8 | int 8 |
| bin 8 | (U)Int128/(U)Int256 | bin 8 |
| int 32 | Decimal32 | int 32 |
| int 64 | Decimal64 | int 64 |
| bin 8 | Decimal128/Decimal256 | bin 8 |

Example:

Writing to a file ".msgpk":

$ clickhouse-client --query="CREATE TABLE msgpack (array Array(UInt8)) ENGINE = Memory;"
$ clickhouse-client --query="INSERT INTO msgpack VALUES ([0, 1, 2, 3, 42, 253, 254, 255]), ([255, 254, 253, 42, 3, 2, 1, 0])";
$ clickhouse-client --query="SELECT * FROM msgpack FORMAT MsgPack" > tmp_msgpack.msgpk;

MsgPack format settings

MySQLDump

ClickHouse supports reading MySQL dumps. It reads all data from the INSERT queries belonging to one table in the dump. If there is more than one table, by default it reads data from the first one. You can specify the name of the table to read data from using the input_format_mysql_dump_table_name setting. If the setting input_format_mysql_dump_map_columns is set to 1 and the dump contains a CREATE query for the specified table, or column names in the INSERT query, the columns from the input data will be mapped to the columns of the table by name; columns with unknown names will be skipped if the setting input_format_skip_unknown_fields is set to 1. This format supports schema inference: if the dump contains a CREATE query for the specified table, the structure is extracted from it; otherwise, the schema is inferred from the data of the INSERT queries.

Examples:

File dump.sql:

/*!40101 SET @saved_cs_client     = @@character_set_client */;
/*!50503 SET character_set_client = utf8mb4 */;
CREATE TABLE `test` (
`x` int DEFAULT NULL,
`y` int DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci;
/*!40101 SET character_set_client = @saved_cs_client */;
INSERT INTO `test` VALUES (1,NULL),(2,NULL),(3,NULL),(3,NULL),(4,NULL),(5,NULL),(6,7);
/*!40101 SET @saved_cs_client = @@character_set_client */;
/*!50503 SET character_set_client = utf8mb4 */;
CREATE TABLE `test 3` (
`y` int DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci;
/*!40101 SET character_set_client = @saved_cs_client */;
INSERT INTO `test 3` VALUES (1);
/*!40101 SET @saved_cs_client = @@character_set_client */;
/*!50503 SET character_set_client = utf8mb4 */;
CREATE TABLE `test2` (
`x` int DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci;
/*!40101 SET character_set_client = @saved_cs_client */;
INSERT INTO `test2` VALUES (1),(2),(3);

Queries:

DESCRIBE TABLE file(dump.sql, MySQLDump) SETTINGS input_format_mysql_dump_table_name = 'test2'
┌─name─┬─type────────────┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐
│ x │ Nullable(Int32) │ │ │ │ │ │
└──────┴─────────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘
SELECT *
FROM file(dump.sql, MySQLDump)
SETTINGS input_format_mysql_dump_table_name = 'test2'
┌─x─┐
│ 1 │
│ 2 │
│ 3 │
└───┘

DWARF

Parses DWARF debug symbols from an ELF file (executable, library, or object file). Similar to dwarfdump, but much faster (hundreds of MB/s) and with SQL. Produces one row for each Debug Information Entry (DIE) in the .debug_info section. Includes "null" entries that the DWARF encoding uses to terminate lists of children in the tree.

Quick background: .debug_info consists of units, corresponding to compilation units. Each unit is a tree of DIEs, with a compile_unit DIE as its root. Each DIE has a tag and a list of attributes. Each attribute has a name and a value (and also a form, which specifies how the value is encoded). The DIEs represent things from the source code, and their tag tells what kind of thing it is. E.g. there are functions (tag = subprogram), classes/structs/enums (class_type/structure_type/enumeration_type), variables (variable), function arguments (formal_parameter). The tree structure mirrors the corresponding source code. E.g. a class_type DIE can contain subprogram DIEs representing methods of the class.

Outputs the following columns:

  • offset - position of the DIE in the .debug_info section
  • size - number of bytes in the encoded DIE (including attributes)
  • tag - type of the DIE; the conventional "DW_TAG_" prefix is omitted
  • unit_name - name of the compilation unit containing this DIE
  • unit_offset - position of the compilation unit containing this DIE in the .debug_info section
  • ancestor_tags - array of tags of the ancestors of the current DIE in the tree, in order from innermost to outermost
  • ancestor_offsets - offsets of ancestors, parallel to ancestor_tags
  • a few common attributes duplicated from the attributes array for convenience:
    • name
    • linkage_name - mangled fully-qualified name; typically only functions have it (but not all functions)
    • decl_file - name of the source code file where this entity was declared
    • decl_line - line number in the source code where this entity was declared
  • parallel arrays describing attributes:
    • attr_name - name of the attribute; the conventional "DW_AT_" prefix is omitted
    • attr_form - how the attribute is encoded and interpreted; the conventional DW_FORM_ prefix is omitted
    • attr_int - integer value of the attribute; 0 if the attribute doesn't have a numeric value
    • attr_str - string value of the attribute; empty if the attribute doesn't have a string value

Example: find compilation units that have the most function definitions (including template instantiations and functions from included header files):

SELECT
unit_name,
count() AS c
FROM file('programs/clickhouse', DWARF)
WHERE tag = 'subprogram' AND NOT has(attr_name, 'declaration')
GROUP BY unit_name
ORDER BY c DESC
LIMIT 3
┌─unit_name──────────────────────────────────────────────────┬─────c─┐
│ ./src/Core/Settings.cpp │ 28939 │
│ ./src/AggregateFunctions/AggregateFunctionSumMap.cpp │ 23327 │
│ ./src/AggregateFunctions/AggregateFunctionUniqCombined.cpp │ 22649 │
└────────────────────────────────────────────────────────────┴───────┘

3 rows in set. Elapsed: 1.487 sec. Processed 139.76 million rows, 1.12 GB (93.97 million rows/s., 752.77 MB/s.)
Peak memory usage: 271.92 MiB.

Markdown

You can export results using Markdown format to generate output ready to be pasted into your .md files:

SELECT
number,
number * 2
FROM numbers(5)
FORMAT Markdown
| number | multiply(number, 2) |
|-:|-:|
| 0 | 0 |
| 1 | 2 |
| 2 | 4 |
| 3 | 6 |
| 4 | 8 |

The Markdown table is generated automatically and can be used on Markdown-enabled platforms, like GitHub. This format is used only for output.

Form

The Form format can be used to read or write a single record in the application/x-www-form-urlencoded format, in which data is formatted as key1=value1&key2=value2.

Examples:

Given a file data.tmp placed in the user_files path with some URL encoded data:

t_page=116&c.e=ls7xfkpm&c.tti.m=raf&rt.start=navigation&rt.bmr=390%2C11%2C10
SELECT * FROM file(data.tmp, Form) FORMAT vertical;

Result:

Row 1:
──────
t_page: 116
c.e: ls7xfkpm
c.tti.m: raf
rt.start: navigation
rt.bmr: 390,11,10