Format Settings
These settings are autogenerated from source.
bool_false_representation
Type: String
Default value: false
Text to represent false bool value in TSV/CSV/Vertical/Pretty formats.
bool_true_representation
Type: String
Default value: true
Text to represent true bool value in TSV/CSV/Vertical/Pretty formats.
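Example (a minimal sketch; the CSV output shown is what these settings would produce):
SET bool_true_representation = 'yes', bool_false_representation = 'no';
SELECT CAST(1, 'Bool') AS b FORMAT CSV;
-- yes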
column_names_for_schema_inference
Type: String
Default value:
The list of column names to use in schema inference for formats without column names. The format: 'column1,column2,column3,...'
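Example (a sketch; 'id' and 'message' are arbitrary names chosen for illustration):
DESC format(CSV, '1,hello') SETTINGS column_names_for_schema_inference = 'id,message';
-- id        Nullable(Int64)
-- message   Nullable(String)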
composed_data_type_output_format_mode
Type: String
Default value: default
Set output format mode for composed data types like Array, Map, Tuple. Possible values: 'default', 'spark'.
In 'default' mode, the output format is the same as in the previous versions of ClickHouse,
- Arrays are displayed without spaces between elements.
- Maps use curly braces `{}` and colons `:` to separate keys and values.
- Tuples are displayed with single quotes around string elements.
Example of 'default' mode:
┌─[1, 2, 3]─┬─map('a', 1, 'b', 2)─┬─(123, 'abc')─┐
│ [1,2,3] │ {'a':1,'b':2} │ (123,'abc') │
└───────────┴─────────────────────┴──────────────┘
In 'spark' mode, the output format is similar to Apache Spark:
- Arrays are displayed with spaces between elements.
- Maps use curly braces `{}` and arrows `->` to separate keys and values.
- Tuples are displayed without single quotes around string elements.
Example of 'spark' mode:
┌─[1, 2, 3]─┬─map('a', 1, 'b', 2)─┬─(123, 'abc')─┐
│ [1, 2, 3] │ {a -> 1, b -> 2} │ (123, abc) │
└───────────┴─────────────────────┴──────────────┘
cross_to_inner_join_rewrite
Type: UInt64
Default value: 1
Use inner join instead of comma/cross join if there are joining expressions in the WHERE section. Values: 0 - no rewrite, 1 - apply if possible for comma/cross, 2 - force rewrite of all comma joins, and of cross joins if possible.
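Example (a sketch; t1 and t2 are hypothetical tables with an id column):
SET cross_to_inner_join_rewrite = 1;
-- the joining expression in WHERE lets this comma join run as an INNER JOIN
SELECT * FROM t1, t2 WHERE t1.id = t2.id;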
date_time_64_output_format_cut_trailing_zeros_align_to_groups_of_thousands
Type: Bool
Default value: 0
Dynamically trim the trailing zeros of DateTime64 values to adjust the output scale to [0, 3, 6], corresponding to 'seconds', 'milliseconds', and 'microseconds'.
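Example (a sketch of the expected effect):
SET date_time_64_output_format_cut_trailing_zeros_align_to_groups_of_thousands = 1;
SELECT toDateTime64('2024-01-01 12:00:00.123000', 6);
-- 2024-01-01 12:00:00.123 (scale trimmed from 6 to 3)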
date_time_input_format
Type: DateTimeInputFormat
Default value: basic
Allows choosing a parser of the text representation of date and time.
The setting does not apply to date and time functions.
Possible values:
- 'best_effort' — Enables extended parsing. ClickHouse can parse the basic `YYYY-MM-DD HH:MM:SS` format and all ISO 8601 date and time formats. For example, `'2018-06-08T01:02:03.000Z'`.
- 'basic' — Use basic parser. ClickHouse can parse only the basic `YYYY-MM-DD HH:MM:SS` or `YYYY-MM-DD` format. For example, `2019-08-20 10:18:56` or `2019-08-20`.
Cloud default value: 'best_effort'.
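Example (a sketch using the format table function to parse an ISO 8601 value from CSV input):
SELECT * FROM format(CSV, 'd DateTime', '"2018-06-08T01:02:03Z"')
SETTINGS date_time_input_format = 'best_effort';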
date_time_output_format
Type: DateTimeOutputFormat
Default value: simple
Allows choosing different output formats of the text representation of date and time.
Possible values:
- simple — Simple output format. ClickHouse outputs date and time in `YYYY-MM-DD hh:mm:ss` format. For example, `2019-08-20 10:18:56`. The calculation is performed according to the data type's time zone (if present) or the server time zone.
- iso — ISO output format. ClickHouse outputs date and time in ISO 8601 `YYYY-MM-DDThh:mm:ssZ` format. For example, `2019-08-20T10:18:56Z`. Note that the output is in UTC (`Z` means UTC).
- unix_timestamp — Unix timestamp output format. ClickHouse outputs date and time in Unix timestamp format. For example, `1566285536`.
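Example:
SET date_time_output_format = 'iso';
SELECT toDateTime('2019-08-20 10:18:56', 'UTC');
-- 2019-08-20T10:18:56Z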
date_time_overflow_behavior
Type: DateTimeOverflowBehavior
Default value: ignore
Defines the behavior when Date, Date32, DateTime, DateTime64 or integers are converted into Date, Date32, DateTime or DateTime64 but the value cannot be represented in the result type.
Possible values:
- ignore — Silently ignore overflows. The result is random.
- throw — Throw an exception in case of conversion overflow.
- saturate — Silently saturate the result. If the value is smaller than the smallest value that can be represented by the target type, the result is the smallest representable value; if it is bigger than the largest representable value, the result is the largest representable value.
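Example (a sketch of the saturating behavior; 1900-01-01 is below the DateTime range, so the result clamps to the smallest representable value, shown here assuming a UTC server time zone):
SET date_time_overflow_behavior = 'saturate';
SELECT toDateTime(toDate32('1900-01-01'));
-- 1970-01-01 00:00:00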
dictionary_use_async_executor
Type: Bool
Default value: 0
Execute a pipeline for reading dictionary source in several threads. It's supported only by dictionaries with local CLICKHOUSE source.
errors_output_format
Type: String
Default value: CSV
Method to write Errors to text output.
exact_rows_before_limit
Type: Bool
Default value: 0
When enabled, ClickHouse will provide an exact value for the rows_before_limit_at_least statistic, at the cost of fully reading the data before the limit.
format_avro_schema_registry_url
Type: URI
Default value:
For AvroConfluent format: Confluent Schema Registry URL.
format_binary_max_array_size
Type: UInt64
Default value: 1073741824
The maximum allowed size for Array in RowBinary format. It prevents allocating a large amount of memory in case of corrupted data. 0 means there is no limit.
format_binary_max_string_size
Type: UInt64
Default value: 1073741824
The maximum allowed size for String in RowBinary format. It prevents allocating a large amount of memory in case of corrupted data. 0 means there is no limit.
format_capn_proto_enum_comparising_mode
Type: CapnProtoEnumComparingMode
Default value: by_values
How to map ClickHouse Enum and CapnProto Enum
format_capn_proto_use_autogenerated_schema
Type: Bool
Default value: 1
Use autogenerated CapnProto schema when format_schema is not set
format_csv_allow_double_quotes
Type: Bool
Default value: 1
If it is set to true, allow strings in double quotes.
format_csv_allow_single_quotes
Type: Bool
Default value: 0
If it is set to true, allow strings in single quotes.
format_csv_delimiter
Type: Char
Default value: ,
The character to be considered as a delimiter in CSV data. If set with a string, the string must have a length of 1.
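Example, reading semicolon-separated data:
SELECT * FROM format(CSV, 'a UInt32, b String', '1;hello')
SETTINGS format_csv_delimiter = ';';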
format_csv_null_representation
Type: String
Default value: \N
Custom NULL representation in CSV format
format_custom_escaping_rule
Type: EscapingRule
Default value: Escaped
Field escaping rule (for CustomSeparated format)
format_custom_field_delimiter
Type: String
Default value:
Delimiter between fields (for CustomSeparated format)
format_custom_result_after_delimiter
Type: String
Default value:
Suffix after result set (for CustomSeparated format)
format_custom_result_before_delimiter
Type: String
Default value:
Prefix before result set (for CustomSeparated format)
format_custom_row_after_delimiter
Type: String
Default value:
Delimiter after field of the last column (for CustomSeparated format)
format_custom_row_before_delimiter
Type: String
Default value:
Delimiter before field of the first column (for CustomSeparated format)
format_custom_row_between_delimiter
Type: String
Default value:
Delimiter between rows (for CustomSeparated format)
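Example (a sketch combining several of the format_custom_* settings above):
SET format_custom_escaping_rule = 'CSV',
    format_custom_field_delimiter = ';',
    format_custom_row_between_delimiter = '\n';
SELECT * FROM format(CustomSeparated, 'a UInt32, b String', '1;hello');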
format_display_secrets_in_show_and_select
Type: Bool
Default value: 0
Enables or disables showing secrets in SHOW and SELECT queries for tables, databases, table functions, and dictionaries.
A user wishing to see secrets must also have the display_secrets_in_show_and_select server setting turned on and the displaySecretsInShowAndSelect privilege.
Possible values:
- 0 — Disabled.
- 1 — Enabled.
format_json_object_each_row_column_for_object_name
Type: String
Default value:
The name of the column that will be used for storing/writing object names in the JSONObjectEachRow format.
Column type should be String. If the value is empty, default names `row_{i}` will be used for object names.
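Example (a sketch; the name column supplies the object keys and is omitted from each object's body):
SET format_json_object_each_row_column_for_object_name = 'name';
SELECT toString(number) AS name, number * 10 AS value FROM numbers(2) FORMAT JSONObjectEachRow;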
format_protobuf_use_autogenerated_schema
Type: Bool
Default value: 1
Use autogenerated Protobuf schema when format_schema is not set
format_regexp
Type: String
Default value:
Regular expression (for Regexp format)
format_regexp_escaping_rule
Type: EscapingRule
Default value: Raw
Field escaping rule (for Regexp format)
format_regexp_skip_unmatched
Type: Bool
Default value: 0
Skip lines unmatched by regular expression (for Regexp format)
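Example (a sketch combining the format_regexp_* settings; each regex subgroup maps to one column, and the unmatched middle line is skipped):
SET format_regexp = '(\\d+) (.+)', format_regexp_escaping_rule = 'Raw', format_regexp_skip_unmatched = 1;
SELECT * FROM format(Regexp, 'id UInt32, name String', '1 Alice\n# comment\n2 Bob');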
format_schema
Type: String
Default value:
This parameter is useful when you are using formats that require a schema definition, such as Cap’n Proto or Protobuf. The value depends on the format.
format_template_resultset
Type: String
Default value:
Path to file which contains format string for result set (for Template format)
format_template_resultset_format
Type: String
Default value:
Format string for result set (for Template format)
format_template_row
Type: String
Default value:
Path to file which contains format string for rows (for Template format)
format_template_row_format
Type: String
Default value:
Format string for rows (for Template format)
format_template_rows_between_delimiter
Type: String
Default value:
Delimiter between rows (for Template format)
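Example (a sketch using the inline format-string variants of the Template settings, assuming a server version that supports format_template_row_format):
SET format_template_row_format = 'x = ${x:CSV}', format_template_rows_between_delimiter = '\n';
SELECT number AS x FROM numbers(2) FORMAT Template;
-- x = 0
-- x = 1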
format_tsv_null_representation
Type: String
Default value: \N
Custom NULL representation in TSV format
input_format_allow_errors_num
Type: UInt64
Default value: 0
Sets the maximum number of acceptable errors when reading from text formats (CSV, TSV, etc.).
Always pair it with input_format_allow_errors_ratio.
If an error occurred while reading rows but the error counter is still less than input_format_allow_errors_num, ClickHouse ignores the row and moves on to the next one.
If both input_format_allow_errors_num and input_format_allow_errors_ratio are exceeded, ClickHouse throws an exception.
input_format_allow_errors_ratio
Type: Float
Default value: 0
Sets the maximum percentage of errors allowed when reading from text formats (CSV, TSV, etc.). The percentage of errors is set as a floating-point number between 0 and 1.
Always pair it with input_format_allow_errors_num.
If an error occurred while reading rows but the error counter is still less than input_format_allow_errors_ratio, ClickHouse ignores the row and moves on to the next one.
If both input_format_allow_errors_num and input_format_allow_errors_ratio are exceeded, ClickHouse throws an exception.
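Example (a sketch of both thresholds in action; the unparsable second row is skipped because neither limit is exceeded):
SELECT * FROM format(CSV, 'n UInt32', '1\nbroken\n3')
SETTINGS input_format_allow_errors_num = 1, input_format_allow_errors_ratio = 0.5;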
input_format_allow_seeks
Type: Bool
Default value: 1
Allow seeks while reading in ORC/Parquet/Arrow input formats.
Enabled by default.
input_format_arrow_allow_missing_columns
Type: Bool
Default value: 1
Allow missing columns while reading Arrow input formats
input_format_arrow_case_insensitive_column_matching
Type: Bool
Default value: 0
Ignore case when matching Arrow columns with CH columns.
input_format_arrow_skip_columns_with_unsupported_types_in_schema_inference
Type: Bool
Default value: 0
Skip columns with unsupported types while schema inference for format Arrow
input_format_avro_allow_missing_fields
Type: Bool
Default value: 0
For Avro/AvroConfluent format: when field is not found in schema use default value instead of error
input_format_avro_null_as_default
Type: Bool
Default value: 0
For Avro/AvroConfluent format: insert default in case of null and non Nullable column
input_format_binary_decode_types_in_binary_format
Type: Bool
Default value: 0
Read data types in binary format instead of type names in RowBinaryWithNamesAndTypes input format
input_format_binary_read_json_as_string
Type: Bool
Default value: 0
Read values of JSON data type as JSON String values in RowBinary input format.
input_format_bson_skip_fields_with_unsupported_types_in_schema_inference
Type: Bool
Default value: 0
Skip fields with unsupported types while schema inference for format BSON.
input_format_capn_proto_skip_fields_with_unsupported_types_in_schema_inference
Type: Bool
Default value: 0
Skip columns with unsupported types while schema inference for format CapnProto
input_format_csv_allow_cr_end_of_line
Type: Bool
Default value: 0
If set to true, \r will be allowed at the end of a line when not followed by \n.
input_format_csv_allow_variable_number_of_columns
Type: Bool
Default value: 0
Ignore extra columns in CSV input (if file has more columns than expected) and treat missing fields in CSV input as default values
input_format_csv_allow_whitespace_or_tab_as_delimiter
Type: Bool
Default value: 0
Allow to use spaces and tabs (\t) as field delimiters in CSV strings.
input_format_csv_arrays_as_nested_csv
Type: Bool
Default value: 0
When reading Array from CSV, expect that its elements were serialized in nested CSV and then put into a string. Example: "[""Hello"", ""world"", ""42"""" TV""]". Braces around the array can be omitted.
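Example:
SET input_format_csv_arrays_as_nested_csv = 1;
SELECT * FROM format(CSV, 'arr Array(String)', '"[""Hello"", ""world""]"');
-- ['Hello','world']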
input_format_csv_deserialize_separate_columns_into_tuple
Type: Bool
Default value: 1
If it is set to true, then separate columns written in CSV format can be deserialized into a Tuple column.
input_format_csv_detect_header
Type: Bool
Default value: 1
Automatically detect header with names and types in CSV format
input_format_csv_empty_as_default
Type: Bool
Default value: 1
Treat empty fields in CSV input as default values.
input_format_csv_enum_as_number
Type: Bool
Default value: 0
Treat inserted enum values in CSV formats as enum indices
input_format_csv_skip_first_lines
Type: UInt64
Default value: 0
Skip specified number of lines at the beginning of data in CSV format
input_format_csv_skip_trailing_empty_lines
Type: Bool
Default value: 0
Skip trailing empty lines in CSV format
input_format_csv_trim_whitespaces
Type: Bool
Default value: 1
Trims space and tab (\t) characters at the beginning and end of CSV strings.
input_format_csv_try_infer_numbers_from_strings
Type: Bool
Default value: 0
If enabled, during schema inference ClickHouse will try to infer numbers from string fields. It can be useful if CSV data contains quoted UInt64 numbers.
Disabled by default.
input_format_csv_try_infer_strings_from_quoted_tuples
Type: Bool
Default value: 1
Interpret quoted tuples in the input data as a value of type String.
input_format_csv_use_best_effort_in_schema_inference
Type: Bool
Default value: 1
Use some tweaks and heuristics to infer schema in CSV format
input_format_csv_use_default_on_bad_values
Type: Bool
Default value: 0
Allow setting a column's default value when deserialization of a CSV field fails on a bad value.
input_format_custom_allow_variable_number_of_columns
Type: Bool
Default value: 0
Ignore extra columns in CustomSeparated input (if file has more columns than expected) and treat missing fields in CustomSeparated input as default values
input_format_custom_detect_header
Type: Bool
Default value: 1
Automatically detect header with names and types in CustomSeparated format
input_format_custom_skip_trailing_empty_lines
Type: Bool
Default value: 0
Skip trailing empty lines in CustomSeparated format
input_format_defaults_for_omitted_fields
Type: Bool
Default value: 1
When performing INSERT queries, replace omitted input column values with default values of the respective columns. This option applies to JSONEachRow (and other JSON formats), CSV, TabSeparated, TSKV, Parquet, Arrow, Avro, ORC, Native formats and formats with WithNames/WithNamesAndTypes suffixes.
When this option is enabled, extended table metadata is sent from server to client. It consumes additional computing resources on the server and can reduce performance.
Possible values:
- 0 — Disabled.
- 1 — Enabled.
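Example (a sketch with a hypothetical table whose default expression depends on another column):
CREATE TABLE t_defaults (a UInt32, b UInt32 DEFAULT a * 2) ENGINE = Memory;
SET input_format_defaults_for_omitted_fields = 1;
INSERT INTO t_defaults FORMAT JSONEachRow {"a": 5};
SELECT * FROM t_defaults;
-- a = 5, b = 10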
input_format_force_null_for_omitted_fields
Type: Bool
Default value: 0
Force initialize omitted fields with null values
input_format_hive_text_allow_variable_number_of_columns
Type: Bool
Default value: 1
Ignore extra columns in Hive Text input (if file has more columns than expected) and treat missing fields in Hive Text input as default values
input_format_hive_text_collection_items_delimiter
Type: Char
Default value:
Delimiter between collection (array or map) items in Hive Text File
input_format_hive_text_fields_delimiter
Type: Char
Default value:
Delimiter between fields in Hive Text File
input_format_hive_text_map_keys_delimiter
Type: Char
Default value:
Delimiter between a pair of map key/values in Hive Text File
input_format_import_nested_json
Type: Bool
Default value: 0
Enables or disables the insertion of JSON data with nested objects.
Supported formats:
- JSONEachRow
Possible values:
- 0 — Disabled.
- 1 — Enabled.
See also:
- Usage of Nested Structures with the JSONEachRow format.
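Example (a sketch with a hypothetical Nested column; the nested object is split into the n.a and n.b arrays):
SET input_format_import_nested_json = 1;
CREATE TABLE nested_demo (n Nested(a UInt32, b String)) ENGINE = Memory;
INSERT INTO nested_demo FORMAT JSONEachRow {"n": {"a": [1, 2], "b": ["x", "y"]}};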
input_format_ipv4_default_on_conversion_error
Type: Bool
Default value: 0
Deserialization of IPv4 will use default values instead of throwing an exception on conversion error.
Disabled by default.
input_format_ipv6_default_on_conversion_error
Type: Bool
Default value: 0
Deserialization of IPv6 will use default values instead of throwing an exception on conversion error.
Disabled by default.
input_format_json_compact_allow_variable_number_of_columns
Type: Bool
Default value: 0
Ignore extra columns in JSONCompact(EachRow) input (if file has more columns than expected) and treat missing fields in JSONCompact(EachRow) input as default values
input_format_json_defaults_for_missing_elements_in_named_tuple
Type: Bool
Default value: 1
Insert default values for missing elements in JSON object while parsing named tuple.
This setting works only when setting input_format_json_named_tuples_as_objects is enabled.
Enabled by default.
input_format_json_empty_as_default
Type: Bool
Default value: 0
Treat empty fields in JSON input as default values. For complex default expressions, input_format_defaults_for_omitted_fields must be enabled too.
input_format_json_ignore_unknown_keys_in_named_tuple
Type: Bool
Default value: 1
Ignore unknown keys in JSON objects for named tuples.
Enabled by default.
input_format_json_ignore_unnecessary_fields
Type: Bool
Default value: 1
Ignore unnecessary fields and do not parse them. With this enabled, exceptions may not be thrown for JSON strings that are malformed or contain duplicated fields.
input_format_json_infer_incomplete_types_as_strings
Type: Bool
Default value: 1
Allow to use String type for JSON keys that contain only Null/{}/[] in the data sample during schema inference.
In JSON formats any value can be read as String, and we can avoid errors like "Cannot determine type for column 'column_name' by first 25000 rows of data, most likely this column contains only Nulls or empty Arrays/Maps" during schema inference by using the String type for keys with unknown types.
Example:
SET input_format_json_infer_incomplete_types_as_strings = 1, input_format_json_try_infer_named_tuples_from_objects = 1;
DESCRIBE format(JSONEachRow, '{"obj" : {"a" : [1,2,3], "b" : "hello", "c" : null, "d" : {}, "e" : []}}');
SELECT * FROM format(JSONEachRow, '{"obj" : {"a" : [1,2,3], "b" : "hello", "c" : null, "d" : {}, "e" : []}}');
Result:
┌─name─┬─type───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐
│ obj │ Tuple(a Array(Nullable(Int64)), b Nullable(String), c Nullable(String), d Nullable(String), e Array(Nullable(String))) │ │ │ │ │ │
└──────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘
┌─obj────────────────────────────┐
│ ([1,2,3],'hello',NULL,'{}',[]) │
└────────────────────────────────┘
Enabled by default.
input_format_json_max_depth
Type: UInt64
Default value: 1000
Maximum depth of a field in JSON. This is not a strict limit; it does not have to be applied precisely.
input_format_json_named_tuples_as_objects
Type: Bool
Default value: 1
Parse named tuple columns as JSON objects.
Enabled by default.
input_format_json_read_arrays_as_strings
Type: Bool
Default value: 1
Allow parsing JSON arrays as strings in JSON input formats.
Example:
SET input_format_json_read_arrays_as_strings = 1;
SELECT arr, toTypeName(arr), JSONExtractArrayRaw(arr)[3] from format(JSONEachRow, 'arr String', '{"arr" : [1, "Hello", [1,2,3]]}');
Result:
┌─arr───────────────────┬─toTypeName(arr)─┬─arrayElement(JSONExtractArrayRaw(arr), 3)─┐
│ [1, "Hello", [1,2,3]] │ String │ [1,2,3] │
└───────────────────────┴─────────────────┴───────────────────────────────────────────┘
Enabled by default.
input_format_json_read_bools_as_numbers
Type: Bool
Default value: 1
Allow parsing bools as numbers in JSON input formats.
Enabled by default.
input_format_json_read_bools_as_strings
Type: Bool
Default value: 1
Allow parsing bools as strings in JSON input formats.
Enabled by default.
input_format_json_read_numbers_as_strings
Type: Bool
Default value: 1
Allow parsing numbers as strings in JSON input formats.
Enabled by default.
input_format_json_read_objects_as_strings
Type: Bool
Default value: 1
Allow parsing JSON objects as strings in JSON input formats.
Example:
SET input_format_json_read_objects_as_strings = 1;
CREATE TABLE test (id UInt64, obj String, date Date) ENGINE=Memory();
INSERT INTO test FORMAT JSONEachRow {"id" : 1, "obj" : {"a" : 1, "b" : "Hello"}, "date" : "2020-01-01"};
SELECT * FROM test;
Result:
┌─id─┬─obj──────────────────────┬───────date─┐
│ 1 │ {"a" : 1, "b" : "Hello"} │ 2020-01-01 │
└────┴──────────────────────────┴────────────┘
Enabled by default.
input_format_json_throw_on_bad_escape_sequence
Type: Bool
Default value: 1
Throw an exception if JSON string contains bad escape sequence in JSON input formats. If disabled, bad escape sequences will remain as is in the data.
Enabled by default.
input_format_json_try_infer_named_tuples_from_objects
Type: Bool
Default value: 1
If enabled, during schema inference ClickHouse will try to infer named Tuple from JSON objects. The resulting named Tuple will contain all elements from all corresponding JSON objects from sample data.
Example:
SET input_format_json_try_infer_named_tuples_from_objects = 1;
DESC format(JSONEachRow, '{"obj" : {"a" : 42, "b" : "Hello"}}, {"obj" : {"a" : 43, "c" : [1, 2, 3]}}, {"obj" : {"d" : {"e" : 42}}}')
Result:
┌─name─┬─type───────────────────────────────────────────────────────────────────────────────────────────────┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐
│ obj │ Tuple(a Nullable(Int64), b Nullable(String), c Array(Nullable(Int64)), d Tuple(e Nullable(Int64))) │ │ │ │ │ │
└──────┴────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘
Enabled by default.
input_format_json_try_infer_numbers_from_strings
Type: Bool
Default value: 0
If enabled, during schema inference ClickHouse will try to infer numbers from string fields. It can be useful if JSON data contains quoted UInt64 numbers.
Disabled by default.
input_format_json_use_string_type_for_ambiguous_paths_in_named_tuples_inference_from_objects
Type: Bool
Default value: 0
Use String type instead of an exception in case of ambiguous paths in JSON objects during named tuples inference
input_format_json_validate_types_from_metadata
Type: Bool
Default value: 1
For JSON/JSONCompact/JSONColumnsWithMetadata input formats, if this setting is set to 1, the types from metadata in input data will be compared with the types of the corresponding columns from the table.
Enabled by default.
input_format_max_bytes_to_read_for_schema_inference
Type: UInt64
Default value: 33554432
The maximum amount of data in bytes to read for automatic schema inference.
input_format_max_rows_to_read_for_schema_inference
Type: UInt64
Default value: 25000
The maximum number of rows of data to read for automatic schema inference.
input_format_msgpack_number_of_columns
Type: UInt64
Default value: 0
The number of columns in inserted MsgPack data. Used for automatic schema inference from data.
input_format_mysql_dump_map_column_names
Type: Bool
Default value: 1
Match columns from table in MySQL dump and columns from ClickHouse table by names
input_format_mysql_dump_table_name
Type: String
Default value:
Name of the table in MySQL dump from which to read data
input_format_native_allow_types_conversion
Type: Bool
Default value: 1
Allow data types conversion in Native input format
input_format_native_decode_types_in_binary_format
Type: Bool
Default value: 0
Read data types in binary format instead of type names in Native input format
input_format_null_as_default
Type: Bool
Default value: 1
Enables or disables the initialization of NULL fields with default values, if the data type of these fields is not nullable.
If the column type is not nullable and this setting is disabled, then inserting NULL causes an exception. If the column type is nullable, then NULL values are inserted as is, regardless of this setting.
This setting is applicable for most input formats.
For complex default expressions, input_format_defaults_for_omitted_fields must be enabled too.
Possible values:
- 0 — Inserting NULL into a not nullable column causes an exception.
- 1 — NULL fields are initialized with default column values.
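Example (a sketch; t_nd is a hypothetical table, and the explicit NULL is replaced by the column default):
CREATE TABLE t_nd (x UInt32 DEFAULT 42) ENGINE = Memory;
SET input_format_null_as_default = 1;
INSERT INTO t_nd FORMAT JSONEachRow {"x": null};
SELECT * FROM t_nd;
-- 42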
input_format_orc_allow_missing_columns
Type: Bool
Default value: 1
Allow missing columns while reading ORC input formats
input_format_orc_case_insensitive_column_matching
Type: Bool
Default value: 0
Ignore case when matching ORC columns with CH columns.
input_format_orc_dictionary_as_low_cardinality
Type: Bool
Default value: 1
Treat ORC dictionary encoded columns as LowCardinality columns while reading ORC files.
input_format_orc_filter_push_down
Type: Bool
Default value: 1
When reading ORC files, skip whole stripes or row groups based on the WHERE/PREWHERE expressions, min/max statistics or bloom filter in the ORC metadata.
input_format_orc_reader_time_zone_name
Type: String
Default value: GMT
The time zone name for the ORC row reader. The default ORC row reader time zone is GMT.
input_format_orc_row_batch_size
Type: Int64
Default value: 100000
Batch size when reading ORC stripes.
input_format_orc_skip_columns_with_unsupported_types_in_schema_inference
Type: Bool
Default value: 0
Skip columns with unsupported types while schema inference for format ORC
input_format_orc_use_fast_decoder
Type: Bool
Default value: 1
Use a faster ORC decoder implementation.
input_format_parquet_allow_missing_columns
Type: Bool
Default value: 1
Allow missing columns while reading Parquet input formats
input_format_parquet_bloom_filter_push_down
Type: Bool
Default value: 0
When reading Parquet files, skip whole row groups based on the WHERE expressions and bloom filter in the Parquet metadata.
input_format_parquet_case_insensitive_column_matching
Type: Bool
Default value: 0
Ignore case when matching Parquet columns with CH columns.
input_format_parquet_enable_row_group_prefetch
Type: Bool
Default value: 1
Enable row group prefetching during parquet parsing. Currently, only single-threaded parsing can prefetch.
input_format_parquet_filter_push_down
Type: Bool
Default value: 1
When reading Parquet files, skip whole row groups based on the WHERE/PREWHERE expressions and min/max statistics in the Parquet metadata.
input_format_parquet_local_file_min_bytes_for_seek
Type: UInt64
Default value: 8192
Minimum bytes required for a local file read to seek, instead of reading with ignore, in Parquet input format.
input_format_parquet_max_block_size
Type: UInt64
Default value: 65409
Max block size for parquet reader.
input_format_parquet_prefer_block_bytes
Type: UInt64
Default value: 16744704
Average block bytes output by parquet reader
input_format_parquet_preserve_order
Type: Bool
Default value: 0
Avoid reordering rows when reading from Parquet files. Usually makes it much slower.
input_format_parquet_skip_columns_with_unsupported_types_in_schema_inference
Type: Bool
Default value: 0
Skip columns with unsupported types while schema inference for format Parquet
input_format_parquet_use_native_reader
Type: Bool
Default value: 0
When reading Parquet files, use the native reader instead of the Arrow reader.
input_format_protobuf_flatten_google_wrappers
Type: Bool
Default value: 0
Enable Google wrappers for regular non-nested columns, e.g. google.protobuf.StringValue 'str' for String column 'str'. For Nullable columns empty wrappers are recognized as defaults, and missing as nulls
input_format_protobuf_skip_fields_with_unsupported_types_in_schema_inference
Type: Bool
Default value: 0
Skip fields with unsupported types while schema inference for format Protobuf
input_format_record_errors_file_path
Type: String
Default value:
Path of the file used to record errors while reading text formats (CSV, TSV).
input_format_skip_unknown_fields
Type: Bool
Default value: 1
Enables or disables skipping insertion of extra data.
When writing data, ClickHouse throws an exception if the input data contains columns that do not exist in the target table. If skipping is enabled, ClickHouse does not insert extra data and does not throw an exception.
Supported formats:
- JSONEachRow (and other JSON formats)
- BSONEachRow (and other BSON formats)
- TSKV
- All formats with suffixes WithNames/WithNamesAndTypes
- MySQLDump
- Native
Possible values:
- 0 — Disabled.
- 1 — Enabled.
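Example (the extra key is silently dropped):
SET input_format_skip_unknown_fields = 1;
SELECT * FROM format(JSONEachRow, 'a UInt32', '{"a": 1, "extra": "ignored"}');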
input_format_try_infer_dates
Type: Bool
Default value: 1
If enabled, ClickHouse will try to infer type Date from string fields in schema inference for text formats. If all fields from a column in the input data were successfully parsed as dates, the result type will be Date; if at least one field was not parsed as a date, the result type will be String.
Enabled by default.
input_format_try_infer_datetimes
Type: Bool
Default value: 1
If enabled, ClickHouse will try to infer type DateTime64 from string fields in schema inference for text formats. If all fields from a column in the input data were successfully parsed as datetimes, the result type will be DateTime64; if at least one field was not parsed as a datetime, the result type will be String.
Enabled by default.
input_format_try_infer_datetimes_only_datetime64
Type: Bool
Default value: 0
When input_format_try_infer_datetimes is enabled, infer only DateTime64 but not DateTime types
input_format_try_infer_exponent_floats
Type: Bool
Default value: 0
Try to infer floats in exponential notation while schema inference in text formats (except JSON, where exponent numbers are always inferred)
input_format_try_infer_integers
Type: Bool
Default value: 1
If enabled, ClickHouse will try to infer integers instead of floats in schema inference for text formats. If all numbers in the column from the input data are integers, the result type will be Int64; if at least one number is a float, the result type will be Float64.
Enabled by default.
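Example:
DESC format(JSONEachRow, '{"x" : 1}, {"x" : 2}') SETTINGS input_format_try_infer_integers = 1;
-- x  Nullable(Int64)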
input_format_try_infer_variants
Type: Bool
Default value: 0
If enabled, ClickHouse will try to infer type Variant in schema inference for text formats when there is more than one possible type for column/array elements.
Possible values:
- 0 — Disabled.
- 1 — Enabled.
input_format_tsv_allow_variable_number_of_columns
Type: Bool
Default value: 0
Ignore extra columns in TSV input (if file has more columns than expected) and treat missing fields in TSV input as default values
input_format_tsv_crlf_end_of_line
Type: Bool
Default value: 0
If it is set to true, the file function will read TSV format with \r\n instead of \n.
input_format_tsv_detect_header
Type: Bool
Default value: 1
Automatically detect header with names and types in TSV format
input_format_tsv_empty_as_default
Type: Bool
Default value: 0
Treat empty fields in TSV input as default values.
input_format_tsv_enum_as_number
Type: Bool
Default value: 0
Treat inserted enum values in TSV formats as enum indices.
input_format_tsv_skip_first_lines
Type: UInt64
Default value: 0
Skip specified number of lines at the beginning of data in TSV format
input_format_tsv_skip_trailing_empty_lines
Type: Bool
Default value: 0
Skip trailing empty lines in TSV format
input_format_tsv_use_best_effort_in_schema_inference
Type: Bool
Default value: 1
Use some tweaks and heuristics to infer schema in TSV format
input_format_values_accurate_types_of_literals
Type: Bool
Default value: 1
For Values format: when parsing and interpreting expressions using template, check actual type of literal to avoid possible overflow and precision issues.
input_format_values_deduce_templates_of_expressions
Type: Bool
Default value: 1
For Values format: if the field could not be parsed by streaming parser, run SQL parser, deduce template of the SQL expression, try to parse all rows using template and then interpret expression for all rows.
input_format_values_interpret_expressions
Type: Bool
Default value: 1
For Values format: if the field could not be parsed by streaming parser, run SQL parser and try to interpret it as SQL expression.
input_format_with_names_use_header
Type: Bool
Default value: 1
Enables or disables checking the column order when inserting data.
To improve insert performance, we recommend disabling this check if you are sure that the column order of the input data is the same as in the target table.
Supported formats:
- CSVWithNames
- CSVWithNamesAndTypes
- TabSeparatedWithNames
- TabSeparatedWithNamesAndTypes
- JSONCompactEachRowWithNames
- JSONCompactEachRowWithNamesAndTypes
- JSONCompactStringsEachRowWithNames
- JSONCompactStringsEachRowWithNamesAndTypes
- RowBinaryWithNames
- RowBinaryWithNamesAndTypes
- CustomSeparatedWithNames
- CustomSeparatedWithNamesAndTypes
Possible values:
- 0 — Disabled.
- 1 — Enabled.
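Example (a sketch of header-based column matching; the CSV columns arrive in the opposite order of the declared structure):
SET input_format_with_names_use_header = 1;
SELECT * FROM format(CSVWithNames, 'a UInt32, b String', 'b,a\nhello,1');
-- a = 1, b = 'hello'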
input_format_with_types_use_header
Type: Bool
Default value: 1
Controls whether format parser should check if data types from the input data match data types from the target table.
Supported formats:
- CSVWithNamesAndTypes
- TabSeparatedWithNamesAndTypes
- JSONCompactEachRowWithNamesAndTypes
- JSONCompactStringsEachRowWithNamesAndTypes
- RowBinaryWithNamesAndTypes
- CustomSeparatedWithNamesAndTypes
Possible values:
- 0 — Disabled.
- 1 — Enabled.
insert_distributed_one_random_shard
Type: Bool
Default value: 0
Enables or disables random shard insertion into a Distributed table when there is no distributed key.
By default, when inserting data into a Distributed table with more than one shard, the ClickHouse server will reject any insertion request if there is no distributed key. When insert_distributed_one_random_shard = 1, insertions are allowed and data is forwarded randomly among all shards.
Possible values:
- 0 — Insertion is rejected if there are multiple shards and no distributed key is given.
- 1 — Insertion is done randomly among all available shards when no distributed key is given.
interval_output_format
Type: IntervalOutputFormat
Default value: numeric
Allows choosing different output formats of the text representation of interval types.
Possible values:
- kusto — KQL-style output format. ClickHouse outputs intervals in KQL format. For example, `toIntervalDay(2)` would be formatted as `2.00:00:00`. Please note that for interval types of varying length (i.e. `IntervalMonth` and `IntervalYear`) the average number of seconds per interval is taken into account.
- numeric — Numeric output format. ClickHouse outputs intervals as their underlying numeric representation. For example, `toIntervalDay(2)` would be formatted as `2`.
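Example:
SET interval_output_format = 'kusto';
SELECT toIntervalDay(2);
-- 2.00:00:00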
output_format_arrow_compression_method
Type: ArrowCompression
Default value: lz4_frame
Compression method for Arrow output format. Supported codecs: lz4_frame, zstd, none (uncompressed)
output_format_arrow_fixed_string_as_fixed_byte_array
Type: Bool
Default value: 1
Use Arrow FIXED_SIZE_BINARY type instead of Binary for FixedString columns.
output_format_arrow_low_cardinality_as_dictionary
Type: Bool
Default value: 0
Enable outputting the LowCardinality type as the Dictionary Arrow type
output_format_arrow_string_as_string
Type: Bool
Default value: 1
Use Arrow String type instead of Binary for String columns
output_format_arrow_use_64_bit_indexes_for_dictionary
Type: Bool
Default value: 0
Always use 64 bit integers for dictionary indexes in Arrow format
output_format_arrow_use_signed_indexes_for_dictionary
Type: Bool
Default value: 1
Use signed integers for dictionary indexes in Arrow format
output_format_avro_codec
Type: String
Default value:
Compression codec used for output. Possible values: 'null', 'deflate', 'snappy', 'zstd'.
output_format_avro_rows_in_file
Type: UInt64
Default value: 1
Max rows in a file (if permitted by storage)
output_format_avro_string_column_pattern
Type: String
Default value:
For Avro format: regexp of String columns to select as AVRO string.
output_format_avro_sync_interval
Type: UInt64
Default value: 16384
Sync interval in bytes.
output_format_binary_encode_types_in_binary_format
Type: Bool
Default value: 0
Write data types in binary format instead of type names in RowBinaryWithNamesAndTypes output format
output_format_binary_write_json_as_string
Type: Bool
Default value: 0
Write values of JSON data type as JSON String values in RowBinary output format.
output_format_bson_string_as_string
Type: Bool
Default value: 0
Use BSON String type instead of Binary for String columns.
output_format_csv_crlf_end_of_line
Type: Bool
Default value: 0
If it is set true, end of line in CSV format will be \r\n instead of \n.
output_format_csv_serialize_tuple_into_separate_columns
Type: Bool
Default value: 1
If it is set to true, then Tuples in CSV format are serialized as separate columns (that is, their nesting in the tuple is lost)
output_format_decimal_trailing_zeros
Type: Bool
Default value: 0
Output trailing zeros when printing Decimal values. E.g. 1.230000 instead of 1.23.
Disabled by default.
output_format_enable_streaming
Type: Bool
Default value: 0
Enable streaming in output formats that support it.
Disabled by default.
output_format_json_array_of_rows
Type: Bool
Default value: 0
Enables the ability to output all rows as a JSON array in the JSONEachRow format.
Possible values:
- 1 — ClickHouse outputs all rows as an array, each row in the JSONEachRow format.
- 0 — ClickHouse outputs each row separately in the JSONEachRow format.
Example of a query with the enabled setting
Query:
SET output_format_json_array_of_rows = 1;
SELECT number FROM numbers(3) FORMAT JSONEachRow;
Result:
[
{"number":"0"},
{"number":"1"},
{"number":"2"}
]
Example of a query with the disabled setting
Query:
SET output_format_json_array_of_rows = 0;
SELECT number FROM numbers(3) FORMAT JSONEachRow;
Result:
{"number":"0"}
{"number":"1"}
{"number":"2"}
output_format_json_escape_forward_slashes
Type: Bool
Default value: 1
Controls escaping of forward slashes for string outputs in JSON output format. This is intended for compatibility with JavaScript. Not to be confused with backslashes, which are always escaped.
Enabled by default.
output_format_json_named_tuples_as_objects
Type: Bool
Default value: 1
Serialize named tuple columns as JSON objects.
Enabled by default.
output_format_json_quote_64bit_floats
Type: Bool
Default value: 0
Controls quoting of 64-bit floats when they are output in JSON* formats.
Disabled by default.
output_format_json_quote_64bit_integers
Type: Bool
Default value: 1
Controls quoting of 64-bit or bigger integers (like UInt64 or Int128) when they are output in a JSON format.
Such integers are enclosed in quotes by default. This behavior is compatible with most JavaScript implementations.
Possible values:
- 0 — Integers are output without quotes.
- 1 — Integers are enclosed in quotes.
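Example:
SET output_format_json_quote_64bit_integers = 0;
SELECT toUInt64(1) AS x FORMAT JSONEachRow;
-- {"x":1} instead of {"x":"1"}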
output_format_json_quote_decimals
Type: Bool
Default value: 0
Controls quoting of decimals in JSON output formats.
Disabled by default.
output_format_json_quote_denormals
Type: Bool
Default value: 0
Enables +nan, -nan, +inf, -inf outputs in JSON output format.
Possible values:
- 0 — Disabled.
- 1 — Enabled.
Example
Consider the following table account_orders:
┌─id─┬─name───┬─duration─┬─period─┬─area─┐
│ 1 │ Andrew │ 20 │ 0 │ 400 │
│ 2 │ John │ 40 │ 0 │ 0 │
│ 3 │ Bob │ 15 │ 0 │ -100 │
└────┴────────┴──────────┴────────┴──────┘
When output_format_json_quote_denormals = 0, the query returns null values in output:
SELECT area/period FROM account_orders FORMAT JSON;
{
"meta":
[
{
"name": "divide(area, period)",
"type": "Float64"
}
],
"data":
[
{
"divide(area, period)": null
},
{
"divide(area, period)": null
},
{
"divide(area, period)": null
}
],
"rows": 3,
"statistics":
{
"elapsed": 0.003648093,
"rows_read": 3,
"bytes_read": 24
}
}
When output_format_json_quote_denormals = 1, the query returns:
{
"meta":
[
{
"name": "divide(area, period)",
"type": "Float64"
}
],
"data":
[
{
"divide(area, period)": "inf"
},
{
"divide(area, period)": "-nan"
},
{
"divide(area, period)": "-inf"
}
],
"rows": 3,
"statistics":
{
"elapsed": 0.000070241,
"rows_read": 3,
"bytes_read": 24
}
}
output_format_json_skip_null_value_in_named_tuples
Type: Bool
Default value: 0
Skip key-value pairs with null values when serializing named tuple columns as JSON objects. It is only valid when output_format_json_named_tuples_as_objects is true.
output_format_json_validate_utf8
Type: Bool
Default value: 0
Controls validation of UTF-8 sequences in JSON output formats. This doesn't impact the JSON/JSONCompact/JSONColumnsWithMetadata formats, which always validate UTF-8.
Disabled by default.
output_format_markdown_escape_special_characters
Type: Bool
Default value: 0
When enabled, escape special characters in Markdown.
Common Mark defines the following special characters that can be escaped by \:
! " # $ % & ' ( ) * + , - . / : ; < = > ? @ [ \ ] ^ _ ` { | } ~
output_format_msgpack_uuid_representation
Type: MsgPackUUIDRepresentation
Default value: ext
The way to output UUID in MsgPack format.
output_format_native_encode_types_in_binary_format
Type: Bool
Default value: 0
Write data types in binary format instead of type names in Native output format
output_format_native_write_json_as_string
Type: Bool
Default value: 0
Write data of JSON column as String column containing JSON strings instead of default native JSON serialization.
output_format_orc_compression_method
Type: ORCCompression
Default value: zstd
Compression method for ORC output format. Supported codecs: lz4, snappy, zlib, zstd, none (uncompressed)
output_format_orc_dictionary_key_size_threshold
Type: Double
Default value: 0
For a string column in ORC output format, if the number of distinct values is greater than this fraction of the total number of non-null rows, turn off dictionary encoding. Otherwise dictionary encoding is enabled
output_format_orc_row_index_stride
Type: UInt64
Default value: 10000
Target row index stride in ORC output format
output_format_orc_string_as_string
Type: Bool
Default value: 1
Use ORC String type instead of Binary for String columns
output_format_parquet_batch_size
Type: UInt64
Default value: 1024
Check the page size every this many rows. Consider decreasing it if you have columns with average value sizes above a few KBs.
output_format_parquet_compliant_nested_types
Type: Bool
Default value: 1
In parquet file schema, use name 'element' instead of 'item' for list elements. This is a historical artifact of Arrow library implementation. Generally increases compatibility, except perhaps with some old versions of Arrow.
output_format_parquet_compression_method
Type: ParquetCompression
Default value: zstd
Compression method for Parquet output format. Supported codecs: snappy, lz4, brotli, zstd, gzip, none (uncompressed)
output_format_parquet_data_page_size
Type: UInt64
Default value: 1048576
Target page size in bytes, before compression.
output_format_parquet_fixed_string_as_fixed_byte_array
Type: Bool
Default value: 1
Use Parquet FIXED_LENGTH_BYTE_ARRAY type instead of Binary for FixedString columns.
output_format_parquet_parallel_encoding
Type: Bool
Default value: 1
Do Parquet encoding in multiple threads. Requires output_format_parquet_use_custom_encoder.
output_format_parquet_row_group_size
Type: UInt64
Default value: 1000000
Target row group size in rows.
output_format_parquet_row_group_size_bytes
Type: UInt64
Default value: 536870912
Target row group size in bytes, before compression.
output_format_parquet_string_as_string
Type: Bool
Default value: 1
Use Parquet String type instead of Binary for String columns.
output_format_parquet_use_custom_encoder
Type: Bool
Default value: 1
Use a faster Parquet encoder implementation.
output_format_parquet_version
Type: ParquetVersion
Default value: 2.latest
Parquet format version for output format. Supported versions: 1.0, 2.4, 2.6 and 2.latest (default)
output_format_parquet_write_page_index
Type: Bool
Default value: 1
Allow writing a page index into Parquet files.
output_format_pretty_color
Type: UInt64Auto
Default value: auto
Use ANSI escape sequences in Pretty formats. 0 - disabled, 1 - enabled, 'auto' - enabled if a terminal.
output_format_pretty_display_footer_column_names
Type: UInt64
Default value: 1
Display column names in the footer if there are many table rows.
Possible values:
- 0 — No column names are displayed in the footer.
- 1 — Column names are displayed in the footer if row count is greater than or equal to the threshold value set by output_format_pretty_display_footer_column_names_min_rows (50 by default).
Example
Query:
SELECT *, toTypeName(*) FROM (SELECT * FROM system.numbers LIMIT 1000);
Result:
┌─number─┬─toTypeName(number)─┐
1. │ 0 │ UInt64 │
2. │ 1 │ UInt64 │
3. │ 2 │ UInt64 │
...
999. │ 998 │ UInt64 │
1000. │ 999 │ UInt64 │
└─number─┴─toTypeName(number)─┘
output_format_pretty_display_footer_column_names_min_rows
Type: UInt64
Default value: 50
Sets the minimum number of rows for which a footer with column names will be displayed if setting output_format_pretty_display_footer_column_names is enabled.
output_format_pretty_grid_charset
Type: String
Default value: UTF-8
Charset for printing grid borders. Available charsets: ASCII, UTF-8 (default one).
output_format_pretty_highlight_digit_groups
Type: Bool
Default value: 1
If enabled and if output is a terminal, highlight every digit corresponding to the number of thousands, millions, etc. with underline.
output_format_pretty_max_column_pad_width
Type: UInt64
Default value: 250
Maximum width to pad all values in a column in Pretty formats.
output_format_pretty_max_rows
Type: UInt64
Default value: 10000
Rows limit for Pretty formats.
output_format_pretty_max_value_width
Type: UInt64
Default value: 10000
Maximum width of value to display in Pretty formats. If greater, it will be cut.
output_format_pretty_max_value_width_apply_for_single_value
Type: UInt64
Default value: 0
Only cut values (see the output_format_pretty_max_value_width setting) when it is not a single value in a block. Otherwise output it entirely, which is useful for the SHOW CREATE TABLE query.
output_format_pretty_row_numbers
Type: Bool
Default value: 1
Add row numbers before each row for pretty output format
output_format_pretty_single_large_number_tip_threshold
Type: UInt64
Default value: 1000000
Print a readable number tip on the right side of the table if the block consists of a single number which exceeds this value (except 0)
output_format_protobuf_nullables_with_google_wrappers
Type: Bool
Default value: 0
When serializing Nullable columns with Google wrappers, serialize default values as empty wrappers. If turned off, default and null values are not serialized
output_format_schema
Type: String
Default value:
The path to the file where the automatically generated schema will be saved in Cap’n Proto or Protobuf formats.
output_format_sql_insert_include_column_names
Type: Bool
Default value: 1
Include column names in INSERT query
output_format_sql_insert_max_batch_size
Type: UInt64
Default value: 65409
The maximum number of rows in one INSERT statement.
output_format_sql_insert_quote_names
Type: Bool
Default value: 1
Quote column names with '`' characters
output_format_sql_insert_table_name
Type: String
Default value: table
The name of table in the output INSERT query
output_format_sql_insert_use_replace
Type: Bool
Default value: 0
Use REPLACE statement instead of INSERT
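Example (a sketch combining several of the output_format_sql_insert_* settings above; 'dst' is a placeholder table name):
SET output_format_sql_insert_table_name = 'dst', output_format_sql_insert_quote_names = 0;
SELECT number AS n FROM numbers(2) FORMAT SQLInsert;
-- INSERT INTO dst (n) VALUES (0), (1);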
output_format_tsv_crlf_end_of_line
Type: Bool
Default value: 0
If it is set true, end of line in TSV format will be \r\n instead of \n.
output_format_values_escape_quote_with_quote
Type: Bool
Default value: 0
If true, escape ' with ''; otherwise quote with \'.
output_format_write_statistics
Type: Bool
Default value: 1
Write statistics about read rows, bytes, time elapsed in suitable output formats.
Enabled by default.
precise_float_parsing
Type: Bool
Default value: 0
Prefer more precise (but slower) float parsing algorithm
regexp_dict_allow_hyperscan
Type: Bool
Default value: 1
Allow regexp_tree dictionary using Hyperscan library.
regexp_dict_flag_case_insensitive
Type: Bool
Default value: 0
Use case-insensitive matching for a regexp_tree dictionary. Can be overridden in individual expressions with (?i) and (?-i).
regexp_dict_flag_dotall
Type: Bool
Default value: 0
Allow '.' to match newline characters for a regexp_tree dictionary.
rows_before_aggregation
Type: Bool
Default value: 0
When enabled, ClickHouse will provide an exact value for the rows_before_aggregation statistic, which represents the number of rows read before aggregation.
schema_inference_hints
Type: String
Default value:
The list of column names and types to use as hints in schema inference for formats without schema.
Example:
Query:
desc format(JSONEachRow, '{"x" : 1, "y" : "String", "z" : "0.0.0.0" }') settings schema_inference_hints='x UInt8, z IPv4';
Result:
x UInt8
y Nullable(String)
z IPv4
If schema_inference_hints is not formatted properly, or if there is a typo, a wrong datatype, etc., the whole schema_inference_hints will be ignored.
schema_inference_make_columns_nullable
Type: UInt64Auto
Default value: 1
Controls making inferred types Nullable in schema inference.
If the setting is enabled, all inferred types will be Nullable; if disabled, the inferred type will never be Nullable; if set to auto, the inferred type will be Nullable only if the column contains NULL in a sample that is parsed during schema inference, or if the file metadata contains information about column nullability.
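Example:
DESC format(JSONEachRow, '{"id" : 1}') SETTINGS schema_inference_make_columns_nullable = 0;
-- id  Int64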
schema_inference_mode
Type: SchemaInferenceMode
Default value: default
Mode of schema inference. 'default' - assume that all files have the same schema and the schema can be inferred from any file; 'union' - files can have different schemas and the resulting schema should be a union of the schemas of all files.
show_create_query_identifier_quoting_rule
Type: IdentifierQuotingRule
Default value: when_necessary
Set the quoting rule for identifiers in SHOW CREATE query
show_create_query_identifier_quoting_style
Type: IdentifierQuotingStyle
Default value: Backticks
Set the quoting style for identifiers in SHOW CREATE query
type_json_skip_duplicated_paths
Type: Bool
Default value: 0
When enabled, duplicated paths encountered while parsing a JSON object into the JSON type will be ignored, and only the first value will be inserted instead of throwing an exception.
validate_experimental_and_suspicious_types_inside_nested_types
Type: Bool
Default value: 1
Validate usage of experimental and suspicious types inside nested types like Array/Map/Tuple