
· 2 min read

This is a step-by-step example of how to start using Python with a ClickHouse Cloud service.


Keep in mind that Python versions and library dependencies are constantly evolving. Also make sure to use the latest supported versions of both the driver and the Python environment when trying this.

At the time of writing this article, we're using clickhouse-connect driver version 0.5.23 and Python 3.11.2.


  1. Check the Python version:
$  python -V
Python 3.11.2
  1. We'll assemble the project in a folder called ch-python:
$ mkdir ch-python
$ cd ch-python
  1. Create a dependencies file named requirements.txt with:
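A minimal requirements.txt, assuming the driver version used in this article, would contain:

clickhouse-connect==0.5.23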
  1. Create a Python source file:
import clickhouse_connect
import sys
import json

# connection details for your ClickHouse Cloud service (placeholder values - replace with your own)
CLICKHOUSE_CLOUD_HOSTNAME = ''
CLICKHOUSE_CLOUD_USER = 'default'
CLICKHOUSE_CLOUD_PASSWORD = 'your_password'

client = clickhouse_connect.get_client(
    host=CLICKHOUSE_CLOUD_HOSTNAME, port=8443, username=CLICKHOUSE_CLOUD_USER, password=CLICKHOUSE_CLOUD_PASSWORD)

print("connected to " + CLICKHOUSE_CLOUD_HOSTNAME + "\n")

# create the target table if it does not exist yet
client.command(
    'CREATE TABLE IF NOT EXISTS new_table (key UInt32, value String, metric Float64) ENGINE MergeTree ORDER BY key')

print("table new_table created or exists already!\n")

# insert two sample rows
row1 = [1000, 'String Value 1000', 5.233]
row2 = [2000, 'String Value 2000', -107.04]
data = [row1, row2]
client.insert('new_table', data, column_names=['key', 'value', 'metric'])

print("written 2 rows to table new_table\n")

QUERY = "SELECT max(key), avg(metric) FROM new_table"

result = client.query(QUERY)

sys.stdout.write("query: [" + QUERY + "] returns:\n\n")
print(result.result_rows)
  1. Create the virtual environment:
chpython$ python -m venv venv
  1. Load the virtual environment:
chpython$ source venv/bin/activate

Once loaded, your terminal prompt should be prefixed with (venv). Install the dependencies:

(venv) ➜  chpython$ pip install -r requirements.txt
Collecting certifi
Using cached certifi-2023.5.7-py3-none-any.whl (156 kB)
Collecting urllib3>=1.26
Using cached urllib3-2.0.2-py3-none-any.whl (123 kB)
Collecting pytz
Using cached pytz-2023.3-py2.py3-none-any.whl (502 kB)
Collecting zstandard
Using cached zstandard-0.21.0-cp311-cp311-macosx_11_0_arm64.whl (364 kB)
Collecting lz4
Using cached lz4-4.3.2-cp311-cp311-macosx_11_0_arm64.whl (212 kB)
Installing collected packages: pytz, zstandard, urllib3, lz4, certifi, clickhouse-connect
Successfully installed certifi-2023.5.7 clickhouse-connect-0.5.23 lz4-4.3.2 pytz-2023.3 urllib3-2.0.2 zstandard-0.21.0
  1. Launch the code!
(venv) chpython$ venv/bin/python

connected to

table new_table created or exists already!

written 2 rows to table new_table

query: [SELECT max(key), avg(metric) FROM new_table] returns:

[(2000, -50.9035)]

If you are using an older Python version (e.g. 3.9.6), you might get an ImportError related to the urllib3 library. In that case, either upgrade your Python environment to a newer version or pin the urllib3 version to 1.26.15 in your requirements.txt file.
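For example, a pinned requirements.txt might look like this:

clickhouse-connect==0.5.23
urllib3==1.26.15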

· 5 min read

How can I use the API to manage clusters on ClickHouse Cloud?


We will use Terraform together with the ClickHouse Terraform provider to configure our infrastructure.


1). Create an API Key on ClickHouse Cloud. Follow the docs here -

Save the credentials locally.

2). Install Terraform using -

You can use the Homebrew package manager if you're on a Mac.

3). Create a directory anywhere you like:

mkdir test
➜ test pwd

4). Create two files: a main Terraform configuration file and secret.tfvars

The main configuration file would look like this:

terraform {
  required_providers {
    clickhouse = {
      source  = "ClickHouse/clickhouse"
      version = "0.0.2"
    }
  }
}

variable "organization_id" {
  type = string
}

variable "token_key" {
  type = string
}

variable "token_secret" {
  type = string
}

provider "clickhouse" {
  environment     = "production"
  organization_id = var.organization_id
  token_key       = var.token_key
  token_secret    = var.token_secret
}

variable "service_password" {
  type      = string
  sensitive = true
}

resource "clickhouse_service" "service123" {
  name           = "jai-terraform"
  cloud_provider = "aws"
  region         = "us-east-2"
  tier           = "development"
  idle_scaling   = true
  password       = var.service_password

  ip_access = [
    {
      source      = ""
      description = "Anywhere"
    }
  ]
}

output "CLICKHOUSE_HOST" {
  # assumed expression exposing the service endpoint host
  value = clickhouse_service.service123.endpoints.0.host
}

You can substitute your own parameters, such as the service name, region, etc., in the resource section above.

secret.tfvars is where you'll put all the API key-related info that you downloaded earlier. The idea behind this file is that all of your secret credentials are kept out of the main configuration file.

It would be something like (replace these parameters):

organization_id = "e957a5f7-4qe3-4b05-ad5a-d02b2dcd0593"
token_key = "QWhhkMeytqQruTeKg"
token_secret = "4b1dNmjWdLUno9lXxmKvSUcPP62jvn7irkuZPbY"
service_password = "password123!"

5). Run terraform init from this directory

Expected output:

Initializing the backend...

Initializing provider plugins...
- Finding clickhouse/clickhouse versions matching "0.0.2"...
- Installing clickhouse/clickhouse v0.0.2...
- Installed clickhouse/clickhouse v0.0.2 (self-signed, key ID D7089EE5C6A92ED1)

Partner and community providers are signed by their developers.
If you'd like to know more about provider signing, you can read about it here:

Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

6). Run the terraform apply -var-file=secret.tfvars command.

Something like:

➜  test terraform apply -var-file=secret.tfvars

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with
the following symbols:
+ create

Terraform will perform the following actions:

# clickhouse_service.service123 will be created
+ resource "clickhouse_service" "service123" {
+ cloud_provider = "aws"
+ endpoints = (known after apply)
+ id = (known after apply)
+ idle_scaling = true
+ ip_access = [
+ {
+ description = "Anywhere"
+ source = ""
+ last_updated = (known after apply)
+ name = "jai-terraform"
+ password = (sensitive value)
+ region = "us-east-2"
+ tier = "development"

Plan: 1 to add, 0 to change, 0 to destroy.

Changes to Outputs:
+ CLICKHOUSE_HOST = (known after apply)

Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.

Enter a value: yes

Type yes and hit enter

Side note: Notice it says password = (sensitive value) above. This is because we set sensitive = true for the service_password variable in the configuration file.

7). It will take a couple of minutes to create the service, but eventually it should come up like this:

  Enter a value: yes

clickhouse_service.service123: Creating...
clickhouse_service.service123: Still creating... [10s elapsed]
clickhouse_service.service123: Still creating... [20s elapsed]
clickhouse_service.service123: Still creating... [30s elapsed]
clickhouse_service.service123: Still creating... [40s elapsed]
clickhouse_service.service123: Still creating... [50s elapsed]
clickhouse_service.service123: Still creating... [1m0s elapsed]
clickhouse_service.service123: Still creating... [1m10s elapsed]
clickhouse_service.service123: Still creating... [1m20s elapsed]
clickhouse_service.service123: Still creating... [1m30s elapsed]
clickhouse_service.service123: Still creating... [1m40s elapsed]
clickhouse_service.service123: Creation complete after 1m41s [id=aa8d8d63-1878-4600-8470-630715af38ed]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.


➜ test

8). Check the Cloud Console; you should be able to see the newly created service.
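Optionally, once the apply has finished, you can print the output that was defined in the configuration:

terraform output CLICKHOUSE_HOST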

9). To clean up/destroy the service again, run terraform destroy -var-file=secret.tfvars

Something like:

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with
the following symbols:
- destroy

Terraform will perform the following actions:

# clickhouse_service.service123 will be destroyed
- resource "clickhouse_service" "service123" {
- cloud_provider = "aws" -> null
- ............

Plan: 0 to add, 0 to change, 1 to destroy.

Changes to Outputs:
- CLICKHOUSE_HOST = "" -> null

Do you really want to destroy all resources?
Terraform will destroy all your managed infrastructure, as shown above.
There is no undo. Only 'yes' will be accepted to confirm.

Enter a value:

Type yes and hit enter


clickhouse_service.service123: Destroying... [id=aa8d8d63-1878-4600-8470-630715af38ed]
clickhouse_service.service123: Still destroying... [id=aa8d8d63-1878-4600-8470-630715af38ed, 10s elapsed]
clickhouse_service.service123: Still destroying... [id=aa8d8d63-1878-4600-8470-630715af38ed, 20s elapsed]
clickhouse_service.service123: Destruction complete after 27s

Destroy complete! Resources: 1 destroyed.

And it should be gone from the Cloud Console.

More details about the Cloud API can be found here -

· One min read


When trying to connect from PowerBI to ClickHouse using the connector, you receive the following authentication error:

We encountered an error while trying to connect.
Details: "ODBC: ERROR [HY000] HTTP status code: 403
Received error:
Code: 516. DB::Exception: default: Authentication failed: password is incorrect, or there is no user with such name.
If you have installed ClickHouse and forgot password you can reset it in the configuration file.
The password for default user is typically located at /etc/clickhouse-server/users.d/default-password.xml and deleting this file will reset the password.
See also /etc/clickhouse-server/ on the server where
ClickHouse is installed.



Check whether the password being used contains a tilde (~).

The recommendation is to use a dedicated user for the connection and set the password manually. If you are using ClickHouse Cloud and need the admin level of permissions that the default user has, then create a new user and assign it the default_role.
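For example, a minimal sketch (the user name and password are placeholders):

CREATE USER powerbi_user IDENTIFIED BY 'A_Strong_Password_1';
GRANT default_role TO powerbi_user;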

For more information:

· 2 min read


How do I create a table that can query other clusters or instances?


Below is a simple example to test functionality.

In this example, ClickHouse Cloud is used, but the example will also work with self-hosted clusters. The targets just need to be changed to the URLs/hosts/DNS names of a target node or load balancer.

In cluster A:

./clickhouse client --host --secure --password 'Password123!'

Create the database:

create database db1;

Create the table:

CREATE TABLE db1.table1_remote1
(
    `id` UInt32,
    `timestamp_column` DateTime,
    `string_column` String
)
ENGINE = MergeTree()
ORDER BY id;  -- assumed ordering key

Insert some sample rows:

insert into db1.table1_remote1 values
(1, '2023-09-29 00:01:00', 'a'),
(2, '2023-09-29 00:02:00', 'b'),
(3, '2023-09-29 00:03:00', 'c');

In cluster B:

./clickhouse client --host --secure --password 'Password123!'

Create the database:

create database db1;

Create the table:

CREATE TABLE db1.table1_remote2
(
    `id` UInt32,
    `timestamp_column` DateTime,
    `string_column` String
)
ENGINE = MergeTree()
ORDER BY id;  -- assumed ordering key

Insert sample rows:

insert into db1.table1_remote2 values
(4, '2023-09-29 00:04:00', 'x'),
(5, '2023-09-29 00:05:00', 'y'),
(6, '2023-09-29 00:06:00', 'z');

In Cluster C:
*This cluster will be used to gather the data from the other two clusters; however, it can also be used as a source.

./clickhouse client --host --secure --password 'Password123!'

Create the database:

create database db1;

Create the remote tables with remoteSecure() to connect to the other clusters.
Definition for remote cluster A table:

CREATE TABLE db1.table1_remote1_main
(
    `id` UInt32,
    `timestamp_column` DateTime,
    `string_column` String
) AS remoteSecure('', 'db1.table1_remote1', 'default', 'Password123!');

Definition for remote cluster B table:

CREATE TABLE db1.table1_remote2_main
(
    `id` UInt32,
    `timestamp_column` DateTime,
    `string_column` String
) AS remoteSecure('', 'db1.table1_remote2', 'default', 'Password123!');

Create the merge table to be used to gather results:

create table db1.table1_merge_remote
(
    id UInt32,
    timestamp_column DateTime,
    string_column String
)
engine = Merge('db1', 'table.*_main');

Test the results:

clickhouse-cloud :) select * from db1.table1_merge_remote;

SELECT *
FROM db1.table1_merge_remote

Query id: 46b6e741-bbd1-47ed-b40e-69ddb6e0c364

│ 1 │ 2023-09-29 00:01:00 │ a │
│ 2 │ 2023-09-29 00:02:00 │ b │
│ 3 │ 2023-09-29 00:03:00 │ c │
│ 4 │ 2023-09-29 00:04:00 │ x │
│ 5 │ 2023-09-29 00:05:00 │ y │
│ 6 │ 2023-09-29 00:06:00 │ z │

6 rows in set. Elapsed: 0.275 sec.
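If you want to see which remote table each row came from, the Merge engine also provides a _table virtual column:

SELECT _table, *
FROM db1.table1_merge_remote
ORDER BY id;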

For more info:

· 4 min read


What data types should I use in ClickHouse to optimize my queries for speed and storage?


Often, when converting automatically from another system or choosing a data type, users pick the "more is better", "whatever is easier", or "most generic" approach. This will likely work for small datasets in the millions, maybe even billions, of rows. In those cases the difference in query times is small and acceptable for the users' use cases.

As the data grows, however, the difference becomes noticeable and is no longer acceptable.

The difference between a query taking 50 ms and one taking 500 ms may be okay for most use cases, for example in a web UI, but one is still 10x slower than the other, even if it is not very noticeable to a front-end user.

Example initial table:

CREATE TABLE example_table
(
    timestamp DateTime64(9),
    group_id Int64,
    vendor_id String,
    product_id String,
    category1 Int64,
    code_name String,
    paid_status String,
    country_code String,
    description String,
    price Float64,
    attributes Map(String, String)
)
ENGINE = MergeTree  -- table name, engine, and ordering key assumed for illustration
ORDER BY timestamp;

Sample data:

3456, 0000012345, bd6087b7-6026-4974-9122-bc99faae5d84, "2024-03-01 01:00:01.000", 98, "bear", "paid", "us", "corvette model car", 123.45, {"color" : "blue", "size" : "S"}
156, 0000012345, bd6087b7-6026-4974-9122-bc99faae5d84, "2024-03-01 01:00:02.123", 45, "tiger", "not paid", "uk", "electric car", 53432.10, {"color" : "red", "model" : "X"}

Below are some recommendations where this data could be optimized:

timestamp : DateTime64(9)
Unless you need scientific precision, a precision of 9 (nanoseconds) is unlikely to be necessary. It may be useful for display or ordering, but usually not in queries for searching, primary keys, etc.

  • Recommendation:
    For PK, order by: DateTime
    For display or ordering: add additional column - i.e. timestamp_microseconds : DateTime64(6)

group_id : Int64
This appears to be an integer, so select the smallest integer type that fits the maximum value required for the column. From this sample dataset and the column name, it is unlikely to need quintillions of values; an Int16, which can hold values up to 32,767, would probably work.

  • Recommendation: Int16

vendor_id : String
This column looks like a number but has leading zeros, so the formatting is likely important to keep. It also appears to contain only a fixed number of characters.

  • Recommendation: FixedString(10)

product_id : String
This one is alphanumeric, so intuitively it would be a string; however, it is also a UUID.

  • Recommendation: UUID

category1 : Int64
The values are small, there are probably not very many categories, and the set is limited or not expected to grow much (fewer than 255 values).

  • Recommendation: UInt8

code_name : String
This field looks like it may contain only a limited number of distinct strings. For this kind of situation, where the number of string values might be in the hundreds or thousands, low cardinality fields help.

  • Recommendation: LowCardinality(String)

paid_status : String
The string value is either "paid" or "not paid". For situations where there may be only two values, use a boolean.

  • Recommendation: Bool

country_code : String
Sometimes a column meets multiple optimizations. In this example, there are only a certain number of country codes, and they are all two-character identifiers.

  • Recommendation: LowCardinality(FixedString(2))

price : Float64
Floats are not recommended when a fixed precision is known, especially for financial data and calculations. It is best to use Decimal types with the necessary precision. For this use case, it is likely that the price of an item does not exceed 999,999.00.

  • Recommendation: Decimal(10,2)

attributes : map
Often a table has dynamic attributes in maps. Searching for keys or values in them is usually slower. There are a couple of ways to make maps faster. If some keys will be present in most records, it is best to place those in a separate low-cardinality column and keep the keys that are sparse in another, high-cardinality column. From there, it is more efficient to create skip indexes, although this may increase the complexity of the queries.

  • Recommendation: lc_attributes: Map(String, String), hc_attributes: Map(String, String).
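Putting these recommendations together, an optimized version of the example table could look like the following sketch (the table name and ordering key are assumptions; column names follow the example above):

CREATE TABLE example_table_optimized
(
    timestamp DateTime,
    timestamp_microseconds DateTime64(6),
    group_id Int16,
    vendor_id FixedString(10),
    product_id UUID,
    category1 UInt8,
    code_name LowCardinality(String),
    paid_status Bool,
    country_code LowCardinality(FixedString(2)),
    description String,
    price Decimal(10, 2),
    lc_attributes Map(String, String),
    hc_attributes Map(String, String)
)
ENGINE = MergeTree
ORDER BY (country_code, timestamp);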

Depending on the queries, the options that can also be used to create a skip index and/or extract the attributes are:
  • Using ARRAY JOIN to extract the attributes into columns with a materialized view
  • Using a skip index for the keys

· 4 min read


If I have varying attributes in a column using map types, how can I extract them and use them in queries?


This is a basic example of extracting keys and values from a variable attributes field. This method creates what look like duplicates of each row in the source/raw table. Because the keys and values are extracted, however, they can be put into the primary key or into a secondary index, such as a bloom filter.

In this example, we basically have a source that populates a metrics table, and each record has a set of varying attributes carried in a map-typed attributes field. If there are attributes that will always be present for records, it is better to pull those out into their own columns and populate them directly.

You should be able to just copy and paste to see what the outputs would be and what the materialized view does in this instance.

Create a sample database:

create database db1;

Create the initial table that will have the rows and attributes:

create table db1.table1_metric_map
(
    id UInt32,
    timestamp DateTime,
    metric_name String,
    metric_value Int32,
    attributes Map(String, String)
)
engine = MergeTree()
order by timestamp;

Insert sample rows into the table. The sample size is intentionally small so that when the materialized view is created, you can see how the rows are multiplied for each attribute.

insert into db1.table1_metric_map values
(1, '2023-09-20 00:01:00', 'ABC', 10, {'env':'prod','app':'app1','server':'server1'}),
(2, '2023-09-20 00:01:00', 'ABC', 20,{'env':'prod','app':'app2','server':'server1','dc':'dc1'}),
(3, '2023-09-20 00:01:00', 'ABC', 30,{'env':'qa','app':'app1','server':'server1'}),
(4, '2023-09-20 00:01:00', 'ABC', 40,{'env':'qa','app':'app2','server':'server1','dc':'dc1'}),
(5, '2023-09-20 00:01:00', 'DEF', 50,{'env':'prod','app':'app1','server':'server2'}),
(6, '2023-09-20 00:01:00', 'DEF', 60, {'env':'prod','app':'app2','server':'server1'}),
(7, '2023-09-20 00:01:00', 'DEF', 70,{'env':'qa','app':'app1','server':'server1'}),
(8, '2023-09-20 00:01:00', 'DEF', 80,{'env':'qa','app':'app2','server':'server1'}),
(9, '2023-09-20 00:02:00', 'ABC', 90,{'env':'prod','app':'app1','server':'server1'}),
(10, '2023-09-20 00:02:00', 'ABC', 100,{'env':'prod','app':'app1','server':'server2'}),
(11, '2023-09-20 00:02:00', 'ABC', 110,{'env':'qa','app':'app1','server':'server1'}),
(12, '2023-09-20 00:02:00', 'ABC', 120,{'env':'qa','app':'app1','server':'server1'}),
(13, '2023-09-20 00:02:00', 'DEF', 130,{'env':'prod','app':'app1','server':'server1'}),
(14, '2023-09-20 00:02:00', 'DEF', 140,{'env':'prod','app':'app2','server':'server1','dc':'dc1'}),
(15, '2023-09-20 00:02:00', 'DEF', 150,{'env':'qa','app':'app1','server':'server2'}),
(16, '2023-09-20 00:02:00', 'DEF', 160,{'env':'qa','app':'app1','server':'server1','dc':'dc1'}),
(17, '2023-09-20 00:03:00', 'ABC', 170,{'env':'prod','app':'app1','server':'server1'}),
(18, '2023-09-20 00:03:00', 'ABC', 180,{'env':'prod','app':'app1','server':'server1'}),
(19, '2023-09-20 00:03:00', 'ABC', 190,{'env':'qa','app':'app1','server':'server1'}),
(20, '2023-09-20 00:03:00', 'ABC', 200,{'env':'qa','app':'app1','server':'server2'}),
(21, '2023-09-20 00:03:00', 'DEF', 210,{'env':'prod','app':'app1','server':'server1'}),
(22, '2023-09-20 00:03:00', 'DEF', 220,{'env':'prod','app':'app1','server':'server1'}),
(23, '2023-09-20 00:03:00', 'DEF', 230,{'env':'qa','app':'app1','server':'server1'}),
(24, '2023-09-20 00:03:00', 'DEF', 240,{'env':'qa','app':'app1','server':'server1'});

We can then create a materialized view with ARRAY JOIN so that it extracts the map attributes into keys and values columns. For demonstration, the example below uses an implicit table (with the POPULATE command and a backing table like .inner.{uuid}...). The recommended best practice, however, is to use an explicit table, where you would define the target table first and then create the materialized view on top of it with the TO clause instead.

CREATE MATERIALIZED VIEW db1.table1_metric_map_mv
ENGINE = MergeTree()     -- engine and ordering key assumed
ORDER BY timestamp
POPULATE
AS select
    *,
    attributes.keys as attribute_keys,
    attributes.values as attribute_values
from db1.table1_metric_map
array join attributes
where notEmpty(attributes.keys);

The new table will have more rows and will have the keys extracted, like this:

SELECT *
FROM db1.table1_metric_map_mv

Query id: b7384381-53af-4e3e-bc54-871f61c033a6

│ 1 │ 2023-09-20 00:01:00 │ ABC │ 10 │ ('env','prod') │ env │ prod │
│ 1 │ 2023-09-20 00:01:00 │ ABC │ 10 │ ('app','app1') │ app │ app1 │
│ 1 │ 2023-09-20 00:01:00 │ ABC │ 10 │ ('server','server1') │ server │ server1 │
│ 2 │ 2023-09-20 00:01:00 │ ABC │ 20 │ ('env','prod') │ env │ prod │
│ 2 │ 2023-09-20 00:01:00 │ ABC │ 20 │ ('app','app2') │ app │ app2 │

From here, to query for the rows that have certain attributes, you would do something like this:

SELECT,
FROM
(
    SELECT * FROM db1.table1_metric_map_mv
    WHERE (attribute_keys = 'app') AND (attribute_values = 'app1') AND (metric_name = 'ABC')
) AS t1_app
INNER JOIN
(
    SELECT id FROM db1.table1_metric_map_mv
    WHERE (attribute_keys = 'server') AND (attribute_values = 'server1')
) AS t2_server ON =;

Query id: 72ce7f19-b02a-4b6e-81e7-a955f257436d

│ 1 │ 2023-09-20 00:01:00 │ ABC │ 10 │
│ 3 │ 2023-09-20 00:01:00 │ ABC │ 30 │
│ 9 │ 2023-09-20 00:02:00 │ ABC │ 90 │
│ 11 │ 2023-09-20 00:02:00 │ ABC │ 110 │
│ 12 │ 2023-09-20 00:02:00 │ ABC │ 120 │
│ 17 │ 2023-09-20 00:03:00 │ ABC │ 170 │
│ 18 │ 2023-09-20 00:03:00 │ ABC │ 180 │
│ 19 │ 2023-09-20 00:03:00 │ ABC │ 190 │
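Following the best practice mentioned above, a sketch of an explicit target table for the materialized view could also carry a bloom filter skip index on the extracted keys (the table name and index parameters here are assumptions):

create table db1.table1_metric_map_flat
(
    id UInt32,
    timestamp DateTime,
    metric_name String,
    metric_value Int32,
    attribute_keys String,
    attribute_values String,
    INDEX idx_attr_keys attribute_keys TYPE bloom_filter(0.01) GRANULARITY 4
)
engine = MergeTree()
order by timestamp;

The materialized view would then be created with TO db1.table1_metric_map_flat instead of POPULATE.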

· 3 min read


How do I set up ClickHouse with a Docker image to connect to Microsoft SQL Server?


Notes on this example

  • Uses the ClickHouse Docker Ubuntu image
  • Uses the FreeTDS Driver
  • Uses MSSQL Server 2012R2
  • Windows hostname for this example is MARSDB2.marsnet2.local at IP: (update with your hostname and/or IP)
  • MSSQL Instance name MARSDB2
  • The MSSQL login and database user are both sql_user

Example setup in MSSQL for testing

Database and table created in MSSQL:


MSSQL Login User, sql_user:


Database membership roles for sql_user:


Database User with Login:


Configuring ClickHouse with ODBC

Create a working directory:

mkdir ch-odbc-mssql
cd ch-odbc-mssql

Create an odbc.ini file:

vim odbc.ini

Add the following entries, updating the DSN name and server address as needed:

[marsdb2_mssql]
Driver = FreeTDS
Server =

Create an odbcinst.ini file:

vim odbcinst.ini

Add the following entries (trace is optional but helps with debugging):

[ODBC]
Trace = Yes
TraceFile = /tmp/odbc.log

[FreeTDS]
Description = FreeTDS
Driver = /usr/lib/aarch64-linux-gnu/odbc/
Setup = /usr/lib/x86_64-linux-gnu/odbc/
UsageCount = 1

Configure a Dockerfile to download the image and add the TDS and required ODBC libraries

Create the Dockerfile:

vim Dockerfile

Add the contents of the Dockerfile:

FROM clickhouse/clickhouse-server:23.10

# Install the ODBC driver
RUN apt-get update && apt-get install -y --no-install-recommends unixodbc \
    && apt-get install -y freetds-bin freetds-common freetds-dev libct4 libsybdb5 \
    && apt-get install -y tdsodbc

Build the new docker image:

docker build . -t marsnet/clickhouse-odbc:23.10

Create a docker-compose.yml file:

vim docker-compose.yml

Add the following contents to the YAML:

version: '3.7'
# the service name and the memlock/nofile ulimit keys below are assumptions
services:
  clickhouse-odbc:
    image: marsnet/clickhouse-odbc:23.10
    container_name: clickhouse-odbc
    hostname: clickhouse-host
    ports:
      - "9000:9000"
      - "8123:8123"
      - "9009:9009"
    volumes:
      - ./odbc.ini:/etc/odbc.ini
      - ./odbcinst.ini:/etc/odbcinst.ini
    restart: always
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 262144
        hard: 262144
    deploy:
      resources:
        limits:
          memory: 4g

Start the container:

docker compose up --detach

After you start the container, you should see something like this:

ch-odbc-mssql % docker compose up --detach
[+] Running 1/1
✔ Container clickhouse-odbc Started

Check to ensure the container is running:

ch-odbc-mssql % docker ps
87a400b803ce marsnet/clickhouse-odbc:23.10 "/" 57 minutes ago Up About a minute>8123/tcp,>9000/tcp,>9009/tcp clickhouse-odbc

Test ODBC connection
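Before testing from ClickHouse, you can optionally verify the DSN itself from inside the container with unixODBC's isql tool (the container name comes from the compose file; the password matches the one used in the odbc() calls below):

docker exec -it clickhouse-odbc isql -v marsdb2_mssql sql_user 'ClickHouse123'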

Login with the ClickHouse client:

./clickhouse client

Test the SELECT using the odbc table function to the remote MSSQL Database table:

clickhouse-host :) SELECT * from odbc('DSN=marsdb2_mssql;port=1433;Uid=sql_user;Pwd=ClickHouse123;Database=db1', 'table1');

FROM odbc('DSN=marsdb2_mssql;port=1433;Uid=sql_user;Pwd=ClickHouse123;Database=db1', 'table1')

Query id: 23494da2-6e12-4ade-95fa-372a0420cac1

│ 1 │ abc │
│ 2 │ def │
│ 3 │ ghi │

3 rows in set. Elapsed: 0.188 sec.

You can also create a remote table using the odbc table engine:

CREATE TABLE table1_odbc_mssql
(
    `id` Int32,
    `column1` String
)
ENGINE = ODBC('DSN=marsdb2_mssql;port=1433;Uid=sql_user;Pwd=ClickHouse123;Database=db1', 'dbo', 'table1');

Use a SELECT query to test the new remote table:

clickhouse-host :) select * from table1_odbc_mssql;

FROM table1_odbc_mssql

Query id: 94724368-485d-4364-ae58-a435a225c37d

│ 1 │ abc │
│ 2 │ def │
│ 3 │ ghi │

3 rows in set. Elapsed: 0.218 sec.

For more information, please see:

· 2 min read


How do I work with JSON messages using a source or landing table and extract the fields with a materialized view?
How do I work with JSON without the experimental JSON Object type?


A common pattern for working with JSON data is to send the data to a landing table and use JSONExtract functions to pull the data into a new table using a materialized view trigger. This normally follows this flow and pattern:

source data --> MergeTree table --> Materialized View (with base table) --> application/client

The landing table should have a raw string field where you store the raw JSON. It should also have one or two other fields that can be used for managing that table, so that it can be partitioned and trimmed as the data ages.

*Some integrations can add fields to the original data, for example when using the ClickHouse Kafka Connector Sink.

Simplified example below:

  • create the example database
create database db1;
  • create a landing table where your raw json will be inserted:
create table db1.table2_json_raw
(
    id Int32,
    timestamp DateTime,
    raw String
)
engine = MergeTree()
order by timestamp;
  • create the base table for the materialized view
create table db1.table2_json_mv_base
(
    id Int32,
    timestamp DateTime,
    raw_string String,
    custId Int8,
    custName String
)
engine = MergeTree()
order by timestamp;
  • create the materialized view to the base table
create materialized view db1.table2_json_mv to db1.table2_json_mv_base
as select
    id,
    timestamp,
    raw as raw_string,
    simpleJSONExtractRaw(raw, 'customerId') as custId,
    simpleJSONExtractRaw(raw, 'customerName') as custName
from db1.table2_json_raw;
  • insert some sample rows
 insert into db1.table2_json_raw values
(1, '2024-05-16 00:00:00', '{"customerId":1, "customerName":"ABC"}'),
(2, '2024-05-16 00:00:01', '{"customerId":2, "customerName":"XYZ"}');
  • view the results from the extraction and the materialized view that would be used in the queries
clickhouse-cloud :) select * from db1.table2_json_mv;

SELECT *
FROM db1.table2_json_mv

Query id: 12655fd3-567a-4dfb-9ef7-abc4b11ad044

│ 1 │ 2024-05-16 00:00:00 │ {"customerId":1, "customerName":"ABC"} │ 1 │ "ABC" │
│ 2 │ 2024-05-16 00:00:01 │ {"customerId":2, "customerName":"XYZ"} │ 2 │ "XYZ" │
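As mentioned above, the landing table can also carry management fields so that the raw JSON can be partitioned and trimmed as it ages; a sketch of such a variant (the partition key and the 30-day TTL are assumptions):

create table db1.table2_json_raw
(
    id Int32,
    timestamp DateTime,
    raw String
)
engine = MergeTree()
partition by toYYYYMM(timestamp)
order by timestamp
ttl timestamp + INTERVAL 30 DAY;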

Additional Reference links:
Materialized Views:
Working with JSON:
JSON functions:

· 6 min read

This example shows how AD users that belong to different AD security groups can be given role access in ClickHouse. It also shows how a user may be added to multiple AD user groups so they can have access provided by multiple roles.

In this environment, we have the following:

  • A Windows Active Directory domain: marsnet2.local
  • A ClickHouse Cluster, cluster_1S_3R with 3 nodes on a cluster configuration of 1 shard, 3 replicas
  • 4 AD users:

| AD User | Description |
|---|---|
| clickhouse_ad_admin | ClickHouse Admin user |
| clickhouse_db1_user | User with access to db1.table1 |
| clickhouse_db2_user | User with access to db2.table1 |
| ch_db1_db2_user | User with access to both db1.table1 and db2.table1 |
  • 3 AD security groups:

| AD Group | Description |
|---|---|
| clickhouse_ad_admins | ClickHouse Admins group |
| clickhouse_ad_db1_users | Group to map with access to db1.table1 |
| clickhouse_ad_db2_users | Group to map with access to db2.table1 |
  • Example AD environment and OU structure:


  • Example AD Security Group Configuration:


  • Example AD User Configuration:


  1. In Windows AD Users and Groups, add each user to their respective group(s); they will be mapped to the ClickHouse roles (example in the next step).

| AD Security Group | ClickHouse Role |
|---|---|
| ch_db1_db2_user | clickhouse_ad_db1_users and clickhouse_ad_db2_users |
  • Example user group membership:


  1. In ClickHouse config.xml, add the ldap_servers configuration to each ClickHouse node.
| xml tag | Description | Example Value |
|---|---|---|
| ldap_servers | Tag used to define the LDAP servers that will be used by ClickHouse | NA |
| marsnet_ad | This tag is arbitrary and is just a label used to identify the server in the <user_directories> section | NA |
| host | FQDN or IP address of the Active Directory server or domain | marsdc1.marsnet2.local |
| port | Active Directory port, usually 389 for non-SSL or 636 for SSL | 389 |
| bind_dn | Which user will be used to create the bind to AD; it can be a dedicated user if regular users are not allowed to bind | {user_name}@marsnet2.local |
| user_dn_detection | Settings for how ClickHouse will find the AD users | NA |
| base_dn | AD OU path to start the search for the users | OU=Users,OU=ClickHouse,DC=marsnet2,DC=local |
| search_filter | LDAP search filter to find the AD user | (&(objectClass=user)(sAMAccountName={user_name})) |

Refer to the documentation for the full set of options:
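A minimal sketch of how these values could be assembled in config.xml (values taken from the table above):

<clickhouse>
    <ldap_servers>
        <marsnet_ad>
            <host>marsdc1.marsnet2.local</host>
            <port>389</port>
            <bind_dn>{user_name}@marsnet2.local</bind_dn>
            <user_dn_detection>
                <base_dn>OU=Users,OU=ClickHouse,DC=marsnet2,DC=local</base_dn>
                <search_filter>(&amp;(objectClass=user)(sAMAccountName={user_name}))</search_filter>
            </user_dn_detection>
        </marsnet_ad>
    </ldap_servers>
</clickhouse>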

  1. In ClickHouse config.xml, add the <user_directories> configuration with <ldap> entries to each ClickHouse node.
| xml tag | Description | Example Value |
|---|---|---|
| user_directories | Defines which authenticators will be used | NA |
| ldap | This contains the settings for the LDAP server (in this case the AD server) that will be used | NA |
| server | This is the tag that was defined in the <ldap_servers> section | marsnet2_ad |
| role_mapping | Definition of how the authenticated users will be mapped between AD groups and ClickHouse roles | NA |
| base_dn | AD path that the system will use to start the search for AD groups | OU=Groups,OU=ClickHouse,DC=marsnet2,DC=local |
| search_filter | LDAP search filter to find the AD groups | (&(objectClass=group)(member={user_dn})) |
| attribute | Which AD attribute field should be used to identify the user | CN |
| scope | Which levels in the base DN the system should search for the groups | subtree |
| prefix | Prefix for the names of the groups in AD; this prefix will be removed to find the roles in ClickHouse | clickhouse_ |

Refer to the documentation for the full set of options:
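A minimal sketch of the corresponding <user_directories> block (values taken from the table above; the <server> value must match the label defined under <ldap_servers>):

<clickhouse>
    <user_directories>
        <ldap>
            <server>marsnet_ad</server>
            <role_mapping>
                <base_dn>OU=Groups,OU=ClickHouse,DC=marsnet2,DC=local</base_dn>
                <search_filter>(&amp;(objectClass=group)(member={user_dn}))</search_filter>
                <attribute>CN</attribute>
                <scope>subtree</scope>
                <prefix>clickhouse_</prefix>
            </role_mapping>
        </ldap>
    </user_directories>
</clickhouse>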

:::note
Since the AD security groups were prefixed in the example (i.e. clickhouse_ad_db1_users), when the system retrieves them the prefix is removed, and the system will look for a ClickHouse role called ad_db1_users to map to the AD group clickhouse_ad_db1_users.
:::

  1. Create example databases.
create database db1 on cluster 'cluster_1S_3R';
create database db2 on cluster 'cluster_1S_3R';
  1. Create example tables.
create table db1.table1 on cluster 'cluster_1S_3R'
(
    id Int32,
    column1 String
)
engine = MergeTree()
order by id;

create table db2.table1 on cluster 'cluster_1S_3R'
(
    id Int32,
    column1 String
)
engine = MergeTree()
order by id;
  1. Insert sample data.
insert into db1.table1 values
(1, 'a');

insert into db2.table1 values
(2, 'b');
  1. Create ClickHouse Roles.
create role ad_admins on cluster 'cluster_1S_3R';
create role ad_db1_users on cluster 'cluster_1S_3R';
create role ad_db2_users on cluster 'cluster_1S_3R';
  1. Grant the privileges to the roles.

GRANT SELECT ON db1.table1 on cluster 'cluster_1S_3R' TO ad_db1_users;

GRANT SELECT ON db2.table1 on cluster 'cluster_1S_3R' TO ad_db2_users;
  1. Test access for restricted db1 user. For example:
root@chnode1:/etc/clickhouse-server# clickhouse-client --user clickhouse_db1_user --password MyPassword123  --secure --port 9440 --host chnode1.marsnet.local
ClickHouse client version (official build).
Connecting to chnode1.marsnet.local:9440 as user clickhouse_db1_user.
Connected to ClickHouse server version 24.1.3.

clickhouse :) select * from db1.table1;

FROM db1.table1

Query id: b04b92d6-5b8b-40a2-a92a-f06f15774930

│ 1 │ a │

1 row in set. Elapsed: 0.004 sec.

clickhouse :) select * from db2.table1;

FROM db2.table1

Query id: 7f7eaa44-7b47-4184-807a-6968a56057ad

Elapsed: 0.115 sec.

Received exception from server (version 24.1.3):
Code: 497. DB::Exception: Received from chnode1.marsnet.local:9440. DB::Exception: clickhouse_db1_user: Not enough privileges. To execute this query, it's necessary to have the grant SELECT(id, column1) ON db2.table1. (ACCESS_DENIED)
  1. Test access for the user that has access to both databases, db1 and db2. For example:
root@chnode1:/etc/clickhouse-server# clickhouse-client --user ch_db1_db2_user --password MyPassword123  --secure --port 9440 --host chnode1.marsnet.local
ClickHouse client version (official build).
Connecting to chnode1.marsnet.local:9440 as user ch_db1_db2_user.
Connected to ClickHouse server version 24.1.3.

clickhouse :) select * from db1.table1;

FROM db1.table1

Query id: 23084744-08c2-48bd-8635-a23438812026

│ 1 │ a │

1 row in set. Elapsed: 0.005 sec.

clickhouse :) select * from db2.table1;

FROM db2.table1

Query id: f9954ec4-d8d9-4b5a-9f68-a7aa79a1bb4a

│ 2 │ b │

1 row in set. Elapsed: 0.004 sec.
  1. Test access for the Admin user. For example:
root@chnode1:/etc/clickhouse-server# clickhouse-client --user clickhouse_ad_admin --password MyPassword123  --secure --port 9440 --host chnode1.marsnet.local
ClickHouse client version (official build).
Connecting to chnode1.marsnet.local:9440 as user clickhouse_ad_admin.
Connected to ClickHouse server version 24.1.3.

clickhouse :) create table db1.table2 on cluster 'cluster_1S_3R'
(
    id Int32,
    column1 String
)
engine = MergeTree()
order by id;

CREATE TABLE db1.table2 ON CLUSTER cluster_1S_3R
`id` Int32,
`column1` String
ENGINE = MergeTree

Query id: 6041fd32-4294-44bd-b442-3fdd41333e6f

│ chnode1.marsnet.local │ 9440 │ 0 │ │ 2 │ 2 │
│ chnode2.marsnet.local │ 9440 │ 0 │ │ 1 │ 1 │
│ chnode3.marsnet.local │ 9440 │ 0 │ │ 0 │ 0 │

· 6 min read

Overview: This article walks through the process of sending data from a Kafka topic to a ClickHouse table. We’ll use the Wiki recent changes feed, which provides a stream of events that represent changes made to various Wikimedia properties. The steps include:

  1. How to set up Kafka on Ubuntu
  2. Ingest a stream of data into a Kafka topic
  3. Create a ClickHouse table that subscribes to the topic

1. Set up Kafka on Ubuntu

  1. Create an Ubuntu ec2 instance and SSH on to it:
ssh -i ~/training.pem [email protected]
  1. Install Kafka (based on the instructions here:
sudo apt update
sudo apt install openjdk-11-jdk

mkdir /home/ubuntu/kafka
cd /home/ubuntu/kafka/


tar -zxvf kafka_2.13-3.7.0.tgz
  1. Start ZooKeeper:
cd kafka_2.13-3.7.0
bin/ config/
  1. Open a new console and launch Kafka:
ssh -i ~/training.pem [email protected]
cd kafka/kafka_2.13-3.7.0/
bin/ config/
  1. Open a third console and create a topic named wikimedia:
ssh -i ~/training.pem [email protected]
cd kafka/kafka_2.13-3.7.0/

bin/ --create --topic wikimedia --bootstrap-server localhost:9092
  1. You can verify it was created successfully by:
bin/ --list --bootstrap-server localhost:9092

2. Ingest the Wikimedia Stream into Kafka

  1. We need some utilities first:
sudo apt-get install librdkafka-dev libyajl-dev
sudo apt-get install kafkacat
  1. The data is sent to Kafka using a clever curl command that grabs the latest Wikimedia events, parses out the JSON data and sends that to the Kafka topic:
curl -N  | awk '/^data: /{gsub(/^data: /, ""); print}' | kafkacat -P -b localhost:9092 -t wikimedia
  1. You can "describe" the topic:
bin/ --describe --topic wikimedia --bootstrap-server localhost:9092
  1. Let's verify everything is working by consuming some events:
bin/ --topic wikimedia --from-beginning --bootstrap-server localhost:9092
  1. Hit Ctrl+c to kill the previous command.

3. Ingest the Data into ClickHouse

  1. Here is what the incoming data looks like:
"$schema": "/mediawiki/recentchange/1.0.0",
"meta": {
"uri": "",
"request_id": "f64cfb17-04ba-4d09-8935-38ec6f0001c2",
"id": "9d7d2b5a-b79b-45ea-b72c-69c3b69ae931",
"dt": "2024-04-18T13:21:21Z",
"domain": "",
"stream": "mediawiki.recentchange",
"topic": "eqiad.mediawiki.recentchange",
"partition": 0,
"offset": 5032636513
"id": 2196113017,
"type": "edit",
"namespace": 0,
"title": "Q45791749",
"title_url": "",
"comment": "/* wbsetqualifier-add:1| */ [[Property:P1545]]: 20, Modify PubMed ID: 7292984 citation data from NCBI, Europe PMC and CrossRef",
"timestamp": 1713446481,
"user": "Cewbot",
"bot": true,
"notify_url": "",
"minor": false,
"patrolled": true,
"length": {
"old": 75618,
"new": 75896
"revision": {
"old": 2131981341,
"new": 2131981357
"server_url": "",
"server_name": "",
"server_script_path": "/w",
"wiki": "wikidatawiki",
"parsedcomment": "<span dir=\"auto\"><span class=\"autocomment\">Added qualifier: </span></span> <a href=\"/wiki/Property:P1545\" title=\"series ordinal | position of an item in its parent series (most frequently a 1-based index), generally to be used as a qualifier (different from &quot;rank&quot; defined as a class, and from &quot;ranking&quot; defined as a property for evaluating a quality).\"><span class=\"wb-itemlink\"><span class=\"wb-itemlink-label\" lang=\"en\" dir=\"ltr\">series ordinal</span> <span class=\"wb-itemlink-id\">(P1545)</span></span></a>: 20, Modify PubMed ID: 7292984 citation data from NCBI, Europe PMC and CrossRef"
  1. We will need the Kafka table engine to pull the data from the Kafka topic:
CREATE TABLE wikiQueue
(
    `id` UInt32,
    `type` String,
    `title` String,
    `title_url` String,
    `comment` String,
    `timestamp` UInt64,
    `user` String,
    `bot` Bool,
    `server_url` String,
    `server_name` String,
    `wiki` String,
    `meta` Tuple(uri String, id String, stream String, topic String, domain String)
)
ENGINE = Kafka(
    'your-kafka-host:9092',  -- broker address (assumed placeholder; use your Kafka host)
    'wikimedia',             -- topic created earlier
    'consumer_group_wiki',   -- consumer group name (assumed)
    'JSONEachRow'            -- message format (assumed)
);
  1. For some reason the Kafka table engine seems to take the public ec2 URL and convert it to the private DNS name, so I had to add that to my local /etc/hosts file:
  1. You can read from a Kafka table directly; you just have to enable a setting:
SELECT *
FROM wikiQueue
FORMAT Vertical
SETTINGS stream_like_engine_allow_direct_select = 1;

The rows should come back nicely parsed based on the columns defined in the wikiQueue table:

id:          2473996741
type: edit
title: File:Père-Lachaise - Division 6 - Cassereau 05.jpg
comment: /* wbcreateclaim-create:1| */ [[d:Special:EntityPage/P921]]: [[d:Special:EntityPage/Q112327116]], [[:toollabs:quickstatements/#/batch/228454|batch #228454]]
timestamp: 1713457283
user: Ameisenigel
bot: false
wiki: commonswiki
meta: ('','01a832e2-24c5-4ccb-bd93-8e2c0e429418','mediawiki.recentchange','eqiad.mediawiki.recentchange','')
  1. We need a MergeTree table to store these incoming events:
CREATE TABLE rawEvents (
id UInt64,
type LowCardinality(String),
comment String,
timestamp DateTime64(3, 'UTC'),
title_url String,
topic LowCardinality(String),
    user String
)
ENGINE = MergeTree
ORDER BY (type, timestamp);
  1. Let's define a materialized view that gets triggered when an insert occurs on the Kafka table and sends the data to our rawEvents table (the view name below is assumed):
CREATE MATERIALIZED VIEW rawEvents_mv TO rawEvents
AS SELECT
    id, type, comment,
    toDateTime(timestamp) AS timestamp,
    title_url,
    tupleElement(meta, 'topic') AS topic,
    user
FROM wikiQueue
WHERE title_url <> '';
  1. You should start seeing data going into rawEvents almost immediately:
SELECT count()
FROM rawEvents;
  1. Let's view some of the rows:
SELECT *
FROM rawEvents
LIMIT 5
FORMAT Vertical
Row 1:
id: 124842852
type: 142
comment: Pere prlpz commented on "Plantilles Enciclopèdia Catalana" (Diria que no cal fer res als articles. Es pot actualitzar els enllaços que es facin servir a les referències (tot i que l'antic encara ha...)
timestamp: 2024-04-18 16:22:29.000
topic: eqiad.mediawiki.recentchange
user: Pere prlpz

Row 2:
id: 2473996748
type: categorize
comment: [[:File:Ruïne van een poortgebouw, RP-T-1976-29-6(R).jpg]] removed from category
timestamp: 2024-04-18 16:21:20.000
topic: eqiad.mediawiki.recentchange
user: Warburg1866

Row 3:
id: 311828596
type: categorize
comment: [[:Cujo (película)]] añadida a la categoría
timestamp: 2024-04-18 16:21:21.000
topic: eqiad.mediawiki.recentchange
user: Beta15

Row 4:
id: 311828597
type: categorize
comment: [[:Cujo (película)]] eliminada de la categoría
timestamp: 2024-04-18 16:21:21.000
topic: eqiad.mediawiki.recentchange
user: Beta15

Row 5:
id: 48494536
type: categorize
comment: [[:braiteremmo]] ajoutée à la catégorie
timestamp: 2024-04-18 16:21:21.000
topic: eqiad.mediawiki.recentchange
user: Àncilu bot
  1. Let's see what types of events are coming in:
SELECT
    type,
    count()
FROM rawEvents
GROUP BY type
1. │ 142 │ 1 │
2. │ new │ 1003 │
3. │ categorize │ 12228 │
4. │ log │ 1799 │
5. │ edit │ 17142 │

Let's define a materialized view chained to our current materialized view. We will keep track of some aggregated stats per minute in a table named byMinute (the materialized view name below is assumed):

CREATE TABLE byMinute
(
    `dateTime` DateTime64(3, 'UTC') NOT NULL,
    `users` AggregateFunction(uniq, String),
    `pages` AggregateFunction(uniq, String),
    `updates` AggregateFunction(sum, UInt32)
)
ENGINE = AggregatingMergeTree
ORDER BY dateTime;

CREATE MATERIALIZED VIEW byMinute_mv TO byMinute
AS SELECT
    toStartOfMinute(timestamp) AS dateTime,
    uniqState(user) AS users,
    uniqState(title_url) AS pages,
    sumState(toUInt32(1)) AS updates
FROM rawEvents
GROUP BY dateTime;
  1. We will need -Merge functions to view the results:
SELECT
    dateTime AS dateTime,
    uniqMerge(users) AS users,
    uniqMerge(pages) AS pages,
    sumMerge(updates) AS updates
FROM byMinute
GROUP BY dateTime