# Advanced Usage

## Raw API
For use cases which do not require transformation between ClickHouse data and native or third party data types and structures, the ClickHouse Connect client provides methods for direct usage of the ClickHouse connection.
### Client `raw_query` method
The `Client.raw_query` method allows direct usage of the ClickHouse HTTP query interface using the client connection. The return value is an unprocessed `bytes` object. It offers a convenient wrapper with parameter binding, error handling, retries, and settings management using a minimal interface:
| Parameter | Type | Default | Description |
|---|---|---|---|
| query | str | Required | Any valid ClickHouse query |
| parameters | dict or iterable | None | See the parameters description. |
| settings | dict | None | See the settings description. |
| fmt | str | None | ClickHouse output format for the resulting bytes (ClickHouse uses TSV if not specified) |
| use_database | bool | True | Use the ClickHouse Connect client-assigned database for the query context |
| external_data | ExternalData | None | An ExternalData object containing file or binary data to use with the query. See Advanced Queries (External Data) |
It is the caller's responsibility to handle the resulting `bytes` object. Note that `Client.query_arrow` is just a thin wrapper around this method using the ClickHouse `Arrow` output format.
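A minimal sketch of fetching raw bytes in the `JSONEachRow` format (the connection details and the query against `system.tables` are illustrative choices, not part of the API):

```python
import clickhouse_connect

# Adjust connection parameters for your environment
client = clickhouse_connect.get_client(host='localhost', username='default', password='')

# The result is returned as unprocessed bytes in the requested output format
raw_bytes = client.raw_query(
    'SELECT database, name FROM system.tables WHERE database = %(db)s LIMIT 5',
    parameters={'db': 'system'},
    fmt='JSONEachRow',
)

# The caller is responsible for decoding/parsing the payload
print(raw_bytes.decode())
```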
### Client `raw_stream` method
The `Client.raw_stream` method has the same API as the `raw_query` method, but returns an `io.IOBase` object which can be used as a generator/stream source of `bytes` objects. It is currently utilized by the `query_arrow_stream` method.
### Client `raw_insert` method
The `Client.raw_insert` method allows direct inserts of `bytes` objects or `bytes` object generators using the client connection. Because it does no processing of the insert payload, it is highly performant. The method provides options to specify settings and insert format:
| Parameter | Type | Default | Description |
|---|---|---|---|
| table | str | Required | Either the simple or database-qualified table name |
| column_names | Sequence[str] | None | Column names for the insert block. Required if the fmt parameter does not include names |
| insert_block | str, bytes, Generator[bytes], BinaryIO | Required | Data to insert. Strings will be encoded with the client encoding. |
| settings | dict | None | See the settings description. |
| fmt | str | None | ClickHouse input format of the insert_block bytes (ClickHouse uses TSV if not specified) |
It is the caller's responsibility to ensure that the `insert_block` is in the specified format and uses the specified compression method. ClickHouse Connect uses these raw inserts for file uploads and PyArrow Tables, delegating parsing to the ClickHouse server.
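As a sketch (the table name `raw_demo` and its schema are hypothetical; the payload must already match the declared input format):

```python
import clickhouse_connect

client = clickhouse_connect.get_client(host='localhost')

# Hypothetical target table for the example
client.command(
    'CREATE TABLE IF NOT EXISTS raw_demo (id UInt32, name String) '
    'ENGINE = MergeTree ORDER BY id'
)

# The payload is passed through untouched, so it must already be valid CSV;
# strings are encoded with the client encoding before sending
csv_payload = '1,alpha\n2,beta\n3,gamma\n'
client.raw_insert(
    table='raw_demo',
    column_names=['id', 'name'],
    insert_block=csv_payload,
    fmt='CSV',
)
```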
### Saving query results as files
You can stream files directly from ClickHouse to the local file system using the `raw_stream` method. For example, if you'd like to save the results of a query to a CSV file, you could use the following code snippet:
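(A minimal sketch; the query against `system.numbers` and the `CSVWithNames` format are illustrative choices.)

```python
import clickhouse_connect

if __name__ == '__main__':
    client = clickhouse_connect.get_client()
    query = 'SELECT number, toString(number) AS number_str FROM system.numbers LIMIT 5'
    fmt = 'CSVWithNames'  # or CSV, CSVWithNamesAndTypes, TabSeparated, ...

    # raw_stream returns an io.IOBase object that yields chunks of bytes
    stream = client.raw_stream(query=query, fmt=fmt)
    with open('output.csv', 'wb') as f:
        for chunk in stream:
            f.write(chunk)
```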
The code above yields an `output.csv` file with the following content:
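(With the illustrative query above, the file would contain something like the following.)

```csv
number,number_str
0,0
1,1
2,2
3,3
4,4
```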
Similarly, you could save data in TabSeparated and other formats. See Formats for Input and Output Data for an overview of all available format options.
## Multithreaded, multiprocess, and async/event driven use cases
ClickHouse Connect works well in multithreaded, multiprocess, and event-loop-driven/asynchronous applications. All query and insert processing occurs within a single thread, so operations are generally thread-safe. (Parallel processing of some operations at a low level is a possible future enhancement to overcome the performance penalty of a single thread, but even in that case thread safety will be maintained.)
Because each executed query or insert maintains state in its own `QueryContext` or `InsertContext` object, respectively, these helper objects are not thread-safe and should not be shared between multiple processing streams. See the additional discussion about context objects in the QueryContexts and InsertContexts sections.
Additionally, in an application that has two or more queries and/or inserts "in flight" at the same time, there are two further considerations to keep in mind. The first is the ClickHouse "session" associated with the query/insert, and the second is the HTTP connection pool used by ClickHouse Connect Client instances.
### AsyncClient wrapper
ClickHouse Connect provides an async wrapper over the regular `Client`, so that it is possible to use the client in an `asyncio` environment.
To get an instance of the `AsyncClient`, you can use the `get_async_client` factory function, which accepts the same parameters as the standard `get_client`:
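(A minimal sketch, assuming a local server; the query against `system.databases` is illustrative.)

```python
import asyncio
import clickhouse_connect


async def main():
    # Accepts the same connection parameters as get_client
    client = await clickhouse_connect.get_async_client(host='localhost')

    # Query methods are coroutines on AsyncClient
    result = await client.query('SELECT name FROM system.databases')
    print(result.result_rows)


asyncio.run(main())
```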
`AsyncClient` has the same methods with the same parameters as the standard `Client`, but they are coroutines when applicable. Internally, the `Client` methods that perform I/O operations are wrapped in an asyncio `run_in_executor` call.
Multithreaded performance will increase when using the `AsyncClient` wrapper, as the execution threads and the GIL will be released while waiting for I/O operations to complete.
Note: Unlike the regular `Client`, the `AsyncClient` enforces `autogenerate_session_id` to be `False` by default.

See also: the `run_async` example.
### Managing ClickHouse session IDs
Each ClickHouse query occurs within the context of a ClickHouse "session". Sessions are currently used for two purposes:

- To associate specific ClickHouse settings with multiple queries (see the user settings). The ClickHouse `SET` command is used to change the settings for the scope of a user session.
- To track temporary tables.
By default, each query executed with a ClickHouse Connect `Client` instance uses that client's session ID. `SET` statements and temporary tables work as expected when using a single client. However, the ClickHouse server does not allow concurrent queries within the same session (the client will raise a `ProgrammingError` if attempted). For applications that execute concurrent queries, use one of the following patterns:
- Create a separate `Client` instance for each thread/process/event handler that needs session isolation. This preserves per-client session state (temporary tables and `SET` values).
- Use a unique `session_id` for each query via the `settings` argument when calling `query`, `command`, or `insert`, if you do not require shared session state.
- Disable sessions on a shared client by setting `autogenerate_session_id` to `False` before creating the client.
Alternatively, pass `autogenerate_session_id=False` directly to `get_client(...)`. In this case ClickHouse Connect does not send a `session_id`, so the server does not treat separate requests as belonging to the same session. Temporary tables and session-level settings will not persist across requests.
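A sketch of these patterns (the `uuid`-based session ID is illustrative, and `common.set_setting` is assumed here as the module-level mechanism for changing client defaults before creation):

```python
import uuid

import clickhouse_connect
from clickhouse_connect import common

client = clickhouse_connect.get_client(host='localhost')

# Give each concurrent query its own session via the settings argument
result = client.query(
    'SELECT database, name FROM system.tables LIMIT 5',
    settings={'session_id': str(uuid.uuid4())},
)

# Disable client-generated session ids before creating a shared client ...
common.set_setting('autogenerate_session_id', False)
shared_client = clickhouse_connect.get_client(host='localhost')

# ... or pass the flag directly to get_client
shared_client = clickhouse_connect.get_client(host='localhost', autogenerate_session_id=False)
```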
### Customizing the HTTP connection pool
ClickHouse Connect uses `urllib3` connection pools to handle the underlying HTTP connection to the server. By default, all client instances share the same connection pool, which is sufficient for the majority of use cases. This default pool maintains up to 8 HTTP Keep-Alive connections to each ClickHouse server used by the application.
For large multithreaded applications, separate connection pools may be appropriate. Customized connection pools can be provided as the `pool_mgr` keyword argument to the main `clickhouse_connect.get_client` function:
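(A sketch; the pool sizes are arbitrary, and `httputil.get_pool_manager` is assumed here as the helper for building a `urllib3` pool manager with suitable defaults.)

```python
import clickhouse_connect
from clickhouse_connect.driver import httputil

# A dedicated pool manager shared by two clients; sizes are arbitrary
big_pool_mgr = httputil.get_pool_manager(maxsize=16, num_pools=12)

client1 = clickhouse_connect.get_client(pool_mgr=big_pool_mgr)
client2 = clickhouse_connect.get_client(pool_mgr=big_pool_mgr)
```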
As demonstrated by the above example, clients can share a pool manager, or a separate pool manager can be created for each client. For more details on the options available when creating a `PoolManager`, see the urllib3 documentation.