Altinity ClickHouse Integrations

Various Altinity ClickHouse Integrations

Altinity.Cloud Integrations - Overview

The following integrations work with Altinity.Cloud ClickHouse clusters.


Programming Languages

  1. Java (ClickHouse)
  2. Go (ClickHouse)
  3. Python 3 (ClickHouse)
  4. Node.js (ClickHouse)
  5. C# (ClickHouse)

Data Ingestions

  1. Amazon S3 (ClickHouse)
  2. Kafka (ClickHouse)
  3. Airbyte (ClickHouse)
  4. dbt (ClickHouse)
  5. Vector (ClickHouse)
  6. PostgreSQL (ClickHouse)
  7. MySQL (ClickHouse)

Data Visualization

  1. Grafana (ClickHouse)
  2. Superset (ClickHouse)
  3. Deepnote (ClickHouse)
  4. Hex (ClickHouse)
  5. Metabase (ClickHouse)
  6. Tableau (ClickHouse)

SQL Clients

  1. DBeaver (ClickHouse)
  2. ClickHouse Client (ClickHouse)
  3. DataGrip (ClickHouse)



Altinity.Cloud Integrations



Programming Languages


Java

Connect Java to ClickHouse


Maven

<dependency>
   <groupId>com.clickhouse</groupId>
   <artifactId>clickhouse-jdbc</artifactId>
   <version>0.3.2-patch11</version>
</dependency>

Open a connection

Properties prop = new Properties();
prop.setProperty("username", "default");
prop.setProperty("password", "<password>");
Connection conn = DriverManager.getConnection("jdbc:ch:https://<host>:8443?insert_quorum=auto", prop);

Create a table

Statement stmt = conn.createStatement();
stmt.execute("CREATE TABLE IF NOT EXISTS jdbc_test(idx Int8, str String) ENGINE = MergeTree ORDER BY idx");

Write data to a table

try (PreparedStatement ps = conn.prepareStatement(
    "insert into jdbc_test select col1, col2 from input('col1 Int8, col2 String')")) {
    for (int i = 0; i < 10; i++) {
        ps.setInt(1, i);              // col1
        ps.setString(2, "test:" + i); // col2
        // parameters are written into the buffered stream immediately, in binary format
        ps.addBatch();
    }
    // stream everything on hand into ClickHouse
    ps.executeBatch();
}

Read data from a table

ResultSet rs = stmt.executeQuery("select * from jdbc_test");
while (rs.next()) {
    System.out.println(String.format("idx: %s str: %s", rs.getString(1), rs.getString(2)));
}


Go

Connect Go to ClickHouse


Installation

(Create a project in your IDE, open a terminal within that project, then run the command.)

go get github.com/ClickHouse/clickhouse-go/v2

EXAMPLES

1. CLIENT API

Connect (Native Protocol)

conn, err := clickhouse.Open(&clickhouse.Options{
    Addr: []string{"<host>:9440"},
    Auth: clickhouse.Auth{
        Database: "default",
        Username: "default",
        Password: "<password>",
    },
    TLS: &tls.Config{},
})

Connect (HTTP protocol)

conn, err := clickhouse.Open(&clickhouse.Options{
    Addr: []string{"<host>:8443"},
    Auth: clickhouse.Auth{
        Database: "default",
        Username: "default",
        Password: "<password>",
    },
    TLS: &tls.Config{},
    Protocol: clickhouse.HTTP,
})

Create a Table

err := conn.Exec(context.Background(), `
    CREATE TABLE IF NOT EXISTS my_table (Col1 UInt8, Col2 String)
    ENGINE = MergeTree ORDER BY Col1
`)

if err != nil {
    return err
}

Batch Insert

batch, err := conn.PrepareBatch(ctx, "INSERT INTO my_table")
if err != nil {
    return err
}

if err := batch.Append(uint8(42), "ClickHouse"); err != nil {
    return err
}
if err := batch.Send(); err != nil {
    return err
}

Querying Rows

rows, err := conn.Query(ctx, "SELECT Col1, Col2 FROM my_table")
if err != nil {
    return err
}
for rows.Next() {
    var (
        col1 uint8
        col2 string
    )
    if err := rows.Scan(&col1, &col2); err != nil {
        return err
    }
    fmt.Printf("row: col1=%d, col2=%s\n", col1, col2)
}
rows.Close()
return rows.Err()

2. DATABASE/SQL DRIVER

Connect (Native Protocol)

conn, err := sql.Open("clickhouse", "clickhouse://<host>:9440/<database>?username=default&password=<password>&secure=true")

Connect (HTTP protocol)

conn, err := sql.Open("clickhouse", "https://<host>:8443/<database>?username=default&password=<password>")

Create a Table

_, err := conn.ExecContext(context.Background(), `
CREATE TABLE IF NOT EXISTS my_table (Col1 UInt8, Col2 String)
ENGINE = MergeTree ORDER BY Col1
`)

if err != nil {
    return err
}

Batch Insert

scope, err := conn.Begin()
if err != nil {
    return err
}

batch, err := scope.Prepare("INSERT INTO my_table")
if err != nil {
    return err
}

if _, err := batch.Exec(uint8(42), "ClickHouse"); err != nil {
    return err
}
if err := scope.Commit(); err != nil {
    return err
}

Querying Rows

rows, err := conn.QueryContext(ctx, "SELECT Col1, Col2 FROM my_table")
if err != nil {
    return err
}

for rows.Next() {
    var (
        col1 uint8
        col2 string
    )
    
    if err := rows.Scan(&col1, &col2); err != nil {
        return err
    }
    
    fmt.Printf("row: col1=%d, col2=%s\n", col1, col2)
}
rows.Close()
return rows.Err()


Python 3

Connect Python to ClickHouse


Installation

pip install clickhouse-connect

EXAMPLES

Basic query

import clickhouse_connect

client = clickhouse_connect.get_client(host='<host>', port=8443, username='default', password='<password>')
query_result = client.query('SELECT * FROM system.tables')
print(query_result.result_set)

Simple ‘command’ without a result

import clickhouse_connect

client = clickhouse_connect.get_client(host='<host>', port=8443, username='default', password='<password>')
client.command('CREATE TABLE test_table (key UInt16, value String) ENGINE Memory')

Bulk insert of a matrix of rows and columns

data = [[100, 'value1'], [200, 'value2']]
client.insert('test_table', data)
print(client.query('SELECT * FROM test_table').result_set)
->  [(100, 'value1'), (200, 'value2')]
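clickhouse-connect takes row-oriented data: each inner sequence is one row, ordered to match the target columns. If the rows cover only some of the table's columns, they can be named explicitly via the insert method's column_names parameter. A minimal offline sketch; the insert call itself is commented out because it needs a live server:

```python
# One inner list per row, ordered to match the column names below.
data = [[100, 'value1'], [200, 'value2']]
column_names = ['key', 'value']

# Cheap local check that every row matches the declared columns before calling
# client.insert('test_table', data, column_names=column_names) on a live server.
assert all(len(row) == len(column_names) for row in data)
print(len(data), 'rows,', len(column_names), 'columns each')
```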


Data Ingestions


Amazon S3

Connect Amazon S3 to ClickHouse


Reading data from S3

Using the ClickHouse s3 table function, users can query S3 data in place, without first persisting it in ClickHouse.

The following example illustrates how to read 10 rows of the NYC Taxi dataset.

SELECT
    trip_id,
    total_amount,
    pickup_longitude,
    pickup_latitude,
    dropoff_longitude,
    dropoff_latitude,
    pickup_datetime,
    dropoff_datetime,
    trip_distance
FROM
    s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/nyc-taxi/trips_*.gz', 'TabSeparatedWithNames')
LIMIT 10
SETTINGS input_format_try_infer_datetimes = 0;

Inserting Data from S3

To transfer data from S3 to ClickHouse, users can combine the s3 table function with an INSERT ... SELECT statement.

To create an empty hackernews table:

CREATE TABLE hackernews
ENGINE = MergeTree
ORDER BY tuple()
EMPTY AS SELECT * FROM s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/hackernews/hacknernews.csv.gz', 'CSVWithNames');

This creates an empty table using the schema inferred from the data.
We can then insert the first 1 million rows from the remote dataset:

INSERT INTO hackernews SELECT *
FROM url('https://datasets-documentation.s3.eu-west-3.amazonaws.com/hackernews/hacknernews.csv.gz', 'CSVWithNames')
LIMIT 1000000;


Kafka

Connect Kafka to ClickHouse


Options for integrating ClickHouse Cloud with Apache Kafka include:

Kafka Connect
A free, open-source component of Apache Kafka® that works as a centralized data hub for simple data integration between Kafka and other data systems.

Vector
A vendor agnostic data pipeline that reads from Kafka and sends events to ClickHouse.

Confluent Platform
A full-scale data streaming platform that enables you to easily access, store, and manage data as continuous, real-time streams.

  • Confluent is provided as On Prem & Cloud Solutions.
  • Enabled by using the JdbcSinkConnector.

Install Confluent

To configure a local single-node installation of Confluent, refer to the Quick Start for Confluent Platform.

Install ClickHouse-JDBC Driver

Download the latest version of the ClickHouse JDBC driver into the Confluent directory.

wget https://repo1.maven.org/maven2/com/clickhouse/clickhouse-jdbc/0.3.2-patch11/clickhouse-jdbc-0.3.2-patch11.jar
  1. Start the Confluent instance.
    • Prepare the configuration in the UI.
  2. Create a database and a table in ClickHouse.
  3. Create a Kafka topic that the JdbcSinkConnector can pull messages from.
  4. Create a new JdbcSinkConnector with this configuration, providing the endpoint, password, and username properties:
{
    "name": "JdbcSinkConnectorConnector_0",
    "config": {
        "name": "JdbcSinkConnectorConnector_0",
        "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
        "key.converter": "org.apache.kafka.connect.storage.StringConverter",
        "topics": "test_v1",
        "connection.url": "jdbc:clickhouse://<host>:8443/default?ssl=true",
        "connection.user": "default",
        "connection.password": "<password>",
        "dialect.name": "GenericDatabaseDialect",
        "auto.create": "false"
    }
}
  5. Select Launch.
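A malformed connector definition typically only fails once it is submitted, so it can help to sanity-check the JSON locally first; a small sketch using Python's stdlib json module, with the example values kept verbatim:

```python
import json

# Offline sanity check of the JdbcSinkConnector configuration shown above,
# before pasting it into the Confluent UI. <host> and <password> are placeholders.
config_text = '''
{
  "name": "JdbcSinkConnectorConnector_0",
  "config": {
    "name": "JdbcSinkConnectorConnector_0",
    "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
    "key.converter": "org.apache.kafka.connect.storage.StringConverter",
    "topics": "test_v1",
    "connection.url": "jdbc:clickhouse://<host>:8443/default?ssl=true",
    "connection.user": "default",
    "connection.password": "<password>",
    "dialect.name": "GenericDatabaseDialect",
    "auto.create": "false"
  }
}
'''
connector = json.loads(config_text)  # raises ValueError if the JSON is malformed
assert connector["config"]["connector.class"].endswith("JdbcSinkConnector")
print(connector["config"]["topics"])
```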


Airbyte

Connect Airbyte to ClickHouse


Download and run Airbyte

Download and run Airbyte using docker-compose or use Airbyte Cloud.

git clone https://github.com/airbytehq/airbyte.git
cd airbyte
docker-compose up

NOTE: If you are using Airbyte Cloud, make sure to allow Airbyte’s IP addresses to access your ClickHouse service.

Credentials

From your Altinity cluster Connection Details, retrieve the following:

  • host
  • port
  • username
  • password
  • database name (cluster name)

Set ClickHouse as a destination by filling in the above information in the “Set up the destination” form.



dbt

Connect dbt to ClickHouse


Install dbt-core and dbt-clickhouse

pip install dbt-clickhouse

Connection

The full list of connection configuration options is available in the dbt-clickhouse documentation.

Configure a clickhouse_cloud profile in the ~/.dbt/profiles.yml file, providing the user, password, schema, and host properties:

clickhouse_cloud:
    target: dev
    outputs:
        dev:
            type: clickhouse
            schema: <target_schema>
            host: <host>
            port: 8443 # use 9440 for native
            user: default
            password: <password>
            secure: True

Create dbt project

dbt init project_name

Update dbt_project.yml

Inside the project_name directory, update your dbt_project.yml file to specify a profile name for connecting to the ClickHouse server.

profile: 'clickhouse_cloud'

Execute dbt debug

  1. Execute dbt debug with the CLI tool to confirm whether dbt is able to connect to ClickHouse.
  2. Confirm the response includes Connection test: [OK connection ok] indicating a successful connection.


Node.js

Connect Node.js to ClickHouse


Installation

npm i @clickhouse/client

Examples

Create an instance

import { createClient } from '@clickhouse/client'

const client = createClient({
    host: `https://<host>:8443`,
    password: '<password>',
})

Simple ‘command’ without a result:

await client.exec({
    query: `
        CREATE TABLE IF NOT EXISTS clickhouse_js_example_table
        (id UInt64, name String)
        ENGINE = MergeTree
        ORDER BY (id)
    `,
    clickhouse_settings: {
        wait_end_of_query: 1,
    },
})

Bulk insert

await client.insert({
    table: 'clickhouse_js_example_table',
    values: [
        { id: 42, name: 'foo' },
        { id: 42, name: 'bar' },
    ],
    format: 'JSONEachRow',
})
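The JSONEachRow format used here is newline-delimited JSON: one object per row, one row per line. A quick sketch (in Python, for a language-neutral view) of the payload the client effectively sends for the two rows above:

```python
import json

# JSONEachRow: each row becomes a standalone JSON object on its own line.
rows = [
    {"id": 42, "name": "foo"},
    {"id": 42, "name": "bar"},
]
payload = "\n".join(json.dumps(row) for row in rows)
print(payload)
```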

Basic query

const rows = await client.query({
    query: 'SELECT * FROM clickhouse_js_example_table',
    format: 'JSONEachRow',
})

console.log(await rows.json())


Vector

Connect Vector to ClickHouse

Vector is a vendor agnostic data pipeline that can:

  • Read from Kafka
  • Send events to ClickHouse

TOML configuration file

The following example shows a TOML configuration file for a ClickHouse sink in Vector:

[sinks.clickhouse]
type = "clickhouse"
inputs = ["<your_vector_source>"]
endpoint = "https://<clickhouse_host>:8443"
database = "<database_name>"
table = "<table_name>"
auth.strategy = "basic"
auth.user = "username"
auth.password = "password"


C#

Connect C# to ClickHouse


Installation

dotnet add package ClickHouse.Client

EXAMPLES

Basic query

using System;
using System.Threading.Tasks;
using ClickHouse.Client.ADO;
using ClickHouse.Client.Utility;

namespace Clickhouse.Test;

class Program
{
    static async Task Main(string[] args)
    {
        using var connection = new ClickHouseConnection("Host=<host>;Protocol=https;Port=8443;Username=default;Password=<password>");
        // ExecuteScalarAsync is an async extension method that creates a command and executes it
        var version = await connection.ExecuteScalarAsync("SELECT version()");
        Console.WriteLine(version);
    }
}

Bulk insert

using var connection = new ClickHouseConnection("Host=<host>;Protocol=https;Port=8443;Username=default;Password=<password>");
using var bulkCopyInterface = new ClickHouseBulkCopy(connection)
{
    DestinationTableName = "<database>.<table>",
    BatchSize = 100000
};

// Example data to insert
var values = Enumerable.Range(0, 1000000).Select(i => new object[] { (long)i, "value" + i.ToString() });
await bulkCopyInterface.WriteToServerAsync(values);
Console.WriteLine(bulkCopyInterface.RowsWritten);


PostgreSQL

Connect PostgreSQL to ClickHouse


Note: If you need to authorize ClickHouse Cloud to access your PostgreSQL instance, see the full list of ClickHouse Cloud egress IP addresses.

Query PostgreSQL tables

To query remote PostgreSQL tables, use the postgresql table function:

SELECT * FROM postgresql('<postgres_host>:<port>', '<database>', '<table>', '<user>', '<password>');

Create Table

To create a table using the PostgreSQL table engine in your ClickHouse Cloud service without specifying a schema, use the CREATE TABLE AS syntax. When ClickHouse creates the table locally, PostgreSQL types are mapped to equivalent ClickHouse types.

CREATE TABLE mycountries AS postgresql('<postgres_host>:5432', '<database>', 'countries', 'postgres', '<password>');

Note: You can use the setting external_table_functions_use_nulls = 0 to ensure Null values are represented as their default values (instead of Null). If set to 1 (the default), ClickHouse will create Nullable variants of each column.

Create Table with a custom Schema

To create a table with the PostgreSQL table engine in your ClickHouse service, specify a custom schema:

CREATE TABLE default.postgresql_table
(
    `float_nullable` Nullable(Float32),
    `str` String,
    `int_id` Int32
)
ENGINE = PostgreSQL('<postgres_host>:<port>', '<database>', '<table>', '<user>', '<password>');


MySQL

Connect MySQL to ClickHouse


NOTE: Your ClickHouse database must be authorized to access your MySQL instance.

The full list of ClickHouse Cloud egress IP addresses is available here.

Query Remote MySQL Tables

Use the MySQL table function to query remote MySQL tables:

SELECT * FROM mysql('<mysql_host>:<port>', '<database>', '<table>', '<user>', '<password>');

Create Table

Create a table using the MySQL table engine in your ClickHouse Cloud service without specifying a schema by using the CREATE TABLE AS syntax.

When ClickHouse creates the table locally, types in MySQL will be mapped to equivalent ClickHouse types.

CREATE TABLE mycountries AS mysql('<mysql_host>:<port>', '<database>', '<table>', '<user>', '<password>');

NOTE: You can use the setting external_table_functions_use_nulls = 0 to ensure Null values are represented as their default values (instead of Null). If set to 1 (the default), ClickHouse will create Nullable variants of each column.

Custom Schema

Create a MySQL table with a custom schema

To create a table with the MySQL table engine in your ClickHouse service, specify a custom schema:

CREATE TABLE default.mysql_table
(
    `float_nullable` Nullable(Float32),
    `str` String,
    `int_id` Int32
)
ENGINE = mysql('<mysql_host>:<port>', '<database>', '<table>', '<user>', '<password>');


Data Visualization


Grafana

Connecting Grafana to ClickHouse


With Grafana you can create, explore, and share all of your data through dashboards. Grafana requires a plugin to connect to ClickHouse, which is easily installed within its UI.

Connection Details

In the Altinity ClickHouse Manager, select your cluster and select Connection Details.

  • Host
  • Port
    9440 when using TLS
    9000 when not using TLS
  • Database name
  • User name
  • Password

Grafana Plugin

  1. Install the Grafana Plugin for ClickHouse


Superset

Connect Superset to ClickHouse


Apache Superset is an open-source data exploration and visualization platform written in Python. Superset connects to ClickHouse using a Python driver provided by ClickHouse. Let’s see how it works…

Goal

In this guide you will build a dashboard in Superset with data from a ClickHouse database. The dashboard will look like this:

New Dashboard
ADD SOME DATA
If you do not have a dataset to work with you can add one of the examples. This guide uses the UK Price Paid dataset, so you might choose that one. There are several others to look at in the same documentation category.

  1. Gather your connection details

To connect to ClickHouse with HTTP(S) you need this information:

The HOST and PORT: typically, the port is 8443 when using TLS or 8123 when not using TLS.

The DATABASE NAME: out of the box there is a database named default; use the name of the database that you want to connect to.

The USERNAME and PASSWORD: out of the box, the username is default. Use the username appropriate for your use case.

The details for your ClickHouse Cloud service are available in the ClickHouse Cloud console. Select the service that you will connect to and click Connect:

ClickHouse Cloud service connect button

Choose HTTPS, and the details are available in an example curl command.

ClickHouse Cloud HTTPS connection details

If you are using self-managed ClickHouse, the connection details are set by your ClickHouse administrator.

  2. Install the Driver

Superset uses the clickhouse-connect driver to connect to ClickHouse. The details of clickhouse-connect are at https://pypi.org/project/clickhouse-connect/ and it can be installed with the following command:

pip install clickhouse-connect

Start (or restart) Superset.

  3. Connect Superset to ClickHouse

Within Superset, select Data from the top menu and then Databases from the drop-down menu. Add a new database by clicking the + Database button:

Add a new database
In the first step, select ClickHouse Connect as the type of database:

Select ClickHouse
In the second step:

Set SSL on or off.

Enter the connection information that you collected earlier.

Specify the DISPLAY NAME: this can be any name you prefer. If you will be connecting to multiple ClickHouse databases then make the name more descriptive.

Test the connection
Click the CONNECT and then FINISH buttons to complete the setup wizard, and you should see your database in the list of databases.
  4. Add a Dataset

To interact with your ClickHouse data with Superset, you need to define a dataset. From the top menu in Superset, select Data, then Datasets from the drop-down menu.

Click the button for adding a dataset. Select your new database as the datasource and you should see the tables defined in your database:

New dataset
Click the ADD button at the bottom of the dialog window and your table appears in the list of datasets. You are ready to build a dashboard and analyze your ClickHouse data!
  5. Creating charts and a dashboard in Superset

If you are familiar with Superset, then you will feel right at home with this next section. If you are new to Superset, well…it’s like a lot of the other cool visualization tools out there in the world - it doesn’t take long to get started, but the details and nuances get learned over time as you use the tool.

You start with a dashboard. From the top menu in Superset, select Dashboards. Click the button in the upper-right to add a new dashboard. The following dashboard is named UK property prices:

New dashboard
To create a new chart, select Charts from the top menu and click the button to add a new chart. You will be shown a lot of options. The following example shows a Pie Chart using the uk_price_paid dataset from the CHOOSE A DATASET drop-down:

New chart
Superset pie charts need a Dimension and a Metric; the rest of the settings are optional. You can pick your own fields for the dimension and metric; this example uses the ClickHouse field district as the dimension and AVG(price) as the metric.

The SUM metric
If you prefer donut charts over pie, then you can set that and other options under CUSTOMIZE:

Add Chart to Dashboard
Click the SAVE button to save the chart, select UK property prices under the ADD TO DASHBOARD drop-down, then click SAVE & GO TO DASHBOARD to save the chart and add it to the dashboard:

Add Chart to Dashboard
That’s it. Building dashboards in Superset based on data in ClickHouse opens up a whole world of blazing fast data analytics!

New Dashboard
Related Content

Blog: Visualizing Data with ClickHouse - Part 2 - Superset



Deepnote

Connect ClickHouse to Deepnote


Deepnote is a collaborative data notebook built for teams to discover and share insights. In addition to being Jupyter-compatible, it works in the cloud and provides you with one central place to collaborate and work on data science projects efficiently.
This guide assumes you already have a Deepnote account and that you have a running ClickHouse instance.

Interactive example

If you would like to explore an interactive example of querying ClickHouse from Deepnote data notebooks, click the button below to launch a template project connected to the ClickHouse playground.

Connect to ClickHouse

Within Deepnote, select the “Integrations” overview and click on the ClickHouse tile.
ClickHouse integration tile
Provide the connection details for your ClickHouse instance:

To connect to ClickHouse with HTTP(S) you need this information:

The HOST and PORT: typically, the port is 8443 when using TLS or 8123 when not using TLS.

The DATABASE NAME: out of the box there is a database named default; use the name of the database that you want to connect to.

The USERNAME and PASSWORD: out of the box, the username is default. Use the username appropriate for your use case.

The details for your ClickHouse Cloud service are available in the ClickHouse Cloud console. Select the service that you will connect to and click Connect:

ClickHouse Cloud service connect button

Choose HTTPS, and the details are available in an example curl command.

ClickHouse Cloud HTTPS connection details

If you are using self-managed ClickHouse, the connection details are set by your ClickHouse administrator.

ClickHouse details dialog
NOTE: If your connection to ClickHouse is protected with an IP Access List, you might need to allow Deepnote’s IP addresses. Read more about it in Deepnote’s docs.

Congratulations! You have now integrated ClickHouse into Deepnote.

Using the ClickHouse integration

Start by connecting to the ClickHouse integration on the right of your notebook.

ClickHouse details dialog
Now create a new ClickHouse query block and query your database. The query results will be saved as a dataframe and stored in the variable specified in the SQL block.

You can also convert any existing SQL block to a ClickHouse block.



Hex

Hex is a modern, collaborative platform for working with notebooks and data apps, supporting SQL, Python, R, and no-code workflows.


Hex website

Within Hex, select the “Data Sources” menu, click Add a Data Connection, then select the ClickHouse tile.

Provide the connection details for your ClickHouse Service:

  • Hostname
  • Port
  • Username
  • Password

NOTE: Make sure to allow HEX’s IP addresses to access your ClickHouse Service
HEX IP Addresses



Metabase

Metabase is an easy-to-use, open source UI tool for asking questions about your data.


Installation

Install Metabase using either of the following:

  • Docker container
  • Metabase JAR

Run as a Docker container

mkdir -p mb/plugins && cd mb
curl -L -o plugins/ch.jar https://github.com/clickhouse/metabase-clickhouse-driver/releases/download/0.9.0/clickhouse.metabase-driver.jar
docker run -d -p 3000:3000 \
--mount type=bind,source=$PWD/plugins/ch.jar,destination=/plugins/clickhouse.jar \
--name metabase metabase/metabase:v0.44.6

Run using Metabase JAR

  1. Download a Metabase binary release (jar file) from the Metabase distribution page.
    https://metabase.com/start/jar.html
  2. Download the ClickHouse driver jar from the Releases repository page.
  3. Create a directory and copy the metabase.jar to it.
    In that directory create a sub-directory called plugins and copy the ClickHouse driver jar to the plugins directory.
  4. From the directory containing metabase.jar, run:
    MB_PLUGINS_DIR=./plugins java -jar metabase.jar

For example (using Metabase v0.44.6 and ClickHouse driver 0.9.0):

mkdir -p mb/plugins && cd mb
curl -o metabase.jar https://downloads.metabase.com/v0.44.6/metabase.jar
curl -L -o plugins/ch.jar https://github.com/clickhouse/metabase-clickhouse-driver/releases/download/0.9.0/clickhouse.metabase-driver.jar
MB_PLUGINS_DIR=./plugins java -jar metabase.jar

Connect

Start (or restart) Metabase and access it at http://localhost:3000/

Click on “Add a database” and select “ClickHouse”.

From your Altinity.Cloud cluster, enter the Connection Details, then select Save.

  • Make sure to check the “Use secure connection” option.


Tableau

Tableau is a popular visual analytics platform for business intelligence.


How to connect

Tableau Desktop uses JDBC over HTTP(S) to connect to ClickHouse; you will need to:

  1. Gather your ClickHouse Cloud Service connection details.

    • host
    • HTTP port number
    • username
    • password
  2. Download the latest version of the ClickHouse TACO connector.

  3. Store the TACO connector in the following folder (based on your OS):

    • MacOS
      ~/Documents/My Tableau Repository/Connectors

    • Windows
      C:\Users\[Windows User]\Documents\My Tableau Repository\Connectors

  4. Follow clickhouse-tableau-connector-jdbc instructions to download the compatible version of ClickHouse JDBC driver.

  5. Download the JAR file:

    • clickhouse-jdbc-x.x.x-shaded.jar
  6. Store the JDBC driver in the following folder, based on your OS (if the folder doesn’t exist, create it):

    • MacOS
      ~/Library/Tableau/Drivers

    • Windows
      C:\Program Files\Tableau\Drivers

  7. Configure a ClickHouse data source in Tableau and start building data visualizations.



SQL Clients


DBeaver

DBeaver PRO is a comprehensive database management and administration tool with an easy connection to ClickHouse Cloud.


DBeaver provides a wizard that guides you through the steps to connect to the database.
To open the wizard, click the plug icon in the upper left corner of the application window or go to Database -> New Database Connection.

  1. Download DBeaver and open it.
    • https://dbeaver.io
    • https://dbeaver.com (DBeaver PRO)

    The database selection window will open, and you will see a list from which you can choose the ClickHouse driver.
    Find it in the Analytical section.
  2. Double-click on the driver icon, and you will see the window with the connection settings.
    On the main tab, set all primary connection settings.

  3. The required settings for ClickHouse Cloud include:

    • Host
    • Port
    • Database name
    • Username
    • Password
  4. Use the SSL protocol to create a connection to the ClickHouse Cloud database.

    • Open the SSL tab and tick the Use SSL box.
    • Load certificates and a private key from your file system.
      If necessary, you can view and edit driver properties in the corresponding tab.
  5. To start editing, click on the row in the Value column.

  6. To check that all settings are entered correctly, click Test Connection on the main tab.
    You will see a dialog with all information about the driver.

  7. After completing all the settings, click the Finish button.

The connection appears in the tree of connections in the Database Navigator, and DBeaver connects to the database.



ClickHouse Client

ClickHouse Client is the native command-line client for ClickHouse.


Download and install clickhouse-client.

Run the clickhouse-client command with your credentials.

  • See the Altinity.Cloud Connection Details for your cluster.
clickhouse-client --host <hostname> \
                  --secure \
                  --port 9440 \
                  --user <username> \
                  --password <password>


DataGrip

JetBrains DataGrip

Connect the JetBrains DataGrip IDE to Altinity.Cloud clusters.

  1. Create a new blank project with File > New > Project > Name
  2. Select Add (’+’ icon) > Datasource > ClickHouse.
  3. Copy the JDBC Connection Details for your cluster and paste it into the URL:
    jdbc:clickhouse://example-cluster.your-environment.altinity.cloud:8443?ssl=true
  4. Add the host name.
    example-cluster.your-environment.altinity.cloud
  5. Add the Port 8443.
  6. Add the User as admin.
  7. Add the Password for the cluster.
  8. Click Test Connection.
    JetBrains DataGrip Test Connection
  9. In the Playground window, enter the SQL command then Execute.
    SELECT * FROM events;
  10. View the result in the Output window.
    JetBrains DataGrip Test Query
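The JDBC URL from step 3 already encodes the host, port, and SSL flag entered in the later steps; stripping the jdbc: prefix leaves an ordinary URL that a standard parser can split. A quick Python sketch using the example URL above:

```python
from urllib.parse import urlsplit, parse_qs

# The example JDBC URL from step 3; everything after "jdbc:" is a normal URL.
jdbc_url = "jdbc:clickhouse://example-cluster.your-environment.altinity.cloud:8443?ssl=true"
parts = urlsplit(jdbc_url.removeprefix("jdbc:"))
params = parse_qs(parts.query)
print(parts.hostname, parts.port, params["ssl"][0])
```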