
Altinity Documentation

Your go-to technical source for all things ClickHouse.
Welcome to the Altinity documentation site. Here we have created technical reference documents, quick start guides, best practices, and everything you need to be productive with ClickHouse and Altinity.Cloud.

1 - Altinity.Cloud

Manuals, quick start guides, code samples and tutorials on how to use Altinity.Cloud to launch and get the most out of your ClickHouse clusters.

Altinity.Cloud provides the best experience in managing ClickHouse. Create new clusters with the version of ClickHouse you want, set your node configurations, and get right to work.

1.1 - Altinity.Cloud 101

What is Altinity.Cloud?

Welcome to Altinity.Cloud. In this guide, we will be answering a few simple questions:

  • What is Altinity.Cloud?
  • Why should I use it?
  • How does it work?

What is Altinity.Cloud?

Altinity.Cloud is a fully managed ClickHouse service. It is the easiest way to set up a ClickHouse cluster with the configuration of shards and replicas you need, running the version of ClickHouse or Altinity Stable Build for ClickHouse you want. From one place you can monitor performance, run queries, upload data from S3 or other cloud stores, and perform other essential operations.

For more details on Altinity.Cloud's capabilities, see the Administrator Guide. For a crash course on how to create your own ClickHouse clusters with Altinity.Cloud, we have the Altinity.Cloud Quick Start Guide.

What Can I Do with Altinity.Cloud?

Altinity.Cloud lets you create, manage, and monitor ClickHouse clusters with a few simple clicks. Here’s a brief look at the user interface:

Clusters View
  • A: Cluster Creation: Clusters can be created from scratch with Launch Cluster.
  • B: Clusters: Each cluster associated with your Altinity.Cloud account is listed in either tile format, or as a short list. They’ll display a short summary of their health and performance. By selecting a cluster, you can view the full details.
  • C: User and Environment Management:
    • Change to another environment.
    • Manage environments and zookeepers.
    • Update account settings.

Clusters can be spun up and set with the number of replicas and shards, the specific version of ClickHouse that you want to run on them, and what kind of virtual machines to power the nodes.

When your clusters are running you can connect to them with the ClickHouse client, or your favorite applications like Grafana, Kafka, Tableau, and more. See the Altinity.Cloud connectivity guide for more details.
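
For example, a minimal Python sketch of a connectivity check (using the clickhouse-driver package covered later in this guide; the host name and password are placeholders for your own Connection Details):

# A minimal connectivity check, assuming the clickhouse-driver package is installed
# (pip install clickhouse-driver). Replace the host and password with the values
# from your cluster's Connection Details.
from clickhouse_driver import Client

client = Client('example-cluster.your-domain.altinity.cloud',  # placeholder host
                user='admin',
                password='yourpasswordhere',                   # placeholder password
                port=9440,
                secure=True)

print(client.execute('SELECT version()'))  # prints the ClickHouse server version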

Monitoring

Cluster performance can be monitored in real time through the Cluster Monitor system.

Cluster Monitoring View

Some of the metrics displayed here include:

  • DNS and Distributed Connection Errors: Displays the rate of any connection issues.
  • Select Queries: The number of select queries submitted to the cluster.
  • Zookeeper Transactions: The communications between the zookeeper nodes.
  • ClickHouse Data Size on Disk: The total amount of data the ClickHouse database is using.
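
Similar figures can also be pulled directly from ClickHouse system tables. A minimal sketch, assuming the clickhouse-driver package from the Python client section later in this guide and placeholder connection details (the counters shown are standard ClickHouse system.events and system.parts data, not the exact series plotted by the monitoring graphs):

# Reading monitoring-style figures straight from ClickHouse system tables.
# Host and password are placeholders for your own Connection Details.
from clickhouse_driver import Client

client = Client('example-cluster.your-domain.altinity.cloud',
                user='admin', password='yourpasswordhere',
                port=9440, secure=True)

# Cumulative number of SELECT queries served since the server started
print(client.execute("SELECT value FROM system.events WHERE event = 'SelectQuery'"))

# Total ClickHouse data size on disk, across all active parts
print(client.execute(
    "SELECT formatReadableSize(sum(bytes_on_disk)) FROM system.parts WHERE active"))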

How is Altinity.Cloud organized?

Security Tiers

Altinity.Cloud starts at the Organization level - that’s your company. When you and members of your team log into Altinity.Cloud, you’ll start here. Depending on their access level, users can then access the different systems within the organization.

The next level down is the Environment. Each organization has at least one Environment, and these are used to allow users access to one or more Clusters.

Clusters consist of one or more Nodes - individual containers that run the ClickHouse databases. These nodes are grouped into shards, which are sets of nodes that work together to improve performance and reliability. Shards can then be replicated, creating copies of their data on additional nodes. If one replica goes down, the other replicas keep running and sync their data back when the failed replica is restored or a new replica is added.

To recap in reverse order:

  • Nodes are individual virtual machines or containers that run ClickHouse.
  • Shards are groups of nodes that work together to improve performance and share data.
  • Replicas are copies of shards that mirror data, so if one replica goes down the others can keep serving.
  • Clusters are sets of replicas that work together to replicate data and improve performance.
  • Environments group clusters into a set to control access and resources.
  • Organizations have one or more environments that service your company.
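
To see how these pieces map onto a running cluster, you can ask ClickHouse for its own view of the topology. A minimal sketch, assuming the clickhouse-driver package and placeholder connection details:

# Listing shards and replicas as ClickHouse itself sees them, via system.clusters.
# Host and password are placeholders for your own Connection Details.
from clickhouse_driver import Client

client = Client('example-cluster.your-domain.altinity.cloud',
                user='admin', password='yourpasswordhere',
                port=9440, secure=True)

rows = client.execute(
    "SELECT cluster, shard_num, replica_num, host_name "
    "FROM system.clusters ORDER BY cluster, shard_num, replica_num")
for cluster, shard, replica, host in rows:
    print(f"{cluster}: shard {shard}, replica {replica} -> {host}")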

Altinity.Cloud Access

Altinity.Cloud keeps your users organized in the following roles:

  • orgadmin: These users can create environments and clusters, and assign users in their organization to them.
  • envadmin: These users have control over the environments they are assigned to by the orgadmin. They can create and manage clusters within those environments.
  • envuser: These users can access only the clusters they are specifically assigned to within specific environments.

More details are available in the Account Administration guide.

Where can I find out more?

Altinity provides the following resources to our customers and the Open Source community:

1.2 - Quick Start Guide

The minimal steps to get Altinity.Cloud running with your first cluster.

Welcome to Altinity.Cloud! Altinity.Cloud is the fastest, easiest way to set up, administer and use ClickHouse. Your ClickHouse is fully managed so you can focus on your work.

If this is your first time using Altinity.Cloud, this quick start guide will give you the minimum steps to become familiar with the system. When you’re ready to dig deeper and use the full power of ClickHouse in your Altinity.Cloud environment, check out our Administrator and Developer Guides for in depth knowledge and best practices.

1.2.1 - Altinity Cloud Manager Introduction

An overview of using the Altinity Cloud Manager (ACM) to manage your ClickHouse clusters with Altinity.Cloud.

26 January 2023 · Read time 3 min

Overview - Altinity Cloud Manager

This section introduces the Altinity Cloud Manager for managing ClickHouse cluster environments.
The Altinity Cloud Manager (ACM) is where your existing clusters are shown.
https://acm.altinity.cloud/clusters/

Points of interest marked by the red pins include:

  • The your.environment name is what you signed up with; otherwise, a cluster name appears here in this menu.
  • The John Doe is an example of your logged-in name.
  • The left pane collapses or expands the text labels beside the icons.
  • The Launch Cluster text tag refers to the call-to-action LAUNCH CLUSTER button.

Figure 1 – The Altinity Cloud Manager (ACM) home screen with no clusters showing.


Looking at the Demo

To switch to the demo environment:

  1. Use the environment menu to switch to demo to see the clusters.

Figure 2 – The environment menu, where the demo name is selected.


The demo environment has several example clusters:

  • posthog
  • clickhouse101
  • meetup
  • github

Panel View

To see the detail for the cluster named clickhouse101:

  1. In the cluster named clickhouse101, hover over the panel (the outline turns blue), then click it.

Figure 3 – The demo environment showing several cluster panels.


List View

The list view provides a spreadsheet table view of your clusters.

  1. Select the List View icon.

Figure 4 – The list view of all the demo clusters.

Cluster Dashboard view

Selecting a cluster name from a panel view or list view displays the settings that were set by the Cluster Launch Wizard.


Figure 5 – Detailed settings view of the cluster clickhouse101.

Explore View

While viewing your cluster, selecting the Explore button displays the Query screen. This is where SQL queries are run on your cluster. Note the additional tabs for Schema, Workload and DBA Tools.


Figure 6 – The cluster Query tab screen. This is where SQL queries are run.

Grafana Monitoring View

From your cluster dashboard view, selecting the Monitoring View in Grafana link displays the graphs shown in the following screenshot.


Figure 7 – The Grafana graphs for the K8S Namespace demo, for the cluster named clickhouse101.

A Wizard-Created Cluster

When you create a new cluster using the LAUNCH CLUSTER Wizard, the example-cluster appears in your Altinity Cloud Manager (ACM).

Points of interest include:

example-environment - Menu name changes to the selected environment (aka namespace or domain).
2/2 nodes online - Green to indicate running status. Action > Stop to take offline.
0/2 nodes online - Red shows nodes are not running. Action > Resume to start.
stopped - Cluster / node is not running. Action > Resume to start.
6/6 checks passed - Shows green when all 6 checks have completed successfully.
0/6 checks passed - Shows red until all checks have passed. Action > Resume to start.
Shield green - TLS (Transport Layer Security) is enabled.
Actions - Mouse hover shows this light blue shade.
Blue outline - In cluster panel view, moving your mouse cursor over a cluster changes the grey outline to blue; click to view.
Panel view icon - View clusters in panel format.
List view icon - View cluster in a list format.
Address of all clusters - https://acm.altinity.cloud/clusters/
Address of a specific cluster - https://acm.altinity.cloud/cluster/2887


Figure 8 – Points of interest from the newly created example-cluster.

1.2.2 - Account Creation and Login

How to set up your Altinity.Cloud account and log in to the service.

26 January 2023 · Read time 2 min

Free Trial Account Creation

To start your Altinity.Cloud journey, the first thing you need is an account.
New users can sign up for a 14-day Trial account from the following link:

For the free trial of the Altinity.Cloud Anywhere product, which is an on-premises version that you can run in your own Kubernetes environment, see:

Requested information as shown in the following screenshot includes:

  • First Name and Last Name
  • Email (if you provide a Google business email account, not a personal @gmail.com address, you can log in with Auth0)
  • Company name
  • Country
  • Cloud Provider that you use (Examples: Amazon, Google)
  • Your managed environment name (eg: my-staging)


Email Validation

When you SUBMIT the Free 14-day Trial form, you will immediately receive an email from Altinity with a validation link that you click to confirm.

First Time Login

Once you validate your email, your request for the Altinity.Cloud free trial will be processed for approval.
The Altinity.Cloud (support@altinity.com) team will provide the following login information.

The following example screenshot is an email that you will receive from Altinity.Cloud notifying you that your 14-day Free Trial to Altinity.Cloud has been approved.

  1. Your-altinity is an example of the environment name you supplied to Altinity.
    (NOTE: If your name choice already exists in our system, we will rename it.)

  2. The URL is customized to you and is used once only for the initial login.

  3. Here refers to a Calendar link https://calendly.com/trial-support-team/altinity-cloud-trial-check

  4. Quick Start Guide https://docs.altinity.com/altinitycloud/quickstartguide/

  5. Series of videos https://www.youtube.com/hashtag/altinitycloud

  6. Support email is support@altinity.com

Creating a new password

Clicking the link (item 2) shows the Onboarding window that
prompts you to create a new password.

Logging into the Altinity Cloud Manager

Fill in your Altinity Cloud Manager credentials and select SIGN IN:

  1. ACM Login
    https://acm.altinity.cloud/login
  2. Login Email
    (example: johnDoe@outlook.com)
  3. Password
    (example: S0me@c0pl3x_password - •••••••••••••••••••• dots show as you type)

Auth0 login

Auth0 is used to log in if you have a Google Gmail account that Altinity.Cloud supports for trusted authentication.
Note that in order for this to work, you must have used your Gmail address to register for the Altinity.Cloud Free Trial.

To use Auth0 to login:

  1. Select the Auth0 link from the bottom of the ACM login window.
  2. Select Continue with Google.
  3. If you are not already logged into a Google domain, the usual Google login screen appears. Select the same Google domain email address you
    registered with Altinity.Cloud to complete the login.
    NOTE: This does not include personal @gmail.com addresses.

Privacy Policy

See the Privacy Policy at:

1.2.3 - How to Run a SQL Query

This is an introduction to the cluster Explore > Query feature. You will learn how to select a cluster from the included demo and run a SQL query and view the results.

26 January 2023 · Read time 3 min

Overview - Using the cluster Explore > Query

This example shows how to navigate from the cluster home page and how to use Explore > Query to run a query against a ClickHouse database on the included demo cluster called github.

The following screenshot shows the step-by-step sequence of events, starting from your cluster home page (A), then selecting the demo github cluster against which you will run the example SQL query in the Explore > Query screen (B).

To run a SQL query from the demo cluster named github:

  1. Enter the URL into a browser:
    https://acm.altinity.cloud/clusters/
  2. From the domain menu, select demo.
  3. Within the cluster called github, select EXPLORE.
    Note that the URL changes to:
    https://acm.altinity.cloud/cluster/337/explore (see Figure 2 B-1) .
    Note that the menu title shows that CLUSTER:GITHUB is selected (see Figure 2 B-3).
  4. Under the Query History, paste the text of the example SQL query.
  5. Select EXECUTE to run the query.
  6. Optionally, select and copy the results of the query.

Figure 1 – The home page showing all clusters.


Figure 2 – The Explore view of the demo > github cluster, showing the Query tab.

SQL Query script

The following SQL script generates a 3-column report from a GitHub events database, covering 2019 to 2023, that collates the number of pull requests (PRs) made by unique contributors.

SELECT toStartOfYear(merged_at) m, sum(merged) prs, uniq(creator_user_login) contributors
  FROM github_events
 WHERE merged_at>='2019-01-01'
   AND event_type = 'PullRequestEvent'
   AND repo_name in ('ClickHouse/ClickHouse', 'yandex/ClickHouse')
 GROUP BY m
 ORDER BY m

Code snippet 1 – The input, an example SQL query script to get 4 years of unique pull request contributor totals.


Query explanation

The SQL query visualization shows the Input (green) data sources and the Output (red), 3 columns: m, prs, and contributors.


Figure 3 – The example SQL query script and visualization.

First select the start of the year for the merged_at column (line 1), the sum of the merged column (aliased prs), and the unique values in the creator_user_login column from the github_events table (line 2).

Include only the rows where the merged_at column is on or after January 1, 2019 (line 3) and the event_type column is ‘PullRequestEvent’ (line 4) and the repo_name column is either ‘ClickHouse/ClickHouse’ or ‘yandex/ClickHouse’ (line 5).

Group the results (line 6) by the start of the year for the merged_at column (the alias m from line 1).

Lastly, order the results (line 7) by the start of the year for the merged_at column.

SQL Query results

The results of the query appear below the EXECUTE button, listing 3 columns titled m (year), PRs and Contributors.

┌──────────m─┬──prs─┬─contributors─┐
│ 2019-01-01 │ 2278 │          232 │
│ 2020-01-01 │ 6000 │          300 │
│ 2021-01-01 │ 7766 │          366 │
│ 2022-01-01 │ 5639 │          392 │
│ 2023-01-01 │  157 │           41 │
└────────────┴──────┴──────────────┘

Code snippet 2 – The output, 3 columns of year (m), PRs and contributors showing 4 years of unique pull request contributor totals.

1.2.4 - Cluster Launch Wizard

An introduction to the Cluster Wizard, used to create a new ClickHouse cluster. An overview of the settings is provided, with links to further details.

26 January 2023 · Read time 5 min

Overview - Cluster Launch Wizard (summary)

This section covers the Altinity Cloud Manager ClickHouse cluster creation feature called the Cluster Launch Wizard. This getting started guide walks you through the steps to create a new cluster from scratch.

  • The Detailed reference link takes you to a section on the Wizard Settings Detail page that explains each setting.
  • Where indicated, additional settings and resources can be obtained upon request from Altinity Technical Support at:
    https://altinity.com/contact/

The following wizard screens are covered on this page:

  1. ClickHouse Setup | Detailed reference
  2. Resources Configuration | Detailed reference
  3. High Availability Configuration | Detailed reference
  4. Connection Configuration | Detailed reference
  5. Uptime Schedule | Detailed reference
  6. Review & Launch | Detailed reference

The following illustration shows a summary of the various screens available in the Cluster Wizard.

Figure 1 – Each of the Cluster Launch Wizard screens and the available settings.



Launch Cluster

The Altinity Cloud Manager (ACM) is where your existing clusters are shown.
https://acm.altinity.cloud/clusters/
If this is the first time you have seen this page, or you have deleted all of your clusters, this page will be blank and show the text string:
"You don’t have any clusters running at this moment."

Points of interest marked by the red pins include:

  • The your.environment name is what you signed up with. Note that a read-only demo environment is included.
  • The John Doe is an example of your logged-in name.
  • The left pane collapses or expands the text labels beside the icons.
  • The Launch Cluster text tag refers to the call-to-action LAUNCH CLUSTER button.

To begin the creation of a new ClickHouse cluster:

  1. Select the LAUNCH CLUSTER button.

Figure 2 – The Altinity Cloud Manager (ACM) home page, selecting the LAUNCH CLUSTER button.


  1. Starting with the ClickHouse Setup screen, fill in the information required by each wizard screen, clicking NEXT to navigate.

Figure 3 – Each of the 6 Cluster Launch Wizard screens and the available settings.


1. ClickHouse Setup

After selecting the LAUNCH CLUSTER button, the first wizard setup screen, ClickHouse Setup, appears.

Enter the following then click NEXT:


Figure 4 – The wizard screen 1 of 6 ClickHouse Setup.


2. Resources Configuration

The second screen is Resources Configuration, where you choose the CPU and storage settings.
If you need more resources than what is displayed, contact Altinity Support.

Enter the following then click NEXT:


Figure 5 – The wizard screen 2 of 6 Resources Configuration.


3. High Availability Configuration

This screen covers server redundancy and failover.

Enter the following then click NEXT:


Figure 6 – The wizard screen 3 of 6 High Availability Configuration.


4. Connection Configuration

This screen covers communication details such as port settings and endpoint information.

Enter the following then click NEXT:


Figure 7 – The wizard screen 4 of 6 Connection Configuration.


5. Uptime Schedule

This lets you choose the type of schedule for when the cluster is allowed to run.

Set the Uptime Schedule so that the server never turns off, then click NEXT:


Figure 8 – The wizard screen 5 of 6 Uptime Schedule.


6. Review & Launch

There is nothing to add or change on this review screen.

This is the last chance to use the BACK button and change previously entered settings, then return to this screen.
If there are no changes, select the LAUNCH button to save the settings and start the cluster provisioning process.

The following information is presented:


Figure 9 – The last wizard screen 6 of 6 Review & Launch.


Cluster view after Wizard finishes

The example-cluster appears in your Altinity Cloud Manager (ACM).
Any new cluster will appear as another panel or another row in a table listing.

Points of interest include:

example-environment - Menu name changes to the selected environment (aka namespace or domain).
2/2 nodes online - Green to indicate running status. Action > Stop to take offline.
0/2 nodes online - Red shows nodes are not running. Action > Resume to start.
stopped - Cluster / node is not running. Action > Resume to start.
6/6 checks passed - Shows green when all 6 checks have completed successfully.
0/6 checks passed - Shows red until all checks have passed. Action > Resume to start.
Shield green - TLS (Transport Layer Security) is enabled.
Actions - Mouse hover shows this light blue shade.
Blue outline - In cluster panel view, moving your mouse cursor over a cluster changes the grey outline to blue; click to view.
Panel view icon - View clusters in panel format.
List view icon - View cluster in a list format.
Address of all clusters - https://acm.altinity.cloud/clusters/
Address of a specific cluster - https://acm.altinity.cloud/cluster/2887


Figure 10 – Points of interest from the newly created example-cluster.

1.2.5 - Wizard Settings Detail

Details for the various Cluster wizard settings.

26 January 2023 · Read time 11 min

Overview

Purpose

This section provides a detailed discussion of the settings that expands on the overview of the Cluster Launch Wizard.

See Also:

The following diagram is a snapshot view of the 6 wizard screens showing the settings that are discussed on this page:

Figure 1 – Summary list of settings in each of the Cluster Wizard screens.




Create a new ClickHouse cluster

  1. From the Altinity.Cloud Manager, create a new ClickHouse cluster by selecting the LAUNCH CLUSTER button.



ClickHouse Setup

These are ClickHouse-related settings. (Screen 1 of 6)
Name | ClickHouse Version | ClickHouse User Name | ClickHouse User Password

  1. Fill in the 4 ClickHouse Setup fields.
  2. Select NEXT to advance to the next cluster wizard screen Resources Configuration.
  3. Or, select CANCEL to close the wizard without saving.

To see the full-sized ClickHouse Setup screen, see:

ClickHouse Setup

Cluster Launch Wizard ❯ ClickHouse Setup


Name

This is the DNS-compliant name of the cluster.

  • Example: example-cluster

Figure 2 – The Name of the cluster example-cluster in the ClickHouse Setup wizard screen.

The Name of your cluster must follow DNS name restrictions as follows:

Allowed

  • Name must start with a letter
  • Lower case letters (a to z)
  • Numbers (0 to 9)
  • Hyphens, also named dash or minus character (-)
  • 15 character limit

Disallowed

  • Periods (.)
  • Special characters ( > ’ , " ! # )

UI Text

  • “Cluster name tag will be used in ClickHouse configuration and it may contain only lowercase letters [a-z], numbers [0-9] and hyphens [-] in between”
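
Expressed as a quick validation sketch (an illustration of the rules above, not part of the ACM itself; the helper name is hypothetical):

# Hypothetical helper illustrating the cluster-name rules listed above:
# starts with a letter, only lowercase letters, digits and hyphens in between,
# 15 characters maximum.
import re

def is_valid_cluster_name(name: str) -> bool:
    return bool(re.fullmatch(r'[a-z][a-z0-9-]{0,14}', name)) and not name.endswith('-')

print(is_valid_cluster_name('example-cluster'))          # True
print(is_valid_cluster_name('Example.Cluster'))          # False: uppercase letter and period
print(is_valid_cluster_name('a-very-long-cluster-name')) # False: longer than 15 characters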

ClickHouse Version

This lets you choose which version of ClickHouse to use, from a list in the Altinity Builds category or any other ClickHouse Community Builds.

Figure 3 – The ClickHouse software version number from the cluster wizard ClickHouse Setup screen.

Altinity Builds

Altinity Stable Builds are tested, proven stable builds with 6-month test cycles. They are based on ClickHouse Long Term Support (LTS) releases and are supported for up to 3 years. These release numbers include the phrase Altinity Stable Build.

See also:

Example values: (Your system may be different.)

  • 21.1.11 Altinity Stable Build
  • 21.3.20 Altinity Stable Build
  • 21.8.15 Altinity Stable Build
  • 22.3.12 Altinity Stable Build
  • Custom Version (Image Identifier)

Community Builds

This is a list of all available ClickHouse versions.
Example values: (Your system may be different.)

  • 21.3.20.1 Altinity Stable (community build)
  • 21.8.15.7 Altinity Stable (community build)
  • 21.11.11.1
  • 22.3.12.19 Altinity Stable (community build)
  • 22.6.8.35
  • 22.8.10.29
  • Custom Version (Image Identifier)

UI Text

  • “ClickHouse Version will be the same across all Cluster nodes”

ClickHouse User Name

Figure 4 – The ClickHouse User Name from the cluster wizard ClickHouse Setup screen.

This is the Account name of the ClickHouse administrator.
By default, this name is set to admin.

  • Example: admin

UI Text

  • “ClickHouse user will be created with the specified login”

ClickHouse User Password

Figure 5 – The ClickHouse User Password from the cluster wizard ClickHouse Setup screen.

  • Passwords need to be at least 12 characters.
  • Too-short passwords produce an Error: Invalid ClickHouse user password.
  • As you type past 12 characters, the red banner will go away.
  • Click the Show password icon, then copy and paste the password into the Confirm Password field on the right.
  • Example: ThisI5Ac0Plexp4ssW0rd | Confirm Password •••••••••••••••••••••

UI Text

  • This password will be assigned to the ClickHouse User
    The minimum password length is 12 characters. Consider adding digits
    capital letters and special symbols to make password more secure


Resources Configuration

Sets the CPU size, RAM/memory and storage settings. (Screen 2 of 6)
Node Type | Node Storage (GB) | Number of Volumes | Volume Type | Number of shards

  1. Fill in the 5 Resources Configuration fields.
  2. Select NEXT to advance to the next cluster wizard screen High Availability Configuration.
  3. Or, select BACK to return to the previous cluster wizard screen ClickHouse Setup.
  4. Or, select CANCEL to close the wizard without saving.

To see the full-sized Resources Configuration screen, see:

Resources Configuration

Cluster Launch Wizard ❯ Resources Configuration



Node Type

CPU and RAM sizes for the node. The names in the drop-down menu will differ depending on which environment you are using. (E.g., between AWS and GCP.)
Contact Altinity Technical Support if you need additional node types.

Figure 6 – The Node Type (CPU and RAM) from the cluster wizard Resources Configuration screen.

  • Clusters can later be scaled up or down dynamically.
  • Example: m5.large (CPU x2, RAM 7 GB)

More Information

Available sizes (set by your Cloud provider):

Example

  • m5.large (CPU x2. RAM 7 GB)
  • m5.xlarge (CPU x4. RAM 14 GB)
  • m5.2xlarge (CPU x8. RAM 29 GB)
  • m5.4xlarge (CPU x16. RAM 58 GB)
  • m5.8xlarge (CPU x32. RAM 120 GB)
  • c5.18xlarge (CPU x72. RAM 128.9 GB)

UI Text

  • “Node Type will be the same across all ClickHouse hosts”

Node Storage

The size of each cluster node in GB (gigabytes).

Figure 7 – The Node Storage size in GB from the cluster wizard Resources Configuration screen.

  • Each node has the same storage area.
  • Example: 100 (GB)

UI Text

  • “Each ClickHouse host will have specified amount of local volume storage”

Number of Volumes

Depending on the cloud provider you are using, creating more volumes may improve query performance.

Figure 8 – The Number of Volumes to set, from the cluster wizard Resources Configuration screen.

UI Text

  • “Network storage can be split to several volumes for a better query performance”

Volume Type

These are SSD block storage volumes provided by Amazon AWS, Google GCP, or other cloud providers.

The choices are:

  • gp2-encrypted
  • gp3-encrypted

Figure 9 – The Volume Type gp2-encrypted or gp3-encrypted from the cluster wizard Resources Configuration screen.

UI Text

  • “Defines volume claim storage class for each ClickHouse host”

Number of Shards

Shards group nodes that work together to share data and improve performance. Replicating shards is done to increase availability and speed up recovery if one shard goes down.

Figure 10 – The Number of Shards to create, from the cluster wizard Resources Configuration screen.

Where quotas are in place, the UI shows the string “x / y (i.e., 1 of 20) shards will be used”.

UI Text

  • “Each shard will require X number of ClickHouse hosts, where X is the number of replicas of this shard (X = 2)”


High Availability Configuration

These are redundancy and failover settings. (Screen 3 of 6)
Number of Replicas | Zookeeper Configuration | Zookeeper Node Type | Enable Backups

  1. Fill in the 4 High Availability Configuration fields.
  2. Select NEXT to advance to the next cluster wizard screen Connection Configuration.
  3. Or, select BACK to return to the previous cluster wizard screen Resources Configuration.
  4. Or, select CANCEL to close the wizard without saving.

UI Text

  • “Please contact Altinity Support if you need more resources”

To see the full-sized High Availability Configuration screen, see:

High Availability

Cluster Launch Wizard ❯ High Availability Configuration


Number of Replicas

Figure 11 – The Number of Replicas to create, from the cluster wizard High Availability Configuration screen.

  • 1 | 2 | 3
  • Number of Replicas for each Cluster Shard
  • Replicas: x / y Replicas will be used (appears if Altinity Support has set usage quotas)

UI Text

  • “Number of Replicas for each Cluster shard”

Quotas set by Altinity

If bar charts appear, this means Altinity Support has set quotas for your domain.

Figure 12 – The graphs for CPU and Storage appear if Altinity Support has set quotas. From the cluster wizard High Availability Configuration screen.

  • CPU: x / y vCPUs will be used
  • Storage: x / y GB will be used

Zookeeper Configuration

Figure 13 – The Zookeeper Configuration and Zookeeper Node Type from the cluster wizard High Availability Configuration screen.

  • Example: Dedicated

UI Text

  • “You can pick a shared Zookeeper cluster, Launch a dedicated one or do not use Zookeeper at all.”

Zookeeper Node Type

  • Default

Enable Backups

Whether or not backups occur.

Figure 14 – The Enable Backups checkbox and Backup Schedule from the cluster wizard High Availability Configuration screen.

  • Example: True (checked)

Backup Schedule

This is the frequency that data backups happen.

  • Example: Daily

UI Text

  • [ none ]

Number of Backups to keep

Figure 15 – The Number of Backups to keep from the cluster wizard High Availability Configuration screen.

  • Example: 7

UI Text

  • [ none ]


Connection Configuration

Used to set the communications protocols. (Screen 4 of 6)
Endpoint | Use TLS | Load Balancer Type | Protocols | Datadog integration | IP restrictions

  1. Fill in the Connection Configuration fields.
  2. Select NEXT to advance to the next cluster wizard screen Uptime Schedule.
  3. Or, select BACK to return to the previous cluster wizard screen High Availability Configuration.
  4. Or, select CANCEL to close the wizard without saving.

UI Text

  • “Please contact Altinity Support if you need more resources.”

To see the full-sized Connection Configuration screen, see:

Connection Configuration

Cluster Launch Wizard ❯ Connection Configuration


Endpoint

The Endpoint is the access point domain name to your cluster. The URL breaks down as follows:

  • example-cluster is the name of the cluster
  • customer-domain is your environment
  • altinity.cloud is the parent for all environments
  • Example: example-cluster.customer-domain.altinity.cloud

Figure 16 – The Endpoint URL from the cluster wizard Connection Configuration screen.

UI Text

  • “Access point Domain Name”

Use TLS

When True, the connection to the Cluster Endpoints will be secured with TLS.

  • Example: True (checked)

Figure 17 – The Use TLS checkbox from the cluster wizard Connection Configuration screen.

UI Text

  • “Connection to the Cluster Endpoints will be secured with TLS”

Load Balancer Type

Choice of the Load Balancer you want to use.

  • Example: Altinity Edge Ingress

Figure 18 – The Load Balancer Type (Altinity Edge Ingress is selected) from the cluster wizard Connection Configuration screen.

UI Text

  • [ none ]

Protocols

Port settings. By default, these settings cannot be unchecked.

  • Binary Protocol (port: 9440) = True (checked)
    UI Text - “This enables native ClickHouse protocol connection”
  • HTTP Protocol (port: 8443) = True (checked)
    UI Text - “This enables native ClickHouse protocol connection”

Figure 19 – The Protocols ports to enable, from the cluster wizard Connection Configuration screen.
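
As an illustration of what the HTTP endpoint on port 8443 provides: the standard ClickHouse HTTP interface accepts queries over HTTPS, while port 9440 is what clickhouse-client and the native drivers use. A minimal sketch with placeholder host and password (requires pip install requests):

# Querying the cluster endpoint over the HTTPS port (8443) using the standard
# ClickHouse HTTP interface. Host and password are placeholders.
import requests

response = requests.get(
    'https://example-cluster.your-domain.altinity.cloud:8443/',
    params={'query': 'SELECT version()'},
    auth=('admin', 'yourpasswordhere'))

print(response.text)  # prints the ClickHouse server version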

Datadog integration

For logging purposes, you can request that the third-party application Datadog be set up by Altinity Support.

  • Datadog integration (disabled) - This entire section is dimmed.
  • Send Logs = False (unchecked)
  • Send Metrics = False (unchecked)

Figure 20 – If Altinity Support has enabled this, Datadog integration is selectable. From the cluster wizard Connection Configuration screen.

UI Text

  • [ none ]

IP restrictions

This is used to increase security by using a whitelist (allowlist) of IP addresses to restrict access.

Note that Altinity needs to have certain ports open in order to maintain your tenancy.

  • IP restrictions
  • Enabled False (unchecked) - This is the default setting
  • Enabled True (checked) - Turn on this setting then add IP numbers

Figure 21 – If Altinity Support has enabled this, IP restrictions lets you whitelist IP addresses. From the cluster wizard Connection Configuration screen.

UI Text

  • Restricts ClickHouse client connections to the provided list of IP addresses in CIDR format. Multiple entries can be separated by new lines or commas
    Note:
    34.238.65.247,
    44.195.72.25,
    100.24.75.12,
    10.0.0.0/8,
    172.16.0.0/12,
    192.168.0.0/16,
    10.128.0.0/23,
    10.128.2.0/23,
    10.128.4.0/23
    is added automatically as it is required for Altinity.Cloud to operate.


Uptime Schedule

Used to set a schedule for when the clusters should run. (Screen 5 of 6)
For this Quick Start Guide, the servers are set to ALWAYS ON.
Always On | Stop When Inactive | On Schedule


To see the full-sized Uptime Schedule screen, see:

Uptime Schedule

Cluster Launch Wizard ❯ Uptime Schedule


To set (or remove) an Uptime Schedule:

  1. Click one of the three Uptime Schedule settings.
  2. Select NEXT to advance to the last cluster wizard screen Review & Launch.
  3. Or, select BACK to return to the previous cluster wizard screen Connection Configuration.
  4. Or, select CANCEL to close the wizard without saving.

Figure 22 – The Uptime Schedule lets you set when you want your clusters to run. From the cluster wizard Uptime Schedule screen.

Sets server uptime.
See the General User Guide > Uptime Schedule Settings page for setting details .

Always On

Use this setting to run your cluster 24/7.

UI Text

  • “Altinity.Cloud shall not trigger any Stop or Resume Operations on this Cluster automatically”

Stop When Inactive

Use this setting to stop clusters from running after a set number of hours.
Servers must be manually restarted.

  • Example: Hours of inactivity 24

UI Text

  • “The cluster will be stopped automatically when there is no activity for a given amount of hours”

On Schedule

Use this setting to schedule the times the cluster runs on each day of the week.

UI Text

  • Uptime Schedule for .
  • “Schedule (Time in GMT)”
  • “Monday Tuesday Wednesday Thursday Friday Saturday Sunday”
  • “Active”, “All Day”, “From: hh:mm AM/PM To: hh:mm AM/PM”


Review & Launch

This is the last chance to review settings before saving and launching the new ClickHouse cluster. (Screen 6 of 6)

  1. Select LAUNCH to save and start the cluster.
  2. Or, select BACK to return to the previous cluster wizard screen Uptime Schedule.
  3. Or, select CANCEL to close the wizard without saving.

To see the full-sized Review & Launch screen, see:

Review & Launch

Cluster Launch Wizard ❯ Review & Launch


Figure 23 – The last wizard screen Review & Launch displays your CPU and RAM choices and a cost estimate.

Cluster Cost Estimate

UI Text
You are about to launch cluster example-cluster with 1 shards with replication which will require:

  • 2 x CH Nodes m5.large
  • 14 GB of memory in total
  • 200 GB of storage volume in total
  • Estimated Cluster cost: $0.76/hour ($547.20/month)
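
The monthly figure follows from the hourly rate, assuming a 720-hour (30-day) month:

# How the monthly estimate above is derived, assuming a 720-hour (30-day) month.
hourly_rate = 0.76                       # estimated cluster cost per hour
monthly_estimate = hourly_rate * 24 * 30
print(f"${monthly_estimate:.2f}/month")  # $547.20/month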

Select LAUNCH to save and start the cluster.


Figure 24 – The LAUNCH button on the last wizard screen Review & Launch.

Conclusion

Launching a new cluster using the Cluster Launch Wizard is now complete.

Continue on to the next section:

1.2.6 - Creating Tables and Adding Data

How to use Explore on your cluster to run SQL queries to create tables, import data, and view schema and table data.

26 January 2023 · Read time 3 min

Overview - Creating Tables

This section is for first-time users who have just learned how to create a ClickHouse cluster, and now want to add tables and data.

The Altinity.Cloud Manager (ACM) screens used on this page are:

  • ACM home page ❯ Clusters
  • ACM: Cluster (name) > Explore > Query tab
  • ACM: Cluster (name) > Explore > Schema tab
  • ACM: Cluster (name) > Explore > Schema tab > Table (name)
  • ACM: Cluster (name) > Explore > Schema tab > Table (name) > Table Details > Sample Rows

Creating Tables

The next step after creating a new ClickHouse cluster is to create tables.
After completing this example, two empty tables are created:

  • events_local
  • events

Prerequisite

  • Open the UI screen: ACM: Cluster (name) > Explore > Schema tab

To create two tables in your blank cluster by using a SQL Query:

  1. From the domain menu, select your domain (ie. your.domain).
    https://acm.altinity.cloud/clusters

  2. In your cluster (ie. example-cluster), select EXPLORE. Confirm that the Query tab is selected.

  3. To create the first table, called events_local, copy and paste in the following SQL query then EXECUTE:

    CREATE TABLE IF NOT EXISTS events_local ON CLUSTER '{cluster}' (
        event_date  Date,
        event_type  Int32,
        article_id  Int32,
        title       String
    ) ENGINE = ReplicatedMergeTree('/clickhouse/{cluster}/tables/{shard}/{database}/{table}', '{replica}')
        PARTITION BY toYYYYMM(event_date)
        ORDER BY (event_type, article_id);
    
  4. To create the second table called events, copy and paste in the following SQL query then EXECUTE:

    CREATE TABLE events ON CLUSTER '{cluster}' AS events_local
       ENGINE = Distributed('{cluster}', default, events_local, rand())
    
  5. Below the EXECUTE button, the following information displays after running each SQL query:

    example-cluster.your-domain.altinity.cloud:8443 (query time: 0.335s)

    chi-example-cluster-example-cluster-0-0	9000	0		1	0
    chi-example-cluster-example-cluster-0-1	9000	0		0	0
    
  6. In the Schema tab, confirm that the two tables events_local and events are present.


Adding Data

In the Query tab, SQL commands are used to add data to your ClickHouse tables and to verify the additions.


Prerequisite

  • Open the UI screen: ACM: Cluster (name) > Explore > Schema tab

To add data to the events_local table:

  1. Copy and paste the following to the cluster Query field then Execute:

    INSERT INTO events VALUES(today(), 1, 13, 'Example');
    
  2. Verify that the data has been added to events_local by running the query:

    SELECT * FROM events;
    
  3. The following response appears below the EXECUTE button.

    ┌─event_date─┬─event_type─┬─article_id─┬─title───┐
    │ 2023-01-04 │          1 │         13 │ Example │
    └────────────┴────────────┴────────────┴─────────┘
    

Viewing Schema and Data

The Schema tab contains a list of your ClickHouse tables and data in your cluster.


Prerequisite

  • Open the UI screen: ACM: Cluster (name) > Explore > Schema tab
  • UI screen: ACM: Cluster (name) > Explore > Schema tab > Table (name) > Table Details > Sample Rows

To view the added schema and data:

  1. Select the Schema tab.
    Two tables are listed, events_local and events.

  2. Within the Schema tab, select the Table link called events_local.

  3. In the Table Details dialog box, select the tab Sample Rows.
    The following information appears.

    ┌─event_date─┬─event_type─┬─article_id─┬─title───┐
    │ 2023-01-04 │          1 │         13 │ Example │
    └────────────┴────────────┴────────────┴─────────┘
    
  4. Select DONE to close the Table Details window.

1.2.7 - ClickHouse Ubuntu Terminal Remote Client

How to install the ClickHouse Ubuntu command line client and connect to your Altinity.Cloud cluster.

26 January 2023 · Read time 2 min

Overview - Ubuntu ClickHouse Client

This section covers the installation of the ClickHouse client on the Linux OS Ubuntu 20.04.
After installation, you will be able to run ClickHouse queries from the terminal.

Updating Ubuntu

  1. Update your Ubuntu OS and confirm the version with the following commands:

    sudo apt-get update
    sudo apt-get upgrade
    lsb_release -a
    

Installing ClickHouse drivers

To install ClickHouse drivers on Ubuntu 20.04:

  1. Copy and paste each of the following lines to your Ubuntu terminal in sequence:

    sudo apt-get install -y apt-transport-https ca-certificates dirmngr
    sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 8919F6BD2B48D754
    echo "deb https://packages.clickhouse.com/deb stable main" | sudo tee /etc/apt/sources.list.d/clickhouse.list
    sudo apt-get update
    sudo apt-get install -y clickhouse-client
    clickhouse-client --version
       ClickHouse client version 22.12.3.5 (official build).
    

More information

Logging on to your cluster

  1. From the Connection Details, copy and paste the text string to your Ubuntu terminal:

     clickhouse-client -h example-cluster.your-domain.altinity.cloud --port 9440 -s --user=admin --password
    

ClickHouse terminal response

  1. After you enter your ClickHouse cluster password, you enter the ClickHouse interactive mode.
    ClickHouse prompt example: example-cluster :)

    (test2) user@xubuntu:~$ clickhouse-client -h example-cluster.your-domain.altinity.cloud --port 9440 -s --user=admin --password
    ClickHouse client version 22.12.3.5 (official build).
    Password for user (admin): 
    *********
    
    Connecting to example-cluster.your-domain.altinity.cloud:9440 as user admin.
    Connected to ClickHouse server version 22.3.15 revision 54455.
    
    ClickHouse server version is older than ClickHouse client. 
    It may indicate that the server is out of date and can be upgraded.
    
    example-cluster :) 
    

ClickHouse query examples

  1. At the ClickHouse prompt, enter the query command show tables:

    example-cluster :) show tables
    
    SHOW TABLES
    
    Query id: c319298f-2f28-48fe-96ca-ce59aacdbc43
    
    ┌─name─────────┐
    │ events       │
    │ events_local │
    └──────────────┘
    
    2 rows in set. Elapsed: 0.080 sec.
    
  2. At the ClickHouse prompt, enter the query select * from events:

    example-cluster :) select * from events
    
    SELECT *
    FROM events
    
    Query id: 0e4d08b3-a52d-4a03-917d-226c6a2b00ac
    
    ┌─event_date─┬─event_type─┬─article_id─┬─title───┐
    │ 2023-01-04 │          1 │         13 │ Example │
    │ 2023-01-10 │          1 │         13 │ Example │
    │ 2023-01-10 │          1 │         14 │ Example │
    └────────────┴────────────┴────────────┴─────────┘
    
    3 rows in set. Elapsed: 0.073 sec.
    

To quit, or exit from the ClickHouse interactive mode:

  1. Enter the exit command to return to your Ubuntu shell environment.

    example-cluster :) exit
    Bye.
    

This completes the quick start guide to installing ClickHouse command-line client on an Ubuntu OS.

Related links

1.2.8 - ClickHouse Python Client

How to install the ClickHouse Python driver, and connect to your Altinity.Cloud ClickHouse Cluster from a Python program or a Python console.

26 January 2023 · Read time 4 min

Overview - Python ClickHouse Client

This section covers the installation and use of the Python ClickHouse client.
After installation, you will be able to run ClickHouse queries on any platform that runs Python 3.

  • You must first have completed Creating Tables and Adding Data.
  • You have copied the Connection Details from your cluster.
  • Python version 3.7 (or later) is installed (this page is tested using Python 3.11.1)

More information


Installing the ClickHouse Python driver

Install ClickHouse drivers using the Python 3 PIP package installer:

pip install clickhouse-driver

Example Python program

The following Python program demonstrates:

  • Importing the clickhouse-driver library (that was previously installed by the Python 3 PIP package installer)
  • Connecting to your ClickHouse cluster example-cluster
  • Listing all of the tables that exist in the example-cluster
  • Listing the data in the example table events_local
  • Showing the version number of the Python clickhouse-driver (0.2.5)

Shortcut for experienced users

  • Python program filename: ClickHouse-example.py (copy and paste the code)
  • Update the program strings for the cluster name from Connection Details link on your cluster, such as:
    your-company-example
    yourpasswordhere
  • Run the command:
    python3 ClickHouse-example.py

Instructions

  1. Modify the following in the Python program ClickHouse-example.py as follows:

    • Change the cluster name, replacing your-company-example
    • Change password yourpasswordhere to your own ClickHouse cluster password

  1. Run the program from the terminal or your IDE:

    • python3 ClickHouse-example.py

Code Snippet 1 - ClickHouse-example.py


import clickhouse_driver
print(clickhouse_driver.__version__)

# No return characters:
# Replace your-company-example
# Replace yourpasswordhere 
# client = Client('example-cluster.your-company-example.altinity.cloud', user='admin', password='yourpasswordhere', port=9440, secure='y', verify=False)

# Connect to your cluster
from clickhouse_driver import Client
client = Client('example-cluster.your-company-example.altinity.cloud',
                user='admin',
                password='yourpasswordhere',
                port=9440,
                secure='y',
                verify=False)

# Show tables
tables = client.execute('SHOW TABLES in default')
print(tables)


# Show data
result = client.execute('SELECT * FROM default.events')
print(result)


# Show ClickHouse Python driver version
version = (clickhouse_driver.__version__)
print("ClickHouse Python version: ", version)

Python program response

[('events',), ('events_local',)]
[(datetime.date(2023, 1, 4), 1, 13, 'Example'), (datetime.date(2023, 1, 10), 1, 13, 'Example'), (datetime.date(2023, 1, 10), 1, 14, 'Example')]
ClickHouse Python version:  0.2.5

Python Console

This section shows how you can use the Python console to interactively connect to your ClickHouse cluster on Altinity and send SQL queries.

First copy your Connection Details from your cluster you want to communicate with.

Bring up the terminal and enter Python console mode, then copy and paste the following commands shown after the Python console prompt >>>:

# Get into Python console mode
python3
>>>

# Getting the ClickHouse Python driver version number
>>> import clickhouse_driver
>>> print(clickhouse_driver.__version__)
0.2.5

# Connect to your ClickHouse cluster (replace <HOSTNAME> and <PASSWORD>)
>>> from clickhouse_driver import Client
>>> client = Client('<HOSTNAME>', user='admin', password=<PASSWORD>, port=9440, secure='y', verify=False)

# Confirm that the client object is created
>>> print(client)
<clickhouse_driver.client.Client object at 0x107730910>

# Show all tables
>>> client.execute('SHOW TABLES in default')
[('events',), ('events_local',)]

# Show data
>>> result = client.execute('SELECT * FROM default.events')
>>> print(result)
[(datetime.date(2023, 1, 4), 1, 13, 'Example'), (datetime.date(2023, 1, 10), 1, 13, 'Example'), (datetime.date(2023, 1, 10), 1, 14, 'Example')]

# ClickHouse Version 0.2.5 as of January 2023
version = (clickhouse_driver.__version__)
print("ClickHouse Python version: ", version)
ClickHouse Python version:  0.2.5

Checking your ClickHouse Version from PIP

To check your Python ClickHouse version installation from the terminal, enter:

  • pip show clickhouse-driver
username)   ~ 
username)   ~ pip show clickhouse-driver

        Name:  clickhouse-driver
     Version:  0.2.5
     Summary:  Python driver with native interface for ClickHouse
   Home-page:  https://github.com/mymarilyn/clickhouse-driver
      Author:  Konstantin Lebedev
Author-email:  kostyan.lebedev@gmail.com
     License:  MIT
    Location:  /Users/username/lib/python3.11/site-packages
    Requires:  pytz, tzlocal
 Required-by: 

This concludes the Quick Start instructions for how to use the Python3 ClickHouse driver.


Related links

1.2.9 - ClickHouse Go Client

How to install the ClickHouse Go driver, and connect to your Altinity.Cloud ClickHouse Cluster from a Go program.

2 February 2023 · Read time 4 min

Overview - Go ClickHouse Client

This section covers the installation and use of the Go ClickHouse client.
After installation, you will be able to run ClickHouse queries on any platform that runs Go.

  • You must first have completed Creating Tables and Adding Data.
  • You have copied the Connection Details from your cluster.
  • Go version 1.19 (or later) is installed (this page is tested using Go 1.19)

More information

Note
The following example was run on:

  • macOS running Monterey
  • Go 1.19 installed using the Brew package manager
  • Jetbrains GoLand IDE

Show tables in a cluster

This example connects to your ClickHouse cluster using Go, then lists all of the tables.

In the following Go SHOW TABLES example, the port (9440) and username (admin) are the defaults, so replace the host and password fields with your own values.

  • host - example-cluster.your-xyzcompany.altinity.cloud
  • port - 9440
  • username - admin
  • password - yourpassword1234
package main

import (
	"database/sql"
	"fmt"
	_ "github.com/ClickHouse/clickhouse-go/v2"
	"log"
)

func main() {
	tables, err := queryTables(
		"example-cluster.your-xyzcompany.altinity.cloud",  // host
		9440,                                              // port
		"admin",                                           // username
		"yourpassword1234",                                // password
	)
	if err != nil {
		log.Fatal(err)
	}
	for _, table := range tables {
		fmt.Println(table)
	}
}

func queryTables(host string, port int, username, password string) ([]string, error) {
	db, err := sql.Open("clickhouse",
		fmt.Sprintf("clickhouse://%v:%v@%s:%d?secure=true", username, password, host, port))
	if err != nil {
		return nil, err
	}
	defer db.Close()
	q, err := db.Query("SHOW TABLES in default")
	if err != nil {
		return nil, err
	}
	defer q.Close()
	var tableName string
	var r []string
	for q.Next() {
		if err := q.Scan(&tableName); err != nil {
			return nil, err
		}
		r = append(r, tableName)
	}
	return r, q.Err()
}

The output appears as follows:

GOROOT=/usr/local/opt/go/libexec #gosetup
GOPATH=/Users/johndouser/go #gosetup
/usr/local/opt/go/libexec/bin/go build -o /private/var/<yourpath>/main/login.go #gosetup
/private/var/folders/<yourpath>/___go_build_login_go
events
events_local

Process finished with the exit code 0

Display ClickHouse Version

This Go program lists the version of the ClickHouse Go driver by printing the program's build information.

package main

import (
	_ "github.com/ClickHouse/clickhouse-go/v2"
	"github.com/kr/pretty"
	"log"
	"runtime/debug"
)

func main() {
	bi, ok := debug.ReadBuildInfo()
	if !ok {
		log.Fatal("!ok")
	}
	pretty.Println(bi)
}

The output appears as pretty-printed Go build information:

GOROOT=/usr/local/Cellar/go/1.19.5/libexec #gosetup
GOPATH=/Users/rmkc/go #gosetup
/usr/local/Cellar/go/1.19.5/libexec/bin/go build -o 
/private/var/folders/<yourpath>/___go_build_clickhouse_demo_go 
/Users/johndoeuser/<yourpath>/clickhouse-demo.go #gosetup
/private/var/folders/<your path>/___go_build_clickhouse_demo_go
&debug.BuildInfo{
    GoVersion: "go1.19.5",
    Path:      "command-line-arguments",
    Main:      debug.Module{},
    Deps:      {
        &debug.Module{
            Path:    "github.com/ClickHouse/ch-go",
            Version: "v0.51.0",
            Sum:     "h1:APpjlLifKMgRp1b94cujX7dRZLL4DjbtMVs7qRoUPqg=",
            Replace: (*debug.Module)(nil),
        },
        &debug.Module{
            Path:    "github.com/ClickHouse/clickhouse-go/v2",
            Version: "v2.5.1",
            Sum:     "h1:+KebkZtGJKaCilgNF0vQBrc7hOdNWnheue0bX1OVhl4=",
            Replace: (*debug.Module)(nil),
        },
        
 // .... several omitted lines

Process finished with the exit code 0

This concludes the Quick Start instructions for how to use the Go language to use ClickHouse.


Related links

1.2.10 - ClickHouse Macintosh (Monterey) Client

How to install the ClickHouse terminal client on a Macintosh running Monterey using the Brew package installer, and connect to your Altinity.Cloud ClickHouse Cluster.

26 January 2023 · Read time 4 min

Overview - Macintosh ClickHouse Client

This section covers the installation and use of the Mac ClickHouse client.
After installation, you will be able to run ClickHouse queries from a Macintosh terminal.
The following ClickHouse install instructions using Brew are tested to work on Monterey (OS version 12.1).

Prerequisites

  • You have copied the Connection Details from your cluster.
  • Python version 3.7 (or later) is installed
  • The PIP Python Package installer is installed
  • The Brew package installer is installed

More information

Installing ClickHouse on macOS

Installation of the macOS ClickHouse client uses the Brew package installer.

To install macOS ClickHouse client:

  1. Enter each of the four lines in turn.
    The final step starts ClickHouse.

    brew tap altinity/clickhouse
    brew tap-info --json altinity/clickhouse
    brew install clickhouse
    brew services start clickhouse
    

Checking the ClickHouse Version Installed

This step checks the installed version of the ClickHouse macOS client.
You will note that it is not necessary to log in to ClickHouse to run this command.

To check the installed ClickHouse version from a macOS terminal:

  1. Enter the following string:

    clickhouse client -q 'SELECT version()'
    22.7.2.1
    

Note that you are still in the macOS shell, not the ClickHouse interactive mode, as the following examples demonstrate.

Logging on to your ClickHouse cluster

To login to your cluster from the macOS terminal:

  1. From the Connection Details, copy and paste the string of text into the macOS terminal:

    clickhouse-client -h example-cluster.your-cluster.altinity.cloud --port 9440 -s --user=admin --password
    

ClickHouse terminal response

  1. Enter your ClickHouse cluster password at the prompt.

  2. The ClickHouse version number is displayed, and the interactive prompt mode shows:
    example-cluster :)

    ClickHouse client version 22.7.2.1.
    
    Password for user (admin): 
    ********
    
    Connecting to example-cluster.your-cluster.altinity.cloud:9440 as user admin.
    Connected to ClickHouse server version 22.3.15 revision 54455.
    
    ClickHouse server version is older than ClickHouse client. 
    It may indicate that the server is out of date and can be upgraded.
    
    example-cluster :) 
    

Running ClickHouse Queries

To display all the tables in your cluster from a macOS terminal:

  1. At the ClickHouse prompt, enter the query command show tables:

    example-cluster :) show tables
    
    SHOW TABLES
    
    Query id: c04f8699-33db-4f4f-9a5b-e92fd25b6bb6
    
    Password for user (admin):
    Connecting to example-cluster.your-cluster.altinity.cloud:9440 as user admin.
    Connected to ClickHouse server version 22.3.15 revision 54455.
    
    ClickHouse server version is older than ClickHouse client. 
    It may indicate that the server is out of date and can be upgraded.
    
    ┌─name─────────┐
    │ events       │
    │ events_local │
    └──────────────┘
    
    2 rows in set. Elapsed: 0.073 sec.
    

To display the data in your cluster table named events from a macOS terminal:

  1. At the ClickHouse prompt, enter the query command select * from events:

    example-cluster :) select * from events
    
    SELECT *
    FROM events
    
    Query id: 998dbb3e-786e-404e-9d17-f044a6098191
    
    ┌─event_date─┬─event_type─┬─article_id─┬─title───┐
    │ 2023-01-04 │          1 │         13 │ Example │
    │ 2023-01-10 │          1 │         13 │ Example │
    │ 2023-01-10 │          1 │         14 │ Example │
    └────────────┴────────────┴────────────┴─────────┘
    
    3 rows in set. Elapsed: 0.074 sec.
    
    example-cluster :)
    

To quit, or exit from the ClickHouse interactive mode:

  1. Enter the exit command to return to your macOS shell environment.

    example-cluster :) exit
    Bye.
    

This concludes the Quick Start instructions for how to install and use the Macintosh ClickHouse terminal client.


1.3 - General User Guide

Instructions on general use of Altinity.Cloud

Altinity.Cloud is made to be both convenient and powerful for ClickHouse users. Whether you’re a ClickHouse administrator or a developer, these are the concepts and procedures common to both.

1.3.1 - How to Create an Account

Creating your Altinity.Cloud account.

To create an Altinity.Cloud account, visit the Altinity.Cloud info page and select Free Trial. Fill in your contact information, and our staff will reach out to you to create a test account.

If you’re ready to upgrade to a full production account, talk to one of our consultants by filling out your contact information on our Consultation Request page.

1.3.2 - How to Login

Login to Altinity.Cloud

Altinity.Cloud provides the following methods to login to your account:

  • Username and Password
  • Auth0

Login with Username and Password

To login to Altinity.Cloud with your Username and Password:

  1. Open the Altinity.Cloud website.
  2. Enter your Email Address registered to your Altinity.Cloud account.
  3. Enter your Password.
  4. Click Sign In.

Once authenticated, you will be logged into Altinity.Cloud.

Login with Auth0

Auth0 allows you to access your existing Altinity.Cloud account using trusted authentication platforms such as Google to verify your identity.

  • IMPORTANT NOTE: This requires that your Altinity.Cloud account matches the authentication platform you are using. For example, if your email address in Altinity.Cloud is listed as Nancy.Doe@gmail.com, your Gmail address must also be Nancy.Doe@gmail.com.

To login using Auth0:

  1. Open the Altinity.Cloud website.
  2. Select Auth0.
  3. Select which authentication platform to use from the list (for example: Google).
    1. If this is your first time using Auth0, select which account to use. You must already be logged into the authentication platform.
  4. You will be automatically logged into Altinity.Cloud.

1.3.3 - How to Logout

Logout of Altinity.Cloud

To logout:

  1. Select your profile icon in the upper right hand corner.
  2. Select Log out.

Your session will be ended, and you will have to authenticate again to log back into Altinity.Cloud.

1.3.4 - Account Settings

Account and profile settings.

Access My Account

To access your account profile:

  1. Select your user profile in the upper right hand corner.

  2. Select My Account.

    Access user account

My Account Settings

From the My Account page the following settings can be viewed:

  • Common Information. From here you can update or view the following:
    • Email Address (view only): Your email address or login
    • Password settings.
    • Dark Mode: Set the user interface to either the usual or darker interface.
  • API Access: The security access rights assigned to this account.
  • Access Rights: What security related actions this account can perform.

Update Password

To update your account password:

  1. Click your profile icon in the upper right hand corner.

  2. Select My Account.

  3. In the Common Information tab, enter the new password in the Password field.

  4. Select Save.

    Altinity Cloud user common settings

API Access Settings

Accounts can make calls to Altinity.Cloud through the API address at https://acm.altinity.cloud/api, and the Swagger API definition file is available at https://acm.altinity.cloud/api/reference.json.

Access is controlled through API access keys and API Allow Domains.

API Access Keys

Accounts can use this page to generate one or more API keys that can be used without exposing the account's username and password. API calls made with a key are executed as the account the key was generated for.

When an Altinity.Cloud API key is generated, an expiration date is set for the key. By default, the expiration date is 24 hours after the key is generated, with the date and time in GMT. The date can be adjusted so that the key becomes invalid at the date and time of your choosing.
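
As a minimal sketch of calling the API from the command line, the example below downloads the Swagger definition with curl and shows the general shape of an authenticated request. The X-Auth-Token header name is an assumption made here for illustration only; consult the Swagger reference for the exact way to pass the key.

    # Download the Swagger API definition
    curl -s https://acm.altinity.cloud/api/reference.json

    # Hypothetical authenticated call; the header name is an assumption,
    # confirm the real key parameter in the Swagger reference before use
    curl -s -H 'X-Auth-Token: <your-api-key>' https://acm.altinity.cloud/api/<endpoint>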

Create Altinity.Cloud API Key

To generate a new API key:

  1. Click your profile icon in the upper right hand corner.
  2. Select My Account.
  3. In the API Access tab, select + Add Key. The key will be available for use with the Altinity.Cloud API.

To change the expiration date of an API key:

  1. Click your profile icon in the upper right hand corner.
  2. Select My Account.
  3. In the API Access tab, update the date and time for the API key being modified. Note that the date and time are in GMT (Greenwich Mean Time).

To remove an API key:

  1. Click your profile icon in the upper right hand corner.
  2. Select My Account.
  3. In the API Access tab, select the trashcan icon next to the API key to delete. The key will no longer be allowed to connect to the Altinity.Cloud API for this account.

API Allow Domains

API submissions can be restricted by the source domain address. This provides enhanced security by keeping API communications only between authorized sources.

To update the list of domains this account can submit API commands from:

  1. Click your profile icon in the upper right hand corner.
  2. Select My Account.
  3. In the API Access tab, list each URL this account can submit API commands from. Each URL goes on a separate line.
  4. Click Save to update the account settings.
Altinity Cloud user common settings

Access Rights

The Access Rights page displays which permissions your account has. These are listed in three columns:

  • Section: The area of access within Altinity.Cloud, such as Accounts, Environments, and Console.
  • Action: What actions the access right rule allows within the section. Actions marked as * include all actions within the section.
  • Rule: Whether the Action in the Section is Allow (marked with a check mark), or Deny (marked with an X).

1.3.5 - Clusters View

Overview of the Clusters View

The Clusters View page allows you to view available clusters and access your profile settings.

To access the Clusters View page while logged in to Altinity.Cloud, click Altinity Cloud Manager.

The Clusters View page is separated into the following sections:

  • A: Cluster Creation: For more information on how to create new clusters, see the Administrator Guide.
  • B: Clusters: Each cluster associated with your Altinity.Cloud account is listed in either tile format, or as a short list.
  • C: User Management:
    • Change which environment clusters are on display.
    • Access your Account Settings.
Clusters View

Organizational Admins have additional options in the left navigation panel that allows them to select the Accounts, Environments, and Clusters connected to the organization’s Altinity.Cloud account.

Change Environment

Accounts that are assigned to multiple Altinity.Cloud environments can select which environment’s clusters they are viewing. To change your current environment:

  1. Click the environment dropdown in the upper right hand corner, next to your user profile icon.
  2. Select the environment to use. You will automatically view that environment’s clusters.
Change Environment

Manage Environments

Accounts that have permission to manage environments access them through the following process:

  1. Select the Settings icon in the upper right hand corner.
  2. Select Environments.
Manage Environments

For more information on managing environments, see the Administrator Guide.

Access Settings

For information about access your account profile and settings, see Account Settings.

Cluster Access

For details on how to launch and manage clusters, see the Administrator Guide for Clusters.

1.3.6 - Clusters Reference

How to launch clusters and manage clusters.

ClickHouse databases are managed through clusters, which harness the power of distributed processing to quickly deliver results on even the most complex and data intensive queries.

Altinity.Cloud users can create their own ClickHouse Clusters tailored to their organization’s needs.

1.3.6.1 - View Cluster Details

How to view details of a running cluster and its nodes.

Cluster Dashboard

The left-hand option labeled Clusters is what you select to view your clusters. Selecting a cluster displays its Dashboard.

  • The left side includes the Endpoint with a Connection Details link, Layouts that summarize the nodes, shards and replication, the Replication count, the Version of ClickHouse installed, the Latest Backup date, the Last Query date, and the Last Insert date.

  • The right side includes Monitoring with a link to View in Grafana, the number of Nodes, the type of Load Balancer, the Node Type, the Node Storage capacity in GB and type, the Node Memory in GB, and the Node CPU count.

  • The bottom shows pie charts for Volume (disk space used for each replica) and Memory used by each replica.

More Information

The following screen shot is what you see when you select Clusters from the left-hand navigation panel.

Cluster Details Page
Figure 1 - Cluster Dashboard


The sections in Figure 1 are detailed as follows:

(A) Cluster Management

The top menu bar provides cluster management functions.

  • Actions menu - Includes options to Upgrade, Rescale, Resume, Restart, Export Configuration, Publish Configuration, Launch a Replica Cluster, and Destroy a cluster. Additional functions for authorized users include Restore a Backup and Create Backup. Contact Altinity for more information.
  • Configure menu - Includes Settings, Profiles, Users, Connections, and Uptime Schedule.
  • Explore button - Contains work areas for Query, Schema, Workload, and DBA Tools.
  • Alerts
  • Logs

(B) Health

(C) Access Point

(D) Monitoring, Queries

  • Monitor the Cluster and its Queries.

(E) Summary Information

  • View summary details for the Cluster or Node.
  • Select Nodes to view details on the cluster’s Nodes.

Nodes Summary

The Nodes tab displays detailed information for each of your nodes in the cluster.

Details for each node include:

  • Endpoint with a link to a node’s Connection Details
  • Version of ClickHouse installed
  • Type of CPU processor used
  • Node Storage size in GB
  • Memory allocated (RAM) for the node
  • Availability Zone that is set in your AWS or GKE cloud provider

Select Node View, or View for a specific node, to access it:

Node Summary Page
Figure 2 - The Nodes tab from the Clusters page.

Node Connection

For the selected Node, Connection Details lists connection strings for use by various clients, including the clickhouse-client, JDBC drivers, HTTPS, and Python.

Similar to the Cluster Access Point, this page provides connection information to a specific node.

Node Connection
Figure 3 - Display of client Connection Details for a specific Node.
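
As an illustrative sketch only, a node-level connection uses the same clickhouse-client options as the cluster access point; the node hostname below is a hypothetical placeholder, and the real string should be copied from the node's Connection Details:

    clickhouse-client -h <node-hostname>.your-cluster.altinity.cloud --port 9440 -s --user=admin --password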

Node Dashboard

From the Node Dashboard Page users can:

Node Dashboard
Figure 4 - Node Dashboard.

(A) Manage Nodes

  • Actions menu
  • Tables and structure with Explore
  • Logs

(B) Node Health

  • Online/Offline status
  • health checks passed status

(C) Metrics

  • View a node’s metrics, summary details, and its Schema.

Node Metrics

Node Metrics provides a breakdown of the node’s performance, such as CPU data, active threads, etc.

Node Schema

The Node Schema provides a view of the databases’ schema and tables.

More Information
For more information on how to interact with a Node by submitting queries, viewing the schema of its databases and tables, and viewing process, see the Cluster Explore Guide.

1.3.6.2 - Cluster Actions

Actions that can be taken on launched clusters.

Launched clusters can have different actions applied to them based on your needs.

1.3.6.2.1 - Upgrade Cluster

How to upgrade an existing cluster.

Clusters can be upgraded to versions of ClickHouse other than the one your cluster is running.

When upgrading to a ClickHouse Altinity Stable Build, review the release notes for the version that you are upgrading to.

How to Upgrade an Altinity Cloud Cluster

To upgrade a launched cluster:

  1. Select Actions for the cluster to upgrade.

  2. Select Upgrade.

  3. Select the ClickHouse version to upgrade to.

  4. Select Upgrade to start the process.

    Cluster Upgrade

The upgrade process completion time varies with the size of the cluster, as each server is upgraded individually. This may cause downtime while the cluster is upgraded.
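
After the upgrade finishes, you can confirm the running server version from any client. For example, reusing the non-interactive query style shown in the client guides (the hostname is the example cluster used throughout this documentation):

    clickhouse-client -h example-cluster.your-cluster.altinity.cloud --port 9440 -s --user=admin --password -q 'SELECT version()'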

1.3.6.2.2 - Rescale Cluster

How to rescale an existing cluster.

The size and structure of the cluster may need to be altered after launching based on your organization’s needs. The following settings can be rescaled:

  • Number of Shards
  • Number of Replicas
  • Node Type
  • Node Storage
  • Number of Volumes
  • Apply to new nodes only: This setting will only affect nodes created from this point forward.

See Cluster Settings for more information.

How to Rescale a Cluster

To rescale a cluster:

  1. Select Actions for the cluster to rescale.

  2. Select Rescale.

  3. Set the new values of the cluster.

  4. Click OK to begin rescaling.

    Cluster Rescale

Depending on the size of the cluster, this may take several minutes.

1.3.6.2.3 - Stop and Start a Cluster

How to stop or start an existing cluster.

To stop a launched cluster, or start a stopped cluster:

  1. From either the Clusters View or the Cluster Details Page, select Actions.
    1. If the cluster is currently running, select Stop to halt its operations.
    2. If the cluster has been stopped, select Start to restart it.

Depending on the size of your cluster, it may take a few minutes until it is fully stopped or restarted. To check the health and availability of the cluster, see Cluster Health or Cluster Availability.

1.3.6.2.4 - Export Cluster Settings

How to export a cluster’s settings.

The structure of an Altinity Cloud cluster can be exported as JSON. For details on the cluster’s settings that are exported, see Cluster Settings.

To export a cluster’s settings to JSON:

  1. From either the Clusters View or the Cluster Details Page, select Actions, then select Export.
  2. A new browser window will open with the settings for the cluster in JSON.

1.3.6.2.5 - Replicate a Cluster

How to replicate an existing cluster.

Clusters can be replicated with the same or different settings. The replica can include the same database schema as the source cluster, or be launched without the schema. This may be useful to create a test cluster, then launch the production cluster with different settings ready for production data.

For complete details on Altinity.Cloud clusters settings, see Cluster Settings.

To create a replica of an existing cluster:

  1. From either the Clusters View or the Cluster Details Page, select Actions, then select Launch a Replica Cluster.
  2. Enter the desired values for Resources Configuration.
    1. To replicate the schema of the source cluster, select Replicate Schema.

      Replicate Schema
    2. Click Next to continue.

  3. Enter the desired values for High Availability Configuration and Connection Configuration.
    1. Each section must be completed in its entirety before moving on to the next one.
  4. In the module Review & Launch, verify the settings are correct. When finished, select Launch.

Depending on the size of the new cluster it will be available within a few minutes. To verify the health and availability of the new cluster, see Cluster Health or the Cluster Availability.

1.3.6.2.6 - Destroy Cluster

How to destroy an existing cluster.

When a cluster is no longer required, the entire cluster and all of its data can be destroyed.

  • IMPORTANT NOTE: Once destroyed, a cluster can not be recovered. It must be manually recreated.

To destroy a cluster:

  1. From either the Clusters View or the Cluster Details Page, select Actions, then select Destroy.

  2. Enter the cluster name, then select OK to confirm its deletion.

    Destroy Cluster

1.3.6.3 - Cluster Settings

Settings and values used for Altinity.Cloud ClickHouse Clusters.

ClickHouse Clusters hosted on Altinity.Cloud have the following structural attributes. These determine options such as the version of ClickHouse installed on them, how many replicas, and other important features.

Name Description Values
Cluster Name The name for this cluster. It will be used for the hostname of the cluster. Cluster names must be DNS compliant. This includes:
  • Alphanumeric characters and underscores only
  • No special characters such as periods, ?, #, etc.
    Example:
    • Good: mycluster
    • Bad: my.cluster?
  • Can not start with a number.
Node Type Determines the number of CPUs and the amount of RAM used per node. The following Node Types are sample values, and may be updated at any time:
  • m5.large: CPU x2, RAM 6.5 GB
  • m5.xlarge: CPU x4, RAM 14 GB
  • m5.2xlarge: CPU x8, RAM 29 GB
  • m5.4xlarge: CPU x16, RAM 58 GB
  • m5.8xlarge: CPU x32, RAM 120 GB
Node Storage The amount of storage space available to each node, in GB.  
Number of Volumes Storage can be split across multiple volumes. The amount of data stored per node is the same as set in Node Storage, but it is split into multiple volumes.
Separating storage into multiple volumes can increase query performance.
 
Volume Type Defines the Amazon Web Services volume class. Typically used to determine whether or not the volume is encrypted. Values:
  • gp2 (Not Encrypted)
  • gp2-encrypted (encrypted)
Number of Shards Shards represent a set of nodes. Shards can be replicated to provide increased availability and computational power.  
ClickHouse Version The version of the ClickHouse database that will be used on each node.
To run a custom ClickHouse container version, specify the Docker image to use.
  • IMPORTANT NOTE: The nodes in the cluster will all be running the same version of ClickHouse. If you want to run multiple versions of ClickHouse, they will have to be set on different clusters.
Currently available options:
  • Altinity Stable:
    • 19.11.12.69
    • 19.16.19.85
    • 20.3.21.2
    • 20.8.7.15
  • Standard Release
    • 20.10.5.10
    • 20.11.4.13
  • Custom Identifier
ClickHouse Admin Name The name of the ClickHouse administrative user. Set to admin by default. Can not be changed.
ClickHouse Admin Password The password for the ClickHouse administrative user.  
Data Replication Toggles whether shards will be replicated. When enabled, Zookeeper is required to manage the shard replication process. Values:
  • Enabled (Default): Each Cluster Shard will be replicated to the value set in Number of Replicas.
  • Disabled: Shards will not be replicated.
Number of Replicas Sets the number of replicas per shard. Only enabled if Data Replication is enabled.  
Zookeeper Configuration When Data Replication is set to Enabled, Zookeeper is required. This setting determines how Zookeeper will run and manage shard replication.
The Zookeeper Configuration mainly sets how many Zookeeper nodes are used to manage the shards. More Zookeeper nodes increases the availability of the cluster.
Values:
  • Single Node (Default): Replication is managed by one Zookeeper node.
  • Three Nodes: Increases the Zookeeper nodes to an ensemble of 3.
Zookeeper Node Type Determines the type of Zookeeper node. Defaults to default and can not be changed.
Node Placement Sets how nodes are distributed via Kubernetes. Depending on your situation and how robust you want your replicas and clusters. Values:
  • Separate Nodes (Default): ClickHouse containers are distributed across separate cluster nodes.
  • Separate Shards: ClickHouse containers for different shards are distributed across separate cluster nodes.
  • Separate Replicas: ClickHouse containers for different replicas are distributed across separate cluster nodes.
Enable Backups Backs up the cluster. These can be restored in the event data loss or to roll back to previous versions. Values:
  • Enabled (Default): The cluster will be backed up automatically.
  • Disabled: Automatic Backups are disabled.
Backup Schedule Determines how often the cluster will be backed up. Defaults to Daily
Number of Backups to keep Sets how many backups will be stored before deleting the oldest one. Defaults to 5.
Endpoint The Access point Domain Name. This is hard set by the name of your cluster and your organization.
Use TLS Sets whether external communications with the cluster are encrypted with TLS. Defaults to Enabled and can not be changed.
Load Balancer Type The load balancer manages communications between the various nodes to ensure that nodes are not overwhelmed. Defaults to Altinity Edge Ingress
Protocols Sets the TCP ports used in external communications with the cluster. Defaults to ClickHouse TCP port 9440 and HTTP port 8443.
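
As an illustration of how those default ports are used from a client machine (a sketch only; the hostname follows the example cluster used elsewhere in this documentation, and the real endpoint comes from your cluster's Connection Details):

    # Native TCP protocol over TLS (port 9440)
    clickhouse-client -h example-cluster.your-cluster.altinity.cloud --port 9440 -s --user=admin --password

    # HTTPS interface (port 8443); the query is passed as a URL parameter and curl prompts for the password
    curl -sS --user admin 'https://example-cluster.your-cluster.altinity.cloud:8443/?query=SELECT%20version()'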

1.3.6.4 - Configure Cluster

How to configure launched clusters.

Once a cluster has been launched, its configuration can be updated to best match your needs.

1.3.6.4.1 - How to Configure Cluster Settings

How to update the cluster’s settings.

Cluster settings can be updated from the Clusters View or from the Cluster Details by selecting Configure > Settings.

  • IMPORTANT NOTE: Changing a cluster’s settings will require a restart of the entire cluster.

Note that some settings are locked - their values can not be changed from this screen.

Cluster Settings

How to Set Troubleshooting Mode

Troubleshooting mode prevents your cluster from auto-starting after a crash. To update this setting:

  1. Toggle Troubleshooting Mode either On or Off.

How to Edit an Existing Setting

To edit an existing setting:

  1. Select the menu on the left side of the setting to update.
  2. Select Edit.
  3. Set the following:
    1. Setting Type.
    2. Name
    3. Value
  4. Select OK to save the setting.
Edit Cluster Setting

How to Add a New Setting

To add a new setting to your cluster:

  1. Select Add Setting.
  2. Set the following:
    1. Setting Type.
    2. Name
    3. Value
  3. Select OK to save the setting.
Add a New Cluster Setting

How to Delete an Existing Setting

To delete an existing setting:

  1. Select the menu on the left side of the setting to delete.
  2. Select Remove.
  3. Select OK to confirm removing the setting.
Delete a Cluster Setting

1.3.6.4.2 - How to Configure Cluster Profiles

How to update the cluster’s profiles.

Cluster profiles allow you to set the user permissions and settings based on their assigned profile.

The Cluster Profiles can be accessed from the Clusters View or from the Cluster Details by selecting Configure > Profiles.

Cluster Profile Settings

Add a New Profile

To add a new cluster profile:

  1. From the Cluster Profile View page, select Add Profile.
  2. Provide profile’s Name and Description, then click OK.

Edit an Existing Profile

To edit an existing profile:

  1. Select the menu to the left of the profile to update and select Edit, or select Edit Settings.
  2. To add a profile setting, select Add Setting and enter the Name and Value, then click OK to store your setting value.
  3. To edit an existing setting, select the menu to the left of the setting to update. Update the Name and Value, then click OK to store the new value.

Delete an Existing Profile

To delete an existing profile:

  1. Select the menu to the left of the profile to update and select Delete.
  2. Select OK to confirm the profile deletion.

1.3.6.4.3 - How to Configure Cluster Users

How to update the cluster’s users.

The cluster’s Users allow you to set one or more entities who can access your cluster, based on their Cluster Profile.

Cluster users can be updated from the Clusters View or from the Cluster Details by selecting Configure > Users.

Cluster Users

How to Add a New User

To add a new user to your cluster:

  1. Select Add User

  2. Enter the following:

    Add New User
    1. Login: the name of the new user.
    2. Password and Confirm Password: the authenticating credentials for the user.
    3. Networks: The networks that the user can connect from. Leave as 0.0.0.0/0 to allow access from all networks.
    4. Databases: Which databases the user can connect to. Leave empty to allow access to all databases.
    5. Profile: Which profile settings to apply to this user.
  3. Select OK to save the new user.

How to Edit a User

To edit an existing user:

  1. Select the menu to the left of the user to edit, then select Edit.
  2. Enter the following:
    1. Login: the new name of the user.
    2. Password and Confirm Password: the authenticating credentials for the user.
    3. Networks: The networks that the user can connect from. Leave as 0.0.0.0/0 to allow access from all networks.
    4. Databases: Which databases the user can connect to. Leave empty to allow access to all databases.
    5. Profile: Which profile settings to apply to this user.
  3. Select OK to save the updated user.

How to Delete a User

  1. Select the menu to the left of the user to edit, then select Delete.
  2. Select OK to verify the user deletion.

1.3.6.5 - Launch New Cluster

How to launch a new ClickHouse Cluster from Altinity.Cloud.

Launching a new ClickHouse Cluster is incredibly easy, and only takes a few minutes. For those looking to create their first ClickHouse cluster with the minimal steps, see the Quick Start Guide. For complete details on Altinity.Cloud clusters settings, see Cluster Settings.

To launch a new ClickHouse cluster:

  1. From the Clusters View page, select Launch Cluster. This starts the Cluster Launch Wizard.

    Launch New Cluster
  2. Enter the desired values for Resources Configuration, High Availability Configuration, and Connection Configuration.

    1. Each section must be completed in its entirety before moving on to the next one.
  3. In the module Review & Launch, verify the settings are correct. When finished, select Launch.

Within a few minutes, the new cluster will be ready for your use and display that all health checks have been passed.

1.3.6.6 - Cluster Alerts

How to be notified about cluster issues

The Cluster Alerts module allows users to set up when they are notified for a set of events. Alerts can either be a popup, displaying the alert when the user is logged into Altinity.Cloud, or an email, so they can receive an alert even when they are not logged into Altinity.Cloud.

To set which alerts you receive:

  1. From the Clusters view, select the cluster to configure alerts for.

  2. Select Alerts.

    Cluster Alerts
  3. Add the Email address to send alerts to.

  4. Select whether to receive a Popup or Email alert for the following events:

    1. ClickHouse Version Upgrade: Alert triggered when the version of ClickHouse that is installed in the cluster has a new update.
    2. Cluster Rescale: Alert triggered when the cluster is rescaled, such as new shards added.
    3. Cluster Stop: Alert triggered when some event has caused the cluster to stop running.
    4. Cluster Resume: Alert triggered when a cluster that was stopped has resumed operations.

1.3.6.7 - Cluster Health Check

How to quickly check your cluster’s health.

From the Clusters View, you can see the health status of your cluster and its nodes at a glance.

How to Check Node Health

The quick health check of your cluster’s nodes is displayed from the Clusters View. Next to the cluster name is a summary of your nodes’ statuses, indicating the total number of nodes and how many nodes are available.

View the Access Point

How to Check Cluster Health

The overall health of the cluster is shown in the Health row of the cluster summary, showing the number of health checks passed.

View the Access Point

Click the checks passed link to view the details of the cluster’s health.

How to View a Cluster’s Health Checks

The cluster’s Health Check module displays the status of the following health checks:

  • Access point availability check
  • Distributed query check
  • Zookeeper availability check
  • Zookeeper contents check
  • Readonly replica check
  • Delayed inserts check

To view details on what queries are used to verify the health check, select the caret for each health check.

Cluster Health Details
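
The queries shown in that detail view are the authoritative ones. Purely as an illustrative sketch, a read-only replica condition can also be spot-checked from any client against the system.replicas table (hostname as in the earlier connection examples):

    clickhouse-client -h example-cluster.your-cluster.altinity.cloud --port 9440 -s --user=admin --password -q 'SELECT count() FROM system.replicas WHERE is_readonly'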

1.3.6.8 - Cluster Monitoring

How to monitor your clusters performance.

Altinity.Cloud integrates Grafana into its monitoring tools. From a cluster, you can quickly access the following monitoring views:

  • Cluster Metrics
  • Queries
  • Logs

How to Access Cluster Metrics

To access the metrics views for your cluster:

  1. From the Clusters view, select the cluster to monitor.
  2. From Monitoring, select the drop down View in Grafana and select from one of the following options:
    1. Cluster Metrics
    2. Queries
    3. Logs
  3. Each metric view opens in a separate tab.

Cluster Metrics

Cluster Metrics displays how the cluster is performing from a hardware and connection standpoint.

Cluster Monitoring View

Some of the metrics displayed here include:

  • DNS and Distributed Connection Errors: Displays the rate of any connection issues.
  • Select Queries: The number of select queries submitted to the cluster.
  • Zookeeper Transactions: The communications between the zookeeper nodes.
  • ClickHouse Data Size on Disk: The total amount of data the ClickHouse database is using.

Queries

The Queries monitoring page displays the performance of clusters, including the top requests, queries that require the most memory, and other benchmarks. This can be useful in identifying queries that can cause performance issues and refactoring them to be more efficient.

Query Monitoring View
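
If you prefer the command line, a similar check can be run directly against ClickHouse’s system.query_log table. This is an illustrative sketch only (hostname as in the earlier examples), not the query used by the Grafana dashboards:

    clickhouse-client -h example-cluster.your-cluster.altinity.cloud --port 9440 -s --user=admin --password \
      -q "SELECT query, formatReadableSize(memory_usage) AS memory FROM system.query_log WHERE type = 'QueryFinish' ORDER BY memory_usage DESC LIMIT 5"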

Log Metrics

The Log monitoring page displays the logs for your clusters, and allows you to make queries directly on them. If there’s a specific detail you’re trying to iron out, the logs are the most granular way of tracking down those issues.

Log Monitoring View

1.3.6.9 - Cluster Logs

How to access your cluster’s logs

Altinity.Cloud provides the cluster log details so users can track down specific issues or performance bottlenecks.

To access a cluster’s logs:

  1. From the Clusters view, select the cluster to view logs for.
  2. Select Logs.
  3. From the Log Page, you can set the number of rows to display, or filter the logs by specific text.
  4. To download the logs, select the download icon in the upper right corner (A).
  5. To refresh the logs page, select the refresh icon (B).
Cluster Logs Page

The following logs are available:

  • ACM Logs: These logs are specific to Altinity.Cloud issues and include the following:
    • System Log: Details the system actions such as starting a cluster, updating endpoints, and other details.
    • API Log: Displays updates to the API and activities.
  • ClickHouse Logs: Displays the Common Log that stores ClickHouse related events. From this view a specific host can be selected from the dropdown box.
  • Backup Logs: Displays backup events from the clickhouse-backup service. Log details per cluster host can be selected from the dropdown box.
  • Operator Logs: Displays logs from the Altinity Kubernetes Operator service, which is used to manage cluster replication and communications in the Kubernetes environment.

1.3.7 - Notifications

Notifications critical to your Altinity.Cloud account.

Notifications allow you to see any messages related to your Altinity.Cloud account. For example: billing, service issues, etc.

To access your notifications:

  1. From the upper right corner of the top navigation bar, select your user ID, then Notifications.

    Access notifications

Notifications History

The Notifications History page shows the notifications for your account, including the following:

  • Message: The notifications message.
  • Level: The priority level which can be:
    • Danger: Critical notifications that can affect your clusters or account.
    • Warning: Notifications of possible issues that are less than critical.
    • News: Notifications of general news and updates in Altinity.Cloud.
    • Info: Updates for general information.

1.3.8 - Billing

Managing billing for Altinity.Cloud.

Accounts with the role orgadmin are able to access the Billing page for their organizations.

To access the Billing page:

  1. Login to Altinity.Cloud with an account granted the orgadmin role.
  2. From the upper right hand corner, select the Account icon, and select Billing.
Access Billing

From the billing page, the following Usage Summary and the Billing Summary are available for the environments connected to the account.

Billing page

Usage Summary

The Usage Summary displays the following:

  • Current Period: The current billing month displaying the following:
    • Current Spend: The current total value of charges for Altinity.Cloud services.
    • Avg. Daily Spend: The average cost of Altinity.Cloud services per day.
    • Est. Monthly Bill: The total estimated value for the current period, based on the Current Spend if usage continues at the current rate.
  • Usage for Period: Select the billing period to display.
  • Environment: Select the environment or All environments to display billing costs for. Each environment, its usage, and cost will be displayed with the total cost.

Billing Summary

The Billing Summary section displays the payment method, service address, and email address used for billing purposes. Each of these settings can be changed as required.

1.3.9 - System Status

View the status of Altinity.Cloud services.

The System Status page provides a quick view of whether the Altinity.Cloud services are currently up or down. This provides a quick glance to help devops staff determine where any issues may be when communicating with their Altinity.Cloud clusters.

To access the System Status page:

  1. Login to your Altinity.Cloud account.

  2. From the upper right hand corner, select the Account icon, and select System Status.

    Access user account

System Status Page

The System Status page displays the status of the Altinity.Cloud services. To send a message to Altinity.Cloud support representatives, select Get in touch.

From the page the following information is displayed:

Altinity.Cloud system status page

This sample is from a staging environment and cluster that was stopped and started to demonstrate how the uptime tracking system works.

  • Whether all Altinity.Cloud services are online or if there are any issues.
  • The status of services by product, with the uptime of the last 60 days shown as either green (the service was fully available that day), or red (the service suffered an issue). Hovering over a red bar will display how long the service was unavailable for the following services:
    • ClickHouse clusters
    • Ingress
    • Management Console

Enter your email at the bottom of the page in the section marked Subscribe to status updates to receive notifications via email regarding any issues with Altinity.Cloud services.

1.3.10 - Uptime Schedule Settings

How to choose and set different cluster uptime schedules.

26 January 2023 · Read time 4 min

Overview

The Uptime Schedule settings are provided for non-critical servers that do not need to be running continuously.
For non-running servers, Altinity.Cloud does not bill you for compute resources or support.

  • Note that this cost-saving does not apply to storage and backups.

Available uptime schedules covered in this section include:

  • ALWAYS ON
  • STOP WHEN INACTIVE
  • ON SCHEDULE

The Schedule (clock) icon indicates if a schedule has been set, and serves as a shortcut to quickly open the Uptime Schedule settings window. Other locations include:

  • On the Altinity Cloud Manager cluster view, beside each cluster name (see Figure 1, item A)
  • Within the CONFIGURE > Uptime Schedule menu (see Figure 1, item B)
  • On the Dashboard tab of the cluster detail page.

WARNING: Do not use schedules on production clusters that must operate continuously.


UI path

From the Altinity Cloud Manager dashboard page, use the menu Configure > Uptime Schedule to display settings for your cluster.

  • (A) The Schedule Icon appears if STOP WHEN INACTIVE or ON SCHEDULE is set.
  • (B) The CONFIGURE menu is how you get to the Uptime Schedule settings.
  • (C) The Uptime Schedule settings are where you choose the uptime settings for your cluster and CONFIRM to save.

Figure 1 – Uptime Schedule located in the Configure menu. A clock icon shows in the cluster dashboard if a schedule is set.


ALWAYS ON

Purpose

For mission-critical ClickHouse servers that must run 24/7.


Settings

There are no adjustable settings.


UI text

Altinity.Cloud shall not trigger any Stop or Resume operations on this Cluster automatically*


Figure 2 ALWAYS ON Uptime Schedule setting.


Usage

To select ALWAYS ON from your cluster’s CONFIGURE > Uptime Schedule menu:

  1. Select ALWAYS ON.
  2. Select CONFIRM to save.
  3. Use CANCEL to close without saving.

Result

  • Cluster Status shows nodes online, as shown in the following screenshot.

Figure 3 Cluster list view shows green nodes online.



STOP WHEN INACTIVE

Purpose

Used to turn off non-critical servers, such as development environments that do not need to run continuously, after a set number of hours of inactivity. For non-running servers, Altinity.Cloud does not bill you for compute resources or support.


Settings

Hours of inactivity

Unit: Hours
Example: 48


UI text

The cluster will be stopped automatically when there is no activity for a given amount of hours.


Figure 4 STOP WHEN INACTIVE Uptime Schedule setting.


Usage

To set the hours after which your cluster becomes inactive, from your cluster’s CONFIGURE > Uptime Schedule menu:

  1. Select STOP WHEN INACTIVE.
  2. Adjust the Hours of inactivity integer value with the up or down arrows, or enter a number. (Example: 2)
  3. Select CONFIRM to save.
  4. Use CANCEL to close without saving.

Result

  • In your cluster dashboard list view, a clock icon appears beside your cluster name (Example: cluster-example).


ON SCHEDULE

Purpose

Sets the daily To and From times (GMT format) that your cluster servers will be allowed to operate on a weekly schedule.


Settings

  • Monday
  • Tuesday
  • Wednesday
  • Thursday
  • Friday
  • Saturday
  • Sunday
  • 12:59 AM / PM (From and To) times
  • Active (yes | no)
  • All Day (yes | no)

Setting example

  • Monday Active (yes) All Day (yes)
  • Tuesday Active (yes) All Day (no) From 8:00 PM To 5:00 PM
  • Wednesday Active (yes) All Day (yes)
  • Thursday Active (yes) All Day (no) From 8:00 PM To 5:01 PM
  • Friday Active (yes) All Day (yes)
  • Saturday Active (no)
  • Sunday Active (no)

The following cluster-example schedule sets Tuesday and Thursday for part-day operation from 8:00 PM to 5:00 PM; Monday, Wednesday, and Friday for All Day operation; and Saturday and Sunday off.


Figure 5 ON SCHEDULE Uptime Schedule setting.


Usage

To set a schedule for a cluster to run, from your cluster’s CONFIGURE > Uptime Schedule menu:

  1. Select ON SCHEDULE.
  2. Select Active green (right) for on, grey (left) for off.
  3. Select All Day green (right) for on, grey (left) for off.
  4. Enter the From time (GMT HH:MM AM/PM) you want the cluster to be active.
  5. Enter the To stop time (GMT HH:MM AM/PM) you want the cluster to be off.
  6. Select CONFIRM to save.
  7. Use CANCEL to close without saving.

Result

  • In your cluster dashboard list view, a clock icon appears beside your cluster name (Example: cluster-example).

1.4 - Administrator Guide

How to manage Altinity.Cloud.

Altinity.Cloud allows administrators to manage clusters, users, and keep control of their ClickHouse environments with a few clicks. Monitoring tools are provided so you can keep track of everything in your environment to keep on top of your business.

1.4.1 - Access Control

How to control access to your organizations, environments, and clusters.

Altinity.Cloud provides role based access control. Depending on the role granted to an Altinity.Cloud account, users can assign roles to other Altinity.Cloud accounts and grant permissions to access organizations, environments, or clusters.

1.4.1.1 - Role Based Access and Security Tiers

Altinity.Cloud hierarchy and role based access.

Access to ClickHouse data hosted in Altinity.Cloud is controlled through a combination of security tiers and account roles. This allows companies to tailor access to data in a way that maximizes security while still allowing ease of access.

Security Tiers

Altinity.Cloud groups sets of clusters together in ways that allows companies to provide Accounts access only to the clusters or groups of clusters that they need to.

Altinity.Cloud groups clusters into the following security related tiers:

Security Tiers
  • Nodes: The most basic level - an individual ClickHouse database and tables.
  • Clusters: These contain one or more nodes and provide ClickHouse database access.
  • Environments: Environments contain one or more clusters.
  • Organizations: Organizations contain one or more environments.

Account access is controlled by assigning each account a single role and the security tiers appropriate to that role. Depending on its role, a single account can be assigned to multiple organizations, to environments, to multiple clusters in an environment, or to a single cluster.

Account Roles

The actions that can be taken by Altinity.Cloud accounts are based on the role they are assigned. The following table details each role and its actions at the Environment and Cluster tiers:

Role Environment Cluster
orgadmin Create, Edit, and Delete environments that they create, or are assigned to, within the assigned organizations.
Administrate Accounts associated with environments they are assigned to.
Create, Edit, and Delete clusters within environments they create or are assigned to in the organization.
envadmin Access assigned environments. Create, Edit, and Delete clusters within environments they are assigned to in the organization.
envuser Access assigned environments. Access one or more clusters the account is specifically assigned to.

The account roles are tied into the security tiers, and allow an account to access multiple environment and clusters depending on what type of tier they are assigned to.

For example, we may have the following situation:

  • Accounts peter, paul, mary, and jessica are all members of the organization HappyDragon.
  • HappyDragon has the following environments: HappyDragon_Dev and HappyDragon_Prod, each with the clusters marketing, sales, and ops.

The accounts are assigned the following roles and security tiers:

Account Role Organization Environments Clusters
mary orgadmin HappyDragon HappyDragon_Prod *
peter envadmin HappyDragon HappyDragon_Dev *
jessica envadmin HappyDragon HappyDragon_Prod, HappyDragon_Dev *
paul envuser HappyDragon HappyDragon_Prod marketing

In this scenario, mary has the ability to access the environment HappyDragon_Prod, or can create new environments and manage them and any clusters within them. However, she is not able to edit or access HappyDragon_Dev or any of its clusters.

  • Both peter and jessica have the ability to create and remove clusters within their assigned environments.
    • peter is able to modify the clusters in the environment HappyDragon_Dev.
    • jessica can modify clusters in both environments.
  • paul can only access the cluster marketing in the environment HappyDragon_Prod.

1.4.1.2 - Account Management

How to manage Altinity.Cloud accounts.

Altinity.Cloud accounts with the role orgadmin are able to create new Altinity.Cloud accounts and associate them with organizations, environments, and one or more clusters depending on their role. For more information on roles, see Role Based Access and Security Tiers.

Account Page

The Account Page displays all accounts assigned to the same Organization and Environments as the logged in account.

For example: the accounts mario, luigi, peach, and todd are members of the organizations MushroomFactory and BeanFactory as follows:

Account Role Organization: MushroomFactory Organization: BeanFactory
peach orgadmin *  
mario orgadmin   *
luigi envuser   *
todd envuser *  
  • peach will be able to see their account and todd in the Account Page, while accounts mario and luigi will be hidden from them.
  • mario will be able to see their account and luigi.

Access Accounts

To access the accounts that are assigned to the same Organizations and Environments as the logged in user with the account role orgadmin:

  1. Login to Altinity.Cloud with an account granted the orgadmin role.
  2. From the left navigation panel, select Accounts.
  3. All accounts that are in the same Organizations and Environments as the logged in account will be displayed.

Account Details

Accounts have the following details that can be set by an account with the orgadmin role:

  1. Common Information:
    1. Name: The name of the account.
    2. Email (Required): The email address of the account. This will be used to login, reset passwords, notifications, and other uses. This must be a working email for these functions to work.
    3. Password: The password for the account. Once a user has authenticated to the account, they can change their password.
    4. Confirm Password: Confirm the password for the account.
    5. Role (Required): The role assigned to the account. For more information on roles, see Role Based Access and Security Tiers.
    6. Organization: The organization assigned to the account. Note that the orgadmin can only assign accounts the same organizations that the orgadmin account also belongs to.
    7. Suspended: When enabled, this prevents the account from logging into Altinity.Cloud.
  2. Environment Access:
    1. Select the environments that the account will require access to. Note that the orgadmin can only assign accounts the same environments that the orgadmin account also belongs to.
  3. Cluster Access:
    1. This is only visible if the Role is set to envuser. This allows one or more clusters in the environments the new account was assigned to in Environmental Access to be accessed by them.
  4. API Access:
    1. Allows the new account to make API calls from the listed domain names.

Account Actions

Create Account

orgadmin accounts can create new accounts and assign them to the same organization and environments they are assigned to. For example, continuing the scenario from above, if account peach is assigned to the organization MushroomFactory and the environments MushroomProd and MushroomDev, they can assign new accounts to the organization MushroomFactory, and to the environments MushroomProd or MushroomDev or both.

To create a new account:

  1. Login to Altinity.Cloud with an account granted the orgadmin role.

  2. From the left navigation panel, select Accounts.

  3. Select Add Account.

  4. Set the fields as listed in the Account Details section.

    New User Settings
  5. Once all settings are completed, select Save. The account will be able to login with the username and password, or if their email address is registered through Google, Auth0.

Edit Account

  1. Login to Altinity.Cloud with an account granted the orgadmin role.
  2. From the left navigation panel, select Accounts.
  3. From the left hand side of the Accounts table, select the menu icon for the account to update and select Edit.
  4. Update the fields as listed in the Account Details section.
  5. When finished, select Save.

Suspend Account

Instead of deleting an account, setting it to Suspended may be preferable in order to preserve the account’s name and other settings. A suspended account is unable to log in to Altinity.Cloud. This includes logging in directly through a browser and making API calls under the account.

To suspend or activate an account:

  1. Login to Altinity.Cloud with an account granted the orgadmin role.
  2. From the left navigation panel, select Accounts.
  3. From the left hand side of the Accounts table, select the menu icon for the account to update and select Edit.
    1. To suspend an account, toggle Suspended to on.
    2. To activate a suspended account, toggle Suspended to off.
  4. When finished, select Save.

Delete Account

Accounts can be deleted, which removes all information on the account. Clusters and environments created by the account will remain.

To delete an existing account:

  1. Login to Altinity.Cloud with an account granted the orgadmin role.
  2. From the left navigation panel, select Accounts.
  3. From the left hand side of the Accounts table, select the menu icon for the account to update and select Delete.
  4. Verify the account is to be deleted by selecting OK.

1.4.1.3 - Integrating Okta into the Altinity.Cloud login page

How to set up Okta integration with Auth0 in Altinity.Cloud

10 March 2023 · Read time 3 min

Overview - Okta Integration

Altinity uses Auth0 so that customers who are already logged into other identity providers such as Google or Okta are automatically granted access to Altinity.Cloud.

The following diagram shows the Altinity login process using Auth0, plus adding Okta as discussed on this page.

  1. Logging in to Altinity.Cloud using a Login Email and Password.
  2. The Auth0 login link to use a 3rd party authenticator such as Google or Okta. (See Okta/Auth0 Altinity Integration)
  3. Using Okta allows previously authorized logged-in employees to gain immediate access to Altinity.Cloud. (See Okta Customer Configuration)
Launch Cluster wizard screens

Figure 1 – Altinity integration of an Okta customer to Auth0.



Setting up the Auth0 Connection

These steps are for Altinity customers to configure their login integration with Okta.

  1. Go to Auth0 Dashboard 》Authentication 》Enterprise.
  2. Click Create (➕ plus icon) located next to OpenID Connect.
  3. Provide a name.
  4. Copy the customer-provided Okta domain to Issuer URL.
  5. Copy the customer-provided Client ID to Client ID.
  6. Click Create.

If you closed the page, select Dashboard 》Applications 》<application name> to view those settings.

Contact Altinity Support

Contact Altinity to add the customer’s Okta domain and Client ID to the Altinity.Cloud access list.
Please provide the following:

  • The domain you want to sign in on the Okta side
  • The Issuer URL
  • Client ID


Okta/Auth0 Altinity Integration

These steps are for Altinity technical support to add an Okta connection to Auth0.


Setting up the Auth0 connection

  1. Go to Auth0 Dashboard -> Authentication -> Enterprise.
  2. Click Create (plus icon) next to OpenID Connect.
  3. Provide a name.
  4. Copy the Okta domain provided by a customer to Issuer URL.
  5. Copy the Client ID provided by a customer to the Client ID.
  6. Click Create.

Enabling the connection

  1. Go to Auth0 Dashboard -> Applications.
  2. Click the application you wish to use with the connection.
  3. Go to the Connections tab, find your newly created connection, and switch it on.

Testing the connection

  1. Go to Auth0 Dashboard -> Authentication -> Enterprise.
  2. Click OpenID Connect (not the plus sign, the text).
  3. Find the newly created connection.
  4. Click the three dots on the right -> Try.
    • You should be greeted with an Okta password prompt, or if there is a problem, an error is shown.

Enabling the button

  1. Go to Auth0 Dashboard -> Authentication -> Enterprise.
  2. Click OpenID Connect (not the plus sign, the text).
  3. Find the newly created connection and click its name.
  4. Go to the Login Experience tab.
  5. Check the Connection button -> Display connection as a button.
  6. Configure the Button display name and logo URL.
  7. Click Save.

Testing

  1. Go to the https://acm.altinity.cloud login page.
  2. Click Sign in with Auth0.
  3. A button for the new connection should be shown.
  4. Upon clicking the button, it should either ask for Okta credentials or log straight into the app.


Altinity blog post

The following Altinity blog post provides an in-depth discussion of adding Okta as an identity provider.

1.4.2 - Altinity Backup Solutions

This section covers Altinity backup solutions.

14 March 2023 · Read time 1 min

Overview

This section covers backup and restore options provided by the Altinity Cloud Manager.

Backups

Restoring Data

Contact Altinity Support

1.4.3 - How to use Altinity Backup and the Restore Wizard

This section covers how to use the Altinity Restore Wizard.

14 March 2023 · Read time 5 min

Overview

This section covers the Altinity Cloud Manager Cluster Restore Wizard, which is available from your cluster’s ACTIONS > Restore a Backup menu. This guide walks you through the steps to restore a cluster from an Altinity backup.

Create Backup

In addition to scheduled backups, an ad hoc backup feature is provided to create an on-demand snapshot. Backups created manually from ACTIONS 》Create Backup each add a Tag with a timestamp.

  • Note that for ad hoc backups, there are no configuration options. A dialog box (Cluster Backup) displays to let you know the backup is in progress.
Cluster Backup acknowledgement dialog box

Figure 1 – Making an on-demand backup: the ACTIONS 》Create Backup and the confirmation window.

UI Text

  • Cluster Backup
    The Backup procedure for the Cluster your-backup has been scheduled.
    It will require some time to finish the procedure.
    Note: Backup files are handled by ACM and stored separately from the cluster instances.
    These backup files will remain available even if you accidentally delete the cluster.

Backup Scheduling

To change the backup schedule from the 7 day default, bring up your cluster Environments settings, select the 3-dot menu icon beside your cluster, choose Edit then the Backups tab.

Cluster Backup Scheduling

Figure 2 – Backup Schedule settings from the Environment > Cluster > Edit > Backup tab.

Restore a Backup

Prerequisites

  • Login to the Altinity.Cloud Manager https://acm.altinity.cloud/clusters
  • The backup operator has backup roles and permissions set up in My Account > Common Information > Role and Access Rights
  • Accounts > Manage Roles > Account Role Details > Cluster Settings and NodeTypes are set to (Allow)
  • The backup provider is Altinity or an external cloud provider
  • The Organization list of account Names has been granted the role of allowing backups and restores to the listed Environments

NOTE: New users do not have the rights to perform manual backups and restore functions, so the default view of the Actions menu will not display the backup or restore options.

The following wizard screens are covered on this page:

  1. Backup Location
  2. Source Cluster
  3. Source Backup
  4. Destination Cluster
  5. Restore Summary
  6. Cluster Launch Wizard

The following illustration shows a summary of the various screens available in the Restore Wizard option.

  • The Backup Location may be the Altinity.Cloud Backup or a Remote Backup of your choice of cloud provider.
  • Colored orange, the Cluster Launch Wizard opens after you CONTINUE from the last screen of the Cluster Restore Wizard.
    Restore Wizard summary

Figure 3 – Each of the 5 Restore Wizard screens and the available settings.



Clusters > Actions - Where to find the Restore Wizard

The ACTIONS menu contains the Restore a Backup function that starts the Cluster Restore Wizard within the Altinity Cloud Manager. The following screenshot shows the Altinity Cloud Manager dashboard screen for cluster-a, highlighting points of interest.

  1. The your-environment menu is the parent for your clusters.
  2. The Clusters in the left navigation pane let you know where you are in the ACM.
  3. The cluster-a is the name of the dashboard you are currently viewing.
  4. The Actions menu is where the Backup and Restore functions are found.

NOTE: If your account and roles are not set up, you will not see all of the items in the left-hand pane or all the items in the Actions menu.

Environment

Figure 4 – The dashboard view of cluster-a showing the selected environment and the ACTIONS menu.



ACTIONS menu - Backup and Restore

From your cluster dashboard, the ACTIONS menu lists the Restore a Backup setting.

Backup and Restore options in the cluster Action menu

Figure 5 – The ACTIONS menu shows the Restore a Backup setting.



Running the Cluster Restore Wizard

From the Altinity Cloud Manager cluster screen, select ACTIONS > Restore a Backup to open the Cluster Restore Wizard.

Features of the Cluster Restore Wizard interface:

  • The left side of each Cluster Restore Wizard screen shows the progress of the restore operation.
  • The green bar indicates the section of the wizard you are viewing.
  • The dimmed choices indicate you have not yet set anything in those screens.
  • After progressing through the wizard screens, you can return to a previous screen by clicking on any of the 5 names of the screens shown in the left pane.

Buttons located at the bottom right include:

  • CANCEL - Closes the Wizard without saving any settings.
  • NEXT - Indicates that there is another screen that you will go to.
  • BACK - Returns to the previous screen while retaining any changes you have made.
  • CONTINUE - Appears on the last screen of the wizard. Click to start the Cluster Launch Wizard.
  • The left pane titles are also buttons that allow you to jump directly to any screen without using the BACK button.

1. Backup Location

This screen lets you choose where your backups are located, either from Altinity or from your own cloud provider.

  • Altinity.Cloud Backup
  • Remote Backup (AWS or GCP)

Backup Location Information

  • This Backup Location information screen is used to choose between an Altinity backup or an external cloud vendor.

Altinity.Cloud Backup

To use Altinity as the source of the cluster you want to restore:

  1. Select Altinity.Cloud Backup from the Backup Location Information.
  2. Select an Altinity environment from the Source Environment menu, for example, tenant-a.
  3. Select NEXT to go to the next screen Source Cluster.
Launch Cluster button

Figure 6 – The Backup Location Information screen, selecting the Altinity.Cloud Backup and the Source Environment (for example: tenant-a).

Remote Backup

Use this selection if you are using a third-party cloud provider such as Amazon (AWS) or Google (GCP).

  1. Select Remote Backup, fill in the following fields from steps 2 to 7, then select NEXT.
Launch Cluster button

Figure 7 – The Backup Location Information screen with Remote Backup selected and the remote storage fields displayed.

  2. Select an Altinity-supported cloud provider:

    Cloud Storage Provider


  3. Copy the AWS or GCP Access Key and paste it here:

    Access Key

    • UI Text:
      Storage Access Key
    • Example:
      EXAMPLE_KEY_ID_ABCDEFGH

  4. Copy the AWS or GCP Secret Key and paste it here:

    Secret Key
    • UI Text:
      Storage Secret Key
    • Example:
      EXAMPLE_SECRET_ACCESS_KEY_ID_ABCDEFGH

  5. Enter the Region identifier here:

    Region

  6. Enter the Bucket name:

    Bucket

  7. Select ACM (Altinity Cloud Manager) if you want to retain this folder structure:

    ACM Compatible Folder Structure
    • UI Text:
      Bucket contents was previously created by ACM or has a fully ACM-compatible structure
    • Example:
      (checked)

  8. Select NEXT to go to the next screen Source Cluster.


2. Source Cluster

The Source Cluster Information screen lets you choose one cluster from a list of cluster names. The following column names appear in a scrolling table:

  • Radio button - Marks the cluster you want to use as the source. (Example: selected)
  • Cluster - This is the name of the backed-up cluster. (Example: abc)
  • Namespace - This is the environment name. (Example: tenant-a)
  • Configuration - Checked if the configuration information is included in the backup (Example: checked)

To select a cluster name:

  1. In the Source Cluster Information scrolling table, select the radio button of the Cluster name.

    Source Cluster Information

    • UI Text:
      Note: It is possible to restore a complete Cluster’s configuration only if a given backup contains it.
    • Example:
      abc
  2. Select NEXT to go to the next screen Source Backup.

Launch Cluster wizard screens

Figure 8 – The restore wizard screen 2 of 5 Source Cluster Information showing the selected cluster abc.



3. Source Backup (Backup Tag)

The Backup Information: <environment name>/<cluster name> screen lists the available backups by date. Note that the environment name (example tenant-a) and cluster name (example: abc) were selected in the previous wizard screens.

The following column names appear in a scrolling table:

  • Radio button - Marks the backup Tag name you want to use.
  • Tag - The 14-digit name of the backup, in the format yyyymmddhhmmss (year, month, day, and time). (Example: 20230104201700)
  • Size - This is the size of the backup. (Example: 450 B)
  • Timestamp - The date and time of the backup (Example: 2023-01-04-20:17:00)
  • Configuration - Checked if the configuration information is included in the backup (Example: checked)

To select a backup:

  1. In the screen Backup Information: tenant-a/abc select the radio button by the Tag name.
    Example:
    Tag: 20230104201700

  2. Select NEXT to go to the next screen Destination Cluster.

Cluster Launch Wizard summary

Figure 9 – The restore wizard screen 3 of 5 Backup Information showing a backup Tag selected.



4. Destination Cluster

The Destination Cluster Information screen is where you choose the cluster that the restored data is saved to.

Available options are:

  • Launch a new Cluster
  • Launch a new Cluster using configuration and settings of a source cluster
  • Restore into an existing cluster
    • A new cluster will be launched using a fresh setup (it may be different from the original Cluster configuration)

Launch a new Cluster

Launching a new cluster runs the Cluster Launch Wizard immediately after the Review & Confirm screen appears and you select CONTINUE.

Select Launch a new cluster then NEXT to advance to Restore Summary.

Launch a new Cluster using the configuration and settings of a source cluster

Use this setting if you want to override the backed-up cluster settings such as CPU, RAM, and Storage size to instead use the destination cluster values.

This option also runs the Cluster Launch Wizard immediately after the Review & Confirm screen appears and you select CONTINUE.

Select Launch a new Cluster using configuration and settings of a source cluster, then NEXT to advance to Restore Summary.

Launch a new Cluster

In the Name field, create a new name.

  • Example:
    restored-cluster-backup

Source Cluster

Uses the configuration of the backed-up (source) cluster rather than the settings of the destination cluster.
This is useful when you want to ignore the settings of the existing cluster and instead use the backed-up configuration.

Existing Cluster

This is the opposite of the Source Cluster.
This is useful when you want to use the settings of the existing cluster and not the backed-up configuration.

Cluster Launch Wizard summary

Figure 10 – The restore wizard screen 4 of 5 Destination Cluster Information with Launch a new Cluster selected.



5. Restore Summary

This Review & Confirm screen summarizes the options you selected in the previous screens.
To change a setting, select BACK or the title of the screen in the left-hand pane.
Select CONTINUE to save the settings and begin the restore process.
IMPORTANT: Do not make any changes to the restored cluster until the restore process is complete.

UI Text:
Please review and confirm the selected options:

Cluster Launch Wizard summary

Figure 11 – The last restore wizard screen 5 of 5 Review & Confirm.



Cluster Launch Wizard

The Cluster Launch Wizard starts automatically after selecting CONTINUE from the Cluster Restore Wizard.
From that point, follow the instructions to create a cluster to restore to.

Cluster Launch Wizard summary

Figure 12 – The Cluster Launch Wizard appears after you select CONTINUE from the Cluster Restore Wizard.

1.4.4 - Cluster Explore Guide

How to explore a Cluster through queries, schema and processes

Altinity.Cloud gives users a range of options for working with existing clusters.

For a quick view on how to create a cluster, see the Altinity.Cloud Quick Start Guide. For more details on interacting with clusters, see the Administrative Clusters Guide.

1.4.4.1 - Query Tool

How to submit ClickHouse queries to a cluster or nodes of the cluster

The Query Tool page allows users to submit ClickHouse SQL queries directly to the cluster or a specific cluster node.

To use the Query Tool:

  1. Select Explore from either the Clusters View or the Clusters Detail Page.

  2. Select Query from the top tab. This is the default view for the Explore page.

  3. Select from the following:

    Query Page
    1. Select which cluster to run a query against.

    2. Select Run DDLs ON CLUSTER to run Distributed DDL Queries.

    3. Select the following node options:

      Select node for query.
      1. Any: Any node selected from the Zookeeper parameters.
      2. All: Run the query against all nodes in the cluster.
      3. Node: Select a specific node to run the query against.
    4. The Query History allows you to scroll through queries that have been executed.

    5. Enter the query in the Query Textbox. For more information on ClickHouse SQL queries, see the SQL Reference page on ClickHouse.tech.

    6. Select Execute to submit the query from the Query Textbox.

    7. The results of the query will be displayed below the Execute button.

Additional tips and examples are listed on the Query page.
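
If you prefer to run the same kind of query outside the UI, a minimal sketch using clickhouse-client against the public demo server (described later in the Connectivity section) looks like this; substitute your own cluster's access point and credentials:

# Run a query against a cluster from the command line
# (github.demo.trial.altinity.cloud and the demo user are the public demo values)
clickhouse-client --host github.demo.trial.altinity.cloud --port 9440 --secure \
    --user demo --password demo \
    --query "SELECT version(), uptime()"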

1.4.4.2 - Schema View

Viewing the database schema for clusters and nodes.

The Schema page allows you to view the databases, tables, and other details.

To access the Schema page:

  1. Select Explore from either the Clusters View or the Clusters Detail Page.

  2. Select Schema from the top tab.

  3. Select the following node options:

    Select node for query.
    1. Any: Any node selected from the Zookeeper parameters.
    2. All: Run the query against all nodes in the cluster.
    3. Node: Select a specific node to run the query against.

To view details on a table, select the table name. The following details are displayed:

  • Table Description: Details on the table’s database, engine, and other details.
  • Table Schema: The CREATE TABLE command used to generate the table.
  • Sample Rows: A display of 5 selected rows from the table to give an example of the data contents.
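
Roughly the same information shown on the Schema page can also be pulled with clickhouse-client. This is a sketch assuming the public demo connection details used elsewhere in this guide; default.my_table is a hypothetical table name:

# Shorthand for the connection options (demo values; substitute your own access point)
CH="clickhouse-client --host github.demo.trial.altinity.cloud --port 9440 --secure --user demo --password demo"

$CH --query "SHOW DATABASES"                            # databases on the node
$CH --query "SHOW CREATE TABLE default.my_table"        # table schema (hypothetical table)
$CH --query "SELECT * FROM default.my_table LIMIT 5"    # sample rows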

1.4.4.3 - Processes

How to view the processes for a cluster or node.

The Processes page displays the currently running processes on a cluster or node.

To view the processes page:

  1. Select Explore from either the Clusters View or the Clusters Detail Page.

  2. Select Processes from the top tab.

  3. Select the following node options:

    Select node for query.
    1. Any: Any node selected from the Zookeeper parameters.
    2. All: Run the query against all nodes in the cluster.
    3. Node: Select a specific node to run the query against.

The following information is displayed:

  • Query ID: The ClickHouse ID of the query.
  • Query: The ClickHouse query that the process is running.
  • Time: The elapsed time of the process.
  • User: The ClickHouse user running the process.
  • Client Address: The address of the client submitting the process.
  • Action: Stop or restart a process.
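
The information on the Processes page corresponds closely to ClickHouse's system.processes table, so a comparable listing can be produced from the command line. A sketch using the public demo connection values; adjust them to your cluster's access point:

# List currently running queries, similar to the Processes page columns
clickhouse-client --host github.demo.trial.altinity.cloud --port 9440 --secure \
    --user demo --password demo \
    --query "SELECT query_id, user, address, elapsed, query FROM system.processes"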

1.5 - Connectivity

Connecting Altinity.Cloud with other services.

The following guides are designed to help organizations connect their existing services to Altinity.Cloud.

1.5.1 - Cluster Access Point

How to view your Cluster’s access information.

ClickHouse clusters created in Altinity.Cloud can be accessed through the Access Point. The Access Point is configured from the name of your cluster and the environment it is hosted in.

Information on the Access Point is displayed from the Clusters View. Clusters with TLS Enabled will display a green shield icon.

View Cluster Access Point

To view your cluster’s access point:

  1. From the Clusters View, select Access Point.
  2. The Access Point details will be displayed.
View the Access Point

Access Point Details

The Access Point module displays the following details:

  • Host: The DNS host name of the cluster, based on the name of the cluster and the environment the cluster is hosted in.

  • TCP Port: The ClickHouse TCP port for the cluster.

  • HTTP Port: The HTTP port used for the cluster.

  • Client Connections: The client connections are quick commands you can copy and paste into your terminal or use in your code. This makes it a snap to connect your code to your cluster by copying the details right from your cluster's Access Point (see the example after this list). Client connection snippets are provided for:

    • clickhouse-client
    • jdbc
    • https
    • python
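
As a hedged example of what the copied snippets typically look like, here are clickhouse-client and https connections to the public demo access point (substitute your own Host, ports, user, and password):

# Native protocol over TLS (TCP port 9440)
clickhouse-client --host github.demo.trial.altinity.cloud --port 9440 --secure \
    --user demo --password demo

# HTTPS protocol (port 8443) -- send a query to the ClickHouse HTTP interface
curl -sS --user demo:demo 'https://github.demo.trial.altinity.cloud:8443/?query=SELECT%201'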

1.5.2 - Configure Cluster Connections

Configure the connection protocols to your Altinity.Cloud cluster

Altinity.Cloud gives accounts the ability to customize the connections to their clusters. Organizations can enable or disable:

  • The Binary Protocol: The native ClickHouse client secure port on port 9440.
  • The HTTP Protocol: The HTTPS protocol on port 8443.
  • IP Restrictions: Restricts ClickHouse client connections to the provided whitelist. The IP addresses must be listed in CIDR format, for example: 192.168.1.0/24,10.20.30.40/32.

At this time, accounts can only update the IP Restrictions section. The Binary Protocol and HTTP Protocol are enabled by default and cannot be disabled.

Update Connection Configuration

To update the cluster’s Connection Configuration:

  1. Log into Altinity.Cloud with an account.

  2. Select the cluster to update.

  3. From the top menu, select Configure->Connections.

    Select Configure->Connections for the cluster.
  4. To restrict IP communication only to a set of whitelisted IP addresses:

    1. Under IP Restrictions, select Enabled.

    2. Enter a list of IP addresses. These can be separated by commas, spaces, or new lines. The following examples are all equivalent:

      192.168.1.1,192.168.1.2
      
      192.168.1.1
      192.168.1.2
      
      192.168.1.1 192.168.1.2
      
  5. When finished, select Confirm to save the Connection Configuration settings.

    Cluster Connection Configuration Settings

1.5.3 - Connecting with DBeaver

Creating a connection to Altinity.Cloud from DBeaver.

Connecting to Altinity.Cloud from DBeaver is a quick, secure process thanks to the available JDBC driver plugin.

Required Settings

The following settings are required for the driver connection:

  • hostname: The DNS name of the Altinity.Cloud cluster. This is typically based on the name of your cluster, environment, and organization. For example, if the organization name is CameraCo and the environment is prod with the cluster sales, then the URL may be https://sales.prod.cameraco.altinity.cloud. Check the cluster’s Access Point to verify the DNS name of the cluster.
  • port: The port to connect to. For Altinity.Cloud, it will be HTTPS on port 8443.
  • Username: The ClickHouse user to authenticate to the ClickHouse server.
  • Password: The ClickHouse user password used to authenticate to the ClickHouse server.

Example

The following example is based on connecting to the Altinity.Cloud public demo database, with the following settings:

  • Server: github.demo.trial.altinity.cloud
  • Port: 8443
  • Database: default
  • Username: demo
  • Password: demo
  • Secure: yes
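
Before configuring DBeaver, you can optionally verify that these demo connection details are reachable from your machine; a minimal check using curl against the ClickHouse HTTPS interface:

# Prints the server version if the connection and credentials are accepted
curl -sS --user demo:demo 'https://github.demo.trial.altinity.cloud:8443/?query=SELECT%20version()'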

DBeaver Example

  1. Start DBeaver and select Database->New Database Connection.

    Create Database Connection
  2. Select All, then in the search bar enter ClickHouse.

  3. Select the ClickHouse icon in the “Connect to a database” screen.

    Select ClickHouse JDBC Driver
  4. Enter the following settings:

    1. Host: github.demo.trial.altinity.cloud
    2. Port: 8443
    3. Database: default
    4. User: demo
    5. Password: demo
    Connection details.
  5. Select the Driver Properties tab. If prompted, download the ClickHouse JDBC driver.

  6. Scroll down to the ssl property. Change the value to true.

    Set secure.
  7. Press the Test Connection button. You should see a successful connection message.

    Successful Test.

1.5.4 - clickhouse-client

How to install and connect to an Altinity.Cloud cluster with clickhouse-client.

The ClickHouse Client is a command line based program that will be familiar to SQL based users. For more information on clickhouse-client, see the ClickHouse Documentation Command-Line Client page.

The access points for your Altinity.Cloud ClickHouse cluster can be viewed through the Cluster Access Point.

How to Setup clickhouse-client for Altinity.Cloud in Linux

As of this document’s publication, version 20.13 and above of the ClickHouse client is required to connect with the SNI enabled clusters. These instructions use the testing version of that client. An updated official stable build is expected to be released soon.

# Install prerequisites and add the ClickHouse repository signing key
sudo apt-get install apt-transport-https ca-certificates dirmngr
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv E0C56BD4

# Add the ClickHouse testing repository and refresh the package lists
echo "deb https://repo.clickhouse.tech/deb/testing/ main/" | sudo tee \
    /etc/apt/sources.list.d/clickhouse.list
sudo apt-get update

# Install the ClickHouse command-line client
sudo apt-get install -y clickhouse-client

Connect With clickhouse-client to an Altinity.Cloud Cluster

If your ClickHouse client is ready, then you can copy and paste your connection settings into your favorite terminal program, and you’ll be connected.
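
For example, the copied settings typically expand to a command along these lines (shown here with the public demo access point and credentials; your own cluster will have its own host, user, and password):

# Connect over the secure native port of the demo cluster
clickhouse-client --host github.demo.trial.altinity.cloud --port 9440 --secure \
    --user demo --password demo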

clickhouse-client to Altinity.Cloud demo

1.5.5 - Amazon VPC Endpoint

How to create an Amazon VPC Endpoint for Altinity.Cloud Services

Altinity.Cloud users can connect a VPC (Virtual Private Cloud) Endpoint from existing AWS environments to their Altinity.Cloud environment. The VPC Endpoint becomes a private connection between their existing Amazon services and Altinity.Cloud, without exposing the connection to the Internet.

The following instructions are based on using the AWS console. Examples of the Terraform equivalent settings are included.

Requirements

Altinity.Cloud requires the AWS ID for the account that will be linked to the Altinity.Cloud environment. This can be found when you log in to your AWS Console and select your username from the upper right-hand corner:

Create Endpoint Details

Instructions

To create a VPC Endpoint, the following general steps are required:

  • Retrieve Your Altinity.Cloud Environment URL.
  • Request an Endpoint Service Name from Altinity.Cloud.
  • Create a VPC Endpoint. This must be in the same region as the service to be connected to.
  • Create a private Route 53 Hosted Zone to internal.{Altinity.Cloud environment name}.altinity.cloud.
  • Create a CNAME that points to the VPC Endpoint.

Retrieve Your Altinity.Cloud Environment URL

Your AWS service will be connected to the URL for your Altinity.Cloud environment. Typically this will be internal.{Altinity.Cloud environment name}.altinity.cloud, where the environment name is the name of your environment. For example: if your environment is named trafficanalysis, then your environment URL will be internal.trafficanalysis.altinity.cloud.

This may differ depending on your type of service. If you have any questions, please contact your Altinity Support representative.

Request an Endpoint Service Name

Before creating a VPC Endpoint, Altinity.Cloud will need to provide you an AWS Service Name that will be used for your Endpoint. To request the AWS Service Name used in the later steps of creating the VPC Endpoint to Altinity.Cloud:

  1. Login to your AWS console and retrieve your AWS ID.

    Create Endpoint Details
  2. Contact your Altinity.Cloud support representative and inform them that you want to set up a VPC Endpoint to your Altinity.Cloud environment. They will require your AWS ID.

  3. Your Altinity.Cloud support representative will process your request, and return your AWS Service Name to you. Store this in a secure location for your records.

Create a VPC Endpoint

The next step in connecting Altinity.Cloud to the existing AWS Service is to create an Endpoint.

  1. From the AWS Virtual Private Cloud console, select Endpoints > Create Endpoint.

    Select Create Endpoint
  2. Set the following:

    1. Service Category: Set to Find service by name. (1)
    2. Service Name: Enter the Service Name (2) provided in the step Request an Endpoint Service Name, then select Verify. (3)
    Create Endpoint Details
  3. Select the VPC from the dropdown.

  4. Select Create Endpoint.

Terraform VPC Endpoint Configuration

resource "aws_vpc_endpoint" "this" {
    # service_name is the Endpoint Service Name provided by Altinity
    service_name        = local.service_name
    vpc_endpoint_type   = "Interface"
    vpc_id              = aws_vpc.this.id
    subnet_ids          = [aws_subnet.this.id]
    security_group_ids  = [aws_vpc.this.default_security_group_id]
    private_dns_enabled = false
    tags                = local.tags
}

Create Route 53 Hosted Zone

To create the Route 53 Hosted Zone for the newly created endpoint:

  1. From the AWS Console, select Endpoints.

  2. Select the Endpoint to connect to Altinity.Cloud, then the tab Details. In the section marked DNS names, select the DNS entry created and copy it. Store this in a separate location until ready.

    Copy Endpoint DNS Entry
  3. Enter the Route 53 console, and select Hosted zones.

    Select Create hosted zone
  4. Select Create hosted zone.

  5. On the Hosted zone configuration page, update the following:

    1. Domain name: Enter the URL of the Altinity.Cloud environment. Recall this will be internal.{Altinity.Cloud environment name}.altinity.cloud, where {your environment name} was determined in the step Retrieve Your Altinity.Cloud Environment URL.
    2. Description (optional): Enter a description of the hosted zone.
    3. Type: Set to Private hosted zone.
    Create hosted zone details
  6. In VPCs to associate with the hosted zone, set the following:

    1. Region: Select the region for the VPC to use.
    2. VPC ID: Enter the ID of the VPC that is being used.
  7. Verify the information is correct, then select Create hosted zone.

    Create hosted zone

Terraform Route 53 Configuration

resource "aws_route53_zone" "this" {
    # Replace {environment_name} with your Altinity.Cloud environment name
    name = "internal.{environment_name}.altinity.cloud."
    vpc {
        vpc_id = aws_vpc.this.id
    }
    tags = local.tags
}

Create CNAME for VPC Endpoint

Once the Hosted Zone that will be used to connect the VPC to Altinity.Cloud has been created, the CNAME for the VPC Endpoint can be configured through the following process:

  1. From the AWS Console, select Route 53 > Hosted Zones, then select Create record.

    Create hosted zone
  2. Select the Hosted Zone that will be used for the VPC connection. This will be the internal.{Altinity.Cloud environment name}.altinity.cloud.

  3. Select Create record.

  4. From Choose routing policy select Simple routing, then select Next.

    Choose routing policy
  5. From Configure records, select Define simple record.

    Select Define simple record
  6. From Define simple record, update the following:

    1. Record name: set to *. (1)
    2. Value/Route traffic to:
      1. Select Ip address or another value depending on the record type. (3)
      2. Enter the DNS name for the Endpoint created in Create Route 53 Hosted Zone.
    3. Record type
      1. Select CNAME (2).
    Define simple record
  7. Verify the information is correct, then select Define simple record.

Terraform CNAME Configuration

resource "aws_route53_record" "this" {
    zone_id = aws_route53_zone.this.zone_id
    name    = "*.${aws_route53_zone.this.name}"
    type    = "CNAME"
    ttl     = 300
    records = [aws_vpc_endpoint.this.dns_entry[0]["dns_name"]]
}

Test

To verify the VPC Endpoint works, launch an EC2 instance and execute the following curl command; it returns OK if successful. Use your Altinity.Cloud environment's host name in place of {your environment name here}:

curl -sS https://statuscheck.{your environment name here}
OK

For example, if your environment is internal.trafficanalysis.altinity.cloud, then use:

curl -sS https://statuscheck.internal.trafficanalysis.altinity.cloud
OK

1.5.6 - Amazon VPC Endpoint for Amazon MSK

How to create Amazon VPC Endpoint Services to connect Altinity.Cloud to Amazon MSK within your VPC

Altinity.Cloud users can connect a VPC (Virtual Private Cloud) Endpoint service from their existing AWS (Amazon Web Services) MSK (Amazon Managed Streaming for Apache Kafka) environments to their Altinity.Cloud environment. The VPC Endpoint services become a private connection between their existing Amazon services and Altinity.Cloud, without exposing Amazon MSK to the Internet.

The following instructions are based on using the AWS console. Examples of the Terraform equivalent settings are included.

Requirements

  • Amazon MSK
  • Provision Broker mapping.

Instructions

To create a VPC Endpoint Service, the following general steps are required:

  1. Contact your Altinity Support representative to retrieve the Altinity.Cloud AWS Account ID.
  2. Create VPC Endpoint Services: For each broker in the Amazon MSK cluster, provision a VPC endpoint service in the same region as your Amazon MSK cluster. For more information, see the Amazon AWS service endpoints documentation.
  3. Configure each endpoint service to a Kafka broker. For example:
    1. Endpoint Service: com.amazonaws.vpce.us-east-1.vpce-svc-aaa
    2. Kafka broker: b-0.xxx.yyy.zzz.kafka.us-east-1.amazonaws.com
    3. Endpoint service provision settings: Set com.amazonaws.vpce.us-east-1.vpce-svc-aaa = b-0.xxx.yyy.zzz.kafka.us-east-1.amazonaws.com
  4. Provide Endpoint Services and MSK Broker mappings to your Altinity Support representative.

Create VPC Endpoint Services

To create the VPC Endpoint Service that connects your Altinity.Cloud environment to your Amazon MSK service:

  1. From the AWS Virtual Private Cloud console, select Endpoints Services > Create Endpoint Service.

    Select Create Endpoint
  2. Set the following:

    1. Name: Enter a Name of your own choice (A).
    2. Load balancer type: Set to Network. (B)
    3. Available load balancers: Set to the load balancer you provisioned for this broker. (C)
    4. Additional settings:
      1. If you are required to manually accept the endpoint, set Acceptance Required to Enabled (D).
      2. Otherwise, leave Acceptance Required unchecked.
        Create Endpoint Details
  3. Select Create.

Test

To verify the VPC Endpoint Service works, please contact your Altinity Support representative.

2 - Altinity.Cloud Anywhere

Manuals, quick start guides, code samples and tutorials on how to use Altinity.Cloud Anywhere.

Altinity.Cloud Anywhere is a zero-maintenance, open source-based SaaS for ClickHouse that gives you control of your data, letting you chart your own path and giving you choice as to working with vendors or running your infrastructure yourself.

Your data. Your control. Our tools.

2.1 - Altinity.Anywhere Introduction and Concepts

Altinity.Cloud Anywhere introduction and concepts.

20 March 2023 · Read time 5 min

Introduction - Altinity.Anywhere

Altinity.Cloud Anywhere is a deployment model used by Altinity.Cloud, a zero-maintenance, open source-based SaaS for ClickHouse that gives you control of your data, letting you chart your own path and giving you choice as to working with vendors or running your infrastructure yourself.

Customers can easily deploy and manage ClickHouse clusters on their own infrastructure, using Kubernetes as the underlying orchestration system. Altinity Cloud Anywhere provides a self-managed, on-premises version of the Altinity.Cloud service, which is a fully managed ClickHouse cloud service offered by Altinity.


The following are some of the features, advantages, and benefits of Altinity Cloud Anywhere:

Deploys ClickHouse database clusters

  • Altinity Anywhere is built on ClickHouse, an open-source columnar database management system designed for high-performance analytics. ClickHouse is highly efficient and can process massive amounts of data (petabytes) in real time, making it ideal for organizations that need to perform complex analytics at scale.
  • ClickHouse is a high-performance, columnar database designed for OLAP workloads. Altinity Cloud Anywhere takes advantage of ClickHouse’s performance capabilities, providing customers with fast query response times, even for large datasets.

Altinity Cloud Manager
Altinity Cloud Manager is a cloud-based management tool that provides a centralized dashboard for managing Altinity Anywhere instances and ClickHouse clusters. The tool is designed to simplify the process of managing ClickHouse-based analytics workloads, making it easier for users to deploy, monitor, and scale their ClickHouse clusters, as well as easily switch cloud providers.

  • Cluster deployment
    Altinity Cloud Manager provides a ClickHouse cluster deployment wizard that allows users to easily deploy ClickHouse clusters in a few minutes on various cloud providers, such as Amazon Web Services (EKS) and Google Cloud Platform (GKE), and on-prem Kubernetes environments such as Minikube. Users can choose from a variety of deployment options, including single-node, multi-node, and high-availability clusters.

  • Cluster monitoring
    Altinity Cloud Manager provides real-time monitoring of ClickHouse clusters, allowing users to track cluster health, resource utilization, and query performance. The tool provides alerts when issues arise, helping users to proactively identify and resolve issues before they become critical.

  • Cluster scaling
    Altinity Cloud Manager enables users to easily scale ClickHouse clusters up or down as needed, in response to changes in workload or data volume. This allows users to optimize cluster performance and reduce costs by only paying for the resources they need.

  • Analytics tools
    Altinity Anywhere includes a variety of analytics tools and integrations, such as SQL editors, data visualization tools, and data integration capabilities. These tools allow you to easily analyze and visualize your data, as well as integrate it with other systems and applications.

  • Backup and recovery
    Altinity Cloud Manager provides backup and recovery capabilities, allowing users to protect their data and recover quickly in the event of a disaster. Users can create backups on a schedule, or manually as needed, and can restore backups to any supported ClickHouse instance.

  • Access control
    Altinity Cloud Manager provides granular access control, allowing users to manage permissions for individual users and groups. This helps to ensure that only authorized users have access to sensitive data and analytics tools. Customers can grant Altinity technical support staff access to their clusters as needed for analysis and troubleshooting.

Kubernetes-based orchestration

  • Altinity Cloud Anywhere leverages Kubernetes to manage the deployment and scaling of ClickHouse clusters, providing a highly available and fault-tolerant environment for ClickHouse.

Flexibility

  • Altinity Cloud Anywhere provides customers with the flexibility to choose the infrastructure and storage options that best suit their needs. Customers can use their own hardware or cloud infrastructure, and can configure storage options to meet their specific requirements.

Cost savings

  • Altinity Cloud Anywhere can help customers save money compared to third party cloud providers and on-prem database solutions. By leveraging ClickHouse’s performance and scalability capabilities, customers can reduce their hardware and infrastructure costs while still meeting their performance and availability requirements.
  • Cluster scheduling features spin down nodes to save runtime costs from third-party cloud providers that bill only when CPUs are running.

Security

  • Altinity Anywhere takes security seriously. Customers control Kubernetes security, including features such as SSL connection encryption, data encryption, and multi-factor authentication, to ensure the security of your data.

Altinity Cloud Connect

  • Altinity Cloud Connect is a feature of the Altinity Anywhere platform that enables seamless data integration between cloud-based and on-premises data sources.
  • Supported data sources include popular cloud-based data warehouses like Amazon Redshift, Google BigQuery, and Snowflake, as well as on-premises databases like MySQL and PostgreSQL.
  • Data integration is made easier by providing a user-friendly interface for setting up data connections and configuring data transfer jobs. Users can set up recurring data transfer jobs to move data from one source to another on a schedule, or initiate ad-hoc data transfers as needed.
  • Altinity Cloud Connect includes a number of features designed to ensure data integrity and reliability, including support for data type mapping, error handling, and data transformation. It also provides a high level of scalability, making it ideal for organizations that need to transfer large volumes of data quickly and efficiently.

Support

  • Altinity Anywhere provides dedicated support to ensure that your organization gets the most out of the platform. This includes training, technical support, and access to a community of other Altinity Anywhere users.

In summary, Altinity Cloud Anywhere is a powerful solution that provides customers with a simple, scalable, and cost-effective way to deploy and manage ClickHouse clusters on their own infrastructure using Kubernetes. With Altinity Cloud Anywhere, customers can take advantage of ClickHouse’s performance and scalability capabilities, while retaining full control over their infrastructure and data.

Free Trial

A two-week free trial is available. If you do not already have an Altinity.Cloud account, the trial includes account credentials and an access key from the *.Anywhere environment to use in your Kubernetes installation.

2.2 - Kubernetes Preparation

Kubernetes Preparation.

Altinity.Cloud Anywhere is installed through the ccctl suite of applications which can either be installed or built from source.

2.2.1 - Recommendations for EKS (AWS)

Altinity.Cloud Anywhere recommendations for EKS (AWS)

20 March 2023 · Read time 1 min

We recommend setting up karpenter or cluster-autoscaler
to launch instances in at least 3 Availability Zones.

If you plan on sharing the Kubernetes cluster with other workloads, it's recommended that you label the Kubernetes Nodes intended for Altinity.Cloud Anywhere with altinity.cloud/use=anywhere and taint them with dedicated=anywhere:NoSchedule, as shown in the example below.
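
For example, an existing node could be labeled and tainted with kubectl as follows (ip-10-1-2-3.ec2.internal is a placeholder node name; in practice these settings are usually applied at the node group or karpenter provisioner level rather than per node):

# Label the node so Altinity.Cloud Anywhere workloads can be scheduled onto it
kubectl label node ip-10-1-2-3.ec2.internal altinity.cloud/use=anywhere

# Taint the node so unrelated workloads are kept off it
kubectl taint node ip-10-1-2-3.ec2.internal dedicated=anywhere:NoSchedule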

Instance Types

for Zookeeper and infrastructure nodes

  • t3.large or t4g.large*

* t4g instances are AWS Graviton2-based (ARM).

for ClickHouse nodes

ClickHouse works the best in AWS when using nodes from those instance families:

  • m5
  • m6i
  • m6g*

* m6g instances are AWS Graviton2-based (ARM).

Instance sizes from large to 8xlarge are typical.

Storage Classes

  • gp2
  • gp2-encrypted
  • gp3*
  • gp3-encrypted*

* gp3 storage classes require the Amazon EBS CSI driver, which does not come pre-installed.

Example manifests:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gp2
provisioner: kubernetes.io/aws-ebs
parameters:
  fsType: ext4
  type: gp2
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gp2-encrypted
provisioner: kubernetes.io/aws-ebs
parameters:
  encrypted: 'true'
  fsType: ext4
  type: gp2
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gp3
provisioner: ebs.csi.aws.com
parameters:
  fsType: ext4
  type: gp3
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gp3-encrypted
  annotations:
    storageclass.kubernetes.io/is-default-class: 'true'
provisioner: ebs.csi.aws.com
parameters:
  encrypted: 'true'
  fsType: ext4
  type: gp3
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true

Notes:

  • We do not recommend using gp2 storage classes; gp3 is better and less expensive.
  • gp3 default throughput is 125MB/s for any volume size. It can be increased in AWS console or using storage class parameters. Here is an example:
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gp3-encrypted-500
provisioner: ebs.csi.aws.com
parameters:
  encrypted: 'true'
  fsType: ext4
  throughput: '500'
  type: gp3
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true

2.2.2 - Recommendations for GKE (GCP)

Altinity.Cloud Anywhere recommendations for GKE (GCP)

20 March 2023 · Read time 1 min

Machine Types

NOTE: Depending on the machine types and number of instances you plan to use, you may need to request a GCE quota increase.

We recommend setting up each node pool except the default one in at least 3 zones.

If you plan on sharing the Kubernetes cluster with other workloads, it's recommended that you label the Kubernetes Nodes intended for Altinity.Cloud Anywhere with altinity.cloud/use=anywhere and taint them with dedicated=anywhere:NoSchedule.

for Zookeeper and infrastructure nodes

  • e2-standard-2

for ClickHouse nodes

It's recommended to taint the node pools below with dedicated=clickhouse:NoSchedule (in addition to the altinity.cloud/use=anywhere label); a sample node pool command is shown after this list.

  • n2d-standard-2
  • n2d-standard-4
  • n2d-standard-8
  • n2d-standard-16
  • n2d-standard-32

If GCP is out of n2d-standard-* instances in the region of your choice, we recommend
substituting them with n2-standard-*.
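
As a sketch of how these recommendations can be applied when creating a node pool (the cluster name, pool name, region, and zones are placeholders; verify the flags against the current gcloud documentation):

# Create a labeled, tainted node pool for ClickHouse nodes
gcloud container node-pools create clickhouse-pool \
    --cluster my-gke-cluster \
    --region us-central1 \
    --node-locations us-central1-a,us-central1-b,us-central1-c \
    --machine-type n2d-standard-4 \
    --num-nodes 1 \
    --node-labels altinity.cloud/use=anywhere \
    --node-taints dedicated=clickhouse:NoSchedule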

Storage Classes

  • standard-rwo
  • premium-rwo

GKE comes pre-configured with both.

2.2.3 - Amazon Remote Provisioning Configuration

How to set up your Amazon roles.

21 March 2023 · Read time 3 min

Introduction

Altinity technical support can remotely provision EKS clusters with an Altinity.Cloud Anywhere environment on your Amazon account.

This document describes the steps required on the user side to let Altinity provision EKS clusters for Altinity.Cloud Anywhere.

Preparing the EC2 instance

An EC2 instance is required to deploy altinitycloud-connect, which will establish an outbound connection to Altinity.Cloud and start the EKS provisioning process.

EC2 instances requirements

  • Instance type: t2.micro or comparable
  • OS: Ubuntu Server 20.04
  • Role with IAM policies to access IAM, EC2, VPC, EKS, S3 & Lambda subsystems

    arn:aws:iam::aws:policy/IAMFullAccess
    arn:aws:iam::aws:policy/AmazonEC2FullAccess
    arn:aws:iam::aws:policy/AmazonVPCFullAccess
    arn:aws:iam::aws:policy/AmazonS3FullAccess
    arn:aws:iam::aws:policy/AWSLambda_FullAccess
    arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore
    

The AmazonSSMManagedInstanceCore policy is for the Break Glass Procedure over AWS SSM.


Creating a policy for EKS full access

  1. Create a standard policy for EKS full access as follows:

    {
       "Version":"2012-10-17",
       "Statement":[
          {
             "Effect":"Allow",
             "Action":[
                "eks:*"
             ],
             "Resource":"*"
          },
          {
             "Effect":"Allow",
             "Action":"iam:PassRole",
             "Resource":"*",
             "Condition":{
                "StringEquals":{
                   "iam:PassedToService":"eks.amazonaws.com"
                }
             }
          }
       ]
    }
  2. To give this instance access to the EC2 metadata service and the Internet, set the Security group to:

    • deny all inbound traffic
    • allow all outbound traffic

Installing Altinity.Cloud Connect

(altinitycloud-connect)

  1. Download altinitycloud-connect

    curl -sSL https://github.com/altinity/altinitycloud-connect/releases/download/v0.20.0/altinitycloud-connect-0.20.0-linux-amd64 -o altinitycloud-connect \
    && chmod a+x altinitycloud-connect \
    && sudo mv altinitycloud-connect /usr/local/bin/
    
  2. Log in to Altinity.Cloud and get a connection token, then run the login command below. A cloud-connect.pem file is created in the current working directory.

    altinitycloud-connect login --token=<registration token>
    
  3. Connect to Altinity.Cloud:

    altinitycloud-connect --capability aws
    
  4. Send Altinity the following information so we can start EKS provisioning:

    • CIDR for the Kubernetes VPC (at least /21 recommended, e.g. 10.1.0.0/21)
    • Number of AZs (3 recommended)
  5. Once the EKS cluster is ready, select the PROCEED button to complete the configuration.

Break Glass Procedure

The “Break Glass” procedure allows Altinity access to EC2 instance with SSH via AWS SSM in order to troubleshoot altinitycloud-connect running on this instance.

  1. Create an AnywhereAdmin IAM role with trust policy set:

    {
       "Version":"2012-10-17",
       "Statement":[
          {
             "Effect":"Allow",
             "Principal":{
                "AWS":"arn:aws:iam::313342380333:role/AnywhereAdmin"
             },
             "Action":"sts:AssumeRole"
          }
       ]
    }
    
  2. Add a permission policy set:

    {
       "Version":"2012-10-17",
       "Statement":[
          {
             "Effect":"Allow",
             "Action":"ssm:StartSession",
             "Resource":[
                "arn:aws:ec2:$REGION:$ACCOUNT_ID:instance/$INSTANCE_ID",
                "arn:aws:ssm:*:*:document/AWS-StartSSHSession"
             ]
          }
       ]
    }
    
  3. Send the following data to Altinity:

    arn:aws:ec2:$REGION:$ACCOUNT_ID:instance/$INSTANCE_ID
    

2.3 - Altinity.Cloud Anywhere Quickstart

How to use Altinity Anywhere to connect to your on-prem or 3rd party environment.

23 March 2023 · Read time 4 min

This tutorial explains how to connect your choice of Kubernetes cloud provider or your own on-prem cluster to the Altinity Cloud Manager using the Anywhere Free Trial, and begin managing ClickHouse clusters. The end result of the tutorial on this page is shown in Figure 1: the Altinity Cloud Manager (ACM) managing your ClickHouse cluster.

Signup Page

Figure 1 - The end result - an Altinity.Cloud managed ClickHouse cluster made possible by Altinity Anywhere.


More Information

If you encounter difficulties with any part of the tutorial, check the Troubleshooting section.
Contact Altinity support for additional help if the troubleshooting advice does not resolve the problem.

Preparing Kubernetes

Altinity Anywhere supports Kubernetes environments from either Amazon AWS (EKS), Google GCP (GKE), or Minikube.
Create your Kubernetes cluster using the following Altinity recommendations:

Verify that the Kubernetes host that you have selected has access to the cluster and can run kubectl commands.


Listing namespaces

Before Altinity Anywhere is installed, a listing of the current kubernetes namespaces should appear as follows:

$ kubectl get namespaces

NAME                                STATUS   AGE
default                             Active   184d
kube-node-lease                     Active   184d
kube-public                         Active   184d
kube-system                         Active   184d

How to get an Altinity.Cloud Anywhere Free Trial Account

Get your Altinity.Cloud Anywhere free trial account from the following link:

Signup Page
Figure 2 - Altinity.Cloud Anywhere Free Trial signup page.


Submitting the Free Trial form for Altinity.Cloud Anywhere

  1. From the first Altinity Email you receive after clicking SUBMIT, follow the instructions in the signup process to validate your email. This will notify Altinity technical support to provision your new Altinity.Cloud account.
  2. You will receive the next email after Altinity completes your account setup. It contains a link to log in to Altinity.Cloud, where you will create a password to log in to the Altinity Cloud Manager (ACM).

Now you are ready to connect your Kubernetes cluster.

Connecting Kubernetes to Altinity.Cloud

The first time you login to your new Altinity.Cloud Anywhere account, you will be directed to the environment setup page shown in Figure 3. If you have an existing account or restart installation, just select the Environments tab on the left side of your screen to reach the setup page.

Environment - Connection Setup Tab
Figure 3 - Environments > Connection Setup tab in the Altinity.Cloud Manager.


Connection Setup

Highlighted in red in Figure 3 are the steps to complete before you PROCEED to the next screen.

  1. In the first step Altinity.Cloud connect, download the correct binary for your system.

  2. From step 2 Connect to Altinity.Cloud, copy and paste the connection string to your terminal. Note that there is no output, so the command prompt is immediately ready for the next command.

    altinitycloud-connect login --token=<registration token>
    
  3. Run the Deploy connector to your Kubernetes cluster command.

    altinitycloud-connect kubernetes | kubectl apply -f -
    

    This step takes several minutes to complete depending on the speed of your host system.

    The response displays as follows:

    namespace/altinity-cloud-system created
    namespace/altinity-cloud-managed-clickhouse created
    clusterrole.rbac.authorization.k8s.io/altinity-cloud:node-view unchanged
    clusterrole.rbac.authorization.k8s.io/altinity-cloud:node-metrics-view unchanged
    clusterrole.rbac.authorization.k8s.io/altinity-cloud:storage-class-view unchanged
    clusterrole.rbac.authorization.k8s.io/altinity-cloud:persistent-volume-view unchanged
    clusterrole.rbac.authorization.k8s.io/altinity-cloud:cloud-connect unchanged
    serviceaccount/cloud-connect created
    clusterrolebinding.rbac.authorization.k8s.io/altinity-cloud:cloud-connect unchanged
    clusterrolebinding.rbac.authorization.k8s.io/altinity-cloud:node-view unchanged
    clusterrolebinding.rbac.authorization.k8s.io/altinity-cloud:node-metrics-view unchanged
    clusterrolebinding.rbac.authorization.k8s.io/altinity-cloud:storage-class-view unchanged
    clusterrolebinding.rbac.authorization.k8s.io/altinity-cloud:persistent-volume-view unchanged
    rolebinding.rbac.authorization.k8s.io/altinity-cloud:cloud-connect created
    rolebinding.rbac.authorization.k8s.io/altinity-cloud:cloud-connect created
    secret/cloud-connect created
    deployment.apps/cloud-connect created
    

Inspecting Kubernetes roles and resources

To display the Kubernetes roles and resources that the kubectl apply will use, run the following command and save the output from your terminal as a text file to review.

altinitycloud-connect kubernetes
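
For example, the output can be redirected straight to a file and reviewed before applying anything (anywhere-manifest.yaml is an arbitrary file name):

# Save the generated manifest for review
altinitycloud-connect kubernetes > anywhere-manifest.yaml
less anywhere-manifest.yaml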

Resources Configuration

Once these commands have completed, press PROCEED in Altinity.Cloud. After the connection is made, you will advance to the next Resources Configuration screen.

At the Resources Configuration screen, set the resources used for ClickHouse clusters as follows.

  1. Select your Kubernetes provider using the Cloud Provider radio button
    (Example: GCP).
  2. Add Storage Class names, which define the block storage for your nodes. Use the ADD STORAGE CLASS button to add additional storage classes as needed to allocate block storage for nodes in your environment.
  3. In the Node Pools section, inspect the node pool list to ensure the availability zones and pools you wish to use are listed. Altinity.Cloud lists availability zones that are currently in use. If you see zones that are missing, add them using the ADD NODE POOL button.

The following Resources Configuration example shows the red boxes around the settings made for the Google Cloud Platform GKE environment.

Resources Configuration Tab
Figure 4 - Resources Configuration setup page.

  1. The Cloud Provider is set to GCP.
  2. The Storage Classes uses the ADD STORAGE CLASS button to add the following:
    premium-rwo
    standard
    standard-rwo
  3. The Node Pools section uses the ADD NODE POOL button to add the Zone and Instance Type, storage Capacity in GB, and the Used For settings as follows:
    Zone       Instance Type   Capacity  Used for
    ---------  --------------  --------  ---------------------------------------------------
    us-east-b  e2-standard-2      10     [True] ClickHouse  [True] Zookeeper  [False] System 
    us-east-a  e2-standard-2       3     [True] ClickHouse  [True] Zookeeper  [False] System
    

Confirmation of Settings

The Confirmation screen displays a JSON representation of the settings you just made. Review these settings then select FINISH.

Confirmation Tab
Figure 5 - Confirmation page showing the JSON version of the settings.


Connection Completed, Nodes Running

Once the connection is fully set up, the Altinity.Cloud Environments dashboard will display your new environment.

Provisioned Environment Tab
Figure 6 - Environment dashboard page showing your running Anywhere cluster.

Creating your first ClickHouse cluster

To create your first cluster, switch to the Clusters page as indicated by the red keylines in Figure 6:

  • From the Environments page, select MANAGE CLUSTERS link located just below the blue banner.
  • Select Clusters from the left navigation panel.

The Cluster Launch Wizard document covers how to create a new cluster.

Frequently Asked Questions

FAQ-1. Altinity.Cloud Anywhere endpoint not reachable

Problem

  • The altinitycloud-connect command has a --url option that defaults to host anywhere.altinity.cloud on port 443. If this host is not reachable, the following error message appears.

    altinitycloud-connect login --token=<token>
    Error: Post "https://anywhere.altinity.cloud/sign": 
       dial tcp: lookup anywhere.altinity.cloud on 127.0.0.53:53: no such host
    

Solution

  • Make sure the name is available in DNS and that the resolved IP address is reachable on port 443 (UDP and TCP), then try again (a quick check is sketched after this list).

  • Note: if you are using a non-production Altinity.Cloud environment you must specify the correct URL explicitly. Contact Altinity support for help.
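
One way to check name resolution and reachability from the host running altinitycloud-connect, assuming standard Linux tools are available:

# Confirm the name resolves
getent hosts anywhere.altinity.cloud

# Confirm TCP port 443 is reachable
nc -vz anywhere.altinity.cloud 443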

FAQ-2. Insufficient Kubernetes privileges

Problem

  • Your Kubernetes account has insufficient permissions.

Solution

  • Set the following permissions for your Kubernetes account:

    • cluster-admin for initial provisioning only (it can be revoked afterwards)
    • Give full access to altinity-cloud-system and altinity-cloud-managed-clickhouse namespaces
    • A few optional read-only cluster-level permissions (for observability only)

FAQ-3. Help! I messed up the resource configuration

Problem

  • The resource configuration settings are not correct.

Solution

  1. From the Environments tab, in the Environment Name column, select the link to your environment.
  2. Select the menu function ACTIONS 》Reset Anywhere.
  3. Rerun the Environment 》Connection Setup and enter the correct values.

FAQ-4 One of my pods won’t spin up

For example, after you reboot your Mac, the Anywhere cluster in your ACM has not started.

Problem

One of the pods won’t start. (Example: see line 3 edge-proxy-66d44f7465-lxjjn)

    ┌──────────────── Pods(altinity-cloud-system)[8] ──────────────────────────┐
    │ NAME↑                                PF READY RESTARTS STATUS            │
 1  │ cloud-connect-d6ff8499f-bkc5k        ●  1/1       3    Running           │
 2  │ crtd-665fd5cb85-wqkkk                ●  1/1       3    Running           │
 3  │ edge-proxy-66d44f7465-lxjjn          ●  1/2       7    CrashLoopBackOff  │
 4  │ grafana-5b466574d-4scjc              ●  1/1       1    Running           │
 5  │ kube-state-metrics-58d86c747c-7hj79  ●  1/1       6    Running           │
 6  │ node-exporter-762b5                  ●  1/1       3    Running           │
 7  │ prometheus-0                         ●  1/1       3    Running           │
 8  │ statuscheck-f7c9b4d98-2jlt6          ●  1/1       3    Running           │
    └──────────────────────────────────────────────────────────────────────────┘

Terminal listing 1 - The pod in Line 3 edge-proxy-66d44f7465-lxjjn won’t start.


Solution

Delete the pod using the kubectl delete pod command and it will regenerate. (Example: see line 3 edge-proxy-66d44f7465-lxjjn)

kubectl -n altinity-cloud-system delete pod edge-proxy-66d44f7465-lxjjn

2.4 - Installation

How to install Altinity.Cloud Anywhere.

Altinity.Cloud Anywhere is installed through the ccctl suite of applications which can either be installed or built from source.

2.4.1 - Linux - Debian

How to preparing your environment for Altinity.Cloud Anywhere

20 March 2023 · Read time 1 min


Before installing Altinity.Cloud Anywhere into your environment, verify that the following requirements are met.

Security Requirements

  • Have a current Altinity.Cloud account.
  • An Altinity.Cloud API token. For more details, see Account Settings.

Software Requirements

The following are instructions that can be used to install some of the prerequisites.

kubectl Installation for Deb

The following instructions are based on Install and Set Up kubectl on Linux

  1. Download the kubectl binary:

    curl -LO 'https://dl.k8s.io/release/v1.22.0/bin/linux/amd64/kubectl'
    
  2. Verify the SHA-256 hash:

    curl -LO "https://dl.k8s.io/v1.22.0/bin/linux/amd64/kubectl.sha256"
    
    echo "$(<kubectl.sha256) kubectl" | sha256sum --check
    
  3. Install kubectl into the /usr/local/bin directory (this assumes that your PATH includes use/local/bin):

    sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
    
  4. Verify the installation and the version:

    kubectl version
    

2.4.2 - Installing Altinity.Cloud Anywhere on Minikube Mac

How to install Altinity.Cloud Anywhere on Minikube on the MacOS.

20 March 2023 · Read time 19 min

Overview - Minikube

This guide covers how to install an Altinity.Anywhere ClickHouse environment on a Mac running Minikube. Minikube is fine to use for development purposes, but should not be used for production.

These instructions have been tested on:

  • M1 Silicon Mac running Monterey (v12.6.3)
  • Ventura (v13.2.1)
  • Intel Mac running Big Sur (v11.7.4)

Requirements

The following Altinity services are used to demonstrate ClickHouse cluster installation on a Mac running MiniKube:

The following software must first be installed on your MacOS:

Installation

From a terminal, first check the versions of all the installed software by running each command in turn.

Checking versions

minikube version
docker-machine --version
docker-compose --version
docker --version
brew list watch

Terminal animation
The following terminal string animation replays commands and responses so you can see how long each step takes.

Minikube start

From the terminal, run the command:

minikube start

Response
This is minikube's response:

😄  minikube v1.29.0 on Darwin 13.2.1 (arm64)
✨  Using the docker driver based on existing profile
👍  Starting control plane node minikube in cluster minikube
🚜  Pulling base image ...
🏃  Updating the running docker "minikube" container ...
🐳  Preparing Kubernetes v1.26.1 on Docker 20.10.23 ...
🔗  Configuring bridge CNI (Container Networking Interface) ...
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: storage-provisioner, default-storageclass
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

Check the namespaces
Run the namespace list command:

kubectl get ns

Response
The following namespace items appear before ClickHouse is installed.

NAME              STATUS   AGE
default           Active   15d
kube-node-lease   Active   15d
kube-public       Active   15d
kube-system       Active   15d

Altinity Connection Setup

Go to your Altinity.Anywhere Environments section, select the correct Environment from the menu, then copy and paste the login string into the terminal. When the command completes, a command prompt appears.

altinitycloud-connect login --token=
eyJhbGciOiJSUzI1Ni        808 characters           Rpbml0eS5jbG91ZCIsImV4cCI6MTY3
OTMzNzMwMywic3ViIjoicm1rYzIzdGVzdC1kNDgxIn0.tODyYF8WnTSx6mbAZA5uwW176... cont.

Anywhere provisioning on your Mac

From the same Altinity.Anywhere environment, copy the next string and paste it into your terminal.
This begins the provisioning process on your Mac.

altinitycloud-connect kubernetes | kubectl apply -f -

Response
The response appears similar to the following:

namespace/altinity-cloud-system created
namespace/altinity-cloud-managed-clickhouse created
clusterrole.rbac.authorization.k8s.io/altinity-cloud:node-view unchanged
clusterrole.rbac.authorization.k8s.io/altinity-cloud:node-metrics-view unchanged
clusterrole.rbac.authorization.k8s.io/altinity-cloud:storage-class-view unchanged
clusterrole.rbac.authorization.k8s.io/altinity-cloud:persistent-volume-view unchanged
clusterrole.rbac.authorization.k8s.io/altinity-cloud:cloud-connect unchanged
serviceaccount/cloud-connect created
clusterrolebinding.rbac.authorization.k8s.io/altinity-cloud:cloud-connect unchanged
clusterrolebinding.rbac.authorization.k8s.io/altinity-cloud:node-view unchanged
clusterrolebinding.rbac.authorization.k8s.io/altinity-cloud:node-metrics-view unchanged
clusterrolebinding.rbac.authorization.k8s.io/altinity-cloud:storage-class-view unchanged
clusterrolebinding.rbac.authorization.k8s.io/altinity-cloud:persistent-volume-view unchanged
rolebinding.rbac.authorization.k8s.io/altinity-cloud:cloud-connect created
rolebinding.rbac.authorization.k8s.io/altinity-cloud:cloud-connect created
secret/cloud-connect created
deployment.apps/cloud-connect created

1 of 3 Connection Setup

From the Altinity Cloud Manager Connection Setup page, select the green PROCEED button.

Figure 1 - The Environments > Connection Setup screen.

2 of 3 Resources Configuration

Confirm the following settings then select the green PROCEED button:

Figure 2 - The Resources Configuration screen.

  • Cloud Provider = Not Specified
  • Storage Classes = Standard
  • Node Pools:
    • Zone = minikube-zone-a
    • Instance Type = minikube-node
    • Capacity = 10 GB (this is an example setting)
    • Used for: (checkmark each of these items)
      • ClickHouse (checked on)
      • Zookeeper (checked on)
      • System (checked on)
    • Tolerations = dedicated=clickhouse:NoSchedule

3 of 3 Confirmation

Review the Resources Specification JSON, then select the green FINISH button.
Note: If a message saying “Connection is not ready yet.” appears, select “Continue waiting…” until the next screen appears.

Figure 3 - The Confirmation screen showing the Resources Specification JSON.

The Resources Specification JSON string appears as follows:

 {
    "storageClasses": [
      {
        "name": "standard"
      }
    ],
    "nodePools": [
      {
        "for": [
          "CLICKHOUSE",
          "ZOOKEEPER",
          "SYSTEM"
        ],
        "instanceType" : "minikube-node",
        "zone"         : "minikube-zone-a",
        "capacity"     : 10
      } 
    ]
}

Optional Watch Commands

If provisioning is taking a long time and you want to watch it in real time, run watch commands on the two altinity-cloud prefixed namespaces.

Running Watch command 1 of 2
To watch the progress of the provisioning, use the Watch command to monitor altinity-cloud-system.
The display updates every 2 seconds.

watch kubectl -n altinity-cloud-system get all

Response
The finished result will appear similar to the following display:

Every 2.0s: kubectl -n altinity-cloud-system get all                     john.doe-MacBook-Pro.local: Sun Mar 19 23:03:18 2023

NAME                                      READY   STATUS    RESTARTS   AGE
pod/cloud-connect-d6ff8499f-bkc5k         1/1     Running   0          10h
pod/crtd-665fd5cb85-wqkkk                 1/1     Running   0          10h
pod/edge-proxy-66d44f7465-t9446           2/2     Running   0          10h
pod/grafana-5b466574d-vvt9p               1/1     Running   0          10h
pod/kube-state-metrics-58d86c747c-7hj79   1/1     Running   0          10h
pod/node-exporter-762b5                   1/1     Running   0          10h
pod/prometheus-0                          1/1     Running   0          10h
pod/statuscheck-f7c9b4d98-2jlt6           1/1     Running   0          10h

NAME                          TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                                       AGE
service/edge-proxy            ClusterIP      10.109.2.17      <none>        443/TCP,8443/TCP,9440/TCP                     10h
service/edge-proxy-lb         LoadBalancer   10.100.216.192   <pending>     443:31873/TCP,8443:32612/TCP,9440:31596/TCP   10h
service/grafana               ClusterIP      10.108.24.91     <none>        3000/TCP                                      10h
service/prometheus            ClusterIP      10.102.103.141   <none>        9090/TCP                                      10h
service/prometheus-headless   ClusterIP      None             <none>        9090/TCP                                      10h
service/statuscheck           ClusterIP      10.101.224.247   <none>        80/TCP                                        10h

NAME                           DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/node-exporter   1         1         1       1            1           <none>          10h

NAME                                 READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/cloud-connect        1/1     1            1           10h
deployment.apps/crtd                 1/1     1            1           10h
deployment.apps/edge-proxy           1/1     1            1           10h
deployment.apps/grafana              1/1     1            1           10h
deployment.apps/kube-state-metrics   1/1     1            1           10h
deployment.apps/statuscheck          1/1     1            1           10h

NAME                                            DESIRED   CURRENT   READY   AGE
replicaset.apps/cloud-connect-d6ff8499f         1         1         1       10h
replicaset.apps/crtd-665fd5cb85                 1         1         1       10h
replicaset.apps/edge-proxy-66d44f7465           1         1         1       10h
replicaset.apps/grafana-5b466574d               1         1         1       10h
replicaset.apps/grafana-6478f89b7c              0         0         0       10h
replicaset.apps/kube-state-metrics-58d86c747c   1         1         1       10h
replicaset.apps/statuscheck-f7c9b4d98           1         1         1       10h

NAME                          READY   AGE
statefulset.apps/prometheus   1/1     10h

Running Watch command 2 of 2
Open a second terminal window to monitor altinity-cloud-managed-clickhouse.

watch kubectl -n altinity-cloud-managed-clickhouse get all

Response
The finished result will appear similar to the following display:

Every 2.0s: kubectl -n altinity-cloud-managed-clickhouse get all        john.doe-MacBook-Pro.local: Mon Mar 20 00:14:44 2023

NAME                                            READY   STATUS    RESTARTS   AGE
pod/chi-rory-anywhere-6-rory-anywhere-6-0-0-0   2/2     Running   0          11h
pod/clickhouse-operator-996785fc-rgfvl          2/2     Running   0          11h
pod/zookeeper-5244-0                            1/1     Running   0          11h

NAME                                              TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
service/chi-rory-anywhere-6-rory-anywhere-6-0-0   ClusterIP   10.98.202.85    <none>        8123/TCP,9000/TCP,9009/TCP   11h
service/clickhouse-operator-metrics               ClusterIP   10.109.90.202   <none>        8888/TCP                     11h
service/clickhouse-rory-anywhere-6                ClusterIP   10.100.48.57    <none>        8443/TCP,9440/TCP            11h
service/zookeeper-5244                            ClusterIP   10.101.71.82    <none>        2181/TCP,7000/TCP            11h
service/zookeepers-5244                           ClusterIP   None            <none>        2888/TCP,3888/TCP            11h

NAME                                  READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/clickhouse-operator   1/1     1            1           11h

NAME                                           DESIRED   CURRENT   READY   AGE
replicaset.apps/clickhouse-operator-996785fc   1         1         1       11h

NAME                                                       READY   AGE
statefulset.apps/chi-rory-anywhere-6-rory-anywhere-6-0-0   1/1     11h
statefulset.apps/zookeeper-5244                            1/1     11h

Provisioning Completed

Dashboard

When the provisioning process completes, the ACM displays the Clusters dashboard page showing memory and CPU.

Figure 4 - The Clusters dashboard screen showing the new minikube setting.

View the new altinity-cloud namespaces

Open a third terminal window and list the namespaces to show the two ClickHouse additions:

kubectl get ns

Response
Note the two new altinity-cloud items at the top:

NAME                                STATUS   AGE
altinity-cloud-managed-clickhouse   Active   8h
altinity-cloud-system               Active   8h
default                             Active   16d
kube-node-lease                     Active   16d
kube-public                         Active   16d
kube-system                         Active   16d

Creating a ClickHouse Cluster

These instructions run through the use of the Altinity.Cloud Manager (ACM) Clusters > LAUNCH CLUSTER wizard to create a ClickHouse cluster on your Mac running Minikube.

Note: At each of the 6 steps in the wizard you can navigate back and forth between previously filled-in screens by selecting the title links on the left, or by using the BACK and NEXT buttons.

To create a new ClickHouse cluster using the Launch Cluster wizard:

  1. From your Chrome web browser in the ACM select Clusters.
  2. Select the blue LAUNCH CLUSTER button.
  3. In the 1 ClickHouse Setup screen, fill in the following and select the blue NEXT button:
    • Name = test-anywhere (15-character limit, lower-case letters only)
    • ClickHouse Version = ALTINITY BUILDS: 22.8.13 Stable Build
    • ClickHouse User Name = admin
    • ClickHouse User Password = admin-password
  4. In the 2. Resources Configuration screen, fill in the following then select NEXT button:
    • Node Type = minikube-node (CPU xnull, RAM pending)
    • Node Storage = 10 GB
    • Number of Volumes = 1
    • Volume Type = standard
    • Number of Shards = 1
  5. In the 3. High Availability Configuration screen, fill in the following then select NEXT:
    • Number of Replicas = 1
    • Zookeeper Configuration = Dedicated
    • Zookeeper Node Type = default
    • Enable Backups = OFF (unchecked)
    • Number of Backups to keep = 0 (leave blank)
  6. In the 4. Connection Configuration screen, fill in the following then select NEXT:
    • Endpoint = test-anywhere5.your-environment-name-a123.altinity.cloud
    • Use TLS = Checked
    • Load Balancer Type = Altinity Edge Ingress
    • Protocols: Binary Protocol (port:9440) - is checked ON
    • Protocols: HTTP Protocol (port:8443) - is checked ON
    • Datadog integration = disabled
    • IP restrictions = OFF (Enabled is unchecked)
  7. In the 5. Uptime Schedule screen, select ALWAYS ON then NEXT:
  8. In the final screen 6. Review & Launch, select the green LAUNCH button.

Your new ClickHouse cluster will start building on your Minikube Mac. When the build completes, green pill boxes appear under your cluster name:

  • 2 / 2 nodes online
  • Health: 6/6 checks passed
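
If you prefer to confirm the new cluster from the terminal as well, the sketch below lists the ClickHouse installation (chi) resources and pods that the ACM created; it assumes your kubectl context still points at the Minikube cluster.

# list the ClickHouse installations and their pods created by the ACM
kubectl -n altinity-cloud-managed-clickhouse get chi
kubectl -n altinity-cloud-managed-clickhouse get pods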

Creating a Database and Running Queries

In this section you will create tables on your cluster using the ACM and run queries from both the ACM and then from your local terminal.

Testing your database on the ACM

To create a new database on your Altinity.Cloud Anywhere cluster from the ACM:

  1. Login to the ACM and select Clusters, then select EXPLORE on your cluster.
  2. In the Query text box, enter the following create table SQL query:
CREATE TABLE IF NOT EXISTS events_local ON CLUSTER '{cluster}' (
    event_date  Date,
    event_type  Int32,
    article_id  Int32,
    title       String
) ENGINE = ReplicatedMergeTree('/clickhouse/{cluster}/tables/{shard}/{database}/{table}', '{replica}')
    PARTITION BY toYYYYMM(event_date)
    ORDER BY (event_type, article_id);
  3. Create a second table:
CREATE TABLE events ON CLUSTER '{cluster}' AS events_local
   ENGINE = Distributed('{cluster}', default, events_local, rand())
  4. Add some data with this query:
INSERT INTO events VALUES(today(), 1, 13, 'Example');
  5. List the data you just entered:
SELECT * FROM events;

# Response
test-anywhere-6.johndoetest-a123.altinity.cloud:8443 (query time: 0.196s)
┌─event_date─┬─event_type─┬─article_id─┬─title───┐
│ 2023-03-24 │          1 │         13 │ Example │
└────────────┴────────────┴────────────┴─────────┘
  6. Show all the tables:
show tables

# Response
test-anywhere-6.johndoetest-a123.altinity.cloud:8443 (query time: 0.275s)
┌─name─────────┐
│ events       │
│ events_local │
└──────────────┘
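
Because the HTTP protocol on port 8443 was enabled in the Connection Configuration step, you can also run the same queries over HTTPS from your terminal. The sketch below is an illustration only: the endpoint name and the admin/admin-password credentials are the example values from the wizard steps above, so substitute your own.

echo "SELECT * FROM events" | \
  curl --user admin:admin-password \
  'https://test-anywhere5.your-environment-name-a123.altinity.cloud:8443/' \
  --data-binary @-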

Testing ClickHouse on your local terminal

This section shows you how to use your local Minikube computer terminal to log in to the ClickHouse cluster that the ACM created:

  1. Find your pod name:
kubectl -n altinity-cloud-managed-clickhouse get all

# Response
NAME                                               READY   STATUS    RESTARTS        AGE
pod/chi-test-anywhere-6-johndoe-anywhere-6-0-0-0   2/2     Running   8 (3h25m ago)   2d17h
  2. On your Minikube computer terminal, log in to that pod using the name you got from step 1:
kubectl -n altinity-cloud-managed-clickhouse exec -it pod/chi-test-anywhere-6-johndoe-anywhere-6-0-0-0 -- bash

# Response
Defaulted container "clickhouse-pod" out of: clickhouse-pod, clickhouse-backup
clickhouse@chi-test-anywhere-6-johndoe-anywhere-6-0-0-0:/$ 
  3. Log in to your ClickHouse database using the clickhouse-client command to get the :) happy face prompt:
clickhouse@chi-test-anywhere-6-johndoe-anywhere-6-0-0-0:/$ 
clickhouse@chi-test-anywhere-6-johndoe-anywhere-6-0-0-0:/$ clickhouse-client

# Response
<jemalloc>: MADV_DONTNEED does not work (memset will be used instead)
<jemalloc>: (This is the expected behaviour if you are running under QEMU)
ClickHouse client version 22.8.13.21.altinitystable (altinity build).
Connecting to localhost:9000 as user default.
Connected to ClickHouse server version 22.8.13 revision 54460.

test-anywhere-6 :) 
  4. Run a show tables SQL command:
test-anywhere-6 :) show tables

# Response

SHOW TABLES

Query id: da01133d-0130-4b98-9090-4ebc6fa4b568

┌─name─────────┐
│ events       │
│ events_local │
└──────────────┘

2 rows in set. Elapsed: 0.013 sec.  
  5. Run an SQL query to show data in the events table:
test-anywhere-6 :) SELECT * FROM events;

# Response

SELECT * 
FROM events

Query id: 00fef876-e9b0-44b1-b768-9e662eda0483

┌─event_date─┬─event_type─┬─article_id─┬─title───┐
│ 2023-03-24 │          1 │         13 │ Example │
└────────────┴────────────┴────────────┴─────────┘

1 row in set. Elapsed: 0.023 sec. 

test-anywhere-6 :) 

Review the following database creation and query instructions:


How to delete your Anywhere cluster

This section covers how to delete your Minikube cluster on your Mac using the ACM and reset your Anywhere environment.

Deleting namespaces from your local Minikube terminal

Delete ClickHouse Services and Namespaces

To delete ClickHouse services and altinity-cloud namespaces in Minikube, run the following commands in sequence:

kubectl delete chi --all -n altinity-cloud-managed-clickhouse
kubectl delete ns altinity-cloud-managed-clickhouse
kubectl delete ns altinity-cloud-system

Use ACM to Reset your Anywhere environment

Resetting your Anywhere environment from the ACM and your Minikube installation lets you create a new connection.
After running the terminal commands above, use the ACM Reset Anywhere function.

  1. Select Environments.
  2. Select your environment name.
  3. In the ACTION menu, select Reset Anywhere

The result is that the Anywhere connection and provisioning wizard appears, showing the connection string to copy and paste to reprovision an Anywhere environment.


Reference

Terminal Commands

This section lists the Terminal commands used on this page.

  • minikube version
  • docker-machine --version
  • docker-compose --version
  • docker --version
  • brew list watch <command string>
  • minikube start
  • minikube stop
  • kubectl get ns
  • kubectl get namespace
  • altinitycloud-connect login --token=
  • watch kubectl -n altinity-cloud-system get all
  • watch kubectl -n altinity-cloud-managed-clickhouse get all
  • kubectl get ns altinity-cloud-managed-clickhouse
  • kubectl get ns altinity-cloud-system
  • altinitycloud-connect kubernetes
  • kubectl delete chi --all -n altinity-cloud-managed-clickhouse
  • kubectl delete ns altinity-cloud-managed-clickhouse
  • kubectl delete ns altinity-cloud-system

ACM common functions

This section lists commonly used ACM functions.

  • Environments: ACM: Environments (left panel)
  • Clusters: ACM: Clusters (left panel)
  • Cluster Launch Wizard: Clusters > LAUNCH CLUSTER (button)
  • Backup settings ACM: Environments > Edit (3 dots beside cluster name) > Backups
  • Reset Anywhere: Environments > cluster-name-link > ACTION > Reset Anywhere

Video of the Anywhere installation

A video walkthrough and a terminal recording of this installation are available in the online version of this page.

2.5 - Administration

Administration functions of Altinity.Cloud Anywhere.

Altinity.Cloud Anywhere Administration

2.5.1 - Altinity.Cloud connect

Setting up Altinity.Cloud connect

What is Altinity.Cloud connect?

Altinity.Cloud connect (altinitycloud-connect) is a tunneling daemon for Altinity.Cloud.
It enables management of ClickHouse clusters through Altinity.Cloud Anywhere.

Required permissions

altinitycloud-connect requires the following permissions:

Open outbound ports:

  • 443 tcp/udp (egress; stateful)

Kubernetes permissions:

  • cluster-admin for initial provisioning only (it can be revoked afterwards)
  • full access to ‘altinity-cloud-system’ and ‘altinity-cloud-managed-clickhouse’ namespaces and a few optional read-only cluster-level permissions (for observability)

Connecting to Altinity.Cloud

See the steps in the Quickstart Connect to Altinity.Cloud procedure.

Batch operation of altinitycloud-connect

altinitycloud-connect login produces cloud-connect.pem used to connect to
Altinity.Cloud Anywhere control plane (--token is short-lived while cloud-connect.pem does not expire until revoked).
If you need to reconnect the environment in unattended/batch mode (i.e. without requesting the token),
you can do so via

altinitycloud-connect kubernetes -i /path/to/cloud-connect.pem | kubectl apply -f -

Disconnecting your environment from Altinity.Cloud

  1. Locate your environment in the Environment tab in your Altinity.Cloud account.

  2. Select ACTIONS->Delete.

  3. Toggle the Delete Clusters switch only if you want to delete managed clusters.

  4. Press OK to complete.

After this is complete Altinity.Cloud will no longer be able to see or
connect to your Kubernetes environment via the connector.

Cleaning up managed environments in Kubernetes

To clean up managed ClickHouse installations and namespaces in a
disconnected Kubernetes cluster, issue the following commands in the
exact order shown below.

kubectl -n altinity-cloud-managed-clickhouse delete chi --all
kubectl delete ns altinity-cloud-managed-clickhouse
kubectl delete ns altinity-cloud-system

If you delete the namespaces before deleting the ClickHouse installations
(chi), the operation will hang because the finalizers on the chi resources
can no longer be processed. Should this occur, issue a kubectl edit command
on each ClickHouse installation and remove the finalizer manually from the
resource specification. Here is an example.

 kubectl -n altinity-cloud-managed-clickhouse edit clickhouseinstallations.clickhouse.altinity.com/test2
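
As an alternative to editing each resource interactively, a kubectl patch can clear the finalizers in one command. This is a minimal sketch (the installation name test2 matches the example above; substitute your own, and note that clearing finalizers skips the operator's normal cleanup):

kubectl -n altinity-cloud-managed-clickhouse patch clickhouseinstallations.clickhouse.altinity.com/test2 \
  --type=merge -p '{"metadata":{"finalizers":null}}'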

2.5.2 - Setting up logging

Setting up Altinity.Cloud Anywhere logging

20 March 2023 · Read time 2 min

Configuring logging

In order for Altinity.Cloud Anywhere to gather, store, and query logs, you need to configure access to an S3 or GCS bucket.
Cloud-specific instructions are provided below.

EKS (AWS)

The recommended way is to use IRSA.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: log-storage
  namespace: altinity-cloud-system
  annotations:
    eks.amazonaws.com/role-arn: "arn:aws:iam::<aws_account_id>:role/<role_arn>"

Alternatively, you can use a custom Instance Profile or explicit credentials (shown below).

# create bucket
aws s3api create-bucket --bucket REPLACE_WITH_BUCKET_NAME --region REPLACE_WITH_AWS_REGION

# create user with access to the bucket
aws iam create-user --user-name REPLACE_WITH_USER_NAME
aws iam put-user-policy \
    --user-name REPLACE_WITH_USER_NAME \
    --policy-name REPLACE_WITH_POLICY_NAME \
    --policy-document \
'{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "s3:ListBucket",
                "s3:PutObject",
                "s3:GetObject",
                "s3:DeleteObject"
            ],
            "Resource": [
                "arn:aws:s3:::REPLACE_WITH_BUCKET_NAME",
                "arn:aws:s3:::REPLACE_WITH_BUCKET_NAME/*"
            ],
            "Effect": "Allow"
        }
    ]
}'

# generate access key
aws iam create-access-key --user-name REPLACE_WITH_USER_NAME |
  jq -r '"AWS_ACCESS_KEY_ID="+(.AccessKey.AccessKeyId)+"\nAWS_SECRET_ACCESS_KEY="+(.AccessKey.SecretAccessKey)+"\n"' > credentials.env

# create altinity-cloud-system/log-storage-aws secret containing AWS_ACCESS_KEY_ID & AWS_SECRET_ACCESS_KEY
kubectl create secret -n altinity-cloud-system generic log-storage-aws \
  --from-env-file=credentials.env

rm -i credentials.env
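
Before sending the bucket name to Altinity, you can confirm that the secret was created. This is only a sanity check; the command below lists the secret's name, type, and number of keys:

kubectl -n altinity-cloud-system get secret log-storage-aws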

Please send the bucket name back to Altinity to finish the configuration.

GKE (GCP)

The recommended way is to use Workload Identity.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: log-storage
  namespace: altinity-cloud-system
  annotations:
    iam.gke.io/gcp-service-account: "<gcp_sa_name>@<project_id>.iam.gserviceaccount.com"

Alternatively, you can use a GCP service account attached to the instance, or explicit credentials (shown below).

# create bucket
gsutil mb gs://REPLACE_WITH_BUCKET_NAME

# create GCP SA with access to the bucket
gcloud iam service-accounts create REPLACE_WITH_GCP_SA_NAME \
  --project=REPLACE_WITH_PROJECT_ID \
  --display-name "REPLACE_WITH_DISPLAY_NAME"
gsutil iam ch \
  serviceAccount:REPLACE_WITH_GCP_SA_NAME@REPLACE_WITH_PROJECT_ID.iam.gserviceaccount.com:roles/storage.admin \
  gs://REPLACE_WITH_BUCKET_NAME

# generate GCP SA key
gcloud iam service-accounts keys create credentials.json \
--iam-account=REPLACE_WITH_GCP_SA_NAME@REPLACE_WITH_PROJECT_ID.iam.gserviceaccount.com \
--project=REPLACE_WITH_PROJECT_ID

# create altinity-cloud-system/log-storage-gcp secret containing credentials.json
kubectl create secret -n altinity-cloud-system generic log-storage-gcp \
  --from-file=credentials.json

rm -i credentials.json
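
As with the EKS setup, you can sanity-check the result before sending the bucket name to Altinity. This sketch assumes gsutil is still authenticated with an account that can read the bucket:

# confirm the secret exists and the bucket is reachable
kubectl -n altinity-cloud-system get secret log-storage-gcp
gsutil ls gs://REPLACE_WITH_BUCKET_NAME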

Please send the bucket name back to Altinity to finish the configuration.

3 - Altinity Stable Builds

ClickHouse tested and verified for production use with 3 years of support.

ClickHouse, as an open source project, has multiple methods of installation. Altinity recommends either using Altinity Stable builds for ClickHouse, or community builds.

The Altinity Stable builds are releases of ClickHouse with extended support that undergo rigorous testing to verify they are secure and ready for production use. Altinity Stable Builds provide a secure, pre-compiled binary release of ClickHouse server and client with the following features:

  • The ClickHouse version release is ready for production use.
  • 100% open source and 100% compatible with ClickHouse community builds.
  • Up to 3 years of support.
  • Validated against client libraries and visualization tools.
  • Tested for cloud use including Kubernetes.

For more information regarding the Altinity Stable builds, see Altinity Stable Builds for ClickHouse.

Altinity Stable Builds Life-Cycle Table

The following table lists Altinity Stable builds and their current status. Community builds of ClickHouse are no longer available after Community Support EOL (shown in red). Contact us for build support beyond the Altinity Extend Support EOL.

Release Notes | Build Status | Latest Version | Release Date | Latest Update | Support Duration | Community Support End-of-Life* | Altinity Extended Support End-of-Life**
22.8 | Available | 22.8.13.21 | 13 Feb 2023 | 13 Feb 2023 | 3 years | 31 Aug 2023 | 13 Feb 2026
22.3 | Available | 22.3.8.39 | 15 Jul 2022 | 15 Jul 2022 | 3 years | 15 Mar 2023 | 15 Jul 2025
21.8 | Available | 21.8.15.7 | 11 Oct 2021 | 15 Apr 2022 | 3 years | 31 Aug 2022 | 30 Aug 2024
21.3 | Available | 21.3.20.2 | 29 Jun 2021 | 10 Feb 2022 | 3 years | 30 Mar 2022 | 31 Mar 2024
21.1 | Available | 21.1.11.3 | 24 Mar 2021 | 01 Jun 2022 | 2 years | 30 Apr 2021 | 31 Jan 2023
20.8 | Available Upon Request | 20.8.12.2 | 02 Dec 2020 | 03 Feb 2021 | 2 years | 31 Aug 2021 | 02 Dec 2022
20.3 | Available Upon Request | 20.3.19.4 | 24 Jun 2020 | 23 Sep 2020 | 2 years | 31 Mar 2021 | 24 Jun 2022
  • *During Community Support bug fixes are automatically backported to community builds and picked up in refreshes of Altinity Stable builds.
  • **Altinity Extended Support covers P0-P1 bugs encountered by customers and critical security issues regardless of audience. Fixes are best effort and may not be possible in every circumstance. Altinity makes every effort to ensure a fix, workaround, or upgrade path for covered issues.

3.1 - Altinity Stable Builds Install Guide

How to install the Altinity Stable Builds for ClickHouse

Installing ClickHouse from the Altinity Stable Builds, available from https://builds.altinity.cloud, takes just a few minutes.

General Installation Instructions

When installing or upgrading from a previous version of ClickHouse from the Altinity Stable Builds, review the Release Notes for the ClickHouse version to install and upgrade to before starting. This will inform you of additional steps or requirements of moving from one version to the next.

Part of the installation procedures recommends you specify the version to install. The Release Notes lists the version numbers available for installation.

There are three main methods for installing Altinity Stable Builds:

  • Deb Packages
  • RPM Packages
  • Docker images

The package sources come from two sources:

  • Altinity Stable Builds: These are built from a secure, internal build pipeline and available from https://builds.altinity.cloud. Altinity Stable Builds are distinguishable from community builds when displaying version information:

    select version()
    
    ┌─version()────────────────┐
    │ 21.8.11.1.altinitystable │
    └──────────────────────────┘
    
  • Community Builds: These are made by ClickHouse community members, and are available at repo.clickhouse.tech.

3.1.1 - Altinity Stable Builds Deb Install Guide

How to install the Altinity Stable Builds for ClickHouse on Debian based systems.

Installation Instructions: Deb packages

ClickHouse can be installed from the Altinity Stable builds, located at https://builds.altinity.cloud, or from the ClickHouse community repository.

Deb Prerequisites

The following prerequisites must be installed before installing an Altinity Stable build of ClickHouse:

  • curl
  • gnupg2
  • apt-transport-https
  • ca-certificates

These can be installed prior to installing ClickHouse with the following command:

sudo apt-get update
sudo apt-get install curl gnupg2 apt-transport-https ca-certificates

Deb Packages for Altinity Stable Build

To install ClickHouse Altinity Stable build via Deb based packages from the Altinity Stable build repository:

  1. Update the apt-get local repository:

    sudo apt-get update
    
  2. Install the Altinity package signing keys:

    sudo sh -c 'mkdir -p /usr/share/keyrings && curl -s https://builds.altinity.cloud/apt-repo/pubkey.gpg | gpg --dearmor > /usr/share/keyrings/altinity-dev-archive-keyring.gpg'
    
  3. Update the apt-get repository to include the Altinity Stable build repository with the following commands:

    sudo sh -c 'echo "deb [signed-by=/usr/share/keyrings/altinity-dev-archive-keyring.gpg] https://builds.altinity.cloud/apt-repo stable main" > /etc/apt/sources.list.d/altinity-dev.list'
    
    sudo apt-get update
    
  4. Install either a specific version of ClickHouse, or the most current version.

    1. To install a specific version, include the version in the apt-get install command. The example below specifies the version 21.8.10.1.altinitystable:
    version=21.8.10.1.altinitystable
    
    sudo apt-get install clickhouse-common-static=$version clickhouse-client=$version clickhouse-server=$version
    
    2. To install the most current version of the ClickHouse Altinity Stable build without specifying a specific version, leave out the version= variable:
    sudo apt-get install clickhouse-client clickhouse-server
    
  5. When prompted, provide the password for the default clickhouse user.

  6. Restart server.

    Installed packages are not applied to an already running server, so you can install the packages first and restart the server at a convenient time.

    sudo systemctl restart clickhouse-server
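
After the restart you can verify that the server is running and reports the Altinity Stable version. This is a minimal sketch; the client will prompt for the password you set for the default user during installation:

sudo systemctl status clickhouse-server
clickhouse-client --password --query "SELECT version()"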
    

Remove Community Package Repository

For users upgrading to Altinity Stable builds from the community ClickHouse builds, we recommend removing the community builds from the local repository. See the instructions for your distribution of Linux for instructions on modifying your local package repository.

Community Builds

For instructions on how to install ClickHouse community, see the ClickHouse Documentation site.

3.1.2 - Altinity Stable Builds RPM Install Guide

How to install the Altinity Stable Builds for ClickHouse on RPM based systems.

Installation Instructions: RPM packages

ClickHouse can be installed from the Altinity Stable builds, located at https://builds.altinity.cloud, or from the ClickHouse community repository.

Depending on your Linux distribution, either dnf or yum will be used. See your particular distribution of Linux for specifics.

The instructions below use the command $(type -p dnf || type -p yum) to select the correct package manager for your distribution.

RPM Prerequisites

The following prerequisites must be installed before installing an Altinity Stable build:

  • curl
  • gnupg2

These can be installed prior to installing ClickHouse with the following:

sudo $(type -p dnf || type -p yum) install curl gnupg2

RPM Packages for Altinity Stable Build

To install ClickHouse from an Altinity Stable build via RPM based packages from the Altinity Stable build repository:

  1. Update the local RPM repository to include the Altinity Stable build repository with the following command:

    sudo curl https://builds.altinity.cloud/yum-repo/altinity.repo -o /etc/yum.repos.d/altinity.repo    
    
  2. Install ClickHouse server and client with either yum or dnf. It is recommended to specify a version to maximize compatibility with other applications and clients.

    1. To specify the version of ClickHouse to install, create a variable for the version and pass it to the installation instructions. The example below specifies the version 21.8.10.1.altinitystable:
    version=21.8.10.1.altinitystable
    sudo $(type -p dnf || type -p yum) install clickhouse-common-static-$version clickhouse-server-$version clickhouse-client-$version
    
    2. To install the most recent version of ClickHouse, leave off the -$version suffix and the variable:
    sudo $(type -p dnf || type -p yum) install clickhouse-common-static clickhouse-server clickhouse-client
    

Remove Community Package Repository

For users upgrading to Altinity Stable builds from the community ClickHouse builds, we recommend removing the community builds from the local repository. See the instructions for your distribution of Linux for instructions on modifying your local package repository.

RPM Downgrading Altinity ClickHouse Stable to a Previous Release

To downgrade to a previous release, the current version must be installed, and the previous version installed with the --setopt=obsoletes=0 option. Review the Release Notes before downgrading for any considerations or issues that may occur when downgrading between versions of ClickHouse.

For more information, see the Altinity Knowledge Base article Altinity packaging compatibility greater than 21.x and earlier.
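
A minimal sketch of such a downgrade is shown below. The target version here (21.3.20.2.altinitystable) is only an example; pick the version you are downgrading to from the life-cycle table, and treat the exact flag behavior as an assumption to verify against the Knowledge Base article above.

version=21.3.20.2.altinitystable
sudo $(type -p dnf || type -p yum) install --setopt=obsoletes=0 \
  clickhouse-common-static-$version clickhouse-server-$version clickhouse-client-$version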

Community Builds

For instructions on how to install ClickHouse community, see the ClickHouse Documentation site.

3.1.3 - Altinity Stable Builds Docker Install Guide

How to install the Altinity Stable Builds for ClickHouse with Docker.

Installation Instructions: Docker

These instructions detail how to install a single ClickHouse container from an Altinity Stable build through Docker. For details on setting up a cluster of Docker containers, see ClickHouse on Kubernetes.

Docker Images are available for Altinity Stable builds and Community builds. The instructions below focus on using the Altinity Stable builds for ClickHouse.

The Docker repositories are located at:

To install a ClickHouse Altinity Stable build through Docker:

  1. Create the directory for the docker-compose.yml file and the database storage and ClickHouse server storage.

    mkdir clickhouse
    cd clickhouse
    mkdir clickhouse_database
    
  2. Create the file docker-compose.yml and populate it with the following, updating the clickhouse-server to the current altinity/clickhouse-server version:

    version: '3'
    
    services:
      clickhouse_server:
          image: altinity/clickhouse-server:21.8.10.1.altinitystable
          ports:
          - "8123:8123"
          volumes:
          - ./clickhouse_database:/var/lib/clickhouse
          networks:
              - clickhouse_network
    
    networks:
      clickhouse_network:
          driver: bridge
          ipam:
              config:
                  - subnet: 10.222.1.0/24
    
  3. Launch the ClickHouse Server with docker-compose or docker compose depending on your version of Docker:

    docker compose up -d
    
  4. Verify the installation by logging into the database from the Docker image directly, and make any other necessary updates with:

    docker compose exec clickhouse_server clickhouse-client
    root@67c732d8dc6a:/# clickhouse-client
    ClickHouse client version 21.3.15.2.altinity+stable (altinity build).
    Connecting to localhost:9000 as user default.
    Connected to ClickHouse server version 21.1.10 revision 54443.
    
    67c732d8dc6a :)
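
Because the compose file maps port 8123 to the host, you can also reach the server over HTTP from outside the container. A minimal check, assuming the default user still has an empty password (the image default):

curl 'http://localhost:8123/' --data-binary "SELECT version()"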
    

3.1.4 - Altinity Stable Builds macOS Install Guide

How to install the Altinity Stable Builds for ClickHouse with macOS.

Altinity Stable for ClickHouse is available to macOS users through the Homebrew package manager. Users and developers who use macOS as their preferred environment can quickly install a production ready version of ClickHouse within minutes.

The following instructions are targeted for users of Altinity Stable for ClickHouse. For more information on running community or other versions of ClickHouse on macOS, see either the Homebrew Tap for ClickHouse project or the blog post Altinity Introduces macOS Homebrew Tap for ClickHouse.

macOS Prerequisites

Brew Install for Altinity Stable Instructions

By default, installing ClickHouse through brew will install the latest version of the community version of ClickHouse. Extra steps are required to install the Altinity Stable version of ClickHouse. Altinity Stable is installed as a keg-only version, which requires manually setting paths and other commands to run the Altinity Stable for ClickHouse through brew.

To install Altinity Stable for ClickHouse in macOS through Brew:

  1. Add the ClickHouse formula via brew tap:

    brew tap altinity/clickhouse
    
  2. Install Altinity Stable for ClickHouse by specifying clickhouse@altinity-stable for the most recent Altinity Stable version, or specify the version with clickhouse@{Altinity Stable Version}. For example, as of this writing the most current version of Altinity Stable is 21.8, so the formula to install that version is clickhouse@21.8-altinity-stable. To install the most recent version, use the brew install command as follows:

    brew install clickhouse@altinity-stable
    
  3. Because Altinity Stable for ClickHouse is available as a keg-only release, the path must be set manually. These instructions are displayed as part of the installation procedure. Depending on your version, the executable directory follows the pattern below (a sketch for making the path change permanent appears after this procedure):

    $(brew --prefix)/opt/{clickhouse version}/bin

    For our example, clickhouse@altinity-stable gives us the following path setting:

    export PATH="/opt/homebrew/opt/clickhouse@21.8-altinity-stable/bin:$PATH"

    Using the which command after updating the path reveals the location of the clickhouse-server executable:

    which clickhouse-server
    /opt/homebrew/opt/clickhouse@21.8-altinity-stable/bin/clickhouse-server
    
  4. To start the Altinity Stable for ClickHouse server use the brew services start command. For example:

    brew services start clickhouse@altinity-stable
    
  5. Connect to the new server with clickhouse-client:

    > clickhouse-client
    ClickHouse client version 21.8.13.1.
    Connecting to localhost:9000 as user default.
    Connected to ClickHouse server version 21.11.6 revision 54450.
    
    ClickHouse client version is older than ClickHouse server. It may lack support for new features.
    
    penny.home :) select version()
    
    SELECT version()
    
    Query id: 128a2cae-d0e2-4170-a771-83fb79429260
    
    ┌─version()─┐
    │ 21.11.6.1 │
    └───────────┘
    
    1 rows in set. Elapsed: 0.004 sec.
    
    penny.home :) exit
    Bye.
    
  6. To end the ClickHouse server, use brew services stop command:

    brew services stop clickhouse@altinity-stable
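
To make the path change from step 3 permanent, append the export line to your shell profile. This is a minimal sketch assuming the default zsh shell and the 21.8 keg path used in the example above; adjust both for your setup:

echo 'export PATH="/opt/homebrew/opt/clickhouse@21.8-altinity-stable/bin:$PATH"' >> ~/.zshrc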
    

3.1.5 - Altinity Stable Build Guide for ClickHouse

How to build ClickHouse from Altinity Stable manually.

Organizations that prefer to build ClickHouse manually can use the Altinity Stable versions of ClickHouse directly from the source code.

Clone the Repo

Before using either the Docker or Direct build process, download the Altinity Stable for ClickHouse source from the repository located at https://github.com/Altinity/clickhouse. The following procedure updates the source code to the most current version. For more information on downloading a specific version of the source code, see the GitHub documentation.

Hardware Recommendations

ClickHouse can run on anything from minimal hardware up to full clusters. The following hardware is recommended for building and running ClickHouse:

  • 16 GB of RAM (32 GB recommended)
  • Multiple cores (4+)
  • 20-50 GB disk storage

Downloading Altinity Stable for ClickHouse

Before building ClickHouse, specify the verified version to download and build by selecting one of the Altinity Stable for ClickHouse tags. The --recursive option downloads all submodules that are part of the Altinity Stable project.

As of this writing, the most recent verified version is v21.8.10.19-altinitystable, so the command to download that version of Altinity Stable into the folder AltinityStableClickHouse is:

  1. git clone --recursive -b v21.8.10.19-altinitystable --single-branch https://github.com/Altinity/clickhouse.git AltinityStableClickHouse

Direct Build Instructions for Deb Based Linux

To build Altinity Stable for ClickHouse from the source code for Deb based Linux platforms:

  1. Install the prerequisites:

    sudo apt-get install git cmake python ninja-build
    
  2. Install clang-12.

    sudo apt install clang-12
    
  3. Create and enter the build directory within your AltinityStable directory.

    mkdir build && cd build
    
  4. Set the compile variables to clang-12 and initiate the ninja build.

    CC=clang-12 CXX=clang++-12 cmake .. -GNinja
    
  5. Provide the ninja command to build your own Altinity Stable for ClickHouse:

    ninja clickhouse
    
  6. Once complete, Altinity Stable for ClickHouse will be in the project’s programs folder, and can be run with the following commands:

    1. ClickHouse Server: clickhouse server
    2. ClickHouse Client: clickhouse client
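
A quick smoke test, assuming the build completed and you are still in the build directory, is to ask the freshly built binary for its version (it should report the altinitystable suffix):

./programs/clickhouse server --version
./programs/clickhouse client --version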

3.1.6 - Legacy ClickHouse Altinity Stable Releases Install Guide

How to install the ClickHouse Altinity Stable Releases from packagecloud.io.

ClickHouse Altinity Stable Releases are specially vetted community builds of ClickHouse that Altinity certifies for production use. We track critical changes and verify against a series of tests to make sure they’re ready for your production environment. We take the steps to verify how to upgrade from previous versions, and what issues you might run into when transitioning your ClickHouse clusters to the next Stable Altinity ClickHouse release.

As of October 12, 2021, Altinity replaced the ClickHouse Altinity Stable Releases with the Altinity Stable Builds, providing longer support and validation. For more information, see Altinity Stable Builds.

Legacy versions of the ClickHouse Altinity Stable Releases are available from the Altinity ClickHouse Stable Release packagecloud.io repository, located at https://packagecloud.io/Altinity/altinity-stable.

The available Altinity ClickHouse Stable Releases from packagecloud.io for ClickHouse server, ClickHouse client and ClickHouse common versions are:

  • Altinity ClickHouse Stable Release 21.1.10.3
  • Altinity ClickHouse Stable Release 21.3.13.9
  • Altinity ClickHouse Stable Release 21.3.15.2
  • Altinity ClickHouse Stable Release 21.3.15.4

General Installation Instructions

When installing or upgrading from a previous version of legacy ClickHouse Altinity Stable Release, review the Release Notes for the version to install and upgrade to before starting. This will inform you of additional steps or requirements of moving from one version to the next.

Part of the installation procedures recommends you specify the version to install. The Release Notes lists the version numbers available for installation.

There are three main methods for installing the legacy ClickHouse Altinity Stable Releases:

Altinity ClickHouse Stable Releases are distinguishable from community builds when displaying version information. The suffix altinitystable will be displayed after the version number:

select version()

┌─version()───────────────┐
│ 21.3.15.2.altinitystable │
└──────────────────────────┘

Prerequisites

This guide assumes that the reader is familiar with Linux commands, permissions, and how to install software for their particular Linux distribution. The reader will have to verify they have the correct permissions to install the software in their target systems.

Installation Instructions

Legacy Altinity ClickHouse Stable Release DEB Builds

To install legacy ClickHouse Altinity Stable Release version DEB packages from packagecloud.io:

  1. Update the apt-get repository with the following command:

    curl -s https://packagecloud.io/install/repositories/Altinity/altinity-stable/script.deb.sh | sudo bash
    
  2. ClickHouse can be installed either by specifying a specific version, or automatically going to the most current version. It is recommended to specify a version for maximum compatibility with existing clients.

    1. To install a specific version, create a variable specifying the version to install and including it with the install command:
    version=21.8.8.1.altinitystable
    sudo apt-get install clickhouse-client=$version clickhouse-server=$version clickhouse-common-static=$version
    
    2. To install the most current version of the legacy ClickHouse Altinity Stable release without specifying a specific version, leave out the version= variable:
    sudo apt-get install clickhouse-client clickhouse-server clickhouse-server-common
    
  3. Restart server.

    Installed packages are not applied to an already running server, so you can install the packages first and restart the server at a convenient time.

    sudo systemctl restart clickhouse-server
    

Legacy Altinity ClickHouse Stable Release RPM Builds

To install legacy ClickHouse Altinity Stable Release version RPM packages from packagecloud.io:

  1. Update the yum package repository configuration with the following command:

    curl -s https://packagecloud.io/install/repositories/Altinity/altinity-stable/script.rpm.sh | sudo bash
    
  2. ClickHouse can be installed either by specifying a specific version, or automatically going to the most current version. It is recommended to specify a version for maximum compatibility with existing clients.

    1. To install a specific version, create a variable specifying the version to install and including it with the install command:
    version=21.8.8.1.altinitystable
    sudo yum install clickhouse-client-${version} clickhouse-server-${version} clickhouse-server-common-${version}
    
    2. To install the most current version of the legacy ClickHouse Altinity Stable release without specifying a specific version, leave out the version= variable:
    sudo yum install clickhouse-client clickhouse-server clickhouse-server-common
    
  3. Restart the ClickHouse server.

    sudo systemctl restart clickhouse-server
    

3.2 - Monitoring Considerations

Monitoring Considerations

Monitoring helps to track potential issues in your cluster before they cause a critical error.

External Monitoring

External monitoring collects data from the ClickHouse cluster and uses it for analysis and review. Recommended external monitoring systems include:

ClickHouse can record metrics internally by enabling system.metric_log in config.xml.

For dashboard system:

  • Grafana is recommended for graphs, reports, alerts, dashboard, etc.
  • Other options are Nagios or Zabbix.

The following metrics should be collected:

  • For Host Machine:
    • CPU
    • Memory
    • Network (bytes/packets)
    • Storage (iops)
    • Disk Space (free / used)
  • For ClickHouse:
    • Connections (count)
    • RWLocks
    • Read / Write / Return (bytes)
    • Read / Write / Return (rows)
    • Zookeeper operations (count)
    • Absolute delay
    • Query duration (optional)
    • Replication parts and queue (count)
  • For Zookeeper:

The following queries are recommended to be included in monitoring:

  • SELECT * FROM system.replicas
    • For more information, see the ClickHouse guide on System Tables
  • SELECT * FROM system.merges
    • Checks on the speed and progress of currently executed merges.
  • SELECT * FROM system.mutations
    • This is the source of information on the speed and progress of currently executed mutations.
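
A minimal sketch of running one of these checks from the shell, for example from a cron job or monitoring agent, assuming clickhouse-client can reach the server locally with the default user:

clickhouse-client --query "SELECT database, table, is_readonly, absolute_delay FROM system.replicas FORMAT TSVWithNames"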

Monitor and Alerts

Configure the notifications for events and thresholds based on the following table:

Health Checks

The following health checks should be monitored:

Each check lists the check name, its severity, and the shell or SQL command used:

  • ClickHouse status (Severity: Critical)
    $ curl 'http://localhost:8123/' (expected response: Ok.)
  • Too many simultaneous queries; maximum: 100 (Severity: Critical)
    select value from system.metrics where metric='Query'
  • Replication status (Severity: High)
    $ curl 'http://localhost:8123/replicas_status' (expected response: Ok.)
  • Read only replicas, reflected by replicas_status as well (Severity: High)
    select value from system.metrics where metric='ReadonlyReplica'
  • ReplicaPartialShutdown, not reflected by replicas_status but seems to correlate with ZooKeeperHardwareExceptions (Severity: High)
    select value from system.events where event='ReplicaPartialShutdown'
    Note: this check can be turned off; it almost always correlates with ZooKeeperHardwareExceptions, and when it does not, there is usually nothing bad happening.
  • Some replication tasks are stuck (Severity: High)
    select count() from system.replication_queue where num_tries > 100
  • ZooKeeper is available (Severity: Critical for writes)
    select count() from system.zookeeper where path='/'
  • ZooKeeper exceptions (Severity: Medium)
    select value from system.events where event='ZooKeeperHardwareExceptions'
  • Other CH nodes are available
    $ for node in `echo "select distinct host_address from system.clusters where host_name !='localhost'" | curl 'http://localhost:8123/' --silent --data-binary @-`; do curl "http://$node:8123/" --silent ; done
  • All CH clusters are available (i.e. every configured cluster has enough replicas to serve queries)
    for cluster in `echo "select distinct cluster from system.clusters where host_name !='localhost'" | curl 'http://localhost:8123/' --silent --data-binary @-` ; do clickhouse-client --query="select '$cluster', 'OK' from cluster('$cluster', system, one)" ; done
  • There are files in 'detached' folders
    $ find /var/lib/clickhouse/data/*/*/detached/* -type d | wc -l
    19.8+: select count() from system.detached_parts
  • Too many parts: number of parts is growing; inserts are being delayed; inserts are being rejected (Severity: Critical)
    select value from system.asynchronous_metrics where metric='MaxPartCountForPartition'
    select value from system.events/system.metrics where event/metric='DelayedInserts'
    select value from system.events where event='RejectedInserts'
  • Dictionaries: exception (Severity: Medium)
    select concat(name,': ',last_exception) from system.dictionaries where last_exception != ''
  • ClickHouse has been restarted
    select uptime()
    select value from system.asynchronous_metrics where metric='Uptime'
  • DistributedFilesToInsert should not be always increasing (Severity: Medium)
    select value from system.metrics where metric='DistributedFilesToInsert'
  • A data part was lost (Severity: High)
    select value from system.events where event='ReplicatedDataLoss'
  • Data parts are not the same on different replicas (Severity: Medium)
    select value from system.events where event='DataAfterMergeDiffersFromReplica'
    select value from system.events where event='DataAfterMutationDiffersFromReplica'

Monitoring References


4 - ClickHouse on Kubernetes

Install and Manage ClickHouse Clusters on Kubernetes.

Setting up a cluster of Altinity Stable for ClickHouse is made easy with Kubernetes, even if saying that takes some effort from the tongue. Organizations that want to setup their own distributed ClickHouse environments can do so with the Altinity Kubernetes Operator.

As of this time, the current version of the Altinity Kubernetes Operator is 0.18.5.

4.1 - Altinity Kubernetes Operator Quick Start Guide

Become familiar with the Kubernetes Altinity Kubernetes Operator in the fewest steps.

If you’re running the Altinity Kubernetes Operator for the first time, or just want to get it up and running as quickly as possible, the Quick Start Guide is for you.

Requirements:

  • An operating system running Kubernetes and Docker, or a service providing support for them such as AWS.
  • A ClickHouse remote client such as clickhouse-client. Full instructions for installing ClickHouse can be found on the ClickHouse Installation page.

4.1.1 - How to install the Altinity ClickHouse-Operator to your Kubernetes environment

How to install and verify the Altinity Kubernetes Operator

1 March 2023 · Read time 4 min

Introduction - Altinity ClickHouse-Operator

This page provides instructions to deploy the Altinity Kubernetes Operator to your Kubernetes environment.

Prerequisites

The following items are required:


For Other Altinity deployment YAML file versions

To find other versions of the deployment YAML file, visit our Altinity clickhouse-operator site and use the GitHub branch menu Switch branches/tags to find a specific version.

Deployment Instructions

This example shows how to deploy version 0.20.3 of clickhouse-operator-install-bundle.yaml from the Altinity GitHub repository.

NOTE: Altinity recommends that you deploy a specific version, rather than using the latest clickhouse-operator YAML file from the master branch.


Installation Commands

To install a specific version of the Altinity Kubernetes Operator to your existing Kubernetes environment, run the following command:

kubectl apply -f https://github.com/Altinity/clickhouse-operator/raw/0.20.3/deploy/operator/clickhouse-operator-install-bundle.yaml

Alternatively, to deploy your own version of the YAML file, download and modify the latest Altinity Kubernetes Operator YAML file and run the deployment command:

kubectl apply -f clickhouse-operator-install-bundle.yaml

Successful Installation

The following example response shows the result of a successful installation.

customresourcedefinition.apiextensions.k8s.io/clickhouseinstallations.clickhouse.altinity.com created
customresourcedefinition.apiextensions.k8s.io/clickhouseinstallationtemplates.clickhouse.altinity.com created
customresourcedefinition.apiextensions.k8s.io/clickhouseoperatorconfigurations.clickhouse.altinity.com created
serviceaccount/clickhouse-operator created
clusterrole.rbac.authorization.k8s.io/clickhouse-operator-kube-system created
clusterrolebinding.rbac.authorization.k8s.io/clickhouse-operator-kube-system created
configmap/etc-clickhouse-operator-files created
configmap/etc-clickhouse-operator-confd-files created
configmap/etc-clickhouse-operator-configd-files created
configmap/etc-clickhouse-operator-templatesd-files created
configmap/etc-clickhouse-operator-usersd-files created
deployment.apps/clickhouse-operator created
service/clickhouse-operator-metrics created

Installation Verification

To verify that the installation was successful, run the following. On a successful installation, you’ll be able to see the clickhouse-operator pod under the NAME column.

kubectl get pods --namespace kube-system
NAME                                   READY   STATUS    RESTARTS       AGE
clickhouse-operator-857c69ffc6-dq2sz   2/2     Running   0              5s
coredns-78fcd69978-nthp2               1/1     Running   4 (110s ago)   50d
etcd-minikube                          1/1     Running   4 (115s ago)   50d
kube-apiserver-minikube                1/1     Running   4 (105s ago)   50d
kube-controller-manager-minikube       1/1     Running   4 (115s ago)   50d
kube-proxy-lsggn                       1/1     Running   4 (115s ago)   50d
kube-scheduler-minikube                1/1     Running   4 (105s ago)   50d
storage-provisioner                    1/1     Running   8 (115s ago)   50d
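
If the clickhouse-operator pod is not in a Running state, the operator logs are the first place to look. A minimal sketch, assuming the default container name used by the install bundle:

kubectl -n kube-system logs deployment/clickhouse-operator -c clickhouse-operator --tail=50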

More Information
The following section provides more information on the resources created in the installation.

Customization options

To customize Altinity Kubernetes Operator settings see:

Altinity recommends that you install a specific version of the clickhouse-operator that you know will work with your Kubernetes environment, rather than use the latest build from the GitHub master branch.

For details on installing other versions of the Altinity Kubernetes Operator see:

Deleting a deployment

This section covers how to delete a deployment.

To delete a deployment using the latest clickhouse-operator YAML file:

kubectl delete -f https://raw.githubusercontent.com/Altinity/clickhouse-operator/master/deploy/operator/clickhouse-operator-install-bundle.yaml

To delete a deployment using your local clickhouse-operator YAML file:

kubectl delete -f clickhouse-operator-install-bundle.yaml

4.1.2 - First Clusters

Create your first ClickHouse Cluster

If you followed the Quick Installation guide, then you have the
Altinity Kubernetes Operator for Kubernetes installed.
Let’s give it something to work with.

Create Your Namespace

For our examples, we’ll be setting up our own Kubernetes namespace test.
All of the examples will be installed into that namespace so we can track
how the cluster is modified with new updates.

Create the namespace with the following kubectl command:

kubectl create namespace test
namespace/test created

Just to make sure we’re in a clean environment,
let’s check for any resources in our namespace:

kubectl get all -n test
No resources found in test namespace.

The First Cluster

We’ll start with the simplest cluster: one shard, one replica.
This template and others are available on the
Altinity Kubernetes Operator Github example site,
and contains the following:

apiVersion: "clickhouse.altinity.com/v1"
kind: "ClickHouseInstallation"
metadata:
  name: "demo-01"
spec:
  configuration:
    clusters:
      - name: "demo-01"
        layout:
          shardsCount: 1
          replicasCount: 1

Save this as sample01.yaml and launch it with the following:

kubectl apply -n test -f sample01.yaml
clickhouseinstallation.clickhouse.altinity.com/demo-01 created

Verify that the new cluster is running. When the STATUS column
shows Completed, the installation is finished.

kubectl -n test get chi -o wide
NAME      VERSION   CLUSTERS   SHARDS   HOSTS   TASKID                                 STATUS      UPDATED   ADDED   DELETED   DELETE   ENDPOINT
demo-01   0.18.1    1          1        1       6d1d2c3d-90e5-4110-81ab-8863b0d1ac47   Completed             1                          clickhouse-demo-01.test.svc.cluster.local

To retrieve the IP information use the get service option:

kubectl get service -n test
NAME                      TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)                         AGE
chi-demo-01-demo-01-0-0   ClusterIP      None           <none>        8123/TCP,9000/TCP,9009/TCP      2s
clickhouse-demo-01        LoadBalancer   10.111.27.86   <pending>     8123:31126/TCP,9000:32460/TCP   19s

So we can see that our pod is running, and that we have a
load balancer for the cluster.
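
If you are running these examples on minikube, the LoadBalancer’s EXTERNAL-IP will stay in the <pending> state shown above. One optional way to assign it an address (not required for the rest of this guide) is to run minikube tunnel in a separate terminal:

minikube tunnel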

Connect To Your Cluster With Exec

Let’s talk to our cluster and run some simple ClickHouse queries.

We can hop in directly through Kubernetes and run the clickhouse-client
that’s part of the image with the following command:

kubectl -n test exec -it chi-demo-01-demo-01-0-0-0 -- clickhouse-client
ClickHouse client version 20.12.4.5 (official build).
Connecting to localhost:9000 as user default.
Connected to ClickHouse server version 20.12.4 revision 54442.

chi-demo-01-demo-01-0-0-0.chi-demo-01-demo-01-0-0.test.svc.cluster.local :)

From within ClickHouse, we can check out the current clusters:

SELECT * FROM system.clusters
┌─cluster─────────────────────────────────────────┬─shard_num─┬─shard_weight─┬─replica_num─┬─host_name───────────────┬─host_address─┬─port─┬─is_local─┬─user────┬─default_database─┬─errors_count─┬─slowdowns_count─┬─estimated_recovery_time─┐
 all-replicated                                           1             1            1  chi-demo-01-demo-01-0-0  127.0.0.1     9000         1  default                               0                0                        0 
 all-sharded                                              1             1            1  chi-demo-01-demo-01-0-0  127.0.0.1     9000         1  default                               0                0                        0 
 demo-01                                                  1             1            1  chi-demo-01-demo-01-0-0  127.0.0.1     9000         1  default                               0                0                        0 
 test_cluster_one_shard_three_replicas_localhost          1             1            1  127.0.0.1                127.0.0.1     9000         1  default                               0                0                        0 
 test_cluster_one_shard_three_replicas_localhost          1             1            2  127.0.0.2                127.0.0.2     9000         0  default                               0                0                        0 
 test_cluster_one_shard_three_replicas_localhost          1             1            3  127.0.0.3                127.0.0.3     9000         0  default                               0                0                        0 
 test_cluster_two_shards                                  1             1            1  127.0.0.1                127.0.0.1     9000         1  default                               0                0                        0 
 test_cluster_two_shards                                  2             1            1  127.0.0.2                127.0.0.2     9000         0  default                               0                0                        0 
 test_cluster_two_shards_internal_replication             1             1            1  127.0.0.1                127.0.0.1     9000         1  default                               0                0                        0 
 test_cluster_two_shards_internal_replication             2             1            1  127.0.0.2                127.0.0.2     9000         0  default                               0                0                        0 
 test_cluster_two_shards_localhost                        1             1            1  localhost                127.0.0.1     9000         1  default                               0                0                        0 
 test_cluster_two_shards_localhost                        2             1            1  localhost                127.0.0.1     9000         1  default                               0                0                        0 
 test_shard_localhost                                     1             1            1  localhost                127.0.0.1     9000         1  default                               0                0                        0 
 test_shard_localhost_secure                              1             1            1  localhost                127.0.0.1     9440         0  default                               0                0                        0 
 test_unavailable_shard                                   1             1            1  localhost                127.0.0.1     9000         1  default                               0                0                        0 
 test_unavailable_shard                                   2             1            1  localhost                127.0.0.1        1         0  default                               0                0                        0 
└─────────────────────────────────────────────────┴───────────┴──────────────┴─────────────┴─────────────────────────┴──────────────┴──────┴──────────┴─────────┴──────────────────┴──────────────┴─────────────────┴─────────────────────────┘

Exit out of your cluster:

chi-demo-01-demo-01-0-0-0.chi-demo-01-demo-01-0-0.test.svc.cluster.local :) exit
Bye.

Connect to Your Cluster with Remote Client

You can also use a remote client such as clickhouse-client to
connect to your cluster through the LoadBalancer.

  • The default username and password are set by the
    clickhouse-operator-install-bundle.yaml file. These values can be altered
    by changing the chUsername and chPassword values in the ClickHouse
    Credentials section:

    • Default Username: clickhouse_operator
    • Default Password: clickhouse_operator_password
# ClickHouse credentials (username, password and port) to be used
# by operator to connect to ClickHouse instances for:
# 1. Metrics requests
# 2. Schema maintenance
# 3. DROP DNS CACHE
# User with such credentials can be specified in additional ClickHouse
# .xml config files,
# located in `chUsersConfigsPath` folder
chUsername: clickhouse_operator
chPassword: clickhouse_operator_password
chPort: 8123

In either case, the command to connect to your new cluster will
resemble the following, replacing {LoadBalancer hostname} with
the name or IP address of your LoadBalancer, then providing
the proper password. In our examples so far, that’s been localhost.
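
For example, using the default operator credentials shown above:

clickhouse-client --host {LoadBalancer hostname} --user=clickhouse_operator --password=clickhouse_operator_password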

From there, just make your ClickHouse SQL queries as you please - but
remember that this particular cluster has no persistent storage.
If the cluster is modified in any way, any databases or tables
created will be wiped clean.

Update Your First Cluster To 2 Shards

Well that’s great - we have a cluster running. Granted, it’s really small
and doesn’t do much, but what if we want to upgrade it?

Sure - we can do that any time we want.

Take your sample01.yaml and save it as sample02.yaml.

Let’s update it to give us two shards, each with one replica:

apiVersion: "clickhouse.altinity.com/v1"
kind: "ClickHouseInstallation"
metadata:
  name: "demo-01"
spec:
  configuration:
    clusters:
      - name: "demo-01"
        layout:
          shardsCount: 2
          replicasCount: 1

Save your YAML file and apply it. We’ve defined the name
in the metadata, so the operator knows exactly which cluster to update.

kubectl apply -n test -f sample02.yaml
clickhouseinstallation.clickhouse.altinity.com/demo-01 configured

Verify that the cluster is running - this may take a few
minutes depending on your system:

kubectl -n test get chi -o wide
NAME      VERSION   CLUSTERS   SHARDS   HOSTS   TASKID                                 STATUS      UPDATED   ADDED   DELETED   DELETE   ENDPOINT
demo-01   0.18.1    1          2        2       80102179-4aa5-4e8f-826c-1ca7a1e0f7b9   Completed             1                          clickhouse-demo-01.test.svc.cluster.local
kubectl get service -n test
NAME                      TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)                         AGE
chi-demo-01-demo-01-0-0   ClusterIP      None           <none>        8123/TCP,9000/TCP,9009/TCP      26s
chi-demo-01-demo-01-1-0   ClusterIP      None           <none>        8123/TCP,9000/TCP,9009/TCP      3s
clickhouse-demo-01        LoadBalancer   10.111.27.86   <pending>     8123:31126/TCP,9000:32460/TCP   43s

Once again, we can reach right into our cluster with
clickhouse-client and look at the clusters.

clickhouse-client --host localhost --user=clickhouse_operator --password=clickhouse_operator_password
ClickHouse client version 20.12.4.5 (official build).
Connecting to localhost:9000 as user default.
Connected to ClickHouse server version 20.12.4 revision 54442.

chi-demo-01-demo-01-1-0-0.chi-demo-01-demo-01-1-0.test.svc.cluster.local :)
SELECT * FROM system.clusters
┌─cluster─────────────────────────────────────────┬─shard_num─┬─shard_weight─┬─replica_num─┬─host_name───────────────┬─host_address─┬─port─┬─is_local─┬─user────┬─default_database─┬─errors_count─┬─slowdowns_count─┬─estimated_recovery_time─┐
 all-replicated                                           1             1            1  chi-demo-01-demo-01-0-0  127.0.0.1     9000         1  default                               0                0                        0 
 all-sharded                                              1             1            1  chi-demo-01-demo-01-0-0  127.0.0.1     9000         1  default                               0                0                        0 
 demo-01                                                  1             1            1  chi-demo-01-demo-01-0-0  127.0.0.1     9000         1  default                               0                0                        0 
 test_cluster_one_shard_three_replicas_localhost          1             1            1  127.0.0.1                127.0.0.1     9000         1  default                               0                0                        0 
 test_cluster_one_shard_three_replicas_localhost          1             1            2  127.0.0.2                127.0.0.2     9000         0  default                               0                0                        0 
 test_cluster_one_shard_three_replicas_localhost          1             1            3  127.0.0.3                127.0.0.3     9000         0  default                               0                0                        0 
 test_cluster_two_shards                                  1             1            1  127.0.0.1                127.0.0.1     9000         1  default                               0                0                        0 
 test_cluster_two_shards                                  2             1            1  127.0.0.2                127.0.0.2     9000         0  default                               0                0                        0 
 test_cluster_two_shards_internal_replication             1             1            1  127.0.0.1                127.0.0.1     9000         1  default                               0                0                        0 
 test_cluster_two_shards_internal_replication             2             1            1  127.0.0.2                127.0.0.2     9000         0  default                               0                0                        0 
 test_cluster_two_shards_localhost                        1             1            1  localhost                127.0.0.1     9000         1  default                               0                0                        0 
 test_cluster_two_shards_localhost                        2             1            1  localhost                127.0.0.1     9000         1  default                               0                0                        0 
 test_shard_localhost                                     1             1            1  localhost                127.0.0.1     9000         1  default                               0                0                        0 
 test_shard_localhost_secure                              1             1            1  localhost                127.0.0.1     9440         0  default                               0                0                        0 
 test_unavailable_shard                                   1             1            1  localhost                127.0.0.1     9000         1  default                               0                0                        0 
 test_unavailable_shard                                   2             1            1  localhost                127.0.0.1        1         0  default                               0                0                        0 
└─────────────────────────────────────────────────┴───────────┴──────────────┴─────────────┴─────────────────────────┴──────────────┴──────┴──────────┴─────────┴──────────────────┴──────────────┴─────────────────┴─────────────────────────┘

So far, so good. We can create some basic clusters.
If we want to do more, we’ll have to move ahead with replication
and zookeeper in the next section.

4.1.3 - Zookeeper and Replicas

Install Zookeeper and Replicas
kubectl create namespace test
namespace/test created

Now we’ve seen how to set up a basic cluster and upgrade it. Time to step
up our game: set up our cluster with Zookeeper, and then add
persistent storage to it.

The Altinity Kubernetes Operator does not install or manage Zookeeper.
Zookeeper must be provided and managed externally. The samples below
are examples of establishing Zookeeper to provide replication support.
For more information on running and configuring Zookeeper,
see the Apache Zookeeper site.

This step cannot be skipped - your Zookeeper instance must
be set up externally from your ClickHouse clusters.
Whether your Zookeeper installation is hosted by other
Docker Images or separate servers is up to you.

Install Zookeeper

Kubernetes Zookeeper Deployment

A simple method of installing a single Zookeeper node is provided by
the Altinity Kubernetes Operator
deployment samples. These provide sample deployments of Grafana, Prometheus, Zookeeper, and other applications.

See the Altinity Kubernetes Operator deployment directory
for a full list of sample scripts and Kubernetes deployment files.

The instructions below will create a new Kubernetes namespace zoo1ns,
and create a Zookeeper node in that namespace.
Kubernetes nodes will refer to that Zookeeper node by the hostname
zookeeper.zoo1ns within the created Kubernetes networks.

To deploy a single Zookeeper node in Kubernetes from the
Altinity Kubernetes Operator Github repository:

  1. Download the Altinity Kubernetes Operator Github repository, either with
    git clone https://github.com/Altinity/clickhouse-operator.git or by selecting Code->Download Zip from the
    Altinity Kubernetes Operator GitHub repository
    .

  2. From a terminal, navigate to the deploy/zookeeper directory
    and run the following:

cd clickhouse-operator/deploy/zookeeper
./quick-start-volume-emptyDir/zookeeper-1-node-create.sh
namespace/zoo1ns created
service/zookeeper created
service/zookeepers created
Warning: policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
poddisruptionbudget.policy/zookeeper-pod-disruption-budget created
statefulset.apps/zookeeper created
  3. Verify the Zookeeper node is running in Kubernetes:
kubectl get all --namespace zoo1ns
NAME              READY   STATUS    RESTARTS   AGE
pod/zookeeper-0   0/1     Running   0          2s

NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)             AGE
service/zookeeper    ClusterIP   10.100.31.86   <none>        2181/TCP,7000/TCP   2s
service/zookeepers   ClusterIP   None           <none>        2888/TCP,3888/TCP   2s

NAME                         READY   AGE
statefulset.apps/zookeeper   0/1     2s
  4. Kubernetes nodes will be able to refer to the Zookeeper
    node by the hostname zookeeper.zoo1ns.

Configure Kubernetes with Zookeeper

Once we start replicating clusters, we need Zookeeper to manage them.
Create a new file sample03.yaml and populate it with the following:

apiVersion: "clickhouse.altinity.com/v1"
kind: "ClickHouseInstallation"
metadata:
  name: "demo-01"
spec:
  configuration:
    zookeeper:
      nodes:
        - host: zookeeper.zoo1ns
          port: 2181
    clusters:
      - name: "demo-01"
        layout:
          shardsCount: 2
          replicasCount: 2
        templates:
          podTemplate: clickhouse-stable
  templates:
    podTemplates:
      - name: clickhouse-stable
        spec:
          containers:
            - name: clickhouse
              image: altinity/clickhouse-server:21.8.10.1.altinitystable

Notice that we’re increasing the number of replicas from the
sample02.yaml file in the
First Clusters - No Storage tutorial.

We’ll set up a minimal cluster connected to Zookeeper by applying
our new configuration file:

kubectl apply -f sample03.yaml -n test
clickhouseinstallation.clickhouse.altinity.com/demo-01 created

Verify it with the following:

kubectl -n test get chi -o wide
NAME      VERSION   CLUSTERS   SHARDS   HOSTS   TASKID                                 STATUS      UPDATED   ADDED   DELETED   DELETE   ENDPOINT                                    AGE
demo-01   0.18.3    1          2        4       5ec69e86-7e4d-4b8b-877f-f298f26161b2   Completed             4                          clickhouse-demo-01.test.svc.cluster.local   102s
kubectl get service -n test
NAME                      TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                         AGE
chi-demo-01-demo-01-0-0   ClusterIP      None             <none>        8123/TCP,9000/TCP,9009/TCP      85s
chi-demo-01-demo-01-0-1   ClusterIP      None             <none>        8123/TCP,9000/TCP,9009/TCP      68s
chi-demo-01-demo-01-1-0   ClusterIP      None             <none>        8123/TCP,9000/TCP,9009/TCP      47s
chi-demo-01-demo-01-1-1   ClusterIP      None             <none>        8123/TCP,9000/TCP,9009/TCP      16s
clickhouse-demo-01        LoadBalancer   10.104.157.249   <pending>     8123:32543/TCP,9000:30797/TCP   101s

If we log into our cluster and list the clusters, we can see
the updated results: demo-01 now has a total of 4 hosts -
two shards, each with two replicas.

SELECT * FROM system.clusters
┌─cluster──────────────────────────────────────┬─shard_num─┬─shard_weight─┬─replica_num─┬─host_name───────────────┬─host_address─┬─port─┬─is_local─┬─user────┬─default_database─┬─errors_count─┬─slowdowns_count─┬─estimated_recovery_time─┐
 all-replicated                                        1             1            1  chi-demo-01-demo-01-0-0  127.0.0.1     9000         1  default                               0                0                        0 
 all-replicated                                        1             1            2  chi-demo-01-demo-01-0-1  172.17.0.6    9000         0  default                               0                0                        0 
 all-replicated                                        1             1            3  chi-demo-01-demo-01-1-0  172.17.0.7    9000         0  default                               0                0                        0 
 all-replicated                                        1             1            4  chi-demo-01-demo-01-1-1  172.17.0.8    9000         0  default                               0                0                        0 
 all-sharded                                           1             1            1  chi-demo-01-demo-01-0-0  127.0.0.1     9000         1  default                               0                0                        0 
 all-sharded                                           2             1            1  chi-demo-01-demo-01-0-1  172.17.0.6    9000         0  default                               0                0                        0 
 all-sharded                                           3             1            1  chi-demo-01-demo-01-1-0  172.17.0.7    9000         0  default                               0                0                        0 
 all-sharded                                           4             1            1  chi-demo-01-demo-01-1-1  172.17.0.8    9000         0  default                               0                0                        0 
 demo-01                                               1             1            1  chi-demo-01-demo-01-0-0  127.0.0.1     9000         1  default                               0                0                        0 
 demo-01                                               1             1            2  chi-demo-01-demo-01-0-1  172.17.0.6    9000         0  default                               0                0                        0 
 demo-01                                               2             1            1  chi-demo-01-demo-01-1-0  172.17.0.7    9000         0  default                               0                0                        0 
 demo-01                                               2             1            2  chi-demo-01-demo-01-1-1  172.17.0.8    9000         0  default                               0                0                        0 
 test_cluster_two_shards                               1             1            1  127.0.0.1                127.0.0.1     9000         1  default                               0                0                        0 
 test_cluster_two_shards                               2             1            1  127.0.0.2                127.0.0.2     9000         0  default                               0                0                        0 
 test_cluster_two_shards_internal_replication          1             1            1  127.0.0.1                127.0.0.1     9000         1  default                               0                0                        0 
 test_cluster_two_shards_internal_replication          2             1            1  127.0.0.2                127.0.0.2     9000         0  default                               0                0                        0 
 test_cluster_two_shards_localhost                     1             1            1  localhost                127.0.0.1     9000         1  default                               0                0                        0 
 test_cluster_two_shards_localhost                     2             1            1  localhost                127.0.0.1     9000         1  default                               0                0                        0 
 test_shard_localhost                                  1             1            1  localhost                127.0.0.1     9000         1  default                               0                0                        0 
 test_shard_localhost_secure                           1             1            1  localhost                127.0.0.1     9440         0  default                               0                0                        0 
 test_unavailable_shard                                1             1            1  localhost                127.0.0.1     9000         1  default                               0                0                        0 
 test_unavailable_shard                                2             1            1  localhost                127.0.0.1        1         0  default                               0                0                        0 
└──────────────────────────────────────────────┴───────────┴──────────────┴─────────────┴─────────────────────────┴──────────────┴──────┴──────────┴─────────┴──────────────────┴──────────────┴─────────────────┴─────────────────────────┘

Distributed Tables

We have our cluster going - let’s test it out with a distributed
table so we can see the replication in action.

Login to your ClickHouse cluster and enter the following SQL statement:

CREATE TABLE test AS system.one ENGINE = Distributed('demo-01', 'system', 'one')

Once our table is created, run a SELECT * FROM test query.
We haven’t loaded any data of our own, so all we get back is the
zero row that system.one provides from each shard - but that’s all right.

SELECT * FROM test
┌─dummy─┐
     0 
└───────┘
┌─dummy─┐
     0 
└───────┘

Now let’s see where the results are coming from.
Run the following command - it tells us which host is
returning the results. It may take a few tries, but you’ll
start to notice the host name changing each time you run the
command SELECT hostName() FROM test:

SELECT hostName() FROM test
┌─hostName()────────────────┐
 chi-demo-01-demo-01-0-0-0 
└───────────────────────────┘
┌─hostName()────────────────┐
 chi-demo-01-demo-01-1-1-0 
└───────────────────────────┘
SELECT hostName() FROM test
┌─hostName()────────────────┐
 chi-demo-01-demo-01-0-0-0 
└───────────────────────────┘
┌─hostName()────────────────┐
 chi-demo-01-demo-01-1-0-0 
└───────────────────────────┘

This shows us that the query is being distributed across
different shards. The good news is that you can change your
configuration files to adjust the shards and replication
however it suits your needs.

One issue though: there’s no persistent storage.
If these clusters stop running, your data vanishes.
The next section covers how to add persistent storage
to your ClickHouse clusters running on Kubernetes.
In fact, we can test this by creating a new configuration
file called sample04.yaml:

apiVersion: "clickhouse.altinity.com/v1"
kind: "ClickHouseInstallation"
metadata:
  name: "demo-01"
spec:
  configuration:
    zookeeper:
      nodes:
        - host: zookeeper.zoo1ns
          port: 2181
    clusters:
      - name: "demo-01"
        layout:
          shardsCount: 1
          replicasCount: 1
        templates:
          podTemplate: clickhouse-stable
  templates:
    podTemplates:
      - name: clickhouse-stable
        spec:
          containers:
            - name: clickhouse
              image: altinity/clickhouse-server:21.8.10.1.altinitystable

Make sure you’re exited out of your ClickHouse cluster,
then install our configuration file:

kubectl apply -f sample04.yaml -n test
clickhouseinstallation.clickhouse.altinity.com/demo-01 configured

Notice that during the update, four pods were deleted
and then two new ones were added.

When your cluster has settled back down to just 1 shard
with 1 replica, log back into your ClickHouse database
and select from the table test:

SELECT * FROM test
Received exception from server (version 21.8.10):
Code: 60. DB::Exception: Received from localhost:9000. DB::Exception: Table default.test doesn't exist. 
command terminated with exit code 60

No persistent storage means any time your clusters are changed over,
everything you’ve done is gone. The next article will cover
how to correct that by adding storage volumes to your cluster.

4.1.4 - Persistent Storage

How to set up persistent storage for your ClickHouse Kubernetes cluster.
kubectl create namespace test
namespace/test created

We’ve shown how to create ClickHouse clusters in Kubernetes and how to add Zookeeper so we can create replicated clusters. Now we’re going to show how to set up persistent storage so you can change your cluster configurations without losing your hard work.

The examples here are built from the Altinity Kubernetes Operator examples, simplified down for our demonstrations.

Create a new file called sample05.yaml with the following:

apiVersion: "clickhouse.altinity.com/v1"
kind: "ClickHouseInstallation"
metadata:
  name: "demo-01"
spec:
  configuration:
    zookeeper:
        nodes:
        - host: zookeeper.zoo1ns
          port: 2181
    clusters:
      - name: "demo-01"
        layout:
          shardsCount: 2
          replicasCount: 2
        templates:
          podTemplate: clickhouse-stable
          volumeClaimTemplate: storage-vc-template
  templates:
    podTemplates:
      - name: clickhouse-stable
        spec:
          containers:
          - name: clickhouse
            image: altinity/clickhouse-server:21.8.10.1.altinitystable
    volumeClaimTemplates:
      - name: storage-vc-template
        spec:
          storageClassName: standard
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 1Gi

Those who have followed the previous examples will recognize the
clusters being created, but there are some new additions:

  • volumeClaimTemplate: This sets up storage, and we’re
    specifying the storage class as standard. For full details on the
    different storage classes, see the
    Kubernetes Storage Class documentation.
  • storage: We’re going to give our cluster 1 Gigabyte of storage,
    enough for our sample systems. If you need more space, it
    can be increased by changing these settings.
  • podTemplate: Here we’ll specify what our pod types are going to be.
    We’ll use an Altinity Stable build of the ClickHouse server image,
    but other versions can be specified to best fit your needs.
    For more information, see the
    ClickHouse on Kubernetes Operator Guide.

Save your new configuration file and install it.
If you’ve been following this guide and already have the
namespace test operating, this will update it:

kubectl apply -f sample05.yaml -n test
clickhouseinstallation.clickhouse.altinity.com/demo-01 created

Verify it completes with get all for this namespace,
and you should have similar results:

kubectl -n test get chi -o wide
NAME      VERSION   CLUSTERS   SHARDS   HOSTS   TASKID                                 STATUS      UPDATED   ADDED   DELETED   DELETE   ENDPOINT                                    AGE
demo-01   0.18.3    1          2        4       57ec3f87-9950-4e5e-9b26-13680f66331d   Completed             4                          clickhouse-demo-01.test.svc.cluster.local   108s
kubectl get service -n test
NAME                      TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                         AGE
chi-demo-01-demo-01-0-0   ClusterIP      None             <none>        8123/TCP,9000/TCP,9009/TCP      81s
chi-demo-01-demo-01-0-1   ClusterIP      None             <none>        8123/TCP,9000/TCP,9009/TCP      63s
chi-demo-01-demo-01-1-0   ClusterIP      None             <none>        8123/TCP,9000/TCP,9009/TCP      45s
chi-demo-01-demo-01-1-1   ClusterIP      None             <none>        8123/TCP,9000/TCP,9009/TCP      8s
clickhouse-demo-01        LoadBalancer   10.104.236.138   <pending>     8123:31281/TCP,9000:30052/TCP   98s

Testing Persistent Storage

Everything is running - let’s verify that our storage is working.
We’re going to exec into one of the pods created and
check the available disk space:

kubectl -n test exec -it chi-demo-01-demo-01-0-0-0 -- df -h
Filesystem      Size  Used Avail Use% Mounted on
overlay          32G   26G  4.0G  87% /
tmpfs            64M     0   64M   0% /dev
tmpfs           3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/sda2        32G   26G  4.0G  87% /etc/hosts
shm              64M     0   64M   0% /dev/shm
tmpfs           7.7G   12K  7.7G   1% /run/secrets/kubernetes.io/serviceaccount
tmpfs           3.9G     0  3.9G   0% /proc/acpi
tmpfs           3.9G     0  3.9G   0% /proc/scsi
tmpfs           3.9G     0  3.9G   0% /sys/firmware

And we can see that we have about 1 Gigabyte of storage
allocated to our cluster.
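
If you want to double-check the volumes behind that storage, you can also list the PersistentVolumeClaims the operator created (an optional check, assuming the test namespace used throughout these examples):

kubectl get pvc -n test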

Let’s add some data to it. Nothing major, just to show that we can
store information, then change the configuration and the data stays.

Exit out of your cluster and launch clickhouse-client on your LoadBalancer.
We’re going to create a database, then create a table in the database,
then show both.

SHOW DATABASES
┌─name────┐
 default 
 system  
└─────────┘
CREATE DATABASE teststorage
CREATE TABLE teststorage.test AS system.one ENGINE = Distributed('demo-01', 'system', 'one')
SHOW DATABASES
┌─name────────┐
 default     
 system      
 teststorage 
└─────────────┘
SELECT * FROM teststorage.test
┌─dummy─┐
     0 
└───────┘
┌─dummy─┐
     0 
└───────┘

If you followed the instructions from
Zookeeper and Replicas,
you’ll recall that at the end, when we updated the configuration
of our sample cluster, all of the tables and data we made were deleted.
Let’s recreate that experiment now with a new configuration.

Create a new file called sample06.yaml. We’re going to reduce
the shards and replicas to 1:

apiVersion: "clickhouse.altinity.com/v1"
kind: "ClickHouseInstallation"
metadata:
  name: "demo-01"
spec:
  configuration:
    zookeeper:
        nodes:
        - host: zookeeper.zoo1ns
          port: 2181
    clusters:
      - name: "demo-01"
        layout:
          shardsCount: 1
          replicasCount: 1
        templates:
          podTemplate: clickhouse-stable
          volumeClaimTemplate: storage-vc-template
  templates:
    podTemplates:
      - name: clickhouse-stable
        spec:
          containers:
          - name: clickhouse
            image: altinity/clickhouse-server:21.8.10.1.altinitystable
    volumeClaimTemplates:
      - name: storage-vc-template
        spec:
          storageClassName: standard
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 1Gi

Update the cluster with the following:

kubectl apply -f sample06.yaml -n test
clickhouseinstallation.clickhouse.altinity.com/demo-01 configured

Wait until the configuration change is done and the extra pods are spun down,
then exec into one of the remaining pods and check
the storage available:

kubectl -n test get chi -o wide
NAME      VERSION   CLUSTERS   SHARDS   HOSTS   TASKID                                 STATUS      UPDATED   ADDED   DELETED   DELETE   ENDPOINT                                    AGE
demo-01   0.18.3    1          1        1       776c1a82-44e1-4c2e-97a7-34cef629e698   Completed                               4        clickhouse-demo-01.test.svc.cluster.local   2m56s
kubectl -n test exec -it chi-demo-01-demo-01-0-0-0 -- df -h
Filesystem      Size  Used Avail Use% Mounted on
overlay          32G   26G  4.0G  87% /
tmpfs            64M     0   64M   0% /dev
tmpfs           3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/sda2        32G   26G  4.0G  87% /etc/hosts
shm              64M     0   64M   0% /dev/shm
tmpfs           7.7G   12K  7.7G   1% /run/secrets/kubernetes.io/serviceaccount
tmpfs           3.9G     0  3.9G   0% /proc/acpi
tmpfs           3.9G     0  3.9G   0% /proc/scsi
tmpfs           3.9G     0  3.9G   0% /sys/firmware

Storage is still there. We can test if our databases are still available
by logging into clickhouse:

SHOW DATABASES
┌─name────────┐
 default     
 system      
 teststorage 
└─────────────┘
SELECT * FROM teststorage.test
┌─dummy─┐
     0 
└───────┘

All of our databases and tables are there.

There are other ways of allocating storage - separate volumes for data
and logging, or multiple data volumes for your cluster nodes - but this
will get you started running your own ClickHouse cluster on Kubernetes
in your favorite environment.

4.1.5 - Uninstall

How to uninstall the Altinity Kubernetes Operator and its namespace

To remove the Altinity Kubernetes Operator, both the Altinity Kubernetes Operator and the components in its installed namespace have to be removed. The proper approach is to use the same clickhouse-operator-install-bundle.yaml file that was used to install the Altinity Kubernetes Operator. For more details, see how to install and verify the Altinity Kubernetes Operator.

The following instructions are based on the standard installation instructions. For users who performed a custom installation, note that any custom namespaces that should be removed will have to be deleted separately from the Altinity Kubernetes Operator deletion.

For example, if the custom namespace operator-test is created, then it would be removed with the command kubectl delete namespaces operator-test.

Instructions

To remove the Altinity Kubernetes Operator from your Kubernetes environment from a standard install:

  1. Verify the Altinity Kubernetes Operator is in the kube-system namespace by listing the pods with kubectl get pods --namespace kube-system. The Altinity Kubernetes Operator and other pods will be displayed:

    NAME                                   READY   STATUS    RESTARTS      AGE
    clickhouse-operator-857c69ffc6-2frgl   2/2     Running   0             5s
    coredns-78fcd69978-nthp2               1/1     Running   4 (23h ago)   51d
    etcd-minikube                          1/1     Running   4 (23h ago)   51d
    kube-apiserver-minikube                1/1     Running   4 (23h ago)   51d
    kube-controller-manager-minikube       1/1     Running   4 (23h ago)   51d
    kube-proxy-lsggn                       1/1     Running   4 (23h ago)   51d
    kube-scheduler-minikube                1/1     Running   4 (23h ago)   51d
    storage-provisioner                    1/1     Running   9 (23h ago)   51d
    
  2. Issue the kubectl delete command using the same YAML file used to install the Altinity Kubernetes Operator. By default the Altinity Kubernetes Operator is installed in the namespace kube-system. If it was installed into a custom namespace, make sure that namespace is specified in the uninstall command. In this example, we remove an installation of Altinity Kubernetes Operator version 0.18.3 from the default kube-system namespace. This produces output similar to the following:

    kubectl delete -f https://github.com/Altinity/clickhouse-operator/raw/0.18.3/deploy/operator/clickhouse-operator-install-bundle.yaml
    
    customresourcedefinition.apiextensions.k8s.io "clickhouseinstallations.clickhouse.altinity.com" deleted
    customresourcedefinition.apiextensions.k8s.io "clickhouseinstallationtemplates.clickhouse.altinity.com" deleted
    customresourcedefinition.apiextensions.k8s.io "clickhouseoperatorconfigurations.clickhouse.altinity.com" deleted
    serviceaccount "clickhouse-operator" deleted
    clusterrole.rbac.authorization.k8s.io "clickhouse-operator-kube-system" deleted
    clusterrolebinding.rbac.authorization.k8s.io "clickhouse-operator-kube-system" deleted
    configmap "etc-clickhouse-operator-files" deleted
    configmap "etc-clickhouse-operator-confd-files" deleted
    configmap "etc-clickhouse-operator-configd-files" deleted
    configmap "etc-clickhouse-operator-templatesd-files" deleted
    configmap "etc-clickhouse-operator-usersd-files" deleted
    deployment.apps "clickhouse-operator" deleted
    service "clickhouse-operator-metrics" deleted
    
  3. To verify the Altinity Kubernetes Operator has been removed, list the pods in the kube-system namespace:

    kubectl get pods --namespace kube-system
    
    NAME                               READY   STATUS    RESTARTS      AGE
    coredns-78fcd69978-nthp2           1/1     Running   4 (23h ago)   51d
    etcd-minikube                      1/1     Running   4 (23h ago)   51d
    kube-apiserver-minikube            1/1     Running   4 (23h ago)   51d
    kube-controller-manager-minikube   1/1     Running   4 (23h ago)   51d
    kube-proxy-lsggn                   1/1     Running   4 (23h ago)   51d
    kube-scheduler-minikube            1/1     Running   4 (23h ago)   51d
    storage-provisioner                1/1     Running   9 (23h ago)   51d
    

4.2 - Kubernetes Install Guide

How to install Kubernetes in different environments

Kubernetes and Zookeeper form the backbone of running the Altinity Kubernetes Operator in a cluster. The following guides detail how to set up Kubernetes in different environments.

4.2.1 - Install minikube for Linux

How to install Kubernetes through minikube

One popular option for installing Kubernetes is through minikube, which creates a local Kubernetes cluster for different environments. Test scripts and examples for the clickhouse-operator are based on using minikube to set up the Kubernetes environment.

The following guide demonstrates how to install minikube in a way that supports the clickhouse-operator for the following operating systems:

  • Linux (Deb based)

Minikube Installation for Deb Based Linux

The following instructions assume an installation on x86-64 based Linux that uses Deb packages. Please see the referenced documentation for instructions for other Linux distributions and platforms.

To install minikube that supports running clickhouse-operator:

kubectl Installation for Deb

The following instructions are based on Install and Set Up kubectl on Linux

  1. Download the kubectl binary:

    curl -LO 'https://dl.k8s.io/release/v1.22.0/bin/linux/amd64/kubectl'
    
  2. Verify the SHA-256 hash:

    curl -LO "https://dl.k8s.io/v1.22.0/bin/linux/amd64/kubectl.sha256"
    
    echo "$(<kubectl.sha256) kubectl" | sha256sum --check
    
  3. Install kubectl into the /usr/local/bin directory (this assumes that your PATH includes /usr/local/bin):

    sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
    
  4. Verify the installation and the version:

    kubectl version
    

Install Docker for Deb

These instructions are based on Docker’s documentation Install Docker Engine on Ubuntu

  1. Install the Docker repository links.

    1. Update the apt-get repository:

      sudo apt-get update
      
  2. Install the prerequisites ca-certificates, curl, gnupg, and lsb-release:

    sudo apt-get install -y ca-certificates curl gnupg lsb-release
    
  3. Add the Docker repository keys:

    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --yes --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
    
    1. Add the Docker repository:

      echo \
      "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
      $(lsb_release -cs) stable" |sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
      
  4. Install Docker:

    1. Update the apt-get repository:

      sudo apt-get update
      
    2. Install Docker and other libraries:

    sudo apt install docker-ce docker-ce-cli containerd.io
    
  5. Add non-root accounts to the docker group. This allows these users to run Docker commands without requiring root access.

    1. Add the current user to the docker group and activate the changes to the group:

      sudo usermod -aG docker $USER && newgrp docker
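
      As an optional check (not part of the original steps), confirm that Docker commands now work without sudo by running the hello-world image:

      docker run hello-world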
      

Install Minikube for Deb

The following instructions are taken from minikube start.

  1. Update the apt-get repository:

    sudo apt-get update
    
  2. Install the prerequisite conntrack:

    sudo apt install conntrack
    
  3. Download minikube:

    curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
    
  4. Install minikube:

    sudo install minikube-linux-amd64 /usr/local/bin/minikube
    
  5. To correct issues with the kube-proxy and the storage-provisioner, set nf_conntrack_max=524288 before starting minikube:

    sudo sysctl net/netfilter/nf_conntrack_max=524288
    
  6. Start minikube:

    minikube start && echo "ok: started minikube successfully"
    
  7. Once installation is complete, verify that the user owns the ~/.kube and ~/.minikube directories:

    sudo chown -R $USER:$USER ~/.kube

    sudo chown -R $USER:$USER ~/.minikube
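
    As a final optional check (not part of the original steps), confirm that kubectl can reach the new minikube cluster:

    kubectl get nodes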
    

4.2.2 - Altinity Kubernetes Operator on GKE

How to install the Altinity Kubernetes Operator using Google Kubernetes Engine

Organizations can host their Altinity Kubernetes Operator on the Google Kubernetes Engine (GKE). This can be done either through Altinity.Cloud or through a separate installation on GKE.

To set up a basic Altinity Kubernetes Operator environment, use the following steps. The steps below use the current free Google Cloud services to set up a minimally viable Kubernetes environment running ClickHouse.

Prerequisites

  1. Register a Google Cloud Account: https://cloud.google.com/.
  2. Create a Google Cloud project: https://cloud.google.com/resource-manager/docs/creating-managing-projects
  3. Install gcloud and run gcloud init or gcloud init --console to set up your environment: https://cloud.google.com/sdk/docs/install
  4. Enable the Google Compute Engine: https://cloud.google.com/endpoints/docs/openapi/enable-api
  5. Enable GKE on your project: https://console.cloud.google.com/apis/enableflow?apiid=container.googleapis.com.
  6. Select a default Compute Engine zone.
  7. Select a default Compute Engine region (example commands for both are shown after this list).
  8. Install kubectl on your local system. For sample instructions, see the Minikube on Linux installation instructions.
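
As a sketch of steps 6 and 7 (assuming the us-west1 region and us-west1-a zone used later in this guide - substitute your own), the defaults can be set with gcloud config:

    gcloud config set compute/region us-west1
    gcloud config set compute/zone us-west1-a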

Altinity Kubernetes Operator on GKE Installation instructions

Installing the Altinity Kubernetes Operator in GKE has the following main steps, each covered in a section below:

Create the Network

The first step in setting up the Altinity Kubernetes Operator in GKE is creating the network. The complete details can be found on the Google Cloud documentation site regarding the gcloud compute networks create command. The following command will create a network called kubernetes-1 that will work for our minimal Altinity Kubernetes Operator cluster. Note that this network will not be available to external networks unless additional steps are taken. Consult the Google Cloud documentation site for more details.

  1. See a list of current networks available. In this example, there are no networks setup in this project:

    gcloud compute networks list
    NAME     SUBNET_MODE  BGP_ROUTING_MODE  IPV4_RANGE  GATEWAY_IPV4
    default  AUTO         REGIONAL
    
  2. Create the network in your Google Cloud project:

    gcloud compute networks create kubernetes-1 --bgp-routing-mode regional --subnet-mode custom
    Created [https://www.googleapis.com/compute/v1/projects/betadocumentation/global/networks/kubernetes-1].
    NAME          SUBNET_MODE  BGP_ROUTING_MODE  IPV4_RANGE  GATEWAY_IPV4
    kubernetes-1  CUSTOM       REGIONAL
    
    Instances on this network will not be reachable until firewall rules
    are created. As an example, you can allow all internal traffic between
    instances as well as SSH, RDP, and ICMP by running:
    
    $ gcloud compute firewall-rules create <FIREWALL_NAME> --network kubernetes-1 --allow tcp,udp,icmp --source-ranges <IP_RANGE>
    $ gcloud compute firewall-rules create <FIREWALL_NAME> --network kubernetes-1 --allow tcp:22,tcp:3389,icmp
    
  3. Verify its creation:

    gcloud compute networks list
    NAME          SUBNET_MODE  BGP_ROUTING_MODE  IPV4_RANGE  GATEWAY_IPV4
    default       AUTO         REGIONAL
    kubernetes-1  CUSTOM       REGIONAL
    

Create the Cluster

Now that the network has been created, we can set up our cluster. The following cluster uses the e2-micro machine type - this is still within the free tier, and gives just enough power to run our basic cluster. The cluster will be called cluster-1, but you can replace that with whatever name you feel is appropriate. It uses the kubernetes-1 network specified earlier and creates a new subnet for the cluster under k-subnet-1.

To create and launch the cluster:

  1. Verify the existing clusters with the gcloud command. For this example there are no pre-existing clusters.

    gcloud container clusters list
    
  2. From the command line, issue the following gcloud command to create the cluster:

    gcloud container clusters create cluster-1 --region us-west1 --node-locations us-west1-a --machine-type e2-micro --network kubernetes-1 --create-subnetwork name=k-subnet-1 --enable-ip-alias &
    
  3. Use the clusters list command to verify when the cluster is available for use:

    gcloud container clusters list
    Created [https://container.googleapis.com/v1/projects/betadocumentation/zones/us-west1/clusters/cluster-1].
    To inspect the contents of your cluster, go to: https://console.cloud.google.com/kubernetes/workload_/gcloud/us-west1/cluster-1?project=betadocumentation
    kubeconfig entry generated for cluster-1.
    NAME       LOCATION  MASTER_VERSION   MASTER_IP      MACHINE_TYPE  NODE_VERSION     NUM_NODES  STATUS
    cluster-1  us-west1  1.21.6-gke.1500  35.233.231.36  e2-micro      1.21.6-gke.1500  3          RUNNING
    NAME       LOCATION  MASTER_VERSION   MASTER_IP      MACHINE_TYPE  NODE_VERSION     NUM_NODES  STATUS
    cluster-1  us-west1  1.21.6-gke.1500  35.233.231.36  e2-micro      1.21.6-gke.1500  3          RUNNING
    [1]+  Done                    gcloud container clusters create cluster-1 --region us-west1 --node-locations us-west1-a --machine-type e2-micro --network kubernetes-1 --create-subnetwork name=k-subnet-1 --enable-ip-alias
    

Get Cluster Credentials

Importing the cluster credentials into your kubectl environment will allow you to issue commands directly to the cluster on Google Cloud. To import the cluster credentials:

  1. Retrieve the credentials for the newly created cluster:

    gcloud container clusters get-credentials cluster-1 --region us-west1 --project betadocumentation
    Fetching cluster endpoint and auth data.
    kubeconfig entry generated for cluster-1.
    
  2. Verify the cluster information from the kubectl environment:

    kubectl cluster-info
    Kubernetes control plane is running at https://35.233.231.36
    GLBCDefaultBackend is running at https://35.233.231.36/api/v1/namespaces/kube-system/services/default-http-backend:http/proxy
    KubeDNS is running at https://35.233.231.36/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
    Metrics-server is running at https://35.233.231.36/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy
    
    To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
    

Install the Altinity ClickHouse Operator

Our cluster is up and ready to go. Time to install the Altinity Kubernetes Operator through the following steps. Note that we are specifying the version of the Altinity Kubernetes Operator to install. This ensures maximum compatibility with your applications and other Kubernetes environments.

As of the time of this article, the most current version is 0.18.1

  1. Apply the Altinity Kubernetes Operator manifest by either downloading it and applying it, or referring to the GitHub repository URL. For more information, see the Altinity Kubernetes Operator Installation Guides.

    kubectl apply -f https://github.com/Altinity/clickhouse-operator/raw/0.18.1/deploy/operator/clickhouse-operator-install-bundle.yaml
    
  2. Verify the installation by running:

    kubectl get pods --namespace kube-system
    NAME                                                  READY   STATUS    RESTARTS   AGE
    clickhouse-operator-77b54889b4-g98kk                  2/2     Running   0          53s
    event-exporter-gke-5479fd58c8-7h6bn                   2/2     Running   0          108s
    fluentbit-gke-b29c2                                   2/2     Running   0          79s
    fluentbit-gke-k8f2n                                   2/2     Running   0          80s
    fluentbit-gke-vjlqh                                   2/2     Running   0          80s
    gke-metrics-agent-4ttdt                               1/1     Running   0          79s
    gke-metrics-agent-qf24p                               1/1     Running   0          80s
    gke-metrics-agent-szktc                               1/1     Running   0          80s
    konnectivity-agent-564f9f6c5f-59nls                   1/1     Running   0          40s
    konnectivity-agent-564f9f6c5f-9nfnl                   1/1     Running   0          40s
    konnectivity-agent-564f9f6c5f-vk7l8                   1/1     Running   0          97s
    konnectivity-agent-autoscaler-5c49cb58bb-zxzlp        1/1     Running   0          97s
    kube-dns-697dc8fc8b-ddgrx                             4/4     Running   0          98s
    kube-dns-697dc8fc8b-fpnps                             4/4     Running   0          71s
    kube-dns-autoscaler-844c9d9448-pqvqr                  1/1     Running   0          98s
    kube-proxy-gke-cluster-1-default-pool-fd104f22-8rx3   1/1     Running   0          36s
    kube-proxy-gke-cluster-1-default-pool-fd104f22-gnd0   1/1     Running   0          29s
    kube-proxy-gke-cluster-1-default-pool-fd104f22-k2sv   1/1     Running   0          12s
    l7-default-backend-69fb9fd9f9-hk7jq                   1/1     Running   0          107s
    metrics-server-v0.4.4-857776bc9c-bs6sl                2/2     Running   0          44s
    pdcsi-node-5l9vf                                      2/2     Running   0          79s
    pdcsi-node-gfwln                                      2/2     Running   0          79s
    pdcsi-node-q6scz                                      2/2     Running   0          80s
    

Create a Simple ClickHouse Cluster

The Altinity Kubernetes Operator allows the easy creation and modification of ClickHouse clusters in whatever format works best for your organization. Now that the Google Cloud cluster is running and has the Altinity Kubernetes Operator installed, let’s create a very simple ClickHouse cluster to test on.

The following example will create an Altinity Kubernetes Operator controlled cluster with 1 shard and 1 replica, 500 MB of persistent storage, and a user named demo whose password is topsecret. For more information on customizing the Altinity Kubernetes Operator, see the Altinity Kubernetes Operator Configuration Guides.

  1. Create the following manifest and save it as gcp-example01.yaml.

    
    apiVersion: "clickhouse.altinity.com/v1"
    kind: "ClickHouseInstallation"
    metadata:
      name: "gcp-example"
    spec:
      configuration:
        # What does my cluster look like?
        clusters:
          - name: "gcp-example"
            layout:
              shardsCount: 1
              replicasCount: 1
            templates:
              podTemplate: clickhouse-stable
              volumeClaimTemplate: pd-ssd
        # Where is Zookeeper?
        zookeeper:
          nodes:
            - host: zookeeper.zoo1ns
              port: 2181
        # What are my users?
        users:
          # Password = topsecret
          demo/password_sha256_hex: 53336a676c64c1396553b2b7c92f38126768827c93b64d9142069c10eda7a721
          demo/profile: default
          demo/quota: default
          demo/networks/ip:
            - 0.0.0.0/0
            - ::/0
      templates:
        podTemplates:
          # What is the definition of my server?
          - name: clickhouse-stable
            spec:
              containers:
                - name: clickhouse
                  image: altinity/clickhouse-server:21.8.10.1.altinitystable
            # Keep servers on separate nodes!
            podDistribution:
              - scope: ClickHouseInstallation
                type: ClickHouseAntiAffinity
        volumeClaimTemplates:
          # How much storage and which type on each node?
          - name: pd-ssd
            # Do not delete PVC if installation is dropped.
            reclaimPolicy: Retain
            spec:
              accessModes:
                - ReadWriteOnce
              resources:
                requests:
                  storage: 500Mi
  2. Create a namespace in your GKE environment. For this example, we will be using test:

    kubectl create namespace test
    namespace/test created
    
  3. Apply the manifest to the namespace:

    kubectl -n test apply -f gcp-example01.yaml
    clickhouseinstallation.clickhouse.altinity.com/gcp-example created
    
  4. Verify the installation is complete when the status shows Completed:

    kubectl -n test get chi -o wide
    NAME          VERSION   CLUSTERS   SHARDS   HOSTS   TASKID                                 STATUS      UPDATED   ADDED   DELETED   DELETE   ENDPOINT
    gcp-example   0.18.1    1          1        1       f859e396-e2de-47fd-8016-46ad6b0b8508   Completed             1                          clickhouse-gcp-example.test.svc.cluster.local
    

Login to the Cluster

This example does not have any open external ports, but we can still access our ClickHouse database through kubectl exec. In this case, the specific pod we are connecting to is chi-gcp-example-gcp-example-0-0-0. Replace this with the name of your pod.

Use the following procedure to verify the Altinity Stable build installed in your GKE environment.

  1. Login to the clickhouse-client in one of your existing pods:

    kubectl -n test exec -it chi-gcp-example-gcp-example-0-0-0 -- clickhouse-client
    
  2. Verify the cluster configuration:

    kubectl -n test exec -it chi-gcp-example-gcp-example-0-0-0  -- clickhouse-client -q "SELECT * FROM system.clusters  FORMAT PrettyCompactNoEscapes"
    ┌─cluster──────────────────────────────────────┬─shard_num─┬─shard_weight─┬─replica_num─┬─host_name───────────────────────┬─host_address─┬─port─┬─is_local─┬─user────┬─default_database─┬─errors_count─┬─slowdowns_count─┬─estimated_recovery_time─┐
     all-replicated                                         1             1            1   chi-gcp-example-gcp-example-0-0   127.0.0.1      9000         1   default                                0                0                        0
     all-sharded                                            1             1            1   chi-gcp-example-gcp-example-0-0   127.0.0.1      9000         1   default                                0                0                        0
     gcp-example                                            1             1            1   chi-gcp-example-gcp-example-0-0   127.0.0.1      9000         1   default                                0                0                        0
     test_cluster_two_shards                                1             1            1   127.0.0.1                         127.0.0.1      9000         1   default                                0                0                        0
     test_cluster_two_shards                                2             1            1   127.0.0.2                         127.0.0.2      9000         0   default                                0                0                        0
     test_cluster_two_shards_internal_replication           1             1            1   127.0.0.1                         127.0.0.1      9000         1   default                                0                0                        0
     test_cluster_two_shards_internal_replication           2             1            1   127.0.0.2                         127.0.0.2      9000         0   default                                0                0                        0
     test_cluster_two_shards_localhost                      1             1            1   localhost                         127.0.0.1      9000         1   default                                0                0                        0
     test_cluster_two_shards_localhost                      2             1            1   localhost                         127.0.0.1      9000         1   default                                0                0                        0
     test_shard_localhost                                   1             1            1   localhost                         127.0.0.1      9000         1   default                                0                0                        0
     test_shard_localhost_secure                            1             1            1   localhost                         127.0.0.1      9440         0   default                                0                0                        0
     test_unavailable_shard                                 1             1            1   localhost                         127.0.0.1      9000         1   default                                0                0                        0
     test_unavailable_shard                                 2             1            1   localhost                         127.0.0.1         1         0   default                                0                0                        0
    └──────────────────────────────────────────────┴───────────┴──────────────┴─────────────┴─────────────────────────────────┴──────────────┴──────┴──────────┴─────────┴──────────────────┴──────────────┴─────────────────┴─────────────────────────┘
    
  3. Exit out of your cluster:

    chi-gcp-example-gcp-example-0-0-0.chi-gcp-example-gcp-example-0-0.test.svc.cluster.local :) exit
    Bye.
    

Further Steps

This simple example demonstrates how to build and manage an Altinity Kubernetes Operator-run ClickHouse cluster. Further steps would be to open the cluster to external network connections, set up replication schemes, and so on.
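As a quick, non-invasive way to try access from outside the cluster before changing the cluster definition, you can forward the ClickHouse HTTP port to your workstation. This is a minimal sketch, assuming the default HTTP port 8123 and the clickhouse-gcp-example service in the test namespace shown earlier; depending on how user access is restricted in your installation, you may also need to supply credentials:

    # Forward the ClickHouse HTTP port from the cluster service to localhost
    kubectl -n test port-forward service/clickhouse-gcp-example 8123:8123

    # In another terminal, run a test query over HTTP
    curl 'http://localhost:8123/?query=SELECT%201'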

For more information, see the Altinity Kubernetes Operator guides and the Altinity Kubernetes Operator repository.

4.3 - Operator Guide

Installation and Management of clickhouse-operator for Kubernetes

The Altinity Kubernetes Operator is an open source project managed and maintained by Altinity Inc. This Operator Guide is created to help users with installation, configuration, maintenance, and other important tasks.

4.3.1 - Installation Guide

Basic and custom installation instructions of the clickhouse-operator

Depending on your organization and its needs, there are different ways of installing the Kubernetes clickhouse-operator.

4.3.1.1 - Basic Installation Guide

The simple method of installing the Altinity Kubernetes Operator

Requirements

The Altinity Kubernetes Operator for Kubernetes has the following requirements:

Instructions

To install the Altinity Kubernetes Operator for Kubernetes:

  1. Deploy the Altinity Kubernetes Operator from the manifest directly from GitHub. It is recommended that the version be specified during installation - this ensures maximum compatibility and that all replicated environments work from the same version. For more information on installing other versions of the Altinity Kubernetes Operator, see the specific Version Installation Guide.

    The most current version is 0.18.3:

kubectl apply -f https://github.com/Altinity/clickhouse-operator/raw/0.18.3/deploy/operator/clickhouse-operator-install-bundle.yaml
  2. The following will be displayed on a successful installation.
    For more information on the resources created in the installation,
    see the Altinity Kubernetes Operator Resources guide.
customresourcedefinition.apiextensions.k8s.io/clickhouseinstallations.clickhouse.altinity.com created
customresourcedefinition.apiextensions.k8s.io/clickhouseinstallationtemplates.clickhouse.altinity.com created
customresourcedefinition.apiextensions.k8s.io/clickhouseoperatorconfigurations.clickhouse.altinity.com created
serviceaccount/clickhouse-operator created
clusterrole.rbac.authorization.k8s.io/clickhouse-operator-kube-system created
clusterrolebinding.rbac.authorization.k8s.io/clickhouse-operator-kube-system created
configmap/etc-clickhouse-operator-files created
configmap/etc-clickhouse-operator-confd-files created
configmap/etc-clickhouse-operator-configd-files created
configmap/etc-clickhouse-operator-templatesd-files created
configmap/etc-clickhouse-operator-usersd-files created
deployment.apps/clickhouse-operator created
service/clickhouse-operator-metrics created
  3. Verify the installation by running:
kubectl get pods --namespace kube-system

A similar result to the following will be displayed on a successful installation,
with details specific to your environment:

NAME                                   READY   STATUS    RESTARTS      AGE
clickhouse-operator-857c69ffc6-ttnsj   2/2     Running   0             4s
coredns-78fcd69978-nthp2               1/1     Running   4 (23h ago)   51d
etcd-minikube                          1/1     Running   4 (23h ago)   51d
kube-apiserver-minikube                1/1     Running   4 (23h ago)   51d
kube-controller-manager-minikube       1/1     Running   4 (23h ago)   51d
kube-proxy-lsggn                       1/1     Running   4 (23h ago)   51d
kube-scheduler-minikube                1/1     Running   4 (23h ago)   51d
storage-provisioner                    1/1     Running   9 (23h ago)   51d
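To confirm which operator image is actually running, you can also inspect the deployment created by the install bundle. This is a quick check; the deployment name clickhouse-operator comes from the installation output above:

kubectl -n kube-system get deployment clickhouse-operator \
  -o jsonpath='{.spec.template.spec.containers[*].image}'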

4.3.1.2 - Custom Installation Guide

How to install a customized Altinity Kubernetes Operator

Users who need to customize the Altinity Kubernetes Operator namespace, or who
cannot connect directly to GitHub from the installation environment,
can perform a custom install.

Requirements

The Altinity Kubernetes Operator for Kubernetes has the following requirements:

Instructions

Script Install into Namespace

By default, the Altinity Kubernetes Operator is installed into the kube-system
namespace when using the Basic Installation instructions.
To install into a different namespace, use the following command, replacing {custom_namespace_here}
with the namespace to use:

curl -s https://raw.githubusercontent.com/Altinity/clickhouse-operator/master/deploy/operator-web-installer/clickhouse-operator-install.sh | OPERATOR_NAMESPACE={custom_namespace_here} bash

For example, to install into the namespace test-clickhouse-operator
namespace, use:

curl -s https://raw.githubusercontent.com/Altinity/clickhouse-operator/master/deploy/operator-web-installer/clickhouse-operator-install.sh | OPERATOR_NAMESPACE=test-clickhouse-operator bash
Setup ClickHouse Operator into 'test-clickhouse-operator' namespace
No 'test-clickhouse-operator' namespace found. Going to create
namespace/test-clickhouse-operator created
customresourcedefinition.apiextensions.k8s.io/clickhouseinstallations.clickhouse.altinity.com created
customresourcedefinition.apiextensions.k8s.io/clickhouseinstallationtemplates.clickhouse.altinity.com created
customresourcedefinition.apiextensions.k8s.io/clickhouseoperatorconfigurations.clickhouse.altinity.com created
serviceaccount/clickhouse-operator created
clusterrole.rbac.authorization.k8s.io/clickhouse-operator-test-clickhouse-operator configured
clusterrolebinding.rbac.authorization.k8s.io/clickhouse-operator-test-clickhouse-operator configured
configmap/etc-clickhouse-operator-files created
configmap/etc-clickhouse-operator-confd-files created
configmap/etc-clickhouse-operator-configd-files created
configmap/etc-clickhouse-operator-templatesd-files created
configmap/etc-clickhouse-operator-usersd-files created
deployment.apps/clickhouse-operator created
service/clickhouse-operator-metrics created

If no OPERATOR_NAMESPACE value is set, then the Altinity Kubernetes Operator will
be installed into kube-system.
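If the test-clickhouse-operator namespace was created only to try out the install, it can be removed once you are done. Note that this deletes the operator and every other resource in that namespace:

kubectl delete namespace test-clickhouse-operator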

Manual Install into Namespace

Organizations that cannot access GitHub directly from the environment where the Altinity Kubernetes Operator is being installed can perform a manual install through the following steps:

  1. Download the install template file: clickhouse-operator-install-template.yaml.

  2. Edit the file and set the OPERATOR_NAMESPACE value (a sed sketch for this step follows the list).

  3. Use the following command, replacing {your file name} with the name of your YAML file:

    kubectl apply -f {your file name}
    

    For example:

    kubectl apply -f customtemplate.yaml
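A minimal sketch of step 2 above, assuming the template uses the ${OPERATOR_NAMESPACE} placeholder shown in the envsubst-based alternative below (GNU sed syntax; adjust the namespace to your own):

    # Replace the namespace placeholder in the downloaded template in place
    sed -i 's/\${OPERATOR_NAMESPACE}/test-clickhouse-operator/g' clickhouse-operator-install-template.yaml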
    

Alternatively, instead of using the install template, enter the following into your console
(bash is used below, modify depending on your particular shell).
Change the OPERATOR_NAMESPACE value to match your namespace.

# Namespace to install operator into
OPERATOR_NAMESPACE="${OPERATOR_NAMESPACE:-clickhouse-operator}"
# Namespace to install metrics-exporter into
METRICS_EXPORTER_NAMESPACE="${OPERATOR_NAMESPACE}"

# Operator's docker image
OPERATOR_IMAGE="${OPERATOR_IMAGE:-altinity/clickhouse-operator:latest}"
# Metrics exporter's docker image
METRICS_EXPORTER_IMAGE="${METRICS_EXPORTER_IMAGE:-altinity/metrics-exporter:latest}"

# Setup Altinity Kubernetes Operator into specified namespace
kubectl apply --namespace="${OPERATOR_NAMESPACE}" -f <( \
    curl -s https://raw.githubusercontent.com/Altinity/clickhouse-operator/master/deploy/operator/clickhouse-operator-install-template.yaml | \
        OPERATOR_IMAGE="${OPERATOR_IMAGE}" \
        OPERATOR_NAMESPACE="${OPERATOR_NAMESPACE}" \
        METRICS_EXPORTER_IMAGE="${METRICS_EXPORTER_IMAGE}" \
        METRICS_EXPORTER_NAMESPACE="${METRICS_EXPORTER_NAMESPACE}" \
        envsubst \
)
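For repeatable installs you may prefer to pin specific image tags rather than rely on latest before running the command above. A minimal sketch; the 0.18.3 tags are illustrative, so substitute the version you have validated:

# Pin the operator and metrics-exporter images (tags are illustrative)
export OPERATOR_IMAGE="altinity/clickhouse-operator:0.18.3"
export METRICS_EXPORTER_IMAGE="altinity/metrics-exporter:0.18.3"
export OPERATOR_NAMESPACE="clickhouse-operator"
# ...then run the kubectl apply / envsubst command shown above.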

Verify Installation

To verify the Altinity Kubernetes Operator is running in your namespace, use the following command:

kubectl get pods -n clickhouse-operator
NAME                                   READY   STATUS    RESTARTS   AGE
clickhouse-operator-5d9496dd48-8jt8h   2/2     Running   0          16s
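If the pod does not reach the Running state, or to see what the operator is doing, you can check its logs. This assumes the main container inside the pod is named clickhouse-operator, which matches the install bundle used here:

kubectl logs -n clickhouse-operator deployment/clickhouse-operator -c clickhouse-operator --tail=20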

4.3.1.3 - Source Build Guide - 0.18 and Up

How to build the Altinity Kubernetes Operator from source code

Organizations that prefer to build the software directly from source code
can compile the Altinity Kubernetes Operator and install it into a Docker
container through the following process. This procedure applies to Altinity Kubernetes Operator versions 0.18.0 and up.

Binary Build

Binary Build Requirements

  • go-lang compiler: Go.
  • Go mod Package Manager.
  • The source code from the Altinity Kubernetes Operator repository.
    This can be downloaded using git clone https://github.com/altinity/clickhouse-operator.

Binary Build Instructions

  1. Switch working dir to clickhouse-operator.

  2. Install the Go compiler if it is not already available. For example, on Debian or Ubuntu: echo {root_password} | sudo -S -k apt install -y golang.

  3. Build the sources with go build -o ./clickhouse-operator cmd/operator/main.go.

This creates the Altinity Kubernetes Operator binary. This binary is only used
within a kubernetes environment.
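Putting the steps above together, a minimal build session might look like this:

    # Clone the source and build the operator binary
    git clone https://github.com/altinity/clickhouse-operator
    cd clickhouse-operator
    go build -o ./clickhouse-operator cmd/operator/main.go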

Docker Image Build and Usage

Docker Build Requirements

Install Docker Buildx CLI plugin

  1. Download the Docker Buildx binary from the releases page on GitHub

  2. Create folder structure for plugin

    mkdir -p ~/.docker/cli-plugins/
    
  3. Rename the relevant binary and copy it to the destination matching your OS

    mv buildx-v0.7.1.linux-amd64  ~/.docker/cli-plugins/docker-buildx
    
  4. On Unix environments, it may also be necessary to make it executable with chmod +x:

    chmod +x ~/.docker/cli-plugins/docker-buildx
    
  5. Set buildx as the default builder

    docker buildx install
    
  6. Create the config.json file if it does not already exist

    touch ~/.docker/config.json
    
  7. Add the experimental setting to config.json to enable the plugin

    echo '{"experimental": "enabled"}' >> ~/.docker/config.json
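    # Optional sanity check: confirm Docker can now see the buildx plugin
    docker buildx version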
    

Docker Build Instructions

  1. Switch working dir to clickhouse-operator

  2. Build docker image with docker: docker build -f dockerfile/operator/Dockerfile -t altinity/clickhouse-operator:dev .

  3. Register the freshly built Docker image inside the Kubernetes environment (minikube in this example) with the following:

    docker save altinity/clickhouse-operator | (eval $(minikube docker-env) && docker load)
    
  4. Install the Altinity Kubernetes Operator as described in either the Basic Installation Guide
    or the Custom Installation Guide.
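To have that installation use the locally built image, one option is to point the OPERATOR_IMAGE variable from the Custom Installation Guide at the dev tag. A sketch, assuming the image was loaded into your cluster as in step 3:

    # Use the locally built image with the template-based install
    export OPERATOR_IMAGE="altinity/clickhouse-operator:dev"
    # ...then run the template-based install command from the Custom Installation Guide.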

4.3.1.4 - Specific Version Installation Guide

How to install a specific version of the Altinity Kubernetes Operator

Users may want to install a specific version of the Altinity Kubernetes Operator to maintain parity between different environments, to preserve the version between replicas, or for other reasons.

The following procedures detail how to install a specific version of the Altinity Kubernetes Operator in the default Kubernetes namespace kube-system. For instructions on performing custom installations based on the namespace and other settings, see the Custom Installation Guide.

Requirements

The Altinity Kubernetes Operator for Kubernetes has the following requirements:

Instructions

Altinity Kubernetes Operator Versions After 0.17.0

To install a specific version of the Altinity Kubernetes Operator after version 0.17.0:

  1. Apply the manifest with kubectl, either directly from the GitHub Altinity Kubernetes Operator repository or by downloading the manifest and applying it locally. The format for the URL is:

    https://github.com/Altinity/clickhouse-operator/raw/{OPERATOR_VERSION}/deploy/operator/clickhouse-operator-install-bundle.yaml
    

    Replace the {OPERATOR_VERSION} with the version to install. For example, for the Altinity Kubernetes Operator version 0.18.3, the URL would be:

    https://github.com/Altinity/clickhouse-operator/raw/0.18.3/deploy/operator/clickhouse-operator-install-bundle.yaml

    The command to apply the manifest through kubectl is:

    kubectl apply -f https://github.com/Altinity/clickhouse-operator/raw/0.18.3/deploy/operator/clickhouse-operator-install-bundle.yaml
    
    customresourcedefinition.apiextensions.k8s.io/clickhouseinstallations.clickhouse.altinity.com configured
    customresourcedefinition.apiextensions.k8s.io/clickhouseinstallationtemplates.clickhouse.altinity.com configured
    customresourcedefinition.apiextensions.k8s.io/clickhouseoperatorconfigurations.clickhouse.altinity.com configured
    serviceaccount/clickhouse-operator created
    clusterrole.rbac.authorization.k8s.io/clickhouse-operator-kube-system created
    clusterrolebinding.rbac.authorization.k8s.io/clickhouse-operator-kube-system created
    configmap/etc-clickhouse-operator-files created
    configmap/etc-clickhouse-operator-confd-files created
    configmap/etc-clickhouse-operator-configd-files created
    configmap/etc-clickhouse-operator-templatesd-files created
    configmap/etc-clickhouse-operator-usersd-files created
    deployment.apps/clickhouse-operator created
    service/clickhouse-operator-metrics created
    
  2. Verify the installation is complete and the clickhouse-operator pod is running:

    kubectl get pods --namespace kube-system
    

    A similar result to the following will be displayed on a successful installation:

    NAME                                   READY   STATUS    RESTARTS      AGE
    clickhouse-operator-857c69ffc6-q8qrr   2/2     Running   0             5s
    coredns-78fcd69978-nthp2               1/1     Running   4 (23h ago)   51d
    etcd-minikube                          1/1     Running   4 (23h ago)   51d
    kube-apiserver-minikube                1/1     Running   4 (23h ago)   51d
    kube-controller-manager-minikube       1/1     Running   4 (23h ago)   51d
    kube-proxy-lsggn                       1/1     Running   4 (23h ago)   51d
    kube-scheduler-minikube                1/1     Running   4 (23h ago)   51d
    storage-provisioner                    1/1     Running   9 (23h ago)   51d
    
  3. To verify the version of the Altinity Kubernetes Operator, use the following command:

    kubectl get pods -l app=clickhouse-operator --all-namespaces -o