Settings and values used for Altinity.Cloud ClickHouse Clusters.
ClickHouse Clusters hosted on Altinity.Cloud have the following structural attributes. These determine options such as which version of ClickHouse is installed, how many replicas each shard has, and other important features.
| Setting | Description | Notes |
|---|---|---|
| Cluster Name | The name for this cluster. It is used as the hostname of the cluster. | Cluster names must be DNS compliant. |
| Node Type | Determines the number of CPUs and the amount of RAM used per node. | The listed Node Types are sample values and may be updated at any time. |
| Node Storage | The amount of storage space available to each node, in GB. | |
| Number of Volumes | Storage can be split across multiple volumes. The amount of data stored per node is the same as set in Node Storage, but it is split across multiple volumes. Separating storage into multiple volumes can increase query performance. | |
| Volume Type | Defines the Amazon Web Services volume class. Typically used to determine whether or not to encrypt the volumes. | |
| Number of Shards | Shards represent a set of nodes. Shards can be replicated to provide increased availability and computational power. | |
| ClickHouse Version | The version of the ClickHouse database that will be used on each node. To run a custom ClickHouse container version, specify the Docker image to use. | |
| ClickHouse Admin Name | The name of the ClickHouse administrative user. | Set to `admin` by default; cannot be changed. |
| ClickHouse Admin Password | The password for the ClickHouse administrative user. | |
| Data Replication | Toggles whether shards will be replicated. When enabled, Zookeeper is required to manage the shard replication process. | Enabled or Disabled. |
| Number of Replicas | Sets the number of replicas per shard. Only available when Data Replication is enabled. | |
| Zookeeper Configuration | When Data Replication is set to Enabled, Zookeeper is required. This setting determines how Zookeeper runs and manages shard replication, mainly by setting how many Zookeeper nodes are used to manage the shards. More Zookeeper nodes increase the availability of the cluster. | |
| Node Placement | Sets how nodes are distributed via Kubernetes, based on how robust you want your replicas and clusters to be. | |
| Enable Backups | Backs up the cluster. Backups can be restored in the event of data loss or to roll back to a previous version. | |
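The DNS-compliance requirement for cluster names can be checked before creating a cluster. A minimal sketch, assuming the standard RFC 1123 DNS label rules apply (the exact restrictions Altinity.Cloud enforces may differ):

```python
import re

# Assumed rules (RFC 1123 DNS label; Altinity.Cloud may add restrictions):
# 1-63 characters, lowercase letters, digits, and hyphens only,
# starting and ending with an alphanumeric character.
DNS_LABEL = re.compile(r"^[a-z0-9]([a-z0-9-]{0,61}[a-z0-9])?$")

def is_dns_compliant(name: str) -> bool:
    """Return True if `name` is a valid DNS label."""
    return bool(DNS_LABEL.match(name))
```

For example, `is_dns_compliant("prod-cluster-1")` passes, while names containing underscores, uppercase letters, or a leading hyphen do not.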
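The role of shards can be illustrated with a small sketch of key-based routing. This is not Altinity.Cloud or ClickHouse code — ClickHouse's Distributed table engine performs the equivalent routing server-side using the table's sharding key — but it shows the basic idea: a sharding key deterministically maps each row to one shard, so data and query load are spread across the shard set.

```python
from zlib import crc32

def shard_for(key: str, num_shards: int) -> int:
    """Hypothetical helper: map a sharding key to a shard index.

    ClickHouse's Distributed engine does this internally, taking the
    sharding_key expression modulo the total weight of the shards.
    """
    return crc32(key.encode()) % num_shards
```

Rows with the same key always land on the same shard, which is what makes key-local operations (such as per-user aggregations) efficient.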
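The note that more Zookeeper nodes increase availability follows from ZooKeeper's majority-quorum rule: an ensemble of n nodes stays available as long as a strict majority is up, so it tolerates floor((n - 1) / 2) node failures. A quick illustration (plain arithmetic, not Altinity.Cloud code):

```python
def tolerated_failures(ensemble_size: int) -> int:
    """ZooKeeper requires a strict majority of nodes to be up, so an
    ensemble of n nodes tolerates floor((n - 1) / 2) node failures."""
    return (ensemble_size - 1) // 2
```

A single Zookeeper node tolerates no failures, a 3-node ensemble tolerates 1, and a 5-node ensemble tolerates 2 — which is why larger Zookeeper configurations yield a more available cluster.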
Last modified 2021.01.29: Kubernetes and Cloud settings update.