
Altinity Software Releases

List of releases of Altinity Software.

Releases for all Altinity products for the last six months are listed below, organized by Product and Release Date. Please select either the Product or the specific Release Note to view a summary of the release note.

For all Release Notes for a product, select the product from the navigation bar to the right.

  • Altinity Backup for ClickHouse
  • Altinity Dashboard for Kubernetes
  • Altinity Kubernetes Operator
  • Altinity Plugin Grafana for ClickHouse
  • Altinity Stable for ClickHouse
  • Altinity Tableau Connector for ClickHouse
  • Altinity.Cloud Release Notes
  • ODBC Driver for ClickHouse Release Notes
    1 - Altinity Stable for ClickHouse

    Altinity Stable for ClickHouse Release Notes

    Most recent releases organized by major release and release date:

  • Altinity Stable for ClickHouse 22.8
  • Altinity Stable for ClickHouse 22.3
  • Altinity Stable for ClickHouse 21.8
  • Altinity Stable for ClickHouse 21.3
  • Altinity Stable for ClickHouse 21.1
  • Altinity Stable for ClickHouse 20.8
  • Altinity Stable for ClickHouse 20.3

    1.1 - Altinity Stable for ClickHouse 22.8

    1.1.1 - Altinity Stable for ClickHouse 22.8.13

    A few months ago, we certified the ClickHouse 22.3 release. It was delivered together with the Altinity Stable build for ClickHouse. Since then many things have happened to ClickHouse. On the Altinity side we continued to contribute features, but focused mainly on the build process and testing. We started testing the new ClickHouse LTS release 22.8 as soon as it was out in late August. It took us several months to confirm 22.8 is ready for production use and to make sure upgrades go smoothly. As of 22.8.13 we are confident in certifying 22.8 as an Altinity Stable release.

    This release is a significant upgrade since the previous Altinity Stable release. It includes more than 2000 pull requests from 280 contributors. Please look below for detailed release notes.

    Major new features in 22.8 since the previous stable release 22.3

    A new release introduces a lot of changes and new functions. It is very hard to pick the most essential ones, so refer to the full list in the Appendix. The following major features are worth mentioning on the front page:

    • SQL features:
      • GROUPING SETS (see the sketch after this list)
      • Experimental support for SQL DELETE (lightweight deletes) – partially contributed by Altinity
      • Interpolate extension for ORDER BY … WITH FILL
      • Support SELECT ... INTO OUTFILE and STDOUT.
      • INTERVAL support for all kinds of time intervals a)
      • Support for non-constant argument for LIKE, ILIKE and match functions
      • Support expressions with window functions
    • Security features:
      • Base58 encoding/decoding a)
      • Named collections were extended to cover more data sources
    • Replication and Cluster improvements:
      • Support inserts into system.zookeeper table
    • Integrations:
      • ‘mongodb’ table function to insert into and select from MongoDB databases
      • MeiliSearch engine integration (experimental)
      • New ‘direct’ join algorithm to work efficiently with key-value databases
    • Remote file system and object storage features:
      • Proper support of Google GCS for S3 table function a)
      • Write cache for remote file systems
      • Caches management commands: DESCRIBE CACHE, SHOW CACHES, SYSTEM DROP FILESYSTEM CACHE
    • Other:
      • Store metadata cache in RocksDB for faster startup times
      • Add simple chart visualization to the built-in Play interface
      • system.settings_changes table to track changes between versions (incomplete)
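
    For illustration, here is a minimal, hedged sketch of two of the SQL features above (GROUPING SETS and the experimental lightweight DELETE). The table and column names are hypothetical:

        -- Aggregate by several key combinations in a single pass
        SELECT region, product, sum(amount) AS total
        FROM sales
        GROUP BY GROUPING SETS ((region), (region, product));

        -- Lightweight DELETE is experimental in 22.8 and must be enabled explicitly
        SET allow_experimental_lightweight_delete = 1;
        DELETE FROM sales WHERE amount < 0;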

    As usual with ClickHouse, there are many performance and operational improvements in different server components.


    a) - contributed by Altinity developers.

    Major changes that require attention

    Ordinary database deprecation

    The Ordinary database engine and the old storage definition syntax for *MergeTree tables are deprecated. This has the following consequences:

    • If the system database has the Ordinary engine it will be automatically converted to Atomic on server startup.

    • default_database_engine setting is deprecated and does nothing. New databases are always created as Atomic ones if the engine is not specified.

    • By default it’s not possible to create new databases with the Ordinary engine. There is a setting to keep the old behavior: allow_deprecated_database_ordinary=1.

    • The allow_deprecated_syntax_for_merge_tree setting allows creating MergeTree tables with the old syntax, but both settings may be removed in future releases.

    • There is a way to convert existing databases from Ordinary to Atomic. In order to do that, create an empty convert_ordinary_to_atomic file in the flags directory and all Ordinary databases will be converted automatically on the next server start.
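
    Before upgrading, you can check which databases on a server still use the deprecated engine by querying system.databases:

        SELECT name, engine
        FROM system.databases
        WHERE engine = 'Ordinary';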

    Background pools

    Configuration of background pools has been moved from profile settings to server settings. However, due to a ClickHouse bug, pools need to be defined in both config.xml and users.xml.

    • If pools were modified in your ClickHouse cluster, you need to add a section in config.xml.
    • If you need to adjust pools in 22.8, you need to do it in two places now.

    For example:

    config.xml:
        <background_pool_size>18</background_pool_size>
        <background_move_pool_size>18</background_move_pool_size>
        <background_schedule_pool_size>18</background_schedule_pool_size>
        <background_fetches_pool_size>18</background_fetches_pool_size>
        <background_distributed_schedule_pool_size>18</background_distributed_schedule_pool_size>
        <background_common_pool_size>18</background_common_pool_size>
    

    And:

    users.xml:
    <profiles>
        <default>
            <background_pool_size>18</background_pool_size>
            <background_move_pool_size>18</background_move_pool_size>
            <background_schedule_pool_size>18</background_schedule_pool_size>
            <background_fetches_pool_size>18</background_fetches_pool_size>          
            <background_distributed_schedule_pool_size>18</background_distributed_schedule_pool_size>
        </default>
    </profiles>
    

    Zero-copy replication for remote file systems

    Zero-copy replication was accidentally turned on by default in 22.3; in 22.8 it is turned off. So if you were using Disk S3 on a replicated cluster, you will see an increase in S3 storage, double writes and double merges. This is still an experimental feature; it can be enabled with the allow_remote_fs_zero_copy_replication setting.
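
    allow_remote_fs_zero_copy_replication is a MergeTree-level setting, so its current value on a running server can be checked in system.merge_tree_settings:

        SELECT name, value
        FROM system.merge_tree_settings
        WHERE name = 'allow_remote_fs_zero_copy_replication';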

    Changes in s3 multipart upload

    Since 22.6, uploads to S3 are executed with more threads and in a more aggressive manner, which can create higher pressure on the network and saturate some S3-compatible storages (e.g. MinIO). Better control over this was added only in 22.9.

    Backward Incompatible Changes

    The following changes are backward incompatible and require user attention during an upgrade:

    • Extended range of Date32 and DateTime64 to support dates from the year 1900 to 2299. In previous versions, the supported interval was only from the year 1925 to 2283. The implementation is using the proleptic Gregorian calendar (which is conformant with ISO 8601:2004 (clause 3.2.1 The Gregorian calendar)) instead of accounting for historical transitions from the Julian to the Gregorian calendar. This change affects implementation-specific behavior for out-of-range arguments. E.g. if in previous versions the value of 1899-01-01 was clamped to 1925-01-01, in the new version it will be clamped to 1900-01-01. It changes the behavior of rounding with toStartOfInterval if you pass INTERVAL 3 QUARTER up to one quarter because the intervals are counted from an implementation-specific point of time.
    • Now, all relevant dictionary sources respect the remote_url_allow_hosts setting. It was already done for HTTP, Cassandra, Redis. Added ClickHouse, MongoDB, MySQL, PostgreSQL. Host is checked only for dictionaries created from DDL.
    • ClickHouse x86 binaries now require support for AVX instructions, i.e. a CPU not older than Intel Sandy Bridge / AMD Bulldozer, both released in 2011.
    • Make the remote filesystem cache composable: allow not to evict certain files (regarding idx, mrk, ..), delete the old cache version. Now it is possible to configure the cache over Azure Blob Storage disk, over Local disk, over StaticWeb disk, etc. This change is marked backward incompatible because the cache configuration changed and the config file needs to be updated for the cache to work. The old cache will still be used with the new configuration, and the server will start up fine with the old cache configuration.
    • Remove support for octal number literals in SQL. In previous versions they were parsed as Float64.
    • Settings that use seconds as their type are now parsed to support floating point values (for example: max_execution_time=0.5). Infinity or NaN values will throw an exception.
    • Changed format of binary serialization of columns of experimental type Object. New format is more convenient to implement by third-party clients.
    • LIKE patterns with a trailing escape character ('\') are now disallowed (as mandated by the SQL standard).
    • Do not allow SETTINGS after FORMAT for INSERT queries. There is a compatibility setting, parser_settings_after_format_compact, to accept such queries, but it is turned OFF by default.
    • Function yandexConsistentHash (consistent hashing algorithm by Konstantin “kostik” Oblakov) is renamed to kostikConsistentHash. The old name is left as an alias for compatibility. Although this change is backward compatible, we may remove the alias in subsequent releases, that’s why it’s recommended to update the usages of this function in your apps.
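
    Since the old name is kept as an alias, both spellings should return the same result for now; a quick sanity check before updating application code (assuming the alias is still present in your build):

        SELECT kostikConsistentHash(42, 8) = yandexConsistentHash(42, 8);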

    Upgrade Notes

    There were several changes between versions that may affect the rolling upgrade of big clusters. Upgrading only part of the cluster is not recommended.

    • If you run different ClickHouse versions on a cluster with AArch64 CPU or mix AArch64 and amd64 on a cluster, and use distributed queries with GROUP BY multiple keys of fixed-size type that fit in 256 bits but don’t fit in 64 bits, and the size of the result is huge, the data will not be fully aggregated in the result of these queries during upgrade. Workaround: upgrade with downtime instead of a rolling upgrade.

    Rolling upgrade from 20.4 and older is impossible because the “leader election” mechanism is removed from ReplicatedMergeTree.

    Known Issues in 22.8.x

    The development team continues to improve the quality of the 22.8 release. The following issues still exist in the 22.8.13 version and may affect ClickHouse operation. Please inspect them carefully to decide if those are applicable to your applications.

    General stability issues (fixed in Altinity.Stable build 22.8.13):

    Some queries that worked in 22.3 may fail in 22.8:

    There are a few performance regressions:

    You may also look into GitHub issues using the v22.8-affected label.

    Other Important Changes

    Several settings have changed their defaults:

    • The new default value for local_filesystem_read_method is pread_threadpool. In some cases this can lead to performance degradation (see Degraded performance with local_filesystem_read_method = pread_threadpool). If you notice that 22.8 reads data slower, set local_filesystem_read_method = pread.
    • allow_remote_fs_zero_copy_replication is now 0 (was 1)
    • format_csv_allow_single_quotes is now 0 (was 1) – when set to true, strings in single quotes are allowed.
    • distributed_ddl_entry_format_version is now 3 (was 1)
    • log_query_threads is now 0 (was 1)
    • join_algorithm is now ‘default’ (was ‘hash’) – ‘default’ means ‘hash’ or ‘direct’ if possible, so this change only applies to new ‘direct’ joins.
    • max_download_threads is now 4 – this affects the number of parallel threads for URL and S3 table engines and table functions
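
    Default changes like these can be reviewed on a running server with the new system.settings_changes table mentioned above (keeping in mind that the table is marked as incomplete in 22.8, so treat this as a sketch):

        SELECT *
        FROM system.settings_changes
        WHERE version = '22.8';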

    Some new ClickHouse features and optimizations are now enabled by default. It may lead to a change in behavior, so review those carefully and disable features that may affect your system:

    • allow_experimental_geo_types – allow geo data types such as Point, Ring, Polygon, MultiPolygon
    • output_format_json_named_tuples_as_objects – serialize named tuples as JSON objects in JSON formats.
    • collect_hash_table_stats_during_aggregation – enable collecting hash table statistics to optimize memory allocation, this may lead to performance degradation
    • enable_positional_arguments – enable positional arguments in ORDER BY, GROUP BY and LIMIT BY (see the example after this list)
    • input_format_skip_unknown_fields – skip columns with unknown names from input data (it works for JSONEachRow, -WithNames, -WithNamesAndTypes and TSKV formats). Previously, ClickHouse threw an exception.
    • odbc_bridge_use_connection_pooling – use connection pooling in ODBC bridge. If set to false, a new connection is created every time
    • optimize_distinct_in_order – enable DISTINCT optimization if some columns in DISTINCT form a prefix of sorting. For example, prefix of sorting key in merge tree or ORDER BY statement
    • optimize_multiif_to_if – rewrite a ‘multiIf’ with only one condition into ‘if’.
    • optimize_read_in_window_order – enable ORDER BY optimization in window clause for reading data in corresponding order in MergeTree tables.
    • optimize_sorting_by_input_stream_properties – optimize sorting by sorting properties of input stream
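
    For example, with enable_positional_arguments turned on, integer literals in GROUP BY and ORDER BY are treated as column positions, which can silently change the meaning of existing queries:

        SET enable_positional_arguments = 1;  -- the new default
        -- Groups and orders by the first SELECT column (number % 2), not by the constant 1
        SELECT number % 2 AS parity, count()
        FROM numbers(10)
        GROUP BY 1
        ORDER BY 1;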

    ClickHouse embedded monitoring has been more aggressive since 21.8. It collects host-level metrics and stores them every second in the system.asynchronous_metric_log table. This can be visible as an increase of background writes, storage usage, etc. To return to the old rate of metrics refresh / flush, adjust those settings in config.xml:

    <asynchronous_metrics_update_period_s>
        60
    </asynchronous_metrics_update_period_s>
    <asynchronous_metric_log>
        <flush_interval_milliseconds>
            60000
        </flush_interval_milliseconds>
    </asynchronous_metric_log>
    

    In the previous releases we recommended disabling optimize_on_insert. This recommendation stays for 22.8 as well, since inserts into Summing and AggregatingMergeTree tables can slow down.
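
    A minimal way to follow that recommendation for a single session is shown below; for production use it would normally go into a settings profile instead:

        SET optimize_on_insert = 0;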

    Changes in Altinity Stable build Compared to Community Build

    ClickHouse Altinity Stable builds are based on the community LTS versions. Altinity.Stable 22.8.13.21 is based on community 22.8.13.20-lts, but we have additionally backported several fixes:

    Let’s Install!

    Linux packages can be found at https://packages.clickhouse.com for community builds, and at https://builds.altinity.cloud for Altinity Stable builds.

    Note: The naming schema for Altinity.Stable build packages has been changed since 21.8.x.

    21.8.x:
        <package>_<ver>.altinitystable_all.deb
        <package>-<ver>.altinitystable-2.noarch.rpm

    22.3.x and later versions:
        <package>_<ver>.altinitystable_amd64.deb
        <package>-<ver>.altinitystable.x86_64.rpm

    Docker images for community versions have been moved from ‘yandex’ to ‘clickhouse’ organization, and should be referenced as ‘clickhouse/clickhouse-server:22.8’.
    Altinity Stable build images are available as ‘altinity/clickhouse-server:22.8.13.21.altinitystable’.

    Mac users are welcome to use Homebrew Formulae. Ready-to-use bottles are available for both M1 and Intel Macs running Monterey.

    For more information on installing ClickHouse from either the Altinity Builds or the Upstream Builds, see the ClickHouse Altinity Stable Release Build Install Guide.

    Please contact us at info@altinity.com if you experience any issues with the upgrade.

    Appendix

    New table functions

    • MeiliSearch – renamed to meilisearch (lowercase) in 22.9
    • mongodb
    • viewIfPermitted

    New table engines

    • MeiliSearch
    • NATS

    New functions

    • L2SquaredDistance, L2SquaredNorm, distanceL2Squared, normL2Squared
    • LAST_DAY, toLastDayOfMonth
    • addMicroseconds, addMilliseconds, addNanoseconds, subtractMicroseconds, subtractMilliseconds, subtractNanoseconds, toIntervalMicrosecond, toIntervalMillisecond, toIntervalNanosecond
    • arrayFirstOrNull, arrayLastOrNull
    • base58Decode, base58Encode
    • flattenTuple
    • getTypeSerializationStreams
    • Multiple h3* geo functions
    • hashid
    • isNullable
    • makeDate, makeDate32, makeDateTime, makeDateTime64
    • meiliMatch
    • minSampleSizeContinous, minSampleSizeConversion
    • nonNegativeDerivative
    • nowInBlock
    • nth_value
    • parseDateTime64BestEffortUS, parseDateTime64BestEffortUSOrNull, parseDateTime64BestEffortUSOrZero
    • parseTimeDelta
    • revision
    • showCertificate
    • toStartOfFiveMinutes, toStartOfMicrosecond, toStartOfMillisecond, toStartOfNanosecond
    • transactionID, transactionLatestSnapshot, transactionOldestSnapshot
    • wyHash64

    New formats

    • JSONColumns
    • JSONColumnsWithMetadata
    • JSONCompactColumns
    • JSONLines
    • MySQLDump
    • NDJSON
    • PrettyCompactNoEscapesMonoBlock
    • PrettyMonoBlock
    • PrettyNoEscapesMonoBlock
    • PrettySpaceMonoBlock
    • PrettySpaceNoEscapesMonoBlock
    • Prometheus
    • ProtobufList
    • SQLInsert

    New system tables

    • system.backups
    • system.certificates
    • system.filesystem_cache
    • system.merge_tree_metadata_cache
    • system.remote_data_paths
    • system.schema_inference_cache
    • system.settings_changes

    New metrics and events

    system.events

    Several groups of events were added:

    • Cache introspection events: Cached*
    • Kafka engine introspection events (finally!): Kafka*
    • MergeTree metadata cache events: MergeTreeMetadataCache*
    • Keeper events: Keeper*

    system.metrics

    • CacheDetachedFileSegments
    • CacheFileSegments
    • FilesystemCacheElements
    • FilesystemCacheReadBuffers
    • FilesystemCacheSize
    • KafkaAssignedPartitions
    • KafkaBackgroundReads
    • KafkaConsumers
    • KafkaConsumersInUse
    • KafkaConsumersWithAssignment
    • KafkaLibrdkafkaThreads
    • KafkaProducers
    • KafkaWrites
    • KeeperAliveConnections
    • KeeperOutstandingRequests
    • S3Requests

    Changes in default configuration files
    https://gist.github.com/filimonov/d60f749be2581b151e6bdaa830bc8111#file-default_config_changes-txt


    Also, please refer to the release notes from the development team available at the following URLs:

    1.2 - Altinity Stable for ClickHouse 22.3

    1.2.1 - Altinity Stable for ClickHouse 22.3.10

    Details of the Altinity Stable 22.3.10

    Release Notes

    Changes from ClickHouse v22.3.10.22-LTS

    Changes Compared to Community Build v22.3.10.22-lts:

    Bug Fixes

    • Fix for exponential time decaying window functions. Now respecting boundaries of the window #36944 by @excitoon (via #164)
    • Fixes for objects removal in S3ObjectStorage #37882 by @excitoon (via #164)
    • Fixed Unknown identifier (aggregate-function) exception #39762 by @quickhouse (via #189)
    • Fixed point of origin for exponential decay window functions #39593 by @quickhouse (via #190)
    • Reverted ‘Fix seeking while reading from encrypted disk’ https://github.com/ClickHouse/ClickHouse/pull/39687, known to cause issues in 22.3 (via #193)
    • Reverted: ‘Enable using constants from with and select in aggregate function parameters.’ https://github.com/ClickHouse/ClickHouse/pull/27531, causes problems on 22.3 (via #194)

    Improvements

    • Improvement: Optimized processing of ORDER BY in window functions #34632 by @excitoon (via #164)
    • Improvement: Support batch_delete capability for GCS #37659 by @frew (via #164)

    Build/Testing/Packaging Improvements

    • Build/Testing/Packaging Improvement: Allow Github workflows to run on Altinity’s infrastructure

    Changes from v22.3.8.39-lts to v22.3.10.22-lts

    Bug Fixes

    Other

    Full list of changes

    Available at:

    1.2.2 - Altinity Stable for ClickHouse 22.3.8

    A few months ago we certified 21.8 as an Altinity Stable release. It was delivered together with the Altinity Stable build for ClickHouse. Since then many things have happened to ClickHouse. In Altinity we continued to work on newer releases and run them in-house. We completed several new features, and many more have been added by community contributors. We started testing the new ClickHouse LTS release 22.3 as soon as it was out in late March. It took us more than three months to confirm 22.3 is ready for production use and to make sure upgrades go smoothly. As of 22.3.8 we are confident in certifying 22.3 as an Altinity Stable release.

    This release is a significant upgrade since the previous Altinity Stable release. It includes more than 3000 pull requests from 415 contributors. Please look below for detailed release notes.

    Major new features in 22.3 since the previous stable release 21.8

    A new release introduces a lot of changes and new functions. It is very hard to pick the most essential ones, so refer to the full list in the Appendix. The following major features are worth mentioning on the front page:

    • SQL features:
      • User defined functions as lambda expressions (see the sketch after this list).
      • User defined functions as external executables.
      • Schema inference for INSERT and SELECT from external data sources.
      • -Map combinator for Map data type.
      • WINDOW VIEW for stream processing (experimental). See our blog article “Battle of Views” comparing it to LIVE VIEW.
      • INTERSECT, EXCEPT, ANY, ALL, EXISTS operators.
      • EPHEMERAL columns.
      • Asynchronous inserts.
      • Support expressions in JOIN ON.
      • OPTIMIZE DEDUPLICATE on a subset of columns a).
      • COMMENT on schema objects a).
    • Security features:
      • Predefined named connections (or named collections) for external data sources. Can be used in table functions, dictionaries, table engines.
      • system.session_log table that tracks connections and login attempts a).
      • Disk-level encryption. See the meetup presentation for many interesting details.
      • Support server-side encryption keys for S3 a).
      • Support authentication of users connected via SSL by their X.509 certificate.
    • Replication and Cluster improvements:
      • ClickHouse Keeper – in-process ZooKeeper replacement – has been graduated to production-ready by the ClickHouse team. We also keep testing it on our side: the core functionality looks good and stable, some operational issues and edge cases still exist.
      • Automatic replica discovery – no need to alter remote_servers anymore.
      • Parallel reading from multiple replicas (experimental).
    • Dictionary features:
      • Array attributes, Nullable attributes.
      • New hashed_array dictionary layout that reduces RAM usage for big dictionaries (idea proposed by Altinity).
      • Custom queries for dictionary source.
    • Integrations:
      • SQLite table engine, table function, database engine.
      • Executable function, storage engine and dictionary source.
      • FileLog table engine.
      • Huawei OBS storage support.
      • Aliyun OSS storage support.
    • Remote file system and object storage features:
      • Zero-copy replication for HDFS.
      • Partitioned writes into S3 a), File, URL and HDFS storages.
      • Local data cache for remote filesystems.
    • Other:
      • Store user access management data in ZooKeeper.
      • Production ready ARM support.
      • Significant rework of clickhouse-local that is now as advanced as clickhouse-client.
      • Projections and window functions are graduated and not experimental anymore.
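
    As a small illustration of the lambda-style UDFs mentioned in the list above (the function name here is hypothetical):

        CREATE FUNCTION add_one AS (x) -> x + 1;
        SELECT add_one(41);  -- returns 42
        DROP FUNCTION add_one;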

    As usual with ClickHouse, there are many performance and operational improvements in different server components.

    a) - contributed by Altinity developers.

    Backward Incompatible Changes

    The following changes are backward incompatible and require user attention during an upgrade:

    • Do not output trailing zeros in text representation of Decimal types. Example: 1.23 will be printed instead of 1.230000 for decimal with scale 6. Serialization in output formats can be controlled with the setting output_format_decimal_trailing_zeros.
    • Now, scalar subquery always returns a Nullable result if its type can be Nullable.
    • Introduce syntax for here documents. Example: SELECT $doc$ VALUE $doc$. This change is backward incompatible if there are identifiers that contain $.
    • Now indices can handle Nullable types, including isNull and isNotNull functions. This required index file format change – idx2 file extension. ClickHouse 21.8 can not read those files, so a correct downgrade may not be possible.
    • MergeTree table-level settings replicated_max_parallel_sends, replicated_max_parallel_sends_for_table, replicated_max_parallel_fetches, replicated_max_parallel_fetches_for_table were replaced with max_replicated_fetches_network_bandwidth, max_replicated_sends_network_bandwidth and background_fetches_pool_size.
    • Change the order of json_path and json arguments in SQL/JSON functions to be consistent with the standard.
    • A “leader election” mechanism is removed from ReplicatedMergeTree, because multiple leaders have been supported since 20.6. If you are upgrading from ClickHouse version older than 20.6, and some replica with an old version is a leader, then the server will fail to start after upgrade. Stop replicas with the old version to make the new version start. Downgrading to versions older than 20.6 is not possible.
    • Changed implementation-specific behavior on overflow of the toDateTime function. It is now saturated to the nearest min/max supported datetime instant instead of wrapping around. This change is highlighted as “backward incompatible” because someone may unintentionally rely on the old behavior.

    Upgrade Notes

    There were several changes between versions that may affect the rolling upgrade of big clusters. Upgrading only part of the cluster is not recommended.

    • Data after merge is not byte-identical for tables with MINMAX indexes.
    • Data after merge is not byte-identical for tables created with the old syntax - count.txt is added in 22.1

    Rolling upgrade from 20.4 and older is impossible because the “leader election” mechanism is removed from ReplicatedMergeTree.

    Known Issues in 22.3.x

    The development team continues to improve the quality of the 22.3 release. The following issues still exist in the 22.3.8 version and may affect ClickHouse operation. Please inspect them carefully to decide if those are applicable to your applications.

    If you started using 22.3 from the earlier versions, please note that the following important bugs have been fixed, especially related to PREWHERE functionality:

    You may also look into GitHub issues using the v22.3-affected label.

    Other Important Changes

    • ClickHouse now recreates system tables if the Engine definition is different from the one in the config. (see https://github.com/ClickHouse/ClickHouse/pull/31824 + edge case https://github.com/ClickHouse/ClickHouse/issues/34929).

    • Behavior of some metrics has been changed. For example, written_rows / result_rows may be reported differently (see https://gist.github.com/filimonov/c83fdf988398c062f6fe5b3344c35e80). BackgroundPoolTask was split into BackgroundMergesAndMutationsPoolTask and BackgroundCommonPoolTask.

    • New setting background_merges_mutations_concurrency_ratio=2 – that means ClickHouse can schedule two times more merges/mutations than background_pool_size, which is still 16 by default. For some scenarios with stale replicas the behavior may be harder to predict / explain. If needed, you can return the old behavior by setting background_merges_mutations_concurrency_ratio=1.

    • Pool sizes should now be configured in config.xml (a fallback that reads them from the default profile of users.xml still exists).

    • In queries like SELECT a, b, … GROUP BY (a, b, …) ClickHouse does not untuple the GROUP BY expression anymore. The same is true for ORDER BY and PARTITION BY.

    • When dropping and renaming schema objects ClickHouse now checks dependencies and throws an Exception if the operation may break the dependency:

      Code: 630. DB::Exception: Cannot drop or rename X because some tables depend on it: Y
      

      It may be disabled by setting:

      check_table_dependencies=0
      

    ClickHouse embedded monitoring has been more aggressive since 21.8. It collects host-level metrics and stores them every second in the system.asynchronous_metric_log table. This can be visible as an increase of background writes, storage usage, etc. To return to the old rate of metrics refresh / flush, adjust those settings in config.xml:

    <asynchronous_metrics_update_period_s>
        60
    </asynchronous_metrics_update_period_s>
    
    <asynchronous_metric_log>
        <flush_interval_milliseconds>
            60000
        </flush_interval_milliseconds>
    </asynchronous_metric_log>
    

    Alternatively, metric_log and asynchronous_metric_log tables can be completely disabled:

    <yandex>
        <asynchronous_metric_log remove="1"/>
        <metric_log remove="1"/>
    </yandex>
    

    Some new ClickHouse features are now enabled by default. It may lead to a change in behavior, so review those carefully and disable features that may affect your system:

    • check_table_dependencies
    • empty_result_for_aggregation_by_constant_keys_on_empty_set
    • http_skip_not_found_url_for_globs
    • input_format_allow_seeks
    • input_format_csv_empty_as_default
    • input_format_with_types_use_header
    • log_query_views
    • optimize_distributed_group_by_sharding_key
    • remote_filesystem_read_prefetch
    • remote_fs_enable_cache
    • use_hedged_requests
    • use_local_cache_for_remote_storage
    • use_skip_indexes
    • wait_for_async_insert
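
    Note that wait_for_async_insert only has an effect once asynchronous inserts are enabled; a minimal sketch of how the two settings combine ('t' is a hypothetical table):

        SET async_insert = 1;           -- buffer small inserts on the server side
        SET wait_for_async_insert = 1;  -- client returns only after the buffer is flushed
        INSERT INTO t VALUES (1);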

    In the previous releases we recommended disabling optimize_on_insert. This recommendation stays for 22.3 as well, since inserts into Summing and AggregatingMergeTree tables can slow down.

    Changes Compared to Community Build

    ClickHouse Altinity Stable builds are based on the community LTS versions. Altinity.Stable 22.3.8.40 is based on community 22.3.8.39-lts, but we have additionally backported several features we were working on for our clients:

    Let’s Install!

    ClickHouse Altinity Stable releases are based on the community versions.

    Linux packages can be found at https://packages.clickhouse.com for community builds, and at https://builds.altinity.cloud for Altinity Stable builds.

    Note: The naming schema for Altinity.Stable build packages has been changed since 21.8.x.

    21.8.x:
        <package>_<ver>.altinitystable_all.deb
        <package>-<ver>.altinitystable-2.noarch.rpm

    22.3.x:
        <package>_<ver>.altinitystable_amd64.deb
        <package>-<ver>.altinitystable.x86_64.rpm

    Docker images for community versions have been moved from ‘yandex’ to ‘clickhouse’ organization, and should be referenced as ‘clickhouse/clickhouse-server:22.3’. Altinity stable build images are available as ‘altinity/clickhouse-server:22.3’.

    Mac users are welcome to use Homebrew Formulae. Ready-to-use bottles are available for both M1 and Intel Macs running Monterey.

    For more information on installing ClickHouse from either the Altinity Builds or the Community Builds, see the ClickHouse Altinity Stable Release Build Install Guide.

    Please contact us at info@altinity.com if you experience any issues with the upgrade.

    Appendix

    New table functions

    • executable
    • format
    • hdfsCluster
    • hive
    • sqlite

    New table engines

    • Executable
    • ExecutablePool
    • FileLog
    • Hive
    • SQLite
    • WindowView

    New aggregate function combinators

    • -Map

    New functions

    • bayesAB – REMOVED

    renames (map and geometric functions)

    • maxMap -> maxMappedArrays
    • minMap -> minMappedArrays
    • sumMap -> sumMappedArrays
    • readWktMultiPolygon -> readWKTMultiPolygon
    • readWktPoint -> readWKTPoint
    • readWktPolygon -> readWKTPolygon
    • readWktRing -> readWKTRing
    • svg -> SVG

    String

    • Basic / SQL compatibility
      • left + leftUTF8
      • right + rightUTF8
      • REGEXP_MATCHES
      • REGEXP_REPLACE
    • JSON
      • JSONExtractKeys
    • NLP / text split
      • stem
      • lemmatize
      • ngrams
      • synonyms
      • splitByNonAlpha
      • splitByWhitespace
      • tokens
    • text classification
      • detectCharset
      • detectLanguage
      • detectLanguageMixed
      • detectLanguageUnknown
      • detectProgrammingLanguage
      • detectTonality
    • UTF8
      • normalizeUTF8NFC
      • normalizeUTF8NFD
      • normalizeUTF8NFKC
      • normalizeUTF8NFKD
    • bitwise string processing
      • bitSlice

    Bitmap

    • subBitmap

    Hash

    • MD4
    • SHA384
    • SHA512

    Array

    • arrayLast
    • arrayLastIndex

    Other

    • singleValueOrNull (if all values are equal it returns the value, Null otherwise)
    • sparkbar

    Introspection/debug

    • addressToLineWithInlines
    • currentProfiles
    • currentRoles
    • defaultProfiles
    • defaultRoles
    • enabledProfiles
    • enabledRoles
    • initialQueryID
    • initial_query_id
    • queryID
    • serverUUID
    • shardCount
    • shardNum
    • zookeeperSessionUptime
    • getOSKernelVersion
    • getServerPort
    • query_id

    Date

    • monthName

    snowflake identifiers

    • snowflakeToDateTime
    • snowflakeToDateTime64
    • dateTime64ToSnowflake
    • dateTimeToSnowflake

    URL

    • decodeURLFormComponent
    • encodeURLComponent
    • encodeURLFormComponent

    Math

    • degrees
    • radians

    Geospatial

    • h3
      • h3CellAreaM2
      • h3CellAreaRads2
      • h3EdgeLengthKm
      • h3ExactEdgeLengthKm
      • h3ExactEdgeLengthM
      • h3ExactEdgeLengthRads
      • h3GetFaces
      • h3HexAreaKm2
      • h3IsPentagon
      • h3IsResClassIII
      • h3NumHexagons
      • h3ToCenterChild
      • h3ToGeo
      • h3ToGeoBoundary
    • s2
      • geoToS2
      • s2CapContains
      • s2CapUnion
      • s2CellsIntersect
      • s2GetNeighbors
      • s2RectAdd
      • s2RectContains
      • s2RectIntersection
      • s2RectUnion
      • s2ToGeo

    multidimensional vectors

    Source: https://github.com/ClickHouse/ClickHouse/pull/27933

    • L1 space

      • distanceL1
      • normL1
      • normalizeL1
      • L1Distance
      • L1Norm
      • L1Normalize
    • L2 space

      • distanceL2
      • normL2
      • normalizeL2
      • L2Distance
      • L2Norm
      • L2Normalize
    • Linf space

      • distanceLinf
      • normLinf
      • normalizeLinf
      • LinfDistance
      • LinfNorm
      • LinfNormalize
    • Lp space

      • distanceLp
      • normLp
      • normalizeLp
      • LpDistance
      • LpNorm
      • LpNormalize
    • Math

      • tuplePlus
      • tupleMinus
      • tupleNegate
      • tupleDivide
      • tupleDivideByNumber
      • tupleMultiply
      • tupleMultiplyByNumber
      • vectorSum
      • vectorDifference
      • max2
      • min2
      • dotProduct
      • scalarProduct
      • cosineDistance
    • mapApply
    • mapFilter
    • mapContainsKeyLike
    • mapExtractKeyLike
    • mapUpdate

    aggregation & statistical functions

    • meanZTest
    • proportionsZTest
    • theilsU
    • cramersV
    • cramersVBiasCorrected
    • contingency
    • exponentialMovingAverage
    • exponentialTimeDecayedAvg
    • exponentialTimeDecayedCount
    • exponentialTimeDecayedMax
    • exponentialTimeDecayedSum
    • quantilesBFloat16Weighted
    • medianBFloat16Weighted
    • quantileBFloat16Weighted
    • nothing (Aggregate function that takes arbitrary number of arbitrary arguments and does nothing. :) )
    • tumble
    • tumbleEnd
    • tumbleStart
    • hop
    • hopEnd
    • hopStart
    • windowID

    Casting

    • orDefault variants

      • IPv4StringToNumOrDefault
      • IPv6StringToNumOrDefault
      • toDateOrDefault
      • toDateTime64OrDefault
      • toDateTimeOrDefault
      • toDecimal128OrDefault
      • toDecimal256OrDefault
      • toDecimal32OrDefault
      • toDecimal64OrDefault
      • toFloat32OrDefault
      • toFloat64OrDefault
      • toIPv4OrDefault
      • toIPv6OrDefault
      • toInt128OrDefault
      • toInt16OrDefault
      • toInt256OrDefault
      • toInt32OrDefault
      • toInt64OrDefault
      • toInt8OrDefault
      • toUInt16OrDefault
      • toUInt256OrDefault
      • toUInt32OrDefault
      • toUInt64OrDefault
      • toUInt8OrDefault
      • toUUIDOrDefault
    • orNull casting variants

      • IPv4StringToNumOrNull
      • IPv6StringToNumOrNull
      • toIPv4OrNull
      • toIPv6OrNull
    • new types

      • toBool
      • toDate32
      • toDate32OrDefault
      • toDate32OrNull
      • toDate32OrZero
    • tuple to key-value

      • tupleToNameValuePairs
    • accurate cast

      • accurateCastOrDefault
      • accurate_CastOrNull
      • _CAST

    New system tables

    • asynchronous_inserts
    • rocksdb
    • session_log
    • warnings

    New columns in system tables

    • columns:
      • character_octet_length, numeric_precision, numeric_precision_radix, numeric_scale, datetime_precision
    • data_skipping_indices:
      • data_compressed_bytes, data_uncompressed_bytes, marks
    • databases:
      • comment
    • dictionaries:
      • comment
    • distributed_ddl_queue:
      • entry_version, initiator_host, initiator_port, settings, query_create_time, host, exception_text
    • functions:
      • create_query, origin
    • graphite_retentions:
      • Rule_type
    • part_moves_between_shards:
      • dst_part_name, rollback
    • parts:
      • secondary_indices_compressed_bytes, secondary_indices_uncompressed_bytes, secondary_indices_marks_bytes, projections
    • parts_columns:
      • serialization_kind, subcolumns.names, subcolumns.types, subcolumns.serializations
    • processes:
      • distributed_depth
    • query_log:
      • formatted_query, views, distributed_depth
    • query_thread_log:
      • distributed_depth
    • replicas:
      • last_queue_update_exception, replica_is_active
    • tables:
      • as_select, has_own_data, loading_dependencies_database, loading_dependencies_table, loading_dependent_database, loading_dependent_table
    • users:
      • default_database

    New metrics and events

    system.asynchronous_metrics

    • CompiledExpressionCacheBytes – REMOVED
    • CompiledExpressionCacheCount – REMOVED

    system.metrics

    • ActiveAsyncDrainedConnections
    • ActiveSyncDrainedConnections
    • AsyncDrainedConnections
    • AsynchronousReadWait
    • BackgroundCommonPoolTask
    • BackgroundMergesAndMutationsPoolTask
    • BackgroundPoolTask – REMOVED
    • MaxPushedDDLEntryID
    • PartsActive
    • PartsPreActive
    • PendingAsyncInsert
    • SyncDrainedConnections

    Also, please refer to the release notes from the development team available at the following URLs:

    1.3 - Altinity Stable for ClickHouse 21.8

    1.3.1 - Altinity Stable for ClickHouse 21.8.15

    Details of the Altinity Stable 21.8.15

    Release Notes

    Based on upstream/v21.8.15.7-lts.

    ClickHouse release v21.8.15.7-altinitystable as compared to community ClickHouse v21.8.15.7:

    Changes compared to the Community Build

    Bug Fixes

    • Bug Fix: Fixed issue with merging configs that have <yandex> and <clickhouse> root XML tags. (backport of ClickHouse@213ecae) (via #114).
    • Bug Fix: Backport of ClickHouse#31823 to 21.8: Fix invalid cast of nullable type when nullable primary key is used (via #110)

    Features

    • Build/Testing/Packaging Improvement: Various macOS compilation and packaging fixes/backporting (via #108)
    • Build/Testing/Packaging Improvement: Minor test improvements (via afb4a0a, 4c35386)

    Changes compared to Altinity Stable 21.8.13.1

    Bug Fixes

    Test Results

    Packages

    1.3.2 - Altinity Stable for ClickHouse 21.8.13

    Details of the Altinity Stable 21.8.13

    Release Notes

    ClickHouse release v21.8.13.1-altinitystable as compared to v21.8.12.29-altinitystable:

    Bug Fixes

    • Fixed Apache Avro Union type index out of boundary issue in Apache Avro binary format. #33022 (Harry Lee).

    • Quota limit was not reached, but the limit was exceeded. This PR fixes #31174. #31656 (sunny).

    • NO CL ENTRY: ‘fix json error after downgrade’. #33166 (bullet1337).

    • Fixed an integer overflow when resizing arrays, which caused heap corruption. #33024 (varadarajkumar).

    • Fixed a crash when fuzzBits was used with multiple identical FixedString arguments. Closes #32737. #32755 (SuperDJY).

    • Fix possible exception at RabbitMQ storage startup by delaying channel creation. #32584 (Kseniia Sumarokova).

    • Fixed crash with SIGFPE in aggregate function avgWeighted with Decimal argument. Fixes #32053. #32303 (tavplubix).

    • Some replication queue entries might hang for temporary_directories_lifetime (1 day by default) with Directory tmp_merge_<part_name> or Part ... (state Deleting) already exists, but it will be deleted soon or similar error. It’s fixed. Fixes #29616. #32201 (tavplubix).

    • XML dictionaries identifiers, used in table create query, can be qualified to default_database during upgrade to newer version. Closes #31963. #32187 (Maksim Kita).

    • Number of active replicas might be determined incorrectly when inserting with quorum if setting replicated_can_become_leader is disabled on some replicas. It’s fixed. #32157 (tavplubix).

    • Fixed Directory ... already exists and is not empty error when detaching part. #32063 (tavplubix).

    • Some GET_PART entry might hang in replication queue if part is lost on all replicas and there are no other parts in the same partition. It’s fixed in cases when partition key contains only columns of integer types or Date[Time]. Fixes #31485. #31887 (tavplubix).

    • Change configuration path from keeper_server.session_timeout_ms to keeper_server.coordination_settings.session_timeout_ms when constructing a KeeperTCPHandler - Same with operation_timeout. #31859 (JackyWoo).

    • Fix a bug about function transform with decimal args. #31839 (李帅).

    • Fix crash when function dictGet with type is used for dictionary attribute when type is Nullable. Fixes #30980. #31800 (Maksim Kita).

    • Fix crash with empty result on odbc query. Closes #31465. #31766 (Kseniia Sumarokova).

    • Fix possible crash (or incorrect result) in case of LowCardinality arguments of window function. Fixes #31114. #31888 (Nikolai Kochetov).

    Changes compared to the Community Build

    • Fix invalid cast of Nullable type when nullable primary key is used. (Nullable primary key is a discouraged feature – please do not use it.) This fixes #31075. #31823 (Amos Bird).

    Test Reports

    Test Report for Altinity Stable Build v21.8.13.1.

    1.3.3 - Altinity Stable for ClickHouse 21.8.12

    Details of the Altinity Stable 21.8.12

    Release Notes

    ClickHouse release v21.8.12.29-altinitystable as compared to v21.8.11.1-altinitystable:

    Performance Improvements

    Bug Fixes

    • Quota limit was not reached, but the limit was exceeded. This PR fixes #31174. #31337 (sunny).

    Build/Testing/Packaging Improvements

    • Now all images for CI will be placed in the separate dockerhub repo. #28656 (alesapin).

    Bug Fixes (user-visible misbehaviour in official stable or prestable release)

    • Fixed functions empty and notEmpty with the arguments of UUID type. Fixes #31819. #31883 (Anton Popov).
    • Fixed possible assertion ../src/IO/ReadBuffer.h:58: bool DB::ReadBuffer::next(): Assertion '!hasPendingData()' failed. in TSKV format. #31804 (Kruglov Pavel).
    • Fixed usage of Buffer table engine with type Map. Fixes #30546. #31742 (Anton Popov).
    • Fixed race in JSONEachRowWithProgress output format when data and lines with progress are mixed in output. #31736 (Kruglov Pavel).
    • Fixed there are no such cluster here error on execution of ON CLUSTER query if specified cluster name is name of Replicated database. #31723 (tavplubix).
    • Settings input_format_allow_errors_num and input_format_allow_errors_ratio did not work for parsing of domain types, such as IPv4, it’s fixed. Fixes #31686. #31697 (tavplubix).
    • RENAME TABLE query worked incorrectly on attempt to rename a DDL dictionary in an Ordinary database; it’s fixed. #31638 (tavplubix).
    • Fix invalid generated JSON when only column names contain invalid UTF-8 sequences. #31534 (Kevin Michel).
    • Resolve nullptr in STS credentials provider for S3. #31409 (Vladimir Chebotarev).
    • Remove not like function into RPNElement. #31169 (sundyli).
    • Fixed bug in Keeper which can lead to inability to start when some coordination logs was lost and we have more fresh snapshot than our latest log. #31150 (alesapin).
    • Fixed abort in debug server and DB::Exception: std::out_of_range: basic_string error in release server in case of bad hdfs url by adding additional check of hdfs url structure. #31042 (Kruglov Pavel).

    1.3.4 - Altinity Stable for ClickHouse 21.8.11

    Details of the Altinity Stable 21.8.11

    ClickHouse release v21.8.11.1-altinitystable as compared to v21.8.10.1-altinitystable

    New Features

    • CompiledExpressionCache limit elements size using compiled_expression_cache_elements_size setting. #30667 (Maksim Kita).

    Improvements

    Bug Fixes

    Bug Fixes (user-visible misbehaviour in official stable or prestable release)

    • Fixed StorageMerge with aliases and where (it did not work before at all). Closes #28802. #31044 (Kseniia Sumarokova).
    • Fixed JSONValue/Query with quoted identifiers. This allows to have spaces in json path. Closes #30971. #31003 (Kseniia Sumarokova).
    • Using the formatRow function with non-row formats led to a segfault. Using this function with such formats is no longer allowed (because it doesn’t make sense). #31001 (Kruglov Pavel).
    • Skip max_partition_size_to_drop check in case of ATTACH PARTITION ... FROM and MOVE PARTITION ... #30995 (Amr Alaa).
    • Fixed set index not used in AND/OR expressions when there are more than two operands. This fixes #30416 . #30887 (Amos Bird).
    • Fixed ambiguity when extracting auxiliary ZooKeeper name from ZooKeeper path in ReplicatedMergeTree. Previously server might fail to start with Unknown auxiliary ZooKeeper name if ZooKeeper path contains a colon. Fixes #29052. Also it was allowed to specify ZooKeeper path that does not start with slash, but now it’s deprecated and creation of new tables with such path is not allowed. Slashes and colons in auxiliary ZooKeeper names are not allowed too. #30822 (tavplubix).
    • Fixed a race condition between REPLACE/MOVE PARTITION and background merge in non-replicated MergeTree that might cause a part of moved/replaced data to remain in partition. Fixes #29327. #30717 (tavplubix).
    • Fixed PREWHERE with WHERE in case of always true PREWHERE. #30668 (Azat Khuzhin).
    • Functions for case-insensitive search in UTF8 strings, like positionCaseInsensitiveUTF8 and countSubstringsCaseInsensitiveUTF8, might find substrings that actually do not match; this is fixed. #30663 (tavplubix).
    • Limit push down optimization could cause an error Cannot find column. Fixes #30438. #30562 (Nikolai Kochetov).
    • Fixed exception handling in parallel_view_processing. This resolves issues / prevents crashes in some rare corner cases when that feature is enabled and exception (like Memory limit exceeded ...) happened in the middle of materialized view processing. #30472 (filimonov).
    • Fixed segfault which might happen if session expired during execution of REPLACE PARTITION. #30432 (tavplubix).
    • Fixed ComplexKeyHashedDictionary, ComplexKeySparseHashedDictionary parsing preallocate option from layout config. #30246 (Maksim Kita).
    • Fixed [I]LIKE function. Closes #28661. #30244 (Nikolay Degterinsky).
    • Support nullable arguments in function initializeAggregation. #30177 (Anton Popov).
    • Fixed data-race between LogSink::writeMarks() and LogSource in StorageLog. #29946 (Azat Khuzhin).
    • Fixed hanging DDL queries on Replicated database while adding a new replica. #29328 (Kevin Michel).
    • Fixed bad optimizations of ORDER BY if it contains WITH FILL. This closes #28908. This closes #26049. #28910 (alexey-milovidov).
    • Fixed queries to external databases (i.e. MySQL) with multiple columns in IN ( i.e. (k,v) IN ((1, 2)) ) (but note that this has some backward incompatibility for the clickhouse-copier since it uses alias for tuple element). #28888 (Azat Khuzhin).
    • Fixed “Column is not under aggregate function and not in GROUP BY” with PREWHERE (Fixes: #28461). #28502 (Azat Khuzhin).
    • Fixed NOT-IN index optimization when not all key columns are used. This fixes #28120. #28315 (Amos Bird).

    Bug Fixes (user-visible misbehaviour in official stable or prestable release)

    • Fixed ORDER BY ... WITH FILL with set TO and FROM and no rows in result set. #30888 (Anton Popov).

    1.3.5 - Altinity Stable for ClickHouse 21.8.10

    Details of the Altinity Stable 21.8.10

    Release notes for Altinity Stable 21.8.10

    ClickHouse release 21.8.10 as compared to v21.8.8

    Improvements

    Bug Fixes

    • Fix shutdown of AccessControlManager. Now there can’t be reloading of the configuration after AccessControlManager has been destroyed. This PR fixes the flaky test test_user_directories/test.py::test_relative_path. #29951 (Vitaly Baranov).
    • Allow using a materialized column as the sharding key in a distributed table even if insert_allow_materialized_columns=0. #28637 (Vitaly Baranov).
    • FlatDictionary, HashedDictionary fix bytes_allocated calculation for nullable attributes. #30238 (Maksim Kita).
    • Dropped Memory database might reappear after server restart, it’s fixed (#29795). Also added force_remove_data_recursively_on_drop setting as a workaround for Directory not empty error when dropping Ordinary database (because it’s not possible to remove data leftovers manually in cloud environment). #30054 (tavplubix).
    • Fix crash of sample by tuple(), closes #30004. #30016 (Flynn).
    • Fix possible data-race between FileChecker and StorageLog/StorageStripeLog. #29959 (Azat Khuzhin).
    • Fix system tables recreation check (fails to detect changes in enum values). #29857 (Azat Khuzhin).
    • Avoid Timeout exceeded: elapsed 18446744073.709553 seconds error that might happen in extremely rare cases, presumably due to some bug in kernel. Fixes #29154. #29811 (tavplubix).
    • Fix bad cast in ATTACH TABLE ... FROM 'path' query when non-string literal is used instead of path. It may lead to reading of uninitialized memory. #29790 (alexey-milovidov).
    • Fix concurrent access to LowCardinality during GROUP BY (leads to SIGSEGV). #29782 (Azat Khuzhin).
    • Fixed incorrect behaviour of setting materialized_postgresql_tables_list at server restart. Found in #28529. #29686 (Kseniia Sumarokova).
    • Condition in filter predicate could be lost after push-down optimisation. #29625 (Nikolai Kochetov).
    • Fix rare segfault in ALTER MODIFY query when using incorrect table identifier in DEFAULT expression like x.y.z... Fixes #29184. #29573 (alesapin).
    • Fixed a bug in the pathStartsWith check caused by incorrect usage of std::mismatch: the behavior is undefined if the second range is shorter than the first range. #29531 (Kseniia Sumarokova).
    • In ODBC bridge add retries for error Invalid cursor state. It is a retriable error. Closes #29473. #29518 (Kseniia Sumarokova).
    • Fix possible Block structure mismatch for subqueries with pushed-down HAVING predicate. Fixes #29010. #29475 (Nikolai Kochetov).
    • Avoid deadlocks when reading and writing on JOIN engine tables at the same time. #30187 (Raúl Marín).
    • Fix INSERT SELECT incorrectly fills MATERIALIZED column based of Nullable column. #30189 (Azat Khuzhin).
    • Fix null dereference for GROUP BY WITH TOTALS HAVING (when the column from HAVING wasn’t selected). #29553 (Azat Khuzhin).

    1.3.6 - Altinity Stable for ClickHouse 21.8.8

    Details of the Altinity Stable 21.8.8

    A few months ago we certified ClickHouse 21.3 as an Altinity Stable release. Since then, we have worked on newer releases and run them in-house. We completed several new features, and even more have been added by community contributors. We were running ClickHouse 21.8 to power our public datasets at Altinity.Cloud instance and testing it in our environments. We have also performed a comprehensive QA of ClickHouse 21.8 to make sure upgrades go smoothly. As of 21.8.8 we are confident in certifying 21.8 as an Altinity Stable release.

    This release is a significant upgrade since the previous Altinity Stable release. It includes 1472 pull requests from 224 contributors. Please look below for the detailed release notes.

    Major new features since the previous stable release 21.3

    A new release introduces a lot of changes and new functions. The full list is available in the Appendix, so refer to this section for more detail. The following new features are worth mentioning on the front page:

    • SQL features:
      • DISTINCT ON a subset of columns (see the example after this list)
      • Partial support of SQL/JSON standard
      • Arrays in dictionaries are now supported
      • Arrays and nested data types are now supported for Parquet and Arrow formats
      • DateTime64 extended range. Dates between years 1925 and 2283 are now supported. a)
    • Security features:
      • Disk level encryption
      • Kerberos authentication for HTTP protocol a)
      • Active Directory groups mapping for LDAP user directory a)
    • Replication and Cluster improvements:
      • ClickHouse Keeper (experimental) – in-process ZooKeeper replacement
      • SYSTEM RESTORE REPLICA – a handy tool that makes life easier in the unfortunate occasion if ZooKeeper metadata is lost a)
      • Support for MySQL/PostgreSQL clusters when using mysql/postgresql table functions
      • Zero-copy replication for S3 tables (see Altinity blog post for details)
      • Parts movement between shards (experimental)
    • MergeTree features:
    • Integrations:
      • MaterializedPostgreSQL database engine for replication from PostgreSQL
      • HDFS disk support (experimental)
      • Allow to catch Kafka errors into a separate stream (see the KB article on this)
    • Other:
      • YAML configuration format as an alternative to XML
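
    As a small illustration of DISTINCT ON from the list above ('t' and its columns are hypothetical):

        -- Returns one arbitrary row per distinct region
        SELECT DISTINCT ON (region) region, city
        FROM t;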

    As usual with ClickHouse, there are many performance and operational improvements in different server components.

    a) - contributed by Altinity developers.

    Backward Incompatible Changes

    The following changes are backward incompatible and require user attention during an upgrade:

    • Users of LowCardinality(Nullable(...)) can not safely downgrade to versions older than 21.4. Data in tables with columns of those types inserted / merged by 21.8 will be lost after the downgrade.
    • Values of UUID type cannot be compared with integers. For example, instead of writing uuid != 0, write uuid != '00000000-0000-0000-0000-000000000000'
    • The toStartOfIntervalFunction will align hour intervals to midnight (in previous versions they were aligned to the start of unix epoch). For example, toStartOfInterval(x, INTERVAL 11 HOUR) will split every day into three intervals: 00:00:00..10:59:59, 11:00:00..21:59:59 and 22:00:00..23:59:59.
    • It’s not possible to rollback to the older ClickHouse version after executing ALTER ... ATTACH query as the old servers would fail to process the new command entry ATTACH_PART in the replicated log.
    • The behaviour of remote_url_allow_hosts has changed. In previous versions the empty section did nothing, in 21.8 it will block access to all external hosts. Remove this section from the configuration files after an upgrade if you experience issues with url() or s3() functions.
    • If you downgrade to a version before 21.1, ClickHouse will not be able to start automatically – you will need to remove the system.*_log tables manually to downgrade
    • There is an issue with uniqState(UUID) in AggregatingMergeTree tables. It can be corrected by replacing uniqState(uuid) in MATERIALIZED VIEWs with uniqState(sipHash64(uuid)) and changing the data type for already saved data from AggregateFunction(uniq, UUID) to AggregateFunction(uniq, UInt64); see the sketch after this list. For more information see the following:
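
    A minimal sketch of the uniqState workaround above; the table and column names are hypothetical:

        -- Inside the MATERIALIZED VIEW select, hash the UUID before aggregating,
        -- so the state is stored as AggregateFunction(uniq, UInt64) rather than
        -- AggregateFunction(uniq, UUID):
        SELECT day, uniqState(sipHash64(user_id)) AS uniq_users
        FROM events
        GROUP BY day;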

    Upgrade Notes

    There were several changes between versions that may affect the rolling upgrade of big clusters. Upgrading only part of the cluster is not recommended.

    • Distributed queries with explicitly defined large sets are now executed differently. The compatibility setting legacy_column_name_of_tuple_literal may be enabled during the rolling upgrade of the cluster; otherwise distributed queries with explicitly defined sets in the IN clause may fail during the upgrade (see the example after this list).
    • ATTACH PART[ITION] queries may not work during cluster upgrade
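
    For the first point, the compatibility setting can be enabled per session (or in the default profile) for the duration of the rolling upgrade; a minimal sketch:

      -- Keep pre-21.7 column names for large tuple literals while
      -- mixed-version replicas are still running:
      SET legacy_column_name_of_tuple_literal = 1;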

    Other Important Changes

    ClickHouse embedded monitoring has become a bit more aggressive. It now collects several system stats and stores them in the system.asynchronous_metric_log table. This can be visible as an increase in background writes, storage usage, etc. To return to the old rate of metrics refresh / flush, adjust these settings in config.xml:

    <asynchronous_metrics_update_period_s>60</asynchronous_metrics_update_period_s>
    <asynchronous_metric_log>
        <flush_interval_milliseconds>60000</flush_interval_milliseconds>
    </asynchronous_metric_log>
    

    Alternatively, metric_log and asynchronous_metric_log tables can be completely disabled:

    <yandex>
        <asynchronous_metric_log remove="1"/>
        <metric_log remove="1"/>
    </yandex>
    

    Some new ClickHouse features are now enabled by default. It may lead to a change in behaviour, so review those carefully and disable features that may affect your system:

    • async_socket_for_remote
    • compile_aggregate_expressions
    • compile_expressions
    • cross_to_inner_join_rewrite
    • insert_null_as_default
    • optimize_skip_unused_shards_rewrite_in
    • query_plan_enable_optimizations
    • query_plan_filter_push_down

    In previous releases we recommended disabling optimize_on_insert. This recommendation stands for 21.8 as well, since inserts into SummingMergeTree and AggregatingMergeTree tables can slow down.
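
    A quick way to review the new defaults and apply that recommendation is sketched below. This is a per-session example; server-wide overrides belong in the default profile:

      -- Check which of the newly enabled features are active:
      SELECT name, value
      FROM system.settings
      WHERE name IN ('async_socket_for_remote', 'compile_expressions',
                     'optimize_on_insert', 'insert_null_as_default');

      -- Disable optimize_on_insert for this session, per the recommendation above:
      SET optimize_on_insert = 0;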

    Known issues in 21.8.8

    The development team continues to improve the quality of the 21.8 release. The following issues still exist in the 21.8.8 version and may affect ClickHouse operation. Please inspect them carefully to decide if those are applicable to your applications:

    • Values in system.events for event = 'Merge' are overstated; ClickHouse increments this counter incorrectly.
    • A Timeout exceeded: elapsed 18446744073.709553 seconds error might happen in extremely rare cases, presumably due to some bug in the kernel.

    You may also look into GitHub issues using the special v21.8-affected label.

    ClickHouse Altinity Stable Releases are based on the community versions. For more information on installing ClickHouse from either the Altinity Stable builds or the community builds, see the ClickHouse Altinity Stable Release Build Install Guide.

    Please contact us at info@altinity.com if you experience any issues with the upgrade.

    Appendix

    New functions

    • DateTime functions:
      • dateName
      • timeZone, timeZoneOf, timeZoneOffset, timezoneOf, toTimezone
    • JSON processing functions:
      • JSON_EXISTS, JSON_QUERY, JSON_VALUE
      • simpleJSONExtractBool, simpleJSONExtractFloat, simpleJSONExtractInt, simpleJSONExtractRaw, simpleJSONExtractString, simpleJSONExtractUInt, simpleJSONHas
      • toJSONString
    • String functions:
      • bin/unbin
      • leftPad, lpad, leftPadUTF8
      • rightPad, rpad, rightPadUTF8
      • splitByRegexp
    • Array functions:
      • arrayProduct
      • bitPositionsToArray
      • validateNestedArraySizes
    • Dictionary:
      • dictGetChildren
      • dictGetDescendants
      • dictGetOrNull
    • Aggregate functions
      • deltaSumTimestamp
      • intervalLengthSum
      • lagInFrame/leadInFrame
      • sequenceNextNode
      • sumCount
      • uniqTheta
    • Geo:
      • polygonAreaCartesian, polygonAreaSpherical
      • polygonConvexHullCartesian
      • polygonPerimeterCartesian, polygonPerimeterSpherical
      • polygonsDistanceCartesian, polygonsDistanceSpherical
      • polygonsEqualsCartesian
      • polygonsIntersectionCartesian, polygonsIntersectionSpherical
      • polygonsSymDifferenceCartesian, polygonsSymDifferenceSpherical
      • polygonsUnionCartesian, polygonsUnionSpherical
      • polygonsWithinCartesian, polygonsWithinSpherical
      • readWktMultiPolygon, readWktPoint, readWktPolygon, readWktRing
      • wkt
    • Math/Statistics:
      • quantileBFloat16, quantilesBFloat16
    • Type related:
      • reinterpretAsUInt128
      • toUInt128, toUInt128OrNull, toUInt128OrZero
    • Other:
      • indexHint
      • isIPAddressInRange
      • partitionId – that corresponds to _partition_id virtual column added to MergeTree tables

    New table functions

    • dictionary
    • s3Cluster – see our blog article that highlights this one

    New table engines

    • ExternalDistributed – allows to query MySQL or PostgreSQL clusters
    • MaterializedPostgreSQL

    New metrics and events

    system.asynchronous_metrics
    • AsynchronousMetricsCalculationTimeSpent

    The table has also gained a lot of new metrics for monitoring the host rather than ClickHouse itself.

    system.metrics
    • BrokenDistributedFilesToInsert
    • MMappedFileBytes
    • NetworkReceive
    • NetworkSend
    system.events
    • MMappedFileCacheHits
    • MMappedFileCacheMisses
    • MergeTreeDataProjectionWriterBlocks
    • MergeTreeDataProjectionWriterBlocksAlreadySorted
    • MergeTreeDataProjectionWriterCompressedBytes
    • MergeTreeDataProjectionWriterRows
    • MergeTreeDataProjectionWriterUncompressedBytes
    • NetworkReceiveBytes
    • NetworkSendBytes
    • StorageBufferLayerLockReadersWaitMilliseconds
    • StorageBufferLayerLockWritersWaitMilliseconds
    • StorageBufferPassedBytesFlushThreshold
    • StorageBufferPassedRowsFlushThreshold
    • StorageBufferPassedTimeFlushThreshold
    system.errors
    • CANNOT_CREATE_FILE
    • CANNOT_PARSE_YAML
    • CANNOT_SET_ROUNDING_MODE
    • CANNOT_SYSCONF
    • CONCURRENT_ACCESS_NOT_SUPPORTED
    • DISTRIBUTED_BROKEN_BATCH_FILES
    • DISTRIBUTED_BROKEN_BATCH_INFO
    • ILLEGAL_PROJECTION
    • INCORRECT_PART_TYPE
    • INVALID_FORMAT_INSERT_QUERY_WITH_DATA
    • KERBEROS_ERROR
    • NO_SUCH_PROJECTION_IN_TABLE
    • POSTGRESQL_CONNECTION_FAILURE
    • PROJECTION_NOT_USED
    • TOO_LARGE_DISTRIBUTED_DEPTH
    • UNKNOWN_SNAPSHOT

    New system tables

    • data_skipping_indices:
      • database, table, name, type, expr, granularity
    • part_moves_between_shards:
      • database, table, task_name, task_uuid, create_time, part_name, part_uuid, to_shard, update_time, state, num_tries, last_exception
    • projection_parts – same as parts but with extra ‘parent_*’ columns
    • projection_parts_columns – same as parts_columns but with extra ‘parent_*’ columns

    New columns in system tables

    • clusters
      • shutdown_count
    • dictionaries:
      • key – removed
      • key.names, key.types, found_rate
    • distribution_queue:
      • broken_data_files, broken_data_compressed_bytes
    • errors:
      • last_error_time, last_error_message, last_error_trace, remote
    • metric_log:
      • multiple columns
    • parts_columns:
      • uuid, min_time, max_time
    • processes:
      • Settings (Map replaces Settings.Names/Values arrays), ProfileEvents (Map replaces ProfileEvents.Names/Values arrays), current_database
    • query_log:
      • projections, initial_query_start_time, initial_query_start_time_microseconds, Settings (Map replaces Settings.Names/Values arrays), ProfileEvents (Map replaces ProfileEvents.Names/Values arrays) – see the example after this list
    • query_thread_log:
      • initial_query_start_time_microseconds, Settings (Map replaces Settings.Names/Values arrays), ProfileEvents (Map replaces ProfileEvents.Names/Values arrays)
    • stack_trace:
      • thread_name
    • table_engines:
      • supports_projections
    • tables:
      • comment
    • users
      • grantees_any, grantees_list, grantees_except
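
    Note that several of these changes replace parallel Names/Values arrays with a single Map column. A quick illustration against system.query_log on a 21.8 server (the selected keys are just examples):

      SELECT
          query_duration_ms,
          ProfileEvents['SelectedRows'] AS selected_rows,  -- was ProfileEvents.Names / .Values
          Settings['max_threads']       AS max_threads     -- was Settings.Names / .Values
      FROM system.query_log
      WHERE type = 'QueryFinish'
      ORDER BY event_time DESC
      LIMIT 5;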

    system.merge_tree_settings added/changed

    Name Old value New value Description
    max_parts_to_merge_at_once 100 Max amount of parts which can be merged at once (0 - disabled). Doesn’t affect OPTIMIZE FINAL query.
    max_replicated_fetches_network_bandwidth 0 The maximum speed of data exchange over the network in bytes per second for replicated fetches. Zero means unlimited.
    max_replicated_sends_network_bandwidth 0 The maximum speed of data exchange over the network in bytes per second for replicated sends. Zero means unlimited.
    min_bytes_to_rebalance_partition_over_jbod 0 Minimal amount of bytes to enable part rebalance over JBOD array (0 - disabled).
    non_replicated_deduplication_window 0 How many last blocks of hashes should be kept on disk (0 - disabled).
    part_moves_between_shards_delay_seconds 30 Time to wait before/after moving parts between shards.
    part_moves_between_shards_enable 0 Experimental/Incomplete feature to move parts between shards. Does not take into account sharding expressions.
    replicated_fetches_http_connection_timeout 0 HTTP connection timeout for part fetch requests. Inherited from default profile http_connection_timeout if not set explicitly.
    replicated_fetches_http_receive_timeout 0 HTTP receive timeout for fetch part requests. Inherited from default profile http_receive_timeout if not set explicitly.
    replicated_fetches_http_send_timeout 0 HTTP send timeout for part fetch requests. Inherited from default profile http_send_timeout if not set explicitly.
    remote_fs_execute_merges_on_single_replica_time_threshold 10800 When greater than zero only a single replica starts the merge immediately when merged part on shared storage and ‘allow_remote_fs_zero_copy_replication’ is enabled.

    system.settings added/changed

    Type Name Old value New value Description
    settings allow_experimental_bigint_types 0 1 Obsolete setting, does nothing.
    settings allow_experimental_codecs 0 If it is set to true, allow to specify experimental compression codecs (but we don't have those yet and this option does nothing).
    settings allow_experimental_database_materialized_postgresql 0 Allow to create database with Engine=MaterializedPostgreSQL(…).
    settings allow_experimental_funnel_functions 0 Enable experimental functions for funnel analysis.
    settings allow_experimental_map_type 0 1 Obsolete setting, does nothing.
    settings allow_experimental_projection_optimization 0 Enable projection optimization when processing SELECT queries
    settings async_socket_for_remote 0 1 Asynchronously read from socket executing remote query
    settings background_schedule_pool_size 16 128 Number of threads performing background tasks for replicated tables, dns cache updates. Only has meaning at server startup.
    settings compile_aggregate_expressions 1 Compile aggregate functions to native code.
    settings compile_expressions 0 1 Compile some scalar functions and operators to native code.
    settings cross_to_inner_join_rewrite 1 Use inner join instead of comma/cross join if possible
    settings database_replicated_always_detach_permanently 0 Execute DETACH TABLE as DETACH TABLE PERMANENTLY if database engine is Replicated
    settings distributed_ddl_entry_format_version 1 Version of DDL entry to write into ZooKeeper
    settings distributed_ddl_output_mode throw Format of distributed DDL query result
    settings distributed_directory_monitor_split_batch_on_failure 0 Should StorageDistributed DirectoryMonitors try to split batch into smaller in case of failures.
    settings distributed_push_down_limit 0 If 1, LIMIT will be applied on each shard separately. Usually you don't need to use it, since this will be done automatically if it is possible, i.e. for simple query SELECT FROM LIMIT.
    settings experimental_query_deduplication_send_all_part_uuids 0 If false only part UUIDs for currently moving parts are sent. If true all read part UUIDs are sent (useful only for testing).
    settings external_storage_max_read_bytes 0 Limit maximum number of bytes when table with external engine should flush history data. Now supported only for MySQL table engine, database engine, dictionary and MaterializeMySQL. If equal to 0, this setting is disabled
    settings external_storage_max_read_rows 0 Limit maximum number of rows when table with external engine should flush history data. Now supported only for MySQL table engine, database engine, dictionary and MaterializeMySQL. If equal to 0, this setting is disabled
    settings force_optimize_projection 0 If projection optimization is enabled, SELECT queries need to use projection
    settings glob_expansion_max_elements 1000 Maximum number of allowed addresses (For external storages, table functions, etc).
    settings group_by_two_level_threshold_bytes 100000000 50000000 From what size of the aggregation state in bytes, a two-level aggregation begins to be used. 0 - the threshold is not set. Two-level aggregation is used when at least one of the thresholds is triggered.
    settings handle_kafka_error_mode default Obsolete setting, does nothing.
    settings http_max_field_name_size 1048576 Maximum length of field name in HTTP header
    settings http_max_field_value_size 1048576 Maximum length of field value in HTTP header
    settings http_max_fields 1000000 Maximum number of fields in HTTP header
    settings http_max_uri_size 1048576 Maximum URI length of HTTP request
    settings insert_null_as_default 1 Insert DEFAULT values instead of NULL in INSERT SELECT (UNION ALL)
    settings legacy_column_name_of_tuple_literal 0 List all names of element of large tuple literals in their column names instead of hash. This settings exists only for compatibility reasons. It makes sense to set to 'true', while doing rolling update of cluster from version lower than 21.7 to higher.
    settings max_distributed_depth 5 Maximum distributed query depth
    settings max_replicated_fetches_network_bandwidth_for_server 0 The maximum speed of data exchange over the network in bytes per second for replicated fetches. Zero means unlimited. Only has meaning at server startup.
    settings max_replicated_sends_network_bandwidth_for_server 0 The maximum speed of data exchange over the network in bytes per second for replicated sends. Zero means unlimited. Only has meaning at server startup.
    settings min_count_to_compile_aggregate_expression 3 The number of identical aggregate expressions before they are JIT-compiled
    settings normalize_function_names 1 0 Normalize function names to their canonical names
    settings odbc_bridge_connection_pool_size 16 Connection pool size for each connection settings string in ODBC bridge.
    settings optimize_functions_to_subcolumns 0 Transform functions to subcolumns, if possible, to reduce amount of read data. E.g. 'length(arr)' -> 'arr.size0', 'col IS NULL' -> 'col.null'
    settings optimize_fuse_sum_count_avg 0 Fuse aggregate functions sum(), avg(), count() with identical arguments into one sumCount() call, if the query has at least two different functions
    settings optimize_move_to_prewhere_if_final 0 If query has FINAL, the optimization move_to_prewhere is not always correct and it is enabled only if both settings optimize_move_to_prewhere and optimize_move_to_prewhere_if_final are turned on
    settings optimize_skip_unused_shards_limit 1000 Limit for number of sharding key values, turns off optimize_skip_unused_shards if the limit is reached
    settings optimize_skip_unused_shards_rewrite_in 1 Rewrite IN in query for remote shards to exclude values that do not belong to the shard (requires optimize_skip_unused_shards)
    settings output_format_arrow_low_cardinality_as_dictionary 0 Enable output LowCardinality type as Dictionary Arrow type
    settings postgresql_connection_pool_size 16 Connection pool size for PostgreSQL table engine and database engine.
    settings postgresql_connection_pool_wait_timeout 5000 Connection pool push/pop timeout on empty pool for PostgreSQL table engine and database engine. By default it will block on empty pool.
    settings prefer_column_name_to_alias 0 Prefer using column names instead of aliases if possible.
    settings prefer_global_in_and_join 0 If enabled, all IN/JOIN operators will be rewritten as GLOBAL IN/JOIN. It's useful when the to-be-joined tables are only available on the initiator and we need to always scatter their data on-the-fly during distributed processing with the GLOBAL keyword. It's also useful to reduce the need to access the external sources joining external tables.
    settings query_plan_enable_optimizations 1 Apply optimizations to query plan
    settings query_plan_filter_push_down 1 Allow to push down filter by predicate query plan step
    settings s3_max_single_read_retries 4 The maximum number of retries during single S3 read.
    settings sleep_in_send_data 0
    settings sleep_in_send_data_ms 0 Time to sleep in sending data in TCPHandler
    settings sleep_in_send_tables_status 0
    settings sleep_in_send_tables_status_ms 0 Time to sleep in sending tables status response in TCPHandler
    settings use_antlr_parser 0

    Also, please refer to the release notes from the development team available at the following URLs:

    1.4 - Altinity Stable for ClickHouse 21.3

    1.4.1 - Altinity Stable for ClickHouse 21.3.20.2

    Details of the Altinity Stable Release 21.3.20.2

    Changes Compared to Altinity Stable 21.3.17.3

    Bug Fixes

    • Integer overflow to resize the arrays causes heap corrupt. #33024 (Rajkumar Varada).
    • Fix crash when fuzzBits was used with multiple of the same FixedString. Closes #32737. #32755 (SuperDJY).
    • Number of active replicas might be determined incorrectly when inserting with quorum if setting replicated_can_become_leader is disabled on some replicas. It’s fixed. #32157 (tavplubix).
    • Fix possible assertion ../src/IO/ReadBuffer.h:58: bool DB::ReadBuffer::next(): Assertion '!hasPendingData()' failed. in TSKV format. #31804 (Kruglov Pavel).
    • Fix crash when function dictGet with type is used for dictionary attribute when type is Nullable. Fixes #30980. #31800 (Maksim Kita).
    • Fix usage of Buffer table engine with type Map. Fixes #30546. #31742 (Anton Popov).
    • Fix race in JSONEachRowWithProgress output format when data and lines with progress are mixed in output. #31736 (Kruglov Pavel).
    • Settings input_format_allow_errors_num and input_format_allow_errors_ratio did not work for parsing of domain types, such as IPv4, it’s fixed. Fixes #31686. #31697 (tavplubix).
    • Remove not like function into RPNElement. #31169 (sundyli).
    • Using formatRow function with not row formats led to segfault. Don’t allow to use this function with such formats (because it doesn’t make sense). #31001 (Kruglov Pavel).
    • Functions for case-insensitive search in UTF8 strings like positionCaseInsensitiveUTF8 and countSubstringsCaseInsensitiveUTF8 might find substrings that actually do not match; it’s fixed. #30663 (tavplubix).
    • Fixed segfault which might happen if session expired during execution of REPLACE PARTITION. #30432 (tavplubix).
    • Fix [I]LIKE function. Closes #28661. #30244 (Nikolay Degterinsky).
    • FlatDictionary, HashedDictionary fix bytes_allocated calculation for nullable attributes. #30238 (Maksim Kita).
    • Support nullable arguments in function initializeAggregation. #30177 (Anton Popov).
    • Fix crash of sample by tuple(), closes #30004. #30016 (flynn).
    • Fix concurrent access to LowCardinality during GROUP BY (leads to SIGSEGV). #29782 (Azat Khuzhin).
    • Condition in filter predicate could be lost after push-down optimisation. #29625 (Nikolai Kochetov).
    • Fix null deference for GROUP BY WITH TOTALS HAVING (when the column from HAVING wasn’t selected). #29553 (Azat Khuzhin).
    • Fix connection timeouts (send_timeout/receive_timeout). #29282 (Azat Khuzhin).
    • Fix possible Table columns structure in ZooKeeper is different from local table structure exception while recreating or creating new replicas of ReplicatedMergeTree, when one of table columns have default expressions with case-insensitive functions. #29266 (Anton Popov).
    • Fix segfault while inserting into column with type LowCardinality(Nullable) in Avro input format. #29132 (Kruglov Pavel).
    • Fix the number of threads used in GLOBAL IN subquery (it was executed in single threads since #19414 bugfix). #28997 (Nikolai Kochetov).
    • Fix invalid constant type conversion when nullable or lowcardinality primary key is used. #28636 (Amos Bird).
    • Fix ORDER BY ... WITH FILL with set TO and FROM and no rows in result set. #30888 (Anton Popov).
    • Fixed Apache Avro Union type index out of boundary issue in Apache Avro binary format. #33022 (Harry Lee).
    • Fix null pointer dereference in low cardinality data when deserializing LowCardinality data in the Native format. #33021 (Harry Lee).
    • Quota limit was not reached, but the limit was exceeded. This PR fixes #31174. #31656 (sunny).
    • Quota limit was not reached, but the limit was exceeded. This PR fixes #31174. #31337 (sunny).
    • Fix shutdown of AccessControlManager. Now there can’t be reloading of the configuration after AccessControlManager has been destroyed. This PR fixes the flaky test test_user_directories/test.py::test_relative_path. #29951 (Vitaly Baranov).

    Performance Improvements

    Other Improvements

    • Updated zoneinfo files to 2021c. #29925 (alexey-milovidov).
    • Use real tmp file instead of predefined “rows_sources” for vertical merges. This avoids generating garbage directories in tmp disks. #28299 (Amos Bird).
    • Use separate clickhouse-bridge group and user for bridge processes. Set oom_score_adj so the bridges will be first subjects for OOM killer. Set maximum RSS to 1 GiB. Closes #23861. #25280 (Kseniia Sumarokova).

    Changes compared to Community Build 21.3.20.1-lts

    Test Results

    1.4.2 - Altinity Stable for ClickHouse 21.3.17.3

    Details of Altinity Stable 21.3.17.3

    Changes Since v21.3.17.2-lts

    Changes since the Long Term Support release of ClickHouse v21.3.17.2-lts to Altinity Stable 21.3.17.3:

    1.4.3 - Altinity Stable for ClickHouse 21.3.13.9

    Details of the Altinity Stable Release 21.3.13.9

    A few months ago we certified ClickHouse 21.1 as an Altinity Stable release. Since then, we have worked on newer releases and run them in-house. We completed several new features, and even more have been added by community contributors. We were running ClickHouse 21.3 to power our public datasets instance at Altinity.Cloud and testing it in our environments. As of 21.3.13.9 we are confident in certifying 21.3 as an Altinity Stable release.

    This release is a relatively small upgrade since the previous Altinity Stable release. But even a small upgrade includes 742 pull requests from 133 contributors; these mostly focused on performance and stability improvements. There are several important new features as well! Please look below for detailed release notes.

    Major new features since the previous stable release 21.1.x

    A new release introduces a lot of changes and new features. These are organized by feature in the Appendix, so refer to this section for more detail. The following new features are worth mentioning on the front page:

    • SQL features:
      • Window functions! See our blog article for an introduction.
      • Support for multiple levels of nesting in the Nested data type.
    • Security features:
      • LDAP roles mapping a).
      • Apply row-level security as a separate PREWHERE step a).
      • Server side keys support for S3 table function a).
    • Cluster improvements:
      • Hedged requests! These reduce tail latencies on large clusters by running the same query on different replicas. This is controlled by the use_hedged_requests and max_parallel_replicas settings.
      • Allow inserts into a specific shard via a distributed table. This is controlled by the insert_shard_id setting (see the sketch after this list).
      • Allow inserts into the cluster() table function with a specified sharding key.
      • Replicated database engine (experimental). Allows replicating DDL statements across the cluster. It is not production ready in this release.
    • MergeTree features:
      • Table level concurrency control. Can be controlled by the max_concurrent_queries merge tree setting.
    • Integrations:
      • PostgreSQL table engine, table function and dictionary source.
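
    As an illustration of the new shard routing, the sketch below inserts through a Distributed table into one specific shard; 'my_cluster' and the table names are hypothetical:

      CREATE TABLE events_local (id UInt32, msg String)
      ENGINE = MergeTree ORDER BY id;

      CREATE TABLE events_dist AS events_local
      ENGINE = Distributed('my_cluster', currentDatabase(), events_local, rand());

      -- Send this insert synchronously to shard 1 only:
      INSERT INTO events_dist SETTINGS insert_shard_id = 1 VALUES (1, 'hello');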

    As usual with ClickHouse, there are many performance and operational improvements in different server components.

    a) - contributed by Altinity developers.

    Backward Incompatible Changes

    The following changes are backward incompatible and require user attention during an upgrade:

    • It is no longer allowed to create MergeTree tables in the old syntax with table TTL; this syntax is now ignored. Attaching old tables is still possible.
    • Floating point columns and values are disallowed in:
      • Partitioning key – these can be turned back on with the allow_floating_point_partition_key merge tree setting.
      • Bitwise functions.
    • Excessive parentheses in type definitions are no longer supported, for example: Array((UInt8)).

    Upgrade Notes

    There were several changes between versions that may affect the rolling upgrade of big clusters. Upgrading only part of the cluster is not recommended.

    • Now replicas that are processing the ALTER TABLE ATTACH PART[ITION] command search their detached/ folders before fetching the data from other replicas. As an implementation detail, a new command ATTACH_PART is introduced in the replicated log. Parts are searched and compared by their checksums. ATTACH PART[ITION] queries may not work during cluster upgrade.

    Other Important Changes

    Some new ClickHouse features are now enabled by default. It may lead to a change in behaviour, so review those carefully and disable features that may affect your system:

    • distributed_aggregation_memory_efficient
    • enable_global_with_statement
    • optimize_normalize_count_variants
    • optimize_respect_aliases
    • optimize_rewrite_sum_if_to_count_if

    Known issues in 21.3.13.9

    The development team continues to improve the quality of the 21.3 release. The following issues still exist in the 21.3.13.9 version and may affect ClickHouse operation. Please inspect them carefully to decide if those are applicable to your applications:

    ClickHouse Altinity Stable releases are based on the community versions. Community builds can be downloaded from repo.clickhouse.tech.

    ClickHouse Altinity builds may be found at:

    Please contact us at info@altinity.com if you experience any issues with the upgrade.

    Appendix

    New functions

    • DateTime functions:
      • Compatibility with MySQL: DATE, DAY, DAYOFMONTH, DAYOFWEEK, DAYOFYEAR, HOUR, MINUTE, MONTH, QUARTER, SECOND, YEAR
      • parseDateTimeBestEffortUSOrNull
      • parseDateTimeBestEffortUSOrZero
      • timezoneOffset
    • String functions:
      • decodeXMLComponent, extractTextFromHTML
    • Map functions:
      • mapContains, mapKeys, mapValues
    • Aggregate functions
      • New combinator -SimpleState
      • deltaSum
      • runningConcurrency
    • Functions to work with new Map data type:
      • map
    • Math/Statistics:
      • sign
    • Type related:
      • reinterpret, reinterpretAsInt128, reinterpretAsInt256, reinterpretAsUInt256
    • Internal ClickHouse:
      • connectionID, connection_id
      • normalizeQueryKeepNames, normalizedQueryHashKeepNames

    New table functions

    • postgresql
    • file

    New table engines

    • PostgreSQL

    New metrics and events

    system.metrics:

    • PartsCommitted
    • PartsCompact
    • PartsDeleteOnDestroy
    • PartsDeleting
    • PartsInMemory
    • PartsOutdated
    • PartsPreCommitted
    • PartsTemporary
    • PartsWide

    system.events:

    • DistributedDelayedInserts
    • DistributedDelayedInsertsMilliseconds
    • DistributedRejectedInserts
    • HedgedRequestsChangeReplica

    New system tables

    • distributed_ddl_queue

    New columns in system tables

    • metric_log:
      • ProfileEvent_DistributedDelayedInserts, ProfileEvent_DistributedRejectedInserts, ProfileEvent_DistributedDelayedInsertsMilliseconds, ProfileEvent_HedgedRequestsChangeReplica, CurrentMetric_PartsTemporary, CurrentMetric_PartsPreCommitted, CurrentMetric_PartsCommitted, CurrentMetric_PartsOutdated, CurrentMetric_PartsDeleting, CurrentMetric_PartsDeleteOnDestroy, CurrentMetric_PartsWide, CurrentMetric_PartsCompact, CurrentMetric_PartsInMemory
    • parts:
      • group_by_ttl_info.expression, group_by_ttl_info.min, group_by_ttl_info.max, rows_where_ttl_info.expression, rows_where_ttl_info.min, rows_where_ttl_info.max
    • processes.http_referer
    • query_log:
      • http_referer, log_comment, used_aggregate_functions, used_aggregate_function_combinators, used_database_engines, used_data_type_families, used_dictionaries, used_formats, used_functions, used_storages, used_table_functions
    • query_thread_log.http_referer
    • quota_limits:
      • max_query_selects, max_query_inserts
    • quota_usage:
      • query_selects, max_query_selects, query_inserts, max_query_inserts
    • quotas_usage:
      • query_selects, max_query_selects, query_inserts, max_query_inserts

    system.merge_tree_settings added/changed

    Name Old value New value Description
    allow_floating_point_partition_key 0 Allow floating point as partition key.
    inactive_parts_to_delay_insert 0 If table contains at least that many inactive parts in single partition, artificially slow down insert into table.
    inactive_parts_to_throw_insert 0 If more than this number of inactive parts are in a single partition, throw the ‘Too many inactive parts …’ exception.
    max_concurrent_queries 0 Max number of concurrently executed queries related to the MergeTree table (0 - disabled). Queries will still be limited by other max_concurrent_queries settings.
    min_marks_to_honor_max_concurrent_queries 0 Minimal number of marks to honor the MergeTree-level’s max_concurrent_queries (0 - disabled). Queries will still be limited by other max_concurrent_queries settings.

    system.settings added/changed

    Name Old value New value Description
    allow_changing_replica_until_first_data_packet 0 Allow HedgedConnections to change the replica until receiving the first data packet.
    allow_experimental_database_replicated 0 Allows creating databases with the Replicated engine.
    allow_experimental_query_deduplication 0 Allows sending parts’ UUIDs for a query in order to deduplicate data parts, if any.
    async_socket_for_remote 1 0 Asynchronously read from socket executing remote query.
    background_fetches_pool_size 3 8 Number of threads performing background fetches for replicated tables. Only has meaning at server startup.
    checksum_on_read 1 Validates checksums on reading. It is enabled by default and should be always enabled in production. Please do not expect any benefits in disabling this setting. It may only be used for experiments and benchmarks. These settings are only applicable for tables of the MergeTree family. Checksums are always validated for other table engines and when receiving data over the network.
    database_replicated_ddl_output 1 Returns table with query execution status as a result of a DDL query.
    database_replicated_initial_query_timeout_sec 300 How long an initial DDL query should wait for a Replicated database to process previous DDL queue entries.
    distributed_aggregation_memory_efficient 0 1 The memory-saving mode of distributed aggregation is enabled.
    enable_global_with_statement 0 1 Propagate WITH statements to UNION queries and all subqueries.
    engine_file_empty_if_not_exists 0 Allows selecting data from a file engine table without a file.
    engine_file_truncate_on_insert 0 Enables or disables truncate before insert in file engine tables.
    flatten_nested 1 If true, columns of type Nested will be flattened to separate array columns instead of one array of tuples.
    hedged_connection_timeout_ms 100 Connection timeout for establishing connection with replica for Hedged requests.
    insert_in_memory_parts_timeout 600000 REMOVED.
    insert_shard_id 0 If non zero, when inserting into a distributed table, the data will be inserted into the shard insert_shard_id synchronously. Possible values range from 1 to shards_number of the corresponding distributed table.
    log_comment Log comment into system.query_log table and server log. It can be set to an arbitrary string no longer than max_query_size.
    normalize_function_names 0 Normalize function names to their canonical names.
    optimize_normalize_count_variants 1 Rewrite aggregate functions that semantically equals count() as count().
    optimize_respect_aliases 1 If it is set to true, it will respect aliases in WHERE/GROUP BY/ORDER BY, that will help with partition pruning/secondary indexes/optimize_aggregation_in_order/optimize_read_in_order/optimize_trivial_count.
    optimize_rewrite_sum_if_to_count_if 1 Rewrite sumIf() and sum(if()) functions to the countIf() function when logically equivalent.
    periodic_live_view_refresh 60 Interval after which a periodically refreshed live view is forced to refresh.
    query_plan_max_optimizations_to_apply 10000 Limit the total number of optimizations applied to the query plan. If zero, ignored. If the limit is reached, throw an exception.
    receive_data_timeout_ms 2000 Connection timeout for receiving the first packet of data or packet with positive progress from replica.
    s3_max_connections 1024 The maximum number of connections per server.
    sleep_in_send_data 0 Time to sleep in sending data in TCPHandler.
    sleep_in_send_tables_status 0 Time to sleep in sending tables status response in TCPHandler.
    unknown_packet_in_send_data 0 Sends unknown packet instead of N-th data packet.
    use_hedged_requests 0 Use hedged requests for distributed queries.

    Also, please refer to the release notes from the development team available at the following URLs:

    1.5 - Altinity Stable for ClickHouse 21.1

    1.5.1 - Altinity Stable for ClickHouse 21.1.11.3

    Details of the Altinity Stable Release 21.1.11.3

    This is a security update to the 21.1.10.3 release. It backports fixes for several CVEs into the 21.1 version.

    As we discussed in a recent Altinity Blog article, JFrog discovered these CVEs and reported them to the Yandex team in 2021. The fixes have already been included in the 21.3.19 and 21.8.11 Altinity Stable and ClickHouse builds, and in all ClickHouse builds after 21.10.

    We have now backported the fixes to 21.1 as part of our long-term maintenance policy for Altinity Stable builds. We recommend all users running ClickHouse 21.1 upgrade to this new release.

    1.5.2 - Altinity Stable for ClickHouse 21.1.10.3

    Details of the Altinity Stable Release 21.1.10.3

    Changes between 21.1.10.3 and 21.1.9.41

    The complete diff between Altinity and community versions can also be viewed at github:

    https://github.com/ClickHouse/ClickHouse/compare/v21.1.9.41-stable...Altinity:v21.1.10.3-altinity+stable

    Changes between ClickHouse Altinity Stable Release 21.1.9.41 and ClickHouse Altinity Stable Release 21.1.7.1

    • Fixed a bug with distributed queries that could sometimes fail with ‘Unknown packet n from server’ error message: https://github.com/ClickHouse/ClickHouse/issues/21588

    • Fixed several issues with mutations handling (e.g. https://github.com/ClickHouse/ClickHouse/issues/22013)

    • Fix a rare bug when quorum insert with insert_quorum_parallel=1 is not really a “quorum” because of deduplication

    • Fixed a possible buffer overflow in token bloom filter index: https://github.com/ClickHouse/ClickHouse/issues/19233

    • Fixed a bug when nullable types sometimes could not be retrieved: https://github.com/ClickHouse/ClickHouse/issues/21116

    • Fixed a bug with replicated tables metadata not properly cleaned from the non-primary ZooKeeper instance: https://github.com/ClickHouse/ClickHouse/issues/21054

    • Added parseDateTimeBestEffortOrZero to allow for empty values, while parseDateTimeBestEffort fails on empty values. For example:

      SELECT parseDateTimeBestEffort('')

      Query id: 1c60717e-116b-445f-b675-7b9cac35c262

      0 rows in set. Elapsed: 0.003 sec.

      Received exception from server (version 21.11.1):
      Code: 41. DB::Exception: Received from localhost:9000. DB::Exception: Cannot read DateTime: neither Date nor Time was parsed successfully: While processing parseDateTimeBestEffort(''). (CANNOT_PARSE_DATETIME)

      SELECT parseDateTimeBestEffortOrZero('')

      Query id: 484baec9-6d00-4c66-a792-85a270d6f2f2

      ┌─parseDateTimeBestEffortOrZero('')─┐
      │               1970-01-01 03:00:00 │
      └───────────────────────────────────┘

    ClickHouse Altinity Stable packages for this release can be found at:

    1.5.3 - Altinity Stable for ClickHouse 21.1.9.41

    Details of the Altinity Stable Release 21.1.9.41

    Changes between 21.1.9.41 and 21.1.7.1

    1.5.4 - Altinity Stable for ClickHouse 21.1.7.1

    Details of the Altinity Stable Release 21.1.7.1

    Several months ago we certified ClickHouse 20.8 as Altinity Stable. Since then we have worked on newer releases and run them in-house. We completed quite a few new features, and even more have been added by community contributors. It is always difficult to stop and pick a community release for certification: newer releases promise new features, but older ones are easier to upgrade to. We were running ClickHouse 21.1 to power our public datasets instance at Altinity.Cloud for two months and testing it in our environments. We are now confident to certify 21.1 as an Altinity Stable release.

    This release is a significant step forward since the previous Altinity Stable release. It includes 1462 pull requests from 208 contributors with many new features and performance improvements! Please look below for detailed release notes.

    Major new features since the previous stable release 20.8.x

    A new release introduces a lot of changes and new features. These are organized by feature in the Appendix, so refer to this section for more detail. The following new features are worth mentioning on the front page:

    • Security features:
      • LDAP external users directory, see an article in our blog a)
      • AES Encryption functions, see an article in our blog for more detail a)
      • Migration from OpenSSL to BoringSSL library
      • Support SNI in https connections to remote resources
      • Support SNI in ClickHouse server TLS endpoint a)
      • Security context propagation in distributed queries
    • MergeTree features:
      • New TTL extension: TTL RECOMPRESS.
      • ALTER UPDATE/DELETE IN PARTITION for replicated MergeTree tables a)
      • DETACH TABLE/VIEW PERMANENTLY a)
      • OPTIMIZE DEDUPLICATE BY (https://clickhouse.tech/docs/en/sql-reference/statements/optimize/#by-expression) – deduplicate MergeTree tables by a subset of columns a) (see the sketch after this list)
      • SimpleAggregateFunction in SummingMergeTree
      • Option to disable merges for a cold storage in tiered storage configuration a)
    • Integrations:
      • gRPC protocol
      • zstd and xz compression for file-based engines
      • EmbeddedRocksDB engine
    • SQL compatibility:
      • UNION DISTINCT (previously only UNION ALL was supported). The default can be altered by union_default_mode setting.
      • Improved CTE compatibility
      • REPLACE TABLE and CREATE OR REPLACE TABLE DDL statements for Atomic database engine.
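
    The new deduplication clause is easy to try; a minimal sketch with a hypothetical table:

      CREATE TABLE raw_events (id UInt64, payload String, ingested_at DateTime)
      ENGINE = MergeTree ORDER BY id;

      -- Drop rows that are identical in id and payload, ignoring ingested_at:
      OPTIMIZE TABLE raw_events FINAL DEDUPLICATE BY id, payload;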

    As usual with ClickHouse, there are many performance and operational improvements in different server components.

    a) - contributed by Altinity developers.

    Backward Incompatible Changes

    The following changes are backward incompatible and require user attention during an upgrade:

    • Atomic database engine is enabled by default. It does not affect existing databases, but new databases will be created with Engine = Atomic, and the engine cannot be changed once the database is created. The Atomic engine has been used for system tables since 20.5, and it is a good feature in the long term. However, we recommend disabling it for now, especially if you use backup tools, including ClickHouse Backup 0.6.4 or earlier, because the data layout on storage has changed. To disable it by default, add the following configuration section for the default profile:

      <yandex>
          <profiles>
              <default>
                  <default_database_engine>Ordinary</default_database_engine>
              </default>
          </profiles>
      </yandex>
      
    • toUUID(N) is no longer supported. If there is a DEFAULT column with this expression ClickHouse won’t start.

    • The following functions were removed: sumburConsistentHash, timeSeriesGroupSum, timeSeriesGroupRateSum.

    • avg and avgWeighted functions now always return Float64. In previous versions they returned Decimal for Decimal arguments.

    • User settings related to file formats (e.g. format_csv_delimiter) are now accepted in the SETTINGS clause when creating a table that uses the File engine, and are used in all INSERTs and SELECTs; session-level settings are ignored in this case (see the sketch below).
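
    A minimal sketch of pinning a format setting to a File engine table (the table name is hypothetical):

      CREATE TABLE csv_export (id UInt32, name String)
      ENGINE = File(CSV)
      SETTINGS format_csv_delimiter = '|';

      -- All reads and writes use '|' now, regardless of session settings:
      INSERT INTO csv_export VALUES (1, 'alice');
      SELECT * FROM csv_export;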

    Upgrade Notes

    There were several changes between versions that may affect the rolling upgrade of big clusters. Upgrading only part of the cluster is not recommended.

    • Replication protocol has been changed in 20.10 in order to improve reliability of TTL merges. Replication between versions prior to 20.10 and 20.10+ is incompatible if ReplicatedMergeTree tables with TTL are used. See https://github.com/ClickHouse/ClickHouse/pull/14490 for more information.
      • For a safe upgrade all replicas should be upgraded at once.
      • Alternatively, SYSTEM STOP TTL MERGES should be used during the upgrade.

    Other Important Changes

    Some new ClickHouse features are now enabled by default. It may lead to a change in behaviour, so review those carefully and disable features that may affect your system:

    • Insert quorum behaviour has been changed. insert_quorum_parallel is enabled by default. It breaks sequential consistency and may have other side effects. We recommend disabling this feature if you are using quorum inserts on previous versions.
    • optimize_on_insert is enabled by default. This is a new feature that applies a logic of Replacing, Summing, Collapsing and AggregatingMergeTree on the inserted block. Unfortunately, it still has some issues so we recommend disabling this after upgrading.
    • use_compact_format_in_distributed_parts_names is enabled by default.
    • input_format_null_as_default is enabled by default.
    • Background fetches are now limited by the background_fetches_pool_size setting. The default value is 3, which may be low in some cases. In previous versions the common background pool, with a default size of 16, was used for merges and fetches.
    • Compact MergeTree parts are enabled by default for parts below 10MB of size uncompressed. See the min_bytes_for_wide_part setting and the sketch after this list.
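
    To see which part formats a server actually uses after the upgrade, the part_type column of system.parts can be inspected; a quick sketch:

      SELECT table, part_type, count() AS parts
      FROM system.parts
      WHERE active
      GROUP BY table, part_type
      ORDER BY table;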

    Known issues in 21.1.7.1

    The development team continues to improve the quality of the 21.1 release. The following issues still exist in the 21.1.7.1 version and may affect ClickHouse operation. Please inspect them carefully to decide if those are applicable to your applications:

    ClickHouse Altinity Stable release is based on community version. It can be downloaded from repo.clickhouse.tech, and RPM packages are available from the Altinity Stable Repository.

    Please contact us at info@altinity.com if you experience any issues with the upgrade.


    Appendix

    New data types

    • DateTime32 (alias to DateTime)
    • Map (Experimental)

    New formats

    • JSONCompactStrings
    • JSONCompactStringsEachRow
    • JSONCompactStringsEachRowWithNamesAndTypes
    • JSONStrings
    • JSONStringsEachRow
    • JSONStringsEachRowWithProgress
    • LineAsString
    • ProtobufSingle
    • RawBLOB

    New functions

    • DateTime functions:
      • fromModifiedJulianDay, toModifiedJulianDay
      • fromModifiedJulianDayOrNull, toModifiedJulianDayOrNull
      • parseDateTime32BestEffort, parseDateTime32BestEffortOrNull, parseDateTime32BestEffortOrZero, toDateTime32
    • String functions:
      • countMatches, countMatchesCaseInsensitive
      • countSubstrings, countSubstringsCaseInsensitive, countSubstringsCaseInsensitiveUTF8
      • isIPv4String, isIPv6String
    • Functions for semi-duplicate search and strings proximity:
      • bitHammingDistance, tupleHammingDistance
      • ngramMinHash*, ngramSimHash*
    • Encoding/Formatting:
      • encodeXMLComponent
      • formatReadableQuantity
      • formatReadableTimeDelta
    • Array functions:
      • arrayAvg, arrayMax, arrayMin
      • mapPopulateSeries
    • Machine-learning and statistics:
      • mannWhitneyUTest
      • rankCorr
      • hypot
      • studentTTest, welchTTest
    • Encryption:
      • aes_decrypt_mysql, aes_encrypt_mysql
      • decrypt, encrypt
    • URL functions
      • cutToFirstSignificantSubdomainCustom, cutToFirstSignificantSubdomainCustomWithWWW, cutToFirstSignificantSubdomainWithWWW
      • firstSignificantSubdomainCustom
    • Aggregate functions
      • New combinator -SimpleState
    • Functions to work with new Map data type:
      • map
    • Math/Statistics:
      • acosh, asinh, atan2, atanh, cosh, sinh
      • log1p
    • Type related:
      • accurateCast
      • accurateCastOrNull
      • byteSize
      • reinterpretAsUUID
      • toUUIDOrNull
      • toUUIDOrZero
    • Other:
      • farmFingerprint64
    • Internal ClickHouse:
      • errorCodeToName
      • logTrace
      • tid
      • tcpPort

    New table functions

    • null
    • view

    New table engines

    • EmbeddedRocksDB

    New metrics and events

    system.metrics:

    • BackgroundFetchesPoolTask
    • BackgroundMessageBrokerSchedulePoolTask
    • MaxDDLEntryID
    • TablesToDropQueueSize
    • REMOVED MemoryTrackingForMerges
    • REMOVED MemoryTrackingInBackgroundBufferFlushSchedulePool
    • REMOVED MemoryTrackingInBackgroundDistributedSchedulePool
    • REMOVED MemoryTrackingInBackgroundMoveProcessingPool
    • REMOVED MemoryTrackingInBackgroundProcessingPool
    • REMOVED MemoryTrackingInBackgroundSchedulePool

    system.asynchronous_metrics:

    • HTTPThreads
    • InterserverThreads
    • MySQLThreads
    • TCPThreads
    • TotalBytesOfMergeTreeTables
    • TotalPartsOfMergeTreeTables
    • TotalRowsOfMergeTreeTables

    system.events:

    • RWLockReadersWaitMilliseconds
    • REMOVED CreatedWriteBufferOrdinary

    New system tables

    • errors
    • replicated_fetches
    • replicated_merge_tree_settings

    New columns in system tables

    • grants.access_type
    • merges.merge_type, merge_algorithm
    • parts.uuid, default_compression_codec, recompression_ttl_info.expression, recompression_ttl_info.min, recompression_ttl_info.max
    • privileges.privilege, parent_group
    • processes.forwarded_for
    • query_log.event_time_microseconds, query_start_time_microseconds, normalized_query_hash, query_kind, databases, tables, columns, forwarded_for
    • query_thread_log.event_time_microseconds, query_start_time_microseconds, normalized_query_hash, current_database, forwarded_for
    • quotas.keys
    • replication_queue.merge_type
    • storage_policies.prefer_not_to_merge
    • table_engines.supports_parallel_insert
    • trace_log.event_time_microseconds
    • user_directories.path, readonly, params

    system.merge_tree_settings added/changed

    Name Old value New value Description
    assign_part_uuids 0 Generate UUIDs for parts. Before enabling check that all replicas support the new format.
    execute_merges_on_single_replica_time_threshold 0 When greater than zero only a single replica starts the merge immediately, others wait up to that amount of time to download the result instead of doing merges locally. If the chosen replica doesn’t finish the merge during that amount of time, fallback to standard behavior happens.
    fsync_after_insert 0 Do fsync for every inserted part. Significantly decreases performance of inserts, not recommended to use with wide parts.
    fsync_part_directory 0 Do fsync for part directory after all part operations (writes, renames, etc.).
    in_memory_parts_insert_sync 0 If true insert of part with in-memory format will wait for fsync of WAL
    max_compress_block_size 0 Compress the pending uncompressed data in a buffer if its size is larger or equal than the specified threshold. Block of data will be compressed even if the current granule is not finished. If this setting is not set, the corresponding global setting is used.
    max_number_of_merges_with_ttl_in_pool 2 When there is more than a specified number of merges with TTL entries in the pool, do not assign a new merge with TTL. This is to leave free threads for regular merges and avoid “Too many parts”
    max_partitions_to_read -1 Limit the max number of partitions that can be accessed in one query. <= 0 means unlimited. This setting is the default that can be overridden by the query-level setting with the same name.
    max_replicated_merges_with_ttl_in_queue 1 How many tasks of merging parts with TTL are allowed simultaneously in the ReplicatedMergeTree queue.
    merge_with_recompression_ttl_timeout 14400 Minimal time in seconds, when merge with recompression TTL can be repeated.
    merge_with_ttl_timeout 86400 14400 Minimal time in seconds, when merge with delete TTL can be repeated.
    min_bytes_for_wide_part 0 10485760 Minimal uncompressed size in bytes to create part in wide format instead of compact
    min_compress_block_size 0 When granule is written, compress the data in a buffer if the size of pending uncompressed data is larger or equal than the specified threshold. If this setting is not set, the corresponding global setting is used.
    min_compressed_bytes_to_fsync_after_fetch 0 Minimal number of compressed bytes to do fsync for part after fetch (0 - disabled)
    min_compressed_bytes_to_fsync_after_merge 0 Minimal number of compressed bytes to do fsync for part after merge (0 - disabled)
    min_rows_to_fsync_after_merge 0 Minimal number of rows to do fsync for part after merge (0 - disabled)
    remove_empty_parts 1 Remove empty parts after they were pruned by TTL, mutation, or collapsing merge algorithm
    try_fetch_recompressed_part_timeout 7200 Recompression works slowly in most cases. We don’t start a merge with recompression until this timeout and try to fetch the recompressed part from the replica which assigned this merge with recompression.
    write_ahead_log_bytes_to_fsync 104857600 Amount of bytes, accumulated in WAL to do fsync.
    write_ahead_log_interval_ms_to_fsync 100 Interval in milliseconds after which fsync for WAL is being done.

    system.settings added/changed

    Name Old value New value Description
    aggregate_functions_null_for_empty 0 Rewrite all aggregate functions in a query, adding -OrNull suffix to them
    allow_experimental_cross_to_join_conversion 1
    allow_experimental_data_skipping_indices 1
    allow_experimental_low_cardinality_type 1
    allow_experimental_map_type 0 Allow data type Map
    allow_experimental_multiple_joins_emulation 1
    allow_experimental_window_functions 0 Allow experimental window functions
    asterisk_include_alias_columns 0 Include ALIAS columns for wildcard query
    asterisk_include_materialized_columns 0 Include MATERIALIZED columns for wildcard query
    async_socket_for_remote 1 Asynchronously read from socket executing remote query
    background_fetches_pool_size 3 Number of threads performing background fetches for replicated tables. Only has meaning at server startup.
    background_message_broker_schedule_pool_size 16 Number of threads performing background tasks for message streaming. Only has meaning at server startup.
    compile 0
    database_atomic_wait_for_drop_and_detach_synchronously 0 When executing DROP or DETACH TABLE in Atomic database, wait for table data to be finally dropped or detached.
    date_time_output_format simple Method to write DateTime to text output. Possible values: ‘simple’, ‘iso’, ‘unix_timestamp’.
    default_database_engine Ordinary Atomic Default database engine.
    do_not_merge_across_partitions_select_final 0 Merge parts only in one partition in select final
    enable_global_with_statement 0 Propagate WITH statements to UNION queries and all subqueries
    experimental_use_processors 1
    force_data_skipping_indices Comma separated list of strings or literals with the name of the data skipping indices that should be used during query execution, otherwise an exception will be thrown.
    force_optimize_skip_unused_shards_no_nested 0
    format_regexp_escaping_rule Escaped Raw Field escaping rule (for Regexp format)
    input_format_csv_arrays_as_nested_csv 0 When reading Array from CSV, expect that its elements were serialized in nested CSV and then put into string. Example: "[""Hello"", ""world"", ""42"""" TV""]". Braces around an array can be omitted.
    input_format_csv_enum_as_number 0 Treat inserted enum values in CSV formats as enum indices
    input_format_null_as_default 0 1 For text input formats initialize null fields with default values if data type of this field is not nullable
    input_format_tsv_enum_as_number 0 Treat inserted enum values in TSV formats as enum indices
    insert_distributed_one_random_shard 0 If setting is enabled, inserting into distributed table will choose a random shard to write when there is no sharding key
    insert_quorum_parallel 1 For quorum INSERT queries - enable to make parallel inserts without linearizability
    limit 0 Limit on read rows from the most ’end’ result for select query, default 0 means no limit length
    load_balancing_first_offset 0 Which replica to preferably send a query when FIRST_OR_RANDOM load balancing strategy is used.
    log_queries_min_query_duration_ms 0 Minimal time for the query to run, to get to the query_log/query_thread_log.
    mark_cache_min_lifetime 0
    max_bytes_to_read_leaf 0 Limit on read bytes (after decompression) on the leaf nodes for distributed queries. Limit is applied for local reads only excluding the final merge stage on the root node.
    max_concurrent_queries_for_all_users 0 The maximum number of concurrent requests for all users.
    max_partitions_to_read -1 Limit the max number of partitions that can be accessed in one query. <= 0 means unlimited.
    max_rows_to_read_leaf 0 Limit on read rows on the leaf nodes for distributed queries. Limit is applied for local reads only excluding the final merge stage on the root node.
    merge_tree_uniform_read_distribution 1
    min_count_to_compile 0
    multiple_joins_rewriter_version 2 0 Obsolete setting, does nothing. Will be removed after 2021-03-31
    mysql_datatypes_support_level Which MySQL types should be converted to corresponding ClickHouse types (rather than being represented as String). Can be empty or any combination of ‘decimal’ or ‘datetime64’. When empty MySQL’s DECIMAL and DATETIME/TIMESTAMP with non-zero precision are seen as String on ClickHouse’s side.
    offset 0 Offset on read rows from the most ’end’ result for select query
    opentelemetry_start_trace_probability 0 Probability to start an OpenTelemetry trace for an incoming query.
    optimize_move_functions_out_of_any 1 0 Move functions out of aggregate functions ‘any’, ‘anyLast’.
    optimize_on_insert 1 Do the same transformation for inserted block of data as if merge was done on this block.
    optimize_skip_merged_partitions 0 Skip partitions with one part with level > 0 in optimize final
    output_format_json_array_of_rows 0 Output a JSON array of all rows in JSONEachRow(Compact) format.
    output_format_json_named_tuples_as_objects 0 Serialize named tuple columns as JSON objects.
    output_format_parallel_formatting 1 Enable parallel formatting for some data formats.
    output_format_pretty_row_numbers 0 Add row numbers before each row for pretty output format
    output_format_tsv_null_representation Custom NULL representation in TSV format
    partial_merge_join 0
    read_backoff_min_concurrency 1 Settings to try keeping the minimal number of threads in case of slow reads.
    read_overflow_mode_leaf throw What to do when the leaf limit is exceeded.
    remerge_sort_lowered_memory_bytes_ratio 2 If memory usage after remerge is not reduced by this ratio, remerge will be disabled.
    s3_max_redirects 10 Max number of S3 redirects hops allowed.
    s3_max_single_part_upload_size 67108864 The maximum size of object to upload using singlepart upload to S3.
    special_sort not_specified
    system_events_show_zero_values 0 Include all metrics, even with zero values
    union_default_mode Set default Union Mode in SelectWithUnion query. Possible values: empty string, ‘ALL’, ‘DISTINCT’. If empty, query without Union Mode will throw exception.
    use_antlr_parser 0 Parse incoming queries using ANTLR-generated experimental parser
    use_compact_format_in_distributed_parts_names 0 1 Changes format of directories names for distributed table insert parts.
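
    To check which of these settings differ from the defaults on a running server, the system.settings table can be queried directly (a minimal example; the output depends on your configuration):

      SELECT name, value, description
      FROM system.settings
      WHERE changed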

    Also, please refer to the release notes from the ClickHouse development team.

    1.6 - Altinity Stable for ClickHouse 20.8

    1.6.1 - Altinity Stable for ClickHouse 20.8.12.2

    Details of the Altinity Stable Release 20.8.12.2

    Description

    Bug fixes.

    Bug Fixes

    • Fixed a rare bug that may result in corrupted data when merging from wide to compact parts. We recommend upgrading to this release all 20.8 users who are using compact parts.

    1.6.2 - Altinity Stable for ClickHouse 20.8.11.17

    Details of the Altinity Stable Release 20.8.11.17

    Description

    This release provides bug fixes and general stability improvements.

    1.6.3 - Altinity Stable for ClickHouse 20.8.7.15

    Details of the Altinity Stable Release 20.8.7.15

    Description

    This release provides several changes and new features from the previous release.

    Major New Features

    • Security features:
      • RBAC and SQL management for users, roles, grants etc.
      • LDAP authentication
      • Kerberos authentication in Kafka Engine
    • Dictionary features:
      • direct and ssd_cache layouts
      • Redis, MongoDB, Cassandra sources
      • Automatic query rewrite for joins (see an example here)
      • Experimental polygon dictionaries
    • MergeTree features:
      • S3 disk for MergeTree tables! It is still an experimental feature, see our article for more detail.
      • New TTL types: TTL DELETE WHERE and TTL GROUP BY (see the sketches after this list).
      • In-memory MergeTree parts
    • Integrations:
      • PostgreSQL wire protocol
      • RabbitMQ and MongoDB storage engines
      • Tencent Cloud Object Storage (COS)
      • Highly experimental MaterializeMySQL engine that implements MySQL replica in ClickHouse
    • SQL compatibility:
      • New Int128, (U)Int256, Decimal256 extended precision data types
      • Aliases for standard SQL types
      • EXPLAIN statement!
      • Merge join improvements
    • Custom HTTP handlers
    • clickhouse-copier underwent extensive updates and improvements
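
    The new TTL types mentioned in the feature list can be sketched as follows (table and column names are illustrative, not from the release itself): the first table deletes only matching rows after a week, the second rolls old rows up by key after a month.

      CREATE TABLE ttl_delete_where_example
      (
          d DateTime,
          level String
      )
      ENGINE = MergeTree
      ORDER BY d
      TTL d + INTERVAL 1 WEEK DELETE WHERE level = 'debug';

      CREATE TABLE ttl_group_by_example
      (
          d DateTime,
          k Int32,
          v UInt64
      )
      ENGINE = MergeTree
      ORDER BY (k, d)
      TTL d + INTERVAL 1 MONTH GROUP BY k SET v = sum(v);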

    Upgrade Notes

    Backward Incompatible Changes

    The following changes are backward incompatible and require user attention during an upgrade:

    • Aggregate functions states with Nullable arguments may produce different / incompatible types.
    • Gorilla, Delta and DoubleDelta codecs can not be used anymore on data types of variable size (like strings).
    • System tables (e.g. system.query_log, system.trace_log, system.metric_log) are using compact data part format for parts smaller than 10 MiB in size (this is almost always the case). Compact data part format is supported since version 20.3.
    • WARNING: If you have to downgrade to version prior 20.3, you should manually delete table data for system logs in /var/lib/clickhouse/data/system/.
    • The setting input_format_with_names_use_header is enabled by default. It will affect parsing of input formats -WithNames and -WithNamesAndTypes.
    • Special printing of zero Date/DateTime values as '0000-00-00' and '0000-00-00 00:00:00' is deprecated. They are now printed as '1970-01-01' and '1970-01-01 00:00:00' respectively.

    There were several changes between versions that may affect the rolling upgrade of big clusters. Upgrading only part of the cluster is not recommended. Note the following:

    • 20.3 two-level aggregation is not compatible with 20.4+.
      Data will not be fully aggregated for queries that are processed using the two-level aggregation algorithm. This algorithm should be disabled before upgrading if the risk is high in your environment. See group_by_two_level_threshold and group_by_two_level_threshold_bytes settings.
    • zstd library has been updated in 20.5. While it is not a problem for ClickHouse in general, it may result in inconsistent parts when several replicas merge parts independently, and will force ClickHouse to download merged parts to make sure they are byte-identical (which will lead to extra traffic between nodes). The first node to complete the merge will register the part in ZooKeeper, and the other nodes will download the part if their checksums are different. There will be no data loss; conflicts will disappear once all replicas are upgraded.
    • The following settings lead to incompatibility in distributed queries when only a subset of shards are upgraded and others are not:
      • optimize_move_functions_out_of_any
      • optimize_arithmetic_operations_in_aggregate_functions
      • optimize_injective_functions_inside_uniq
      • optimize_if_transform_strings_to_enum
    • When pre-20.5 and 20.5+ versions run as replicas, "Part ... intersects previous part" errors are possible due to a change in the leadership selection protocol. If you need to run pre-20.5 and 20.5+ versions in the same cluster, make sure the old version can not become a leader. This can be configured via the replicated_can_become_leader merge tree setting globally or on a table level.

    Changes

    • All replicas are now 'leaders'. This allows multiple replicas to assign merges, mutations, partition drop, move and replace concurrently. Now system.replicas.is_leader is 1 for all tables on all nodes. If you rely on this value for some operations, your processes must be revised. The LeaderElection and LeaderReplica metrics were removed.
    • New setting max_server_memory_usage limits total memory usage of the server. The setting max_memory_usage_for_all_queries is now obsolete and does nothing. You might see an exception 'Memory limit (total) exceeded'. Increasing the limit requires a restart.
    • The log_queries setting is now enabled by default. You might want to disable this setting for some profiles if you don’t want their queries logged into the system.query_log table.
    • Several new optimizations are enabled by default. While they typically improve performance, regressions are possible in corner cases:
      • optimize_aggregators_of_group_by_keys
      • optimize_arithmetic_operations_in_aggregate_functions
      • optimize_duplicate_order_by_and_distinct
      • optimize_group_by_function_keys
      • optimize_injective_functions_inside_uniq
      • optimize_move_functions_out_of_any
      • optimize_monotonous_functions_in_order_by
      • optimize_redundant_functions_in_order_by
      • optimize_trivial_insert_select
      • partial_merge_join_optimizations

    New data types

    • All standard SQL data types, like BIGINT, VARCHAR, etc. are mapped to ClickHouse data types. See the system.data_type_families table.
    • Experimental data types Point, Ring, Polygon, MultiPolygon
    • Int128, Int256, UInt256, Decimal256 extended precision data types

    New formats

    • Arrow
    • ArrowStream
    • JSONAsString – allows storing the full JSON row unparsed and processing it later using ClickHouse JSONExtract functions (see the example after this list)
    • Markdown
    • MsgPack
    • PostgreSQLWire
    • Regexp - allows parsing non-standard text formats (for example, logs) by applying a regular expression to every line.
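
    For instance, JSONAsString lets you land raw JSON rows into a single String column and extract fields later (a minimal sketch; the table name is illustrative):

      CREATE TABLE raw_events (json String) ENGINE = MergeTree ORDER BY tuple();
      INSERT INTO raw_events FORMAT JSONAsString {"user": "alice", "clicks": 42}
      SELECT JSONExtractUInt(json, 'clicks') FROM raw_events;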

    New functions

    • DateTime functions:
      • fromUnixTimestamp / FROM_UNIXTIME
      • dateTrunc / date_trunc
      • fromUnixTimestamp64Micro
      • fromUnixTimestamp64Milli
      • fromUnixTimestamp64Nano
      • toUnixTimestamp64Micro
      • toUnixTimestamp64Milli
      • toUnixTimestamp64Nano
    • String functions:
      • extractGroups, extractAllGroupsHorizontal, extractAllGroupsVertical (alias to extractAllGroups)
      • ilike, notILike (also a new SQL operator ILIKE has been added)
    • Array functions:
      • arrayReduceInRanges
      • hasSubstr
    • Machine-learning and statistics:
      • arrayAUC -- Area Under the ROC Curve
      • bayesAB -- Bayesian A/B testing calculator
      • medianExactHigh, medianExactLow
      • quantileExactHigh, quantileExactLow, quantilesExactHigh, quantilesExactLow
    • JSON
      • JSONExtractKeysAndValuesRaw
    • URL functions
      • port
      • netloc
    • Aggregate functions
      • -Distinct – new aggregate function combinator
      • initializeAggregation – allows initialization of an AggregateFunction state (see the example after this list)
    • Functions to work with key-value pairs (aka maps):
      • mapAdd, mapSubtract, maxMap, minMap (extending the API of sumMap).
      • SimpleAggregateFunction now supports minMap, maxMap, sumMap.
      • The new data type Map is currently in development and should make this even more intuitive.
    • MySQL integration
      • DATABASE (alias for currentDatabase)
      • globalVariable (stub)
    • New types related:
      • toDecimal256, toDecimal256OrNull, toDecimal256OrZero
      • toInt128, toInt128OrNull, toInt128OrZero
      • toInt256, toInt256OrNull, toInt256OrZero
      • toUInt256, toUInt256OrNull, toUInt256OrZero
    • engine=Join
      • joinGetOrNull
    • Random:
      • fuzzBits (used for CI)
      • rand32 (alias for rand)
      • randomFixedString, randomString, randomStringUTF8
    • Serialize columns to some text format
      • formatRow, formatRowNoNewline
    • Settings & custom settings:
      • getSetting
    • Check types
      • isDecimalOverflow
      • defaultValueOfTypeName
      • isZeroOrNull
    • Helper functions to analyze query_log
      • normalizeQuery, normalizedQueryHash
    • Other:
      • countDigits
      • mod (alias for modulo)
    • Special (used in CI / by ClickHouse developers):
      • hasThreadFuzzer
      • buildId
    • Related to internals of execution of the IN / GLOBAL IN operator (also see the transform_null_in setting)
      • notNullIn, notNullInIgnoreSet, nullIn, nullInIgnoreSet, inIgnoreSet, notInIgnoreSet
      • globalNotNullIn, globalNotNullInIgnoreSet, globalNullIn, globalNullInIgnoreSet, globalInIgnoreSet, globalNotInIgnoreSet
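
    As a small illustration of one of these, initializeAggregation builds an aggregate function state from plain values (handy for inserting into AggregateFunction columns), and finalizeAggregation unwraps it; the query below returns 1:

      SELECT finalizeAggregation(initializeAggregation('uniqState', 42)) AS u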

    New table functions

    • cosn – integration with Tencent Cloud Object Storage (COS)

    New table engines

    • MongoDB
    • RabbitMQ
    • COSN

    New metrics and events

    system.metrics:

    • BackgroundBufferFlushSchedulePoolTask
    • BackgroundDistributedSchedulePoolTask
    • MemoryTrackingInBackgroundBufferFlushSchedulePool
    • MemoryTrackingInBackgroundDistributedSchedulePool
    • PostgreSQLConnection

    system.events:

    • OSCPUVirtualTimeMicroseconds
    • OSCPUWaitMicroseconds
    • OSReadBytes, OSReadChars
    • OSWriteBytes, OSWriteChars
    • QueryTimeMicroseconds

    New system tables (mostly for RBAC introspection)

    • current_roles
    • distribution_queue
    • enabled_roles
    • grants
    • licenses
    • privileges
    • quota_limits
    • quotas_usage
    • row_policies
    • role_grants
    • roles
    • settings_profile_elements
    • settings_profiles
    • time_zones
    • user_directories
    • users

    New columns in system tables

    • columns.position, parts_columns.position
    • databases.uuid, dictionaries.uuid, tables.uuid
    • disks.type
    • merge_tree_settings.type, settings.type
    • parts: delete_ttl_info_min, delete_ttl_info_max, move_ttl_info_min, move_ttl_info_max, move_ttl_info.expression
    • query_log.current_database
    • storage_policies.volume_type
    • tables: total_rows, total_bytes, lifetime_rows, lifetime_bytes
    • trace_log: trace_type, size, timestamp_ns

    system.merge_tree_settings added/changed

    Setting Old value New value Description
    allow_nullable_key 0 Allow Nullable types as primary keys.
    always_fetch_merged_part 0 If true, replicas never merge parts and always download merged parts from other replicas.
    disable_background_merges 0 REMOVED
    enable_mixed_granularity_parts 0 1 Enable parts with adaptive and non-adaptive granularity at the same time
    in_memory_parts_enable_wal 1 Whether to write blocks in Native format to write-ahead-log before creation in-memory part
    lock_acquire_timeout_for_background_operations 120 For background operations like merges, mutations etc. How many seconds before failing to acquire table locks.
    max_part_loading_threads auto The number of threads to load data parts at startup.
    max_part_removal_threads auto The number of threads for concurrent removal of inactive data parts. One is usually enough, but in ‘Google Compute Environment SSD Persistent Disks’ file removal (unlink) operation is extraordinarily slow and you probably have to increase this number (recommended is up to 16).
    max_replicated_logs_to_keep 10000 1000 How many records may be in log, if there is an inactive replica.
    min_replicated_logs_to_keep 100 10 Keep about this number of last records in ZooKeeper log, even if they are obsolete. It doesn’t affect work of tables: used only to diagnose ZooKeeper log before cleaning.
    min_bytes_for_compact_part 0 Minimal uncompressed size in bytes to create part in compact format instead of saving it in RAM. If non-zero enables in-memory parts.
    min_rows_for_compact_part 0 Minimal number of rows to create part in compact format instead of saving it in RAM. If non-zero enables in-memory parts.
    min_index_granularity_bytes 1024 Minimum amount of bytes in single granule.
    min_relative_delay_to_measure 120 Calculate relative replica delay only if absolute delay is not less than this value.
    write_ahead_log_max_bytes 1073741824 Rotate WAL, if it exceeds that amount of bytes

    system.settings added/changed

    Setting Old value New value Description
    allow_experimental_bigint_types 0 Allow Int128, Int256, UInt256 and Decimal256 types
    allow_experimental_database_materialize_mysql 1 Allow database creation with Engine=MaterializeMySQL(…) (Highly experimental yet)
    allow_experimental_geo_types 0 Allow geo data types such as Point, Ring, Polygon, MultiPolygon
    allow_non_metadata_alters 1 Allow to execute alters which affects table’s metadata and data on disk.
    allow_push_predicate_when_subquery_contains_with 1 Allows push predicate when subquery contains WITH clause
    allow_suspicious_codecs 0 If it is set to true, allow specifying meaningless compression codecs.
    alter_partition_verbose_result 0 Output information about affected parts. Currently works only for FREEZE and ATTACH commands.
    background_buffer_flush_schedule_pool_size 16 Number of threads performing background flush for tables with Buffer engine. Only has meaning at server startup.
    background_distributed_schedule_pool_size 16 Number of threads performing background tasks for distributed sends. Only has meaning at server startup.
    cast_keep_nullable 0 CAST operator keep Nullable for result data type
    data_type_default_nullable 0 Data types without NULL or NOT NULL will make Nullable
    default_database_engine Ordinary Default database engine
    distributed_replica_max_ignored_errors 0 Number of errors that will be ignored while choosing replicas
    force_optimize_skip_unused_shards_nesting 0 Same as force_optimize_skip_unused_shards, but accept nesting level until which it will work
    format_regexp Regular expression (for Regexp format)
    format_regexp_escaping_rule Escaped Field escaping rule (for Regexp format)
    format_regexp_skip_unmatched 0 Skip lines unmatched by regular expression (for Regexp format)
    function_implementation Choose function implementation for specific target or variant (experimental). If empty, enable all of them.
    input_format_avro_allow_missing_fields 0 For Avro/AvroConfluent format: when field is not found in schema use default value instead of error
    input_format_with_names_use_header 0 1 For TSVWithNames and CSVWithNames input formats this controls whether the format parser is to assume that column data appear in the input exactly as they are specified in the header.
    insert_in_memory_parts_timeout 600000
    join_on_disk_max_files_to_merge 64 For MergeJoin on disk, set how many files are allowed to sort simultaneously. The larger the value the more memory is used and less disk I/O needed. Minimum is 2.
    lock_acquire_timeout 120 How long locking request should wait before failing
    log_queries_min_type QUERY_START Minimal type in query_log to log, possible values (from low to high): QUERY_START, QUERY_FINISH, EXCEPTION_BEFORE_START, EXCEPTION_WHILE_PROCESSING.
    materialize_ttl_after_modify 1 Apply TTL for old data, after ALTER MODIFY TTL query
    max_block_size 65536 65505 Maximum block size for reading
    max_final_threads 16 The maximum number of threads to read from the table with FINAL.
    max_insert_block_size 1048576 1048545 The maximum block size for insertion, if we control the creation of blocks for insertion
    max_joined_block_size_rows 65536 65505 Maximum block size for JOIN result (if join algorithm supports it). 0 means unlimited.
    max_untracked_memory 4194304 Small allocations and deallocations are grouped in thread local variables and tracked or profiled only when the amount (in absolute value) becomes larger than specified value. If the value is higher than ‘memory_profiler_step’ it will be effectively lowered to ‘memory_profiler_step’.
    memory_profiler_sample_probability 0 Collect random allocations and deallocations and write them into system.trace_log with 'MemorySample' trace_type. The probability is for every alloc/free regardless of the size of the allocation. Note that sampling happens only when the amount of untracked memory exceeds 'max_untracked_memory'. You may want to set 'max_untracked_memory' to 0 for extra fine-grained sampling.
    metrics_perf_events_enabled 0 If enabled, some of the perf events will be measured throughout queries’ execution.
    metrics_perf_events_list Comma separated list of perf metrics that will be measured throughout queries’ execution. Empty means all events.
    min_chunk_bytes_for_parallel_parsing 1048576 10485760 The minimum chunk size in bytes, which each thread will parse in parallel.
    min_insert_block_size_bytes 268435456 268427520 Squash blocks passed to INSERT query to specified size in bytes, if blocks are not big enough.
    min_insert_block_size_bytes_for_materialized_views 0 Like min_insert_block_size_bytes, but applied only during pushing to MATERIALIZED VIEW (default: min_insert_block_size_bytes)
    min_insert_block_size_rows 1048576 1048545 Squash blocks passed to INSERT query to specified size in rows, if blocks are not big enough.
    min_insert_block_size_rows_for_materialized_views 0 Like min_insert_block_size_rows, but applied only during pushing to MATERIALIZED VIEW (default: min_insert_block_size_rows)
    multiple_joins_rewriter_version 2 1 or 2. Second rewriter version knows about table columns and keeps not clashing names as is.
    optimize_aggregation_in_order 0 Enable GROUP BY optimization for aggregating data in corresponding order in MergeTree tables.
    optimize_aggregators_of_group_by_keys 1 Eliminates min/max/any/anyLast aggregators of GROUP BY keys in SELECT section
    optimize_arithmetic_operations_in_aggregate_functions 1 Move arithmetic operations out of aggregation functions
    optimize_distributed_group_by_sharding_key 0 Optimize GROUP BY sharding_key queries (by avoiding costly aggregation on the initiator server)
    optimize_duplicate_order_by_and_distinct 1 Remove duplicate ORDER BY and DISTINCT if it’s possible
    optimize_group_by_function_keys 1 Eliminates functions of other keys in GROUP BY section
    optimize_if_chain_to_multiif 0 Replace if(cond1, then1, if(cond2, …)) chains to multiIf. Currently it’s not beneficial for numeric types.
    optimize_if_transform_strings_to_enum 0 Replaces string-type arguments in If and Transform to enum. Disabled by default because it could make an inconsistent change in a distributed query that would lead to its failure
    optimize_injective_functions_inside_uniq 1 Delete injective functions of one argument inside uniq*() functions
    optimize_monotonous_functions_in_order_by 1 Replace monotonous function with its argument in ORDER BY
    optimize_move_functions_out_of_any 1 Move functions out of aggregate functions ‘any’, ‘anyLast’
    optimize_redundant_functions_in_order_by 1 Remove functions from ORDER BY if its argument is also in ORDER BY
    optimize_skip_unused_shards_nesting 0 Same as optimize_skip_unused_shards, but accept nesting level until which it will work
    optimize_trivial_insert_select 1 Optimize trivial ‘INSERT INTO table SELECT … FROM TABLES’ query
    output_format_enable_streaming 0 Enable streaming in output formats that supports it
    output_format_pretty_grid_charset UTF-8 Charset for printing grid borders. Available charsets: ASCII, UTF-8 (default)
    output_format_pretty_max_value_width 10000 Maximum width of value to display in Pretty formats. If greater - it will be cut.
    parallel_distributed_insert_select 0 Process distributed INSERT SELECT query in the same cluster on local tables on every shard, if 1 SELECT is executed on each shard, if 2 SELECT and INSERT is executed on each shard
    partial_merge_join_left_table_buffer_bytes 32000000 If not 0, group left table blocks in bigger ones for the left-side table in partial merge join. It uses up to 2x of specified memory per joining thread. The current version works only with ‘partial_merge_join_optimizations = 1’.
    partial_merge_join_optimizations 0 1 Enable optimizations in partial merge join
    partial_merge_join_rows_in_right_blocks 10000 65536 Split right-hand joining data in blocks of specified size. It’s a portion of data indexed by min-max values and possibly unloaded on disk.
    rabbitmq_max_wait_ms 5000 The wait time for reading from RabbitMQ before retry.
    read_in_order_two_level_merge_threshold 100 Minimal number of parts to read to run preliminary merge step during multithread reading in order of primary key.
    send_logs_level none fatal Send server text logs with specified minimum level to client. Valid values: 'trace', 'debug', 'information', 'warning', 'error', 'fatal', 'none'
    show_table_uuid_in_table_create_query_if_not_nil 0 For tables in databases with Engine=Atomic show UUID of the table in its CREATE query.
    temporary_files_codec LZ4 Set compression codec for temporary files (sort and join on disk). I.e. LZ4, NONE.
    transform_null_in 0 If enabled, NULL values will be matched with ‘IN’ operator as if they are considered equal.
    validate_polygons 1 Throw exception if polygon is invalid in function pointInPolygon (e.g. self-tangent, self-intersecting). If the setting is false, the function will accept invalid polygons but may silently return wrong result.

    References

    ClickHouse Altinity Stable release is based on community version. It can be downloaded from repo.clickhouse.tech, and RPM packages are available from the Altinity Stable Repository.

    1.7 - Altinity Stable for ClickHouse 20.3

    1.7.1 - Altinity Stable for ClickHouse 20.3.19.4

    Details of the Altinity Stable Release 20.3.19.4

    Description

    The release includes several dozen bug fixes from the previous release.

    Major New Features

    • Server startup time has been reduced by parsing metadata in parallel.
    • Improved performance of primary key analysis for LowCardinality columns. That was a performance regression introduced in 20.3.
    • Improved performance of queries with large tuples and tuples used in primary key.

    Bug Fixes

    • Fixed a bug that prevented attaching Materialized Views to system tables.
    • Fixed incorrect behavior of if function with NULLs.
    • Fixed segfaults in rare cases.
    • Fixed a bug that prevented predicate pushdown for queries using WITH clause.
    • Fixed SIGSEGV in Kafka engine when broker is unavailable.
    • Fixed a bug leading to block structure mismatch error for queries with UNION and JOIN.
    • Fixed TTL processing logic to process all partitions in one run.
    • Fixed a bug of parsing of row policies from configuration files that could result in missing policies sometimes.
    • Fixed a bug with ALTER TABLE UPDATE could produce incorrect results when updating Nullable columns.
    • Fixed a bug with codecs not working properly for MergeTree compact parts.

    References

    Updated RPM packages for Altinity Stable Release can be found in the Altinity Stable Repository. The ClickHouse repository can be used for DEB packages but check version numbers carefully.

    1.7.2 - Altinity Stable for ClickHouse 20.3.12.112

    Details of the Altinity Stable Release 20.3.12.112

    Description

    This release is a significant step forward since the previous Altinity Stable release 19.16.19.85. It includes 1203 pull requests from 171 contributors.

    Major New Features

    • TTL moves. See the Putting Things Where They Belong Using New TTL Moves article. While we consider this feature to be production ready, we have a roadmap to further improve it. TTL moves have been developed by Altinity.
    • Dictionary DDLs that many ClickHouse users dreamed of! See the ClickHouse Dictionaries: Reloaded! article that highlights this feature.
    • New DateTime64 datatype with configurable precision up to nanoseconds. The feature is in beta state and more functions for usability will follow in next releases. It has also been developed by Altinity.
    • Joins have been improved a lot, including SEMI/ANTI JOIN, experimental merge join and other changes. See the join_algorithm setting below for more info on merge joins.
    • A new compact format for MergeTree tables that stores all columns in one file. It improves performance of small inserts. The usual format where every column is stored separately is now called “wide.” Compact format is disabled by default; see the min_bytes_for_wide_part and min_rows_for_wide_part settings and the sketch after this list.
    • A built-in Prometheus exporter endpoint for ClickHouse monitoring statistics
    • Porting some functionality of the H3 library — A Hexagonal Hierarchical Geospatial Indexing System
    • ALTER MODIFY/DROP are currently implemented as mutations for ReplicatedMergeTree* engine family. Now ALTER commands block only at the metadata update stage, and don’t block after that.
    • Import/export gzip-compressed files directly for file based storage engines and table functions: File, URL, HDFS and S3.
    • Server improvements across different server functions. For example, performance of the Kafka Engine has been improved, parallel INSERT is now possible (see max_insert_threads setting), etc.
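
    A minimal sketch of the compact part thresholds mentioned above (the table and the 10 MiB value are illustrative): parts smaller than the threshold are written in compact format, bigger ones in wide format.

      CREATE TABLE small_inserts
      (
          d Date,
          s String
      )
      ENGINE = MergeTree
      ORDER BY d
      SETTINGS min_bytes_for_wide_part = 10485760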

    Upgrade Notes

    • Replication fetch protocol has been changed in this release. If only one replica is upgraded in a shard, replication may get stuck with unknown protocol version error until all replicas are upgraded. 19.16.17.80 and 19.16.19.85 contain a special compatibility fix that allows smooth upgrade without replication downtime.
    • If enable_optimize_predicate_expression is turned on, it may result in incorrect data when only part of the cluster is upgraded. Please turn it off before the upgrade, and turn it back on afterwards.
    • There were some optimizations that are incompatible between versions, so we recommend disabling enable_scalar_subquery_optimization before the upgrade. This setting turns off new scalar subquery optimizations and was backported specifically in order to facilitate smooth upgrades.
    • There were some optimizations around distributed query execution that may result in incorrect data if part of the cluster is upgraded. We recommend disabling distributed_aggregation_memory_efficient while upgrading, and turn it on afterwards.

    Backward Incompatible Changes

    The following changes are backward incompatible and require user attention.

    • ALTER on ReplicatedMergeTree is not compatible with previous versions. 20.3 creates a different metadata structure in ZooKeeper for ALTERs. Earlier versions do not understand the format and cannot proceed with their replication queue.
    • The format of replication log entries for mutation commands has changed. You have to wait for old mutations to process before installing the new version.
    • 20.3 requires an alias for every subquery participating in a join by default. Set joined_subquery_requires_alias=0 in order to keep the previous behavior.
    • ANY JOIN logic has been changed. To upgrade without changes in behavior, you need to add SETTINGS any_join_distinct_right_table_keys = 1 to Engine Join tables metadata or recreate these tables after upgrade.
    • Functions indexHint, findClusterValue, findClusterIndex were removed
    • Settings merge_tree_uniform_read_distribution, allow_experimental_cross_to_join_conversion, allow_experimental_multiple_joins_emulation are deprecated and ignored

    Other Important Notes

    • If you encounter 'Argument at index 0 for function dictGetDateTime must be a constant' (https://github.com/ClickHouse/ClickHouse/issues/7798) or a similar error message after upgrading, set enable_early_constant_folding=0.
    • The new release adds a parallel parsing of input formats. While it should improve the performance of data loading in text formats, sometimes it may result in a slower load. Set input_format_parallel_parsing=0 if you experience insert performance degradation.
    • The check that a query is performed fast enough is enabled by default. Queries can get an exception like this: DB::Exception: Estimated query execution time (N seconds) is too long. Maximum: 12334. Set timeout_before_checking_execution_speed=0 to fix this problem.
    • Mutations containing non-deterministic functions, e.g. dictGet or joinGet are disabled by default. Set allow_nondeterministic_mutations=1 to enable.

    Also note that this release has a new query pipeline enabled by default (codename — Processors). This was a significant internal refactoring, and we believe all issues have been fixed. However, in the rare case that you face some weird query behavior or performance degradation, try disabling experimental_use_processors and check if the problem goes away.

    Changes

    New formats

    • Avro
    • AvroConfluent
    • JSONCompactEachRow
    • JSONCompactEachRowWithNamesAndTypes

    New or Improved Functions

    • CRC32IEEE, CRC64
    • New JSON functions.
      • isValidJSON
      • JSONExtractArrayRaw — very useful for parsing nested JSON structures
    • New array functions.
      • arrayCompact
      • arrayFill
      • arrayReverseFill
      • arrayZip
    • New geo functions.
      • geoDistance
      • greatCircleDistance
    • H3 functions.
      • h3EdgeAngle
      • h3EdgeLengthM
      • h3GetBaseCell
      • h3GetResolution
      • h3HexAreaM2
      • h3IndexesAreNeighbors
      • h3IsValid
      • h3ToChildren
      • h3ToParent
      • h3ToString
      • h3kRing
      • stringToH3
    • Functions for DateTime64.
      • now64
      • parseDateTime64BestEffort, parseDateTime64BestEffortOrNull, parseDateTime64BestEffortOrZero
      • toDateTime64, toDateTime64OrNull, toDateTime64OrZero
    • New aggregate functions:
      • avgWeighted
      • categoricalInformationValue
      • groupArraySample
    • Other.
      • bitCount
      • ifNotFinite
      • isConstant
      • javaHashUTF16LE
      • moduloOrZero
      • randomPrintableASCII
      • roundBankers

    New table functions

    • clusterAllReplicas — similar to cluster function but queries all replicas in the cluster
    • generateRandom — allows users to generate a random dataset with a defined structure
    • zeros, zeros_mt

    New metrics and events

    system.metrics:

    • BackgroundMovePoolTask
    • CacheDictionaryUpdateQueueBatches
    • CacheDictionaryUpdateQueueKeys
    • MemoryTrackingInBackgroundMoveProcessingPool
    • MySQLConnection
    • SendScalars

    New system tables

    • metric_log — stores system.metrics and system.events history
    • quotas, quota_usage — introspection to quotas
    • row_policies — introspection to row security policies
    • trace_log, stack_trace — for low-level debugging purposes
    • zeros, zeros_mt — zero generators, like system.numbers, but faster and returns zeros

    New columns in system tables

    • mutations: parts_to_do_names
    • parts, parts_columns: part_type
    • processes: thread_ids replaces old columns thread_numbers and os_thread_ids
    • query_log: exception_code, thread_ids replaces old columns thread_numbers and os_thread_ids
    • replicas: zookeeper_exception
    • settings: min, max, readonly
    • table_engines: supports_settings, supports_skipping_indices, supports_sort_order, supports_ttl, supports_replication, supports_deduplication

    system.merge_tree_settings added/changed

    Settings Old value New value Description
    disable_background_merges   0 Disable background merges
    merge_max_block_size   8192 How many rows in blocks should be formed for merge operations
    min_bytes_for_wide_part   0 Minimal uncompressed size in bytes to create part in wide format instead of compact
    min_rows_for_wide_part   0 Minimal number of rows to create part in wide format instead of compact
    use_minimalistic_part_header_in_zookeeper 0 1 Store part header (checksums and columns) in a compact format and a single part znode instead of separate znodes. This setting was available for a year already, and many users have it enabled.

    system.settings added/changed

    Setting Old value New value Description
    allow_experimental_alter_materialized_view_structure   0 Allow atomic alter on Materialized views. Work in progress
    allow_experimental_data_skipping_indices 0 1 Data skipping indices are now enabled by default
    allow_nondeterministic_mutations   0 Allow non-deterministic functions in ALTER UPDATE/ALTER DELETE statements, e.g. with dictGet() functions
    background_move_pool_size   8 Number of threads performing background moves for tables. Only has meaning at server startup
    default_max_bytes_in_join 100000000 1000000000 Maximum size of right-side table if limit is required but max_bytes_in_join is not set
    enable_early_constant_folding 0 1 Enable query optimization where we analyze function and subquery results and rewrite the query if there are constants there
    enable_scalar_subquery_optimization 0 1 If it is set to true, prevent scalar subqueries from (de)serializing large scalar values and possibly avoid running the same subquery more than once
    experimental_use_processors 0 1 Processors pipeline is now enabled by default
    force_optimize_skip_unused_shards   1 Throw an exception if unused shards cannot be skipped (1 – throw only if the table has the sharding key, 2 – always throw)
    format_avro_schema_registry_url     For AvroConfluent format: Confluent Schema Registry URL
    input_format_parallel_parsing   1 Enable parallel parsing for some data formats
    input_format_values_deduce_templates_of_expressions 0 1 For Values format: if the field could not be parsed by streaming parser, run SQL parser, deduce template of the SQL expression, try to parse all rows using template and then interpret expression for all rows
    join_algorithm   hash Specify join algorithm: auto, hash, partial_merge, prefer_partial_merge. auto tries to change HashJoin to MergeJoin on the fly to avoid out of memory
    joined_subquery_requires_alias 0 1 Force joined subqueries and table functions to have aliases for correct name qualification
    max_insert_threads   0 The maximum number of threads to execute the INSERT SELECT query. Values 0 or 1 means that INSERT SELECT is not run in parallel. Higher values will lead to higher memory usage. Parallel INSERT SELECT has effect only if the SELECT part is run on parallel, see max_threads setting
    max_joined_block_size_rows   65536 Maximum block size for JOIN result (if join algorithm supports it). 0 means unlimited
    max_parser_depth   1000 Maximum parser depth
    memory_profiler_step   0 Every number of bytes the memory profiler will collect the allocating stack trace. The minimal effective step is 4 MiB (less values will work as clamped to 4 MiB). Zero means disabled memory profiler
    min_bytes_to_use_mmap_io   0 The minimum number of bytes for reading the data with the mmap option during SELECT queries execution. 0 – disabled
    min_chunk_bytes_for_parallel_parsing   1048576 The minimum chunk size in bytes, which each thread will parse in parallel
    mutations_sync   0 Wait for synchronous execution of ALTER TABLE UPDATE/DELETE queries (mutations). 0 – execute asynchronously. 1 – wait current server. 2 – wait all replicas if they exist
    optimize_if_chain_to_multiif   0 Replace if(cond1, then1, if(cond2, …)) chains to multiIf
    optimize_trivial_count_query   1 Process trivial SELECT count() FROM table query from metadata
    output_format_avro_codec     Compression codec used for output. Possible values: null, deflate, snappy
    output_format_avro_sync_interval   16384 Sync interval in bytes
    output_format_csv_crlf_end_of_line     If it is set true, end of line in CSV format will be \r\n instead of \n
    output_format_tsv_crlf_end_of_line     If it is set true, end of line in TSV format will be \r\n instead of \n
    timeout_before_checking_execution_speed 0 10 Check that the speed is not too low after the specified time has elapsed
    use_compact_format_in_distributed_parts_names   0 Changes format of directories names for distributed table insert parts
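
    For example, the experimental merge join described in the feature list can be enabled per session through the join_algorithm setting (a sketch; per the table above, auto switches from hash join to merge join on the fly to avoid running out of memory):

      SET join_algorithm = 'partial_merge';
      -- or let ClickHouse switch automatically under memory pressure:
      SET join_algorithm = 'auto';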

    References

    RPM packages for Altinity Stable Release can be found in the Altinity Stable Repository. The ClickHouse repository can be used as well but check version numbers carefully.

    1.8 - Altinity Stable for ClickHouse 19.16

    1.8.1 - Altinity Stable for ClickHouse 19.16.19.85

    Details of the Altinity Stable Release 19.16.19.85

    Description

    This is a combined release of 19.16.19.65, upgraded to 19.16.19.85 after review.

    Major New Features

    The enable_scalar_subquery_optimization setting is disabled by default. This is required in order to perform rolling upgrades to 20.x versions.

    Bug Fixes

    • Fixed a bug with non-Date/DateTime columns not being allowed in TTL expressions
    • Fixed a bug in the Kafka engine that sometimes prevented committing a message back to the broker if the broker was temporarily unavailable
    • Fixed a bug in which the boolean functions ‘or’ and ‘and’ might return incorrect results when called on more than 10 arguments if some of them are NULL
    • Fixed server side certificates in docker image
    • Fixed a possible race condition between insert quorum and drop partition
    • Volume selection in merge process could move data back to the first volume in storage policy in some cases
    • Mutations that did not work if insert_quorum has been used for the table
    • Deduplication logic that ignored dependent materialized views if a duplicate for the main table has been detected. The fix is turned off by default and can be enabled with a new deduplicate_blocks_in_dependent_materialized_views setting.
    • Kafka engine could result in data losses when ZooKeeper is temporarily not available
    • Kafka engine did not allow users to drop a table with incorrect Kafka topic
    • Kafka engine did not allow users to use subqueries in attached materialized views
    • ALTER TABLE MODIFY CODEC when both a default expression and a codec are specified

    1.8.2 - Altinity Stable for ClickHouse 19.16.12.49

    Details of the Altinity Stable Release 19.16.12.49

    Description

    Improved Kafka engine reliability, performance and usability.

    Bug Fixes

    • When there is a rebalance of partitions between consumers, occasional data duplicates and losses were possible.
    • When data was polled from several partitions with one poll and committed partially, occasional data losses were possible.
    • Block size threshold (‘kafka_max_block_size’ setting) now triggers block flush correctly that reduces the chance of data loss under high load.
    • When new columns are added to the Kafka pipeline (Kafka engine -> Materialized View -> Table) it was previously not possible to alter the destination tables first with default value and then alter MV in order to minimize downtime. The problem was not related directly to Kafka, but general implementation of materialized views. It is fixed now.
    • Few other minor problems have been addressed as well.

    Since ClickHouse now respects the ‘kafka_max_block_size’ setting, which defaults to 65535, we recommend increasing it to a bigger value for high-volume streaming. Setting it to 524288 or 1048576 may increase consumer throughput by up to 20%.
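
    A sketch of raising the threshold on a Kafka engine table (broker, topic, group, and column names below are placeholders):

      CREATE TABLE kafka_queue
      (
          user String,
          clicks UInt64
      )
      ENGINE = Kafka
      SETTINGS kafka_broker_list = 'broker:9092',
               kafka_topic_list = 'events',
               kafka_group_name = 'clickhouse-consumer',
               kafka_format = 'JSONEachRow',
               kafka_max_block_size = 1048576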

    Known Issues

    The ‘kafka_max_block_size’ setting defaults to 65535. We recommend increasing it to a bigger value for high-volume streaming; setting it to 524288 or 1048576 may increase consumer throughput by up to 20%.

    1.8.3 - Altinity Stable for ClickHouse 19.16.10.44

    Details of the Altinity Stable Release 19.16.10.44

    Description

    This release has major updates: 2744 commits in 528 pull requests from 113 contributors.

    Major New Features

    • Tiered storage: multiple storage volumes for MergeTree tables. It’s possible to store fresh data on SSD and automatically move old data to HDD. We already discussed this functionality in our blog.

    • LIVE VIEW tables that we also described in our blog. The support in this release is not complete, but new improvements are coming quickly. Feel free to try it out.

    • WITH FILL modifier of ORDER BY for data gap filling. It allows filling in missing data to provide uniform reports. For example, you can fill missing dates for time series so that every day is shown even if some days have no data. This feature is not documented yet, so here is an example of how missing dates can be filled:

      SELECT arrayJoin([today()-10, today()]) AS d ORDER BY d ASC WITH FILL

    • Table constraints that are checked at insert.

    • Sensitive data masking for query_log, server logs, process list with regexp-based rules.

    • Table function input() for reading incoming data in INSERT SELECT query. A very useful feature when you need to preprocess data just before inserting. We are going to showcase it in our planned articles around log processing use cases.
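
      A minimal sketch (the target table and input structure are illustrative) that transforms rows on the fly while inserting:

      INSERT INTO target SELECT lower(s), n * 2
      FROM input('s String, n UInt32') FORMAT CSV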

    • Cascaded materialized views. It is an important feature for many use cases including Kafka integration. In particular, it allows you to load raw data from Kafka using an MV and then aggregate it using another MV. That was not possible in previous ClickHouse releases.

    • Kafka: Altinity took over support for the Kafka engine a few months ago. Kafka functionality and stability has been improved in this release, in particular:

      • ClickHouse can act as a Kafka producer: not only reading from Kafka, but also sending data back with an insert statement.
      • Atomic parsing of each message: kafka_row_delimiter is now obsolete (ignored)
      • More reliable commit logic
      • Virtual columns _partition and _timestamp for Kafka engine table.
      • Parsing of most of the formats is working properly now

    Known issues

    • In case of topic rebalancing on the Kafka side, duplicates are possible. We are going to address it in 2020/Q1. https://github.com/ClickHouse/ClickHouse/issues/7259.
    • Select from mysql() table function is broken in this release. Two workarounds are possible:
      • Use MySQL table engine.
      • create table as mysql(...) and then select from this table — this is actually a shortcut to MySQL table engine.
    • Some constant-like functions can misbehave when queried through Distributed tables: https://github.com/ClickHouse/ClickHouse/issues/7666.
    • ‘system reload dictionary’ does not reload dictionaries with ‘update_field’: https://github.com/ClickHouse/ClickHouse/issues/7440.
    • Complicated WHERE conditions involving fields of UInt8 type with values > 1 can return unexpected results. (Workaround: instead of a and b and c use a <> 0 and b <> 0 and c <> 0.) https://github.com/ClickHouse/ClickHouse/issues/7772.
    • If you want to preserve old ANY JOIN behavior while upgrading from a version before 19.13, you may need to install 19.13.7 first, change the any_join_distinct_right_table_keys setting there, and after that upgrade to 19.16. But we recommend reviewing your queries and rewriting them without this join type. In future releases it will be available, but with a different name (SEMI JOIN).

    Upgrade Notes

    Backward Incompatible Changes

    • count() supports only a single argument.
    • Legacy asterisk_left_columns_only setting has been removed (it was disabled by default).
    • Numeric values for Enums can now be used directly in IN section of the query.
    • Changed serialization format of bitmap* aggregate function states to improve performance. Serialized states of bitmap* from previous versions cannot be read. If you happen to use bitmap aggregate functions, please contact us before upgrading.
    • The system.query_log type column was changed from UInt8 to Enum8.
    • ANY RIGHT/FULL/INNER JOIN is disabled by default. Set any_join_distinct_right_table_keys setting to enable them.

    Changes

    New Formats

    • ORC format.
    • Template/TemplateIgnoreSpaces format. It allows parsing / generating data in custom text formats, so you can, for example, generate HTML directly from ClickHouse, thus turning ClickHouse into a web server.
    • CustomSeparated/CustomSeparatedIgnoreSpaces format. Supports custom escaping and delimiter rules.
    • JSONEachRowWithProgress
    • Parse unquoted NULL literal as NULL (enabled by setting format_csv_unquoted_null_literal_as_null).
    • Initialize null fields with default values if the data type of this field is not nullable (enabled by setting input_format_null_as_default).

    New or improved functions

    • Aggregate function combinators which fill null or default value when there is nothing to aggregate: -OrDefault, -OrNull

    • Introduce uniqCombined64() to get sane results for cardinality > UINT_MAX

    • quantileExactExclusive and quantileExactInclusive aggregate functions

    • hasToken/hasTokenCaseInsensitive (look for the token in string the same way as token_bf secondary index)

    • multiFuzzyMatchAllIndices, multiMatchAllIndices (return the Array of all matched indices in multiMatch family functions)

    • repeat function for strings

    • sigmoid and tanh functions for ML applications

    • Roaring Bitmaps:

      • Changes CRoaring serialization functions (you will not be able to read Bitmaps created by earlier versions)
      • bitmapSubsetInRange,
      • bitmapMin, bitmapMax,
      • bitmapSubsetLimit(bitmap, range_start, limit),
      • groupBitmapAnd, groupBitmapOr, groupBitmapXor
    • geohashesInBox(longitude_min, latitude_min, longitude_max, latitude_max, precision) which creates an array of precision-long strings of geohash boxes covering the provided area.

    • Support for wildcards in paths of table functions file and hdfs. If the path contains wildcards, the table will be readonly:

      SELECT * FROM hdfs('hdfs://hdfs1:9000/some_dir/another_dir/*/file', 'Parquet', 'col1 String')

    • New function neighbour(value, offset[, default_value]). Allows reaching the previous/next row within the column.

    • Optimize queries with an ORDER BY expressions clause, where expressions have a coinciding prefix with the sorting key in MergeTree tables. This optimization is controlled by the optimize_read_in_order setting

    • New functions arraySplit and arrayReverseSplit which can split an array by “cut off” conditions. They are useful in time sequence handling (see the example below).
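
      For example (a small illustration), splitting before each element whose cut flag is set returns [[1,2,3],[4,5]]:

      SELECT arraySplit((x, flag) -> flag, [1, 2, 3, 4, 5], [1, 0, 0, 1, 0]) AS groups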

    • Table function values (the name is case-insensitive). It allows creating a table with some data inline.

      SELECT * FROM VALUES('a UInt64, s String', (1, 'one'), (2, 'two'), (3, 'three'))

    • fullHostName (alias FQDN)

    • numbers_mt() — multithreaded version of numbers().

    • currentUser() (and alias user()), returning login of authorized user.

    • S3 engine and table function. Partial support in this release (no authentication), complete version is expected in 19.18.x and later

    • WITH TIES modifier for LIMIT

    New dictionaries features

    • Redis as an external dictionary source.
    • <sparse_hashed> dictionary layout that is functionally equivalent to the <hashed> layout but more memory efficient. It uses about half as much memory at the cost of slower value retrieval.
    • allow_dictionaries user setting that works similar to allow_databases.
    • HTTP source new attributes: credentials and http-headers.

    Operations / Monitoring

    • system.metric_log table which stores values of system.events and system.metrics with specified time interval.

    • system.text_log in order to store ClickHouse logs to itself

    • Support for detached parts removal:

      ALTER TABLE <table_name> DROP DETACHED PART '<part_id>'

    • MergeTree now has an additional option ttl_only_drop_parts (disabled by default) to avoid partial pruning of parts, so that parts are dropped completely when all of their rows have expired.

    • Added miscellaneous function getMacro(name) that returns String with the value of corresponding <macros> from configuration file on current server where the function is executed.

    New metrics & events

    • system.metrics: DistributedFilesToInsert, GlobalThread, GlobalThreadActive, LocalThread, LocalThreadActive
    • system.events: Merge

    New system tables

    • system.disks
    • system.storage_policies

    New columns in system tables

    • system.clusters: errors_count, estimated_recovery_time
    • system.collations: language
    • system.parts: disk_name
    • system.parts_columns: disk_name
    • system.processes: os_thread_ids
    • system.query_log: os_thread_ids
    • system.tables: storage_policy

    system.merge_tree_settings changed/added

    Name Default Description
    concurrent_part_removal_threshold 100 Activate concurrent part removal (see ‘max_part_removal_threads’) only if the number of inactive data parts is at least this.
    max_part_loading_threads auto(6) The number of threads to load data parts at startup.
    max_part_removal_threads auto(6) The number of threads for concurrent removal of inactive data parts. One is usually enough, but in ‘Google Compute Environment SSD Persistent Disks’ file removal (unlink) operation is extraordinarily slow and you probably have to increase this number (recommended is up to 16).
    storage_policy default Name of storage disk policy
    ttl_only_drop_parts 0 Only drop altogether the expired parts and not partially prune them.

    system.settings changed/added

    Name Default Description
    allow_drop_detached 0 Allow ALTER TABLE … DROP DETACHED PARTITION … queries
    allow_experimental_live_view 0 Enable LIVE VIEW. Not mature enough.
    allow_introspection_functions 0 Allow functions for introspection of ELF and DWARF for query profiling. These functions are slow and may impose security considerations.
    any_join_distinct_right_table_keys 0 (was 1) Enable old ANY JOIN logic with many-to-one left-to-right table keys mapping for all ANY JOINs. It leads to confusing not-equal results for 't1 ANY LEFT JOIN t2' and 't2 ANY RIGHT JOIN t1'. ANY RIGHT JOIN needs one-to-many keys mapping to be consistent with the LEFT one.
    asterisk_left_columns_only removed
    connection_pool_max_wait_ms 0 The wait time when connection pool is full.
    default_max_bytes_in_join 100000000 Maximum size of right-side table if limit’s required but max_bytes_in_join is not set.
    distributed_directory_monitor_max_sleep_time_ms 30000 Maximum sleep time for StorageDistributed DirectoryMonitors, it limits exponential growth too.
    distributed_replica_error_cap 1000 Max number of errors per replica, prevents piling up incredible amount of errors if replica was offline for some time and allows it to be reconsidered in a shorter amount of time.
    distributed_replica_error_half_life 60 Time period reduces replica error counter by 2 times.
    format_custom_escaping_rule Escaped Field escaping rule (for CustomSeparated format)
    format_custom_field_delimiter Delimiter between fields (for CustomSeparated format)
    format_custom_result_after_delimiter Suffix after result set (for CustomSeparated format)
    format_custom_result_before_delimiter Prefix before result set (for CustomSeparated format)
    format_custom_row_after_delimiter Delimiter after field of the last column (for CustomSeparated format)
    format_template_resultset Path to file which contains format string for result set (for Template format)
    format_template_row Path to file which contains format string for rows (for Template format)
    format_template_rows_between_delimiter Delimiter between rows (for Template format)
    input_format_csv_unquoted_null_literal_as_null 0 Consider unquoted NULL literal as \N
    input_format_null_as_default 0 For text input formats initialize null fields with default values if data type of this field is not nullable
    input_format_tsv_empty_as_default 0 Treat empty fields in TSV input as default values.
    input_format_values_accurate_types_of_literals 1 For Values format: when parsing and interpreting expressions using template, check actual type of literal to avoid possible overflow and precision issues.
    input_format_values_deduce_templates_of_expressions 0 For Values format: if field could not be parsed by streaming parser, run SQL parser, deduce template of the SQL expression, try to parse all rows using template and then interpret expression for all rows.
    joined_subquery_requires_alias 0 Force joined subqueries to have aliases for correct name qualification.
    kafka_max_wait_ms 5000 The wait time for reading from Kafka before retry.
    live_view_heartbeat_interval 15 The heartbeat interval in seconds to indicate live query is alive.
    max_http_get_redirects 0 Max number of http GET redirects hops allowed. Make sure additional security measures are in place to prevent a malicious server to redirect your requests to unexpected services.
    max_live_view_insert_blocks_before_refresh 64 Limit maximum number of inserted blocks after which mergeable blocks are dropped and query is re-executed.
    min_free_disk_space_for_temporary_data 0 The minimum disk space to keep while writing temporary data used in external sorting and aggregation.
    optimize_read_in_order 1 Enable ORDER BY optimization for reading data in corresponding order in MergeTree tables.
    partial_merge_join 0 Use partial merge join instead of hash join for LEFT and INNER JOINs.
    partial_merge_join_optimizations 0 Enable optimizations in partial merge join
    partial_merge_join_rows_in_left_blocks 10000 Group left-hand joining data in bigger blocks. Setting it to a bigger value increase JOIN performance and memory usage.
    partial_merge_join_rows_in_right_blocks 10000 Split right-hand joining data in blocks of specified size. It’s a portion of data indexed by min-max values and possibly unloaded on disk.
    query_profiler_cpu_time_period_ns 1000000000 (was 0) Highly experimental. Period for CPU clock timer of query profiler (in nanoseconds). Set 0 value to turn off CPU clock query profiler. Recommended value is at least 10000000 (100 times a second) for single queries or 1000000000 (once a second) for cluster-wide profiling.
    query_profiler_real_time_period_ns 1000000000 (was 0) Highly experimental. Period for real clock timer of query profiler (in nanoseconds). Set 0 value to turn off real clock query profiler. Recommended value is at least 10000000 (100 times a second) for single queries or 1000000000 (once a second) for cluster-wide profiling.
    replace_running_query_max_wait_ms 5000 The wait time for running query with the same query_id to finish when setting ‘replace_running_query’ is active.
    s3_min_upload_part_size 536870912 The minimum size of part to upload during multipart upload to S3.
    temporary_live_view_timeout 5 Timeout after which temporary live view is deleted.

    1.9 - Altinity Stable for ClickHouse 19.13

    1.9.1 - Altinity Stable for ClickHouse 19.13.7.57

    Details of the Altinity Stable Release 19.13.7.57

    Description

    This is a minor upgrade over the previous release.

    Major New Features

    • CREATE TABLE AS TABLE FUNCTION statement that allows creating tables from external data sources automatically, e.g. from a MySQL table.
    • COLUMNS('regexp') macro function as a generalization of SELECT * queries (see the sketch below).
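
    A minimal sketch of both features; the host, database, table, and column names below are hypothetical:

      -- create a table whose structure and engine are derived from a table function
      CREATE TABLE orders_copy AS mysql('mysql-host:3306', 'shop', 'orders', 'reader', 'secret');

      -- select all columns whose names match a regular expression
      SELECT COLUMNS('^amount') FROM orders_copy;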

    Bug Fixes

    • Fixed security vulnerability in url() function.
    • Fixed security vulnerability related to Zookeeper.
    • Fixed vulnerabilities in compression codecs.
    • Fixed corner case overflows in DoubleDelta codec.

    Known Issues

    • ANY INNER|RIGHT|FULL JOIN is deprecated! While we like how it worked before, there are plans to change the behavior completely. We are still discussing with the development team how we can preserve the old behavior. For now, in order to preserve the old behavior and safely upgrade to newer versions in the future, you need to enable the any_join_distinct_right_table_keys setting (see the sketch after this list)!
    • The setting input_format_defaults_for_omitted_fields is on by default. It enables calculation of complex default expressions for omitted fields in JSONEachRow and CSV* formats. Inserts to Distributed tables need this setting to be the same across the cluster.
      • You need to set input_format_defaults_for_omitted_fields across the cluster before the upgrade if a rolling upgrade is performed.
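
    A per-session sketch of the settings mentioned above; for a rolling upgrade these would normally be set in the default user profile on every node:

      SET any_join_distinct_right_table_keys = 1;       -- preserve the old ANY JOIN behavior
      SET input_format_defaults_for_omitted_fields = 1; -- must match across the cluster for Distributed inserts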


    1.10 - Altinity Stable for ClickHouse 19.11

    1.10.1 - Altinity Stable for ClickHouse 19.11.8

    Details of the Altinity Stable Release 19.11.8

    Description

    Updates including improved Kafka support.

    Major New Features

    • Multi-table JOINs with standard SQL syntax (some limitations are left: only equi-JOIN, limited condition pushdown support)
    • Secondary indices - allow skipping data blocks without reading data: min/max filter, n-grams or token-based bloom filter
    • TTL expressions for columns and tables to remove old data automatically (see the sketch after this list)
    • Unique ASOF JOIN - join to the most recent value known
    • Per-column custom compression codecs (Delta, T64, DoubleDelta, Gorilla): fine-tune compression for particular columns
    • Roaring Bitmaps - use bitmaps for different set operations (calculate cardinality, unions, intersections, etc.)
    • Machine-learning functions: evalMLMethod, simpleLinearRegression, stochasticLinearRegression, stochasticLogisticRegression
    • Predefined per-row filter expressions for tables, which enable simple row-level security rules
    • Adaptive index granularity (setting index_granularity_bytes) - useful for tables having rows with relatively big size (>1280 bytes)
    • Advanced text processing functions
      • Multiple substring search: multiSearch(Any|FirstPosition|FirstIndex|AllPositions)(CaseInsensitive)?(UTF8)?
      • Multiple regex search (hyperscan-powered): multiMatch(Any|AnyIndex)
      • Fuzzy regex match (hyperscan-powered): multiFuzzyMatch(Any|AnyIndex)
      • N-gram distance for fuzzy string comparison and search (similar to q-gram metrics in R language): ngramDistance(CaseInsensitive)?(UTF8)?, ngramSearch(CaseInsensitive)?(UTF8)?
    • HDFS read/write access
    • New JSON processing functions - high performance & compliant with JSON standard
    • IPv4 and IPv6 data types
    • New formats
      • Protobuf — now fully supported with input and output plus nested data structures
      • Parquet
      • RowBinaryWithNamesAndTypes
      • JSONEachRow and TSKV - now support default expressions for missing fields (Check input_format_defaults_for_omitted_fields)
      • TSVWithNames/CSVWithNames - column order can now be determined from file header (Check input_format_with_names_use_header parameter).
    • SQL statements with bind parameters
    • Improved MySQL integration:
      • new database engine to access all the tables in remote MySQL server
      • support for MySQL wire protocol, allowing to connect to ClickHouse using MySQL clients.
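
    A sketch combining several of the features above in one hypothetical table definition (per-column codecs, a token-based skipping index, and a TTL expression):

      SET allow_experimental_data_skipping_indices = 1; -- secondary indices are still experimental in 19.11

      CREATE TABLE events
      (
          ts DateTime CODEC(DoubleDelta),
          user_id UInt64,
          msg String,
          INDEX msg_tokens msg TYPE tokenbf_v1(10240, 3, 0) GRANULARITY 4
      )
      ENGINE = MergeTree
      ORDER BY (user_id, ts)
      TTL ts + INTERVAL 90 DAY;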

    Known issues

    • All released 19.x versions have had some problems with Kafka engine implementation due to a full re-write of Kafka support. In 19.11.8 Kafka is working much better than previous 19.x releases. However, there are still some corner cases that can lead to data duplication in certain scenarios, for example, in the event of ClickHouse server restart. Those issues will be addressed soon.

    • Adaptive granularity is enabled by default, and index_granularity_bytes is set to 10Mb. This feature uses a different data format, and interoperability between old and new format has some issues. So if you’re upgrading your cluster from an older version, consider disabling it before the upgrade by putting the following fragment in your config.xml:

      <merge_tree>
        <index_granularity_bytes>0</index_granularity_bytes>
      </merge_tree>
      

      After the upgrade, you can turn adaptive granularity on at any convenient time. It is especially useful for tables with rows larger than ~1 KB. If you are doing a new installation, leave the default setting value as is.

    • enable_optimize_predicate_expression is now enabled by default. It’s possible that a condition pushed down to a subselect leads to suboptimal or undesired effects. If this happens, please report the issue and disable the feature for that SELECT (or globally).

    • Secondary indices are maturing but are still considered experimental. There is at least one severe bug: when a mutation is executed with a condition on a column with a secondary index, it can affect more rows than expected. Please be careful with DELETE, or upgrade to 19.13 (see issue #6224).

    • Some users have reported problems with ODBC data sources after upgrade. In most cases these were misconfigurations. Nevertheless, please do canary/staging updates and check how your ODBC connections work before moving to production.

    Upgrade Notes

    Backward compatibility issues

    • Due to an update of the LZ4 library, the new ClickHouse version writes parts which are not binary equivalent to those written with older versions. That makes it problematic to update only one replica. Leaving the cluster in such a state for a long period of time will work but may lead to excessive parts copying between nodes due to checksum mismatches.
    • There is a new setting max_partitions_per_insert_block with default value 100. If the inserted block contains a larger number of partitions, an exception is thrown. Set it to 0 if you want to remove the limit (not recommended).
    • If you are using unexpected low cardinality combinations like LowCardinality(UInt8), the new version will prevent you from doing so. If you really know what you are doing, check the allow_suspicious_low_cardinality_types setting and set it to 1 (see the sketch after this list).
    • This release adds max_parts_in_total setting for MergeTree family of tables (default: 100 000). We hope your number of partitions is much lower than this limit. If necessary you can raise the value.
    • The system.dictionaries table has been changed. If you used it for monitoring purposes, you may need to change your scripts.
    • Dictionaries are loaded lazily by default. It means they have status NOT_LOADED until the first access.
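
    A per-session sketch of relaxing the new safety limits mentioned above; loosen them only if you understand the consequences:

      SET max_partitions_per_insert_block = 0;        -- remove the partitions-per-block limit (not recommended)
      SET allow_suspicious_low_cardinality_types = 1; -- permit LowCardinality over small fixed-size types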

    Changes

    New functions

    • New hash functions: xxHash32, xxHash64, gccMurmurHash, hiveHash, javaHash, CRC32
    • JSON processing functions: JSONExtract,JSONExtract(Bool|Float|Int|KeysAndValues|Raw|String|UInt),JSONHas,JSONKey,JSONLength,JSONType
    • Geospatial processing: geoToH3, geohashEncode, geohashDecode
    • IP address handling: IPv4CIDRtoIPv4Range, IPv6CIDRtoIPv6Range, toIPv4, toIPv6
    • New statistical aggregate functions: skewPop, skewSamp, kurtPop, kurtSamp
    • String functions: isValidUTF8, regexpQuoteMeta, trimBoth, trimLeft, trimRight, format, toValidUTF8
    • Encoding: tryBase64Decode, base64Decode, base64Encode
    • Array processing: arrayEnumerateDenseRanked, arrayEnumerateUniqRanked, flatten, arrayFlatten, arrayWithConstant,
    • Date/Time processing: toStartOfInterval, addQuarters, subtractQuarters, toIntervalQuarter, toStartOfTenMinutes
    • Numerical: roundDown, toDecimalOrZero, toDecimalOrNull
    • Dictionary: dictGet, dictGetOrDefault
    • Roaring Bitmaps: bitmapAndCardinality, bitmapAndnot, bitmapAndnotCardinality, bitmapBuild, bitmapCardinality, bitmapContains, bitmapHasAll, bitmapHasAny, bitmapOr, bitmapOrCardinality, bitmapToArray, bitmapXor, bitmapXorCardinality, groupBitmap
    • OS level introspection: filesystemAvailable, filesystemCapacity, filesystemFree, basename
    • New aggregate functions: boundingRatio, entropy, groupBitmap, sumMap(Filtered|FilteredWithOverflow|WithOverflow), topKWeighted, groupArrayMovingAvg,groupArrayMovingSum, timeSeriesGroupRateSum, timeSeriesGroupSum
    • SQL standard compatibility aliases added: left, right, trim, ltrim, rtrim, timestampadd, timestampsub, power, replace, ln, locate, mid
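
    A few of the new functions in action (the literal values are illustrative):

      SELECT
          xxHash64('ClickHouse')                        AS h,
          JSONExtractString('{"name":"Alice"}', 'name') AS name,
          trimBoth('  padded  ')                        AS trimmed,
          toIPv4('127.0.0.1')                           AS ip;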

    Other notable changes

    • Setting constraints which limit the possible range of setting value per user profile.
    • KILL MUTATION added
    • New aggregate function combinators: -Resample
    • Added new data type SimpleAggregateFunction - light aggregation for simple functions like any, anyLast, sum, min, max
    • Ability to use different sorting key (ORDER BY) and index (PRIMARY KEY). The sorting key can be longer than the index. You can alter ORDER BY at the moment of adding / removing the column
    • HTTP interface: brotli compression support, X-ClickHouse-Query-Id and X-ClickHouse-Summary headers in response, ability to cancel query on disconnect (check cancel_http_readonly_queries_on_client_close)
    • DFA-based implementation for functions sequenceMatch and sequenceCount in case the pattern doesn’t contain time
    • Back up all partitions at once with ALTER TABLE … FREEZE
    • Comments for a column in the table description
    • Join engine: new options join_use_nulls, max_rows_in_join, max_bytes_in_join, join_any_take_last_row and join_overflow_mode, plus a joinGet function that allows you to use a Join-type table like a dictionary (see the sketch after this list).
    • /docker-entrypoint-initdb.d for database initialization in docker
    • Graphite rollup rules reworked.
    • Query settings in asynchronous INSERTs into Distributed tables are respected now
    • Hints in the command-line client when a user makes a typo in a function name or type
    • system.detached_parts table containing information about detached parts of MergeTree tables
    • Table function remoteSecure
    • Ability to write zookeeper part data in more compact form (use_minimalistic_part_header_in_zookeeper)
    • Ability to close MySQL connections after their usage in external dictionaries
    • Support RENAME operation for Materialized View
    • Non-blocking loading of external dictionaries
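
    A sketch of the Join engine used as a dictionary via joinGet (table and values are hypothetical):

      CREATE TABLE currency_names (code UInt32, name String)
      ENGINE = Join(ANY, LEFT, code);

      INSERT INTO currency_names VALUES (840, 'USD'), (978, 'EUR');

      SELECT joinGet('currency_names', 'name', toUInt32(840)); -- returns 'USD'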

    Kafka now supports SASL SCRAM authentication, the new virtual columns _topic, _offset, and _key are available, and there are many other improvements.

    Settings changed/added

    Name Default Description
    allow_experimental_cross_to_join_conversion 1 Convert CROSS JOIN to INNER JOIN if possible
    allow_experimental_data_skipping_indices 0 If it is set to true, data skipping indices can be used in CREATE TABLE/ALTER TABLE queries.
    allow_experimental_low_cardinality_type 1 (was 0) Obsolete setting, does nothing. Will be removed after 2019-08-13
    allow_experimental_multiple_joins_emulation 1 Emulate multiple joins using subselects
    allow_hyperscan 1 Allow functions that use Hyperscan library. Disable to avoid potentially long compilation times and excessive resource usage.
    allow_simdjson 1 Allow using simdjson library in ‘JSON*’ functions if AVX2 instructions are available. If disabled rapidjson will be used.
    allow_suspicious_low_cardinality_types 0 In CREATE TABLE statement allows specifying LowCardinality modifier for types of small fixed size (8 or less). Enabling this may increase merge times and memory consumption.
    cancel_http_readonly_queries_on_client_close 0 Cancel HTTP readonly queries when a client closes the connection without waiting for response.
    check_query_single_value_result 1 Return check query result as single 1/0 value
    enable_conditional_computation (was 0) DELETED
    enable_optimize_predicate_expression 1 (was 0) If it is set to true, optimize predicates to subqueries.
    enable_unaligned_array_join 0 Allow ARRAY JOIN with multiple arrays that have different sizes. When this setting is enabled, arrays will be resized to the longest one.
    external_table_functions_use_nulls 1 If it is set to true, external table functions will implicitly use Nullable type if needed. Otherwise NULLs will be substituted with default values. Currently supported only for ‘mysql’ table function.
    idle_connection_timeout 3600 Close idle TCP connections after specified number of seconds.
    input_format_defaults_for_omitted_fields 0 For input data calculate default expressions for omitted fields (it works for JSONEachRow format).
    input_format_with_names_use_header 0 For TSVWithNames and CSVWithNames input formats this controls whether format parser is to assume that column data appear in the input exactly as they are specified in the header.
    join_any_take_last_row 0 When disabled (default) ANY JOIN will take the first found row for a key. When enabled, it will take the last row seen if there are multiple rows for the same key. Allows you to overwrite old values in table with Engine=Join.
    join_default_strictness ALL Set default strictness in JOIN query. Possible values: empty string, ‘ANY’, ‘ALL’. If empty, query without strictness will throw exception.
    low_cardinality_allow_in_native_format 1 Use LowCardinality type in Native format. Otherwise, convert LowCardinality columns to ordinary for select query, and convert ordinary columns to required LowCardinality for insert query.
    max_alter_threads auto The maximum number of threads to execute the ALTER requests. By default, it is determined automatically.
    max_execution_speed 0 Maximum number of execution rows per second.
    max_execution_speed_bytes 0 Maximum number of execution bytes per second.
    max_partitions_per_insert_block 100 Limit maximum number of partitions in single INSERTed block. Zero means unlimited. Throw exception if the block contains too many partitions. This setting is a safety threshold, because using large number of partitions is a common misconception.
    max_streams_multiplier_for_merge_tables 5 Request more streams when reading from Merge table. Streams will be spread across tables that Merge table will use. This allows more even distribution of work across threads and is especially helpful when merged tables differ in size.
    max_threads auto The maximum number of threads to execute the request. By default, it is determined automatically.
    merge_tree_max_bytes_to_use_cache 2013265920 The maximum number of bytes per request, to use the cache of uncompressed data. If the request is large, the cache is not used. (For large queries not to flush out the cache.)
    merge_tree_min_bytes_for_concurrent_read 251658240 If at least as many bytes are read from one file, the reading can be parallelized.
    merge_tree_min_bytes_for_seek 0 You can skip reading more than that number of bytes at the price of one seek per file.
    min_count_to_compile_expression 3 The number of identical expressions before they are JIT-compiled
    min_execution_speed_bytes 0 Minimum number of execution bytes per second.
    network_compression_method LZ4 Allows you to select the method of data compression when writing.
    optimize_skip_unused_shards 0 Assumes that data is distributed by sharding_key. Optimization to skip unused shards if SELECT query filters by sharding_key.
    os_thread_priority 0 If non zero - set corresponding ‘nice’ value for query processing threads. Can be used to adjust query priority for OS scheduler.
    output_format_parquet_row_group_size 1000000 Row group size in rows.
    stream_poll_timeout_ms 500 Timeout for polling data from streaming storages.
    tcp_keep_alive_timeout 0 The time (in seconds) the connection needs to remain idle before TCP starts sending keepalive probes

    MergeTree settings

    Name Default Description
    enable_mixed_granularity_parts 0 Enable parts with adaptive and non adaptive granularity
    index_granularity_bytes 10485760 Approximate amount of bytes in single granule (0 - disabled).
    write_final_mark 1 Write final mark after end of column (0 - disabled, do nothing if index_granularity_bytes=0)
    max_parts_in_total 100000 If more than this number active parts in all partitions in total, throw ‘Too many parts …’ exception.
    merge_with_ttl_timeout 86400 Minimal time in seconds, when merge with TTL can be repeated.
    min_merge_bytes_to_use_direct_io 10737418240 (was 0) Minimal amount of bytes to enable O_DIRECT in merge (0 - disabled).
    replicated_max_parallel_fetches_for_host 15 Limit parallel fetches from endpoint (actually pool size).
    use_minimalistic_part_header_in_zookeeper 0 Store part header (checksums and columns) in a compact format and a single part znode instead of separate znodes (/columns and /checksums). This can dramatically reduce snapshot size in ZooKeeper. Before enabling check that all replicas support new format.
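
    Current values of these MergeTree settings can be inspected on a running server; a minimal sketch:

      SELECT name, value
      FROM system.merge_tree_settings
      WHERE name IN ('index_granularity_bytes', 'use_minimalistic_part_header_in_zookeeper');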

    1.11 - Altinity Stable for ClickHouse 18.14

    1.11.1 - Altinity Stable for ClickHouse 18.14.19

    Details of the Altinity Stable Release 18.14.19

    Description

    Altinity Stable 18.14.19 is a minor bug-fixing release to the previous 18.14.15. ClickHouse memory consumption in general is decreased with this release.

    Bug fixes

    • Fixed a bug with range_hashed dictionaries returning wrong results.
    • Fixed an error that caused messages “netlink: ‘…’: attribute type 1 has an invalid length” to be printed in Linux kernel log in some recent versions of Linux kernel.
    • Fixed segfault in function ’empty’ for FixedString arguments.
    • Fixed excessive memory allocation when using a large value of max_query_size.
    • Fixed cases when the ODBC bridge process did not terminate with the main server process.
    • Fixed synchronous insertion into the Distributed table with a columns list that differs from the column list of the remote table.
    • Fixed a rare race condition that could lead to a crash when dropping a MergeTree table.
    • Fixed a query deadlock in a case when query thread creation fails with the ‘Resource temporarily unavailable’ error.
    • Fixed parsing of the ENGINE clause when the CREATE AS table syntax was used and the ENGINE clause was specified before the AS table (the error resulted in ignoring the specified engine).
    • Fixed a segfault if the ‘max_temporary_non_const_columns’ limit was exceeded.
    • Fixed a bug with databases not being correctly specified when executing DDL ON CLUSTER queries.


    2 - Altinity Backup for ClickHouse

    Altinity Backup for ClickHouse Release Notes

    Most recent releases organized by major release and release date:

    2.1 - Altinity Backup for ClickHouse 1.5.0

    IMPROVEMENTS

    • Improved the check_replicas_before_attach configuration implementation to avoid concurrent ATTACH PART execution during the restore command on a multi-shard cluster, fix 474

    Download at: https://github.com/Altinity/clickhouse-backup/releases/tag/v1.5.0

    2.2 - Altinity Backup for ClickHouse 1.4.8

    IMPROVEMENTS

    • Add check_replicas_before_attach configuration to avoid concurrent ATTACH PART execution during the restore command on a multi-shard cluster, fix 474
    • Allow backup list when the ClickHouse server is offline, fix 476

    BUG FIXES

    • Resolve {uuid} macros during restore for ReplicatedMergeTree tables on ClickHouse server 22.5+, fix 466

    Download at: https://github.com/Altinity/clickhouse-backup/releases/tag/v1.4.8

    2.3 - Altinity Backup for ClickHouse 1.4.9

    BUG FIXES

    • Fix download behavior for parts which contain special characters in their names, fix 462

    Download at: https://github.com/Altinity/clickhouse-backup/releases/tag/v1.4.9

    2.4 - Altinity Backup for ClickHouse 1.4.7

    IMPROVEMENTS

    • Properly restore to the default disk if disks are not found on the destination ClickHouse server, fix 457

    Download at: https://github.com/Altinity/clickhouse-backup/releases/tag/v1.4.7

    2.5 - Altinity Backup for ClickHouse 1.4.6

    BUG FIXES

    • Fix an infinite loop with the errors can’t acquire semaphore during Download: context canceled and can’t acquire semaphore during Upload: context canceled; all 1.4.x users are recommended to upgrade to 1.4.6

    Download at: https://github.com/Altinity/clickhouse-backup/releases/tag/v1.4.6

    2.6 - Altinity Backup for ClickHouse 1.4.5

    IMPROVEMENTS

    • Add CLICKHOUSE_FREEZE_BY_PART_WHERE option which allows freezing by part with a WHERE condition, thanks @vahid-sohrabloo

    Download at: https://github.com/Altinity/clickhouse-backup/releases/tag/v1.4.5

    2.7 - Altinity Backup for ClickHouse 1.4.3

    IMPROVEMENTS

    • Add API_INTEGRATION_TABLES_HOST option to allow using a DNS name in the integration tables system.backup_list and system.backup_actions (see the sketch below)
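
    A hypothetical query against these integration tables, assuming the API server is enabled and the tables are exposed under their default names:

      SELECT * FROM system.backup_list;
      SELECT * FROM system.backup_actions;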

    BUG FIXES

    • Fix upload_by_part: false max file size calculation, fix 454

    Download at: https://github.com/Altinity/clickhouse-backup/releases/tag/v1.4.3

    2.8 - Altinity Backup for ClickHouse 1.4.4

    IMPROVEMENTS

    • Download and restore to the default disk if disks are not found on the destination ClickHouse server, fix 457

    Download at: https://github.com/Altinity/clickhouse-backup/releases/tag/v1.4.4

    2.9 - Altinity Backup for ClickHouse 1.4.2

    BUG FIXES

    • Fix --partitions parameter parsing, fix 425

    Download at: https://github.com/Altinity/clickhouse-backup/releases/tag/v1.4.2

    2.10 - Altinity Backup for ClickHouse 1.4.1

    BUG FIXES

    • Fix upload data goroutines waiting; expect the same upload speed as in 1.3.2.

    Download at: https://github.com/Altinity/clickhouse-backup/releases/tag/v1.4.1

    2.11 - Altinity Backup for ClickHouse 1.4.0

    IMPROVEMENTS

    • Add S3_ALLOW_MULTIPART_DOWNLOAD to config to improve download speed, fix 431.
    • Add support for backup/restore of user-defined functions, fix 420.
    • Add the clickhouse_backup_number_backups_remote, clickhouse_backup_number_backups_local, clickhouse_backup_number_backups_remote_expected, and clickhouse_backup_number_backups_local_expected Prometheus metrics, fix 437.
    • Add the ability to apply system.macros values to the path field in all types of remote_storage, fix 438.
    • Use all disks in parallel for upload and download of multi-disk volumes when upload_by_part: true, fix 400.

    BUG FIXES

    • Fix wrong warning for .gz, .bz2, .br archive extensions during download, fix 441.

    Download at: https://github.com/Altinity/clickhouse-backup/releases/tag/v1.4.0

    2.12 - Altinity Backup for ClickHouse 1.3.2

    IMPROVEMENTS

    • Add TLS certificates and TLS CA support for clickhouse connections, fix 410.
    • Switch to go 1.18.
    • Add clickhouse version 22.3 to integration tests.
    • Add S3_MAX_PARTS_COUNT and AZBLOB_MAX_PARTS_COUNT for properly calculate buffer sizes during upload and download.
    • Add multithreading GZIP implementation.

    BUG FIXES

    • fix 406, properly handle paths for S3 and GCS when they begin with “/”
    • fix 409, avoid deleting partially uploaded backups via the backups_keep_remote option
    • fix 422, avoid caching broken (partially uploaded) remote backup metadata
    • fix 404, properly calculate S3_PART_SIZE to avoid freezes after 10000 multipart uploads, and properly handle errors when an upload or download goroutine fails to avoid pipe stalls

    Download at: https://github.com/Altinity/clickhouse-backup/releases/tag/v1.3.2

    2.13 - Altinity Backup for ClickHouse 1.3.1

    IMPROVEMENTS

    • Fix 387, improve documentation related to memory and CPU usage

    BUG FIXES

    • fix 392, correct download for recursive sequences of diff backups when DOWNLOAD_BY_PART is true
    • fix 390, respect skip_tables patterns during restore and skip all INFORMATION_SCHEMA related tables even if skip_tables doesn’t contain an INFORMATION_SCHEMA pattern
    • fix 388, improve restore ON CLUSTER for VIEW with TO clause
    • fix 385, properly handle multiple incremental backup sequences + BACKUPS_TO_KEEP_REMOTE

    Download at: https://github.com/Altinity/clickhouse-backup/releases/tag/1.3.1

    2.14 - Altinity Backup for ClickHouse 1.2.3

    HOT FIXES

    • Fix 390, respect skip_tables patterns during restore and skip all INFORMATION_SCHEMA related tables even if skip_tables doesn’t contain an INFORMATION_SCHEMA pattern

    Download at: https://github.com/Altinity/clickhouse-backup/releases/tag/1.2.3

    2.15 - Altinity Backup for ClickHouse 1.3.0

    Improvements

    • Add API_ALLOW_PARALLEL to support multiple parallel execution calls. WARNING: watch the command names and don’t try to execute the same command multiple times in parallel; be careful, as it could allocate a lot of memory during upload/download, fix #332
    • Add support for --partitions on the create, upload, download, and restore CLI commands and API endpoints, fix #378; proper implementation of #356
    • Add implementation of --diff-from-remote for the upload command and properly handle ‘required’ on the download command, fix #289
    • Add print-config cli command, fix #366
    • Add UPLOAD_BY_PART (default: true) option to improve upload/download concurrency, fix #324
    • Add ARM support for Docker images and pre-compiled binary files, fix #312
    • KeepRemoteBackups should respect differential backups, fix #111
    • Add SFTP_DEBUG option, fix #335
    • Add ability to restore schema ON CLUSTER, fix #145
    • Add support encrypted disk (include s3 encrypted disks), fix #260
    • API server optimization for the speed of last_backup_size_remote metric calculation: make it async during REST API startup and after download/upload, fix #309
    • Improve list remote speed via a local metadata cache in $TEMP/.clickhouse-backup.$REMOTE_STORAGE, fix #318
    • Add CLICKHOUSE_IGNORE_NOT_EXISTS_ERROR_DURING_FREEZE option, fix #319
    • Add support for PROJECTION, fix #320
    • Return the clean cli command and the API POST /backup/clean endpoint, fix #379

    Bug Fixes

    • fix #300, allow GCP remote storage to work properly with an empty GCP_PATH value
    • fix #340, properly handle errors on S3 during Walk() and delete old backup
    • fix #331, properly restore tables which have the same name as their database
    • fix #311, properly run clickhouse-backup inside a docker container via entrypoint
    • fix #317, properly upload large files to Azure Blob Storage
    • fix #220, properly handle total_bytes for uint64 type
    • fix #304, properly handle the archive extension during download instead of using config settings
    • fix #375, properly handle REMOTE_STORAGE=none errors
    • fix #379, try to clean shadow if create fails during moveShadow
    • more precise backup size calculation during upload for backups created with --partitions, fix bug after #356
    • fix restore --rm behavior for 20.12+ for tables which have dependent objects (like dictionaries)
    • fix concurrency of FTP directory creation during upload and reduce connection pool usage; properly handle the --schema parameter to show local backup size after download
    • fix restore bug for WINDOW VIEW, thanks @zvonand

    Experimental

    • Experimental support for backing up MaterializedMySQL and MaterializedPostgreSQL tables; restoring MySQL tables is not possible now without replacing table_name.json with Engine=MergeTree, and PostgreSQL is not supported yet, see ClickHouse/ClickHouse#32902

    Download at: https://github.com/Altinity/clickhouse-backup/releases/tag/1.3.0

    3 - Altinity Kubernetes Operator

    clickhouse-operator Release Notes

    Most recent releases organized by major release and release date:

    3.1 - Altinity Kubernetes Operator 0.19.0

    clickhouse-operator 0.19.0 Release Notes

    What’s Changed

    New Contributors

    Full Changelog: https://github.com/Altinity/clickhouse-operator/compare/0.18.5...release-0.19.0

    Download at: https://github.com/Altinity/clickhouse-operator/releases/tag/release-0.19.0

    3.2 - Altinity Kubernetes Operator 0.18.5

    clickhouse-operator 0.18.5 Release Notes

    What’s Changed

    • Dependencies were updated to address CVE-2022-21698 and CVE-2021-38561
    • generate_chart.sh by @ganievs in #925

    New Contributors

    • @ganievs made their first contribution in #925

    Full Changelog: 0.18.4-0.18.5

    Download at: https://github.com/Altinity/clickhouse-operator/releases/tag/0.18.5

    3.3 - Altinity Kubernetes Operator 0.18.4

    clickhouse-operator 0.18.4 Release Notes

    What’s Changed

    • Base image has been updated to address https://access.redhat.com/errata/RHSA-2022:0896 (closes #913).
    • Https support for health checks by @bkuschel in #912
    • Fixed number of SERVERS from 1 to 3 for a 3 node clickhouse-keeper deployment by @a-dot in #902
    • clickhouse-keeper and ZooKeeper manifests were updated by @Slach in #911 and #916

    New Contributors

    • @a-dot made their first contribution in #902
    • @bkuschel made their first contribution in #912

    Full Changelog: 0.18.3-0.18.4

    Download at: https://github.com/Altinity/clickhouse-operator/releases/tag/0.18.4

    3.4 - Altinity Kubernetes Operator 0.18.3

    clickhouse-operator 0.18.3 Release Notes

    What’s Changed

    • Fixed a bug that might result in broken CHI if operator has been restarted during the reconcile
    • Added ‘AGE’ to CHI status
    • Added ClickHouse Keeper examples

    Download at: https://github.com/Altinity/clickhouse-operator/releases/tag/0.18.3

    3.5 - Altinity Kubernetes Operator 0.18.2

    clickhouse-operator 0.18.2 Release Notes
    • Operator now correctly respects configuration settings to exclude annotations from managed objects. Fixes #859.
    • Make CHI status messages more human friendly

    Download at: https://github.com/Altinity/clickhouse-operator/releases/tag/0.18.2

    3.6 - Altinity Kubernetes Operator 0.18.1

    clickhouse-operator 0.18.1 Release Notes

    What’s Changed

    • fix non pointer mistake in metrics-exporter by @adu-bricks in #870
    • Helper files for operatorhub.io integration

    New Contributors

    • @adu-bricks made their first contribution in #870

    Download at: https://github.com/Altinity/clickhouse-operator/releases/tag/0.18.1

    3.7 - Altinity Kubernetes Operator 0.18.0

    clickhouse-operator 0.18.0 Release Notes

    New features

    • arm64 packages are published (closes #852)
    • ‘access_management’ can be specified when defining users in CHI.
    • k8s secrets can be referenced when defining user passwords. See 05-settings-01-overview.yaml.

    Changed

    • When the CRD is deleted, the operator now keeps all dependent objects (statefulsets, volumes). That prevents accidental deletion of a cluster.
    • When the operator restarts, it no longer runs a reconcile cycle if the CHI has not been changed. That prevents unneeded pod restarts. (Closes #855)
    • Operator configuration file format has been changed. See https://github.com/Altinity/clickhouse-operator/blob/0.18.0/config/config.yaml. Configuration in old format is supported for backwards compatibility.

    Fixed

    • Fixed a bug ‘unable to decode watch event: no kind “ClickHouseOperatorConfiguration” is registered’ that could appear in some k8s configurations.
    • Removed INFORMATION_SCHEMA from schema propagation. (closes #854)
    • Added a containerPort to metrics-exporter (#834)

    New Contributors

    • @johnhummelAltinity made their first contribution in #838
    • @jiangxinqi1995 made their first contribution in #846
    • @johnny made their first contribution in #834

    Download at: https://github.com/Altinity/clickhouse-operator/releases/tag/0.18.0

    4 - Altinity.Cloud Release Notes

    Most recent releases organized by major release and release date:

    The GitHub repository for this project is located at:

    https://github.com/Altinity/acm

    4.1 - Altinity.Cloud 22.8.15

    Added

    • New tool to review and remove detached parts
    • New tool to test connectivity to Kafka broker
    • New tool to import test datasets – available for 22.3 ClickHouse version and above

    Changed

    • Default log level for new clusters is set to ‘information’. Users can edit log level using ’logger/level’ cluster setting
    • Storage size is displayed as ‘Pending’ if a rescale has not been completed
    • Improved version upgrade recommendations
    • Improved Schema and Workload tabs

    Fixed

    • Fixed a bug with the uptime schedule that could sometimes stop clusters unexpectedly
    • Fixed a bug with memory configuration settings if a cluster is resumed to a different node size

    4.2 - Altinity.Cloud 22.7.13

    Added

    • User role is now displayed on Account page
    • A warning is displayed when a ClickHouse version upgrade is available
    • Sorting has been added to the Schema tab
    • A tool to convert a table from MergeTree to ReplicatedMergeTree (available for superadmins only)
    • VPC Endpoint status is displayed on Connections popup
    • Custom tolerations can be specified for node types

    Changed

    • User is automatically authenticated as ClickHouse admin when starting a new cluster
    • Schedule icon opens a Schedule popup now
    • When rescaling a volume, minimum rescale ratio is 20% of the volume size
    • Org admin can access Environment details now (previously it was only possible with Env admin)

    Fixed

    • ClickHouse user authentication fixes
    • List of backups was sometimes not properly sorted
    • Fixed a possible race condition in backup and restore of replicated clusters

    4.3 - Altinity.Cloud 22.6.18

    Added

    • Explorer now requires user authorization using ClickHouse user/password
    • Altinity staff does not have access to the data by default. Multiple levels of access can be granted by a customer – that enables the ‘altinity’ user to work with the data.
    • New ‘Workload’ tab in Explorer to analyze typical queries
    • Copy to Clipboard buttons in Connection Details

    Changed

    • DataDog metrics are labeled with environment and cluster
    • clickhouse-operator user is restricted to Altinity network only
    • Brute force protection is enabled for password recovery form

    4.4 - Altinity.Cloud 22.5.15

    Added

    • Cluster Uptime feature that allows stopping the cluster automatically when inactive or by schedule.
    • Reset to defaults is available on ClickHouse settings and profile settings pages.
    • Email notification is sent to the user when trial is extended.
    • Connection to Remote environment via Altinity Connector.

    Changed

    • Display a warning when downgrading ClickHouse major version
    • Display a warning when non-replicated MergeTree table is created on replicated cluster
    • Cluster Dashboard now shows LB type and storage type

    Fixed

    • Fixed UI slowness when a long query is being executed.
    • Fixed the Schema tab that sometimes did not display tables when a database contains hundreds of them.
    • Small UI fixes in different places.

    4.5 - Altinity.Cloud 22.4.15

    Added

    • Breakdown by clusters on a billing page.
    • Initial support for user managed environments.
    • Environment connection status.

    Changed

    • Changed cluster wizard to make it easier to use.
    • Fractional disk sizes are not allowed anymore.
    • ‘Pay by Invoice’ option is disabled.

    Fixed

    • Fixed a bug with local backups not being deleted that might result in higher disk usage.
    • Fixed a possible backup failure on clusters with async_inserts=1.
    • Fixed a bug where ALTER TABLE sometimes did not work on clusters restored from a backup.
    • Fixed a bug where the schema sometimes did not refresh.
    • Fixed incorrect retention of backups of a sharded cluster.

    4.6 - Altinity.Cloud 22.9.21

    Added

    • Added a feature to convert from non-replicated to replicated MergeTree (available for Altinity staff only)
    • Data Migration Tool to copy data between clusters (available for Altinity staff only)
    • New ‘Launch from Backup’ wizard that allows picking any backup source, including sources outside of Altinity.Cloud
    • Replication status in Workload
    • Alert cluster owners when the Altinity access level is changed
    • Backlink button to the Environment page
    • Proxy connection mode for Anywhere clusters

    Changed

    • Workload tab was refactored to show 3 panels: Queries, Replication and Processes
    • Workload is sorted in descending order by default
    • Table creation hints were updated in Explorer
    • Improved loading test datasets feature
    • User is automatically redirected to the last visited page on re-login

    Fixed

    • Fixed restoring a backup in sharded and replicated cluster with tables replicated to every shard
    • Fixed sorting order in dashboard
    • Last insert timestamp now displays the timestamp for Kafka inserts correctly
    • Fixed SELECT ALL IGNORED for detached parts

    4.7 - Altinity.Cloud 22.3.14

    Added

    • Allow to select a node type when launching a cluster from backup (available for Altinity staff only).

    Changed

    • DROP TABLE is executed now before restoring a table from the backup.
    • CPU is correctly displayed in cores on Environment page in this release.
    • clickhouse-operator has been upgraded to 0.18.3.
    • Fixed a bug that did not allow entering ‘0’ in cluster settings.
    • Fixed a bug that might result in cluster not going online when using ClickHouse versions 22.2 and above.
    • Fixed a bug with incorrect display of billing data for GCP environments.
    • Fixed a bug with AWS NLB usage.
    • Fixed a bug with cluster metadata not being stored in backup sometimes.

    4.8 - Altinity.Cloud 22.2.15

    Added

    • Notifications for cluster related events can be configured
    • Datadog integration
    • Option to force Auth0 authentication for organization
    • Dark UI theme
    • Cmd/Ctrl+Enter in Explorer can be used to run a query

    Changed

    • ZooKeeper is now stopped when cluster is stopped
    • Warning has been removed from volume rescale popup
    • Monitoring links have been moved to the top in order to make it easier to find
    • Max part size has been reduced to 50GB
    • Column compression tab is more compact now
    • When backup operation is in progress all operations that may incidentally restart the cluster are blocked now

    Fixed

    • Fixed a bug with list of databases sometimes could not be parsed in user dialog
    • Fixed a bug with replica cluster creation
    • Fixed latest backup timestamp display for stopped clusters
    • Fixed a bug with table restore from a backup that could download full cluster backup instead of a single table

    4.9 - Altinity.Cloud 22.1.28

    Added

    • Restore cluster from an External backup (available for superadmins only)
    • Restore single table from a backup
    • New Grafana dashboard for logs

    Changed

    • UI facelifting on Cluster and Nodes dashboards
    • System tables are now hidden by default on Schema dashboard
    • Stronger security for Grafana monitoring dashboard
    • Better performance of Billing dashboard
    • Use ‘pd-ssd’ fast storage for ZooKeeper nodes in GCP
    • Cluster dashboard now shows a warning if backup utility is outdated
    • Backup line item is now correctly displayed for GCP clusters
    • Support plan is now displayed on a billing page
    • Error Log section is removed from ClickHouse logs page

    Fixed

    • Fixed ON CLUSTER switch behavior
    • Removed deprecated ClickHouse settings from default profile
    • Removed deprecated node annotations

    4.10 - Altinity.Cloud 21.13.21

    Added

    • Expiration date for trials can be adjusted from UI (previously it could only be done with an API call)
    • Expiration date for notifications can be configured
    • Units to the billing usage page
    • Legend to the Columns tab that clarifies key flags

    Changed

    • UI has been refreshed with a new version of Clarity.Design library
    • When user VPC endpoint is configured Access Point URLs are switched to ‘internal’ now
    • AWS NLB is disabled by default. It can be enabled on Environment level if needed
    • Cron job logs are sent to centralized logging

    Fixed

    • Processes view is fixed for ClickHouse versions 21.8
    • Estimated costs for GCP are corrected
    • Trial notifications are no longer created for blocked organizations

    4.11 - Altinity.Cloud 21.12.26

    Added

    • Post-trial workflow allows user to signup to the paid account
    • Column level compression tab in Schema popup
    • New billing user role in order to access billing only
    • Soft rolling restart of ClickHouse nodes is now available. The previous ‘hard’ restart can be used as well.
    • Background operation status is now displayed on the cluster dashboard
    • Billing integration for GCP
    • Backups for GCP

    Changed

    • ZooKeeper is updated to 3.7.0. It only affects new clusters.
    • ClickHouse Backup is updated to 1.2.2
    • Users now receive a notification when they subscribe to or unsubscribe from ClickHouse alerts
    • Billing company name can now be different from the organization name

    Fixed

    • Fixed ‘Refresh’ button on Processes tab that did not refresh the screen sometimes
    • Fixed an issue when sometimes schema was displayed as empty

    4.12 - Altinity.Cloud 21.11.24

    Added

    • Usage details are available in Billing
    • Download links to issued invoices are available in Billing
    • Backup cost is extracted as a separate line item. Previously it was a part of a storage cost.
    • VPC Endpoint status is displayed when enabled
    • Datadog integration is supported for ClickHouse logs. Contact us with your datadog agent id.
    • GCP support
    • Enabled support for Altinity Builds of ClickHouse

    Changed

    • Networks are removed from ClickHouse users creation. It is now controlled in Connectivity on a cluster level
    • ZooKeeper cluster size can not be selected anymore. It is adjusted automatically.
    • Available API endpoints are now filtered according to the user role. Previously, user could see all API even if those could not be used.

    Fixed

    • Fixed a bug with node endpoints being unavailable under VPC endpoint

    4.13 - Altinity.Cloud 21.10.28

    Added

    • Allow users to configure alerts for the cluster. Note: cluster may be restarted when email is used for a first time
    • IP/CIDR whitelisting to protect access point access
    • Compression ratio column on a Schema tab
    • Email notifications (system)
    • Notifications to a specific recipient (system)
    • ‘Prepare All Invoices’ super-button that sends invoices to all paid customers (system)
    • CC and BCC addresses for invoices

    Changed

    • Users and ClickHouse users passwords are now required to be 12 characters at least
    • Session timeout is increased to 60 minutes
    • Troubleshooting mode now allows Altinity Engineers to enter the failed node
    • ‘View Logs’ are renamed to ‘Logs’
    • Backup utility is updated to altinity/clickhouse-backup:1.0.16 (https://github.com/Altinity/clickhouse-backup/releases/tag/1.0.16). That improves the speed of backups, among other fixes.

    4.14 - Altinity.Cloud 21.9.9

    Added

    • Cluster level access control
    • Cluster Restart action
    • User level configurable API tokens

    Changed

    • Improved ZK cluster scale up/down logic
    • Improved Invoices creation logic
    • Security improvements

    4.15 - Altinity.Cloud 21.8.20

    Added

    • Billing integration
    • Price estimator is now displayed when starting a cluster
    • In-app notifications
    • Allow to deploy 3-node ZK cluster in the regions with only two AZs available (e.g. us-west-1)
    • Cluster name now appears at all cluster related popup windows

    Changed

    • Upgraded clickhouse-operator to 0.15.0
    • Backup and security improvements

    4.16 - Altinity.Cloud 21.7.19

    Added

    • Support for multiple environments for an organization
    • Basic usage information at the billing page
    • Download of logs
    • Password recovery if user has lost his password
    • Externally available status page that can be referenced as https://status.<tenant-name>.altinity.cloud

    Changed

    • ClickHouse custom images are restricted to Yandex and Altinity only
    • Non-XML config files are now allowed (e.g. for dictionaries)
    • Backup and security improvements

    4.17 - Altinity.Cloud 21.6.9

    Added

    • New Environment Settings section into cluster configuration
    • default_replica_path and default_replica_name settings are now pre-populated, which simplifies creation of ReplicatedMergeTree tables (requires ClickHouse 21.1+)
    • Node selector is now available on the Explorer page. It allows running queries against an individual node.

    Changed

    • Improved usability of cluster settings
    • Fixed a bug where “.inner.” tables did not work properly in Explorer
    • Fixed a bug where orgadmin did not have permissions to modify cluster settings
    • Fixed a bug where orgadmin could see a user’s plain password in the user audit log when the password was changed
    • Upgraded clickhouse-operator to 0.14.1. That fixes a possible failure of schema creation when adding a new replica to the cluster.
    • Upgraded clickhouse-backup to 1.0.9. That fixes restore of materialized views and distributed tables.
    • Backup and security improvements

    4.18 - Altinity.Cloud 21.5.10

    Added

    • Node endpoints connection details
    • Last query and last insert timestamps
    • Fixed default ClickHouse version when upgrading.
    • Login screen protection from a brute force attack

    Changed

    • Now it is possible to rollback quickly to the older version if ClickHouse does not start after upgrade
    • Fixed Share feature when user belongs to a different environment
    • Enabled multi-AZ ZooKeeper deployment
    • Upgraded to clickhouse-operator 0.14.0
    • Improvements to ACM dashboard
    • Backup and security improvements

    4.19 - Altinity.Cloud 21.4.10

    • Schema tab has been fully re-designed
    • Atomic databases can now be correctly backed up with the new version of the backup utility (requires cluster re-publish)
    • We have added the gp3 storage option, and removed unencrypted volumes completely
    • ClickHouse version now automatically defaults to the latest Altinity Stable release
    • The number of replicas for Zookeeper clusters has been added to a list of Zookeepers
    • General stability and security improvements

    4.20 - Altinity.Cloud 21.3.21

    • New ‘Share’ function to invite users
    • A cluster created by one user can no longer be deleted by another user
    • Improved Account Details screen
    • New service status page that allows to subscribe for status changes
    • Added a startup mode advanced setting. New ‘Direct’ mode decreases startup times for huge clusters
    • Added input validation in different places
    • Added a limit on maximum number of shards and replicas
    • Added an option to re-scale ZooKeeper from 1 to 3 nodes and vice versa (available only for admins yet)
    • Stability, security and backup improvements

    4.21 - Altinity.Cloud 21.2.15

    • Grafana is now accessible under ‘/monitoring’ path rather than a separate endpoint. That works under VPC as well.
    • Individual shards and replicas can be directly accessed by TCP and HTTP clients. In order to use this, the cluster needs to be re-published. Ask us for more details if interested.
    • Flexible backup schedule (available on request)
    • Restore a backup to a new cluster (available on request)
    • Cluster names are now forced to use only lower case latin characters and numbers
    • Fixed a bug where the RENAME TABLE query did not work in ON CLUSTER mode
    • Fixed a bug that lost SSL files when the cluster was stopped for a long time
    • UI stability improvements and bug fixes

    4.22 - Altinity.Cloud 21.1.12

    • UI performance optimizations
    • Usability changes in Rescale screen
    • Added a validation to Profile Settings values
    • Login attempts are now correctly logged into Audit Log
    • Macros section is removed. Macros can be overridden via Settings if needed and looked up in system.macros table
    • Fixed a bug in Query Explorer that did not properly display results of queries with a WITH statement
    • Fixed a bug that did not allow resuming to a different node type
    • UI improvements and minor bug fixes

    4.23 - Altinity.Cloud 20.5.10

    • Handling of custom configuration files in cluster settings has been improved
    • Altinity Stable ClickHouse version has been upgraded to 20.8.11.17
    • Cluster Migrate feature has been temporarily disabled
    • UI improvements and minor bug fixes

    4.24 - Altinity.Cloud 20.4.3

    • Grafana has been switched to Edge Ingress load balancer, and is now secured by https
    • UI improvements and minor bug fixes

    4.25 - Altinity.Cloud 20.3.4

    • Platform specific settings are now protected from user modifications
    • Node and Cluster status is displayed correctly
    • ClickHouse version has been added to the clusters list
    • Session times out after 30 minutes of user inactivity
    • Healthcheck screen has been improved
    • Automatic redirect from HTTP to HTTPS when accessing the management plane

    5 - Altinity Dashboard for Kubernetes

    Altinity Dashboard for Kubernetes Release Notes

    Most recent releases organized by major release and release date:

    5.1 - Altinity Dashboard for Kubernetes 0.1.4 Release Notes

    Altinity Dashboard for Kubernetes Release Notes for version 0.1.4

    What’s New

    • Update clickhouse-operator to 0.19.0
    • Node package security updates
    • Update go-restful for CVE-2022-1996

    Download from: https://github.com/Altinity/altinity-dashboard/releases/tag/v0.1.4.

    5.2 - Altinity Dashboard for Kubernetes 0.1.3 Release Notes

    Altinity Dashboard for Kubernetes Release Notes for version 0.1.3

    What’s New

    • Update clickhouse-operator to 0.18.4
    • Node package security updates

    Download from: https://github.com/Altinity/altinity-dashboard/releases/tag/v0.1.3.

    5.3 - Altinity Dashboard for Kubernetes 0.1.2 Release Notes

    Altinity Dashboard for Kubernetes Release Notes for version 0.1.2

    What’s New

    • Documentation updates
    • Many NPM package updates
    • Restructured adash.go / server.go
    • Minor UI fixes
    • Warn if operator is not the latest version
    • Use local IP for openbrowser when bindhost=0.0.0.0
    • Check for nil values in k8s API results

    Download from: https://github.com/Altinity/altinity-dashboard/releases/tag/v0.1.2.

    5.4 - Altinity Dashboard for Kubernetes 0.1.1 Release Notes

    Altinity Dashboard for Kubernetes Release Notes for version 0.1.1

    What’s Changed

    • Add loading skeletons
    • Ensure only one refresh in flight at once, and don’t refresh when not visible
    • Move addAlert to a Context and close mobile menu on click
    • Issue warnings for no or unbound storage

    Download at: https://github.com/Altinity/altinity-dashboard/releases/tag/v0.1.1

    5.5 - Altinity Dashboard for Kubernetes 0.1.0 Release Notes

    Altinity Dashboard for Kubernetes Release Notes for version 0.1.0

    Initial Release

    Download at: https://github.com/Altinity/altinity-dashboard/releases/tag/v0.1.0

    6 - Altinity Plugin Grafana for ClickHouse

    Altinity Plugin Grafana for ClickHouse Release Notes

    Most recent releases organized by major release and release date:

    6.1 - Altinity Grafana Plugin for ClickHouse 2.5.0

    Enhancements

    • Add support for Logs visualization, fix #331, thanks @Fiery-Fenix and @pixelsquared
    • Add $conditionalTest to editor auto-complete
    • Add support for $__searchFilter in template variable queries, fix #354
    • Allow sub-second time resolution with $timeSeriesMs and $timeFilterMs support, fix #344, fix #398 (see the sketch after this list)
    • Expand template variable values when opening the Explore context menu, fix #346
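
    A sketch of a Grafana panel query using the new millisecond macros; requests_log is a hypothetical table, while $timeSeriesMs and $timeFilterMs are the plugin macros named above:

      SELECT
          $timeSeriesMs AS t,
          count() AS requests
      FROM requests_log
      WHERE $timeFilterMs
      GROUP BY t
      ORDER BY t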

    Fixes

    • Remove legacy binaries in dist folder, fix #419
    • Allow Nullable types in alert label name in backend part, fix #405
    • Remove INFORMATION_SCHEMA from adhoc control, fix #426
    • Remove legacy binaries left in the dist folder after the plugin name change in the 2.4.4 release, fix #419
    • Resolve corner case for SELECT x IN (SELECT …), fix #421
    • Tested textbox variables with ${variable:sqlstring} format, fix #125

    Download at: https://github.com/Altinity/clickhouse-grafana/releases/tag/v2.5.0

    6.2 - Altinity Grafana Plugin for ClickHouse 2.4.2

    Fixes

    • Fix unified alerts interval #400

    Download at: https://github.com/Vertamedia/clickhouse-grafana/releases/tag/v2.4.2

    7 - Altinity Tableau Connector for ClickHouse

    Altinity Tableau Connector for ClickHouse Release Notes

    Most recent releases organized by major release and release date:

    7.1 - Altinity Tableau Connector for ClickHouse 1.0.0 ODBC

    Altinity Tableau Connector for ClickHouse 1.0.0 ODBC

    v1.0.0-odbc-unicode

    • Added compatibility with latest Tableau SDK.

    Download at: https://github.com/Altinity/clickhouse-tableau-connector-odbc/releases/tag/v1.0.0-odbc-unicode

    7.2 - Altinity Tableau Connector for ClickHouse 1.0.0 JDBC Release Notes

    Altinity Tableau Connector for ClickHouse 1.0.0 JDBC Release Notes

    JDBC version of the connector.

    Download at: https://github.com/Altinity/clickhouse-tableau-connector-odbc/releases/tag/v1.0.0-jdbc

    8 - ODBC Driver for ClickHouse Release Notes

    ODBC Driver for ClickHouse Release Notes

    Most recent releases organized by major release and release date:

  • 2021-08-22 Altinity ODBC Driver for ClickHouse 1.1.10.20210822 Release Notes

    8.1 - Altinity ODBC Driver for ClickHouse 1.1.10.0210822 Release Notes

    Altinity ODBC Driver for ClickHouse Release Notes for version 1.1.10.20210822

    WARNING: Before upgrading to a new version of the driver, make sure you read the release notes of all intermediate versions!

    • Fixed compilation for recent versions of GCC/Clang/AppleClang (#356)
    • Fixed handling of Null values and Nothing type (#356)
    • Fixed DateTime type as String issues (#356)
    • Implemented parametrized early connection/credential error detection (#356)
    • Implemented parametrized big integer types handling as strings (#356)
    • Various lesser fixes for ClickHouse ODBC-based Tableau connector

    Download at: https://github.com/ClickHouse/clickhouse-odbc/releases/tag/v1.1.10.20210822