- Top-selling products
- Ranks for most-clicked articles or videos
- Total cost for business operation
- Statements for the financial department
- Summaries of registered users
- Statistics of unique visitors
- ... etc.
Basic Elements of a Report
- Range, numbers, text, data type
- Aggregations: sum, average, count, max, min
- Transformers: concat, mask, number format
- Groups
- Orders
Report Performance in MySQL
- What if your data has 1,000, 10k, 1M, 10M, or 100M rows?

SELECT product_id, COUNT(id) AS count
FROM orders
WHERE status != 'canceled'
GROUP BY product_id
ORDER BY count DESC
LIMIT 10

| id | user_id | product_id | amount | discount | shipping_fee | total | status | created_at |
| 1 | 35 | 2 | 1000 | 50 | 0 | 950 | shipped | 2022-11-11 09:24:13 |
| 2 | 78 | 5 | 860 | 20 | 60 | 900 | paid | 2022-11-11 10:12:04 |
| 3 | 93 | 11 | 780 | 0 | 60 | 840 | canceled | 2022-11-12 01:39:17 |
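A typical first attempt in MySQL is to add a covering index for this query; the index name and column order below are assumptions for illustration, not something shown on the slides.

```sql
-- Hypothetical covering index: lets MySQL read (product_id, status) pairs from
-- the index instead of the full rows when computing the top-products report.
ALTER TABLE orders ADD INDEX idx_product_status (product_id, status);
```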
Aggregation Improvements in MySQL
- Indexes: too many indexes is not a good idea
- Partitions and shardings: suitable only for fixed combinations of conditions; complicated and limited usage (joins, transactions, etc.)
- Vertically scaling the database: there's a hard limit for a single server, even when money is not a problem
Aggregation Improvements in MySQL
- Summary tables: many combinations of conditions will be difficult to maintain
- Aggregation results need to be refreshed whenever the source data keeps updating

| id | amount | discount | shipping_fee | total | status | period |
| 1 | 2 | 50 | 0 | 950 | shipped | 2022-11-11 09:00 |
| 2 | 5 | 20 | 60 | 900 | paid | 2022-11-11 10:00 |
| 3 | 11 | 0 | 60 | 840 | canceled | 2022-11-11 11:00 |
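A minimal sketch of such a summary table in MySQL, assuming a hypothetical hourly table named order_hourly_summary; the point is that it has to be rebuilt whenever the source rows change.

```sql
-- Hypothetical pre-aggregated table keyed by hour and product.
CREATE TABLE order_hourly_summary (
  period        DATETIME NOT NULL,
  product_id    INT NOT NULL,
  order_count   INT NOT NULL,
  total_revenue DECIMAL(12, 2) NOT NULL,
  PRIMARY KEY (period, product_id)
);

-- Rebuild one hour of aggregates; must be re-run every time rows in that hour
-- are inserted, updated, or canceled.
REPLACE INTO order_hourly_summary (period, product_id, order_count, total_revenue)
SELECT
  DATE_FORMAT(created_at, '%Y-%m-%d %H:00:00') AS period,
  product_id,
  COUNT(*)   AS order_count,
  SUM(total) AS total_revenue
FROM orders
WHERE status != 'canceled'
  AND created_at >= '2022-11-11 09:00:00'
  AND created_at <  '2022-11-11 10:00:00'
GROUP BY period, product_id;
```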
Aggregation Improvements in MySQL
- The number of rows to aggregate is large (over 10 million)
- Filtering conditions are flexible
- Data rows are mutable
- Near real-time requirement
Row vs Column Oriented Databases
- Row-oriented: values of the same row are kept next to each other in memory
- Optimized for reading and writing a single row of data
- Common in OLTP databases like MySQL, Postgres, etc.

| id | name | country | age |
| 1 | Taylor | USA | 35 |
| 2 | Evan | China | 35 |
Row vs Column Oriented Databases
- Column-oriented: values of the same column are kept next to each other in memory
- Optimized for reads that support analyzing data
- Common in data warehouses like Cassandra, HBase, BigQuery, ClickHouse, etc.

| id | 1 | 2 |
| name | Taylor | Evan |
| country | USA | China |
| age | 35 | 35 |
Row vs Column Oriented Databases
Pros of a columnar database:
- Reading a few columns out of many
- Aggregating queries against large volumes of data
- Column-wise data compression

Cons of a columnar database:
- Poor at data mutations like update or delete
- Not optimized for querying row-specific data
- Lack of unique constraints
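As an illustration of column-wise compression (an assumption about how you might inspect it, not content from the slides), ClickHouse exposes per-column compressed and uncompressed sizes in system.columns:

```sql
-- Compare on-disk (compressed) vs. in-memory (uncompressed) size per column
-- for a hypothetical `orders` table.
SELECT
    name,
    formatReadableSize(data_compressed_bytes)   AS compressed,
    formatReadableSize(data_uncompressed_bytes) AS uncompressed
FROM system.columns
WHERE database = currentDatabase() AND table = 'orders';
```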
Introduction of ClickHouse
- Originally developed as an internal project by Yandex (a Russian search engine)
- The word ClickHouse is a combination of Clickstream and Data WareHouse
- Known for extremely high query speed and performance
- Based on ANSI SQL, which makes it easier to interact with
- Highly active development on GitHub
- Many companies use ClickHouse in production, such as Uber, Slack, Tesla, Tencent, TikTok, and Cloudflare
Introduction of ClickHouse
- Specialized codecs that can make data even more compact (see the sketch below)
- Vectorized query execution that allows use of SIMD CPU instructions
- ClickHouse can leverage all available CPU cores and disks to execute even a single query
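A hedged sketch of per-column codecs (the table and columns are made up for illustration): Delta encoding plus ZSTD on a mostly increasing timestamp column usually compresses far better than the default.

```sql
-- Hypothetical table showing specialized codecs per column.
CREATE TABLE events
(
    id         UInt64,
    created_at DateTime CODEC(Delta, ZSTD),   -- delta-encode, then compress
    revenue    Decimal(12, 2) CODEC(ZSTD)
)
ENGINE = MergeTree
ORDER BY (created_at, id);
```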
Introduction of ClickHouse
- Updates and deletes are heavy operations and are not designed for frequent use
- Limited transactional (ACID) support at this moment (INSERT into one partition of one table of the MergeTree family)
- ClickHouse Keeper is not as stable as the official announcement suggests
Introduction of ClickHouse
- Log engines: TinyLog and StripeLog
- Integration engines: MySQL, Postgres, ODBC, JDBC, MongoDB, Kafka, etc.
- Special engines: File, MaterializedView, URL, Buffer, Memory, etc.
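For example, an integration engine lets ClickHouse query a remote MySQL table directly; the connection details below are placeholders, not values from the slides.

```sql
-- Sketch of the MySQL integration engine: queries against this table are
-- forwarded to the remote MySQL server.
CREATE TABLE mysql_orders
(
    id         UInt64,
    product_id UInt32,
    total      Decimal(12, 2),
    status     String
)
ENGINE = MySQL('mysql-host:3306', 'shop', 'orders', 'reporting_user', 'secret');

SELECT status, count() FROM mysql_orders GROUP BY status;
```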
New data to insert (id 1 has a newer version):
| id | name | age | updated_at |
| 1 | Taylor | 36 | 2022-12-01 00:01:00 |
| 2 | Evan | 35 | 2022-12-01 00:00:00 |

insert

Existing table:
| id | name | age | updated_at |
| 1 | Taylor | 35 | 2022-12-01 00:00:00 |
| 2 | Evan | 35 | 2022-12-01 00:00:00 |
Stored rows after the insert (both versions of id 1 exist):
| id | name | age | updated_at |
| 1 | Taylor | 35 | 2022-12-01 00:00:00 |
| 2 | Evan | 35 | 2022-12-01 00:00:00 |
| 1 | Taylor | 36 | 2022-12-01 00:01:00 |

Expected query result:
| id | name | age | updated_at |
| 1 | Taylor | 36 | 2022-12-01 00:01:00 |
| 2 | Evan | 35 | 2022-12-01 00:00:00 |

? How do we keep only the latest version of each row?
Deduplicate: after deduplication, the stored rows match the expected result.
| id | name | age | updated_at |
| 1 | Taylor | 36 | 2022-12-01 00:01:00 |
| 2 | Evan | 35 | 2022-12-01 00:00:00 |
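One common way to get this behavior is the ReplacingMergeTree engine, which keeps only the row with the largest version column after a merge. The slides don't show the table definition, so the sketch below is an assumption that matches the columns in the example.

```sql
-- Assumed definition of the authors table with updated_at as the version column.
CREATE TABLE authors
(
    id         UInt64,
    name       String,
    age        UInt8,
    updated_at DateTime
)
ENGINE = ReplacingMergeTree(updated_at)
ORDER BY id;

-- Updates are expressed as plain inserts of a newer version.
INSERT INTO authors VALUES (1, 'Taylor', 36, '2022-12-01 00:01:00');
```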
Deduplication in ClickHouse
- Use mutations to delete old versions
- Mutations are extremely heavy operations
- Mutations are asynchronous by default
- There's no transaction guarantee

ALTER TABLE authors DELETE WHERE id = 1
INSERT INTO authors VALUES (1, 'Taylor', 36, '2022-12-01 00:00:00')
Deduplication in ClickHouse
- Merge timing is controlled by ClickHouse
- Need to use FINAL or OPTIMIZE to force merging

Before the merge (stored rows):
| id | name | age | updated_at |
| 1 | Taylor | 35 | 2022-12-01 00:00:00 |
| 2 | Evan | 35 | 2022-12-01 00:00:00 |
| 1 | Taylor | 36 | 2022-12-01 00:01:00 |

After the merge (deduplicated):
| id | name | age | updated_at |
| 1 | Taylor | 36 | 2022-12-01 00:01:00 |
| 2 | Evan | 35 | 2022-12-01 00:00:00 |
Deduplication in ClickHouse
- FINAL deduplicates at query time but hurts query performance
- FINAL used to run on a single thread; it has been improved to run multi-threaded in recent versions

SELECT * FROM authors FINAL
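As a hedged aside (not from the slides), newer versions expose a setting that lets FINAL use more threads:

```sql
-- Assumed example: allow the FINAL merge-on-read to use up to 8 threads.
SET max_final_threads = 8;
SELECT * FROM authors FINAL;
```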
Deduplication in ClickHouse
- Force merging with the OPTIMIZE statement (see the sketch below)
- OPTIMIZE triggers partition merges immediately
- OPTIMIZE uses lots of CPU and disk resources
- Enable optimize_throw_if_noop to get an exception when nothing is merged
- Can be limited to specific partitions

Partition 202211: unchanged
Partition 202212: merge duplicated rows
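A minimal sketch of a targeted OPTIMIZE, assuming the table is partitioned by month so that 202212 is a valid partition:

```sql
-- Fail loudly if nothing gets merged, and only merge the December partition.
SET optimize_throw_if_noop = 1;
OPTIMIZE TABLE authors PARTITION 202212 FINAL;
```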
Deduplication in ClickHouse
- Wrap the deduplication logic in a view for a better query experience
- Duplicated data still exists, but is filtered at query time
- Difficult to optimize performance in some complicated queries

CREATE VIEW authors_view AS
SELECT
    id,
    argMax(name, updated_at) AS name,
    argMax(age, updated_at) AS age,
    max(updated_at) AS updated_at
FROM authors
GROUP BY id
Deduplication in ClickHouse
MaterializedMySQL database engine:
- Experimental feature, not production ready
- Some data types are not supported
- Implicit errors will stop the whole replication process
- Relevant settings: allows_query_when_mysql_lost = true, max_wait_time_when_mysql_unavailable = 10000
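A hedged sketch of setting it up; the host, database, and credentials are placeholders, while the two SETTINGS values come from the slide:

```sql
-- MaterializedMySQL is experimental and must be enabled explicitly.
SET allow_experimental_database_materialized_mysql = 1;

CREATE DATABASE mysql_mirror
ENGINE = MaterializedMySQL('mysql-host:3306', 'shop', 'replica_user', 'secret')
SETTINGS allows_query_when_mysql_lost = true,
         max_wait_time_when_mysql_unavailable = 10000;
```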
Deduplication in ClickHouse
- Replicates INSERT, UPDATE, and DELETE from MySQL
- Relies on row policies and the FINAL modifier
- Reference: https://altinity.com/blog/using-clickhouse-as-an-analytic-extension-for-mysql
Deduplication in ClickHouse
- Lightweight deletes are asynchronous by default
- Still, don't use them as aggressively as on an OLTP system
- Implemented with a bitmap mechanism; deleted data is filtered out at query time
- There's no transaction guarantee between the delete and the insert

DELETE FROM authors WHERE id = 1
INSERT INTO authors VALUES (1, 'Taylor', 36, '2022-12-01 00:00:00')
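In versions where lightweight DELETE is still experimental, it may need to be enabled first; this is an assumption about the deployment, not something shown on the slide:

```sql
-- Assumed example: enable experimental lightweight DELETE before using it.
SET allow_experimental_lightweight_delete = 1;
DELETE FROM authors WHERE id = 1;
```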
Deduplication in ClickHouse
- Lightweight DELETE (current version): what about the transaction guarantee?

DELETE FROM authors WHERE id = 1
INSERT INTO authors VALUES (1, 'Taylor', 36, '2022-12-01 00:00:00')
Alternative for the FINAL Modifier
- To prevent duplicated data from showing up during the merge
- To make it more performant than FINAL
- Sync to a replica table first, then redirect queries to the replica table while the deduplication process runs (a sketch follows)

Original table → sync → Replica table (both end up with the deduplicated rows):
| id | name | age | updated_at |
| 1 | Taylor | 36 | 2022-12-01 00:01:00 |
| 2 | Evan | 35 | 2022-12-01 00:00:00 |
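A hedged sketch of the replica-table approach; the staging table name, the dedup query, and the final table swap are assumptions about how such a sync could be implemented:

```sql
-- Build a deduplicated copy in a staging table with the same structure.
CREATE TABLE authors_replica AS authors;

INSERT INTO authors_replica
SELECT
    id,
    argMax(name, updated_at) AS name,
    argMax(age, updated_at)  AS age,
    max(updated_at)          AS updated_at
FROM authors
GROUP BY id;

-- Atomically swap the tables (requires an Atomic database), so queries never
-- see the half-merged state.
EXCHANGE TABLES authors AND authors_replica;
```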
- Combined delete and insert: similar concept to lightweight delete + insert, but implemented at the table-engine level
- There's an RFC, but it has not been accepted yet: https://github.com/ClickHouse/ClickHouse/issues/41817
Optimizations in ClickHouse: Use a Subquery Rather Than a JOIN

JOIN version:
SELECT orders.user_id, sum(orders.revenue)
FROM orders
JOIN users ON orders.user_id = users.id
WHERE users.rank = 'gold'
GROUP BY orders.user_id
ORDER BY orders.user_id
LIMIT 10

Subquery version:
SELECT orders.user_id, sum(orders.revenue)
FROM orders
WHERE orders.user_id IN (SELECT id FROM users WHERE rank = 'gold')
GROUP BY orders.user_id
ORDER BY orders.user_id
LIMIT 10
Filter with WHERE instead of HAVING:

HAVING version (filter applied after aggregation):
SELECT COUNT(), SUM(revenue), platform_id
FROM orders
GROUP BY platform_id
HAVING finished_at > '2022-01-01'

WHERE version (filter applied before aggregation):
SELECT COUNT(), SUM(revenue), platform_id
FROM orders
WHERE finished_at > '2022-01-01'
GROUP BY platform_id
Optimizations in ClickHouse
- Ordering a large amount of data is extremely slow because of the totally different index design in ClickHouse
- For example, ordering 20 million rows scans the whole table and takes 21 GB of RAM in 2.3 seconds
- Queries combining GROUP BY with ORDER BY are not optimized

SELECT * FROM orders ORDER BY finished_at DESC LIMIT 10
- This pattern is not handled by ClickHouse's query optimizer yet; open issue: https://github.com/ClickHouse/ClickHouse/issues/5329
- The manually optimized query below still scans nearly the whole 20 million rows, but takes 197 MB of RAM and 0.1 seconds

SELECT * FROM orders
WHERE id IN (SELECT id FROM orders ORDER BY settled_at DESC LIMIT 10)
ORDER BY finished_at DESC LIMIT 10