ClickHouse/Doris vs Elasticsearch: Which Comes Out on Top?

Preface

In an earlier article, "ClickHouse vs Doris: Read/Write Performance Comparison", I did a first-pass comparison of ClickHouse and Doris. The data sample was small, though, and the hardware was not pushed to its limits, so I ran a second round of benchmarks.

This round differs from the previous one in three ways:

  1. The Elasticsearch search engine joins the comparison
  2. ClickHouse and Doris both ingest with multiple concurrent writers to reach peak throughput
  3. Several engineers from 飞轮科技 advised on this round, and Doris was tuned accordingly

Environment (hardware configuration is identical to the previous article)

clickhouse cluster

| Node | IP             | Shard | Replica |
|------|----------------|-------|---------|
| ck93 | 192.168.101.93 | 1     | 1       |
| ck94 | 192.168.101.94 | 1     | 2       |
| ck96 | 192.168.101.96 | 2     | 1       |
| ck97 | 192.168.101.97 | 2     | 2       |

doris cluster

| Role | Node | IP             |
|------|------|----------------|
| FE   | ck94 | 192.168.101.94 |
| BE   | ck93 | 192.168.101.93 |
| BE   | ck94 | 192.168.101.94 |
| BE   | ck96 | 192.168.101.96 |
| BE   | ck97 | 192.168.101.97 |
| BE   | ck98 | 192.168.101.98 |

FE configuration:

meta_dir = /data01/doris/fe

http_port = 58030
rpc_port = 59020
query_port = 59030
edit_log_port = 59010

priority_networks = 192.168.101.0/24

enable_single_replica_load = true
max_routine_load_task_concurrent_num = 50
max_routine_load_task_num_per_be = 10

BE configuration:

be_port = 59060
webserver_port = 58040
heartbeat_service_port = 59050
brpc_port = 58060

priority_networks = 192.168.101.0/24
storage_root_path = /data01/doris/be
enable_single_replica_load = true
inverted_index_compaction_enable = true
scan_thread_nice_value = 5

ES cluster

10 nodes in total, two per host.

| Node   | Host | Address              |
|--------|------|----------------------|
| node1  | ck93 | 192.168.101.93:59200 |
| node2  | ck93 | 192.168.101.93:59201 |
| node3  | ck94 | 192.168.101.94:59200 |
| node4  | ck94 | 192.168.101.94:59201 |
| node5  | ck96 | 192.168.101.96:59200 |
| node6  | ck96 | 192.168.101.96:59201 |
| node7  | ck97 | 192.168.101.97:59200 |
| node8  | ck97 | 192.168.101.97:59201 |
| node9  | ck98 | 192.168.101.98:59200 |
| node10 | ck98 | 192.168.101.98:59201 |

Data preparation

Data source

clickhouse-server logs from 4 nodes, roughly 250 million rows per node, 1 billion rows in total. The raw data is 155 GB after Kafka compression.

Table creation statements

clickhouse

-- Local table
create table log_test on cluster abc (
    `@@id` String NOT NULL CODEC(ZSTD(1)),
    `@message` String NOT NULL CODEC(ZSTD(1)) ,
    `@filehashkey` String NOT NULL CODEC(ZSTD(1)) ,
    `@collectiontime` DateTime64(3) CODEC(DoubleDelta, LZ4),
    `@hostname` LowCardinality(String) NOT NULL CODEC(ZSTD(1)) ,
    `@path` String NOT NULL CODEC(ZSTD(1)) ,
    `@rownumber` Int64 NOT NULL ,
    `@seq` Int64 NOT NULL ,
    `@ip` LowCardinality(String) NOT NULL CODEC(ZSTD(1)) ,
    `@topic` LowCardinality(String) NOT NULL CODEC(ZSTD(1)) ,
    `@timestamp` DateTime64(3)  CODEC(DoubleDelta, LZ4),
    INDEX message_idx `@message` TYPE ngrambf_v1(5, 65535, 1, 0) GRANULARITY 1,
    PROJECTION p_cnt (
    SELECT `@ip`,  `@path`, count() GROUP BY `@ip`, `@path`
  ) 
) ENGINE = ReplicatedMergeTree
PARTITION BY toYYYYMMDD(`@timestamp`)
ORDER BY (`@timestamp`, `@ip`, `@path`);

-- Distributed table
create table dist_log_test on cluster abc as log_test engine = Distributed('abc', 'default', 'log_test');

Doris

CREATE TABLE demo.log_test (
    `@@id` CHAR(34) NOT NULL ,
    `@message` STRING NOT NULL ,
    `@filehashkey` CHAR(34) NOT NULL,
    `@collectiontime` DATETIME(3) ,
    `@hostname` VARCHAR(20) NOT NULL ,
    `@path` VARCHAR(256) NOT NULL ,
    `@rownumber` BIGINT NOT NULL ,
    `@seq` BIGINT NOT NULL,
    `@ip` CHAR(16) NOT NULL ,
    `@topic` CHAR(16) NOT NULL,
    `@timestamp` DATETIME(3),

    INDEX idx_message_inv(`@message`) USING INVERTED PROPERTIES(
        "parser" = "unicode",
        "parser_mode" = "fine_grained",
        "support_phrase" = "true"
    )
)
DUPLICATE KEY(`@@id`)
PARTITION BY RANGE(`@timestamp`) ()
DISTRIBUTED BY HASH(`@@id`) BUCKETS AUTO
ROLLUP (
    r1 (`@ip`, `@path`)
)
PROPERTIES (
    "replication_allocation" = "tag.location.default: 2",
    "dynamic_partition.enable" = "true",
    "dynamic_partition.time_unit" = "MONTH",
    "dynamic_partition.start" = "-12",
    "dynamic_partition.create_history_partition" = "true",
    "dynamic_partition.history_partition_num" = "12",
    "dynamic_partition.end" = "3",
    "dynamic_partition.prefix" = "p",
    "compression"="zstd",
    "compaction_policy"="time_series",
    "enable_single_replica_compaction"="true"
);

Ingestion configuration

clickhouse_sinker

Five sinker instances were started to write into ClickHouse concurrently, with the following configuration:

{
  "clickhouse": {
      "cluster": "abc",
      "db": "default",
      "hosts": [
                  ["192.168.101.93", "192.168.101.94"],
                  ["192.168.101.96", "192.168.101.97"]
      ],
      "port": 19000,
      "username": "default",
      "password": "123456",
      "maxOpenConns": 5,
      "retryTimes": 0
  },
  "kafka": {
      "brokers": "192.168.101.94:29092,192.168.101.96:29092,192.168.101.98:29092"
  },
    "tasks": [{
      "name": "log_test",
      "topic": "log_test",
      "earliest": true,
      "consumerGroup": "test_2024001",
      "parser": "fastjson",
      "tableName": "log_test",
      "autoSchema": true,
      "dynamicSchema":{
        "enable": false
      },
      "prometheusSchema": false,
      "bufferSize": 1000000,
      "flushInterval": 10
   }],
  "logLevel": "info"
}
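As a quick sanity check on this configuration: assuming clickhouse_sinker flushes a batch when either bufferSize rows accumulate or flushInterval seconds elapse (whichever comes first), the peak rate of 1205k rows/s measured below implies that batch size, not the timer, drives the flushes. A minimal sketch of that arithmetic:

```python
# Back-of-the-envelope flush cadence for the sinker setup above:
# 5 sinker instances, bufferSize = 1,000,000 rows, flushInterval = 10 s,
# and the peak aggregate write speed of ~1,205,000 rows/s measured later.
sinkers = 5
buffer_size = 1_000_000           # rows per batch when the buffer fills first
flush_interval = 10               # seconds, upper bound between flushes
observed_total_rate = 1_205_000   # rows/s across all sinkers

per_sinker_rate = observed_total_rate / sinkers  # rows/s per instance
seconds_to_fill = buffer_size / per_sinker_rate  # time to fill one buffer

# The buffer fills in about 4 s, well inside the 10 s interval, so each
# INSERT carries a full ~1M-row batch.
print(f"{per_sinker_rate:.0f} rows/s per sinker; buffer fills in {seconds_to_fill:.1f} s")
```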

routine_load

CREATE ROUTINE LOAD demo.log_test_10 ON log_test
COLUMNS(`@message`,`@@id`,`@filehashkey`,`@collectiontime`,`@hostname`,`@path`,`@rownumber`,`@seq`,`@ip`,`@topic`,`@timestamp`)
PROPERTIES
(
    "desired_concurrent_number"="10",
    "max_error_number" = "500",
    "max_batch_interval" = "20",
    "max_batch_rows" = "1000000",
    "max_batch_size" = "536870912",
    "strict_mode" = "false",
    "format" = "json"
)
FROM KAFKA
(
    "kafka_broker_list" = "192.168.101.94:29092,192.168.101.96:29092,192.168.101.98:29092",
    "kafka_topic" = "log_test",
    "kafka_partitions" = "0,1,2,3,4,5",
    "kafka_offsets" = "0,0,0,0,0,0"
);

ElasticSearch settings and mapping

{
    "order": 200,
    "index_patterns": [
        "estest_chenyc_log_*"
    ],
    "settings": {
        "index": {
            "codec": "best_compression",
            "refresh_interval": "10s",
            "number_of_shards": "10",
            "translog": {
                "sync_interval": "60s",
                "durability": "async"
            },
            "merge": {
                "scheduler": {
                    "max_thread_count": "10"
                },
                "policy": {
                    "max_merged_segment": "5g"
                }
            },
            "unassigned": {
                "node_left": {
                    "delayed_timeout": "15m"
                }
            },
            "number_of_replicas": "1"
        }
    },
    "mappings": {
        "dynamic": true,
        "dynamic_templates": [
            {
                "message_field": {
                    "path_match": "@message",
                    "mapping": {
                        "norms": false,
                        "type": "text"
                    },
                    "match_mapping_type": "string"
                }
            },
            {
                "string_fields": {
                    "mapping": {
                        "type": "keyword"
                    },
                    "match_mapping_type": "string",
                    "match": "*"
                }
            }
        ],
        "properties": {
            "@message": {
                "norms": false,
                "type": "text"
            },
            "@seq": {
                "type": "long"
            },
            "@timestamp": {
                "type": "date"
            },
            "@topic": {
                "type": "keyword"
            },
            "@filehashkey": {
                "type": "keyword"
            },
            "@@id": {
                "type": "keyword"
            },
            "@rownumber": {
                "type": "long"
            },
            "@ip": {
                "type": "keyword"
            },
            "@collectiontime": {
                "type": "date"
            },
            "@hostname": {
                "type": "keyword"
            },
            "@path": {
                "type": "keyword"
            }
        }
    },
    "aliases": {}
}

Resource allocation:

(To get the most out of ES's write throughput, a fresh copy of the data was generated with the Kafka topic partition count set to 30.)

Write performance

clickhouse

| Concurrency | Resource usage (sinker) | Resource usage (clickhouse) | Total rows | Write speed | Raw size | Compressed size (incl. replica) |
|---|---|---|---|---|---|---|
| 5 | 15 vCPU / 25 GB | 5 vCPU / 6 GB | 1 billion | 1205k rows/s | 155 GB | 95 GB |
| 5 (with large queries) | 10 vCPU / 16 GB | 37 vCPU / 8 GB | 1 billion | 890k rows/s | 155 GB | 95 GB |

Doris

| Concurrency | Resource usage (BE) | Total rows | Write speed | Raw size | Compressed size (incl. replica) |
|---|---|---|---|---|---|
| 5 | 2-8 vCPU / 9 GB | 1 billion | 532k rows/s | 155 GB | 161 GB |
| 10 | 2-11 vCPU / 10 GB | 1 billion | 559k rows/s | 155 GB | 161 GB |
| 15 | 5-15 vCPU / 10 GB | 1 billion | 675k rows/s | 155 GB | 161 GB |
| 20 | 3-12 vCPU / 9 GB | 1 billion | 609k rows/s | 155 GB | 161 GB |
| 15 (with large queries) | 16-29 vCPU / 12 GB | 1 billion | 490k rows/s | 155 GB | 161 GB |

CPU load differed from node to node: some nodes (presumably the ones receiving the writes) ran fairly hot, while the rest stayed at a relatively low load, comparable to ClickHouse.

ElasticSearch

Data was written via a Flink storage job launched from 擎创科技's internal log-analysis product.

| Concurrency | ES node resource usage | Total rows | Write speed | Raw size | Compressed size (incl. replica) |
|---|---|---|---|---|---|
| 30 | 7 vCPU / 38 GB | 1 billion | 106k rows/s | 155 GB | 281 GB |
| 30 (with large queries) | 11 vCPU / 39 GB | 1 billion | 75k rows/s | 155 GB | 281 GB |

Write-performance conclusions

The peak write-speed ratio ES : ClickHouse : Doris is roughly 1 : 12 : 6 (106k vs. 1205k vs. 675k rows/s): Doris writes about 6x faster than ES, and ClickHouse about 12x faster.

All data in this test was stored with two replicas. For the same data, the storage ratio ES : ClickHouse : Doris works out to roughly 1 : 0.34 : 0.57. In other words, Doris needs just over half the storage ES does, and ClickHouse only about one third.
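These headline ratios can be reproduced directly from the tables above; a minimal check using only the peak write speeds and the replicated storage sizes:

```python
# Peak write speed (k rows/s) and compressed size incl. replica (GB),
# copied from the write-performance tables above.
rate = {"es": 106, "ck": 1205, "doris": 675}
size = {"es": 281, "ck": 95, "doris": 161}

speed_vs_es = {k: v / rate["es"] for k, v in rate.items()}
storage_vs_es = {k: v / size["es"] for k, v in size.items()}

# ES : CK : Doris speed is about 1 : 11.4 : 6.4 (rounded to 1 : 12 : 6 in
# the text); the storage ratio is about 1 : 0.34 : 0.57.
print({k: round(v, 1) for k, v in speed_vs_es.items()})
print({k: round(v, 2) for k, v in storage_vs_es.items()})
```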

Query performance

The same query scenarios as last time were used:

| Scenario | Description |
|---|---|
| Scenario 1 | Count of path per ip, grouped by ip and path |
| Scenario 2 | Count of Error log lines per ip |
| Scenario 3 | Count of lines containing both Debug and query_id cdb56920-2d39-4e6d-be99-dd6ef24cc66a |
| Scenario 4 | Count of lines containing both Trace and gauge.apm_service_span |
| Scenario 5 | Detail rows of Error logs containing READ_ONLY |
| Scenario 6 | Detail rows containing the keyword "上海" |

Query statements:

| Scenario | Database | Statement |
|---|---|---|
| Scenario 1 | clickhouse | SELECT @ip, @path, count() FROM dist_log_test GROUP BY @ip, @path |
| Scenario 1 | Doris | SELECT @ip, @path, count() FROM log_test GROUP BY @ip, @path |
| Scenario 1 | ElasticSearch | \| stats count by @ip,@path |
| Scenario 2 | clickhouse | SELECT @ip, count() FROM dist_log_test WHERE @message LIKE '%Error%' GROUP BY @ip |
| Scenario 2 | Doris | SELECT @ip, count() FROM log_test WHERE @message MATCH_ANY 'Error' GROUP BY @ip |
| Scenario 2 | ElasticSearch | Error \| stats count by @ip |
| Scenario 3 | clickhouse | SELECT count() FROM dist_log_test WHERE @message LIKE '%Debug%' AND @message LIKE '%cdb56920-2d39-4e6d-be99-dd6ef24cc66a%' |
| Scenario 3 | Doris | SELECT count() FROM log_test WHERE @message MATCH_ALL 'Debug cdb56920-2d39-4e6d-be99-dd6ef24cc66a' |
| Scenario 3 | ElasticSearch | Debug AND cdb56920-2d39-4e6d-be99-dd6ef24cc66a \| stats by count |
| Scenario 4 | clickhouse | SELECT count() FROM dist_log_test WHERE @message LIKE '%Trace%' AND @message LIKE '%gauge.apm_service_span%' |
| Scenario 4 | Doris | SELECT count() FROM log_test WHERE @message MATCH_ALL 'Trace gauge.apm_service_span' |
| Scenario 4 | ElasticSearch | Trace AND gauge.apm_service_span \| stats by count |
| Scenario 5 | clickhouse | SELECT * FROM dist_log_test WHERE @message LIKE '%Error%' AND @message LIKE '%READ_ONLY%' |
| Scenario 5 | Doris | SELECT * FROM log_test WHERE @message MATCH_ALL 'Error READ_ONLY' |
| Scenario 5 | ElasticSearch | Error AND READ_ONLY |
| Scenario 6 | clickhouse | SELECT * FROM dist_log_test WHERE @message LIKE '%上海%' |
| Scenario 6 | Doris | SELECT * FROM log_test WHERE @message MATCH_ANY '上海' |
| Scenario 6 | ElasticSearch | 上海 |
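The ES statements above are written in the log product's pipe syntax rather than raw Elasticsearch DSL. Queried directly, scenario 3 could be expressed with a `query_string` query against the `_count` endpoint. This is only a sketch; quoting the query_id as a phrase (so its hyphen-separated tokens stay together) is my assumption:

```python
import json

# Raw-DSL sketch of scenario 3: count documents whose @message contains
# both "Debug" and the query_id cdb56920-2d39-4e6d-be99-dd6ef24cc66a.
body = {
    "query": {
        "query_string": {
            "query": 'Debug AND "cdb56920-2d39-4e6d-be99-dd6ef24cc66a"',
            "default_field": "@message",
        }
    }
}

# POST estest_chenyc_log_*/_count with this body returns {"count": ...}.
print(json.dumps(body, indent=2))
```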

Query results (seconds; "no load" means no concurrent writes, "writing" means under concurrent ingestion):

| Database | Mode | Scenario 1 | Scenario 2 | Scenario 3 | Scenario 4 | Scenario 5 | Scenario 6 |
|---|---|---|---|---|---|---|---|
| clickhouse | no load | 0.064 | 16.284 | 1.688 | 12.879 | 16.065 | 14.694 |
| clickhouse | writing | 0.181 | 22.156 | 3.994 | 35.167 | 33.792 | 31.244 |
| Doris | no load | 7.08 | 2.56 | 0.09 | 0.48 | 0.33 | 0.48 |
| Doris | writing | 9.78 | 3.28 | 0.37 | 0.75 | 0.37 | 0.56 |
| elasticsearch | no load | 9.09 | 1.54 | 0.556 | 0.296 | 0.248 | 0.49 |
| elasticsearch | writing | 9.25 | 6.22 | 1.68 | 3.36 | 0.828 | 1.09 |
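The write-load penalty in these results can be quantified as the per-scenario ratio of the "writing" time to the "no load" time; a quick derivation from the published numbers:

```python
# Query latency (seconds) per scenario, from the results table:
# without concurrent writes vs. under concurrent ingestion.
no_load = {
    "clickhouse": [0.064, 16.284, 1.688, 12.879, 16.065, 14.694],
    "doris":      [7.08, 2.56, 0.09, 0.48, 0.33, 0.48],
    "es":         [9.09, 1.54, 0.556, 0.296, 0.248, 0.49],
}
with_load = {
    "clickhouse": [0.181, 22.156, 3.994, 35.167, 33.792, 31.244],
    "doris":      [9.78, 3.28, 0.37, 0.75, 0.37, 0.56],
    "es":         [9.25, 6.22, 1.68, 3.36, 0.828, 1.09],
}

# Slowdown factor per scenario: writing time divided by no-load time.
slowdown = {db: [w / n for n, w in zip(no_load[db], with_load[db])]
            for db in no_load}
for db, ratios in slowdown.items():
    print(db, "worst", round(max(ratios), 2), "mean", round(sum(ratios) / len(ratios), 2))
```

ClickHouse's worst case is about 2.8x (scenario 1), consistent with the "nearly 3x" observation below. Interestingly, ES shows the largest relative degradation (over 11x in scenario 4), though its absolute times stay small.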

Conclusions

Of the six scenarios, ClickHouse is clearly fastest only in scenario 1, thanks to the pre-aggregation its projection provides. In every other scenario it trails both Doris and ES, falling behind by nearly 10x or more in the full-text search scenarios.

Thanks to their full-text indexes, Doris and ES are evenly matched on fuzzy-match queries, each winning some scenarios, and both are far faster than clickhouse.

Under heavy concurrent writes, all three engines lose some query performance. Doris and ES start from such low latencies that the impact is minor in practice. ClickHouse suffers most noticeably: on an already slow baseline, its query times grow by up to nearly 3x.

Overall conclusions

  • Write performance: clickhouse > doris >> es
  • Data compression: clickhouse > doris > es
  • Query performance:
    • Full-text search: Doris ≈ es > clickhouse
    • Aggregation: clickhouse > doris ≈ es

With enough concurrency, clickhouse comfortably sustains writes of 1,000,000+ rows per second. Doris manages roughly half that, and its throughput does not scale with concurrency (too many concurrent tasks actually made writes slower), topping out around 600k+ rows per second. ES is the slowest of the three, peaking at only about 100k rows per second.

When large queries run concurrently, write throughput drops for all three engines, by roughly 25-30% (clickhouse 1205k to 890k, Doris 675k to 490k, ES 106k to 75k rows/s).

On compression, clickhouse and Doris both do very well, while ES not only fails to shrink the data but actually inflates it. Doris is nearly 2x more storage-efficient than ES, and clickhouse nearly 3x.

On queries, clickhouse is clearly superior in the aggregation scenario, where Doris and ES are comparatively weak.

In fuzzy-match scenarios, Doris and ES perform comparably, and both are clearly faster than clickhouse.


