<This guide uses a Red Hat 7.5 deployment as the example; other systems can follow the same steps.>
<A standalone deployment does not need ZooKeeper; it is required only in cluster mode.>
ClickHouse Cluster Deployment
Node Planning
Deployment plan
IP | hostname | jdk | zookeeper | clickhouse |
---|---|---|---|---|
172.17.1.9 | testnode1 | jdk | zookeeper | clickhouse |
172.17.1.10 | testnode2 | jdk | zookeeper | clickhouse |
172.17.1.11 | testnode3 | jdk | zookeeper | clickhouse |
Note: the number of ZooKeeper nodes should be odd, so that a majority can always elect a leader.
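The odd-number advice follows from majority voting: an ensemble of N nodes needs floor(N/2)+1 votes to elect a leader, so going from 3 to 4 nodes adds a machine without adding failure tolerance. A minimal sketch of the arithmetic:

```shell
# Majority quorum for an N-node ensemble: floor(N/2) + 1.
quorum() { echo $(( $1 / 2 + 1 )); }
quorum 3   # -> 2: a 3-node ensemble tolerates 1 failure
quorum 4   # -> 3: a 4-node ensemble still tolerates only 1 failure, hence prefer odd N
```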
Deploy the JDK Cluster
Upload the Package
【Official download】
https://www.oracle.com/java/technologies/javase/javase-jdk8-downloads.html#license-lightbox
【If a copy already exists locally, just copy it over】
【Install path: ${user home}/java】【Operating node: 172.17.1.9】
Upload the package to the user home directory and extract it there.
Fix Permissions
【Both bin/ and jre/bin need their permissions changed】
【Install path: ${user home}/java】【Operating node: 172.17.1.9】
$ chmod 755 $HOME/java/bin/*;chmod 755 $HOME/java/jre/bin/*
Sync the Package
【Replace with your real IPs】
$ for node in 172.17.1.{9..11};do scp -r $HOME/java $node:$HOME/;done
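All the ssh/scp loops in this guide assume passwordless SSH from the operating node to every cluster node. A minimal sketch of setting that up; the temp key path is for illustration only, and on a real node you would use the default ~/.ssh/id_rsa:

```shell
# Generate a key pair without a passphrase (temp path used here for illustration).
KEY="${KEY:-$(mktemp -u)}"
ssh-keygen -t rsa -N "" -f "$KEY" -q
# Push the public key to every node (run this on the real cluster):
# for node in 172.17.1.{9..11}; do ssh-copy-id -i "${KEY}.pub" "$node"; done
ls "${KEY}.pub"
```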
Environment Variables
Every server running the JDK must be configured. If the settings do not take effect after source, manage the JDK with alternatives instead.
【Option 1: scripted configuration】
【Replace with your real IPs】
【Replace the value after the first "JAVA_HOME=" with the actual install path】
$ for NODE in 172.17.1.{9..11};do echo "【 Configure NODE 】: ${NODE}";ssh ${NODE} "echo -e '\n#jdk#\nexport JAVA_HOME=/home/java\nexport JRE_HOME=\$JAVA_HOME/jre\nexport CLASS_PATH=.:\$JAVA_HOME/lib/dt.jar:\$JAVA_HOME/lib/tools.jar\nexport PATH=\$JAVA_HOME/bin:\$PATH' >> $HOME/.bashrc;source $HOME/.bashrc";done
【Option 2: manual configuration】
$ vim ~/.bashrc
#jdk#
export JAVA_HOME=/home/java
export JRE_HOME=$JAVA_HOME/jre
export CLASS_PATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$JAVA_HOME/bin:$PATH
【Apply the configuration】
$ source ~/.bashrc
Verify the Installation
【Replace with your real IPs】
$ NUM=0;for NODE in 172.17.1.{9..11};do echo -e "\n【${NODE}】 JAVA VERSION INFO:";ssh ${NODE} "java -version";let NUM++;done;echo -e "\n【TOTAL】 : ${NUM} NODES !"
Deploy the ZooKeeper Cluster
Download the Package
【Official link】
https://zookeeper.apache.org/releases.html
【Extraction example: download the version you need; the path after -C is the extraction destination】
# tar zxvf zookeeper-3.4.14.tar.gz -C /home/
Note: the rest of this guide assumes the install lives at /home/zookeeper, so rename the extracted directory:
# mv /home/zookeeper-3.4.14 /home/zookeeper
Edit the Configuration
【Configure zoo.cfg (copy conf/zoo_sample.cfg to conf/zoo.cfg if it does not exist)】
# vim /home/zookeeper/conf/zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
maxSessionTimeout=180000
minSessionTimeout=180000
maxClientCnxns=1000
zookeeper.leaderServes=yes
zookeeper.service.jute.maxbuffer=1024KB
dataDir=/home/zookeeper/data
dataLogDir=/home/zookeeper/data/logs
clientPort=2181
autopurge.snapRetainCount=3
autopurge.purgeInterval=1
isObserver=false
zookeeper.oom_heap_dump_enabled=true
zookeeper.oom_heap_dump_dir=/home/zookeeper/data
zookeeper.java_additional_options=-XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:-CMSConcurrentMTEnabled -XX:CMSInitiatingOccupancyFraction=70 -XX:+CMSParallelRemarkEnabled
zookeeper.oom_sigkill_enabled=true
zookeeper.java_heapsize=1024MB
electionPort=2888
quorumPort=3888
zookeeper.sasl=false
observers=
server.1=172.17.1.9:2888:3888
server.2=172.17.1.10:2888:3888
server.3=172.17.1.11:2888:3888
Environment Variables
【Manual configuration】
$ vim ~/.bashrc
@ Add the following
#zookeeper#
export ZOOKEEPER_HOME=/home/zookeeper
export ZOO_LOG_DIR=/home/zookeeper/logs
@ Apply
$ source ~/.bashrc
Service Management
Create management scripts so the whole cluster can be started, stopped, and checked in one step.
【Cluster node list】
# mkdir -p /home/zookeeper/run
# vim /home/zookeeper/run/conf.ini
ZKNODES=(172.17.1.{9..11})
【Start script】
# vim /home/zookeeper/run/startall.sh
#!/bin/bash
# Loading Configuration
. ./conf.ini
echo -e "\n【 ====== START NODE SERVER ====== 】 "
for NODE in ${ZKNODES[@]};do
ZOOKEEPER_HOME=$(ssh ${NODE} "echo \${ZOOKEEPER_HOME}")
if [ -z "${ZOOKEEPER_HOME}" ];then
echo "ZOOKEEPER_HOME IS NULL."
exit 99
fi
echo -e "【NODE】: 【${NODE}】 "
ssh ${NODE} "${ZOOKEEPER_HOME}/bin/zkServer.sh start 2>/dev/null"
done
sleep 3
echo -e "\n【 ====== CHECK NODE STATUS ====== 】 "
for NODE in ${ZKNODES[@]};do
ZOOKEEPER_HOME=$(ssh ${NODE} "echo \${ZOOKEEPER_HOME}")
status=$(ssh ${NODE} "${ZOOKEEPER_HOME}/bin/zkServer.sh status 2>/dev/null"|grep "Mode"|awk -F: '{print $NF}'|tr -d '[:space:]')
if [[ -n ${status} && ${status} =~ ^(leader|follower)$ ]];then
echo -e "【NODE】: ${NODE} \t 【STATUS】: ${status}"
else
echo -e "【NODE】: ${NODE} \t 【STATUS】: ERROR"
fi
done
echo -e "【 ====== CHECK STATUS END ====== 】\n"
【Stop script】
# vim /home/zookeeper/run/stopall.sh
#!/bin/bash
# Loading Configuration
. ./conf.ini
echo -e "\n【 ====== STOP NODE SERVER ====== 】 "
for NODE in ${ZKNODES[@]};do
ZOOKEEPER_HOME=$(ssh ${NODE} "echo \${ZOOKEEPER_HOME}")
if [ -z "${ZOOKEEPER_HOME}" ];then
echo "ZOOKEEPER_HOME IS NULL."
exit 99
fi
echo -e "【NODE】: 【${NODE}】 "
ssh ${NODE} "${ZOOKEEPER_HOME}/bin/zkServer.sh stop 2>/dev/null"
done
sleep 3
echo -e "\n【 ====== CHECK NODE STATUS ====== 】 "
for NODE in ${ZKNODES[@]};do
ZOOKEEPER_HOME=$(ssh ${NODE} "echo \${ZOOKEEPER_HOME}")
status=$(ssh ${NODE} "${ZOOKEEPER_HOME}/bin/zkServer.sh status 2>/dev/null"|grep "Mode"|awk -F: '{print $NF}'|tr -d '[:space:]')
if [[ -n ${status} && ${status} =~ ^(leader|follower)$ ]];then
echo -e "【NODE】: ${NODE} \t 【STATUS】: ${status}"
else
echo -e "【NODE】: ${NODE} \t 【STATUS】: STOPPED"
fi
done
echo -e "【 ====== CHECK STATUS END ====== 】\n"
【Status script】
# vim /home/zookeeper/run/statusall.sh
#!/bin/bash
# Loading Configuration
. ./conf.ini
echo -e "\n【 ====== CHECK NODE STATUS ====== 】 "
for NODE in ${ZKNODES[@]};do
ZOOKEEPER_HOME=$(ssh ${NODE} "echo \${ZOOKEEPER_HOME}")
if [ -z "${ZOOKEEPER_HOME}" ];then
echo "ZOOKEEPER_HOME IS NULL."
exit 99
fi
status=$(ssh ${NODE} "${ZOOKEEPER_HOME}/bin/zkServer.sh status 2>/dev/null"|grep "Mode"|awk -F: '{print $NF}'|tr -d '[:space:]')
if [[ -n ${status} && ${status} =~ ^(leader|follower)$ ]];then
echo -e "【NODE】: ${NODE} \t 【STATUS】: ${status}"
else
echo -e "【NODE】: ${NODE} \t 【STATUS】: ERROR"
fi
done
echo -e "【 ====== CHECK STATUS END ====== 】\n"
Sync the Package
【Sync across the cluster】
# for NODE in 172.17.1.{9..11};do scp -r /home/zookeeper $NODE:/home/;done
Configure myid
【Set each node's myid according to zoo.cfg】
Using the server-ID-to-IP mapping of the server.N lines in zoo.cfg, write the matching ID into /home/zookeeper/data/myid on each of the three nodes (ZooKeeper reads myid from dataDir, not from conf/):
For example:
on 172.17.1.9, data/myid contains 1;
on 172.17.1.10, data/myid contains 2;
on 172.17.1.11, data/myid contains 3;
Note: if this ZooKeeper install was copied from another node, delete everything under /home/zookeeper/data/ except myid before configuring the cluster.
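Because the server.N lines in zoo.cfg already encode the ID-to-IP mapping, each node's myid value can be derived rather than hand-typed. A sketch, where myid_for is a hypothetical helper and the sample fragment mirrors the cluster definition above:

```shell
# Build a sample zoo.cfg fragment mirroring the cluster definition above.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
server.1=172.17.1.9:2888:3888
server.2=172.17.1.10:2888:3888
server.3=172.17.1.11:2888:3888
EOF
# myid_for <ip>: print the server ID whose zoo.cfg entry matches the IP.
myid_for() { sed -n "s/^server\.\([0-9]*\)=${1}:.*/\1/p" "$cfg"; }
myid_for 172.17.1.10    # -> 2; on that node: echo 2 > /home/zookeeper/data/myid
```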
Start the Cluster
【Start the cluster】
# cd /home/zookeeper/run;./startall.sh
【Check the cluster service status】
# cd /home/zookeeper/run;./statusall.sh
Test the Connection
# /home/zookeeper/bin/zkCli.sh -server 172.17.1.9:2181
[zk: 172.17.1.9:2181(CONNECTED) 0] ls /
[zookeeper]
Note: if "ls /" returns normally, as above, the cluster is healthy.
Deploy the ClickHouse Cluster
Download the Packages
【Download the ClickHouse RPMs for Linux; the files are:】
https://repo.clickhouse.tech/rpm/stable/x86_64/clickhouse-client-20.5.2.7-2.noarch.rpm
https://repo.clickhouse.tech/rpm/stable/x86_64/clickhouse-common-static-20.5.2.7-2.x86_64.rpm
https://repo.clickhouse.tech/rpm/stable/x86_64/clickhouse-server-20.5.2.7-2.noarch.rpm
https://repo.clickhouse.tech/rpm/stable/x86_64/clickhouse-server-common-19.4.0-2.noarch.rpm
Install the Nodes
【Set up every node in standalone mode first, then switch to cluster mode】
【RPM install order】
# rpm -ivh clickhouse-common-static-20.5.2.7-2.x86_64.rpm
# rpm -ivh clickhouse-server-common-19.4.0-2.noarch.rpm
# rpm -ivh clickhouse-server-20.5.2.7-2.noarch.rpm
# rpm -ivh clickhouse-client-20.5.2.7-2.noarch.rpm
Node Configuration
【clickhouse-server and clickhouse-client configuration directories】
# ls /etc/clickhouse-server/
config.d config.xml preprocessed users.d users.xml
# ls /etc/clickhouse-client/
config.xml conf.d
Storage Configuration
【Set the data directories in /etc/clickhouse-server/config.xml】
# vim /etc/clickhouse-server/config.xml
<!-- Path to data directory, with trailing slash. -->
<path>/home/clickhouse/data/clickhouse/</path>
<!-- Path to temporary data for processing hard queries. -->
<tmp_path>/home/clickhouse/data/clickhouse/tmp/</tmp_path>
Note: the disk must have enough space. Adjust the remaining settings to your environment, and check that the configured ports are not already in use.
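The directories named in config.xml can be created up front to avoid first-start permission surprises (the init script also chowns the data path itself, per the startup log later in this guide). A sketch; CH_DATA defaults to a temp dir here purely for illustration, whereas a real node would use /home/clickhouse/data/clickhouse and run as root:

```shell
# Pre-create the data and tmp paths from config.xml (CH_DATA is an assumption;
# set it to /home/clickhouse/data/clickhouse on the real node).
CH_DATA="${CH_DATA:-$(mktemp -d)/clickhouse}"
mkdir -p "${CH_DATA}/tmp"
# chown -R clickhouse:clickhouse "${CH_DATA}"   # uncomment on the real node
df -h "${CH_DATA}"                              # confirm the disk has enough space
```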
Check Dependencies
【Check whether libicu is installed】
# rpm -qa|grep libicu
libicu-50.1.2-15.el7.x86_64
Note: if it is missing, the server will fail to start.
【With Internet access, install it like this】
# yum install -y libicu
【Without Internet access, install offline】
Download: http://rpmfind.net/linux/centos/7.8.2003/updates/x86_64/Packages/libicu-50.2-4.el7_7.x86_64.rpm (as needed)
http://rpmfind.net/linux/centos/7.8.2003/updates/x86_64/Packages/libicu-devel-50.2-4.el7_7.x86_64.rpm (as needed)
Install:
# rpm -ivh libicu-50.2-4.el7_7.x86_64.rpm
# rpm -ivh libicu-devel-50.2-4.el7_7.x86_64.rpm
Service Configuration
【Configure the listen address】【Uncomment this line and set it to the local node's IP】
# vim /etc/clickhouse-server/config.xml
<listen_host>172.17.1.10</listen_host>
【Configure the port】【Customizable】
# vim /etc/clickhouse-server/config.xml
<tcp_port>9001</tcp_port>
【Configure logging】
<logger>
<level>trace</level>
<log>/var/log/clickhouse-server/clickhouse-server.log</log>
<errorlog>/var/log/clickhouse-server/clickhouse-server.err.log</errorlog>
<size>1000M</size>
<count>10</count>
</logger>
Start the Node
# /etc/init.d/clickhouse-server start
The following output indicates a successful start:
Start clickhouse-server service: Path to data directory in /etc/clickhouse-server/config.xml: /home/clickhouse/data/clickhouse/
Changing owner of [/home/clickhouse/data/clickhouse/] to [clickhouse:clickhouse]
DONE
Stop the Node
# /etc/init.d/clickhouse-server stop
Connect to a Node
【Using 172.17.1.9 as the example】
# clickhouse-client --host 172.17.1.9 --port 9001
A successful login prints a banner like:
ClickHouse client version 20.5.2.7 (official build).
Connecting to 172.17.1.9:9001 as user default.
Connected to ClickHouse server version 20.5.2 revision 54435.
【Test SQL】
testnode1 :) select 1;
SELECT 1
┌─1─┐
│ 1 │
└───┘
1 rows in set. Elapsed: 0.004 sec.
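The same smoke test can be run non-interactively on every node with --query. The loop below only assembles the per-node commands so the node list and port can be checked before anything executes; run each printed line once the nodes are up:

```shell
# Build the per-node smoke-test commands (port 9001 matches the config above).
cmds=()
for NODE in 172.17.1.{9..11}; do
  cmds+=("clickhouse-client --host ${NODE} --port 9001 --query 'SELECT 1'")
done
printf '%s\n' "${cmds[@]}"   # inspect, then run each line on a deployed cluster
```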
Configure the Cluster
【Configure the cluster only after every node has been set up standalone】
Edit the Configuration
【Template】
# vim /etc/metrika.xml
<yandex>
<clickhouse_remote_servers>
<perftest_3shards_1replicas>
<shard>
<internal_replication>true</internal_replication>
<replica>
<host>172.17.1.9</host>
<port>9001</port>
</replica>
</shard>
<shard>
<internal_replication>true</internal_replication>
<replica>
<host>172.17.1.10</host>
<port>9001</port>
</replica>
</shard>
<shard>
<internal_replication>true</internal_replication>
<replica>
<host>172.17.1.11</host>
<port>9001</port>
</replica>
</shard>
</perftest_3shards_1replicas>
</clickhouse_remote_servers>
<!-- ZooKeeper settings -->
<zookeeper-servers>
<node index="1">
<host>172.17.1.9</host>
<port>2181</port>
</node>
<node index="2">
<host>172.17.1.10</host>
<port>2181</port>
</node>
<node index="3">
<host>172.17.1.11</host>
<port>2181</port>
</node>
</zookeeper-servers>
<macros>
<replica>172.17.1.9</replica>
</macros>
<networks>
<ip>::/0</ip>
</networks>
<clickhouse_compression>
<case>
<min_part_size>10000000000</min_part_size>
<min_part_size_ratio>0.01</min_part_size_ratio>
<method>lz4</method>
</case>
</clickhouse_compression>
</yandex>
【Note】
<!-- Set this to the current node's own IP/hostname; the value differs on every node -->
<macros>
<replica>172.17.1.9</replica>
</macros>
Distribute the Configuration
【Distribute /etc/metrika.xml across the cluster, then set the IP under macros on each node】
# for node in 172.17.1.{9..11};do scp /etc/metrika.xml $node:/etc/;done
【Edit the configuration on 172.17.1.10】
# vim /etc/metrika.xml
<macros>
<replica>172.17.1.10</replica>
</macros>
【Edit the configuration on 172.17.1.11】
# vim /etc/metrika.xml
<macros>
<replica>172.17.1.11</replica>
</macros>
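The per-node macros edits above can also be scripted: after distributing the file, stamp each node's own IP into <macros><replica>. The sketch below demonstrates the sed substitution on a local temp copy; the commented ssh loop shows how it might be applied cluster-wide:

```shell
# Demonstrate the substitution on a local copy of the macros fragment.
f=$(mktemp)
printf '<macros>\n    <replica>172.17.1.9</replica>\n</macros>\n' > "$f"
NODE_IP=172.17.1.10
sed -i "s|<replica>[^<]*</replica>|<replica>${NODE_IP}</replica>|" "$f"
grep -o '<replica>[^<]*</replica>' "$f"
# On the real cluster (after scp), stamp each node's own IP:
# for node in 172.17.1.{9..11}; do ssh "$node" "sed -i 's|<replica>[^<]*</replica>|<replica>${node}</replica>|' /etc/metrika.xml"; done
```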
Restart the Service
【Restart the service on all three cluster nodes】
# /etc/init.d/clickhouse-server restart
Verify the Cluster
【Connect with the client】【172.17.1.9】
# clickhouse-client --host 172.17.1.9 --port 9001
testnode1 :) select * from system.clusters;
┌─cluster─────────────────────┬─shard_num─┬─shard_weight─┬─replica_num─┬─host_name───┬─host_address─┬─port─┬─is_local─┬─user────┬─default_database─┬─errors_count─┬─estimated_recovery_time─┐
│ perftest_3shards_1replicas │ 1 │ 1 │ 1 │ 172.17.1.9 │ 172.17.1.9 │ 9001 │ 1 │ default │ │ 0 │ 0 │
│ perftest_3shards_1replicas │ 2 │ 1 │ 1 │ 172.17.1.10 │ 172.17.1.10 │ 9001 │ 0 │ default │ │ 0 │ 0 │
│ perftest_3shards_1replicas │ 3 │ 1 │ 1 │ 172.17.1.11 │ 172.17.1.11 │ 9001 │ 0 │ default │ │ 0 │ 0 │
└─────────────────────────────┴───────────┴──────────────┴─────────────┴─────────────┴──────────────┴──────┴──────────┴─────────┴──────────────────┴──────────────┴─────────────────────────┘
【Connect with the client】【172.17.1.10】
# clickhouse-client --host 172.17.1.10 --port 9001
testnode2 :) select * from system.clusters;
┌─cluster─────────────────────┬─shard_num─┬─shard_weight─┬─replica_num─┬─host_name───┬─host_address─┬─port─┬─is_local─┬─user────┬─default_database─┬─errors_count─┬─estimated_recovery_time─┐
│ perftest_3shards_1replicas │ 1 │ 1 │ 1 │ 172.17.1.9 │ 172.17.1.9 │ 9001 │ 0 │ default │ │ 0 │ 0 │
│ perftest_3shards_1replicas │ 2 │ 1 │ 1 │ 172.17.1.10 │ 172.17.1.10 │ 9001 │ 1 │ default │ │ 0 │ 0 │
│ perftest_3shards_1replicas │ 3 │ 1 │ 1 │ 172.17.1.11 │ 172.17.1.11 │ 9001 │ 0 │ default │ │ 0 │ 0 │
└─────────────────────────────┴───────────┴──────────────┴─────────────┴─────────────┴──────────────┴──────┴──────────┴─────────┴──────────────────┴──────────────┴─────────────────────────┘
【Connect with the client】【172.17.1.11】
# clickhouse-client --host 172.17.1.11 --port 9001
testnode3 :) select * from system.clusters;
┌─cluster─────────────────────┬─shard_num─┬─shard_weight─┬─replica_num─┬─host_name───┬─host_address─┬─port─┬─is_local─┬─user────┬─default_database─┬─errors_count─┬─estimated_recovery_time─┐
│ perftest_3shards_1replicas │ 1 │ 1 │ 1 │ 172.17.1.9 │ 172.17.1.9 │ 9001 │ 0 │ default │ │ 0 │ 0 │
│ perftest_3shards_1replicas │ 2 │ 1 │ 1 │ 172.17.1.10 │ 172.17.1.10 │ 9001 │ 0 │ default │ │ 0 │ 0 │
│ perftest_3shards_1replicas │ 3 │ 1 │ 1 │ 172.17.1.11 │ 172.17.1.11 │ 9001 │ 1 │ default │ │ 0 │ 0 │
└─────────────────────────────┴───────────┴──────────────┴─────────────┴─────────────┴──────────────┴──────┴──────────┴─────────┴──────────────────┴──────────────┴─────────────────────────┘
Note: once the query returns correctly on all three nodes, the cluster is ready.