Installing a ClickHouse cluster with clickhouse-keeper

1. ClickHouse cluster installation with clickhouse-keeper

We will set up a ClickHouse cluster running on 2 nodes, with clickhouse-keeper running on those same servers, plus one standalone clickhouse-keeper node. For an online installation you can use yum install:

# yum install yum-utils 
# rpm --import https://repo.clickhouse.com/CLICKHOUSE-KEY.GPG 
# yum-config-manager --add-repo https://repo.clickhouse.com/rpm/stable/x86_64
# yum install clickhouse-server clickhouse-client

clickhouse-keeper is bundled with the ClickHouse server. If you are installing the server, you cannot also install clickhouse-keeper separately; you will get a conflict error. On a host that will act only as a clickhouse-keeper, however, you can install just clickhouse-keeper without the server:

# yum install clickhouse-keeper

2. Download and install (offline packages)

The offline installation packages are available at:
https://repo.yandex.ru/clickhouse/rpm/stable/

Download the rpm packages from the repository manually and copy them to every node (a sketch follows the node list below).

clickhouse-client-23.6.2.18.x86_64.rpm
clickhouse-common-static-23.6.2.18.x86_64.rpm
clickhouse-server-23.6.2.18.x86_64.rpm

clicknode01 –> ClickHouse Keeper, ClickHouse server, client
clicknode02 –> ClickHouse Keeper, ClickHouse server, client
clicknode03 –> ClickHouse Keeper

# mkdir -p /click/installs
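
A minimal sketch of pulling the packages into this directory and distributing them to the other nodes (the exact directory layout under the repository URL, and the use of wget/scp, are assumptions; adjust to your environment):

# cd /click/installs
# wget https://repo.yandex.ru/clickhouse/rpm/stable/clickhouse-common-static-23.6.2.18.x86_64.rpm
# wget https://repo.yandex.ru/clickhouse/rpm/stable/clickhouse-server-23.6.2.18.x86_64.rpm
# wget https://repo.yandex.ru/clickhouse/rpm/stable/clickhouse-client-23.6.2.18.x86_64.rpm
# wget https://repo.yandex.ru/clickhouse/rpm/stable/clickhouse-keeper-23.6.2.18.x86_64.rpm
# for node in clicknode02 clicknode03; do scp /click/installs/*.rpm $node:/click/installs/; done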

3. Edit the /etc/hosts file

Edit the /etc/hosts file on all nodes:

192.168.3.49 clicknode01 
192.168.3.50 clicknode02
192.168.3.51 clicknode03 

On nodes 1 and 2:

# yum localinstall clickhouse-server-23.6.2.18.x86_64.rpm clickhouse-client-23.6.2.18.x86_64.rpm clickhouse-common-static-23.6.2.18.x86_64.rpm

On node 3 there are two options: install only clickhouse-keeper, or install clickhouse-server and create a service from it that runs as a clickhouse-keeper.

Here we proceed by installing only clickhouse-keeper on node 3.

# yum localinstall clickhouse-keeper-23.6.2.18.x86_64.rpm

4. Configure the XML files

Create the following directories on all three nodes:

mkdir -p /etc/clickhouse-keeper/config.d
mkdir -p /var/log/clickhouse-keeper
mkdir -p /var/lib/clickhouse-keeper/coordination/log
mkdir -p /var/lib/clickhouse-keeper/coordination/snapshots
mkdir -p /var/lib/clickhouse-keeper/cores

Set the ownership of these directories (Keeper runs as the clickhouse user).
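
A sketch, assuming the clickhouse user and group created by the rpm packages:

# chown -R clickhouse:clickhouse /etc/clickhouse-keeper /var/log/clickhouse-keeper /var/lib/clickhouse-keeper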

Nodes 1 and 2 do not need the standalone ClickHouse Keeper package; it cannot be installed alongside the server. To use Keeper on those servers it is enabled through the server's configuration, as described below.

Add the following lines inside the <clickhouse> tag of /etc/clickhouse-server/config.xml:

 <listen_host>0.0.0.0</listen_host>
 <interserver_listen_host>0.0.0.0</interserver_listen_host>

Install the server on nodes 1 and 2, and apply the following Keeper configuration on those nodes.

Create the configuration files below under /etc/clickhouse-server/config.d, owned by the clickhouse user. Instead of separate files, their contents can also be added directly to config.xml inside the <clickhouse> ... </clickhouse> tags.

# ls
clusters.xml  
enable-keeper.xml  
listen.xml  
macros.xml  
network-and-logging.xml  
remote-servers.xml  
use_keeper.xml

Contents of the relevant files on node 1:

enable-keeper.xml

server_id --> 1; the raft_configuration section lists the nodes that take part in the Keeper (Raft) quorum. The same file must also be present on node 2, but with server_id --> 2 (a sketch for producing it follows the file below).

<clickhouse>
    <keeper_server>
            <tcp_port>9181</tcp_port>
            <server_id>1</server_id>
            <log_storage_path>/var/lib/clickhouse-keeper/coordination/log</log_storage_path>
            <snapshot_storage_path>/var/lib/clickhouse-keeper/coordination/snapshots</snapshot_storage_path>

            <coordination_settings>
                <operation_timeout_ms>10000</operation_timeout_ms>
                <session_timeout_ms>30000</session_timeout_ms>
                <raft_logs_level>trace</raft_logs_level>
                <rotate_log_storage_interval>10000</rotate_log_storage_interval>
            </coordination_settings>

            <raft_configuration>
                <server>
                        <id>1</id>
                        <hostname>clicknode01</hostname>
                        <port>9234</port>
                </server>
                <server>
                        <id>2</id>
                        <hostname>clicknode02</hostname>
                        <port>9234</port>
                </server>
                <server>
                        <id>3</id>
                        <hostname>clicknode03</hostname>
                        <port>9234</port>
                </server>
           </raft_configuration>
    </keeper_server>
</clickhouse>
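
A sketch of producing node 2's copy from node 1's file (sed is only one way to do it; editing server_id by hand works just as well):

# scp clicknode01:/etc/clickhouse-server/config.d/enable-keeper.xml /etc/clickhouse-server/config.d/
# sed -i 's|<server_id>1</server_id>|<server_id>2</server_id>|' /etc/clickhouse-server/config.d/enable-keeper.xml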

use_keeper.xml: this file must be identical on nodes 1 and 2. Node 3 runs only Keeper, so it is not needed there.

<clickhouse>
    <zookeeper>
        <node index="1">
            <host>clicknode01</host>
            <port>9181</port>
        </node>
        <node index="2">
            <host>clicknode02</host>
            <port>9181</port>
        </node>
        <node index="3">
            <host>clicknode03</host>
            <port>9181</port>
        </node>
    </zookeeper>
</clickhouse>

macros.xml --> the file where we define the shard and replica macros. The <replica> value is clicknode01 on node 1 and clicknode02 on node 2.

<clickhouse>
        <macros>
                <cluster>mycluster</cluster>
                <shard>01</shard>
                <replica>clicknode01</replica>
                <layer>01</layer>
        </macros>
</clickhouse>
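
Once the cluster is up, these macros are expanded in ON CLUSTER DDL, so the same statement creates the right replica path on each node. A sketch (the table test_local and its columns are made up for illustration):

# clickhouse-client --query "
    CREATE TABLE test_local ON CLUSTER mycluster (id UInt64, ts DateTime)
    ENGINE = ReplicatedMergeTree('/clickhouse/tables/{shard}/test_local', '{replica}')
    ORDER BY id"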

clusters.xml

This is our ClickHouse cluster definition file. Any node we want to add to the cluster must be added here, and the file is the same on every node of the cluster.

<clickhouse>
    <remote_servers>
        <mycluster>
            <shard>
                <internal_replication>true</internal_replication>
                <replica><host>clicknode01</host><port>9000</port></replica>
                <replica><host>clicknode02</host><port>9000</port></replica>
            </shard>
        </mycluster>
    </remote_servers>
</clickhouse>

remote-servers.xml

<clickhouse>
  <remote_servers replace="true">
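    <!-- replace="true": this block replaces, rather than merges with, any <remote_servers> section defined elsewhere (for example in clusters.xml above) -->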
    <mycluster>
      <secret>mysecretphrase</secret>
        <shard>
            <internal_replication>true</internal_replication>
            <replica>
                <host>clicknode01</host>
                <port>9000</port>
            </replica>
        </shard>
        <shard>
            <internal_replication>true</internal_replication>
            <replica>
                <host>clicknode02</host>
                <port>9000</port>
            </replica>
        </shard>
    </mycluster>
  </remote_servers>
</clickhouse>

listen.xml

<clickhouse>
    <listen_host>0.0.0.0</listen_host>
</clickhouse>

network-and-logging.xml

<clickhouse>
        <logger>
                <level>debug</level>
                <log>/var/log/clickhouse-server/clickhouse-server.log</log>
                <errorlog>/var/log/clickhouse-server/clickhouse-server.err.log</errorlog>
                <size>1000M</size>
                <count>3</count>
        </logger>
        <display_name>clickhouse</display_name>
        <listen_host>0.0.0.0</listen_host>
        <http_port>8123</http_port>
        <tcp_port>9000</tcp_port>
        <interserver_http_port>9009</interserver_http_port>
</clickhouse>

On the node that acts only as a Keeper (node 3), the configuration file lives under /etc/clickhouse-keeper:

keeper_config.xml

<?xml version="1.0"?>
<clickhouse>
    <logger>
        <!-- Possible levels [1]:

          - none (turns off logging)
          - fatal
          - critical
          - error
          - warning
          - notice
          - information
          - debug
          - trace
          - test (not for production usage)

            [1]: https://github.com/pocoproject/poco/blob/poco-1.9.4-release/Foundation/include/Poco/Logger.h#L105-L114
        -->
        <level>trace</level>
        <log>/var/log/clickhouse-keeper/clickhouse-keeper.log</log>
        <errorlog>/var/log/clickhouse-keeper/clickhouse-keeper.err.log</errorlog>

        <size>1000M</size>
        <count>10</count>

        <levels>
          <logger>
            <name>ContextAccess (default)</name>
            <level>none</level>
          </logger>
          <logger>
            <name>DatabaseOrdinary (test)</name>
            <level>none</level>
          </logger>
        </levels>
    </logger>

    <path>/var/lib/clickhouse-keeper/</path>
    <core_path>/var/lib/clickhouse-keeper/cores</core_path>


    <keeper_server>
            <tcp_port>9181</tcp_port>
            <server_id>3</server_id>
            <log_storage_path>/var/lib/clickhouse-keeper/coordination/log</log_storage_path>
            <snapshot_storage_path>/var/lib/clickhouse-keeper/coordination/snapshots</snapshot_storage_path>

            <coordination_settings>
                <operation_timeout_ms>10000</operation_timeout_ms>
                <session_timeout_ms>30000</session_timeout_ms>
                <raft_logs_level>trace</raft_logs_level>
                <rotate_log_storage_interval>10000</rotate_log_storage_interval>
            </coordination_settings>

            <raft_configuration>
                <server>
                        <id>1</id>
                        <hostname>clicknode01</hostname>
                        <port>9234</port>
                </server>
                <server>
                        <id>2</id>
                        <hostname>clicknode02</hostname>
                        <port>9234</port>
                </server>
                <server>
                        <id>3</id>
                        <hostname>clicknode03</hostname>
                        <port>9234</port>
                </server>
            </raft_configuration>
    </keeper_server>
   <listen_host>0.0.0.0</listen_host>
   <interserver_listen_host>0.0.0.0</interserver_listen_host>
</clickhouse>

Under /etc/clickhouse-keeper/config.d there is one file, enable-keeper.xml, with the following content:

<clickhouse>
    <keeper_server>
            <tcp_port>9181</tcp_port>
            <server_id>3</server_id>
            <log_storage_path>/var/lib/clickhouse-keeper/coordination/log</log_storage_path>
            <snapshot_storage_path>/var/lib/clickhouse-keeper/coordination/snapshots</snapshot_storage_path>

            <coordination_settings>
                <operation_timeout_ms>10000</operation_timeout_ms>
                <session_timeout_ms>30000</session_timeout_ms>
                <raft_logs_level>trace</raft_logs_level>
                <rotate_log_storage_interval>10000</rotate_log_storage_interval>
            </coordination_settings>

            <raft_configuration>
                <server>
                        <id>1</id>
                        <hostname>clicknode01</hostname>
                        <port>9234</port>
                </server>
                <server>
                        <id>2</id>
                        <hostname>clicknode02</hostname>
                        <port>9234</port>
                </server>
                <server>
                        <id>3</id>
                        <hostname>clicknode03</hostname>
                        <port>9234</port>
                </server>
            </raft_configuration>
    </keeper_server>
</clickhouse>

5. Start everything and check the whole setup

On nodes 1 and 2:

# systemctl enable clickhouse-server.service
# systemctl start clickhouse-server.service

On node 3:

# systemctl enable clickhouse-keeper.service
# systemctl start clickhouse-keeper.service
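
A quick liveness check can be done with the ruok four-letter command before looking at the detailed metrics; every Keeper node should answer imok:

# echo ruok | nc localhost 9181
imok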

Let's look at our ClickHouse Keeper services: who is the leader and who are the followers? (See https://clickhouse.com/docs/en/guides/sre/keeper/clickhouse-keeper.) For node 1 (a follower):

# echo mntr | nc localhost 9181
zk_version      v23.6.2.18-stable-89f39a7ccfe0c068c03555d44036042fc1c09d22
zk_avg_latency  1
zk_max_latency  48
zk_min_latency  0
zk_packets_received     4264
zk_packets_sent 4271
zk_num_alive_connections        0
zk_outstanding_requests 0
zk_server_state follower
zk_znode_count  52
zk_watch_count  0
zk_ephemerals_count     0
zk_approximate_data_size        15652
zk_key_arena_size       12288
zk_latest_snapshot_size 0
zk_open_file_descriptor_count   130
zk_max_file_descriptor_count    363758

For node 2; as the output below shows, it is the leader:

# echo mntr | nc localhost 9181
zk_version      v23.6.2.18-stable-89f39a7ccfe0c068c03555d44036042fc1c09d22
zk_avg_latency  1
zk_max_latency  27
zk_min_latency  0
zk_packets_received     228
zk_packets_sent 227
zk_num_alive_connections        2
zk_outstanding_requests 0
zk_server_state leader
zk_znode_count  52
zk_watch_count  2
zk_ephemerals_count     0
zk_approximate_data_size        15652
zk_key_arena_size       12288
zk_latest_snapshot_size 0
zk_open_file_descriptor_count   134
zk_max_file_descriptor_count    363762
zk_followers    2
zk_synced_followers     2

For node 3; a follower:

# echo mntr | nc localhost 9181
zk_version      v23.6.2.18-stable-89f39a7ccfe0c068c03555d44036042fc1c09d22
zk_avg_latency  0
zk_max_latency  0
zk_min_latency  0
zk_packets_received     0
zk_packets_sent 0
zk_num_alive_connections        0
zk_outstanding_requests 0
zk_server_state follower
zk_znode_count  52
zk_watch_count  0
zk_ephemerals_count     0
zk_approximate_data_size        15652
zk_key_arena_size       12288
zk_latest_snapshot_size 0
zk_open_file_descriptor_count   38
zk_max_file_descriptor_count    363758
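
From node 1 or 2 we can also confirm that the ClickHouse servers themselves reach the Keeper ensemble, for example by listing the root of the coordination tree (the system.zookeeper table requires a path condition):

# clickhouse-client --query "SELECT name FROM system.zookeeper WHERE path = '/'"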

Node 3 is used purely as a ClickHouse Keeper by installing only the clickhouse-keeper rpm, without the clickhouse-server rpm. For this we need to create a service as shown below, then enable and start it:

cat /lib/systemd/system/clickhouse-keeper.service

	[Unit]
	Description=ClickHouse Keeper (analytic DBMS for big data)
	Requires=network-online.target
	# NOTE: that After/Wants=time-sync.target is not enough, you need to ensure
	# that the time was adjusted already, if you use systemd-timesyncd you are
	# safe, but if you use ntp or some other daemon, you should configure it
	# additionally.
	After=time-sync.target network-online.target
	Wants=time-sync.target
	
	[Service]
	Type=simple
	User=clickhouse
	Group=clickhouse
	Restart=always
	RestartSec=30
	RuntimeDirectory=clickhouse-keeper
	ExecStart=/usr/bin/clickhouse-keeper --config=/etc/clickhouse-keeper/config.xml --pid-file=/run/clickhouse-keeper/clickhouse-keeper.pid
	# Minus means that this file is optional.
	EnvironmentFile=-/etc/default/clickhouse
	LimitCORE=infinity
	LimitNOFILE=500000
	CapabilityBoundingSet=CAP_NET_ADMIN CAP_IPC_LOCK CAP_SYS_NICE CAP_NET_BIND_SERVICE
	
	[Install]
	# ClickHouse should not start from the rescue shell (rescue.target).
	WantedBy=multi-user.target
	
Keeper can also be started manually with a command of this form (in this setup the config path would be /etc/clickhouse-keeper/config.xml):

	clickhouse-keeper --config /etc/clickhouse-server/config.d/keeper.xml

6. Start the service and check the cluster

# systemctl daemon-reload

# systemctl enable clickhouse-keeper

# systemctl start clickhouse-keeper
Finally, check the cluster definition from clickhouse-client:

clickhouse :) SELECT
    host_name,
    host_address,
    replica_num
FROM system.clusters


SELECT
    host_name,
    host_address,
    replica_num
FROM system.clusters

Query id: aea6f589-8ef3-4b91-8d6c-89d66bb55445

┌─host_name─┬─host_address─┬─replica_num─┐
└───────────┴──────────────┴─────────────┘

Well, that did not work for me; the result came back empty. What could be the reason? Has anyone gotten this working?
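
If the result comes back empty like this, a first check (just a sketch) is whether the mycluster definition was loaded at all and which config file provides it:

# clickhouse-client --query "SELECT cluster, shard_num, replica_num, host_name FROM system.clusters WHERE cluster = 'mycluster'"
# grep -l remote_servers /etc/clickhouse-server/config.d/*.xml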
